\begin{document} \begin{frontmatter} \title{Symmetric Unique Neighbor Expanders and Good LDPC Codes} \author{Oren Becker\corref{mycorrespondingauthor}} \address{Institute of Mathematics, Hebrew University, Jerusalem 9190401 ISRAEL} \ead{[email protected]} \begin{abstract} An infinite family of bounded-degree 'unique-neighbor' expanders was constructed explicitly by Alon and Capalbo (2002). We present an infinite family $\mathcal{F}$ of bounded-degree unique-neighbor expanders with the additional property that every graph in the family $\mathcal{F}$ is a \emph{Cayley graph}. This answers a question raised by Tali Kaufman. Using the same methods, we show that the symmetric LDPC codes constructed by Kaufman and Lubotzky (2012) are in fact symmetric under a \emph{simply} transitive group action on coordinates. \end{abstract} \begin{keyword} Unique neighbor expander \sep Error correcting code \sep Cayley graph \end{keyword} \end{frontmatter} \section{Introduction} An undirected graph $\Gamma=\left(V,E\right)$ is an \emph{$\left(\alpha,\epsilon\right)$-unique-neighbor expander} if for every subset $X$ of $V$ such that $\left|X\right|\leq\alpha\left|V\right|$, there are at least $\epsilon\left|X\right|$ vertices in $V\setminus X$ that are adjacent to \emph{exactly} one vertex in $X$. In \cite{alon2002explicit}, Alon and Capalbo construct families of bounded-degree $\left(\alpha,\epsilon\right)$-unique-neighbor expanders for some positive $\alpha,\epsilon$. One of the families constructed in \cite{alon2002explicit} is an infinite family $\mathcal{F}$ of $6$-regular $\left(\alpha,\epsilon\right)$-unique-neighbor expanders. This construction is based on a notion of a product of graphs, forming a product graph $\Gamma$ from a $d$-regular graph $\Gamma'$ and a graph $\Delta$ on $d$ vertices. It is shown in \cite{alon2002explicit} that if $\Gamma'$ is an $8$-regular Ramanujan graph, and if $\Delta$ is a certain specific graph on $8$ vertices, then the product graph $\Gamma$ is an $\left(\alpha,\epsilon\right)$-unique-neighbor expander for some positive absolute constants $\alpha,\epsilon$. In response to a question of Tali Kaufman \cite{kaufman_private}, who asked if there is an infinite family of $\left(\alpha,\epsilon\right)$-unique-neighbor expanders which are Cayley graphs, we develop a similar graph product in the context of Cayley graphs. The product of a graph $\Gamma'$ and a graph $\Delta$ is in general not symmetric, even if both graphs are Cayley graphs. However, we show that if $\Gamma'$ and $\Delta$ are Cayley graphs (with respect to the groups $G$ and $H$ respectively), and if further $\Gamma'$ is bipartite and simply-generator-symmetric with respect to the group $H$ (i.e. $H$ acts simply transitively on the generators of $G$ through group automorphisms of $G$), then the product graph $\Gamma$ can be formed 'symmetrically' and is then itself a Cayley graph. The graph $\Delta$ in \cite{alon2002explicit} is indeed a Cayley graph on the cyclic group $C_{8}$. Fortunately, there are $8$-regular bipartite Ramanujan graphs which are simply-generator-symmetric with respect to the group $C_{8}$. Such graphs were constructed explicitly by Lubotzky, Samuels and Vishne in \cite{zbMATH02183844}, as a special case of the construction of Ramanujan complexes.
Thus, we show that the method of \cite{alon2002explicit}, applied in a symmetric manner to this infinite family of Ramanujan graphs from \cite{zbMATH02183844}, gives rise to an infinite family of unique-neighbor expanders which are Cayley graphs. This gives an affirmative answer to Kaufman's question. We conclude: \begin{thm} \label{thm:main-une}For some absolute positive constants $\alpha,\epsilon$, there is an explicit construction of an infinite family of $\left(\alpha,\epsilon\right)$-unique-neighbor expanders, such that every graph in the family is a Cayley graph. \end{thm} Our method has another application: In \cite{zbMATH06294581}, Kaufman and Lubotzky construct asymptotically good families of symmetric LDPC error correcting codes. A code $C\subset\mathbb{F}_{2}^{X}$ is \emph{symmetric} with respect to a group $G$ if there is a transitive group action of $G$ on $X$, such that the corresponding coordinate-interchanging action on $\mathbb{F}_{2}^{X}$ preserves $C$. We say that a symmetric code is \emph{simply-symmetric} if the action of $G$ on $X$ is simply transitive. The codes in \cite{zbMATH06294581} are based on a product of a large graph $\Gamma'$ and a small code $B$. This construction is due to Tanner (\cite{zbMATH03745089}) and Sipser-Spielman (\cite{zbMATH01004587}), and is referred to as an ``expander code'' in case the graph $\Gamma'$ is a good expander. In case the graph $\Gamma'$ is a Cayley graph, a more specific construction is defined in \cite{zbMATH05799072} by Kaufman and Wigderson, and is called a ``Cayley code''. While Cayley codes are not necessarily symmetric, Kaufman and Wigderson construct a family of symmetric Cayley codes with constant rate, but with normalized distance $\Omega\left(\frac{1}{\left(\log\log n\right)^2}\right)$, where $n$ is the code length (see Theorem 11 of \cite{zbMATH05799072}). In \cite{zbMATH06294581}, using the Ramanujan graphs of \cite{zbMATH02183844}, asymptotically good symmetric Cayley codes are constructed. The construction uses an infinite family of \emph{bipartite} Ramanujan graphs, but the bipartiteness is not used in the proof (and similar constructions using non-bipartite generator-symmetric expanders are possible). We use the bipartiteness of these Ramanujan graphs, together with the fact that they are \emph{simply}-generator-symmetric, to show that the codes of \cite{zbMATH06294581} are in fact \emph{simply} symmetric. In addition to this extra feature, this construction has the benefit of exhibiting symmetric unique-neighbor expanders and symmetric error correcting codes in one framework. We conclude: \begin{thm} \label{thm:main-codes}There is an infinite family of asymptotically good simply-symmetric LDPC error correcting codes. \end{thm} Finally, we define a variation of the codes of \cite{zbMATH06294581} which improves the bound on their density from $4094$ to $20$. This is achieved by a slight variation on the argument of Sipser-Spielman \cite{zbMATH01004587}. \section{Edge Transitive Ramanujan Graphs\label{sec:ET-Ramanujan}} Let $\Gamma=\Cay\left(G,S\right)$ be a Cayley graph where $S$ is a subset of the group $G$ satisfying $S=S^{-1}$. As a Cayley graph, $\Gamma$ is automatically vertex-transitive. We say that $\Gamma$ is \emph{generator-symmetric} with respect to the group $H$ and the group homomorphism $\theta:H\rightarrow\Aut\left(G\right)$ if $\theta\left(H\right)$ preserves $S$ and induces a transitive action of $H$ on $S$.
In this case, the group $G\rtimes_{\theta}H$ acts transitively on the edges of $\Gamma$, and so $\Gamma$ is an edge-transitive graph. If the action of $H$ on $S$ is simply-transitive (in other words, if the action is both transitive and free), then we say that the Cayley graph $\Gamma$ is \emph{simply-generator-symmetric}. This means, in particular, that $\left|H\right|=\left|S\right|$. In this case, the action of $G\rtimes_\theta H$ on the \emph{directed} edges of $\Gamma$ is simply transitive. However, some elements of $G\rtimes_\theta H$ act on $\Gamma$ by ``reversing an edge''. Therefore, if one regards $\Gamma$ as an undirected graph, the action of $G\rtimes_\theta H$ on edges, while still transitive, is no longer free. Our construction in section \ref{sec:Line-Graphs} is based on the observation that if $\Gamma$ is a bipartite simply-generator-symmetric Cayley graph, then there is a subgroup of $G\rtimes_\theta H$ of index $2$ which acts simply transitively on the \emph{undirected} edges of $\Gamma$. For notational convenience, for every $h\in H$, we denote the automorphism $\theta\left(h\right)\in\Aut\left(G\right)$ by $\theta_{h}$. In \cite{zbMATH02183844}, Lubotzky, Samuels and Vishne construct highly symmetric Ramanujan complexes, which we now specialize to the one dimensional case (i.e. graphs). We get two infinite families of $d$-regular Ramanujan graphs for each $d=q+1$, where $q$ is an odd prime power. One is a family of bipartite graphs and the other is a family of non-bipartite graphs. All of these graphs are Cayley graphs and are simply-generator-symmetric with respect to the cyclic group $H=C_{d}$. For a brief overview of this special case see section 4 of \cite{zbMATH06294581}. Specializing further to the case of bipartite $8$-regular graphs, we get from \cite{zbMATH02183844} an infinite family of simply-generator-symmetric bipartite Ramanujan Cayley graphs $\left\{ \Gamma'_{n}\right\} $ with $\Gamma'_{n}=\Cay\left(\PGL_{2}\left(7^{n}\right),S\left(n\right)\right)$ for some set of generators $S\left(n\right)\subset\PGL_{2}\left(7^{n}\right)$ defined in \cite{zbMATH02183844}. The group $\PSL_{2}\left(7^{n}\right)$ is of index $2$ in $\PGL_{2}\left(7^{n}\right)$, and is disjoint from $S\left(n\right)$. As shown in \cite{zbMATH02183844}, there is a group homomorphism $\theta_{n}:C_{8}\rightarrow\Aut\left(\PGL_{2}\left(7^{n}\right)\right)$, inducing a simply-transitive action of $C_{8}$ on $S\left(n\right)$. For an explicit definition of the generators $S\left(n\right)$ and the action of $C_{8}$, see \cite{zbMATH02183844}, Algorithm 9.2. In particular, Step 3 of this algorithm uses an embedding $C_{8}\cong\mathbb{F}_{7^{2}}^{\times}/\mathbb{F}_{7}^{\times}\hookrightarrow\PGL_{2}\left(7^{n}\right)$ to define the generators $S\left(n\right)$ as the $C_{8}$-orbit of a certain element $b\in\PGL_{2}\left(7^{n}\right)$, where $C_{8}$ acts on $\PGL_{2}\left(7^{n}\right)$ by inner automorphisms through this embedding. Thus, the corresponding homomorphism $\theta_{n}:C_{8}\rightarrow\Aut\left(\PGL_{2}\left(7^{n}\right)\right)$ is as required. \section{Line Graphs as Cayley Graphs\label{sec:Line-Graphs}} For a graph $\Gamma'=\left(V,E\right)$, the \emph{line graph} $\Gamma$ of $\Gamma'$ is the graph whose vertex set is $E$, with vertices $e_{1},e_{2}\in E$ adjacent in $\Gamma$ if and only if $e_{1}$ and $e_{2}$, as edges of $\Gamma'$, are incident to a common vertex of $\Gamma'$. In general, the line graph of a Cayley graph is \emph{not} itself a Cayley graph.
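For a toy illustration of these notions (the example is ours and is not used in the sequel), take $G=C_{2n}$ with $n\geq2$, generated by $x$, with $S=\left\{ x,x^{-1}\right\} $, and let $H=C_{2}$ act on $G$ by inversion. Then $\Cay\left(G,S\right)$ is the cycle of length $2n$; it is bipartite, the subgroup of even powers of $x$ has index $2$ and is disjoint from $S$, and $H$ permutes the two generators simply transitively, so $\Cay\left(G,S\right)$ is a simply-generator-symmetric bipartite Cayley graph. Its line graph is again a $2n$-cycle, hence a Cayley graph, in accordance with the result proved below.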
In this section, we show that the line graph of a simply-generator-symmetric \emph{bipartite} Cayley graph is itself a Cayley graph. Let $G$ be a finite group and let $S\subset G$ be a symmetric subset of $G$ (i.e. $S=S^{-1}$) which generates $G$. Let $K$ be a subgroup of $G$ of index $2$ such that $K$ and $S$ are disjoint. Fix a generator $s_{0}\in S$. Then, the Cayley graph $\Cay\left(G,S\right)$ is bipartite with the cosets $K$ and $Ks_{0}$ as its left and right sides respectively. The coset $K$ consists of the vertices connected to $1_{G}$ by a path of even length, and the coset $Ks_{0}$ consists of the vertices connected to $1_{G}$ by a path of odd length. Note that for a Cayley graph $\Cay\left(G,S\right)$, the existence of an index $2$ subgroup $K\leq G$ disjoint from $S$ is in fact equivalent to $\Cay\left(G,S\right)$ being bipartite. Assume further that $\Cay\left(G,S\right)$ is simply-generator-symmetric with respect to the group $H$ and the group homomorphism $\theta:H\rightarrow\Aut\left(G\right)$. Then, for each element $h\in H$, we have $\theta_{h}\left(K\right)=K$ (indeed, $\theta_{h}$ permutes $S$, so it preserves the set of vertices connected to $1_{G}$ by a path of even length, which is exactly $K$). Therefore, we can define the semidirect product $K\rtimes_{\theta}H$. Finally, we let $T$ be a subset of $H$. In this setting, we define two graphs $X$ and $\Gamma$, and show that they are isomorphic. First, for every $h\in H$, we define $\sigma_{1}\left(h\right),\sigma_{2}\left(h\right)\in K\rtimes_{\theta}H$ by \begin{eqnarray*} \sigma_{1}\left(h\right) & = & \left(1_{K},h\right)\\ \sigma_{2}\left(h\right) & = & \left(s_{0}\cdot\theta_{h}\left(s_{0}^{-1}\right),h\right) \end{eqnarray*} and we let $X=\mbox{Cay}\left(K\rtimes_{\theta}H,\Sigma_{1}^{T}\cup\Sigma_{2}^{T}\right)$, where \begin{eqnarray*} \Sigma_{1}^{T} & = & \left\{ \sigma_{1}\left(t\right)\mid t\in T\right\} \\ \Sigma_{2}^{T} & = & \left\{ \sigma_{2}\left(t\right)\mid t\in T\right\} \end{eqnarray*} Second, we define a graph $\Gamma=\Gamma\left(G,S,s_0,K,H,\theta,T\right)$. The vertices of $\Gamma$ are the undirected edges of $\mbox{Cay}\left(G,S\right)$, i.e. $V\left(\Gamma\right)=E\left(\mbox{Cay}\left(G,S\right)\right)$. For each $g\in G$, we let $E_{g}$ be the set of undirected edges in $\mbox{Cay}\left(G,S\right)$ which are incident to $g$, and we define a bijection $e_{g}:H\rightarrow E_{g}$ by \[ e_{g}\left(h\right)=\begin{cases} \left(g,g\cdot\theta_{h}\left(s_{0}\right)\right) & g\in K\\ \left(g\cdot\theta_{h}\left(s_{0}^{-1}\right),g\right) & g\in Ks_{0} \end{cases} \] The edges of $\Gamma$ are $E\left(\Gamma\right)=\left\{ \left(e_{g}\left(h\right),e_{g}\left(ht\right)\right)\mid g\in G,h\in H,t\in T\right\} $. Note that if $T=T^{-1}$, then $\Gamma$ is an undirected graph, and if $T=H\setminus\left\{ 1_{H}\right\} $, then $\Gamma$ is the line graph of $\mbox{Cay}\left(G,S\right)$. \begin{prop}\label{prop:line-graph} The bijection $f:K\rtimes_{\theta}H\rightarrow E\left(\mbox{Cay}\left(G,S\right)\right)$ defined by \[ f\left(\left(k,h\right)\right)=\left(k,k\cdot\theta_{h}\left(s_{0}\right)\right) \] is a graph isomorphism from $X=\mbox{Cay}\left(K\rtimes_{\theta}H,\Sigma_{1}^{T}\cup\Sigma_{2}^{T}\right)$ to $\Gamma=\Gamma\left(G,S,s_0,K,H,\theta,T\right)$.\end{prop} \begin{proof} Let $k\in K,h\in H$ and $t\in T$.
Direct computation shows that \begin{eqnarray*} \left(\left(k,h\right),\left(k,h\right)\cdot\sigma_{1}\left(t\right)\right) & \overset{f\times f}{\longmapsto} & \left(e_{k}\left(h\right),e_{k}\left(ht\right)\right)\\ \left(\left(k,h\right),\left(k,h\right)\cdot\sigma_{2}\left(t\right)\right) & \overset{f\times f}{\longmapsto} & \left(e_{k\cdot\theta_{h}\left(s_{0}\right)}\left(h\right),e_{k\cdot\theta_{h}\left(s_{0}\right)}\left(ht\right)\right) \end{eqnarray*} and \begin{eqnarray*} \left(e_{k}\left(h\right),e_{k}\left(ht\right)\right) & \overset{f^{-1}\times f^{-1}}{\longmapsto} & \left(\left(k,h\right),\left(k,h\right)\cdot\sigma_{1}\left(t\right)\right)\\ \left(e_{ks_{0}}\left(h\right),e_{ks_{0}}\left(ht\right)\right) & \overset{f^{-1}\times f^{-1}}{\longmapsto} & \left(\left(ks_{0}\theta_{h}\left(s_{0}^{-1}\right),h\right),\left(ks_{0}\theta_{h}\left(s_{0}^{-1}\right),h\right)\cdot\sigma_{2}\left(t\right)\right) \end{eqnarray*} Therefore, both $f$ and $f^{-1}$ are graph homomorphisms, and therefore $f$ is a graph isomorphism.\end{proof} In particular, Proposition \ref{prop:line-graph} implies that the line graph of a simply-generator-symmetric bipartite Cayley graph is itself a Cayley graph. It may be interesting to characterize the Cayley graphs whose line graph is a Cayley graph. \section{Symmetric Unique Neighbor Expanders} We begin by describing the construction of an infinite family of $6$-regular unique-neighbor expanders by Alon and Capalbo (see \cite{alon2002explicit}). Let $\Gamma'=\left(V',E'\right)$ be a $d$-regular graph and let $\Delta=\left(V\left(\Delta\right),E\left(\Delta\right)\right)$ be a graph on $d$ vertices. For each vertex $v$ of $\Gamma'$, denote by $E'_{v}$ the set of $d$ edges of $\Gamma'$ which are incident to $v$. For each vertex $v$ of $\Gamma'$, fix a bijection between $E'_{v}$ and $V\left(\Delta\right)$. Note that these bijections \emph{do not} need to be compatible with each other in any way. Form a new graph $\Gamma$ whose vertex set is $E'$, and where $\gamma_{1},\gamma_{2}\in E'$ are adjacent as vertices of $\Gamma$ if and only if $\gamma_{1}$ and $\gamma_{2}$ are both incident to a common vertex $v$ in $\Gamma'$, and are neighbors in $\Delta$ under the identification $E'_{v}\leftrightarrow V\left(\Delta\right)$. Note that the graph $\Gamma$ is a subgraph of the line graph of $\Gamma'$. By Theorem 2.1 of \cite{alon2002explicit}, if $\Gamma'$ is an $8$-regular Ramanujan graph, and if $\Delta$ is the $3$-regular graph on $8$ vertices $v_{0},\dotsc,v_{7}$ with $v_{i}$ adjacent to $v_{i-1},v_{i+1},v_{i+4}$ (indices taken modulo $8$), then $\Gamma$ is a $6$-regular $\left(\alpha,1/10\right)$-unique-neighbor expander, for some positive constant $\alpha$. In this way, an infinite family of (not necessarily bipartite) $8$-regular Ramanujan graphs gives rise to an infinite family of $6$-regular $\left(\alpha,1/10\right)$-unique-neighbor expanders. In \cite{alon2002explicit}, the chosen family of Ramanujan graphs is the one constructed in \cite{zbMATH04079458} by Lubotzky, Phillips and Sarnak. These Ramanujan graphs are Cayley graphs and thus are vertex-transitive, but they are not known to be generator-symmetric. Instead, we use the $8$-regular simply-generator-symmetric bipartite Ramanujan graphs of \cite{zbMATH02183844}.
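Before specializing, we remark that the product construction above is easy to experiment with; the following short Python sketch (ours, not taken from \cite{alon2002explicit}, with all function and variable names ours) builds the product graph $\Gamma$ from the edge list of a $d$-regular graph $\Gamma'$ and the adjacency sets of a graph $\Delta$ on $d$ vertices, choosing the per-vertex bijections arbitrarily (here, at random).
\begin{verbatim}
import random
from itertools import combinations

def product_graph(gamma_edges, delta_adj, d, seed=0):
    # gamma_edges: edge list of a d-regular graph Gamma' (pairs of vertices)
    # delta_adj: neighbour sets of Delta on the vertex set {0, ..., d-1}
    rng = random.Random(seed)
    incident = {}
    for e in gamma_edges:
        for v in e:
            incident.setdefault(v, []).append(e)
    new_edges = set()
    for v, edges_at_v in incident.items():
        assert len(edges_at_v) == d        # Gamma' must be d-regular
        rng.shuffle(edges_at_v)            # an arbitrary bijection E'_v <-> V(Delta)
        for i, j in combinations(range(d), 2):
            if j in delta_adj[i]:          # neighbours in Delta at the common vertex v
                new_edges.add(frozenset({edges_at_v[i], edges_at_v[j]}))
    return new_edges

# Delta of Theorem 2.1 of Alon-Capalbo: v_i adjacent to v_{i-1}, v_{i+1}, v_{i+4} mod 8.
delta_adj = {i: {(i - 1) % 8, (i + 1) % 8, (i + 4) % 8} for i in range(8)}
# Toy demo on K_9 (8-regular, though of course not Ramanujan):
gamma_edges = [tuple(e) for e in combinations(range(9), 2)]
print(len(product_graph(gamma_edges, delta_adj, 8)))   # 108 = 36 * 6 / 2 edges
\end{verbatim}
On this toy input the output graph is $6$-regular on the $36$ edges of $K_{9}$, as the construction predicts.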
Keeping the notation of sections \ref{sec:ET-Ramanujan} and \ref{sec:Line-Graphs}, we define for each $n\geq 1$, \[ \Gamma_{n}=\Gamma\left(\PGL_{2}\left(7^{n}\right),S\left(n\right),b,\PSL_{2}\left(7^{n}\right),C_{8},\theta_{n},\left\{ 1,4,7\right\}\right) \] On one hand, the graphs $\{\Gamma_n\}$ are a special case of the Alon-Capalbo construction. On the other hand, by Proposition \ref{prop:line-graph}, for each $n\geq1$, the graph $\Gamma_n$ is isomorphic to $\mbox{Cay}\left(\PSL_{2}\left(7^{n}\right)\rtimes_{\theta_n}C_{8},\Sigma_{1}^{\left\{ 1,4,7\right\}}\cup\Sigma_{2}^{\left\{ 1,4,7\right\}}\right)$. We have thus proved the following more elaborate version of Theorem \ref{thm:main-une}: \begin{thm} For some constant $\alpha>0$, there is an infinite family $\left\{ \Gamma_{n}\right\} $ of $6$-regular $\left(\alpha,1/10\right)$-unique-neighbor expanders, such that for every $n$, the graph $\Gamma_{n}$ is a Cayley graph on the group $\PSL_{2}\left(7^{n}\right)\rtimes_{\theta_{n}}C_{8}$. \end{thm} \section{Symmetric Good LDPC Codes} We shall refer to linear error correcting codes simply as 'codes'. A code $C\subset\mathbb{F}_{2}^{X}$ is \emph{symmetric} (resp. \emph{simply-symmetric}) with respect to a group $G$ if there is a transitive (resp. \emph{simply-transitive}) action of $G$ on $X$ such that the corresponding coordinate-interchanging action of $G$ on $\mathbb{F}_{2}^{X}$ preserves $C$. For a code $C\subset\mathbb{F}_{2}^{X}$, we denote its dual by $C^{\perp}\subset\mathbb{F}_{2}^{X}$ (this is the set of all vectors ``orthogonal'' to $C$), and think of it as the constraints defining the code $C$ (i.e. they define linear functionals whose common set of solutions is $C$). A spanning set for $C^{\perp}$ is called a \emph{set of defining constraints}. By a standard abuse of terminology, we will refer to a code $C$ together with a specific set of defining constraints simply as a 'code'. Note that if $C$ is a symmetric code with respect to a group $G$, then the dual code $C^{\perp}$ is also symmetric with respect to $G$. Finally, a family of codes is \emph{LDPC} if the defining constraints of the codes in the family are of bounded Hamming weight. One way to obtain symmetric codes is by 'codes defined on groups': For a group $G$, we say that a code $C\subset\mathbb{F}_{2}^{G}$ is a \emph{code defined on $G$} if $C$ is invariant under the action of $G$. Equivalently, a code $C\subset\mathbb{F}_{2}^{G}$ is a code defined on $G$ if it is defined by a $G$-invariant set of constraints. Finally, a code $C$ is defined on the group $G$ if and only if it is simply-symmetric with respect to $G$. Another way to obtain (usually not symmetric) codes is by 'codes defined on graphs': Let $\Gamma=\left(V,E\right)$ be an $l$-regular graph and let $B\subset\mathbb{F}_{2}^{H}$ be a 'small' code of length $l$ (i.e. $H$ is a set of cardinality $l$). For each vertex $v$ of $\Gamma$, fix a bijection $h\mapsto e\left(v,h\right)$ from $H$ to $E_{v}$, the set of edges of $\Gamma$ incident to $v$. Let $C\subset\mathbb{F}_{2}^{E}$ be the code consisting of functions $f:E\rightarrow\mathbb{F}_{2}$ satisfying the following local constraint for each vertex $v$ of $\Gamma$: \[ \left(f\left(e\left(v,h\right)\right)\right)_{h\in H}\in B \] These local constraints are referred to as 'vertex consistency'. This idea is due to Tanner (\cite{zbMATH03745089}) and Sipser-Spielman (\cite{zbMATH01004587}), who refer to such codes (and similar codes) as \emph{expander codes} in case the graph $\Gamma$ is a good expander.
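As an illustration of this construction (the sketch is ours, not taken from \cite{zbMATH03745089} or \cite{zbMATH01004587}; all names are ours), the following Python code assembles a parity-check matrix for the code $C$ on the edge set of an $l$-regular graph, given a parity-check matrix of the small code $B$, with the per-vertex bijections chosen arbitrarily.
\begin{verbatim}
import numpy as np
from itertools import combinations

def graph_code_checks(edges, l, H_B):
    # edges: edge list of an l-regular graph; H_B: parity-check matrix of B over GF(2)
    edge_index = {frozenset(e): i for i, e in enumerate(edges)}
    incident = {}
    for e in edges:
        for v in e:
            incident.setdefault(v, []).append(frozenset(e))
    rows = []
    for v, edges_at_v in incident.items():   # an arbitrary bijection H <-> E_v
        assert len(edges_at_v) == l
        for b_row in H_B:                    # one local constraint per constraint of B
            row = np.zeros(len(edges), dtype=np.uint8)
            for h, e in enumerate(edges_at_v):
                row[edge_index[e]] = b_row[h]
            rows.append(row)
    return np.array(rows)

# Toy demo: Gamma = K_4 (3-regular), B = the even-weight code of length 3.
edges = [tuple(e) for e in combinations(range(4), 2)]
H_C = graph_code_checks(edges, 3, np.array([[1, 1, 1]], dtype=np.uint8))
print(H_C.shape)   # (4, 6): one constraint per vertex of K_4, one column per edge
\end{verbatim}
In this toy case the resulting code is the cycle space of $K_{4}$: each codeword is the indicator vector of an edge set in which every vertex has even degree.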
If the graph $\Gamma$ is a Cayley graph $\Gamma=\Cay\left(G,S\right)$ then, instead of arbitrarily fixing a bijection between $H$ and $E_{v}$ for every vertex $v$ of $\Gamma$, we can fix one bijection $h\mapsto s_{h}$ from $H$ to $S$. Then, for each vertex $v$ of $\Gamma$, we define the bijection from $H$ to $E_{v}$ by $e\left(v,h\right)=\left(v,v\cdot s_{h}\right)$ (where $\cdot$ is multiplication in the group $G$), and proceed as before to define the code $C$. Codes defined on Cayley graphs in this manner are referred to as \emph{Cayley codes} by Kaufman and Wigderson in \cite{zbMATH05799072} and are denoted by $\Cay\left(G,S,B\right)$. As stated in \cite{zbMATH06294581}, Cayley codes are in general \emph{not} symmetric. Therefore, we assume further that $H$ is a group (and not merely a set), that the code $B$ is a code defined on $H$, and that the Cayley graph $\Cay\left(G,S\right)$ is simply-generator-symmetric with respect to the group $H$. Note that in this case we have $\left|H\right|=\left|S\right|$. Let $\theta:H\rightarrow\Aut\left(G\right)$ be the group homomorphism inducing the transitive action of $H$ on $S$. Now, instead of arbitrarily fixing a bijection between $H$ and $S$, we only fix one generator $s_{0}\in S$. We define the bijection from $H$ to $S$ by $h\mapsto\theta_{h}\left(s_{0}\right)$, and proceed as before to define the code $C$. This idea is due to Kaufman and Lubotzky and is presented in \cite{zbMATH06294581}. The resulting code $C$ is symmetric with respect to the group $G\rtimes_{\theta}H$. However, the action of this group on the coordinates of $C$ is not \emph{simply} transitive (indeed, the cardinality of the group $G\rtimes_{\theta}H$ is not equal to the length of the code $C$; the size of the group is twice the length of the code). Therefore, $C$ is not a code defined on the group $G\rtimes_{\theta}H$. However, if the graph $\Cay\left(G,S\right)$ is \emph{bipartite}, then there is an index $2$ subgroup $K$ of $G$, disjoint from $S$. Then, by Section \ref{sec:Line-Graphs}, the code $C$ is a code defined on $K\rtimes_{\theta}H$ (and not just a symmetric code with respect to $G\rtimes_{\theta}H$, as in \cite{zbMATH06294581}). If the small code $B$ is defined by $m$ $H$-orbits of constraints, then the code $C$ is defined by $2m$ $K\rtimes_{\theta}H$-orbits of constraints (or $m$ $G\rtimes_{\theta}H$-orbits of constraints, as in \cite{zbMATH06294581}). We summarize this result in the case where the small code $B$ is defined by a single orbit of constraints (since this is sufficient for our application): \begin{prop} \label{prop:group-codes}For $G,S,s_{0},K,H,\theta$ as in section \ref{sec:Line-Graphs}, and a 'small' code $B$ defined on the group $H$ by the orbit of the single constraint $\sum_{t\in T}f\left(t\right)=0$ for some $T\subset H$, the Cayley code $\Cay\left(G,S,B\right)$ is the same as the code defined on the group $K\rtimes_{\theta}H$ with defining constraints consisting of the $K\rtimes_{\theta}H$-orbits of the two constraints $\sum_{t\in T}f\left(\left(1,t\right)\right)=0$ and $\sum_{t\in T}f\left(\left(s_{0}\cdot\theta_{t}\left(s_{0}^{-1}\right),t\right)\right)=0$. \end{prop} Finally, note that in \cite{zbMATH06294581} the Cayley code construction is applied with a sequence $\left\{ \Gamma'_{n}\right\} $ of $\left(q+1\right)$-regular Ramanujan graphs from \cite{zbMATH02183844}, where $q=4093$, and a certain code $B_{0}$ defined on $C_{q+1}$.
Each graph $\Gamma'_{n}$ in the sequence is a Cayley graph on $\PGL_{2}\left(q^{n}\right)$ and is simply-generator-symmetric with respect to $C_{q+1}$. These graphs are in fact bipartite, although the proofs in \cite{zbMATH06294581} do not require bipartiteness. Therefore, by Proposition \ref{prop:group-codes}, we conclude that the codes constructed in \cite{zbMATH06294581} are in fact simply-symmetric. These codes are proved in \cite{zbMATH06294581} to have both rate and normalized distance bounded away from zero (i.e. they are \emph{asymptotically good} codes). Also, they clearly are LDPC codes. We have thus proved the following more elaborate version of Theorem \ref{thm:main-codes}: \begin{thm} \label{thm:main-codes-elaborate}There is an asymptotically good infinite family of LDPC codes $\left\{ K_{n}\right\} $, such that for every $n$, the code $K_{n}$ is simply-symmetric with respect to the group $G_{n}=\PSL_{2}\left(q^{n}\right)\rtimes_{\theta_{n}}C_{q+1}$, where $q=4093$. Furthermore, for every $n$, the code $K_{n}$ is defined by two $G_{n}$-orbits of constraints. \end{thm} \section{Symmetric Good LDPC Codes with Improved Density} If $C$ is a code (together with a set of defining constraints), then we define the \emph{density} of $C$ as the maximal Hamming weight of a defining constraint of $C$. The family of codes $\left\{ K_{n}\right\} $ is proved in \cite{zbMATH06294581} to have density bounded from above by $4094$. We shall define a variation of this construction, with density $20$. The density of an expander code on a graph $\Gamma$ and a small code $B$ equals the density of the small code $B$. Thus, our goal is to define an infinite family of simply-symmetric expander codes using a small code with density $20$. We shall define later a code $B'$ of length $158$, rate $\frac{40}{79}$, distance $15$ and density $20$. Let $\left\{ K'_{n}\right\} $ be the family of expander codes constructed symmetrically using the $158$-regular simply-generator-symmetric Ramanujan graphs of \cite{zbMATH02183844} and the small code $B'$. The family $\left\{ K'_{n}\right\} $ is a family of simply-symmetric codes of density $20$. We now show that this is a family of asymptotically good codes. By Lemma 15 of \cite{zbMATH01004587}, an expander code on a $k$-regular graph $\Gamma$ and a small code $B$ has rate at least $2\cdot\rate\left(B\right)-1$ and normalized distance at least $\left(\frac{\distance\left(B\right)-\lambda}{k-\lambda}\right)^{2}$ where $\lambda$ is the second largest eigenvalue of the adjacency matrix of $\Gamma$ (assuming $\distance\left(B\right)>\lambda$). Thus, since $\rate\left(B'\right)=\frac{40}{79}>\frac{1}{2}$, we have a guarantee that the codes $\left\{ K'_{n}\right\} $ have rate bounded away from zero. However, since $\distance\left(B'\right)=15$, and since $2\sqrt{157}\approx25.1$, the method of \cite{zbMATH01004587} is not enough to show that the codes $\left\{ K'_{n}\right\} $ have normalized distance bounded away from zero. Nevertheless, we do have $15>1+\sqrt{157}\approx13.5$, and thus the next lemma shows that the codes $\left\{ K'_{n}\right\} $ do have distance bounded away from zero: \begin{lem} An expander code $C$ defined on a $k$-regular Ramanujan graph $\Gamma$ using a 'small' code $B$ of length $k$ and distance $d$ larger than $1+\sqrt{k-1}$ has normalized distance larger than some constant $\alpha$ which depends only on $k$ and $d$.\end{lem} \begin{proof} Denote $\Gamma=\left(V,E\right)$.
By Theorem 4.2 of \cite{zbMATH01103555}, for every $\delta>0$, and every nonempty subset $X$ of $V$ of cardinality at most $k^{-1/\delta}\cdot\left|V\right|$, the average degree of the induced subgraph of $\Gamma$ on $X$ is at most $\left(1+\sqrt{k-1}\right)\cdot\left(1+C\delta\right)$, where $C>0$ is an absolute constant (see also \cite{alon2002explicit}, Theorem 2.2). Let $\epsilon>0$ be sufficiently small such that $1+\sqrt{k-1}+\epsilon<d$, let $\delta=\frac{\epsilon}{C\left(1+\sqrt{k-1}\right)}>0$, and let $\alpha=k^{-1/\delta-1}>0$. Let $f:E\rightarrow\mathbb{F}_{2}$ be a nonzero word of Hamming weight $w$ such that $w\leq\alpha\cdot\left|E\right|$. Let $S$ be the set of edges $e$ of $\Gamma$ such that $f\left(e\right)=1$. Let $T$ be the subset of $V$ of all vertices in $\Gamma$ which are incident to at least one edge in $S$. Let $\Gamma\left[T\right]$ be the subgraph of $\Gamma$ induced on $T$. Since $0<\left|T\right|\le2\cdot\left|S\right|\leq2\alpha\cdot\left|E\right|=\alpha\cdot k\cdot\left|V\right|=k^{-1/\delta}\cdot\left|V\right|$, the average degree in $\Gamma\left[T\right]$ is at most $\left(1+\sqrt{k-1}\right)\cdot\left(1+C\cdot\delta\right)=\left(1+\sqrt{k-1}\right)\cdot\left(1+\frac{\epsilon}{1+\sqrt{k-1}}\right)=1+\sqrt{k-1}+\epsilon<d$. Therefore, $\Gamma\left[T\right]$ has a vertex $v$ of degree less than $d$. The local subword of $f$ around the vertex $v$ is of positive Hamming weight smaller than $d$ and thus violates a constraint. Therefore, $f$ is not a codeword of $C$. \end{proof} The improvement in this lemma over Lemma 15 of \cite{zbMATH01004587} boils down to the invocation of Kahale's Theorem 4.2 in \cite{zbMATH01103555} instead of the Alon-Chung Lemma (\cite{zbMATH04073036}, Lemma 2.3). This is the same tool that enables the proof of unique-neighbor expansion in \cite{alon2002explicit}. Thus, we get the following improved version of Theorem \ref{thm:main-codes-elaborate}: \begin{thm} \label{thm:main-codes-elaborate-improved-density}There is an asymptotically good infinite family of LDPC codes $\left\{ K'_{n}\right\} $, such that for every $n$, the code $K'_{n}$ is simply-symmetric with respect to the group $G'_{n}=\PSL_{2}\left(q^{n}\right)\rtimes_{\theta_{n}}C_{q+1}$, where $q=157$. Each code $K'_{n}$ has density $20$, and is defined by two $G'_{n}$-orbits of constraints. \end{thm} Note that the improvement in density over the symmetric good LDPC codes of \cite{zbMATH06294581} comes at the expense of a lower guaranteed normalized distance. We now define a code $B'$ with the desired properties. See Chapter 6 of \cite{zbMATH00053945} for the theory of cyclic codes. Let $B''$ be the cyclic code of length $79$ generated by \begin{eqnarray*} g\left(X\right) & = & X^{39}+X^{36}+X^{35}+X^{31}+X^{30}+X^{29}+X^{27}+X^{26}+X^{25}+X^{24}+X^{21}+\\ & & X^{20}+X^{19}+X^{18}+X^{16}+X^{14}+X^{13}+X^{11}+X^{5}+X^{4}+X^{2}+X+1 \end{eqnarray*} By \cite{zbMATH03593474}, the code $B''$ has rate $\frac{40}{79}$ and distance $15$. Direct computation shows that $h\left(X\right)=\left(X^{79}-1\right)/g\left(X\right)$ has exactly $20$ nonzero coefficients. Thus, the density of the code $B''$ is $20$ (see Section 6.2 of \cite{zbMATH00053945}). Let $B'$ be the cyclic code of length $158=2\cdot79$ generated by $g\left(X\right)^{2}$. Note that the codewords of $B'$ are exactly the result of interleaving two codewords of $B''$. The code $B'$ has rate $\frac{40}{79}$, distance $15$ and density $20$.
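The 'direct computation' above is easy to reproduce; the following short Python sketch (ours, not part of the original text) divides $X^{79}-1$ by $g\left(X\right)$ over $\mathbb{F}_{2}$ and reports the number of nonzero coefficients of the quotient $h\left(X\right)$, encoding polynomials as integers whose bits are the coefficients.
\begin{verbatim}
g_exponents = [39, 36, 35, 31, 30, 29, 27, 26, 25, 24, 21, 20, 19, 18,
               16, 14, 13, 11, 5, 4, 2, 1, 0]
g = sum(1 << e for e in g_exponents)
f = (1 << 79) | 1                    # X^79 - 1 = X^79 + 1 over GF(2)

def gf2_divmod(a, b):
    # quotient and remainder of a divided by b, as polynomials over GF(2)
    q, db = 0, b.bit_length() - 1
    while a and a.bit_length() - 1 >= db:
        shift = a.bit_length() - 1 - db
        q ^= 1 << shift
        a ^= b << shift
    return q, a

h, r = gf2_divmod(f, g)
print("g(X) divides X^79 - 1:", r == 0)
print("degree of h(X):", h.bit_length() - 1)
print("nonzero coefficients of h(X):", bin(h).count("1"))
\end{verbatim}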
\section{Conclusion} We make use of the simple-generator-symmetry of the Ramanujan graphs of \cite{zbMATH02183844}, and of our observation that the line graph of a simply-generator-symmetric \emph{bipartite} Cayley graph is itself a Cayley graph. This observation is crucial in the realization of some of the unique-neighbor expanders of \cite{alon2002explicit} as Cayley graphs. It also allows us to see that the asymptotically good symmetric codes of \cite{zbMATH06294581} are in fact simply-symmetric. Finally, we improve the density of these codes. \section{Acknowledgments} This work was done as part of an M.Sc. thesis submitted to the Hebrew University of Jerusalem. The author would like to thank his advisor Prof. Alexander Lubotzky for his help and guidance. The author would also like to thank the ETH Institute for Theoretical Studies for its hospitality. \section*{References} \end{document}
\begin{document} \begin{abstract} We describe a general algorithm for computing intersection pairings on arithmetic surfaces. We have implemented our algorithm for curves over $\bb Q$, and we show how to use it to compute regulators for a number of Jacobians of smooth plane quartics, and to numerically verify the conjecture of Birch and Swinnerton-Dyer for the Jacobian of the split Cartan curve of level~13, up to squares. \end{abstract} \title{Explicit arithmetic intersection theory and computation of N\'eron-Tate heights} \newcommand{\sch}[1]{\textcolor{blue}{#1}} \section{Introduction} If $A/K$ is an abelian variety over a global field $K$, then an ample symmetric divisor class $c$ on $A$ induces a non-degenerate quadratic form $\hat{h}_c$ on $A(K)$, the {\em N\'eron-Tate height} or {\em canonical height} with respect to $c$. Given $P \in A(K)$, the height of $P$ can be defined as \begin{equation*} \hat{h}_c(P) = \lim_{n \to \infty} \frac{1}{n^{2}}h_c(nP), \end{equation*} where $h_c$ is a Weil height on $A$ induced by $c$ (see~\cite{Ner65} and \cite[Section B.5]{HS00}). The N\'eron-Tate height also induces a symmetric bilinear pairing on $A(K)$ given by $$\hat{h}_c(P,Q) = \frac12\left(\hat{h}_c(P+Q) - \hat{h}_c(P) - \hat{h}_c(Q)\right).$$ An algorithm to compute the N\'eron-Tate height is required, for instance, to compute generators of $A(K)$. More precisely, the canonical height endows $A(K)\otimes_{\Z} \R$ with the structure of a Euclidean vector space and $A(K)/A(K)_{\tors}$ embeds into this vector space as a lattice $\Lambda$. Given generators of a subgroup of $A(K)/A(K)_{\tors}$ of finite index, we can find generators of the full group by saturating the corresponding sublattice of $\Lambda$. All known methods for this saturation step require an algorithm to compute the canonical height (see~\cite{Sik95, FS97, Sto02}). Another important application is the computation of the regulator of $A/K$, a quantity which appears in the conjecture of Birch and Swinnerton-Dyer. The regulator of $A/K$ is the Gram determinant of a set of generators of $\Lambda$ (for a certain choice of $c$). If we only have generators of a finite index subgroup available, then we can still compute the regulator up to an integral square factor. We can construct $\hat{h}_c$ explicitly if we have explicit formulas for a map to projective space corresponding to the linear system of $c$. For instance, an explicit embedding of the Kummer variety of $A$ has been used to give algorithms for the computation of N\'eron-Tate heights for elliptic curves~\cite{Sil88, MS16a} and Jacobians of hyperelliptic curves of genus 2~\cite{FS97, Sto02, MS16b} and genus 3~\cite{Sto17}. However, this approach quickly becomes infeasible if we increase the dimension of $A$. But if $J$ is the Jacobian variety of a smooth projective geometrically connected curve $C/K$, then there is an alternative way due to Faltings and Hriljac to describe the N\'eron-Tate height on $J/K$ with respect to twice the class of a symmetric theta divisor as follows (see \ref{S:Gross} for details): \begin{equation}\label{eq:FH} \hat h_{2 \vartheta}([D],[E]) = -\sum_{v \in M_K} \Span{D,E}_v.
\end{equation} Here $D$ and $E$ are two divisors of degree $0$ on $C$ without common component, $M_K$ denotes the set of places of $K$, and $\Span{D,E}_v$ denotes the local N\'eron pairing of $D$ and $E$ at $v$, which is defined below in sections \oref{sec:non_arch} (for the non-archimedean places) and \oref{sec:arch} (for the archimedean places). In this note, we show how to turn \ref{eq:FH} into an algorithm for computing $\hat{h}_{2\vartheta}$ when $K=\Q$ (our algorithm can be generalised easily to work over general global fields). This was already done independently by the second-named and the third-named authors in~\cite{Hol12} and~\cite{Mul14} in the special case of hyperelliptic curves. But for Jacobians of non-hyperelliptic curves, no practical algorithms for computing N\'eron-Tate heights are known, and therefore no numerical evidence for the Birch and Swinnerton-Dyer conjecture has been collected. In the present paper we develop such an algorithm and we give numerical evidence for the conjecture of Birch and Swinnerton-Dyer for a number of Jacobians, including that of the split Cartan modular curve of level 13. Our main contribution is a new way to compute the non-archimedean local N\'eron pairings. In fact, we give a new algorithm for computing the intersection pairing of two divisors without common component on a regular arithmetic surface, which might be of independent interest. In short, we lift divisors from the generic fibre to the arithmetic surface by saturating the defining ideals, and we use an inclusion-exclusion principle to deal with divisors intersecting on several affine patches. The archimedean local N\'eron pairings $\Span{D,E}_\infty$ are computed in essentially the same way as in~\cite{Hol12} and~\cite{Mul14}, by pulling back a translate of the Riemann theta function to $C(\C)$. This requires explicitly computing period matrices and Abel-Jacobi maps on Riemann surfaces; we use the recent algorithms of Neurohr~\cite[Chapter 4]{NeurohrPhD} and Molin-Neurohr~\cite{MN19}. The paper is organised as follows: In~\ref{sec:non_arch} we introduce our algorithm to compute non-archimedean local N\'eron pairings. The computation of archimedean local N\'eron pairings is discussed in \ref{sec:arch}. The topic of \ref{sec:global} is how to apply these to compute canonical heights using \ref{eq:FH}. Finally, in~\ref{sec:examples} we demonstrate the practicality of our algorithm by computing the N\'eron-Tate regulator, up to an integral square, for several Jacobians of smooth plane quartics including the split (or, equivalently, non-split) Cartan modular curve of level 13, and we numerically verify BSD for the latter curve up to an integral square. \subsection{Acknowledgements} Most of the work for this paper was done when the authors were participating in the workshop ``Arithmetic of curves'', held in Baskerville Hall in August~2018. We would like to thank the organisers Alexander Betts, Tim Dokchitser, Vladimir Dokchitser and C\'eline Maistret, as well as the Baskerville Hall staff, for providing a great opportunity to concentrate on this project. We also thank Christian Neurohr for sharing his code to compute Abel-Jacobi maps for general curves and for answering several questions, and Martin Bright for suggesting the use of the saturation. Finally, we are very grateful to the anonymous referee for a thorough and rapid report.
\section{The non-archimedean N\'eron pairing}\label{sec:non_arch} \newcommand{\fp}{\mathfrak p} \renewcommand{\fp}{p} For simplicity of exposition, we restrict ourselves to curves over the rational numbers; everything we do generalises without substantial difficulty to global fields. For background on arithmetic surfaces and their intersection pairing, we refer to Liu's book \cite{Liu02}. In this section we work over a fixed prime $p$ of $\bb Z$. Let $C/\bb Q_{\fp}$ be a smooth proper geometrically connected curve, and let $\ca C/\bb Z_{\fp}$ be a proper regular model of $C$. Because $\ca C$ is a regular surface, we have an intersection pairing between divisors on $\ca C$ having no components in common; if $\mathcal{P}$ and $\mathcal{Q}$ are distinct prime divisors, the pairing is given by \begin{equation*} \iota(\mathcal{P} , \mathcal{Q}) = \sum_{P \in \ca C^0} \on{length}_{\ca O_{\ca C, P}}\left(\frac{\ca O_{\ca C, P}}{\ca O_{\ca C, P}(-\mathcal{P}) + \ca O_{\ca C, P}(-\mathcal{Q})}\right)\log \# k(P); \end{equation*} here $\ca C^0$ denotes the set of closed points of $\ca C$, and $k(P)$ denotes the residue field of the point $P$. We extend to arbitrary divisors with no common components by additivity. In general, this intersection pairing fails to respect linear equivalence. However, if $\ca D$ is a divisor on $\ca C$ whose restriction to the generic fibre $C$ has degree $0$, and $Y$ is a divisor on $\ca C$ pulled back from a divisor on $ \on{Spec} \bb Z_{\fp}$, then $\ca D \cdot Y = 0$. By the usual formalism with a moving lemma, this allows us to define the intersection pairing between any two divisors $\ca D$ and $\ca E$ on $\ca C$ as long as the restrictions of $\ca D$ and $\ca E$ to the generic fibre $C$ have degree $0$ and disjoint support. If $D$ is a divisor on $C$, we write $\ca D$ for the unique horizontal divisor on $\ca C$ whose generic fibre is $D$. For a divisor $D$ of degree $0$ on $C$, we write $\Phi(D)$ for a vertical $\Q$-divisor on $\ca C$ such that for every vertical divisor $Y$ on $\ca C$, we have $\iota(Y , \ca D + \Phi(D)) = 0$; this $\Phi(D)$ always exists, and is unique up to the addition of divisors pulled back from $ \on{Spec} \bb Z_{\fp}$ (see~\cite[Theorem~III.3.6]{Lan88}). Let $D$ and $E$ be two divisors on $C$, of degree $0$ and with disjoint support. Then the \emph{local N\'eron pairing between $D$ and $E$} is given by \begin{equation*} \Span{D,E}_p \coloneqq \iota(\ca D +\Phi(D),\ca E +\Phi(E)). \end{equation*} This pairing is bilinear and symmetric, but it does not respect linear equivalence; see~\cite[Theorem~III.5.2]{Lan88}. Our goal in this section is to compute the pairing $\Span{D,E}_p$, assuming that $D$ and $E$ are given to us (arranging suitable $D$ and $E$, and identifying those primes $p$ which may yield a non-zero pairing, will be discussed in \ref{sec:global}). A first step in applying the above definitions is to compute a regular model of $C$ over $\bb Z_p$. Algorithms are available for this in {\tt Magma}, one due to Steve Donnelly, and another to Tim Dokchitser \cite{Dok18}. For our examples below we used Donnelly's implementation as slightly more functionality was available, but our emphasis in this section is on providing a general-purpose algorithm which should be easily adapted to take advantage of future developments in the computation of regular models.
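To make the length formula above concrete, consider the following toy computation (ours, not one of the models treated later): on the regular scheme $\on{Spec} \bb Z_p[x]$, the prime divisors $\mathcal{P}=V(x)$ and $\mathcal{Q}=V(x-p^2)$ meet only at the closed point $P$ given by the maximal ideal $(p,x)$, where $\bb Z_p[x]_{(p,x)}/(x,\,x-p^2) \cong \bb Z_p/p^2\bb Z_p$ has length $2$ and residue field of cardinality $p$, so the contribution of $P$ to $\iota(\mathcal{P},\mathcal{Q})$ is $2\log p$.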
\subsection{The naive intersection pairing} To facilitate the computation of the local N\'eron pairing at non-archimedean places, we will introduce a \emph{naive} intersection pairing, which coincides with the standard intersection pairing on regular schemes, and then give an algorithm to compute the naive intersection pairing in a fairly general setting. \begin{situation}\label{sit:affine} We fix the following data: \begin{itemize} \item An integral domain $R$ of dimension 2, flat and finitely presented over $\bb Z$; \item effective Weil divisors $\ca D$ and $\ca E$ on $\ca C\coloneqq \on{Spec}R$ with no common irreducible component in their support, defined by the vanishing of ideals $I_{\ca D}$ and $I_{\ca E} $ in $ R$ (i.e. $I_{\ca D} = \ca O_{\ca C}(-\ca D) \subseteq \ca O_{\ca C}$, and analogously for $\ca E$); \item a constructible subset $V$ of $\ca C$. \end{itemize} \end{situation} For computational purposes, we suppose that a finite presentation of $R$ is given, along with generators of $I_{\ca D}$ and $I_{\ca E}$. Moreover, we suppose that $V$ is given as a disjoint union of intersections of open and closed subsets. \begin{definition} Let $P$ be a closed point of $\ca C$ lying over $p$. The \emph{naive intersection number of $\ca D$ and $\ca E$ at $P$} is given by \begin{equation*} \iota^{naive}_P(\ca D, \ca E) \coloneqq \on{length}_{\ca O_{\ca C, P}}\left( \frac{\ca O_{\ca C, P}}{I_{\ca D,P} + I_{\ca E,P}}\right)\log \# k(P), \end{equation*} where $I_{\ca D,P} = I_{\ca D}\otimes {\ca O_{\ca C,P}}$ and likewise for $\ca E$. If $W$ is any subset of $\ca C$, we define \begin{equation*} \iota^{naive}_W(\ca D, \ca E) \coloneqq \sum_{P \in W^0} \iota^{naive}_P(\ca D, \ca E), \end{equation*} where $W^0$ denotes the set of closed points in $W$ lying over $p$. \end{definition} Note that if $\ca C$ is regular at $P$, then $\iota^{naive}_P(\ca D, \ca E)$ is the usual intersection pairing $\iota_P(\ca D, \ca E)$ at $P$. If $W$ and $W'$ are disjoint subsets of $\ca C$, then \begin{equation}\iota^{naive}_W(\ca D, \ca E) + \iota^{naive}_{W'}(\ca D, \ca E) = \iota^{naive}_{W\cup W'}(\ca D, \ca E). \label{lem:additivity} \end{equation} We present here an algorithm for computing the naive intersection pairing $\iota^{naive}_V(\ca D, \ca E)$ for $V$ any constructible subset of $\ca C$. This seems to us a reasonable level of generality to work in; constructible subsets are the most general subsets easily described by a finite amount of data, and should be flexible enough for computing local N\'eron pairings for any reasonable way a regular model is given to us. Note that only being able to compute the intersection pairing at points would not be sufficient, as we would then need to sum over infinitely many points, and only being able to compute it for $V$ affine gives complications where patches of the model overlap. \begin{algorithm}\label{alg:NA} Suppose we are in \ref{sit:affine}. The following is an algorithm to compute $\iota^{naive}_V(\ca D, \ca E)$. \end{algorithm} \textbf{First reduction step:} By \ref{lem:additivity} we may assume $V$ is locally closed. \textbf{Second reduction step:} Write $V = Z_1 \setminus Z_2$ with $Z_2 \subseteq Z_1$ closed; then by \ref{lem:additivity} we have \begin{equation*} \iota^{naive}_V(\ca D, \ca E) = \iota^{naive}_{Z_1}(\ca D, \ca E) - \iota^{naive}_{Z_2}(\ca D, \ca E), \end{equation*} so we may assume $V$ is closed. \textbf{Third reduction step:} Write $V = Z(f_1, \dots, f_r)$, with $f_i \in R$.
For a subset $T \subseteq \{1, \dots, r\}$ define $S_T = \on{Spec} \left(\left(\prod_{i \in T} f_i\right)^{-1} R\right)$. Then by inclusion-exclusion we have \begin{equation*} \iota^{naive}_V(\ca D, \ca E) = \sum_{T \subseteq \{1, \dots, r\}} (-1)^{\# T} \iota^{naive}_{S_T} (\ca D, \ca E). \end{equation*} Since $S_T$ is affine, we are reduced to the case where $V$ is the whole of $\ca C = \on{Spec} R$. \textbf{Concluding the algorithm:} Since forming quotients commutes with flat base-change, we obtain \begin{equation*} \iota_{\ca C}^{naive}(\ca D, \ca E) = \on{length}_R \left( \frac{R\otimes_{\bb Z} \bb Z_p}{I_{\ca D} \otimes_{\bb Z} \bb Z_p + I_{\ca E} \otimes_{\bb Z} \bb Z_p} \right)\log \#k(p). \end{equation*} This can be computed using \cite[Algorithm 1]{Mul14}. For efficiency we compute this length working modulo a sufficiently large power of $p$, which will be determined in \ref{rem:precision}. \begin{remark} Note that the third reduction step is exponential in $r$. In the examples we've computed, the largest value of $r$ was~4. \end{remark} \subsection{Computing the intersection pairing}\label{sec:intersection_pairing} Let $C/\bb Q_p$ be a smooth projective curve, $\ca C/\bb Z_p$ a regular model, and $\ca D$, $\ca E$ two divisors on $\ca C$ without common component. In this section, we describe several approaches to computing the intersection pairing $\iota(\ca D, \ca E)$, depending on how $\ca C$ is given to us. \subsubsection*{Regular model given by affine charts and glueing data} Suppose that the regular model $\ca C$ is given as a list of affine charts $C_1, \dots, C_n$ and glueing data. We partition $\ca C$ into constructible subsets $V_i$ by, for each $i \in \{ 1, \dots, n\}$, setting $V_i = C_i\setminus (\cup_{j < i} C_j)$. Then the intersection pairing is given by \begin{equation*} \iota(\ca D, \ca E) = \sum_{i \in \{ 1, \dots, n\}} \iota^{naive}_{V_i}(\ca D, \ca E). \end{equation*} \subsubsection*{Regular model as described by {\tt Magma}} {\tt Magma}'s regular models implementation (due to Steve Donnelly) describes the model $\ca C$ in a slightly different way. It constructs a regular model by repeatedly blowing up non-regular points and/or components in a proper model. In this way, it creates a list of affine patches $U_i$ together with open immersions from the generic fibre of the $U_i$ to $C$. For each $i$, it stores a constructible subset $V_i \subseteq U_i$, consisting of all regular points in the special fibre which did not appear in any of the previous affine patches. These $V_i$ form a constructible partition of the special fibre of a regular model. In this case, we simply compute \begin{equation*} \iota(\ca D, \ca E) = \sum_{i \in \{ 1, \dots, n\}} \iota^{naive}_{V_i}(\ca D, \ca E). \end{equation*} \subsection{Computing the non-archimedean local N\'eron pairing} Let $C/\bb Q_p$ be a smooth projective curve, $\ca C/\bb Z_p$ a regular model, $D$ and $E$ degree $0$ divisors on $C$ with disjoint support. In this section we will describe how to compute the local N\'eron pairing $\Span{D, E}_p$. First we compute the extensions of $D$ and $E$ to horizontal divisors $\ca D$ and $\ca E$ on $\ca C$. We break $D$ and $E$ into their effective and anti-effective parts, then choose some extensions of their defining ideals to $\ca C$ (the associated subschemes may contain many vertical components). We then saturate these ideals with respect to the prime $p$ to obtain (ideals for) horizontal divisors. This works by the following well-known lemma.
\begin{lemma}\label{lem:saturation} Let $R$ be a $\bb Z$-algebra, and $I$ an ideal of $R$. The ideal sheaf of the schematic image of $ \on{Spec} R[1/p]/ (I \otimes_R R[1/p])$ in $ \on{Spec} R$ is given by the saturation \begin{equation*} (I:p^\infty) = \{r \in R : \exists n : p^n r \in I\}. \end{equation*} \end{lemma} \begin{proof} It is immediate that $(I:p^\infty)\otimes_R R[1/p] = I\otimes_R R[1/p]$. We need to check that, for any ideal $J \triangleleft R$ with $J\otimes_R R[1/p] = I\otimes_R R[1/p]$, we have $J \subseteq (I:p^\infty)$. Indeed, if $j \in J$ then we can write $j$ as a finite sum of elements $\frac{i}{p^{n_i}}$ with $i \in I$, $n_i \in \bb N$, so $p^{\max_i n_i}j \in I$, as required. \end{proof} To compute the vertical correction term $\Phi(D)$, we use the algorithm from \ref{sec:intersection_pairing} to compute the intersection of $\ca D$ with every component of the fibre of $\ca C$ over $p$, then apply simple linear algebra as in \cite[\S 4.5]{Mul14} to find the coefficients of $\Phi(D)$. Finally, we use again the algorithm in \ref{sec:intersection_pairing} to compute \begin{equation*} \Span{D, E}_p = \iota(\ca D +\Phi(D),\ca E +\Phi(E)) = \iota(\ca D, \ca E) + \iota(\Phi(D),\ca E). \end{equation*} \section{The archimedean N\'eron pairing}\label{sec:arch} \subsection{Green's functions; definition of the pairing} Let $C/\bb C$ be a smooth projective connected curve of genus $g$, and $\varphi$ be a volume form on $C$. If $E$ is a divisor on $C$, we write \begin{equation*} g_{E,\varphi}\colon C(\C) \setminus \supp(E) \to \bb R \end{equation*} for a Green's function on $C(\C)$ with respect to $E$ (see \cite[II, \S 1]{Lan88}). If $E$ has degree $0$, and $\varphi'$ is another volume form, then $g_{E, \varphi} - g_{E, \varphi'}$ is constant. If $D = \sum_P n_P P$ is another divisor of degree $0$ with support disjoint from $E$, then the \emph{local N\'eron pairing} is given by \begin{equation*} \Span{D,E}_{\infty} \coloneqq \sum_P n_Pg_{E,\varphi}(P); \end{equation*} this pairing is bilinear and symmetric, and is independent of the choice of $\varphi$, see~\cite[Theorem~III.5.3]{Lan88}. As we evaluate $g_{E,\varphi}$ in a divisor of degree $0$, we can replace $g_{E,\varphi}$ by $g_{E,\varphi}+c$ for a constant $c \in \R$ without changing $\Span{D,E}_{\infty}$. \subsection{Theta functions; a formula for the pairing}\label{sec:theta} Let $\{\omega_1,\ldots,\omega_g\}$ be an orthonormal basis of $H^0(C, \Omega^1)$ with respect to the scalar product $(\omega,\eta)\mapsto\frac{i}{2}\int_{C(\C)} \omega\wedge \bar{\eta}$ and let $\varphi \coloneqq \frac{i}{2g}(\omega_1\wedge\bar{\omega_1}+\ldots+\omega_g\wedge\bar{\omega_g})$ be the canonical volume form. We fix a base point $P_0 \in C(\C)$ and denote by $\alpha:C(\C) \to J(\C)$ the Abel-Jacobi map with respect to $P_0$. By abuse of notation, we also denote the additive extension of $\alpha$ to divisors on $C$ by $\alpha$. Following Hriljac, we construct a Green's function by pulling back the logarithm of a translate of the Riemann theta function $\theta$ along $\alpha$. Let $\tau\in \C^{g\times g}$ be the small period matrix of $J(\C)$; it has symmetric positive definite imaginary part and satisfies $J(\C) \cong \C^g/(\Z^g+\tau\Z^g)$. We define \[\xymatrix{ j:\C^g\ar@{->>}[r]&\C^g/(\Z^g+\tau\Z^g)\ar[r]^{\qquad \simeq} &J(\C).} \] Let $\Theta$ denote the theta divisor on $J$ corresponding to $\alpha$.
By a theorem of Riemann (see \cite[Theorem~13.4.1]{Lan83}), there exists a divisor $W$ on $C$ such that $2W$ is canonical and such that the translate $\Theta_{-{\alpha(W)}}$ of $\Theta$ by $-\alpha(W)$ is the divisor of the normalised (in the notation of~\cite[\S13.1]{Lan83}) version of the Riemann theta function \begin{equation}\label{eq:norm_theta} F_{\Theta_{-\alpha(W)}}(z) \coloneqq \theta(z,\tau)\exp\left(\frac{\pi}{2}z^T(\mathop{\mathrm{Im}} \tau)^{-1}z\right). \end{equation} This $W$ is in fact unique up to linear equivalence, by \cite[Chapter II, theorem 3.10]{Tata1}. For the remainder of this section, we suppose that $E=E_1-E_2$, where $E_1$ and $E_2$ are {\em non-special}. This means that they are effective of degree $g$ with $h^0(C, \ca O(E_i)) = 1$. Because of the bilinearity of the N\'eron pairing, the following gives a formula to compute $\Span{D,E}_{\infty}$ for all $D\in \operatorname{Div}^0(C)$ with support disjoint from $E$. \begin{proposition}\label{P:gfformula} Suppose that $D=P_1-P_2$ with $P_1,\, P_2 \in C(\C)$, not in the support of $E$. Then \[ \Span{D,E}_{\infty} = -\log\left|\frac{ \theta(z_{11},\tau)\cdot\theta(z_{22},\tau)}{\theta(z_{12},\tau)\cdot\theta(z_{21},\tau)}\right| -2\pi\mathop{\mathrm{Im}} (z_E)^T\mathop{\mathrm{Im}} (\tau)^{-1}\mathop{\mathrm{Im}} (z_D) \] where $z_D, z_E, z_{ij} \in \C^g$ satisfy $j(z_D) = \alpha(D)$, $j(z_E) = \alpha(E)$ and $j(z_{ij}) = \alpha(P_i-E_j+W)$. \end{proposition} For the proof of~\ref{P:gfformula} we need the notion of a N\'eron function on $J(\C)$, see~\cite[\S13.1]{Lan83}. For each divisor $A \in \operatorname{Div}(J)$, there is {\em a N\'eron function with respect to $A$}, which is uniquely determined up to adding a constant. This is a continuous function $\lambda_A : J(\C) \setminus \supp(A) \to \R$, and together they have the following properties: \begin{enumerate}[label=(NF{\arabic*})] \item if $A,B \in \operatorname{Div}(J)$, then $\lambda_{A+B} - \lambda_{A} - \lambda_{B}$ is constant; \item if $f \in \C(J)$, then $\lambda_{\operatorname{div}(f)} +\log|f| $ is constant; \item if $A \in \operatorname{Div}(J)$ and $Q \in J(\C)$, then $P \mapsto\lambda_{A_Q}(P) - \lambda_{A}(P-Q)$ is constant. \end{enumerate} A result of N\'eron lets us express the N\'eron function of a divisor in terms of the normalised theta function associated to that divisor. In particular, we find: \begin{lemma}\label{lem:nf1} We get a N\'eron function with respect to ${\Theta_{-\alpha(W)}}$ by mapping $P\in J(\C)$ to \[ \lambda_{\Theta_{-\alpha(W)}}(P) \colonequals -\log|\theta(z, \tau)|+ \pi \mathop{\mathrm{Im}}(z) ^T \mathop{\mathrm{Im}}(\tau)^{-1} \mathop{\mathrm{Im}}(z), \] where $z\in\C^g$ is such that $j(z) = P$. \end{lemma} \begin{proof} Let $H$ denote the Hermitian form with matrix $\mathop{\mathrm{Im}}(\tau)^{-1}$; by~\cite[Proposition 13.3.1]{Lan83} this is the Hermitian form (in the language of~\cite[\S13.1]{Lan83}) of the divisor ${\Theta_{-\alpha(W)}}$. 
Because of N\'eron's theorem (see~\cite[Theorem~13.1.1]{Lan83}) and because of~\ref{eq:norm_theta}, we get a N\'eron function by mapping $P \in J(\C)$ to \begin{align*} \lambda_{\Theta_{-\alpha(W)}}(P)& \colonequals -\log|F_{\Theta_{-\alpha(W)}}(z)| +\frac{\pi}{2}H(z,z) \\ & =-\log| \theta(z,\tau)|- \log \left|\exp\left(\frac{\pi}{2}z^T(\mathop{\mathrm{Im}} \tau)^{-1}z\right)\right| + \frac{\pi}{2} z^T\mathop{\mathrm{Im}}(\tau)^{-1}\bar{z}\\ &= -\log| \theta(z,\tau)|- \frac{\pi}{2}\left(\mathop{\mathrm{Re}}(z)^T(\mathop{\mathrm{Im}} \tau)^{-1}\mathop{\mathrm{Re}}(z) - \mathop{\mathrm{Im}}(z)^T(\mathop{\mathrm{Im}} \tau)^{-1}\mathop{\mathrm{Im}}(z)\right) + \frac{\pi}{2} z^T\mathop{\mathrm{Im}}(\tau)^{-1}\bar{z}\\ &=-\log|\theta(z, \tau)|+ \pi \mathop{\mathrm{Im}}(z) ^T \mathop{\mathrm{Im}}(\tau)^{-1} \mathop{\mathrm{Im}}(z), \end{align*} where $z\in\C^g$ is such that $j(z) = P$. \end{proof} \begin{proof}[Proof of~\ref{P:gfformula}] Let $\Theta^{-} = [-1]^*\Theta$. We first find a N\'eron function for $\Theta^{-}_{\alpha(E_j)}$, where $j \in\{1,2\}$. Since we have \[\Theta^- = \Theta_{-\alpha(2W)}\] by \cite[Theorem~5.5.8]{Lan83}, property (NF3) implies that \begin{equation}\label{E:neron_Theta} \lambda_{j}(P) \coloneqq \lambda_{\Theta_{-\alpha(W)}}(P - \alpha(E_j) + \alpha(W)) \end{equation} is a N\'eron function with respect to $\Theta^{-}_{\alpha(E_j)}$, where $\lambda_{\Theta_{-\alpha(W)}}$ is as in~\ref{lem:nf1}. Since $E_j$ is non-special, a result of Hriljac (see~\cite[Theorem~13.5.2]{Lan83}) implies that \begin{equation}\label{E:green_neron} g_{E_j,\varphi} = \lambda_{j}\circ\alpha + c_j \end{equation} for some constant $c_j\in \R$. Using~\ref{E:green_neron}, \ref{E:neron_Theta} and~\ref{lem:nf1}, we conclude that \begin{align*} g_{E_j,\varphi}(P_i) &= \lambda_{\Theta_{-\alpha(W)}}(\alpha(P_i) - \alpha(E_j) +\alpha(W))+c_j \\ &= -\log|\theta(z_{ij},\tau)|+\pi \mathop{\mathrm{Im}}(z_{ij}) ^T \mathop{\mathrm{Im}}(\tau)^{-1} \mathop{\mathrm{Im}}(z_{ij})+c_j. \end{align*} The result now follows from \begin{equation*}\label{} g_{E,\varphi}(D) = g_{E_1,\varphi}(P_1) - g_{E_2,\varphi}(P_1) - g_{E_1,\varphi}(P_2) + g_{E_2,\varphi}(P_2). \end{equation*} and the definition of the local N\'eron pairing. \end{proof} \begin{remark}\label{R:steffen_messed_up} In~\cite[Corollary~4.16]{Mul14} and~\cite[\S7.3]{Hol12} equivalent formulas for $\Span{D,E}_\infty$ were given for the special case of hyperelliptic curves. Our \ref{P:gfformula} implies those results, if we use a Weierstrass point as the base point for the Abel-Jacobi map; in this case $\alpha(W)=0$. Note that \cite[Corollary~4.16]{Mul14} is stated without the assumption that the curve is hyperelliptic, but is false in general. We have adapted and corrected the proof given there. Alternatively, one could also generalise the proof in~\cite[\S7]{Hol12}. \end{remark} \begin{remark}\label{R:special_is_bad} In the proof of~\ref{P:gfformula} the condition that $E_1$ and $E_2$ are non-special is only used to apply Hriljac's theorem which constructs the Green's function on $C$ by pulling back a N\'eron function on $J$ along the Abel-Jacobi map. If the divisor $E_j$ is non-special, then the intersection of the translate of $\Theta^{-}$ by $\alpha(E_j)$ with the curve $C$ recovers the divisor $E_j$ (see \cite[Theorem~5.5.8]{Lan83}), hence we can pull back a N\'eron function with respect to $\Theta^{-}_{\alpha(E_j)}$ to obtain a Green's function for the divisor $E_j$. 
In contrast, if the divisor $E_j$ is special then this intersection can (set-theoretically) be much larger, so pulling back a N\'eron function does not give anything meaningful. Indeed, we have found examples where \ref{P:gfformula} is false for special $E_1$ and $E_2$. \end{remark} \subsection{Computing the archimedean local N\'eron pairing} To compute $\Span{D,E}_{\infty}$, we use the {\tt Magma} code written by Christian Neurohr for the computation of the small period matrix $\tau$ associated to $C(\C)$ and the Abel-Jacobi map $\alpha$. See Neurohr's thesis~\cite{NeurohrPhD} for a description of the algorithm. This code makes it possible to numerically approximate these objects efficiently to any desired precision. If $C$ is superelliptic, then we instead use Neurohr's implementation of the specialised algorithms of Molin-Neurohr~\cite{MN19} (\url{https://github.com/pascalmolin/hcperiods}). The code requires as input a (possibly singular) plane model of $C$; this is easy to produce in practice, for instance via projection or by computing a primitive element of the function field of $C$. The Riemann theta function can be computed using code already contained in {\tt Magma}. It is also necessary to find the divisor $W$ in~\ref{P:gfformula}. We first compute a canonical divisor and its image under $\alpha$. Then we run through all preimages under multiplication by~2 in $\C^g/(\Z^g\oplus\tau\Z^g)$ until we find the correct $W$ so that $\Theta_{-\alpha(W)}$ is the divisor of the normalised Riemann theta function, see \ref{sec:tors}. Once we have the correct $\alpha(W)$, we can compute $\Span{D,E}_\infty$ easily via~\ref{P:gfformula}. \begin{remark} The implementation of Molin-Neurohr and the computation of theta functions in {\tt Magma} are rigorous, which means that for superelliptic curves our algorithm returns a provably correct result to any desired precision, if we disregard possible precision loss. To handle the latter, one would have to use interval or ball arithmetic, as implemented, for instance, in {\tt Arb}~\cite{Joh17}. Indeed, Molin and Neurohr have implemented their algorithms in {\tt Arb}, but we have not attempted to use this. In contrast, Neurohr's {\tt Magma}-implementation of his algorithms for more general curves does not currently yield provably correct output, see the discussion in~\cite[Section~4.10]{NeurohrPhD}. \end{remark} \section{The global height pairing}\label{sec:global} \subsection{Faltings-Hriljac}\label{S:Gross} Let $K$ be a global field and let $C/K$ be a smooth, projective, geometrically connected curve of genus $g>0$ with Jacobian $J = \on{Pic}^0_{C/K}$, and let $D$ and $E$ be degree $0$ divisors on $C$ with disjoint support. If $v\in M_K$ is a place of $K$, then according to~\cite[III, \S 5]{Lan88}, the local N\'eron pairing at $v$ satisfies \[ \langle D, \operatorname{div}(f)\rangle_v = -\log|f(D)|_v, \] for all rational functions $f \in K(C)^\times$ and divisors $D\in \operatorname{Div}(C)$ of degree $0$, with support disjoint from $\operatorname{div}(f)$. Here the absolute values are normalised to satisfy the product formula and we define $f(D)=\prod_jf(Q_j)^{m_j}$ if $D = \sum m_jQ_j$. Hence the global N\'eron pairing $\sum_{v \in M_K} \Span{D,E}_v$ does respect linear equivalence and extends to a symmetric bilinear pairing on the rational points of $J$. We now relate the global N\'eron pairing to N\'eron-Tate heights. Write $T$ for the image of $C^{g-1}$ in $\on{Pic}^{g-1}_{C/K}$.
Choose a class $w \in \on{Pic}^{g-1}_{C/K}(\bar{K})$ with $2w$ equal to the canonical class of $C$ in $\on{Pic}^{2g-2}_{C/K}(K)$. Then the class $\vartheta$ of $T_{-w}$ is a symmetric ample divisor class on $J_{\bar K}$, and $2\vartheta$ is independent of the choice of $w$ and is defined over $K$. The following theorem is due to Faltings and Hriljac \cite{Fal84, Hri85, Gro84}. \begin{theorem}\label{T:FH} Let $D$ and $E$ be degree $0$ divisors on $C$ with disjoint support. Then \[ \hat h_{2 \vartheta}([D],[E]) = -\sum_{v \in M_K} \Span{D,E}_v. \] \end{theorem} In the following, we assume $K=\Q$ for simplicity. We also assume that every element of $J(\Q)$ can be represented using a $\Q$-rational divisor; this always holds if $C$ has a $\Q_v$-rational divisor of degree 1 for all places $v$ of $\Q$, see~\cite[Proposition~3.3]{PS97}. This assumption is convenient, as it allows us to compute the non-archimedean N\'eron pairings over $\Z_p$. If such representatives do not exist, we could work over finite extensions. \begin{remark} There is a similar decomposition of the $p$-adic height on $J$ due to Coleman-Gross~\cite{CG89}, where the local summand at a non-archimedean prime $v \ne p$ is the N\'eron pairing at $v$, up to a constant factor, and there is no archimedean summand. Therefore we only need to combine \ref{alg:NA} with an algorithm to compute the summand at $p$, which is defined in terms of Coleman integrals, to get a method for the computation of the $p$-adic height on $J$. This would be interesting, for instance, in the context of quadratic Chabauty, see the discussion in \cite[\S1.7]{BDMTV}. For hyperelliptic curves, such an algorithm is due to Balakrishnan-Besser~\cite{BB12}. \end{remark} \subsection{Finding suitable representatives} Suppose we are given two points $P$, $Q \in J(\bb Q)$, represented by $\Q$-rational degree~$0$ divisors $D$ and $E$ respectively, and wish to compute the height pairing $\hat h_{2 \vartheta}(P,Q)$. The local N\'eron pairings are only defined for divisors with disjoint support. If $D$ and $E$ have common support, we can move $E$ away from $D$ using strong approximation, see~\cite[\S4.9.4]{NeurohrPhD}. This algorithm computes, for each point $P$ in the common support of $D$ and $E$, a rational function $f_P$ such that $v_P(\operatorname{div}(f_P)) = -1$ and such that $\supp(\operatorname{div}(f_P)) \cap \supp(D) = \{P\}$. We replace $E$ by $E+\sum_Pv_P(E)\operatorname{div}(f_P)$. In practice, the following approach is often simpler: reduce multiples of $E$ along a suitable divisor until this yields a divisor $E'$ with support disjoint from $D$. Due to the bilinearity of the N\'eron pairings, we can replace $E$ by $E'$, see also~\cite[\S4.1]{Mul14}. In both approaches, the bottleneck is the computation of Riemann-Roch spaces~\cite{Hes02}. We can also use these methods to ensure that $E$ can be written as the difference of non-special divisors. \subsection{Identifying relevant primes} Fix degree~$0$ divisors $D$ and $E$ with disjoint support. A priori, the expression in \ref{T:FH} is an infinite sum; we must identify a finite set $R$ of `relevant' places outside which we can guarantee that the local N\'eron pairing of $D$ and $E$ vanishes. This set $R$ will be the union of three sets: the infinite place, the primes where $C$ has bad reduction, and another finite set containing the other primes at which $D$ and $E$ meet.
\subsubsection{Bad primes} We assume that $C$ is given with an embedding $i\colon C \to \bb P^n_{\bb Q}$ in some projective space, and we write $\bar C$ for some proper model of $C$ inside $\bb P^n_{\bb Z}$. For instance, we could always take $n=3$ in practice. The standard affine charts of $\bb P^n_{\bb Z}$ induce an affine cover of $\bar C$, and we check non-smoothness of $\bar C$ on each chart of the cover separately. Suppose that a chart of $\bar C$ is given by an ideal $I \triangleleft \bb Z[x_1, \dots, x_n]$, and that $I$ is generated by $f_1, \dots, f_r$. Then a Gr\"obner basis for the Jacobian ideal of $I$ will contain exactly one integer, and its prime factors are exactly those primes over which this affine patch fails to be smooth over $\bb Z$. \subsubsection{Primes where $D$ and $E$ may meet}\label{sec:primes_meeting} We reduce to the case where $D$ and $E$ are effective. Then we proceed as above, embedding $C$ in some projective space, and taking some model $\bar C$. On each affine chart, we take some proper models $\bar D$ and $\bar E$ of $D$ and $E$. If $\bar C$ is cut out by $I$, and $\bar D$ and $\bar E$ by ideals $I_D$ and $I_E$, then a Gr\"obner basis for $I + I_D + I_E$ has exactly one entry that is an integer (we denote it $n_{D,E}$), and again the prime factors of $n_{D,E}$ contain all the primes above which $\bar D$ and $\bar E$ meet. \begin{remark}\label{rem:precision} The final step in \ref{alg:NA} computes lengths of modules over $\bb Z$. In fact, it is much more efficient to work modulo a large power of the prime $p$. The techniques just described to identify a finite set of relevant primes can also be used to bound the required precision. If either of the divisors concerned is supported on the special fibre, then it suffices to work modulo $p^n$ where $n$ is the maximum of the multiplicities of the components. If both divisors $D$ and $E$ are horizontal, then the maximal power of the prime $p$ dividing the integer $n_{D,E}$ (defined just above) is an upper bound on the intersection number, and so provides a sufficient amount of $p$-adic precision. Note that resolving singularities by blowing up can only decrease the naive intersection multiplicity, and so this bound is also valid at bad places, as long as the regular model we use is obtained by blowing up $\bar C$. \end{remark} \begin{remark} The integer $n_{D,E}$ can become very large, even if the equations for $C$, $D$ and $E$ have small coefficients (moving $E$ by linear equivalence often makes the coefficients very much larger). As such, factoring it can become a bottleneck. In principle this factorisation should be avoidable; for example, one can treat the bad primes separately, then one has a global regular model over the remaining primes and the multiplicity can be computed there directly. Algorithms for computing heights on genus 1 and 2 curves without factorisation can be found in \cite{MS16a,MS16b}. \end{remark} \section{Examples}\label{sec:examples} We have implemented our algorithm in {\tt Magma}. Besides testing it against the code in {\tt Magma} (based on~\cite{FS97, Sto02, Mul14}) for some hyperelliptic Jacobians, we also tested it on a few Jacobians of smooth plane quartics, though the algorithm is by no means limited to genus 3.
At present we can only compute the regulator up to an integral square, because our algorithm only lets us {\em compute} the N\'eron-Tate height -- we cannot use it to {\em enumerate} points of bounded N\'eron-Tate height, which would be required for provably determining generators of $J(\Q)$ with the usual saturation techniques, see the introduction and \cite{Sik95, Sto02}. If $C$ is hyperelliptic of genus at most 3, then this is possible using the algorithms discussed in the introduction. For an Arakelov-theoretic approach to this problem see \cite{Hol14}. \subsection{A torsion example}\label{sec:tors} Let $C \colon X^3Y - X^2Y^2 - X^2Z^2 - XY^2Z + XZ^3 + Y^3Z = 0$ in $\mathbb{P}^2_{\Q}$ be the curve from \cite[Example~12.9.1]{BPS16}. Its Jacobian is of rank 0 and has 51 rational torsion points. Its bad primes are 29 and 163, but the models over $\Z_{29}$ and $\Z_{163}$ given by the same equation are already regular. Let $D = D_1 - D_2$ and $E = 3 \cdot E_1 - 3 \cdot E_2$, where $D_1 = (1:0:1)$, $D_2 = (1:1:0)$, $E_1 = (1:0:0)$ and $E_2 = (1:1:1)$. We choose this $E$ rather than $E_1-E_2$ because of the conditions imposed on $E$ in~\ref{sec:theta}. Then the computations for the intersections can be done on the affine patch of $C$ where $X \neq 0$. Consider the ring $$R = \Z[y,z] / (y - y^2 - z^2 - y^2z + z^3 + y^3z),$$ which is regular. The ideals $I_{D_1} = (y, z-1)$ and $I_{3 \cdot E_1} = (y^3, z^3)$ are coprime in $R$, and hence there will be no intersection between $D_1$ and $E_1$ at any of the non-archimedean places. In the same way, there is no non-archimedean intersection between $D_1$ and $E_2$, between $D_2$ and $E_1$, and between $D_2$ and $E_2$. Note that $\Phi(D)$ and $\Phi(E)$ can also be taken to be 0, as the special fibres of the regular models we computed are irreducible. For the computation of the archimedean contribution, we first need a canonical divisor which, for practical reasons, has to be supported away from infinity (i.e.\ away from $X = 0$). For this purpose, we pick $K = \operatorname{div}\bigl( (z-1)^2 / (y^2z^2) \, dz \bigr).$ Then we use Neurohr's algorithm~\cite{NeurohrPhD} to compute the small period matrix $\tau$, and $\alpha(D_1), \alpha(D_2)$, $\alpha(E_1), \alpha(E_2),$ and $\alpha(K)$, where $\alpha \colon C(\C) \to J(\C)$ is the embedding whose base point is chosen by Neurohr's algorithm, which turned out to be the point $(1 : -2 : -2.6615...)$ in this case. To find the appropriate divisor $W$ with $2 W = K$ out of the $2^6 = 64$ candidates, we try the 64 candidates for $\alpha(W)$ and check for which one the function $\theta(z, \tau)$ vanishes at a point $z \in \C^g$ satisfying $j(z) = \alpha(D_1) + \alpha(D_2) - \alpha(W)$ (which lies in $\Theta_{-\alpha(W)}$ for the correct choice). Then we finally compute the expression in \ref{P:gfformula}, and find that the archimedean contribution is approximately 0; more precisely, the result was approximately $2 \cdot 10^{-29}$ when computing with 30 decimal digits of precision. \subsection{An example in rank 1} Let $C$ be the smooth plane quartic curve over $\Q$ given by $$X^2 Y^2 - X Y^3 - X^3 Z - 2 X^2 Z^2 + Y^2 Z^2 - X Z^3 + Y Z^3 = 0.$$ This is the curve from \cite[Example~12.9.2]{BPS16}. Its Jacobian has rank 1 and trivial rational torsion subgroup. Its bad primes are 41 and 347, but the models over $\Z_{41}$ and $\Z_{347}$ given by the same equation are already regular. Let $D = D_1 - D_2$ and $E = 3 \cdot E_1 - 3 \cdot E_2$, where $D_1 = (1:0:-1)$, $D_2 = (1:1:-1)$, $E_1 = (1:1:0)$ and $E_2 = (1:4:-3)$.
The computations for the intersections can be done on the affine patch of $C$ where $X \neq 0$. Consider the ring $$R = \Z[y,z] / (y^2 - y^3 - z - 2z^2 - y^2z^2 - z^3 - yz^3).$$ The sum of the two ideals $I_{D_1} = (y, z+1)$ and $I_{E_2} = (y - 4, z + 3)$ inside $R$ is $(2,y,z+1)$. Hence, the only place where $D_1$ and $E_2$ could possibly intersect is the prime 2. At 2, the length of $\Z_{(2)}[y,z] / (2, y, z+1) \cong \F_2$ as $R_{(2)}$-module is 1, so $\iota(D_1, E_2) = \log(2)$. There is no intersection between $D_1$ and $E_1$, between $D_2$ and $E_1$, and between $D_2$ and $E_2$. Moreover, $\Phi(D)$ and $\Phi(E)$ can be taken to be 0 again. Hence, the intersection pairing $\langle D, E \rangle_{\mathfrak{p}}$ equals $-3\log(2)$ if $\mathfrak{p} = (2)$, and 0 otherwise. We computed the archimedean contribution in the same way as in the previous example, and we found it to be $-0.013563$. Hence, the N\'eron-Tate height pairing is $\hat{h}_{2\vartheta}([D], [E]) = 2.0930$. We performed an analogous computation for the points $F = (0:1:0) - D_2$, and $G = 3\cdot E_2 - 3 \cdot (0:1:-1)$, and found that $\hat{h}_{2\vartheta}([F], [G]) = -0.59966$. We computed this with 30 decimal digits of precision, and found numerically that $-414 \cdot \hat{h}_{2\vartheta}([D],[E]) = 1445 \cdot \hat{h}_{2\vartheta}([F], [G])$. We deduced that $g = [E] - [F]$ is a possible generator for the Mordell-Weil group, and the relation between the heights suggested the relations $[D] = 17 \cdot g$, $[E] = 255 \cdot g$, $[F] = -69 \cdot g$, and $[G] = 18 \cdot g$, which we confirmed in the Mordell-Weil group. If $g$ is indeed a generator of the Mordell-Weil group, then the regulator is $0.00048282$. \subsection{The split Cartan modular curve of level 13} Let $C$ denote the smooth plane quartic curve given by the equation \begin{equation}\label{E:cartan_model} (-Y-Z)X^3 +(2Y^2 +YZ)X^2 +(-Y^3 +Y^2Z -2YZ^2 +Z^3)X+(2Y^2 Z^2 -3YZ^3 )=0. \end{equation} By work of Baran~\cite{Bar14a, Bar14b}, this curve is isomorphic to the modular curve $X_{s}(13)$, which classifies elliptic curves whose mod-$13$ Galois representation is contained in the normaliser of a split Cartan subgroup of $\operatorname{GL}_2(\F_{13})$, as well as to its non-split counterpart $X_{ns}(13)$. Assuming the Generalised Riemann Hypothesis, Bruin-Poonen-Stoll~\cite[Example~12.9.3]{BPS16} prove that $J(\Q)$ has rank~3; an unconditional proof is given in~\cite{BDMTV}. By a result of Balakrishnan, Dogra, Tuitman, Vonk and the third-named author~\cite{BDMTV}, there are precisely 7 rational points on $C$. Using reduction modulo small primes, Bruin-Poonen-Stoll show that the points \[ P_0 \coloneqq (1:0:0),\, P_1 \coloneqq (0:1:0), \, P_2 \coloneqq (0:0:1), \, P_3 \coloneqq (-1:0:1) \in C(\Q) \] have the property that \[ [P_1-P_0], [P_2-P_0], [P_3 - P_0] \] on the Jacobian $J$ of $C$ generate a subgroup $G$ of $J(\Q)$ of rank 3, which contains all differences of rational points. Therefore the regulator of $J/\Q$ differs from the regulator of $G$ multiplicatively by an integral square. The height pairings that we obtain by using our code are: \begin{center} \begin{tabular}{|c|ccc|} \hline &$[P_1-P_0]$ &$[P_2-P_0]$ &$[P_3-P_0]$ \\ \hline $[P_1-P_0]$ &0.78401 &0.59540 &0.32516 \\ $[P_2-P_0]$ &0.59540 &0.98372 &0.37437 \\ $[P_3-P_0]$ &0.32516 &0.37437 &0.18861 \\ \hline \end{tabular} \end{center} Hence, the regulator is $9.6703 \cdot 10^{-3}$ up to an integral square factor.
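Indeed, the regulator of $G$ is simply the determinant of the Gram matrix above. The following minimal Python/{\tt numpy} check of the stated value is an illustration only (it is independent of our {\tt Magma} implementation, and it uses the rounded entries of the table, so only the displayed digits are meaningful):
\begin{verbatim}
import numpy as np

# Gram matrix of the Neron-Tate height pairing on the classes
# [P1-P0], [P2-P0], [P3-P0], with the rounded entries of the table above.
gram = np.array([[0.78401, 0.59540, 0.32516],
                 [0.59540, 0.98372, 0.37437],
                 [0.32516, 0.37437, 0.18861]])

# The regulator of G is det(gram); this prints approximately 9.67e-3.
print(np.linalg.det(gram))
\end{verbatim}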
The work of Gross-Zagier~\cite{GZ86} and Kolyvagin-Logachev~\cite{KL89} implies that the rank part of BSD holds in this example, that the Shafarevich-Tate group is finite, and that the full conjecture of Birch and Swinnerton-Dyer holds up to an integer. We give numerical evidence that it holds up to an integral square. This is the first non-hyperelliptic example where the BSD invariants (except the order of the Shafarevich-Tate group) have been computed; for hyperelliptic examples see \cite{FLSSSW, vB17}. In \cite[Example~12.9.3]{BPS16}, it is already shown that $J$ has no non-trivial rational torsion. It is verified easily that the model over $\Z$ given by the same equation as in \ref{E:cartan_model} is regular at all primes. Hence, all Tamagawa numbers equal 1. For the value of the $L$-function, we use that $J$ is isogenous to the abelian variety $A_f$ associated to a newform $f \in S_2(\Gamma_0(169))$ with Fourier coefficients in $\Q(\zeta_7)^+$. Hence we have $$L(J,s) = \prod_\sigma L(f^{\sigma}, s),$$ where $\sigma$ runs through $\Gal(\Q(\zeta_7)^+/\Q)$. Computing the factors on the right hand side using {\tt Magma}, we obtained $\lim_{s \rightarrow 1} L(J,s) \cdot (s-1)^{-3} \approx 0.76825$. For the real period, we used the code of Neurohr to compute a big period matrix $\Lambda$ for $J$. One can then apply the methods of the first-named author \cite[Algorithm~13]{vB17} to check that the differentials used for the computation of the big period matrix are 3 times a set of generators for the canonical sheaf. Hence, the real period is $\frac1{27}$ times the covolume of the lattice generated by the 6 columns of $\Lambda + \overline{\Lambda}$ inside $\R^3$. We computed the real period to be $79.444$ and checked that this value agrees with the real volume of $A_f$. Assuming our value for the regulator is correct, the BSD formula predicts that the size of the Shafarevich-Tate group is $\frac{0.76825}{9.6703 \cdot 10^{-3} \cdot 79.444} \approx 1.0000$, which is consistent with the result of \cite{PS99} proving that the size of the group is a square in this case, if it is finite. \subsection{An example with very bad reduction} In all the examples we tried so far, the na\"ive model over $\bb Z$ happened to be regular. We wanted to try a curve for which this is far from the case, but whose Jacobian still has positive rank. We searched for a curve with some rational points and very bad reduction at a small prime, and found the genus 3 curve $C$ over $\Q$ given by $$3x^3y + 5 xy^2z + 5y^4 - 1953125z^4 = 0,$$ with rational points $P_1 = (1:0:0)$ and $P_2 = (0:25:1)$. The bad primes are $3$, $5$, $17$, $358166959$ and $523687087967$. For the three largest of these primes, the na\"ive models are already regular. The special fibre of the regular model produced by {\tt Magma} over the prime $3$ has 4 irreducible components, with multiplicities $[1,1,2,2]$, and intersection matrix \begin{equation*} \left[ \begin{matrix} -6&0&2&1\\ 0&-2&0&1\\ 2&0&-2&1\\ 1&1&1&-2\\ \end{matrix} \right]. \end{equation*} That over the prime 5 has 9 components, with multiplicities $[1, 1, 1, 1, 1, 1, 2, 3, 3]$ and intersection matrix \begin{equation*} \left[ \begin{matrix} -1&0&1&0&0&0&0&0&0\\ 0&-4&0&0&0&1&0&0&1\\ 1&0&-2&0&0&1&0&0&0\\ 0&0&0&-3&1&0&1&0&0\\ 0&0&0&1&-2&1&0&0&0\\ 0&1&1&0&1&-3&0&0&0\\ 0&0&0&1&0&0&-2&0&1\\ 0&0&0&0&0&0&0&-1&1\\ 0&1&0&0&0&0&1&1&-2\\ \end{matrix} \right].
\end{equation*} We define a degree~$0$ divisor $D = P_1 - P_2$, and compute the height pairing of $D$ with itself, obtaining \begin{equation*} \hat{h}_{2\vartheta}(D,D) \approx 3.2107. \end{equation*} In particular, this shows that $D$ is not torsion on the Jacobian, hence the rank is at least 1 (probably, it equals 1) and the regulator is probably 3.2107, though of course there might exist a generator of smaller height. The computation took around 5 minutes, with $90\%$ of this time spent on the saturation step (\ref{lem:saturation}). Each saturation carried out took around 1.5 seconds, but the complexity of the reduction types meant that many such steps were necessary. \begin{bibdiv} \begin{biblist} \bib{BB12}{article}{ author={Balakrishnan, Jennifer S.}, author={Besser, Amnon}, title={Computing local $p$-adic height pairings on hyperelliptic curves}, journal={Int. Math. Res. Not. IMRN}, date={2012}, number={11}, pages={2405--2444}, } \bib{BDMTV}{misc}{ author={Balakrishnan, Jennifer S.}, author={Dogra, Netan}, author={M\"uller, J. Steffen}, author={Tuitman, Jan}, author={Vonk, Jan}, title={ Explicit Chabauty-Kim for the Split Cartan Modular Curve of Level 13}, note={Preprint, {\url{https://arxiv.org/abs/1711.05846}}} } \bib{Bar14a}{article}{ author={Baran, Burcu}, title={An exceptional isomorphism between modular curves of level 13}, journal={J. Number Theory}, volume={145}, date={2014}, pages={273--300}, } \bib{Bar14b}{article}{ author={Baran, Burcu}, title={An exceptional isomorphism between level 13 modular curves via Torelli's theorem}, journal={Math. Res. Lett.}, volume={21}, date={2014}, number={5}, pages={919--936}, } \bib{vB17}{misc}{ author={van Bommel, Raymond}, title={ Numerical verification of the Birch and Swinnerton-Dyer conjecture for hyperelliptic curves of higher genus over $\mathbb{Q}$ up to squares}, note={Preprint, {\url{https://arxiv.org/abs/1711.10409}}}, } \bib{BPS16}{article}{ author={Bruin, Nils}, author={Poonen, Bjorn}, author={Stoll, Michael}, title={Generalized explicit descent and its application to curves of genus 3}, journal={Forum Math. Sigma}, volume={4}, date={2016}, pages={e6, 80}, } \bib{CG89}{article}{ author={Coleman, Robert F.}, author={Gross, Benedict H.}, title={$p$-adic heights on curves}, conference={ title={Algebraic number theory}, }, book={ series={Adv. Stud. Pure Math.}, volume={17}, publisher={Academic Press, Boston, MA}, }, date={1989}, pages={73--81}, } \bib{Dok18}{article}{ author = {{Dokchitser}, Tim}, title = {Models of curves over DVRs}, journal = {ArXiv e-prints}, eprint = {1807.00025}, year = {2018} } \bib{Fal84}{article}{ author={Faltings, Gerd}, title={Calculus on arithmetic surfaces}, journal={Ann. of Math. (2)}, volume={119}, date={1984}, number={2}, pages={387--424}, } \bib{FLSSSW}{article}{ author={Flynn, E. Victor}, author={Lepr\'evost, Franck}, author={Schaefer, Edward F.}, author={Stein, William A.}, author={Stoll, Michael}, author={Wetherell, Joseph L.}, title={Empirical evidence for the Birch and Swinnerton-Dyer conjectures for modular Jacobians of genus 2 curves}, journal={Math. Comp.}, volume={70}, date={2001}, number={236}, pages={1675--1697}, } \bib{FS97}{article}{ author={Flynn, E. V.}, author={Smart, N. 
P.}, title={Canonical heights on the Jacobians of curves of genus $2$ and the infinite descent}, journal={Acta Arith.}, volume={79}, date={1997}, number={4}, pages={333--352}, } \bib{Gro84}{article}{ author={Gross, Benedict H.}, title={Local heights on curves}, conference={ title={Arithmetic geometry}, address={Storrs, Conn.}, date={1984}, }, book={ publisher={Springer, New York}, }, date={1986}, pages={327--339}, } \bib{GZ86}{article}{ author={Gross, Benedict H.}, author={Zagier, Don B.}, title={Heegner points and derivatives of $L$-series}, journal={Invent. Math.}, volume={84}, date={1986}, number={2}, pages={225--320}, } \bib{Hes02}{article}{ author={Hess, F.}, title={Computing Riemann-Roch spaces in algebraic function fields and related topics}, journal={J. Symbolic Comput.}, volume={33}, date={2002}, number={4}, pages={425--445}, } \bib{HS00}{book}{ author={Hindry, Marc}, author={Silverman, Joseph H.}, title={Diophantine geometry}, series={Graduate Texts in Mathematics}, volume={201}, note={An introduction}, publisher={Springer-Verlag, New York}, date={2000}, pages={xiv+558}, isbn={0-387-98975-7}, isbn={0-387-98981-1}, review={\MR{1745599}}, doi={10.1007/978-1-4612-1210-2}, } \bib{MN19}{article}{ author={Molin, Pascal}, author={Neurohr, Christian}, title={Computing period matrices and the Abel-Jacobi map of superelliptic curves}, journal={Math. Comp.}, date={2019}, volume={89}, pages={847--888}, } \bib{Hol12}{article}{ author={Holmes, David}, title={Computing N\'eron-Tate heights of points on hyperelliptic Jacobians}, journal={J. Number Theory}, volume={132}, date={2012}, number={6}, pages={1295--1305}, } \bib{Hol14}{article}{ author={Holmes, David}, title={An Arakelov-theoretic approach to na\"\i ve heights on hyperelliptic Jacobians}, journal={New York J. Math.}, volume={20}, date={2014}, pages={927--957}, } \bib{Hri85}{article}{ author={Hriljac, Paul}, title={Heights and Arakelov's intersection theory}, journal={Amer. J. Math.}, volume={107}, date={1985}, number={1}, pages={23--38}, } \bib{Joh17}{article}{ author = {Johansson, F.}, title = {Arb: efficient arbitrary-precision midpoint-radius interval arithmetic}, journal = {IEEE Transactions on Computers}, year = {2017}, volume = {66}, number = {8}, pages = {1281--1292}, doi = {10.1109/TC.2017.2690633}, } \bib{KL89}{article}{ author={Kolyvagin, V. A.}, author={Logach\"ev, D. Yu.}, title={Finiteness of the Shafarevich-Tate group and the group of rational points for some modular abelian varieties}, language={Russian}, journal={Algebra i Analiz}, volume={1}, date={1989}, number={5}, pages={171--196}, translation={ journal={Leningrad Math. J.}, volume={1}, date={1990}, number={5}, pages={1229--1253}, }, } \bib{Lan88}{book}{ author={Lang, Serge}, title={Introduction to Arakelov theory}, publisher={Springer-Verlag, New York}, date={1988}, pages={x+187}, isbn={0-387-96793-1}, } \bib{Lan83}{book}{ author={Lang, Serge}, title={Fundamentals of Diophantine geometry}, publisher={Springer-Verlag, New York}, date={1983}, pages={xviii+370}, isbn={0-387-90837-4}, } \bib{Liu02}{book}{ author={Liu, Qing}, title={Algebraic geometry and arithmetic curves}, series={Oxford Graduate Texts in Mathematics}, volume={6}, note={Translated from the French by Reinie Ern\'e; Oxford Science Publications}, publisher={Oxford University Press, Oxford}, date={2002}, pages={xvi+576}, isbn={0-19-850284-2}, } \bib{Mul14}{article}{ author={M\"uller, J. Steffen}, title={Computing canonical heights using arithmetic intersection theory}, journal={Math. 
Comp.}, volume={83}, date={2014}, number={285}, pages={311--336}, } \bib{MS16a}{article}{ author={M\"uller, J. Steffen}, author={Stoll, Michael}, title={Computing canonical heights on elliptic curves in quasi-linear time}, journal={LMS J. Comput. Math.}, volume={19}, date={2016}, number={suppl. A}, pages={391--405}, } \bib{MS16b}{article}{ author={M\"uller, J. Steffen}, author={Stoll, Michael}, title={Canonical heights on genus-2 Jacobians}, journal={Algebra Number Theory}, volume={10}, date={2016}, number={10}, pages={2153--2234}, } \bib{Tata1}{article}{ Author = {Mumford, David}, Publisher = {Birkh{\"a}user}, Title = {{Tata lectures on theta I}}, date = {1983} } \bib{Ner65}{article}{ author={N\'{e}ron, A.}, title={Quasi-fonctions et hauteurs sur les vari\'{e}t\'{e}s ab\'{e}liennes}, language={French}, journal={Ann. of Math. (2)}, volume={82}, date={1965}, pages={249--331}, issn={0003-486X}, review={\MR{0179173}}, doi={10.2307/1970644}, } \bib{NeurohrPhD}{thesis}{ author={Neurohr, Christian}, title={Efficient integration on Riemann surfaces \& applications}, date={2018}, organization={Carl von Ossietzky Universit\"at Oldenburg}, type={PhD thesis}, note={\url{http://oops.uni-oldenburg.de/3607/1/neueff18.pdf}}, } \bib{PS97}{article}{ author={Poonen, Bjorn}, author={Schaefer, Edward F.}, title={Explicit descent for Jacobians of cyclic covers of the projective line}, journal={J. Reine Angew. Math.}, volume={488}, date={1997}, pages={141--188}, } \bib{PS99}{article}{ author={Poonen, Bjorn}, author={Stoll, Michael}, title={The Cassels-Tate pairing on polarized abelian varieties}, journal={Ann. of Math. (2)}, volume={150}, date={1999}, number={3}, pages={1109--1149}, } \bib{Sik95}{article}{ author={Siksek, Samir}, title={Infinite descent on elliptic curves}, journal={Rocky Mountain J. Math.}, volume={25}, date={1995}, number={4}, pages={1501--1538}, } \bib{Sil88}{article}{ author={Silverman, Joseph H.}, title={Computing heights on elliptic curves}, journal={Math. Comp.}, volume={51}, date={1988}, number={183}, pages={339--358}, } \bib{Sto02}{article}{ author={Stoll, Michael}, title={On the height constant for curves of genus two. II}, journal={Acta Arith.}, volume={104}, date={2002}, number={2}, pages={165--182}, } \bib{Sto17}{article}{ author={Stoll, Michael}, title={An explicit theory of heights for hyperelliptic Jacobians of genus three}, conference={ title={Algorithmic and experimental methods in algebra, geometry, and number theory}, }, book={ publisher={Springer, Cham}, }, date={2017}, pages={665--715}, } \end{biblist} \end{bibdiv} \end{document}
\begin{document} \begin{abstract} We give a proof of the parabolic/singular Koszul duality for the category O of affine Kac-Moody algebras. The main new tool is a relation between moment graphs and finite codimensional affine Schubert varieties. We apply this duality to $q$-Schur algebras and to cyclotomic rational double affine Hecke algebras. This yields a proof of a conjecture of Chuang-Miyachi relating the level-rank duality with the Ringel-Koszul duality of cyclotomic rational double affine Hecke algebras. \end{abstract} \thanks{This research was partially supported by the ANR grant number ANR-10-BLAN-0110} \maketitle \setcounter{tocdepth}{2} \tableofcontents \section{Introduction} The purpose of this paper is to give a proof of the parabolic/singular Koszul duality for the category O of affine Kac-Moody algebras. The main motivation for this is the conjecture in \cite{VV} (proved in \cite{RSVV}) relating the parabolic affine category O and the category O of cyclotomic rational double affine Hecke algebras (CRDAHA for short). Using the present work, we deduce from this conjecture the main conjecture of \cite{CM} which claims that the category O of CRDAHA's is Koszul and that the Koszul equivalence is related to the level-rank duality on the Fock space. There are several possible approaches to Koszul duality for affine Kac-Moody algebras. In \cite{BY}, a geometric analogue of the composition of the Koszul and the Ringel duality is given, which involves Whittaker sheaves on the affine flag variety. Our principal motivation comes from representation theory of CRDAHA's. For this, we need to prove a Koszul duality for the category O itself rather than for its geometric analogues. One difficulty of the Kac-Moody case comes from the fact that, at a positive level, the category O has no tilting modules, while at a negative level it has no projective modules. One way to overcome this is to use a different category of modules than the usual category O, as the Whittaker category in loc.~cit. or a category of linear complexes as in \cite{MOS}. To remedy this, we use a truncated version of the (affine parabolic) category O. Under truncation any singular block of an affine parabolic category O at a non-critical level yields a finite highest-weight category which contains both tilting and projective objects. We prove that these highest weight categories are Koszul and are Koszul dual to each other. Note that the affine category O is related to two different types of geometry. In negative level it is related to the affine flag ind-scheme and to finite dimensional affine Schubert varieties. In positive level it is related to Kashiwara's affine flag manifold and to finite codimensional affine Schubert varieties. In negative level, a localization theorem (from the category O to perverse sheaves on the affine flag ind-scheme) has been worked out by Beilinson-Drinfeld and Frenkel-Gaitsgory in \cite{BD}, \cite{FG2}. A difficulty in the proof of the main theorem comes from the absence of a localization theorem (from the category O to perverse sheaves on Kashiwara's affine flag manifold) at the positive level. To overcome this we use standard Koszul duality and Ringel duality to relate the positive and negative level. Our general argument is similar to the one in \cite{B}, \cite{BGS}. For this, we need an affine analogue of the Soergel functor on the category O. We use the functor introduced by Fiebig in \cite{F2}. 
To define it, we must introduce the deformed category O, which is a highest weight category over a localization of a polynomial ring, and some category of sheaves over a moment graph. By the work of Fiebig, sheaves over moment graphs give an algebraic analogue of equivariant perverse sheaves associated with finite dimensional affine Schubert varieties. An important new tool in our work is a relation between sheaves over some moment graph and equivariant perverse sheaves associated with finite codimensional affine Schubert varieties, see Appendix \ref{app:B}. This relation is of independent interest. Let us now explain the structure of the paper. Section 2 contains generalities on highest weight categories and standard Koszul duality. In Section \ref{sec:3} we introduce the affine parabolic category O and its truncated version. Section \ref{sec:momentgraph} contains some generalities on moment graphs and the relation with the deformed affine category O. Section \ref{sec:5} is technical and contains the proof of the main theorem. Next, we apply the Koszul duality to CRDAHA's and $q$-Schur algebras in Section \ref{app:A}. The Kazhdan-Lusztig equivalence \cite{KL} implies that the module category of the $q$-Schur algebra is equivalent to a highest weight subcategory of the affine category O of $GL_n$ at a negative level. Thus, our result implies that the $q$-Schur algebra is Morita equivalent to a Koszul algebra (and also to a standard Koszul algebra), see Section \ref{sec:schurkoszul}\footnote{After our paper was written, we received a copy of \cite{CM} where a similar result is obtained by different methods.}. To our knowledge, this was not proved so far. There are different possible approaches to proving that the $q$-Schur algebra is Koszul. Some are completely algebraic, see e.g., \cite{PS}. Some use analogues of the Bezrukavnikov-Mirkovi\'c modular localization theorem, see e.g., \cite{Rc}. Our approach has the advantage that it yields an explicit description of the Koszul dual of the $q$-Schur algebra. Finally, we apply the Koszul duality of the category O to CRDAHA's. More precisely, in \cite{VV} some higher analogue of the $q$-Schur algebra has been introduced. It is a highest-weight subcategory of the affine parabolic category O. Since this category is standard Koszul, these higher $q$-Schur algebras are also Koszul and they are Koszul dual to each other. Next, it was conjectured in loc.~cit. (and proved in \cite{RSVV}) that these higher $q$-Schur algebras are equivalent to the category O of the CRDAHA. Thus, our result also implies that the CRDAHA's are Koszul. Using this, we prove the level-rank conjecture of \cite{CM} for CRDAHA's. \section{Preliminaries on Koszul rings and highest weight categories} \subsection{Categories} For an object $M$ of a category $\mathbf{C}$ let ${\bf 1}_M$ be the identity endomorphism of $M$. Let $\mathbf{C}^{\operatorname{op}\nolimits}$ be the category opposite to $\mathbf{C}$. A functor of additive categories is always assumed to be additive.
If $\mathbf{C}$ is an exact category then $\mathbf{C}^{\operatorname{op}\nolimits}$ is equipped with the exact structure such that $0\to M'\to M\to M''\to 0$ is exact in $\mathbf{C}^{\operatorname{op}\nolimits}$ if and only if $0\to M''\to M\to M'\to 0$ is exact in $\mathbf{C}$. An {\it exact functor} of exact categories is a functor which takes short exact sequences to short exact sequences. Unless specified otherwise, a functor will always be a covariant functor. A contravariant functor $F:\mathbf{C}\to\mathbf{C}'$ is exact if and only if the functor $F:\mathbf{C}^{\operatorname{op}\nolimits}\to\mathbf{C}'$ is exact. Given an abelian category $\mathbf{C}$, let $\operatorname{Irr}\nolimits(\mathbf{C})$ be the set of isomorphism classes of simple objects. Let $\operatorname{Proj}\nolimits(\mathbf{C})$ and $\operatorname{Inj}\nolimits(\mathbf{C})$ be the sets of isomorphism classes of indecomposable projective, injective objects respectively. For an object $M$ of $\mathbf{C}$ we abbreviate $\operatorname{Ext}\nolimits_{\mathbf{C}}(M)=\operatorname{Ext}\nolimits_{\mathbf{C}}(M,M),$ where $\operatorname{Ext}\nolimits_\mathbf{C}$ stands for the direct sum of all $\operatorname{Ext}\nolimits^i_\mathbf{C}$'s. Let $R$ be a commutative, noetherian, integral domain. An {\it $R$-category} is an additive category enriched over the tensor category of $R$-modules. Unless mentioned otherwise, a functor of $R$-categories is always assumed to be $R$-linear. A {\it hom-finite $R$-category} is an $R$-category whose Hom spaces are finitely generated over $R$. An additive category is {\it Krull-Schmidt} if every object has a decomposition whose summands are indecomposable with local endomorphism rings. A full additive subcategory of a Krull-Schmidt category is again Krull-Schmidt if every idempotent splits (i.e., if it is closed under direct summands). A hom-finite $k$-category over a field $k$ is Krull-Schmidt if every idempotent splits (e.g., if it is abelian). A {\it finite abelian $k$-category} is a $k$-category which is equivalent to the category of finite dimensional modules over a finite dimensional $k$-algebra. For any abelian category $\mathbf{C}$, let $\mathbf{D}^b(\mathbf{C})$ be the corresponding bounded derived category. \subsection{Graded rings}\label{sec:gradedrings} For a ring $A$ let $A^{\operatorname{op}\nolimits}$ be the opposite ring. Let $A\mathbf{M}od$ be the category of left $A$-modules and let $A\operatorname{\!-\mathbf{mod}}\nolimits$ be the subcategory of the finitely generated ones. We abbreviate $\operatorname{Irr}\nolimits(A)=\operatorname{Irr}\nolimits(A\operatorname{\!-\mathbf{mod}}\nolimits)$, $\operatorname{Proj}\nolimits(A)=\operatorname{Proj}\nolimits(A\operatorname{\!-\mathbf{mod}}\nolimits)$ and $\operatorname{Inj}\nolimits(A)=\operatorname{Inj}\nolimits(A\operatorname{\!-\mathbf{mod}}\nolimits).$ By a graded ring $\bar A$ we will always mean a $\mathbb{Z}$-graded ring. Let $\bar A\mathbf{g}mod$ be the category of finitely generated graded $\bar A$-modules.
We abbreviate $\mathbf{D}^b(\bar A)=\mathbf{D}^b(\bar A\mathbf{g}mod),$ $\operatorname{Irr}\nolimits(\bar A)=\operatorname{Irr}\nolimits(\bar A\mathbf{g}mod),$ $\operatorname{Inj}\nolimits(\bar A)=\operatorname{Inj}\nolimits(\bar A\mathbf{g}mod)$ and $\operatorname{Proj}\nolimits(\bar A)=\operatorname{Proj}\nolimits(\bar A\mathbf{g}mod).$ Given a graded $\bar A$-module $M$ and an integer $j$, let $M\langle j\rangle$ be the graded $\bar A$-module obtained from $M$ by shifting the grading by $j$, i.e., such that $(M\langle j\rangle)^i=M^{i-j}$. Given $M,N\in\bar A\mathbf{g}mod$ let $\operatorname{hom}\nolimits_{\bar A}(M,N)$ and $\operatorname{ext}\nolimits_{\bar A}(M,N)$ be the morphisms and extensions in the category of graded modules. We say that the ring $\bar A$ is {\it positively graded} if $\bar A^{<0}=0$ and if $\bar A^{0}$ is a semisimple ring. A finite dimensional graded algebra $\bar A$ over a field $k$ is positively graded if $\bar A^{<0}=0$ and $\bar A^0$ is semisimple as an $\bar A$-module. Here $\bar A^0$ is identified with $\bar A/\bar A^{>0}$. A graded module $M$ is called \emph{pure of weight $i$} if it is concentrated in degree $-i$, i.e., $M = M^{-i}$. Suppose $\bar A$ is a positively graded ring. Then any simple graded module is pure, and any pure graded module is semisimple. Assume that $k$ is a field and that $\bar A$ is a positively graded $k$-algebra. We say that $\bar A$ is {\it basic} if $\bar A^0$ is isomorphic to a finite product of copies of $k$ as a $k$-algebra. Assume that $\bar A$ is finite dimensional. Let $\{1_x\,;\,x\in\operatorname{Irr}\nolimits(\bar A^0)\}$ be a complete system of primitive orthogonal idempotents of $\bar A^0$. The {\it Hilbert polynomial} of $\bar A$ is the matrix $P(\bar A,t)$ with entries in $\mathbb{N}[t]$ given by $P(\bar A,t)_{x,x'}=\sum_it^i\dim\bigl(1_x\bar A^i1_{x'}\bigr)$ for each $x,x'$. Assume further that $\bar A$ is positively graded. We have canonical bijections $\operatorname{Irr}\nolimits(A)=\operatorname{Irr}\nolimits(\bar A^0)=\{\bar A^01_x\}$ such that $x$ is the isomorphism class of $\bar A^01_x$. Since $A\operatorname{\!-\mathbf{mod}}\nolimits$ is Krull-Schmidt, there is a canonical bijection $\operatorname{top}\nolimits:\operatorname{Proj}\nolimits(A)\to\operatorname{Irr}\nolimits(A)$, $P\mapsto\operatorname{top}\nolimits(P)$. We have $\operatorname{top}\nolimits(A1_x)=\bar A^01_x=x.$ The set $\{1_x\,;\,x\in\operatorname{Irr}\nolimits(\bar A^0)\}$ is a complete system of primitive orthogonal idempotents of $A$. Given a graded commutative, noetherian, integral domain $R$, a {\it graded $R$-category} is an additive category enriched over the monoidal category of graded $R$-modules. \subsection{Koszul duality} \label{sec:koszul} Let $k$ be a field and $\bar A$ be a graded $k$-algebra. Let $A$ be the (non-graded) $k$-algebra underlying $\bar A$. The {\it Koszul dual} of $\bar A$ is the graded $k$-algebra $E(\bar A)=\operatorname{Ext}\nolimits_{A}(\bar A^0).$ Forgetting the grading of $E(\bar A)$, we get a $k$-algebra $E(A)$. It is finite dimensional if $A$ is finite dimensional and has finite global dimension.
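For illustration, here is the most basic example (standard, and not needed later): take $\bar A=k[x]/(x^2)$ with $x$ in degree $1$, so that $\bar A^0=k$. Then \begin{equation*} E(\bar A)=\operatorname{Ext}\nolimits_{A}(k)\simeq k[y], \qquad \deg(y)=1, \end{equation*} a polynomial ring in one variable, with $y$ corresponding to the class of the extension $0\to k\to \bar A\to k\to 0$. In particular, $A$ is finite dimensional of infinite global dimension, and $E(A)$ is indeed infinite dimensional.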
A \emph{linear projective resolution} of a graded $\bar A$-module $M$ is a graded projective resolution $\dots\to P_1\to P_0\to M\to 0$ such that, for each $i$, the projective graded $\bar A$-module $P_i$ is finitely generated by a set of homogeneous elements of degree $i$. Assume that $\bar A$ is positively graded (in the sense of Section \ref{sec:gradedrings}). We say that $\bar A$ is \emph{Koszul} if each simple graded module which is pure of weight 0 has a linear projective resolution. If $\bar A$ is Koszul, then we say that $A$ {\it has a Koszul grading}. If $\bar A$ is finite dimensional then this grading is unique up to isomorphism of graded $k$-algebras, see \cite[cor.~2.5.2]{BGS}. Assume that $\bar A$ is finite dimensional, has finite global dimension and is Koszul. Then $E(\bar A)$ is Koszul and $E^2(\bar A)=\bar A$ canonically, see \cite[thm.~1.2.5]{BGS}. Put $\bar A^!=E(\bar A)^{\operatorname{op}\nolimits}$ and $A^!=E(A)^{\operatorname{op}\nolimits}$. Note that $\bar A^!$ is also Koszul by \cite[prop.~2.2.1]{BGS}. For each $x\in\operatorname{Irr}\nolimits(A)=\operatorname{Irr}\nolimits(\bar A^0)$ the idempotent $1_x\in\bar A^0$ yields an idempotent $1_x^!=E(1_x)\in E(A)$ in the obvious way. The set $\{1_x^!\,;\,x\in\operatorname{Irr}\nolimits(A)\}$ is a complete system of primitive orthogonal idempotents of $E(A)$. By \cite[thm.~2.12.5, 2.12.6]{BGS}, there is an equivalence of triangulated categories $E:\mathbf{D}^b(\bar A)\to \mathbf{D}^b(\bar A^!)$ such that $E(M\langle i\rangle)=E(M)[-i]\langle-i\rangle$ and $E(\bar A^01_x)=\bar A^! 1_x^!$ for each $x\in\operatorname{Irr}\nolimits(A)$. We call it the \emph{Koszul equivalence}. By forgetting the grading, we get a bijection $\operatorname{Irr}\nolimits(A)\to\operatorname{Proj}\nolimits(A^!)$, $x\mapsto A^!1_x^!$. It induces a bijection $(\bullet)^!=\operatorname{top}\nolimits\circ E:\operatorname{Irr}\nolimits(A)\to\operatorname{Irr}\nolimits(A^!)$, which will be called the {\it natural bijection} between $\operatorname{Irr}\nolimits(A)$ and $\operatorname{Irr}\nolimits(A^!).$ Let $\mathbf{C}$ be a finite abelian $k$-category. We say that {\it $\mathbf{C}$ has a Koszul grading} if there is a projective generator $P$ such that the ring $A=\operatorname{End}\nolimits_\mathbf{C}(P)^{\operatorname{op}\nolimits}$ has a Koszul grading $\bar A$. We may also simply say that \textit{$\mathbf{C}$ is Koszul} and we abbreviate $\mathbf{C}^!=A^!\operatorname{\!-\mathbf{mod}}\nolimits.$ The following lemmas are well-known. \begin{lemma} \label{lem:1.1} Let $P$, $A$, $\mathbf{C}$ be as above. If $\bar A$ is positively graded then $E(\bar A)=\operatorname{Ext}\nolimits_{\mathbf{C}}(L)$, where $L=\operatorname{top}\nolimits(P)$. \end{lemma} \begin{proof} The equivalence $\mathbf{C}\to A\operatorname{\!-\mathbf{mod}}\nolimits$, $X\mapsto\operatorname{Hom}\nolimits_\mathbf{C}(P,X)$ takes $L$ to $\bar A/\bar A^{>0}$. Therefore $E(\bar A)= \operatorname{Ext}\nolimits_{\mathbf{C}}(L).$ \end{proof} \begin{lemma} \label{lem:1.2} Let $\mathbf{C}$ be a finite abelian $k$-category with a Koszul grading. If $\mathbf{D}$ is a Serre subcategory and the inclusion $\mathbf{D}\subset\mathbf{C}$ induces injections on extensions, then $\mathbf{D}$ also has a Koszul grading.
\end{lemma} \begin{proof} Let $P_L$ be the projective cover of $L\in\operatorname{Irr}\nolimits(\mathbf{C})$. The set $\operatorname{Irr}\nolimits(\mathbf{C})$ is finite and $P=\bigoplus_LP_L$ is a minimal projective generator. Set $A=\operatorname{End}\nolimits_\mathbf{C}(P)^{\operatorname{op}\nolimits}$. Let $A_I$ be the quotient of $A$ by the two-sided ideal $I$ generated by $\{{\bf 1}_{P_L}\,;\,L\notin \operatorname{Irr}\nolimits(\mathbf{D})\}.$ The pull-back by the ring homomorphism $A\to A_I$ identifies $A_I\operatorname{\!-\mathbf{mod}}\nolimits$ with the full subcategory of $A\operatorname{\!-\mathbf{mod}}\nolimits$ consisting of the modules killed by $I$. An object $M\in\mathbf{C}$ belongs to $\mathbf{D}$ if and only if $\operatorname{Hom}\nolimits_\mathbf{C}(P_L,M)=0$ whenever $L\notin\operatorname{Irr}\nolimits(\mathbf{D})$. Therefore, the equivalence $\operatorname{Hom}\nolimits_\mathbf{C}(P,\bullet):\mathbf{C}\to A\operatorname{\!-\mathbf{mod}}\nolimits$ identifies $\mathbf{D}$ with the full subcategory of $A\operatorname{\!-\mathbf{mod}}\nolimits$ consisting of the modules killed by $I$. We deduce that $\mathbf{D}$ is equivalent to $A_I\operatorname{\!-\mathbf{mod}}\nolimits$. Now, let $\bar A$ be the Koszul grading on $A$. Since $\bar A$ is positively graded, the idempotent ${\bf 1}_{P_L}$ has degree 0. Thus $I$ is a homogeneous ideal of $\bar A$. Hence $\bar A$ yields a grading $\bar A_I$ on $A_I$ such that $\bar A_I^{<0}=0$ and $\bar A_I^0$ is semisimple as an $\bar A_I$-module. Thus $\bar A_I$ is also positively graded. Fix a graded lift $\bar L$ of $L\in\operatorname{Irr}\nolimits(A)$, see Section \ref{sec:standardkoszul}. The graded $\bar A$-module $\bar L$ is pure. Let $d_L$ be its degree. By \cite[prop.~2.1.3]{BGS} we have $\operatorname{ext}\nolimits_{\bar A}^i(\bar L,\bar L')=0$ unless $i=d_{L'}-d_L$ for $L,L'\in \operatorname{Irr}\nolimits(\mathbf{C})$. We must check that $\operatorname{ext}\nolimits_{\bar A_I}^i(\bar L,\bar L')=0$ unless $i=d_{L'}-d_L$ if $L,L'\in \operatorname{Irr}\nolimits(\mathbf{D})$. This is obvious because the inclusion $\bar A_I\mathbf{g}mod\subset\bar A\mathbf{g}mod$ induces injections on extensions. \end{proof} \iffalse \begin{rk} Let us sketch the construction of the equivalence $E:\mathbf{D}^b(\bar R\mathbf{g}mod)\to \mathbf{D}^b(E(\bar R)\mathbf{g}mod),$ following \cite[sec.~2]{BGS}. See also \cite[sec.~3]{RH}. We can view $\bar R$ as a dg-algebra with 0 differential concentrated in degree 0 of the complex. Fix a linear projective resolution $K_\bullet$ of $\bar R^0$ as a graded $\bar R$-module. It can be viewed as a dg-module over $\bar R$, which is K-projective in the sense of Spaltenstein. The endomorphism ring $\operatorname{End}\nolimits_{R}(K_\bullet)$ has a natural structure of dg-algebra such that $K_\bullet$ is a dg-module.
Thus, we have the functor $M\to \operatorname{RHom}\nolimits_{R}(K_\bullet,M)$ from the bounded derived category of dg-modules over $\bar R$ to the bounded derived category of dg-modules over $\operatorname{End}\nolimits_{R}(K_\bullet).$ Now, taking into account the gradings on $\bar R$ and $K_\bullet$, this functor yields a functor $E'$ from the bounded derived category of finitely generated, graded dg-modules over $\bar R$ to the bounded derived category of finitely generated, graded dg-modules over $\operatorname{End}\nolimits_{R}(K_\bullet).$ We view $E(\bar R)$ as a dg-algebra with 0 differential such that $\operatorname{Ext}\nolimits^i_R(\bar R^0)$ is at the $i$-th place in the complex. Since $\bar R$ is Koszul, the dg-algebras $\operatorname{End}\nolimits_{R}(K_\bullet)$ and $E(\bar R)$ are quasi-isomorphic. Thus their bounded derived categories are equivalent. The functor $E$ is induced by the functor $E'$. \end{rk} \fi \subsection{Highest weight categories} \label{sec:5} Let $R$ be a commutative, noetherian ring with 1 which is a local ring with residue field $k$. Let $\mathbf{C}$ be an $R$-category which is equivalent to the category of finitely generated modules over a finite projective $R$-algebra $A$. The category $\mathbf{C}$ is a \textit{highest weight $R$-category} if it is equipped with a poset of isomorphism classes of objects $(\Delta(\mathbf{C}),\leqslant)$ called the standard objects satisfying the following conditions: \begin{itemize} \item the objects of $\Delta(\mathbf{C})$ are projective over $R$ \item given $M\in\mathbf{C}$ such that $\operatorname{Hom}\nolimits_{\mathbf{C}}(D,M)=0$ for all $D\in\Delta(\mathbf{C})$, we have $M=0$ \item given $D\in\Delta(\mathbf{C})$, there is a projective object $P\in\mathbf{C}$ and a surjection $f:P\twoheadrightarrow D$ such that $\ker f$ has a (finite) filtration whose successive quotients are objects $D'\in\Delta$ with $D'>D$. \item given $D\in\Delta$, we have $\operatorname{End}\nolimits_{\mathbf{C}}(D)=R$ \item given $D_1,D_2\in\Delta$ with $\operatorname{Hom}\nolimits_{\mathbf{C}}(D_1,D_2)\not=0$, we have $D_1\le D_2$. \end{itemize} See \cite[def.~4.11]{Ro}. Note that since $R$ is local, any finitely generated projective $R$-module is free, hence the set $\tilde\Delta$ in loc.~cit.~is the set of finite direct sums of objects in $\Delta$. The partial order $\leqslant$ is called the \emph{highest weight order} on $\mathbf{C}$. We write $\Delta(\mathbf{C})=\{\Delta(\lambda)\}_{\lambda\in\Lambda}$ for $\Lambda$ an indexing poset. \begin{lemma}\label{lem:hwtbasic} Let $\mathbf{C}$ be a highest weight $R$-category. Given $\lambda\in\Lambda$, there is a unique (up to isomorphism) indecomposable projective (resp. injective, tilting, costandard) object associated with $\lambda$, denoted by $P(\lambda)$ (resp.
$I(\lambda)$, $T(\lambda)$, $\nabla(\lambda)$) such that \begin{itemize} \item[{\small($\nabla$)}] $\operatorname{Hom}\nolimits_{\mathbf{C}}(\Delta(\mu),\nabla(\lambda))\simeq \delta_{\lambda\mu}R$ and $\operatorname{Ext}\nolimits^1_{\mathbf{C}}(\Delta(\mu),\nabla(\lambda))=0$ for all $\mu\in\Lambda$, \item[{\small($P$)}] there is a surjection $f:P(\lambda)\twoheadrightarrow\Delta(\lambda)$ such that $\ker f$ has a filtration whose successive quotients are $\Delta(\mu)$'s with $\mu>\lambda$, \item[{\small($I$)}] there is an injection $f:\nabla(\lambda)\hookrightarrow I(\lambda)$ such that $\mathrm{coker} f$ has a filtration whose successive quotients are $\nabla(\mu)$'s with $\mu>\lambda$, \item[{\small($T$)}] there is an injection $f:\Delta(\lambda)\hookrightarrow T(\lambda)$ and a surjection $g:T(\lambda)\twoheadrightarrow \nabla(\lambda)$ such that $\mathrm{coker} f$ (resp. $\ker g$) has a filtration whose successive quotients are $\Delta(\mu)$'s (resp. $\nabla(\mu)$'s) with $\mu<\lambda$. \end{itemize} \end{lemma} See e.g. \cite[prop.~2.1]{RSVV}. The objects $\nabla(\lambda)$, $\Delta(\lambda)$, $P(\lambda)$, $I(\lambda)$ and $T(\lambda)$ are projective over $R$. We have $\operatorname{Proj}\nolimits(\mathbf{C})=\{P(\lambda)\}_{\lambda\in\Lambda}$, $\operatorname{Inj}\nolimits(\mathbf{C})=\{I(\lambda)\}_{\lambda\in\Lambda}$. The set $\operatorname{Tilt}\nolimits(\mathbf{C})=\{T(\lambda)\}_{\lambda\in\Lambda}$ is the set of isomorphism classes of indecomposable tilting objects in $\mathbf{C}$. Let $\nabla(\mathbf{C})=\{\nabla(\lambda)\}_{\lambda\in\Lambda}$. Note that $\Delta(\lambda)$ has a unique simple quotient $L(\lambda)$. The set of isomorphism classes of simple objects in $\mathbf{C}$ is given by $\operatorname{Irr}\nolimits(\mathbf{C})=\{L(\lambda)\}_{\lambda\in\Lambda}$. Let $\mathbf{C}^\Delta$, $\mathbf{C}^\nabla$ be the full subcategories of $\mathbf{C}$ consisting of the {\it $\Delta$-filtered} and {\it $\nabla$-filtered} objects, i.e., the objects having a finite filtration whose successive quotients are standard, costandard respectively. These categories are exact. Recall that a tilting object is by definition an object that is both $\Delta$-filtered and $\nabla$-filtered. The opposite of $\mathbf{C}$ is a highest weight $R$-category such that $\Delta(\mathbf{C}^{\operatorname{op}\nolimits})=\nabla(\mathbf{C})$ with the opposite highest weight order. Given a commutative local $R$-algebra $S$ and an $R$-module $M$, we write $SM=M\otimes_RS$. Let $S\mathbf{C}=SA\operatorname{\!-\mathbf{mod}}\nolimits.$ We have the following, see e.g. \cite[prop.~2.1, 2.4, 2.5]{RSVV}. \begin{prop} \label{prop:2.3} Let $\mathbf{C}$ be a highest weight $R$-category, and let $S$ be a commutative local $R$-algebra with 1.
For any $M,N\in\mathbf{C}$ the following holds: (a) if $S$ is $R$-flat then $S\operatorname{Ext}\nolimits^d_\mathbf{C}(M,N)=\operatorname{Ext}\nolimits^d_{S\mathbf{C}}(SM,SN)$ for all $d\geqslant 0,$ (b) if either $M\in\mathbf{C}$ is projective or ($M\in\mathbf{C}^\Delta$ and $N\in\mathbf{C}^\nabla$), then we have $S\operatorname{Hom}\nolimits_\mathbf{C}(M,N)=\operatorname{Hom}\nolimits_{S\mathbf{C}}(SM,SN)$, (c) if $M$ is $R$-projective then $M$ is projective in $\mathbf{C}$ (resp.~$M$ is tilting in $\mathbf{C}$, $M\in\mathbf{C}^\Delta$) if and only if $k M$ is projective in $k\mathbf{C}$ (resp.~$k M$ is tilting in $k\mathbf{C},$ $k M\in k\mathbf{C}^\Delta$), (d) if either ($M$ is projective in $\mathbf{C}$ and $N$ is $R$-projective) or ($M\in\mathbf{C}^\Delta$ and $N\in\mathbf{C}^\nabla$) then $\operatorname{Hom}\nolimits_\mathbf{C}(M,N)$ is $R$-projective, (e) the category $S\mathbf{C}$ is a highest weight $S$-category on the poset $\Lambda$ with standard objects $S\Delta(\lambda)$ and costandard objects $S\nabla(\lambda)$. The projective, injective and tilting objects associated with $\lambda$ are $SP(\lambda)$, $SI(\lambda)$ and $ST(\lambda)$. \qed \end{prop} \iffalse In particular, the reduction gives bijections $\operatorname{Proj}\nolimits(\mathbf{C})\to\operatorname{Proj}\nolimits(k\mathbf{C})$ and $\operatorname{Tilt}\nolimits(\mathbf{C})\to\operatorname{Tilt}\nolimits(k\mathbf{C}).$ Hence, there are bijections \begin{equation*} \operatorname{Proj}\nolimits(\mathbf{C})\to\Delta(\mathbf{C})\to\operatorname{Tilt}\nolimits(\mathbf{C})\to\nabla(\mathbf{C}),\quad P\mapsto\Delta\mapsto T\mapsto\nabla \end{equation*} such that there is a surjection $P\to\Delta$ whose kernel is filtered by standard modules which are $>\Delta$. Further, there is an injection $\Delta\to T$ whose cokernel is filtered by standard modules which are $<\Delta$ and there is a surjection $T\to\nabla$ whose kernel is filtered by costandard modules which are $<\nabla$. Hence, any of the sets $\operatorname{Tilt}\nolimits(\mathbf{C})$, $\operatorname{Proj}\nolimits(\mathbf{C})$, $\Delta(\mathbf{C})$, $\nabla(\mathbf{C})$, $\operatorname{Irr}\nolimits(\mathbf{C})$ can be regarded as a poset for the highest weight order. \fi \begin{rk} \label{rk:2.3} For any subset $\Sigma\subset\Lambda$, let $\mathbf{C}[\Sigma]$ be the Serre subcategory generated by all the $L(\lambda)$ with $\lambda\in\Sigma$ and let $\mathbf{C}(\Sigma)$ be the Serre quotient $\mathbf{C}/\mathbf{C}[\operatorname{Irr}\nolimits(\mathbf{C})\setminus \Sigma]$. An {\it ideal} in the poset $\Lambda$ is a subset $I$ such that $I=\bigcup_{i\in I}\{\leqslant i\}$. A {\it coideal} is the complement of an ideal, i.e., a subset $J$ such that $J=\bigcup_{j\in J}\{\geqslant j\}$. Now, assume that $\mathbf{C}$ is a highest weight category over a field $k$, and that $I$, $J$ are respectively an ideal and a coideal of $\Lambda$. Then $\mathbf{C}[I]$, $\mathbf{C}(J)$ are highest weight categories and the inclusion $\mathbf{C}[I]\subset\mathbf{C}$ induces injections on extensions by \cite[thm.~3.9]{CPS2}, \cite[prop.~A.3.3]{Do}.
\end{rk} \subsection{Ringel duality} \label{sec:R} Let $R$ be a commutative, noetherian ring with 1 which is a local ring with residue field $k$. Let $\mathbf{C}$ be a highest weight $R$-category which is equivalent to $A\operatorname{\!-\mathbf{mod}}\nolimits$ for a finite projective $R$-algebra $A$. We call $T=\bigoplus_{\lambda\in\Lambda}T(\lambda)$ the \emph{characteristic tilting module}. Set $D(A)=\operatorname{End}\nolimits_\mathbf{C}(T)$ and $A^\diamond=D(A)^{\operatorname{op}\nolimits}.$ The {\it Ringel dual} of $A$ is the $R$-algebra $A^\diamond,$ and the Ringel dual of $\mathbf{C}$ is the category $\mathbf{C}^\diamond=A^\diamond\operatorname{\!-\mathbf{mod}}\nolimits.$ The category $\mathbf{C}^\diamond$ is a highest weight $R$-category on the poset $\Lambda^{\operatorname{op}\nolimits}$. We have an equivalence of triangulated categories $(\bullet)^\diamond:\ \mathbf{D}^b(\mathbf{C})\to \mathbf{D}^b(\mathbf{C}^\diamond)$ called the {\it Ringel equivalence}. It restricts to an equivalence of exact categories $(\bullet)^\diamond:\ \mathbf{C}^\Delta\to (\mathbf{C}^\diamond)^\nabla$ such that $M\mapsto\operatorname{RHom}\nolimits_\mathbf{C}(M,T)^*$. Here $(\bullet)^*$ is the dual as a $k$-vector space. We have $\Delta(\lambda)^\diamond=\nabla^\diamond(\lambda)$, $P(\lambda)^\diamond=T^\diamond(\lambda)$ and $T(\lambda)^\diamond=I^\diamond(\lambda)$, for each $\lambda\in\Lambda$, see \cite[prop.~4.26]{Ro}. The ring $(A^\diamond)^\diamond$ is Morita equivalent to $A$, see loc.~cit.~and \cite[sec.~A.4]{Do}. Now, assume that $R=k$. For each primitive idempotent $e\in A,$ there is a unique $\lambda\in\Lambda$ such that $Ae=P(\lambda)$. We define $e^\diamond\in A^\diamond$ to be the primitive idempotent such that $A^\diamond e^\diamond=P^\diamond(\lambda)$. The bijection $(\bullet)^\diamond:\operatorname{Irr}\nolimits(\mathbf{C})\to\operatorname{Irr}\nolimits(\mathbf{C}^\diamond)$ given by $L(\lambda)\mapsto L^\diamond(\lambda)$ is called the {\it natural bijection} between $\operatorname{Irr}\nolimits(\mathbf{C})$ and $\operatorname{Irr}\nolimits(\mathbf{C}^\diamond).$ The following is well-known, see, e.g., \cite[prop.~A.4.9]{Do}. \begin{lemma}\label{lem:ringeltroncation} A subset $I\subset \Lambda$ is an ideal if and only if it is a coideal in $\Lambda^{\operatorname{op}\nolimits}$. We have $\mathbf{C}[I]^\diamond=\mathbf{C}^\diamond(I),$ and the Ringel equivalence factors to an equivalence of categories $\mathbf{C}[I]^\Delta\to \mathbf{C}^\diamond(I)^\nabla$. \end{lemma} \subsection{Standard Koszul duality} \label{sec:standardkoszul} Let $k$ be a field and let $\mathbf{C}$ be a highest weight $k$-category. Assume that $\mathbf{C}$ is equivalent to the category of finitely generated left modules over a finite dimensional $k$-algebra $A$. Let $\bar A$ be a graded $k$-algebra which is isomorphic to $A$ as a $k$-algebra. We call $\bar A$ a {\it graded lift} of $A$. A {\it graded lift} of an object $M\in\mathbf{C}$ is a graded $\bar A$-module which is isomorphic to $M$ as an $A$-module. Assume that $\bar A$ is positively graded (in the sense of Section \ref{sec:gradedrings}).
We have the following. \begin{prop} Given $\lambda\in\Lambda$ there exist unique graded lifts $\bar L(\lambda)$, $\bar P(\lambda)$, $\bar I(\lambda)$, $\bar \Delta(\lambda)$, $\bar \nabla(\lambda)$, $\bar T(\lambda)$ such that \begin{itemize} \item $\bar L(\lambda)$ is pure of degree zero, \item the surjection $\bar P(\lambda)\twoheadrightarrow\bar L(\lambda)$ is homogeneous of degree zero, \item the injection $\bar L(\lambda)\hookrightarrow\bar I(\lambda)$ is homogeneous of degree zero, \item the surjection $f:\bar P(\lambda)\twoheadrightarrow\bar \Delta(\lambda)$ in $(P)$ is homogeneous of degree zero, \item the injection $f:\bar\nabla(\lambda)\hookrightarrow\bar I(\lambda)$ in $(I)$ is homogeneous of degree zero, \item both the injection $f:\bar\Delta(\lambda)\hookrightarrow \bar T(\lambda)$ and the surjection $g:\bar T(\lambda)\twoheadrightarrow\bar\nabla(\lambda)$ in $(T)$ are homogeneous of degree zero. \end{itemize} \end{prop} \begin{proof} The existence of the graded lifts is proved in \cite[cor.~4, 5]{MO}. They are unique up to isomorphism because, by \cite[lem.~2.5.3]{BGS}, the graded lift of an indecomposable object of $k\mathbf{C}$ is unique up to a graded $\bar A$-module isomorphism and up to a shift of the grading. \end{proof} \iffalse \begin{lemma} Let $M$ be an indecomposable object of $\mathbf{C}$ which is projective as an $A$-module. Assume that $kM$ is indecomposable in $k\mathbf{C}$. If $M$ has a graded lift then this lift is unique up to a graded $\bar R$-module isomorphism and up to a shift of the grading. \end{lemma} \begin{proof} If $A=k$ a proof is given in \cite[lem.~2.5.3]{BGS}. For the general case, using the same argument as in loc.~cit., we are reduced to check that $\operatorname{End}\nolimits_\mathbf{C}(M)$ is a local ring. Since $M$ is finite and projective over $A$, an element $x\in\operatorname{End}\nolimits_A(M)$ is invertible if and only if its reduction to $k$ is invertible in $\operatorname{End}\nolimits_{k\mathbf{C}}(kM)$ by the Nakayama lemma. \end{proof} \fi The gradings above will be called the {\it natural gradings}. In particular, let $\bar T=\bigoplus_{\lambda\in\Lambda}\bar T(\lambda)$. The grading on $\bar T$ induces a grading on the $k$-algebra $A^\diamond$ given by $\bar A^\diamond=\operatorname{End}\nolimits_\mathbf{C}(\bar T)^{\operatorname{op}\nolimits}$. It is called the \emph{natural grading}. A chain complex of projective (resp.~injective, tilting) modules $\dots \to M_i\to M_{i-1}\to\dots$ is called \emph{linear} provided that for every $i\in\mathbb{Z}$ all indecomposable direct summands of $M_i\langle -i\rangle$ have the natural grading. Following \cite{ADL}, we say that $\bar A$ is {\it standard Koszul} provided that all standard modules have linear projective resolutions and all costandard modules have linear injective coresolutions. By \cite[thm.~1]{ADL}, a standard Koszul graded algebra is Koszul. We will identify $\Lambda=\{L(\lambda)\}_{\lambda\in\Lambda}=\{1_x\,;\,x\in \operatorname{Irr}\nolimits(A)\}$ and equip the latter with the partial order $\leqslant$. Assume that $\bar A$ is Koszul.
By \cite[thm.~3]{ADL}, the graded $k$-algebra $\bar A$ is standard Koszul if and only if $\bar A^!$ is quasi-hereditary relative to the poset $\{1_x^!\,;\,x\in \operatorname{Irr}\nolimits(A)\}$ such that $1_x^!\leqslant 1_y^!\iff 1_x\geqslant 1_y.$ Following \cite{Ma}, we say that $\bar A$ is {\it balanced} provided that all standard modules have linear tilting coresolutions and all costandard modules have linear tilting resolutions. By \cite[thm.~7]{MO}, the graded $k$-algebra $\bar A$ is balanced if and only if it is standard Koszul and the graded $k$-algebra $D(\bar A)$ is positive. If $\bar A$ is balanced, then the following hold \cite[thm.~1]{Ma}: \begin{itemize} \item $\bar A$, $\bar A^\diamond$, $\bar A^!$ and $(\bar A^!)^\diamond$ are positively graded, quasi-hereditary, Koszul, standard Koszul and balanced, \item $(\bar A^!)^\diamond=(\bar A^\diamond)^!$ as graded quasi-hereditary $k$-algebras, \item the natural bijection $\operatorname{Irr}\nolimits(A)\to\operatorname{Irr}\nolimits(A^!)$ takes the highest weight order on $\operatorname{Irr}\nolimits(A)$ to the opposite of the highest weight order on $\operatorname{Irr}\nolimits(A^!)$. \end{itemize} Note that the notation $E(\bar A)$ in \cite{Ma} corresponds to our notation $\bar A^!$. We'll say that \textit{$\mathbf{C}$ is standard Koszul or balanced} if we can choose the graded $k$-algebra $\bar A$ in such a way that it is standard Koszul or balanced respectively. \begin{rk} \label{rk:KS} Assume that the graded $k$-algebra $\bar A$ is standard Koszul. In particular, it is finite dimensional, Koszul and of finite global dimension. The Koszul equivalence is given by $E=\operatorname{RHom}\nolimits_A(\bar A^0,\bullet)$, up to the grading. See \cite[sec.~2]{BGS}, \cite[sec.~3]{RH} for details. It takes costandard (resp.~injective, simple) modules to standard (resp.~simple, projective) ones, by \cite[prop.~2.7]{ADL}, \cite[thm.~2.12.5]{BGS}. \end{rk} \begin{rk} Assume that $\bar A$, $\bar A^\diamond$ are both positively graded (in the sense of Section \ref{sec:gradedrings}). The functor $\operatorname{Hom}\nolimits_\mathbf{C}(\bullet,\bar T)^*$ takes the natural graded indecomposable tilting objects in $\bar A\mathbf{g}mod$ to the natural graded indecomposable projective ones in $\bar A^\diamond\mathbf{g}mod$. It also takes the natural graded indecomposable injective objects in $\bar A\mathbf{g}mod$ to natural graded indecomposable tilting ones in $\bar A^\diamond\mathbf{g}mod$. \end{rk} Let $\mathbf{C}$ be a highest weight category over a field $k$ and let $I\subset\operatorname{Irr}\nolimits(\mathbf{C})$ be an ideal. Put $J=\operatorname{Irr}\nolimits(\mathbf{C})\setminus I$. Assume that $\mathbf{C}$ is standard Koszul. The category $\mathbf{C}[I]$ has a Koszul grading by Lemma \ref{lem:1.2} and Remark \ref{rk:2.3}. The natural bijection $\operatorname{Irr}\nolimits(\mathbf{C})\to\operatorname{Irr}\nolimits(\mathbf{C}^!)$ is an anti-isomorphism of posets. Thus, the images $I^!,$ $J^!$ of $I,$ $J$ are respectively a coideal and an ideal of $\operatorname{Irr}\nolimits(\mathbf{C}^!)$. \begin{lemma} \label{lem:2.6:E} For each ideal $I\subset\operatorname{Irr}\nolimits(\mathbf{C})$ we have $\mathbf{C}[I]^!=\mathbf{C}^!(I^!).$ \end{lemma} \begin{proof} Set $L_I=\bigoplus_{L\in I}L$.
By Lemma \ref{lem:1.1} and Remark \ref{rk:2.3}, we have $\mathbf{C}^!=\operatorname{Ext}\nolimits_\mathbf{C}(L)\operatorname{\!-\mathbf{mod}}\nolimits$ and $\mathbf{C}[I]^!=\operatorname{Ext}\nolimits_\mathbf{C}(L_I)\operatorname{\!-\mathbf{mod}}\nolimits.$ Let $e\in\operatorname{End}\nolimits_\mathbf{C}(L)$ be the projection from $L$ to $L_I$. We have $\operatorname{Ext}\nolimits_\mathbf{C}(L_I)=e\,\operatorname{Ext}\nolimits_\mathbf{C}(L)\,e$. The full subcategory $K_I\subset \mathbf{C}^!$ consisting of the modules killed by $e$ is a Serre subcategory and there is an equivalence $\mathbf{C}^!/K_I\to \mathbf{C}[I]^!,$ $M\mapsto eM$. In other words, the restriction with respect to the obvious inclusion $\operatorname{Ext}\nolimits_\mathbf{C}(L_I)\subset\operatorname{Ext}\nolimits_\mathbf{C}(L)$ yields a quotient functor $\mathbf{C}^!\to \mathbf{C}[I]^!$ whose kernel is $K_I$. Since $\mathbf{C}$ is standard Koszul, its Koszul dual $\mathbf{C}^!$ is a highest weight category. We have $\mathbf{C}^!(I^!)=\mathbf{C}^!/\mathbf{C}^![J^!],$ because $J^!=\operatorname{Irr}\nolimits(\mathbf{C}^!)\setminus I^!$. Since $\mathbf{C}^![J^!]$ is the Serre subcategory of $\mathbf{C}^!$ generated by $J^!$, it consists of the $\operatorname{Ext}\nolimits_\mathbf{C}(L)$-modules killed by $e$. This implies the lemma. \end{proof} \section{Affine Lie algebras and the parabolic category $\mathbf{O}$} \label{sec:3} \subsection{Lie algebras} Let $\mathfrak{g}$ be a simple Lie $\mathbb{C}$-algebra and let $G$ be a connected simple group over $\mathbb{C}$ with Lie algebra $\mathfrak{g}$. Let $T\subset G$ be a maximal torus and let $\mathfrak{t}\subset\mathfrak{g}$ be its Lie algebra. Let $\mathfrak{b}\subset\mathfrak{g}$ be a Borel subalgebra containing $\mathfrak{t}$. The elements of $\mathfrak{t}$, $\mathfrak{t}^*$ are called {\it coweights} and {\it weights} respectively. Given a root $\alpha\in\mathfrak{t}^*$ let $\check\alpha\in\mathfrak{t}$ denote the corresponding coroot. Let $\Pi\subset\mathfrak{t}^*$ be the set of roots of $\mathfrak{g}$, $\Pi^+\subset\Pi$ the set of roots of $\mathfrak{b}$, and $\mathbb{Z}\Pi$ be the root lattice. Let $\rho$ be half the sum of the positive roots. Let $\Phi=\{\alpha_i\,;\,i\in I\}$ be the set of simple roots in $\Pi^+$. Let $W$ be the Weyl group. Let $N$ be the dual Coxeter number of $\mathfrak{g}$. \subsection{Affine Lie algebras} \label{sec:affine} Let $\mathbf{g}$ be the affine Lie algebra associated with $\mathfrak{g}$. Recall that $\mathbf{g}=\mathbb{C}\partial\oplus\widehat{L\mathfrak{g}},$ where $\widehat{L\mathfrak{g}}$ is a central extension of $L\mathfrak{g}=\mathfrak{g}\otimes\mathbb{C}[t,t^{-1}]$ and $\partial=t\partial_t$ is a derivation of $\widehat{L\mathfrak{g}}$ acting trivially on the canonical central element $\mathbf{1}$ of $\widehat{L\mathfrak{g}}$. Consider the Lie subalgebras $\mathbf{b}=\mathfrak{b}\oplus\mathfrak{g}\otimes t\mathbb{C}[t]\oplus\mathbb{C}\partial\oplus \mathbb{C}\mathbf{1}$ and $\mathbf{t}=\mathbb{C}\partial\oplus\mathfrak{t}\oplus\mathbb{C}\mathbf{1}.$ Let $\widehat\Pi,$ $\widehat\Pi^+$ be the sets of roots of $\mathbf{g}$, $\mathbf{b}$ respectively.
We'll call an element of $\widehat\Pi$ an {\it affine root}. The set of simple roots in $\widehat\Pi^+$ is $\widehat\Phi=\{\alpha_i\,;\,i\in \{0\}\cup I\}.$ The elements of $\mathbf{t}$, $\mathbf{t}^*$ are called {\it affine coweights} and {\it affine weights} respectively. Let $(\bullet:\bullet):\mathbf{t}^*\times\mathbf{t}\to\mathbb{C}$ be the canonical pairing. Let $\delta$, $\Lambda_0$, $\hat\rho$ be the affine weights given by $(\delta:\partial)=(\Lambda_0:\mathbf{1})=1,$ $(\Lambda_0:\mathbb{C}\partial\oplus\mathfrak{t})=(\delta:\mathfrak{t}\oplus\mathbb{C}\mathbf{1})=0$ and $\hat\rho=\rho+N\Lambda_0.$ An element of $\mathbf{t}^*/\mathbb{C}\delta$ is called a {\it classical affine weight}. Let $cl:\mathbf{t}^*\to\mathbf{t}^*/\mathbb{C}\delta$ denote the obvious projection. Let $e$ be an integer $\neq 0$. We set $\mathbf{t}^*_e=\{\lambda\in\mathbf{t}^*\,;\,(\lambda:\mathbf{1})=-e-N\}.$ We'll use the identification $\mathbf{t}^*=\mathbb{C}\times\mathfrak{t}^*\times\mathbb{C}$ such that $\alpha_i\mapsto(0,\alpha_i,0)$ if $i\neq 0$, $\Lambda_0\mapsto (0,0,1)$ and $\delta\mapsto(1,0,0)$. Let $\check\alpha\in\mathbf{t}$ be the affine coroot associated with the real affine root $\alpha$. Let $\langle\bullet :\bullet \rangle$ be the non-degenerate symmetric bilinear form on $\mathbf{t}^*$ such that $(\lambda:\check\alpha_i)= 2\langle\lambda:\alpha_i\rangle/\langle\alpha_i:\alpha_i\rangle$ and $(\lambda:\mathbf{1})=\langle\lambda:\delta\rangle.$ Using $\langle\bullet :\bullet \rangle$ we identify $\check\alpha$ with an element of $\mathbf{t}^*$ for any real affine root $\alpha$. Let $\widehat W=W\ltimes\mathbb{Z}\Pi$ be the affine Weyl group and let $\mathcal{S}=\{s_i=s_{\alpha_i}\,;\,\alpha_i\in\widehat\Phi\}$ be the set of simple affine reflections. The group $\widehat W$ acts on $\mathbf{t}^*$. For $w\in W$, $\tau\in\mathbb{Z}\Pi$ we have $w(\Lambda_0)=\Lambda_0,$ $w(\delta)=\delta,$ $\tau(\delta)=\delta,$ $\tau(\lambda)=\lambda-\langle\tau:\lambda\rangle\delta$ and $\tau(\Lambda_0)=\tau+\Lambda_0-\langle\tau:\tau\rangle\delta/2$. The {\it $\bullet$-action} on $\mathbf{t}^*$ is given by $w\bullet\lambda=w(\lambda+\hat\rho)-\hat\rho$. It factors to a $\widehat W$-action on $\mathbf{t}^*/\mathbb{C}\delta$. Two (classical) affine weights $\lambda$, $\mu$ are {\it linked} if they belong to the same orbit of the $\bullet$-action, and we write $\lambda\sim\mu$. Let $W_\lambda$ be the stabilizer of an affine weight $\lambda$ under the $\bullet$-action. We say that $\lambda$ is {\it regular} if $W_\lambda=\{1\}$. Set $\mathcal{C}^\pm=\{\lambda\in\mathbf{t}^*\,;\, \langle\lambda+\hat\rho:\alpha\rangle\geqslant 0,\, \alpha\in\widehat\Pi^\pm\}.$ An element of $\mathcal{C}^-$ (resp.~of $\mathcal{C}^+$) is called an {\it antidominant affine weight} (resp.~a {\it dominant affine weight}). We write again $\mathcal{C}^\pm$ for $cl(\mathcal{C}^\pm)$. We have the following basic fact, see e.g., \cite[lem.~2.10]{KT}. \begin{lemma} Let $\lambda$ be an integral affine weight of level $-e-N$. We have (a) $\sharp(\widehat W\bullet\lambda\cap\mathcal{C}^-)=1$ and $\sharp(\widehat W\bullet\lambda\cap\mathcal{C}^+)=0$ if $e>0$, (b) $\sharp(\widehat W\bullet\lambda\cap\mathcal{C}^+)=1$ and $\sharp(\widehat W\bullet\lambda\cap\mathcal{C}^-)=0$ if $e<0$.
\end{lemma} We say that $\lambda$ is {\it negative} in the first case, and {\it positive} in the second case. For $\lambda\in\mathcal{C}^\pm$ the subgroup $W_\lambda$ of $\widehat W$ is finite and is a standard parabolic subgroup. It is isomorphic to the Weyl group of the root system $\{\alpha\in\widehat\Pi\,;\,\langle\lambda+\hat\rho:\alpha\rangle=0\}.$ \subsection{The parabolic category $\mathbf{O}$} \label{sec:2.9} Let $\mathcal{P}$ be the set of proper subsets of $\widehat\Phi$. An element of $\mathcal{P}$ is called a {\it parabolic type}. Fix a parabolic type $\nu$. If $\nu$ is the empty set we say that {\it $\nu$ is regular}, and we write $\nu=\phi$. Let $\mathbf{p}_\nu\subset\mathbf{g}$ be the unique parabolic subalgebra containing $\mathbf{b}$ whose set of roots is generated by $\widehat\Phi\cup(-\nu)$. Let $\Pi_\nu$ be the root system of a Levi subalgebra of $\mathbf{p}_\nu$. Set $\Pi^+_\nu=\Pi^+\cap\Pi_\nu$. Let $W_\nu\subset\widehat W$ be the Weyl group of $\Pi_\nu$. Let $w_\nu$ be the longest element in $W_\nu$. Let $\tilde\mathbf{O}^{\nu}$ be the category of all $\mathbf{g}$-modules $M$ such that $M=\bigoplus_{\lambda\in\mathbf{t}^*}M_\lambda$ with $M_\lambda=\{m\in M\,;\,xm=\lambda(x)m,\,x\in\mathbf{t}\}$ and $U(\mathbf{p}_\nu)\,m$ is finite dimensional for each $m\in M$. An affine weight $\lambda$ is {\it $\nu$-dominant} if $(\lambda:\check\alpha)\in\mathbb{N}$ for all $\alpha\in\Pi_\nu^+.$ For any $\nu$-dominant affine weight $\lambda,$ let $V^\nu(\lambda)$, $L(\lambda)$ be the parabolic Verma module with highest weight $\lambda$ and its simple top. Recall that $e$ is an integer $\neq 0$. For any weight $\lambda\in\mathfrak{t}^*$, let $\lambda_e=\lambda-(e+N)\Lambda_0$ and $z_\lambda=\langle\lambda:2\rho+\lambda\rangle/2e.$ Let $\mathbf{O}^{\nu}\subset\tilde\mathbf{O}^{\nu}$ be the full subcategory of the modules such that the highest weight of any of its simple subquotients is of the form $\tilde\lambda_e=\lambda_e+z_\lambda\,\delta$, where $\lambda$ is a $\nu$-dominant integral weight. We'll abbreviate $V^\nu(\lambda_e)=V^\nu(\tilde\lambda_e)$ and $L(\lambda_e)=L(\tilde\lambda_e).$ Now, fix $\mu\in\mathcal{P}$ and assume that $e>0$. We use the following notation: \begin{itemize} \item ${\rm o}_{\mu,-}$ is an antidominant integral classical affine weight of level $-e-N$ whose stabilizer for the $\bullet$-action of $\widehat W$ is equal to $W_\mu$, \item ${\rm o}_{\mu,+}=-{\rm o}_{\mu,-}-2\hat\rho$ is a dominant integral classical affine weight of level $e-N$ whose stabilizer for the $\bullet$-action of $\widehat W$ is equal to $W_\mu$, \item $\mathbf{O}^{\nu}_{\mu,\pm}\subset\mathbf{O}^{\nu}$ is the full subcategory consisting of the modules such that the highest weight of any of its simple subquotients is linked to ${\rm o}_{\mu,\pm}$. \end{itemize} Let $I_\mu^{\operatorname{min}\nolimits}, I^{\operatorname{max}\nolimits}_\mu\subset\widehat W$ be the sets of minimal and maximal length representatives of the left cosets in $\widehat W/W_\mu$. Let $\leqslant$ be the Bruhat order.
We'll consider the posets \begin{itemize} \item $I_{\mu,-}=(I_\mu^{\operatorname{max}\nolimits},\preccurlyeq)$ where $\preccurlyeq=\leqslant$, and $I_{\mu,-}^\nu=\{x\in I_{\mu,-}\,;\,x\bullet{\rm o}_{\mu,-}\ \text{is}\ \nu\text{-dominant}\}$, \item $I_{\mu,+}=(I_\mu^{\operatorname{min}\nolimits},\preccurlyeq)$ where $\preccurlyeq=\geqslant$, and $I_{\mu,+}^\nu=\{x\in I_{\mu,+}\,;\,x\bullet{\rm o}_{\mu,+}\ \text{is}\ \nu\text{-dominant}\}.$ \end{itemize} They have the following properties. \begin{lemma} \label{lem:C} For any $\mu$, $\nu\in\mathcal{P}$ we have (a) $x\in I_{\phi,+}^\mu\iff x^{-1}\in I_{\mu,+}$, (b) $x\in I_{\phi,-}^\mu\iff x^{-1}\in I_{\mu,-}$, (c) $I_{\phi,\pm}^\mu\cap I_{\nu,\mp}=\{xw_\nu\,;\,x\in I_{\nu,\pm}^\mu\}$, (d) $x\in I^\mu_{\nu,+}\iff w_\mu xw_\nu\in I_{\nu,-}^\mu$. \end{lemma} \begin{proof} To prove $(a)$ note that, since ${\rm o}_{\phi,+}$ is dominant regular, we have $$\aligned I_{\phi,+}^\mu &=\{x\in\widehat W\,;\,x\bullet{\rm o}_{\phi,+}\ \text{is}\ \mu\text{-dominant}\}\cr &=\{x\in \widehat W\,;\, \langle {\rm o}_{\phi,+}+\hat\rho:x^{-1}(\check\alpha)\rangle\geqslant 0,\,\forall \alpha\in\Pi^+_\mu\}\cr &=\{x\in\widehat W\,;\,x^{-1}(\Pi_\mu^+)\subset\widehat\Pi^+\}\cr &=\{x\in\widehat W\,;\,x^{-1}\in I_{\mu,+}\}. \endaligned$$ The proof of $(b)$ is similar and is left to the reader. Now we prove part $(c)$. Choose positive integers $d,f$ such that $\pm(f-d)>-N$. Then, the translation functor $T_{\phi,\nu}:{}^z\mathbf{O}_{\phi,\pm}\to{}^z\mathbf{O}_{\nu,\pm}$ in Proposition \ref{prop:translation2} is well-defined for any $z\in I_\nu^{\operatorname{max}\nolimits}$. By Proposition \ref{prop:translation2}$(d),(e)$, we have $$\aligned I^\mu_{\nu,\pm}&=\{x\;;\;xw_\nu\in I^\mu_{\phi,\pm},\,x\in I_{\nu,\pm}\} =\{x\;;\;xw_\nu\in I^\mu_{\phi,\pm}\cap I_{\nu,\mp}\}. \endaligned $$ Part $(d)$ is standard. More precisely, by \cite[prop.~2.7.5]{Ca}, if $x\in I^\mu_{\nu,+}$ then we have $W_\mu\cap x(W_\nu)=\emptyset$ and $x\in(I^{\operatorname{min}\nolimits}_\mu)^{-1}\cap I_\nu^{\operatorname{min}\nolimits}$. Then, we also have $w_\mu x\in(I^{\operatorname{max}\nolimits}_\mu)^{-1}\cap I_\nu^{\operatorname{min}\nolimits},$ from which we deduce that $w_\mu x w_\nu\in I^\mu_{\nu,-}$. The lemma is proved. \end{proof} For each $x\in I_{\mu,\mp}^\nu$, we write $x_\pm=w_\nu xw_\mu\in I_{\mu,\pm}^\nu$. From Lemma \ref{lem:C} and its proof, we get the following. \begin{cor} \label{lem:C2} (a) We have $I^\mu_{\nu,+}=\{x\;;\;xw_\nu\in(I^{\operatorname{min}\nolimits}_\mu)^{-1}\cap I_\nu^{\operatorname{max}\nolimits}\}$ and $I^\mu_{\nu,-}=\{x\;;\;xw_\nu\in(I^{\operatorname{max}\nolimits}_\mu)^{-1}\cap I_\nu^{\operatorname{min}\nolimits}\}.$ (b) The map $I_{\mu,\pm}^\nu\to I_{\nu,\pm}^\mu$, $x\mapsto x^{-1}$ is an isomorphism of posets. (c) The map $I_{\mu,\mp}^\nu\to I_{\mu,\pm}^\nu,$ $x\mapsto x_\pm$ is an anti-isomorphism of posets.
\end{cor} The category $\mathbf{O}^\nu_{\mu,\pm}$ is a direct summand in $\mathbf{O}^\nu$ by the linkage principle, see \cite[thm.~6.1]{So}, and we have $\operatorname{Irr}\nolimits(\mathbf{O}^{\nu}_{\mu,\pm})= \{L(x\bullet{\rm o}_{\mu,\pm})\,;\,x\in I_{\mu,\pm}^\nu\}$. Further, for each $x\in I^\nu_{\mu,+}$, the simple module $L(x\bullet{\rm o}_{\mu,+})$ has a projective cover $P^\nu(x\bullet{\rm o}_{\mu,+})$ in $\mathbf{O}_{\mu,+}^\nu$. For each $x\in I^\nu_{\mu,-},$ there is a tilting module $T^\nu(x\bullet{\rm o}_{\mu,-})$ in $\mathbf{O}_{\mu,-}^\nu$ with highest weight $x\bullet{\rm o}_{\mu,-}$, see e.g., \cite{So}. Note that $\mathbf{O}_{\mu,+}^\nu$ does not have tilting objects and $\mathbf{O}_{\mu,-}^\nu$ does not have projective objects. Let $\mathbf{O}^{\nu,\Delta}_{\mu,\pm}$ be the full subcategory of $\mathbf{O}^{\nu}_{\mu,\pm}$ consisting of objects with a finite filtration by parabolic Verma modules. The following is a version of the Ringel equivalence for the categories $\mathbf{O}_{\mu,\pm}^\nu$. \begin{lemma}\label{lem:ringel-soergel} There is an equivalence of exact categories $D: \mathbf{O}^{\nu,\Delta}_{\mu,+}{\buildrel\sim\over\to}\,(\mathbf{O}^{\nu,\Delta}_{\mu,-})^{\operatorname{op}\nolimits}$ which maps $V^\nu(x\bullet{\rm o}_{\mu,+})$ to $V^\nu(x_-\bullet{\rm o}_{\mu,-}),$ and $P^\nu(x\bullet{\rm o}_{\mu,+})$ to $T^\nu(x_-\bullet{\rm o}_{\mu,-}).$ \end{lemma} \begin{proof} Note that $x_-\bullet{\rm o}_{\mu,-}=w_\nu x\bullet{\rm o}_{\mu,-}=-w_\nu(x\bullet{\rm o}_{\mu,+}+\hat\rho)-\hat\rho$. So the lemma is given by \cite[thm.~6.6, proof of cor.~7.6]{So}. \end{proof} \begin{rk}\label{rk:ringel-soergel} Recall that the BGG-duality on $\mathbf{O}^{\nu}_{\mu,-}$ is an equivalence ${\rm d}: \mathbf{O}^{\nu}_{\mu,-}{\buildrel\sim\over\to} \mathbf{O}^{\nu,\operatorname{op}\nolimits}_{\mu,-}$ which maps a simple module to itself, maps a parabolic Verma module to a dual parabolic Verma module, and maps a tilting module to itself. Let $\mathbf{O}^{\nu,\nabla}_{\mu,-}$ be the full subcategory of $\mathbf{O}^{\nu}_{\mu,-}$ consisting of objects with a finite filtration by dual parabolic Verma modules. Then we have an equivalence $${\rm d}\circ D:\mathbf{O}^{\nu,\Delta}_{\mu,+}{\buildrel\sim\over\to}\,\mathbf{O}^{\nu,\nabla}_{\mu,-}$$ which maps $V^\nu(x\bullet{\rm o}_{\mu,+})$ to ${\rm d}(V^\nu(x_-\bullet{\rm o}_{\mu,-})),$ and $P^\nu(x\bullet{\rm o}_{\mu,+})$ to $T^\nu(x_-\bullet{\rm o}_{\mu,-}).$ \end{rk} \subsection{The truncated category $\mathbf{O}$} \label{sec:truncation} Fix parabolic types $\mu,\nu\in\mathcal{P}$ and an integer $e>0$. Fix an element $w\in\widehat W$. We define the \emph{truncated parabolic category} $\mathbf{O}$ at the negative/positive level in the following way. First, set ${}^w\!I^\nu_{\mu,-}=\{x\in I^\nu_{\mu,-}\,;\,x\preccurlyeq\! w\}$ and let ${}^w\mathbf{O}^{\nu}_{\mu,-}$ be the full subcategory of $\mathbf{O}^{\nu}_{\mu,-}[{}^w\!I^\nu_{\mu,-}]$ consisting of the finitely generated modules. We have $\operatorname{Irr}\nolimits({}^w\mathbf{O}^{\nu}_{\mu,-})= \{L(x\bullet{\rm o}_{\mu,-})\,;\,x\in {}^w\!I_{\mu,-}^\nu\}$. For each $x\in {}^w\!I^\nu_{\mu,-},$ the modules $V^\nu(x\bullet{\rm o}_{\mu,-})$ and $T^\nu(x\bullet{\rm o}_{\mu,-})$ belong to ${}^w\mathbf{O}^{\nu}_{\mu,-}$. We may write $T^\nu(x\bullet{\rm o}_{\mu,-})={}^w\!T^\nu(x\bullet{\rm o}_{\mu,-})$.
The module $L(x\bullet{\rm o}_{\mu,-})$ has a projective cover in ${}^w\mathbf{O}^{\nu}_{\mu,-}$. We denote it by ${}^w\!P^\nu(x\bullet{\rm o}_{\mu,-})$. Let ${}^w\!P^\nu_{\mu,-}$ be the minimal projective generator, ${}^w\!T^\nu_{\mu,-}$ the characteristic tilting module, and let ${}^w\!L^\nu_{\mu,-}$ be the sum of the simple modules. \begin{df}\label{df:3.6} Set ${}^w\!\bar A^\nu_{\mu,-}= \operatorname{Ext}\nolimits_{{}^{w}\mathbf{O}^\nu_{\mu,-}}({}^{w}\!L^\nu_{\mu,-})^{\operatorname{op}\nolimits}$ and ${}^w\!A^\nu_{\mu,-}=\operatorname{End}\nolimits_{{}^w\mathbf{O}^\nu_{\mu,-}}({}^w\!P^\nu_{\mu,-})^{\operatorname{op}\nolimits}.$ \end{df} \begin{lemma} The category ${}^w\mathbf{O}^{\nu}_{\mu,-}\simeq {}^w\!A^\nu_{\mu,-}\operatorname{\!-\mathbf{mod}}\nolimits$ is a highest weight category with the set of standard objects $\Delta({}^w\mathbf{O}^{\nu}_{\mu,-})= \{V^\nu(x\bullet{\rm o}_{\mu,-})\,;\,x\in{}^w\!I^{\nu}_{\mu,-}\}$ and the highest weight order given by the partial order $\preccurlyeq$ on ${}^w\!I_{\mu,-}^\nu$. \end{lemma} \begin{proof} First, the category $\mathbf{O}^{\nu}_{\mu,-}$ is a highest weight category in the sense of \cite[def.~3.1]{CPS2}, i.e., it satisfies the axioms in Section \ref{sec:5} although it is not equivalent to the module category of a finite dimensional algebra. By \cite[thm.~3.5]{CPS2} the subcategory ${}^w\mathbf{O}^{\nu}_{\mu,-}$ is also highest weight. Since ${}^w\mathbf{O}^{\nu}_{\mu,-}\simeq {}^w\!A^\nu_{\mu,-}\operatorname{\!-\mathbf{mod}}\nolimits$ and ${}^w\!A^\nu_{\mu,-}$ is finite dimensional, the category ${}^w\mathbf{O}^{\nu}_{\mu,-}$ is a highest weight category in the sense of Section \ref{sec:5}. \end{proof} Next, set ${}^w\!I_{\mu,+}^\nu=\{x\in I_{\mu,+}^\nu\,;\,x\succcurlyeq\! w\}$ and let ${}^w\mathbf{O}^{\nu}_{\mu,+}$ be the full subcategory of $\mathbf{O}^{\nu}_{\mu,+}({}^w\!I_{\mu,+}^\nu)$ consisting of the finitely generated objects. We have $\operatorname{Irr}\nolimits({}^w\mathbf{O}^{\nu}_{\mu,+})= \{L(x\bullet{\rm o}_{\mu,+})\,;\,x\in {}^w\!I_{\mu,+}^\nu\}$. Let ${}^wL^\nu_{\mu,+}=\bigoplus_{x\in {}^w\!I^\nu_{\mu,+}}L(x\bullet{\rm o}_{\mu,+})$ and let ${}^w\!P^\nu_{\mu,+}=\bigoplus_{x\in {}^w\!I^\nu_{\mu,+}}P^\nu(x\bullet{\rm o}_{\mu,+})$. \begin{df} Set ${}^w\!\bar A^\nu_{\mu,+}= \operatorname{Ext}\nolimits_{{}^{w}\mathbf{O}^\nu_{\mu,+}}({}^{w}\!L^\nu_{\mu,+})^{\operatorname{op}\nolimits}$ and ${}^w\!A^\nu_{\mu,+}=\operatorname{End}\nolimits_{\mathbf{O}^\nu_{\mu,+}}({}^w\!P^\nu_{\mu,+})^{\operatorname{op}\nolimits}.$ \end{df} Consider the quotient functor $$F=\operatorname{Hom}\nolimits_{\mathbf{O}^{\nu}_{\mu,+}}({}^w\!P^\nu_{\mu,+},\bullet): \mathbf{O}^{\nu}_{\mu,+}\to {}^w\!A^\nu_{\mu,+}\mathbf{M}od.$$ Its kernel is the full subcategory generated by the modules $L(x\bullet{\rm o}_{\mu,+})$ with $x\not\in {}^w\!I^\nu_{\mu,+}$. So, we have an equivalence of categories $\mathbf{O}^{\nu}_{\mu,+}({}^w\!I^\nu_{\mu,+})\simeq {}^w\!A^\nu_{\mu,+}\mathbf{M}od$.
It restricts to an equivalence $${}^w\mathbf{O}^{\nu}_{\mu,+}\simeq {}^w\!A^\nu_{\mu,+}\operatorname{\!-\mathbf{mod}}\nolimits.$$ For each $x\in{}^w\!I_{\mu,+}^\nu$, we'll view $V^\nu(x\bullet{\rm o}_{\mu,+})$ as an object in ${}^w\mathbf{O}^{\nu}_{\mu,+}$ by identifying it with the ${}^w\!A^\nu_{\mu,+}$-module $F(V^\nu(x\bullet{\rm o}_{\mu,+}))$. We denote ${}^w\!P^\nu(x\bullet{\rm o}_{\mu,+})=F(P^\nu(x\bullet{\rm o}_{\mu,+}))$. For each $x\in{}^w\!I_{\mu,\pm}^\nu$, let $1_x$ be the (obvious) idempotents in the $\mathbb{C}$-algebras ${}^w\!\bar A^\nu_{\mu,\pm}$ and ${}^w\!A^\nu_{\mu,\pm},$ associated with the modules $L(x\bullet{\rm o}_{\mu,\pm})$ and ${}^w\!P^\nu(x\bullet{\rm o}_{\mu,\pm})$. We'll abbreviate $1_x^\diamond=(1_x)^\diamond$, ${}^w\!A^{\nu,\diamond}_{\mu,-}=({}^w\!A^\nu_{\mu,-})^\diamond$ and ${}^v\mathbf{O}^{\nu,\diamond}_{\mu,-}=({}^v\mathbf{O}^{\nu}_{\mu,-})^\diamond$. \begin{prop} \label{prop:ringel} Assume that $w\in I^\nu_{\mu,+}$ and $v=w_-\in I^\nu_{\mu,-}$. (a) The category ${}^w\mathbf{O}^{\nu}_{\mu,+}\simeq {}^w\!A^\nu_{\mu,+}\operatorname{\!-\mathbf{mod}}\nolimits$ is a highest weight category with the set of standard objects $\Delta({}^w\mathbf{O}^{\nu}_{\mu,+})= \{V^\nu(x\bullet{\rm o}_{\mu,+})\,;\,x\in{}^w\!I^{\nu}_{\mu,+}\}$ and the highest weight order given by the partial order $\preccurlyeq$ on ${}^w\!I_{\mu,+}^\nu$. (b) There is an equivalence of highest weight categories ${}^v\mathbf{O}^{\nu,\diamond}_{\mu,-}\simeq{}^{w}\mathbf{O}^{\nu}_{\mu,+}$ which takes $V^\nu(x\bullet{\rm o}_{\mu,-})^\diamond$ to $V^\nu(x_+\bullet{\rm o}_{\mu,+})$ for any $x\in{}^v\!I_{\mu,-}^\nu.$ (c) There is a $\mathbb{C}$-algebra isomorphism ${}^v\!A^{\nu,\diamond}_{\mu,-}\simeq{}^{w}\!A^\nu_{\mu,+}$ such that $1_x^\diamond\mapsto 1_{(x_+)}$ for any $x\in{}^v\!I_{\mu,-}^\nu.$ \end{prop} \begin{proof} First, note that the anti-isomorphism of posets $I_{\mu,+}^\nu\to I_{\mu,-}^\nu,$ $x\mapsto x_-,$ in Corollary \ref{lem:C2} restricts to an anti-isomorphism of posets \begin{equation*} {}^w\!I_{\mu,+}^\nu{\buildrel\sim\over\to}\, {}^v\!I_{\mu,-}^\nu. \end{equation*} Therefore by Remark \ref{rk:ringel-soergel} the functor ${\rm d}\circ D$ maps ${}^w\!P^\nu_{\mu,+}$ to ${}^v\!T^\nu_{\mu,-}$, and yields an algebra isomorphism $\operatorname{End}\nolimits_{\mathbf{O}^{\nu}_{\mu,+}}({}^w\!P^\nu_{\mu,+})^{\operatorname{op}\nolimits}\simeq\operatorname{End}\nolimits_{\mathbf{O}^{\nu}_{\mu,-}}({}^v\!T^\nu_{\mu,-})^{\operatorname{op}\nolimits}.$ So, we have ${}^w\mathbf{O}^{\nu}_{\mu,+}=({}^v\mathbf{O}^{\nu}_{\mu,-})^\diamond$. In particular ${}^w\mathbf{O}^{\nu}_{\mu,+}$ is a highest weight category because it is the Ringel dual of a highest weight category. The rest of the proposition follows from the generalities on Ringel duality in Section \ref{sec:R}. \end{proof} We'll denote by ${}^w\! T^\nu(x\bullet{\rm o}_{\mu,+})$ the tilting object associated with $V^\nu(x\bullet{\rm o}_{\mu,+})$ in ${}^w\mathbf{O}^{\nu}_{\mu,+}$ and by ${}^w\! T^\nu_{\mu,+}$ the characteristic tilting module. If $\nu=\phi,$ we also abbreviate ${}^w\mathbf{O}_{\mu,\pm}={}^w\mathbf{O}_{\mu,\pm}^\phi,$ suppressing $\phi$ everywhere in the notation.
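For orientation, the following unpacks the combinatorics of Proposition \ref{prop:ringel} in the simplest case; this is only an illustration of the definitions above, not a statement taken from the references. Assume $\mu=\nu=\phi$, so that $w_\mu=w_\nu=1$, $x_\pm=x$ for every $x$, and $v=w_-=w$. Then $I_{\phi,\pm}=\widehat W$ and
$${}^w\!I_{\phi,+}=\{x\in\widehat W\,;\,x\leqslant w\}={}^w\!I_{\phi,-}$$
as sets: both truncation posets are the Bruhat interval $[1,w]$, carrying the Bruhat order on the $-$ side and the opposite Bruhat order on the $+$ side. In this case Proposition \ref{prop:ringel} says that ${}^w\mathbf{O}_{\phi,+}$ is the Ringel dual of ${}^w\mathbf{O}_{\phi,-}$, the Ringel equivalence matching $V(x\bullet{\rm o}_{\phi,-})^\diamond$ with $V(x\bullet{\rm o}_{\phi,+})$ for each $x\leqslant w$.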
\begin{rk} \label{rk:level} The highest weight category ${}^w\mathbf{O}^\nu_{\mu,\pm}$ does not depend on the choice of ${\rm o}_{\mu,\pm}$ and $e$ but only on $\mu$, $\nu$ and on the sign of the level, see \cite[thm.~11]{F2}. \end{rk} \begin{rk} Write $D={\rm d}\circ(\bullet)^\diamond$, where ${\rm d}$ is the BGG duality. Then, $D$ restricts to an equivalence of categories ${}^w\mathbf{O}^{\nu,\Delta}_{\mu,+}\to({}^v\mathbf{O}^{\nu,\Delta}_{\mu,-})^{\operatorname{op}\nolimits}$. \end{rk} \subsection{Parabolic inclusion and truncation} \label{sec:tau} Let $i=i_{\nu,\phi}$ be the canonical inclusion ${}^w\mathbf{O}^\nu_{\mu,\pm}\subset{}^w\mathbf{O}_{\mu,\pm}$. It is a fully faithful functor and it admits a left adjoint $\tau=\tau_{\phi,\nu}$ which takes an object of ${}^w\mathbf{O}_{\mu,\pm}$ to its largest quotient which lies in ${}^w\mathbf{O}_{\mu,\pm}^\nu$. We'll call $i$ the {\it parabolic inclusion functor} and $\tau$ the {\it parabolic truncation functor}. Since $i$ is exact, the functor $\tau$ takes projectives to projectives. We have \begin{enumerate} \item[(a)] $i(L(x\bullet {\rm o}_{\mu,\pm}))=L(x\bullet {\rm o}_{\mu,\pm})$ for $x\in{}^w\!I_{\mu,\pm}^\nu$, \item[(b)] $\tau({}^w\!P(x\bullet {\rm o}_{\mu,\pm}))={}^w\!P^\nu(x\bullet {\rm o}_{\mu,\pm})$ for $x\in{}^w\!I_{\mu,\pm}^\nu$, \item[(c)] $\tau({}^w\!P(x\bullet {\rm o}_{\mu,\pm}))=0$ for $x\in{}^w\!I_{\mu,\pm}\setminus{}^w\!I_{\mu,\pm}^\nu$. \end{enumerate} The same argument as in \cite[thm.~3.5.3]{BGS}, using \cite[thm.~5.5]{FG2}, implies that the functor $i$ is injective on extensions in ${}^w\mathbf{O}^\nu_{\mu,-}$. See the proof of Proposition \ref{prop:loc2} for more details. By Lemma \ref{lem:ringeltroncation}, for each $w\in I^\nu_{\mu,-}$, the Ringel equivalence yields equivalences of triangulated categories $\mathbf{D}^b({}^v\mathbf{O}_{\mu,+})\to\mathbf{D}^b({}^w\mathbf{O}_{\mu,-})$ and $\mathbf{D}^b({}^v\mathbf{O}^\nu_{\mu,+})\to\mathbf{D}^b({}^w\mathbf{O}^\nu_{\mu,-})$, where $v=w_+:=w_\nu ww_\mu$, such that the diagram below commutes $$\xymatrix{ \mathbf{D}^b({}^v\mathbf{O}_{\mu,+})\ar[r]&\mathbf{D}^b({}^w\mathbf{O}_{\mu,-})\\ \mathbf{D}^b({}^v\mathbf{O}^\nu_{\mu,+})\ar[r]\ar[u]^-i& \mathbf{D}^b({}^w\mathbf{O}^\nu_{\mu,-}).\ar[u]_-i }$$ Thus, the parabolic inclusion functor $i$ is also injective on extensions in ${}^v\mathbf{O}^\nu_{\mu,+}$. \iffalse \begin{rk}\label{rk:mitchell} Let $\mathbf{P}\subset\mathbf{O}_{\mu,+}^\nu$ be the full subcategory with the set of objects $\{P^\nu(x\bullet{\rm o}_{\mu,+})\,;\,x\in I^\nu_{\mu,+}\}.$ Note that this set is infinite. Consider the algebra $$Q=\bigoplus_{x,y\in I^\nu_{\mu,+}}\operatorname{Hom}\nolimits_{\mathbf{O}_{\mu,+}^\nu}\bigl(P^\nu(x\bullet{\rm o}_{\mu,+}), P^\nu(y\bullet{\rm o}_{\mu,+})\bigr)^{\operatorname{op}\nolimits}.$$ The multiplication is the obvious one. Note that $Q$ has no 1 because $I^\nu_{\mu,+}$ is infinite. The category $Q\mathbf{M}od$ is equivalent to the category of all additive functors $\mathbf{P}^{\operatorname{op}\nolimits}\to\mathbb{C}$. Assigning to $M$ the restriction of the functor $\operatorname{Hom}\nolimits_{\mathbf{O}_{\mu,+}^\nu}(\bullet,M)$ to $\mathbf{P}$ yields an equivalence of abelian $\mathbb{C}$-categories $\mathbf{O}_{\mu,+}^\nu\to Q\mathbf{M}od,$ see \cite[thm.~3.1]{Mi}.
Consider the idempotent $e=\sum_x1_x$ in $Q$, where $x$ runs over the finite set ${}^w\!I^\nu_{\mu,+}$. We have $e\,Qe=\operatorname{End}\nolimits_{\mathbf{O}_{\mu,+}^\nu}\bigl(\bigoplus_{x\in{}^w\!I^\nu_{\mu,+}} P^\nu(x\bullet{\rm o}_{\mu,+})\bigr)^{\operatorname{op}\nolimits}$ as an algebra. Consider the adjoint pair of functors $(F,G)$ given by $$\gathered F:Q\mathbf{M}od\to eQe\mathbf{M}od,\quad M\mapsto e\,M=\operatorname{Hom}\nolimits_Q(Qe,M),\\ G:eQe\mathbf{M}od\to Q\mathbf{M}od,\quad M\mapsto \operatorname{Hom}\nolimits_{eQe}(eQ,M). \endgathered$$ The functor $F$ is a {\it quotient} functor, i.e., it is exact and we have $F G=\mathbf{1}$. The kernel of $F$ is $\mathbf{O}^\nu_{\mu,+}[\not\succcurlyeq\! w]$, where $\preccurlyeq=\geqslant$. Therefore, the functor $F$ factors to an equivalence of abelian categories ${}^w\mathbf{O}^\nu_{\mu,+}\to eQe\operatorname{\!-\mathbf{mod}}\nolimits.$ Since $eQe$ is a finite dimensional algebra, the axioms of a highest weight category are now easily verified for ${}^w\mathbf{O}^\nu_{\mu,+}$. \end{rk} \begin{rk} \label{rk:BGGduality} Taking a module $M\in\mathbf{O}_{\mu,\pm}^\nu$ to its graded dual $M^*=\bigoplus_{\lambda\in\mathbf{t}^*}(M_\lambda)^*,$ with the contragredient $\mathbf{g}$-action, yields a duality on $\mathbf{O}_{\mu,\pm}^\nu$ called the BGG duality. Since the BGG duality fixes the simple modules, it factors to a duality on ${}^w\mathbf{O}_{\mu,\pm}^\nu$. \end{rk} \fi \subsection{The main result} Fix integers $e,f>0$ and parabolic types $\mu,\nu\in\mathcal{P}$. We choose ${\rm o}_{\mu,\pm}$ of level $\pm e-N$ and ${\rm o}_{\nu,\pm}$ of level $\pm f-N$. Fix an element $w\in\widehat W$. Assume that $w\in I^\nu_{\mu,+},$ and set $v=w_-^{-1}\in I^\mu_{\nu,-}$. The main result of this paper is the following. \begin{thm} \label{thm:main} We have $\mathbb{C}$-algebra isomorphisms ${}^w\!A^\nu_{\mu,+}={}^v\!\bar A^\mu_{\nu,-}$ and ${}^w\!\bar A^\nu_{\mu,+}={}^v\! A^\mu_{\nu,-}$ such that $1_x\mapsto 1_y$ with $y=x_-^{-1}$ for each $x\in{}^w\!I_{\mu,\mp}^\nu$. The graded $\mathbb{C}$-algebras ${}^w\!\bar A^\nu_{\mu,+}$ and ${}^v\!\bar A^\mu_{\nu,-}$ are Koszul and balanced. They are Koszul dual to each other, i.e., we have a graded $\mathbb{C}$-algebra isomorphism $({}^w\!\bar A^\nu_{\mu,+})^!={}^v\!\bar A^\mu_{\nu,-}$ such that $1_x^!=1_{y}$ for each $x\in{}^w\!I_{\mu,+}^\nu.$ The categories ${}^w\mathbf{O}^\nu_{\mu,+},$ ${}^v\mathbf{O}^\mu_{\nu,-}$ are Koszul and are Koszul dual to each other. \qed \end{thm} We'll abbreviate ${}^w\!\bar A^{\nu,!}_{\mu,\pm}=({}^w\!\bar A^\nu_{\mu,\pm})^!$. \begin{rk} \label{rk:2.20} Since ${}^w\!\bar A^\nu_{\mu,+}$ and ${}^v\!\bar A^\mu_{\nu,-}$ are balanced, the Koszul duality commutes with the Ringel duality. In particular, for $w\in I^\nu_{\mu,-}$ and $v=w^{-1}\in I^\mu_{\nu,-}$, composing $(\bullet)^!$ and $(\bullet)^\diamond$ we get a graded $\mathbb{C}$-algebra isomorphism $({}^w\!\bar A^{\nu,\diamond}_{\mu,-})^!={}^v\!\bar A^\mu_{\nu,-}$ such that $(1_x^\diamond)^!=1_{y}$ with $y=x^{-1}$ for each $x\in{}^w\!I_{\mu,-}^\nu.$ \end{rk} \section{Moment graphs, deformed category $\mathbf{O}$ and localization} \label{sec:momentgraph} First, we introduce some notation. Fix integers $e,f>0$ and parabolic types $\mu,\nu\in\mathcal{P}$.
Let $V$ be a finite dimensional $\mathbb{C}$-vector space, let $S$ be the symmetric $\mathbb{C}$-algebra over $V$ and let $\mathfrak{m}\subset S$ be the maximal ideal generated by $V$. We may regard $S$ as a graded $\mathbb{C}$-algebra such that $V$ has degree 2. Let $R$ be a commutative, noetherian, integral domain which is a (possibly graded) $S$-algebra with 1. Let $S_0$ be the localization of $S$ at the ideal $\mathfrak{m}$ and let $k$ be the residue field of $S_0$. Note that $k=\mathbb{C}$ and that $S_0$ has no natural grading. \subsection{Moment graphs} \label{sec:3.3} Assume that $R=S$, viewed as a graded $\mathbb{C}$-algebra. \begin{df} A {\it moment graph} over $V$ is a tuple $\mathcal{G}=(I,H,\alpha)$ where $(I,H)$ is a graph with a set of vertices $I$ and a set of edges $H$, each edge joining two vertices, and $\alpha$ is a map $H\to\mathbb{P}(V)$, $h\mapsto k\alpha_h$. \end{df} We say that the moment graph $\mathcal{G}$ is a {\it GKM-graph} if we have $k\alpha_{h_1}\neq k\alpha_{h_2}$ for any edges $h_1\neq h_2$ adjacent to the same vertex. \begin{df} An order on $\mathcal{G}$ is a partial order $\preccurlyeq$ on $I$ such that any two vertices joined by an edge are comparable. \end{df} Given an order $\preccurlyeq$ on $\mathcal{G}$, let $h',h''$ denote the {\it origin} and the {\it goal} of the edge $h$. In other words, $h'$ and $h''$ are the two vertices joined by $h$ and we have $h'\prec h''$. \begin{rk} We use the terminology in \cite{J}. In \cite{F3}, a moment graph is always assumed to be ordered. We'll also assume that $\mathcal{G}$ is {\it finite}, i.e., that the sets $I$ and $H$ are finite. \end{rk} \begin{df} A {\it graded $R$-sheaf over $\mathcal{G}$} is a tuple $\mathcal{M}=(\mathcal{M}_x,\mathcal{M}_h,\rho_{x,h})$ with \begin{itemize} \item a graded $R$-module $\mathcal{M}_x$ for each $x\in I$, \item a graded $R$-module $\mathcal{M}_h$ for each $h\in H$ such that $\alpha_h\mathcal{M}_h=0$, \item a graded $R$-module homomorphism $\rho_{x,h}:\mathcal{M}_x\to\mathcal{M}_h$ for each $h$ adjacent to $x$. \end{itemize} A {\it morphism} $\mathcal{M}\to\mathcal{N}$ is an $I\sqcup H$-tuple $f$ of graded $R$-module homomorphisms $f_x:\mathcal{M}_x\to\mathcal{N}_x$ and $f_h:\mathcal{M}_h\to\mathcal{N}_h$ which are compatible with $(\rho_{x,h})$. \end{df} The $R$-modules $\mathcal{M}_x$ are called the \textit{stalks} of $\mathcal{M}$. For $J\subset I$ we set $$\mathcal{M}(J)=\bigl\{(m_x)_{x\in J}\,;\,m_x\in\mathcal{M}_x,\,\rho_{h',h}(m_{h'})= \rho_{h'',h}(m_{h''}),\ \forall h\,\text{with}\, h',h''\in J\bigr\}.$$ Note that $\mathcal{M}(\{x\})=\mathcal{M}_x$. The {\it space of global sections} of the graded $R$-sheaf $\mathcal{M}$ is the graded $R$-module $\Gamma(\mathcal{M})=\mathcal{M}(I)$. We say that $\mathcal{M}$ is of {\it finite type} if all $\mathcal{M}_x$ and all $\mathcal{M}_h$ are finitely generated graded $R$-modules. If $\mathcal{M}$ is of finite type, then the graded $R$-module $\Gamma(\mathcal{M})$ is finitely generated, because $R$ is noetherian. The {\it graded structural algebra} of $\mathcal{G}$ is the graded $R$-algebra $\bar Z_R=\bigl\{(a_x)\in R^{\oplus I}\,;\,a_{h'}-a_{h''}\in\alpha_h R,\, \forall h\bigr\}.$ \begin{df}\label{def:3.5} Let $\bar\mathbf{F}_{R}$ be the category consisting of the graded $R$-sheaves of finite type over $\mathcal{G}$ whose stalks are torsion free $R$-modules.
Let $\bar\mathbf{Z}_{R}$ be the category consisting of the graded $\bar Z_{R}$-modules which are finitely generated and torsion free over $R$. \end{df} The categories $\bar\mathbf{F}_{R}$ and $\bar\mathbf{Z}_{R}$ are Krull-Schmidt graded $R$-categories (because they are hom-finite $R$-categories and each idempotent splits). Taking the global sections yields a functor $\Gamma:\bar\mathbf{F}_R\to\bar\mathbf{Z}_R.$ We will again call the objects of $\bar\mathbf{Z}_{R}$ graded $R$-sheaves, hoping this will not create any confusion. The global sections functor $\Gamma$ has a left adjoint $\mathcal{L}$ called the {\it localization functor} \cite[thm.~3.6]{F3}, \cite[prop.~2.16, sec.~2.19]{J}. We say that $\mathcal{M}$ is {\it generated by global sections} if the counit $\mathcal{L}\Gamma(\mathcal{M})\to\mathcal{M}$ is an isomorphism, or, equivalently, if $\mathcal{M}$ belongs to the essential image of $\mathcal{L}$. This implies that the obvious map $\Gamma(\mathcal{M})\to\mathcal{M}_x$ is surjective for each $x$. The functor $\Gamma$ is fully faithful on the full subcategory of $\bar\mathbf{F}_{R}$ of the graded $R$-sheaves which are generated by global sections, see \cite[sec.~3.7]{F3}. For all subsets $E\subset H$ and $J\subset I,$ we set $ \mathcal{M}_{E}=\bigoplus_{h\in E}\mathcal{M}_h $ and $$ \rho_{J,E}= \bigoplus_{x\in J}\bigoplus_{h\in E}\rho_{x,h}:\mathcal{M}(J)\to \mathcal{M}_{E}. $$ Let $d_x$ be the set of edges with goal $x$, let $u_x$ be the set of edges with origin $x$ and set $e_x=d_x\sqcup u_x$. Given an order $\preccurlyeq$ on $\mathcal{G},$ we set $\mathcal{M}_{\partial x}=\operatorname{Im}\nolimits(\rho_{\prec x,d_x}),$ $\mathcal{M}^x=\operatorname{Ker}\nolimits(\rho_{x,e_x})$ and $\mathcal{M}_{[x]}=\operatorname{Ker}\nolimits(\rho_{x,d_x}),$ where we abbreviate $\rho_{x,E}=\rho_{\{x\},E}$. Following \cite[sec.~4.3]{FW}, we call $\mathcal{M}^x$ the {\it costalk} of $\mathcal{M}$ at $x$. Note that $\mathcal{M}^x$, $\mathcal{M}_{[x]}$, $\mathcal{M}_x$ are graded $\bar Z_{R}$-modules such that $\mathcal{M}^x\subset\mathcal{M}_{[x]}\subset\mathcal{M}_x.$ Assume from now on that the moment graph $\mathcal{G}$ is a GKM graph. Let us quote the following definitions, see \cite[lem.~3.2]{J} and \cite[lem.~4.8]{F3}. \begin{df} Assume that $\mathcal{M}\in\bar\mathbf{F}_R$ is generated by global sections. Then, $(a)$ $\mathcal{M}$ is {\it flabby} if $\operatorname{Im}\nolimits(\rho_{x,d_x})=\mathcal{M}_{\partial x}$ for each $x\in I$, $(b)$ $\mathcal{M}$ is {\it $\Delta$-filtered} if it is flabby and if the graded $\bar Z_{R}$-module $\mathcal{M}_{[x]}$ is a free graded $R$-module for each $x\in I$. \end{df} Now, we can define the following categories. \begin{df} Let $\bar\mathbf{F}^\Delta_{R,\preccurlyeq}$ be the full subcategory of $\bar\mathbf{F}_{R}$ consisting of the $\Delta$-filtered objects. Let $\bar\mathbf{Z}^\Delta_{R,\preccurlyeq}$ be the essential image of $\bar\mathbf{F}^\Delta_{R,\preccurlyeq}$ by $\Gamma$. \end{df} We may abbreviate $\bar\mathbf{F}^\Delta_{R}=\bar\mathbf{F}^\Delta_{R,\preccurlyeq}$ and $\bar\mathbf{Z}^\Delta_{R}=\bar\mathbf{Z}^\Delta_{R,\preccurlyeq}$ when the order is clear from the context. The categories $\bar\mathbf{F}^\Delta_{R}$ and $\bar\mathbf{Z}^\Delta_{R}$ are Krull-Schmidt graded $R$-categories.
Since $\Delta$-filtered objects are generated by global sections, from the discussion above we deduce that $\mathcal{L}$, $\Gamma$ are mutually inverse equivalences of graded $R$-categories between $\bar\mathbf{F}_{R}^\Delta$ and $\bar\mathbf{Z}_{R}^\Delta$. For each $M\in\bar\mathbf{Z}_{R}^\Delta$ we write $M^x=\mathcal{L}(M)^x,$ $M_{[x]}=\mathcal{L}(M)_{[x]},$ $M_x=\mathcal{L}(M)_x,$ $M_{\prec x}=\mathcal{L}(M)_{\prec x},$ $M_{\partial x}=\mathcal{L}(M)_{\partial x}$ and $M_h=\mathcal{L}(M)_h.$ The categories $\bar\mathbf{F}^\Delta_{R}$ and $\bar\mathbf{Z}^\Delta_{R}$ are exact categories with the exact structures defined as follows. Following \cite[lem.~4.5]{F3}, we say that a sequence $0\to M'\to M\to M''\to 0$ in $\bar\mathbf{Z}_{R}^\Delta$ is exact if and only if the sequence of (free) $R$-modules $0\to M'_{[x]}\to M_{[x]}\to M''_{[x]}\to 0$ is exact for each $x\in I$. This yields an exact structure on $\bar\mathbf{Z}^\Delta_{R}$. We transport it to an exact structure on $\bar\mathbf{F}^\Delta_{R}$ via the equivalence $(\mathcal{L},\Gamma)$. \begin{rk} \label{rk:exact} The forgetful functor $\bar\mathbf{Z}_R^\Delta\to\bar \mathbf{Z}_R$ is exact by \cite[lem.~2.12]{F4}. Here $\bar \mathbf{Z}_R$ is equipped with the exact structure naturally associated with its structure of abelian category. To avoid confusion we'll call it the \emph{stupid} exact structure. \end{rk} \begin{rk} \label{rk:MI} Let $M\in\bar\mathbf{Z}_R^\Delta.$ Then, we have $M\simeq\Gamma\mathcal{L}(M)$ and $M_x=\mathcal{L}(M)_x.$ For each $J\subset I,$ let $M_J$ be the image of the obvious map $M=\Gamma \mathcal{L} (M)\to\bigoplus_{x\in J}M_x,$ see also \cite[def.~2.7]{F4}. It is a graded $R$-module, and we have $M_J=\mathcal{L}(M)(J)$ by \cite[lem.~3.4]{F3}. Since $\bar Z_{R,x}=R$ for all $x$, the set $(\bar Z_R)_J$ is a graded $R$-subalgebra of $R^{\oplus J}$. So $M_J$ is a graded $(\bar Z_R)_J$-module. \end{rk} We'll use two different notions of projective objects in the exact category $\bar\mathbf{Z}_{R}^\Delta$, compare \cite[sec.~3.8]{J}. \begin{df} $(a)$ A module $M\in\bar\mathbf{Z}_{R}^\Delta$ is {\it projective} if the functor $\operatorname{Hom}\nolimits_{\bar\mathbf{Z}_{R}}(M,\bullet)$ on $\bar\mathbf{Z}_{R}^\Delta$ maps short exact sequences to short exact sequences. $(b)$ A module $M\in\bar\mathbf{Z}_{R}$ is {\it F-projective} if $\mathcal{L}(M)$ is flabby, $M_x$ is a projective graded $R$-module for each $x$, $M_h=M_{h'}/\alpha_hM_{h'}$ and $\rho_{h',h}$ is the canonical map for each $h$. \end{df} \begin{rk}\label{rk:Fprojective} If $M\in\bar\mathbf{Z}_{R}^\Delta$ is F-projective, then it is projective \cite[prop.~5.1]{F3}. Note that loc.~cit.~uses \emph{reflexive} graded $R$-sheaves. However, $\Delta$-filtered graded $R$-sheaves are automatically reflexive by \cite[sec.~4]{F3}. \end{rk} We say that an object $M\in\bar\mathbf{Z}_{R}^\Delta$ is {\it tilting} if the contravariant functor $\operatorname{Hom}\nolimits_{\bar\mathbf{Z}_{R}}(\bullet,M)$ on $\bar\mathbf{Z}_{R}^\Delta$ maps short exact sequences to short exact sequences. For each graded $R$-module $M,$ let $M^*$ be its {\it dual graded $R$-module}, i.e., we set $M^*=\bigoplus_i(M^*)^i$ with $(M^*)^i=\operatorname{hom}\nolimits_{R}(M,R\langle i\rangle)$. Here $\operatorname{hom}\nolimits_{R}$ denotes the space of homomorphisms of graded $R$-modules.
Since $\bar Z_{R}$ is commutative, the graded dual $R$-module $M^*$ of a graded $\bar Z_R$-module $M$ is a graded $\bar Z_{R}$-module. As above, let the symbols $\operatorname{Tilt}\nolimits$ and $\operatorname{Proj}\nolimits$ denote the sets of indecomposable tilting and projective objects. By \cite[sec.~4.6]{F3}, the following holds. \begin{prop}\label{prop:F3} The duality $D:\bar\mathbf{Z}_{R}\to\bar\mathbf{Z}_{R}$ such that $M\mapsto M^*$ restricts to an exact equivalence $\bar\mathbf{Z}_{R,\preccurlyeq}^\Delta\to(\bar\mathbf{Z}_{R,\succcurlyeq}^\Delta)^{\operatorname{op}\nolimits}$. In particular, it yields bijections $\operatorname{Proj}\nolimits(\bar\mathbf{Z}_{R,\preccurlyeq}^\Delta)\to\operatorname{Tilt}\nolimits(\bar\mathbf{Z}_{R,\succcurlyeq}^\Delta)$ and $\operatorname{Tilt}\nolimits(\bar\mathbf{Z}_{R,\preccurlyeq}^\Delta)\to\operatorname{Proj}\nolimits(\bar\mathbf{Z}_{R,\succcurlyeq}^\Delta).$ \qed \end{prop} We define the {\it support} of a graded $R$-sheaf $M$ on $\mathcal{G}$ to be the set $\operatorname{supp}\nolimits(M)=\{x\in I\,;\,M_x\neq 0\}.$ We can now turn to the following important definition. \begin{prop-df}[\cite{F3}] \label{df:3.9} Let $(\mathcal{G},\preccurlyeq)$ be an ordered GKM-graph. There is a unique object $\bar B_{R,\preccurlyeq}(x)$ in $\bar\mathbf{Z}_{R}$ which is indecomposable, F-projective, supported on the coideal $\{\succcurlyeq\! x\}$ and with $\bar B_{R,\preccurlyeq}(x)_x=R$. We call $\bar B_{R,\preccurlyeq}(x)$ a graded {\it BM-sheaf}. \end{prop-df} We may abbreviate $\bar B_{R}(x)=\bar B_{R,\preccurlyeq}(x)$ and $\bar C_{R}(x)=\bar C_{R,\preccurlyeq}(x)=D(\bar B_{R,\succcurlyeq}(x)).$ \begin{rk} \label{rk:BM} The existence and unicity of graded BM-sheaves is proved in \cite[thm.~5.2]{F3} using the Braden-MacPherson algorithm \cite[sec.~1.4]{BM}. See also \cite[thm.~6.3]{FW}. The construction of $\bar B_{R}(x)$ is as follows. \begin{itemize} \item Set $\bar B_{R}(x)_y=0$ for $y\not\succcurlyeq x$. \item Set $\bar B_{R}(x)_x=R$. \item Let $y\succ x$ and suppose we have already constructed $\bar B_{R}(x)_z$ and $\bar B_{R}(x)_h$ for any $z,h$ such that $y\succ z,h',h''\succcurlyeq x$. For $h\in d_y$ set \begin{itemize} \item $\bar B_{R}(x)_h=\bar B_{R}(x)_{h'}/\alpha_h\bar B_{R}(x)_{h'}$ and $\rho_{h',h}$ is the canonical map, \item $\bar B_{R}(x)_{\partial y}= \operatorname{Im}\nolimits(\rho_{\prec y,d_y})\subset \bar B_{R}(x)_{d_y},$ \item $\bar B_{R}(x)_y$ is the projective cover of the graded $R$-module $\bar B_{R}(x)_{\partial y}$, \item $\rho_{y,h}$ is the composition of the projective cover map $\bar B_{R}(x)_y\to\bar B_{R}(x)_{\partial y}$ with the obvious projection $\bar B_{R}(x)_{d_y}\to\bar B_{R}(x)_h$. \end{itemize} \end{itemize} \end{rk} \begin{prop} \label{prop:decomposition} An F-projective object in $\bar\mathbf{Z}_{R}$ is a direct sum of objects of the form $\bar B_{R}(x)\langle j\rangle$ with $x\in I$ and $j\in\mathbb{Z}$. \end{prop} \begin{proof} See \cite[prop.~3.9]{J}.\end{proof} Next, following \cite[sec.~4.5]{F3}, we introduce the following.
\begin{prop-df} There is a unique object $\bar V_R(x)\in\bar\mathbf{Z}_R$ that is isomorphic to $R$ as a graded $R$-module and on which the tuple $(z_y)\in\bar Z_R$ acts by multiplication with $z_x$. We call it a {\it graded Verma-sheaf}. \end{prop-df} \begin{prop}\label{prop:vermasheaf} (a) The graded Verma-sheaves are $\Delta$-filtered and self-dual. (b) We have $\bar V_R(x)^y=\bar V_R(x)_y=\bar V_R(x)_{[y]}=R$ if $x=y$ and 0 else. (c) For $M\in\bar\mathbf{Z}_R$ we have $$\operatorname{Hom}\nolimits_{Z_R} \bigl(\bar V_R(x)\langle i\rangle,M\bigr)=M^x\langle-i\rangle,$$ $$\operatorname{Hom}\nolimits_{Z_R} \bigl(M,\bar V_R(y)\langle j\rangle\bigr)=(M_y)^*\langle j\rangle,$$ $$ \operatorname{hom}\nolimits_{\bar Z_R} \bigl(\bar V_R(x)\langle i\rangle,\bar V_R(y)\langle j\rangle\bigr)= \delta_{x,y}\,R^{i-j}.$$ \end{prop} \begin{proof} Obvious, see e.g., \cite[sec.~3.11]{J}. \end{proof} \begin{rk} \label{rk:filtration} A $\Delta$-filtered graded $R$-sheaf $M$ has a filtration $0=M_0\subseteq M_1\subseteq\dots\subseteq M_n=M$ by graded $\bar Z_R$-submodules with \cite[rk.~4.4, sec.~4.5]{F3} $$\gathered \bigoplus_{r=1}^nM_{r}/M_{r-1}=\bigoplus_y M_{[y]},\quad M_{r}/M_{r-1} =\bar V_R(y_r)\langle i_r\rangle,\quad r\leqslant s\Rightarrow y_s\preccurlyeq y_r. \endgathered$$ In particular, a $\Delta$-filtered graded $\bar Z_R$-module is free and finitely generated as a graded $R$-module. \end{rk} \begin{rk} \label{rk:stalk/costalk} From Proposition \ref{prop:vermasheaf}$(c)$ we deduce that $(M_y)^*=(M^*)^y.$ \end{rk} \subsection{Ungraded version and base change} \label{rk:basechange} Assume that $R$ is the localization of $S$ with respect to some multiplicative set. Then, we define an $R$-sheaf over $\mathcal{G}$, its space of global sections, the structural algebra $Z_R$, the categories $\mathbf{F}_{R}$, $\mathbf{Z}_{R}$, etc., in the obvious way, by forgetting the grading in the definitions above. In particular, we define as above an exact category $\mathbf{Z}^\Delta_{R}$. Now, assume that $R=S$. Forgetting the grading yields a faithful and faithfully exact functor $\bar\mathbf{Z}^\Delta_{R}\to\mathbf{Z}^\Delta_{R}$. See, e.g., \cite{Ishi}, for details on faithfully exact functors. Let $V_R(x)$, $B_{R}(x)$, $C_{R}(x)$ be the images of $\bar V_R(x)$, $\bar B_{R}(x)$, $\bar C_{R}(x)$. We call $V_R(x)$ a Verma-sheaf and $B_{R}(x)$ a BM-sheaf. Next, for each morphism of $S$-algebras $R\to R'$ we consider the {\it base change} functor $\bullet\otimes_{R}R':R\operatorname{\!-\mathbf{mod}}\nolimits\to R'\operatorname{\!-\mathbf{mod}}\nolimits$ given by $M\mapsto R'M=M\otimes_RR'$. Assume that $R'$ is the localization of $R$ with respect to some multiplicative subset. Then, the base change yields functors $\bullet\otimes_{R}R':\mathbf{F}_{R}\to\mathbf{F}_{R'}$ and $\bullet\otimes_{R}R':\mathbf{Z}_{R}\to\mathbf{Z}_{R'}$. These functors commute with $\Gamma$ and $\mathcal{L}$, and the canonical map $R'\operatorname{Hom}\nolimits_{\mathbf{Z}_R}(M,N)\to \operatorname{Hom}\nolimits_{\mathbf{Z}_{R'}}(R'M,R'N)$ is an isomorphism, see \cite[sec.~2.7, 2.18, 3.15]{J}.
Further, the base change commutes with the functor $M\mapsto M_{[x]}$, it preserves the flabby sheaves, the Verma sheaves, the F-projective ones and it yields an exact and faithful functor $\mathbf{Z}_{R}^\Delta\to\mathbf{Z}_{R'}^\Delta$, see \cite[sec.~3.15]{J}. Assume now that $R$ is the localization of $S$ with respect to some multiplicative set. Then, we define $V_{R}(x)=R V_S(x),$ $B_{R}(x)=R B_S(x)$ and $C_{R}(x)=R C_S(x).$ Note that $B_{R}(x),$ $C_{R}(x)$ are F-projective by \cite[sec.~3.8]{J}, and that if $R=S_0$ then $B_{R}(x), C_{R}(x)$ are indecomposable by \cite[sec.~3.16]{J}. Further, any $F$-projective $R$-sheaf in $\mathbf{F}_R$ is a direct sum of objects of the form $B_{R}(x)$ with $x\in I$, see \cite[prop.~3.13]{J}. Forgetting the grading and taking the base change, we get the functor $\varepsilon:S\operatorname{\!-\mathbf{gmod}}\nolimits\to S_0\operatorname{\!-\mathbf{mod}}\nolimits$. It is faithful and faithfully exact. Indeed, if $\varepsilon(M)=0$, then there is an element $f\in S\setminus\mathfrak{m}$ such that $fM=0$. Since $M$ is graded, this implies that $M=0$. Hence $\varepsilon$ is faithful, and by \cite[thm.~1.1]{Ishi} it is also faithfully exact. Finally, note that a graded $S$-module $M$ is free over $S$ (as a graded module) if and only if $\varepsilon(M)$ is free over $S_0$. Similarly, consider the functor $\varepsilon:\bar\mathbf{Z}_{S}\to\mathbf{Z}_{S_0}$ given by forgetting the grading and taking the base change. We have the following lemma. \begin{lemma}\label{lem:forget} (a) The functor $\varepsilon$ commutes with the functor $M\mapsto M_{[x]}.$ (b) The functor $\varepsilon:\bar\mathbf{Z}_{S}\to\mathbf{Z}_{S_0}$ is faithful and faithfully exact. (c) For each $M\in\bar\mathbf{Z}_{S},$ we have $M\in\bar\mathbf{Z}_{S}^\Delta$ if and only if $\varepsilon(M)\in\mathbf{Z}^\Delta_{S_0}$. \qed \end{lemma} \subsection{The moment graph of $\mathbf{O}$} \label{sec:3.4} Fix a parabolic type $\mu\in\mathcal{P}$ and fix $w\in\widehat W$. Unless specified otherwise, we'll set $V=\mathbf{t}$. In this section, we define a moment graph ${}^w\mathcal{G}_{\mu,\pm}$ over $\mathbf{t}$ which is naturally associated with the truncated categories ${}^w\mathbf{O}_{\mu,\pm}$. Then, we consider its category of (graded) $R$-sheaves over ${}^w\mathcal{G}_{\mu,\pm}$. In the graded case we'll assume that $R=S$, with the grading above. In the ungraded one, we'll assume that $R=S$ or $S_0$, see the previous section for details. \begin{df} \label{def:momentgraph} Let ${}^w\mathcal{G}_{\mu,\pm}$ be the moment graph over $\mathbf{t}$ whose set of vertices is ${}^w\!I_{\mu,\pm}$, with an edge between $x,y$ labelled by $k\,\check\alpha$ if there is an affine reflection $s_{\alpha}\in\widehat W$ such that $s_\alpha y\in xW_\mu$. We equip ${}^w\mathcal{G}_{\mu,+}$ with the partial order $\preccurlyeq$ given by the opposite Bruhat order, and we equip ${}^w\mathcal{G}_{\mu,-}$ with the partial order $\preccurlyeq$ given by the Bruhat order. \end{df} Let ${}^w\bar Z_{S,\mu,\pm}$ be the graded structural algebra of ${}^w\mathcal{G}_{\mu,\pm}$. Let ${}^w\bar\mathbf{Z}_{S,\mu,\pm}$ be the category of graded ${}^w\bar Z_{S,\mu,\pm}$-modules which are finitely generated and torsion free over $S$. Let ${}^w\bar\mathbf{Z}_{S,\mu,\pm}^\Delta$ be the category of $\Delta$-filtered graded $S$-sheaves on ${}^w\mathcal{G}_{\mu,\pm,\preccurlyeq}$.
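As an illustration of Definition \ref{def:momentgraph} (given here only to fix ideas, and not used in the sequel), take $\mu=\phi$ and $w=s_\alpha$ a simple affine reflection, so that ${}^w\!I_{\phi,+}=\{e,s_\alpha\}$. Then ${}^w\mathcal{G}_{\phi,+}$ has two vertices joined by a single edge labelled by $k\,\check\alpha$, and its graded structural algebra is the set of pairs $(a_e,a_{s_\alpha})\in S^2$ such that $a_e-a_{s_\alpha}\in\check\alpha S$, compare the explicit description of ${}^z\!\bar Z_{S,\phi,\pm}$ recalled at the beginning of Section \ref{sec:translationZ}.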
Let $\bar V_{S,\mu,\pm}(x)\in{}^w\bar\mathbf{Z}_{S,\mu,\pm}$ be the (graded) Verma-sheaf whose stalk at $x$ is $S$. We'll use a similar notation for all other objects attached to ${}^w\mathcal{G}_{\mu,\pm}$. We'll use the same notation as in Section \ref{rk:basechange}. For example ${}^w\!B_{S,\mu,\pm}(x)$ is the nongraded analogue of ${}^w\!\bar B_{S,\mu,\pm}(x)$, and ${}^w\!B_{R,\mu,\pm}(x)=R{}^w\!B_{S,\mu,\pm}(x)$. We also set ${}^w\!\bar Z_{k,\mu,\pm}=k {}^w\!\bar Z_{S,\mu,\pm}$, ${}^w\! Z_{k,\mu,\pm}=k {}^w\! Z_{S_0,\mu,\pm}$, ${}^w\bar\mathbf{Z}_{k,\mu,\pm}={}^w\!\bar Z_{k,\mu,\pm}\operatorname{\!-\mathbf{gmod}}\nolimits$ and ${}^w\mathbf{Z}_{k,\mu,\pm}={}^w\!Z_{k,\mu,\pm}\operatorname{\!-\mathbf{mod}}\nolimits$. In the graded case, we put ${}^w\!\bar B_{k,\mu,\pm}(x)= k{}^w\!\bar B_{S,\mu,\pm}(x)$. In the non graded case, we put ${}^w\! B_{k,\mu,\pm}(x)= k{}^w\! B_{S_0,\mu,\pm}(x)$. To unburden the notation we may omit the subscript $k$ and write ${}^w\! Z_{\mu,\pm}={}^w\! Z_{k,\mu,\pm}$, ${}^w\mathbf{Z}_{\mu,\pm}={}^w\mathbf{Z}_{k,\mu,\pm}$, etc. \begin{prop} \label{prop:BMD} The (graded) BM-sheaves on ${}^w\mathcal{G}_{\mu,\pm}$ are $\Delta$-filtered. \end{prop} \begin{proof} The (graded) BM-sheaves on ${}^w\mathcal{G}_{\mu,-}$ are $\Delta$-filtered by \cite[thm.~5.2]{F3}. We claim that the (graded) BM-sheaves on ${}^w\mathcal{G}_{\mu,+}$ are also $\Delta$-filtered. By Section \ref{rk:basechange}, a graded $S$-sheaf $M$ is $\Delta$-filtered if and only if the $S_0$-sheaf $\varepsilon(M)$ is $\Delta$-filtered. Now, by Proposition \ref{prop:foncteurV}$(c)$ below, the BM-sheaves on ${}^w\mathcal{G}_{\mu,+}$ are $\Delta$-filtered for $R=S_0$. This proves our claim. \end{proof} Let $l(x)$ be the length of an element $x\in\widehat W$. From Propositions \ref{df:3.9}, \ref{prop:BMD}, we deduce the following. \begin{prop}\label{prop:B} For each $x\in{}^w\!I_{\mu,\pm},$ there is a unique $\Delta$-filtered graded $S$-sheaf ${}^w\!\bar B_{S,\mu,\pm}(x)\in{}^w\bar\mathbf{Z}_{S,\mu,\pm}^\Delta$ which is indecomposable, F-projective, supported on the coideal $\{\succcurlyeq\! x\}$ and whose stalk at $x$ is equal to $S\langle\pm l(x)\rangle$. \qed \end{prop} Assume that $w\in I_{\mu,+}$ and let $v=w_-\in I_{\mu,-}$. From the anti-isomorphism of posets ${}^w\!I_{\mu,+}\simeq{}^v\!I_{\mu,-}$ in Corollary \ref{lem:C2}(c), we deduce that ${}^w\mathcal{G}_{\mu,+}$ and ${}^v\mathcal{G}_{\mu,-}$ are the same moment graph with opposite orders. In particular, we have ${}^w\bar Z_{S,\mu,+}={}^v\bar Z_{S,\mu,-}$ as graded algebras. Further, by Proposition \ref{prop:F3}, there is an exact equivalence $$D: {}^w\bar\mathbf{Z}_{S,\mu,+}^\Delta\to({}^v\bar\mathbf{Z}_{S,\mu,-}^\Delta)^{\operatorname{op}\nolimits}.$$ The same is true if $+$ and $-$ are switched everywhere. We define ${}^w\bar C_{S,\mu,\pm}(x)=D({}^v\!\bar B_{S,\mu,\mp}(y))$ with $y=x_{\mp}$, $v=w_\mp$ for each $x\in {}^w\!I_{\mu,\pm}$. It is a $\Delta$-filtered graded $S$-sheaf on ${}^w\mathcal{G}_{\mu,\pm}$ which is indecomposable, tilting, supported on the ideal $\{\preccurlyeq\! x\}$ and whose costalk at $x$ is equal to $S\langle\pm l(x)\rangle$.
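For instance (an elementary illustration of the normalization in Proposition \ref{prop:B}), for the unit element $e\in{}^w\!I_{\mu,+}$ the coideal $\{\succcurlyeq\! e\}$ is reduced to $\{e\}$, because $\preccurlyeq$ is the opposite Bruhat order on ${}^w\!I_{\mu,+}$; hence the construction recalled in Remark \ref{rk:BM} gives ${}^w\!\bar B_{S,\mu,+}(e)=\bar V_{S,\mu,+}(e)$, with no grading shift since $l(e)=0$.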
We abbreviate ${}^w\!\bar B_{S,\mu,\pm}=\bigoplus_x{}^w\!\bar B_{S,\mu,\pm}(x)$ and ${}^w\bar C_{S,\mu,\pm}=\bigoplus_x{}^w\bar C_{S,\mu,\pm}(x).$ By Remark \ref{rk:filtration} we have the following lemma. \begin{lemma} \label{rk:filtration2} (a) The graded $S$-sheaf ${}^w\!\bar B_{S,\mu,\pm}(x)$ is filtered by graded Verma-sheaves. The top term in this filtration is $\bar V_{S,\mu,\pm}(x)\langle\pm l(x)\rangle$, which is a quotient of ${}^w\!\bar B_{S,\mu,\pm}(x).$ The other subquotients are of the form $\bar V_{S,\mu,\pm}(y)\langle j\rangle$ with $y\succ x$ and $j\in\mathbb{Z}$. (b) The graded $S$-sheaf ${}^w\bar C_{S,\mu,\pm}(x)$ is filtered by graded Verma-sheaves. The first term in this filtration is the sub-object $\bar V_{S,\mu,\pm}(x)\langle\pm l(x)\rangle.$ The other subquotients are of the form $\bar V_{S,\mu,\pm}(y)\langle j\rangle$ with $y\prec x$ and $j\in\mathbb{Z}$. \end{lemma} Since ${}^w\bar Z_{S,\mu,+}={}^v\bar Z_{S,\mu,-}$ we may also regard ${}^v\bar C_{S,\mu,-}(x_-)=D({}^w\!\bar B_{S,\mu,+}(x))$ as a graded ${}^w\bar Z_{S,\mu,+}$-module. The following proposition says that ${}^w\!\bar B_{S,\mu,+}(x)$ is self-dual. \begin{prop} \label{prop:duality} For each $x\in{}^w\!I_{\mu,+},$ we have ${}^w\!\bar B_{S,\mu,+}(x)={}^v\bar C_{S,\mu,-}(x_-)$ as graded ${}^w\bar Z_{S,\mu,+}$-modules. \end{prop} \begin{proof} If $\mu=\phi$ is regular then the claim is \cite[thm.~6.1]{F4}. Assume now that $\mu$ is not regular. We abbreviate $\bar B_\mu(x)={}^w\!\bar B_{S,\mu,+}(x)$. We must prove that $D(\bar B_\mu(x))=\bar B_\mu(x).$ The regular case implies that $D(\bar B_\phi(x))=\bar B_\phi(x)$. We can assume that $w\in I_{\mu,-}$ and that $x\in {}^w\!I_{\mu}$. From Proposition \ref{prop:theta}$(e)$, $(f)$ below, we deduce that $$\bigoplus_{y\in W_\mu}D(\bar B_\mu(x))\langle 2l(y)-l(w_\mu)\rangle\oplus D(M)= \bigoplus_{y\in W_\mu}\bar B_\mu(x)\langle l(w_\mu)-2l(y)\rangle\oplus M,$$ where $M$ is a direct sum of objects of the form $\bar B_\mu(z)\langle j\rangle$ with $z< x$. We have also $D(\bar B_\mu(e))=\bar B_\mu(e)$. Thus, by induction, we may assume that $D(\bar B_\mu(z))=\bar B_\mu(z)$ for all $z<x$, and the equality above implies that $D(\bar B_\mu(x))=\bar B_\mu(x).$ \end{proof} \begin{rk} The graded sheaf ${}^w\!\bar B_{S,\mu,-}(x)$ is not self-dual, i.e., ${}^w\!\bar B_{S,\mu,-}(x)$ is not isomorphic to ${}^v\!\bar C_{S,\mu,+}(x_+)$ in general. \end{rk} Let $P_{y,x}^{\mu,q}(t)=\sum_iP_{y,x,i}^{\mu,q}\,t^i$ be Deodhar's parabolic Kazhdan-Lusztig polynomial ``of type $q$'' associated with the parabolic subgroup $W_\mu$ of $\widehat W$. Let $Q_{x,y}^{\mu,-1}(t)=\sum_iQ_{x,y,i}^{\mu,-1}\,t^i$ be Deodhar's inverse parabolic Kazhdan-Lusztig polynomial ``of type $-1$''. We use the notation in \cite[rk.~2.1]{KT2}, which is not the usual one. We abbreviate $P_{y,x}=P_{y,x}^{\phi,q}$ and $Q_{y,x}=Q_{y,x}^{\phi,-1}$.
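Let us also record an elementary normalization check, included only as an illustration: comparing Proposition \ref{prop:3.26}$(a)$ below at $y=x$ with the normalization ${}^w\!\bar B_{S,\mu,+}(x)_x=S\langle l(x)\rangle$ of Proposition \ref{prop:B} forces $P^{\mu,q}_{x,x,0}=1$ and $P^{\mu,q}_{x,x,i}=0$ for $i>0$, i.e., $P^{\mu,q}_{x,x}=1$.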
\begin{prop}\label{prop:3.26} For each $x,y\in{}^w\!I_{\mu,+},$ we have graded $S$-module isomorphisms (a) ${}^w\!\bar B_{S,\mu,+}(x)_{y} =\bigoplus_{i\geqslant 0}(S\langle l(x)-2i\rangle)^{\oplus P^{\mu,q}_{y,x,i}},$ (b) ${}^w\!\bar B_{S,\mu,+}(x)_{[y]} =\bigoplus_{i\geqslant 0}(S\langle 2l(y)-l(x)+2i\rangle)^{\oplus P^{\mu,q}_{y,x,i}}.$ \end{prop} \begin{proof} Note that any finitely generated projective $S$-module is free, and that the $S$-module ${}^w\!\bar B_{S,\mu,+}(x)_{y}$ is projective because ${}^w\!\bar B_{S,\mu,+}(x)$ is F-projective. Hence, part $(a)$ follows from Proposition \ref{prop:loc1} below and \cite[thm.~1.4]{KT2}. Part $(b)$ follows from $(a)$ and Proposition \ref{prop:duality} as in \cite[prop.~7.1]{F4} (where it is proved for $\mu$ regular). More precisely, write $\bar V(x)=\bar V_{S,\phi}(x),$ $\bar B(x)={}^w\!\bar B_{S,\mu,+}(x)$ and $Z={}^w\! Z_{S,\mu,+}.$ Then, we have $$\bar B(x)^y=\operatorname{Ker}\nolimits(\rho_{y,e_y})\subset\bar B(x)_{[y]}=\operatorname{Ker}\nolimits(\rho_{y,d_y})\subset \bar B(x)_y.$$ In particular, for $\alpha_{u_y}=\prod_{h\in u_y}\alpha_h$, we have $\alpha_{u_y}\,\bar B(x)_{[y]}\subset\bar B(x)^y.$ Next, the graded $S$-module $\bar B(x)_y$ is free and $\bar B(x)_h=\bar B(x)_y/\alpha_h \bar B(x)_y$ for $h\in u_y$ because $\bar B(x)$ is F-projective. Thus we have $\bar B(x)^y\subset\alpha_{u_y}\,\bar B(x)_y,$ because $\alpha_{h_1}$ is prime to $\alpha_{h_2}$ in $S$ if $h_1\neq h_2$. We claim that $\bar B(x)^y=\alpha_{u_y}\,\bar B(x)_{[y]}.$ Let $b\in\bar B(x)^y$. Write $b=\alpha_{u_y}\, b'$ with $b'\in\bar B(x)_y$. If $\rho_{y,h}(b')\neq 0$ with $h\in d_y$ then $\rho_{y,h}(b)=\alpha_{u_y}\,\rho_{y,h}(b')\neq 0,$ because the $\alpha_{u_y}$-torsion submodule of $\bar B(x)_h$ is zero (we have $\bar B(x)_h=\bar B(x)_{h'}/\alpha_h \bar B(x)_{h'}$, the $S$-module $\bar B(x)_{h'}$ is free, and $\alpha_h$ is prime to $\alpha_{u_y}$). This implies the claim. In particular, we have $\bar B(x)_{[y]}=\bar B(x)^y\langle 2l(y)\rangle$ because $\sharp u_y=l(y)$. Hence, by Remark \ref{rk:stalk/costalk} and Proposition \ref{prop:duality} we have a graded $S$-module isomorphism $$\aligned (\bar B(x)_y)^*\langle 2l(y)\rangle=\bar B(x)^y\langle 2l(y)\rangle =\bar B(x)_{[y]}. \endaligned$$ \end{proof} \begin{df}\label{def:3.23} Consider the graded $S$-algebra ${}^w\!\bar \mathscr{A}_{S,\mu,\pm}= \operatorname{End}\nolimits_{{}^w\! Z_{S,\mu,\pm}}\bigl({}^w\!\bar B_{S,\mu,\pm}\bigr)^{\operatorname{op}\nolimits}$ and the graded $k$-algebra ${}^w\!\bar \mathscr{A}_{\mu,\pm}= k{}^w\!\bar \mathscr{A}_{S,\mu,\pm}.$ Set also ${}^w\!\mathscr{A}_{S_0,\mu,\pm}= \operatorname{End}\nolimits_{{}^w\! Z_{S_0,\mu}}\bigl({}^w\!B_{S_0,\mu,\pm}\bigr)^{\operatorname{op}\nolimits}$ and let ${}^w\! \mathscr{A}_{\mu,\pm}$ be the $k$-algebra underlying ${}^w\!\bar \mathscr{A}_{\mu,\pm}.$ \end{df} For each $x\in{}^w\!I_{\mu,\pm},$ let $1_x\in {}^w\!\bar \mathscr{A}_{\mu,\pm}$ be the idempotent associated with the direct summand ${}^w\!\bar B_{S,\mu,\pm}(x)$ of ${}^w\!\bar B_{S,\mu,\pm}$. \begin{prop} \label{prop:2.24} The graded $k$-algebra ${}^w\!\bar \mathscr{A}_{\mu,\pm}$ is basic.
Its Hilbert polynomial is $$\gathered P({}^w\!\bar \mathscr{A}_{\mu,+},t)_{x,x'}= \sum_{y\leqslant x,x'} P^{\mu,q}_{y,x}(t^{-2})\,P^{\mu,q}_{y,x'}(t^{-2})\,t^{l(x)+l(x')-2l(y)}, \\ P({}^w\!\bar \mathscr{A}_{\mu,-},t)_{x,x'}= \sum_{y\geqslant x,x'} Q^{\mu,-1}_{x,y}(t^{-2})\,Q^{\mu,-1}_{x',y}(t^{-2})\,t^{2l(y)-l(x)-l(x')}. \endgathered$$ If $\mu=\phi$ then the matrix equation $P({}^w\!\bar \mathscr{A}_{\phi,-},t)\,P({}^v\!\bar \mathscr{A}_{\phi,+},-t)=1$ holds. Here $v=w^{-1}$ and the sets of indices of the matrices in the left and right factors are identified through the map $x\mapsto x^{-1}$. \end{prop} \begin{proof} First, note that if $M,B\in\bar\mathbf{Z}^\Delta_S$ and if $B$ is projective, then the filtration $(M_r)$ of $M$ in Remark \ref{rk:filtration} yields a filtration of the graded $S$-module $\operatorname{Hom}\nolimits_{Z_S}(B,M)$ whose associated graded is, see Proposition \ref{prop:vermasheaf}, $$\aligned \bigoplus_{r=1}^n\operatorname{Hom}\nolimits_{Z_S}(B,M_r/M_{r-1}) =\bigoplus_{r=1}^n(B_{y_r})^*\langle i_r\rangle. \endaligned$$ Further, for each $x',$ the graded BM-sheaf ${}^w\!\bar B_{S,\mu,+}(x')$ is projective by Remark \ref{rk:Fprojective} and Proposition \ref{prop:B}. Therefore, for each $x,x',$ there is a finite filtration of the graded $S$-module $\operatorname{Hom}\nolimits_{{}^w\!Z_{S,\mu,+}} \bigl({}^w\!\bar B_{S,\mu,+}(x'),{}^w\!\bar B_{S,\mu,+}(x)\bigr)$ whose associated graded is, see Proposition \ref{prop:3.26}, $$\aligned &\bigoplus_{y}\bigoplus_{i\geqslant 0} \bigl({}^w\!\bar B_{S,\mu,+}(x')_y\bigr)^* \langle 2l(y)-l(x)+2i\rangle^{\oplus P^{\mu,q}_{y,x,i}} \\ &=\bigoplus_{y}\bigoplus_{i,i'\geqslant 0} S\langle 2l(y)-l(x)-l(x')+2i+2i'\rangle^{\oplus P^{\mu,q}_{y,x,i}\, P^{\mu,q}_{y,x',i'}}. \endaligned$$ Thus we have a graded $k$-vector space isomorphism $$1_x{}^w\!\bar \mathscr{A}_{\mu,+}1_{x'}= \bigoplus_{y\leqslant x,x'}\bigoplus_{i,i'\geqslant 0} k\langle 2l(y)-l(x)-l(x')+2i+2i'\rangle^{\oplus P^{\mu,q}_{y,x,i}\,P^{\mu,q}_{y,x',i'}},$$ where $y,x,x'$ run over ${}^w\!I_{\mu,+}$. We deduce that $$P({}^w\!\bar \mathscr{A}_{\mu,+},t)_{x,x'}= \sum_{y\leqslant x,x'}P^{\mu,q}_{y,x}(t^{-2})\,P^{\mu,q}_{y,x'}(t^{-2})\, t^{l(x)+l(x')-2l(y)}.$$ In other words, the matrix equation $P({}^w\!\bar \mathscr{A}_{\mu,+},t)=P_{\mu,+}(t)^T\,P_{\mu,+}(t)$ holds with $P_{\mu,+}(t)_{y,x}=P^{\mu,q}_{y,x}(t^{-2})\,t^{l(x)-l(y)}$ and $x,y\in{}^w\!I_{\mu,+}.$ Note that $P^{\mu,q}_{y,x,i}=0$ if $l(y)-l(x)+2i>0$ and that if $l(y)-l(x)+2i=0$ then $P^{\mu,q}_{y,x,i}=0$ unless $y=x$. Thus we have $P({}^w\!\bar \mathscr{A}_{\mu,+},t)_{x,x'}\in\delta_{x,x'}+t\mathbb{N}[[t]].$ Hence the graded $k$-algebra ${}^w\!\bar\mathscr{A}_{\mu,+}$ is basic. Set $Q_{\mu,-}(t)_{x,y}=Q^{\mu,-1}_{x,y}(t^{-2})\,t^{l(y)-l(x)}$ for each $x,y\in{}^w\!I_{\mu,+}.$ The matrix equation $P({}^w\!\bar \mathscr{A}_{\mu,-},t)=Q_{\mu,-}(t)\,Q_{\mu,-}(t)^T$ is proved in Proposition \ref{prop:IC} below. Hence ${}^w\!\bar \mathscr{A}_{\mu,-}$ is also basic.
Next, for each $x,x',y\in {}^w\!I_{\mu,+}$ and $a=-1,q$ we have, see e.g., \cite[(2.39)]{KT2} $$\sum_{x\leqslant y\leqslant x'} (-1)^{l(y)-l(x)}Q^{\mu,a}_{x,y}(t)\,P^{\mu,a}_{y,x'}(t)=\delta_{x,x'}.$$ Further, if $\mu=\phi$ is regular then we have $P_{y,x}(t)=P^{\mu,q}_{y,x}(t)$ and $Q_{x,y}(t)=Q^{\mu,q}_{x,y}(t)$. So, we have the matrix equations $Q_{\phi,-}(t)\,P_{\phi,+}(-t)=1$, $P_{\phi,+}(-t)\,Q_{\phi,-}(t)= 1.$ We deduce that $$\gathered P({}^w\!\bar \mathscr{A}_{\phi,-},t)P({}^w\!\bar \mathscr{A}_{\phi,+},-t)= Q_{\phi,-}(t)\,Q_{\phi,-}(t)^T\,P_{\phi,+}(-t)^T\,P_{\phi,+}(-t)=1. \endgathered$$ The matrix equation in the proposition follows easily, using the fact that $P_{y,x}=P_{y^{-1},x^{-1}}$ and $Q_{x,y}=Q_{x^{-1},y^{-1}}$. \end{proof} \begin{rk}\label{rk:3.29} The grading on ${}^w\!\bar \mathscr{A}_{\mu,\pm}$ is positive, because $P^{\mu,q}_{y,x,i}=0$ if $l(y)-l(x)+2i>0$ and $P^{\mu,q}_{y,x,i}=0$ if $l(y)-l(x)+2i=0$ and $y\neq x$. \end{rk} \begin{rk} The results in this section have obvious analogues in finite type. In this case, we may omit the truncation and Proposition \ref{prop:3.26}, Corollary \ref{cor:B1} yield $$ \aligned \bar B_{S,\mu,+}(x)_{y} &=\bigoplus_{i\geqslant 0}S\langle l(x)-2i\rangle^{\oplus P^{\mu,q}_{y,x,i}}\\ \bar B_{S,\mu,-}(x)_y\{l(w_0)\} &=\bigoplus_{i\geqslant 0}S\langle l(w_0x)-2i\rangle^{\oplus Q^{\mu,-1}_{x,y,i}}, \endaligned $$ where $w_0$ is the longest element in the Weyl group $W$. Indeed, we have \begin{equation*} \omega^*\bar B_{S,\mu,+}(w_0xw_\mu)=\bar B_{S,\mu,-}(x)\langle l(w_0w_\mu)\rangle, \end{equation*} where $\omega:\mathcal{G}_{\mu,-}\to\mathcal{G}_{\mu,+}$ is the ordered moment graph isomorphism induced by the bijections $I_{\mu,+}\to I_{\mu,+},$ $x\mapsto w_0xw_\mu$ and $\mathbf{t}\to\mathbf{t},$ $h\mapsto w_0(h).$ Note that \cite[prop.~2.4,2.6]{KT2} and Kazhdan-Lusztig's inversion formula give $Q^{\mu,-1}_{x,y}=P^{\mu,q}_{w_0yw_\mu,w_0xw_\mu}.$ \end{rk} \subsection{Deformed category $\mathbf{O}$} \label{sec:3.6} Assume that $R=S_0$ or $k$. If $R=k$ we may drop it from the notation. We define $\mathbf{O}_{R,\mu,\pm}$ as the category consisting of the $(\mathbf{g},R)$-bimodules $M$ such that \begin{itemize} \item $M=\bigoplus_{\lambda\in \mathbf{t}^*} M_\lambda$ with $M_\lambda=\{m\in M\,;\,xm=(\lambda(x)+x)\cdot m,\,x\in\mathbf{t}\}$, \vskip1mm \item $U(\mathbf{b})\,(R\cdot m)$ is finitely generated over $R$ for each $m\in M$, \vskip1mm \item the highest weight of any simple subquotient is linked to ${{\rm{o}}}_{\mu,\pm}$. \end{itemize} Here, for each $r\in R$ and each element $m$ of an $R$-module $M$, the element $r\cdot m$ is the action of $r$ on $m$. Further, in the symbol $\lambda(x)+x$ we view $x$ as the element of $R$ given by the image of $x$ under the canonical map $S=S(\mathbf{t})\to R$. The morphisms are the $(\mathbf{g},R)$-bimodule homomorphisms. Next, consider the following categories \begin{itemize} \item ${}^w\mathbf{O}_{R,\mu,-}$ is the Serre subcategory of $\mathbf{O}_{R,\mu,-}$ consisting of the finitely generated modules such that the highest weight of any simple subquotient is of the form $\lambda=x\bullet{{\rm{o}}}_{\mu,-}$ with $x\preccurlyeq w$ and $x\in I_{\mu,-}$.
\item $^w\mathbf{O}_{R,\mu,+}$ is the category of the finitely generated objects in the Serre quotient of $\mathbf{O}_{R,\mu,+}$ by the Serre subcategory consisting of the modules such that the highest weight of any simple subquotient is of the form $\lambda=x\bullet{{\rm{o}}}_{\mu,+}$ with $x\not\succcurlyeq w$ and $x\in I_{\mu,+}$. We'll use the same notation for a module in $\mathbf{O}_{R,\mu,+}$ and a module in the quotient category $^w\mathbf{O}_{R,\mu,+}$, hoping it will not create any confusion. \end{itemize} For each $\lambda\in\mathbf{t}^*$ let $R_\lambda$ be the $(\mathbf{t},R)$-bimodule which is free of rank one over $R$ and such that the element $x\in\mathbf{t}$ acts by multiplication by the image of the element $\lambda(x)+x$ by the canonical map $S\to R$. The {\it deformed Verma module} with highest weight $\lambda$ is the $(\mathbf{g},R)$-bimodule given by $V_{R}(\lambda)=U(\mathbf{g})\otimes_{U(\mathbf{b})}R_\lambda.$ \begin{prop} \label{prop:deformedhw} (a) The category ${}^w\mathbf{O}_{R,\mu,\pm}$ is a highest weight $R$-category. The standard objects are the deformed Verma modules $V_{R}(x\bullet{{\rm{o}}}_{\mu,\pm})$ with $x\in{}^w\!I_{\mu,\pm}$. (b) Assume that $w\in I^\nu_{\mu,\pm}.$ The Ringel equivalence gives an equivalence $(\bullet)^\diamond:{}^w\mathbf{O}_{R,\mu,\pm}^\Delta\to{}^v\mathbf{O}_{R,\mu,\mp}^\nabla$ where $v=w_\mp$. \end{prop} \begin{proof} First, we consider the category ${}^w\mathbf{O}_{R,\mu,-}$. The deformed Verma modules are split, i.e., their endomorphism ring is $R$. Further, the $R$-category ${}^w\mathbf{O}_{R,\mu,-}$ is Hom finite. Thus we must check that ${}^w\mathbf{O}_{R,\mu,-}$ has a projective generator and that the projective modules are $\Delta$-filtered. Both statements follow from \cite[thm.~2.7]{F1}. Next, we consider the category ${}^w\mathbf{O}_{R,\mu,+}$. Once again it is enough to check that ${}^w\mathbf{O}_{R,\mu,+}$ has a projective generator and that the projective modules are $\Delta$-filtered. By \cite[thm.~2.7]{F1}, a simple module $L(x\bullet{{\rm{o}}}_{\mu,+})$ in $\mathbf{O}_{R,\mu,+}$ has a projective cover $P_R(x\bullet{{\rm{o}}}_{\mu,+})$. Note that the deformed category O in loc.~cit.~is indeed a subcategory of $\mathbf{O}_{R,\mu,+}$ containing all finitely generated modules. Since $P_R(x\bullet{{\rm{o}}}_{\mu,+})$ is finitely generated, the functor $\operatorname{Hom}\nolimits_{\mathbf{O}_{R,\mu,+}}(P_R(x\bullet{{\rm{o}}}_{\mu,+}),\bullet)$ commutes with direct limits. Thus, since any module in $\mathbf{O}_{R,\mu,+}$ is the direct limit of its finitely generated submodules, the module $P_R(x\bullet{{\rm{o}}}_{\mu,+})$ is again projective in $\mathbf{O}_{R,\mu,+}$. Now, a standard argument using \cite[thm.~3.1]{Mi} shows that the functor $$\operatorname{Hom}\nolimits_{\mathbf{O}_{R,\mu,+}}\bigl(\bigoplus_x P_R(x\bullet{{\rm{o}}}_{\mu,+}),\bullet\bigr), \quad x\in{}^w\!I_{\mu,+},$$ factors to an equivalence of abelian $R$-categories ${}^w\mathbf{O}_{R,\mu,+}\to A\operatorname{\!-\mathbf{mod}}\nolimits,$ where $A$ is the finite projective $R$-algebra $$A=\operatorname{End}\nolimits_{\mathbf{O}_{R,\mu,+}}\bigl(\bigoplus_x P_R(x\bullet{{\rm{o}}}_{\mu,+})\bigr)^{\operatorname{op}\nolimits}.$$ Therefore by Proposition \ref{prop:ringel}(a) and \cite[thm.~4.15]{Ro} the category ${}^w\mathbf{O}_{R,\mu,+}$ is a highest weight category over $R$.
This finishes the proof of $(a)$. By \cite[sec.~2.6]{F2} there is an equivalence of exact categories $\mathbf{O}^{\Delta}_{R,\mu,+}{\buildrel\sim\over\to}\,(\mathbf{O}^{\Delta}_{R,\mu,-})^{\operatorname{op}\nolimits}$ analogous to the one in Lemma \ref{lem:ringel-soergel}. It maps $V_R(x\bullet{{\rm{o}}}_{\mu,+})$ to $V_R(x_-\bullet{{\rm{o}}}_{\mu,-}),$ and $P_R(x\bullet{{\rm{o}}}_{\mu,+})$ to $T_R(x_-\bullet{{\rm{o}}}_{\mu,-}).$ We deduce that $({}^v\mathbf{O}_{R,\mu,-})^\diamond={}^w\mathbf{O}_{R,\mu,+}$ for $w\in I_{\mu,+}$ and $v=w_-\in I_{\mu,-}$. Now part (b) follows from generalities on Ringel equivalences, see Section \ref{sec:R}. \end{proof} Let ${}^w\!P_{R}(x\bullet{{\rm{o}}}_{\mu,\pm})$ and ${}^wT_{R}(x\bullet{{\rm{o}}}_{\mu,\pm})$ be respectively the projective and the tilting objects in the highest weight categories ${}^w\mathbf{O}_{R,\mu,\pm}$ as given by Lemma \ref{lem:hwtbasic}. Set ${}^wP_{R,\mu,\pm}=\bigoplus_x{}^wP_R(x\bullet{{\rm{o}}}_{\mu,\pm})$ and ${}^wT_{R,\mu,\pm}=\bigoplus_x{}^wT_R(x\bullet{{\rm{o}}}_{\mu,\pm})$, where $x$ runs over the set ${}^wI_{\mu,\pm}$. Composing the Ringel duality and the BGG duality, we get an equivalence \begin{equation}\label{eq:ringelbgg} D={{\rm{d}}}\circ(\bullet)^\diamond:{}^w\mathbf{O}_{R,\mu,\pm}^\Delta\to({}^v\mathbf{O}_{R,\mu,\mp}^\Delta)^{\operatorname{op}\nolimits}. \end{equation} Consider the base change functor ${}^w\mathbf{O}_{S_0,\mu,\pm}\to{}^w\mathbf{O}_{k,\mu,\pm}$ such that $M\mapsto kM.$ By Proposition \ref{prop:2.3} we have $k{}^w\!P_{S_0}(x\bullet{{\rm{o}}}_{\mu,\pm})= {}^w\!P(x\bullet{{\rm{o}}}_{\mu,\pm})$ and $k{}^wT_{S_0}(x\bullet{{\rm{o}}}_{\mu,\pm})= {}^wT(x\bullet{{\rm{o}}}_{\mu,\pm})$ for each $x\in{}^w\!I_{\mu,\pm}$. \subsection{The functor $\mathbb{V}$} Assume that $R=S_0$ or $k$. There is a well-defined functor $$\mathbb{V}_{\! R}=\operatorname{Hom}\nolimits_{{}^w\mathbf{O}_{R,\mu,-}}({}^w\!P({{\rm{o}}}_{\mu,-}),\bullet): {}^w\mathbf{O}_{R,\mu,-}^\Delta\to {}^w\mathbf{Z}_{R,\mu,-},$$ called the \emph{structure functor}, see \cite[sec.~3.4]{F2}. At the positive level, following \cite[sec. before rem.~6]{F2}, we define the structure functor $\mathbb{V}_{\! R}$ as the composition \[\xymatrix{{}^w\mathbf{O}_{R,\mu,+}^\Delta\ar[r]^{D} &({}^v\mathbf{O}_{R,\mu,-}^\Delta)^{\operatorname{op}\nolimits}\ar[r]^{\mathbb{V}_{\! R}} &({}^v\mathbf{Z}_{R,\mu,-})^{\operatorname{op}\nolimits}\ar[r]^{D}&{}^w\mathbf{Z}_{R,\mu,+}. }\] \begin{prop} \label{prop:foncteurV} Let $R=S_0$ or $k$. The following hold (a) $\mathbb{V}_{\! R}$ is exact on ${}^w\mathbf{O}_{R,\mu,-}$ and on ${}^w\mathbf{O}_{R,\mu,+}^\Delta$, (b) $\mathbb{V}_{\! R}$ commutes with base change, (c) the functor $\mathbb{V}_{S_0}$ on ${}^w\mathbf{O}_{S_0,\mu,\pm}^\Delta$ is such that \begin{itemize} \item[(i)] $\mathbb{V}_{\! S_0}\circ D=D\circ \mathbb{V}_{\! S_0},$ \item[(ii)] $\mathbb{V}_{\! S_0}$ is an equivalence of exact categories ${}^w\mathbf{O}_{S_0,\mu,\pm}^\Delta\to{}^w\mathbf{Z}_{S_0,\mu,\pm}^\Delta,$ \item[(iii)] for each $x\in {}^w\!I_{\mu,\pm}$ we have \begin{itemize} \item[-] $\mathbb{V}_{\! S_0}(V_{S_0}(x\bullet{{\rm{o}}}_{\mu,\pm}))=V_{S_0,\mu,\pm}(x),$ \item[-] $\mathbb{V}_{\! S_0}({}^w\!P_{S_0}(x\bullet{{\rm{o}}}_{\mu,\pm}))={}^w\!B_{S_0,\mu,\pm}(x),$ \item[-] $\mathbb{V}_{\!S_0}({}^wT_{S_0}(x\bullet{{\rm{o}}}_{\mu,\pm}))={}^w\!C_{S_0,\mu,\pm}(x).$
\end{itemize} \end{itemize} \end{prop} \begin{proof} Parts $(a),(b)$ and the first claim of $(c)$ follow from the construction of $\mathbb{V}_R$. The second claim of $(c)$ follows from \cite[thm.~7.1]{F3}. Note that we use here the exact structure on ${}^w\mathbf{Z}_{S_0,\mu,\pm}^\Delta$ defined before Remark \ref{rk:exact}, see \cite[sec.~7.1]{F3}. The first claim of $(c)$(iii) is given by \cite[prop.~2(2)]{F2}. The second one is proved in \cite[rk.~7.6]{F3}. The third one follows from the second and $(c)$(i). \end{proof} Now, we compare the $k$-algebra ${}^w\!A_{\mu,\pm}$ in Definition \ref{df:3.6} with the $k$-algebra ${}^w\!\mathscr{A}_{\mu,\pm}$ in Definition \ref{def:3.23}. \begin{cor}\label{cor:2.31} We have a $k$-algebra isomorphism ${}^w\!A_{\mu,\pm}{\buildrel\sim\over\to}\, {}^w\!\mathscr{A}_{\mu,\pm}$ such that $1_x\mapsto 1_x$ for each $x\in{}^w\!I_{\mu,\pm}$. \end{cor} \begin{proof} By applying Proposition \ref{prop:2.3} to the base change functor ${}^w\mathbf{O}_{S_0,\mu,\pm}\to{}^w\mathbf{O}_{k,\mu,\pm}$, we get $ {}^w\!A_{\mu,\pm} =k\operatorname{End}\nolimits_{{}^w\mathbf{O}_{S_0,\mu,\pm}}({}^w\!P_{S_0,\mu,\pm})^{\operatorname{op}\nolimits}. $ Thus, Proposition \ref{prop:foncteurV}$(c)$ yields $$ {}^w\!A_{\mu,\pm} =k\operatorname{End}\nolimits_{{}^w\!Z_{S_0,\mu,\pm}}({}^w\!B_{S_0,\mu,\pm})^{\operatorname{op}\nolimits} =k\operatorname{End}\nolimits_{{}^w\!Z_{S,\mu,\pm}}({}^w\!B_{S,\mu,\pm})^{\operatorname{op}\nolimits} ={}^w\!\mathscr{A}_{\mu,\pm}.$$ \end{proof} \subsection{Translation functors in $\mathbf{O}$} \label{sec:translationO} Assume that $R=S_0$ or $k$. Fix positive integers $d,$ $e$. Assume that ${{\rm{o}}}_{\mu,-}$, ${{\rm{o}}}_{\phi,-}$ have the level $-e-N$, $-d-N$ respectively. Thus, the integral affine weights ${{\rm{o}}}_{\mu,+}$, ${{\rm{o}}}_{\phi,+}$ have the level $e-N$, $d-N$ respectively. In particular, the integral affine weight ${{\rm{o}}}_{\mu,\pm}-{{\rm{o}}}_{\phi,\pm}$ has the level $\pm (e-d)$. Assume that $e-d\neq\mp N$, hence we have $\pm (e-d)\neq -N$. The affine weight ${{\rm{o}}}_{\mu,\pm}-{{\rm{o}}}_{\phi,\pm}$ is positive if $\pm(e-d)>-N$ and negative otherwise, see Section \ref{sec:affine} for the terminology. First, we consider the translation functors on the (deformed) $\Delta$-filtered modules. We have the following. \begin{prop} \label{prop:translation1} Let $R=S_0$ or $k$. Let $w\in\widehat W$ and $z\in I_{\mu,-}$. Assume that $wW_\mu=zW_\mu$ and that $\pm (e-d)>-N$. We have $k$-linear functors $T_{\phi,\mu}:{}^{z}\mathbf{O}_{R,\phi,\pm}^\Delta \to{}^w\mathbf{O}_{R,\mu,\pm}^\Delta$ and $T_{\mu,\phi}:{}^w\mathbf{O}_{R,\mu,\pm}^\Delta \to{}^{z}\mathbf{O}_{R,\phi,\pm}^\Delta$ such that (a) $T_{\phi,\mu}$, $T_{\mu,\phi}$ are exact, (b) $T_{\phi,\mu}$, $T_{\mu,\phi}$ are bi-adjoint, (c) $T_{\phi,\mu}$, $T_{\mu,\phi}$ commute with base change. \end{prop} \begin{proof} The functors $T_{\phi,\mu}$, $T_{\mu,\phi}$ on $\mathbf{O}_{R,\phi,\pm}^\Delta$, $\mathbf{O}_{R,\mu,\pm}^\Delta$ are constructed in \cite{F1}, see also \cite[thm.~4(2)]{F2}.
Since $z\in I_{\mu,-}$, the functors $T_{\phi,\mu}$, $T_{\mu,\phi}$ preserve the subcategories ${}^z\mathbf{O}_{R,\phi,-}^\Delta$, ${}^w\mathbf{O}_{R,\mu,-}^\Delta$ by \cite[thm.~4(2)]{F2}. For the same reason the functors $T_{\phi,\mu}$, $T_{\mu,\phi}$ factor to the categories ${}^z\mathbf{O}_{R,\phi,+}^\Delta$, ${}^w\mathbf{O}_{R,\mu,+}^\Delta$. The claims $(a), (b), (c)$ are proved in \cite[thm.~4(1),(6)(4)]{F2}. \end{proof} Next, in the non-deformed setting, we consider the translation functors on the whole category O. We have the following. \begin{prop} \label{prop:translation2} Let $R=k$. Let $w\in\widehat W$ and $z\in I_{\mu,-}$. Assume that $wW_\mu=zW_\mu$ and that $\pm (e-d)>-N$. We have a $k$-linear functor $T_{\phi,\mu}:{}^z\mathbf{O}_{\phi,\pm} \to{}^w\mathbf{O}_{\mu,\pm}$ which coincides with the functors in Proposition \ref{prop:translation1} on ${}^z\mathbf{O}_{\phi,\pm}^\Delta$ and such that the following hold (a) $T_{\phi,\mu}$ has a left adjoint functor $T_{\mu,\phi}$, both functors are exact, (b) $T_{\phi,\mu}$, $T_{\mu,\phi}$ take projectives to projectives, (c) $T_{\phi,\mu}$, $T_{\mu,\phi}$ preserve the parabolic category O and commute with $i,\tau$, (d) $T_{\phi,\mu}(L(xw_\mu\bullet {{\rm{o}}}_{\phi,\pm}))=L(x\bullet {{\rm{o}}}_{\mu,\pm})$ for each $x\in{}^z\!I_{\mu,\pm}$, (e) $T_{\phi,\mu}(L(xw_\mu\bullet {{\rm{o}}}_{\phi,\pm}))=0$ for each $x\in{}^z\!I_{\phi,\pm}\setminus{}^z\!I_{\mu,\pm}$, (f) $T_{\phi,\mu}({}^z\!L_{\phi,\pm}^\nu)={}^w\!L_{\mu,\pm}^\nu$, (g) $T_{\mu,\phi}({}^w\!P^\nu(x\bullet {{\rm{o}}}_{\mu,\pm}))= {}^z\!P^\nu(xw_\mu\bullet {{\rm{o}}}_{\phi,\pm})$ for each $x\in{}^w\!I_{\mu,\pm}^\nu$. \end{prop} \begin{proof} The definition of the translation functor $T_{\phi,\mu}:\mathbf{O}_{\phi,\pm} \to\mathbf{O}_{\mu,\pm}$ is well-known, see e.g., \cite[sec.~3]{KT}. Its restriction to $\Delta$-filtered objects coincides with the functor in Proposition \ref{prop:translation1} if $R=k$, see \cite{F1}, \cite{KT} for details. Formulas $(d)$, $(e)$ are proved in \cite[prop.~3.8]{KT}. Since $z\in I_{\mu,-}$, they imply that $T_{\phi,\mu}$ factors to a functor ${}^z\mathbf{O}_{\phi,\pm}\to{}^w\mathbf{O}_{\mu,\pm}$ which satisfies again $(d)$, $(e)$. The existence of the left adjoint $T_{\mu,\phi}$ follows from the following general fact. \begin{claim} Let $A$, $B$ be finite dimensional $k$-algebras and $T : A\operatorname{\!-\mathbf{mod}}\nolimits \to B\operatorname{\!-\mathbf{mod}}\nolimits$ be an exact $k$-linear functor. Then $T$ has a left and a right adjoint. \end{claim} The exactness of $T_{\phi,\mu}$ is obvious by construction, see e.g., \cite[sec.~3]{KT}. It is easy to check that $T_{\phi,\mu}$ commutes with the BGG duality. Thus, its right adjoint is the conjugate of $T_{\mu,\phi}$ by the duality. So, to check that $T_{\mu,\phi}$ is exact it is enough to prove that $T_{\phi,\mu}$ takes projectives to projectives. This follows from Proposition \ref{prop:translation1}. Part $(b)$ follows from the proof of $(a)$. Part $(c)$ for $T_{\phi,\mu}$ is obvious by construction. For $T_{\mu,\phi}$, it follows by adjunction. To prove $(f)$, note that $(d)$ implies that $T_{\phi,\mu}({}^z\!L_{\phi,\pm})={}^w\!L_{\mu,\pm}$. Thus $(c)$ gives $T_{\phi,\mu}({}^z\!L_{\phi,\pm}^\nu)\subset{}^w\!L_{\mu,\pm}^\nu$.
Further, for each $x\in{}^z\!I_{\mu,\pm}^\nu$ part $(e)$ yields \begin{equation} \label{toto} L(xw_\mu\bullet {{\rm{o}}}_{\phi,\pm})\subset T_{\mu,\phi}T_{\phi,\mu}L(xw_\mu\bullet {{\rm{o}}}_{\phi,\pm})= T_{\mu,\phi}L(x\bullet {{\rm{o}}}_{\mu,\pm}). \end{equation} Thus, by adjunction, we have a surjective map $T_{\phi,\mu}L(xw_\mu\bullet {{\rm{o}}}_{\phi,\pm}) \to L(x\bullet {{\rm{o}}}_{\mu,\pm}).$ Now, since the right hand side of \eqref{toto} is in ${}^w\mathbf{O}^\nu_{\phi,\pm}$ by $(c)$, the left hand side is also in ${}^z\mathbf{O}^\nu_{\phi,\pm}$. Thus, we have a surjective map $T_{\phi,\mu}({}^z\!L_{\phi,\pm}^\nu) \to {}^wL^\nu_{\mu,\pm}.$ Part $(g)$ is a consequence of $(a)$, $(d)$, $(e)$. \end{proof} \begin{rk} \label{rk:fidele} Let $w\in\widehat W$ with $w\in I_{\mu,+}^\nu$. Set $z=ww_\mu$. We have $z\in I^\nu_{\phi,+}\cap I_{\mu,-}$. Assume that $e-d>-N$. Then, the functor $T_{\mu,\phi}:{}^w\mathbf{O}_{\mu,+}^\nu\to{}^z\mathbf{O}_{\phi,+}^\nu $ is faithful. To prove this it is enough to check that $T_{\mu,\phi}(L)\neq 0$ for any simple module $L$. This follows from Proposition \ref{prop:translation2}$(a),(g)$. \end{rk} \subsection{Translation functors in $\mathbf{Z}$} \label{sec:translationZ} Fix elements $w\in\widehat W$ and $z\in I_{\mu,-}$. The algebra ${}^z\!\bar Z_{S,\phi,+}={}^z\!\bar Z_{S,\phi,-}$ consists of the tuples $(a_x)$ of elements of $S$ labelled by elements $x\in \widehat W$ such that $x\leqslant z$ and $a_{x}-a_{y}\in\check\alpha S$ for all $x,y$ such that $x=s_\alpha y$. Since $z$ is maximal in the left coset $z W_\mu$, switching the coordinates yields a left $W_\mu$-action on such tuples such that $y\cdot(a_x)=(a_{xy})$ for each $y\in W_\mu$. This yields a left $W_\mu$-action on the algebra ${}^z\!\bar Z_{S,\phi,\pm}$. Assume that $w$ belongs to the left coset $zW_\mu$. Then, we have a map ${}^w\!\bar Z_{S,\mu,\pm}\to{}^{z}\!\bar Z_{S,\phi,\pm}$ such that $(a_{x})\mapsto (b_{xy}),$ where we define $b_{xy}=a_x$ for each $x\in{}^{w}\!I_{\mu,\pm}$ and $y\in W_\mu.$ This map identifies the algebra ${}^w\!\bar Z_{S,\mu,\pm}$ with the set of $W_\mu$-invariant elements in the algebra ${}^{z}\!\bar Z_{S,\phi,\pm}$. \begin{lemma}\label{lem:isom12} For $z\in I_{\mu,-}$ and $w\in I_{\mu,\pm}$ such that $w\in zW_\mu$ there are graded ${}^w\!\bar Z_{S,\mu,\pm}$-module isomorphisms (a) $ {}^{z}\!\bar Z_{S,\phi,\pm} \ \simeq\ \bigoplus_{y\in W_\mu}{}^w\!\bar Z_{S,\mu,\pm}\langle -2l(y)\rangle,$ (b) ${}^{z}\!\bar Z_{S,\phi,\pm}\langle 2l(w_\mu)\rangle \ \simeq\ \operatorname{Hom}\nolimits_{{}^w\!\bar Z_{S,\mu,\pm}}\bigl( {}^{z}\!\bar Z_{S,\phi,\pm}, {}^w\!\bar Z_{S,\mu,\pm} \bigr).$ \end{lemma} \begin{proof} It is enough to prove the Lemma for ${}^w\!\bar Z_{S,\mu,+}$ because we have an isomorphism of graded algebras ${}^w\!\bar Z_{S,\mu,-}={}^v\!\bar Z_{S,\mu,+}$ for $w\in I_{\mu,-}$ and $v=w_+\in I_{\mu,+}$. By Proposition \ref{prop:loc1}$(a)$ below, the graded $S$-algebra ${}^w\!\bar Z_{S,\mu,+}$ is isomorphic to the equivariant cohomology (graded) algebra $H_T(\bar X_{\mu,w})$ of the affine Schubert variety $\bar X_{\mu,w}=\bar X_{\mu,z}$.
Therefore, the Bruhat decomposition yields $$\aligned {}^w\!\bar Z_{S,\mu,+} =H_T(\bar X_{\mu,z}) =\bigoplus_{x\in I_{\mu,-},x\leqslant z}H_T(X_{\mu,x})\langle 2l(z)-2l(x)\rangle, \endaligned$$ $$\aligned {}^z\!\bar Z_{S,\phi,+} =H_T(\bar X_{\phi,z}) =\bigoplus_{y\in W_\mu}\bigoplus_{x\in I_{\mu,-},x\leqslant z}H_T(X_{\phi,xy})\langle 2l(z)-2l(y)-2l(x)\rangle. \endaligned$$ Since the obvious projection $X_{\phi,xW_\mu}\to X_{\mu,x}$ is a $P_\mu/B$-fibration, we have $ H_T(X_{\mu,x})=\bigoplus_{y\in W_\mu}H_T(X_{\phi,xy}). $ This implies part $(a)$. Finally, part $(b)$ follows from $(a)$, because $$\aligned \operatorname{Hom}\nolimits_{{}^w\!\bar Z_{S,\mu,+}}\bigl({}^{z}\!\bar Z_{S,\phi,+},{}^w\!\bar Z_{S,\mu,+}\bigr) &= \bigoplus_{y\in W_\mu}\operatorname{Hom}\nolimits_{{}^w\!\bar Z_{S,\mu,+}}\bigl({}^w\!\bar Z_{S,\mu,+}\langle -2l(y)\rangle, {}^w\!\bar Z_{S,\mu,+}\bigr)\\ &= \bigoplus_{y\in W_\mu}{}^w\!\bar Z_{S,\mu,+}\langle 2l(y)\rangle\\ &={}^{z}\!\bar Z_{S,\phi,+}\langle 2l(w_\mu)\rangle. \endaligned$$ \end{proof} \begin{rk} An algebraic proof of Lemma \ref{lem:isom12} is given in \cite[lem.~5.1]{F4} in the particular case where $\sharp W_\mu=2$. \end{rk} Let $\bar\theta_{\phi,\mu}:{}^{z}\bar\mathbf{Z}_{S,\phi,\pm}\to {}^w\bar\mathbf{Z}_{S,\mu,\pm}$ and $\bar\theta_{\mu,\phi}:{}^w\bar\mathbf{Z}_{S,\mu,\pm}\to {}^{z}\bar\mathbf{Z}_{S,\phi,\pm}$ be the restriction and induction functors with respect to the inclusion ${}^w\!\bar Z_{S,\mu,\pm}\subset{}^{z}\!\bar Z_{S,\phi,\pm}.$ Next, let $R=S$ or $S_0$. Forgetting the gradings, we define in the same way as above the functors $\theta_{\phi,\mu}:{}^{z}\mathbf{Z}_{R,\phi,\pm}\to {}^w\mathbf{Z}_{R,\mu,\pm}$ and $\theta_{\mu,\phi}:{}^w\mathbf{Z}_{R,\mu,\pm}\to {}^{z}\mathbf{Z}_{R,\phi,\pm}.$ \begin{prop} \label{prop:theta} Assume that $w\in zW_\mu$ and $z\in I_{\mu,-}$. Then, we have (a) $\mathbb{V}_{S_0}\circ T_{\phi,\mu}=\theta_{\phi,\mu}\circ \mathbb{V}_{S_0}$ on ${}^z\mathbf{O}^\Delta_{S_0,\phi,\pm},$ and $\mathbb{V}_{S_0}\circ T_{\mu,\phi}=\theta_{\mu,\phi}\circ \mathbb{V}_{S_0}$ on ${}^w\mathbf{O}^\Delta_{S_0,\mu,\pm},$ (b) $\bar\theta_{\phi,\mu}$ and $\bar\theta_{\mu,\phi}$ are exact functors ${}^{z}\bar\mathbf{Z}_{S,\phi,\pm}^\Delta\to {}^w\bar\mathbf{Z}_{S,\mu,\pm}^\Delta$ and ${}^w\bar\mathbf{Z}_{S,\mu,\pm}^\Delta\to {}^{z}\bar\mathbf{Z}_{S,\phi,\pm}^\Delta,$ (c) $(\bar\theta_{\mu,\phi},\bar\theta_{\phi,\mu}, \bar\theta_{\mu,\phi}\langle 2l(w_\mu)\rangle)$ is a triple of adjoint functors, (d) $\bar\theta_{\phi,\mu}\circ D=D\circ\bar\theta_{\phi,\mu}$ and $\bar\theta_{\mu,\phi}\circ D=D\circ\bar\theta_{\mu,\phi}\circ\langle 2l(w_\mu)\rangle$, (e) for each $x\in {}^w\!I_{\mu,+},$ there is a sum $M$ of ${}^w\!\bar B_{S,\mu,+}(t)\langle j\rangle$'s with $t<x$ such that $$\bar\theta_{\phi,\mu}({}^z\!\bar B_{S,\phi,+}(xw_\mu))= \bigoplus_{y\in W_\mu}{}^w\!\bar B_{S,\mu,+}(x)\langle l(w_\mu)-2l(y)\rangle\bigoplus M,$$ (f) We have \begin{itemize} \item[(i)] $\bar\theta_{\mu,\phi}({}^w\!\bar B_{S,\mu,+}(x))= {}^z\!\bar B_{S,\phi,+}(xw_\mu)\langle -l(w_\mu)\rangle$ for $x\in {}^w\!I_{\mu,+}$, \item[(ii)] $\bar\theta_{\mu,\phi}({}^w\!\bar B_{S,\mu,-}(x))= {}^z\!\bar B_{S,\phi,-}(xw_\mu)$ for $x\in {}^w\!I_{\mu,-}$. \end{itemize} \end{prop} \begin{proof} Part $(a)$ is proved in \cite[thm.~9]{F2}.
For $(b)$, note that the base change with respect to $S\to S_0$ commutes with $\theta_{\phi,\mu}$, $\theta_{\mu,\phi}$ and with the functor $M\mapsto M_{[x]}$, see Section \ref{rk:basechange}. Now, by $(a)$ and by Propositions \ref{prop:foncteurV}$(c)$, \ref{prop:translation1}(a), the functors $\theta_{\phi,\mu}$, $\theta_{\mu,\phi}$ preserve ${}^{z}\mathbf{Z}_{S_0,\phi,\pm}^\Delta$, ${}^w\mathbf{Z}_{S_0,\mu,\pm}^\Delta.$ Hence, by Lemma \ref{lem:forget}(c), the functors $\bar\theta_{\phi,\mu}$, $\bar\theta_{\mu,\phi}$ preserve ${}^{z}\bar\mathbf{Z}_{S,\phi,\pm}^\Delta$, ${}^w\bar\mathbf{Z}_{S,\mu,\pm}^\Delta.$ Next, to prove the exactness of $\bar\theta_{\phi,\mu}$, $\bar\theta_{\mu,\phi}$ it is enough to check the exactness of $\theta_{\phi,\mu}$, $\theta_{\mu,\phi}$, because $\varepsilon$ is faithfully exact by Lemma \ref{lem:forget}(b). This follows from $(a)$ and Propositions \ref{prop:foncteurV}$(c)$, \ref{prop:translation1}(a). The proof of part $(c)$ is similar to \cite[prop.~5.2]{F4}. More precisely, by Lemma \ref{lem:isom12}(b) there is an isomorphism of functors $$\aligned {}^{z}\!\bar Z_{S,\phi,\pm}\langle 2l(w_\mu)\rangle \otimes_{{}^w\!\bar Z_{S,\mu,\pm}}\!\bullet &\ \simeq\ \operatorname{Hom}\nolimits_{{}^w\!\bar Z_{S,\mu,\pm}}\bigl( {}^{z}\!\bar Z_{S,\phi,\pm},\, {}^w\!\bar Z_{S,\mu,\pm} \bigr)\otimes_{{}^w\!\bar Z_{S,\mu,\pm}}\bullet \\ &\ \simeq\ \operatorname{Hom}\nolimits_{{}^w\!\bar Z_{S,\mu,\pm}}\bigl( {}^{z}\!\bar Z_{S,\phi,\pm},\,\bullet\bigr). \endaligned$$ Therefore $(\bar\theta_{\mu,\phi},\bar\theta_{\phi,\mu},\bar\theta_{\mu,\phi}\langle 2l(w_\mu)\rangle)$ is a triple of adjoint functors. Part $(d)$ follows from $(c)$. Indeed, since $\bar\theta_{\phi,\mu}(M)=M$ as a graded $S$-module, we have $\bar\theta_{\phi,\mu}\circ D=D\circ\bar\theta_{\phi,\mu}$. Then, part $(c)$ implies $\bar\theta_{\mu,\phi}\circ D=D\circ\bar\theta_{\mu,\phi}\langle 2l(w_\mu)\rangle$ by unicity of adjoint functors. Now, we prove $(e)$. To unburden the notation let us temporarily abbreviate $\bar B_\mu(x)={}^w\!\bar B_{S,\mu,+}(x)$, $\bar B_\phi(x)={}^z\!\bar B_{S,\phi,+}(x)$, $\bar Z_\mu={}^w\!\bar Z_{S,\mu,+}$, $\bar Z_\phi={}^z\!\bar Z_{S,\phi,+}$ and $\bar\mathbf{Z}_{\phi}={}^z\bar\mathbf{Z}_{S,\phi,+}$. First, note that $\bar\theta_{\phi,\mu}(\bar B_{\phi}(xw_\mu))$ is a direct sum of objects of the form $\bar B_\mu(z)\langle j\rangle$ by Corollary \ref{cor:4.44} below. Next, for each subset $J\subset {}^z\!I_{\phi,+}$ and each object $M\in\bar\mathbf{Z}_{\phi}^\Delta,$ we have defined a graded $S$-module $M_J=\mathcal{L}(M)(J)$ in Remark \ref{rk:MI}. Setting $J=xW_\mu$, we claim that \begin{equation}\label{3.4}\bar B_{\phi}(xw_\mu)_{xW_\mu}=\bar Z_{\phi,xW_\mu}\langle l(xw_\mu)\rangle. \end{equation} To prove this, we use the geometric approach to sheaves over moment graphs recalled in Section \ref{sec:localization} below.
We also use the notation given there (for the dual root system). The $\bar Z_{\phi}$-module $\bar B_{\phi}(xw_\mu)\in\bar\mathbf{Z}_{\phi}^\Delta$ is a graded BM-sheaf on ${}^z\mathcal{G}_{\phi,+}$. Since $x\in I_{\mu,+}$, the set $xW_\mu$ is an ideal of the poset ${}^{xw_\mu}\!I_{\mu,+}$. Thus, by Lemma \ref{lem:foncteurW}, we have $\bar B_{\phi}(xw_\mu)_{xW_\mu}=IH_T\big(X_{\phi,xW_\mu}\big).$ Since $X_{\phi,xW_\mu}$ is a smooth open subset of $\bar X_{\phi,xw_\mu}$ (see comments before Lemma \ref{lem:foncteurW}), we deduce that $$\bar B_{\phi}(xw_\mu)_{xW_\mu}=IH_T\big(X_{\phi,xW_\mu}\big)= H_T\big(X_{\phi,xW_\mu}\big)\langle l(xw_\mu)\rangle =\bar Z_{\phi,xW_\mu}\langle l(xw_\mu)\rangle.$$ Now, by \cite[prop.~5.3]{F4}, for each $x\in {}^z\!I_{\mu,+}$ we have $\bar\theta_{\phi,\mu}(\bar B_{\phi}(xw_\mu))_{ x} =\bar B_{\phi}(xw_\mu)_{xW_\mu}$. Thus, by \eqref{3.4} and Lemma \ref{lem:isom12}(a) we have an isomorphism of graded $S$-modules \begin{eqnarray*} \bar\theta_{\phi,\mu}(\bar B_{\phi}(xw_\mu))_{x} &=&(\bar Z_{\phi,xW_\mu})_{xW_\mu}\langle l(xw_\mu)\rangle\\ &=&\bigoplus_{y\in W_\mu}(\bar Z_{\mu,x})_{x}\langle l(xw_\mu)-2l(y)\rangle\\ &=&\bigoplus_{y\in W_\mu}S\langle l(xw_\mu)-2l(y)\rangle. \end{eqnarray*} Finally, since for each $t\in I_{\mu,+}$ we have $$t\not\leqslant x\Rightarrow \bar\theta_{\phi,\mu}(\bar B_{\phi}(xw_\mu))_{ t} =\bar B_{\phi}(xw_\mu)_{t W_\mu}=0,$$ the support condition in Proposition \ref{prop:B} implies that the decomposition of $\bar\theta_{\phi,\mu}(\bar B_{\phi}(xw_\mu))$ into direct sums of $\bar B_\mu(z)\langle j\rangle$'s has the form predicted in part $(e)$. Finally, we prove $(f)$. By Propositions \ref{prop:foncteurV}$(c)$, \ref{prop:translation2}$(g)$ and part $(a)$ we have $\theta_{\mu,\phi}({}^w\! B_{S,\mu,\pm}(x))={}^z\! B_{S,\phi,\pm}(xw_\mu)$ for each $x\in {}^w\!I_{\mu,\pm}$. To identify the gradings, note that, by \cite[prop.~5.3]{F4}, we have $$\aligned \bar\theta_{\mu,\phi}({}^w\!\bar B_{S,\mu,\pm}(x))_{xW_\mu} ={}^z\!\bar Z_{\phi,xW_\mu}\otimes_{{}^w\!\bar Z_{\mu,x}} {}^w\!\bar B_{S,\mu,\pm}(x)_{x} ={}^z\!\bar Z_{\phi,xW_\mu}\langle\pm l(x)\rangle, \endaligned$$ because ${}^w\!\bar Z_{\mu, x}=S$ and ${}^w\!\bar B_{S,\mu,\pm}(x)_{ x}=S\langle\pm l(x)\rangle.$ Since $({}^z\!\bar Z_{\phi, x})_y=S$ for all $y\in xW_\mu,$ we deduce that $\bar\theta_{\mu,\phi}({}^w\!\bar B_{S,\mu,+}(x))_{xw_\mu}=S\langle l(x)\rangle$ and $\bar\theta_{\mu,\phi}({}^w\!\bar B_{S,\mu,-}(x))_{x}=S\langle-l(x)\rangle.$ \end{proof} \begin{rk} \label{rk:theta/k} Now, we consider the case $R=k$. Recall the algebras ${}^w Z_\mu$, ${}^w\bar Z_\mu$ and the categories ${}^w \mathbf{Z}_\mu$, ${}^w\bar \mathbf{Z}_\mu$ from the beginning of Section \ref{sec:3.4}, and the translation functors $T_{\mu,\phi}$, $T_{\phi,\mu}$ in Section \ref{sec:translationO}.
We define $\bar\theta_{\phi,\mu}:{}^{z}\bar\mathbf{Z}_{\phi}\to {}^w\bar\mathbf{Z}_{\mu}$ and $\bar\theta_{\mu,\phi}:{}^w\bar\mathbf{Z}_{\mu}\to {}^{z}\bar\mathbf{Z}_{\phi}$ to be the restriction and induction functors with respect to the inclusion ${}^w\!\bar Z_{\mu}\subset{}^{z}\!\bar Z_{\phi}.$ They are exact (for the stupid exact structure). In the non-graded case, we define $\theta_{\mu,\phi}$, $\theta_{\phi,\mu}$ in a similar way. Then, we have $\mathbb{V}_k\circ T_{\phi,\mu}=\theta_{\phi,\mu}\circ \mathbb{V}_k$ and $\mathbb{V}_k\circ T_{\mu,\phi}=\theta_{\mu,\phi}\circ \mathbb{V}_k$ on $\Delta$-filtered modules by Proposition \ref{prop:theta}$(a)$, because $\theta_{\phi,\mu}$, $\theta_{\mu,\phi}$, $T_{\phi,\mu}$, $T_{\mu,\phi}$ and $\mathbb{V}$ commute with base change by Propositions \ref{prop:foncteurV}, \ref{prop:translation1}. \end{rk} \subsection{Moment graphs and the flag ind-scheme} \label{sec:localization} Fix a parabolic type $\mu\in\mathcal{P}$. Let $P_\mu\subset G(k((t)))$ be the parabolic subgroup with Lie algebra $\mathbf{p}_\mu$. Write $B=P_\phi$ and let $T$ be the torus with Lie algebra $\mathbf{t}$. Note that $T$ is the product of $k^\times\times k^\times$ by the maximal torus of $G$. Let $X'$ be the partial (affine) flag ind-scheme $G(k((t)))/P_\mu$. The affine Bruhat cells are indexed by the cosets $\widehat W/W_\mu$, which we identify with $I_{\mu,+}(=I_\mu^{\operatorname{min}\nolimits})$. For each $w\in I_{\mu,+}$ let $X_w\subset\bar X_w\subset X'$ be the corresponding finite dimensional affine Bruhat cell and Schubert variety. To avoid confusion we may write $X'_\mu=X'$, $X_{\mu,w}=X_w$ and $\bar X_{\mu,w}=\bar X_w$. For any subset $J\subset I_{\mu,+}$ we abbreviate $X_J=\bigcup_{w\in J}X_w$. The group $T$ acts on $\bar X_w$, with the first copy of $k^\times$ acting by rotating the loop and the last one acting trivially. The varieties $\bar X_w$ form an inductive system of complex projective varieties with closed embeddings and $X'$ is represented by the ind-scheme $\text{ind}_w\bar X_w$. Recall that $\bar X_w$ has dimension $l(w)$ and $\bar X_w=\bigcup_{x\in {}^w\!I_{\mu,+}}X_x$. Let $\mathbf{D}^b(\bar X_w)$ be the bounded derived category of constructible sheaves of $k$-vector spaces on $\bar X_w,$ which are locally constant along the $B$-orbits, and let $\mathbf{P}(\bar X_w)$ be the full subcategory of perverse sheaves. Let $\mathbf{D}^b_T(\bar X_w)$ and $\mathbf{P}_T(\bar X_w)$ be their $T$-equivariant analogues. For each $x\in {}^w\!I_{\mu,+},$ let ${}^w\!IC(\bar X_x)$ be the intersection cohomology complex in $\mathbf{P}(\bar X_w)$ associated with $\bar X_x$ and let ${}^w\!IC_T(\bar X_x)$ be its $T$-equivariant analogue. Let $IH(\bar X_x)$ be the intersection cohomology of $\bar X_x$ and let $IH_T(\bar X_x)$ be its $T$-equivariant analogue. Finally, let $H(\bar X_x)$, $H_T(\bar X_x)$ denote the ($T$-equivariant) cohomology of $\bar X_x$. See Section \ref{sec:HT} for details. In this section we set $V=\mathbf{t}^*$. Since $S$ is the symmetric algebra over $V$, we have $S=H_T(\bullet)$. Let $S^\vee$ denote the symmetric algebra over $V^*=\mathbf{t}$.
Let ${}^w\mathcal{G}_{\mu,\pm}^\vee$ be the moment graph over $V$ whose set of vertices is ${}^w\!I_{\mu,\pm}$, with an edge labelled by $k\,\alpha$ between $x,y$ if there is an affine reflection $s_{\alpha}$ such that $s_\alpha y\in xW_\mu$. We define ${}^w\bar Z_{S,\mu,\pm}^\vee$, ${}^w\bar\mathbf{Z}_{S,\mu,\pm}^\vee$, ${}^w\bar B_{S,\mu,\pm}(x)^\vee$, ${}^w\bar C_{S,\mu,\pm}(x)^\vee$, etc., in the obvious way. We'll also write ${}^w\bar\mathbf{F}_{S,\mu,\pm}^\vee$ for the category of graded $S$-sheaves of finite type over ${}^w\mathcal{G}_{\mu,\pm}^\vee$ whose stalks are torsion free as $S$-modules. We also write ${}^w\bar B_{\mu,\pm}^\vee(x)= k\,{}^w\bar B_{S,\mu,\pm}^\vee(x),$ ${}^w\bar C_{\mu,\pm}^\vee(x)= k\,{}^w\bar C_{S,\mu,\pm}^\vee(x),$ ${}^w\!\bar Z_{\mu,\pm}^\vee=k{}^w\!\bar Z_{S,\mu,\pm}^\vee$, ${}^w\bar\mathbf{Z}_{\mu,\pm}^\vee={}^w\!\bar Z_{\mu,\pm}^\vee\operatorname{\!-\mathbf{mod}}\nolimits$, etc. In the non graded case we use a similar notation. \begin{prop} \label{prop:loc1} For $w\in I_{\mu,+}$ and $x\in{}^w\!I_{\mu,+}$ we have (a) $H_T(\bar X_w)= {}^w\!\bar Z_{S,\mu,+}^\vee$ and $H(\bar X_w)={}^w\!\bar Z_{\mu,+}^\vee$ as graded $k$-algebras, (b) $IH_T(\bar X_x)={}^w\!\bar B_{S,\mu,+}^\vee(x)$ as graded ${}^w\!\bar Z_{S,\mu,+}^\vee$-modules, (c) $IH(\bar X_x)={}^w\!\bar B_{\mu,+}^\vee(x)$ as graded ${}^w\!\bar Z_{\mu,+}^\vee$-modules. \end{prop} \begin{proof} Part $(a)$ follows from \cite{GKM}, parts $(b),$ $(c)$ from \cite[thm.~1.5, 1.6, 1.8]{BM}. \end{proof} \begin{cor}\label{cor:4.44} Assume that $w\in I_{\mu,+}$ and $z=w_-\in I_{\mu,-}$. For each $x\in{}^z\!I_{\phi,+}$, the graded ${}^w\!\bar Z^\vee_{S,\mu,+}$-module $\bar\theta_{\phi\mu}({}^z\!\bar B_{S,\phi,+}^\vee(x))$ is a direct sum of graded modules of the form ${}^w\!\bar B_{S,\mu,+}^\vee(y)\langle j\rangle$ with $y\in {}^w\!I_{\mu,+}$ and $j\in\mathbb{Z}$. \end{cor} \begin{proof} The obvious projection $\pi:\bar X_{\phi,z}\to\bar X_{\mu,w}$ is proper. By Proposition \ref{prop:loc1}, for each $\mathcal{E}\in\mathbf{P}_T(\bar X_{\phi,z}),$ we may regard the cohomology spaces $H(\mathcal{E})$, $H(\pi_*\mathcal{E})$ as graded modules over ${}^z\!\bar Z_{S,\phi,+}^\vee$ and ${}^w\!\bar Z_{S,\mu,+}^\vee$ respectively. Since $\bar\theta_{\phi,\mu}$ is the restriction of graded modules with respect to the inclusion ${}^w\!\bar Z^\vee_{S,\mu,+}\subset{}^{z}\!\bar Z^\vee_{S,\phi,+},$ we have an obvious isomorphism of graded ${}^w\! \bar Z^\vee_{S,\mu,+}$-modules $H(\pi_*\mathcal{E})\simeq\bar\theta_{\phi\mu}H(\mathcal{E})$. Setting $\mathcal{E}={}^w\!IC_T(\bar X_{\phi,x})$, we obtain a graded module isomorphism $H(\pi_*({}^z\!IC_T(\bar X_{\phi,x})))\simeq\bar\theta_{\phi\mu}({}^z\!\bar B_{S,\phi,+}^\vee(x))$. Now, by the decomposition theorem, the complex $\pi_*({}^z\!IC_T(\bar X_{\phi,x}))$ is a direct sum of complexes of the form ${}^w\!IC_T(\bar X_{\mu,y})[j],$ with $y\in{}^w\!I_{\mu,+}$ and $j\in\mathbb{Z}$. We deduce that $\bar\theta_{\phi\mu}({}^z\!\bar B_{S,\phi,+}^\vee(x))$ is a direct sum of graded ${}^w\!\bar Z^\vee_{S,\mu,+}$-modules of the form ${}^w\!\bar B_{S,\mu,+}^\vee(y)\langle j\rangle.$ \end{proof} \begin{rk} The counterpart of Proposition \ref{prop:loc1} for ${}^w\!\bar Z_{S,\mu,-}^\vee$ and ${}^w\!\bar B_{S,\mu,-}^\vee(x)$ is proved in Proposition \ref{prop:IH} below.
\end{rk} Recall that the poset ${}^x\!I_{\mu,+}$ is equipped with the opposite Bruhat order. An ideal $J\subset {}^x\!I_{\mu,+}$ is a coideal in the set $\{y\in I_{\mu}^{\operatorname{min}\nolimits}\,;\, y\leqslant x\}$ equipped with the Bruhat order. Hence the variety $X_J$ is a $T$-stable open subvariety of $\bar X_x$. We have the following lemma. \begin{lemma}\label{lem:foncteurW} Let $w\in I_{\mu,+}$. For any $x\in{}^w\!I_{\mu,+}$ and any ideal $J\subset {}^x\!I_{\mu,+}$ there is an isomorphism of graded $({}^w\!\bar Z^\vee_{S,\mu,+})_J$-modules ${}^w\!\bar B^\vee_{S,\mu,+}(x)_J=IH_T(X_J).$ \end{lemma} \begin{proof} The $T$-variety $X_J$ satisfies the assumption in \cite[sec.~1.1]{BM}. Hence, by loc.~cit., one can associate a moment graph $\mathcal{G}^\vee_J$ to it. By construction, the graph $\mathcal{G}^\vee_J$ is the same as the subgraph of ${}^w\mathcal{G}^\vee_{\mu,+}$ consisting of the vertices which belong to $J$ and the edges $h$ with $h',h''\in J$. Let $\bar Z^\vee_{S,J}$ be the graded structural algebra of $\mathcal{G}^\vee_J$. We have $\bar Z^\vee_{S,J}=({}^w\!\bar Z^\vee_{S,\mu,+})_J$ as graded rings, see Remark \ref{rk:MI}. For any $y\in J$, let $\bar B^\vee_{S,J}(y)$ be the set of sections of the graded BM-sheaf on $\mathcal{G}^\vee_J$ associated with $y$, as defined in Proposition-Definition \ref{df:3.9}. By \cite[thm.~1.5,1.7]{BM}, we have a graded ring isomorphism $\bar Z^\vee_{S,J}=H_T(X_J)$ and a graded $\bar Z^\vee_{S,J}$-module isomorphism $\bar B^\vee_{S,J}(y)=IH_T(\bar X_y\cap X_J)$. In particular, we have $\bar B^\vee_{S,J}(x)=IH_T(X_J)$. Next, since $J\subset {}^x\!I_{\mu,+}$ is an ideal, it follows from the construction of BM-sheaves recalled in Remark \ref{rk:BM} that we have canonical identifications $\bar B^\vee_{S,J}(x)_y={}^w\!\bar B^\vee_{S,\mu,+}(x)_y$, $\bar B^\vee_{S,J}(x)_h={}^w\!\bar B^\vee_{S,\mu,+}(x)_h$ for any $y$, $h\in\mathcal{G}^\vee_J$ which are compatible with the maps $\rho_{y,h}$. This implies that $\bar B^\vee_{S,J}(x)={}^w\!\bar B^\vee_{S,\mu,+}(x)_J$ as graded $\bar Z^\vee_{S,J}$-modules. We deduce that ${}^w\!\bar B^\vee_{S,\mu,+}(x)_J=IH_T(X_J)$ as $({}^w\!\bar Z^\vee_{S,\mu,+})_J$-modules. \iffalse By \cite[sec.~4.5]{FW}, there is a functor $\mathbb{W}_T:\mathbf{D}^b_T(\bar X_w)\to{}^w\bar\mathbf{F}^\vee_{S,\mu}$ such that $H_T(\mathcal{E}_x)=\mathbb{W}_T(\mathcal{E})_x$ for each $\mathcal{E}\in\mathbf{D}^b_T(\bar X_w)$ and each $x\in {}^w\!I_{\mu,+}$. Here, the element $x$ is viewed as a $T$-fixed point of $\bar X_w$ and $\mathcal{E}_x$ is the stalk of $\mathcal{E}$ at this point. We have $H_T(\mathcal{E})=\Gamma(\mathbb{W}_T(\mathcal{E}))$ for $\mathcal{E}\in\mathbf{D}^b_T(\bar X_w)$ by \cite[thm.~4.4]{FW} and $\mathbb{W}_T({}^wIC_T(\bar X_x))=\mathcal{L}({}^w\!\bar B^\vee_{S,\mu,+}(x))$ by \cite[thm.~6.10 prop.~7.1]{FW}. Now, for each ideal $J$ in the set ${}^x\!I_{\mu,+}$ we have \begin{eqnarray*} {}^w\!\bar B^\vee_{S,\mu,+}(x)_J&=&\operatorname{Im}\nolimits\big(\Gamma\mathcal{L}({}^w\!\bar B^\vee_{S,\mu,+}(x))\to\bigoplus_{y\in J}\mathcal{L}({}^w\!\bar B^\vee_{S,\mu,+}(x))_y\big)\\ &=&\operatorname{Im}\nolimits\big(\Gamma\mathbb{W}_T({}^wIC_T(\bar X_x))\to\bigoplus_{y\in J}\mathbb{W}_T({}^wIC_T(\bar X_x))_y\big)\\ &=&\mathbb{W}_T({}^wIC_T(\bar X_x))(J)\\ &=&\Gamma\mathbb{W}_T({}^wIC_T(X_J))\\ &=&IH_T(X_J).
\end{eqnarray*} See Remark \ref{rk:MI} for the third equality. \fi \end{proof}

\begin{prop} \label{prop:loc2} Assume that $w\in I_{\mu,+}$. Let $v=w_-^{-1}\in I^\mu_{\phi,-}$. There is an equivalence of abelian categories $\Phi=\Phi^\mu:{}^v\mathbf{O}^\mu_{\phi,-}\to\mathbf{P}(\bar X_{\mu,w})$ such that $\Phi^\mu(L(y\bullet{{\rm{o}}}_{\phi,-}))={}^w\!IC(\bar X_{\mu,x})$ with $x=y_+^{-1}$ for each $y\in{}^v\!I_{\phi,-}^\mu$. \end{prop}

\begin{proof} Set $z=v^{-1}\in I_{\mu,-}$. We have $w=zw_\mu$ and $x=y^{-1}w_\mu.$ By Corollary \ref{lem:C2}$(b)(c)$, the assignment $y\mapsto x$ yields a bijection ${}^v\!I_{\phi,-}^\mu\to {}^w\!I_{\mu,+}$. By \cite[thm.~5.5]{FG2}, see also \cite[thm.~7.15, 7.16]{BD} and \cite{KT4}, we have an equivalence of abelian categories $\Phi^\phi:{}^v\mathbf{O}_{\phi,-}\to\mathbf{P}(\bar X_{\phi,z})$ such that $L(u\bullet{{\rm{o}}}_{\phi,-})\mapsto{}^{z}\!IC(\bar X_{u^{-1}})$ for each $u\in{}^v\!I_{\phi,-}.$ Composing $\Phi^\phi$ and the parabolic inclusion functor $i_{\mu,\phi}:{}^v\mathbf{O}_{\phi,-}^\mu\to{}^v\mathbf{O}_{\phi,-}$ yields an embedding of abelian categories $\Phi^\phi\circ i_{\mu,\phi}:{}^v\mathbf{O}_{\phi,-}^\mu\to\mathbf{P}(\bar X_{\phi,z})$. It takes $L(y\bullet{{\rm{o}}}_{\phi,-})$ to ${}^{z}\!IC(\bar X_{y^{-1}})$ for each $y\in{}^v\!I_{\phi,-}^\mu.$ Finally, since $z\in I_{\mu,-}$, the obvious projection $\pi:\bar X_{\phi,z}\to\bar X_{\mu,w}$ is smooth. Hence, the functor $\pi^{!*}=\pi^*[\dim\pi]$ yields an embedding of abelian categories $\mathbf{P}(\bar X_{\mu,w})\to\mathbf{P}(\bar X_{\phi,z})$ such that ${}^w\!IC(\bar X_{\mu,x})\mapsto{}^{z}\!IC(\bar X_{\phi, xw_\mu})$ for each $x\in{}^w\!I_{\mu,+}.$ The essential images of the functors $\Phi^\phi\circ i_{\mu,\phi}$ and $\pi^{!*}$ are Serre subcategories of $\mathbf{P}(\bar X_{\phi,v^{-1}})$ generated by the same set of simple objects. Thus, they are equivalent. Hence, the categories ${}^v\mathbf{O}_{\phi,-}^\mu$ and $\mathbf{P}(\bar X_{\mu,w})$ are also equivalent. \end{proof}

By Proposition \ref{prop:loc1}, composing $\Phi^\mu$ and the cohomology we get a functor $\mathbb{H}=\mathbb{H}^\mu:{}^v\mathbf{O}^\mu_{\phi,-}\to{}^w\mathbf{Z}_{\mu,+}^\vee$ for each $w\in I_{\mu,+}$, $v\in I^\mu_{\phi,-}$ such that $v=w_-^{-1}$.

\begin{prop} \label{prop:loc3} Let $v,w$ be as above and $z=v^{-1}$. Let $y,t\in{}^v\!I_{\phi,-}^\mu$. Set $x=y_+^{-1}$ and $s=t_+^{-1}$. We have (a) $\mathbb{H}^\mu(L(y\bullet {{\rm{o}}}_{\phi,-}))={}^w\!B_{\mu,+}^\vee(x)$, (b) we have graded $k$-vector space isomorphisms $$\aligned \operatorname{Ext}\nolimits_{{}^v\mathbf{O}^\mu_{\phi,-}}\bigl(L(y\bullet {{\rm{o}}}_{\phi,-}), L(t\bullet {{\rm{o}}}_{\phi,-})\bigr) &=k\operatorname{Hom}\nolimits_{{}^w\!Z_{S,\mu,+}^\vee}\bigl({}^w\!\bar B_{S,\mu,+}^\vee(x), {}^w\!\bar B_{S,\mu,+}^\vee(s)\bigr)\\ &=\operatorname{Hom}\nolimits_{{}^w\!Z_{\mu,+}^\vee}\bigl({}^w\!\bar B_{\mu,+}^\vee(x), {}^w\!\bar B_{\mu,+}^\vee(s)\bigr),\endaligned$$ (c) there is a morphism of functors $\theta_{\mu,\phi}\circ\mathbb{H}^\mu\to \mathbb{H}^\phi\circ i_{\mu,\phi}$ which yields a ${}^z\!Z_{\phi,+}^\vee$-module isomorphism $\theta_{\mu,\phi}\mathbb{H}^\mu(L)\simeq \mathbb{H}^\phi i_{\mu,\phi}(L)$ for each $L\in\operatorname{Irr}\nolimits({}^v\mathbf{O}^\mu_{\phi,-})$.
\end{prop}

\begin{proof} Part $(a)$ follows from Propositions \ref{prop:loc1}, \ref{prop:loc2}. To prove part $(b)$, note that, by Proposition \ref{prop:loc2}, we have a graded $k$-vector space isomorphism $$\operatorname{Ext}\nolimits_{{}^v\mathbf{O}^\mu_{\phi,-}}\bigl(L(y\bullet {{\rm{o}}}_{\phi,-}), L(t\bullet {{\rm{o}}}_{\phi,-})\bigr)= \operatorname{Ext}\nolimits_{\mathbf{D}^b(\bar X_w)}\bigl({}^w\!IC(\bar X_x),{}^w\!IC(\bar X_s)\bigr).$$ Further, by Propositions \ref{prop:IC/ICT/IH}, \ref{prop:loc1}, we have graded $k$-vector space isomorphisms $$\operatorname{Ext}\nolimits_{\mathbf{D}^b_T(\bar X_w)}\bigl({}^w\!IC_T(\bar X_x), {}^w\!IC_T(\bar X_s)\bigr)= \operatorname{Hom}\nolimits_{{}^w\!Z_{S,\mu,+}^\vee}\bigl({}^w\!\bar B_{S,\mu,+}^\vee(x), {}^w\!\bar B_{S,\mu,+}^\vee(s)\bigr),$$ $$\operatorname{Ext}\nolimits_{\mathbf{D}^b(\bar X_w)}\bigl({}^w\!IC(\bar X_x), {}^w\!IC(\bar X_s)\bigr)= k\operatorname{Ext}\nolimits_{\mathbf{D}^b_T(\bar X_w)}\bigl({}^w\!IC_T(\bar X_x), {}^w\!IC_T(\bar X_s)\bigr).$$ This proves the first isomorphism in $(b)$. To prove the second one, note that we have ${}^w\!\bar B^\vee_{\mu,+}(x)=k{}^w\!\bar B^\vee_{S,\mu,+}(x)$. Thus, by Propositions \ref{prop:loc1}, \ref{prop:IC/ICT/IH}, we have graded $k$-vector space isomorphisms $$\aligned k\operatorname{Hom}\nolimits_{{}^w\!Z_{S,\mu,+}^\vee}\bigl({}^w\!\bar B_{S,\mu,+}^\vee(x), {}^w\!\bar B_{S,\mu,+}^\vee(s)\bigr)&= k\operatorname{Hom}\nolimits_{H_T(\bar X_w)}\big(IH_T(\bar X_x),IH_T(\bar X_s)\big),\\ &= \operatorname{Hom}\nolimits_{H(\bar X_w)}\big(IH(\bar X_x),IH(\bar X_s)\big),\\ &=\operatorname{Hom}\nolimits_{{}^w\!Z_{\mu,+}^\vee}\bigl({}^w\!\bar B_{\mu,+}^\vee(x), {}^w\!\bar B_{\mu,+}^\vee(s)\bigr). \endaligned$$ Now, we prove $(c)$. By Proposition \ref{prop:loc1}, taking the cohomology gives a functor $\mathbf{P}(\bar X_{\mu,w})\to{}^w\! Z^\vee_{\mu,+}\operatorname{\!-\mathbf{mod}}\nolimits$ such that $\mathcal{E}\mapsto H(\mathcal{E}).$ Since $z\in I_{\mu,-}$, the obvious projection $\pi:\bar X_{\phi,z}\to\bar X_{\mu,w}$ is smooth. By Proposition \ref{prop:loc1}, for each $\mathcal{E}\in\mathbf{P}(\bar X_{\mu,w})$ we may regard the cohomology spaces $H(\mathcal{E})$, $H(\pi^*\mathcal{E})$ as modules over ${}^w\!Z_{\mu,+}^\vee$ and ${}^z\!Z_{\phi,+}^\vee$ respectively. The unit $1\to \pi_*\pi^*$ yields a map $H(\mathcal{E})\to H(\pi_*\pi^*\mathcal{E})$. By Proposition \ref{prop:loc1}, it may be viewed as a natural morphism of ${}^w\! Z^\vee_{\mu,+}$-modules $H(\mathcal{E})\to\theta_{\phi,\mu}H(\pi^*\mathcal{E})$. Hence, by adjunction, we get a morphism of functors $\theta_{\mu,\phi}\circ H\to H\circ \pi^*$ from $\mathbf{P}(\bar X_{\mu,w})$ to ${}^z\! Z^\vee_{\phi,+}\operatorname{\!-\mathbf{mod}}\nolimits$.
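To make the adjunction step explicit (a reading aid; we assume here, as the construction above indicates, that $\theta_{\mu,\phi}$ is left adjoint to the restriction functor $\theta_{\phi,\mu}$), the morphism of ${}^w\! Z^\vee_{\mu,+}$-modules $H(\mathcal{E})\to\theta_{\phi,\mu}H(\pi^*\mathcal{E})$ corresponds, under the isomorphism $$\operatorname{Hom}\nolimits_{{}^z\! Z^\vee_{\phi,+}}\bigl(\theta_{\mu,\phi}H(\mathcal{E}),H(\pi^*\mathcal{E})\bigr)\simeq \operatorname{Hom}\nolimits_{{}^w\! Z^\vee_{\mu,+}}\bigl(H(\mathcal{E}),\theta_{\phi,\mu}H(\pi^*\mathcal{E})\bigr),$$ to a morphism of ${}^z\! Z^\vee_{\phi,+}$-modules $\theta_{\mu,\phi}H(\mathcal{E})\to H(\pi^*\mathcal{E})$, which is the morphism of functors used in the rest of the proof.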
Let $\Phi^\mu:{}^v\mathbf{O}^\mu_{\phi,-}\to\mathbf{P}(\bar X_{\mu,w})$ and $\Phi^\phi:{}^v\mathbf{O}_{\phi,-}\to\mathbf{P}(\bar X_{\phi,z})$ be as above. The proof of Proposition \ref{prop:loc2} implies that $\mathbb{H}^\mu=H\circ\Phi^\mu$, $\mathbb{H}^\phi=H\circ\Phi^\phi$ and $\Phi^\phi\circ i_{\mu,\phi}=\pi^{!*}\circ\Phi^\mu$. Hence, we have a morphism of functors $\theta_{\mu,\phi}\circ\mathbb{H}^\mu\to\mathbb{H}^\phi\circ i_{\mu,\phi}$. We claim that, for each $L\in\operatorname{Irr}\nolimits({}^v\mathbf{O}^\mu_{\phi,-}),$ the corresponding ${}^z\! Z_{\phi,+}^\vee$-module homomorphism $f:\theta_{\mu,\phi}\mathbb{H}^\mu(L)\to\mathbb{H}^\phi i_{\mu,\phi}(L)$ is an isomorphism. Indeed, set $L=L(y\bullet{{\rm{o}}}_{\phi,-})$ with $y\in {}^v\!I_{\phi,-}^\mu$. By part $(a)$, we have $\mathbb{H}^\mu(L)={}^w\!B_{\mu,+}^\vee(x)$, $\mathbb{H}^\phi i_{\mu,\phi}(L)={}^z\!B_{\phi,+}^\vee(xw_\mu)$ with $x=y_+^{-1}$. Hence, by Proposition \ref{prop:theta}$(f)$ and base change, we have $\theta_{\mu,\phi}\mathbb{H}^\mu(L)=\mathbb{H}^\phi i_{\mu,\phi}(L)={}^z\!B_{\phi,+}^\vee(xw_\mu),$ see Remark \ref{rk:theta/k}. Hence, the map $f$ can be regarded as an element in $\operatorname{End}\nolimits_{{}^z\!Z_{\phi,+}^\vee}\bigl({}^z\!B_{\phi,+}^\vee(xw_\mu)\bigr).$ Now, by part $(b)$ and Propositions \ref{prop:2.3}$(b)$, \ref{prop:foncteurV}$(b), (c)$, we have $$ \aligned \operatorname{End}\nolimits_{{}^z\!Z_{\phi,+}^\vee}\bigl({}^z\!B_{\phi,+}^\vee(xw_\mu)\bigr) &=k\operatorname{End}\nolimits_{{}^z\!Z_{S,\phi,+}^\vee}\bigl({}^z\!B_{S,\phi,+}^\vee(xw_\mu)\bigr),\\ &=k\operatorname{End}\nolimits_{{}^z\mathbf{O}^\vee_{S,\phi,+}}\bigl({}^z\!P^\vee_{S}(xw_\mu\bullet {{\rm{o}}}_{\phi,+})\bigr),\\ &=\operatorname{End}\nolimits_{{}^z\mathbf{O}^\vee_{\phi,+}}\bigl({}^z\!P^\vee(xw_\mu\bullet {{\rm{o}}}_{\phi,+})\bigr). \endaligned$$ Here, the symbols $\mathbf{O}^\vee$ and $P^\vee$ denote the category O and the projective modules associated with the dual root system. Since the module ${}^z\!P^\vee(xw_\mu\bullet {{\rm{o}}}_{\phi,+})$ is projective and indecomposable, we deduce that the $k$-algebra $\operatorname{End}\nolimits_{{}^z\!Z_{\phi,+}^\vee}\bigl({}^z\!B_{\phi,+}^\vee(xw_\mu)\bigr)$ is local. Thus, to prove the claim it is enough to observe that the map $f$ is not nilpotent. \end{proof}

We can now prove the following graded analogue of (part of) Corollary \ref{cor:2.31} which compares the graded $k$-algebra ${}^{v}\!\bar A^\mu_{\phi,-}$ in Definition \ref{df:3.6} with the graded $k$-algebra ${}^{w}\!\bar \mathscr{A}_{\mu,+}$ in Definition \ref{def:3.23}. The comparison between ${}^{u}\!\bar A^\mu_{\phi,+}$ and ${}^{z}\!\bar \mathscr{A}_{\mu,-}$ will be done in Corollary \ref{cor:2.42} below.

\begin{cor} \label{cor:loc4} Let $w\in I_{\mu,+}$ and $v=w^{-1}_-\in I^\mu_{\phi,-}$.
We have a graded $k$-algebra isomorphism ${}^{v}\!\bar A^\mu_{\phi,-}\to{}^{w}\!\bar \mathscr{A}_{\mu,+}$ such that $1_{y}\mapsto 1_x$ with $x=y_+^{-1}$ for each $y\in{}^v\!I_{\phi,-}^\mu$. \end{cor}

\begin{proof} We have ${}^w\!\bar \mathscr{A}_{\mu,+}=k\operatorname{End}\nolimits_{{}^w\!Z_{S^\vee,\mu,+}}\bigl({}^w\!\bar B_{S^\vee,\mu,+}\bigr)^{\operatorname{op}\nolimits},$ where ${}^w\!\bar B_{S^\vee,\mu,+}\in{}^w\bar\mathbf{Z}_{S^\vee,\mu,+}$ is the space of sections of a direct sum of graded BM-sheaves on ${}^w\mathcal{G}_{\mu,+}$. Next, since ${}^v\!\bar A^\mu_{\phi,-}= \operatorname{Ext}\nolimits_{{}^{v}\mathbf{O}^\mu_{\phi,-}}({}^{v}\!L^\mu_{\phi,-})^{\operatorname{op}\nolimits}$ by Definition \ref{df:3.6}, we have a graded $k$-algebra isomorphism ${}^v\!\bar A^\mu_{\phi,-}= k\operatorname{End}\nolimits_{{}^w\!Z_{S,\mu}^\vee}({}^w\!\bar B_{S,\mu,+}^\vee)^{\operatorname{op}\nolimits}$ by Proposition \ref{prop:loc3}$(b)$. Thus, we must check that $k\operatorname{End}\nolimits_{{}^w\!Z_{S,\mu}^\vee}({}^w\!\bar B_{S,\mu,+}^\vee)\simeq k\operatorname{End}\nolimits_{{}^w\!Z_{S^\vee,\mu}}\bigl({}^w\!\bar B_{S^\vee,\mu,+}\bigr).$ The choice of a $\widehat W$-invariant pairing on $\mathbf{t}$ yields a $\widehat W$-equivariant isomorphism $\mathbf{t}^*\simeq\mathbf{t}$. It induces a $\widehat W$-equivariant graded algebra isomorphism $S\simeq S^\vee$. Hence the moment graphs ${}^w\mathcal{G}_{\mu,+}^\vee$ and ${}^w\mathcal{G}_{\mu,+}$ are isomorphic. Using Remark \ref{rk:BM}, it is easy to check that this isomorphism identifies the graded BM-sheaves $\mathcal{L}({}^w\!\bar B_{S,\mu,+}^\vee)$ and $\mathcal{L}({}^w\!\bar B_{S^\vee,\mu,+})$. This proves the corollary. \end{proof}

\begin{cor}\label{cor:speBpm} For any $w\in I_{\mu,\pm}$, the following hold: (a) we have graded $k$-algebra isomorphisms ${}^w\!\bar \mathscr{A}_{\mu,\pm}\to \operatorname{End}\nolimits_{{}^w\!Z_{\mu,\pm}}({}^w\! \bar B_{\mu,\pm})^{\operatorname{op}\nolimits}$, (b) the functor $\mathbb{V}_{\!k}$ is fully faithful on projective objects in ${}^w\mathbf{O}_{\mu,\pm}^\Delta$. \end{cor}

\begin{proof} The obvious map ${}^w\!\bar \mathscr{A}_{\mu,+}\to \operatorname{End}\nolimits_{{}^w\!Z_{\mu,+}}({}^w\! \bar B_{\mu,+})^{\operatorname{op}\nolimits}$ is an isomorphism by Proposition \ref{prop:loc3}$(b)$ (applied to the dual root system). This proves $(a)$ in the positive-level case. The negative one is proved in a similar way, see Corollary \ref{cor:B2}. Now, let us concentrate on part $(b)$. By Proposition \ref{prop:foncteurV}$(b), (c),$ we have ${}^w\! B_{\mu,+}=\mathbb{V}_{\!k}({}^w\!P_{\mu,+})$ and ${}^w\!\mathscr{A}_{\mu,+}=k\operatorname{End}\nolimits_{{}^w\mathbf{O}_{S_0,\mu,+}}({}^w\!P_{S_0,\mu,+})^{\operatorname{op}\nolimits}.$ The latter is isomorphic to $\operatorname{End}\nolimits_{{}^w\mathbf{O}_{\mu,+}}({}^w\!P_{\mu,+})^{\operatorname{op}\nolimits}$ by Proposition \ref{prop:2.3}$(b)$. We deduce that $\mathbb{V}_{\!k}$ yields an isomorphism $\operatorname{End}\nolimits_{{}^w\!Z_{\mu,+}}(\mathbb{V}_{\!k}({}^w\!P_{\mu,+})) =\operatorname{End}\nolimits_{{}^w\mathbf{O}_{\mu,+}}({}^w\!P_{\mu,+})$, which proves that $\mathbb{V}_{\!k}$ is fully faithful on projectives in ${}^w\mathbf{O}_{\mu,+}^\Delta$. The negative-level case is similar. Indeed, since ${}^w\!P_{\mu,-}=k {}^w\!P_{S_0,\mu,-}$ by Proposition \ref{prop:2.3}$(e)$, it follows from Proposition \ref{prop:foncteurV}$(b), (c)$ that ${}^w\! B_{\mu,-}=\mathbb{V}_{\!k}({}^w\!P_{\mu,-})$ and ${}^w\!\mathscr{A}_{\mu,-}=k\operatorname{End}\nolimits_{{}^w\mathbf{O}_{S_0,\mu,-}}({}^w\!P_{S_0,\mu,-})^{\operatorname{op}\nolimits}.$ Thus $\operatorname{End}\nolimits_{{}^w\!Z_{\mu,-}}(\mathbb{V}_{\!k}({}^w\!P_{\mu,-})) =\operatorname{End}\nolimits_{{}^w\mathbf{O}_{\mu,-}}({}^w\!P_{\mu,-})$ by Proposition \ref{prop:2.3}$(b)$. We deduce that the functor $\mathbb{V}_{\!k}$ is fully faithful on projectives in ${}^w\mathbf{O}_{\mu,-}^\Delta$. \end{proof}

\begin{rk} Assume that $w\in I_{\mu,\pm}$. We have ${}^w\bar C_{S,\mu,\pm}(x)=D({}^v\!\bar B_{S,\mu,\mp}(y))$ with $y=x_{\mp}$, $v=w_\mp$ for each $x\in {}^w\!I_{\mu,\pm}$. Therefore, there are graded $k$-algebra isomorphisms ${}^v\!\bar \mathscr{A}_{\mu,\mp}= k\operatorname{End}\nolimits_{{}^w\!Z_{S,\mu,\pm}}\bigl({}^w\bar C_{S,\mu,\pm}\bigr).$ By Corollary \ref{cor:speBpm}, we also have graded $k$-algebra isomorphisms ${}^v\!\bar \mathscr{A}_{\mu,\mp}= \operatorname{End}\nolimits_{{}^w\!Z_{\mu,\pm}}({}^w \bar C_{\mu,\pm})$. \end{rk}

\section{Proof of the main theorem} \label{sec:5}

\subsection{The regular case} \label{sec:3.9}

Fix integers $d,e>0$ and fix $\mu\in\mathcal{P}$. Assume that the weights ${{\rm{o}}}_{\mu,-}$, ${{\rm{o}}}_{\phi,-}$ have levels $-e-N$ and $-d-N$ respectively. Let $w\in I_{\mu,+}$ and put $u=w^{-1}$, $v=w^{-1}_-$ and $z=w_-$.
Note that $u\in I^\mu_{\phi,+},$ $v\in I^\mu_{\phi,-}$ and $z\in I_{\mu,-}.$ The first step is to compare the algebras ${}^w\!A_{\mu,+}=\operatorname{End}\nolimits_{{}^w\mathbf{O}_{\mu,+}}({}^w\!P_{\mu,+})^{\operatorname{op}\nolimits}$ and ${}^v\!\bar A^\mu_{\phi,-}=\operatorname{Ext}\nolimits_{{}^{v}\mathbf{O}^\mu_{\phi,-}}({}^{v}\!L^\mu_{\phi,-})^{\operatorname{op}\nolimits},$ and, then, the algebras ${}^w\!\bar A_{\mu,+}=\operatorname{Ext}\nolimits_{{}^{w}\mathbf{O}_{\mu,+}}({}^{w}\!L_{\mu,+})^{\operatorname{op}\nolimits}$ and ${}^v\!A^\mu_{\phi,-}=\operatorname{End}\nolimits_{{}^v\mathbf{O}^\mu_{\phi,-}}({}^v\!P^\mu_{\phi,-})^{\operatorname{op}\nolimits}.$ More precisely, we prove the following.

\begin{prop} \label{prop:reg1} If $x\in{}^w\!I_{\mu,+}$ and $y=x_-^{-1},$ then $y\in{}^v\!I_{\phi,-}^\mu$. We have a $k$-algebra isomorphism ${}^w\!A_{\mu,+}={}^v\!\bar A^\mu_{\phi,-}$ such that $1_x\mapsto 1_y$ for each $x\in{}^w\!I_{\mu,+}$. We have a $k$-algebra isomorphism ${}^w\!\bar A_{\mu,+}= {}^v\!A^\mu_{\phi,-}$ such that $1_x\mapsto 1_y$ for each $x\in{}^w\!I_{\mu,+}$. The graded $k$-algebras ${}^w\!\bar A_{\mu,+}$ and ${}^v\!\bar A^\mu_{\phi,-}$ are Koszul and are Koszul dual to each other. Further, we have $1_x=1_y^!$ for each $x\in{}^w\!I_{\mu,+}$. \end{prop}

\begin{proof} By Corollaries \ref{cor:2.31}, \ref{cor:loc4}, composing $\mathbb{H}$, $\mathbb{V}_k$ yields $k$-algebra isomorphisms \begin{equation}\label{4.1a}{}^w\!A_{\mu,+}={}^w\! \mathscr{A}_{\mu,+}={}^v\!\bar A^\mu_{\phi,-}, \end{equation} which identify the idempotents $1_{y}\in {}^{v}\!\bar A^\mu_{\phi,-}$ and $1_x\in{}^w\!A_{\mu,+},{}^w\!\mathscr{A}_{\mu,+}$ for each $x\in{}^w\!I_{\mu,+}$ and $y=x_-^{-1}.$ Now, we claim that ${}^v\!A^\mu_{\phi,-}$ has a Koszul grading. By Lemma \ref{lem:1.2} and Section \ref{sec:tau}, it is enough to check that ${}^v\! A_{\phi,-}$ has a Koszul grading. This follows from the matrix equation in Proposition \ref{prop:2.24} and from \cite[thm.~2.11.1]{BGS}, because ${}^v\!A_{\phi,-}={}^v\! \mathscr{A}_{\phi,-}$ as $k$-algebras by Corollary \ref{cor:2.31}. Equip ${}^v\! A^\mu_{\phi,-}$ with the Koszul grading above. Then, Lemma \ref{lem:1.1} implies that \begin{equation}\label{4.1b} {}^v\!A^{\mu,!}_{\phi,-}=\operatorname{Ext}\nolimits_{{}^v\mathbf{O}^\mu_{\phi,-}}({}^v\!L^\mu_{\phi,-})^{\operatorname{op}\nolimits}={}^v\!\bar A^\mu_{\phi,-}. \end{equation} Thus, the graded $k$-algebra ${}^v\!\bar A^\mu_{\phi,-}$ is also Koszul. Since ${}^w\!A_{\mu,+}={}^v\!\bar A^\mu_{\phi,-}$ as a $k$-algebra, this implies that the $k$-algebra ${}^w\!A_{\mu,+}$ has a Koszul grading. Applying Lemma \ref{lem:1.1} once again, we get \begin{equation}\label{4.1c} {}^w\!A_{\mu,+}^!=\operatorname{Ext}\nolimits_{{}^w\mathbf{O}_{\mu,+}}({}^w\!L_{\mu,+})^{\operatorname{op}\nolimits}={}^w\!\bar A_{\mu,+}. \end{equation} In particular, the graded $k$-algebra ${}^w\!\bar A_{\mu,+}$ is also Koszul.
Finally, using \eqref{4.1b}, \eqref{4.1a} and \eqref{4.1c} we get $k$-algebra isomorphisms $${}^v\!A^\mu_{\phi,-}= {}^v\!\bar A^{\mu,!}_{\phi,-}= {}^w\!A_{\mu,+}^!={}^w\!\bar A_{\mu,+}.$$ They identify the idempotent $1_y\in {}^{v}\! A^\mu_{\phi,-}$ with the idempotent $1_x\in{}^w\!\bar A_{\mu,+}.$ \end{proof}

\begin{rk} The Koszul grading on ${}^v\!A^\mu_{\phi,-}$ can also be obtained using mixed perverse sheaves on the ind-scheme $X'$ as in \cite[thm.~4.5.4]{BGS}, \cite{AK}. Note that there is no analogue of \cite[lem.~3.9.2]{BGS} in our situation, because ${}^v\!R_{\phi,-}$ is not Koszul self-dual. \end{rk}

The second step consists of comparing the $k$-algebras ${}^z\!\bar A_{\mu,-}=\operatorname{Ext}\nolimits_{{}^{z}\mathbf{O}_{\mu,-}}({}^{z}\!L_{\mu,-})^{\operatorname{op}\nolimits},$ ${}^u\!A^\mu_{\phi,+}=\operatorname{End}\nolimits_{{}^u\mathbf{O}^\mu_{\phi,+}}({}^u\!P^\mu_{\phi,+})^{\operatorname{op}\nolimits}$ and the $k$-algebras ${}^z\!A_{\mu,-}=\operatorname{End}\nolimits_{{}^z\mathbf{O}_{\mu,-}}({}^z\!P_{\mu,-})^{\operatorname{op}\nolimits},$ ${}^u\!\bar A^\mu_{\phi,+}=\operatorname{Ext}\nolimits_{{}^{u}\mathbf{O}^\mu_{\phi,+}}({}^{u}\!L^\mu_{\phi,+})^{\operatorname{op}\nolimits}.$ We cannot argue as in the previous proposition, because we have no analogue of the localization functor $\Phi$ in Proposition \ref{prop:loc2} for positive levels. Hence, we have no analogue of Proposition \ref{prop:loc3} and Corollary \ref{cor:loc4}. To remedy this, we'll use the technique of standard Koszul duality. To do so, we need the following crucial result.

\begin{lemma} \label{lem:balanced} The quasi-hereditary $k$-algebra ${}^w\!A_{\mu,+}$ has a balanced grading. \end{lemma}

Now we can prove the second main result of this section.

\begin{prop} \label{prop:reg2} If $x\in{}^z\!I_{\mu,-}$ and $y=x_+^{-1}$ then $y\in{}^u\!I^\mu_{\phi,+}$. We have a $k$-algebra isomorphism ${}^z\!\bar A_{\mu,-}={}^{u}\! A^\mu_{\phi,+}$ such that $1_x\mapsto 1_{y}$ for each $x\in{}^z\!I_{\mu,-}$, and a $k$-algebra isomorphism ${}^z\!A_{\mu,-}={}^{u}\!\bar A^\mu_{\phi,+}$ such that $1_x\mapsto 1_y$ for each $x\in{}^z\!I_{\mu,-}$. The graded $k$-algebras ${}^{u}\!\bar A^\mu_{\phi,+}$ and ${}^z\!\bar A_{\mu,-}$ are Koszul and are Koszul dual to each other. Further, we have $1_x=1_y^!$ for each $x\in{}^z\!I_{\mu,-}$. \end{prop}

\begin{proof} By Proposition \ref{prop:ringel}, the Ringel dual of ${}^w\!A_{\mu,+}$ is ${}^w\!A_{\mu,+}^\diamond={}^z\!A_{\mu,-}$. Thus, since the $k$-algebra ${}^w\!A_{\mu,+}$ has a balanced grading by Lemma \ref{lem:balanced}, we deduce that the $k$-algebra ${}^z\!A_{\mu,-}$ has a Koszul grading. We equip ${}^z\!A_{\mu,-}$ with this grading. Then, Lemma \ref{lem:1.1} implies that \begin{equation}\label{4.2b} {}^z\!A^!_{\mu,-}=\operatorname{Ext}\nolimits_{{}^z\mathbf{O}_{\mu,-}}({}^z\!L_{\mu,-})^{\operatorname{op}\nolimits}={}^z\!\bar A_{\mu,-}. \end{equation} Hence, the graded $k$-algebra ${}^z\!\bar A_{\mu,-}$ is Koszul and balanced.
Therefore, \cite[thm.~1]{Ma}, Proposition \ref{prop:ringel} and \eqref{4.2b} yield a $k$-algebra isomorphism $${}^z\!\bar A_{\mu,-}= {}^z\!A_{\mu,-}^!= (({}^z\!A^\diamond_{\mu,-})^!)^\diamond= ({}^w\! A_{\mu,+}^!)^\diamond.$$ Hence, using \eqref{4.1c} and Propositions \ref{prop:reg1}, \ref{prop:ringel}, we get a $k$-algebra isomorphism $${}^z\!\bar A_{\mu,-}= {}^w\! \bar A_{\mu,+}^\diamond= {}^v\! A^{\mu,\diamond}_{\phi,-}= {}^{u}\! A^\mu_{\phi,+}$$ such that $1_x\mapsto 1_y$. So, the $k$-algebra ${}^{u}\! A^{\mu}_{\phi,+}$ has a Koszul grading which is balanced. We equip ${}^{u}\! A^{\mu}_{\phi,+}$ with this grading. Lemma \ref{lem:1.1} implies that \begin{equation}\label{4.2c} {}^{u}\! A^{\mu,!}_{\phi,+}=\operatorname{Ext}\nolimits_{{}^u\mathbf{O}^\mu_{\phi,+}}({}^u\!L^\mu_{\phi,+})^{\operatorname{op}\nolimits}={}^{u}\! \bar A^{\mu}_{\phi,+}. \end{equation} Hence, the graded $k$-algebra ${}^{u}\! \bar A^{\mu}_{\phi,+}$ is Koszul and balanced. Therefore, \cite[thm.~1]{Ma}, Proposition \ref{prop:ringel} yield $${}^{u}\!\bar A^\mu_{\phi,+}= {}^{u}\!A^{\mu,!}_{\phi,+}= (({}^{u}\!A^{\mu,\diamond}_{\phi,+})^!)^\diamond= ({}^v\! A^{\mu,!}_{\phi,-})^\diamond.$$ So, using \eqref{4.1b} and Propositions \ref{prop:reg1}, \ref{prop:ringel}, we get a $k$-algebra isomorphism $${}^{u}\!\bar A^\mu_{\phi,+}=({}^v\! \bar A^\mu_{\phi,-})^\diamond= {}^w\! A^\diamond_{\mu,+}= {}^z\! A_{\mu,-} .$$ \end{proof}

Now, let us prove Lemma \ref{lem:balanced}.

\begin{proof}[Proof of Lemma \ref{lem:balanced}] We equip ${}^w\!A_{\mu,+}$ with the grading given by ${}^v\!\bar A^\mu_{\phi,-}$, see Proposition \ref{prop:reg1}. Since ${}^v\!\bar A^\mu_{\phi,-}$ is Koszul, we must prove that ${}^v\!\bar A^{\mu,!}_{\phi,-}$ is quasi-hereditary and that the grading on ${}^v\!\bar A^{\mu,\diamond}_{\phi,-}$ is positive, see Section \ref{sec:standardkoszul}. We have ${}^v\!\bar A^{\mu,!}_{\phi,-}={}^w\!\bar A_{\mu,+}$ by Proposition \ref{prop:reg1}. Hence, it is quasi-hereditary. Next, the grading of ${}^z\! \bar \mathscr{A}_{\mu,-}$ is positive by Remark \ref{rk:3.29}. Thus, to prove the lemma, it is enough to check that we have a graded $k$-algebra isomorphism ${}^v\!\bar A^{\mu,\diamond}_{\phi,-}={}^z\!\bar \mathscr{A}_{\mu,-}.$ First, we check that ${}^w\! A^\diamond_{\mu,+}={}^z\! \mathscr{A}_{\mu,-}$ as (ungraded) $k$-algebras. To do this, note that Proposition \ref{prop:ringel}$(c)$ yields a $k$-algebra isomorphism ${}^w\! A_{\mu,+}^\diamond ={}^z\! A_{\mu,-}.$ Further, we have ${}^z\! A_{\mu,-} =\operatorname{End}\nolimits_{{}^z\mathbf{O}_{\mu,-}}({}^z\!P_{\mu,-})^{\operatorname{op}\nolimits}$ and, by Proposition \ref{prop:ringel}$(b),$ the Ringel equivalence takes ${}^z\mathbf{O}_{\mu,-}^\Delta$ to ${}^z\mathbf{O}_{\mu,+}^\Delta$. Thus, Propositions \ref{prop:2.3}$(b)$, \ref{prop:foncteurV}$(c)$ yield $$\aligned {}^w\! A_{\mu,+}^\diamond &=\operatorname{End}\nolimits_{{}^z\mathbf{O}_{\mu,+}}({}^z T_{\mu,+})^{\operatorname{op}\nolimits}\\ &=k\operatorname{End}\nolimits_{{}^z\mathbf{O}_{S_0,\mu,+}}({}^z T_{S_0,\mu,+})^{\operatorname{op}\nolimits}\\ &=k\operatorname{End}\nolimits_{{}^z\!Z_{S_0,\mu,+}}({}^z\! C_{S_0,\mu,+})^{\operatorname{op}\nolimits}\\ &={}^z\! \mathscr{A}_{\mu,-}, \endaligned$$ where the last equality is Definition \ref{def:3.23}. Now, we must identify the gradings of ${}^v\!\bar A^{\mu,\diamond}_{\phi,-}$ and ${}^z\!\bar \mathscr{A}_{\mu,-}$ under the isomorphism ${}^w\! A_{\mu,+}^\diamond={}^z\! \mathscr{A}_{\mu,-}$ above. First, we consider the grading on ${}^v\!\bar A^{\mu,\diamond}_{\phi,-}$. Let ${}^w\bar T_{\mu,+}$ be the graded ${}^v\!\bar A^\mu_{\phi,-}$-module equal to ${}^wT_{\mu,+}$ as an ${}^w\! A_{\mu,+}$-module, with the natural grading (this is well-defined, because ${}^v\!\bar A^\mu_{\phi,-}$ is Koszul). Then, by Section \ref{sec:standardkoszul}, we have ${}^v\!\bar A^{\mu,\diamond}_{\phi,-}= \operatorname{End}\nolimits_{{}^w\! A_{\mu,+}}({}^w\bar T_{\mu,+})^{\operatorname{op}\nolimits}$. Next, we consider the grading on ${}^z\!\bar \mathscr{A}_{\mu,-}$. By Corollary \ref{cor:B2}, we have $ {}^z\!\bar \mathscr{A}_{\mu,-}= k\operatorname{End}\nolimits_{{}^w\! Z_{S,\mu,+}}\bigl({}^w\!\bar C_{S,\mu,+}\bigr) =\operatorname{End}\nolimits_{{}^w\! Z_{\mu,+}}\bigl({}^w\!\bar C_{\mu,+}\bigr). $ Let us identify ${}^w\!A_{\mu,+}={}^w\! \mathscr{A}_{\mu,+}$ via \eqref{4.1a}. We must prove the following.

\begin{claim}\label{claim:4.7} There is a graded $k$-algebra isomorphism $$\operatorname{End}\nolimits_{{}^w\! \mathscr{A}_{\mu,+}}({}^w\bar T_{\mu,+})= \operatorname{End}\nolimits_{{}^w\! Z_{\mu,+}}\bigl({}^w\!\bar C_{\mu,+}\bigr).$$ \end{claim}

To do that, we'll need some new material. Consider the graded categories given by ${}^w\bar\mathbf{O}_{\mu,+}={}^w\! \bar \mathscr{A}_{\mu,+}\operatorname{\!-\mathbf{gmod}}\nolimits$ and ${}^w\bar\mathbf{O}_{S,\mu,+}={}^w\!\bar \mathscr{A}_{S,\mu,+}\operatorname{\!-\mathbf{gmod}}\nolimits.$ Since we have ${}^w\bar T_{\mu,+}\in{}^v\!\bar A^\mu_{\phi,-}\operatorname{\!-\mathbf{gmod}}\nolimits$ and ${}^w\! \bar \mathscr{A}_{\mu,+}={}^v\!\bar A^\mu_{\phi,-}$ by Corollary \ref{cor:loc4}, we may view ${}^w\bar T_{\mu,+}$ as an object of ${}^w\bar\mathbf{O}_{\mu,+}$. We have an isomorphism of graded $S$-algebras ${}^w\!\bar \mathscr{A}_{S,\mu,+}= \operatorname{End}\nolimits_{{}^w\! Z_{S,\mu,+}}\bigl({}^w\!\bar B_{S,\mu,+}\bigr)^{\operatorname{op}\nolimits}$.
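For the pair of functors introduced in the next paragraph, the adjunction is simply the usual tensor-Hom adjunction; we record it here for the reader's convenience (a standard fact, stated under the convention, which we assume, that the isomorphism above makes ${}^w\!\bar B_{S,\mu,+}$ a $({}^w\! Z_{S,\mu,+},{}^w\!\bar \mathscr{A}_{S,\mu,+})$-bimodule): $$\operatorname{Hom}\nolimits_{{}^w\! Z_{S,\mu,+}}\bigl({}^w\!\bar B_{S,\mu,+}\otimes_{{}^w\!\bar \mathscr{A}_{S,\mu,+}}M,\,N\bigr)\simeq \operatorname{Hom}\nolimits_{{}^w\!\bar \mathscr{A}_{S,\mu,+}}\bigl(M,\,\operatorname{Hom}\nolimits_{{}^w\! Z_{S,\mu,+}}({}^w\!\bar B_{S,\mu,+},N)\bigr)$$ for $M$ in ${}^w\bar\mathbf{O}_{S,\mu,+}$ and $N$ in ${}^w\bar Z_{S,\mu,+}\operatorname{\!-\mathbf{gmod}}\nolimits$.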
Consider the pair of adjoint functors $(\bar\mathbb{V}_S,\bar\psi_S)$ between ${}^w\bar\mathbf{O}_{S,\mu,+}$ and ${}^w\bar Z_{S,\mu,+}\operatorname{\!-\mathbf{gmod}}\nolimits$ given by $$\gathered \bar\mathbb{V}_{S}={}^w\!\bar B_{S,\mu,+}\otimes_{{}^w\!\bar \mathscr{A}_{S,\mu,+}}\bullet: {}^w\bar\mathbf{O}_{S,\mu,+}\to{}^w\bar Z_{S,\mu,+}\operatorname{\!-\mathbf{gmod}}\nolimits,\\ \bar\psi_S=\operatorname{Hom}\nolimits_{{}^w\! Z_{S,\mu,+}}\bigl({}^w\!\bar B_{S,\mu,+},\bullet\bigr): {}^w\bar Z_{S,\mu,+}\operatorname{\!-\mathbf{gmod}}\nolimits\to{}^w\bar\mathbf{O}_{S,\mu,+}. \endgathered$$ We consider the object ${}^w\bar T_{S,\mu,+}$ of ${}^w\bar\mathbf{O}_{S,\mu,+}$ given by ${}^w\bar T_{S,\mu,+}=\bar\psi_S({}^w\bar C_{S,\mu,+})$. Recall the functor $\mathbb{V}_{S_0}:{}^w\mathbf{O}_{S_0,\mu,+}^\Delta\to{}^w\mathbf{Z}_{S_0,\mu,+}^\Delta$ studied in Proposition \ref{prop:foncteurV}. It yields an $S_0$-algebra isomorphism $\operatorname{End}\nolimits_{{}^w\mathbf{O}_{S_0,\mu,+}}({}^wP_{S_0,\mu,+})^{\operatorname{op}\nolimits}\to{}^w\! \mathscr{A}_{S_0,\mu,+}.$ Thus, since ${}^wP_{S_0,\mu,+}$ is a pro-generator of ${}^w\mathbf{O}_{S_0,\mu,+}$, the functor $\phi_{S_0}=\operatorname{Hom}\nolimits_{{}^w\mathbf{O}_{S_0,\mu,+}}({}^w\!P_{S_0,\mu,+},\bullet)$ identifies ${}^w\mathbf{O}_{S_0,\mu,+}$ with the category ${}^w\!\mathscr{A}_{S_0,\mu,+}\operatorname{\!-\mathbf{mod}}\nolimits.$ Consider the functor $\psi_{S_0}: {}^wZ_{S_0,\mu,+}\operatorname{\!-\mathbf{mod}}\nolimits\to{}^w\!\mathscr{A}_{S_0,\mu,+}\operatorname{\!-\mathbf{mod}}\nolimits$ given by $\psi_{S_0}=\operatorname{Hom}\nolimits_{{}^w\! Z_{S_0,\mu,+}}\bigl({}^w\!B_{S_0,\mu,+},\bullet\bigr)$. We have the commutative triangle \begin{equation}\label{triangle}\begin{split}\xymatrix{ {}^w\mathbf{O}_{S_0,\mu,+}^\Delta\ar[r]^-{\mathbb{V}_{S_0}}_-\sim\ar@{_{(}->}[d]_-{\phi_{S_0}}& {}^w\mathbf{Z}_{S_0,\mu,+}^\Delta\ar@{^{(}->}[ld]^-{\psi_{S_0}}\cr {}^w\!\mathscr{A}_{S_0,\mu,+}\operatorname{\!-\mathbf{mod}}\nolimits,& }\end{split}\end{equation} such that $\mathbb{V}_{S_0}({}^w\!T_{S_0,\mu,+})={}^w\!C_{S_0,\mu,+}$. Now, we define the module ${}^wT'_{S_0,\mu,+}=\psi_{S_0}({}^w\!C_{S_0,\mu,+})$ in ${}^w\!\mathscr{A}_{S_0,\mu,+}\operatorname{\!-\mathbf{mod}}\nolimits$. It is identified with ${}^wT_{S_0,\mu,+}$ by $\phi_{S_0}$, because \eqref{triangle} is commutative. By Proposition \ref{prop:foncteurV}$(c)$, the functor $\psi_{S_0}$ yields a $k$-algebra isomorphism $\operatorname{End}\nolimits_{{}^w\! Z_{S_0,\mu,+}}\bigl({}^w\!C_{S_0,\mu,+}\bigr)\to \operatorname{End}\nolimits_{{}^w\! \mathscr{A}_{S_0,\mu,+}}\bigl({}^w\!T'_{S_0,\mu,+}\bigr).$ Note that $\phi_{S_0}$ and $\psi_{S_0}$ are both exact. Since taking Hom's commutes with localization, we have $\varepsilon({}^w\!\bar \mathscr{A}_{S,\mu,+})={}^w\!\mathscr{A}_{S_0,\mu,+}$. Thus, forgetting the grading and taking base change yield a functor $\varepsilon:{}^w\bar\mathbf{O}_{S,\mu,+}\to{}^w\mathbf{O}_{S_0,\mu,+}$ which is faithful and faithfully exact, see Section \ref{rk:basechange}. Let ${}^w\bar\mathbf{O}_{S,\mu,+}^\Delta\subset{}^w\bar\mathbf{O}_{S,\mu,+}$ be the full subcategory of the modules taken to ${}^w\mathbf{O}_{S_0,\mu,+}^\Delta$ by $\varepsilon$.
We identify ${}^w\mathbf{O}_{S_0,\mu,+}$ with ${}^w\!\mathscr{A}_{S_0,\mu,+}\operatorname{\!-\mathbf{mod}}\nolimits$ by $\phi_{S_0}$. We have the following.

\begin{claim}\label{claim:4.6} We have the following commutative diagram: \begin{equation*} \label{square} \begin{split} \xymatrix{ {}^w\bar\mathbf{O}_{S,\mu,+}^\Delta\ar[r]^-{\bar\mathbb{V}_{S}} \ar[d]_-\varepsilon&{}^w\bar\mathbf{Z}_{S,\mu,+}^\Delta\ar[d]^-\varepsilon \ar[r]^-{\bar\psi_{S}}&{}^w\bar\mathbf{O}_{S,\mu,+}^\Delta\ar[d]_-\varepsilon \cr {}^w\mathbf{O}_{S_0,\mu,+}^\Delta\ar[r]^-{\mathbb{V}_{S_0}}&{}^w\mathbf{Z}_{S_0,\mu,+}^\Delta \ar[r]^-{\psi_{S_0}}&{}^w\mathbf{O}_{S_0,\mu,+}^\Delta. } \end{split} \end{equation*} \end{claim}

Since ${}^w\!\mathscr{A}_{S_0,\mu,+}$ is Noetherian, any finitely generated module has a finite presentation. By Proposition \ref{prop:foncteurV}$(c),$ the functor $\mathbb{V}_{S_0}$ is exact and we have $\mathbb{V}_{S_0}({}^w\!P_{S_0,\mu,+})={}^w\!B_{S_0,\mu,+}$. We identify ${}^w\mathbf{O}_{S_0,\mu,+}$ with ${}^w\!\mathscr{A}_{S_0,\mu,+}\operatorname{\!-\mathbf{mod}}\nolimits$ as above. By the five lemma, the canonical morphism ${}^w\!B_{S_0,\mu,+}\otimes_{{}^w\!\mathscr{A}_{S_0,\mu,+}}\bullet\to\mathbb{V}_{S_0}$ is an isomorphism. We deduce that the left square in Claim \ref{claim:4.6} is commutative. Next, by definition we have $\varepsilon\circ\bar\psi_S=\psi_{S_0}\circ\varepsilon$. From the diagram \ref{triangle}, we deduce that $\psi_{S_0}$ maps into ${}^w\mathbf{O}_{S_0,\mu,+}^\Delta$ and is a quasi-inverse to $\mathbb{V}_{S_0}$. Claim \ref{claim:4.6} follows. From Claim \ref{claim:4.6} we deduce that $\varepsilon({}^w\bar T_{S,\mu,+})={}^wT_{S_0,\mu,+}$ and ${}^w\bar T_{S,\mu,+}\in{}^w\bar\mathbf{O}_{S,\mu,+}^\Delta$. Next, since $\varepsilon:S\operatorname{\!-\mathbf{gmod}}\nolimits\to S_0\operatorname{\!-\mathbf{mod}}\nolimits$ is faithfully exact by Section \ref{rk:basechange} and $\psi_{S_0}$ gives an isomorphism $\operatorname{End}\nolimits_{{}^w\! Z_{S_0,\mu,+}}\bigl({}^w\!C_{S_0,\mu,+}\bigr)\to \operatorname{End}\nolimits_{{}^w\! \mathscr{A}_{S_0,\mu,+}}\bigl({}^w\!T_{S_0,\mu,+}\bigr)$, we deduce from Claim \ref{claim:4.6} that $\bar\psi_S$ gives a graded $k$-algebra isomorphism $\operatorname{End}\nolimits_{{}^w\! Z_{S,\mu,+}}\bigl({}^w\!\bar C_{S,\mu,+}\bigr)\to \operatorname{End}\nolimits_{{}^w\! \mathscr{A}_{S,\mu,+}}\bigl({}^w\!\bar T_{S,\mu,+}\bigr).$ Finally, since the functor $\psi_{S_0}$ is exact and $\varepsilon$ is faithfully exact by Section \ref{rk:basechange}, we deduce from Claim \ref{claim:4.6} that $\bar\psi_S$ is exact on ${}^w\bar\mathbf{Z}_{S,\mu,+}^\Delta.$ Now, by Definition \ref{def:3.23}, we have ${}^w\!\bar \mathscr{A}_{\mu,+}=k{}^w\!\bar \mathscr{A}_{S,\mu,+}$. Thus, the specialization at $k$ gives the module $k{}^w\bar T_{S,\mu,+}$ in ${}^w\bar\mathbf{O}_{\mu,+}={}^w\! \bar \mathscr{A}_{\mu,+}\operatorname{\!-\mathbf{gmod}}\nolimits.$

\begin{claim}\label{claim:4.8} We have $k{}^w\bar T_{S,\mu,+}={}^w\bar T_{\mu,+}$. \end{claim}

We first deduce Claim \ref{claim:4.7} from Claim \ref{claim:4.8}. The specialization gives a graded ring homomorphism $k\operatorname{End}\nolimits_{{}^w\! \mathscr{A}_{S,\mu,+}}({}^w\bar T_{S,\mu,+})\to \operatorname{End}\nolimits_{{}^w\! \mathscr{A}_{\mu,+}}(k{}^w\bar T_{S,\mu,+}).$ Since $\varepsilon({}^w\bar T_{S,\mu,+})={}^wT_{S_0,\mu,+}$, forgetting the grading it is taken to the obvious map $k\operatorname{End}\nolimits_{{}^w\! \mathbf{O}_{S_0,\mu,+}}({}^wT_{S_0,\mu,+})\to \operatorname{End}\nolimits_{{}^w\! \mathbf{O}_{\mu,+}}(k{}^w T_{S_0,\mu,+}).$ The latter is an isomorphism by Proposition \ref{prop:2.3}$(b)$. Since $k{}^w\bar T_{S,\mu,+}={}^w\bar T_{\mu,+}$, we deduce that $ k\operatorname{End}\nolimits_{{}^w\! \mathscr{A}_{S,\mu,+}}({}^w\bar T_{S,\mu,+})= \operatorname{End}\nolimits_{{}^w\! \mathscr{A}_{\mu,+}}({}^w\bar T_{\mu,+}). $ Since we have $k\operatorname{End}\nolimits_{{}^w\! Z_{S,\mu,+}}\bigl({}^w\!\bar C_{S,\mu,+}\bigr) =\operatorname{End}\nolimits_{{}^w\! Z_{\mu,+}}\bigl({}^w\!\bar C_{\mu,+}\bigr)$ and $\operatorname{End}\nolimits_{{}^w\! Z_{S,\mu,+}}\bigl({}^w\!\bar C_{S,\mu,+}\bigr)= \operatorname{End}\nolimits_{{}^w\! \mathscr{A}_{S,\mu,+}}\bigl({}^w\!\bar T_{S,\mu,+}\bigr),$ this implies Claim \ref{claim:4.7}.

Now, we prove Claim \ref{claim:4.8}. The grading of ${}^w\bar T_{\mu,+}$ is characterized in the following way. First, we have a decomposition ${}^w\bar T_{\mu,+}=\bigoplus_x{}^w\bar T(x\bullet{{\rm{o}}}_{\mu,+})$ where ${}^w\bar T(x\bullet{{\rm{o}}}_{\mu,+})$ is the natural graded lift of ${}^wT(x\bullet{{\rm{o}}}_{\mu,+})$. Since ${}^w T(x\bullet{{\rm{o}}}_{\mu,+})$ is indecomposable in ${}^w\mathbf{O}_{\mu,+}$, it admits at most one graded lift in ${}^w\bar\mathbf{O}_{\mu,+}$, up to grading shift. Let $\bar V(x\bullet{{\rm{o}}}_{\mu,+})$ be the graded ${}^v\!\bar A^\mu_{\phi,-}$-module equal to $V(x\bullet{{\rm{o}}}_{\mu,+})$ as an ${}^w\! A_{\mu,+}$-module, with the natural grading (this is well-defined, because ${}^v\!\bar A^\mu_{\phi,-}$ is Koszul). The natural grading is characterized by the fact that there is an inclusion $\bar V(x\bullet{{\rm{o}}}_{\mu,+})\subset {}^w\bar T(x\bullet{{\rm{o}}}_{\mu,+})$ which is homogeneous of degree 0. The graded object ${}^w\bar T_{S,\mu,+}$ satisfies a similar property. Indeed, by Lemma \ref{rk:filtration2}, the graded $S$-sheaf ${}^w\!\bar C_{S,\mu,+}(x)$ is filtered by shifted Verma-sheaves, and the lower term of this filtration yields an inclusion $\bar V_{S,\mu}(x)\langle l(x)\rangle\subset {}^w\bar C_{S,\mu,+}(x).$ Further, consider the object ${}^w\bar T_S(x\bullet{{\rm{o}}}_{\mu,+})$ in ${}^w\bar\mathbf{O}_{S,\mu,+}$ given by ${}^w\bar T_S(x\bullet{{\rm{o}}}_{\mu,+})=\bar\psi_S({}^w\bar C_{S,\mu,+}(x))$. Then, we have the decomposition ${}^w\bar T_{S,\mu,+}=\bigoplus_x{}^w\bar T_S(x\bullet{{\rm{o}}}_{\mu,+})$.
For each $x\in{}^wI_{\mu,+},$ we also consider the object $\bar V_S(x\bullet{{\rm{o}}}_{\mu,+})$ in ${}^w\bar\mathbf{O}_{S,\mu,+}$ given by $\bar V_{S}(x\bullet {{\rm{o}}}_{\mu,+})=\bar\psi_S(\bar V_{S,\mu}(x))\langle l(x)\rangle$. Since $\bar\psi_S$ is exact, we deduce that ${}^w\bar T_S(x\bullet{{\rm{o}}}_{\mu,+})$ is filtered by $\bar V_{S}(y\bullet {{\rm{o}}}_{\mu,+})$'s, and the lower term of this filtration yields an inclusion $\bar V_{S}(x\bullet {{\rm{o}}}_{\mu,+})\subset {}^w\bar T_S(x\bullet{{\rm{o}}}_{\mu,+})$ which is homogeneous of degree 0. Now, by Proposition \ref{prop:foncteurV}$(c),$ we have $\mathbb{V}_{S_0}(V_{S_0}(x\bullet {{\rm{o}}}_{\mu,+}))=V_{S_0,\mu}(x)$. From the proof of Claim \ref{claim:4.6}, we deduce that $\psi_{S_0}(V_{S_0,\mu}(x))=V_{S_0}(x\bullet {{\rm{o}}}_{\mu,+})$. Thus $$\varepsilon(\bar V_{S}(x\bullet {{\rm{o}}}_{\mu,+}))= \varepsilon\bar\psi_S(\bar V_{S,\mu}(x))= \psi_{S_0}\varepsilon(\bar V_{S,\mu}(x))=V_{S_0}(x\bullet {{\rm{o}}}_{\mu,+}).$$ Since $V_{S_0}(x\bullet {{\rm{o}}}_{\mu,+})$ is free over $S_0,$ we deduce that $\bar V_{S}(x\bullet {{\rm{o}}}_{\mu,+})$ is free over $S$ by Section \ref{rk:basechange}. Therefore, the inclusion $\bar V_{S}(x\bullet {{\rm{o}}}_{\mu,+})\subset {}^w\bar T_S(x\bullet{{\rm{o}}}_{\mu,+})$ gives an inclusion $k\bar V_{S}(x\bullet {{\rm{o}}}_{\mu,+})\subset k{}^w\bar T_S(x\bullet{{\rm{o}}}_{\mu,+})$ which is homogeneous of degree 0. Hence, to prove Claim \ref{claim:4.8} we are reduced to checking that we have $\bar V(x\bullet{{\rm{o}}}_{\mu,+})=k\bar V_{S}(x\bullet {{\rm{o}}}_{\mu,+})$ in ${}^w\bar\mathbf{O}_{\mu,+}$. Note that $\bar V(x\bullet{{\rm{o}}}_{\mu,+})$ and $k\bar V_{S}(x\bullet {{\rm{o}}}_{\mu,+})$ are both graded lifts of the Verma module $V(x\bullet{{\rm{o}}}_{\mu,+}),$ which is indecomposable. Thus they coincide up to a grading shift. To identify this shift, recall that by Lemma \ref{rk:filtration2} we have a surjection ${}^w\!\bar B_{S,\mu,+}(x)\to\bar V_{S}(x)\langle l(x)\rangle.$ We define ${}^w\bar P_{S}(x\bullet{{\rm{o}}}_{\mu,+})=\bar\psi_S({}^w\!\bar B_{S,\mu,+}(x)).$ Since $\bar\psi_S$ is exact, we have a surjection ${}^w\!\bar P_{S}(x\bullet{{\rm{o}}}_{\mu,+})\to\bar V_{S}(x\bullet{{\rm{o}}}_{\mu,+}).$ Now, since ${}^v\!\bar A^\mu_{\phi,-}$ is Koszul, we can consider the natural graded lift ${}^w\!\bar P(x\bullet{{\rm{o}}}_{\mu,+})$ of ${}^w\!P(x\bullet{{\rm{o}}}_{\mu,+})$ in ${}^w\bar\mathbf{O}_{\mu,+}$. By definition of the natural grading, we have a surjection ${}^w\!\bar P(x\bullet{{\rm{o}}}_{\mu,+})\to\bar V(x\bullet{{\rm{o}}}_{\mu,+}).$ So, we must prove that ${}^w\!\bar P(x\bullet{{\rm{o}}}_{\mu,+})=k{}^w\!\bar P_{S}(x\bullet {{\rm{o}}}_{\mu,+})$ in ${}^w\bar\mathbf{O}_{\mu,+}.$ This is obvious, because ${}^w\!\bar P(x\bullet{{\rm{o}}}_{\mu,+})={}^w\! \bar \mathscr{A}_{\mu,+}1_x$ by definition of the natural grading of ${}^w\! P(x\bullet{{\rm{o}}}_{\mu,+}),$ and because Definition \ref{def:3.23} yields the chain of isomorphisms $${}^w\bar P_{S}(x\bullet{{\rm{o}}}_{\mu,+})=\bar\psi_S({}^w\!\bar B_{S,\mu,+}(x)) =\operatorname{End}\nolimits_{{}^w\! Z_{S,\mu,+}}\bigl({}^w\!\bar B_{S,\mu,+}\bigr)^{\operatorname{op}\nolimits}1_x ={}^w\! \bar \mathscr{A}_{\mu,+}1_x.$$ \end{proof}

We can now prove the following graded analogue of Corollary \ref{cor:2.31}. See also Corollary \ref{cor:loc4}.

\begin{cor} \label{cor:2.42} Assume that $u\in I^\mu_{\phi,+}$. Let $z=u_-^{-1}\in I_{\mu,-}.$ We have an isomorphism of graded $k$-algebras ${}^{u}\!\bar A^\mu_{\phi,+}\to {}^{z}\!\bar \mathscr{A}_{\mu,-}$ such that $1_{y}\mapsto 1_x$ with $x=y_-^{-1}$ for each $y\in{}^u\!I_{\phi,+}^\mu$. \end{cor}

\begin{proof} By \cite[thm.~1]{Ma} and Lemma \ref{lem:balanced}, the graded $k$-algebra ${}^v\!\bar A^{\mu,\diamond}_{\phi,-}={}^z\! \bar \mathscr{A}_{\mu,-}$ is Koszul. By Proposition \ref{prop:reg2}, the graded $k$-algebra ${}^u\!\bar A^\mu_{\phi,+}$ is Koszul and is isomorphic to ${}^z\! A_{\mu,-}$ as a $k$-algebra. By Proposition \ref{prop:reg2}, we have ${}^{v}\!\bar A^\mu_{\phi,-}={}^w\! A_{\mu,+}$ as a $k$-algebra. Finally, by Proposition \ref{prop:ringel}, the Ringel dual of ${}^w\!A_{\mu,+}$ is ${}^w\!A_{\mu,+}^\diamond={}^z\!A_{\mu,-}$. Thus we have a $k$-algebra isomorphism ${}^z\! A_{\mu,-}={}^z\! \mathscr{A}_{\mu,-},$ which lifts to a graded $k$-algebra isomorphism ${}^u\!\bar A^\mu_{\phi,+}={}^{z}\!\bar \mathscr{A}_{\mu,-}$ by unicity of the Koszul grading \cite[cor.~2.5.2]{BGS}. \end{proof}

\subsection{The general case} \label{sec:3.10}

We can now complete the proof of Theorem \ref{thm:main}. We first prove a series of preliminary lemmas. Fix parabolic types $\mu,\nu\in\mathcal{P}$ and integers $d,e,f>0$. Choose integral weights ${{\rm{o}}}_{\mu,-}$, ${{\rm{o}}}_{\nu,-}$, ${{\rm{o}}}_{\phi,-}$ of levels $-e-N$, $-f-N$ and $-d-N$ respectively. Fix an element $w\in\widehat W$. Let $\tau_{\phi,\nu}:{}^w\mathbf{O}_{\mu,+}\to{}^w\mathbf{O}^\nu_{\mu,+}$ be the parabolic truncation functor, see Section \ref{sec:tau}. Applying the functor $\tau_{\phi,\nu}$ to the module ${}^w\!P_{\mu,+}$, we get a $k$-algebra homomorphism $\tau_{\phi,\nu}:{}^w\!A_{\mu,+}\to{}^w\!A_{\mu,+}^\nu$.

\begin{lemma} \label{lem:gen1} Assume that $w\in I_{\mu,+}^\nu.$ Then the $k$-algebra homomorphism $\tau_{\phi,\nu}:{}^w\!A_{\mu,+}\to {}^w\!A^\nu_{\mu,+}$ is surjective. Its kernel is the two-sided ideal generated by the idempotents $1_x$ with $x\in{}^w\!I_{\mu,+}\setminus {}^w\!I_{\mu,+}^\nu$. Further, we have $\tau_{\phi,\nu}(1_x)=1_x$ for each $x\in{}^w\!I_{\mu,+}^\nu$. \end{lemma}

\begin{proof} By Section \ref{sec:tau}, the functor $\tau_{\phi,\nu}$ takes the minimal projective generator of ${}^w\mathbf{O}_{\mu,+}$ to the minimal projective generator of ${}^w\mathbf{O}_{\mu,+}^\nu$. Let $i_{\nu,\phi}$ be the right adjoint of $\tau_{\phi,\nu}$. For any $M$ the unit $M\to i_{\nu,\phi}\tau_{\phi,\nu}(M)$ is surjective.
Hence, for any projective module $P$ we have a surjective map $$\operatorname{Hom}\nolimits_{{}^w\mathbf{O}_{\mu,+}}(P,M)\to \operatorname{Hom}\nolimits_{{}^w\mathbf{O}_{\mu,+}}(P,i_{\nu,\phi}\tau_{\phi,\nu}(M))= \operatorname{Hom}\nolimits_{{}^w\mathbf{O}_{\mu,+}^\nu}(\tau_{\phi,\nu}(P),\tau_{\phi,\nu}(M)).$$ Thus, the $k$-algebra homomorphism $\tau_{\phi,\nu}$ is surjective. Let $I\subset {}^w\!A_{\mu,+}$ be the two-sided ideal generated by the idempotents $1_x$ such that $x\in{}^w\!I_{\mu,+}$ and $\tau_{\phi,\nu}(1_x)=0$. By Section \ref{sec:tau}(c), the latter are precisely the idempotents $1_x$ with $x\in{}^w\!I_{\mu,+}\setminus {}^w\!I_{\mu,+}^\nu$. We have a $k$-algebra isomorphism ${}^w\!A_{\mu,+}/I\simeq {}^w\!A^\nu_{\mu,+}$, because ${}^w\mathbf{O}_{\mu,+}^\nu$ is the Serre subcategory of ${}^w\mathbf{O}_{\mu,+}$ generated by the simple modules killed by $I$. Under this isomorphism, the map $\tau_{\phi,\nu}$ is the canonical projection ${}^w\!A_{\mu,+}\to{}^w\!A_{\mu,+}/I$. The last claim in the lemma follows from Section \ref{sec:tau}(b). \end{proof}

Now, let $v\in I_{\nu,-}$ and $w=v^{-1}_+$. We have $w\in I^\nu_{\phi,+}.$ We equip the $k$-algebras ${}^v\!A_{\nu,-}$, ${}^v\!A_{\phi,-}$ with the gradings ${}^w\!\bar A^\nu_{\phi,+}$, ${}^w\!\bar A_{\phi,+}$, see Proposition \ref{prop:reg2}. Consider the categories ${}^v\bar\mathbf{O}_{\nu,-}={}^w\!\bar A_{\phi,+}^\nu\operatorname{\!-\mathbf{gmod}}\nolimits$ and ${}^v\bar\mathbf{O}_{\phi,-}={}^w\!\bar A_{\phi,+}\operatorname{\!-\mathbf{gmod}}\nolimits,$ which are graded analogues of the categories ${}^v\mathbf{O}_{\nu,-}$ and ${}^v\mathbf{O}_{\phi,-}.$ To unburden the notation, we'll abbreviate $L_\nu={}^vL_{\nu,-}$ and $L_\phi={}^vL_{\phi,-}$. Let $\bar L_{\nu}$, $\bar L_{\phi}$ be the natural graded lifts in ${}^v\bar\mathbf{O}_{\nu,-}$, ${}^v\bar\mathbf{O}_{\phi,-}$ of the semi-simple modules $L_{\nu}$, $L_{\phi}$ respectively. For $v\in I_{\nu,-}$ and $d+N>f,$ we'll use the following graded analogue of the translation functor $T_{\phi,\nu}:{}^v\mathbf{O}_{\phi,-}\to {}^v\mathbf{O}_{\nu,-}$ in Proposition \ref{prop:translation2}.

\begin{lemma} \label{lem:gradedtranslationfunctor} Assume that $v\in I_{\nu,-}$ and $d+N>f$. Then, there is an exact functor of graded categories $\bar T_{\phi,\nu}:{}^v\bar\mathbf{O}_{\phi,-}\to {}^v\bar\mathbf{O}_{\nu,-}$ such that $\bar T_{\phi,\nu}(\bar L_{\phi})=\bar L_{\nu}$ and $\bar T_{\phi,\nu}$ coincides with $T_{\phi,\nu}$ when forgetting the grading. \end{lemma}

\begin{proof} First, note that the translation functor $T_{\phi,\nu}:{}^v\mathbf{O}_{\phi,-}\to{}^v\mathbf{O}_{\nu,-}$ is well-defined, because $d+N>f$ and $v\in I_{\nu,-}$. See Section \ref{sec:translationO} for more details. Now, by Corollary \ref{cor:2.31}, we have ${}^v\mathbf{O}_{\nu,-}={}^v\! \mathscr{A}_{\nu,-}\operatorname{\!-\mathbf{mod}}\nolimits$ and ${}^v\mathbf{O}_{\phi,-}={}^v\! \mathscr{A}_{\phi,-}\operatorname{\!-\mathbf{mod}}\nolimits.$ By Corollary \ref{cor:speBpm}, we can view ${}^v\! B_{\nu,-}$, ${}^v\! B_{\phi,-}$ as right modules over ${}^v\! \mathscr{A}_{\nu,-}$, ${}^v\! \mathscr{A}_{\phi,-}$ respectively.
Consider the functors $\mathbb{V}'_k:{}^v\mathbf{O}_{\nu,-}\to{}^v\mathbf{Z}_{\nu,-}$ and $\mathbb{V}'_k:{}^v\mathbf{O}_{\phi,-}\to{}^v\mathbf{Z}_{\phi,-}$ given by $\mathbb{V}'_k={}^v\! B_{\nu,-}\otimes_{{}^v\! \mathscr{A}_{\nu,-}}\bullet$ and $\mathbb{V}'_k={}^v\! B_{\phi,-}\otimes_{{}^v\! \mathscr{A}_{\phi,-}}\bullet.$ By Proposition \ref{prop:foncteurV}, the functors $\mathbb{V}_k$ on ${}^v\mathbf{O}_{\nu,-}$, ${}^v\mathbf{O}_{\phi,-}$ are exact and we have $\mathbb{V}_k({}^v\! \mathscr{A}_{\nu,-})={}^v\! B_{\nu,-}$, $\mathbb{V}_k({}^v\! \mathscr{A}_{\phi,-})={}^v\! B_{\phi,-}.$ Further, since ${}^v\!\mathscr{A}_{\nu,-}$, ${}^v\!\mathscr{A}_{\phi,-}$ are Noetherian, any finitely generated module has a finite presentation. Thus, by the five lemma, the obvious morphism of functors $\mathbb{V}'_k\to\mathbb{V}_k$ is invertible. Next, by Corollary \ref{cor:2.42}, we have $ {}^w\!\bar A^\nu_{\phi,+}= {}^v\!\bar \mathscr{A}_{\nu,-}$ and $ {}^w\!\bar A_{\phi,+}= {}^v\!\bar \mathscr{A}_{\phi,-} $ as graded $k$-algebras. Thus, we have ${}^v\bar\mathbf{O}_{\nu,-}={}^v\! \bar \mathscr{A}_{\nu,-}\operatorname{\!-\mathbf{mod}}\nolimits$ and ${}^v\bar\mathbf{O}_{\phi,-}={}^v\! \bar \mathscr{A}_{\phi,-}\operatorname{\!-\mathbf{mod}}\nolimits.$ Consider the functors on ${}^v\bar\mathbf{O}_{\nu,-}$, ${}^v\bar\mathbf{O}_{\phi,-}$ given by $\bar\mathbb{V}_k={}^v\!\bar B_{\nu,-}\otimes_{{}^v\!\bar \mathscr{A}_{\nu,-}}\bullet,$ $\bar\mathbb{V}_k={}^v\!\bar B_{\phi,-}\otimes_{{}^v\!\bar \mathscr{A}_{\phi,-}}\bullet$ respectively. We have the commutative squares $$ \xymatrix{ {}^v\bar\mathbf{O}_{\nu,-}\ar[r]_-{\bar\mathbb{V}_k} \ar[d]&{}^v\bar\mathbf{Z}_{\nu,-}\ar[d] \cr {}^v\mathbf{O}_{\nu,-}\ar[r]^-{\mathbb{V}_k}&{}^v\mathbf{Z}_{\nu,-} } \qquad \xymatrix{ {}^v\bar\mathbf{O}_{\phi,-}\ar[r]_-{\bar\mathbb{V}_k} \ar[d]&{}^v\bar\mathbf{Z}_{\phi,-}\ar[d] \cr {}^v\mathbf{O}_{\phi,-}\ar[r]^-{\mathbb{V}_k}&{}^v\mathbf{Z}_{\phi,-}. } $$ Now, let ${}^v\bar\mathbf{O}_{\phi,-}^\text{proj}\subset {}^v\bar\mathbf{O}_{\phi,-}$ be the full subcategory of the projective objects. We define ${}^v\bar\mathbf{O}_{\nu,-}^\text{proj}$, ${}^v\mathbf{O}_{\nu,-}^\text{proj}$ and ${}^v\mathbf{O}_{\phi,-}^\text{proj}$ in a similar way. The functor $\mathbb{V}_k$ is fully faithful on ${}^v\mathbf{O}_{\nu,-}^\text{proj}$ and ${}^v\mathbf{O}_{\phi,-}^\text{proj}$ by Corollary \ref{cor:speBpm}. Hence, the functor $\bar\mathbb{V}_k$ is fully faithful on ${}^v\bar\mathbf{O}_{\nu,-}^\text{proj}$ and ${}^v\bar\mathbf{O}_{\phi,-}^\text{proj}$. Therefore, we may identify ${}^v\bar\mathbf{O}_{\nu,-}^\text{proj}$, ${}^v\bar\mathbf{O}_{\phi,-}^\text{proj}$ with some full subcategories ${}^v\bar\mathbf{Z}_{\nu,-}^\text{proj}$, ${}^v\bar\mathbf{Z}_{\phi,-}^\text{proj}$ of ${}^v\bar\mathbf{Z}_{\nu,-}$, ${}^v\bar\mathbf{Z}_{\phi,-}$ via $\bar\mathbb{V}_k$. The functor $\bar\theta_{\phi,\nu}$ in Remark \ref{rk:theta/k} gives an exact functor ${}^v\bar\mathbf{Z}_{\phi,-}^\text{proj}\to{}^v\bar\mathbf{Z}_{\nu,-}^\text{proj}$. We define the functor $\bar T_{\phi,\nu}:{}^v\bar\mathbf{O}_{\phi,-}^\text{proj}\to {}^v\bar\mathbf{O}_{\nu,-}^\text{proj}$ so that it coincides with $\bar\theta_{\phi,\nu}$ under $\bar\mathbb{V}_k$.
It gives a functor of triangulated categories $\mathbf{K}^b({}^v\bar\mathbf{O}_{\phi,-}^\text{proj})\to\mathbf{K}^b({}^v\bar\mathbf{O}_{\nu,-}^\text{proj}),$ where $\mathbf{K}^b$ denotes the bounded homotopy category. Since ${}^v\bar\mathbf{O}_{\nu,-}^\text{proj}$ and ${}^v\bar\mathbf{O}_{\phi,-}^\text{proj}$ have finite global dimensions, there are canonical equivalences $\mathbf{K}^b({}^v\bar\mathbf{O}_{\phi,-}^\text{proj})\simeq\mathbf{D}^b({}^v\bar\mathbf{O}_{\phi,-})$ and $\mathbf{K}^b({}^v\bar\mathbf{O}_{\nu,-}^\text{proj})\simeq \mathbf{D}^b({}^v\bar\mathbf{O}_{\nu,-})$. Thus, we can view $\bar T_{\phi,\nu}$ as a functor of triangulated categories $\mathbf{D}^b({}^v\bar\mathbf{O}_{\phi,-})\to\mathbf{D}^b({}^v\bar\mathbf{O}_{\nu,-}).$ By Proposition {{{\operatorname{{op}}\nolimits}}eratorname{re}\nolimits}f{prop:translation2}$(b),$ the functor $T_{\phi,\nu}$ gives an exact functor ${}^v\mathbf{O}_{\phi,-}^\text{proj}\to{}^v\mathbf{O}_{\nu,-}^\text{proj}$. By Remark {{{\operatorname{{op}}\nolimits}}eratorname{re}\nolimits}f{rk:theta/k}, the functor $\bar T_{\phi,\nu}$ coincides with $T_{\phi,\nu}$ when forgetting the grading. The functor $T_{\phi,\nu}$ yields a functor of triangulated categories $\mathbf{D}^b({}^v\mathbf{O}_{\phi,-})\to\mathbf{K}^b({}^v\mathbf{O}_{\nu,-})$ which is $t$-exact for the standard $t$-structures and which coincides with $T_{\phi,\nu}$ when forgetting the grading. Hence, the functor $\bar T_{\phi,\nu}$ is also $t$-exact. Thus, it gives an exact functor $\bar T_{\phi,\nu}:{}^{v}\bar\mathbf{O}_{\phi,-}\to {}^v\bar\mathbf{O}_{\nu,-}$ which coincides with $T_{\phi,\nu}$ when forgetting the grading. Now, we concentrate on the equality $\bar T_{\phi,\nu}(\bar L_{\phi})=\bar L_{\nu}$. Since $\bar T_{\phi,\nu}$ coincides with $T_{\phi,\nu}$ when forgetting the grading, by Proposition {{{\operatorname{{op}}\nolimits}}eratorname{re}\nolimits}f{prop:translation2}$(d)$, for each $x\in{}^w\!I_{\nu,-}$ there is an integer $j$ such that $\bar T_{\phi,\nu}(\bar L(xw_\nu\bullet{{\rm{o}}}_{\phi,-}))= \bar L(x\bullet{{\rm{o}}}_{\nu,-})\langle j\rangle.$ We must check that $j=0$. Let $\bar T_{\nu,\phi}$ be the left adjoint to $\bar T_{\phi,\nu}$, see Remark {{{\operatorname{{op}}\nolimits}}eratorname{re}\nolimits}f{rk:gradedtranslationfunctor} below. By Corollary {{{\operatorname{{op}}\nolimits}}eratorname{re}\nolimits}f{cor:2.42} we have ${}^v\bar\mathbf{O}_{\nu,-}={}^v\! \bar \mathscr{A}_{\nu,-}\operatorname{\!-\mathbf{mod}}\nolimits$ and ${}^v\bar\mathbf{O}_{\phi,-}={}^v\! \bar \mathscr{A}_{\phi,-}\operatorname{\!-\mathbf{mod}}\nolimits.$ The graded $k$-algebras ${}^w\!\bar A^\nu_{\phi,+}$, ${}^w\!\bar A_{\phi,+}$ (=${}^v\! \bar \mathscr{A}_{\nu,-}$, ${}^v\! \bar \mathscr{A}_{\phi,-}$) are Koszul. The natural graded indecomposable projective modules are of the form ${}^v\! \bar \mathscr{A}_{\nu,-}1_x$, ${}^v\! \bar \mathscr{A}_{\phi,-}1_x$ with $x\in {}^v\! I_{\nu,-}, {}^v\! I_{\phi,-}$ respectively. We must check that $\bar T_{\nu,\phi}({}^v\!\bar \mathscr{A}_{\nu,-}1_{x})= {}^v\!\bar \mathscr{A}_{\phi,-}1_{xw_\nu}$ for each $x\in{}^w\!I_{\nu,-}$. By definition of $\bar T_{\phi,\nu}$ we have an isomorphism $\bar\theta_{\phi,\nu}\circ\bar\mathbb{V}_k\simeq\bar\mathbb{V}_k\circ\bar T_{\phi,\nu}$ of functors on ${}^v\bar\mathbf{O}_{\phi,-}^\text{proj}$. Further, by definition of $\bar\mathbb{V}_k,$ we have $\bar\mathbb{V}_k({}^v\! 
\bar \mathscr{A}_{\nu,-}1_x)={}^v\!\bar B_{\nu,-}(x)$ and $\bar\mathbb{V}_k({}^v\!\bar \mathscr{A}_{\phi,-}1_{xw_\nu})={}^v\!\bar B_{\phi,-}(xw_\nu).$ Therefore, it is enough to check that $\bar\theta_{\nu,\phi}({}^v\!\bar B_{\nu,-}(x))= {}^v\!\bar B_{\phi,-}(xw_\nu).$ By Remark \ref{rk:theta/k}, this follows from Proposition \ref{prop:theta}$(f)$ by base change. \end{proof} \begin{rk} \label{rk:gradedtranslationfunctor} General facts imply that the functor $\bar T_{\phi,\nu}$ has a left adjoint $\bar T_{\nu,\phi}:{}^v\bar\mathbf{O}_{\nu,-}\to{}^v\bar\mathbf{O}_{\phi,-}$, see the proof of Proposition \ref{prop:translation2}. By definition of $T_{\nu,\phi}$ and the unicity of the left adjoint, the functor $\bar T_{\nu,\phi}$ coincides with $T_{\nu,\phi}$ when forgetting the grading. \end{rk} Now, we prove the following lemma which is dual to Lemma \ref{lem:gen1}. \begin{lemma} \label{lem:gen2} Assume that $v\in I_{\nu,-}$ and $d+N>f.$ Then, the functor $T_{\phi,\nu}:{}^v\mathbf{O}_{\phi,-}\to{}^v\mathbf{O}_{\nu,-}$ induces a surjective graded $k$-algebra homomorphism $T_{\phi,\nu}:{}^v\!\bar A^\mu_{\phi,-}\to {}^v\!\bar A^\mu_{\nu,-}.$ Its kernel contains the two-sided ideal generated by $\{1_x\,;\,x\in{}^v\!I_{\phi,-}^\mu,\,x\notin I_{\nu,+}\}$. Further, for $x\in {}^v\!I^\mu_{\phi,-}\cap I_{\nu,+}$ we have $xw_\nu\in {}^v\!I^\mu_{\nu,-}$ and $T_{\phi,\nu}(1_{x})=1_{xw_\nu}$. \end{lemma} \begin{proof} First, note that, since $v\in I_{\nu,-}$ and $d+N>f$, the functor $T_{\phi,\nu}:{}^v\mathbf{O}_{\phi,-}\to{}^v\mathbf{O}_{\nu,-}$ is well-defined and it takes ${}^v\mathbf{O}_{\phi,-}^\mu$ into ${}^v\mathbf{O}_{\nu,-}^\mu$ by Proposition \ref{prop:translation2}$(c)$. We'll abbreviate $L_\phi={}^v\!L_{\phi,-}^\mu$ and $L_\nu={}^v\!L_{\nu,-}^\mu.$ By Proposition \ref{prop:translation2}$(f),$ we have $T_{\phi,\nu}(L_{\phi})=L_{\nu}$. Thus, since $T_{\phi,\nu}$ is exact, it induces a graded $k$-algebra homomorphism $T_{\phi,\nu}:{}^v\!\bar A^\mu_{\phi,-}=\operatorname{Ext}\nolimits_{{}^v\mathbf{O}^\mu_{\phi,-}}(L_{\phi})^{\operatorname{op}\nolimits}\to {}^v\!\bar A^\mu_{\nu,-}=\operatorname{Ext}\nolimits_{{}^v\mathbf{O}^\mu_{\nu,-}}(L_{\nu})^{\operatorname{op}\nolimits}.$ Composing $T_{\phi,\nu}$ with its left adjoint $T_{\nu,\phi},$ we get the functor $\Theta=T_{\nu,\phi}\circ T_{\phi,\nu}$. To prove that $T_{\phi,\nu}$ is surjective, we must prove that the counit $\Theta\to\bf 1$ yields a surjective map $\operatorname{Ext}\nolimits_{{}^v\mathbf{O}_{\phi,-}^\mu}(L_{\phi})\to \operatorname{Ext}\nolimits_{{}^v\mathbf{O}_{\phi,-}^\mu}(\Theta(L_{\phi}),L_{\phi}).$ The parabolic inclusion ${}^v\mathbf{O}_{\phi,-}^\mu\subset{}^v\mathbf{O}_{\phi,-}$ is injective on extensions by Section \ref{sec:tau}. So we must prove that the counit yields a surjective map $\operatorname{Ext}\nolimits_{{}^v\mathbf{O}_{\phi,-}}(L_{\phi})\to \operatorname{Ext}\nolimits_{{}^v\mathbf{O}_{\phi,-}}(\Theta(L_{\phi}),L_{\phi}).$ Let us consider the graded analogue of this statement.
Set $\bar\Theta=\bar T_{\nu,\phi}\circ \bar T_{\phi,\nu}$, where $\bar T_{\nu,\phi}$, $\bar T_{\phi,\nu}$ are as in Lemma \ref{lem:gradedtranslationfunctor} and Remark \ref{rk:gradedtranslationfunctor}. We have $$\gathered \operatorname{Ext}\nolimits_{{}^v\mathbf{O}_{\phi,-}}(L_{\phi})= \bigoplus_j\operatorname{Ext}\nolimits_{{}^v\bar\mathbf{O}_{\phi,-}}( \bar L_{\phi},\bar L_{\phi}\langle j\rangle),\cr \operatorname{Ext}\nolimits_{{}^v\mathbf{O}_{\phi,-}}(\Theta(L_{\phi}),L_{\phi})= \bigoplus_j\operatorname{Ext}\nolimits_{{}^v\bar\mathbf{O}_{\phi,-}}( \bar\Theta(\bar L_{\phi}),\bar L_{\phi}\langle j\rangle). \endgathered$$ Thus we must prove that for each $i,j$ the counit $\eta:\bar\Theta\to\bf 1$ yields a surjective map \begin{equation} \label{surj} \operatorname{Ext}\nolimits^i_{{}^v\bar\mathbf{O}_{\phi,-}}(\bar L_{\phi}, \bar L_{\phi}\langle j\rangle)\to \operatorname{Ext}\nolimits^i_{{}^v\bar\mathbf{O}_{\phi,-}} (\bar\Theta(\bar L_{\phi}),\bar L_{\phi}\langle j\rangle). \end{equation} By Lemma \ref{lem:gradedtranslationfunctor}, we have $\bar T_{\phi,\nu}(\bar L_{\phi})=\bar L_{\nu}$. Further, since ${}^v\bar\mathbf{O}_{\nu,-}={}^w\!\bar A_{\phi,+}^\nu\operatorname{\!-\mathbf{gmod}}\nolimits$ and ${}^v\bar\mathbf{O}_{\phi,-}={}^w\!\bar A_{\phi,+}\operatorname{\!-\mathbf{gmod}}\nolimits$, the gradings on ${}^v\bar\mathbf{O}_{\phi,-}$ and ${}^v\bar\mathbf{O}_{\nu,-}$ are Koszul by Proposition \ref{prop:reg2}. Hence, since the right hand side of \eqref{surj} is $\operatorname{Ext}\nolimits^i_{{}^v\bar\mathbf{O}_{\nu,-}}(\bar L_{\nu},\bar L_{\nu}\langle j\rangle),$ it is zero unless $i=j$. Now, we define the integer $\ell=\operatorname{min}\nolimits\bigl\{d\,;\,\bar\Theta(\bar L_{\phi})^{d}\neq 0\bigr\}.$ Recall that the grading on ${}^w\!\bar A_{\phi,+}$ is positive. Further, since ${}^v\bar\mathbf{O}_{\phi,-}={}^w\!\bar A_{\phi,+}\operatorname{\!-\mathbf{gmod}}\nolimits$, we can view $\bar\Theta(\bar L_{\phi})$ as a graded ${}^w\!\bar A_{\phi,+}$-module. Thus $\bar\Theta(\bar L_{\phi})^\ell$ is a quotient of $\bar\Theta(\bar L_{\phi})$ which is killed by the radical of ${}^w\!\bar A_{\phi,+}$. We deduce that $\bar\Theta(\bar L_{\phi})^\ell\subset\operatorname{top}\nolimits(\bar\Theta(\bar L_{\phi})).$ Next, we claim that for any simple graded ${}^w\!\bar A_{\phi,+}$-module $\bar L$ such that $\bar T_{\phi,\nu}(\bar L)\neq 0$, the map $\eta(\bar L):\bar\Theta(\bar L)\to\bar L$ yields an isomorphism $\operatorname{top}\nolimits(\bar\Theta(\bar L))\to\bar L$. Indeed, $\eta(\bar L)$ is surjective because it is non zero, and for any simple quotient $\bar\Theta(\bar L)\to\bar L'$ we have $$0\neq\operatorname{Hom}\nolimits_{{}^v\bar\mathbf{O}_{\phi,-}}(\bar\Theta(\bar L),\bar L')= \operatorname{Hom}\nolimits_{{}^v\bar\mathbf{O}_\nu}(\bar T_{\phi,\nu}(\bar L),\bar T_{\phi,\nu}(\bar L')).$$ By Proposition \ref{prop:translation2}$(d),(e)$ this implies that $\bar T_{\phi,\nu}(\bar L)=\bar T_{\phi,\nu}(\bar L'),$ and that it is non zero. Therefore, we have $\bar L=\bar L'$, proving the claim. Recall that $\bar L_\phi$ is a semi-simple module, see Section \ref{sec:truncation}.
Applying the claim to the simple summands $\bar L\subset \bar L_\phi$ such that $\bar T_{\phi,\nu}(\bar L)\neq 0$, we get that $\operatorname{top}\nolimits(\bar\Theta(\bar L_\phi))=\operatorname{Im}\nolimits\eta(\bar L_\phi)$. In particular $\operatorname{top}\nolimits(\bar\Theta(\bar L_{\phi}))$ is pure of degree zero. This implies that we have $\ell=0$. Therefore, the kernel of $\eta(\bar L_{\phi})$ lives in degrees $>0$. Hence, by Koszulity of the grading of ${}^v\bar\mathbf{O}_{\phi,-}$, we get $$\operatorname{Ext}\nolimits^i_{{}^v\bar\mathbf{O}_{\phi,-}}\bigl(\operatorname{Ker}\nolimits\eta(\bar L_\phi),\bar L_{\phi}\langle i\rangle\bigr)=0.$$ Hence the surjectivity of \eqref{surj} for $i=j$ follows from the long exact sequence of Ext groups associated with the exact sequence $$0\to\operatorname{Ker}\nolimits(\eta(\bar L_{\phi}))\to\bar\Theta(\bar L_\phi)\to\bar L_\phi.$$ This proves that the map $T_{\phi,\nu}$ is surjective, proving the first part of the lemma. Next, we have $I^\mu_{\nu,-}=\{xw_\nu\;;\; x\in \!I^\mu_{\phi,-}\cap I_{\nu,+}\}$ by Corollary \ref{lem:C2}$(a)$. Further, we have $T_{\phi,\nu}(1_{xw_\nu})=0$ for $x\notin I_{\nu,-}$ by Proposition \ref{prop:translation2}$(e)$, and $xw_\nu\in I_{\nu,+}$ if and only if $x\in I_{\nu,-}$. This proves the second claim of the lemma. Finally, the last claim of the lemma follows from Proposition \ref{prop:translation2}$(d)$. \end{proof} Next, we prove the following. \begin{lemma} \label{lem:gen3} Assume that $w\in I_{\mu,+}^\nu$. Let $v=w_-^{-1}$. We have $v\in I^\mu_{\nu,-}$. Assume also that $d+N>f$ and $e+N>d$. There is a $k$-algebra isomorphism $p_{\mu,\nu}:{}^w\! A^\nu_{\mu,+}\to {}^v\!\bar A^\mu_{\nu,-}$ such that $p_{\mu,\nu}(1_x)=1_{y}$ for each $x\in{}^w\!I_{\mu,+}^\nu$, where $y=x_-^{-1}$, and such that the following square is commutative \begin{equation} \label{square4} \begin{split} \xymatrix{ {}^{w_\nu w}\!A_{\mu,+}\ar@{=}[r]_-{\eqref{prop:reg1}}\ar[d]^-{\tau_{\phi,\nu}} &{}^v\!\bar A^\mu_{\phi,-}\ar[d]_{T_{\phi,\nu}}\cr {}^w\! A^\nu_{\mu,+}\ar@{=}[r]^-{p_{\mu,\nu}} &{}^v\!\bar A^\mu_{\nu,-}.} \end{split} \end{equation} \end{lemma} \begin{proof} We have $v=w_\mu w^{-1} w_\nu$. Note that $w_\nu w\in I^\nu_{\phi,-}\cap I_{\mu,+}$, because by Lemma \ref{lem:C} we have $w^{-1}\in I^\mu_{\nu,+}$, hence $w^{-1}w_\nu\in I^\mu_{\phi,+}\cap I_{\nu,-}$ and $w_\nu w=(w^{-1}w_\nu)^{-1}\in I_{\mu,+}\cap I^\nu_{\phi,-}$. Let $\pi_{\mu,\nu}:{}^{w_\nu w}\!A_{\mu,+}\to{}^v\!\bar A^\mu_{\nu,-}$ be the composition of the $k$-algebra homomorphism $T_{\phi,\nu}:{}^v\!\bar A^\mu_{\phi,-}\to{}^v\!\bar A^\mu_{\nu,-}$ in Lemma \ref{lem:gen2} and of the $k$-algebra isomorphism ${}^{w_\nu w}\!A_{\mu,+}={}^v\!\bar A^\mu_{\phi,-}$ in \eqref{prop:reg1}. Note that ${}^{w_\nu w}\! A^\nu_{\mu,+}={}^w\! A^\nu_{\mu,+}$. We must construct a $k$-algebra isomorphism $p_{\mu,\nu}:{}^w\! A^\nu_{\mu,+}\to{}^v\!\bar A^\mu_{\nu,-}$ such that $\pi_{\mu,\nu}=p_{\mu,\nu}\circ\tau_{\phi,\nu}$. Let $x\in{}^{w_\nu w}\!I_{\mu,+}$. Thus $w_\mu x^{-1}\in {}^v\!I^\mu_{\phi,-}$.
By Lemma \ref{lem:gen2}, we have $$\pi_{\mu,\nu}(1_x)\neq 0 \iff T_{\phi,\nu}(1_{w_\mu x^{-1}})\neq 0 \iff w_\mu x^{-1}\in I^\mu_{\phi,-}\cap I_{\nu,+}.$$ By Lemma \ref{lem:gen1}, we have $$\tau_{\phi,\nu}(1_x)\neq 0 \iff x\in I^\nu_{\mu,+}.$$ Again by Lemma \ref{lem:C}, we have $$\aligned w_\mu x^{-1}\in I^\mu_{\phi,-}\cap I_{\nu,+} \iff xw_\mu\in I^\nu_{\phi,+}\cap I_{\mu,-} \iff x\in I^\nu_{\mu,+}. \endaligned$$ Hence, we have $\tau_{\phi,\nu}(1_x)=0$ if and only if $\pi_{\mu,\nu}(1_x)=0$. Thus, we have $\operatorname{Ker}\nolimits(\tau_{\phi,\nu})\subset\operatorname{Ker}\nolimits(\pi_{\mu,\nu}),$ because the left hand side is generated by the $1_x$'s killed by $\tau_{\phi,\nu}$ and the right hand side contains the $1_x$'s killed by $T_{\phi,\nu}$. This proves the existence of a $k$-algebra homomorphism $p_{\mu,\nu}$ such that $\pi_{\mu,\nu}=p_{\mu,\nu}\circ\tau_{\phi,\nu}$ and $p_{\mu,\nu}(1_x)=1_{w_\mu x^{-1}}$ for each $x\in{}^w\!I_{\mu,+}^\nu$. The map $p_{\mu,\nu}$ is surjective, because the map $T_{\phi,\nu}$ is surjective by Lemma \ref{lem:gen2}. Now, we prove that $p_{\mu,\nu}$ is injective. By Section \ref{sec:tau}, the parabolic inclusion functor $i_{\mu,\phi}:{}^v\mathbf{O}^\mu_{\nu,-}\to{}^v\mathbf{O}_{\nu,-}$ yields a graded $k$-algebra homomorphism $i_{\mu,\phi}:{}^v\!\bar A_{\nu,-}^\mu\to{}^v\!\bar A_{\nu,-}$. Set $z=v^{-1}=w_\nu ww_\mu$. The following is proved below. \begin{claim} \label{claim:gen4} We have the commutative diagram \begin{equation} \label{diag1} \begin{split} \xymatrix{ {}^{w_\nu w}\!A_{\mu,+}\ar@{=}[r]_-{\eqref{prop:reg1}}\ar[d]^-{T_{\mu,\phi}} &{}^v\!\bar A_{\phi,-}^\mu \ar[r]_-{T_{\phi,\nu}}\ar[d]_{i_{\mu,\phi}} &{}^v\!\bar A_{\nu,-}^\mu \ar[d]_{i_{\mu,\phi}}\cr {}^{z}\!A_{\phi,+}\ar@{=}[r]^-{\eqref{prop:reg1}} &{}^v\!\bar A_{\phi,-} \ar[r]^-{T_{\phi,\nu}} &{}^v\!\bar A_{\nu,-}. } \end{split} \end{equation} \end{claim} Note that $z\in I_{\mu,-}$, hence the map $T_{\mu,\phi}$ is well-defined. Now, by Proposition \ref{prop:translation2}$(g)$, the translation functor $T_{\mu,\phi}$ yields a map ${}^w\!A_{\mu,+}^\nu\to{}^{ww_\mu}\!A_{\phi,+}^\nu$. Consider the diagram \begin{equation} \label{diag2} \begin{split} \xymatrix{ {}^{w_\nu w}\!A_{\mu,+}\ar[r]_-{\tau_{\phi,\nu}}\ar[d]^-{T_{\mu,\phi}} &{}^w\!A_{\mu,+}^\nu\ar[r]_-{p_{\mu,\nu}} \ar[d]^-{T_{\mu,\phi}} &{}^v\!\bar A_{\nu,-}^\mu \ar[d]_{i_{\mu,\phi}}\cr {}^z\!A_{\phi,+}\ar[r]^{\tau_{\phi,\nu}} &{}^{ww_\mu}\!A^\nu_{\phi,+}\ar[r]^-{p_{\phi,\nu}} &{}^v\!\bar A_{\nu,-}. } \end{split} \end{equation} By Claim \ref{claim:gen4}, the outer rectangle in \eqref{diag1} is commutative. Thus, the outer rectangle in \eqref{diag2} is commutative. The left square in \eqref{diag2} is commutative, by Proposition \ref{prop:translation2}$(c)$. Thus, since $\tau_{\phi,\nu}$ is surjective, the right square in \eqref{diag2} is also commutative. The middle vertical map in \eqref{diag2} is injective by Remark \ref{rk:fidele}. Therefore, to prove that $p_{\mu,\nu}$ is injective it is enough to check that $p_{\phi,\nu}$ is injective.
Now, it is easy to see that the map $p_{\phi,\nu}$ is indeed invertible, because it is surjective by the discussion above (applied to the choice $\nu=\phi$) and $\dim({}^{ww_\mu}\!A_{\phi,+}^\nu)=\dim({}^v\!\bar A_{\nu,-})$ by Proposition \ref{prop:reg2}. Finally, to finish the proof of Lemma \ref{lem:gen3} we must check that $p_{\mu,\nu}(1_x)=1_{y}$ for each $x\in{}^w\!I_{\mu,+}^\nu$, where $y=x_-^{-1}$. To do that, it suffices to observe that $x_-=w_\nu x w_\mu$, and that, by Proposition \ref{prop:reg1} and Lemmas \ref{lem:gen1}, \ref{lem:gen2}, the square of maps in \eqref{square4} gives the following diagram $$\xymatrix{ 1_x\ar@{|->}[r]^-{\eqref{prop:reg1}}\ar@{|->}[d]^-{\tau_{\phi,\nu}}&1_{w_\mu x^{-1}}\ar@{|->}[d]_-{T_{\phi,\nu}}\\ 1_x\ar@{|->}[r]^-{p_{\mu,\nu}}&1_{w_\mu x^{-1}w_\nu}. }$$ Now, we prove Claim \ref{claim:gen4}. The right square in \eqref{diag1} is commutative, by Proposition \ref{prop:translation2}$(c)$. Let us concentrate on the left square. We must prove that the isomorphisms ${}^{w_\nu w}\!A_{\mu,+}={}^v\!\bar A_{\phi,-}^\mu$ and ${}^w\!A_{\phi,+}= {}^v\!\bar A_{\phi,-}$ in Proposition \ref{prop:reg1} yield a commutative square \begin{equation}\label{4.27} \begin{split} \xymatrix{ {}^{w_\nu w}\!A_{\mu,+}\ar@{=}[r]_-{\eqref{prop:reg1}}\ar[d]^-{T_{\mu,\phi}} &{}^v\!\bar A_{\phi,-}^\mu\ar[d]_-i\cr {}^z\!A_{\phi,+}\ar@{=}[r]^-{\eqref{prop:reg1}} &{}^v\!\bar A_{\phi,-}. } \end{split} \end{equation} By Proposition \ref{prop:translation2}$(g)$, the module $T_{\mu,\phi}({}^{w_\nu w}\!P_{\mu,+})$ is a direct summand of ${}^z\!P_{\phi,+}$. By Proposition \ref{prop:theta}$(f)$ and Remark \ref{rk:theta/k}, the sheaf $\theta_{\mu,\phi}({}^w\!B_{\mu,+})$ is a direct summand of ${}^w\!B_{\phi,+}.$ Thus, we have the following diagram \begin{equation*} \begin{split} \xymatrix{ {}^{w_\nu w}\!A_{\mu,+}\ar@{=}[r]^-{\mathbb{V}_k}\ar[d]_-{T_{\mu,\phi}} &\operatorname{End}\nolimits_{{}^{w_\nu w}\!Z_{\mu,+}}({}^{w_\nu w}\!B_{\mu,+})^{\operatorname{op}\nolimits} \ar[d]^-{\theta_{\mu,\phi}} \cr {}^z\!A_{\phi,+}\ar@{=}[r]^-{\mathbb{V}_k} &\operatorname{End}\nolimits_{{}^z\!Z_{\phi,+}}({}^z\!B_{\phi,+})^{\operatorname{op}\nolimits}. } \end{split} \end{equation*} Note that the horizontal maps are invertible by Corollary \ref{cor:speBpm}. This diagram is commutative by Remark \ref{rk:theta/k}, see also Proposition \ref{prop:theta}$(a)$.
Next, by Proposition \ref{prop:loc3}$(c)$ and Corollary \ref{cor:loc4}, we have a commutative diagram \begin{equation*} \begin{split} \xymatrix{ \operatorname{End}\nolimits_{{}^{w_\nu w}\!Z_{\mu,+}}({}^{w_\nu w}\!B_{\mu,+})^{\operatorname{op}\nolimits} \ar[d]_-{\theta_{\mu,\phi}} &{}^v\!\bar A^\mu_{\phi,-}\ar@{=}[l]_-\mathbb{H} \ar[d]^-i\cr \operatorname{End}\nolimits_{{}^z\!Z_{\phi,+}}({}^z\!B_{\phi,+})^{\operatorname{op}\nolimits} &{}^v\!\bar A_{\phi,-}.\ar@{=}[l]_-\mathbb{H} } \end{split} \end{equation*} Finally, the horizontal maps in \eqref{4.27} are equal to the composition of $\mathbb{H}$ and $\mathbb{V}_k$. \end{proof} Finally, we prove our main theorem. \begin{proof}[Proof of Theorem \ref{thm:main}] Since the highest weight categories ${}^w\mathbf{O}^\nu_{\mu,+},$ ${}^v\mathbf{O}^\mu_{\nu,-}$ do not depend on $e,$ $f$ by Remark \ref{rk:level}, we can assume that there is a positive integer $d$ such that $d+N>f$ and $e+N>d$. Thus the hypothesis of Lemma \ref{lem:gen3} is satisfied. By Propositions \ref{prop:reg1}, \ref{prop:reg2} the $k$-algebra ${}^w\!A_{\mu,\pm}$ has a Koszul grading. Thus, by Lemma \ref{lem:1.2} and Section \ref{sec:tau}, the $k$-algebra ${}^w\!A^\nu_{\mu,\pm}$ also has a Koszul grading. Let us equip ${}^w\!A^\nu_{\mu,\pm}$ with this grading. By Lemma \ref{lem:1.1}, we have ${}^w\!A^{\nu,!}_{\mu,\pm}={}^w\!\bar A^\nu_{\mu,\pm}$ as graded $k$-algebras. Therefore, the graded $k$-algebra ${}^w\!\bar A^\nu_{\mu,\pm}$ is Koszul and its Koszul dual is isomorphic to ${}^w\!A^\nu_{\mu,\pm}$ as a $k$-algebra. We have ${}^w\!\bar A^{\nu,!}_{\mu,+}={}^w\!A^\nu_{\mu,+}$ as a $k$-algebra. By Lemma \ref{lem:gen3}, we also have a $k$-algebra isomorphism ${}^w\!A^\nu_{\mu,+}={}^v\!\bar A^\mu_{\nu,-}$. Thus, we have ${}^w\!\bar A^{\nu,!}_{\mu,+}={}^v\!\bar A^\mu_{\nu,-}$ as $k$-algebras. By unicity of the Koszul grading, we deduce that ${}^w\!\bar A^{\nu,!}_{\mu,+}={}^v\!\bar A^\mu_{\nu,-}$ as graded $k$-algebras. The involutivity of the Koszul duality implies that we also have ${}^w\!\bar A^{\nu,!}_{\mu,-}= {}^v\!\bar A^\mu_{\nu,+}$ as graded $k$-algebras, and ${}^w\!A^\nu_{\mu,-}={}^v\!\bar A^\mu_{\nu,+}$ as $k$-algebras. Finally, we must check that under the isomorphism ${}^w\!\bar A^{\nu,!}_{\mu,+}={}^v\!\bar A^\mu_{\nu,-}$ we have $1_x^!=1_{y}$ with $y=x^{-1}_-$ for each $x\in{}^w\!I_{\mu,+}^\nu.$ If $\nu=\phi$ this is Proposition \ref{prop:reg1}. If $\mu=\phi$ this is Proposition \ref{prop:reg2}.
The isomorphism ${}^w\!\bar A^{\nu,!}_{\mu,+}={}^w\!A^\nu_{\mu,+}$ above, which is given by Lemma \ref{lem:1.1}, identifies the idempotents $1_x^!$ and $1_x$ for each $x\in{}^w\!I_{\mu,+}^\nu.$ Thus, by Lemma \ref{lem:gen3}, the isomorphism ${}^w\!\bar A^{\nu,!}_{\mu,+}={}^v\!\bar A^\mu_{\nu,-}$ identifies the idempotents $1_x^!$ and $1_y$, where $y=x_-^{-1}$. \end{proof} \section{Type A and applications to CRDAHA's}\label{app:A} Fix integers $e,\ell,N>0$. Let $\mathfrak{g}=\mathfrak{g}\mathfrak{l}(N)$. \subsection{Koszul duality in the type A case} Let $\mathfrak{b},\mathfrak{t}\subset\mathfrak{g}$ be the Borel subalgebra of upper triangular matrices and the maximal torus of diagonal matrices. Let $(\epsilon_i)$ be the canonical basis of $\mathbb{C}^N$. We identify $\mathfrak{t}^*=\mathbb{C}^N$, $\mathfrak{t}=\mathbb{C}^N$ and $W=\mathfrak{S}_N$ in the obvious way. Put $\rho=(0,-1,\dots,1-N)$ and $\alpha_i=\epsilon_i-\epsilon_{i+1}$ for each $i\in[1,N).$ We define the affine Lie algebra $\mathbf{g}$ of $\mathfrak{g}$ as in Section \ref{sec:affine}. For any subset $X\subset\mathbb{C}^\ell$ and any $d\in\mathbb{C}$ let $X(d)=\bigl\{(x_1,\dots,x_\ell)\in X\,;\,\sum_ix_i=d\bigr\}.$ Set $\mathscr{C}^\ell_d=\mathbb{N}^\ell(d)$ and fix an element $\nu\in\mathscr{C}^\ell_N$. Let $\nu$ denote also the parabolic type $\{\alpha_i;\ i\neq \nu_1, \nu_1+\nu_2,...\}$. Let $\mathbf{p}_\nu\subset\mathbf{g}$ be the corresponding parabolic subalgebra. The Levi subalgebra of $\mathbf{p}_\nu$ is the Lie subalgebra $\mathfrak{g}_\nu=\mathfrak{g}\mathfrak{l}(\nu_1)\oplus\dots\oplus\mathfrak{g}\mathfrak{l}(\nu_\ell)$ of $\mathfrak{g}$ consisting of block diagonal elements. Let $P=\mathbb{Z}^N$ be the set of integral weights of $\mathfrak{g}$ and let $P^\nu\subset P$ be the subset of $\nu$-dominant integral weights. Fix an element $\mu\in\mathscr{C}^e_N$. Consider the $N$-tuple $1_\mu=(1^{\mu_1}2^{\mu_2}\cdots e^{\mu_e})$. Since $1_\mu\in P+\rho,$ the affine weight ${{\rm{o}}}_{\mu,-}=(1_\mu-\rho)_e$ is an antidominant integral classical affine weight of $\mathbf{g}$ of level $-e-N$, see Section \ref{sec:2.9}. Recall that $W_\mu\subset W$ is the parabolic subgroup generated by the simple affine reflections $s_i$ with $\alpha_i\notin\mu$. Let $\widetilde W=\mathfrak{S}_N\ltimes\mathbb{Z}^N$ be the extended affine symmetric group. We define the \emph{$e$-action of $\widetilde W$ on $\mathbb{Z}^N$} to be such that $\mathfrak{S}_N$ acts by permutation of the entries of an $N$-tuple, while $\tau\in\mathbb{Z}^N$ acts by translation by the $N$-tuple $-e\,\tau$. Let $w\cdot_ex$ be the result of the $e$-action of $w$ on the element $x\in\mathbb{Z}^N$. The stabilizer of the $N$-tuple $1_\mu$ is equal to $W_\mu$. It is a standard parabolic subgroup. Consider the category $\mathbf{O}^\nu_{\mu,-}$ introduced in Section \ref{sec:2.9}. It is canonically equivalent to a block of a truncated category O of the affine Lie algebra associated with $\mathfrak{s}\mathfrak{l}(N)$, because $\mathfrak{s}\mathfrak{l}(N)$ differs from $\mathfrak{g}$ by a central element. Hence, all the results above can be applied to the category $\mathbf{O}^\nu_{\mu,-}$.
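As a purely illustrative example of the notation above (it plays no role in the arguments): for $e=3$ and $\mu=(2,1,1)\in\mathscr{C}^3_4$ one has $1_\mu=(1,1,2,3)$, and under the $e$-action the translation $\tau=(1,0,0,0)\in\mathbb{Z}^4$ sends $1_\mu$ to $1_\mu-3\,\tau=(-2,1,2,3)$, while $\mathfrak{S}_4$ simply permutes the entries.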
We'll write $\mathbf{O}^\nu_{\mu,-e}=\mathbf{O}^\nu_{\mu,-}$ and $\mathbf{O}^\nu_{-e}=\bigoplus_{\mu\in\mathscr{C}^e_N}\mathbf{O}^\nu_{\mu,-e}.$ The classes $[V^\nu(\lambda_e)]$ with $\lambda\in P^\nu$ form a $\mathbb{C}$-basis of the complexified Grothendieck group $[\mathbf{O}^\nu_{-e}]$. Composing the Koszul equivalence in Theorem \ref{thm:main} and the tilting equivalence, we get an equivalence of triangulated categories $K=E((\bullet)^\diamond):\mathbf{D}^b(\mathbf{O}^\nu_{\mu,-e})\to\mathbf{D}^b(\mathbf{O}^\mu_{\nu,-\ell})$, which induces a $\mathbb{C}$-linear isomorphism $K:[\mathbf{O}^\nu_{\mu,-e}]\to[\mathbf{O}^\mu_{\nu,-\ell}]$. \subsection{The level-rank duality} In this section, we'll relate the map $K$ above to the level-rank duality. Consider the Lie algebra $\mathfrak{s}\mathfrak{l}(e)$. We identify the set of weights of $\mathfrak{s}\mathfrak{l}(e)$ with $\mathbb{C}^e/\mathbb{C} 1^e$. Thus, elements of $\mathbb{C}^e$ can be viewed as weights of $\mathfrak{s}\mathfrak{l}(e),$ or equivalently, as level 0 classical affine weights of the affine Kac-Moody algebra $\widehat{\mathfrak{s}\mathfrak{l}}(e).$ Let $\{\varepsilon_{i}\,;\,i\in[1,e]\}$ be the canonical basis of $\mathbb{C}^e$. For each $\lambda\in P$ and each $k\in[1,N]$, we decompose the $k$-th entry of $\lambda+\rho$ as the sum $\lambda_k+\rho_k=i_k+e\,r_k$ with $i_k\in[1,e]$ and $r_k\in\mathbb{Z}$. Then, we can view the sum $\widehat{\operatorname{wt}}_e(\lambda)=\sum_{k=1}^N\varepsilon_{i_k}+ (\sum_{k=1}^Nr_k)\,\delta$ as a level 0 affine weight of $\widehat{\mathfrak{s}\mathfrak{l}}(e)$, and the sum $\operatorname{wt}\nolimits_e(\lambda)=\sum_{k=1}^N\varepsilon_{i_k}$ as a weight of ${\mathfrak{s}\mathfrak{l}}(e).$ The vector space $V_e=\mathbb{C}^e\otimes\mathbb{C}[z^{-1},z]$ carries an obvious level zero action of $\widehat{\mathfrak{s}\mathfrak{l}}(e)$. It induces an action of $\widehat{\mathfrak{s}\mathfrak{l}}(e)$ on the space $\bigwedge^\nu(V_e)=\bigotimes_{p=1}^\ell\bigwedge^{\nu_p}(V_e)$ of tensor products of wedge powers. Write $v_{i+er}=\varepsilon_i\otimes z^r\in V_e$ for each $i\in[1,e]$, $r\in\mathbb{Z}$. Note that a tuple $a\in\mathbb{Z}^N$ of the form $a=(a_{1,1},a_{1,2},\dots,a_{\ell,\nu_\ell})$ belongs to $P^\nu+\rho$ if and only if we have $a_{p,1}>a_{p,2}>\cdots>a_{p,\nu_p}$ for each $p\in[1,\ell]$. Set $\wedge^\nu(a)=\bigotimes_{p=1}^\ell(v_{a_{p,1}}\wedge v_{a_{p,2}} \wedge\dots\wedge v_{a_{p,\nu_p}})$. Then the set $\{\wedge^\nu(a)\,;\,a\in P^\nu+\rho\}$ is a basis of $\bigwedge^\nu (V_e).$ Further, for each $\lambda\in P^\nu$ the element $\wedge^\nu(\lambda+\rho)$ has the weight $\widehat{\operatorname{wt}}_e(\lambda)$. Let $\bigwedge^\nu(V_e)_\mu$ be the weight subspace of $\bigwedge^\nu(V_e)$ of weight $\bar\mu=\sum_{i=1}^e\mu_i\,\varepsilon_i$ with respect to the action of $\mathfrak{s}\mathfrak{l}(e)$. The set $\{\wedge^\nu(a)\,;\,a\in (P^\nu+\rho)\cap(\widetilde W\cdot_e1_\mu)\}$ is a basis of $\bigwedge^\nu (V_e)_\mu.$ In a similar way, we equip the vector space $V_\ell=\mathbb{C}^\ell\otimes\mathbb{C}[z^{-1},z]$ with the basis $\{v_{p+\ell r}\,;\,p\in[1,\ell],\,r\in\mathbb{Z}\}$. Then, we define the $\widehat{\mathfrak{s}\mathfrak{l}}(\ell)$-module $\bigwedge^\mu(V_\ell)$ as above.
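To illustrate the indexing conventions above (a small numerical check, not used in the proofs): for $e=3$, the entry $7$ decomposes as $7=1+3\cdot 2$, so it contributes $\varepsilon_1$ and $r=2$, i.e. $v_7=\varepsilon_1\otimes z^2$; the entry $-2$ decomposes as $-2=1+3\cdot(-1)$, so $v_{-2}=\varepsilon_1\otimes z^{-1}$; and since $i_k$ is taken in $[1,e]$ rather than $[0,e)$, the entry $6$ decomposes as $6=3+3\cdot 1$, so $v_6=\varepsilon_3\otimes z$.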
Next, consider the $\widehat{\mathfrak{s}\mathfrak{l}}(e)\times\widehat{\mathfrak{s}\mathfrak{l}}(\ell)$-module $\bigwedge^N(V_{e,\ell})$ with $V_{e,\ell}=\mathbb{C}^e\otimes\mathbb{C}^\ell\otimes\mathbb{C}[z^{-1},z].$ Let $\bar\nu=\sum_{p=1}^\ell\nu_p\,\varepsilon_p,$ viewed as a weight of ${\mathfrak{s}\mathfrak{l}}(\ell)$. The weight subspace of $\bigwedge^N(V_{e,\ell})$ of weight $\bar\mu$ for the $\mathfrak{s}\mathfrak{l}(e)$-action is isomorphic to the $\widehat{\mathfrak{s}\mathfrak{l}}(\ell)$-module $\bigwedge^\mu(V_\ell)$, and the weight subspace of weight $\bar\nu$ for the $\mathfrak{s}\mathfrak{l}(\ell)$-action is isomorphic to the $\widehat{\mathfrak{s}\mathfrak{l}}(e)$-module $\bigwedge^\nu(V_e)$. Since ${\textstyle\bigwedge^\nu}(V_e)_\mu$ and ${\textstyle\bigwedge^\mu}(V_\ell)_\nu$ are both canonically identified with the weight $(\bar\mu,\bar\nu)$ subspace $\bigwedge^N(V_{e,\ell})_{\mu,\nu}$ of $\bigwedge^N(V_{e,\ell})$ for the ${\mathfrak{s}\mathfrak{l}}(e)\times{\mathfrak{s}\mathfrak{l}}(\ell)$-action, we have a canonical linear isomorphism $LR:{\textstyle\bigwedge^\nu}(V_e)_\mu\to {\textstyle\bigwedge^\mu}(V_\ell)_\nu$. More precisely, we have an isomorphism $f_{\mu,\nu}:{\textstyle\bigwedge^\nu}(V_e)_\mu\to\bigwedge^N(V_{e,\ell})_{\mu,\nu}$ which takes the element $\wedge^\nu(a)$, with $a=(a_{p,k})$ in $P^\nu+\rho,$ to the monomial $\bigwedge_{(p,k)}(v_{i_{p,k}}\otimes v_p\otimes z^{r_{p,k}}).$ Here $i_{p,k}\in[1,e]$, $r_{p,k}\in\mathbb{Z}$ are such that $a_{p,k}=i_{p,k}+e\,r_{p,k}$, and the pair $(p,k)$ runs from $(1,1)$ to $(\ell,\nu_\ell)$ in lexicographic order. Next, there is an obvious isomorphism of ${\mathfrak{s}\mathfrak{l}}(e)\times{\mathfrak{s}\mathfrak{l}}(\ell)$-modules $\tau:\bigwedge^N(V_{e,\ell})\to\bigwedge^N(V_{\ell,e})$ which exchanges the vector spaces $\mathbb{C}^e$ and $\mathbb{C}^\ell$. Then, we define $LR=(f_{\nu,\mu})^{-1}\circ \tau\circ f_{\mu,\nu}$. We'll call $LR$ the \emph{level-rank duality}. Our next result compares the isomorphisms $LR$ and $K$. To do so, we consider the $\mathbb{C}$-linear isomorphism $\theta:[\mathbf{O}^\nu_{-e}]\to\bigwedge^\nu(V_e)$ such that $\theta([V^\nu(\lambda_e)])=\wedge^\nu(\lambda+\rho)$ for each $\lambda\in P^\nu$. It takes $[\mathbf{O}^\nu_{\mu,-e}]$ onto the weight subspace $\bigwedge^\nu(V_e)_\mu$. We can now prove the following. \begin{prop}\label{prop:B1} We have a commutative square $$\xymatrix{ [\mathbf{O}^\nu_{\mu,-e}]\ar[r]^-{K}\ar[d]_\theta&[\mathbf{O}^\mu_{\nu,-\ell}]\ar[d]^\theta\\ {\textstyle\bigwedge^\nu}(V_e)_\mu\ar[r]^-{LR}& {\textstyle\bigwedge^\mu}(V_\ell)_\nu. }$$ \end{prop} \begin{proof} Let $\operatorname{triv}\nolimits_\mu$, $\operatorname{sgn}\nolimits_\mu$ be the idempotents of the $\mathbb{C}$-algebra of the parabolic subgroup $W_\mu\subset\widetilde W$ associated with the trivial representation and the signature. For each tuple $a\in\mathbb{Z}^N$ we write $v(a)=\bigotimes_{k=1}^N v_{a_k}$. Let $(V_e)_\mu^{\otimes N}$ be the subspace of $(V_e)^{\otimes N}$ of weight $\bar\mu$ with respect to the $\mathfrak{s}\mathfrak{l}(e)$-action. The element $v(a)$ of $(V_e)^{\otimes N}$ belongs to $(V_e)_\mu^{\otimes N}$ if and only if $a\in\widetilde W\cdot_e1_\mu$.
Therefore, the assignment $x\cdot\operatorname{triv}\nolimits_\mu\mapsto v(x\cdot_e 1_\mu)$ with $x\in\widetilde W$ yields a $\mathbb{C}$-linear isomorphism $\mathbb{C}\widetilde W\cdot\operatorname{triv}\nolimits_\mu\to(V_e)_\mu^{\otimes N}.$ Since $x\,\bullet\,{{\rm{o}}}_{\mu,-}=(x\cdot_e1_\mu-\rho)_e$, we have $x\in I^\nu_{\mu,-}$ if and only if the $N$-tuple $x\cdot_e1_\mu$ lies in $P^\nu+\rho$. We deduce that the isomorphism above factors to a $\mathbb{C}$-linear isomorphism $B_{\nu,\mu}:\operatorname{sgn}\nolimits_\nu\!\cdot\,\mathbb{C}\widetilde W\cdot\operatorname{triv}\nolimits_\mu\to\bigwedge^\nu (V_e)_\mu$ such that $\operatorname{sgn}\nolimits_\nu\!\cdot\, x\cdot\operatorname{triv}\nolimits_\mu\mapsto \wedge^\nu(x\cdot_e1_\mu).$ Under this isomorphism, the basis $\{\wedge^\nu(a)\,;\,a\in (P^\nu+\rho)\cap(\widetilde W\cdot_e1_\mu)\}$ of $\bigwedge^\nu (V_e)_\mu$ is identified with $\{\operatorname{sgn}\nolimits_\nu\!\cdot\, x\cdot\operatorname{triv}\nolimits_\mu\,;\,x\in I^\nu_{\mu,-}\}$. By Corollary \ref{lem:C2}, we have a bijection $ I^\nu_{\mu,-}\to I^\mu_{\nu,-}$ such that $x\mapsto x^{-1}$. We claim that, under the isomorphisms $B_{\nu,\mu}$ and $B_{\mu,\nu}$, the map $LR$ is identified with the $\mathbb{C}$-linear map $\operatorname{sgn}\nolimits_\nu\!\cdot\,\mathbb{C}\widetilde W\cdot\operatorname{triv}\nolimits_\mu\to\operatorname{sgn}\nolimits_\mu\!\cdot\,\mathbb{C}\widetilde W\cdot\operatorname{triv}\nolimits_\nu$ such that $\operatorname{sgn}\nolimits_\nu\!\cdot\, x\cdot\operatorname{triv}\nolimits_\mu\mapsto\operatorname{sgn}\nolimits_\mu\!\cdot\, x^{-1}\cdot\operatorname{triv}\nolimits_\nu$. Since, by Remark \ref{rk:2.20}, the isomorphism $K$ takes the element $[V^\nu(x\bullet{{\rm{o}}}_{\mu,-})]$ to $[V^\mu(x^{-1}\bullet{{\rm{o}}}_{\nu,-})]$, this finishes the proof of the proposition. The claim is a direct consequence of the definition of the map $LR$. \end{proof} \subsection{The CRDAHA} \label{sec:CRDAHA} Let $\Gamma\subset\mathbb{C}^\times$ be the group of the $\ell$-th roots of 1. Fix $\nu\in\mathbb{Z}^\ell(N)$ and fix an integer $d>0$. Let $\Gamma_d=\mathfrak{S}_d\ltimes\Gamma^d$. It is a complex reflection group. Let $\mathscr{P}_d$ be the set of {\it partitions of $d$}. Write $|\lambda|=d$ and let $l(\lambda)$ be the {\it length} of $\lambda$. Let $\mathscr{P}^\ell_d$ be the set of {\it $\ell$-partitions of $d$}, i.e., the set of $\ell$-tuples $\lambda=(\lambda_p)$ of partitions with $\sum_p|\lambda_p|=d$. There is a bijection between the set of irreducible representations of $\Gamma_d$ and $\mathscr{P}^\ell_d$, see e.g. \cite[sec.~6]{Ro}. We set $h=1/e$ and $h_p=\nu_{p+1}/e-p/\ell$ for each $p\in\mathbb{Z}/\ell\mathbb{Z}.$ Let $H^\nu(d)$ be the CRDAHA of $\Gamma_d$ with parameters $h$ and $(h_p)$.
It is the quotient of the smash product of $\mathbb{C} \Gamma_{d}$ and the tensor algebra of $(\mathbb{C}^2)^{\oplus d}$ by the relations $$[y_i,x_i]=1-k\sum_{j\neq i}\sum_{\gamma\in\Gamma}s_{ij}^{\gamma} -\sum_{\gamma\in\Gamma\setminus\{1\}}c_\gamma\gamma_i,$$ $$[y_i,x_j]=k\sum_{\gamma\in\Gamma}\gamma s_{ij}^{\gamma} \quad \text{if}\ i\neq j,$$ $$[x_i,x_j]=[y_i,y_j]=0.$$ The parameters are such that $k=-h$ and $-c_\gamma=\sum_{p=0}^{\ell-1}\gamma^{-p}(h_{p}-h_{p-1})$ for $\gamma\neq1.$ Let $\mathcal{O}^\nu_{-e}\{d\}$ be the category O of $H^\nu(d)$. It is a highest weight category with set of standard modules $\Delta(\mathcal{O}^\nu_{-e}\{d\})=\{\Delta(\lambda)\,;\,\lambda\in \mathscr{P}^\ell_d\},$ see \cite[sec.~3.3, 3.6]{SV}, \cite[sec.~3.6]{VV}. To avoid confusion we may write $\Delta_e^\nu(\lambda)=\Delta(\lambda).$ Let $S_e^\nu(\lambda)$ be the top of $\Delta_e^\nu(\lambda)$. We have the block decomposition $\mathcal{O}^{\nu}_{-e}\{d\}=\bigoplus_{\mu}\mathcal{O}^{\nu}_{\mu}\{d\},$ where $\mu=(\mu_1,\dots,\mu_e)$ is identified with $\bar\mu=\sum_{i=1}^{e}\mu_i\,\varepsilon_i$ and it runs over the set of all integral weights of $\mathfrak{s}\mathfrak{l}(e)$. By \cite{LM}, these blocks are determined by the following combinatorial rule. For each $\lambda\in \mathscr{P}^\ell_d,$ an {\it $(i,\nu)$-node} in $\lambda$ is a triple $(x,y,p)$ with $x,y>0$ and $y\leqslant(\lambda_p)_x$ such that $y-x+\nu_p=i$ modulo $e$. Let $n_{i}^\nu(\lambda)$ be the number of $(i,\nu)$-nodes in $\lambda$. Then, we have $\Delta^\nu_e(\lambda)\in\mathcal{O}^{\nu}_{\mu}\{d\}$ if and only if $\lambda\in \Lambda^\nu_\mu\{d\}$, where \begin{equation} \label{form:1.2} \Lambda^\nu_\mu\{d\}=\Big\{\lambda\in \mathscr{P}^\ell_d\,;\,\sum_{p=1}^{\ell}\omega_{\nu_p}- \sum_{i=1}^{e-1}\bigl(n_i^\nu(\lambda)-n_0^\nu(\lambda)\bigr)\,\alpha_i=\bar\mu\Big\}. \end{equation} The expression above should be regarded as an equality of integral weights of $\mathfrak{s}\mathfrak{l}(e)$. More precisely, the symbols $\omega_1,\omega_2,\dots,\omega_{e-1}$ and $\alpha_1,\alpha_2,\dots,\alpha_{e-1}$ are respectively the fundamental weights and the simple roots of $\mathfrak{s}\mathfrak{l}(e)$, and the subscript $\nu_p$ in \eqref{form:1.2} should be viewed as the residue class of $\nu_p$ in $[1,e)$. See \cite[lem.~5.16]{SV} for details. By \cite[rem.~4.5]{RSVV}, the condition \eqref{form:1.2} is equivalent to $$\Lambda^\nu_\mu\{d\}=\Big\{\lambda\in \mathscr{P}^\ell_d\,;\,\operatorname{wt}\nolimits_e(\lambda+\rho_\nu-\rho)= \bar\mu\Big\}.$$ Note that an integral weight of $\mathfrak{s}\mathfrak{l}(e)$ can be represented by an element of $\mathbb{Z}^e(k)$ if and only if it lies in $\omega_k+\mathbb{Z}\Pi$. Thus, since $\nu\in\mathbb{Z}^\ell(N)$, if $\Lambda^\nu_\mu\{d\}\neq\emptyset$ then $\mu$ can be represented by an $e$-tuple in $\mathbb{Z}^e(N)$.
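As a purely illustrative example of the node-counting rule (with small, hypothetical parameters, not used elsewhere): take $\ell=1$, $e=2$, $\nu=(N)$ with $N$ even, and $\lambda=(2,1)$, so $d=3$. The nodes of $\lambda$ are $(1,1,1)$, $(1,2,1)$ and $(2,1,1)$, with residues $1-1+N\equiv 0$, $2-1+N\equiv 1$ and $1-2+N\equiv 1$ modulo $2$, so that $n_0^\nu(\lambda)=1$ and $n_1^\nu(\lambda)=2$.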
Indeed, it is not difficult to check that if $\Lambda^\nu_\mu\{d\}\neq\emptyset$ then $\mu\in\mathscr{C}^e_N.$ For each $\mu\in\mathscr{C}^e_N$ and $a\in\mathbb{N},$ we write $\mathcal{O}_{\mu}^\nu[a]=\mathcal{O}^\nu_\mu\{d\}$ and $\Lambda^\nu_{\mu}[a]=\Lambda^\nu_\mu\{d\}$, where $d=a\,e+\bigl\langle \sum_{p=1}^\ell\omega_{\nu_p}-\mu:\rho\bigr\rangle.$ For each $\lambda\in\mathscr{P}^\ell$, we have $\lambda\in\Lambda^{\nu}_{\mu}[a]$ if and only if $n_0^\nu(\lambda)=a$ and $\operatorname{wt}\nolimits_e(\lambda+\rho_\nu-\rho)=\bar\mu$. We are interested in the following conjecture \cite[conj.~6]{CM}. \begin{conj} \label{conj:A.0} The blocks $\mathcal{O}^{\nu}_{\mu}[a]$ and $\mathcal{O}^{\mu}_{\nu}[a]$ have a (standard) Koszul grading. The Koszul dual of $\mathcal{O}^{\nu}_{\mu}[a]$ is equivalent to the Ringel dual of $\mathcal{O}^{\mu}_{\nu}[a]$. \end{conj} \subsection{The Schur category} Fix a composition $\nu\in\mathscr{C}^\ell_N$. Let $\mathscr{P}^\nu_d=\{\lambda\in \mathscr{P}^\ell_d\,;\,l(\lambda_p)\leqslant\nu_p\}$. There is an inclusion $\mathscr{P}^\nu_d\subset P^\nu\subset \mathbb{Z}^N$ such that $$\lambda\mapsto \bigl(\lambda_10^{\nu_1-l(\lambda_1)}, \lambda_20^{\nu_2-l(\lambda_2)},\dots, \lambda_\ell 0^{\nu_\ell-l(\lambda_\ell)}\bigr),$$ where the partition $\lambda_p$ is viewed as an element in $\mathbb{Z}^{l(\lambda_p)}$. Consider the $\nu$-dominant weight $\rho_\nu=(\nu_1,\nu_1-1,\dots,1,\nu_2,\nu_2-1,\dots,\nu_\ell,\dots, 1)\in P^\nu.$ For each $\lambda\in \mathscr{P}^\nu,$ we abbreviate $V_e^\nu(\lambda)=V^\nu((\lambda+\rho_\nu-\rho)_e)$ and $L_e^\nu(\lambda)=L^\nu((\lambda+\rho_\nu-\rho)_e)$. Following \cite{VV}, let $\mathbf{A}^{\nu}_{-e}\{d\}\subset\mathbf{O}^{\nu}_{-e}$ be the Serre subcategory consisting of the finite length $\mathbf{g}$-modules of level $-e-N$ whose constituents belong to the set $\{L_e^\nu(\lambda)\,;\,\lambda\in \mathscr{P}^\nu_d\}.$ By \cite{VV}, it is a highest weight category with the set of standard modules $\Delta(\mathbf{A}^{\nu}_{-e}\{d\})=\{V_e^\nu(\lambda)\,;\,\lambda\in \mathscr{P}^\nu_d\}.$ We have the following conjecture \cite[conj.~8.8]{VV}. \begin{conj} \label{conj:A.1} There is a quotient functor $\mathcal{O}^\nu_{-e}\{d\}\to\mathbf{A}^{\nu}_{-e}\{d\}$ taking $S_e^\nu(\lambda)$ to $L_e^\nu(\lambda)$ if $\lambda\in \mathscr{P}^\nu_d$ and to 0 else. If $\nu_p\geqslant d$ for each $p$, then we have $\mathscr{P}^\nu_d=\mathscr{P}_d$ and the functor above is an equivalence of highest weight categories. \end{conj} A proof of Conjecture \ref{conj:A.1} is given in \cite{RSVV}. \subsection{Koszul duality of the Schur category} Fix an integer $d\geqslant 0$ and compositions $\mu\in\mathscr{C}^e_N$, $\nu\in\mathscr{C}^\ell_N$. Let $\mathbf{A}^{\nu}_{\mu}\{d\},$ $\mathbf{A}^\nu_{\mu}[a]$ be the Serre subcategories of $\mathbf{A}^{\nu}_{-e}=\bigoplus_{d\in\mathbb{N}}\mathbf{A}^{\nu}_{-e}\{d\}$ generated by the modules $L^\nu_e(\lambda)$ such that $\lambda\in\Lambda^{\nu}_{\mu}\{d\}$, $\Lambda^{\nu}_{\mu}[a]$ respectively. Write $\mathbf{A}^{\nu}_{\mu,-e}=\bigoplus_{d\in\mathbb{N}}\mathbf{A}^{\nu}_{\mu,-e}\{d\}.$ In view of Conjecture \ref{conj:A.1}, the following claim can be regarded as an analogue of Conjecture \ref{conj:A.0}.
\begin{thm}\label{thm:1.3} The category $\mathbf{A}^{\nu}_{\mu}[a]$ has a (standard) Koszul grading. The Koszul dual of $\mathbf{A}^{\nu}_{\mu}[a]$ is equivalent to the Ringel dual of $\mathbf{A}^{\mu}_{\nu}[a]$. \end{thm} \begin{proof} Recall the $\mathbb{C}$-linear map $K:[\mathbf{O}^\nu_{\mu,-e}]\to[\mathbf{O}^\mu_{\nu,-\ell}]$. First, we claim that $K([\mathbf{A}^\nu_{\mu,-e}])=[\mathbf{A}^\mu_{\nu,-\ell}]$. By Proposition \ref{prop:B1} and the definition of the Schur category, we must compute the element $LR(\wedge^\nu(a)),$ for each $a=\lambda+\rho_\nu$ such that $\lambda\in\mathscr{P}^\nu$ and $\wedge^\nu(a)\in{\textstyle\bigwedge^\nu}(V_e)_\mu$. First, we consider the $\mathbb{C}$-vector space $LR\big({\textstyle\bigwedge^\nu}(V_e)_\mu\big)$. The map $f_{\mu,\nu}$ takes the monomial $\wedge^\nu(a),$ with $a=(a_{p,k})\in P^\nu+\rho,$ to the monomial $\bigwedge_{(p,k)}(v_{i_{p,k}}\otimes v_p\otimes z^{r_{p,k}})$ such that $a_{p,k}=i_{p,k}+e\,r_{p,k}$. The weight subspace ${\textstyle\bigwedge^\nu}(V_e)_\mu$ is spanned by the monomials $\wedge^\nu(a)$ as above such that $\mu_i=\sharp\{(p,k)\,;\,i_{p,k}=i\}$ for each $i\in[1,e]$. Therefore, we have $f_{\mu,\nu}\big({\textstyle\bigwedge^\nu}(V_e)_\mu\big)=\bigwedge^N(V_{e,\ell})_{\mu,\nu},$ which is the subspace of $\bigwedge^N(V_{e,\ell})$ spanned by all the monomials $\wedge^N(x)=\bigwedge_{(i',p',r')}(v_{i'}\otimes v_{p'}\otimes z^{r'})$ where $(i',p',r')$ runs over the entries of an $N$-tuple $x\in([1,e]\times[1,\ell]\times\mathbb{Z})^N$ such that $\mu_i=\sharp\{(i',p',r')\in x\,;\,i'=i\}$ and $\nu_p=\sharp\{(i',p',r')\in x\,;\,p'=p\}$ for each $i\in[1,e],$ $p\in[1,\ell]$. Next, we consider the subspace $LR\circ \theta\big([\mathbf{A}^\nu_{\mu,-e}]\big)$. The subspace $\theta\big([\mathbf{A}^\nu_{\mu,-e}]\big)$ of ${\textstyle\bigwedge^\nu}(V_e)_\mu$ is spanned by the monomials $\wedge^\nu(a)$ such that there is an $\ell$-partition $\lambda\in\Lambda^\nu_\mu$ with $a_{p,k}=\lambda_{p,k}+\nu_p+1-k$ for each $p,k$. Here $\lambda_{p,k}$ is the $k$-th part of the $p$-th partition $\lambda_p$ of $\lambda$. Thus, the map $f_{\mu,\nu}$ takes $\theta\big([\mathbf{A}^\nu_{\mu,-e}]\big)$ to the subspace of $\bigwedge^N(V_{e,\ell})_{\mu,\nu}$ spanned by all the monomials $\wedge^N(x)$ as above such that $x\in([1,e]\times[1,\ell]\times\mathbb{N})^N$. This implies in particular that $LR\circ \theta\big([\mathbf{A}^\nu_{\mu,-e}]\big)=\theta\big([\mathbf{A}^\mu_{\nu,-\ell}]\big)$, proving the claim. Now, to finish the proof of the theorem, it is enough to observe that $\theta([\mathbf{A}^\nu_{-e}[a]])$ is the sum, over all affine weights $\hat\mu\in P+a\delta,$ of the intersection of $\theta([\mathbf{A}^\nu_{-e}])$ with the subspace of ${\textstyle\bigwedge^\nu}(V_e)$ of weight $\hat\mu$ for the $\widehat{\mathfrak{s}\mathfrak{l}}(e)$-action. \end{proof} The following is obvious. \begin{cor} Conjecture \ref{conj:A.1} implies Conjecture \ref{conj:A.0}. \qed \end{cor} \subsection{Koszulity of the $q$-Schur algebra}\label{sec:schurkoszul} Now, we consider the case $\ell=1$. By the Kazhdan-Lusztig equivalence \cite[thm.~38.1]{KL}, the category $\mathbf{A}^{(N)}_{-e}\{d\}$ is equivalent to the module category of the $q$-Schur algebra. This equivalence is an equivalence of highest-weight categories.
This follows from the fact that an equivalence of abelian categories $\mathbf{C}_1\to\mathbf{C}_2$ such that $\mathbf{C}_1$, $\mathbf{C}_2$ are highest weight categories and the induced bijection $\operatorname{Irr}\nolimits(\mathbf{C}_1)\to\operatorname{Irr}\nolimits(\mathbf{C}_2)$ is increasing is an equivalence of highest weight categories, i.e., it induces a bijection $\Delta(\mathbf{C}_1)\to\Delta(\mathbf{C}_2)$. See also \cite[thm.~A.5.1]{VV} for a more detailed proof. Therefore, Theorem \ref{thm:1.3} implies the following. \begin{cor} The $q$-Schur algebra is Morita equivalent to a Koszul algebra which is balanced. \qed \end{cor} See also \cite{CM} for a different approach to this result. Note that our proof gives an explicit description of the Koszul dual of the $q$-Schur algebra in terms of affine Lie algebras. \appendix \section{Finite codimensional affine Schubert varieties}\label{app:B} By a scheme we always mean a scheme over the field $k$, and by a variety we'll mean a reduced scheme of finite type which is quasi-projective. Let $T$ be a torus. A {\it $T$-scheme} is a scheme with an algebraic $T$-action. Fix a contractible topological $T$-space $ET$ with a topologically free $T$-action. For any $T$-scheme $X,$ we set $X_T=X\times_TET$. There are obvious projections $p:X\times ET\to X$ and $q:X\times ET\to X_T.$ We'll identify $S$ with the $T$-equivariant cohomology $k$-space $H_T(\bullet)$ of a point. \subsection{Equivariant perverse sheaves on finite dimensional varieties} \label{sec:HT} Fix a $T$-variety $X.$ Let $\mathbf{D}_T^b(X)$ be the $T$-equivariant bounded derived category. It is the full subcategory of the bounded derived category $\mathbf{D}^b(X_T)$ of sheaves of $k$-vector spaces on $X_T$ (for the analytic topology on $X$), consisting of the complexes of sheaves $\mathcal{F}$ with an isomorphism $q^*\mathcal{F}\simeq p^*\mathcal{F}_X$ for some $\mathcal{F}_X\in\mathbf{D}^b(X)$. The cohomology of an object $\mathcal{F}\in\mathbf{D}^b_{T}(X)$ is the graded $S$-module $ H(\mathcal{F})=\bigoplus_{i\in\mathbb{Z}} \operatorname{Hom}\nolimits_{\mathbf{D}^b_T(X)}\bigl(k_X,\mathcal{F}[i]\bigr). $ For each $\mathcal{E},\mathcal{F}\in\mathbf{D}^b_T(X)$ we have an object $\mathcal{R}\mathcal{H} om(\mathcal{E},\mathcal{F})$ in $\mathbf{D}^b(X_T)$ such that $\operatorname{Ext}\nolimits_{\mathbf{D}^b_T(X)}(\mathcal{E},\mathcal{F})=H(\mathcal{R}\mathcal{H} om(\mathcal{E},\mathcal{F})).$ If $Y\subset X$ is a $T$-equivariant embedding, let $\bar Y$ be its closure in $X$. Let $IC_T(\overline{Y})$ be the minimal extension of $k_{Y}[\dim Y]$ (the shifted equivariant constant sheaf). It is a perverse sheaf on $X$ supported on $\bar Y$. Let $IH_T(\bar Y)$ be the $k$-space of equivariant intersection cohomology of $\bar Y$ and let $H_T(\bar Y)$ be its equivariant cohomology. We have $IH_T(\bar{Y})=H(IC_T(\bar Y))$ and $H_T(\bar Y)=H(k_{\bar Y})$. Forgetting the $T$-action we define $IC(\bar Y)$, $IH(\bar Y)$ and $H(\bar Y)$ in the same way.
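(We record a standard fact for later convenience: for a torus $T$ of rank $r$ one has $S=H_T(\bullet)\simeq k[x_1,\dots,x_r]$ with the generators $x_i$ placed in degree $2$; in particular $S$ is concentrated in even degrees, a fact used in the degeneration arguments below.)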
We'll say that $X$ is {\it good} if the following hold \begin{itemize} \item $X$ has a Whitney stratification $X=\bigsqcup_xX_x$ by $T$-stable subvarieties, \item $X_x=\mathbb{A}^{l(x)}$ with a linear $T$-action, for some integer $l(x)\in\mathbb{N}$, \item there are integers $n_{x,y,i}\in\mathbb{N}$ such that \begin{equation} \label{parity1} j_y^*IC_T(\bar X_x)=\bigoplus_pk_{X_y}[l(y)][l(x)-l(y)-2p]^{\oplus n_{x,y,p}} \end{equation} where $j_x$ is the inclusion $X_x\subset X$. Equivalently, we have \begin{equation} \label{parity2} j_y^!IC_T(\bar X_x)= \bigoplus_{q}k_{X_y}[l(y)][l(y)-l(x)+2q]^{\oplus n_{x,y,q}}. \end{equation} \end{itemize} We call the third property the {\it parity vanishing}. \begin{prop} \label{prop:IC/ICT/IH} If $X$ is a good $T$-variety, then the following hold:

(a) $\dim\operatorname{Ext}\nolimits_{\mathbf{D}^b(X)}^i \bigl(IC(\bar X_x),IC(\bar X_y)\bigr)= \sum_{z,p,q}n_{x,z,p}n_{y,z,q}$ where $z,p,q$ run over the set of triples such that $2l(z)-l(y)-l(x)+2p+2q=i,$

(b) $\operatorname{Ext}\nolimits_{\mathbf{D}^b(X)}\bigl(IC(\bar X_x), IC(\bar X_y)\bigr)= k\operatorname{Ext}\nolimits_{\mathbf{D}^b_T(X)}\bigl(IC_T(\bar X_x), IC_T(\bar X_y)\bigr),$

(c) $\operatorname{Ext}\nolimits_{\mathbf{D}^b_T(X)} \bigl(IC_T(\bar X_x),IC_T(\bar X_y)\bigr) =\operatorname{Hom}\nolimits_{H_T(X)}\bigl(IH_T(\bar X_x),IH_T(\bar X_y)\bigr),$

(d) $\operatorname{Ext}\nolimits_{\mathbf{D}^b(X)} \bigl(IC(\bar X_x),IC(\bar X_y)\bigr) =\operatorname{Hom}\nolimits_{H(X)}\bigl(IH(\bar X_x),IH(\bar X_y)\bigr),$

(e) $IH(\bar X_x)$ vanishes in degrees $\not\equiv l(x)$ modulo 2 and $IH(\bar X_x)=kIH_T(\bar X_x).$ \end{prop} \begin{proof} Part $(a)$ is \cite[thm.~3.4.1]{BGS}. We sketch the proof briefly for the reader's convenience. Set $X_p=\bigsqcup_{p=l(z)}X_z$. Let $j_p$ be the inclusion of $X_p$ into $X$. For each $\mathcal{F}\in\mathbf{D}^b(X),$ there is a spectral sequence $E_1^{p,q}=H^{p+q}(j_p^!\mathcal{F})\Rightarrow H^{p+q}(\mathcal{F}).$ Therefore, if $\mathcal{F}=\mathcal{R}\mathcal{H} om\bigl(IC(\bar X_x),IC(\bar X_y)\bigr),$ we get a spectral sequence $$\gathered E_1^{p,q} =\bigoplus_{z} H^{p+q} \mathcal{R}\mathcal{H} om\bigl(j_z^*IC(\bar X_x),j_z^!IC(\bar X_y)\bigr)\Rightarrow \operatorname{Ext}\nolimits_{\mathbf{D}^b(X)}^{p+q} \bigl(IC(\bar X_x),IC(\bar X_y)\bigr), \endgathered$$ where $z$ runs over the set of elements with $l(z)=p$. By \eqref{parity1}, \eqref{parity2} the spectral sequence degenerates at $E_1$, and we get $$\dim\operatorname{Ext}\nolimits_{\mathbf{D}^b(X)}^i \bigl(IC(\bar X_x),IC(\bar X_y)\bigr)= \sum_{z,p,q}n_{x,z,p}n_{y,z,q},$$ where $z,p,q$ are as in $(a)$. Now, we prove $(b)$. For each $\mathcal{F}\in\mathbf{D}^b_T(X),$ there is a spectral sequence $E_2^{p,q}=S^p\otimes H^q(\mathcal{F}_X)\Rightarrow H^{p+q}(\mathcal{F})$, see \cite[sec.~5.5]{GKM}. Thus, if $\mathcal{F}=\mathcal{R}\mathcal{H} om\bigl(IC_T(\bar X_x),IC_T(\bar X_y)\bigr),$ we get a spectral sequence $$E_2^{p,q}=S^p\otimes \operatorname{Ext}\nolimits^q_{\mathbf{D}^b(X)}(IC(\bar X_x),IC(\bar X_y)) \Rightarrow\operatorname{Ext}\nolimits^{p+q}_{\mathbf{D}^b_T(X)}(IC_T(\bar X_x),IC_T(\bar X_y)).$$ Now, we have $\operatorname{Ext}\nolimits^q_{\mathbf{D}^b(X)}(IC(\bar X_x),IC(\bar X_y))=0$ for $q\not\equiv l(x)+l(y)$ modulo 2 by $(a)$. Therefore, since $S$ vanishes in odd degrees, the spectral sequence degenerates at $E_2$.
Thus, we have $$S\otimes \operatorname{Ext}\nolimits_{\mathbf{D}^b(X)}(IC(\bar X_x),IC(\bar X_y)) =\operatorname{Ext}\nolimits_{\mathbf{D}^b_T(X)}(IC_T(\bar X_x),IC_T(\bar X_y)).$$ The proof of $(d)$ is as in \cite[sec.~1]{G}, \cite[sec.~3.3]{BY}. Since our setting is slightly different we sketch briefly the main arguments. Fix a partial order on the set of strata such that $\bar X_x=X_{\leqslant x}=\bigsqcup_{y\leqslant x}X_y.$ Consider the obvious inclusions $j_{\leqslant x}:X_{\leqslant x}\to X$, $i=j_{<x}:X_{<x}\to X_{\leqslant x}$ and $j=j_{x}:X_{x}\to X_{\leqslant x}.$ For each $y\leqslant x,$ we set $\mathcal{F}_1=j_{\leqslant x}^*IC(\bar X_x)$ and $\mathcal{F}_2=j_{\leqslant x}^*IC(\bar X_y)$. For $s=1,2$ and $p\in\mathbb{N}$, there are integers $d_s$ and $d_{s,y,p}$ such that \begin{equation} \label{parity} j_y^*\mathcal{F}_s=\bigoplus_pk_{X_y}[d_s-2p]^{\oplus d_{s,y,p}},\quad j_y^!\mathcal{F}_s=\bigoplus_{q}k_{X_y}[2l(y)-d_s+2q]^{\oplus d_{s,y,q}}. \end{equation} Consider the diagram of graded $k$-vector spaces given by $$\xymatrix{ \operatorname{Ext}\nolimits_{\mathbf{D}^b(X_{<x})}(i^*\mathcal{F}_1,i^!\mathcal{F}_2)\ar[r]\ar[d]& \operatorname{Hom}\nolimits_{H(X_{<x})}(H(i^*\mathcal{F}_1),H(i^!\mathcal{F}_2))\ar[d]^-a \\ \operatorname{Ext}\nolimits_{\mathbf{D}^b(X_{\leqslant x})}(\mathcal{F}_1,\mathcal{F}_2)\ar[r]\ar[d]& \operatorname{Hom}\nolimits_{H(X_{\leqslant x})}(H(\mathcal{F}_1),H(\mathcal{F}_2))\ar[d]^-b \\ \operatorname{Ext}\nolimits_{\mathbf{D}^b(X_x)}(j^*\mathcal{F}_1,j^*\mathcal{F}_2)\ar[r]&\operatorname{Hom}\nolimits_{H(X_{x})}(H(j^*\mathcal{F}_1),H(j^*\mathcal{F}_2)). }$$ In the right-hand column, the Hom's are the $k$-spaces of graded module homomorphisms over the graded $k$-algebras $H(X_{<x})$, $H(X_{\leqslant x})$, $H(X_{x})$ respectively, which are computed in the category of non-graded modules. The horizontal maps are given by taking hyper-cohomology. We'll prove that the middle map is invertible by induction on $x$. This proves part $(d)$. The left-hand column is a short exact sequence by \eqref{parity}, see e.g., \cite[lem.~5.3]{FW} and the references there. The lower map is obviously invertible, because $j^*\mathcal{F}_s$ are constant sheaves on $X_x$ for each $s=1,2$. The complexes $i^*\mathcal{F}_s$ and $i^!\mathcal{F}_s$ on $X_{<x}$ satisfy again \eqref{parity} for any stratum $X_y\subset X_{<x}$. Thus, by induction, we may assume that the upper map is invertible. Then, to prove that the middle map is invertible it is enough to check that $a$ is injective and that $\operatorname{Im}\nolimits(a)=\operatorname{Ker}\nolimits(b)$. By \eqref{parity}, the distinguished triangles $i_*i^!\to1\to j_*j^*{\buildrel +1\over\to}$ and $j_!j^*\to 1\to i_*i^*{\buildrel +1\over\to}$ yield exact sequences of $H(X_{\leqslant x})$-modules \begin{equation*}\gathered 0\to H(i^!\mathcal{F}_s){\buildrel\alpha_s\over\to} H(\mathcal{F}_s){\buildrel\beta_s\over\to} H(j^*\mathcal{F}_s)\to 0,\\ 0\to H_c(j^*\mathcal{F}_s){\buildrel\gamma_s\over\to} H(\mathcal{F}_s) {\buildrel\delta_s\over\to}H(i^*\mathcal{F}_s)\to 0, \endgathered\end{equation*} where $H_c(j^*\mathcal{F}_s)=H(j_!j^*\mathcal{F}_s)$ is the cohomology with compact supports of $j^*\mathcal{F}_s$.
Thus, the map $a$ is injective, because $a(\phi)=\alpha_2\circ\phi\circ\delta_1$ for each $\phi$, $\alpha_2$ is injective and $\delta_1$ is surjective. Next, fix an element $\phi\in\operatorname{Ker}\nolimits(b)$. Since $b(\phi)\circ\beta_1=\beta_2\circ\phi$ by construction and $\beta_1$ is surjective, we have $\beta_2\circ\phi=0$. Hence, there is an $H(X_{\leqslant x})$-linear map $\phi':H(\mathcal{F}_1)\to H(i^!\mathcal{F}_2)$ such that $\phi=\alpha_2\circ\phi'$. We must check that $\phi'\circ\gamma_1=0$. To do that, set $d=\dim X_x$. Since $X_x\simeq k^d$, the fundamental class $\omega$ of $X_x$ belongs to $H_c^{2d}(X_x)$. The cup product with $\omega$ restricts to 0 on $X_{<x}$, because there is an exact sequence $0\to H_c(X_x)\to H(X_{\leqslant x})\to H(X_{<x})\to 0.$ Thus, it yields a map $g_s:H(\mathcal{F}_s)\to H_c(j^*\mathcal{F}_s)[2d]$ which vanishes on $\operatorname{Im}\nolimits(\alpha_s)$. We deduce that $g_2\circ\phi=0.$ Since $H_c(X_x)\subset H(X_{\leqslant x})$ and $\phi$ is $H(X_{\leqslant x})$-linear, we also have $\phi\circ g_1=0$. Since $g_1$ is surjective, we deduce that $\phi\circ\gamma_1=0.$ Hence, we have $\phi'\circ\gamma_1=0$, because $\alpha_2$ is injective. Part $(c)$ is proved as part $(d)$, using equivariant cohomology. Finally, we prove $(e)$. By parity vanishing, the spectral sequence $$E_1^{p,q}=H^{p+q}(j_p^!IC(\bar X_x))\Rightarrow IH^{p+q}(\bar X_x)$$ degenerates at $E_1$. This yields the first claim, as in part $(a)$. The second one follows from the first one, because the spectral sequence $E_2^{p,q}=S^p\otimes IH^{q}(\bar X_x)\Rightarrow IH^{p+q}(\bar X_x)$ degenerates at $E_2$. \end{proof} \subsection{Equivariant perverse sheaves on infinite dimensional varieties} This section is a reminder on equivariant perverse sheaves on infinite dimensional varieties, to be used in the next section. Let $X$ be an {\it essentially smooth} $T$-scheme, in the sense of \cite[sec.~1.6]{KT3}. In \cite[sec.~2]{KT3} the derived category of constructible complexes on $X$ and perverse sheaves on $X$ (for the analytic topology) are defined. Note that the convention for perverse sheaves here differs from the convention for perverse sheaves on varieties (as in Section \ref{sec:HT}): indeed $k_Y[-\operatorname{codim}\nolimits Y]$ is perverse if $Y$ is an essentially smooth $T$-scheme, while $k_Y[\dim Y]$ is perverse for a smooth variety $Y$. We'll use the same terminology as in loc.~cit. and we formulate our results in the $T$-equivariant setting. The equivariant version of the constructions in \cite{KT3} is left to the reader. Let $\mathbf{D}_{T}(X)$ be the $T$-equivariant derived category on $X$. If $X,Y$ are essentially smooth and $Y\to X$ is a $T$-equivariant embedding of finite presentation, let $IC_T(\overline{Y})$ be the minimal extension of the equivariant constant shifted sheaf $k_{Y}[-\operatorname{codim}\nolimits Y]$ on $Y$. It is a perverse sheaf on $X$ supported on the Zariski closure $\bar Y$ in $X$. If $\mathcal{F}\in\mathbf{D}_{T}(X),$ we set $H(\mathcal{F})=\bigoplus_{i\in\mathbb{Z}} \operatorname{Hom}\nolimits_{\mathbf{D}_T(X)}\bigl(k_X,\mathcal{F}[i]\bigr),$ a graded $S$-module. We abbreviate $ IH_T(\bar{Y})=H(IC_T(\bar Y))$ and $H_T(Y)=H(k_Y).$ We call $H_T(Y)$ the {\it equivariant cohomology of $Y$}. It is an $H_T(X)$-module.
Recall that for a morphism $f:Z\to X$ of essentially smooth $T$-schemes there is a functor $f^*:\mathbf{D}_T(X)\to\mathbf{D}_T(Z)$, see \cite[sec.~3.7]{KT3}. If $f$ is the inclusion of a $T$-stable subscheme we write $\mathcal{F}_Z=i^*\mathcal{F}$ and $IH_T(\overline{Y})_Z=H(IC_T(\overline{Y})_Z).$ Now, assume that $X$ is the limit of a projective system of smooth schemes \begin{equation}\label{projsystem} X={\lim\limits_{\longleftarrow}}_nX_n\end{equation} as in \cite[sec.~1.3]{KT3}. Let $p_n:X\to X_n$ be the projection and set $Y_n=p_n(Y)$. Assume that $Y_n$ is locally closed in $X_n$. Since $k_Y=p_n^*k_{Y_n}$ we have an obvious map $$\aligned H_T(Y_n) &=\bigoplus_{i\in\mathbb{Z}} \operatorname{Hom}\nolimits_{\mathbf{D}_T(X_n)}\bigl(k_{X_n},k_{Y_n}[i]\bigr)\\ &\to\bigoplus_{i\in\mathbb{Z}} \operatorname{Hom}\nolimits_{\mathbf{D}_T(X)}\bigl(p_n^*k_{X_n},p_n^*k_{Y_n}[i]\bigr)\\ &\to\bigoplus_{i\in\mathbb{Z}} \operatorname{Hom}\nolimits_{\mathbf{D}_T(X)}\bigl(k_{X},k_{Y}[i]\bigr)\\ &\to H_T(Y). \endaligned$$ It yields an isomorphism $H_T(Y)={\lim\limits_{\longrightarrow}}_nH_T(Y_n).$ \subsection{Moment graphs and the Kashiwara flag manifold} In this section we consider the localization on Kashiwara's flag manifold. The main motivation is Corollary \ref{cor:B2} below, which we use in the rest of the text, in particular in Corollary \ref{cor:speBpm} to prove that the functor $\mathbb{V}_{\!k}$ is fully faithful on projectives in ${}^w\mathbf{O}_{\mu,-}^\Delta$. Let $P_\mu$ be the parabolic subgroup corresponding to the Lie algebra $\mathbf{p}_\mu$. Let $X=X_\mu=\mathcal{G}/P_\mu$ be the Kashiwara partial flag manifold associated with $\mathbf{g}$ and $\mathbf{p}_\mu$, see \cite{K}. Here $\mathcal{G}$ is the schematic analogue of $G(k((t)))$ defined in \cite{K}, which has a locally free right action of the group-$k$-scheme $P_\mu$ and a locally free left action of the group-$k$-scheme $B^-$, the Borel subgroup opposite to $B$. Recall that $X$ is an essentially smooth, not quasi-compact, $T$-scheme, which is covered by $T$-stable quasi-compact open subsets isomorphic to $\mathbb{A}^\infty=\operatorname{Spec}\nolimits k[x_k\,;\,k\in\mathbb{N}]$. Let $e_X=P_\mu/P_\mu$ be the origin of $X$. For each $x\in I_{\mu,+},$ we set $X^x=B^-xe_X=B^-xP_\mu/P_\mu.$ Note that $X^x$ is a locally closed $T$-stable subscheme of $X$ of codimension $l(x)$ which is isomorphic to $\mathbb{A}^\infty$. Consider the $T$-stable subschemes $$\overline{X^x}=X^{\geqslant x}=\bigsqcup_{y\geqslant x}X^y,\quad X^{\leqslant x}=\bigsqcup_{y\leqslant x}X^y,\quad X^{<x}=\bigsqcup_{y<x}X^y.$$ We call $\overline{X^x}$ a {\it finite-codimensional affine Schubert variety.} We call $X^{\leqslant x}$ an {\it admissible open set.} If $\Omega$ is an admissible open set, there are canonical isomorphisms $$ IC_T(\overline{X^x})_\Omega=IC_T(\overline{X^x}\cap \Omega),\quad IH_T(\overline{X^x})_\Omega=IH_T(\overline{X^x}\cap \Omega). $$ We can view $\Omega$ as the limit of a projective system of smooth schemes $(\Omega_n)$ as in \cite[lem.~4.4.3]{KT3}. So, the projection $p_n:\Omega\to \Omega_n$ is a good quotient by a congruence subgroup $B^-_{n}$ of $B^-$. Let $n$ be large enough.
Then, we have $IC_T(\overline{X^x}\cap \Omega)=p_n^*IC_T(\overline{X^x_{\Omega,n}})$ with $\overline{X^x_{\Omega,n}}=p_n(\overline{X^x}\cap \Omega)$ by \cite[sec.~2.6]{KT3}. Thus we have a map $$\aligned IH_T(\overline{X^x_{\Omega,n}}) &=\bigoplus_{i\in\mathbb{Z}} \operatorname{Hom}\nolimits_{\mathbf{D}_T(\Omega_n)}\bigl(k_{\Omega_n},IC_T(\overline{X^x_{\Omega,n}})[i]\bigr)\\ &\to\bigoplus_{i\in\mathbb{Z}} \operatorname{Hom}\nolimits_{\mathbf{D}_T(\Omega)}\bigl(k_{\Omega},p_n^*IC_T(\overline{X^x_{\Omega,n}})[i]\bigr)\\ &\to IH_T(\overline{X^x}\cap \Omega) \endaligned$$ which yields an isomorphism $$IH_T(\overline{X^x}\cap \Omega)={\lim\limits_{\longrightarrow}}_nIH_T(\overline{X^x_{\Omega,n}}).$$ For $\Omega=X^{\leqslant w}$ and $x\leqslant w$ we abbreviate $X^{[x,w]}=\overline{X^x}\cap X^{\leqslant w}$ and $X^{[x,w]}_n=p_n(X^{[x,w]}).$ Since $p_n$ is a good quotient by $B^-_n$ and since $X^{x}$ is $B^-_n$-stable, we have an algebraic stratification $X^{\leqslant w}_n=\bigsqcup_{x\leqslant w}X^{x}_n,$ where $X^{x}_n$ is an affine space whose Zariski closure is $X^{[x,w]}_n$. \begin{lemma} \label{lem:good} (a) The $T$-variety $X^{\leqslant w}_n$ is smooth and good. (b) It is covered by $T$-stable open affine subsets with an attractive fixed point. The fixed points subset is naturally identified with ${}^w\!I_{\mu,+}$. (c) There is a finite number of one-dimensional orbits. The closure of each of them is smooth. Two fixed points are joined by a one-dimensional orbit if and only if the corresponding points in ${}^w\!I_{\mu,+}$ are joined by an edge in ${}^w\mathcal{G}_\mu$. \end{lemma} \begin{proof} The $T$-variety $X^{\leqslant w}_n$ is smooth by \cite{KT3}, because $X^{\leqslant w}$ is smooth and $p_n$ is a $B^-_n$-torsor for $n$ large enough. We claim that it is also quasi-projective. Let $X_0$ be the stack of $G$-bundles on $\mathbb{P}^1$. We may assume that $P_\mu$ is a maximal parabolic. Then, by the Drinfeld-Simpson theorem, a $k$-point of $X$ is the same as a $k$-point of $X_0$ with a trivialization of its pullback to $\operatorname{Spec}\nolimits(k[[t]])$. Here $t$ is regarded as a local coordinate at $\infty\in\mathbb{P}^1$ and we identify $B^-$ with the Iwahori subgroup in $G(k[[t]])$. We may choose $B^-_n$ to be the kernel of the restriction $G(k[[t]])\to G(k[t]/t^n).$ Then, a $k$-point of $X_n=X/B_n^-$ is the same as a $k$-point of $X_0$ with a trivialization of its pullback to $\operatorname{Spec}\nolimits(k[t]/t^n)$. We'll prove that there is an increasing system of open subsets $\mathcal{U}_m\subset X_0$ such that for each $m$ and for $n\gg0$ the fiber product $X_n\times _{X_0}\mathcal{U}_m$ is representable by a quasi-projective variety. This implies our claim. Choosing a faithful representation $G\subset SL_r$ we can assume that $G=SL_r$. So a $k$-point of $X_0$ is the same as a rank $r$ vector bundle on $\mathbb{P}^1$ of degree 0. For an integer $m>0$ let $\mathcal{U}_m(k)$ be the set of $V$ in $X_0(k)$ with $H^1(\mathbb{P}^1,V\otimes\mathcal{O}(m))=0$ which are generated by global sections. It is the set of $k$-points of an open substack $\mathcal{U}_m$ of $X_0$. Note that $\mathcal{U}_m\subset\mathcal{U}_{m+1}$ and $X_0=\bigcup_m\mathcal{U}_m$.
Now, the set $\mathcal{Y}_m(k)$ of pairs $(V,b)$ where $V\in \mathcal{U}_m(k)$ and $b$ is a basis of $H^0(\mathbb{P}^1,V\otimes\mathcal{O}(m))$ is the set of $k$-points of a quasi-projective variety $\mathcal{Y}_m$ by the Grothendieck theory of Quot-schemes. Further, there is a canonical $GL_{r(m+1)}$-action on $\mathcal{Y}_m$ such that the morphism $\mathcal{Y}_m\to\mathcal{U}_m$, $(V,b)\mapsto V$ is a $GL_{r(m+1)}$-bundle. Now, for $n\gg 0$ the fiber product $X_n\times _{X_0}\mathcal{U}_m$ is representable by a quasi-projective variety, see e.g., \cite[thm.~5.0.14]{W}. Next, note that $X^{\leqslant w}_n$ is covered by the open subsets $V^x_n=p_n(V^x)$ with $x\leqslant w.$ Each of them contains a unique fixed point under the $T$-action and finitely many one-dimensional orbits. Finally, the parity vanishing holds: since $IC(X^{[x,w]})=p_n^*IC(X^{[x,w]}_{n})$ we have $$IC(X^{[x,w]}_n)_{X^y_n}=\bigoplus_ik_{X^y_n}[-l(y)][l(y)-l(x)-2i]^{\oplus Q^{\mu,-1}_{x,y,i}}$$ by \cite[thm.~1.3]{KT2}. The change in the degrees with respect to Section \ref{sec:HT} is due to the change of convention for perverse sheaves mentioned above. \end{proof} Now we set $V=\mathbf{t}^*$ and we consider the moment graph ${}^w\mathcal{G}^\vee$. \begin{prop} \label{prop:IH} We have (a) $H_T(X^{\leqslant w})={}^w\!\bar Z^\vee_{S,\mu,-}$ and $H(X^{\leqslant w})={}^w\!\bar Z^\vee_{\mu,-}$ as graded $k$-algebras, (b) $IH_T(X^{[x,w]})={}^w\!\bar B_{S,\mu,-}^\vee(x)$ as a graded ${}^w\!\bar Z_{S,\mu,-}$-module, (c) $IH(X^{[x,w]})={}^w\!\bar B_{\mu,-}^\vee(x)$ as a graded ${}^w\!\bar Z_{\mu,-}^\vee$-module. \end{prop} \begin{proof} Assuming $n$ to be large enough we may assume that $H_T(X^{\leqslant w})=H_T(X^{\leqslant w}_n),$ $IH_T(X^{[x,w]})=IH_T(X^{[x,w]}_n)$, etc. By Lemma \ref{lem:good} the $S$-module $H_T(X^{\leqslant w}_n)$ is free. Thus we can apply the localization theorem \cite[thm.~6.3]{GKM}, which proves $(a)$. Now, we concentrate on $(b)$. The graded $k$-module $IH(X^{[x,w]}_n)$ vanishes in odd degree by Proposition \ref{prop:IC/ICT/IH} and Lemma \ref{lem:good}. Thus, applying \cite{BM} to $X_n^{[x,w]},$ we get a graded ${}^w\bar Z_{S,\mu,-}$-module isomorphism $IH^*_T(X^{[x,w]}_n)={}^w\!\bar B^\vee_{S,\mu,-}(x).$ Part $(c)$ follows from $(b)$, Proposition \ref{prop:IC/ICT/IH} and Lemma \ref{lem:good}. \end{proof} \begin{cor} \label{cor:B1} We have a graded $S$-module isomorphism $${}^w\!\bar B_{S,\mu,-}^\vee(x)_{y}= \bigoplus_{i\geqslant 0}(S\langle -l(x)-2i\rangle)^{\oplus Q^{\mu,-1}_{x,y,i}}.$$ \end{cor} \begin{proof} Apply Proposition \ref{prop:IH} and \cite[thm.~1.3(i)]{KT2}.
\end{proof} \begin{prop} \label{prop:IC} For each $x,y\leqslant w$ we have (a) $\sum_{i\in\mathbb{Z}}t^i\dim k\operatorname{Ext}\nolimits^i_{\mathbf{D}_T(X^{\leqslant w})} \bigl(IC_T(X^{[x,w]}),IC_T(X^{[y,w]})\bigr)= \sum_zQ_\mu(t)_{x,z}Q_\mu(t)_{y,z},$ (b) $\operatorname{Ext}\nolimits_{\mathbf{D}_T(X^{\leqslant w})} \bigl(IC_T(X^{[x,w]}),IC_T(X^{[y,w]})\bigr) =\operatorname{Hom}\nolimits_{H_T(X^{\leqslant w})}\bigl(IH_T(X^{[x,w]}),IH_T(X^{[y,w]})\bigr),$ (c) $\operatorname{Ext}\nolimits_{\mathbf{D}(X^{\leqslant w})} \bigl(IC(X^{[x,w]}),IC(X^{[y,w]})\bigr)=\operatorname{Hom}\nolimits_{H(X^{\leqslant w})}\bigl(IH(X^{[x,w]}),IH(X^{[y,w]})\bigr),$ (d) $\operatorname{Ext}\nolimits_{\mathbf{D}(X^{\leqslant w})} \bigl(IC(X^{[x,w]}),IC(X^{[y,w]})\bigr)=k\operatorname{Ext}\nolimits_{\mathbf{D}_T(X^{\leqslant w})} \bigl(IC_T(X^{[x,w]}),IC_T(X^{[y,w]})\bigr).$ \end{prop} \begin{proof} Apply Proposition \ref{prop:IC/ICT/IH} and Lemma \ref{lem:good}. \end{proof} Finally, we obtain the following. \begin{cor} \label{cor:B2} We have a graded $k$-algebra isomorphism $ {}^w\!\bar \mathscr{A}_{\mu,-}^\vee= \operatorname{End}\nolimits_{{}^w\!Z^\vee_{\mu}}\bigl({}^w\!\bar B^\vee_{\mu,-}\bigr)^{\operatorname{op}\nolimits}. $ \end{cor} \begin{proof} Apply Propositions \ref{prop:IH}, \ref{prop:IC}. \end{proof} \vskip3cm \end{document}
\begin{document} \title[Precise Mechanical Control]{Achieving Precise Mechanical Control in Intrinsically Noisy Systems} \author{{\small Wenlian Lu$^{1,2,3}$,\quad Jianfeng Feng$^{1,2}$,\quad Shun-ichi Amari$^{3}$\quad and David Waxman$^{1}$}} \begin{abstract} How can precise control be realised in intrinsically noisy systems? Here, we develop a general theoretical framework that provides a way to achieve precise control in signal-dependent noisy environments. When the control signal has Poisson or supra-Poisson noise, precise control is not possible. If, however, the control signal has sub-Poisson noise, then precise control is possible. For this case, the precise control solution is not a function, but a rapidly varying random process that must be averaged with respect to a governing probability density functional. Our theoretical approach is applied to the control of straight-trajectory arm movement. Sub-Poisson noise in the control signal is shown to be capable of leading to precise control. Intriguingly, the control signal for this system has a natural counterpart, namely the bursting pulses of neurons --trains of Dirac-delta functions-- in biological systems to achieve precise control performance. \end{abstract} \maketitle \address{\small $^{1}$ Centre for Computational Systems Biology and School of Mathematical Sciences, Fudan University, Shanghai 200433, China} \address{\small $^{2}$ Centre for Scientific Computing, the University of Warwick, Coventry CV4 7AL, UK} \address{\small $^{3}$Brain Science Institute, RIKEN, Wako-shi, Saitama 351-0198, Japan} \ead{[email protected]} \section{Introduction} Many mechanical and biological systems are controlled by signals which contain noise. This poses a problem. The noise apparently corrupts the control signal, thereby preventing precise control. However, precise control can be realised, despite the occurrence of noise, as has been demonstrated experimentally in biological systems. For example, in neural-motor control, as reported in \cite{OLB}, the movement error is believed to be mainly due to inaccuracies of the neural-sensor system, and not associated with the neural-motor system. The minimum-variance principle proposed in \cite{Harris,Harris1} has greatly influenced the theoretical study of biological computation. Assuming the magnitude of the noise in a system depends strongly on the magnitude of the signal, the conclusion of \cite{Harris,Harris1} is that a biological system is controlled by minimising the execution error. A key feature of the control signal in a biological system is that biological computation often only takes on a finite number of values. For example, `bursting' neuronal pulses in the neural-motor system control seem very likely to have only three states, namely inactive, excited, and inhibited. This kind of signal (neuronal pulses) can be abstracted as a dynamic trajectory which is zero for most of the time, but intermittently takes a very large value. Generally, this kind of signal looks like a train of irregularly spaced Dirac-delta functions. In this work we shall theoretically investigate the way signals in realistic biological systems are associated with precise control performance. We shall use bursting neuronal pulse trains as a prototypical example of this phenomenon. In a biological system, noise is believed to be inevitable and essential; it is a part of a biological signal and, for example, the magnitude of the noise typically depends strongly on the magnitude of the signal \cite{Harris,Harris1}. 
One characteristic of the noise in a system is the dispersion index, $\alpha $, which describes the statistical regularity of the control signal. When the variance in the control signal is proportional to the $2\alpha $-th power of the mean control signal, the dispersion index of the control noise is said to be $\alpha $. It was shown in \cite{Harris,Harris1} and elsewhere (e.g., \cite{Feng1,Feng11}) that an optimal solution of analytic form can be found when the stochastic control signal is supra-Poisson, i.e., when $\alpha \geq 0.5$. However, the resulting control is not precise and a non-zero execution error arises. In recent papers, a novel approach was proposed to find the optimal solution for control of a neural membrane \cite{Feng2}, and a model of saccadic eye movement \cite{Ross}. It was shown that if the noise of the control signal is more regular than Poisson process (i.e., if it is sub-Poisson, with $\alpha <0.5$), then the execution error can be shown to reduce towards zero \cite{Feng2,Ross}. This work employed the theory of Young measures \cite {Young,Young1}, and involved a very specific sort of solution (a `relaxed optimal parameterized measure solution'). We note that \textit{many} biological signals are more regular than a Poisson process: e.g., within in-vivo experiments, it has often been observed that neuronal pulse signals are sub-Poisson in character ($\alpha <0.5$) \cite{Sub,Chris}. However, in \cite{Feng2,Ross}, only a one-dimensional linear model was studied in detail. Thus the results and methods cannot be applied to the control of general dynamical systems. The work of \cite{Feng2,Ross} however, leads to a much harder problem: the general mathematical link between the regularity of the signal's noise and the control performance that can be achieved. In the present work we establish some general mathematical principles linking the regularity of the noise in a control signal with the precision of the resulting control performance, for \textit{general} nonlinear dynamical systems of high dimension. We establish a general theoretical framework that yields \textit{precise control} from a \textit{ noisy controller} using modern mathematical tools. The control signal is formulated as a Gaussian (random) process with a signal-dependent variance. Our results show that if the control signal is more regular than a Poisson process (i.e., if $\alpha <0.5$), then the control optimisation problem naturally involves solutions with a specific singular character (parameterized measure optimal solutions), which can achieve precise control performance. In other words, we show how to achieve results where the variance in control performance can be made arbitrarily small. This is in clear contrast to the situation where the control signals are Poisson or more random than Poisson ($\alpha \geq 0.5$), where the optimal control signal is an ordinary function, not a parameterized measure, and the variance in control performance does not approach zero. The new results can be applied to a large class of control problems in nonlinear dynamical systems of high dimension. We shall illustrate the new sort of solutions with an example of neural-motor control, given by the control of straight-trajectory arm movements, where neural pulses act as the control signals. We show how pulse trains may be realised in nature which lead towards the optimisation of control performance. 
\section{Model and Mathematical Formulation} To establish a theoretical approach to the problem of noisy control, we shall consider the following general system \begin{equation} \frac{dx}{dt}=a(x(t),t)+b(x(t),t)u(t) \label{general} \end{equation} where: $t$ is time ($t\geq 0$), $x(t)=[x_{1}(t),\ldots ,x_{n}(t)]^{\top }$ is a column vector of `coordinates' describing the state of the system to be controlled (a $\top $-superscript denotes transpose) and $ u(t)=[u_{1}(t),\ldots ,u_{m}]^{\top }$, is a column vector of the signals used to control the $x$ system. The dynamical behaviour of the $x$ system, in the absence of a control signal, is determined by $a(x,t)$ and $b(x,t)$, where $a(x,t)$ consists of $n$ functions: $a(x,t)=[a_{1}(x,t),\ldots ,a_{n}(x,t)]^{\top }$ and $b(x,t)$ is an $n\times m$ `gain matrix' with elements $b_{ij}(x,t)$. The system (\ref{general}) is a generalisation of the dynamical systems studied in the literature \cite {Harris,Harris1,Feng2,Ross}. As stated above, the control signal, $u(t)$, contains noise. We follow Harris's work \cite{Harris,Harris1} on signal-dependent noise theory by modelling the components of the control signal as \begin{equation} u_{i}(t)=\lambda _{i}(t)+\zeta _{i}(t) \label{ui} \end{equation} where $\lambda _{i}(t)$ is the mean control signal at time $t$ of the $i$'th component of $u(t)$ and all noise (randomness) is contained in $\zeta _{i}(t) $. In particular, we take the $\zeta _{i}(t)$ to be an independent Gaussian white noises obeying $\mathbf{E}\left[ \zeta _{i}(t)\right] =0$ , $\mathbf{E}\left[ \zeta _{i}(t)\zeta _{j}(t^{\prime })\right] =\sigma _{i}(t)\sigma _{j}(t^{\prime })\delta (t-t^{\prime })\delta _{ij}$ where $ \mathbf{E}\left[ \cdot \right] $ denotes expectation, $\delta (\cdot )$ is Dirac-delta function, and $\delta _{ij}$ is Kronecker delta. The quantities $\sigma _{i}(t)$, which play the role of standard deviations of the $\zeta _{i}(t)$, are taken to explicitly depend on the mean magnitudes of the control signals: \begin{equation} \sigma _{i}(t)=\kappa _{i}|\lambda _{i}(t)|^{\alpha } \label{sigma i} \end{equation} where $\kappa _{i}$ is a positive constant and $\alpha $ is the dispersion index of the control process (described above). Thus, we can formulate the dynamical system, Eq. (\ref{general}), as a system of It\^{o} diffusion equations: \begin{equation} dx=A(x,t,\lambda )dt+B(x,t,\lambda )dW_{t} \label{Ito} \end{equation} where: (i) $W_{t}=[W_{1,t},\ldots ,W_{m,t}]^{\top }$ contains $m$ independent standard Wiener processes; (ii) the quantity $A(x,t,\lambda )$ denotes the column vector $[A_{1}(x,t,\lambda ),\ldots ,A_{n}(x,t,\lambda )]^{\top }$, the $i$'th component of which has the form $A_{i}(x,t,\lambda )=a_{i}(x,t)+\sum_{j=1}^{m}b_{ij}(x,t)\lambda _{j}$; (iii) the quantity $ B(x,t,\lambda )$ is the matrix, the $i,j$'th element of which is given by $ B_{ij}(x,t,\lambda )=b_{ij}(x,t)\kappa _{j}|\lambda _{j}|^{\alpha }$ where $ i=1,\cdots ,n$ and $j=1,\ldots ,m$. We make the assumption that the range of each $\xi _{i}$ is bounded: $-M_{Y}\leq \lambda _{i}\leq M_{Y}$ with $M_{Y}$ a positive constant. Let $\Omega =[-M_{Y},M_{Y}]^{m}$ be the region where the control signal takes values, with $^{m}$ denoting the $m$-order Cartesian product. Let $\Xi $ be state space of $x$. In this paper we assume it be bounded. Let us now introduce the function $\phi (x,t)=[\phi _{1}(x,t),\cdots ,\phi _{k}(x,t)]^{\top }$, which represents the objective that is to be controlled and optimised. 
For example, for a linear output we can take $\phi (x,t)=Cx$ for some $k\times n$ matrix $C$; in the case that we control the magnitude of $x$, we can take $\phi (x,t)=\Vert x\Vert _{2}$; we may even allow dependence on time, for example if the output decays exponentially with time, we can take $\exp (-\gamma t)x(t)$ for some constant $\gamma>0$. The aim of the control problem we consider here is: (i) to ensure the expected trajectory of the objective $\phi (x(t),t)$ reaches a specified target at a given time, $T$, and (ii) to minimise the \textit{execution error} accumulated by the system, during the time, $R$, that the system is required to spend at the `target' \cite{Harris,Harris1,Feng2,Ross,Win,Sim,Tan,Ike}. In the present context, we take the motion to start at time $t=0$, subject to the initial condition $x(0)$. The target has coordinates $z(t)$ and we need to choose the controller, $u(t)$, so that for the time interval $T\leq t\leq T+R$ the expected state of the objective $\phi(x(t),t)$ of the system satisfies $\mathbf{E}[\phi (x(t),t)]=z(t)$. The \textit{accumulated execution error} is $\int_{T}^{T+R}\sum_{i}\mathrm{var}\left( \phi _{i}(x(t),t)\right) dt$ and we require this to be minimised. Statistical properties of $x(t)$ can be written in terms of $p(x,t)$, the probability density of the state of the system (\ref{Ito}) at time $t$, which satisfies the Fokker-Planck equation \begin{equation} \frac{\partial p(x,t)}{\partial t}=\mathcal{L}\circ p~;~p(x,0)=\delta (x-x(0)),~t\in\lbrack0,T+R] \label{FPE} \end{equation} with \begin{eqnarray*} \mathcal{L}\circ p & =-\sum_{i=1}^{n}\frac{\partial\lbrack (a_{i}(x,t)+\sum_{j=1}^{m}b_{ij}(x,t)\lambda_{j}(t))p(x,t)]}{\partial x_{i}} \\ & +\frac{1}{2}\sum_{i,j=1}^{n}\frac{\partial^{2}\{\sum_{k=1}^{m}\kappa _{k}^{2}b_{ik}(x,t)b_{jk}(x,t)|\lambda_{k}(t)|^{2\alpha}p(x,t)\}}{\partial x_{i}\partial x_{j}}. \end{eqnarray*} Three important quantities are the following: \begin{itemize} \item[(A)] The \textit{accumulated} \textit{execution error}: $ \int_{T}^{T+R}\int_{\Xi}\left\Vert\phi(x,t)-z(t)\right\Vert ^{2}p(x,t)dxdt$; \item[(B)] The \textit{expectation condition} on $x(t)$: $\int_{\Xi} \phi(x,t)p(x,t)dx=z(t)$, for all $t$ in the interval $T\leq t\leq R+T$; \item[(C)] The \textit{dynamical equation} of $p(x,t)$, given by (\ref{FPE}). \end{itemize} \section{The Young Measure Optimal Solution} To illustrate the idea of the solutions we introduce here, namely Young measure optimal solutions, we provide a simple example. Consider the situation where $x$ and $u$ are one-dimensional functions, while $a(x,t)=px$, $b(x,t)=q$, $\kappa =1$, $z(t)=z_{0}$ and $\phi (x,t)=x$. Thus (\ref{general}) becomes \begin{equation} \frac{dx}{dt}=px+qu. \label{linear} \end{equation} This has the solution $x(t)=x_{0}\exp (pt)+\int_{0}^{t}\exp (p(t-s))q\lambda (s)ds+\int_{0}^{t}\exp (p(t-s))q|\lambda (s)|^{\alpha }dW_{s}$. Thus, its expectation is $\mathbf{E}(x(t))=x_{0}\exp (pt)+\int_{0}^{t}\exp (p(t-s))q\lambda (s)ds$ and its variance is $\mathrm{var}(x(t))=\int_{0}^{t}\exp (2p(t-s))q^{2}|\lambda (s)|^{2\alpha }ds$.
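As a quick check of these closed-form moments, the following numerical sketch (not part of the original analysis; all parameter values are illustrative assumptions) integrates Eq.~(\ref{linear}) with an Euler--Maruyama scheme and compares the sample mean and variance of $x(T)$ with the two integrals above.
\begin{verbatim}
import numpy as np

# Illustrative parameters (assumed values, not taken from the text)
p, q, alpha, x0 = -1.0, 1.0, 0.8, 0.5
dt, T, n_paths = 1e-3, 1.0, 20000
rng = np.random.default_rng(0)

t = np.arange(0.0, T + dt, dt)
lam = np.ones_like(t)            # a constant mean control signal, lambda(t) = 1

# Euler-Maruyama simulation of dx = (p x + q lambda) dt + q |lambda|^alpha dW
x = np.full(n_paths, x0)
for k in range(len(t) - 1):
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)
    x += (p * x + q * lam[k]) * dt + q * np.abs(lam[k]) ** alpha * dW

# Closed-form mean and variance at time T, as given in the text
mean_theory = x0 * np.exp(p * T) + np.trapz(np.exp(p * (T - t)) * q * lam, t)
var_theory = np.trapz(np.exp(2 * p * (T - t)) * q**2 * np.abs(lam) ** (2 * alpha), t)

print(f"mean: simulated {x.mean():.4f}, theory {mean_theory:.4f}")
print(f"var : simulated {x.var():.4f},  theory {var_theory:.4f}")
\end{verbatim}
With a constant mean control signal the simulated moments agree with the integrals to within sampling error; other choices of $\lambda(t)$ or $\alpha$ can be substituted without changing the structure of the check.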
The solution of the optimisation problem is the minimum of the following functional: \begin{eqnarray} &&H[\lambda ]=\int_{T}^{T+R}\int_{0}^{t}\exp (2p(t-s))q^{2}|\lambda (s)|^{2\alpha }dsdt \nonumber \\ &&+\int_{T}^{T+R}\left\{ \gamma (t)[x_{0}\exp (pt)+\int_{0}^{t}\exp (p(t-s))q\lambda (s)ds-z_{0}]\right\} dt \nonumber \\ &=&\int_{T}^{T+R}\{[g(t)|\lambda (t)|^{2\alpha }-f(t)\lambda (t)]+\mu (t)\}dt \label{eg1} \end{eqnarray} with \begin{eqnarray*} g(t) &=&\left\{ \begin{array}{ll} \frac{q^{2}}{2p}\bigg[\exp (2p(T+R-t))-\exp (2p(T-t))\bigg] & t\leq T \\ \frac{q^{2}}{2p}\bigg[\exp (2p(T+R-t))-1\bigg] & T+R\geq t>T \end{array} \right. \\ f(t) &=&\left\{ \begin{array}{ll} -\int_{T}^{T+R}q\gamma (s)\exp (p(s-t))ds & t\leq T \\ -\int_{t}^{T+R}q\gamma (s)\exp (p(s-t))ds & T+R\geq t>T \end{array} \right. \\ \mu (t) &=&\gamma (t)[x_{0}\exp (pt)-z_{0}], \end{eqnarray*} for some integrable function $\gamma (t)$, which serves as a Lagrange auxiliary multiplier. In the general case, we minimise (A) using (B) and (C) as constraints via the introduction of appropriate $x$ and $t$ dependent Lagrange multipliers. This leads to a functional of the mean control signal, $H\left[ \lambda \right] $, with the form $H[\lambda ]=\int_{T}^{T+R}h(t,\lambda (t))dt$ (see below and Appendix A). Let us use $\xi =[\xi _{1},\cdots ,\xi _{m}]^{\top }$ to denote the value of $\lambda (t)$ at a given time of $t$, i.e., ${\xi } =\lambda (t)$; $\xi$ will serve as a variable of the Young measure (see below). We find \begin{equation} h(t,\xi )=\sum_{i=1}^{m}[g_{i}(t)|\xi _{i}|^{2\alpha }-f_{i}(t)\xi _{i}]+z(t) \label{h} \end{equation} where $g_{i}(t)$, $f_{i}(t)$ and $z(t)$ are functions with respect to $t$ but are independent of the variable $\xi $. The abstract Hamiltonian minimum (maximum) principle (AHMP) \cite{Roub} provides a necessary condition for the optimal solution of minimising (A) with (B) and (C), which is composed of the points in the domain of definition of $\lambda$, namely, $\Omega $, that minimize the function $ h(t,\xi)$ in (\ref{h}), at each time, $t$, which is named \emph{Hamiltonian integrand}. This principle tells us that the optimal solution should pick values of the minimum of $h(t,\xi)$ with respect to $\xi $, for each $t$. If the control signal is supra-Poisson or Poisson, namely the dispersion index $\alpha \geq 0.5$, for each $t\in \lbrack 0,T+R]$, the Hamiltonian integrand $h(t,\xi)$ is convex (or semi-convex) with respect to $\xi$ and so has a unique minimum point with respect to each $\xi_{i}$. So, the optimal solution is a deterministic \emph{function} of time: for each $t_{0}$, $ \lambda _{i}(t_{0})$ can be regarded as picking value at the minimum point of $h(t,\xi)$ for $t=t_{0}$. When $\alpha <1/2$, namely when the control signal is sub-Poisson, it follows that $h(t,\xi )$ is no longer a convex function. Figs. \ref{al} show the possible minimum points of the term $g_{i}(t)|\xi _{i}|^{2\alpha }-f_{i}(t)\xi _{i}$ with $g_{i}(t)>0$ and $f_{i}(t)>0$. From the assumption that the range of each $\xi _{i}$ is bounded, namely $-M_{Y}\leq \xi _{i}\leq M_{Y}$, it then directly follows, from the form of $h(t,\xi )$, that the value of $\xi _{i}$ which optimises $h(t,\xi )$ is not unique; there are \textit{three} possible minimum values: $-M_{Y}$, $0$, and $M_{Y}$ , as shown in Table \ref{maximum points}. So, no explicit function $\lambda (t)$ exists which is the optimal solution of the optimisation problem (A)-(C). However, an infinimum of (\ref{h}) does exist. 
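Before turning to the Young measure machinery, the loss of a unique minimiser for $\alpha<0.5$ is easy to see numerically. The sketch below (the values of $g$, $f$ and $M_{Y}$ are assumptions chosen only for illustration) minimises a single term $g|\xi|^{2\alpha}-f\xi$ of the Hamiltonian integrand on a grid over $[-M_{Y},M_{Y}]$: for a supra-Poisson index the minimiser is a small interior value, while for a sub-Poisson index it is pushed to the boundary, however large $M_{Y}$ is taken.
\begin{verbatim}
import numpy as np

def h_min(alpha, g=1.0, f=0.5, M_Y=10.0):
    """Grid minimiser of one term of the Hamiltonian integrand,
    g*|xi|**(2*alpha) - f*xi, over the bounded range [-M_Y, M_Y]."""
    xi = np.linspace(-M_Y, M_Y, 200001)
    h = g * np.abs(xi) ** (2 * alpha) - f * xi
    return xi[np.argmin(h)]

print("alpha = 0.8  (supra-Poisson), minimiser:", h_min(0.8))            # small interior value
print("alpha = 0.25 (sub-Poisson),  minimiser:", h_min(0.25))            # boundary value M_Y
print("alpha = 0.25, larger bound,  minimiser:", h_min(0.25, M_Y=100.0)) # still the boundary
\end{verbatim}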
Proceeding intuitively, we first make an arbitrary choice of one of the three optimal values for $\xi_{i}$ (namely one of $-M_{Y}$, $0$, and $M_{Y}$ ) and then \textit{average} over all possible choices at each time. With $ \eta_{t,i}(\xi)$ the probability density of $\xi_{i}$ at time $t$, the average is carried out using the distribution (probability density functional) $\eta[\lambda]\propto{\prod\nolimits_{t,i}}\eta_{t,i}(\xi)$ which represents independent choices of the control signal at each time. Thus, for example, the functional $H[\lambda]$ becomes \textit{functionally averaged} over $\lambda(\cdot)$ according to {$\int_{T}^{T+R}\left( \int h(t,\lambda )\eta[\lambda]d[\lambda]\right) dt$. The optimisation problem has thus shifted from determining a function (as required when $\alpha\geq 1/2$) to determining a \textit{probability density functional}, $\eta[\lambda ]$. This intuitively motivated procedure is confirmed by optimisation theory- and this leads us to Young measure theory. } Let us spell it out in a mathematical way. Young measure theory \cite {Young,Young1} provides a solution to an optimization problem where a solution, which was a function, becomes a linear functional of a parameterized measure. By way of explanation, a function, $\lambda (t)$, yields a single value for each $t$, but a parameterized measure $\{\eta _{t}(\cdot )\}$ yields a \textit{set of values} on which a measure (i.e., a weighting) $\eta _{t}(\cdot )$ is defined for each $t$. A \textit{functional} with respect to a parameterized measure can be treated in a similar way to a solution that is an explicit function, by averaging over the set of values of the parameterized measure at each $t$. In detail, a functional of the form $H[\lambda ]=\int_{0}^{T}h(t,\lambda (t))dt$, of an explicit function, $ \lambda (t)$, can have its definition extended to a parameterized measure $ \eta _{t}(\cdot )$, namely $H[\eta ]=\int_{0}^{T}\int_{\Omega }h(t,\xi )\eta _{t}(d\xi )dt$. In this sense, an explicit function can be regarded as a special solution that is a `parameterized concentrated measure' (i.e., involving a Dirac-delta function) in that we can write $H[\lambda ]=\int_{0}^{T}\int_{\Omega }h(t,\xi )\delta (\xi -\lambda (t))d\xi dt$. Thus, we can make the equivalence between the explicit function $\lambda (t)$ and a parameterized concentrated measure $\{\delta (\xi -\lambda (t))\}_{t}$ and then replace this concentrated measure, when appropriate, by a Young measure. Technically, a Young measure is a class of parameterized measures that are relatively weak*-compact such that the Lebesgue function space can be regarded as its dense subset in the way mentioned above. Thus, by enlarging the solution space from the function space to the (larger) Young measure space, we can find a solution in the larger space and the minimum value of the optimisation problem, in the Young measure space, coincides with the infinimum in the Lebesgue function space. For any function $r(x,t,\xi)$, we denote a symbol $\cdot$ as the inner product of $r(x,t,\xi)$ over the parameterized measure $\eta_{t}(d\xi)$, by averaging $r(x,t,\xi)$ with respect to $\xi$ via $\eta_{t}(\cdot)$. That is we define $r(x,t,\xi)\cdot\eta_{t}$ to represent $\int_{\Omega}r(x,t,\xi) \eta_{t}(d\xi )$. 
In this way we can rewrite the optimisation problem (A)-(C) as: \begin{equation} \left\{ \begin{array}{ll} \min_{\eta } & \int_{T}^{T+R}\int_{\Xi }\Vert \phi (x,t)-z(t)\Vert ^{2}p(x,t)dxdt \\ \mathrm{subject~to} & \frac{\partial p(x,t)}{\partial t}=[(\mathcal{L}\cdot \eta )\circ p](x,t),~on~[0,T+R]\times \Xi ,~p(x,0)=p_{0}(x), \\ & x\in \Xi,~t\in[0,T+R] \\ & \int_{\Xi }\phi (x,t)p(x,t)dx=z(t),~on~[T,T+R],~\eta \in \mathcal{Y}. \end{array} \right. \label{ROP} \end{equation} Here, $\mathcal{Y}$ denotes the Young measure space, which is defined on the state space $\Xi$ with $t\in[0,T+R]$, while $\eta =\{\eta _{t}(\cdot )\}$ is a shorthand for the Young measure associated with the control; $(\mathcal{L}\cdot \eta )\circ p$ is defined as \begin{eqnarray*} &&[\mathcal{L}\cdot \eta ]\circ p=\int_{\Omega}\mathcal{L}(t,x,\xi )\circ p(x,t)\eta _{t}(d\xi ) \\ &=&\int_{\Omega }\bigg\{-\sum_{i=1}^{n}\frac{\partial \lbrack A_{i}(x,t,\xi )p(x,t)]}{\partial x_{i}} \\ &&+\frac{1}{2}\sum_{i,j=1}^{n}\frac{\partial ^{2}\{[B(x,t,\xi )B^{\top }(x,t,\xi )]_{ij}p(x,t)\}}{\partial x_{i}\partial x_{j}}\bigg\}\eta _{t}(d\xi ) \\ &=&\int_{\Omega}\bigg\{-\sum_{i=1}^{n}\frac{\partial\lbrack (a_{i}(x,t)+\sum_{j=1}^{m}b_{ij}(x,t)\xi_{j})p(x,t)]}{\partial x_{i}} \\ && +\frac{1}{2}\sum_{i,j=1}^{n}\frac{\partial^{2}\{\sum_{k=1}^{m}\kappa _{k}^{2}b_{ik}(x,t)b_{jk}(x,t)|\xi_{k}|^{2\alpha}p(x,t)\}}{\partial x_{i}\partial x_{j}}\bigg\}\eta_{t}(d\xi). \end{eqnarray*} So, we can study the relaxation problem (\ref{ROP}) instead of the original one, (A)-(C). We assume that the constraints in (\ref{ROP}) admit a nonempty set of $\lambda(t)$, which guarantees that the problem (\ref{ROP}) has a solution. We also assume the existence and uniqueness of the solution of the Cauchy problem for the Fokker-Planck equation (\ref{FPE}). The abstract Hamiltonian minimum (maximum) principle (Theorem 4.1.17 of \cite{Roub}) also provides a similar necessary condition for the Young measure solution of (\ref{ROP}), if it admits a solution, that is composed of the points in $\Omega $ which minimise the integrand of the underlying `abstract Hamiltonian'. By employing variational calculus with respect to the Young measure, we can derive the form (\ref{h}) for the Hamiltonian integrand. See Appendix A for details. Via this principle, the problem conceptually reduces to finding the minimum points of $h(t,\xi )$. From Table \ref{maximum points}, for a sufficiently large $M_{Y}$, it can be seen that, if $\alpha <0.5$, then the minimum points for each $t$ with $g_{i}(t)>0$ may be two points, $\{0,M_{Y}\}$ or $\{-M_{Y},0\}$. Hence, in the case of $\alpha <0.5$, the optimal solution of (\ref{ROP}) is a measure on $\{M_{Y},0\}$ or $\{-M_{Y},0\}$. This implies that the optimal solution of (\ref{ROP}) should have the following form $\eta _{t}(\cdot )=\eta _{1,t}(\cdot )\times \cdots \times \eta _{m,t}(\cdot )$, where $\times$ stands for the Cartesian product, and each $\eta _{i,t}$ we adopt is a measure on $\{-M_{Y},0,M_{Y}\}$: \begin{equation} \eta _{i,t}(\cdot )=\mu _{i}(t)\delta _{M_{Y}}(\cdot )+\nu _{i}(t)\delta _{-M_{Y}}(\cdot )+[1-\mu _{i}(t)-\nu _{i}(t)]\delta _{0}(\cdot ) \label{functional density} \end{equation} where $\mu _{i}(t)$ and $\nu _{i}(t)$ are non-negative weight functions. The optimisation problem corresponds to the determination of the $\mu_{i}(t)$ and $\nu _{i}(t)$. Averaging with respect to $\eta$ corresponds to the optimal control signal when the noise is sub-Poisson ($\alpha <0.5$).
This assignment of a probability density for the solution at each time is known in the mathematical literature as a Young Measure \cite{Roub,Young,Young1}. For all $i$ and $t$, the weight functions satisfy: (i) $\mu _{i}(t)+\nu _{i}(t)\leq 1$ and (ii) $\mu_{i}(t)\nu_{i}(t)=0$ (owing to the properties mentioned above that $h(t,\xi _{i})$ cannot simultaneously have both $M_{Y}$ and $-M_{Y}$ as optimal). Consider the simple one-dimensional system (\ref{linear}). We shall provide the explicit form of the optimal control signal $u(t)$ as a Young measure. Taking expectation for both sides in (\ref{linear}), we have \[ \frac{d\mathbf{E}~(x)}{dt}=p\mathbf{E}(x)+q\lambda (t). \] Since we only minimise the variance in $[T,T+R]$ for some $T>0$ and $R>0$, the control signal $u(t)$ for $t\in \lbrack 0,T)$ is picked so that the expectation of $x(t)$ can reach $z_{0}$ at the time $t=T$. After some simple calculations, we find a deterministic $\lambda (t)$ as follows: \[ \lambda (t)=\frac{z_{0}-x_{0}\exp (pT)}{Tq}\exp (p(-T+t)),~t\in \lbrack 0,T] \] such that $\mathbf{E}(x(T))=z_{0}$. Then we pick $\lambda (t)=-pz_{0}/q$ for $t\in \lbrack T,T+R]$ such that $d\mathbf{E}(x(t))/dt=0$ for all $t\in \lbrack T,T+R]$. Hence, $\mathbf{E}(x(t))=z_{0}$ for all $t\in \lbrack T,T+R] $. In the interval $[T,T+R]$, as discussed above, for a sufficiently large $M_{Y}$, the optimal solution of $\lambda (t)$ should be a Young measure that picks values in $\{0,M_{Y},-M_{Y}\}$. To sum up, we can construct the optimal $\lambda (t)$ as follows: \[ \eta _{t}(\cdot )=\left\{ \begin{array}{ll} \delta _{\lambda (t)}(\cdot ) & t\in \lbrack 0,T) \\ \delta _{M_{Y}}(\cdot )\frac{-pz_{0}}{q~M_{Y}}+\delta _{0}(\cdot )[1+\frac{ pz_{0}}{q~M_{Y}}] & t\in \lbrack T,T+R]~\mathrm{if}~-pz_{0}/q>0~\mathrm{or} \\ \delta _{-M_{Y}}(\cdot )\frac{pz_{0}}{q~M_{Y}}+\delta _{0}(\cdot )[1-\frac{ pz_{0}}{q~M_{Y}}] & t\in \lbrack T,T+R]~\mathrm{if}~pz_{0}/q\geq 0. \end{array} \right. \] It can be seen that in $[0,T)$, $\eta _{t}(\cdot )$ is in fact a deterministic function as the same as $\lambda (t)$. \section{Precise Control Performance} We now illustrate the control performance when the noise is sub-Poisson. For the general nonlinear system (\ref{general}), we cannot obtain an explicit expression for the probability density functional $\eta[\lambda]$, Eq. (\ref {functional density}), or the value of the variance (execution error). However, we can adopt a \textit{non optimal} probability density functional which illustrates the property of the exact system, that the execution error becomes arbitrarily small when the bound of the control signal, $M_{Y}$, becomes arbitrarily large. In the simple case (\ref{linear}), we note that if there is a $\hat {u}(t)$, such that $\mathbf{E}(x(t))=z(t) $, then the variance becomes, expressed by Young measure $\hat{\eta}(\cdot)$, \begin{eqnarray*} {\mathrm{v}ar}(x(t)) & =\int_{0}^{t}\int_{-M_{Y}}^{M_{Y}}\exp(2p(t-s))q^{2}|\xi|^{2\alpha}\hat{\eta} (d\xi)ds \\ & =\int_{0}^{t}\exp(2p(t-s))q^{2}M_{Y}^{2\alpha}\frac{|\hat{u}(s)|}{M_{Y}}ds, \end{eqnarray*} which converges to zero as $M_{Y}\rightarrow\infty$, due to $\alpha<0.5$. That is, the minimised execution error can be arbitrarily small if the bound of the control signal, $M_{Y}$, goes sufficiently large. In fact, this phenomenon holds for general cases. 
The non optimal probability density functional is motivated by assuming that there is a deterministic control signal $\hat{u}(t)$ which controls the dynamical system \begin{equation} \frac{d\hat{x}}{dt}=A(\hat{x},t,\hat{u}(t)) \label{ds} \end{equation} which is the original system (\ref{general}), \textit{with the noise removed} . The deterministic control signal $\hat{u}(t)$ causes $\hat{x}(t)$ to precisely achieve the target trajectory $\hat{x}(t)=z(t)$ for $T\leq t\leq T+R$. Then, we add the noise with the signal-dependent variance: $\sigma _{i}=\kappa _{i}|\lambda _{i}|^{\alpha }$ with some $\alpha <0.5$, which leads a stochastic differential equation, $dx=A(x,t,\lambda (t))dt+B(x,t,\lambda (t))dW_{t}$. The non optimal probability density that is appropriate for time $t$, namely $\hat{\eta}_{t,i}(\xi )$, is constructed to have a mean over the control values $\{-M_{Y},0,M_{Y}\}$, which equals $ \hat{u}(t)$. This probability density is \begin{equation} \hat{\eta}_{i,t}(\lambda _{i})=\frac{|\hat{u}_{i}(t)|}{M_{Y}}\delta _{\sigma (t)M_{Y}}(\lambda _{i})+(1-\frac{|\hat{u}_{i}(t)|}{M_{Y}})\delta _{0}(\lambda _{i}) \label{functional density1} \end{equation} where $\sigma (t)=\mathrm{sign}(\hat{u}_{i}(t))$ and, by definition, $ \hat{u}_{i}(t)=\int_{-M_{Y}}^{M_{Y}}\lambda _{i}\hat{\eta}_{i,t}(\lambda _{i})d\lambda _{i}$. We establish in Appendix B that the expectation condition ((B) above) holds asymptotically when $M_{Y}\rightarrow \infty ,$ which shows that the non optimal probability density functional is appropriately `close' to the optimal functional. The accumulated execution error associated with the non optimal functional is estimated as \begin{equation} \min_{\eta }\sqrt{\int_{T}^{T+R}\mathrm{var}\big[x(t))\big]dt}=O\bigg(\frac{1 }{M_{Y}^{1/2-\alpha }}\bigg) \label{error_est} \end{equation} and, in this way, optimal performance of control, with sub-Poisson noise, can be seen to become precise as $M_{Y}$ is made large. By contrast, if $ \alpha \geq 0.5$, the accumulated execution error is always greater than some positive constant. To gain an intuitive understanding of why the effects of noise are eliminated for $\alpha<0.5$ we discretise the time $t$ into small bins of identical size $\Delta t$. Using the `noiseless control' $\hat{u}_{i}(t)$, we divide the time bin $\left[ t,t+\Delta t\right] $ into two complementary intervals: $\left[ t,t+|\hat{u}(t)|\Delta t/M_{Y}\right] $ and $\left[ t+| \hat{u}(t)|\Delta t/M_{Y},t+\Delta t\right] $, and assign $ \lambda_{i}=\sigma(t)M_{Y}$ for the first interval and $\lambda_{i}=0$ for the second. When $\Delta t\rightarrow0$ the effect of the control signal $ \lambda_{i}(t)$ on the system approaches that of $\hat{u}_{i}(t)$, although $ \lambda_{i}(t)$ and $\hat{u}_{i}(t)$ are quite different. The variance of the noise in the first interval is $\kappa _{i}M_{Y}^{2\alpha}$ and is $0$ in the second. Hence, the overall noise effect of the bin is $\sigma_{i}^{2}= \frac{\kappa_{i}|\hat{u}_{i}(t)|}{M_{Y}}\cdot M_{Y}^{2\alpha}=\kappa_{i}| \hat{u}_{i}(t)|M_{Y}^{2\alpha-1}$. Remarkably, this tends to zero as $ M_{Y}\rightarrow\infty$ \textit{if} $\alpha<1/2$ (i.e., for sub-Poisson noise). The discretisation presented may be regarded as a formal stochastic realisation of the probability density functional (Young measure) adopted. The interpretation above can be verified in a rigorous mathematical way. See Appendix B for details. 
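The bin-splitting construction above is straightforward to reproduce numerically. The sketch below (a minimal illustration with assumed parameter values, not the computation reported later in this paper) applies it to the linear example (\ref{linear}): each Euler bin is split into an `on' sub-interval with $\lambda=\pm M_{Y}$, whose length reproduces the deterministic control $\hat u(t)$ on average, and an `off' sub-interval with $\lambda=0$. The reported standard deviation of $x(T+R)$ shrinks roughly like $M_{Y}^{\alpha-1/2}$, in line with (\ref{error_est}).
\begin{verbatim}
import numpy as np

# Linear test system (Eq. (linear)): dx = (p x + q u) dt, control noise of index alpha.
# All numerical values are illustrative assumptions.
p, q, kappa, alpha = -1.0, 1.0, 1.0, 0.25
x0, z0, T, R, dt, n_paths = 0.0, 1.0, 0.5, 0.5, 1e-3, 4000
rng = np.random.default_rng(1)

def terminal_spread(M_Y):
    t = np.arange(0.0, T + R, dt)
    # deterministic control steering E[x] to z0 by time T and holding it there (as in the text)
    u_hat = np.where(t < T, (z0 - x0*np.exp(p*T)) / (T*q) * np.exp(p*(t - T)), -p*z0/q)
    x = np.full(n_paths, x0)
    for k in range(len(t)):
        tau_on = min(abs(u_hat[k]) / M_Y, 1.0) * dt       # bin split as described above
        for lam, tau in ((np.sign(u_hat[k]) * M_Y, tau_on), (0.0, dt - tau_on)):
            if tau > 0.0:
                dW = rng.normal(0.0, np.sqrt(tau), n_paths)
                x += (p*x + q*lam)*tau + kappa*q*abs(lam)**alpha*dW
    return x.mean(), x.std()

for M_Y in (10.0, 100.0, 1000.0):
    m, s = terminal_spread(M_Y)
    print(f"M_Y={M_Y:7.1f}  E[x(T+R)]={m:+.4f} (target {z0})  std={s:.4f}")
\end{verbatim}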
\section{Application and Example} Let us now consider an application of this work: the control of straight-trajectory arm movement, which has been widely studied \cite {Win,Sim,Tan,Ike} and applied to robotic control. The dynamics of such structures are often formalised in terms of coordinate transformations. Nonlinearity arises from the geometry of the joints. The change in spatial location of the hand that results from bending the elbow depends not only on the amplitude of the elbow movement, but also on the state of the shoulder joint. For simplicity, we ignore gravity and viscous forces, and only consider the movement of a hand on a horizontal plane in the absence of friction. Let $ \theta _{1}$ denote the angle between the upper arm and horizontal direction, and $\theta _{2}$ be the angle between the forearm and upper arm (Fig. \ref{arms}). The relation between the position of hand $[x_{1},x_{2}]$ and the angles $[\theta _{1},\theta _{2}]$ is \begin{eqnarray*} \theta _{1} &=&\arctan (x_{2}/x_{1})-\arctan (l_{2}\sin \theta _{2}/(l_{1}+l_{2}\cos \theta _{2})) \\ \theta _{2} &=&\arccos [(x_{1}^{2}+x_{2}^{2}-l_{1}^{2}-l_{2}^{2})/(2l_{1}l_{2})], \end{eqnarray*} where $l_{1,2}$ are moments of inertia with respect to the center of mass, for the upper arm and forearm. When moving a hand between two points, a human maneuvers their arm so as to make the hand move in roughly a straight line between the end points. We use this to motivate the model by applying geostatics theory \cite{Win}. This implies that the arm satisfies an Euler-Lagrange equation, which can be described as the following nonlinear two-dimensional system of differential equations: \begin{eqnarray} &&N(\theta _{1},\theta _{2})\left[ \begin{array}{c} \ddot{\theta}_{1} \\ \ddot{\theta _{2}} \end{array} \right] +C(\theta _{1},\theta _{2},\dot{\theta}_{1},\dot{\theta}_{2})\left[ \begin{array}{c} \dot{\theta}_{1} \\ \dot{\theta}_{2} \end{array} \right] =\gamma _{0}\left[ \begin{array}{c} Q_{1} \\ Q_{2} \end{array} \right] , \nonumber \\ &&\theta _{1}(0)=-\frac{\pi }{2},~\theta _{2}(0)=\frac{\pi }{2},~\dot{\theta} _{1}(0)=\dot{\theta}_{2}(0)=0. \label{AM} \end{eqnarray} In these equations \begin{eqnarray*} N &=&\left[ \begin{array}{ll} \begin{array}{l} I_{1}+m_{1}r_{1}^{2}+m_{2}l_{1}^{2} \\ +I_{2}+m_{2}r_{2}^{2}+2k\cos \theta _{2} \end{array} & I_{2}+m_{2}r_{2}^{2}+k\cos \theta _{2} \\ I_{2}+m_{2}r_{2}^{2}+k\cos \theta _{2} & I_{2}+m_{2}r_{2}^{2} \end{array} \right] , \\ C &=&k\sin \theta _{2}\left[ \begin{array}{ll} \dot{\theta}_{2} & \dot{\theta}_{1}+\dot{\theta}_{2} \\ \dot{\theta}_{1} & 0 \end{array} \right] ,~Q_{i}=\lambda _{i}(t)+\kappa _{0}|\lambda _{i}(t)|^{\alpha }\frac{ dW_{i}}{dt}, \end{eqnarray*} where {$m_{i}$, $l_{i}$, and $I_{i}$ are, respectively the mass, length, and moment of inertia with respect to the center of mass for the $i$'th part of the system and $i=1$ ($i=2$) denotes the upper arm (forearm), $r_{1,2}$ are the lengths of the upper- and fore-arms, and $\gamma _{0}$ is the scale parameter of the force. Additionally, $k=m_{2}l_{1}r_{2}$, while } $\lambda _{1,2}(t)$ are the \textit{means} of two torques $Q_{1,2}(t)$, which are motor commands to the joints. The torques are accompanied by signal-dependent noises. All other quantities are fixed parameters. See \cite {Win} for the full details of the model. The values of the parameters we pick here are listed in Table \ref{parameters}. 
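To fix ideas, a minimal sketch of how (\ref{AM}) can be integrated numerically is given below: it builds the matrices $N$ and $C$ exactly as stated above and solves for the angular accelerations. The physical constants are placeholders (the actual values used in this paper are listed in Table \ref{parameters}, which is not reproduced here), and we read $l_{i}$ as the link lengths and $r_{i}$ as the distances from the joints to the centres of mass; that reading is an assumption on our part.
\begin{verbatim}
import numpy as np

# Placeholder physical parameters; the paper's values are in Table "parameters".
m1, m2 = 1.4, 1.0          # masses of upper arm and forearm (kg)
l1, l2 = 0.30, 0.33        # link lengths (m)
r1, r2 = 0.15, 0.165       # distances from the joints to the centres of mass (m)
I1, I2 = 0.025, 0.045      # moments of inertia about the centres of mass (kg m^2)
gamma0 = 1.0               # force scale parameter
k = m2 * l1 * r2

def angular_acceleration(theta, dtheta, Q):
    """Solve N(theta) ddtheta = gamma0*Q - C(theta, dtheta) dtheta, with N and C as stated above."""
    th2 = theta[1]
    N = np.array([
        [I1 + m1*r1**2 + m2*l1**2 + I2 + m2*r2**2 + 2*k*np.cos(th2), I2 + m2*r2**2 + k*np.cos(th2)],
        [I2 + m2*r2**2 + k*np.cos(th2),                              I2 + m2*r2**2],
    ])
    C = k * np.sin(th2) * np.array([
        [dtheta[1], dtheta[0] + dtheta[1]],
        [dtheta[0], 0.0],
    ])
    return np.linalg.solve(N, gamma0 * Q - C @ dtheta)

# One Euler step from the initial condition of Eq. (AM) under constant illustrative torques
theta, dtheta, dt = np.array([-np.pi/2, np.pi/2]), np.zeros(2), 1e-3
Q = np.array([1.0, 0.5])
ddtheta = angular_acceleration(theta, dtheta, Q)
dtheta, theta = dtheta + dt*ddtheta, theta + dt*dtheta
print("ddtheta =", ddtheta)
\end{verbatim}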
For this example, we shall aim to control the hand such that it starts at $t=0$, with the initial condition of (\ref{AM}), reaches the target at coordinates $H=[H_{1},H_{2}]$ at time $t=T$, and then stays at this target for a time interval of $R$. We use the minimum variance principle to determine the optimal control, which is more advantageous than other optimisation criteria for controlling a robot arm \cite{Win,Ike}. Let $[x_{1}(t),x_{2}(t)]$ be the Cartesian coordinates of the hand that follow from the angles $[\theta _{1}(t),\theta_{2}(t)]$. The minimum variance principle determines $\min_{\lambda_{1},\lambda_{2}}\int_{T}^{T+R}[\mathrm{var}(x_{1}(t))+\mathrm{var}(x_{2}(t))]dt$, subject to the constraint that $\mathbf{E}[x_{1}(t),x_{2}(t)]=[H_{1},H_{2}]$ for $T\leq t\leq T+R$, with $-M_{Y}\leq\lambda_{i}\leq M_{Y}$. Despite not being in possession of an explicit analytic solution, we can conclude that if $\alpha\geq0.5$, the solution of the optimisation problem follows from the unique minimum of the Hamiltonian integrand, and hence yields $\lambda_{1}(t)$ and $\lambda _{2}(t)$ which are \textit{ordinary functions}. However, if $\alpha<0.5$, the optimal solution of the optimisation problem follows from a probability density functional analogous to Eq. (\ref{functional density}) (i.e., a Young measure over $\lambda_{i}\in\{-M_{Y},0,M_{Y}\}$). Thus, we can relax the optimisation problem via a Young measure as follows: \begin{eqnarray*} Q_{i}=\int_{-M_{Y}}^{M_{Y}} \bigg(\xi_{i}+\kappa_{0}|\xi_{i}|^{\alpha}dW_{i}/dt\bigg)\cdot\eta_{i,t}(d\xi),~i=1,2, \end{eqnarray*} and \begin{eqnarray} \left\{ \begin{array}{ll} \min_{\eta_{1,2}(\cdot)} & \int_{T}^{T+R} [\mathrm{var}(x_{1}(t))+\mathrm{var}(x_{2}(t))]dt \\ \mathrm{subject~to} & \mathbf{E}[x_{1}(t),x_{2}(t)]=[H_{1},H_{2}],~t\in[T,T+R] \\ & \xi_{i}\in[-M_{Y},M_{Y}]. \end{array} \right. \label{RAMOP} \end{eqnarray} We used Euler's method to conduct numerical computations, with a time step of $0.01$ msec in (\ref{AM}). This yields a dynamic programming problem (see Methods). Fig. \ref{mean} shows the means of the optimal control signals $\bar{\lambda}_{1,2}(t)$ with $\alpha =0.25$ and $M_{Y}=20000$: \[ \bar{\lambda}_{i}(t)=\int_{-M_{Y}}^{M_{Y}}\xi \eta _{i,t}(d\xi ),~i=1,2. \] According to the form of the optimal Young measure, the optimal solution should be \[ \eta _{i,t}(ds)=\left\{ \begin{array}{ll} \bigg[\frac{\bar{\lambda}_{i}(t)}{M_{Y}}\delta_{M_{Y}} (s)+(1-\frac{\bar{\lambda}_{i}(t)}{M_{Y}})\delta_{0} (s)\bigg]ds & \bar{\lambda}_{i}(t)>0 \\ \bigg[\frac{|\bar{\lambda}_{i}(t)|}{M_{Y}}\delta_{-M_{Y}} (s)+(1-\frac{|\bar{\lambda}_{i}(t)|}{M_{Y}})\delta_{0} (s)\bigg]ds & \bar{\lambda}_{i}(t)<0 \\ \delta_{0} (s)ds & \mathrm{otherwise}. \end{array} \right. \] It can be shown (derivation not given in this work) that in the absence of the noise term, the arm can be accurately controlled to reach a given target for any $T>0$. In this case, Fig. \ref{desire} shows the dynamics of the angles, their velocities and their accelerations in the controlled system with the noise removed. For comparison, the dynamics of the angles, velocities and accelerations of the noisy system are illustrated in Fig. \ref{fact}; they are exactly the same as those in the case with the noise removed. However, the acceleration dynamics of the noisy system appear discontinuous, since the control signals, which contain noise and are added to the right-hand sides of the mechanical equations, are discontinuous (noisy) in a numerical realisation.
However, according to the theory of stochastic differential equations \cite{SDE}, (\ref{AM}) has a continuous solution. Hence, these discontinuous acceleration dynamics still lead to very smooth dynamics of the velocities and angles, as shown in Fig. \ref{fact}. Figs. \ref{figs1} (a) and (b) illustrate that the probability density functional for this problem contains optimal control signals that are similar to neural pulses. Despite the optimal solution not being an ordinary function when $\alpha <0.5$, the trajectories of the angles $\theta _{1}$ and $\theta _{2}$ of the arm appear quite smooth, as shown in Fig. \ref{fact} (a), and the target is reached very precisely if the value of $M_{Y}$ is large. By comparison, when $\alpha >0.5$ the outcome has a standard deviation between $4$ and $6$ cm, which may lead to a failure to reach the target. A direct comparison between the execution errors of the cases $\alpha =0.8\,(>0.5)$ and $\alpha =0.25\,(<0.5)$ is shown in the supplementary movies (supplementary videos `Video S1' and `Video S2') of arm movements for both cases. Our conclusion is that a Young measure optimal solution, in the case of sub-Poisson control signals, can realise a precise control performance even in the presence of noise. However, Poisson or supra-Poisson control signals cannot realise a precise control performance, despite the existence of an explicit optimal solution in this case. Thus $\alpha <0.5$ significantly reduces the execution error compared with $\alpha \geq 0.5$. For different values of $T$ (the time at which the target is to be reached) and $R$ (the duration of staying at the target), under sub-Poisson noise, i.e., $\alpha <0.5$, the system can be precisely controlled by optimal Young measure signals with a sufficiently large $M_{Y}$. Since the target is in the reachable region of the arm, the original differential system (\ref{AM}) with the noise removed can be controlled for any $T>0$ and $R>0$ \cite{Win,Sim}. According to the discussion in Appendix B (Theorem \ref{thm2}), the execution error can then be made arbitrarily small when $M_{Y}$ is sufficiently large. However, the smaller $T$ is, i.e., the more rapid the control, the larger the means of the control signals will be. As for the duration $R$, by picking the control signals to be fixed values (zeros in this example) such that the velocities remain zero, the arm will stay at the target for an arbitrarily long or short time. Similarly, with a large $M_{Y}$, the error (variance) of staying at the target can be made very small. To illustrate these arguments, we take $T=100$ (msec) and $R=100$ (msec) as an example (all other parameters are the same as above). Fig. \ref{meanv1} shows that the means of the optimal Young measure control signals before reaching the target have larger amplitudes than those when $T=650$ (msec), and Fig. \ref{factv1} shows that the arm can be precisely controlled to reach and stay at the target. The movement error depends strongly on the value of the dispersion index, $\alpha$, and the bound of the control signal, $M_{Y}$. Fig. \ref{figs2} indicates a quantitative difference in the execution error between the two cases $\alpha<0.5$ and $\alpha\geq 0.5$, if $\alpha$ is close to (but less than) $0.5$. The execution error can be appreciable unless a large $M_{Y}$ is used. For example, if $\alpha=0.45$, as in Fig. \ref{figs2}, the square root of the execution error is approximately $0.6$ cm when $M_{Y}=20000$.
From (\ref{error_est}), the error decreases as $M_{Y}$ increases, behaving approximately as a power law, as illustrated in the inset plot of Fig. \ref{figs2}. The logarithm of the square root of the execution error is found to depend approximately linearly on the logarithm of $M_{Y}$ when $\alpha=0.25$, with a slope close to $-0.25$, in good agreement with the theoretical estimate (\ref{error_est}). We note that in a biological context, a set of neuronal pulse trains can achieve precise control in the presence of noise. This could be a natural way to approximately implement the probability density functional when $\alpha<0.5$. All other parameters are the same as above ($\alpha=0.25$). The firing rates are illustrated in Figs. \ref{figs1} (a) and (b) and broadly coincide with the probability density functional we have discussed. In particular, at each time $t$, the probability $\eta_{i,t}$ can be approximated by the fraction of the neurons that are firing, with the mean firing rates equal to the means of the control signals (see Methods). The approximations of the components of the noisy control signals are shown in Figs. \ref{figs3} (a) and (b) respectively. Figs. \ref{figs3} (c) and (d) illustrate such an implementation of the optimal solution by neuronal pulse trains. Using the pulse trains as control signals, we can realise precise movement control. We enclose two videos, `movieUP.avi' and `movieDOWN.avi', to demonstrate the efficiency of the control by pulse trains for two different targets. As they show, the targets are reached precisely by the arm. We point out that the larger the ensemble is, the more precise the control performance will be, because a large number of neurons in an ensemble can theoretically lead to a large $M_{Y}$, as mentioned above, which improves the approximation of the Young measure and decreases the execution error, as stated in (\ref{error_est}). We note that these kinds of patterns of pulse trains have been widely reported in experiments, for example the synchronous neural bursting reported in \cite{Ross1}. This may provide a mathematical rationale for the nervous system to adopt pulse-like signals to realise motor control. \section{Conclusions} In this paper, we have provided a general mathematical framework for controlling a class of stochastic dynamical systems with random control signals whose noise variance can be regarded as a function of the signal magnitude. If the dispersion index, $\alpha$, is $<0.5$, which is the case when the control signal is sub-Poisson, an optimal solution in the form of an explicit function does not exist and has to be replaced by a Young measure solution. This parameterized measure can lead to a precise control performance, where the control error can become arbitrarily small. We have illustrated this theoretical result via a widely-studied problem of arm movement control. In the control problem of biological and robotic systems, large control signals are needed for rapid movement control \cite{Ric}. When noise occurs, this will cause imprecision in the control performance. As pointed out in \cite{Sim,Tod,Kit,Mus,Chh}, a trade-off should be considered when conducting rapid control in the presence of noise. In this paper, we still use a ``large'' control signal, but in a different context. With sub-Poisson noise, we proved that a sufficiently large $M_{Y}$, i.e., a sufficiently large region of the control signal values, can lead to precise control performance.
Hence, a large region of control signal values plays a crucial role in realising precise control in noisy environments, for both ``slow'' and ``rapid'' movement control. In the numerical examples, the larger the $M_{Y}$ we pick, the smaller the control error will be, as shown in the inset plot of Fig. \ref{figs2} as well as in (\ref{error_est}) (Theorem \ref{thm2} in Appendix B). Implementation of the Young measure approach in biological control appears to be a natural way to achieve a small execution error in the presence of sub-Poisson noise. In particular, in the neural-motor control example illustrated above, the optimal solution in the case of $\alpha <0.5$ is quite interesting. Assume we have an ensemble of neurons which fire pulses synchronously within a sequence of non-overlapping time windows, as depicted in Figs. \ref{figs1} C and D. We see that the firing neurons yield control signals which are very close, in form, to the Young measure solution. This may provide a mathematical rationale for why the nervous system adopts pulse-like trains to realise motor control. Additionally, we point out that our approach may have significant ramifications in other fields, including robot motor control and sparse functional estimation, which are topics for our future research. \section*{Methods} \textbf{Numerical methods for the optimisation solution.} We used Euler's method to conduct numerical computations, with a time step of $\Delta t=0.01$ msec in (\ref{AM}) with $\alpha<0.5$. This yields a dynamic programming problem. First, we divide the time domain $[0,T]$ into small time bins of size $\Delta t$. Then, we regard the process $\eta_{i,t}$ in each time bin $[n\Delta t,(n+1)\Delta t]$ as a static measure variable. Thus, the solution reduces to finding two series of nonnegative parameters $\mu_{i,n}$ and $\nu_{i,n}$ with $\mu_{i,n}+\nu_{i,n}\leq1$ and $\mu_{i,n}\nu_{i,n}=0$ such that \[ \lambda_{i}(t)=\left\{ \begin{array}{ll} M_{Y} & n\Delta t\leq t<(n+\mu_{i,n})\Delta t \\ -M_{Y} & (n+\mu_{i,n})\Delta t\leq t<(n+\mu_{i,n}+\nu_{i,n})\Delta t \\ 0 & (n+\mu_{i,n}+\nu_{i,n})\Delta t\leq t<(n+1)\Delta t. \end{array} \right. \] The approximate solution of the optimisation problem requires nonnegative $\mu_{i,n}$ and $\nu_{i,n}$ that minimise the final movement errors. We thus have a dynamic programming problem. We should point out that in the literature, a similar method was proposed to solve the optimisation problem in a discrete system with control signals taking only two values \cite{Ike,Aih,Aih1}. Thus, the dynamical system (\ref{general}) becomes the following difference equations via the Euler method: \begin{eqnarray*} &&x_{i}(k+1)=x_{i}(k)+\Delta t\bigg\{a_{i}(x(k),t_{k})+ \sum_{j=1}^{m}b_{ij}(x(k),t_{k})[\mu _{j,k}-\nu _{j,k}]M_{Y}\bigg\} \\ &&+\sum_{j=1}^{m}b_{ij}(x(k),t_{k})\kappa _{j}\sqrt{\Delta t}(\mu _{j,k}+\nu _{j,k})M_{Y}^{\alpha }\nu _{j},~k=0,1,2,\cdots , \end{eqnarray*} where $\nu _{j}$, $j=1,\ldots ,m$, are independent standard Gaussian random variables.
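For concreteness, the sketch below shows how this difference equation can be advanced for an ensemble of sample paths. It follows the displayed update verbatim, with the scalar linear example standing in for $a(x,t)$ and $b(x,t)$, and with placeholder bin weights; in the actual computation the $\mu_{i,k}$ and $\nu_{i,k}$ are obtained from the dynamic programming problem derived below.
\begin{verbatim}
import numpy as np

def euler_step(x, a, b, mu, nu, M_Y, kappa, alpha, dt, rng):
    """One Euler update of the discretised dynamics driven by the bin weights (mu, nu).

    x      : array of sample paths (scalar state, shape (n_paths,))
    a, b   : callables giving the drift a(x, t_k) and gain b(x, t_k)
    mu, nu : nonnegative bin weights with mu + nu <= 1 and mu * nu = 0
    """
    drift = a(x) + b(x) * (mu - nu) * M_Y
    noise = b(x) * kappa * np.sqrt(dt) * (mu + nu) * M_Y**alpha * rng.normal(size=x.shape)
    return x + dt * drift + noise

# Minimal usage sketch on the scalar linear example (illustrative values, not the arm model)
p, q, kappa, alpha, M_Y, dt = -1.0, 1.0, 1.0, 0.25, 1000.0, 1e-3
rng = np.random.default_rng(0)
x = np.zeros(5000)
for k in range(1000):
    mu_k, nu_k = 2.0 / M_Y, 0.0   # placeholder weights; normally chosen by dynamic programming
    x = euler_step(x, lambda s: p * s, lambda s: q * np.ones_like(s),
                   mu_k, nu_k, M_Y, kappa, alpha, dt, rng)
print(f"E[x] = {x.mean():.3f}, std = {x.std():.4f}")
\end{verbatim}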
We can derive difference equations for the expectations and variances of $x(k)$, by ignoring the higher order terms with respect to $ \Delta t$: \begin{eqnarray*} &&\mathbf{E}(x_{i}(k+1))=\mathbf{E}(x_{i}(k))+\Delta t\bigg\{\mathbf{E} [a(x(k),t_{k})] \\ &&+\sum_{j=1}^{m}\mathbf{E}[b_{ij}(x(k),t_{k})][\mu _{j,k}-\nu _{j,k}]M_{Y} \bigg\} \\ &&\mathrm{cov}(x_{i}(k+1),x_{i^{\prime }}(k+1))=\mathrm{cov} (x_{i}(k),x_{i^{\prime }}(k))+\Delta t\mathrm{cov}\bigg(x_{i^{\prime }}(k), \big\{a_{i}(x(k),t_{k}) \\ &&+\sum_{j=1}^{m}b_{ij}(x(k),t_{k})[\mu _{j,k}-\nu _{j,k}]M_{Y}\big\}\bigg) \\ &&+\Delta t\mathrm{cov}\bigg(x_{i}(k),\big\{a_{i^{\prime }}(x(k),t_{k})+\sum_{j=1}^{m}b_{i^{\prime }j}(x(k),t_{k})[\mu _{j,k}-\nu _{j,k}]M_{Y}\big\}\bigg) \\ &&+\Delta t\sum_{j=1}^{m}\mathrm{cov}(b_{i^{\prime }j}(x(k),t_{k}),b_{ij}(x(k),t_{k}))(\mu _{j,k}+\nu _{j,k})M_{Y}^{2\alpha }. \end{eqnarray*} Thus, Eq. (\ref{ROP}) becomes the following discrete optimization problem: \begin{eqnarray} \left\{ \begin{array}{ll} \min_{\mu _{i,k},\nu _{i,k}} & \sum_{k}\mathrm{var}(\phi (x(k),t_{k})) \\ \mathrm{subject~to~} & \mathbf{E}(\phi (x(k),t_{k}))=z(t_{k}),~\mu_{i,k}+\nu_{i,k}\le 1, \\ & \mu_{i,k}\ge0,~\nu_{i,k}\ge0 \end{array} \right. \label{dis} \end{eqnarray} with $x(k)$ a Gaussian random vector with expectation $\mathbf{E}(x(k))$ and covariance matrix $cov(x(k),x(k))$. \textbf{Neuronal pulse trains approximating Young measure solution.} At each time $t $, the measure $\eta_{t}^{\ast}$ can be approximated by the fraction of the neuron ensemble that are firing. In detail, assuming that the means of the optimal control signals are $R_{t,i}(\xi)$, $i=1,2$, and there are one ensemble of excitatory neurons and another ensemble of inhibitory neurons. A fraction of the neurons fire so that the mean firing rates satisfy: \[ \lambda_{i}^{\ast}(t)=\gamma\bigg\{\mathbf{E}[R_{i}^{ext}(t)]-\mathbf{E} [R_{i}^{inh}(t)]\bigg\}, \] where $R_{i}^{ext}(t)$ and $R_{i}^{inh}(t)$ are the firing rates of the excitatory and inhibitory neurons respectively, and $\gamma$ is a scalar factor. In occurrence of sub-Poisson noise, the noisy control signals $ u_{i}^{\ast}(t)=\lambda_{i}^{\ast}(t)+\zeta_{i}(t)$ are approximated by \[ u_{i}^{\ast}(t)=\gamma\big[R_{i}^{ext}(t)-R_{i}^{inh}(t)\big]. \] Both ensembles of neurons are imposed with baseline activities, which bound the minimum firing rates away from zeros, given the spontaneous activities of neurons when no explicit signal is transferred. A numerical approach involves discretise time $t$ into small bins of identical size $\Delta t$. The firing rates can be easily estimated by averaging the population activities in a time bin. We have used $400$ neurons to control the system, with two ensembles of neurons with equivalent numbers that approximate the first and second components of control signal, respectively. Each neuron ensemble have $200$ neurons with $100$ excitatory and $100$ inhibitory neurons. \section*{Appendices} \subsection*{Appendix A: Derivation of formula (\protect\ref{h})} Let: $W$ be the time-varying p.d.f. $p(x,t)$ that is second-order continuous-differentiable with respect to $x$ and $t$ that is embedded in the Sobolev function space $W^{2,2}$; $W_{1}$ be the function space of $ p(x,t_{0})$, regarding as a function with respect to $x$ with a fixed $t_{0}$ ; $W_{2}$ be the function space of $p(x_{0},t)$, regarding as a function with respect to $t$ with a given $x_{0}$. The spaces $W_{1,2}$ can be regarded being embedded in $W$. 
In addition, let: $\hat{W}$ be the function space where the image $\mathcal{L}[W]$ is embedded; $L$ be the space of linear operator $\mathcal{L}$, denoted above; $\mathscr L(Z,E)$ be the space composed of bounded linear operator from linear space $Z$ to $E$; and $ Z^{\ast }$ be the dual space of the linear space $Z$: $Z^{\ast }=\mathscr L(Z,\mathbb{R})$. Furthermore, let $\tilde{\mathcal{Y}}$ be the tangent space of Young measure space $\mathcal{Y}$: $\tilde{Y}=\{\eta -\eta ^{\prime }:~\eta ,\eta ^{\prime }\in \mathcal{Y}\}$. For simplicity, we do not specify the spaces and just provide the formalistic algebras, and then the following is similar to Chapter 4.3 in \cite{Roub} with appropriate modifications. Define \begin{eqnarray} \Phi(p)&=&\int_{T}^{T+R}\int_{\Xi} \|\phi(x,t)-z(t)\|^{2}p(x,t)dx dt \nonumber \\ \Pi(p,\eta)&=&\bigg(\frac{\partial p}{\partial t}-(\mathcal{L}\cdot\eta) \circ p,p(x,0)-p_{0}(x)\bigg) \nonumber \\ J(p)&=&\int_{\Xi}\phi(x,t)p(x,t)dx-z(t). \label{sym} \end{eqnarray} Thus, (\ref{ROP}) can be rewritten as: \begin{eqnarray} \left\{ \begin{array}{ll} \min_{\eta} & \Phi(p) \\ \mathrm{subject~to} & \Pi(p,\eta)=0,~J(p)=0. \end{array} \right. \label{OPT} \end{eqnarray} The G\^{a}teaux differentials of these maps with respect to $p(x,t)$, denoted by $\nabla_{p}\cdot$, are: \begin{eqnarray*} (\nabla_{p}\Phi)\circ(\hat{p}-p)&=&\int_{T}^{T+R}\int_{\Xi}\|\phi(x,t)-z(t) \|^{2} [\hat{p}(x,t)-p(x,t)]dx dt \\ (\nabla_{p}{\Pi})\circ(\hat{p}-p)&=&\bigg(\frac{\partial(\hat{p}-p)}{ \partial t} -(\mathcal{L}\cdot\eta)\circ(\hat{p}-p),\hat{p}(x,0)-p(x,0)\bigg) \\ (\nabla_{p}J)\circ(\hat{p}-p)&=&\int_{\Xi}\phi(x,t)[\hat{p}(x,t)-p(x,t)]dx \\ \end{eqnarray*} for two time-varying p.d.f. $\hat{p},p\in W$. Here, $\nabla_{p}\Phi\in W^{*}$ , $\nabla_{p}{\Pi}\in\mathscr L(W, \hat{W}\times W_{1})$, $\nabla_{p} J\in \mathscr L(W,W_{2})$. And, the differentials of these maps with respect to the Young measure $\eta$ are: \begin{eqnarray*} (\nabla_{\eta}\Phi)\cdot(\hat{\eta}-\eta)&=&0 \\ (\nabla_{\eta}\Pi)\cdot(\hat{\eta}-\eta)&=& \bigg((\mathcal{L}\circ p)\cdot ( \hat{\eta}-\eta),0\bigg) \\ (\nabla_{\eta}J)\cdot(\hat{\eta}-\eta)&=&0 \\ \end{eqnarray*} for two Young measures $\hat{\eta},\eta\in\mathcal{Y}$. Here, $ \nabla_{\eta}\Phi\in\tilde{\mathcal{Y}}^{*}$, $\nabla_{\eta}\Pi\in\mathscr L( \tilde{\mathcal{Y}},\hat{W}\times W_{1})$, and $\nabla_{\eta}J\in\mathscr L( \tilde{\mathcal{Y}},W_{2})$. Then, we are in the position to derive the result of (\ref{h}) by the following theorem, as a consequence from Theorem 4.1.17 in \cite{Roub}. \begin{theorem} \label{thm1} $\Phi:~W\to W^{*}$, $\Pi:~W\times \mathcal{Y}\to\mathbb{R}$ and $J:~W\to W_{2}$ as defined in (\ref{sym}). Assume that: (1). the trajectory of $x(t)$ in (\ref{Ito}) is bounded almost surely; (2). $a(x,t)$, $b(x,t)$ and $\phi(x,t)$ are $C^{2}$ with respect to $(x,t)$. Let $(\eta^{\ast},p^{\ast})$ be the optimal solution of (\ref{ROP}). Then, there are some $\lambda_{1}\in\mathscr L(\hat{W}\times W_{1},W^{*}) $, $ \lambda_{2}=[\lambda_{21},\lambda_{22}]^{\top}$ with $\lambda_{21}\in { \mathscr L}(\hat{W},W_{2})$, and $\lambda_{22}\in{\mathscr L}(W_{1},,W_{2})$ , such that \begin{eqnarray} \lambda_{1}\circ\nabla_{p}\Pi(p^{\ast},\eta^{\ast}) =\nabla_{p}\Phi(p^{\ast}),~\lambda_{2}\circ\nabla_{p}\Pi(p^{\ast},\eta^{ \ast})=\nabla_{p}J(p^{\ast}), \label{cond2} \end{eqnarray} and the abstract maximum principle \begin{equation} \eta _{t}^{\ast}\bigg\{\mathrm{minima~of~}h(t,\xi )\mathrm{~w.r.t}~\xi \bigg\}=1,~\forall ~t\in \lbrack 0,T+R]. 
\label{amp} \end{equation} holds with "abstract Hamiltonian": \begin{eqnarray} h(t,\xi)&=&-\int_{\Xi }p^{\ast }(x,t)\bigg\{\sum_{i=1}^{n}A_{i}(x,t,\xi ) \frac{\partial \mu _{1}}{\partial x_{i}} \nonumber \\ &&+\frac{1}{2}\sum_{i,j=1}^{n}[B(x,t,\xi )B(x,t,\xi )^{\top }]_{ij}\frac{ \partial ^{2}\mu _{1}}{\partial x_{i}\partial x_{j}}\bigg\}dx. \label{h1} \end{eqnarray} \end{theorem} \begin{proof} Under the conditions in this theorem, we can conclude that the Fokker-Planck equation has a unique solution $p(\eta)$ that is continuously dependent of $ \eta$ from theory of stochastic differential equation \cite{SDE}; $ \Pi(\cdot,\eta):W\to W^{*}$ is Fr\'{e}chet differentiable at $p=\pi(\eta)$ because $x(t)$ is assumed to be almost surely bounded; $\Pi(p,\cdot): \mathcal{Y}\to W^{*}$ and $J$ (in fact $\nabla_{\eta}J=0$) is G\^{a}teaux equi-differentiable around $p\in W$ because of $p\in W\subset W^{2,2}$ with $ (x,t)$ bounded \cite{XX}; the partial differential $\nabla_{\eta}\Pi$ is weak-continuous with respect to $\eta$ because it is linearly dependent of $ \eta$. In addition, from the existence and uniqueness of the Fokker-Planck equation, $\nabla_{p}\Pi(p,\eta):W\to Im (\nabla_{p}\Pi(p,\eta))\subset \bar{ W}\times W_{1}$ has a bounded inverse. This implies that the following\emph{ adjoint equation} \[ \mu \circ \nabla _{p}\Pi =\nabla _{p}\Phi +c(t)\circ \nabla _{p}J, \] has a solution for $p=p(\eta)$, denoted by $\mu$. Let $\mu =[\mu _{1},\mu _{2}]$, which should be solution of the following equation \begin{equation} \left\{ \begin{array}{l} \frac{\partial \mu _{1}}{\partial t}+(\mathcal{L}^{\ast }\cdot \eta ^{\ast })\circ \mu _{1}=-\Vert \phi (x,t)-z(t)\Vert ^{2}-c(t)\phi (x,t) \\ \mu _{2}(x)=\mu _{1}(x,0) \\ \mu _{1}(x,T+R)=0 \qquad t\in \lbrack 0,T+R],~x\in \Xi , \end{array} \right. \end{equation} with the dual operator $\mathcal{L}^{\ast }$ of $\mathcal{L}$ (the operator in the back-forward Kolmogorov equation), still dependent of $(x,t)$ and the value of $\lambda (t)$ (namely $\xi$ in Young measure): \[ \mathcal{L}^{\ast }\circ q=\sum_{i=1}^{n}a_{i}(x,t,\xi )\frac{\partial q}{ \partial x_{i}}+\frac{1}{2}\sum_{i,j=1}^{n}[B(x,t,\xi )B^{\top }(x,t,\xi )]_{ij}\frac{\partial ^{2}q}{\partial x_{i}x_{j}}. \] We pick $\lambda_{1,2}(t)$ with $\mu _{1}=\lambda _{1}+c(t)\lambda _{21}$ and $\mu _{2}=\lambda _{1}+c(t)\lambda _{22}$. So, $\lambda_{1,2}$ should satisfy equation (\ref{cond2}). In fact, $\lambda_{1,2}$ can be regarded as functions ( or generalized functions) with respect to $(x,t)$. Thus, the conditions of Lemma 1.3.16 in \cite{Roub} can be verified, which implies that the gradients of the maps $\Phi$ and $J$ with respect to $\eta$ , by regarding $p=p(\eta)$ from $\Pi(p,\eta)=0$, as follows: \begin{eqnarray*} \partial\Phi=\nabla_{\eta}\Phi-\lambda_{1}\circ\nabla_{\eta}\Pi, ~\partial J=\nabla_{\eta}J-\lambda_{2}\circ\nabla_{\eta}\Pi. \end{eqnarray*} From the abstract Hamilton minimum principle (Theorem 4.1.17 in \cite{Roub} ), applied to each solution of (\ref{ROP}), denoted by $\eta ^{\ast }$, there exists a nonzero function $c(t) $ such that \begin{equation} H(\tilde{\eta})=\partial \Phi (\eta ^{\ast })\cdot \tilde{\eta}+\langle c(t),\partial J(\eta ^{\ast })\cdot \tilde{\eta}\rangle ,~\forall ~\tilde{ \eta}\in \mathcal{Y} \label{cond1} \end{equation} is an 'abstract Hamiltonian', with respect to $\tilde{\eta}$. 
With the definitions of $\lambda_{1,2}$, (\ref{cond1}) becomes \[ H(\tilde{\eta})=\nabla _{\eta }\Phi \cdot \tilde{\eta}+c\nabla _{\eta }J\cdot \tilde{\eta}-\langle \mu ,\nabla _{\eta }\Pi \cdot \tilde{\eta} \rangle =-\langle \mu ,\nabla _{\eta }\Pi \cdot \tilde{\eta}\rangle , \] owing to $\nabla _{\eta }\Phi =\nabla _{\eta }J=0$. By specifying $\mu$ with $\lambda_{1,2}$, we have \begin{eqnarray*} H(\tilde{\eta}) &=&-\langle \mu ,\nabla _{\eta }\Pi \cdot \tilde{\eta} \rangle =-\langle \mu _{1},[\mathcal{L}\circ p^{\ast }]\cdot \tilde{\eta} \rangle =-\langle p^{\ast },[\mathcal{L}^{\ast }\circ \mu _{1}]\cdot \tilde{ \eta}\rangle \\ &=&-\int_{0}^{T+R}\int_{\Xi }\int_{\Omega }p^{\ast }\bigg\{ \sum_{i=1}^{n}a_{i}(x,t,\xi )\frac{\partial \mu _{1}}{\partial x_{i}} \\ &&+\frac{1}{2}\sum_{i,j=1}^{n}[B(x,t,\xi )B(x,t,\xi )^{\top }]_{ij}\frac{ \partial ^{2}\mu _{1}}{\partial x_{i}\partial x_{j}}\bigg\}\tilde{\eta} _{t}(d\xi )dxdt, \end{eqnarray*} where $p^{\ast }$ stands for the time-varying density corresponding to the optimal Young measure solution $\eta ^{\ast }$. From this, letting $ \xi=\lambda$, we have the "abstract Hamiltonian" in the form of (\ref{h1}) as the Hamiltonian integrand of $H(\cdot)$. The Hamiltonian abstract minimum (maximum) principle indicates the optimal Young measure $\eta _{t}^{\ast }$ is only concentrated at the minimum points of $h(t,\xi )$ with respect to $\xi $ for each $t$, namely. That is, (\ref {amp}) holds. This completes the proof. \end{proof} From this theorem, since the variances depend on the magnitude of the signal as described in (\ref{sigma i}), removing the terms without $\xi$, it is equivalent to look at the minima of $\hat{h}(t,\xi )$ in the form of \begin{equation} \hat{h}(t,\xi ) =\sum_{i=1}^{m}h_{i}(t,\xi _{i}), \quad h_{i}(t,\xi _{i}) =g_{i}(t)|\xi _{i}|^{2\alpha }-f_{i}(t)\xi _{i} \label{Hamilton} \end{equation} instead of $h(t,\xi )$, where \begin{eqnarray*} f_{i}(t) &=&\sum_{j=1}^{m}\int_{\Xi }p^{\ast }(x,t)b_{ji}(x,t)\frac{\partial \mu _{1}}{\partial x_{i}}dx, \\ g_{i}(t) &=&-\frac{\kappa _{i}^{2}}{2}\sum_{j,k=1}^{m}\int_{\Xi }p^{\ast }(x,t)b_{ji}(x,t)b_{ki}(x,t)\frac{\partial ^{2}\mu _{1}}{\partial x_{j}\partial x_{k}}dx. \end{eqnarray*} This gives formula (\ref{h}). \subsection*{Appendix B: Derivation of precise control performance (\protect \ref{error_est})} The control performance inequality (\ref{error_est}) can be derived from the following theorem. \begin{theorem} \label{thm2} Let $\hat{x}$ be the solution of equation (\ref{ds}) and $ 0<\alpha<0.5$. Assume that there are a positive measurable function $ \kappa(t)$ and a positive constant $C_{1}$ such that $\Vert A(x,t,u)-A(y,t,u)\Vert\leq\kappa(t)\Vert x-y\Vert$, $\Vert B(x,t,u)\Vert^{2}\leq\kappa(t)\sum_{k=1}^{m}|u_{k}|^{2\alpha} $ and$ |\phi(x,t)-\phi(y,t)|\leq C_{1}\Vert x-y\Vert$ hold for all $x,y\in\mathbb{R} ^{n}$ and $t\geq 0$. Then, for any non-random initial value, namely, $x(0)= \mathbf{E}(x(0))$, with the non-optimal Young measure (\ref{functional density1}), we have \begin{enumerate} \item $\int_{T}^{T+R}\Vert\mathbf{E}(x(t))-z(t)\Vert\to 0$; \item $\min_{\eta}\sqrt{\int_{T}^{T+R}\mathrm{var}(x)dt}=O\bigg(\frac{1}{ M_{Y}^{1/2-\alpha}}\bigg)$ \end{enumerate} as $M_{Y}\to\infty$. \end{theorem} \begin{proof} Comparing the differential equation of $x$, i.e. (\ref{Ito}), and that of $ \hat{x}$, (\ref{ds}), we have $d(x-\hat{x})=[A(x,t,\hat{u})-A(\hat{x},t,\hat{ u})]dt+B(x,t,\lambda(t))dW_{t}$. 
And, replacing $\lambda(t)$ with the Young measure $\hat{\eta}_{t}(\cdot)$, in the form of (\ref{functional density1}), from the conditions in this theorem, we have \begin{eqnarray*} & \mathbf{E}\Vert x(\tau)-\hat{x}(\tau)\Vert^{2}=\mathbf{E}\bigg\{\int _{0}^{\tau}[A(x(t),t,\hat{u})-A(\hat{x}(t),t,\hat{u})]dt\bigg\}^{2} \\ & +\mathbf{E}\bigg\{\int_{0}^{\tau}\Vert B(x,t,\lambda)\Vert^{2}\cdot \hat{ \eta}_{t}dt\bigg\} \\ & \leq\mathbf{E}\bigg\{\int_{0}^{\tau}\kappa(t)\Vert x(t)-\hat{x}(t)\Vert dt \bigg\}^{2}+\sum_{k=1}^{m}\int_{0}^{\tau}\kappa(t)|\lambda_{k}|^{2\alpha }\cdot\hat{\eta}_{k,t}dt \\ & \leq\int_{0}^{\tau}\kappa^{2}(s)ds\int_{0}^{\tau}\mathbf{E}\Vert x(t)- \hat {x}(t)\Vert^{2}dt+\sum_{k=1}^{m}\int_{0}^{\tau}\frac{M^{2\alpha}_{Y}}{ M_{Y}}\kappa(t)|\hat{u}_{k}(t)|dt, \end{eqnarray*} for any $\tau>0$. By using Gr\"{o}nwall's inequality, we have \begin{eqnarray} && \int_{T}^{T+R}\mathbf{E}\Vert x(\tau)-\hat{x}(\tau)\Vert^{2}d\tau dt\leq \int_{0}^{T+R}\mathbf{E}\Vert x(\tau)-\hat{x}(\tau)\Vert^{2}d\tau dt\le \nonumber \\ &&\int_{0}^{T+R}\exp\bigg[\int_{t}^{T+R}\int_{0}^{s}\kappa^{2}(\tau)d\tau ds \bigg] \frac{1}{M^{1-2\alpha}_{Y}}\sum_{k=1}^{m}\kappa(t)|\hat{u}_{k}(t)|dt. \label{errorest} \end{eqnarray} Noting that for $0<\alpha<0.5$, $\lim_{M_{Y}\rightarrow\infty}1/M_{Y}^{1-2 \alpha}=0$ implies that $\int_{T}^{T+R}\mathbf{E}\Vert x(\tau)-\hat{x} (\tau)\Vert^{2}d\tau dt=O(1/M_{Y}^{1-2\alpha})$ as $M_{Y}$goes to infinity. This proves the second item in this theorem. In addition, \begin{eqnarray*} \int_{T}^{T+R}\Vert\mathbf{E}(x(t))-z(t)\Vert \leq \sqrt{R}\sqrt{\mathbf{E} \int_{T}^{T+R}\{\Vert x(t)-\hat{x}(t)\Vert^{2}dt\}} \end{eqnarray*} also approaches zero as $M_{Y}$ goes to infinity. This proves the first item of the theorem. This completes the proof. \end{proof} Hence, as $M_{Y}$ goes to infinity, the non optimal solution (\ref {functional density1}) can asymptotically satisfy the constraint and the error variance goes to zero as $M_{Y}$ goes to infinity. Therefore, the performance error of the REAL optimal solution of the optimisation problem ( \ref{ROP}) approaches zero as $M_{Y}\rightarrow\infty$ in the case of $ \alpha<0.5$. Furthermore, we can conclude from (\ref{errorest}) that the execution error, measured by the standard deviation, can be approximated as ( \ref{error_est}). \section*{References} \begin{figure} \caption{Illustration of possible minimum points of the function $g_{i} \label{al} \end{figure} \begin{table}[ht] \caption{Summary of the possible minimum points of (\protect\ref{h}).} \label{maximum points}\centering{ \begin{tabular}{@{\vrule height 10.5pt depth4pt width0pt}lcc} & $g_{i}(t)>0$ & $g_{i}(t)<0$ \\ \hline $\alpha>0.5$ & one point in $[-M_{Y},M_{Y}]$ & $\{M_{Y}\}$ or $\{-M_{Y} \}$ \\ $\alpha<0.5$ & $\{0,M_{Y}\}$ or $\{0,-M_{Y}\}$ & $\{M_{Y}\}$ or $\{-M_{Y} \}$ \\ \hline \end{tabular} } \end{table} \begin{figure} \caption{Illustration of the arm control. Arm is composed of three points ($ P $, $Q$, and $H$), where $P$ is fixed and others not, and two arms (upper arm $PQ$ and the forearm $QH$). Button $H$ is to reach some given target (red cross) by moving front- and back-arms. 
} \label{arms} \end{figure} \begin{table}[ht] \caption{Parameters.} \label{parameters}{\centering \begin{tabular}{@{\vrule height 10.5pt depth4pt width0pt}lc} Parameters & Values \\ \hline masses (of the inertia w.r.t the mass center) & $m_{1}=2.28~kg$, $ m_{2}=1.31~kg$ \\ lengths (of the inertia w.r.t the mass center) & $l_{1}=0.305~m$, $ l_{2}=0.254~m$ \\ moments (of the inertia w.r.t the mass center) & $I_{1}=0.022~kg\cdot m^2$; $ I_{2}=0.0077~kg\cdot m^{2}$ \\ lengths of arms & $r_{1}=0.133~m$, $r_{2}=0.109~m$ \\ reach time & $T=650~ms$ (except in Figs. \ref{meanv1} and \ref{factv1}) \\ duration & $R=10~ms$ (except in Figs. \ref{meanv1} and \ref{factv1}) \\ target & $\theta_{1}(T)=-1$, $\theta_{2}(T)=\pi/2$ \\ scale parameter & $r_{0}=1$ \\ noise scale & $\kappa_{0}=1$ \\ bound of the control signal & $M_{Y}=20000$, except in Figs. \ref{al} and \ref{figs1} \\ & and the inset plot of Fig. \ref{figs2} \\ time step & $\Delta t=0.01~ms$ \\ \hline \end{tabular} } \end{table} \begin{figure} \caption{The means of the optimal control signals $\protect\lambda_{1} \label{mean} \end{figure} \begin{figure} \caption{Optimal control of straight-trajectory arm movement model with parameters listed in Table \protect\ref{parameters} \label{desire} \end{figure} \begin{figure} \caption{Optimal control of straight-trajectory arm movement model \emph{ with noise} \label{fact} \end{figure} \begin{figure} \caption{Optimal control signal of Young measure of straight-trajectory arm movement model with noise, illustrations for $\protect\lambda_{1} \label{figs1} \end{figure} \begin{figure} \caption{Means of the optimal control signals $\protect\lambda _{1} \label{meanv1} \end{figure} \begin{figure} \caption{Optimal control of straight-trajectory arm movement model \emph{ with noise} \label{factv1} \end{figure} \begin{figure} \caption{Performance of optimal control of straight-trajectory arm movement model: Relationship between the executive error, measured by mean standard variance, and dispersion index $\protect\alpha$ with $M_{Y} \label{figs2} \end{figure} \begin{figure} \caption{Spiking control of straight-trajectory arm movement model: (a). Approximation of first component ($u_{1} \label{figs3} \end{figure} \end{document}
\begin{document} \title[Higher congruences]{A remark on higher congruences for the number of rational points of varieties defined over a finite field} \author{H\'el\`ene Esnault} \address{ Universit\"at Duisburg-Essen, Mathematik, 45117 Essen, Germany} \epsilonmail{[email protected]} \date{July 16, 2005} \begin{abstract} We show that the $\epsilonll$-adic cohomology of the mod $p$ reduction $Y$ of a regular model of a smooth proper variety defined over a local field, the cohomology of which is supported in codimension $\kappa$, can't be Tate up to level $(\kappa -1)$. As a consequence, the number of rational points of $Y$ can't fulfill the natural relation $|Y({\mathbb F}_q)|\epsilonquiv \sum_{i\ge 0} q^i\cdot b_{2i}(\bar{Y}) $ modulo $q^\kappa$. \\ \ \\ {\bf Une remarque sur les congruences d'ordre sup\'erieur pour le nombre de points rationnels de vari\'et\'es d\'efinies sur un corps fini .}\\ \ \\ {\bf R\'esum\'e}: Nous montrons que la cohomologie $\epsilonll$-adique de la r\'eduction $Y$ modulo $p$ d'un mod\`ele r\'egulier d'une vari\'et\'e propre et lisse d\'efinie sur un corps local, dont la cohomologie est support\'ee en codimension $\kappa \ge 1$, ne peut \^etre de Tate jusqu'en niveau $(\kappa -1)$. En cons\'equence, le nombre de points rationnels de $Y$ ne peut v\'erifier la formule naturelle $|Y({\mathbb F}_q)|\epsilonquiv \sum_{i\ge 0} q^i\cdot b_{2i}(\bar{Y}) $ modulo $q^\kappa$. \epsilonnd{abstract} \maketitle \begin{quote} \epsilonnd{quote} {\bf Version fran\c{c}aise abr\'eg\'ee}. Dans \cite{EsUneq}, Theorem 1.1, nous montrons que si ${\mathcal X}$ est un mod\`ele r\'egulier d'une vari\'et\'e $X$ propre et lisse d\'efinie sur un corps local de corps r\'esiduel ${\mathbb F}_q$, alors si la cohomologie $\epsilonll$-adique $H^i(\bar{X})$ est support\'ee en codimension $\ge 1$ pour $i\ge 1$, le nombre de points rationnels de sa r\'eduction $Y$ modulo $p$ v\'erifie $|Y({\mathbb F}_q)|\epsilonquiv 1 $ modulo $q$. En fait, pour \^etre plus pr\'ecis, sous cette hypoth\`ese, les valeurs propres du Frobenius g\'eom\'etrique agissant sur la cohomologie $\epsilonll$-adique $H^i(\bar{Y})$ de $Y$ sont divisibles par $q$ en tant qu'entiers alg\'ebriques. Le but de cette note est de discuter une formulation en coniveau sup\'erieur. Une fa\c{c}on naturelle de g\'en\'eraliser la condition de coniveau $\ge 1$ pour $i\ge 1$ est de supposer que $H^i(\bar{X})/H^i(\bar{X})_{{\rm alg}}$ est support\'ee en codimension $\kappa$, o\`u $H^i(\bar{X})_{{\rm alg}}$ est nulle si $i$ est impair et sinon est la partie alg\'ebrique. Nous montrons cependant que cela n'implique pas que les valeurs propres du Frobenius g\'eom\'etrique sont divisibles par $q^\kappa$ en tant qu'entiers alg\'ebriques sur $H^i(\bar{Y})/H^i(\bar{Y})_{q^{\frac{i}{2}}}$, o\`u $ H^i(\bar{Y})_{q^{\frac{i}{2}}}$ est nulle si $i$ est impair et sinon est la partie sur laquelle le Frobenius agit par multiplication par $q^{\frac{i}{2}}$. En particulier la formule naturelle $|Y({\mathbb F}_q)|\epsilonquiv \sum_{i\ge 0} q^i\cdot b_{2i}(\bar{Y}) $ modulo $q^\kappa$ n'est pas valable en g\'en\'eral. Cette formulation est propos\'ee par N. Fakhruddin dans \cite{Fakh} qui la montre sous certaines hypoth\`eses pour une famille g\'eom\'etrique en \'egale caract\'eristique $p>0$. Nous montrons en quoi ces hypoth\`eses sont tr\`es fortes. 
\section{Introduction} In \cite{EsUneq}, Theorem 1.1, we show that if ${\mathcal X}$ is a regular model of a smooth proper variety $X$ defined over a local field with finite residue field ${\mathbb F}_q$, then if $\epsilonll$-adic cohomology $H^i(\bar{X})$ is supported in codimension $\ge 1$ for $i\ge 1$, the number of rational points of its mod $p$ reduction $Y$ fulfills $|Y({\mathbb F}_q)|\epsilonquiv 1 $ modulo $q$. To be more precise, the assumption implies that the eigenvalues of the geometric Frobenius acting on $\epsilonll$-adic cohomology $H^i(\bar{Y})$ of $Y$ are $q$-divisible algebraic integers. The proof relies on a version of Deligne's integrality theorem \cite{DeInt}, Corollaire 5.5.3 over local fields \cite{DE}, Corollary 0.4. The goal of this note is to discuss a formulation in higher coniveau level. A natural generalization of the coniveau $\ge 1$ condition for $i\ge 1$ is to assume that $H^i(\bar{X})/H^i(\bar{X})_{{\rm alg}}$ is supported in codimension $\ge \kappa$, where $H^i(\bar{X})_{{\rm alg}}$ is equal to 0 if $i$ is odd, else is the algebraic part of cohomology. This means that there is a codim $\ge \kappa$ subscheme $Z\subset X$ so that $H^i(\bar{X})\xrightarrow{{\rm rest}=0} H^i(\bar{X}\setminus \bar{Z})/{\rm Im}(H^i(\bar{X})_{{\rm alg}}$. Said differently, $H^{i}(\bar{X})=H^{i}(\bar{X})_{{\rm alg}}$ for $i\le 2 \kappa$, and $H^i_{\bar{Z}}(\bar{X})\twoheadrightarrow H^i(\bar{X})$ for $i\ge 2\kappa$. However we show that this assumption does not imply that the eigenvalues of the geometric Frobenius acting on $H^i(\bar{Y})/H^i(\bar{Y})_{q^{\frac{i}{2}}}$ are divisible by $q^\kappa$-divisible algebraic integers, where $ H^i(\bar{Y})_{q^{\frac{i}{2}}}$ is equal to 0 if $i$ is odd, else is the part of cohomology on which Frobenius acts by multiplication by $q^{\frac{i}{2}}$. In particular, the formula $|Y({\mathbb F}_q)|\epsilonquiv \sum_{i\ge 0} q^i\cdot b_{2i}(\bar{Y}) $ modulo $q^\kappa$ does not hold in general. This formulation was proposed in \cite{Fakh} by N. Fakhruddin, who shows it under certain assumptions in a geometric family in equal characteristic $p>0$. We show how strong are those assumptions. Our example consists of a Godeaux surface in characteristic 0. We take a reduction mod $p$ which is a cone over a smooth curve $C$ of higher degree. After desingularization of the mod $p$ reduction, $H^1(\bar{C})(-1)$ enters $H^3(\bar{Y})$, and this destroys the possibility of the $|Y({\mathbb F}_q)|\epsilonquiv 1 +q\cdot b_2(\bar{Y})$ mod $q^2$ congruence. \noindent {\it Acknowledgements}. We thank Eckart Viehweg for discussions on the subject of this note and for his encouragement. \section{The example} Let us consider the Godeaux surface $X_0/{\mathbb Q}_p$ defined as the quotient of the Fermat quintic $F\subset {\mathbb P}^3_{{\mathbb Q}_p}$ of homogeneous equation $px_0^5+x_1^5+x_2^5+x_3^5$ by the group $\mu_5$ acting via $\xi\cdot (x_i)=(\xi^i\cdot x_i)$. Here $p$ is prime to 5, and $\xi$ generates the group of 5-th roots of unity. As well known \cite{BPVdV}, V, 15 and VII, 11, $H^0(X_0, \Omega^1_{X_0})=H^0(X_0, \Omega^2_{X_0})$ and by comparison of de Rham with \'etale cohomology, one obtains $H^1(\bar{X}_0)=H^3(\bar{X}_0)=0, \ H^{2i}(\bar{X}_0)=H^{2i}_{{\rm alg}}(\bar{X}_0)$ for $i=0,1,2$. Let us assume we have a regular model ${\mathcal X}\to {\rm Spec}(R)$ of $X_0$ over an extension $R\supset {\mathbb Z}_p$, with local field $K={\rm Frac}(R)$ and residue field ${\mathbb F}_q$. 
Thus the general fiber is $X=X_0\times_{{\mathbb Q}_p} K$, and we denote by $Y$ the mod $p$ reduction over ${\mathbb F}_q$. We use the computation in \cite{EsUneq}, sections 2 and 3. One has an exact sequence \ga{2.1}{H^i_{\bar{Y}}({\mathcal X}^u) \to H^i(\bar{Y})\xrightarrow{{\rm sp}^u} H^i(X^u) \to H^{i+1}_{\bar{Y}} ({\mathcal X}^u)} where $^u$ means the pull back via the extension $K\subset K^u$ to the maximal unramified extension, and $\bar{}$ means the pull back via the extension to the algebraic closure. The sequence is equivariant with respect to the action of the geometric Frobenius ${\rm Frob}\in {\rm Gal}(\bar{{\mathbb F}}_q/{\mathbb F}_q) $ acting on $H^*(\bar{Y}), H^*_{\bar{Y}}({\mathcal X}^u), H^*(X^u)$. One also has the exact sequence \ga{2.2}{0\to H^1(I, H^{i-1}(\bar{X}))\to H^i(X^u)\to H^i(\bar{X})^I\to 0} where $I\subset {\rm Gal}(\bar{K}/K)$ is the inertia group, with quotient $ {\rm Gal}(\bar{K}/K)/I={\rm Gal}(\bar{{\mathbb F}}_q/{\mathbb F}_q)$. The sequence is equivariant with respect to the action of ${\rm Frob}$. So using Gabber's purity theorem \cite{Fu}, Theorem 2.1.1. as in \cite{EsUneq}, section 2, one obtains \ga{2.3}{H^1(\bar{Y})=0, } and an equivariant exact sequence \ga{2.4}{0\to H^0(\bar{Y}^0)(-1)\to H^2(\bar{Y}) \to H^2(\bar{X})^I,} where $Y^0=Y\setminus $ singular locus. Thus in particular, Frob acts via multiplication by $q$ on $H^2(\bar{Y})$. So via Grothendieck-Lefschetz trace formula \cite{Gr} and the fact that $H^4(\bar{Y})=\oplus_{{\rm components}} {\mathbb Q}_\epsilonll(-2)$, we conclude \ga{2.5}{|Y({\mathbb F}_q)| \epsilonquiv 1 +q\cdot b_2(\bar{Y}) - {\rm Tr}\ {\rm Frob}|H^3(\bar{Y}) \ {\rm mod} \ q^2.} The question becomes whether $H^3(\bar{Y})$ dies or not. We now construct ${\mathcal X}$ and show $H^3(\bar{Y})\neq 0$ for this ${\mathcal X}$. The mod $p$ reduction in ${\mathbb P}^3_{{\mathbb F}_p}$ of the model ${\mathcal F}\subset {\mathbb P}^3_{{\mathbb Z}_p}$ of $F$ defined by the same equation $px_0^5+x_1^5+x_2^5+x_3^5$ is the cone over the Fermat curve $Q_{{\mathbb F}_p}\subset {\mathbb P}^2_{{\mathbb F}_p}$ of equation $x_1^5+x_2^5+x_3^5$. Then $\mu_5$ acquires one single fix point $(1:0:0:0) \in {\mathbb P}^3_{{\mathbb F}_p}$ which is the vertex of ${\rm cone}(Q_{{\mathbb F}_p})$. We base change ${\mathbb Z}_p\subset R$ via $\partiali^5=p$ and denote by $k={\mathbb F}_q$ the residue field and $K={\rm Frac}(R)\supset {\mathbb Q}_p$ the local field. So ${\mathcal F}\times_{{\mathbb Z}_p} R \subset {\mathbb P}^3_R$ is defined by the equation $\partiali^5x_0^5+x_1^5+x_2^5+x_3^5$. The $\mu_5$ operation is still defined by $\xi\cdot x_i=\xi^i x_i$ and now the only fix point $x:=(1:0:0:0) \in {\mathbb P}^3_{{\mathbb F}_q}$ is at the same time the only point in which ${\mathcal F}\times_{{\mathbb Z}_p} R$ is not regular. The affine equation of ${\mathcal F}\times_{{\mathbb Z}_p} R$ in $({\mathbb A}^3_R, x_0\neq 0)$ with coordinates $X_i=\frac{x_i}{x_0}$ on which $\mu_5$ acts via $\xi\cdot X_i=\xi^iX_i$, is $\partiali^5+X_1^5+X_2^5+X_3^5$. We blow up the singularity $x$ to obtain $\sigma: {\mathcal F}'\to {\mathcal F}\times_{{\mathbb Z}_p} R$. Then $\sigma^{-1}(x)$ is isomorphic to the Fermat quintic $Z_2$ in ${\mathbb P}^3_{{\mathbb F}_q}$ of equation $X_0^5+X_1^5+X_2^5+X_3^5$ with action $\xi\cdot X_i=\xi^i X_i$. Consequently, $\mu_5$ acts fix point free on ${\mathcal F}'$ and the quotient ${\mathcal X}$, which is defined over $R$, is a regular model of $X=X_0\times_{{\mathbb Q}_p} K:=(F/\mu_5)\times_{{\mathbb Q}_p} K$. 
Furthermore, $\sigma^{-1}({\mathcal F}\times_{{\mathbb Z}_p} {\mathbb F}_q)$ is the union of two components, one being the blow up $Z_1$ in the vertex of ${\rm cone}(Q_{{\mathbb F}_p}\times_{{\mathbb F}_p} {\mathbb F}_q)$, the other one being the Fermat quintic $Z_2$. Thus the mod $p$ fiber $Y$ of ${\mathcal X}$ has two components $S_i=Z_i/\mu_5$. They meet along $C=(Q_{{\mathbb F}_p}\times_{{\mathbb F}_p} {\mathbb F}_q)/\mu_5$. As $p\ne 5$, the covering $Q_{{\mathbb F}_p}\times_{{\mathbb F}_p} {\mathbb F}_q \to C$ is \'etale, and ${\rm genus}(C)=2$. The normalization sequence for $Y$ yields a Frob equivariant exact sequence \ga{2.6}{ H^3(\bar{Y})\to H^3(\bar{S}_1)\oplus H^3(\bar{S}_2)\to 0.} On the other hand, one has \ml{2.7}{0\neq H^1(\bar{C})(-1)=H^1(\bar{Q}_{{\mathbb F}_p})^{\mu_5}(-1)=\\ H^3_c(\bar{Z}_1\setminus \bar{Q}_{{\mathbb F}_p})^{\mu_5}=H^3(\bar{Z}_1)^{\mu_5}=H^3(\bar{S}_1).} Thus \ga{2.8}{H^3(\bar{Y})\twoheadrightarrow H^1(\bar{C})(-1)\neq 0} which shows $H^3(\bar{Y})\neq 0$. \section{Discussion} \subsection{Higher dimension} One can produce examples as above in all dimensions by taking the product ${\mathcal X}\times_R {\mathbb P}^n$, which is still regular. Then $H^i(X\times_K {\mathbb P}^n)/H^i(X\times_K {\mathbb P}^n)_{{\rm alg}}=0$ for all $i$, while $H^{3+2j}(Y\times_{{\mathbb F}_q} {\mathbb P}^n)\neq 0$ for all $j\ge 0$. \subsection{Motivic condition} From \epsilonqref{2.1}, using \epsilonqref{2.2} and applying \cite{DE}, Corollary 0.4 to the eigenvalues of $H^i(X^u)$, we see immediately that the eigenvalues of Frob on $H^i(\bar{Y})$ fulfill \ga{3.1}{{\rm sp}^u \ {\rm injective} \ + N^\kappa (H^*(\bar{X})/H^*(\bar{X})_{{\rm alg}})= (H^*(\bar{X})/H^*(\bar{X})_{{\rm alg}})\\ \ \Longrightarrow {\rm eigenvalues \ Frob}|H^i(\bar{Y}) \notag \\ =\begin{cases} 0 & i< 2\kappa \ i \ {\rm odd}\\ q^{\frac{i}{2}} & i \le 2\kappa \ i \ {\rm even}\\ \in q^\kappa\cdot {\bar{{\mathbb Z}}} & i \ge 2\kappa. \epsilonnd{cases}\notag } Here $N^\kappa$ is the coniveau filtration as explained in the Introduction. In \cite{Fakh}, N. Fakhruddin analyzes the motivic conditions for a family $f: {\mathbb X}\to S$ defined over a finite field $k$, with $S, {\mathbb X}$ smooth, to have the property that a singular fiber $Y$ over a closed point $s$ with residue field ${\mathbb F}_q\supset k$ fulfills the property $|Y({\mathbb F}_q)|=\sum_{i\ge 0} (-1)^i q^i\cdot b_{2i}(\bar{Y})$ modulo $q^\kappa$. More precisely, he studies the motivic conditions in a geometric family forcing the eigenvalue behavior described in \epsilonqref{3.1}. He singles out three conditions. We explain them and analyze the consequences they have on the completion ${\mathcal X}={\mathbb X}\times_S R$ at $s$ of the family $f$. Here $R$ is the completion of the equal characteristic ring of functions at $s\in S$. Surely, as in \cite{EsPoint}, the first one is base change for the Chow groups $CH_i(\bar{X}), i\le (\kappa -1)$. We know by Bloch's type argument that this implies the coniveau condition in level $\kappa$ on $H^*(\bar{X})/H^*(\bar{X})_{{\rm alg}}$, but we are extremely far of understanding that this is equivalent to it, as predicted by the general Bloch-Beilinson conjectures. The second one is that $R^if_*{\mathbb Q}_\epsilonll$ are constant local systems. This is to say that the specialization map $H^i(\bar{Y})\to H^i(\bar{X})$ is an isomorphism, which in particular forces ${\rm sp}^u$ to be injective, but is stronger than this. 
So we see that those two conditions imply the weaker cohomological conditions in \epsilonqref{3.1} which already force the eigenvalue conclusion on $H^i(\bar{Y})$. The third condition says that the Chow groups $CH_i(\bar{Y}), i\le (\kappa -1)$, are hit by specialization. This should translate into the condition ${\rm sp}^u$ injective above, which is then a consequence of the cohomological consequence of the condition forcing $R^if_*{\mathbb Q}_\epsilonll$ being a constant local system. At any rate, even if, as explained above, the conditions developed in \cite{Fakh} are far from sharpness, they tacitly raise the question of a finer formulation, and are a motivation for this note. \subsection{Formula} It is of course extremely rare that one can check motivic conditions. It is in the rule easier to control cohomological conditions, and \epsilonqref{3.1} gives conditions for a good behavior of rational points on $Y$. However, the condition ${\rm sp}^u$ injective is very nongeometric and likely very nonnatural as well. It would be better to understand a finer condition on the contribution of $H^i_{\bar{Y}}({\mathcal X}^u)$ in $H^i(\bar{Y})$ via the sequence \epsilonqref{2.1}. \renewcommand\refname{References} \begin{thebibliography}{99} \bibitem{BPVdV} Barth, W., Peters, C., van de Ven, A.: Compact Complex Surfaces, Ergebnisse der Mathematik und ihrer Grenzgebiete, 3. Folge, Band {\bf 4} (1984), Springer Verlag. \bibitem{DeInt} Deligne, P.: Th\'eor\`eme d'int\'egralit\'e, Appendix to Katz, N.: Le niveau de la cohomologie des intersections compl\`etes, Expos\'e XXI in SGA 7, Lect. Notes Math. vol. {\bf 340}, 363-400, Berlin Heidelberg New York Springer 1973. \bibitem{DE}Deligne, P., Esnault, H.: Appendix to ``Deligne's integrality theorem in unequal characteristic and rational points over finite fields'' by H. Esnault, preprint 2004, 5 pages, appears in the Annals of Mathematics. \bibitem{EsPoint} Esnault, H.: Varieties over a finite field with trivial Chow group of 0-cycles have a rational point, Invent. math. {\bf 151} (2003), 187-191. \bibitem{EsUneq} Esnault, H.: Deligne's integrality theorem in unequal characteristic and rational points over finite fields, preprint 2004, 10 pages, appears in the Annals of Mathematics. \bibitem{Fakh} Fakhruddin, N.: Chow groups and higher congruences for the number of rational points on proper varieties over finite fields, arXiv:math.NT/0501181. \bibitem{Fu} Fujiwara, K.: A Proof of the Absolute Purity Conjecture (after Gabber), in Algebraic Geometry 2000, Azumino, Advanced Studies in Pure Mathematics {\bf 36} (2002), Mathematical Society of Japan, 153-183. \bibitem{Gr} Grothendieck, A.: Formule de Lefschetz et rationalit\'e des fonctions $L$. S\'eminaire Bourbaki {\bf 279}, 17-i\`eme ann\'ee (1964/1965), 1-15. \epsilonnd{thebibliography} \epsilonnd{document}
\begin{equation}gin{document} \date{November 19, 2004} \title{Coherent inelastic backscattering of intense laser light by cold atoms} \author{V.~Shatokhin} \affiliation{B.~I.~Stepanov Institute of Physics, National Academy of Sciences of Belarus, Skaryna Ave. 70, BY-220072 Minsk, Belarus} \affiliation{Max-Planck-Institut f\"ur Physik komplexer Systeme, N\"othnitzer Str. 38, D-01187 Dresden, Germany} \author{C.~A.~M\"uller} \affiliation{Physikalisches Institut, Universit\"at Bayreuth, D-95440 Bayreuth, Germany} \author{A.~Buchleitner} \affiliation{Max-Planck-Institut f\"ur Physik komplexer Systeme, N\"othnitzer Str. 38, D-01187 Dresden, Germany} \begin{equation}gin{abstract} We present a nonperturbative treatment of coherent backscattering of intense laser light from cold atoms, and predict a nonvanishing backscattering signal even at very large intensities, due to the constructive (self-)interference of inelastically scattered photons. \end{equation}nd{abstract} \pacs{ 42.50.Ct, 42.25.Dd, 32.80-t, 42.25.Hz } \maketitle When a plane wave of arbitrary nature is incident upon a disordered medium of scatterers, the backscattered intensity is an interference pattern of all coherent partial amplitudes containing detailed information on the sample configuration. Under an ensemble average, interference between uncorrelated amplitudes is washed out, except for a small angular range around exact backscattering, where the average intensity may exhibit a narrow peak. This peak results from constructive interference between multiple scattering probability amplitudes counterpropagating along direct and reversed paths \cite{local,sheng}. This phenomenon is called Coherent Backscattering (CBS) and was for the first time demonstrated with samples of polystyrene particles \cite{clasCBS}. The CBS enhancement factor $\alpha$, the ratio of the total intensity at exact backscattering to the background intensity, measures the coherence of counterpropagating amplitudes responsible for localization effects. Recently, CBS of light has been imported to the quantum realm with clouds of cold atoms \cite{labeyrie99,katalunga03,chaneliere03}. An important leitmotiv of these studies is the robustness of the underlying interference effect with respect to fundamental quantum mechanical dephasing mechanisms, such as spin-flip (of the incoming radiation, which carries a polarization degree of freedom) \cite{mueller02} or inelastic scattering. This has important repercussions for the transition from weak to strong (in Anderson's sense) localization of light in disordered atomic samples \cite{gora}, and also for potential technological applications such as random lasers \cite{cao}. While in the regime of weak, perturbative atom-field coupling, the partial destruction of CBS due to spin-flip like processes -- induced by the multiple degeneracy of the atomic transition driven by the incident radiation -- has been demonstrated experimentally and analysed theoretically in quite some detail, experiments and theory only now start to probe the strong coupling limit, where inelastic photon-atom scattering processes prevail. 
First experimental results on Sr (driving the $^1\!S_0\rightarrow ^{1}\!\!\!P_1$ transition with its nondegenerate ground state, hence in the absence of spin-flip) \cite{chaneliere03} indeed demonstrate the reduction of the CBS enhancement factor with increasing intensity of the injected field, for values $s=\Omega^2/2(\Delta^2+\gamma^2)<1$ of the atomic saturation parameter (where $\Omega$ is the Rabi frequency induced by the driving, $\gamma$ half the spontaneous decay rate of the excited atomic level, and $\Delta$ the detuning of the injected laser frequency from the exact atomic resonance). A first scattering theoretical treatment \cite{wellens} identified the origin of such suppression in the availability of which-path information through inelastically scattered photons: reversed paths can be distinguished by the detection of photons of different frequency. However, this treatment, still perturbative in the field intensity, cannot address the limit of large saturation parameters $s\geq 1$ -- with an emerging Mollow triplet \cite{mollow69} in the single atom resonance fluorescence -- and, in particular, makes no prediction on the crossover from dominantly elastic to essentially inelastic CBS, nor on the CBS enhancement factor in the deep inelastic limit. In the present Letter, we enter this regime, starting from a general master equation which allows for a nonperturbative treatment of the atom-field coupling. As we will see, even inelastically scattered photons give rise to a nonvanishing CBS signal. We start out from the elementary toy model of CBS -- a laser field scattering off two atoms with labels $1$ and $2$, placed at a fixed \cite{note} distance $r_{12}\gg \langlembda = 2\pi/k_L$, with ${\bf k}_L$ the wave vector of the incident field. It is known from the perturbative treatment of CBS that double scattering (on two atoms) provides the leading contribution to the CBS signal, since this is the lowest order process which gives rise to time reversed scattering amplitudes which can interfere constructively. We expect that this scenario also allows for a qualitative assessment of the nonlinear atomic response in the regime of high laser intensities, whilst propagation effects in the bulk of the scattering medium are certainly beyond reach of this model. Disorder will be mimicked by a suitable average over the atomic positions. Our model also neglects the acceleration of atoms out of resonance, which certainly becomes important at very high intensities, but can be experimentally compensated for by shortening the CBS probe duration, as realized in \cite{chaneliere03}. We focus exclusively on the photon coupling to the internal atomic degrees of freedom. With these premises, the average backscattered intensity can be derived from the correlation functions of the atomic dipoles which emit the detected signal \cite{agarwal,cohen_tannoudji}. We specialize to the scenario of the Sr experiments, with a nondegenerate atomic dipole transition $J_g=0\rightarrow J_e=1$, driven by laser photons with right circular polarization on the sublevels $\bra 1\rightarrow \bra 4$ -- see Fig.~\,{\rm Re}\,f{fig:scat}. The scattered light is detected in the helicity preserving channel (i.e., of photons which originate from the $\bra 2\rightarrow\bra 1$ transition), where single scattering is absent. 
\begin{equation}gin{figure} \includegraphics[width=6cm]{scattex.eps} \caption{(Color online) Elementary configuration for coherent backscattering (CBS) of intense light (thick arrows) by two isotropic dipolar transitions in the helicity preserving polarization channel (dashed arrows). The sublevels $\bra 1$ and $\bra 3$ of both atoms have magnetic quantum number $m=0$; sublevels $\bra 2$ and $\bra 4$ correspond to $m=-1$ and $m=1$, respectively. } \langlebel{fig:scat} \end{equation}nd{figure} Thus we obtain, up to an irrelevant prefactor, the expectation value for the stationary intensity scattered into the direction ${\bf k}$ close to the backward direction $-{\bf k}_L$, \begin{equation} \langle I\rangle_{\rm ss}=\langle\sigma^1_{22}\rangle_{\rm ss}+\langle\sigma^2_{22}\rangle_{\rm ss} +2\,{\rm Re}\,(\langle\sigma^1_{21}\sigma^2_{12}\rangle_{\rm ss} e^{i{\bf k}\cdot{\bf r}_{12}})\, , \langlebel{corr_f} \end{equation} where $\sigma^\alpha_{kl}\end{equation}quiv \bra k_\alpha\ket l_\alpha$, for atom $\alpha$. The steady-state values for correlation functions of the form $\langle \sigma^\alpha_{ij}\rangle_{\rm ss}$ or $\langle\sigma^\alpha_{ij}\sigma^\begin{equation}ta_{kl}\rangle_{\rm ss}$ can be found from the master equation \cite{lehmberg} \begin{equation} \dot Q =\sum_{\alpha=1}^2{\cal L}_\alpha Q+\sum_{\alpha\nonumbereq \begin{equation}ta=1} ^2{\cal L}_{\alpha\begin{equation}ta}Q, \langlebel{meq} \end{equation} where the Liouvillians ${\cal L}_\alpha$ and ${\cal L}_{\alpha\begin{equation}ta}$ govern the evolution of an arbitrary atomic operator $Q$ for independent and dipole-dipole interacting (through the exchange of one or several photons) atoms, respectively. $Q$ stands for an operator from the complete set of operators acting on a tensor product of Hilbert spaces of individual atoms. For our choice of the atomic structure shown in Fig.~\,{\rm Re}\,f{fig:scat}, \begin{equation} Q\in\underbrace{\{\sigma^1_{11},\cdots,\sigma^1_{44}\}\otimes\{\sigma^2_{11},\cdots,\sigma^2_ {44}\} }_{256 \; {\rm operators}}. \langlebel{Q_2_at} \end{equation} The explicit form of the interaction-picture Liouvillians ${\cal L}_\alpha$ and ${\cal L}_{\alpha\begin{equation}ta}$ derived in the standard dipole, rotating-wave, and Born-Markov approximations can be shown to read: \begin{equation}gin{widetext} \begin{equation}q {\cal L}_\alpha Q & = & -i\Delta[\textbf{D}^{\dagger}_\alpha\cdot\textbf{D}_\alpha,Q] -\frac{i}{2}[\Omega_\alpha(\textbf{D}^{\dagger}_\alpha\cdot\end{equation}p_L)+\Omega^*_\alpha (\textbf{D}_\alpha\cdot\end{equation}p_L^*),Q] +\gamma\left(\textbf{D}^{\dagger}_\alpha\cdot[Q,\textbf{D}_\alpha]+[\textbf{D}^{\dagger}_\alpha,Q]\cdot\textbf{D}_\alpha\right),\\ {\cal L}_{\alpha\begin{equation}ta}Q&=&\textbf{D}^{\dagger}_\alpha\cdot\overleftrightarrow{\bf T}(g,{\bf\hat n})\cdot[Q,\textbf{D}_\begin{equation}ta]+[\textbf{D}^{\dagger}_\begin{equation}ta,Q]\cdot\overleftrightarrow{\bf T}^*(g,{\bf\hat n})\cdot\textbf{D}_\alpha\, , \langlebel{Liouvillians} \end{equation}q \end{equation}nd{widetext} where $\Delta=\omega_L-\omega_0$ is the detuning, $\Omega_\alpha=\Omega e^{i\textbf{k}_L\cdot{\bf r}_\alpha}$ is the atomic (coordinate-dependent) Rabi frequency, and $\end{equation}p_L$ fixes the polarization of the laser field. 
\begin{equation} \textbf{D}_\alpha=-\end{equation}p_{-1}\sigma^\alpha_{12}+\end{equation}p_0\sigma^\alpha_{13}-\end{equation}p_{+1}\sigma^\alpha_{14} \end{equation} is the lowering dipole operator of atom $\alpha$, with $\end{equation}p_{\pm 1},\;\end{equation}p_0$ the unit vectors of the spherical basis. The radiative dipole-dipole interaction due to exchange of photons between the atoms is described by the tensor $ \overleftrightarrow{\bf T}(g,{\bf\hat n})=\gamma g \overleftrightarrow{\boldsymbol{\Delta}}$, where $\overleftrightarrow{\boldsymbol{\Delta}}=\overleftrightarrow {\openone}-{\bf \hat{n}\hat{n}}$ is the projector on the plane defined by the vector $\bf\hat{n}$ connecting atom $1$ and $2$, and \begin{equation} g =i3 \frac{e^{ik_0 r_{12}}}{2k_0 r_{12}}, \langlebel{g} \end{equation} where $k_0=\omega_0/c$, is the small coupling constant $|g|\ll 1$ in the far-field limit $k_0r_{12}\gg 1$, where we neglect near-field interaction terms of order $1/(k_0r_{12})^2$ and $1/(k_0r_{12})^3$ (which, at higher atomic densities, could also be retained in our formalism). Transforming the operator equation (\,{\rm Re}\,f{meq}) to a system of $255$ linear coupled differential equations for the atomic correlation functions, we can solve for the physical quantities which enter the expression (\,{\rm Re}\,f{corr_f}) for the detected intensity. In doing so, we furthermore take advantage of the far field limit $k_0r_{12}\gg 1$, and expand the correlation functions up to second order, ${\langle(\dots)\rangle}_{\rm ss}^{[2]}$, in the dipole-dipole coupling constant (\,{\rm Re}\,f{g}). The double scattering contribution to the CBS signal, detected in the helicity preserving channel, is then precisely given by terms proportional to $g^2$, since it stems from the exchange of two photons between the atoms, along a `direct' and its `reversed' path. Finally, the CBS signal is obtained after an elementary configuration average $\langle\dots\rangle_{\rm conf.}$ defined through the following twofold procedure: (i) isotropic averaging of the relative orientation ${\bf \hat r}_{12}$ over the unit sphere; (ii) uniform averaging of the distance $r_{12}$ over an interval of the order of $\langlembda$, around a mean value given by the mean free path. After this simple procedure all terms relevant for the calculation of the CBS enhancement factor survive, whereas all the irrelevant terms vanish. We thus arrive at our final expression for the total second-order intensity \begin{equation} I_{\rm ss}^{\rm tot\,[2]}(\theta)=L^{\rm tot}+C^{\rm tot}(\theta), \langlebel{d_sc} \end{equation} a sum of the total ladder (or background), $L^{\rm tot}$, and total crossed (or interference) term $C^{\rm tot}(\theta)$, with $\theta$ the observation angle of the scattered intensity with respect to the backward direction. In terms of the second order atomic correlation functions, $L^{\rm tot}$ and $C^{\rm tot}(\theta)$ are given by \begin{equation}q L^{\rm tot}&=&\left\langle\langle\sigma_{22}^1\rangle_{\rm ss}^{[2]}+\langle\sigma_{22}^2\rangle_{\rm ss}^{[2]}\right\rangle_{\rm conf.},\langlebel{l1}\\ C^{\rm tot}(\theta)&=&2\,{\rm Re}\,\left\langle\langle\sigma_{21}^1\sigma_{12}^2\rangle_{\rm ss}^{[2]}e^{i{\bf k}\cdot{\bf r}_{12}}\right\rangle_{\rm conf.}. \langlebel{c1} \end{equation}q Therefrom we deduce the main quantifier of CBS, the enhancement factor \begin{equation} \alpha=\frac{L^{\rm tot}+C^{\rm tot}(0)}{L^{\rm tot}}. 
\langlebel{enh_f} \end{equation} Figure~\,{\rm Re}\,f{fig:int} shows our results for the total CBS intensity as well as its components $L^{\rm tot}$ and $C^{\rm tot}(0)$, as a function of the saturation parameter, at exact resonance ($\Delta=0$). \begin{equation}gin{figure} \includegraphics[width=6cm]{int_letter.eps} \caption{(Color online) Total intensities of the ladder, the crossed terms, and their sum $I^{\rm tot\,[2]}_{\rm ss}$, in the helicity preserving channel, as functions of the saturation parameter $s$ at resonance, $\Delta=0$. } \langlebel{fig:int} \end{equation}nd{figure} The behavior of $I^{\rm tot\,[2]}_{\rm ss}$ shows that the double scattering intensity behaves markedly different from that of an isolated atom. Whilst the scattering intensity from an isolated atom $I^{[0]}\propto s/(1+s)$ is known to saturate for large $s$ \cite{cohen_tannoudji}, the double scattering intensity exhibits a maximum at $s\simeq 0.7$, followed by gradual decrease $\sim s^{-1}$ for large $s$: At high laser intensities, more and more photons are scattered inelastically, and are therefore less likely to undergo resonant interaction with the second atom. \begin{equation}gin{figure} \includegraphics[width=6cm]{enh_full.eps} \caption{(Color online) Enhancement factor $\alpha$ in the helicity preserving channel, versus saturation parameter $s$. Solid curve: on resonance ($\Delta =0$), dashed curve: off resonance ($\Delta =\gamma$). Straight lines represent the perturbative prediction $2-(1+\delta)s/4$ of \protect\cite{wellens}. Inset: The finite enhancement $\alpha_{\infty}\simeq 1.09$ signals residual photon (self-)interference, even in the deep inelastic regime. } \langlebel{fig:enh} \end{equation}nd{figure} The enhancement factor $\alpha(s)$ follows directly from the above quantities. As shown in Fig.~\,{\rm Re}\,f{fig:enh}, it decays monotonously from its weak field limit $\alpha(0)=2$. In qualitative agreement with the experiment \cite{chaneliere03}, this decay is faster for finite detuning $\Delta =\gamma$. For small values of $s$, $\alpha$ is well approximated by the linear decay $2-(1+\delta)s/4$, with $\delta=(\Delta/\gamma)^2$, derived within the scattering picture \cite{wellens}. For large values of $s$ (inset), however, $\alpha$ saturates at a value $\alpha_{\infty}\simeq 1.09$ strictly larger than unity, whilst one would expect vanishing contrast (i.e., $\alpha =1$) for scattering from two independent atoms \cite{kochan95}. Hence, the (self-)interference of inelastically scattered photons unambiguously contributes to the crossed term $C^{\rm tot}(0)$. Note that this observation bears some similarity to CBS with photons from degenerate Raman transitions, which were shown to yield an important contribution to the CBS contrast, even in the limit of infinite ground state degeneracy \cite{mueller01}, as well as to the residual CBS enhancement in optically active media at high magnetic fields \cite{martinez}. In contrast, elastically scattered photons remain perfectly coherent, and contribute to the CBS intensity with a constrast two, for any $s$. To see this, we just need to extract the purely elastic component of the signal from the total yield in eq. (\,{\rm Re}\,f{corr_f}). 
Since the detected intensity $\langle I\rangle_{\rm ss}$ is nothing but the autocorrelation function of the source field amplitudes radiated by the atomic dipoles, its elastic part $\langle I\rangle^{\rm el}_{\rm ss}$ is generated by the classical dipoles induced by the injected radiation -- this is by their average, nonfluctuating parts $\langle\sigma^\alpha_{i\nonumbere j}\rangle_{\rm ss}$ \cite{cohen_tannoudji}. Hence, $\langle I\rangle^{\rm el}_{\rm ss}$ is given by the product of the expectation values of the atomic dipoles: \begin{equation} \langle I\rangle^{\rm el}_{\rm ss}=|\langle\sigma^1_{21}\rangle_{\rm ss}|^2+|\langle\sigma^2_{21}\rangle_{\rm ss}|^2 +2\,{\rm Re}\,(\langle\sigma^1_{21}\rangle_{\rm ss}\langle\sigma^2_{12}\rangle_{\rm ss} e^{i{\bf k}\cdot{\bf r}_{12}})\, . \langlebel{el_corr_f} \end{equation} A power series expansion of the right hand side of (\,{\rm Re}\,f{el_corr_f}) to second order in the coupling $g$ leaves only symmetrically factorized combinations of the form $\langlengle\sigma_{21}^{\alpha}\ranglengle^{[1]}\langlengle\sigma_{12}^\begin{equation}ta\ranglengle^{[1]}$. Asymmetric combinations, like $\langlengle\sigma_{21}^{\alpha}\ranglengle^{[2]}\langlengle\sigma_{12}^\begin{equation}ta\ranglengle^{[0]}$, do not contribute to the signal since the $|1\rangle\leftrightarrow|2\rangle$ transitions are not laser-driven (see Fig.~\,{\rm Re}\,f{fig:scat}), hence $\langlengle\sigma_{12}^\begin{equation}ta\ranglengle^{[0]}$ vanishes. Evaluation of the correlation functions by symbolic calculus, together with the configuration average described above, finally provides an analytic expression for the elastic ladder and crossed terms: \begin{equation} L^{\rm el}=C^{\rm el}(0)=24\pi|g|^2 \frac{1}{1+\delta}\frac{s}{(1+s)^4}\, . \langlebel{coh_comp} \end{equation} Expression (\,{\rm Re}\,f{coh_comp}) shows that the elastic ladder and crossed terms are equal for any $s$, as to be expected from reciprocity arguments \cite{mueller01}. These elastic components decay like $s^{-3}$ at large saturation, much faster than the total intensities that decrease like $s^{-1}$ (cf.\ Fig.~\,{\rm Re}\,f{fig:int}). This proves that the residual CBS enhancement $\alpha_\infty$ is entirely due to the (self-)interference of inelastically scattered photons. Furthermore, expression (\,{\rm Re}\,f{coh_comp}) shows that the elastic part of the double scattering intensity exhibits a maximum at $s=1/3$, slightly below the departure of $\alpha(s)$ from the perturbative prediction of \cite{wellens} in Fig.~\,{\rm Re}\,f{fig:enh}. Consistently, an expansion of (\,{\rm Re}\,f{coh_comp}) to second order in $s$ reproduces the expression $L^{\rm el}=C^{\rm el}(0)\sim s-4s^2$ derived in \cite{wellens}. Note that the crossover to the nonlinear regime for double scattering occurs at a value of $s$ three times smaller than for an isolated atom, where $I^{\rm el[0]}\propto s/(1+s)^2$ exhibits a maximum at $s=1$. This has a transparent interpretation, by virtue of factorizing eq.~(\,{\rm Re}\,f{coh_comp}) into (i) the elastic intensity $I^{{\rm el}[0]}$ scattered by the first strongly driven atom, (ii) the total scattering cross section $\sigma^{\rm tot}\propto 1/(1+\delta)(1+s)$ of the second atom, and (iii) the relative weight $I^{\rm el[0]}/I^{\rm tot[0]}=\sigma^{\rm el}/\sigma^{\rm tot}=(\gamma^2+\Delta^2)/(\gamma^2+\Omega^2/2+\Delta^2)= 1/(1+s)$ \cite{cohen_tannoudji} of elastic processes therein. 
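As a quick numerical illustration of these statements (an independent check, not part of the original analysis), one can examine the $s$-dependence $s/(1+s)^4$ of the elastic component above directly; the constant prefactor $24\pi|g|^2/(1+\delta)$ is omitted:
\begin{verbatim}
import numpy as np

# Illustrative check of the s-dependence f(s) = s/(1+s)^4 of the elastic
# component (the constant prefactor 24*pi*|g|^2/(1+delta) is omitted).

f = lambda s: s / (1.0 + s) ** 4

s = np.linspace(1e-4, 50.0, 200000)
print("maximum near s =", s[np.argmax(f(s))])   # ~ 0.333, i.e. s = 1/3

# Small-s behaviour: f(s) = s - 4*s^2 + O(s^3), as in the perturbative result.
s0 = 0.01
print(f(s0), s0 - 4 * s0**2)                     # nearly equal

# Large-s behaviour: f(s) ~ 1/s^3, much faster than the ~ 1/s decay of the
# total double-scattering intensity.
for s1 in (10.0, 100.0):
    print(s1, f(s1) * s1**3)                     # approaches 1
\end{verbatim}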
Obviously, higher order scattering processes than considered in our present contribution must unavoidably push the crossover value of $s$ to even smaller values. In conclusion, we have presented the first study of coherent backscattering of intense laser light from saturated dipole transitions. The CBS enhancement decreases monotonously as a function of $s$, but, remarkably, coherence is -- partially -- preserved in the deep inelastic limit of the two-atom response to intense laser radiation, since also inelastically scattered photons can interfere with themselves, along time-reversed paths. Consequently, CBS should also have an imprint on the spectrum of the scattered radiation, as well as on its photocount statistics, which are both directly accessible in the framework of our present approach, as well as in laboratory experiments. Furthermore, let us note that our present results are also relevant in the somewhat different context of Young's double slit experiments with two atoms \cite{kochan95}. In contrast to the forward Young-type interference that necessarily decoheres for $s\to \infty$, since the photon visits two different, uncoupled atoms, a \end{equation}mph{backscattering} experiment, with appropriate polarization sensitive excitation and detection, must lead to a finite interference contrast. \begin{equation}gin{acknowledgments} We would like to thank Dominique Delande, Beno{\^\i}t Gr\'emaud, Christian Miniatura, Mikhail Titov, and Thomas Wellens for stimulating discussions. \end{equation}nd{acknowledgments} \begin{equation}gin{thebibliography}{99} \bibitem{local} E.~Akkermans, G.~Montambaux, J.-L.~Pichard, and J.~Zinn-Justin (Eds.) {\it Mesoscopic Quantum Physics} (Elsevier, Amsterdam, 1994). \bibitem{sheng} P.~Sheng, {\it Introduction to Wave Scattering, Localization and Mesoscopic Phenomena} (Academic Press, San Diego, 1995). \bibitem{clasCBS} Y.~Kuga and A.~Ishimaru, J. Opt. Soc. Am. A {\bf 1}, 831 (1984); M.~P.~van Albada and A.~Lagendijk, Phys.\ Rev.\ Lett. {\bf 55}, 2692 (1985); P.~E.~Wolf and G.~Maret, {\it ibid.} 2996. \bibitem{labeyrie99} G.~Labeyrie, F.~de Tomasi, J.-C. Bernard, C.~A.~M\"uller, C.~Miniatura and R.~Kaiser, Phys. Rev. Lett. {\bf 83}, 5266 (1999). \bibitem{katalunga03} P.~Katalunga, C.~I.~Sukenik, S.~Balik, M.~D.~Havey, D.~V.~Kupriyanov, and I.~M.~Sokolov, Phys.\ Rev. A {\bf 68}, 033816 (2003). \bibitem{chaneliere03} T.~Chaneli\`ere, D.~Wilkowski, Y.~Bidel, R.~Kaiser, and C.~Miniatura, Phys.\ Rev. E {\bf 70}, 036602 (2004). \bibitem{mueller02} C.~A.~M\"uller and C.~Miniatura, J.Phys. A {\bf 35}, 10163 (2002). \bibitem{gora} Th.~M.~Nieuwenhuizen, A.~L.~Burin, Yu.~Kagan, and G.~V.~Shlyapnikov, Phys. Lett. {\bf 184A}, 360 (1994). \bibitem{cao} H. Cao, Y. Ling, J. Y. Xu, C. Q. Cao, and Prem Kumar, Phys. \ Rev.\ Lett. {\bf 86}, 4524 (2001). \bibitem{wellens} T.~Wellens, B.~Gremaud, D.~Delande, and C.~Miniatura, Phys. Rev. A {\bf 70}, 023817 (2004). \bibitem{mollow69} B.~R.~Mollow, Phys.\ Rev. {\bf 188}, 1969 (1969). \bibitem{note} At typical temperatures of the cloud of about a few mK \cite{labeyrie99}, the Doppler shift associated with the thermal motion is negligible, yet the kinetic energy of an atom is much larger than the recoil energy, thus allowing a classical description of the atomic position \cite{mueller01}. \bibitem{mueller01} C.~A.~M\"uller, T.~Jonckheere, C.~Miniatura, and D.~Delande, Phys.\ Rev. A {\bf 64}, 053804 (2001). 
\bibitem{agarwal} G.~S.~Agarwal, {\it Quantum Statistical Theories of Spontaneous Emission and their Relation to Other Approaches} (Springer-Verlag, Berlin, 1974).
\bibitem{cohen_tannoudji} C.~Cohen-Tannoudji, J.~Dupont-Roc, and G.~Grynberg, {\it Atom-Photon Interactions} (John Wiley \& Sons, New York, 1992).
\bibitem{lehmberg} R.~H.~Lehmberg, Phys.\ Rev.\ A {\bf 2}, 883 (1970); D.~F.~V.~James, {\it ibid.} {\bf 47}, 1336 (1993); J.~Guo and J.~Cooper, {\it ibid.} {\bf 51}, 3128 (1995).
\bibitem{kochan95} P.~Kochan, H.~J.~Carmichael, P.~R.~Morrow, and M.~G.~Raizen, Phys.\ Rev.\ Lett. {\bf 75}, 45 (1995).
\bibitem{martinez} A.~S.~Martinez and R.~Maynard, Phys.\ Rev.\ B {\bf 50}, 3714 (1994).
\end{thebibliography}
\end{document}
\begin{document}
\title{\bf REMARKS ON GENERAL FIBONACCI NUMBER}
\author{\it Masum Billal}
\maketitle
\begin{abstract}
We dedicate this paper to investigating the most generalized form of the {\it Fibonacci Sequence}, one of the most studied objects in the mathematical literature. In fact, we discuss an even more general form than the conventional one. Although the topic of the first section has been covered before, we present a different proof here. The author later found out that the auxiliary theorem used in the first section was proven, and even generalized further, by {\it F. T. Howard} \eqref{hwd}. Thanks to {\it Curtis Cooper} \eqref{coopr} for pointing out that this has already been studied and for providing references. For further studies of the literature, see \eqref{koshy} and \eqref{levesque}. At first, we prove that only the common general Fibonacci sequence can be a divisible sequence under some restrictions. In the latter part, we find some properties of the sequence, prove that there are infinitely many {\it alternating bisquable Fibonacci sequences} (defined later), and provide a lower bound on the number of divisors of Fibonacci numbers.
\end{abstract}
{\small \textbf{Keywords:} Fibonacci Numbers, Divisible Sequence, Bi-squares, Number Of Divisors} \indent {\small {\bf 2000 Mathematics Subject Classification:} 11A25 (primary), 11A05, 11A51 (secondary)}.
\hrulefill
\section{Introduction}
The {\it General Fibonacci Number} $G_n$ is defined as:
\[G_n=
\begin{cases}
u\mbox{ if }n=0\\
v\mbox{ if }n=1\\
aG_{n-1}+bG_{n-2}\mbox{ if }n\geq2
\end{cases}
\]
Let us denote such a sequence by $\{G\}=\{G_n:(u,v|a,b)\}_{n\in\mathbb{N}}$. Then the common general Fibonacci number $F_n$ is defined by $\{F\}=\{G_n:(0,1|a,b)\}_{n\in\mathbb{N}}$, i.e. $F_0=0,F_1=1,F_n=aF_{n-1}+bF_{n-2}$. Throughout the whole paper, we write $\{F\}$ and $\{G\}$ for the conventional and the most general Fibonacci sequences, respectively. We call $\{G\}$ a {\it co-prime most general Fibonacci sequence} ({\it C-quence} for brevity) if $\gcd(u,v)=\gcd(u,b)=\gcd(a,b)=\gcd(b,v)=1$, provided $b\neq0$.
\section{Divisible General Fibonacci Sequences}
A sequence $\{a_n\}_{n\in\mathbb{N}}$ is called a {\bf divisibility sequence} if $a_n|a_m$ whenever\footnote{Here $a|b$ denotes that $a$ divides $b$.} $n|m$. The main result of this section is:
\begin{theorem}
The only divisible {\it C-quences} are $\{G\}=\{G_n:(0,1|a,b)\}=\{F\}$ and $\{G\}=\{G_n:(u,uk|a,0)\}$.
\end{theorem}
The proof is based on the following auxiliary theorem:
\begin{theorem}
\begin{equation}G_{m+n+1}=G_{m+1}F_{n+1}+bG_mF_n\label{genfib}\end{equation}
\end{theorem}
Other than induction, we can of course check that this is true using the generalization of {\it Binet's Formula} for the Fibonacci sequence.
\begin{theorem}[Generalized {\it Binet's formula}]
\begin{eqnarray}
G_n & = & v\dfrac{\alpha^n-\beta^n}{\alpha-\beta}+u\dfrac{\alpha\beta^n-\alpha^n\beta}{\alpha-\beta}
\end{eqnarray}
where $\alpha=\dfrac{a+\sqrt{\delta}}{2},\beta=\dfrac{a-\sqrt{\delta}}{2},\delta=a^2+4b$ and $\delta\neq0$, i.e. $\alpha\neq\beta$, and
\begin{eqnarray}
G_n & = & \left(vn+u\alpha(1-n)\right)\alpha^{n-1}
\end{eqnarray}
if $\alpha=\beta$.\label{binf}
\end{theorem}
\begin{proof}
Assume that $G_n=\lambda^n$ for some suitable $\lambda$.
Then we have $\lambda^n=a\lambda^{n-1}+b\lambda^{n-2}$, i.e. $\lambda^2=a\lambda+b$, or\footnote{This can also be called the characteristic equation.}
\[\lambda^2-a\lambda-b=0\]
which has discriminant $\delta=a^2+4b\neq0$ and two roots
\[\alpha=\dfrac{a+\sqrt{\delta}}{2},\qquad\beta=\dfrac{a-\sqrt{\delta}}{2}\]
Then $G_n=l\alpha^{n}+m\beta^{n}$ for some constants $l,m$ if $\delta\neq0$. We can solve for $l,m$ by setting $n=0$ and $n=1$: if $n=0$, then $l+m=u$, and if $n=1$, then $l\alpha+m\beta=v$. In the case $\delta=0$, we can assume $G_n=(l+mn)\alpha^n$, and then we find the second formula to be true by setting $n=0,1$.
\end{proof}
\begin{remark}
We do not need the exact values of $l,m$ to see that the theorem holds.
\end{remark}
Here is a combinatorial proof of the theorem, based on a generalized tiling idea.
\begin{proof}[\sc Proof of the auxiliary theorem]
Consider the following tiling problem. There is an $(n+2)\times 1$ rectangle, which consists of $(n+2)$ squares of size $1\times1$. The first square is the starting square $S$. Then follow the squares $0,1,2,...,n$, totaling $n+1$ further squares. The square $S$ together with square $0$ can be painted with $u$ colors, and the square $S$ together with square $1$ can be painted with $v$ colors. The rectangle is to be filled with tiles of two types: $1\times1$ ({\bf type $1$}) and $2\times1$ ({\bf type $2$}). A type $1$ tile can assume $a$ colors and a type $2$ tile can assume $b$ colors. We can see that the number of different tilings is $G_n$, since it satisfies $G_n=aG_{n-1}+bG_{n-2}$. And if we consider the tiling of the rectangle from square $m$ to square $m+n$ for some integer $m\geq0$, then the number of colorings is $F_{n+1}$ (since there is no starting square now). Now, consider the case where we want to tile up to the $(m+n+1)$-th square starting from square $S$. There are two cases:
\begin{enumerate}
\item Case $1$: We tile through the $(m+1)$-th square, which can be done in $G_{m+1}$ ways. Then we have to tile squares $m+1$ to $m+n+1$, which can be done in $F_{n+1}$ ways.
\item Case $2$: We want to bypass the $(m+1)$-th square. So we tile up to the $m$-th square, which can be done in $G_m$ ways. Then we use a $2\times1$ tile, which can take $b$ colors, and we reach square $m+2$. Then we can tile from square $m+2$ to $m+n+1$ in $F_n$ ways.
\end{enumerate}
Combining the results of the two cases, we get that the total number of colorings is the sum of $G_{m+1}F_{n+1}$ (first case) and $bG_mF_n$ (second case). On the other hand, we could just count all colorings directly, in $G_{m+n+1}$ ways. Thus, $\boxed{G_{m+n+1}=G_{m+1}F_{n+1}+bG_mF_n}$.
\end{proof}
We will prove some lemmas before proving the main theorem.
\begin{lemma}
If $\{G\}=\{G_n:(u,v|a,b)\}$ is a {\it C-quence}, then $\gcd(b,G_n)=1$ for $n\geq0$.\label{bcp}
\end{lemma}
\begin{proof}
The Euclidean algorithm, combined with induction, proves this easily. The base cases $n=0$ and $n=1$ follow straight from the definition. Now assume that $\gcd(G_n,b)=1$. We find that
\[\gcd(G_{n+1},b)=\gcd(aG_n+bG_{n-1},b)=\gcd(b,aG_n)=\gcd(b,G_n)=1,\]
where the last-but-one step uses $\gcd(a,b)=1$. This completes the inductive step.
\end{proof}
\begin{lemma}
If $\{G\}=\{G_n:(u,v|a,b)\}$ is a {\it C-quence}, then $\gcd(G_{n+1},G_n)=1$ for $n\geq0$.\label{fibcp}
\end{lemma}
\begin{proof}
The base case $n=0$ is again trivial from the definition. Assume that $\gcd(G_{n+1},G_n)=1$.
To complete the inductive step we compute
\begin{eqnarray*}
\gcd(G_{n+2},G_{n+1}) & = & \gcd(aG_{n+1}+bG_n,G_{n+1})\\
& = & \gcd(bG_n,G_{n+1})\\
& = & \gcd(G_n,G_{n+1})\mbox{ using lemma \eqref{bcp}}\\
& = & 1
\end{eqnarray*}
\end{proof}
\begin{lemma}
$\{F\}$ is a divisible sequence, i.e. $F_n|F_{nk}$ for all $n,k\geq0$.\label{fibdiv}
\end{lemma}
\begin{proof}
Setting $\{G\}=\{F\}$ and $n+1=mq$ in equation \eqref{genfib},
\begin{equation}
F_{m(q+1)}=F_{m+1}F_{mq}+bF_mF_{mq-1}\label{fibd}
\end{equation}
Now we can easily induct on $q$. The case $q=1$ is clear. If we assume $F_m|F_{mq}$, then from equation \eqref{fibd},
\[F_m|F_{m+1}F_{mq}+bF_mF_{mq-1}=F_{m(q+1)}\]
Thus, the induction step is complete.
\end{proof}
\begin{lemma}
$\gcd(F_m,F_n)=F_{\gcd(m,n)}$.\label{fibgcd}
\end{lemma}
\begin{proof}
We already know that
\[F_{m+n+1}=F_{m+1}F_{n+1}+bF_mF_n\]
Set $(n,m)\rightarrow(mq-1,r)$:
\[F_{mq+r}=F_{r+1}F_{mq}+bF_rF_{mq-1}\]
Therefore, using the Euclidean algorithm, if $n=mq+r$,
\begin{eqnarray*}
\gcd(F_n,F_m) & = & \gcd(F_{r+1}F_{mq}+bF_rF_{mq-1},F_m)\\
& = & \gcd(F_m,bF_rF_{mq-1})\mbox{ since }F_m|F_{mq}\\
& = & \gcd(F_m,F_r)\mbox{ since }\gcd(F_m,b)=\gcd(F_m,F_{mq-1})=1
\end{eqnarray*}
Repeating this, we reach $\gcd(m,n)$ in the index. Hence, proven.
\end{proof}
Now we prove the key lemma for the theorem.
\begin{lemma}
$\{G\}$ is a divisible {\it C-quence} with $b\neq0$ if and only if $G_m|F_m$ for all $m$.\label{divc}
\end{lemma}
\begin{proof}
Set $n+1=mq$ in equation \eqref{genfib}. We get
\begin{equation}
G_{m(q+1)}=G_{m+1}F_{mq}+bG_mF_{mq-1}\label{diveqn}
\end{equation}
First, we prove the `only if' part. If $\{G\}$ is a divisible sequence, then $G_m|G_{mq}$ for all $q\geq1$. Therefore, $G_m$ divides $G_{m(q+1)}-bG_mF_{mq-1}=G_{m+1}F_{mq}$. But since $\gcd(G_{m+1},G_m)=1$ from lemma \eqref{fibcp}, we infer $G_m|F_{mq}$ for all $q\geq1$. Setting $q=1$, we get $G_m|F_m$. For the `if' part, let us assume $G_m|F_m$. Since $F_m|F_{mq}$, $G_m|F_{mq}$ as well. Then from \eqref{diveqn} we have
\[G_m|G_{m+1}F_{mq}+bG_mF_{mq-1}=G_{m(q+1)}\]
Induction on $q$ now shows the claim.
\end{proof}
\begin{lemma}
If $\{G\}$ is a divisible {\it C-quence} with $b\neq0$, then $\gcd(G_m,F_{mq-1})=1$.\label{ccop}
\end{lemma}
\begin{proof}
Let $d=\gcd(G_m,F_{mq-1})$. Naturally $d|G_m|F_m$ and $d|F_{mq-1}$. Then $d|\gcd(F_m,F_{mq-1})=F_{\gcd(m,mq-1)}=F_1=1$ from lemma \eqref{fibgcd}, forcing $d=1$.
\end{proof}
\begin{proof}[\bf Proof Of The Main Theorem]
First we consider the case $b=0$. It is easy to see that $\{G\}$ is then a divisible sequence if $u|v$, i.e. $v=uk$. So we can safely assume $b\neq0$. Since $G_m|F_m$, we get that $G_1|F_1=1$, implying $G_1=v=1$. It remains to prove $u=0$. Set $(m,n)\rightarrow(0,n-1)$ in equation \eqref{genfib}. We find that
\[G_n=vF_n+buF_{n-1}\]
Now, if $\{G\}$ is divisible, then by lemma \eqref{divc}, $F_n=G_nk$ for some integer $k$. Therefore, $G_n=G_nk+buF_{n-1}$, or
\[G_n(1-k)=ubF_{n-1}\]
This shows that $G_n|ubF_{n-1}$. From lemma \eqref{ccop}, $\gcd(G_n,F_{n-1})=1$ and therefore $G_n|bu$ for all $n$. If neither $b$ nor $u$ is zero, this cannot hold for all $n$. Thus $bu=0$, and then $u=0$ since $b\neq0$. Therefore $\{G_n:(0,1|a,b)\}$ is the only divisible C-quence with $b\neq0$.
\end{proof}
\section{Bisquare General Fibonacci Numbers}
For $G_0=u,G_1=v$, let us say that $\{G\}_{n\in\mathbb{N}}$ {\bf starts with} the pair $(u,v)$. We call the sum of two squares a {\bf bisquare}. An integer sequence $(a_n)_{n\in\mathbb{N}}$ is called an {\bf alternating bisquable sequence} if $a_n$ is a bisquare for all odd $n$ or for all even $n$.
If this happens for all odd $n$, then let us call it an {\bf oddly bisquable sequence}, otherwise an {\bf evenly bisquable sequence}. Denote the number of divisors of $n$ by $\tau(n)$. To establish basic identities we will need the auxiliary matrices:
\[\mathbf{G}_n =\begin{pmatrix} G_{n+2} & G_{n+1}\\ G_{n+1} & G_n \end{pmatrix},\qquad\mathbf{M}=\begin{pmatrix} a & b\\1&0 \end{pmatrix} \]
From \eqref{r1}, we can see Euler's proof, which shows that the following claim is true:
\begin{theorem}[Euler]
If $n$ is a bisquare, then so is every divisor of $n$. \label{bisqr}
\end{theorem}
\begin{theorem}
\[G_nG_{n+2}-G_{n+1}^2=(-b)^n(u^2+uv-v^2)\]
\end{theorem}
\begin{proof}
Notice that $\mathbf G_n=\mathbf{MG}_{n-1}$, which gives
\begin{equation}
\mathbf G_n =\mathbf{M}^n\mathbf{G}_0 \label{eq1}
\end{equation}
We already know that, for two multiplicable matrices $A,B$, $\det(AB)=\det(A)\det(B)$. As a corollary, we also have $\det(A^m)=\det(A)^m$. Applying this to equation \eqref{eq1}, we get:
\begin{eqnarray*}
\det(\mathbf G_n) & = & \det(\mathbf{M}^n\mathbf{G}_0)\\
& = & \det(\mathbf{M})^n\det(\mathbf{G}_0)\\
G_nG_{n+2}-G_{n+1}^2& = & (-b)^n(G_0G_2-G_1^2)\\
G_nG_{n+2}-G_{n+1}^2& = & (-b)^n(u^2+uv-v^2)
\end{eqnarray*}
\end{proof}
Before going into the proof of the main theorem, we find all integer solutions of the equation $5x^2+4y^2=z^2$. We will see later how this comes into play.
\begin{theorem}
All solutions of the equation $5x^2+4y^2=z^2$ are given by:
\[ (x,y,z)=
\begin{cases}
\left(klm,k\left(\dfrac{5l^2-m^2}{4}\right),k\left(\dfrac{5l^2+m^2}{2}\right)\right)\\
\left(klm,k\left(\dfrac{l^2-5m^2}{4}\right),k\left(\dfrac{l^2+5m^2}{2}\right)\right)\\
\left(4lmk,k(5l^2-m^2),2k(5l^2+m^2)\right)\\
\left(4lmk,k(l^2-5m^2),2k(l^2+5m^2)\right)\\
\end{cases}
\]
where $l,m\equiv1\pmod2,(l,m)=1$ and $k\in\mathbb{Z}$. \label{eq2}
\end{theorem}
Write $\gcd(a,b)$ as $(a,b)$.
\begin{lemma}
If $a$ and $b$ are both odd, then $(a+b,a-b)=2(a,b)$; if they are of opposite parity, then $(a+b,a-b)=(a,b)$. \label{cop}
\end{lemma}
\begin{proof}
This follows easily from the {\it Euclidean Algorithm}.
\end{proof}
\begin{lemma}
If $b^2=ac$, then $a=gl^2,c=gm^2,b=glm$ for some $g,l,m$ with $(l,m)=1$. \label{sqr}
\end{lemma}
\begin{proof}
Let $g=(a,c)$, $a=gp,c=gq$ with $(q,p)=1$. Then $b^2=g^2pq$. Therefore $g|b$, so $b=gr$ for some $r$, and $pq=r^2$. Since $(p,q)=1$, both $p$ and $q$ must be squares. Writing $p=l^2,q=m^2$ gives the claimed form.
\end{proof}
Now we get back to the equation. We will concentrate only on primitive solutions, i.e. those with $(x,y,z)=1$, and use the idea of {\it infinite descent}.
\begin{proof}
We have two cases based on the parity of $z$.
\paragraph*{\bf Case 1:} $z$ is even, so $z=2z_1$ for some $z_1\in\mathbb{Z}$. Then $x$ must be even as well, which gives $x=2x_1$. The equation reduces to $5x_1^2+y^2=z_1^2$; consider the smallest solution. Assume $(y,z_1)=g$, so $z_1=gz_2,y=gy_1$ and $5x_1^2=g^2(z_2^2-y_1^2)$. If $5|g$, then $g=5h$ and thus $x_1^2=5h^2(z_2^2-y_1^2)$, inferring $5|x_1$, i.e. $x_1=5x_2$. But then $5x_2^2=h^2(z_2^2-y_1^2)$, which yields a smaller solution $\left(x_2,\dfrac{y}{5},\dfrac{z_1}{5}\right)$. Therefore, without loss of generality, we can take $(z_1,y)=1$. Now, $5x_1^2=(z_1+y)(z_1-y)$ and $y,z_1$ are both odd. Write $z_1+y=2A,z_1-y=2B$. From lemma \eqref{cop}, we can say $(A,B)=(z_1,y)=1$, which gives us
\[5x_1^2=4AB\]
Since $(A,B)=1$, $5$ must divide either $A$ or $B$, but not both, and $2|x_1$, i.e. $x_1=2x_3$. In the first case, $A=5C$ and $x_3^2=BC$.
Using lemma \eqref{sqr}, $B=m^2,C=l^2,x_3=lm$ for some $(l,m)=1$. Thus, $\boxed{z_1=5l^2+m^2,y=5l^2-m^2}$. In the other case, similarly, $\boxed{z_1=l^2+5m^2,y=l^2-5m^2}$.
\paragraph*{\bf Case 2:} This time, $z$ is odd, so $x$ is odd too. Again, we may take $(z,2y)=(z,y)=1$. In a similar fashion to the previous case, $5x^2=(z+2y)(z-2y)$. But $(z+2y,z-2y)=(z,y)=1$, and we write $z+2y=A,z-2y=B$ with $A,B\equiv1\pmod2,(A,B)=1$. Therefore $AB=5x^2$, and the argument runs exactly as before; only the solution set changes: $\boxed{z=\dfrac{5l^2+m^2}{2},y=\dfrac{5l^2-m^2}{4}}$ or $\boxed{z=\dfrac{l^2+5m^2}{2},y=\dfrac{l^2-5m^2}{4}}$.
\end{proof}
\begin{theorem}
$u^2+uv-v^2$ is a square for infinitely many pairs $(u,v)$.
\end{theorem}
\begin{proof}
Let $u^2+uv-v^2=t^2$. Then, by the quadratic formula,
\[u=\dfrac{-v\pm\sqrt{5v^2+4t^2}}{2}\]
Since $v$ and $\sqrt{5v^2+4t^2}$ have the same parity whenever the latter is an integer, it is enough to prove that there are infinitely many pairs $(v,t)$ such that $5v^2+4t^2$ is a square. From theorem \eqref{eq2} we already know that this is true: just take any family of solutions.
\end{proof}
Now we can prove the theorem of concern.
\begin{theorem}
There are infinitely many pairs $(u,v)$ such that the sequence $\{G_n\}_{n\in\mathbb{N}}$ starting with $(u,v)$ is an alternating bisquable sequence. More precisely, this is true for both {\it evenly and oddly bisquable sequences}: the evenly bisquable case holds for all $a,b$, while the oddly bisquable case can be proven easily for $b=-1$.
\end{theorem}
\begin{proof}
Recall the determinant identity obtained from equation \eqref{eq1}. If we are looking for an evenly bisquable sequence, then we see that for all even $n=2k$:
\[G_{2k}G_{2k+2}=G_{2k+1}^2+(-b)^{2k}(u^2+uv-v^2)\]
If $u^2+uv-v^2=t^2$, then we get $G_{2k}G_{2k+2}=r^2+s^2$ for some $r,s$ and for all $k$. Therefore, from theorem \eqref{bisqr}, we can say that both $G_{2k}$ and $G_{2k+2}$ are bisquares for all $k\in\mathbb{N}$. This proves the case of an evenly bisquable sequence. Now let $n=2k-1$, i.e. $n$ odd. With $b=-1$,
\[G_{2k-1}G_{2k+1}=G_{2k}^2+u^2+uv-v^2\]
And once again, we are back to the same situation as before.
\end{proof}
\begin{remark}
The case of the original Fibonacci sequence is $u=0,v=1,b=1$. Then
\[G_nG_{n+2}=G_{n+1}^2+(-1)^{n+1}\]
and the proof that it is a bisquable sequence follows immediately for all odd $n$.
\end{remark}
\section{Number Of Divisors Of General Fibonacci Number}
In this section, we restrict our attention to finding a lower bound on the number of divisors of $F_n$, $\tau(F_n)$. To do that, we will assume $u=0,v=1$, which turns the most general Fibonacci sequence into the commonly known general Fibonacci sequence. Additionally, we assume $a,b>0$, so that the sequence is strictly increasing from $n=2$ on. Now we shall concentrate on $\tau(F_n)$. We will need a few more theorems. The following famous one is due to {\it Carmichael} \eqref{car}; later we will prove the special case needed for our estimate in an easier way, using other known theorems. We call a prime $p$ a {\it primitive divisor} of $F_n$ if $p|F_n$ but $p\nmid F_m$ for $0\leq m<n$. Also, we let $\Omega(n)$ denote the total number of prime factors of $n$ counted with multiplicity, i.e. if $n=\prod\limits_{i=1}^kp_i^{e_i}$,
\[\Omega(n)=\sum_{i=1}^ke_i\]
\begin{theorem}
If $n\neq1,2,6$, then $F_n$ has at least one primitive divisor, except when $n=12,a=2,b=1$ or $n=12,a=1,b=1$.
\end{theorem}
From this theorem, we can provide the following estimate:
\begin{theorem}
If $p$ is an odd prime number, then $\tau\left(F_{p^e}\right)\geq2^e$.
\end{theorem}
\begin{proof}
Let $p_1$ be a prime factor of $F_p$.
And for $i>1$, $F_{p^i}$ has a primitive prime factor, that is, one that does not divide $F_{p^{i-1}}$. Therefore, if $p_i$ is a primitive factor of $F_{p^i}$ (at least one such $p_i$ exists), then for some positive integer $K$,
\begin{eqnarray}
F_{p^e}&=&K\prod_{i=1}^ep_i\label{prp}
\end{eqnarray}
This holds because, for every $i$, $F_{p^i}|F_{p^{i+1}}$. We can see that the minimum value of $\tau(F_{p^e})$ is attained when $F_p=p_1$ and $F_{p^{i+1}}=F_{p^i}p_{i+1}$ (though we do not study here whether that is possible). In that case we have
\[\tau(F_{p^e})\geq\prod_{i=1}^e(1+1)=2^e\]
\end{proof}
For $p=2$, since $F_2$ may equal $1$, we get:
\begin{theorem}
If $e>1$, then $\tau(F_{2^e})\geq2^{e-1}$.
\end{theorem}
Thus, we get an estimate for any positive integer $n>1$.
\begin{theorem}
\[ \tau(F_n) \geq
\begin{cases}
2^{\Omega(n)}\mbox{ if } n\equiv1\pmod2\\
2^{\Omega(n)-1}\mbox{ otherwise}
\end{cases}
\]\label{es2}
\end{theorem}
\begin{proof}
First we see that, for $n=\prod\limits_{p^e||n}p^e$,
\[\tau(F_n)\geq\prod_{p^e||n}\tau(F_{p^e})\]
because, by lemma \eqref{fibgcd}, the numbers $F_{p^e}$ are pairwise coprime. For odd $n=p_1^{e_1}\cdots p_k^{e_k}$,
\begin{eqnarray*}
\tau(F_n)&\geq&\prod_{i=1}^k\tau(F_{p_i^{e_i}})\\
&\geq&\prod_{i=1}^k2^{e_i}\\
& = & 2^{\sum_{i=1}^ke_i}\\
& = & 2^{\Omega(n)}
\end{eqnarray*}
For even $n=2^ap_1^{e_1}\cdots p_k^{e_k}$ with $a\geq1$,
\begin{eqnarray*}
\tau(F_n)&\geq&\tau(F_{2^a})\prod_{i=1}^k\tau(F_{p_i^{e_i}})\\
&\geq&2^{a-1}\prod_{i=1}^k2^{e_i}\\
& = & 2^{a-1+\sum_{i=1}^ke_i}\\
& = & 2^{\Omega(n)-1}
\end{eqnarray*}
\end{proof}
The following theorem improves the lower bound on $\tau(F_n)$.
\begin{theorem}
For all $n$,
\[ \tau(F_n)\geq
\begin{cases}
\tau(n)\mbox{ if }n\equiv1\pmod2\\
\tau(n)-1\mbox{ otherwise}
\end{cases}
\]\label{es1}
\end{theorem}
\begin{proof}
First note that, for odd $n$, every divisor $d$ of $n$ is odd. Write down the divisors of $n$ as $d_1,d_2,...,d_{\tau(n)}$ in increasing order. Then from \eqref{fibdiv}, we have $F_{d_i}|F_n$ for all $i$. Since $F_{d_{i+1}}>F_{d_i}$ whenever $d_{i+1}>d_i$, every distinct divisor $d$ of $n$ yields a distinct divisor $F_d$ of $F_n$. The proof for even $n$ is analogous; the only difference is that $F_2$ can equal $1$, but from $d_2$ onward the sequence $F_{d_i}$ is strictly increasing. Hence, proven.
\end{proof}
\begin{remark}
The lower bound we estimated in theorem \eqref{es1} is a lot better than the one in theorem \eqref{es2}, because $2^e<p^e$ for all odd primes $p$; hence $2^{\Omega(n)}\ll\prod_{i=1}^kp_i^{e_i}$ for sufficiently large primes $p_i$. Also, $F_2=1$ can hold only when $a=1$, so except for this case we can in general say
\[\tau(F_n)\geq\tau(n)\]
\end{remark}
\end{document}
\begin{document} \begin{frontmatter} \title{Numerically stable online estimation of variance in particle filters} \runtitle{Estimation of variance in particle filters} \begin{aug} \author{\fnms{Jimmy} \snm{Olsson}\thanksref{a,t1}\ead[label=e1]{[email protected]}} \and \author{\fnms{Randal} \snm{Douc}\thanksref{b}\ead[label=e2]{[email protected]}} \address[a]{Department of Mathematics \\ KTH Royal Institute of Technology \\ SE-100 44 Stockholm, Sweden \\ \printead{e1}} \address[b]{D\'epartement CITI \\ TELECOM SudParis \\ 9 rue Charles Fourier, 91000 EVRY \\ \printead{e2}} \thankstext{t1}{J.~Olsson is supported by the Swedish Research Council, Grant 2011-5577.} \runauthor{J. Olsson and R. Douc} \affiliation{KTH Royal Institute of Technology and Institut T\'el\'ecom/T\'el\'ecom SudParis} \end{aug} \begin{abstract} This paper discusses variance estimation in sequential Monte Carlo methods, alternatively termed particle filters. The variance estimator that we propose is a natural modification of that suggested by H.~P.~Chan and T.~L.~Lai [A general theory of particle filters in hidden Markov models and some applications. \emph{Ann. Statist.}, 41(6):2877--2904, 2013], which allows the variance to be estimated in a single run of the particle filter by tracing the genealogical history of the particles. However, due to particle lineage degeneracy, the estimator of the mentioned work becomes numerically unstable as the number of sequential particle updates increases. Thus, by tracing only a part of the particles' genealogy rather than the full one, our estimator gains long-term numerical stability at the cost of a bias. The scope of the genealogical tracing is regulated by a lag, and under mild, easily checked model assumptions, we prove that the bias tends to zero geometrically fast as the lag increases. As confirmed by our numerical results, this allows the bias to be tightly controlled even for moderate particle sample sizes. \end{abstract} \begin{keyword}[class=MSC] \kwd[Primary ]{62M09} \kwd[; secondary ]{62F12} \end{keyword} \begin{keyword} \kwd{Asymptotic variance} \kwd{Feynman-Kac models} \kwd{hidden Markov models} \kwd{particle filters} \kwd{sequential Monte Carlo methods} \kwd{state-space models} \kwd{variance estimation} \end{keyword} \end{frontmatter} \section{Introduction} \label{sec:introduction} Since the \emph{bootstrap particle filter} was introduced in \cite{gordon:salmond:smith:1993}, \emph{sequential Monte Carlo} (SMC) \emph{methods}, alternatively termed \emph{particle filters}, have been successfully applied within a wide range of applications, including computer vision, automatic control, signal processing, optimisation, robotics, econometrics, and finance; see, e.g., \cite{doucet:defreitas:gordon:2001,ristic:arulampalam:gordon:2004} for introductions to the topic. SMC methods approximate a given sequence of distributions by a sequence of possibly weighted empirical measures associated with a sample of \emph{particles} evolving recursively and randomly in time. Each iteration of the SMC algorithm comprises two operations: a \emph{mutation step}, which moves the particles randomly in the state space, and a \emph{selection step}, which duplicates/eliminates, through resampling, particles with high/low importance weights, respectively.
In parallel with algorithmic developments, the theoretical properties of SMC have been studied extensively during the last twenty years, and there are currently a number of available results describing the convergence, as the number of particles tends to infinity, of Monte Carlo estimates produced by the algorithm; see, e.g., the monographs \cite{delmoral:2004,delmoral:2013} and \cite[Chapter~9]{cappe:moulines:ryden:2005}. The first \emph{central limit theorem} (CLT) for SMC methods was established in \cite{delmoral:guionnet:1999}, and this result was later refined in the series of papers \cite{chopin:2004,kuensch:2005,douc:moulines:2008}. In the mentioned CLT, the asymptotic variance of the weak Gaussian limit is expressed through a recursive formula involving high-dimensional integrals over generally complicated integrands and is hence intractable in general. Due to this complexity, only a few recent works have treated the important---although challenging---topic of variance estimation in SMC algorithms. A breakthrough was made by H.~P. Chan and T.~L.~Lai, who proposed, in \cite{chan:lai:2013} and within the framework of general \emph{state-space models} (or, \emph{hidden Markov models}), an estimator, from now on referred to as the \emph{Chan \& Lai estimator} (CLE), that allows the sequence of asymptotic variances to be estimated on the basis of a \emph{single} realisation of the algorithm, without the need of additional simulations. Remarkably, the CLE can be shown to be consistent (see \cite[Theorem~2]{chan:lai:2013}), i.e., to converge in probability to the true asymptotic variance as the number of particles tends to infinity. At a given time step, the CLE estimates the asymptotic variance by tracing genealogically the time-zero ancestors of the particles at the time step in question. The variance estimators proposed recently in \cite{lee:whiteley:2016} are based on the same principle, and may be viewed as refinements of the CLE within a more general framework of \emph{Feynman-Kac models} and particle algorithms with time varying particle population sizes. Moreover, \cite{lee:whiteley:2016} provides an elegant, deepened (asymptotic as well as non-asymptotic) theoretical analysis of the technique, and these results are essential for the development of the present paper. Appealingly, the set of time-zero ancestors may be updated recursively in the SMC algorithm by adding just a single line to the code. This allows variance estimates to be computed online with essentially the same computational complexity and memory demands as the original algorithm. Nevertheless, since the SMC algorithm repeatedly performs selection, it is well established that all the particles in the sample will, eventually, share the \emph{same} time-zero ancestor (see, e.g., \cite{jacob:murray:rubenthaler:2015} for a theoretical analysis of this particle path degeneracy phenomenon). Unfortunately, this implies that the CLE eventually collapses to zero as time increases. Increasing the particle sample size or resampling the particles less frequently will postpone somewhat, but not avoid, this collapse. This makes the CLE impractical in the long run. Thus, although the SMC methodology has become a standard tool in statistics and engineering, a numerically stable estimator of the SMC variance has, surprisingly, hitherto been lacking, and the aim of the present paper is to---at least partially---fill this gap.
The natural solution that we propose in the present paper is to estimate the SMC asymptotic variance by tracing only a \emph{part} of the particles' genealogy rather than the full one (i.e., back to the time-zero ancestors). By tracing genealogically only the last generations, depleted ancestor sets are avoided as long as the particle sample size is at least moderately large. Still, this measure leads to a bias, whose size determines completely the success of the approach. Nevertheless, in \cite{douc:moulines:olsson:2014}, which studies the stochastic stability of the sequence of asymptotic variances within the framework of general hidden Markov models, or, viewed differently, randomly perturbed Feynman-Kac models (see \cite[Remark~1]{douc:moulines:olsson:2014}), it is established that the variance, which at time $n$ may be expressed in the form of a sum of $n + 1$ terms, is, in the case where the perturbations form a stationary sequence, uniformly stochastically bounded in time, or, \emph{tight}. Moreover, the analysis provided in \cite{douc:moulines:olsson:2014}, which is driven by mixing assumptions, indicates that the size of the $m^{\mathrm{th}}$ term in the variance at time $n$ decreases geometrically fast with the difference $n - m$. Consequently, we may expect the last terms of the sum to represent the major part of the variance, and as long as the number $\lambda$ of traced generations, the \emph{lag}, and the particle sample size are not too small, the bias should be negligible. This argument is confirmed by our main result, Theorem~\ref{thm:tightness:bias}, which at any time point $n$ provides an order $\rho^\lambda$ bound on the asymptotic (as the number of particles tends to infinity) bias, where $\rho \in (0, 1)$ is a mixing rate. Consequently, as long as the number of particles is large enough, the bias stays numerically stable in the long run and may, as it decreases geometrically fast with the lag $\lambda$, be controlled efficiently. Methodologically, the estimator that we propose has similarities with the \emph{fixed-lag smoothing} approach studied in \cite{kitagawa:sato:2001,olsson:cappe:douc:moulines:2006,olsson:strojby:2010}, and we here face the same bias-variance tradeoff, in the sense that a too greedy/generous lag design leads to high bias/variance, respectively. The developments of the present paper are cast into the framework of randomly perturbed Feynman-Kac models, and the theoretical analysis is driven by the assumptions of \cite{douc:moulines:2012,douc:moulines:olsson:2014} (going back to \cite{douc:fort:moulines:priouret:2009}), which are easily checked for many models used in practice. In particular, by replacing the now classical strong mixing assumption on the transition kernel of the underlying Markov chain---a standard assumption in the literature that typically requires the state space of the Markov chain to be a compact set---by a \emph{local Doeblin condition}, we are able to verify the assumptions for a wide class of models with possibly non-compact state space. As a numerical illustration, we apply our estimator in the context of SMC-based predictor flow approximation in general state-space models, including the widely used \emph{stochastic volatility} model proposed in \cite{hull:white:1987}. We are able to report an efficient, numerically stable performance of our variance estimator, with tight control of the bias at low variance. The paper is structured as follows. 
Section~\ref{sec:preliminaries} introduces some notation, defines the framework of perturbed Feynman-Kac models, and provides some background to SMC. In Section~\ref{sec:estimator}, focus is set on asymptotic variance estimation and, after a prefatory discussion of the CLE, we introduce the proposed fixed-lag variance estimator. We also describe how our estimator can be straightforwardly extended to so-called \emph{updated} Feynman-Kac distribution flows. Theoretical and numerical results are found in Section~\ref{sec:theoretical:results} and Section~\ref{sec:numerical:study}, respectively, and Appendix~\ref{sec:proofs} contains all proofs. Our theoretical analysis is divided into two parts: first, the identification of the limiting bias and, second, the construction of a tight upper bound on the same. The first part relies on the theoretical machinery developed in \cite{lee:whiteley:2016}, whose key elements are recalled briefly in the beginning of Section~\ref{sec:proof:consistency:fixed:lag}. Finally, Section~\ref{sec:conclusion} concludes the paper. \section{Preliminaries} \label{sec:preliminaries} \subsection{Some notation and conventions} We assume that all random variables are defined on a common probability space $(\Omega, \mathcal{F}, prob)$. The set of natural numbers is denoted by $\mathbb{N} = \{0, 1, 2, \ldots\}$, and we let $\mathbb{N}pos = \mathbb{N} \setminus \{0\}$ be the positive ones. For all $(m, n) \in \mathbb{N}^2$, we set $\intvect{m}{n} \vcentcolon= \{m, m + 1, \ldots, n\}$. The set of nonnegative real numbers is denoted by $\mathbb{R}_+$. For any quantities $\{ a_\ell \}_{\ell = 1}^m$, vectors are denoted by $\chunk{a}{\ell}{m} \vcentcolon= (a_\ell, \ldots, a_m)$. We introduce some measure and kernel notation. Given some state space $(\mathbb{E}sp, \mathbb{E}fd)$, we denote by $\bmf{\mathbb{E}fd}$ and $probmeas{\mathbb{E}fd}$ the spaces of bounded measurable functions and probability measures on $(\mathbb{E}sp, \mathbb{E}fd)$, respectively. For any functions $(h, h') \in \bmf{\mathbb{E}fd}^2$ we define the product function $h \varotimes h' : \mathbb{E}sp^2 \ni (x, x') \mapsto h(x) h'(x')$. The identity function $x \mapsto x$ is denoted by $\operatorname{id}$. Let $\mu$ be a measure on $(\mathbb{E}sp, \mathbb{E}fd)$; then for any $\mu$-integrable function $h$, we denote by $$ \mu h \vcentcolon= \int h(x) \, \mu(\mathrm{d} x) $$ the Lebesgue integral of $h$ w.r.t. $\mu$. In addition, let $(\mathbb{E}sp', \mathbb{E}fd')$ be some other measurable space and $\kernel{K}$ some possibly unnormalised transition kernel $\kernel{K} : \mathbb{E}sp \times \mathbb{E}fd' \rightarrow \mathbb{R}_+$. The kernel $\kernel{K}$ induces two integral operators, one acting on functions and the other on measures. More specifically, given a measure $\nu$ on $(\mathbb{E}sp, \mathbb{E}fd)$ and a measurable function $h$ on $(\mathbb{E}sp', \mathbb{E}fd')$, we define the measure $$ \nu \kernel{K} : \mathbb{E}fd' \ni A \mapsto \int \kernel{K}(x, A) \, \nu(\mathrm{d} x) $$ and the function $$ \kernel{K} h : \mathbb{E}sp \ni x \mapsto \int h(y) \, \kernel{K}(x, \mathrm{d} y), $$ whenever these quantities are well defined. \subsection{Randomly perturbed Feynman-Kac models} \label{sec:Feynman:Kac:models} Let $(\mathsf{X}, \mathcal{X})$ and $(\mathsf{Z}, \mathcal{Z})$ be a pair of general measurable spaces.
Moreover, let $\kernel{M}$ and $\chi$ be a Markov transition kernel and a probability measure on $(\mathsf{X}, \mathcal{X})$, respectively, and $\{ pot[z] : z \in \mathsf{Z} \}$ a family of real-valued, positive, and measurable \emph{potential functions} on $(\mathsf{X}, \mathcal{X})$. For all vectors $\chunk{z}{k}{m} \in \mathsf{Z}^{m - k + 1}$, we define unnormalised transition kernels $$ \uk[\chunk{z}{k}{m}] : \mathsf{X} \times \mathcal{X} \ni (x_k, A) \mapsto \idotsint \mathbbm{1}_A(x_{m + 1}) prod_{\ell = k}^m pot[z_\ell](x_\ell) \, \kernel{M}(x_\ell, \mathrm{d} x_{\ell + 1}), $$ with the convention $\uk[\chunk{z}{k}{m}](x, A) = \delta_x(A)$ if $m < k$ (where $\delta_x$ denotes the Dirac mass located at $x$), and probability measures \begin{equation} \label{eq:def:pred} pred[\chunk{z}{k}{m}] : \mathcal{X} \ni A \mapsto \frac{\chi \uk[\chunk{z}{k}{m}] \mathbbm{1}_A} {\chi \uk[\chunk{z}{k}{m}] \mathbbm{1}_\mathsf{X}}. \end{equation} Using these definitions we may, given a sequence $\{ z_n \}_{n \in \mathbb{N}}$ of \emph{perturbations} in $\mathsf{Z}$, express the \emph{Feynman-Kac distribution flow} $\{ pred[\chunk{z}{0}{n}] \}_{n \in \mathbb{N}}$ recursively as \begin{equation} \label{eq:pred:rec} pred[\chunk{z}{0}{n}] = \frac{pred[\chunk{z}{0}{n - 1}] \uk[z_n]}{pred[\chunk{z}{0}{n - 1}] \uk[z_n] \mathbbm{1}_\mathsf{X}}, \quad n \in \mathbb{N} \end{equation} (where, by the previous convention, $pred[\chunk{z}{0}{- 1}] = \chi$). Even though the previous model may be applied in a non-temporal context, we will often refer to the index $n$ as ``time''. \begin{example}[partially dominated state-space models] \label{example:state:space:model} Let $(\mathsf{X}, \mathcal{X})$ be a measurable space, $\kernel{M} : \mathsf{X} \times \mathcal{X} \rightarrow [0, 1]$ a Markov transition kernel, and $\chi$ a probability measure on $(\mathsf{X}, \mathcal{X})$ (the latter being referred to as the \emph{initial distribution}). In addition, let $(\mathsf{Y}, \mathcal{Y})$ be another measurable space and $g : \mathsf{X} \times \mathsf{Y} \rightarrow \mathbb{R}_+$ a Markov transition density with respect to some reference measure $\nu$ on $(\mathsf{Y}, \mathcal{Y})$. By a general state-space model we mean the canonical version of the bivariate Markov chain $\{ (X_n, Y_n) \}_{n \in \mathbb{N}}$ having transition kernel $$ \mathsf{X} \times \mathsf{Y} \times \mathcal{X} \varotimes \mathcal{Y} \ni ((x, y), A) \mapsto \iint \mathbbm{1}_A(x', y') g(x', y') \, \nu(\mathrm{d} y') \, \kernel{M}(x, \mathrm{d} x') $$ and initial distribution $$ \mathcal{X} \varotimes \mathcal{Y} \ni A \mapsto \iint \mathbbm{1}_A(x, y) g(x, y) \, \nu(\mathrm{d} y) \, \chi(\mathrm{d} x). $$ Here the marginal process $\{ X_n \}_{n \in \mathbb{N}}$, referred to as the \emph{state process}, is only partially observed through the \emph{observation process} $\{ Y_n \}_{n \in \mathbb{N}}$. For the model $\{ (X_n, Y_n) \}_{n \in \mathbb{N}}$ defined in this way, \begin{itemize} \item[(i)] the state process is a Markov chain with transition kernel $\kernel{M}$ and initial distribution $\chi$, \item[(ii)] the observations are, given the states, conditionally independent and such that the marginal conditional distribution of each $Y_n$ depends on $X_n$ only and has density $g(X_n, \cdot)$ \end{itemize} (we refer to \cite[Section~2.2]{cappe:moulines:ryden:2005} for details).
When operating on a well-specified state-space model, a key ingredient is typically the computation of the flow of \emph{predictor distributions}, where the predictor $pred[\chunk{y}{0}{n - 1}]$ at time $n \in \mathbb{N}$ is defined as the conditional distribution of the state $X_n$ given the record $\chunk{y}{0}{n - 1} \in \mathsf{Y}^n$ of realised historical observations up to time $n - 1$. Using Bayes' formula (see, e.g., \cite[Section~3.2.2]{cappe:moulines:ryden:2005} for details), it is straightforwardly shown that the predictor flow satisfies a perturbed Feynman-Kac recursion \eqref{eq:pred:rec} with $(\mathsf{X}, \mathcal{X})$, $\kernel{M}$, and $\chi$ given above, the observations $\{ Y_n \}_{n \in \mathbb{N}}$ playing the role of perturbations (i.e., $\mathsf{Z} \gets \mathsf{Y}$ and $\mathcal{Z} \gets \mathcal{Y}$), and the local likelihood functions $\{ g(\cdot, y) : y \in \mathsf{Y} \}$ playing the role of potential functions $\{ pot[y] : y \in \mathsf{Y}\}$. We will return to this framework in Section~\ref{sec:numerical:study}. \end{example} \subsection{Sequential Monte Carlo methods} SMC methods approximate online the Feynman-Kac flow generated by \eqref{eq:pred:rec} and a given sequence $\{ z_n \}_{n \in \mathbb{N}}$ of perturbations by propagating recursively a random sample $\{ \epart{n}{i} \}_{i = 1}^N$ of $\mathsf{X}$-valued \emph{particles}. More specifically, given a particle sample $\{ \epart{n}{i} \}_{i = 1}^N$ \emph{targeting} $pred[\chunk{z}{0}{n - 1}]$ in the sense that for all $h \in \bmf{\mathcal{X}}$, $predpart[\chunk{z}{0}{n - 1}] h \backsimeq pred[\chunk{z}{0}{n - 1}] h$ as $N$ tends to infinity, where $$ predpart[\chunk{z}{0}{n - 1}]: \mathcal{X} \ni A \mapsto \frac{1}{N} \sum_{i = 1}^N \mathbbm{1}_A(\epart{n}{i}) $$ denotes the empirical measure associated with the particles, an updated particle sample $\{ \epart{n + 1}{i} \}_{i = 1}^N$ approximating $pred[\chunk{z}{0}{n}]$ is, as the perturbation $z_n$ becomes accessible, formed by Algorithm~\ref{alg:SMC}. \begin{algorithm}[H] \label{alg:SMC} \KwData{$\{ \epart{n}{i} \}_{i = 1}^N$, $z_n$} \KwResult{$\{ \epart{n + 1}{i} \}_{i = 1}^N$} set $\wgtsum{n} \gets 0$\; \For{$i = 1 \to N$}{ set $\wgt{n}{i} \gets pot[z_n](\epart{n}{i})$\; set $\wgtsum{n} \gets \wgtsum{n} + \wgt{n}{i}$\; } \For {$i = 1 \to N$}{ draw $\ind{n + 1}{i} \sim \mathsf{Cat}(\{ \wgt{n}{\ell} / \wgtsum{n} \}_{\ell = 1}^N)$\; draw $\epart{n + 1}{i} \sim \kernel{M}(\epart{n}{\ind{n + 1}{i}}, \cdot)$\; } \caption{SMC particle update} \end{algorithm} (In the algorithm above, $\mathsf{Cat}( \{ \wgt{n}{\ell} / \wgtsum{n} \}_{\ell = 1}^N)$ denotes the categorical distribution induced by the normalised particle weights $\{ \wgt{n}{\ell} / \wgtsum{n} \}_{\ell = 1}^N$.) Algorithm~\ref{alg:SMC} is initialised at time $n = 0$ by drawing $\{ \epart{0}{i} \}_{i = 1}^N \sim \chi^{\varotimes N}$. For all $n \in \mathbb{N}$ and all $h \in \bmf{\mathcal{X}}$, the convergence, as $N$ tends to infinity, of $predpart[\chunk{z}{0}{n - 1}] h$ to $pred[\chunk{z}{0}{n - 1}] h$ can be established in several probabilistic senses.
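In passing, we sketch how one update of Algorithm~\ref{alg:SMC} may be coded; this is only a minimal Python illustration (the potential \texttt{g} and the mutation sampler \texttt{sample\_M} are hypothetical, user-supplied model ingredients, not part of the algorithm itself), but the returned ancestor indices are exactly the bookkeeping needed by the variance estimators of Section~\ref{sec:estimator}.
\begin{verbatim}
import numpy as np

def smc_update(particles, z, g, sample_M, rng):
    """One selection/mutation step of the bootstrap particle filter.

    particles : array of shape (N,), current particle positions
    z         : current perturbation (e.g. an observation y_n)
    g         : callable g(z, x) returning the potential (importance weight)
    sample_M  : callable sample_M(x, rng) returning one draw from M(x, .)
    """
    N = particles.shape[0]
    weights = np.array([g(z, x) for x in particles])   # unnormalised weights
    probs = weights / weights.sum()                     # normalised weights
    ancestors = rng.choice(N, size=N, p=probs)          # categorical resampling
    new_particles = np.array([sample_M(particles[a], rng) for a in ancestors])
    return new_particles, ancestors, weights
\end{verbatim}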
In particular, the first CLT for SMC methods was provided by \cite{delmoral:guionnet:1999}, establishing that \begin{equation} \label{eq:CLT} \sqrt{N} \left( predpart[\chunk{z}{0}{n - 1}] h - pred[\chunk{z}{0}{n - 1}] h \right) \dlim \variance{\chunk{z}{0}{n - 1}}(h) Z, \end{equation} where $Z$ is standard normally distributed and the asymptotic variance is given by $\variance{\chunk{z}{0}{n - 1}} \vcentcolon= \variance{\chunk{z}{0}{n - 1}}[0]$ with \begin{equation} \label{eq:def:as:var} \variance[2]{\chunk{z}{0}{n - 1}}[\ell] : \bmf{\mathcal{X}} \ni h \mapsto \sum_{m = \ell}^n \frac{pred[\chunk{z}{0}{m - 1}] \{ \uk[\chunk{z}{m}{n - 1}](h - pred[\chunk{z}{0}{n - 1}] h) \}^2 }{(pred[\chunk{z}{0}{m - 1}] \uk[\chunk{z}{m}{n - 1}] \mathbbm{1}_{\mathsf{X}})^2} \end{equation} (see also \cite{chopin:2004,kuensch:2005,douc:moulines:2008} for similar results). The fact that we in \eqref{eq:def:as:var} define a truncated version of the variance with only $n - \ell + 1$ terms will be clear later on. In the coming section we propose a lag-based, numerically stable estimator of the sequence $\{ \variance[2]{\chunk{z}{0}{n - 1}} \}_{n \in \mathbb{N}}$ of asymptotic variances. The estimator approximates $\{Ê\variance[2]{\chunk{z}{0}{n - 1}} \}_{n \in \mathbb{N}}$ online, as $n$ increases, under constant computational complexity and memory requirements. Importantly, the estimator is obtained as a by-product of the particle filter output and does not require additional simulations. The numerical stability is obtained at the price of a small bias, which may be controlled under weak assumptions on the mixing properties of the model. \section{A lag-based variance estimator} \label{sec:estimator} \subsection{The variance estimator proposed in \cite{chan:lai:2013}} \label{sec:the:Lai:estimator} Since Algorithm~\ref{alg:SMC} resamples the particles at each time step, the particle cloud may be associated with a tree describing the genealogical lineages of the particles. The estimators proposed in \cite{chan:lai:2013} and \cite{lee:whiteley:2016} are based on the particles' \emph{Eve indices} $\{ \eve{n}{i} \}_{i = 1}^N$ (the terminology is adopted from \cite{lee:whiteley:2016}), which are, for all $n \in \mathbb{N}$, defined as the indices of the time-zero ancestors of the particles $\{ \epart{n}{i} \}_{i = 1}^N$. More specifically, the Eve indices may, for all $i \in \intvect{1}{N}$, be computed recursively in Algorithm~\ref{alg:SMC} (just after Line~6) by letting $$ \eve{n}{i} \vcentcolon= \begin{cases} i & \mbox{for } n = 0, \\ \eve{n - 1}{\ind{n}{i}} & \mbox{for } n \in \mathbb{N}pos. \end{cases} $$ Using the Eve indices, H.~P. Chan and T.~L.~Lai proposed, in \cite{chan:lai:2013}, for all $n \in \mathbb{N}$, $\varest[2]{\chunk{z}{0}{n - 1}}(h)$, with \begin{equation} \label{eq:Lai:estimator} \varest[2]{\chunk{z}{0}{n - 1}} : \bmf{\mathcal{X}} \ni h \mapsto \frac{1}{N} \sum_{i = 1}^N \left( \sum_{j : \eve{n}{j} = i} \left\{ h(\epart{n}{j}) - predpart[\chunk{z}{0}{n - 1}] h \right\} \right)^2, \end{equation} as an estimator of $\variance[2]{\chunk{z}{0}{n - 1}}(h)$ for all $h \in \bmf{\mathcal{X}}$. As mentioned in the introduction, we will refer to this estimator as the CLE. (More precisely, in \cite{chan:lai:2013}, focus was set on the \emph{updated} distribution flows discussed in Section~\ref{sec:updated:measures} below; the adaptation is however straightforward.) In \cite{lee:whiteley:2016}, a generalisation of the CLE, allowing the particle population size $N$ to vary between SMC iterations, is presented. 
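To make the bookkeeping behind \eqref{eq:Lai:estimator} concrete, the following sketch (continuing the hypothetical Python setting introduced after Algorithm~\ref{alg:SMC}) propagates the Eve indices alongside the particles and evaluates the CLE for a test function $h$; it is meant as an illustration of the principle, not as a reference implementation.
\begin{verbatim}
import numpy as np

def cle(particles, eve, h):
    """Chan & Lai estimator of the asymptotic variance for a test function h."""
    N = particles.shape[0]
    hx = np.array([h(x) for x in particles])
    centred = hx - hx.mean()          # h(particle) minus the empirical mean
    # group the centred terms by time-zero ancestor and sum the squared class sums
    return sum(centred[eve == i].sum() ** 2 for i in range(N)) / N

# Eve-index recursion: eve starts as np.arange(N); after each call
#   particles, ancestors, _ = smc_update(particles, z, g, sample_M, rng)
# it is updated by
#   eve = eve[ancestors]
\end{verbatim}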
As the main result of \cite{chan:lai:2013}, the consistency, as $N$ tends to infinity, of the CLE is established; see also \cite[Theorem~1 and Corollary~1]{lee:whiteley:2016} for a generalisation. The CLE is indeed remarkable, as it allows the variance to be estimated online in a single run of the particle filter with no further simulation. Nevertheless, as explained in the introduction, the previous estimator has a serious flaw which is related to the well-known \emph{particle path depletion phenomenon} of SMC algorithms. More specifically, resampling the particles systematically at each time leads without exception to a random time point before which all the genealogical traces coincide; we refer again to \cite{jacob:murray:rubenthaler:2015}, which provides a time uniform $\mathcal{O}(N \log N)$ bound on the expected number of generations back in time to this most recent common ancestor. Thus, as $n$ increases, the sets $\{ j \in \intvect{1}{N} : \eve{n}{j} = i \}$ will eventually be empty for all indices $i \in \intvect{1}{N}$ except one, say, $i_0$, for which $\{ j \in \intvect{1}{N} : \eve{n}{j} = i_0 \} = \intvect{1}{N}$. As a consequence, eventually, $\varest[2]{\chunk{z}{0}{n - 1}}(h) = 0$ for all $h \in \bmf{\mathcal{X}}$, which makes the estimator impractical. In the next section, we propose a simple modification of the CLE that stabilises it numerically at the cost of a negligible, controllable bias. \subsection{Our estimator} \label{sec:our:estimator} The estimator that we propose is based on the simple idea of stabilising numerically the CLE by tracing, backwards in time, only a few generations of the particle genealogy, rather than tracing the history all the way back to the time-zero ancestors. In our approach, the Eve indices will be replaced by \emph{Enoch indices}\footnote{Two figures named Enoch appear in the 2nd as well as the 6th generations of the Genealogies of Genesis, as the son of Cain and the great-great-great-grandson of Seth, respectively.} defined, for all $i \in \intvect{1}{N}$ and $m \in \mathbb{N}$, recursively as \begin{equation} \label{eq:def:Enoch} \enoch{m}{n}{i} \vcentcolon= \begin{cases} i & \mbox{for } n = m, \\ \enoch{m}{n - 1}{\ind{n}{i}} & \mbox{for } n > m. \end{cases} \end{equation} In other words, for all $n \in \mathbb{N}$, $m \in \intvect{1}{n}$, and $i \in \intvect{1}{N}$, $\epart{m}{\enoch{m}{n}{i}}$ is the ancestor of $\epart{n}{i}$ at time $m$. Now, let $\lambda \in \mathbb{N}$ be some fixed number, referred to as the lag, and define $\lambdatime{n}{\lambda} \vcentcolon= (n - \lambda) \vee 0$; then, we propose $\varest[2]{\chunk{z}{0}{n - 1}}[\lambda](h)$, with \begin{equation} \label{eq:estimator} \varest[2]{\chunk{z}{0}{n - 1}}[\lambda] : \bmf{\mathcal{X}} \ni h \mapsto \frac{1}{N} \sum_{i = 1}^N \left( \sum_{j : \enoch{\lambdatime{n}{\lambda}}{n}{j} = i} \left\{ h(\epart{n}{j}) - predpart[\chunk{z}{0}{n - 1}] h \right\} \right)^2, \end{equation} as an estimator of the variance $\variance[2]{\chunk{z}{0}{n - 1}}(h)$ for all $n \in \mathbb{N}$, $\chunk{z}{0}{n - 1} \in \mathsf{Z}^n$, and $h \in \bmf{\mathcal{X}}$. Online computation of the Enoch indices $\{ \enoch{\lambdatime{n}{\lambda}}{n}{i} \}_{i = 1}^N$ requires the propagation of a window $\{ \enoch{\lambdatime{n}{\lambda}}{n}{i}, \ldots, \enoch{n}{n}{i} \}_{i = 1}^N$ of indices; see Algorithm~\ref{alg:fixed-lag:SMC} for a pseudo-code. As the length of the window is bounded by $\lambda + 1$, the memory demand of the estimator is $\mathcal{O}(\lambda N)$ independently of $n$.
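In the same hypothetical Python setting as above, the fixed-lag modification amounts to replacing the Eve indices by the window of Enoch indices \eqref{eq:def:Enoch}; a sketch of the corresponding window update and of the estimator \eqref{eq:estimator} is given below.
\begin{verbatim}
import numpy as np
from collections import deque

def update_enoch(window, ancestors, lag):
    """Propagate the window [E_m, ..., E_n] of Enoch indices by one time step.

    window[k][i] is the index of the ancestor, at generation m + k, of particle i,
    where m = max(n - lag, 0); ancestors are the resampled indices I_{n+1}.
    """
    N = ancestors.shape[0]
    new_window = deque(col[ancestors] for col in window)  # reindex all generations
    new_window.append(np.arange(N))                       # E_{n+1} w.r.t. itself
    if len(new_window) > lag + 1:                         # keep at most lag+1 generations
        new_window.popleft()
    return new_window

def fixed_lag_variance(particles, window, h):
    """Fixed-lag analogue of the CLE: group particles by window[0], the Enoch indices."""
    N = particles.shape[0]
    hx = np.array([h(x) for x in particles])
    centred = hx - hx.mean()
    enoch = window[0]
    return sum(centred[enoch == i].sum() ** 2 for i in range(N)) / N
\end{verbatim}
With \texttt{lag} set to $\lambda$, the window is initialised as \texttt{deque([np.arange(N)])} at time $n = 0$, exactly as in Algorithm~\ref{alg:fixed-lag:SMC} below.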
Moreover, since genealogical tracing has a linear complexity in $N$, the total complexity of the estimator is $\mathcal{O}(\lambda N)$, again independently of $n$. \begin{algorithm}[H] \label{alg:fixed-lag:SMC} \KwData{$\{ \epart{n}{i} \}_{i = 1}^N$, $\{ \enoch{\lambdatime{n}{\lambda}}{n}{i}, \ldots, \enoch{n}{n}{i} \}_{i = 1}^N$, $z_n$} \KwResult{$\{ \epart{n + 1}{i} \}_{i = 1}^N$, $\{ \enoch{\lambdatime{(n + 1)}{\lambda}}{n + 1}{i}, \ldots, \enoch{n + 1}{n + 1}{i} \}_{i = 1}^N$} set $\wgtsum{n} \gets 0$\; \For{$i = 1 \to N$}{ set $\wgt{n}{i} \gets pot[z_n](\epart{n}{i})$\; set $\wgtsum{n} \gets \wgtsum{n} + \wgt{n}{i}$\; } \For {$i = 1 \to N$}{ draw $\ind{n + 1}{i} \sim \mathsf{Cat}(\{ \wgt{n}{\ell} / \wgtsum{n} \}_{\ell = 1}^N)$\; draw $\epart{n + 1}{i} \sim \kernel{M}(\epart{n}{\ind{n + 1}{i}}, \cdot)$\; \For{$m = \lambdatime{(n + 1)}{\lambda} \to n$}{ set $\enoch{m}{n + 1}{i} \gets \enoch{m}{n}{\ind{n + 1}{i}}$\; } set $\enoch{n + 1}{n + 1}{i} \gets i$\; } \caption{SMC particle and Enoch-index update} \end{algorithm} For $n = 0$, Algorithm~\ref{alg:fixed-lag:SMC} is initialised by drawing $\{ \epart{0}{i} \}_{i = 1}^N \sim \chi^{\varotimes N}$ and setting $\enoch{0}{0}{i} \gets i$ for all $i \in \intvect{1}{N}$. At the end of the algorithm, after the second \textbf{for}-loop, an estimate $$ \varest[2]{\chunk{z}{0}{n}}[\lambda](h) = \frac{1}{N} \sum_{i = 1}^N \left( \sum_{j : \enoch{\lambdatime{(n + 1)}{\lambda}}{n + 1}{j} = i} \{ h(\epart{n + 1}{j}) - predpart[\chunk{z}{0}{n}] h \} \right)^2 $$ of $\variance[2]{\chunk{z}{0}{n}}[\lambda](h)$ may be formed for all $h \in \bmf{\mathcal{X}}$. \subsection{Variance estimators for flows of updated distributions} \label{sec:updated:measures} Some applications involve approximation of the \emph{updated} measures \begin{equation} \label{eq:def:filt} \filt[\chunk{z}{k}{m}] : \mathcal{X} \ni A \mapsto \frac{\chi \uk[\chunk{z}{k}{m - 1}] (pot[z_m] \mathbbm{1}_A)} {\chi \uk[\chunk{z}{k}{m - 1}] (pot[z_m] \mathbbm{1}_\mathsf{X})}, \end{equation} for $\chunk{z}{k}{m} \in \mathsf{Z}^{m - k + 1}$, rather than the measures defined by \eqref{eq:def:pred}. \begin{example}[partially dominated state-space models, revisited] In the case of the partially dominated state-space models discussed in Example~\ref{example:state:space:model}, the updated measures $\{ \filt[\chunk{y}{0}{n}] \}_{n \in \mathbb{N}}$ defined through \eqref{eq:def:filt} are the \emph{filter distributions}; more precisely, in this context, for all $n \in \mathbb{N}$, $\filt[\chunk{y}{0}{n}]$ is the conditional distribution of the state $X_n$ given the realised observations $\chunk{y}{0}{n} \in \mathsf{Y}^{n + 1}$ up to time $n$ (i.e., \emph{including} the last observation $y_n$).
\end{example} Since for all $h \in \bmf{\mathcal{X}}$, by normalisation, $$ \filt[\chunk{z}{k}{m}] h = \frac{pred[\chunk{z}{k}{m - 1}](pot[z_m] h)}{pred[\chunk{z}{k}{m - 1}] pot[z_m]}, $$ the flow $\{ \filt[\chunk{z}{0}{n}] \}_{n \in \mathbb{N}}$ of updated distributions is naturally approximated by the flow of weighted empirical measures \begin{equation} \label{eq:def:particle:filter} \filtpart[\chunk{z}{0}{n}] : \mathcal{X} \ni A \mapsto \frac{predpart[\chunk{z}{0}{n - 1}](pot[z_n] \mathbbm{1}_A)}{predpart[\chunk{z}{0}{n - 1}] pot[z_n]} = \sum_{i = 1}^N \frac{\wgt{n}{i}}{\wgtsum{n}} \mathbbm{1}_A(\epart{n}{i}), \end{equation} for some given sequence $\{ z_n \}_{n \in \mathbb{N}}$ of perturbations, where the weights $\{ \wgt{n}{i} \}_{i = 1}^N$ and the weight sum $\wgtsum{n}$ are computed in Algorithm~\ref{alg:SMC}. By the normality \eqref{eq:CLT} and the consistency \eqref{eq:particle:filter:consistency} one obtains, using Slutsky's theorem, for all $\chunk{z}{0}{n} \in \mathsf{Z}^{n + 1}$, the central limit theorem \begin{equation} \label{eq:CLT:updated:measures} \sqrt{N} \left( \filtpart[\chunk{z}{0}{n}] h - \filt[\chunk{z}{0}{n}] h \right) \dlim \filtvariance{\chunk{z}{0}{n}}(h) Z, \end{equation} as $N$ tends to infinity, where $Z$ is standard normally distributed and the asymptotic variance is given by $\filtvariance[2]{\chunk{z}{0}{n}}(h) = \filtvariance[2]{\chunk{z}{0}{n}}[0](h)$ with \begin{equation} \label{eq:def:as:var:updated:measures} \filtvariance[2]{\chunk{z}{0}{n}}[\ell] : \bmf{\mathcal{X}} \ni h \mapsto \frac{\variance[2]{\chunk{z}{0}{n - 1}}[\ell](pot[z_n] \{ h - \filt[\chunk{z}{0}{n}] h \})}{(pred[\chunk{z}{0}{n - 1}] pot[z_n])^2} \end{equation} (where $\variance{\chunk{z}{0}{n - 1}}[\ell]$ is defined in \eqref{eq:def:as:var} for the original Feynman-Kac particle model). In the case $\ell = 0$, the expression \eqref{eq:def:as:var:updated:measures} is found also in \cite[Eqn.~(17)]{douc:moulines:olsson:2014}. In the light of \eqref{eq:def:as:var:updated:measures}, casting our fixed-lag approach into the framework of updated Feynman-Kac models yields the estimator \begin{multline} \label{eq:def:var:est:updated:measures} \varestfilt[2]{\chunk{z}{0}{n}}[\lambda] : \bmf{\mathcal{X}} \ni h \mapsto \frac{\varest[2]{\chunk{z}{0}{n - 1}}[\lambda](pot[z_n] \{h - \filtpart[\chunk{z}{0}{n}] h\})}{(predpart[\chunk{z}{0}{n - 1}] pot[z_n])^2} \\ = N \sum_{i = 1}^N \left( \sum_{j : \enoch{\lambdatime{n}{\lambda}}{n}{j} = i} \frac{\wgt{n}{j}}{\wgtsum{n}} \left\{ h(\epart{n}{j}) - \filtpart[\chunk{z}{0}{n}] h \right\} \right)^2 \end{multline} for some suitable lag $\lambda \in \mathbb{N}$ (where the equality stems from the fact that $predpart[\chunk{z}{0}{n - 1}] \{ pot[z_n](h - \filtpart[\chunk{z}{0}{n}] h) \} = 0$). \section{Theoretical results} \label{sec:theoretical:results} \subsection{Main assumptions} In the following we assess the theoretical properties of our estimator. All proofs are presented in Appendix~\ref{sec:proofs}. We preface our main assumptions by the definition of an \emph{$r$-local Doeblin set}. \begin{definition} \label{defi:local-Doeblin} Let $r \in \mathbb{N}pos$.
A set $C \in \mathcal{X}$ is $r$-\emph{local Doeblin} with respect to $\kernel{M}$ and $pot$ if there exist positive functions $\varepsilon^-_C: \mathsf{Z}^r \to \mathbb{R}^+$ and $\varepsilon^+_C: \mathsf{Z}^r \to \mathbb{R}^+$, a family $\{ probdoeblin{C}{\bar{z}} ; \bar{z} \in \mathsf{Z}^r \}$ of probability measures, and a family $\{\varphi_C \langle \bar{z} \rangle ; \bar{z} \in \mathsf{Z}^r \}$ of positive functions such that for all $\bar{z} \in \mathsf{Z}^r$, $probdoeblin{C}{\bar{z}}(C) =1$ and for all $A \in \mathcal{X}$ and $x \in C$, \begin{equation*} \label{eq:definition-LD-set} \varepsilon^-_C \langle \bar{z} \rangle \, \varphi_C \langle \bar{z} \rangle(x) \, probdoeblin{C}{\bar{z}} (A) \leq \uk[\bar{z}](x, A \cap C) \leq \varepsilon^+_C \langle \bar{z} \rangle \, \varphi_C \langle \bar{z} \rangle(x) \, probdoeblin{C}{\bar{z}}(A). \end{equation*} \end{definition} Our theoretical analysis is driven by the following assumptions. \begin{hypA} \label{assum:likelihoodDrift} The perturbation process $\{ per_n \}_{n \in \mathbb{Z}}$, taking on values in $(\mathsf{Z}, \mathcal{Z})$, is strictly stationary and ergodic. Moreover, there exist an integer $r \in \mathbb{N}pos $ and a set $K \in \mathcal{Z}^{\varotimes r}$ such that the following holds. \begin{enumerate}[(i)] \item \label{item:condition-L-K} The process $\{ perblock_n \}_{n \in \mathbb{Z}}$, where $perblock_n \vcentcolon= \chunk{per}{n r}{(n + 1)r - 1}$, is ergodic and such that $prob(perblock_0 \in K) > 2/3$. \item For all $\eta > 0$ there exists an $r$-local Doeblin set $C \in \mathcal{X}$ such that for all $\chunk{z}{0}{r - 1} \in K$, \begin{equation*} \label{eq:bound-eta-G} \sup_{x \in C^\complement} \uk[\chunk{z}{0}{r-1}] \mathbbm{1}_\mathsf{X}(x) \leq \eta \sup_{x \in \mathsf{X}} \uk[\chunk{z}{0}{r - 1}] \mathbbm{1}_\mathsf{X}(x) < \infty \end{equation*} and \begin{equation*} \label{eq:lower-bound} \inf_{\chunk{z}{0}{r - 1} \in K} \frac{\varepsilon_C^- \langle \chunk{z}{0}{r - 1} \rangle}{\varepsilon_C^+ \langle \chunk{z}{0}{r - 1} \rangle} > 0, \end{equation*} where the functions $\varepsilon^+_C$ and $\varepsilon^-_C$ are given in Definition~\ref{defi:local-Doeblin}. \item \label{item:condition-minoration} There exists a set $D \in \mathcal{X}$ such that \begin{equation*} \label{eq:condition-minoration} \mathbb{E} \left[\ln^- \inf_{x \in D} \uk[\chunk{per}{0}{r-1}] \mathbbm{1}_D(x) \right] < \infty. \end{equation*} \end{enumerate} \end{hypA} \begin{hypA}\label{assum:majo-g} \begin{enumerate}[(i)] \item \label{item:assum-pos-g} For all $(x, z) \in \mathsf{X} \times \mathsf{Z}$, $pot[z](x) > 0$. \item \label{item:bdd:g} For all $z \in \mathsf{Z}$, $\| pot[z] \|_\infty < \infty$. \item \label{item:assum-supg} $\mathbb{E} \left[ \ln^+ \| pot[per_0] \|_\infty \right] < \infty$. \end{enumerate} \end{hypA} \begin{remark} \label{rem:simplified:assumptions} In the case $r = 1$, \A{assum:likelihoodDrift} may be replaced by the simpler assumption that there exists a set $K \in \mathcal{Z}$ such that the following holds. \begin{enumerate}[(i)] \item \label{item:condition-L-K-simplified} $prob \left( per_0 \in K \right) > 2 / 3$. \item \label{item:condition:local:doeblin:simplified} For all $\eta > 0$ there exists a $1$-local Doeblin set $C \in \mathcal{X}$ such that for all $z \in K$, \begin{equation} \label{eq:bound-eta-G-simplified} \sup_{x \in C^\complement} pot[z](x) \leq \eta \| pot[z] \|_\infty < \infty.
\end{equation} \item \label{item:mino-g-simple} There exists a set $D \in \mathcal{X}$ satisfying $$ \inf_{x \in D} \kernel{M}(x, D) > 0 \quad \mbox{and} \quad \mathbb{E} \left[ \ln^- \inf_{x \in D} pot[per_0](x) \right] < \infty. $$ \end{enumerate} This simplified version will be used in Section~\ref{sec:numerical:study}. \end{remark} \subsection{Theoretical properties of the fixed-lag estimator} As established by the following proposition, for all $n \in \mathbb{N}$, $\chunk{z}{0}{n - 1} \in \mathsf{Z}^n$, and $\lambda \in \mathbb{N}$, the estimator $\varest[2]{\chunk{z}{0}{n - 1}}[\lambda]$ is a consistent estimator of $\variance[2]{\chunk{z}{0}{n - 1}}[\lambdatime{n}{\lambda}]$, where the latter is given by \eqref{eq:def:as:var} with $\ell = \lambdatime{n}{\lambda}$. \begin{proposition} \label{prop:consistency:fixed:lag} Assume \A{assum:majo-g}(\ref{item:assum-pos-g}--\ref{item:bdd:g}). Then for all $n \in \mathbb{N}$, $\chunk{z}{0}{n - 1} \in \mathsf{Z}^n$, $\lambda \in \mathbb{N}$, and $h \in \bmf{\mathcal{X}}$, as $N \rightarrow \infty$, $$ \varest[2]{\chunk{z}{0}{n - 1}}[\lambda](h) plim \variance[2]{\chunk{z}{0}{n - 1}}[\lambdatime{n}{\lambda}](h). $$ \end{proposition} Now, define, for all $n \in \mathbb{N}$ and $\chunk{z}{0}{n - 1} \in \mathsf{Z}^n$, the \emph{asymptotic bias} \begin{equation} \label{eq:def:bias} \bias{\chunk{z}{0}{n - 1}}{\lambda} : \bmf{\mathcal{X}} \ni h \mapsto \variance[2]{\chunk{z}{0}{n - 1}}(h) - \variance[2]{\chunk{z}{0}{n - 1}}[\lambdatime{n}{\lambda}](h), \end{equation} which is always nonnegative. For the integer $r \in \mathbb{N}pos$ and the set $D \in \mathcal{X}$ given in \A{assum:likelihoodDrift}, introduce the class \begin{equation} \label{eq:class:initial:distributions} mr(D, r) \vcentcolon= \left\{ \chi \in probmeas{\mathcal{X}} : \mathbb{E} \left[ \ln^- \chi \uk[\chunk{per}{0}{\ell - 1}] \mathbbm{1}_D \right] < \infty \mbox{ for all $\ell \in \intvect{0}{r}$} \right \} \end{equation} of initial distributions. Then the following theorem, which is the main result of this paper, states that the asymptotic bias can be controlled uniformly in time by the lag $\lambda$. In particular, the bias decreases with $\lambda$ at a geometric rate. \begin{theorem} \label{thm:tightness:bias} Assume \A[assum:likelihoodDrift]{assum:majo-g}. Then there exist a constant $\rho \in (0, 1)$ and a sequence $(c_k)_{k \in \mathbb{N}pos}$ in $\mathbb{R}_+$ such that for all $\chi \in mr(D, r)$ (with $D$ and $r$ given by \A{assum:likelihoodDrift}), $\lambda \in \mathbb{N}$, $h \in \bmf{\mathcal{X}}$, and $k \in \mathbb{N}pos$, \begin{equation} \label{eq:bias:bound} \sup_{n \in \mathbb{N}} prob \left( \frac{\bias{\chunk{per}{0}{n - 1}}{\lambda}(h)}{\rho^{\lambda + 1} \| h \|_{\infty}^2} > c_k \right) \leq \frac{1}{k}. \end{equation} \end{theorem} \begin{remark} We stress that the sequence $\{ c_k \}_{k \in \mathbb{N}pos}$ as well as the mixing rate $\rho$ in the previous theorem depend exclusively on the properties of the model and not on the lag size $\lambda$, the objective function $h$, or the initial distribution $\chi$ (as long as the latter belongs to $mr(D, r)$). This means that at any level specified by $k$, the asymptotic bias can be controlled uniformly in time at a geometric rate in $\lambda$. \end{remark} The strength of these bounds will be illustrated numerically in Section~\ref{sec:numerical:study}.
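For readers who wish to experiment with the method, the following minimal Python sketch (our illustration only; the function names \texttt{enoch\_indices} and \texttt{fixed\_lag\_variance}, as well as the assumed storage format for the resampling indices, are ours and not part of Algorithm~\ref{alg:SMC} or Algorithm~\ref{alg:fixed-lag:SMC}) indicates how the fixed-lag estimator \eqref{eq:def:var:est:updated:measures} can be evaluated from one sweep of a particle filter, given the particle positions, the weights, and the resampling indices of the last $\lambda$ generations.
\begin{verbatim}
import numpy as np

def enoch_indices(ancestors, lag):
    # ancestors[k][j] is the index of the time-k ancestor of particle j at
    # time k + 1 (one integer array of length N per generation, n >= 1).
    # Returns the truncated ancestor ("Enoch") indices E_{0 v (n - lag)}^n(j),
    # obtained by composing at most the last `lag` resampling maps.
    n = len(ancestors)
    N = len(ancestors[-1])
    e = np.arange(N)
    for m in range(n - 1, max(n - lag, 0) - 1, -1):
        e = ancestors[m][e]
    return e

def fixed_lag_variance(h_values, weights, enoch):
    # h_values[j] = h(xi_n^j), weights[j] = omega_n^j, enoch[j] = E_{0 v (n-lag)}^n(j).
    # Implements N * sum_i ( sum_{j : enoch[j] = i} (omega_j / Omega) (h_j - pi_hat) )^2.
    w = weights / weights.sum()
    pi_hat = np.dot(w, h_values)        # self-normalised estimate of the updated measure
    centred = w * (h_values - pi_hat)
    sums = np.zeros(len(w))
    np.add.at(sums, enoch, centred)     # group the centred terms by common Enoch index
    return len(w) * np.sum(sums ** 2)
\end{verbatim}
Since only the last $\lambda$ resampling maps are composed, the additional cost of the variance estimate in this sketch is of order $N \lambda$ per time step.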
\subsection{Extensions to variance estimators for flows of updated distributions} In the following we confirm that the previous results can be extended to flows of updated Feynman-Kac distributions (recall the definitions in Section~\ref{sec:updated:measures}). \begin{proposition} \label{prop:consistency:fixed:lag:updated:case} Assume \A{assum:majo-g}(\ref{item:assum-pos-g}--\ref{item:bdd:g}). Then for all $n \in \mathbb{N}$, $\chunk{z}{0}{n} \in \mathsf{Z}^{n + 1}$, $\lambda \in \mathbb{N}$, and $h \in \bmf{\mathcal{X}}$, as $N \rightarrow \infty$, $$ \varestfilt[2]{\chunk{z}{0}{n}}[\lambda](h) plim \filtvariance[2]{\chunk{z}{0}{n}}[\lambdatime{n}{\lambda}](h). $$ \end{proposition} Now, as in the previous section, define, for all $n \in \mathbb{N}$ and $\chunk{z}{0}{n} \in \mathsf{Z}^{n + 1}$, the asymptotic bias \begin{equation} \label{eq:def:bias:filter} \biasfilt{\chunk{z}{0}{n}}{\lambda} : \bmf{\mathcal{X}} \ni h \mapsto \filtvariance[2]{\chunk{z}{0}{n}}(h) - \filtvariance[2]{\chunk{z}{0}{n}}[\lambdatime{n}{\lambda}](h). \end{equation} \begin{theorem} \label{thm:tightness:bias:filter} Assume \A[assum:likelihoodDrift]{assum:majo-g}. Then the statement of Theorem~\ref{thm:tightness:bias} holds true when $\bias{\chunk{per}{0}{n - 1}}{\lambda}$ is replaced by $\biasfilt{\chunk{per}{0}{n}}{\lambda}$. \end{theorem} \subsection{Bounds on the asymptotic bias under strong mixing assumptions} Before continuing to the numerical part of the paper, we provide, for the sake of completeness, a stronger bound on the asymptotic bias under the following \emph{strong mixing} assumption, which is standard in the literature of SMC analysis (see \cite{delmoral:guionnet:2001} and, e.g., \cite{delmoral:2004,cappe:moulines:ryden:2005,crisan:heine:2008} for refinements) and points to applications where the state space $\mathsf{X}$ is a compact set. \begin{hypB} \label{ass:strong:mixing} \begin{enumerate}[(i)] \item \label{ass:strong:mixing-1} There exist constants $0 < mlow < mup < \infty$ and a probability measure $\mu \in probmeas{\mathcal{X}}$ such that for all $x \in \mathsf{X}$ and $A \in \mathcal{X}$, $$ mlow \mu(A) \leq \kernel{M}(x, A) \leq mup \mu(A). $$ \item \label{ass:strong:mixing-2} There exist constants $0 < potlow < potup < \infty$ such that for all $z \in \mathsf{Z}$, $\| pot[z] \|_\infty \leq potup$ and for all $(x, z) \in \mathsf{X} \times \mathsf{Z}$, $\kernel{M} pot[z](x) \geq potlow$. \end{enumerate} \end{hypB} Under \B{ass:strong:mixing}(\ref{ass:strong:mixing-1}), define $\varrho \vcentcolon= 1 - mlow / mup$; then, instead of bounding the asymptotic bias stochastically (as in Theorem~\ref{thm:tightness:bias}), the following theorem provides a \emph{deterministic} bound on the same. The proof is found in Section~\ref{sec:proof:strong:bias:bound}. \begin{theorem} \label{thm:strong:bias:bound} Assume \B{ass:strong:mixing}. Then there exists a constant $c \in \mathbb{R}pos$ such that for all $n \in \mathbb{N}$, $\chunk{z}{0}{n - 1} \in \mathsf{Z}^n$, $\lambda \in \mathbb{N}$, and $h \in \bmf{\mathcal{X}}$, \begin{equation*} \label{eq:strong:bias:bound} \bias{\chunk{z}{0}{n - 1}}{\lambda}(h) \leq \begin{cases} 0 & \mbox{for } n \leq \lambda, \\ c \varrho^{2(\lambda + 1)} \| h \|_{\infty}^2 & \mbox{for } n > \lambda. 
\end{cases} \end{equation*} \end{theorem} \section{Application to state-space models} \label{sec:numerical:study} \subsection{Predicting log-volatility} \label{sec:stochastic:volatility} As a first numerical illustration we consider the problem of computing the predictor flow in the---now classical---stochastic volatility model \begin{equation} \label{eq:sto:vol:model} \begin{split} X_{n + 1} &= \varphi X_n + \sigma U_{n + 1}, \\ Y_n &= \beta \exp \left( X_n / 2 \right) V_n, \end{split} \quad n \in \mathbb{N}, \end{equation} proposed in \cite{hull:white:1987}, where $\{ U_n \}_{n \in \mathbb{N}pos}$ and $\{ V_n \}_{n \in \mathbb{N}}$ are sequences of uncorrelated standard Gaussian noise. In this model, where only the process $\{ Y_n \}_{n \in \mathbb{N}}$ is observable, $\beta$ is a constant scaling factor, $\varphi$ is the \emph{persistence}, and $\sigma$ is the \emph{volatility of the log-volatility}. In the case where $|\varphi| < 1$, the state process $\{ X_n \}_{n \in \mathbb{N}}$ is stationary with a Gaussian invariant distribution having zero mean and variance $\tilde{\sigma}^2 \vcentcolon= \sigma^2 / (1 - \varphi^2)$. It is easily checked that the model $\{ (X_n, Y_n) \}_{n \in \mathbb{N}}$ defined by \eqref{eq:sto:vol:model} is indeed a state-space model in the notion of Example~\ref{example:state:space:model}, and in this case $\mathsf{X} = \mathsf{Y} = \mathbb{R}$, $\mathcal{X} = \mathcal{Y} = \mathcal{B}(\mathbb{R})$, and \[ \begin{split} \kernel{M}(x, A) &= \int_A phi(x' ; \varphi x, \sigma^2) \, \mathrm{d} x', \\ pot(x, y) &= phi(y ; 0, \beta^2 \mathrm{e}^x), \end{split} \] where $(x, y) \in \mathbb{R}^2$, $A \in \mathcal{B}(\mathbb{R})$, and $phi(\cdot ; \mu, \varsigma^2)$ denotes the density of the Gaussian distribution with expectation $\mu \in \mathbb{R}$ and standard deviation $\varsigma > 0$. Consequently, the flow of one-step log-volatility predictors satisfies the perturbed Feynman-Kac recursion \eqref{eq:pred:rec} with $pot[y] = g(\cdot, y)$, $y \in \mathsf{Y}$, and may thus be approximated using SMC methods. We check that the model satisfies the simplified assumptions in Remark~\ref{rem:simplified:assumptions} for the scenario where $|\varphi| < 1$, implying stationary state and observation processes, the latter with marginal stationary distribution \begin{equation} \label{eq:stationary:marginal:observations} \mathcal{B}(\mathbb{R}) \ni A \mapsto \iint \mathbbm{1}_A(y) phi(y ; 0, \beta^2 \mathrm{e}^x) phi(x ; 0, \tilde{\sigma}^2) \, \mathrm{d} x \, \mathrm{d} y. \end{equation} For this purpose, we first note that for all $y \in \mathbb{R}$, $$ \| pot[y] \|_\infty = \frac{1}{|y| \sqrt{2 pi}} \mathrm{e}^{-1/2}, $$ i.e., for all $(x, y) \in \mathbb{R}^2$, \begin{equation*} \frac{pot[y](x)}{\| pot[y] \|_\infty} propto |y| \exp \left( - \frac{x}{2} - \frac{y^2 \mathrm{e}^{- x}}{2 \beta^2} \right), \end{equation*} where the right hand side tends to zero as $|x| \rightarrow \infty$ for all $y \in \mathbb{R}$. We thus conclude that Conditions~\eqref{item:condition-L-K} and \eqref{item:condition:local:doeblin:simplified} in Remark~\ref{rem:simplified:assumptions} hold true for any compact set $K \subset \mathbb{R}$ with probability exceeding $2/3$ under \eqref{eq:stationary:marginal:observations}. In this case, every compact set $C \subset \mathbb{R}$ is $1$-local Doeblin with respect to Lebesgue measure. 
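As a quick numerical sanity check of the supremum just computed (the snippet below is our own illustration and not part of the experiments reported in this section; the observation value $y = 0.3$ is arbitrary), one may compare the closed-form expression for $\| pot[y] \|_\infty$ with a grid maximisation of the emission density:
\begin{verbatim}
import numpy as np

beta = 0.641   # scale parameter (the value used in the experiments below)
y = 0.3        # an arbitrary nonzero observation

def g(x, y):
    # emission density g(x, y) = N(y; 0, beta^2 * exp(x))
    s2 = beta ** 2 * np.exp(x)
    return np.exp(-y ** 2 / (2.0 * s2)) / np.sqrt(2.0 * np.pi * s2)

x_grid = np.linspace(-30.0, 30.0, 200001)
print(g(x_grid, y).max())                               # grid maximum
print(np.exp(-0.5) / (abs(y) * np.sqrt(2.0 * np.pi)))   # e^{-1/2} / (|y| sqrt(2 pi))
\end{verbatim}
Both printed values agree up to the grid resolution, in accordance with the display above.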
In addition, since \begin{equation} \label{eq:finite:second:moment:obs:process} \mathbb{E} \left[ Y_0^2 \right] = \beta^2 \int \mathrm{e}^x phi(x ; 0, \tilde{\sigma}^2) \, \mathrm{d} x < \infty, \end{equation} Condition~\eqref{item:mino-g-simple} is satisfied for all compact sets $D \subset \mathbb{R}$. Moreover, for such compact sets $D$, \eqref{eq:finite:second:moment:obs:process} implies that $mr(D, 1)$ contains all initial distributions $\chi$ with $\chi(D) > 0$; in particular, $mr(D, 1)$ contains the invariant Gaussian distribution of the state process, which we use as initial distribution in the predictor Feynman-Kac recursion. In this context, we aim at computing the sequence $\{ pred[\chunk{y}{0}{n - 1}] \operatorname{id} \}_{n \in \mathbb{N}}$ of predictor means; however, due to the nonlinearity of the emission density $pot$, closed-form expressions are beyond reach, and we thus apply SMC for this purpose. In the scenario considered here, $(\beta, \varphi, \sigma) = (.641, .975, .165)$, which are the parameter estimates obtained in \cite[Example~11.1.1.2]{cappe:moulines:ryden:2005} on the basis of British pound/US dollar daily log-returns from 1~October 1981 to 28~June 1985. In this setting, the following two numerical experiments were conducted. \subsubsection{Lag size influence} \label{sec:lag:size:influence} First, in order to assess the dependence of the bias \eqref{eq:def:bias} on the lag $\lambda$, a record comprising $600$ observations was generated by simulation of the model under the parameters above. We re-emphasise that even though we here, for simplicity, consider the scenario of a perfectly specified model, this is not required by our assumptions in Section~\ref{sec:theoretical:results} (as long as the observation sequence is stationary). By inputting, $100$ times, these observations into Algorithm~\ref{alg:fixed-lag:SMC} and running the same with $N = 4000$ particles, $100$ replicates of the particle predictor mean at time $600$ were produced and furnished with estimates of the asymptotic variance at the same time point. For each replicate, the asymptotic variance was estimated using all the lags $\lambda \in \{2, 10, 12, 14, 16, 18, 20, 22, 50, 100, 200, 600\}$, where the last one, $\lambda = 600$, corresponds to the CLE. Moreover, the reference value $1.63$ of the asymptotic variance, again at the time point $600$, was obtained by the brute-force approach consisting in generating as many as $1000$ replicates of the particle predictor mean, again with $N = 4000$ particles, and simply multiplying the sample variance of these replicates by $N$. The outcome is reported numerically and visually in Table~\ref{tab:lags:means:stds:stovol} and Figure~\ref{fig:boxplots:stovol}, respectively, where the different boxes in Figure~\ref{fig:boxplots:stovol} correspond to different lags. For each box, the reference value is indicated by a blue-colored asterisk $(\ast)$ and the average estimate of each box is indicated by a red ditto. From Figure~\ref{fig:boxplots:stovol} it is evident that the clear bias introduced by the smallest lags ($\lambda \in \{2, 10, 12\}$) decreases rapidly as $\lambda$ increases, and for $\lambda = 20$ the bias is more or less eliminated. After this, increasing $\lambda$ further leads, as we may expect from the particle path degeneracy, to a significant increase of variance and no further improvement of the bias.
On the contrary, the fact that the cardinality of the set $\{ \enoch{\lambdatime{n}{\lambda}}{n}{i} : i \in \intvect{1}{N} \}$ decreases monotonically as $\lambda$ increases, leading to constantly zero variance estimates for $n$ and $\lambda$ large enough, re-introduces bias also for large $\lambda$. For the last replicate, the cardinalities of the sets $\{ \eve{n}{i} : i \in \intvect{1}{N} \}$ and $\{ \enoch{\lambdatime{n}{20}}{n}{i} : i \in \intvect{1}{N} \}$ were $8$ and $140$, respectively. \begin{figure} \caption{Estimated asymptotic variances of the particle predictor mean at time $600$ in the stochastic volatility model \eqref{eq:sto:vol:model}.} \label{fig:boxplots:stovol} \end{figure} \begin{table}[H] \begin{center} \begin{tabular}{c|c|c} \toprule $\lambda$ & Mean & St. dev. \\ \midrule $2$ & $.47$ & $.02$ \\ $10$ & $.96$ & $.09$ \\ $12$ & $.99$ & $.11$ \\ $14$ & $1.47$ & $.40$ \\ $16$ & $1.54$ & $.49$ \\ $18$ & $1.60$ & $.57$ \\ $20$ & $1.63$ & $.62$ \\ $22$ & $1.62$ & $.61$ \\ $50$ & $1.58$ & $.61$ \\ $100$ & $1.53$ & $.71$ \\ $200$ & $1.46$ & $.71$ \\ $600$ & $1.30$ & $.96$ \\ \bottomrule \end{tabular} \end{center} \caption{Means and standard deviations of the variance estimates reported in Figure~\ref{fig:boxplots:stovol}.} \label{tab:lags:means:stds:stovol} \end{table} \subsubsection{Long-term stability} \label{sec:long:term:stability} In order to investigate numerically the long-term stability of our fixed-lag estimator, we executed Algorithm~\ref{alg:fixed-lag:SMC} on a considerably longer observation record comprising $3500$ values generated by simulation. The number of particles was set to $N = 5000$. Guided by the outcome of the previous experiment, we furnished the estimated predictor means with variance estimates obtained in the same sweep of the data using the fixed-lag estimator with $\lambda = 20$. In parallel, the CLE was computed on the same realisation of the particle cloud. Finally, the brute-force approach estimating the asymptotic variances on the basis of $1200$ replicates of the predictor mean sequence was re-applied as a reference. Figure~\ref{fig:variance:evolution:short} displays the time evolution of the variance estimates over the initial $200$ time steps, where only estimates corresponding to every second time step have been plotted for readability. As evident from the top panel, the CLE tracks the reference values well at most time points for this relatively limited time horizon, even though slight numerical instability may be discerned towards the very end of the plot. In addition, the fixed-lag estimator stays close to the reference values at most time points, with a variance that is somewhat smaller than that of the CLE. In order to display the estimators' long-term stability properties, Figure~\ref{fig:variance:evolution:long} provides the analogous plot for the full observation record comprising $3500$ values. Again, for readability, only estimates corresponding to every $35^\mathrm{th}$ time step have been plotted. Now, as is clear from the top panel, the estimates produced by the CLE degenerate rapidly, and after, say, $1500$ time steps the CLE completely loses track of the reference values. From time step $2871$, all particles in the cloud share the same Eve index, and the CLE collapses to zero. On the contrary, the bottom panel makes it evident that the estimates delivered by the fixed-lag estimator stay numerically stable and remain close to the reference values at most time points.
In particular, the variance peak arising as a result of extreme state process behavior at time $3395$ is captured strikingly well by the fixed-lag estimator. \begin{figure} \caption{Long-term evolution of estimated asymptotic variances of particle predictor means in the stochastic volatility model \eqref{eq:sto:vol:model} over the initial $200$ time steps.} \label{fig:variance:evolution:short} \end{figure} \begin{figure} \caption{The same plot as in Figure~\ref{fig:variance:evolution:short}, but for the full observation record comprising $3500$ values.} \label{fig:variance:evolution:long} \end{figure} \subsection{SMC confidence bounds} \label{sec:lin:Gauss:model} The \emph{linear Gaussian state-space model} \begin{equation} \label{eq:lin:Gauss:model} \begin{split} X_{n + 1} &= \varphi X_n + \sigma_u U_{n + 1}, \\ Y_n &= X_n + \sigma_v V_n, \end{split} \quad n \in \mathbb{N}, \end{equation} where $\{ U_n \}_{n \in \mathbb{N}pos}$ and $\{ V_n \}_{n \in \mathbb{N}}$ are again sequences of uncorrelated standard Gaussian noise, allows predictor means to be computed in closed form using \emph{Kalman prediction} (see, e.g., \cite[Algorithm~5.2.9]{cappe:moulines:ryden:2005}). Thus, allowing comparisons with true quantities to be made, the class of linear Gaussian models is often used as a testing lab for SMC algorithms. In the setting where $|\varphi| < 1$, the state and observation processes are stationary, with zero-mean Gaussian marginal stationary distributions with variances $\tilde{\sigma}^2$ and $\tilde{\sigma}^2 + \sigma_v^2$, respectively, where $\tilde{\sigma}^2 \vcentcolon= \sigma_u^2 / (1 - \varphi^2)$. We let the former initialise the state process. Arguing along the lines of Section~\ref{sec:stochastic:volatility}, Assumptions~\A[assum:likelihoodDrift]{assum:majo-g} are checked straightforwardly also for this model. We leave the details to the reader. After having generated, for the parameterisation $(\varphi, \sigma_u, \sigma_v) = (.98, .2, 1)$, an observation record of $600$ observations, the experiment in Section~\ref{sec:lag:size:influence} was repeated for the same set of lag sizes $\lambda$. As in the stochastic volatility example, the evolution of the $N = 4000$ particles followed the same model dynamics as that generating the observations, and the reference value $1.102$ of the variance at $n = 600$ was obtained on the basis of $1000$ predictor mean replicates. The outcome is reported in Figure~\ref{fig:boxplots:lin:Gauss} and Table~\ref{tab:lags:means:stds:lin:Gauss}, which for this model indicate an even more robust performance of our estimator with respect to the lag size; indeed, more or less all the lag sizes in the interval $\intvect{12}{22}$ yield negligible biases, with only a slight increase of variance for the larger ones. According to Table~\ref{tab:lags:means:stds:lin:Gauss}, $\lambda = 18$ yields the minimal bias. As in the previous example, the CLE (corresponding to the lag size $\lambda = 600$) exhibits very unstable performance due to the relatively high $n$-to-$N$ ratio, with at least $75\%$ of its estimates falling below the reference value. \begin{figure} \caption{Estimated asymptotic variances of the particle predictor mean at time $600$ in the linear Gaussian model \eqref{eq:lin:Gauss:model}.} \label{fig:boxplots:lin:Gauss} \end{figure} \begin{table}[H] \begin{center} \begin{tabular}{c|c|c} \toprule $\lambda$ & Mean & St. dev.
\\ \midrule $2$ & $.524$ & $.035$ \\ $10$ & $1.080$ & $.157$ \\ $12$ & $1.095$ & $.163$ \\ $14$ & $1.095$ & $.162$ \\ $16$ & $1.096$ & $.175$ \\ $18$ & $1.099$ & $.190$ \\ $20$ & $1.094$ & $.198$ \\ $22$ & $1.093$ & $.202$ \\ $50$ & $1.071$ & $.246$ \\ $100$ & $.976$ & $.370$ \\ $200$ & $.944$ & $.471$ \\ $600$ & $.751$ & $.593$ \\ \bottomrule \end{tabular} \end{center} \caption{Means and standard deviations of the variance estimates reported in Figure~\ref{fig:boxplots:lin:Gauss}. The reference value is $1.102$.} \label{tab:lags:means:stds:lin:Gauss} \end{table} Finally, using the lag $\lambda = 18$ extracted from the previous simulation, Algorithm~\ref{alg:fixed-lag:SMC} was re-run $150$ times on the same observation record $\chunk{y}{0}{n - 1}$, $n = 600$, each run producing a sequence $\{ predpart[\chunk{y}{0}{m - 1}](\operatorname{id}) \}_{m = 0}^n$ of particle predictor means, a sequence $\{ \varest{\chunk{y}{0}{m - 1}}[\lambdatime{m}{\lambda}](\operatorname{id}) \}_{m = 0}^n$ of fixed-lag variance estimates, and associated approximate $95\%$ confidence intervals $\{ I_m \}_{m = 0}^n$, each interval given by \begin{equation} \label{eq:confidence:bound} I_m = \left(predpart[\chunk{y}{0}{m - 1}] \operatorname{id} pm \lambda_{.025} \frac{\varest{\chunk{y}{0}{m - 1}}[\lambdatime{m}{\lambda}](\operatorname{id})}{\sqrt{N}} \right), \end{equation} where $\lambda_{.025}$ denotes the $2.5\%$ quantile of the standard Gaussian distribution. As before, $N$ was set to $4000$. In the case of effective variance estimation, one may expect each $I_m$ to fail to cover the true predictor mean $pred[\chunk{y}{0}{m - 1}](\operatorname{id})$ with a probability close to $5\%$. Since we are in the setting of a linear Gaussian model, the exact predictor means $\{ pred[\chunk{y}{0}{m - 1}](\operatorname{id}) \}_{m = 0}^n$ are accessible through Kalman prediction, and we are thus able to assess the failure rates of the confidence intervals at different time points. Such failure rates are reported in Figure~\ref{fig:stemplot:FRs}, where the red dashed line indicates the nominal rate $5\%$. For readability, only every $10^{\mathrm{th}}$ time point is reported. Appealingly, it is clear that the rates fluctuate around $5\%$ without any notable time dependence. This re-confirms the numerical stability of our fixed-lag variance estimator. The average failure rate across all $600$ time points was $5.5\%$. The fact that the average failure rate is slightly above the nominal rate $5\%$ is in line with the fact that the bias of our estimator is always positive, as underestimation of the variance leads to narrower confidence bounds and, consequently, higher failure rates. One way of hedging against underestimation of the variance could be to replace the Gaussian quantiles by the quantiles of some Student's $t$-distribution with a moderate number of degrees of freedom. \begin{figure} \caption{Failure rates of the confidence bounds \eqref{eq:confidence:bound}.} \label{fig:stemplot:FRs} \end{figure} \section{Conclusion} \label{sec:conclusion} The estimator of the SMC asymptotic variance that we propose is a natural modification of the CLE introduced in \cite{chan:lai:2013}. As in \cite{kitagawa:sato:2001,olsson:cappe:douc:moulines:2006}, the main idea is to reduce the degree of genealogical tracing, which has a devastating effect on the CLE's numerical stability, at the cost of a small bias, which may be controlled using the forgetting properties of the model.
That this measure numerically stabilises the estimator in the long run is confirmed by our theoretical results in Section~\ref{sec:theoretical:results}, which are obtained under what we believe to be minimal model assumptions that are satisfied also by many models with possibly non-compact state space. The fact that the bias can be shown to decrease geometrically fast as the lag increases indicates that tight control of the bias is possible also for moderately large particle sample sizes. This is confirmed by our numerical experiments in Section~\ref{sec:numerical:study}, which report, in the examples under consideration, a negligible bias already for some thousands of particles. Needless to say, the success of our approach depends highly on the interplay between the forgetting properties of the model, the particle sample size, and the choice of the lag. Adaptive lag design is hence a natural direction for future research. Moreover, as our estimator provides numerically stable estimates of the asymptotic variance, it should be highly useful for online SMC sample size adaptation. Here one natural approach could be to estimate the variance of the next time step using a part of the particle population (\emph{pilot sampling}) and then ``refuel'' the particle system at time steps of high variance (here the techniques developed in \cite{lee:whiteley:2016}, where the authors consider the SMC sample allocation problem in the batch mode, should be useful for the theoretical analysis). Finally, casting our estimator and analysis into the framework of \emph{Rao-Blackwellised SMC algorithms}, using the results obtained in \cite{lindsten:shoen:olsson:2011}, should be of high relevance for high-dimensional applications. \appendix \section{Proofs} \label{sec:proofs} \subsection{Proof of Proposition~\ref{prop:consistency:fixed:lag}} \label{sec:proof:consistency:fixed:lag} The proof of Proposition~\ref{prop:consistency:fixed:lag} relies on the machinery developed in \cite{lee:whiteley:2016}, from which we adopt the following definitions. Throughout this section, let $n \in \mathbb{N}$ and $\chunk{z}{0}{n - 1} \in \mathsf{Z}^n$ be picked arbitrarily. \begin{itemize} \item Denote by $\Binsp{n} \vcentcolon= \{0, 1\}^{n + 1}$ the space of binary strings of length $n + 1$. The \emph{zero string} of length $n + 1$ is denoted by $\zerostr{n}$ and for $m \in \intvect{0}{n}$, $\unitstr{m}{n}$ denotes a \emph{unit string} of length $n + 1$ with $1$ on position $m$ (with positions indexed from 0) and zeros everywhere else. \item For a given string $\chunk{b}{0}{n} \in \Binsp{n}$, a Markov chain $\{ (X_m, X_m') \}_{m = 0}^n$ on $(\mathsf{X}^2, \mathcal{X}^{\varotimes 2})$ is defined as follows. If $b_0 = 0$, then $(X_0, X_0') \sim \chi^{\varotimes 2}$; otherwise, if $b_0 = 1$, $X_0' = X_0 \sim \chi$ (the initial distribution). After this, if $b_{m + 1} = 0$, then $X_{m + 1} \sim \kernel{M}(X_m, \cdot)$ and $X_{m + 1}' \sim \kernel{M}(X_m', \cdot)$ conditionally independently; otherwise, if $b_{m + 1} = 1$, $X_{m + 1}' = X_{m + 1} \sim \kernel{M}(X_m, \cdot)$. \item With $\mathbb{E}_{\chunk{b}{0}{n}}$ denoting the expectation under the law of $\{ (X_m, X_m') \}_{m = 0}^n$, we define, for all $\chunk{b}{0}{n} \in \Binsp{n}$, the measures $$ \mu_{\chunk{b}{0}{n}} \langle \chunk{z}{0}{n - 1} \rangle : \mathcal{X}^{\varotimes 2} \ni A \mapsto \mathbb{E}_{\chunk{b}{0}{n}} \left[ \mathbbm{1}_A(X_n, X_n') prod_{m = 0}^{n - 1} pot[z_m](X_m) pot[z_m](X_m') \right].
$$ Note that for all $h \in \bmf{\mathcal{X}}$ it holds that $\mu_{\zerostr{n}} \langle \chunk{z}{0}{n - 1} \rangle h^{\varotimes 2} = (\chi \uk[ \chunk{z}{0}{n - 1}] h)^2$ and $\mu_{\unitstr{m}{n}} \langle \chunk{z}{0}{n - 1} \rangle h^{\varotimes 2} = \chi \uk[\chunk{z}{0}{m - 1}] \mathbbm{1}_\mathsf{X} \times \chi \uk[\chunk{z}{0}{m - 1}](\uk[\chunk{z}{m}{n - 1}] h)^2$, and defining $$ \term[\chunk{z}{0}{n - 1}]{m}{n} : \bmf{\mathcal{X}} \ni h \mapsto \frac{\mu_{\unitstr{m}{n}} \langle \chunk{z}{0}{n - 1} \rangle h^{\varotimes 2} - \mu_{\zerostr{n}} \langle \chunk{z}{0}{n - 1} \rangle h^{\varotimes 2}}{(\chi \uk[\chunk{z}{0}{n - 1}] \mathbbm{1}_\mathsf{X})^2} $$ yields for all $h \in \bmf{\mathcal{X}}$, $$ \term[\chunk{z}{0}{n - 1}]{m}{n}(h) = \frac{pred[\chunk{z}{0}{m - 1}] (\uk[\chunk{z}{m}{n - 1}] h)^2}{(pred[\chunk{z}{0}{m - 1}] \uk[\chunk{z}{m}{n - 1}] \mathbbm{1}_{\mathsf{X}})^2} - (pred[\chunk{z}{0}{n - 1}] h)^2 $$ and, consequently, for all $\ell \in \intvect{0}{n}$, \begin{equation} \label{eq:variance:alt:expression} \variance[2]{\chunk{z}{0}{n - 1}}[\ell](h) = \sum_{m = \ell}^n \term[\chunk{z}{0}{n - 1}]{m}{n} (h - pred[\chunk{z}{0}{n - 1}] h). \end{equation} \item For all $N \in \mathbb{N}pos$, let Ê$partfd{n} \vcentcolon= \sigma( \{ \epart{0}{i} \}_{i = 1}^N, \{Ê\epart{m}{i}, \ind{m}{i} \}_{i = 1}^N ; m \in \intvect{1}{n} )$ be the $\sigma$-field generated by the output of Algorithm~\ref{alg:SMC} during the first $n$ iterations. Conditionally on $partfd{n}$, a genealogical trace $\chunk{\gen[1]}{0}{n}$ is formed backwards in time by, first, drawing $\gen[1]_n$ uniformly over $\intvect{1}{N}$ and, second, setting $\gen[1]_m = \ind{m + 1}{\gen[1]_{m + 1}}$ for all $m \in \intvect{0}{n - 1}$. In addition, a parallel trace $\chunk{\gen[2]}{0}{n}$ is formed by, first, drawing $\gen[2]_n$ uniformly over $\intvect{1}{N}$ and, second, letting $\gen[2]_m = \ind{m + 1}{\gen[2]_{m + 1}}$ if $\gen[2]_{m + 1} \neq \gen[1]_{m + 1}$ or $\gen[2]_m \sim \mathsf{Cat}(\{ \wgt{m}{i} / \wgtsum{m} \}_{i = 1}^N)$ otherwise. \end{itemize} The proof of the following lemma follows closely that of \cite[Lemma~4]{lee:whiteley:2016} and is hence omitted. Define for all $\ell \in \intvect{0}{n}$ and $\chunk{b}{0}{\ell} \in \Binsp{\ell}$, $$ \binset{\ell}(\chunk{b}{0}{\ell}) \vcentcolon= \left \{Ê(\chunk{k}{0}{\ell}, \chunk{k'}{0}{\ell}) \in \intvect{1}{N}^{2(\ell + 1)} : \mbox{for all } \ell' \in \intvect{0}{\ell}, \ k_{\ell'} = k'_{\ell'} \Leftrightarrow b_{\ell'} = 1 \right \}. $$ \begin{lemma} \label{lemma:equiv:sets} For all $N \in \mathbb{N}pos$ and $m \in \intvect{0}{n}$, $$ \left \{ \enoch{m}{n}{\gen[1]_n} \neq \enoch{m}{n}{\gen[2]_n} \right \} = \left \{ (\chunk{\gen[1]}{m}{n}, \chunk{\gen[2]}{m}{n}) \in \binset{n - m}(0_{n - m}) \right \}. 
$$ \end{lemma} In addition, define, for all $N \in \mathbb{N}pos$, the measures \begin{equation} \label{eq:def:part:gamma} \unpredpart[\chunk{z}{0}{n - 1}] : \mathcal{X} \ni A \mapsto \frac{1}{N^{n + 1}} \left( prod_{m = 0}^{n - 1} \wgtsum{m} \right) \sum_{i = 1}^N \mathbbm{1}_A(\epart{n}{i}) \end{equation} and \begin{multline} \label{eq:mu:meas} \mumeas{N, \chunk{b}{0}{n}}{\chunk{z}{0}{n - 1}} : \mathcal{X}^{\varotimes 2} \ni A \mapsto N^{\#_1(\chunk{b}{0}{n})} \left( \frac{N}{N - 1} \right)^{\#_0(\chunk{b}{0}{n})} \left( \unpredpart[\chunk{z}{0}{n - 1}] \mathbbm{1}_\mathsf{X} \right)^2 \\ \times \mathbb{E} \left[ \mathbbm{1}_A \left( \epart{n}{\gen[1]_n}, \epart{n}{\gen[2]_n} \right) \mathbbm{1} \left \{ (\chunk{\gen[1]}{0}{n}, \chunk{\gen[2]}{0}{n}) \in \binset{n}(\chunk{b}{0}{n}) \right \} \mid partfd{n} \right], \end{multline} where $\#_1(\chunk{b}{0}{n}) \vcentcolon= \sum_{m = 0}^n b_m$ and $\#_0(\chunk{b}{0}{n}) \vcentcolon= n + 1 - \#_1(\chunk{b}{0}{n})$ denote the numbers of ones and zeros in $\chunk{b}{0}{n}$, respectively. Note that \eqref{eq:def:part:gamma} implies that for all $h \in \bmf{\mathcal{X}}$, $predpart[\chunk{z}{0}{n - 1}] h = \unpredpart[\chunk{z}{0}{n - 1}] h / \unpredpart[\chunk{z}{0}{n - 1}] \mathbbm{1}_\mathsf{X}$. The following lemma, whose first part is established in \cite[Theorem~2]{lee:whiteley:2016} and whose second part is a standard result (see, e.g., \cite{douc:moulines:2008} for results on the weak consistency of SMC), will be instrumental. \begin{lemma} \label{lemma:mu:convergence} For all $\chunk{b}{0}{n} \in \Binsp{n}$ and $h \in \bmf{\mathcal{X}^{\varotimes 2}}$, as $N \rightarrow \infty$, $$ \mumeas{N, \chunk{b}{0}{n}}{\chunk{z}{0}{n - 1}} h plim \mumeas{\chunk{b}{0}{n}}{\chunk{z}{0}{n - 1}} h. $$ In addition, for all $h \in \bmf{\mathcal{X}}$, $$ \unpredpart[\chunk{z}{0}{n - 1}] h plim \chi \uk[\chunk{z}{0}{n - 1}] h. $$ \end{lemma} \begin{proof}[Proof of Proposition~\ref{prop:consistency:fixed:lag}] Fix $\ell \in \intvect{0}{n}$ and define for all $N \in \mathbb{N}pos$, \begin{multline*} \varphi_{N, \ell} \langle \chunk{z}{0}{n - 1} \rangle : \mathcal{X}^{\varotimes 2} \ni A \mapsto \left( \unpredpart[\chunk{z}{0}{n - 1}] \mathbbm{1}_\mathsf{X} \right)^2 \\ \times \mathbb{E} \left[ \mathbbm{1}_A \left( \epart{n}{\gen[1]_n}, \epart{n}{\gen[2]_n} \right) \mathbbm{1} \left \{ (\chunk{\gen[1]}{\ell}{n}, \chunk{\gen[2]}{\ell}{n}) \in \binset{n - \ell}(0_{n - \ell}) \right \} \mid partfd{n} \right]. \end{multline*} First, note that by Lemma~\ref{lemma:equiv:sets}, since $\gen[1]_n$ and $\gen[2]_n$ are conditionally independent and uniformly distributed over $\intvect{1}{N}$, for all $h \in \bmf{\mathcal{X}}$, \begin{equation} \label{eq:estimator:alt:expression} \begin{split} \frac{1}{N^2} \sum_{i = 1}^N \left( \sum_{j : \enoch{\ell}{n}{j} = i} h(\epart{n}{j}) \right)^2 &= \frac{1}{N^2} \sum_{(i, j) : \enoch{\ell}{n}{i} = \enoch{\ell}{n}{j}} h(\epart{n}{i}) h(\epart{n}{j}) \\ &= (predpart[\chunk{z}{0}{n - 1}] h)^2 - \frac{1}{N^2} \sum_{(i, j) : \enoch{\ell}{n}{i} \neq \enoch{\ell}{n}{j}} h(\epart{n}{i}) h(\epart{n}{j}) \\ &= (predpart[\chunk{z}{0}{n - 1}] h)^2 - \frac{\varphi_{N, \ell} \langle \chunk{z}{0}{n - 1} \rangle h^{\varotimes 2}}{(\unpredpart[\chunk{z}{0}{n - 1}] \mathbbm{1}_\mathsf{X})^2}.
\end{split} \end{equation} It is hence enough to prove that for all $N \in \mathbb{N}pos$ and $h \in \bmf{\mathcal{X}}$, \begin{multline} \label{eq:sufficient:condition} N \left \{ (\unpredpart[\chunk{z}{0}{n - 1}] h)^2 - \varphi_{N, \ell} \langle \chunk{z}{0}{n - 1} \rangle h^{\varotimes 2} \right\} \\ = \sum_{m = \ell}^n \left( \mumeas{N, \unitstr{m}{n}}{\chunk{z}{0}{n - 1}} h^{\varotimes 2} - \mumeas{N, \zerostr{n}}{\chunk{z}{0}{n - 1}} h^{\varotimes 2} \right) + (n - \ell + 1) (\unpredpart[\chunk{z}{0}{n - 1}] h)^2 \\ + \| h \|_\infty^2 \mathcal{O}(N^{-1}), \end{multline} where the $\mathcal{O}(N^{-1})$ term does not depend on $h$; indeed, along the lines of the proof of \cite[Theorem~1]{lee:whiteley:2016}, Lemma~\ref{lemma:mu:convergence} implies that for all $\chunk{b}{0}{n} \in \Binsp{n}$, as $N \rightarrow \infty$, $$ \mumeas{N, \chunk{b}{0}{n}}{\chunk{z}{0}{n - 1}} \{ h - predpart[\chunk{z}{0}{n - 1}] h \}^{\varotimes 2} plim \mumeas{\chunk{b}{0}{n}}{\chunk{z}{0}{n - 1}} \{ h - pred[\chunk{z}{0}{n - 1}] h \}^{\varotimes 2}, $$ and \eqref{eq:sufficient:condition} hence yields, with $\ell = \lambdatime{n}{\lambda}$ and when combined with \eqref{eq:estimator:alt:expression} and \eqref{eq:variance:alt:expression}, again as $N \rightarrow \infty$, \begin{multline} \label{eq:critical:identity} \varest[2]{\chunk{z}{0}{n - 1}}[\lambda](h) \\ = \sum_{m = \lambdatime{n}{\lambda}}^n \frac{\mumeas{N, \unitstr{m}{n}}{\chunk{z}{0}{n - 1}} \{ h - predpart[\chunk{z}{0}{n - 1}] h \}^{\varotimes 2} - \mumeas{N, 0_n}{\chunk{z}{0}{n - 1}} \{ h - predpart[\chunk{z}{0}{n - 1}] h \}^{\varotimes 2}}{(\unpredpart[\chunk{z}{0}{n - 1}] \mathbbm{1}_\mathsf{X})^2} \\ + \| h \|_\infty^2 \mathcal{O}(N^{-1}) plim \variance[2]{\chunk{z}{0}{n - 1}}[\lambdatime{n}{\lambda}](h).
\end{multline} In order to establish \eqref{eq:sufficient:condition}, write, using that $\gen[1]_n$ and $\gen[2]_n$ are, given $ partfd{n}$, conditionally independent and uniformly distributed over $\intvect{1}{N}$, \begin{align} \label{eq:estimator:difference:form} \lefteqn{(\unpredpart[\chunk{z}{0}{n - 1}] h)^2 - \varphi_{N, \ell} \langle \chunk{z}{0}{n - 1} \rangle h^{\varotimes 2}} \nonumber \\ &= (\unpredpart[\chunk{z}{0}{n - 1}] \mathbbm{1}_\mathsf{X})^2 \sum_{\chunk{b}{0}{n} \in \Binsp{n}} \mathbb{E} \left[ h\big( \epart{n}{\gen[1]_n} \big) h \big( \epart{n}{\gen[2]_n} \big) \mathbbm{1} \left \{Ê(\chunk{\gen[1]}{0}{n}, \chunk{\gen[2]}{0}{n}) \in \binset{n}(\chunk{b}{0}{n}) \right \} \mid partfd{n} \right] \nonumber \\ &- (\unpredpart[\chunk{z}{0}{n - 1}] \mathbbm{1}_\mathsf{X})^2 \sum_{\chunk{b}{0}{n} \in \Binsp{n} : \chunk{b}{\ell}{n} = 0_{n - \ell}} \mathbb{E} \left[ h\big( \epart{n}{\gen[1]_n} \big) h \big( \epart{n}{\gen[2]_n} \big) \mathbbm{1} \left \{Ê(\chunk{\gen[1]}{0}{n}, \chunk{\gen[2]}{0}{n}) \in \binset{n}(\chunk{b}{0}{n}) \right \} \mid partfd{n} \right] \end{align} and note that, by definition \eqref{eq:mu:meas}, \begin{align} \label{eq:estimator:first:term} \lefteqn{(\unpredpart[\chunk{z}{0}{n - 1}] \mathbbm{1}_\mathsf{X})^2 \sum_{\chunk{b}{0}{n} \in \Binsp{n}} \mathbb{E} \left[ h \big( \epart{n}{\gen[1]_n} \big) h \big( \epart{n}{\gen[2]_n} \big) \mathbbm{1} \left \{Ê(\chunk{\gen[1]}{0}{n}, \chunk{\gen[2]}{0}{n}) \in \binset{n}(\chunk{b}{0}{n}) \right \} \mid partfd{n} \right]} \nonumber \\Ê &= \frac{1}{N} \sum_{m = 0}^n \mumeas{N, \unitstr{m}{n}}{\chunk{z}{0}{n - 1}} h^{\varotimes 2} + \left( 1 - \frac{1}{N} \right)^{n + 1} \mumeas{N, 0_n}{\chunk{z}{0}{n - 1}} h^{\varotimes 2} + \| h \|_\infty^2 \mathcal{O}(N^{-2}) \nonumber \\ &= \frac{1}{N} \sum_{m = 0}^n \left( \mumeas{N, \unitstr{m}{n}}{\chunk{z}{0}{n - 1}} h^{\varotimes 2} - \mumeas{N, \zerostr{n}}{\chunk{z}{0}{n - 1}} h^{\varotimes 2} \right) + \mumeas{N, 0_n}{\chunk{z}{0}{n - 1}} h^{\varotimes 2} + \| h \|_\infty^2 \mathcal{O}(N^{-2}). \end{align} Similarly, \begin{multline} \label{eq:estimator:second:term} \lefteqn{(\unpredpart[\chunk{z}{0}{n - 1}] \mathbbm{1}_\mathsf{X})^2 \sum_{\chunk{b}{0}{n} \in \Binsp{n} : \chunk{b}{\ell}{n} = 0_{n - \ell}} \mathbb{E} \left[ h\big( \epart{n}{\gen[1]_n} \big) h \big( \epart{n}{\gen[2]_n} \big) \mathbbm{1} \left \{Ê(\chunk{\gen[1]}{0}{n}, \chunk{\gen[2]}{0}{n}) \in \binset{n}(\chunk{b}{0}{n}) \right \} \mid partfd{n} \right]} \\ = \frac{1}{N} \sum_{m = 0}^{\ell - 1} \left( \mumeas{N, \unitstr{m}{n}}{\chunk{z}{0}{n - 1}} h^{\varotimes 2} - \mumeas{N, \zerostr{n}}{\chunk{z}{0}{n - 1}} h^{\varotimes 2} \right) + \mumeas{N, 0_n}{\chunk{z}{0}{n - 1}} h^{\varotimes 2} \\Ê - \frac{n - \ell + 1}{N} \mumeas{N, 0_n}{\chunk{z}{0}{n - 1}} h^{\varotimes 2} + \| h \|_\infty^2 \mathcal{O}(N^{-2}), \end{multline} and combining \eqref{eq:estimator:difference:form}, \eqref{eq:estimator:first:term}, \eqref{eq:estimator:second:term}, and the fact that $\mumeas{0_n}{\chunk{z}{0}{n - 1}} h^{\varotimes 2} = (\chi \uk[\chunk{z}{0}{n - 1}] h)^2$ yields \eqref{eq:sufficient:condition}. This completes the proof. \end{proof} \subsection{Proof of Theorem~\ref{thm:tightness:bias}} In the proof of Theorem~\ref{thm:tightness:bias}, the asymptotic bias is bounded using the time-shift approach taken in \cite[Theorem~10]{douc:moulines:olsson:2014}. 
Even though the theoretical analysis provided in \cite{douc:moulines:olsson:2014} is cast into the framework of general state-space models, it never makes use of the fact that $pot$ is a normalised transition density. As stressed in \cite[Remark~1]{douc:moulines:olsson:2014}, it is hence directly applicable to the framework of randomly perturbed Feynman-Kac models in Section~\ref{sec:Feynman:Kac:models}. \begin{proof} Pick arbitrarily $n \in \mathbb{N}$, $\lambda \in \mathbb{N}$, $h \in \bmf{\mathcal{X}}$, and $\chi \in mr(D, r)$. By defining, for all $m \in \mathbb{N}$, $\ell \in \intvect{0}{m - 1}$, $\chunk{z}{\ell}{m - 1} \in \mathsf{Z}^{m - \ell}$, and measures $(\mu, \mu') \in probmeas{\mathcal{X}}^2$, \begin{multline} \label{eq:def-Delta} \Delta_{\mu, \mu'} \langle \chunk{z}{\ell}{m - 1} \rangle : \bmf{\mathcal{X}}^2 \ni (h, h') \mapsto \mu \uk[\chunk{z}{\ell}{m - 1}] h \times \mu' \uk[\chunk{z}{\ell}{m - 1}] h' \\ - \mu \uk[\chunk{z}{\ell}{m - 1}] h' \times \mu' \uk[\chunk{z}{\ell}{m - 1}] h, \end{multline} we may, using the identity $$ pred[\chunk{per}{0}{m - 1}] \uk[\chunk{per}{m}{n - 1}] \mathbbm{1}_{\mathsf{X}} = \frac{\chi\uk[\chunk{per}{0}{n - 1}] \mathbbm{1}_{\mathsf{X}}}{\chi\uk[\chunk{per}{0}{m - 1}] \mathbbm{1}_{\mathsf{X}}} = prod_{\ell = m}^{n - 1} \frac{\chi\uk[\chunk{per}{0}{\ell}] \mathbbm{1}_{\mathsf{X}}}{\chi\uk[\chunk{per}{0}{\ell - 1}] \mathbbm{1}_{\mathsf{X}}} = prod_{\ell = m}^{n - 1} pred[\chunk{per}{0}{\ell - 1}] pot[per_\ell], $$ for all $m \in \intvect{0}{n}$, write the asymptotic bias at time $n$ as \begin{equation} \label{bias:alternative:form} \bias{\chunk{per}{0}{n - 1}}{\lambda}(h) = \sum_{m = 0}^{\lambdatime{n}{\lambda} - 1} \int pred[\chunk{per}{0}{m - 1}](\mathrm{d} x) \left( \frac{\DDelta{\delta_x,pred[\chunk{per}{0}{m - 1}]}{\chunk{per}{m}{n - 1}}{h, \mathbbm{1}_{\mathsf{X}}}} {[prod_{\ell = m}^{n - 1} pred[\chunk{per}{0}{\ell - 1}] pot[per_\ell]]^2} \right)^2. \end{equation} In addition, under the assumptions of the theorem, \cite[Proposition~1]{douc:moulines:2012} provides the existence of a function $pi: \mathsf{Z}^\infty \to \mathbb{R}$ such that for all initial distributions $\chi \in mr(D, r)$, $$ \lim_{m \to \infty} pred[\chunk{per}{-m}{- 1}] pot[per_0] = \limitfunc{\chunk{per}{- \infty}{0}}, \quad prob\mbox{-a.s.} $$ Since the perturbations $\{ per_n \}_{n \in \mathbb{Z}}$ are stationary, the distribution of $\bias{\chunk{per}{0}{n - 1}}{\lambda}(h)$ coincides with that of the time-shifted bias $\bias{\chunk{per}{-n}{- 1}}{\lambda}(h)$, and a key step in the present proof is to express, via \eqref{bias:alternative:form}, the latter as \begin{multline*} \bias{\chunk{per}{-n}{- 1}}{\lambda}(h) = \\Ê \sum_{m = 0}^{\lambdatime{n}{\lambda} - 1} \int pred[\chunk{per}{-n}{- n + \lambdatime{n}{\lambda} - m - 2}](\mathrm{d} x) \left( \frac{\DDelta{\delta_x,pred[\chunk{per}{-n}{- n + \lambdatime{n}{\lambda} - m - 2}]}{\chunk{per}{- n + \lambdatime{n}{\lambda} - m - 1}{-1}}{h, \mathbbm{1}_{\mathsf{X}}}}{[prod_{\ell = 1}^{n - \lambdatime{n}{\lambda} + m + 1} pred[\chunk{per}{-n}{- \ell - 1}] pot[per_{- \ell}]]^2 } \right)^2. 
\end{multline*} We hence obtain the bound \begin{equation} \label{eq:fundamental:bias:bound} \bias{\chunk{per}{-n}{-1}}{\lambda}(h) \leq A_n \times B_n, \end{equation} where \begin{align*} A_n &\vcentcolon= \left(\sup_{(k, m) \in \mathbb{Z}^2 : \, -n \leq k \leq m} prod_{\ell = k}^{m - 1} \frac{\limitfunc{\chunk{per}{-\infty}{\ell}}}{pred[\chunk{per}{-n}{\ell - 1}] pot[per_\ell]} \right)^4, \\ B_n &\vcentcolon= \sum_{m = 0}^{\lambdatime{n}{\lambda} - 1} \left( \frac{\sup_{x \in \mathsf{X}} |\DDelta{\delta_x, pred[\chunk{per}{-n}{- n + \lambdatime{n}{\lambda} - m - 2}]}{\chunk{per}{- n + \lambdatime{n}{\lambda} - m - 1}{-1}}{h, \mathbbm{1}_\mathsf{X}}|}{[prod_{\ell = 1}^{n - \lambdatime{n}{\lambda} + m + 1} \limitfunc{\chunk{per}{-\infty}{- \ell}}]^2} \right)^2. \end{align*} To bound uniformly the sequence $\{ B_n \}_{n \in \mathbb{N}}$, decompose each term according to \begin{multline} \label{eq:termwise:decomposition:Bn} \frac{\sup_{x \in \mathsf{X}} |\DDelta{\delta_x, pred[\chunk{per}{-n}{- n + \lambdatime{n}{\lambda} - m - 2}]}{\chunk{per}{- n + \lambdatime{n}{\lambda} - m - 1}{-1}}{h, \mathbbm{1}_\mathsf{X}}|}{[prod_{\ell = 1}^{n - \lambdatime{n}{\lambda} + m + 1} \limitfunc{\chunk{per}{-\infty}{- \ell}}]^2} \\ = \left( \frac{\| \uk[\chunk{per}{- n + \lambdatime{n}{\lambda} - m - 1}{-1}] \mathbbm{1}_\mathsf{X} \|_\infty}{prod_{\ell = 1}^{n - \lambdatime{n}{\lambda} + m + 1} \limitfunc{\chunk{per}{-\infty}{- \ell}}} \right)^2 \\ \times \frac{\sup_{x \in \mathsf{X}} |\DDelta{\delta_x, pred[\chunk{per}{-n}{- n + \lambdatime{n}{\lambda} - m - 2}]}{\chunk{per}{- n + \lambdatime{n}{\lambda} - m - 1}{-1}}{h, \mathbbm{1}_\mathsf{X}}|}{\| \uk[\chunk{per}{- n + \lambdatime{n}{\lambda} - m - 1}{-1}] \mathbbm{1}_\mathsf{X} \|_\infty^2}. \end{multline} We consider separately the two factors of \eqref{eq:termwise:decomposition:Bn}. First, \begin{equation} \label{eq:first:factor:Bn:deomposition} \left( \frac{\| \uk[\chunk{per}{- n + \lambdatime{n}{\lambda} - m - 1}{-1}] \mathbbm{1}_\mathsf{X} \|_\infty}{prod_{\ell = 1}^{n - \lambdatime{n}{\lambda} + m + 1} \limitfunc{\chunk{per}{-\infty}{- \ell}}} \right)^2 = \exp\{Ê(n - \lambdatime{n}{\lambda} + m + 1) \varepsilon_{n - \lambdatime{n}{\lambda} + m + 1} \}, \end{equation} with $$ \varepsilon_k \vcentcolon= \frac{2}{k} \left( \ln \| \uk[\chunk{per}{-k}{-1}] \mathbbm{1}_\mathsf{X} \|_\infty - \sum_{\ell = 1}^k \ln \limitfunc{\chunk{per}{-\infty}{- \ell}} \right) $$ being independent of $\chi$ for all $k \in \mathbb{N}pos$. By \cite[Lemma~17]{douc:moulines:olsson:2014}, $\varepsilon_k \to 0$, $prob$-a.s., as $k \to \infty$, which implies that \eqref{eq:first:factor:Bn:deomposition} grows at most subgeometrically fast with $m$. In addition, by \cite[Proposition~16(iii)]{douc:moulines:olsson:2014} there exists a constant $\rho \in (0, 1)$ and a $prob$-a.s. finite random variable $D$ such that for all $n$ and $m$, all $h \in \bmf{\mathcal{X}}$, and all $\chi \in probmeas{\mathcal{X}}$, $prob$-a.s, $$ \frac{\sup_{x \in \mathsf{X}} |\DDelta{\delta_x, pred[\chunk{per}{-n}{- n + \lambdatime{n}{\lambda} - m - 2}]}{\chunk{per}{- n + \lambdatime{n}{\lambda} - m - 1}{-1}}{h, \mathbbm{1}_\mathsf{X}}|}{\| \uk[\chunk{per}{- n + \lambdatime{n}{\lambda} - m - 1}{-1}] \mathbbm{1}_\mathsf{X} \|_\infty^2} \leq D \rho^{n - \lambdatime{n}{\lambda} + m + 1} \| h \|_\infty. 
$$ Thus, $prob$-a.s., $$ B_n \leq D^2 \| h \|_\infty^2 \sum_{m = 0}^{\lambdatime{n}{\lambda} - 1} \rho^{2(n - \lambdatime{n}{\lambda} + m + 1)} \exp\{ 2 (n - \lambdatime{n}{\lambda} + m + 1) \varepsilon_{n - \lambdatime{n}{\lambda} + m + 1} \}. $$ If $n \leq \lambda$, then $\lambdatime{n}{\lambda} = 0$, and the bias vanishes. Thus, we assume in the following that $\lambda < n$, which means that $\lambdatime{n}{\lambda} = n - \lambda$. Then, by the Cauchy-Schwarz inequality, $prob$-a.s., \begin{align} B_n &\leq D^2 \| h \|_\infty^2 \sum_{m = \lambda + 1}^\infty \rho^{2m} \exp(2m \varepsilon_m) \nonumber \\ &\leq D^2 \| h \|_\infty^2 \left( \sum_{m = \lambda + 1}^\infty \rho^{2 m} \right)^{1/2} \left( \sum_{m = \lambda + 1}^\infty \rho^{2 m} \exp(4 m \varepsilon_m) \right)^{1/2} \nonumber \\ &\leq D^2 \| h \|_\infty^2 \rho^{\lambda + 1} \left( \sum_{m = 0}^\infty \rho^{2 m} \right)^{1/2} \left( \sum_{m = 0}^\infty \rho^{2 m} \exp(4 m \varepsilon_m) \right)^{1/2} \nonumber \\ &= C \| h \|_\infty^2 \rho^{\lambda + 1}, \label{eq:bias:cauchy:bound} \end{align} where the random variable $$ C \vcentcolon= D^2 \left( \sum_{m = 0}^\infty \rho^{2 m} \right)^{1/2} \left( \sum_{m = 0}^\infty \rho^{2 m} \exp(4 m \varepsilon_m) \right)^{1/2} $$ is $prob$-a.s. finite and independent of $\lambda$, $h$, and $\chi$. For $c \in \mathbb{R}_+$, write, using \eqref{eq:fundamental:bias:bound} and \eqref{eq:bias:cauchy:bound}, $$ prob \left( \frac{\bias{\chunk{per}{0}{n - 1}}{\lambda}(h)}{\rho^{\lambda + 1} \| h \|_{\infty}^2} > c \right) = prob \left( \frac{\bias{\chunk{per}{-n}{- 1}}{\lambda}(h)}{\rho^{\lambda + 1} \| h \|_{\infty}^2} > c \right) \leq prob \left( A_n C > c \right), $$ where the probability on the right hand side is again independent of $\lambda$, $h$, and $\chi$. Thus, using the stationarity of $\{ A_n \}_{n \in \mathbb{N}}$, $$ prob \left( \frac{\bias{\chunk{per}{0}{n - 1}}{\lambda}(h)}{\rho^{\lambda + 1} \| h \|_{\infty}^2} > c \right) \leq prob \left( A_0 > c^{1/2} \right) + prob \left( C > c^{1/2} \right), $$ where the right hand side does not depend on $n$. Now, the $prob$-a.s. finiteness of $A_0$ was established as a part of the proof of \cite[Theorem~10]{douc:moulines:olsson:2014}. Consequently, since also $C$ is $prob$-a.s. finite, there exists, for all $k \in \mathbb{N}pos$, a constant $c_k \in \mathbb{R}_+$, independent of $\lambda$, $h$, and $\chi$, such that the probabilities $prob( A_0 > c^{1/2}_k)$ and $prob( C > c^{1/2}_k)$ are both bounded by $1/(2 k)$. This completes the proof. \end{proof} \subsection{Proof of Proposition~\ref{prop:consistency:fixed:lag:updated:case}} \begin{proof} The proof consists mainly in combining some of the equalities in the proof of Proposition~\ref{prop:consistency:fixed:lag} with the identity \begin{multline} \label{eq:function:prod:identity} \{ pot[z_n] (h - \filtpart[\chunk{z}{0}{n}] h) \}^{\varotimes 2} = (pot[z_n] h)^{\varotimes 2} - \{ (pot[z_n] h) \varotimes pot[z_n] \} \filtpart[\chunk{z}{0}{n}] h \\ - \{ pot[z_n] \varotimes (pot[z_n] h) \} \filtpart[\chunk{z}{0}{n}] h + pot[z_n]^{\varotimes 2} (\filtpart[\chunk{z}{0}{n}] h)^2.
\end{multline} More specifically, as it holds that \begin{equation} \label{eq:zero:identity} predpart[\chunk{z}{0}{n - 1}](pot[z_n] \{ h - \filtpart[\chunk{z}{0}{n}] h \})= 0, \end{equation} by reusing the equality in \eqref{eq:critical:identity}, \begin{multline} \label{eq:filtering:variance:numerator} \varest[2]{\chunk{z}{0}{n - 1}}[\lambda](pot[z_n] \{ h - \filtpart[\chunk{z}{0}{n}] h\}) = \\ \sum_{m = \lambdatime{n}{\lambda}}^n \frac{\mumeas{N, \unitstr{m}{n}}{\chunk{z}{0}{n - 1}} \{Êpot[z_n] (h - \filtpart[\chunk{z}{0}{n}] h) \}^{\varotimes 2} - \mumeas{N, 0_n}{\chunk{z}{0}{n - 1}} \{Êpot[z_n] (h - \filtpart[\chunk{z}{0}{n}] h) \}^{\varotimes 2}}{(\unpredpart[\chunk{z}{0}{n - 1}] \mathbbm{1}_\mathsf{X})^2} \\ + \| pot[z_n] \|_\infty^2 \| h \|_\infty^2 \mathcal{O}(N^{-1}). \end{multline} Now, write, using \eqref{eq:function:prod:identity}, for all $\chunk{b}{0}{n} \in \Binsp{n}$, \begin{multline} \label{eq:termwise:decomposition:updated:case} \mumeas{N, \chunk{b}{0}{n}}{\chunk{z}{0}{n - 1}} \{Êpot[z_n] (h - \filtpart[\chunk{z}{0}{n}] h) \}^{\varotimes 2} = \mumeas{N, \chunk{b}{0}{n}}{\chunk{z}{0}{n - 1}} (pot[z_n] h)^{\varotimes 2} \\ - \mumeas{N, \chunk{b}{0}{n}}{\chunk{z}{0}{n - 1}} \{ (pot[z_n] h) \varotimes pot[z_n] \} \filtpart[\chunk{z}{0}{n}] h - \mumeas{N, \chunk{b}{0}{n}}{\chunk{z}{0}{n - 1}} \{ pot[z_n] \varotimes (pot[z_n] h) \} \filtpart[\chunk{z}{0}{n}] h \\ + \mumeas{N, \chunk{b}{0}{n}}{\chunk{z}{0}{n - 1}} pot[z_n]^{\varotimes 2} (\filtpart[\chunk{z}{0}{n}] h)^2. \end{multline} Applying Lemma~\ref{lemma:mu:convergence} to each term of \eqref{eq:termwise:decomposition:updated:case} (note that the second part of Lemma~\ref{lemma:mu:convergence} implies the consistency of the updated particle measures, as \begin{equation} \label{eq:particle:filter:consistency} \filtpart[\chunk{z}{0}{n}] h = \frac{predpart[\chunk{z}{0}{n - 1}] (pot[z_n] h)}{predpart[\chunk{z}{0}{n - 1}] pot[z_n]} plim \frac{\chi \uk[\chunk{z}{0}{n - 1}] (pot[z_n] h)}{\chi \uk[\chunk{z}{0}{n - 1}] pot[z_n]} = \filt[\chunk{z}{0}{n}] h \end{equation} when $N$ tends to infinity, a now classical result) yields, using again \eqref{eq:function:prod:identity}, \begin{multline*} \mumeas{N, \chunk{b}{0}{n}}{\chunk{z}{0}{n - 1}} \{Êpot[z_n] (h - \filtpart[\chunk{z}{0}{n}] h) \}^{\varotimes 2} plim \mumeas{\chunk{b}{0}{n}}{\chunk{z}{0}{n - 1}} (pot[z_n] h)^{\varotimes 2} \\ - \mumeas{\chunk{b}{0}{n}}{\chunk{z}{0}{n - 1}} \{ (pot[z_n] h) \varotimes pot[z_n] \} \filt[\chunk{z}{0}{n}] h - \mumeas{\chunk{b}{0}{n}}{\chunk{z}{0}{n - 1}} \{ pot[z_n] \varotimes (pot[z_n] h) \} \filt[\chunk{z}{0}{n}] h \\ + \mumeas{\chunk{b}{0}{n}}{\chunk{z}{0}{n - 1}} pot[z_n]^{\varotimes 2} (\filt[\chunk{z}{0}{n}] h)^2 = \mumeas{\chunk{b}{0}{n}}{\chunk{z}{0}{n - 1}} \{Êpot[z_n] (h - \filt[\chunk{z}{0}{n}] h) \}^{\varotimes 2}, \end{multline*} as $N$ tends to infinity. Now, applying the previous limit to \eqref{eq:filtering:variance:numerator} and using \eqref{eq:variance:alt:expression}, \eqref{eq:zero:identity}, and the second part of Lemma~\ref{lemma:mu:convergence} yields \begin{multline*} \varestfilt[2]{\chunk{z}{0}{n}}[\lambda](h) = \frac{\varest[2]{\chunk{z}{0}{n - 1}}[\lambda](pot[z_n] \{h - \filtpart[\chunk{z}{0}{n}] h\})}{(predpart[\chunk{z}{0}{n - 1}] pot[z_n])^2} \\Ê plim \frac{\variance[2]{\chunk{z}{0}{n - 1}}[\lambda](pot[z_n] \{ h - \filt[\chunk{z}{0}{n}] h \})}{(pred[\chunk{z}{0}{n - 1}] pot[z_n])^2} = \filtvariance[2]{\chunk{z}{0}{n}}[\lambda](h) \end{multline*} as $N$ tends to infinity, which completes the proof. 
\end{proof} \subsection{Proof of Theorem~\ref{thm:tightness:bias:filter}} \begin{proof} Pick arbitrarily $n \in \mathbb{N}$, $\lambda \in \mathbb{N}$, and $h \in \bmf{\mathcal{X}}$. Using the expression of $\filtvariance[2]{\chunk{per}{0}{n}}(h)$Ê derived in the proof of \cite[Theorem~11]{douc:moulines:olsson:2014}, one may express the bias $\biasfilt{\chunk{per}{0}{n}}{\lambda}(h)$ as \begin{equation*} \biasfilt{\chunk{per}{0}{n}}{\lambda}(h) = \sum_{m = 0}^{\lambdatime{n}{\lambda} - 1} \int pred[\chunk{per}{0}{m - 1}](\mathrm{d} x) \left( \frac{\DDelta{\delta_x,pred[\chunk{per}{0}{m - 1}]}{\chunk{per}{m}{n - 1}}{pot[per_n] h, pot[per_n]}} {(pred[\chunk{per}{0}{m - 1}] \uk[\chunk{per}{m}{n - 1}] \mathbbm{1}_{\mathsf{X}} )^2} \right)^2, \end{equation*} where $\DDelta{\delta_x,pred[\chunk{per}{0}{m - 1}]}{\chunk{per}{m}{n - 1}}{pot[per_n] h, pot[per_n]}$ is given by \eqref{eq:def-Delta}. Thus, the proof is finalised by following closely the lines of the proof of Theorem~\ref{thm:tightness:bias} and noting that the statement of \cite[Proposition~16(iii)]{douc:moulines:olsson:2014} still holds true when $h$Ê and $\mathbbm{1}_\mathsf{X}$ are replaced by $pot[per_n] h$ and $pot[per_n]$, respectively. \end{proof} \subsection{Proof of Theorem~\ref{thm:strong:bias:bound}} \label{sec:proof:strong:bias:bound} \begin{proof} If $n \leq \lambda$, the bias vanishes by definition; we thus assume that $n > \lambda$. Write, for $m \in \intvect{0}{n - \lambda}$ and $x \in \mathsf{X}$, \begin{multline} \label{eq:strong:bound:decomposition} \frac{\uk[\chunk{z}{m}{n - 1}](h - pred[\chunk{z}{0}{n - 1}] h)(x)}{pred[\chunk{z}{0}{m - 1}] \uk[\chunk{z}{m}{n - 1}] \mathbbm{1}_{\mathsf{X}}} \\Ê = \frac{\delta_x \uk[\chunk{z}{m}{n - 1}] \mathbbm{1}_{\mathsf{X}}}{pred[\chunk{z}{0}{m - 1}] \uk[\chunk{z}{m}{n - 1}] \mathbbm{1}_{\mathsf{X}}} \left( \frac{\delta_x \uk[\chunk{z}{m}{n - 1}] h}{\delta_x \uk[\chunk{z}{m}{n - 1}] \mathbbm{1}_{\mathsf{X}}} - \frac{pred[\chunk{z}{0}{m - 1}] \uk[\chunk{z}{m}{n - 1}] h}{pred[\chunk{z}{0}{m - 1}] \uk[\chunk{z}{m}{n - 1}] \mathbbm{1}_{\mathsf{X}}} \right), \end{multline} where \cite[Proposition~4.3.4]{delmoral:2004} bounds uniformly the second factor of \eqref{eq:strong:bound:decomposition} according to \begin{equation} \label{eq:strong:bound:second:term} \left| \frac{\delta_x \uk[\chunk{z}{m}{n - 1}] h}{\delta_x \uk[\chunk{z}{m}{n - 1}] \mathbbm{1}_{\mathsf{X}}} - \frac{pred[\chunk{z}{0}{m - 1}] \uk[\chunk{z}{m}{n - 1}] h}{pred[\chunk{z}{0}{m - 1}] \uk[\chunk{z}{m}{n - 1}] \mathbbm{1}_{\mathsf{X}}} \right| \leq \varrho^{n - m} \| h \|_\infty. 
\end{equation} To bound the first factor of \eqref{eq:strong:bound:decomposition}, note that $$ pred[\chunk{z}{0}{m - 1}] \uk[\chunk{z}{m}{n - 1}] \mathbbm{1}_{\mathsf{X}} = \filt[\chunk{z}{0}{m - 1}] \kernel{M} (pot[z_m] \kernel{M} \uk[\chunk{z}{m + 1}{n - 1}] \mathbbm{1}_{\mathsf{X}}) \geq potlow mlow \mu \uk[\chunk{z}{m + 1}{n - 1}] \mathbbm{1}_{\mathsf{X}} $$ and $$ \delta_x \uk[\chunk{z}{m}{n - 1}] \mathbbm{1}_{\mathsf{X}} = pot[z_m](x) \kernel{M} \uk[\chunk{z}{m + 1}{n - 1}] \mathbbm{1}_{\mathsf{X}}(x) \leq potup mup \mu \uk[\chunk{z}{m + 1}{n - 1}] \mathbbm{1}_{\mathsf{X}}, $$ where $(mlow, mup)$ and $(potlow, potup)$ are given in \B{ass:strong:mixing}(\ref{ass:strong:mixing-1}--\ref{ass:strong:mixing-2}), implying that \begin{equation} \label{eq:strong:bound:first:term} \frac{\delta_x \uk[\chunk{z}{m}{n - 1}] \mathbbm{1}_{\mathsf{X}}}{pred[\chunk{z}{0}{m - 1}] \uk[\chunk{z}{m}{n - 1}] \mathbbm{1}_{\mathsf{X}}} \leq \frac{ potup mup}{ potlow mlow}. \end{equation} Now, using \eqref{eq:strong:bound:second:term} and \eqref{eq:strong:bound:first:term}, we obtain \begin{multline*} \bias{\chunk{z}{0}{n - 1}}{\lambda}(h) = \sum_{m = 0}^{n - \lambda - 1} \frac{pred[\chunk{z}{0}{m - 1}] \{ \uk[\chunk{z}{m}{n - 1}](h - pred[\chunk{z}{0}{n - 1}] h) \}^2 }{(pred[\chunk{z}{0}{m - 1}] \uk[\chunk{z}{m}{n - 1}] \mathbbm{1}_{\mathsf{X}})^2} \\ \leq \| h \|_{\infty}^2 \left( \frac{ potup mup}{ potlow mlow} \right)^2 \sum_{m = 0}^{n - \lambda - 1} \varrho^{2(n - m)} \leq c \| h \|_{\infty}^2 \varrho^{2(\lambda + 1)}, \end{multline*} with $c \vcentcolon= (potup mup / potlow mlow)^2 / (1 - \varrho^2)$, and the proof is complete. \end{proof} \end{document}
\begin{document} \title{Minimax Extrapolation Problem For Harmonizable Stable Sequences With Noise Observations} \author{Mikhail Moklyachuk$^1$ \footnote{Corresponding email:[email protected] \newline 1 Department of Probability Theory, Statistics and Actuarial Mathematics, Taras Shevchenko National University of Kyiv, Kyiv, Ukraine } and Vitalii Ostapenko$^{1}$ } \date{\today} \maketitle \renewcommand{Abstract}{Abstract} \begin{abstract} We consider the problem of optimal linear estimation of the functional $A \xi~=~\sum_{j = 0}^{\infty} a_j \xi_j$ that depends on the unknown values $\xi_j,j=0,1,\dots, $ of a random sequence $\{\xi_j,j\in\mathbb Z\}$ from observations of the sequence $\{\xi_k+\eta_k,k\in\mathbb Z\}$ at points $k = -1, -2, \dots$, where $\{\xi_k,k\in\mathbb Z\}$ and $\{\eta_k,k\in\mathbb Z\}$ are mutually independent harmonizable symmetric $\alpha$-stable random sequences which have the spectral densities $f(\theta)>0$ and $g(\theta)>0$ satisfying the minimality condition. The problem is investigated under the condition of spectral certainty as well as under the condition of spectral uncertainty. Formulas for calculating the value of the error and the spectral characteristic of the optimal linear estimate of the functional are derived under the condition of spectral certainty where spectral densities of the sequences are exactly known. In the case of spectral uncertainty where spectral densities of the sequences are not exactly known, while a class of admissible spectral densities is given, relations that determine the least favorable spectral densities and the minimax spectral characteristic are derived. \end{abstract} \textbf{Keywords}:harmonizable sequence, optimal linear estimate, minimax-robust estimate, least favorable spectral density, minimax spectral characteristic. \textbf{2000 Mathematics Subject Classification:} Primary: 60G10, 60G25, 60G35, Secondary: 62M20, 93E10, 93E11 \selectlanguage{british} \section{Introduction} The classical methods of finding solutions to extrapolation, interpolation and filtering problems for stationary stochastic processes and sequences were developed by Kolmogorov (see selected works of Kolmogorov (1992)), Wiener (see the book by Wiener (1966)), Yaglom (see, for example, books by Yaglom (1987a, 1987b). The problem of estimation of the unknown values of harmonizable random sequences and processes were investigated in papers by Cambanis (1983), Cambanis and Soltani (1984), Hosoya (1982). The interpolation problem for harmonizable symmetric $\alpha$-stable random sequences were investigated in papers by Weron (1985) and Pourahmadi (1984). Most of results concerning estimation of the unknown (missed) values of stochastic processes are based on the assumption that spectral densities of processes are exactly known. In practice, however, complete information on the spectral densities is impossible in most cases. In such situations one finds parametric or nonparametric estimates of the unknown spectral densities. Then the classical estimation method is applied under the assumption that the estimated densities are true. This procedure can result in significant increasing of the value of error as Vastola and Poor (1983) have demonstrated with the help of some examples. This is a reason to search estimates which are optimal for all densities from a certain class of admissible spectral densities. These estimates are called minimax since they minimize the maximal value of the error. 
A survey of results in minimax (robust) methods of data processing can be found in the paper by Kassam and Poor (1985). The paper by Grenander (1957) should be mentioned as the first one where the minimax extrapolation problem for stationary processes was formulated and solved. Later Franke and Poor (Franke and Poor, 1984; Franke, 1985) applied convex optimization methods to the investigation of the minimax-robust extrapolation and interpolation problems. In papers by Moklyachuk (1990 -- 2008a) the minimax-robust extrapolation, interpolation and filtering problems are studied for stationary processes. The papers by Moklyachuk and Masyutka (2006 -- 2012) are dedicated to the minimax-robust extrapolation, interpolation and filtering problems for vector-valued stationary processes and sequences. Dubovetska et al. (2012) solved the problem of minimax-robust interpolation for another generalization of stationary processes -- periodically correlated sequences. In the papers by Dubovetska and Moklyachuk (2013 -- 2014), Moklyachuk and Golichenko (2016) the minimax-robust extrapolation, interpolation and filtering problems for periodically correlated processes are investigated. The minimax-robust extrapolation, interpolation and filtering problems for stochastic sequences and random processes with $n$th stationary increments are investigated by Luz and Moklyachuk (Luz and Moklyachuk, 2012 -- 2016; Moklyachuk and Luz, 2013). In this paper the problem of optimal estimation is investigated for the linear functional $A \xi~=~\sum_{j = 0}^{\infty} a_j \xi_j$ that depends on the unknown values of a random sequence $\{\xi_j,j\in\mathbb Z\}$ based on observations of the sequence $\{\xi_k+\eta_k,k\in\mathbb Z\}$ at points $k = -1, -2, \dots$, where $\{\xi_k,k\in\mathbb Z\}$ and $\{\eta_k,k\in\mathbb Z\}$ are mutually independent harmonizable symmetric $\alpha$-stable random sequences which have the spectral densities $f(\theta)>0$ and $g(\theta)>0$ satisfying the minimality condition. The problem is investigated under the condition of spectral certainty as well as under the condition of spectral uncertainty. Formulas for calculating the value of the error and the spectral characteristic of the optimal linear estimate of the functional are derived under the condition of spectral certainty, where the spectral densities of the sequences are exactly known. In the case of spectral uncertainty, where the spectral densities of the sequences are not exactly known while sets of admissible spectral densities are available, relations which determine the least favorable densities and the minimax-robust spectral characteristics for different classes of spectral densities are derived. \section{Harmonizable symmetric $\alpha$-stable random sequences.
Basic properties}\label{sec:fields} \begin{definition}[symmetric $\alpha$-stable random variable] A real random variable $\xi$ is said to be symmetric $\alpha$-stable, $S\alpha S$, if its characteristic function has the form $E\exp(it\xi) = \exp(-c|t|^{\alpha})$ for some $c \geq 0$ and $0 < \alpha \leq 2.$ The real random variables $\xi_1,\xi_2,\dots,\xi_n$ are jointly $S\alpha S$ if all linear combinations $\sum_{k=1}^{n}a_k\xi_k$ are $S\alpha S$, or, equivalently, if the characteristic function of $\vec{\xi}=(\xi_1,\dots,\xi_n)$ is of the form $\phi_{\vec{\xi}}(\vec{t}) = E\exp(i \sum t_k \xi_k) = \exp\{-\int|\sum t_k x_k|^{\alpha}d \Gamma_{\vec{\xi}}(\vec{x})\},$ where $t_1,\dots, t_n$ are real numbers and $\Gamma_{\vec{\xi}}$ is a symmetric measure defined on the unit sphere $S_n \subset \mathbb{R}^n$ (Cambanis, 1983). \end{definition} \begin{definition}[symmetric $\alpha$-stable stochastic sequence] A stochastic sequence $\{\xi_k,k\in\mathbb Z\}$ is called a symmetric $\alpha$-stable, $S\alpha S$, stochastic sequence if all linear combinations $\sum_{k=1}^{n}a_k\xi_k$ are $S\alpha S$ random variables. \end{definition} For jointly $S{\alpha}S$ random variables $\xi= \xi_1 + i\xi_2$ and $\eta= \eta_1 + i \eta_2$ the covariation of $\xi$ with $\eta$ is defined as (Cambanis, 1983) $$[\xi,\eta]_{\alpha} = \int_{S_4} (x_1 + i x_2)(y_1 + i y_2)^{<\alpha-1>} d \Gamma_{\xi_1,\xi_2,\eta_1,\eta_2}(x_1, x_2, y_1, y_2),$$ \noindent where $z^{<\beta>} = |z|^{\beta - 1} \bar{z} $ for a complex number $z$ and $\beta > 0.$ The covariation is, in general, neither symmetric nor linear in its second argument (Weron, 1985). It is linear in the first argument: for $\xi, \xi_1, \xi_2, \eta$ jointly $S\alpha S$ we have $$[\xi_1 + \xi_2, \eta]_{\alpha} = [\xi_1, \eta]_{\alpha} + [\xi_2, \eta]_{\alpha},$$ \begin{equation}\label{eq:inequality} |[\xi, \eta]_{\alpha}| \leq ||\xi||_{\alpha} ||\eta||_{\alpha}^{\alpha - 1} \end{equation} and $||\xi||_{\alpha} = [\xi, \xi]_{\alpha}^{1/\alpha}$ is a norm on the linear space of $S\alpha S$ random variables; convergence in this norm is equivalent to convergence in probability. It should be noted that $||\cdot||_\alpha$ is not necessarily the usual $L^{\alpha}$ norm. Here are the simplest properties of the function $z^{<\beta>}.$ \begin{lemma} Let $z, x, y $ be complex numbers, $\beta > 0$. Then the following properties hold true: \begin{itemize} \item $|z|^{<\beta>} = z \cdot z^{<\beta - 1>},$ \item $\left||z|^{<\beta>}\right| = \left|z\right|^{<\beta>},$ \item if $z^{<\beta>} = v$, then $z = v^{<1/\beta>} = |v|^{(1-\beta)/\beta}\bar{v},$ \item $z^{<1>} = \bar{z},$ \item if $z \neq 0$, then $z^{<\alpha>} z^{<\beta>} = \frac{\bar{z}}{|z|} z^{<\alpha + \beta>},$ \item if $z \neq 0$, then $\frac{z^{<\alpha>}}{z^{<\beta>}} = \frac{z}{|z|} z^{<\alpha - \beta>},$ \item $(c z)^{<\alpha>} = c^{\alpha} z^{<\alpha>},\ c \geq 0,$ \item $(z^{<\alpha>})^{<\beta>} = {\bar{z}}^{<\alpha \beta>},$ \item $(xy)^{<\alpha>} = x^{<\alpha>} y^{<\alpha>},$ \item $(z^{\alpha})^{<\beta>} = (z^{<\beta>})^{\alpha},$ \item $(z^{<\alpha>})^{\beta} = (z^{\beta})^{<\alpha>},$ \item $|z^{<\alpha>}|^{\beta} = |z|^{\alpha \beta},$ \item $(x + y)^{<\alpha>} = \bar{x}|x + y|^{\alpha - 1} + \bar{y}|x + y|^{\alpha - 1}.$ \end{itemize} \end{lemma}
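The signed power $z^{<\beta>}$ is elementary to evaluate numerically, and several of the identities listed in the lemma can be verified directly on random data. The following Python sketch is a minimal illustration of such a check; the particular values of $\alpha$, $\beta$ and the random sample are arbitrary choices made only for this illustration.
\begin{verbatim}
# Numerical spot check of a few properties of the signed power
# z^<beta> = |z|^(beta-1) * conj(z).  Illustrative only.
import numpy as np

def spow(z, beta):
    """Signed power z^<beta> = |z|^(beta-1) * conj(z) for nonzero complex z."""
    return np.abs(z) ** (beta - 1) * np.conj(z)

rng = np.random.default_rng(0)
z = rng.normal(size=5) + 1j * rng.normal(size=5)
alpha, beta = 1.7, 0.6

# z^<1> = conj(z)
assert np.allclose(spow(z, 1.0), np.conj(z))
# |z^<alpha>|^beta = |z|^(alpha * beta)
assert np.allclose(np.abs(spow(z, alpha)) ** beta, np.abs(z) ** (alpha * beta))
# (x y)^<alpha> = x^<alpha> y^<alpha>
x, y = z[0], z[1]
assert np.allclose(spow(x * y, alpha), spow(x, alpha) * spow(y, alpha))
print("signed-power identities verified on the sample")
\end{verbatim}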
Let $Z =\{Z(t): -\infty < t < \infty\}$ be a complex $S{\alpha}S$ process with independent increments. The spectral measure of the process $Z$ is defined as $\mu\{(s, t]\} =\|Z(t) - Z(s)\|_{\alpha}^{\alpha}.$ The integrals $\int f(t)dZ(t)$ can be defined for all $f \in L^{\alpha}(\mu)$ with the properties (see Cambanis, 1983; Cambanis and Soltani, 1984; Hosoya, 1982): \[ \left\|\int f(t) d Z(t)\right\|^{\alpha}_{\alpha} = \int |f(t)|^{\alpha}d \mu, \] \begin{equation}\label{eq:norm_equality} \left[\int f(t) d Z(t), \int g(t) d Z(t)\right]_\alpha = \int f (t) (g(t))^{<\alpha - 1>} d \mu. \end{equation} \begin{definition}[Harmonizable symmetric $\alpha$-stable stochastic sequence] A $S\alpha S$ stochastic sequence $\{\xi_n,n\in\mathbb Z\}$ is said to be harmonizable, $HS{\alpha}S$, if there exists a $S\alpha S$ process $Z = \{Z(\theta); \theta \in [-\pi, \pi]\}$ with independent increments and finite spectral measure $\mu$ such that the sequence $\xi_n$ has the spectral representation $$\xi_n = \int_{-\pi}^{\pi}e^{in\theta}dZ(\theta), \quad n \in \mathbb{Z},$$ \noindent and the covariation has the representation $$[\xi_n, \xi_m]_\alpha = \int_{-\pi}^{\pi}e^{i(n-m)\theta}d \mu(\theta), \quad m, n \in \mathbb{Z}.$$ \end{definition} Note that a $HS\alpha S$ stochastic sequence is not necessarily stationary, not even second-order stationary, but for $\alpha = 2$ the $HS\alpha S$ sequences are stationary with Gaussian distribution. In this article we consider the case where $1<\alpha \leq 2$. Denote by $H(\xi)$ the time domain of the $HS\alpha S$ sequence $\{\xi_n,n\in\mathbb Z\}$, that is, the linear manifold generated by all values of the $HS\alpha S$ sequence $\{\xi_n,n\in\mathbb Z\}$ and closed in the norm $\|\cdot\|_{\alpha}$. It follows from the spectral representation of the $HS\alpha S$ sequence $\{\xi_n,n\in\mathbb Z\}$ that the mapping $\xi_n\leftrightarrow e^{in\theta},n\in\mathbb Z,$ extends to an isomorphism between the spaces $H(\xi)$ and $L^{\alpha}(\mu)$. Under this isomorphism to each $\eta \in H(\xi)$ corresponds a unique $f\in L^{\alpha}(\mu)$ such that $\eta=\int_{-\pi}^{\pi}f(\theta)dZ(\theta)$. For a closed linear subspace $M \subseteq L^{\alpha}(\mu)$ and $f \in L^{\alpha}(\mu)$, there exists a unique element from $M$ which minimizes the distance to $f$. This element is called the projection of $f$ onto $M$ or the best approximation of $f$ in $M$. This projection is denoted by $P_M f$ and is uniquely determined by the condition (Singer, 1970) \begin{equation} \int_{-\pi}^{\pi} g \left(f - P _M f\right)^{<\alpha - 1>}d \mu = 0,\quad g \in M. \end{equation} Similarly, for an $HS \alpha S$ stochastic sequence $\{\xi_n,n\in\mathbb Z\}$ and a closed linear subspace $H^-(\xi)$ of the space $H(\xi)$ there is a uniquely determined element $\hat{\xi}_n \in H^-(\xi)$ which minimizes the distance to $\xi_n$ and is uniquely determined from the condition \begin{equation}\label{eq:ortogonal} \left[\eta, \xi_n - \hat{\xi}_n\right]_{\alpha} = 0,\quad \eta \in H^-(\xi). \end{equation} From the linearity of the covariation with respect to the first argument and relation \eqref{eq:ortogonal} we obtain \begin{equation}\label{eq:linearity} ||\xi_n - \hat{\xi}_n||_{\alpha}^{\alpha} = \left[\xi_n,\xi_n-\hat{\xi}_n\right]_{\alpha}-\left[\hat{\xi}_n, \xi_n-\hat{\xi}_n\right]_{\alpha} = \left[\xi_n, \xi_n-\hat{\xi}_n\right]_{\alpha}. \end{equation} This relation plays a fundamental role in the characterization of minimal $HS \alpha S$ stochastic sequences $\{\xi_n,n\in\mathbb Z\}$. \section{Extrapolation problem.
Projection approach} Consider the problem of the optimal estimation of the linear functional $$A \xi~=~\sum_{j = 0}^{\infty} a_j \xi_j = \int_{-\pi}^{\pi} A(e^{i\theta})dZ^{\xi}(\theta),$$ $$A(e^{i\theta})=\sum_{j = 0}^{\infty}a_j e^{ij\theta}, $$ that depends on the unknown values $\xi_j,j=0,1,\dots,$ of a harmonizable symmetric $\alpha$-stable random sequence $\{\xi_j,j\in\mathbb Z\}$ from observations of the sequence $\{\xi_k+\eta_k,k\in\mathbb Z\}$ at points $k = -1, -2, \dots$. We will suppose that the sequence $ \{ {a}_j: j=0,1, \ldots \}$ which determines the functional $A { \xi} $ satisfies conditions \begin{equation} \label{cond-on-aj} \sum_{j=0}^{ \infty} \left|a_j \right| < \infty , \quad \sum_{j=0}^{ \infty}(j+1) \left | {a}_j \right | ^{2} < \infty. \end{equation} The first condition ensures that the functional $A\xi$ has a finite second moment. The second condition ensures the compactness in $\ell_2$ of the operators that will be defined below. We consider the problem for mutually independent harmonizable symmetric $\alpha$-stable random sequences $\{\xi_k,k\in\mathbb Z\}$ and $\{\eta_k,k\in\mathbb Z\}$ which have absolutely continuous spectral measures and the spectral densities $f(\theta)>0$ and $g(\theta)>0$ satisfying the minimality condition (Kolmogorov, 1992; Rozanov, 1967; Salehi, 1979; Pourahmadi, 1984; Weron, 1985) \begin{equation}\label{eq:minimality} \int_{-\pi}^{\pi} (f(\theta)+g(\theta))^{-1/(\alpha-1)}d\theta<\infty. \end{equation} Denote by $H^{-}(\xi+\eta)$ the closed in the $||\cdot||_\alpha$ norm linear manifold generated by values of the harmonizable symmetric $\alpha$-stable random sequence $\xi_k+\eta_k, k = -1, -2, \dots$ in the space $H(\xi+\eta)$ generated by all values of the $HS \alpha S$ sequence $\{\xi_k+\eta_k,k\in\mathbb Z\}$. The optimal estimate $\hat{A} \xi$ of the functional ${A} \xi$ is a projection of ${A} \xi$ on the manifold $H^{-}(\xi+\eta)$. It is determined by relations $$[\zeta, A \xi - \hat{A} \xi]_{\alpha} = 0, \quad \forall \zeta \in H^{-}(\xi+\eta),$$ or, equivalently, by relations \begin{equation}\label{ortog1} [\xi_k+\eta_k, A \xi - \hat{A} \xi]_{\alpha} = 0, \quad \forall k = -1, -2, \dots. \end{equation} It follows from the isomorphism between the spaces $H(\xi+\eta)$ and $L^{\alpha}(f+g)$ that the optimal estimate $\hat{A} \xi$ of the functional ${A} \xi$ is of the form \begin{equation}\label{estim2} \hat{A} \xi = \int_{-\pi}^{\pi} {h}(\theta) \left(d Z^{\xi}(\theta) + dZ^{\eta}(\theta) \right). \end{equation} It is determined by the spectral characteristic ${h}(\theta)$ of the estimate which is from the subspace $L^{\alpha}_{-}(f+g)$ of the $L^{\alpha}(f+g)$ space generated by functions $e^{ik\theta},k = -1, -2, \dots$. The spectral characteristic ${h}(\theta)$ of the optimal estimate satisfies the following equations \begin{equation}\label{ortog2} \int_{-\pi}^{\pi} e^{i\theta k} \left[\left( A(e^{i\theta}) - {h}(\theta) \right)^{<\alpha - 1>}f(\theta) - \left({h}(\theta) \right)^{<\alpha - 1>}g(\theta)\right] d\theta = 0, \, k = -1, -2, \dots. \end{equation} It follows from these equations that the spectral characteristic ${h}(\theta)$ of the estimate is determined by the relation \begin{equation}\label{sp-char2} \left( A(e^{i\theta}) - {h}(\theta) \right)^{<\alpha - 1>}f(\theta) - \left({h}(\theta) \right)^{<\alpha - 1>}g(\theta) = \overline{C(e^{i\theta})}, \end{equation} $$ C(e^{i\theta})=\sum_{j = 0}^{\infty} c_j e^{i j \theta},$$ where $c_j$ are unknown coefficients. 
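For a candidate spectral characteristic ${h}(\theta)$, equations \eqref{ortog2} provide a direct numerical check: the Fourier coefficients of the function $\left( A(e^{i\theta}) - {h}(\theta) \right)^{<\alpha - 1>}f(\theta) - \left({h}(\theta) \right)^{<\alpha - 1>}g(\theta)$ at the frequencies $k=-1,-2,\dots$ must vanish. The following Python sketch performs such a check on a grid; the test configuration ($\alpha=2$, observations without noise, a spectral density of autoregressive type with parameter $a$, and the classical one-step predictor characteristic $h(\theta)=a e^{-i\theta}$ for the functional $A\xi=\xi_0$) is an elementary example chosen only because its solution is known.
\begin{verbatim}
# Numerical check of the orthogonality conditions (ortog2) for a candidate
# spectral characteristic h.  Test case: alpha = 2, g = 0 (no noise),
# f(theta) = |1 - a e^{-i theta}|^{-2}, A(e^{i theta}) = 1 (estimate xi_0),
# h(theta) = a e^{-i theta} (the classical one-step predictor).
import numpy as np

def spow(z, beta):
    # signed power z^<beta> = |z|^(beta-1) * conj(z), with 0^<beta> = 0
    out = np.zeros_like(z, dtype=complex)
    nz = z != 0
    out[nz] = np.abs(z[nz]) ** (beta - 1) * np.conj(z[nz])
    return out

def negative_freq_residuals(A, h, f, g, alpha, n_res=10, n_grid=4096):
    """Coefficients (1/2pi) int e^{i k theta} G(theta) d theta, k = -1..-n_res,
    where G = (A - h)^<alpha-1> f - h^<alpha-1> g."""
    theta = 2 * np.pi * np.arange(n_grid) / n_grid - np.pi
    G = (spow(A(theta) - h(theta), alpha - 1) * f(theta)
         - spow(h(theta), alpha - 1) * g(theta))
    return np.array([np.mean(np.exp(1j * k * theta) * G)
                     for k in range(-1, -n_res - 1, -1)])

a = 0.5                                    # autoregressive parameter, |a| < 1
A = lambda th: np.ones_like(th) + 0j       # A(e^{i theta}) = 1, i.e. a_0 = 1
h = lambda th: a * np.exp(-1j * th)        # candidate spectral characteristic
f = lambda th: 1.0 / np.abs(1 - a * np.exp(-1j * th)) ** 2
g = lambda th: np.zeros_like(th)           # observations without noise

print(np.max(np.abs(negative_freq_residuals(A, h, f, g, alpha=2.0))))  # ~ 0
\end{verbatim}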
These unknown coefficients $c_j$ are determined from the condition ${h}(\theta)\in L^{\alpha}_{-}(f+g)$ which gives us the system of equations \begin{equation}\label{speq2} \int_{-\pi}^{\pi} {e^{-i\theta k}\,\, {h}(\theta)} d\theta = 0,\quad k = 0, 1,\dots. \end{equation} The variance of the optimal estimate of the functional is calculated by the formula \begin{equation}\label{var2} \left\|{A} \xi- \hat{A} \xi \right\|_\alpha^\alpha = \int_{-\pi}^{\pi} \left| A(e^{i\theta}) - {h}(\theta) \right|^{\alpha}f(\theta)d \theta + \int_{-\pi}^{\pi} \left|{h}(\theta) \right|^{\alpha}g(\theta)d \theta. \end{equation} We can conclude that the following theorem holds true. \begin{thm}\label{thm1} Let $\{\xi_k,k\in\mathbb Z\}$ and $\{\eta_k,k\in\mathbb Z\}$ be mutually independent harmonizable symmetric $\alpha$-stable random sequences which have absolutely continuous spectral measures and the spectral densities $f(\theta)>0$ and $g(\theta)>0$ satisfying the minimality condition \eqref{eq:minimality}. The optimal linear estimate $\hat{A} \xi$ of the functional $A \xi~=~\sum_{j = 0}^{\infty} a_j \xi_j$, that depends on the unknown values $\xi_j,j=0,1,\dots,$ of the sequence from observations of the sequence $\{\xi_k+\eta_k,k\in\mathbb Z\}$ at points $k = -1, -2, \dots,$ is calculated by formula \eqref{estim2}. The spectral characteristic ${h}(\theta)$ of the estimate is determined by equation \eqref{sp-char2}, where the unknown coefficients $c_j$ are determined from the system of equations \eqref{speq2}. The variance of the optimal estimate of the functional is calculated by formula \eqref{var2}. \end{thm} \subsection{Extrapolation problem. Observations without noise} Consider the problem of optimal linear estimation of the functional $$A \xi~=~\sum_{j = 0}^{\infty} a_j \xi_j = \int_{-\pi}^{\pi} A(e^{i\theta})dZ^{\xi}(\theta),\quad A(e^{i\theta})=\sum_{j = 0}^{\infty}a_j e^{ij\theta}, $$ that depends on the unknown values of a harmonizable symmetric $\alpha$-stable random sequence $\{\xi_j,j\in\mathbb Z\},$ from observations of the sequence $\xi_k$ at points $k = -1, -2, \dots$. Let $\{\xi_k,k\in\mathbb Z\}$ be a harmonizable symmetric $\alpha$-stable random sequence which has absolutely continuous spectral measure and the spectral density $f(\theta)>0$ satisfying the minimality condition \begin{equation}\label{minimality1} \int_{-\pi}^{\pi} (f(\theta))^{-1/(\alpha-1)}d\theta<\infty. \end{equation} The optimal estimate $\hat{A} \xi$ of the functional ${A} \xi$ is of the form \begin{equation}\label{estim_f} \hat{A} \xi = \int_{-\pi}^{\pi} {h}(\theta) dZ^{\xi}(\theta). 
\end{equation} The spectral characteristic ${h}(\theta)$ of the optimal linear estimate $\hat{A} \xi$ of the functional is calculated by the formula \begin{equation}\label{spchar_f} {h}(\theta) = A(e^{i\theta}) - \left(\overline{C(e^{i\theta})} \right)^{<\frac{1}{\alpha - 1}>} \left(f(\theta)\right)^{\frac{-1}{\alpha - 1}}, \end{equation} where the unknown coefficients $c_j$, $j = 0, 1,\dots$, are determined from the system of equations \begin{equation}\label{eq_sphar_f} \int_{-\pi}^{\pi} e^{-i\theta k} \left( \left(\sum_{j = 0}^{\infty}a_j e^{i j \theta}\right) - \left(\sum_{j = 0}^{\infty} \overline{c}_j e^{-i j \theta} \right)^{<\frac{1}{\alpha - 1}>} \left( f(\theta) \right)^{\frac{-1}{\alpha - 1}}\right) d\theta = 0, \quad k = 0, 1,\dots \end{equation} The variance of the optimal estimate of the functional is calculated by the formula \begin{equation}\label{variance_f} \left\| {A} \xi- \hat{A} \xi \right\|_\alpha^\alpha = \int_{-\pi}^{\pi} \left|\left(\overline{C(e^{i\theta})} \right)^{<\frac{1}{\alpha - 1}>} (f(\theta))^{\frac{-1}{\alpha - 1}} \right|^{\alpha} f(\theta) d \theta. \end{equation} As a corollary of Theorem \ref{thm1}, the following statement holds true. \begin{Corollary}\label{cor1} Let $\{\xi_k,k\in\mathbb Z\}$ be a harmonizable symmetric $\alpha$-stable random sequence which has absolutely continuous spectral measure and the spectral density $f(\theta)>0$ satisfying the minimality condition \eqref{minimality1}. The optimal linear estimate $\hat{A} \xi$ of the functional $A \xi~=~\sum_{j = 0}^{\infty} a_j \xi_j$, that depends on the unknown values $\xi_j,j=0,1,\dots,$ of the sequence from observations of the sequence $\{\xi_k,k\in\mathbb Z\}$ at points $k = -1, -2, \dots,$ is of the form \eqref{estim_f}. The spectral characteristic ${h}(\theta)$ of the optimal linear estimate $\hat{A} \xi$ of the functional is calculated by formula \eqref{spchar_f}, where the unknown coefficients $c_j$, $j = 0, 1,\dots$, are determined from the system of equations \eqref{eq_sphar_f}. The variance of the optimal estimate of the functional is calculated by formula \eqref{variance_f}. \end{Corollary} \subsection{Extrapolation problem. Stationary sequences} Consider the problem in the particular case where $\alpha=2$. In this case the harmonizable symmetric $\alpha$-stable random sequences $\{\xi_k,k\in\mathbb Z\}$ and $\{\eta_k,k\in\mathbb Z\}$ are stationary sequences and we have the problem of the optimal estimation of the linear functional $A \xi~=~\sum_{j = 0}^{\infty} a_j \xi_j$ that depends on the unknown values $\xi_j,j=0,1,\dots,$ of a stationary random sequence from observations of the stationary sequence $\{\xi_k+\eta_k,k\in\mathbb Z\}$ at points $k = -1, -2, \dots$. We will suppose that the stationary sequences $\{\xi_k,k\in\mathbb Z\}$ and $\{\eta_k,k\in\mathbb Z\}$ have spectral densities $f(\theta)>0$ and $g(\theta)>0$ satisfying the minimality condition \begin{equation} \int_{-\pi}^{\pi} (f(\theta)+g(\theta))^{-1}d\theta<\infty.
\end{equation} The optimal linear estimate $\hat{A} \xi$ of the functional $A\xi$ is of the form \eqref{estim2}, where the spectral characteristic ${h}(\theta)$ of the optimal estimate and the variance of the optimal estimate, determined by equations \eqref{sp-char2}, \eqref{var2}, are of the form \begin{equation} \label{sphar_a2} \begin{split} &h(\theta)=\frac{A(e^{i\theta})f(\theta)-C(e^{i\theta})}{f(\theta)+g(\theta)}=\\ &=A(e^{i\theta})-\frac{A(e^{i\theta})g(\theta)+C(e^{i\theta})}{f(\theta)+g(\theta)}, \end{split} \end{equation} \begin{equation} \label{var_a2} \begin{split} \Delta(h;f,g)=\left\|{A} \xi- \hat{A} \xi \right\|_2^2&=\frac{1}{2\pi} \int\limits_{-\pi}^{\pi} \frac{\left|A(e^{i\theta})g(\theta)+C(e^{i\theta}) \right|^2}{(f(\theta)+g(\theta))^2}f(\theta)d\theta\\ &+\frac{1}{2\pi} \int\limits_{-\pi}^{\pi}\frac{\left| A(e^{i\theta})f(\theta)-C(e^{i\theta}) \right|^2}{(f(\theta)+g(\theta))^2}g(\theta)d\theta.\\ \end{split} \end{equation} The unknown coefficients $c_j$, $j=0,1,\dots,$ are determined from the system of equations \eqref{speq2}, which in this case takes the form \begin{equation*} \begin{split} \int\limits_{-\pi}^{\pi}\left(A(e^{i\theta})\frac{f(\theta)}{f(\theta)+g(\theta)}-\frac{C(e^{i\theta})}{f(\theta)+g(\theta)}\right)e^{-ik\theta}d\theta=0, \quad k = 0, 1,\dots \end{split} \end{equation*} From this system of equations we get the following equations \begin{equation}\label{3} \begin{split} \sum\limits_{j=0}^{\infty}a_j\int\limits_{-\pi}^{\pi}\frac{e^{i(j-k)\theta}f(\theta)}{f(\theta)+g(\theta)}d\theta -\sum\limits_{j=0}^{\infty}c_j\int\limits_{-\pi}^{\pi}\frac{e^{i(j-k)\theta}}{f(\theta)+g(\theta)}d\theta=0,\quad k = 0, 1,\dots \end{split} \end{equation} Denote by $\bold{B}$, $\bold{R}$, $\bold{Q}$ the operators in the space $\ell_{2}$ which are determined by the matrices with elements $$B_{k,j}=\frac{1}{2\pi} \int\limits_{-\pi}^{\pi}e^{i(j-k)\theta}\frac{1}{f(\theta)+g(\theta)}d\theta;$$ $$R_{k,j}=\frac{1}{2\pi} \int\limits_{-\pi}^{\pi}e^{i(j-k)\theta}\frac{f(\theta)}{f(\theta)+g(\theta)}d\theta;$$ $$Q_{k,j}=\frac{1}{2\pi} \int\limits_{-\pi}^{\pi}e^{i(j-k)\theta}\frac{f(\theta)g(\theta)}{f(\theta)+g(\theta)}d\theta;$$ $$k,j = 0, 1,2,\dots $$ With the help of the introduced notation we can write equations (\ref{3}) in the form \begin{equation*}\begin{split} \sum\limits_{j=0}^{\infty}R_{k,j}a_j= \sum\limits_{j=0}^{\infty}B_{k,j}c_j,\quad k = 0, 1,2,\dots. \end{split} \end{equation*} These equations can be represented in the matrix-vector form \begin{equation*}\begin{split} \bold{R}\bold{a}=\bold{B} \bold{c}, \end{split} \end{equation*} where $\bold{a}=(a_0,a_1,\dots)$, $\bold{c}=(c_0,c_1,\dots)$. The unknown coefficients $c_j,j=0,1,\dots,$ form a solution to this equation and can be represented in the form $$c_j=(\bold{B}^{-1}\bold{R}\bold{a})_j,$$ where $(\bold{B}^{-1}\bold{R}\bold{a})_j$ is the $j$-th element of the vector $\bold{B}^{-1}\bold{R}\bold{a}$.
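The entries of the matrices $\bold{B}$, $\bold{R}$, $\bold{Q}$ are Fourier coefficients of the functions $1/(f+g)$, $f/(f+g)$ and $fg/(f+g)$, so in a numerical experiment they can be approximated on a grid and the truncated system $\bold{R}\bold{a}=\bold{B}\bold{c}$ solved directly. The following Python sketch computes a truncated vector $\bold{c}=\bold{B}^{-1}\bold{R}\bold{a}$; the densities $f$ and $g$, the coefficients $a_j$ and the truncation size $N$ are arbitrary choices made only for illustration.
\begin{verbatim}
# Illustrative computation of c = B^{-1} R a for the case alpha = 2.
# The infinite matrices B and R are truncated to size N x N.
import numpy as np

def fourier_coeff(func, m, n_grid=8192):
    """Approximate (1/2pi) * int_{-pi}^{pi} func(theta) exp(i m theta) d theta."""
    theta = 2 * np.pi * np.arange(n_grid) / n_grid - np.pi
    return np.mean(func(theta) * np.exp(1j * m * theta))

def toeplitz_from(func, N):
    """Matrix with entries (1/2pi) int e^{i (j-k) theta} func(theta) d theta."""
    coeffs = [fourier_coeff(func, m) for m in range(-(N - 1), N)]
    return np.array([[coeffs[(j - k) + N - 1] for j in range(N)]
                     for k in range(N)])

f = lambda th: 1.0 / np.abs(1 - 0.5 * np.exp(-1j * th)) ** 2  # example signal density
g = lambda th: 0.2 * np.ones_like(th)                         # example noise density

N = 50                                     # truncation of the infinite system
a = np.zeros(N)
a[:3] = [1.0, 0.5, 0.25]                   # coefficients a_j of the functional

B = toeplitz_from(lambda th: 1.0 / (f(th) + g(th)), N)
R = toeplitz_from(lambda th: f(th) / (f(th) + g(th)), N)
c = np.linalg.solve(B, R @ a)              # truncated c = B^{-1} R a
print(np.round(c[:5].real, 4))
\end{verbatim}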
Finally, the spectral characteristic and the variance of the optimal estimate are determined by the formulas (for more details see the books by Moklyachuk (2008), Moklyachuk and Masyutka (2012), Moklyachuk and Golichenko (2016)) \begin{equation} \label{sphar_a12} \begin{split} h(\theta)&=\frac{ A(e^{i\theta})f(\theta)-\sum\limits_{j=0}^{\infty}(\bold{B}^{-1}\bold{R}\bold{a})_je^{ij\theta} } {f(\theta)+g(\theta)}=\\ &=A(e^{i\theta})-\frac{ A(e^{i\theta})g(\theta)+\sum\limits_{j=0}^{\infty}(\bold{B}^{-1}\bold{R}\bold{a})_je^{ij\theta} } {f(\theta)+g(\theta)}, \end{split} \end{equation} \begin{equation} \label{var_a12} \begin{split} \Delta(h;f,g)=\left\|{A} \xi- \hat{A} \xi \right\|_2^2 &=\frac{1}{2\pi} \int\limits_{-\pi}^{\pi}\frac{\left| A(e^{i\theta})g(\theta)+\sum\limits_{j=0}^{\infty}(\bold{B}^{-1}\bold{R}\bold{a})_je^{ij\theta} \right|^2}{(f(\theta)+g(\theta))^2}f(\theta)d\theta\\ &+\frac{1}{2\pi} \int\limits_{-\pi}^{\pi}\frac{\left| A(e^{i\theta})f(\theta)-\sum\limits_{j=0}^{\infty}(\bold{B}^{-1}\bold{R}\bold{a})_je^{ij\theta} \right|^2}{(f(\theta)+g(\theta))^2}g(\theta)d\theta\\ &=\langle\bold{R}\bold{a},\bold{B}^{-1}\bold{R}\bold{a}\rangle+\langle\bold{Q}\bold{a},\bold{a}\rangle, \end{split} \end{equation} So, the following theorem holds true. \begin{thm}\label{thm2} Let $\{\xi_k,k\in\mathbb Z\}$ and $\{\eta_k,k\in\mathbb Z\}$ be mutually independent stationary random sequences which have absolutely continuous spectral measures and the spectral densities $f(\theta)>0$ and $g(\theta)>0$ satisfying the minimality condition \eqref{eq:minimality} with $\alpha=2$. The optimal linear estimate $\hat{A} \xi$ of the functional $A \xi~=~\sum_{j = 0}^{\infty} a_j \xi_j$, that depends on the unknown values $\xi_j,j=0,1,\dots,$ of the sequence from observations of the sequence $\{\xi_k+\eta_k,k\in\mathbb Z\}$ at points $k = -1, -2, \dots,$ is calculated by the formula \eqref{estim2}. The spectral characteristic ${h}(\theta)$ of the estimate is calculated by formula \eqref{sphar_a12}. The variance of the optimal estimate of the functional is calculated by formula \eqref{var_a12}. \end{thm} \subsection{Extrapolation problem. Stationary sequences. Observations without noise} Consider the problem of optimal linear estimation of the functional $$A \xi~=~\sum_{j = 0}^{\infty} a_j \xi_j = \int_{-\pi}^{\pi} A(e^{i\theta})dZ^{\xi}(\theta),\quad A(e^{i\theta})=\sum_{j = 0}^{\infty}a_j e^{ij\theta}, $$ that depends on the unknown values $\xi_j,j=0,1,\dots,$ of a stationary stochastic sequence from observations of the sequence $\xi_k$ at points $k = -1, -2, \dots$. Suppose that the stationary stochastic sequence $\{\xi_k,k\in\mathbb Z\}$ has the spectral density $f(\theta)>0$ satisfying the minimality condition \eqref{minimality1} with $\alpha=2$. The optimal linear estimate $\hat{A} \xi$ of the functional $A\xi$ is of the form \eqref{estim_f}, where the spectral characteristic ${h}(\theta)$ of the optimal linear estimate $\hat{A} \xi$ of the functional $A\xi$ is calculated by the formula \begin{equation}\label{sphar_a22} {h}(\theta) = A(e^{i\theta}) - \left(\sum\limits_{j=0}^{\infty}(\bold{B}^{-1}\bold{a})_je^{ij\theta} \right) \left(f(\theta)\right)^{-1}. 
\end{equation} The variance of the optimal estimate of the functional is calculated by the formula \begin{equation}\label{var_a22} \left\| {A} \xi- \hat{A} \xi \right\|_2^2 = \frac{1}{2\pi} \int_{-\pi}^{\pi} \left|\sum\limits_{j=0}^{\infty}(\bold{B}^{-1}\bold{a})_je^{ij\theta} \right|^2 f^{-1}(\theta)d\theta=\langle\bold{B}^{-1}\vec{\bold{a}}, \vec{\bold{a}}\rangle, \end{equation} where $\bold{B}$ is the operator in the space $\ell_2$ determined by the matrix with elements \begin{equation*} B_{k,j}=\frac{1}{2\pi}\int\limits_{-\pi}^{\pi}f^{-1}(\theta)e^{i(j-k)\theta}d\theta, \quad k,j=0,1,\dots. \end{equation*} The spectral density $f(\theta)>0$ of the stationary sequence $\{\xi_k,k\in\mathbb Z\}$ satisfies the minimality condition \eqref{minimality1} with $\alpha=2$. For this reason the function $f^{-1}(\theta)$ admits the factorization \begin{equation}\label{factor} \frac{1}{f(\theta)}=\sum\limits_{p=-\infty}^{\infty}b_{p}e^{ip\theta}=\left|\sum\limits_{j=0}^{\infty}\psi_{j}e^{-ij\theta}\right|^2 =\left|\sum\limits_{j=0}^{\infty}\varphi_{j}e^{-ij\theta}\right|^{-2}. \end{equation} Denote by $\bold{\Psi}$ and $ \bold{\Phi}$ the linear operators in the space $\ell_2$ which are determined by the matrices with elements $\bold{\Psi}_{i,j}=\psi_{i-j}$, $\bold{\Phi}_{i,j}=\varphi_{i-j}$, for $0\leq j\leq i$, while $\bold{\Psi}_{i,j}=0$, $\bold{\Phi}_{i,j}=0$, for $0\leq i< j$. It can be shown that $\bold{\Psi}\bold{\Phi}=\bold{\Phi}\bold{\Psi}=I$. The operator $\bold{B}$ can be represented in the form $\bold{B}=\bold{\Psi}^{'}\overline{\bold{\Psi}}.$ The operator $\bold{B}^{-1}$ can be represented in the form $\bold{B}^{-1}=\overline{\bold{\Phi}}\bold{\Phi}^{'}.$ As a corollary we can represent formula \eqref{var_a22} in the form \begin{equation}\label{var_a222} \left\| {A} \xi- \hat{A} \xi \right\|_2^2 =\langle\bold{B}^{-1}\vec{\bold{a}}, \vec{\bold{a}}\rangle =\langle\overline{\bold{\Phi}}\bold{\Phi}^{'}\vec{\bold{a}}, \vec{\bold{a}}\rangle =\langle\bold{\Phi}^{'}\vec{\bold{a}}, \bold{\Phi}^{'}\vec{\bold{a}}\rangle =\langle\bold{A}\vec{\varphi}, \bold{A}\vec{\varphi}\rangle =\|\bold{A}\vec{\varphi}\|^2, \end{equation} where the linear operator $\bold{A}$ in the space $\ell_2$ is determined by the matrix with elements $\bold{A}_{i,j}=a_{i+j}$, $i,j=0,1,\dots$, and the vector $\vec{\varphi}=({\varphi}_0,{\varphi}_1, {\varphi}_2,\dots)$ is determined by the elements ${\varphi}_j,j=0,1,\dots,$ of the factorization \eqref{factor}. So, the following theorem holds true. \begin{thm}\label{thm3} Let $\{\xi_k,k\in\mathbb Z\}$ be a stationary stochastic sequence which has absolutely continuous spectral measure and the spectral density $f(\theta)>0$ satisfying the minimality condition \eqref{minimality1} with $\alpha=2$. The optimal linear estimate $\hat{A} \xi$ of the functional $A \xi~=~\sum_{j = 0}^{\infty} a_j \xi_j$, that depends on the unknown values $\xi_j,j=0,1,\dots,$ of the sequence $\{\xi_k,k\in\mathbb Z\}$ from observations of the sequence $\{\xi_k,k\in\mathbb Z\}$ at points $k = -1, -2, \dots$ is of the form \eqref{estim_f}, where the spectral characteristic ${h}(\theta)$ of the optimal linear estimate $\hat{A} \xi$ of the functional is calculated by the formula \eqref{sphar_a22}. The variance of the optimal estimate of the functional can be calculated by formula \eqref{var_a22} as well as by formula \eqref{var_a222}.
\end{thm} \begin{exm} \label{example1} Consider the problem of optimal linear estimation of the functional $$A \xi~=~\sum_{j = 0}^{\infty} a_j \xi_j $$ that depends on the unknown values $\xi_j,j=0,1,\dots,$ of a stationary stochastic sequence from observations of the sequence $\xi_k$ at points $k = -1, -2, \dots$. Suppose that the stationary stochastic sequence $\{\xi_k,k\in\mathbb Z\}$ has the spectral density $$f(\theta) = |1 - \alpha e^{-i\theta}|^{-2},$$ where $|\alpha|<1$. The function $f^{-1}(\theta)= |1 - \alpha e^{-i\theta}|^{2}$ admits the factorization $$f^{-1}(\theta) = b_{-1} e^{-i\theta} + b_{0}+b_{1} e^{i\theta} =\left|\sum\limits_{j=0}^{\infty}\psi_{j}e^{-ij\theta}\right|^2 =\left|\sum\limits_{j=0}^{\infty}\varphi_{j}e^{-ij\theta}\right|^{-2},$$ where $b_{0} = 1+|\alpha|^2$, $b_{-1} = -\alpha$, $b_{1} = -\bar{\alpha}$, $b_p=0, |p|>1$ are the Fourier coefficients of the function $f^{-1}(\theta)$; $\psi_{0} = 1, \psi_{1} = -\alpha, \psi_j=0, j>1$; $\varphi_{j} = \alpha ^{j}, j\geq 0$; $b_p=\sum\limits_{k=0}^{\infty}\psi_{k}\overline{\psi}_{k+p},$ $p\geq0$, and $b_{-p}=\overline{b_p}$, $p\geq0$. The optimal linear estimate $\hat{A} \xi$ of the functional $A\xi$ is of the form \eqref{estim_f}, where the spectral characteristic ${h}(\theta)$ of the optimal linear estimate $\hat{A} \xi$ of the functional $A\xi$ is calculated by the formula \begin{equation}\label{sphar_a2222} {h}(\theta) = \left(\sum_{j = 0}^{\infty}a_j e^{ij\theta}\right) - \left(\sum\limits_{j=0}^{\infty}(\bold{B}^{-1}\bold{a})_je^{ij\theta} \right) \left(b_{-1} e^{-i\theta} + b_0+b_1 e^{i\theta}\right). \end{equation} Making use of the relation $\bold{B}^{-1}=\overline{\bold{\Phi}}\bold{\Phi}^{'}$ we find that the matrix $\bold{B}^{-1}$ is of the form \begin{equation*} \bold{B}^{-1}= \left(\begin{array}{ccccc} 1 & \alpha & \alpha ^2 & \alpha ^3 & \ldots \\ \overline{\alpha} & \overline{\alpha}\alpha+1 & \overline{\alpha}\alpha^{2}+\alpha & \overline{\alpha}\alpha^{3}+\alpha^{2} & \ldots \\ \overline{\alpha^2} & \overline{\alpha^2}\alpha+\overline{\alpha} & \overline{\alpha^2}\alpha^2+\overline{\alpha}\alpha+1 & \overline{\alpha^2}\alpha^3+\overline{\alpha}\alpha^2+\alpha & \ldots \\ \overline{\alpha^3} & \overline{\alpha^3}\alpha+\overline{\alpha^2} & \overline{\alpha^3}\alpha^2+\overline{\alpha^2}\alpha+\overline{\alpha} & \overline{\alpha^3}\alpha^3+\overline{\alpha^2}\alpha^2+\overline{\alpha}\alpha +1& \ldots \\ \ldots \end{array}\right). \end{equation*} Consider the problem under the condition $a_j=0,j\geq3$. In this case the coefficients $c_j=\left(\bold{B}^{-1}\vec{\bold{a}}\right)_j$, $j =0, 1, 2,\ldots,$ are as follows $$ c_0=a_0+a_1\alpha+a_2\alpha^2, $$ $$ c_1=a_0\overline{\alpha}+a_1(\overline{\alpha}\alpha+1)+a_2(\overline{\alpha}\alpha^{2}+\alpha), $$ $$ c_2=a_0\overline{\alpha^2}+a_1(\overline{\alpha^2}\alpha+\overline{\alpha})+a_2(\overline{\alpha^2}\alpha^2+\overline{\alpha}\alpha+1), $$ $$ c_j=a_0\overline{\alpha^j}+a_1(\overline{\alpha^j}\alpha+\overline{\alpha^{j-1}})+a_2(\overline{\alpha^j}\alpha^2+\overline{\alpha^{j-1}}\alpha+\overline{\alpha^{j-2}}),j\geq3. $$ The spectral characteristic ${h}_2(\theta)$ of the optimal linear estimate $\hat{A}_2 \xi$ of the functional $A_2\xi=a_0\xi_0+a_1\xi_1+a_2\xi_2$ is calculated by the formula $$ {h}_2(\theta) = \left(a_0+a_1 e^{i\theta} + a_2 e^{i2\theta} \right) - \left(\sum\limits_{j=0}^{\infty}c_je^{ij\theta} \right) \left(b_{-1} e^{-i\theta} + b_0+b_1 e^{i\theta}\right)= $$ $$ =-c_0b_{-1}e^{-i\theta}=\left(a_0\alpha +a_1\alpha^2+a_2\alpha^3 \right)e^{-i\theta}.
$$ The error of the estimate is calculated by the formula $$ \left\| {A}_2 \xi- \hat{A}_2 \xi \right\|_2^2=\langle\bold{A}\vec{\varphi}, \bold{A}\vec{\varphi}\rangle =\langle\bold{B}^{-1}\vec{\bold{a}}, \vec{\bold{a}}\rangle= $$ $$ = a_0^2+a_0a_1(\alpha+\overline{\alpha})+a_1^2(1+\alpha^2)+ a_0a_2(\alpha^2+\overline{\alpha^2})+ a_1a_2(\alpha+\overline{\alpha})(1+\alpha^2)+ a_2^2(1+\alpha^2+\alpha^4). $$ \end{exm} \section{Extrapolation problem. Minimax approach} The value of the error $$\Delta\left(h(f,g);f,g\right):= \left\| \hat{A} \xi- A \xi \right\|_\alpha^\alpha $$ and the spectral characteristic $h(f,g):={h}(\theta)$ of the optimal estimate $\hat{A}\xi$ of the functional $A\xi$ can be calculated by the proposed formulas only in the case where we know the spectral densities $f(\theta)$ and $g(\theta)$ of the harmonizable symmetric $\alpha$-stable stochastic sequences $\{\xi_k,k\in\mathbb Z\},$ and $\{\eta_k,k\in\mathbb Z\}$. However, usually we do not have exact values of the spectral densities of stochastic sequences, while we often know a set $D=D_f\times D_g$ of admissible spectral densities. In this case we can apply the minimax-robust method of estimation to the extrapolation problem. This method let us find an estimate that minimizes the maximum of the errors for all spectral densities from the given set $D=D_f\times D_g$ of admissible spectral densities simultaneously (see books by Moklyachuk (2008), Moklyachuk and Masyutka (2012), Moklyachuk and Golichenko (2016) for more details). \begin{definition} For a given class of spectral densities $D=D_f\times D_g$ the spectral densities $f_0(\theta)\in D_f$, $g_0(\theta)\in D_g$ are called the least favorable in $D=D_f\times D_g$ for the optimal linear estimation of the functional $A \xi$ if the following relation holds true $$\Delta\left(f_0,g_0\right)=\Delta\left(h\left(f_0,g_0\right);f_0,g_0\right)=\max\limits_{(f,g)\in D_f\times D_g}\Delta\left(h\left(f,g\right);f,g\right).$$ \end{definition} \begin{definition} For a given class of spectral densities $D=D_f\times D_g$ the spectral characteristic $h^0=h(f_0,g_0)$ of the optimal estimate $\hat{A}\xi$ of the functional $A\xi$ is called minimax (robust) for the optimal linear estimation of the functional $A \xi$ if the following relations hold true $$h^0\in H_D= \bigcap\limits_{(f,g)\in D_f\times D_g} L^{\alpha}(f+g),$$ $$\min\limits_{h\in H_D}\max\limits_{(f,g)\in D}\Delta\left(h;f,g\right)=\max\limits_{(f,g)\in D}\Delta\left(h^0;f,g\right).$$ \end{definition} The least favorable spectral densities $f_0(\theta)$, $g_0(\theta)$ and the minimax spectral characteristic $h^0=h(f_0,g_0)$ form a saddle point of the function $\Delta \left(h;f,g\right)$ on the set $H_D \times D$. The saddle point inequalities $$\Delta\left(h;f_0,g_0\right)\geq\Delta\left(h^0;f_0,g_0\right)\geq \Delta\left(h^0;f,g\right) $$ $$ \forall h \in H_D, \forall f \in D_f, \forall g \in D_g$$ holds true if $h^0=h(f_0,g_0)$ and $h(f_0,g_0)\in H_D,$ where $(f_0,g_0)$ is a solution to the constrained optimization problem \begin{equation} \label{condextr} \max\limits_{(f,g)\in D_f\times D_g}\Delta\left(h(f_0,g_0);f,g\right)=\Delta\left(h(f_0,g_0);f_0,g_0\right), \end{equation} \begin{equation}\label{delta41}\begin{split} \Delta\left(h(f_0,g_0);f,g\right)&= \left\|{A} \xi- \hat{A} \xi \right\|_\alpha^\alpha\\ & = \int_{-\pi}^{\pi} \left| A(e^{i\theta}) - {h}^0(\theta) \right|^{\alpha}f(\theta)d \theta + \int_{-\pi}^{\pi} \left|{h}^0(\theta) \right|^{\alpha}g(\theta)d \theta. 
\end{split}\end{equation} The constrained optimization problem \eqref{condextr} is equivalent to the unconstrained optimization problem \begin{equation} \label{8} \Delta_D(f,g)=-\Delta(h(f_0,g_0);f,g)+\delta(f,g\left|D_f\times D_g\right.)\rightarrow \inf, \end{equation} where $\delta(f,g\left|D_f\times D_g\right.)$ is the indicator function of the set $D=D_f\times D_g$. A solution $(f_0,g_0)$ to the problem (\ref{8}) is characterized by the condition $0 \in \partial\Delta_D(f_0,g_0),$ where $\partial\Delta_D(f_0,g_0)$ is the subdifferential of the convex functional $\Delta_D(f,g)$ at the point $(f_0,g_0)$. This condition makes it possible to find the least favorable spectral densities in some special classes of spectral densities $D=D_f\times D_g$ (Ioffe and Tikhomirov, 1979; Pshenychnyj, 1971; Rockafellar, 1997; Moklyachuk, 2008b). Note that the form of the functional $\Delta(h(f_0,g_0);f,g)$ is convenient for applying the method of Lagrange multipliers for finding a solution to the problem \eqref{condextr}. Making use of the method of Lagrange multipliers and the form of subdifferentials of the indicator functions, we describe relations that determine the least favourable spectral densities in some special classes of spectral densities. Summing up the derived formulas and the introduced definitions, we come to the conclusion that the following lemmas hold true. \begin{lemma} \label{lem41} Let $\{\xi_k,k\in\mathbb Z\}$ and $\{\eta_k,k\in\mathbb Z\}$ be mutually independent harmonizable symmetric $\alpha$-stable random sequences which have absolutely continuous spectral measures and the spectral densities $f_0(\theta)>0$ and $g_0(\theta)>0$ satisfying the minimality condition \eqref{eq:minimality}. Let the spectral densities $(f_0,g_0)\in D_f\times D_g$ give a solution to the constrained optimization problem \eqref{condextr}. The spectral densities $(f_0,g_0)$ are the least favorable spectral densities in $D_f\times D_g$ and $h^0=h(f_0,g_0)$ is the minimax spectral characteristic of the optimal linear estimate $\hat{A} \xi$ of the functional $A \xi$, that depends on the unknown values $\xi_j,j=0,1,\dots,$ of the sequence from observations of the sequence $\{\xi_k+\eta_k,k\in\mathbb Z\}$ at points $k = -1, -2, \dots,$ if $h^0=h(f_0,g_0)\in H_D$. \end{lemma} \begin{lemma} \label{lem42} Let $\{\xi_k,k\in\mathbb Z\}$ be a harmonizable symmetric $\alpha$-stable random sequence which has absolutely continuous spectral measure and the spectral density $f_0(\theta)>0$ satisfying the minimality condition \eqref{minimality1}. Let the spectral density $f_0\in D_f$ give a solution to the constrained optimization problem \begin{equation} \label{condextr42} \max\limits_{f\in D_f}\Delta\left(h(f_0);f\right)=\Delta\left(h(f_0);f_0\right), \end{equation} \begin{equation} \label{delta42} \Delta\left(h(f_0);f\right)=\left\| {A} \xi- \hat{A} \xi \right\|_\alpha^\alpha = \int_{-\pi}^{\pi} \left|\left(C^0(e^{i\theta}) \right)^{<\frac{1}{\alpha - 1}>} (f_0(\theta))^{\frac{-1}{\alpha - 1}} \right|^{\alpha} f(\theta) d \theta. \end{equation} The spectral density $f_0$ is the least favorable spectral density in $D_f$ and $h^0=h(f_0)$ is the minimax spectral characteristic of the optimal linear estimate $\hat{A} \xi$ of the functional $A \xi$, that depends on the unknown values $\xi_j,j=0,1,\dots,$ of the sequence from observations of the sequence $\{\xi_k,k\in\mathbb Z\}$ at points $k = -1, -2, \dots,$ if $h^0=h(f_0)\in H_D$.
\end{lemma} \begin{lemma} \label{lem43} Let $\{\xi_k,k\in\mathbb Z\}$ and $\{\eta_k,k\in\mathbb Z\}$ be mutually independent stationary random sequences which have absolutely continuous spectral measures and the spectral densities $f_0(\theta)>0$ and $g_0(\theta)>0$ satisfying the minimality condition \eqref{eq:minimality} with $\alpha=2$. Let spectral densities $(f_0,g_0)\in D_f\times D_g$ gives a solution to the constrained optimization problem \begin{equation} \label{condextr43} \max\limits_{(f,g)\in D_f\times D_g}\Delta\left(h(f_0,g_0);f,g\right)=\Delta\left(h(f_0,g_0);f_0,g_0\right), \end{equation} \begin{equation} \label{delta43} \begin{split} \Delta\left(h(f_0,g_0);f,g\right) &=\frac{1}{2\pi}\int\limits_{-\pi}^{\pi}\frac{\left| A(e^{i\theta})g_0(\theta)+\sum\limits_{j=0}^{\infty}((\bold{B}^0)^{-1}\bold{R}^0\bold{a})_je^{ij\theta} \right|^2}{(f_0(\theta)+g_0(\theta))^2}f(\theta)d\theta\\ &+\frac{1}{2\pi}\int\limits_{-\pi}^{\pi}\frac{\left| A(e^{i\theta})f_0(\theta)-\sum\limits_{j=0}^{\infty}((\bold{B}^0)^{-1}\bold{R}^0\bold{a})_je^{ij\theta} \right|^2}{(f_0(\theta)+g_0(\theta))^2}g(\theta)d\theta.\\ \end{split} \end{equation} The spectral densities $(f_0,g_0)$ are the least favorable spectral densities in $D_f\times D_g$ and $h^0=h(f_0,g_0)$ is the minimax spectral characteristic of the optimal linear estimate $\hat{A} \xi$ of the functional $A \xi$, that depends on the unknown values $\xi_j,j=0,1,\dots,$ of the sequence from observations of the sequence $\{\xi_k+\eta_k,k\in\mathbb Z\}$ at points $k = -1, -2, \dots,$ if $h^0=h(f_0,g_0)\in H_D$. \end{lemma} \begin{lemma} \label{lem44} Let $\{\xi_k,k\in\mathbb Z\}$ be a stationary random sequence which has absolutely continuous spectral measure and the spectral density $f_0(\theta)>0$ satisfying the minimality condition \eqref{minimality1} with $\alpha=2$. Let the spectral density $f_0\in D_f$ gives a solution to the constrained optimization problem \begin{equation} \label{condextr44} \max\limits_{f\in D_f}\Delta\left(h(f_0);f\right)=\Delta\left(h(f_0);f_0\right), \end{equation} \begin{equation}\label{delta44} \Delta\left(h(f_0);f\right) = \frac{1}{2\pi}\int_{-\pi}^{\pi} \left|\sum\limits_{j=0}^{\infty}((\bold{B}^0)^{-1}\bold{a})_je^{ij\theta} \right|^2 f_0^{-2}(\theta)f(\theta)d\theta. \end{equation} The spectral density $f_0$ is the least favorable spectral density in $D_f$ and $h^0=h(f_0)$ is the minimax spectral characteristic of the optimal linear estimate $\hat{A} \xi$ of the functional $A \xi$, that depends on the unknown values $\xi_j,j=0,1,\dots,$ of the sequence from observations of the sequence $\{\xi_k,k\in\mathbb Z\}$ at points $k = -1, -2, \dots,$ if $h^0=h(f_0)\in H_D$. 
\end{lemma} \section{Least favorable spectral densities in the class $D_f^{\beta}\times D_g^{\varepsilon}$} Consider the problem of optimal estimation of the linear functional $A \xi$ that depends on the unknown values $\xi_j,j=0,1,\dots,$ of a random sequence from observations of the sequence $\{\xi_k+\eta_k,k\in\mathbb Z\}$ at points $k = -1, -2, \dots$, where $\{\xi_k,k\in\mathbb Z\}$ and $\{\eta_k,k\in\mathbb Z\}$ are mutually independent harmonizable symmetric $\alpha$-stable random sequences which have spectral densities $f(\theta)>0$ and $g(\theta)>0$ satisfying the minimality condition \eqref{eq:minimality} from the class of admissible spectral densities $D=D_f^{\beta}\times D_g^{\varepsilon}$, where $$ D_f^{\beta} = \left\{f(\theta)\left|\int\limits_{-\pi}^{\pi} (f(\theta))^{\beta}d\theta= P_1\right.\right\},$$ $$D_g^{\varepsilon} = \left\{g(\theta)\left| g(\theta)=(1-{\varepsilon}) g_1(\theta)+ {\varepsilon}w(\theta),\int\limits_{-\pi}^{\pi} g(\theta)d\theta= P_2\right.\right\}. $$ Assume that the spectral densities $f_0\in D_f^{\beta}$, $g_0\in D_g^{\varepsilon}$ and the functions $h_f(f_0,g_0)$, $h_g(f_0,g_0)$, determined by the equations \begin{equation} \label{hf51} h_f(f_0,g_0)=\left| A(e^{i\theta}) - {h}^0(\theta) \right|^{\alpha}, \end{equation} \begin{equation} \label{hg51} h_g(f_0,g_0)=\left|{h}^0(\theta) \right|^{\alpha}, \end{equation} \begin{equation}\label{sp51} \left( A(e^{i\theta}) - {h}^0(\theta) \right)^{<\alpha - 1>}f_0(\theta) - \left({h}^0(\theta) \right)^{<\alpha - 1>}g_0(\theta) = C^0(e^{i\theta}), \end{equation} \begin{equation}\label{speq51} \int_{-\pi}^{\pi} {e^{-i\theta k}\,\, {h}^0(\theta)} d\theta = 0,\quad k = 0, 1,\dots \end{equation} are bounded. Under these conditions the functional \begin{equation}\begin{split} \Delta\left(h(f_0,g_0);f,g\right)&= \left\|{A} \xi- \hat{A} \xi \right\|_\alpha^\alpha\\ & = \int_{-\pi}^{\pi} \left| A(e^{i\theta}) - {h}^0(\theta) \right|^{\alpha}f(\theta)d \theta + \int_{-\pi}^{\pi} \left|{h}^0(\theta) \right|^{\alpha}g(\theta)d \theta \end{split}\end{equation} is linear and continuous in the $L_1\times L_1$ space and we can apply the Lagrange multipliers method to derive that the least favorable densities $f_0\in D_f^{\beta}$, $g_0\in D_g^{\varepsilon}$ satisfy the equations \begin{equation} \label{eqf51} \left| A(e^{i\theta}) - {h}^0(\theta) \right|^{\alpha}= \gamma_1 \left(f_0(\theta)\right)^{\beta - 1}, \end{equation} \begin{equation} \label{eqg51} \left|{h}^0(\theta) \right|^{\alpha}= \left(\varphi_1 (\theta)+\gamma_2 \right), \end{equation} where $\varphi_1 (\theta)\leq 0$ and $\varphi_1(\theta)= 0$ if $g_0(\theta)\geq (1-{\varepsilon}) g_1(\theta)$; $\gamma_1,$ $\gamma_2$ are the Lagrange multipliers which are determined from the conditions $$ \int\limits_{-\pi}^{\pi} (f_0(\theta))^{\beta}d\theta= P_1,\quad \int\limits_{-\pi}^{\pi} g_0(\theta)d\theta= P_2. $$ Thus, the following statement holds true. \begin{thm}\label{thm51} Let the spectral densities $f_0\in D_f^{\beta}$, $g_0\in D_g^{\varepsilon}$ satisfy the minimality condition \eqref{eq:minimality} and let the functions $h_f(f_0,g_0)$, $h_g(f_0,g_0)$ determined by formulas \eqref{hf51}, \eqref{hg51}, \eqref{sp51}, \eqref{speq51} be bounded. The spectral densities $f_0(\theta)$ and $g_0(\theta)$ are the least favorable in the class $D=D_f^{\beta}\times D_g^{\varepsilon}$ for the optimal linear estimation of the functional $A \xi$ if they satisfy equations \eqref{eqf51}, \eqref{eqg51} and determine a solution to the optimization problem \eqref{condextr}.
The minimax-robust spectral characteristic $h(f_0,g_0)$ of the optimal estimate of the functional $A \xi$ is determined by formulas \eqref{sp51}, \eqref{speq51}. \end{thm} \subsection{Least favorable spectral densities. Observations without noise} Consider the problem of optimal linear estimation of the functional $A \xi= \sum_{j = 0}^{\infty} a_j \xi_j$ that depends on the unknown values $\xi_j, j = 0, 1, \ldots$, from observations of the sequence $\{\xi_k,k\in\mathbb Z\}$ at points $k = -1, -2, \dots$, where $\{\xi_k,k\in\mathbb Z\}$ is a harmonizable symmetric $\alpha$-stable random sequence which has the spectral density $f(\theta)>0$ satisfying the minimality condition \eqref{minimality1} from the class of admissible spectral densities $D_f^{\beta}$. Assume that the spectral density $f_0\in D_f^{\beta}$ and the function $h_f(f_0)$ determined by the equation \begin{equation} \label{hf52} h_f(f_0)= \left|\left(C^0(e^{i\theta}) \right)^{<\frac{1}{\alpha - 1}>} (f_0(\theta))^{\frac{-1}{\alpha - 1}} \right|^{\alpha} \end{equation} is bounded. Under this condition the functional \eqref{delta42} is linear and continuous in the $L_1$ space and we can apply the method of Lagrange multipliers to find a solution of the constrained optimization problem \eqref{condextr42} and derive that the least favorable density $f_0\in D_f^{\beta}$ satisfies the equation \begin{equation} \label{eqf52} \left|\left(C^0(e^{i\theta}) \right)^{<\frac{1}{\alpha - 1}>} (f_0(\theta))^{\frac{-1}{\alpha - 1}} \right|^{\alpha}= \gamma_1 \left(f_0(\theta)\right)^{\beta - 1}, \end{equation} where $\gamma_1$ is the Lagrange multiplier. From this equation we find that the least favorable density is of the form \begin{equation}\label{eqf522} f_0(\theta) = C\left|\sum_{j = 0}^{\infty} c_j e^{-i j \theta} \right|^\frac{\alpha}{\alpha + (\alpha-1)(\beta - 1)}. \end{equation} The unknown constants are determined from the optimization problem \eqref{condextr42} and from the condition $$ \int\limits_{-\pi}^{\pi} (f_0(\theta))^{\beta}d\theta= P_1. $$ In the case $\beta=1$ the least favorable density is of the form \begin{equation}\label{eqf5222} f_0(\theta) = C\left|\sum_{j = 0}^{\infty} c_j e^{-i j \theta} \right|. \end{equation} The following statement holds true. \begin{thm}\label{thm52} Let the spectral density $f_0\in D_f^{\beta}$ satisfy the minimality condition \eqref{minimality1} and let the function $h_f(f_0)$ determined by formula \eqref{hf52} be bounded. The spectral density $f_0(\theta)$ is the least favorable in the class $D_f^{\beta}$ for the optimal linear estimation of the functional $A \xi$ if it is of the form \eqref{eqf522} and determines a solution to the optimization problem \eqref{condextr42}. The minimax-robust spectral characteristic $h(f_0)$ of the optimal estimate of the functional $A \xi$ is determined by formula \eqref{spchar_f}. \end{thm} \subsection{Least favorable spectral densities.
Stationary sequences} Consider the problem of the optimal estimation of the linear functional $A \xi$ that depends on the unknown values $\xi_j,j=0,1,\dots,$ of a random sequence $\{\xi_k,k\in\mathbb Z\}$ from observations of the sequence $\{\xi_k+\eta_k,k\in\mathbb Z\}$ at points of time $k = -1, -2, \dots$, where $\{\xi_k,k\in\mathbb Z\}$ and $\{\eta_k,k\in\mathbb Z\}$ are mutually independent stationary stochastic sequences which have spectral densities $f(\theta)>0$ and $g(\theta)>0$ satisfying the minimality condition \eqref{eq:minimality} with $\alpha=2$ from the class of admissible spectral densities $D=D_f^{\beta}\times D_g^{\varepsilon}$. Assume that spectral densities $f_0\in D_f^{\beta}$, $g_0\in D_g^{\varepsilon}$ and the functions $h_f(f_0,g_0)$, $h_g(f_0,g_0)$, determined by the equations \begin{equation} \label{hf53} h_f(f_0,g_0)=\frac{\left|A(e^{i\theta})g_0(\theta)+\sum\limits_{j=0}^{\infty}((\bold{B}^0)^{-1}\bold{R}^0\bold{a})_je^{ij\theta}\right|^2}{(f_0(\theta)+g_0(\theta))^2}, \end{equation} \begin{equation} \label{hg53} h_g(f_0,g_0)=\frac{\left|A(e^{i\theta})f_0(\theta)-\sum\limits_{j=0}^{\infty}((\bold{B}^0)^{-1}\bold{R}^0\bold{a})_je^{ij\theta} \right|^2}{(f_0(\theta)+g_0(\theta))^2} \end{equation} are bounded. Under these conditions the functional \eqref{delta43} is linear and continuous in the $L_1\times L_1$ space and we can apply the Lagrange multipliers method to find solution of the constrained optimization problem \eqref{condextr43} and derive that the least favorable densities $f_0\in D_f^{\beta}$, $g_0\in D_g^{\varepsilon}$ satisfy the equations \begin{equation} \label{eqf53} {\left|A(e^{i\theta})g_0(\theta)+\sum\limits_{j=0}^{\infty}((\bold{B}^0)^{-1}\bold{R}^0\bold{a})_je^{ij\theta}\right|^2}=\gamma_1 {(f_0(\theta)+g_0(\theta))^2} \left(f_0(\theta)\right)^{\beta - 1}, \end{equation} \begin{equation} \label{eqg53} {\left|A(e^{i\theta})f_0(\theta)-\sum\limits_{j=0}^{\infty}((\bold{B}^0)^{-1}\bold{R}^0\bold{a})_je^{ij\theta} \right|^2}= {(f_0(\theta)+g_0(\theta))^2} \left(\varphi_1 (\theta)+\gamma_2 \right), \end{equation} where $\varphi_1 (\theta)\leq 0$ and $\varphi_1(\theta)= 0$ if $g_0(\theta)\geq (1-{\varepsilon}) g_1(\theta)$; $\gamma_1,$ $\gamma_2$ are the Lagrange multipliers which are determined from the conditions $$ \int\limits_{-\pi}^{\pi} (f_0(\theta))^{\beta}d\theta= P_1,\quad \int\limits_{-\pi}^{\pi} g_0(\theta)d\theta= P_2. $$ Thus, the following statement holds true. \begin{thm}\label{thm53} Let the spectral densities $f_0\in D_f^{\beta}$, $g_0\in D_g^{\varepsilon}$ satisfy the minimality condition \eqref{eq:minimality} with $\alpha=2$ and let the functions $h_f(f_0,g_0)$, $h_g(f_0,g_0)$ determined by formulas \eqref{hf53}, \eqref{hg53} be bounded. The spectral densities $f_0(\theta)$ and $g_0(\theta)$ are the least favorable in the class $D=D_f^{\beta}\times D_g^{\varepsilon}$ for the optimal linear estimation of the functional $A \xi$ if they satisfy equations \eqref{eqf53}, \eqref{eqg53} and determine a solution to the constrained optimization problem \eqref{condextr43}. The minimax-robust spectral characteristic $h(f_0,g_0)$ of the optimal estimate of the functional $A \xi$ is determined by formulas \eqref{sphar_a12}. \end{thm} \subsection{Least favorable spectral densities. Stationary sequences. 
Observations without noise} Consider the problem of the optimal linear estimation of the functional $A \xi$ that depends on the unknown values of a random sequence $\{\xi_j,j\in\mathbb Z\}$ from observations of the sequence at points $k = -1, -2, \dots$, where the stationary random sequence $\{\xi_k,k\in\mathbb Z\}$ has the spectral density $f(\theta)>0$ satisfying the minimality condition \eqref{minimality1} with $\alpha=2$ from the class of admissible spectral densities $D_f^{\beta}$. Assume that the spectral density $f_0\in D_f^{\beta}$ and the function $h_f(f_0)$ determined by the equation \begin{equation} \label{hf54} h_f(f_0)=\left|\sum\limits_{j=0}^{\infty}((\bold{B}^0)^{-1}\bold{a})_je^{ij\theta} \right|^2f_0^{-2}(\theta) \end{equation} is bounded. Under this condition the functional \eqref{delta44} is linear and continuous in the $L_1$ space and we can apply the Lagrange multipliers method to find a solution of the constrained optimization problem \eqref{condextr44} and derive that the least favorable density $f_0\in D_f^{\beta}$ satisfies the equation \begin{equation} \label{eqf54} \left|\sum\limits_{j=0}^{\infty}((\bold{B}^0)^{-1}\bold{a})_je^{ij\theta} \right|^2f_0^{-2}(\theta)= \gamma_1 \left(f_0(\theta)\right)^{\beta - 1}, \end{equation} where $\gamma_1$ is the Lagrange multiplier which is determined from the condition $$ \int\limits_{-\pi}^{\pi} (f_0(\theta))^{\beta}d\theta= P_1. $$ Thus, the following statement holds true. \begin{thm}\label{thm54} Let the spectral density $f_0\in D_f^{\beta}$ satisfy the minimality condition \eqref{minimality1} with $\alpha=2$ and let the function $h_f(f_0)$ determined by formula \eqref{hf54} be bounded. The spectral density $f_0(\theta)$ is the least favorable in the class $D_f^{\beta}$ for the optimal linear estimation of the functional $A \xi$ if it satisfies equation \eqref{eqf54} and determines a solution to the constrained optimization problem \eqref{condextr44}. The minimax-robust spectral characteristic $h(f_0)$ of the optimal estimate of the functional $A \xi$ is determined by formula \eqref{sphar_a22}. \end{thm} In the case of $\beta=1$ the set of admissible spectral densities $D=D_f$ is of the form $$ D_f = \left\{f(\theta)\left| \frac{1}{2\pi} \int\limits_{-\pi}^{\pi} f(\theta)d\theta= P\right.\right\}. $$ Stationary sequences with the spectral densities from such $D_f$ have finite dispersion $E|\xi_j|^2= P$ and can be represented as a sum of a regular sequence and a singular sequence. The least favorable spectral density in $D_f$ is the density of a regular sequence, since singular sequences have zero mean square error of extrapolation. Spectral densities from $D_f$ of the regular sequences admit the factorization \begin{equation}\label{factor2222} f(\theta) = \left|\sum\limits_{j=0}^{\infty}\varphi_{j}e^{-ij\theta}\right|^{2},\quad \sum\limits_{j=0}^{\infty}|\varphi_{j}|^{2}= P, \end{equation} and we can use the following optimization problem to find the least favourable spectral density in the set $D_f$ \begin{equation}\label{var_a2222} \|\bold{A}\vec{\varphi}\|^2\to\max,\quad \|\vec{\varphi}\|^2= \sum\limits_{j=0}^{\infty}|\varphi_{j}|^{2}= P, \end{equation} where the linear operator $\bold{A}$ in the space $\ell_2$ is determined by the matrix with elements $\bold{A}_{i,j}=a_{i+j}$, $i,j=0,1,\dots$, and the vector $\vec{\varphi}=({\varphi}_0,{\varphi}_1, {\varphi}_2,\dots)$ is determined by the elements ${\varphi}_j,j=0,1,\dots,$ of the factorization \eqref{factor2222}.
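When only finitely many of the coefficients $a_j$ are nonzero, the problem \eqref{var_a2222} reduces to a finite-dimensional symmetric eigenvalue problem for the matrix $(a_{i+j})$. The following Python sketch solves it for the coefficients $a_0=1$, $a_1=2$ of Example \ref{exm2.1} below, with $P=1$; the numerical output can be compared with the values given there.
\begin{verbatim}
# Least favorable density for the noise-free stationary case: maximize
# ||A phi||^2 subject to ||phi||^2 = P by taking the eigenvector of the
# truncated matrix (a_{i+j}) with the greatest eigenvalue.  Here a_0 = 1,
# a_1 = 2 (Example 2.1) and P = 1.
import numpy as np

a = np.array([1.0, 2.0])
M = len(a)
A = np.zeros((M, M))
for i in range(M):
    for j in range(M):
        if i + j < M:
            A[i, j] = a[i + j]        # matrix with entries a_{i+j}

eigvals, eigvecs = np.linalg.eigh(A)
nu0 = eigvals[-1]                      # greatest eigenvalue
phi0 = eigvecs[:, -1]                  # corresponding unit eigenvector (P = 1)
phi0 = phi0 * np.sign(phi0[0])         # fix the sign so that phi_0 > 0

print(nu0, (1 + np.sqrt(17)) / 2)          # both ~ 2.5616
print(phi0)                                # ~ [0.7882, 0.6154]
print(nu0 ** 2, (9 + np.sqrt(17)) / 2)     # maximal error for P = 1, ~ 6.5616

# least favorable density f_0(theta) = |phi_0 + phi_1 e^{-i theta}|^2
theta = np.linspace(-np.pi, np.pi, 5)
print(np.round(np.abs(phi0[0] + phi0[1] * np.exp(-1j * theta)) ** 2, 3))
\end{verbatim}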
The solution to the optimization problem \eqref{var_a2222} is given by the eigenvector $\vec{\varphi}^0=({\varphi}_0^0,{\varphi}_1^0, {\varphi}_2^0,\dots)$ which corresponds to the greatest eigenvalue $\nu^0$ of the linear operator $\bold{A}$. We present this result as a theorem. \begin{thm}\label{thm544} The spectral density $f_0(\theta)$ is the least favourable in the class $D_f$ for the optimal linear estimation of the functional $A \xi$ if it is of the form \eqref{factor2222}, where $\vec{\varphi}^0=({\varphi}_0^0,{\varphi}_1^0, {\varphi}_2^0,\dots)$ is the eigenvector of the linear operator $\bold{A}$ which corresponds to the greatest eigenvalue $\nu^0$ of the operator. The optimal minimax linear estimate $ \hat{A} { \xi}$ of the functional $A {\xi}=\sum_{j=0}^{ \infty} {a}_j\xi_j$ is of the form \[ \hat{A} { \xi}= \sum_{j=0}^{ \infty} {a}_j \left[ \sum_{u=- \infty}^{-1} \varphi_{j-u}^0 { \varepsilon}_u \right] , \] \noindent where $ \varepsilon_u$ is a standard stationary sequence with orthogonal values (``white noise'' sequence), the sequence $ \{\varphi_u^0: u= 0,1,\dots \}$ is uniquely determined by the coordinates of the eigenvector of the operator $\bold{A}$ that corresponds to the greatest eigenvalue $\nu^0$ and the condition $E \left | { \xi}_j \right | ^{2} =P$. \end{thm} \begin{exm} \label{exm2.1} Consider the problem of optimal linear estimation of the functional \[A_{1} { \xi}= \xi (0)+ 2\xi (1) \] that depends on the unknown values $ { \xi}(0)$, $ { \xi}(1)$ of a stationary sequence $ { \xi}(j)$, that satisfies the conditions \[ E{ \xi}(j)=0,\quad E\left | { \xi}(j) \right |^{2} \le P, \] based on observations of the sequence $ { \xi}(j)$ at points $j=-1,-2,\dots$. \noindent The eigenvalues of the operator $\bold{A}_1$ are equal to $(1\pm \sqrt{17})/2 $. So the greatest eigenvalue is $ \nu_{1} =(1 + \sqrt{17})/2 $. The eigenvector corresponding to the eigenvalue $ \nu_{1} =(1+ \sqrt{17})/2 $ is of the form $ \vec\varphi = \left \{ \varphi (0), \varphi(1) \right \}$, where \[ \varphi (0)= \sqrt{1/2+ 1/(2\sqrt{17})},\quad \varphi (1)= \sqrt{1/2- 1/(2\sqrt{17})} . \] The spectral density which is the least favourable in the class $D_f$ for the optimal linear estimation of the functional $A_1 \xi$ is of the form $$ f(\theta) = P\left|\varphi_{0}+\varphi_{1}e^{-i\theta}\right|^{2},\quad \left|\varphi_{0}\right|^{2}+\left|\varphi_{1}\right|^{2}= 1, $$ which is the spectral density of the least favourable stationary sequence $ { \xi}(j)$ that is a moving average sequence of the form \[ { \xi}(j)= \sqrt{P}\varphi (0) \varepsilon (j)+ \sqrt{P}\varphi (1) \varepsilon (j-1)= \] \[= \sqrt{P}\sqrt{1/2+ 1/(2\sqrt{17})} \,\,\varepsilon(j) + \sqrt{P}\sqrt{1/2- 1/(2\sqrt{17})}\,\,\varepsilon(j-1), \] where $\varepsilon(j)$ is a ``white noise'' sequence. \noindent The optimal linear minimax estimate $ \hat{A}_{1} { \xi}$ of the functional $A_{1} { \xi}$ is of the form \[ \hat{A}_{1} { \xi}= \sqrt{P}\varphi (1)\,\, \varepsilon (-1)= \sqrt{P}\sqrt{1/2- 1/(2\sqrt{17})}\,\,\varepsilon (-1). \] \noindent The mean-square error of the optimal estimate of the functional $A_{1} { \xi}$ does not exceed $P(9 + \sqrt{17})/2$.
\end{exm}
\section{Conclusion}
We propose methods for solving the optimal linear estimation problem for the linear functional $A \xi~=~\sum_{j = 0}^{\infty} a_j \xi_j$ that depends on the unknown values of a random sequence $\{\xi_j,j\in\mathbb Z\}$ from observations of the sequence $\{\xi_k+\eta_k,k\in\mathbb Z\}$ at points $k = -1, -2, \dots$, where $\{\xi_k,k\in\mathbb Z\}$ and $\{\eta_k,k\in\mathbb Z\}$ are mutually independent harmonizable symmetric $\alpha$-stable random sequences which have spectral densities $f(\theta)>0$ and $g(\theta)>0$ satisfying the minimality condition. The problem is investigated under the condition of spectral certainty as well as under the condition of spectral uncertainty. Formulas for calculating the value of the error and the spectral characteristic of the optimal linear estimate of the functional are derived under the condition of spectral certainty, where the spectral densities of the sequences are exactly known. In the case where the spectral densities of the sequences are not exactly known but a set of admissible spectral densities is given, relations which determine the least favorable densities and the minimax-robust spectral characteristics are found for different classes of spectral densities. \end{document}
\begin{document} \title{Congruences of Lines with one-dimensional focal locus} \author{Pietro De Poi} \thanks{This research was partially supported by the DFG Forschungsschwerpunkt ``Globalen Methoden in der Komplexen Geometrie'', and the EU (under the EAGER network).} \address{Mathematisches Institut\\ Universit\"at Bayreuth\\ Lehrstuhl VIII \\ Universit\"atsstra\ss e 30\\ D-95447 Bayreuth\\ Germany\\ } \email{[email protected]} \keywords{Algebraic Geometry, Focal loci, Grassmannians, Congruences} \subjclass[2000]{Primary 14M15, 14N05, Secondary 51N35} \date{\today} \begin{abstract} In this article we obtain the classification of the congruences of lines with one-dimensional focal locus. It turns out that one can restrict to study the case of $\mathbb{P}^3$. \end{abstract} \maketitle \section*{Introduction} A congruence $B$ of lines in $\mathbb{P}^n$ is a family of lines of dimension $n-1$. The order of $B$ is the number of lines of $B$ passing through a general point in $\mathbb{P}^n$ and its class is the number of lines of the congruence contained in a general hyperplane and meeting a general line contained in it. The study of congruences of lines in $\mathbb{P}^3$ was started by E.~Kummer in \cite{K}, in which he gave a classification of those of order one. More recently, congruences of lines in $\mathbb{P}^3$ were studied in \cite{G} by N. Goldstein, who tried to classify these from the point of view of their focal locus. The focal locus is, roughly speaking, the set of critical values of the natural map between the total space $\Lambda$ of the family $B$ and $\mathbb{P}^n$. Successively, Z. Ran in \cite{R} studied the surfaces of order one in the general Grassmannian $\mathrm{G}r(r,n)$ \textit{i.e. } families of $r$-planes in $\mathbb{P}^n$ for which the general $(n-r-2)$-plane meets only one element of the family. He gave a classification of such surfaces, in particular, in the case of $\mathbb{P}^3$, obtaining a modern and more correct proof of Kummer's classification of first order congruences of lines. Moreover, he proved that if the class of these congruences in $\mathbb{P}^3$ is greater than three, these are not smooth, as conjectured by I. Sols. Another proof of Kummer's classification is given by F. L. Zak and others in \cite{Z}. Moreover, E. Arrondo and I. Sols, in \cite{Ar} classified all the smooth congruences with small invariants. Successively, E. Arrondo and M. Gross in \cite{AG} classified all the smooth congruences in $\mathbb{P}^3$ with a fundamental curve and this classification was extended to all smooth congruences with a fundamental curve by E. Arrondo, M. Bertolini and C. Turrini in \cite{ABT}. In this paper the classification of congruences with focal locus of dimension one (\textit{i.e. } if the focal locus reduces to be only a---possibly reducible or non-reduced---fundamental curve) is obtained. In Section~\ref{sec:1} we fix the notation and we recall some known results. In Section~\ref{sec:2} we obtain some general results on the possible minimal dimension of the focal locus of a congruence in $\mathbb{P}^n$: first of all, we prove in Theorem~\ref{thm:cata} that if the focal locus has codimension at least two, then the congruence has order either zero or one. This result has been suggested to us by F. Catanese. In Theorem~\ref{thm:dim} we prove that the dimension of the focal locus is at least $\frac{n-1}{2}$---if the congruence is not a star of lines, \textit{i.e. 
} the set of lines passing through a point $P\in \mathbb{P}^n$ (in which case the focal locus has support in $P$). Moreover, we also give some results concerning the case of dimension $\frac{n-1}{2}$. In Section~\ref{sec:3} we prove the main result of this paper, namely Theorem~\ref{thm:int}; in order to state it, we need the following notation: first of all, let $\ell$ be a fixed line in $\mathbb{P}^3$ and $\mathbb{P}^1_\ell$ the set of the planes containing $\ell$. Let $\Phi$ be a general nonconstant morphism from $\mathbb{P}^1_\ell$ to $\ell$ and let $\Pi$ be a general element in $\mathbb{P}^1_\ell$. We define $\mathbb{P}^1_{\Phi(\Pi),\Pi}$ as the pencil of lines passing through the point $\Phi(\Pi)$ and contained in $\Pi$. \begin{thm}\label{thm:int} If the focal locus $F$ of a congruence of lines $B$ in $\mathbb{P}^n$ has dimension one, then $n=2,3$. If $n=2$, $F$ is an irreducible curve of degree greater than one and $B$ is its dual; if $n=3$, $B$ is a first order congruence, and we have the following possibilities: \begin{enumerate} \item $F$ is an irreducible curve, which is either: \begin{enumerate} \item\label{1st} a rational normal curve $C_3$ in $\mathbb{P}^3$. In this case $B$ is the family of secant lines of $C_3$ and has bidegree $(1,3)$ in the Grassmannian; \item\label{punto} or a non-reduced scheme whose support is a line $\ell$, and the congruence is---using the above notation---$\cup_{\Pi\in \mathbb{P}^1_\ell}\mathbb{P}^1_{\Phi(\Pi),\Pi}$; if we set $d:=\deg(\Phi)$, the congruence has bidegree $(1,d)$; or \end{enumerate} \item\label{2nd} $F$ is a reducible curve, union of a line $F_1$ and a rational curve $F_2$ such that $\length(F_1\cap F_2)=\deg(F_2)-1$; $B$ is the family of lines meeting $F_1$ and $F_2$ and it has bidegree $(1,\deg(F_2))$. \end{enumerate} \end{thm} \subsection*{Acknowledgments} This work has been much improved by many valuable suggestions of Prof. F. Catanese. I would also like to thank Prof. E. Arrondo and Prof. E. Mezzetti for useful discussions on these topics, and Prof. F. Zak for some references, especially \cite{Z}. \section{Notation and Preliminaries}\label{sec:1} We will work with schemes and varieties over the complex field $\mathbb C$, with the standard definitions and notation as in \cite{H}. For us a \emph{variety} will always be projective. More general results and references about families of lines, focal diagrams and congruences can be found in \cite{DP2}, \cite{DP3} and \cite{DPi}, or \cite{PDP}. Besides, we refer to \cite{GH} for notation about Schubert cycles and to \cite{Fu} for the definitions and results of intersection theory. So we denote by $\sigma_{a_0,a_1}$ the Schubert cycle of the lines in $\mathbb{P}^n$ contained in a fixed $(n-a_1)$-dimensional subspace $H\subset\mathbb{P}^n$ and which meet a fixed $(n-1-a_0)$-dimensional subspace $\Pi\subset H$. Here we recall that a \emph{congruence of lines} in $\mathbb{P}^n$ is a (flat) family $(\Lambda,B,p)$ of lines in $\mathbb{P}^n$ obtained as the pull-back of the universal family under the desingularization map of a subvariety $B'$ of dimension $n-1$ of the Grassmannian $\mathrm{G}r(1,n)$ of lines in $\mathbb{P}^n$. So $\Lambda\subset B\times \mathbb{P}^n$ and $p$ is the restriction of the projection $p_1:B\times\mathbb{P}^n\rightarrow B$ to $\Lambda$, while we will denote the restriction of $p_2:B\times\mathbb{P}^n\rightarrow\mathbb{P}^n$ by $f$.
$\Lambda_b:=p^{-1}(b)$, $(b\in B)$ will be a line of the family and $f(\Lambda_b)=:\Lambda(b)$ is the corresponding line in $\mathbb{P}^n$. A point $y\in \mathbb{P}^n$ is called \emph{fundamental} if its fibre $f^{-1}(y)$ has dimension greater than the dimension of the general one. The \emph{fundamental locus} is the set of the fundamental points. It is denoted by $\Psi$. Moreover the locus of the points $y\in \mathbb{P}^n$ for which the fibre $f^{-1}(y)$ has positive dimension will be denoted by $\Phi$. Since $\Lambda$ is smooth of dimension $n$, we define the \emph{focal divisor} $R\subset\Lambda$ as the ramification divisor of $f$. The schematic image of the focal divisor $R$ under $f$ (see, for example, \cite{H}) is the \emph{focal locus} $F\subset \mathbb{P}^n$. Clearly, we have $\Psi\subset\Phi\subset (F)_{\red}$. To a congruence is associated a \emph{sequence of degrees} $(a_0,\dotsc,a_\nu)$ since $B$ is rationally equivalent to a linear combination of the Schubert cycles of dimension $n-1$ in the Grassmannian $\mathrm{G}r(1,n)$: $$ [B]=\sum_{i=0}^{\nu}a_i\sigma_{n-1-i,i} $$ (where we set $\nu:=\left [\frac{n-1}{2}\right ]$); in particular, the \emph{order} $a_0$ is the number of lines of $B$ passing through a general point in $\mathbb{P}^n$, and the \emph{class} $a_1$ is the number of lines intersecting a general line $L$ and contained in a general hyperplane $H\supset L$ (\textit{i.e. } as a Schubert cycle, $B\cdot \sigma_{n-2,1}$). $a_0$ is the degree of $f:\Lambda\rightarrow\mathbb{P}^n$: thus if $a_0>0$, $\Psi=\Phi$, while if $a_0=1$, set-theoretically, $\Psi=\Phi=F$. An important result---independent of order and class---is the following: \begin{prop}[C. Segre, \cite{sg}]\label{prop:fofi} On every line $\Lambda_b\subset \Lambda$ of the family, the focal divisor $R$ either coincides with the whole $\Lambda_b$---in which case $\Lambda(b)$ is called a \emph{focal line}---or is a zero dimensional subscheme of $\Lambda_b$ of length $n-1$. Moreover, in the latter case, if $\Lambda$ is a first order congruence, $\Psi=F$ and $F\cap \Lambda(b)$ has length $n-1$. \end{prop} This result was proven classically in \cite{sg}; the first modern proof in the case of the congruences (\textit{i.e. } families of dimension two) of planes in $\mathbb{P}^4$ is in \cite{CS}. See \cite{DP2}, Proposition~1 for a proof of Proposition~\ref{prop:fofi}. \section{Congruences of lines in $\mathbb{P}^n$} \label{sec:2} We start with the following result, suggested to us by F. Catanese: \begin{thm}\label{thm:cata} Let $B$ be a congruence whose focal locus $F$ has codimension at least two. Then, $B$ has order either zero or one. \end{thm} \begin{proof} Let us consider the restriction of the map $f:\Lambda\rightarrow\mathbb{P}^n$ to the set $\Lambda\setminus f^{-1}(F)$. Then, either $f^{-1}(F)=\Lambda$, in which case $B$ is a congruence of order zero, or the map $f\mid_{\Lambda\setminus f^{-1}(F)}$ defines an unramified covering of the set $\mathbb{P}^n\setminus F$. But it is a well-known fact that---by dimensional reasons---$\mathbb{P}^n\setminus F$ is simply connected and $\Lambda\setminus f^{-1}(F)$ is connected. Therefore, $f\mid_{\Lambda\setminus f^{-1}(F)}$ is a homeomorphism, hence $f$ is a birational map and $B$ is a first order congruence. \end{proof} A central definition, introduced in \cite{DP2}, is the notion, $\forall d$ such that $0\le d\le n-2$, of \emph{fundamental $d$-locus}. This locus $\mathcal{F}^d$, is so defined: let $R_1,\dotsc,R_s$ be the \emph{horizontal components} (\textit{i.e. 
} $p_{\mid R_i}:R_i\rightarrow B$ is dominant). Now, let $F_i:=f(R_i)$ (as before, we take the schematic image); then, $\forall d$ such that $0\le d\le n-2$ we define $\mathcal{F}^d:=\cup_{\dim(F_i)=d}F_i$. For dimensional reasons, the lines of the congruence passing through $P_d\in \mathcal{F}^d$ form a family of dimension (at least) $n-1-d$ (so, set-theoretically, $\mathcal{F}^d\subset\Phi$). From this, we infer for example that if the focal locus has dimension zero, then the congruence is a star of lines (see Corollary~1 of \cite{DP2}). In this article, we will be interested in the subsequent case, \textit{i.e. } the case in which $\dim(F)=1$. We first note that if our congruence has order one, then there exists a $d$ such that $\mathcal{F}^d\neq \emptyset$. The first important results of this paper are the following \begin{thm}\label{thm:dim} Let $B$ be a congruence of lines in $\mathbb{P}^n$. If $\dim(F):=i>0$, then $\frac{n-1}{2}\le i \le n-1$. Besides, if $i=\frac{n-1}{2}$, $(F)_{\red}$ is irreducible and the general line of the congruence meets $F$---set-theoretically---in only one point, then $F$ is---set-theoretically---an $i$-plane. \end{thm} \begin{proof} Clearly, we have only to prove that $\dim(F) \ge \frac{n-1}{2}$. Moreover, by Theorem \ref{thm:cata}, we can reduce to the study of the cases of order zero and one. If the congruence has order zero, then $F=f(\Lambda)$ and $F$ contains a family of lines of dimension $n-1$. The claim then follows from the observation that, among varieties of fixed dimension $i$, the projective space is the only one containing a family of lines of dimension greater than or equal to $2(i-1)$. Let us now suppose that the order is one. Let $\mathcal{F}^i$ be the fundamental $i$-locus of maximal dimension $i(>0)$. Given $i+1$ general hyperplanes $H_0,\dotsc,H_i$ in $\mathbb{P}^n$ and the corresponding hyperplane sections of $\mathcal{F}^i$, $D_0,\dotsc,D_i$, the lines of $\Lambda$ which meet $D_j$ form a family of dimension $n-2$, $j=0,\dotsc,i$, which fills a hypersurface $M_{D_j}$ in $\mathbb{P}^n$. Since $D_0\cap \dotsb\cap D_i=\emptyset$ and $M:=M_{D_0}\cap \dotsb \cap M_{D_i}\subset \mathbb{P}^n$ is such that $\dim(M)\ge n-1-i\ge 0$, take $Q\in M$. By definition, for every $j$ there exists a line $\ell_j$ of the congruence passing through $Q$ and meeting $D_j$. If the $\ell_j$'s are not all equal, then at least two lines of the congruence pass through $Q$, so $Q\in F$; and if all the $\ell_j$'s were equal, we would get a contradiction. It follows that $M\subset F$. Now $M\subset F$; if $\dim(F)<\frac{n-1}{2}$ then, since $\dim(M)\ge n-1-i$, we would have $2i> n-1$, which is impossible, since $\mathcal{F}^i\subset F$ gives $2i<n-1$. Let us now suppose that the equality holds, $D:=(F)_{\red}$ is irreducible, and the general line of the congruence meets $D$ in only one point. If $P,Q\in D$ are two general points, the lines of $\Lambda$ passing through $P$ and through $Q$ generate two cones $M_P,M_Q\subset\mathbb{P}^n$ of dimension $i+1$. Therefore, by our hypotheses $\dim(M_P\cap M_Q)\ge 1$ and $M_P\cap M_Q$ contains the line joining $P$ and $Q$. So, $D$ is a linear space. \end{proof} \begin{prop}\label{prop:dim} Let $\Lambda$ be a congruence in $\mathbb{P}^n$ and $D$ the component of the focal locus of maximal dimension $i>0$; if $\mathcal{F}^k$ is a fundamental $k$-locus contained in $D$, then $n-1-i\le k\le i$. In particular, if $i=\frac{n-1}{2}$ we have only the $i$-fundamental locus $\mathcal{F}^i$.
\end{prop} \begin{proof} With notation as in the proof of the preceding theorem, we consider the hyperplane sections of $\mathcal{F}^k$, $D_0,\dotsc,D_k$, for which we have $D_0\cap \dotsb\cap D_k=\emptyset$, and so $M_{D_0}\cap \dotsb \cap M_{D_k}\subset F$. Therefore our claim easily follows. \end{proof} \begin{cor}\label{cor:1} Let $\Lambda$ be a congruence in $\mathbb{P}^n$; if $\dim(F)=1$, then $n\le 3$. \end{cor} \begin{rmk} The preceding results are new in the literature, although the method of proof of Theorem~\ref{thm:dim} and Proposition~\ref{prop:dim} is very similar to the respective proofs of Theorem~6 and Corollary~2 of \cite{DP2}. Actually, in Theorem~6 of \cite{DP2} we forgot the hypothesis---in the border case $i=\frac{n-1}{2}$---that the general line should meet the focal locus only once. \end{rmk} \section{Congruences in $\mathbb{P}^3$} \label{sec:3} Our aim in this article is to classify the congruences with $\dim(F)=1$. In view of Corollary~\ref{cor:1}, it is sufficient to consider congruences in $\mathbb{P}^2$ and $\mathbb{P}^3$. The case of $\mathbb{P}^2$ is well known: all the congruences but the ones of order one (\textit{i.e. } the pencils) have a focal curve, and the congruence is given by the tangents of the focal curve. Therefore, we will study congruences in $\mathbb{P}^3$ with $\dim(F)=1$. By Theorem~\ref{thm:cata}, these are the first order congruences (other than the star of lines) of $\mathbb{P}^3$, since the order cannot be zero; this can be deduced easily from the fact that the only surface containing a two-dimensional family of lines is the plane. Actually, the first order congruences of $\mathbb{P}^3$ are classified in \cite{R} and \cite{Z}. Here we give another easy and---as far as we know---new proof of this classification. In what follows, we will also call the focal locus the \emph{fundamental curve}. \subsection{Congruences with Irreducible Fundamental Curve} \begin{prop}\label{prop:irr} If the fundamental curve $F$ of a congruence of lines in $\mathbb{P}^3$ is irreducible, then we have the following possibilities: \begin{enumerate} \item if $F$ is reduced, we are in case~\eqref{1st} of Theorem~\textup{\ref{thm:int}}; \item if $F$ is non-reduced, we are in case~\eqref{punto} of Theorem~\textup{\ref{thm:int}}. \end{enumerate} \end{prop} \begin{proof} First of all, we consider the case of the secant lines of a curve, for which we know that we have a first order congruence. We could apply Theorem~2.16 of \cite{DPi} to obtain that $\deg(F)=3$, but for the sake of completeness, we will prove this result here. Clearly we have that $\deg F\ge 3$ for degree reasons. Then, let us denote by $a$ the class of our congruence, and by $r$ a general line in $\mathbb{P}^3$. Let us also denote by $V_r$ the scroll of the lines of $B$ meeting $r$. An easy application of the Schubert calculus shows that $\deg(V_r)=a+1$. Let $V_r$ and $V_{r'}$ be two such surfaces; we denote by $k$ the (algebraic) multiplicity of $F$ in $V_r$ (and $V_{r'}$). First of all, we note that the complete intersection of two general surfaces $V_r$ and $V_{r'}$ is a (reducible) curve $\Gamma$ whose components are the focal locus $F$ and $(a+1)$ lines, \textit{i.e. } the lines of $B$ meeting $r$ and $r'$: in fact, a point of $\Gamma$ not belonging to the lines meeting both $r$ and $r'$ is a focal point, since through it pass at least two distinct lines of $B$, namely a line meeting $r$ and another one meeting $r'$ ($r$ and $r'$ being general).
There are $a+1$ lines of $B$ meeting both $r$ and $r'$, since this is the degree of $V_r$. Then, by B\'ezout, we have that \begin{align*} \deg(V_r\cap V_{r'}) &=(a+1)^2\\ &= a+1 + k^2d \end{align*} ---where $d:=\deg(F)$---and we obtain \begin{equation}\label{eq:cu1} k^2d=(a+1)a. \end{equation} Besides, since a line of the congruence not belonging to the $(a+1)$ lines meeting $r$ and $r'$ must intersect the curve $F$ in (at least) two points, we deduce \begin{equation}\label{eq:cu2} a+1=2k. \end{equation} From formulas \eqref{eq:cu1} and \eqref{eq:cu2} we conclude \begin{equation*} k^2(4-d)-2k=0, \end{equation*} from which, since $\deg F\ge 3$, we get the result, \textit{i.e. } $d=3$ (and $k=2$). So the only possibility is the twisted cubic $C_3$, which has, in fact, one apparent double point. Since this curve has degree three, the bidegree is $(1,3)$. We now pass to the other case: by Theorem~\ref{thm:dim}, $(F)_{\red}=:\ell$ is a line; so, if we restrict the congruence to a (general) plane $\Pi$ through $\ell$, we get a congruence of lines in $\mathbb{P}^2$ such that its focal locus is contained in $\ell$, and therefore it is a pencil of lines with its fundamental point $P_\Pi$ in $\ell$. Then the map $\Phi$ of Theorem~\textup{\ref{thm:int}}, case~\eqref{punto}, is defined by $\Phi(\Pi)=P_\Pi$. This congruence has bidegree $(1,d)$, as is easily seen by considering the lines of the congruence contained in a general plane. \end{proof} \subsection{Congruences with Reducible Fundamental Curve} \begin{prop}\label{prop:red} If the fundamental curve $F$ of a first order congruence of lines in $\mathbb{P}^3$ is reducible, then the congruence is as in Theorem~\textup{\ref{thm:int}}, case~\eqref{2nd}. \end{prop} \begin{proof} Let us denote by $Z$ the $0$-dimensional scheme given by $F_1\cap F_2$, and set $u=\length(Z)$. Let $P$ be a general point in $\mathbb{P}^3$; then, if we set $\deg(F_1):=m_1\ge m_2=:\deg(F_2)$, the cone given by the join $PF_j$ has degree $m_j$; as usual, by B\'ezout $$ \deg(PF_1\cap PF_2)=m_1m_2. $$ Since we have a congruence of order one, we obtain: \begin{equation}\label{eq:m-1} u=m_1m_2-1; \end{equation} this is due to the fact that only one of the lines of $PF_1\cap PF_2$ is a line of the congruence; therefore the others must be the $u$ lines of the join $PZ$. If $Q$ is a general point of $F_1$, there pass $m_2(m_1-1)$ secant lines of $F_1$ through $Q$ which also meet $F_2$; but these lines must pass through the points of $Z$: if one of them did not intersect $Z$, it would be a focal line, and varying such lines as $Q$ varies on $F_1$ we would obtain a focal surface. Besides, since $Q$ is general, we can suppose that it does not belong to the tangent cones of the two curves at the points of $Z$. Then we have that \begin{equation}\label{eq:mm} u=(m_1-1)m_2, \end{equation} and comparing with equation \eqref{eq:m-1} we obtain $m_2=1$, $u=m_1-1$. To see that $F_1$ is a rational curve, we can simply project it from a general point of the line $F_2$ onto a plane: by the Clebsch formula, the projected curve has geometric genus zero. The class is---as usual---calculated by intersecting with a general plane. \end{proof} \subsection{Final Remarks about First Order Congruences} We finish this article by analysing which of these congruences are smooth as surfaces in the Grassmannian $\mathrm{G}r(1,3)$.
\begin{prop}\label{thm:last} The smooth first order congruences of lines $B$ in $\mathbb{P}^3$ are \begin{enumerate} \item the secants of the twisted cubic, in which case $B$ is the Veronese surface; \item the join of a line and a smooth conic meeting in a point, in which case $B$ is a rational normal cubic scroll; \item the join of two non-meeting lines, in which case $B$ is a quadric. \end{enumerate} \end{prop} \begin{proof} We see from \cite{AG} and \cite{ABT} that the only possible bidegrees for a smooth congruence in $\mathbb{P}^3$ with a fundamental curve are $(1,1)$, $(1,2)$, $(1,3)$ and $(3,6)$. The last case can be excluded because it has order three, while by Theorem~\ref{thm:int} our congruences have order one. From the description of these congruences given in \cite{Ar} we get the proposition. \end{proof} \begin{rmk} Observe that, if the fundamental curve is non-reduced, its support is a line $\ell$. The corresponding point $L\in\mathrm{G}r(1,3)$ is a singular point of $B$: more precisely, $B$ is a $2$-dimensional linear section, tangent to $\mathrm{G}r(1,3)$ at $L$. We observe finally that the case of two lines is the one in which $B$ is the intersection of the two tangent hyperplane sections of the Grassmannian corresponding to the two lines, \textit{i.e. } $B$ is a general $2$-dimensional linear section of $\mathrm{G}r(1,3)$. From this description we see that the case in which the fundamental curve is only a line is a limit case of this one, when the two lines coincide. \end{rmk} \end{document}
\begin{document} \centerline{\bf {\large Families of Rationally Connected Varieties}} \ \ \centerline{\today} \ \noindent {\bf Tom Graber} \noindent Mathematics Department, Harvard University, \noindent 1 Oxford st., Cambridge MA 02138, USA \noindent graber{\char'100}math.harvard.edu \ \noindent {\bf Joe Harris} \noindent Mathematics Department, Harvard University, \noindent 1 Oxford st., Cambridge MA 02138, USA \noindent harris{\char'100}math.harvard.edu \ \noindent {\bf Jason Starr} \noindent Mathematics Department, M.I.T., \noindent Cambridge MA 02139, USA \noindent jstarr{\char'100}math.mit.edu \ \tableofcontents \section{Introduction} \subsection{Statement of results} We will work throughout over the complex numbers, so that the results here apply over any algebraically closed field of characteristic 0. Recall that a smooth projective variety $X$ is said to be {\em rationally connected} if two general points $p, q \in X$ can be joined by a chain of rational curves. In case $\dim(X) \geq 3$ this turns out to be equivalent to the a priori stronger condition that for any finite subset $\Gamma \subset X$ there is a smooth rational curve $C \subset X$ containing $\Gamma$ and having ample normal bundle. Rationally connected varieties form an important class of varieties. In dimensions 1 and 2 rational connectivity coincides with rationality, but the two notions diverge in higher dimensions and in virtually every respect the class of rationally connected varieties is better behaved. For example, the condition of rational connectivity is both open and closed in smooth proper families; there are geometric criteria for rational connectivity (e.g. any smooth projective variety with negative canonical bundle is rationally connected, so we know in particular that a smooth hypersurface $X \subset {\mathbb P}^n$ of degree $d$ will be rationally connected if and only if $d \leq n$), and there are, at least conjecturally, numerical criteria for rational connectivity (see Conjecture~\ref{mumford} below). In this paper we will prove a conjecture of Koll\'ar, Miyaoka and Mori that represents one more basic property of rational connectivity (also one not shared by rationality): that if $X \to Y$ is a morphism with rationally connected image and fibers, then the domain $X$ is rationally connected as well. This will be a corollary of our main theorem: \begin{thm}\label{mainth} Let $f : X \to B$ be a morphism from a smooth projective variety to a smooth projective curve over ${\mathbb C}$. If the general fiber of $f$ is rationally connected, then $f$ has a section. \end{thm} Since this is really a statement about the birational equivalence class of the morphism $f$, we can restate it in the equivalent form \begin{thm} If $K$ is the function field of a curve over ${\mathbb C}$, any rationally connected variety $X$ defined over $K$ has a $K$-rational point. \end{thm} In this form, the theorem directly generalizes Tsen's theorem, which is exactly this statement for $X$ a smooth hypersurface of degree $d \leq n$ in projective space ${\mathbb P}^n$ (or more generally a smooth complete intersection in projective space with negative canonical bundle). It would be interesting to know if in fact rationally connected varieties over other $C_1$ fields necessarily have rational points.
As we indicated, one basic corollary of our main theorem is \beta egin{cor}\lambdaongrightarrowbel{totalspace} Let $f : X \to Y$ be any dominant morphism of varieties. If $Y$ and the general fiber of $f$ are rationally connected, then $X$ is rationally connected. \epsilonnd{cor} \noindent {\epsilonm Proof}. We can assume (in characteristic 0, at least) that $X$ and $Y$ are smooth projective varieties. Let $p$ and $q$ be general points of $X$. We can find a smooth rational curve $C \sigmaubset Y$ joining $f(p)$ and $f(q)$; let $X' = f^{-1}(C)$ be the inverse image of $C$ in $X$. By Theorem~\ref{mainth}, there is a section $D$ of $X'$ over $C$. We can then connect $p$ to $q$ by a chain of rational curves in $X'$ in three stages: connect $p$ to the point $D \cap X_p$ of intersection of $D$ with the fiber $X_p$ of $f$ through $p$ by a rational curve; connect $D \cap X_p$ to $D \cap X_q$ by $D$, and connect $D \cap X_q$ to $q$ by a rational curve in $X_q$. {\hfil \break \rightline{$\sqr74$}} There is a further corollary of Theorem~\ref{mainth} based on a construction of Campana and Koll\'ar--Miyaoka--Mori: the {\it maximal rationally connected fibration} associated to a variety $X$ (see [Ca], [K] or [KMM]). Briefly, the maximal rationally connected fibration associates to a variety $X$ a (birational isomorphism class of) variety $Z$ and a rational map $\phi : X \to Z$ with the properties that \beta egin{itemize} \item the fibers $X_z$ of $\phi$ are rationally connected; and conversely \item almost all the rational curves in $X$ lie in fibers of $\phi$: for a very general point $z \in Z$ any rational curve in $X$ meeting $X_z$ lies in $X_z$. \epsilonnd{itemize} \noindent The variety $Z$ and morphism $\phi$ are unique up to birational isomorphism, and are called the \textit{mrc quotient} and \textit{mrc fibration} of $X$, respectively. They measure the failure of $X$ to be rationally connected: if $X$ is rationally connected, $Z$ is a point, while if $X$ is not uniruled we have $Z=X$. As observed in Koll\'ar ([K], IV.5.6.3), we have the following Corollary: \beta egin{cor}\lambdaongrightarrowbel{quot} Let $X$ be any variety and $\phi : X \to Z$ its maximal rationally connected fibration. Then $Z$ is not uniruled. \epsilonnd{cor} \noindent {\epsilonm Proof}. Suppose that $Z$ were uniruled, so that through a general point $z \in Z$ we could find a rational curve $C \sigmaubset Z$ through $z$. By Corollary~\ref{totalspace}, the inverse image $\phi^{-1}(C)$ will be rationally connected, which means that every point of the fiber $X_z$ will lie on a rational curve not contained in $X_z$, contradicting the second defining property of mrc fibrations. {\hfil \break \rightline{$\sqr74$}} There are conjectured numerical criteria for a variety $X$ to be either uniruled or rationally connected. They are \beta egin{conj}\lambdaongrightarrowbel{uniruled} Let $X$ be a smooth projective variety. Then $X$ is uniruled if and only if $H^0(X,K_X^m)=0$ for all $m > 0$. \epsilonnd{conj} \noindent and \beta egin{conj}\lambdaongrightarrowbel{mumford} Let $X$ be a smooth projective variety. Then $X$ is rationally connected if and only if $H^0(X,(\Omega^1_X)^{\otimes m})=0$ for all $m > 0$. \epsilonnd{conj} For each of these conjectures, the ``only if" part is known, and straightforward to prove; the ``if" part represents a very difficult open problem (see for example [K], IV.1.12 and IV.3.8.1). 
As another consequence of our main theorem, we have an implication: \beta egin{cor} Conjecture~\ref{uniruled} implies Conjecture~\ref{mumford} \epsilonnd{cor} \noindent {\epsilonm Proof}. Let $X$ be any smooth projective variety that is not rationally connected; assuming the statement of Conjecture~\ref{uniruled}, we want to show that $H^0(X,(\Omega^1_X)^{\otimes m})\neq 0$ for some $m > 0$. Let $\phi : X \to Z$ be the mrc fibration of $X$. By hypothesis $Z$ has dimension $n >0$, and by Corollary~\ref{quot} $Z$ is not uniruled. If we assume Conjecture~\ref{uniruled}, then, we must have a non-zero section $\sigmaigma \in H^0(Z,K_Z^m)$ for some $m > 0$. But the line bundle $K_Z^m$ is a summand of the tensor power $(\Omega^1_Z)^{\otimes nm}$, so we can view $\sigmaigma$ as a global section of that sheaf; pulling it back via $\phi$, we get a nonzero global section of $(\Omega^1_X)^{\otimes nm}$ {\hfil \break \rightline{$\sqr74$}} \noindent {\beta f Acknowledgments}. We would like to thank Johan deJong, J\'anos Koll\'ar and Barry Mazur for many conversations, which were of tremendous help to us. \sigmaection{Preliminary definitions and constructions} We will be dealing with morphisms $\pi : X \to B$ satisfying a number of hypotheses, which we collect here for future reference. In particular, for the bulk of this paper we will deal with the case $B \cong {\overline M_{0,0}athbb P}^1$; we will show in section~\ref{barb} below both that the statement for $B \cong {\overline M_{0,0}athbb P}^1$ implies the full Theorem~\ref{mainth} and, as well, how to modify the argument that follows to apply to general $B$. \beta egin{hyp}\lambdaongrightarrowbel{hyp} $\pi : X \to B$ is a nonconstant morphism of smooth connected projective varieties over ${\overline M_{0,0}athbb C}$, with $B \cong {\overline M_{0,0}athbb P}^1$. For general $b \in B$, the fiber $X_b = \pi^{-1}(b)$ is rationally connected of dimension at least 2. \epsilonnd{hyp} Now suppose we have a class $\beta eta \in N_1(X)$ having intersection number $d$ with a fiber of the map $\pi$. We have then a natural morphism $$ \varphi : \overline M_{0,0}g(X,\beta eta) \to \overline M_{0,0}g(B,d) $$ defined by composing a map $f : C \to X$ with $\pi$ and collapsing components of $C$ as necessary to make the composition $\pi f$ stable. \beta egin{defn} Let $\pi : X \to B$ be a morphism satisfying \ref{hyp}, and let $f : C \to X$ be a stable map from a nodal curve $C$ of genus $g$ to $X$ with class $f_*[C] = \beta eta$. We say that $f$ is {\it flexible} relative to $\pi$ if the map $\varphi : \overline M_{0,0}g(X,\beta eta) \to \overline M_{0,0}g(B,d)$ is dominant at the point $[f] \in \overline M_{0,0}g(X,\beta eta)$; that is, if any neighborhood of $[f]$ in $\overline M_{0,0}g(X,\beta eta)$ dominates a neighborhood of $[\pi f]$ in $\overline M_{0,0}g(B,d)$. \epsilonnd{defn} Now, it's a classical fact that the variety $\overline M_{0,0}g(B,d)$ has a unique irreducible component whose general member corresponds to a flat map $f : C \to B$ (see for example [C] and [H] for a proof). Since the map $\varphi : \overline M_{0,0}g(X,\beta eta) \to \overline M_{0,0}g(B,d)$ is proper, it follows that if $\pi : X \to B$ admits a flexible curve then $\varphi$ will be surjective. Moreover, $\overline M_{0,0}g(B,d)$ contains points $[f]$ corresponding to maps $f : C \to B$ with the property that every irreducible component of $C$ on which $f$ is nonconstant maps isomorphically via $f$ to $B$. 
(For example, we could simply start with $d$ disjoint copies $C_1,\delta ots,C_d$ of $B$ (with $f$ mapping each isomorphically to $B$) and identify $d+g-1$ pairs of points on the $C_i$, each pair lying over the same point of $B$. \beta egin{prop} If $\pi : X \to B$ is a morphism satisfying \ref{hyp} and $f : C \to X$ a flexible stable map, then $\pi$ has a section. \epsilonnd{prop} Our goal in what follows, accordingly, will be to construct a flexible curve $f : C \to X$ for an arbitrary $\pi : X \to B$ satisfying \ref{hyp}. \sigmaubsection{The first construction} To manufacture our flexible curve, we apply two basic constructions, which we describe here. (These constructions, especially the first, are pretty standard: see for example section II.7 of [K].) We start with a basic lemma: \beta egin{lm}\lambdaongrightarrowbel{bundle} Let $C$ be a smooth curve and $E$ any vector bundle on $C$; let $n$ be any positive integer. Let $p_1,\delta ots,p_N \in C$ be general points and $\xi_i \sigmaubset E_{p_i}$ a general one-dimensional subspace of the fiber of $E$ at $p_i$; let $E'$ be the sheaf of rational sections of $E$ having at most a simple pole at $p_i$ in the direction $\xi_i$ and regular elsewhere. For $N$ sufficiently large we will have $$ H^1(C, E'(-q_1-\delta ots-q_n)) = 0 $$ for any $n$ points $q_1,\delta ots,q_n \in C$. \epsilonnd{lm} \noindent {\it Proof}. To start with, we will prove simply that $H^1(C,E')=0$. Since this is an open condition, it will suffice to exhibit a particular choice of points $p_i$ and subspaces $\xi_i$ that works. Denoting the rank of $E$ by $r$, we take $N = mr$ divisible by $r$ and choose $m$ points $t_1,\delta ots,t_m \in C$. We then specialize to the case \beta egin{align*} p_1 = \delta ots = p_r &= t_1; & \xi_1,\delta ots,\xi_r &\overline M_{0,0}box{ spanning } E_{t_1} \\ p_{r+1} = \delta ots = p_{2r} &= t_2; & \xi_{r+1},\delta ots,\xi_{2r} &\overline M_{0,0}box{ spanning } E_{t_2} \epsilonnd{align*} and so on. In this case we have $E' = E(t_1 + \delta ots + t_m)$, which we know has vanishing higher cohomology for sufficiently large $m$. Given this, the statement of the lemma follows: to begin with, choose any $g+n$ points $r_1,\delta ots,r_{g+n} \in C$. Applying the argument thus far to the bundle $E(-r_1-\delta ots -r_{g+n})$, we find that for $N$ sufficiently large we will have $H^1(C, E'(-r_1-\delta ots-r_{g+n})) = 0$. But now for any points $q_1,\delta ots,q_n \in C$ we have $$ q_1 + \delta ots + q_n = r_1+\delta ots + r_{g+n} - D $$ for some effective divisor $D$ on $C$. It follows then that \beta egin{align*} h^1(C, E'(-q_1-\delta ots-q_n)) &= h^1(C, E'(-r_1-\delta ots -r_{g+n})(D)) \\ &\lambdaeq h^1(C, E'(-r_1-\delta ots -r_{g+n})) \\ &=0 \epsilonnd{align*} {\hfil \break \rightline{$\sqr74$}} The relevance of this to our present circumstances will perhaps be made clear by the following: \beta egin{lm}\lambdaongrightarrowbel{normal} Let $X$ be a smooth projective variety, $C$ and $C'\sigmaubset X$ two nodal curves meeting at points $p_1,\delta ots,p_\delta elta$; suppose $C$ and $C'$ are smooth with distinct tangent lines at each point $p_i$. Let $D = C \cup C'$ be the union of $C$ and $C'$; and let $N_{C/X}$ and $N_{D/X}$ be the normal sheaves of $C$ and $D$ in $X$. We have then an inclusion of sheaves $$ 0 \to N_{C/X} \to N_{D/X}|_C $$ identifying the sheaf of sections of $N_{D/X}|_C$ with the sheaf of rational sections of $N_{C/X}$ having at most a simple pole at $p_i$ in the normal direction determined by $T_{p_i}C'$. 
Moreover, if $\tilde D \sigmaubset \sigmapec {\overline M_{0,0}athbb C}[\epsilon]/(\epsilon^2) \times X$ is a first-order deformation of $D$ in $X$ corresponding to a global section $\sigmaigma \in H^0(N_{D/X})$, then $\tilde D$ smooths the node of $D$ at $p_i$ if and only if the restriction $\sigmaigma|_U$ of $\sigmaigma$ to a neighborhood $U$ of $p$ in $C$ is not in the image of $N_{C/X}$. \epsilonnd{lm} Now suppose $\pi : X \to B$ is a morphism satisfying our basic hypotheses~\ref{hyp}, and $C \sigmaubset X$ a smooth, irreducible curve of genus $g$. For a general point $p \in C$, let $X_p = \pi^{-1}(\pi(p))$ be the fiber of $\pi$ through $p$. By hypothesis, $X_p$ is a smooth, rationally connected variety, so that we can find a smooth rational curve $C' \sigmaubset X_p$ meeting $C$ at $p$ (and nowhere else) with arbitrarily specified tangent line at $p$, and having ample normal bundle $N_{C'/X}$. Choose a large number of general points $p_1,\delta ots,p_\delta elta \in C$, and for each $i$ let $C_i \sigmaubset X_{p_i}$ be such a smooth rational curve, with $T_{p_i}C_i$ a general tangent line to $X_{p_i}$ at $p_i$. Combining the preceding two lemmas, we see that for $\delta elta$ sufficiently large, the normal bundle $N_{C'/X}$ of the union $C' = C \cup (\cup C_i)$ will be generated by its global sections; in particular, by Lemma~\ref{normal} there will be a smooth deformation $\tilde C$ of $C'$. Moreover, for any given $n$ we can choose the number $\delta elta$ large enough to ensure that $H^1(C, N_{C'/X}|_C(-r_1-\delta ots-r_{g+n})) = 0$ for some $g + n$ points $r_1,\delta ots,r_{g+n} \in C$; it follows that $H^1(\tilde C, N_{\tilde C/X}(-r_1-\delta ots-r_{g+n})) = 0$ for some $r_1,\delta ots,r_{g+n} \in \tilde C$ and hence that $$ H^1(\tilde C, N_{\tilde C/X}(-q_1-\delta ots-q_n)) = 0 $$ for any $n$ points on $\tilde C$. The process of taking a curve $C \sigmaubset X$, attaching rational curves in fibers and smoothing to get a new curve $\tilde C$, is our first construction. It has the properties that \beta egin{enumerate} \item the genus $g$ of the new curve $\tilde C$ is the same as the genus of the curve $C$ we started with; \item the degree $d$ of $\tilde C$ over $B$ is the same as the degree of $C$ over $B$; \item the branch divisor of the composite map $\tilde C \hookrightarrow X \to B$ is a small deformation of the branch divisor of $C \hookrightarrow X \to B$; and again, \item for any $n$ points $q_1,\delta ots,q_n \in \tilde C$ we have $H^1(\tilde C, N_{\tilde C/X}(-q_1-\delta ots-q_n)) = 0$ \epsilonnd{enumerate} Here is one application of this construction. Suppose we have a smooth curve $C \sigmaubset X$ such that the projection $\overline M_{0,0}u : \pi|_C : C \to B$ is simply branched---that is, the branch divisor of $\overline M_{0,0}u$ consists of $2d+2g-2$ distinct points in $B$---and such that each ramification point $p \in C$ of $\overline M_{0,0}u$ is a smooth point of the fiber $X_p$. Applying our first construction with $n = 2d+2g-2$, we arrive at another smooth curve $\tilde C$ that is again simply branched over $B$, with all ramification occurring at smooth points of fibers of $\pi$. 
But now the condition that $H^1(\tilde C, N_{\tilde C/X}(-q_1-\delta ots-q_n)) = 0$ applied to the $n = 2d+2g-2$ ramification points of the map $\tilde \overline M_{0,0}u : \tilde C \to B$ says that if we pick a normal vector $v_i$ to $\tilde C$ at each ramification point $p_i$ of $\tilde \overline M_{0,0}u$ we can find a global section of the normal bundle $N_{\tilde C/X}$ with value $v_i$ at $p_i$. Moreover, since ramification occurs at smooth points of fibers of $\pi$, for any tangent vectors $w_i$ to $B$ at the image points $\pi(p_i)$ we can find tangent vectors $v_i \in T_{p_i}X$ with $d\pi(v_i) = w_i$. It follows that {\it as we deform the curve $\tilde C$ in $X$, the branch points of $\tilde \overline M_{0,0}u$ move independently}. A general deformation of $\tilde C \sigmaubset X$ thus yields a general deformation of $\tilde \overline M_{0,0}u$---in other words, the curve $\tilde C$ is flexible. We thus make the \beta egin{defn} Let $\pi : X \to B$ be as in~\ref{hyp}, and let $C \sigmaubset X$ be a smooth curve such that the projection $\overline M_{0,0}u : \pi|_C : C \to B$ is simply branched. If each ramification point $p \in C$ of $\overline M_{0,0}u$ is a smooth point of the fiber $X_p$ containing it, we will say the curve $C$ is {\it pre-flexible}. \epsilonnd{defn} \noindent In these terms, we have established the \beta egin{lm}\lambdaongrightarrowbel{preflex} Let $\pi : X \to B$ be as in~\ref{hyp}. If $X$ admits a pre-flexible curve, the map $\pi$ has a section. \epsilonnd{lm} \noindent \underbar{Remark}. Note that we can extend the notion of pre-flexible and the statement of Lemma~\ref{preflex} to stable maps $f : C \to X$: we say that such a map is preflexible is the composition $\pi f$ is simply branched, and for each ramification point $p$ of $\pi f$ the image $f(p)$ is a smooth point of the map $\pi$, the statement of Lemma~\ref{preflex} holds. \sigmaubsection{The second construction} Our second construction is a very minor modification of the first. Given a family $\pi : X \to B$ as in~\ref{hyp} and a smooth curve $C \sigmaubset X$, we pick a general fiber $X_b$ of $\pi$ and two points $p, q \in C \cap X_b$. We then pick a rational curve $C_0 \sigmaubset X_b$ with ample normal bundle in $X_b$, passing through $p$ and $q$ and not meeting $C$ elsewhere. We also pick a large number $N$ of other general points $p_i \in C$ and rational curves $C_i \sigmaubset X_{p_i}$ in the corresponding fibers, meeting $C$ just at $p_i$ and having general tangent line at $p_i$. Finally, we let $C' = C \cup C_0 \cup (\cup C_i)$ be the union, and $\tilde C$ a smooth deformation of $C'$ (as before, if we pick $N$ large enough, the normal bundle $N_{C'/X}$ will be generated by global sections, so smoothings will exist). This process, starting with the curve $C \sigmaubset X$ and arriving at the new curve $\tilde C$, is our second construction. 
It has the properties that \beta egin{enumerate} \item the degree $d$ of $\tilde C$ over $B$ is the same as the degree of $C$ over $B$; \item the genus of the new curve $\tilde C$ is one greater than the genus of the curve $C$ we started with; \item for any $n$ points $q_1,\delta ots,q_n \in \tilde C$ we have $H^1(\tilde C, N_{\tilde C/X}(-q_1-\delta ots-q_n)) = 0$; and \item the branch divisor of the composite map $\tilde C \hookrightarrow X \to B$ has two new points: it consists of a small deformation of the branch divisor of $C \hookrightarrow X \to B$, together with a pair of simple branch points $b', b'' \in B$ near $b$, each having as monodromy the transposition exchanging the sheets of $\tilde C$ near $p$ and $q$. \epsilonnd{enumerate} In effect, we have simply introduced two new simple branch points to the cover $C \to B$, with assigned (though necessarily equal) monodromy. Note that we can apply this construction repeatedly, to introduce any number of (pairs of) additional branch points with assigned (simple) monodromy; or we could carry out a more general construction with a number of curves $C_0$. \sigmaection{Proof of the main theorem}\lambdaongrightarrowbel{mainproof} \sigmaubsection{The proof in case $B = {\overline M_{0,0}athbb P}^1$}\lambdaongrightarrowbel{mainarg} We are now more than amply equipped to prove the theorem. We start with a morphism $\pi : X \to B$ as in~\ref{hyp}. To begin with, by hypothesis $X$ is projective; embed in a projective space and take the intersection with $\delta im(X) - 1$ general hyperplanes to arrive at a smooth curve $C \sigmaubset X$. This is the curve we will start with. What do $C$ and the associated map $\overline M_{0,0}u : C \hookrightarrow X \to B$ look like? To answer this, start with the simplest case: suppose that the fibers $X_b$ of $\pi$ do not have multiple components, or in other words that the singular locus $\pi_{\rm sing}$ of the map $\pi$ has codimension 2 in $X$. In this case we are done: $C$ will miss $\pi_{\rm sing}$ altogether, so that all ramification of $\overline M_{0,0}u : C \to B$ will occur at smooth points of fibers; and simple dimension counts show that the branching will be simple. In other words, $C$ will be pre-flexible already. The problems start if $\pi$ has multiple components of fibers. If $Z \sigmaubset X_b$ is such a component, then each point $p \in C \cap Z$ will be a ramification point of $\overline M_{0,0}u$, and no deformation of $C$ will move the corresponding branch point $\pi(p) \in B$. The curve $C$ can not be flexible. And of course it's worse if $\pi$ has a multiple (that is, everywhere-nonreduced) fiber: in that case $\pi$ cannot possibly have a section. To keep track of such points, let $M \sigmaubset B$ be the locus of points such that the fiber $X_b$ has a multiple component. Outside of $M$, the map $\overline M_{0,0}u : C \to B$ is simply branched, and all ramification occurs at smooth points of fibers of $\pi$. Now here is what we're going to do. First, pick a base point $p_0 \in B$, and draw a cut system: that is, a collection of real arcs joining $p_0$ to the branch points $M \cup N$ of $\overline M_{0,0}u$, disjoint except at $p_0$. The inverse image in $C$ of the complement $U$ of these arcs is simply $d$ disjoint copies of $U$; call the set of sheets ${\overline M_{0,0}athbb G}amma$ (or, if you prefer, label them with the integers 1 through $d$). 
Now, for each point $b \in M$, denote the monodromy around the point $b$ by $\sigmaigma_b$, and express this permutation of ${\overline M_{0,0}athbb G}amma$ as a product of transpositions: $$ \sigmaigma_b = \tau_{b,1}\tau_{b,2}\delta ots\tau_{b,{k_b}} $$ so that in other words $$ \tau_{b,{k_b}}\delta ots\tau_{b,2}\tau_{b,1}\sigmaigma_b = I $$ is the identity. For future reference, let $k = \sigmaum k_b$. We will proceed in three stages. \noindent \underbar{Stage 1}: We use our second construction to produce a new curve $\tilde C$ with, in a neighborhood of each point $b \in M$, $k_b$ new pairs of simple branch points $s_{b,i}, t_{b,i} \in B$, with the monodromy around $s_{b,i}$ and $t_{b,i}$ equal to $\tau_{b,i}$. Note that $\tilde C$ will have genus $g(C) + k$, and that the branch divisor of the projection $\tilde \overline M_{0,0}u : \tilde C \to B$ will be the union of a deformation $\tilde N$ of $N$, the points $s_{b,i}$ and $t_{b,i}$, and $M$. In particular we can find disjoint discs $\Delta_b \sigmaubset B$, with $\Delta_b$ containing the points $b$ and $t_{b,1}, t_{b,2},\delta ots,t_{b,{k_b}}$, so that the monodromy around the boundary $\partial \Delta_b$ of $\Delta_b$ is trivial. Now, for any fixed integer $n$ this construction can be carried out so that the curve $\tilde C$ has the property that $H^1(\tilde C, N_{\tilde C/X}(-q_1-\delta ots-q_n)) = 0$ for any $n$ points $q_i \in \tilde C$. Here we want to choose $$ n = \#N + 2k $$ so that there are global sections of the normal bundle $N_{\tilde C/X}$ with arbitrarily assigned values on the ramification points of $\tilde C$ over $N$ and the points $s_{b,i}$ and $t_{b,i}$. This means in particular that we can deform the curve $\tilde C$ so as to deform the branch points of $\tilde \overline M_{0,0}u$ outside of $M$ independently. What we will do, then, is \noindent \underbar{Stage 2}: We will vary $\tilde C$ so as to {\epsilonm keep all the branch points $b \in N$ and all the points $s_{b,i}$ fixed; and for each $b \in M$ specialize all the branch points $t_{b,i}$ to $b$ within the disc $\Delta_b$}. \lambdaongrightarrowbel{splat} \beta egin{picture}(330,270) \put(10,10){\overline M_{0,0}akebox(330,240){\includegraphics{splat.eps}}} \epsilonnd{picture} \ \noindent To say this more precisely, let $\beta eta \in N_1(X)$ be the class of the curve $\tilde C$, and consider the maps $$ \overline M_{g',0}(X, \beta eta) \lambdaongrightarrow \overline M_{g',0}(B, d) \lambdaongrightarrow B_{2d+2g'-2} $$ with the second map assigning to a stable map $C \to B$ its branch divisor. What we are saying is, starting at the branch divisor $$ D_1 = \tilde N + \sigmaum s_{b,i} + \sigmaum t_{b,i} + \sigmaum_{b \in M} k_b\cdot b $$ of the map $\tilde \overline M_{0,0}u$, draw an analytic arc $\gamma amma = \{D_\lambda\}$ in the subvariety $$ {\overline M_{0,0}athbb P}hi \, = \, \tilde N + \sigmaum s_{b,i} + \sigmaum_{b \in M} k_b\cdot b + \sigmaum(\Delta_b)_{k_b} \, \sigmaubset \, B_{2d+2g'-2} $$ tending to the point $$ D_0 = \tilde N + \sigmaum s_{b,i} + 2\sigmaum_{b \in M} k_b\cdot b . $$ Since the image of the composition $$ \overline M_{g',0}(X, \beta eta) \lambdaongrightarrow B_{2d+2g'-2} $$ contains ${\overline M_{0,0}athbb P}hi$, we can find an arc $\delta elta = \{f_{\nu}\}$ in $\overline M_{g',0}(X, \beta eta)$ that maps onto $\gamma amma$, with $f_1$ the inclusion $\tilde C \hookrightarrow X$. 
\noindent \underbar{Stage 3}: Let $f_0 : C_0 \to X$ be the limit, in $\overline M_{g',0}(X, \beta eta)$, of the family of curves constructed in Stage 2: that is, the point of the arc $\delta elta$ over $D_0 \in {\overline M_{0,0}athbb P}hi \sigmaubset B_{2d+2g'-2}$. Let $A \sigmaubset C_0$ be the normalization of any irreducible component of $C_0$ on which the composition $\pi f_0$ is nonconstant (that is, whose image is not contained in a fiber), and let $f : A \to X$ be the restriction of $f_0$ to $A$. By construction, the composition $\pi f$ is unramified over a neighborhood of $M$: the monodromy around the boundary $\partial \Delta_b$ of each disc $\Delta_b$ is trivial, and it can be branched over at most one point $b$ inside $\Delta_b$, so it can't be branched at all over $\Delta_b$. Indeed, it is (at most) simply branched over each point of $N$ and each point $s_{b,i}$, and unramified elsewhere. Moreover, since we can carry out the specialization of $\tilde C$ above with the entire fiber of $\tilde C$ over the points of $N$ and the $s_{b,i}$ fixed, the ramification of $\pi f$ on $A$ over these points will occur at smooth points of the corresponding fibers of $\pi$. In other words, {\epsilonm the map $f : A \to X$ is preflexible}, and we are done. {\hfil \break \rightline{$\sqr74$}} \sigmaubsection{The proof for arbitrary curves $B$}\lambdaongrightarrowbel{barb} As we indicated at the outset, there are two straightforward ways of extending this result to the case of arbitrary curves $B$. For one thing, virtually all of the argument we have made goes over without change to the case of base curves $B$ of any genus $h$. The one exception to this is the statement that the space $\overline M_{g,0}(B,d)$ of stable maps $f : C \to B$ of degree $d$ from curves $C$ of genus $g$ to $B$ has a unique irreducible component whose general member corresponds to a flat map $f : C \to B$. This is false in general---consider for example the case $g = d(h-1) + 1$ of unramified covers. It is true, however, if we restrict ourselves to the case $g \gamma g h,d$ (that is, we have a large number of branch points) and look only at covers whose monodromy is the full symmetric group $S_d$. Given this fact, and observing that our second construction allows us to increase the number of branch points of our covers $C \to B$ arbitrarily, the theorem can be proved for general $B$ just as it is proved above for $B \cong {\overline M_{0,0}athbb P}^1$. Alternatively, Johan deJong showed us a simple way to deduce the theorem for general $B$ from the case $B \cong {\overline M_{0,0}athbb P}^1$ alone. We argue as follows: given a map $\pi : X \to B$ with rationally connected general fiber, we choose any map $g : B \to {\overline M_{0,0}athbb P}^1$ expressing $B$ as a branched cover of ${\overline M_{0,0}athbb P}^1$. We can then form the ``norm" of $X$: this is the (birational isomorphism class of) variety $Y \to {\overline M_{0,0}athbb P}^1$ whose fiber over a general point $p \in {\overline M_{0,0}athbb P}^1$ is the product $$ Y_p = \prod_{q \in g^{-1}(p)} X_q . $$ Since the product of rationally connected varieties is again rationally connected, it follows from the ${\overline M_{0,0}athbb P}^1$ case of the theorem that $Y \to {\overline M_{0,0}athbb P}^1$ has a rational section, and hence so does $\pi$. \sigmaection{An example} There are a number of disquieting aspects of the argument in Section~\ref{mainarg}, and in particular about the specialization in Stage 2 of that argument. 
Clearly the curve $f : A \to X$ constructed there cannot meet any multiple component of a fiber of $\pi : X \to B$; that is, for each $b \in M$ it must meet the fiber $X_b$ only in reduced components of $X_b$. This raises a number of questions: what if the fiber $X_b$ is multiple? How can the curve $\tilde C$, which meets all the multiple components of $X_b$, specialize to one that misses them all? And can we say which reduced components of $X_b$ the curve $A$ will meet? The answers to the first two questions are straightforward: in fact, the argument given here proved that {\epsilonm the map $\pi : X \to B$ cannot have multiple fibers}, that is, every fiber $X_b$ must have a reduced component.\footnote{In fact, this assertion is nearly tantamount to Theorem~\ref{mainth} itself, but we were unable to prove it directly except under additional and restrictive hypotheses} As for the second, what must happen is that as our parameter $\delta elta \to 0$, the points of intersection of $C_\delta elta$ with the multiple components of $X_b$ slide toward the reduced components of $X_b$; the curve $C_0$ produced in the limit will have components contained in the fiber $X_b$ and joining the points of intersection of $A$ with $X_b$ to each of the multiple components. Finally, the answer to the third question---and indeed the whole process---may be illuminated by looking at a simple example; we will do this now. To start, we have to find an example of a map $\pi : X \to B$ with rationally connected general fiber and a special fiber having a multiple component (and smooth total space $X$). Without question, the simplest example will have general fiber $X_b \cong {\overline M_{0,0}athbb P}^1$, and special fiber a chain of three smooth rational curves: \lambdaongrightarrowbel{fiber} \beta egin{picture}(330,30) \put(10,0){\overline M_{0,0}akebox(330,25){\includegraphics{fiber.eps}}} \epsilonnd{picture} \ The middle component will have multiplicity 2 in the fiber, and self-intersection $-1$; the outer two components will each appear with multiplicity 1 in the fiber, and will have self-intersection $-2$. The simplest way to construct a family with such a fiber is to start with a trivial family $X_0 = {\overline M_{0,0}athbb P}^1 \times {\overline M_{0,0}athbb P}^1 \to {\overline M_{0,0}athbb P}^1$, blow up any point $p$, and then blow up the point $q$ of intersection of the exceptional divisor with the proper transform of the fiber through $p$ to obtain $X$. \beta egin{picture}(330,320) \put(10,0){\overline M_{0,0}akebox(330,310){\includegraphics{family.eps}}} \epsilonnd{picture} \ We will denote by $F$ the proper transform in $X$ of the fiber through $p$ in ${\overline M_{0,0}athbb P}^1 \times {\overline M_{0,0}athbb P}^1$, and by $G$ the proper transform of the first exceptional divisor; the second exceptional divisor---the multiple component of the special fiber---we will call $E$. To arrive at the simplest possible curve $\tilde C \sigmaubset X$ meeting the multiple component $E$ of the special fiber of this family, we start with a curve $C \sigmaubset {\overline M_{0,0}athbb P}^1 \times {\overline M_{0,0}athbb P}^1$ of degree $2$ over $B$ that is simply tangent to the special fiber at the point $p$; the proper transform $\tilde C$ of $C$ in $X$ will then meet $E$ once transversely and $F$ and $G$ not at all. 
(We're not trying to make excuses here, but note that it's virtually impossible to draw a decent picture of the configuration $\tilde C \subset X \to B$: the curve $\tilde C$ is supposed to meet $E$ once transversely, but still have degree 2 over $B$ and be ramified over $B$ at its point of intersection with $E$.) Now that we've got this set up, what happens when we push another branch point of $\tilde C \to B$ into the special fiber of $\pi$? The answer is that one of three things can happen, two of them generically. We will describe these first geometrically in terms of the original curve $C \subset {\mathbb P}^1 \times {\mathbb P}^1$ and its proper transforms, and then write down typical equations. One possibility is that the ramification point $p$ of $C$ over $B$ becomes a node. In this case the limit $C_0$ of the proper transforms $C_\nu$ of the curves will actually contain the component $G$ of the special fiber (the limit of the proper transforms is not the proper transform of the limiting curve, but rather its total transform minus the divisor $2E + G$). The remaining component---the actual proper transform of the limiting curve---will have two distinct sheets in a neighborhood of the special fiber, meeting $G$ transversely in distinct points, and each of course unramified over $B$: \ \label{firstfam} \begin{picture}(330,280) \put(10,30){\makebox(330,250){\includegraphics{firstfam.eps}}} \end{picture} This specialization is easy to see in terms of equations: if we choose affine coordinates $x$ on our base ${\mathbb P}^1$ and $y$ on the fiber, we can write the equation of our family $\{C_\nu\}$ of curves as $$ C_\nu \; : \; y^2 = x^2 - \nu x $$ and specialize the branch point over $x = \nu$ simply by letting $\nu \to 0$. We can see either from this family of equations, or geometrically, that as $\nu$ tends to 0 the point of intersection of the proper transform $\tilde C_\nu$ of $C_\nu$ slides along $E$ toward the point of intersection $E \cap G$; when it reaches $E \cap G$ the limiting curve becomes reducible, splitting off a copy of $G$. Now, by the symmetry of $X \to B$---we could also blow down the curves $E$ and $F$ in $X$ to obtain ${\mathbb P}^1 \times {\mathbb P}^1$---we would expect that there would be a similar specialization with the roles of $F$ and $G$ reversed, and there is: if the curve $C \subset {\mathbb P}^1 \times {\mathbb P}^1$ specializes to one containing the fiber $\{0\} \times {\mathbb P}^1$, the limit $\tilde C_0$ of the proper transforms will (generically) consist of the union of $F$ with a curve $A$, with $A$ unramified of degree 2 over $B$ in a neighborhood of the special fiber and meeting the special fiber in two distinct points of $F$.
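In terms of explicit equations (a small verification added here for convenience; in the notation of the linear series ${\mathcal D}$ introduced below, this is the arc $[a,b,c,d]=[1,t,0,-1]$), this second degeneration can be realized, in the same affine coordinates, by the family
$$
C_t \; : \; x(y^2 - 1) + t\, y^2 = 0 .
$$
For $t \neq 0$ this is a curve of degree $2$ over $B$, smooth at $p$ with vertical tangent there, and branched over $B$ exactly at $x=0$ and $x=-t$ (its discriminant with respect to $y$ is $4x(x+t)$). As $t \to 0$ it specializes to $x(y-1)(y+1)=0$, the union of the fiber $\{0\}\times{\mathbb P}^1$ with the two sections $y=\pm 1$, and the proper transforms $\tilde C_t$ converge to the union of $F$ with the proper transforms of these two sections, which meet the special fiber at the two points $(0,\pm 1)$ of $F$, as claimed.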
\ \label{secondfam} \begin{picture}(330,280) \put(10,20){\makebox(330,260){\includegraphics{secondfam.eps}}} \end{picture} Finally, there is a common specialization of these two families: if the curve $C \subset {\mathbb P}^1 \times {\mathbb P}^1$ specializes to one that both contains the fiber $\{0\} \times {\mathbb P}^1$ and is singular at the point $p$---that is, consists, in a neighborhood of the special fiber, of the fiber and two sections, one passing through $p$---then the limit $\tilde C_0$ of the proper transforms will consist of the union of all three components $E$, $F$ and $G$ of the special fiber with a curve $A$ consisting of a section meeting the special fiber in a point of $F$ and a section meeting the special fiber in a point of $G$: \ \label{thirdfam} \begin{picture}(330,280) \put(10,20){\makebox(330,260){\includegraphics{thirdfam.eps}}} \end{picture} It's also very instructive to look at this example from the point of view of the equations of the curves. To begin with, denote by $|{\mathcal O}_X(d,e)|$ the total transform of the linear system of curves of bidegree $(d,e)$ on ${\mathbb P}^1 \times {\mathbb P}^1$. We are looking here at the linear system $$ \tilde {\mathcal D} = |{\mathcal O}_X(1,2)(-G-2E)|, $$ that is, the proper transform of the linear series ${\mathcal D}$ of curves $C \subset {\mathbb P}^1 \times {\mathbb P}^1$ of bidegree $(1,2)$ that pass through $p$ with vertical tangent. Explicitly, these curves form a $3$-dimensional linear series, which we may write in affine coordinates $(x,y)$ on ${\mathbb P}^1 \times {\mathbb P}^1$ as $$ {\mathcal D} \, = \, \{a xy^2 + b y^2 + c xy + d x\}_{[a,b,c,d] \in {\mathbb P}^3} $$ Writing the equation of a typical member of ${\mathcal D}$ as a polynomial in $y$: $$ (a x + b) \cdot y^2 + (c x)\cdot y + (d x) = 0 $$ we see that its branch divisor is the zero locus of the quadratic polynomial $$ (c x)^2 - 4(a x + b)(d x) \, = \, (c^2 - 4ad)\cdot x^2 - 4db\cdot x $$ whose roots are at $x = 0$ and $x = 4db/(c^2 - 4ad)$. It's probably best to express this in terms of the maps $$ \overline M_{0,0}(X, [\tilde{\mathcal D}]) \longrightarrow \overline M_{0,0}(B, 2) \longrightarrow B_2 $$ introduced in section~\ref{mainproof} above. Here, the variety $\overline M_{0,0}(X, [\tilde{\mathcal D}])$ has a component $M$ which is a blow-up of the ${\mathbb P}^3_{[a,b,c,d]}$ parametrizing the linear series $\tilde {\mathcal D}$ (it also has a second, extraneous component whose general point corresponds to a map $f : C \to X$ with reducible domain and image containing the line $y=0$ doubly; this component is not involved here). The image of the composite map $M \to B_2$ is simply the locus $B_0 \cong B \subset B_2$ of divisors of degree 2 in $B \cong {\mathbb P}^1$ containing the point $x=0$, with the map $$ \eta : M \to {\mathbb P}^3_{[a,b,c,d]} \to B_0 \cong {\mathbb P}^1 $$ given by $$ [a,b,c,d] \mapsto [4db, c^2 - 4ad].
$$ What we see in particular from this is that {\em the fiber of $\eta$ over the point $x=0$ is reducible}, with components given by $d=0$ and $b=0$. Now, in Stage 2 of our argument, as applied here, we start with an arc $\gamma \subset B_0 \subset B_2$ in which the second branch point approaches $x=0$, and lift that to an arc $\delta \subset M$. If our arc $\delta \subset M$ lifting the arc $\gamma \subset B_0 \subset B_2$ approaches the component $d=0$---whose general member corresponds to a curve $C \subset {\mathbb P}^1 \times {\mathbb P}^1$ singular at $p$---we get a family of stable maps whose limit is as described in the first example above. If, on the other hand, it approaches the component $b=0$, whose general member corresponds to a curve $C \subset {\mathbb P}^1 \times {\mathbb P}^1$ containing the fiber $x=0$, we get a limit as depicted in the second example. And finally, if $\delta$ approaches (generically) a point in the intersection of these two components, we get an example of the third type. \begin{thebibliography}{[G-H-A]} \bibitem[Ca]{Ca} F. Campana, {\it Connexit\'e rationnelle des vari\'et\'es de Fano}, Ann. Sci. \'Ecole Norm. Sup. {\bf 25} (1992), 539--545. \bibitem[C]{C} A. Clebsch, {\em Zur Theorie der Riemann'schen Fl\"achen}, Math. Ann. {\bf 6} (1872), 216--230. \bibitem[H]{H} A. Hurwitz, {\em Ueber Riemann'sche Fl\"achen mit gegebenen Verzweigungspunkten}, Math. Ann. {\bf 39} (1891), 1--61. \bibitem[K]{K} J. Koll\'ar, {\em Rational Curves on Algebraic Varieties}, Ergebnisse der Math. 32, Springer-Verlag, Berlin, 1996. \bibitem[KMM]{KMM} J. Koll\'ar, Y. Miyaoka, S. Mori, {\em Rationally Connected Varieties}, J. Alg. Geom. {\bf 1} (1992), 429--448. \end{thebibliography} \end{document}
\begin{document} \noindent{ \begin{center} \Large\bf A note on the passage time of finite state Markov chains\footnote{The project is partially supported by NSFC Grant No.~11131003.} \end{center} } \noindent{ \begin{center}Wenming Hong\footnote{School of Mathematical Sciences \& Laboratory of Mathematics and Complex Systems, Beijing Normal University, Beijing 100875, P.R. China. Email: [email protected]} \ \ \ Ke Zhou\footnote{School of Statistics, University of International Business and Economics, Beijing 100029, P.R. China. Email: [email protected]} \end{center} } \begin{center} \begin{minipage}[c]{12cm} \begin{center}\textbf{Abstract}\end{center} Consider a Markov chain with finite state space $\{0, 1, \cdots, d\}$. We give the generating functions (or Laplace transforms) of absorbing (passage) times in the following two situations: (1) the absorbing time of state $d$ when the chain starts from any state $i$ and is absorbed at state $d$; (2) the passage time to any state $i$ when the chain starts from the stationary distribution, provided the chain is time reversible and ergodic. An example shows that the method is more convenient than the existing ones; in particular, we can calculate the expectation of the absorbing time directly. \ \mbox{}\textbf{Keywords:}\quad Markov chain, absorbing time, passage time, generating functions, Laplace transforms, eigenvalues, stationary distribution. \\ \mbox{}\textbf{Mathematics Subject Classification (2010)}: 60E10, 60J10, 60J27, 60J35. \end{minipage} \end{center} \section{ Discrete time\label{s2}} \subsection{Absorbing time when the process starts from any state $i$} Consider a discrete-time Markov chain $\{X_n\}_{n\geq 0}$ with finite state space $\{0, 1, \cdots, d\}$, absorbing at state $d$, whose transition probability matrix $P$ is given by \begin{equation*} P=\left( \begin{array}{cccccc} r_0 & p_{0,1} & & \cdots & p_{0,d-1} & p_{0,d} \\ q_{1,0} & r_1 & & \cdots & p_{1,d-1}& p_{1,d}\\ \vdots & \ddots & & \ddots & \ddots & \vdots\\ q_{d-1,0}& q_{d-1,1}& & \cdots & r_{d-1} & p_{d-1,d} \\ 0 & 0 & & \cdots & 0 & 1\\ \end{array} \right)_{(d+1)\times(d+1)}. \end{equation*} For $ 0\leq i\leq d$, let $\tau_{i,d}$ be the absorbing time of state $d$ starting from $i$, i.e., $$ \tau_{i,d}:=\inf\{n\geq 0,\ X_n=d \ \mbox{when} \ X_0=i \}, $$ and let $f_{i}(s)$ be the generating function of $\tau_{i,d}$, \begin{equation}\label{f} f_{i}(s)=\mathbb{E}s^{\tau_{i,d}}~~~\text{for}~ 0\leq i\leq d. \end{equation} We have \begin{theorem} \label{th1} For $1\leq j\leq d+1$, denote by $A_{j}(s)$ the $d\times d$ sub-matrix obtained by deleting the $(d+1)^{\rm th}$ row and the $j^{\rm th}$ column of the matrix $I_{d+1}-sP$. Then, for $ 0\leq i\leq d$, we have \begin{equation}\label{fi} f_{i}(s)=(-1)^{d-i}\frac{\det A_{i+1}(s)}{\det A_{d+1}(s)}. \end{equation} $\Box$ \end{theorem} \begin{remark}\label{R1}As a consequence (see Corollary \ref{c1} below), for the birth-death and, more generally, the skip-free (upward jumps are only of unit size, and there is no restriction on downward jumps) Markov chain with finite state space $\{0, 1, \cdots, d\}$ and absorbing at state $d$, the absorbing time is distributed as a sum of $d$ independent geometric (or exponential) random variables. Many authors have given different proofs of these results. For the birth and death chain, the well-known results can be traced back to Karlin and McGregor (\cite{KM}, 1959) and Keilson (\cite{Keillog}, 1971; \cite{Keil}).
Kent and Longford (\cite{Kent}, 1983) proved the result for the discrete-time version (nearest-neighbour random walk), although they did not state the result in the usual form (Section 2, \cite{Kent}). Fill (\cite{F1}, 2009) gave the first stochastic proof for both the nearest-neighbour random walk and the birth and death chain via the duality established in \cite{DF}. Diaconis and Miclo (\cite{DM}, 2009) presented another probabilistic proof for the birth and death chain. Gong et al.\ (\cite{Mao}, 2012) gave a similar result in the case where the state space is $\mathbb{Z}^{+}$. For the skip-free chain, Brown and Shao (\cite{BS}, 1987) first proved the result in the continuous-time situation; Fill (\cite{F2}, 2009) gave a stochastic proof in both the discrete- and continuous-time cases, also by using the duality, and considered the general finite-state Markov chain when the chain starts from state $0$. However, the existing proofs mentioned above rely heavily on the initial state being $0$, whether the ``analytic'' method of Brown and Shao (\cite{BS}, 1987) or the ``stochastic'' method of Fill (\cite{F2}, 2009). The first purpose of this paper (Theorems \ref{th1} and \ref{thc}) is to extend the result to the general situation where the chain starts from any state $i$ (not just from state $0$ (\cite{F2}, 2009)). In particular, the results generalize the well-known theorems for the birth-death (Karlin and McGregor \cite{KM}, 1959) and the skip-free (\cite{BS} and \cite{F2}) Markov chains. \end{remark} Before proving the theorem, let us first recover the results for the skip-free (and then the birth-death) discrete-time Markov chain (Fill \cite{F2}, 2009). \begin{corollary}\label{c1} Assume $p_{i,j}=0$ for $j-i>1$. We have \begin{equation}\label{f0} f_{0}(s)=\prod_{i=0}^{d-1} \left[\frac{(1-\lambda_i)s}{1-\lambda_i s} \right], \end{equation} where $\lambda_0, \cdots, \lambda_{d-1}$ are the $d$ non-unit eigenvalues of $P$. In particular, if all of the eigenvalues are real and nonnegative, then the hitting time is distributed as the sum of $d$ independent geometric random variables with parameters $1-\lambda_i$.\\ \end{corollary} \begin{proof} Note that $1$ is evidently an eigenvalue of $P$. So, on the one hand, ${\det(I_{d+1}-sP)}=(1-s)\prod_{i=0}^{d-1}(1-\lambda_i s)$ (where $\lambda_0, \cdots, \lambda_{d-1}$ are the $d$ non-unit eigenvalues of $P$); on the other hand, since the last row of $I_{d+1}-sP$ is $(0,\cdots,0,1-s)$, we have ${\det (I_{d+1}-sP)}=(1-s)\times \det A_{d+1}(s)$. It follows that \begin{equation}\label{A} {\det A_{d+1}(s)}=\prod_{i=0}^{d-1}(1-\lambda_i s). \end{equation} From the definition of $A_1$, it is easy to see that \begin{equation}\label{A1} \det A_{1}(s)=(-1)^{d}p_{0,1}p_{1,2}\cdots p_{d-1,d}s^{d}. \end{equation} By (\ref{fi}) and (\ref{A}) we have \begin{equation*} \det A_{1}(1)=(-1)^{d} f_0(1)\cdot {\det A_{d+1}(1)}=(-1)^{d} f_0(1)\cdot \prod_{i=0}^{d-1}(1-\lambda_i).\nonumber \end{equation*} And from (\ref{A1}), we get $\det A_{1}(1)=(-1)^{d} p_{0,1}p_{1,2}\cdots p_{d-1,d}$. Recalling that $f_0(1)=1$ by (\ref{f}), we obtain \begin{equation}\label{cp} p_{0,1}p_{1,2}\cdots p_{d-1,d}=\prod_{i=0}^{d-1}(1-\lambda_i ). \end{equation} Then by (\ref{A1}) and (\ref{cp}), \begin{equation}\label{A2} \det A_{1}(s)=(-1)^{d} \prod_{i=0}^{d-1}(1-\lambda_i )s^{d}, \end{equation} and (\ref{f0}) follows from (\ref{A}) and (\ref{A2}) directly.
\end{proof} \begin{remark}\label{R2}The following example shows that Theorem \ref{th1} is more convenient than the existing methods behind Corollary \ref{c1}; in particular, we can calculate the expectation of the absorbing time directly from (\ref{fi}). \end{remark} Consider a Markov chain with $d+1$ states $\{0, 1, 2, \ldots, d\}$ whose transition matrix $P$ is given by \begin{equation*}\label{p} P=\left( \begin{array}{cccccc} q & p & & \cdots & 0 & 0 \\ q & 0 & & \ddots & 0& 0\\ \vdots & \ddots & & \ddots & \ddots & \vdots\\ q& 0& & \cdots & 0 & p \\ 0 & 0 & & \cdots & 0 & 1\\ \end{array} \right)_{(d+1)\times(d+1)}, \end{equation*} where $p+q=1$. \begin{corollary}\label{c2} For $ 0\leq i\leq d$, \begin{equation}\label{aa} f_{i}(s)=\frac{p^{d-i}s^{d-i}(1-s)+p^{d}qs^{d+1}}{1-s+p^{d}qs^{d+1}}, \end{equation} and we have \begin{equation}\label{bb} \mathbb{E}{\tau_{i,d}}=\frac{1-p^{d-i}}{p^{d}q}. \end{equation} \end{corollary} \noindent{\it Proof }. Taking full advantage of $p+q=1$, we can calculate that \begin{equation*} \det A_{i+1}(s)=(-1)^{d-i}\frac{p^{d-i}s^{d-i}(1-s)+p^{d}qs^{d+1}}{1-ps}, \end{equation*} \begin{equation*} \det A_{d+1}(s)=\frac{1-s+p^{d}qs^{d+1}}{1-ps}. \end{equation*} We obtain (\ref{aa}) by using Theorem \ref{th1}. Recalling that $\mathbb{E}{\tau_{i,d}}=f_{i}^{'}(1)$, we obtain (\ref{bb}) by a straightforward calculation. $\Box$ \noindent{\it Proof of Theorem \ref{th1}.} By decomposing the first step, for $0\leq i\leq d-1$, the generating function of $\tau_{i,d}$ satisfies \begin{equation*} \begin{split} f_{i}(s)&=r_{i}sf_{i}(s)+p_{i,i+1}sf_{i+1}(s)+p_{i,i+2}sf_{i+2}(s)+\cdots +p_{i,d-1}sf_{d-1}(s)+p_{i,d}s\\ &~~~~~+q_{i,i-1}sf_{i-1}(s)+q_{i,i-2}sf_{i-2}(s)+\cdots+q_{i,0}sf_{0}(s). \end{split} \end{equation*} This system of equations is linear with respect to $f_{0}(s),~f_{1}(s),\cdots,~f_{d-1}(s)$. Using Cramer's rule, we obtain (\ref{fi}) by solving these equations. $\Box$ \subsection{ Passage time when starting from the stationary distribution} Consider a discrete-time Markov chain $\{X_n\}_{n\geq 0}$ with finite state space $\{0, 1, \cdots, d\}$ starting from the stationary distribution $\pi:=(\pi_0,\pi_1,\dots,\pi_d)$, whose transition probability matrix $\widehat{P}$ is given by \begin{equation*} \widehat{P}=\left( \begin{array}{cccccc} r_0 & p_{0,1} & & \cdots & p_{0,d-1} & p_{0,d} \\ q_{1,0} & r_1 & & \cdots & p_{1,d-1}& p_{1,d}\\ \vdots & \ddots & & \ddots & \ddots & \vdots\\ q_{d,0} & q_{d,1} & & \cdots & q_{d,d-1} & r_d\\ \end{array} \right)_{(d+1)\times(d+1)}. \end{equation*} In addition, write \begin{equation}\label{d} D=\left( \begin{array}{cccccc} 1 & -\pi_{1} & & \cdots & -\pi_{d-1} & -\pi_{d} \\ 0 & 1-r_{1}s & & \cdots & -p_{1,d-1}s& -p_{1,d}s\\ \vdots & \ddots & & \ddots & \ddots & \vdots\\ 0 & -q_{d,1}s & & \cdots & -q_{d,d-1}s & 1-r_ds\\ \end{array} \right), \end{equation} and \begin{equation}\label{d0} D_{0}=\left( \begin{array}{cccccc} \pi_{0} & -\pi_{1} & & \cdots & -\pi_{d-1} & -\pi_{d} \\ q_{1,0}s & 1-r_{1}s & & \cdots & -p_{1,d-1}s& -p_{1,d}s\\ \vdots & \ddots & & \ddots & \ddots & \vdots\\ q_{d,0}s & -q_{d,1}s & & \cdots & -q_{d,d-1}s & 1-r_ds\\ \end{array} \right). \end{equation} \begin{theorem}\label{gsta} Suppose the chain starts from $X_0$ distributed according to the stationary distribution $\pi$, and let \begin{equation}\label{tao} \tau:=\inf\{n\geq 0,\ X_n=0 \}, \end{equation} be the passage time to state $0$.
Denote by $g_{\pi}(s)$ the generating function of $\tau$, i.e., $g_{\pi}(s)=\mathbb{E}_{\pi}s^{\tau}$. Then we have \begin{equation*} g_{\pi}(s)=\frac{\det D_{0}}{\det D},\end{equation*} where $D$ and $D_{0}$ are given in (\ref{d}) and (\ref{d0}) respectively. \end{theorem} Specifically, if the chain is time reversible and ergodic, Brown (\cite{B}, 1999) pointed out an elegant connection between the passage time starting from the stationary distribution and the interlacing eigenvalues theorem of linear algebra. Recently, this result was also proved by Fill and Lyzinski (\cite{FL}, 2013) by a stochastic method. In what follows, we reprove it directly as a corollary of Theorem \ref{gsta}. \begin{corollary}\label{csta} Let $\lambda_1, \cdots, \lambda_{d}$ be the $d$ non-unit eigenvalues of $\widehat{P}$ (we assume $\lambda_0=1$), and $\gamma_1, \cdots, \gamma_{d}$ be the $d$ eigenvalues of $\widehat{P}_{0}$, which is the sub-matrix obtained by deleting the first row and the first column of $\widehat{P}$. Then we have \begin{equation*} g_{\pi}(s)=\left(\prod_{i=1}^{d}\frac{1-\gamma_{i}}{1-\lambda_{i}}\right) \prod_{i=1}^{d}\left(\frac{1-\lambda_is}{1-\gamma_i s}\right). \end{equation*} \end{corollary} \noindent {\it Proof of Corollary \ref{csta}.} ~~ It is easy to see (recall that $\gamma_1, \cdots, \gamma_{d}$ are the eigenvalues of $\widehat{P}_{0}$) that \begin{equation} \label{D} \det D=\prod_{i=1}^{d}(1-\gamma_i s). \end{equation} In what follows we will show that \begin{equation}\label{D0} \det D_{0}=\pi_{0}\prod_{i=1}^{d}(1-\lambda_is). \end{equation} Define $e_1=(1,0,\dots,0)$, and recall that $\pi=(\pi_0,\pi_1,\dots,\pi_d)$. Then, we have \begin{equation*} \det D_{0}=\left | \begin{array}{cc} 0 & \pi\\ e_1^{T} & I-s\widehat{P} \end{array} \right |. \end{equation*} If we let $\Pi=\text{diag}(\pi_0,\pi_1,\dots,\pi_d)$, the reversibility of $\widehat{P}$ implies that $\Pi^{1/2}\widehat{P}\Pi^{-1/2}$ is a real symmetric matrix. Thus there exists an orthogonal matrix $U$ such that \begin{equation}\label{lf1} U\Pi^{1/2}\widehat{P}\Pi^{-1/2}U^{T}=\text{diag}(\lambda_{0},\lambda_{1},\dots,\lambda_{d}). \end{equation} We can calculate that \begin{equation*} \begin{split} & \left ( \begin{array}{cc} 1 & 0\\ 0 & U \end{array} \right ) \left ( \begin{array}{cc} 1 & 0\\ 0 & \Pi^{1/2} \end{array} \right ) \left ( \begin{array}{cc} 0 & \pi\\ e_1^{T} & I-s\widehat{P} \end{array} \right ) \left ( \begin{array}{cc} 1 & 0\\ 0 & \Pi^{-1/2} \end{array} \right ) \left ( \begin{array}{cc} 1 & 0\\ 0 & U^{T} \end{array} \right )\\ =& \left ( \begin{array}{cc} 0 & \pi\Pi^{-1/2}U^{T}\\ U\Pi^{1/2}e_{1}^{T} & U\Pi^{1/2}(I-s\widehat{P})\Pi^{-1/2}U^{T} \end{array} \right )=\left ( \begin{array}{cc} 0 & \pi\Pi^{-1/2}U^{T}\\ U\Pi^{1/2}e_{1}^{T} &I-s\text{diag}(\lambda_{0},\lambda_{1},\cdots,\lambda_{d}) \end{array} \right ) \end{split} \end{equation*} It is easy to prove that $\lambda_{0}=1$ is the unique eigenvalue of maximum modulus of $\widehat{P}$, so the geometric multiplicity of the eigenvalue $\lambda_{0}$ of $\widehat{P}$ is one (Perron's Theorem, \cite{KM}, p.~500). On the one hand, $e_{1}U\Pi^{1/2}$ is a left eigenvector corresponding to $\lambda_{0}$, and $\pi$ is also a left eigenvector of $\lambda_{0}$. Because $\|e_{1}U\Pi^{1/2}\|=\|\pi\|=1$, we have $e_{1}U\Pi^{1/2}=\pi$.
So \begin{equation}\label{lff} U\Pi^{1/2}e_{1}^{T}=(e_{1}U\Pi^{1/2})^{T}=\pi^{T}.\end{equation} On the other hand, $\Pi^{-1/2}U^{T}e_{1}^{T}$ is a right eigenvector corresponding to the eigenvalue $\lambda_{0}$, and $\mathbf{1}=(1,1,\cdots,1)^{T}$ is also a right eigenvector of $\lambda_{0}$. Because $\|\Pi^{-1/2}U^{T}e_{1}^{T}\|=\|\mathbf{1}\|=1$, we have $\Pi^{-1/2}U^{T}e_{1}^{T}=\mathbf{1}$, and \begin{equation}\label{lf2}\pi\Pi^{-1/2}U^{T}e_{1}^{T}=\pi\mathbf{1}=1.\end{equation} By (\ref{lf1}) and $\pi=\pi\widehat{P}$, \begin{equation*}\pi\Pi^{-1/2}U^{T}=\pi\widehat{P}\Pi^{-1/2}U^{T}=\pi\Pi^{-1/2}U^{T}\text{diag}(\lambda_{0},\lambda_{1},\dots,\lambda_{d}).\end{equation*} Because $\lambda_{i}\neq 1$ for $i=1,2,\cdots, d$, by (\ref{lf2}), $\pi\Pi^{-1/2}U^{T}$ must be equal to $e_1$. By (\ref{lff}), we obtain \begin{equation*} \det D_{0}=\left | \begin{array}{cc} 0 & \pi\Pi^{-1/2}U^{T}\\ U\Pi^{1/2}e_{1}^{T} &I-s\widehat{P} \end{array} \right |=\left | \begin{array}{cc} 0 & e_1\\ \pi^{T} &I-s\widehat{P} \end{array} \right |=\pi_{0}\prod_{i=1}^{d}(1-\lambda_is), \end{equation*} which gives (\ref{D0}). Combining (\ref{D}) and (\ref{D0}), we obtain \begin{equation*} g_{\pi}(s)=\frac{\pi_{0}\prod_{i=1}^{d}(1-\lambda_is)}{\prod_{i=1}^{d}(1-\gamma_i s)}.\end{equation*} Because $g_{\pi}(s)$ is a generating function, $g_{\pi}(1) = 1$. So \begin{equation*} \pi_{0}=\frac{\prod_{i=1}^{d}(1-\gamma_i)}{\prod_{i=1}^{d}(1-\lambda_i)},\end{equation*} which completes the proof. $\Box$ \noindent {\it Proof of Theorem \ref{gsta}.} ~~ Denote by $g_{i}(s)$ the generating function of the passage time to state $0$ when the chain starts from $i$. By the Markov property, we have \begin{equation}\label{keypi1} g_{\pi}(s)=\pi_{0}g_{0}(s)+\pi_{1}g_{1}(s)+\cdots+\pi_{d}g_{d}(s). \end{equation} Obviously, $g_{0}(s)=1$. By decomposing the first step, for $1\leq i\leq d$, $g_{i}(s)$ satisfies \begin{equation*} \begin{split} g_{i}(s)&=r_{i}sg_{i}(s)+p_{i,i+1}sg_{i+1}(s)+\cdots+ p_{i,d-1}sg_{d-1}(s)+p_{i,d}sg_{d}(s)\\ &~~~~~+q_{i,i-1}sg_{i-1}(s)+q_{i,i-2}sg_{i-2}(s)+\cdots+q_{i,0}s. \end{split} \end{equation*} This system of equations, together with (\ref{keypi1}), is linear with respect to $g_{\pi}(s),~g_{1}(s),~g_{2}(s),\cdots,~g_{d}(s)$. Using Cramer's rule, we obtain $g_{\pi}(s)$ by solving these equations as \begin{equation*} g_{\pi}(s)=\frac{\det D_{0}}{\det D}.\end{equation*} $\Box$ \begin{remark}\label{R4}Actually, if for $i=1,2,\cdots, d$ we define \begin{equation}\label{taoi} \tau_{i}:=\inf\{n\geq 0,\ X_n=i \}, \end{equation} the passage time to state $i$, and denote by $g_{\pi}^{i}(s)$ the generating function of $\tau_{i}$, i.e., $g_{\pi}^{i}(s)=\mathbb{E}_{\pi}s^{\tau_{i}}$, then we can obtain the formula for $g_{\pi}^{i}(s)$ with the corresponding modifications of $D$ and $D_{0}$; the proof goes through almost line by line, noting that $g_{i}(s)=1$ this time. \end{remark} \section{ Continuous time\label{s3}} The counterpart results for the continuous-time Markov chain with finite state space $\{0, 1, \cdots, d\}$ can be written down easily. The proofs are similar to those in the discrete-time case, and so we omit the details.
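Before turning to the continuous-time statements, here is the simplest instance of the determinant formula (\ref{fi}), worked out explicitly; this sanity check is included only for the reader's convenience and is not used in what follows. Take $d=1$, so the chain has states $\{0,1\}$ and
\begin{equation*}
P=\left(
\begin{array}{cc}
r_0 & p_{0,1} \\
0 & 1 \\
\end{array}
\right),\qquad
I_{2}-sP=\left(
\begin{array}{cc}
1-r_0 s & -p_{0,1}s \\
0 & 1-s \\
\end{array}
\right),
\end{equation*}
with $r_0+p_{0,1}=1$. Deleting the second row and the second (respectively, the first) column gives $\det A_{2}(s)=1-r_0 s$ and $\det A_{1}(s)=-p_{0,1}s$, so (\ref{fi}) yields
\begin{equation*}
f_{0}(s)=(-1)^{1}\frac{\det A_{1}(s)}{\det A_{2}(s)}=\frac{p_{0,1}s}{1-r_0 s},
\end{equation*}
the generating function of a geometric random variable with parameter $p_{0,1}=1-r_0$; this agrees with Corollary \ref{c1}, since $r_0$ is the unique non-unit eigenvalue of $P$.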
\subsection{Starting from any fixed state} Let $\{X_t\}_{t\geq 0}$ be the (continuous-time) Markov chain with finite state space $\{0, 1, \cdots, d\}$, absorbing at state $d$, whose generator $Q$ is given by \begin{equation*} Q=\left( \begin{array}{cccccc} -\gamma_0 & \alpha_{0,1} & & \cdots & \alpha_{0,d-1} & \alpha_{0,d} \\ \beta_{1,0} & -\gamma_1 & & \cdots & \alpha_{1,d-1}& \alpha_{1,d}\\ \vdots & \ddots & & \ddots & \ddots & \vdots\\ \beta_{d-1,0}& \beta_{d-1,1}& & \cdots & -\gamma_{d-1} & \alpha_{d-1,d} \\ 0 & 0 & & \cdots & 0 & 0\\ \end{array} \right)_{(d+1)\times(d+1)}. \end{equation*} Let $\tau_{i,d}$ be the absorbing time of state $d$ starting from $i$, and let $\widetilde{f}_{i}(s)$ be the Laplace transform of $\tau_{i,d}$, i.e., \begin{equation*}\widetilde{f}_{i}(s)=\mathbb{E}e^{-s\tau_{i,d}}. \end{equation*} \begin{theorem}\label{thc} For $1\leq j\leq d+1$, denote by $\widetilde{A}_{j}(s)$ the $d\times d$ sub-matrix obtained by deleting the $(d+1)^{\rm th}$ row and the $j^{\rm th}$ column of the matrix $sI_{d+1}-Q$. Then, for $0\leq i\leq d$ we have \begin{equation}\label{fi1} \widetilde{f}_{i}(s)=(-1)^{d-i}\frac{\det\widetilde{A}_{i+1}(s)}{\det\widetilde{A}_{d+1}(s)}. \end{equation} $\Box$ \end{theorem} Immediately, we recover the results for the skip-free continuous-time Markov chain (Brown and Shao \cite{BS}, 1987). \begin{corollary} Assume $\alpha_{i,j}=0$ for $j-i>1$. We have \begin{equation*} \widetilde{f}_{0}(s)=\prod_{i=0}^{d-1}\frac{\lambda_i}{\lambda_i+s}, \end{equation*} where $\lambda_0, \cdots, \lambda_{d-1}$ are the $d$ non-zero eigenvalues of $-Q$. In particular, if all of the eigenvalues are real and nonnegative, then the hitting time is distributed as the sum of $d$ independent exponential random variables with parameters $\lambda_i$. \end{corollary} \begin{proof} The proof is similar to that of Corollary \ref{c1}: we can calculate that ${\det\widetilde{A}_{d+1}(s)}=\prod_{i=0}^{d-1}(\lambda_i+s)$, and $\det\widetilde{A}_{1}=(-1)^{d}\alpha_{0,1} \alpha_{1,2}\cdots \alpha_{d-1,d}=(-1)^{d}\prod_{i=0}^{d-1}\lambda_i$. \end{proof} \subsection{Starting from the stationary distribution} Consider a time-reversible ergodic Markov chain with generator $\widehat{Q}$, and let $\widehat{Q}_{0}$ be the sub-matrix obtained by deleting the first row and the first column of $\widehat{Q}$. We denote by $\widetilde{g}_{\pi}(s)$ the Laplace transform of the hitting time of state $0$ when the chain starts from the stationary distribution $\pi$. \begin{theorem} We have \begin{equation}\label{gpi2} \widetilde{g}_{\pi}(s)=\left(\prod_{i=1}^{d}\frac{\gamma_{i}}{\lambda_{i}}\right) \prod_{i=1}^{d}\left(\frac{\lambda_i+s}{ \gamma_i+s}\right), \end{equation} where $\lambda_1, \cdots, \lambda_{d}$ are the $d$ non-zero eigenvalues of $-\widehat{Q}$ (we assume $\lambda_0=0$), and $\gamma_1, \cdots, \gamma_{d}$ are the $d$ eigenvalues of $-\widehat{Q}_{0}$. \end{theorem} \noindent\textbf{Acknowledgement} We thank Professor Daniel R. Jeske for his interesting example (see Corollary \ref{c2}). We also thank the referee for valuable suggestions. \end{document}
\begin{document} \title{Spectral Multipliers on $2$-Step Stratified Groups, I } \date{} \author{Mattia Calzi\thanks{The author is partially supported by the grant PRIN 2015 \emph{Real and Complex Manifolds: Geometry, Topology and Harmonic Analysis}, and is member of the Gruppo Nazionale per l’Analisi Matematica, la Probabilità e le loro Applicazioni (GNAMPA) of the Istituto Nazionale di Alta Matematica (INdAM).} } \theoremstyle{definition} \newtheorem{definition}{Definition}[section] \newtheorem{remark}[definition]{Remark} \theoremstyle{plain} \newtheorem{theorem}[definition]{Theorem} \newtheorem{lemma}[definition]{Lemma} \newtheorem{proposition}[definition]{Proposition} \newtheorem{corollary}[definition]{Corollary} \maketitle \begin{small} \section*{Abstract} Given a $2$-step stratified group which does not satisfy a slight strengthening of the Moore-Wolf condition, a sub-Laplacian $\mathcal{L}$ and a family $\mathcal{T}$ of elements of the derived algebra, we study the convolution kernels associated with the operators of the form $m(\mathcal{L}, -i \mathcal{T})$. Under suitable conditions, we prove that: i) if the convolution kernel of the operator $m(\mathcal{L},-i \mathcal{T})$ belongs to $L^1$, then $m$ equals almost everywhere a continuous function vanishing at $\infty$ (`Riemann-Lebesgue lemma'); ii) if the convolution kernel of the operator $m(\mathcal{L},-i\mathcal{T})$ is a Schwartz function, then $m$ equals almost everywhere a Schwartz function. \end{small} \section{Introduction} \label{intro} Let $\mathcal{L}$ be a translation-invariant differential operator on $\mathds{R}^n$. In many situations, the study of $\mathcal{L}$ may be simplified by means of the Fourier transform, since $\mathcal{F} \mathcal{L}$ is a polynomial. If we consider a left-invariant differential operator $\mathcal{L}$ on a Lie group $G$, we may still study $\mathcal{L}$ by means of the Fourier transform; however, if $G$ is not commutative, the Fourier transform is less manageable than in the commutative case, so that a different approach is preferable. A reasonable alternative is provided by the spectral theorem. However, an approach of this kind is very sensitive to the operators involved, while the Fourier transform allows, in principle, to treat all the left-invariant differential operators at the same time. For this reason, it might be sensible to consider a finite family of commuting operators $\mathcal{L}_1,\dots, \mathcal{L}_k$ instead of a single one. Now, assume that $\mathcal{L}_1,\dots, \mathcal{L}_k$ are formally self-adjoint left-invariant differential operators on $G$, each of which induces an essentially self-adjoint operator on $C^\infty(G)$; assume that the self-adjoint operators induced by $\mathcal{L}_1,\dots, \mathcal{L}_k$ commute. Then, there is a unique spectral measure $\mu$ on $\mathds{R}^k$ such that \[ \mathcal{L}_j \varphi=\int_{\mathds{R}^k} \lambda_j \,\mathrm{d} \mu(\lambda) \,\varphi \] for every $\varphi \in C^\infty_c(G)$. If $m\colon \mathds{R}^k\to \mathds{C}$ is bounded and $\mu$-measurable, we may then associate with $m$ a distribution $\mathcal{K}(m)$ such that \[ m(\mathcal{L}_1,\dots, \mathcal{L}_k) \, \varphi= \varphi*\mathcal{K}(m) \] for every $\varphi \in C^\infty_c(G)$. The mapping $\mathcal{K}$ is the desired substitute for the (inverse) Fourier transform. One may then investigate the similarities between $\mathcal{K}$ and the (inverse) Fourier transform. 
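To fix ideas, consider the most basic commutative example (recalled here only for orientation): if $G=\mathds{R}$ and $\mathcal{L}_1=-\frac{\mathrm{d}^2}{\mathrm{d} x^2}$, then $m(\mathcal{L}_1)$ is the Fourier multiplier operator associated with the function $\xi\mapsto m(\xi^2)$, so that, with the normalization $\mathcal{F}\varphi(\xi)=\int_{\mathds{R}}\varphi(x)e^{-i\xi x}\,\mathrm{d} x$,
\[
\mathcal{K}(m)=\mathcal{F}^{-1}\big[m(\xi^2)\big]
\]
as a tempered distribution. In this case the questions below amount to asking, for instance, whether $\mathcal{K}(m)\in L^1(\mathds{R})$ forces $m$ to agree $\mu$-almost everywhere with a continuous function, or whether $\mathcal{K}(m)\in\mathcal{S}(\mathds{R})$ forces $m$ to agree $\mu$-almost everywhere with a Schwartz function.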
For instance, one may consider the following questions: \begin{itemize} \item does the `Riemann-Lebesgue' property hold? In other words, if $m\in L^\infty(\mu)$ and $\mathcal{K}(m)\in L^1(G)$, does $m$ necessarily admit a continuous representative? \item is there a positive Radon measure $\beta$ on $\mathds{R}^k$ such that $\mathcal{K}$ extends to an isometry of $L^2(\beta)$ into $L^2(G)$? \item if such a `Plancherel measure' $\beta$ exists, is it possible to find an `integral kernel' $\chi\in L^1_\mathrm{loc}(\beta \otimes \nu_G)$\footnote{Here, $\nu_G$ denotes a fixed (left or right) Haar measure on $G$. } such that, for every $m\in L^\infty(\beta)$ with compact support, \[ \mathcal{K}(m)(g)=\int_{\mathds{R}^k} m(\lambda)\chi(\lambda,g)\,\mathrm{d} \beta(\lambda) \] for almost every $g\in G$? \item if $G$ is a group of polynomial growth, so that $\mathcal{S}(G)$ can be defined in a reasonable way, does $\mathcal{K}$ map $\mathcal{S}(\mathds{R}^k)$ into $\mathcal{S}(G)$? \item if $G$ is a group of polynomial growth and $\mathcal{K}(m)\in \mathcal{S}(G)$ for some $m\in L^\infty(\mu)$, does $m$ necessarily admit a representative in $\mathcal{S}(\mathds{R}^k)$? \end{itemize} Some of these questions have already been addressed in certain situations. In the case of one operator, the construction of a Plancherel measure dates back to M.\ Christ~\cite[Proposition 3]{Christ} for the case of a homogeneous sub-Laplacian on a stratified group. The `integral kernel' was then introduced by L.\ Tolomeo~\cite[Theorem 2.11]{Tolomeo} for a sub-Laplacian on a group of polynomial growth. Further, A.\ Hulanicki~\cite{Hulanicki} showed that Schwartz multipliers have Schwartz kernels in the setting of a positive Rockland operator on a graded group. On the other hand, a positive answer to the last question is only known for homogeneous sub-Laplacians on stratified groups and for sub-Laplacians on the plane motion group~\cite{MartiniRicciTolomeo}. Concerning the case of more operators, Hulanicki's theorem was extended first for some families on the Heisenberg group by A.\ Veneruso~\cite{Veneruso}, and then to the case of a weighted subcoercive system of operators on a group of polynomial growth by A.\ Martini~\cite[Proposition 4.2.1]{Martini}. A.\ Martini~\cite[Theorem 3.2.7]{Martini} also extended the existence result for the Plancherel measure to the case of a weighted subcoercive system of operators on a general Lie group. As for what concerns the correspondence between Schwartz kernels and Schwartz multipliers, it has been proved on a large class of nilpotent groups for commuting families of differential operators that are invariant under the action of a compact group (nilpotent Gelfand pairs); see the work of F.\ Astengo, B.\ Di Blasio and F.\ Ricci~\cite{AstengoDiBlasioRicci,AstengoDiBlasioRicci2}, V.\ Fischer and F.\ Ricci~\cite{FischerRicci}, V.\ Fischer, F.\ Ricci and O.\ Yakimova~\cite{FischerRicciYakimova}. In this paper, we focus our attention on two properties, which we call $(RL)$ (`Riemann-Lebesgue') and $(S)$ (`Schwartz'); these properties correspond to the first and fifth questions, respectively. The case of a homogeneous operator on a homogeneous group is greatly simplified by the presence of the dilations, to the point that property $(RL)$ holds automatically (cf.~Theorems~\ref{teo:2} and~\ref{teo:3}), while property $(S)$ can be characterized in a simple way, at least on abelian groups; we shall present this characterization in a future paper. 
On the other hand, the case of more operators, even homogeneous, is much more involved and properties $(RL)$ and $(S)$ may fail even in standard situations like abelian groups or the Heisenberg groups. We shall present examples of these pathological behaviours in a future paper. In the first part of the paper, we introduce Rockland families on homogeneous groups, and some relevant objects such as the `kernel transform' $\mathcal{K}$, the `Plancherel measure' $\beta$, the `integral kernel' $\chi$, and the `multiplier transform' $\mathcal{M}$ (Section~\ref{sec:3}). Then, we discuss the possibility of transferring properties $(RL)$ and $(S)$ to products of groups (Section~\ref{sez:7}) or to image families under polynomial maps (Section~\ref{sec:8}). While the former case is relatively simple, the latter one is more delicate; in Sections~\ref{sec:6} and~\ref{sec:7} we prove some general results which can be used to prove the validity of property $(RL)$ or $(S)$ for an image family once the validity of the corresponding property for the `main family' is known. In the second part of the paper we focus on the case of sub-Laplacians and elements of the centre of $\mathfrak{g}$ on a $2$-step stratified group $G$. Even in this specific context, there is a wide variety of situations. In particular, we distinguish two classes of such groups, where the families of the preceding kind behave quite differently: \begin{itemize} \item the groups $G$ which have a homogeneous subgroup $G'$ contained in $[G,G]$ such that the quotient of $G$ by $G'$ is a Heisenberg group; \item the groups $G$ which have no such quotients. \end{itemize} We call the groups of the first kind $MW^+$ groups, or groups satisfying the $MW^+$ condition, since the condition which defines these groups is a slight strengthening of the Moore-Wolf condition (cf.~\cite{MooreWolf} and also~\cite{MullerRicci}); in fact, the condition that was actually considered in~\cite{MooreWolf} is related to the centre $Z$ of $G$ instead of $[G,G]$. Nevertheless, one may always factor out an abelian group so as to reduce to a group with $Z=[G,G]$. In addition, the exposition becomes clearer if we consider only elements of the derived algebra, instead of the centre, so that the above dichotomy becomes more natural; thanks to Remark~\ref{oss:2}, we can always reduce to this case. Since the treatment of these two classes of groups is quite different, we focus here on groups which do \emph{not} satisfy the $MW^+$ condition; we shall study $MW^+$ groups in a future paper. In Section~\ref{sec:11}, we give an expression for the Plancherel measure; we also provide the reader with the tools needed to find an expression for the integral kernel, which we do not display explicitly since we do not consider it as particularly illuminating. Finally, in the last two sections we shall prove several conditions under which properties $(RL)$ and $(S)$ hold for the given families. \section{Definitions and Notation} A homogeneous group is a simply connected nilpotent Lie group $G$ endowed with a family of dilations which are group automorphisms; we shall denote them by $r\cdot x$ ($r>0$, $x\in G$). The homogeneous dimension $Q$ of $G$ is the sum of the homogeneous degrees of the elements of any homogeneous basis of the Lie algebra of $G$. We denote by $\nu_G$ a Haar measure on $G$, which is the image of a fixed Lebesgue measure on the Lie algebra under the exponential map. 
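For instance (a standard example, recalled here for concreteness), the first Heisenberg group $\mathds{H}^1$, that is, $\mathds{R}^3$ with the product
\[
(x,y,t)(x',y',t')=\Big(x+x',\,y+y',\,t+t'+\tfrac{1}{2}(xy'-yx')\Big)
\]
and the dilations $r\cdot(x,y,t)=(rx,ry,r^2t)$, is a homogeneous ($2$-step stratified) group; a homogeneous basis of its Lie algebra has homogeneous degrees $1,1,2$, so that $Q=4$, while the topological dimension is $3$.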
A homogeneous norm on $G$ is a proper mapping $\abs{\,\cdot\,}\colon G\to \mathds{R}_+$ which is symmetric and homogeneous of homogeneous degree $1$. \begin{definition} A differential operator $X$ on $G$ is homogeneous of degree $d\in \mathds{C}$ if \[ X [ \varphi(r\,\cdot\,) ]=r^d (X \varphi)(r\,\cdot\,) \] for every $\varphi \in C^\infty(G)$ and for every $r>0$. \end{definition} We end this section with some general notation concerning measures. First of all, unless explicitly stated, all measures are supposed to be positive and Radon. For the sake of simplicity, we only deal with Radon measures on Polish spaces, that is, topological spaces with a countable base whose topology is induced by a complete metric. For example, every locally compact space with a countable base is a Polish space, but we shall need to deal with some Polish spaces which are \emph{not} locally compact (cf.~the proof of Theorem~\ref{prop:10}). If $X$ is a Polish space, then a positive Borel measure $\mu$ on $X$ is a Radon measure if and only if it is locally finite (cf.~\cite[Theorem 2 and Proposition 3 of Chapter IX, § 3]{BourbakiInt2}). Now, if $X$ and $Y$ are Polish spaces, $\mu$ is a Radon measure on $X$, and $\pi\colon X\to Y$ is a $\mu$-measurable mapping, then $\pi$ is called $\mu$-proper if $\pi_*(\mu)$ is a Radon measure. Observe that, if $\pi$ is proper, then it is $\mu$-proper (cf.~\cite[Remark 2 of Chapter IX, § 2, No.\ 3]{BourbakiInt2}). If $\mu$ is a Radon measure on a Polish space $X$, and $f\in L^1_\mathrm{loc}(\mu)$, then we shall denote by $f\cdot \mu$ the Radon measure $E\mapsto\int_E f\,\mathrm{d} \mu$. We say that two positive Radon measures on a Polish space are equivalent if they share the same negligible sets; in other words, if they are absolutely continuous with respect to one another. If $X$ is a locally compact space, then we denote by $C_0(X)$ the space of complex-valued continuous functions on $X$ which vanish at $\infty$, endowed with the maximum norm. We denote by $\mathcal{M}^1(X)$ the dual of $C_0(X)$, that is, the space of bounded (Radon) measures on $X$. \section{Rockland Families and the Kernel Transform} \label{sec:3} In this section, $G$ denotes a homogeneous group of dimension $n$ and homogeneous dimension~$Q$. \begin{definition} Let $\mathcal{L}_A=(\mathcal{L}_\alpha)_{\alpha\in A}$ be a family of differential operators on $G$. We say that $\mathcal{L}_A$ is jointly hypoelliptic if the following hold: if $V$ is an open subset of $G$ and $T$ is a distribution on $V$ such that $\mathcal{L}_\alpha T\in C^\infty(V)$ for every $\alpha\in A$, then $T\in C^\infty(V)$. \end{definition} The following result enriches~\cite[Proposition 3.6.3]{Martini}. \begin{theorem}\label{teo:1} Let $\mathcal{L}_A=(\mathcal{L}_\alpha)_{\alpha\in A}$ be a non-empty finite family of formally self-adjoint, homogeneous, left-invariant differential operators without terms of order $0$ on $G$. Then, the following conditions are equivalent: \begin{enumerate} \item $\mathcal{L}_A$ is jointly hypoelliptic; \item for every continuous non-trivial irreducible unitary representation $\pi$ of $G$ in a hilbertian space $H$, the family of operators $\mathrm{d} \pi(\mathcal{L}_A)$ is jointly injective on $C^\infty(\pi)$; \item the (non-unital) algebra generated by $\mathcal{L}_A$ contains a positive Rockland operator, possibly with respect to a different family of dilations on $G$ with respect to which the $\mathcal{L}_\alpha$ are still homogeneous. 
\end{enumerate} Assume, in addition, that the $\mathcal{L}_\alpha$ commute as differential operators. Then, the preceding conditions are equivalent to the following one: \begin{enumerate} \item[4.] the $\mathcal{L}_\alpha$ are essentially self-adjoint on $C^\infty_c(G)$, their self-adjoint extensions commute and for every $m\in \mathcal{S}(\mathds{R}^A)$ the convolution kernel of the operator $m(\mathcal{L}_A)$ belongs to $\mathcal{S}(G)$. \end{enumerate} \end{theorem} \begin{proof} {\bf 1 $\implies$ 2.} This is a simple adaptation of the proof of~\cite[Theorem 1]{Beals}. {\bf 2 $\implies$ 3.} This is the implication $(ii) \implies (i)$ of~\cite[Proposition 3.6.3]{Martini}. {\bf 3 $\implies$ 1.} Take an open subset $V$ of $G$ and $T\in \mathcal{D}'(V)$ such that $\mathcal{L}_\alpha T$ is $C^\infty$ on $V$ for every $\alpha\in A$. Take $P\in \mathds{C}[A]$ such that $P(0)=0$ and $P(\mathcal{L}_A)$ is hypoelliptic. Then, $P(\mathcal{L}_A)T$ is $C^\infty$ on $V$, so that $T$ is $C^\infty$ on $V$. Now, assume that the $\mathcal{L}_\alpha$ commute as differential operators. {\bf 3 $\implies$ 4.} This follows from~\cite[Propositions 1.4.4, 3.1.2, and 4.2.1]{Martini}. {\bf 4 $\implies$ 3.} Notice first that, by~\cite[Proposition 1.1]{Miller}, we may replace the family of dilations of $G$ with another one in such a way that the $\mathcal{L}_\alpha$ are still homogeneous and the degrees of homogeneity $\delta_\alpha$ of the $\mathcal{L}_\alpha$ all belong to $\mathds{N}^*$.\footnote{If $\mathcal{L}_{\alpha}=0$, choose $\delta_{\alpha}=1$.} Then, define $k_\alpha\coloneqq \prod_{\alpha'\neq \alpha} \delta_{\alpha'}$ for every $\alpha\in A$, and $P(X_A)\coloneqq\sum_{\alpha\in A} X_\alpha^{2 k_{\alpha} }$; observe that $P$ defines a positive \emph{homogeneous} proper polynomial mapping on $\mathds{R}^A$, of homogeneous degree $\delta'=2 \prod_{\alpha\in A} \delta_\alpha$. Now, take $t\geqslant 0$ and let $p_t$ be the convolution kernel of the operator $e^{- t P(\mathcal{L}_A)}$; by assumption, $p_t\in \mathcal{S}(G)$ for $t>0$, while $p_0=\delta_e$. In addition, it is readily seen that \[ p_t(g)=t^{-\sfrac{Q}{\delta'}} p_1\left( t^{-\sfrac{1}{\delta'}}\cdot g \right) \] for every $t>0$ and for every $g\in G$. Now, define \[ p(t,g)\coloneqq \begin{cases} p_t(g) & \text{if $t>0$}\\ 0 & \text{if $t\leqslant 0$}. \end{cases} \] Then, it is easily seen that $p$ is of class $C^\infty$ on $(\mathds{R}\times G)\setminus \Set{(0,e)}$. In addition, for every $\varphi \in C^\infty_c(G)$ the mapping $t \mapsto \varphi* p_t\in L^2(G)$ is continuous on $\mathds{R}_+$ and differentiable on $\mathds{R}_+^*$, with derivative $t\mapsto \varphi*[P(\mathcal{L}_A) p_t]$. By the arbitrariness of $\varphi$, we deduce that the mapping $t\mapsto p_t\in \mathcal{D}'(G)$ is of class $C^1$ on $\mathds{R}_+$, with derivative $t\mapsto P(\mathcal{L}_A) p_t$ (cf.~\cite[Theorem 3.1]{DixmierMalliavin}). Hence, by means of routine arguments we see that $p$ is a fundamental solution of the heat operator $\partial_t-P(\mathcal{L}_A)$ on $\mathds{R}\times G$. Since $\partial_t-P(\mathcal{L}_A)$ is formally self-adjoint, we see that $\check{\overline{p}}$ is a fundamental solution of the right-invariant differential operator associated with $\partial_t-P(\mathcal{L}_A)$. Arguing as in the proof of~\cite[Theorem 2.1]{Treves2}, we see that $\partial_t-P(\mathcal{L}_A)$ is hypoelliptic, so that also $P(\mathcal{L}_A)$ is hypoelliptic.
\end{proof} \begin{definition} A Rockland family is a non-empty finite \emph{commutative} family of homogeneous left-invariant differential operators without terms of order $0$ which satisfies the equivalent conditions of Theorem~\ref{teo:1}. \end{definition} Here we depart slightly from the notion of `Rockland system' as defined in~\cite{Martini}. Indeed, a Rockland system is a Rockland family, while a Rockland family need not be a Rockland system, since the algebra it generates need not contain a Rockland operator. Nevertheless, the difference is only illusory: as Theorem~\ref{teo:1} shows, given a Rockland family $\mathcal{L}_A$, one may change the dilations of $G$ in such a way that $\mathcal{L}_A$ becomes a Rockland system. In other words, up to a change of dilations, there is no difference between Rockland families and Rockland systems. Notice that, as a consequence of the results of Section~\ref{sec:8}, the properties we are going to investigate do not pertain to the chosen family $\mathcal{L}_A$, but actually to the (non-unital) algebra it generates. As a matter of fact, we can start with a commutative, finitely generated, formally self-adjoint and dilation-invariant sub-algebra of $\mathfrak{U}_\mathds{C}(\mathfrak{g})$ and require that its elements have no constant terms and that it contains a hypoelliptic operator without constant terms. It is not hard to see that such algebras are generated by a Rockland family (use~\cite{RobbinSalamon} to prove that dilation-invariant sub-algebras are graded, that is, generated by homogeneous elements), and that different Rockland families which generate the same algebra are equivalent in a natural sense. Notice, in addition, that we do not impose any minimality conditions on the chosen family, in terms of the aforementioned equivalence; we do not even require that each $\mathcal{L}_\alpha$ should be non-zero. This choice does not provide serious inconveniences; instead, it makes the exposition simpler, since we do not have to check at each step that the families we introduce are `minimal' (cf., for instance, Proposition~\ref{prop:3}). \begin{definition}\label{def:1} Let $\mathcal{L}_A$ be a Rockland family. Then, we denote by $\mu_{\mathcal{L}_A}$ the spectral measure associated with the self-adjoint extensions of the $\mathcal{L}_\alpha$. We say that a $\mu_{\mathcal{L}_A}$-measurable function $m\colon \mathds{R}^A\to \mathds{C}$ admits a kernel if $C^\infty_c(G)$ is contained in the domain of $m(\mathcal{L}_A)$. In this case, $\mathcal{S}(G)$ is contained in the domain of $m(\mathcal{L}_A)$ and there is a unique $K\in \mathcal{S}'(G)$ such that $m(\mathcal{L}_A)\varphi= \varphi* K$ for every $\varphi \in \mathcal{S}(G)$ (cf.~\cite[Theorem 7.2]{DixmierMalliavin}); we shall denote $K$ by $\mathcal{K}_{\mathcal{L}_A}(m)$. \end{definition} \begin{definition} Let $\mathcal{L}_A$ be a Rockland family. We shall say that $\mathcal{L}_A$ satisfies property: \begin{itemize} \item[$(RL)$] (`Riemann-Lebesgue') if every $m\in L^\infty(\mu_{\mathcal{L}_A})$ such that $\mathcal{K}_{\mathcal{L}_A}(m)\in L^1(G)$ has a continuous representative; \item[$(S)$] (`Schwartz') if every $m\in L^\infty(\mu_{\mathcal{L}_A})$ such that $\mathcal{K}_{\mathcal{L}_A}(m)\in \mathcal{S}(G)$ has a representative in $\mathcal{S}(\mathds{R}^A)$. \end{itemize} \end{definition} \begin{remark}\label{oss:1:1} Observe that we did not require that $m$ has a representative in $C_0(\mathds{R}^A)$ in the definition of property $(RL)$. 
Actually, the fact that $m$ vanishes at $\infty$ is basically automatic (cf.~\cite[Proposition 3.2.11]{Martini}). \end{remark} Notice that, thanks to~\cite[Proposition 3.6.3]{Martini}, we may take advantage of the study of `weighted subcoercive systems' pursued in~\cite{Martini}. Actually, many of the general results proved below hold for weighted subcoercive systems on (say) groups of polynomial growth. In particular, we shall often use without reference some elementary properties of $\mathcal{K}_{\mathcal{L}_A}$. For example, \[ \mathcal{K}_{\mathcal{L}_A}(\overline m)=\mathcal{K}_{\mathcal{L}_A}(m)^*= \overline{[\mathcal{K}_{\mathcal{L}_A}(m)]\check\;}. \] We shall also have to deal with equalities of the form \[ \mathcal{K}_{\mathcal{L}_A}(m_1 m_2)=\mathcal{K}_{\mathcal{L}_A}(m_1)*\mathcal{K}_{\mathcal{L}_A}(m_2). \] We leave to the reader to verify case by case that such equalities hold (or else to refer to~\cite{Martini} whenever possible). Before we proceed, let us state a couple of useful results. The first one is basically a corollary of~\cite[Proposition 3.2.4]{Martini}. \begin{proposition}\label{prop:3} Let $G,G'$ be two non-trivial homogeneous groups, and $\pi$ a \emph{homogeneous} homomorphism of $G$ \emph{onto} $G'$. Let $\mathcal{L}_A$ be a Rockland family on $G$. Then, the following hold: \begin{enumerate} \item $\mathrm{d} \pi(\mathcal{L}_A)=(\mathrm{d} \pi(\mathcal{L}_\alpha))_{\alpha\in A}$ is a Rockland family on $G'$; \item $\sigma(\mathrm{d} \pi(\mathcal{L}_A))\subseteq \sigma(\mathcal{L}_A)$; \item if $m\colon E_{\mathcal{L}_A}\to \mathds{C}$ is $\beta_{\mathcal{L}_A}$-measurable and continuous on an open set which carries $\beta_{\mathrm{d} \pi(\mathcal{L}_A)}$, and if $\mathcal{K}_{\mathcal{L}_A}(m)\in \mathcal{M}^1(G)+\mathcal{E}'(G)$, then \[ \pi_*(\mathcal{K}_{\mathcal{L}_A}(m))= \mathcal{K}_{\mathrm{d} \pi(\mathcal{L}_A)}(m). \] \end{enumerate} \end{proposition} \begin{proof} {\bf1.} The fact that $\mathrm{d} \pi(\mathcal{L}_A)$ is Rockland follows from the fact that, if $\widetilde \pi$ is a continuous unitary representation of $G'$, then $\widetilde \pi \circ \pi$ is a continuous unitary representation of $G$, with $C^\infty(\widetilde \pi)=C^\infty(\widetilde \pi\circ \pi)$ since $\pi$ is a submersion, and $\mathrm{d} \widetilde \pi(\mathrm{d} \pi(\mathcal{L}_A))= \mathrm{d} (\widetilde \pi \circ \pi)(\mathcal{L}_A)$; finally, $\widetilde \pi\circ \pi$ is irreducible or trivial if and only if $\widetilde \pi$ is irreducible or trivial, respectively. {\bf3.} Let $\widetilde \pi$ be the right quasi-regular representation of $G$ in $L^2(G')$, that is, $\widetilde \pi(g) f=f(\,\cdot\,\pi(g))$ for every $g\in G$ and for every $f\in L^2(G')$. Then~\cite[Proposition 3.2.4]{Martini}, applied to $\widetilde \pi$, implies that our assertion holds if $m\in C_0(E_{\mathcal{L}_A})$ and $\mathcal{K}_{\mathcal{L}_A}(m)\in L^1(G)$. The general case follows by approximation. {\bf2.} This follows easily from~{\bf3}. \end{proof} The following definition will shorten the notation in the sequel. \begin{definition} Let $F$ be a subspace of $\mathcal{D}'(G)$. Then, we denote by $F_{\mathcal{L}_A}$ the set of $\mathcal{K}_{\mathcal{L}_A}(m)$ as $m$ runs through the set of $\mu_{\mathcal{L}_A}$-measurable functions which admit a kernel in $F$. \end{definition} \begin{proposition}\label{prop:1} Let $F$ be a Fréchet space which is continuously embedded in $\mathcal{M}^1(G)$ or, more generally, in the right convolutors of $L^2(G)$. Then, $F_{\mathcal{L}_A}$ is closed in $F$. 
\end{proposition} In particular, this applies to $L^1(G)$ and $\mathcal{S}(G)$. With more effort, one may generalize this result to any locally convex space $F$ which is continuously embedded in $\mathcal{D}'(G)$ and for which the bilinear mapping $*\colon C_c^\infty(G)\times F\to L^2(G)$ is separately continuous. \begin{proof} Indeed, let $(m_j)$ be a sequence in $L^\infty(\mu_{\mathcal{L}_A})$ such that the sequence $(\mathcal{K}_{\mathcal{L}_A}(m_j))$ converges to some $f$ in $F$. Then, $(m_j(\mathcal{L}_A))$ is a Cauchy sequence in $\mathcal{L}(L^2(G))$, so that $(m_j)$ is a Cauchy sequence in $L^\infty(\mu_{\mathcal{L}_A})$ by spectral theory. Therefore, it converges to some $m$ in $L^\infty(\mu_{\mathcal{L}_A})$; hence, $\mathcal{K}_{\mathcal{L}_A}(m_j)$ converges to $\mathcal{K}_{\mathcal{L}_A}(m)$ in $\mathcal{S}'(G)$ (cf.~\cite[Theorem 7.2]{DixmierMalliavin}). Therefore, $\mathcal{K}_{\mathcal{L}_A}(m)=f$. \end{proof} We shall often need some dilations on $\mathds{R}^A$ which reflect the homogeneity of the $\mathcal{L}_\alpha$. This leads to the following definition. \begin{definition} Let $\mathcal{L}_A$ be a Rockland family. Then, we shall define $E_{\mathcal{L}_A}$ as $\mathds{R}^A$, endowed with the dilations \[ r \cdot (x_\alpha)\coloneqq (r^{\delta_\alpha} x_\alpha) \] for every $r>0$ and for every $(x_\alpha)\in \mathds{R}^A$; here, $\delta_\alpha$ is the homogeneous degree of $\mathcal{L}_\alpha$ if $\mathcal{L}_\alpha\neq 0$, while $\delta_\alpha=1$ otherwise. We denote by $\abs{\,\cdot\,}$ a homogeneous norm on $E_{\mathcal{L}_A}$. \end{definition} The following proposition is basically a consequence of~\cite[Theorem 3.2.7 and Proposition 3.6.1]{Martini}. The proof is omitted. \begin{theorem} Let $\mathcal{L}_A$ be a Rockland family. Then, there is a unique positive Radon measure $\beta_{\mathcal{L}_A}$ on $E_{\mathcal{L}_A}$ such that the following hold: \begin{enumerate} \item $\mu_{\mathcal{L}_A}$ and $\beta_{\mathcal{L}_A}$ are equivalent; \item $\mathcal{K}_{\mathcal{L}_A}$ induces an isometry of $L^2(\beta_{\mathcal{L}_A})$ onto $L^2_{\mathcal{L}_A}(G)$; \item $(r\,\cdot \,)_*(\beta_{\mathcal{L}_A})=r^{-Q} \beta_{\mathcal{L}_A}$ for every $r>0$. \end{enumerate} \end{theorem} The following corollary has already been considered in~\cite[Proposition 3.2.12]{Martini} for the case $p=1$. The general case follows by interpolation. \begin{corollary} Take $p\in [1,2]$. Then, $\mathcal{K}_{\mathcal{L}_A}$ induces a unique continuous linear mapping \[ \mathcal{K}_{\mathcal{L}_A,p}\colon L^p(\beta_{\mathcal{L}_A})\to L^{p'}(G). \] In addition, $\mathcal{K}_{\mathcal{L}_A,1}$ maps $L^1(\beta_{\mathcal{L}_A})$ into $C_0(G)$, has norm $1$ and induces an isometry from the set of positive $\beta_{\mathcal{L}_A}$-integrable functions into the set of continuous functions of positive type on $G$. \end{corollary} From the preceding corollary we deduce the existence of an `integral kernel' $\chi_{\mathcal{L}_A}$ for the `kernel transform' $\mathcal{K}_{\mathcal{L}_A}$. This integral kernel was introduced in~\cite[Theorem 2.11]{Tolomeo} for a sub-Laplacian on a group of polynomial growth; since many results of the remainder of this section are basically extensions of those presented in~\cite{Tolomeo}, we shall omit most of the proofs. 
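In the model case of a single operator on the real line, namely $G=\mathds{R}$ and $\mathcal{L}=-\frac{\mathrm{d}^2}{\mathrm{d} x^2}$, all of these objects can be written down explicitly (the computations below are classical and are included only for illustration): $E_{\mathcal{L}}=\mathds{R}$ with the dilations $r\cdot\lambda=r^2\lambda$, the Plancherel measure is
\[
\mathrm{d}\beta_{\mathcal{L}}(\lambda)=\frac{1}{2\pi}\,\mathbf{1}_{(0,\infty)}(\lambda)\,\frac{\mathrm{d}\lambda}{\sqrt{\lambda}},
\]
and the integral kernel is $\chi_{\mathcal{L}}(\lambda,x)=\cos\big(\sqrt{\lambda}\,x\big)$; indeed, with the normalization $\mathcal{F}\varphi(\xi)=\int_{\mathds{R}}\varphi(x)e^{-i\xi x}\,\mathrm{d} x$,
\[
\mathcal{K}_{\mathcal{L}}(m)(x)=\frac{1}{2\pi}\int_{\mathds{R}}m(\xi^2)\,e^{i\xi x}\,\mathrm{d}\xi=\int_0^\infty m(\lambda)\cos\big(\sqrt{\lambda}\,x\big)\,\mathrm{d}\beta_{\mathcal{L}}(\lambda).
\]
One checks directly that $(r\,\cdot\,)_*(\beta_{\mathcal{L}})=r^{-1}\beta_{\mathcal{L}}$ (here $Q=1$) and that $\chi_{\mathcal{L}}(r^2\lambda,x)=\chi_{\mathcal{L}}(\lambda,rx)$, in accordance with Proposition~\ref{prop:3:7} below.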
\begin{proposition} There is a unique $\chi_{\mathcal{L}_A}\in L^\infty(\beta_{\mathcal{L}_A}\otimes \nu_G)$ such that \[ \mathcal{K}_{\mathcal{L}_A}(m)(g)=\int_{E_{\mathcal{L}_A}} m(\lambda) \chi_{\mathcal{L}_A}(\lambda,g) \,\mathrm{d} \beta_{\mathcal{L}_A}(\lambda) \] for $\nu_G$-almost every $g\in G$. In addition, $\norm{\chi_{\mathcal{L}_A}}_\infty=1$. \end{proposition} The proof follows the lines of the proof of~\cite[Theorem 2.11]{Tolomeo}. One may also make use of the Dunford-Pettis Theorem. We now pass to show some of the main properties of $\chi_{\mathcal{L}_A}$. In particular, we shall find some representatives of $\chi_{\mathcal{L}_A}$ which are particularly well-behaved. The following simple result generalizes~\cite[Theorem 2.33]{Tolomeo}. \begin{proposition}\label{prop:3:7} For every $s>0$ and for $(\beta_{\mathcal{L}_A}\otimes \nu_G)$-almost every $(\lambda,g)\in E_{\mathcal{L}_A}\times G$, \[ \chi_{\mathcal{L}_A}(s\cdot\lambda,g)=\chi_{\mathcal{L}_A}(\lambda, s \cdot g). \] \end{proposition} The following property is reminiscent of an analogous one concerning Gelfand pairs. It extends~\cite[Proposition 2.14]{Tolomeo} to our setting; we shall nevertheless present an alternative proof. \begin{proposition}\label{prop:3:8} Take a $\beta_{\mathcal{L}_A}$-measurable function $m\colon E_{\mathcal{L}_A}\to \mathds{C}$ which admits a kernel in $\mathcal{M}^1(G)+\mathcal{E}'(G)$. Then \[ \mathcal{K}_{\mathcal{L}_A}(m)*\chi_{\mathcal{L}_A}(\lambda,\,\cdot\,)=\chi_{\mathcal{L}_A}(\lambda,\,\cdot\,)*\mathcal{K}_{\mathcal{L}_A}(m)= m(\lambda)\chi_{\mathcal{L}_A}(\lambda,\,\cdot\,) \] for $\beta_{\mathcal{L}_A}$-almost every $\lambda\in E_{\mathcal{L}_A}$. \end{proposition} \begin{proof} Notice first that, for every $\varphi_2\in C^\infty_c(G)$, the linear functional \[ L^\infty(G)\ni f \mapsto \langle \mathcal{K}_{\mathcal{L}_A}(m)* f, \varphi_2\rangle= \langle f, \mathcal{K}_{\mathcal{L}_A}(m)\check{\;} * \varphi_2\rangle\in \mathds{C} \] is continuous with respect to the weak topology $\sigma(L^\infty(G),L^1(G))$. In addition, for every $\varphi_1\in C^\infty_c(E_{\mathcal{L}_A})$, \[ \mathcal{K}_{\mathcal{L}_A}(\varphi_1)= \int_{E_{\mathcal{L}_A}} \varphi_1(\lambda)\chi_{\mathcal{L}_A}(\lambda,\,\cdot\,)\,\mathrm{d} \beta_{\mathcal{L}_A}(\lambda) \] in $L^\infty(G)$, endowed with the weak topology $\sigma(L^\infty(G),L^1(G))$. Therefore, \[ \begin{split} &\int_{E_{\mathcal{L}_A}} \langle \mathcal{K}_{\mathcal{L}_A}(m)*\chi_{\mathcal{L}_A}(\lambda,\,\cdot\,), \varphi_2\rangle\, \varphi_1(\lambda)\,\mathrm{d} \beta_{\mathcal{L}_A}(\lambda)= \langle \mathcal{K}_{\mathcal{L}_A}(m)* \mathcal{K}_{\mathcal{L}_A}(\varphi_1), \varphi_2\rangle\\ &\qquad \qquad\qquad \qquad\qquad \qquad\quad=\langle \mathcal{K}_{\mathcal{L}_A}(m \varphi_1), \varphi_2\rangle\\ &\qquad \qquad\qquad \qquad\qquad \qquad\quad=\int_{E_{\mathcal{L}_A}} (m \varphi_1)(\lambda) \langle \chi_{\mathcal{L}_A}(\lambda,\,\cdot\,), \varphi_2\rangle\,\mathrm{d} \beta_{\mathcal{L}_A}(\lambda), \end{split} \] whence the assertion by the arbitrariness of $\varphi_2$. The other equality is proved similarly. \end{proof} \begin{corollary} Let $P$ be a polynomial on $E_{\mathcal{L}_A}$. 
Then \[ P(\mathcal{L}_A)\chi_{\mathcal{L}_A}(\lambda,\,\cdot \,)=P(\mathcal{L}_A^R)\chi_{\mathcal{L}_A}(\lambda,\,\cdot \,)=P(\lambda)\chi_{\mathcal{L}_A}(\lambda,\,\cdot\,) \] for $\beta_{\mathcal{L}_A}$-almost every $\lambda\in E_{\mathcal{L}_A}$; here, $\mathcal{L}_A^R$ denotes the family of right-invariant differential operators which corresponds to $\mathcal{L}_A$. \end{corollary} In the following result we show the existence of well-behaved representatives of $\chi_{\mathcal{L}_A}$. Since it is a straightforward extension of~\cite[Lemmas 2.12 and 2.15 and Propositions 2.17 and 2.18]{Tolomeo}, the proof is omitted. \begin{theorem}\label{teo:2} There is a representative $\chi_{0}$ of $\chi_{\mathcal{L}_A}$ such that the following hold: \begin{enumerate} \item $\chi_{0}(\lambda,\,\cdot\,)$ is a function of positive type of class $C^{\infty}$ with maximum $1$ for every $\lambda\in E_{\mathcal{L}_A}$; \item for every homogeneous left-invariant differential operator $X$ and every homogeneous right-invariant differential operator $Y$ on $G$, of homogeneous degrees $\mathrm{d}_X$ and $\mathrm{d}_Y$, respectively, there is a constant $C_{X,Y}>0$ such that \[ \norm{ Y X \chi_{0}(\lambda,\,\cdot\,) }_\infty\leqslant C_{X,Y} \abs{\lambda}^{\mathrm{d}_{X}+\mathrm{d}_{Y}} \] for every $\lambda\in E_{\mathcal{L}_A}$; \item $\chi_{0}(\lambda,\,\cdot\,)$ converges to $\chi_{0}(0,\,\cdot\,)=1$ in $\mathcal{E}(G)$ as $\lambda\to 0$; \item $\chi_{0}(\,\cdot\,,g)$ is $\beta_{\mathcal{L}_A}$-measurable for every $g\in G$. \end{enumerate} \end{theorem} We conclude this section with some remarks concerning the adjoint of $\mathcal{K}_{\mathcal{L}_A}$ and the continuity of $\chi_{\mathcal{L}_A}$. \begin{definition} We shall denote by $\mathcal{M}_{\mathcal{L}_A}\colon \mathcal{M}^1(G)\to L^\infty(\beta_{\mathcal{L}_A})$ the transpose of the mapping \[ L^1(\beta_{\mathcal{L}_A})\ni m\mapsto \mathcal{K}_{\mathcal{L}_A,1}(m) \check{\;} \in C_0(G). \] \end{definition} Observe that $\mathcal{M}_{\mathcal{L}_A}$ coincides with the adjoint of $\mathcal{K}_{\mathcal{L}_A}\colon L^2(\beta_{\mathcal{L}_A})\to L^2(G)$ on $L^1(G)\cap L^2(G)$. Therefore, by interpolation we deduce that $\mathcal{M}_{\mathcal{L}_A}$ extends to a continuous linear mapping of $L^p(G)$ into $L^{p'}(\beta_{\mathcal{L}_A})$ for every $p\in [1,2]$. The following result extends~\cite[Theorem 2.13]{Tolomeo} to the present setting. The proof is omitted. \begin{proposition}\label{prop:3:3} Take a representative $\chi_{0}$ of $\chi_{\mathcal{L}_A}$ as in Theorem~\ref{teo:2}. Then, for every $\mu \in \mathcal{M}^1(G)$ we have \[ \mathcal{M}_{\mathcal{L}_A}(\mu)(\lambda)=\int_{G} \overline{\chi_{0}(\lambda,g) }\,\mathrm{d} \mu(g) \] for $\beta_{\mathcal{L}_A}$-almost every $\lambda\in E_{\mathcal{L}_A}$. \end{proposition} \begin{corollary}\label{cor:4} Take $m\in L^\infty(\beta_{\mathcal{L}_A})$ such that $\mathcal{K}_{\mathcal{L}_A}(m)\in \mathcal{M}^1(G)$ and $\mu \in \mathcal{M}^1(G)$. Then \[ \mathcal{M}_{\mathcal{L}_A}( \mathcal{K}_{\mathcal{L}_A}(m)*\mu )= \mathcal{M}_{\mathcal{L}_A}(\mu* \mathcal{K}_{\mathcal{L}_A}(m) )=m \mathcal{M}_{\mathcal{L}_A}(\mu). \] \end{corollary} \begin{proof} Indeed, take a representative $\chi_0$ of $\chi_{\mathcal{L}_A}$ as in Theorem~\ref{teo:2}.
Then, Proposition~\ref{prop:3:8} implies that \[ \begin{split} \mathcal{M}_{\mathcal{L}_A}( \mathcal{K}_{\mathcal{L}_A}(m)*\mu )(\lambda)&= \langle \mathcal{K}_{\mathcal{L}_A}(m)*\mu, \overline {\chi_{0}(\lambda,\,\cdot\,)}\rangle=\langle \mu, \overline{\mathcal{K}_{\mathcal{L}_A}(\overline m)*\chi_{0}(\lambda,\,\cdot\,)}\rangle\\ &= m(\lambda) \langle\mu, \overline{\chi_0(\lambda,\,\cdot\,)}\rangle= m(\lambda) \mathcal{M}_{\mathcal{L}_A}(\mu)(\lambda) \end{split} \] for $\beta_{\mathcal{L}_A}$-almost every $\lambda\in E_{\mathcal{L}_A}$. The other equality is proved analogously. \end{proof} \begin{corollary}\label{cor:5} Take a function $m\in L^\infty(\beta_{\mathcal{L}_A})$ such that $\mathcal{K}_{\mathcal{L}_A}(m)\in\mathcal{M}^1(G)$. Then, $m=\mathcal{M}_{\mathcal{L}_A}(\mathcal{K}_{\mathcal{L}_A}(m))$. In particular, $m$ is continuous at $0$ and \[ m(0)=\int_{G} \mathrm{d} \mathcal{K}_{\mathcal{L}_A}(m). \] \end{corollary} \begin{proof} The first assertion follows from Corollary~\ref{cor:4}, applied with $\mu\coloneqq \delta_e$. The second assertion follows from~{\bf3} of Theorem~\ref{teo:2}. \end{proof} \begin{theorem}\label{teo:3} The following conditions are equivalent: \begin{enumerate} \item $\chi_{\mathcal{L}_A}$ has a representative $\chi_0$ such that $\chi_0(\,\cdot\,, g)$ is continuous on $\sigma(\mathcal{L}_A)$ for $\nu_G$-almost every $g\in G$;\footnote{Notice that, in principle, this condition is weaker than separate continuity.} \item $\mathcal{M}_{\mathcal{L}_A}$ induces a continuous linear mapping from $L^1(G)$ into $C_0(\sigma(\mathcal{L}_A))$; \item $\mathcal{M}_{\mathcal{L}_A}$ induces a continuous linear mapping from $\mathcal{M}^1(G)$ into $C_b(\sigma(\mathcal{L}_A))$; \item $\chi_{\mathcal{L}_A}$ has a continuous representative. \end{enumerate} \end{theorem} This shows, in particular, that if $\chi_{\mathcal{L}_A}$ has a continuous representative, then $\mathcal{L}_A$ satisfies property $(RL)$. Nevertheless, the converse fails as Remark~\ref{oss:1} shows. \begin{proof} {\bf1 $\implies$ 2.} In order to prove continuity, it suffices to show that \[ \mathcal{M}_{\mathcal{L}_A}(\varphi)(\lambda)=\int_G \overline{\chi_0(\lambda,g)}\,\varphi(g)\,\mathrm{d} g \] for every $\varphi \in L^1(G)$ and for $\beta_{\mathcal{L}_A}$-almost every $\lambda\in E_{\mathcal{L}_A}$, and to apply the dominated convergence theorem. In order to prove that $\mathcal{M}_{\mathcal{L}_A}(\varphi)$ vanishes at $\infty$, it suffices to observe that, if $\tau\in C^\infty_c(E_{\mathcal{L}_A})$ and $\tau(0)=1$, then $\mathcal{M}_{\mathcal{L}_A}(\varphi)$ is the limit in $C_b(\sigma(\mathcal{L}_A))$ of $\mathcal{M}_{\mathcal{L}_A}(\varphi*\mathcal{K}_{\mathcal{L}_A}(\tau(2^{-j}\,\cdot\,)))$, which equals $\tau(2^{-j}\,\cdot\,) \mathcal{M}_{\mathcal{L}_A}(\varphi)$ by Corollary~\ref{cor:4}. {\bf2 $\implies $ 4.} Take $\tau\in \mathcal{S}(E_{\mathcal{L}_A})$ such that $\tau(\lambda)>0$ for every $\lambda\in E_{\mathcal{L}_A}$. Observe that the mapping $G\ni g\mapsto \mathcal{K}_{\mathcal{L}_A}(\tau)(g\,\cdot\,)\in L^1(G)$ is continuous, so that also the mapping $G\ni g\mapsto \mathcal{M}_{\mathcal{L}_A}(\mathcal{K}_{\mathcal{L}_A}(\tau)(g\,\cdot\,))\in C_0(\sigma(\mathcal{L}_A))$ is continuous. Therefore, the mapping \[ \sigma(\mathcal{L}_A)\times G\ni (\lambda,g)\mapsto \mathcal{M}_{\mathcal{L}_A}(\mathcal{K}_{\mathcal{L}_A}(\tau)(g\,\cdot\,))(\lambda) \in \mathds{C} \] is continuous. Now, let $\chi_{1}$ be a representative of $\chi_{\mathcal{L}_A}$ as in Theorem~\ref{teo:2}. 
Then, Proposition~\ref{prop:3:8} implies that \[ \begin{split} \mathcal{M}_{\mathcal{L}_A}(\mathcal{K}_{\mathcal{L}_A}(\tau)(g\,\cdot\,))(\lambda)&= \int_G \mathcal{K}_{\mathcal{L}_A}(\tau)(g g')\chi_{1}(\lambda,g'^{-1})\,\mathrm{d} g'= [ \mathcal{K}_{\mathcal{L}_A}(\tau)*\chi_{1}(\lambda,\,\cdot\,) ](g)= \tau(\lambda) \chi_{1}(\lambda,g) \end{split} \] for $(\beta_{\mathcal{L}_A}\otimes \nu_G)$-almost every $(\lambda, g)\in E_{\mathcal{L}_A}\times G$. In particular, $\chi_{\mathcal{L}_A}$ has a representative which is continuous on $\sigma(\mathcal{L}_A)\times G$. By~\cite[Corollary to Theorem 2 of Chapter IX, § 4, No.\ 3]{BourbakiGT2}, $\chi_{\mathcal{L}_A}$ has a continuous representative. {\bf 4 $\implies$ 1.} Obvious. {\bf 4 $\implies$ 3.} The proof is similar to that of the implication {\bf1 $\implies$ 2}. {\bf 3 $\implies$ 2.} This follows from the proof of the implication {\bf1 $\implies$ 2}. \end{proof} \section{Products}\label{sez:7} In this section we deal with the following situation: we have a finite family of homogeneous groups $(G_A)_{A\in \mathcal{A}}$, and on each $G_A$ a Rockland family $\mathcal{L}_A$.\footnote{In order to avoid technical problems, we shall assume that the elements of $\mathcal{A}$ are pairwise disjoint.} Then, we shall consider $G\coloneqq \prod_{A\in \mathcal{A}} G_A$, endowed with the dilations \[ r\cdot (g_A)\coloneqq (r\cdot g_A), \] for $r>0$ and $(g_A)\in G$. We shall denote by $ A'$ the union of $\mathcal{A}$ and, for every $\alpha\in A'$, we shall denote by $\mathcal{L}'_\alpha$ the operator on $G$ induced by $\mathcal{L}_\alpha$. Then, $\mathcal{L}'_{ A'}$ will denote the family $(\mathcal{L}'_\alpha)_{\alpha\in A'}$. We shall investigate what we can say about $\mathcal{L}'_{ A'}$ on the ground of our knowledge of the families $\mathcal{L}_A$. Notice that many of the implications of this section are actually equivalences; nevertheless, we shall leave to the reader the task of stating and proving the easy converses. The following result is basically a consequence of~\cite[Propositions 3.4.2 and 3.4.3]{Martini}. The proof is omitted. \begin{proposition} The following hold: \begin{enumerate} \item $\mathcal{L}'_{ A'}$ is a Rockland family; \item take a $\mu_{\mathcal{L}_A}$-measurable function $m_A\colon \mathds{R}^A\to \mathds{C}$ which admits a kernel for every $A\in \mathcal{A}$. Then, $\bigotimes_{A\in \mathcal{A}} m_A$ is $\mu_{\mathcal{L}'_{ A'}}$-measurable, admits a kernel, and \[ \mathcal{K}_{\mathcal{L}'_{ A'}}\left(\bigotimes_{A\in \mathcal{A}} m_A \right)= \bigotimes_{A\in \mathcal{A}} \mathcal{K}_{\mathcal{L}_A}(m_A). \] \end{enumerate} \end{proposition} The following result is basically a consequence of~\cite[Proposition 3.4.4]{Martini}. The proof is omitted. \begin{proposition}\label{prop:6:2} The following hold: \begin{enumerate} \item $\beta_{\mathcal{L}'_{ A'}}= \bigotimes_{A\in \mathcal{A}} \beta_{\mathcal{L}_A}$; \item for $(\beta_{\mathcal{L}'_{ A'}}\otimes \nu_G)$-almost every $((\lambda_\alpha), (g_A))\in \mathds{R}^{ A'}\times G$, \[ \chi_{\mathcal{L}'_{ A'}} ((\lambda_\alpha)_{\alpha\in A'}, (g_A)_{A\in \mathcal{A}})= \prod_{A\in \mathcal{A}} \chi_{\mathcal{L}_A}((\lambda_\alpha)_{\alpha\in A}, g_A ). \] \end{enumerate} \end{proposition} Now we focus on property $(RL)$. From now on, we shall sometimes make use of topological tensor products \emph{over $\mathds{C}$}. 
We shall generally agree with the notation of~\cite{Treves}, except for the fact that, without further specifications, we shall endow every tensor product with the $\pi$-topology. \begin{lemma}\label{lem:1} Assume that $\mathcal{A}=\Set{A_1,A_2}$. Then, for every $m\in L^1(\beta_{\mathcal{L}'_{ A'}})$ and for every $\mu\in \mathcal{M}^1(G_{A_2})$ there is $m_{\mu}\in L^1(\beta_{\mathcal{L}_{A_1}})$ such that \[ \int_{G_{A_2}}\mathcal{K}_{\mathcal{L}'_{ A'},1}(m)(\,\cdot\,,g_2)\,\mathrm{d} \mu(g_2) = \mathcal{K}_{\mathcal{L}_{A_1},1}(m_{\mu}). \] \end{lemma} \begin{proof} Observe first that \[ L^1(\beta_{\mathcal{L}'_{ A'}})\cong L^1(\beta_{\mathcal{L}_{A_1}}; L^1(\beta_{\mathcal{L}_{A_2}}))\cong L^1(\beta_{\mathcal{L}_{A_1}})\widehat \otimes L^1(\beta_{\mathcal{L}_{A_2}}) \] thanks to~\cite[Theorem 46.2]{Treves}. Therefore,~\cite[Theorem 45.1]{Treves} implies that there are $(c_j)\in \ell^1$ and two bounded sequences $(m_{j,1}), (m_{j,2})$ in $L^1(\beta_{\mathcal{L}_{A_1}})$ and $L^1(\beta_{\mathcal{L}_{A_2}})$, respectively, such that \[ m=\sum_{j\in \mathds{N}} c_j (m_{j,1}\otimes m_{j,2}) \] in $L^1(\beta_{\mathcal{L}'_{ A'}})$. Hence, it suffices to define \[ m_{\mu}\coloneqq\sum_{j\in \mathds{N}} c_j\, \int_{G_{A_2}}\mathcal{K}_{\mathcal{L}_{A_2},1}(m_{j,2})(g_2)\,\mathrm{d} \mu(g_2)\, m_{j,1}. \] \end{proof} \begin{corollary}\label{cor:2} Assume that $\mathcal{A}=\Set{A_1,A_2}$. Take $m\in L^\infty(\beta_{\mathcal{L}'_{ A'}})$ such that $\mathcal{K}_{\mathcal{L}'_{ A'}}(m)\in L^1(G)$ and $f\in L^\infty(G_{A_2})$. Then, \[ \int_{G_{A_2}}\mathcal{K}_{\mathcal{L}'_{ A'}}(m)(\,\cdot\,,g_2) f(g_2)\,\mathrm{d} \nu_{G_{A_2}}(g_2)\in L^1_{\mathcal{L}_{A_1}}(G_{A_1}). \] In addition, $\mathcal{K}_{\mathcal{L}'_{ A'}}(m)(\,\cdot\,,g_2)\in L^1_{\mathcal{L}_{A_1}}(G_{A_1})$ for almost every $g_2\in G_{A_2}$. \end{corollary} \begin{proof} {\bf1.} Assume first that $m$ is compactly supported. Let $(K_j)$ be an increasing sequence of compact subsets of $G_{A_2}$ whose union is $G_{A_2}$. Then, \[ \begin{split} \lim_{j\to \infty} \int_{K_j} \mathcal{K}_{\mathcal{L}'_{ A'}}(m)(\,\cdot\,,g_2) f(g_2)\,\mathrm{d} &\nu_{G_{A_2}}(g_2)=\int_{G_{A_2}} \mathcal{K}_{\mathcal{L}'_{ A'}}(m)(\,\cdot\,,g_2) f(g_2)\,\mathrm{d} \nu_{G_{A_2}}(g_2) \end{split} \] in $L^1(G_{A_1})$. The first assertion follows from Lemma~\ref{lem:1} and Proposition~\ref{prop:1}, while the second assertion follows directly from Lemma~\ref{lem:1}. {\bf2.} Now, take $\tau\in C^\infty_c(E_{\mathcal{L}'_{ A'}})$ such that $\tau(0)=1$, and define $\tau_j\coloneqq \tau(2^{-j}\,\cdot\,)$ for every $j\in \mathds{N}$. Then,~{\bf1} above implies that \[ \int_{G_{A_2}}\mathcal{K}_{\mathcal{L}'_{ A'}}(m \tau_j )(\,\cdot\,,g_2) f(g_2)\,\mathrm{d} \nu_{G_{A_2}}(g_2)\in L^1_{\mathcal{L}_{A_1}}(G_{A_1}) \] and that $\mathcal{K}_{\mathcal{L}'_{ A'}}(m \tau_j)(\,\cdot\,,g_2)\in L^1_{\mathcal{L}_{A_1}}(G_{A_1})$ for every $j\in \mathds{N}$ and for almost every $g_2\in G_{A_2}$. Since $\mathcal{K}_{\mathcal{L}'_{ A'}}(m \tau_j )=\mathcal{K}_{\mathcal{L}'_{ A'}}(m )*\mathcal{K}_{\mathcal{L}'_{ A'}}( \tau_j )$ converges to $\mathcal{K}_{\mathcal{L}'_{ A'}}(m)$ in $L^1(G)$, the assertions follow from Proposition~\ref{prop:1}. \end{proof} \begin{theorem}\label{teo:4} If $\mathcal{L}_A$ satisfies property $(RL)$ for every $A\in \mathcal{A}$, then $\mathcal{L}'_{ A'}$ satisfies property $(RL)$. \end{theorem} \begin{proof} {\bf1.} Proceeding by induction, we may reduce to the case in which $\mathcal{A}=\Set{A_1,A_2}$.
In order to simplify the notation, we shall simply write $G_j $ instead of $G_{A_j}$ for $j=1,2$. Now, take $m\in L^\infty(\beta_{\mathcal{L}'_{ A'}})$ such that $\mathcal{K}_{\mathcal{L}'_{ A'}}(m)\in L^1(G)$. Then, Corollary~\ref{cor:5}, Proposition~\ref{prop:6:2} and Fubini's theorem imply that \[ \mathcal{M}_{\mathcal{L}_{A_1}}[g_1\mapsto\mathcal{M}_{\mathcal{L}_{A_2}}[\mathcal{K}_{\mathcal{L}'_{ A'}}(m)(g_1,\,\cdot\,)](\lambda_2)](\lambda_1)=m(\lambda_1, \lambda_2) \] for $\beta_{\mathcal{L}_{A_1}}$-almost every $\lambda_1\in E_{\mathcal{L}_{A_1}}$ and for $\beta_{\mathcal{L}_{A_2}}$-almost every $\lambda_2\in E_{\mathcal{L}_{A_2}}$. Observe that Lemma~\ref{lem:1} implies that $\mathcal{K}_{\mathcal{L}'_{ A'}}(m)(g_1,\,\cdot\,)\in L^1_{\mathcal{L}_{A_2}}(G_{2})$ for almost every $g_1\in G_{1}$, and that by assumption $\mathcal{M}_{\mathcal{L}_{A_2}}$ induces a continuous linear mapping from $L^1_{\mathcal{L}_{A_2}}(G_{2}) $ into $C_0(\sigma(\mathcal{L}_{A_2}))$. Therefore, the mapping $g_1 \mapsto \mathcal{M}_{\mathcal{L}_{A_2}}[\mathcal{K}_{\mathcal{L}'_{ A'}}(m)(g_1,\,\cdot\,)]$ defines an element of $L^1(G_{1}; C_0(\sigma(\mathcal{L}_{A_2})))$. {\bf2.} Let us prove that, for every $\mu \in \mathcal{M}^1(\sigma(\mathcal{L}_{A_2}))$, the mapping \[ g_1 \mapsto (\mu\mathcal{M}_{\mathcal{L}_{A_2}})[\mathcal{K}_{\mathcal{L}'_{ A'}}(m)(g_1,\,\cdot\,)] \] belongs to $L^1_{\mathcal{L}_{A_1}}(G_{1})$. Indeed, the preceding considerations show that $\mu\mathcal{M}_{\mathcal{L}_{A_2}}$ defines an element of $L^1_{\mathcal{L}_{A_2}}(G_{2})'$, so that it can be represented by an element of $L^\infty(G_2)$; therefore, the assertion follows from Corollary~\ref{cor:2}. Now, let us prove that the mapping \[ \mathcal{M}^1(\sigma(\mathcal{L}_{A_2}))\ni \mu\mapsto \left[g_1 \mapsto (\mu\mathcal{M}_{\mathcal{L}_{A_2}})[\mathcal{K}_{\mathcal{L}'_{ A'}}(m)(g_1,\,\cdot\,)]\right]\in L^1_{\mathcal{L}_{A_1}}(G_1) \] is weakly continuous \emph{on the bounded subsets of $\mathcal{M}^1(\sigma(\mathcal{L}_{A_2}))$}. Indeed,~\cite[Theorem 46.2]{Treves} implies that $L^1(G_1; C_0(\sigma(\mathcal{L}_{A_2})))\cong L^1(G_1)\widehat \otimes C_0(\sigma(\mathcal{L}_{A_2}))$, so that~\cite[Theorem 45.1]{Treves} implies that there are $(c_j)\in \ell^1$ and two bounded sequences $(f_{j}), (\varphi_{j})$ in $L^1(G_1)$ and $C_0(\sigma(\mathcal{L}_{A_2}))$, respectively, such that \[ \left[g_1\mapsto \mathcal{M}_{\mathcal{L}_{A_2}}[\mathcal{K}_{\mathcal{L}'_{ A'}}(m)(g_1,\,\cdot\,)]\right]=\sum_{j\in \mathds{N}} c_j (f_{j}\otimes \varphi_{j}) \] in $L^1(G_1; C_0(\sigma(\mathcal{L}_{A_2})))$. Since the series \[ \sum_{j\in \mathds{N}} c_j \langle \mu, \varphi_j\rangle f_j \] converges uniformly to $g_1 \mapsto (\mu\mathcal{M}_{\mathcal{L}_{A_2}})[\mathcal{K}_{\mathcal{L}'_{ A'}}(m)(g_1,\,\cdot\,)] $ as $\mu$ stays in a bounded subset of $\mathcal{M}^1(\sigma(\mathcal{L}_{A_2}))$, the assertion follows. {\bf3.} Next, observe that by assumption $\mathcal{M}_{\mathcal{L}_{A_1}}$ induces a continuous linear mapping from $L^1_{\mathcal{L}_{A_1}}(G_{1})$ into $C_0(\sigma(\mathcal{L}_{A_1}))$, so that~{\bf2} above implies that the mapping \[ \begin{split} \sigma(\mathcal{L}_{A_2})\ni \lambda_2 \mapsto \mathcal{M}_{\mathcal{L}_{A_1}}\left(g_1\mapsto \mathcal{M}_{\mathcal{L}_{A_2}}[\mathcal{K}_{\mathcal{L}'_{ A'}}(m)(g_1,\,\cdot\,)](\lambda_2) \right)\in C_0(\sigma(\mathcal{L}_{A_1})) \end{split} \] is continuous.
Therefore, the mapping \[ \sigma(\mathcal{L}'_{ A'})\ni (\lambda_1,\lambda_2)\mapsto \mathcal{M}_{\mathcal{L}_{A_1}}\left(g_1\mapsto \mathcal{M}_{\mathcal{L}_{A_2}}[\mathcal{K}_{\mathcal{L}'_{ A'}}(m)(g_1,\,\cdot\,)](\lambda_2) \right)(\lambda_1)\in \mathds{C} \] is continuous; hence, it extends to a continuous mapping $m_0$ on $E_{\mathcal{L}'_{ A'}}$ by~\cite[Corollary to Theorem 2 of Chapter IX, § 4, No.\ 3]{BourbakiGT2}. Now,~{\bf1} implies that $m_0(\lambda_1,\lambda_2)=m(\lambda_1,\lambda_2)$ for $\beta_{\mathcal{L}_{A_1}}$-almost every $\lambda_1\in E_{\mathcal{L}_{A_1}}$ and for $\beta_{\mathcal{L}_{A_2}}$-almost every $\lambda_2\in E_{\mathcal{L}_{A_2}}$. Since both $m$ and $m_0$ are $\beta_{\mathcal{L}'_{ A'}}$-measurable, Tonelli's theorem implies that $m=m_0$ $\beta_{\mathcal{L}'_{ A'}}$-almost everywhere. \end{proof} Now, we focus on property $(S)$. First, we need some definitions. \begin{definition} Let $E$ be a homogeneous group, and let $F$ be a Fréchet space. We shall define $\mathcal{S}(E;F)$ as the set of $\varphi \in \mathcal{E}(E;F)$ such that $(1+\abs{\,\cdot\,})^k X\varphi$ is bounded for every $k\in \mathds{N}$ and for every left-invariant differential operator $X$ on $E$. We shall endow $\mathcal{S}(E;F)$ with the topology induced by the semi-norms \[ \varphi \mapsto \norm*{(1+\abs{\,\cdot\,})^k\norm{X\varphi}_\rho}_\infty \] as $k$ runs through $\mathds{N}$, $X$ runs through the set of left-invariant differential operators on $E$, and $\rho$ runs through the set of continuous semi-norms on $F$. Now, let $C$ be a closed subset of $E$, and let $N_{E,C,F}$ be the set of $\varphi \in \mathcal{S}(E;F)$ which vanish on $C$. Then, we shall define $\mathcal{S}_E(C;F)\coloneqq \quot{\mathcal{S}(E;F)}{N_{E,C,F}}$; we shall omit to denote $E$ when it is clear by the context. We shall simply write $\mathcal{S}_E(C)$ instead of $\mathcal{S}_E(C;\mathds{C})$. \end{definition} \begin{proposition}\label{prop:9} Let $F$ be a Fréchet space over $\mathds{C}$, and $E$ a homogeneous group. Then, the bilinear mapping $\mathcal{S}(E)\times F\ni (\varphi,v)\mapsto [h\mapsto \varphi(h) v]\in \mathcal{S}(E;F)$ induces an isomorphism \[ \mathcal{S}(E)\widehat \otimes F\to \mathcal{S}(E;F). \] \end{proposition} The proof is similar to that of~\cite[Theorem 51.6]{Treves} and is omitted. \begin{proposition}\label{prop:12} Let $E_1,E_2$ be two homogeneous groups, and let $C_1,C_2$ be two closed subspaces of $E_1,E_2$, respectively. Then, $\mathcal{S}_{E_1\times E_2}(C_1\times C_2)$ is canonically isomorphic to $\mathcal{S}_{E_1}(C_1)\widehat \otimes \mathcal{S}_{E_2}(C_2)$. \end{proposition} \begin{proof} Define \[ \Psi_{E,C}\colon \mathcal{S}(E)\ni \varphi \mapsto (\varphi(x))_{x\in C}\in \mathds{C}^C \] for every homogeneous group $E$ and for every closed subspace $C$ of $E$. Then, clearly $N_{E,C,\mathds{C}}$ is the kernel of $\Psi_{E,C,\mathds{C}}$. Now, observe that, with a slight abuse of notation, $\Psi_{E_1\times E_2, C_1\times C_2}=\Psi_{E_1,C_1}\widehat \otimes \Psi_{E_2,C_2}$ (cf.~\cite[Proposition 6 of Chapter I, §1, No.\ 3]{Grothendieck}). Therefore,~\cite[Proposition 3 of Chapter I, § 1, No.\ 2]{Grothendieck} implies that $N_{E_1\times E_2, C_1\times C_2, \mathds{C}}$ is the closed vector subspace of $\mathcal{S}(E_1)\widehat\otimes \mathcal{S}(E_2)$ generated by the tensors of the form $\varphi_1\otimes \varphi_2$, with $\Psi_{E_1,C_1}(\varphi_1)=0$ or $\Psi_{E_2,C_2}(\varphi_2)=0$. 
By the same reference, we see that $N_{E_1\times E_2, C_1\times C_2, \mathds{C}}$ is also the kernel of the canonical projection $\mathcal{S}(E_1)\widehat \otimes \mathcal{S}(E_2)\to \mathcal{S}_{E_1}(C_1)\widehat \otimes \mathcal{S}_{E_2}(C_2)$, so that the assertion follows. \end{proof} \begin{lemma}\label{lem:3} Assume that $\mathcal{A}=\Set{A_1,A_2}$. Take $m\in L^\infty(\beta_{\mathcal{L}'_{ A'}})$ such that $\mathcal{K}_{\mathcal{L}'_{ A'}}(m)\in \mathcal{S}(G)$ and $T\in \mathcal{S}'(G_{A_2})$. Then, \[ \langle T, g_2\mapsto\mathcal{K}_{\mathcal{L}'_{ A'}}(m)(\,\cdot\,,g_2)\rangle\in \mathcal{S}_{\mathcal{L}_{A_1}}(G_{A_1}). \] \end{lemma} The proof is similar to that of Corollary~\ref{cor:2}, the only difference being that here one has to approximate $T$ in $\mathcal{S}'(G_{A_2})$ by a sequence of measures with compact support. \begin{theorem}\label{teo:5} If $\mathcal{L}_A$ satisfies property $(S)$ for every $A\in \mathcal{A}$, then $\mathcal{L}'_{ A'}$ satisfies property $(S)$. \end{theorem} \begin{proof} {\bf1.} Proceeding by induction, we may reduce to the case in which $\mathcal{A}=\Set{A_1,A_2}$. In order to simplify the notation, we shall simply write $G_j$, and $\mathcal{S}(\sigma(\mathcal{L}_{A_j}))$ instead of $G_{A_j}$ and $\mathcal{S}_{E_{\mathcal{L}_{A_j}}}(\sigma(\mathcal{L}_{A_j}))$, respectively, for $j=1,2$. Now, take $m\in L^\infty(\beta_{\mathcal{L}'_{ A'}})$ such that $\mathcal{K}_{\mathcal{L}'_{ A'}}(m)\in\mathcal{S}(G)$. Then, Corollary~\ref{cor:5}, Proposition~\ref{prop:6:2} and Fubini's theorem imply that \[ \mathcal{M}_{\mathcal{L}_{A_1}}[g_1\mapsto\mathcal{M}_{\mathcal{L}_{A_2}}[\mathcal{K}_{\mathcal{L}'_{ A'}}(m)(g_1,\,\cdot\,)](\lambda_2)](\lambda_1)=m(\lambda_1, \lambda_2) \] for $\beta_{\mathcal{L}_{A_1}}$-almost every $\lambda_1\in E_{\mathcal{L}_{A_1}}$ and for $\beta_{\mathcal{L}_{A_2}}$-almost every $\lambda_2\in E_{\mathcal{L}_{A_2}}$. Observe that Lemma~\ref{lem:3} implies that $\mathcal{K}_{\mathcal{L}'_{ A'}}(m)(g_1,\,\cdot\,)\in \mathcal{S}_{\mathcal{L}_{A_2}}(G_{2})$ for every $g_1\in G_{1}$, and that by assumption $\mathcal{M}_{\mathcal{L}_{A_2}}$ induces a continuous linear mapping from $\mathcal{S}_{\mathcal{L}_{A_2}}(G_{2}) $ onto $\mathcal{S}(\sigma(\mathcal{L}_{A_2}))$. Therefore, the map $g_1 \mapsto \mathcal{M}_{\mathcal{L}_{A_2}}[\mathcal{K}_{\mathcal{L}'_{ A'}}(m)(g_1,\,\cdot\,)]$ defines an element of $\mathcal{S}(G_{1}; \mathcal{S}(\sigma(\mathcal{L}_{A_2})))$. {\bf2.} Let us prove that the mapping $g_1 \mapsto \mathcal{M}_{\mathcal{L}_{A_2}}[\mathcal{K}_{\mathcal{L}'_{ A'}}(m)(g_1,\,\cdot\,)]$ belongs to the space $\mathcal{S}_{\mathcal{L}_{A_1}}(G_{1})\widehat \otimes \mathcal{S}(\sigma(\mathcal{L}_{A_2}))$. Take $T\in \mathcal{S}(\sigma(\mathcal{L}_{A_2}))'$; then, Lemma~\ref{lem:3} implies that \[ [g_1 \mapsto (T\mathcal{M}_{\mathcal{L}_{A_2}})[\mathcal{K}_{\mathcal{L}'_{ A'}}(m)(g_1,\,\cdot\,)]] \in \mathcal{S}_{\mathcal{L}_{A_1}}(G_{1}) \] since $T\mathcal{M}_{\mathcal{L}_{A_2}}$ defines an element of $\mathcal{S}_{\mathcal{L}_{A_2}}(G_{2})'$, which can be extended to an element of $\mathcal{S}'(G_2)$. Next, observe that~\cite[Proposition 50.4]{Treves} implies that \[ \mathcal{S}_{\mathcal{L}_{A_1}}(G_{1})\widehat \otimes \mathcal{S}(\sigma(\mathcal{L}_{A_2}))\cong \mathcal{L}(\mathcal{S}(\sigma(\mathcal{L}_{A_2}))'; \mathcal{S}_{\mathcal{L}_{A_1}}(G_{1})) \] since $\mathcal{S}(\sigma(\mathcal{L}_{A_2}))$ is nuclear thanks to~\cite[Proposition 50.1]{Treves}.
Now, the mapping \[ g_1 \mapsto \mathcal{M}_{\mathcal{L}_{A_2}}[\mathcal{K}_{\mathcal{L}'_{ A'}}(m)(g_1,\,\cdot\,)] \] belongs to $\mathcal{S}(G_1; \mathcal{S}(\sigma(\mathcal{L}_{A_2})))$; arguing as above, we see that this latter space is the canonical image of $\mathcal{S}(G_1)\widehat \otimes \mathcal{S}(\sigma(\mathcal{L}_{A_2}))\cong \mathcal{L}(\mathcal{S}(\sigma(\mathcal{L}_{A_2}))'; \mathcal{S}(G_{1}))$, so that the preceding arguments imply our claim. {\bf3.} Now, by assumption $\mathcal{M}_{\mathcal{L}_{A_1}}$ induces a continuous linear map from $\mathcal{S}_{\mathcal{L}_{A_1}}(G_{1})$ into $\mathcal{S}(\sigma(\mathcal{L}_{A_1}))$, so that we have the continuous linear mapping \[ \mathcal{M}_{\mathcal{L}_{A_1}}\widehat \otimes I_{\mathcal{S}(\sigma(\mathcal{L}_{A_2})) } \colon \mathcal{S}_{\mathcal{L}_{A_1}}(G_{1})\widehat \otimes \mathcal{S}(\sigma(\mathcal{L}_{A_2}))\to \mathcal{S}(\sigma(\mathcal{L}_{A_1}))\widehat \otimes \mathcal{S}(\sigma(\mathcal{L}_{A_2})); \] in addition, for every $T\in \mathcal{S}(\sigma(\mathcal{L}_{A_2}))'$ and for every $\lambda_1\in \sigma(\mathcal{L}_{A_1})$, \[ \begin{split} &\langle T, \left(\mathcal{M}_{\mathcal{L}_{A_1}}\widehat \otimes I_{\mathcal{S}(\sigma(\mathcal{L}_{A_2}))}\right)( g_1 \mapsto \mathcal{M}_{\mathcal{L}_{A_2}}[\mathcal{K}_{\mathcal{L}'_{ A'}}(m)(g_1,\,\cdot\,)] )(\lambda_1)\rangle= \mathcal{M}_{\mathcal{L}_{A_1}}[g_1 \mapsto T\mathcal{M}_{\mathcal{L}_{A_2}}[\mathcal{K}_{\mathcal{L}'_{ A'}}(m)(g_1,\,\cdot\,)]](\lambda_1) \end{split} \] (reason as in~{\bf2}). Choosing $T=\delta_{\lambda_2}$ for $\lambda_2\in \sigma(\mathcal{L}_{A_2})$, and taking into account Proposition~\ref{prop:3}, we see that the mapping \[ \sigma(\mathcal{L}'_{ A'})\ni(\lambda_1,\lambda_2)\mapsto \mathcal{M}_{\mathcal{L}_{A_1}}(g_1 \mapsto \mathcal{M}_{\mathcal{L}_{A_2}}[\mathcal{K}_{\mathcal{L}'_{ A'}}(m)(g_1,\,\cdot\,)](\lambda_2))(\lambda_1) \] extends to an element $m_0$ of $\mathcal{S}(E_{\mathcal{L}'_{ A'}})$. Now,~{\bf1} implies that $m_0(\lambda_1,\lambda_2)=m(\lambda_1,\lambda_2)$ for $\beta_{\mathcal{L}_{A_1}}$-almost every $\lambda_1\in E_{\mathcal{L}_{A_1}}$ and for $\beta_{\mathcal{L}_{A_2}}$-almost every $\lambda_2\in E_{\mathcal{L}_{A_2}}$. Since both $m$ and $m_0$ are $\beta_{\mathcal{L}'_{ A'}}$-measurable, Tonelli's theorem implies that $m=m_0$ $\beta_{\mathcal{L}'_{ A'}}$-almost everywhere. The assertion follows. \end{proof} \section{Image Families}\label{sec:8} In this section we shall fix a Rockland family $\mathcal{L}_A$ on a homogeneous group $G$; we consider $\mathcal{L}_A$ as `known' and we study an `image family' $P(\mathcal{L}_A)$, where $P\colon \mathds{R}^A\to \mathds{R}^\Gamma$ is a polynomial mapping with homogeneous components, and $\Gamma$ is a finite set. We shall investigate what we can say about $P(\mathcal{L}_A)$ on the base of our knowledge of $\mathcal{L}_A$. \begin{proposition}\label{prop:1:2}\label{prop:7:4} The following statements are equivalent: \begin{enumerate} \item $P(\mathcal{L}_A)$ is a Rockland family; \item the restriction of $P$ to $\sigma(\mathcal{L}_A)$ is proper, that is, $P(\lambda)\neq 0$ for every $\lambda\in \sigma(\mathcal{L}_A)$ such that $\abs{\lambda}=1$. 
\end{enumerate} In addition, if $P(\mathcal{L}_A)$ is a Rockland family, then: \begin{enumerate} \item[(i)] $\mu_{P(\mathcal{L}_A)}=P_*(\mu_{\mathcal{L}_A})$ and $\sigma(P(\mathcal{L}_A))=P(\sigma(\mathcal{L}_A))$; \item[(ii)] a $\beta_{P(\mathcal{L}_A)}$-measurable function $m\colon E_{P(\mathcal{L}_A)}\to \mathds{C}$ admits a kernel if and only if $m\circ P$ admits a kernel; in this case, \[ \mathcal{K}_{P(\mathcal{L}_A)}(m)= \mathcal{K}_{\mathcal{L}_A}(m\circ P); \] \item[(iii)] $\beta_{P(\mathcal{L}_A)}=P_*(\beta_{\mathcal{L}_A})$. \end{enumerate} \end{proposition} \begin{proof} By spectral theory, $\mu_{P(\mathcal{L}_A)}=P_*(\mu_{\mathcal{L}_A})$ and $\sigma(P(\mathcal{L}_A))=\overline{P(\sigma(\mathcal{L}_A))}$, without further assumptions on $P(\mathcal{L}_A)$. If $P(\mathcal{L}_A)$ is a Rockland family, then also~{\bf(ii)} holds by spectral theory again; as a consequence, also~{\bf(iii)} holds in this case. Then, we are reduced to proving the equivalence of~{\bf1} and~{\bf2}. {\bf 1 $\implies$ 2.} This follows from~\cite[Lemma 3.5.1]{Martini}. {\bf2 $\implies$ 1.} Notice first that the union of the families $\mathcal{L}_A$ and $P(\mathcal{L}_A)$ is clearly Rockland, so that the $P_\gamma(\mathcal{L}_A)$, $\gamma\in \Gamma$, are essentially self-adjoint on $C^\infty_c(G)$ with commuting closures. Let $S$ be the unit sphere of $E_{\mathcal{L}_A}$ corresponding to some homogeneous norm; take $\tau_1\in C^\infty(S)$ such that $\tau_1=1$ on a neighbourhood of $\sigma(\mathcal{L}_A)\cap S$ and such that $\tau_1$ is supported in $\Set{x\in S\colon 2\abs{P(x)}\geqslant \min_{S\cap\sigma(\mathcal{L}_A)} \abs{P}}$. Then, extend $\tau_1$ to a homogeneous function of degree $0$. In addition, take $\tau_2\in C^\infty_c(E_{\mathcal{L}_A})$ so that $\tau_2=1$ on a neighbourhood of $0$. Now, if $m\in \mathcal{S}(E_{P(\mathcal{L}_A)})$, then clearly $[\tau_2+(1-\tau_2)\tau_1] (m\circ P)\in \mathcal{S}(E_{\mathcal{L}_A})$, so that $\mathcal{K}_{P(\mathcal{L}_A)}(m)\in \mathcal{S}(G)$. The assertion follows. \end{proof} \begin{proposition}\label{prop:7:3} Assume that $P(\mathcal{L}_A)$ is a Rockland family, and take a disintegration $(\beta_{\lambda'})_{\lambda'\in E_{P(\mathcal{L}_A)}}$ of $\beta_{\mathcal{L}_A}$ relative to $P$. Then, \[ \chi_{P(\mathcal{L}_A)}(\lambda',g)= \int_{E_{\mathcal{L}_A}} \chi_{\mathcal{L}_A}(\lambda,g)\,\mathrm{d} \beta_{\lambda'}(\lambda) \] for $(\beta_{P(\mathcal{L}_A)}\otimes \nu_G)$-almost every $(\lambda',g)\in E_{P(\mathcal{L}_A)}\times G$. \end{proposition} Notice that the existence of a disintegration follows from~\cite[Theorem 1 of Chapter VI, § 3, No.\ 1]{BourbakiInt1}. Then, the proof amounts to showing that both sides of the asserted equality have the same integrals when multiplied by elements of $C^\infty_c(E_{P(\mathcal{L}_A)})\otimes C^\infty_c(G)$; it is omitted. Now, consider property $(RL)$. Assume that $\mathcal{L}_A$ satisfies property $(RL)$, and take $m\in L^\infty(\beta_{P(\mathcal{L}_A)})$ such that $\mathcal{K}_{P(\mathcal{L}_A)}(m)\in L^1(G)$. Then, there is $\widetilde m\in C_0(E_{\mathcal{L}_A})$ such that $m\circ P=\widetilde m$ $\beta_{\mathcal{L}_A}$-almost everywhere. In Section~\ref{sec:6}, we shall study this situation in a general setting, seeking conditions under which $\widetilde m$ is constant on the fibres of $P$ in $\sigma(\mathcal{L}_A)$. Since $P$ is proper, this implies that $\widetilde m$ is the composite of a continuous function with $P$, at least on $\sigma(\mathcal{L}_A)$.
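By way of illustration only (this example is not needed in the sequel), assume that each $\mathcal{L}_\alpha$ is a positive Rockland operator and that all the $\mathcal{L}_\alpha$ have the same homogeneous degree, and take $P(\lambda)\coloneqq \sum_{\alpha\in A}\lambda_\alpha$. Then $\sigma(\mathcal{L}_A)\subseteq [0,\infty)^A$, so that $P$ does not vanish on $\sigma(\mathcal{L}_A)\cap \Set{\abs{\lambda}=1}$ and Proposition~\ref{prop:7:4} shows that $P(\mathcal{L}_A)=\sum_{\alpha\in A}\mathcal{L}_\alpha$ is a Rockland family; in this case the fibres of $P$ in $\sigma(\mathcal{L}_A)$ are the intersections of $\sigma(\mathcal{L}_A)$ with the affine hyperplanes $\Set{\sum_{\alpha\in A} \lambda_\alpha=c}$, $c\geqslant 0$.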
Notice, however, that sometimes it is more convenient to argue on suitable subsets of the spectrum. Property $(S)$ is studied in a similar way, making use of the results of Sections~\ref{sec:6} and~\ref{sec:7}. \section{Composite Functions: Continuous Functions}\label{sec:6} In this section we consider the following problem: given three Polish spaces $X, Y, Z$, a measure $\mu$ on $X$, a $\mu$-measurable mapping $\pi\colon X\to Y$, and a function $m\colon Y\to Z$ such that $m\circ \pi$ equals $\mu$-almost everywhere a continuous function, does $m$ equal $\pi_*(\mu)$-almost everywhere a continuous function? To this end, we introduce the following definition. \begin{definition} Let $X$ be a Polish space, $Y$ a set, $\mu$ a positive Radon measure on $X$, and $\pi$ a mapping from $X$ into $Y$. We say that two points $x,x'$ of $\Supp{\mu}$ are $(\mu,\pi)$-connected if $\pi(x)=\pi(x')$ and there are $x=x_1,\dots, x_k=x'\in \pi^{-1}(\pi(x))\cap \Supp{\mu}$ such that, for every $j=1,\dots, k-1$, for every neighbourhood $U_j$ of $x_j$ in $\Supp{\mu}$, and for every neighbourhood $U_{j+1}$ of $x_{j+1}$ in $\Supp{\mu}$, the set $ \pi^{-1}(\pi(U_j)\cap \pi(U_{j+1})) $ is not $\mu$-negligible. We say that $\mu$ is $\pi$-connected if any two elements of $ \Supp{\mu}$ having the same image under $\pi$ are $(\mu,\pi)$-connected. \end{definition} Observe that $(\mu,\pi)$-connectedness actually depends only on the equivalence class of $\mu$ and the equivalence relation induced by $\pi$ on $X$. In addition, notice that, if $Y$ is a topological space and $\pi$ is open at some point of each fibre (in the support of $\mu$), then $\mu$ is $\pi$-connected. We emphasize that, in the definition of $(\mu,\pi)$-connectedness, the points $x_1,\dots,x_k$ are fixed \emph{before} considering their neighbourhoods. In other words, if for every neighbourhood $U$ of $x$ in $\Supp{\mu}$ and for every neighbourhood $U'$ of $x'$ in $\Supp{\mu}$ we found $x=x_1,\dots, x_k=x'$ and neighbourhoods $U_j$ of $x_j$ in $\Supp{\mu}$ so that $U=U_1$, $U'=U_k$ and, for every $j=1,\dots, k-1$, the set $\pi^{-1}(\pi(U_j)\cap \pi(U_{j+1})) $ were not $\mu$-negligible, then we would \emph{not} be able to conclude that $x$ and $x'$ are $(\mu,\pi)$-connected. Now we can prove our main result. Notice that, even though its hypotheses are quite restrictive, it still gives rise to important consequences. \begin{proposition}\label{prop:A:7} Let $X,Y,Z$ be three Polish spaces, $\mu$ a positive Radon measure on $X$, and $\pi\colon X\to Y$ a $\mu$-measurable mapping such that $\mu$ is $\pi$-connected. Assume that $\pi$ is $\mu$-proper and that there is a disintegration $(\lambda_y)_{y\in Y}$ of $\mu$ relative to $\pi$ such that $\Supp{\lambda_y}\supseteq\Supp{\mu} \cap \pi^{-1}(y)$ for $\pi_*(\mu)$-almost every $y\in Y$. Take a continuous map $m_0\colon X\to Z$ for which there is a map $m_1\colon Y\to Z$ such that $m_0(x)= (m_1\circ \pi)(x)$ for $\mu$-almost every $x\in X$. Then, there is a $\pi_*(\mu)$-measurable mapping $m_2\colon Y\to Z$ such that $m_0=m_2\circ \pi$ \emph{pointwise} on $\Supp{\mu}$. \end{proposition} If $\pi$ is also proper, then $m_2$ is actually continuous on $\pi(\Supp{\mu})$. \begin{proof} Observe first that there is a $\pi_*(\mu)$-negligible subset $N$ of $Y$ such that $m_1\circ \pi=m_0$ $\lambda_y$-almost everywhere for every $y\in Y\setminus N$. Notice that we may assume that $\Supp{\mu}=X$ and that, if $y\in Y\setminus N$, then the support of $\lambda_y$ contains $\pi^{-1}(y)$.
Since $m_0$ is continuous and since $m_1\circ \pi$ is constant on the support of $\lambda_y$, it follows that $m_0$ is constant on $\pi^{-1}(y)$ for every $y\in Y\setminus N$. Now, take $y\in \pi(X)\cap N$ and $x_1,x_2\in \pi^{-1}(y)$. Let $\mathfrak{U}(x_1)$ and $\mathfrak{U}(x_2)$ be the filters of neighbourhoods of $x_1$ and $x_2$, respectively. Assume first that $\pi(U_1)\cap \pi(U_2) $ is not $\pi_*(\mu)$-negligible for every $U_1\in \mathfrak{U}(x_1)$ and for every $U_2\in \mathfrak{U}(x_2)$. Take $U_1\in \mathfrak{U}(x_1)$ and $U_2\in \mathfrak{U}(x_2)$. Then, there is $y_{U_1,U_2}\in \pi(U_1)\cap \pi(U_2)\setminus N$, and then $x_{h,U_1,U_2}\in U_h\cap \pi^{-1}(y_{U_1,U_2})$ for $h=1,2$. Now, $m_0(x_{1,U_1,U_2})=m_0(x_{2,U_1,U_2})$ for every $U_1\in \mathfrak{U}(x_1)$ and for every $U_2\in \mathfrak{U}(x_2)$. In addition, $x_{h,U_1,U_2}\to x_h$ in $X$ along the product filter of $\mathfrak{U}(x_1)$ and $ \mathfrak{U}(x_2)$. Since $m_0$ is continuous, passing to the limit we see that $m_0(x_1)=m_0(x_2)$. Since $\mu$ is $\pi$-connected, this implies that $m_0$ is constant on $ \pi^{-1}(y)$ for \emph{every} $y\in \pi(X)$. The assertion follows. \end{proof} In the following proposition we give sufficient conditions in order that a measure be connected. \begin{proposition}\label{prop:A:8} Let $E_1,E_2$ be two finite-dimensional vector spaces, $L\colon E_1\to E_2$ a linear mapping, $C$ a closed convex subset of $E_1$ and $\mu$ a positive Radon measure on $E_1$ with support $C$. Take a Polish subspace $X$ of $E_1$ so that $\mu(E_1\setminus X)=0$. Then, $\mu_X$ is $\restr{L}{X}$-connected. \end{proposition} Actually, there is no need that $X$ be a Polish space, but we did not consider Radon measures on more general Hausdorff spaces. \begin{proof} We may assume that $C$ has non-empty interior. Then, we may find a bounded convex open subset $U$ of $ C$ and a convex open neighbourhood $V$ of $0$ in $\ker L$ such that $U+V\subseteq C$. Take $r\in ]0,1]$ and $x,y\in C\cap X$ such that $y-x\in V$; take $R_x>0$ so that $U\subseteq B(x,R_x)$. Then, for every $u\in U$ we have $y+ r(u-x)\in B(y,r R_x)\cap [y,y-x+u]\subseteq B(y,r R_x)\cap C$; analogously, $x+r(U-x)\subseteq B(x,r R_x)\cap C$. Since $L(x)=L(y)$, we infer that \[ L^{-1}(L( B(x, r R_x)\cap C \cap X)\cap L(B(y,r R_x)\cap C\cap X))\supseteq [x+r(U-x)]\cap X. \] Now, $x+r(U-x)$ is a non-empty open subset of $C=\Supp{\mu}$, so that $\mu_X([x+r(U-x)]\cap X)=\mu(x+r(U-x))>0$. The arbitrariness of $r$ then implies that $x$ and $y$ are $(\mu,L)$-connected. The assertion follows easily. \end{proof} Now we present a result on the disintegration of Hausdorff measures, which is particularly useful to check the assumptions of Proposition~\ref{prop:A:7}. It is a corollary of~\cite[Theorem 3.2.22]{Federer}; we omit the proof and refer to~\cite{Federer} for any unexplained notation. \begin{proposition}\label{prop:A:6} Let, for $j=1,2$, $E_j$ be an $\mathcal{H}^{k_j}$-measurable and countably $\mathcal{H}^{k_j}$-rectifiable subset of $\mathds{R}^{n_j}$. Assume that $k_2\leqslant k_1$, and let $P$ be a locally Lipschitz mapping of $E_1$ into $E_2$. Take a positive function $f\in L^1_\mathrm{loc}(\chi_{E_1}\cdot \mathcal{H}^{k_1})$ and assume that $f(x)\,\mathrm{ap\,} J_{k_2} P(x)\neq 0$ for $\mathcal{H}^{k_1}$-almost every $x\in E_1$, and that $P$ is $(f\cdot \mathcal{H}^{k_1})$-proper.
Then, the following hold: \begin{enumerate} \item the mapping \[ g\colon\mathds{R}^{n_2}\ni y \mapsto \int_{P^{-1}(y)} \frac{f}{\mathrm{ap\,} J_{k_2} P}\,\mathrm{d} \mathcal{H}^{k_1-k_2} \] is well-defined $\mathcal{H}^{k_2}$-almost everywhere and measurable; in addition, \[ P_*(f\cdot \mathcal{H}^{k_1})= g\cdot \mathcal{H}^{k_2}; \] \item the measure \[ \beta_y\coloneqq \frac{1}{g(y)}\frac{f}{\mathrm{ap\,} J_{k_2} P} \chi_{P^{-1}(y)}\cdot \mathcal{H}^{k_1-k_2} \] is well-defined and Radon for $P_*(f\cdot \mathcal{H}^{k_1})$-almost every $y\in \mathds{R}^{n_2}$; in addition, $(\beta_y)$ is a disintegration of $f\cdot \mathcal{H}^{k_1}$ relative to $P$; \item $\beta_y$ is equivalent to $\chi_{P^{-1}(y)}\cdot \mathcal{H}^{k_1-k_2} $ for $P_*(f\cdot \mathcal{H}^{k_1})$-almost every $y\in E_2$. \end{enumerate} \end{proposition} Notice that, if $E_1$ is a submanifold of $\mathds{R}^{n_1}$ and $P$ is of class $C^1$, then $\mathrm{ap\,} J_{k_2}P(x)$ is simply $\norm{\bigwedge^{k_2} T_x (P)}$ for every $x\in E_1$. \section{Composite Functions: Schwartz Functions}\label{sec:7} In this section we shall extend some results on composite differentiable functions by E.\ Bierstone, P.\ Milman and G.\ W.\ Schwarz to the case of Schwartz functions by means of techniques developed by F.\ Astengo, B.\ Di Blasio and F.\ Ricci. We shall take advantage of the remarkable works of Bierstone, Milman and Schwarz on the composition of smooth functions on analytic manifolds, and we shall refer to~\cite{BierstoneMilman,BierstoneMilman2,BierstoneSchwarz} for any unexplained definition, in particular for the notion of (Nash) subanalytic sets. As a matter of fact, in the applications we shall only need to know that any convex subanalytic set is automatically Nash subanalytic, since it is contained in an affine space of the same dimension, and that semianalytic sets are Nash subanalytic (cf.~\cite[Proposition 2.3]{BierstoneMilman}). Our starting point is the following result (cf.~\cite[Theorem 0.2]{BierstoneMilman} and~\cite[Theorem 0.2.1]{BierstoneSchwarz}). Here, $\mathcal{E}(\mathds{R}^m)$ denotes the set of $C^\infty$ functions on $\mathds{R}^m$ endowed with the topology of locally uniform convergence of all derivatives; $\mathcal{E}_{\mathds{R}^n}(C)$ is the quotient of $\mathcal{E}(\mathds{R}^n)$ by the space of $C^\infty$ functions which vanish on $C$. \begin{theorem}\label{teo:8} Let $C$ be a closed subanalytic subset of $\mathds{R}^n$ and let $P\colon \mathds{R}^n\to \mathds{R}^m$ be an analytic mapping. Assume that $P$ is proper on $C$ and that $P(C)$ is Nash subanalytic. Then, the canonical mapping \[ \Phi\colon\mathcal{E}(\mathds{R}^m)\ni \varphi\mapsto \varphi\circ P\in \mathcal{E}_{\mathds{R}^n}(C) \] has a closed range, and admits a continuous linear section defined on $\Phi(\mathcal{E}(\mathds{R}^m))$. In addition, $\psi\in \mathcal{E}_{\mathds{R}^n}(C)$ belongs to the image of $\Phi$ if and only if for every $y\in \mathds{R}^m$ there is $\varphi_y\in \mathcal{E}(\mathds{R}^m)$ such that, for every $x\in C$ with $P(x)=y$, the Taylor series of $\varphi_y\circ P$ and $\psi$ at $x$ differ by the Taylor series of a function of class $C^\infty$ which vanishes on $C$. \end{theorem} In order to simplify the notation, we shall simply say that $\psi $ is a formal composite of $P$ if the second condition of the statement holds. Now, we are particularly interested in the case of Schwartz functions.
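As a guiding example, which we mention only for illustration, one may take $n=m=1$, $C=\mathds{R}$ and $P(x)=x^2$; in this case a function $\psi\in \mathcal{E}(\mathds{R})$ is a formal composite of $P$ precisely when it is even, and Theorem~\ref{teo:8} reduces to Whitney's classical theorem that every even function of class $C^\infty$ on $\mathds{R}$ is of the form $\varphi(x^2)$ for some $\varphi\in \mathcal{E}(\mathds{R})$. The corresponding statement for even Schwartz functions is a special case of Theorem~\ref{teo:7} below.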
The strategy developed in~\cite{AstengoDiBlasioRicci2} is the following: one first decomposes dyadically a given Schwartz function in the sum of dilates of a family of test functions with a suitable decay; then, one applies the section given by Theorem~\ref{teo:8}, truncates the resulting functions (so that they are still test functions), and finally sums their dilates. In order to do that, however, one needs homogeneity. \begin{theorem}\label{teo:7} Let $P\colon \mathds{R}^n\to \mathds{R}^m$ be a polynomial mapping, and assume that $\mathds{R}^n$ and $\mathds{R}^m$ are endowed with dilations such that $P(r\cdot x)=r\cdot P(x)$ for every $r>0$ and for every $x\in \mathds{R}^n$. Let $C$ be a dilation-invariant subanalytic closed subset of $\mathds{R}^n$, and assume that $P$ is proper on $C$ and that $P(C)$ is Nash subanalytic. Then, the canonical mapping \[ \Phi\colon\mathcal{S}(\mathds{R}^m)\ni \varphi\mapsto \varphi\circ P\in \mathcal{S}_{\mathds{R}^n}(C) \] has a closed range and admits a continuous linear section defined on $\Phi(\mathcal{S}(\mathds{R}^m))$. In addition, $\psi\in \mathcal{S}_{\mathds{R}^n}(C)$ belongs to the image of $\Phi$ if and only if it is a formal composite of $P$. \end{theorem} As a matter of fact, in our applications $\mathds{R}^n$ will be $E_{\mathcal{L}_A}$, while $\mathds{R}^m$ will be $E_{P(\mathcal{L}_A)}$; $C$ will be (a subset of) $\sigma(\mathcal{L}_A)$. Then, Theorem~\ref{teo:7} gives some sufficient conditions in order that some $f\in \mathcal{S}_{P(\mathcal{L}_A)}(G)$, which has a Schwartz multiplier on $E_{\mathcal{L}_A}$, should have a Schwartz multiplier on $E_{P(\mathcal{L}_A)}$ (cf.~Section~\ref{sec:8}). Notice, however, that sometimes it is convenient to take $C$ so as to be a portion of $\sigma(\mathcal{L}_A)$ such that $P(C)=\sigma(P(\mathcal{L}_A))$, since $\sigma(\mathcal{L}_A)$ need \emph{not} be subanalytic. \begin{proof} For the first assertion, simply argue as in the proof of~\cite[Theorem 6.1]{AstengoDiBlasioRicci2} replacing the linear section provided by Schwarz and Mather with that of Theorem~\ref{teo:8}. As for the second part of the statement, notice first that it follows easily from Theorem~\ref{teo:8} when $\psi$ is compactly supported; since the image of $\Phi$ is closed, it follows by approximation in the general case. \end{proof} In the following result we give a simple but very useful application of Theorem~\ref{teo:7}. \begin{corollary}\label{cor:A:7} Let $V$ and $W$ be two finite-dimensional vector spaces, $C$ a subanalytic closed convex cone in $V$, and $L$ a linear mapping of $V$ into $W$ which is proper on $C$. Take $m_1\in \mathcal{S}(V)$ and assume that there is $m_2\colon W\to \mathds{C}$ such that $m_1=m_2\circ L$ on $C$. Then, there is $m_3\in \mathcal{S}(W)$ such that $m_1=m_3\circ L$ on $C$. \end{corollary} \begin{proof} Observe first that we may assume $C$ has vertex $0$ and has non-empty interior. Observe, by the way, that $L(C)$ is subanalytic (cf.~\cite[Theorem 0.1 and Proposition 3.13]{BierstoneMilman2}), hence Nash subanalytic. Now, fix $x\in C$. Since the interior of $C$ is non-empty, it is clear that $C$ is a total subset of $V$, so that we may find a free family $(v_j)_{j\in J}$ in $C$ which generates an algebraic complement $V'$ of $\ker L$ in $V$. In addition, since either $x=0$ or $x\not \in \ker L$, we may assume that $x\in V'$. Let $L'\colon W\to V$ be the composite of the inverse of the restriction of $L$ to $V'$ with the natural immersion of $V'$ in $V$. Then, $L'$ is a linear section of $L$. 
Define $m'\coloneqq m_1\circ L'$, so that $m'\in \mathcal{E}(W)$. Next, define $C'\coloneqq V'\cap C$, so that $C'$ is a closed convex cone with non-empty interior in $V'$, since it contains the non-empty open set $\sum_{j\in J} \mathds{R}_+^* v_j$. Take $z\in C'$ and any $y\in C\cap[x+\ker L]$. Then, $x+z=(L'\circ L) (x+z)=(L' \circ L )(y+z)$, so that $m_1=m'\circ L$ on $y+C'$. Since $m_1$ is constant on the intersections of $C$ with the translates of $\ker L$, the same holds on $ C\cap (y+C'+\ker L)$. Now, denote by $\open{C'}$ the interior of $C'$ in $V'$. Then, $y+\open{C'}+\ker L$ is an open convex set and $y$ is adherent to $C\cap (y+\open{C'}+\ker L)$, so that the Taylor polynomials of every fixed order of $m_1$ and $m'\circ L$ about $y$ coincide on $C\cap (y+\open{C'}+\ker L)$, hence on $V$. Since this holds for every $y\in C\cap [x+\ker L]$, Theorem~\ref{teo:7} implies that there is $m_3\in \mathcal{S}(W)$ such that $m_1=m_3\circ L$ on $C$. \end{proof} \section{Quadratic Operators on $2$-Step Stratified Groups} A connected Lie group $G$ is called $2$-step nilpotent if $[\mathfrak{g},[\mathfrak{g},\mathfrak{g}]]=0$, where $\mathfrak{g}$ is the Lie algebra of $G$. The group $G$ is $2$-step stratified if, in addition, it is simply connected and $\mathfrak{g}=\mathfrak{g}_1\oplus \mathfrak{g}_2$, with $[\mathfrak{g}_1,\mathfrak{g}_1]=[\mathfrak{g},\mathfrak{g}]=\mathfrak{g}_2$. Notice that, if $G$ is a simply connected $2$-step nilpotent group, then it is `stratifiable,' that is, for every algebraic complement $\mathfrak{g}_1$ of $\mathfrak{g}_2\coloneqq [\mathfrak{g},\mathfrak{g}]$, the decomposition $\mathfrak{g}=\mathfrak{g}_1 \oplus \mathfrak{g}_2$ turns $G$ into a stratified group. Nevertheless, $G$ may be endowed with many different structures of a stratified group; when we speak of a $2$-step stratified group, we then mean that an algebraic complement of $[\mathfrak{g},\mathfrak{g}]$ is fixed. A $2$-step stratified group is endowed with the canonical dilations, that is $r\cdot (X+Y)=r X+ r^2 Y$ for every $r>0$, for every $X\in \mathfrak{g}_1$ and for every $Y\in \mathfrak{g}_2$. Thus, $G$ becomes a homogeneous group. \begin{definition} Let $G$ be a $2$-step stratified group. Then, for every $\omega\in \mathfrak{g}_2^*$ we shall define \[ B_\omega\colon \mathfrak{g}_1\times \mathfrak{g}_1\ni(X,Y)\mapsto \langle \omega, [X,Y]\rangle. \] Then, $G$ is an $MW^+$ group if $B_\omega$ is non-degenerate for some $\omega\in \mathfrak{g}_2^*$ (cf.~\cite{MooreWolf} and also~\cite{MullerRicci}). A Heisenberg group is an $MW^+$ group with one-dimensional centre. \end{definition} \begin{definition} Take $d\in \mathds{N}^*$, and let $\mathfrak{g}$ be the free Lie algebra on $d$ generators. Then, the quotient $\mathfrak{g}'$ of $\mathfrak{g}$ by its ideal $[\mathfrak{g}, [\mathfrak{g},\mathfrak{g}]]$ is the free $2$-step nilpotent Lie algebra on $d$ generators. The simply connected Lie group with Lie algebra $\mathfrak{g}'$ is called the free $2$-step nilpotent Lie group on $d$ generators. \end{definition} Now, to every symmetric bilinear form $Q$ on $\mathfrak{g}_1^*$ we can associate a differential operator on $G$ as follows: \[ \mathcal{L}\coloneqq -\sum_{\ell,\ell'} Q(X_\ell^*, X_{\ell'}^*) X_{\ell} X_{\ell'}, \] where $(X_\ell)$ is a basis of $\mathfrak{g}_1$ with dual basis $(X^*_\ell)$. 
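For instance (a purely illustrative special case), if $G$ is the three-dimensional Heisenberg group, with $\mathfrak{g}_1$ spanned by $X,Y$ and $\mathfrak{g}_2$ spanned by $T=[X,Y]$, and if $Q$ is the symmetric bilinear form on $\mathfrak{g}_1^*$ for which $(X^*,Y^*)$ is orthonormal, then the associated operator is $\mathcal{L}=-(X^2+Y^2)$, the standard sub-Laplacian.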
As the reader may verify, $\mathcal{L}$ does not depend on the choice of $(X_\ell)$; actually, one may prove that $-\mathcal{L}$ is the symmetrization of the quadratic form induced by $Q$ on $\mathfrak{g}^*$ (cf.~\cite[Theorem 4.3]{Helgason}). \begin{lemma}\label{lem:10:10} Let $Q$ be a symmetric bilinear form on $\mathfrak{g}_1^*$, and let $\mathcal{L}$ be the associated operator. Then, $\mathcal{L}$ is formally self-adjoint if and only if $Q$ is real. In addition, $\mathcal{L}$ is formally self-adjoint and hypoelliptic if and only if $Q$ is non-degenerate and either positive or negative. \end{lemma} \begin{proof} The first assertion follows from the fact that the formal adjoint of $\mathcal{L}$ is associated with $\overline Q$. The last assertion then follows from~\cite{Hormander3}. \end{proof} Next, we show how to put $\mathcal{L}$ in a particularly convenient form according to the chosen $\omega\in \mathfrak{g}_2^*$. \begin{definition} Let $V$ be a vector space and $\Phi$ a bilinear form on $V$. Then, we shall define \[ \mathrm{d}_\Phi\colon V\ni v \mapsto \Phi(\,\cdot\,,v)\in V^*. \] \end{definition} Notice that any algebraic complement of the radical of a skew-symmetric bilinear form on a finite-dimensional vector space is symplectic. Therefore, by~\cite[Corollary 5.6.3]{AbrahamMarsden} we deduce the following result. \begin{proposition}\label{prop:10:8} Let $V$ be a finite-dimensional vector space over $\mathds{R}$, let $\sigma$ be a skew-symmetric bilinear form on $V$, and let $Q$ be a positive, non-degenerate bilinear form on $V$. Then, there are a basis $(v_j)_{j=1,\dots,m}$ of $V$ and a positive integer $n\leqslant \frac{m}{2}$ such that the following hold: \begin{itemize} \item $Q(v_j,v_j)=Q(v_{n+j},v_{n+j})>0$ for every $j=1,\dots, n$; \item $Q(v_j,v_k)=0$ for every $j,k\in \Set{1,\dots,m}$ such that $j\neq k$ and either $j\leqslant 2 n$ or $k\leqslant 2 n$; \item for every $j,k=1,\dots, m$, \[ \sigma(v_j,v_k)=\begin{cases} 1 & \text{if $j\in \Set{1,\dots,n}$ and $k=n+j$};\\ -1 & \text{if $j\in \Set{n+1,\dots,2 n}$ and $k=j-n$};\\ 0 & \text{otherwise}. \end{cases} \] \end{itemize} \end{proposition} Observe that $Q(v_j,v_j)$ is the eigenvalue of $\abs{\mathrm{d}_{Q}^{-1}\circ \mathrm{d}_\sigma }$ corresponding to $v_j$, where the absolute value is computed with respect to $Q$. \section{Plancherel Measure and Integral Kernel}\label{sec:11} In this section, $G$ denotes a $2$-step stratified group of dimension $n$ which does \emph{not} satisfy the $MW^+$ condition, $Q$ a symmetric bilinear form on $\mathfrak{g}_1^*$, and $(T_1,\dots, T_{n_2})$ a basis of $\mathfrak{g}_2$. We shall denote by $\mathcal{L} $ the sub-Laplacian induced by $Q $ and we shall assume that $\mathcal{L}_A\coloneqq (\mathcal{L}, (-i T_k)_{k=1,\dots,n_2})$ is a Rockland family, that is, that $\mathcal{L}$ is a hypoelliptic sub-Laplacian, up to a sign. Indeed, if $\pi_0$ is the projection of $G$ onto its abelianization, then $\mathrm{d} \pi_0(\mathcal{L}_A)$ is a Rockland family, so that $\mathcal{F}(\mathrm{d} \pi_0(\mathcal{L}_A))$ vanishes only at $0$. Since $\mathrm{d} \pi_0(T_k)=0$ for every $k=1,\dots, n_2$, this implies that $Q$ is non-degenerate and either positive or negative; hence, $\mathcal{L}$ is a hypoelliptic sub-Laplacian, up to a sign. We may then assume that $Q$ is positive and non-degenerate. We shall also endow $\mathfrak{g}$ with a scalar product for which $\mathfrak{g}_1$ and $\mathfrak{g}_2$ are orthogonal, and which induces $\widehat Q$ on $\mathfrak{g}_1$.
Then, we may endow $\mathfrak{g}$ with the translation-invariant measure $\mathcal{H}^n$; up to a normalization, \emph{we may then assume that $(\exp_G)_*(\mathcal{H}^n)$ is the chosen Haar measure on $G$.} We shall endow $\mathfrak{g}_2^*$ with the scalar product induced by that of $\mathfrak{g}_2$, and then with the corresponding Lebesgue measure. Define \[ J_{Q, \omega}\coloneqq \mathrm{d}_{Q}\circ \mathrm{d}_{B_\omega}\colon \mathfrak{g}_1 \to \mathfrak{g}_1 \] for every $\omega\in \mathfrak{g}_2^*$, and define $d\coloneqq \min_{\omega\in \mathfrak{g}_2^*}\dim \ker \mathrm{d}_{B_\omega}$, so that $d>0$ since $G$ is not an $MW^+$ group. We denote by $W$ the set of $\omega\in \mathfrak{g}_2^*$ such that $\dim \ker \mathrm{d}_{B_\omega}>d$. Define $n_1\coloneqq \frac{1}{2}(\dim \mathfrak{g}_1-d)$, and observe that $n_1=0$ if and only if $G$ is abelian. We denote by $\Omega$ the set of $\omega\in \mathfrak{g}_2^*\setminus W$ where $\card\left(\sigma( \abs{J_{Q,\omega} } )\setminus\Set{0}\right)$ attains its maximum $\overline h$. As the next lemma shows, if $G$ is not abelian, then $\Omega$ is open and dense. \begin{lemma}\label{lem:2} The sets $W$ and $\mathfrak{g}_2^*\setminus \Omega$ are algebraic varieties. \end{lemma} As the proof shows, the multiplicities of the eigenvalues are constant on $\Omega$. \begin{proof} Define $P_{\omega}$ so that $X^d P_{\omega}(X)$ is the characteristic polynomial of $-J_{Q,\omega}^2$. Then, it is clear that $W$ is the zero locus of the polynomial mapping $\omega \mapsto P_{\omega}(0)$, so that it is an algebraic variety. Next, take $k\in \Set{1,\dots, n_1}$ and let $\mathfrak{P}_k$ be the set of partitions of $\Set{1,\dots, n_1}$ into $k$ non-empty sets. Define \[ P_k(X_{1},\dots, X_{n_1})\coloneqq \prod_{\mathcal{K}\in \mathfrak{P}_k} \sum_{K\in \mathcal{K}} \sum_{k_1,k_2\in K} (X_{k_1}-X_{k_2})^2, \] so that $P_k$ is a $\mathfrak{S}_{n_1}$-invariant polynomial. Take $\widetilde \mu_{1,\omega},\dots, \widetilde \mu_{n_1,\omega}\geqslant 0$ so that the eigenvalues of $J_{Q,\omega}$ are $0,\dots, 0, \pm i \widetilde \mu_{1,\omega},\dots, \pm i \widetilde \mu_{n_1,\omega}$ for every $\omega\in \mathfrak{g}_2^*$. Now, the mapping $\omega \mapsto P_k(\widetilde \mu_{1,\omega}^2,\dots, \widetilde \mu_{n_1,\omega}^2)$ is a $\mathfrak{S}_{n_1}$-invariant polynomial mapping in the roots of the polynomial $P_{\omega}$; hence, it is a polynomial mapping (cf.~\cite[Theorem 1 of Chapter IV, § 6, No.\ 1]{BourbakiA2}). Therefore, the set of $\omega\in \mathfrak{g}_2^*$ such that $P_k(\widetilde \mu_{1,\omega}^2,\dots, \widetilde \mu_{n_1,\omega}^2)=0$ is an algebraic variety $W_k$. In addition, it is clear that $\Omega$ is the complement of $W\cup W_{\overline h-1}$, so that it is open in the Zariski topology.
\end{proof} \begin{proposition}\label{prop:11:5} There are four analytic mappings \begin{align*} &\mu\colon \Omega \to (\mathds{R}_+^*)^{\overline{h}} & &P\colon \Omega \to \mathcal{L}(\mathfrak{g}_1)^{\overline{h}} & &P_0\colon \mathfrak{g}_2^*\setminus W\to \mathcal{L}(\mathfrak{g}_1) & &\rho\colon \Omega \to \Set{1,\dots, \overline h}^{n_1} \end{align*} such that the following hold: \begin{itemize} \item the mapping \[ \Omega \ni \omega \mapsto \mu_{\rho_{k,\omega},\omega}\in \mathds{R}_+ \] extends to a continuous mapping $\omega \mapsto \widetilde \mu_{k,\omega}$ on $\mathfrak{g}_2^*$ for every $k=1,\dots, n_1$; \item for every $h=0,\dots, \overline h$ and for every $\omega\in \Omega$ (for every $\omega\in \mathfrak{g}_2^*\setminus W$, if $h=0$), $P_{h,\omega}$ is a $B_\omega$- and $\widehat Q$-self-adjoint projector of $\mathfrak{g}_1$; \item if $h=1,\dots, \overline h$ and $\omega\in \Omega$, then $\tr P_{h,\omega}=2 \card( \Set{k\in \Set{1,\dots, n_1}\colon \rho_{k,\omega}=h} )$; \item $\sum_{h=0}^{\overline h} P_{h,\omega}=I_{\mathfrak{g}_1}$ and $\sum_{h=1}^{\overline h} \mu_{h,\omega} P_{h,\omega}= \abs{J_{Q, \omega}}$ for every $\omega\in \Omega$; \item $P_{0,\omega}(\mathfrak{g}_1)=\ker \mathrm{d}_{B_\omega}$ for every $\omega\in \mathfrak{g}_2^*\setminus W$. \end{itemize} \end{proposition} The proof is omitted, since it basically consists of straightforward generalizations of the arguments of~\cite[§ 1.3--4 and § 5.1 of Chapter II]{Kato}. \begin{definition} We define $\mu, \widetilde \mu, P$ and $P_0$ as in Proposition~\ref{prop:11:5}. In addition, we define $\vect{n_{1}}\colon \Omega \to (\mathds{N}^*)^{\overline{h}} $ so that $n_{1,h,\omega}=\frac{1}{2}\tr P_{h,\omega}$ for every $h=1,\dots, \overline{h} $ and for every $\omega\in \Omega$. Furthermore, we shall sometimes identify $\mu_\omega$ with the linear mapping \[ \mathds{R}^{\overline{h}} \ni \lambda \mapsto \sum_{h=1}^{\overline{h} }\mu_{h,\omega} \lambda_h\in \mathds{R} \] for every $\omega\in \Omega$. Analogous notation for $\widetilde \mu_\omega$. \end{definition} With the above notation, we have $\mu_{\omega}(\vect{n_{1,\omega}})=\sum_{h=1}^{\overline{h} }\mu_{h,\omega} n_{1,h,\omega}$. Observe, in addition, that the index $1$ in $\vect{n_1}$ refers to the first layer $\mathfrak{g}_1$, just as the index $2$ in $n_2$ refers to the second layer $\mathfrak{g}_2$. \begin{corollary}\label{cor:11:2} The function $\omega \mapsto \mu_{\omega}(\vect{n_{1,\omega}})=\widetilde \mu_{\omega}(\vect{1}_{n_1})$ is a norm on $\mathfrak{g}_2^*$ which is analytic on $\mathfrak{g}_2^*\setminus W$. \end{corollary} \begin{proof} Observe that \[ 2\mu_{\omega}(\vect{n_{1,\omega}})= \norm{ J_{Q , \omega} }_1=\norm{J_{Q,\omega}+P_{0,\omega}}_1-d \] for every $\omega\in \mathfrak{g}_2^*$, and that the linear mapping $\omega \mapsto J_{Q, \omega}$ is one-to-one since $G$ is stratified. The assertion follows. \end{proof} \begin{definition} By an abuse of notation, we shall denote by $(x,t)$ the elements of $G$, where $x\in \mathfrak{g}_1$ and $t\in \mathfrak{g}_2$, thus identifying $(x,t)$ with $\exp_G(x,t)$. For every $x\in \mathfrak{g}_1$ and for every $\omega\in \mathfrak{g}_2^*\setminus W$, we shall define \[ x_{0,\omega}\coloneqq P_{0,\omega} (x), \] while, for every $\omega\in \Omega$ and for every $h=1,\dots, \overline{h} $, \[ x_{h,\omega}\coloneqq \sqrt{\mu_{h,\omega}}P_{h,\omega} (x). 
\] By an abuse of notation, we shall write $x_\omega$ instead of $\sum_{h=1}^{\overline h}x_{h,\omega}$, so that $\abs{x_{\omega}}=\left( \sum_{h=1}^{\overline{h} } \abs{x_{h,\omega}}^2\right)^{\sfrac{1}{2}}$. \end{definition} \begin{proposition}\label{prop:11:1} The mapping \[ \mathfrak{g}_1 \times \Omega \ni (x,\omega)\mapsto \sum_{h=1}^{\overline{h} } x_{h,\omega} \] extends uniquely to a continuous function on $\mathfrak{g}_1\times\mathfrak{g}_2^*$ which is analytic on $\mathfrak{g}_1 \times (\mathfrak{g}_2^*\setminus W)$. \end{proposition} \begin{proof} Observe that, for every $\omega\in \mathfrak{g}_2^*$, $-J_{Q,\omega}^2=J_{Q,\omega}^*J_{Q,\omega}$ is positive, and that \[ -J_{Q,\omega}^2+P_{0,\omega} \] is positive and non-degenerate as long as $\omega\not \in W$. Therefore, the mapping \[ \omega\mapsto \sqrt[4]{-J_{Q,\omega}^2}=\sqrt[4]{-J_{Q,\omega}^2+P_{0,\omega}}- P_{0,\omega}\in \mathcal{L}(\mathfrak{g}_1) \] is continuous on $\mathfrak{g}_2^*$ and analytic on $\mathfrak{g}_2^*\setminus W$ thanks to~\cite[Proposition 10 of Chapter I, § 4, No.\ 8]{BourbakiTS}.\footnote{For what concerns continuity, just observe that $\sqrt[4]{\,\cdot\,}$ is continuous on the cone of positive endomorphisms of $\mathfrak{g}_1$, which is the closure of the cone of non-degenerate positive endomorphisms of $\mathfrak{g}_1$, as in~\cite[p.\ 85]{Hormander2}.} Then, it suffices to observe that \[ \sqrt[4]{-J_{Q,\omega}^2}(x)= \sum_{h=1}^{\overline{h}} x_{h,\omega} \] for every $\omega\in \Omega$ and for every $x\in \mathfrak{g}_1$. \end{proof} \begin{definition} Define $G_\omega$, for every $\omega\in \mathfrak{g}_2^*$, as the quotient of $G$ by its normal subgroup $\exp_G(\ker \omega)$. Then, $G_0$ is the abelianization of $G$, and we identify it with $\mathfrak{g}_1$. If $\omega\neq 0$, then we shall identify $G_\omega$ with $\mathfrak{g}_1 \oplus \mathds{R}$, endowed with the product \[ (x_1,t_1) (x_2,t_2)\coloneqq\left(x_1+x_2, t_1+t_2+\frac{1}{2}B_\omega(x_1,x_2) \right) \] for every $x_1,x_2\in \mathfrak{g}_1$ and for every $t_1,t_2\in \mathds{R}$. Hence, \[ \pi_\omega(x,t)=(x,\omega(t)) \] for every $(x,t)\in G$. \end{definition} \begin{definition} For every $\omega\in \mathfrak{g}_2^*\setminus W$, define $\abs{\mathfrak{P}aff(\omega)}\coloneqq \prod_{h=1}^{\overline{h}} \mu_{h,\omega}^{n_{1,h,\omega}}$, the Pfaffian of $\omega$ (cf.~\cite{AstengoCowlingDiBlasioSundari}). \end{definition} Now we are in a position to find the Plancherel measure and the integral kernel associated with $\mathcal{L}_A$. This is done by means of the explicit knowledge of the Plancherel and inversion formulae of $G$ (cf.~\cite{AstengoCowlingDiBlasioSundari}) and the following weak version of Poisson's formula (cf.~\cite[Proposition 5.4]{MartiniRicciTolomeo} for a proof in a slightly different setting). \begin{proposition}\label{prop:2:9} Let $\mathcal{L}'_{A'}$ be a Rockland family on a homogeneous group $G'$, and take $m\in L^\infty(\beta_{\mathcal{L}'_{A'}})$ such that $\mathcal{K}_{\mathcal{L}'_{A'}}(m)\in L^1(G')$. Then,\footnote{If $f\in L^1(G)$ and $\pi$ is an irreducible unitary representation of $G$, then $\mathcal{F}(f)(\pi)\coloneqq\int_G f(x) \pi(x^{-1})\,\mathrm{d} x=\pi^*(f)$. } \[ \mathcal{F}(\mathcal{K}_{\mathcal{L}'_{A'}}(m))(\pi)=m(\mathrm{d} \pi(\mathcal{L}'_{A'})) \] for almost every $[\pi]$ in the dual of $G'$.
\end{proposition} Before we state the next result, where we find relatively explicit formulae for the Plancherel measure and the integral kernel associated with $\mathcal{L}_A$, let us briefly comment on our techniques. Thanks to the form of the Plancherel formula for $G$ (see~\cite{AstengoCowlingDiBlasioSundari}), we may basically reduce to studying $\mathrm{d} \pi_\omega(\mathcal{L}_A)$ for $\omega \neq 0$, or only for $\omega\in \Omega$. Therefore, the analysis of $\mathcal{L}_A$ is basically reduced to the case in which $n_2=1$. If $G$ is actually a Heisenberg group, then the Plancherel formula only involves the Bargmann-Fock representations $\pi_\lambda$ ($\lambda\neq 0$), and it is well-known that $\mathrm{d}\pi_\lambda(\mathcal{L}_A)$ has an orthonormal basis of eigenfunctions such that the corresponding functions of positive type on $G$ are suitable Laguerre functions (cf.~\cite{HulanickiRicci}). When $G$ has a higher-dimensional centre (as in our case), then it splits into the product of a Heisenberg group and an abelian group, and the results are somewhat similar, even though the abelian factor causes some `superpositions' of different `layers' of the Plancherel measure associated with $\mathcal{L}_A$. \begin{proposition}\label{prop:11:2} For every $\varphi\in C_c(E_{\mathcal{L}_A})$, \[ \begin{split} \int_{E_{\mathcal{L}_A}} \varphi\,\mathrm{d} \beta_{\mathcal{L}_A}&= \frac{\pi^{\frac{d}{2}}}{(2\pi)^{n_1+n_2+d} \Gamma\left( \frac{d}{2}\right) } \sum_{\gamma\in \mathds{N}^{\overline h}} \binom{\vect{n_{1,\omega}}+\gamma-\vect{1}_{\overline h} }{\gamma} \times\\ &\qquad\times\int_{ \mathds{R}_+\times \mathfrak{g}_2^* } \varphi(\mu_{\omega}(\vect{n_{1,\omega}}+2 \gamma)+\lambda, \omega(\vect{T})) \abs{\lambda}^{\frac{d}{2}-1} \abs{\mathfrak{P}aff(\omega)} \,\mathrm{d} (\lambda,\omega). \end{split} \] \end{proposition} \begin{proof} We follow the construction of the Plancherel measure of~\cite{AstengoCowlingDiBlasioSundari} as in~\cite[4.4.1]{Martini}. Take $\omega\in \mathfrak{g}_2^*\setminus W$ and $\tau\in P_{0,\omega}(\mathfrak{g}_1)$. Let $\pi_{\omega,\tau}$ be an irreducible unitary representation of $G$ in a hilbertian space $H_{\omega,\tau}$ such that $\pi_{\omega,\tau}(x,t)=e^{i \omega(t)+ i\tau(x) } I_{H_{\omega,\tau}}$ for every $(x,t)\in P_{0,\omega}(\mathfrak{g}_1)\times \mathfrak{g}_2$.\footnote{Recall that such a representation is uniquely determined up to unitary equivalence; cf.~\cite{AstengoCowlingDiBlasioSundari} and the references therein.} Then, for every $f\in L^2(G)$, \[ \norm{f}_2^2= \frac{1}{(2\pi)^{n_1+n_2+d}} \int_{\mathfrak{g}_2^*} \int_{P_{0,\omega}(\mathfrak{g}_1) } \norm{ \pi_{\omega,\tau}(f) }_2^2 \abs{\mathfrak{P}aff(\omega)}\,\mathrm{d} \tau\,\mathrm{d} \omega. \] Now, it is well-known that there is a commutative family $(P_{\omega,\tau,\gamma})_{\gamma\in \mathds{N}^{\overline h}}$ of self-adjoint projectors of $H_{\omega,\tau}$ such that $I_{H_{\omega, \tau}}=\sum_{\gamma\in \mathds{N}^{\overline h}} P_{\omega,\tau,\gamma}$ pointwise, and such that for every $\gamma\in \mathds{N}^{\overline h}$ we have $\tr P_{\omega, \tau,\gamma}= \binom{\vect{n_{1,\omega}}+\gamma-\vect{1}_{\overline h} }{\gamma}$ and (cf.~Proposition~\ref{prop:10:8}) \[ \mathrm{d} \pi_{\omega, \tau}(\mathcal{L}_A) \cdot P_{\omega, \tau,\gamma}=(\abs{\tau}^2+\mu_\omega (\vect{n_{1,\omega}}+2 \gamma),\omega(\vect{T})) P_{\omega, \tau,\gamma}.
\] Therefore, for every $\varphi \in C^\infty_c(E_{\mathcal{L}_A})$, $\norm{\mathcal{K}_{\mathcal{L}_A}(\varphi)}_2^2$ equals \[ \begin{split} &\frac{1}{(2\pi)^{n_1+n_2+d}} \int_{\mathfrak{g}_2^*}\int_{P_{0,\omega}(\mathfrak{g}_1)} \sum_{\gamma\in \mathds{N}^{\overline h}} \binom{\vect{n_{1,\omega}}+\gamma-\vect{1}_{\overline h} }{\gamma} \abs*{\varphi\left(\abs{\tau}^2+\mu_\omega (\vect{n_{1,\omega}}+2 \gamma),\omega(\vect{T})\right) }^2 \abs{\mathfrak{P}aff(\omega)}\,\mathrm{d} \tau\,\mathrm{d} \omega, \end{split} \] whence the stated formula for $\beta_{\mathcal{L}_A}$ follows. \end{proof} Now, let us make some remarks on how one may find an expression for $\chi_{\mathcal{L}_A}$; since that expression is not particularly illuminating, we shall not present it explicitly. We begin with a definition. \begin{definition} Define \[ \Phi_d\colon \mathds{R}_+\ni x \mapsto \Gamma\left( \frac{d}{2} \right) \frac{ J_{\frac{d}{2}-1}(x)}{ \left( \frac{x}{2} \right)^{\frac{d}{2}-1} }=\Gamma\left( \frac{d}{2} \right)\sum_{k\in \mathds{N}} \frac{(-1)^k x^{2 k} }{ 4^{ k} k! \Gamma\left( k+\frac{d}{2} \right) } , \] where $J_{\frac{d}{2}-1}$ is the Bessel function (of the first kind) of order $\frac{d}{2}-1$. \end{definition} Observe first that from the Plancherel formula for $G$ which we stated in the proof of Proposition~\ref{prop:11:2} we deduce the following inversion formula: \[ f(x,t)= \frac{1}{(2\pi)^{n_1+n_2+d}} \int_{\mathfrak{g}_2^*}\int_{P_{0,\omega}(\mathfrak{g}_1)} \tr( \pi_{\omega,\tau}(x,t)^* \pi_{\omega,\tau}(f) ) \abs{\mathfrak{P}aff(\omega)}\,\mathrm{d} \tau\,\mathrm{d}\omega \] for every $f\in \mathcal{S}(G)$. If $\varphi \in C^\infty_c(E_{\mathcal{L}_A})$, then $\mathcal{K}_{\mathcal{L}_A}(\varphi)(x,t)$ equals, for almost every $(x,t)\in G$, \[ \begin{split} &\frac{1}{(2\pi)^{n_1+n_2+d}} \int_{\mathfrak{g}_2^*} \int_{P_{0,\omega}(\mathfrak{g}_1)} \sum_{\gamma\in \mathds{N}^{\overline h}} \varphi\left(\abs{\tau}^2+\mu_{\omega}(\vect{n_{1,\omega}}+2 \gamma),\omega(\vect{T})\right)\times\\ &\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad\times\tr (\pi_{\omega,\tau}(x,t)^* P_{\omega, \tau,\gamma} ) \abs{\mathfrak{P}aff(\omega)}\,\mathrm{d} \tau\,\mathrm{d} \omega. \end{split} \] Next, if $\Lambda^m_\gamma(X)=\sum_{j=0}^\gamma \binom{\gamma+m}{\gamma-j} \frac{(-X)^j}{j!}$ denotes the $\gamma$-th Laguerre polynomial of order $m$, then \[ \tr (\pi_{\omega,\tau}(x,t)^* P_{\omega, \tau,\gamma} )= e^{-\frac{1}{4}\abs{x_\omega}^2+ i \tau(x_{0,\omega}) +i \omega(t)} \prod_{h=1}^{\overline{h}} \Lambda^{n_{1,h,\omega}-1}_{\gamma_{h}}\left(\frac{1}{2} \abs{x_{h,\omega}}^2 \right) \] by~\cite[Proposition 2]{HulanickiRicci} and~\cite[10.12 (41)]{Erdelyi}, while \[ \fint_{\partial B(0,1)\cap P_{0,\omega}(\mathfrak{g}_1)} e^{i \tau(x_{0,\omega})}\,\mathrm{d} \mathcal{H}^{d-1}(\tau)= \Gamma\left( \frac{d}{2} \right) \frac{ J_{\frac{d}{2}-1}\left( \abs{x_{0,\omega} } \right) }{ \left( \frac{\abs{x_{0,\omega}}}{2} \right)^{\frac{d}{2}-1 }}=\Phi_d(\abs{x_{0,\omega}}). \] One may then find formulae for $\chi_{\mathcal{L}_A}$. \begin{remark}\label{oss:2} Let $T'_1,\dots, T'_n$ be $n$ homogeneous elements of the centre $\mathfrak{z}$ of $\mathfrak{g}$. Let us show that the study of the family $(\mathcal{L},-i T_1',\dots, - i T_n')$ can be reduced to that of the families of the form considered above on suitable $2$-step stratified groups.
Notice that we may assume that there is $n'\in \Set{0,\dots, n}$ such that $T_j'\in \mathfrak{g}_2$ if and only if $j\leqslant n'$; let $\mathfrak{g}''$ be the vector subspace of $\mathfrak{g}$ generated by $T_{n'+1}',\dots, T_{n}'$, and observe that $\mathfrak{g}''\subseteq \mathfrak{g}_1$ by homogeneity. Let $\mathfrak{g}'_1$ be the polar in $\mathfrak{g}_1$ of the $Q$-orthogonal complement of the polar of $\mathfrak{g}''$ in $\mathfrak{g}_1^*$; define $\mathfrak{g}'\coloneqq \mathfrak{g}'_1\oplus \mathfrak{g}_2$. Then, $\mathfrak{g}$ is the direct sum of its ideals $\mathfrak{g}'$ and $\mathfrak{g}''$. Let $G'$ and $G''$ be the Lie subgroups of $G$ corresponding to $\mathfrak{g}'$ and $\mathfrak{g}''$, and let $\mathcal{L}'$ and $\mathcal{L}''$ be the sub-Laplacians on $G'$ and $G''$, respectively, corresponding to the restriction of $Q$ to $\mathfrak{g}_1'^*$ and $\mathfrak{g}''^*$. By an abuse of notation, then, $\mathcal{L}=\mathcal{L}'+\mathcal{L}''$, so that the family $(\mathcal{L}, - i T_1',\dots, - i T_n')$ is equivalent to the family $(\mathcal{L}', - i T'_1,\dots, - i T'_n)$. Now, the family $(- i T'_{n'+1},\dots, - i T'_n)$ on $G''$ satisfies property $(RL)$ by classical Fourier analysis. Therefore, Theorem~\ref{teo:4} and its easy converse imply that the family $(\mathcal{L}, - i T_1',\dots, - i T_n')$ satisfies property $(RL)$ if and only if the family $(\mathcal{L}', - i T_1',\dots, - i T_{n'}')$ satisfies property $(RL)$. Since this latter family is equivalent to a family of the form $(\mathcal{L}',-i T_1,\dots, - i T_{n_2'})$ for some $n_2'$ and for some choice of the basis $T_1,\dots, T_{n_2}$ of $\mathfrak{g}_2$, our assertion follows. Notice, however, that $G'$ may be an $MW^+$ group; we shall deal with $MW^+$ groups in a future paper. Similar arguments apply to property $(S)$ and the continuity of the integral kernel. \end{remark} \section{Property $(RL)$} In this section we shall present several sufficient conditions for the validity of property $(RL)$. First of all, we observe that the spectrum of $\mathcal{L}_A$ is a semianalytic convex cone. In addition, we can basically ignore the Laguerre polynomials of higher order which appear in the Fourier inversion formula, thanks to Proposition~\ref{prop:2:9}. Indeed, with reference to the proof of Proposition~\ref{prop:11:2}, the `ground state', that is, the first eigenvalue of $\mathrm{d} \pi_{\omega,\tau}(\mathcal{L}_A)$, is sufficient to cover the whole of $\sigma(\mathcal{L}_A)$, as $\omega$ and $\tau$ vary. This fact leads to significant simplifications, as the basic Lemma~\ref{lem:9} shows. We need to distinguish between the `full' family $\mathcal{L}_A$, for which we can prove continuity of the multipliers only on a dense subset of the spectrum \emph{in full generality} (cf.~Lemma~\ref{lem:9}), and the `partial' family $(\mathcal{L}, (-i T_1,\dots, - i T_{n_2'}))$ for $n_2'<n_2$, where by means of a deeper analysis we are able to prove property $(RL)$ in full generality (cf.~Theorem~\ref{prop:10}). This latter result requires dealing with Radon measures defined on Polish spaces which are not necessarily locally compact. Concerning the `full' family $\mathcal{L}_A$, as we observed above, we can prove in full generality that every integrable kernel corresponds to a multiplier which is continuous on a dense subset of the spectrum.
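For orientation, we pause to illustrate the objects introduced so far on the simplest non-$MW^+$ example; this worked example is our own addition and is not needed in the sequel. Let $G=\mathds{H}^1\times\mathds{R}$ (the group of Remark~\ref{oss:1} below), with $\mathfrak{g}_1=\mathrm{span}(X,Y,Z)$, $\mathfrak{g}_2=\mathds{R} T$, $[X,Y]=T$, and with the choice of $Q$ for which $X,Y,Z$ are orthonormal. Writing $\lambda\coloneqq\omega(T)$, the operator $J_{Q,\omega}$ is, up to sign, $\lambda$ times the rotation by $\pi/2$ in the plane $\mathrm{span}(X,Y)$ and vanishes on $Z$, so that
\[
d=1,\qquad W=\Set{0},\qquad n_1=1,\qquad \overline h=1,\qquad \mu_{1,\omega}=\mu_\omega(\vect{n_{1,\omega}})=\abs{\mathfrak{P}aff(\omega)}=\abs{\lambda}.
\]
By the proof of Proposition~\ref{prop:11:2}, the joint eigenvalues of $\mathrm{d}\pi_{\omega,\tau}(\mathcal{L}_A)$ are $(\abs{\tau}^2+\abs{\lambda}(2\gamma+1),\lambda)$ with $\gamma\in\mathds{N}$, whence
\[
\sigma(\mathcal{L}_A)=\Set{(\xi,\lambda)\in\mathds{R}\times\mathds{R}\colon \xi\geqslant\abs{\lambda}},
\]
a closed convex cone which is already covered by the ground states $\gamma=0$ as $\tau$ and $\lambda$ vary. In this example the set of continuity provided by Lemma~\ref{lem:9} below is $\sigma(\mathcal{L}_A)\setminus(\mathds{R}_+^*\times\Set{0})$, which is dense but proper; property $(RL)$ nonetheless holds for this group by Theorem~\ref{prop:19:1} below (cf.~Remark~\ref{oss:1}).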
Nevertheless, we can prove that property $(RL)$ holds in the following cases: when $P_0$ extends to a continuous function on $\mathfrak{g}_2^*\setminus\Set{0}$, for example when $W=\Set{0}$ or when $G$ is the product of an $MW^+$ group and a non-trivial abelian group (cf.~Theorem~\ref{prop:19:1}); when $G$ is a free $2$-step stratified group on an odd number of generators (cf.~Theorem~\ref{prop:19:5}). In both cases, we make use of the simplified `inversion formula' for $\mathcal{K}_{\mathcal{L}_A}$ which is available in this case; in the second case, we employ the simple structure of free groups to prove that the $L^1$ kernels are invariant under sufficiently many linear transformations in order that the above-mentioned inversion formula give rise to a continuous multiplier. \begin{lemma}\label{lem:9} Take $f\in L^1_{\mathcal{L}_A}(G)$. Then, $\mathcal{M}_{\mathcal{L}_A}(f)$ has a representative which is continuous on \[ \Set{ ( \mu_\omega(\vect{n_{1,\omega}} ) ,\omega(\vect{T}) )\colon \omega\in \mathfrak{g}_2^* }\cup \Set{ ( \lambda ,\omega(\vect{T}) )\colon \omega\in \mathfrak{g}_2^*\setminus W ,\lambda \geqslant \mu_\omega(\vect{n_{1,\omega}}) }. \] \end{lemma} \begin{proof} Fix a multiplier $m$ of $f$. By Proposition~\ref{prop:2:9}, there is a negligible subset $N_1$ of $\mathfrak{g}_2^*$ such that for every $\omega\in \mathfrak{g}_2^*\setminus N_1$ there is a negligible subset $N_{2,\omega}$ of $P_{0,\omega}(\mathfrak{g}_1)$ such that \[ \pi_{\omega, \tau}^*(f)= m(\mathrm{d} \pi_{\omega, \tau}(\mathcal{L}_A)) \] for every $\tau \in P_{0,\omega}(\mathfrak{g}_1)\setminus N_{2,\omega}$. Notice that we may assume that $W\subseteq N_1$. Therefore, for every $\omega\in \mathfrak{g}_2^*\setminus N_1$ and for every $\tau\in P_{0,\omega}(\mathfrak{g}_1)\setminus N_{2,\omega}$, \[ \begin{split} m(\mu_{\omega}(\vect{n_{1,\omega}})+\abs{\tau}^2, \omega(\vect{T}))&=\frac{1}{\tr P_{\omega,\tau,0}} \tr (m(\mathrm{d} \pi_{\omega, \tau}(\mathcal{L}_A)) P_{\omega, \tau,0})\\ &= \int_{G} f(x,t) e^{- \frac{1}{4}\abs{x_{\omega}}^2+ i \omega(t)+i \tau(x_{0,\omega}) }\,\mathrm{d} (x,t). \end{split} \] Now, for every $\omega\in \mathfrak{g}_2^*\setminus N_1$ there is a negligible subset $N_{3,\omega}$ of $\mathds{R}_+^*$ such that, for every $\lambda\in \mathds{R}_+^*\setminus N_{3,\omega}$, we have $\mathcal{H}^{d-1}\big(\partial B_{P_{0,\omega}(\mathfrak{g}_1)}\big(0,\sqrt{\lambda} \big)\cap N_{2,\omega}\big)=0$. Therefore, for every $\omega\in \mathfrak{g}_2^*\setminus N_1$ and for every $\lambda\in \mathds{R}_+^*\setminus N_{3,\omega}$, \[ \begin{split} m(\mu_{\omega}(\vect{n_{1,\omega}})+\lambda, \omega(\vect{T}))&=\fint_{\partial B(0,\sqrt \lambda)} \int_G f(x,t) e^{ -\frac{1}{4}\abs{x_{\omega}}^2+i \omega(t)+ i \tau(x_{0,\omega}) }\,\mathrm{d} (x,t) \,\mathrm{d} \mathcal{H}^{d-1}(\tau)\\ &= \int_G f(x,t) e^{ -\frac{1}{4}\abs{x_{\omega}}^2+i \omega(t) } \Phi_d\left( \sqrt{\lambda} \abs{x_{0,\omega}} \right)\,\mathrm{d} (x,t). \end{split} \] Now, the mapping \[ (\omega, \lambda) \mapsto \int_G f(x,t) e^{ -\frac{1}{4}\abs{x_{\omega}}^2+i \omega(t) } \Phi_d\left( \sqrt{\lambda} \abs{x_{0,\omega}} \right)\,\mathrm{d} (x,t) \] is continuous on $[(\mathfrak{g}_2^*\setminus W)\times \mathds{R}_+]\cup [\mathfrak{g}_2^*\times \Set{0}]$ by Proposition~\ref{prop:11:1}, so that by means of Tonelli's theorem we see that it induces a representative of $m$ which satisfies the conditions of the statement.
\end{proof} \begin{theorem}\label{prop:19:1} Assume that $P_0$ can be extended to a continuous function on $\mathfrak{g}_2^*\setminus\Set{0}$. Then, $\mathcal{L}_A$ satisfies property $(RL)$. \end{theorem} Notice that, by polarization, $P_0$ has a continuous extension to $\mathfrak{g}_2^*\setminus\Set{0}$ if and only if $\abs{P_0(x)}$ has a continuous extension to $\mathfrak{g}_2^*\setminus\Set{0}$ for every $x\in \mathfrak{g}_1$. In addition, observe that the hypotheses of the theorem hold in the following situations: \begin{itemize} \item when $W=\Set{0}$, for example when $G$ is the free $2$-step nilpotent group on three generators; \item when $P_0$ is constant on $\mathfrak{g}_2^*\setminus W$, for example when $G=G'\times\mathds{R}^d$ for some $MW^+$ group $G'$, such as a product of Heisenberg groups. \end{itemize} \begin{proof} {\bf1.} Keep the notation of the proof of Lemma~\ref{lem:9}. Assume first that $n_2=1$, so that $W=\Set{0}$. In addition, $\ker \mathrm{d}_{\sigma_\omega}=\ker \mathrm{d}_{\sigma_{-\omega}}$ for every $\omega\in \mathfrak{g}_2^*$, so that $P_0$ is constant on $\mathfrak{g}_2^*\setminus \Set{0}$. The computations of the proof of Lemma~\ref{lem:9} then lead to the conclusion. {\bf2.} Denote by $\widetilde P_0$ the continuous extension of $P_0$ to $\mathfrak{g}_2^*\setminus \Set{0}$; observe that $\widetilde P_{0,\omega}$ is a self-adjoint projector of $\mathfrak{g}_1$ of rank $d$ for every non-zero $\omega\in \mathfrak{g}_2^*$. Take $f\in L^1_{\mathcal{L}_A}(G)$ and define, for every non-zero $\omega\in \mathfrak{g}_2^*$ and for every $\lambda \geqslant 0$, \[ m(\mu_{\omega}(\vect{n_{1,\omega}})+\lambda, \omega(\vect{T}))\coloneqq \int_G f(x,t) e^{ -\frac{1}{4}\abs{x_{\omega}}^2+i \omega(t) } \Phi_d\left( \sqrt{\lambda} \abs{\widetilde P_{0,\omega}(x)} \right)\,\mathrm{d} (x,t), \] so that $f=\mathcal{K}_{\mathcal{L}_A}(m)$. Then, $m$ is clearly continuous on $\sigma(\mathcal{L}_A)\setminus (\mathds{R}\times\Set{0}^{n_2})$, and $m(\mu_{r \omega}(\vect{n_{1, \omega}})+\lambda, r \omega(\vect{T}))$ converges to \[ \int_G f(x,t) \Phi_d\left( \sqrt{\lambda} \abs{\widetilde P_{0,\omega}(x)} \right)\,\mathrm{d} (x,t) \] as $r\to 0^+$, uniformly as $\omega$ runs through the unit sphere $S$ of $\mathfrak{g}_2^*$. Therefore, it will suffice to prove that the above integrals do not depend on $\omega\in S$ for every $\lambda\geqslant 0$. Indeed, Proposition~\ref{prop:3} implies that, for every $\omega\in S$, \[ (\pi_\omega)_*(f)=\mathcal{K}_{\mathrm{d}\pi_\omega(\mathcal{L}_A)}(m). \] Now,~{\bf1} above implies that the family $\mathrm{d} \pi_\omega(\mathcal{L}_A)$ satisfies property $(RL)$. Then, Proposition~\ref{prop:3} implies that \[ (\pi_0)_*(f)\in L^1_{\mathrm{d}\pi_0(\mathcal{L}_A)}(G_0); \] in addition, $\mathrm{d} \pi_0(\mathcal{L}_A)$ is identified with $(\Delta, 0,\dots,0)$, where $\Delta$ is the (positive) Laplacian associated with the scalar product $\widehat Q$ on $\mathfrak{g}_1$. Then, \[ \int_G f(x,t) \Phi_d\left( \sqrt{\lambda} \abs{\widetilde P_{0,\omega}(x)} \right)\,\mathrm{d} (x,t) =\int_{\mathfrak{g}_1}(\pi_0)_*(f)(x) \Phi_d\left( \sqrt{\lambda} \abs{\widetilde P_{0,\omega}(x)} \right)\,\mathrm{d} x, \] whence the assertion follows, since $(\pi_0)_*(f)$ is rotationally invariant. \end{proof} \begin{remark}\label{oss:1} Let $G$ be $\mathds{H}^1\times \mathds{R}$, where $\mathds{H}^1$ is the $3$-dimensional Heisenberg group.
If $\mathcal{L}$ is the standard sub-Laplacian on $\mathds{H}^1$, $T$ is a basis of the centre of the Lie algebra of $\mathds{H}^1$, and $\Delta$ is the (positive) Laplacian on $\mathds{R}$, then $(\mathcal{L}+\Delta, i T)$ satisfies property $(RL)$ by Theorem~\ref{prop:19:1}, but it is easily seen that its integral kernel does not admit any continuous representatives. \end{remark} When $G$ is a free group, we can remove the assumption that $P_0$ has a continuous extension. \begin{theorem}\label{prop:19:5} Assume that $G$ is a free $2$-step stratified group on an odd number of generators. Then, $\mathcal{L}_A$ satisfies property $(RL)$. \end{theorem} \begin{proof} Take $f\in L^1_{\mathcal{L}_A}(G)$; by Lemma~\ref{lem:9}, $f$ has a multiplier $m$ which is continuous on $\sigma(\mathcal{L}_A)\setminus (\mathds{R}\times W)$. Now, \[ (\pi_\omega)_*(f)(x,t) =\int_{\omega(t')= t} f(x, t')\,\mathrm{d} t' \] for almost every $(x,t)\in G_\omega$. Then, Proposition~\ref{prop:3} implies that $(\pi_\omega)_*(f)$ is invariant under the isometries which restrict to the identity on $(\ker \mathrm{d}_{\sigma_\omega})^\perp$, for every $\omega \in \mathfrak{g}_2^*\setminus W$; indeed, $\mathrm{d} \pi_\omega(\mathcal{L}_A)$ is invariant under such isometries, and these isometries are group automorphisms. Next, take $\omega\in W$ and an isometry $U$ of $G_\omega$ which restricts to the identity on $(\ker \mathrm{d}_{\sigma_\omega})^\perp$. Since $\dim \ker \mathrm{d}_{\sigma_\omega}$ is odd, there must be some $v\in \ker \mathrm{d}_{\sigma_\omega}$ such that $U\cdot v=\pm v$. Let $V$ be the orthogonal complement of $\mathds{R} v$ in $\ker \mathrm{d}_{\sigma_\omega}$, so that $V$ is $U$-invariant. Now, let $\sigma_V$ be a standard symplectic form on the hilbertian space $V$,\footnote{That is, choose a symplectic form $\sigma_V$ on $V$ so that $V$ admits an orthonormal basis (relative to the scalar product) which is also a symplectic basis (relative to $\sigma_V$).} and define $\omega_p$, for every $p\in \mathds{N}$, so that \[ \sigma_{\omega_p}=\sigma_\omega+ 2^{-p} \sigma_V; \] this is possible since $G$ is a free $2$-step stratified group. Then, $\omega_p$ belongs to $\mathfrak{g}_2^*\setminus W$ and converges to $\omega$. In addition, $(\pi_{\omega_p})_*(f)$ is $U$-invariant thanks to Proposition~\ref{prop:3}. Now, it is easily seen that $(\pi_{\omega_p})_*(f)$ converges to $(\pi_\omega)_*(f)$ in $L^1(\mathfrak{g}_1 \oplus \mathds{R})$, so that $(\pi_\omega)_*(f)$ is $U$-invariant. Then, the mapping \[ m_1\colon \mathds{R}_+\times(\mathfrak{g}_2^*\setminus W) \ni(\lambda,\omega) \mapsto \int_G f(x,t) e^{ -\frac{1}{4}\abs{x_{\omega}}^2+i \omega(t) } \Phi_1\left( \sqrt{\lambda} \abs{x_{0,\omega}} \right)\,\mathrm{d} (x,t), \] extends to a continuous function on $\mathfrak{g}_2^*\times \mathds{R}_+$. Now, clearly $m(\lambda, \omega(\vect{T}))=m_1(\lambda- \mu_\omega(\vect{n_{1,\omega}}),\omega)$ for every $(\lambda, \omega(\vect{T}))\in \sigma(\mathcal{L}_A)$; the assertion follows. \end{proof} \begin{theorem}\label{prop:10} Take $n_2'<n_2$. Then, the family $(\mathcal{L}, (-i T_j)_{j=1,\dots,n_2'})$ satisfies property $(RL)$. \end{theorem} \begin{proof} Define $\mathcal{L}'_{A'}=(\mathcal{L}, (-i T_j)_{j=1,\dots, n_2'})$, and let $L\colon E_{\mathcal{L}_A}\to E_{\mathcal{L}'_{A'}}$ be the unique linear mapping such that $\mathcal{L}'_{A'}=L(\mathcal{L}_A)$. Until the end of the proof, we shall identify $\mathfrak{g}_2^*$ with $\mathds{R}^{n_2}$ by means of the mapping $\omega \mapsto \omega (\vect{T})$. 
In addition, define $X\coloneqq (\sigma(\mathcal{L}_A) \setminus (\mathds{R}\times W)) \cup \partial \sigma(\mathcal{L}_A)$, so that $X$ is a Polish space by~\cite[Theorem 1 of Chapter IX, § 6, No.\ 1]{BourbakiGT2}. Let $\beta$ be the (Radon) measure induced by $\beta_{\mathcal{L}_A}$ on $X$, so that $\Supp{\beta}=X$. Let $L'$ be the restriction of $L$ to $X$. Since $\sigma(\mathcal{L}_A)$ is a convex cone by Corollary~\ref{cor:11:2} and since $\mathds{R}\times W$ is $\beta_{\mathcal{L}_A}$-negligible, Proposition~\ref{prop:A:8} implies that $\beta$ is $L'$-connected. Now, Proposition~\ref{prop:A:6} implies that $\beta$ has a disintegration $(\beta_{\lambda'})_{\lambda'\in E_{\mathcal{L}'_{A'}}}$ such that $\beta_{\lambda'}$ is equivalent to $\chi_{L'^{-1}(\lambda')}\cdot \mathcal{H}^{n_2-n_2'}$ for $\beta_{\mathcal{L}'_{A'}}$-almost every $\lambda'\in E_{\mathcal{L}'_{A'}}$. Observe that $L^{-1}(\lambda')\cap \sigma(\mathcal{L}_A)$ is a convex set of dimension $n_2-n_2'$ for $\beta_{\mathcal{L}'_{A'}}$-almost every $\lambda'\in E_{\mathcal{L}'_{A'}}$. In addition, $(\mathds{R}\times W)\cap L^{-1}(\lambda')$ is an algebraic variety of dimension at most $n_2-n_2'-1$ for $\beta_{\mathcal{L}'_{A'}}$-almost every $\lambda'\in E_{\mathcal{L}'_{A'}}$, for otherwise $\mathcal{H}^{n_2+1}(\mathds{R}\times W)$ would be non-zero, which is absurd. Therefore, $\Supp{\beta_{\lambda'}}=L'^{-1}(\lambda')$ for $\beta_{\mathcal{L}'_{A'}}$-almost every $\lambda'\in E_{\mathcal{L}'_{A'}}$. Now, take $m_0\in L^\infty(\beta_{\mathcal{L}'_{A'}})$ such that $\mathcal{K}_{\mathcal{L}'_{A'}}(m_0)\in L^1(G)$. Let us prove that $m_0$ has a continuous representative. Indeed, Lemma~\ref{lem:9} implies that there is a continuous function $m_1$ on $X$ such that $m_0\circ L'=m_1$ $\beta$-almost everywhere. Hence, Proposition~\ref{prop:A:7} implies that there is a function $m_2\colon \sigma(\mathcal{L}'_{A'})\to \mathds{C}$ such that $m_2\circ L'=m_1$. Since the mapping $L\colon \partial \sigma(\mathcal{L}_A)\to \sigma(\mathcal{L}'_{A'})$ is proper and onto, and since $\partial \sigma(\mathcal{L}_A)\subseteq X$, it follows that $m_2$ is continuous. The assertion follows (cf.~\cite[Corollary to Theorem 2 of Chapter IX, § 4, No.\ 2]{BourbakiGT2}). \end{proof} \section{Property $(S)$} The results of this section are basically a generalization of the techniques employed in~\cite{AstengoDiBlasioRicci,AstengoDiBlasioRicci2}. Theorem~\ref{prop:20:2} applies, for example, to the free $2$-step nilpotent group on three generators. Notice that we need to impose the condition $W=\Set{0}$ since our methods cannot be used to infer any kind of regularity on $W\setminus \Set{0}$; for example, in general our auxiliary functions $\abs{x_\omega}^2$ and $P_0$ are not differentiable on $W$. Nevertheless, this does not mean that property $(S)$ cannot hold when $W\neq \Set{0}$; as a matter of fact, Theorem~\ref{prop:20:3} shows that this happens for a product of free $2$-step stratified groups on $3$ generators and a suitable sub-Laplacian thereon. In order to simplify the notation, we define $\mathcal{S}(G,\mathcal{L}_A)\coloneqq\mathcal{K}_{\mathcal{L}_A}(\mathcal{S}(E_{\mathcal{L}_A}))$. We begin with a lemma which will allow us to get some `Taylor expansions' of multipliers corresponding to Schwartz kernels under suitable hypotheses. Its proof is modelled on a technique due to D.\ Geller~\cite[Theorem 4.4]{Geller}. We state it in a slightly more general context.
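Before stating the lemma, we record, for orientation, the elementary one-variable fact behind the `Hadamard lemma' step in its proof; this observation is our own addition and is not used elsewhere. If $\psi\in\mathcal{S}(\mathds{R})$ satisfies $\int_{\mathds{R}}\psi(s)\,\mathrm{d} s=0$, then
\[
\chi(t)\coloneqq\int_{-\infty}^{t}\psi(s)\,\mathrm{d} s=-\int_{t}^{+\infty}\psi(s)\,\mathrm{d} s
\]
defines an element of $\mathcal{S}(\mathds{R})$ with $\chi'=\psi$: the derivatives of $\chi$ of order at least $1$ are derivatives of $\psi$, while the two expressions above show the rapid decay of $\chi$ at $-\infty$ and at $+\infty$, respectively. In the proof of Lemma~\ref{lem:20:3} this fact is applied, with parameters, along the central direction $T'_1$, the vanishing of the integral being granted by the construction of $\widetilde\varphi_{0,1}$.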
\begin{lemma}\label{lem:20:3}\label{lem:20:4} Let $\mathcal{L}_A$ be a Rockland family on a homogeneous group $G'$, and let $T'_1,\dots, T'_n$ be a free family of elements of the centre of the Lie algebra $\mathfrak{g}'$ of $G'$. Let $\pi_1$ be the canonical projection of $G'$ onto its quotient by the normal subgroup $\exp(\mathds{R} T'_1)$, and assume that the following hold: \begin{itemize} \item $(\mathcal{L}_A, i T'_1,\dots, i T'_n )$ satisfies property $(RL)$; \item $\mathrm{d} \pi_1(\mathcal{L}_A, i T'_{2},\dots, i T'_n)$ satisfies property $(S)$. \end{itemize} Take $\varphi\in \mathcal{S}_{(\mathcal{L}_A, i T'_1,\dots, i T'_n )}(G')$. Then, there are two families $(\widetilde \varphi_\gamma)_{\gamma\in \mathds{N}^{n}}$ and $(\varphi_\gamma)_{\gamma\in \mathds{N}^{n}}$ of elements of $\mathcal{S}(G',\mathcal{L}_A)$ and $\mathcal{S}_{(\mathcal{L}_A, i T'_1,\dots, i T'_n )}(G')$, respectively, such that \[ \varphi=\sum_{\abs{\gamma}<h} \vect{T}'^\gamma \widetilde \varphi_\gamma+\sum_{\abs{\gamma}=h} \vect{T}'^\gamma \varphi_\gamma \] for every $h\in \mathds{N}$. \end{lemma} \begin{proof} For every $k\in \Set{1,\dots, n}$, let $G'_k$ be the quotient of $G'$ by the normal subgroup $\exp(\mathds{R} T'_k)$. Endow $\mathfrak{g}'$ with a scalar product which turns $(T'_1,\dots, T'_n)$ into an orthonormal family. Then, Proposition~\ref{prop:3} implies that $(\pi_1)_*(\varphi)\in \mathcal{S}_{\mathrm{d} \pi_1(\mathcal{L}_A, i T'_{2},\dots, i T'_{n})}(G'_1)$, so that there is $\widetilde m_1\in \mathcal{S}(E_{\mathrm{d} \pi_1(\mathcal{L}_A, i T'_{2},\dots, i T'_{n})})$ such that $(\pi_1)_*(\varphi)=\mathcal{K}_{\mathrm{d} \pi_1(\mathcal{L}_A, i T'_{2},\dots, i T'_{n})}(\widetilde m_1)$. Therefore, if we define $\widetilde \varphi_{0,1}\coloneqq \mathcal{K}_{(\mathcal{L}_A, i T'_{2},\dots, i T'_{n})}(\widetilde m_1)$, then Proposition~\ref{prop:3} implies that $(\pi_1)_*(\varphi-\widetilde \varphi_{0,1})=0$. In other words, \[ \int_{\mathds{R}} (\varphi-\widetilde \varphi_{0,1})(\exp(x+s T'_1))\,\mathrm{d} s=0 \] for every $x\in T_1'^\perp$. Identifying $\mathcal{S}(G') $ with $\mathcal{S}(\mathds{R} T'_1;\mathcal{S}(T_1'^\perp))$, by means of a simple consequence of the classical Hadamard's lemma we see that there is $\varphi_1 \in \mathcal{S}(G')$ such that \[ \varphi=\widetilde \varphi_{0,1}+ T'_1 \varphi_1. \] Now, let us prove that $\varphi_1\in \mathcal{S}_{(\mathcal{L}_A, i T'_{1},\dots, i T'_{n})}(G')$. Indeed, \[ T'_1 \mathcal{K}_{(\mathcal{L}_A, i T'_{2},\dots, i T'_{n})}\mathcal{M}_{(\mathcal{L}_A, i T'_{2},\dots, i T'_{n})}(\varphi_1)=\varphi-\widetilde \varphi_{0,1}=T'_1 \varphi_1. \] Since clearly $\mathcal{K}_{(\mathcal{L}_A, i T'_{2},\dots, i T'_{n})}\mathcal{M}_{(\mathcal{L}_A, i T'_{2},\dots, i T'_{n})}(\varphi_1)\in L^2(G')$, and since $T'_1$ is one-to-one on $L^2(G')$, the assertion follows. If $n\geqslant 2$, then we can apply the same argument to $\widetilde \varphi_{0,1}$ considering the quotient $G_2'$, since we already know that $\widetilde \varphi_{0,1}$ has a Schwartz multiplier. Then, we obtain $\widetilde \varphi_{0,2}\in \mathcal{S}(G',(\mathcal{L}_A, i T'_{3},\dots, i T'_{n}))$ and $\varphi_2\in \mathcal{S}_{(\mathcal{L}_A, i T'_{1},\dots, i T'_{n})} (G')$ such that \[ \varphi=\widetilde \varphi_{0,2}+ T'_1 \varphi_1+ T'_2 \varphi_2. 
\] Iterating this procedure, we eventually find functions $\widetilde \varphi_0\in \mathcal{S}(G',\mathcal{L}_A)$ and $\varphi_1,\dots, \varphi_{n}\in \mathcal{S}_{(\mathcal{L}_A, i T'_{1},\dots, i T'_{n})}(G')$ such that \[ \varphi=\widetilde \varphi_0+\sum_{k=1}^{n} T'_k \varphi_k. \] The assertion follows proceeding inductively. \end{proof} Notice that, if $G$ is abelian and $\mathcal{L}$ is a Laplacian on $G$, then $\mathcal{L}$ satisfies properties $(RL)$ and $(S)$ (cf.~\cite{Whitney2}). \begin{theorem}\label{prop:20:2} Assume that $W=\Set{0}$. Then, $(\mathcal{L}, (-i T_k)_{k=1}^{n_2'})$ satisfies property $(S)$ for every $n_2'\leqslant n_2$. \end{theorem} \begin{proof} Notice that Theorems~\ref{prop:19:1} and~\ref{prop:10} imply that $(\mathcal{L}, (-i T_k)_{k=1}^{n_2'})$ satisfies property $(RL)$. Therefore, by means of Corollary~\ref{cor:A:7} we see that it will suffice to prove the assertion for $n_2'=n_2$. In addition, the abelian case, that is, the case $n_2=0$ has already been considered. We proceed by induction on $n_2\geqslant 1$. {\bf1.} Observe first that the abelian case, Theorem~\ref{prop:19:1}, and Lemma~\ref{lem:20:3} imply that we may find a family $(\widetilde \varphi_\gamma)$ of elements of $\mathcal{S}(G,\mathcal{L})$, and a family $(\varphi_\gamma)$ of elements of $\mathcal{S}_{\mathcal{L}_A}(G)$ such that \[ \varphi=\sum_{\abs{\gamma}< h} \vect{T}^\gamma\widetilde \varphi_\gamma+ \sum_{\abs{\gamma}=h}\vect{T}^\gamma \varphi_\gamma \] for every $h\in \mathds{N}$. Define $\widetilde m_\gamma\coloneqq \mathcal{M}_{\mathcal{L}}(\widetilde \varphi_\gamma)\in \mathcal{S}(\sigma(\mathcal{L}))$ and $m_\gamma\coloneqq \mathcal{M}_{\mathcal{L}_A}(\varphi_\gamma)\in C_0(\beta_{\mathcal{L}_A})$ for every $\gamma$. Then, \[ m_0(\lambda, \omega) =\sum_{\abs{\gamma}<h} \omega^\gamma \widetilde m_\gamma(\lambda)+ \sum_{\abs{\gamma}=h} \omega^\gamma m_\gamma(\lambda, \omega) \] for every $h\in \mathds{N}$ and for every $(\lambda, \omega)\in \sigma(\mathcal{L}_A)$. By a vector-valued version of Borel's lemma (cf.~\cite[Theorem 1.2.6]{Hormander2} for the scalar, one-dimensional case), we see that there is $\widehat m\in C^\infty_c(\mathds{R}^{n_2}; \mathcal{S}(\mathds{R}) )$ such that $\widehat m^{(\gamma)}(0)=\widetilde m_\gamma$ for every $\gamma\in \mathds{N}^{n_2}$. Interpret $\widehat m$ as an element of $\mathcal{S}(E_{\mathcal{L}_A})$. Reasoning on $m-\widehat m$, we may reduce to the case in which $\widetilde m_\gamma=0$ for every $\gamma$; then, we shall simply write $m$ instead of $m_0$. {\bf2.} Consider the norm $N\coloneqq \mu(\vect{n_{1}})$ on $\mathfrak{g}_2^*$ and let $S$ be the associated unit sphere. Define $\sigma(\omega)\coloneqq \frac{\omega}{N(\omega)}$ for every $\omega\in \mathfrak{g}_2^*\setminus \Set{0}$. Then, the mapping \[ S\ni \omega \mapsto (\pi_\omega)_*(\varphi)\in \mathcal{S}(\mathfrak{g}_1 \oplus \mathds{R}) \] is of class $C^\infty$. Fix $\omega_0\in S$. It is not hard to see that we may find a dilation-invariant open neighbourhood $U$ of $\omega_0$ and an analytic mapping $\psi\colon U\times (\mathfrak{g}_1\oplus \mathds{R})\to \mathds{R}^{2 n_1}\times \mathds{R}\times \mathds{R}^d$ such that, for every $\omega\in U$, $\psi_\omega\coloneqq \psi(\omega,\,\cdot\,)$ is an isometry of $\mathfrak{g}_1 \oplus \mathds{R}$ onto $\mathds{R}^{2 n_1}\times \mathds{R}\times \mathds{R}^d$ such that $\psi_\omega(P_{0,\omega}(\mathfrak{g}_1))=\Set{0}\times \mathds{R}^d$ and $\psi_\omega(\Set{0}\times \mathds{R})=\Set{0}\times \mathds{R}\times \Set{0}$. 
Take $\omega\in U$. By transport of structure, we may put on $\mathds{R}^{2 n_1}\times \mathds{R}$ a group structure for which $\mathds{R}^{2 n_1}\times \mathds{R}$ is isomorphic to $\mathds{H}^{n_1}$ and which turns $\psi_\omega$ into an isomorphism of Lie groups, where, $\mathfrak{g}_1\oplus \mathds{R}$ is endowed with the structure induced by its identification with $G_\omega$.\footnote{Obviously, this structure depends on $\omega$.} Then, there is a sub-Laplacian $\mathcal{L}'_\omega$ on $\mathds{R}^{2 n_1}\times \mathds{R}$ such that, if $T$ denotes the derivative along $\Set{0}\times \mathds{R}\subseteq\mathds{R}^{2 n_1}\times \mathds{R}$ and $\Delta$ is the standard (positive) Laplacian on $\mathds{R}^d$, then \[ \mathrm{d} (\psi_\omega\circ\pi_\omega)(\mathcal{L}_A)=(\mathcal{L}'_\omega+\Delta, \omega(\vect{T}) T). \] Then, Proposition~\ref{prop:3} and Lemma~\ref{lem:3} imply that \[ (\psi_\omega\circ\pi_\omega)_*(\varphi_\gamma)((y,t),\,\cdot\,)\in \mathcal{S}_\Delta(\mathds{R}^d) \] for every $(y,t)\in \mathds{R}^{2 n_1}\times \mathds{R}$ and for every $\gamma\in \mathds{N}^{n_2}$. Define \[ \widehat \varphi_\gamma\colon (U\cap S) \times \mathds{R}_+\times( \mathds{R}^{2 n_1}\times\mathds{R})\ni (\omega, \xi, (y,t))\mapsto \mathcal{M}_\Delta ((\psi_\omega\circ\pi_\omega)_*(\varphi_\gamma)((y,t),\,\cdot\,) )(\xi), \] so that $\widehat \varphi_\gamma(\omega,\,\cdot\,,(y,t))\in \mathcal{S}(\mathds{R}_+)$ for every $\omega\in U\cap S$ and for every $(y,t)\in \mathds{R}^{2 n_1}\times \mathds{R}$ since $\Delta $ satisfies property $(S)$. In addition, the mapping \[ \omega \mapsto [ (y,t)\mapsto (\psi_\omega\circ\pi_\omega)_*(\varphi_\gamma)((y,t),\,\cdot\,) ] \] belongs to $\mathcal{E}(S\cap U; \mathcal{S}(\mathds{R}^{2 n_1}\times \mathds{R}; \mathcal{S}_\Delta(\mathds{R}^d) ))$, so that the mapping \[ \omega \mapsto [ (y,t)\mapsto \widehat\varphi_\gamma(\omega,\,\cdot\,,(y,t)) ] \] belongs to $\mathcal{E}(S\cap U; \mathcal{S}(\mathds{R}^{2 n_1}\times \mathds{R}; \mathcal{S}_\mathds{R}(\mathds{R}_+) ))$. Now, observe that the mapping \[ U\ni \omega \mapsto \psi_\omega^{-1}\in \mathcal{L}(\mathds{R}^{2 n_1}\times \mathds{R}\times \mathds{R}^d; \mathfrak{g}_1 \oplus \mathds{R}) \] is of class $C^\infty$, so that also the mapping \[ f\colon U\times \mathds{R}^{n_1}\ni(\omega,y)\mapsto \abs{ ( \psi_{\sigma(\omega)}^{-1}(y,0,0) )_{\omega} }^2 \] is of class $C^\infty$, thanks to Proposition~\ref{prop:11:1}. In addition, by means of Proposition~\ref{prop:11:2} we see that \[ \begin{split} m_\gamma(\xi+N(\omega), \omega(\vect{T}))= \int_{\mathds{R}^{2 n_1}\times \mathds{R}} \widehat \varphi_\gamma(\sigma(\omega), \xi, (y,t)) e^{ -\frac{1}{4} f(\omega, y)+i N(\omega)t }\,\mathrm{d} (y,t) \end{split} \] for every $\gamma\in \mathds{N}^{n_2}$, for every $\omega\in U$ and for every $\xi\geqslant 0$. Therefore, the preceding arguments and some integrations by parts show that \[ \begin{split} m(\xi+N(\omega),\omega(\vect{T})) &=\sum_{\abs{\gamma}=h}\sigma(\omega(\vect{T}))^\gamma\int_{\mathds{H}^{n_1}} T^h\widehat \varphi_\gamma(\sigma(\omega),\xi,(y,t)) e^{-\frac{1}{4} f(\omega,y)+i N(\omega) t }\,\mathrm{d} (y,t)\\ &=\sum_{\abs{\gamma}=h}(-i\omega(\vect{T}))^\gamma\int_{\mathds{H}^{n_1}} \widehat \varphi_\gamma(\sigma(\omega),\xi,(y,t)) e^{-\frac{1}{4} f(\omega,y)+i N(\omega) t }\,\mathrm{d} (y,t) \end{split} \] for every $h\in \mathds{N}$, for every $\omega\in U$ and for every $\xi \geqslant 0$. Now, fix $p_1,p_2,p_3\in \mathds{N}$, and take $h\in\mathds{N}$. 
Apply Faà di Bruno's formula and integrate by parts $p_3$ times in the $t$ variable. Then, there is a constant $C>0$ such that \[ \begin{split} \abs{ (\partial_1^{p_1} \partial_2^{p_2}m )(\xi,\omega(\vect{T})) }&\leqslant C N(\omega)^{h-p_2-p_3}(1+N(\omega))^{p_2} \int_{\mathds{H}^{n_1}} (1+\abs{(y,t)})^{2 p_2}\times\\ & \qquad\times \max_{\substack{\abs{\gamma}=h\\q_2+q_3=0,\dots, p_2}} \abs{ \widehat \varphi_\gamma^{(p_1+p_3+q_2+q_3)}(\sigma(\omega), \xi-N(\omega), (y,t)) } \,\mathrm{d} (y,t) \end{split} \] for every $(\xi,\omega(\vect{T}))\in \open{\sigma(\mathcal{L}_A)}\cap (\mathds{R}\times U)$. Here, $\abs{(y,t)}=\abs{y}+\sqrt{\abs{t}}$ is a homogeneous norm on $\mathds{R}^{2 n_1}\times \mathds{R}$. Now, take a compact subset $K$ of $U\cap S$. Then, the properties of the $\widehat \varphi_\gamma$ imply that for every $p_4\in \mathds{N}$ there is a constant $C'$ such that \[ \abs{ \widehat \varphi_\gamma^{(q)}(\omega, \xi, (y,t)) }\leqslant \frac{C'}{ (1+\xi)^{p_4}(1+\abs{(y,t)})^{2 p_2+ 2 n_1+3} } \] for every $\gamma$ with length $h$, for every $q=0,\dots, p_1+p_2+p_3 $, for every $\omega\in K$, for every $\xi\geqslant 0$ and for every $(y,t)\in \mathds{R}^{2 n_1}\times \mathds{R}$. Therefore, there is a constant $C''>0$ such that \[ \abs{ (\partial_1^{p_1} \partial_2^{p_2}m )(\xi,\omega(\vect{T})) }\leqslant C'' N(\omega)^{h-p_2-p_3}\frac{(1+N(\omega))^{p_2}}{ (1+ \xi-N(\omega) )^{p_4} } \] for every $(\xi,\omega(\vect{T}))\in \open{\sigma(\mathcal{L}_A)}\cap (\mathds{R}\times U)$ such that $\sigma(\omega)\in K$. By the arbitrariness of $U$ and $K$, and by the compactness of $S$, we see that we may take $C''$ so that the preceding estimate holds for every $(\xi,\omega(\vect{T}))\in \open{\sigma(\mathcal{L}_A)}\cap (\mathds{R} \times (\mathds{R}^{n_2}\setminus\Set{0}))$. Now, taking $h-p_3>p_2$ we see that $\partial_1^{p_1} \partial_2^{p_2}m$ extends to a continuous function on $\sigma(\mathcal{L}_A)$ which vanishes on $\mathds{R}_+\times \Set{0}$. If $N(\omega)\leqslant \frac{1}{3}$, then take $h-p_3=p_2$ and observe that \[ \frac{1}{3} + \xi+ N(\omega) \leqslant \frac{2}{3}+\xi\leqslant 1+\xi-N(\omega) \] for every $\xi \geqslant N(\omega)$. On the other hand, if $N(\omega)\geqslant \frac{1}{3}$, then take $p_3=p_4+h$ and observe that \[ 1+\xi+N(\omega) \leqslant (1+2 N(\omega))(1+\xi-N(\omega))\leqslant 5 N(\omega) (1+\xi-N(\omega)) \] for every $\xi \geqslant N(\omega)$. Hence, for every $p_4\in \mathds{N}$ we may find a constant $C'''>0$ such that \[ \abs{ (\partial_1^{p_1} \partial_2^{p_2}m )(\xi,\omega(\vect{T})) }\leqslant C''' \frac{1}{ (1+ \xi+N(\omega) )^{p_4} } \] for every $\xi\geqslant N(\omega)$. Now, extending~\cite[Theorem 5 of Chapter VI]{Stein} to the case of Schwartz functions in the spirit of~\cite[Theorem 6.1]{AstengoDiBlasioRicci2}, we see that $m\in \mathcal{S}_{E_{\mathcal{L}_A}}(\sigma(\mathcal{L}_A))$. \end{proof} \begin{theorem}\label{prop:20:3} Assume that $G$ is the product of a finite family $(G_\eta)_{\eta\in H}$ of $2$-step stratified groups which do not satisfy the $MW^+$ condition; endow each $G_\eta$ with a sub-Laplacian $\mathcal{L}_\eta$ and assume that $(\mathcal{L}_\eta, i \mathcal{T}_\eta)$ satisfies property $(RL)$ (resp.\ $(S)$) for some finite family $\mathcal{T}_\eta$ of elements of the second layer of the Lie algebra of $G_\eta$. Define $\mathcal{L}\coloneqq \sum_{\eta\in H} \mathcal{L}_\eta$ (on $G$), and let $\mathcal{T}$ be a finite family of elements of the vector space generated by the $\mathcal{T}_\eta$. 
Then, the family $(\mathcal{L}, - i \mathcal{T})$ satisfies property $(RL)$ (resp.\ $(S)$). \end{theorem} \begin{proof} Observe first that, by means of Propositions~\ref{prop:A:7},~\ref{prop:A:8}, and~\ref{prop:A:6}, and Corollary~\ref{cor:A:7}, we may reduce to the case in which $\mathcal{T}$ is the union of the $\mathcal{T}_\eta$. Then, Theorems~\ref{teo:4},~\ref{teo:5},~\ref{prop:19:1}, and~\ref{prop:20:2}, imply that the family $(\mathcal{L}_H, - i \mathcal{T})$ satisfies property $(RL)$ (resp.\ $(S)$). Therefore, the assertion follows easily from Propositions~\ref{prop:A:7},~\ref{prop:A:8}, and~\ref{prop:A:6}, and Corollary~\ref{cor:A:7}. \end{proof} \end{document}
\begin{document} \title{Non-classicality of coherent state mixtures} \author{I. Starshynov, J. Bertolotti, J. Anders} \address{Department of Physics and Astronomy, CEMPS, University of Exeter, Stocker Road, Exeter EX4 4QL, UK} \ead{[email protected], \, [email protected]} \begin{abstract} Mixtures of coherent states are commonly regarded as classical. Here we show that there is a quantum advantage in discriminating between coherent states in a mixture, implying the presence of quantum properties in the mixture, which are, however, not captured by commonly used non-classicality measures. We identify a set of desired properties for any non-classicality measure that aims to capture these quantum features, and define the discord potential $C_D$, which we show to satisfy all those properties. We compare the discord potential with recently proposed coherence monotones, and prove that the coherence monotones diverge for classically distinguishable states, thus indicating their failure to quantify non-classicality in this limit. On the technical side, we provide a simple method of calculating the discord as well as other information-theoretic quantities for the (non-Gaussian) output of any input state with positive P-function. \end{abstract} \section{Introduction} \label{intro} It is often not obvious whether a given state of a physical system requires a quantum-mechanical description, or whether a classical description suffices. Moreover, on closer inspection one realises that a physical system can show distinct quantum features~\cite{Ferraro2012}, such as non-positive ``probabilities'', and correlations that violate Bell's inequality. Hence different measures of ``quantumness'' have been introduced that each quantifies how much a quantum state exhibits a particular feature~\cite{Dodonov,Ryl2015,Yuan2018,Bernardini2017}. Popular non-classicality measures include ill-definiteness or negativity of the Glauber-Sudarshan P-function~\cite{Gilchrist1997,Glauber1963}, the entanglement potential~\cite{Asboth2005}, and the recently proposed coherence monotones~\cite{Baumgratz2014,Streltsov2016}. An interesting case study is provided by proper mixtures of coherent states, $\sum_j p_j \, \left| \alpha_j \right \rangle \left \langle \alpha_j \right|$, where $p_j$ are positive probabilities, and $\left| \alpha_j \right \rangle$ are coherent states~\cite{Loudon2000}. Because coherent states are commonly thought to be classical, and mixing is also a classical operation, one expects mixtures of coherent states to be classical, too. And indeed their P-function is well defined and positive, and they have zero entanglement potential. However, since no two coherent states are ever completely orthogonal, there is always an ambiguity in discriminating them from each other. It is well-known that quantum measurements are able to discriminate coherent states in a mixture better than any classical measurement~\cite{Helstrom1967,Barnett2009}, suggesting that mixtures of coherent states do have a ``quantumness'' that is quantified by neither the P-function nor the entanglement potential~\cite{Hosseini2014,Choi2017}. In this paper we propose a new measure of non-classicality, the discord potential $C_D$, that captures this feature. In Section~\ref{sec:quadvantage} we discuss the quantum advantage in state discrimination. In Section~\ref{sec:discordpot} we identify the properties that we look for in a non-classicality measure and define the discord potential, which satisfies all the desired properties.
In Section~\ref{sec:properties} we discuss the properties of the discord potential for mixtures of two coherent states. We also show that while two recently proposed \emph{coherence monotones} diverge even when the quantum advantage in state discrimination disappears, the discord potential correctly goes to zero. A formal proof of the divergence of two coherence monotones is presented in the Appendix (Section~\ref{sec:app}). We provide a method to calculate entropies and discord potential for any proper mixture of coherent states (including non-Gaussian states) in Section~\ref{sec:method}, and summarise our results in Section~\ref{sec:concl}. \section{Quantum advantage in state discrimination} \label{sec:quadvantage} State discrimination is the problem of identifying which of a set of expected states has been received. If the states are orthogonal this is a trivial task; if, however, the possible states are non-orthogonal, the discrimination will always have an associated error. This error will depend on what measurements are performed on the system, and devising optimal state-discrimination strategies is an area of active research~\cite{Wittmann2008,Cook2007,Becerra2013,Rosati2016}. Let us consider the simple example of distinguishing two coherent states $|\alpha_0 \rangle$ and $|\beta_0 \rangle$ that occur with probabilities $0 < a < 1$ and $1- a$, respectively, i.e. the corresponding mixed state is \begin{equation}\label{eq:rho} \rho_0 = a \, \vert \alpha_0 \rangle \langle \alpha_0 \vert + (1-a) \, \vert \beta_0 \rangle \langle \beta_0 \vert . \end{equation} When the separation $d_0$, which is related to the states' overlap $d_0^2:= |\alpha_0 - \beta_0|^2 = - \ln |\langle \alpha_0 \vert \beta_0 \rangle|^2$, decreases to zero the two coherent states become identical and the system is not a mixture anymore. For large separations, the two coherent states have less and less overlap and thus become classically distinguishable. But for intermediate separations there will be a non-trivial probability $P(a,d_0)$ of identifying a state incorrectly, which will have a maximum for $a=1/2$. Interestingly, when discriminating between $| \alpha_0 \rangle $ and $| \beta_0 \rangle$, quantum measurements have an advantage over classical (homodyne) ones. That is, the optimal measurement produces an error given by the Helstrom bound~\cite{Helstrom1967}, $P_{\mathrm{Hel}} (1/2,d_0)~=~\frac{1}{2}\left[1-\sqrt{1-e^{-d_0^2}}\right]$, which is less than the corresponding error for homodyne measurements~\cite{Wittmann2008}, $P_{\mathrm{Hom}} (1/2, d_0)~=~\frac{1}{2}\left[1-\mbox{Erf}(d_0/\sqrt{2})\right]$, as shown in Fig.~\ref{fig:Helstrom}. The difference, $\Delta P$, between $P_{\mathrm{Hom}}$ and $P_{\mathrm{Hel}}$ characterizes the ``advantage'' of a quantum measurement over the classical one. \begin{figure} \caption{\label{fig:Helstrom} Error probabilities $P_{\mathrm{Hel}}$ and $P_{\mathrm{Hom}}$ as a function of the separation $d_0$ for equal probabilities $a=1/2$.} \end{figure} \section{Discord potential} \label{sec:discordpot} The advantage $\Delta P$ indicates a non-trivial quantum property in the state $\rho_0$ which we want to capture with a suitable measure of non-classicality. We note that the entanglement potential was constructed as a measure of quantumness that explicitly identifies proper mixtures of coherent states as classical \cite{Asboth2005}; i.e. it does not capture this advantage.
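As a quick numerical illustration of the size of this advantage (this sketch is our own addition and not part of the analysis above), the two error probabilities quoted in Section~\ref{sec:quadvantage} can be evaluated directly for equal priors $a=1/2$; only the stated closed-form expressions are used, and the function names and the sampled values of $d_0$ are arbitrary choices.
\begin{verbatim}
from math import erf, exp, sqrt

def p_helstrom(d0):
    # optimal (quantum) error probability for equal priors: Helstrom bound
    return 0.5*(1.0 - sqrt(1.0 - exp(-d0**2)))

def p_homodyne(d0):
    # error probability of the classical homodyne strategy, equal priors
    return 0.5*(1.0 - erf(d0/sqrt(2.0)))

# quantum advantage Delta P as a function of the separation d0
for d0 in [0.25*j for j in range(1, 13)]:
    print(f"d0 = {d0:4.2f}   DeltaP = {p_homodyne(d0) - p_helstrom(d0):.4f}")
\end{verbatim}
The advantage $\Delta P$ vanishes for $d_0 \to 0$ and $d_0 \to \infty$ and peaks at an intermediate separation ($d_0$ of order unity), in agreement with Fig.~\ref{fig:Helstrom}.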
Here we are looking for a measure $C$ that characterises the non-classicality of any state, with the following properties: \begin{itemize} \item it is non-negative, \item it is non-zero for all states that have a non-zero entanglement potential, \item it vanishes for the coherent state mixtures $\rho_0$ when $d_0 \to 0$ and when $d_0 \to \infty$, \item and it is strictly positive for intermediate distances $d_0$ for mixtures $\rho_0$. \end{itemize} We define the \emph{discord potential} $C_D$, a measure of the non-classicality of any state $\rho$, as \begin{equation} C_D (\rho) \equiv D (\rho^{AB}) , \label{eq:criterion} \end{equation} where $D (\rho^{AB})$ is the discord of the two-mode state $\rho^{AB}$ obtained by letting $\rho$ impinge on a balanced beam splitter (BS), as shown in Fig.~\ref{fig:ent}. Formally, this output state can be written as \begin{equation} \rho^{AB} = U_{BS} \left( \rho \otimes |0 \rangle \langle 0| \right) U_{BS}^{\dag}, \end{equation} where $U_{BS}$ is the balanced BS unitary and $|0\rangle \langle0|$ is the vacuum state. \begin{figure} \caption{\label{fig:ent} The state of interest $\rho$ and the vacuum $|0\rangle\langle 0|$ impinge on a balanced beam splitter (BS); the two-mode output state is $\rho^{AB}$.} \end{figure} The \emph{discord} $D_A(\sigma^{AB})$ of any two-mode state $\sigma^{AB}$ is a non-symmetrical correlation measure that quantifies the part of the correlations between $A$ and $B$ that cannot be accessed by a von Neumann measurement $\{\Pi_j^A\}$, with orthogonal projectors $\Pi_j^A$, performed on the mode $A$. It is defined as~\cite{Ollivier2001,Datta2008}: \begin{equation}\label{eq:discord} D_A(\sigma^{AB}) \equiv S(\sigma^{A}) - S(\sigma^{AB})+\min_{\{\Pi_j^A\}} S(\sigma^{B|\{\Pi_j^A\}}), \end{equation} where $ S(\sigma^{AB})$ is the von Neumann entropy of the global state and $S(\sigma^{A})$ is the entropy of the reduced state $\sigma^A$ of mode $A$. The last term involves the conditional entropy of the mode $B$ depending on the outcome of the measurement $\{\Pi_j^A\}$ on $A$, $S( \sigma^{ B|\{ \Pi_j^A \}}) = \sum_j p_j \, S\!\left(\tr_A \left[ \Pi_j^A \sigma^{AB} \Pi_j^A \right]/p_j\right)$, where $p_j = \tr \left[ \Pi_j^A \sigma^{ AB} \Pi_j^A \right]$ is the probability for the outcome $\Pi_j^A$ to occur. Note that the discord is defined as the \emph{minimum} over all possible von Neumann measurements, i.e. sets of projectors on $A$. As the BS output states $\rho^{AB}$ are symmetrical by construction, $D_A (\rho^{AB}) = D_B (\rho^{AB})$, which we denote by $D(\rho^{AB})$ in the definition of the discord potential $C_D$ in Eq.~(\ref{eq:criterion}). $C_D$ is also non-negative by construction since $D$ is non-negative. Furthermore, since the discord of any entangled state $\sigma^{AB}$ is non-zero~\cite{Ollivier2001,Datta2008}, $C_D$ is non-zero for all states with non-zero entanglement potential \cite{Asboth2005}. \section{Properties of the discord potential for mixtures of two coherent states} \label{sec:properties} We are now interested in the discord potential of mixtures of two coherent states $\rho_0$, for which we need to characterise the two-mode output state. One way to obtain $\rho^{AB}$ is to use well-known relations between the quasi-probability distributions of the input and output of a lossless balanced beam splitter~\cite{Ou1987}.
Specifically, for a one-mode state $\rho$ the P-function is a distribution over a complex amplitude $\xi$, \begin{equation} {\cal P}_{\rho} (\xi)=\frac{e^{|\xi|^2}}{\pi^2} \, \int \langle -\gamma| \, \rho \, |\gamma \rangle \, e^{|\gamma|^2- \gamma \xi^*+\gamma^*\xi} \, \mbox{d}^2 \gamma, \end{equation} where the integration is performed over all coherent states $|\gamma\rangle$ with complex amplitude $\gamma$. Then the P-function of the two-mode output state is related to the P-function of the two-mode input state as \cite{Loudon2000,Glauber1963}: \begin{equation} \label{eq:P_input_output} {\cal P}_{\rho^{AB}}(\xi^\prime, \zeta^\prime) = {\cal P}_{\rho_0 \otimes |0\rangle \langle 0|} \, \left(\frac{\xi^\prime - i \zeta^\prime}{\sqrt{2}}, \frac{\zeta^\prime - i \xi^\prime}{\sqrt{2}}\right). \end{equation} For the input state $\rho_0 \otimes |0\rangle \langle 0|$ the P-function is: \begin{equation} \label{eq:P_in} {\cal P}_{\rho_0 \otimes |0\rangle \langle 0|} \, (\xi, \zeta)= \left[ a \, \delta^2(\xi-\alpha_0) + (1-a) \, \delta^2(\xi-\beta_0) \right] \cdot \delta^2(\zeta). \end{equation} Substituting~\eref{eq:P_in} into~\eref{eq:P_input_output} gives the output P-function, from which we obtain the output state: \begin{equation}\label{eq:out} \rho^{AB} =a \, \vert \alpha \rangle \langle \alpha \vert \otimes \vert i\alpha \rangle \langle i\alpha \vert +(1-a) \, \vert \beta \rangle \langle \beta \vert \otimes \vert i\beta \rangle \langle i\beta \vert, \end{equation} where $\alpha = \alpha_0/\sqrt{2}$, $\beta = \beta_0/\sqrt{2}$. The reduced state $\rho^A$ of the mode $A$ is then: \begin{equation}\label{eq:rhoA} \rho^A = a \vert \alpha \rangle \langle \alpha \vert + (1-a) \vert \beta \rangle \langle \beta \vert, \end{equation} with $\rho^B$ taking the same form, with amplitudes $i\alpha$ and $i\beta$. \begin{figure} \caption{\label{fig:reduced_ent} Discord potential $C_D(\rho_0)=D(\rho^{AB})$ as a function of the separation $d_0$ and the probability $a$.} \end{figure} We are now ready to calculate the discord $D (\rho^{AB})$. While calculations in the infinite dimensional Hilbert space can be challenging and are usually limited to Gaussian states~\cite{Giorda2010,Adesso2010,Giorda2012}, for the particular set of states considered here a straightforward method is discussed in Section \ref{sec:method}. The discord of the two-mode output state, $D(\rho^{AB})$, and thus the discord potential $C_D (\rho_0)$ of the input state, shown in Fig.~\ref{fig:reduced_ent}, vanishes for $d_0 \to 0$, $d_0 \to \infty$, $a \to 0$ and $a \to 1$, as we required for our non-classicality measure. \begin{figure} \caption{\label{fig:discord} (a) Discord $D(\rho^{AB})$, entropy $S(\rho^{AB})$, mutual information $I(\rho^{AB})$ and classical contribution $I^{cl}(\rho^{AB})$ as functions of $d=|\alpha-\beta|$. (b) Coherence monotones $C_{l_1}$ and $C_{RE}$ and discord potential $C_D$ as functions of the separation $d_0$ for $a=1/2$ and $\beta_0=-\alpha_0$.} \end{figure} Fig.~\ref{fig:discord}a shows the output state discord $D(\rho^{AB})$, its entropy $S(\rho^{AB})$, its mutual information $I (\rho^{AB})=S(\rho^{A})+S(\rho^{B}) - S(\rho^{AB})$, and the ``classical'' contribution to the mutual information, $I^{cl}(\rho^{AB})=I (\rho^{AB}) - D (\rho^{AB})$, as a function of $d = |\alpha - \beta|$, with $d$ directly related to the separation in the input state, as $d= d_0/\sqrt{2}$. As can be seen from the figure, the discord reaches a maximum for $ d = \sqrt{\hbar/2}$. This is the distance at which the separation between the peaks of the input state Wigner function is equal to half of the coherent state quadrature uncertainty, which is equivalent to the Rayleigh criterion for peak resolution~\cite{Born1994}. Interestingly, for $d \lessapprox \sqrt{\hbar/8}$ the quantum discord contribution to $I$ becomes larger than the classical contribution.
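For completeness, the following self-contained numerical sketch (our own illustration, not a reproduction of the authors' code) indicates how the quantities discussed above can be evaluated. It relies on the finite-dimensional representation described in Section~\ref{sec:method}, on the closed-form eigenvalues $\frac{1}{2}\left[1\pm\sqrt{1-4w(1-w)(1-c)}\right]$ of a rank-two mixture with weight $w$ and overlap $c$, and on a simple grid search over the measurement angles $(\theta,\phi)$; entropies are taken in bits, the overlap $\langle\alpha|\beta\rangle$ is assumed real (e.g. $\beta_0=-\alpha_0$ real), and the function names and grid resolution are arbitrary choices.
\begin{verbatim}
import numpy as np

def mix2_entropy(w, c):
    # entropy (in bits) of w|psi><psi| + (1-w)|phi><phi| with overlap c = |<psi|phi>|^2;
    # the two non-zero eigenvalues are (1 +/- sqrt(1 - 4 w (1-w) (1-c)))/2
    r = np.sqrt(max(1.0 - 4.0*w*(1.0 - w)*(1.0 - c), 0.0))
    ev = [x for x in ((1.0 + r)/2.0, (1.0 - r)/2.0) if x > 1e-15]
    return -sum(x*np.log2(x) for x in ev)

def discord_potential(a, d0, n_grid=120):
    d = d0/np.sqrt(2.0)            # separation of the output amplitudes
    k = np.exp(-d**2/2.0)          # <alpha|beta>, taken real
    cA, cAB = k**2, k**4           # overlaps of the reduced and of the two-mode components
    S_A, S_AB = mix2_entropy(a, cA), mix2_entropy(a, cAB)
    # measurement on mode A restricted to span{|alpha>,|beta>}, as in Section 5:
    # |pi_1> = cos(t)|u_1> + e^{i p} sin(t)|u_2>, |pi_2> orthogonal to it
    s = np.sqrt(1.0 - k**2)
    best = np.inf
    for t in np.linspace(0.0, np.pi/2.0, n_grid):
        for p in np.linspace(0.0, 2.0*np.pi, n_grid, endpoint=False):
            ov_a = (np.cos(t), np.sin(t))                                 # |<pi_j|alpha>|
            ov_b = (abs(np.cos(t)*k + np.exp(-1j*p)*np.sin(t)*s),
                    abs(np.sin(t)*k - np.exp(-1j*p)*np.cos(t)*s))         # |<pi_j|beta>|
            cond = 0.0
            for j in (0, 1):
                pj = a*ov_a[j]**2 + (1.0 - a)*ov_b[j]**2
                if pj > 1e-12:
                    # conditional state of mode B is again a rank-two coherent mixture
                    cond += pj*mix2_entropy(a*ov_a[j]**2/pj, cA)
            best = min(best, cond)
    return S_A - S_AB + best

print(discord_potential(0.5, 1.0))   # discord potential of rho_0 for a = 1/2, d_0 = 1
\end{verbatim}
Sweeping $d_0$ in this sketch reproduces the qualitative behaviour described above: $C_D$ vanishes in the limits $d_0\to 0$ and $d_0\to\infty$ and is positive at intermediate separations.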
Alternative candidates that may satisfy our requirements for a measure of non-classicality are the coherence monotones~\cite{Baumgratz2014,Streltsov2016}, which quantify the coherence of a quantum state with respect to a chosen basis. In quantum information theory quantum coherences have been shown to be a resource for creating non-classical correlations~\cite{Streltsov2015,Killoran2016}, for enhancing quantum measurement precision~\cite{Marvian2016} and for performing certain quantum computational tasks~\cite{Hillery2016}. Coherences can be quantified by the \emph{$l_1$ norm of coherence} \begin{equation}\label{eq:l1norm} C_{l_1}(\rho) = \sum_{n\ne k} | \rho_{n,k}|, \end{equation} where $\rho_{n,k}$ are the matrix elements of the state in a basis $\{| n \rangle\}$, and the \emph{relative entropy of coherence} \begin{equation}\label{eq:relentrcoh} C_{RE}(\rho) = S(\rho_{\mathrm{diag}}) - S(\rho), \end{equation} where $\rho_{\mathrm{diag}}$ is the state obtained by removing the non-diagonal elements of $\rho$ in the chosen basis. Choosing the energy basis (i.e. the Fock basis), we plot these measures in Fig.~\ref{fig:discord}b for the mixture of two coherent states $\rho_0$ with $a=1/2$ and $\beta_0 = -\alpha_0$ as a function of separation $d_0$. One can see that both the $l_1$ norm of coherence and the relative entropy of coherence increase monotonically with $d_0$. Notably they diverge for $d_0 \to \infty$, which contrasts with the properties we require for our non-classicality measure. Indeed, the $l_1$ norm of coherence has already been shown to diverge for a certain class of states in the infinite-dimensional case~\cite{Zhang2016}. In the Appendix (Section~\ref{sec:app}) we prove that both measures in fact diverge for these coherent state mixtures. Moreover, both coherence monotones depend not only on the separation $d_0$ between the coherent states in the mixture, but also on the absolute amplitudes $|\alpha_0|$ and $|\beta_0|$ of the individual coherent states. In contrast, $C_D(\rho)$ depends only on $d_0$, tends to zero at both small and large values of $d_0$, and quantifies the state's non-classicality at intermediate separations, as can be seen in Fig.~\ref{fig:discord}b. Thus the discord potential satisfies all our requirements. In addition, $C_D$ does not require a choice of basis, in contrast to the coherence monotones. \section{Calculating entropies of (non-Gaussian) coherent state mixtures} \label{sec:method} Coherent states, such as those appearing in~\eref{eq:rho}, are elements of the infinite-dimensional Hilbert space spanned by the Fock basis. At first sight this makes the direct calculation of the entropies in~\eref{eq:discord} tricky, as one would need to find the eigenvalues of an infinite matrix. However, these quantities can be obtained by moving to the Hilbert space spanned by the non-trivial pure state elements of the considered mixture and establishing an orthonormal basis in this smaller subspace.
For a general proper mixture of coherent states \begin{equation}\label{eq:rho2} \rho = \sum_{j=1}^N p_j \, |\alpha_j\rangle \langle \alpha_j |, \end{equation} with $p_j>0$, this subspace is spanned by the states $| \alpha_j \rangle$ for $j = 1, \dots, N$, and an orthonormal basis $|u_j\rangle$ can be built using the Gram-Schmidt procedure: \begin{eqnarray} \label{eq:basis} |u_1\rangle& = | \alpha_1 \rangle , \\ |u_j\rangle &= \frac{|v_j\rangle}{|| v_j ||}, \quad |v_j\rangle = | \alpha_j \rangle - \sum_{k=1}^{j-1} \frac{\langle v_k| \alpha_j \rangle}{\langle v_k| v_k \rangle} \, |v_k\rangle , \quad j = 2, \dots, N. \end{eqnarray} In such a basis the only non-vanishing matrix elements of $\rho$ are: \begin{equation}\label{eq:rhomn} \rho_{jk}=\langle u_j | \rho | u_k \rangle. \end{equation} This is a finite dimensional matrix of size $N \times N$, where $N$ is the number of pure state elements in Eq.~\eref{eq:rho2}. It is straightforwardly diagonalized, allowing the calculation of the eigenvalues and hence the entropy of the state $\rho$. For example, for the reduced state $\rho^A$ in Eq.~\eref{eq:rhoA} an orthonormal basis in the subspace spanned by $|\alpha \rangle$ and $|\beta \rangle$ is: \begin{equation} \vert u_1\rangle = \vert \alpha \rangle; \qquad \vert u_2 \rangle = \frac{\vert \beta \rangle - k \vert \alpha \rangle}{\sqrt{1-|k|^2}}, \end{equation} where $k =\langle \alpha \vert \beta \rangle$. In this basis $\rho^A$ can be written as: \begin{equation} \rho^{A}_\bot = \left( \begin{array}{cc} a + (1-a) |k|^2 & (1-a)\, k \, \sqrt{1-|k|^2} \\ (1-a)\, k^* \sqrt{1-|k|^2} & (1-a)(1 -|k|^2) \end{array} \right). \end{equation} The eigenvalues and the entropy, $S(\rho^A)$, are now readily calculated. To calculate the entropy of the two-mode state $S(\rho^{AB})$ in Eq.~\eref{eq:out} one has to introduce an analogous pair of basis vectors for the second mode and then diagonalise the resulting four-dimensional matrix. The last term in the expression for the discord \eref{eq:discord} is the entropy of a single reduced mode, but it requires an optimization over all possible measurement operators for the other mode. In the subspace spanned by $|\alpha \rangle$ and $|\beta \rangle$ a general set of projective measurement operators is $\Pi_j^A = \vert \pi_j \rangle \langle \pi_j \vert$ with \begin{eqnarray} \vert \pi_1 \rangle &= \cos \theta \, \vert u_1 \rangle + e^{i\phi} \, \sin \theta \, \vert u_2 \rangle , \cr \vert \pi_2 \rangle &= \sin \theta \, \vert u_1 \rangle - e^{i\phi} \, \cos \theta \, \vert u_2 \rangle . \end{eqnarray} Using these operators it is then possible to calculate the conditional entropy $S(\sigma^{B|\{\Pi_j^A\}})$ and minimize this entropy over all $\theta$ and $\phi$. For the more general case in which the input state is a mixture of more than two pure state elements, i.e. $N>2$, the Gram-Schmidt construction can still be used to calculate the entropies of the resulting $N\otimes N$ states. The optimization procedure for obtaining the conditional entropies, however, becomes more involved and has so far only been shown to be possible using the linear entropy approximation for $2\otimes N$ systems~\cite{Ma2015}. \section{Conclusions} \label{sec:concl} Despite the existence of a huge variety of non-classicality criteria, it can still be hard to determine whether a given system requires a full quantum-mechanical description or not, especially if it is infinite-dimensional.
In particular, mixtures of coherent states are often considered classical, but there is a quantum advantage when discriminating between overlapping coherent states in the mixture, suggesting that a quantum-mechanical description is necessary to fully capture the system properties. We proposed a new criterion for non-classicality of a state $\rho$ based on the quantum discord of the output $\rho^{AB}$ of a balanced beam splitter, when the two inputs are the state of interest $\rho$ and the vacuum state $|0 \rangle \langle 0|$. This measure, the discord potential $C_D$, identifies as non-classical all states with non-zero entanglement potential, but it is also positive for mixtures of coherent states when the quantum advantage in discrimination is positive, whilst vanishing otherwise. We also showed that the discord potential has several advantages over another set of commonly used non-classicality measures: the coherence monotones. Namely, the discord potential does not require a basis choice and it does not diverge for mixtures between classically separated coherent states, in contrast to two coherence monotones. Furthermore, we detailed a simple method to calculate the entropy of any state with positive P-function, and showed how to calculate the quantum potential of mixtures of two coherent states requiring optimization over all possible measurements. We conclude that the discord potential can be a more sensitive indicator of non-classicality than the entanglement potential or the positivity of the P-function, capturing a wider class of states that show quantum advantages. \section*{References} \section{Appendix} \label{sec:app} \subsection{Asymptotic behaviour of $C_{l_1}$ for coherent states in the Fock basis.} We first evaluate the large $\alpha$ asymptotes of the coherence monotone $C_{l_1}$ for a coherent state $|\alpha\rangle$ in the Fock basis $\{|n \rangle \}$ with $n = 0, 1, 2, ....$, and then conclude with stating the asymptotic behaviour of a mixture of two coherent states. Using Eq.~\eref{eq:l1norm} the $l_1$ norm of coherence for a coherent state in the Fock basis is: \begin{eqnarray} C_{l_1}(|\alpha\rangle) &= e^{-|\alpha|^2} \, \sum_{k \ne n} \frac{|\alpha|^{n+k}}{\sqrt{k!\, n!}} = e^{-|\alpha|^2} \, \left( \left[1 + \sum_{k=1}^\infty \frac{|\alpha|^{k}}{\sqrt{k!}} \right]^2 - \sum_{k=0}^\infty \frac{|\alpha|^{2k}}{k!} \right). \end{eqnarray} The resulting summation contains a square root of a factorial, and does not have a closed form. However, its asymptotic behaviour at large $A:=|\alpha|$ can be estimated using the following procedure: We introduce \begin{equation} f(A) =\sum_{k=1}^\infty \frac{A^k }{\sqrt{k!}} \quad \mathrm{and} \quad g(A) = {A \over f (A)} \, \left({\mbox{d} f \over \mbox{d} A}\right), \end{equation} so that \begin{eqnarray} C_{l_1}(|\alpha\rangle) &= e^{- A^2} \, \left(1 + 2 f(A) + f^2(A) \right) - 1. 
\end{eqnarray} Using that the ratio of two power series in $x$ with coefficients $p_k$ and $q_k$ for large $x$ is asymptotically determined by the ratio of the coefficients of the largest power, \begin{equation} \lim_{x\to \infty} \frac{\sum_{k=1}^\infty p_k \, x^k}{\sum_{k=1}^\infty q_k \, x^k} =\lim_{k\to \infty} \frac{p_k}{q_k}, \end{equation} the asymptote of $g(A)/A^2$ for large $A$ is: \begin{equation} \lim_{A \to \infty} \frac{g(A)}{A^2} = \lim_{A \to \infty} \frac{\sum_{k=0}^\infty \frac{(k+1) A^{k} }{\sqrt{(k+1)!}}}{\sum_{k=2}^\infty \frac{A^{k} }{\sqrt{(k-1)!}} } =\lim_{k\to \infty} {(k+1) \sqrt{(k-1)!} \over \sqrt{(k+1)!}} = 1, \end{equation} and thus $g(A) = A^2 + o(A^2)$. The next order of the approximation is given by \begin{eqnarray} \lim_{A \to \infty} \left(g(A) - A^2\right) = \lim_{A \to \infty} \frac{\sum_{k=2}^\infty \frac{(k-1) A^{k} }{\sqrt{(k-1)!}} - \sum_{k=4}^\infty \frac{A^{k} }{\sqrt{(k-3)!}}}{\sum_{k=2}^\infty \frac{A^{k} }{\sqrt{(k-1)!}} } \\ =\lim_{k\to \infty} \left((k-1) - {\sqrt{(k-1)(k-2) }}\right) = \frac{1}{2}, \end{eqnarray} and thus $g(A) = A^2 + \frac{1}{2}+o(1)$. This leads to the following differential equation for $f(A)$: \begin{equation} \left({\mbox{d} f \over \mbox{d} A}\right) = \left( A^2 + \frac{1}{2}+o(1) \right) \, {f(A) \over A}, \end{equation} which in the asymptotic limit of large $A$ has the solution \begin{equation} f (A) \approx e^{A^2 \over 2} \, \sqrt{c A} \end{equation} with $c$ a constant which can be evaluated numerically, resulting in $c \approx 5$. Further terms of the approximation will lead to multipliers of the form $\exp[{1 \over A^k}]$ with $k>1$ in $f(A)$, which quickly converge to 1 for large $A$. This allows one to conclude that the $l_1$ norm of coherence of a coherent state in the Fock basis for large $\alpha$ asymptotically becomes \begin{eqnarray} C_{l_1}(|\alpha\rangle) &= e^{- |\alpha|^2} + 2 e^{- |\alpha|^2 \over 2} \, \sqrt{c |\alpha|} + c |\alpha| - 1 \approx c |\alpha| \end{eqnarray} i.e. that $C_{l_1}$ diverges for $|\alpha| \to \infty$. We now return to a proper mixture of two coherent states, $\rho = a \vert \alpha \rangle \langle \alpha \vert + (1-a) \vert \beta \rangle \langle \beta \vert$ with $0 < a <1$. This state's coefficients in the Fock basis are \begin{equation} \rho_{k,n} = \frac{a \, e^{-|\alpha|^2} \alpha^k\alpha^{*n}+(1-a)e^{-|\beta|^2} \beta^k\beta^{*n}}{\sqrt{k! \, n!}}. \end{equation} One can see that these coefficients imply that the $l_1$ norm of coherence, Eq.~\eref{eq:l1norm}, will depend on the absolute values of $\alpha$ and $\beta$, not just their relative displacement, in contrast to the discord potential which only depends on the relative displacement. To illustrate the behaviour of the $l_1$ norm of coherence on the separation of the elements of the mixture we here choose $\beta$ = $-\alpha$ and also $a=1/2$. The mixed state coefficients then simplify to \begin{equation} \rho_{k,n} = \frac{\alpha^k\alpha^{*n} }{\sqrt{k! \, n!}} \, e^{-|\alpha|^2} \quad \mbox{for} \quad k+n = \mbox{even}. \end{equation} Thus, the coefficients of the mixture of coherent states are identical to those for the coherent state $| \alpha \rangle$, but only when $k+n$ is an even number. 
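These asymptotic statements can be checked numerically; the following Python sketch (ours, with the sums truncated at $n_{\max}\gg|\alpha|^2$) evaluates $C_{l_1}$ for a coherent state and for the even-$(k+n)$ mixture directly from the Fock coefficients.
\begin{verbatim}
# Numerical check (our own, not from the paper) of the asymptotics derived
# above: C_l1(|alpha>) ~ c|alpha| with c ~ 5, and the even-(k+n) mixture
# giving roughly half of that value.  Sums are truncated at nmax >> |alpha|^2.
import numpy as np
from scipy.special import gammaln

def c_l1_coherent(A, nmax=400):
    n = np.arange(nmax)
    amp = np.exp(n * np.log(A) - 0.5 * gammaln(n + 1) - 0.5 * A**2)  # e^{-A^2/2} A^n/sqrt(n!)
    total = np.outer(amp, amp)
    return total.sum() - np.trace(total)        # sum of off-diagonal |rho_{n,k}|

def c_l1_mixture(A, nmax=400):
    n = np.arange(nmax)
    amp = np.exp(n * np.log(A) - 0.5 * gammaln(n + 1) - 0.5 * A**2)
    total = np.outer(amp, amp)
    parity = (np.add.outer(n, n) % 2 == 0)      # only even k+n survive for beta=-alpha, a=1/2
    total = np.where(parity, total, 0.0)
    return total.sum() - np.trace(total)

for A in (2.0, 4.0, 8.0):
    print(A, c_l1_coherent(A) / A, c_l1_mixture(A) / c_l1_coherent(A))
# the two printed ratios approach c ~ 5 and 1/2, respectively, as A grows
\end{verbatim}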
As a consequence the coherence monotone $C_{l_1}(\rho)$ has, for large $|\alpha|$, almost the same asymptotic behaviour as that of the coherent state $C_{l_1}(|\alpha\rangle)$, with $C_{l_1}(\rho) = \frac{1}{2} \, C_{l_1}(|\alpha\rangle) \approx \frac{c}{2} |\alpha| $. The diverging asymptotic behaviour of $C_{l_1} (\rho)$ for large $\alpha$ is indicated in Fig.~\ref{fig:discord}. \subsection{Asymptotic behaviour of $C_{RE}$ for coherent states in the Fock basis.} We first evaluate the large $\alpha$ asymptotes of the relative entropy of coherence $C_{RE}$ for a coherent state $|\alpha\rangle$ in the Fock basis $\{|n \rangle \}$ with $n = 0, 1, 2, \dots$, and then conclude with stating the asymptotic behaviour of a mixture of two coherent states. The second term in Eq.~\eref{eq:relentrcoh} is 0 since the coherent state is pure, which leaves us with \begin{eqnarray} C_{RE}(|\alpha\rangle) &= - e^{-|\alpha|^2} \sum_{k =0}^\infty \frac{|\alpha|^{2k}}{k!}\ln \left(e^{-|\alpha|^2} \frac{|\alpha|^{2k}}{k!} \right)\\ &=|\alpha|^2 (1- 2\ln |\alpha|) + e^{-|\alpha|^2} \sum_{k =0}^\infty \frac{|\alpha|^{2k}}{k!}\ln k! . \end{eqnarray} For large $A:=|\alpha|$ the high powers are important and we use the Stirling approximation $\ln k! \approx k \ln k - k +\frac{1}{2} \ln (2\pi k) + \dots$ to transform the above expression to: \begin{eqnarray} \fl C_{RE}(|\alpha\rangle) &= \frac{1}{2}e^{-A^2} \sum_{k =1}^\infty \frac{A^{2k}}{k!}\ln k +A^2 e^{-A^2} \sum_{k =0}^\infty \frac{A^{2k}}{k!}\ln (k+1)- A^2 \ln A^2 +\frac{\ln(2\pi)}{2}. \end{eqnarray} Now we need to establish the asymptotic behaviour for large $A$ of the functions \begin{equation}\label{eq:h0} h_0(A) =e^{-A^2} \sum_{k =1}^\infty \frac{A^{2k}}{k!}\ln k \quad \mbox{and} \quad h_1(A) =e^{-A^2} \sum_{k =0}^\infty \frac{A^{2k}}{k!}\ln(k+1). \end{equation} After substituting the Laplace transform identity $\ln k = - k \int_0^\infty e^{-kt}\ln t \, \mbox{d} t -\gamma$ in Eq.~\eref{eq:h0}, where $\gamma$ is the Euler--Mascheroni constant, $h_0(A)$ becomes: \begin{eqnarray} h_0(A) &= -e^{-A^2} \int_0^\infty \sum_{k =1}^\infty \frac{A^{2k} e^{-kt}}{k!} k\ln t \, \mbox{d} t -\gamma \, (1 - e^{-A^2})\\ &= - e^{-A^2} A^2\int_0^\infty e^{-t+A^2 e^{-t}}\ln t \, \mbox{d} t -\gamma \, (1 - e^{-A^2})\\ &= - A^2 \int_0^1 e^{A^2 (x-1)} \, \ln \left( \ln {1 \over x} \right) \, \mbox{d} x -\gamma \, (1 - e^{-A^2}) \end{eqnarray} where $x = e^{-t}$ and the integral kernel is $I(A,x) = e^{A^2 (x-1)} \, \ln \left( \ln {1 \over x} \right)$. The kernel $I(A,x)$ diverges at the points $x=0$ and $x=1$, i.e. $I(A,0) \to \infty$ and $I(A,1) \to -\infty$, and these points give the maximal contribution to the integral. Also, at $x=1/e$ the kernel vanishes, i.e. $I(A, 1/e) = 0$. Splitting the integral into two parts, \begin{equation}\label{eq:I} \int_0^1 I(A,x) \, \mbox{d} x = \int_0^{1/e} I(A,x) \, \mbox{d} x + \int_{1/e}^1 I(A,x) \, \mbox{d} x \, , \end{equation} we bound the asymptotic behaviour of the first integral using the Cauchy--Schwarz inequality \begin{equation} 0 \le \int_0^{1/e} I(A,x) \, \mbox{d} x \, \le \, \frac{\mathrm{const.}}{\sqrt{2} A} \, e^{- A^2\left(1 - \frac{1}{e}\right) }, \end{equation} where $\mathrm{const.}$ is a number arising from integrating $\left(\ln(-\ln x)\right)^2$ and taking the square root. This shows that the first integral decays exponentially as $A \to \infty$.
In order to find the asymptotic behaviour of the second integral we change variables $x - 1 = - p$, \begin{equation} \int_{1/e}^1 I(A,x) \, \mbox{d} x = \int_0^{1-1/e} e^{-p A^{2} } \ln (-\ln(1-p))\, \mbox{d} p. \end{equation} Now we can expand the function $\ln(-\ln(1-p))$ around the point $p=0$, \begin{eqnarray} \int_{1/e}^1 I(A,x) \, \mbox{d} x &= \int_0^{1-1/e} e^{- p A^{2}} \left( \ln p +\sum_{n=1}^\infty \frac{(-1)^{n+1}}{n} \left[ \sum_{k=2}^\infty \frac{p^{k-1}}{k}\right]^n \, \right) \, \mbox{d} p\\ &= \int_0^{1-1/e} e^{- p A^{2}} \left( \ln p + \frac{p}{2}+\frac{5 p^2}{24}+\frac{p^3}{8}+\dots \right) \, \mbox{d} p \\ &= \int_0^{1-1/e} e^{- p A^{2}} \ln p \, \mbox{d} p + \sum_{j\geq 1} \lambda_j \int_0^{1-1/e} e^{- p A^{2}} p^j \, \mbox{d} p \, , \end{eqnarray} where $\lambda_j$ are the coefficients of the expansion of the logarithm in powers of $p$. The leading contribution to the first of these integrals is \begin{equation} \int_0^{1-1/e} e^{- p A^{2}} \ln p \, \mbox{d} p \approx - \frac{\gamma + \ln A^2}{A^2}, \end{equation} plus some exponentially decaying terms in $A$. The second integral gives \begin{equation} \fl \sum_{j\geq 1} \lambda_j \int_0^{1-1/e} e^{- p A^{2}} \, p^j \, \mbox{d} p =\sum_{j\geq 1} \lambda_j \frac{\Gamma(1+j) - \Gamma(1+j, A^2 (1- \frac{1}{e}))}{A^{2(1+j)}} = O\left(A^{-4}\right), \end{equation} where $\Gamma(k)$ and $\Gamma(k, A)$ are the Gamma and incomplete Gamma functions, respectively. Therefore for large $A$, when dropping decaying terms, one finds \begin{eqnarray} h_0(A) &= - A^2 \int_0^1 I(A,x) \, \mbox{d} x -\gamma \, (1 - e^{-A^2}) \\ &\approx - A^2 \left(- \frac{\gamma + \ln A^2}{A^2} \right) + O\left(A^{-2}\right) -\gamma \approx \ln A^2. \end{eqnarray} Using similar arguments one finds \begin{equation} h_1(A) \approx \ln A^2 +\frac{1}{2 A^2}+o\left(A^{-2}\right). \end{equation} Finally, the asymptotic behaviour of $C_{RE}$ of a coherent state for large $|\alpha|=A$ is \begin{eqnarray} C_{RE}(|\alpha\rangle) &= \frac{1}{2} \, h_0(A) +A^2 \, h_1(A) - A^2 \ln A^2 +\frac{\ln(2\pi)}{2} \\ &\approx \ln A +\frac{1}{2} +\frac{\ln(2\pi)}{2} +o\left(1\right), \end{eqnarray} which diverges logarithmically as $A \to \infty$. We return again to the mixture of two coherent states $|\alpha \rangle$ and $|- \alpha \rangle$, $\rho=\frac{1}{2}\vert \alpha \rangle \langle \alpha \vert + \frac{1}{2} \vert - \alpha \rangle \langle - \alpha \vert$, which allows us to illustrate the dependence of the relative entropy of coherence $C_{RE}$ on the separation $d=2 |\alpha|$. Since the diagonal coefficients of the mixed state, $\rho_{n,n}$, are identical to those for the coherent state $| \alpha \rangle$, its coherence monotone $C_{RE}(\rho)$ is identical to the coherent state one $C_{RE}(|\alpha\rangle)$ apart from the fact that the entropy $S(\rho)$ is now non-zero and rises to $\ln 2$ for large $|\alpha|$. Hence $C_{RE}(\rho) \approx C_{RE}(|\alpha\rangle) - \ln 2 \approx \ln |\alpha| $ for large $|\alpha|$, and this diverging asymptotic behaviour of $C_{RE} (\rho)$ is indicated in Fig.~\ref{fig:discord}. \end{document}
\begin{document} \title{Packing and Covering a Polygon with Geodesic Disks} \begin{abstract} Given a polygon $P$, for two points $s$ and $t$ contained in the polygon, their \emph{geodesic distance} is the length of the shortest $st$-path within $P$. A \emph{geodesic disk} of radius $r$ centered at a point $v \in P$ is the set of points in $P$ whose geodesic distance to $v$ is at most $r$. We present a polynomial time $2$-approximation algorithm for finding a densest geodesic unit disk packing in $P$. Allowing arbitrary radii but constraining the number of disks to be $k$, we present a $4$-approximation algorithm for finding a packing in $P$ with $k$ geodesic disks whose minimum radius is maximized. We then turn our focus to \emph{coverings} of $P$ and present a $2$-approximation algorithm for covering $P$ with $k$ geodesic disks whose maximal radius is minimized. Furthermore, we show that all these problems are $\mathsf{NP}$-hard in polygons with holes. Lastly, we present a polynomial time exact algorithm which covers a polygon with two geodesic disks of minimum maximal radius. \end{abstract} \section{Motivation and Related Work} Packing and covering problems are among the most studied problems in discrete geometry (see \cite{agar,aste,boro,erd,convex,grub,rog,zusch,toth1993packing,recentT,toth,zong} for books on these topics). Nevertheless, most of the literature focuses on packings and coverings using Euclidean balls, which is a somewhat unrealistic assumption for practical problems. A prominent practical example is the \emph{Facility Location} (\emph{$k$-Center}) problem (see for example \cite{drezner1995facility}) in buildings or other constrained areas. In such a setting the relevant distance metric is the shortest-path metric and not the Euclidean distance. Such a problem occurs when a mobile robot is navigating in a room such as a data center (see \cite{LenchnerICAC} and \cite{5980554} for such an example), which is naturally modeled as a polygon, and we are interested in placing charging stations in such a way that the worst-case travel time of the robot to the closest station is minimized \cite{ibmRob}.\\ The shortest path distance is also referred to as the \emph{geodesic distance} and, for two points $u$ and $v$ in a polygon $P$, it is denoted by $d(u,v)$ and defined as the length of the shortest path between $u$ and $v$ which stays inside $P$. Furthermore, we define a closed \emph{geodesic disk} $D$ of radius $r$ centered at a point $v \in P$ as the set of all points in $P$ whose geodesic distance to $v$ is at most $r$. The \emph{interior} of $D$, denoted by $int(D)$, contains all points in $P$ which are at distance less than $r$ from $v$. The \emph{boundary} of $D$, denoted by $\partial D$, contains all points of $P$ which are either at distance exactly $r$ from $v$, or at distance at most $r$ from $v$ and contained in the polygon boundary $\partial P$ (see Figure \ref{geoDisk}). In this paper we initiate the study of packing and covering problems in polygons using geodesic disks.\\ For packing problems in polygons, several complexity-theoretic results are known. In \cite{packNard} it is shown that packing unit squares into orthogonal polygons with holes is $\mathsf{NP}$-hard, while the complexity is still open for simple polygons \cite{OPP}.
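As an aside to the definitions above, a simple way to evaluate geodesic distances in a simple polygon is via a visibility graph over the polygon vertices; the following Python sketch (our illustration, assuming the third-party packages \texttt{shapely} and \texttt{networkx} are available) implements this idea.
\begin{verbatim}
# Illustrative sketch (not from the paper): geodesic (shortest-path) distance
# between two points of a simple polygon via a visibility graph.
import itertools
import networkx as nx
from shapely.geometry import LineString, Polygon

def geodesic_distance(polygon, s, t):
    nodes = list(polygon.exterior.coords[:-1]) + [s, t]
    graph = nx.Graph()
    for p, q in itertools.combinations(nodes, 2):
        segment = LineString([p, q])
        # p "sees" q if the straight segment stays inside the (closed) polygon
        if polygon.covers(segment):
            graph.add_edge(p, q, weight=segment.length)
    return nx.dijkstra_path_length(graph, s, t, weight="weight")

# Example: an L-shaped polygon where the straight s-t segment leaves P,
# so the geodesic bends around the reflex vertex (1, 1).
P = Polygon([(0, 0), (4, 0), (4, 1), (1, 1), (1, 4), (0, 4)])
print(geodesic_distance(P, (0.5, 3.5), (3.5, 0.5)))
\end{verbatim}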
On the other hand, in \cite{iacono} it is shown that finding the maximum number of small polygons which can be packed into a simple polygon is $\mathsf{NP}$-hard. This result was improved in \cite{ningPack} where hardness is shown, even if the small polygons have constant size. In case where the number of disks to be packed is fixed, the problem is known as a \emph{Dispersion} or \emph{Obnoxious Facility Location} problem and its Euclidean versions have been studied in \cite{dispersion1980} and \cite{Erkut1989275} while other distance functions were considered in \cite{hosseini2009obnoxious}. \begin{figure} \caption{A polygon containing a geodesic disk centered at $v$, whose interior is depicted in gray and its boundary is drawn in black.} \label{geoDisk} \end{figure} Covering problems have been studied in the context of the \emph{Metric $k$-Center Clustering} problem, where $n$ points in the plane are covered with $k$ metric disks of minimum maximal radius. In \cite{kcenterRef} it was shown to be $\mathsf{NP}$-hard and a $2$-approximation algorithm was presented, which is the best possible approximation ratio when allowing arbitrary metrics \cite{Vazirani}. For the Euclidean metric it is shown in \cite{2approxKc} to be inapproximable in polynomial time within a factor of $1.82$ unless $\mathsf{P} = \mathsf{NP}$. In the context of polygons, \cite{convexCOv} present an $(0.78-\epsilon)$-approximation algorithm for covering a convex polygon with $k$ Euclidean disks, under the restriction that the disks need to be fully contained within the polygon. Exact coverings of $n$ points in the plane with two Euclidean disks of minimum maximal radius, commonly referred to as the \emph{2-Center} problem, has been heavily studied. The best deterministic algorithm runs in $O(n \log^9 n)$ time \cite{Sharir:1996:NAP:237218.237251} and in \cite{Eppstein1997} an expected $O(n \log^2 n)$ time algorithm is presented. For polygons, in \cite{Shin98computingtwo} a $O(n^2\log^3 n)$ time algorithm for covering a convex polygon with two Euclidean disks of minimum maximal radius is presented. This paper is organized as follows. In Section \ref{udPack} we present a polynomial time $2$-approximation algorithm for packing geodesic unit disks into a simple polygon and show that the problem is $\mathsf{NP}$-hard in polygons with holes. In Section \ref{udCover} we show that covering a polygon with holes using geodesic unit disks is $\mathsf{NP}$-hard. In Section \ref{geodesicKcov} we allow arbitrary radii, present a $2$-approximation algorithm for covering a polygon (possibly with holes) with $k$ disks of minimum maximal radius and show that the problem is $\mathsf{NP}$-hard in polygons with holes. Analogously, in Section \ref{geoKpacking} we present a $4$-approximation algorithm for finding a packing in a polygon with $k$ geodesic disks whose minimum radius is maximized and show that the problem is $\mathsf{NP}$-hard in polygons with holes. Finally, in Section \ref{exact2cov} we present a polynomial time exact algorithm which covers a simple polygon with two geodesic disks of minimum maximal radius. \section{Geodesic Unit Disk Packing} \label{udPack} In this section we study the following Geodesic Unit Disk Packing problem, present a $2$-approximation algorithm for simple polygons and show that the problem is $\mathsf{NP}$-hard in polygons with holes. \begin{problem}[Geodesic Unit Disk Packing] Given a polygon, find a maximum cardinality packing with geodesic disks of radius $1$. 
\label{packproblem} \end{problem} \begin{algorithm}[htp] \caption{greedyUnitPacking($P$)} \label{alg1} \begin{algorithmic} \STATE $S \leftarrow \emptyset$ \COMMENT{The centers of the packing.} \STATE $\partial \mathcal{A} \leftarrow \emptyset$ \COMMENT{The boundary of the geodesic disk arrangement.} \STATE $V \leftarrow vertices(P)$ \COMMENT{The set of points at which disks can be centered; initially these are the vertices of $P$.} \REPEAT \STATE find $u,v \in V$ of maximum geodesic distance \STATE center a geodesic disk $D$ of radius $2$ at $v$ \STATE $S \leftarrow S \cup \{v\}$ \COMMENT{Add $v$ to the set of centers returned by the algorithm.} \STATE $V \leftarrow V \setminus int(D)$ \COMMENT{Remove all points of $V$ contained in the interior of $D$.} \STATE Update $V$ by including all intersection points of $\partial D$ with $\partial \mathcal{A}$ and with the unpacked portion of $\partial P$. \STATE $\partial \mathcal{A} \leftarrow \partial \mathcal{A} \cup \partial D$ \COMMENT{Update the boundary of the disk arrangement by including $\partial D$.}\\ \UNTIL{$V = \emptyset$} \RETURN $S$ \end{algorithmic} \end{algorithm} The algorithm greedily centers geodesic disks of radius $2$ at points which are part of a maximum distance pair in the currently unpacked (i.e. free) region of the polygon. The reasoning behind placing radius $2$ and not unit disks is that a radius $2$ disk does not contain the center of any other radius $2$ disk if and only if the disks of radius $1$ with the same centers are disjoint, i.e. form a packing. The set of candidate points of maximal distance, denoted by $V$ in the algorithm, is initialized to consist of all the polygon vertices. In each iteration, the intersection points of the boundary $\partial D$ of the newly placed disk with unpacked parts of the polygon boundary as well as the intersection points of the newly placed disk with the boundary $\partial \mathcal{A}$ of the disk arrangement induced by the previously placed disks are added to $V$. Furthermore, all points of $V$ lying in the interior of the newly placed disk $D$ get removed from $V$. \begin{prop} The \emph{greedyUnitPacking} algorithm runs in time $O(K(n+K)\log^2(n+K))$, where $n$ is the number of vertices of the polygon and $K$ is the size of the output. \end{prop} \begin{proof} Using the algorithm introduced in \cite{Guibas} (see also \cite{geodPac}), the boundary of a geodesic disk can be computed in time $O(n)$. Furthermore, using the algorithm of \cite{diskBoundShar}, one can compute the boundary of an arrangement of $K$ geodesic disks in $O((n + K)\log^2(n+K))$ time. It is easy to see that the cardinality of $V$ is $O(n + K)$ at any step, thus finding the next center $v$ among the set $V$ of points, i.e. finding a point of a maximum distance pair in the free space, can be computed in $O((n + K)\log(n+K))$ time using the algorithm of \cite{Toussaint89computinggeodesic}. Since the main loop runs $K$ times, the proposition follows. \end{proof} \begin{theorem} The greedyUnitPacking algorithm yields a $2$-approximation for the Geodesic Unit Disk Packing problem. \end{theorem} \begin{proof} Since at each step $greedyUnitPacking$ centers the newly placed disk at a point in $V$ and all these points are all at least at distance $2$ from any point in $S$, the computed centers indeed induce a geodesic unit disk packing. In order to see that $S$ has at least half the cardinality of an optimal packing $OPT$, let $(p_1, \ldots, p_K)$ be the sequence of the points $S$ as placed by $greedyUnitPacking$. 
For $1 \leq j \leq K$ let $D_j$ be the geodesic radius $2$ disk centered at $p_j$. Letting $free_i(P) = P \setminus \bigcup^{i-1}_{j=1} int(D_j)$ denote the free (i.e. unpacked) regions of $P$ at the beginning of the $i$-th iteration, the algorithm selects a point $p_i$ of $V$ which is a diametral point in $free_i(P)$. Thus Lemma \ref{distLemma} implies that the part of $D_i$ which is contained in $free_i(P)$ contains at most two centers of any optimal solution, i.e. $|free_i(P) \cap int(D_i) \cap OPT| \leq 2$. Since $\bigcup^{K}_{j=1} \left(int(D_j) \cap free_j(P)\right)$ covers all of $P$, it holds for each $c \in OPT$ that there is a disk $D_i$ such that $c \in free_i(P) \cap int(D_i)$. Thus $\{int(D_1) \cap free_1(P), \ldots, int(D_K) \cap free_K(P) \}$ \emph{partitions} $OPT$ into $\leq K$ blocks, each of size $\leq 2$, implying $|OPT| \leq 2K$. \end{proof} \begin{figure} \caption{An illustration for the proof of Lemma \ref{distLemma}.} \label{geoIntPic} \end{figure} \begin{definition} A \emph{pseudo triangle} is a polygon consisting of three convex vertices which are connected to each other by concave chains. A \emph{pseudo quadrilateral} is defined analogously on four convex vertices. \end{definition} \begin{lemma} Let $P'$ be an arbitrary (not necessarily connected) subset of a polygon $P$, let $v$ be an arbitrary point in $P'$ and let $u \in P'$ be a point with maximum geodesic distance to $v$ (w.r.t. $P$) among all points in $P'$. It then holds for any $x,y,z \in P'$ that $\max\{d(u,x),d(u,y), d(u,z)\} \geq \min\{d(x,y), d(x,z), d(y,z)\}$, with $d$ denoting the geodesic distance in $P$. \label{distLemma} \end{lemma} \begin{proof} For contradiction suppose that the claim in the lemma is false, i.e. let $x,y,z$ be three points with $\min\{d(x,y), d(x,z), d(y,z)\} > \max\{d(u,x),d(u,y), d(u,z)\}$. For two points $a, b \in P$, let $p(a,b)$ denote the shortest path in $P$ connecting them. It is easy to see that the path $p(u,v)$ connecting $u$ and $v$ intersects at most two of the three shortest paths among $x,y,z$, since they form a pseudo triangle. W.l.o.g. let $p(x,y)$ be a path not intersected by $p(u,v)$ and w.l.o.g. let $p(u,x)$ and $p(v,y)$ be the other two paths defining a pseudo quadrilateral together with $p(u,v)$ and $p(x,y)$. It then follows from Observation \ref{quadIneq} that $d(u,v) + d(x,y) \leq d(u,y) + d(v,x)$. On the other hand, since $d(v,x) \leq d(u,v)$ and $d(u,y) < d(x,y)$ by assumption, $d(u,v) + d(x,y) > d(u,y) + d(v,x)$, a contradiction. \end{proof} \begin{obs} In a pseudo quadrilateral, the sum of the lengths of the two diagonals is at least as large as the sum of the lengths of the two opposite sides. \label{quadIneq} \end{obs} \begin{figure} \caption{Illustration of the proof of Observation \ref{quadIneq}.} \label{figproof} \end{figure} \begin{proof} Let $a,b,c,d$ be the four convex vertices of a pseudo quadrilateral and let $p$ be the intersection point of the two diagonal shortest paths $p(a,c)$ and $p(b,d)$ as shown in Figure \ref{figproof}. It is well known that in any simple polygon the shortest paths between any three points form a pseudo triangle. Thus, according to Observation \ref{triIneq}, $d(a,p) + d(p,b) \geq d(a,b)$ and $d(c,p) + d(p,d) \geq d(c,d)$. Thus $d(a,p) + d(p,b) + d(c,p) + d(p,d) \geq d(a,b) + d(c,d)$. The inequality for $d(a,d)$ and $d(b,c)$ can be shown analogously and thus the observation follows. \end{proof} The following result is known (see for example \cite{geoDiaSimp}) and we thus omit a proof here.
\begin{obs}[Triangle Inequality] In any pseudo triangle, the sum of the lengths of any two of the three shortest paths connecting the convex vertices is at least as large as the length of the remaining path. \label{triIneq} \end{obs} \begin{theorem} Geodesic Unit Disk Packing is $\mathsf{NP}$-hard in polygons with holes. \label{unitPackHard} \end{theorem} \begin{proof} In \cite{garey1977rectilinear} it is shown that the Maximum Independent Set problem on planar graphs of maximum degree $3$ is $\mathsf{NP}$-hard. Given such an instance $G=(V,E)$, it is easy to see that replacing an edge by a path of odd length $l$ increases the size of a maximum independent set by $(l-1)/2$. We now reduce an instance of this problem to our problem by orthogonally embedding $G$ in the plane on an integer grid \cite{gridEmd}. We then replace each edge $e \in E$ by a path of $l_e$ straight-line edges, each of length $1$, with $l_e$ being an odd number. We denote the obtained graph by $G'=(V',E')$. Denoting the resulting embedding by $P = P(G')$, we attach a polygonal chain of length $1.5$ at the midpoint of each edge. Using the arguments of the proof of Theorem 1 in \cite{penVig}, it follows that there is enough space in the embedding $P$ to attach the polygonal paths on each edge separately. Now any optimal packing centers a geodesic unit disk at the end of each polygonal chain and packs the remaining polygon (whose free space can now only be packed by placing disks at the vertices $V'$) with a maximum independent set for $G'$. Therefore, $G$ has an independent set of size at least $M$ if and only if $P$ has a packing of size at least $\sum_{e \in E} (l_e-1)/2+M+|E'|$. \end{proof} \section{Geodesic Unit Disk Covering} \label{udCover} In this section we show that covering a polygon with holes using the minimum number of geodesic unit disks is $\mathsf{NP}$-hard. \begin{problem}[Geodesic Unit Disk Covering] Given a polygon $P$, find a cover of $P$ with fewest geodesic unit disks. \label{geoUnCov} \end{problem} \begin{theorem} Geodesic Unit Disk Covering is $\mathsf{NP}$-hard in polygons with holes. \label{geoUnCovHard} \end{theorem} \begin{proof} We reduce from an instance $G = (V,E)$ of the $\mathsf{NP}$-hard Vertex Cover problem on planar graphs with maximum degree $3$ (see \cite{Garey:1979:CIG:578533}). It is easy to see that replacing an edge by a path of odd length $l$ increases the size of a vertex cover by $(l-1)/2$. We now reduce such a vertex cover instance to the unit disk cover problem by orthogonally embedding $G$ in the plane on an integer grid \cite{gridEmd}. We then replace each edge $e \in E$ by a path of $l_e$ straight-line edges, each of length $1$, with $l_e$ being an odd number, thus obtaining a new graph $G'=(V',E')$. We then replace each edge by an edge gadget as shown in Figure \ref{covGad} and call the resulting polygon with holes $P=P(G')$. For an edge $\{u,v\} \in E'$ the corresponding gadget consists of two paths of length $1$ connecting $u$ and $v$. In the middle of each path, an additional path of length $0.5$ is attached. It thus follows that an edge gadget is completely covered by a single unit disk if and only if the disk is centered at either $u$ or $v$. Thus $G$ has a vertex cover of size at most $M$ if and only if $P$ has a covering of size at most $\sum_{e \in E} (l_e-1)/2+ M$. Furthermore, using the arguments of the proof of Theorem 1 in \cite{penVig}, it is clear that there is enough space in the embedding $P$ to replace edges in $E'$ by their gadgets without overlapping any other gadgets.
\end{proof} \begin{figure} \caption{An edge gadget for an edge $\{u,v\}$.} \label{covGad} \end{figure} \section{Geodesic $k$-Covering} \label{geodesicKcov} In this section we present a $2$-approximation algorithm for covering a polygon, possibly with holes, using $k$ geodesic disks and show that the problem is $\mathsf{NP}$-hard. \begin{problem}[Geodesic $k$-Covering] Given a polygon $P$, possibly with holes, find a cover of $P$ with $k$ geodesic disks whose maximal radius is minimized. \label{geoCov} \end{problem} \begin{problem}[Metric $k$-Clustering] Given a set $S$ of $n$ points from a metric space, find $k$ smallest disks, such that they cover all points in $S$. \end{problem} Since the geodesic distance is a metric (see also Observation \ref{triIneq}), it follows that the \emph{Geodesic $k$-Covering} problem can be stated as a \emph{Metric $k$-Clustering} problem, which, although $\mathsf{NP}$-hard, can be approximated within a factor of two \cite{kcenterRef}. On the other hand, the fact that a polygon contains an uncountable number of points requires some further reasoning as to why the Geodesic $k$-Covering problem is of discrete nature. \\ The following algorithm simply places $k$ points inside a polygon, possibly with holes, such that the minimum geodesic distance among them is maximized. \begin{algorithm} \caption{gonzalezPlacement($P,k$)} \begin{algorithmic} \STATE $C \leftarrow \{c_1\}$ \COMMENT{with $c_1$ an arbitrary point in $P$} \STATE $\mathcal{M} \leftarrow \emptyset$ \COMMENT{sequence of shortest path maps} \FOR{$i \leftarrow 2$ to $k$} \STATE compute the shortest path map $M_{i-1}$ for $c_{i-1}$ \STATE $\mathcal{M} \leftarrow \mathcal{M} \cup \{M_{i-1}\}$ \COMMENT{store it in collection $\mathcal{M}$} \STATE compute the geodesic Voronoi diagram of $C$ in $P$ \STATE compute $A_i = A_i^1 \cup A_i^2 \cup vertices(P)$ \COMMENT{as described in the proof of Theorem \ref{2approxGonThm}} \STATE $c_i \leftarrow \arg\max_{a \in A_i}\min_{c \in C} d(c,a)$ using $M_j$, with $1\leq j \leq i-1$ \STATE $C \leftarrow C \cup \{c_i\}$ \ENDFOR \RETURN $C$ \end{algorithmic} \end{algorithm} \begin{theorem} The $gonzalezPlacement$ algorithm finds a geodesic cover of a polygon $P$ (possibly with holes) whose maximum radius is at most twice as large as the largest radius in an optimal cover, in time $O(k^2(n+k)\log(n+k))$. \label{2approxGonThm} \end{theorem} \begin{proof} The approximation ratio follows from the fact that Metric $k$-Clustering can be approximated within a factor of two (see \cite{kcenterRef}). In order to prove the running time we need to bound the time to find the next center $c_{i+1}$. For this let $C_i$ denote the set of centers after the $i$-th iteration. We find the next center in a polygon, possibly with holes, by computing the geodesic Voronoi diagram \cite{AronovGeo} of $C_{i}$ in $P$ using the continuous Dijkstra paradigm (see also \cite{higherGeoVorHoles}) in time $O((n+i)\log(n+i))$ \cite{Hershberger97anoptimal}. The center $c_{i+1}$ is either contained in the set of vertices of the Voronoi diagram (denoted by $A^1_i$) or lies at the intersection points of the Voronoi edges with the boundary of $P$ (denoted by $A^2_i$) or in the set of vertices of $P$ (denoted by $vertices(P)$). This holds since any other point is incompletely constrained and its distance to $C_i$ could thus be enlarged \cite{Shamos75}.
We denote the union $A^1_i \cup A^2_i \cup vertices(P)$ of all points of interest by $A_i$ and remark that the complexity of the geodesic voronoi diagram is $O(n+i)$ (see \cite{AronovGeo}), implying $|A_i| = O(n+i)$. Next, we compute the shortest path map $M_i$ for $c_i$ in time $O(n \log n)$ using again the algorithm of \cite{Hershberger97anoptimal} and add it to the collection $\mathcal{M}$ of shortest path maps. Finding the point furthest from any point in $C_i$, i.e. $\arg\max_{a \in A}\min_{c \in C_i} d(c,a)$, can be done by using each of the $i$ shortest path maps in $\mathcal{M}$ in total time $O(i(i+n)\log n)$ per iteration. \end{proof} \begin{theorem} Geodesic $k$-Covering is $\mathsf{NP}$-hard. \label{hardCov} \end{theorem} \begin{proof} From the proof of Theorem \ref{geoUnCovHard} it clearly follows that the decision version of the Geodesic Unit Disk Covering problem, i.e. whether a given polygon with holes can be covered with $k$ geodesic unit disks, is $\mathsf{NP}$-hard. Since solving the $k$-Covering problem and checking whether the minimum radius is at most one decides the Geodesic Unit Disk Covering problem, the theorem follows. \end{proof} \section{Geodesic $k$-Packing} \label{geoKpacking} In this section we present a $4$-approximation algorithm for packing $k$ geodesic disks into a polygon, possibly with holes, and show that the problem is $\mathsf{NP}$-hard. \begin{problem}[Geodesic $k$-Packing] Given a polygon $P$, possibly with holes, pack $k$ geodesic disks whose minimum radius is maximized. \end{problem} \begin{theorem} The $gonzalezPlacement$ algorithm finds a geodesic packing whose minimum radius is at least a fourth of the minimum radius in an optimal solution. \end{theorem} \begin{proof} Letting $r$ denote the largest radius needed to cover $P$ with geodesic disks centered at the points returned by $gonzalezPlacement$, it is well known (see \cite{kcenterRef}) that all centers found by $gonzalezPlacement$ have distance of at least $r$. Thus centering disks of radius $r/2$ at these points provides a $k$-Packing, which, according to Proposition \ref{pack2cov} has a minimum radius of at least a fourth of the optimal solution. \end{proof} \begin{prop} Let $s^*$ denote the optimal radius of the Geodesic $k$-Packing problem and let $r$ be the maximum radius of an arbitrary Geodesic $k$-Cover. It then holds that $s^* \leq 2r$. \label{pack2cov} \end{prop} \begin{proof} For contradiction suppose that $s^* > 2r$ and let $C$ be the centers of a packing achieving such a minimum radius. Let $q \in P$ be a point lying halfway between two centers $c,c'\in C$ for which $d(c,c') = 2s^*$. Observe that by the (geodesic) triangle inequality, $\min_{c \in C} d(q,c) \geq s^* > 2r$ and thus no geodesic radius $r$ disk containing $q$ can contain any element of $C$. But this contradicts the fact that the polygon can be covered by $k$ disks of maximum radius $r$, since each of the $k$ points in $C$ need to be contained in a disk and no disk can contain more than one point of $C$. \end{proof} Using a similar argument as for the proof of Theorem \ref{hardCov}, we obtain the following Theorem. \begin{theorem} Geodesic $k$-Packing is $\mathsf{NP}$-hard. \end{theorem} \section{Exact Covering with Two Geodesic Disks} \label{exact2cov} In this section we are studying the problem of covering a simple polygon $P$ with two geodesic disks of minimum maximal radius. We solve this problem by first considering the decision version, i.e. whether $P$ can be covered with two radius $r$ disks. 
We then apply parametric search \cite{Megiddo83applyingparallel} on the decision algorithm in order to solve the minimization problem. Following \cite{Shin98computingtwo}, the basic idea for solving the decision problem is to first compute an arrangement $\mathcal{C}$ of geodesic radius $r$ circles, each centered at a convex vertex of $P$ in time $O(n^2)$ using the algorithm in \cite{Guibas} (see also \cite{geodPac}). If $P$ can be covered by two geodesic disks of radius $r$ then it can be covered with two such disks centered on arcs of the circles in $\mathcal{C}$. This can be seen to hold, by noting that such a configuration minimizes the distance between the centers of the two covering disks. The $testTwoDiskCover$ algorithm now tests for each arc pair in $\mathcal{C}$ whether two disks centered at these arcs cover all vertices of $P$. It is easy to see that independently of where in an arc of $\mathcal{C}$ a radius $r$ disk is centered, it always contains the same vertices of $P$. Furthermore, Lemma \ref{twoEdges} ensures that if all vertices are covered, then there are at most two uncovered edges. Using the algorithm of \cite{Shin98computingtwo}, one can check in constant time if the at most two uncovered edges can be covered with two disks centered in the same arcs as the current disks. Lastly, according to Lemma \ref{boundInt}, a completely covered boundary implies a covering of the interior and thus correctness of the $testTwoDiskCover$ algorithm follows. Furthermore, it is easy to see that the running time of $testTwoDiskCover$ is $O(n^5)$. Thus applying parametric search on the decision problem results in an algorithm for finding two geodesic disks of minimum maximal radius covering $P$ which runs in time $O(n^8 \log n)$. \begin{algorithm} \begin{algorithmic} \STATE $\forall v \in V_{convex}(P)$ : center a geodesic $r$-circle $C_v$ on $v$ \COMMENT{$V_{convex}(P)$ denoting the set of convex vertices of $P$} \STATE build circle arrangement $\mathcal{C}$ \FOR{$\{p,q\} \in V_{convex} \times V_{convex}$ with $p \neq q$} \FOR{Arc $A \in C_p$ of $\mathcal{C}$} \STATE \COMMENT{$C_p$ denoting the geodesic $r$-circle centered at $p$.} \STATE center a disk $D_a$ at some $a \in A$ \FOR{Arc $B \in C_q$ of $\mathcal{C}$} \STATE center a disk $D_b$ at some $b \in B$ \IF{$(V(P) \not\subseteq D_a \cup D_b)$} \STATE \COMMENT{$D_a$ and $D_b$ do not cover all vertices.} \STATE continue \ENDIF \STATE Let $E_{unvoc}$ be the $\leq 2$ uncovered edges. \IF{($\exists a' \in A, b' \in B$ s.t. disks $D_{a'}$ and $D_{b'}$ cover $E_{unvoc}$)} \RETURN $true$ \ENDIF \ENDFOR \ENDFOR \ENDFOR \RETURN $false$ \end{algorithmic} \caption{testTwoDiskCover($P,r$)} \label{algoTest} \end{algorithm} \begin{lemma} If two geodesic disks cover all vertices of a simple polygon then there are at most two uncovered edges. \label{twoEdges} \end{lemma} \begin{proof} By the triangle inequality it follows that if a geodesic disk covers both endpoints of an edge, it also covers its interior. Therefore, any uncovered edge has its endpoints covered by two different disks and thus there has to be a point in the interior of the edge which is equidistant from the two disk centers. Since in \cite{AronovGeo} it is shown that the geodesic bisector between two points has exactly two points on the polygon boundary, there can be at most two such edges. \end{proof} \begin{lemma} If two geodesic disks cover the boundary of a simple polygon, then they also cover its interior. 
\label{boundInt} \end{lemma} \begin{proof} Let $D_1$ and $D_2$ be two geodesic disks, centered at $c_1$ and $c_2$ respectively, which cover the boundary $\partial P$, and assume w.l.o.g.\ that $c_1$ and $c_2$ both lie on the $x$-axis. Assume for contradiction that there is a point $p \in P \setminus (D_1 \cup D_2)$ and let $\ell$ be the vertical line segment through $p$ whose endpoints $r$ and $s$ lie on $\partial P$. The shortest paths among $c_1, c_2, r$ and $s$ span a pseudo quadrilateral containing $p$ in its interior, and thus by the triangle inequality $\max\{d(c_1, r), d(c_1, s)\} \geq d(c_1, p)$ and $\max\{d(c_2, r), d(c_2, s)\} \geq d(c_2, p)$, contradicting the assumption that the boundary is covered. \end{proof} \section{Acknowledgments} The author would like to thank Peter Bra{\ss} and Jon Lenchner for introducing the problems to him and Ning Xu and Peter Bra{\ss} for helpful discussions. \end{document}
\begin{document} \title{A limiter-based well-balanced discontinuous Galerkin method for shallow-water flows with wetting and drying: Triangular grids} \author[uhh,cen]{Stefan Vater} \author[ucd]{Nicole Beisiegel} \author[uhh,cen]{Jörn Behrens} \address[uhh]{\orgdiv{Department of Mathematics}, \orgname{Universität Hamburg}, \orgaddress{Bundesstraße 55, \state{20146 Hamburg}, \country{Germany}}} \address[cen]{\orgdiv{CEN -- Center for Earth System Research and Sustainability}, \orgname{Universität Hamburg}, \orgaddress{Grindelberg 5, \state{20144 Hamburg}, \country{Germany}}} \address[ucd]{\orgdiv{School of Mathematics \& Statistics}, \orgname{University College Dublin}, \orgaddress{Belfield, \state{Dublin 4}, \country{Ireland}}} \authormark{VATER \textsc{et al.}} \corres{Stefan Vater, Institute of Mathematics, Freie Universität Berlin, Arnimallee 6, 14195 Berlin, Germany. \email{[email protected]}} \fundingInfo{ \fundingAgency{Volkswagen Foundation} \fundingNumber{Grant: ASCETE} \fundingAgency{European Union, FP7-ENV2013 6.4-3} \fundingNumber{Grant: 603839} \fundingAgency{Science Foundation Ireland} \fundingNumber{Grant: 14/US/E3111}} \abstract[Summary]{A novel wetting and drying treatment for second-order Runge-Kutta discontinuous Galerkin (RKDG2) methods solving the non-linear shallow water equations is proposed. It is developed for general conforming two-dimensional triangular meshes and utilizes a slope limiting strategy to accurately model inundation. The method features a non-destructive limiter, which concurrently meets the requirements for linear stability and wetting and drying. It further combines existing approaches for positivity preservation and well-balancing with an innovative velocity-based limiting of the momentum. This limiting controls spurious velocities in the vicinity of the wet/dry interface. It leads to a computationally stable and robust scheme -- even on unstructured grids -- and allows for large time steps in combination with explicit time integrators. The scheme comprises only one free parameter, to which it is not sensitive in terms of stability. A number of numerical test cases, ranging from analytical tests to near-realistic laboratory benchmarks, demonstrate the performance of the method for inundation applications. In particular, super-linear convergence, mass-conservation, well-balancedness, and stability are verified.} \keywords{Shallow water equations, Discontinuous Galerkin Methods, Wetting and Drying, Limiter, Well-balanced Schemes, Stability} \jnlcitation{\cname{ \author{Vater S.}, \author{N. Beisiegel}, and \author{J. Behrens}} \ctitle{A limiter-based well-balanced discontinuous Galerkin method for shallow-water flows with wetting and drying: Triangular grids}, submitted to \cjournal{International Journal for Numerical Methods in Fluids}, } \maketitle \section{Introduction} \label{sec:Introduction} In order to successfully compute near- and onshore propagation of ocean waves, depth-integrated equations such as the shallow water equations are commonly employed. While these are derived under the assumption that vertical velocities are negligible, they efficiently model large scale horizontal flows and wave propagation with high accuracy. Computational problems occur in the coastal area, where the shoreline, theoretically defined as the line of zero water depth, represents a moving boundary condition, which must be considered in the numerical scheme. 
Here, it is essential to construct a computational method, which concurrently fulfills the physical requirements for accurate coastal modeling: \begin{itemize} \item conservation of mass, \item exact representation of the shoreline (wetting and drying), and \item robust computation of perturbations from the steady state at rest (well-balancedness). \end{itemize} Although the moving shoreline can be incorporated into the numerical scheme by adjusting the computational domain, its implementation is difficult and often only simple flow configurations have been successfully considered with this approach \citep{Bates1999}. Only recently, an Arbitrary Lagrangian Eulerian (ALE) method together with a moving $r$-adaptive mesh was applied to more complex flow situations \citep{Arpaia2018}. Most commonly, an Eulerian approach is considered, where the mesh points are fixed and the numerical scheme is applied to all cells, regardless if they are wet or dry. Cells are flooded or run dry through the interaction with other cells, i.e., by fluxes. In recent years discontinuous Galerkin (DG) methods have gained a lot of scientific interest within the geophysical fluid dynamics (GFD) modeling community. They combine a number of desirable properties important for coastal applications such as conservation of physical quantities, geometric flexibility and accuracy \citep{Bernard2007,Giraldo2008,Dawson2013}. While Godunov-type finite volume (FV) methods are considered one of the most comprehensive tools for hydrologic modeling of coastal inundation problems \citep{Toro2007,Kesserwani2014}, several studies have investigated possible merits of the more complex DG formulation. General comparisons between the two discretization techniques point out that FV methods need wide stencils to attain high order of accuracy, whereas DG discretizations remain compact \citep{Zhou2001,Zhang2005}. This feature of DG formulations makes them particularly viable for $p$-adaptivity and parallelization. On the other hand the linear stability limit (CFL condition) is more generous for FV methods. Concerning coastal modeling, \citet{Kesserwani2014} argue that the extra complexity associated with DG discretizations pays off by providing higher-quality solution behavior on very coarse meshes. Furthermore, DG methods better approximate the near-shore velocity in certain situations \citep{Kesserwani2014,Vater2017}. While there is theoretically no barrier to extend the DG method to higher-order accuracy, several practical aspects impede this endeavor. For the time discretization appropriate integration schemes -- such as strong stability preserving methods -- have to be applied, which maintain crucial properties of the governing equations. Such methods can be hard to derive for high-order methods. In many practical applications data sets expose a large amount of roughness leading to small-scale variations in high order methods requiring appropriate filtering with associated order reduction. With respect to coastal inundation, no high-order convergence can be expected at the wet-dry interface due to the non-differentiable transition from wet to dry. Therefore, a second-order DG model seems to be a reasonable choice for practical coastal modeling \cite{Kesserwani2014}. 
This is also reflected in the DG literature, where most wetting and drying treatments are build on top of a second-order accurate DG discretization \citep{Gourgue2009,Kesserwani2010a,Bunya2009,Ern2008}, and only few go beyond formal second-order accuracy (e.g., \citet{Xing2013}). Several concepts addressing the above mentioned physical requirements for wetting and drying in a DG framework proved to be practical. To guarantee positivity of the fluid depth and conserve mass at the same time, various authors proposed a redistribution of mass within each cell \citep{Bunya2009,Ern2008,Xing2013}. This reduces the problem of positivity preservation to only requiring positivity in the mean, which can be guaranteed by a CFL time step restriction \citep{Xing2013}. Since the DG discretization may not exactly resolve the wet/dry interface, artificial gradients can occur in the surface elevation, which render the method unbalanced. To preserve a discrete steady state at rest in this case, it was proposed to neglect the gravity terms in such cells \citep{Bunya2009,Gourgue2009}. To retain a well-defined velocity computation, which is usually not a primary variable and calculated through the quotient of momentum and fluid depth, a so-called thin-layer approach was introduced, where a point is considered dry, if the fluid depth drops below a given tolerance \citep{Bunya2009}. Using this tolerance, the velocity can be set directly to zero in such a situation and the problem of possibly dividing by a zero fluid depth is circumvented. Although there are other approaches to deal with the wetting and drying problem \citep{Gourgue2009,Kaernae2011}, the most common procedure is based on slope-modification techniques \citep{Ern2008,Bunya2009,Kesserwani2010a,Xing2013}. Based on the works of \citet{Cockburn1998}, a generalized Minmod total variation bounded (TVB) limiter is usually combined with the wetting and drying treatment to guarantee linear stability and prevent oscillations in case of discontinuities. However, several authors point out that this slope limiting and the handling of wetting and drying may activate each other, such that their concurrent use can lead to instabilities. This conflict is circumvented by applying the TVB limiter only in those cells, where the wetting and drying algorithm is not activated. This is the starting point of the current study, in which we base our wetting and drying treatment on Barth/Jespersen-type limiters \citep{Barth1989}. To the authors' experience such limiters do not severely alter a discrete state at rest and small perturbations around it, when limiting in surface elevation. The application of the new non-destructive limiter leads to another problem, which does not seem to be as prominent for the TVB limiter. Because both, fluid depth and momentum diminish close to the shoreline, the quotient of both -- representing the velocity -- becomes numerically ill-conditioned. This may lead to spurious errors in the velocity values and result in severe time step restrictions induced by the CFL stability condition inherent in the discretization of hyperbolic problems. In order to solve this issue, our approach incorporates the velocity field into the limiting procedure for the momentum. The idea is borrowed from FV methods, where interface values are often reconstructed from other variables than the primary prognostic variables (see \citet{Leer2006} and references therein). 
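For concreteness, the following Python sketch (our schematic illustration, not the authors' implementation) shows one simple variant of two of the building blocks recalled above -- a mass-conservative treatment of negative nodal depths and the thin-layer velocity evaluation with a tolerance -- for a single cell with nodal values.
\begin{verbatim}
# Schematic sketch (our own illustration): simple variants of two ingredients
# discussed above, acting on the nodal values of one triangular cell.
import numpy as np

def positive_depth(h_nodes):
    """Redistribute mass inside a cell so that all nodal depths are >= 0
    while the cell mean (and hence the total mass) is preserved.
    A full implementation would iterate until all nodes stay non-negative."""
    h = np.asarray(h_nodes, dtype=float)
    mean = h.mean()
    if mean < 0.0:
        raise ValueError("cell mean depth must be non-negative")
    h = np.maximum(h, 0.0)
    excess = h.mean() - mean              # mass added by the clipping
    wet = h > 0.0
    if excess > 0.0 and wet.any():
        h[wet] -= excess * h.size / wet.sum()   # take it back from the wet nodes
    return h

def velocity(hu_node, h_node, h_tol=1e-6):
    """Thin-layer velocity: u = hu/h where the depth exceeds the tolerance,
    and u = 0 where the node is considered dry."""
    return hu_node / h_node if h_node > h_tol else 0.0

print(positive_depth([-0.2, 0.5, 0.6]))   # -> [0.0, 0.4, 0.5], cell mean 0.3 preserved
print(velocity(1e-9, 1e-8))               # -> 0.0 (node treated as dry)
\end{verbatim}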
Here, we develop an approach for DG methods to modify the primary variables based on other secondary variables. We stress that our scheme maintains common time step restrictions unlike implicit methods, such as in \citet{Meister2014}. The aim of this work is to introduce a new approach that addresses all of the above issues by combining existing with novel strategies for a robust, efficient, and accurate inundation scheme for explicit DG computations on general triangular grids. In this course, a previously developed one-dimensional limiter \citep{Vater2015} is generalized to the two-dimensional case. Here, we rely on a nodal DG formulation along the lines of \citet{Giraldo2008}, where we work with geometrical nodes and basis function expansions defined by nodal values. To preserve positivity, we adopt the ``positive-depth operator'' from \citet{Bunya2009}, but only applied to the fluid depth. Furthermore, cancellation of gravity terms is applied in cells adjacent to the wet/dry interface. The presented method is based on limiting total fluid height $H=h+b$, which is the sum of fluid depth $h$ and bathymetry $b$, and velocity in the momentum-based flux computation. This approach is based on the original idea of hydrostatic reconstruction for FV methods \citep{Audusse2004,Zhou2001,Noelle2006}. Two Barth/Jespersen-type limiters are employed -- the original edge-based version \citep{Barth1989} and a modified vertex-based development \citep{kuzmin_vertex-based_2010}, the latter being particularly suitable for triangular grids. We find only minor differences between these options and utilize them for computational efficiency reasons. Although the limiter from \citet{kuzmin_vertex-based_2010} also works for higher than second-order accuracy, for the reasons given above we stick to piecewise linear basis functions, and leave the extension of our approach to higher-order for future research. A set of six commonly used test cases is implemented to demonstrate stability, accuracy, convergence, well-balancedness and mass-conservation of our scheme in the presence of moving wet-dry interfaces. This manuscript is organized into four further sections. Following this introduction, we briefly introduce the equations and review the DG discretization scheme. We then detail the wetting and drying algorithm in section \ref{sec:WettingDrying}, before demonstrating rigorously the properties of the new limiting approach with numerical examples in section \ref{sec:Results}. We conclude with final remarks and an outlook for future applications. \section{The shallow water equations and their RKDG discretization} \label{sec:RKDGMethod} To model two-dimensional waves in shallow water and their interaction with the coast, this study employs the nonlinear shallow water equations. They are derived from the principles of conservation of mass and momentum and can be written compactly in conservative form \begin{equation} \label{eq:SWESystem} \fatvec{U}_t + \nabla \cdot \fatvec{F}(\fatvec{U}) = \fatvec{S}(\fatvec{U}), \end{equation} where the vector of unknowns is given by $\fatvec{U} = (h, h\fatvec{u})\tran$. Here and below, we have written the partial derivative with respect to time $t$ as $\fatvec{U}_t \equiv \pptext{\fatvec{U}}{t}$ and the divergence with respect to the spatial horizontal coordinates $\fatvec{x}=(x,y)\tran$ as $\nabla \cdot\fatvec{F}$, which is applied to each component of $\fatvec{F}$. 
The quantity $h=h(\fatvec{x},t)$ denotes the fluid depth of a uniform density water layer and $\fatvec{u}=\fatvec{u}(\fatvec{x},t)= (u(\fatvec{x},t),v(\fatvec{x},t))\tran$ is the depth-averaged horizontal velocity. The flux function is defined by $\fatvec{F}(\fatvec{U}) = \begin{pmatrix} h\fatvec{u} \\ h \fatvec{u} \otimes \fatvec{u} + \frac{g}{2} h^2\fatvec{I_{2}} \end{pmatrix} $, where $g$ is the gravitational acceleration and $\fatvec{I_{2}}$ the $2\times 2$ identity matrix. Furthermore, the bathymetry (bottom topography) $b=b(\fatvec{x})$ is represented by the source term $\fatvec{S}(\fatvec{U}) = (0, -gh \nabla b)\tran$. Note that realistic simulations might require further source terms such as bottom friction or Coriolis forcing, since these can significantly influence the wetting and drying as well as the propagation behavior. However, this paper focuses on novel slope limiting techniques for robust inundation modeling. Hence, we only consider the influence of bathymetry and leave the inclusion of further source terms for future investigation to avoid additional (stabilizing) diffusive effects caused, for example, by bottom friction. For the discretization using the DG method, the governing equations are solved on a polygonal domain $\Omega\subset \mathbb R^2$, which is divided into conforming elements (triangles) $C_i$. On each element, system \eqref{eq:SWESystem} is multiplied by a test function $\varphi$ and integrated. Integration by parts of the flux term leads to the weak DG formulation \begin{equation} \label{eq:WeakDGForm} \int_{C_i} \fatvec{U}_t \varphi \dint{\fatvec{x}} - \int_{C_i} \fatvec{F}(\fatvec{U})\cdot \nabla\varphi \dint{\fatvec{x}} + \int_{\partial C_i} \fatvec{F}^*(\fatvec{U}) \cdot \fatvec{n} \ \varphi \dint{\sigma} = \int_{C_i} \fatvec{S}(\fatvec{U}) \varphi \dint{\fatvec{x}} \ , \end{equation} where $\fatvec{n}$ is the unit outward pointing normal at the edges of element $C_i$. The interface flux $\fatvec{F}^*$ is not uniquely defined, as the solution can have different values at the interface in the adjacent elements. This problem is circumvented in the discretization by using an (approximate) solution of the corresponding Riemann problem. For the simulations in this study we use the Rusanov solver \citep{Toro2009}. Another integration by parts of the volume integral over the flux yields the so-called strong DG formulation \citep{Hesthaven2008} \begin{equation} \label{eq:StrongDGForm} \int_{C_i} \fatvec{U}_t \varphi \dint{\fatvec{x}} + \int_{C_i} \nabla \cdot \fatvec{F}(\fatvec{U})\ \varphi \dint{\fatvec{x}} + \int_{\partial C_i} \left(\fatvec{F}^*(\fatvec{U}) - \fatvec{F}(\fatvec{U})\right) \cdot \fatvec{n} \ \varphi \dint{\sigma} = \int_{C_i} \fatvec{S}(\fatvec{U}) \varphi \dint{\fatvec{x}} \ , \end{equation} which recovers the original differential equations within a cell, but with an additional term accounting for the jumps at the interfaces. We will deal with both formulations \eqref{eq:WeakDGForm} and \eqref{eq:StrongDGForm} in this work. The system of equations is further discretized using a semi-discretization in space with a piecewise polynomial ansatz for the discrete solution components and test functions $\varphi_k$. We obtain formal second-order accuracy by using piecewise linear functions, which are represented by nodal Lagrange basis functions \cite{Hesthaven2008,Giraldo2008}, based on the cell vertices as nodes.
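To make the preceding definitions concrete, the following minimal Python sketch evaluates the flux tensor $\fatvec{F}(\fatvec{U})$ and the bathymetry source term at a single point. It is illustrative only; the function names and the absence of a dry-point safeguard are our simplifications and not part of the actual implementation.

\begin{verbatim}
import numpy as np

def swe_flux(h, hu, hv, g=9.80616):
    """Flux tensor F(U) of the 2D shallow water equations at one point.

    Returns a 3x2 array whose rows correspond to (h, hu, hv) and whose
    columns correspond to the x- and y-directions. A small depth cutoff
    would be needed near dry points; it is omitted in this sketch.
    """
    u, v = hu / h, hv / h
    p = 0.5 * g * h**2                       # gravitational "pressure" term
    return np.array([[hu,          hv         ],
                     [hu * u + p,  hu * v     ],
                     [hv * u,      hv * v + p ]])

def swe_source(h, grad_b, g=9.80616):
    """Bathymetry source term S(U) = (0, -g h b_x, -g h b_y)."""
    return np.array([0.0, -g * h * grad_b[0], -g * h * grad_b[1]])
\end{verbatim}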
The solution in one element is then given by $\fatvec{U}(\fatvec{x},t) = \sum_j \fatvec{\tilde{U}}_j(t) \varphi_j(\fatvec{x})$, where $(\fatvec{\tilde{U}}_j(t))_j$ is the vector of degrees of freedom. The integrals are computed using 3-point and 2-point Gau\ss-Legendre quadrature for volume and line integrals, respectively. Note that the divergence of the flux is computed analytically at each quadrature point based on piecewise linear distributions of $h$ and $h\fatvec{u}$. For the gravitational part this leads to the identity $\nabla \cdot ( \frac{g}{2} h^2 \fatvec{I_{2}} ) = g h\nabla h$ within each cell. This discretization in space leads to a system of ordinary differential equations (ODEs) \begin{align*} \frac{\partial \fatvec{\tilde{U}}_\Delta}{\partial t} = \fatvec{H}_\Delta(\fatvec{\tilde{U}}_\Delta), \end{align*} where $\fatvec{\tilde{U}}_\Delta$ contains the degrees of freedom for all cells. The right-hand side $\fatvec{H}_\Delta(\fatvec{\tilde{U}}_\Delta)$ represents the discretized flux and source term. This system is then solved using a total-variation diminishing (TVD) $s$-stage Runge-Kutta scheme \cite{Shu1988,Gottlieb2001}. In each Runge-Kutta stage a limiter is applied, which is usually employed to stabilize the scheme in case of discontinuities. However, as stated above, it can also be used for dealing with the problem of wetting and drying. In this study, we employ Heun's method, which is the second-order representative of the standard TVD Runge-Kutta schemes. For the discretization of the bottom topography we use a piecewise linear representation, which is continuous across the interfaces. It is derived from the given data and fixed throughout a simulation. In order to achieve well-balancedness in the DG formulations \eqref{eq:WeakDGForm} and \eqref{eq:StrongDGForm}, exact quadrature of the terms involving $\tfrac{g}{2} h^2$ and the source term is a basic requirement \cite{Xing2006}. This ensures that the sum of cell integrals over flux and source terms together with the line integrals representing interface fluxes vanishes in the lake at rest case. It is achieved by the given quadrature rules. Note that one could also employ a discretization of the bottom topography that has jumps at the cell interfaces (e.g., resulting from an $L^2$ projection of the exact data). In this case one has to include higher-order correction terms, based on hydrostatic reconstruction of the interface values \citep{Audusse2004}, into the source term discretization \citep{Xing2006,Xing2010}. However, throughout this work we use continuous discrete representations of the bathymetry and exact quadrature rules. \section{Wetting and drying algorithm} \label{sec:WettingDrying} After having introduced the governing equations and the general DG discretization, we describe our approach for dealing with wetting and drying, which is a direct generalization of the one-dimensional limiter described in \citet{Vater2015} to the two-dimensional case of triangular meshes. It consists of a flux modification applied in cells with dry nodes and a specially designed limiter, which is non-destructive for the steady state at rest, ensures positivity of the fluid depth and leads to a stable velocity computation. The flux modification is needed to maintain the lake at rest.
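Since the limiter developed in this section is invoked once after every Runge-Kutta stage, it may help to sketch this embedding in code. The fragment below is a minimal illustration only; \texttt{dg\_rhs} and \texttt{apply\_limiter} are hypothetical placeholders for the discretized right-hand side $\fatvec{H}_\Delta$ and the combined wetting and drying limiter.

\begin{verbatim}
def heun_step(U, dt, dg_rhs, apply_limiter):
    """One step of the second-order TVD Runge-Kutta (Heun) scheme.

    U collects all degrees of freedom; the limiter is applied after
    every stage, as required by the wetting and drying treatment.
    """
    U1 = apply_limiter(U + dt * dg_rhs(U))                        # Euler stage
    return apply_limiter(0.5 * U + 0.5 * (U1 + dt * dg_rhs(U1)))  # averaging stage
\end{verbatim}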
While the positivity preservation is mostly taken from previous works \citep{Bunya2009,Xing2013}, we introduce a new strategy for the momentum, where we essentially limit the velocities but keep piecewise linear momentum distributions. \subsection{The wet/dry interface} By using a piecewise polynomial DG discretization and enforcing positivity, the discrete shoreline is located at cell interfaces. This can introduce artificial gradients of the fluid depth for the lake at rest in cells which have at least one node with zero fluid depth; we call the latter ``semi-dry'' cells (cf.\ figure \ref{fig:WetDryInterface}). Here, we define the discrete lake at rest by interpolating the exact surface elevation $H=h+b$ at triangle nodes and setting the momentum to zero. This results in a continuous representation of the fluid depth. On the other hand, there may be semi-dry cells that are approximated in a physically correct way (e.g., in a dam-break situation where the water comes from higher elevation) and which must be distinguished from the lake-at-rest case (figure \ref{fig:SemidryCellTypes}). We do this by comparing the maximum total height with the maximum bottom topography within a cell. If the maximum total height does not exceed the maximum bottom topography within cell $C_i$ by more than a small tolerance, i.e., \begin{equation} \label{eq:wetdrycrit} \max_{\fatvec{x} \in C_i} H(\fatvec{x}) - \max_{\fatvec{x} \in C_i} b(\fatvec{x}) < \mathrm{TOL}_\mathrm{wet} \ , \end{equation} we are possibly in a local lake-at-rest situation, and the cell must be specially treated. Here, we have introduced the parameter $\mathrm{TOL}_\mathrm{wet}$, which is a threshold for the fluid depth below which a point is considered dry. At such points the velocity, which is elsewhere computed as the quotient $(hu)/h$, is set to zero. Otherwise, if \eqref{eq:wetdrycrit} is not fulfilled, the cell is treated as a completely wet cell. \begin{figure} \caption{Discontinuous Galerkin discretization of a partly dry domain using a triangular grid. The exact shoreline is printed in red, while the discrete one is in black. Semi-dry cells where at least one node has zero fluid depth are hatched.\label{fig:WetDryInterface}} \end{figure} \begin{figure} \caption{Different configurations of semi-dry cells using a DG discretization with piecewise linear elements in 1D (red dashed: surface elevation, green dotted: bottom topography). Black circles denote nodal values of fluid depth and bathymetry. Displayed are a configuration where \eqref{eq:wetdrycrit} \label{fig:SemidryCellTypes} \end{figure} To render the method well-balanced for the discrete lake at rest, we neglect the volume integrals over the terms involving $\frac{g}{2} h^2\fatvec{I_{2}}$ and the source term in semi-dry cells where \eqref{eq:wetdrycrit} is fulfilled. This is equivalent to setting $g$ to zero in these cells. For the strong DG formulation \eqref{eq:StrongDGForm} no further modifications are needed. In the case of the lake at rest where no advection is present, all volume integrals vanish for wet and semi-dry cells, and the flux difference $\fatvec{F}^*(\fatvec{U}) - \fatvec{F}(\fatvec{U})$, which is computed at the interfaces, automatically cancels due to continuity of the surface elevation and consistency of the numerical flux function. This is different for the weak DG formulation \eqref{eq:WeakDGForm}, in which the volume integrals associated with gravitational forces are likewise neglected.
The interface contributions involving $\frac{g}{2} h^2$ cannot simply be neglected, since the numerical flux at the wet interfaces couples the cell to its neighbors, which is needed in case of perturbations. For these wet interfaces an additional flux term is introduced which balances the numerical flux. It includes only the gravitational part and is based on the fluid depth of the semi-dry cell at the wet interface. The momentum equation in a semi-dry cell then reads \begin{gather*} \int_{C_i} \varphi (h\fatvec{u})_t \dint{\fatvec{x}} - \int_{C_i} \nabla \varphi \cdot \Bigl( h \fatvec{u} \otimes \fatvec{u} + \cancel{\frac{g}{2} h^2\fatvec{I_{2}}} \Bigr) \dint{\fatvec{x}}\, + \\ \sum_{j=1}^3 \int_{E_j^i} \Bigl( \fatvec{F}_{h\fatvec{u}}^* - \frac{g}{2} (h^{-})^2 \fatvec{I_{2}} \Bigr) \cdot \fatvec{n} \ \varphi \dint{\sigma} = - \int_{C_i} \cancel{\varphi g h \nabla b} \dint{\fatvec{x}} \ , \end{gather*} where $h^{-}$ is the value of the fluid depth at the interface based on cell $C_i$, and $E_j^i$, $j\in\{1,2,3\}$ are the three edges of $C_i$. In the equation, we have canceled the above-mentioned volume integrals associated with gravitational forces. If this discretization is applied to the lake at rest with $\fatvec{u} \equiv \fatvec{0}$, then all the edge contributions in a semi-dry cell vanish. For dry edges this is because $h$ is zero, and for wet edges the difference computed under the integral cancels to zero. Note, however, that well-balancing is more easily accomplished using the strong form, which also leads to slightly better results, as we will see in section \ref{sec:Results}. The described modifications can also be interpreted as a flux limiting approach. \subsection{Limiting of the fluid depth} \label{sec:LimitDepth} Limiting with respect to fluid depth is based on a Barth/Jespersen-type limiter \citep{Barth1989}, which fulfills the requirement not to alter a well-balanced discrete solution when the limiting is applied to the total height $H=h+b$. Positivity is attained by ensuring positivity in the mean and redistributing fluid depth within each cell. Barth/Jespersen-type limiters modify the solution within a cell such that it stays between the maximum and minimum of the cell mean values of adjacent cells. In this work we study the original version by Barth and Jespersen, which incorporates the cells connected to the cell under consideration by a common edge. For comparison, we additionally consider a generalization for triangular grids introduced by \citet{kuzmin_vertex-based_2010}, which incorporates the cells connected by a common vertex. We refer to these two versions as ``edge-based'' and ``vertex-based'' limiter, respectively. Note that the main goal is not to compare these two limiters; we introduce both versions to offer some flexibility in the computational setup, since some algorithms require edge-based computations for efficiency, whereas others are organized by nodal representations. Given the cell average or centroid value $H_c = \overline{H} = \overline{h+b}$ of the total height, the piecewise linear in-cell distribution can be described by $H(\fatvec{x}) = H_c + (\nabla H)_c \cdot (\fatvec{x}-\fatvec{x}_c)$.
A Barth/Jespersen-type limiter modifies this to \begin{equation*} \hat{H}(\fatvec{x}) = H_c + \alpha_e (\nabla H)_c \cdot (\fatvec{x}-\fatvec{x}_c), \quad 0 \le \alpha_e \le 1 , \end{equation*} where $\alpha_e$ is chosen, such that the reconstructed solution is bounded by the maximum and minimum centroid values of a given cell neighborhood: \begin{equation*} H_c^\mathrm{min} \le \hat{H}(\fatvec{x}) \le H_c^\mathrm{max} . \end{equation*} For the original Barth/Jespersen limiter this cell neighborhood is given by the considered cell and the three cells sharing an edge with this cell. In case of the limiter described by \citet{kuzmin_vertex-based_2010}, the cell neighborhood is given by the considered cell and the surrounding cells sharing a vertex with this cell. The correction factor is explicitly defined as \begin{equation*} \alpha_e = \min_i \begin{cases} \min \left\{ 1, \frac{H_c^\mathrm{max} - H_c}{H_i - H_c}\right\}, & \text{if } H_i - H_c > 0 \\ 1, & \text{if } H_i - H_c = 0 \\ \min \left\{ 1, \frac{H_c^\mathrm{min} - H_c}{H_i - H_c}\right\}, & \text{if } H_i - H_c < 0 \end{cases} \end{equation*} where $H_i$ are the in-cell values of $H$ at the three vertices of the triangle. The limited fluid depth $\hat{h}$ is then recovered by $\hat{h} = \hat{H} - b$. Positivity is enforced in a second step by the positive depth operator originally proposed by \citet{Bunya2009}. Note that this approach is closely related to the positivity preserving limiter introduced in \citet{Xing2013}, the latter being also suitable for higher order elements. Here we also follow \citet{Xing2013} by relying on a CFL time step restriction to preserve positivity in the mean. Applied to the RKDG2 method, the CFL limit is 1/3, which is less restrictive than the time step restriction to ensure linear stability. Let us express the piecewise linear function $\hat{h}$ by its nodal representation with Lagrange basis functions \begin{equation*} \hat{h}(x,y) = \sum_{i=1}^3 \hat{h}_i \, \varphi_i(x,y) \quad \text{for } (x,y) \in C_k , \end{equation*} where we take as nodes the vertices of the triangle $C_k$. Then positivity on the whole triangle is obtained by requiring positivity for the nodal values. Denoting the final limited values by $h^{\lim}$ and $h^{\lim}_i$ with $h^{\lim} = \sum h^{\lim}_i \, \varphi_i$, $h^{\lim}_i$ is determined by the following procedure: If $\hat{h}_i \ge 0 \ \forall i \in \{ 1, 2, 3\}$, then \begin{equation*} h^{\lim}_i = \hat{h}_i, \ i \in \{ 1, 2, 3\} . \end{equation*} Otherwise we determine the order of nodal indices $n_i \in \{ 1, 2, 3\}$ that satisfy $\hat{h}_{n_1} \le \hat{h}_{n_2} \le \hat{h}_{n_3}$ and compute the values in the following sequence: \begin{align*} h^{\lim}_{n_1} &= 0\\ h^{\lim}_{n_2} &= \max\{ 0, \hat{h}_{n_2} - (h^{\lim}_{n_1}-\hat{h}_{n_1}) / 2 \}\\ h^{\lim}_{n_3} &= \hat{h}_{n_3} - (h^{\lim}_{n_1}-\hat{h}_{n_1}) - (h^{\lim}_{n_2}-\hat{h}_{n_2}) \end{align*} As \citet{Bunya2009} show, this algorithm lowers the depths at nodes $n_2$ and $n_3$ by equal amounts, and the algorithm is mass conserving. \subsection{Velocity-based limiting of the momentum} In a last step the momentum distribution is modified by limiting the in-cell variation of the resulting velocity distribution. This provides a stable computation near the wet/dry interface in situations when both, $h$ and $(h\fatvec{u})$ are small. It is designed to keep a piecewise linear momentum distribution with fixed cell mean values. 
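Before turning to the momentum, the fluid depth limiting of section \ref{sec:LimitDepth} can be summarized in a compact form. The following Python sketch is purely illustrative (the names and the per-cell data layout are ours); it computes the correction factor $\alpha_e$ for one cell and applies the positive depth operator to the limited nodal depths.

\begin{verbatim}
def correction_factor(H_c, H_nodes, H_min, H_max):
    """Barth/Jespersen-type factor alpha_e for one cell.

    H_c: centroid (mean) value of the total height, H_nodes: the three
    unlimited vertex values, H_min/H_max: extrema of the centroid values
    over the chosen (edge- or vertex-based) neighborhood.
    """
    alpha = 1.0
    for H_i in H_nodes:
        d = H_i - H_c
        if d > 0.0:
            alpha = min(alpha, (H_max - H_c) / d)
        elif d < 0.0:
            alpha = min(alpha, (H_min - H_c) / d)
    return alpha

def positive_depth(h_hat):
    """Positive depth operator: redistribute mass so all nodal depths are >= 0."""
    if all(h >= 0.0 for h in h_hat):
        return list(h_hat)
    n1, n2, n3 = sorted(range(3), key=lambda i: h_hat[i])  # ascending nodal depths
    h_lim = [0.0, 0.0, 0.0]
    h_lim[n1] = 0.0
    h_lim[n2] = max(0.0, h_hat[n2] - (h_lim[n1] - h_hat[n1]) / 2.0)
    h_lim[n3] = h_hat[n3] - (h_lim[n1] - h_hat[n1]) - (h_lim[n2] - h_hat[n2])
    return h_lim
\end{verbatim}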
As noted in the introduction, this approach originates from FV methods, where the reconstruction of interface values by means of variables other than the primary ones has a long tradition \cite{Leer2006,Audusse2004}. For FV methods this does not pose a problem, since the reconstructed values are only used for flux computation. In DG methods the situation is more complicated, since the whole in-cell solution is limited and used throughout the computations. Therefore, one is usually bound to use the primary variables for limiting. For the momentum limiting we first compute preliminary limited velocity components $\hat{u}_i$ (and similarly $\hat{v}_i$) at each node $i$ of the triangle \begin{equation*} \hat{u}_i = \max\{\min\{ u_i, u_c^\mathrm{max}\}, u_c^\mathrm{min}\} \end{equation*} where $u_i = (hu)_i/h_i$ and $u_c = (hu)_c/h_c$ are the $x$-velocities computed from the nodal and centroid values of momentum and (the unlimited) fluid depth. Note that in case $h_i<\mathrm{TOL}_\mathrm{wet}$ the velocity is set to 0. The minimum and maximum values $u_c^\mathrm{min/max}$ are determined as in subsection \ref{sec:LimitDepth} for the total height by considering the centroid values of the neighboring cells which share a common edge (edge-based limiter) or a common vertex (vertex-based limiter) with the cell. From these preliminary values we construct three different linear momentum distributions, each based on two of the three preliminary nodal velocities, while keeping the cell mean value of the momentum and the distribution of the fluid depth. For the three possibilities we obtain a velocity for the third node with \begin{gather*} \hat{u}^{23}_1 = \frac{3 (hu)_c - h^{\lim}_2\cdot\hat{u}_2 - h^{\lim}_3\cdot\hat{u}_3}{h^{\lim}_1}, \quad \hat{u}^{13}_2 = \frac{3 (hu)_c - h^{\lim}_1\cdot\hat{u}_1 - h^{\lim}_3\cdot\hat{u}_3}{h^{\lim}_2}, \\ \hat{u}^{12}_3 = \frac{3 (hu)_c - h^{\lim}_1\cdot\hat{u}_1 - h^{\lim}_2\cdot\hat{u}_2}{h^{\lim}_3}, \end{gather*} where the lower index denotes the node for which the velocity is computed and the upper index indicates which nodal velocities it is based on. The final limited momentum component is then chosen to produce the smallest in-cell velocity variation. Set \begin{equation}\label{eq:grad_velo} \begin{aligned} \delta \hat{u}_1 &= \max\{ \hat{u}^{23}_1, \hat{u}_2, \hat{u}_3 \} - \min\{ \hat{u}^{23}_1, \hat{u}_2, \hat{u}_3 \} , \\ \delta \hat{u}_2 &= \max\{ \hat{u}_1, \hat{u}^{13}_2, \hat{u}_3 \} - \min\{ \hat{u}_1, \hat{u}^{13}_2, \hat{u}_3 \} , \\ \delta \hat{u}_3 &= \max\{ \hat{u}_1, \hat{u}_2, \hat{u}^{12}_3 \} - \min\{ \hat{u}_1, \hat{u}_2, \hat{u}^{12}_3 \} . \end{aligned} \end{equation} If $\delta \hat{u}_1 \le \delta \hat{u}_i$ for $i \in \{2, 3\}$, then \begin{equation} \label{eq:hulim1} (hu)^{\lim}_1 = h^{\lim}_1 \cdot \hat{u}^{23}_1 , \quad (hu)^{\lim}_2 = h^{\lim}_2 \cdot \hat{u}_2 , \quad (hu)^{\lim}_3 = h^{\lim}_3 \cdot \hat{u}_3 . \end{equation} If $\delta \hat{u}_2 \le \delta \hat{u}_i$ for $i \in \{1, 3\}$, then \begin{equation}\label{eq:hulim2} (hu)^{\lim}_1 = h^{\lim}_1 \cdot \hat{u}_1 , \quad (hu)^{\lim}_2 = h^{\lim}_2 \cdot \hat{u}^{13}_2 , \quad (hu)^{\lim}_3 = h^{\lim}_3 \cdot \hat{u}_3 . \end{equation} Otherwise \begin{equation}\label{eq:hulim3} (hu)^{\lim}_1 = h^{\lim}_1 \cdot \hat{u}_1 , \quad (hu)^{\lim}_2 = h^{\lim}_2 \cdot \hat{u}_2 , \quad (hu)^{\lim}_3 = h^{\lim}_3 \cdot \hat{u}^{12}_3 . \end{equation} The final limited momentum is then given by $(hu)_h^{\lim} = \sum_i [(hu)^{\lim}_i \, \varphi_i(x,y)]$. The same procedure is applied to the $y$-momentum.
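The selection among the three candidate distributions translates into a few lines of code. The following sketch is illustrative only (the function name, the dry-node guard and the data layout are our own choices); it assumes the preliminary limited nodal velocities $\hat{u}_i$ and the limited nodal depths $h^{\lim}_i$ are given and returns the limited nodal momenta according to \eqref{eq:hulim1}--\eqref{eq:hulim3}.

\begin{verbatim}
def limit_momentum(hu_c, h_lim, u_hat, tol_wet=1e-6):
    """Velocity-based momentum limiting on one triangle (one component).

    hu_c: centroid value of the momentum component, h_lim: limited nodal
    fluid depths, u_hat: preliminary limited nodal velocities. Returns the
    limited nodal momenta; for fully wet cells the cell mean is preserved.
    """
    candidates = []
    for k in range(3):        # recompute the velocity at node k from the cell mean
        i, j = [m for m in range(3) if m != k]
        if h_lim[k] > tol_wet:
            u_k = (3.0 * hu_c - h_lim[i] * u_hat[i] - h_lim[j] * u_hat[j]) / h_lim[k]
        else:
            u_k = 0.0         # dry node: velocity set to zero
        u_cand = list(u_hat)
        u_cand[k] = u_k
        spread = max(u_cand) - min(u_cand)   # discrete in-cell velocity variation
        candidates.append((spread, u_cand))
    _, u_best = min(candidates, key=lambda c: c[0])
    return [h_lim[m] * u_best[m] for m in range(3)]
\end{verbatim}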
In conclusion, the wetting and drying algorithm can be summarized as follows: \begin{boxtext} \section*{Limiter-Based Wetting and Drying Treatment} \begin{enumerate} \item Flux modification \begin{enumerate} \item Set $g$ to 0 in the volume integrals of semi-dry cells that satisfy \eqref{eq:wetdrycrit}, and add the additional interface flux if the weak DG formulation is used. \end{enumerate} \item Limiting of fluid depth \begin{enumerate} \item\label{alg:limha} Apply edge-based \cite{Barth1989} OR vertex-based \cite{kuzmin_vertex-based_2010} limiter to total height $H=h+b$. \item Apply positive depth operator \cite{Bunya2009} to limited $\hat{h}$ obtained from $\hat{H}$ in step \ref{alg:limha}. \end{enumerate} \item Limiting of momentum \begin{enumerate} \item\label{alg:limma} Apply edge-based \cite{Barth1989} OR vertex-based \cite{kuzmin_vertex-based_2010} limiter to velocities at triangle nodes. \item\label{alg:limmb} Extrapolate in-cell velocity distribution from two out of three nodal values obtained in step \ref{alg:limma}. \item Determine discrete in-cell velocity variation from the three distributions obtained in step \ref{alg:limmb}. \item Compute limited momentum from velocities with smallest variation and limited fluid depth (cf.\ \eqref{eq:hulim1}--\eqref{eq:hulim3}). \end{enumerate} \end{enumerate} \end{boxtext} \section{Numerical results} \label{sec:Results} In the following we demonstrate the major functionalities of the limiting procedure described in the previous section. Using a hierarchy of test cases, starting with configurations for which the exact solution of the shallow water equations is known, we show the scheme's mass conservation and well-balancedness, as well as the correct representation of the shoreline. Two test cases, which originate from \citet{Thacker1981}, particularly demonstrate the scheme's ability to represent a moving shoreline. Two further test cases are derived from laboratory experiments, which, together with the runup onto a linearly sloping beach, are standard test cases for the evaluation of operational tsunami models \citep{Synolakis2007}. For the simulations, we use both versions of the limiter, i.e., vertex-based and edge-based limiting of total height and velocity, and show that, although they differ slightly in computational complexity and added numerical diffusion, they both yield comparable and accurate results. The presented limiter depends on one free parameter -- the wet/dry tolerance $\mathrm{TOL}_\mathrm{wet}$ -- that defines the fluid depth threshold below which a point is considered dry. This is especially important for the computation of the velocity. We comment on this tolerance for each test case and, overall, conclude that the stability of the limiting strategy is not sensitive to it. However, it can influence the discrete location of the wet/dry interface. Apart from the first test case, where we compare the weak and strong DG formulations concerning well-balancedness, we present only results obtained with the strong DG form. Although the strong DG form shows somewhat better results in the case of the lake at rest, the other test cases produce nearly indistinguishable results for both DG formulations. Throughout this section, we set the acceleration due to gravity to $g=9.80616\, \mathrm{m/s^2}$ and omit the units of measurement of the physical quantities, which should be thought of as given in standard SI units, with $\mathrm{m}$ (meters), $\mathrm{s}$ (seconds), etc.
For the spatial discretization we mostly use regular grids, which are usually derived from one or more rectangles, each divided into two triangles. Such a grid is then repeatedly uniformly refined by bisection to obtain the desired resolution (see \citet{Behrens2005} for details on grid generation). The discrete initial conditions and the bottom topography are derived from the analytical ones by interpolation at the nodal points (vertices of triangles). Explicit methods for the solution of hyperbolic problems are usually subject to a CFL time step restriction \citep{Courant1928}, which is of the form $\Delta t \le \mathrm{cfl} \, h_\Delta / c_\mathrm{max}$. Here, $h_\Delta$ denotes a grid parameter and $c_\mathrm{max}$ is the maximum propagation speed of information. For one-dimensional problems, \citet{Cockburn1991} proved that $\mathrm{cfl_{1D}} = 1/3$ for the RKDG2 method. However, this cannot be directly transferred to two-dimensional triangular grids. We follow the work of \citet{Kubatko2008}, who propose as grid parameter the radius of the smallest inscribed circle of the triangles surrounding a vertex. These nodal values are further aggregated to each triangle by taking the minimum of its three vertex values. The 2D CFL number then approximately relates to its 1D counterpart by $\mathrm{cfl_{2D}} \approx 2^{-1/(p-1)} \mathrm{cfl_{1D}}$, where $p$ is the order of discretization. This results in $\mathrm{cfl_{2D}} \approx 0.233$ for our RKDG2 method. Note that this limit is more restrictive than the time step restriction of 1/3 needed to ensure positivity in the mean. If not stated otherwise, we choose a constant time step size $\Delta t^n = \Delta t$ which guarantees that the CFL condition is essentially satisfied. Besides fluid depth and momentum we often show the velocity $\fatvec{u} = (h\fatvec{u}) / h$ with its in-cell distribution, which is derived diagnostically as the quotient of the two other quantities. \subsection{Lake at rest} As a first test we conduct two simple ``lake at rest'' experiments with different bathymetries to examine the well-balancedness of our scheme. In a square, periodic domain $\Omega=[0,1]^2$, the first bathymetry is defined by $b(\fatvec{x}) = \max\left\{ 0, 0.25 - 5 ((x-0.5)^2 + (y-0.5)^2)\right\}$, which features a local, not fully submerged parabolic mountain centered around the midpoint $(0.5,0.5)^{\top}$. The initial fluid depth and velocity are given by \begin{equation}\label{eq:iniLakeAtRest} \begin{aligned} h(\fatvec{x}, 0) &= \max\left\{0, 0.1 - b(\fatvec{x}) \right\}, \\ \fatvec{u}(\fatvec{x},0) &= \fatvec{0}. \end{aligned} \end{equation} This is a steady state solution which should be reproduced by the numerical scheme. We run simulations using the strong and weak DG formulations with both limiters and a grid resolution of $\Delta x \approx 0.022$ (leg of a right-angled triangle) until $T_\mathrm{end} = 40$. A time step of $\Delta t = 0.002$ is used, which results in 20\,000 time steps. The wet/dry tolerance is chosen as $\mathrm{TOL}_\mathrm{wet}=10^{-6}$. The results are depicted in figure \ref{fig:wellbalancing}. We show the error in the $L^2$ as well as the maximum ($L^\infty$) norm for the fluid depth (top row) and momentum (bottom row) for all four possible configurations. All combinations are well-balanced almost up to machine precision with respect to the fluid depth. The momentum is also balanced, except for the vertex-based limiter in combination with the weak DG form, which shows a slowly growing -- yet well controlled -- momentum error.
The simulations using the strong DG form show generally smaller errors. This behavior can be explained by the superior balancing property of the strong DG formulation, which has already been observed in \citet{Beisiegel2014}. \begin{figure} \caption{Lake at rest: errors over time for the fluid depth (top) and momentum (bottom) in the $L^\infty$ (left) and $L^2$ norms (right) for the first bathymetry setup. Vertex-based limiter with weak DG formulation (red), edge-based limiter with weak DG formulation (blue), vertex-based limiter with strong DG formulation (magenta dashed), edge-based limiter with strong DG formulation (cyan dashed).\label{fig:wellbalancing}} \end{figure} For the second bathymetry setup, we define the sub-domains \begin{align*} \Omega_1 &= \left\{ \fatvec{x} \in \Omega \big| \|\fatvec{x} - (0.35,0.65)^{\top}\| < 0.1 \right\},\\ \Omega_2 &= \left\{ \fatvec{x} \in \Omega \big| \|\fatvec{x} - (0.55,0.45)^{\top}\| < 0.1 \right\},\\ \Omega_3 &= \left\{ \fatvec{x} \in \Omega \big| | x - 0.47 | < 0.25 \wedge | y - 0.55 | < 0.25 \right\} \text{ and}\\ \Omega_4 &= \left\{ \fatvec{x} \in \Omega \big| \|\fatvec{x} - (0.5,0.5)^{\top}\| < 0.45 \right\}. \end{align*} The bathymetry is given by \begin{equation*} b (\fatvec{x}) = \begin{cases} 0.15 & \text{if } \fatvec{x} \in \Omega_1 \\ 0.05 & \text{if } \fatvec{x} \in \Omega_2 \\ 0.07 & \text{if } \fatvec{x} \in \Omega_3 \setminus \{\Omega_1 \cup \Omega_2\} \\ 0.03 & \text{if } \fatvec{x} \in \Omega_4 \setminus \{\Omega_3\} \\ 0 & \text{otherwise.} \end{cases} \end{equation*} The initial conditions are given as in \eqref{eq:iniLakeAtRest} (cf.\ figure \ref{fig:wellbalstep} left). Although the analytical setup has discontinuities in the bathymetry, we also interpolate bathymetry and initial condition by piecewise linear and continuous approximations at the cell vertices. Using the same grid and time step size as in the first setup, we obtain $L^\infty$ errors for fluid depth and momentum over time as displayed in figure \ref{fig:wellbalstep}, right. One can see that the scheme is also well-balanced in this case with steps in the bathymetry. \begin{figure} \caption{Lake at rest: Initial setup for the second bathymetry configuration (left), and $L^\infty$ errors over time for fluid depth (magenta) and momentum (dark purple). Vertex-based limiter with strong DG formulation.\label{fig:wellbalstep}} \end{figure} \subsection{Tsunami runup onto a linearly sloping beach} A standard benchmark problem to evaluate wetting and drying behavior of a numerical scheme is the wave runup onto a plane beach. We perform this quasi-one-dimensional test case \citep{LongWaveRunupModels1_2004} to compare the results to the ones already obtained with the one-dimensional version of the scheme in \citet{Vater2015}. The test case admits an exact solution following a technique developed in \citet{Carrier2003}. In a rectangular domain $\Omega = [-400, 50\,000]\times[0,400]$ with linearly sloping bottom topography $b(\fatvec{x}) = 5000 - \alpha x$, $\alpha = 0.1$, and initial velocity $\fatvec{u}(\fatvec{x},0)=\fatvec{0}$, an initial surface elevation is prescribed in non-dimensional variables by \begin{equation*} \eta'(x') = a_1 \exp\left\{ -k_1 (x'-x_1)^2 \right\} - a_2 \exp\left\{ -k_2 (x'-x_2)^2 \right\} . \end{equation*} The parameters are given by $a_1 = 0.006$, $a_2 = 0.018$, $k_1 = 0.4444$, $k_2 = 4$, $x_1 = 4.1209$ and $x_2 = 1.6384$.
Then, the initial surface profile is recovered by taking $x=Lx'$ and $\eta = \alpha L \eta'$ with the reference length $L=5000$ (cf.\ figure~\ref{fig:TsunmaiRunupIni}). As the solution near the left boundary of $\Omega$ is always dry, and outgoing waves should not be reflected at the right boundary, we set wall and transparent boundary conditions for the left and right boundaries, respectively. The boundaries in the $y$-direction are set to be periodic to avoid any artifacts coming from the definition of the boundary conditions. \begin{figure} \caption{Tsunami runup onto a beach: initial surface elevation at $t=0$.\label{fig:TsunmaiRunupIni}} \end{figure} The simulations are run with a time step of $\Delta t = 0.04$, and a spatial resolution of $\Delta x = 50$ which corresponds to the length of the leg of a right-angled triangle. The resulting Courant number is approximately $0.25$, which is attained offshore where the fluid depth is largest. The wet/dry tolerance is set to $\mathrm{TOL}_\mathrm{wet}=10^{-2}$. We compare our numerical results with the analytical solution on the interval $x\in[-400, 800]$ at times $t = 160$, $175$ and $220$. The results are depicted in figure \ref{fig:runup}. The left column shows the free surface elevation, where the simulations with both limiters (red and blue lines) match the exact solution (green line). However, it can be observed that due to its smaller stencil the edge-based limiter develops spurious discontinuities at cell edges. The middle and right columns show the momentum and the reconstructed velocity at the respective times. As expected, the momentum is reproduced well, while the velocity shows some spurious over- and undershoots in the nearly dry area, but good results elsewhere. In general, the vertex-based limiter yields qualitatively better results for this test case. The results are comparable to those in one space dimension given in \citet{Vater2015} and demonstrate that our two-dimensional extension behaves similarly to the original limiter. \begin{figure} \caption{Tsunami runup onto a beach: surface elevation, $x$-momentum and $x$-velocity (derived by $u=(hu)/h$) along line $y=200$ at times $t=160$ (top), $t=175$ (middle) and $t=220$ (bottom). Exact solution (green dash-dotted), vertex-based limiter (red), edge-based limiter (blue).\label{fig:runup}} \end{figure} \subsection{Long wave resonance in a paraboloid basin} The following two test cases particularly address the correct representation of a moving shoreline. They were originally defined in \citet{Thacker1981} and have an analytical solution. The first problem is a purely radially symmetric flow. Here, we also discuss the impact of the wet/dry tolerance on our method. Note that we work with a scaled version of the problem as given in \citet{Lynett2002}. In a square domain $\Omega = [-4000, 4000]^2$ with a parabolic bottom topography given by $b(\fatvec{x}) = \tilde{b}(r) = H_0 \tfrac{r^2}{a^2}$ where $r = |\fatvec{x}| = \sqrt{x^2 + y^2}$, the initial fluid depth and velocity are prescribed by \begin{align*} h(\fatvec{x},0) &= \max\left\{ 0, H_0 \left(\frac{\sqrt{1-A^2}}{1-A} - \frac{|\fatvec{x}|^2 (1-A^2)}{a^2 (1-A)^2}\right)\right\} \\ \fatvec{u}(\fatvec{x},0) &= \fatvec{0} \end{align*} where \begin{equation*} A = \frac{a^4 - r_0^4}{a^4 + r_0^4} , \end{equation*} and $H_0 = 1$, $r_0 = 2000$, $a = 2500$.
The exact radially symmetric solution is then given by \begin{align*} h(\fatvec{x}, t) &= \max\left\{ 0, H_0 \left(\frac{\sqrt{1-A^2}}{1-A \cos(\omega t)} - \frac{|\fatvec{x}|^2 (1-A^2)}{a^2 (1-A \cos(\omega t))^2}\right)\right\} \\ (u,v)(\fatvec{x},t) &= \begin{cases} \displaystyle\frac{\omega A \sin(\omega t)}{2 (1-A \cos(\omega t))} \, \fatvec{x} & \text{if } h(\fatvec{x}, t) > 0\\ \fatvec{0} & \text{otherwise,} \end{cases} \end{align*} where $\omega$ is the frequency defined as $\omega = \sqrt{8 g H_0} / a$. The simulations are run for two periods ($P$) of the oscillation, i.e.\ until $T_\mathrm{end} = 2P = 2\cdot(2\pi/\omega)$, with a time step of $\Delta t = P/700 \approx 2.534$, and a spatial resolution of $\Delta x = 125/\sqrt{2} \approx 88.39$ (leg of a right-angled triangle). The initial Courant number is approximately $0.16$. It is chosen lower than the theoretical maximum because spurious velocities can occur in nearly dry regions and affect the Courant number at later times of the simulation. The maximum Courant number that is obtained for $\mathrm{TOL}_\mathrm{wet} = 10^{-14}$ is $0.22$, whereas it is 0.16 for $\mathrm{TOL}_\mathrm{wet} = 10^{-2}$. The results for fluid depth, $x$-momentum and $x$-velocity at times $t=1.5P,1.75P$ and $2P$ over a cross section $y=0$ are shown in figure \ref{fig:thacker1_crossx_cmplim}. Qualitatively, the vertex-based limiter (red line) shows slightly better results than the edge-based limiter (blue line). Note the small scale for the momentum at $t=1.5P$ and $t=2P$ in comparison with $t=1.75P$. The momentum plots in comparison with the velocity plots also show the action of the limiter: while the momentum, especially for the edge-based limiter, is non-monotone in some regions, the velocity is mostly monotone. Only near the wet/dry interface some spurious velocities are visible when the overall velocity is close to zero. \begin{figure} \caption{Long wave resonance in a paraboloid basin: cross section of fluid depth, $x$-momentum and $x$-velocity (left to right) at times $t=1.5P$, $t=1.75P$ and $t=2P$ (top to bottom) at $y=0$. Exact solution (green dash-dotted), vertex-based limiter (red), edge-based limiter (blue), $\mathrm{TOL} \label{fig:thacker1_crossx_cmplim} \end{figure} A comparison of simulation results at final time $t=2P$ with different wet/dry tolerances $\mathrm{TOL}_\mathrm{wet}$ reveals that the results for the prognostic variables as well as the reconstructed velocities are largely insensitive to the chosen tolerance (see figure \ref{fig:thacker1_crossx_wdTOL}). However, as the wet/dry tolerance becomes smaller, a larger area is considered wet by the scheme. This is visible in the velocity plot, where spurious velocities start to appear in nearly dry regions. We further illustrate the effect of the parameter $\mathrm{TOL}_\mathrm{wet}$ by comparing fully two-dimensional fields obtained with the vertex-based limiter (figure \ref{fig:thacker1_2d}). The top and bottom rows show the fluid depth and velocity with $\mathrm{TOL}_\mathrm{wet} = 10^{-2}$ and $\mathrm{TOL}_\mathrm{wet} = 10^{-8}$, respectively. The results for the fluid depth are largely identical, with the exception that the area that the model recognizes as ``wet'' is much larger with a smaller tolerance. In the additional wet area obtained with a smaller tolerance, small values of momentum and fluid depth lead to spurious velocities.
However, we note that in spite of the observed existence of spurious velocities, these are still bounded and their magnitudes are within the range of the exact solution to the problem. A major aspect of the velocity-based limiter becomes apparent when compared to the non-velocity-based version of that same limiter, i.e., limiting directly in the momentum variable. In figure \ref{fig:thacker1_dt} we show the maximal possible global time step $\Delta t$ for a fixed CFL number $\mathrm{cfl} = 0.2$. We allow the time step to vary over time based on the CFL number and compute it using the numerical velocity and fluid depth. Both simulation runs use the same version of the vertex-based limiter with respect to the fluid depth. We observe that the velocity-based limiter (blue line in figure \ref{fig:thacker1_dt}) allows for a reasonable time step that does not show much variation. In contrast, if we do not use the velocity-based limiter, the simulation result shows large accumulations of spurious velocities in the first drying phase, leading to an unreasonably small time step (cyan line in figure \ref{fig:thacker1_dt}) to the extent that we were not able to finish the simulation within reasonable time. \begin{figure} \caption{Long wave resonance in a paraboloid basin: cross section of fluid depth, $x$-momentum and $x$-velocity (left to right) at time $t=2P$ at $y=0$ for the vertex-based limiter. Exact solution (green dash-dotted), solution with $\mathrm{TOL} \label{fig:thacker1_crossx_wdTOL} \end{figure} \begin{figure} \caption{Long wave resonance in a paraboloid basin: 2d view of fluid depth (left) and $x$-velocity (right) at time $t=2P$ for the vertex-based limiter. Note, that only the area is colored, where the fluid depth is above the wet/dry tolerance. $\mathrm{TOL} \label{fig:thacker1_2d} \end{figure} \begin{figure} \caption{Long wave resonance in a paraboloid basin: Resulting time step $\Delta t$ over time by keeping $\mathrm{cfl} \label{fig:thacker1_dt} \end{figure} \subsection{Oscillatory flow in a parabolic bowl} The second test case which goes back to \citet{Thacker1981} is also defined in a parabolic bowl, but describes a circular flow with a linear surface elevation in the wet part of the domain. It is the 2D analogue of the 1D test case described in \citet{Vater2015}. Here we follow the particular setup of \citet{Gallardo2007}. In a square domain $\Omega=[-2,2]^2$ with bottom topography $b = b(\fatvec{x}) = 0.1 \left(x^2 + y^2 \right)$, an analytical solution of the shallow water equations is given by \begin{align*} h(\fatvec{x},t) &= \max\left\{ 0, 0.1 \left(x \cos(\omega t) + y \sin(\omega t) + \tfrac{3}{4} \right) - b(\fatvec{x}) \right\} \\ (u,v)(\fatvec{x},t) &= \begin{cases} \frac{\omega}{2} \bigl( -\sin(\omega t), \cos(\omega t) \bigr) & \text{if } h(\fatvec{x}, t) > 0\\ \fatvec{0} & \text{otherwise,} \end{cases} \end{align*} with $\omega = \sqrt{0.2 g}$. Starting with $t=0$ we ran simulations for two periods until $T_\mathrm{end} = 2P = 2\cdot(2\pi/\omega)$ of the oscillation with a time step of $\Delta t = P/1000 \approx 0.004487$. The spatial resolution is set to 8\,192 elements, which is a Cartesian grid with $64^2$ squares divided into two triangles of an edge length of $0.0625$ (leg of right angled triangle). Figure \ref{fig:thacker2_crossx} shows cross sections over the line $y=0$ at time $t=2P$. The exact solution is plotted in green and the numerical approximation in red and blue for the vertex-based and edge-based limiter, respectively. 
The tolerance is chosen as $\mathrm{TOL}_\mathrm{wet}=10^{-3}$. We observe good agreement of the numerical results with the analytical solution for fluid depth, momentum and velocity. Note, however, that the edge-based limiter tends to produce artificial discontinuities in the solution and to slightly under-predict the $y$-momentum due to a higher inherent diffusion, which results from the edge-based stencil. This also yields a velocity that is too small, which is visible in the 2D plots in figure \ref{fig:thacker2_2d}. Moreover, the triggered discontinuities are clearly visible in the contour plot of the $x$-momentum (middle column). The results with the vertex-based limiter (top) are smoother and show less diffusion. \begin{figure} \caption{Oscillatory flow: cross section at $y=0$ for time $t=2P$. Fluid depth, $x$-momentum, $y$-momentum, $x$-velocity, $y$-velocity (left to right, top to bottom). Exact solution (green dash-dotted), vertex-based limiter (red), edge-based limiter (blue), $\mathrm{TOL} \label{fig:thacker2_crossx} \end{figure} \begin{figure} \caption{Oscillatory flow: 2d view for vertex-based limiter (top) and edge-based limiter (bottom). Fluid depth, $x$-momentum, $y$-momentum (left to right) at time $t=2P$. White areas denote where the fluid depth is below the wet/dry tolerance $\mathrm{TOL} \label{fig:thacker2_2d} \end{figure} To show that our limiting approach is applicable to arbitrary grids, figure \ref{fig:thacker2_2dunstruct} shows analogous simulation results on a highly unstructured Delaunay grid with 1233 elements. Note that this grid has a coarser resolution than the one used in figures \ref{fig:thacker2_crossx} and \ref{fig:thacker2_2d}. As can be seen from the cross section plots, the results are similar to those on the other grids. \begin{figure} \caption{Oscillatory flow on unstructured grid: 2d view of fluid depth with grid (left) and cross section at $y=0$ for fluid depth and $y$-momentum (right) at time $t=2P$. Exact solution (green dash-dotted), vertex-based limiter (red), $\mathrm{TOL} \label{fig:thacker2_2dunstruct} \end{figure} Besides the accuracy on fixed grids, the convergence of the wetting and drying scheme is also of interest. While we cannot expect second-order convergence due to the non-smooth transition (kink) between wet and dry regions in the flow variables, the convergence rate should be at least approximately linear. For the convergence calculation we have computed the solution up to $t=2P$ on several grids with the number of cells ranging from $2\,048$ to $524\,288$, a fixed ratio $\Delta t / \Delta x$ and a wet/dry tolerance $\mathrm{TOL}_\mathrm{wet}=10^{-8}$. The experimental convergence rate is then calculated by the formula \begin{equation*} \gamma_c^f := \frac{\log(\|e_c\| / \|e_f\|)}{\log(\Delta x_c / \Delta x_f)} . \end{equation*} In this definition, $e_c$ and $e_f$ are the computed error functions of the solution on a coarse and a fine grid (denoted by the number of cells) and $\Delta x_c$ and $\Delta x_f$ are the corresponding grid resolutions. In figure \ref{fig:thacker2_convergence} and table \ref{table:thacker2_convergence} we show the results of this convergence analysis. The DG method converges with both limiters; however, the convergence rate achieved in the $L^2$ norm is higher with the vertex-based limiter ($\approx 1.6$) than with the edge-based limiter ($\approx 1$).
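For reference, the experimental convergence rate defined above amounts to only a few lines of Python (the error norms are assumed to be given; the numbers in the example are hypothetical):

\begin{verbatim}
import math

def convergence_rate(err_coarse, err_fine, dx_coarse, dx_fine):
    """Experimental order of convergence between two grid levels."""
    return math.log(err_coarse / err_fine) / math.log(dx_coarse / dx_fine)

# hypothetical L2 errors on two grids refined by a factor of two
print(convergence_rate(4.0e-3, 1.4e-3, 0.05, 0.025))   # approx. 1.5
\end{verbatim}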
The test case of an oscillatory flow in a parabolic bowl is also suitable to evaluate the conservation of mass and of total energy $E=\int_\Omega \left[ h\,(\fatvec{u}\cdot\fatvec{u})/2 + g h (h/2+b) \right] \dint{\fatvec{x}}$ for the numerical method, since there is no flow across the boundary of the domain. While mass conservation should hold up to machine accuracy, energy conservation can only hold within the approximation error. We can see that mass conservation is not affected by the slope limiters (left plot of figure \ref{fig:thacker2_mass}), while only the vertex-based limiter (right plot of the same figure) nearly conserves energy. This indicates that the edge-based limiter introduces some numerical dissipation. \begin{figure} \caption{Oscillatory flow: errors in fluid depth (left) and momentum (right) measured in the $L^2$ norm (circles) and the $L^\infty$ norm (squares). Vertex-based limiter (red), edge-based limiter (blue).\label{fig:thacker2_convergence}} \end{figure} \begin{table} \renewcommand{\arraystretch}{1.2} \begin{tabular}{l|cccc} & $L^2(h)$ & $L^2(m)$ & $L^\infty(h)$ & $L^\infty(m)$ \\\hline $\gamma_{2048}^{8192}$ & 1.6873 & 1.6230 & 0.9104 & 1.1587 \\ $\gamma_{8192}^{32768}$ & 1.6903 & 1.5996 & 1.3190 & 1.3072 \\ $\gamma_{32768}^{131072}$ & 1.5626 & 1.5671 & 0.8477 & 1.0294 \\ $\gamma_{131072}^{524288}$ & 1.5779 & 1.5901 & 1.1845 & 1.0847 \\ $\gamma_\mathrm{fitted}$ & 1.6289 & 1.5926 & 1.0690 & 1.1496 \end{tabular} \hspace{1ex} \begin{tabular}{l|cccc} & $L^2(h)$ & $L^2(m)$ & $L^\infty(h)$ & $L^\infty(m)$ \\\hline $\gamma_{2048}^{8192}$ & 1.0048 & 0.9332 & 0.9926 & 0.9494 \\ $\gamma_{8192}^{32768}$ & 1.0125 & 0.9527 & 0.9860 & 0.9491 \\ $\gamma_{32768}^{131072}$ & 1.0090 & 0.9694 & 0.9513 & 0.9834 \\ $\gamma_{131072}^{524288}$ & 1.0012 & 0.9802 & 0.8538 & 0.9957 \\ $\gamma_\mathrm{fitted}$ & 1.0077 & 0.9593 & 0.9505 & 0.9688 \end{tabular} \caption{Oscillatory flow: convergence rates between different grid levels for fluid depth ($h$) and momentum ($m$) in the $L^2$ and $L^\infty$ norms. Vertex-based limiter (left) and edge-based limiter (right). Also displayed is the mean convergence rate $\gamma_\mathrm{fitted}$, which is obtained by a least squares fit.\label{table:thacker2_convergence}} \end{table} \begin{figure} \caption{Oscillatory flow: time series of relative mass (left) and energy (right) changes for the vertex-based limiter (red) and edge-based limiter (blue).\label{fig:thacker2_mass}} \end{figure} Finally, we record the Courant number for this test case over time for different wet/dry tolerances. In figure \ref{fig:thacker2_cfl} we plot the Courant number for the vertex-based (left) and edge-based limiter (right) with wet/dry tolerances $\mathrm{TOL}_\mathrm{wet}\in \{10^{-2},10^{-4},10^{-8},10^{-14} \}$. It can be observed that for all tolerances the Courant number stays bounded and mostly below the theoretical limit. However, when the wet/dry tolerance becomes smaller, spurious velocities start to arise and affect the Courant number. If the tolerance is set large enough ($\approx 10^{-4}$), we obtain a nearly constant Courant number over time, which is similar to the Courant number of the exact problem.
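The mass and energy diagnostics shown in figure \ref{fig:thacker2_mass} can be approximated cell-wise from the nodal values. The following sketch is illustrative only: the data layout is our own, and the simple vertex-rule quadrature used here is only an approximation of the quadrature employed in the actual computations.

\begin{verbatim}
def diagnostics(cells, g=9.80616):
    """Approximate total mass and energy by vertex-rule quadrature per cell.

    cells: iterable of (area, h, hu, hv, b), each entry holding the three
    nodal values of the respective quantity on one triangle.
    """
    mass, energy = 0.0, 0.0
    for area, h, hu, hv, b in cells:
        w = area / 3.0                      # vertex-rule quadrature weight
        for i in range(3):
            mass += w * h[i]
            kinetic = 0.5 * (hu[i]**2 + hv[i]**2) / h[i] if h[i] > 0.0 else 0.0
            potential = g * h[i] * (0.5 * h[i] + b[i])
            energy += w * (kinetic + potential)
    return mass, energy
\end{verbatim}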
\begin{figure} \caption{Oscillatory flow: time series of maximum Courant number with wet/dry tolerance $\mathrm{TOL} \label{fig:thacker2_cfl} \end{figure} \subsection{Runup onto a complex three-dimensional beach} The 1993 Okushiri tsunami caused many unexpected phenomena, such as an extreme runup height of 32\,m, which was observed near the village of Monai on Okushiri Island. The event was reconstructed in a 1/400 scale laboratory experiment, using a large-scale tank (205m long, 6m deep, 3.4m wide) at the Central Research Institute for Electric Power Industry (CRIEPI) in Abiko, Japan \citep{Matsuyama2001}. For the test case the coastal topography in a domain of $5.448\mathrm{m} \times 3.402\mathrm{m}$ and the incident wave from offshore are provided. Besides the temporal and spatial variations of the shoreline location, the temporal evolution of the surface elevation at three specified gauge stations is of interest (figure \ref{fig:okushiri_setup}). \begin{figure} \caption{Okushiri: time series of the incident wave, which is used as boundary condition (left), and experimental setup (right).\label{fig:okushiri_setup}} \end{figure} At the offshore boundary we set the incident wave as a right-going simple wave. This means that, given the fluid depth $h$ of the incident wave, the $x$-velocity is computed by \begin{equation}\label{eq:bcsimplewave} u = 2 (\sqrt{g h} - \sqrt{g h_0}) , \end{equation} where $h_0=0.13535\mathrm{m}$ denotes the water depth at rest. The other three boundaries were reported to be walls, so we set reflective wall boundary conditions at these locations. We perform simulations with a time step of $\Delta t = 0.001$ for $40\,000$ steps ($T_\mathrm{end} = 40$) on a grid with 393\,216 elements ($384 \times 256$ rectangles, each divided into four triangles). The wet/dry tolerance is set to $\mathrm{TOL}_\mathrm{wet}=10^{-4}$. The results are depicted in figures \ref{fig:okushiri_timeseries} and \ref{fig:okushiri_contours}. Figure \ref{fig:okushiri_timeseries} shows the comparison of the numerical results with experimental data at gauges 5, 7, and 9. Overall, we observe good agreement with both limiters (red and blue lines). Detailed contour plots of the coastal area together with the experimentally derived shoreline are shown in figure \ref{fig:okushiri_contours} for times $t = 15.0, 15.5, 16.0, 16.5, 17.0$. This shoreline is taken from \citet{NTHMP2011} and adjusted to the figures, which means it can only be used as a rough estimate. However, the flood line is represented well and we also demonstrate a good match of the maximum run-up (red dot) at $t=16.5$. \begin{figure} \caption{Okushiri: time series of gauge data: gauge 5 (left), gauge 7 (middle), gauge 9 (right), experimental data (green dash-dotted), vertex-based limiter (red), edge-based limiter (blue dashed). $\mathrm{TOL} \label{fig:okushiri_timeseries} \end{figure} \begin{figure} \caption{Okushiri: detailed contour plot of the coastal area at $t = 15.0, 15.5, 16.0, 16.5, 17.0$, (top left to right bottom), vertex-based limiter with $\mathrm{TOL} \label{fig:okushiri_contours} \end{figure} \subsection{Flow around a conical island} This test is part of a series of experiments carried out at the US Army Engineer Waterways Experiment Station in a $25\times28.2$m basin with a conical island situated at its center \citep{Briggs1995,Liu1995}. The experiment was motivated by the 1992 Flores Island tsunami run-up on Babi Island.
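Both the Okushiri case above and the conical island case below prescribe the incident wave via the simple-wave relation \eqref{eq:bcsimplewave}, which translates directly into code (the function name is ours):

\begin{verbatim}
import math

def simple_wave_velocity(h, h0, g=9.80616):
    """Boundary velocity of a right-going simple wave, cf. the relation above."""
    return 2.0 * (math.sqrt(g * h) - math.sqrt(g * h0))

# e.g. for the Okushiri setup with a water depth at rest of h0 = 0.13535
print(simple_wave_velocity(0.15, 0.13535))
\end{verbatim}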
The conical island has its center at $\fatvec{x}_I = (12.96, 13.8)^\top$ and is defined by \begin{align*} b (\fatvec{x}) = \tilde{b}(r) = \begin{cases} 0.625 & r \leq 1.1 \\ (3.6-r)/4 & 1.1 \leq r \leq 3.6 \\ 0 & \text{otherwise,} \end{cases} \end{align*} with $r = \| \fatvec{x} - \fatvec{x}_I\|_2$ being the distance from the center (see figure \ref{fig:conical_setup}). The initial fluid depth and velocity are given by $h(\fatvec{x},0) = \max\{0, h_0 - b(\fatvec{x})\}$ and $\fatvec{u}(\fatvec{x},0) = \fatvec{0}$, where $h_0 = 0.32$. Three different solitary waves (denoted as cases A, B and C) were generated by a wavemaker at the left side of the domain in the experiments; of these, we consider only cases A and C. Besides the trajectories of the wave paddle, time series of the surface elevation at 27 different gauge stations, 8 of which are freely available, were measured. The first four gauge stations were situated at a distance of $L/2$ in the $x$-direction from the toe of the beach, where $L/2$ is the distance at which the solitary wave height drops to $5\%$ of its maximum height, and $L$ defines the wave length. In the numerical simulations we describe the wave by an incoming analytical solitary wave through the boundary condition on the left side of the domain. In order to make the analytical wave compatible with the measurements, it needs to be adjusted using the parameters given below. The wave is defined by \begin{equation*} h_b(t) = h_0 + a \left( \frac{1}{\cosh(K (cT - ct - x_0))} \right)^{\! 2} \end{equation*} where $K = \sqrt{\tfrac{3 a}{4 h_0^3}}$ and $c = \sqrt{g h_0} \left(1+\tfrac{a}{2 h_0}\right)$. To obtain the other parameters, we fitted the solitary wave to the experimental data at the first four gauge stations. This resulted in an amplitude and time shift of $a = 0.014$ and $T = 8.85$ for case A. The parameter $x_0 = 5.76$ is the $x$-coordinate of the first four gauges. For case C the parameters are $a = 0.057$, $T = 7.77$ and $x_0 = 7.56$. Compared to the experiments, this also includes a time shift of 20. As in the Okushiri test case, the velocity at the boundary is defined so as to obtain a right-running simple wave (cf.\ \eqref{eq:bcsimplewave}). \begin{figure} \caption{Conical Island: Top and side views of experimental setup with location of wave gauges (blue).\label{fig:conical_setup}} \end{figure} For the numerical simulations the domain is slightly adjusted to have dimensions $25.92 \times 27.60$, and the conical island is exactly centered. The domain is discretized into $1024 \times 1024$ uniform squares, each divided into two triangles (2\,097\,152 elements). The time step is $\Delta t = 0.0025$, and the computations are run until $T_\mathrm{end} = 20$. Results are computed using both limiters with a wet/dry tolerance of $\mathrm{TOL}_\mathrm{wet}= 10^{-3}$. On the left side of the domain, we impose an inflow boundary condition to prescribe the solitary wave. Furthermore, a transparent boundary condition is set on the right side and wall boundary conditions are set at the top and the bottom of the domain. In figures \ref{fig:conical_timeseriesA} and \ref{fig:conical_timeseriesC} we compare the resulting time series of the surface elevation at gauge stations 6, 9, 16, and 22 for cases A and C, respectively, with the experimental data. Note that some time series from the experiments were slightly shifted to have an initial zero water level. While gauges 6 and 9 are right in front of the island, gauge 16 is located at its side and gauge 22 at its rear.
It can be seen that for smaller wave amplitudes (case A) the experimental data can be reproduced well. On the other hand, for higher amplitudes in case C non-linear effects become dominant and are not balanced because of the lack of wave dispersion. The result is a steepening of the wave at the front and a flattening at the rear. Furthermore, we observe a general under-estimation of the trough after the first wave. \begin{figure} \caption{Conical Island: time series for case A of surface elevation at wave gauges 6, 9, 16, 22 (top left to bottom right) experimental data (green dash-dotted, shifted by $\Delta t = -20$), vertex-based limiter (red), edge-based limiter (blue dashed).\label{fig:conical_timeseriesA} \label{fig:conical_timeseriesA} \end{figure} \begin{figure} \caption{Conical Island: time series for case C of surface elevation at wave gauges 6, 9, 16, 22 (top left to bottom right) experimental data (green dash-dotted, shifted by $\Delta t = -20$), vertex-based limiter (red), edge-based limiter (blue dashed).\label{fig:conical_timeseriesC} \label{fig:conical_timeseriesC} \end{figure} In figure \ref{fig:conical_snapshots} some snapshots of the simulations using the vertex-based limiter are displayed. They demonstrate that the initial wave correctly splits into two wave fronts after hitting the island. These wave fronts collide behind the island at a later time in the simulation. Finally, we compare the computed maximum run-up on the island with measurements from the experiments in figure \ref{fig:conical_maxrunups}. For both configurations of wave amplitude the simulations resemble measurements well and only slightly overestimate the runup. These deviations are larger in case C at the front of the island, where the wave first arrives. This behavior is probably due to the lack of wave dispersion and an imprecise representation of the wave generated by the wavemaker. Additional discrepancies might be related to the neglected bottom friction within the model. We attribute the better fit of the runup resulting from the simulation with the edge-based limiter to the additional diffusion introduced by this limiter, and not to a better physical modeling of the runup. \begin{figure} \caption{Conical Island: contour plot of surface elevation for case A (top) at $t=13$ (left) and $t=16$ (right) using the vertex-based limiter. Same for case C (bottom) at $t=11$ (left) and $t=14$ (right).\label{fig:conical_snapshots} \label{fig:conical_snapshots} \end{figure} \begin{figure} \caption{Conical Island: maximum vertical runup in $\mathrm{cm} \label{fig:conical_maxrunups} \end{figure} \section{Conclusions} \label{sec:Conclusions} In this work a new wetting and drying treatment for RKDG2 methods applied to the shallow water equations is presented. The key ingredients are a non-destructive limiting of the fluid depth combined with a velocity-based limiting of the momentum, the latter preconditioning the velocity computation near the shoreline, i.e., in areas of small fluid depth and momentum. This, in turn, guarantees a uniform time step with respect to the CFL stability constraint for explicit methods, which we explicitly report. The limiting strategy is complemented by a straightforward flux modification and a positivity-preserving limiter, which renders the scheme mass-conservative, well-balanced and stable for a wide range of flow regimes. It is a natural extension of a previously developed 1D scheme \citep{Vater2015} to the case of two-dimensional structured and unstructured triangular grids. 
Originally designed to control linear stability, the chosen limiter for the fluid depth does not alter steady states at rest or small perturbations around them. Two versions of the limiter are presented that differ in the selection of cells included in the limiting procedure. The ``edge-based'' version is based on the original Barth/Jespersen \citep{Barth1989} limiter. Due to its small stencil, it modifies states with constant gradients and therefore introduces additional diffusion into the method. The ``vertex-based'' version is an extension of the Barth/Jespersen limiter \citep{kuzmin_vertex-based_2010} especially designed for triangular grids and is non-destructive to linear states. It results in slightly more accurate computations in most situations, but has a larger stencil. Only a single parameter, $\mathrm{TOL}_\mathrm{wet}$, enters the scheme; it controls the threshold in fluid depth below which a cell is considered dry. We show that the stability of the method is unaffected by this parameter. It solely determines the effective area that is considered wet by the discretization. A carefully chosen wet/dry tolerance thus leads to an accurate shoreline computation. The presented test cases range from simple configurations, where the analytical solution is known, to the reproduction of laboratory experiments. They illustrate the method's applicability to a variety of flow regimes and verify its numerical properties: well-balancing in the case of a lake at rest, accurate representation of the shoreline, even in the case of fast transitions, and convergence to the exact solution. Comparison with laboratory experiments shows good agreement. Some of the test cases are benchmark problems for the evaluation of operational tsunami models \citep{Synolakis2007}. With the successful simulation of these problems, we could show that the presented model satisfies the requirements for its application to realistic geophysical problems. Future research will concentrate on the extension of the current scheme to adaptive grids and its application to tsunami and storm surge simulations. In this respect, additional source terms, such as the parametrization of sub-grid roughness by bottom friction and wind drag, must be incorporated into the model. Furthermore, possibilities to extend the proposed concept to RKDG methods of higher than second order will be investigated. \section*{Acknowledgments} This work benefited greatly from free software products. Without these tools -- such as Python, the GNU Fortran compiler and the Linux operating system -- many tasks would not have been nearly as easy to realize. It is our pleasure to thank all developers for their excellent products. \end{document}
\begin{document} \title{Shift-preserving maps on $\omega^*$} \author{Will Brian} \address{ William R. Brian\\ Department of Mathematics\\ Baylor University\\ One Bear Place \#97328\\ Waco, TX 76798-7328} \email{[email protected]} \subjclass[2010]{Primary: 54H20. Secondary: 03C98, 03E35, 37B05, 54G05} \keywords{Stone-\v{C}ech compactification, shift map, Parovi\v{c}enko's theorem, abstract omega-limit sets, weak incompressibility, Continuum Hypothesis, elementary submodels, Martin's Axiom} \thanks{$*$ The author would like to thank Brian Raines and Alan Dow for helpful conversations, and Brian especially for patiently listening to several preliminary (and wrong) ideas for proving the main theorem of this paper.} \begin{abstract} The shift map $\sigma$ on $\omega^*$ is the continuous self-map of $\omega^*$ induced by the function $n \mapsto n+1$ on $\omega$. Given a compact Hausdorff space $X$ and a continuous function $f: X \to X$, we say that $(X,f)$ is a quotient of $(\omega^*,\sigma)$ whenever there is a continuous surjection $Q: \omega^* \to X$ such that $Q \circ \sigma = f \circ Q$. Our main theorem states that if the weight of $X$ is at most $\aleph_1$, then $(X,f)$ is a quotient of $(\omega^*,\sigma)$ if and only if $f$ is weakly incompressible (which means that no nontrivial open $U \subseteq X$ has $f(\overline{U}) \subseteq U$). Under $\mathrm{CH}$, this gives a complete characterization of the quotients of $(\omega^*,\sigma)$ and implies, for example, that $(\omega^*,\sigma^{-1})$ is a quotient of $(\omega^*,\sigma)$. In the language of topological dynamics, our theorem states that a dynamical system of weight $\aleph_1$ is an abstract $\omega$-limit set if and only if it is weakly incompressible. We complement these results by proving $(1)$ our main theorem remains true when $\aleph_1$ is replaced by any $\kappa < \mathfrak{p}$, $(2)$ consistently, the theorem becomes false if we replace $\aleph_1$ by $\aleph_2$, and $(3)$ $\mathrm{OCA}+\mathrm{MA}$ implies that $(\omega^*,\sigma^{-1})$ is not a quotient of $(\omega^*,\sigma)$. \end{abstract} \maketitle \section{Introduction} In \cite{Par}, Parovi\v{c}enko proved that every compact Hausdorff space of weight $\aleph_1$ is a continuous image of $\omega^* = \beta\omega - \omega$. In this paper we prove an analogous result concerning the continuous maps on $\omega^*$ that respect the shift map. The \emph{shift map} $\sigma: \beta\omega \to \beta\omega$ sends an ultrafilter $p$ to the unique ultrafilter generated by $\{A+1 : A \in p\}$. Equivalently, $\sigma$ is the unique map on $\beta\omega$ that continuously extends the map $n \mapsto n+1$ on $\omega$. The shift map restricts to an autohomeomorphism of $\omega^*$. If $X$ is a compact Hausdorff space and $f: X \to X$ is continuous, we say that $(X,f)$ is a \emph{quotient} of $(\omega^*,\sigma)$ whenever there is a continuous surjection $Q: \omega^* \to X$ such that $Q \circ \sigma = f \circ Q$. The main theorem of this paper characterizes the quotients of $(\omega^*,\sigma)$ that have weight at most $\aleph_1$: \begin{maintheorem} Suppose $X$ is a compact Hausdorff space with weight at most $\aleph_1$, and $f: X \to X$ is continuous. Then $(X,f)$ is a quotient of $(\omega^*,\sigma)$ if and only if $f$ is weakly incompressible.
\end{maintheorem} Recall that $f: X \to X$ is \emph{weakly incompressible} if for any open $U \subseteq X$ with $\emptyset \neq U \neq X$, we have $f(\overline{U}) \not\subseteq U$. This theorem is the appropriate analogue of Parovi\v{c}enko's because $(\omega^*,\sigma)$ is itself weakly incompressible, and this property is always preserved by taking quotients. In other words, our theorem isolates a property of the shift map that determines exactly when Parovi\v{c}enko's topological result extends to a result of dynamics. \subsection*{Connection with topological dynamics} A \emph{dynamical system} is a pair $(X,f)$, where $X$ is a compact Hausdorff space and $f: X \to X$ is continuous. For example, $(\omega^*,\sigma)$ is a dynamical system, and our main theorem states that it is universal (in the ``mapping onto'' sense) for all weakly incompressible dynamical systems of weight $\leq \aleph_1$. Given a point $x \in X$, the $\omega$\emph{-limit set of} $x$ is the set of all limit points of the orbit of $x$: $$\omega_f(x) = \bigcap_{n \in \omega}\overline{\{f^m(x) : m \geq n\}}.$$ It is easy to see that $\omega_f(x)$ is closed under $f$, so that $(\omega_f(x),f)$ is itself a dynamical system. Recall that two dynamical systems $(X,f)$ and $(Y,g)$ are \emph{isomorphic} (or, for some authors, \emph{conjugate}) if there is a homeomorphism $H: X \to Y$ with $H \circ f = g \circ H$. An \emph{abstract $\omega$-limit set} is a dynamical system that is isomorphic to a dynamical system of the form $(\omega_f(x),f)$. For example, $(\omega^*,\sigma)$ is an abstract $\omega$-limit set because $\omega^* = \omega_\sigma(n)$ for any $n \in \omega$ in the larger dynamical system $(\beta\omega,\sigma)$. Notice that $\omega^*$ is not an $\omega$-limit set ``internally''; that is, $\omega^* \neq \omega_\sigma(p)$ for any $p \in \omega^*$ (indeed, $\omega^*$ is not even separable). In order to realize $(\omega^*,\sigma)$ as an $\omega$-limit set, it is necessary to extend it to a larger dynamical system. In the next section, we will prove the following characterization of abstract $\omega$-limit sets: \begin{thm} $(X,f)$ is an abstract $\omega$-limit set if and only if it is a quotient of $(\omega^*,\sigma)$. \end{thm} In other words, $(\omega^*,\sigma)$ is universal among all abstract $\omega$-limit sets. Theorem~\ref{thm:external} is one of the motivations for our study of the quotients of $(\omega^*,\sigma)$: it is the study of the internal structure of $\omega$-limit sets. Theorem~\ref{thm:external} allows us to rephrase our main theorem as follows: \begin{maintheorem2} Suppose $(X,f)$ is a dynamical system and the weight of $X$ is at most $\aleph_1$. $(X,f)$ is an abstract $\omega$-limit set if and only if $f$ is weakly incompressible. \end{maintheorem2} This way of stating the main theorem reveals it as an extension of the following well-known result of Bowen and Sharkovsky: \begin{thm4} A metrizable dynamical system is an abstract $\omega$-limit set if and only if it is weakly incompressible. \end{thm4} Sharkovsky proves the forward direction in \cite{Srk} and Bowen proves the converse in \cite{Bwn}. We will give a slightly different proof below, because we will require a mild strengthening of this theorem (Corollary~\ref{cor:BS}) to prove our main result.
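For orientation, here is a small worked example of weak incompressibility (included only as an illustration). The identity map on $[0,1]$ is weakly incompressible: any open $U$ with $\emptyset \neq U \neq [0,1]$ fails to be closed, because $[0,1]$ is connected, and hence $\mathrm{id}(\overline{U}) = \overline{U} \not\subseteq U$. By contrast, the contraction $f(x) = x/2$ on $[0,1]$ is not weakly incompressible: taking $U = [0,\tfrac{1}{2})$ gives $$f(\overline{U}) = f([0,\tfrac{1}{2}]) = [0,\tfrac{1}{4}] \subseteq U,$$ so by the Bowen-Sharkovsky theorem just quoted, $([0,1], x \mapsto x/2)$ is not an abstract $\omega$-limit set.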
See \cite{BGOR} or \cite{M&R}, and the references therein, for further research on the connection between weak incompressibility and $\omega$-limit sets. \subsection*{Outline of the proof} Of the various proofs of Parovi\v{c}enko's theorem, ours is closest in spirit to that of B{\l}aszczyk and Szyma\'nski in \cite{B&S}. Their proof begins by writing a given compact Hausdorff space $X$ as a length-$\omega_1$ inverse limit of compact metrizable spaces: $X = \varprojlim \langle X_\alpha : \alpha < \omega_1 \rangle$. They then construct a coherent transfinite sequence of continuous surjections $Q_\alpha: \omega^* \to X_\alpha$, and define $Q: \omega^* \to X$ to be the inverse limit of this sequence. The $Q_\alpha$ are constructed recursively, using a variant of the following lifting lemma at successor stages: \begin{lemma}\label{lem:easylift} Let $Y$ and $Z$ be compact metrizable spaces, and let $Q_Z: \omega^* \to Z$ and $\pi: Y \to Z$ be continuous surjections. Then there is a continuous surjection $Q_Y: \omega^* \to Y$ such that $Q_Z = \pi \circ Q_Y$. \end{lemma} In our situation, the first part of B{\l}aszczyk and Szyma\'nski's proof goes through: we prove in Corollary~\ref{cor:inverselimit} below that given a dynamical system $(X,f)$ of weight $\aleph_1$, one may always write $(X,f)$ as a length-$\omega_1$ inverse limit of metrizable dynamical systems. However, we run into trouble with the analogue of Lemma~\ref{lem:easylift}: the analogous lemma for dynamical systems is false (see Example~\ref{ex:liftingproblems}). To get around this problem, we modify B{\l}aszczyk and Szyma\'nski's approach by using sharper tools. Rather than beginning with $(X,f)$ and writing it as a topological inverse limit, we begin with a particular embedding of $X$ in $[0,1]^{\omega_1}$ and use a much stronger form of inverse limit: a continuous chain of elementary submodels of a sufficiently large fragment of the set-theoretic universe. Each model in our chain naturally gives rise to a metrizable ``reflection'' of $(X,f)$, and the continuity requirement organizes these reflections into an inverse limit system with limit $(X,f)$. Elementarity gives this system strong structural properties, and ultimately is the key that unlocks a workable analogue of Lemma~\ref{lem:easylift}. Our use of elementarity is inspired by the work of Dow and Hart in \cite{D&H}, where they prove that every continuum of weight $\aleph_1$ is a continuous image of $\mathbb{H}^*$, the Stone-\v{C}ech remainder of $\mathbb{H} = [0,\infty)$. They give three proofs of this fact, each of which relies on elementarity in some essential way. The proof of our main theorem is most similar to their third proof, found in Section 3 of \cite{D&H}. In Section~\ref{sec:MA}, we will show that both Parovi\v{c}enko's theorem about continuous images of $\omega^*$ and the Dow-Hart theorem about continuous images of $\mathbb{H}^*$ can be derived as relatively straightforward corollaries of our main theorem. In light of this, it is unsurprising that our proof uses some of the same ideas found in \cite{B&S} and \cite{D&H}. \subsection*{Extensions and limitations} Under the Continuum Hypothesis, our result gives a complete characterization of the quotients of $(\omega^*,\sigma)$: \begin{thm2} Assuming $\mathrm{CH}$, the following are equivalent: \begin{enumerate} \item $(X,f)$ is a quotient of $(\omega^*,\sigma)$.
\item $X$ has weight at most $\mathfrak{c}$ and $f$ is weakly incompressible. \item $X$ is a continuous image of $\omega^*$ and $f$ is weakly incompressible. \end{enumerate} \end{thm2} Every quotient of $(\omega^*,\sigma)$ is weakly incompressible, so $(3)$ gives the most liberal possible characterization of quotients of $(\omega^*,\sigma)$: they are simply the weakly incompressible dynamical systems for which the topology is not an obstruction. In Section~\ref{sec:MA}, we show that the nontrivial conclusions of Theorem~\ref{thm:ch} are independent of $\mathrm{ZFC}$. Specifically, we show that $(2)$ does not imply $(1)$ or $(3)$ in the Cohen model, and that $(3)$ does not imply $(1)$ under $\mathrm{OCA}+\mathrm{MA}$. In fact, we will show under $\mathrm{OCA}+\mathrm{MA}$ that $(\omega^*,\sigma^{-1})$ is not a quotient of $(\omega^*,\sigma)$, even though $\sigma^{-1}$ is weakly incompressible. We also show in Section~\ref{sec:MA} that if $\kappa < \mathfrak{p}$ then our main theorem holds with $\kappa$ in the place of $\aleph_1$: \begin{thm3} If the weight of $X$ is less than $\mathfrak{p}$, then $(X,f)$ is a quotient of $(\omega^*,\sigma)$ if and only if $f$ is weakly incompressible. \end{thm3} In the same way that our main theorem is the dynamical analogue of Parovi\v{c}enko's theorem, this result is the dynamical analogue of the following result of van Douwen and Przymusi\'nski from \cite{vDP}: \emph{If $X$ is a compact Hausdorff space with weight less than $\mathfrak{p}$, then $X$ is a continuous image of $\omega^*$}. \section{First steps}\label{sec:background} \subsection*{Extending maps from $\omega$ to $\beta\omega$} If $X$ is a compact Hausdorff space and $f: \omega \to X$ is any function, then there is a unique continuous function $\beta f: \beta\omega \to X$ that extends $f$, the \emph{Stone extension} of $f$. For a sequence $\langle x_n : n \in \mathbb{N} \rangle$ of points in $X$ and $p \in \beta\omega$, we will usually write $p\mbox{-}\!\lim_{n \in \omega} x_n$ for the image of $p$ under the Stone extension of the function $n \mapsto x_n$. We will need the following facts about Stone extensions (proofs can be found in chapter 3 of \cite{H&S}): \begin{lemma}\label{lem:plimits} Let $X$ be a compact Hausdorff space and $\langle x_n : n < \omega \rangle$ a sequence of points in $X$. \begin{enumerate} \item $p\mbox{-}\!\lim_{n \in \omega} x_n = y$ if and only if for every open $U \ni y$ we have $\{n : x_n \in U\} \in p$ \item $p \mapsto p\mbox{-}\!\lim_{n \in \omega} x_n$ is a continuous function $\beta\omega \to X$. \item If $f: X \to X$ is continuous and $p \in \beta\omega$, then $$\textstyle f \!\left( p\mbox{-}\!\lim_{n \in \omega} x_n \right) = p\mbox{-}\!\lim_{n \in \omega} f(x_n).$$ \item For each $p \in \beta\omega$, $\sigma(p)\mbox{-}\!\lim_{n \in \omega} x_n = p\mbox{-}\!\lim_{n \in \omega} x_{n+1}$. \end{enumerate} \end{lemma} \subsection*{Extending maps from $\omega^*$ to $\beta\omega$} The following folklore result is a fairly straightforward consequence of the Tietze Extension Theorem, or an alternative proof can be found in \cite{Eng}, Theorem 3.5.13. \begin{lemma}\label{lem:metricextensions} Suppose $X$ is a compact Hausdorff space and $f: \omega^* \to X$ is continuous. There is a compact Hausdorff space $Y \supseteq X$, such that $f$ can be extended to a continuous function $F: \beta\omega \to Y$. Furthermore, we may assume that $F \!\restriction\!
\omega$ is injective, and that $F(\omega)$ is an open, relatively discrete subset of $Y$ with $F(\omega) \cap X = \emptyset$. \end{lemma} \begin{lemma}\label{lem:continuity} Let $(X,f)$ be a dynamical system, and $Q: X \to Y$ a continuous surjection such that, for all $x_1,x_2 \in X$, if $Q(x_1) = Q(x_2)$ then $Q(f(x_1)) = Q(f(x_2))$. Then there is a unique continuous $g: Y \to Y$ such that $g \circ Q = Q \circ f$. \end{lemma} \begin{proof} The assumptions about $Q$ immediately imply that there is a unique function $g: Y \to Y$ such that $g \circ Q = Q \circ f$, namely $g(y) = Q(f(Q^{-1}(y)))$. We need to check that this function is continuous. If $K$ is a closed subset of $Y$, then $f^{-1}(Q^{-1}(K))$ is closed in $X$. Because $X$ is compact, $f^{-1}(Q^{-1}(K))$ is compact, which implies $g^{-1}(K) = Q(f^{-1}(Q^{-1}(K)))$ is closed. Since $K$ was arbitrary, $g$ is continuous. \end{proof} In the same way that our main theorem can be seen as a dynamical version of Parovi\v{c}enko's theorem, the following result can be seen as a dynamical version of Lemma~\ref{lem:metricextensions}: \begin{theorem}\label{thm:external} $(X,f)$ is an abstract $\omega$-limit set if and only if it is a quotient of $(\omega^*,\sigma)$. \end{theorem} \begin{proof} It is well known that if $(X,f)$ is an $\omega$-limit set then it is a quotient of $(\omega^*,\sigma)$. Indeed, the map $p \mapsto p\mbox{-}\!\lim_{n \in \omega} f^n(x)$ gives a quotient mapping from $(\omega^*,\sigma)$ to $(\omega_f(x),f)$. For details and some discussion, see Section 2 of \cite{Bls}. Here we need to prove the converse. Suppose $q: \omega^* \to X$ is a quotient mapping from $(\omega^*,\sigma)$ to $(X,f)$. Using Lemma~\ref{lem:metricextensions}, there is a compact Hausdorff space $Y$ containing $X$ such that $q$ extends to a continuous function $Q: \beta\omega \to Y$, where $Q \!\restriction\! \omega$ is injective, $Q(\omega)$ is an open, relatively discrete subset of $Y$, and $Q(\omega) \cap X = \emptyset$. Replacing $Y$ with $Q(\beta\omega)$ if necessary, we may also assume that $Q$ is surjective. Define $g: Y \to Y$ by $$ g(y) = \begin{cases} f(y) & \textrm{ if $y \in X$} \\ Q(n+1) & \textrm{ if $y = Q(n)$, $n \in \omega$} \end{cases} $$ This function is well-defined because $Q \!\restriction\! \omega$ is injective and $Q(\omega) \cap X = \emptyset$. By design, $Q \circ \sigma = g \circ Q$. By Lemma~\ref{lem:continuity}, $g$ is continuous. To finish the proof, we will show that, in $(Y,g)$, $X$ is an $\omega$-limit set. Letting $p = Q(0)$, we claim $X = \omega_g(p)$. Notice that $$\{g^m(Q(0)) : m \geq n\} = \{Q(m) : m \geq n\}$$ for all $n$. Using the continuity of $Q$, we have $$\overline{\{Q(m) : m \geq n\}} \supseteq Q(\overline{\{m : m \geq n\}}) \supseteq Q(\omega^*) = X.$$ Thus $\omega_g(Q(0)) \supseteq X$. The reverse inclusion follows from the fact that $Q(\omega)$ is open and relatively discrete. \end{proof} \subsection*{Chain transitivity} Suppose $(X,f)$ is a dynamical system and $d$ is a metric for $X$. An $\varepsilon$\emph{-chain} in $(X,f)$ is a sequence $\langle x_i : i \leq n \rangle$ such that $d(f(x_i),x_{i+1}) < \varepsilon$ for all $i < n$. Roughly, an $\varepsilon$-chain is a piece of an orbit, but computed with a small error at each step.
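As a quick illustration of this definition (again included only as an example), consider the contraction $f(x) = x/2$ on $[0,1]$ from the introduction. Every $\varepsilon$-chain $\langle x_i : i \leq n \rangle$ with $x_0 = 0$ satisfies $x_i < 2\varepsilon$ for all $i \geq 1$: indeed $x_{i+1} < f(x_i) + \varepsilon = \tfrac{x_i}{2} + \varepsilon$, and inductively $\tfrac{x_i}{2} + \varepsilon < 2\varepsilon$. In particular, when $\varepsilon < \tfrac{1}{2}$ no $\varepsilon$-chain leads from $0$ to $1$, which, in light of Lemma~\ref{lem:ct} below, reflects the failure of weak incompressibility noted in the introduction.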
$(X,f)$ is called \emph{chain transitive} if for any $a,b \in X$ and any $\varepsilon > 0$, there is an $\varepsilon$-chain beginning at $a$ and ending at $b$. Using open covers in the place of $\varepsilon$-balls, we can reformulate the definition of chain transitivity so that it applies to non-metrizable dynamical systems. Given $(X,f)$ and an open cover $\mathcal{U}$ of $X$, we say that $\langle x_i : i \leq n \rangle$ is a $\mathcal{U}$\emph{-chain} if, for every $i < n$, there is some $U \in \mathcal{U}$ such that $f(x_i) \in U$ and $x_{i+1} \in U$. A dynamical system $(X,f)$ is \emph{chain transitive} if for any $a,b \in X$ and any open cover $\mathcal{U}$ of $X$, there is a $\mathcal{U}$-chain beginning at $a$ and ending at $b$. \begin{lemma}\label{lem:ct}$\ $ \begin{enumerate} \item A dynamical system is chain transitive if and only if it is weakly incompressible. \item Every quotient of $(\omega^*,\sigma)$ is weakly incompressible. \end{enumerate} \end{lemma} The proof of $(1)$ is essentially the same as the proof for metrizable dynamical systems (see, e.g., Theorem 4.12 in \cite{Akn}). Both $(1)$ and $(2)$ can be found (with proofs) in Section 5 of \cite{WRB}. \subsection*{The Bowen-Sharkovsky theorem} We now give a proof of the theorem of Bowen and Sharkovsky mentioned in the introduction. \begin{theorem}[Bowen-Sharkovsky]\label{thm:BS} A metrizable dynamical system is an abstract $\omega$-limit set if and only if it is weakly incompressible. \end{theorem} \begin{proof} The forward direction is a consequence of Theorem~\ref{thm:external} and Lemma~\ref{lem:ct}. To prove the reverse direction, we will use chain transitivity instead of weak incompressibility. Let $(X,f)$ be a chain transitive dynamical system, and let $d$ be a metric for $X$. Pick $x_0 \in X$ arbitrarily. Using chain transitivity and the compactness of $X$, define $x_1, x_2, \dots, x_{n_1}$ so that \begin{enumerate} \item $\langle x_i : 0 \leq i \leq n_1 \rangle$ is a $1$-chain \item $\bigcup_{0 \leq i \leq n_1} B_1(x_i) = X$, and \item $x_{n_1} = x_0$. \end{enumerate} Now assuming that $\langle x_i : i \leq n_m \rangle$ have already been defined, define $x_{n_m+1}, x_{n_m+2}, \dots, x_{n_{m+1}}$ so that \begin{enumerate} \item $\langle x_i : n_m \leq i \leq n_{m+1} \rangle$ is a $\frac{1}{m}$-chain, \item $\bigcup_{n_m \leq i \leq n_{m+1}} B_{\frac{1}{m}}(x_i) = X$, and \item $x_{n_{m+1}} = x_0$. \end{enumerate} It is not difficult to see that chain transitivity and compactness together allow us to build such a sequence of points. Define $Q: \omega^* \to X$ to be the Stone extension of the map $n \mapsto x_n$. This function is automatically continuous. It follows from $(2)$ above that $\{x_m : m \geq n\}$ is dense in $X$ for every $n$, which implies that $Q$ is surjective. It remains to show that $Q \circ \sigma = f \circ Q$. Fix $p \in \omega^*$ and $\varepsilon > 0$. Let $m$ be sufficiently large (precisely, we will need $m > n_k$ where $\frac{1}{k} < \varepsilon$).
Notice that $$\textstyle Q(\sigma(p)) = \sigma(p)\mbox{-}\!\lim_{n \in \omega} x_n = p\mbox{-}\!\lim_{n \in \omega} x_{n+1}, \ \text{ and}$$ $$f(Q(p)) = f(p\mbox{-}\!\lim_{n \in \omega} x_n) = p\mbox{-}\!\lim_{n \in \omega} f(x_n).$$ Using the fact that $p$ is non-principal and that $d(f(x_n),x_{n+1}) < \varepsilon$ for every $n \geq m$, we have $$\textstyle d(f(Q(p)),Q(\sigma(p))) = d(p\mbox{-}\!\lim_{n \in \omega} f(x_n),p\mbox{-}\!\lim_{n \in \omega} x_{n+1}) \leq \varepsilon.$$ Since $\varepsilon$ was arbitrary, $f(Q(p)) = Q(\sigma(p))$. Since $p$ was also arbitrary, $Q \circ \sigma = f \circ Q$ as desired. \end{proof} After developing a few more definitions in the next section, we will state a slightly stronger version of this result (which already follows from the given proof). This stronger version will be the base step in our recursive proof of the main theorem. \section{A few lemmas}\label{sec:lemmas} In this section we begin the proof of our main theorem in the form of several lemmas. The heart of the proof -- a transfinite recursion driven by a chain of elementary submodels -- will be in the next section. Given an ordinal $\delta$, the \emph{standard basis} for $[0,1]^\delta$ is the basis generated by sets of the form $\pi_\alpha^{-1}(p,q)$, where $p,q \in [0,1] \cap \mathbb{Q}$ and $\pi_\alpha$ is the projection mapping a point of $[0,1]^\delta$ to its $\alpha^{\mathrm{th}}$ coordinate. Whenever we mention basic open subsets of $[0,1]^\delta$, this is the basis we mean. Notice that every basic open subset of $[0,1]^\delta$ can be defined using finitely many ordinals less than $\delta$ and finitely many rational numbers. Suppose $X$ is a closed subset of $[0,1]^\delta$. By an \emph{open cover} of $X$, we will mean a set $\mathcal{U}$ of open subsets of $[0,1]^\delta$ with $X \subseteq \bigcup \mathcal{U}$. A \emph{nice open cover} of $X$ is a finite open cover $\mathcal{U}$ of $X$ consisting of basic open subsets of $[0,1]^\delta$, such that $U \cap X \neq \emptyset$ for all $U \in \mathcal{U}$. If $\mathcal{U}$ is a collection of subsets of $[0,1]^\delta$ and $A \subseteq [0,1]^\delta$, $$\mathcal{U}_\star(A) = \bigcup \{U \in \mathcal{U} : U \cap A \neq \emptyset\}.$$ For convenience, if $A = \{a\}$ we write $\mathcal{U}_\star(a)$ instead of $\mathcal{U}_\star(\{a\})$. If $\mathcal{U}$ and $\mathcal{V}$ are collections of open sets, recall that $\mathcal{U}$ \emph{refines} $\mathcal{V}$ if for every $U \in \mathcal{U}$ there is some $V \in \mathcal{V}$ with $U \subseteq V$. $\mathcal{U}$ is a \emph{star refinement} of $\mathcal{V}$ if for every $U \in \mathcal{U}$ there is some $V \in \mathcal{V}$ such that $\mathcal{U}_\star(U) \subseteq V$. It is known (see, e.g., Theorem 5.1.12 in \cite{Eng}) that every open cover of a compact Hausdorff space has a star refinement. \begin{lemma}\label{lem:stars} Let $X$ be a closed subset of $[0,1]^\delta$. A function $f: X \to X$ is continuous if and only if for every open cover $\mathcal{U}$ of $X$ there is a nice open cover $\mathcal{V}$ of $X$ such that $$\{\mathcal{V}_\star(f(\mathcal{V}_\star(x) \cap X)) : x \in X\}$$ is an open cover of $X$ that refines $\mathcal{U}$.
\end{lemma} \begin{proof} Suppose that $f$ is continuous and let $\mathcal{U}$ be an open cover of $X$. Let $\mathcal{W}$ be a star refinement of $\mathcal{U}$. By continuity, $f^{-1}(W \cap X)$ is a relatively open subset of $X$ for every $W \in \mathcal{W}$. For each $W \in \mathcal{W}$ pick some open subset $W^\leftarrow$ of $[0,1]^\delta$ such that $W^\leftarrow \cap X = f^{-1}(W \cap X)$. Let $\mathcal{W}^\leftarrow = \{W^\leftarrow : W \in \mathcal{W}\}$, and observe that $\mathcal{W}^\leftarrow$ is an open cover of $X$. Let $\mathcal{Y}$ be a star refinement of $\mathcal{W}^\leftarrow$, and let $\mathcal{V}$ be a common refinement of $\mathcal{Y}$ and $\mathcal{W}$, for example $\{Y \cap W : Y \in \mathcal{Y} \text{ and } W \in \mathcal{W}\}$. By refining $\mathcal{V}$ further we may assume it consists of basic open sets; by throwing some sets away we may assume $\mathcal{V}$ is finite and every element of $\mathcal{V}$ meets $X$. In other words, we may take $\mathcal{V}$ to be a nice open cover. If $x \in X$, then $\mathcal{V}_\star(x) \subseteq \mathcal{Y}_\star(x) \subseteq W^\leftarrow$ for some $W^\leftarrow \in \mathcal{W}^\leftarrow$. By the definition of $\mathcal{W}^\leftarrow$, $f(\mathcal{V}_\star(x) \cap X) \subseteq W$ for some $W \in \mathcal{W}$. Because $\mathcal{V}$ refines $\mathcal{W}$ and $\mathcal{W}$ star refines $\mathcal{U}$, $\mathcal{V}_\star(f(\mathcal{V}_\star(x) \cap X)) \subseteq \mathcal{W}_\star(W) \subseteq U$ for some $U \in \mathcal{U}$. For the other direction, suppose that $f$ satisfies the conclusion of the lemma. Fix $x \in X$ and let $U,W$ be open sets containing $f(x)$ such that $f(x) \in W \subseteq \overline{W} \subseteq U$. Let $\mathcal{U} = \{U,[0,1]^\delta - W\}$, and let $\mathcal{V}$ be a nice open cover of $X$ satisfying the conclusion of the lemma. Setting $V = \mathcal{V}_\star(f(\mathcal{V}_\star(x) \cap X))$, we must have either $V \subseteq U$ or $V \subseteq [0,1]^\delta - W$. The latter is impossible because $f(x) \in V \cap W$, so $V \subseteq U$. Thus we have found a neighborhood of $x$ in $X$, namely $\mathcal{V}_\star(x) \cap X$, whose image under $f$ is contained in $U \cap X$. Since $U$ and $x$ were arbitrary, $f$ is continuous. \end{proof} Given a countable ordinal $\delta$, define $\pi_\delta: [0,1]^{\omega_1} \to [0,1]^\delta$ to be the natural projection onto the first $\delta$ coordinates, namely $\pi_\delta = \Delta_{\alpha < \delta}\pi_\alpha$. \begin{lemma}\label{lem:projections} Let $X$ be a closed subset of $[0,1]^{\omega_1}$ and let $f: X \to X$ be continuous. There is a closed unbounded $C \subseteq \omega_1$ such that for every $\delta \in C$ and $x,y \in X$, if $\pi_\delta(x) = \pi_\delta(y)$ then $\pi_\delta(f(x)) = \pi_\delta(f(y))$. \end{lemma} \begin{proof} For each $\alpha < \omega_1$, let $\mathcal{N}_\alpha$ denote the set of all nice open covers of $X$ that are defined using only ordinals less than $\alpha$.
For each open cover $\mathcal{U}$ of $X$ there is a nice open cover $\mathcal{V}$ satisfying the conclusion of Lemma~\ref{lem:stars}, and $\mathcal{V} \in \mathcal{N}_\alpha$ for some $\alpha < \omega_1$. For each $\alpha < \omega_1$ define $\phi(\alpha)$ to be the least ordinal with the property that if $\mathcal{U} \in \mathcal{N}_\alpha$, then some $\mathcal{V} \in \mathcal{N}_{\phi(\alpha)}$ satisfies the conclusion of Lemma~\ref{lem:stars}. Because each $\mathcal{N}_\alpha$ is countable, $\phi$ maps countable ordinals to countable ordinals. Let $C$ be the set of closure points of $\phi$: $$C = \{\delta < \omega_1 : \text{if } \alpha < \delta \text{ then } \phi(\alpha) < \delta\}.$$ We claim that $C$ satisfies the conclusions of the lemma. Suppose $\delta \in C$ and $\pi_\delta(f(y)) \neq \pi_\delta(f(z))$. We may find some $\mathcal{U} \in \mathcal{N}_\delta$ such that $\mathcal{U}$ separates $f(y)$ from $f(z)$, in the sense that there is no $U \in \mathcal{U}$ containing both $f(y)$ and $f(z)$. Because $\delta$ is a closure point of $\phi$, there is some $\mathcal{V} \in \mathcal{N}_\delta$ satisfying the conclusion of Lemma~\ref{lem:stars}. For every $x \in X$, there is some $U \in \mathcal{U}$ such that $\mathcal{V}_\star(f(\mathcal{V}_\star(x) \cap X)) \subseteq U$. Because $f(y) \in f(\mathcal{V}_\star(y) \cap X)$ and $f(z) \in f(\mathcal{V}_\star(z) \cap X)$, our choice of $\mathcal{U}$ guarantees $f(\mathcal{V}_\star(y) \cap X) \cap f(\mathcal{V}_\star(z) \cap X) = \emptyset$, which implies $\mathcal{V}_\star(y) \cap \mathcal{V}_\star(z) \cap X = \emptyset$. Since $\mathcal{V} \in \mathcal{N}_\delta$, this implies $\pi_\delta(y) \neq \pi_\delta(z)$. \end{proof} \begin{corollary}\label{cor:inverselimit} Every dynamical system of weight $\aleph_1$ can be written as an inverse limit of metrizable dynamical systems. \end{corollary} \begin{proof} Let $(X,f)$ be a dynamical system of weight $\aleph_1$. Embed $X$ in $[0,1]^{\omega_1}$, and let $C$ be the closed unbounded set of ordinals described in the previous lemma. For each $\delta \in C$, let $X_\delta = \pi_\delta(X)$ and define $f_\delta: X_\delta \to X_\delta$ by $f_\delta(\pi_\delta(x)) = \pi_\delta(f(x))$, which is continuous by Lemma~\ref{lem:continuity}. Then $\langle (\pi_\delta(X),f_\delta) : \delta \in C \rangle$ is an inverse limit system, having the natural projections as bonding maps, and the limit of this system is $(X,f)$. \end{proof} Before moving on to our next lemma, we take a moment to justify the use of elementary submodels in the next section. Na\"{i}vely, one might wonder why we cannot simply prove our main theorem in the style of B{\l}aszczyk and Szyma\'nski, using Corollary~\ref{cor:inverselimit} and the appropriate analogue of Lemma~\ref{lem:easylift}: \begin{itemize} \item[($*$)] \emph{Let $(Y,g)$ and $(Z,h)$ be metrizable dynamical systems, and let $Q_Z: \omega^* \to Z$ and $\pi: Y \to Z$ be quotient mappings.
Then there is a quotient mapping $Q_Y: \omega^* \to Y$ such that $Q_Z = \pi \circ Q_Y$.} \end{itemize} The following example shows that $(*)$ is not true, so that we need more than a simple topological inverse limit structure in order to make B{\l}aszczyk and Szyma\'nski's proof go through. We will simply sketch the example and leave detailed proofs to the reader. \begin{example}\label{ex:liftingproblems} $([0,1],\mathrm{id})$ is a weakly incompressible dynamical system, and for our example it will play the role of $(Y,g)$ and $(Z,h)$ in $(*)$. Define $\pi: [0,1] \to [0,1]$ by setting $\pi(0) = 0$, $\pi(\frac{2}{3}) = 1$, and $\pi(1) = \frac{1}{2}$, and then extending $\pi$ linearly on the rest of $[0,1]$. We will define a quotient mapping $\pi_Z$ from $(\omega^*,\sigma)$ to $([0,1],\mathrm{id})$ that does not lift through $\pi$. Define $p_Z: \omega \to [0,1]$ so that $p_Z(n)$ is the distance from $s(n) = \sum_{m \leq n}\frac{1}{m}$ to the nearest even integer ($s$ could be replaced with any increasing unbounded sequence of reals where the distance between successive terms goes to $0$). Letting $\pi_Z: \omega^* \to [0,1]$ be the map induced by $p_Z$, it is easy to check (either directly, or using Lemma~\ref{lem:mainlemma} below) that $\pi_Z$ is a quotient mapping from $(\omega^*,\sigma)$ to $([0,1],\mathrm{id})$. Suppose $\pi_Y: \omega^* \to [0,1]$ is another quotient mapping from $(\omega^*,\sigma)$ to $([0,1],\mathrm{id})$. By the Tietze Extension Theorem, $\pi_Y$ is induced by a map $p_Y: \omega \to [0,1]$. Suppose $\pi_Z = \pi \circ \pi_Y$. Then we must have $\lim_{n \to \infty} |p_Z(n) - \pi(p_Y(n))| = 0$. Since $\pi_Y \circ \sigma = \pi_Y$, we also must have $\lim_{n \to \infty}|p_Y(n) - p_Y(n+1)| = 0$. Putting these facts together, one may show that, for large enough $n$, $p_Y(n) \in [0,\frac{2}{3}+\varepsilon)$ for any prescribed $\varepsilon > 0$. This contradicts the surjectivity of $\pi_Y$. $\square$ \end{example} Suppose $X \subseteq [0,1]^\delta$ and $f: X \to X$ is continuous. If $\mathcal{U}$ is a nice open cover of $X$, we say that a sequence $\langle x_n : n < \omega \rangle$ is \emph{eventually compliant with} $\mathcal{U}$ if there exists some $m \in \omega$ such that \begin{enumerate} \item $\{x_n : n \geq m\} \subseteq \bigcup \mathcal{U}$, \item $\{x_n : n \geq m\} \cap U$ is infinite for all $U \in \mathcal{U}$, and \item for all $n \geq m$, we have $x_{n+1} \in \mathcal{U}_\star(f(\mathcal{U}_\star(x_n) \cap X))$. \end{enumerate} Roughly, the idea behind this definition is that if our vision is blurred (with the amount of blurriness prescribed by $\mathcal{U}$), then $(1)$ it appears that every $x_n$ could be in $X$, $(2)$ it appears that $\{x_n : n \geq m\}$ could be dense in $X$, and $(3)$ for each $n$, not only does it seem that $x_n$ could be in $X$, but also that $x_{n+1} = f(x_n)$. \begin{lemma}\label{lem:mainlemma} Let $X$ be a closed subset of $[0,1]^\delta$ and let $f: X \to X$ be continuous.
If $\langle x_n : n < \omega \rangle$ is a sequence of points in $[0,1]^\delta$ that is eventually compliant with every nice open cover of $X$, then the map $p \mapsto p\mbox{-}\!\lim_{n \in \omega} x_n$ is a quotient mapping from $(\omega^*,\sigma)$ to $(X,f)$. Conversely, if $Q$ is a quotient mapping from $(\omega^*,\sigma)$ to $(X,f)$, then there is a sequence $\langle x_n : n < \omega \rangle$ in $[0,1]^\delta$ such that $Q(p) = p\mbox{-}\!\lim_{n \in \omega} x_n$ for all $p \in \omega^*$, and this sequence is eventually compliant with every nice open cover of $X$. \end{lemma} \begin{proof} Fix $X \subseteq [0,1]^\delta$ and $f: X \to X$, and suppose $\langle x_n : n < \omega \rangle$ is a sequence of points in $[0,1]^\delta$ that is eventually compliant with every nice open cover of $X$. Define $Q: \omega^* \to [0,1]^\delta$ by $Q(p) = p\mbox{-}\!\lim_{n \in \omega} x_n$. From the definitions, we know that $Q$ is a continuous function with domain $\omega^*$. We need to check that $Q(\omega^*) = X$ and that $Q \circ \sigma = f \circ Q$. First we show that $Q(\omega^*) \subseteq X$. Let $U$ be any open subset of $[0,1]^\delta$ containing $X$. There is some nice open cover $\mathcal{U}$ of $X$ such that $\bigcup \mathcal{U} \subseteq U$. By part $(1)$ of our definition of eventual compliance, $p\mbox{-}\!\lim_{n \in \omega} x_n \in \overline{U}$ for every $p \in \omega^*$. Since $U$ was arbitrary, $Q(\omega^*) \subseteq X$. Next we show that $X \subseteq Q(\omega^*)$. Let $U$ be any basic open subset of $[0,1]^\delta$ with $U \cap X \neq \emptyset$. We may find a nice open cover $\mathcal{U}$ of $X$ such that $U \in \mathcal{U}$. By part $(2)$ of the definition of eventual compliance, $Q(\omega^*) \cap U \neq \emptyset$. Because $Q(\omega^*)$ is the continuous image of a compact space, and therefore closed, this shows $X \subseteq Q(\omega^*)$. Lastly, we show that $Q \circ \sigma = f \circ Q$. Fix $p \in \omega^*$, and let $U$ be an open neighborhood of $f(Q(p))$. We may find an open cover $\mathcal{U}$ of $X$ such that $U \in \mathcal{U}$ and $U$ is the only member of $\mathcal{U}$ containing $f(Q(p))$. Applying Lemma~\ref{lem:stars}, we obtain a nice open cover $\mathcal{V}$ of $X$ such that $\mathcal{V}_\star(f(\mathcal{V}_\star(Q(p)) \cap X)) \subseteq U$. Let $m$ be large enough to witness the fact that $\langle x_n : n < \omega \rangle$ is eventually compliant with $\mathcal{V}$. Because $p$ is non-principal, $$A = \{n \geq m : x_n \in \mathcal{V}_\star(Q(p))\} \in p.$$ Using part $(3)$ of the definition of eventual compliance, $x_{n+1} \in U$ for every $n \in A$. Thus $$\textstyle Q(\sigma(p)) = \sigma(p)\mbox{-}\!\lim_{n \in \omega} x_n = p\mbox{-}\!\lim_{n \in \omega} x_{n+1} \in \overline{U}.$$ Because $U$ was an arbitrary open neighborhood of $f(Q(p))$, this shows $Q(\sigma(p)) = f(Q(p))$. Since $p$ was arbitrary, $Q \circ \sigma = f \circ Q$ as desired. This finishes the proof of the first assertion of the lemma. For the converse direction, suppose $Q$ is a quotient mapping from $(\omega^*,\sigma)$ to $(X,f)$. By the Tietze Extension Theorem, $Q$ extends to a continuous function on $\beta\omega$. In other words, there is a sequence $\langle x_n : n < \omega \rangle$ of points in $[0,1]^\delta$ such that $Q(p) = p\mbox{-}\!\lim_{n \in \omega} x_n$ for every $p \in \omega^*$. We want to show that this sequence is eventually compliant with every nice open cover of $X$.
Using the fact that $Q(\omega^*) = X$, it is easy to check parts $(1)$ and $(2)$ of the definition of eventual compliance. To verify $(3)$, let $\mathcal{U}$ be a nice open cover of $X$ and suppose $\langle x_n : n < \omega \rangle$ is not eventually compliant with $\mathcal{U}$. Then there is an infinite $A \subseteq \omega$ such that, for every $a \in A$, $x_{a+1} \notin \mathcal{U}_\star(f(\mathcal{U}_\star(x_a) \cap X))$. Let $p \in A^*$, let $x = Q(p)$, and fix $U \in \mathcal{U}$ with $x \in U$. By definition, $x = p\mbox{-}\!\lim_{n \in \omega} x_n \in U$ implies that for some infinite $B \in p$, $\{x_n : n \in B\} \subseteq U$. Replacing $B$ with $B \cap A$ if necessary, we may assume $B \subseteq A$. $B+1 \in \sigma(p)$, and for all $b \in B$ we have $x_{b+1} \notin \mathcal{U}_\star(f(\mathcal{U}_\star(x_b) \cap X)) \supseteq \mathcal{U}_\star(f(U \cap X))$. Thus $$Q(\sigma(p)) = \sigma(p)\mbox{-}\!\lim_{n \in \omega} x_n = p\mbox{-}\!\lim_{n \in \omega} x_{n+1} \notin \mathcal{U}_\star(f(U \cap X)) \ni f(Q(p)).$$ Thus $Q \circ \sigma(p) \neq f \circ Q(p)$, contradicting the assumption that $Q$ is a quotient mapping. \end{proof} The next two definitions describe a particular kind of eventually compliant sequence, one that has been constructed in a certain way. These are the kinds of sequences that will be used in the next section. As before, suppose $X$ is a closed subspace of $[0,1]^\delta$ and that $f: X \to X$ is continuous. Given a nice open cover $\mathcal{U}$ of $X$ and a fixed point $x \in X$, we say that a finite sequence $\langle x_i : m < i \leq n \rangle$ is a \emph{$\mathcal{U}$-compliant $x$-loop} provided \begin{enumerate} \item $x_n = x$, \item $\{x_i : m < i \leq n\} \subseteq \bigcup \mathcal{U}$, \item $\{x_i : m < i \leq n\} \cap U \neq \emptyset$ for all $U \in \mathcal{U}$, \item $x_{m+1} \in \mathcal{U}_\star(f(\mathcal{U}_\star(x) \cap X))$, and \item for all $i$ with $m < i < n$, we have $x_{i+1} \in \mathcal{U}_\star(f(\mathcal{U}_\star(x_i) \cap X))$. \end{enumerate} A sequence $\langle x_n : n < \omega \rangle$ is \emph{eventually decomposable} into $\mathcal{U}$-compliant $x$-loops if there is some increasing sequence $\langle n_k : k < \omega \rangle$ of natural numbers such that $\langle x_i : n_k < i \leq n_{k+1} \rangle$ is a $\mathcal{U}$-compliant $x$-loop for every $k$ (the ``eventually'' in the name refers to the fact that we do not require $n_0 = 0$). The following three ``lemmas'' have trivial proofs that amount simply to checking the definitions involved, but we record them here as small steps toward the proof in the next section. For each lemma, suppose $X$ is a closed subset of $[0,1]^\delta$, $f: X \to X$ is continuous, and $x \in X$. \begin{lemma}\label{lem:looprefinement} If $\mathcal{U}$ and $\mathcal{V}$ are nice open covers of $X$ and $\mathcal{U}$ refines $\mathcal{V}$, then every $\mathcal{U}$-compliant $x$-loop is also a $\mathcal{V}$-compliant $x$-loop. \end{lemma} \begin{lemma}\label{lem:loopconcatonation} Let $\mathcal{U}$ be a nice open cover of $X$ and let $\langle x_n : n < \omega \rangle$ be a sequence of points that is eventually decomposable into $\mathcal{U}$-compliant $x$-loops. Then $\langle x_n : n < \omega \rangle$ is eventually compliant with $\mathcal{U}$.
\end{lemma} \begin{lemma}\label{lem:coordinates} Let $\mathcal{U}$ be a nice open cover of $X$, and let $F$ denote the finite set of ordinals used in the definition of $\mathcal{U}$. Suppose $\langle x_i : m < i \leq n \rangle$ and $\langle y_i : m < i \leq n \rangle$ are two sequences of points in $[0,1]^\delta$, and that $\pi_\alpha(x_i) = \pi_\alpha(y_i)$ for all $i \leq n$ and $\alpha \in F$. Then $\langle x_i : m < i \leq n \rangle$ is a $\mathcal{U}$-compliant $x$-loop if and only if $\langle y_i : m < i \leq n \rangle$ is. \end{lemma} The observant reader will notice that these sorts of loops appeared already in our proof of the Bowen-Sharkovsky theorem in Section~\ref{sec:background}. We now state the (already proved!) stronger version of Theorem~\ref{thm:BS} that will be used in the proof of the main theorem: \begin{corollary}\label{cor:BS} Let $(X,f)$ be a weakly incompressible metrizable dynamical system, and fix $x \in X$. There is a sequence $\langle x_n : n < \omega \rangle$ of points in $X$ such that, for any nice open cover $\mathcal{U}$ of $X$, $\langle x_n : n < \omega \rangle$ is eventually decomposable into $\mathcal{U}$-compliant $x$-loops. \end{corollary} \section{The main theorem}\label{sec:main} Before beginning the proof of our main theorem, we briefly review the basic theory of elementary submodels, as these will be the main tool for guiding our construction. We recommend \cite{Hdg} for a more thorough treatment of the topic. Given a large, uncountable set $H$, we will consider the structure $(H,\in)$. $M \subseteq H$ is an \emph{elementary submodel} of $H$ if, given any formula $\varphi$ of first-order logic and any $a_1,a_2,\dots,a_n \in M$, $$(H,\in) \models \varphi(a_1,a_2,\dots,a_n) \ \ \ \Leftrightarrow \ \ \ (M,\in) \models \varphi(a_1,a_2,\dots,a_n).$$ In other words, $M$ and $H$ agree with each other on every first-order statement that can be formulated within $M$. For our proof, $H$ will be taken to be the set of all sets hereditarily smaller than $\kappa$ for some sufficiently large regular cardinal $\kappa$. The structure $(H,\in)$ satisfies all the axioms of $\mathrm{ZFC}$ except for the power set axiom, and even this fails only for sets $X$ with $|2^X| \geq \kappa$. This makes $H$ a good substitute for the universe of all sets. Indeed, if $\kappa$ is larger than any set mentioned in our proof, then $H$ satisfies $\mathrm{ZFC}$ for all practical purposes. Suppose $M$ is an elementary submodel of $H$. Since $H$ satisfies (most of) $\mathrm{ZFC}$, so must $M$. Thus objects definable without parameters, like rational numbers, the ordinals $\omega$ and $\omega_1$, and topological spaces like $[0,1]$ or $[0,1]^{\omega_1}$, are all in $M$. A bit more generally, things definable by formulas with parameters in $M$ are in $M$. For example, if $U$ is a basic open subset of $[0,1]^{\omega_1}$ and the ordinals used in the definition of $U$ are all in $M$, then $U \in M$; if $\mathcal{U}$ is a nice open cover of some $X \in M$, and each $U \in \mathcal{U}$ is defined using ordinals in $M$, then $\mathcal{U} \in M$. The existence of elementary submodels of $H$ is guaranteed by the downward L\"{o}wenheim-Skolem theorem (see chapter 3 of \cite{Hdg}). We will use the following version of this theorem to facilitate our proof: \begin{lemma}[L\"{o}wenheim-Skolem]\label{lem:LS} Let $H$ be an uncountable set, and let $A \subseteq H$ be countable.
There exists a sequence $\langle M_\alpha : \alpha < \omega_1 \rangle$ of elementary submodels of $H$ such that \begin{enumerate} \item $A \subseteq M_0$, and $M_\beta \subseteq M_\alpha$ whenever $\beta < \alpha$. \item each $M_\alpha$ is countable. \item for limit $\alpha$, $M_\alpha = \bigcup_{\beta < \alpha}M_\beta$. \item for each $\alpha$, $\langle M_\beta : \beta < \alpha \rangle \in M_{\alpha+1}$. \end{enumerate} \end{lemma} We are now in a position to prove the main theorem. As mentioned in the introduction, our application of elementarity parallels that in Section 3 of \cite{D&H}. In order to make things easier for the reader (especially the reader already familiar with \cite{D&H}), we have tried to match our notation to theirs wherever possible. \begin{theorem}[Main Theorem]\label{thm:main} Suppose $(X,f)$ is a dynamical system with weight $\aleph_1$. Then $(X,f)$ is a quotient of $(\omega^*,\sigma)$ if and only if $f$ is weakly incompressible. \end{theorem} \begin{proof} Every quotient of $(\omega^*,\sigma)$ is weakly incompressible by Lemma~\ref{lem:ct}. We must prove that a weakly incompressible dynamical system with weight $\aleph_1$ is a quotient of $(\omega^*,\sigma)$. Let $(X,f)$ be a weakly incompressible dynamical system with weight $\aleph_1$. Without loss of generality, suppose $X \subseteq [0,1]^{\omega_1}$ and $\vec{0} \in X$. Recall that $[0,1]^{\omega_1}$ is a homogeneous topological space, so $\vec{0} \in X$ really can be assumed without any loss of generality. Using transfinite recursion, we will construct maps $q_\beta: \omega \to [0,1]$. In the end, the diagonal mapping $Q = \Delta_{\beta < \omega_1}q_\beta$ will define a sequence $\langle Q(n) : n < \omega \rangle$ in $[0,1]^{\omega_1}$ that is eventually compliant with every nice open cover of $X$. By Lemma~\ref{lem:mainlemma}, this will be enough to prove the theorem. The recursion will be guided by a sequence of elementary submodels as described in Lemma~\ref{lem:LS}. Fix $\kappa$ sufficiently large, let $H$ denote the set of all sets hereditarily smaller than $\kappa$, and fix a sequence $\langle M_\alpha : \alpha < \omega_1 \rangle$ of countable elementary submodels of $H$ such that \begin{enumerate} \item $X,f \in M_0$. \item $M_\beta \subseteq M_\alpha$ whenever $\beta < \alpha$. \item for limit $\alpha$, $M_\alpha = \bigcup_{\beta < \alpha}M_\beta$. \item for each $\alpha$, $\langle M_\beta : \beta < \alpha \rangle \in M_{\alpha+1}$. \end{enumerate} For each $\alpha < \omega_1$, define $\delta_\alpha = \omega_1 \cap M_\alpha$. It can be shown that if $\beta$ is a countable ordinal in $M_\alpha$, then every ordinal less than $\beta$ is also in $M_\alpha$. It follows that $\delta_\alpha$ is a countable ordinal, namely the supremum of all countable ordinals in $M_\alpha$. For each $\alpha < \omega_1$, let $X_\alpha = \pi_{\delta_\alpha}(X)$. For every $\alpha$, if $\pi_{\delta_\alpha}(x) = \pi_{\delta_\alpha}(y)$, then $\pi_{\delta_\alpha} \circ f(x) = \pi_{\delta_\alpha} \circ f(y)$. This follows from the proof of Lemma~\ref{lem:projections}: the function $\phi$ defined there can be defined inside $M_\alpha$, so that $\delta_\alpha$ must be closed under $\phi$, which means $\delta_\alpha \in C$.
Thus we may define $f_\alpha: X_\alpha \to X_\alpha$ to be the unique self-map of $X_\alpha$ satisfying $\pi_{\delta_\alpha} \circ f = f_\alpha \circ \pi_{\delta_\alpha}$, namely $f_\alpha(x) = \pi_{\delta_\alpha}(f(\pi_{\delta_\alpha}^{-1}(x)))$. This function is continuous by Lemma~\ref{lem:continuity}. $(X_\alpha,f_\alpha)$ is a dynamical system, and $\pi_{\delta_\alpha}$ provides a natural quotient mapping from $(X,f)$ to $(X_\alpha,f_\alpha)$. $X_\alpha$ is metrizable because it is a subset of $[0,1]^{\delta_\alpha}$, and $f_\alpha$ is weakly incompressible by Lemma~\ref{lem:ct} (alternatively, weak incompressibility can be proved directly by an elementarity argument). We may think of the $(X_\alpha,f_\alpha)$ as metrizable ``reflections'' of $(X,f)$. If $\langle x_n : n < \omega \rangle$ is a sequence of points in $[0,1]^{\delta_\alpha}$ for some $\alpha$, let us say that a sequence $\langle y_n : n < \omega \rangle$ of points in $[0,1]^{\omega_1}$ is a \emph{lifting} of $\langle x_n : n < \omega \rangle$ if $\pi_{\delta_\alpha}(y_n) = x_n$ for all $n$ (and similarly for finite sequences of points). We are now in a position to begin our recursive construction of the maps $q_\beta$. Let $\mathcal{U}$ be a nice open cover of $X$ with $\mathcal{U} \in M_0$. Only ordinals less than $\delta_0$ can be used in the definition of $\mathcal{U}$, so $\mathcal{U}$ naturally projects to a nice open cover of $X_0$ in $[0,1]^{\delta_0}$, namely $$\pi_{\delta_0}(\mathcal{U}) = \{\pi_{\delta_0}(U) : U \in \mathcal{U}\}.$$ Applying Corollary~\ref{cor:BS} to $(X_0,f_0)$, we obtain a sequence $\langle x_n : n < \omega \rangle$ of points in $X_0$ such that, for any nice open cover $\mathcal{U}$ of $X$ with $\mathcal{U} \in M_0$, $\langle x_n : n < \omega \rangle$ eventually decomposes into $\pi_{\delta_0}(\mathcal{U})$-compliant $\vec{0}$-loops. By Lemma~\ref{lem:coordinates}, any lifting of $\langle x_n : n < \omega \rangle$ to $[0,1]^{\omega_1}$ eventually decomposes into $\mathcal{U}$-compliant $\vec{0}$-loops for any $\mathcal{U} \in M_0$. For $\beta < \delta_0$, define $q_\beta(n) = \pi_\beta(x_n)$ (in other words, we define the $q_\beta$ so that $\Delta_{\beta < \delta_0}q_\beta$ maps $\omega$ to the sequence just constructed). Let $\langle n_k : k < \omega \rangle$ be an increasing sequence of natural numbers such that for any $\mathcal{U} \in M_0$, $\mathcal{U}$ a nice open cover of $X$, and for all but finitely many $k$, any lifting of $\langle x_i : n_k < i \leq n_{k+1} \rangle$ is a $\mathcal{U}$-compliant $\vec{0}$-loop. For the successor stage of the recursion, let $\alpha < \omega_1$ and suppose the functions $q_\beta$ have already been constructed for every $\beta < \delta_\alpha$. Let $Q_\alpha = \Delta_{\beta < \delta_\alpha}q_\beta$, and suppose the following three inductive hypotheses hold: \begin{itemize} \item [(H1)] $Q_\alpha \in M_{\alpha+1}$. \item [(H2)] $Q_\alpha(n_k) = \vec{0}$ for all $k < \omega$. \item [(H3)] For any nice open cover $\mathcal{U}$ of $X$ with $\mathcal{U} \in M_\alpha$ and for all but finitely many $k$, any lifting of $\langle Q_\alpha(i) : n_k < i \leq n_{k+1} \rangle$ is a $\mathcal{U}$-compliant $\vec{0}$-loop. \end{itemize} We will show how to obtain $q_\beta$ for $\delta_\alpha \leq \beta < \delta_{\alpha+1}$.
Because $M_{\alpha+1}$ is countable, there are only countably many nice open covers of $X$ in $M_{\alpha+1}$, namely those that are definable from ordinals less than $\delta_{\alpha+1}$. Also, any two nice open covers of $X$ in $M_{\alpha+1}$, say $\mathcal{V}$ and $\mathcal{W}$, have a common refinement that is also a nice open cover of $X$ in $M_{\alpha+1}$; e.g., one such common refinement is $$\{V \cap W : V \in \mathcal{V},\ W \in \mathcal{W}, \text{ and } V \cap W \cap X \neq \emptyset\}.$$ Thus we may find a countable sequence $\langle \mathcal{U}_m : m < \omega \rangle$ of nice open covers of $X$ such that \begin{enumerate} \item $\mathcal{U}_m \in M_{\alpha+1}$ for every $m$, \item $\mathcal{U}_n$ refines $\mathcal{U}_m$ whenever $m \leq n$, and \item if $\mathcal{U}$ is any nice open cover of $X$ in $M_{\alpha+1}$, then $\mathcal{U}_m$ refines $\mathcal{U}$ for some $m$. \end{enumerate} Fix $m \in \omega$ and consider $\mathcal{U}_m$. The set of ordinals used in the definition of $\mathcal{U}_m$ is finite and may be split into two parts: those ordinals that are below $\delta_\alpha$, which we call $F_m^0$, and those that are in the interval $[\delta_\alpha,\delta_{\alpha+1})$, which we call $F_m^1$. The ordinals $F_m^1$ are not in $M_\alpha$, but we may use elementarity to find a finite set of ordinals $G_m$ in $M_\alpha$ that reflects the set $F_m^1$. More formally, suppose that we write down in the language of first-order logic a (very long) formula $\varphi^m$ that does all of the following: \begin{enumerate} \item $\varphi^m$ defines $\mathcal{U}_m$ in terms of $F_m^0 \cup F_m^1$, \item $\varphi^m$ asserts that $\mathcal{U}_m$ is a nice open cover of $X$, \item $\varphi^m$ records information about how $\mathcal{U}_m$ interacts with $X$ and $f$: \begin{enumerate} \item for all $\mathcal{J} \subseteq \mathcal{U}_m$, $\varphi^m$ asserts either that $\bigcap \mathcal{J} \cap X = \emptyset$ or that $\bigcap \mathcal{J} \cap X \neq \emptyset$, \item if $\mathcal{J} \subseteq \mathcal{U}_m$, $\bigcap \mathcal{J} \cap X \neq \emptyset$, and $U \in \mathcal{U}_m$, then $\varphi^m$ asserts either that $f(\bigcup \mathcal{J} \cap X) \cap U = \emptyset$ or that $f(\bigcup \mathcal{J} \cap X) \cap U \neq \emptyset$. \end{enumerate} \end{enumerate} Given a finite sequence of points, the information contained in $(1)$ is enough to determine precisely which elements of $\mathcal{U}_m$ contain each member of the sequence. Once that is known, the information in $(3)$ is enough to determine whether or not that sequence is a $\mathcal{U}_m$-compliant $\vec{0}$-loop. By elementarity, there is a finite set $G_m$ of ordinals in $M_\alpha$ such that $\varphi^m$ remains true when the members of $F_m^1$ are replaced with the members of $G_m$. For each $\beta \in F_m^1$, let $\beta_m$ denote the corresponding member of $G_m$. Let $\mathcal{V}_m$ be the nice open cover of $X$ that is defined via $\varphi^m$, but substituting the members of $G_m$ in place of the corresponding members of $F_m^1$. We think of $\mathcal{V}_m$ as the reflection of $\mathcal{U}_m$ in $M_\alpha$.
Let $k(m) \in \omega$ be the least natural number with the property that for all $k \gammaeq k(m)$, any lifting of $\sigmaeq{Q_\alpha(i)}{n_{k} < i \leq n_{k+1}}$ is a $\mathrm{MA}thcal{V}_m$-compliant $\vec{0}$-loop. This $k(m)$ exists by our third inductive hypothesis. If $m < n$, then $\mathrm{MA}thcal{V}_n$ refines $\mathrm{MA}thcal{V}_m$, so $k(m) \leq k(n)$ by Lemma~\ref{lem:looprefinement}. We are now in a position to define the maps $q_\beta$ for $\delta_\alpha \leq \beta < \delta_{\alpha+1}$: $$ q_\beta(i) = \betaegin{cases} 0 & \tauextrm{ if $n_{k(m)} < i \leq n_{k(m+1)}$ and $\beta \notin F_m^1$,} \\ q_{\beta_m}(i) & \tauextrm{ if $n_{k(m)} < i \leq n_{k(m+1)}$ and $\beta \in F_m^1$.} \varepsilonnd{cases} $$ Roughly, this says that $q_\beta$ assumes the behavior of its mirror image $q_{\beta_m}$ on the interval between $n_{k(m)}$ and $n_{k(m+1)}$, provided some suitable mirror image has already been found. As $m$ increases, the $\beta_m$ become better and better reflections of $\beta$, because the formulas $\varphi^m$ include more and more information about $X$ and $f$. With the $q_\beta$ thus defined, we need to check that our three inductive hypotheses remain true at the next stage of the recursion. For the first hypothesis, note that, because we have $\sigmaeq{M_\beta}{\beta < \alpha+1} \in M_{\alpha+2}$, the construction of the $q_\beta$, $\delta_\alpha \leq \beta < \delta_{\alpha+1}$, can be carried out in $M_{\alpha+2}$. Thus the result of this construction, namely $Q_{\alpha+1} = \Delta_{\beta < \delta_{\alpha+1}}q_\beta$, is a member of $M_{\alpha+2}$, as desired. The second inductive hypothesis, that $Q_{\alpha+1}(n_k) = \vec{0}$ for all $k$, is clear from the definition of the $q_\beta$. For the third inductive hypothesis, let $\mathrm{MA}thcal U$ be a nice open cover of $X$ with $\mathrm{MA}thcal{U} \in M_{\alpha+1}$ and fix $m$ large enough so that $\mathrm{MA}thcal U_m$ refines $\mathrm{MA}thcal U$. By Lemma~\ref{lem:looprefinement}, it is enough to check that for all but finitely many $k$, any lifting of $\sigmaeq{Q_{\alpha+1}(i)}{n_k \leq i \leq n_{k+1}}$ is a $\mathrm{MA}thcal U_m$-compliant $\vec{0}$-loop. By the definition of $k(m)$, if $k(m) \leq k < k(m+1)$ then any lifting of $\sigmaeq{Q_{\alpha}(i)}{n_k < i \leq n_{k+1}}$ is a $\mathrm{MA}thcal V_m$-compliant $\vec{0}$-loop. Of course, by Lemma~\ref{lem:coordinates} only the coordinates in $F_m^0 \cup G_m$ are relevant to determining this fact. More specifically, in $M_\alpha$ it may be proved that any sequence $\sigmaeq{x(i)}{n_k < i \leq n_{k+1}}$ agreeing with $\sigmaeq{Q_{\alpha}(i)}{n_k < i \leq n_{k+1}}$ on the members of $F_m^0 \cup G_m$ is a $\vec{0}$-loop compliant with the nice open cover of $X$ defined by $\varphi^m$ using $F_m^0 \cup G_m$, namely $\mathrm{MA}thcal{V}_m$. By our definition of the $q_\beta$, $\sigmaeq{Q_{\alpha+1}(i)}{n_k < i \leq n_{k+1}}$ is such a sequence, except that we have replaced the members of $G_m$ with the members of $F_m^1$. By elementarity and our choice of the $G_m$, if $k(m) \leq k < k(m+1)$ then any lifting of $\sigmaeq{Q_{\alpha}(i)}{n_k < i \leq n_{k+1}}$ is a $\vec{0}$-loop compliant with the nice open cover of $X$ defined by $\varphi^m$ using $F_m^0 \cup F_m^1$, namely $\mathrm{MA}thcal{U}_m$. Given any $k \gammaeq k(m)$, the same argument shows that if $\varepsilonll$ is the natural number with $k(\varepsilonll) \leq k < k(\varepsilonll+1)$, then $\sigmaeq{Q_{\alpha}(i)}{n_k < i \leq n_{k+1}}$ is a $\mathrm{MA}thcal{U}_\varepsilonll$-compliant $\vec{0}$-loop. 
By Lemma~\ref{lem:looprefinement} and the fact that $\mathrm{MA}thcal{U}_\varepsilonll$ refines $\mathrm{MA}thcal U_m$, $\sigmaeq{Q_{\alpha+1}(i)}{n_k \leq i \leq n_{k+1}}$ is a $\mathrm{MA}thcal U_m$-compliant $\vec{0}$-loop for all $k \gammaeq k(m)$. This proves the third inductive hypothesis and completes the successor step of our recursion. At limit stages there is nothing to construct: due to our choice of the $M_\alpha$, we have $\delta_\alpha = \betaigcup_{\beta < \alpha}\delta_\beta$ for limit $\alpha$, so that all the $q_\beta$, $\beta < \delta_\alpha$, have already been defined by stage $\alpha$. We only need to check for limit $\alpha$ that our inductive hypotheses remain true. The first hypothesis is true because $\sigmaeq{M_\beta}{\beta < \alpha} \in M_{\alpha+1}$. The second hypothesis is true at $\alpha$ if it is true at every $\beta < \alpha$. To check the third hypothesis, suppose $\mathrm{MA}thcal U$ is a nice open cover of $X$ with $\mathrm{MA}thcal{U} \in M_\alpha$. $\mathrm{MA}thcal U$ is defined using only finitely many ordinals less than $\delta_\alpha$, so $\mathrm{MA}thcal U \in M_\beta$ already for some $\beta < \alpha$. At stage $\beta$, we ensured that any lifting of $\sigmaeq{Q_\beta(i)}{n_k \leq i \leq n_{k+1}}$ is a $\mathrm{MA}thcal U$-compliant $\vec{0}$-loop for all but finitely many $k$. But $Q_\alpha$ agrees with $Q_\beta$ on all coordinates below $\delta_\beta$, so any lifting of $\sigmaeq{Q_\alpha(i)}{n_k \leq i \leq n_{k+1}}$ is also a lifting of $\sigmaeq{Q_\beta(i)}{n_k \leq i \leq n_{k+1}}$, and is therefore a $\mathrm{MA}thcal U$-compliant $\vec{0}$-loop. This completes our recursion. We claim that the map $Q = \Delta_{\alpha < \omega_1}q_\alpha$ is as required; i.e., the sequence $\sigmaeq{Q(n)}{n < \omega}$ is eventually compliant with every nice open cover of $X$. Indeed, if $\mathrm{MA}thcal U$ is a nice open cover of $X$, then $\mathrm{MA}thcal U$ is defined by finitely many ordinals, so it was considered at some stage $\alpha$ of our recursion. At that stage we guaranteed that any lifting of $\sigmaeq{Q_\alpha(n)}{n < \omega}$ is eventually decomposable into $\mathrm{MA}thcal U$-compliant $\vec{0}$-loops. $\sigmaeq{Q(n)}{n < \omega}$ is such a lifting, so it is eventually compliant with $\mathrm{MA}thcal{U}$. \varepsilonnd{proof} \sigmaection{Related results}\label{sec:MA} \sigmaubsection*{Two corollaries} Consider the following two theorems, both discussed in the introduction: \betaegin{itemize} \item (Parovi\v{c}enko, \cite{Par}) \varepsilonmph{Every compact Hausdorff space of weight $\alphaleph_1$ is a continuous image of $\omega^*$.} \item (Dow-Hart, \cite{D&H}) \varepsilonmph{Every connected compact Hausdorff space of weight $\alphaleph_1$ is a continuous image of $\mathrm{MA}thbb H^*$, where $\mathrm{MA}thbb H = [0,\infty)$.} \varepsilonnd{itemize} We begin this section by showing that both of these theorems can be derived as fairly straightforward consequences of Theorem~\ref{thm:main}. \betaegin{lemma}\label{lem:counterexample} Let $Y$ be a compact Hausdorff space of weight $\kappa$. There is a weakly incompressible dynamical system $(X,f)$ such that $X$ also has weight $\kappa$ and $Y$ is clopen in $X$. \varepsilonnd{lemma} \betaegin{proof} Let $Y$ be a compact Hausdorff space of weight $\kappa$. Let $X$ be the one-point compactification of $\mathrm{MA}thbb{Z} \tauimes Y$, where $\mathrm{MA}thbb{Z}$ is given the discrete topology. 
Let $*$ denote the unique point of $X - \mathrm{MA}thbb{Z} \tauimes Y$, and define $f: X \tauo X$ so that $f(*) = *$, and $f(n,y) = (n+1,y)$. Clearly, $f$ is continuous, $X$ has weight $\kappa$, and $Y$ is (homeomorphic to) a clopen subset of $X$. It remains to show that $(X,f)$ is chain transitive. Let $\mathrm{MA}thcal U$ be any open cover of $X$ and $a,b \in X$. To find a $\mathrm{MA}thcal U$-chain from $a$ to $b$, fix $U \in \mathrm{MA}thcal U$ with $* \in U$. If $a = *$ and $b = (n,y)$, we may choose $m$ small enough that $m < n$ and $(m,y) \in U$. Then $$\langle*,(m,y),(m+1,y),\dots,(n,y)\rangle$$ is a $\mathrm{MA}thcal U$-chain from $a$ to $b$. Similarly if $a = (m,y)$ and $b = *$, choose $n$ large enough that $n > m$ and $(n,y) \in U$. Then $$\langle(m,y),(m+1,y),\dots,(n,y),*\rangle$$ is a $\mathrm{MA}thcal U$-chain from $a$ to $b$. If $a \neq * \neq b$, then we may get a $\mathrm{MA}thcal U$-chain from $a$ to $b$ by concatonating a $\mathrm{MA}thcal{U}$-chain from $a$ to $*$ with a $\mathrm{MA}thcal{U}$-chain from $*$ to $b$. Thus $(X,f)$ is chain transitive. \varepsilonnd{proof} Parovi\v{c}enko's theorem follows immediately from Theorem~\ref{thm:main} and the next result: \betaegin{proposition}\label{prop:counterexample} Suppose every weakly incompressible dynamical system of weight $\kappa$ is a quotient of $(\omega^*,\sigma)$. Then every compact Hausdorff space of weight $\kappa$ is a continuous image of $\omega^*$. \varepsilonnd{proposition} \betaegin{proof} Suppose every weakly incompressible dynamical system of weight $\kappa$ is a quotient of $(\omega^*,\sigma)$, and let $Y$ be a compact Hausdorff space of weight $\kappa$. Let $(X,f)$ be the dynamical system guaranteed by Lemma~\ref{lem:counterexample}. $(X,f)$ is a quotient of $(\omega^*,\sigma)$, so in particular there is a continuous surjection $Q: \omega^* \tauo X$. The pre-image of $Y$ is clopen in $\omega^*$, and therefore homeomorphic to $\omega^*$. The restriction of $Q$ to $Q^{-1}(Y)$ provides a continuous surjection from (a copy of) $\omega^*$ to $Y$. \varepsilonnd{proof} Observe that a compact Hausdorff space $X$ is connected if and only if $(X,\mathrm{MA}thrm{id})$ is a weakly incompressible dynamical system. With this in mind, Theorem~\ref{thm:main} and the following proposition immediately imply the theorem of Dow and Hart: \betaegin{proposition} If $(X,\mathrm{MA}thrm{id})$ is a quotient of $(\omega^*,\sigma)$ then $X$ is a continuous image of $\mathrm{MA}thbb H^*$. \varepsilonnd{proposition} \betaegin{proof} Suppose $(X,\mathrm{MA}thrm{id})$ is a quotient of $(\omega^*,\sigma)$, and assume that $X \sigmaub [0,1]^\delta$ for some $\delta$. By Theorem~\ref{thm:main} and the second part of Lemma~\ref{lem:mainlemma}, there is a sequence $\sigmaeq{x_n}{n < \omega}$ of points in $[0,1]^\delta$ that is eventually compliant with every nice open cover of $X$. Define a map $q: \mathrm{MA}thbb H \tauo [0,1]^\delta$ by sending $n$ to $x_n$ for each integer $n$, and then extending $q$ linearly to the rest of $\mathrm{MA}thbb H$. This function induces a map $Q: \mathrm{MA}thbb H^* \tauo [0,1]^\delta$, and we claim that $Q$ is a continuous surjection from $\mathrm{MA}thbb H^*$ to $X$. $Q$ is continuous by definition. We see that $Q(\mathrm{MA}thbb H^*) \sigmaupseteq X$ by considering those elements of $\mathrm{MA}thbb H^*$ that are supported on the integers. It remains to show $Q(\mathrm{MA}thbb H^*) \sigmaub X$. 
Let $W$ be an open set containing $X$ and let $\mathrm{MA}thcal{U}$ be a nice open cover with $\betaigcup \mathrm{MA}thcal{U} \sigmaub W$. Let $\mathrm{MA}thcal{V}$ be a star refinement of a star refinement of $\mathrm{MA}thcal{U}$. Because $\sigmaeq{x_n}{n < \omega}$ is eventually compliant with $\mathrm{MA}thcal{V}$, there is some $m$ such that for all $n \gammaeq m$, $x_{n+1} \in \mathrm{MA}thcal{V}_\sigmatar(\mathrm{MA}thcal{V}_\sigmatar(x_n))$. By our choice of $\mathrm{MA}thcal{V}$, there is some $U \in \mathrm{MA}thcal{U}$ with $x_n,x_{n+1} \in U$. As every basic open subset of $[0,1]^\delta$ is convex, $q(r) \in U$ for all $r \in [x_n,x_{n+1}]$. Thus $q(r) \in W$ for every $r \in [m,\infty)$, which implies $Q(\mathrm{MA}thbb H^*) \sigmaub W$. Since $W$ was arbitrary, $Q(\mathrm{MA}thbb H^*) \sigmaub X$. \varepsilonnd{proof} \sigmaubsection*{The first and fourth heads of $\beta\omega$} If we assume the Continuum Hypothesis, then Theorem~\ref{thm:main} gives a complete internal characterization of the quotients of $(\omega^*,\sigma)$: \betaegin{theorem}\label{thm:ch} Assuming $\mathrm{MA}thrm{CH}$, the following are equivalent: \betaegin{enumerate} \item $(X,f)$ is a quotient of $(\omega^*,\sigma)$. \item $X$ has weight at most $\mathrm{MA}thfrak{c}$ and $f$ is weakly incompressible. \item $X$ is a continuous image of $\omega^*$ and $f$ is weakly incompressible. \varepsilonnd{enumerate} \varepsilonnd{theorem} \betaegin{proof} The equivalence of $(1)$ and $(2)$ is a straightforward consequence of Theorem~\ref{thm:main} and $\mathrm{MA}thrm{CH}$. The equivalence of $(2)$ and $(3)$ is a straightforward consequence of Parovi\v{c}enko's characterization of the continuous images of $\omega^*$ under $\mathrm{MA}thrm{CH}$. \varepsilonnd{proof} Of the six implications this theorem entails, three are provable from $\mathrm{MA}thrm{ZFC}$: $(1) \mathrm{MA}thbb{R}ightarrow (2)$, $(1) \mathrm{MA}thbb{R}ightarrow (3)$, and $(3) \mathrm{MA}thbb{R}ightarrow (2)$. We will now consider the other three, and show that each of them is independent of $\mathrm{MA}thrm{ZFC}$. Lemma~\ref{lem:counterexample} shows that $(2) \mathrm{MA}thbb{R}ightarrow (3)$ if and only if every compact Hausdorff space of weight $\leq \mathrm{MA}thfrak{c}$ is a continuous image of $\omega^*$. This is a purely topological question about $\omega^*$ that is considered elsewhere, e.g. in \cite{JvM}. It is known to be independent: for example, a result of Kunen states that $\omega_2+1$ is not a continuous image of $\omega^*$ in the Cohen model. Because $(1) \mathrm{MA}thbb{R}ightarrow (3)$ is a theorem of $\mathrm{MA}thrm{ZFC}$, the previous paragraph also shows that $(2) \mathrm{MA}thbb{R}ightarrow (1)$ is independent. The independence of $(3) \mathrm{MA}thbb{R}ightarrow (1)$ requires a different argument. Consider the following corollary to Theorem~\ref{thm:ch}: \betaegin{corollary}\label{cor:quotient} Assuming $\mathrm{MA}thrm{CH}$, $(\omega^*,\sigma^{-1})$ is a quotient of $(\omega^*,\sigma)$. \varepsilonnd{corollary} \betaegin{proof} The proof is immediate from Theorem~\ref{thm:ch} and the following observation: \varepsilonmph{If $X$ is a compact Hausdorff space and $f: X \tauo X$ is a homeomorphism, then $f$ is weakly incompressible if and only if $f^{-1}$ is.} This is easy to see using chain transitivity: given an open cover $\mathrm{MA}thcal{U}$ of $X$ and any $a,b \in X$, $(X,f)$ has a $\mathrm{MA}thcal{U}$-chain from $a$ to $b$ if and only if $(X,f^{-1})$ has a $\mathrm{MA}thcal{U}$-chain from $b$ to $a$. 
\varepsilonnd{proof} To show that $(3) \mathrm{MA}thbb{R}ightarrow (1)$ is independent, it is enough to prove that the conclusion of Corollary~\ref{cor:quotient} is independent. \betaegin{theorem}\label{thm:farah} Assuming $\mathrm{MA}thrm{OCA}ma$, $(\omega^*,\sigma^{-1})$ is not a quotient of $(\omega^*,\sigma)$. \varepsilonnd{theorem} Recall that a continuous function $F: \omega^* \tauo \omega^*$ is \varepsilonmph{trivial} if there is a function $f: \omega \tauo \beta\omega$ such that $F = \beta f \!\restriction\! \omega^*$. Similarly, $F: A^* \tauo \omega^*$ is trivial if it is induced by a function $A \tauo \beta\omega$. To prove Theorem~\ref{thm:farah}, we will use a deep theorem greatly restricting the kinds of self-maps of $\omega^*$ we find under $\mathrm{MA}thrm{OCA}ma$. A very general version of the result is proved by Farah in \cite{IF1}, but we need only a special case, which is already implicit in the work of Velickovic \cite{Vel}, and has precursors in the work of Shelah-Stepr\={a}ns \cite{S&S} and Shelah \cite{SSh}. \betaegin{theorem}[Farah, et al.]\label{thm:ocama} Assuming $\mathrm{MA}thrm{OCA}ma$, for any continuous $F: \omega^* \tauo \omega^*$ there is some $A \sigmaub \omega$ such that $F \!\restriction\! A^*$ is trivial and $F(\omega^* - A^*)$ is nowhere dense. \varepsilonnd{theorem} \betaegin{proof}[Proof of Theorem~\ref{thm:farah}] Suppose $Q$ is a quotient mapping from $(\omega^*,\sigma)$ to $(\omega^*,\sigma^{-1})$. Using Theorem~\ref{thm:ocama}, fix $A \sigmaub \omega$ such that $Q \!\restriction\! A^*$ is trivial and $Q(\omega^* - A^*)$ is nowhere dense. Also, fix $q: A \tauo \beta\omega$ such that $Q \!\restriction\! A^* = \beta q \!\restriction\! A^*$. Because $Q$ is surjective, $A$ must be infinite. Let $X = \sigmaet{a \in A}{q(a) \in \omega}$. Observe that $Q \!\restriction\! X$ remains trivial and that $Q(\omega^* - X^*)$ remains nowhere dense. Thus, replacing $A$ with $X$ if necessary, we may (and do) assume that $q(a) \in \omega$ for all $a \in A$. If $q$ is not finite-to-one on $A$, there is an infinite set $X \sigmaub A$ and some $n \in \omega$ with $q(X) = n$, but then $Q(p) = n$ for any $p \in X^*$, a contradiction. Thus $q$ is finite-to-one on $A$. Suppose $A$ is not co-finite. Then $$B = \sigmaet{a \in A}{a+1 \notin A}$$ is infinite. Using basic facts about Stone extensions, $\sigma^{-1} \circ Q(B^*) = (q(B)-1)^*$. This set is clopen (in particular, it has nonempty interior), so we may find some $p \in B^*$ such that $\sigma^{-1} \circ Q (p) \notin Q(\omega^* - A^*)$. However, $Q \circ \sigma(p) \in Q \circ \sigma(B^*) = Q((B+1)^*) \sigmaub Q(\omega^* - A^*)$, so that $\sigma^{-1} \circ Q(p) \neq Q \circ \sigma(p)$, a contradiction. Thus $A$ is co-finite, and $Q = \beta q \!\restriction\! \omega^*$ for some finite-to-one function $q: A \tauo \omega$. Since changing $q$ on a finite set does not change $Q = \beta q \!\restriction\! \omega^*$, we may assume $A = \omega$, and $Q$ is induced by a finite-to-one function $q: \omega \tauo \omega$. We now construct an infinite sequence of natural numbers as follows. Pick $b_0 \in \omega$ arbitrarily. Assuming $b_0,b_1,\dots,b_n$ are given, there are co-finitely many $b \in \omega$ satisfying \betaegin{enumerate} \item $b \neq b_0,b_1,\dots,b_n$, \item $q(b)-1 \neq q(b_0+1),q(b_1+1),\dots,q(b_n+1)$, and \item $q(b+1) \neq q(b_0)-1,q(b_1)-1,\dots,q(b_n)-1$. 
\varepsilonnd{enumerate} Also, a straightforward argument by contradiction shows that there are infinitely many $b \in \omega$ satisfying \betaegin{enumerate} \sigmaetcounter{enumi}{3} \item $q(b_{n+1})-1 \neq q(b_{n+1}+1)$. \varepsilonnd{enumerate} Thus we may choose some $b_{n+1} \in \omega$ satisfying $(1) - (4)$. Let $B = \sigmaet{b_n}{n < \omega}$ and let $p \in B^*$. Then $Q \circ \sigma(p) \in q(B+1)^*$ and $\sigma^{-1} \circ Q(p) \in (q(B) - 1)^*$. By construction, $q(B+1) \cap (q(B) - 1) = \varepsilonmptyset$, which shows $Q \circ \sigma(p) \neq \sigma^{-1} \circ Q(p)$. \varepsilonnd{proof} We do not know whether Corollary~\ref{cor:quotient} can be improved from a quotient mapping to an isomorphism: \betaegin{question} Is it consistent that there is a homeomorphism $H: \omega^* \tauo \omega^*$ with $H \circ \sigma = \sigma^{-1} \circ H$? \varepsilonnd{question} Observe that our proof of Theorem~\ref{thm:main} cannot produce a homeomorphism: in the quotient mapping constructed there, the inverse image of $\vec{0}$ has nonempty interior. Therefore some new idea would be needed to answer this question in the affirmative. We point out that if the answer to this question is yes, then it seems likely that $\mathrm{MA}thrm{CH}$ will imply the existence of such an isomorphism already (see Section 5.1 of \cite{IF2}). See \cite{SG2} for some partial results. \sigmaubsection*{An extension using Martin's Axiom} We end with an extension of Theorem~\ref{thm:main} to cardinals $\kappa < \mathrm{MA}thbb{P}seudo$. \betaegin{theorem}\label{thm:MA} Let $(X,f)$ be a dynamical system with the weight of $X$ less than $\mathrm{MA}thbb{P}seudo$. Then $(X,f)$ is a quotient of $(\omega^*,\sigma)$ if and only if $f$ is weakly incompressible. \varepsilonnd{theorem} \betaegin{proof} Let $(X,f)$ be a weakly incompressible dynamical system, and let $\kappa$ be the weight of $X$. Suppose $\kappa < \mathrm{MA}thbb{P}seudo$. By a theorem of M. Bell, this is equivalent to assuming $\mathrm{MA}sk$, Martin's Axiom at $\kappa$ for $\sigma$-centered posets. We may (and do) assume that $X \sigmaub [0,1]^\kappa$. We will use $\mathrm{MA}sk$ to construct a sequence of points in $[0,1]^\kappa$ that is eventually compliant with every nice open cover of $X$. Recall that $[0,1]^\kappa$ is separable, and fix a countable dense $D \sigmaub [0,1]^\kappa$. Let us assume that $X$ is nowhere dense in $[0,1]^\kappa$ and that $X \cap D = \varepsilonmptyset$. This assumption does not sacrifice any generality, since we could always just replace $[0,1]^\kappa$ with $[0,1] \tauimes [0,1]^\kappa$, identify $X$ with $\{0\} \tauimes X$, and replace $D$ with $(\mathrm{MA}thbb{Q} \cap (0,1]) \tauimes D$. Fix $x \in X$. Let $\mathrm{MA}thbb{P}$ be the set of all pairs $\langles,\mathrm{MA}thcal{U}\rangle$, such that $s$ is a sequence of distinct points in $D$ and $\mathrm{MA}thcal{U}$ is a nice open cover of $X$. Order $\mathrm{MA}thbb{P}$ by defining $\langlet,\mathrm{MA}thcal{V}\rangle \leq \langles,\mathrm{MA}thcal{U}\rangle$ if and only if \betaegin{itemize} \item $s$ is an initial segment of $t$. \item $\mathrm{MA}thcal{V}$ refines $\mathrm{MA}thcal{U}$. \item either $t = s$, or $t-s$ is a $\mathrm{MA}thcal{U}$-compliant $x$-loop. \varepsilonnd{itemize} Ultimately, we will use $\mathrm{MA}sk$ to obtain a suitably generic $G \sigmaub \mathrm{MA}thbb{P}$, and then $\gamma = \betaigcup \sigmaet{s}{\langles,\mathrm{MA}thcal{U}\rangle \in G}$ will be the desired sequence of points. 
Roughly, a condition $\langles,\mathrm{MA}thcal{U}\rangle$ is a promise that $s$ is an initial segment of $\gamma$, and that the part of $\gamma$ after $s$ will decompose into $\mathrm{MA}thcal{U}$-compliant $x$-loops. $\mathrm{MA}thbb{P}$ is clearly reflexive, and if $\langleu,\mathrm{MA}thcal W\rangle \leq \langlet,\mathrm{MA}thcal{V}\rangle \leq \langles,\mathrm{MA}thcal{U}\rangle$ then $\langleu,\mathrm{MA}thcal W\rangle \leq \langles,\mathrm{MA}thcal{U}\rangle$ by Lemma~\ref{lem:looprefinement}. Thus $\mathrm{MA}thbb{P}$ is a pre-order, and it makes sense to talk about forcing with $\mathrm{MA}thbb{P}$. Because $D$ is countable, there are only countably many possibilities for the first coordinate of a condition in $\mathrm{MA}thbb{P}$. To show that $\mathrm{MA}thbb{P}$ is $\sigma$-centered, it suffices to show that if two conditions $\langles,\mathrm{MA}thcal{U}\rangle$, $\langles,\mathrm{MA}thcal{V}\rangle$ have the same first coordinate $s$, then they have a common extension. Taking $\mathrm{MA}thcal W$ to be any nice open cover of $X$ that refines both $\mathrm{MA}thcal{U}$ and $\mathrm{MA}thcal{V}$ (for example $\mathrm{MA}thcal W = \sigmaet{U \cap V}{U \in \mathrm{MA}thcal{U}, V \in \mathrm{MA}thcal{V}, \tauext{ and } U \cap V \cap X \neq \varepsilonmptyset}$), then $\langles,\mathrm{MA}thcal W\rangle \leq \langles,\mathrm{MA}thcal{U}\rangle$ and $\langles,\mathrm{MA}thcal W\rangle \leq \langles,\mathrm{MA}thcal{V}\rangle$. Thus $\mathrm{MA}thbb{P}$ is $\sigma$-centered. If $\mathrm{MA}thcal{U}$ is a nice open cover of $X$, define $$D_{\mathrm{MA}thcal{U}} = \sigmaet{\langles,\mathrm{MA}thcal{V}\rangle \in \mathrm{MA}thbb{P}}{\mathrm{MA}thcal{V} \tauext{ refines } \mathrm{MA}thcal{U}}.$$ We claim that $D_\mathrm{MA}thcal{U}$ is dense in $\mathrm{MA}thbb{P}$. To see this, fix a nice open cover $\mathrm{MA}thcal{U}$ of $X$ and let $\langles,\mathrm{MA}thcal{V}\rangle \in \mathrm{MA}thbb{P}$. Clearly $\langles,\mathrm{MA}thcal{U}\rangle \in \mathrm{MA}thbb{P}$, and we have already seen (in the previous paragraph) that any two conditions in $\mathrm{MA}thbb{P}$ with the same first coordinate have a common extension. This common extension is in $D_{\mathrm{MA}thcal{U}}$ and below $\langles,\mathrm{MA}thcal{V}\rangle$, so $D_\mathrm{MA}thcal{U}$ is dense in $\mathrm{MA}thbb{P}$. By $\mathrm{MA}sk$, there is a filter $G$ on $\mathrm{MA}thbb{P}$ such that $D_{\mathrm{MA}thcal{U}} \cap G \neq \varepsilonmptyset$ for every nice open cover $\mathrm{MA}thcal{U}$ of $X$. Let $\gamma = \betaigcup \sigmaet{s}{\langles,\mathrm{MA}thcal{U}\rangle \in G}$. For any nice open cover $\mathrm{MA}thcal{U}$ of $X$, $\gamma$ is eventually compliant with $\mathrm{MA}thcal{U}$ precisely because $G \cap D_{\mathrm{MA}thcal{U}} \neq \varepsilonmptyset$. An application of Lemma~\ref{lem:mainlemma} completes the proof. \varepsilonnd{proof} A topic left open by Theorems \ref{thm:main} and \ref{thm:MA} is how to construct quotients or isomorphisms from $(\omega^*,\sigma)$ to dynamical systems of weight $\mathrm{MA}thfrak{c}$ when $\mathrm{MA}thrm{CH}$ fails. The following question is a particularly interesting possibility related to the Katowice problem: \betaegin{question} Is it consistent to have a weakly incompressible autohomeomorphism of $\omega_1^*$? \varepsilonnd{question} If $F$ were such a map, then $F$ cannot be trivial on any set of the form $A^*$, with $A$ co-countable. It is consistent that no such map exists, but it is not currently known whether the opposite is also consistent. 
See \cite{L&M} for some discussion of this problem and related results. \betaegin{thebibliography}{99} \betaibitem{Akn} E. Akin, \varepsilonmph{The General Topology of Dynamical Systems}, Graduate Studies in Mathematics, American Mathematical Society, reprint edition, 2010. \betaibitem{BGOR} A. D. Barwell, C. Good, P. Oprocha, and B. E. Raines, ``Characterizations of $\omega$-limit sets of topologically hyperbolic spaces,'' \varepsilonmph{Discrete and Continuous Dynamical Systems} \tauextbf{33}, no. 5 (2013), pp. 1819-1833. \betaibitem{Bel} M. Bell, ``On the combinatorial principle $P(c)$,'' \varepsilonmph{Fundamenta Mathematicae} \tauextbf{114} (1981), pp. 149-157. \betaibitem{Bls} A. Blass, ``Ultrafilters: where topological dynamics = algebra = combinatorics,'' \varepsilonmph{Topology Proceedings} \tauextbf{18} (1993), pp. 33-56. \betaibitem{B&S} A. B\l aszczyk and A. Szyma\'nski, ``Concerning Parovi\v{c}enko's theorem,'' \varepsilonmph{Bulletin de L'Academie Polonaise des Sciences S\'erie des sciences math\'ematiques} \tauextbf{28} (1980), pp. 311-314. \betaibitem{Bwn} R. Bowen, ``$\omega$-limit sets for axiom $A$ diffeomorphisms,'' \varepsilonmph{Journal of Differential Equations} \tauextbf{18} (1975), pp. 333-339. \betaibitem{WRB} W. R. Brian, ``$P$-sets and minimal right ideals in $\mathrm{MA}thbb{N}^*$,'' \varepsilonmph{Fundamenta Mathematicae} \tauextbf{229} (2015), pp. 277-293. \betaibitem{vDP} E. K. van Douwen and T. C. Przymusi\'nski, ``Separable extensions of first-countable spaces,'' \varepsilonmph{Fundamenta Mathematicae} \tauextbf{111} (1980), pp. 147 - 158. \betaibitem{D&H} A. Dow and K. P. Hart, ``A universal continuum of weight $\alphaleph$,'' \varepsilonmph{Transactions of the American Mathematical Society} \tauextbf{353} (2000), pp. 1819 - 1838. \betaibitem{Eng} R. Engelking, \varepsilonmph{General Topology}. Sigma Series in Pure Mathematics, 6 (1989), Heldermann, Berlin (revised edition). \betaibitem{IF1} I. Farah, \varepsilonmph{Analytic quotients: theory of lifting for quotients over analytic ideals on the integers,} Memoirs of the American Mathematical Society no. \tauext{702} (2000), vol. 148. \betaibitem{IF2} I. Farah, ``The fourth head of $\beta\mathrm{MA}thbb{N}$,'' in \varepsilonmph{Open Problems in Topology II} (2007), ed. Elliot Pearl, Elsevier, pp. 145-150. \betaibitem{SG2} S. Geschke, ``The shift on $\mathrm{MA}thcal P(\omega) / \mathrm{MA}thrm{fin}$,'' unpublished manuscript available online at \tauexttt{www.math.uni-hamburg.de/home/geschke/publikationen.html.en}. \betaibitem{H&S} N. Hindman and D. Strauss, \varepsilonmph{Algebra in the Stone-\v{C}ech compactification}, 2$^{nd}$ edition, De Gruyter, Berlin, 2012. \betaibitem{Hdg} W. Hodges, \varepsilonmph{Model Theory}, Encyclopedia of mathematics and its applications, no. \tauextbf{42}, Cambridge University Press, Cambridge, 1993. \betaibitem{L&M} P. Larson and P. McKenney, ``Automorphisms of $\mathrm{MA}thcal P(\lambda) / \mathrm{MA}thcal I_\kappa$,'' to appear in \varepsilonmph{Fundamenta Mathematicae}. \betaibitem{M&R} J. Meddaugh and B. E. Raines, ``Shadowing and internal chain transitivity,'' \varepsilonmph{Fundamenta Mathematicae} \tauextbf{222} (2013), pp. 279-287. \betaibitem{JvM} J. van Mill, ``An introduction to $\beta\omega$,'' in the \varepsilonmph{Handbook of Set-Theoretic Topology} (1984), eds. K. Kunen and J. E. Vaughan, North-Holland, pp. 503-560. \betaibitem{Par} I. I. 
Parovi\v{c}enko, ``A universal bicompact of weight $\alphaleph_1$,'' \varepsilonmph{Soviet Mathematics Doklady} \tauextbf{4} (1963), pp. 592-595. \betaibitem{Srk} O. M. Sharkovsky, ``On attracting and attracted sets,'' Doklady Akademii Nauk SSSR \tauextbf{160} (1065), pp. 1036 - 1038. \betaibitem{SSh} S. Shelah, \varepsilonmph{Proper Forcing}, Lecture Notes in Mathematics, vol. 940, Springer-Verlag, Berlin, 1982. \betaibitem{S&S} S. Shelah and J. Stepr\={a}ns, ``$\mathrm{MA}thrm{PFA}$ implies all automorphisms are trivial,'' \varepsilonmph{Proceedings of the American Mathematical Society} \tauextbf{104} (1988), pp. 1220 - 1225. \betaibitem{Vel} B. Velickovic, ``$\mathrm{MA}thrm{OCA}$ and automorphisms of $\mathrm{MA}thbb{P}wmf$,'' \varepsilonmph{Topology and its Applications} \tauextbf{49} (1992), pp. 1-12. \varepsilonnd{thebibliography} \varepsilonnd{document}
\begin{document} \title [On homogeneous geodesics and weakly symmetric spaces \dots] {On homogeneous geodesics and \\ weakly symmetric spaces} \author{Valeri\u{\i} Berestovski\u{\i}} \address{V.\,N. Berestovski\u{\i} \newline Sobolev Institute of Mathematics of the Siberian Branch \newline of the Russian Academy of Sciences, Novosibirsk, Acad. Koptyug ave. 4, \newline 630090, Russia} \address{Novosibirsk State University, \newline Mechanics-Mathematical department \newline 2 Pirogov str., Novosibirsk, 630090, Russia} \email{[email protected]} \author{Yuri\u{\i}~Nikonorov} \address{Yu.\,G. Nikonorov \newline Southern Mathematical Institute of the Vladikavkaz Scientific Centre \newline of the Russian Academy of Sciences, Vladikavkaz, Markus st. 22, \newline 362027, Russia} \email{[email protected]} \begin{abstract} In this paper, we establish a sufficient condition for a geodesic in a Riemannian manifold to be homogeneous, i.~e. an orbit of an $1$-parameter isometry group. As an application of this result, we provide a new proof of the fact that every weakly symmetric space is geodesic orbit manifold, i.~e. all its geodesics are homogeneous. We also study general properties of homogeneous geodesics, in particular, the structure of the closure of a given homogeneous geodesic. We present several examples where this closure is a torus of dimension $\geq 2$ which is (respectively, is not) totally geodesic in the ambient manifold. Finally, we discuss homogeneous geodesics in Lie groups supplied with left-invariant Riemannian metrics. \noindent 2010 Mathematical Subject Classification: 53C20, 53C25, 53C35. \noindent Key words and phrases: geodesic orbit Riemannian space, homogeneous Riemannian manifold, homogeneous space, quadratic mapping, totally geodesic torus, weakly symmetric space. \end{abstract} \maketitle \section{Introduction, notation and useful facts} Let $(M,g)$ be a Riemannian manifold and let $\gamma: \mathbb{R} \rightarrow M$ be a geodesic in $(M,g)$. The geodesic $\gamma$ is called {\it homogeneous} if $\gamma(\mathbb{R})$ is an orbit of an $1$-parameter subgroup of $\Isom(M,g)$, the full isometry group of $(M,g)$. A Riemannian manifold $(M,g)$ is called a manifold with homogeneous geodesics or a geodesic orbit manifold if any geodesic $\gamma$ of $M$ is homogeneous. These definitions are naturally generalized to the case when all isometries are taken from a given Lie subgroup $G\subset \Isom(M,g)$, that acts transitively on $M$. In this case we get the notions of $G$-homogeneous geodesics and $G$-homogeneous geodesic orbit spaces. This terminology was introduced in \cite{KV} by O.~Kowalski and L.~Vanhecke, who initiated a systematic study of such spaces. We refer to \cite{KV}, \cite{Arv2017}, and \cite{Nikonorov2017} for expositions on general properties of geodesic orbit Riemannian spaces and some historical survey about this topic. Let $(M=G/H, g)$ be a homogeneous Riemannian manifold, where $G$ is the identity component of $\Isom(M,g)$ and $H$ is the isotropy subgroup at a point $o\in M$. Since $H$ is compact, there is an $\Ad(H)$-invariant decomposition \begin{equation}\label{reductivedecomposition} \mathfrak{g}=\mathfrak{h}\oplus \mathfrak{m}, \end{equation} where $\mathfrak{g}={\rm Lie }(G)$ and $\mathfrak{h}={\rm Lie}(H)$. The Riemannian metric $g$ is $G$-invariant and is determined with an $\Ad(H)$-invariant inner product $g = (\cdot,\cdot)$ on the space $\mathfrak{m}$ which is identified with the tangent space $M_0$ at the initial point $o = eH$. 
By $[\cdot, \cdot]$ we denote the Lie bracket in $\mathfrak{g}$, and by $[\cdot, \cdot]_{\mathfrak{m}}$ its $\mathfrak{m}$-component according to (\ref{reductivedecomposition}). We recall (in the above terms) a well-known criterion of geodesic orbit spaces. \begin{lemma}[\cite{KV}]\label{GO-criterion} A homogeneous Riemannian manifold $(M=G/H,g)$ with the reductive decomposition {\rm (\ref{reductivedecomposition})} is a geodesic orbit space if and only if for any $X \in \mathfrak{m}$ there is $Z \in \mathfrak{h}$ such that $([X+Z,Y]_{\mathfrak{m}},X) =0$ for all $Y\in \mathfrak{m}$. \end{lemma} For a given $X\in\mathfrak{m}$, the condition $([X+Z,Y]_{\mathfrak{m}},X) =0$ for all $Y\in \mathfrak{m}$ means that the orbit of $\exp\bigl((X+Z)t\bigr) \subset G$, $t \in \mathbb{R}$, through the point $o=eH$ is a geodesic in $(M=G/H,g)$. Note also that all orbits of an $1$-parameter isometry group, generated with a Killing vector fields of constant length on a given Riemannian manifold, are geodesics, see e.~g. \cite{BerNik7}. An important class of geodesic orbit manifolds consists of weakly symmetric spaces, introduced by A.~Selberg \cite{S}. A homogeneous Riemannian manifold $(M = G/H, g)$ is a {\it weakly symmetric space} if any two points $p,q \in M$ can be interchanged with an isometry $a \in G$ (this is a definition equivalent to the original one). Note that a Riemannian manifold $(M,g)$ is a weakly symmetric space if and only if it is homogeneous and for some (hence, every) point $x \in M$ and any reductive decomposition (\ref{reductivedecomposition}) the following property holds: for every $U\in \mathfrak{m}$ there is $s \in H$ such that $\Ad(s) (U)=-U$~\cite{Zi96}. Note that every $G$-invariant Riemannian metric on $G/H$ (with the above property) makes it a weakly symmetric space. Weakly symmetric spaces $M= G/H$ have many interesting properties and are closely related with spherical spaces, commutative spaces, Gelfand pairs etc. (see papers \cite{AV, BV1996, Yak, Zi96} and book \cite{W1} by J.A.~Wolf). The classification of weakly symmetric reductive homogeneous Riemannian spaces was given by O.S.~Yakimova \cite{Yak} on the base of the paper \cite{AV} (see also \cite{W1}). Let us recall that {\it weakly symmetric Riemannian manifolds are geodesic orbit} by a result of J.~Berndt, O.~Kowalski, and L.~Vanhecke \cite{BKV}. The main motivation of this paper was to reprove this result by alternative simple methods. It should be noted that the full isometry group $\Isom (M,g)$ of a given Riemannian manifold $(M,g)$ is a Lie group and the isotropy subgroup at any point $x\in M$ is compact by the Myers--Steenrod theorem. Note also that by the Cartan theorem, a closed (abstract) subgroup of a Lie group is a Lie subgroup, hence, Lie group itself, but this is not true in general for non-closed subgroup, see details e.~g. in \cite{HilNeeb}. All manifolds in this paper are supposed to be connected. For a smooth manifold $M$ and $x\in M$, $M_x$ denotes the tangent space to $M$ at the point $x$. For a smooth manifolds mapping $\psi : M \rightarrow N$, we denote by $T\psi$ its differential. The structure of this paper is as follows. In Section \ref{mr}, we prove in Theorem \ref{isomhomgeod} that a geodesic $\gamma$ in a given smooth Riemannian manifold $(M,g)$ is homogeneous if the set (group) $G_{\gamma}$ of all isometries in $(M,g)$, preserving $\gamma$ and its orientation, acts transitively on $\gamma$. 
Recall that $\gamma$ is homogeneous if it is an orbit of a point $x\in \gamma$ under an 1-parameter Lie subgroup $\psi(t)$, $t\in \mathbb{R}$, of the full isometry Lie group $\Isom(M,g)$ of $(M,g)$. At first we give two alternative proofs of Proposition \ref{isomhomgeodn} which states that $G_{\gamma}$ is a closed (hence, Lie) subgroup of $\Isom(M,g)$. The first proof of this proposition uses some results from the theory of topological groups while the second one applies the so-called {\it development of the geodesic} $\gamma$ in $p^{-1}(\gamma)$ where $p:TM\rightarrow M$ is the canonical projection. After this it is quite easy to prove Theorem \ref{isomhomgeod} and then Theorem~\ref{cor.weaksymgo} stating that every weakly symmetric Riemannian space is geodesic orbit. At the end of this section, we discuss briefly some known results on geodesics invariant under distinguished isometry of $(M,g)$. In Section \ref{cl}, the closure of a homogeneous geodesic $\gamma$ in $(M,g)$ and the corresponding 1-parameter group $\psi$ in $\Isom(M,g)$ are investigated. It is proved in Proposition \ref{prop.closopsub} that the closure of $\psi(\mathbb{R})$ in $\Isom(M,g)$ coincides with $\psi(R)$ or is isomorphic to a compact commutative Lie group (torus $T^k$) for $k\geq 2$. By Theorem \ref{nontrisotr}, the same statement is true for~$\gamma$ (in general case, it is possible that the dimension of the closure $T^k$ of $\psi(\mathbb{R})$ in $\Isom(M,g)$ is greater than the dimension of the closure $T^l$ of $\gamma(\mathbb{R})$ in $(M,g)$); if $l\geq 2$ then sectional curvatures of all 2-planes tangent to $T^l$, calculated in $(M,g)$, are nonnegative. Then we present some examples of $T^{l}$, $l\geq 2$, that are (respectively, are not) totally geodesic in $(M,g)$. At the end of this section are given some references to papers containing interesting results on geodesics. In Section \ref{ehg}, we present some examples of homogeneous geodesics $\gamma$ on Lie groups $G$ with left invariant Riemannian metric $g$, among them such that the corresponding torus $T^l$ for non-closed subset $\gamma(\mathbb{R})\subset (G,g)$ is (respectively, is not) totally geodesic in $(G,g)$. In Section \ref{sqm}, we study properties of a special quadratic mapping closely related to homogeneous geodesics in $(G,g)$. \section{Main results} \label{mr} \begin{theorem}\label{isomhomgeod} Let $(M,g)$ be a Riemannian manifold, and let $\gamma: \mathbb{R} \rightarrow M$ be a geodesic parameterized with arc length. Suppose that for any $s\in \mathbb{R}$ there is an isometry $\eta(s)\in \Isom(M,g)$, such that $\eta(s)(\gamma(t)) =\gamma(t+s)$ for all $t \in \mathbb{R}$. Then the geodesic $\gamma$ is homogeneous, i.~e. an orbit of an 1-parameter isometry group. \end{theorem} \begin{proof} Let us consider \begin{eqnarray}\label{eq.onepariso1} G_{\gamma}(s)&:=&\{a\in \Isom(M,g)\, |\, a(\gamma(t)) =\gamma(t+s)\, \forall t \in \mathbb{R}\}, \quad s\in \mathbb{R};\\ \label{eq.onepariso2} G_{\gamma}&:=&\cup_{s\in \mathbb{R}}\, G_{\gamma}(s). \end{eqnarray} We know that $G_{\gamma}(s) \neq \emptyset$ for all $s\in \mathbb{R}$ and $G_{\gamma}=\cup_{s\in \mathbb{R}}\, G_{\gamma}(s)$. Clear that $G_{\gamma}(0)$ is compact, since it is the intersection of the isotropy subgroups at all points of $\gamma(\mathbb{R})$ with respect to $\Isom(M,g)$. It is obvious that $G_{\gamma}$ is a subgroup in $\Isom(M,g)$. The most crucial step in this proof is to prove that $G_{\gamma}$ is a Lie group. 
We prove this fact separately in Proposition \ref{isomhomgeodn} below. Let $G_{\gamma}^0$ be the identity component of $G_{\gamma}$. Since $G_{\gamma}$ is a Lie group, $G_{\gamma}^0$ is also a Lie group, hence, a Lie (possibly, virtual) subgroup in $\Isom(M,g)$. Therefore, for any $U\in \mathfrak{g}$, where $\mathfrak{g}$ is the Lie algebra of the group $G_{\gamma}^0$, we get that $\exp(tU) \subset G_{\gamma}^0$. Clearly, there are $U \in \mathfrak{g}$ and $t\in \mathbb{R}$ such that $\exp(tU) \not\subset G_{\gamma}(0)$. Hence, $\gamma(\mathbb{R})=\exp(tU)(\gamma(0))$, $t\in \mathbb{R}$, as required. \end{proof} \begin{proposition}\label{isomhomgeodn} The group $G_{\gamma}$ defined with (\ref{eq.onepariso2}) is a Lie group. \end{proposition} \begin{proof} If $\gamma$ is non-injective then the properties of $G_{\gamma}$ (see (\ref{eq.onepariso1}) and (\ref{eq.onepariso2})) imply that $\gamma$ is periodic i.~e. there is the smallest $s>0$ such that $\gamma(t+s)=\gamma (t)$ for all $t \in \mathbb{R}$ (recall that the motion of a point $\gamma(t)$ along $\gamma$ under the action of $G_{\gamma}$ depends essentially only on its position in $M$, but not on the value of $t$). Then $G_{\gamma}$ is compact, hence, a Lie subgroup in $\Isom(M,g)$. From this point we suppose that $\gamma$ is injective. In this case we give two proofs of the fact that $G_{\gamma}$ is a Lie group. {\it The first proof.} Let us supply the group $G_{\gamma}$ with {\it the natural topology}. Define the natural projection $$ \eta: G_{\gamma} \mapsto \mathbb{R}, $$ as follows: $\eta(a)=s$ if and only if $a(\gamma(t))=\gamma (t+s)$ for all $t \in \mathbb{R}$. Clearly, such $s$ is unique. Let $\Pi_s$ the parallel transport of $M_{\gamma(0)}$ to $M_{\gamma(s)}$ along $\gamma$ with respect to the Levi-Civita connection of the Riemannian manifold $(M,g)$. Consider any $a\in G_{\gamma}$ and put $s=\eta(a)$. Let us define the map $a\in G_{\gamma} \mapsto \varphi(a)\in O(M_{\gamma(0)})$ as follows: $$ \varphi(a)= \Pi_s^{-1} \circ (T\, a)_{\gamma(0)}. $$ Let us supply $G_{\gamma}$ with the topology induced with the product topology on the space $\mathbb{R} \times O(M_{\gamma(0)})$ under the mapping $a\in G_{\gamma} \mapsto (\eta(a),\varphi(a))\in \mathbb{R} \times O(M_{\gamma(0)})$, that is obviously injective. This topology makes $G_{\gamma}$ a locally compact topological group. It is clear that $\eta(a_1\cdot a_2)=\eta(a_1)+\eta(a_2)$ for $a_1, a_2 \in G_{\gamma}$. Since $\eta$ is an open surjective homomorphism of topological groups, then $\operatorname{Ker} (\eta)$ is a normal subgroup in $G_{\gamma}$. If fact, $\operatorname{Ker}(\eta)= G_{\gamma}(0)$ is compact. The following important result is known in the theory of topological groups: If $G$ is a topological group and $H$ is a closed invariant subgroup of $G$ such that $H$ and $G/H$ are Lie groups, then $G$ is a Lie group (see Theorem 1 in \cite{Gleason1949}, Theorem 7 in \cite{Iwasawa1949}, or pp.~153--154 in \cite{MontZip1955}). Since $\mathbb{R}=G_{\gamma}/G_{\gamma}(0)$, where $\mathbb{R}$ and $G_{\gamma}(0)$ are Lie groups, then $G_{\gamma}$ is a Lie group by the above result. {\it The second proof.} We will use the development of the geodesic $\gamma$ in $p^{-1}(\gamma)$ where $p:TM\rightarrow M$ is the canonical projection. Let $\Pi_{t',\,t}$ be the parallel transport of $M_{\gamma(t)}$ to $M_{\gamma(t')}$ along $\gamma$ with respect to the Levi-Civita connection of the Riemannian manifold $(M,g)$. Obviously, $\Pi_{t',\, t}\gamma\,'(t)=\gamma\,'(t')$. 
Similar statement is valid for all $T\psi$, $\psi\in G_{\gamma}$. Let $W=\cup_{t\in \mathbb{R}}W_{\gamma(t)}$, $W_{\gamma(t)}=\{w\in M_{\gamma(t)}: w\perp \gamma\,'(t)\}$. Choose any orthonormal basis $e_2,\dots, e_n$ in $W_{\gamma(0)}$ and define a basis $e_2(t),\dots, e_n(t)$ of $W_{\gamma(t)}$ by equalities $e_j(t)=\Pi_{t,0}(e_j)$, $j=2,\dots, n$. Then $W$ is a smooth vector bundle over $\gamma$ (the topology on $\gamma$ is defined with the parameter $t$) with the restriction of $p$ to $W$, and $\Pi_{t',t}$, $T\psi$, where $\psi\in G_{\gamma}$, are smooth linear isomorphisms on $W$. In addition, for any $t,t',s\in \mathbb{R}$ and $\psi\in G_{\gamma}(s)$, $T\psi_{\gamma(t')}\circ\Pi_{t',\,t}=\Pi_{t'+s,\,t+s}\circ T\psi_{\gamma(t)}$. Since the matrix of any parallel transport $\Pi_{r',\,r}$ with respect to bases $e_{2}(r),\dots,e_n(r)$ and $e_{2}(r'),\dots,e_n(r')$ is always the unit matrix, this implies that the matrix of $T\psi_{\gamma(t)}$ with respect to bases $e_{2}(t),\dots,e_n(t)$ and $e_{2}(t+s),\dots,e_n(t+s)$ coincides with the matrix of $T\psi_{\gamma(t')}$ with respect to bases $e_{2}(t'),\dots,e_n(t')$ and $e_{2}(t'+s),\dots,e_n(t'+s)$. Denote this matrix, which is independent on $t$, by $(\psi)$. The above argument shows that any element $\psi\in G_{\gamma}(s)$ and the action of $T\psi$ on $W$ are uniquely defined with the pair $(s,(\psi))$. In this notation, the product in the group $T\psi$, $\psi\in G_{\gamma}$, is written as \begin{equation} \label{actmatr} (s,(\psi))(s\,',(\psi'))=(s+s\,',(\psi)(\psi')). \end{equation} Thus, if $A(s)=\{(\psi): \psi\in G_{\gamma}(s)\}$ then \begin{equation} \label{as} A(s+s')=A(s)A(s')=A(s')A(s), \quad A(-s)=A(s)^{-1}, \quad A(s)=A_sA(0) \end{equation} for $s, s'\in \mathbb{R}$, $A_s\in A(s)$. Then $A(s)$ is compact for any $s\in \mathbb{R}$, since $A(0)$ is compact, and $A(-s)=A(0)^{-1}A^{-1}=A(0)A^{-1}$. The last equalities implies that $A(s)=AA(0)=A(0)A$ for any $A\in A(s)$ and $s\in \mathbb{R}$ and $A(s+t)=A_sA(t)$ for any $A_s\in A(s)$ and $s,t\in \mathbb{R}$. In particular, $A(0)$ is a compact normal subgroup of the group $\mathcal{A}=\{A(s), s\in \mathbb{R}\}\subset O(n-1)$. Let $d$ be the intrinsic metric on $(M,g)$, $\delta$ any bi-invariant metric on the group $O(n-1)$, whose restriction to $SO(n-1)$ coincides with the intrinsic metric defined with a bi-invariant Riemannian metric on $SO(n-1)$ and $\delta_H$ the corresponding Hausdorff metric on the family of compact subsets in $O(n-1)$. Note that $\delta_H(A(s),A(0))=\delta(A_s,A(0))=\delta(A_0,A(s))$ for any $A_s\in A(s)$ and $A_0\in A(0)$ because of the last equality in (\ref{as}) and the right invariance of the metric $\delta$. We state that $\delta_H(A(s),A(0))\rightarrow 0$ when $s\neq 0$ and $s\rightarrow 0$. Otherwise, there is a sequence $\psi_{s_n}\in G_{\gamma}(s_n)$ such that $s_n\neq 0$, $s_n\rightarrow 0$, $\delta((\psi_{s_n}),A(0))>\varepsilon$ for some $\varepsilon >0$ and all $n\in \mathbb{N}$, and $d(\psi_{s_n}(x),\psi(x))\rightarrow 0$ for some $\psi\in \Isom(M,g)$ uniformly for $x\in B(\gamma(0),r)$, where $0< r < \infty$. Then $\psi \in G_{\gamma}(0)$ but $(\psi)\notin A(0)$, a contradiction. Therefore, for any $s_0\in \mathbb{R}$ and $A_{s_0}\in A(s_0)$, \begin{equation} \label{hausd} \delta_H\bigl(A(s_0+s),A(s_0)\bigr)=\delta_H\bigl(A_{s_0}A(s),A_{s_0}A(0)\bigr)=\delta_H\bigl(A(s),A(0)\bigr)\rightarrow 0 \end{equation} if $s\neq 0$ and $s\rightarrow 0$. 
The orthonormal bases $e_2(t),\dots, e_n(t)$ in $W_{\gamma(t)}$ permit to consider $W=\cup_{s}W_{\gamma(s)}$ as a direct product $\mathbb{R}\times W_{\gamma(0)}$ and supply the last manifold by the direct product of the standard Riemannian metrics on its factors. Then $W$ is isometric to $n$-dimensional Euclidean space. If $\psi\in G_{\gamma}(s)$ we define the Euclidean motion in $W$ by the formula \begin{equation} \label{actionG} \psi(t,w)=\bigl(s+t,(\psi)w\bigr), \end{equation} where the vector $w\in W_{\gamma(0)}$ is considered as a vector-column with components in the base $e_2(0),\dots, e_n(0)$. In consequence of (\ref{hausd}) and compactness of sets $A(s)\subset O(n-1)$, the correspondence $\psi\in G_{\gamma}(s)\rightarrow (s,(\psi))$ and formulae (\ref{actmatr}), (\ref{actionG}) give the exact representation of $G_{\gamma}$ as a closed, hence a Lie, subgroup of $\Isom(W)$ for $n$-dimensional Euclidean space $W$. \end{proof} Theorem \ref{isomhomgeod} implies a new proof of the following important result, that was obtained in~\cite{BKV} using other methods. \begin{theorem}[J.~Berndt--O.~Kowalski--L.~Vanhecke, \cite{BKV}]\label{cor.weaksymgo} Every weakly symmetric Riemannian space $(M,g)$ is geodesic orbit. \end{theorem} \begin{proof} Let us fix a geodesic $\gamma: \mathbb{R} \rightarrow M$ in a weakly symmetric Riemannian manifold $(M,g)$. For any $p \in \gamma(\mathbb{R})$, there is a non-trivial isometry $\eta(p)\in \Isom( M,g)$ that is a nontrivial involution on $\gamma(\mathbb{R})$ fixing the point $p \in \gamma(\mathbb{R})$ (see e.~g.~\cite{Zi96}). For a given $s\in \mathbb{R}$, the isometry $\psi(s):=\eta(\gamma(s/2))\circ \eta(\gamma(0))$ preserves $\gamma(\mathbb{R})$ and its orientation, and moves the point $\gamma(0)$ to $\gamma(s)$. Therefore, the geodesic $\gamma$ is homogeneous by Theorem \ref{isomhomgeod}. \end{proof} Let $(M,g)$ be a Riemannian manifold, and let $\gamma: \mathbb{R} \rightarrow M$ be a geodesic parameterized with arc length. The geodesic $\gamma$ is called {\it invariant under the isometry} $a\in \Isom(M,g)$, if there is $\tau \in \mathbb{R}$ such that $a(\gamma(t)) =\gamma(t+\tau)$ for all $t \in \mathbb{R}$. If $\gamma$ is a homogeneous geodesic, then, according to Proposition \ref{isomhomgeodn}, the isometry group $a\in \Isom(M,g)$ such that $\gamma$ is invariant under $a$, is a Lie group $G_{\gamma}$ defined with (\ref{eq.onepariso2}). Moreover, by the proof of Proposition \ref{isomhomgeodn}, $G_{\gamma}= G_{\gamma}(0) \times \mathbb{R}$, where $$G_{\gamma}(0)=\{a\in \Isom(M,g)\,|\, a(\gamma(t)) =\gamma(t)\, \forall t \in \mathbb{R}\}.$$ Geodesics invariant under a distinguished isometry are studied in various papers, see e.~g. \cite{Grove1974, Bangert2016} and references therein. In particular, K.~Grove proved the following result. \begin{theorem}[\cite{Grove1974}] If $M$ is compact and the isometry $A\in \Isom(M,g)$ has a non-closed invariant geodesic then there are uncountably many $A$-invariant geodesics on $M$. \end{theorem} It should be noted also the following recent result by V.~Bangert. \begin{theorem}[\cite{Bangert2016}] Let $\gamma$ be a non-closed and bounded geodesic in a complete Riemannian manifold $(M,g)$ and assume that $\gamma$ is invariant under an isometry $A$ of $(M,g)$, but is not contained in the set of fixed points of $A$. 
Then for some $k\geq 2$, the geodesic line flow $\gamma'$ corresponding to $\gamma$ is dense in a $k$-dimensional torus $T^k$ embedded in $TM$ and, in particular, every geodesic with initial vector in $T^k$ is $A$-invariant. \end{theorem} \section{On the closure of a homogeneous geodesic} \label{cl} Now, we are going to discuss important properties of an arbitrary homogeneous geodesic $\gamma$ on a given Riemannian manifold $(M,g)$. The main object of our interest is the closure of a given homogeneous geodesic. The following result is well known (see e.~g. \cite{Goto1971}), but we give a short proof for the reader's convenience. \begin{proposition}\label{prop.closopsub} Let $\psi(s)=\exp(sU)$, $s\in \mathbb{R}$, be an 1-parameter group in a given Lie group~$G$, where $U$ is from $\mathfrak{g}$, the Lie algebra of $G$. Then $\psi(\mathbb{R})$ is a connected abelian subgroup of $G$ and there are three possibilities: 1) $\psi(\mathbb{R})$ is a closed subgroup of $G$, diffeomorphic to $\mathbb{R}$ ($\psi$ is a diffeomorphism); 2) $\psi(\mathbb{R})$ is a closed subgroup of $G$, diffeomorphic to the circle $S^1$ ($\psi$ is a covering map); 3) $\psi(\mathbb{R})$ is not closed subgroup of $G$, and its closure is a torus $T^k$ of dimension $k\geq 2$. \end{proposition} \begin{proof} Clear that $K$, the closure of $\psi(\mathbb{R})$ in $G$, is a connected abelian Lie group. If $K=\psi(\mathbb{R})$, then we get either the first or the second possibility. Suppose that $K\neq \psi(\mathbb{R})$. If $K$ is not a torus, then $K= \mathbb{R}\times K_1$ for some abelian connected group $K_1$. If $\pi:\mathbb{R}\times K_1 \rightarrow \mathbb{R}$ is the projection to the first factor, then $\pi \circ \psi: \mathbb{R} \rightarrow \mathbb{R}$ is a Lie group isomorphism. Now, consider any point $(a,b)\in \mathbb{R}\times K_1$. There is a sequence $(a_n,b_n)\in \psi(\mathbb{R})\subset \mathbb{R}\times K_1$ such that $\lim\limits_{n\to\infty}(a_n,b_n)=(a,b)$. It is clear that $(a_n,b_n)\in \psi([-M,M])$ for some positive $M\in \mathbb{R}$. Indeed, $a_n \to a$ as $n \to \infty$ and $\pi \circ \psi$ is a Lie group automorphism of $\mathbb{R}$, hence, the set $(\pi \circ \psi)^{-1}(a_n)=\psi^{-1}\bigl((a_n,b_n)\bigr)$, $n\in \mathbb{N}$, is bounded. Since $\psi([-M,M])$ is compact, then $(a,b)\in \psi(\mathbb{R})$ and $K=\psi(\mathbb{R})$ that impossible. Hence, $K$ is a torus $T^k$ of dimension $k\geq 1$. Obviously, $K\neq \psi(\mathbb{R})$ implies $k\geq 2$. \end{proof} Now we consider the structure of the closure of homogeneous geodesics in Riemannian manifolds. Let $(M,g)$ be a Riemannian manifold, and let $\gamma: \mathbb{R} \rightarrow M$ be a geodesic parameterized with arc length, $\gamma(0)=x \in M$. Suppose that $\gamma$ is homogeneous, i.~e. there is $U$ in the Lie algebra $\mathfrak{g}$ of the Lie group $G=\Isom(M,g)$, such that $\gamma(t)=\psi(t)(x)$, where $\psi(t)=\exp(Ut)$, $t\in \mathbb{R}$. It is known that all orbits of any closed subgroup of $\Isom(M,g)$ on $M$ are closed (see e.~g. Proposition 1 in \cite{Yau1977}). Therefore, for $\psi(\mathbb{R})$ closed in $G$, the geodesic $\gamma(\mathbb{R})$ is closed as the set in $M$. In case 2) of Proposition \ref{prop.closopsub}, the geodesic $\gamma=\gamma(t)$ is periodic, but it is possible also in case 3). For instance, one can find in \cite{BerNik6, BerNik8} several examples of Killing vector fields of constant length that have both compact and non-compact integral curves (such curves are homogeneous geodesics). 
Now, we assume that $\psi(\mathbb{R})$ is not closed in $G$. The following result had been proved in \cite[Theorem 3.2]{Grove1974} for any complete Riemannian manifold $(M,g)$ with compact isometry group $\Isom(M,g)$. We prove a more general version using a similar approach. \begin{theorem}\label{the.closhomgeod} Let $(M,g)$ be a complete Riemannian manifold and let $\gamma: \mathbb{R} \rightarrow M$ be a homogeneous geodesic, i.~e. it is an orbit of some 1-parameter isometry group $\psi(t)=\exp(Ut)$, $t\in \mathbb{R}$, for some $U \in \mathfrak{g}$, the Lie algebra of the Lie group $G=\Isom(M,g)$. Assume that $\gamma(\mathbb{R})$ is non-closed subset in $M$. Then $\gamma(\mathbb{R})$ lies in a submanifold of $(M,g)$ diffeomorphic to a $l$-dimensional torus $T^l$ with $l\geq 2$ and any orbit of the group $\psi(t)$, $t\in \mathbb{R}$, through a point of $T^l$ is a geodesic lying dense in $T^l$. Furthermore, the sectional curvature of any $2$-plane, tangent to $T^l$ at a point $x\in T^l$ and containing $\gamma'(0)$, is nonnegative. \end{theorem} \begin{proof} Put $x=\gamma(0) \in M$. Consider the closure $K$ of $\psi(\mathbb{R})$ in~$G$ and the closure $N$ of $\gamma(\mathbb{R})$ in~$M$. By Proposition \ref{prop.closopsub}, $K$ is a torus $T^k$ of dimension $k\geq 2$. Note that $N$ is invariant under the action of every $\psi(t)$, $t\in \mathbb{R}$. Indeed, if $x_0\in N$, then there are $t_n\in \mathbb{R}$ such that $\gamma(t_n)=\psi(t_n)(x)\to x_0$ as $n \to \infty$. Hence, $\psi(t)(\gamma(t_n))=\psi(t+t_n)(x)\to \psi(t)(x_0)$ as $n \to \infty$, i.~e. $\psi(t)(x_0) \in N$. Now, it is easy to see that $N$ is invariant even under the action of $K$. Moreover, this action is transitive. Indeed, for every two points $a,b\in N$, there are a sequence $t_n$ such that $a_n:=\psi(t_n)(x)\to a$ as $n\to \infty$ and a sequence $s_n$ such that $b_n:=\psi(s_n)(x)\to b$ as $n\to \infty$. It is clear that a sequence $\varphi_n = \psi(s_n-t_n)\in \psi(\mathbb{R})\subset K$ is such that $\varphi_n(a_n)=b_n$ for all $n$. Passing, if necessary, to a subsequence, we can assume that $\varphi_n \to \varphi$ as $n\to \infty$ for some $\varphi \in K \subset \Isom(M,g)$. Hence, $b=\lim\limits_{n\to \infty} b_n =\lim\limits_{n\to \infty}\varphi_n(a_n) = \varphi (a)$, $K$ acts transitively on $N$, and $N$ is the orbit of $K$ through the point $x$. Hence, $N$ is a homogeneous space of a torus $K=T^k$, therefore, $N$ is a torus itself, i.~e. $N=T^l$ with $l\geq 2$ (since $\gamma$ is not closed). Note that $x$ is a critical point for the function $y\in M \mapsto g_y(\widetilde{U},\widetilde{U})$, where $\widetilde{U}$ is a Killing field, corresponding to $U\in \mathfrak{g}$. It is easy to see that the value of this function is constant on $N$. Hence, any point $y$ of $N$ is a critical point for the same function and integral curve of $\widetilde{U}$ is a homogeneous geodesic. Since the distance between the points $\psi(t_1)(x)$ and $\psi(t_2)(y)$ is equal to the distance between the points $\psi(t_1+t)(x)$ and $\psi(t_2+t)(y)$ for every $t, t_1, t_2 \in \mathbb{R}$, then the geodesic $\psi(t)(y)$, $t\in \mathbb{R}$, is dense in $N$ for all $y \in N$. Finally, let us prove the assertion on the sectional curvature. Due to the previous assertion, we may (without loss of generality) consider only points of the geodesic $\gamma$. Suppose that $p=\gamma(s)$ for some $s\in \mathbb{R}$. Let $Y_p$ be a unit tangent vector to $N=T^l$ at $p$ orthogonal to $\widetilde{U}_p$. 
We define the vector field $Y$ along $\gamma$ by setting $Y(t) = d(\psi(t))_p (Y_p)$. By the construction of $Y$ and the previous discussion, $Y$ is obtained with an 1-parameter geodesic variation of $\gamma$, i.~e. $Y$ is a Jacobi field, therefore, \begin{equation}\label{jac.equ} \nabla^2 Y + R( Y, \gamma ')\gamma ' = 0. \end{equation} Taking inner product on both sides of (\ref{jac.equ}) with $Y$ we obtain that the sectional curvature of the $2$-plane spanned on $Y$ and $\gamma'$ (the latter is parallel to $\widetilde{U}$ on $\gamma$) satisfies the equality $K_{sec} = - g(\nabla^2 Y,Y)/g(\gamma', \gamma')$. Since $g(Y, Y)$ is constant along $\gamma$, we have \begin{eqnarray*} 2g(\nabla Y, Y)= \frac{d}{dt}\, g(Y, Y)=0, \\ 0=\frac{d}{dt}\, g(\nabla Y, Y)= g(\nabla^2 Y,Y) + g(\nabla Y,\nabla Y), \end{eqnarray*} hence, $K_{sec} = g(\nabla Y,\nabla Y)/g(\gamma', \gamma') \geq 0$. \end{proof} \begin{remark} \label{nontrisotr} Let $K_x$ be the isotropy subgroup of $K$ at the point $x\in N\subset M$. Then $N=K/K_x$. It is interesting to study explicit examples with non-discrete $K_x$. Note that, for any $a\in K_x$ (and, moreover, for any $a$ from the isotropy subgroup of $\Isom(M,g)$ at the point $x$), the orbit of the group $\xi(t)=\exp(t(\Ad(a)(U)))$, $t\in \mathbb{R}$, through $x$ is a geodesic. \end{remark} We have the following obvious corollary. \begin{corollary} \label{nonboundeded} If a homogeneous geodesic $\gamma$ is not bounded in $(M,g)$ then $\gamma(R)$ is a closed subset in $M$. \end{corollary} Theorem \ref{the.closhomgeod} leads to the following natural questions (we use the above notation). \begin{question}\label{ques1} Is it true that $N=T^l$ is totally geodesic in $(M,g)$? \end{question} \begin{question}\label{ques2} Is it true that for any $W\in \operatorname{Lie}(K=T^k)$, the orbit of the group $\exp(tW)$, $t\in \mathbb{R}$, trough every point of $N=T^l$ is a geodesic? \end{question} The above two questions are especially interesting for the case of homogeneous Riemannian manifolds. It is well known that these questions have positive answer for symmetric spaces $M$. In this case, the closure of a given geodesic is either the geodesic itself or a totally geodesic torus $N=T^l$, where $1 \leq l\leq \rk M$ and $\rk M$ is the rank of the symmetric space $M$. In the next section we will show that generally both these questions have negative answers even for the case of Lie groups with left-invariant Riemannian metrics. There is another proof of the last assertion in Theorem \ref{the.closhomgeod}, which gives an additional information connected with Questions~\ref{ques1} and \ref{ques2}. It is based on the famous Gauss formula and equation. We shall use corresponding results from \cite{KobNom1969}, with a little different notation. Let $N$ be any smooth submanifold of a smooth Riemannian manifold $(M,g)$ with the induced metric tensor $g'$ and Levi-Civita connection $\nabla'$. If $X,Y$ are vector fields on $N$ then \textit{the Gauss formula} is $$\nabla_XY=\nabla'_XY + \alpha(X,Y),$$ where $\alpha$ is \textit{the second fundamental form} of $N$ (in $M$) and $\alpha(X,Y)$ is orthogonal to $N$. By Proposition 4.5 in \cite{KobNom1969} we have \begin{proposition} \label{sect} Let $X$ and $Y$ be a pair of orthonormal vectors in $N_x$, where $x\in N$. For 2-plane $X\wedge Y$, spanned on $X$ and $Y$, we have $$K_M(X\wedge Y)=K_N(X\wedge Y)+ g(\alpha(X,Y),\alpha(X,Y))-g(\alpha(X,X),\alpha(Y,Y)),$$ where $K_M$ (respectively, $K_N$) denotes the sectional curvature in $M$ (respectively, $N$). 
\end{proposition} Under conditions of Theorem \ref{the.closhomgeod}, we can take $X=\gamma'(0)$ and any unit vector $Y$ at $x$, tangent to $N=T^l$ and orthogonal to $X$ (or even corresponding parallel vector fields $X$, $Y$ on $N=T^l$). Then $\alpha(X,X)=0$ and we immediately get the last statement of Theorem~\ref{the.closhomgeod} and the following corollary. \begin{corollary} \label{nonclosed} If $(M,g)$ has negative sectional curvature then $\gamma(R)$ is a closed subset in~$M$. \end{corollary} In general case, $N$ is a totally geodesic submanifold in $(M,g)$ if and only if $\alpha \equiv 0$ on~$N$. Therefore, using in our case the above parallel vector fields $X$, $Y$ on $N=T^l$ for $l\geq 2$, we get the following corollary. \begin{corollary} \label{possect} If $(M,g)$ has positive sectional curvature and $l\geq 2$ then $N=T^l$ is not totally geodesic submanifold in $(M,g)$. \end{corollary} There are Riemannian manifolds of positive sectional curvature that have homogeneous geodesics with the closure $N=T^l$ for $l\geq 2$. For instance, we can consider the sphere $S^3=U(2)/U(1)$ supplied with $U(2)$-invariant Riemannian metrics (the Berger sphere), that are sufficiently close to the metric of constant curvature in order to have positive sectional curvature, see \cite[pp.~587--589]{Z2}. Such metrics are naturally reductive, hence geodesic orbit i.~e. all their geodesics are homogeneous. It is easy to choose a geodesic that has the closure of dimension $2$. Indeed, there are only countable set of periodic geodesics through a given point. It is known that any self-intersecting geodesic in a homogeneous Riemannian manifold is periodic. Then for all other geodesics the closure $N=T^l$ is such that $l=2$ (since the torus $T^3$ could not act on $S^3$ effectively), see details in \cite[Example 1]{Zi76}. Similar examples could be constructed for the sphere $S^{2n-1}=U(n)/U(n-1)$ with any $n \geq 2$. By Corollary \ref{possect}, they provide counterexamples to Question~\ref{ques1}, hence, Question~\ref{ques2}. Note that the Berger spheres are weakly symmetric \cite{Zi96, Yak}. This shows that the behaviour of geodesics in weakly symmetric spaces are more complicated than in symmetric spaces. Interesting results on homogeneous geodesics in Riemannian manifolds of negative sectional curvature were obtained in \cite[pp.~19--22]{BiONei69}, where such geodesics were called Killing geodesics. Interesting results on the behaviour of geodesics in homogeneous Riemannian spaces are obtained also in \cite{Rodionov1981, Rodionov1984, Rodionov1990}. \section{Examples of homogeneous geodesics} \label{ehg} Here we consider some examples of homogeneous geodesics on Riemannian manifolds. We restrict ourself to Lie groups with left-invariant Riemannian metrics. It is known that any homogeneous Riemannian space $(G/H, g)$ admits at least one homogeneous geodesic~\cite{Ko-Szen}. In the partial case of Lie groups with left-invariant Riemannian metrics, this result was obtained earlier in~\cite{Kaj}. Special examples of $3$-dimensional non-unimodular Lie group admit exactly one homogeneous geodesic \cite{Marin2002}. On the other hand, in a three-dimensional unimodular Lie group $G$ endowed with a left-invariant metric $g$, there always exist three mutually orthogonal homogeneous geodesics through each point. Moreover, for generic metrics, there are no other homogeneous geodesics \cite{Marin2002}. 
Let $G$ be a connected Lie group supplied with a left-invariant Riemannian metric $g$ generated by some inner product $(\cdot,\cdot)$ on $\mathfrak{g}=\operatorname{Lie}(G)$. We call $X \in \mathfrak{g}$ a {\it geodesic vector} if $\exp(sX)$, $s\in \mathbb{R}$, is a geodesic in $(G,g)$ (i.~e. it is a homogeneous geodesic). By Lemma \ref{GO-criterion}, we see that $X \in \mathfrak{g}$ is a geodesic vector if and only if $([X,Y],X) =0$ for all $Y\in \mathfrak{g}$. If $G$ is compact and semisimple, then the minus Killing form $\langle \cdot,\cdot \rangle$ of $\mathfrak{g}$ is positive definite and $\langle[X,Y],Z \rangle+\langle Y,[X,Z] \rangle=0$ for all $X,Y,Z \in \mathfrak{g}$. Hence, all $X \in \mathfrak{g}$ are geodesic vectors for the inner product $\langle \cdot,\cdot \rangle$. For an arbitrary inner product $(\cdot,\cdot)$ on $\mathfrak{g}$, there is a basis $E_1, E_2, \dots, E_n$, $n=\dim(G)$, in $\mathfrak{g}$, simultaneously orthonormal for $\langle \cdot,\cdot \rangle$ and orthogonal for $(\cdot,\cdot)$. Consider numbers $\mu_i \in \mathbb{R}$ such that $(E_i,E_i)=\mu_i \langle E_i,E_i \rangle$. Then for any $i$ and any $Y\in \mathfrak{g}$ we get $$ ([E_i,Y],E_i)=\mu_i \langle [E_i,Y],E_i \rangle=-\mu_i \langle Y,[E_i,E_i] \rangle=0, $$ i.~e. $E_i$ is a geodesic vector (see more general results for homogeneous spaces in~\cite{Ko-Szen}). Let us show that a small deformation of a given inner product can significantly change the set of geodesic vectors. \begin{example}\label{exam1} Suppose that a compact semisimple Lie algebra $\mathfrak{g}$ is supplied with the minus Killing form $\langle \cdot,\cdot \rangle$. For any nontrivial $U\in \mathfrak{g}$, we have a $\langle \cdot,\cdot \rangle$-orthogonal decomposition $\mathfrak{g}=C_{\mathfrak{g}}(U) \oplus [U,\mathfrak{g}]$, where $C_{\mathfrak{g}}(U)$ is the centralizer of $U$ in $\mathfrak{g}$. Indeed, the operator $\ad(U)$ is skew symmetric with respect to $\langle \cdot,\cdot \rangle$, hence $X\in C_{\mathfrak{g}}(U)$ implies $\langle X,[U,\mathfrak{g}] \rangle=0$, and the converse is also true. Let us take any vector $V\in \mathfrak{g}$ that is orthogonal neither to $U$ nor to $[U,\mathfrak{g}]$ with respect to $\langle \cdot,\cdot \rangle$. Note that a generic vector in $\mathfrak{g}$ has this property. Let us define the following inner product: $(X,Y)=\langle X,Y \rangle+\alpha \cdot \langle X,V \rangle \cdot\langle Y,V \rangle$, where $\alpha>0$. If $\langle X,V \rangle=0$, then $(X,Y)=\langle X,Y \rangle$ for every $Y\in\mathfrak{g}$, hence such $X$ is a geodesic vector for the inner product $(\cdot,\cdot)$. On the other hand, there is $W\in \mathfrak{g}$ such that $\langle[U,W], V\rangle \neq 0$, hence, $$ ([W,U],U)= \langle [W,U],U \rangle+\alpha \cdot\langle [W,U],V \rangle \cdot\langle U,V \rangle=\alpha \cdot\langle [W,U],V \rangle \cdot\langle U,V \rangle \neq 0, $$ so $U$ is not a geodesic vector for $(\cdot,\cdot)$. If $\rk\,\mathfrak{g} \geq 2 $, then we can choose the vector $X \in C_{\mathfrak{g}}(U)$ with the property $\langle X,V \rangle=0$. Moreover, let us fix a Cartan (i.~e. maximal abelian) subalgebra $\mathfrak{t} \subset \mathfrak{g}$. We may consider a vector $X\in \mathfrak{t}$ that is a regular vector in $\mathfrak{g}$ ($C_{\mathfrak{g}}(X)=\mathfrak{t}$) and such that the closure of the 1-parameter group $\exp(sX)$, $s\in \mathbb{R}$, coincides with a maximal torus $T:=\exp(\mathfrak{t})$ in $G$.
If $U \in \mathfrak{t}$ with $\langle U,X \rangle=0$ and $V\in\mathfrak{g}$ such that $\langle V,X \rangle=0$, $\langle V,U \rangle\neq 0$ and $\langle V,[U,\mathfrak{g}] \rangle\neq 0$, then $X$ is ($U$ is not) a geodesic vector for $(\cdot,\cdot)$ as above. Hence, the closure of a homogeneous geodesic $\exp(sX)$, $s\in \mathbb{R}$, is the torus $T$, but the orbit $\exp(sU)$, $s\in \mathbb{R}$, is not geodesic, although $U\in\mathfrak{t} =\operatorname{Lie}(T)$. This gives the negative answer to Question \ref{ques2}. For Levi-Civita connection (elements of $\mathfrak{g}$ are considered as left-invariant vector fields on $G$, see e.~g. \cite{Bes}) we have \begin{eqnarray*} 2(\nabla _XU,Y)=2(\nabla _UX,Y)=([Y,U],X)+(U,[Y,X])\\ =\langle[Y,U],X\rangle +\langle U,[Y,X]\rangle+\alpha \cdot\langle [Y,U],V \rangle \cdot\langle X,V \rangle +\alpha \cdot\langle U,V \rangle \cdot\langle [Y,X],V \rangle\\ =\alpha \cdot\langle U,V \rangle \cdot\langle [Y,X],V \rangle \neq 0 \end{eqnarray*} for some $Y\in [X,\mathfrak{g}]=[\mathfrak{t},\mathfrak{g}]$. Indeed, $[U,\mathfrak{g}]\subset [X,\mathfrak{g}]$ (due to $C_{\mathfrak{g}}(X)=\mathfrak{t} \subset C_{\mathfrak{g}}(U)$), hence there is $Y\in \mathfrak{g}$ such that $\langle[X,Y], V\rangle \neq 0$. This implies the required result. We may assume also (without loss of generality) that this $Y$ is in $[X,\mathfrak{g}]=[\mathfrak{t},\mathfrak{g}]$ (since we have a $\langle \cdot,\cdot \rangle$-orthogonal decomposition $\mathfrak{g}=C_{\mathfrak{g}}(X) \oplus [X,\mathfrak{g}]=\mathfrak{t}\oplus [X,\mathfrak{g}]$ and the $\mathfrak{t}$-component of $Y$ commutes with $X$). Further, since the subspace $[X,\mathfrak{g}]=[\mathfrak{t},\mathfrak{g}]$ is $\langle \cdot,\cdot \rangle$-orthogonal to $C_{\mathfrak{g}}(X)=\mathfrak{t}$, then the torus $T$ is not totally geodesic with respect to the left-invariant metric generated with the inner product $(\cdot, \cdot)$. This gives the negative answer to Question~\ref{ques1}. \end{example} Now we are going to study geodesic vectors for some left-invariant Riemannian metrics on the Lie group $G=SU(2)\times SU(2)$. \begin{example}\label{exam2} Let us consider the basis $E_i$, $i=1,\dots,6$, in $\mathfrak{g}=\mathfrak{su}(2)\oplus \mathfrak{su}(2)$ such that the first three vectors are in the first copy of $\mathfrak{su}(2)$ whereas other vectors are in the second copy of $\mathfrak{su}(2)$ and \begin{eqnarray*} {[E_1,E_2]=E_3}, \quad [E_2,E_3]=E_1, \quad [E_1,E_3]=-E_2,\\ {[E_4,E_5]=E_6}, \quad [E_5,E_6]=E_4, \quad [E_4,E_6]=-E_5. \end{eqnarray*} It is clear that this basis is orthonormal with respect to the bi-invariant Riemannian metric $\langle \cdot , \cdot \rangle =-1/2 \cdot B(\cdot , \cdot)$, where $B$ is the Killing form of $\mathfrak{g}$. Let us consider a non-degenerate $(6\times 6)$-matrix $A=(a_{ij})$ and then inner product $(\cdot, \cdot)$ such that the vectors $F_i=\sum_{j=1}^6 a_{ij} E_j$, $i=1,\dots,6$, constitute a $(\cdot, \cdot)$-orthonormal basis. It is easy to see that there is an one-to-one correspondence between the set of all inner products on $\mathfrak{g}$ and the set of lower triangle matrices $A=(a_{ij})$ with positive elements on the principal diagonal. Let us consider the inner product $(\cdot, \cdot)_d$ that is generated with matrix $$ A= \left( \begin{array}{cccccc} 1 & 0 & 0 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 & 0 & 0\\ 1 & 0 & 0 & 1 & 0 & 0\\ 1 & 0 & 0 & 0 & 1 & 0\\ d & 1 & 1 & 0 & 0 & d\\ \end{array} \right), $$ where $d>0$. 
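The direct calculations referred to below can, for instance, be organized as in the following minimal \texttt{sympy} sketch (a hypothetical illustration, not part of the original computation). It builds the six geodesic-vector conditions $([V,E_j],V)_d=0$, $j=1,\dots,6$, for a candidate vector $V=\sum_{i=1}^6 v_i E_i$; the families listed below can then be recovered by solving these conditions (e.g. with \texttt{sp.solve}) or verified by substitution.
\begin{verbatim}
import sympy as sp

d = sp.symbols('d', positive=True)
v = sp.symbols('v1:7')          # coordinates of the candidate vector V

def bracket(x, y):
    # Lie bracket of su(2) + su(2) in the basis E_1,...,E_6 (coordinate form)
    c = [sp.S(0)] * 6
    for i, j, k, s in [(0, 1, 2, 1), (1, 2, 0, 1), (0, 2, 1, -1),
                       (3, 4, 5, 1), (4, 5, 3, 1), (3, 5, 4, -1)]:
        c[k] += s * (x[i] * y[j] - x[j] * y[i])
    return sp.Matrix(c)

# The rows F_i = sum_j A[i, j] E_j are declared (.,.)_d-orthonormal, so the
# Gram matrix of (.,.)_d in the basis E_1,...,E_6 is (A^T A)^{-1}.
A = sp.Matrix([[1, 0, 0, 0, 0, 0],
               [0, 1, 0, 0, 0, 0],
               [0, 0, 1, 0, 0, 0],
               [1, 0, 0, 1, 0, 0],
               [1, 0, 0, 0, 1, 0],
               [d, 1, 1, 0, 0, d]])
G = (A.T * A).inv()

V = sp.Matrix(v)
basis = sp.eye(6)
# geodesic-vector conditions ([V, E_j], V)_d = 0 for j = 1,...,6
conditions = [sp.simplify((bracket(V, basis.col(j)).T * G * V)[0])
              for j in range(6)]
for c in conditions:
    print(c)
\end{verbatim}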
Direct calculations (which can be performed using any computer algebra system) show that the vector $V=\sum_{i=1}^6 v_i E_i$ is a geodesic vector for the inner product $(\cdot, \cdot)_d$ if and only if one of the following conditions holds: \begin{itemize} \item $v_1 \in \mathbb{R}$, $v_2=v_3=0$, $v_4 \in \mathbb{R}$, $v_5 \in \mathbb{R}$, $v_6=d\cdot v_1$; \item $v_1 \in \mathbb{R}$, $v_2 \in \mathbb{R}$, $v_3 \in \mathbb{R}$, $v_4 \in \mathbb{R}$, $v_5=2v_1-v_4$, $v_6=d\cdot v_1 +v_2+v_3$; \item $v_1=d\cdot v_3$, $v_2=v_3$, $v_3 \in \mathbb{R}$, $v_4=d\cdot v_3$, $v_5=d\cdot v_3$, $v_6 \in \mathbb{R}$. \end{itemize} Hence, the set of geodesic vectors for the metric $(\cdot, \cdot)_d$ is the union of one $2$-dimensional, one $3$-dimensional, and one $4$-dimensional linear subspace in $\mathfrak{g}=\mathfrak{su}(2)\oplus \mathfrak{su}(2)$. The set of vectors $V=\sum_{i=1}^6 v_i E_i$ with $v_2=v_3=v_4=v_5=0$ determines a Cartan subalgebra $\mathfrak{t}$ in $\mathfrak{g}$. We see that $V \in \mathfrak{t}$ is a geodesic vector if either $v_1=0$ or $v_6=d\cdot v_1$. It is clear that in the latter case, for any irrational $d$, we get a vector $V$ such that the closure of the 1-parameter group $\exp(sV)$, $s\in \mathbb{R}$, coincides with a maximal torus $T:=\exp(\mathfrak{t})$ in $G$. Therefore, we again get the negative answer to Question \ref{ques2}. If $D$ is the discriminant of the characteristic polynomial of the Ricci operator $\operatorname{Ric}_d$ of the metric $(\cdot, \cdot)_d$, then $$ D=\frac{3^{18}\cdot 17^2}{2^8 \cdot d^{44}} \bigl(1+o(d)\bigr)\quad \mbox{as} \quad d \to 0. $$ Therefore, for sufficiently small positive $d$, $\operatorname{Ric}_d$ has 6 distinct eigenvalues. This implies that the full connected isometry group of the metric $(\cdot, \cdot)_d$ is $G=SU(2)\times SU(2)$. Indeed, if the dimension of the full isometry group were $>6$, then the isotropy subgroup would be non-discrete and, due to the effectiveness of the action, would have irreducible subrepresentations of dimension greater than one (not only one-dimensional, hence trivial, ones) in the isotropy representation; hence $\operatorname{Ric}_d$ would have some coinciding eigenvalues. Therefore, for sufficiently small $d>0$, the operator $\ad(X)$ is not skew symmetric with respect to $(\cdot, \cdot)_d$ for any nontrivial $X\in \mathfrak{g}=\mathfrak{su}(2)\oplus \mathfrak{su}(2)$. \end{example} Let us recall the following natural question. \begin{question}\label{ques3} Does a given metric Lie algebra $(\mathfrak{g},Q)$ admit a basis that consists of geodesic vectors (a geodesic basis)? \end{question} This question was studied in \cite{CHNT, CLNN, Ca-Ko-Ma, Ko-Szen}. In \cite{Ko-Szen} it is shown that semisimple Lie algebras possess an orthonormal basis comprised of geodesic vectors for every inner product (this was explained a little earlier in the present paper). Results for certain solvable algebras are given in \cite{Ca-Ko-Ma}. It is easy to see that if $\mathfrak{g}$ possesses an orthonormal geodesic basis (with respect to some inner product), then $\mathfrak{g}$ is unimodular \cite{CLNN}. The authors of \cite{CLNN} proved that every unimodular Lie algebra of dimension at most $4$, equipped with an inner product, possesses an orthonormal basis comprised of geodesic vectors, whereas there is an example of a solvable unimodular Lie algebra of dimension $5$ that has no orthonormal geodesic basis for any inner product. The authors of \cite{CHNT} were interested in giving conditions for the Lie algebra $\mathfrak{g}$ to admit a basis (not necessarily orthonormal) which is a geodesic basis with respect to some inner product.
The main results of \cite{CHNT} show that the following Lie algebras admit an inner product having a geodesic basis: \begin{itemize} \item unimodular solvable Lie algebras with abelian nilradical; \item some Lie algebras with abelian derived algebra; \item Lie algebras having a codimension one ideal of a particular kind; \item unimodular Lie algebras of dimension $5$. \end{itemize} The authors of \cite{CHNT} also obtained some negative results. For instance, they found the list of nonunimodular Lie algebras of dimension $4$ admitting no geodesic basis. \section{One special quadratic mapping} \label{sqm} We have discussed that the set of geodesic vectors on a given Lie algebra depends on the chosen inner product $(\cdot,\cdot)$. Let us consider this problem in a more general context. Let $\mathfrak{g}$ be a Lie algebra, then every inner product $(\cdot,\cdot)$ on $\mathfrak{g}$ determines a special quadratic mapping \begin{equation}\label{spec.mapp} \xi=\xi_{(\cdot,\cdot)}:\mathfrak{g} \mapsto \mathfrak{g} \end{equation} as follows: For any $X\in \mathfrak{g}$ we put $\xi(X):=V$, where $V$ is a unique vector in $\mathfrak{g}$ with the equality $([X,Y],X)=(V,Y)$ for all $Y\in \mathfrak{g}$. This mapping is well known, see e.~g. Proposition 7.28 in \cite{Bes} (where $\xi(X):=-U(X,X)$ in the notation of this proposition) for its generalization for homogeneous Riemannian spaces. The set of zeros of the mapping $\xi_{(\cdot,\cdot)}$ is exactly the set of geodesic vectors in $\mathfrak{g}$ with respect to the inner product $(\cdot,\cdot)$. In particular, this set always contains the center of the Lie algebra $\mathfrak{g}$. For any bi-invariant inner product $(\cdot,\cdot)$, the map $\xi_{(\cdot,\cdot)}$ is obviously trivial. On the other hand, this map could have unexpected properties for some special inner products $(\cdot,\cdot)$. \begin{example}\label{exam3} Let us consider a basis $E_i$, $i=1,2,3$, in $\mathfrak{g}=\mathfrak{su}(2)$ such that $$ [E_1,E_2]=E_3, \quad [E_2,E_3]=E_1, \quad [E_1,E_3]=-E_2. $$ Fix some positive numbers $a,b,c$ and consider the inner product $(\cdot,\cdot)$ on $\mathfrak{g}=\mathfrak{su}(2)$ that has an orthonormal basis $F_1=aE_1$, $F_2=bE_2$, $F_3=cE_3$. Direct calculations give us an explicit form of the mapping $\xi_{(\cdot,\cdot)}$ (we use coordinates of all vectors with respect to the original basis $E_1,E_2,E_3$): $$ \xi_{(\cdot,\cdot)} \bigl( x_1,x_2,x_3 \bigr)=\left( \frac{a(b-c)}{bc}\, x_2x_3, \frac{b(c-a)}{ac}\, x_1x_3, \frac{c(a-b)}{ab}\, x_1x_2 \right). $$ It is easy to see that for $a\neq b\neq c\neq a$, any geodesic vector should be a multiple of one of the vectors $E_1,E_2,E_3$. On the other hand, for $a=b\neq c$, geodesic vectors are exactly the vectors either with $x_3=0$ or with $x_1=x_2=0$. For $a=b=c$ we have a bi-invariant inner product. Obviously, $x_1x_2>0$ and $x_1x_3>0$ imply $x_2x_3>0$, hence, $\xi_{(\cdot,\cdot)}$ is not surjective. This correlates with Theorem 8 in \cite{ArZhuk2016}, stating (in particular) that any surjective quadratic mapping $q:\mathbb{R}^3 \rightarrow \mathbb{R}^3$ has no non-trivial zero. \end{example} It could be an interesting problem to study general properties of the quadratic mapping~(\ref{spec.mapp}) for general Lie algebras and general inner products. Below we consider some results in this direction. It should be recalled that the mapping $\xi_{(\cdot,\cdot)}$ always has at least one non-trivial zero according to~\cite{Kaj}. 
Nevertheless, this property does not imply directly the non-surjectivity of $\xi_{(\cdot,\cdot)}$ for $\dim \mathfrak {g} \geq 6$, see Example~3 and the corresponding discussion in \cite{ArZhuk2016}. Recall that a Lie algebra $\mathfrak{g}$ is unimodular if $\trace \ad(Y)=0$ for all $Y\in \mathfrak{g}$ (here, as usual, the operator $\ad(Y):\mathfrak{g} \rightarrow \mathfrak{g}$ is defined with $\ad(Y)(Z)=[Y,Z]$). All compact and semisimple Lie algebras are unimodular. Any Lie algebra $\mathfrak{g}$ contains {\it the unimodular kernel}, the maximal unimodular ideal, that could be described as follows: $$ \mathfrak{u}:=\{ Y\in \mathfrak{g}\,|\,\trace \ad(Y)=0\}. $$ For a non-unimodular Lie algebra $\mathfrak{g}$, the ideal $\mathfrak{u}$ has codimension $1$ in $\mathfrak{g}$. The following result is also well known (see e.~g. Lemma 7.32 in \cite{Bes}). \begin{proposition}\label{qudmapgen.1} Let $\mathfrak{g}$ be a Lie algebra supplied with an inner product $(\cdot,\cdot)$, $\dim \, \mathfrak{g}=n$. Then the quadratic map $\xi_{(\cdot,\cdot)}$ (see (\ref{spec.mapp})) has the following properties: 1) For a given $Y\in \mathfrak{g}$, the operator $\ad(Y)$ is $(\cdot,\cdot)$-skew symmetric if and only if $(\xi(X),Y)=0$ for all $X\in \mathfrak{g}$. 2) Let us define $\Delta=\Delta_{(\cdot,\cdot)}\in \mathfrak{g}$ by the equation $(\Delta,Y)=\trace \ad(Y)$, $Y \in \mathfrak{u}$. Then $\Delta$ is $(\cdot,\cdot)$-orthogonal to the unimodular kernel $\mathfrak{u}$ of $\mathfrak{g}$. In particular, $\Delta=0$ if~$\mathfrak{g}$ is unimodular. 3) $\Delta=-\sum_{i=1}^n \xi(E_i)$ for any $(\cdot,\cdot)$-orthonormal basis $E_i$, $i=1,\dots, n$, in $\mathfrak{g}$. \end{proposition} \begin{proof} Let us prove the first assertion. Recall that the operator $\ad(Y)$ is $(\cdot,\cdot)$-skew symmetric if and only if $([X,Y],X)=0$ for all $X\in \mathfrak{g}$. By the definition of $\xi_{(\cdot,\cdot)}$ we get $(\xi(X),Y)=([X,Y],X)$ for every $X\in \mathfrak{g}$, that proves the required assertion. The second assertion follows directly from the definition of the unimodular kernel $\mathfrak{u}$. Let us prove the third assertion. Fix any $Y \in \mathfrak{g}$. By the definition of $\xi_{(\cdot,\cdot)}$ we have $(\xi(E_i),Y)=([E_i,Y],E_i)$. Therefore, $$ \left(\sum_{i=1}^n \xi(E_i), Y \right)=\sum_{i=1}^n (\xi(E_i),Y)=-\sum_{i=1}^n ([Y,E_i],E_i)=-\trace \ad(Y)=-(\Delta,Y), $$ that proves the required result. \end{proof} We hope that the further study of the mapping (\ref{spec.mapp}) will allow to understand more deeply the set of geodesic vectors for general metric Lie algebras. {\bf Acknowledgements.} The first author was supported by the Ministry of Education and Science of the Russian Federation (Grant 1.308.2017/4.6). \end{document}
\begin{document} \thispagestyle{empty} {\begin{tabular}{lr} \end{tabular}} \begin{center} \baselineskip 1.2cm {\Huge\bf Noise and Order in Cavity Quantum Electrodynamics }\\[1mm] \normalsize \end{center} {\centering {\large Per K. Rekdal\footnote{Email address: [email protected].}$^{}$ and {\large Bo-Sture K. Skagerstam\footnote{Email address: [email protected].}$^{}$} \\[5mm] $^{}~$Department of Physics, The Norwegian University of Science and Technology, N-7491 Trondheim, Norway \\[1mm] } } \begin{abstract} \normalsize In this paper we investigate the various aspects of noise and order in the micromaser system. In particular, we study the effect of adding fluctuations to the atom cavity transit time or to the atom-photon frequency detuning. By including such noise-producing mechanisms we study the probability and the joint probability for excited atoms to leave the cavity. The influence of such fluctuations on the phase structure of the micromaser as well as on the long-time atom correlation length is also discussed. We also derive the asymptotic form of micromaser observables. \end{abstract} \bc{ \section{Introduction} }\ec Noise is usually considered as a limiting factor in the performance of a physical device (see e.g. Refs.\cite{noises}). There are, however, nonlinear dynamical systems where the presence of noise sources can induce completely new regimes that cannot be realized without noise. Recent studies have shown that noise in such systems can induce more ordered regimes, more regular structures, increase the degree of coherence, cause the amplification of weak signals and growth of their signal-to-noise ratio (see e.g. Refs. \cite{Maritan&B&K94,Buchleitner98}). In other words, noise can play a constructive role, enhancing the degree of order in a system. The micromaser is an example of such a nonlinear system. The micromaser system is an experimental realization of the idealized system of a two-level atom interacting with a second quantized single-mode of the electromagnetic field (for reviews and references see e.g. Refs. \cite{Walther_div}- \cite{Skagerstam99}). In the micromaser a beam of two-level atoms is sent through a microcavity where each atom intersects with the photon field inside the cavity during a transit time $\tau$. After exit from the cavity the atoms are detected in either of its two states. It is assumed that subsequent atoms arrive at time intervals which are much longer then the atom-field interaction such that at most one atom at a time is inside the cavity, which is the operating condition for the one-atom maser. In such a system noise-controlled jumps between metastable states have been discussed in the literature \cite{Buchleitner98}. In the present paper we study the effect of including noise-producing mechanisms in the micromaser system like a velocity spread in the atomic beam or a spread in the atom-photon frequency detuning $\Delta \omega$. We show that under suitable conditions, such noise-producing mechanisms lead to more pronounced revivals in the probabilities ${\cal P}(+)$ and ${\cal P}(+,+)$, where ${\cal P}(+)$ is the probability that an atom is found in its excited state after interaction with the photon field inside the microcavity and ${\cal P}(+,+)$ is the joint probability that two consecutive atoms are measured in their excited state after interaction. It is the purpose of the present paper to study the counterintuitive role that noise can play in the micromaser and extend the results in Refs. \cite{Filipowicz86_igjen,ElmforsLS95}. 
The paper is organized as follows. For the convenience of the reader we recapitulate in Section \ref{revivals_SEC} the theoretical framework of the micromaser system. In this Section we also discuss the general conditions to make revivals in the probability ${\cal P}(+)$ well separated in the atomic transit time. In Section \ref{no_avg_SEC} we determine asymptotic limits of the order parameter of the micromaser system and the probabilities ${\cal P}(+)$ and ${\cal P}(+,+)$. In Section \ref{fluctuations_SEC} the effect of noise on ${\cal P}(+)$ and ${\cal P}(+,+)$, the order parameter and the micromaser phase diagram is discussed. In Section \ref{corr_KAP} the effect of fluctuations on the correlation length is discussed and numerical investigations are presented. A conclusion is given in Section \ref{final_KAP}. \bc{ \section{The Dynamical System} \label{revivals_SEC} } \ec In the description of the dynamics of the one-atom micromaser we have to take losses of the photon field into account, i.e. the time evolution of the photon field is described by a master equation. The continuous time formulation of the micromaser system \cite{Lugiato87} is a suitable technical frame work for our purposes. Let $a$ be the probability for a pump atom to be in its excited state. Assuming that the pump atoms are prepared in an incoherent mixture, i.e. the density matrix of the atoms is diagonal with diagonal matrix elements $a$ and $b$ such that $a+b=1$, it is shown in \cite{Lugiato87,ElmforsLS95} that the vector $p$ formed by the diagonal density matrix elements of the photon field obeys the differential equation $dp/dt=-\gamma Lp$. Here $\gamma$ is the damping rate of photons in the cavity and $L=L_C -N(M-1)$, where $( L_C)_{nm} = (n_b+1)[ \, n \delta_{n,m} - (n+1) \delta_{n+1,m} \, ] + n_b[ \, (n+1)\delta_{n,m} - n \delta_{n,m+1} \, ]$ describes the damping of the cavity. $n_b$ is the number of thermal photons and $N$ is the average number of atoms injected into the cavity during the cavity decay time. The matrix $M$ is $M=M(+) +M(-)$, where $M(+)_{nm} = b q_{n+1} \delta_{n+1,m} + a(1-q_{n+1}) \delta_{n,m}$ and $M(-)_{nm} = a q_{n} \delta_{n,m+1} + b(1-q_{n}) \delta_{n,m}$, has its origin in the Jaynes-Cummings (JC) model \cite{Jaynes&Cummings63,ElmforsLS95}. The quantity $q_n \equiv q(x=n/N)$ is given by \be \label{q_n} q(x) = \frac{x}{x+\Delta^2} \sin^2 \left ( \theta \sqrt{x+\Delta^2}\right ) ~~, \ee \noi where $\Delta^2 \equiv (\Delta \omega)^2/(4g^2 N)$ is the scaled dimensionless detuning parameters of the micromaser and $g$ is the single photon Rabi frequency at zero detuning of the JC model. \eq{q_n} is expressed in terms of the scaled dimensionless pump parameter $\theta = g \tau \sqrt{N}$. Noise mechanisms, like a spread in the transition time or a spread in the detuning, are now included in the analysis by simply averaging the matrix $L$ with respect to $\theta$ or $\Delta$, i.e. averaging the quantity $q_n$ with respect to one of these parameters. A similar averaging procedure with regard to the parameters $a$ (or $b$) or $n_b$ leads only to a trivial replacement of the corresponding parameters with their mean values. The stationary solution of the photon distribution where such noise effects are included is therefore derived in a standard and well known manner \cite{Filipowicz86_igjen,Lugiato87,ElmforsLS95}. 
The result is \be \label{p_n_eksakt} {\bar p}_n = {\bar p}_0 \prod_{m=1}^{n} \frac{n_b \, m + N a \langle q_m \rangle} {(1+n_b) \, m + N b \langle q_m \rangle } ~~, \ee \noi where $\langle \cdot \rangle$ in \eq{p_n_eksakt} denotes averaging, to be discussed below, with respect to $\theta$ or $\Delta$. ${\bar p}_0$ is a normalization constant. After the passage through the microcavity we make a selective measurement of the atoms. We imagine that one then only measure those atoms leaving the cavity with a definite value of $\theta$ or $\Delta$, i.e. in effect putting a sharp velocity filter or a filter sensitive to detuning at the atom output port of the cavity. The probability ${\cal P}(s)$ of finding an atom in a state $s=\pm$ after the interaction with the cavity photons, where $+$ represents the excited atom state and $-$ represents the atom ground state, can then be expressed in the following matrix form \cite{ElmforsLS95}: \be \label{p_pl_matrix} {\cal P}(s) = {\bar u}^{T} M(s) {\bar p} ~~, \ee \noi such that ${\cal P}(+)\,+\,{\cal P}(-) = 1$. The elements of the vector ${\bar p}$ is given by the equilibrium distribution in \eq{p_n_eksakt}. The quantity ${\bar u}$ is a vector with all entries equal to $1$, ${\bar u}_n=1$. Explicitly ${\cal P}(+)$ takes the form \be \label{p_pl} {\cal P}(+) = a \sum_{n=0}^{\infty} \; {\bar p}_n ( 1 - q_{n+1} ) + b \sum_{n=0}^{\infty} \; {\bar p}_{n+1} q_{n+1} ~~. \ee \noi It is well known in the literature that this probability can exhibit quantum revivals (see e.g. Refs. \cite{Walther_div} - \cite{samle_bok_publ}). Furthermore, the joint probability for observing two consecutive atoms in the states $s_1$ and $s_2$ is given by \cite{ElmforsLS95} \bea \label{p_PP_M} {\cal P}(s_1,s_2) &=& {\bar u}^{T} S(s_2) \, S(s_1)~ {\bar p} \nonumber \\ &=& {\bar u}^{T} M(s_2) \, \, S(s_1)~ {\bar p} ~~, \eea \noi where $S(s) = (1+L_C/N)^{-1}M(s)$. Explicitly one finds \bea \label{p_PP} {\cal P}(+,+) &=& a^2 \sum_{n,m=0}^{\infty} \; (1-q_{n+1}) \, (1+L_C/N)^{-1}_{nm} \, (1-q_{m+1}) \, {\bar p}_m \nonumber \\ &+& ab \sum_{n,m=0}^{\infty} \; \Big \{ \, (1-q_{n+1}) \, (1+L_C/N)^{-1}_{nm} \, q_{m+1} \; {\bar p}_{m+1} \nonumber \\ &+& q_n \, (1+L_C/N)^{-1}_{nm} \, (1-q_{m+1}) \; {\bar p}_m \, \Big \} \nonumber \\ &+& b^2 \sum_{n,m=0}^{\infty} \; q_n \, (1+L_C/N)^{-1}_{nm} \, q_{m+1} \, {\bar p}_{m+1} ~~. \eea In Refs.\cite{ElmforsLS95,Rekdal99} it is shown that the equilibrium distribution \eq{p_n_eksakt} can be re-written in a form which is rapidly convergent in the large $N$ limit by making use of a Poisson summation technique \cite{Schleich93}. In terms of the scaled photon number variable $x=n/N$ the stationary probability distribution can be written in the from \be \label{p_x} {\bar p}(x) = {\bar p}_0 \sqrt{\frac{w(x)}{w(0)}} ~ e^{-N \, V(x)}~~, \ee \noi where \be V(x) = \sum_{k=-\infty}^{\infty}V_k(x)~~. \ee \noi The effective potential $V(x)$ is expressed in terms of \be \label{V_k} V_k(x) = - \int _0^x d\nu \, \ln[\, w(\nu)\,] \, \cos(2\pi N k \nu) ~~, \ee \noi where \be \label{w_x} w(x) = \frac{n_b \, x + a \, \langle q(x) \rangle}{(1+n_b)\, x + b \, \langle q(x) \rangle} ~~. \ee \noi We stress that \eq{p_x} is exact. In the large $N$ limit \eq{p_x} can be simplified by making use of a saddle-point approximation. Apart from calculable $1/N$ corrections we put $V(x)=V_0(x)$. The saddle-points are then determined by $V_0^{\prime}(x)=0$, where $V_0^{\prime}(x)$ is the derivative of $V_0(x)$ with respect to $x$. 
Hence, it is the nature of the global minima of $V_0(x)$ which determine the probability distribution, apart from possible zeros of $w(x)$. If the only global minimum of $V_0(x)$ occurs at $x=0$, which corresponds to the thermal phase of the micromaser, we can expand the effective potential $V_0(x)$ around origin. The probability ${\bar p}_n$ is then given by \be \label{p_n_geo} {\bar p}_n = {\bar p}_0 \left ( \frac{n_b+a\,\theta^{2}_{eff}}{1+n_b+b\,\theta^{2}_{eff}} \right)^n ~~, \ee \noi where \bea \label{theta_eff} \theta_{eff}^2 = \lim_{x \rightarrow 0} \frac{\langle q(x) \rangle}{x} = \left \langle \frac{ \sin^2(\theta \Delta)}{\Delta^2} \right \rangle ~~. \eea \noi The probability distribution as given by \eq{p_n_geo} is normalizable provided \be \label{c_bet} \theta_{eff}^2 (2a-1) < 1 ~~. \ee If, on the other hand, there exists non-trivial saddle-points of $V_0(x)$, which correspond to the possible maser phases of the micromaser, we write \be \label{p_x_helt_gen} {\bar p}(x) = \sum_j {\bar p}_j(x) ~~, \ee \noi where $\sum_j$ denotes the sum over the local minima of $V_0(x)$, i.e. $V_0^{\prime \prime}(x) > 0$, and where ${\bar p}_j(x)$ is ${\bar p}(x)$ for $x$ close to the local minimum $x=x_j$. The distribution ${\bar p}_j(x)$ is given by \be \label{p_j} {\bar p}_j(x) = \frac{T_j}{\sqrt{2\pi N}} ~e^{-\frac{N}{2} V_0^{\prime \prime}(x_j)(x-x_j)^2} ~~, \ee \noi where $T_j$ is determined by the normalization condition for ${\bar p}(x)$, i.e \be \label{T_j} T_j = \frac{e^{- N V_0(x_j)}}{ {\displaystyle \sum_m} \displaystyle{ e^{-N V_0(x_m)}}/\sqrt{V_0^{\prime \prime}(x_m)}}~~, \ee \noi and \be \label{V_dobbel} V_0^{\prime \prime}(x) = \frac{(2a-1)^2}{a + n_b(2a-1)} \, \frac{\langle q(x) \rangle - x \, d \langle q(x) \rangle/dx }{x^2} ~~. \ee \noi For the given parameters, the sum in \eq{T_j} is supposed to be taken over all saddle-points corresponding to a minimum of $V_0(x)$, i.e. all saddle-points corresponding to $V_0^{\prime \prime}(x)>0$. If $x=x_j$ does not correspond to a global minimum of the effective potential $V_0(x)$, then $T_j$ is exponentially small in the large $N$ limit. If $x=x_j$ does correspond to one and only one global minimum, we can neglect all the terms in the sum in $T_j$ but $m=j$, in which case $T_j$ is reduced to $T_j = \sqrt{V_0^{\prime \prime}(x_j)}$. In the neighbourhood of such a global minimum the probability distribution in \eq{p_j} is therefore reduced to \be \label{p_j_gauss} {\bar p}_j(x) = \frac{1}{ \sqrt{2\pi N^2 \sigma_x^2}} ~ e^{-\frac{(x-x_j)^2}{2\sigma_x^2}} ~~, \ee \noi where the standard deviation $\sigma_x$ of \eq{p_j_gauss} is \be \label{sigma_x} \sigma_x = \frac{1}{ \sqrt{N V_0^{\prime \prime}(x_{j})}} ~~, \ee \noi which is zero in the large $N$ limit provided $V_0^{\prime \prime}(x_j) \neq 0$. The probability ${\bar p}(x)$ is therefore peaked around the global minima for large $N$. The minimum $x_j$ of the potential $V_0(x)$ is a solution of saddle-point equation \be \label{sadel_lign} x = (2a-1) \langle q(x) \rangle \leq 1 ~~. \ee \noi This saddle-point equation is independent of the number $n_b$ of thermal photons in the cavity. The excitation probability ${\cal P}(+)$ can be re-written in a form where quantum revivals (see e.g. Ref. \cite{Knight93}) become explicitly by making use of a Poisson summation technique \cite{Schleich93}. 
When $a=1$ we obtain \bea \label{p_P_Poisson_gen} {\cal P}(+) &=& 1 - \frac{1}{2} \sum_{n=0}^{\infty} \, {\bar p}_n \frac{n+1}{n+1+ N \Delta^2} + \frac{1}{2} \sum_{\nu=-\infty}^{\infty} w_{\nu}(\theta) \nonumber \\ &+& \frac{1}{2} \, {\bar p}_0 \; \frac{1}{1+ N \Delta^2} \cos( \, 2 \theta \sqrt{1/N + \Delta^2}\, ) ~~, \eea \noi where \be \label{w_0_gen} w_0(\theta) = N \int_0^{\infty} dx \; {\bar p}(x) \frac{x+1/N}{x+1/N + \Delta^2} \cos \left( 2 \theta \sqrt{x + 1/N + \Delta^2 } \right ) ~~, \ee \noi and \bea \label{w_nu_gen} w_{\nu}(\theta) &=& N \; \mbox{Re} \, \bigg \{ \; e^{-2\pi i \nu N \left[ \,\left ( \, \theta/2 \pi \nu N \, \right )^2 + \Delta^2 \, \right ] } \nonumber \\ &\times& \int_0^{\infty} dx \; {\bar p}(x) \, \frac{x+1/N}{x+1/N + \Delta^2} \; e^{2\pi i \nu N \left( \, \sqrt{x+1/N + \Delta^2} - \theta/2 \pi \nu N \, \right )^2 } \; \bigg \} ~~. \eea \noi Here ${\bar p}(x)$ is the continuous version of ${\bar p}(n/N)$. We stress that \eq{p_P_Poisson_gen} is exact. If $a\neq 1$ we obtain ${\cal P}(+)$ by the replacement ${\cal P}(+) \rightarrow 1-a + (2a-1){\cal P}(+)$, apart from $1/N$ corrections. If the probability distribution ${\bar p}(x)$ is sufficiently peaked, i.e. ${\bar x} \gg \sigma_x$, where ${\bar x} \equiv {\bar x}(\theta)$ denotes the average of $x=n/N$ with respect to the stationary probability distribution ${\bar p}_n$, then the excitation probability is reduced to \be \label{p_P_Poisson} {\cal P}(+) \approx 1 - \frac{1}{2} \frac{{\bar x}}{ {\bar x} + \Delta^2} + \frac{1}{2} \sum_{\nu=0}^{\infty} w_{\nu}(\theta) ~~, \ee \noi where \bea \label{w_nu} w_{\nu}(\theta) &\approx& {\bar p}( x = {\bar x}_{\nu}) \, \frac{{\bar x}}{ {\bar x} + \Delta^2}\, \frac{\theta}{\pi \sqrt{2 \nu^3 N}} \; \nonumber \\ &\times& \cos \left [ \, 2 \pi \nu N \left ( \, \left ( \, \frac{\theta}{2 \pi \nu N} \, \right )^2 + \Delta^2 \, \right ) - \frac{\pi}{4} \, \right ] ~~, ~~ \nu \geq 1 ~~, \eea \noi and \be \label{n_nu_2} {\bar x}_{\nu} = \left ( \, \frac{\theta}{2 \pi \nu N} \, \right )^2 - \Delta^2 ~~. \ee \noi According to \eq{n_nu_2} the $\nu$:th revival of ${\cal P}(+)$ occurs in the region where $\theta$ is close to \be \label{overl_theta_nu} \theta_{\nu} = 2 \pi \nu N \sqrt{ {\bar x} + \Delta^2} ~~. \ee \noi This equation relates the width $\sigma_x$ of the probability distribution ${\bar p}(x)$ into a measure for the width $\Delta \theta_{\nu}$ in the pump parameter of the $\nu$:th revival according to a probability distribution of the form $p(\theta) \simeq \exp(-(\theta -\theta_{\nu})^2/2(\Delta \theta_{\nu})^2)$, i.e. \be \label{sig_mu} \Delta \theta_{\nu} = \pi \nu N \, \frac{\sigma_x}{\sqrt{ {\bar x} + \Delta^2} } ~~. \ee \noi Two consecutive terms of the sum in \eq{p_P_Poisson} separate in time when their temporal separation $\theta_{\nu+1} - \theta_{\nu}$ is larger then $\Delta \theta_{\nu+1} + \Delta \theta_{\nu}$, i.e. within one standard deviation, provided \be \label{ikke_overlapp_bet_NY} \nu < \frac{ {\bar x} + \Delta^2 }{ \sigma_x} - \frac{1}{2} ~~. \ee \noi The revivals in ${\cal P}(+)$ cannot be resolved for values of $\nu$ larger then the right-hand side of \eq{ikke_overlapp_bet_NY}. If the probability distribution ${\bar p}(x)$ is sufficiently peaked, i.e. 
${\bar x} \gg \sigma_x$, then the excitation probability ${\cal P}(+)$ approaches \be \label{P_a_peaked} {\cal P}(+) = a- \frac{1}{2}(2a-1)\left( \frac{{\bar x}}{({\bar x} + \Delta^2)} - w_{0}(\theta)\right) \approx a -(2a-1)q(x)~~, \ee \noi in the large $N$ limit, where we make use of \be \label{eq:asymp} w_{0}(\theta) \approx \exp(-\theta^2 /2N)\frac{\bar x}{{\bar x}+\Delta^2}\cos(2\theta\sqrt{{\bar x}+\Delta^2}) ~~. \ee If the average in the saddle-point equation \eq{sadel_lign} is such that $\langle q(x) \rangle \approx q(x)$, we see that ${\cal P}(+) \approx a - {\bar x}$. If we average \eq{eq:asymp} with respect to the noise in the system, we obtain $\langle {\cal P}(+) \rangle \approx a - {\bar x}$ by making use of the saddle-point equation \eq{sadel_lign} once more. The approximative result \eq{P_a_peaked} can actually be converted into an exact and more general relation between ${\bar x}$ and ${\cal P}(+)$. If the atoms are not measured with well-defined $\theta$ or $\Delta$ then we must in general average ${\cal P}(+)$ with respect to the corresponding probability distribution. In \eq{p_pl} we then replace $q_n$ by $\langle q_n \rangle$ and denotes the corresponding excitation probability ${\cal P}(+)$ by $\langle {\cal P}(+) \rangle$. From the equilibrium distribution \eq{p_n_eksakt} we then obtain the recurrence formula $[ \, (1+n_b) \, n + N b \langle q_n \rangle \, ] \, {\bar p}_n = {\bar p}_{n-1} [ \, n_b \, n + N a \langle q_n \rangle \, ]$. By summing this formula over $n$, we therefore derive that \be \label{sh_xP} {\bar x} = a + \frac{n_b}{N} - \langle {\cal P}(+) \rangle ~~. \ee \noi In general and in the presence of noise in $\theta$ and/or $\Delta$, ${\bar x}$ and ${\cal P}(+)$ have no direct relation and we must in general consider them to be independent variables. \bc{ \section{Absence of Fluctuations} \label{no_avg_SEC} }\ec If the atoms enter the cavity without a spread in the transit time and without a spread in the atom-photon frequency detuning, the behavior of the order parameter and the excitation probability ${\cal P}(+)$ is well known in the literature (see e.g. Refs. \cite{Walther_div,samle_bok_publ,Filipowicz86_igjen}). As e.g. seen in \fig{p_pl_sig0_THETA_FIG}, ${\cal P}(+)$ under such circumstances shows no clear evidence for resonant behavior of revivals. A more precise way to express the presence of revivals in the temporal variations of ${\cal P}(+)$ can e.g. obtained by performing a Fourier transformation of ${\cal P}(+)$ and considering the width of the corresponding spectrum. The corresponding Fourier spectrum of \fig{p_pl_sig0_THETA_FIG} is then broad. The appearance of clear revivals would correspond to a more peaked Fourier spectrum. In the large $N$ and $\theta$ limit the order parameter approaches a constant. This constant, ${\bar x}_{\infty}$, is therefore determined by the global minimum of $V_0(x)$ in the large $\theta$ limit, in which case $V_0(x)$ has a unique minimum. The micromaser system has then no maser-maser phase transitions. As shown in an Appendix, this minimum is determined by $\partial V_0(x)/\partial x=0$, where \be \label{hurra_formel} \frac{\partial V_0(x)}{\partial x} = \ln \left [ \frac{1-a}{a} \right ] + f(\, y(x) \, ) - f( \, w(x) \,) ~~, \ee \noi and where $f(z)$, $y(x)$ and $w(x)$ are as given in the Appendix. When e.g. 
$a=1$, $n_b=0.15$ and $\Delta=0$ the global minimum of $V_0(x)$ occurs at ${\bar x}_{\infty} \approx 0.34$, which corresponds to an asymptotic excitation probability ${\cal P}_{\infty}(+) \approx 0.66$ due to \eq{sh_xP}. When $a=1$ and $\Delta=0$ it follows from \eq{p_PP} that for large $\theta$ the joint probability ${\cal P}(+,+)$ approaches the asymptotic value \be \label{P_pp_a_smh} {\cal P}_{\infty}(+,+) \approx (5 {\cal P}_{\infty}(+) - 1)/4 ~~, \ee \noi i.e. with the physical parameters as above we find ${\cal P}_{\infty}(+,+) \approx 0.57$ (see \fig{p_pl_sig0_THETA_FIG}). \eq{P_pp_a_smh} follows from \eq{p_PP} by making use of the following properties of the unbounded operator $L_C$ \be \sum_{n=0}^{\infty} \, (L_C)_{nm} = 0 ~~ , ~~ \sum_{m=0}^{\infty} \, (L_C)_{nm} = -1 ~~, \ee \noi and performing a suitable large $N$ limit. \begin{figure}\label{p_pl_sig0_THETA_FIG} \end{figure} \bc{ \section{Effects of Fluctuations} \label{fluctuations_SEC} }\ec We will now consider physical effects of noise in the micromaser system. We start by discussing the effects of adding fluctuations to the pump parameter $\theta$ and then, in Section \ref{det_SEC}, we study the effects of adding fluctuations to the atom-photon detuning parameter $\Delta$. \bc{ \subsection{Pump Parameter Fluctuations} \label{pump_SEC} }\ec Suppose that the pump parameter $\theta$ is described by a positive stochastic variable $\xi$ as described by the probability distribution $P_{\theta}(\xi)$ such that $\langle \xi \rangle = \theta$ and $\langle ( \xi - \theta )^2 \rangle = \sigma_{\theta}^2$. The averaged value of $q(x)$ with respect to $P_{\theta}(\xi)$ is then given by \be \label{q_t_def} \langle q(x) \rangle_{\theta} = \int_{0}^{\infty} d \xi \; P_{\theta}(\xi) \, \frac{x}{x+\Delta^2} \sin^2 \left ( \xi \sqrt{x+\Delta^2} \right ) ~~. \ee \noi Non-trivial saddle-points of the corresponding $V_0(x)$ can be found by solving the equation \be \label{s_t} 1 = (2a-1) \, I_1(\theta,x(\theta)) ~~, \ee \noi where \be \label{I_1_def_gen} I_1(\theta, x) = \frac{1}{x+\Delta^2}\; {\int_{0}^{\infty} d \xi \; P_{\theta}(\xi) \, \sin^2 \left ( \, \xi \sqrt{x + \Delta^2} \; \right ) \leq \sigma_{\theta}^2 + \theta^2} ~~. \ee For small values of $x$, we can expand the effective potential $V_0(x)$ around $x=0$. A straightforward expansion $V_0(x)$ then leads to \bea \label{V_exp_fase} V_0(x) &=& x \ln \left [ \frac{1 + n_b+a\,\theta^{2}_{eff}}{n_b+a\,\theta^{2}_{eff}} \right ] \nonumber \\ &+& \frac{x^2}{2} \frac{a+n_b(2a-1)}{(n_b + a \theta_{eff}^2)(1+n_b+b \theta_{eff}^2)} ~ \langle \, \xi^4 f(\xi |\Delta| ) \, \rangle \; + \; {\cal O}(x^3)~~, \eea \noi where \be f(x) = \frac{\sin^2(x)}{x^4} - \frac{\sin(x) \cos(x)}{x^3} ~~. \ee \noi For small $x$, i.e. $x \ll 1$, $f(x) = 1/3 + {\cal O}(x^2)$. For typical physical parameters such that $\theta |\Delta| \lesssim \pi$, we therefore observe a thermal-maser phase transition for values of $\theta$ determined by $V_0^{\prime}(x)=0$. The corresponding critical transition line is then \be \label{a_t_gen} a(\theta) = \frac{1}{2} + \frac{1}{2} \frac{1}{I_1(\theta)} \leq 1 ~~, \ee \noi where $I_1(\theta,0) \equiv I_1(\theta)$. \eq{a_t_gen} coincides with the radius of convergence of the thermal probability distribution \eq{p_n_geo} as it should. For $\Delta=0$ we see that $\theta_{eff}^2 = \sigma_{\theta}^2 + \theta^2$ and hence this phase transition can occur for $\theta=0$, i.e. the phase transition can be induced by the presence of noise in the stochastic variable $\theta$. 
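As a simple illustration of this noise-induced transition (with hypothetical numbers), set $\Delta=0$ and $\theta=0$; the critical line \eq{a_t_gen} then takes the form \be a(0) = \frac{1}{2} + \frac{1}{2 \sigma_{\theta}^2} ~~, \ee \noi so that, e.g., for $\sigma_{\theta}^2=2$ the system is in the maser phase as soon as $a > 3/4$, even though the mean pump parameter vanishes. Since $a \leq 1$, such a noise-induced transition at $\theta=0$ requires $\sigma_{\theta}^2 \geq 1$.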
For $\theta |\Delta| \gg \pi$ we must in general make use of a combination of analytical and numerical methods, as in the case of $\sigma_{\theta}=0$ \cite{Rekdal99}, in order to get a detailed picture of the phase diagram. In the maser phase, the order parameter ${\bar x}(\theta)$ approaches zero when the system approaches the critical line \eq{a_t_gen}. Furthermore, in the large $N$ limit ${\bar x}(\theta)$ is always zero in the thermal phase. Hence, the order parameter is continues on the critical transition line \eq{a_t_gen}. To determine the order of the phase transition we therefore have to investigate higher order derivatives. The first derivative of ${\bar x}(\theta)$ with respect to $\theta$, ${\bar x}^{\prime}(\theta)$, at the critical transition line \eq{a_t_gen} is \be \label{dx_dtheta} {\bar x}^{\prime}(\theta) = \frac{\Delta^2 \, I_1^{\prime}(\theta)} {I_1(\theta) - I_2(\theta)} ~~, \ee \noi where $I_1^{\prime}(\theta)$ is the derivative of $I_1(\theta)$ with respect to $\theta$ and where \be \label{I_2} I_2(\theta) = \int_{0}^{\infty} d \xi \; P_{\theta}(\xi) \, \xi^2 \; \frac{\sin \left ( 2 \xi \Delta \right )}{2\xi \Delta} ~~. \ee \noi By making use of the positive definiteness of $P_{\theta}(\xi) \xi^2$, the Cauchy-Schwarz inequality implies $|I_2(\theta)| \leq \sqrt{\sigma_{\theta}^2 + \theta^2}$. If $x^{\prime}(\theta)$ is zero, then we have to investigate higher order derivatives. The second derivative of ${\bar x}(\theta)$ with respect to $\theta$, ${\bar x}^{\prime \prime}(\theta)$, at the critical thermal-maser line \eq{a_t_gen} is \be {\bar x}^{\prime \, \prime}(\theta) = \frac{\Delta^2 I_1^{\prime \, \prime}(\theta) + {\bar x}^{\prime}(\theta) I_2^{\prime}(\theta)}{I_1(\theta) - I_2(\theta)} ~~, \ee \noi where $I_1^{\prime \, \prime}(\theta)$ is the second derivative of $I_1(\theta)$ with respect to $\theta$ and $I_2^{\prime}(\theta)$ is the derivative of $I_2(\theta)$ with respect to $\theta$. The effective potential $V_0(x)$ approaches its large $\theta$ limit exponentially fast as a function of $\sigma_{\theta}^2$. This unique minimum is determined by $\partial V_0(x)/\partial x=0$, where $\partial V_0(x)/\partial x$ is given by \eq{hurra_formel} with $y(x)$ and $w(x)$ replaced by ${\tilde y}(x) = a \, e^{-2(x+\Delta^2) \sigma_{\theta}^2}/[ \, \frac{a}{2} (1+e^{-2(x+\Delta^2) \sigma_{\theta}^2} ) +n_b( x+\Delta^2 ) \, ]$ and ${\tilde w}(x) = b \, e^{-2(x+\Delta^2) \sigma_{\theta}^2}/[ \, \frac{b}{2} (1+e^{-2(x+\Delta^2) \sigma_{\theta}^2} ) +(1+n_b)( x+\Delta^2 ) \, ]$, respectively. If the micromaser is not detuned, i.e. $\Delta=0$, then there is one critical thermal-maser transition line only. This critical transition line can be found analytically for any probability distribution $P_{\theta}(\xi)$. The integral $I_1(\theta)$ is then given by \be I_1(\theta) = \sigma_{\theta}^2 + \theta^2 \geq 1 ~~. \ee \noi The critical thermal-maser transition line is then only dependent of the variance $\sigma_{\theta}^2$ and the mean value $\theta$: \be \label{t_m_line} a(\theta) = \frac{1}{2} + \frac{1}{2} \frac{1}{\sigma_{\theta}^2 + \theta^2 } ~~. \ee \noi For values of $a$ and $\theta$ above this critical line, i.e. $a \geq a(\theta)$, the micromaser system is in a maser phase. Below this line, on the other hand, the micromaser is in the thermal phase (see e.g. \fig{fase_diff_sig_FIG}). The order parameter ${\bar x}(\theta)$ is continuous on the critical line \eq{t_m_line}. 
The first derivative of ${\bar x}(\theta)$ with respect to $\theta$, ${\bar x}^{\prime}(\theta)$, at this critical thermal-maser line is \be {\bar x}^{\prime}(\theta) = 6 \, \frac{\theta}{\langle \xi^4 \rangle} ~~, \ee \noi which is non-zero for $\sigma_{\theta}^2 < 1$. The critical line \eq{t_m_line} then describes a line of second-order (thermal-maser) phase transition. If $\sigma_{\theta}^2 \geq 1$, \eq{t_m_line} also describes a line of second-order phase transitions for all values of $\theta$ except for $\theta=0$ where $a(0)=1/2 + 1/(2 \sigma_{\theta}^2)$. Since ${\bar x}^{\prime}(\theta)$ then is zero we have to investigate higher order derivatives. The second derivative of ${\bar x}(\theta)$ with respect to $\theta$, ${\bar x}^{\prime \prime}(\theta)$, at the critical thermal-maser line is \be \label{dobbel_x} {\bar x}^{\prime \, \prime}(\theta) = \frac{6}{\langle \xi^4 \rangle} \left [ \, 1 - 2 \frac{\theta}{\langle \xi^4 \rangle} \frac{d \langle \xi^4 \rangle}{d \theta} + \frac{7}{12} \left (\frac{ \theta }{ \langle \xi^4 \rangle } \right )^2 \langle \xi^6 \rangle \, \right ] ~~, \ee \noi which is non-zero when the first derivative ${\bar x}^{\prime}(\theta)$ is zero. The point $a(0)=1/2 + 1/(2 \sigma_{\theta}^2)$ on the critical thermal-maser line therefore corresponds to a third-order transition. From \eq{dobbel_x} it follows, in addition, that we can have at most a third-order phase transition when fluctuations in $\theta$ are taken into account. \begin{figure}\label{fase_diff_sig_FIG} \end{figure} If the probability $P_{\theta}(\xi)$ is sufficiently peaked such that the its $\xi$-variation is fast in comparison to $\xi$-variation of $\sin^2 \left ( \xi \Delta \right )$, i.e. if $\sigma_{\theta} |\Delta| \ll \pi/2$, then \eq{I_1_def_gen} is reduced to $I_1(\theta) = \sin^2 \left ( \theta \Delta \, \right )/\Delta^2 + \sigma_{\theta}^2 \, \cos \left ( 2 \theta \Delta \, \right )$. The first thermal-maser critical line is then given by \be \label{a_kons} a(\theta) = \frac{1}{2} + \frac{1}{2} \frac{\Delta^2}{\sin^2(\theta \Delta) + \sigma_{\theta}^2 \Delta^2 \cos( 2 \theta \Delta )} ~~, \ee \noi which is consistent with the result of Ref. \cite{Rekdal99} for $\sigma_{\theta}^2=0$. If, on the other hand, the quantity $P_{\theta}(\xi)$ has a $\xi$-variation which is slow in comparison to the $\xi$-variation of $\sin^2 \left ( \xi \Delta \right )$ i.e. if $\sigma_{\theta} |\Delta| \gg \pi/2$, then \eq{I_1_def_gen} reduces to $I_1(\theta) = 1/2\Delta^2$. \eq{a_t_gen} is then reduced to \be a(\theta) = \frac{1}{2} + \Delta^2 ~~. \ee In order to get explicit analytic results we now choose the following gamma probability distribution for the pump parameter \be P_{\theta}(\xi) = \frac{\beta}{\Gamma(\alpha + 1)} \, \xi^{\alpha} \, e^{-\beta \xi} ~~, \ee \noi where $\beta = \theta/\sigma_{\theta}^2$ and $\alpha = \theta^2/\sigma_{\theta}^2 - 1$, so that $\langle \xi \rangle = \theta$ and $\langle (\xi - \theta)^2 \rangle = \sigma_{\theta}^2$. Other choices are possible, but are not, as long as the distribution is sufficiently peaked, expected to change the overall qualitative picture. 
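A minimal numerical sketch (assuming Python with \texttt{numpy} and \texttt{scipy}, and purely illustrative parameter values) can be used to check this averaging: it estimates $\langle q(x) \rangle_{\theta}$ by direct integration over the gamma distribution and compares it with $x \, I_1(\theta,x)$, where $I_1(\theta,x)$ is the closed form given below.
\begin{verbatim}
import numpy as np
from scipy import integrate
from scipy.stats import gamma

# purely illustrative (hypothetical) parameter values
theta, sigma_theta, Delta, x = 10.0, 2.0, 0.0, 0.3

# gamma distribution for the pump parameter, mean theta, variance sigma_theta^2
k = theta**2 / sigma_theta**2            # shape parameter alpha + 1
beta = theta / sigma_theta**2            # rate parameter
P = gamma(a=k, scale=1.0 / beta).pdf

def q(xi):
    # q(x) evaluated at pump parameter xi
    return x / (x + Delta**2) * np.sin(xi * np.sqrt(x + Delta**2))**2

# direct numerical average <q(x)>_theta over the pump-parameter distribution
avg_q, _ = integrate.quad(lambda xi: P(xi) * q(xi),
                          0.0, theta + 12.0 * sigma_theta)

# closed form: <q(x)>_theta = x * I_1(theta, x), with I_1 as given below
r = 2.0 * (x + Delta**2) * sigma_theta**2 / (theta**2 / (2.0 * sigma_theta**2))
I1 = 0.5 / (x + Delta**2) * (
    1.0 - (1.0 + r)**(-theta**2 / (2.0 * sigma_theta**2))
    * np.cos(theta**2 / sigma_theta**2 * np.arctan(np.sqrt(r))))

print(avg_q, x * I1)   # the two numbers should agree
\end{verbatim}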
The integral $I_1(\theta,x)$ is then given by \bea \label{I_1_expl} I_1(\theta,x) &=& \frac{1}{2} \, \frac{1}{x+\Delta^2} \nonumber \\ &\times& \left [ ~ 1 - \bigg( 1 + \frac{2(x+\Delta^2) \sigma_{\theta}^2}{\theta^2/2 \sigma_{\theta}^2 } \bigg) ^{- \frac{\theta^2}{2 \sigma_{\theta}^2} } \cos \left( \, \frac{\theta^2}{\sigma_{\theta}^2} \arctan \left( \sqrt { \frac{2(x+\Delta^2) \sigma_{\theta}^2}{\theta^2/2 \sigma_{\theta}^2 } } \right ) \, \right ) ~ \right ] . ~~~~~~ \eea \noi This averaged form of $q(x)$ depends on the two independent variables $\theta^2/(2\sigma_{\theta}^2)$ and $2( x+\Delta^2 ) \, \sigma_{\theta}^2$ only. In particular, if \be \label{c_theta} \frac{\theta^2}{2 \sigma_{\theta}^2} \gg \sigma_{\theta}^2 ~~, \ee \noi then \eq{I_1_expl} reduces to \be \label{I_1_red} I_1(\theta,x) = \frac{1}{2} \, \frac{1}{x+\Delta^2} \; \bigg [ ~ 1 - e^{-2(x+\Delta^2) \sigma_{\theta}^2 } \, \cos\left ( 2 \theta \sqrt{x+\Delta^2} \, \right ) ~ \bigg ] ~~. \ee \noi This explicit form of $I_1(\theta,x)$ can be achieved for a broad class of probability distributions, e.g. a Gaussian distribution. If, in addition, \be \label{bet_stor_theta} \sigma_{\theta}^2 \gg \frac{1}{2a-1} ~~, \ee \noi then any solution to the corresponding saddle-point equation $1=(2a-1) I_1(\theta, x(\theta) )$ is exponentially close to \be \label{num_5_x} {\bar x} = a-1/2 - \Delta^2 ~~. \ee \noi If, on the other hand, \be \label{bet_liten} \sigma_{\theta}^2 \gg \theta \gtrsim \sigma_{\theta} \gtrsim 1 ~~, \ee \begin{figure}\label{p_pl_sig25_THETA_FIG} \end{figure} \begin{figure}\label{p_pl_sig25_SmL_FIG} \end{figure} \noi then the corresponding saddle-point equation has one non-trivial solution only, i.e. \eq{num_5_x}. Under the conditions given by Eqs. (\ref{c_theta}) and (\ref{bet_stor_theta}) or \eq{bet_liten} the maser-maser phase transitions will be less significant. The phase diagram then essentially consists of critical thermal-maser line(s) only (see e.g. \fig{fase_diff_sig_FIG}). With regard to the observable ${\cal P}(+)$, the requirement $\theta \gtrsim \theta_{\nu=1} = 2 \pi N \sqrt{{\bar x}+\Delta^2}$ is compatible with \eq{c_theta} provided that $N$ is sufficiently large, i.e. \be \label{verylargen} N \gg \frac{\sqrt{2}}{2 \pi} \, \frac{\sigma_{\theta}^2}{\sqrt{a-1/2}} ~~, \ee \noi for the saddle-point solution \eq{num_5_x}. The equilibrium probability distribution is in this case given by \eq{p_j_gauss} with the variance $\sigma_x^2 = [ \, a + n_b(2a-1) \, ] /2N$ and $x_j={\bar x}$ as in \eq{num_5_x}. The equilibrium distribution ${\bar p}(x)$ is peaked around ${\bar x}$ provided ${\bar x} \gg \sigma_x$, i.e. \be \label{N_sig_bet_atter} N \gg \frac{1}{2} \; \frac{a + n_b(2a-1)}{[ \, a-1/2 - \Delta^2 \, ]^2} ~~. \ee \noi From \eq{overl_theta_nu} we observe that the $\nu$:th revival in ${\cal P}(+)$ occurs in the region where $\theta$ is close to \be \theta_{\nu} = 2 \pi \nu N \sqrt{ a-1/2 } ~~. \ee \noi According to \eq{ikke_overlapp_bet_NY}, two consecutive revivals in ${\cal P}(+)$ are separated provided \be \label{ikke_overlapp_bet_NY_gauss} \nu < \sqrt{2 N} \; \frac{2a-1}{\sqrt{a+n_b(2a-1)}} - \frac{1}{2} ~~, \ee \noi which gives a bound on how many revivals that can be resolved in the excitation probability ${\cal P}(+)$. In Fig. \ref{p_pl_sig25_THETA_FIG} we consider values of relevant physical parameters such that Eqs. (\ref{bet_stor_theta}), (\ref{verylargen}) and (\ref{N_sig_bet_atter}) are satisfied. We then observe that ${\cal P}(+)$ and ${\cal P}(+,+)$ exhibit well pronounced revivals. 
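To illustrate the bound \eq{ikke_overlapp_bet_NY_gauss} with hypothetical numbers (not those used in the figures), take $a=1$, $n_b=0.15$ and $N=50$; then \be \nu < \sqrt{2N} \, \frac{2a-1}{\sqrt{a+n_b(2a-1)}} - \frac{1}{2} = \frac{10}{\sqrt{1.15}} - \frac{1}{2} \approx 8.8 ~~, \ee \noi i.e. at most eight revivals could then be resolved within one standard deviation.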
For completeness we show in Fig. \ref{p_pl_sig25_SmL_FIG}, for the same set of values of the physical parameters as in Fig. \ref{p_pl_sig25_THETA_FIG}, that the approximative but analytical expression for ${\cal P}(+)$ as given by \eq{p_P_Poisson} is in very good agreement with the exact result for ${\cal P}(+)$. Within one standard deviation we expect, according to \eq{ikke_overlapp_bet_NY_gauss}, that at most seven revivals in ${\cal P}(+)$ are well separated in this case. Numerically we find that actually only the first two revivals are well separated. Let us now consider the case $\sigma_{\theta}^2 \ll 1$. If, in addition, \eq{c_theta} is satisfied, the quantity $\langle q(x) \rangle_{\theta}$ is reduced to \eq{q_n}. If on the other hand \eq{c_theta} is not satisfied, i.e. $\theta \ll \sigma_{\theta}^2$, then the saddle-point equation \eq{sadel_lign} has only the trivial solution for any probability distribution $P_{\theta}(\xi)$ as it should. The fluctuations in the atom cavity time have then no dramatic effect on the order parameter ${\bar x}$ (see e.g. \fig{x_FIG}), the excitation probabilities ${\cal P}(+)$ and ${\cal P}(+,+)$ or the phase diagram. \bc{ \subsection{Detuning Fluctuations} \label{det_SEC} }\ec In this Section we consider the situation when there is a spread in the detuning parameter $\Delta$. Suppose that the spread in the detuning $\Delta$ is expressed in terms of a stochastic variable $\xi$ as described by the probability distribution $P_{\Delta}(\xi)$ such that $\langle \xi \rangle = \Delta$ and $\langle ( \xi - \Delta )^2 \rangle = \sigma_{\Delta}^2$. The averaged value of $q(x)$ with respect to $P_{\Delta}(\xi)$ is then \be \label{q_D_def} \langle q(x) \rangle_{\Delta} = \int_{-\infty}^{\infty} d \xi \; P_{\Delta}(\xi) \, \frac{x}{x+\xi^2} \sin^2 \left ( \theta \sqrt{x+\xi^2} \right ) ~~. \ee \noi Non-trivial saddle-points of $V_0(x)$ can then be found by solving the equation \be 1 = (2a-1) \, J_1( \theta,x(\theta) ) ~~, \ee \noi where \be \label{J_1_def} J_1(\theta,x) = \theta^2 \; \int_{-\infty}^{\infty} d \xi \; P_{\Delta}(\xi) \, \frac{ \sin^2 ( \, \theta \sqrt{x+\xi^2} \; ) } {( \, \theta \sqrt{x+\xi^2} \; )^2 } \leq \theta^2 ~~. \ee For small values of $x$ we follow the argument of Section \ref{pump_SEC} concerning the critical thermal-maser phase transition line. The critical transition line is then \be \label{a_t_gen_II} a(\theta) = \frac{1}{2} + \frac{1}{2} \frac{1}{J_1(\theta)} \leq 1 ~~, \ee \noi where $J_1(\theta,0) \equiv J_1(\theta)$. \eq{a_t_gen_II} coincides with the radius of convergence of the thermal probability distribution \eq{p_n_geo}. In general we must use a combination of analytical and numerical methods, as in the case of $\sigma_{\theta}=0$ \cite{Rekdal99}, in order to get a detailed picture of the phase diagram. If the probability $P_{\Delta}(\xi)$ is sufficiently peaked such that the its $\xi$-variation is fast in comparison to $\xi$-variation of $\sin^2 \left ( \theta \xi \right )$, i.e. if $\sigma_{\Delta} \theta \ll \pi/2$, then \eq{J_1_def} is reduced to $J_1(\theta) = [ \, 3 \, \sin^2(\theta \Delta)/\Delta^2 + \theta^2 \, \cos( 2 \theta \Delta ) - 2\theta \sin( 2 \theta \Delta )/\Delta \, ]/\Delta^2$. 
The first thermal-maser critical line is then given by \be \label{a_kons_Delta} a(\theta) = \frac{1}{2} + \frac{1}{2} \frac{\Delta^2}{ \sin^2( \theta \Delta ) + \sigma_{\Delta}^2 \, g(\theta) } ~~, \ee \noi where $g(\theta) \equiv 3 \, \sin^2(\theta \Delta)/\Delta^2 + \theta^2 \cos( 2 \theta \Delta ) - 2\theta \, \sin( 2 \theta \Delta )/\Delta$, which is consistent with \cite{Rekdal99} for $\sigma_{\Delta}^2=0$. If, on the other hand, the quantity $P_{\Delta}(\xi)$ has a $\xi$-variation which is slow in comparison to the $\xi$-variation of $\sin^2 ( \theta \xi )$, i.e. if $\sigma_{\Delta} \theta \gg \pi/2$, then $J_1(\theta)$ reduces to \be \label{J_1_approx_stor} J_1(\theta) = \frac{1}{2 \Delta^2} ~~. \ee \noi \eq{a_kons_Delta} is then reduced to \be a(\theta) = \frac{1}{2} + \Delta^2 ~~. \ee In order to get explicit analytic results we now choose the following Gaussian probability distribution for the detuning parameter \be P_{\Delta}(\xi) = \frac{1}{\sqrt{2\pi\sigma_{\Delta}^2}} \, \exp \left (- \frac{(\xi-\Delta)^2}{2 \sigma^2_{\Delta}} \right ) ~~, \ee \noi such that $\langle \xi \rangle = \Delta$ and $\langle ( \xi - \Delta )^2 \rangle = \sigma_{\Delta}^2$. Other choices are possible but, as long as the distribution is sufficiently peaked, they are not expected to change the overall qualitative picture. The averaged value of $q(x)$ with respect to $P_{\Delta}(\xi)$ is given by \be \label{q_hele} \langle q(x) \rangle_{\Delta} = \langle q(x) \rangle_0 + \langle q(x) \rangle_{osc} ~~, \ee \noi where \be \label{q_0} \langle q(x) \rangle_0 \equiv \frac{1}{2} \frac{x}{\sqrt{2 \pi \sigma_{\Delta}^2}} \, \int_{-\infty}^{\infty} d \xi ~ \frac{ e^{- \frac{(\xi-\Delta)^2}{2 \sigma^2_{\Delta}} } } {x+\xi^2} ~~, \ee \noi and \be \label{q_osc} \langle q(x) \rangle_{osc} \equiv - \frac{1}{2} \frac{x}{\sqrt{2 \pi \sigma_{\Delta}^2}} \, \int_{-\infty}^{\infty} d \xi ~ \frac{ e^{- \frac{(\xi-\Delta)^2}{2 \sigma^2_{\Delta}} } }{x+\xi^2} \, \cos \left ( 2 \theta \sqrt{x+\xi^2} \, \right ) ~~. \ee \noi We observe that \be \label{int_omskr} \langle q(x) \rangle_0 = \sqrt{\frac{x}{2 \sigma_{\Delta}^2}} ~ e^{-\Delta^2/2 \sigma_{\Delta}^2 + x/2 \sigma_{\Delta}^2} \int_{\sqrt{x/2 \sigma_{\Delta}^2}}^{\infty} d \eta ~ \exp{ \left [ \, -\eta^2 + \frac{1}{\eta^2} \frac{\Delta^2}{2\sigma_{\Delta}^2} \frac{x}{2 \sigma_{\Delta}^2} \, \right ] } ~~. \ee \noi \eq{int_omskr} can be used to find an expansion in the parameter $\Delta^2/2 \sigma_{\Delta}^2$. For our purposes we notice that if $\sigma_{\Delta}^2$ is sufficiently large, i.e. \be \label{stor_sigma} \sigma_{\Delta}^2 \gg 1 ~~, \ee \noi then \eq{int_omskr} is reduced to $\langle q(x) \rangle_0 \lesssim \frac{1}{2}\sqrt{\pi \, x/2 \sigma_{\Delta}^2} + {\cal O}( \, x/2 \sigma_{\Delta}^2 \, )$. Any solution of the corresponding saddle-point equation is much less than unity since $|\langle q(x) \rangle_{osc}| \leq \langle q(x) \rangle_0$. The maser-maser phase transitions will therefore be less significant. The phase diagram then essentially consists of critical thermal-maser line(s) only. For sufficiently large values of $\theta$, i.e. \be \label{stor_theta_her} \theta \gg \frac{\sqrt{2 \pi \sigma_{\Delta}^2}}{2a-1} ~~, \ee \noi the oscillating term can be estimated by $|\langle q(x) \rangle_{osc}| \approx \left [ \, \pi/8 \sigma_{\Delta}^2 \, \right ]^{1/4} \sqrt{(2a-1)/(8 \sigma_{\Delta}^2 \theta)}$ for the solution \be \label{x_j_spes_igjen} {\bar x} = (2a - 1)^2 \frac{\pi}{8 \sigma_{\Delta}^2} ~~, \ee \noi of the saddle-point equation.
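As an independent consistency check (our own illustration, not part of the original derivation; the parameter values below are assumptions), one may average $q(x)$ over the Gaussian $P_{\Delta}(\xi)$ by brute-force numerical quadrature and solve the saddle-point equation $1=(2a-1)\,\langle q(x)\rangle_{\Delta}/x$, which by \eq{J_1_def} is the same as $1=(2a-1)J_1(\theta,x)$. For $\sigma_{\Delta}^2\gg 1$ and $\theta$ satisfying \eq{stor_theta_her}, the numerical root lands close to \eq{x_j_spes_igjen}.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

# Illustrative parameter values (assumptions, not taken from the text)
a, Delta, sigma_D, theta = 0.9, 0.0, 2.0, 200.0

xi = np.linspace(Delta - 8*sigma_D, Delta + 8*sigma_D, 200001)
P = np.exp(-(xi - Delta)**2/(2*sigma_D**2))/np.sqrt(2*np.pi*sigma_D**2)

def q_avg(x):
    # <q(x)>_Delta of Eq. (q_D_def), computed with a dense trapezoidal rule
    return np.trapz(P*x/(x + xi**2)*np.sin(theta*np.sqrt(x + xi**2))**2, xi)

x_bar = (2*a - 1)**2*np.pi/(8*sigma_D**2)             # Eq. (x_j_spes_igjen)
x_num = brentq(lambda x: x - (2*a - 1)*q_avg(x), 1e-6, 1.0)
print(x_bar, x_num)
\end{verbatim}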
\eq{x_j_spes_igjen} is much less then unity due to the assumption \eq{stor_sigma}. The term $\langle q(x) \rangle_{osc}$ is negligible in comparison to $\langle q(x) \rangle_0$ when \eq{stor_theta_her} is satisfied. With regard to the observable ${\cal P}(+)$, the requirement $\theta \gtrsim \theta_{\nu=1} = 2 \pi N \sqrt{ {\bar x} + \Delta^2}$ is compatible with \eq{stor_theta_her} provided that $N$ is sufficiently large, i.e. \be \label{Nverylarge} N \gg \frac{4}{\pi^3} \, \frac{2 \sigma_{\Delta}^2}{(2a-1)^2} ~~, \ee \noi for the solution \eq{x_j_spes_igjen} of the saddle-point equation. The equilibrium probability distribution is given by \eq{p_j_gauss} with $x_j={\bar x}$ as in \eq{x_j_spes_igjen} and $\sigma_x^2 = (2a-1) [ \, a + n_b(2a-1) \, ] \pi/(4 N \sigma_{\Delta}^2)$. The probability distribution ${\bar p}(x)$ is therefore peaked around ${\bar x}$ provided that ${\bar x} \gg \sigma_x$, i.e. \be \label{N_sig_bet} N \gg \frac{16 [ \, a + n_b(2a-1) \, ]}{(2a-1)^3 \pi} \; \sigma_{\Delta}^2 ~~. \ee \noi The $\nu$:th revival of ${\cal P}(+)$ occurs in the region where $\theta$ is close to \be \theta_{\nu} = (2a-1) \pi \, \sqrt{ \frac{\pi}{2}} \, \frac{\nu N}{\sigma_{\Delta}} ~~, \ee \noi according to \eq{overl_theta_nu}. \eq{ikke_overlapp_bet_NY} implies that two consecutive revivals are separated provided that \be \label{nulimit} \nu \leq \sqrt{ \frac{N}{\sigma_{\Delta}^2} \; \frac{(2a-1)^3 \pi}{16 \, [a + n_b(2a-1)]} } - \frac{1}{2} ~~, \ee \noi which gives a bound on how many revivals that can be resolved in the excitation probability ${\cal P}(+)$. Fluctuations in the atom cavity transit time with a large $\sigma_{\theta}^2$ give, however, an order parameter of the order one. This difference between $\theta$- and $\Delta$-fluctuations is illustrated in \fig{x_FIG}. In \fig{p_pl_sig25_DELTA_FIG} we consider physical parameters such that Eqs. (\ref{stor_sigma}), (\ref{Nverylarge}) and (\ref{N_sig_bet}) are satisfied. We observe that ${\cal P}(+)$ and ${\cal P}(+,+)$ now exhibit well pronounced revivals. With the same set of physical parameters one can also verify that the approximative but analytical expression for ${\cal P}(+)$ as obtained by a Poisson resummation technique, i.e. \eq{p_P_Poisson}, is in good agreement with the exact expression for ${\cal P}(+)$. We also notice that according to \eq{nulimit}, only the first revival in ${\cal P}(+)$ is clearly resolved in this case, which agrees well with our numerical results. \begin{figure}\label{p_pl_sig25_DELTA_FIG} \end{figure} \begin{figure}\label{p_pl_sig01_DELTA_FIG} \end{figure} Let us now, on the other hand, consider the case when $\sigma_{\Delta}^2$ is small but non-zero, i.e. \be \label{liten_sigma} 0 < \sigma_{\Delta}^2 \ll 1 ~~. \ee \noi The quantity $\langle q(x) \rangle_0$ can then be approximated according to $\langle q(x) \rangle_0 = x/[ \, 2(x + \Delta^2) \, ]$ and the term $\langle q(x) \rangle_{osc}$ has an upper bound $|\langle q(x) \rangle_{osc}| \leq 1/[ \, 1 + \epsilon^2 \, ]^{1/4}$, where $\epsilon \equiv 2 \sigma_{\Delta}^2 \theta x/( \, x+\Delta^2 \, )^{3/2}$. For sufficiently large values of $\theta$, i.e. \be \label{t_bet_zstor} \theta \gg \frac{1}{2 \sigma_{\Delta}^2} ~ \frac{(a-1/2)^{3/2}}{a-1/2-\Delta^2} ~~, \ee \noi the term $\langle q(x) \rangle_{osc}$ is negligible in comparison to $\langle q(x) \rangle_0$ for the solution \be \label{x_j_nuh} {\bar x} = a-1/2 - \Delta^2 ~~, \ee \noi of the saddle-point equation. The maser-maser phase transitions will therefore be less significant if Eqs. 
(\ref{liten_sigma}) and (\ref{t_bet_zstor}) are satisfied. The phase diagram then essentially consists of critical thermal-maser line(s) only. For sufficiently small values of $\theta$, i.e. \be \theta \ll \frac{1}{2\sigma_{\Delta}} ~~, \ee \noi \eq{q_hele} is reduced to \eq{q_n}. Hence there is no dramatic change in the order parameter or in the phase diagram. With regard to the observable ${\cal P}(+)$, the requirement $\theta \gtrsim \theta_{\nu=1} = 2 \pi N \sqrt{{\bar x}+\Delta^2}$ is compatible with \eq{t_bet_zstor} provided that $N$ is sufficiently large, i.e. \be \label{deltaNlarge} N \gg \frac{1}{4 \pi \sigma_{\Delta}^2} \; \frac{\sqrt{a-1/2}}{a-1/2-\Delta^2} ~~. \ee \noi The equilibrium probability distribution is then given by \eq{p_j_gauss} with $x_j={\bar x}$ as in \eq{x_j_nuh} and $\sigma_x^2 = [ \, a + n_b(2a-1) \, ]/(2N)$ according to \eq{sigma_x}. The distribution ${\bar p}(x)$ is peaked around ${\bar x}$ provided that ${\bar x} \gg \sigma_x$, i.e. \be \label{N_sig_bet_igjen} N \gg \frac{1}{2} \; \frac{a + n_b(2a-1)}{[ \, a-1/2-\Delta^2 \, ]^2} ~~. \ee \noi The $\nu$:th revival of ${\cal P}(+)$ occurs in the region where $\theta$ is close to \be \theta_{\nu} = 2 \pi \nu N \, \sqrt{a-1/2 } ~~, \ee \noi according to \eq{overl_theta_nu}. Furthermore, two consecutive revivals are separated provided that \be \nu \leq \sqrt{ \frac{N}{2} \; \frac{(2a-1)^2}{a + n_b(2a-1)} } - \frac{1}{2} ~~, \ee \noi which gives a bound on how many revivals can be resolved in the excitation probability ${\cal P}(+)$. \begin{figure}\label{x_FIG} \end{figure} \begin{figure}\label{corr_FIG} \end{figure} In \fig{p_pl_sig01_DELTA_FIG} we consider physical parameters such that Eqs. (\ref{liten_sigma}), (\ref{deltaNlarge}) and (\ref{N_sig_bet_igjen}) are satisfied. In this case we also observe well pronounced revivals in ${\cal P}(+)$ and ${\cal P}(+,+)$. The width and location of the revival in ${\cal P}(+)$ agree well with the approximate expression \eq{p_P_Poisson}. \bc{ \section{The Correlation Length} \label{corr_KAP} }\ec Let us now consider long-time correlations in the large $N$ limit, as first introduced in Ref.~\cite{ElmforsLS95}. These correlations are most conveniently obtained by making use of the continuous time formulation of the micromaser system \cite{Lugiato87}. The lowest eigenvalue $\lambda_0=0$ of the matrix $L$, as defined in Section \ref{revivals_SEC}, then determines the stationary equilibrium solution vector $p={\bar p}$ as given by Eq.~(\ref{p_n_eksakt}). The next non-zero eigenvalue $\lambda$ of $L$, which we determine numerically, then determines the typical scale for the approach to the stationary situation. The joint probability for observing two atoms, with a time-delay $t$ between them, can now be used in order to define an atomic correlation function $\gamma_A(t)$ \cite{ElmforsLS95}. At large times $t\rightarrow \infty$, we define the atomic beam correlation length $\xi_A$ by \cite{ElmforsLS95} \be \gamma_A(t) \simeq e^{-t/\xi_A}~~, \ee \noi which is then determined by $\lambda$, i.e. $\gamma\xi_A=1/\lambda$. For photons, we define a similar correlation length $\xi_{C}$. It follows that the correlation lengths are identical, i.e. $\xi_A=\xi_C\equiv \xi$ \cite{ElmforsLS95}. The correlation length $\gamma \xi$ is shown in Fig.~\ref{corr_FIG} for different values of $\sigma_{\theta}^2$ and exhibits large peaks for the pump parameters $\theta^*_{k k+1}$, $\theta_0^*$ and/or $\theta^*_{tk}$ \cite{Rekdal99}. The correlation length grows at most as $\sqrt{N}$ at $\theta = \theta_0^*$.
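A minimal numerical sketch of this spectral prescription (our own illustration; the construction of the micromaser matrix $L$ itself, given in Section \ref{revivals_SEC}, is assumed and not reproduced here) is the following: given any truncated generator matrix $L$ with lowest eigenvalue $\lambda_0=0$, the correlation length follows from the next non-zero eigenvalue as $\gamma\xi=1/\lambda$.
\begin{verbatim}
import numpy as np

def correlation_length(L, tol=1e-12):
    # L: truncated micromaser generator matrix with lowest eigenvalue 0
    # (its construction is assumed and not reproduced here).
    ev = np.sort(np.abs(np.linalg.eigvals(L).real))
    lam = ev[ev > tol][0]     # smallest non-zero eigenvalue (in modulus)
    return 1.0/lam            # gamma * xi = 1 / lambda
\end{verbatim}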
At $\theta_{kk+1}^*$ and $\theta_{tk}^*$ the large $N$ dependence is, however, different. At these values of the pump parameter there is instead a competition between two neighbouring minima of $V_0(x)$. Using the technique of \cite{ElmforsLS95} it can be shown that the peak close to $\theta = \theta_{k k+1}^*$ behaves as \be \gamma \xi \simeq e^{ N \Delta V_0} ~~, \ee \noi where $\Delta V_0$ is the smallest potential barrier between the two competing minima of the effective potential $V_0(x)$. For increasing values of $\sigma_{\theta}^2$ or $\sigma_{\Delta}^2$ this potential barrier $\Delta V_0$ decreases. The corresponding correlation length will then also decrease (see e.g. \fig{corr_FIG}). A numerical study shows that adding fluctuations to the atom-photon frequency detuning results in the same qualitative effect: the peaks in the correlation length decrease due to smaller potential barriers $\Delta V_0$. \bc{ \section{Conclusion} \label{final_KAP} }\ec In conclusion, we have investigated various effects of including noise-producing mechanisms in a micromaser system, such as a velocity spread in the atomic beam or a spread in the atom-photon frequency detuning. For sufficiently large widths of the corresponding probability distributions, i.e. for sufficiently large $\sigma_{\theta}^2$ or a non-zero $\sigma_{\Delta}^2$, the excitation probabilities of atoms leaving the microcavity exhibit well-pronounced revivals. Noise therefore tends to increase the order in the system. Furthermore, the maser-maser phase transitions disappear for sufficiently large $\sigma_{\theta}^2$ or $\sigma_{\Delta}^2$, in which case the phase diagram in the $a$- and $\theta$-parameter space consists essentially of a single maser-thermal critical line. We have also shown that the correlation length drastically decreases when noise-producing mechanisms are included in the system. \begin{center} {\bf APPENDIX} \end{center} In this Appendix we derive an exact analytical expression for $V_0(x)$ in the large $\theta$ limit when the quantity $q_n$ has no spread in any of the physical parameters, i.e. when $q(x)$ is as given in \eq{q_n}. The potential $V_0(x)$ may then be re-written in the form \bea \label{mildert_denne} V_0(x) &=& - \int_0^x d \nu \bigg \{ ~ \ln \left [ a \frac{\nu}{\nu + \Delta^2} + n_b \nu \right ] + \ln \left [ 1 +\frac{a \sin^2( \theta \sqrt{\nu + \Delta^2})-a}{a + n_b( \nu + \Delta^2)} \right ] \\ \nonumber &-& \ln \left [ b \frac{\nu}{\nu + \Delta^2} + (1+n_b) \nu \right ] - \ln \left [ 1 +\frac{b \, \sin^2( \theta \sqrt{\nu + \Delta^2})-b}{b + (1+n_b)( \nu + \Delta^2)} \right ] \bigg \} ~~. \eea \noi By making use of the power series expansion of the logarithm we obtain \bea \label{V_expan} V_0(x) &=& - \int_0^x d \nu \, \bigg \{ ~ \ln \left [ \left( \; b \, \displaystyle{ \frac{\nu}{\nu + \Delta^2} } + (1+n_b) \nu \; \right ) /\left ( \; a \, \displaystyle{ \frac{\nu}{\nu + \Delta^2} } + n_b \nu \; \right ) \right ] \\ \nonumber &+& \sum_{n=1}^{\infty} \; \frac{1}{n} \bigg [ ~ \left (\frac{a}{a+n_b(\nu+\Delta^2)} \right )^n - \left (\frac{b}{b+(1+n_b)(\nu+\Delta^2)} \right )^n \; \bigg ] \cos^{2n} ( \theta \sqrt{\nu + \Delta^2} ) ~ \bigg \} ~~. \eea \noi In the large $\theta$ limit only the non-oscillating terms in \eq{V_expan} contribute, i.e.
\be \label{app_final} V_0(x) = \int_0^x d \nu \, \left \{ ~ \ln \left [ \frac{1-a}{a} \right ] + f(\, y(\nu) \, ) - f( \, w(\nu) \,) ~ \right \} ~~, \ee \noi where \be f(z) = \ln z + R(z) ~~, \ee \noi and $y(x) = a/( \, a+n_b(x+\Delta^2) \, )$ and $w(x) = b/( \, b +(1+n_b)(x+\Delta^2) \, )$. The function $R(z)$ is defined by \be \label{R_funksj} R(z) = \sum_{n=1}^{\infty} \frac{1}{n} \frac{1}{2^{2n}} \frac{(2n)!}{(n!)^2} \; z^n ~~, \ee \noi which is a generalized hypergeometric series \cite{Magnus53}. By differentiating \eq{app_final} we therefore, finally, obtain \eq{hurra_formel}. \begin{center} {\bf ACKNOWLEDGEMENT} \end{center} The authors wish to thank the members of NORDITA for the warm hospitality while the present work was completed. The research has been supported in part by the Research Council of Norway under contract no. 118948/410. \end{document}
\begin{document} \date{} \title{Natural decomposition of processes\\ and weak Dirichlet processes} \maketitle \centerline{\bf Fran\c cois Coquet$^*$, Adam Jakubowski$^{**,}$\footnote{Supported in part by Komitet Bada\'n Naukowych under Grant No 1 P03A 022 26 and completed while the author was visiting Universit\'e de Rennes I.}} \centerline{\bf Jean M\'emin$^{***}$ and Leszek S\l omi\'nski$^{**,}$\footnote{Supported in part by Komitet Bada\'n Naukowych under Grant No 1 P03A 022 26 and completed while the author was visiting Universit\'e de Rennes I.}} \centerline{\it $^*$LMAH, Universit\'e du Havre} \centerline{\it 25 rue Philippe Lebon, 76063 Le Havre Cedex, France} \centerline{\it $^{**}$Faculty of Mathematics and Computer Science,} \centerline{\it N. Copernicus University, ul. Chopina, 87-100 Toru\'n, Poland} \centerline{\it $^{***}$IRMAR, Universit\'e de Rennes 1} \centerline{\it Campus de Beaulieu, 35042 Rennes Cedex, France} \bigskip \begin{center}{\small{\bf Abstract}} \end{center} { \small A class of stochastic processes, called ``weak Dirichlet processes", is introduced and its properties are investigated in detail. This class is much larger than the class of Dirichlet processes. It is closed under $C^1$-transformations and under absolutely continuous change of measure. If a weak Dirichlet process has finite energy, as defined by Graversen and Rao, its Doob-Meyer type decomposition is unique. The developed methods have been applied to a study of generalized martingale convolutions.} {\it Mathematics Subject Classification:} 60G48, 60H05 {\it Keywords:} weak Dirichlet processes, Dirichlet processes, semimartingales, finite energy processes, quadratic variation, mutual quadratic covariation, Ito's formula, generalized martingale convolution. \section{Introduction} The quadratic variation of a stochastic process as well as the mutual covariation of two stochastic processes have been well known for a long time to be at the core of the theory of stochastic integration, if only because the quadratic variation appears explicitly in Ito's formula for semimartingales. And indeed, every attempt to generalize Ito's calculus to a wider class of integrators (for instance Dirichlet processes) or to functions less regular than ${\cal C}^2$ functions has to deal with quadratic variations or covariations of the processes that appear. In this respect, the most enlightening work is perhaps F\"ollmer's paper (\cite{F850}). On the other hand, it was proven by Graversen and Rao (\cite{GR}) that the existence of a Doob-Meyer type decomposition for a process $X$ is closely linked to the fact that $X$ has finite energy, which is a somewhat weaker assumption than the existence of a quadratic variation for $X$. The best-known class of processes with finite energy (beyond the class of semimartingales) is the class of Dirichlet processes. A larger class has recently been introduced by Errami and Russo (\cite{ER1}) under the name ``weak Dirichlet processes". The present paper explores some desirable properties of such processes. Although our definition of a quadratic variation is different from Errami and Russo's (it is in a way more classical), both coincide as far as semimartingales are concerned. The other noticeable difference is that throughout the paper we deal with non-continuous processes.
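As a purely illustrative aside (ours, not the authors'), the discretized sums along refining subdivisions that are at the heart of the paper are easy to experiment with numerically: for a simulated Brownian path, the sums $\sum_i(X_{t_{i+1}^n}-X_{t_i^n})^2$ computed along dyadic subdivisions of $[0,T]$ stabilize near $T$, the quadratic variation of the path, as the mesh is refined.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
T, N = 1.0, 2**16
dW = rng.normal(0.0, np.sqrt(T/N), size=N)
W = np.concatenate(([0.0], np.cumsum(dW)))   # one Brownian path on [0, T]

def S_n(path, k):
    # discretized quadratic variation along the dyadic subdivision
    # of [0, T] into 2**k intervals
    step = N // 2**k
    incr = path[::step][1:] - path[::step][:-1]
    return np.sum(incr**2)

for k in (4, 8, 12):
    print(k, S_n(W, k))   # approaches [W, W]_T = T as the mesh is refined
\end{verbatim}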
In Part 2, we give as explicit a link as possible between quadratic variation, energy, Dirichlet processes, weak Dirichlet processes, and ``natural" (that is, ``Doob-Meyer type") decomposition. Part 3 is devoted to proving that any ${\cal C}^1$ function of a weak Dirichlet process is again a weak Dirichlet process. We are able to give an explicit Ito-type formula for ${\cal C}^2$ transformations, but we could only find an explicit formula for the martingale part in the general case. Part 4, which is closest to Errami and Russo's work mentioned above, deals with processes $X$ that may be written $\displaystyle X_t=\int_0^tG(t,s)dL_s$ where $L$ is a quasileft continuous (but not necessarily continuous) square-integrable martingale and $G$ is a deterministic function. We give two sets of hypotheses under which $X$ is a weak Dirichlet process, and also give its natural decomposition. This section is illustrated through 3 examples, the last one additionally giving a formula of Fubini type. At last, we include as an appendix some counter-examples related to quadratic variation or to the regularity of paths of processes. Although such examples may be well known, we could not find any in the literature, and we hope that they can shed light on some of the technical problems we are confronted with here and there in the paper. \section{Basic notations and results about processes with finite energy and weak Dirichlet processes} In what follows, we are given a probability space $(\Omega ,{\cal G}, {\bf P} )$. We also fix a positive real number $T$. Unless otherwise stated, every process or filtration will be indexed by $t\in [0, T]$. A filtration $({\cal F}_t)_{t\leq T}$ is denoted by ${\cal F} $. All filtrations are assumed to be right-continuous and defined on $(\Omega ,{\cal G}, {\bf P} )$ with ${\cal F}_T \subset {\cal G}$. We are also given a refining sequence $(D_n)$ of subdivisions of $[0,T]$ whose mesh goes to 0 when $n\to\infty$. For every $n$, $D_n=\{0=t_0^n,t_1^n,\dots, t_{N(n)}^n=T\}$. We work with processes with a.s. right-continuous trajectories with left limits (such a process is called c\`adl\`ag), null at 0 and, unless otherwise stated, admitting a finite energy in the sense defined below, following Graversen and Rao (\cite{GR}): \begin{definition} We say that $X$ is a process {\it of finite energy} if \begin{equation} \label{EF} \sup_nE\left[\sum_{t_i^n\in D_n}(X_{t_i^n}-X_{t_{i-1}^n})^2\right]<+\infty. \end{equation} This ``sup" will be denoted ${\cal E}n(X)$. \end{definition} Of course, if $X$ has finite energy, ${\displaystyle |X_t|^2}$ is integrable for every $t\leq T$ and also $\sum_{s\leq T}\Delta X_s^2$ is integrable. We recall Graversen and Rao's main result in \cite{GR}: \begin{theorem}\label{GRth} If $X$ is a process with finite energy, then we can write $X$ as a sum $X=M+A$, where $M$ is a square-integrable martingale and $A$ is a predictable process such that there exists a subsequence $(D_{n_j})$ of $(D_n)$ satisfying \begin{equation} \label{M0} E\left[\sum_{t_i^{n_j}\in D_{n_j},t_i^{n_j}\leq t}(A_{t_i^{n_j}}-A_{t_{i-1}^{n_j}})(N_{t_i^{n_j}}-N_{t_{i-1}^{n_j}})\right] \longrightarrow 0 \end{equation} as $j\to \infty$ for every square integrable martingale $N$. If, moreover, $X=M'+A'$ is any other such decomposition, the process $A-A'$ is a continuous martingale.
At last, if we write \begin{equation}\label{Mn} M^n_t= \sum_{t^n_i\in D_n, t^n_{i}\leq t}\left[X_{t^n_{i}}-E[X_{t^n_{i}}/{\cal F}_{t^n_{i-1}}] \right], \end{equation} then for all $t\in\bigcup_n D_n$, $M_t$ is the weak limit in $\sigma({\bf L}^2, {\bf L}^2)$ of the sequence $(M^{n_j}_t)$. \end{theorem} \bigskip Such a decomposition of $X$ is a Doob-Meyer type decomposition: the predictable process $A$ with the convergence property (\ref{M0}) is a ``natural" process. In this section we will discuss the case when the decomposition $X=M+A$ in Theorem \ref{GRth} is unique. Such a Doob-Meyer decomposition will be called the natural decomposition of $X$. We will use the notion of weak Dirichlet process introduced by Errami and Russo (\cite{ER1}) in a slightly different context. \begin{definition}\label{wdp} We say that $X$ is a weak Dirichlet process if it admits a decomposition $X=M+A$, where $M$ is a local martingale and $A$ is a predictable process such that $[A,N]=0$ for every continuous local martingale $N$. \end{definition} In the above definition and in the sequel we use the notions of mutual covariation and quadratic variation in the following sense, taken from \cite{CMS}. \begin{definition}{\bf (i)} Processes $X$ and $Y$ admit a quadratic (mutual) covariation along $(D_n)$ if there exists a c\`adl\`ag process denoted $[X,Y]$ such that for every $t\leq T$ $$[X,Y]_t=[X,Y]_t^c+\sum_{s\leq t}\Delta X_s\Delta Y_s$$ and \begin{equation} \label{CVQ} S^n(X,Y)_t:=\sum_{t_i^n\in D_n,t_i^n\leq t}(X_{t_{i+1}^n}-X_{t_{i}^n})(Y_{t_{i+1}^n}-Y_{t_{i}^n} )\toP [X,Y]_t\quad\mbox{as }n\to\infty. \end{equation} {\bf (ii)} The process $X$ admits a quadratic variation along $(D_n)$ if there exists a c\`adl\`ag process denoted $[X,X]$ such that for every $t\leq T$ $$[X,X]_t=[X,X]_t^c+ \sum_{s\leq t}\Delta X^2_s$$ and \begin{equation} \label{VQ} S^n(X,X)_t:=\sum_{t_i^n\in D_n,t_i^n\leq t}(X_{t_{i+1}^n}-X_{t_{i}^n})^2\toP [X,X]_t\quad\mbox{as }n\to\infty. \end{equation} \end{definition} \begin{remark} {\rm {\bf (i)} The decomposition $X=M+A$ of a weak Dirichlet process is unique. To see this, suppose that we have decompositions $X=M+A=M'+A'$ with $A$ and $A'$ predictable and verifying $[A,N]=[A',N]=0$ for every continuous local martingale $N$. Then $A-A'$ is a predictable local martingale, hence a continuous local martingale. Then $[A-A',A-A']_T=[A-A',A]_T-[A-A',A']_T=0$ and we deduce that $A=A'$. {\bf (ii)} A weak Dirichlet process $X$ need not admit a quadratic variation. We know only that for every continuous martingale $N$ there exists the covariation $[X,N]$. {\bf (iii)} Of course, in general, a decomposition $X=M+A$ with a martingale $M$ and a predictable process $A$ does not imply that $[A,N]=0$ for every continuous martingale $N$, when $[A,N]$ exists. For example, take $A$ as a continuous martingale and $N=A$. }\end{remark} The class of weak Dirichlet processes is much larger than the class of Dirichlet processes. We recall: \begin{definition} A Dirichlet process is the sum of a local martingale and a continuous process whose quadratic variation is identically zero. \end{definition} \begin{remark}{\rm Note that a Dirichlet process admits a quadratic variation, which is equal to the quadratic variation of its martingale part.
Our definition of a quadratic variation, which follows F\"ollmer's one in \cite{F850}, is weaker than the definition in \cite{F851}, and slightly different from Russo and Vallois' one in \cite{RV}. However, the three of them coincide as far as semimartingales are concerned, and a Dirichlet process according to the definition in \cite{F851} is also Dirichlet according to the two other ones.} \end{remark} The following notion of pre-quadratic variation is weaker than the quadratic variation one; however, the two notions coincide under stronger assumptions, as will be seen below. \begin{definition} A process $X$ (not necessarily c\`adl\`ag) admits a pre-quadratic variation along $(D_n)$ if there exists an increasing process denoted $S(X,X)$ such that for every $t\leq T$ \begin{equation} \label{PVQ} S^n(X,X)_t:=\sum_{t_i^n\in D_n,t_i^n\leq t}(X_{t_{i+1}^n}-X_{t_{i}^n})^2\toP S(X,X)_t\quad\mbox{as }n\to\infty. \end{equation} \end{definition} \begin{remark} {\rm We can find examples of continuous processes $X$ such that $S(X,X)$ is defined but not continuous (see the example in the Appendix), hence $X$ does not admit a quadratic variation. } \end{remark} For every $t\leq T$, denoting by $\pi_{t}$ any subdivision of $[0,t]$, we consider the sum $$S^{\pi_t}(X,X):=\sum_{{t_i\in \pi_t}, i>0}(X_{t_{i}}-X_{t_{i-1}})^2$$ \begin{proposition} Assume a c\`adl\`ag process $X$ admits a pre-quadratic variation $S$ with the following property $(S)$: $S(X,X)$ is right continuous and for every $t\leq T$, $S^{\pi_t}(X,X)\toP S(X,X)_t$ as the mesh of $\pi_t$ tends to $0$. Then $S(X,X)$ is the quadratic variation of $X$ along any sequence $(D_n)$ of subdivisions of $[0,T]$ whose mesh tends to $0$. \end{proposition} This result is proved in \cite{J}, Lemma (3.11). \begin{remark} {\rm {\bf (i)} The class of Dirichlet processes is larger than the space ${\bf H^2}$ of semimartingales. Every continuous function admitting a quadratic variation equal to zero is a deterministic Dirichlet process. {\bf(ii)} Every continuous function is a deterministic weak Dirichlet process: indeed, let us consider a bounded continuous martingale $N$ null at $0$; we have \begin{eqnarray*} E(|\sum_{t_i^n\in D_n}(f(t_{i+1}^n)-f(t_i^n)) (N_{t_{i+1}^n}-N_{t_{i}^n})|^2) &\leq& \sup_{t_i^n}(f(t_{i+1}^n)-f(t_i^n))^2\,E(N_T^2) \end{eqnarray*} and, from the continuity of $f$, this last term tends to $0$ when $n\rightarrow \infty $. We give in Section 4 nondeterministic examples of weak Dirichlet processes which are not ordinary Dirichlet processes. } \end{remark} \begin{remark} {\rm The family of processes with finite energy is clearly stable under addition; however, we do not know if this stability holds for the family of processes admitting a quadratic variation. Of course this is true for the family of Dirichlet processes. } \end{remark} \begin{theorem} Assume $X$ is a process with finite energy. The following three conditions are equivalent: \begin{description} \item[{\bf(i)}] $X$ is a weak Dirichlet process, \item[{\bf(ii)}] for every continuous local martingale $N$, the quadratic covariation $[X,N]$ is well-defined, \item[{\bf(iii)}] for every locally square integrable martingale $N$, the quadratic covariation $[X,N]$ is well-defined. \end{description} In this case the decomposition $X=M+A$ is unique and it is the natural decomposition expressed in Theorem \ref{GRth}.
\varepsilonnd{theorem} {\it Proof}: $(i){\rm I\!R}ightarrow(iii)$ Let us write $X=M+A$ as in Definition \ref{wdp} and consider the decomposition $N=N^c+N^d$, where $N^c$ is the continuous and $N^d$ purely discontinuous part of $N$. By the definition of a weak Dirichlet process the covariation $[X,N^c]$ is well-defined. For proving the existence of $[X,N^d]$ we use the following lemma. \betaegin{lemma}\label{scs} Assume that $X$ has a finite energy, and that $N$ is a locally square integrable martingale which is the compensated sum of its jumps. Then $X$ and $N$ admit a covariation such that \betaegin{equation} [X,N]_t=\sigmaum_{s\leq t}\Delta X_s\Delta N_s. \varepsilonnd{equation} \varepsilonnd{lemma} {\it Proof of Lemma \ref{scs}}: By using a localizing sequence of stopping times, one can assume that $N$ is a square integrable martingale. One can find a sequence $(N^p)_p$ of martingales with finite variation and only a finite number of jumps, such that $N^p\to N$ in ${\betaf H^2}$. We have then, for fixed $p$, \betaegin{equation} \sigmaum_{t_i^n\in D_n,t_i^n\leq t}(X_{t_{i+1}^n}-X_{t_{i}^n})(N^p_{t_{i+1}^n}-N^p_{t_{i}^n} )\toP \sigmaum_{s\leq t}\Delta X_s\Delta N^p_s. \varepsilonnd{equation} as $n\to\infty$. On the other hand, \betaegin{eqnarray*} &&E\left|\sigmaum_{t_i^n\in D_n,t_i^n\leq t}(X_{t_{i+1}^n}-X_{t_{i}^n}) (N^p_{t_{i+1}^n}-N^p_{t_{i}^n})- \sigmaum_{t_i^n\in D_n,t_i^n\leq t}(X_{t_{i+1}^n}-X_{t_{i}^n})(N_{t_{i+1}^n}- N_{t_{i}^n})\right|\\ &&\qquad\leq E|\Bigl(\sigmaum_{t_i^n\in D_n,t_i^n\leq t} (X_{t_{i+1}^n}-X_{t_{i}^n})^2\Bigr)^{1/2}\\ &&\qquad\qquad\qquad\times\Bigl(\sigmaum_{t_i^n\in D_n,t_i^n\leq t}\betaigl ((N_{t_{i+1}^n}-N^p_{t_{i+1}^n}) -(N_{t_{i}^n}-N^p_{t_{i}^n})\betaigr)^2\Bigr)^{1/2}|\\ &&\qquad\leq ({\cal E}n(X))^{1/2}E\Bigl([N-N^p,N-N^p]_t\Bigr)^{1/2} \varepsilonnd{eqnarray*} which goes to 0 as $p\to\infty$ since $[N-N^p,N-N^p]_t\toP 0$. At last, \betaegin{eqnarray*} & &E\Bigl|\sigmaum_{s\leq t}\Delta X_s(\Delta N_s-\Delta N^p_s)\Bigr|\leq E\Bigl(\betaigl( \sigmaum_{s\leq t}\Delta X_s^2\betaigr)^{1/2}\betaigl(\sigmaum_{s\leq t}(\Delta N_s-\Delta N^p_s)^2\betaigr)^{1/2}\\ && \qquad\qquad\qquad\qquad \leq ({\cal E}n(X))^{1/2}E\left[[N-N^p,N-N^p]_t\right]^{1/2}\cr \varepsilonnd{eqnarray*} which goes to zero as $p$ goes to infinity, hence $\sigmaum_{s\leq t}\Delta X_s \Delta N^p_s$ converges in $L^1$ to $\sigmaum_{s\leq t}\Delta X_s\Delta N_s$. These three convergences give the lemma. \vrule height 4pt depth 0pt width 4pt\vrule height 4pt depth 0pt width 4pt \betaigskip $(iii){\rm I\!R}ightarrow(ii)$ is obvious. $(ii){\rm I\!R}ightarrow(i)$ Let $X=M+A$ be a decomposition from Theorem \ref{GRth} and let $N$ be a continuous local martingale. Define $T_p=\inf\{ t:\vert N_t\vert \geq p\} $ then $(T_p)$ is a localizing sequence of stopping times. We will prove that for every $p$, $[A,N^{T_p}]=0$, which implies that also $[A,N]=0$. By hypothesis we have the convergence $S^n(X,N^{T_p})_t\toP [X,N^{T_p}]_t$. Hence we deduce that also $S^n(A,N^{T_p})_t\toP [A,N^{T_p}]_t$. We will show that in fact \betaegin{equation}\label{eqnew} S^n(A,N^{T_p})_{T}\rightarrow [A,N^{T_p}]_{T}\quad in \,\,L^1.\varepsilonnd{equation} To see this, it is sufficient to check uniform integrability of the sequence $\{S^n(A,N^{T_p})_{T}\}$. 
Writing $\vert S^n(X,N^{T_p})_{T}\vert\leq S^n(X,X)^{1/2}_{T} S^n(N^{T_p},N^{T_p})^{1/2} _{T}$, and using H\"older inequality, we get: \[E\vert S^n(X,N^{T_p})_{T}\vert ^{4/3} \leq E[S^n(X,X)_{T}]E[S^n(N^{T_p},N^{T_p})_{T}^2].\] Similarly we prove that also $E\vert S^n(M,N^{T_p})_{T}\vert^{4/3}<+\infty$. As a consequence, we deduce uniform integrability of $\{(S^n(A,N^{T_p})_{T}\}$ and (\ref{eqnew}) holds true. Therefore, in particular \[E[S^n(A,N^{T_p})_{T}]\rightarrow E[A,N^{T_p}]_{T}\] and due to (\ref{M0} ) \betaegin{equation}\label{eqnew1} E[A,N^{T_p}]_{T}=0.\varepsilonnd{equation} Note that the process $[A,N^{T_p}]$ has a finite variation ; moreover, since $N$ is continuous, $[A,N^{T_p}]$ is also a continuous process. Therefore, to get $[A,N^{T_p}]=0$ it is sufficient to prove that $[A,N^{T_p}]$ is a local martingale Let us consider a bounded stopping time $\tau\leq T $, the same arguments as above give the convergence: $E[S^n(A,N^{T_p\widetilde} \def\Mh{\widehat Medge \tau })_{T}] \rightarrow E[A,N^{T_p\widetilde} \def\Mh{\widehat Medge \tau }]_{T}$ and (\ref{M0} ) gives \[E[A,N^{T_p }]_{\tau}=E[A,N^{T_p\widetilde} \def\Mh{\widehat Medge \tau }]_{T }=0.\] Hence, it follows easily that the stopped process $[A,N]^{T_p}$ is a martingale and $[A,N^{T_p }]=0$. Since $P(T_p=T)\uparrow 1$, the proof of the last implication is completed. Finally, note that the uniqueness of the decomposition in Theorem \ref{GRth} is an easy consequence of the fact that $X$ is a weak Dirichlet process. {\vrule height 4pt depth 0pt width 4pt} \\[2mm] \vrule height 4pt depth 0pt width 4pt We get immediately the following \betaegin{corollary} Let us consider a weak Dirichlet process $X$ {\betaf (i)} If $Q$ is a probability measure absolutely continuous with respect to ${\betaf P} $, then $X$ is a $Q$ weak Dirichlet process. {\betaf (ii)} For an $a>0$ we define $\hat X= \sigmaum_{s\leq .}\Delta X_s1_{\Delta |X_s| >a} $, then $X-\hat X$ is a weak Dirichlet process. \varepsilonnd{corollary} Now, we will consider processes with finite energy $X$ admitting additionally a quadratic variation $[X,X]$. Then of course $E[X,X]_T<\infty$. \betaegin{theorem}\label{thm2} Assume $X$ is a weak Dirichlet process with finite energy, admitting a quadratic variation process. \betaegin{description} \item[{\betaf(i)}] In the natural decomposition $X=M+A$, $M$ is a square integrable martingale and $A$ has an integrable quadratic variation. \item[{\betaf(ii)}] The natural decomposition is minimal in the following sense. If $X=M'+A'$ is another decomposition with a local martingale $M'$ and a predictable process $A'$ then $[A'A']$ is well defined and: $$[A',A']=[M-M',M-M']+[A,A]. $$ \varepsilonnd{description} \varepsilonnd{theorem} {\it Proof}: $(i)$ To begin with, we notice that $$E[\sigmaum_{s\leq T}\Delta A_s^2]<\infty; $$ actually, for every predictable stopping time $S$, $\Delta A_S=E[\Delta X_S\vert {\cal F}_S-]$, hence $\deltaisplaystyle E[\sigmaum_{s\leq T}\Delta A_s^2]\leq E[\sigmaum_{s\leq T}\Delta X_s^2]<\infty.$ It follows that $\deltaisplaystyle E[\sigmaum_{s\leq T}\Delta M_s^2]<\infty $ and $M$ is a locally square integrable martingale. Let us consider the decomposition $M=M^c+M^d$ where $M^c$ is the continuous part of $M$ and $M^d$ its purely discontinuous part. Writing $[M^d,A]=[M^d,X]-[M^d,M^d]$, we get the existence of $[M^d,A]$. Using the property of decomposition $X=M+A$, we have $[M,A]=[M^d,A]$. 
From Lemma \ref{scs} we deduce: $$[M,A]=\sigmaum_{s\leq \cdot}\Delta M_s\Delta X_s-\sigmaum_{s\leq \cdot} \Delta M_s^2=\sigmaum_{s\leq \cdot}\Delta M_s \Delta A_s.$$ Now, by the definition of quadratic variation of $X$ and $M$ one gets the existence of $[A,A]$: $$[A,A]=[X,X]-2[M,A]-[M,M]. $$ Finally, $$E([M,M]_T+[A,A]_T)\leq E[X,X]_T+2E[\sigmaum_{s\leq T}\Delta M_s^2]^{1/2} E[\sigmaum_{s\leq T}\Delta A_s^2]^{1/2}<\infty .$$ (ii) Since $A'=M+A-M'$, by linearity $[A',A']$ is well defined and we can write: $$[A',A']=[M+A-M',M+A-M']=[M-M',M-M']+[A,A]-2[A,M-M']. $$ But, $M-M'$ is a continuous local martingale and as $A$ is taken from the natural decomposition of $X$, we get: $[A,M-M']=0$, hence the desired result. {\vrule height 4pt depth 0pt width 4pt} \betaigskip \sigmaection{Stability of weak Dirichlet processes under $C^1$ transformations} Assume $X$ is a process of finite energy. Let us denote by $\mu $ the jump measure of $X$. Then $\deltaisplaystyle\sigmaum_{s\leq .}\Delta X_s^2$ can be written $\deltaisplaystyle\int_0^.\int_{{\rm I\!R} -\{ 0\} }x^2\mu (ds,dx)$. Since $\deltaisplaystyle E\sigmaum_{s\leq T}\Delta X^2_s<\infty $, $X$ admits a L\'evy system $\nu $ which is the predictable compensator of $\mu $; then the predictable increasing process $\deltaisplaystyle\int_0^.\int_{{\rm I\!R} -\{ 0\} }x^2\nu (ds,dx)$ is well defined and $$E\sigmaum_{s\leq T}\Delta X^2_s=E[\int_0^T\int_{{\rm I\!R} -\{ 0\} }x^2\mu (ds,dx)]= E[\int_0^T\int_{{\rm I\!R} -\{ 0\} }x^2\nu (ds,dx)]<\infty.$$ We begin with $C^2$ stability: \betaegin{theorem}\label{C2} Let $X=M+A$ be a weak Dirichlet process of finite energy and $F$ a $C^2$-real valued function with bounded derivatives $f$ and $f'$. Then the process $(F(X_t)_{t\geq 0})$ is a weak Dirichlet process of finite energy and the decomposition $F(X)= Y+\Gamma $ holds with the martingale part \betaegin{eqnarray*}& &Y_t= F(0)+\int_0^tf(X_{s-})dM_s\\ &&\qquad+\int_0^t\int_{{\rm I\!R} }\Big(F(X_{s-}+x)-F(X_{s-})-xf(X_{s-})\Big ) (\mu -\nu )(ds, dx) \varepsilonnd{eqnarray*} and the predictable part \betaegin{eqnarray*} \lefteqn{\Gamma _t=\int_0^tf(X_{s-})dA_s-1/2\sigmaum_{s\leq t}f'(X_{s-})(\Delta A_s)^2 +1/2\int_0^tf'(X_s)d[M,M]^c_s}\cr &\qquad+\int_0^t\int_{{\rm I\!R} }\Big(F(X_{s-}+x)-F(X_{s-})-xf(X_{s-})\Big)\nu (ds ,dx), \varepsilonnd{eqnarray*} where $\deltaisplaystyle(S)\int_0^. f(X_{s-})dA_s$ is well defined as a limit in probability of Riemann sums. More precisely for every $t$ $$\sigmaum_{t^n_i\in D_n, t^n_i\leq t}(f(X_{t^n_i})(A_{t^n_{i+1}}-A_{t^n_i})+1/2f'(X_{t^n_i}) (A_{t^n_{i+1}}-A_{t^n_i})^2) \toP (S)\int_0^t f(X_{s-})dA_s.$$ \varepsilonnd{theorem} {\it Proof:} Fix $t>0$. We use arguments from the paper \cite{F850} by F\"ollmer. For $\varepsilonpsilon>0$ we define $J(1)=\{s\leq t;|\Delta X_s|>\varepsilonpsilon\}$. In the following, the elements of $D_n$ are, for short, written $t_{i}$ instead of $t^n_i$. Then $ \sigmaum_{t_i\in D_n, t_i\leq t}(F(X_{t_{i+1}})-F(X_{t_i}))=\sigmaum_1(F(X_{t_{i+1}})-F(X_{t_i})) +\sigmaum_2(F(X_{t_{i+1}})-F(X_{t_i})),$ where $\deltaisplaystyle\sigmaum_1$ denotes the sum (depending on $\omega\in\Omega} \deltaef\o{\omegamega$) of $t_i\in D_n, t_i\leq t$ such that $(t_i,t_{i+1}]\cap J(1)\neq\varepsilonmptyset$ and $\deltaisplaystyle\sigmaum_2$ the sum on the other $t_i$'s. 
Then by Taylor's formula \betaegin{eqnarray*} & &\sigmaum_2(F(X_{t_{i+1}})-F(X_{t_i})) =\sigmaum_2 f(X_{t_i})(X_{t_{i+1}}-X_{t_i})\\ & &\nonumber\qquad\qquad+1/2\sigmaum_2 f'(X_{t_i})(X_{t_{i+1}}-X_{t_i})^2+\sigmaum_2 r_2(X_{t_i},X_{t_{i+1}}), \varepsilonnd{eqnarray*} where $r_2(X_{t_i},X_{t_{i+1}})=C^{\varepsilonpsilon}_i(X_{t_{i+1}}-X_{t_i})^2$ with $\max_2 |C^{\varepsilonpsilon}_i|\leq ||f'||$ and \betaegin{equation}\label{eqnew2} \lim_{\varepsilonpsilon\deltaownarrow0}\limsup_{n\to \infty} P(\max_2 |C_i^{\varepsilonpsilon}|>\deltaelta)=0,\quad \deltaelta>0.\varepsilonnd{equation} Hence \betaegin{eqnarray*} & &\sigmaum_{t_i\in D_n, t_i\leq t}(F(X_{t_{i+1}})-F(X_{t_i})) =\sigmaum_{t_i\in D_n, t_i\leq t} f(X_{t_i})(X_{t_{i+1}}-X_{t_i})\\ & &\nonumber+1/2\sigmaum_{t_i\in D_n, t_i\leq t} f'(X_{t_i})(X_{t_{i+1}}-X_{t_i})^2+\sigmaum_2 r_2(X_{t_i},X_{t_{i+1}})\\ & & -\sigmaum_1\{F(X_{t_{i+1}})-F(X_{t_i})-f(X_{t_i})(X_{t_{i+1}}-X_{t_i}) -1/2f'(X_{t_i})(X_{t_{i+1}}-X_{t_i})^2\}\\ & &=\sigmaum_{t_i\in D_n, t_i\leq t} f(X_{t_i})(M_{t_{i+1}}-M_{t_i})+\sigmaum_{t_i\in D_n, t_i\leq t} f(X_{t_i})(A_{t_{i+1}}-A_{t_i})\\ & & +1/2\sigmaum_{t_i\in D_n, t_i\leq t} f'(X_{t_i})(A_{t_{i+1}}-A_{t_i})^2+1/2\sigmaum_{t_i\in D_n, t_i\leq t} f'(X_{t_i})(M_{t_{i+1}}-M_{t_i})^2\\ & &+\sigmaum_{t_i\in D_n, t_i\leq t} f'(X_{t_i})(M_{t_{i+1}}-M_{t_i})(A_{t_{i+1}}-A_{t_i}) + \sigmaum_2r_2(X_{t_i},X_{t_{i+1}})\\ & & -\sigmaum_1\{F(X_{t_{i+1}})-F(X_{t_i})-f(X_{t_i})(X_{t_{i+1}}-X_{t_i}) -1/2f'(X_{t_i})(X_{t_{i+1}}-X_{t_i})^2\}\\ & & I^n_1+I^n_2+I^n_3+I^n_4+I^n_5+I^{n,\varepsilonpsilon}_6 +I^{n,\varepsilonpsilon}_7.\varepsilonnd{eqnarray*} Now, note that by the definition of a stochastic integral we have $I^n_1\toP\int_0^tf(X_{s-})dM_s. $ The following simple lemma will be very useful in order to estimate the other terms. \betaegin{lemma}\label{lemnew} Assume that c\`adl\`ag processes $X$ and $Y$ admit a quadratic covariation $[X, Y]$ and that the sequence $\{Var(S^n(X,Y))_T\}$ is bounded in probability. To c\`adl\`ag processes $Z$ and $U$ we associate the sequences $\{Z^n\}$ and $\{U^n\}$ of processes, where $Z^n$ and $U^n$ are the respective discretizations of $Z$ and $U$ along $D_n$; precisely $Z^n_t= Z_{t^n_{i}}$, $U^n_t= U_{t^n_{i}}$ when $t\in [t^n_i, t^n_{i+1}[$. Then, for every continuous real function $f,g$ and every $t$, holds the convergence: $$\int_0^tf(Z^n_{s-})g(\Delta U^n_s)dS^n(X,Y)_s \toP \int_0^tf(Z_{s-})g(\Delta U_s)d[X,Y]_s,$$ where these integrals are Stieltjes integrals with respect to the processes $S^n(X,Y)$ or $[X,Y]$. \varepsilonnd{lemma} {\it Proof of Lemma 3.1} From the proof of \cite[Lemma 1.3]{CMS} one can deduce that $\int_0^.f(Z^n_{s-})g(\Delta U^n_s)dS^n(X,Y)_s \toP\int_0^.f(Z_{s-})g(\Delta U^n_s)d[X,Y]_s $ in the (so-called $J_1$) Skorokhod topology (see e.g. \cite{JS}). Since for every $t$ $f(Z^n_{t-})g(\Delta U^n_t)\Delta S^n(X,Y)_t\toP f(Z_{t-})g(\Delta U_t) \Delta [X,Y]_t, $ the desired result follows from properties of the Skorokhod topology $J_1$. {\vrule height 4pt depth 0pt width 4pt} \\[2mm] It is clear by Lemma \ref{lemnew} that we have the convergences; \betaegin{eqnarray*} & &I^n_4\toP1/2\int_0^tf'(X_{s-})d[M,M]_s,\\ & &I^n_5\toP\int_0^tf'(X_{s-})d[M^d,A]_s, \varepsilonnd{eqnarray*} where $M^d$ denotes purely discontinuous part of $M$. Since $X$ is a process with finite energy, by (\ref{eqnew2}) $ \lim_{\varepsilonpsilon\deltaownarrow0}\limsup_{n\to \infty} P( |I^{n,\varepsilonpsilon}_6|>\deltaelta)=0,\quad \deltaelta>0. 
$ We observe also that $P$-almost surely there exists the limit \betaegin{eqnarray*} & &\lim_{\varepsilonpsilon\deltaownarrow0}\lim_{n\to \infty}I^{n,\varepsilonpsilon}_7=\sigmaum_{s\leq t}\{F(X_s)-F(X_{s-})-f(X_{s-})\Delta X_s-1/2f'(X_{s-})\Delta X_s^2\}\\ & &\qquad=\sigmaum_{s\leq t}\{F(X_s)-F(X_{s-})-f(X_{s-})\Delta X_s\}\\ & &-1/2\sigmaum_{s\leq t}\{f'(X_{s-})\Delta M_s^2+\sigmaum_{s\leq t}f'(X_{s-})\Delta A_s^2+2\sigmaum_{s\leq t}f'(X_{s-})\Delta M^d_s\Delta A_s\}. \varepsilonnd{eqnarray*} On the other hand it is obvious that $P$-almost surely $\sigmaum_{t_i\in D_n, t_i\leq t}(F(X_{t_{i+1}})-F(X_{t_i}))\rightarrow F(X_t)-F(0) $ and putting together all convergences, we deduce that $\{I^n_2+I^n_3\}$ is converging in probability and the limit we denote as $(S)\int_0^tf(X_{s-})dA_s$. Observe also that $\int_0^tf'(X_{s-})d[M,M]^c_s=\int_0^tf'(X_{s-})d[M,M]_s-\sigmaum_{s\leq t}f'(X_{s-}) \Delta M_s^2 $ and $$\int_0^tf'(X_{s-})d[M^d,A]_s=\sigmaum_{s\leq t}f(X_{s-})\Delta M_s\Delta A_s.$$ As a consequence we obtain the formula \betaegin{eqnarray*} & & F(X_t)= F(0)+\int_0^tf(X_{s-})dM_s +(S)\int_0^tf(X_{s-})dA_s\\ & &\qquad\qquad+1/2\int_0^tf'(X_{s-})d[M,M]^c_s-1/2\sigmaum_{s\leq t}f'(X_{s-})\Delta A_s^2\\ & &\qquad\qquad+\sigmaum_{s\leq t}\{ F(X_s)-F(X_{s-})-\Delta X_sf(X_{s-})\}. \varepsilonnd{eqnarray*} Now, writing \betaegin{eqnarray*} &&\sigmaum_{s\leq t}\{ F(X_s)-F(X_{s-})-\Delta X_sf(X_{s-})\}\\ &&\qquad= \int_0^t\int_{{\rm I\!R} -\{ 0\} }(F(X_{s-}+x)-F(X_{s-})-xf(X_{s-})) \mu (ds,dx) \varepsilonnd{eqnarray*} and using the basic inequalities $$\vert F(y+x)-F(y)-xf(y)\vert \leq \Vert f'\Vert x^2$$ and $$\vert F(y+x)-F(y)-xf(y)\vert \leq 2\Vert f\Vert \vert x\vert, $$ we get the decomposition \betaegin{eqnarray*} &&\sigmaum_{s\leq .}\{F(X_s)-F(X_{s-})-\Delta X_sf(X_{s-})\}\\ &&\qquad= \int_0^.\int_{{\rm I\!R} -\{ 0\} }(F(X_{s-}+x)-F(X_{s-})-xf(X_{s-})) (\mu -\nu )(ds,dx)\\ &&\qquad\qquad+\int_0^.\int_{{\rm I\!R} -\{ 0\} }(F(X_{s-}+x)-F(X_{s-})-xf(X_{s-})) \nu (ds,dx) \varepsilonnd{eqnarray*} where $$\int_0^.\int_{{\rm I\!R} -\{ 0\} }(F(X_{s-}+x)-F(X_{s-})-xf(X_{s-}) (\mu -\nu )(ds,dx)$$ is a square integrable purely discontinuous martingale, which we will denote by $L$ and $$\int_0^.\int_{{\rm I\!R} -\{ 0\} }(F(X_{s-}+x)-F(X_{s-})-xf(X_{s-}))\nu (ds,dx)$$ is an increasing predictable square integrable process. Then we get the decomposition $F(X_.)=F(0)+Y_.+\Gamma _.$, as written in the statement of Theorem 3.1. It remains to prove that, for every continuous local martingale $N$, holds the equality: $[\Gamma ,N]=0$. 
First note that \betaegin{eqnarray*} & &\sigmaum_{t_i\in D_n, t_i\leq t} (\Gamma_{t_{i+1}}-\Gamma_{t_i})(N_{t_{i+1}}-N_{t_i})\\ & &\qquad\qquad= \sigmaum_{t_i\in D_n, t_i\leq t} \int_{t_i}^{t_{i+1}}(f(X_{s-})-f(X_{t_i}))dM_s(N_{t_{i+1}}-N_{t_i})\\ & &\nonumber\qquad\qquad\quad -\sigmaum_{t_i\in D_n, t_i\leq t} (L_{t_{i+1}}-L_{t_i})(N_{t_{i+1}}-N_{t_i})\\ & &\qquad\qquad\quad+\sigmaum_{t_i\in D_n, t_i\leq t} f(X_{t_i})(A_{t_{i+1}}-A_{t_i})(N_{t_{i+1}}-N_{t_i})\\ & &\qquad\qquad\quad\nonumber +1/2\sigmaum_{t_i\in D_n, t_i\leq t} f'(X_{t_i})(A_{t_{i+1}}-A_{t_i})^2(N_{t_{i+1}}-N_{t_i})\\ & &\qquad\qquad\quad+1/2\sigmaum_{t_i\in D_n, t_i\leq t} f'(X_{t_i})(M_{t_{i+1}}-M_{t_i})^2(N_{t_{i+1}}-N_{t_i})\\ & &\qquad\qquad\quad\nonumber+ \sigmaum_{t_i\in D_n, t_i\leq t} f'(X_{t_i})(M_{t_{i+1}}-M_{t_i})(A_{t_{i+1}}-A_{t_i})(N_{t_{i+1}}-N_{t_i}) \\&&\qquad\qquad\quad + \sigmaum_2r_2(X_{t_i},X_{t_{i+1}})(N_{t_{i+1}}-N_{t_i})\\ & &\qquad\qquad\quad -\sigmaum_1\{F(X_{t_{i+1}})-F(X_{t_i})-f(X_{t_i})(X_{t_{i+1}}-X_{t_i})\\ & &\qquad\qquad\qquad\quad -1/2f'(X_{t_i})(X_{t_{i+1}}-X_{t_i})^2\}(N_{t_{i+1}}-N_{t_i})\\ & &\qquad\qquad=\nonumber I^n_1+I^n_2+I^n_3+I^n_4+I^n_5+I^n_6+I^{n,\varepsilonpsilon}_7 +I^{n,\varepsilonpsilon}_8\varepsilonnd{eqnarray*} Clearly, \[|I^n_1|\leq(\sigmaum_{t_i\in D_n, t_i\leq t} (\int_{t_i}^{t_{i+1}}(f(X_{s-})-f(X_{t_i}))dM_s)^2)^{1/2} (\sigmaum_{t_i\in D_n, t_i\leq t}(N_{t_{i+1}}-N_{t_i})^2)^{1/2},\] where by the definition of the stochastic integral \betaegin{eqnarray*} & &E\sigmaum_{t_i\in D_n, t_i\leq t} (\int_{t_i}^{t_{i+1}}(f(X_{s-})-f(X_{t_i}))dM_s)^2 \\ & &\qquad\qquad=E\sigmaum_{t_i\in D_n, t_i\leq t} \int_{t_i}^{t_{i+1}}(f(X_{s-})-f(X_{t_i}))^2d[M,M]_s\rightarrow0.\varepsilonnd{eqnarray*} On the other hand by Lemma \ref{lemnew} \betaegin{eqnarray*} & &I^n_2\toP[L,N]_t=0,\\ & &I^n_3\toP\int_0^tf(X_{s-})d[A,N]_s=0,\\ & &I^n_4\toP1/2\int_0^tf'(X_{s-})\Delta A_sd[A,N]_s=0,\\ & &I^n_5\toP1/2\int_0^tf'(X_{s-})\Delta N_sd[M,M]_s=0, \varepsilonnd{eqnarray*} and \[I^n_6\toP\int_0^tf'(X_{s-})\Delta M_sd[A,N]_s=0.\] Finally, for every $\varepsilonpsilon>0$ \[I^{n,\varepsilonpsilon}_6\toP0\qquad\mbox{\rm and}\qquad I^{n,\varepsilonpsilon}_7\rightarrow0,\,\,P-a.s. \] and the proof of Theorem \ref{C2} is completed. {\vrule height 4pt depth 0pt width 4pt} \betaegin{corollary}\label{corC2} Let $X=M+A$ be a weak Dirichlet process of finite energy admitting a quadratic variation and $F$ be a $C^2$-real valued function with bounded derivatives $f$ and $f'$. Then the process $(F(X_t)_{t\geq 0})$ is a weak Dirichlet process of finite energy admitting a qudratic variation and the decomposition $F(X)= Y+\Gamma $ holds with the martingale part \betaegin{eqnarray*}& &Y_t= F(0)+\int_0^tf(X_{s-})dM_s\\ &&\qquad+\int_0^t\int_{{\rm I\!R} }\Big(F(X_{s-}+x)-F(X_{s-})-xf(X_{s-})\Big ) (\mu -\nu )(ds, dx) \varepsilonnd{eqnarray*} and the predictable part \betaegin{eqnarray*} \lefteqn{\Gamma _t=\int_0^tf(X_{s-})dA_s +1/2\int_0^tf'(X_s) d[X,X]^c_s}\cr &\qquad+\int_0^t\int_{{\rm I\!R} }\Big(F(X_{s-}+x)-F(X_{s-})-xf(X_{s-})\Big)\nu (ds ,dx). \varepsilonnd{eqnarray*} \varepsilonnd{corollary} {\it Proof}: By Theorem \ref{thm2}(i), $A$ admits integrable quadratic variation $[A,A]$ and due to Lemma \ref{lemnew} \[ 1/2\sigmaum_{t_i\in D_n, t_i\leq t}f(X_{t_i})(A_{t_{i+1}}-A_{t_i})^2\toP1/2\int_0^tf(X_{s-})d[A,A]_s. 
\] Since $[X,X]^c=[M,M]^c+[A,A]^c$, it is clear that \betaegin{eqnarray*} & &1/2\int_0^tf(X_{s-})d[A,A]_s-1/2\sigmaum_{s\leq t}f(X_{s-})\Delta A^2_s+1/2\int_0^tf(X_{s-})d[M,M]_s\\ & &\qquad\qquad=1/2\int_0^tf(X_{s-})d([A,A]^c_s +[M,M]^c)=1/2\int_0^tf(X_{s-})d[X,X]^c_s \varepsilonnd{eqnarray*} and the decomposition $F(X_t)=F(0)+Y_t+\Gamma_t$ in the statement of Corollary 3.1 is a consequence of Theorem \ref{C2}. Finally, by the Theorem from \cite[page 144]{F850} we obtain that also $F(X)$ admits a quadratic variation, which completes the proof. {\vrule height 4pt depth 0pt width 4pt} \betaegin{theorem}\label{C1} Let $X=M+A$ be a weak Dirichlet process of finite energy and $F$ a $C^1$-real valued function with bounded derivative $f$. Then the process $(F(X_t)_{t\geq 0})$ is a weak Dirichlet process of finite energy and the decomposition $F(X)= Y+\Gamma $ holds with the martingale part \betaegin{eqnarray*}& &Y_t= F(0)+ \int_0^tf(X_{s-})dM_s\\ &&\qquad+\int_0^t\int_{{\rm I\!R} }\Big(F(X_{s-}+x)-F(X_{s-})-xf(X_{s-})\Big ) (\mu -\nu )(ds, dx) \varepsilonnd{eqnarray*} \varepsilonnd{theorem} \betaegin{remark} This theorem has formally almost the same statement as Theorem \ref{C2}. However, we have not here any explicit formula for $\Gamma$. The delicate point here is the behaviour of the sum $$\sigmaum_{s\leq t}\betaig (F(X_s)-F(X_{s-})-\Delta X_sf(X_{s-})\betaig )$$ which is not necessarily absolutely convergent and which does not define a process with finite variation. \varepsilonnd{remark} {\it Proof of Theorem 3.2}: We consider a sequence $(F^p)_{p\in {\rm I\!N} }$ of $C^2$ real functions such that $\Vert F-F^p\Vert +\Vert f-f^p\Vert \rightarrow 0$, when $p\rightarrow \infty $. Using Theorem \ref{C2} we can write $$F^p(X_t)=F^p(0)+Y^p_t+\Gamma ^p_t$$ where $$Y^p_t=\int_0^tf^p(X_{s-})dM_s+L^p_t$$ with $$L^p_t=\int_0^t\int_{{\rm I\!R} -\{ 0\} }\Big(F^p(X_{s-}+x)-F^p(X_{s-})-xf^p(X_{s-})\Big ) (\mu -\nu )(ds, dx).$$ The sequence $\{\int_0^.f^p(X_{s-})dM_s+L^p_.\}_{p\in {\rm I\!N} }$ is a Cauchy sequence in the space $H^2$ of square integrable martingales, hence the limiting martingale exists and has the form $\int_0^.f(X_{s-})dM_s+L_.$: Actually, for $p,q$ integers \betaegin{eqnarray*} & &\Vert Y^p-Y^q\Vert _{H^2}\leq E(\int_0^T(f^p(X_{s-})-f^q(X_{s-}))^2d[M,M]_s)\\ & &\qquad\qquad\qquad +E[\int_0^T\int_{{\rm I\!R} -\{ 0\} }\Big(F^p(X_{s-}+x)-F^p(X_{s-})-xf^p(X_{s-})\\ &&\qquad\quad\qquad\qquad\qquad -F^q(X_{s-}+x)+F^q(X_{s-})+xf^q(X_{s-})\Big )^2\nu (ds, dx)]\\ & &\qquad\qquad\leq \Vert F^p-F^q\Vert ^2E[[M,M]]+2\Vert f^p-f^q\Vert ^2E[[M,M]]. \varepsilonnd{eqnarray*} Now we write: \[\Gamma^p_t= F^p(X_t)-F^p(0)-\int_0^tf^p(X_{s-})dM_s-L^p_t.\] Clearly the sequence of predictable processes $(\Gamma^p)$ converges uniformly in probability and its limit (i.e. the process $\Gamma$) has to be also predictable. It remains to prove that $[\Gamma,N]=0$ for every continuous local martingale $N$. Fix $t$. In the sequel use the notations from the proof of Theorem \ref{C2}. 
By Taylor's formula \[ \sigmaum_2(F(X_{t_{i+1}})-F(X_{t_i})) =\sigmaum_2 f(X_{t_i})(X_{t_{i+1}}-X_{t_i})\\ +\sigmaum_2 r_1(X_{t_i},X_{t_{i+1}}), \] where $r_1(X_{t_i},X_{t_{i+1}})=C_i^{\varepsilonpsilon} (X_{t_{i+1}}-X_{t_i})$ satisfy $|C_i^{\varepsilonpsilon}|\leq ||f||$ and \[\lim_{\varepsilonpsilon\deltaownarrow0}\limsup_{n\to \infty} P(\max_2 |C_i^{\varepsilonpsilon}|>\deltaelta)=0,\quad \deltaelta>0.\] Therefore, \betaegin{eqnarray*} & &\sigmaum_{t_i\in D_n, t_i\leq t} (\Gamma_{t_{i+1}}-\Gamma_{t_i})(N_{t_{i+1}}-N_{t_i}) \\& &\qquad=\sigmaum_{t_i\in D_n, t_i\leq t} \int_{t_i}^{t_{i+1}}f(X_{t_i})-f(X_{s-})dM_s(N_{t_{i+1}}-N_{t_i}) \\& &\qquad\quad-\sigmaum_{t_i\in D_n, t_i\leq t} (L_{t_{i+1}}-L_{t_i})(N_{t_{i+1}}-N_{t_i})\\ & &\qquad\quad+\sigmaum_{t_i\in D_n, t_i\leq t} f(X_{t_i})(A_{t_{i+1}}-A_{t_i})(N_{t_{i+1}}-N_{t_i})\\ & &\qquad\quad\nonumber + \sigmaum_2r_1(X_{t_i},X_{t_{i+1}})(N_{t_{i+1}}-N_{t_i})\\ & &\quad\quad\nonumber -\sigmaum_1\{F(X_{t_{i+1}})-F(X_{t_i})-f(X_{t_i}) (X_{t_{i+1}}-X_{t_i})(N_{t_{i+1}}-N_{t_i})\}\\ & &\qquad=\nonumber I^n_1+I^n_2+I^n_3+I^{n,\varepsilonpsilon}_4 +I^{n,\varepsilonpsilon}_5.\varepsilonnd{eqnarray*} Clearly, first three sums tend to $0$ analogously to the proof of Theorem \ref{C2}. Next, \[\lim_{\varepsilonpsilon\deltaownarrow0}\limsup_{n\to\infty}P(|I^{n,\varepsilonpsilon}_4|>\deltaelta)=0,\ quad \deltaelta>0\] and for every $\varepsilonpsilon>0$ \[ I^{n,\varepsilonpsilon}_5\rightarrow0,\,\,P-a.s. \] which, completes the proof of Theorem \ref{C1}. {\vrule height 4pt depth 0pt width 4pt} \betaegin{corollary} Let $X=M+A$ be a weak Dirichlet process of finite energy admitting a quadratic variation process and $F$ a $C^1$-real valued function with bounded derivative $f$. Then the process $(F(X_t)_{t\geq 0})$ is a weak Dirichlet process of finite energy admitting a quadratic variation and the decomposition $F(X)= Y+\Gamma $ holds with the martingale part \betaegin{eqnarray*}& &Y_t= F(0)+ \int_0^tf(X_{s-})dM_s\\ &&\qquad+\int_0^t\int_{{\rm I\!R} }\Big(F(X_{s-}+x)-F(X_{s-})-xf(X_{s-})\Big ) (\mu -\nu )(ds, dx) \varepsilonnd{eqnarray*} The quadratic variation process of $F(X_t)_{t}$ is given by \[[F(X), F(X)]_t=\int_0^t (f(X_s))^2d[M,M]_s^c+\int_0^t (f(X_s))^2d[A,A]_s^c + \sigmaum_{0\leq s\leq t} (F(X_s)-F(X_{s-}))^2.\] \varepsilonnd{corollary} {\it Proof}: Follows easily from Theorem 3.2, \cite[Theorem, page 144]{F850} and the equality $[X,X]^c=[M,M]^c+[A,A]^c$. {\vrule height 4pt depth 0pt width 4pt} \\[2mm] We are able to prove a version of Theorem 3.2 for weak Dirichlet processes also with infinite energy. However, in this case we have restricted our attention to processes with a continuous predictable part. \betaegin{theorem}\label{C10} Let $X=M+A$ be a weak Dirichlet process with continuous predictable part $A$ and $F$ a $C^1$ real-valued function with bounded derivative $f$. Then the process $(F(X_t)_{t\geq 0})$ is a weak Dirichlet process and the decomposition $F(X)= Y+\Gamma $ holds with the martingale part \betaegin{eqnarray*}& &Y_t= F(0)+ \int_0^tf(X_{s-})dM_s\\ &&\qquad+\int_0^t\int_{{\rm I\!R} }\Big(F(X_{s-}+x)-F(X_{s-})-xf(X_{s-})\Big ) (\mu -\nu )(ds, dx) \varepsilonnd{eqnarray*} \varepsilonnd{theorem} {\it Proof}: We consider a sequence $(F^p)_{p\in {\rm I\!N} }$ of $C^2$ real functions such that locally on compact sets $\Vert F-F^p\Vert +\Vert f-f^p\Vert \rightarrow 0$, when $p\rightarrow \infty $. 
Let $A^p$ be a sequence of continuous processes with finite variation such that \[\sigmaup_{t\leq T}|A^p_t-A_t|\toP0.\] Using classical It\^o's formula for the semimartingale $X^p=M+A^p$ we can write $$F^p(X^p_t)=F^p(0)+Y^p_t+\Gamma ^p_t$$ where $$Y^p_t=\int_0^tf^p(X^p_{s-})dM_s+L^p_t$$ with $$L^p_t=\int_0^t\int_{{\rm I\!R} -\{ 0\} }\Big(F^p(X^p_{s-}+x)-F^p(X^p_{s-})-xf^p(X^p_{s-})\Big ) (\mu -\nu )(ds, dx).$$ Similarly to the proof of Theorem 3.2, we check that \[\sigmaup_{t\leq T}|Y^p-Y_t|\toP0.\] On the other hand it is clear that \[\sigmaup_{t\leq T}|F^p(X^p_t)-F(X_t)|\toP0,\] which implies that $\Gamma$ as a uniform limit of predictable processes is also predictable. Finally, by the same arguments as in the proof of Theorem 3.2 we prove that $[\Gamma,N]=0$ for every continuous local martingale $N$. \betaigskip \betaigskip \sigmaection{Weak Dirichlet processes and generalized martingale convolutions} In this section we deal with processes $X$ such that \betaegin{equation}\label{is} X_t=\int_0^tG(t,s)dL_s \varepsilonnd{equation} where $L$ is a quasileft continuous square integrable martingale, and $G$ a real valued deterministic function of $(s,t)$. Let us consider the following hypotheses on $G$. \noindent ($H_0$): $(t,s)\rightarrow G(t,s)$ is continuous on $\{ (s,t):0<s\leq t\leq T\} $. \noindent ($H_1$): For all $s$, $t\rightarrow G(t,s)$ has a bounded energy on $]s,T]$ that is $$V^2_2(G)((s,T],s)=\sigmaup_n\sigmaum_{t_i\in D_n,t_i\geq s}(G(t_{i+1},s)-G(t_i,s))^2 <\infty .$$ \noindent $$E[\int_0^TV_2^2(G)(]s,T])d[L,L]_s]<\infty \leqno{(H_2):}$$ \noindent $$E[\int_0^T\Gamma ^2(s)d[L,L]_s]<\infty \leqno{(H_3):}$$ where $\Gamma ^2(s)=\sigmaup_{t\leq T}G^2(t,s).$ \noindent \betaegin{remark}{\rm Errami and Russo (\cite{ER1}) use, instead of ($H_0$) a slightly more restrictive assumption, namely: ($H_{0^+}$): $(t,s)\rightarrow G(t,s)$ is continuous on $\{ (s,t):0\leq s\leq t\leq T \} $. Note that $(H_{0^+})$, implies $(H_3)$. Actually $\Gamma ^2$ is continuous and bounded. If $t\rightarrow G(t,s)$ admits a quadratic variation on $(s,T]$ along $(D_n)$, then $(H_1)$ is satisfied. We shall extend $G$ to the square $[0,T]^2$ by setting $G(t,s)=0$ if $s>t$. }\varepsilonnd{remark} \betaegin{theorem} If $X$ meets (\ref{is}) and if $G$ satisfies $(H_0),(H_1),(H_2), (H_3)$, then\betaegin{description} \item[{\betaf (i)}] $X$ is a continuous in probability process with finite energy and has an optional modification, \item[{\betaf (ii)}] Let us assume that $X$ has a.s. c\`adl\`ag trajectories, then $X$ is a weak Dirichlet process with natural decomposition $X=M+A$, such that if $M^n$ is defined as in (\ref{Mn}), then for every $t\leq T$, $\vert M^n_t-M_t\vert \rightarrow 0$ in ${\betaf L}^2$. \varepsilonnd{description} \varepsilonnd{theorem} {\it Proof}: The proof will be given in several steps. \betaegin{lemma} $X$ is a continuous in probability process with finite energy. \varepsilonnd{lemma} {\it Proof of Lemma 4.1}: First of all, from $(H_2)$ and $(H_3)$, for every $t\leq T$ $X_t$ is an ${\cal F}_t $-measurable square integrable random variable. 
Let us write
$$X_{t_{i+1}}-X_{t_i}=\int_0^{t_i}(G(t_{i+1},s)-G(t_i,s))dL_s+\int_{t_i}^{t_{i+1}} G(t_{i+1},s)dL_s.$$
Since $L$ is a square integrable martingale, we get:
\begin{eqnarray*} && E(\sum_{t_i\in D_n}(X_{t_{i+1}}-X_{t_i})^2)\cr &&\leq 2E(\sum_{t_i\in D_n} (\int_0^{t_i}(G(t_{i+1},s)-G(t_i,s))dL_s)^2)+2E(\sum_{t_i\in D_n} (\int_{t_i}^{t_{i+1}} G(t_{i+1},s)dL_s)^2)\cr &&\leq 2E(\sum_{t_i\in D_n} \int_0^{t_i}(G(t_{i+1},s)-G(t_i,s))^2d[L,L]_s)+2E(\sum_{t_i\in D_n}\int_{t_i}^{t_{i+1}}(G(t_{i+1},s))^2d[L,L]_s)\cr &&=2E(I^n_1)+2E(I^n_2). \end{eqnarray*}
By simple calculations
\begin{eqnarray*}I^n_1&=&\sum_{t_i\in D_n} \sum_{k=1}^i\int_{t_{k-1}}^{t_k}(G(t_{i+1},s)- G(t_i,s))^2d[L,L]_s\cr &=&\sum_{t_k\in D_n} \int_{t_{k-1}}^{t_k}\sum_{i\geq k}(G(t_{i+1},s)-G(t_i,s))^2d[L,L]_s\cr &\leq& \sum_{t_k\in D_n}\int_{t_{k-1}}^{t_k}V^2_2(G)((s,T],s)d[L,L]_s, \end{eqnarray*}
and
\begin{eqnarray*} I^n_2&\leq& \sum_{t_i\in D_n} \int_{t_i}^{t_{i+1}}\Gamma ^2(s)d[L,L]_s\cr &\leq& \int_0^T\Gamma ^2(s)d[L,L]_s. \end{eqnarray*}
Therefore
\begin{eqnarray*} \sup_{n}E(\sum_{t_i\in D_n}(X_{t_{i+1}}-X_{t_i})^2) &\leq& 2E\int_0^TV_2^2(G)((s,T],s)d[L,L]_s+2E\int_0^T(\Gamma (s))^2d[L,L]_s\cr &<&\infty .\end{eqnarray*}
This proves that $X$ has finite energy.\\[2mm]
Now, let us take $s,t$ such that $0\leq s<t\leq T$. We get:
$$E[(X_t-X_s)^2]\leq 2E\int_0^s(G(t,u)-G(s,u))^2d[L,L]_u+2E\int_s^t(G(t,u))^2 d[L,L]_u.$$
Since $L$ is continuous in probability, so is $[L,L]$. Under $(H_3)$,
$$2E\displaystyle\int_s^t(G(t,u))^2d[L,L]_u\rightarrow 0\quad\mbox{\rm as}\,\, t\rightarrow s,$$
and by continuity of $t\rightarrow G(t,s)$ and dominated convergence,
$$E\displaystyle\int_0^s(G(t,u)-G(s,u))^2d[L,L]_u \rightarrow 0.$$
The continuity in probability of the process $X$ follows. At last, since the process $X$ is ${\cal F}_t $-adapted and continuous in probability, it admits an optional modification, which we shall again denote by $X$: see for example \cite{M}, pp.~230--231, where Th\'eor\`eme 5 bis is stated for a progressively measurable modification, but the sequence of approximating processes introduced in the proof is c\`adl\`ag, hence optional (see also below the proof of the existence of a predictable modification of the process $A$). Therefore Lemma 4.1 is proven. {\vrule height 4pt depth 0pt width 4pt}
\begin{lemma} Let us consider the decomposition $X_t= A^n_t+M^n_t$, where, as in (\ref{Mn}),
$$M^n_t= \sum_{t^n_i\in D_n,t^n_{i}\leq t}\left[X_{t^n_{i}}- E[X_{t^n_{i}}/{\cal F}_{t^n_{i-1}}]\right].$$
Then $X$ admits a modification with a decomposition $X_t=A_t+M_t$, where $M$ is the square integrable martingale $M_t=\displaystyle\int_0^tG(s,s) dL_s$ and $A$ is a predictable process, such that:
\begin{description}
\item[{\bf(i)}] for every $t\leq T$, $\vert M^n_t-M_t\vert \rightarrow 0$ in ${\bf L}^2$,
\item[{\bf(ii)}] for every $t\leq T$, $\vert A_t^n-A_t \vert \rightarrow 0$ in ${\bf L}^2$,
\item[{\bf(iii)}] for every continuous local martingale $N$, $[A,N]=0$.
\end{description}
\end{lemma}
\noindent {\it Proof of Lemma 4.2}: For every $t_i, t_{i+1}$ we have
$$E[X_{t_{i+1}}-X_{t_i}\vert {\cal F}_{t_i}]=\int_0^{t_i}(G(t_{i+1},s)-G(t_i,s))dL_s,$$
hence for $t\in[0,T]$
$$M^n_t=\sum_{t_{i+1}\leq t}\int_{t_i}^{t_{i+1}}G(t_{i+1},s)dL_s=\int_0^{\rho ^n(t)} G^n(s)dL_s,$$
where $G^n(s)=G(t_{i+1},s)$ for $s\in (t_i,t_{i+1}]$ and $\rho ^n(t)= \max\{ t_{i}: t_{i}\leq t\} $.
Note that $\rho ^n(t)\rightarrow t$ as $n\rightarrow \infty $. Define $M_t=\displaystyle\int_0^tG(s,s)dL_s$. By $(H_2)$ and $(H_3)$, for every $\varepsilon >0$
$$E[(M_{\varepsilon })^2]=E\int_0^{\varepsilon }G^2(s,s)d[L,L]_s \leq E\int_0^{\varepsilon }\Gamma ^2(s)d[L,L]_s,$$
and this last expression tends to $0$ when $\varepsilon $ tends to $0$. Similarly, when $\varepsilon \rightarrow 0$,
$$\sup_nE[(M^n_{\varepsilon })^2]=\sup_nE\int_0^{\varepsilon }(G^n(s))^2d[L,L]_s \rightarrow 0.$$
Now, let us fix $\varepsilon >0$ belonging to $D_n$. Since $(t,s)\rightarrow G(t,s)$ is continuous on $\{ (s,t):\varepsilon \leq s\leq t\leq T\} $, it is uniformly continuous there and therefore
$$\sup_{\varepsilon \leq s\leq T}\vert G^n(s)-G(s,s)\vert \rightarrow 0.$$
Hence
\begin{eqnarray*} E[(M^n_t-M^n_{\varepsilon }-M_t+M_{\varepsilon })^2]&\leq& 2E(\int_{\varepsilon } ^{\rho ^n(t)}(G^n(s)-G(s,s))dL_s)^2\cr &&+2E(\int_{\rho ^n(t)}^tG(s,s)dL_s)^2\cr &\leq& 8E\int_{\varepsilon }^T(G^n(s)-G(s,s))^2d[L,L]_s\cr &&+2E\int_{\rho ^n(t)}^t G^2(s,s)d[L,L]_s. \end{eqnarray*}
The first term tends to $0$ when $n\rightarrow \infty $ because $G^n$ converges uniformly to $G$ on $[\varepsilon,T]$. Now, the continuity in probability of $L$ implies the continuity in probability of $M$ on $[0,T]$, hence the second term also tends to $0$ when $n\rightarrow \infty $.
Note that for every $n$
$$A^n_t= \sum_{t^n_i\in D_n, t^n_{i} \leq t}{\bf E}[X_{t^n_{i}}-X_{t^n_{i-1}}/{\cal F}_{t^n_{i-1}}]$$
and $A=X-M$. $A^n$ is a predictable process and we have $X_{\rho ^n(t)}=A^n_{\rho ^n(t)}+M^n_{\rho ^n(t)}.$ As $X$ is continuous in probability, for every $t\leq T$, $X_{\rho ^n(t)}\rightarrow X_t$ in ${\bf L}^2$; moreover, $M^n_t=M^n_{\rho ^n(t)}$ and $A^n_t=A^n_{\rho ^n(t)}$: we deduce that $A_t^n\rightarrow A_t$ in ${\bf L}^2$ for every $t$. It follows that $A$ is also adapted and continuous in probability.
Our decomposition $X=M+A$ coincides with the Graversen-Rao decomposition of Theorem 2.1. But in Theorem 2.1 it is assumed that $X$ is c\`adl\`ag; here this is not the case, so it is necessary to check that $A$ admits a predictable modification. Actually, since $A$ is continuous in probability on the interval $[0,T]$, one can find a subsequence $\{n(k)\}_{k\geq 1}$ such that for every $t\in [0,T]$, $\bar A^{n(k)}_t \rightarrow A_t$ a.s. when $k\rightarrow \infty $, with $\bar A^n_t= \sum _i1_{(t^n_i,t^n_{i+1}]}(t)A_{t^n_i}$ and $\bar A^n_0=A_0$. Since every $\bar A^n$ is an adapted, left-continuous step process, it is predictable, and the process $A'$ defined by $A'_t=\limsup_k\bar A^{n(k)}_t$ is also predictable. But for every $t$, $A'_t=A_t$ a.s. So, we shall suppose from now on that $A=A'$, and $X=A'+M$.
\noindent {\it Proof of (iii)}: Let $N$ be a continuous local martingale. Using localization arguments, we may assume that $N$ is a square integrable martingale.
For every $t$ we can write:
\begin{eqnarray*} &&\sum_{t_i\in D_n,t_i\leq t} (N_{t_{i+1}}-N_{t_i})(A_{t_{i+1}}-A_{t_i})\\ & &\qquad\qquad= \sum_{t_i\in D_n,t_i\leq t}(N_{t_{i+1}}-N_{t_i}) \int_0^{t_i}(G(t_{i+1},s)-G(t_i,s))dL_s\\ &&\qquad\qquad\quad+\sum_{t_i\in D_n,t_i\leq t} (N_{t_{i+1}}-N_{t_i})\int_{t_i}^{t_{i+1}}(G(t_{i+1},s)-G(s,s))dL_s\\ &&\qquad\qquad=I^n_1+I^n_2. \end{eqnarray*}
Since $N$ is a martingale, using the B-D-G inequality and Schwarz's inequality we get:
\begin{eqnarray*} E\vert I^n_1 \vert &\leq &cE\left(\sum_{t_i\in D_n} (N_{t_{i+1}}-N_{t_i})^2 \Bigl(\int_0^{t_i}(G(t_{i+1},s)-G(t_i,s))dL_s\Bigr)^2\right)^{1/2}\cr &\leq &cE\left(\max_{t_i\in D_n}\vert N_{t_{i+1}}-N_{t_i}\vert ^2 \sum_{t_i\in D_n}\Bigl (\int_0^{t_i}(G(t_{i+1},s)-G(t_i,s))dL_s\Bigr)^2\right)^{1/2}\cr &\leq &c(E(\max_{t_i\in D_n} \vert N_{t_{i+1}}-N_{t_i}\vert ^2))^{1/2} \left(E\sum_{t_i\in D_n} \Bigl(\int_0^{t_i}(G(t_{i+1},s)-G(t_i,s))dL_s\Bigr)^2\right)^{1/2}. \end{eqnarray*}
Because of the continuity of $N$, $E[\max_{t_i\in D_n}\vert N_{t_{i+1}}-N_{t_i}\vert ^2]\rightarrow 0;$ the second factor is estimated as before by $(E\displaystyle\int_0^TV^2_2(G)((s,T],s)d[L,L]_s)^{1/2}$, which is finite. Now, by Schwarz's inequality
\begin{eqnarray*} E\vert I^n_2\vert &\leq& (E\sum_{t_i\in D_n} (N_{t_{i+1}}-N_{t_i})^2)^{1/2} \left(E\sum_{t_i\in D_n} \Bigl(\int_{t_i}^{t_{i+1}}(G(t_{i+1},s)-G(s,s))dL_s\Bigr)^2\right)^{1/2}\cr &\leq&(E([N,N]_T))^{1/2}\Bigl(E\int_0^T (G^n(s)-G(s,s))^2d[L,L]_s\Bigr)^{1/2}, \end{eqnarray*}
where $G^n$ is defined as above. Since $E\displaystyle\int_0^T(G^n(s)-G(s,s))^2d[L,L]_s\rightarrow 0$, we conclude that for every $t$
$$\sum_{t_i\in D_n,t_i\leq t} (N_{t_{i+1}}-N_{t_i})(A_{t_{i+1}}-A_{t_i})\rightarrow 0$$
in ${\bf L}^1$, and the covariation process $[N,A]$ is null for every continuous martingale $N$. The proof of Theorem 4.1 is complete. {\vrule height 4pt depth 0pt width 4pt}
Unfortunately, we are not able to prove that $X$ admits a modification with c\`adl\`ag trajectories. However, in this direction, we have the following lemma.
\begin{lemma} $A$ is continuous (hence $X$ is c\`adl\`ag) if the following additional condition is fulfilled:

\noindent ($H_c$): There exist $\delta>0$, $p>1$ and a function $a(u)$ with
$$E\Bigl[(\int_0^Ta(u)d[L,L]_u)^p\Bigr]<\infty ,$$
such that for every $s$, $t$, $u$,
$$\Bigl(G(t,u)-G(s,u)\Bigr)^2\leq a(u)|t-s|^{{1\over p}+\delta}.$$
\end{lemma}
\noindent {\it Proof }: Let us take $s,t$ such that $0\leq s\leq t \leq T$. With a constant $c$ changing from line to line, we have:
\begin{eqnarray*} E(A_t-A_s)^{2p}&\leq& E(\int _0^t(G(t,u)-G(u,u))dL_u- \int_0^s(G(s,u)-G(u,u))dL_u)^{2p}\cr &\leq& cE(\int_0^s(G(t,u)-G(s,u))^2d[L,L]_u)^p\cr & &\qquad+cE(\int_s^t(G(t,u)-G(u,u))^2d[L,L]_u)^p\cr &\leq& c(t-s)^{1+p\delta }E(\int_0^ta(u)d[L,L]_u)^p\cr &\leq& c(t-s)^{1+p\delta }. \end{eqnarray*}
Hence we get the continuity of $A$ by Kolmogorov's Lemma. {\vrule height 4pt depth 0pt width 4pt}
An analogous result under a H\"older condition was already given in \cite{BM}, Lemmas 2C and 2D. We shall suppose from now on that the processes given by (\ref{is}) have a.s. c\`adl\`ag trajectories.
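\bigskip
\noindent As a simple illustration of condition ($H_c$) (a particular case only, stated here for convenience): if $G(t,s)=g(t)h(s)$ with $g$ Lipschitz on $[0,T]$, $h$ bounded, and $E\bigl[[L,L]_T^{\,p}\bigr]<\infty $ for some $p>1$, then for any $0<\delta <2-{1\over p}$ and all $s,t,u$,
$$\Bigl(G(t,u)-G(s,u)\Bigr)^2\leq \Vert g\Vert _{\rm Lip}^2\, h(u)^2\, |t-s|^2\leq a(u)\,|t-s|^{{1\over p}+\delta }\quad\mbox{\rm with }\ a(u)=\Vert g\Vert _{\rm Lip}^2\,\Vert h\Vert _\infty ^2\, T^{2-{1\over p}-\delta },$$
so that ($H_c$) is fulfilled and $A$ is continuous in this case (hence $X$ is c\`adl\`ag).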
\bigskip
We are now interested in investigating conditions on $G$ which make $X$ a Dirichlet process, or a weak Dirichlet process admitting a quadratic variation. For that, let us consider the following hypotheses:

\noindent $(H_4)$: For all $s$, $t\rightarrow G(t,s)$ has a bounded variation on $(s,\tau]$ for every $\tau\leq T$ (we denote this variation by $\vert G\vert ((s,\tau],s)$).

\noindent $$ E\int_0^T\vert G\vert ((s,T],s)d[L,L]_s<\infty \leqno{(H_5):} $$

\noindent $(H_6)$: For all $u, v$, $t\rightarrow G(t,u)$ and $t\rightarrow G(t,v)$ have a finite mutual quadratic covariation with the property $(S)$ on $(\max(u,v),T]$ (we denote this covariation by $[G(.,u),G(.,v)]_\tau$). Moreover, we suppose that the convergence involved in defining this covariation is uniform in $u,v$.
\bigskip

Of course $(H_6)$ implies that $t\rightarrow G(t,s)$ admits a quadratic variation on $(s,T]$ for all $s$, with the property $(S)$, and that $(H_1)$ is satisfied.
\begin{theorem}\begin{description}
\item[{\bf (i)}] Let us assume $(H_0),(H_3),(H_4),(H_5)$. Then $X$ is a Dirichlet process (i.e. $A$ is continuous and $[A,A]\equiv 0$).
\item[{\bf (ii)}] Let us assume $(H_0),(H_2),(H_3),(H_6)$. Then $X$ is a weak Dirichlet process. Moreover, if we assume that the process $B$ defined by
\begin{equation}\label{[A]} B_t=\int_0^t[G(.,s),G(.,s)]_td[L,L]_s+2\int_0^t(\int_0^v[G(.,u), G(.,v)]_tdL_u)dL_v \end{equation}
is a c\`adl\`ag process, then $X$ and $A$ admit a quadratic variation such that $[A,A]_t=B_t$ and
\begin{equation} [X,X]=[A,A]+[M,M]. \end{equation}
\end{description}
\end{theorem}
\noindent \begin{remark}{\rm\begin{description}
\item[{\bf(i)}] If $t\rightarrow G(t,s)$ is $C^1$ for every $s$, denoting by $G_1(t,s)$ its derivative and assuming that $(t,s)\rightarrow G_1(t,s)$ is continuous on $[0,T]^2$, we get $A_t=\displaystyle\int_0^t(\int_0^uG_1(u,s)dL_s)du$ by applying Fubini's theorem for stochastic integrals, so $X$ is a semimartingale. This result is due to Protter (\cite{P}).
\item[{\bf(ii)}] In the case of a continuous martingale $L$, part (ii) is due to Errami and Russo (\cite{ER1}, \cite{ER2}).
\end{description} }\end{remark}
\noindent {\it Proof }: (i) First of all, we notice that our hypotheses imply that $(H_1)$ and $(H_2)$ are satisfied for any sequence $(D_n)$ of subdivisions with mesh tending to $0$.
Since we can write
$$A_{t_{i+1}}-A_{t_i}=\int_0^{t_i}(G(t_{i+1},s)-G(t_i,s))dL_s+ \int_{t_i}^{t_{i+1}}(G(t_{i+1},s)-G(s,s))dL_s,$$
for every $\varepsilon >0$ we get:
\begin{eqnarray*} && E\sum_{t_i\in D_n} (A_{t_{i+1}}-A_{t_i})^2 \\ && \qquad\leq 2E\sum_{t_i\in D_n}\int_0^{t_i}(G(t_{i+1},s)-G(t_i,s))^2d[L,L]_s \\& &\qquad\qquad+2E\sum_{t_i\in D_n}\int_{t_i}^{t_{i+1}} (G(t_{i+1},s)-G(s,s))^2d[L,L]_s\\ &&\qquad\leq 2\max_{{t_i\in D_n},\,\varepsilon \leq s\leq T}\vert G(t_{i+1},s)-G(t_i,s)\vert\, E\sum_{t_i\in D_n} \int_{\varepsilon }^{t_i}\vert G(t_{i+1},s)-G(t_i,s)\vert d[L,L]_s\\ &&\qquad\qquad +2E\sum_{t_i\in D_n}\int_0^{t_i\wedge \varepsilon }(G(t_{i+1},s)-G(t_i,s))^2d[L,L]_s \\ &&\qquad\qquad+2E\int_0^T(G^n(s)-G(s,s))^2d[L,L]_s\\ &&\qquad\leq 2\max_{{t_i\in D_n},\,\varepsilon \leq s\leq T} \vert G(t_{i+1},s)-G(t_i,s)\vert\, E\int_{\varepsilon }^T\vert G\vert ((s,T],s)d[L,L]_s\\ &&\qquad\qquad+2E\int_0^{\varepsilon }V^2_2(G)((s,T],s)d[L,L]_s+2E\int_0^T(G^n(s)-G(s,s))^2d[L,L]_s. \end{eqnarray*}
Using the following properties:
\begin{eqnarray*} & &\max_{{t_i\in D_n},\,\varepsilon \leq s\leq T} \vert G(t_{i+1},s)-G(t_i,s)\vert \rightarrow 0\quad\mbox{\rm when}\,\,n\rightarrow \infty,\\ & &E\displaystyle\int_{\varepsilon }^T \vert G\vert ((s,T],s)d[L,L]_s<\infty, \\& &E\displaystyle\int_0^{\varepsilon }V^2_2(G)((s,T],s) d[L,L]_s\rightarrow 0\quad\mbox{\rm when}\,\, \varepsilon \rightarrow 0,\end{eqnarray*}
and
\[E\displaystyle\int_0^T (G^n(s)-G(s,s))^2d[L,L]_s\rightarrow 0\quad\mbox{\rm when }\,\, n\rightarrow \infty,\]
we deduce that $[A,A]\equiv 0$. By the inequality
$$\max_{t_i\in D_n} \vert A_{t_{i+1}}-A_{t_i}\vert^2\leq \displaystyle\sum_{t_i\in D_n} (A_{t_{i+1}} -A_{t_i})^2,$$
we get that $A$ is continuous.
\bigskip
(ii) Taking into account Proposition 2.1, we only have to prove the property $(S)$ for the process defined by the right hand side of formula (\ref{[A]}). Let us notice first that formula (\ref{[A]}) is well defined, as we take as integrand of $dL_s$ in the last term the predictable projection of the optional process
$$(\displaystyle\int_0^s[G(.,u),G(.,s)]_tdL_u)_{s\leq t}.$$
See for example Dellacherie-Meyer (\cite{DM}), Chap.~VI for details. Taking into account that $E\displaystyle\int_0^T(G^n(s)-G(s,s))^2d [L,L]_s\rightarrow 0$ when $n\rightarrow \infty $, we have:
\begin{eqnarray*} & &[A,A]_t=\lim_n\sum_{t_i\in D_n,t_{i}\leq t}(A_{t_{i+1}}-A_{t_i})^2\cr &&\qquad=\lim_n\sum_{t_i\in D_n,t_{i} \leq t}(\int_0^{t_i}(G(t_{i+1},s)-G(t_i,s))dL_s)^2\cr &&\qquad=\lim_n2\sum_{t_i\in D_n,t_{i}\leq t}\int_0^{t_i}(\int_0^s(G(t_{i+1},u)-G(t_i,u))dL_u)_-(G(t_{i+1},s) -G(t_i,s))dL_s\cr &&\qquad\qquad+\lim_n\sum_{t_i\in D_n,t_{i} \leq t}\int_0^{t_i}(G(t_{i+1},s)-G(t_i,s))^2d[L,L]_s\cr &&\qquad=2\lim_n I^n_1(t)+\lim_nI^n_2(t). \end{eqnarray*}
Then
$$I^n_2(t)=\sum_k\int_{t_{k-1}}^{t_k}\sum_{i>k,t_{i}\leq t}(G(t_{i+1},s)- G(t_i,s))^2d[L,L]_s.$$
This sequence converges to $\displaystyle\int_0^t[G(.,s)]_td[L,L]_s$ by dominated convergence. On the other hand, $I^n_1(t)$ can be written
\begin{eqnarray*}I^n_1(t)&=&\sum_k\int_{t_{k-1}}^{t_k}(\int_0^s\sum_{i>k, t_{i}\leq t} (G(t_{i+1},u)-G(t_i,u))(G(t_{i+1},s)-G(t_i,s))dL_u)_-dL_s \cr &=&\int_0^{\rho ^n(t)}(\int_0^s[G^n(.,u),G^n(.,s)]_tdL_u)_-dL_s. \end{eqnarray*}
For any optional process $Y$, let us write $Y^P$ for its predictable projection.
Then, noticing that
$$(\int_0^s[G^n(.,u),G^n(.,s)]_tdL_u)_-= (\int_0^s[G^n(.,u),G^n(.,s)]_tdL_u)^P$$
and using classical properties of predictable projections, we get, for $t\in D_n$,
\begin{eqnarray*} &&E(I^n_1(t)-\int_0^t (\int_0^s[G(.,u),G(.,s)]_tdL_u)dL_s)^2 \cr &&\qquad= E[\int_0^t[(\int_0^s([G^n(.,u),G^n(.,s)]_t-[G(.,u),G(.,s)]_t)dL_u)^P]^2 d<L,L>_s] \cr &&\qquad\leq E[\int_0^t[(\int_0^s([G^n(.,u),G^n(.,s)]_t-[G(.,u),G(.,s)]_t)dL_u)^2]^P d<L,L>_s] \cr &&\qquad=E[\int_0^t(\int_0^s([G^n(.,u),G^n(.,s)]_t-[G(.,u),G(.,s)]_t)dL_u)^2 d<L,L>_s]. \end{eqnarray*}
We show now that, for any $t\in \bigcup_nD_n$,
\[\int_0^t(\int_0^s([G^n(.,u),G^n(.,s)]_t-[G(.,u),G(.,s)]_t) dL_u)^2d<L,L>_s\toP0\quad\mbox{\rm when}\,\,n\rightarrow \infty. \]
Note that
\begin{eqnarray*} &&E(\int_0^s([G^n(.,u),G^n(.,s)]_t-[G(.,u),G(.,s)]_t)dL_u)^2 \cr &&\qquad=E(\int_0^s([G^n(.,u),G^n(.,s)]_t-[G(.,u),G(.,s)]_t)^2d<L,L>_u). \end{eqnarray*}
But for every $u,s,t$, $[G^n(.,u),G^n(.,s)]_t-[G(.,u),G(.,s)]_t\rightarrow 0$ when $n\rightarrow \infty $, and we have the estimate
\begin{eqnarray*}&& \vert [G^n(.,u),G^n(.,s)]_t-[G(.,u),G(.,s)]_t\vert \cr &&\qquad\leq 1/2([G^n(.,u)]_t+[G^n(.,s)]_t+[G(.,u)]_t+[G(.,s)]_t). \end{eqnarray*}
From ($H_6$) this last term is bounded, hence by dominated convergence
$$E(\int_0^s([G^n(.,u),G^n(.,s)]_t-[G(.,u),G(.,s)]_t)dL_u)^2\rightarrow 0$$
for every $s,t$. So, we can easily get (18) by localisation of $L$. We are finished as soon as we remark that, using the continuity of the process $<L,L>$, for every $t$
\begin{eqnarray*} &&E(I^n_1(t)-I^n_1(\rho ^n(t))-\int_{\rho ^n(t)}^t (\int_0^s[G(.,u),G(.,s)]dL_u)dL_s)^2 \cr &&\qquad\qquad= E(\int_{\rho ^n(t)}^t (\int_0^s[G(.,u),G(.,s)]dL_u)dL_s)^2 \cr &&\qquad\qquad=E\int_{\rho ^n(t)}^t (\int_0^s[G(.,u),G(.,s)]dL_u)^2d<L,L>_s \end{eqnarray*}
converges to $0$ when $n\rightarrow \infty $. {\vrule height 4pt depth 0pt width 4pt}
\bigskip

\noindent {\bf Example 1: Fractional normal processes of index $H>1/2$}

We consider the case where $L$ is a normal martingale (i.e. a square integrable martingale with predictable quadratic variation $<L,L>_t=t$), and $G(t,s)$ is given for $t\geq s$ by
\begin{equation} G(t,s)=cs^{1/2-H}\int_s^tu^{H-1/2}(u-s)^{H-3/2}du \end{equation}
with $c$ a constant. Of course, when $L$ is a standard Brownian motion, $X$ is the classical fractional Brownian motion. Let us check that ($H_3$), ($H_4$) and ($H_5$) are satisfied. We have
\begin{eqnarray*} \vert G\vert ((s,T],s)&=& cs^{1/2-H}\int_s^Tu^{H-1/2}(u-s)^{H-3/2}du \cr &\leq& cs^{1/2-H}\int_s^T(u-s)^{H-3/2}du \cr &=&cs^{1/2-H}{1\over {H-1/2}}(T-s)^{H-1/2} \cr &\leq& {c\over {H-1/2}}s^{1/2-H} \end{eqnarray*}
and
$$\int_0^T\vert G\vert ((s,T],s)ds\leq {c\over {(H-1/2)(3/2-H)}},$$
hence finally
$$\int_0^T\vert G\vert ^2((s,T],s)ds\leq {c\over {(H-1/2)(2-2H)}}.$$
This means that, for every normal martingale $L$, the process $X$ defined by $X_t=\displaystyle\int_0^tG(t,s)dL_s$ is a Dirichlet process. Since $G(s,s)=0$, its martingale part is null. In particular $X$ is continuous, even if $L$ is not.
\bigskip

\noindent {\bf Example 2: Weak Dirichlet process driven by a Brownian motion}

For this example we take $L=B$ a standard Brownian motion and $G(t,s)=\beta (t)f(s)$ for $t\geq s$, where $t\rightarrow \beta (t)$ is a fixed Brownian trajectory whose quadratic variation is $[\beta (.)]_t=t$, and $f$ is a real continuous function on $[0,T]$. Here we can apply part (ii) of Theorem 4.2.
Actually,
$$[G(.,u),G(.,v)]_t= \lim_n \sum_{t_i\in D_n,\ \max\{ u,v\} \leq t_i,\ t_{i+1}\leq t} (\beta (t_{i+1})-\beta (t_i))^2f(u)f(v),$$
and this term converges uniformly in $u,v$ to $(t-\max\{ u,v\} )f(u)f(v)$. Therefore we get the decomposition $X=M+A$, with
\begin{equation} M_t=\int_0^t\beta (s)f(s)dB_s, \end{equation}
and the formula (\ref{[A]}) gives the quadratic variation of $A$:
\begin{equation} [A,A]_t=\int_0^t(t-s)f^2(s)ds+2\int_0^t\int_0^s(t-s)f(u)f(s) dB_udB_s, \end{equation}
which can be written
$$[A,A]_t=\int_0^t(\int_0^sf(u)dB_u)^2ds.$$
In particular the process $(\displaystyle\int_0^t\int_0^s(t-s)f(u)f(s)dB_udB_s)_t$ has finite variation. Since $E([A,A]_t)= \displaystyle\int_0^t(t-s)f^2(s)ds\not = 0$, $X$ is not a Dirichlet process; however, Theorems 4.1 and 4.2 ensure that $X$ is a weak Dirichlet process admitting a quadratic variation.
\bigskip
Actually, Example 2 is a particular case of the following one.
\bigskip

\noindent {\bf Example 3}

Let us consider $G(t,s)$ of the form $G(t,s)= \int_s^tf(u,s)d\beta _u$,

\noindent where for every $s<t$, $u\rightarrow f(u,s)$ has a bounded variation on $(s,t]$, and $(u,s)\rightarrow f(u,s)$ is continuous on $\{(u,s): 0\leq s\leq u\leq T\}$. We denote by $df_u(u,s)$ the measure associated with this variation. We assume also that $\beta $ is a deterministic real continuous function on $[0,T]$, admitting a quadratic variation along $(D_n)$, with $\beta _0=0$. Then $X_t=A_t$ and we shall prove the Fubini-type formula:
\begin{equation} A_t=\int_0^t(\int_s^tf(u,s)d\beta _u)dL_s=\int_0^t(\int_0^uf(u,s)dL_s) d\beta _u. \end{equation}
Indeed, $A$ admits a quadratic variation along $(D_n)$ given by
$$[A,A]_t=\int_0^t(\int_0^sf(u,s)dL_u)^2d[\beta ,\beta ]_s.$$
Actually, taking into account that $[\beta ,f(.,s)]=0$, we get
$$\int_s^tf(u,s)d\beta _u=\beta _tf(t,s)-\beta _sf(s,s)-\int_s^t\beta _u df_u(u,s).$$
Then
$$A_t= \beta _t\int_0^tf(t,s)dL_s-\int_0^t\beta _sf(s,s)dL_s- \int_0^t(\int_s^t\beta _udf_u(u,s))dL_s.$$
From Theorem 4.2 (i), the process $Y$ defined by $Y_t= \displaystyle\int_0^tf(t,s)dL_s$ is a Dirichlet process and $[\beta ,Y]=0$; then, by integration by parts, we get
$$\beta _tY_t= \int_0^tY_sd\beta _s+\int_0^t\beta _sdY_s,$$
and by using the sequence $(D_n)$
\begin{eqnarray*}\int_0^t\beta _sdY_s&=& \lim_{D_n}\sum_{t_i \in D_n, t_i\leq t}\beta _{t_i} (Y_{t_{i+1}}-Y_{t_i})\cr &=&\lim_{D_n}\sum_{t_i \in D_n, t_i\leq t}\beta _{t_i}\int_{t_i}^{t_{i+1}}f(t_{i+1},s)dL_s\cr &&+\lim_{D_n}\sum_{t_i \in D_n, t_i\leq t}\beta _{t_i}\int_0^{t_i}(f(t_{i+1},s)-f(t_i,s))dL_s \cr &=&\int_0^t\beta _sf(s,s)dL_s+\int_0^t(\int_s^t\beta _udf_u(u,s))dL_s, \end{eqnarray*}
from which we deduce formula (18).

\section{Appendix}
\subsection{A process with finite energy but without quadratic variation along the dyadics}
Let $D_k$ be the $k$-th dyadic subdivision of $[0,1]$, that is, $\displaystyle t^k_j={j\over 2^k}$, $0 \leq j\leq 2^k$. We are going to build a deterministic function $x$ such that $S_k=1$ if $k$ is even and $k\geq 2$, and $S_k=2$ for $k$ odd. Such a function obviously has finite energy along the sequence $(D_k)_k$ (and indeed its energy is equal to 2, although it would be equal to 1 along the sequence $(D_{2k})_k$), but it has no quadratic variation since the sequence $(S_k)$ has 2 accumulation points. Let us begin by defining $x_0=x_1=0$ and $x_{1/2}=1$, so that $S_1=2$.
At the second step, we define $x_{1/4}=x_{3/4}=1/2$, so that $S_2=1$. In order to make our construction clear, we go into details for the third step. We want to define $x_{j/8}$ for odd $j$ in such a way that $S_3=2$. The idea is to compute $x_{j/8}$ such that
$$\Bigl(x_{j \over 8}-x_{{j-1}\over 8}\Bigr)^2+\Bigl(x_{{j+1} \over 8}-x_{{j}\over 8}\Bigr)^2=2\times \Bigl(x_{{j+1} \over 8}-x_{{j-1}\over 8}\Bigr)^2.$$
Actually, this amounts to finding a solution $y$ of an equation of the form
\begin{equation}\label{eq2} (a-y)^2+(b-y)^2=2\times (a-b)^2. \end{equation}
As equation (\ref{eq2}) has two solutions, namely $((1+\sqrt 3)a+(1- \sqrt 3)b)/2$ and $((1-\sqrt 3)a+(1+ \sqrt 3)b)/2$, we have 2 possible choices for each $x_{j/8}$ with odd $j$ in order that $S_3=2$. This process is then iterated as follows:

1. Assume that we have constructed $x_{j\over 2^{2k-1}}$, $0\leq j\leq 2^{2k-1}$, such that $S_{2k-1}=2$ for some $k$. Then we put $x_{2j+1\over 2^{2k}}=(x_{2j\over 2^{2k}}+x_{2j+2\over 2^{2k}})/2$ (so that it is the middle of its neighbours). Then it is readily checked that $S_{2k}=1$.

2. Now we have to choose the $x_{2j+1\over 2^{2k+1}}$'s. We proceed as was done above for $k=1$. Namely, we can always choose $y=x_{2j+1\over 2^{2k+1}}$ so that it solves equation (\ref{eq2}) with $a=x_{2j\over 2^{2k+1}}$ and $b=x_{2j+2\over 2^{2k+1}}$, and the result follows the same lines as for $k=1$.

It remains to check that we can build a real continuous function $x$ on $[0,1]$ with the specified values on the dyadics. Let $x^n$ be the piecewise linear function joining the points constructed at rank $n$. We will show that the sequence $(x^n)$ satisfies a uniform Cauchy criterion, which will give the claim. First note that it is obvious (again from the solutions of equation (\ref{eq2})) that any two neighbours at rank $2k$ or $2k+1$ are at distance at most $\Bigl({1+\sqrt 3\over 4}\Bigr)^{k}$ from each other. In other words, we always have
\begin{equation}\label{voisins} |x^n_{i+1\over2^n}-x^n_{i\over 2^n}|\leq \Bigl({1+\sqrt 3\over 4}\Bigr)^{n\over 2}. \end{equation}
Now, fix $\varepsilon>0$. For positive $n$ and $p$ and for $t\in[0,1]$, let $t^n_i$ be the point of $D_n$ closest to $t$, and $t^{n+p}_j$ the point of $D_{n+p}$ closest to $t$. Without loss of generality we will assume that $t^n_i\leq t^{n+p}_j \leq t^n_{i+1}$. We then have
\begin{eqnarray}\label{ineq} |x^n_t-x^{n+p}_t|&\leq |x^n_t-x^{n}_{t^n_i}|+|x^{n}_{t_i^n}-x^{n+p}_{t_i^n}|\cr &+|x^{n+p}_{t_i^n}-x^{n+p}_{t^{n+p}_j}|+|x^{n+p}_{t^{n+p}_j}-x^{n+p}_t|. \end{eqnarray}
From (\ref{voisins}) and the definition of $x^n$ it is obvious that the first and last terms in the right-hand side of (\ref{ineq}) can be made as small as wanted (say less than $\varepsilon/3$), uniformly in $t$ and $p$, for $n$ large enough. Moreover, as $D_n\subset D_{n+p}$, the second term is identically zero. Hence it remains to estimate $|x^{n+p}_{t_i^n}-x^{n+p}_{t^{n+p}_j}|$ uniformly (note that, in principle, up to $2^p$ points of $D_{n+p}$ may lie between $t_i^n$ and $t^{n+p}_j$, so that there is no trivial majorization of such a sequence uniform in $p$). Let us put $t_i^n=k/2^n$. We assume for instance that $l:=x_{k/2^n}\leq h:=x_{(k+1)/2^n}$ and that $n$ is odd.
Then clearly $l \leq x_{(2k+1)/2^{n+1}}\leq h$ (since $x_{(2k+1)/2^{n+1}}=(l+h)/2$), and if we choose $x_{(4k+1)/2^{n+2}}=((1+\sqrt 3)l+(1- \sqrt 3)(l+h)/2)/2$ and $x_{(4k+3)/2^{n+2}}=((1+\sqrt 3)(h+l)/2+(1- \sqrt 3)h)/2$, we can see that both values lie in $[l-(\sqrt 3 -1)(h-l)/4, h]$. Keeping (\ref{voisins}) in mind, we conclude that for every point $s$ in $D_{n+2}\cap [k/2^n, {(k+1)/ 2^n}]$ and for every $p \geq 2$,
$$l-(\sqrt 3 -1)/4\Bigl({1+\sqrt 3\over 4}\Bigr)^{n\over 2}\leq x^{n+p}_s\leq h.$$
If we iterate this procedure, it is now straightforward that for every point $s$ in $D_{n+2m}\cap [k/2^n, {(k+1)/ 2^n}]$ and for every $p \geq 2m$,
$$l-{{\sqrt 3-1}\over 4}\Bigl({1+\sqrt 3\over 4}\Bigr)^{n\over 2}\sum_{i=0}^{m-1} \Bigl({1+\sqrt 3\over 4}\Bigr)^i\leq x^{n+p}_s\leq h, $$
and eventually, for every point $s$ in $\bigcup_m D_{n+2m}\cap [k/2^n, {(k+1)/ 2^n}]$ and for every $p \geq 0$,
$$l-{{\sqrt 3 -1}\over 4}\Bigl({1+\sqrt 3\over 4}\Bigr)^{n\over 2}{4\over 3-\sqrt 3}\leq x^{n+p}_s\leq h. $$
At last, it follows that the third term in the right-hand side of (\ref{ineq}) can be made as small as wanted, say less than $\varepsilon/3$, for $n$ big enough, uniformly in $t$ and $p$, and finally we have checked the uniform Cauchy criterion for the sequence of functions $(x^n)$. Hence this sequence converges to a continuous function $x$ such that, by construction, for every $k$
$$\sum_{i\in D_{2k}}(x_{i+1}-x_i)^2=1$$
and
$$\sum_{i\in D_{2k+1}}(x_{i+1}-x_i)^2=2.$$
This function has finite energy, but no quadratic variation along the dyadics.

\subsection{A continuous function with discontinuous pre-quadratic variation}
We consider the function introduced in \cite{CS}, Example 1, that is, the piecewise affine function $X$ such that $X_t=0$ at each $t=1-2^{1-2p}$, $X_t=1/p^{1/2}$ for $t=1-2^{-2p}$, and $X$ is affine between these points. If we define moreover $X_1=0$, $X$ is a continuous function on $[0,1]$. It is clear that $X$ is a function of finite variation, hence of zero quadratic variation, on every $[0,t]$ with $t<1$. On the other hand, it was proven in \cite{CS} that $X$ has an infinite quadratic variation on $[0,1]$ along $S:=\{1-2^{-2k}, k\geq 1\}$. For $n>0$, we define now a subdivision $\pi_n$ of $[0,1]$ as follows:
$$\pi_n=\bigcup_{j\leq 2^{2n}-1} \{j2^{-2n}\}\cup\bigcup_{k\geq n}\{1-2^{-2k}\}.$$
It is straightforward that for every $n$, $X$ has an infinite quadratic variation along $\pi_n$, although its quadratic variation along $\pi_n\cap[0,t]$ goes to zero as $n\to\infty$ for every $t<1$. Note that if we modify our example so that $X_t=1/p$ for $t=1-2^{-2p}$, $p>0$, everything else remaining unchanged, the pre-quadratic variation of $X$ on $[0,t]$ is equal to zero if $t<1$, but finite and non zero for $t=1$.

\subsection{A continuous in probability process may admit no c\`adl\`ag modification}
Let $X$ be the piecewise affine function such that $X_t=0$ at each $t=1-2^{1-2p}$, $X_t=1$ for $t=1-2^{-2p}$, and $X$ is affine between these points. If we define moreover $X_t=0$ outside $[0,1)$, we get a discontinuity of the second kind at 1. Now define the process $Y$ as follows: $Y_t=X_{t-T}{\bf 1}_{t\geq T}$, where $T$ is a random variable uniformly distributed on $[0,1]$; then $Y$ is continuous in probability, but every path of $Y$ has almost surely a discontinuity of the second kind between times 1 and 2.
Note that this result remains true even if we ask our process to have a quadratic variation along a sequence $(\pi_n)$.
\begin{thebibliography}{CE99}
\bibitem{BM} M.A. Berger, V.J. Mizel, Volterra equations with It\^o integrals I, {\it J. Integral Equations} {\bf 2} (1980), 187--245.
\bibitem{CMS} F. Coquet, J. M\'emin and L. S\l omi\'nski, On non-continuous Dirichlet processes, {\it J. Theoret. Probab.} {\bf 16} (2003), 197--216.
\bibitem{CS} F. Coquet and L. S\l omi\'nski, On the convergence of Dirichlet processes, {\it Bernoulli} {\bf 5} (1999), 615--639.
\bibitem{DM} C. Dellacherie, P.A. Meyer, Probabilit\'es et potentiel, Chap. V \`a VIII, Hermann, Paris, 1980.
\bibitem{ER1} M. Errami, F. Russo, $n$-covariation, generalized Dirichlet processes and calculus with respect to finite cubic variation processes, {\it Stochastic Process. Appl.} {\bf 104} (2003), 259--299.
\bibitem{ER2} M. Errami, F. Russo, Covariation de convolution de martingales, {\it C. R. Acad. Sci. Paris} {\bf 326}, S\'erie I (1998), 601--606.
\bibitem{F850} H. F\"ollmer, Calcul d'It\^o sans probabilit\'es, {\it in S\'eminaire de Probabilit\'es XV}, Lecture Notes in Maths 850, Springer-Verlag (1981), 143--150.
\bibitem{F851} H. F\"ollmer, Dirichlet processes, {\it in S\'eminaire de Probabilit\'es XV}, Lecture Notes in Maths 851, Springer-Verlag (1981), 476--478.
\bibitem{GR} S.E. Graversen and M. Rao, Quadratic variation and energy, {\it Nagoya Math. J.} {\bf 100} (1985), 163--180.
\bibitem{J} J. Jacod, Convergence en loi de semimartingales et variation quadratique, {\it in S\'eminaire de Probabilit\'es XV}, Lecture Notes in Maths 850, Springer, Berlin, 1981.
\bibitem{JS} J. Jacod, A. Shiryaev, Limit Theorems for Stochastic Processes, Springer, Berlin, 1987.
\bibitem{M} M. M\'etivier, Notions fondamentales de la th\'eorie des probabilit\'es, Dunod, Paris, 1968.
\bibitem{P} P. Protter, Volterra equations driven by semimartingales, {\it Ann. Probab.} {\bf 13} (1985), no. 2, 519--530.
\bibitem{RV} F. Russo, P. Vallois, The generalized covariation process and It\^o formula, {\it Stochastic Process. Appl.} {\bf 59} (1995), 81--104.
\end{thebibliography}
\end{document}
\begin{document} \begin{abstract} Let $u_k(G,p)$ be the maximum over all $k$-vertex graphs $F$ of by how much the number of induced copies of $F$ in $G$ differs from its expectation in the binomial random graph with the same number of vertices as $G$ and with edge probability $p$. This may be viewed as a measure of how close $G$ is to being $p$-quasirandom. For a positive integer $n$ and $0<p<1$, let $D(n,p)$ be the distance from $p\binom{n}{2}$ to the nearest integer. Our main result is that, for fixed $k\ge 4$ and for $n$ large, the minimum of $u_k(G,p)$ over $n$-vertex graphs has order of magnitude $\Theta\big(\max\{D(n,p), p(1-p)\} n^{k-2}\big)$ provided that $p(1-p)n^{1/2} \to \infty$. \end{abstract} \title{How unproportional must a graph be?} \section{Introduction} \label{sec:intro} An important result of Erd\H{o}s and Spencer~\cite{ErdosSpencer71} states that every graph $G$ of order $n$ contains a set $S\subseteq V(G)$ such that $e(G[S])$, the number of edges in the subgraph induced by $S$, differs from $\frac12{|S|\choose 2}$ by at least $\Omega(n^{3/2})$; an earlier observation of Erd\H{o}s~\cite{Erdos63a} shows that this lower bound is tight up to the constant. More generally, it was shown in \cite{ErdosGoldbergPachSpencer88} that for graphs with density $p\in(\frac{2}{n-1},1-\frac{2}{n-1})$, there is some subset where the number of edges differs from expectation by at least $c\sqrt{p(1-p)}n^{3/2}$ (see \cite{BollobasScott06,BollobasScott11,BollobasScott15} for further results and discussion). When $p$ is constant, the above results can be equivalently reformulated in the language of graph limits as that the smallest cut-distance from the constant-$p$ graphon to an order-$n$ graph $G$ is $\Theta(n^{-1/2})$. Instead of defining all terms here (which can be found in Lov\'asz' book~\cite{Lovasz:lngl}), we observe that the cut-distance in this special case is equal, within some multiplicative constant, to the maximum over $S\subseteq V(G)$ of $\frac1{n^2}\, \Big|2e(G[S])-p|S|^2\Big|$. There are other measures of how close a graph $G$ is to the constant-$p$ graphon, which means measuring how close $G$ is to being $p$-quasirandom. Here we consider two possibilities, subgraph statistics and graph norms, as follows. For graphs $G$ and $H$, we denote by $N(H, G)$ the number of induced subgraphs of $G$ that are isomorphic to $H$. For example, if $v(H)=k\le n$, then the expected number of $H$-subgraphs in the binomial random graph $\G np$ (where each pair on the vertex set $[n]:=\{1,\dots,n\}$ is independently included as an edge with probability $p$) is \[ {\mathbf{E}}[N(H,\G np )]= \frac{n(n-1)\dots (n-k+1)}{|\mathrm{Aut}(H)|}\, p^{e(H)}(1-p)^{{k\choose2}-e(H)}, \] where $\mathrm{Aut}(H)$ is the group of automorphisms of $H$. Let $k\ge 2$ be a fixed integer parameter. For any graph $G$ on $n$ vertices and a real $0<p<1$, let \begin{equation} \label{eq:u_k_g} u_k(G,p) := \max \Big\{\,\big|\,N(F, G) - {\mathbf{E}}[N(F, \G np )]\,\big|\, : v(F) = k\Big\}, \end{equation} where the maximum is taken over all (non-isomorphic) graphs $F$ on $k$ vertices. The quantity $u_k(G,p)$ measures how far the graph $G$ is away from the random graph $\G np $ in terms of $k$-vertex induced subgraph counts. For example, $u_k(G,p)/n^k$ is within a constant factor (that depends on $k$ only) from the total variational distance between $\G kp$ and a random $k$-vertex subgraph of $G$.
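For concreteness, the quantity \eqref{eq:u_k_g} can be evaluated by brute force when $n$ and $k$ are small. The following Python sketch (an illustration only; the helper names are ours and not part of any standard library) computes $u_k(G,p)$ directly from the induced-subgraph counts of $G$ and the expectation formula displayed above.
\begin{verbatim}
# Brute-force evaluation of u_k(G,p); feasible only for small n and k.
# A graph is an n x n 0/1 adjacency matrix 'adj' on vertex set range(n).
from itertools import combinations, permutations
from math import comb

def canon(k, edges):
    """Canonical form of a k-vertex graph: lexicographically smallest
    edge list over all relabellings (fine for k <= 5)."""
    best = None
    for perm in permutations(range(k)):
        img = tuple(sorted(tuple(sorted((perm[u], perm[v]))) for u, v in edges))
        if best is None or img < best:
            best = img
    return best

def aut_size(k, edges):
    """|Aut(F)| by brute force over all vertex permutations."""
    eset = {tuple(sorted(e)) for e in edges}
    return sum(1 for perm in permutations(range(k))
               if {tuple(sorted((perm[u], perm[v]))) for u, v in eset} == eset)

def u_k(adj, n, k, p):
    """max over k-vertex graphs F of |N(F,G) - E[N(F,G(n,p))]|."""
    counts = {}                      # induced-subgraph counts of G, by canonical form
    for S in combinations(range(n), k):
        idx = {v: i for i, v in enumerate(S)}
        edges = [(idx[u], idx[v]) for u, v in combinations(S, 2) if adj[u][v]]
        c = canon(k, edges)
        counts[c] = counts.get(c, 0) + 1
    falling = 1                      # n(n-1)...(n-k+1)
    for i in range(k):
        falling *= n - i
    best, seen = 0.0, set()
    pairs = list(combinations(range(k), 2))
    for mask in range(2 ** len(pairs)):   # one representative per isomorphism class
        edges = [pairs[i] for i in range(len(pairs)) if mask >> i & 1]
        c = canon(k, edges)
        if c in seen:
            continue
        seen.add(c)
        expect = falling / aut_size(k, edges) * p ** len(edges) \
                 * (1 - p) ** (comb(k, 2) - len(edges))
        best = max(best, abs(counts.get(c, 0) - expect))
    return best

# Example: the 5-cycle with p = 1/2.
n = 5
adj = [[0] * n for _ in range(n)]
for u in range(n):
    v = (u + 1) % n
    adj[u][v] = adj[v][u] = 1
print(u_k(adj, n, 3, 0.5))   # prints 1.25
\end{verbatim}
For the $5$-cycle and $p=1/2$ the sketch returns $u_3(C_5,1/2)=1.25$; for instance, the expected number of triangles is $\frac{60}{6}\cdot\frac18=1.25$ while $C_5$ contains none.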
We are interested in estimating \begin{equation} \label{eq:u_k_n} u_k(n,p) := \min\{u_k(G,p): v(G)=n\}, \end{equation} the minimum value of $u_k(G,p)$ that a graph $G$ of order $n$ can have. Informally speaking, we ask how $p$-quasirandom a graph of order $n$ can be. Clearly, $u_2(n,p) < 1$ and $u_2(n,p) = 0$ if $p\binom{n}{2}$ is integer. In fact, if we denote by $D(n,p)$ the distance from $p\binom{n}{2}$ to the nearest integer, then $u_2(n,p) = D(n,p)$. The problem of constructing pairs $(F,p)$ with $u_3(F,p)=0$ (such graphs $F$ were called \emph{$p$-proportional}) received some attention because the Central Limit Theorem fails for the random variable $N(F,\G np )$ for such $F$, see~\cite{BarbourKaronskiRucinski89,Janson90flt,JansonNowicki91}. Apart from sporadic examples, infinitely many such pairs were constructed by Janson and Kratochvil~\cite{JansonKratochvil91} for $p=1/2$ and by Janson and Spencer~\cite{JansonSpencer92} for every fixed rational $p$; see K\"arrman~\cite{Karrman93} for a different proof of the last result. The main contribution of this paper is the following. \begin{theorem} \label{thm:main} (a) Let $k\ge 3$ be fixed and $p=p(n)\in(0,1)$ with $\frac{1}{p(1-p)} = o(n^{1/2})$. Then \[ u_k(n,p) = O\big(\max\{D(n,p),\, p(1-p)\}n^{k-2}\big). \] (b) Let $k\ge 4$ be fixed and $p=p(n)\in(0,1)$. Then \[ u_k(n,p) = \Omega\big(\max\{D(n,p),\, p(1-p)\}n^{k-2}\big). \] \end{theorem} Note that the existence of proportional graphs shows that the lower bound of Theorem~\ref{thm:main} does not extend in general to $k=3$. Another measure of graph similarity is the $2k$-th Schatten norm $\shatten pG{2k}$. Lemma~8.12 in~\cite{Lovasz:lngl} shows that the $4$-th Schatten norm defines the same topology as the cut-norm. Again, we define it only for the special case when we want to measure how $p$-quasirandom an $n$-vertex graph $G$ is, where we allow loops. Here, we take the (normalised) $\ell_{2k}$-norm of the eigenvalues $\lambda_1,\dots,\lambda_n$ of $M=A-pJ$, where $A$ is the adjacency matrix of $G$ and $J$ is the all-$1$ matrix: $$ \shatten pG{2k}:=\frac{(\lambda_1^{2k}+\dots+\lambda_n^{2k})^{1/2k}}n. $$ We remark that when $G$ has a loop, the corresponding diagonal entry in the matrix $A$ is~1. An equivalent and more combinatorial definition of the $2k$-th Schatten norm is to take $\shatten pG{2k}= t(C_{2k},M)^{1/2k}$, where $C_{2k}$ is the $2k$-cycle and $t(F,M)$ denotes the \emph{homomorphism density} of a graph $F$, which is the expected value of $\prod_{ij\in E(F)} M_{f(i),f(j)}$, where $f:V(F)\to [n]$ is a uniformly chosen random function, see \cite[Chapter 5]{Lovasz:lngl}. In other words, \begin{equation} \label{eq:shatten} \shatten pG{2k}= \left(n^{-2k}\sum_{f:\res{2k}\to [n]}\ \prod_{i\in\res{2k}} (A_{f(i),f(i+1)}-p)\right)^{1/2k}, \end{equation} where the sum is over all $n^{2k}$ maps $f:\res{2k}\to [n]$, from the integer residues modulo $2k$ to~$\{1,\dots,n\}$. We can show the following result. \begin{theorem} \label{th:shatten} Let $k\ge 2$ be a fixed integer. The minimum of $\shatten pG{2k}$ over all $n$-vertex graphs $G$ with loops allowed is \[ \Theta\left(\min\left\{p(1-p),\, p^{1/2}(1-p)^{1/2} n^{-(k-1)/2k}\right\}\right). \] \end{theorem} Hatami~\cite{Hatami10} studied which graphs other than even cycles produce a norm when we use the appropriate analogue of \eqref{eq:shatten}. He showed, among other things, that complete bipartite graphs with both parts of even size do.
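The equality between the spectral expression for $\shatten pG{2k}$ and \eqref{eq:shatten} is simply the identity $\sum_i\lambda_i^{2k}=\mathrm{trace}(M^{2k})$, since the sum in \eqref{eq:shatten} over all maps $f$ equals the trace of $M^{2k}$. The following short numerical check (an illustrative sketch of ours, with a random symmetric $0/1$ matrix playing the role of the adjacency matrix $A$, loops allowed) evaluates both expressions and confirms that they agree.
\begin{verbatim}
# Two equivalent computations of the 2k-th Schatten norm of M = A - pJ.
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 30, 0.4, 2

# random symmetric 0/1 matrix standing in for A (diagonal entries = loops)
A = rng.integers(0, 2, size=(n, n))
A = np.triu(A)
A = A + A.T - np.diag(np.diag(A))
M = A - p * np.ones((n, n))

# definition via eigenvalues of M
eig = np.linalg.eigvalsh(M)
via_eigenvalues = (np.sum(eig ** (2 * k))) ** (1.0 / (2 * k)) / n

# definition via the homomorphism density of the 2k-cycle:
# the sum over all maps f of prod_i M[f(i), f(i+1)] equals trace(M^{2k})
via_trace = (np.trace(np.linalg.matrix_power(M, 2 * k)) / n ** (2 * k)) \
            ** (1.0 / (2 * k))

print(via_eigenvalues, via_trace)   # the two values coincide up to rounding
\end{verbatim}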
We also prove a version of Theorem~\ref{th:shatten} for the norms studied in~\cite{Hatami10}, see Theorem~\ref{th:2k2m} of \secref{sec:shatten}. The rest of this paper is organised as follows. In \secref{sec:lower_bound} we prove the lower bound from \thmref{thm:main}. In \secref{sec:upper_bound} we prove the upper bound. We consider graph norms in \secref{sec:shatten}, in particular proving Theorem~\ref{th:shatten} there. The final section contains some open questions and concluding remarks. Throughout the paper, we adopt the convention that $k$ is a fixed constant and all asymptotic notation symbols ($\Omega$, $O$, $o$ and $\Theta$) are with respect to the variable $n$. To simplify the presentation, we often omit floor and ceiling signs whenever these are not crucial and make no attempts to optimise the absolute constants involved. \section{Lower bound for $u_k(n,p)$ in the range $k \ge 4$} \label{sec:lower_bound} The goal of this section is to prove that $u_k(n,p) = \Omega\big(\max\{D(n,p),\, p(1-p)\}n^{k-2}\big)$. More precisely, we will show that there exists a constant $\varepsilon= \varepsilon(k)>0$ such that $u_k(G,p) \ge \varepsilon\max\{D(n,p),\,p(1-p)\} n^{k-2}$, for all graphs $G$ on $n\ge k$ vertices and for all $0<p<1$. The following lemma shows that it is enough to prove the lower bound for $k=4$ only. \begin{lemma} \label{lem:step_up} For every $k\ge 2$ there is $c_k>0$ such that $u_{k+1}(G,p)\ge c_kn \cdot u_k(G,p)$ for every graph $G$ of order $n\ge k+1$ and for all $0<p<1$. \end{lemma} \begin{proof} Define $$u_F(G,p):=\big|N(F,G)-{\mathbf{E}}[N(F,\G np )]\big|. $$ Take a graph $F$ of order $k$ with $u_F(G,p)= u_{k}(G,p)$. Let $f(G)$ be the number of pairs $(A,x)$ where a $k$-set $A$ induces $F$ in $G$ and $x\in V(G)\setminus A$. Then $f(G)= (n-k)N(F,G)$ and ${\mathbf{E}}[f(\G np )]=(n-k){\mathbf{E}}[N(F,\G np )]$; thus these two parameters differ (in absolute value) by exactly $(n-k)u_k(G,p)$. On the other hand, $f(G)$ can be written as $\sum_{F'} N(F,F')N(F',G)$ where the sum is over non-isomorphic $(k+1)$-vertex graphs $F'$. The expectation of $f(\G np )$ obeys the same linear identity: \[ {\mathbf{E}}[f(\G np )]=\sum_{v(F')=k+1} N(F,F')\,{\mathbf{E}}[N(F',\G np)]. \] We conclude that \begin{eqnarray*} \frac{n}{k+1}\, u_k(G,p)&\le& (n-k)\,u_k(G,p)\ =\ \big|\,f(G)-{\mathbf{E}}[f(\G np )]\,\big|\\ &\le& \sum_{v(F')=k+1} N(F,F')\, u_{F'}(G,p)\ \le\ 2^{{k+1\choose 2}}\cdot (k+1)\cdot u_{k+1}(G,p). \end{eqnarray*} Thus the lemma holds with $c_k=2^{-{k+1\choose 2}}\, (k+1)^{-2}$. \end{proof} In the next lemma we prove one of the bounds for $u_4(n,p)$. We remark that it was implicitly proven in \cite[Proposition~3.7]{JansonKratochvil91}. \begin{lemma} \label{lem:k_equals_4} There exists an absolute constant $\varepsilon>0$ such that, for every $0<p<1$ and for all graphs $G$ on $n\ge 4$ vertices, the inequality $u_4(G,p) \ge \varepsilon p(1-p) n^2$ holds. \end{lemma} \begin{proof} Let $\varepsilon>0$ be a sufficiently small constant. Suppose that there is a graph $G$ of order $n\ge 4$ satisfying $u_4(G,p)<\varepsilon p(1-p) n^2$. By applying \lemref{lem:step_up} twice, we conclude that $u_2(G,p) <\varepsilon_1 p (1-p)$, where we set $\varepsilon_1:=\varepsilon/(c_2c_3)$ with the constants $c_i$ given by the lemma.
This implies that \begin{eqnarray} \bigg| e(G)^2 - {\mathbf{E}}\left[e(\G np )\right]^2\bigg| &\le& \big| e(G) - {\mathbf{E}}\left[e(\G np )\right]\big|\cdot \left(2p\binom{n}{2}+ \varepsilon_1 p(1-p)\right)\nonumber\\ &<& \varepsilon_1 p (1-p)\cdot 3p\binom{n}{2}\ =\ 3\varepsilon_1 p^2(1-p)\,\binom{n}{2}.\label{eq:eG2} \end{eqnarray} \comment{OP: Indeed, we need to show that $ \frac{\varepsilon p(1-p)}{c_2c_3}< p{n\choose 2}$, which is equivalent to $\varepsilon (1-p)< {n\choose 2}c_2c_3$, which holds if eg $\varepsilon<6c_2c_3$} For every graph $G$, we can write $e(G)^2$ as \begin{equation} \label{eqn:square_edges} e(G)^2 = \sum_{2\le v(F)\le 4} \alpha_F N(F,G), \end{equation} where $F$ in the summation ranges over non-isomorphic graphs satisfying $2\le v(F)\le 4$, and $\alpha_F\ge 0$ is a constant depending on $F$ only. Indeed, split ordered pairs $(e,e')\in E(G)^2$ according to the isomorphism type $F$ of $G[e\cup e']$. The number $\alpha_F$ of times that a given $F$-subgraph in $G$ is counted equals the number of ways to pick an ordered pair of edges from $E(F)$ whose union is the whole vertex set~$V(F)$. For example, if $F$ is an edge then $\alpha_F=1$, and if $v(F)=4$ then $\alpha_F$ is the number of ordered pairs of disjoint edges in $F$. Since ${\mathbf{E}}\left[e(\G np )^2\right] - {\mathbf{E}}\left[e(\G np )\right]^2 = {\mathbf{Var}}[e(\G np )]= p(1-p){n\choose 2}$ is the variance of $e(\G np)$, we have by~\eqref{eq:eG2} and the Triangle Inequality that \begin{equation} \label{eqn:lb_sq_edges} \bigg|e(G)^2 - {\mathbf{E}}\left[e(\G np )^2\right]\bigg|>p(1-p){n\choose 2}-3\varepsilon_1 p^2(1-p)\,\binom{n}{2}> \frac{p(1-p)}{2}\binom{n}{2}. \end{equation} \comment{OP: here we need $\frac{3\varepsilon p^2(1-p)}{c_2c_3}\binom{n}{2}<\frac{p(1-p)}{2}\binom{n}{2}$ for which $3\varepsilon<c_2c_3/2$ suffices} Moreover, the identity \eqref{eqn:square_edges} implies that ${\mathbf{E}}\left[e(\G np )^2\right] = \sum_{2\le v(F)\le4} \alpha_F\, {\mathbf{E}}[N(F,\G np )]$. Thus, by~\eqref{eqn:lb_sq_edges}, \begin{eqnarray*} \sum_{k=2}^4\sum_{v(F)=k} \alpha_F \,u_k(G,p) &\ge& \sum_{k=2}^4 \sum_{v(F)=k} \alpha_F \Big|N(F, G)-{\mathbf{E}}[N(F,\G np )]\,\Big|\\ &\ge& \bigg|\sum_{k=2}^4 \sum_{v(F)=k} \alpha_F \Big( N(F, G)-{\mathbf{E}}[N(F,\G np )]\,\Big)\bigg|\\ &=& \bigg|e(G)^2 - {\mathbf{E}}\left[e(\G np )^2\right]\bigg|\ >\ \frac{p(1-p)}{2}\binom{n}{2}. \end{eqnarray*} Thus for some $k\in\{2,3,4\}$, we have $u_k(G,p)\ge\varepsilon p(1-p)n^2$. \lemref{lem:step_up} implies that $u_4(G,p)>\varepsilon p(1-p) n^2$, contradicting our assumption and proving the lemma. \end{proof} The previous two lemmas give that $u_k(n,p) = \Omega(p(1-p) n^{k-2})$ for $k\ge 4$. Thus, in order to finish the proof of the lower bound, we need to show that $u_k(n,p) = \Omega(D(n,p)n^{k-2})$. The latter bound is a consequence of $u_2(n,p) = D(n,p)$ together with \lemref{lem:step_up}, thereby concluding the proof of Theorem~\ref{thm:main}(b). \section{Upper bound for $k\ge 3$} \label{sec:upper_bound} In this section, we prove that $u_k(n,p) = O(\max\{D(n,p), p(1-p)\} n^{k-2})$ for fixed $k\ge 3$ and for all $p=p(n)$ such that $\frac{1}{p(1-p)}= o(n^{1/2})$. We can assume, without loss of generality, that $p \le \frac12$. Indeed, if $\Ov{G}$ denotes the complement of $G$ then $u_k(G,p) = u_k(\Ov{G}, 1-p)$, which implies that $u_k(n,p)=u_k(n,1-p)$. Thus our assumption can be made because the bound $O(\max\{D(n,p), p(1-p)\}n^{k-2})$ is symmetric with respect to $p$ and $1-p$.
(Recall that $D(n,p) = u_2(n,p) = u_2(n,1-p) = D(n,1-p)$.) In addition, note that in the range $p\le \frac12$, it suffices to show that $u_k(n,p)=O(\max\{D(n,p), p\} n^{k-2})$. To prove the upper bound, we borrow some definitions, results, and proof ideas from~\cite{JansonSpencer92}. Following their notation, one can count the number of induced subgraphs of $G$ that are isomorphic to $H$ using the following identity \begin{equation} \label{eqn:num_iso} N(H,G) = \sum_{H'} \prod_{e\in E(H')} I_G(e) \prod_{e\in E(\overline{H'})} (1-I_G(e)), \end{equation} where we sum over all $H'$ isomorphic to $H$ with $V(H') \subseteq V(G)$, $I_G(e)$ is the indicator function that $e$ is an edge in $G$ and $\overline{H'}$ denotes the complement of the graph $H'$. Observe that the range of $H'$ taken in the outermost sum in~\eqref{eqn:num_iso} depends on $V(G)$ but not on $E(G)$; this will be useful when comparing $H$-counts in different graphs on the same vertex set. We define a related sum over the same range of $H'$: \begin{equation} \label{eqn:signed_sum} S(H,G) = S^{(p)}(H,G):= \sum_{H'} \prod_{e\in E(H')} (I_G(e) - p), \end{equation} where $p$ is as before. Rewriting~\eqref{eqn:num_iso} by replacing each factor $I_G(e)$ by $(I_G(e) - p) + p$ and each factor $1-I_G(e)$ by $(1-p)-(I_G(e)-p)$ and expanding, we obtain a linear combination of products $\prod_{e\in X} (I_G(e)-p)$, with each $X$ being some subset of unordered pairs of $V(G)$ involving at most $v(H)$ different vertices. All sets $X$ that are isomorphic to the same graph $F$ get the same coefficient, which we denote $a_{F,H}(n,p)$. The coefficient for $X=\emptyset$ (i.e.\ the constant term) is obtained by summing the same quantity $p^{e(H')}(1-p)^{e(\overline{H'})}$ over all summands $H'$; thus it is equal to the expected number of $H$-subgraphs in $\G np$. We separate this special term and re-write~\eqref{eqn:num_iso} as \begin{equation} \label{eqn:id_iso_signed} N(H,G) = {\mathbf{E}}[N(H,\G np)]+ \sum_{F \in \mc{F}_k} a_{F,H}(n,p) S(F,G), \end{equation} where $k=v(H)$ and $\mc{F}_k$ denotes the family of all graphs $F$ without isolated vertices satisfying $2\le v(F)\le k$. Also, note that $a_{F,H}(n,p)$ does not depend on $G$ and is bounded from above by $O(n^{v(H) - v(F)})$. In fact, one can show that $a_{F, H}(n,p) = O(p^{e(H)-\alpha} n^{v(H) - v(F)})$, where $\alpha$ is the maximum number of edges that a common subgraph of both $H$ and $F$ can have, but we will not need such an estimate. Thus, in order to prove that there exists a graph $G$ on $n$ vertices such that $u_k(G,p) = O(\max\{D(n,p), p\}n^{k-2})$, it suffices to show that there exists $G$ such that \begin{equation} \label{eqn:small_signed} S(F,G) = \left\{\begin{array}{ll} O(pn^{v(F)-2}), & \text{for all } F\in \mc{F}_k\setminus \{K_2\}, \\ O(D(n,p)), & \text{if } F = K_2. \end{array}\right. \end{equation} (Note that one cannot hope for $S(K_2,G)=O(p)$ in general; this is why we need two terms in the asymptotic formula for $u_k(n,p)$.) A natural candidate for $G$ in \eqref{eqn:small_signed} is the random graph $G\sim \G np$. Unfortunately, $G$ does not work ``out of the box''; namely,~\eqref{eqn:small_signed} typically fails for $F\in\mc{F}_k$ with $v(F)\le 3$. However, by changing the adjacencies of carefully chosen pairs we can steer these parameters to have the desired order of magnitude. The next lemma yields some bounds for $S(F,\G np )$. \begin{lemma} \label{lem:small_signed_random} Let $G\sim \G np $.
For all $F\in \mc{F}_k$, we have \[ {\mathbf{E}}[S(F,G)]=0\quad\text{and}\quad {\mathbf{E}}[S(F,G)^2] \le p^{e(F)}n^{v(F)}. \] \end{lemma} \begin{proof} By \eqref{eqn:signed_sum}, we have \[ {\mathbf{E}}[S(F,G)]=\sum_{F'} \enskip {\mathbf{E}}\left[ \prod_{e\in E(F')} (I_{G}(e)-p)\right], \] where the sum is over all $F'$ isomorphic to $F$ with $V(F') \subseteq V(G)$. Each expectation on the right-hand side vanishes, by independence and since ${\mathbf{E}}[I_{G}(e)]=p$. Thus ${\mathbf{E}}[S(F,G)]=0$. We similarly write \[ {\mathbf{E}}[S(F,G)^2]= \sum_{F', F''} \enskip{\mathbf{E}}\left[ \prod_{e\in E(F')} (I_{G}(e)-p) \prod_{e\in E(F'')} (I_{G}(e)-p)\right], \] where the sum is over all pairs $(F',F'')$ of graphs isomorphic to $F$ with $V(F') \cup V(F'') \subseteq V(G)$. The expectation term in the above sum vanishes when $F' \ne F''$ and it is equal to $(p-p^2)^{e(F)}\le p^{e(F)}$ when $F'=F''$. Since the number of possible choices for $F'$ is at most $\binom{n}{f}\cdot f!\le n^{f}$, where $f=v(F)$, we conclude that ${\mathbf{E}}[S(F,G)^2] \le p^{e(F)} n^{v(F)}$. \end{proof} Using Chebyshev's inequality (see, e.g., \cite[Theorem 4.1.1]{AlonSpencer16pm}), we have that, for all $\lambda>0$, \begin{equation} \label{eqn:cheby_1} {\mathbf{Pr}}\left[\,\big|S(F,\G np )\big| \ge \lambda\cdot p^{e(F)/2} n^{v(F)/2}\,\right] \le \lambda^{-2}. \end{equation} By the union bound combined with \eqref{eqn:cheby_1}, the random graph $G\sim \G np $ satisfies the following property with probability at least $0.96$. \noindent\textbf{Property A.}\enspace $|S(F,G)| \le 5|\mc{F}_k|^{1/2} p^{e(F)/2}n^{v(F)/2}$ for all graphs $F\in \mc{F}_k$. The inequality $p^{e(F)/2}n^{v(F)/2} \le p n^{v(F)-2}$ holds whenever $v(F)\ge 4$. This is because every graph on $4$ or more vertices in $\mc{F}_k$ has at least $2$ edges, since no vertex is isolated. In order to find a graph satisfying the conditions expressed in \eqref{eqn:small_signed}, we just need to adjust $G$ so that $S(K_2,G) = O(D(n,p))$ and $S(F,G)=O(pn^{v(F)-2})$ when $F\in \mc{F}_3\setminus \{K_2\}$. The family $\mc{F}_3\setminus \{K_2\}$ consists of two graphs: the triangle $K_3$ and the 2-path $P_2$, the unique graph on three vertices having exactly two edges. So, we just need to adjust $S(K_2,G)$, $S(K_3,G)$ and $S(P_2,G)$. This must be performed carefully, to prevent $S(F,G)$ from changing too much for graphs $F\in \mc{F}_k$ with $v(F)\ge 4$. Let us investigate what happens to $S(F,G)$ when we add or remove an edge. Note that by ``edges'', we generally mean edges in the complete graph, i.e., all pairs $ij$ with $i,j\in V(G)$, and not only the pairs that happen to be selected as the edges of $G$. For each pair $ij$ with $i,j\in V(G)$, let \begin{equation} \label{eqn:signed_sum_ij} S_{ij}(F,G) := S(F,G \cup \{ij\}) - S(F,G\setminus \{ij\}), \end{equation} where $G\cup \{ij\}$ and $G\setminus \{ij\}$ represent the graphs obtained from $G$ by adding and removing the edge $ij$, respectively. By expanding each of the two terms in \eqref{eqn:signed_sum_ij} using~\eqref{eqn:signed_sum}, we can write $S_{ij}(F,G)$ as the sum of $\prod_{e\in E(F')} (I_{G\cup \{ij\}}(e)-p)-\prod_{e\in E(F')} (I_{G\setminus \{ij\}}(e)-p)$ over all $F$-subgraphs $F'$ inside~$V(G)$. If $E(F')$ does not contain $ij$, then both products are identical.
Thus we have that \begin{equation}\label{eq:SijFG} S_{ij}(F,G)=\sum_{F'} \big((1-p)-(-p)\big)\prod_{e\in E(F') \setminus \{ij\}} (I_{G}(e)-p)=\sum_{F'} \prod_{e\in E(F') \setminus \{ij\}} (I_{G}(e)-p), \end{equation} where we sum over all $F'$ isomorphic to $F$ with $V(F') \subseteq V(G)$ and $ij \in E(F')$. The next lemma gives a bound for the expectation and the variance of $S_{ij}(F,\G np )$. \begin{lemma} \label{lem:small_signed_random_ij} Let $G\sim \G np $. For all $F\in \mc{F}_k$ with $v(F)\ge 3$ and all pairs $1\le i < j \le n$, we have \[ {\mathbf{E}}[S_{ij}(F,G)]=0\quad\text{and}\quad{\mathbf{E}}[S_{ij}(F,G)^2] \le k^2 p^{e(F) - 1} n^{v(F)-2}. \] \end{lemma} \begin{proof} The proof is similar to that of Lemma~\ref{lem:small_signed_random}. We have ${\mathbf{E}}[S_{ij}(F,G)] = 0$ by~\eqref{eq:SijFG}, the independence of the random variables $I_G(e)$ and the linearity of expectation. For the second part of the lemma, we write \[ {\mathbf{E}}[S_{ij}(F,G)^2]= \sum_{F', F''} \enskip{\mathbf{E}}\left[ \prod_{e\in E(F')\setminus \{ij\}} (I_{G}(e)-p) \prod_{e\in E(F'') \setminus \{ij\}} (I_{G}(e)-p)\right], \] where the sum is over all pairs $(F',F'')$ of graphs isomorphic to $F$ with $V(F') \cup V(F'') \subseteq V(G)$ and $\{i,j\} \in E(F') \cap E(F'')$. The expectation term in the above sum vanishes when $F' \ne F''$ and it is upper bounded by $p^{e(F) - 1}$ when $F'=F''$. Since the number of possible choices for $F'$ is at most $k^2n^{v(F)-2}$, we conclude that ${\mathbf{E}}[S_{ij}(F,G)^2] \le k^2p^{e(F) - 1}n^{v(F)-2}$, as desired. \end{proof} Take a pair $ij$ of vertices. For $0\le s\le 2$, let $Z_s=Z_s(ij)$ denote the number of vertices $z\in V(G)\setminus \{i,j\}$ such that exactly $s$ of the pairs $iz$ and $jz$ belong to $E(G)$. Let us express \begin{eqnarray*} Y_1\ =\ Y_1(ij)&:=&S_{ij}(P_2,G),\\ Y_2\ =\ Y_2(ij)&:=&S_{ij}(K_3,G), \end{eqnarray*} in terms of the random variables~$Z_0$ and~$Z_2$. When we compute $Y_1$ using~\eqref{eq:SijFG}, we have to sum over all $2$-paths containing the edge $ij$. Denoting the third vertex of the path by $z$, we get $$ Y_1=\sum_{z\in V\setminus\{i,j\}} (I_G(iz)+I_G(jz)-2p)=2(1-p) Z_2 + (1-2p)Z_1 -2pZ_0. $$ Using that ${\mathbf{E}}[Z_0]=(1-p)^2(n-2)$ and ${\mathbf{E}}[Z_2]=p^2(n-2)$ (or that ${\mathbf{E}}[Y_1]=0$), we derive that \begin{eqnarray} Y_1 &=& 2(1-p) Z_2 + (1-2p)(n-2-Z_0-Z_2) -2pZ_0\nonumber\\ &=&(Z_2-{\mathbf{E}}[Z_2])-(Z_0-{\mathbf{E}}[Z_0]).\label{eq:Y1} \end{eqnarray} Likewise, we obtain \begin{eqnarray} Y_2 &=&\sum_{z\in V\setminus\{i,j\}} (I_G(iz)-p)(I_G(jz)-p) \ =\ (1-p)^2 Z_2 -p(1-p)Z_1+p^2 Z_0\nonumber\\ &=&(1-p)(Z_2-{\mathbf{E}}[Z_2])+p(Z_0-{\mathbf{E}}[Z_0]).\label{eq:Y2} \end{eqnarray} The triple $(Z_0, Z_1, Z_2)$ has a multinomial distribution for $G\sim \G np $. In the next lemma we show that for any fixed rectangle $R\subseteq {\mathbb{R}}^2$ of positive area, there exists $\eta =\eta(R)>0$ such that $\left(\frac{Y_1}{\sqrt{pn}}, \frac{Y_2}{p\sqrt{n}}\right)\in R$ with probability at least $\eta$. Recall that we have assumed that $p\le 1/2$ and $p^2n\to\infty$.
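As a quick sanity check of \eqref{eq:Y1} and \eqref{eq:Y2} (an illustration of ours, not used in the proof), the following short Python snippet recomputes $Y_1=S_{ij}(P_2,G)$ and $Y_2=S_{ij}(K_3,G)$ directly from \eqref{eq:SijFG} on a sampled $\G np$ and compares them with the right-hand sides of \eqref{eq:Y1} and \eqref{eq:Y2}.
\begin{verbatim}
# Verify (eq:Y1) and (eq:Y2) for a fixed pair ij in a sampled G(n,p).
import random

random.seed(1)
n, p = 50, 0.3
i, j = 0, 1
adj = [[False] * n for _ in range(n)]
for u in range(n):
    for v in range(u + 1, n):
        if random.random() < p:
            adj[u][v] = adj[v][u] = True

others = [z for z in range(n) if z not in (i, j)]
I = lambda u, v: 1.0 if adj[u][v] else 0.0

# definition (eq:SijFG): sum over copies of F that contain the pair ij
Y1 = sum((I(i, z) - p) + (I(j, z) - p) for z in others)   # F = P_2
Y2 = sum((I(i, z) - p) * (I(j, z) - p) for z in others)   # F = K_3

Z0 = sum(1 for z in others if not adj[i][z] and not adj[j][z])
Z2 = sum(1 for z in others if adj[i][z] and adj[j][z])
EZ0, EZ2 = (1 - p) ** 2 * (n - 2), p ** 2 * (n - 2)

print(abs(Y1 - ((Z2 - EZ2) - (Z0 - EZ0))) < 1e-9)                 # (eq:Y1)
print(abs(Y2 - ((1 - p) * (Z2 - EZ2) + p * (Z0 - EZ0))) < 1e-9)   # (eq:Y2)
\end{verbatim}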
\begin{lemma} \label{lem:limit_distr} For fixed reals $\alpha_1<\alpha_2$ and $\beta_1<\beta_2$ there exists $\eta=\eta(\alpha_1, \alpha_2, \beta_1, \beta_2) > 0$ such that, for all large $n$, the probability of \begin{equation} \label{eqn:cond_vars} \alpha_1\le\frac{Y_1}{\sqrt{pn}}\le\alpha_2\quad\text{and}\quad \beta_1\le\frac{Y_2}{p\sqrt{n}}\le\beta_2 \end{equation} is at least $\eta$. \end{lemma} \begin{proof} \comment{Our former proof very sketchy...} Define \begin{eqnarray*} c&:=&\frac12\, \min\{\,\alpha_2-\alpha_1,\,\beta_2-\beta_1\,\},\\ C&:=& 2\max\big\{\,|\alpha_1|,|\alpha_2|,|\beta_1|,|\beta_2|\,\big\},\\ \delta&:=& \frac{c}{8\pi}\, \mathrm{e}^{-2C^2}\ >\ 0. \end{eqnarray*} Let us show that $\eta:=\delta^2$ works in the lemma. Consider the following $2\times 2$-matrix and its inverse: $$ A:=\left[\begin{array}{cc}-1 & \sqrt{p}\\ \sqrt{p} & 1-p\end{array}\right]\quad\mbox{with}\quad A^{-1}=\left[\begin{array}{cc} -1+p& \sqrt{p}\\ \sqrt{p} & 1\end{array}\right]. $$ Note that each entry of $A$ and $A^{-1}$ has absolute value at most $1$, so the linear maps given by these matrices are 2-Lipschitz in the $\ell_1$-distance. Thus if we let $S=S(n)$ be the square of side length $c$ with centre $(\alpha_0,\beta_0)^T:=A^{-1}(\frac{\alpha_1+\alpha_2}2,\frac{\beta_1+\beta_2}2)^T$, then the image of $S$ under $A$ lies inside the rectangle $R:=[\alpha_1,\alpha_2]\times [\beta_1,\beta_2]$ while $S$ itself is a subset of $A^{-1}R\subseteq [-C,C]^2$. (Here $(\alpha,\beta)^T$ means the column vector with entries $(\alpha,\beta)$.) The matrix $A$ was chosen to encode the linear relations~\eqref{eq:Y1} and~\eqref{eq:Y2} between $(Y_1,Y_2)$ and $(Z_0,Z_2)$, with an appropriate normalisation applied to each random variable. Specifically, it holds that \begin{eqnarray} A\left(\frac{Z_0-{\mathbf{E}}[Z_0]}{\sqrt{pn}},\frac{Z_2-{\mathbf{E}}[Z_2]}{p \sqrt{n}}\right)^T &=& \left(\frac{Y_1}{\sqrt{pn}},\, \frac{Y_2}{p\sqrt{n}}\right)^T. \label{eq:ZY} \end{eqnarray} \comment{Calculation: LHS of last equality is $$ \left(\frac{Z_2-{\mathbf{E}}[Z_2]}{p \sqrt{n}}\,\sqrt{p}-\frac{Z_0-{\mathbf{E}}[Z_0]}{\sqrt{pn}},\ \frac{(1-p)(Z_2-{\mathbf{E}}[Z_2])}{p \sqrt{n}}+\frac{Z_0-{\mathbf{E}}[Z_0]}{\sqrt{pn}}\,\sqrt{{p}}\right) $$} By~\eqref{eq:ZY} it is enough to show that, with probability at least $\eta$, we have \begin{eqnarray} \label{eqn:cond_Z0} \alpha_0-\frac c2\ \le \frac{Z_0-{\mathbf{E}}[Z_0]}{\sqrt{pn}} &\le& \alpha_0+\frac c2,\\ \label{eqn:cond_Z2} \beta_0-\frac c2 \ \le\ \frac{Z_2-{\mathbf{E}}[Z_2]}{p\sqrt{n}} &\le& \beta_0+\frac c2. \end{eqnarray} A version of the de Moivre-Laplace theorem (see e.g.~\cite[Theorem~1.6(i)]{Bollobas:rg}) states that, for any function $p=p(n)\in (0,1)$ with $p(1-p)n\to \infty$ and any reals $a<b$, if $X_n$ has the binomial distribution with parameters $(n,p)$, then \begin{equation}\label{eq:MoivreLaplace} \lim_{n\to\infty}\, {\mathbf{Pr}}\left[\, a\le \frac{X_n-np}{\sqrt{np(1-p)}}\le b\,\right] = \frac1{\sqrt{2\pi}}\int_a^b \mathrm{e}^{-x^2/2}\mathrm{d}x. \end{equation} Let $n$ be large. We begin by sampling $Z_2$. We know that $Z_2$ is distributed according to the binomial distribution: $Z_2\sim {\textup{Bin}}(n-2,p^2)$. Its variance is ${\mathbf{Var}}[Z_2]=p^2(1-p^2)(n-2)$. Let $Z_2^*:=(Z_2-{\mathbf{E}}[Z_2])/\sqrt{{\mathbf{Var}}[Z_2]}$ be the normalised version of~$Z_2$.
Note that the constraint \eqref{eqn:cond_Z2} is satisfied if and only if $Z_2^*$ belongs to $\gamma_n\cdot [\beta_0-\frac c2,\beta_0+\frac c2]$, where $\gamma_n:=p\sqrt n/\sqrt{{\mathbf{Var}}[Z_2]}$ and $y\cdot X:=\{y\cdot x: x\in X\}$ denotes the dilation of a set $X$ by a scalar~$y$. The de Moivre-Laplace theorem~\eqref{eq:MoivreLaplace} applies to $Z_2$ since we assumed that $p^2n\to \infty$ and $p\le 1/2$. Using $p\le 1/2$ again, we have that $\gamma_n$ is between, for example, $1$ and $2$. Note that the normal distribution assigns probability at least $2\delta$ to every interval of length $c$ inside $[-2C,2C]$ by the definition of~$\delta$. Let us show that the probability of~\eqref{eqn:cond_Z2} is at least $\delta$. If this is false, then by passing to a subsequence of counterexamples $n$ we can further assume that $\gamma_n$ and $\beta_0=\beta_0(n)$ converge to some $\gamma$ and $\beta$ respectively (with $\gamma\in [1,2]$ and $|\beta|\le C -c/2$). Let $I=[a,b]$ be the interval with centre at $\frac{a+b}2=\gamma\beta$ such that the de Moivre-Laplace theorem predicts the limiting probability $\frac32\, \delta$ for it. Its length $b-a$ is strictly smaller than $\gamma c$ because, as we have already observed, the probability that the normal variable hits $\gamma\cdot [\beta-\frac c2,\beta+\frac c2]$ is at least~$2\delta$. Thus, for all large $n$ from our subsequence, $I$ is a subset of $\gamma_n\cdot [\beta_0(n)-\frac {c}2,\,\beta_0(n)+\frac {c}2]$. However, our assumption states that each of the latter intervals is hit with probability less than $\delta$ by $Z_2^*$, contradicting the de Moivre-Laplace theorem when applied to the constant interval~$I$. Let $\alpha\in\{0,\dots,n-2\}$ be such that $|\beta-\beta_0|\le c/2$, where we set $\beta:=(\alpha-(n-2)p^2)/(p\sqrt{n})$. Let $X_\alpha$ be $Z_0$ conditioned on $Z_2=\alpha$. The random variable $X_\alpha$ has the binomial distribution with parameters $(1-p^2)(n-2) -\beta p \sqrt{n}$ and $\frac{(1-p)^2}{1-p^2}=\frac{1-p}{1+p}$. By our assumption $p^2 n\to\infty$, the term $\beta p \sqrt{n}=O(p\sqrt{n})$ is negligible when compared to $p^2n$. We have \begin{eqnarray*} {\mathbf{E}}[X_\alpha] &=& (1-p)^2(n-2) - \frac{1-p}{1+p}\cdot \beta p \sqrt{n},\\ {\mathbf{Var}}[X_\alpha] &=& (1+o(1))\, \frac{1-p}{1+p} \cdot \frac{2p}{1+p}\cdot (1-p^2)n\ =\ (2+o(1)) \frac{p(1-p)^2n}{1+p}. \end{eqnarray*} We see that ${\mathbf{Var}}[X_\alpha]$ lies between, for example, $np/4$ and $4np$. As before, a compactness argument based on the de Moivre-Laplace theorem shows that the infimum over all intervals $I\subseteq [-2C,2C]$ of length $c/2$ of the probability that $(X_\alpha-{\mathbf{E}}[X_\alpha])/\sqrt{{\mathbf{Var}}[X_\alpha]}$ belongs to $I$ is at least $\delta$ for all large~$n$. It follows that, when conditioned on any value $\alpha$ of $Z_2$ that satisfies~\eqref{eqn:cond_Z2}, the probability that~\eqref{eqn:cond_Z0} holds is at least~$\delta$. Therefore, the probability that~\eqref{eqn:cond_Z0} and~\eqref{eqn:cond_Z2} hold simultaneously is at least $\eta = \delta^2$, which concludes the proof.
\end{proof} Next, we put a pair $e\subseteq V(G)$ in at most one of the sets $E_1,\dots,E_5$ as follows: \begin{align*} E_1 &:= \{e: e\in E(G),\, \sqrt{pn}<Y_1(e) \Text{and} p\sqrt{n}<Y_2(e)\}, \\ E_2 &:=\{e: e\in E(G),\, \sqrt{pn}<Y_1(e) \Text{and} Y_2(e)<-p\sqrt{n}\}, \\ E_3 &:=\{e: e\in E(G),\, Y_1(e)<-\sqrt{pn} \Text{and} p\sqrt{n}<Y_2(e)\}, \\ E_4 &:=\{e: e\in E(G),\, Y_1(e)<-\sqrt{pn} \Text{and} Y_2(e)<-p\sqrt{n}\}, \\ E_5 &:=\{e: e\not\in E(G),\, |Y_1(e)|<0.1\sqrt{pn} \Text{and} |Y_2(e)|<0.1p\sqrt{n}\}. \end{align*} Also, let $E^{*}$ denote the set of pairs $ij$, where $i,j \in V(G)$ are distinct vertices such that \begin{equation} \label{eqn:cheby_2} |S_{ij}(F, G)| > 4k\cdot \varepsilon^{-1/2}|\mc{F}_k|^{1/2} p^{(e(F) - 1)/2}n^{v(F)/2-1} \end{equation} for at least one $F\in \mc{F}_k$. Informally speaking, the rest of the proof proceeds as follows. First, by using Lemma~\ref{lem:limit_distr} we show that, with reasonably high probability, the set $E_i\setminus E^*$ is ``large'' for each $i\in [5]$. Then, by applying a simple greedy algorithm, Corollary~\ref{cor:large_matching} gives a bounded degree graph $H'$ consisting of $\Omega(n)$ edges from each $E_i\setminus E^*$. We will modify the random graph $G$ to satisfy~\eqref{eqn:small_signed} by flipping some pairs, all restricted to~$H'$. First, by flipping the appropriate number of pairs inside either $E_1$ or $E_5$, we can make $|S(K_2,G)|$ equal to $D(n,p)$, the smallest possible value, thus satisfying one of the constraints in~\eqref{eqn:small_signed}. Next, by adding an edge from $E_5$ to $E(G)$ and removing an edge in $E_i$ from $E(G)$, we do not change $S(K_2,G)$, while the freedom to choose $i\in[4]$ lets us steer each of $S(K_3,G)$ and $S(P_2,G)$ in the right direction. The latter claim can be justified using the fact that all flipped pairs come from a bounded degree graph $H'$, so the updated values of $Y_1(e)$ and $Y_2(e)$ stay close to the initial values for every pair~$e\subseteq V(G)$. Furthermore, since $H'$ is disjoint from $E^*$, the effect on $S(F,G)$ of every $H'$-flip is small for each $F\in\mathcal{F}_k$. Thus we make~\eqref{eqn:small_signed} hold for $F\in\mathcal{F}_3$ without violating it for the graphs in $\mathcal{F}_k\setminus\mathcal{F}_3$. Let us provide all the details. Let $\varepsilon>0$ be sufficiently small, in particular so that $\eta=\varepsilon$ satisfies Lemma~\ref{lem:limit_distr} for any choice of $\alpha_1<\alpha_2$ and $\beta_1<\beta_2$ from $\{\,\pm0.1,\,\pm1,\,\pm2\,\}$. First, let us show that $|E_1|\ge \varepsilon pn^2/4$ asymptotically almost surely. Recall that $E_1$ consists of those pairs $e\subseteq V(G)$ for which \begin{equation} \label{eq:class1} e \in E(G),\quad \sqrt{pn} < Y_1(e)\quad\text{and}\quad p\sqrt{n} < Y_2(e). \end{equation} Let $I_1(e)$ be the indicator random variable of the event $e\in E_1$. For the random graph $G\sim \G np$, the first condition $e\in E(G)$ for $e$ to be in $E_1$ is independent of the other two conditions. Thus, by the choice of $\varepsilon$, we can assume that ${\mathbf{E}}[I_1(e)] \ge \varepsilon p$. We have $|E_1|=\sum_{e} I_1(e)$, hence ${\mathbf{E}}[\,|E_1|\,] \ge \varepsilon p \binom{n}{2}$.
We re-write the variance of $|E_1|$ as the sum of pairwise covariances of its components: with ${\mathbf{Cov}}[X,Y]:={\mathbf{E}}[XY]-{\mathbf{E}}[X]\,{\mathbf{E}}[Y]$ we have \begin{equation}\label{eq:VarE1} {\mathbf{Var}}[\,|E_1|\,] = \sum_{e\cap e' =\epsilonmptyset} {\mathbf{Cov}}[I_1(e),I_1(e')] +\sum_{e\cap e'\ne \epsilonmptyset} {\mathbf{Cov}}[I_1(e), I_1(e')], \epsilonnd{equation} Take any pairs $e=xy$ and $e'=x'y'$ that have no common vertices. Let us show that ${\mathbf{Cov}}[I_1(e),I_1(e')] = o(p^2)$. Informally speaking, $I_1(e)$ can only influence $I_1(e')$ through the four edges that connect $e$ to $e'$, while the probability that $Y_1$ or $Y_2$ is so close to the cut-off values in~\epsilonqref{eq:class1} as to be affected by these four edges is $o(1)$ by de Moivre-Laplace theorem. A bit more formally, we first expose all edges between the set $A:=e\cup e'$ and its complement $V(G)\setminus A$, and compute the ``current'' values $Y_1'$ and $Y_2'$ on $e$ and $e'$ where, for example, $$ Y_1'(e):=\sum_{z\in V(G)\setminus A} (I_G(xz)+I_G(yz)-2p) $$ takes into account those 2-paths on $V(G)$ that contain $e=xy$ as an edge but are vertex-disjoint from the other pair~$e'$. The values of $Y_1$ and $Y_2$ on $e$ and $e'$ can be computed from $Y_1'$ and $Y_2'$ by adding the contribution from the four edges connecting $e$ to $e'$. By~\epsilonqref{eq:Y1} and~\epsilonqref{eq:Y2}, each of these increments is at most $8$. If $Y_1'(e),Y_1'(e')\not\in \sqrt{pn}\pm8$ and $Y_2'(e),Y_2'(e')\not\in p\sqrt{n}\pm 8$, then the validity of the requirements on $Y_1$ and $Y_2$ in~\epsilonqref{eq:class1} does not depend on the four edges between $e$ and $e'$; thus the corresponding contribution to ${\mathbf{Cov}}[I_1(e),I_1(e')]$ is zero. The complementary event, that at least one of $Y_1'$ and $Y_2'$ is within additive constant 8 from the corresponding cut-off value, has probability $o(1)$ by an application of de Moivre-Laplace theorem. Furthermore, the constraints $e,e'\in E(G)$ in~\epsilonqref{eq:class1}, that are independent of everything else, contribute $O(p^2)$ to the covariance of $I_1(e)$ and $I_1(e')$. Thus indeed ${\mathbf{Cov}}[I_1(e),I_1(e')] = o(p^2)$. We see that the first sum in \epsilonqref{eq:VarE1} has $O(n^4)$ terms, each $o(p^2)$. Since the second sum has $O(n^3)$ terms, each at most $p^2$, the variance of $|E_1|$ is $o(n^4p^2)$. By Chebyschev's inequality, \[ {\mathbf{Pr}}[\,|E_1| < \varepsilon p n^2 / 4\,]\ \le\ {\mathbf{Pr}}[\,|E_1-{\mathbf{E}}[E_1]| > \varepsilon p n^2 / 5\,]\ =\ o(1), \] proving the required. The argument above implies that asymptotically almost surely $|E_i| \ge \varepsilon p n^2/4$ for all $i=1,\ldots, 4$. Similarly, one can show that $|E_5| \ge \varepsilon n^2/4$ asymptotically almost surely. (Note that $E_5$ might be much ``denser'' than the other sets because we dropped the requirement $e\in E(G)$.) Finally, using the standard Chernoff estimates one can show that asymptotically almost surely $\Delta(G) \le 2np$ for $G\sim \G np$. In particular, the following property is satisfied with probability at least $0.99$ when $n$ is large. \noindent \textbf{Property B.}\epsilonnspace $|E_i|\ge \varepsilon p n^2/4$ for $i=1,\ldots, 4$. Moreover, $|E_5| \ge \varepsilon n^2/4$ and $\Delta(G)\le 2p n$. Next, we would like to show that the set $E^*$ that was defined by~\epsilonqref{eqn:cheby_2} is small. Chebyschev's inequality together with \lemref{lem:small_signed_random_ij} implies that ${\mathbf{Pr}}[ij\in E^{*}] \le \varepsilon/16$. 
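In more detail (a short verification of the last estimate): for each fixed $F\in \mc{F}_k$ with $v(F)\ge 3$, Chebyshev's inequality and Lemma~\ref{lem:small_signed_random_ij} give
\[
{\mathbf{Pr}}\Big[\,|S_{ij}(F,G)|>4k\varepsilon^{-1/2}|\mc{F}_k|^{1/2}p^{(e(F)-1)/2}n^{v(F)/2-1}\,\Big]
\ \le\ \frac{k^2p^{e(F)-1}n^{v(F)-2}}{16k^2\varepsilon^{-1}|\mc{F}_k|\,p^{e(F)-1}n^{v(F)-2}}
\ =\ \frac{\varepsilon}{16\,|\mc{F}_k|},
\]
while any $F\in\mc{F}_k$ with $v(F)=2$ cannot trigger~\eqref{eqn:cheby_2} at all, since then $S_{ij}(F,G)\le 1$ by~\eqref{eq:SijFG}; summing over the at most $|\mc{F}_k|$ graphs in $\mc{F}_k$ gives ${\mathbf{Pr}}[ij\in E^{*}] \le \varepsilon/16$, as claimed.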
Hence ${\mathbf{E}}[\,|E^{*}|\,] \le \varepsilon n^2/32$. By Markov's inequality, ${\mathbf{Pr}}[\,|E^{*}| > \varepsilon n^2/8\,] < \frac{1}{4}$. Similarly, ${\mathbf{Pr}}[\,|E^{*}\cap E(G)| > \varepsilon p n^2/8\,] < \frac{1}{4}$. Thus by the union bound, $G\sim \G np $ satisfies the following property with probability at least $0.5$. \noindent \textbf{Property C.}\epsilonnspace $E^{*}$ has size at most $\varepsilon n^2/8$. Moreover, $|E^{*}\cap E(G)| \le \varepsilon p n^2/8$. Also, we state and prove the following simple result that asserts the existence of large matchings in relatively dense graphs. \begin{proposition} \label{prop:large_matching} Let $H$ be a graph and let $\Delta:=\Delta(H)$. There exists a matching in $H$ of size at least $\frac{e(H)}{2\Delta}$. In particular, if $m < \Delta$ then $H$ contains a subgraph $H'$ with maximal degree $\Delta(H')\le m$ and $e(H') \ge \frac{m}{4\Delta} e(H)$. \epsilonnd{proposition} \begin{proof} Let $M$ be a maximal matching in $H$, and assume $M$ has $k< \frac{e(H)}{2\Delta}$ pairs. All the edges of $H$ have at least one endpoint in $V(M)$. Hence \[ e(H) \le |V(M)|\cdot \Delta = 2k\cdot \Delta < e(H), \] a contradiction. We remark that the bound $\frac{e(H)}{2\Delta}$ is not tight but it suffices for our purposes. To construct $H'$, we start with the empty graph. At each step of the construction, we apply the first assertion of the proposition to the graph $H\setminus H'$, in order to obtain a matching $M$ having exactly $\cei{\frac{e(H)}{4 \Delta}}$ edges. We then add all the edges from $M$ to $H'$. We repeat this step exactly $m$ times. Since we always have $e(H') \le m \cdot \cei{\frac{e(H)}{4\Delta}} < \frac{e(H)}{2}$, and thus $e(H\setminus H') > \frac{e(H)}{2}$, it is always possible to find such $M$, in all the steps of the process. \epsilonnd{proof} An important corollary of \propref{prop:large_matching} is as follows. \begin{corollary} \label{cor:large_matching} Let $C>0$ be fixed. If Properties \textbf{B} and \textbf{C} simultaneously hold for a graph $G$ and $n$ is sufficiently large, then there exists a graph $H'$ having at least $Cn$ edges from each $E_i\setminus E^*$, $i=1,\ldots, 5$, such that $\Delta(H')\le 320C/\varepsilon$. \epsilonnd{corollary} \begin{proof} Because of Property \textbf{C}, we have $|E^*\cap E(G)| \le \varepsilon p n^2/8$ and $|E^{*}| \le \varepsilon n^2/8$, which, together with Property \textbf{B}, implies that $|E_i\setminus E^*| \ge \varepsilon p n^2/8$ for $i=1,\ldots, 4$, and $|E_5\setminus E^*| \ge \varepsilon n^2/8$. Let $H_i$ be the graph on $V(G)$ having edge set $E_i \setminus E^*$. We have $\Delta(H_i) \le \Delta(G) \le 2np$ for $i=1,\ldots,4$ and $\Delta(H_5) \le n$. Hence $\frac{e(H_i)}{\Delta(H_i)} \ge \frac{\varepsilon n}{16}$ for all $i=1,\ldots, 5$. By \propref{prop:large_matching} applied with $m=64 C / \varepsilon < \min\{\Delta(H_i) : i=1,\ldots, 5\}$, each $H_i$ contains a subgraph $H_i'$ having at least $\frac{m}4\cdot \frac{e(H_i)}{\Delta(H_i)}\ge C n$ edges such that $\Delta(H_i')\le m$. Let $H' = \bigcup_{i=1}^5 H_i'$. Clearly $\Delta(H') \le 5 m = 320 C/\varepsilon$ and $H'$ contains at least $Cn$ edges from each $E_i\setminus E^*$, thereby proving the corollary. \epsilonnd{proof} \begin{proof}[Proof of the upper bound in \thmref{thm:main}] Given $p\in (0,1/2]$ and $k\ge 3$, choose small $\varepsilon>0$ and then sufficiently large $C$. Let $n\to\infty$. By the union bound, $G\sim \G np $ satisfies Properties~\textbf{A}, \textbf{B} and \textbf{C} with probability at least $0.4$. 
Hence there exists a graph $G$ on $n$ vertices satisfying the three properties simultaneously. Fix such $G$. From Corollary~\ref{cor:large_matching}, there exists a graph $H'$ having at least $Cn$ edges from each $E_i\setminus E^*$, such that $\Delta:=\Delta(H') \le 320C/\varepsilon$. Let $E' =E(H')$. In what follows, we change $E(G)$ on pairs, all of which will belong to $E'$. Note that at any intermediate step, the effect of (for instance) removing an edge $ij\in E'\cap E_1$ from $E(G)$ on $S(P_2,G)$ and $S(K_3,G)$ is not quite given by the initial values of $Y_1(ij)$ and $Y_2(ij)$, since certain edges $iw$, $jw$ might have been changed (the exact effect of a single flip is recorded in the displayed identity below). But $E'$ was defined in such a way that there are at most $2\Delta=o(\sqrt{pn})$ changed edges which affect either $Y_1$ or $Y_2$. So the removal of $ij\in E_1\setminus E^*$ from $E(G)$ at any intermediate stage still decreases $S(P_2,G)$ by an amount between $\sqrt{pn}-2\Delta$ and $4k \varepsilon^{-1/2}|\mc{F}_k|^{1/2}\sqrt{pn}+2\Delta<\varepsilon^{-1} \sqrt{pn}$. Similarly, because $\Delta = o(p\sqrt{n})$, the same operation decreases $S(K_3, G)$ by an amount between $p\sqrt{n} - 2\Delta$ and $4k\varepsilon^{-1/2}|\mc{F}_k|^{1/2} p\sqrt{n} + 2\Delta < \varepsilon^{-1} p\sqrt{n}$. By Property \textbf{A}, we know that $$|S(K_2,G)|\ \le\ 5|\mc{F}_k|^{1/2}p^{1/2}n\ =:\ \tau. $$ If $S(K_2,G) \ge 1$, we can pick an $e\in E' \setminus E_5$ and remove it from $G$. This has the effect of reducing $S(K_2,G)$ by $1$. If $S(K_2,G) \le -1$, then we can pick an $e\in E'\cap E_5$ and add it to~$G$. This new edge increases the value of $S(K_2,G)$ by $1$. Iterate this process at most $\tau$ times, always using a different pair $e$, to obtain a graph $G$ such that $|S(K_2,G)| = D(n,p)$. This is possible because there are at least $C n$ edges from $E'\cap E_i$, for each $i$. Since we have flipped at most $\tau$ pairs, all belonging to $H'$, and each flip changes $S(K_3,G)$ (resp.\ $S(P_2,G)$) by at most $\varepsilon^{-1}p\sqrt{n}$ (resp.\ $\varepsilon^{-1}\sqrt{pn}$) in absolute value, the current graph satisfies $|S(K_3,G)| \le p S_0$ and $|S(P_2,G)| \le p^{1/2} S_0$, where \[ S_0=5|\mc{F}_k|^{1/2}p^{1/2}n^{3/2}+\tau \cdot \varepsilon^{-1}\sqrt n. \] Our next goal is to make both $|S(K_3,G)|$ and $|S(P_2,G)|$ small without changing $S(K_2,G)$. We repeat the following step $Cp^{1/2}n-\tau$ times. Consider the current graph $G$. There are four cases depending on whether each of $S(K_3,G)$ and $S(P_2,G)$ is positive or not. First suppose that they are both positive. Pick previously unused edges $e\in E' \cap E_1$ and $e'\in E'\cap E_5$, and replace $e$ with $e'$ in $G$. This operation preserves the value of $S(K_2,G)$, and has the effect of reducing both $S(K_3,G)$ and $S(P_2,G)$. It reduces $S(K_3,G)$ by between $(1-0.1)p\sqrt{n}-4\Delta\ge 0.8p\sqrt{n}$ and $2\varepsilon^{-1}p \sqrt n<pn$. Thus if (initially) $S(K_3,G)\ge pn$, then this value is lowered by at least $0.8p\sqrt{n}$. Regarding $S(P_2,G)$, the operation reduces it by between $0.8\sqrt{pn}$ and $2\varepsilon^{-1}\sqrt{pn}<pn$. Likewise, if $S(K_3,G)<0$ and $S(P_2,G)>0$, we replace an $e\in E'\cap E_2$ by an $e'\in E'\cap E_5$, and similarly in the other two cases. We iterate this process, always using edges $e$ and $e'$ that have not been used before. This is possible since $E'$ contains at least $C n$ edges from each $E_i$. Also, once one of $|S(K_3,G)|$ or $|S(P_2,G)|$ becomes less than $pn$, it stays so for the rest of the process.
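For concreteness, let us record the exact effect of a single flip (a small aside that makes the bookkeeping above explicit). If $ij\in E(G)$ and $G-ij$ denotes the graph obtained by deleting this edge, then changing $I_G(ij)$ from $1$ to $0$ alters precisely those terms of the sum~\eqref{eqn:signed_sum} defining $S(F,G)$ that come from copies of $F$ using $ij$ as an edge, each such term changing by $-1$ times the product of its remaining factors; by~\eqref{eq:SijFG},
\[
S(K_2,G-ij)=S(K_2,G)-1,\qquad
S(P_2,G-ij)=S(P_2,G)-Y_1(ij),\qquad
S(K_3,G-ij)=S(K_3,G)-Y_2(ij),
\]
where $Y_1(ij)$ and $Y_2(ij)$ are evaluated in the current graph; adding a non-edge has the opposite effect. This is why replacing an $E_5$-pair by an $E_i$-pair leaves $S(K_2,G)$ unchanged while steering $S(K_3,G)$ and $S(P_2,G)$ by the amounts described above.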
Since $(Cp^{1/2} n-\tau)\cdot 0.8\sqrt{n}> S_0$, we have that $\max\{|S(K_3,G)|, |S(P_2,G)|\}<pn$ at the end. The iterative process might change the value of $S(F,G)$ for $F\in \mc{F}_k$ with at least $4$ vertices. Take any such $F$ and let $f=v(F)$. Initially, $|S(F,G)|$ was at most $5|\mc{F}_k|^{1/2}p^{e(F)/2}n^{f/2}$ by Property~\textbf{A}. If we add to it $Cp^{1/2}n$, an upper bound on the number of the changed edges, multiplied by $4k\varepsilon^{-1/2}|\mc{F}_k|^{1/2}p^{(e(F) - 1)/2}n^{f/2-1}$, then this accounts for every copy of $F$ inside the vertex set $V(G)$ except perhaps those that contain at least two of the changed edges. (This estimate used the fact that none of the changed edges is in $E^*$.) A pair of two disjoint changed edges is trivially in at most $f^4 n^{f-4}$ copies of $F$. It remains to consider the case when $xy$ and $xz$ are two changed intersecting edges. Note that there are at most $Cp^{1/2}n\cdot 2\Delta$ choices of $(xy,xz)$. Consider a copy $F'$ of $F$ with vertex set $X\supseteq \{x,y,z\}$. If none of the pairs $e\subseteq X$ with $e\not\subseteq\{x,y,z\}$ is an element of $E(G)$ or a changed edge, then this $F'$ contributes at most $p$ in absolute value to the sum in~\epsilonqref{eqn:signed_sum} that defines $S(F,G)$. (Indeed, as $F$ has at least 4 non-isolated vertices, at least one edge of $F'$ has to intersect $X\setminus\{x,y,z\}$; thus the $F'$-term in~\epsilonqref{eqn:signed_sum} contains at least one factor~$-p$.) Otherwise, $X$ has to contain a changed edge or an edge from $E(G)$ that is not inside $\{x,y,z\}$. The number of such subgraphs for any given triple $\{x,y,z\}$ can be bounded by $$ 3(\Delta+2pn)f^4 n^{f-4} + ( Cp^{1/2}n + pn^2) f^5 n^{f-5}\le 2f^5pn^{f-3}. $$ Putting all together we obtain that, at the end of the process, \begin{eqnarray*} |S(F,G)| &\le& 5|\mc{F}_k|^{1/2}p^{e(F)/2}n^{f/2}+Cp^{1/2}n \cdot 4k\varepsilon^{-1/2}|\mc{F}_k|^{1/2}p^{(e(F) - 1)/2}n^{f/2-1} \\ &+& (Cp^{1/2}n)^2f^4n^{f-4}+Cp^{1/2}n\cdot 2\Delta\cdot (p\cdot f^3n^{f-3}+2f^5pn^{f-3}). \epsilonnd{eqnarray*} \hide{ This is $O(p n^{v(F)-2})$ if $e(F) \ge 3$. In the remaining case $e(F) \le 2$ and $v(F)\ge 4$, we have only one graph, namely $F=2 K_2$ (two parallel edges). It satisfies \[ S(K_2,G)^2 = p(1-p)\binom{n}{2} + (1-2p)S(K_2, G) + 2 S(P_2, G)+ 2S(2 K_2,G), \] therefore $|S(2K_2, G)| = O(pn^2)$.} This is $ O(pn^{f-2})$ since $F$ has $f\ge 4$ vertices and $e(F)\ge2$ edges. We conclude that the final graph $G$ satisfies $S(F,G) = O(p n^{v(F)-2})$ for all $F\in \mc{F}_k\setminus \{K_2\}$ and $S(K_2,G)=O(D(n,p))$. That is, we satisfied~\epsilonqref{eqn:small_signed}, which implies the required upper bound on $u_k(G,p)$. \epsilonnd{proof} \section{Shatten norms and other related norms} \label{sec:shatten} Note that the graphs in this section are allowed to have loops. When we define the complement $\Ov{G}$ of a graph $G$, loopless vertices are mapped to loops and vice versa. For a graph $G$ on $[n]$ and a function $p=p(n)$, let $M=A-pJ$ denote the shifted adjacency matrix of $G$, that is, \begin{equation} \label{eqn:shifted_adj} M_{ij}=\left\{\begin{array}{ll} 1-p,& \text{if } ij\in E(G),\\ -p,& \text{otherwise},\epsilonnd{array}\right.\qquad 1\le i,j\le n. \epsilonnd{equation} In order to make some forthcoming formulas shorter, we define $\epsilon(G):=\sum_{i=1}^n \sum_{j=1}^n A_{ij}$. In other words, $\epsilon(G)$ is the number of loops plus twice the number of non-loop edges in $G$. For example, $\epsilon(G)+\epsilon(\Ov{G})=n^2$. 
Let us prove Theorem~\ref{th:shatten} \begin{proof}[Proof of Theorem~\ref{th:shatten}{}] Let $s=2k$ and let $G$ be a graph (possibly with loops) on $[n]$, where $n\to\infty$. Without loss of generality we may assume that $p \le \frac12$. This is because $\shatten pG{s}^{s} = \shatten {(1-p)}{\Ov{G}}{s}^{s}$ and the expression in the statement we have to prove is symmetric with respect to $p$ and $1-p$. The matrix $M$ in~\epsilonqref{eqn:shifted_adj} is a symmetric real matrix so it has real eigenvalues $\lambda_1\ge\dots\ge\lambda_n$. For an even integer $s\ge 4$, we have \[ n^s\,\shatten pG{s}^{s}=\sum_{i=1}^n \lambda_i^s=\textup{tr}(M^s)=\sum_{i=1}^n (M^s)_{ii}, \] where $\textup{tr}$ denotes the trace of a matrix. From now on we split the analysis of the lower bound for $\shatten pG{s}^{s}$ into two cases. In the first case, we assume that $\epsilon(G) \ge \frac p2\,n^2$. This (together with $p\le \frac12$) implies that \begin{equation}\label{eq:SumMijSquare} \sum_{i=1}^n \lambda_i^2 =\sum_{i,j=1}^n M_{ij}^2=(1-p)^2\epsilon(G)+p^2\epsilon(\Ov{G})\ge \left((1-p)^2\,\frac p2+p^2\left(1-\frac p2\right)\right) \, n^{2} =\frac p2\,n^2. \epsilonnd{equation} By the inequality between the arithmetic and $k$-th power means for $k\ge 2$ applied to non-negative numbers $\lambda_1^2,\dots,\lambda_n^2$ (or just by the convexity of $x\mapsto x^k$ for $x\ge 0$), we conclude that \[ \left(\frac{\lambda_1^{2k}+\dots+\lambda_n^{2k}}n\right)^{1/k}\, \ge\, \frac{\lambda_1^2+\dots+\lambda_n^2}{n}\, \ge\, \frac{pn}2. \] Thus $n^{2k}\shatten Gp{2k}^{2k}=\sum_{i=1}^n\lambda_i^{2k}= \Omega(p^k n^{k+1})$, giving the required lower bound in the first case. In the second case, we assume that $\epsilon(G) < \frac p2\, n^2$. Since $\lambda_n$ is the smallest eigenvalue of $M$, we have $\lambda_n = \min \{\langle Mv, v\rangle : \|v\|_2 = 1\}$. So if we choose $v=\left(\frac{1}{\sqrt{n}},\ldots, \frac{1}{\sqrt{n}}\right) \in {\mathbb{R}}^n$, we obtain \begin{equation}\label{eq:SumMij} \lambda_n \le \langle Mv, v\rangle = \frac{(1-p)\epsilon(G) - p\epsilon(\Ov{G})}{n} \le \left((1-p)\frac p2-p(1-\frac p2)\right)n= -\frac{pn}2. \epsilonnd{equation} This implies that $\sum_{i=1}^n\lambda_i^{2k}\ge \lambda_n^{2k} =\Omega(p^{2k} n^{2k})$, thereby proving the lower bound in the second case. On the other hand, for the upper bound we have two constructions. Again we assume that $p \le \frac12$. The first construction is very simple: the empty graph. If $G$ is empty, a straightforward computation shows that $\shatten pG{2k} = p$, and this proves the upper bound whenever $p \le n^{-(k-1)/k}$. For the second construction, we consider $G\sim \I G_{n,p}^\text{loop}$ to be a random graph with loops, where every possible pair or loop belongs to $E(G)$ independently with probability $p$. Here we assume that $p>n^{-(k-1)/k}$. Let $X=n^{2k}\shatten pG{2k}^{2k}$. By~\epsilonqref{eq:shatten}, we have $X=\sum_{f:\res{2k}\to V(G)} X_f$, where $X_f=\prod_{i\in \res{2k}} M_{f(i),f(i+1)}$ and $M=A-pJ$ is as before. Then the expectation of $X_f$ is $0$ unless for every $i$ there is $j\not=i$ with $\{f(j),f(j+1)\}=\{f(i),f(i+1)\}$, that is, every edge of $C_{2k}$ is glued with some other edge. If $f$ is a map with ${\mathbf{E}}[X_f]\not=0$ then the image under $f$ of the edge set of $C_{2k}$ is a connected multi-graph where every edge (or loop) appears with even multiplicity, so it contains at most $k+1$ vertices. 
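To spell out the vertex bound (an added remark; the notation $H_f$ is introduced only for this purpose): writing $H_f$ for the image multigraph, its total edge multiplicity is $2k$ and every one of its edges carries multiplicity at least $2$, so
\[
e(H_f)\ \le\ \tfrac12\cdot 2k\ =\ k,
\qquad
v(H_f)\ \le\ e(H_f)+1\ \le\ k+1,
\]
the second inequality holding because $H_f$ is connected. The same observation, applied with $e$ distinct edges, shows that such a map $f$ uses at most $e+1$ distinct vertices, which is what underlies the bound $O(n^{e+1})$ below.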
Since the number of maps $f$ for which the image of $C_{2k}$ contains at most $e$ distinct edges (ignoring multiplicity) is $O(n^{e+1})$, we have \[ {\mathbf{E}}[X] = O\left(\sum_{e=1}^{k} n^{e+1}p^e\right) = O(n^{k+1}p^{k}), \] since $p > n^{-1}$. Now take an outcome $G$ such that the value of $X$ is at most its expected value. This finishes the proof of the theorem. \epsilonnd{proof} A related result of Hatami~\cite{Hatami10} shows that a complete bipartite graph $F=K_{2k,2m}$, with even part sizes $2k$ and $2m$, also gives a norm by a version of \epsilonqref{eq:shatten}. If $G$ is a graph on $[n]$, then this norm, for $G-p$, is $$ \|G-p\|_F:=t(F,M)^{1/(2k+2m)}=n^{-1} X^{1/(2k+2m)}, $$ where $M$ is as in \epsilonqref{eqn:shifted_adj}, \[ X:=\sum_{f:A\cup B\to V(G)}\ \prod_{a\in A}\ \prod_{b\in B} M_{f(a),f(b)}, \] and $A, B$ are fixed disjoint sets of sizes $2k$ and $2m$ respectively. \begin{theorem}\label{th:2k2m} Let $F=K_{2k,2m}$ with $1\le k\le m$. The minimum of $\|G-p\|_F$ over $n$-vertex graphs $G$ with loops allowed is \[ \Theta\left(\min\left\{p^{4km}(1-p)^{4km},\, p^{2km}(1-p)^{2km}n^{-k} \right\}^{1/(2m+2k)} \right). \] \epsilonnd{theorem} \begin{proof} For the same reasons stated in the beginning of the proof of Theorem~\ref{th:shatten} we may assume, without loss of generality, that $p \le \frac12$. We begin with the lower bound. We rewrite $X$ by grouping all maps $f:A\cup B\to V(G)$ by the restriction of $f$ to $A$. For every fixed $h:A\to V(G)$, we have \[ \sum_{g:B\to V(G)} \ \prod_{a\in A} \ \prod_{b\in B} M_{h(a),g(b)} = \left(\sum_{u\in V(G)}\ \prod_{a\in A}M_{h(a),u}\right)^{2m}\ge 0. \] As in the proof of Theorem~\ref{th:shatten}, we divide the analysis into two cases. In the first case, we assume that $\epsilon(G)\ge \frac p2\, n^2$. Let $\mc H$ be the set of all $h:A\to V(G)$ such that $h(2i-1)=h(2i)$ for all $i\in[k]$, where we assumed that $A=[2k]$. Note that $|\mc H| = n^k$. If $h\in \mc H$ we have \[ \sum_{u\in V(G)}\ \prod_{a\in A} M_{h(a),u}= \sum_{u\in V(G)} \ \prod_{i\in [k]} M_{h(2i),u}^2. \] Thus by the convexity of $x\mapsto x^{2m}$ for $x\in {\mathbb{R}}$, the convexity of $x\mapsto x^k$ for $x\ge 0$, and the calculation in~\epsilonqref{eq:SumMijSquare}, we have that \begin{align*} X &= \sum_{h:A\to V(G)} \left(\sum_{u\in V(G)} \prod_{a\in A} M_{h(a),u}\right)^{2m} \ge \sum_{h\in \mc H} \left(\sum_{u\in V(G)} \prod_{i\in [k]} M_{h(2i),u}^2\right)^{2m}\\ &\ge n^k\left(\frac{1}{n^k}\sum_{h\in \mc H}\sum_{u\in V(G)} \prod_{i\in [k]} M_{h(2i),u}^2\right)^{2m}= n^k\left(\frac{1}{n^k}\sum_{u\in V(G)} \left[\sum_{v\in V(G)} M_{v,u}^2\right]^k\right)^{2m} \\ &\ge n^k\left(\frac{1}{n^{k-1}} \left[\frac{1}{n} \sum_{u\in V(G)} \sum_{v\in V(G)} M_{v,u}^2\right]^k\right)^{2m} \ge n^k\left(\frac{1}{n^{k-1}} \left[\frac{(1-p)^2\epsilon(G)+p^2\epsilon(\Ov{G})}{n} \right]^k\right)^{2m} \\ &\ge n^k\left(\frac{1}{n^{k-1}} \left[\frac{pn}{2} \right]^k\right)^{2m} = \Omega\left(p^{2km}n^{k+2m}\right), \epsilonnd{align*} which proves the lower bound in the first case. In the second case, we assume that $\epsilon(G)< \frac p2\, n^2$. 
By the convexity of $x\mapsto x^{2m}$ and $x\mapsto x^{2k}$ for all $x\in {\mathbb{R}}$ and by the calculation in~\epsilonqref{eq:SumMij}, we have that \begin{align*} X &= \sum_{h:A\to V(G)} \left(\sum_{u\in V(G)} \prod_{a\in A} M_{h(a),u}\right)^{2m} \ge n^{2k}\left(\frac{1}{n^{2k}}\sum_{h:A\to V(G)}\sum_{u\in V(G)} \prod_{i\in [2k]} M_{h(i),u}\right)^{2m}\\ &=n^{2k}\left(\frac{1}{n^{2k}}\sum_{u\in V(G)} \left[\sum_{v\in V(G)} M_{v,u}\right]^{2k}\right)^{2m} \ge n^{2k}\left(\frac{1}{n^{2k-1}} \left[\frac{1}{n} \sum_{u\in V(G)} \sum_{v\in V(G)} M_{v,u}\right]^{2k}\right)^{2m}\\ &= n^{2k}\left(\frac{1}{n^{2k-1}} \left[\frac{(1-p)\epsilon(G)-p\epsilon(\Ov{G})}{n} \right]^{2k}\right)^{2m} = \Omega\left(p^{4km}n^{2k+2m}\right), \epsilonnd{align*} which proves the lower bound in the second case. We turn to the upper bound. We need two constructions. The first one is again the empty graph. If $G$ is empty then \[ \|G-p\|_F = p^{2km/(k+m)}, \] and this proves the upper bound whenever $p \le n^{-1/(2m)}$. The second construction is the random graph $G\sim\G np^\text{loop}$. Write $X$ as the sum of $X_f$ over $f:A\cup B\to V(G)$. Each $f$ with ${\mathbf{E}}[X_f]\not=0$ maps $E(K_{2k,2m})$ into a connected multi-graph where every edge appears with even multiplicity. Consider the equivalence relation on $A\cup B$ given by one such $f$, where two vertices in $A\cup B$ are equivalent if their images under $f$ coincide. If non-trivial classes (i.e.,\ those containing more than one vertex) miss some $a\in A$ and some $b\in B$, then $\{f(a),f(b)\}$ is a singly-covered edge, a contradiction. Thus, non-trivial classes have to cover at least one of $A$ or $B$ entirely, so the number of identifications is at least $\min\{|A|,|B|\}/2=k$. It follows that the image of $F$ under $f$ has at most $k+2m$ vertices. In fact, if the image of $F$ under $f$ contains exactly $2k + 2m -t$ vertices (where $t\ge k$), the number of distinct edges in the image of $F$ by $f$ is at least $4km - 2mt$. This is because every ``identification'' of vertices under the same equivalence class of $f$ can ``destroy'' at most $2m$ edges. Therefore \[ {\mathbf{E}}[X] = O\left(\sum_{t=k}^{2k+2m-1} n^{2k+2m-t}p^{4km-2mt}\right) = O(n^{k+2m}p^{2km}), \] since $p > n^{-1/(2m)}$. Now take an outcome $G$ such that the value of $X$ is at most its expected value. This finishes the proof of the theorem. \epsilonnd{proof} \section{Concluding remarks and open questions} \label{sec:concluding_remarks} Observe that the result of Chung, Graham, Wilson~\cite{ChungGrahamWilson89} implies that there cannot be a graph $G$ with $t(K_2,A)=p$ and $t(C_4,A)=p^4$ where $0<p<1$ and $A$ is the adjacency matrix of $G$. (Indeed, otherwise the uniform blow-ups of $G$ would form a quasirandom sequence, which is a contradiction.) This argument does not work with the subgraph count function $N(F,G)$. We do not know if the fact that $u_k(n,p)$ can be zero infinitely often for $k=3$ (when $p$ is rational) but not for $k=4$ can directly be related to the fact that quasirandomness is forced by $4$-vertex densities. Let $\G nm$ be the random graph on $[n]$ with $m$ edges, where all $\binom{\binom{n}{2}}{m}$ outcomes are equally likely. Janson~\cite{Janson94} completely classified the cases when the random variable $N(F,\G nm)$ satisfies the Central Limit Theorem where $n\to\infty$ and $m=\lfloor p{n\choose 2}\rfloor$. 
He showed that the exceptional $F$ are precisely those graphs for which $S^{(p)}(H,F)=0$ for every $H$ from the following set: connected graphs with $5$ vertices and graphs without isolated vertices with $3$ or $4$ vertices. It is an open question if at least one such pair $(F,p)$ with $p\not=0,1$ exists, see, e.g., \cite[Page 65]{Janson94} and \cite[Page 350]{Janson95}. Note that nothing is stipulated about $S^{(p)}(K_2,F)$. In fact, it has to be non-zero e.g.\ by \thmref{thm:main}; moreover, \cite[Theorem~4]{Janson94} shows that, for given $v(F)$ and $p$, the number of edges in such hypothetical $F$ is uniquely determined. \hide{ Using computer, K\"arrman~\cite{Karrman94} constructed a 64-vertex graph $F$ with $S^{(1/2)}(H,F)=0$ for $H=K_2$, $3$- and $4$-path, and $4$-cycle. As far as we know, it is open if there are any further such examples $F$. } This indicates that the problem of understanding possible joint behaviour of the $S$-statistics is difficult already for very small graphs. It would be interesting to extend \thmref{thm:main} to a wider range of $p$, or to other structures such as, for example, $r$-uniform hypergraphs with respect to different notions of quasirandomness (see~\cite{ConlonHanPersonSchacht12,LenzMubayi15,Towsner17}). \hide{Unfortunately, we could not determine the order of magnitude of the corresponding function $u_k$ for these structures.} \section*{References} \epsilonnd{document}
\begin{document} \title{Grades of Discrimination}top \noindent \section{Introduction}\label{s:Preliminaries} There are several relations which may fall short of genuine identity, but which behave like identity in important respects. Such \emph{grades of discrimination} have recently been the subject of much philosophical and technical discussion. Much of this discussion has been fuelled by considering \emph{the Principle of the Identity of Indiscernibles}: the claim that indiscernible objects are always identical. The Principle is obviously of direct metaphysical interest (see \cite{Hawley:II}). But, within the philosophy of mathematics, the Principle has risen to prominence via the question of whether platonistically-minded structuralists can countenance structures with indiscernible but distinct positions (see \cite[\S1]{Shapiro:STUR}). And, within the philosophy of physics, the central question has been whether quantum mechanics presented real-world counterexamples to the Principle (see \cite{Muller:RR}). As discussion has progressed, though, it has become increasingly clear that we must distinguish between different versions of `the' Principle, corresponding to different notions of indiscernibility. This has spurred several philosophers to investigate the logical properties of these different notions (see \cite{CaultonButterfield:KILM}, \cite{Ketland:II}, \cite{LadymanLinneboPettigrew:IDPL}). This paper completes that logical investigation. It exhaustively details, not just the properties of grades of indiscernibility, but the properties of all of the grades of {discrimination}. Indeed, this paper answers all of the mathematical questions that are natural at this level of abstraction. There are three broad families of grades of discrimination. Grades of \emph{indiscernibility} are defined in terms of satisfaction of certain first-order formulas, either with or without access to a primitive symbol that stands for genuine identity. They have been the focus of much recent philosophical attention. Grades of \emph{symmetry} are defined in terms of isomorphisms. More specifically, they are defined in terms of symmetries (also known as automorphisms) on a structure. These grades have received some philosophical attention, though in a slightly less cohesive way than the grades of indiscernibility. Finally, grades of \emph{relativity} are defined in terms of relativeness correspondences, analogously to the grades of symmetry. The notion of a relativeness correspondence has been studied by model-theorists, but is entirely absent from the philosophical literature on grades of discrimination. This paper rectifies this situation, introducing grades of relativity for the first time. I mentioned, earlier, that the Principle of the Identity of Indiscernibles has been the main motivating force for interest in grades of discrimination. But it is now worth pausing to consider broader reasons for investigating the logical properties of the grades of discrimination. The simplest reason to care about grades of discrimination is that they allow us to calibrate relationships of similarity and difference. More ambitiously, though, we might hope that some grade of discrimination will provide us with a genuinely illuminating answer to the question: {When are objects identical?} To take a simple example: set theory tells us that sets are identical iff they share all their members. Consequently, some grade of indiscernibility provides a suitable criterion of identity in set-theoretic contexts. 
To take a more contentious example: we might somehow become convinced that nature abhors a (non-trivial) symmetry. If so, then some grade of symmetry will provide a suitable criterion of identity in empirical contexts. The general hope, then, is that our grades of discrimination may furnish us with some non-trivial \emph{criterion of identity} (in some context or other). This search for a non-trivial criterion of identity need not be \emph{reductive}. We might simply seek an illuminating {constraint} upon the conditions under which objects can be distinct. That said, some philosophers have hoped to find a {reductive} criterion for identity; that is, they have hoped to replace the identity primitive with some defined grade of discrimination. This reductive ambition is most prominent among those who have defended some Principle of Identity of Indiscernibles; such philosophers have therefore focussed on the various grades of indiscernibility. However, reductive ambitions might, in principle, be served equally well by considering either grades of symmetry or grades of relativity. (I revisit this in \S\ref{s:Capturing}.) Advancing a criterion of identity is not, however, simply a matter of selecting some grade of discrimination. As we shall see, each grade of discrimination is defined with respect to a (model-theoretic) signature. So, consider a signature which contains just a few monadic predicates which stand for eye colour. If we present a non-trivial criterion of identity, in the form of a grade of discrimination defined with respect to \emph{this} signature, then we shall be forced to say, absurdly, that there is at most one person with brown eyes. Consequently, any philosopher who wants to advance a non-trivial criterion of identity must not only select some appropriate grade of discrimination, but must also stipulate the particular signature she has in mind. (I shall revisit this point several times below.) In this paper, though, I am not aiming to \emph{advance} any particular criterion of identity. My aim is only to provide a mathematical toolkit for anyone who is interested in criteria of identity, whether reductive or non-reductive. That toolkit is structured around three main results. Theorem \ref{thm:BigMap} completely characterises the entailments between the grades of discrimination. Theorem \ref{thm:GaloisConnection} establishes a Galois Connection between isomorphisms and relativeness correspondences, which enables us better to understand the relationships between grades of symmetry and grades of relativity. And Theorem \ref{thm:NoSven}, is a Beth--Svenonius Theorem for logics without identity. By combining these three results, I answer several subsidiary questions concerning the grades of discrimination, including: which grades are equivalence relations (\S\ref{s:Equivalence}); which grades can be captured using sets of first-order formulas (\S\ref{s:Capturing}); how they behave in finitary cases (\S\ref{s:FinitaryCase}); and how they behave in elementary extensions of structures (\S\ref{s:DefinabilityTheory} and \S\ref{s:AllElementaryExtensions}). I now state some notational conventions. I always use `$\lang{L}$' to denote an arbitrary signature, i.e.\ a collection of constants, predicates and function-symbols. 
The philosophical discussion of grades of indiscernibility tends to be restricted to \emph{relational} signatures, i.e.\ signatures which contain only predicates.\footnote{An exception is \citeauthor{LadymanLinneboPettigrew:IDPL} \cite[\S6]{LadymanLinneboPettigrew:IDPL}.} There are reasonable philosophical motivations for this: if we assume that each constant names exactly \emph{one} object, then we seem to presuppose that we understand rather a lot about the notion of identity before we begin (see e.g.\ \cite{Black:II}, \cite[pp.\ 40--1]{CaultonButterfield:KILM}); more generally, the very idea of a \emph{function} seems to presuppose the notion of identity; hence, if we want to avoid prejudging certain philosophical questions about identity, it might be wise to restrict our attention to relational signatures. The model-theoretic discussion of these issues is, though, less often restricted to relational signatures. There is a sensible technical motivation for this: many of the results hold in the more general case. Since this paper aims to provide philosophers with technical results, I shall allow signatures to contain both constants and functions, but I shall comment on the relational case when it is interestingly different. For technical ease, I treat constants as $0$-place function-symbols. Where $\lang{L}$ is a signature, the $\lang{L}\yesstroke$-formulas are the first-order formulas formed in the usual way using any $\lang{L}$-symbols and any symbols from standard first-order logic with identity. In particular, then, they may contain the symbol `$=$', which always stands for (genuine) identity. The $\lang{L}\nostroke$-formulas are those formed \emph{without} using the symbol `$=$'. $\lang{L}\yesstroke_n$ is the set of $\lang{L}\yesstroke$-formulas with free variables among `$v_1$', $\ldots$, `$v_n$'; similarly for $\lang{L}\nostroke_n$. I use swash fonts for structures and italic fonts for their associated domains. So, where $\model{M}$ is an $\lang{L}$-structure, its domain is $M$. Where $\overline{e} = \langle e_1, \ldots, e_n\rangle$ and $\pi$ is a function, $\pi(\overline{e}) = \langle \pi(e_1), \ldots, \pi(e_n)\rangle$. Where $\Pi$ is a two place relation, I write $\overline{d}\Pi\overline{e}$ to abbreviate $\langle d_{1}, e_{1}\rangle, \ldots, \langle d_{n}, e_{n}\rangle \in \Pi$. 
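By way of illustration (an added example; the signature is chosen only for this purpose): if $\lang{L}$ contains just one two-place predicate $R$, then $\exists v_2\, (R(v_1,v_2) \wedge v_1 \neq v_2)$ is an $\lang{L}\yesstroke_1$-formula but not an $\lang{L}\nostroke_1$-formula, since it uses the identity symbol, whereas $\exists v_2\, R(v_1,v_2)$ belongs to $\lang{L}\nostroke_1$.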
\section{Twelve grades of discrimination}\label{s:TwelveGrades} I start by defining six \emph{grades of indiscernibility}; three grades of $\lang{L}\nostroke$-indiscerni\-bi\-lity, and three grades of $\lang{L}\yesstroke$-indiscernibility.\footnote{This family of definitions has a long philosophical heritage, e.g.: \citeauthor{HilbertBernays:GM} \cite[\S5]{HilbertBernays:GM}; Quine \cite[pp.\ 230--2]{Quine:WO}, \cite{Quine:GD}; \citeauthor{CaultonButterfield:KILM} \cite[\S2.1, \S3.2]{CaultonButterfield:KILM}; \citeauthor{Ketland:II} \cite[pp.\ 306--7]{Ketland:SII}, \cite[Definitions 2.3, 2.5]{Ketland:II}; \citeauthor{LadymanLinneboPettigrew:IDPL} \cite[Definition 3.1, \S6.4]{LadymanLinneboPettigrew:IDPL}.} \begin{define}\label{def:Indiscernibility} For any $\lang{L}$-structure $\model{M}$ with $a, b \in M$: \begin{enumerate}[nolistsep] \item $a \mathrel{\indiscernibleno\monadstroke} b$ in $\model{M}$ \text{iff} $\model{M} \models \phi(a) \liff \phi(b)$ for all $\phi \in \lang{L}\nostroke_1$ \item $a \mathrel{\indiscernibleno\relstroke} b$ in $\model{M}$ \text{iff} $\model{M} \models \phi(a,b) \liff \phi(b,a)$ for all $\phi \in \lang{L}\nostroke_2$ \item $a \indiscernibleno b$ in $\model{M}$ \text{iff} $\model{M} \models \phi(a, \overline{e}) \liff \phi(b, \overline{e})$ for all $n < \omega$, all $\phi \in \lang{L}\nostroke_{n+1}$ and all $\overline{e} \in M^n$ \end{enumerate} Similarly: \begin{enumerate}[nolistsep]\addtocounter{enumi}{3} \item $a \mathrel{\indiscernibleyes\monadstroke} b$ in $\model{M}$ \text{iff} $\model{M} \models \phi(a) \liff \phi(b)$ for all $\phi \in \lang{L}\yesstroke_1$ \item $a \mathrel{\indiscernibleyes\relstroke} b$ in $\model{M}$ \text{iff} $\model{M} \models \phi(a,b) \liff \phi(b,a)$ for all $\phi \in \lang{L}\yesstroke_2$ \item $a = b$ in $\model{M}$ \text{iff} $a$ is identical to $b$ \end{enumerate} Here, `$_\textsf{p}$' indicates \emph{pairwise} indiscernibility; `$_\textsf{m}$' indicates \emph{monadic} indiscernibility; and no subscript indicates \emph{complete} indiscernibility. \end{define}\noindent There are several alternative characterisations of $\indiscernibleno$, two of which will prove useful (see \citeauthor{CasanovasEtAl:EEEFL} \cite[p.\ 508]{CasanovasEtAl:EEEFL} and \citeauthor{Ketland:II} \cite[Theorem 3.17]{Ketland:II}): \begin{lem}\label{lem:AlternateCharacterisationOfFOI} For any $\lang{L}$-structure $\model{M}$, the following are equivalent: \begin{enumerate}[nolistsep] \item $a \indiscernibleno b$ in $\model{M}$ \item\label{alternate:atomic} $\model{M} \models \phi(a, \overline{e}) \liff \phi(b, \overline{e})$ for all $n < \omega$, all {atomic} $\phi \in \lang{L}\nostroke_{n+1}$ and all $\overline{e} \in M^n$ \item\label{alternate:weak} $\model{M} \models \phi(a, a) \liff \phi(a, b)$ for all $\phi \in \lang{L}\nostroke_2$ \end{enumerate} \end{lem}\noindent Quine was the first philosopher to analyse all six grades of indiscernibility. His fullest discussion of them ended as follows: \begin{quote} May there even be many intermediate grades? The question is ill defined. By imposing special conditions on the form or content of the open sentence used in discriminating two objects, we could define any number of intermediate grades of discriminability, subject even to no linear order. What I have called moderate discriminability [i.e.\ $\mathrel{\indiscernibleyes\relstroke}$ or $\mathrel{\indiscernibleno\relstroke}$], however, is the only intermediate grade that I see how to define at our present high level of generality. 
\cite[p.\ 116]{Quine:GD} \end{quote} Quine was right that Definition \ref{def:Indiscernibility} essentially exhausts all of the grades of discrimination that are fairly natural, highly general, and which can be defined in terms of satisfaction of $\lang{L}\nostroke$- and $\lang{L}\yesstroke$-formulas.\footnote{Though \citeauthor{CaultonButterfield:KILM} \cite{CaultonButterfield:KILM} and \citeauthor{LadymanLinneboPettigrew:IDPL} \cite {LadymanLinneboPettigrew:IDPL} also explore the case of quantifier-free formulas.} Nevertheless, other grades of discrimination are quite natural; we just need to consider alternative methods of definition. (The sense in which they are `intermediate' grades will become clear in \S\ref{s:Entailments} and, as Quine conjectured, we will see that they are not linearly ordered.) In particular, I shall introduce grades of discrimination that are defined in terms of \emph{isomorphisms}. As a reminder: \begin{define} Let $\model{M}, \model{N}$ be $\lang{L}$-structures. An \emph{isomorphism} from $\model{M}$ to $\model{N}$ is any bijection $\pi : M \longrightarrow N$ such that: \begin{enumerate}[nolistsep] \item $\overline{e} \in R^\model{M}$ iff $\pi(\overline{e}) \in R^\model{N}$, for all $n$-place $\lang{L}$-predicates $R$ and all $\overline{e} \in M^n$ \item $\pi(f^\model{M}(\overline{e})) = f^\model{N}(\pi(\overline{e}))$, for all $n$-place $\lang{L}$-function-symbols $f$ and all $\overline{e} \in M^n$ \end{enumerate} A \emph{symmetry} on $\model{M}$ is an isomorphism from $\model{M}$ to $\model{M}$. \end{define}\noindent Isomorphisms preserve $\lang{L}\yesstroke$-formulas (see e.g.\ \citeauthor{Marker:MT} \cite[pp.\ 13--14]{Marker:MT}): \begin{lem}\label{lem:IsomorphismPreservation} Let $\model{M}, \model{N}$ be $\lang{L}$-structures, and $\pi : \model{M} \functionto \model{N}$ be an isomorphism. For all $n < \omega$, all $\phi \in \lang{L}\yesstroke_n$ and all $\overline{e} \in M^n$: $$\model{M} \models \phi(\overline{e})\text{ iff }\model{N}\models \phi(\pi(\overline{e}))$$ \end{lem}\noindent There is therefore a good sense in which objects linked by a symmetry cannot be discriminated. Consequently, symmetries are a source of grades of discrimination, and I shall be interested in three distinct \emph{grades of symmetry}: \begin{define} For any $\lang{L}$-structure $\model{M}$ with $a, b \in M$: \begin{enumerate}[nolistsep] \item $a \symmetric\barestroke b$ in $\model{M}$ iff there is a symmetry $\pi$ on $\model{M}$ with $\pi(a) = b$ \item $a \symmetric\pairstroke b$ in $\model{M}$ {iff} there is a symmetry $\pi$ on $\model{M}$ with $\pi(a) = b$ and $\pi(b) = a$ \item $a \symmetric\totalstroke b$ in $\model{M}$ {iff} there is a symmetry $\pi$ on $\model{M}$ with $\pi(a)= b$, $\pi(b) = a$ and $\pi(x) = x$ for all $x \notin \{a, b\}$ \end{enumerate} \end{define}\noindent These three grades have already received some philosophical attention;\footnote{$\symmetric\barestroke$ is considered by \citeauthor{Ketland:II} \cite{Ketland:SII}, \cite{Ketland:II} under the name `structural indiscernibility', and by \citeauthor{LadymanLinneboPettigrew:IDPL} \cite{LadymanLinneboPettigrew:IDPL} under the name `symmetry'. $\symmetric\pairstroke$ is considered by \citeauthor{LadymanLinneboPettigrew:IDPL} \cite{LadymanLinneboPettigrew:IDPL} under the name `full symmetry'. 
$\symmetric\totalstroke$ is considered by \citeauthor{Ketland:II} \cite{Ketland:SII}, \cite{Ketland:II}, who writes `$\pi_{ab}$' to indicate that $a \symmetric\totalstroke b$.} one of my aims is to incorporate them into the discussion in a systematic way. In defining the notion of an isomorphism, the only object-language symbols which are mentioned are those of the signature; there is no need to mention `$=$'. Nevertheless, the notion of an isomorphism---and hence each grade of symmetry---straightforwardly depends upon the notion of identity. After all, an isomorphism is a bijection, which is to say it maps \emph{unique} objects to \emph{unique} objects, and \emph{vice versa}. This dependence on identity is reflected in Lemma \ref{lem:IsomorphismPreservation}: symmetries preserve $\lang{L}\yesstroke$-formulas. If we want to avoid treating identity as a primitive---for philosophical or technical reasons---then the notion of an isomorphism is probably too strong. In looking for a weaker notion, a first thought would be to consider functions between structures that need not be bijections. (In this regard, \emph{strict homomorphisms} are sometimes considered.) But this is insufficiently concessive, since the very idea of a \emph{function} presupposes the notion of identity, for a function maps each object (or $n$-tuple) to a \emph{unique} object. Instead, then, we should consider structure-preserving \emph{relations} that may hold between structures. The appropriate notion is provided by \citeauthor{CasanovasEtAl:EEEFL} \cite[Definition 2.5]{CasanovasEtAl:EEEFL}; recall from \S\ref{s:Preliminaries} that $\overline{d}\Pi\overline{e}$ abbreviates $\langle d_{1}, e_{1}\rangle, \ldots, \langle d_{n}, e_{n}\rangle \in \Pi$:\footnote{They credit a special case of this to \citeauthor{BlokPigozzi:PL} \cite[p.\ 343]{BlokPigozzi:PL}; see also \cite[\S5]{Waszkiewicz:NII}.} \begin{define} Let $\model{M}, \model{N}$ be $\lang{L}$-structures. A \emph{relativeness correspondence from $\model{M}$ to $\model{N}$} is any relation $\Pi \subseteq M \times N$ with $\text{dom}(\Pi) = M$ and $\text{rng}(\Pi) = N$ such that: \begin{enumerate}[nolistsep] \item $\overline{d} \in R^\model{M}$ {iff} $\overline{e} \in R^\model{N}$, for all $n$-place $\lang{L}$-predicates $R$ and all $\overline{d}\Pi\overline{e}$ \item $f^\model{M}(\overline{d}) \Pi f^\model{N}(\overline{e})$, for all $n$-place $\lang{L}$-function-symbols $f$ and all $\overline{d}\Pi\overline{e}$ \end{enumerate} A \emph{relativity} on $\model{M}$ is a relativeness correspondence from $\model{M}$ to $\model{M}$. \end{define}\noindent \citeauthor{CasanovasEtAl:EEEFL} \cite[Proposition 2.6]{CasanovasEtAl:EEEFL} show that relativeness correspondences preserve $\lang{L}\nostroke$-formulas: \begin{lem}\label{lem:RelativenessPreservation} Let $\model{M}, \model{N}$ be $\lang{L}$-structures, and $\Pi$ be a relativeness correspondence from $\model{M}$ to $\model{N}$. For all $n < \omega$, all $\phi \in \lang{L}\nostroke_n$, and all $\overline{d}\Pi\overline{e}$: $$\model{M} \models \phi(\overline{d})\text{ iff }\model{N}\models \phi(\overline{e}) $$ \end{lem}\noindent There is therefore a good sense in which objects linked by a relativity cannot be discriminated. 
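A toy example may help to fix ideas (it is included purely for illustration and plays no role in what follows). Let $\lang{L}$ contain a single one-place predicate $P$, and let $\model{M}$ have domain $M = \{1, 2\}$ with $P^\model{M} = M$. Then $\Pi = M \times M$ is a relativity on $\model{M}$: we have $\text{dom}(\Pi) = \text{rng}(\Pi) = M$, and whenever $d \mathrel{\Pi} e$ we have $d \in P^\model{M}$ iff $e \in P^\model{M}$, there being no other predicates or function-symbols to check. This $\Pi$ relates $1$ to both $1$ and $2$, so it is not (the graph of) a function, let alone a bijection.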
So, by simple analogy with the three grades of symmetry, I shall consider three \emph{grades of relativity}: \begin{define} For any $\lang{L}$-structure $\model{M}$ with $a, b \in M$: \begin{enumerate}[nolistsep] \item $a \relativeto\barestroke b$ in $\model{M}$ {iff} there is a relativity $\Pi$ on $\model{M}$ with $a \Pi b$ \item $a \relativeto\pairstroke b$ in $\model{M}$ {iff} there is a relativity $\Pi$ on $\model{M}$ with $a \Pi b$ and $b \Pi a$ \item $a \mathrel{\textsf{r}}tal b$ in $\model{M}$ {iff} there is a relativity $\Pi$ on $\model{M}$ with $a \Pi b$, $b \Pi a$ and $x \Pi x$ for all $x \nindiscernibleno a$ and $x \nindiscernibleno b$ \end{enumerate} \end{define}\noindent Unlike the grades of symmetry, the grades of relativity have not yet been considered by philosophers interested in grades of discrimination. However, there is no principled reason for this omission. Indeed, a central claim of this paper is that relativeness correspondences (and hence grades of relativity) are the appropriate $\lang{L}\nostroke$-surrogate for isomorphisms (and hence grades of symmetry). This claim should already be plausible, given that we arrived at the notion of a relativeness correspondence by relaxing the notion of an isomorphism, and given the immediate comparison between Lemmas \ref{lem:IsomorphismPreservation} and \ref{lem:RelativenessPreservation}. The claim will receive further support during this paper. For the reader's convenience, the following table summarises the twelve grades of discrimination: \begin{table}[h] \setlength\tabcolsep{0.025\textwidth} \begin{tabular}{@{}p{0.05\textwidth}p{0.4\textwidth}p{0.45\textwidth}@{}} Grade & Informal gloss & Definition sketch \\ \hline $=$ & genuine identity & $a = b$\\ $\mathrel{\indiscernibleyes\relstroke}$ & pairwise $\lang{L}\yesstroke$-indiscernibility & $\phi(a,b) \liff \phi(b,a)$, all $\phi\in\lang{L}\yesstroke_2$\\ $\mathrel{\indiscernibleyes\monadstroke}$ & monadic $\lang{L}\yesstroke$-indiscernibility & $\phi(a) \liff \phi(b)$, all $\phi \in \lang{L}\yesstroke_1$ \\\\ $\indiscernibleno$ & complete $\lang{L}\nostroke$-indiscernibility & $\phi(a, \overline{e}) \liff \phi(b, \overline{e})$, all $\overline{e}$ and $\phi\in \lang{L}\nostroke_n$\\ $\mathrel{\indiscernibleno\relstroke}$ & pairwise $\lang{L}\nostroke$-indiscernibility & $\phi(a,b) \liff \phi(b,a)$, all $\phi\in\lang{L}\nostroke_2$\\ $\mathrel{\indiscernibleno\monadstroke}$ & monadic $\lang{L}\nostroke$-indiscernibility & $\phi(a) \liff \phi(b)$, all $\phi \in \lang{L}\nostroke_1$ \\\\ $\symmetric\totalstroke$& complete symmetry & a permutation $\pi(a) = b$, $\pi(b) = a$ and $\pi(x) = x$ all $x \notin \{a, b\}$\\ $\symmetric\pairstroke$& pairwise symmetry & a permutation $\pi(a) = b$, $\pi(b) = a$\\ $\symmetric\barestroke$& monadic symmetry & a permutation $\pi(a)= b$\\\\ $\mathrel{\textsf{r}}tal$ & complete relativity & a relativity $a\Pi b$, $b\Pi a$ and $x \Pi x$ all $x \nindiscernibleno a, x \nindiscernibleno b$\\ $\relativeto\pairstroke$& pairwise relativity & a relativity $a \Pi b$, $b \Pi a$ \\ $\relativeto\barestroke$& monadic relativity & a relativity $a \Pi b$ \end{tabular} \end{table} \section{Entailments between the grades}\label{s:Entailments} Having defined twelve grades of discrimination, my first task is to characterise the relationships between them. More precisely: I shall build upon some existing results (mentioned in endnotes) to provide a complete account of the entailments and non-entailments between the various grades of discrimination. 
For any two grades of discrimination $\mathrm{R}$ and $\mathrm{S}$, say that $\mathrm{R}$ \emph{entails} $\mathrm{S}$ \emph{iff} for any structure $\model{M}$ and any $a, b \in M$, if $a \mathrm{R} b$ then $a \mathrm{S} b$ in $\model{M}$. Entailment is relativised to particular classes of structures---e.g.\ to structures with relational signatures---in the obvious way. In \S\ref{s:FinitaryCase}, I shall consider the special case of entailments where we restrict our attention to \emph{finite} structures. However, the target result for this section is the general case: \addtocounter{dummy}{2} \begin{thm}[Entailments between the grades]\label{thm:BigMap} These Hasse Diagrams characterise the entailments between our grades of discrimination: \begin{quote} \begin{tikzpicture} \matrix (m) [matrix of math nodes, row sep=2em, column sep=2em, text height=1.5ex, text depth=0.25ex] { & & \\ &= & \\ & \symmetric\totalstroke & \indiscernibleno \\ & \symmetric\pairstroke & \mathrel{\textsf{r}}tal \\ \mathrel{\indiscernibleyes\relstroke} & \symmetric\barestroke & \relativeto\pairstroke \\ \mathrel{\indiscernibleyes\monadstroke} & \mathrel{\indiscernibleno\relstroke} & \relativeto\barestroke \\ & \mathrel{\indiscernibleno\monadstroke}& \\}; \draw(m-2-2)--(m-3-2)--(m-4-2)--(m-5-2)--(m-6-1)--(m-7-2); \draw(m-2-2)--(m-3-3)--(m-4-3)--(m-5-3)--(m-6-3)--(m-7-2); \draw(m-3-2)--(m-4-3); \draw(m-4-2)--(m-5-3); \draw(m-5-2)--(m-6-3); \draw(m-4-2)--(m-5-1)--(m-6-2)--(m-7-2); \draw(m-5-1)--(m-6-1); \draw(m-5-3)--(m-6-2); \end{tikzpicture} \begin{tikzpicture} \matrix (m) [matrix of math nodes, row sep=2em, column sep=2em, text height=1.5ex, text depth=0.25ex] { & = & & \\ & \indiscernibleno & & \\ & \symmetric\totalstroke & & \\ & \symmetric\pairstroke & \mathrel{\textsf{r}}tal& \\ \mathrel{\indiscernibleyes\relstroke} & \symmetric\barestroke & \relativeto\pairstroke \\ \mathrel{\indiscernibleyes\monadstroke} & \mathrel{\indiscernibleno\relstroke} & \relativeto\barestroke \\ & \mathrel{\indiscernibleno\monadstroke}& \\}; \draw(m-1-2)--(m-2-2)--(m-3-2)--(m-4-2)--(m-5-2)--(m-6-1)--(m-7-2); \draw(m-4-3)--(m-5-3)--(m-6-3)--(m-7-2); \draw(m-3-2)--(m-4-3); \draw(m-4-2)--(m-5-3); \draw(m-5-2)--(m-6-3); \draw(m-4-2)--(m-5-1)--(m-6-2)--(m-7-2); \draw(m-5-1)--(m-6-1); \draw(m-5-3)--(m-6-2); \end{tikzpicture} \end{quote} The left diagram considers the case of arbitrary signatures; the right diagram considers entailment when restricted to structures with relational signatures. \end{thm}\addtocounter{dummy}{-3}\noindent To explain the notation: there is a path down the page from $\text{R}$ to $\text{S}$ \emph{iff} $\text{R}$ entails $\text{S}$. So in the case of arbitrary signatures: $=$ entails $\indiscernibleno$; $\indiscernibleno$ does not entail $=$; $\symmetric\totalstroke$ does not entail $\indiscernibleno$; and $\indiscernibleno$ does not entail $\symmetric\totalstroke$. In the case of relational signatures: $\indiscernibleno$ entails $\symmetric\totalstroke$; hence $\indiscernibleno$ entails $\symmetric\pairstroke$; etc. I shall start by proving the entailments:\footnote{\citeauthor{Quine:GD} \cite{Quine:GD} proves case (\ref{yesyesnonohierarchy}); see also \citeauthor{Ketland:SII} \cite[p.\ 307]{Ketland:SII}, \cite[\S3.2]{Ketland:II}, \citeauthor{LadymanLinneboPettigrew:IDPL} \cite[Theorem 5.2]{LadymanLinneboPettigrew:IDPL}. 
\citeauthor{CaultonButterfield:KILM} \cite[Theorem 1]{CaultonButterfield:KILM} prove case (\ref{symtoyes}) when restricted to relational signatures; see also \citeauthor{Ketland:II} \cite[Lemma 3.22]{Ketland:II} and \citeauthor{LadymanLinneboPettigrew:IDPL} \cite[Theorem 9.17, 9.20]{LadymanLinneboPettigrew:IDPL}. \citeauthor{Ketland:II} \cite[Theorem 3.23]{Ketland:II} proves case (\ref{foitosymtotal}).} \begin{lem}\label{lem:BigMapEntailments}For structures with arbitrary signatures: \begin{enumerate}[nolistsep] \item\label{identityattop} $=$ entails both $\indiscernibleno$ and $\symmetric\totalstroke$ \item\label{foitonorel} $\indiscernibleno$ entails $\mathrel{\textsf{r}}tal$ \item\label{yesnoyesonhierarchy} $\mathrel{\indiscernibleyes\relstroke}$ entails $\mathrel{\indiscernibleno\relstroke}$, and $\mathrel{\indiscernibleyes\monadstroke}$ entails $\mathrel{\indiscernibleno\monadstroke}$ \item\label{yesyesnonohierarchy} $\mathrel{\indiscernibleyes\relstroke}$ entails $\mathrel{\indiscernibleyes\monadstroke}$, and $\mathrel{\indiscernibleno\relstroke}$ entails $\mathrel{\indiscernibleno\monadstroke}$ \item\label{symtoyes} $\symmetric\pairstroke$ entails $\mathrel{\indiscernibleyes\relstroke}$, and $\symmetric\barestroke$ entails $\mathrel{\indiscernibleyes\monadstroke}$ \item\label{reltono} $\relativeto\pairstroke$ entails $\mathrel{\indiscernibleno\relstroke}$, and $\relativeto\barestroke$ entails $\mathrel{\indiscernibleno\monadstroke}$ \item\label{StrengthSymmetryRelativity:total} $\symmetric\totalstroke$ entails $\mathrel{\textsf{r}}tal$, $\symmetric\pairstroke$ entails $\relativeto\pairstroke$, and $\symmetric\barestroke$ entails $\relativeto\barestroke$ \item\label{symhierarchy} $\symmetric\totalstroke$ entails $\symmetric\pairstroke$, and $\symmetric\pairstroke$ entails $\symmetric\barestroke$ \item\label{relativehierarchy} $\mathrel{\textsf{r}}tal$ entails $\relativeto\pairstroke$, and $\relativeto\pairstroke$ entails $\relativeto\barestroke$ \end{enumerate} For structures with relational signatures, but not in general: \begin{enumerate}[nolistsep]\addtocounter{enumi}{9} \item \label{foitosymtotal} $\indiscernibleno$ entails $\symmetric\totalstroke$ \end{enumerate} \begin{proof} (\ref{identityattop}). The identity map is a symmetry. (\ref{foitonorel}). The relation $\Pi$ given by $x \Pi y$ iff $x \indiscernibleno y$ is a relativity. (\ref{yesnoyesonhierarchy})--(\ref{yesyesnonohierarchy}). Immediate from the definitions. (\ref{symtoyes})--(\ref{reltono}). Immediate from Lemmas \ref{lem:IsomorphismPreservation} and \ref{lem:RelativenessPreservation}. (\ref{StrengthSymmetryRelativity:total}). Every symmetry can be regarded as a relativity. (\ref{symhierarchy})--(\ref{relativehierarchy}). Immediate from the definitions. (\ref{foitosymtotal}). Suppose $a \indiscernibleno b$, and let $\pi$ be the permutation which transposes $a$ and $b$ and fixes everything else. For any $n < \omega$, any atomic formula $\phi \in \lang{L}\nostroke_n$, and any $\overline{e} \in (M\setminus \{a, b\})^n$, we have $\model{M} \models \phi(a, \overline{e}) \liff \phi(b, \overline{e})$; arguing similarly when $a$ and $b$ occupy other argument positions, and recalling that the signature is relational, it follows that $\pi$ preserves all atomic facts, and so is a symmetry witnessing $a \symmetric\totalstroke b$. \end{proof} \end{lem}\noindent It remains to demonstrate the non-entailments:\footnote{\citeauthor{Ketland:II} \cite[p.\ 8]{Ketland:II} and \citeauthor{LadymanLinneboPettigrew:IDPL} \cite[Theorem 5.2]{LadymanLinneboPettigrew:IDPL} use $\model{A}$ to prove case (\ref{foinotidentity}); see also \citeauthor{Button:RSIC} \cite[p.\ 218]{Button:RSIC} and \citeauthor{Ketland:SII} \cite[p.\ 309]{Ketland:SII}.
\citeauthor{LadymanLinneboPettigrew:IDPL} \cite[Theorem 9.17]{LadymanLinneboPettigrew:IDPL} use $\model{B}$ to prove case (\ref{symtotalnotfoi}), noting that it is the analogue of \citeauthor{Black:II}'s \cite*{Black:II} two-sphere world. \citeauthor{Button:RSIC} \cite[p.\ 218]{Button:RSIC}, \citeauthor{Ketland:SII} \cite[p.\ 310]{Ketland:SII} and \citeauthor{LadymanLinneboPettigrew:IDPL} \cite[Theorem 7.12]{LadymanLinneboPettigrew:IDPL} use an example like $\model{C}$. \citeauthor{LadymanLinneboPettigrew:IDPL} \cite[Theorem 5.2]{LadymanLinneboPettigrew:IDPL} use an example like $\model{D}$. \citeauthor{CaultonButterfield:KILM} \cite[pp.\ 60--2]{CaultonButterfield:KILM} and \citeauthor{LadymanLinneboPettigrew:IDPL} \cite[Theorem 9.20]{LadymanLinneboPettigrew:IDPL} use examples like $\model{E}$.} \begin{lem}\label{lem:BigMapCounterEntailments} For structures with relational signatures: \begin{enumerate}[nolistsep] \item\label{foinotidentity} $\indiscernibleno$ does not entail $=$ \item\label{symtotalnotfoi} $\symmetric\totalstroke$ does not entail $\indiscernibleno$ \item\label{relativetotalnotyesmonad} $\mathrel{\textsf{r}}tal$ does not entail $\mathrel{\indiscernibleyes\monadstroke}$ \item\label{symbarenotnopair} $\symmetric\barestroke$ does not entail $\mathrel{\indiscernibleno\relstroke}$ \item\label{sympairnotrelativetotal} $\symmetric\pairstroke$ does not entail $\mathrel{\textsf{r}}tal$ \item\label{yesrelnotrelativebare} $\mathrel{\indiscernibleyes\relstroke}$ does not entail $\relativeto\barestroke$ \end{enumerate} Moreover, for structures with arbitrary signatures: \begin{enumerate}[nolistsep]\addtocounter{enumi}{6} \item\label{foinotyesmonad} $\indiscernibleno$ does not entail $\mathrel{\indiscernibleyes\monadstroke}$ \end{enumerate} \begin{proof} (\ref{foinotidentity}). In this unlabelled graph, $\model{A}$, we have $1 \indiscernibleno 2$ but $1 \neq 2$: \begin{quote} \centering \begin{tikzpicture}[descr/.style={fill=white,inner sep=2.5pt}] \matrix (m) [matrix of math nodes, row sep=2em, column sep=2em, text height=1.5ex, text depth=0.25ex] {1 & 2\\ }; \end{tikzpicture} \end{quote} (\ref{symtotalnotfoi}). In this unlabelled graph, $\model{B}$, we have $1 \symmetric\totalstroke 2$ but $1 \nindiscernibleno 2$: \begin{quote} \centering \begin{tikzpicture}[descr/.style={fill=white,inner sep=2.5pt}] \matrix (m) [matrix of math nodes, row sep=2em, column sep=2em, text height=1.5ex, text depth=0.25ex] {1 & 2\\ }; \path[-, thick] (m-1-1) edge (m-1-2); \end{tikzpicture} \end{quote} (\ref{relativetotalnotyesmonad}). In this unlabelled graph, $\model{C}$, we have $1 \mathrel{\textsf{r}}tal 2$ but $1 \mathrel{\nindiscernibleyes\monadstroke} 2$: \begin{quote} \centering \begin{tikzpicture}[descr/.style={fill=white,inner sep=2.5pt}] \matrix (m) [matrix of math nodes, row sep=2em, column sep=2em, text height=1.5ex, text depth=0.25ex] {\phantom{3}& 1 & 2 & 3\\ }; \path[-, thick] (m-1-3) edge (m-1-2); \path[-, thick] (m-1-3) edge (m-1-4); \end{tikzpicture} \end{quote} (\ref{symbarenotnopair}). 
In this unlabelled directed graph, $\model{D}$, we have $1 \symmetric\barestroke 2$ but $1 \mathrel{\nindiscernibleno\relstroke} 2$: \begin{quote} \centering \begin{tikzpicture}[descr/.style={fill=white,inner sep=2.5pt}] \matrix (m) [matrix of math nodes, row sep=2em, column sep=2em, text height=1.5ex, text depth=0.25ex] {1 & 2\\ 4 & 3\\ }; \path[->, thick] (m-1-1) edge (m-1-2); \path[->, thick] (m-1-2) edge (m-2-2); \path[->, thick] (m-2-2) edge (m-2-1); \path[->, thick] (m-2-1) edge (m-1-1); \end{tikzpicture} \end{quote} (\ref{sympairnotrelativetotal}). In $\model{D}$, again, we have $1 \symmetric\pairstroke 3$ but $1 \mathrel{\slashed{\relativeto}}tal 3$. (\ref{yesrelnotrelativebare}). Let $\model{E}$ be the disjoint union of a complete countably-infinite graph with a complete uncountable graph, i.e.: \begin{align*} E &:= \mathbb{R}\\ R^{\model{E}} &:= \{\langle n, m\rangle \in \mathbb{N}^2 \mid n \neq m\} \cup \{\langle p, q \rangle \in (\mathbb{R} \setminus \mathbb{N})^2 \mid p \neq q\} \end{align*} By taking a Skolem Hull of $\model{E}$ containing $1\in \mathbb{N}$ and any $e \in \mathbb{R}\setminus\mathbb{N}$, we see that $1 \mathrel{\indiscernibleyes\relstroke} e$. Now suppose that $\Pi$ is a relativity with $1 \Pi e$. Since $\Pi$ must preserve the edges of the graph, and every element in either `cluster' has an edge to every element in the cluster except itself, $\Pi$ must be a bijection between $\mathbb{N}$ and $\mathbb{R}\setminus \mathbb{N}$. Contradiction; so $1 \nrelativeto\barestroke e$. (\ref{foinotyesmonad}). Augment $\model{A}$ by adding a single constant which picks out $1$. \end{proof} \end{lem}\noindent It is simple to check that Lemmas \ref{lem:BigMapEntailments} and \ref{lem:BigMapCounterEntailments} yield Theorem \ref{thm:BigMap}. This Theorem allows us to compare the consequences of imposing various grades of discrimination as criteria of identity. I should comment briefly on the philosophical significance of the constructions used in Lemma \ref{lem:BigMapCounterEntailments}. The existence of $\model{A}$ is guaranteed by absolutely standard model theory. However, $\model{A}$ contains two distinct objects that are `blank': from the perspective of $\model{A}$, these objects have no properties or relations to anything, so that their distinctness must be \emph{brute}. And this might suggest that the use of absolutely standard model theory begs the question against anyone who believes in a non-trivial criterion of identity. Fortunately it does not, but it is worth carefully explaining why. Let Fran be a philosopher who advocates a non-trivial criterion of identity: in particular, Fran thinks that $x$ and $y$ are identical iff $x \indiscernibleno y$. However, bearing in mind the discussion of \S\ref{s:Preliminaries}---particularly of a signature which allows us only to describe eye colour---Fran advances this criterion of identity with respect to some \emph{particular} signature, $\lang{F}$. Now, if $\model{A}$ is presented as an $\lang{F}$-structure, then Fran will certainly deny that $\model{A}$ could exist. However, Fran can make sense of $\model{A}$ by regarding it as a $\lang{G}$-structure, where $\lang{G}$ is a signature which is impoverished compared with $\lang{F}$. Construed thus, $\model{A}$ begs no question against Fran, because it poses no threat to her proposed criterion of identity. 
To be clear: I am not trying to endorse or defend Fran's position.\footnote{I was once on Fran's side \cite[p.\ 220]{Button:RSIC}; but I have changed my mind \cite[p.\ 211 n.\ 8]{Button:LoR}.} My point is simply that everyone, including Fran, can make sense of standard model theory. \section{A Galois Connection}\label{s:Galois} Theorem \ref{thm:BigMap} graphically demonstrates that grades of symmetry are to grades of $\lang{L}\yesstroke$-indiscernibility as grades of relativity are to grades of $\lang{L}\nostroke$-indiscernibility. In this section, I develop this point by outlining a Galois Connection between isomorphisms and relativeness correspondences. (The results of this section can be fruitfully compared with those of \citeauthor{BonnayEngstrom:ID} \cite{BonnayEngstrom:ID}; we discovered our results independently.) Lemma \ref{lem:IsomorphismPreservation} has an obvious converse: every bijective map which preserves all $\lang{L}\yesstroke$-formulas is an isomorphism. However, there is no converse to Lemma \ref{lem:RelativenessPreservation}. To make this more precise, consider the following definition: \begin{define}\label{def:NearCorrespondence} Let $\model{M}, \model{N}$ be $\lang{L}$-structures. A \emph{near-correspondence from $\model{M}$ to $\model{N}$} is any relation $\Pi \subseteq M \times N$ with $\text{dom}(\Pi) = M$ and $\text{rng}(\Pi) = N$ such that, for all $n < \omega$, all $\phi \in \lang{L}\nostroke_n$, and all $\overline{d}\Pi\overline{e}$: \begin{align*} \model{M} \models \phi(\overline{d})&\text{ iff }\model{N} \models \phi(\overline{e}) \end{align*} \end{define}\noindent Lemma \ref{lem:RelativenessPreservation} states that every relativeness correspondence is a near-correspondence. But the converse fails. Let $\model{F}$ be an $\{f\}$-structure, defined as follows: \begin{align*} F &= \{1, 2\}\\ f^{\model{F}}(1) &= f^{\model{F}}(2) = 2 \end{align*} Then $\Pi = \{\langle 1, 2\rangle, \langle 2, 1\rangle\}$ is clearly a near-correspondence from $\model{F}$ to $\model{F}$, but not a relativeness correspondence. However, there is an elegant connection between near-correspondences (and hence relativeness correspondences) and isomorphisms on the models we obtain by quotienting using $\indiscernibleno$. The use of such quotients is standard in model theory without identity, and the central idea is summed up in the following Definition and Lemma (see \citeauthor{CasanovasEtAl:EEEFL} \cite[Definition 2.3--2.4]{CasanovasEtAl:EEEFL}):\footnote{\citeauthor{CasanovasEtAl:EEEFL} trace the definition and lemma back to \citeauthor{Monk:ML} \cite[presumably Exercises 29.33--34]{Monk:ML}. This has recently been rediscovered by philosophers, e.g.\ Ketland \cite[p.\ 307 n.\ 10]{Ketland:SII}, \cite[Theorem 3.12]{Ketland:II}.} \begin{define} Let $\model{M}$ be any $\lang{L}$-structure. Then $\quotient{\model{M}}$ is the $\lang{L}$-structure obtained by quotienting $\model{M}$ by $\indiscernibleno$. We denote its members with $\quotient{a}_\model{M} = \{b \in M \mid a \indiscernibleno b \text{ in }\model{M}\}$ and, when no confusion can arise, we dispense with the subscript, talking of $\quotient{a}$ rather than $\quotient{a}_\model{M}$. 
Now $\quotient{\model{M}}$ is defined as follows: \begin{align*} \quotient{M} & = \{\quotient{a} \mid a \in M\}\\ R^{\quotient{\model{M}}} & = \{\quotient{\overline{e}} \in \quotient{M}^n \mid \overline{e} \in R^\model{M}\} && \text{all }n\text{-place }\lang{L}\text{-predicates }R\\ f^{\quotient{\model{M}}}(\quotient{\overline{e}}) & = \quotient{f^{\model{M}}(\overline{e})} && \text{all }n\text{-place }\lang{L}\text{-function-symbols }f\text{ and all }\overline{e} \in M^n \end{align*} \end{define} \begin{lem}\label{lem:Quotient}Let $\model{M}$ be an $\lang{L}$-structure. For all $n < \omega$, all $\phi \in \lang{L}\nostroke_n$ and all $\overline{e} \in M^n$: \begin{align*} \model{M} \models \phi(\overline{e}) & \text{ iff }\quotient{\model{M}} \models \phi(\quotient{\overline{e}}) \end{align*} \end{lem}\noindent \citeauthor{CasanovasEtAl:EEEFL} \cite[Proposition 2.6]{CasanovasEtAl:EEEFL} note that there is a relativeness correspondence from $\model{M}$ to $\model{N}$ iff $\quotient{\model{M}} \isomorphic \quotient{\model{N}}$. I wish to build on this; and I begin with some definitions: \begin{define}\label{define:Galois} For any $\lang{L}$-structures $\model{M}, \model{N}$: \begin{enumerate}[nolistsep] \item $\textbf{N}(\model{M}, \model{N})$ is the set of near-correspondences from $\model{M}$ to $\model{N}$ \item $\textbf{I}(\model{M}, \model{N})$ is the set of isomorphisms from $\quotient{\model{M}}$ to $\quotient{\model{N}}$ \item $\textbf{e}(\model{M}, \model{N}) : \textbf{I}(\model{M}, \model{N}) \longrightarrow \textbf{N}(\model{M}, \model{N})$ is given by: $a \pi^\textbf{e} b$ iff $\pi(\quotient{a}) = \quotient{b}$ \item $\textbf{c}(\model{M}, \model{N}) : \textbf{N}(\model{M}, \model{N}) \longrightarrow \textbf{I}(\model{M}, \model{N})$ is given by: $\Pi^\textbf{c}(\quotient{a}) = \quotient{b}$ iff there are $a' \indiscernibleno a$ and $b' \indiscernibleno b$ such that $a' \Pi b'$ \end{enumerate} Say that $\Pi \in \textbf{N}(\model{M}, \model{N})$ is \emph{maximal} iff no strict superset of $\Pi$ is in $\textbf{N}(\model{M}, \model{N})$. \end{define}\noindent I prove that these are genuine definitions, i.e.\ that $\textbf{e}(\model{M}, \model{N})$ and $\textbf{c}(\model{M}, \model{N})$ are functions. I begin with $\textbf{e}(\model{M}, \model{N})$: \begin{lem}\label{lem:QuotientSymmetryGivesRelativity} If $\pi \in \textbf{I}(\model{M}, \model{N})$, then $\pi^\textbf{e}$ is a relativeness correspondence, and hence $\pi^{\textbf{e}} \in \textbf{N}(\model{M}, \model{N})$. \begin{proof} Fix $n < \omega$ and suppose that $\overline{d}\pi^\textbf{e}\overline{e}$; i.e.\ that $\pi(\overline{\quotient{d}}) = \overline{\quotient{e}}$. For each $n$-place $\lang{L}$-predicate $R$, observe: \begin{align*} \overline{d} \in R^\model{M} \text{ iff }\overline{\quotient{d}} \in R^{\quotient{\model{M}}} \text{ iff }\pi(\overline{\quotient{d}}) \in R^{\quotient{\model{N}}} \text{ iff }\overline{\quotient{e}} \in R^{\quotient{\model{N}}} \text{ iff }\overline{e} \in R^\model{N} \end{align*} For each $n$-place $\lang{L}$-function-symbol $f$, observe: \begin{align*} \pi(\quotient{f^\model{M}(\overline{d})}) = \pi(f^{\quotient{\model{M}}}(\overline{\quotient{d}})) = f^{\quotient{\model{N}}}(\pi(\overline{\quotient{d}})) = f^{\quotient{\model{N}}}(\overline{\quotient{e}}) = \quotient{f^\model{N}(\overline{e})} \end{align*} so that $f^\model{M}(\overline{d})\pi^\textbf{e} f^\model{N}(\overline{e})$.
Hence $\pi^\textbf{e}$ is a relativeness correspondence, and so a near-correspondence by Lemma \ref{lem:RelativenessPreservation}. \end{proof} \end{lem}\noindent To show that $\textbf{c}(\model{M}, \model{N})$ is a function, we need a subsidiary result: \begin{lem}\label{lem:RelativitiesAndFoi} Let $\Pi$ be a near-correspondence from $\model{M}$ to $\model{N}$, with $a \Pi b$ and $a' \Pi b'$. Then $a \indiscernibleno a'$ in $\model{M}$ iff $b \indiscernibleno b'$ in $\model{N}$. \begin{proof} Suppose $a \indiscernibleno a'$ in $\model{M}$; then using Lemmas \ref{lem:RelativenessPreservation} and \ref{lem:AlternateCharacterisationOfFOI}: \begin{align*} \model{N} \models \phi(b,b) \text{ iff }\model{M} \models \phi(a, a)\text{ iff }\model{M} \models \phi(a, a') \text{ iff }\model{N} \models \phi(b,b') \end{align*} So $b \indiscernibleno b'$ in $\model{N}$, by Lemma \ref{lem:AlternateCharacterisationOfFOI}. The converse is similar. \end{proof} \end{lem}\noindent It follows that $\textbf{c}(\model{M}, \model{N})$ is a function: \begin{lem}\label{lem:GalDownGivesIsomorphism} If $\Pi \in \textbf{N}(\model{M}, \model{N})$, then $\Pi^{\textbf{c}} \in \textbf{I}(\model{M}, \model{N})$. \begin{proof} Lemma \ref{lem:RelativitiesAndFoi} immediately yields that $\Pi^\textbf{c}$ is a well-defined function, and indeed an injection. $\Pi^{\textbf{c}}$ is a surjection, since $\text{rng}(\Pi) = N$. It remains to show that $\Pi^\textbf{c}$ preserves structure. For the remainder of the proof, fix $n < \omega$, $\overline{a} \in M^n$ and $\overline{b} \in N^n$ such that $\Pi^\textbf{c}(\quotient{\overline{a}}) = \quotient{\overline{b}}$. Let $R$ be any $n$-place $\lang{L}$-predicate. For each $1 \leq i \leq n$ we have $a'_i \indiscernibleno {a_i}$ and $b'_i \indiscernibleno {b_i}$ such that $a'_i \Pi b'_i$; and hence: \begin{align*} \quotient{\overline{a}} \in R^{\quotient{\model{M}}} &\text{ iff }\overline{a'} \in R^\model{M} \text{ iff }\overline{b'} \in R^\model{N}\text{ iff }\quotient{\overline{b}} \in R^{\quotient{\model{N}}} \text{ iff }\Pi^\textbf{c}(\quotient{\overline{a}}) \in R^{\quotient{\model{N}}} \end{align*} Let $f$ be any $n$-place $\lang{L}$-function-symbol. For all $\phi(v_1, v_2) \in \lang{L}\nostroke_{2}$, define $\phi_{f}(v_1, \overline{x})$ as $\phi(v_1, f(\overline{x}))$; and let $b' \in N$ be such that $f^\model{M}(\overline{a})\Pi b'$. Invoking Lemma \ref{lem:AlternateCharacterisationOfFOI}: \begin{align*} \model{N} \models \phi(b', b') &\text{ iff }\model{M} \models \phi(f^\model{M}(\overline{a}), f^\model{M}(\overline{a}))\\ &\text{ iff }\model{M} \models \phi_f(f^\model{M}(\overline{a}), \overline{a})\\ &\text{ iff }\model{N} \models \phi_f(b', \overline{b})\\ &\text{ iff }\model{N} \models \phi(b', f^\model{N}(\overline{b})) \end{align*} Hence $b' \indiscernibleno f^\model{N}(\overline{b})$ by Lemma \ref{lem:AlternateCharacterisationOfFOI}. Now: \begin{align*} \Pi^\textbf{c}(f^{\quotient{\model{M}}}(\overline{\quotient{a}})) & = \Pi^\textbf{c}(\quotient{f^\model{M}(\overline{a})}) = \quotient {b'} = \quotient{f^\model{N}(\overline{b})} = f^{\quotient{\model{N}}}(\overline{\quotient{b}}) = f^{\quotient{\model{N}}}(\Pi^\textbf{c}(\overline{\quotient{a}})) \end{align*} so that functions are preserved. \end{proof} \end{lem}\noindent Lemmas \ref{lem:QuotientSymmetryGivesRelativity} and \ref{lem:GalDownGivesIsomorphism} together show that Definition \ref{define:Galois} is a proper definition.
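For a concrete illustration of Definition \ref{define:Galois}---offered only as a sanity check, since the verification is routine---consider the path graph $\model{C}$ of Lemma \ref{lem:BigMapCounterEntailments}. There $1 \indiscernibleno 3$, since no identity-free formula can separate the two end-points of the path, even with parameters; so $\quotient{\model{C}}$ has exactly two elements, $\quotient{1} = \{1, 3\}$ and $\quotient{2} = \{2\}$, joined by an edge. Let $\pi$ be the isomorphism of $\quotient{\model{C}}$ which swaps $\quotient{1}$ and $\quotient{2}$. Then $\pi^\textbf{e} = \{\langle 1, 2\rangle, \langle 3, 2\rangle, \langle 2, 1\rangle, \langle 2, 3\rangle\}$, which is a relativity witnessing that $1$ and $2$ are completely relative, as claimed in Lemma \ref{lem:BigMapCounterEntailments}; and one can check directly that $(\pi^\textbf{e})^\textbf{c} = \pi$.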
The significance of Definition \ref{define:Galois} resides in the following: \begin{thm}[Galois Connection on $\indiscernibleno$-quotients]\label{thm:GaloisConnection} For each $\Pi \in \textbf{N}(\model{M}, \model{N})$ and each $\pi \in \textbf{I}(\model{M}, \model{N})$: $\Pi^\textbf{c} = \pi$ iff $\Pi \subseteq \pi^\textbf{e}$. \begin{proof} \emph{Left-to-right.} Suppose $\Pi^\textbf{c} = \pi$. Fix $\langle d, e \rangle \in \Pi$; then $\pi(\quotient{d}) = \quotient{e}$, so $d \pi^\textbf{e} e$. \emph{Right-to-left.} Suppose $\Pi \subseteq \pi^\textbf{e}$. Where $\Pi^\textbf{c}(\quotient{d}) = \quotient{e}$, there are $d' \indiscernibleno {d}$ and $e' \indiscernibleno {e}$ such that $d' \Pi e'$ and hence $d'\pi^\textbf{e} e'$; so $\quotient{e} = \quotient{e'} = \pi(\quotient{d'}) = \pi(\quotient{d})$. Hence $\Pi^\textbf{c}(\quotient{d}) = \pi(\quotient{d})$, for all $\quotient{d} \in \quotient{M}$. \end{proof} \end{thm}\noindent This Theorem highlights the depth of the connection between isomorphisms and relativeness correspondences. Additionally, it strengthens the claim that relativeness correspondences are the $\lang{L}\nostroke$-analogue of isomorphisms. For, given that there are near-correspondences that are not relativeness correspondences, one might have worried that relativeness correspondences \emph{compete} with the near-correspondences to be the $\lang{L}\nostroke$-analogue of isomorphism. However, the appearance of competition vanishes, once we consider some consequences of the Galois Connection: \begin{lem}\label{lem:GaloisConsequences} For any $\lang{L}$-structures $\model{M}, \model{N}$: \begin{enumerate}[nolistsep] \item\label{GaloisConsequences:identity} $\textbf{c}(\model{M}, \model{N}) \circ \textbf{e}(\model{M}, \model{N})$ is the identity function \item\label{GaloisConsequences:idempotent} $\textbf{e}(\model{M}, \model{N}) \circ \textbf{c}(\model{M}, \model{N})$ is idempotent \item\label{GaloisConsequences:maximal} If $\pi \in \textbf{I}(\model{M}, \model{N})$, then $\pi^\textbf{e}$ is maximal \item\label{GaloisConsequences:unique} If $\Pi \in \textbf{N}(\model{M}, \model{N})$, then $(\Pi^\textbf{c})^\textbf{e}$ is the unique maximal relativeness correspondence that extends $\Pi$ \end{enumerate} \begin{proof} (\ref{GaloisConsequences:identity})--(\ref{GaloisConsequences:idempotent}). Immediate from the fact that this is a Galois Connection with the partial-ordering on $\textbf{I}(\model{M}, \model{N})$ being identity. (\ref{GaloisConsequences:maximal}). Let $\Sigma \in \textbf{N}(\model{M}, \model{N})$ be such that $\pi^{\textbf{e}} \subseteq \Sigma$, and suppose $a \Sigma b$. By Lemma \ref{lem:GalDownGivesIsomorphism}, $\Sigma^\textbf{c} \in \textbf{I}(\model{M}, \model{N})$, with $\Sigma^\textbf{c}(\quotient{a}) = \quotient{b}$. For any $d$ such that $a \pi^\textbf{e} d$, we have $a \Sigma d$, and hence $\Sigma^\textbf{c}(\quotient{a}) = \quotient{d}$, so that $b \indiscernibleno d$. Hence $\pi(\quotient{a}) = \quotient{d} = \quotient{b}$, and so $a \pi^\textbf{e} b$. (\ref{GaloisConsequences:unique}). Lemma \ref{lem:QuotientSymmetryGivesRelativity}, our Galois Connection, and case (\ref{GaloisConsequences:maximal}) show that $(\Pi^\textbf{c})^\textbf{e}$ is a maximal relativeness correspondence extending $\Pi$. To show uniqueness, let $\Sigma$ be any maximal relativeness correspondence extending $\Pi$. Consider any $a \in M$ and $b \in N$ such that $a (\Pi^\textbf{c})^\textbf{e} b$.
Then there are $a' \indiscernibleno {a}$, $b' \indiscernibleno {b}$ such that $a' \Pi b'$, and hence such that $a' \Sigma b'$, since $\Pi \subseteq \Sigma$. Hence, for any $\overline{d}\Sigma\overline{e}$, and any $\phi \in \lang{L}\nostroke_{n+1}$, by Lemma \ref{lem:RelativenessPreservation}: \begin{align*} \model{M} \models \phi(a,\overline{d}) &\text{ iff }\model{M} \models \phi(a',\overline{d})\text{ iff }\model{N} \models \phi(b',\overline{e}) \text{ iff }\model{N} \models \phi(b,\overline{e}) \end{align*} Consequently, $\Theta = \Sigma \cup \{\langle a, b\rangle\}$ is a near-correspondence. So $(\Theta^\textbf{c})^\textbf{e}$ is a maximal relativeness correspondence extending $\Sigma$; but $\Sigma$ is itself maximal, so $a \Sigma b$. Generalising, $(\Pi^\textbf{c})^\textbf{e} \subseteq \Sigma$. Since $(\Pi^\textbf{c})^\textbf{e}$ is maximal, $(\Pi^\textbf{c})^\textbf{e} = \Sigma$. \end{proof} \end{lem}\noindent The preceding result tells us that every near-correspondence expands to a relativeness correspondence. Accordingly, there is no genuine competition between near-correspondences and relativeness correspondences. Indeed, instead of defining the grades of relativity in terms of relativeness correspondences, we could have defined them in terms of near-correspondences. Or, even more simply, we could have defined them in terms of symmetries on quotient models, as shown by the following immediate consequence of the preceding results: \begin{lem}\label{lem:QuotientSymHom} For any $\lang{L}$-structure $\model{M}$: \begin{enumerate}[nolistsep] \item\label{QuotientSymHom:bare} $a \relativeto\barestroke b$ in $\model{M}$ \text{iff} $\quotient{a} \symmetric\barestroke \quotient{b}$ in $\quotient{\model{M}}$ \item\label{QuotientSymHom:pair} $a \relativeto\pairstroke b$ in $\model{M}$ \text{iff} $\quotient{a} \symmetric\pairstroke \quotient{b}$ in $\quotient{\model{M}}$ \item\label{QuotientSymHom:total} $a \mathrel{\textsf{r}}tal b$ in $\model{M}$ \text{iff} $\quotient{a} \symmetric\totalstroke \quotient{b}$ in $\quotient{\model{M}}$ \end{enumerate} \end{lem} \section{Equivalence relations}\label{s:Equivalence} At various points, I have described the grades of discrimination as behaving like identity. A natural question is whether the grades of discrimination behave like identity in being \emph{equivalence} relations. (Note that I implicitly relied upon the fact that $\indiscernibleno$ is an equivalence relation in defining the $\indiscernibleno$-quotient structure.) \citeauthor{LadymanLinneboPettigrew:IDPL} \cite[Theorem 10.22]{LadymanLinneboPettigrew:IDPL} have partially answered this question, in noting that $\mathrel{\indiscernibleyes\relstroke}$ and $\mathrel{\indiscernibleno\relstroke}$ are not transitive (in general). The following result, which employs our Galois Connection, completes the picture. \begin{thm}\label{lem:EquivalenceIndiscernibility} $\mathrel{\indiscernibleyes\relstroke}$, $\mathrel{\indiscernibleno\relstroke}$, $\symmetric\pairstroke$ and $\relativeto\pairstroke$ are reflexive and symmetric, but not transitive (in general); the remaining eight grades of discrimination are equivalence relations. 
\begin{proof} Consider the following coloured graph, $\model{G}$: \begin{quote}\centering \begin{tikzpicture}[descr/.style={fill=white,inner sep=2.5pt}] \node (atoma) at (120:1.1){$1$}; \node (atomb) at (60:1.1){$2$}; \node (atomc) at (0:1.1){$3$}; \node (atomd) at (300:1.1){$4$}; \node (atome) at (240:1.1){$5$}; \node (atomf) at (180:1.1){$6$}; \path[-, thick](atoma) edge (atomb); \path[-, ultra thick, dotted](atomb) edge (atomc); \path[-, thick](atomc) edge (atomd); \path[-, ultra thick, dotted](atomd) edge (atome); \path[-, thick](atome) edge (atomf); \path[-, ultra thick, dotted](atomf) edge (atoma); \end{tikzpicture} \end{quote} Here, $1 \symmetric\pairstroke 2$ and $2 \symmetric\pairstroke 3$, but $1 \mathrel{\nindiscernibleno\relstroke} 3$. By Theorem \ref{thm:BigMap}, this establishes that $\mathrel{\indiscernibleyes\relstroke}$, $\mathrel{\indiscernibleno\relstroke}$, $\symmetric\pairstroke$ and $\relativeto\pairstroke$ are not transitive (in general). The reflexivity and symmetry of all the grades of indiscernibility are immediate from their definitions, as is the transitivity of $=$, $\indiscernibleno$, $\mathrel{\indiscernibleyes\monadstroke}$ and $\mathrel{\indiscernibleno\monadstroke}$. It is routine to check that all three grades of symmetry are symmetric and reflexive, and that $\symmetric\totalstroke$ and $\symmetric\barestroke$ are transitive. Lemma \ref{lem:QuotientSymHom} entails that the same is true for the respective grades of relativity. \end{proof} \end{thm}\noindent Since identity is surely transitive, Theorem \ref{lem:EquivalenceIndiscernibility} might seem to provide a knockdown argument against treating any of $\mathrel{\indiscernibleyes\relstroke}, \mathrel{\indiscernibleno\relstroke}, \symmetric\pairstroke$ and $\relativeto\pairstroke$ as a criterion of identity. However, this point is a little more subtle than it might initially seem. Consider the discussion of $\model{A}$, at the end of \S\ref{s:Entailments}. $\model{A}$ might have seemed to present a counterexample to treating $\indiscernibleno$ as a criterion of identity. But any philosopher who advocates such a criterion, such as Fran, will maintain that we can make sense of $\model{A}$ by (and only by) treating it as a structure of some artificially \emph{restricted} signature. At that point, $\model{A}$ no longer presents a counterexample to Fran's proposed criterion of identity, which she advances with respect to some \emph{richer} signature. With this in mind, consider Rach, a philosopher who advocates $\relativeto\pairstroke$ as a criterion of identity. $\model{G}$ might seem to pose problems for Rach. But if $\model{G}$ is presented with regard to Rach's preferred signature, then it violates her proposed criterion of identity even before we consider issues about transitivity: after all, $\model{G}$ is to contain objects which are distinct but (`genuinely') pairwise symmetric. Accordingly, Rach will maintain that we can make sense of $\model{G}$ by (and only by) treating it as a structure of some artificially \emph{restricted} signature. And at that point, $\model{G}$ no longer demonstrates the non-transitivity of Rach's proposed criterion of identity, which she advances with respect to some \emph{richer} signature. The situation, then, is slightly odd. 
From the perspective of anyone who thinks that identity is more fine-grained than any of $\mathrel{\indiscernibleyes\relstroke}$, $\mathrel{\indiscernibleno\relstroke}$, $\symmetric\pairstroke$ and $\relativeto\pairstroke$, these four grades of discrimination fail to behave like identity in an absolutely crucial sense, in failing to be transitive. (This is why \citeauthor{LadymanLinneboPettigrew:IDPL} \cite[p.\ 23]{LadymanLinneboPettigrew:IDPL} suggest that $\mathrel{\indiscernibleyes\relstroke}$ and $\mathrel{\indiscernibleno\relstroke}$ violate a plausible `minimal requirement' on any notion of indiscernibility.) But it does not immediately follow that one cannot \emph{propose} one of these four grades as a criterion of identity. \section{Connections to definability theory}\label{s:DefinabilityTheory} I now want to explore some natural technical questions which have not featured on the radar of philosophers interested in grades of discrimination. These questions concern the relationship between grades of discrimination and \emph{elementary extensions}, and they relate to definability theory. My answers to these questions, together with the Galois Connection of \S\ref{s:Galois}, will yield interesting entailments between the different grades in special cases (to be discussed in \S\ref{s:FinitaryCase}). To be clear on terminology: \begin{define} Let $\model{M}$ and $\model{N}$ be $\lang{L}$-structures. Say that $\model{M} \prec\yesstroke \model{N}$ iff for all $n < \omega$, all $\phi \in \lang{L}\yesstroke_n$, and all $\overline{e} \in M^n$: \begin{align*} \model{M} \models \phi(\overline{e}) &\text{ iff }\model{N}\models \phi(\overline{e}) \end{align*} Say that $\model{M} \prec\nostroke \model{N}$ iff the above holds with $\lang{L}\nostroke_n$ in place of $\lang{L}\yesstroke_{n}$. \end{define}\noindent There is a classic result connecting $\lang{L}\yesstroke$-indiscernibility with the existence of a symmetry in some elementary extension (see e.g.\ \citeauthor{Marker:MT} \cite[Proposition 4.1.5]{Marker:MT}): \begin{thm}\label{thm:YesSvenStep} For any $\lang{L}$-structure $\model{M}$, the following are equivalent: \begin{enumerate}[nolistsep] \item $\model{M} \models \phi(\overline{a}) \liff \phi(\overline{b})$, for all $\phi \in \lang{L}\yesstroke_n$ \item There is an $\model{N} \succ\yesstroke \model{M}$ and a symmetry $\pi$ on $\model{N}$ such that $\pi(\overline{a}) = \overline{b}$ \end{enumerate} \end{thm}\noindent For present purposes, the immediate import of Theorem \ref{thm:YesSvenStep} is that it yields a new way to characterise $\mathrel{\indiscernibleyes\relstroke}$ and $\mathrel{\indiscernibleyes\monadstroke}$: \begin{lem}\label{lem:YesSvenConsequence} For any $\lang{L}$-structure $\model{M}$: \begin{enumerate}[nolistsep] \item\label{yesmonadextend} $a \mathrel{\indiscernibleyes\monadstroke} b$ in $\model{M}$ {iff} there is an $\model{N} \succ\yesstroke \model{M}$ in which $a \symmetric\barestroke b$ \item\label{yespairextend} $a \mathrel{\indiscernibleyes\relstroke} b$ in $\model{M}$ {iff} there is an $\model{N} \succ\yesstroke \model{M}$ in which $a \symmetric\pairstroke b$ \end{enumerate} \end{lem}\noindent This raises a natural question: Is there an $\lang{L}\nostroke$-analogue of Theorem \ref{thm:YesSvenStep}? There certainly is; but to show this, I need two definitions. 
First, I need the ordinary notion of a diagram:\footnote{Whilst this notion of diagram invokes `$=$', \citeauthor{Dellunde:EFL} \cite{Dellunde:EFL} shows that there is a perfectly workable notion of diagram which does not employ `$=$'.} \begin{define}\label{define:ElemDiagram} Let $\lang{L}$ be any signature and $X$ be any set. Then $\lang{L}(X)$ is the signature formed by augmenting $\lang{L}$ with each member of $X$ as a (new) constant. Where $\model{M}$ is an $\lang{L}$-structure, $\eldiag\yesstroke(\model{M})$ is the set of $\lang{L}\yesstroke(M)$-sentences satisfied by the $\lang{L}(M)$-structure formed by letting each $e \in M$ name itself. \end{define}\noindent Next, I need the $\lang{L}\nostroke$-analogue for a partial elementary map: \begin{define} Let $\model{M}, \model{N}$ be $\lang{L}$-structures. A \emph{proto-correspondence from $\model{M}$ to $\model{N}$} is any relation $\Pi \subseteq M \times N$ such that, for all $n < \omega$, all $\phi \in \lang{L}\nostroke_n$, and all $\overline{d}\Pi\overline{e}$: \begin{align*} \model{M} \models \phi(\overline{d})&\text{ iff }\model{N} \models \phi(\overline{e}) \end{align*} \end{define}\noindent So a near-correspondence from $\model{M}$ to $\model{N}$ is a proto-correspondence with domain $M$ and range $N$. The proof of the $\lang{L}\nostroke$-analogue of Theorem \ref{thm:YesSvenStep} now amounts to little more than a tweak to \citeauthor{Marker:MT}'s proof of Theorem \ref{thm:YesSvenStep}.\footnote{In more detail: my Lemma \ref{lem:AddOne} tweaks \citeauthor{Marker:MT}'s Lemma 4.1.6; my Lemma \ref{lem:OneToAll} tweaks \citeauthor{Marker:MT}'s Corollary 4.1.7 (cf.\ also \citeauthor{CasanovasEtAl:EEEFL}'s \cite{CasanovasEtAl:EEEFL} Lemma 2.7); and my Lemma \ref{lem:SvenoniusCoolDiagram} tweaks \citeauthor{Marker:MT}'s Proposition 4.1.5. The main difference is that I use proto-correspondences rather than partial elementary maps, and in the final step I require a detour, via Lemma \ref{lem:GaloisConsequences}, to obtain a relativity.} I start with two type-realising constructions: \begin{lem}\label{lem:AddOne} Let $\Pi$ be a proto-correspondence from $\model{M}$ to $\model{N}$. For any $a \in M$, there is an $\model{O} \succ\yesstroke \model{N}$ with some $b \in O$ such that $\Pi \cup \{\langle a, b\rangle\}$ is a proto-correspondence from $\model{M}$ to $\model{O}$. \begin{proof} Define: \begin{align*} \Gamma &= \{\phi(v, \overline{e}) \in \lang{L}\nostroke_1(\text{rng}(\Pi)) \mid \text{for some }n< \omega\text{, some }\phi \in \lang{L}\nostroke_{n+1}\text{, and some }\overline{d}\Pi\overline{e},\\ &\phantom{= \{\phi(v, \overline{e}) \in \lang{L}\nostroke_1(\text{rng}(\Pi)) \mid .}\text{we have }\model{M} \models \phi(a, \overline{d})\} \end{align*} Consider any $\phi(v, \overline{e}) \in \Gamma$; since $\model{M} \models \exists v \phi(v, \overline{d})$ and $\Pi$ is a proto-correspondence, $\model{N} \models \exists v \phi(v, \overline{e})$. The same holds for any finite conjunction of members of $\Gamma$, since each such conjunction is again witnessed by $a$ in $\model{M}$. Equally, $\model{N}$ can be treated as a model of $\eldiag\yesstroke(\model{N})$. So any finite subset of $\Gamma \cup \eldiag\yesstroke(\model{N})$ is satisfiable. Hence, by Compactness, there is a model of $\Gamma \cup \eldiag\yesstroke(\model{N})$, which we can regard as $\model{O} \succ\yesstroke \model{N}$. Now simply let $\Sigma = \Pi \cup \{\langle a, b \rangle\}$, where $\model{O} \models \Gamma(b)$. \end{proof} \end{lem} \begin{lem}\label{lem:OneToAll} Let $\Pi$ be a proto-correspondence from $\model{M}$ to $\model{N}$ with $\model{M} \prec\yesstroke \model{N}$.
Then there is some $\model{O} \succ\yesstroke \model{N}$ and a proto-correspondence $\Sigma \supseteq \Pi^{-1}$ from $\model{N}$ to $\model{O}$ with $\text{dom}(\Sigma)=N$. \begin{proof} We construct an elementary chain. Since $\Pi$ is a proto-correspondence and $\model{M} \prec\yesstroke \model{N}$, we have that for all $\phi \in \lang{L}\nostroke_n$ and all $\overline{a}\Pi\overline{b}$: \begin{align*} \model{N} \models \phi(\overline{b}) \text{ iff }\model{M} \models \phi(\overline{a})\text{ iff }\model{N} \models \phi(\overline{a}) \end{align*} Defining $\model{O}_0 = \model{N}$ and $\Sigma_0 = \Pi^{-1}$, observe that $\Sigma_0$ is a proto-correspondence from $\model{N}$ to $\model{O}_0$. This is our initial stage in the chain. Now let $\{e_\alpha \mid \alpha < \kappa\}$ exhaustively enumerate $N$, let $D_\alpha = \text{dom}(\Sigma_0) \cup \{e_\beta \mid \beta < \alpha\}$ for each $\alpha < \kappa$, and proceed recursively: \begin{enumerate}[nolistsep] \item[---] Stage $\alpha + 1$: Given a proto-correspondence $\Sigma_\alpha$ from $\model{N}$ to $\model{O}_{\alpha}$ with $\text{dom}(\Sigma_\alpha) = D_\alpha$, use Lemma \ref{lem:AddOne} to obtain an $\model{O}_{\alpha+1} \succ\yesstroke \model{O}_{\alpha}$ and a proto-correspondence $\Sigma_{\alpha + 1} \supseteq \Sigma_{\alpha}$ from $\model{N}$ to $\model{O}_{\alpha + 1}$ with $\text{dom}(\Sigma_{\alpha+1}) = D_{\alpha+1}$. \item[---] Stage $\alpha$, with $\alpha$ a limit ordinal: let $\model{O}_\alpha = \bigcup_{\beta < \alpha} \model{O}_\beta$ and $\Sigma_\alpha = \bigcup_{\beta < \alpha} \Sigma_\beta$. \end{enumerate} Now let $\model{O} = \bigcup_{\alpha < \kappa}\model{O}_\alpha$ and $\Sigma = \bigcup_{\alpha < \kappa} \Sigma_\alpha$. \end{proof} \end{lem} \begin{lem}\label{lem:SvenoniusCoolDiagram} Let $\model{M}$ be an $\lang{L}$-structure with $\overline{a}, \overline{b} \in M^n$ such that $\model{M} \models \phi(\overline{a}) \liff \phi(\overline{b})$ for all $\phi \in\lang{L}\nostroke_n$. Then there is some $\model{N} \succ\yesstroke \model{M}$ and a near-correspondence $\Pi$ from $\model{N}$ to $\model{N}$ such that $\overline{a}\Pi\overline{b}$. \begin{proof} Given $\model{M}, \overline{a}, \overline{b}$ as described, we have a proto-correspondence $\Pi_0$ from $\model{M}$ to $\model{M}$ with $\overline{a}\Pi_0\overline{b}$. Setting $\model{M} = \model{M}_0 = \model{N}_0$, we can repeatedly apply Lemma \ref{lem:OneToAll} to construct an elementary chain (solid arrows indicate a proto-correspondence): \begin{center} \begin{tikzpicture}[description/.style={fill=white,inner sep=2pt}] \matrix (m) [matrix of math nodes, row sep=4em, column sep=2em, text height=1.5ex, text depth=0.25ex] { \model{M}_0 & & \model{M}_1 & & \model{M}_2 & \ldots\\ & \model{N}_0 & & \model{N}_1 & & \model{N}_2 & \ldots\\ }; \path[->,font=\scriptsize] (m-1-1.300) edge node[description] {$\Pi_0$}(m-2-2.120) (m-1-3.300) edge node[description] {$\Pi_1$} (m-2-4.120) (m-1-5.300) edge node[description] {$\Pi_2$} (m-2-6.120) (m-2-2.60) edge node[description] {$\Sigma_0$} (m-1-3.240) (m-2-4.60) edge node[description] {$\Sigma_1$} (m-1-5.240); \end{tikzpicture} \end{center} where both $\Pi_i \subseteq \Sigma_i^{-1} \subseteq \Pi_{i+1}$ and $\model{M}_i \prec\yesstroke \model{N}_i \prec\yesstroke \model{M}_{i+1}$ for each $i < \omega$.
Define: \begin{align*} \model{N} &= \bigcup_{i < \omega} \model{N}_i = \bigcup_{i < \omega} \model{M}_i\\ \Pi &= \bigcup_{i < \omega} \Pi_i \end{align*} It is routine to check that $\Pi$ and $\model{N}$ have the required properties. \end{proof} \end{lem}\noindent We can now obtain our $\lang{L}\nostroke$-analogue of Theorem \ref{thm:YesSvenStep}: \begin{thm}\label{thm:NoSvenStep}For any $\lang{L}$-structure $\model{M}$, the following are equivalent: \begin{enumerate}[nolistsep] \item\label{Svenonius:Map} $\model{M} \models \phi(\overline{a}) \liff \phi(\overline{b})$, for every $\phi \in \lang{L}\nostroke_n$ \item\label{Svenonius:RelativityYes} There is an $\model{N} \succ\yesstroke \model{M}$ and a relativity $\Pi$ on $\model{N}$ such that $\overline{a}\Pi\overline{b}$ \item\label{Svenonius:RelativityNo} There is an $\model{N} \succ\nostroke \model{M}$ and a relativity $\Pi$ on $\model{N}$ such that $\overline{a}\Pi\overline{b}$ \end{enumerate} \begin{proof}(\ref{Svenonius:Map}) $\Rightarrow$ (\ref{Svenonius:RelativityYes}). Use Lemma \ref{lem:SvenoniusCoolDiagram} to obtain a near-correspondence, then use Lemma \ref{lem:GaloisConsequences} to extend this to a relativity. (\ref{Svenonius:RelativityYes}) $\Rightarrow$ (\ref{Svenonius:RelativityNo}). Trivial. (\ref{Svenonius:RelativityNo}) $\Rightarrow$ (\ref{Svenonius:Map}). $\Pi$ is a relativity, and hence a near-correspondence by Lemma \ref{lem:RelativenessPreservation}; the result now follows since $\model{M} \prec\nostroke\model{N}$. \end{proof} \end{thm}\noindent This Theorem lends yet more weight to the claim that relativeness correspondences are the $\lang{L}\nostroke$-analogue of symmetries. Moreover, it immediately yields a new way to characterise $\mathrel{\indiscernibleno\relstroke}$ and $\mathrel{\indiscernibleno\monadstroke}$ (compare Lemma \ref{lem:YesSvenConsequence}): \begin{lem}\label{lem:NoSvenConsequence} For any $\lang{L}$-structure $\model{M}$: \begin{enumerate}[nolistsep] \item $a \mathrel{\indiscernibleno\monadstroke} b$ in $\model{M}$ \text{iff} there is an $\model{N} \succ\yesstroke \model{M}$ in which $a \relativeto\barestroke b$ \item $a \mathrel{\indiscernibleno\relstroke} b$ in $\model{M}$ \text{iff} there is an $\model{N} \succ\yesstroke \model{M}$ in which $a \relativeto\pairstroke b$ \end{enumerate}\noindent Moreover, both claims hold with $\succ\nostroke$ in place of $\succ\yesstroke$. \end{lem}\noindent Before continuing with the main aims of this paper, it is worth briefly stopping to smell the roses. Theorem \ref{thm:YesSvenStep} is sometimes used as a stepping stone to the following foundational result of definability theory (notation clarified in endnote):\footnote{\citeauthor{Beth:PMTD} \cite{Beth:PMTD} proved (\ref{YesSven:definable}) $\Leftrightarrow$ (\ref{YesSven:Beth}); \citeauthor{Svenonius:TPM} \cite{Svenonius:TPM} proved (\ref{YesSven:definable}) $\Leftrightarrow$ (\ref{YesSven:invariance}). $(\model{M}, U)$ is the $\lang{L}\mathord{\cup}\{R\}$-structure formed from $\model{M}$ by allowing $R$ to pick out $U$.
As one would expect, $\pi(V) = \{\pi(\overline{e}) \mid \overline{e} \in V\}$, and equally $\Pi(V) = \{\overline{e} \mid \text{there are }\overline{d} \in V\text{ such that }\overline{d}\Pi\overline{e}\}$.} \begin{thm}[Beth--Svenonius Theorem, $\lang{L}\yesstroke$-case]\label{thm:YesSven} For any $\lang{L}$-structure $\model{M}$ with $R \notin \lang{L}$ and $U \subseteq M^n$, the following are equivalent: \begin{enumerate}[nolistsep] \item\label{YesSven:definable} $(\model{M}, U) \models \forall \overline{v}(\phi(\overline{v}) \liff R\overline{v})$ for some $\phi \in \lang{L}\yesstroke_n$. \item\label{YesSven:invariance} For every $(\model{N}, V) \succ\yesstroke (\model{M}, U)$ and every symmetry $\pi$ of $\model{N}$: $V = \pi(V)$. \item\label{YesSven:Beth} For any $\lang{L}$-structure $\model{N}$ and any sets $V_0, V_1 \subseteq N^n$: if $(\model{N}, V_0)$, $(\model{N}, V_1)$ and $(\model{M}, U)$ all satisfy the same $\lang{L}\yesstroke$-sentences, then $V_0 = V_1$. \end{enumerate} \end{thm}\noindent Pleasingly, we can use Theorem \ref{thm:NoSvenStep} as a stepping-stone to an $\lang{L}\nostroke$-analogue of this result; indeed, the main steps are exactly as in the $\lang{L}\yesstroke$-case:\footnote{The only difficult step in either Theorem is (\ref{NoSven:invarianceYes}) $\Rightarrow$ (\ref{NoSven:definable}). For the $\lang{L}\yesstroke$-case, see e.g.\ \citeauthor{Poizat:MT} \cite[Proposition 9.2]{Poizat:MT}. To prove the $\lang{L}\nostroke$-case, we simply tweak Poizat's proof by invoking Theorem \ref{thm:NoSvenStep} rather than Theorem \ref{thm:YesSvenStep}, and considering $n$-types in the sense of $\lang{L}\nostroke_n$, rather than $\lang{L}\yesstroke_n$. \citeauthor{Dellunde:EFSM} \cite[p.\ 5]{Dellunde:EFSM} and \citeauthor{KeislerMiller:CWE} \cite[p.\ 3]{KeislerMiller:CWE} have shown that $\lang{L}\nostroke_n$-types behave as one would hope.} \begin{thm}[Beth--Svenonius Theorem, $\lang{L}\nostroke$-case]\label{thm:NoSven} For any $\lang{L}$-structure $\model{M}$ with $R \notin \lang{L}$ and $U \subseteq M^n$, the following are equivalent: \begin{enumerate}[nolistsep] \item\label{NoSven:definable} $(\model{M}, U) \models \forall \overline{v}(\phi(\overline{v}) \liff R\overline{v})$ for some $\phi \in \lang{L}\nostroke_n$. \item\label{NoSven:invarianceYes} For every $(\model{N}, V) \succ\yesstroke (\model{M}, U)$ and every relativity $\Pi$ of $\model{N}$: $V = \Pi(V)$ \item\label{NoSven:invarianceNo} For every $(\model{N}, V) \succ\nostroke (\model{M}, U)$ and every relativity $\Pi$ of $\model{N}$: $V = \Pi(V)$ \item\label{NoSven:Beth} For any $\lang{L}$-structure $\model{N}$ and any sets $V_0, V_1 \subseteq N^n$: if $(\model{N}, V_0)$, $(\model{N}, V_1)$ and $(\model{M}, U)$ all satisfy the same $\lang{L}\nostroke$-sentences, then $V_0 = V_1$. \end{enumerate} \end{thm}\noindent \section{Entailments in the finitary case}\label{s:FinitaryCase} The results of the previous section immediately yield a special case of Theorem \ref{thm:BigMap}, obtained by restricting our attention to \emph{finitary} structures.\footnote{An alternative route to Theorem \ref{thm:FiniteMap} merits comment. We can use finitary isomorphism systems to prove the coincidence of grades of $\lang{L}\yesstroke$-indiscernibility with grades of symmetry in finite structures, without invoking the results from \S\ref{s:DefinabilityTheory}. (For an introduction to finitary isomorphism systems, see \citeauthor{EbbinghausEtAl:ML} \cite[chapter XI]{EbbinghausEtAl:ML}.)
\citeauthor{CasanovasEtAl:EEEFL} \cite[Definitions 4.1--4.2]{CasanovasEtAl:EEEFL} define the $\lang{L}\nostroke$-analogue of finitary isomorphism systems. It turns out that we can use these, analogously, to prove the coincidence of grades of $\lang{L}\nostroke$-indiscernibility with grades of relativity in finite structures.} This special case has already attracted some attention, since it is philosophically interesting,\footnote{\citeauthor{CaultonButterfield:KILM} \cite[Theorem 2]{CaultonButterfield:KILM} prove a special case of the mutual entailment between $\symmetric\pairstroke$ and $\mathrel{\indiscernibleyes\relstroke}$, and $\symmetric\barestroke$ and $\mathrel{\indiscernibleyes\monadstroke}$, on the assumption that $\lang{L}$ is finite and relational. \citeauthor{Ketland:II}'s \cite*[p.\ 2]{Ketland:II} attention is entirely restricted to finite relational signatures. \citeauthor{LinneboMuller:WD} \cite[Theorem 3]{LinneboMuller:WD} note that \emph{witness-discernibility} (a further notion, which I have not discussed) is equivalent to $\mathrel{\indiscernibleno\relstroke}$ in finite structures, and outline several reasons for focussing on finite structures.} and the following result completes the picture. \begin{thm}[Entailments between the grades, finite structures]\label{thm:FiniteMap} These Hasse Diagrams characterise the entailments between our grades of discrimination, when we restrict our attention to finite structures: \begin{quote} \begin{tikzpicture} \matrix (m) [matrix of math nodes, row sep=2em, column sep=2em, text height=1.5ex, text depth=0.25ex] { & & \\ = & \\ \symmetric\totalstroke & \indiscernibleno \\ \mathrel{\indiscernibleyes\relstroke}, \symmetric\pairstroke & \mathrel{\textsf{r}}tal \\ \mathrel{\indiscernibleyes\monadstroke}, \symmetric\barestroke & \mathrel{\indiscernibleno\relstroke}, \relativeto\pairstroke \\ & \mathrel{\indiscernibleno\monadstroke}, \relativeto\barestroke \\}; \draw(m-2-1)--(m-3-1)--(m-4-1)--(m-5-1)--(m-6-2); \draw(m-2-1)--(m-3-2)--(m-4-2)--(m-5-2)--(m-6-2); \draw(m-3-1)--(m-4-2); \draw(m-4-1)--(m-5-2); \end{tikzpicture} \begin{tikzpicture} \matrix (m) [matrix of math nodes, row sep=2em, column sep=2em, text height=1.5ex, text depth=0.25ex] { = & & \\ \indiscernibleno & \\ \symmetric\totalstroke &\\ \mathrel{\indiscernibleyes\relstroke}, \symmetric\pairstroke & \mathrel{\textsf{r}}tal \\ \mathrel{\indiscernibleyes\monadstroke}, \symmetric\barestroke & \mathrel{\indiscernibleno\relstroke}, \relativeto\pairstroke \\ & \mathrel{\indiscernibleno\monadstroke}, \relativeto\barestroke \\}; \draw(m-1-1)--(m-2-1)--(m-3-1)--(m-4-1)--(m-5-1)--(m-6-2); \draw(m-4-2)--(m-5-2)--(m-6-2); \draw(m-3-1)--(m-4-2); \draw(m-4-1)--(m-5-2); \end{tikzpicture} \end{quote} The left diagram is restricted to finite structures with arbitrary signatures; the right diagram is restricted to finite structures with \emph{relational} signatures. \begin{proof} Most of this is supplied by Theorem \ref{thm:BigMap}. For the remainder observe that if $\model{M}$ is finite, then $\model{N} \succ\yesstroke \model{M}$ iff $\model{M} = \model{N}$. It follows from Lemma \ref{lem:YesSvenConsequence} that $\mathrel{\indiscernibleyes\relstroke}$ entails $\symmetric\pairstroke$ and that $\mathrel{\indiscernibleyes\monadstroke}$ entails $\symmetric\barestroke$; and similarly with Lemma \ref{lem:NoSvenConsequence}. 
\end{proof} \end{thm}\noindent However, a little work will yield an even stronger result: the grades of relativity and the grades of $\lang{L}\nostroke$-indiscernibility also entail each other when the structure's $\indiscernibleno$-quotient is finite. To show this, we need a few results. The first tells us when $\indiscernibleno$ is definable in a structure:\footnote{For the case where $\lang{L}$ is finite and relational, see \citeauthor{Ketland:II} \cite[Definition 2.3]{Ketland:II}.} \begin{lem}\label{lem:QuotientBehaviour}Let $\model{M}$ be any $\lang{L}$-structure. If either $\lang{L}$ is finite and relational or $\quotient{M}$ is finite, then there is an $\lang{L}\nostroke_2$-formula which defines $\indiscernibleno$ in $\model{M}$. However, the restrictions are necessary. \begin{proof} \emph{Case when $\lang{L}$ is finite and relational.} Let $\phi_1, \ldots, \phi_n$ enumerate all the atomic $\lang{L}$-formulas. By Lemma \ref{lem:AlternateCharacterisationOfFOI}, the following $\lang{L}\nostroke_2$-formula defines $\indiscernibleno$ in $\model{M}$: \begin{align*} \bigwedge_{i = 1}^n \forall \overline{v} (\phi_i (x,\overline{v}) \liff \phi_i(y, \overline{v})) \end{align*} \emph{Case when $\quotient{M}$ is finite.} Let $\quotient{e_1}, \ldots, \quotient{e_m}$ exhaustively enumerate $\quotient{M}$ without repetition. So for all $i \neq j$ between $1$ and $m$, we have ${e_i} \nindiscernibleno {e_j}$; hence by Lemma \ref{lem:AlternateCharacterisationOfFOI} there is some $\phi_{i,j} \in \lang{L}\nostroke_2$ such that $\model{M} \nmodels \phi_{i,j}({e_i}, {e_i}) \liff \phi_{i,j}({e_i}, {e_j})$, and so the following $\lang{L}\nostroke_2$-formula defines $\indiscernibleno$ in $\model{M}$: \begin{align*} \bigwedge_{i \neq j}(\phi_{i,j}(x, x) \liff \phi_{i,j}(x, y)) \end{align*} \emph{The necessity of the restrictions.} Let $\lang{L}$ contain one-place predicates $P_i$ for all $i < \omega$, and a single two-place predicate $R$.
Define: \begin{align*} H :=& \mathbb{N}\\ P_i^\model{H} := & \{6i, 6i+1\}, \text{for all }0 < i < \omega\\ R^\model{H} := & \{\langle 2, 1\rangle, \langle 1, 0\rangle\} \cup \{\langle 2, 6n -2\rangle, \langle 6n-2, 6n\rangle, \langle 6n-2, 6n+2\rangle \mid 0 < n < \omega\}\ \cup \\ & \{\langle 3, 6n -1\rangle, \langle 6n-1, 6n+1\rangle, \langle 6n-1, 6n+3\rangle \mid 0 < n < \omega\} \end{align*} We can represent $\model{H}$ more perspicuously as follows: \begin{quote}\centering \begin{tikzpicture} \node(a) {$2$}; \node[right=3em of a](a1) {$\phantom{1}4$}; \node[right=3em of a1](a11) {$\phantom{1}6$}; \node[below=0em of a11](a10) {$\phantom{1}8$}; \path[->] (a1) edge (a11) edge (a10); \node[right=0em of a11](P1) {\textcolor{gray}{$P_1$}}; \node[below=2em of a1](a2) {$10$}; \node[right=3em of a2](a21) {$12$}; \node[below=0em of a21](a20) {$14$}; \path[->] (a2) edge (a21) edge (a20); \node[right=0em of a21](P2) {\textcolor{gray}{$P_2$}}; \node[below=2em of a2](a3) {$16$}; \node[right=3em of a3](a31) {$18$}; \node[below=0em of a31](a30) {$20$}; \path[->] (a3) edge (a31) edge (a30); \node[right=0em of a31](P3) {\textcolor{gray}{$P_3$}}; \node[below=2em of a3](adots) {$\vdots$}; \node[above=2em of a1](a0) {$\phantom{1}1$}; \node[right=3em of a0](a00) {$\phantom{1}0$}; \draw[dashed, ->] (a)--(adots); \draw[->] (a0)--(a00); \path[->] (a) edge (a1) edge (a2) edge (a3) edge (a0); \node[right=0em of P1](b11) {$\phantom{2}7$}; \node[right=3em of b11](b1) {$\phantom{1}5$}; \node[right=3em of b1](b) {$3$}; \node[below=0em of b11](b10) {$\phantom{2}9$}; \draw[->] (b)--(b1); \path[->] (b1) edge (b11) edge (b10); \node[right=0em of P2](b21) {$13$}; \node[right=3em of b21](b2) {$11$}; \node[below=0em of b21](b20) {$15$}; \node[right=0em of P3](b31) {$19$}; \node[right=3em of b31](b3) {$17$}; \node[below=0em of b31](b30) {$21$}; \node[below=2em of b3](bdots) {$\vdots$}; \draw[->] (b)--(b2); \path[->] (b2) edge (b21) edge (b20); \draw[->] (b)--(b3); \path[->] (b3) edge (b31) edge (b30); \draw[->][dashed, thick] (b)--(bdots); \draw[dotted] (P1) ellipse (3em and 0.7em); \draw[dotted] (P2) ellipse (3em and 0.7em); \draw[dotted] (P3) ellipse (3em and 0.7em); \end{tikzpicture} \end{quote} I claim that $\quotient{\model{H}}\models \phi(\quotient{2}) \liff \phi(\quotient{3})$ for all $\phi \in \lang{L}\nostroke_1$. To prove this, fix $\phi \in \lang{L}\nostroke_1$ and let $\lang{K}$ be the (necessarily finite) set of $\lang{L}$-predicates appearing in $\phi$. Where $\model{H}^*$ is the $\lang{K}$-reduct of $\model{H}$, we have $\quotient{2} \symmetric\pairstroke \quotient{3}$ in $\quotient{\model{H}^*}$, and hence $2 \relativeto\pairstroke 3$ in $\model{H}^*$ by Lemma \ref{lem:QuotientSymHom}. Lemma \ref{lem:RelativenessPreservation} now yields that $\model{H}^* \models \phi(2) \liff \phi(3)$, and hence $\model{H} \models \phi(2) \liff \phi(3)$. Now apply Lemma \ref{lem:Quotient}. However, where $\psi \in \lang{L}\yesstroke_1$ abbreviates: \begin{align*} \exists x (Rvx \land \forall y \forall z ((Rxy \land Rxz) \lonlyif y = z)) \end{align*} we have $\quotient{\model{H}} \models \psi(\quotient{2}) \land \lnot \psi(\quotient{3})$. So no $\lang{L}\nostroke_2$-formula can define $=$ in $\quotient{\model{H}}$: if some $\delta \in \lang{L}\nostroke_2$ did so, then replacing `$y = z$' in $\psi$ with $\delta(y, z)$ would yield an $\lang{L}\nostroke_1$-formula on which $\quotient{2}$ and $\quotient{3}$ disagree, contradicting what we have just shown. Hence no $\lang{L}\nostroke_2$-formula can define $\indiscernibleno$ in $\model{H}$, by Lemma \ref{lem:Quotient}. \end{proof} \end{lem}\noindent We already knew that $a \indiscernibleno b$ in $\model{M}$ iff $\quotient{a} = \quotient{b}$ in $\quotient{\model{M}}$.
Lemma \ref{lem:QuotientBehaviour} allows us, under special circumstances, to obtain analogous results for our other grades of indiscernibility: \begin{lem}\label{lem:IndiscernibilityReducts} Let $\model{M}$ be any $\lang{L}$-structure. If either $\lang{L}$ is finite and relational or $\quotient{M}$ is finite, then: \begin{enumerate}[nolistsep] \item $a \mathrel{\indiscernibleno\relstroke} b$ in $\model{M}$ iff $\quotient{a} \mathrel{\indiscernibleyes\relstroke} \quotient{b}$ in $\quotient{\model{M}}$ \item $a \mathrel{\indiscernibleno\monadstroke} b$ in $\model{M}$ iff $\quotient{a} \mathrel{\indiscernibleyes\monadstroke} \quotient{b}$ in $\quotient{\model{M}}$ \end{enumerate} \begin{proof} By Lemma \ref{lem:QuotientBehaviour}, given either assumption, some $\lang{L}\nostroke_2$-formula defines $\indiscernibleno$ in $\model{M}$. The same formula defines $=$ in $\quotient{\model{M}}$, and the result follows via Lemma \ref{lem:Quotient}. \end{proof} \end{lem}\noindent Finally, the Galois Connection of \S\ref{s:Galois} allows us to extend Theorem \ref{thm:FiniteMap}, as desired: \begin{lem}\label{lem:FiniteMapRelativeHalf} For structures with finite $\indiscernibleno$-quotients: \begin{enumerate}[nolistsep] \item\label{Finite:RelativePair} $\mathrel{\indiscernibleno\relstroke}$ entails $\relativeto\pairstroke$, and vice versa \item\label{Finite:RelativeBare} $\mathrel{\indiscernibleno\monadstroke}$ entails $\relativeto\barestroke$, and vice versa. \end{enumerate} \begin{proof} Combine Theorem \ref{thm:FiniteMap} with Lemmas \ref{lem:IndiscernibilityReducts} and \ref{lem:QuotientSymHom}. \end{proof} \end{lem}\noindent Note that Lemma \ref{lem:FiniteMapRelativeHalf} has no analogue for the grades of $\lang{L}\yesstroke$-indiscernibility and symmetry. To see this, let $\model{E}^*$ be the structure obtained from $\model{E}$ by making $R$ reflexive. Whilst $\quotient{\model{E}^*}$ has only two members and its signature is finite and relational, a Skolem Hull argument (as for $\model{E}$ in Lemma \ref{lem:BigMapCounterEntailments}) shows that $1 \mathrel{\indiscernibleyes\relstroke} e$ for any $e \in \mathbb{R} \setminus \mathbb{N}$, and yet no symmetry on $\model{E}^*$ sends $1$ to any element in $\mathbb{R} \setminus \mathbb{N}$. \section{Capturing grades of discrimination}\label{s:Capturing} All twelve grades of discrimination have fairly straightforward definitions. However, the grades of indiscernibility are defined in terms of satisfaction of object-language formulas, whereas the grades of symmetry and relativity are defined metalinguistically. It is natural to ask whether this is essential. More precisely, I shall ask which of the grades are \emph{capturable}, in the following sense: \begin{define} For any $\lang{L}$-structure $\model{M}$, say that $\Gamma \subseteq \lang{L}_2$ \emph{captures} $\mathrm{R}$ \emph{in $\model{M}$} iff: \begin{align*} \text{for all }a, b \in M: a\mathrm{R}b\text{ in }\model{M}&\text{ iff }\model{M} \models \phi(a,b)\text{ for every }\phi \in \Gamma \end{align*} Say that $\mathrm{R}$ is \emph{capturable$^+$ in $\model{M}$} iff some $\Gamma \subseteq \lang{L}\yesstroke_{2}$ captures $\mathrm{R}$ in $\model{M}$ (similarly for \emph{capturable$^-$}). Say that $\mathrm{R}$ is \emph{universally capturable$^+$} \text{iff} some single $\Gamma \subseteq \lang{L}\yesstroke_{2}$ captures $\mathrm{R}$ in every $\lang{L}$-structure (similarly for \emph{universally capturable$^-$}).
\end{define}\noindent I shall consider capturability for each of the three families of grades of discrimination, starting with the grades of indiscernibility: \begin{lem}\label{lem:CaptureIndiscernible} \begin{enumerate}[nolistsep] \item\label{CaptureIndiscernible:yes} $=$, $\mathrel{\indiscernibleyes\relstroke}$, $\mathrel{\indiscernibleyes\monadstroke}$ are universally capturable$^+$ \item\label{CaptureIndiscernible:no} $\indiscernibleno$, $\mathrel{\indiscernibleno\relstroke}$, $\mathrel{\indiscernibleno\monadstroke}$ are universally capturable$^-$ \item\label{CaptureIndiscernible:nonot} There is a structure in which none of $=$, $\mathrel{\indiscernibleyes\relstroke}$ and $\mathrel{\indiscernibleyes\monadstroke}$ is capturable$^-$ \end{enumerate} \begin{proof} (\ref{CaptureIndiscernible:yes}) and (\ref{CaptureIndiscernible:no}). Obvious. (\ref{CaptureIndiscernible:nonot}). Let $\model{I}$ be the following graph: \begin{quote} \centering \begin{tikzpicture}[descr/.style={fill=white,inner sep=2.5pt}] \matrix (m) [matrix of math nodes, row sep=0em, column sep=2em, text height=1.5ex, text depth=0.25ex] {\phantom{c} & 1 & 2 &3 \\ & 4 & 5 & 6\\ & 7 & 8 & 9 \\ }; \draw[-, thick] (m-1-2)--(m-1-3); \draw[-, thick] (m-2-2)--(m-2-3); \draw[-, thick] (m-3-2)--(m-3-3)--(m-3-4); \end{tikzpicture} \end{quote} For all $\phi \in \lang{L}\nostroke_{2}$, $\model{I} \models \phi(1, 4) \liff \phi(1, 7)$ even though $1 \mathrel{\indiscernibleyes\relstroke} 4$ and $1 \mathrel{\nindiscernibleyes\monadstroke} 7$. \end{proof} \end{lem} \begin{lem}\label{lem:CaptureSymmetries} \begin{enumerate}[nolistsep] \item\label{CaptureSymTotal} $\symmetric\totalstroke$ is universally capturable$^+$ \item\label{spsmyesfinite} For finite structures: $\symmetric\pairstroke$ and $\symmetric\barestroke$ are universally capturable$^+$ \item \label{spsmyes} There is a structure in which $\symmetric\pairstroke$ and $\symmetric\barestroke$ are not capturable$^+$ \item\label{SymmetryUncapturable} There is a finite structure in which none of $\symmetric\totalstroke$, $\symmetric\pairstroke$ and $\symmetric\barestroke$ is capturable$^-$ \end{enumerate} \begin{proof} (\ref{CaptureSymTotal}). For each atomic $\phi \in \lang{L}\yesstroke_{n+2}$, define: \begin{align*} x \simeq_\phi y &:= \forall \overline{v} \left(\bigwedge_{i=1}^n(v_i \neq x \land v_i \neq y) \lonlyif \left[\phi (x, y, \overline{v}) \liff \phi(y, x, \overline{v}) \right]\right) \end{align*} By Lemma \ref{lem:IsomorphismPreservation}, $\symmetric\totalstroke$ is universally captured$^+$ by the set of all such $\simeq_\phi$. (\ref{spsmyesfinite}). From Theorem \ref{thm:FiniteMap} and Lemma \ref{lem:CaptureIndiscernible}. (\ref{spsmyes}). Let $\model{J}$ comprise two disjoint copies of the complete countable graph, with a disjoint copy of a complete uncountable graph, i.e.: \begin{align*} J := & \mathbb{R}\\ R^{\model{J}} :=& \{\langle m, n\rangle \in \mathbb{N}^2\mid m \neq n\text{ and }m+n\text{ is even}\} \cup \{\langle p, q \rangle \in (\mathbb{R} \setminus \mathbb{N})^2 \mid p \neq q\} \end{align*} By taking a Skolem Hull containing $1, 2$ and some $e \in \mathbb{R} \setminus \mathbb{N}$, it is clear that: \begin{align*} \model{J} \models \phi(1, 2) \liff \phi(1,e) \end{align*} for any $\phi \in \lang{L}\yesstroke_2$. However, $1 \symmetric\pairstroke 2$ in $\model{J}$, whereas $1 \nsymmetric\barestroke e$ in $\model{J}$. 
(\ref{SymmetryUncapturable}) In $\model{I}$ from Lemma \ref{lem:CaptureIndiscernible}, $1 \symmetric\totalstroke 2$, whereas $7 \nsymmetric\barestroke 8$. However, for all $\phi \in \lang{L}\nostroke_{2}$, $\model{\quotient{I}} \models \phi(\quotient{1}, \quotient{2}) \liff \phi(\quotient{7}, \quotient{8})$ and hence $\model{I} \models \phi(1, 2) \liff \phi(7, 8)$. \end{proof} \end{lem} \begin{lem}\label{lem:RelativityCapturable} \begin{enumerate}[nolistsep] \item\label{relativetotalcapturable} $\mathrel{\textsf{r}}tal$ is universally capturable$^-$ \item\label{rprmno} For structures with finite $\indiscernibleno$-quotients: $\relativeto\pairstroke$ and $\relativeto\barestroke$ are universally cap\-tu\-ra\-ble$^-$ \item\label{rprmyes} There is a structure in which neither of $\relativeto\pairstroke$ and $\relativeto\barestroke$ is capturable$^+$ \end{enumerate} \begin{proof} (\ref{relativetotalcapturable}). Let $\Gamma$ be the set of all $\lang{L}\nostroke_2$-formulas of the form: \begin{align*} \forall \overline{v} \left(\bigwedge_{i=1}^{n}\left[\phi_{i}(x,x) \land \lnot \phi_{i}(x, v_{i}) \land \psi_{i}(y,y) \land \lnot \psi_{i}(y, v_{i})\right] \lonlyif \left[\theta(x, y, \overline{v}) \liff \theta(y, x, \overline{v})\right]\right) \end{align*} for any $n < \omega$, any $\phi_1, \ldots, \phi_n, \psi_1, \ldots, \psi_n \in \lang{L}\nostroke_2$, and any $\theta \in \lang{L}\nostroke_{n+2}$. I claim that $\Gamma$ captures $\mathrel{\textsf{r}}tal$ in any $\lang{L}$-structure $\model{M}$. First, suppose $a \mathrel{\textsf{r}}tal b$ in $\model{M}$. Fix some $\gamma \in \Gamma$, and some $\overline{e} \in M^n$. Suppose that: $$\model{M} \models \bigwedge^n_{i=1}\left[\phi_{i}(a,a) \land \lnot \phi_{i}(a, e_{i}) \land \psi_{i}(b,b) \land \lnot \psi_{i}(b, e_{i})\right]$$ Then by Lemma \ref{lem:AlternateCharacterisationOfFOI}, $e_{i} \nindiscernibleno a$ and $e_{i} \nindiscernibleno b$ for each $1 \leq i \leq n$. Since $a \mathrel{\textsf{r}}tal b$, Lemma \ref{lem:RelativenessPreservation} tells us that $\model{M} \models \theta(a,b,\overline{e}) \liff \theta(b, a, \overline{e})$. Hence $\model{M} \models \gamma(a, b)$, for any $\gamma \in \Gamma$. Next, suppose $\model{M} \models \gamma(a,b)$, for all $\gamma \in \Gamma$. I claim that the following is a near-correspondence from $\model{M}$ to $\model{M}$: $$\Pi = \{\langle a, b\rangle, \langle b, a\rangle\} \cup \{\langle x, x\rangle \mid x \nindiscernibleno a \text{ and }x\nindiscernibleno b\}$$ To show this, fix $n < \omega$, $\theta \in \lang{L}\nostroke_{n+2}$ and $\overline{e} \in M^n$ such that $e_i \nindiscernibleno a$ and $e_{i} \nindiscernibleno b$ for each $1 \leq i \leq n$. Since each $e_{i} \nindiscernibleno a$ and $e_{i} \nindiscernibleno b$, by Lemma \ref{lem:AlternateCharacterisationOfFOI} there are formulas $\phi_i, \psi_i \in \lang{L}\nostroke_2$ for each $1 \leq i \leq n$ such that ${M} \models \phi_{i}(a, a) \land \lnot\phi_{i}(a, e_{i})$ and $\model{M} \models \psi_{i}(b, b) \land \lnot \psi_{i}(b, e_{i})$. Conjoining these, we get: \begin{align*} \model{M} \models & \bigwedge_{i=1}^{n}\left[\phi_{i}(a,a) \land \lnot \phi_{i}(a, e_{i}) \land \psi_{i}(b,b) \land \lnot \psi_{i}(b, e_{i})\right] \end{align*} Since $\model{M} \models \gamma(a, b)$ for all $\gamma \in \Gamma$, we obtain that, for all $\theta \in \lang{L}\nostroke_{n+2}$: $$\model{M} \models \theta(a, b, \overline{e}) \liff \theta(b, a, \overline{e})$$ Generalising, $\Pi$ is a near-correspondence. 
By the Galois Connection of Theorem \ref{thm:GaloisConnection}, $(\Pi^{\textbf{c}})^{\textbf{e}}$ is a relativity on $\model{M}$; and so $a \mathrel{\textsf{r}}tal b$. (\ref{rprmno}). From Lemmas \ref{lem:FiniteMapRelativeHalf} and \ref{lem:CaptureIndiscernible}. (\ref{rprmyes}). Exactly as in Lemma \ref{lem:CaptureSymmetries}, case (\ref{spsmyes}). \end{proof} \end{lem}\noindent Lemmas \ref{lem:CaptureIndiscernible}--\ref{lem:RelativityCapturable} can be summarised as follows: \begin{thm}[Capturing the grades]\label{thm:Capturability} The following table exhaustively details the capturability of each grade of discrimination: \begin{table}[h] \setlength\tabcolsep{0.025\textwidth} \begin{tabular}{@{}p{0.2\textwidth}p{0.2\textwidth}p{0.2\textwidth}@{}} Grade & Capturable$^+$ & Capturable$^-$\\ \hline $=$ & $\checkmark$ & $\times$ \\ $\mathrel{\indiscernibleyes\relstroke}$ & $\checkmark$ & $\times$ \\ $\mathrel{\indiscernibleyes\monadstroke}$ & $\checkmark$ & $\times$ \\\\ $\indiscernibleno$ & $\checkmark$ & $\checkmark$ \\ $\mathrel{\indiscernibleno\relstroke}$ & $\checkmark$ & $\checkmark$ \\ $\mathrel{\indiscernibleno\monadstroke}$ & $\checkmark$ & $\checkmark$ \\\\ $\symmetric\totalstroke$& $\checkmark$ & $\times$ \\ $\symmetric\pairstroke$& \emph{\textbf{f}} & $\times$ \\ $\symmetric\barestroke$& \emph{\textbf{f}} & $\times$ \\\\ $\mathrel{\textsf{r}}tal$& $\checkmark$ & $\checkmark$ \\ $\relativeto\pairstroke$& \emph{\textbf{fq}} & \emph{\textbf{fq}} \\ $\relativeto\barestroke$& \emph{\textbf{fq}} & \emph{\textbf{fq}} \end{tabular} \end{table} \end{thm}\noindent The table of Theorem \ref{thm:Capturability} should be read with the following key: \begin{enumerate}[nolistsep] \item[$\checkmark$] universally capturable \item[$\times$] there is an $\lang{L}$-structure in which the grade is not capturable \item[\emph{\textbf{f}}\ ] universally capturable when we restrict attention to finite structures; but there are counterexamples elsewhere \item[\emph{\textbf{fq}}\ ] universally capturable when we restrict attention to structures with finite $\indiscernibleno$-quotients; but there are counterexamples elsewhere \end{enumerate} This demonstrates, once again, that grades of $\lang{L}\yesstroke$-discernibility are to grades of symmetry, as grades of $\lang{L}\nostroke$-discernibility are to grades of relativity. More interestingly, though, Theorem \ref{thm:Capturability} bears directly upon the philosophical search for reductive criteria of identity. As mentioned in \S\ref{s:Preliminaries}, much of the interest in grades of discrimination comes from their potential to provide us with a criterion of identity, possibly a reductive one. However, if a grade of discrimination cannot be captured by some set of formulas in the object language, this should bar it from use in any reductive criterion of identity. After all, if the grade must be invoked as a \emph{primitive} at the level of the object language, it is unclear why we should not simply allow ourselves to take {identity} itself as a primitive in the object language. The situation will be no better, in this regard, if the grade can only be captured$^+$ and not captured$^-$. Consequently, no grade of $\lang{L}\yesstroke$-indiscernibility or symmetry can provide a \emph{reductive} criterion of identity. The remaining candidates for reductive criteria of identity are therefore the grades of $\lang{L}\nostroke$-indiscernibility and relativity. 
However, in the special cases when they are capturable$^-$---which we require if we seek a reductive criterion of identity---two of the grades of relativity are simply co-extensive with two of the grades of $\lang{L}\nostroke$-indiscernibility (see Theorem \ref{thm:FiniteMap}). Hence the only plausible distinct candidates for a reductive criterion of identity are, in order of entailment: $\indiscernibleno$, $\mathrel{\textsf{r}}tal$, $\mathrel{\indiscernibleno\relstroke}$ and $\mathrel{\indiscernibleno\monadstroke}$. This does not show, though, that the remaining grades of discrimination are philosophically uninteresting. After all, we might simply be interested in providing an illuminating but \emph{non-reductive} answer to the general question: \emph{When are objects identical?} To repeat an example from \S\ref{s:Preliminaries}: if we have become convinced that nature abhors a (non-trivial) symmetry, then $\symmetric\barestroke$ could serve as a non-reductive, non-trivial criterion of identity, even though it is uncapturable$^+$. \section{Symmetry in all elementary extensions}\label{s:AllElementaryExtensions} In \S\ref{s:DefinabilityTheory}, I connected the grades of indiscernibility with the existence of a symmetry/relativity in \emph{some} elementary extensions. To close this paper, I wish to consider what happens when we require the existence of a symmetry or relativity in \emph{all} elementary extensions. In particular, I shall demonstrate a neat connection between $\indiscernibleno$ and symmetries in elementary extensions. To show this, I first require a general method for constructing such elementary extensions:\footnote{\citeauthor{Monk:ML} \cite[Theorem 29.16]{Monk:ML} described this explicitly; \citeauthor{Grzegorczyk:CC} \cite[p.\ 41]{Grzegorczyk:CC} earlier mentioned it in passing, implying it was mathematical folklore. The method was rediscovered by philosophers, e.g.\ \citeauthor{Ketland:II} \cite[p.\ 7]{Ketland:II} and \citeauthor{LadymanLinneboPettigrew:IDPL} \cite[Theorem 8.14]{LadymanLinneboPettigrew:IDPL}. However, all these authors restrict their attention to relational signatures.} \begin{lem}\label{lem:Inflate} Let $\model{M}$ be an $\lang{L}$-structure with $a \in M$, and let $D$ be a set such that $M\cap D = \emptyset$. Then there is an $\lang{L}$-structure $\model{N} \succ\nostroke \model{M}$ with $N = M \cup D$, such that $a \indiscernibleno d$ in $\model{N}$ for all $d \in D$. \begin{proof} Define $\sigma : N \longrightarrow M$ by: $\sigma(x) = x$ if $x \in M$, and $\sigma(d) = a$ if $d \in D$. Set: \begin{align*} R^\model{N} &= \{\overline{e} \in N^n \mid \sigma(\overline{e}) \in R^\model{M} \} & & \text{all }n\text{-place }\lang{L}\text{-predicates }R\\ f^\model{N}(\overline{e}) &= f^\model{M}(\sigma(\overline{e})) & & \text{all }n\text{-place }\lang{L}\text{-function-symbols }f\text{ and all }\overline{e} \in N^n \end{align*} I claim that, for each $\lang{L}$-term $\tau$, all $\overline{d} \in D^{m}$ and all $\overline{e} \in M^{n}$: \begin{align*} \tau^{\model{N}}(\overline{d}, \overline{e}) &= \tau^{\model{N}}(\overline{a}, \overline{e}) = \tau^{\model{M}}(\overline{a}, \overline{e}) \end{align*} (where $a_i = a$ for all $1 \leq i \leq m$). This is proved by induction on complexity. The case where $\tau$ is an $\lang{L}$-function symbol is given. 
Now suppose the claim holds for $\tau_{1}, \ldots, \tau_{k}$ and consider $\tau(\overline{x}, \overline{y}) = f(\tau_{1}(\overline{x}, \overline{y}), \ldots, \tau_{k}(\overline{x}, \overline{y}))$. Then: \begin{align*} \tau^{\model{N}}(\overline{d}, \overline{e}) &= f^{\model{N}}(\tau^{\model{N}}_{1}(\overline{d}, \overline{e}), \ldots, \tau^{\model{N}}_{k}(\overline{d}, \overline{e})) \\ & = f^{\model{N}}(\tau_{1}^{\model{N}}(\overline{a}, \overline{e}), \ldots, \tau^{\model{N}}_{k}(\overline{a}, \overline{e})) = \tau^{\model{N}}(\overline{a}, \overline{e})\\ &= f^{\model{N}}(\tau_{1}^{\model{M}}(\overline{a}, \overline{e}), \ldots, \tau^{\model{M}}_{k}(\overline{a}, \overline{e}))\\ & = f^{\model{M}}(\tau_{1}^{\model{M}}(\overline{a}, \overline{e}), \ldots, \tau^{\model{M}}_{k}(\overline{a}, \overline{e})) = \tau^{\model{M}}(\overline{a}, \overline{e}) \end{align*} This proves the claim. Hence, for all atomic $\phi \in \lang{L}\nostroke_{m+n}$, all $\overline{d} \in D^{m}$ and all $\overline{e} \in M^{n}$: \begin{align*} \model{N} \models \phi(\overline{d}, \overline{e}) &\text{ iff }\model{N} \models \phi(\overline{a}, \overline{e})\text{ iff }\model{M} \models \phi(\overline{a}, \overline{e}) \end{align*} So for all $d \in D$ we have $a \indiscernibleno d$ in $\model{N}$ by Lemma \ref{lem:AlternateCharacterisationOfFOI}; and moreover $\model{N} \succ\nostroke \model{M}$. \end{proof} \end{lem}\noindent Thus armed, I can connect $\indiscernibleno$ with symmetry in elementary extensions: \begin{lem}\label{thm:ElementExtensionMain} For any $\lang{L}$-structure $\model{M}$: if $a \symmetric\barestroke b$ in every $\model{N} \succ\nostroke \model{M}$, then $a \indiscernibleno b$ in $\model{M}$. \begin{proof} Suppose $a \symmetric\barestroke b$ in every $\model{N} \succ\nostroke \model{M}$. Let $D$ be such that $M \cap D = \emptyset$ and $|D| > |\quotient{b}_\model{M}|$. Construct $\model{N}$ as in Lemma \ref{lem:Inflate}, so that $a \indiscernibleno d$ in $\model{N}$ for all $d \in D$. Since $\model{N} \succ\nostroke \model{M}$, by assumption there is a symmetry $\pi$ on $\model{N}$ such that $\pi(a) = b$. So, for every $\phi \in \lang{L}\nostroke_{2}$, and all $d \in D$, by Lemma \ref{lem:IsomorphismPreservation}: \begin{align*} \model{N} \models \phi(b, b)\text{ iff } \model{N} \models \phi(a, a) \text{ iff }\model{N} \models \phi(a, d) \text{ iff }\model{N} \models \phi(b, \pi(d)) \end{align*} Hence $\pi(d) \in \quotient{b}_\model{N}$ for every $d \in D$, by Lemma \ref{lem:AlternateCharacterisationOfFOI}. Since $\pi$ is a bijection, $|D| = |\{\pi(d) \mid d \in D\}| \leq |\quotient{b}_\model{N}|$. If $a \nindiscernibleno b$ in $\model{N}$, then $|\quotient{b}_\model{N}| = |\quotient{b}_\model{M}|$, contradicting our choice of $D$; so $a \indiscernibleno b$ in $\model{N}$. Since $\model{M} \prec\nostroke \model{N}$ and $\indiscernibleno$ is universally capturable$^-$ by Lemma \ref{lem:CaptureIndiscernible}, $a \indiscernibleno b$ in $\model{M}$. \end{proof} \end{lem}\noindent Theorem \ref{thm:BigMap} (left diagram) tells us that there is no converse to Lemma \ref{thm:ElementExtensionMain} in the general case. However, we do obtain a converse in restricted circumstances: \begin{lem}When $\lang{L}$ is relational, for any $\lang{L}$-structure $\model{M}$: $a \indiscernibleno b$ in $\model{M}$ iff $a \symmetric\barestroke b$ in every $\model{N} \succ\nostroke \model{M}$. 
\begin{proof} Immediate from Theorem \ref{thm:BigMap} and Lemmas \ref{lem:CaptureIndiscernible} and \ref{thm:ElementExtensionMain}. \end{proof} \end{lem}\noindent Moreover, we can strengthen Lemma \ref{thm:ElementExtensionMain} in the case of $\symmetric\totalstroke$. \begin{lem}\label{thm:ElementExtensionTotal} Let $\model{M}$ be an $\lang{L}$-structure with $a \in M$ and $e \notin M$. Let $\model{N} \succ\nostroke \model{M}$ be constructed as in Lemma \ref{lem:Inflate}, so that $N = M \cup \{e\}$ and $a \indiscernibleno e$ in $\model{N}$. If $a \symmetric\totalstroke b$ in $\model{N}$, then $a \indiscernibleno b$ in $\model{M}$. \begin{proof} Suppose $a \symmetric\totalstroke b$ in $\model{N}$, i.e.\ $\pi(a) = b$, $\pi(b) = a$, and $\pi(x) = x$ for all $x \notin \{a, b\}$ is a symmetry on $\model{N}$. In particular, $\pi(e) = e$. Hence, invoking Lemma \ref{lem:IsomorphismPreservation}, for all $\phi \in \lang{L}\nostroke_{2}$: $\model{M}\models \phi(a, a)$ iff $ \model{N} \models \phi(a, a)$ iff $\model{N} \models \phi(e, a)$ iff $\model{N} \models \phi(e, b)$ iff $\model{N} \models \phi(a, b)$ iff $\model{M} \models \phi(a, b)$. Hence $a \indiscernibleno b$ in $\model{M}$ by Lemma \ref{lem:AlternateCharacterisationOfFOI}. \end{proof} \end{lem}\noindent However, this strengthening of Lemma \ref{thm:ElementExtensionMain} is limited to the case of $\symmetric\totalstroke$. To see this, let $\model{K}$ comprise two disjoint copies of the complete countable graph, i.e.: \begin{align*} K &= \mathbb{N}\\ R^{\model{K}} &= \left\{ \langle m, n\rangle \in \mathbb{N}^2 \mid m\neq n \text{ and }m + n \text{ is even}\right\} \end{align*} Whilst $1 \nindiscernibleno 2$ in $\model{K}$, we can use Lemma \ref{lem:Inflate} to add a single new element, $e$, such that $1 \indiscernibleno e$, without disrupting the fact that $1 \symmetric\pairstroke 2$. Moreover, nothing like Lemma \ref{thm:ElementExtensionMain} holds for relativities: Lemma \ref{lem:RelativityCapturable} tells us that $a \mathrel{\textsf{r}}tal b$ in $\model{M}$ iff $a \mathrel{\textsf{r}}tal b$ in all $\model{N} \succ\nostroke \model{M}$. The results of \S\ref{s:DefinabilityTheory} exhaustively detailed the connections between grades of discernibility and the existence of a symmetry/relativity in \emph{some} elementary extension. The results of this section now exhaustively detail the connections between grades of discernibility and the existence of a symmetry/relativity in \emph{all} elementary extensions. We thus have complete answers to several natural questions concerning the connection between grades of discrimination and elementary extensions. \section{Concluding remarks} Several recent technical-cum-philosophical papers have explored some of the grades of discrimination. This paper has pressed forward that technical investigation in many ways. To close, I shall emphasise two. First, I have introduced \emph{grades of relativity} to the philosophical literature---along with the notion of a near-correspondence, a relativeness correspondence, and a partial relativeness correspondence---and shown that these are the natural $\lang{L}\nostroke$-analogues of the grades of symmetry. Second, I have offered complete answers to the natural questions that arise concerning all twelve grades of discrimination. 
Indeed, the technical investigation of the grades of discrimination now seems to be complete.\footnote{Huge thanks to \O{}ystein Linnebo and Sean Walsh, whose questions provided much of the original motivation for this paper, and whose subsequent comments were very helpful. Further thanks to Denis Bonnay, Adam Caulton, Fredrik Engstr\"{o}m, Jeffrey Ketland, Richard Pettigrew, a referee for \emph{RSL}, and a referee for this journal.} \end{document}
\begin{document} \title{The sixth moment of automorphic $L$-functions} \date{\today} \subjclass[2010]{11M41, 11F11, 11F67} \keywords{moment of $L$-functions, automorphic $L$-functions, $\Gamma_1(q)$} \author[V. Chandee]{Vorrapan Chandee} \address{Department of Mathematics \\ Burapha University \\ 169 Long-Hard Bangsaen rd, Saensuk, Mueng, Chonburi, Thailand, 20131} \email{[email protected] } \author[X. Li]{Xiannan Li} \address{ \parbox{1\linewidth}{ University of Oxford, Andrew Wiles Building, Radcliffe Observatory Quarter, Woodstock Road, Oxford, UK, OX2 6GG \\ Current address: Department of Mathematics, Kansas State University, 138 Cardwell Hall, Manhattan, Kansas, USA, 66506} } \email{[email protected] } \allowdisplaybreaks \numberwithin{equation}{section} \selectlanguage{english} \begin{abstract} In this paper, we consider the $L$-functions $L(s, f)$ where $f$ is an eigenform for the congruence subgroup $\Gamma_1(q)$. We prove an asymptotic formula for the sixth moment of this family of automorphic $L$-functions. \end{abstract} \maketitle \section{Introduction} Moments of $L$-functions are of great interest to analytic number theorists. For instance, for $\zeta(s)$ denoting the Riemann zeta function and $$I_k(T) := \int_0^T |\zeta(\tfrac{1}{2} + it)|^{2k}dt,$$ asymptotic formulae were proven for $k = 1$ by Hardy and Littlewood and for $k=2$ by Ingham (see \cite{Ti}, Chapter VII). This work is closely related to zero density results and the distribution of primes in short intervals. More recently, moments of other families of $L$-functions have been studied for their numerous applications, including non-vanishing and subconvexity results. In many applications, it is important to develop technology which can understand such moments for larger $k$. The behavior of moments for larger $k$ remains mysterious. However, recently there has been great progress in our understanding. First, good heuristics and conjectures on the behavior of $I_k(T)$ appeared in the literature. To be precise, a folklore conjecture states that $$I_k(T) \sim c_k T(\log T)^{k^2}$$ for constants $c_k$ depending on $k$, but the values of $c_k$ were unknown for general $k$ until the work of Keating and Snaith \cite{KS}, which related these moments to circular unitary ensembles and provided precise conjectures for $c_k$. The choice of group is consistent with the Katz-Sarnak philosophy \cite{KaSa}, which indicates that the symmetry group associated to this family should be unitary. Based on heuristics for shifted divisor sums, Conrey and Ghosh derived a conjecture in the case $k=3$ \cite{CGh} and Conrey and Gonek derived a conjecture in the case $k=4$ \cite{CGo}. In particular, the conjecture for the sixth moment is $$ I_3(T) \sim 42 a_3 \frac{T(\log T)^9}{9!}$$ for some arithmetic factor $a_3.$ Further conjectures, including lower order terms and other symmetry groups, are available from the work of Conrey, Farmer, Keating, Rubinstein and Snaith \cite{CFKRS} as well as from the work of Diaconu, Goldfeld and Hoffstein \cite{DGH}. In support of these conjectures, lower bounds of the right order of magnitude are available due to Rudnick and Soundararajan \cite{Lower}, while good upper bounds of the right order of magnitude are available conditionally on RH, due to Soundararajan \cite{Sound} and later improved by Harper \cite{Har}. 
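For orientation, the two classical cases mentioned above read $$\int_0^T |\zeta(\tfrac{1}{2} + it)|^{2}dt \sim T\log T \qquad \text{and} \qquad \int_0^T |\zeta(\tfrac{1}{2} + it)|^{4}dt \sim \frac{T(\log T)^4}{2\pi^2},$$ which agree with the predicted shape $c_k T(\log T)^{k^2}$ with $c_1 = 1$ and $c_2 = \frac{1}{2\pi^2}$ (see \cite{Ti}).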
Despite this, verifications of the moment conjectures for high moments remain elusive. Typically, even going slightly beyond the fourth moment to obtain a twisted fourth moment is quite difficult, and there are few families for which this is known. Quite recently, Conrey, Iwaniec and Soundararajan \cite{CIS} derived an asymptotic formula for the sixth moment of Dirichlet $L$-functions with a power saving error term. Instead of fixing the modulus $q$ and only averaging over primitive characters $\chi \bmod q$, they also average over the modulus $q \leq Q$, which gives them a larger family of size $Q^2$. Further, they include a short average on the critical line. In particular, they showed that \est{\sum_{q \leq Q} \sumstar_{\chi \mod q} \int_{-\infty}^{\infty} \left| L\left( \tfrac 12 + it, \chi \right)\right|^6\left| \Gamma\left( \tfrac 14 + \tfrac{it}{2} \right)\right|^6 \> dt \sim 42 b_3 \frac{Q^2(\log Q)^9}{9!} \int_{-\infty}^{\infty} \left| \Gamma\left( \tfrac 14 + \tfrac{it}{2} \right)\right|^6 \> dt } for some constant $b_3.$ This is consistent with the analogous conjecture for the Riemann zeta function above. The authors of this paper subsequently derived an asymptotic formula for the eighth moment of this family of $L$-functions, conditionally on GRH \cite{CL}, which is \est{\sum_{q \leq Q} \sumstar_{\chi \mod q} \int_{-\infty}^{\infty} \left| L\left( \tfrac 12 + it, \chi \right)\right|^8\left| \Gamma\left( \tfrac 14 + \tfrac{it}{2} \right)\right|^8 \> dt \sim 24024 b_4 \frac{Q^2(\log Q)^{16}}{16!} \int_{-\infty}^{\infty} \left| \Gamma\left( \tfrac 14 + \tfrac{it}{2} \right)\right|^8 \> dt } for some constant $b_4.$ In this paper, we study a family of $L$-functions attached to automorphic forms on $GL(2)$. 
To be more precise, let $S_k(\Gamma_0(q), \chi)$ be the space of cusp forms of weight $k \ge 2$ for the group $\Gamma_0(q)$ and the nebentypus character $\chi \mod q$, where $$\Gamma_0(q) = \left\{ \left( \left.{\begin{array}{cc} a & b \\ c & d \\ \end{array} } \right) \ \right| \ ad- bc = 1 , \ \ c \equiv 0 \mod q \right\}.$$ Also, let $S_k(\Gamma_1(q))$ be the space of holomorphic cusp forms for the group $$\Gamma_1(q) = \left\{ \left( \left.{\begin{array}{cc} a & b \\ c & d \\ \end{array} } \right) \ \right| \ ad- bc = 1 , \ \ c \equiv 0 \mod q, \ \ \ a \equiv d \equiv 1 \mod q \right\}.$$ Note that $S_k(\Gamma_1(q))$ is a Hilbert space with the Petersson inner product $$ \langle f, g\rangle = \int_{\Gamma_1(q) \backslash \mathbb H} f(z)\bar{g}(z) y^{k-2} \> dx \> dy,$$ and $$ S_k(\Gamma_1(q)) = \bigoplus_{\chi \mod q} S_k(\Gamma_0(q), \chi).$$ Let $\mathcal H_\chi \subset S_k(\Gamma_0(q), \chi)$ be an orthogonal basis of $ S_k(\Gamma_0(q), \chi)$ consisting of Hecke cusp forms, normalized so that the first Fourier coefficient is $1$. For each $f \in \mathcal H_\chi $, we let $L(f, s)$ be the $L$-function associated to $f$, defined for $\textup{Re }(s) > 1$ as \es{\label{def:Lfnc} L(f,s) = \sum_{n \geq 1} \frac {\lambda_f(n)}{n^{s}} = \prod_p \pr{1 - \frac{\lambda_f(p)}{p^s} + \frac{\chi(p)}{p^{2s}}}^{-1},} where $\{\lambda_f(n)\}$ are the Hecke eigenvalues of $f$. With our normalization, $\lambda_f(1) = 1 $. In general, the Hecke eigenvalues satisfy the Hecke relation \es{\label{eqn:Heckerelation} \lambda_f(m) \lambda_f(n) = \sum_{d|(m,n)} \chi(d) \lambda_f\pr{\frac{mn}{d^2}},} for all $m,n \geq 1$. We define the completed $L$-function as \es{\label{def:completedLfnc}\Lambda\pr{f, \tfrac 12 + s} = \pr{\frac q{4\pi^2}}^{\frac s2} \Gamma \pr{s + \frac k2} L\pr{f, \tfrac 12 + s},} which satisfies the functional equation \est{\Lambda\pr{f, \tfrac 12 + s} = i^k \overline{\eta}_f\Lambda\pr{\bar{f}, \tfrac 12 - s},} where $|\eta_f| = 1$ when $f$ is a newform. Suppose for each $f \in \mathcal H_\chi$, we have an associated number $\alpha_f$. Then we define the harmonic average of $\alpha_f$ over $\mathcal H_\chi$ to be \est{\sumh_{f \in \mathcal H_\chi} \alpha_f = \frac{\Gamma(k-1)}{(4\pi)^{k-1}}\sum_{f \in \mathcal H_\chi} \frac{\alpha_f}{\|f\|^2}.} We note that when the first coefficient $\lambda_f(1) = 1$, $\|f\|^2$ is essentially the value of a certain $L$-function at $1$, and so on average, $\|f\|^2$ is constant. As in other works, it is possible to remove the weighting by $\|f\|^2$ through what is now a standard argument. We shall be interested in moments of the form \est{\frac{2}{\phi(q)}\sum_{\substack{\chi \mod q \\ \chi(-1) = (-1)^k}} \sumh_{f \in \mathcal H_\chi} |L(f, 1/2)|^{2k}.} We note that the size of the family is around $q^2$. 
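To illustrate the Hecke relation (\ref{eqn:Heckerelation}), note for instance that taking $m = n = p$ for a prime $p$ gives $\lambda_f(p)^2 = \lambda_f(p^2) + \chi(p)$, and more generally $\lambda_f(p)\lambda_f(p^j) = \lambda_f(p^{j+1}) + \chi(p)\lambda_f(p^{j-1})$ for $j \geq 1$; relations of this shape are what underlie the divisor-type sums appearing in the computations below.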
For prime level, $\eta_f$ can be expressed in terms of Gauss sums, and in particular we expect $\eta_f$ to equidistribute on the circle as $f$ varies over an orthogonal basis of $S_k(\Gamma_1(q))$. Thus, we expect our family of $L$-functions to be unitary. In this paper, we prove an asymptotic formula for the sixth moment---this will be the first time that the sixth moment of a family of $L$-functions over $GL(2)$ has been understood. Following \cite{CFKRS}, we have the following conjecture for the sixth moment of our family. We refer the reader to Appendix \ref{sec:Eulerproduct} for a brief derivation of the arithmetic factor in the conjecture. \begin{conj} \label{conj:CFKRSnoshift}Let $q$ be a prime number. As $q \rightarrow \infty$, we have \est{\frac{2}{\phi(q)}\sum_{\substack{\chi \mod q \\ \chi(-1) = (-1)^k}} \sumh_{f \in \mathcal H_\chi} |L(f, 1/2)|^6 \sim 42 \mathscr C_3 \left(1- \frac 1q\right)^{4} \left(1 + \frac 4q + \frac {1}{q^2} \right) C_q^{-1} \frac{(\log q)^9}{9!},} where \es{\label{def:c3} \mathscr C_3 := \prod_{p} C_p, \ \ \ \textrm{and} \ \ \ C_p:= \left( 1 + \frac 4p + \frac 7{p^2} - \frac 2{p^3} + \frac 9{p^4} - \frac{16}{p^5} + \frac{1}{p^6} - \frac{4}{p^7}\right)\left( 1 + \frac 1{p}\right)^{-4}.} \end{conj} Iwaniec and Xiaoqing Li proved a large sieve result for this family in \cite{IL}, and Djankovic used their result to prove \cite{Dj} that, for an odd integer $k \geq 3$ and prime $q$, \begin{equation*} \frac{2}{\phi(q)}\sum_{\substack{\chi \mod q \\ \chi(-1) = -1}} \sumh_{f \in \mathcal H_\chi} |L(f, 1/2)|^6 \ll q^{\varepsilon} \end{equation*} as $q \rightarrow \infty.$ In this paper, we shall prove the following. \begin{thm} \label{cor:main} Let $q$ be a prime and $k\geq 5$ be odd. Then, as $q \rightarrow \infty$, we have \est{\frac{2}{\phi(q)}\sum_{\substack{\chi \mod q \\ \chi(-1) = -1}} & \sumh_{f \in \mathcal H_\chi} \int_{-\infty}^{\infty} \left|\Lambda\left(f, \tfrac 12 + it\right)\right|^6 \> dt \\ &\sim 42 \mathscr C_3 \left(1- \frac 1q\right)^{4} \left(1 + \frac 4q + \frac {1}{q^2} \right) C_q^{-1} \frac{(\log q)^9}{9!} \int_{-\infty}^{\infty} \left|\Gamma\left( \tfrac k2 + it \right) \right|^6 \> dt,} where $\mathscr C_3$ and $C_p$ are defined in (\ref{def:c3}). \end{thm} In fact, we are able to prove this with an error term of $q^{-1/4}$, as opposed to the $q^{-1/10}$ error term in the work of Conrey, Iwaniec and Soundararajan \cite{CIS}. The reason behind this superior error term is explained in the outline in Section \ref{subsec:outline}. 
In future work, we hope to extend our attention to the eighth moment. The assumption that $k$ is odd implies that all $f\in \mathcal H_\chi$ are newforms. This is for convenience only and is not difficult to remove. Indeed, when $k$ is even, all $f \in \mathcal H_\chi$ are newforms except possibly when $\chi$ is the principal character and $f$ is induced by a cusp form of full level. We avoid this case for the sake of brevity. Similarly, the assumption that $k\geq 5$ simplifies parts of the calculation; it is possible to prove Theorem \ref{cor:main} for smaller $k$. Since the $\Gamma$ function decays rapidly on vertical lines, the average over $t$ is fairly short. It is included for the same reason as in the works \cite{CIS} and \cite{CL} in that it allows us to avoid certain unbalanced sums in the computation of the moment. Although this appears to be a small technical change in the main statement, evaluating such moments without the short integration over $t$ is a significant challenge. Our Theorem will follow from the more general Theorem \ref{thm:mainmoment} for shifted moments in Section \ref{sec:shift}. \subsection{Outline of paper} \label{subsec:outline} To help orient the reader, we provide a sketch of the proof, and introduce the various sections of the paper. After applying the approximate functional equation developed in Section \ref{sec:prelem}, the main object to be understood is roughly of the form $$\frac{2}{\phi(q)}\sum_{\substack{\chi \mod q \\ \chi(-1) = (-1)^k}} {\sum_{f \in \mathcal H_\chi}}^h \sum_{m, n \asymp q^{3/2}} \frac{\sigma_3(m)\sigma_3(n)\lambda_f(n)\overline{\lambda_f}(m)}{\sqrt{mn}}. $$In fact, since the coefficients $\lambda_f(n)$ are not completely multiplicative, the expression is significantly more complicated for the purpose of extracting main terms. Applying Petersson's formula for the average over $f\in \mathcal H_\chi$ leads to diagonal terms $m=n$, which are evaluated fairly easily in Section \ref{sec:diag}, as well as off-diagonal terms which involve sums of the form $$ \sum_{m, n \asymp q^{3/2}} \frac{\sigma_3(m)\sigma_3(n)}{\sqrt{mn}} \frac{2}{\phi(q)}\sum_{\substack{\chi \mod q \\ \chi(-1) = (-1)^k}} \sum_c S_{\chi}(m, n; cq) J_{k-1}\bfrac{4\pi \sqrt{mn}}{cq}, $$ where $S_{\chi}(m, n; cq)$ is the Kloosterman sum defined in (\ref{def:Kloosterman}), and $J_{k-1}(x)$ is the J-Bessel function of order $k-1$. Let us focus on the transition region for the Bessel function where $c \asymp q^{1/2}$, so that the conductor is a priori of size $qc \asymp q^{3/2}$. It is here that the additional average over $\chi \bmod q$ comes into play. To be more precise, to understand the exponential sum $$\frac{2}{\phi(q)}\sum_{\substack{\chi \mod q \\ \chi(-1) = (-1)^k}} S_{\chi}(m, n; cq),$$ it suffices to understand $$\sumstar_{\substack{a \bmod cq\\a \equiv 1 \bmod q}} e\bfrac{am + \bar an}{cq}, $$which, assuming that $(c, q) = 1$, is $$e \bfrac{m+n}{cq}\sumstar_{a \bmod c} e\bfrac{\bar q(a-1)m + \bar q (\bar a - 1)n}{c}, $$using the Chinese remainder theorem and reciprocity. 
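To indicate briefly where this identity comes from (a sketch of a standard computation): since $(c, q) = 1$, each admissible residue $a \bmod cq$ is determined by $a \bmod c$ together with the congruence $a \equiv 1 \bmod q$. Writing $a = 1 + qt$, we have $t \equiv \bar{q}(a-1) \bmod c$, so that $$e\bfrac{am}{cq} = e\bfrac{m}{cq} e\bfrac{\bar q (a-1)m}{c}.$$ Since $\bar a \equiv 1 \bmod q$ as well, the same computation applies to $e\bfrac{\bar a n}{cq}$, and multiplying the two factors and summing over $a \bmod c$ gives the expression above.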
The factor $e \bfrac{m+n}{cq}$ has small derivatives and may be treated as a smooth function, while the conductor of the rest of the exponential sum has decreased to $c \asymp q^{1/2}$. The details of these calculations are in Section \ref{sec:offdiagsetup}. This phenomenon of the drop in conductor appears in other examples. In the case of the sixth moment of Dirichlet $L$-functions in \cite{CIS}, it occurs when replacing $q$ with the complementary divisor $\frac{m-n}{q} \asymp q^{1/2}$. It is quite interesting that the same drop in conductor occurs by seemingly very different mechanisms. However, note that when the complementary divisor is small, the ordered pair $(m, n)$ is forced to be in a narrow region. That this does not occur in our case is one of the reasons behind the superior error term in our result; the assumption that $q$ is prime also plays a role. After the conductor drop, we apply Voronoi summation to the sum over $m$ and $n$ in \S \ref{sec:appvoronoi}. We need a version of Voronoi summation including shifts. The proof of this is essentially the same as the proof of the standard Voronoi summation formula for $\sigma_3(n)$ by Ivic \cite{Ivic}. We state the result required in Appendix \ref{sec:voronoi}. After applying Voronoi, it is easy to guess which terms should contribute to the main terms and which terms should be error terms. The main terms are described in Proposition \ref{prop:mainTM} and the error terms are bounded in Proposition \ref{prop:Terror}. Essentially, we expect the main terms to be a sum of products of $9$ factors of $\zeta$, the same as the diagonal contribution but with permutations in the shifts, as in Theorem \ref{thm:mainmoment}. This is by no means immediately visible from the expression in Proposition \ref{prop:mainTM}. Indeed, it takes some effort to see that we get the right number of $\zeta$ factors. Along the way, we use, among other things, a calculation of Iwaniec and Xiaoqing Li in \cite{IL}. This is done in Section \ref{sec:provepropTM}. In order to finish the verifications, we need to check that the local factors of two expressions agree. The details here are standard but intricate, and are provided in Appendix \ref{sec:Eulerverif}. Finally, the error terms from Voronoi summation are bounded in Section \ref{sec:properror}. Here, one needs to show that the dual sums from Voronoi summation are essentially quite short, which is related to the reduction in conductor from $cq$ to $c$ earlier. \section{Notation and the shifted sixth moment} \label{sec:shift} We begin with some notation. Let $\boldsymbol \alpha := (\alpha_1, \alpha_2, \alpha_3)$ and $\boldsymbol \beta := (\beta_1, \beta_2, \beta_3)$. 
For a complex number $s$, we shall write $\boldsymbol \alpha + s := (\alpha_1 + s, \alpha_2 + s, \alpha_3 + s).$ We define \es{\label{def:deltaalphabeta} \delta(\boldsymbol \alpha, \boldsymbol \beta) := \frac 12 \sum_{j = 1}^3 (\alpha_j - \beta_j),} \es{\label{def:GprodGamma} G(s; \boldsymbol \alpha, \boldsymbol \beta) := \prod_{j = 1}^3 \Gamma \pr{s + \tfrac {k-1}2 + \alpha_j}\Gamma \pr{s + \tfrac {k-1}2 - \beta_j}, } and \es{\label{def:completedSprodLfnc} \Lambda(f, s; \boldsymbol \alpha, \boldsymbol \beta) := \prod_{j = 1}^3 \Lambda\pr{f, s + \alpha_j}\Lambda \pr{\bar{f}, s - \beta_j}.} Note that we have \es{\label{def:prodCompletedLfnc} \Lambda(f; \boldsymbol \alpha, \boldsymbol \beta) = \Lambda\pr{f, \tfrac 12; \boldsymbol \alpha, \boldsymbol \beta} = \pr{\frac q{4\pi^2}}^{\delta(\boldsymbol \alpha, \boldsymbol \beta)} G\pr{\tfrac 12; \boldsymbol \alpha, \boldsymbol \beta} \prod_{j = 1}^3 L\pr{f, \tfrac 12 + \alpha_j}L\pr{\bar{f}, \tfrac 12 - \beta_j}.} We define the shifted $k$-divisor function by \es{\label{def:sigma_k} \sigma_k(n; \alpha_1,\ldots, \alpha_k) = \sum_{n_1n_2\cdots n_k = n} n_1^{-\alpha_1}n_2^{-\alpha_2}\cdots n_k^{-\alpha_k}. } Let \es{ \label{eqn:defB}\mathscr B(a, b; \boldsymbol \alpha) := \frac{\mu(a)\sigma_3(b; \alpha_1 + \alpha_2, \alpha_2 + \alpha_3, \alpha_3 + \alpha_1)}{a^{\alpha_1 + \alpha_2 + \alpha_3}}.} Next we need the following lemmas, which help us generate the conjecture for the sixth moment, namely \es{ \label{momentS1q}\frac{2}{\phi(q)}\sum_{\substack{\chi \mod q \\ \chi(-1) = (-1)^k}} \sumh_{f \in \mathcal H_\chi} \Lambda\pr{f; \boldsymbol \alpha , \boldsymbol \beta }. } \begin{lemma} \label{lem:multofsigma_k} We have \est{\sigma_2(n_1n_2; \alpha_1, \alpha_2) = \sum_{d| (n_1, n_2)} \mu(d) d^{-\alpha_1 - \alpha_2} \sigma_2 \pr{\frac{n_1}{d}; \alpha_1, \alpha_2}\sigma_2\pr{\frac{n_2}{d}; \alpha_1, \alpha_2}.} \end{lemma} \begin{proof} Since both sides are multiplicative functions, it is enough to prove the Lemma when $n_1n_2$ is a prime power. 
We may assume that $n_1 = p^a$ and $n_2 = p^b$ with $1 \leq a \leq b$, since the identity is trivial when $n_1$ or $n_2$ equals $1$. Then \es{ \label{eqn:idprimesigma}\sigma_2(p^{a}p^b; \alpha_1, \alpha_2) &= \sum_{0 \leq k \leq a} \mu(p^k) p^{-k(\alpha_1 +\alpha_2)} \sigma_2 \pr{p^{a-k}; \alpha_1, \alpha_2}\sigma_2\pr{p^{b-k}; \alpha_1, \alpha_2} \\ &= \sigma_2 \pr{p^{a}; \alpha_1, \alpha_2}\sigma_2\pr{p^{b}; \alpha_1, \alpha_2} - p^{-(\alpha_1 +\alpha_2)} \sigma_2 \pr{p^{a-1}; \alpha_1, \alpha_2}\sigma_2\pr{p^{b-1}; \alpha_1, \alpha_2}.} On the other hand, \est{\sigma_2 (p^{n}; \alpha_1, \alpha_2) = \sum_{\ell = 0}^n p^{-\ell\alpha_1} p^{-(n-\ell)\alpha_2} = p^{-n\alpha_2} \frac{1 - \frac{1}{p^{(n+1)(\alpha_1 - \alpha_2)}}}{1 - \frac{1}{p^{\alpha_1 - \alpha_2}}}, } and the lemma follows by substituting the above formula into (\ref{eqn:idprimesigma}). \end{proof} We write the product of $L$-functions in terms of Dirichlet series in the following lemma. \begin{lem} \label{eqn:prod3Lfnc} Let $f \in \mathcal H_\chi.$ For $\textup{Re }(s + \alpha_i) > 1$, we have \est{ L\pr{f, s + \alpha_1}&L\pr{f, s + \alpha_2}L\pr{f, s+ \alpha_3} = \sumtwo_{a, b \geq 1} \frac{\chi(ab)\mathscr B(a, b; \boldsymbol \alpha)}{(ab)^{ 2s}} \sum_{n \geq 1} \frac{\lambda_f(an) \sigma_3(n;\boldsymbol \alpha)}{(an)^{ s}}.} \end{lem} \begin{proof} From the Hecke relation (\ref{eqn:Heckerelation}) and Lemma \ref{lem:multofsigma_k}, we have \es{ \label{prod3L} &L\pr{f, s + \alpha_1}L\pr{f, s + \alpha_2}L\pr{f, s +\alpha_3} \\ &= \sum_{n_1 \geq 1} \frac{\lambda_f(n_1)}{n_1^{ s + \alpha_1}} \sum_{d \geq 1} \frac{\chi(d)}{d^{2s + \alpha_2 + \alpha_3}} \sum_{j\geq 1} \frac{\lambda_f(j) \sigma_2(j; \alpha_2, \alpha_3)}{j^{ s}} \\ &= \sum_{d \geq 1} \frac{\chi(d)}{d^{ 2s + \alpha_2 + \alpha_3}} \sumtwo_{n_1, j \geq 1} \frac{\sigma_2(j; \alpha_2, \alpha_3)}{n_1^{ s + \alpha_1} j^{ s}} \sum_{e|(n_1,j)} \chi(e) \lambda_f \pr{\frac{jn_1}{e^2}} \\ &= \sum_{a \geq 1} \frac{\mu(a)\chi(a)}{a^{2s + \alpha_1 + \alpha_2 + \alpha_3}} \sumtwo_{d, e \geq 1} \frac{\chi(de) \sigma_2(e; \alpha_2, \alpha_3)}{(de)^{2s}d^{\alpha_2+ \alpha_3}e^{\alpha_1}} \sumtwo_{j, n_1 \geq 1} \frac{\lambda_f(n_1ja) \sigma_2(j; \alpha_2, \alpha_3)}{(ajn_1)^{ s}n_1^{\alpha_1}} \\ &= \sum_{a \geq 1} \frac{\mu(a)\chi(a)}{a^{2s + \alpha_1 + \alpha_2 + \alpha_3}} \sum_{b \geq 1} \frac{\chi(b) \sigma_3(b; \alpha_1+ \alpha_2,\alpha_2+ \alpha_3, \alpha_3 + \alpha_1)}{b^{2s}} \sum_{n \geq 1} \frac{\lambda_f(an) \sigma_3(n;\alpha_1, \alpha_2, \alpha_3)}{(an)^{ s}} .} This completes the proof of the lemma. \end{proof} \begin{lem} \label{lem:orthogonality} The orthogonality relation for Dirichlet characters is \es{ \label{eqn:orthodirichlet} \frac{2}{\phi(q)}\sum_{\substack{\chi \mod q \\ \chi(-1) =(-1)^k}} \chi(m) \overline{\chi} (n) = \left\{\begin{array}{ll} 1 & {\rm if} \ m \equiv n \mod q, \ \ (mn, q) = 1\\ (-1)^k & {\rm if} \ m \equiv - n \mod q, \ \ (mn, q) = 1 \\ 0 & {\rm otherwise}. \end{array} \right.} Petersson's formula gives \es{ \label{eqn:Petersson}\sumh_{f \in \mathcal H_\chi } \overline{\lambda}_f(m) \lambda_f(n) = \delta_{m = n} + \sigma_\chi(m, n),} where \est{\sigma_\chi(m, n) &= 2\pi i^{-k} \sum_{c \equiv 0 \mod q} c^{-1}S_\chi(m, n; c) J_{k - 1}\pr{\frac{4\pi}{c} \sqrt{mn}} \\ &= 2\pi i^{-k} \sum_{c = 1}^{\infty} (cq)^{-1}S_\chi(m, n; cq) J_{k - 1}\pr{\frac{4\pi}{cq} \sqrt{mn}}, } and $S_\chi$ is the Kloosterman sum defined by \es{ \label{def:Kloosterman}S_{\chi} (m, n; cq) = \sum_{a\bar{a} \equiv 1 \mod {cq}} \chi(a) e \pr{\frac{am + \bar{a}n}{cq}}.} \end{lem} From Lemma \ref{eqn:prod3Lfnc}, we have that \est{\prod_{i = 1}^3 L(f, s+ \alpha_i)L(\bar{f}, s - \beta_i) &= \sumfour_{a_1, b_1, a_2, b_2 \geq 1} \frac{\chi(a_1b_1)\overline{\chi}(a_2b_2)\mathscr B(a_1, b_1; \boldsymbol \alpha)\mathscr B(a_2, b_2; -\boldsymbol \beta)}{(a_1b_1a_2b_2)^{ 2s}} \\ & \ \ \ \ \ \ \ \ \ \ \times \sumtwo_{n, m \geq 1} \frac{\lambda_f(a_1n)\overline{\lambda_f}(a_2m) \sigma_3(n;\boldsymbol \alpha)\sigma_3(m;-\boldsymbol \beta)}{(a_1na_2m)^{ s}}.} By the orthogonality relation of Dirichlet characters and Petersson's formula in Lemma \ref{lem:orthogonality}, a naive guess might be that the main contribution comes from the diagonal terms $a_1b_1 = a_2b_2$ and $a_1n = a_2m$, where $(a_ib_i, q) = 1,$ which is \es{\label{def:Cs} \mathcal C(s, \boldsymbol \alpha, \boldsymbol \beta) := \sumsix_{\substack{a_1, b_1, a_2, b_2, m, n \geq 1\\ a_1n = a_2m \\ a_1b_1 = a_2b_2 \\ (a_i, q) = (b_j, q) = 1 }} \frac{ \mathscr B(a_1, b_1; \boldsymbol \alpha)}{(a_1b_1)^{2s}} \frac{ \mathscr B(a_2, b_2; -\boldsymbol \beta)}{(a_2b_2)^{2s}} \frac{\sigma_3(n;\boldsymbol \alpha)\sigma_3(m; -\boldsymbol \beta)}{(a_1n)^{ s} (a_2m)^{ s}}} for $\textup{Re }(s)$ large enough. 
This can be written as the Euler product $$ \mathcal C(s, \boldsymbol \alpha, \boldsymbol \beta) = \prod_p \mathcal C_p(s, \boldsymbol \alpha, \boldsymbol \beta), $$ where for $p \neq q$, \es{\label{def:Cps} \mathcal C_p(s, \boldsymbol \alpha, \boldsymbol \beta) &:= \sumsix_{\substack{r_1, t_1, r_2, t_2, u_1, u_2 \geq 0 \\ r_1 + u_1 = r_2 + u_2 \\ r_1 + t_1 = r_2 + t_2 }} \frac{ \mathscr B(p^{r_1}, p^{t_1}; \boldsymbol \alpha)}{p^{2s(r_1 + t_1)}} \frac{ \mathscr B(p^{r_2}, p^{t_2}; -\boldsymbol \beta)}{p^{2s(r_2 + t_2)}} \frac{\sigma_3(p^{u_1};\boldsymbol \alpha)\sigma_3(p^{u_2}; -\boldsymbol \beta)}{p^{ s(r_1 + u_1)} p^{s(r_2 + u_2)}}, } and for $p = q,$ \es{\label{def:Cqs} \mathcal C_q(s, \boldsymbol \alpha, \boldsymbol \beta) &:= \sum_{\substack{u \geq 0 }} \frac{\sigma_3(q^{u};\boldsymbol \alpha)\sigma_3(q^{u}; -\boldsymbol \beta)}{q^{ 2us} }. } Next, for $\zeta_p(w) := \pr{1 - \frac{1}{p^w}}^{-1}$, let \es{ \label{def:mathcalZp}\mathcal Z_p(s;\boldsymbol \alpha, \boldsymbol \beta) := \prod_{i = 1}^3 \prod_{j = 1}^3 \zeta_p (2s + \alpha_i - \beta_j), \hskip 0.7in \mathcal Z (s; \boldsymbol \alpha, \boldsymbol \beta) := \prod_{i = 1}^3 \prod_{j = 1}^3 \zeta(2s + \alpha_i - \beta_j),} and \es{\label{def:mathcalAs} \mathcal A (s; \boldsymbol \alpha, \boldsymbol \beta) := \mathcal C(s; \boldsymbol \alpha , \boldsymbol \beta) \mathcal Z(s;\boldsymbol \alpha, \boldsymbol \beta)^{-1} = \prod_{p} \mathcal C_p(s; \boldsymbol \alpha , \boldsymbol \beta) \mathcal Z_p(s;\boldsymbol \alpha, \boldsymbol \beta)^{-1}.} We define \es{\label{def:Mqalbb} \mathcal M(q; \boldsymbol \alpha, \boldsymbol \beta) := \pr{\frac{q}{4\pi^2}}^{\delta(\boldsymbol \alpha, \boldsymbol \beta)} G\pr{\tfrac 12; \boldsymbol \alpha, \boldsymbol \beta }\mathcal A \mathcal Z \pr{\tfrac 12; \boldsymbol \alpha, \boldsymbol \beta}. } When $\textup{Re }(\alpha_i), \textup{Re }(\beta_i) \ll \frac{1}{\log q}$, the Euler product defining $\mathcal A\pr{\tfrac 12; \boldsymbol \alpha, \boldsymbol \beta}$ is absolutely convergent. Now, let $S_{j}$ be the permutation group on $j$ letters. Based on the analysis of the diagonal contribution, we expect $\mathcal M(q; \boldsymbol \alpha, \boldsymbol \beta) $ to be a part of the average in (\ref{momentS1q}), and we also notice that the expression $\mathcal M(q; \boldsymbol \alpha, \boldsymbol \beta)$ is fixed by the action of $S_3 \times S_3$. Since we expect our final answer to be symmetric under the full group $S_6$, we sum over the cosets $S_6/(S_3 \times S_3)$. In fact, the method of Conrey, Farmer, Keating, Rubinstein and Snaith \cite{CFKRS} gives the following conjecture for the average of $\Lambda(f;\boldsymbol \alpha, \boldsymbol \beta)$. 
\begin{conj} \label{conj:mainterm}Assume that $\boldsymbol \alpha, \boldsymbol \beta$ satisfy $\textup{Re }(\alpha_i), \textup{Re }(\beta_i) \ll \frac 1{\log q}$ and $\textup{Im } (\alpha_i), \textup{Im } (\beta_i) \ll q^{1-\varepsilon}.$ We have \est{\frac{2}{\phi(q)}\sum_{\substack{\chi \mod q \\ \chi(-1) = (-1)^k}} \sumh_{f \in \mathcal H_\chi} \Lambda\pr{f; \boldsymbol \alpha , \boldsymbol \beta } = \sum_{\pi \in S_{6}/(S_3\times S_3)} \mathcal M(q; \pi(\boldsymbol \alpha, \boldsymbol \beta)) \pr{1 + O(q^{-\frac 12 + \varepsilon})}, } where we define $\pi(\boldsymbol \alpha, \boldsymbol \beta) = \pi (\alpha_1, \ldots,\alpha_k, \beta_1,\ldots,\beta_k)$ for $\pi \in S_{2k}$, with $\pi$ acting on the $2k$-tuple $(\alpha_1, \ldots,\alpha_k, \beta_1,\ldots,\beta_k)$ as usual. \end{conj} We will also write $\pi(\boldsymbol \alpha, \boldsymbol \beta) = (\pi(\boldsymbol \alpha), \pi(\boldsymbol \beta))$ by an abuse of notation, where $\pi(\boldsymbol \alpha, \boldsymbol \beta)$ is as above. Our main goal is to find an asymptotic formula for \es{ \label{eqn:mainConsider} \mathcal M_6(q) := \frac{2}{\phi(q)}\sum_{\substack{\chi \mod q \\ \chi(-1) = (-1)^k}} {\sum_{f \in \mathcal H_\chi}}^h \int_{-\infty}^{\infty} \Lambda\pr{f; \boldsymbol \alpha + it, \boldsymbol \beta + it} \> dt,} and we will prove the following result. \begin{thm} \label{thm:mainmoment} Let $q$ be prime and $k \geq 5$ be odd. For $\alpha_i, \beta_j \ll \frac 1{\log q}$, we have that \est{\mathcal M_6(q) = \int_{-\infty}^{\infty} \sum_{\pi \in S_{6}/(S_3\times S_3)} \mathcal M(q; \pi(\boldsymbol \alpha) + it, \pi(\boldsymbol \beta) + it) \> dt + O\pr{q^{-\frac 14 + \varepsilon}}. } \end{thm} We note that as the shifts go to $0$, the main term of this moment is of size $(\log q)^9$, and we derive Theorem \ref{cor:main}. We refer the reader to \cite{CFKRS} for the details of this type of calculation. \section{Approximate functional equation}\label{sec:prelem} In this section, we will prove an approximate functional equation for the product of $L$-functions. Let \est{H(s; \boldsymbol \alpha, \boldsymbol \beta) = \prod_{j = 1}^3 \prod_{\ell = 1}^3 \pr{s^2 - \pr{\frac{\alpha_j - \beta_\ell}{2}}^2}^3,} and define for any $\xi> 0,$ \est{W(\xi; \boldsymbol \alpha, \boldsymbol \beta) = \frac{1}{2\pi i} \int_{(1)} G\pr{\tfrac 12 + s; \boldsymbol \alpha, \boldsymbol \beta} H\pr{s; \boldsymbol \alpha, \boldsymbol \beta} \xi^{-s} \ \frac{ds}{s}.} Moreover, let $\Lambda_0(f, \boldsymbol \alpha, \boldsymbol \beta) $ be \es{\label{def:Lambda_0} & \pr{\frac{q}{4\pi^2}}^{\delta(\boldsymbol \alpha, \boldsymbol \beta)} \sumfour_{a_1, b_1, a_2, b_2 \geq 1} \frac{\chi(a_1b_1) \mathscr B(a_1, b_1; \boldsymbol \alpha)}{a_1b_1} \frac{\overline{\chi}(a_2b_2) \mathscr B(a_2, b_2; -\boldsymbol \beta)}{a_2b_2} \\ & \cdot \sumtwo_{n, m \geq 1} \frac{\lambda_f(a_1n) \sigma_3(n;\boldsymbol \alpha)}{(a_1n)^{1/2}} \frac{\overline{\lambda_f}(a_2m) \sigma_3(m; -\boldsymbol \beta)}{(a_2m)^{1/2}} W\pr{\frac{(2\pi)^6 a_1^{3}b_1^2a_2^{3}b_2^2nm}{q^{3}}; \boldsymbol \alpha, \boldsymbol \beta}. } \begin{lemma} \label{lem:approxFncLmodular} We have \est{H(0;\boldsymbol \alpha,\boldsymbol \beta)\Lambda(f; \boldsymbol \alpha, \boldsymbol \beta) = \Lambda_0(f, \boldsymbol \alpha, \boldsymbol \beta) + \Lambda_0(f, \boldsymbol \beta, \boldsymbol \alpha).} \end{lemma} \begin{proof} We consider \est{I := \frac{1}{2\pi i}\int_{(1)} \Lambda\pr{f, s + 1/2; \boldsymbol \alpha, \boldsymbol \beta} H(s; \boldsymbol \alpha, \boldsymbol \beta) \> \frac{ds}{s}.} Moving the contour integral to $(-1)$, we obtain that \est{I &= \Lambda\pr{f; \boldsymbol \alpha, \boldsymbol \beta} H(0; \boldsymbol \alpha, \boldsymbol \beta) + \frac{1}{2\pi i}\int_{(-1)}\Lambda\pr{f, s + 1/2; \boldsymbol \alpha, \boldsymbol \beta} H(s; \boldsymbol \alpha, \boldsymbol \beta) \> \frac{ds}{s} \\ &= \Lambda\pr{f; \boldsymbol \alpha, \boldsymbol \beta} H(0; \boldsymbol \alpha, \boldsymbol \beta) - \frac{1}{2\pi i}\int_{(1)}\Lambda\pr{f, -s + 1/2; \boldsymbol \alpha, \boldsymbol \beta} H(-s; \boldsymbol \alpha, \boldsymbol \beta) \> \frac{ds}{s}.} By the functional equation, we have $\Lambda(f, s + 1/2 ; \boldsymbol \alpha, \boldsymbol \beta) = \Lambda(f, -s + 1/2; \boldsymbol \beta, \boldsymbol \alpha)$. Moreover, $H$ is an even function, and $H(s;\boldsymbol \alpha, \boldsymbol \beta) = H(s; \boldsymbol \beta, \boldsymbol \alpha).$ Therefore, \est{\Lambda\pr{f; \boldsymbol \alpha, \boldsymbol \beta} H(0; \boldsymbol \alpha, \boldsymbol \beta) = & \ \frac{1}{2\pi i}\int_{(1)}\Lambda\pr{f, s + 1/2; \boldsymbol \alpha, \boldsymbol \beta} H(s; \boldsymbol \alpha, \boldsymbol \beta) \frac{ds}{s} \\ &+\frac{1}{2\pi i}\int_{(1)}\Lambda\pr{f, s + 1/2; \boldsymbol \beta, \boldsymbol \alpha} H(s; \boldsymbol \alpha, \boldsymbol \beta) \> \frac{ds}{s}.} The Lemma follows after writing $\Lambda$ as a product of $L$-functions and Gamma functions and using Lemma \ref{eqn:prod3Lfnc}. \end{proof} Next, we let \est{V_{\boldsymbol \alpha, \boldsymbol \beta}(\xi, \eta ; \mu) = \pr{\frac{\mu}{4\pi^2}}^{ \delta(\boldsymbol \alpha, \boldsymbol \beta)} \int_{-\infty}^{\infty} \pr{\frac{\eta}{\xi}}^{it} W\pr{\frac{\xi\eta (4\pi^2)^3}{\mu^{3}}; \boldsymbol \alpha + it, \boldsymbol \beta + it} \> dt,} and \es{\label{def:Lambda_1} \Lambda_1(f; \boldsymbol \alpha, \boldsymbol \beta) &= \sumfour_{a_1, b_1, a_2, b_2 \geq 1} \frac{\chi(a_1b_1) \mathscr B(a_1, b_1; \boldsymbol \alpha)}{a_1b_1} \frac{\overline{\chi}(a_2b_2) \mathscr B(a_2, b_2; -\boldsymbol \beta)}{a_2b_2} \\ & \cdot \sumtwo_{n, m \geq 1} \frac{\lambda_f(a_1n) \sigma_3(n;\boldsymbol \alpha)}{(a_1n)^{1/2}} \frac{\overline{\lambda_f}(a_2m) \sigma_3(m; -\boldsymbol \beta)}{(a_2m)^{1/2}} V_{\boldsymbol \alpha, \boldsymbol \beta}\pr{a_1^3 b_1^2 n, a_2^3b_2^2m; q}. } \begin{lemma} \label{lem:approxV} With notation as above, we have \est{H(0; \boldsymbol \alpha, \boldsymbol \beta) \int_{-\infty}^{\infty} \Lambda(f; \boldsymbol \alpha + it, \boldsymbol \beta + it) \> dt = \Lambda_1(f; \boldsymbol \alpha, \boldsymbol \beta) + \Lambda_1(f; \boldsymbol \beta, \boldsymbol \alpha).} \end{lemma} The proof follows easily from Lemma \ref{lem:approxFncLmodular}. \begin{remark} \label{rem:boundV} The integration over $t$ is added so that the main contribution comes from when $a_1^3b_1^2n \ll q^{3/2 + \varepsilon}$ and $a_2^3b_2^2m \ll q^{3/2 + \varepsilon}$, and we will see this from Lemma \ref{lem:decayV} below. 
Without the integration over $t$, the ranges of $a_i, b_j, m, n$ that we need to consider satisfy the weaker condition $a_1^3b_1^2na_2^3b_2^2m \ll q^{3 + \varepsilon},$ and the proof presented here does not extend to this range. \end{remark}

\begin{lem} \label{lem:decayV} If $\xi \gg q^{\frac 32 + \varepsilon}$ or $\eta \gg q^{\frac 32 + \varepsilon},$ then for any $A > 1,$ we have \est{V_{\boldsymbol \alpha, \boldsymbol \beta} (\xi, \eta; q) \ll q^{-A},} where the implied constant depends on $\varepsilon$ and $A.$ \end{lem}

\begin{proof} From the definition of $W$ and $V$ and a change of variables ($s + it = w, s- it = z$), we can write $V_{\boldsymbol \alpha, \boldsymbol \beta}(\xi, \eta; q)$ as \est{ & \pr{\frac{q}{4\pi^2}}^{\delta(\boldsymbol \alpha, \boldsymbol \beta)} \frac{4\pi}{(2\pi i)^2} \int_{(0)} \prod_{j = 1}^3 \Gamma \pr{w + \tfrac {k-1}2 + \alpha_j } \pr{\frac{q^{\frac 32}}{ (4\pi^2)^{\frac 32} \xi} }^{w} \\ & \hskip 2in \times \int_{(1)} \prod_{j = 1}^3 \Gamma \pr{z + \tfrac {k-1}2 - \beta_j }\pr{\frac{q^{\frac 32}}{ (4\pi^2)^{\frac 32} \eta} }^{z} H\pr{\tfrac{z + w}{2}; \boldsymbol \alpha, \boldsymbol \beta} \> \frac{dz \> dw}{z + w}. } When $\xi \gg q^{\frac 32 + \varepsilon},$ we move the contour integral over $w$ to the far right, and similarly, when $\eta \gg q^{\frac 32 + \varepsilon}$, we move the contour integral over $z$ to the far right. The lemma then follows. \end{proof}

\section{Setup for the proof of Theorem \ref{thm:mainmoment} and diagonal terms} \label{sec:setupmoment}

From Lemma \ref{lem:approxV}, we have that for $\alpha_i, \beta_i \ll 1/\log q,$ \es{\label{eqn:HM6} H(0;\boldsymbol \alpha,\boldsymbol \beta) \mathcal M_6(q) = \frac{2}{\phi(q)} \sum_{\substack{\chi \mod q \\ \chi(-1) = (-1)^k}} {\sum_{f \in \mathcal H_\chi}}^h \{\Lambda_1\pr{f; \boldsymbol \alpha, \boldsymbol \beta} + \Lambda_1\pr{f; \boldsymbol \beta, \boldsymbol \alpha}\}.} Therefore, to evaluate $\mathcal M_6(q)$, it is sufficient to compute asymptotically \es{\label{eqn:MainboundLambda1} \mathscr M_1(q; \boldsymbol \alpha, \boldsymbol \beta) := \frac{2}{\phi(q)} \sum_{\substack{\chi \mod q \\ \chi(-1) = (-1)^k}} {\sum_{f \in \mathcal H_\chi}}^h \Lambda_1\pr{f; \boldsymbol \alpha, \boldsymbol \beta}.} Applying Petersson's formula, we obtain that \est{\sumh_{f \in \mathcal H_\chi } \overline{\lambda}_f(a_2m) \lambda_f(a_1n) = \delta_{a_2m = a_1n} + \Sigma_\chi(a_2m, a_1n),} where $\Sigma_\chi(a_2m, a_1n)$ is defined as in (\ref{lem:orthogonality}).
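The parity-restricted average over $\chi \bmod q$ appearing in \eqref{eqn:MainboundLambda1} is evaluated explicitly in the proof of Lemma \ref{lem:diagonal} below. Purely as a numerical sanity check of that orthogonality relation (this sketch is not part of the argument; the helper functions are illustrative only and assume $q$ is an odd prime), one can verify it directly:
\begin{verbatim}
# Illustrative check of the parity-restricted orthogonality relation for
# Dirichlet characters modulo an odd prime q:
#   (2/phi(q)) * sum_{chi mod q, chi(-1)=(-1)^k} chi(a) * conj(chi(b))
#     = 1 if a = b (mod q), (-1)^k if a = -b (mod q), and 0 otherwise,
# valid for (ab, q) = 1.
import cmath

def characters_mod_prime(q):
    """All Dirichlet characters mod a prime q, as dicts a -> chi(a)."""
    def order(x):
        t, y = 1, x
        while y != 1:
            y, t = y * x % q, t + 1
        return t
    g = next(x for x in range(2, q) if order(x) == q - 1)  # a primitive root
    dlog, y = {1: 0}, 1
    for t in range(1, q - 1):
        y = y * g % q
        dlog[y] = t
    return [{a: cmath.exp(2 * cmath.pi * 1j * m * dlog[a] / (q - 1)) for a in dlog}
            for m in range(q - 1)]

def parity_average(q, k, a, b):
    """(2/phi(q)) * sum over chi with chi(-1) = (-1)^k of chi(a)*conj(chi(b))."""
    sign = (-1) ** k
    chars = [ch for ch in characters_mod_prime(q) if abs(ch[q - 1] - sign) < 1e-9]
    return 2 * sum(ch[a % q] * ch[b % q].conjugate() for ch in chars) / (q - 1)

q, k = 13, 5
for a in range(1, q):
    for b in range(1, q):
        expected = 1 if a == b else ((-1) ** k if (a + b) % q == 0 else 0)
        assert abs(parity_average(q, k, a, b) - expected) < 1e-8
print("orthogonality relation verified for q =", q)
\end{verbatim}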
We then write \es{\label{decomM1} \mathscr M_1(q; \boldsymbol \alpha, \boldsymbol \beta) = \mathscr D(q; \boldsymbol \alpha, \boldsymbol \beta) + \mathscr K(q; \boldsymbol \alpha, \boldsymbol \beta),} where $\mathscr D(q; \boldsymbol \alpha, \boldsymbol \beta)$ is the diagonal contribution from $\delta_{a_2m = a_1n},$ and $\mathscr K(q; \boldsymbol \alpha, \boldsymbol \beta)$ is the contribution from $\Sigma_\chi(a_2m, a_1n).$ In Section \ref{sec:diag} below, we will show that the term $\mathscr D(q; \boldsymbol \alpha, \boldsymbol \beta)$ contributes one of the twenty terms in Conjecture \ref{conj:mainterm}, specifically the term corresponding to $\mathcal M(q; \boldsymbol \alpha + it, \boldsymbol \beta + it).$ Moreover, $\mathscr K(q;\boldsymbol \alpha, \boldsymbol \beta)$ gives another nine terms in the conjecture, namely those transpositions in $S_6/(S_3 \times S_3)$ which switch $\alpha_i$ and $\beta_j$ for some fixed $i, j \in \{1, 2, 3\}$. We explicitly work out one of these terms in Proposition \ref{prop:mainTM}. Similarly, $\mathscr D(q; \boldsymbol \beta, \boldsymbol \alpha)$ gives rise to the term corresponding to $\mathcal M(q; \boldsymbol \beta + it, \boldsymbol \alpha + it),$ and the last nine expressions arise from $\mathscr K(q; \boldsymbol \beta, \boldsymbol \alpha)$.

\subsection{Evaluating the diagonal terms $\mathscr D (q; \boldsymbol \alpha, \boldsymbol \beta)$} \label{sec:diag}

We recall that \est{ \mathscr D(q; \boldsymbol \alpha, \boldsymbol \beta) &= \frac{2}{\phi(q)} \sum_{\substack{\chi \mod q \\ \chi(-1) =(-1)^k}} \sumsix_{\substack{a_1, b_1, a_2, b_2, m, n \geq 1 \\ a_1n = a_2m}} \chi(a_1b_1) \overline{\chi}(a_2b_2) \frac{ \mathscr B(a_1, b_1; \boldsymbol \alpha)}{a_1b_1}\frac{ \mathscr B(a_2, b_2; -\boldsymbol \beta)}{a_2b_2} \\ & \hskip 2in \times \frac{\sigma_3(n;\boldsymbol \alpha)\sigma_3(m; -\boldsymbol \beta)}{(a_1n)^{\frac 12} (a_2m)^{\frac 12}} V_{\boldsymbol \alpha, \boldsymbol \beta}\pr{a_1^3 b_1^2 n, a_2^3b_2^2m; q}.} We will compute the diagonal contribution in the following lemma.

\begin{lem} \label{lem:diagonal} With the same notations as above, we have \est{\mathscr D(q; \boldsymbol \alpha, \boldsymbol \beta) = H(0; \boldsymbol \alpha, \boldsymbol \beta)\int_{-\infty}^{\infty} \mathcal M(q; \boldsymbol \alpha + it, \boldsymbol \beta + it) \> dt + O\pr{q^{-3/4 + \varepsilon}}.} \end{lem}

\begin{proof} We apply the orthogonality relation for Dirichlet characters in (\ref{eqn:orthodirichlet}) and obtain that for $(a_ib_i, q) = 1,$ \est{\frac{2}{\phi(q)}\sum_{\substack{\chi \mod q \\ \chi(-1) =(-1)^k}} \chi(a_1b_1) \overline{\chi} (a_2b_2) = \left\{\begin{array}{ll} 1 & {\rm if} \ a_1b_1 \equiv a_2b_2 \mod q \\ (-1)^k & {\rm if} \ a_1b_1 \equiv - a_2b_2 \mod q \\ 0 & {\rm otherwise}. \end{array} \right.} When $a_1b_1 \geq q/4$ or $a_2b_2 \geq q/4$, we have that $a_1^3b_1^2n \geq q^2/16$ or $a_2^3b_2^2m \geq q^2/16$. From Lemma \ref{lem:decayV}, $V_{\boldsymbol \alpha, \boldsymbol \beta}(a_1^3b_1^2n, a_2^3b_2^2m; q) \ll q^{-A}$ in that range, so the contribution from these terms is negligible.
Hence the main contribution from $\mathscr D(q; \boldsymbol \alpha, \boldsymbol \beta)$ comes from the terms with $a_1b_1 = a_2b_2$ when $(a_ib_i, q) = 1,$ and \est{ \mathscr D(q; \boldsymbol \alpha, \boldsymbol \beta) &= \sumsix_{\substack{a_1, b_1, a_2, b_2, m, n \geq 1 \\ a_1n = a_2m \\ a_1b_1 = a_2b_2 \\ (a_ib_i, q) = 1 }} \frac{ \mathscr B(a_1, b_1; \boldsymbol \alpha)}{a_1b_1} \frac{ \mathscr B(a_2, b_2; -\boldsymbol \beta)}{a_2b_2}\\ & \hskip 1in \times \frac{\sigma_3(n;\boldsymbol \alpha)\sigma_3(m; -\boldsymbol \beta)}{(a_1n)^{\frac 12} (a_2m)^{\frac 12}} V_{\boldsymbol \alpha, \boldsymbol \beta}\pr{a_1^3 b_1^2 n, a_2^3b_2^2m; q} + O(q^{-A}).} Since $a_1b_1 = a_2b_2$ and $a_1n = a_2m$, $a_1^3b_1^2n = a_2^3 b_2^2 m.$ Therefore, $\mathscr D(q; \boldsymbol \alpha, \boldsymbol \beta)$ can be written as \est{ & \frac{1}{2\pi i}\pr{\frac{q}{4\pi^2}}^{\delta(\boldsymbol \alpha, \boldsymbol \beta)}\int_{-\infty}^{\infty} \int_{(1)} G\pr{\tfrac 12 + s; \boldsymbol \alpha + it, \boldsymbol \beta + it} H(s; \boldsymbol \alpha, \boldsymbol \beta) \pr{\frac{q}{4\pi^2}}^{3s} \mathcal A \mathcal Z\pr{\tfrac 12 + s; \boldsymbol \alpha, \boldsymbol \beta} \frac{ds}{s} \> dt,} where we have used Equations (\ref{def:Cs}) to (\ref{def:mathcalAs}). Note that $\mathcal A(s; \boldsymbol \alpha, \boldsymbol \beta)$ is absolutely convergent when $\textup{Re }(s) > \frac 14 + \varepsilon.$ Furthermore, the pole at $s = -(\alpha_i - \beta_j)/2$ from the zeta factor $\mathcal Z\pr{\tfrac 12 + s; \boldsymbol \alpha, \boldsymbol \beta}$ is cancelled by the zero at the same point from $H(s; \boldsymbol \alpha, \boldsymbol \beta).$ Thus, in the region $\textup{Re }(s) > -\tfrac 14 + \varepsilon,$ the integrand is analytic except for a simple pole at $s = 0.$ Moving the line of integration to $\textup{Re }(s) = -1/4 + \varepsilon,$ we obtain that $\mathscr D(q; \boldsymbol \alpha, \boldsymbol \beta)$ is \est{ \pr{\frac{q}{4\pi^2}}^{\delta(\boldsymbol \alpha, \boldsymbol \beta)}\int_{-\infty}^{\infty} G\pr{\tfrac 12; \boldsymbol \alpha + it, \boldsymbol \beta + it} H(0; \boldsymbol \alpha, \boldsymbol \beta)\mathcal A \mathcal Z \pr{\tfrac 12; \boldsymbol \alpha, \boldsymbol \beta} \> dt + O\pr{q^{-3/4 + \varepsilon}}.} The lemma now follows from (\ref{def:Mqalbb}) and upon noting that $\mathcal A \mathcal Z \pr{\tfrac 12; \boldsymbol \alpha, \boldsymbol \beta} = \mathcal A \mathcal Z \pr{\tfrac 12; \boldsymbol \alpha + it, \boldsymbol \beta + it}.$ \end{proof}

\section{Setup for the off-diagonal terms $\mathscr K(q; \boldsymbol \alpha, \boldsymbol \beta)$} \label{sec:offdiagsetup}

Define $\mathcal K f = i^{-k} f + i^k \bar{f}$.
If $g$ is a real function, then $g\mathcal K f = \mathcal K (gf).$ Applying the orthogonality relation for $\chi$ from (\ref{eqn:orthodirichlet}) to $\mathscr K(q; \boldsymbol \alpha, \boldsymbol \beta)$, we obtain that \est{& \mathscr K(q; \boldsymbol \alpha, \boldsymbol \beta) = 2\pi \sumfour_{\substack{a_1, b_1, a_2, b_2 \geq 1 \\ (a_1a_2b_1b_2, q) = 1}} \frac{ \mathscr B(a_1, b_1; \boldsymbol \alpha)}{a_1b_1} \frac{ \mathscr B(a_2, b_2; -\boldsymbol \beta)}{a_2b_2} \sumtwo_{\substack{n, m \geq 1}} \frac{ \sigma_3(n;\boldsymbol \alpha)}{(a_1n)^{1/2}} \frac{ \sigma_3(m; -\boldsymbol \beta)}{(a_2m)^{1/2}} \\ & \hskip 0.3in \times V_{\boldsymbol \alpha, \boldsymbol \beta}\pr{a_1^3 b_1^2 n, a_2^3b_2^2m; q} \sum_{c = 1}^\infty \frac{1}{cq} J_{k-1} \pr{\frac{4\pi}{cq} \sqrt{a_2ma_1n} } \mathcal K \sumstar_{\substack{a \mod{cq} \\ a \equiv \overline{a_1b_1}a_2b_2\mod q} } e\pr{\frac{aa_2m + \bar{a}a_1n}{cq}} ,} where $\sumstar$ denotes a sum over reduced residues.

Let $f$ be a smooth partition of unity such that \est{\sumd_M f\pr{\frac m M} = 1, } where $f$ is supported in $[1/2, 3]$ and $\sumd_M$ denotes a dyadic sum over $M = 2^j$, $j\geq 0$. Rearranging the sum, we have \est{\mathscr K(q; \boldsymbol \alpha, \boldsymbol \beta) = \frac {2\pi}{q}&\sumfour_{\substack{a_1, b_1, a_2, b_2 \geq 1 \\ (a_1a_2b_1b_2, q) = 1}} \frac{ \mathscr B(a_1, b_1; \boldsymbol \alpha)}{a_1^{\frac 32}b_1} \frac{ \mathscr B(a_2, b_2; -\boldsymbol \beta)}{a_2^{\frac 32}b_2} \sumd_M \sumd_N S(\mathbf a, \mathbf b, M, N; \boldsymbol \alpha, \boldsymbol \beta), } where \est{S(\mathbf a, \mathbf b, M, N ; \boldsymbol \alpha, \boldsymbol \beta) &:= \sum_{c = 1}^{\infty}\frac{1}{c} \sumtwo_{m, n \geq 1} \frac{\sigma_3(n;\boldsymbol \alpha)\sigma_3(m; -\boldsymbol \beta)}{n^{\frac 12}m^{\frac 12}}\mathcal F(\mathbf a, \mathbf b, m, n, c),} \es{\label{def:F} \mathcal F(\mathbf a, \mathbf b, m, n, c) := & \ \mathcal G(\mathbf a, \mathbf b, m,n, c) \ \mathcal K \sumstar_{\substack{a \mod{cq} \\ a \equiv \overline{a_1b_1}a_2b_2\mod q} } e\pr{\frac{aa_2m + \bar{a}a_1n}{cq}} J_{k-1} \pr{\frac{4\pi}{cq} \sqrt{a_2ma_1n} } ,} and \es{\label{def:G} \mathcal G(\mathbf a, \mathbf b, m, n, c) := & V_{\boldsymbol \alpha, \boldsymbol \beta}\pr{a_1^3 b_1^2 n, a_2^3b_2^2m; q} f\pr{\frac{m}{M}}f\pr{\frac{n}{N}} .}

As described in the outline of the paper, we now take the following steps to compute $\mathscr K(q; \boldsymbol \alpha, \boldsymbol \beta).$ \\ We write \es{\label{decompK}\mathscr K(q; \boldsymbol \alpha, \boldsymbol \beta) = \mathscr K_M(q; \boldsymbol \alpha, \boldsymbol \beta) + \mathscr K_E(q; \boldsymbol \alpha,\boldsymbol \beta),} where $\mathscr K_M(q; \boldsymbol \alpha, \boldsymbol \beta)$ is the contribution from the sum over $c < C,$ where $C = \frac{\sqrt{a_1a_2MN}}{q^{\frac 23}},$ and $\mathscr K_E(q; \boldsymbol \alpha, \boldsymbol \beta)$ is the rest.
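The smooth dyadic partition of unity introduced above is standard. Purely as an illustration (this sketch is not part of the argument; the particular bump function below is one admissible choice, with pieces supported in $[1/2, 2] \subset [1/2, 3]$), one can check numerically that the dyadic pieces sum to $1$ at the integers:
\begin{verbatim}
# Illustrative construction of a smooth dyadic partition of unity:
# with psi smooth, psi = 1 on [0,1] and psi = 0 on [2,infinity), the function
# f(x) = psi(x) - psi(2x) is supported in [1/2, 2] and satisfies
# sum_{j >= 0} f(x / 2^j) = 1 for every x >= 1.
import math

def psi(x):
    """Smooth cutoff: 1 for x <= 1, 0 for x >= 2, C^infinity in between."""
    if x <= 1.0:
        return 1.0
    if x >= 2.0:
        return 0.0
    def g(t):  # standard exp(-1/t) bump ingredient
        return math.exp(-1.0 / t) if t > 0 else 0.0
    return g(2.0 - x) / (g(2.0 - x) + g(x - 1.0))

def f(x):
    return psi(x) - psi(2.0 * x)

# check that the dyadic pieces f(m / 2^j) sum to 1 for integers m >= 1
for m in [1, 2, 3, 10, 1000, 123456]:
    total = sum(f(m / 2.0 ** j) for j in range(64))
    assert abs(total - 1.0) < 1e-12, (m, total)
print("dyadic partition of unity sums to 1 on the integers m >= 1")
\end{verbatim}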
We will show that the contribution from $\mathscr K_E(q; \boldsymbol \alpha, \boldsymbol \beta)$ is small in Section \ref{sec:truncationC}. This is possible by the decay of the Bessel functions, and such a truncation bounds the size of the conductor inside the exponential sum. For $\mathscr K_M(q; \boldsymbol \alpha, \boldsymbol \beta)$, we start by reducing the conductor inside the exponential sum from $cq$ to $c$ in Section \ref{sec:treatmentexp}. This step takes advantage of the average over $\chi \bmod q$. Before we show each step, we provide properties of Bessel functions that will be used later.

\begin{lem} \label{lem:Besselresult} We have \es{\label{asympJxbig} J_{k-1} (2\pi x) = \frac{1}{\pi\sqrt x}\pg{ W(2\pi x)\e{x - \frac k4 + \frac 18} + \overline{W}(2\pi x)\e{-x + \frac k4 - \frac 18}},} where $W^{(j)}(x) \ll_{j, k} x^{-j} $. Moreover, \es{\label{asympJxSm} J_{k-1} (2 x) = \sum_{\ell = 0}^{\infty} (-1)^{\ell} \frac{x^{2\ell + k - 1}}{\ell! (\ell + k - 1)!},} and \es{\label{bound:Bessel1} J_{k-1}(x) \ll \min (x^{-1/2}, x^{k-1}).} Finally, the following integration is used when calculating the main terms of $\mathscr K(q; \boldsymbol \alpha, \boldsymbol \beta)$. If $\alpha, \beta, \gamma > 0$, then \es{\label{int:bessel} \mathcal K \int_0^\infty \e{(\alpha + \beta)x + \gamma x^{-1}}J_{k - 1}(4\pi \sqrt{\alpha \beta x})\> \frac{dx}{x} = 2\pi J_{k -1}(4\pi\sqrt{\alpha\gamma})J_{k -1}(4\pi\sqrt{\beta\gamma}), } and the integration is 0 if $\alpha, \beta > 0$ and $\gamma \leq 0.$ \end{lem}

These results are standard. We refer the reader to \cite{Watt} for the first three claims, and to \cite{Ob} for the last claim.

\subsection{Truncating the sum over $c$} \label{sec:truncationC}

In this section we show that we can truncate the sum over $c$ in $S(\mathbf a, \mathbf b, M, N; \boldsymbol \alpha, \boldsymbol \beta)$ with small error contribution.

\begin{prop} \label{prop:truncateC} Let $C = \frac{\sqrt{a_1a_2MN}}{q^{\frac 23}}$, $k \geq 5$, and $\mathcal F(\mathbf a, \mathbf b, m, n, c)$ be defined as in (\ref{def:F}). Further, let \est{\mathscr K_E(q; \boldsymbol \alpha, \boldsymbol \beta) = \frac {2\pi}{q}&\sumfour_{\substack{a_1, b_1, a_2, b_2 \geq 1 \\ (a_1a_2b_1b_2, q) = 1}} \frac{ \mathscr B(a_1, b_1; \boldsymbol \alpha)}{a_1^{\frac 32}b_1} \frac{ \mathscr B(a_2, b_2; -\boldsymbol \beta)}{a_2^{\frac 32}b_2} \sumd_M \sumd_N S_E(\mathbf a, \mathbf b, M, N; \boldsymbol \alpha, \boldsymbol \beta), } where \est{S_E(\mathbf a, \mathbf b, M, N; \boldsymbol \alpha, \boldsymbol \beta) &= \sum_{c \geq C} \frac{1}{c} \sumtwo_{m, n \geq 1} \frac{\sigma_3(n;\boldsymbol \alpha)\sigma_3(m; -\boldsymbol \beta)}{n^{\frac 12}m^{\frac 12}}\mathcal F(\mathbf a, \mathbf b, m, n, c).} Then $$ \mathscr K_E(q;\boldsymbol \alpha, \boldsymbol \beta) \ll q^{-\frac 5{12} + \varepsilon}.$$ \end{prop}

\begin{proof} Note that the contribution of terms when $a_1^3b_1^2N$ or $a_2^3b_2^2M$ is $\gg q^{\frac 32 + \varepsilon}$ is $\ll_{\varepsilon, A} q^{-A}$, due to the fast decay rate of $\mathcal G(\mathbf a, \mathbf b, m, n, c)$ defined in (\ref{def:G}).
Thus we will discount such terms in the rest of the proof. For $k\geq 5$, we let \est{ \tilde{\Delta}(m, n) &= \tilde{\Delta}(m, n, \mathbf a, \mathbf b; C) \\ &= \sum_{c\geq C} \frac{1}{c} \ \mathcal K \sumstar_{\substack{a \mod{cq} \\ a \equiv \overline{a_1b_1}a_2b_2\mod q} } e\pr{\frac{aa_2m + \bar{a}a_1n}{cq}} J_{k-1}\bfrac{4 \pi \sqrt{a_1a_2 mn}}{cq} \\ &= \tilde{\Delta}_1(m, n) + \tilde{\Delta}_2(m, n), } where \est{ \tilde{\Delta}_1(m, n) := \sum_{\substack{c\geq C\\(c, q) = 1}} \frac{1}{c} \ \mathcal K \sumstar_{\substack{a \mod{cq} \\ a \equiv \overline{a_1b_1}a_2b_2\mod q} } e\pr{\frac{aa_2m + \bar{a}a_1n}{cq}} J_{k-1}\bfrac{4 \pi \sqrt{a_1a_2 mn}}{cq}, } and $\tilde{\Delta}_2(m, n)$ is the sum of the terms where $(c, q)>1$. Now for $(c, q) = 1$, the Weil bound gives \est{ \sumstar_{\substack{a \mod{cq} \\ a \equiv \overline{a_1b_1}a_2b_2\mod q} }e\pr{\frac{aa_2m + \bar{a}a_1n}{cq}} \ll c^{1/2+\varepsilon} \sqrt{(a_1m, a_2n, c)}, } and from the bound in (\ref{bound:Bessel1}), we have \est{ J_{k-1}\bfrac{4 \pi \sqrt{a_1a_2 mn}}{cq} \ll \bfrac{\sqrt{a_1a_2mn}}{cq}^{k-1}. } When $a_1^3b_1^2N$ and $a_2^3b_2^2M \ll q^{\frac 32 + \varepsilon},$ we obtain that for $k \geq 5,$ \est{ \sum_{m, n} & \frac{\sigma_3(n;\boldsymbol \alpha)\sigma_3(m; -\boldsymbol \beta)}{\sqrt{mn}} \tilde{\Delta}_1(m, n) \mathcal G(\mathbf a, \mathbf b, m, n, c) \ll q^{\varepsilon} M^{\frac{k}{2}}N^{\frac k2}\sum_{c\geq C} \frac{1}{c} c^{1/2+\varepsilon} \bfrac{\sqrt{a_1a_2}}{cq}^{k-1} \ll q^{\frac 7{12} + \varepsilon}. } In the above, we have used that $\max (a_1^3b_1^2N, a_2^3b_2^2M) \ll q^{\frac 32 + \varepsilon}$. Then summing over $a_1, b_1, a_2, b_2$ gives the desired bound. Now, for $(c, q) >1$, we use the bound \est{ \sumstar_{\substack{a \mod{cq} \\ a \equiv \overline{a_1b_1}a_2b_2\mod q} }e\pr{\frac{aa_2m + \bar{a}a_1n}{cq}} \ll (cq)^{1/2+\varepsilon} \sqrt{(a_1m, a_2n, cq)}. } Hence, for $q|c$ and $k \geq 5,$ we obtain \est{ \sum_{m, n} & \frac{\sigma_3(n;\boldsymbol \alpha)\sigma_3(m; -\boldsymbol \beta)}{\sqrt{mn}} \tilde{\Delta}_2(m, n) \mathcal G(\mathbf a, \mathbf b, m, n, c) \ll q^{\varepsilon}M^{\frac k2}N^{\frac k2} \sum_{\substack{c\geq C\\ q \mid c}} \frac{1}{c} (qc)^{\frac 12} \bfrac{\sqrt{a_1a_2}}{cq}^{k-1} \ll q^{\frac 1{12} + \varepsilon}. } Then summing over $a_1, b_1, a_2, b_2$ gives the desired bound.
\end{proof}

From this proposition, we are left to consider only \es{\label{def:KM} \mathscr K_M(q; \boldsymbol \alpha, \boldsymbol \beta) = \frac {2\pi}{q}&\sumfour_{\substack{a_1, b_1, a_2, b_2 \geq 1 \\ (a_1a_2b_1b_2, q) = 1}} \frac{ \mathscr B(a_1, b_1; \boldsymbol \alpha)}{a_1^{\frac 32}b_1} \frac{ \mathscr B(a_2, b_2; -\boldsymbol \beta)}{a_2^{\frac 32}b_2} \sumd_M \sumd_N S_M(\mathbf a, \mathbf b, M, N; \boldsymbol \alpha, \boldsymbol \beta), } where \es{\label{def:SM} S_M(\mathbf a, \mathbf b, M, N; \boldsymbol \alpha, \boldsymbol \beta) &:= \sum_{c < C} \frac{1}{c} \sum_{m, n \geq 1 } \frac{\sigma_3(n;\boldsymbol \alpha)\sigma_3(m; -\boldsymbol \beta)}{n^{\frac 12}m^{\frac 12}}\mathcal F(\mathbf a, \mathbf b, m, n, c).}

\subsection{Treatment of the exponential sum} \label{sec:treatmentexp}

Next, we reduce the conductor in the exponential sum in $\mathcal F(\mathbf a, \mathbf b, m, n, c)$ before applying Voronoi summation.

\begin{lemma}[Treatment of the exponential sum] Assume that $(c, q) = 1$ and let \est{ Y := \sumstar_{\substack{a \mod{cq} \\ a \, \equiv \, \overline{a_1b_1}a_2b_2 \mod q} }e\pr{\frac{au + \bar{a}v}{cq}}. } Then we have \begin{eqnarray*} Y &=&e\bfrac{(a_2b_2)^2u + (a_1b_1)^2 v}{cqa_1b_1a_2b_2} \sumstar_{x \mod{c}} e\bfrac{\bar q (a_1b_1x - a_2b_2) u}{a_1b_1c} e\bfrac{\bar q (a_2b_2\bar x - a_1b_1) v}{a_2b_2c}. \end{eqnarray*} \end{lemma}

\begin{proof} By the Chinese Remainder Theorem, for each $a \mod{cq}$, there exist unique $x \mod{c}$ and $y \mod{q}$ such that \es{ \label{eqn:aqc} a = xq\bar q + y c \bar c, } where $\bar q$ denotes the inverse of $q$ modulo $c$, and $\bar c$ denotes the inverse of $c$ modulo $q$. Using (\ref{eqn:aqc}) and the reciprocity relation \est{\frac{\overline{a}}{b} + \frac{\overline{b}}{a} \equiv \frac{1}{ab} \mod 1, } where $(a, b) = 1,$ $\overline{a}$ is the inverse of $a \bmod b$, and $\overline{b}$ is the inverse of $b \bmod a$, we obtain that \begin{align*} Y &= e\bfrac{\overline{a_1b_1}a_2b_2 \bar c u + a_1b_1\overline{a_2b_2} \bar c v}{q} \sumstar_{x \mod{c}} e\bfrac{x\bar qu + \bar x \bar q v}{c}. \end{align*} Thus \begin{eqnarray*} Y &=& e\bfrac{(a_2b_2)^2u + (a_1b_1)^2 v}{cqa_1b_1a_2b_2} \sumstar_{x \mod{c}} e\bfrac{x\bar q u + \bar x \bar qv}{c} e\bfrac{-\bar q a_2b_2u }{ca_1b_1}e\bfrac{- \bar q a_1b_1 v}{ca_2b_2}, \end{eqnarray*} and the lemma follows. \end{proof}

Note that when $c<C <q$, we automatically have $(c, q) = 1 $. The point of this lemma is that we may treat $e\bfrac{(a_2b_2)^2u + (a_1b_1)^2 v}{cqa_1b_1a_2b_2}$ as a smooth function with small derivatives, while the other exponentials have conductor at most $ca_ib_i \leq q^{1+\varepsilon}$ after truncation. It should be noted, however, that we are most concerned with the contribution from the transition region of the Bessel function, where the conductor $ca_ib_i$ should be thought of as around size $q^{1/2}$.
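Purely as a sanity check of the reciprocity relation used in the proof above (an illustrative sketch, not part of the argument), one can verify on small coprime pairs that $\frac{\overline a}{b} + \frac{\overline b}{a} - \frac{1}{ab}$ is always an integer:
\begin{verbatim}
# Illustrative check of the reciprocity relation: for coprime a, b, with abar
# the inverse of a mod b and bbar the inverse of b mod a,
#   abar/b + bbar/a - 1/(ab)  is an integer.
from math import gcd
from fractions import Fraction

def check_reciprocity(a, b):
    abar = pow(a, -1, b)          # inverse of a modulo b
    bbar = pow(b, -1, a)          # inverse of b modulo a
    r = Fraction(abar, b) + Fraction(bbar, a) - Fraction(1, a * b)
    assert r.denominator == 1     # the difference is an integer

for a in range(1, 40):
    for b in range(1, 40):
        if gcd(a, b) == 1:
            check_reciprocity(a, b)
print("reciprocity relation verified for all coprime a, b < 40")
\end{verbatim}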
\section{Applying Voronoi Summation}\label{sec:appvoronoi}

To calculate $\mathscr K_M(q; \boldsymbol \alpha, \boldsymbol \beta)$ as defined in \eqref{def:KM}, we start by evaluating $S_M(\mathbf a, \mathbf b, M, N; \boldsymbol \alpha, \boldsymbol \beta)$ defined in (\ref{def:SM}). We write \est{S_M(\mathbf a, \mathbf b, M, N; \boldsymbol \alpha, \boldsymbol \beta) = \sum_{c < C} \sumstar_{x \mod{c}} \frac{1}{c} \pr{\mathcal S^+_{\boldsymbol \alpha, \boldsymbol \beta} (c, x) + \mathcal S^-_{\boldsymbol \alpha, \boldsymbol \beta}(c,x)},} where \est{\mathcal S^\pm_{\boldsymbol \alpha, \boldsymbol \beta}(c, x) & = i^{\mp k} \sum_{m \geq 1} \frac{\sigma_3(m; -\boldsymbol \beta)}{m^{\frac 12}} f\pr{\frac{m}{M}} e\pr{\pm \frac{a_2^2b_2m }{cqa_1b_1} } e\pr{\pm \frac{\bar q (a_1b_1x - a_2b_2) a_2m}{a_1b_1c} }\times \\ & \ \ \ \times \sum_{n \geq 1} \sigma_3(n;\boldsymbol \alpha) F^\pm_{\boldsymbol \alpha, \boldsymbol \beta}(m, n, c) e\pr{\pm \frac{\bar q (a_2b_2\bar x - a_1b_1) a_1n}{a_2b_2c}},} where \est{ F^\pm_{\boldsymbol \alpha, \boldsymbol \beta}(m, n, c) = \frac{1}{n^{\frac 12}} V_{\boldsymbol \alpha, \boldsymbol \beta}\pr{a_1^3 b_1^2 n, a_2^3b_2^2m; q} J_{k-1} \pr{\frac{4\pi}{cq} \sqrt{a_2ma_1n} } \ e\pr{\pm \frac{ a_1^2b_1 n}{cqa_2b_2}} f\pr{\frac{n}{N}} . } Let \begin{equation}\label{eqn:lambda1eta1} \frac{\lambda_1}{\eta_1} = \frac{\bar q (a_2b_2\bar x - a_1b_1) a_1}{a_2b_2c}, \end{equation} and \begin{equation} \label{eqn:lambda2eta2} \frac{\lambda_2}{\eta_2} = \frac{\bar q (a_1b_1x - a_2b_2) a_2}{a_1b_1c}, \end{equation} where $(\lambda_1, \eta_1) = (\lambda_2, \eta_2) = 1.$ Moreover we define \est{ \mathcal V^\pm_{\boldsymbol \alpha, \boldsymbol \beta}(c; {\bf a}, {\bf b}, y, z) = \frac{1}{y^{1/2 }} \frac{1}{z^{1/2}} &V_{\boldsymbol \alpha, \boldsymbol \beta} \pr{a_1^3b_1^2y, a_2^3b_2^2z; q} J_{k-1} \pr{\frac{4\pi \sqrt{a_2za_1y}}{cq}} \times \\ &\times i^{\mp k}\e{\pm \frac{a_1^2b_1y}{cqa_2b_2} \pm \frac{a_2^2b_2z}{cqa_1b_1}} f\pr{\frac yN} f\pr{\frac zM} .} We then apply Voronoi summation as in Theorem \ref{thm:voronoi} to the sums over $n$ and $m$ and obtain that $\mathcal S^+_{\boldsymbol \alpha, \boldsymbol \beta}(c, x) + \mathcal S^-_{\boldsymbol \alpha, \boldsymbol \beta}(c, x)$ is \est{\sum_{i= 1}^3 \sum_{j = 1}^3 \Res_{s_1 = 1 - \alpha_i} \Res_{s_2 = 1 + \beta_j} (\mathcal T_{M, \boldsymbol \alpha, \boldsymbol \beta}^+(c,x, s_1, s_2) + \mathcal T_{M, \boldsymbol \alpha, \boldsymbol \beta}^-(c,x, s_1, s_2) ) + \sum_{i = 1}^8 \pr{\mathcal T_{i, \boldsymbol \alpha, \boldsymbol \beta}^+(c,x) + \mathcal T_{i, \boldsymbol \alpha, \boldsymbol \beta}^-(c,x) },} where in the region of absolute convergence, \es{\label{def:calT1} \mathcal T_{M, \boldsymbol \alpha, \boldsymbol \beta}^\pm(c,x, s_1, s_2) &:= \mathcal F_M^\pm(c; \boldsymbol \alpha, \boldsymbol \beta) D_3\pr{s_1, \pm \frac{\lambda_1}{\eta_1}, \boldsymbol \alpha } D_3\pr{s_2, \pm \frac{\lambda_2}{\eta_2}, -\boldsymbol \beta } , } \est{ \mathcal F_M^\pm(c; \boldsymbol \alpha, \boldsymbol \beta) &:=\mathcal F_M^\pm(c, s_1, s_2; \boldsymbol \alpha, \boldsymbol \beta) := \int_{0}^{\infty} \int_0^{\infty} y^{s_1 - 1}z^{s_2 - 1} \mathcal V^\pm_{\boldsymbol \alpha, \boldsymbol \beta}(c; {\bf a}, {\bf b}, y, z) \>dy \> dz ,} \es{\label{def:calT2} \mathcal T_{1, \boldsymbol \alpha, \boldsymbol \beta}^\pm (c,x) &:= \frac{\pi^{3/2 + \alpha_1 + \alpha_2 + \alpha_3}}{\eta_1^{3+\alpha_1 + \alpha_2 + \alpha_3}} \sum_{i = 1}^3
\Res_{s = 1 + \beta_i} D_3\pr{s, \pm \frac{\lambda_2}{\eta_2}, -\boldsymbol \beta } \sum_{n = 1}^{\infty} A_3\pr{n, \pm \frac{\lambda_1}{\eta_1}, \boldsymbol \alpha} \mathcal F_1^\pm(c,n; \boldsymbol \alpha, \boldsymbol \beta),} \est{\mathcal F_1^\pm(c,n; \boldsymbol \alpha, \boldsymbol \beta) &= \mathcal F_1^\pm(c,n, s; \boldsymbol \alpha, \boldsymbol \beta)= \int_{0}^{\infty} \int_0^{\infty} z^{s - 1} \mathcal V^\pm_{\boldsymbol \alpha, \boldsymbol \beta}(c; {\bf a}, {\bf b}, y, z) U_3\pr{\frac{\pi^3ny}{\eta_1^3}; \boldsymbol \alpha} \>dy \> dz, } and $\mathcal T_{i, \boldsymbol \alpha, \boldsymbol \beta}^\pm (c,x)$, where $i = 2, 3, 4$, are defined similarly. Further, \es{\label{def:calT6} \mathcal T_{5, \boldsymbol \alpha, \boldsymbol \beta}^\pm(c,x) &:= \frac{\pi^{3 + \sum_{i = 1}^3 (\alpha_i - \beta_i)}}{\eta_1^{3+\sum_{i=1}^3 \alpha_i}\eta_2^{3 - \sum_{i=1}^3 \beta_i}} \sum_{n = 1}^{\infty} \sum_{m = 1}^\infty A_3\pr{n, \pm \frac{\lambda_1}{\eta_1}, \boldsymbol \alpha} A_3\pr{m, \pm \frac{\lambda_2}{\eta_2}, -\boldsymbol \beta} \mathcal F_5^\pm(c,n, m; \boldsymbol \alpha, \boldsymbol \beta),} \est{\mathcal F_5^\pm(c,n, m; \boldsymbol \alpha, \boldsymbol \beta) &= \int_{0}^{\infty} \int_0^{\infty} \mathcal V^\pm_{\boldsymbol \alpha, \boldsymbol \beta}(c; {\bf a}, {\bf b}, y, z) U_3\pr{\frac{\pi^3mz}{\eta_2^3}; -\boldsymbol \beta} U_3\pr{\frac{\pi^3ny}{\eta_1^3}; \boldsymbol \alpha} \>dy \> dz, } and $\mathcal T_{i, \boldsymbol \alpha, \boldsymbol \beta}^\pm(c,x)$, where $i = 6, 7, 8$, are defined similarly.

As mentioned in Section \ref{sec:setupmoment}, there are nine terms from $\mathscr K(q; \boldsymbol \alpha, \boldsymbol \beta).$ In particular, we will show that these terms arise from \est{\sum_{c < C} \sumstar_{x \mod c} \frac{1}{c} \pr{\mathcal T_{M, \boldsymbol \alpha, \boldsymbol \beta}^+(c,x, s_1, s_2) + \mathcal T_{M, \boldsymbol \alpha, \boldsymbol \beta}^-(c,x, s_1, s_2)},} and in fact each term comes from the residues at $s_1 = 1 - \alpha_i$ and $s_2 = 1 + \beta_j$ for $i, j = 1, 2, 3.$ We state the contribution from the residues $s_1 = 1 - \alpha_1$ and $s_2 = 1 + \beta_1$ in Proposition \ref{prop:mainTM} below, and prove it in Section \ref{sec:provepropTM}. By symmetry, the analogous result holds for the other residues. Then, we will show that the rest of the $\mathcal T_{i, \boldsymbol \alpha, \boldsymbol \beta}^\pm(c,x)$ are negligible in Section \ref{sec:properror}, as stated in Proposition \ref{prop:Terror}.

\begin{prop} \label{prop:mainTM} Let \est{ R_{\alpha_1, \beta_1} := \frac {2\pi}{q}&\sumfour_{\substack{a_1, b_1, a_2, b_2 \geq 1 \\ (a_1a_2b_1b_2, q) = 1}} \frac{ \mathscr B(a_1, b_1; \boldsymbol \alpha)}{a_1^{\frac 32}b_1} \frac{ \mathscr B(a_2, b_2; -\boldsymbol \beta)}{a_2^{\frac 32}b_2} \\ & \cdot \sumd_M \sumd_N \sum_{c < C} \sumstar_{x \mod c} \frac{1}{c} \Res_{s_1 = 1 - \alpha_1} \Res_{s_2 = 1 + \beta_1} \pr{\mathcal T_{M, \boldsymbol \alpha, \boldsymbol \beta}^+(c,x, s_1, s_2) + \mathcal T_{M, \boldsymbol \alpha, \boldsymbol \beta}^-(c,x, s_1, s_2)}.
} Then we have \est{R_{\alpha_1, \beta_1} = H(0; \boldsymbol \alpha, \boldsymbol \beta)\int_{-\infty}^{\infty}\mathcal M(q;\pi(\boldsymbol \alpha) + it, \pi(\boldsymbol \beta) + it ) \> dt + O(q^{-1/2 + \varepsilon}),} where $(\pi(\boldsymbol \alpha), \pi(\boldsymbol \beta)) = (\beta_1, \alpha_2, \alpha_3; \alpha_1, \beta_2, \beta_3).$ \end{prop}

\begin{prop} \label{prop:Terror} For $i = 1, \dots, 8$, define \est{ \mathcal E_i^\pm(q; \boldsymbol \alpha,\boldsymbol \beta) := \frac {2\pi}{q}&\sumfour_{\substack{a_1, b_1, a_2, b_2 \geq 1 \\ (a_1a_2b_1b_2, q) = 1}} \frac{ \mathscr B(a_1, b_1; \boldsymbol \alpha)}{a_1^{\frac 32}b_1} \frac{ \mathscr B(a_2, b_2; -\boldsymbol \beta)}{a_2^{\frac 32}b_2} \sumd_M \sumd_N \sum_{c < C} \sumstar_{x \mod c} \frac{\mathcal T_{i, \boldsymbol \alpha, \boldsymbol \beta}^\pm(c,x)}{c} . } Then $$ \mathcal E_i^\pm(q; \boldsymbol \alpha, \boldsymbol \beta) \ll q^{-1/4 + \varepsilon}.$$ \end{prop}

We will prove this proposition in Section \ref{sec:properror}.

\section{Proof of Proposition \ref{prop:mainTM}} \label{sec:provepropTM}

We begin by collecting some lemmas which will be used in this section.

\subsection{Preliminary Lemmas}

\begin{lemma} \label{lem:countingnumberX} Let $(a, \ell) = 1.$ We have \est{ f(c, \ell) := \sumstar_{\substack{x \mod {c\ell} \\ x \equiv a \mod {\ell}} } 1 = c \prod_{\substack{p| c \\ p \nmid \ell} } \pr{1 - \frac {1}{p}} = \phi(c) \prod_{p | (\ell, c)} \pr{1 - \frac{1}{p}}^{-1} . } \end{lemma}

\begin{proof} We first prove that if $(m, n\ell) = 1$, then \es{\label{eqn:multsumstarMN} f(mn, \ell) = f(n, \ell) \phi(m).} For all $x$ satisfying $(x, mn\ell) = 1$, we can write $x = um \overline m + vn\ell \overline{n\ell},$ where $m\overline m \equiv 1 \mod {n\ell},$ $n\ell\overline {n\ell} \equiv 1 \mod {m},$ and $(u, n\ell) = (v, m) = 1.$ Moreover $x \equiv a \mod \ell$ if and only if $u \equiv a \mod \ell$. By the Chinese Remainder Theorem, \est{f(mn, \ell) = \sumstar_{\substack{x \mod {mn\ell} \\ x \equiv a \mod {\ell}} } 1 = \sumstar_{\substack{u \mod {n\ell} \\ u \equiv a \mod {\ell}} } \sumstar_{v \mod m} 1 = f(n, \ell) \phi(m).} Let $c = c_1 c_2,$ where all prime factors of $c_1$ also divide $\ell$, and $(c_2, \ell) = 1.$ From (\ref{eqn:multsumstarMN}), we have that $f(c, \ell) = f(c_1, \ell) \phi(c_2).$ Now let $x$ be any residue modulo $c_1\ell$ with $x \equiv a \mod \ell$. Then $(x, c_1\ell) = 1$, since $(a, \ell) = 1$ and every prime factor of $c_1$ divides $\ell$.
Thus all such $x$ can be uniquely written as $x = a + k\ell,$ where $k = 0, \dots, c_1-1$, so $f(c_1, \ell) = c_1.$ We then have $f(c, \ell) = c_1 \phi(c_2)$, and the statement follows from the identity $\phi(c_2) = c_2 \prod_{p|c_2}\left(1-\frac 1p\right).$ \end{proof}

\begin{lemma} \label{lem:sumT} Let $\alpha, \beta, y, z$ be nonnegative real numbers satisfying $\alpha y, \beta z \ll q^{2}$ and define \est{T = T(y, z, \alpha, \beta) := \sum_{\delta = 1 }^\infty \frac{1}{\delta} J_{k - 1} \pr{\frac{4\pi \sqrt{\alpha \beta yz}}{ \delta}} \mathcal{K} \e{\frac{\alpha y}{\delta} + \frac{\beta z}{ \delta}}.} Further, let $L = q^{100}$ and let $w$ be a smooth function on $\mathbb R^+$ with $w(x) = 1$ if $0 \leq x \leq 1$, and $w(x) =0 $ if $x > 2.$ Then for any $A>0$, we have \est{T &= 2\pi \sum_{\ell = 1}^{\infty} w \pr{\frac{\ell}{L}} J_{k - 1} \pr{4\pi \sqrt{ \alpha y\ell}} J_{k - 1} \pr{4\pi \sqrt{\beta z\ell}} \\ & \ \ \ - 2\pi \int_{0 }^{\infty} w \pr{\frac{\ell}{L}} J_{k - 1} \pr{4\pi \sqrt{ \alpha y\ell}} J_{k - 1} \pr{4\pi \sqrt{\beta z\ell}} \> d\ell + O_A(q^{-A}). } \end{lemma}

\begin{proof} We will follow the arguments of Iwaniec and Xiaoqing Li in Section 3 of \cite{IL} to evaluate $T$. Let $\eta(s)$ be a smooth function on $\mathbb R^+$ with $\eta(s) = 0$ if $0 \leq s < 1/4,$ $0 \leq \eta(s) \leq 1$ if $1/4 \leq s \leq 1/2$, and $\eta(s) = 1$ if $s > 1/2.$ We then obtain that \est{T = \mathcal{K} \sum_{\delta} \frac{\eta(\delta)}{\delta} J_{k - 1} \pr{\frac{4\pi \sqrt{\alpha \beta yz}}{\delta}} \e{\frac{\alpha y}{\delta} + \frac{\beta z}{ \delta}}.} After inserting this smooth function, we apply Poisson summation to obtain that \est{T = \sum_{\ell} \hat F(\ell) := \sum_{\ell} \mathcal{K} \int_0^{\infty} \frac{\eta(u)}{u} \e{\ell u + \frac{\alpha y}{u} + \frac{\beta z}{ u}} J_{k - 1} \pr{\frac{4\pi \sqrt{\alpha \beta yz}}{ u}} \> du. } By (\ref{asympJxbig}), we can write the integral above in terms of two integrals with the phase $$\ell u + \frac{\alpha y \ \pm 2\sqrt{\alpha \beta yz} + \beta z }{u} .$$ If $|\ell| > L $, the factor $\ell u$ dominates. Then integrating by parts $A$ times, we have that \est{\int_0^{\infty} \frac{\eta(u)}{u} \e{\ell u + \frac{\alpha y}{u} + \frac{\beta z}{ u}} J_{k - 1} \pr{\frac{4\pi \sqrt{\alpha \beta yz}}{ u}} \> du \ll q^{-A}.
} Therefore \est{T = \sum_{\ell} \hat F(\ell) w \pr{\frac{|\ell|}{L}} + \ O(q^{-A}).} Now, we write $\sum_{\ell} \hat F(\ell) w \pr{\frac{|\ell|}{L}} = T_1 - T_2$, where \est{T_1 := \sum_{\ell} w \pr{\frac{|\ell|}{L}} \mathcal{K} \int_0^{\infty} \frac{1}{u} \e{\ell u + \frac{\alpha y}{u} + \frac{\beta z}{u}} J_{k - 1} \pr{\frac{4\pi \sqrt{\alpha \beta yz}}{u}} \> du, } and \est{T_2 := \sum_{\ell} w \pr{\frac{|\ell|}{L}}\mathcal{K} \int_0^{\infty} \frac{1 - \eta(u)}{u} \e{\ell u + \frac{\alpha y}{u} + \frac{\beta z}{ u}} J_{k - 1} \pr{\frac{4\pi \sqrt{\alpha \beta yz}}{u}} \> du .} We use (\ref{int:bessel}) to evaluate $T_1$ and obtain \es{ \label {eqn:T1} T_1 = 2\pi \sum_{\ell = 1}^{\infty} w \pr{\frac{\ell}{L}} J_{k - 1} \pr{4\pi \sqrt{ \alpha y\ell}} J_{k - 1} \pr{4\pi \sqrt{\beta z\ell}}.} For $T_2,$ we note that $\xi(u) := 1 - \eta(u)$ satisfies $\xi(u) = 1$ if $0 \leq u < 1/4,$ $0 \leq \xi(u) \leq 1$ if $1/4 \leq u \leq 1/2$, and $\xi(u) = 0 $ if $u > 1/2.$ Interchanging the sum over $\ell$ and the integration over $u$ and applying the Poisson summation formula, we have \est{T_2 &= \mathcal{K} \int_0^{\infty}\frac{\xi(u)}{u} \e{ \frac{\alpha y}{u} + \frac{\beta z}{u}} J_{k - 1} \pr{\frac{4\pi \sqrt{\alpha \beta yz}}{ u}} \sum_{\ell} L \hat w(L (\ell + u)) \> du.} Since $\hat w(y) \ll (1 + |y|)^{-A},$ the main contribution comes from $\ell = 0$ and $0 \leq u < 1/4.$ Therefore \es{\label{eqn:T2ab} T_2 &= \mathcal{K} \int_0^{\infty}\frac{\xi(u)}{u} \e{ \frac{\alpha y}{ u} + \frac{\beta z}{u}} J_{k - 1} \pr{\frac{4\pi \sqrt{\alpha \beta yz}}{ u}} L \hat w(L u) \> du + O(q^{-A})\\ &= \mathcal{K} \int_0^{\infty}\frac{1}{u} \e{ \frac{\alpha y}{u} + \frac{\beta z}{ u}} J_{k - 1} \pr{\frac{4\pi \sqrt{\alpha \beta yz}}{u}} L \hat w(L u) \> du + O(q^{-A})\\ &= 2\pi \int_0^{\infty} w \pr{\frac{\ell}{L}} J_{k - 1} \pr{4\pi \sqrt{\alpha y\ell}} J_{k - 1} \pr{4\pi \sqrt{ \beta z\ell}} \> d\ell + O(q^{-A}), } where the last equality comes from Plancherel's formula and (\ref{int:bessel}). \end{proof}

Next, the following lemma deals with the sum and the integral involving $\ell$.
\begin{lemma} \label{lem:evalsumwtozeta} Let $w$ be a smooth function on $\mathbb R^+$ with $w(x) = 1$ if $0 \leq x \leq 1,$ and $w(x) = 0$ if $x > 2.$ Also, let $\gamma$ be a complex number with $\textup{Re } \gamma \ll \frac{1}{\log q}$ and $\textup{Re } \gamma < 0$, and let $L = q^{100}.$ Then \est{\sum_{\ell = 1}^{\infty} w \pr{\frac \ell L} \frac{1}{\ell^{1 + \gamma}} - \int_0^{\infty} w \pr{\frac \ell L} \frac{1}{\ell^{1+ \gamma}} \> d\ell = \zeta(1 + \gamma) + O(q^{-20}).} \end{lemma}

\begin{proof} Let $\tilde w(z)$ be the Mellin transform of $w,$ defined by \est{\tilde w(z) = \int_0^{\infty} w(t) \frac{t^z}{t} \> dt.} From the definition, $\tilde w(z)$ is analytic for $\textup{Re } z > 0$, and integration by parts gives $$\tilde w(z) = -\frac 1z\int_0^{\infty} w'(t) t^z dt, $$ so $\tilde w(z)$ can be analytically continued to $\textup{Re } z > -1$ except at $z = 0$, where it has a simple pole with residue $w(0) = 1$. For $\sigma > \max\{0, -\textup{Re } \gamma\},$ we have \es{ \label{sum:ellMellin} \sum_{\ell = 1}^{\infty} w \pr{\frac \ell L} \frac{1}{\ell^{1+\gamma}} &= \sum_{\ell = 1}^{\infty} \frac{1}{2\pi i} \int_{(\sigma)} \tilde w(z) \pr{\frac \ell L}^{-z} \frac{1}{\ell^{1 + \gamma }} \> dz \\ &= \frac{1}{2\pi i} \int_{(\sigma)} \tilde w(z) L^z \zeta(1 + \gamma + z) \> dz.} Shifting the contour to $\textup{Re }(z) = - 1/4$, we have that (\ref{sum:ellMellin}) is \est{\zeta(1 + \gamma) + \tilde w(-\gamma) L^{-\gamma} + O(q^{-20}). } The lemma follows from noting that $\tilde w(-\gamma) L^{-\gamma} =\int_0^{\infty} w \pr{\frac \ell L} \frac{1}{\ell^{1+ \gamma}} \> d\ell$. \end{proof}

\subsection{Calculation of residues} \label{sec:calRes}

In this section, we will calculate $$ \Res_{s_1 = 1 - \alpha_1} \Res_{s_2 = 1 + \beta_1} \pr{\mathcal T_{M, \boldsymbol \alpha, \boldsymbol \beta}^+(c,x, s_1, s_2) + \mathcal T_{M, \boldsymbol \alpha, \boldsymbol \beta}^-(c,x, s_1, s_2)}.$$ To do this, we essentially need to consider \est{ \Res_{s_1 = 1 - \alpha_1} D_3\pr{s_1, \pm \frac{\lambda_1}{\eta_1}, \boldsymbol \alpha} y^{s_1 - 1},} where $\frac{\lambda_1 }{\eta_1} $ is defined in (\ref{eqn:lambda1eta1}). Let $(a_1b_1, a_2b_2) = \lambda,$ $a_1b_1 = u_1 \lambda, a_2b_2 = u_2 \lambda,$ where $(u_1,u_2) =1. $ Note that $(u_1x - u_2, c) = (u_2\bar x - u_1, c) = \delta.
$ Hence $$\delta_1 := ((a_2b_2\bar x - a_1b_1)a_1, a_2b_2c) = \lambda((u_2\bar x - u_1)a_1, u_2c) = \lambda \delta (a_1, u_2 c/\delta),$$ and $\lambda_1 = \bar q(a_2b_2\bar x-a_1b_1)a_1/\delta_1$ and $\eta_1 = a_2b_2c/\delta_1.$ By (\ref{eqn:OriginalD}), we obtain that \est{ \mathcal R_1\left(\frac c\delta, \mathbf a, \mathbf b\right) &:= \Res_{s_1 = 1 - \alpha_1} D_3\pr{s_1, \pm \frac{\lambda_1}{\eta_1}, \boldsymbol \alpha} \\ &= \frac{1}{\eta_1^{2 - 2\alpha_1 + \alpha_2 + \alpha_3}} \sumtwo_{\substack{1 \leq a_2, a_3 \leq \eta_1 \\ \eta_1 | a_2a_3}}\zeta\pr{1 - \alpha_1 + \alpha_2, \frac{a_2}{\eta_1}}\zeta\pr{1 - \alpha_1 + \alpha_3, \frac{a_3}{\eta_1} }.} Hence \est{ \Res_{s_1 = 1 - \alpha_1} D_3\pr{s_1, \pm \frac{\lambda_1}{\eta_1}, \boldsymbol \alpha} y^{s_1 - 1} =\mathcal R_1\left(\frac c\delta, \mathbf a, \mathbf b\right) y^{-\alpha_1}.} Similarly, we let $ \mathcal R_2(c, \mathbf a, \mathbf b) := \Res_{s_2 = 1 + \beta_1} D_3\pr{s_2, \pm \frac{\lambda_2}{\eta_2}, -\boldsymbol \beta} $. Then \est{\mathcal R_2\pr{\frac c\delta, \mathbf a, \mathbf b} = \frac{1}{\eta_2^{2 + 2\beta_1 - \beta_2 - \beta_3}} \sumtwo_{\substack{1 \leq a_2, a_3 \leq \eta_2 \\ \eta_2 | a_2a_3}}\zeta\pr{1 + \beta_1 - \beta_2, \frac{a_2}{\eta_2}}\zeta\pr{1 + \beta_1 - \beta_3, \frac{a_3}{\eta_2} },} and \est{\Res_{s_2 = 1 + \beta_1} D_3\pr{s_2, \pm \frac{\lambda_2}{\eta_2}, -\boldsymbol \beta} z^{s_2 - 1} =\mathcal R_2\left(\frac c\delta, \mathbf a, \mathbf b\right) z^{\beta_1}.}

\subsection{Computing $R_{\alpha_1, \beta_1}$}

From the previous section, $R_{\alpha_1, \beta_1}$ can be written as \est{ R_{\alpha_1, \beta_1} = \frac{2\pi}{q} &\sumfour_{\substack{a_1, b_1, a_2, b_2 \geq 1 \\ (a_1a_2b_1b_2, q) = 1}} \frac{ \mathscr B(a_1, b_1; \boldsymbol \alpha)}{a_1^{\frac 32}b_1} \frac{ \mathscr B(a_2, b_2; -\boldsymbol \beta)}{a_2^{\frac 32}b_2} \sumd_M \sumd_N \mathcal A({\bf a}, {\bf b}, M, N) + O(q^{-1/2 + \varepsilon}),} where $\mathcal A({\bf a}, {\bf b}, M, N)$ is defined as \es{\label{def:AabMN} \sum_{c = 1}^{\infty} \frac{\mathcal F (c)}{c}\sumstar_{x \mod c} \mathcal R_1\left(\frac c\delta, \mathbf a, \mathbf b\right) \mathcal R_2\left(\frac c\delta, \mathbf a, \mathbf b\right) , } and \est{ \mathcal F(c) := \mathcal F_{\boldsymbol \alpha, \boldsymbol \beta}(c, {\bf a}, {\bf b}) := \int_{0}^{\infty} \int_0^{\infty} & \frac{1}{y^{1/2 + \alpha_1}} \frac{1}{z^{1/2 - \beta_1}} V_{\boldsymbol \alpha, \boldsymbol \beta} \pr{a_1^3b_1^2y, a_2^3b_2^2z; q} J_{k-1} \pr{\frac{4\pi \sqrt{a_2ya_1z}}{cq}} \times \\ &\times f\pr{\frac yN} f\pr{\frac zM} \mathcal K \e{\frac{a_1^2b_1y}{cqa_2b_2} + \frac{a_2^2b_2z}{cqa_1b_1}} \>dy \> dz .} We remark that we can extend the sum over $c$ to all positive
integers in a similar manner as in the truncation argument in Proposition \ref{prop:truncateC}. Now, we let $$\frac{1}{c^2}\mathcal G(c, \mathbf a, \mathbf b) := \mathcal R_1(c, \mathbf a, \mathbf b) \mathcal R_2(c, \mathbf a, \mathbf b),$$ so that we can write the sum over $c$ in (\ref{def:AabMN}) as \est{ &\sum_{c = 1}^\infty \frac{\mathcal F (c)}{c} \sum_{\delta | c} \sumstar_{\substack{x \mod c \\ (u_1x-u_2, c) = \delta}} \frac{1}{(c/\delta)^2}\mathcal G \pr{\frac{c}{\delta}, {\bf a}, {\bf b}} = \sum_{\delta = 1}^\infty \frac{1}{\delta} \sum_{c = 1}^\infty \frac{\mathcal F(c\delta) }{c} \sumstar_{\substack{x \mod {c\delta} \\ (u_1x-u_2, c\delta) = \delta}} \frac{\mathcal G(c, {\bf a}, {\bf b})}{c^2} \\ &= \sum_{\delta = 1}^\infty \frac{1}{\delta} \sum_{c = 1}^\infty \frac{\mathcal F(c\delta) }{c} \sumstar_{\substack{x \mod {c\delta} } } \sum_{b | \pr{\frac{u_1x-u_2}{\delta}, c}} \frac{\mu(b) \mathcal G(c, {\bf a}, {\bf b})}{c^2} \\ &= \sum_{\substack{\delta \geq 1 \\ (\delta, u_1u_2) = 1}} \frac{1}{\delta} \sum_{\substack{b \geq 1 \\ (b, u_1u_2) = 1}} \frac{\mu(b)}{b^3} \sum_{c \geq 1} \frac{\mathcal G(cb, {\bf a}, {\bf b}) \mathcal F(cb\delta)}{c^3} \sumstar_{\substack{x \mod {c\delta b} \\ x \equiv u_2 \bar u_1 \mod {b\delta}} } 1 ,} where the sum over $x$ is 0 if $(u_1u_2, b\delta) \neq 1$ since $(u_1, u_2) = 1.$ Applying Lemma \ref{lem:countingnumberX} to the sum over $x$, we then obtain that \est{ & \mathcal A({\bf a}, {\bf b}, M, N)= \sum_{\substack{\delta \geq 1 \\ (\delta, u_1u_2) = 1}} \frac{1}{\delta} \sum_{\substack{b \geq 1\\ (b, u_1u_2) = 1}} \frac{\mu(b)}{b^3} \sum_{c \geq 1} \frac{1}{c^2} \prod_{\substack{p| c \\ p \nmid b\delta}} \pr{1 - \frac 1p} \mathcal F(cb\delta) \ \mathcal G(cb, {\bf a}, {\bf b}) \\ &= \sum_{\substack{h \geq 1 \\ h | u_1u_2}} \frac{\mu(h)}{h} \sum_{\delta \geq 1} \frac{1}{\delta} \sum_{\substack{b \geq 1\\ (b, u_1u_2) = 1}} \frac{\mu(b)}{b^3} \sum_{c \geq 1} \frac{1}{c^2} \prod_{\substack{p| c \\ p \nmid b h \delta}} \pr{1 - \frac 1p} \mathcal F(cbh\delta ) \ \mathcal G(cb, {\bf a}, {\bf b}) \\ &= \sum_{\substack{h \geq 1 \\ h | u_1u_2}} \frac{\mu(h)}{h} \sum_{\substack{b \geq 1 \\ (b, u_1u_2) = 1}} \frac{\mu(b)}{b^3} \sum_{c \geq 1} \frac{\mathcal G(cb, {\bf a}, {\bf b})}{c^2} \sum_{\gamma | c} \prod_{\substack{p| c \\ p \nmid bh\gamma}} \pr{1 - \frac 1p} \sum_{\delta \geq 1} \frac{1}{\delta} \mathcal F(cbh\delta) \sum_{g | (c/\gamma, \delta/\gamma)} \mu(g) \\ &= \sumsharp \mathscr G_{ {\bf a}, {\bf b}}(1; h, b, c, \gamma, g)
\sum_{\delta \geq 1} \frac{1}{\delta} \mathcal F(cbhg\gamma \delta), } where \es{ \label{def:Gscr}\sumsharp \mathscr G_{ {\bf a}, {\bf b}}(s; h, b, c, \gamma, g) = \sum_{\substack{ h \geq 1 \\ h | u_1u_2}} \frac{\mu(h)}{h^s} \sum_{\substack{b \geq 1 \\ (b, u_1u_2) = 1}} \frac{\mu(b)}{b^{2 + s}} \sum_{c \geq 1} \frac{\mathcal G(cb, {\bf a}, {\bf b})}{c^{1 + s}} \sum_{\gamma | c} \frac{1}{\gamma^s} \sum_{g | \frac{c}{\gamma}} \frac{\mu(g)}{g^s} \prod_{\substack{p| c \\ p \nmid bh\gamma}} \pr{1 - \frac 1p}.} Next, applying Lemma \ref{lem:sumT} to the sum over $\delta,$ and summing $\sumd_M \sumd_N$, we have that \est{&\sumd_M \sumd_N \mathcal A ({\bf a}, {\bf b}, M, N) \\ &= 2\pi \frac{1}{2\pi i} \pr{\frac{q}{4\pi^2}}^{\delta(\boldsymbol \alpha, \boldsymbol \beta)}\int_{-\infty}^{\infty} \int_{(1)} \sumsharp \mathscr G_{ {\bf a}, {\bf b}}(1; h, b, c, \gamma, g) \pg{\sum_{\ell = 1}^{\infty} w \pr{\frac \ell L} - \int_0^{\infty} w \pr{\frac \ell L} \> d\ell} \\ &\ \ \ \times G\pr{\frac 12 + s; \boldsymbol \alpha + it, \boldsymbol \beta + it} H(s; \boldsymbol \alpha, \boldsymbol \beta) \pr{\frac{a_2^3b_2^2}{a_1^3b_1^2}}^{it} (a_1^3b_1^2a_2^3b_2^2)^{-s} \frac{q^{3s}}{(4\pi^2)^{3s}} \\ & \ \ \ \times \int_{0}^{\infty} \int_0^{\infty} \frac{y^{-\tfrac 12 - \alpha_1}}{y^{ s + it}} \frac{z^{-\tfrac 12 + \beta_1}}{z^{ s - it}} J_{k - 1} \pr{4\pi \sqrt{\frac{ a_1^2b_1y\ell}{a_2b_2cbhg\gamma q }}} J_{k - 1} \pr{4\pi \sqrt{\frac{ a_2^2b_2z\ell}{a_1b_1cbhg\gamma q }}} \>dy \> dz \frac{ds}{s} \> dt .
} The integration over $y$ and $z$ can be evaluated by Equation 707.14 in \cite{GR}, which is \est{ \int_0^{\infty} v^{\mu} (vk)^{\frac 12} J_{\nu} (vk) \> dv = 2^{\mu + 1/2} k^{-\mu - 1} \frac{\Gamma\pr{\tfrac \mu 2 + \tfrac \nu 2 + \tfrac 34}}{\Gamma\pr{\tfrac \nu 2 - \tfrac \mu 2 + \tfrac 14}},} for $- \textup{Re }\nu - \frac 32 < \textup{Re }\mu < 0.$ Then we apply Lemma \ref{lem:evalsumwtozeta} to the sum and the integration over $\ell.$ Therefore, after summing over $a_1, a_2, b_1, b_2$, we obtain that the main term of $R_{\alpha_1, \beta_1}$ is \es{ \label{eqn:mainint} & \frac{1}{2\pi i} \pr{\frac{q}{4\pi^2}}^{\delta(\pi(\boldsymbol \alpha), \pi(\boldsymbol \beta))}\int_{-\infty}^{\infty} \int_{(\varepsilon)} G_{\alpha_1, \beta_1}\pr{\frac 12 + s; \boldsymbol \alpha + it, \boldsymbol \beta + it} H(s; \boldsymbol \alpha, \boldsymbol \beta) \mathcal M_{\alpha_1, \beta_1}(s) \\ & \hskip 1.5 in \times \zeta(1 -\alpha_1 + \beta_1 - 2s) \> \frac{q^{s}}{(4\pi^2)^{s}}\frac{ds}{s} \> dt,} where $(\pi(\boldsymbol \alpha), \pi(\boldsymbol \beta)) = (\beta_1, \alpha_2, \alpha_3; \alpha_1, \beta_2, \beta_3)$, \est{G_{\alpha_i, \beta_j}\pr{\frac 12 + s; \boldsymbol \alpha, \boldsymbol \beta} = \Gamma\pr{\frac k2 - s - \alpha_i} \Gamma\pr{\frac k2 - s + \beta_j } \prod_{\ell \neq i} \Gamma\pr{\frac k2 + s + \alpha_\ell } \prod_{\ell \neq j}\Gamma\pr{\frac k2 + s - \beta_\ell } ,} and \es{\label{def:Malbeta1} \mathcal M_{\alpha_1, \beta_1}(s) &:= \sumfour_{\substack{a_1, b_1, a_2, b_2 \geq 1 \\ (a_1a_2b_1b_2, q) = 1}} \frac{ \mathscr B(a_1, b_1; \boldsymbol \alpha)}{a_1^{2-2\alpha_1 - \beta_1 + 2s}b_1^{1-\alpha_1-\beta_1 + 2s}} \frac{ \mathscr B(a_2, b_2; -\boldsymbol \beta)}{a_2^{2 + \alpha_1 + 2\beta_1 + 2s}b_2^{1 + \alpha_1 + \beta_1 + 2s}} \\ &\hskip 0.3in \times \sumsharp \mathscr G_{ {\bf a}, {\bf b}}(2s+\alpha_1 - \beta_1; h, b, c, \gamma, g).} Now, in the ensuing discussion, we temporarily assume that $\textup{Re }(\alpha_1) < \textup{Re }(\alpha_2) , \textup{Re }(\alpha_3) $ and $\textup{Re }(\beta_1) > \textup{Re }(\beta_2) , \textup{Re }(\beta_3) $.
In this region, \est{\mathcal R_1\left(\frac c\delta, \mathbf a, \mathbf b\right) &= \sumtwo_{\substack{n_2, n_3 \geq 1 \\ \frac{u_2c}{\delta(a_1, u_2c/\delta)} \ |\ n_2n_3}} \frac{1}{n_2^{1 + \alpha_2 - \alpha_1}n_3^{1 + \alpha_3 - \alpha_1}}} and \est{ \Res_{s_1 = 1 - \alpha_1} D_3\pr{s_1, \pm \frac{\lambda_1}{\eta_1}, \boldsymbol \alpha} y^{s_1 - 1} &= \mathcal R_1\left(\frac c\delta, \mathbf a, \mathbf b\right) y^{-\alpha_1} = y^{-\alpha_1}\sumtwo_{\substack{n_2, n_3 \geq 1 \\ \frac{u_2c}{\delta(a_1, u_2c/\delta)} \ |\ n_2n_3}} \frac{1}{n_2^{1 + \alpha_2 - \alpha_1}n_3^{1 + \alpha_3 - \alpha_1}}.} Similarly, \est{\Res_{s_2 = 1 + \beta_1} D_3\pr{s_2, \pm \frac{\lambda_2}{\eta_2}, -\boldsymbol \beta} z^{s_2 - 1} = \mathcal R_2\left(\frac c\delta, \mathbf a, \mathbf b\right) z^{\beta_1} = z^{\beta_1}\sumtwo_{\substack{m_2, m_3 \geq 1 \\ \frac{u_1c}{\delta(a_2, u_1c/\delta)} \ |\ m_2m_3}} \frac{1}{m_2^{1 + \beta_1 - \beta_2}m_3^{1 + \beta_1 - \beta_3}} .} Thus, \es{\label{def:Gc} &\frac{1}{c^2} \mathcal G(c, {\bf a}, {\bf b}) = \sumtwo_{\substack{n_2, n_3 \geq 1 \\ \frac{u_2c}{(a_1, u_2c)} \ |\ n_2n_3}} \frac{1}{n_2^{1 + \alpha_2 - \alpha_1}n_3^{1 + \alpha_3 - \alpha_1}} \sumtwo_{\substack{m_2, m_3 \geq 1 \\ \frac{u_1c}{(a_2, u_1c)} \ |\ m_2m_3}} \frac{1}{m_2^{1 + \beta_1 - \beta_2}m_3^{1 + \beta_1 - \beta_3}}\\ &= \frac{(a_1, u_2c)(a_2, u_1c)}{u_1u_2c^2} \sum_{\substack{n = 1}}^\infty \frac{\sigma_2\pr{\frac{u_2cn}{(a_1,u_2c)}; \alpha_2 - \alpha_1, \alpha_3 - \alpha_1}}{n} \sum_{\substack{m = 1}}^\infty \frac{\sigma_2\pr{\frac{u_1cm}{(a_2, u_1c)}; \beta_1 - \beta_2, \beta_1 - \beta_3}}{m}.} From this, we may then check that \es{\label{def:Malbeta} \mathcal M_{\alpha_1, \beta_1}(s) &= \prod_{j = 2}^3\zeta(1 + 2s + \alpha_j - \beta_1 ) \zeta(1 + 2s + \alpha_1 - \beta_j) \mathcal J_{\alpha_1, \beta_1}(s),} where $\mathcal J_{\alpha_1, \beta_1}$ is absolutely convergent on the line $\textup{Re }(s) = -1/4 + \varepsilon.$ Although we have a priori only verified \eqref{def:Malbeta} for the region $\textup{Re }(\alpha_1) < \textup{Re }(\alpha_2) , \textup{Re }(\alpha_3) $ and $\textup{Re }(\beta_1) > \textup{Re }(\beta_2) , \textup{Re }(\beta_3) $, we see that \eqref{def:Malbeta} must hold for all values of $\alpha_i, \beta_j$ by analytic continuation.
We note that the pole of $\zeta(1-\alpha_1+\beta_1 - 2s)$ at $s = (\alpha_1 - \beta_1)/2$ and the poles of $\zeta(1 + 2s + \alpha_i - \beta_j)$ at $s = (\beta_j - \alpha_i)/2$ cancel with the zeros at the same points from $H(s; \boldsymbol \alpha, \boldsymbol \beta)$. Thus, the integrand in \eqref{eqn:mainint} has only a simple pole at $s = 0$ and is otherwise analytic for all values of $s$ with $\textup{Re } s > -1/4 + \varepsilon$. Moving the line of integration to $\textup{Re }(s) = -1/4 + \varepsilon,$ we then obtain the main term \est{ &\zeta(1 -\alpha_1 + \beta_1 ) \mathcal M_{\alpha_1, \beta_1}(0) \pg{\pr{\frac{q}{4\pi^2}}^{\delta(\pi(\boldsymbol \alpha), \pi(\boldsymbol \beta))} H(0; \boldsymbol \alpha, \boldsymbol \beta) \int_{-\infty}^{\infty} G\pr{\frac 12; \pi(\boldsymbol \alpha) + it, \pi(\boldsymbol \beta) + it} \> dt},} with negligible error term. To finish the proof of Proposition \ref{prop:mainTM}, we will show that the local factor at a prime $p$ of the Euler product of $\zeta(1 -\alpha_1 + \beta_1 )\mathcal M_{\alpha_1, \beta_1}(0)$ is the same as the one in $\mathcal A \mathcal Z\pr{\tfrac 12; \pi(\boldsymbol \alpha), \pi(\boldsymbol \beta)}$ defined in (\ref{def:mathcalZp}) and (\ref{def:mathcalAs}). The details of this are in Appendix \ref{sec:Eulerverif}.

\section{Proof of Proposition \ref{prop:Terror}} \label{sec:properror}

To prove the proposition, it suffices to show that $\mathcal E_i^+(q; \boldsymbol \alpha, \boldsymbol \beta) \ll q^{-1/4 + \varepsilon}$ for $i = 1$ and $i = 5$, since the proofs of the upper bounds for the other terms are similar. We start with a lemma that will be used in the proof.

\begin{lemma} \label{lem:boundres} Let $\lambda, \eta$ be integers such that $(\lambda, \eta) = 1$ and $\lambda, \eta \ll q^{A}$, where $A$ is a fixed constant. Moreover, assume that for $i,j = 1, 2, 3,$ we have $\alpha_i \ll \frac 1{\log q}$ and $|\alpha_i - \alpha_j| \gg \frac{1}{q^{\varepsilon_1}}$ when $i \neq j.$ Then for $\varepsilon > \varepsilon_1,$ \est{\Res_{s = 1 - \alpha_i} D_3\pr{s, \frac{\lambda}{\eta}, \boldsymbol \alpha} \ll \frac{q^{\varepsilon}}{\eta},} where $D_3\pr{s,\frac \lambda\eta, \boldsymbol \alpha }$ is defined in (\ref{eqn:OriginalD}). \end{lemma}

\begin{proof} By symmetry, it suffices to prove the statement for the residue at $1- \alpha_1.$ For $\textup{Re }(s) > 1 + \textup{Re }(\alpha_1 - \alpha_j)$, where $j = 2, 3$, let \est{D(s) &:= \sum_{\substack{m = 1 \\ \eta| m}}^{\infty} \frac{\sigma_2(m; \alpha_2 - \alpha_1, \alpha_3 - \alpha_1)}{m^s} \\ &= \frac{1}{\eta^s}\sum_{d |\eta} \frac{\mu(d)\sigma_2\pr{\frac{\eta}{d}; \alpha_2 - \alpha_1, \alpha_3 - \alpha_1}}{d^{s + \alpha_2 + \alpha_3 - 2\alpha_1}} \zeta( s + \alpha_2 - \alpha_1)\zeta(s + \alpha_3 - \alpha_1),} where we have used Lemma \ref{lem:multofsigma_k} to derive the last line.
Now, $D(s)$ can be continued analytically to the whole complex plane except for poles at $s = 1 + \boldsymbol \alphapha_1 - \boldsymbol \alphapha_j$ for $j = 2, 3.$ Moreover, $D(1) \ll \frac{q^{\varepsilon}}{\eta}.$ For $i = 1, 2, 3,$ and $\textup{Re }(s + \boldsymbol \alphapha_i) > 1$, the sum in the Lemma can be rewritten as \est{ & \frac{1}{\eta^{3s + \boldsymbol \alphapha_1 + \boldsymbol \alphapha_2 + \boldsymbol \alphapha_3}} \operatorname{\mathcal{S}}umthree_{r_1, r_2, r_3 \mod \eta}\e{\frac{\lambda r_1 r_2r_3}{\eta}}\zeta\pr{s+ \boldsymbol \alphapha_1; \frac{r_1}{\eta}}\zeta\pr{s+ \boldsymbol \alphapha_2; \frac{r_2}{\eta}}\zeta\pr{s+ \boldsymbol \alphapha_3; \frac{r_3}{\eta}}. } This sum can be analytically continued to the whole complex plane except for poles at $s = 1 - \boldsymbol \alphapha_i$ for $i = 1, 2, 3.$ After some arrangement, the contribution of the residue at $s = 1 - \boldsymbol \alphapha_1$ is \est{ &\frac{1}{\eta^{2 + \boldsymbol \alphapha_2 + \boldsymbol \alphapha_3 - 2\boldsymbol \alphapha_1}} \operatorname{\mathcal{S}}umtwo_{\operatorname{\mathcal{S}}ubstack{r_2, r_3 \mod \eta\\ \eta | r_2r_3}}\zeta\pr{1 + \boldsymbol \alphapha_2 - \boldsymbol \alphapha_1; \frac{r_2}{\eta}}\zeta\pr{1 + \boldsymbol \alphapha_3 - \boldsymbol \alphapha_1; \frac{r_3}{\eta}} = D(1), } and the Lemma follows. \end{proof} \operatorname{\mathcal{S}}ubsection{Bounding $\mathcal E_1^+(q; \boldsymbol \alpha, \boldsymbol \beta)$} \label{sec:proofE1} With the same notation as in Section \ref{sec:provepropTM} and $\mathscr B$ defined as in \eqref{eqn:defB}, we recall that \est{ \mathcal E_{1, \boldsymbol \alpha, \boldsymbol \beta}^+(q; \boldsymbol \alpha, \boldsymbol \beta) = \frac {2\pi}{q}&\operatorname{\mathcal{S}}umfour_{\operatorname{\mathcal{S}}ubstack{a_1, b_1, a_2, b_2 \geq 1 \\ (a_1a_2b_1b_2, q) = 1}} \frac{ \mathscr B(a_1, b_1; \boldsymbol \alpha)}{a_1^{\frac 32}b_1} \frac{ \mathscr B(a_2, b_2; -\boldsymbol \beta)}{a_2^{\frac 32}b_2} \operatorname{\mathcal{S}}umd_M \operatorname{\mathcal{S}}umd_N E_{1, \boldsymbol \alpha, \boldsymbol \beta}^+(\mathbf{a}, \mathbf{b}, M, N) ,} where \est{E_{1, \boldsymbol \alpha, \boldsymbol \beta}^+(\mathbf{a}, \mathbf{b}, M, N) := \operatorname{\mathcal{S}}um_{c < C} \operatorname{\mathcal{S}}umstar_{x \bmod \delta c} \frac{\mathcal T_{1, \boldsymbol \alpha, \boldsymbol \beta}^+(c,x)}{c} = \operatorname{\mathcal{S}}um_{\delta < C} \frac 1\delta \operatorname{\mathcal{S}}um_{c < \frac{C}{\delta}} \operatorname{\mathcal{S}}umstar_{\operatorname{\mathcal{S}}ubstack{x \bmod \delta c \\ (u_1x-u_2, c\delta) = \delta}} \frac{\mathcal T^+_{1, \boldsymbol \alpha, \boldsymbol \beta}(c\delta, x)}{c},} \est{ \mathcal T_{1, \boldsymbol \alpha, \boldsymbol \beta}^+ (c\delta,x) &:= \frac{\pi^{3/2 + \boldsymbol \alphapha_1 + \boldsymbol \alphapha_2 + \boldsymbol \alphapha_3}}{\eta_1^{3+\boldsymbol \alphapha_1 + \boldsymbol \alphapha_2 + \boldsymbol \alphapha_3}} \operatorname{\mathcal{S}}um_{i = 1}^3 \mathbb{R}es_{s = 1 + \beta_i} D_3\pr{s, \frac{\lambda_2}{\eta_2}, -\boldsymbol \beta} \operatorname{\mathcal{S}}um_{n = 1}^{\infty} A_3\pr{n, \frac{\lambda_1}{\eta_1}, \boldsymbol \alpha} \mathcal F_1^+(c\delta,n; \boldsymbol \alpha, \boldsymbol \beta),} for $\lambda_1 = \frac{\bar q (u_2\bar x - u_1)a_1}{\delta(a_1, u_2c)},$ $\eta_1 = \frac{u_2 c}{(a_1, u_2c)},$ $\lambda_2 = \frac{\bar q (u_1 x - u_2)a_2}{\delta(a_2, u_1c)},$ $\eta_2 = \frac{u_1 c}{(a_2, u_1c)},$ $u_i = \frac{a_ib_i}{(a_1b_1, a_2b_2)}$ and \est{\mathcal F_1^+(c\delta,n; \boldsymbol \alpha, \boldsymbol 
\beta) &= \int_{0}^{\infty} \int_0^{\infty} \frac{1}{y^{1/2 }} \frac{z^{s - 1}}{z^{1/2}} V_{\boldsymbol \alpha, \boldsymbol \beta} \pr{a_1^3b_1^2y, a_2^3b_2^2z; q} J_{k-1} \pr{\frac{4\pi \operatorname{\mathcal{S}}qrt{a_2ya_1z}}{c\delta q}} f\pr{\frac yN} f\pr{\frac zM} \times \\ &\hskip 0.5in \times i^{- k}\e{\frac{a_1^2b_1y}{c\delta qa_2b_2} + \frac{a_2^2b_2z}{c\delta qa_1b_1}} U_3\pr{\frac{\pi^3ny}{\eta_1^3}; \boldsymbol \alpha} \>dy \> dz. } We first note that the contribution from the terms $a_1^3b_1^2y \gg q^{3/2 + \varepsilon}$ or $a_2^3b_2^2 z \gg q^{3/2 + \varepsilon}$ can be bounded by $q^{-A}$ for any $A$ due to the factor $V_{\boldsymbol \alphapha, \beta}(a_1^3b_1^2y, a_2^3b_2^2 z; q).$ So from now on we assume $a_1^3b_1^2N \ll q^{3/2 + \varepsilon}$ and $a_2^3b_2^2 M \ll q^{3/2 + \varepsilon}.$ Moreover, the dyadic sum over $M$ and $N$ contains only $\ll \log^2q$ terms, so it suffices to prove that \begin{equation} E_{1, \boldsymbol \alpha, \boldsymbol \beta}^+(\mathbf{a}, \mathbf{b}, M, N)\ll a_1^{1/2}q^{3/4+\varepsilon}, \end{equation}for fixed $\mathbf{a}, \mathbf{b}, M, N $ satisfying $a_1^3b_1^2N \ll q^{3/2 + \varepsilon}$ and $a_2^3b_2^2 M \ll q^{3/2 + \varepsilon}$. On a first reading, the reader may set $a_1 = a_2 = b_1 = b_2 = 1$ as this simplifies the notation without substantially changing the calculation. We now write \begin{equation} E_{1, \boldsymbol \alpha, \boldsymbol \beta}^+(\mathbf{a}, \mathbf{b}, M, N) = H_1+ H_2, \end{equation} where $H_1$ is the contribution from the sum over $n \le \frac{\eta_1^3}{N} q^\varepsilon$, and $H_2$ is the rest. \operatorname{\mathcal{S}}ubsubsection{Bounding $H_1$} \label{sec:Esm} By (\ref{lem:asympUVxsmall}), $ U_3 \pr{\frac{\pi^3 n y}{\eta_1^3}} \ll q^{\varepsilon}$ when $n \ll \frac{\eta_1^3 q^\varepsilon }{N} $. This and (\ref{bound:Bessel1}) gives us that $$\mathcal F_1^+(c\delta,n; \boldsymbol \alpha, \boldsymbol \beta) \ll M^{1/2}N^{1/2}q^\varepsilon \min\pg{\pr{\frac{\operatorname{\mathcal{S}}qrt{a_1a_2MN}}{c\delta q}}^{-\frac 12},\pr{\frac{\operatorname{\mathcal{S}}qrt{a_1a_2MN}}{c\delta q}}^{k-1}}. $$ Then, from Lemma \ref{lem:boundres}, Lemma \ref{lem:A3B3Kloos}, (\ref{def:Ak}), and using the fact that $(a_2, u_1c) \leq a_2,$ and $\frac{1}{(a_1, u_2c)} \leq 1$, we obtain that $H_1 $ is bounded by \est{& M^{\frac 12}N^{\frac 12} q^{\varepsilon} \operatorname{\mathcal{S}}um_{\delta < C} \operatorname{\mathcal{S}}um_{c < \frac C\delta} \frac{1}{\eta_1^3 \eta_2}\operatorname{\mathcal{S}}um_{n \ll \frac{\eta_1^3q^\varepsilon}{N}} \operatorname{\mathcal{S}}um_{\operatorname{\mathcal{S}}ubstack{h | \eta_1, \ h | n^2}} \eta_1^{\frac 32}\operatorname{\mathcal{S}}qrt h \min\pg{\pr{\frac{\operatorname{\mathcal{S}}qrt{a_1a_2MN}}{c\delta q}}^{-\frac 12},\pr{\frac{\operatorname{\mathcal{S}}qrt{a_1a_2MN}}{c\delta q}}^{k-1}} \\ &\ll M^{\frac 54} N^{\frac 14} q^{\varepsilon} \frac{a_1^{\frac 34} a_2^{\frac 74}u_2^{\frac 32}}{u_1q^{\frac 32}} \ll q^{3/4+\varepsilon}, } as desired. In the above, we have used $(a_1b_1, a_2b_2) \geq 1$, and $a_1^3b_1^2N \ll q^{3/2 + \varepsilon}$ and $a_2^3b_2^2 M \ll q^{3/2 + \varepsilon}$. 
\operatorname{\mathcal{S}}ubsubsection{Bounding $H_2$} \label{sec:proofE1b} We start from re-writing $\mathcal F_1^+(c\delta, n; \boldsymbol \alpha, \boldsymbol \beta)$ as \est{ &\int_{-\infty}^{\infty} \int_{(\frac 1{\log q})} V_1(s, t) \int_{0}^{\infty} z^{\beta_1} F_{s - it}\pr{\frac zM} \e{\frac{a_2^2b_2z}{c\delta qa_1b_1}}\mathcal I(n, z) \> dz \> \frac{ds}{s} \> dt,} where $V_1(s,t) := V_1({\bf a}, {\bf b}, \boldsymbol \alpha, \boldsymbol \beta, s, t, M, N) $ is defined as \es{\label{def:V1st} \frac{1}{2\pi i} \pr{\frac{q}{4\pi^2}}^{\delta(\boldsymbol \alpha, \boldsymbol \beta)} \pr{ \frac{a_2^3b_2^2 }{a_1^3b_1^2}}^{it} G\pr{\frac 12 + s; \boldsymbol \alpha + it, \boldsymbol \beta + it} H(s; \boldsymbol \alpha, \boldsymbol \beta) \pr{\frac{q}{4\pi^2}}^{3s} M^{-\frac 12 - s + it} N^{-\frac 12 - s - it},} and $\mathcal I(n, z) := \mathcal I_{\boldsymbol \alpha}(\mathbf{a}, \mathbf{b}, N, n, z, c, \delta)$ is defined as \es{\label{def:IabN} \int_0^{\infty}F_{s + it}\pr{\frac yN} \e{\frac{a_1^2b_1y}{c\delta qa_2b_2} } J_{k-1} \pr{\frac{4\pi \operatorname{\mathcal{S}}qrt{a_2ya_1z}}{c\delta q}} U_3\pr{\frac{\pi^3ny}{\eta_1^3}; \boldsymbol \alpha} \>dy,} and $F_{v} \pr{x} := \frac{1}{x^{\frac 12 + v}} f\pr{x}.$ Note that the $j$-th derivative, $F_{s \pm it}^{(j)}\pr{\frac{y}{N}} = O(|t|^j N^{-j}). $ Note that the trivial bound for $\mathcal F_1^+ (c\delta, n; \boldsymbol \alpha, \boldsymbol \beta)$ is \begin{equation}\label{eqn:Fbdd} \mathcal F_1^+ (c\delta, n; \boldsymbol \alpha, \boldsymbol \beta) \ll \operatorname{\mathcal{S}}qrt{MN} (qn)^\varepsilon. \end{equation} There are two cases to consider: (1) $c > \frac{8\pi\operatorname{\mathcal{S}}qrt{a_1a_2MN}}{\delta q}$, and (2) $c \leq \frac{8\pi\operatorname{\mathcal{S}}qrt{a_1a_2MN}}{\delta q}$. \\ \\ {\bf Case 1: $c > \frac{8\pi\operatorname{\mathcal{S}}qrt{a_1a_2MN}}{\delta q}$.} By (\ref{asympJxSm}), (\ref{lem:asymptUxbig}), and since $\frac{\pi^3 ny}{\eta_1^3} \gg q^\varepsilon$, $\mathcal I(n, z)$ can be written as \est{ & \operatorname{\mathcal{S}}um_{j = 1}^K \bfrac{\pi^3 ny}{\eta_1^3}^{\frac{\beta_1 + \beta_2 + \beta_3}{3}} \pr{\frac{\eta_1}{\pi n^{\frac 13}N^{\frac 13}}}^{j} \operatorname{\mathcal{S}}um_{\ell = 0}^{\infty} \frac{(-1)^{\ell}}{\ell ! (\ell + k - 1)!} \int_0^{\infty} \mathscr F_j(y, z,\ell) \e{\frac{a_1^2b_1y}{c\delta qa_2b_2} } \times \\ & \hskip 2in \times \pg{ c_j \e{\frac{3n^{\frac 13}y^{\frac 13}}{\eta_1}} + d_j \e{-\frac{3n^{\frac 13}y^{\frac 13}}{\eta_1}} } \>dy + O\pr{q^{-\varepsilon (K + 1)}}, } where $c_j, d_j$ are some constants, and \est{ \mathscr F_j(y, z,\ell) = F_{s + it}\pr{\frac yN} \pr{\frac{2\pi \operatorname{\mathcal{S}}qrt{a_2ya_1z}}{c\delta q}}^{2\ell + k -1} \pr{\frac{y}{N}}^{\frac{-j}{3}} } is supported on $y\in [N, 2N]$. Moreover, $\frac{\partial^i \mathscr F_j(y, z, \ell)}{\partial y^i} \ll \frac{1}{2^{2\ell}}\frac {|t|^i}{N^i}$ and $\mathscr F_j(y,z, \ell) \ll 1$. 
Thus, picking $K$ large enough so that $q^{-\varepsilon(K+1)}$ is negligible, it suffices to bound integrals of the form \est{\int_0^{\infty} &\mathscr F_j(y, z, \ell) \e{\theta_z(y, n)} \>dy,} where $$\theta_z(y, n) = \pm \frac{3n^{\frac 13}y^{\frac 13}}{\eta_1} + By , \ \ \ \ \textrm{and} \ \ \ B = \frac{a_1^2b_1}{c\delta q a_2b_2}.$$ Taking the derivative of $\theta_z(y, n)$ with respect to $y$, we have that $$\theta'_z(y, n) = B \pm \frac{n^{\frac 13}}{y^{\frac 23}\eta_1} .$$ When $n \geq 64(B\eta_1)^3N^{2}$ or $n \leq \frac{1}{4}{(B\eta_1)^3N^{2}},$ $|\theta_z'(y, n)| \gg \frac{n^{\frac 13}}{y^{\frac 23}\eta_1} \gg \frac{q^{\varepsilon}}{N},$ since $n \gg \frac{\eta_1^3 q^\varepsilon}{N}$. Thus integrating by parts many times shows that the contribution from these terms is negligible. Therefore we only consider the contribution from when $\frac 1{4}(B\eta_1)^3N^{2} \leq n \leq 64(B\eta_1)^3N^{2}.$ Note however that $$(B\eta_1)^3 N^2 \ll \frac{(a_1^2 b_1)^3N^2}{\delta^3 q^3} \ll \frac{q^\varepsilon}{\delta^3}, $$ and that there are no terms of this form unless $N \gg \frac{q^{3/2}}{(a_1^2 b_1)^{3/2}}$ and $\delta \ll q^\varepsilon$. From (\ref{eqn:Fbdd}), trivially $\mathcal F_1^+(c\delta, n; \boldsymbol \alpha, \boldsymbol \beta) = O(M^{\frac 12} N^{\frac 12}(nq)^\varepsilon).$ Hence the contribution to $H_2$ from these terms is bounded by \est{& M^{\frac 12}N^{\frac 12}q^\varepsilon \operatorname{\mathcal{S}}um_{\delta < q^\varepsilon} \operatorname{\mathcal{S}}um_{c \gg \frac{\operatorname{\mathcal{S}}qrt{a_1a_2MN}}{\delta q}} \frac{1}{\eta_1^{\frac 32}\eta_2} \operatorname{\mathcal{S}}um_{h | \eta_1} \operatorname{\mathcal{S}}qrt h \operatorname{\mathcal{S}}um_{\operatorname{\mathcal{S}}ubstack{n \ll q^\varepsilon \\ h^2 | n}} \frac{\eta_1}{ n^{\frac 13}N^{\frac 13}} \\ &\ll a_1^{\frac 54}b_1^{\frac 12} a_2^{\frac 34} M^{\frac 14}N^{\frac 14} q^{\varepsilon} \ll a_1^{\frac 12}q^{\frac 34+\varepsilon}, } similar to before. 
\\ \\ {\bf Case 2: $c \leq \frac{8\pi\operatorname{\mathcal{S}}qrt{a_1a_2MN}}{\delta q}$.} By (\ref{asympJxbig}), we write $\mathcal I(n, z) $ as \est{ \int_0^{\infty} \pg{R^+(y, z) + R^-(y, z)} \frac{\operatorname{\mathcal{S}}qrt{c\delta q}}{\pi \pr{a_1a_2yz}^{\frac 14} }U_3\pr{\frac{\pi^3ny}{\eta_1^3}; \boldsymbol \alpha} \e{\frac{a_1^2b_1y}{c\delta qa_2b_2} } \>dy , } where \est{R^\pm(y, z) := F_{s + it}\pr{\frac yN} W^\pm\pr{\frac{4\pi\operatorname{\mathcal{S}}qrt{a_1a_2yz}}{c\delta q}}{ \e{\pm \pr{\frac{2\operatorname{\mathcal{S}}qrt{a_1a_2yz}}{c\delta q} - \frac k4 + \frac 18}}},} and $W^+ = W$, $W^- = \overline{ W}.$ Similar to Case 1, we explicitly write $U_3\pr{\frac{\pi^3ny}{\eta_1^3}; \boldsymbol \alpha}$ as in Equation (\ref{lem:asymptUxbig}) so it suffices to bound \est{\operatorname{\mathcal{S}}um_{j = 1}^K \bfrac{\pi^3 ny}{\eta_1^3}^{\frac{\beta_1 + \beta_2 + \beta_3}{3}} &\pr{\frac{\eta_1}{\pi n^{\frac 13}N^{\frac 13}}}^{j}\frac{1}{N^{\frac 14}}\frac{\operatorname{\mathcal{S}}qrt{c\delta q}}{\pi \pr{a_1a_2z}^{\frac 14} }\int_0^{\infty} \mathscr H_j^\pm(y, z)\pg{ c_j \e{\frac{3n^{\frac 13}y^{\frac 13}}{\eta_1}} + d_j \e{-\frac{3n^{\frac 13}y^{\frac 13}}{\eta_1}} } \times \\ &\times \e{\frac{a_1^2b_1y}{c\delta qa_2b_2} }\e{\pm\pr{{\frac{2\operatorname{\mathcal{S}}qrt{a_1a_2yz}}{c\delta q} - \frac k4 + \frac 18}}} \>dy + O\pr{q^{-\varepsilon (K + 1)}}, } where $\mathscr H_j^\pm(y, z) = \frac{F_{s + it}\pr{\frac yN} W^\pm\pr{\frac{4\pi\operatorname{\mathcal{S}}qrt{a_1a_2yz}}{c\delta q}}}{\pr{\frac yN}^{\frac j3 + \frac 14} }$ is supported on $y \in [N, 2N]$. Note that $\frac{\partial^{(i)} \mathscr H_j^\pm(y,z) }{\partial y^i} \ll_{j,i} \frac {|t|^i}{N^i}$ and $\mathscr H_j^\pm(y,z) \ll 1$. Thus, the integration over $y$ is of the form \est{\int_0^{\infty} &\mathscr H^\pm_j(y, z) \e{g_z(y, n)} \>dy,} where $$g_z(y, n) = \pm \frac{3n^{\frac 13}y^{\frac 13}}{\eta_1} \pm \pr{2A\operatorname{\mathcal{S}}qrt y + \frac k4 - \frac 18} + By , \ \ \ \ \ \ A = \frac{\operatorname{\mathcal{S}}qrt{a_1a_2z}}{c\delta q}, \ \ B = \frac{a_1^2b_1}{c\delta q a_2b_2}.$$ Differentiating $g_z(y, n)$ with respect to $y$, we have $$g'_z(y, n) = \pm \frac{n^{\frac 13}}{y^{\frac 23}\eta_1} \pm \frac{A}{y^{\frac 12}} + B.$$ When $a_2^{\frac 32}b_2M^{\frac 12} \geq 4a_1^{\frac 32}b_1N^{\frac 12}, $ it follows that $\frac A{y^{\frac 12}} \geq \frac A{y^{\frac 12}} - B \geq \frac 12 \frac A{y^{\frac 12}} $ and $\frac 32 \frac A{y^{\frac 12}} \geq \frac A{y^{\frac 12}} + B \geq \frac A{y^{\frac 12}}.$ Therefore $$ \frac{1}{2} \frac{A}{y^{\frac 12}} \leq \mathopen{}\mathclose\bgroup\originalleft| \pm \frac{A}{y^{\frac 12}} + B \aftergroup\egroup\originalright| \leq \frac{3}{2} \frac{A}{y^{\frac 12}}.$$ When $n \geq 54(A\eta_1)^3N^{\frac 12}$ or $n \leq \frac{1}{64}(A\eta_1)^3N^{\frac 12},$ we have that $|g_z'(y, n)| \gg \frac{n^{\frac 13}}{y^{\frac 23}\eta_1} \gg \frac{q^{\varepsilon}}{N},$ since $n \gg \frac{\eta_1^3 q^\varepsilon}{N}$. Integrating by parts many times shows that these terms are negligible. We then consider only the terms when $\frac {1}{64}(A\eta_1)^3N^{\frac 12} \leq n \leq 54(A\eta_1)^3N^{\frac 12}.$ Note however that $$(A\eta_1)^3 N^{1/2} \ll \bfrac{\operatorname{\mathcal{S}}qrt{a_1a_2M} u_2c}{c\delta q}^3 N^{1/2} \ll \bfrac{\operatorname{\mathcal{S}}qrt{a_1}}{\delta q^{\frac 14}}^3 N^{1/2} \ll \frac{q^{\varepsilon}}{\delta^3}, $$and that the left side is only $\gg 1$ if $N \gg q^{3/2}/a_1^3$ and $\delta \ll q^\varepsilon$. 
By (\ref{eqn:Fbdd}), the contribution of $\mathcal F_1^+ (c\delta, n; \boldsymbol \alpha, \boldsymbol \beta)$ to the terms in this range is $O(M^{\frac 12} N^{\frac 12}(nq)^\varepsilon).$ So the contribution to $H_2$ from these terms is bounded by \est{& M^{\frac 12}N^{\frac 12} q^\varepsilon\operatorname{\mathcal{S}}um_{\delta < q^\varepsilon} \operatorname{\mathcal{S}}um_{c \ll \frac{\operatorname{\mathcal{S}}qrt{a_1a_2MN}}{\delta q}} \frac{1}{\eta_1^{\frac 32}\eta_2} \operatorname{\mathcal{S}}um_{h | \eta_1} \operatorname{\mathcal{S}}qrt h \operatorname{\mathcal{S}}um_{\operatorname{\mathcal{S}}ubstack{n \ll q^\varepsilon \\ h^2 | n}} \pr{\frac{\eta_1}{ n^{\frac 13}N^{\frac 13}}} \frac{\operatorname{\mathcal{S}}qrt{c\delta q}}{\pr{a_1a_2MN}^{\frac 14} } \\ & \ll a_1^{\frac 54}b_1^{\frac 12} a_2^{\frac 34} M^{\frac 14}N^{\frac 14} q^{\varepsilon} \ll a_1^{\frac 12}q^{\frac 34+\varepsilon} } which suffices. When $4 a_2^{\frac 32}b_2M^{\frac 12} \leq a_1^{\frac 32}b_1N^{\frac 12}, $ we have that $ \frac 12 B < B - \frac A{y^{\frac 12}} < B $ and $\frac 32 B > A{y^{\frac 12}} + B > B.$ By the same arguments as in Case 1, the range of $n$ that should be considered is of the size $(B\eta_1)^3N^{2}$ and give a contribution to $H_2$ bounded by $a_1^{1/2}q^{3/4 + \varepsilon}.$ When $\frac 14 a_1^{\frac 32}b_1N^{\frac 12} < a_2^{\frac 32}b_2M^{\frac 12} < 4 a_1^{\frac 32}b_1N^{\frac 12}, $ we have that $\frac A{y^{\frac 12}} \asymp B,$ and so the range of $n$ that should be considered is of the size $(A\eta_1)^3N^{\frac 12}$ by the same arguments as above. Hence the contribution from these terms to $H_2$ is $ O(a_1^{1/4}q^{\frac 34 + \varepsilon}).$ This completes the proof of Proposition \ref{prop:Terror} for $\mathcal E_1^+(q; \boldsymbol \alpha, \boldsymbol \beta)$. 
The same proof applies to bound $\mathcal E_{i}^\pm(q; \boldsymbol \alpha, \boldsymbol \beta)$ for $i = 2, 3, 4.$ \operatorname{\mathcal{S}}ubsection{Bounding $\mathcal E^+_5(q; \boldsymbol \alpha, \boldsymbol \beta)$} \label{sec:proofE5} We first recall that \est{ \mathcal E_5^+(q; \boldsymbol \alpha, \boldsymbol \beta) = \frac {2\pi}{q}&\operatorname{\mathcal{S}}umfour_{\operatorname{\mathcal{S}}ubstack{a_1, b_1, a_2, b_2 \geq 1\\ (a_1a_2b_1b_2, q) = 1}} \frac{ \mathscr B(a_1, b_1; \boldsymbol \alpha)}{a_1^{\frac 32}b_1} \frac{ \mathscr B(a_2, b_2; -\boldsymbol \beta)}{a_2^{\frac 32}b_2} \operatorname{\mathcal{S}}umd_M \operatorname{\mathcal{S}}umd_N E_{5, \boldsymbol \alpha, \boldsymbol \beta}^+(\mathbf{a}, \mathbf{b}, M, N) ,} where \est{E_{5, \boldsymbol \alpha, \boldsymbol \beta}^+(\mathbf{a}, \mathbf{b}, M, N) := \operatorname{\mathcal{S}}um_{c < C} \operatorname{\mathcal{S}}umstar_{x \bmod \delta c} \frac{\mathcal T_{5, \boldsymbol \alpha, \boldsymbol \beta}^+(c,x)}{c} = \operatorname{\mathcal{S}}um_{\delta < C} \frac 1\delta \operatorname{\mathcal{S}}um_{c < \frac{C}{\delta}} \operatorname{\mathcal{S}}umstar_{\operatorname{\mathcal{S}}ubstack{x \bmod \delta c \\ (u_1x-u_2, c\delta) = \delta}} \frac{\mathcal T^+_{5, \boldsymbol \alpha, \boldsymbol \beta}(c\delta, x)}{c};} \est{ \mathcal T_{5, \boldsymbol \alpha, \boldsymbol \beta}^+(c\delta,x) &:= \frac{\pi^{3 + \operatorname{\mathcal{S}}um_{i = 1}^3 (\boldsymbol \alphapha_i - \beta_i)}}{\eta_1^{3+\operatorname{\mathcal{S}}um_{i=1}^3 \boldsymbol \alphapha_i}\eta_2^{3 - \operatorname{\mathcal{S}}um_{i=1}^3 \beta_i}} \operatorname{\mathcal{S}}umtwo_{n,m \geq 1} A_3\pr{n, \frac{\lambda_1}{\eta_1}, \boldsymbol \alpha} A_3\pr{m, \frac{\lambda_2}{\eta_2}, -\boldsymbol \beta} \mathcal F_5^+(c\delta,n, m; \boldsymbol \alpha, \boldsymbol \beta);} for $\lambda_1 = \frac{\bar q (u_2\bar x - u_1)a_1}{\delta(a_1, u_2c)},$ $\eta_1 = \frac{u_2 c}{(a_1, u_2c)},$ $\lambda_2 = \frac{\bar q (u_1 x - u_2)a_2}{\delta(a_2, u_1c)},$ $\eta_2 = \frac{u_1 c}{(a_2, u_1c)},$ $u_i = \frac{a_ib_i}{(a_1b_1, a_2b_2)}$, and \est{\mathcal F_5^+(c\delta,n, m, \boldsymbol \alpha, \boldsymbol \beta) &= \int_{0}^{\infty} \int_0^{\infty} \frac{1}{y^{1/2 }} \frac{1}{z^{1/2}} V_{\boldsymbol \alpha, \boldsymbol \beta} \pr{a_1^3b_1^2y, a_2^3b_2^2z; q} J_{k-1} \pr{\frac{4\pi \operatorname{\mathcal{S}}qrt{a_2ya_1z}}{c\delta q}} f\pr{\frac yN} f\pr{\frac zM} \times \\ &\hskip 0.5in \times i^{- k}\e{\frac{a_1^2b_1y}{c\delta qa_2b_2} + \frac{a_2^2b_2z}{c\delta qa_1b_1}} U_3\pr{\frac{\pi^3ny}{\eta_1^3}; \boldsymbol \alpha} U_3\pr{\frac{\pi^3mz}{\eta_2^3}; - \boldsymbol \beta} \>dy \> dz. } The proofs in this section are very similar to the ones in the previous section. Previously, we had one sum over $n$ and now we have a double sum over $m$ and $n$ which can be treated in a similar manner. To be precise, we begin by dividing $E_{5, \boldsymbol \alpha, \boldsymbol \beta}^+(\mathbf{a}, \mathbf{b}, M, N)$ into $ \operatorname{\mathcal{S}}um_{i = 1}^4 E^+_{5, i}(\mathbf{a}, \mathbf{b}, M, N),$ where $E^+_{5, i}(\mathbf{a}, \mathbf{b}, M, N) := E^+_{5, i, \boldsymbol \alpha, \boldsymbol \beta}(\mathbf{a}, \mathbf{b}, M, N)$ is the contribution from case $i$ below. 
\begin{enumerate} \item $n \ll \frac{\eta_1^3q^{\varepsilon}}{N}$ and $m \ll \frac{\eta_2^3q^{\varepsilon}}{M};$ \item $n \gg \frac{\eta_1^3q^{\varepsilon}}{N}$ and $m \ll \frac{\eta_2^3q^{\varepsilon}}{M};$ \item $n \ll \frac{\eta_1^3q^{\varepsilon}}{N}$ and $m \gg \frac{\eta_2^3q^{\varepsilon}}{M};$ \item $n \gg \frac{\eta_1^3q^{\varepsilon}}{N}$ and $m \gg \frac{\eta_2^3q^{\varepsilon}}{M}.$ \end{enumerate} By symmetry, the treatment for cases (2) and (3) is the same, so we will show only the second case. Similar to Section \ref{sec:proofE1}, the contribution from the terms $a_1^3b_1^2y \gg q^{3/2 + \varepsilon}$ or $a_2^3b_2^2 z \gg q^{3/2 + \varepsilon}$ can be bounded by $q^{-A}$ due to the factor $V_{\boldsymbol \alpha, \boldsymbol \beta}(a_1^3b_1^2y, a_2^3b_2^2 z; q).$ Thus it suffices to prove that \begin{equation} \label{eqn:E5ibound} E_{5, i}^+(\mathbf{a}, \mathbf{b}, M, N)\ll a_1^{1/2}a_2^{1/2}q^{3/4+\varepsilon}, \end{equation} for fixed $\mathbf{a}, \mathbf{b}, M, N $ satisfying $a_1^3b_1^2N \ll q^{3/2 + \varepsilon}$ and $a_2^3b_2^2 M \ll q^{3/2 + \varepsilon}$. In fact, we will prove the stronger bound $E_{5, i}^+(\mathbf{a}, \mathbf{b}, M, N)\ll a_1^{1/2}a_2^{1/2}q^{1/2+\varepsilon}$. \operatorname{\mathcal{S}}ubsubsection{Bounding $E^+_{5,1}({\bf a}, {\bf b}, M, N)$} \label{sec:boundT3mnsmall} For this case, $ U_3 \pr{\frac{\pi^3 n y}{\eta_1^3}; \boldsymbol \alpha} \ll q^{\varepsilon},$ and $ U_3 \pr{\frac{\pi^3 mz}{\eta_2^3}; -\boldsymbol \beta } \ll q^{\varepsilon}$ by (\ref{lem:asympUVxsmall}). Similar to the arguments in Section \ref{sec:Esm}, from Lemma \ref{lem:boundres}, Lemma \ref{lem:A3B3Kloos} and (\ref{bound:Bessel1}), we have that for $k \geq 5$, $E^+_{5,1}({\bf a}, {\bf b}, M, N)$ is bounded by \est{&\ll M^{\frac 12}N^{\frac 12} q^{\varepsilon} \operatorname{\mathcal{S}}um_{\delta < C} \operatorname{\mathcal{S}}um_{c < \frac C\delta} \frac{1}{\eta_1^3 \eta_2^3}\operatorname{\mathcal{S}}um_{n \ll \frac{\eta_1^3q^\varepsilon}{N}} \operatorname{\mathcal{S}}um_{\operatorname{\mathcal{S}}ubstack{h_1 | \eta_1, \ h_1 | n^2}} \eta_1^{\frac 32}\operatorname{\mathcal{S}}qrt h_1 \operatorname{\mathcal{S}}um_{m \ll \frac{\eta_2^3q^\varepsilon}{M}} \operatorname{\mathcal{S}}um_{\operatorname{\mathcal{S}}ubstack{h_2 | \eta_2, \ h_2 | m^2}} \eta_2^{\frac 32}\operatorname{\mathcal{S}}qrt h_2 \\ &\hskip 2in \times \min\pg{\pr{\frac{\operatorname{\mathcal{S}}qrt{a_1a_2MN}}{c\delta q}}^{-\frac 12},\pr{\frac{\operatorname{\mathcal{S}}qrt{a_1a_2MN}}{c\delta q}}^{k-1}} \\ &\ll M^{-\frac 12}N^{-\frac 12}q^{\varepsilon} \operatorname{\mathcal{S}}um_{\delta < C} \pg{\operatorname{\mathcal{S}}um_{\frac{\operatorname{\mathcal{S}}qrt{a_1a_2MN}}{q\delta} \ll c < \frac C\delta} \eta_1^{\frac 32} \eta_2^{\frac 32} \pr{\frac{\operatorname{\mathcal{S}}qrt{a_1a_2MN}}{c\delta q}}^{k-1} + \operatorname{\mathcal{S}}um_{c \ll \frac{\operatorname{\mathcal{S}}qrt{a_1a_2MN}}{q\delta}} \eta_1^{\frac 32} \eta_2^{\frac 32} \pr{\frac{\operatorname{\mathcal{S}}qrt{a_1a_2MN}}{c\delta q}}^{-\frac 12} }\\ &\ll M^{\frac 32} N^{\frac 32} q^{\varepsilon} \frac{a_1^{2} a_2^{2}u_1^{\frac 32}u_2^{\frac 32}}{q^{4}} \ll q^{\frac 12 + \varepsilon}. 
} \operatorname{\mathcal{S}}ubsubsection{Bounding $E^+_{5,2}({\bf a}, {\bf b}, M, N)$} We can write $\mathcal F_5^+(c\delta, n, m; \boldsymbol \alpha, \boldsymbol \beta)$ as \est{ & \int_{-\infty}^{\infty} \int_{(\frac 1{\log q})} V_1(s,t) \int_{0}^{\infty} F_{s - it}\pr{\frac zM} \e{\frac{a_2^2b_2z}{c\delta qa_1b_1}} U_3\pr{\frac{\pi^3mz}{\eta_2^3}; -\boldsymbol \beta} \mathcal I(n,z) \> dz \> \frac{ds}{s} \> dt,} where $V_1(s,t)$ and $\mathcal I(n, z)$ are defined as in (\ref{def:V1st}) and (\ref{def:IabN}), respectively, and $F_{v} \pr{x} = \frac{1}{x^{\frac 12 + v}} f\pr{x}.$ Note that $F_{s \pm it}^{(j)}\pr{\frac{y}{N}} \ll |t|^j N^{-j}. $ The integration over $z$ can be bounded trivially, and the sum over $m, h_2$ can be treated in the same way as in Section \ref{sec:boundT3mnsmall}. For the integration over $y$, we argue as in Case 1 and 2 of Section \ref{sec:proofE1b} and obtain that $E^+_{5,2}({\bf a}, {\bf b}, M, N) \ll a_1^{\frac 12}q^{\frac 12 + \varepsilon}.$ \operatorname{\mathcal{S}}ubsubsection{Bounding $E^+_{5,4}({\bf a}, {\bf b}, M, N)$} We split into two cases as follows. \\ {\bf Case 1: $c > \frac{8\pi\operatorname{\mathcal{S}}qrt{a_1a_2MN}}{\delta q}$.} We use (\ref{asympJxSm}) and (\ref{lem:asymptUxbig}), and the integral that we consider is of the form \est{ \int_0^{\infty} \int_0^{\infty} G(y, z) \e{\frac{a_1^2 b_1y}{c\delta q a_2b_2} +\frac{a_2^2 b_2z}{c\delta q a_1b_1} \pm \frac{3 n^{\frac 13} y^{\frac 13}}{\eta_1} \pm \frac{3 m^{\frac 13} z^{\frac 13}}{\eta_2}} \> dy \> dz,} where $\frac{\partial^j \partial^i G(y, z)}{\partial y^j \partial z^i} \ll \frac{1}{N^j M^i},$ $G(x,y) \ll 1,$ and it is supported in $[N, 2N] \times [M, 2M]$. Therefore, the integration over $y, z$ above is $O(MN).$ By the same arguments as case 1 of Section \ref{sec:proofE1b}, it is sufficient to consider when $c_1(B_1\eta_1)^3N^2 \ll n \ll c_2(B_1\eta_1)^3N^2 $ and $c_1(B_2\eta_2)^3M^2 \ll m \ll c_2(B_2\eta_2)^3M^2,$ where $c_1, c_2$ are some constants, $B_1 = \frac{a_1^2b_1}{c\delta qa_2b_2}$, and $B_2 = \frac{a_2^2b_2}{c\delta qa_1b_1}$, since the terms outside these ranges give negligible contribution from integration by parts many times. By the same arguments as in Section \ref{sec:proofE1}, $$ (B\eta_1)^3N^2 \ll \frac{(a_1^2b_1)^3N^2}{\delta^3q^3} \ll \frac{q^\varepsilon}{\delta^3}, \ \ \ \ (B\eta_2)^3M^2 \ll \frac{(a_2^2b_2)^3M^2}{\delta^3q^3} \ll \frac{q^\varepsilon}{\delta^3}.$$ So there are no terms of this form unless $N \gg \frac{q^{3/2}}{(a_1^2b_1)^{3/2}}, M \gg \frac{q^{3/2}}{(a_2^2b_2)^{3/2}},$ and $\delta \ll q^\varepsilon.$ We then obtain that the contribution from these terms to $E^+_{5,4}({\bf a}, {\bf b}, M, N)$ is bounded by \est{ & M^{\frac 12}N^{\frac 12} q^\varepsilon \operatorname{\mathcal{S}}um_{\delta < q^\varepsilon} \operatorname{\mathcal{S}}um_{ c \gg \frac{\operatorname{\mathcal{S}}qrt{a_1a_2MN}}{\delta q}} \frac{1}{\eta_1^{\frac 32}\eta_2^{\frac 32}} \operatorname{\mathcal{S}}um_{h_1 | \eta_1} \operatorname{\mathcal{S}}qrt h_1 \operatorname{\mathcal{S}}um_{\operatorname{\mathcal{S}}ubstack{n \ll q^\varepsilon \\ h_1^2 | n}} \operatorname{\mathcal{S}}um_{h_2 | \eta_1} \operatorname{\mathcal{S}}qrt h_2 \operatorname{\mathcal{S}}um_{\operatorname{\mathcal{S}}ubstack{m \ll q^\varepsilon \\ h_2^2 | m}} 1 \ll a_1^{\frac 12}a_2^{\frac 12} q^{\frac 12 + \varepsilon}. 
} \\ \\ {\bf Case 2: $c \leq \frac{8\pi\operatorname{\mathcal{S}}qrt{a_1a_2MN}}{\delta q}$.} For this case, we use (\ref{asympJxbig}), and the integral that we consider is of the form \est{\frac{\eta_1 \eta_2}{m^{\frac 13}n^{\frac 13} M^{\frac 13}N^{\frac 13}} \frac{\operatorname{\mathcal{S}}qrt{c\delta q}}{M^{\frac 14}N^{\frac 14}(a_1a_2)^{\frac 14}}&\int_0^{\infty} \int_0^{\infty} \mathcal H(y, z) \e{g(y,z, n, m)} \> dy \> dz, } where $\frac{\partial^j \partial^i \mathcal H(y, z)}{\partial y^j \partial z^i} \ll \frac{1}{N^j M^i},$ $\mathcal H(x,y)$ is supported in $[N, 2N] \times [M, 2M]$, and \est{ & g(y, z, n, m) = \frac{a_1^2 b_1y}{c\delta q a_2b_2} +\frac{a_2^2 b_2z}{c\delta q a_1b_1} \pm \frac{3 n^{\frac 13} y^{\frac 13}}{\eta_1} \pm \frac{3 m^{\frac 13} z^{\frac 13}}{\eta_2} \pm \frac{2\operatorname{\mathcal{S}}qrt{a_1a_2yz}}{c\delta q} .} We note that the integration over $y, z$ above is $O(MN).$ Hence we obtain that \est{\frac{\partial g(y, z, n, m)}{\partial y} = B_1 \pm \frac{ n^{\frac 13} }{y^{\frac 23}\eta_1} \pm \frac{A_1}{ y^{\frac 12}},} and \est{\frac{\partial g(y, z, n, m)}{\partial z} = B_2 \pm \frac{ m^{\frac 13} }{z^{\frac 23}\eta_2 } \pm \frac{A_2}{ z^{\frac 12}},} where $A_1 = \frac{\operatorname{\mathcal{S}}qrt{a_1a_2z}}{c\delta q}$ and $A_2 = \frac{\operatorname{\mathcal{S}}qrt{a_1a_2y}}{c\delta q}.$ We will divide into three cases to consider. {\it Case 2.1:} $a_2^{\frac 32}b_2M^{\frac 12} \geq 4 a_1^{\frac 32}b_1N^{\frac 12}.$ For this case, we have that $\mathopen{}\mathclose\bgroup\originalleft|\frac {A_1}{y^{\frac 12}} \pm B_1 \aftergroup\egroup\originalright| \asymp \frac {A_1}{y^{\frac 12}},$ and $\mathopen{}\mathclose\bgroup\originalleft| \frac {A_2}{z^{\frac 12}} \pm B_2 \aftergroup\egroup\originalright| \asymp B_2.$ By similar arguments to case 2 of section \ref{sec:proofE1b}, we consider the ranges $n \asymp (A_1\eta_1)^3N^{\frac 12}$ and $m \asymp (B_2 \eta_2)^3M^2.$ By the same arguments as in Section \ref{sec:proofE1}, we note that $$ (A\eta_1^3)N^{\frac 12} \ll \pr{\frac{\operatorname{\mathcal{S}}qrt{a_1}}{\delta q^{\frac 12}}}^3 N^{\frac 12} \ll \frac{q^\varepsilon}{\delta^3}, \ \ \ \ (B\eta_2)^3M^2 \ll \frac{(a_2^2b_2)^3M^2}{\delta^3q^3} \ll \frac{q^\varepsilon}{\delta^3}$$ and there are no terms of this from unless $N \gg q^{\frac 32}/a_1^3,$ $M \gg \frac{q^{3/2}}{(a_2^2b_2)^{3/2}}$ and $\delta \ll q^{\varepsilon}.$ Hence the contribution from these terms to $E^+_{5, 4}(\mathbf a, \mathbf b, M, N)$ is \est{ & M^{\frac 16}N^{\frac 1{6}} \operatorname{\mathcal{S}}um_{\delta < q^\varepsilon} \operatorname{\mathcal{S}}um_{ c \ll \frac{\operatorname{\mathcal{S}}qrt{a_1a_2MN}}{\delta q}} \frac{1}{\eta_1^{\frac 12}\eta_2^{\frac 12}} \operatorname{\mathcal{S}}um_{h_1 | \eta_1} \operatorname{\mathcal{S}}qrt h_1 \operatorname{\mathcal{S}}um_{\operatorname{\mathcal{S}}ubstack{n \ll q^\varepsilon \\ h_1^2 | n}} \frac{1}{n^{\frac 13}}\operatorname{\mathcal{S}}um_{h_2 | \eta_1} \operatorname{\mathcal{S}}qrt h_2 \operatorname{\mathcal{S}}um_{\operatorname{\mathcal{S}}ubstack{m \ll q^\varepsilon \\ h_2^2 | m}} \frac{1}{m^{\frac 13}} \ll a_1^{\frac 12} a_2^{\frac 12}q^{\frac 12 + \varepsilon}. 
} {\it Case 2.2:} $a_1^{\frac 32}b_1N^{\frac 12} \geq 4a_2^{\frac 32}b_2M^{\frac 12}.$ For this case, we do the same calculation as in case 2.1 and obtain that the contribution is also $O\pr{a_1^{ \frac 12} a_2^{\frac 12}q^{\frac 12 + \varepsilon}}.$ {\it Case 2.3:} $\frac 14 a_1^{\frac 32}b_1N^{\frac 12} < a_2^{\frac 32}b_2M^{\frac 12} < 4 a_1^{\frac 32}b_1N^{\frac 12}.$ For this case, we have that $\frac {A_1}{y^{\frac 12}} \asymp B_1,$ and $\frac {A_2}{z^{\frac 12}} \asymp B_2.$ By similar arguments to case 2 of Section \ref{sec:proofE1b}, we can focus on the ranges $n \asymp (A_1\eta_1)^3N^{\frac 12}$ and $m \asymp (A_1\eta_1)^3N^{\frac 12}$. The contribution from these terms to $ E^+_{5,4}(\mathbf a, \mathbf b, M, N)$ is then $\ll a_1^{\frac 12} a_2^{\frac 12}q^{\frac 12 + \varepsilon}.$ \operatorname{\mathcal{S}}ection{Conclusion of the proof of Theorem \ref{thm:mainmoment}} Recall that from (\ref{eqn:HM6}) and (\ref{eqn:MainboundLambda1}), we want to evaluate \est{H(0;\boldsymbol \alpha,\boldsymbol \beta) \mathcal M_6(q) = \mathscr M_1(q; \boldsymbol \alpha, \boldsymbol \beta) + \mathscr M_1(q; \boldsymbol \beta, \boldsymbol \alpha).} By (\ref{decomM1}), we see that $\mathscr M_1(q; \boldsymbol \alpha,\boldsymbol \beta) = \mathscr D(q; \boldsymbol \alpha, \boldsymbol \beta) + \mathscr K(q; \boldsymbol \alpha, \boldsymbol \beta)$, and in Lemma \ref{lem:diagonal}, we showed that $$ \mathscr D(q; \boldsymbol \alpha, \boldsymbol \beta) = H(0; \boldsymbol \alpha, \boldsymbol \beta) \int_{-\infty}^{\infty} \mathcal M(q; \boldsymbol \alpha + it, \boldsymbol \beta + it) \> dt + O(q^{-3/4 + \varepsilon}),$$ which is one of the twenty main terms of the asymptotic formula. Then we decomposed $\mathscr K(q; \boldsymbol \alpha, \boldsymbol \beta)$ as $\mathscr K_M(q; \boldsymbol \alpha, \boldsymbol \beta) + \mathscr K_E(q; \boldsymbol \alpha, \boldsymbol \beta)$. We proved in Section \ref{sec:truncationC} that $\mathscr K_E(q; \boldsymbol \alpha, \boldsymbol \beta) \ll q^{-1/2 + \varepsilon},$ and then using Voronoi Summation formula, we extracted another nine main terms of the asymptotic formula from $\mathscr K_M(q; \boldsymbol \alpha, \boldsymbol \beta)$ with an error term $O(q^{-\frac 14 + \varepsilon})$ (see Proposition \ref{prop:mainTM} and \ref{prop:Terror}, \S \ref{sec:provepropTM}, \S \ref{sec:properror} and Appendix \ref{sec:Eulerverif}). 
As briefly discussed in \S \ref{sec:setupmoment}, those terms correspond to $\mathcal M(q; \pi(\boldsymbol \alpha) + it, \pi(\boldsymbol \beta) + it), $ where $\pi$ is the transposition $ (\boldsymbol \alphapha_i, \beta_j)$ for $i = 1, 2, 3$ in $S_6/S_3 \times S_3.$ Hence $\mathscr M_1(q; \boldsymbol \alpha, \boldsymbol \beta)$ gives ten main terms the desired asymptotic formula, and similarly the remaining ten terms comes from $\mathscr M_1(q; \boldsymbol \beta, \boldsymbol \alpha).$ Therefore combining everything together, we have that \est{H(0;\boldsymbol \alpha,\boldsymbol \beta) \mathcal M_6(q) = H(0; \boldsymbol \alpha, \boldsymbol \beta) \int_{-\infty}^{\infty} \operatorname{\mathcal{S}}um_{\pi \in S_6/S_3 \times S_3} \mathcal M(q; \pi(\boldsymbol \alpha) + it, \pi(\boldsymbol \beta) + it) \> dt + O(q^{-1/4 + \varepsilon}).} If $|\boldsymbol \alphapha_i - \beta_j| \gg q^{-\varepsilon}$ for all $1\leq i, j \leq 3$, then $H(0; \boldsymbol \alpha, \boldsymbol \beta) \gg q^{-\varepsilon}$ and we immediately get \est{\mathcal M_6(q) = \int_{-\infty}^{\infty} \operatorname{\mathcal{S}}um_{\pi \in S_6/S_3 \times S_3} \mathcal M(q; \pi(\boldsymbol \alpha) + it, \pi(\boldsymbol \beta) + it) \> dt + O(q^{-1/4 + \varepsilon}).} However, since all expressions above - including the term bounded by $O(q^{-1/2+\varepsilon}$) - are analytic in the $\boldsymbol \alphapha_i$ and $\beta_j$, we see that this in fact holds in general. \appendix \operatorname{\mathcal{S}}ection{Comparing the main term of $ R_{\boldsymbol \alphapha_1, \beta_1}$ and $\mathcal M(q; \pi(\boldsymbol \alpha), \pi(\boldsymbol \beta))$} \label{sec:Eulerverif} To finish the proof of Proposition \ref{prop:mainTM}, we will show that the local factor at prime $p$ of the Euler product of $\zeta(1 -\boldsymbol \alphapha_1 + \beta_1 )\mathcal M_{\boldsymbol \alphapha_1, \beta_1}(0)$ is the same as the one in $\mathcal A \mathcal Z\pr{\tfrac 12; \pi(\boldsymbol \alpha), \pi(\boldsymbol \beta)}$, where $(\pi(\boldsymbol \alpha), \pi(\boldsymbol \beta)) = (\beta_1, \boldsymbol \alphapha_1, \boldsymbol \alphapha_2; \boldsymbol \alphapha_1, \beta_2, \beta_3)$ and $\mathcal M_{\boldsymbol \alphapha_1, \beta_1}(s)$ is defined as in (\ref{def:Malbeta}). To simplify the presentation, we will work within the ring of formal Dirichlet series, so that we need not worry about convergence issues in this section. Indeed, if we show that $\zeta(1 -\boldsymbol \alphapha_1 + \beta_1 )\mathcal M_{\boldsymbol \alphapha_1, \beta_1}(0)$ is the same as $\mathcal A \mathcal Z\pr{\tfrac 12; \pi(\boldsymbol \alpha), \pi(\boldsymbol \beta)}$ as formal series, then they must have the same region of absolute convergence. Thus, as analytic functions, they agree on the region of absolute convergence, and so must be the same by analytic continuation. Note that we have already verified that there is a non-empty open region of absolute convergence at the end of \S \ref{sec:provepropTM}. For notational convenience, $\boldsymbol \alpha_{2, 3} = (\boldsymbol \alphapha_2, \boldsymbol \alphapha_3),$ and $ -\boldsymbol \beta_{2,3} = (-\beta_2, -\beta_3)$ in this section. \operatorname{\mathcal{S}}ubsection{Euler product at prime $p$ of $\mathcal A \mathcal Z\pr{\tfrac 12; \pi(\boldsymbol \alpha), \pi(\boldsymbol \beta)}$} \label{sec:Eulerproduct} We start from rearranging the sums in $\mathcal A \mathcal Z(s; \pi(\boldsymbol \alpha), \pi(\boldsymbol \beta))$ by the same method as in (\ref{prod3L}). 
When $\textrm{Re}(s + \beta_1 + \boldsymbol \alphapha_2 + \boldsymbol \alphapha_3), \textrm{Re}(s - \boldsymbol \alphapha_1 - \beta_2 - \beta_3) > 1,$ we recall that from Equations (\ref{def:Cs}) and (\ref{def:mathcalAs}), $\mathcal A \mathcal Z(s; \pi(\boldsymbol \alpha), \pi(\boldsymbol \beta))$ is \est{ \operatorname{\mathcal{S}}umsix_{\operatorname{\mathcal{S}}ubstack{a_1, b_1, a_2, b_2, m, n \geq 1\\ a_1n = a_2m \\ a_1b_1 = a_2b_2 \\ (a_i, q) = (b_j, q) = 1 }} \frac{ \mathscr B(a_1, b_1; \pi(\boldsymbol \alpha))}{(a_1b_1)^{2s}} \frac{ \mathscr B(a_2, b_2; -\pi(\boldsymbol \beta))}{(a_2b_2)^{2s}} \frac{\operatorname{\mathcal{S}}igma_3(n;\pi(\boldsymbol \alpha))\operatorname{\mathcal{S}}igma_3(m; -\pi(\boldsymbol \beta))}{(a_1n)^{ s} (a_2m)^{ s}} .} Using Lemma \ref{lem:multofsigma_k} and the proof of Lemma \ref{prod3L} and using the fact that \es{\label{eqn:sig3to2}\operatorname{\mathcal{S}}igma_3(a; \boldsymbol \alphapha_1, \boldsymbol \alphapha_2, \boldsymbol \alphapha_3) = \operatorname{\mathcal{S}}um_{df = a} d^{-\boldsymbol \alphapha_1} \operatorname{\mathcal{S}}igma_2(f; \boldsymbol \alphapha_2, \boldsymbol \alphapha_3),} we see after a change of variables that \es{\label{Euler2Conj} \mathcal A \mathcal Z\pr{\tfrac 12; \pi(\boldsymbol \alpha), \pi(\boldsymbol \beta)} &= \zeta( 1- \boldsymbol \alphapha_1 + \beta_1) \operatorname{\mathcal{S}}umfour_{\operatorname{\mathcal{S}}ubstack{d_1, d_2, e_1, e_2 \geq 1\\ d_1e_1 = d_2 e_2 \\ (d_ie_i, q) = 1}} \frac{1}{d_1^{1 + \boldsymbol \alphapha_2 + \boldsymbol \alphapha_3}d_2^{1 - \beta_2 -\beta_3}} \frac{1}{e_1^{1 + \beta_1}e_2^{1 - \boldsymbol \alphapha_1}} \mathcal J\pr{ e_1, e_2},} where \es{ \label{defCalJ} \mathcal J( e_1, e_2) &= \operatorname{\mathcal{S}}umtwo_{\operatorname{\mathcal{S}}ubstack{ j_1, j_2 \geq 1}} \frac{\operatorname{\mathcal{S}}igma_2(j_1e_1; \boldsymbol \alphatt)\operatorname{\mathcal{S}}igma_2(j_2e_2; -\boldsymbol \betatt) (j_1, j_2)^{1 - \boldsymbol \alphapha_1 + \beta_1}}{j_1^{1 - \boldsymbol \alphapha_1}j_2^{1 + \beta_1}}.} Since both $\zeta(1 -\boldsymbol \alphapha_1 + \beta_1 )\mathcal M_{\boldsymbol \alphapha_1, \beta_1}(0)$ and $\mathcal A \mathcal Z\pr{\tfrac 12; \pi(\boldsymbol \alpha), \pi(\boldsymbol \beta)}$ have the factor $\zeta(1 - \boldsymbol \alphapha_1 + \beta_1)$, it suffices to consider only the local factor at prime $p$ of the sum over $d_i, e_i$ in (\ref{Euler2Conj}). 
For $p\neq q$, this is \es{\label{localpRHS}\operatorname{\mathcal{S}}umfour_{\operatorname{\mathcal{S}}ubstack{\delta_1, \delta_2, \varepsilonilon_1, \varepsilonilon_2 \geq 0 \\ \delta_1 + \varepsilonilon_1 = \delta_2 + \varepsilonilon_2}}\frac{1}{p^{D + \varepsilonilon_1 + \varepsilonilon_2 + \varepsilonilon_1 \beta_1 - \varepsilonilon_2\boldsymbol \alphapha_1 }} \operatorname{\mathcal{S}}um_{k \geq 0} \frac{\mathcal J_p(\varepsilonilon_1, \varepsilonilon_2, k)}{p^k},} where $p^{\delta_i} \| d_i$, $p^{\varepsilonilon_i} \| e_i$, $p^{\iota_i} \| j_i,$ \es{ \label{def:D} D := \delta_1 + \delta_2 + \delta_1(\boldsymbol \alphapha_2 + \boldsymbol \alphapha_3) - \delta_2(\beta_2 - \beta_3),} and \es{\label{defJpk} \mathcal J_p(\varepsilonilon_1, \varepsilonilon_2, k) &:= \operatorname{\mathcal{S}}igma_2(p^{k + \varepsilonilon_1}; \boldsymbol \alphatt) \operatorname{\mathcal{S}}igma_2 (p^{k + \varepsilonilon_2}; -\boldsymbol \betatt) + \operatorname{\mathcal{S}}um_{0 \leq \iota_1 < k } \frac{\operatorname{\mathcal{S}}igma_2(p^{\iota_1 + \varepsilonilon_1}; \boldsymbol \alphatt) \operatorname{\mathcal{S}}igma_2 (p^{k + \varepsilonilon_2}; -\boldsymbol \betatt)}{p^{\beta_1(k-\iota_1)}} \\ & \hskip 2.5in + \operatorname{\mathcal{S}}um_{0 \leq \iota_2 < k } \frac{\operatorname{\mathcal{S}}igma_2(p^{k + \varepsilonilon_1}; \boldsymbol \alphatt) \operatorname{\mathcal{S}}igma_2 (p^{\iota_2 + \varepsilonilon_2}; -\boldsymbol \betatt)}{p^{ - \boldsymbol \alphapha_1(k-\iota_2)}}.} For $p = q$, we have that $\delta_i = \varepsilonilon_i = 0,$ and the local factor at $p$ is \es{\label{localqRHS} \operatorname{\mathcal{S}}um_{k \geq 0} \frac{\mathcal J_p(0,0, k)}{p^k}.} We also comment here that when $\boldsymbol \alphapha_i = \beta_i = 0$ for $i = 1, 2, 3,$ using $\operatorname{\mathcal{S}}igma_2(p^k) = k + 1$ in (\ref{localpRHS}), (\ref{defJpk}), (\ref{localqRHS}) and some straightforward calculation, we derive that the local factor at $p \neq q$ of $\mathcal A \mathcal Z \pr{\tfrac 12; 0 , 0}$ is $$\mathopen{}\mathclose\bgroup\originalleft( 1 - \frac 1p\aftergroup\egroup\originalright)^{-9} C_p,$$ where $C_p$ is defined in (\ref{def:c3}), and the local factor at $q$ is $$ \mathopen{}\mathclose\bgroup\originalleft( 1 - \frac 1q\aftergroup\egroup\originalright)^{-5}\mathopen{}\mathclose\bgroup\originalleft( 1 + \frac 4q + \frac 1{q^2}\aftergroup\egroup\originalright). $$ This explains the presence of the arithmetic factor in our Conjecture \ref{conj:CFKRSnoshift}, as in the work \cite{CFKRS}. 
\operatorname{\mathcal{S}}ubsection{The Euler product at $p$ of $\mathcal M_{\boldsymbol \alphapha_1, \beta_1}(0)$} First, by the definition of $\mathscr B(a, b; \boldsymbol \alphapha_1, \boldsymbol \alphapha_2, \boldsymbol \alphapha_3)$ in (\ref{eqn:defB}), $\mathcal G(c, {\bf a}, {\bf b})$ in (\ref{def:Gc}), $\operatorname{\mathcal{S}}umsharp \mathscr G_{ {\bf a}, {\bf b}}(s; h, b, c, \gamma, g)$ in (\ref{def:Gscr}), Equation (\ref{eqn:sig3to2}), and a change of variables, we obtain that $\mathcal M_{\boldsymbol \alphapha_1, \beta_1}(0)$ can be re-written as \est{ & \operatorname{\mathcal{S}}umfour_{\operatorname{\mathcal{S}}ubstack{d_1, f_1, d_2, f_2 \geq 1\\ (d_if_i, q) = 1}} \frac{1}{d_1^{1 - \boldsymbol \alphapha_1 - \beta_1 + \boldsymbol \alphapha_2 + \boldsymbol \alphapha_3} d_2^{1 + \boldsymbol \alphapha_1 + \beta_1 - \beta_2 - \beta_3}} \frac{1}{f_1^{1 - \beta_1} f_2^{1 + \boldsymbol \alphapha_1}} \\ & \hskip 0.5in \cdot \operatorname{\mathcal{S}}um_{\operatorname{\mathcal{S}}ubstack{h \geq 1 \\ h | u_1u_2}} \frac{\mu(h)}{h^{\boldsymbol \alphapha_1 - \beta_1}} \operatorname{\mathcal{S}}um_{\operatorname{\mathcal{S}}ubstack{b \geq 1\\ (b, u_1u_2) = 1}} \frac{\mu(b)}{b^{ \boldsymbol \alphapha_1 - \beta_1 }} \operatorname{\mathcal{S}}um_{c \geq 1} \frac{c}{c^{ \boldsymbol \alphapha_1 - \beta_1}} \operatorname{\mathcal{S}}um_{\gamma | c} \frac{1}{\gamma^{\boldsymbol \alphapha_1 - \beta_1}} \operatorname{\mathcal{S}}um_{g | \frac{c}{\gamma}} \frac{\mu(g)}{g^{\boldsymbol \alphapha_1 - \beta_1}} \prod_{\operatorname{\mathcal{S}}ubstack{p| c \\ p \nmid bh\gamma}} \pr{1 - \frac 1p} \\ & \hskip 0.5in \cdot \operatorname{\mathcal{S}}um_{n \geq 1} \operatorname{\mathcal{S}}um_{a_1 | f_1} \frac{\mu(a_1)}{a_1^{\boldsymbol \alphapha_2 + \boldsymbol \alphapha_3}} \pr{\frac{(a_1, u_2cb) }{a_1u_2cbn}}^{1 - \boldsymbol \alphapha_1} \operatorname{\mathcal{S}}igma_2 \pr{\frac{u_2cbn}{(a_1, u_2cb)} ; \boldsymbol \alphatt} \operatorname{\mathcal{S}}igma_2\pr{\frac{f_1}{a_1}; \boldsymbol \alphatt} \\ & \hskip 0.5in \cdot \operatorname{\mathcal{S}}um_{m \geq 1}\operatorname{\mathcal{S}}um_{a_2 | f_2} \frac{\mu(a_2)}{a_2^{- \beta_2 - \beta_3}} \pr{\frac{ (a_2, u_1cb)}{a_2u_1cbm}}^{1 + \beta_1} \operatorname{\mathcal{S}}igma_2 \pr{\frac{u_1cbm}{(a_2, u_1cb)} ; -\boldsymbol \betatt}\operatorname{\mathcal{S}}igma_2\pr{\frac{f_2}{a_2}; -\boldsymbol \betatt}. 
} In Section \ref{sec:calRes}, $u_i = \frac{a_ib_i}{(a_1b_1, a_2b_2)},$ but after changing variables, we write that $u_i = \frac{f_id_i}{(f_1d_1, f_2d_2)}.$ By comparing Euler products, we can show that \est{ \operatorname{\mathcal{S}}um_{n \geq 1} \operatorname{\mathcal{S}}um_{a_1 | f_1} \frac{\mu(a_1)}{a_1^{\boldsymbol \alphapha_2 + \boldsymbol \alphapha_3}} \pr{\frac{(a_1, u_2cb) }{a_1u_2cbn}}^{1 - \boldsymbol \alphapha_1} \operatorname{\mathcal{S}}igma_2 \pr{\frac{u_2cbn}{(a_1, u_2cb)} ; \boldsymbol \alphatt} \operatorname{\mathcal{S}}igma_2\pr{\frac{f_1}{a_1}; \boldsymbol \alphatt} = \operatorname{\mathcal{S}}um_{n' \geq 1} \frac{\operatorname{\mathcal{S}}igma_2(f_1u_2cbn'; \boldsymbol \alphatt)}{(u_2cbn')^{1-\boldsymbol \alphapha_1}}.} We also have a similar expression for the sum over $m$ and $a_2.$ Hence we can write $\mathcal M_{\boldsymbol \alphapha_1, \beta_1}(0)$ as \es{\label{EulerOff3}& \operatorname{\mathcal{S}}umfour_{\operatorname{\mathcal{S}}ubstack{d_1, f_1, d_2, f_2 \geq 1\\ (d_if_i, q) = 1}} \frac{1}{d_1^{1 - \boldsymbol \alphapha_1 - \beta_1 + \boldsymbol \alphapha_2 + \boldsymbol \alphapha_3} d_2^{1 + \boldsymbol \alphapha_1 + \beta_1 - \beta_2 - \beta_3}} \frac{1}{f_1^{1 - \beta_1} u_2^{1 - \boldsymbol \alphapha_1} f_2^{1 + \boldsymbol \alphapha_1} u_1^{1 + \beta_1}} \\ & \hskip 0.5in \cdot \operatorname{\mathcal{S}}um_{\operatorname{\mathcal{S}}ubstack{h \geq 1 \\ h | u_1u_2}} \frac{\mu(h)}{h^{\boldsymbol \alphapha_1 - \beta_1}} \operatorname{\mathcal{S}}um_{\operatorname{\mathcal{S}}ubstack{b \\ (b, u_1u_2) = 1}} \frac{\mu(b)}{b^{ 2 }} \operatorname{\mathcal{S}}um_{c} \frac{1}{c} \operatorname{\mathcal{S}}um_{\gamma | c} \frac{1}{\gamma^{\boldsymbol \alphapha_1 - \beta_1}} \operatorname{\mathcal{S}}um_{g | \frac{c}{\gamma}} \frac{\mu(g)}{g^{\boldsymbol \alphapha_1 - \beta_1}} \prod_{\operatorname{\mathcal{S}}ubstack{p| c \\ p \nmid bh\gamma}} \pr{1 - \frac 1p} \\ & \hskip 0.5in \cdot \operatorname{\mathcal{S}}umtwo_{n, m \geq 1} \frac{\operatorname{\mathcal{S}}igma_2(f_1u_2cbn; \boldsymbol \alphatt)}{n^{1-\boldsymbol \alphapha_1}} \frac{\operatorname{\mathcal{S}}igma_2(f_2u_1cbm; -\boldsymbol \betatt)}{m^{1+\beta_1}}. \\} We note here that $d_1f_1u_2 = d_2f_2u_1$ by the definition of $u_1, u_2.$ Next, we consider the local factor at $p \neq q$ of (\ref{EulerOff3}), which is of the form \es{\label{eqn:initialfactorp}\operatorname{\mathcal{S}}umfour_{\operatorname{\mathcal{S}}ubstack{\delta_1, \delta_2, \xi_1, \xi_2 \geq 0 \\ \delta_1 + \ell_1 = \delta_2 + \ell_2}}\frac{1}{p^{D'(\ell_1, \ell_2) + \ell_2\beta_1 - \ell_1 \boldsymbol \alphapha_1 - (\xi_1 + \xi_2)(\beta_1 - \boldsymbol \alphapha_1) }} \mathscr L_p(\delta_1, \delta_2, \xi_1, \xi_2),} where $p^{\delta_i} \| d_i$, $p^{\xi_i} \| f_i$, $p^{\upsilon_i} \| u_i$, $\ell_1 = \xi_1 + \upsilon_2 $, $\ell_2 = \xi_2 + \upsilon_1$, $\min\{ \upsilon_1, \upsilon_2\} = 0,$ $D'(\ell_1, \ell_2) = D + (\delta_2 - \delta_1)(\boldsymbol \alphapha_1 + \beta_1) + \ell_1 + \ell_2$, and $D$ is defined in (\ref{def:D}). We will examine $\mathscr L_p(d_1, d_2, f_1, f_2)$ below but before that analysis, we need the following two Lemmas. 
\begin{lem} \label{lem:localfactorgammac} The contribution to the local factor at $p$ from $$ \operatorname{\mathcal{S}}um_{\gamma | c} \frac{1}{\gamma^{\boldsymbol \alphapha_1 - \beta_1}} \operatorname{\mathcal{S}}um_{g | \frac{c}{\gamma}} \frac{\mu(g)}{g^{\boldsymbol \alphapha_1 - \beta_1}} \prod_{\operatorname{\mathcal{S}}ubstack{p| c \\ p \nmid bh\gamma}} \pr{1 - \frac 1p} $$is 1 if $p \nmid c$ or $p | bh$. Otherwise, it is $ 1 - \frac{1}{p} + \frac{1}{p^{1+\boldsymbol \alphapha_1 - \beta_1}}.$ \end{lem} \begin{proof} For $p \nmid c$ or $p | bh$, the contribution to the local factor is 1 because $$\operatorname{\mathcal{S}}um_{\gamma | c} \frac{1}{\gamma^{\boldsymbol \alphapha_1 - \beta_1}} \operatorname{\mathcal{S}}um_{g | \frac c\gamma} \frac{\mu(g)}{g^{\boldsymbol \alphapha_1 - \beta_1}} = \operatorname{\mathcal{S}}um_{a | c} \frac{1}{a^{\boldsymbol \alphapha_1 - \beta_1} }\operatorname{\mathcal{S}}um_{g | a} \mu(g) = 1.$$ Now suppose $p | c$ and $ p \nmid bh$. Below we write $p^{c_p} \| c$ and $p^{\gamma_p} \| \gamma.$ Then the contribution to the local factor at $p$ is \est{&\pr{ 1 - \frac{1}{p^{\boldsymbol \alphapha_1 - \beta_1}}}\pr{1 - \frac{1}{p}} + \operatorname{\mathcal{S}}um_{1 \leq \gamma_p < c_p} \frac{1}{p^{\gamma_p(\boldsymbol \alphapha_1 - \beta_1)}} \pr{ 1 - \frac{1}{p^{\boldsymbol \alphapha_1 - \beta_1}}} + \frac{1}{p^{c_p(\boldsymbol \alphapha_1 - \beta_1)}} = 1 - \frac{1}{p} + \frac{1}{p^{1+\boldsymbol \alphapha_1 - \beta_1}}.} \end{proof} \begin{lem} \label{arrPl1l2} Let \es{\label{def:Pcnm} \mathscr P (\ell_1, \ell_2) := \operatorname{\mathcal{S}}umthree_{c_p \geq 0, \ n_p, m_p \geq 0} \frac{1}{p^{c_p}}\frac{\operatorname{\mathcal{S}}igma_2(p^{\ell_1 + n_p + c_p}; \boldsymbol \alphatt)}{p^{n_p(1-\boldsymbol \alphapha_1)}} \frac{\operatorname{\mathcal{S}}igma_2(p^{\ell_2 + m_p + c}; -\boldsymbol \betatt)}{p^{m_p(1+\beta_1)}}.} Then $$ \mathscr P (\ell_1, \ell_2) = \operatorname{\mathcal{S}}um_{k \geq 0} \frac{\mathcal J_p(\ell_1, \ell_2, k)}{p^k} + \frac{p^{\boldsymbol \alphapha_1 - \beta_1}}{p^2} \mathscr P (\ell_1 + 1, \ell_2 + 1),$$ where $\mathcal J_p(\ell_1, \ell_2, k)$ is defined as in (\ref{defJpk}). \end{lem} \begin{proof} We have \est{ \mathscr P (\ell_1, \ell_2) &= \operatorname{\mathcal{S}}um_{c_p \geq 0} \frac{\operatorname{\mathcal{S}}igma_2(p^{\ell_1 + c_p}; \boldsymbol \alphatt) \operatorname{\mathcal{S}}igma_2(p^{\ell_2 + c_p}; -\boldsymbol \betatt)}{p^{c_p}} + \operatorname{\mathcal{S}}umtwo_{c_p \geq 0, \ n_p \geq 1} \frac{\operatorname{\mathcal{S}}igma_2(p^{\ell_1 + n_p + c_p}; \boldsymbol \alphatt)\operatorname{\mathcal{S}}igma_2(p^{\ell_2 + c_p}; -\boldsymbol \betatt)}{p^{c_p + n_p- n_p\boldsymbol \alphapha_1}} \\ & + \operatorname{\mathcal{S}}umtwo_{c_p \geq 0, \ m_p \geq 1} \frac{\operatorname{\mathcal{S}}igma_2(p^{\ell_1 + c_p}; \boldsymbol \alphatt)\operatorname{\mathcal{S}}igma_2(p^{\ell_2 + m_p + c_p}; -\boldsymbol \betatt)}{p^{c_p + m_p + m_p\beta_1}} + \frac{p^{\boldsymbol \alphapha_1 - \beta_1}}{p^2} \mathscr P (\ell_1 + 1, \ell_2 + 1) \\ &= \operatorname{\mathcal{S}}um_{k \geq 0} \frac{\mathcal J_p(\ell_1, \ell_2, k)}{p^k} + \frac{p^{\boldsymbol \alphapha_1 - \beta_1}}{p^2} \mathscr P (\ell_1 + 1, \ell_2 + 1),} after some arrangement. \end{proof} Now we examine $\mathscr L_p(\delta_1, \delta_2, \xi_1, \xi_2)$ which we separate into two cases below. \operatorname{\mathcal{S}}ubsection*{Case 1: $\delta_1 + \xi_1 = \delta_2 + \xi_2$.} For this case, we have $\upsilon_1 = \upsilon_2 = 0,$ so $\xi_i = \ell_i$. 
Hence $u_1u_2 = 1$ and $p \not| h.$ From Lemma \ref{lem:localfactorgammac}, we then obtain that \est{\mathscr L_p(\delta_1, \delta_2, \xi_1, \xi_2) &= \mathscr P (\ell_1, \ell_2) + \frac{1}{p^2} \pr{ p^{\beta_1 - \boldsymbol \alphapha_1} - 1}\mathscr P (\ell_1 + 1, \ell_2 + 1) - \frac{1}{p^2}\mathscr P (\ell_1 + 1, \ell_2 + 1). } From Lemma \ref{arrPl1l2}, $\mathscr L_p(d_1, d_2, f_1, f_2) $ can be written as \est{ & \operatorname{\mathcal{S}}um_{k \geq 0} \frac{\mathcal J_p(\ell_1, \ell_2, k)}{p^k} + \frac{1}{p^2}\pr{p^{\boldsymbol \alphapha_1 - \beta_1} - 2 + p^{\beta_1 - \boldsymbol \alphapha_1}} \mathscr P (\ell_1 + 1, \ell_2 + 1)\\ &= \operatorname{\mathcal{S}}um_{k \geq 0} \frac{\mathcal J_p(\ell_1, \ell_2, k)}{p^k} + \frac{1}{p^2}\pr{p^{\boldsymbol \alphapha_1 - \beta_1} - 2 + p^{\beta_1 - \boldsymbol \alphapha_1}}\operatorname{\mathcal{S}}um_{k \geq 0} \frac{\mathcal J_p(\ell_1 + 1, \ell_2 + 1, k)}{p^k} \\ & \hskip 2in + \frac{p^{\boldsymbol \alphapha_1 - \beta_1}}{p^4}\pr{p^{\boldsymbol \alphapha_1 - \beta_1} - 2 + p^{\beta_1 - \boldsymbol \alphapha_1}}\mathscr P (\ell_1 + 2, \ell_2 + 2) \\ &= \operatorname{\mathcal{S}}um_{k \geq 0} \frac{1}{p^k} \pg{\mathcal J_p(\ell_1, \ell_2, k) + \pr{p^{\boldsymbol \alphapha_1 - \beta_1} - 2 + p^{\beta_1 - \boldsymbol \alphapha_1}} \operatorname{\mathcal{S}}um_{1 \leq m_p \leq \lfloor \frac{k}{2}\rfloor}p^{(m_p - 1)(\boldsymbol \alphapha_1 - \beta_1)} \mathcal J_p(\ell_1 + m_p, \ell_2 + m_p, k - 2m_p)}.} \operatorname{\mathcal{S}}ubsection*{Case 2: $\delta_1 + \xi_1 \neq \delta_2 + \xi_2$} For this case, $\upsilon_1 + \upsilon_2 \geq 1$. So $p|u_1u_2$, and $b_p = 0$, where $p^{b_p} \| b.$ By Lemma \ref{lem:localfactorgammac} and \ref{arrPl1l2}, we have \est{&\mathscr L_p(\delta_1, \delta_2, \xi_1, \xi_2) = \pr{1 - \frac{1}{p^{\boldsymbol \alphapha_1 -\beta_1}}} \mathscr P(\ell_1, \ell_2) - \frac{1}{p^2}\pr{1 - \frac{1}{p^{\boldsymbol \alphapha_1 -\beta_1}}} \mathscr P(\ell_1 + 1, \ell_2+ 1)\\ &= \pr{1 - \frac{1}{p^{\boldsymbol \alphapha_1 -\beta_1}}} \pg{\operatorname{\mathcal{S}}um_{k \geq 0} \frac{\mathcal J_p(\ell_1, \ell_2, k)}{p^k} + \frac{1}{p^2}\pr{p^{\boldsymbol \alphapha_1 - \beta_1} - 1} \mathscr P (\ell_1 + 1, \ell_2 + 1)}\\ &= \pr{1 - \frac{1}{p^{\boldsymbol \alphapha_1 -\beta_1}}} \mathopen{}\mathclose\bgroup\originalleft\{ \operatorname{\mathcal{S}}um_{k \geq 0} \frac{\mathcal J_p(\ell_1, \ell_2, k)}{p^k} + \frac{1}{p^2}\pr{p^{\boldsymbol \alphapha_1 - \beta_1} - 1}\operatorname{\mathcal{S}}um_{k \geq 0} \frac{\mathcal J_p(\ell_1 + 1, \ell_2 + 1, k)}{p^k} \aftergroup\egroup\originalright. 
\\ & \hskip 3in + \mathopen{}\mathclose\bgroup\originalleft.\frac{p^{\boldsymbol \alphapha_1 - \beta_1}}{p^4}\pr{p^{\boldsymbol \alphapha_1 - \beta_1} - 1}\mathscr P (\ell_1 + 2, \ell_2 + 2) \aftergroup\egroup\originalright\} \\ &= \pr{1 - \frac{1}{p^{\boldsymbol \alphapha_1 -\beta_1}}} \operatorname{\mathcal{S}}um_{k \geq 0} \frac{1}{p^k} \Bigg\{\mathcal J_p(\ell_1, \ell_2, k) \\ & \hskip 1.5in + \pr{p^{\boldsymbol \alphapha_1 - \beta_1} - 1} \operatorname{\mathcal{S}}um_{1 \leq m_p \leq \lfloor \frac{k}{2}\rfloor}p^{(m_p - 1)(\boldsymbol \alphapha_1 - \beta_1)} \mathcal J_p(\ell_1 + m_p, \ell_2 + m_p, k - 2m_p) \Bigg\}.} From both cases, we obtain that (\ref{eqn:initialfactorp}) is \es{\label{eqn:secondfactor}&\operatorname{\mathcal{S}}umfour_{\operatorname{\mathcal{S}}ubstack{\delta_1, \delta_2, \ell_1, \ell_2 \geq 0 \\ \delta_1 + \ell_1 = \delta_2 + \ell_2 }}\frac{1}{p^{D'(\ell_1, \ell_2) - \ell_1\beta_1 + \ell_2\boldsymbol \alphapha_1 }} \mathscr L_p(\delta_1, \delta_2, \ell_1, \ell_2) \\ &+ \operatorname{\mathcal{S}}umfour_{\operatorname{\mathcal{S}}ubstack{\delta_1, \delta_2, \ell_1, \ell_2 \geq 0 \\ \delta_1 + \ell_1 = \delta_2 + \ell_2 }} \pr{ \operatorname{\mathcal{S}}um_{\operatorname{\mathcal{S}}ubstack{0 \leq \xi_2 < \ell_2 \\ \xi_1 = \ell_1}} \frac{\mathscr L_p(\delta_1, \delta_2, \xi_1, \xi_2)}{p^{(\ell_2- \ell_1)\beta_1 - \xi_2(\beta_1 - \boldsymbol \alphapha_1) }} + \operatorname{\mathcal{S}}um_{\operatorname{\mathcal{S}}ubstack{0 \leq \xi_1 < \ell_1 \\ \xi_2 = \ell_2}} \frac{\mathscr L_p(\delta_1, \delta_2, \xi_1, \xi_2)}{p^{(\ell_2- \ell_1)\boldsymbol \alphapha_1 - \xi_1(\beta_1 - \boldsymbol \alphapha_1) }}} \frac{1}{p^{D'(\ell_1, \ell_2)}} \\ &:= \operatorname{\mathcal{S}}umtwo_{\delta_1, \delta_2 \geq 0} \mathcal S_p(\delta_1, \delta_2), } say. For fixed $\delta_1, \delta_2$, where $\delta_2 \geq \delta_1$, we rearrange the term $\mathcal S_p(\delta_1, \delta_2)$ and obtain that \es{\label{eqn:thirdfactor} S_p(\delta_1, \delta_2) &= \operatorname{\mathcal{S}}umtwo_{\operatorname{\mathcal{S}}ubstack{ \varepsilonilon_1, \varepsilonilon_2 \geq 0 \\ \delta_1 + \varepsilonilon_1 = \delta_2 + \varepsilonilon_2 }} \operatorname{\mathcal{S}}um_{k \geq 0} \frac{\mathcal J_p(\varepsilonilon_1, \varepsilonilon_2, k)}{p^{D'(\varepsilonilon_1, \varepsilonilon_2) + k } } \Bigg\{ \frac{1}{p^{(\varepsilonilon_2 - \varepsilonilon_1)\boldsymbol \alphapha_1}} + \frac{1}{p^{(\varepsilonilon_2 - \varepsilonilon_1)\beta_1}} - \frac{1}{p^{-\varepsilonilon_1\beta_1 + \varepsilonilon_2\boldsymbol \alphapha_1}} \\ & \hskip 0.5 in + \pr{p^{\boldsymbol \alphapha_1 - \beta_1} - 2 + p^{\beta_1 - \boldsymbol \alphapha_1}}\operatorname{\mathcal{S}}um_{1 \leq \ell_2 \leq \varepsilonilon_2 } \frac{p^{(\ell_2 - 1)(\boldsymbol \alphapha_1 - \beta_1)}}{p^{-(\varepsilonilon_1 - \ell_2)\beta_1 + (\varepsilonilon_2 - \ell_2)\boldsymbol \alphapha_1 }} \\ & \hskip 0.5in + \mathopen{}\mathclose\bgroup\originalleft. 
\pr{p^{\boldsymbol \alphapha_1 - \beta_1} - 1} \operatorname{\mathcal{S}}um_{1 \leq \ell_2 \leq \varepsilonilon_2} p^{(\ell_2 - 1)(\boldsymbol \alphapha_1 - \beta_1)} \pr{\frac{1}{p^{(\varepsilonilon_2 - \varepsilonilon_1)\boldsymbol \alphapha_1}} + \frac{1}{p^{(\varepsilonilon_2 - \varepsilonilon_1)\beta_1}} - \frac{2 p^{\ell_2( \boldsymbol \alphapha_1 - \beta_1)}}{p^{-\varepsilonilon_1\beta_1 + \varepsilonilon_2\boldsymbol \alphapha_1}}} \aftergroup\egroup\originalright\} \\ &= \operatorname{\mathcal{S}}umtwo_{\operatorname{\mathcal{S}}ubstack{ \varepsilonilon_1, \varepsilonilon_2 \geq 0 \\ \delta_1 + \varepsilonilon_1 = \delta_2 + \varepsilonilon_2 }} \operatorname{\mathcal{S}}um_{k \geq 0} \frac{\mathcal J_p(\varepsilonilon_1, \varepsilonilon_2, k)}{p^{D + \varepsilonilon_1 + \varepsilonilon_2 + k + \varepsilonilon_1 \beta_1 - \varepsilonilon_2 \boldsymbol \alphapha_1 } }. } By similar calculation $\mathcal S_p(\delta_1, \delta_2)$ yields the same value for $\delta_1 > \delta_2.$ In summary from (\ref{eqn:initialfactorp}), (\ref{eqn:secondfactor}) and (\ref{eqn:thirdfactor}), the local factor at $p$ of $\mathcal M_{\boldsymbol \alphapha_1, \beta_1}$ is \est{\operatorname{\mathcal{S}}umtwo_{\operatorname{\mathcal{S}}ubstack{ \varepsilonilon_1, \varepsilonilon_2 \geq 0\\ \delta_1 + \varepsilonilon_1 = \delta_2 + \varepsilonilon_2 }} \operatorname{\mathcal{S}}um_{k \geq 0} \frac{\mathcal J_p(\varepsilonilon_1, \varepsilonilon_2, k)}{p^{D + \varepsilonilon_1 + \varepsilonilon_2 + k + \varepsilonilon_1 \beta_1 - \varepsilonilon_2 \boldsymbol \alphapha_1 } }, } which is the same as the local factor at $p$ of $\mathcal A \mathcal Z \pr{\tfrac 12; \pi(\boldsymbol \alpha), \pi(\boldsymbol \beta)}$ in (\ref{localpRHS}), as desired. For $p = q$, we use similar arguments, with $\delta_i = \varepsilonilon_i = 0$, so that the Euler factor is $$\operatorname{\mathcal{S}}um_{k \geq 0} \frac{\mathcal J_p(0, 0, k)}{p^{k } }, $$ which is the same as the local factor at $q$ of $\mathcal A \mathcal Z \pr{\tfrac 12; \pi(\boldsymbol \alpha), \pi(\boldsymbol \beta)}$ in (\ref{localqRHS}). This completes the proof of Proposition \ref{prop:mainTM}. \operatorname{\mathcal{S}}ection{Voronoi Summation}\label{sec:voronoi} In this section, we state the Voronoi Summation formula for the shifted $k$-divisor function defined in (\ref{def:sigma_k}). The proof of this formula is essentially the same as the proof by Ivic of the Voronoi Summation formula for the $k$-divisor function in \cite{Ivic}, so we will state the results and refer the reader to Ivic for detailed proofs. Let $\omega$ be a smooth compactly supported function. For $\boldsymbol \alpha = (\boldsymbol \alphapha_1,...,\boldsymbol \alphapha_k)$, let $\operatorname{\mathcal{S}}igma_k(n ; \boldsymbol \alpha) = \operatorname{\mathcal{S}}igma_k(n; \boldsymbol \alphapha_1, ..., \boldsymbol \alphapha_k).$ Define \est{ S\pr{\frac ac, \boldsymbol \alpha} = \operatorname{\mathcal{S}}um_{n= 1}^\infty \operatorname{\mathcal{S}}igma_k(n; \boldsymbol \alpha) e\bfrac{an}{c} \omega(n), } where $(a, c) = 1$, and the Mellin transform of $\omega$ \est{ \tilde \omega(s) = \int_0^\infty \omega(x) x^{s-1} dx. }Since $\omega$ was chosen to be from the Schwarz class, $\tilde \omega$ is entire and decays rapidly on vertical lines. We have the Mellin inversion formula \est{ \omega(n) = \frac{1}{2\pi i} \int_{(c)} \tilde \omega(s) \frac{ds}{n^s}, }where $c$ is any vertical line. 
Let \es{\label{def:Ak} A_k&\pr{m, \frac ac, \boldsymbol \alpha} \\ &= \frac 12 \sum_{\substack{m_1,..., m_k \geq 1 \\ m_1...m_k = m}} m_1^{\alpha_1}...m_k^{\alpha_k} \sum_{a_1 \mod c} ... \sum_{a_k \mod c} \pg{\e{\frac{aa_1...a_k + \mathbf a \cdot \mathbf m}{c}} + \e{\frac{-aa_1...a_k + \mathbf a \cdot \mathbf m}{c}} },} and \es{\label{def:Bk} B_k&\pr{m, \frac ac, \boldsymbol \alpha} \\ &= \frac 12 \sum_{\substack{m_1,..., m_k \geq 1 \\ m_1...m_k = m}} m_1^{\alpha_1}...m_k^{\alpha_k} \sum_{a_1 \mod c} ... \sum_{a_k \mod c} \pg{\e{\frac{aa_1...a_k + \mathbf a \cdot \mathbf m}{c}} - \e{\frac{-aa_1...a_k + \mathbf a \cdot \mathbf m}{c}} }.} Moreover, we define \est{G_k(s, n, \boldsymbol \alpha) = \frac{c^{ks}}{\pi^{ks}n^s}\pr{\prod_{\ell = 1}^k \frac{\Gamma\pr{\frac{s-\alpha_\ell}{2}}}{\Gamma\pr{\frac{1 - s+\alpha_\ell}{2}}}}, \ \ \ \ \ H_k(s, n, \boldsymbol \alpha) = \frac{c^{ks}}{\pi^{ks}n^s}\pr{\prod_{\ell = 1}^k \frac{\Gamma\pr{\frac{1+s -\alpha_\ell}{2}}}{\Gamma\pr{\frac{2 - s+\alpha_\ell}{2}}}}, } and for $0 < \sigma < \frac 12 - \frac 1k + \frac{\sum_{\ell = 1}^{k} \textrm{Re}(\alpha_\ell)}{k},$ we let \es{\label{def:UandV} U_k(x; \boldsymbol \alpha) = \frac{1}{2\pi i} \int_{(\sigma)} \prod_{\ell = 1}^k \frac{\Gamma\pr{\frac{s-\alpha_\ell}{2}}}{\Gamma\pr{\frac{1 - s+\alpha_\ell}{2}}} \frac{ds}{x^s}, \ \ \ \ \ V_k(x; \boldsymbol \alpha) = \frac{1}{2\pi i} \int_{(\sigma)} \prod_{\ell = 1}^k \frac{\Gamma\pr{\frac{1+ s-\alpha_\ell}{2}}}{\Gamma\pr{\frac{2 - s+\alpha_\ell}{2}}} \frac{ds}{x^s}.} We note that by Stirling's formula, both integrals for $U_k$ and $V_k$ are absolutely convergent. Finally, we define the Dirichlet series to be \est{ D_k \pr{s, \frac ac, \boldsymbol \alpha} = \sum_n \frac{\sigma_k(n; \boldsymbol \alpha) e\bfrac{an}{c}}{n^s}, } which converges absolutely for $\textup{Re } s > 1$. We have that \es{\label{eqn:OriginalD} D_k \pr{s, \frac ac, \boldsymbol \alpha} &= \sum_{n_1,...,n_k} \frac{e\bfrac{an_1n_2...n_k}{c}}{n_1^{s+\alpha_1}...n_k^{s+\alpha_k}} \\ &= \frac{1}{c^{ks+\alpha_1+...+\alpha_k}}\sum_{a_1,...,a_k \bmod c}e\bfrac{aa_1...a_k}{c} \prod_{j=1}^k \zeta\left(s+\alpha_j, \frac{a_j}{c}\right), } where $ \zeta \left(s, \frac ac\right)$ is the Hurwitz zeta function defined for $\textup{Re } s > 1$ as \es{ \label{def:hurwitz} \zeta \left(s, \frac ac\right) = \sum_{n=0}^\infty \frac{1}{\lr{n+\frac ac}^s}. } The Hurwitz zeta function may be analytically continued to all of $\mathbb C$ except for a simple pole at $s = 1$.
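For instance, when $k = 1$ the identity (\ref{eqn:OriginalD}) simply records the splitting of $n \geq 1$ into residue classes modulo $c$, with the representatives $a_1$ taken in $\{1, \dots, c\}$:
$$ \sum_{n = 1}^{\infty} \frac{e\bfrac{an}{c}}{n^{s+\alpha_1}} = \sum_{a_1 \bmod c} e\bfrac{aa_1}{c} \sum_{\substack{n \geq 1 \\ n \equiv a_1 \bmod c}} \frac{1}{n^{s+\alpha_1}} = \frac{1}{c^{s+\alpha_1}} \sum_{a_1 \bmod c} e\bfrac{aa_1}{c}\, \zeta\left(s+\alpha_1, \frac{a_1}{c}\right), $$
and the general case follows by performing the same splitting in each of the variables $n_1, \dots, n_k$.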
Therefore, $D_k\left(s, \tfrac ac, \boldsymbol \alpha \right)$ can be analytically continued to all of $\mathbb C$ except for simple poles at $s = 1 - \alpha_j$ for $j = 1, \dots, k.$ \begin{thm} \label{thm:voronoi} With notation as above, $(a, c) = 1$ and $\alpha_i \ll \frac{1}{1000k\log (|c| + 100)}, $ we have \est{ S\pr{\frac ac, \boldsymbol \alpha} &= \sum_{\ell = 1}^{k} \operatorname{Res}_{s = 1 - \alpha_\ell} \tilde \omega(s) D_k\pr{s, \frac ac, \boldsymbol \alpha} \\ & + \frac{\pi^{k/2 + \alpha_1 + ... + \alpha_k}}{c^{k+\alpha_1 + ... + \alpha_k}}\sum_{n = 1}^{\infty} A_k\pr{n, \frac ac, \boldsymbol \alpha} \int_0^{\infty} \omega(x) U_k\pr{\frac{\pi^knx}{c^k}; \boldsymbol \alpha} \> dx\\ & + i^{3k}\frac{\pi^{k/2 + \alpha_1 + ... + \alpha_k}}{c^{k+\alpha_1 + ... + \alpha_k}}\sum_{n = 1}^{\infty} B_k\pr{n, \frac ac, \boldsymbol \alpha} \int_0^{\infty} \omega(x) V_k\pr{\frac{\pi^knx}{c^k}; \boldsymbol \alpha} \> dx.} \end{thm} We refer the reader to the proof of Theorem 2 in \cite{Ivic} for details. Next, we collect properties of $A_3(n, a/c, \boldsymbol \alpha)$, $B_3(n, a/c, \boldsymbol \alpha)$, $U_3(x; \boldsymbol \alpha)$ and $V_3(x; \boldsymbol \alpha)$. These are useful for bounding error terms of $\mathcal M_6(q).$ \begin{lemma} \label{lem:A3B3Kloos} Let $a, n, \gamma$ be integers such that $(a, \gamma) = 1$. Moreover $A_3(n, a/c, \boldsymbol \alpha)$, $B_3(n, a/c, \boldsymbol \alpha)$ are defined as in (\ref{def:Ak}) and (\ref{def:Bk}). Then \est{\sum_{n_1n_2n_3 = n}& n_1^{\alpha_1}n_2^{\alpha_2}n_3^{\alpha_3} \sumthree_{r_1, r_2, r_3 \mod \gamma} \e{\frac{ar_1r_2r_3 + r_1n_1 + r_2n_2 + r_3n_3}{\gamma}} \\ &= \gamma \sum_{h | \gamma, h^2 | n} h \Delta(n, h, \gamma) S \pr{\frac{n}{h^2}, -\overline{a}, \frac{\gamma}{h}},} where $\Delta(n, h, \gamma)$ is a divisor function satisfying $\Delta(n, h, \gamma) \ll (\gamma n)^\varepsilon.$ Moreover, \est{A_3\pr{n, \frac{a}{\gamma}, \boldsymbol \alpha} \ll (\gamma n)^\varepsilon\gamma^{\frac 32} \sum_{h | \gamma, h^2 | n} \sqrt h, \ \ \ \ \ \ \ B_3\pr{n, \frac{a}{\gamma}, \boldsymbol \alpha} \ll (\gamma n)^\varepsilon\gamma^{\frac 32} \sum_{h | \gamma, h^2 | n} \sqrt h.} \end{lemma} The proof of this lemma can be found in Equations (8.7)-(8.9) in \cite{Ivic}.
\begin{lemma} \label{lem:asymptUandV} If $U(x; \boldsymbol \alpha) := U_3(x; \boldsymbol \alpha)$ and $V(x; \boldsymbol \alpha) := V_3(x; \boldsymbol \alpha),$ as defined in (\ref{def:UandV}), and $\alpha_i \ll \frac{1}{1000k\log (|c| + 100)}, $ then for any $0 < \varepsilon < 1/6$ and $x > 0,$ we have \es{\label{lem:asympUVxsmall} U(x; \boldsymbol \alpha) \ll x^{\varepsilon}, \hskip 1in V(x; \boldsymbol \alpha) \ll x^{\varepsilon}.} Moreover for any fixed integer $K \geq 1$ and $x \geq x_0 > 0,$ \es{\label{lem:asymptUxbig} U(x; \boldsymbol \alpha) = \sum_{j = 1}^K \frac{c_j \cos(6x^{\frac 13}) + d_j \sin (6x^{\frac 13 })}{x^{\frac j3 + \frac{\alpha_1 + \alpha_2 + \alpha_3}{3}}} + O\pr{\frac 1{x^{\frac {K+1}{3} + \frac{\alpha_1 + \alpha_2 + \alpha_3}{3}}}},} \es{\label{lem:asymptVxbig} V(x; \boldsymbol \alpha) = \sum_{j = 1}^K \frac{e_j \cos(6x^{\frac 13}) + f_j \sin (6x^{\frac 13})}{x^{\frac j3 + \frac{\alpha_1 + \alpha_2 + \alpha_3}{3} } } + O\pr{\frac 1{x^{\frac {K+1}{3} + \frac{\alpha_1 + \alpha_2 + \alpha_3}{3}}}},} with suitable constants $c_j,..., f_j$, and $c_1 = 0$, $d_1 = -\frac{2}{\sqrt {3\pi}}$, $e_1 =-\frac{2}{\sqrt {3\pi}}$, $f_1 = 0 $. \end{lemma} The proof of this lemma is a minor modification of the proof of Lemma 3 in \cite{Ivic}. \section*{Acknowledgment} Vorrapan Chandee acknowledges support from the Coordinating Center for Thai Government Science and Technology Scholarship Students (CSTS) and a National Science and Technology Development Agency (NSTDA) grant of year 2014. Part of this work was done while the first author was visiting the Mathematical Institute, University of Oxford; she is grateful for their kind hospitality. \end{document}
\begin{document} \title{Controlling Rough Paths} \author{M. Gubinelli} \address{Dipartimento di Matematica Applicata ``U.Dini'' Via Bonanno Pisano, 25 bis - 56125 Pisa, ITALIA} \email{[email protected]} \keywords{Rough path theory, Path-wise stochastic integration\\ \indent \emph{MSC Class.} 60H05; 26A42} \abstract We formulate indefinite integration with respect to an irregular function as an algebraic problem which has a unique solution under some analytic constraints. This allows us to define a good notion of integral with respect to irregular paths with H\"older exponent greater than $1/3$ (e.g. samples of Brownian motion) and to study the problem of the existence, uniqueness and continuity of solutions of differential equations driven by such paths. We recover Young's theory of integration and the main results of Lyons' theory of rough paths in H\"older topology. \endabstract \maketitle \section{Introduction} This work has grown out of the attempt of the author to understand the integration theory of T.~Lyons~\cite{Lyons,lyonsbook}, which gives a meaning and nice continuity properties to integrals of the form \begin{equation} \label{eq:line-integral} \int_s^t \langle \varphi(X_u), dX_u\rangle \end{equation} where $\varphi$ is a differential 1-form on some vector space $V$ and $t \mapsto X_t$ is a path in $V$ not necessarily of bounded variation. From the point of view of Stochastic Analysis, Lyons' theory provides a path-wise formulation of stochastic integration and of stochastic differential equations. The main feature of this theory is that a path in a vector space $V$ should not be considered determined by a function from an interval $I \subset \mathbb{R}$ to $V$: if this path is not regular enough, some additional information is needed which plays the r\^ole of the iterated integrals of regular paths, e.g. quantities like the rank two tensor \begin{equation} \label{eq:area} \mathbb{X}^{2,\mu\nu}_{st} = \int_s^t \int_s^u dX^\mu_v dX_u^\nu \end{equation} and its generalizations (see the works of K.-T.~Chen~\cite{ChenAll} for applications of iterated integrals to Algebraic Geometry and Lie Group Theory). For irregular paths the r.h.s. of eq.~(\ref{eq:area}) cannot in general be understood as a classical Lebesgue-Stieltjes integral; however, if we have \emph{any} reasonable definition for this integral, then (under some mild regularity conditions) all the integrals of the form given in eq.~(\ref{eq:line-integral}) can be defined so as to depend continuously on $X,\mathbb{X}^2$ and $\varphi$ (for suitable topologies). A \emph{rough} path is the original path together with its iterated integrals of low degree. The theory can then be extended to cover the case of more irregular paths (with H\"older exponents less than $1/3$) by a straightforward but cumbersome generalization of the arguments (the more irregular the path, the more iterated integrals are needed to characterize a rough path). With this work we would like to provide an alternative formulation of integration over rough paths which leads to the same results as Lyons' theory but is to some extent simpler and more straightforward. We will encounter an algebraic structure which is interesting in itself and corresponds to a kind of finite-difference calculus. In the original work of Lyons~\cite{Lyons} roughness is measured in the $p$-variation norm; here we prefer instead to work with H\"older-like (semi)norms, and in Sec.~\ref{sec:probability} we prove that Brownian motion satisfies our regularity requirements.
In a recent work Friz~\cite{fritz} has established H\"older regularity of Brownian rough paths (according to Lyons' theory) and used this result to give an alternative proof of the support theorem for diffusions. This work has later been extended by Friz and Victoir~\cite{fritz2} by interpreting Brownian rough paths as suitable processes on the free nilpotent group of step $2$: regularity of Brownian rough paths can then be seen as a consequence of standard H\"older regularity results for stochastic processes on groups. We will start by reformulating in Sec.~\ref{sec:algebraic_prelude} the classical integral as the unique solution of an algebraic problem (adjoined with some analytic condition to enforce uniqueness) and then generalize this problem, building an abstract tool for its solution. As a first application we rediscover in Sec.~\ref{sec:young} the integration theory of Young~\cite{young}, which was the prelude to the deeper theory of Lyons. Essentially, Young's theory defines the integral $$ \int_s^t f_u dg_u $$ when $f$ is $\gamma$-H\"older continuous, $g$ is $\rho$-H\"older continuous and $\gamma+\rho > 1$ (actually, the original argument was given in terms of $p$-variation norms). This will be mainly an exercise to become familiar with the approach before discussing the integration theory for more irregular paths in Sec.~\ref{sec:irregular}. We will define integration for a large class of paths whose increments are controlled by a fixed reference rough path. This is the main difference from the approach of Lyons. Next, to illustrate an application of the theory, we discuss the existence and uniqueness of solutions of ordinary differential equations driven by irregular paths (Sec.~\ref{sec:ode}). In particular, sufficient conditions for existence will be given in the case of $\gamma$-H\"older paths with $\gamma > 1/3$ which are weaker than those required to get uniqueness. This answers a question raised in Lyons~\cite{Lyons}. In Sec.~\ref{sec:probability} we prove that Brownian motion and the second iterated integral provided by It\^o or Stratonovich integration are H\"older regular rough paths to which the theory outlined above can be applied. Finally we show how to prove the main results of Lyons' theory (extension of multiplicative paths and the existence of a map from almost-multiplicative to multiplicative paths) within this approach. This last section is intended only for readers already acquainted with Lyons' theory (extensive accounts are present in the literature, see e.g.~\cite{Lyons,lyonsbook}). In appendix~\ref{app:proofs} we collect some lengthy proofs. \section{Algebraic prelude} \label{sec:algebraic_prelude} Consider the following observation. Let $f$ be a bounded continuous function on $\mathbb{R}$ and $x$ a function on $\mathbb{R}$ with continuous first derivative. Then there exists a unique couple $(a,r)$ with $a \in C^1(\mathbb{R})$, $a_0 = 0$ and $r \in C(\mathbb{R}^2)$ such that \begin{equation} \label{eq:problem} f_s(x_t-x_s) = a_t-a_s - r_{st} \end{equation} and \begin{equation} \label{eq:condition} \lim_{t\to s} \frac{|r_{st}|}{|t-s|} = 0. \end{equation} This unique couple $(a,r)$ is given by $$ a_t = \int_0^t f_u dx_u, \qquad r_{st} = \int_s^t (f_u-f_s) dx_u. $$ The indefinite integral $\int f dx$ is the unique solution $a$ of the algebraic problem~(\ref{eq:problem}) with the additional requirement~(\ref{eq:condition}) on the remainder $r$.
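As a minimal illustration, take for instance $f_u = \sin u$ and $x_u = u$: then
\begin{equation*}
a_t = 1 - \cos t, \qquad r_{st} = (\cos s - \cos t) - (t-s)\sin s ,
\end{equation*}
and one checks directly that $f_s(x_t - x_s) = (a_t - a_s) - r_{st}$, while $|r_{st}| = \left|\int_s^t (\sin u - \sin s)\, du\right| \le \tfrac{1}{2}|t-s|^2$, so that condition~(\ref{eq:condition}) is indeed satisfied.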
Since eq.~(\ref{eq:problem}) makes sense for arbitrary functions $f,x$ it is natural to investigate the possible existence and uniqueness of regular solutions. This will lead to a generalization of the integral $\int f dx$ for functions $x$ not necessarily of finite variation. \subsection{Framework} Let $\mathcal{C}$ be the algebra of bounded continuous functions from $\mathbb{R}$ to $\mathbb{R}$ and $\Omega \mathcal{C}_n$ ($n > 0$) the subset of bounded continuous functions from $\mathbb{R}^{n+1}$ to $\mathbb{R}$ which are zero on the main diagonal where all the arguments are equal, i.e. $R \in \Omega \mathcal{C}_n$ implies $R_{t_1\dots t_{n+1}}=0$ if $t_1 = t_2 = \cdots = t_{n+1}$. In this paper we will call elements of $\Omega \mathcal{C}_n$ (for any $n > 0$) \emph{processes} to distinguish them from \emph{paths}, which are elements of $\mathcal{C}$. The vector spaces $\Omega \mathcal{C}_n$ are $\mathcal{C}$-bimodules with left multiplication $(AB )_{t_1\cdots t_{n+1}} := A_{t_1} B_{t_1\cdots t_{n+1}}$ and right multiplication $( B A)_{t_1\cdots t_{n+1}} := A_{t_{n+1}} B_{t_1\cdots t_{n+1}}$ for all $(t_1,\dots,t_{n+1}) \in \mathbb{R}^{n+1}$, $A \in \mathcal{C}$ and $B \in \Omega \mathcal{C}_n$. Moreover if $A \in \Omega \mathcal{C}_n$ and $B \in \Omega \mathcal{C}_m$ their external product $AB \in \Omega \mathcal{C}_{m+n-1}$ is defined as $(AB)_{t_1\cdots t_{m+n-1} } = A_{t_1\cdots t_n} B_{t_{n}\cdots t_{n+m-1}}$. In the following we will write $\Omega \mathcal{C}$ for $\Omega \mathcal{C}_1$. The application $\delta\! : \mathcal{C} \to \Omega \mathcal{C}$ defined as \begin{equation} \label{eq:derivation} (\delta\! A)_{st} := A_t - A_s \end{equation} is a derivation on $\mathcal{C}$ since $\delta\! (AB) = A \delta\! B + \delta\! A B = B \delta\! A + \delta\! B A$. Let $\Omega \mathcal{C}^\gamma$ be the subspace of elements $X \in \Omega \mathcal{C}$ such that \begin{equation*} \|X\|_\gamma := \sup_{(s,t) \in \mathbb{R}^2} \frac{|X_{st}|}{|t-s|^\gamma} < \infty \end{equation*} and let $\mathcal{C}^\gamma$ be the subspace of the elements $A \in \mathcal{C}$ such that $\|\delta\! A\|_\gamma < \infty$. Define $\Omega \mathcal{C}_2^{\rho,\gamma} $ as the subspace of elements $X$ of $\Omega \mathcal{C}_2$ such that \begin{equation*} \|X\|_{\rho,\gamma} := \sup_{(s,u,t) \in \mathbb{R}^3}\frac{|X_{sut}|}{|u-s|^\rho |t-u|^{\gamma}} < \infty \end{equation*} Let $\Omega \mathcal{C}_2^z := \oplus_{\rho > 0} \Omega \mathcal{C}_2^{\rho,z-\rho}$: an element $A \in \Omega \mathcal{C}_2^z$ is a finite linear combination of elements $A_i \in \Omega \mathcal{C}_2^{\rho_i,z-\rho_i} $ for some $\rho_i \in (0,z)$. Define the linear operator $N : \Omega \mathcal{C} \to \Omega \mathcal{C}_2$ as \begin{equation*} (N R)_{sut} := R_{st}-R_{ut}-R_{su} \end{equation*} and let $\mathcal{Z}_2 := N( \Omega \mathcal{C} )$ and $\mathcal{Z}_2^z := \Omega \mathcal{C}^z_2 \cap \mathcal{Z}_2$. We have that $\text{Ker} N = \text{Im}\, \delta\!$. Indeed $N \delta\! A = 0$ for all $A \in \mathcal{C}$ and it is easy to see that for each $R \in \Omega \mathcal{C}$ such that $N R = 0$ we can let $A_t = R_{0t}$ to obtain that $\delta\! A = R$.
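Explicitly, the first of these two facts is just the computation
\begin{equation*}
(N \delta\! A)_{sut} = \delta\! A_{st} - \delta\! A_{ut} - \delta\! A_{su} = (A_t - A_s) - (A_t - A_u) - (A_u - A_s) = 0 ,
\end{equation*}
valid for any $A \in \mathcal{C}$ and $(s,u,t) \in \mathbb{R}^3$, which gives the inclusion $\text{Im}\, \delta\! \subseteq \text{Ker}\, N$.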
If $F \in \mathcal{C}$ and $R \in \Omega \mathcal{C}$ then a straightforward computation shows that \begin{equation} \label{eq:leibnitz_n} \begin{gathered} N(F R)_{sut} = F_s N(R)_{sut} - \delta\! F_{su} R_{ut} = (F N(R) - \delta\! F R )_{sut}; \\ N(R F)_{sut} = F_{t} N(R)_{sut} + R_{su} \delta\! F_{ut}= (N(R) F + R \delta\! F)_{sut}. \end{gathered} \end{equation} These equations suggest that the operators $\delta$ and $N$ enjoy remarkable algebraic properties. Indeed they are just the first two members of a family of linear operators which act as derivations on the modules $\Omega \mathcal{C}_k$, $k=0,1,\dots$ and which can be characterized as the coboundaries of a cochain complex which we proceed to define. \subsection{A cochain complex} Consider the following chain complex: a \emph{simple} chain of degree $n$ is a string $[t_1 t_2 \cdots t_n]$ of real numbers and a chain of degree $n$ is a formal linear combination of simple chains of the same degree with coefficients in $\mathbb{Z}$. The boundary operator $\partial$ is defined as \begin{equation} \label{eq:boundary} \partial[t_1 \dots t_n] = \sum_{i=1}^n (-1)^{i} [t_1 \cdots \hat t_i \cdots t_n] \end{equation} where $\hat t_i$ means that this element is removed from the string. For example \begin{equation*} \partial[st] = -[t]+[s], \qquad \partial [sut] = -[ut]+[st]-[su]. \end{equation*} It is easy to verify that $\partial \partial = 0$. To this chain complex is adjoined in a standard way a complex of cochains (which are linear functionals on chains). A cochain $A$ of degree $n$ acts on simple chains of degree $n$ as \begin{equation*} \langle [t_1\cdots t_n],A\rangle = A_{t_1\cdots t_n}. \end{equation*} The coboundary $\partial^*$ acts on cochains of degree $n$ as \begin{equation} \label{eq:coboundary} \begin{split} (\partial^* A)_{t_1\cdots t_{n+1}} & = \langle[t_1\cdots t_{n+1}],\partial^* A \rangle = \langle \partial [t_1\cdots t_{n+1}], A \rangle \\ & = \sum_{i=1}^{n+1} (-1)^i \langle [t_1\cdots \hat t_i \cdots t_{n+1}], A \rangle = \sum_{i=1}^{n+1} (-1)^i A_{t_1\cdots \hat t_i \cdots t_{n+1}}, \end{split} \end{equation} \emph{e.g.} for cochains $A,B$ of degree $1$ and $2$ respectively, we have \begin{equation*} (\partial^* A)_{st} = A_s - A_t, \qquad (\partial^* B)_{sut} = B_{st}-B_{ut}-B_{su}, \end{equation*} so that we have natural identifications of $\partial^*$ with $-\delta$ when acting on $1$-cochains and with $N$ when acting on $2$-cochains. We recognize also that elements of $\Omega \mathcal{C}_{n-1}$ ($\Omega \mathcal{C}_0 = \mathcal{C}$) are $n$-cochains and that we have the following complex of modules: \begin{equation*} 0 \rightarrow \mathbb{R} \rightarrow \mathcal{C} \stackrel{\partial^*}{\longrightarrow} \Omega \mathcal{C} \stackrel{\partial^*}{\longrightarrow} \Omega \mathcal{C}_2 \stackrel{\partial^*}{\longrightarrow} \Omega \mathcal{C}_3 \rightarrow \cdots \end{equation*} As usual $\partial^* \partial^* = 0$, which means that the image of $\partial^*|_{\Omega \mathcal{C}_n}$ is in the kernel of $\partial^*|_{\Omega \mathcal{C}_{n+1}}$. Since $\text{Ker}N = \text{Im}\delta\!\ $ the above sequence is exact at $\Omega \mathcal{C}$. Actually, the sequence is exact at every $\Omega \mathcal{C}_n$: let $A$ be an $(n+1)$-cochain such that $\partial^* A = 0$. Let us show that there exists an $n$-cochain $B$ such that $A = \partial^* B$. Take $$ B_{t_1\cdots t_n} = (-1)^{n+1} A_{t_1\cdots t_n s} $$ where $s$ is an arbitrary reference point.
Then compute \begin{equation*} \begin{split} (\partial^* B)_{t_1\cdots t_{n+1}} & = - B_{t_2\cdots t_{n+1}} + B_{t_1 \hat t_2 \cdots t_{n+1}} + \cdots + (-1)^{n+1} B_{t_1 \cdots t_{n}} \\ & = (-1)^{n+1}[- A_{t_2\cdots t_{n+1} s} + A_{t_1 \hat t_2 \cdots t_{n+1} s} + \cdots + (-1)^{n+1} A_{t_1 \cdots t_{n} s}] \\ & = (-1)^{n+1}[(\partial^* A)_{t_1 t_2 \cdots t_{n+1} s} - (-1)^{n+2} A_{t_1 \cdots t_{n+1}}] = A_{t_1 \cdots t_{n+1}}. \end{split} \end{equation*} As an immediate corollary we can introduce the operator $N_2 : \Omega \mathcal{C}_2 \to \Omega \mathcal{C}_3$, $N_2 := \partial^*|_{\Omega \mathcal{C}_2}$, and characterize the image of $N$ as the kernel of $N_2$. Note that, for example, $N_2$ satisfies a Leibniz rule: if $A,B \in \Omega \mathcal{C}_2$, \begin{equation} \label{eq:leibnitz_n2} \begin{split} N_2 (AB)_{suvt} & = \partial^*(AB)_{suvt} = -(AB)_{uvt}+(AB)_{svt}-(AB)_{sut}+(AB)_{suv} \\ & = -A_{uv} B_{vt}+A_{sv}B_{vt}-A_{su}B_{ut}+A_{su}B_{uv} \\ & = (N A)_{suv} B_{vt} - A_{su} (N B)_{uvt} \\ & = (N A B - A N B)_{suvt} . \end{split} \end{equation} To understand the relevance of this discussion to our problem let us reformulate the observation at the beginning of this section as follows: \begin{problem} \label{prob:0} Given two paths $F,X \in \mathcal{C}$, is it possible to find a (possibly) unique decomposition \begin{equation} \label{eq:problem00} F \delta\! X = \delta\! A - R \end{equation} where $A \in \mathcal{C}$ and $R \in \Omega \mathcal{C}$? \end{problem} To have uniqueness of this decomposition we should require that $\delta\! A$ be (in some sense) orthogonal to $R$. So we are looking for a canonical decomposition $\Omega \mathcal{C} \simeq \delta\!\, \mathcal{C} \oplus \mathcal{B}$ where $\mathcal{B}$ is a linear subspace of $\Omega \mathcal{C}$ which should contain the remainder $R$. This decomposition is equivalent to the possibility of splitting the short exact sequence \begin{equation*} 0 \rightarrow \mathcal{C}/\mathbb{R} \stackrel{\delta\!\ }{\longrightarrow} \Omega \mathcal{C} \stackrel{N}{\longrightarrow} \mathcal{Z}_2 \rightarrow 0. \end{equation*} We cannot hope to achieve the splitting in full generality, so we must resort to considering an appropriate linear subspace $\mathcal{E}$ of $\Omega \mathcal{C}$ which contains $\delta\!\,\mathcal{C}$ and for which we can show that there exists a linear function $\Lambda_{\mathcal{E}} : N \mathcal{E} \to \mathcal{E}$ such that $$ N \Lambda_{\mathcal{E}} = 1_{N \mathcal{E}}. $$ Then $\Lambda_{\mathcal{E}}$ splits the short exact sequence \begin{equation*} 0 \rightarrow \mathcal{C}/\mathbb{R} \stackrel{\delta\!\ }{\longrightarrow} \mathcal{E} \stackrel{N}{\longrightarrow} N{\mathcal{E}} \rightarrow 0 \end{equation*} which implies $\mathcal{E} = \delta \mathcal{C} \oplus N \mathcal{E}$. In this case, if $F \delta\! X \in \mathcal{E}$ we can recover $\delta\! A$ as \begin{equation} \label{eq:split1} \delta\! A = F \delta\! X - \Lambda_{\mathcal{E}} N (F\delta\! X). \end{equation} To identify a subspace $\mathcal{E}$ for which the splitting is possible we note that $$\text{Im} \delta \cap \Omega \mathcal{C}^z = \{ 0 \}$$ for all $z > 1$: indeed, if $X = \delta\! A$ for some $A \in \mathcal{C}$ and $X \in \Omega \mathcal{C}^z$ then $A \in \mathcal{C}^z$, which implies $A = \text{const}$ if $z > 1$.
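This restriction on $z$ cannot be weakened: for instance the bounded, non-constant path $A_t = \sin t$ satisfies
\begin{equation*}
|\delta\! A_{st}| = |\sin t - \sin s| \le \min(|t-s|, 2) \le 2\, |t-s|^{z} \qquad \text{for every } z \le 1 ,
\end{equation*}
so that $\text{Im}\, \delta \cap \Omega \mathcal{C}^z \neq \{0\}$ as soon as $z \le 1$.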
Then we can reformulate the algebraic characterization of integration given at the beginning of this section as the following problem: \begin{problem} \label{prob:1} Given two paths $F,X \in \mathcal{C}$, is it possible to find $A \in \mathcal{C}$ and $R \in \Omega \mathcal{C}^z$ for some $z > 1$ such that the decomposition \begin{equation} \label{eq:problem0} F \delta\! X = \delta\! A - R \end{equation} holds? \end{problem} Note that if such a decomposition exists then it is automatically unique: if $F \delta\! X = \delta\! A' - R'$ is another one, we have that $R - R' = \delta\!(A-A')$, but since $R-R' \in \Omega \mathcal{C}^{z} \cap \ker N$ we get $R = R'$ and thus $A = A'$ modulo a constant. That Problem~\ref{prob:1} cannot always be solved is clear from the following consideration: let $F = X$ and apply $N$ to both sides of eq.~(\ref{eq:problem0}) to obtain \begin{equation*} \delta\! X_{su} \delta\! X_{ut} = N R_{sut} \end{equation*} for all $(s,u,t) \in \mathbb{R}^3$. Evaluating this relation at the triple $(s,u,t)=(s,t,s)$ and using $R_{ss} = 0$ we get \begin{equation*} \delta\! X_{st} \delta\! X_{ts} = N R_{sts} = -R_{st}-R_{ts}, \qquad \text{i.e.} \qquad (\delta\! X_{st})^2 = R_{st}+R_{ts} \end{equation*} for all $(s,t) \in \mathbb{R}^2$. Now, if $R \in \Omega \mathcal{C}^z$ with $z > 1$ then \begin{equation} \label{eq:counterex0} |\delta\! X_{st}| |\delta\! X_{st}| \le 2 \|R\|_z |t-s|^z, \end{equation} which implies that $X \in \mathcal{C}^{z/2}$. So unless this last condition is fulfilled we cannot solve Problem~\ref{prob:1} with the required regularity on $R$. A sufficient condition for a solution to Problem~\ref{prob:1} to exist is given by the following result, which states sufficient conditions on $A \in \Omega \mathcal{C}_2$ for which the algebraic problem $$ N R = A $$ has a unique solution $R \in \Omega \mathcal{C}/\delta\!\, \mathcal{C}$. \subsection{The main result} For every $A \in \mathcal{Z}_2^z$ with $z > 1$ there exists a unique $R \in \Omega \mathcal{C}^z$ such that $N R = A$: \begin{proposition} \label{prop:main} If $z > 1$ there exists a unique linear map $\Lambda: \mathcal{Z}^{z}_2 \to \Omega \mathcal{C}^z$ such that $N \Lambda = 1_{\mathcal{Z}_2^z}$ and such that for all $A \in \mathcal{Z}^z_2$ we have \begin{equation*} \|\Lambda A\|_{z} \le \frac{1}{2^z-2} \sum_{i=1}^n \|A_i\|_{\rho_i,z-\rho_i} \end{equation*} if $A = \sum_{i=1}^n A_i $ with $n \ge 1$, $0 < \rho_i < z$ and $A_i \in \Omega \mathcal{C}_2^{\rho_i,z-\rho_i}$ for $i=1,\dots,n$. \end{proposition} \subsection{Localization} If $I \subset J$ denote by $A|_I$ the restriction to $I$ of a function $A$ defined on $J$. The operator $\Lambda$ is local in the following sense: \begin{proposition} \label{prop:local} If $I \subset \mathbb{R}$ is an interval and $A,B \in \mathcal{Z}_2^z$ with $z > 1$ then \begin{equation*} A |_{I^3} = B |_{I^3} \implies \Lambda A |_{I^2}= \Lambda B|_{I^2} . \end{equation*} \end{proposition} \proof This follows essentially from the same argument which gives the uniqueness of $\Lambda$. Indeed if $Q = \Lambda A - \Lambda B$ we have that $N Q = A-B$, which vanishes when restricted to $I^3$. So for $(s,t) \in I^2$ and $t \le u \le s$ we have $$ Q_{ut}= Q_{st} - Q_{su} $$ but since $Q \in \Omega \mathcal{C}^z$ with $z>1$ we get $Q |_{I^2} = 0$.
\qed Given an interval $I =[a,b]\subset \mathbb{R}$ and defining in the obvious way the corresponding spaces $\mathcal{C}^\gamma(I)$, $\Omega \mathcal{C}_n^\gamma(I)$, etc\dots we can introduce the operator $\Lambda_I : \mathcal{Z}_2^z(I) \to \Omega \mathcal{C}^z(I)$ as $\Lambda_I A := \Lambda \tilde A |_{I^2}$ where $\tilde A \in \mathcal{Z}_2^z$ is an arbitrary extension of the element $A \in \mathcal{Z}_2^z(I)$. By the locality of $\Lambda$ any choice of the extension $\tilde A$ will give the same result; moreover the specific choice $ \tilde A_{sut} := A_{\tau(s),\tau(u),\tau(t)} $, where $\tau(t) := (t \wedge b)\vee a$, has the virtue of satisfying the following bound \begin{equation*} \|\tilde A_i\|_{\rho_i,z-\rho_i} \le \| A_i\|_{\rho_i,z-\rho_i,I} \end{equation*} where $\| \cdot \|_{\rho_i,z-\rho_i,I}$ is the norm on $\Omega \mathcal{C}_2^z(I)$ and $A = \sum_i A_i$ is a decomposition of $A$ in $\Omega \mathcal{C}_2^z(I)$, so that we have \begin{equation*} \|\Lambda_I A\|_{z,I} \le \frac{1}{2^z-2} \sum_i \| A_i\|_{\rho_i,z-\rho_i,I} . \end{equation*} We will write $\Lambda$ instead of $\Lambda_I$ whenever the interval $I$ can be deduced from the context. \subsection{Notations} In the following we will have to deal with tensor products of vector spaces and we will use the ``physicist'' notation for tensors. We will use $V,V_1,V_2,\dots$ to denote vector spaces which will always be finite dimensional\footnote{In many of the arguments this would not be necessary, but to handle infinite-dimensional Banach spaces some care should be exercised in the definition of norms on tensor products. We prefer to skip this issue for the sake of clarity.}. Then, if $V$ is a vector space, $A \in V$ will be denoted by $A^\mu$, where $\mu$ is the corresponding vector index (in an arbitrary but fixed basis), ranging from $1$ to the dimension of $V$; elements in $V^*$ (the linear dual of $V$) are denoted by $A_\mu$ with lower indexes, elements in $V\otimes V$ will be denoted by $A^{\mu\nu}$, elements of $V^{\otimes 2}\otimes V^*$ as $A^{\mu\nu}_{\kappa}$, etc\dots Summation over repeated indexes is understood whenever not explicitly stated otherwise: $A_\mu B^\mu$ is the scalar obtained by contracting $A \in V^*$ with $B \in V$. Symbols like $\bar \mu, \bar \nu,\dots$ (a bar over a greek letter) will denote vector multi-indexes, i.e. if $\bar \mu = (\mu_1,\dots,\mu_n)$ then $ A^{\bar \mu} $ is the element $A^{\mu_1,\dots,\mu_n}$ of $V^{\otimes n}$. Given two multi-indexes $\bar\mu$ and $\bar\nu$ we can build another multi-index $\bar\mu\bar\nu$ which is composed of all the indices of $\bar\mu$ and $\bar \nu$ in sequence. With $|\bar\mu|$ we denote the degree of the multi-index $\bar \mu$, i.e. if $\bar \mu = (\mu_1,\dots,\mu_n)$ then $|\bar\mu| = n$; for example $|\bar \mu \bar \nu| = |\bar \mu|+|\bar \nu|$. By convention we introduce also the empty multi-index, denoted by $\emptyset$, such that $\bar \mu \emptyset = \emptyset \bar \mu = \bar \mu$ and $|\emptyset| = 0$. Symbols like $\mathcal{C}(V)$, $\Omega \mathcal{C}(V)$, $\mathcal{C}(I,V)$, etc\dots (where $I$ is an interval) will denote paths and processes with values in the vector space $V$. Moreover the symbol $K$ will denote arbitrary strictly positive constants, possibly different from equation to equation and not depending on any of the relevant quantities.
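Before turning to integration, let us record what the map $\Lambda$ of Proposition~\ref{prop:main} produces in the smooth situation considered at the beginning of this section. If $F$ and $x$ are bounded with bounded continuous derivatives, then $\delta\! F\, \delta\! x \in \Omega \mathcal{C}_2^{1,1}$, and the process
\begin{equation*}
R_{st} := \int_s^t (F_u - F_s)\, dx_u
\end{equation*}
satisfies $R_{st} - R_{ut} - R_{su} = (F_u - F_s)(x_t - x_u)$, i.e. $N R = \delta\! F\, \delta\! x$, together with $\|R\|_2 < \infty$; hence by uniqueness $\Lambda(\delta\! F\, \delta\! x) = R$, in accordance with the observation at the beginning of this section.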
\section{Young's theory of integration} \label{sec:young} Proposition~\ref{prop:main} allows us to solve Problem~\ref{prob:1} when $F \in \mathcal{C}^\rho$, $X \in \mathcal{C}^\gamma$ with $\gamma+\rho > 1$: in this case $$N (F \delta\! X)_{sut} = - \delta\! F_{su} \delta\! X_{ut}$$ so that $N(F \delta\! X) \in \mathcal{Z}_2^{\gamma+\rho}$. Then since $N(F \delta\! X - \Lambda N (F \delta\! X)) =0 $ there exists a unique $A \in \mathcal{C}$ (modulo a constant) such that \begin{equation*} \delta\! A = F \delta\! X - \Lambda N (F \delta\! X). \end{equation*} \begin{proposition}[Young] \label{prop:young} Fix an interval $I \subseteq \mathbb{R}$. If $F \in \mathcal{C}^\rho(I)$ and $X \in \mathcal{C}^\gamma(I)$ with $\gamma + \rho > 1$ define \begin{equation} \label{eq:integral_young} \int_s^t F_u dX_u := \left[F \delta\! X - \Lambda N(F \delta\! X)\right]_{st}, \qquad s,t \in I. \end{equation} Then we have \begin{equation} \label{eq:bound_young} \left|\int_s^t (F_u-F_s) dX_u \right|\le \frac{1}{2^{\gamma+\rho}-2} |t-s|^{\gamma+\rho} \|F\|_{\rho,I} \|X\|_{\gamma,I}, \qquad s,t \in I. \end{equation} \end{proposition} \proof It is immediate, observing that by definition \begin{equation*} \int_s^t (F_u-F_s) dX_u = - [\Lambda N (F \delta\! X)]_{st} = [\Lambda(\delta\! F \delta\! X)]_{st} \end{equation*} and using the previous results. \qed Another justification of this definition of the integral comes from the following convergence of discrete sums, which also establishes the equivalence of this theory of integration with that of Young. \begin{corollary} \label{cor:sums_young} In the hypothesis of the previous Proposition we have \begin{equation*} \int_s^t F_u dX_u = \lim_{|\Pi|\to 0} \sum_{\{t_i\}\in \Pi} F_{t_i} (X_{t_{i+1}}-X_{t_{i}}) , \qquad s,t \in I \end{equation*} where the limit is taken over partitions $\Pi = \{t_0,t_1,\dots,t_n\}$ of the interval $[s,t] \subseteq I$ such that $t_0 = s, t_n = t$, $t_{i+1}>t_i$, $|\Pi| = \sup_i |t_{i+1}-t_i|$. \end{corollary} \proof For any partition $\Pi$ write \begin{equation*} \begin{split} S_\Pi &= \sum_{i=0}^{n-1} F_{t_i} (X_{t_{i+1}}-X_{t_{i}}) = \sum_{i=0}^{n-1} (F \delta\! X)_{t_i t_{i+1} } = \sum_{i=0}^{n-1} (\delta\! A - R)_{t_i t_{i+1} } \end{split} \end{equation*} with $R \in \Omega \mathcal{C}^{\gamma+\rho}(I)$ given by $R = \Lambda (\delta\! F \delta\! X)$ and such that (cf. Prop.~\ref{prop:young}) $$ \norm{R}_{\gamma+\rho,I} \le \frac{1}{2^{\gamma+\rho}-2} \norm{F}_{\rho,I} \norm{X}_{\gamma,I}. $$ Then \begin{equation} \begin{split} S_\Pi & = A_t-A_s- \sum_{i=0}^{n-1} R_{t_i t_{i+1}} = \int_s^t F_u dX_u - \sum_{i=0}^{n-1} R_{t_i t_{i+1}}. \end{split} \end{equation} But now, since $\gamma+\rho>1$, \begin{equation*} \sum_{i=0}^{n-1} |R_{t_i t_{i+1}}| \le \norm{R}_{\gamma+\rho,I} \sum_{i=0}^{n-1} |t_{i+1}-t_i|^{\gamma+\rho} \le \norm{R}_{\gamma+\rho,I} |\Pi|^{\gamma+\rho-1} |t-s| \to 0 \end{equation*} as $|\Pi|\to 0$. \qed \section{More irregular paths} \label{sec:irregular} In order to solve Problem~\ref{prob:0} for a wider class of $F$ and $X$ we are led to dispense with the condition $R \in \Omega \mathcal{C}^{z}$ with $z > 1$ and thus lose the uniqueness of the decomposition: if the couple $(A,R)$ solves the problem, then also $(A+B,R+\delta\! B)$ solves the problem for a nontrivial $B \in \mathcal{C}^z$. So our aim is actually to find a distinguished couple $(A,R)$ which will be characterized by some additional conditions.
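For instance, if we only require $R \in \Omega \mathcal{C}^{z}$ for some $z \le 1$, then the bounded Lipschitz path $B_t = \sin t$ belongs to $\mathcal{C}^{z}$ and from any admissible couple $(A,R)$ we obtain a genuinely different one,
\begin{equation*}
F\, \delta\! X = \delta\! (A+B) - (R + \delta\! B), \qquad R + \delta\! B \in \Omega \mathcal{C}^{z} ,
\end{equation*}
so some additional structure is indeed needed to single out a preferred decomposition.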
Up to now we have considered only paths with values in $\mathbb{R}$, since the general case of vector-valued paths could be easily derived; however, in the case of more irregular paths the vector nature of the paths will play a prominent r\^ole, so from now on we will consider paths with values in (finite-dimensional) Banach spaces $V$,$V_1$,\dots Let $X \in \mathcal{C}^\gamma(V)$ be a path with values in the Banach space $V$ for some $\gamma > 0$ and \emph{assume} that we are given a tensor process $\mathbb{X}^2$ in $\Omega \mathcal{C}^{2\gamma}(V^{\otimes 2})$ such that \begin{equation} \label{eq:two-process} N (\mathbb{X}^{2,\mu\nu}) = \delta\! X^\mu \delta\! X^\nu. \end{equation} If $\gamma \le 1/2$ we cannot obtain this process using prop.~\ref{prop:main}, but (as we will see in Sec.~\ref{sec:probability}) there are other natural ways to build such a process for special paths $X$. We can think of the arbitrary choice of $\mathbb{X}^2$ among all the possible solutions (with given regularity $2\gamma$) of eq.~(\ref{eq:two-process}) as a way to resolve the ambiguity of the decomposition in Problem~\ref{prob:0}, since in this case $$ X^\mu \delta\! X^\nu = \delta\! I^{\mu\nu} - \mathbb{X}^{2,\mu\nu} $$ and so we are able to integrate any component of $X$ with respect to any other, and we can write $$ \int_s^t X_u^\mu dX^\nu_u = \delta\! I^{\mu\nu}_{st} $$ meaning that the integral on the l.h.s. is defined by the r.h.s., a definition which depends on our choice of $\mathbb{X}^2$. Of course in this case Corollary~\ref{cor:sums_young} does not hold anymore and discrete sums of $X\delta\! X$ are not guaranteed to converge to $\int X dX$. Note that in the scalar case the equation $$ X \delta\! X = \delta\! I - R $$ with $X \in \mathcal{C}^\gamma$ always has a solution, given by $ I_{t} = X_t^2/2 + \text{const} $, for which $$ \delta\! I_{st} = \frac{1}{2}X_t^2 - \frac{1}{2} X_s^2 =\frac{1}{2} X_t(X_t-X_s) + \frac{1}{2} X_s(X_t-X_s) = X_s \delta\! X_{st} + \frac{1}{2} (\delta\! X_{st})^2 , $$ giving the decomposition $\delta\! I = X \delta\! X + R$ with $R \in \Omega \mathcal{C}^{2\gamma}$. The same argument works for the symmetric part of the two-tensor $\mathbb{X}^2$: if $X \in \mathcal{C}^\gamma(V)$ there exists a two-tensor $S \in \Omega \mathcal{C}^{2\gamma}(V \otimes V)$ given by $$ S_{st}^{\mu\nu} = \frac{1}{2}\delta\! X_{st}^\mu \delta\! X_{st}^\nu $$ for which $$ N S^{\mu\nu} = \frac{1}{2} (\delta\! X^\mu \delta\! X^\nu + \delta\! X^\nu \delta\! X^\mu) ; $$ of course $S$ is not unique as soon as $\gamma \le 1/2$. Since one of the features of the integral we wish to retain is linearity, we must agree that if $A$ is a linear application from $V$ to $V$ and $Y^\mu_t = A^{\mu}_\nu X^\nu_t$ then the integral $\delta\! I = \int Y d X$ must be such that \begin{equation*} Y^\mu \delta\! X^\nu = A^{\mu}_{\kappa} X^\kappa \delta\! X^\nu = \delta\! I^{\mu\nu} - A^{\mu}_{\kappa} \mathbb{X}^{2,\kappa\nu} , \end{equation*} so \begin{equation*} \delta\! I^{\mu\nu} = Y^\mu \delta\! X^\nu + A^{\mu}_{\kappa} \mathbb{X}^{2,\kappa\nu} \end{equation*} and we have fixed at once the values of all the integrals of linear functions of the path $X$ w.r.t. $X$. Then consider a path $Y$ which is only \emph{locally} a linear function of $X$, i.e. such that \begin{equation} \label{eq:expansion} \delta\! Y^\mu = G^{\mu}_\nu \delta\! X^\nu + Q^\mu \end{equation} where $Q$ is a ``remainder'' in $\Omega \mathcal{C}(V)$ and $G$ is a path in $\mathcal{C}(V \otimes V^*)$. In order to be able to show that $Y$ is integrable w.r.t.
$X$ we must find a solution $R$ of the equation \begin{equation*} N R^{\mu\nu} = \delta\! Y^\mu \delta\! X^\nu , \end{equation*} but then, using the local expansion given in eq.~(\ref{eq:expansion}), \begin{equation*} \begin{split} N R^{\mu\nu} & = G^{\mu}_\kappa \delta\! X^\kappa\delta\! X^\nu + Q^\mu \delta\! X^\nu \\ & = G^{\mu}_\kappa N (\mathbb{X}^{2,\kappa\nu}) + Q^\mu \delta\! X^\nu \\ & = N(G^{\mu}_\kappa \mathbb{X}^{2,\kappa\nu}) + \delta G^{\mu}_\kappa \mathbb{X}^{2,\kappa\nu} + Q^\mu \delta\! X^\nu \end{split} \end{equation*} where we have used eq.~(\ref{eq:leibnitz_n}) (the Leibniz rule for $N$). Finding a solution $R$ is then equivalent to letting $$ \widetilde R^{\mu\nu} = R^{\mu\nu} - G^{\mu}_\kappa \mathbb{X}^{2,\kappa\nu} $$ and solving \begin{equation} \label{eq:rtildexx} N \widetilde R^{\mu\nu} = \delta G^{\mu}_\kappa \mathbb{X}^{2,\kappa\nu} + Q^\mu \delta\! X^\nu. \end{equation} Sufficient conditions to apply Prop.~\ref{prop:main} to solve eq.~(\ref{eq:rtildexx}) are that $G \in \mathcal{C}^{\eta-\gamma}(V\otimes V^*)$ and $Q \in \Omega \mathcal{C}^\eta(V)$ with $\eta + \gamma = z > 1$. In this case there exists a unique $\widetilde R \in \Omega \mathcal{C}^z$ solving~(\ref{eq:rtildexx}) and we have obtained the distinguished decomposition \begin{equation} \label{eq:decomposition_area} Y^\mu \delta\! X^\nu = \delta\! I^{\mu\nu} - G^{\mu}_{\kappa} \mathbb{X}^{2,\kappa\nu} - \widetilde R^{\mu\nu}. \end{equation} Note that the path $Y$ lives a priori only in $\mathcal{C}^\gamma$, and this implies that uniqueness of the solution of Problem~\ref{prob:1} can be achieved only if $\gamma > 1/2$. On the other hand the requirement that $Y$ can be decomposed as in eq.~(\ref{eq:expansion}) with prescribed regularity on $G$ and $Q$ has allowed us to show that the ambiguity in the solution of Problem~\ref{prob:0} can be reduced to the choice of a process $\mathbb{X}^2$ satisfying eq.~(\ref{eq:two-process}). Of course if $\gamma > 1/2$ there is only one solution to~(\ref{eq:two-process}) with the prescribed regularity and the decomposition~(\ref{eq:decomposition_area}) (into a gradient and a remainder) coincides with the unique solution of Problem~\ref{prob:1}. Another way to look at this result is to consider the ``non-exact'' differential $$ F \delta\! X + G \mathbb{X}^2 $$ where $F,G$ are arbitrary paths and to ask in which cases it admits a unique decomposition $$ F \delta\! X + G \mathbb{X}^2 = \delta\! A + R $$ as a sum of an exact differential plus a remainder term. Of course for uniqueness it is enough that $R \in \Omega \mathcal{C}^z$, $z>1$. Compute $$ N (F \delta\! X + G \mathbb{X}^2) = - \delta\! F \delta\! X - \delta\! G \mathbb{X}^2 + G \delta\! X \delta\! X = (- \delta\! F + G \delta\! X) \delta\! X - \delta\! G \mathbb{X}^2 , $$ so in order to have $R \in \Omega \mathcal{C}^z$, $z>1$, condition~(\ref{eq:expansion}) and suitable regularity of $G$ and $Q$ are sufficient to apply Prop.~\ref{prop:main}. \subsection{Weakly-controlled paths.} The analysis laid out above leads to the following definition. \begin{definition} Fix an interval $I \subseteq \mathbb{R}$ and let $X \in \mathcal{C}^\gamma(I,V)$. A path $Z \in \mathcal{C}^\gamma(I,V)$ is said to be \emph{weakly-controlled by $X$ in $I$ with a remainder of order $\eta$} if there exist a path $Z' \in \mathcal{C}^{\eta-\gamma}(I,V\otimes V^*)$ and a process $R_Z \in \Omega \mathcal{C}^{\eta}(I,V)$ with $\eta > \gamma$ such that $$ \delta\! Z^\mu = Z^{\prime\,\mu\nu} \delta\! X^\nu + R^\mu_Z.
$$ If this is the case we will write $(Z,Z') \in \mathcal{D}^{\gamma,\eta}_X(I,V)$ and we will consider on the linear space $\mathcal{D}^{\gamma,\eta}_X(I,V)$ the semi-norm $$ \|Z\|_{D(X,\gamma,\eta),I} := \|Z'\|_{\infty,I}+\|Z'\|_{\eta-\gamma,I}+\|R_Z\|_{\eta,I} + \norm{Z}_{\gamma,I}. $$ \end{definition} (The last contribution is necessary to enforce $Z \in \mathcal{C}^\gamma(I,V)$ when $I$ is unbounded.) The decomposition $\delta\! Z^\mu = Z^{\prime\,\mu\nu} \delta\! X^\nu + R^\mu$ is a priori not unique, so a path in $\mathcal{D}^{\gamma,\eta}_X(I,V)$ must be understood as a pair $(Z,Z')$, since then $R_Z$ is uniquely determined. However we will often omit to specify $Z'$ when it is clear from the context. The term \emph{weakly-controlled} is inspired by the fact that paths which are solutions of differential equations controlled by $X$ (see Sec.~\ref{sec:ode}) belong to the class of weakly-controlled paths (w.r.t. $X$). In general, however, a weakly-controlled path $Z$ is uniquely determined, knowing $X$ and the ``derivative'' $Z'$, only when $\eta > 1$. Weakly-controlled paths enjoy a transitivity property: \begin{lemma} \label{lemma:transitivity} If $Z \in \mathcal{D}_Y^{\gamma,\eta}(I,V)$ and $Y \in \mathcal{D}_X^{\gamma,\sigma}(I,V)$ then $Z \in \mathcal{D}_X^{\gamma,\min(\sigma,\eta)}(I,V)$ and \begin{equation*} \|(Z,Z')\|_{D(X,\gamma,\min(\sigma,\eta)),I} \le K \|Z\|_{D(Y,\gamma,\eta),I} (1+ \|Y\|_{D(X,\gamma,\sigma),I})(1+ \|X\|_{\gamma,I}) \end{equation*} where $K$ is some fixed constant. \end{lemma} \proof The proof is given in the Appendix, Sec.~\ref{sec:proof-transitivity}. \qed Another important property of the class of weakly-controlled paths is that it is stable under smooth maps. Let $C^{n,\delta}(V,V_1)$ be the space of $n$-times differentiable maps from $V$ to the vector space $V_1$ with $\delta$-H\"older $n$-th derivative and consider the norms $$ \|\varphi\|_{0,\delta} = \|\varphi\|_\infty + \| \varphi\|_\delta \qquad \|\varphi\|_{n,\delta} = \|\varphi\|_\infty + \sum_{k=1}^n \|\partial^k \varphi \|_{\infty} + \|\partial^n \varphi\|_\delta $$ where $\varphi \in C^{n,\delta}(V,V_1)$, $\partial^k \varphi$ is the $k$-th derivative of $\varphi$ seen as a function with values in $V_1 \otimes V^{* \otimes k}$ and $$ \|\varphi \|_\infty = \sup_{x\in V} |\varphi(x)|, \qquad \|\partial^n \varphi\|_\delta = \sup_{x,y \in V} \frac{|\partial^n \varphi(x) - \partial^n \varphi(y)|}{|x-y|^\delta}. $$ \begin{proposition} \label{prop:functionD} Let $Y \in \mathcal{D}^{\gamma,\eta}_X(I,V)$ and $\varphi \in C^{1,\delta}(V,V_1)$; then the path $Z$ such that $Z_t^\mu = \varphi(Y_t)^\mu$ is in $\mathcal{D}^{\gamma,\sigma}_X(I,V_1)$ with $\sigma = \min(\gamma (\delta+1),\eta)$. Its decomposition is $$ \delta\! Z^\mu = \partial_\nu \varphi(Y)^\mu Y^{\prime\,\nu}_{\kappa} \delta\!
X^\kappa + R^\mu_Z $$ with $R_Z \in \Omega \mathcal{C}^\sigma(I,V_1)$ and \begin{equation} \label{eq:bound_functionD} \begin{split} \|Z\|_{D(X,\gamma,\sigma),I} & \le K \|\varphi\|_{1,\delta}( \norm{Y}_{D(X,\gamma,\eta),I} + \norm{Y}^{1+\delta}_{D(X,\gamma,\eta),I} + \norm{Y}^{\sigma/\gamma}_{D(X,\gamma,\eta),I}) \end{split} \end{equation} and if $\varphi \in C^{2,\delta}(V,V_1)$ we have also \begin{equation} \label{eq:bound_lipshitz_functionD} \norm{\varphi(Y)-\varphi(\widetilde Y)}_{D(X,\gamma,(1+\delta)\gamma),I} \le C \norm{Y-\widetilde Y}_{D(X,\gamma,(1+\delta)\gamma),I} \end{equation} for $Y,\widetilde Y \in \mathcal{D}^{\gamma,(1+\delta)\gamma}_X(I,V)$ with $$ C = K \norm{\varphi}_{2,\delta} (1+\norm{X}_{\gamma,I}) (1+\norm{Y}_{D(X,\gamma,(1+\delta)\gamma),I}+\norm{\widetilde Y}_{D(X,\gamma,(1+\delta)\gamma),I})^{1+\delta}. $$ Moreover if $\widetilde Y \in \mathcal{D}_{\widetilde X}^{\gamma,(1+\delta)\gamma}(I,V)$, $\widetilde Z = \varphi(\widetilde Y)$ and $$ \delta\! Y^\mu = Y^{\prime\,\mu}_{\nu} \delta\! X^\nu + R^\mu_Y, \qquad \delta\! \widetilde Y^\mu = \widetilde Y^{\prime,\mu}_{\nu} \delta\! \widetilde X^\nu + R^\mu_{\widetilde Y}, $$ $$ \delta\! Z^\mu = Z^{\prime\,\mu}_{\nu} \delta\! X^\nu + R^\mu_Z, \qquad \delta\! \widetilde Z^\mu = \widetilde Z^{\prime\,\mu}_{\nu} \delta\! \widetilde X^\nu + R^\mu_{\widetilde Z}, $$ with $Z^{\prime\,\mu}_{\nu,t} = \partial_\kappa \varphi(Y_t)^{\mu} Y^{\prime\,\kappa}_{\nu,t}$, $\widetilde Z^{\prime\,\mu}_{\nu,t} = \partial_\kappa \varphi(\widetilde Y_t)^{\mu} \widetilde Y^{\prime\,\kappa}_{\nu,t}$, then \begin{equation} \label{eq:function_difference} \|Z'-\widetilde Z'\|_\infty + \|Z'-\widetilde Z'\|_{\delta\gamma,I} + \|R_Z-R_{\widetilde Z}\|_{(1+\delta)\gamma,I} +\|Z-\widetilde Z\|_{\gamma,I} \le C (\|X-\widetilde X\|_{\gamma,I} + \varepsilon_I) \end{equation} with $$ \varepsilon_I = \|Y'-\widetilde Y'\|_{\infty,I} + \|Y'-\widetilde Y'\|_{\delta\gamma,I} + \|R_Y-R_{\widetilde Y}\|_{(1+\delta)\gamma,I} +\|Y-\widetilde Y\|_{\gamma,I}. $$ \end{proposition} \proof The proof is given in the Appendix, Sec.~\ref{sec:proof_of_function_D}. \qed \subsection{Integration of weakly-controlled paths.} Let us be given a reference path $X \in \mathcal{C}^\gamma(I,V)$ and an associated process $\mathbb{X}^2 \in \Omega \mathcal{C}^{2\gamma}(I,V\otimes V)$ satisfying the algebraic relationship \begin{equation} \label{eq:Hrelation} N \mathbb{X}^{2,\mu\nu}_{sut} = \delta\! X^\mu_{su} \delta\! X^\nu_{ut} \qquad s,u,t \in I. \end{equation} Following Lyons we will call the couple $(X,\mathbb{X}^2)$ a \emph{rough path} (of roughness $1/\gamma$). We are going to show that weakly-controlled paths can be integrated one against the other. Take two paths $Z,W$ in $V$ weakly-controlled by $X$ with remainders of order $\eta$. By an argument similar to that at the beginning of this section we can obtain a unique decomposition of $Z \delta\! W$ as \begin{equation*} Z^\mu \delta\! W^\nu = \delta\! A^{\mu\nu} - Z^{\prime\,\mu}_{\mu'} W^{\prime\,\nu}_{\nu'} \mathbb{X}^{2,\mu'\nu'} + \Lambda N(Z^\mu \delta\! W^\nu + Z^{\prime\,\mu}_{\mu'} W^{\prime\,\nu}_{\nu'} \mathbb{X}^{2,\mu'\nu'}) \end{equation*} and we can state the following Theorem: \begin{theorem} \label{th:rough} For every $(Z,Z') \in \mathcal{D}_X^{\gamma,\eta}(I,V)$ and $(W,W') \in \mathcal{D}_X^{\gamma,\eta}(I,V)$ with $\eta+\gamma = \delta >1$ define \begin{equation} \label{eq:rough-definition} \begin{split} \int_s^t Z^\mu_{u} dW^\nu_u := Z^\mu_{s} \delta\!
W^\nu_{st} + Z^{\prime\,\mu}_{\mu',s} W^{\prime\,\nu}_{\nu',s} \mathbb{X}^{2,\mu'\nu'}_{st} - [\Lambda N (Z^\mu \delta\! W^\nu + Z^{\prime\,\mu}_{\mu'} W^{\prime\,\nu}_{\nu'} \mathbb{X}^{2,\mu'\nu'} ) ]_{st}, \qquad s,t \in I \end{split} \end{equation} then this integral extends that defined in prop.~\ref{prop:young} and the following bound holds: \begin{equation} \label{eq:rough-bound} \left| \int_s^t (Z^\mu_u-Z^\mu_s) dW^\nu_u - Z^{\prime\,\mu}_{\mu',s} W^{\prime\,\nu}_{\nu',s} \mathbb{X}^{2,\mu'\nu'}_{st} \right| \le \frac{1}{2^{\delta}-2} |t-s|^{\delta} \|(Z,Z')\|_{D(X,\gamma,\eta)} \|(W,W')\|_{D(X,\gamma,\eta)}, \end{equation} which implies the continuity of the bilinear application $$ ((Z,Z'),(W,W')) \mapsto \left(\int_0^\cdot Z dW,Z W'\right) $$ from $\mathcal{D}_X^{\gamma,\eta}(V)\times \mathcal{D}_X^{\gamma,\eta}(V)$ to $\mathcal{D}_X^{\gamma,\min(2\gamma,\eta)}(V\otimes V)$. \end{theorem} \proof Compute \begin{equation*} \begin{split} Q^{\mu\nu}_{sut} & = N (Z^\mu \delta\! W^\nu + Z^{\prime\,\mu}_{\mu'} W^{\prime\,\nu}_{\nu'} \mathbb{X}^{2,\mu'\nu'})_{sut} \\ & = - \delta\! Z^\mu_{su}\delta\! W^\nu_{ut} + (Z^{\prime\,\mu}_{\mu'} W^{\prime\,\nu}_{\nu'})_s N \mathbb{X}^{2,\mu'\nu'}_{sut} - \delta\!(Z^{\prime\,\mu}_{\mu'} W^{\prime\,\nu}_{\nu'})_{su} \mathbb{X}^{2,\mu'\nu'}_{ut} \\ &= - Z^{\prime\,\mu}_{\mu',s} \delta\! X^{\mu'}_{su} W^{\prime\,\nu}_{\nu',u} \delta\! X^{\nu'}_{ut} - R^\mu_{Z,su} \delta\! W^\nu_{ut} - Z^{\prime\,\mu}_{\mu',s} \delta\! X^{\mu'}_{su} R^\nu_{W,ut} \\ & \qquad - \delta\!(Z^{\prime\,\mu}_{\mu'} W^{\prime\,\nu}_{\nu'})_{su} \mathbb{X}^{2,\mu'}_{\nu',ut} + (Z^{\prime\,\mu}_{\mu'} W^{\prime\,\nu}_{\nu'})_s N \mathbb{X}^{2,\mu'\nu'}_{sut} \\ &= - R^\mu_{Z,su} \delta\! W^\nu_{ut} - Z^{\prime\,\mu}_{\mu',s}\delta\! X^{\mu'}_{su} R^\nu_{W,ut}\\ & \qquad - \delta\!(Z^{\prime\,\mu}_{\mu'} W^{\prime\,\nu}_{\nu'})_{su} \mathbb{X}^{2,\mu'\nu'}_{ut} - Z^{\prime\,\mu}_{\mu',s} \delta\! X^{\mu'}_{su} \delta\! W^{\prime\,\nu}_{\nu',su} \delta\! X^{\nu'}_{ut} \end{split} \end{equation*} and observe that all the terms are in $\Omega \mathcal{C}_2^\delta(I,V^{\otimes 2})$ so that $Q \in \mathcal{Z}_2^\delta(I,V^{\otimes 2})$ is in the domain of $\Lambda$, then \begin{equation*} \begin{split} \norm{\Lambda Q}_{\delta,I} & \le \frac{1}{2^\delta-2} \left[ \norm{R_Z}_{\eta,I} \norm{W}_{\gamma,I} + \norm{Z'}_{\infty,I} \norm{X}_{\gamma,I} \norm{R_W}_{\eta,I} \right. \\ & \qquad \left. + \norm{\mathbb{X}^2}_{2\gamma,I} (\norm{Z'}_{\infty,I} \norm{W'}_{\eta-\gamma,I} +\norm{W'}_{\infty,I}\norm{Z'}_{\eta-\gamma,I}) + \norm{Z'}_{\infty,I} \norm{W'}_{\eta-\gamma,I} \norm{X}_{\gamma,I}^2 \right] \\ & \le \frac{1}{2^\delta-2} (1+\norm{X}_{\gamma,I}^2 + \norm{\mathbb{X}^2}_{2\gamma,I}) \norm{(Z,Z')}_{D(X,\gamma,\eta),I} \norm{(W,W')}_{D(X,\gamma,\eta),I} \end{split} \end{equation*} and the bound~(\ref{eq:rough-bound}) together with the stated continuity easily follows. To prove that this new integral extends the previous definition note that when $2\gamma > 1$ eq.~(\ref{eq:Hrelation}) has a unique solution and since $Z,W \in \mathcal{C}^\gamma(I,V)$ let $\tilde A_{st} = \int_s^t Z dW$ where the integral is understood in the sense of prop.~\ref{prop:young}. Then we have $$ Z^\mu \delta\! W^\nu = \delta\! \tilde A^{\mu\nu} - \tilde R^{\mu\nu} $$ with $\tilde R \in \Omega \mathcal{C}^{2\gamma}(I,V\otimes V)$, at the same time $$ Z^\mu \delta\! W^\nu = \delta\! 
A^{\mu\nu} - Z^{\prime\,\mu}_{\mu'} W^{\prime\,\nu}_{\nu'} \mathbb{X}^{2,\mu'\nu'} - R^{\mu\nu} $$ with $R \in \Omega \mathcal{C}^{\delta}(I,V^{\otimes 2})$. Comparing these two expressions and taking into account that $2\gamma > 1$ we get $\delta\! A = \delta\! \tilde A$ and $\tilde R^{\mu\nu} = Z^{\prime\,\mu}_{\mu'} W^{\prime\,\nu}_{\nu'} \mathbb{X}^{2,\mu'\nu'} - R^{\mu\nu}$, proving the equivalence of the two integrals. \qed Note that, in the hypothesis of Th.~\ref{th:rough}, we have \begin{equation*} \mathbb{X}^{2,\mu\nu}_{st} = \int_s^t (X^{\mu}_u-X^{\mu}_s) dX^{\nu}_u. \end{equation*} Even if the notation does not make it explicit, it is important to remark that the integral depends on the rough path $(X,\mathbb{X}^2)$. However, if there is another rough path $(Y,\mathbb{Y}^2)$ and $X \in \mathcal{D}_Y^{\gamma,\eta}(I,V)$, we have shown that $\mathcal{D}_X^{\gamma,\eta}(I,V)\subseteq \mathcal{D}_Y^{\gamma,\eta}(I,V)$ (see Lemma~\ref{lemma:transitivity}), and the integral defined according to $(X,\mathbb{X}^2)$ is equal to that defined according to $(Y,\mathbb{Y}^2)$ if and only if we have \begin{equation*} \mathbb{X}^{2,\mu\nu}_{st} = \int_s^t \delta\! X^\mu_{su} dX^\nu_u \end{equation*} where this last integral is understood based on $(Y,\mathbb{Y}^2)$. Necessity is obvious; let us prove sufficiency. Let the decomposition of $X$ according to $Y$ be $$ \delta\! X^\mu = A^{\mu}_{\nu} \delta\! Y^\nu + R^\mu_X $$ and write $$ \delta\! Z^\mu = Z^{\prime\,\mu}_{\nu} \delta\! X^\nu + R^\mu_Z, \qquad \delta\! W^\mu = W^{\prime\,\mu}_{\nu} \delta\! X^\nu + R^\mu_W . $$ Then if $$ \delta\! I^{\mu\nu}_{st} = \int_s^t Z^\mu d_{(X,\mathbb{X}^2)}W^\nu $$ is the integral based on $(X,\mathbb{X}^2)$ and $$ \delta\! \widetilde I^{\mu\nu}_{st} = \int_s^t Z^\mu d_{(Y,\mathbb{Y}^2)}W^\nu $$ the one based on $(Y,\mathbb{Y}^2)$, we have by definition of the integral $$ \delta\! I^{\mu\nu} = Z^{\mu} \delta\! W^\nu + Z^{\prime\,\mu}_{\kappa} W^{\prime,\nu}_{ \rho} \mathbb{X}^{2,\kappa\rho} + R^{\mu\nu}_I , $$ $$ \delta\! \widetilde I^{\mu\nu} = Z^{\mu} \delta\! W^\nu + Z^{\prime,\mu}_{\kappa} A^{\kappa}_{\kappa'} W^{\prime\,\nu}_{\rho} A^{\rho}_{\rho'} \mathbb{Y}^{2,\kappa'\rho'} + R^{\mu\nu}_{\widetilde I} $$ and $$ \mathbb{X}^{2,\kappa\rho} = A^{\kappa}_{\kappa'} A^{\rho}_{\rho'} \mathbb{Y}^{2,\kappa'\rho'} + R^{\kappa\rho}_{\mathbb{X}^2} $$ where $R_I, R_{\widetilde I}, R_{\mathbb{X}^2} \in \Omega \mathcal{C}^{\gamma+\eta}(V^{\otimes 2})$. Then \begin{equation*} \begin{split} \delta\! (I^{\mu\nu}-\widetilde I^{\mu\nu}) & = Z^{\prime\,\mu}_{\kappa} W^{\prime,\nu}_{\rho} (\mathbb{X}^{2,\kappa\rho}-A^{\kappa}_{\kappa'} A^{\rho}_{\rho'} \mathbb{Y}^{2,\kappa'\rho'}) + R^{\mu\nu}_I - R^{\mu\nu}_{\widetilde I} \\ & = Z^{\prime\,\mu}_{\kappa} W^{\prime\,\nu}_{\rho} R_{\mathbb{X}^2}^{\kappa\rho} + R^{\mu\nu}_I - R^{\mu\nu}_{\widetilde I} \end{split} \end{equation*} but then $\delta\!(I -\widetilde I) \in \Omega \mathcal{C}^{\gamma+\eta}(I,V^{\otimes 2})$ with $\gamma+\eta > 1$, so it must be that $\delta\! I = \delta\! \widetilde I$.\qed Given another rough path $(\widetilde X, \mathbb{\widetilde X}^2)$ and paths $\widetilde W, \widetilde Z \in \mathcal{D}_{\widetilde X}^{\gamma,\eta}(I,V)$, it does not take much effort to show that the difference $$ \Delta_{st} := \int_s^t Z dW - \int_s^t \widetilde Z d\widetilde W $$ (where the first integral is understood with respect to $(X,\mathbb{X}^2)$ and the second w.r.t. $(\widetilde X, \mathbb{\widetilde X}^2)$) can be bounded as \begin{equation} \label{eq:continuity} \|\Delta -Z \delta\!
W + \widetilde Z \delta\! \widetilde W+ \widetilde W' \widetilde Z' \mathbb{\widetilde X}^2 - W' Z' \mathbb{X}^2\|_{\delta,I} \le \frac{1}{2^\delta-2} (D_1+D_2+D_3) \end{equation} where $$ D_1 = (1+\norm{X}_{\gamma,I}^2 + \norm{\mathbb{X}^2}_{2\gamma,I}) (\norm{(Z,Z')}_{D(X,\gamma,\eta),I}+\norm{(\widetilde Z,\widetilde Z')}_{D(\widetilde X,\gamma,\eta),I}) \varepsilon_W , $$ $$ D_2 = (1+\norm{X}_{\gamma,I}^2 + \norm{\mathbb{X}^2}_{2\gamma,I}) (\norm{(W,W')}_{D(X,\gamma,\eta),I}+\norm{(\widetilde W,\widetilde W')}_{D(\widetilde X,\gamma,\eta),I}) \varepsilon_Z , $$ \begin{equation*} \begin{split} D_3 & = (\norm{(W,W')}_{D(X,\gamma,\eta),I}+\norm{(\widetilde W,\widetilde W')}_{D(\widetilde X,\gamma,\eta),I}) \\ & \qquad \cdot (\norm{(Z,Z')}_{D(X,\gamma,\eta),I}+\norm{(\widetilde Z,\widetilde Z')}_{D(\widetilde X,\gamma,\eta),I}) (\norm{X-\widetilde X}_{\gamma,I} + \norm{\mathbb{X}^2-\mathbb{\widetilde X}^2}_{2\gamma,I}) \end{split} \end{equation*} and \begin{equation*} \begin{split} \varepsilon_Z & = \|Z'-\widetilde Z'\|_{\infty,I} + \|Z'-\widetilde Z'\|_{\eta-\gamma,I} + \|R_Z-\widetilde R_Z\|_{\eta,I}+ \|Z-\widetilde Z\|_{\gamma,I} , \end{split} \end{equation*} \begin{equation*} \begin{split} \varepsilon_W & = \|W'-\widetilde W'\|_{\infty,I} + \|W'-\widetilde W'\|_{\eta-\gamma,I} + \|R_W-\widetilde R_W\|_{\eta,I}+ \|W-\widetilde W\|_{\gamma,I} , \end{split} \end{equation*} so that the integral possesses reasonable continuity properties also with respect to the reference rough path $(X, \mathbb{X}^2)$. \begin{remark} It is trivial but cumbersome to generalize the statement of Theorem~\ref{th:rough} to the case of inhomogeneous degrees of smoothness, i.e. when we have $Z \in \mathcal{D}_X^{\gamma,\eta}(V)$, $W \in \mathcal{D}_Y^{\rho,\eta'}(V)$ with $X \in \mathcal{C}^\gamma(V)$, $Y\in \mathcal{C}^\rho(V)$ and there is a process $H \in \Omega \mathcal{C}^{\gamma+\rho}(V^{\otimes 2})$ which satisfies $$ N H^{\mu\nu} = \delta\! X^\mu \delta\! Y^\nu. $$ In this case the condition to be satisfied in order to be able to define the integral is $\min(\gamma+\eta',\rho+\eta) = \delta > 1$. \end{remark} As in Sec.~\ref{sec:young} we can give an approximation result for the integral defined in Theorem~\ref{th:rough} as a limit of sums of increments: \begin{corollary} \label{cor:sums_rough} In the hypothesis of Theorem~\ref{th:rough} we have \begin{equation*} \int_s^t Z^\mu_u dW^\nu_u = \lim_{|\Pi|\to 0} \sum_{i=0}^{n-1} \left(Z^\mu_{t_i} \delta\! W^\nu_{t_{i},t_{i+1}} + Z^{\prime\,\mu}_{\mu',t_i} W^{\prime\,\nu}_{\nu',t_i} \mathbb{X}^{2,\mu'\nu'}_{t_{i},t_{i+1}}\right) \end{equation*} where the limit is taken over partitions $\Pi = \{t_0,t_1,\dots,t_n\}$ of the interval $[s,t]$ such that $t_0 = s, t_n = t$, $t_{i+1}>t_i$, $|\Pi| = \sup_i |t_{i+1}-t_i|$. \end{corollary} \proof The proof is analogous to that of Corollary~\ref{cor:sums_young}.\qed Better bounds can be stated in the case where we are integrating a path controlled by $X$ against $X$ itself. \begin{corollary} \label{cor:betterbounds} When $W \in \mathcal{D}_X^{\gamma,\eta}(I,V_1 \otimes V^*)$ the integral $$ \delta\! A^\mu_{st} = \int_s^t W^\mu_{\nu,u} dX^\nu_u $$ belongs to $\mathcal{D}_{X}^{\gamma,2\gamma}(I,V_1)$ and satisfies \begin{equation} \label{eq:boundAsimple} \norm{\delta\! A - W_\nu \delta\!
X^\nu - W^{\prime}_{\nu\kappa} \mathbb{X}^{2,\nu\kappa}}_{D(X,\gamma,\eta+\gamma),I} \le \frac{1}{2^{\eta+\gamma}-2} (\norm{X}_{\gamma,I}+\norm{\mathbb{X}^2}_{2\gamma,I}) \norm{W}_{D(X,\gamma,\eta),I} \end{equation} Moreover, if $(\widetilde X, \mathbb{\widetilde X}^2)$ is another rough path and $\widetilde W \in \mathcal{D}_{\widetilde X}^{\gamma,\eta}(I,V_1\otimes V^*)$, then, setting $$ \delta\! B^\mu_{st} = \int_s^t W^\mu_{\nu,u} dX^\nu_u - \int_s^t \widetilde W^\mu_{\nu,u} d\widetilde X^\nu_u, $$ we have $$ \delta\! B^\mu = W^\mu_\nu \delta\! X^\nu - \widetilde W^\mu_\nu \delta\! \widetilde X^\nu + W^{\prime\,\mu}_{\nu\kappa} \mathbb{ X}^{2,\nu\kappa} - \widetilde W^{\prime\,\mu}_{\nu\kappa} \mathbb{ \widetilde X}^{2,\nu\kappa} + R^\mu_{B} $$ with $R_B$ satisfying the bound \begin{equation} \label{eq:betterbound-difference} \norm{R_B}_{\eta+\gamma,I} \le \frac{1}{2^{\eta+\gamma}-2}\left[ C_{X,I} \varepsilon_{W,I} + (\norm{W}_{D(X,\gamma,\eta),I}+\norm{\widetilde W}_{D(\widetilde X,\gamma,\eta),I}) \rho_I \right] \end{equation} with $$ \varepsilon_{W,I} = \norm{ R_W - R_{\widetilde W}}_{\eta,I} + \norm{W'-\widetilde W'}_{\eta-\gamma,I} $$ and $$ \rho_I = \norm{X-\widetilde X}_{\gamma,I} + \norm{\mathbb{X}^2-\mathbb{\widetilde X}^2}_{2\gamma,I} $$ $$ C_{X,I} = \norm{X}_{\gamma,I}+\norm{\mathbb{X}^2}_{2\gamma,I}+ \norm{\widetilde X}_{\gamma,I}+\norm{\mathbb{\widetilde X}^2}_{2\gamma,I} $$ \end{corollary} \proof The integral path $\delta\! A$ has the following decomposition $$ \delta\! A^\mu = W^\mu_\nu \delta\! X^\nu + W^{\prime\,\mu}_{\nu\kappa} \mathbb{X}^{2,\nu\kappa} + R^\mu_A $$ with $R_A$ satisfying $$ N R_A^\mu = \delta\! W^{\prime\,\mu}_{\nu\kappa} \mathbb{X}^{2,\nu\kappa} + R^\mu_{W,\nu} \delta\! X^\nu $$ and eq.~(\ref{eq:boundAsimple}) then follows immediately from the properties of $\Lambda$. Next, let $\delta\! \widetilde A = \int \widetilde W d \widetilde X$ and $$ \delta\! \widetilde A^\mu = \widetilde W^\mu_\nu \delta\! \widetilde X^\nu + \widetilde W^{\prime\,\mu}_{\nu\kappa} \mathbb{\widetilde X}^{2,\nu\kappa} + R^\mu_{\widetilde A} $$ then $$ N R^\mu_B = \delta\! W^{\prime\,\mu}_{\nu\kappa} \mathbb{X}^{2,\nu\kappa} + R^\mu_{W,\nu} \delta\! X^\nu - \delta\! \widetilde W^{\prime\,\mu}_{\nu\kappa} \mathbb{\widetilde X}^{2,\nu\kappa} - R^\mu_{\widetilde W,\nu} \delta\! \widetilde X^\nu $$ and \begin{equation*} \begin{split} \norm{R_B}_{\eta+\gamma,I}& \le \frac{1}{2^{\eta+\gamma}-2}\left[ \norm{W'-\widetilde W'}_{\eta-\gamma,I} \norm{\mathbb{X}^2}_{2\gamma,I}+\norm{\widetilde W'}_{\eta-\gamma,I} \norm{\mathbb{X}^2-\mathbb{\widetilde X}^2}_{2\gamma,I} \right. \\ &\qquad \left. + \norm{X-\widetilde X}_{\gamma,I} \norm{R_W}_{\eta,I}+\norm{\widetilde X}_{\gamma,I} \norm{R_W-R_{\widetilde W}}_{\eta,I} \right] \\ & \le \frac{1}{2^{\eta+\gamma}-2}\left[ C_{X,I} \varepsilon_{W,I} + (\norm{W}_{D(X,\gamma,\eta),I} +\norm{\widetilde W}_{D(\widetilde X,\gamma,\eta),I})\rho_I\right] \end{split} \end{equation*} \qed
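The compensated sums appearing in Corollary~\ref{cor:sums_rough} are easy to experiment with numerically. The following Python fragment is only a toy illustration in the smooth scalar case (where everything reduces to classical calculus): it takes $X_t=\sin t$ and $Z=\varphi(X)$ with $\varphi(x)=x^2$, so that $\mathbb{X}^2_{st}=(\delta\! X_{st})^2/2$ and the integral is known in closed form; the partition sizes and all names are arbitrary choices made only for this example.
\begin{verbatim}
import numpy as np

# Toy illustration of the compensated sums of Corollary cor:sums_rough in the
# smooth scalar case: X_t = sin t, Z = phi(X) with phi(x) = x^2, Z' = phi'(X).
# For a smooth scalar path, X^2_{st} = int_s^t (X_u - X_s) dX_u = (X_t - X_s)^2/2
# and the integral int_0^T phi(X_u) dX_u equals (X_T^3 - X_0^3)/3 exactly.

X = lambda t: np.sin(t)
T = 1.0
exact = (X(T) ** 3 - X(0.0) ** 3) / 3.0

for n in (4, 8, 16, 32):
    t = np.linspace(0.0, T, n + 1)
    Xt = X(t)
    dX = np.diff(Xt)                    # increments delta X over the partition
    XX2 = dX ** 2 / 2.0                 # second order process on each subinterval
    Z = Xt[:-1] ** 2                    # Z at the left endpoints
    Zp = 2.0 * Xt[:-1]                  # Z' at the left endpoints
    first = np.sum(Z * dX)              # first order (Young-type) sums
    comp = np.sum(Z * dX + Zp * XX2)    # compensated sums
    print(f"n={n:3d}  first-order error={abs(first - exact):.2e}  "
          f"compensated error={abs(comp - exact):.2e}")
\end{verbatim}
On this smooth example the compensation is simply a second-order correction of the first-order sums, and the printed errors decay accordingly.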
\section{Differential equations driven by paths in $\mathcal{C}^\gamma(V)$} \label{sec:ode} The continuity of the integral defined in eq.~(\ref{eq:integral_young}) allows us to prove existence and uniqueness of solutions of differential equations driven by paths in $\mathcal{C}^\gamma(V)$ for $\gamma$ not too small. Fix an interval $J \subseteq \mathbb{R}$ and let there be given $X \in \mathcal{C}^\gamma(J,V)$ and a function $\varphi \in C(V,V \otimes V^*)$. A solution $Y$ of the differential equation \begin{equation} \label{eq:diff_eq_young} dY^\mu_t = \varphi(Y_t)^\mu_\nu dX^\nu_t, \qquad Y_{t_0} = y, \quad t_0 \in J \end{equation} in $J$ will be a continuous path $Y \in \mathcal{C}^\gamma(V,J)$ such that \begin{equation} \label{eq:integralODE} Y_t^\mu = y + \int_{t_0}^t \varphi(Y_u)^\mu_\nu dX^\nu_u \end{equation} for every $t \in J$. If $\gamma > 1/2$, sufficient conditions must be imposed on $\varphi$ so that the integral in~(\ref{eq:integralODE}) can be understood in the sense of Prop.~\ref{prop:young}. If $1/3 < \gamma \le 1/2$, the integral must be understood in the sense of Theorem~\ref{th:rough}; in this case we want to show that, given a driving rough path $(X,\mathbb{X}^2)$, it is possible to find a path $Y \in \mathcal{D}_X^{\gamma,2\gamma}(V,J)$ that satisfies eq.~(\ref{eq:integralODE}). The strategy of the proof will consist in introducing a map $Y \mapsto G(Y)$ on suitable paths $Y \in \mathcal{C}(J,V)$, depending implicitly on $X$ (and possibly on $\mathbb{X}^2$), such that \begin{equation} \label{eq:mapG} G(Y)^\mu_t = Y^\mu_{t_0} + \int_{t_0}^t \varphi(Y_u)^\mu_\nu dX^\nu_u. \end{equation} Existence of solutions will follow from a fixed-point theorem applied to $G$ acting on a suitable compact and convex subset of the Banach space of H\"older continuous functions on $J$ (this requires $V$ to be finite dimensional). To show uniqueness we will prove that, under stronger conditions on $\varphi$, the map $G$ is locally a strict contraction. Next we also show that the It\^o map (in the terminology of Lyons~\cite{Lyons}) $Y=F(y,\varphi,X)$ (or $Y=F(y,\varphi,X,\mathbb{X}^2)$), which sends the data of the differential equation to the corresponding solution $Y = G(Y)$, is a Lipschitz continuous map (on compact intervals $J$) in each of its arguments, where on $X$ and $\mathbb{X}^2$ we consider the norms of $\mathcal{C}^\gamma(J,V)$ and $\Omega \mathcal{C}^{2\gamma}(J,V^{\otimes 2})$ respectively. Note that, in analogy with the classical setting, the solution of the differential equation is ``smooth'' in the sense that it will be of the form \begin{equation} \label{eq:diffeq-1} \delta\! Y = \varphi(Y) \delta\! X + R_Y \end{equation} with $R_Y \in \Omega \mathcal{C}^{z}(V,J)$, $z>1$, in the case of $\gamma > 1/2$, and of the form \begin{equation} \label{eq:diffeq-2} \delta\! Y = \varphi(Y) \delta\! X + \partial \varphi(Y) \varphi(Y) \mathbb{X}^2 + Q_Y \end{equation} with $Q_Y \in \Omega \mathcal{C}^{z}(V,J)$, $z>1$, in the case of $1/3 < \gamma \le 1/2$. Natural conditions for the existence of solutions will be $\varphi \in C^{\delta}(V,V\otimes V^*)$ with $\delta \in (0,1)$ and $(1+\delta)\gamma > 1$ if $\gamma > 1/2$, and $\varphi \in C^{1,\delta}(V,V\otimes V^*)$ with $\delta \in (0,1)$ such that $(2+\delta)\gamma > 1$ if $1/3 < \gamma \le 1/2$. Uniqueness will hold if $\varphi \in C^{1,\delta}(V,V\otimes V^*)$ or $\varphi \in C^{2,\delta}(V,V\otimes V^*)$ respectively, with analogous conditions on $\delta$. \begin{remark} Another equivalent approach to the definition of a differential equation in the non-smooth setting is to say that $Y$ solves a differential equation driven by $X$ if eq.~(\ref{eq:diffeq-1}) or eq.~(\ref{eq:diffeq-2}) is satisfied with remainders $R_Y$ or $Q_Y$ in $\Omega \mathcal{C}^z(V)$ for some $z$. This would have the natural meaning of describing the local dynamical behaviour of $Y_t$ as the parameter $t$ is changed in terms of the control $X$. This point of view has been explored previously in an unpublished work by A.~M.~Davie~\cite{Davie}, which also gives some examples showing that the conditions on the vector field $\varphi$ cannot be substantially relaxed. \end{remark}
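The local description~(\ref{eq:diffeq-2}) also suggests a simple second-order, Euler-type approximation, $Y_{t_{i+1}} \approx Y_{t_i} + \varphi(Y_{t_i})\,\delta\! X_{t_it_{i+1}} + \partial\varphi(Y_{t_i})\varphi(Y_{t_i})\,\mathbb{X}^2_{t_it_{i+1}}$. The following Python fragment is only a toy illustration of this idea in the scalar, smooth case (where everything reduces to classical calculus): it takes $\varphi(y)=y$ and $X_t=\sin t$, so that the exact solution is $Y_t = y_0 e^{X_t-X_0}$ and $\mathbb{X}^2_{st}=(\delta\! X_{st})^2/2$; it is not the construction used in the proofs below.
\begin{verbatim}
import numpy as np

# Toy second order Euler-type scheme suggested by eq. (diffeq-2), scalar smooth
# case: dY = phi(Y) dX with phi(y) = y and X_t = sin t, so that the exact solution
# is Y_t = y0 * exp(X_t - X_0) and X^2_{st} = (X_t - X_s)^2 / 2.

phi = lambda y: y           # vector field
dphi = lambda y: 1.0        # its derivative
X = lambda t: np.sin(t)

y0, T = 1.0, 1.0
exact = y0 * np.exp(X(T) - X(0.0))

for n in (4, 8, 16, 32):
    dX = np.diff(X(np.linspace(0.0, T, n + 1)))
    XX2 = dX ** 2 / 2.0
    y1, y2 = y0, y0
    for i in range(n):
        y1 += phi(y1) * dX[i]                                  # first order step
        y2 += phi(y2) * dX[i] + dphi(y2) * phi(y2) * XX2[i]    # step with X^2 term
    print(f"n={n:3d}  first-order error={abs(y1 - exact):.2e}  "
          f"second-order error={abs(y2 - exact):.2e}")
\end{verbatim}
On this smooth example the first-order scheme converges at rate $n^{-1}$ and the compensated scheme at rate $n^{-2}$, in line with the heuristics above.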
\begin{remark} In a recent work~\cite{lilyons} Li and Lyons show that, under natural hypotheses on $\varphi$, the It\^o map $F$ can be differentiated with respect to the control path $X$ (when extended to a rough path). \end{remark} \subsection{Some preliminary results} In the proofs of the propositions below, the following comparison of norms, which holds for locally H\"older continuous paths, will be useful: \begin{lemma} \label{lemma:improved_bound} Let $\eta > \gamma$ and $b>a$. Then $\Omega \mathcal{C}^\eta([a,b]) \subseteq \Omega \mathcal{C}^\gamma([a,b])$ and \begin{equation*} \norm{X}_{\gamma,[a,b]} \le |b-a|^{\eta-\gamma} \norm{X}_{\eta,[a,b]} \end{equation*} for any $X \in \Omega \mathcal{C}^\eta([a,b])$. \end{lemma} \proof Easy: \begin{equation*} \norm{X}_{\gamma,[a,b]} = \sup_{t,s \in [a,b]} \frac{\abs{X_{st}}}{|t-s|^\gamma} = \sup_{t,s \in [a,b]} \frac{\abs{X_{st}}}{|t-s|^\eta} |t-s|^{\eta-\gamma} \le |b-a|^{\eta-\gamma} \sup_{t,s \in [a,b]} \frac{\abs{X_{st}}}{|t-s|^\eta}. \end{equation*} \qed Moreover, we will need to patch together local H\"older bounds on different intervals: \begin{lemma} \label{eq:holder-patching} Let $I,J$ be two adjacent intervals of $\mathbb{R}$ (i.e.\ $I \cap J \neq \emptyset$). If $X \in \Omega \mathcal{C}^{\gamma}(I,V)$, $X \in \Omega \mathcal{C}^{\gamma}(J,V)$ and $N X \in \Omega \mathcal{C}^{\gamma_1,\gamma_2}(I \cup J,V)$ with $\gamma = \gamma_1+\gamma_2$, then $X \in \Omega \mathcal{C}^{\gamma}(I \cup J,V)$ with \begin{equation} \label{eq:patchbound} \norm{X}_{\gamma,I \cup J} \le 2 (\norm{X}_{\gamma,I}+\norm{X}_{\gamma,J}) + \norm{N X}_{\gamma_1,\gamma_2,I \cup J}. \end{equation} \end{lemma} \proof See the Appendix, Sec.~\ref{sec:proof-holder-patching}. \qed \subsection{Existence and uniqueness when $\gamma > 1/2$} First we will formulate the results for the case $\gamma > 1/2$ since they are simpler and require weaker conditions. \begin{proposition}[Existence $\gamma > 1/2$] \label{eq:existence_young} If $\gamma > 1/2$ and $\varphi \in C^\delta(V,V\otimes V^*)$ with $\delta \in (0,1)$ and $(1+\delta)\gamma > 1$, then there exists a path $Y \in \mathcal{C}^\gamma(V)$ which solves eq.~(\ref{eq:diff_eq_young}) (where the integral is the one defined in Sec.~\ref{sec:young}). \end{proposition} \proof Consider an interval $I=[t_0,t_0+T] \subseteq J$, $T>0$, and note that $W = \varphi(Y)$ is in $\mathcal{C}^{\delta\gamma}(I,V\otimes V^*)$ with $$ \norm{W}_{\delta\gamma,I}= \norm{\varphi(Y)}_{\delta\gamma,I} \le \norm{\varphi}_{\delta}\norm{Y}_{\gamma,I}^\delta $$ so that, if $(1+\delta)\gamma > 1$, it is meaningful, according to Prop.~\ref{prop:young}, to consider the map $\mathcal{C}^{\gamma}(I,V) \to \mathcal{C}^{\gamma}(I,V)$ defined as in eq.~(\ref{eq:mapG}). Moreover, the path $Z = G(Y) \in \mathcal{C}^\gamma(I,V)$ satisfies \begin{equation*} \delta\! Z^\mu = \varphi(Y)^{\mu}_{\nu} \delta\! X^\nu + Q^\mu_Z \end{equation*} with $$ \norm{Q_Z}_{(1+\delta)\gamma,I} \le \frac{1}{2^{(1+\delta)\gamma}-2} \norm{X}_{\gamma,I} \norm{\varphi(Y)}_{\delta \gamma,I} \le \frac{1}{2^{(1+\delta)\gamma}-2} \norm{\varphi}_{\delta} \norm{X}_{\gamma,I}\norm{Y}_{\gamma,I}^\delta $$ then, using Lemma~\ref{lemma:improved_bound}, \begin{equation*} \begin{split} \norm{Z}_{\gamma,I} & \le \norm{\varphi(Y) \delta\!
X}_{\gamma,I} + \norm{Q_Z}_{\gamma,I} \\ & \le \norm{\varphi}_{0,\delta} \norm{X}_{\gamma,I} + T^{\gamma\delta} \norm{Q_Z}_{(1+\delta)\gamma,I} \\ & \le K C_{X,I} \norm{\varphi}_{0,\delta} (1+T^{\delta \gamma}\norm{Y}^\delta_{\gamma,I}) \\ & \le K C_{X,J} \norm{\varphi}_{0,\delta} (1+T^{\delta \gamma}\norm{Y}^\delta_{\gamma,I}) \end{split} \end{equation*} with $$ C_{X,I} = \norm{X}_{\gamma,I} $$ For any $T$ let $A_T > 0$ be the solution to \begin{equation} \label{eq:eqA} A_T = K C_{X,J} \norm{\varphi}_{0,\delta} (1+T^{\delta \gamma}A_T^\delta). \end{equation} Then $\norm{G(Y)}_{\gamma,I} \le A_T$ whenever $\norm{Y}_{\gamma,I} \le A_T$ and moreover $G(Y)_{t_0} = Y_{t_0}$. Then for any $y \in V$, the application $G$ maps the compact and convex set \begin{equation} \label{eq:setQ} Q_{y,[t_0,t_0+T]} = \{ Y \in \mathcal{C}^\gamma([t_0,t_0+T],V) : Y_{t_0} = y, \norm{Y}_{\gamma,[t_0,t_0+T]} \le A_T \} \end{equation} into itself. Let us show that $G$ on $Q_{y,[t_0,t_0+T]}$ is at least H\"older continuous with respect to the norm $\|\cdot\|_\gamma$. This will allow us to conclude (by the Leray-Schauder-Tychonoff theorem) the existence of a fixed-point in $Q_{y,[t_0,t_0+T]}$. To prove continuity take $Y,\widetilde Y \in Q_{y,I}$ and denote $\widetilde Z = G(\widetilde Y)$ so that $$ \delta\! \widetilde Z^\mu = \varphi(\widetilde Y)^{\mu}_{\nu} \delta\! X^\nu + \widetilde Q^\mu_Z $$ as for $Z = G(Y)$. Then \begin{equation} \label{eq:continuitybound0} \begin{split} \norm{Z-\widetilde Z}_{\gamma,I} \le \norm{\varphi(Y)-\varphi(\widetilde Y)}_{\infty,I} \norm{X}_{\gamma,I} + \norm{Q_Z-Q_{\widetilde Z}}_{\gamma,I} \end{split} \end{equation} but now taking $0 < \alpha < 1$ such that $(1+\alpha\delta)\gamma > 1$ \begin{equation*} \norm{Q_Z-Q_{\widetilde Z}}_{(1+\alpha\delta)\gamma,I} \le \frac{1}{2^{(1+\alpha\delta)\gamma}-2} \norm{X}_{\gamma,I} \norm{\varphi(Y)-\varphi(\widetilde Y)}_{\alpha \delta \gamma,I} \end{equation*} To bound $\norm{\varphi(Y)-\varphi(\widetilde Y)}_{\alpha \delta \gamma,I}$ we interpolate between the following two bounds: \begin{equation*} \norm{\varphi(Y)-\varphi(\widetilde Y)}_{0,I} \le 2 \norm{\varphi(Y)-\varphi(\widetilde Y)}_{\infty,I} \le 2 \norm{\varphi}_{\delta} \norm{\widetilde Y - Y}_{\infty,I}^\delta \end{equation*} and \begin{equation*} \begin{split} \norm{\varphi(Y)-\varphi(\widetilde Y)}_{\delta \gamma,I} \le \norm{\varphi(Y)}_{\delta\gamma,I} + \norm{\varphi(\widetilde Y)}_{\delta\gamma,I} \le \norm{\varphi}_{\delta} (\norm{Y}_{\gamma,I}^\delta+\norm{\widetilde Y}_{\gamma,I}^\delta) \le \norm{\varphi}_\delta 2 A_T^\delta \end{split} \end{equation*} obtaining \begin{equation*} \norm{\varphi(Y)-\varphi(\widetilde Y)}_{\alpha \delta \gamma,I} \le 2 \norm{\varphi}_\delta \norm{\widetilde Y - Y}_{\infty,I}^{(1-\alpha)\delta} A_T^{\alpha \delta} \end{equation*} Eq.~(\ref{eq:continuitybound0}) becomes \begin{equation*} \begin{split} \norm{Z-\widetilde Z}_{\gamma,I} & \le \norm{\varphi(Y)-\varphi(\widetilde Y)}_{\infty,I} \norm{X}_{\gamma,I} + T^{\alpha\delta \gamma}\norm{Q_Z-Q_{\widetilde Z}}_{(1+\alpha\delta)\gamma,I} \\ & \le K \norm{\varphi}_\delta \norm{X}_{\gamma,I} \left[ \norm{Y-\widetilde Y}_{\infty,I}^\delta + \norm{\widetilde Y - Y}_{\infty,I}^{(1-\alpha)\delta} A_T^{\alpha \delta} \right] \end{split} \end{equation*} Since $\norm{Y-\widetilde Y}_{\infty,I} \le \norm{Y-\widetilde Y}_{\gamma,I}$ (recall that $T < 1$) we have that $G$ is continuous on $Q_{y,I}$ for the topology induced by the norm $\norm{\cdot}_{\gamma,I}$ (the paths all have a common starting 
point). Since none of these arguments depends on the location of the interval $I$, we can patch together local solutions to get the existence of a global solution on all of $J$. \qed \begin{proposition}[Uniqueness $\gamma > 1/2$] \label{eq:uniqueness_young} Assume $\varphi \in C^{1,\delta}(V,V\otimes V^*)$ with $(1+\delta)\gamma > 1$; then there exists a unique solution of eq.~(\ref{eq:diff_eq_young}). The It\^o map $F(y,\varphi,X)$ is Lipschitz in the sense that it satisfies the following bound $$ \norm{F(y,\varphi,X) - F(\widetilde y,\widetilde \varphi,\widetilde X)}_{\gamma,J} \le M (\norm{X-\widetilde X}_{\gamma,J}+\norm{\varphi-\widetilde\varphi}_{1,\delta}+|y-\widetilde y|) $$ for some constant $M$ depending only on $\norm{X}_{\gamma,J}$, $\norm{\widetilde X}_{\gamma,J}$, $\norm{\varphi}_{1,\delta}$, $\norm{\widetilde \varphi}_{1,\delta}$ and $J$. \end{proposition} \proof Let us continue to use the notations of the previous proposition. Let $Y,\widetilde Y$ be two paths in $\mathcal{C}^\gamma(J,V)$, and $X,\widetilde X \in \mathcal{C}^\gamma(J,V)$. Let $W= \varphi(Y)$, $\widetilde W = \varphi(\widetilde Y)$, $Z = G(Y)$, $\widetilde Z = \widetilde G(\widetilde Y)$ where $\widetilde G$ is the map corresponding to the driving path $\widetilde X$: $$ \widetilde Y \mapsto \widetilde G(\widetilde Y)^\mu \mathrel{\raise.2\p@\hbox{:}\mathord{=}} \widetilde Y^\mu_{t_0}+\int_{t_0}^\cdot \varphi(\widetilde Y_u)^{\mu}_{\nu} d\widetilde X^\nu_u. $$ Then \begin{equation*} \delta\! \widetilde Z^\mu = \varphi(\widetilde Y)^{\mu}_{\nu} \delta\! \widetilde X^\nu + Q^\mu_{\widetilde Z} \end{equation*} Introduce the following shorthands: $$ \varepsilon_{Z,I} = \norm{Z-\widetilde Z}_{\gamma,I}, \quad \varepsilon_{W,I}^* = \norm{W-\widetilde W}_{\delta\gamma,I}, \quad \varepsilon_{Y,I} = \norm{Y-\widetilde Y}_{\gamma,I}, \quad \varepsilon_{Y,I}^* = \norm{Y-\widetilde Y}_{\delta\gamma,I}; $$ $$ \rho_I = \norm{X-\widetilde X}_{\gamma,I} + |Y_0 - \widetilde Y_0| + \norm{\varphi-\widetilde\varphi}_{1,\delta} $$ $$ C_{X,I} = \norm{X}_{\gamma,I} + \norm{\widetilde X}_{\gamma,I} \qquad C_{Y,I} = \norm{Y}_{\gamma,I} + \norm{\widetilde Y}_{\gamma,I}. $$ With these notations, Lemma~\ref{lemma:zetabound-young} states that, when $T <1$: \begin{equation} \label{eq:epsilonZbound-young} \begin{split} \varepsilon_{Z,I} & \le K C_{X,I} C_{Y,I}^\delta [(1+\norm{\varphi}_{1,\delta})\rho_I + \norm{\varphi}_{1,\delta} T^{\gamma\delta} \varepsilon_{Y,I}] \end{split} \end{equation} As shown in Prop.~\ref{eq:existence_young}, there exists a constant $A_T$ such that the set $Q_{y,I}\mathrel{\raise.2\p@\hbox{:}\mathord{=}} \{Y \in C^\gamma(I,V): Y_{t_0} = y, \|Y\|_{\gamma,I} \le A_T\}$ is invariant under $G$. Take $Y,\widetilde Y \in Q_{y,I}$, $X = \widetilde X$ and $\varphi = \widetilde \varphi$. Then we have $\rho_I = 0$, $C_{Y,I} \le 2 A_T$ and \begin{equation*} \varepsilon_{Z,I} \le K \norm{\varphi}_{1,\delta} C_{X,J} A_T^\delta T^{\gamma\delta} \varepsilon_{Y,I}. \end{equation*} Choosing $T$ small enough such that $K \norm{\varphi}_{1,\delta} C_{X,J} A_T^\delta T^{\gamma\delta} = \alpha < 1$ implies $$ \norm{G(Y) - G(\widetilde Y)}_{\gamma,I} = \varepsilon_{Z,I} \le \alpha \norm{Y-\widetilde Y}_{\gamma,I}. $$ The map $G$ is then a strict contraction on $Q_{y,I}$ and has a unique fixed point. Again, since the estimate does not depend on the location of $I \subset J$ we can extend the unique solution to all of $J$. 
\qed \subsection{Existence and uniqueness for $\gamma > 1/3$} \begin{proposition}[Existence $\gamma > 1/3$] \label{eq:existence_rough} If $\gamma > 1/3$ and $\varphi \in C^{1,\delta}(V,V)$ with $(2+\delta)\gamma>1$ there exists a path $Y \in \mathcal{D}^{\gamma,2\gamma}_X(V)$ which solves eq.~(\ref{eq:diff_eq_young}) where the integral is understood in the sense of Theorem~\ref{th:rough} based on the couple $(X,\mathbb{X}^2)$. \end{proposition} \proof By Prop.~\ref{prop:functionD} for any $Y \in \mathcal{D}_{X}^{\gamma,2\gamma}(J,V)$, the path $W = \varphi(Y)$ is in $\mathcal{D}_{X}^{\gamma,(1+\delta)\gamma}(J,V)$ with \begin{equation} \label{eq:boundWxx} \begin{split} \norm{W}_{D(X,\gamma,(1+\delta)\gamma),I} & = \norm{\varphi(Y)}_{D(X,\gamma,(1+\delta)\gamma),I} \le K \|\varphi\|_{1,\delta}( \norm{Y}_{*,I} + \norm{Y}^{1+\delta}_{*,I} + \norm{Y}^{2}_{*,I}) \\ & \le 3 K \|\varphi\|_{1,\delta} (1+\norm{Y}_{*,I})^2 \end{split} \end{equation} where we introduced the notation $\|\cdot\|_{*,I} = \|\cdot \|_{D(X,\gamma,2\gamma),I}$. Then we can integrate $W$ against $X$ as soon as $(2+\delta)\gamma > 1$ and define the map $G$ as $G : \mathcal{D}^{\gamma,2\gamma}_X(I,V) \to \mathcal{D}^{\gamma,2\gamma}_X(I,V)$ with the formula~(\ref{eq:mapG}). Let $Y$ be a path such that $Y'_{t_0} = \varphi(Y_{t_0})$. The decomposition of $Z$ (as above $Z = G(Y)$) reads \begin{equation*} \delta\! Z^\mu = Z^{\prime\,\mu}_\nu \delta\! X^\nu + R_Z^\mu = \varphi(Y)^{\mu}_{\nu} \delta\! X^\nu + \partial^\kappa \varphi(Y)^{\mu}_{\nu} Y^{\prime\,\kappa}_{\rho} \mathbb{X}^{2,\nu\rho} + Q^\mu_Z \end{equation*} with (use eq.~(\ref{eq:boundAsimple})) \begin{equation} \label{eq:Qbound3} \norm{Q_Z}_{(2+\delta)\gamma,I} \le K C_{X,I} \norm{\varphi(Y)}_{D(X,\gamma,(1+\delta)\gamma),I} \end{equation} where $$ C_{X,I} = 1+\norm{X}_{\gamma,I}+\norm{\mathbb{X}^2}_{2\gamma,I}. $$ Our aim is to bound $Z$ in $\mathcal{D}_X^{\gamma,2\gamma}(I,V)$. To achieve this we already have the good bound~(\ref{eq:Qbound3}) for $Q_Z$ so we need bounds for $\norm{\partial_\kappa \varphi(Y)^{\cdot}_{\nu} Y^{\prime\,\kappa}_{\rho} \mathbb{X}^{2,\nu\rho}}_{2\gamma,I}$, $\norm{\varphi(Y)}_{\gamma,I}$ and $\norm{Z}_{\gamma,I}$. To simplify the arguments assume that $T < 1$ since at the end we will need to take $T$ small anyway. 
Let us start with $\norm{\partial_\kappa \varphi(Y)^{\cdot}_{\nu} Y^{\prime\,\kappa}_{\rho} \mathbb{X}^{2,\nu\rho}}_{2\gamma,I}$: \begin{equation} \label{eq:zbound-part1} \begin{split} \norm{\partial_\kappa \varphi(Y)^{\cdot}_{\nu} Y^{\prime\,\kappa}_{\rho} \mathbb{X}^{2,\nu\rho}}_{2\gamma,I} & \le \norm{\partial_\kappa \varphi(Y)^{\cdot}_{\nu}}_{\infty,I} \norm{Y^{\prime\,\kappa}_{\rho}}_{\infty,I} \norm{\mathbb{X}^{2,\nu\rho}}_{2\gamma,I} \\ & \le \norm{\partial \varphi}_{\infty}(|Y'_{t_0}|+T^{\gamma}\norm{Y'}_{\gamma,I}) \norm{\mathbb{X}^{2,\nu\rho}}_{2\gamma,I} \\ & \le \norm{ \varphi}_{1,\delta}(\norm{\varphi}_{1,\delta}+T^{\gamma}\norm{Y'}_{\gamma,I}) \norm{\mathbb{X}^{2,\nu\rho}}_{2\gamma,I} \end{split} \end{equation} Next, using the fact that \begin{equation*} \begin{split} \norm{\partial \varphi(Y)}_{\infty,I}& \le |\partial \varphi(Y_{t_0})|+ \norm{\partial \varphi(Y)}_{0,I} \\ & \le \norm{\varphi}_{1,\delta} + T^{\delta\gamma} \norm{\partial \varphi(Y)}_{\delta\gamma,I} \\ & \le \norm{\varphi}_{1,\delta} + T^{\delta\gamma} \norm{ \varphi(Y)}_{D(X,\gamma,(1+\delta)\gamma),I} \end{split} \end{equation*} obtain \begin{equation} \label{eq:zbound-part2} \begin{split} \norm{\varphi(Y)}_{\gamma,I} & \le \|X\|_{\gamma,I} \norm{\partial \varphi(Y)}_{\infty,I} + \norm{R_{\varphi(Y)}}_{\gamma,I} \\ & \le \norm{\varphi}_{1,\delta}\|X\|_{\gamma,I} +T^{\delta\gamma} (\|X\|_{\gamma,I} \norm{\partial \varphi(Y)}_{D(X,\gamma,(1+\delta)\gamma),I} + \norm{R_{\varphi(Y)}}_{(1+\delta)\gamma,I} ) \\ & \le C_{X,I} (\norm{\varphi}_{1,\delta} + T^{\delta\gamma} \norm{\varphi(Y)}_{D(X,\gamma,(1+\delta)\gamma),I} ) \end{split} \end{equation} To finish consider \begin{equation} \label{eq:zbound-part3} \begin{split} \norm{Z}_{\gamma,I} & \le \norm{Z'\delta\! X}_{\gamma,I} + \norm{R_Z}_{\gamma,I} \\ & \le \norm{ \varphi(Y)}_{\infty,I} \norm{X}_{\gamma,I} + \norm{\partial \varphi(Y) Y' \mathbb{X}^2}_{2\gamma,I} + \norm{Q_Z}_{2\gamma,I} \end{split} \end{equation} Putting together the bounds given in eqs.~(\ref{eq:Qbound3}), (\ref{eq:zbound-part1}), (\ref{eq:zbound-part2}) and eq.~(\ref{eq:zbound-part3}) we get \begin{equation} \label{eq:boundstoghether0} \begin{split} \norm{Z}_{*,I} & = \norm{\varphi(Y)}_\infty + \norm{\varphi(Y)}_{\gamma,I} + \norm{\partial_\kappa \varphi(Y)^{\cdot}_{\nu} Y^{\prime\,\kappa}_{\rho} \mathbb{X}^{2,\nu\rho}}_{2\gamma,I} + \norm{Q_Z}_{2\gamma,I} + \norm{Z}_{\gamma,I} \\ & \le 2 (1+\norm{X}_{\gamma,I}) \norm{\varphi(Y)}_\infty + \norm{\varphi(Y)}_{\gamma,I} + 2 \norm{\partial_\kappa \varphi(Y)^{\cdot}_{\nu} Y^{\prime\,\kappa}_{\rho} \mathbb{X}^{2,\nu}_{\rho}}_{2\gamma,I} + 2T^{\delta\gamma} \norm{Q_Z}_{(2+\delta)\gamma,I} \\ & \le K C_{X,I} (\norm{\varphi}_{1,\delta} + \norm{\varphi}_{1,\delta}^2 + T^{\delta\gamma} \norm{\varphi}_{1,\delta}\norm{Y}_{*,I} + T^{\delta\gamma}\norm{\varphi(Y)}_{D(X,\gamma,(1+\delta)\gamma),I}) \end{split} \end{equation} Eq.~(\ref{eq:boundWxx}) is used to conclude that \begin{equation} \label{eq:bound_on_gy} \begin{split} \norm{G(Y)}_{*,I} & \le K \|\varphi\|_{1,\delta} C_{X,I}(1+\|\varphi\|_{1,\delta}+T^{\delta\gamma} (1+\norm{Y}_{*,I}))^2 \\ & \le K \|\varphi\|_{1,\delta} C_{X,J}(1+\|\varphi\|_{1,\delta}+T^{\delta\gamma} (1+\norm{Y}_{*,I}))^2 \end{split} \end{equation} There exists $T_*$ such that for any $T < T_*$ the equation $$ A_T = K \|\varphi\|_{1,\delta} C_{X,J}(1+\|\varphi\|_{1,\delta}+T^{\delta\gamma} (1+A_T))^2 $$ has at least a solution $A_T > 0$. Then we get that $\norm{G(Y)}_{*,I} \le A_T$ whenever $\norm{Y}_{*,I} \le A_T$. 
Let us now prove that in the set $$ Q'_{y,I} = \{ Y \in \mathcal{D}_X^{\gamma,2\gamma}(I,V) : Y_{t_0}=y, Y'_{t_0} = \varphi(y), \norm{Y}_{*,I} \le A_T \} $$ the map $G$ is continuous (in the topology induced by the $\norm{\cdot}_{*,I}$ norm). Take $Y,\widetilde Y \in Q'_{y,I}$ with $Z = G(Y)$, $\widetilde Z = G(\widetilde Y)$ and \begin{equation*} \delta\! \widetilde Z^\mu = \widetilde Z^{\prime\,\mu}_\nu \delta\! X^\nu + R_{\widetilde Z}^\mu = \varphi(\widetilde Y)^{\mu}_{\nu} \delta\! X^\nu + \partial^\kappa \varphi(\widetilde Y)^{\mu}_{\nu} \widetilde Y^{\prime\,\kappa}_{\rho} \mathbb{X}^{2,\nu\rho} + Q^\mu_{\widetilde Z} \end{equation*} Take $0 < \alpha < 1$ and $(2+\alpha \delta)\gamma > 1$: a bound similar to Eq.~(\ref{eq:boundstoghether0}) exists for $\norm{Z-\widetilde Z}_{*,I}$: \begin{equation*} \begin{split} \norm{Z-\widetilde Z}_{*,I} & \le 2 (1+\norm{X}_{\gamma,I}) \norm{\varphi(Y)-\varphi(\widetilde Y)}_\infty + \norm{\varphi(Y)-\varphi(\widetilde Y)}_{\gamma,I} \\ & \qquad + 2 \norm{(\partial_\kappa \varphi(Y)^{\cdot}_{\nu} Y^{\prime\,\kappa}_{\rho}-\partial_\kappa \varphi(\widetilde Y)^{\cdot}_{\nu} \widetilde Y^{\prime\,\kappa}_{\rho}) \mathbb{X}^{2,\nu}_{\rho}}_{2\gamma,I} + 2 \norm{Q_Z-Q_{\widetilde Z}}_{(2+\alpha \delta)\gamma,I} \\ & \le K C_{X,I}\left[ \norm{\varphi(Y)-\varphi(\widetilde Y)}_{\gamma,I} +\norm{\partial \varphi(Y) + \partial \varphi(\widetilde Y)}_{\infty,I} A_{T} + \norm{Y'-\widetilde Y'}_{\infty,I} \norm{\varphi}_{\infty} \right] \\ & \qquad + 2 \norm{Q_Z-Q_{\widetilde Z}}_{(2+\alpha \delta)\gamma,I} \end{split} \end{equation*} when $\norm{Y-\widetilde Y}_{*,I} \le \varepsilon < 1$ we have $$ \norm{\varphi(Y)-\varphi(\widetilde Y)}_{\gamma,I} +\norm{\partial \varphi(Y) + \partial \varphi(\widetilde Y)}_{\infty,I} A_{T} + \norm{Y'-\widetilde Y'}_{\infty,I} \norm{\varphi}_{\infty} \le K \norm{\varphi}_{1,\delta} (1+A_T) \varepsilon^\delta $$ moreover we can bound $\norm{Q_Z-Q_{\widetilde Z}}_{(2+\alpha \delta)\gamma,I}$ as $$ \norm{Q_Z-Q_{\widetilde Z}}_{(2+\alpha \delta)\gamma,I} \le \frac{1}{2^{(2+\alpha\delta)\gamma}-2} C_{X,I} \left[\norm{R_{W}-R_{\widetilde W}}_{(1+\alpha\delta)\gamma,I} + \norm{\partial \varphi(Y) - \partial \varphi(\widetilde Y)}_{\alpha\delta\gamma,I} \right] $$ with $W = \varphi(Y)$, $\widetilde W = \varphi(\widetilde Y)$. Both of the terms in the r.h.s. will be bounded by interpolation: the first between $$ \norm{R_{W}-R_{\widetilde W}}_{(1+ \delta)\gamma,I} \le \norm{\varphi(Y)}_{D(X,\gamma,(1+\delta)\gamma)} + \norm{\varphi(\widetilde Y)}_{D(X,\gamma,(1+\delta)\gamma)} $$ and \begin{equation*} \begin{split} \norm{R_{W}-R_{\widetilde W}}_{\gamma,I} & = \norm{(\delta\! \varphi(Y)-\delta\! \varphi(\widetilde Y)) - (\partial \varphi(Y) - \partial \varphi(\widetilde Y)) \delta\! 
X}_{\gamma,I} \\ &\le \norm{\varphi(Y)- \varphi(\widetilde Y)}_{\gamma,I} + C_{X,I} \norm{\partial \varphi(Y) - \partial \varphi(\widetilde Y)}_{\infty,I} \\ & \le \norm{\varphi}_{1,\delta} \varepsilon + C_{X,I} \norm{\varphi}_{1,\delta} \varepsilon^\delta \end{split} \end{equation*} while the second between $$ \norm{\partial \varphi(Y) - \partial \varphi(\widetilde Y)}_{\delta\gamma,I} \le \norm{\partial \varphi(Y) }_{\delta\gamma,I} +\norm{\partial \varphi(\widetilde Y) }_{\delta\gamma,I} $$ and $$ \norm{\partial \varphi(Y) - \partial \varphi(\widetilde Y)}_{0,I} \le 2 \norm{\partial \varphi(Y) - \partial \varphi(\widetilde Y)}_{\infty,I} \le \norm{\varphi}_{1,\delta} \norm{Y-\widetilde Y}^\delta_{\infty,I} \le \norm{\varphi}_{1,\delta} \varepsilon^{\delta}. $$ These estimates are enough to conclude that $\norm{Z-\widetilde Z}_{*,I}$ goes to zero whenever $\norm{\widetilde Y - Y}_{*,I}$ does. Reasoning as in Prop.~\ref{eq:existence_young}, we can prove that a solution exists in $\mathcal{D}_X^{\gamma,2\gamma}(I,V)$ for any $I \subseteq J$ with $|I|$ sufficiently small. Cover $J$ by a sequence $I_1,\dots,I_n$ of intervals of size $T < T_*$. Patching together local solutions we have a continuous solution $\overline Y$ defined on all of $J$ with $$ \delta\! \overline Y = \overline Y' \delta\! X + R_{\overline Y} $$ where $R_{\overline Y} \in \cup_i \Omega \mathcal{C}^{2\gamma}(I_i,V)$ and $\overline Y' \in \cup_i \Omega \mathcal{C}^{\gamma}(I_i,V)$. It remains to prove that $\overline Y \in \mathcal{D}_X^{\gamma,2\gamma}(J,V)$. Since the restriction of $\overline Y$ to $I_i$ is in $Q'_{y,I_i}$ for some $y \in V$, we have (with abuse of notation) that $\norm{\overline Y}_{*,I_i} \le A_T$ for any $i$. Using Lemma~\ref{eq:holder-patching} iteratively we can obtain that $$ \norm{\overline Y}_{\gamma,J} \le 2^{n+1} \sup_i \norm{\overline Y}_{\gamma,I_i} \le 2^{n+1} A_T $$ and, by the same token, $$ \norm{\overline Y'}_{\gamma,J} \le 2^{n+1} A_T. $$ Next consider $R_{\overline Y}$: write $J_k = \cup_{i=1}^k I_i$ and by the very same lemma get ($J_{i+1} = J_i \cup I_{i+1}$) \begin{equation*} \begin{split} \norm{R_{\overline Y}}_{2\gamma,J_{i+1}} & \le 2 \norm{R_{\overline Y}}_{2\gamma,J_i} + 2\norm{R_{\overline Y}}_{2\gamma,I_{i+1}} + \norm{\delta\! \overline Y'\delta\! X}_{\gamma,\gamma,J_{i+1}} \\ & \le 2 \norm{R_{\overline Y}}_{2\gamma,J_i} + 2\norm{R_{\overline Y}}_{2\gamma,I_{i+1}} + \norm{\overline Y'}_{\gamma,J} \norm{X}_{\gamma,J} \end{split} \end{equation*} since $N R_{\overline Y} = - \delta\! \overline Y'\delta\! X$. By induction over $i$ we end up with \begin{equation*} \norm{R_{\overline Y}}_{2\gamma,J} \le 2^{n+1} \sup_i \norm{R_{\overline Y}}_{2\gamma,I_{i}} + n \norm{\overline Y'}_{\gamma,J} \norm{X}_{\gamma,J} \le (2^{n+1}+ 2^{2n+2} n) A_T \end{equation*} and this is enough to conclude that $\overline Y \in \mathcal{D}_X^{\gamma,2\gamma}(J,V)$. \qed \begin{proposition}[Uniqueness $\gamma > 1/3$] If $\gamma > 1/3$ and $\varphi \in C^{2,\delta}(V,V)$ with $(2+\delta)\gamma>1$, then there exists a unique path $Y \in \mathcal{D}^{\gamma,2\gamma}_X(J,V)$ which solves eq.~(\ref{eq:diff_eq_young}) based on the couple $(X,\mathbb{X}^2)$. Moreover, the It\^o map $F(y,\varphi,X,\mathbb{X}^2)$ is Lipschitz continuous in the following sense. 
Let $Y = F(y,\varphi,X,\mathbb{X}^2)$ and $\widetilde Y = F(\widetilde y, \widetilde \varphi, \widetilde X, \mathbb{\widetilde X}^2)$, where $(X,\mathbb{X}^2)$ and $(\widetilde X, \mathbb{\widetilde X}^2)$ are two rough paths. Then, defining \begin{equation*} \varepsilon_{Y,I} = \norm{Y'-\widetilde Y'}_{\infty,I} + \norm{Y'-\widetilde Y'}_{\gamma,I} + \norm{R_Y-R_{\widetilde Y}}_{2\gamma,I} + \norm{\varphi-\widetilde\varphi}_{2,\delta} \end{equation*} \begin{equation*} \rho_I = |Y_{t_0} - \widetilde Y_{t_0}| + \norm{X-\widetilde X}_{\gamma,I} + \norm{\mathbb{X}^2-\mathbb{\widetilde X}^2}_{2\gamma,I} \end{equation*} and $$ C_{X,I} = (1+\norm{X}_{\gamma,I}+\norm{\widetilde X}_{\gamma,I}+\norm{\mathbb{X}^2}_{2\gamma,I}+\norm{\mathbb{\widetilde X}^2}_{2\gamma,I}) $$ $$ C_{Y,I} = (1+\norm{Y}_{*,I}+\norm{\widetilde Y}_{*,I}), $$ there exists a constant $M$ depending only on $C_{X,J}$, $C_{Y,J}$, $\norm{\varphi}_{2,\delta}$ and $\norm{\widetilde \varphi}_{2,\delta}$ such that $$ \varepsilon_{Y,J} \le M \rho_{J}. $$ \end{proposition} \proof The strategy will be the same as in the proof of Prop.~\ref{eq:uniqueness_young}. Take two paths $Y,\widetilde Y \in \mathcal{D}_X^{\gamma,2\gamma}(J,V)$ and let, as above, $Z = G(Y)$, $\widetilde Z = \widetilde G (\widetilde Y)$. Write the decomposition for each of the paths $Y, \widetilde Y, Z,\widetilde Z$ as $$ \delta\! Y^\mu = Y^{\prime\,\mu}_{\nu} \delta\! X^\nu + R^\mu_Y, \qquad \delta\! \widetilde Y^\mu = \widetilde Y^{\prime\,\mu}_{\nu} \delta\! \widetilde X^\nu + R^\mu_{\widetilde Y}, $$ and $$ \delta\! Z = Z' \delta\! X + R_Z = \varphi(Y) \delta\! X + \partial \varphi(Y) Y' \mathbb{X}^2 + Q_Z $$ $$ \delta\! \widetilde Z = \widetilde Z' \delta\! \widetilde X + R_{\widetilde Z} = \widetilde \varphi(\widetilde Y) \delta\! \widetilde X + \partial \widetilde \varphi(\widetilde Y) \widetilde Y' \mathbb{\widetilde X}^2 + Q_{\widetilde Z} $$ The key point is to bound $\varepsilon_{Z,I}$ defined as \begin{equation*} \begin{split} \varepsilon_{Z,I} & = \norm{\varphi(Y)-\widetilde \varphi(\widetilde Y)}_{\infty,I} + \norm{\varphi(Y)-\widetilde \varphi(\widetilde Y)}_{\gamma,I} + \norm{R_Z-R_{\widetilde Z}}_{2\gamma,I} \end{split} \end{equation*} and the result of Lemma~\ref{lemma:zetabound-rough} (in the Appendix) tells us that, when $T < 1$, $\varepsilon_{Z,I}$ can be bounded by \begin{equation} \label{eq:zetabound} \varepsilon_{Z,I} \le K [(1+\norm{\varphi}_{2,\delta})C_{X,I}^2 C_{Y,I}^3 \rho_I + \norm{\varphi}_{2,\delta} T^{\delta\gamma} C_{X,I}^3 C_{Y,I}^2 \varepsilon_{Y,I}]. \end{equation} Taking $Y_0 = \widetilde Y_0$, $\widetilde X = X$, $\mathbb{\widetilde X}^2 = \mathbb{X}^2$ and $\varphi = \widetilde \varphi$, we have $\rho_I = \rho_J = 0$. As shown in the proof of Prop.~\ref{eq:existence_rough}, if $T < T_*$ then for any $y \in V$ there exists a set $Q'_{y,I} \subset \mathcal{D}_{X}^{\gamma,2\gamma}(I,V)$ invariant under $G$. Moreover, if $Y,\widetilde Y \in Q'_{y,I}$ for some $y$, then $\norm{Y}_{*,I} \le A_T$, $\norm{\widetilde Y}_{*,I} \le A_T$ and, letting $$ \bar C_{Y,T} = 1+2 A_T, $$ we can rewrite eq.~(\ref{eq:zetabound}) as $$ \varepsilon_{Z,I} \le K \norm{\varphi}_{2,\delta} T^{\delta\gamma} C_{X,J}^3 \bar C_{Y,T}^2 \varepsilon_{Y,I} $$ so choosing $T$ small enough such that \begin{equation} \label{eq:tcritical} K \norm{\varphi}_{2,\delta} T^{\delta\gamma} C_{X,J}^3 \bar C_{Y,T}^2 = \alpha < 1 \end{equation} we have $$ \|G(Y)-G(\widetilde Y)\|_{*,I} = \varepsilon_{Z,I} \le \alpha \varepsilon_{Y,I} = \alpha \|Y-\widetilde Y\|_{*,I}. 
$$ Then $G$ is a strict contraction on $Q'_{y,I}$ and thus has a unique fixed point. Again, patching together local solutions we get a global one defined on all of $J$ and belonging to $\mathcal{D}_X^{\gamma,2\gamma}(J,V)$. Now let us discuss the continuity of the It\^o map $F(y,\varphi,X,\mathbb{X}^2)$. Let $Y, \widetilde Y$ be the solutions based on $(X,\mathbb{X}^2)$ and $(\widetilde X,\mathbb{\widetilde X}^2)$ respectively. We have $Y = G(Y) = Z$, $\widetilde Y = \widetilde G(\widetilde Y) = \widetilde Z$, so that $\varepsilon_{Z,I} = \varepsilon_{Y,I}$ for any interval $I \subset J$, and we can use eq.~(\ref{eq:zetabound}) to write $$ \varepsilon_{Y,I} = \varepsilon_{Z,I} \le K [(1+\norm{\varphi}_{2,\delta})C_{X,I}^2 C_{Y,I}^3 \rho_I + \norm{\varphi}_{2,\delta}T^{\delta\gamma} C_{X,I}^3 C_{Y,I}^2 \varepsilon_{Y,I}]. $$ Fix $T$ small enough for~(\ref{eq:tcritical}) to hold so that $$ \varepsilon_{Y,I} \le (1-\alpha)^{-1} K (1+\norm{\varphi}_{2,\delta}) C_{X,J}^2 C_{Y,J}^3 \rho_I = M_1 \rho_I $$ Cover $J$ with intervals $I_1,\dots,I_n$ of width $T$ and let $J_k = \cup_{i=1}^k I_i$ with $J_n = J$. To patch together the bounds for different $I_i$ into a global bound for $\varepsilon_{Y,J}$ we use again Lemma~\ref{eq:holder-patching} to estimate \begin{equation*} \begin{split} \norm{R_Y-R_{\widetilde Y}}_{2\gamma,J_{i+1}} & \le \norm{R_Y-R_{\widetilde Y}}_{2\gamma,J_{i}}+\norm{R_Y-R_{\widetilde Y}}_{2\gamma,I_{i+1}}+\norm{\delta\! Y' \delta\! X - \delta\! \widetilde Y' \delta\! \widetilde X}_{\gamma,\gamma,J_{i+1}} \\ & \le \norm{R_Y-R_{\widetilde Y}}_{2\gamma,J_{i}}+\norm{R_Y-R_{\widetilde Y}}_{2\gamma,I_{i+1}} \\ & \qquad + \norm{ Y'-\widetilde Y'}_{\gamma,J_{i+1}} \norm{ X}_{\gamma,J} + \norm{ \widetilde Y'}_{\gamma,J} \norm{ X- \widetilde X}_{\gamma,J_{i+1}} \end{split} \end{equation*} then we obtain easily that \begin{equation*} \varepsilon_{Y,J_{i+1}} \le C_{X,J} (\varepsilon_{Y,J_{i}}+\varepsilon_{Y,I_{i+1}} ) + C_{Y,J} \rho_J. \end{equation*} Proceeding by induction we get \begin{equation*} \begin{split} \varepsilon_{Y,J_{n}} & \le (C_{X,J} n + \sum_{k=1}^n C_{X,J}^k) \sup_i \varepsilon_{Y,I_{i}} + n C_{Y,J} \rho_J \\ & \le [ 2 \sum_{k=1}^n C_{X,J}^k M_1 + n C_{Y,J}] \rho_J \end{split} \end{equation*} which implies that there exists a constant $M$ depending only on $C_{X,J}$, $C_{Y,J}$, $\norm{\varphi}_{2,\delta}$ such that $$ \varepsilon_{Y,J} \le M \rho_{J}. $$ \qed \section{Some probability} \label{sec:probability} So far we have developed our arguments using only analytic and algebraic properties of paths. In this section we show how probability theory provides concrete examples of non-smooth paths for which the theory outlined above applies. Let $(\Omega,\mathcal{F},\mathbb{P})$ be a probability space on which a standard Brownian motion $X$ with values in $V=\mathbb{R}^n$ (endowed with the Euclidean scalar product) is defined. It is well known that $X$ is almost surely locally H\"older continuous for any exponent $\gamma < 1/2$, so that we can fix $\gamma < 1/2$ and choose a version of $X$ living in $\mathcal{C}^\gamma(I,V)$ on any bounded interval $I$. In this case solutions $\mathbb{X}^2$ of eq.~(\ref{eq:two-process}) can be obtained by stochastic integration: let $$ W^{\mu\nu}_{\text{It\^o},st} \mathrel{\raise.2\p@\hbox{:}\mathord{=}} \int_s^t (X^\mu_u-X^\mu_s) \hat d X^\nu_u $$ where the hat indicates that the integral is understood in It\^o's sense with respect to the forward filtration $\mathcal{F}_t = \sigma(X_s; s\le t)$. Then it is easy to show that, for any $s,u,t \in \mathbb{R}$, \begin{equation} \label{eq:ito1} W^{\mu\nu}_{\text{It\^o},st} - W^{\mu\nu}_{\text{It\^o},su} - W^{\mu\nu}_{\text{It\^o},ut} = (X^\mu_u - X^\mu_s)(X^\nu_t - X^\nu_u) \end{equation} which means that $$ N W^{\mu\nu}_{\text{It\^o}} = \delta\! X^\mu \delta\! X^\nu. $$ We can then choose a continuous version $\mathbb{X}^2_{\text{It\^o}}$ of $(s,t) \mapsto W_{\text{It\^o},st}$ for which eq.~(\ref{eq:ito1}) holds a.s. for all $t,u,s \in \mathbb{R}$. 
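This construction is easy to visualise numerically. The following Python fragment is a toy illustration only (left-point Riemann sums on a fixed grid in place of the genuine It\^o integral, an arbitrary seed, grid size and choice of indices): it approximates $\mathbb{X}^2_{\text{It\^o}}$ for a two-dimensional Brownian path and checks that the analogue of relation~(\ref{eq:ito1}) holds for the discretized object at grid points.
\begin{verbatim}
import numpy as np

# Left-point Riemann sum approximation of the Ito iterated integral of a
# two-dimensional Brownian path and a check of relation (ito1) on grid points:
# X2_{st} - X2_{su} - X2_{ut} = (X_u - X_s) (x) (X_t - X_u).
# Seed, grid size and indices are arbitrary choices for this illustration.

rng = np.random.default_rng(0)
n, dim, T = 2000, 2, 1.0
dX = rng.normal(scale=np.sqrt(T / n), size=(n, dim))    # Brownian increments
X = np.vstack([np.zeros(dim), np.cumsum(dX, axis=0)])   # path on the grid

def X2(i, j):
    """Left-point sum approximating int_{t_i}^{t_j} (X_u - X_{t_i}) dX_u."""
    return sum(np.outer(X[k] - X[i], dX[k]) for k in range(i, j))

s, u, t = 300, 900, 1700
defect = X2(s, t) - X2(s, u) - X2(u, t) - np.outer(X[u] - X[s], X[t] - X[u])
print("defect in the discrete analogue of (ito1):", np.max(np.abs(defect)))
print("approximate X2 over the whole interval:\n", X2(0, n))
\end{verbatim}
The printed defect vanishes up to floating-point error, since the discrete identity is purely algebraic once the same grid is used for all three terms.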
It remains to show that $\mathbb{X}^2_{\text{It\^o}} \in \Omega \mathcal{C}^{2\gamma}(I,V^{\otimes 2})$ (for any $\gamma <1/2$ and bounded interval $I$). To prove this result we will develop a small variation on a well known argument first introduced by Garsia, Rodemich and Rumsey (cfr.~\cite{Rosinski,stroock}) to control H\"older-like seminorms of continuous stochastic processes by a corresponding integral norm. Fix an interval $T \subset \mathbb{R}$. A Young function $\psi$ on $\mathbb{R}^+$ is an increasing, convex function such that $\psi(0)=0$. \begin{lemma} \label{lemma:besov} For any process $R \in \Omega \mathcal{C}(T)$ let \begin{equation*} U = \int_{T \times T} \psi\left(\frac{|R_{st}|}{p(|t-s|/4)} \right) dt\,ds \end{equation*} where $p: \mathbb{R}^+ \to \mathbb{R}^+$ is an increasing function with $p(0)=0$ and $\psi$ is a Young function. Assume there exists a constant $C$ such that \begin{equation} \label{eq:ext-bound-n} \sup_{(u,v,r) \in [s,t]^3} |N R_{u v r}| \le \psi^{-1}\left( \frac{C}{|t-s|^2}\right) p(|t-s|/4), \end{equation} for any pair $s<t$ such that $[s,t] \subset T$. Then \begin{equation} \label{eq:control-besov} |R_{st}| \le 16 \int_0^{|t-s|} \left[\psi^{-1}\left(\frac{U}{ r^2}\right)+\psi^{-1}\left(\frac{C}{ r^2}\right) \right]dp(r) \end{equation} for any $s,t \in T$. \end{lemma} \proof See the Appendix, Sec.~\ref{sec:proof_besov}.\qed \begin{remark} Lemma~\ref{lemma:besov} reduces to well-known results in the case $N R = 0$, since we can take $C=0$. Condition~(\ref{eq:ext-bound-n}) is not very satisfying and we conjecture that an integral control over $N R$ would suffice to obtain~(\ref{eq:control-besov}). However, in its current formulation it is enough to prove the following useful corollary. \end{remark} \begin{corollary} For any $\gamma > 0$ and $p \ge 1$ there exists a constant $C$ such that for any $R \in \Omega \mathcal{C}$ \begin{equation} \label{eq:generalboundxx} \|R\|_{\gamma,T} \le C (U_{\gamma+2/p,p}(R,T)+\|N R\|_{\gamma,T}) \end{equation} where \begin{equation*} U_{\gamma,p}(R,T) = \left[ \int_{T \times T} \left(\frac{|R_{s t}|}{|t-s|^\gamma}\right)^p dt ds \right]^{1/p}. \end{equation*} \end{corollary} \proof In Lemma~\ref{lemma:besov} take $\psi(x) = x^p$ and $p(x) = x^{\gamma+2/p}$; the conclusion easily follows. 
\qed In the case of $\mathbb{X}^2$ we have, fixed $T = [t_0,t_1] \in \mathbb{R}$, $t_0 < t_1$, and using the scaling properties of Brownian motion, \begin{equation*} \begin{split} \mathbb{E}\left[ U_{\gamma+2/p,p}(\mathbb{X}^2_{\text{It\^o}},T)^p\right] & = \mathbb{E} \int_{[t_0,t_1]^2} \frac{|\mathbb{X}^2_{\text{It\^o},uv}|^p}{|u-v|^{p\gamma+2}} du dv \\ & = \mathbb{E}|\mathbb{X}^2_{\text{It\^o},0\,1}|^p \int_{[t_0,t_1]^2} |u-v|^{p(1-\gamma-2/p)} du dv < \infty \end{split} \end{equation*} for any $\gamma < 1$ and $p > 1/(1-\gamma)$ so that, a.s. $U_{\gamma+2/p,p}(\mathbb{X}^2_{\text{It\^o}},T)$ is finite for any $\gamma < 1$ and $p$ sufficiently large. Since $$ \sup_{(u,v,w): s \le u \le v \le w\le t} |(N \mathbb{X}^2_{\text{It\^o}})_{uvw}| \le \sup_{(u,v,w): s \le u \le v \le w\le t} |\delta\! X_{uv}||\delta\! X_{vw}| \le \|X\|_{\gamma,T}^2 |t-s|^{2\gamma} $$ for any $t_0\le s \le t \le t_1$, we have from~(\ref{eq:generalboundxx}) that for any $\gamma < 1/2$, a.s. $$ \|\mathbb{X}^2_{\text{It\^o},st}(\omega)\| \le C_{\gamma,T}(\omega) |t-s|^{2\gamma} $$ for any $t,s \in I$, where $C_{\gamma,T}$ is a suitable random constant. Then for any $\gamma < 1/2$ and bounded interval $I \subset \mathbb{R}$ we can choose a version such that $\mathbb{X}^2_{\text{It\^o}} \in \Omega \mathcal{C}^{2\gamma}(I,V^{\otimes 2})$. We can introduce $$ \mathbb{X}^{2,\mu\nu}_{\text{Strat.},st} \mathrel{\raise.2\p@\hbox{:}\mathord{=}} \int_s^t (X^\mu_u-X^\mu_s) \circ \hat dX^\nu_u $$ where the integral is understood in Stratonovich sense, then by well known results in stochastic integration, we have $$ \mathbb{X}^{2,\mu\nu}_{\text{Strat.},st} = \mathbb{X}^{2,\mu\nu}_{\text{It\^o},st} +\frac{g^{\mu\nu}}{2} (t-s) $$ where $g^{\mu\nu}=1$ if $\mu=\nu$ and $g^{\mu\nu} =0 $ otherwise. It is clear that, also in this case, we can select a continuous version of $\mathbb{X}^2_{\text{Strat.},st}$ which lives in $\Omega \mathcal{C}^{2\gamma}$ and such that $N\mathbb{X}^2_{\text{Strat.}} = \delta\! X \delta\! X$. The connection between stochastic integrals and the integral we defined in Sec.~\ref{sec:irregular} starting from a couple $(X,\mathbb{X}^2)$ is clarified in the next corollary: \begin{corollary} Let $\varphi \in C^{1,\delta}(V,V\otimes V^*)$ with $(1+\delta)\gamma > 1$, then the It\^o stochastic integral $$ \delta\! I_{\text{It\^o},st}^\mu = \int_s^t \varphi(X_u)^\mu_\nu \hat dX^\nu_u $$ has a continuous version which is a.s. equal to $$ \delta\! I_{\text{rough},st}^\mu = \int_s^t \varphi(X_u)^\mu_\nu dX^\nu_u $$ where the integral is understood in the sense of Theorem~\ref{th:rough} based on the rough path $(X,\mathbb{X}^2_{\text{It\^o}})$ moreover the Stratonovich integral $$ \delta\! I_{\text{Strat.},st}^\mu = \int_s^t \varphi(X_u)^\mu_\nu \circ \hat dX^\nu_u $$ is a.s. equal to the integral $$ \delta\! J_{st}^\mu = \int_s^t \varphi(X_u)^\mu_\nu dX^\nu_u $$ defined based on the couple $(X,\mathbb{X}^2_{\text{Strat.}})$ and the following relation holds $$ \delta\! J_{st}^\mu = \delta\! I_{\text{rough},st}^\mu + \frac{g^{\nu\kappa}}{2} \int_s^t \partial_\kappa \varphi(X_u)^\mu_\nu du $$ \end{corollary} \proof Recall that the It\^o integral $\delta\! I_{\text{It\^o}}$ is the limit in probability of the discrete sums $$ S^\mu_\Pi = \sum_i \varphi(X_{t_i})^\mu_\nu (X^\nu_{t_{i+1}}-X^\nu_{t_i}) $$ while the integral $\delta\! 
I_{\text{rough}}$ is the classical limit as $|\Pi| \to 0$ of $$ S^{\prime\,\mu}_\Pi = \sum_i \left[\varphi(X_{t_i})^\mu_\nu (X^\nu_{t_{i+1}}-X^\nu_{t_i}) + \partial_\kappa \varphi(X_{t_i})^\mu_\nu \mathbb{X}^{2,\kappa\nu}_{\text{It\^o},t_i t_{i+1}}\right] $$ (cfr. Corollary~\ref{cor:sums_rough}). Then it will suffice to show that the limit in probability of $$ R^\mu_\Pi = \sum_i \partial_\kappa \varphi(X_{t_i})^\mu_\nu \mathbb{X}^{2,\kappa\nu}_{\text{It\^o},t_i t_{i+1}} $$ is zero. Since we assume $\partial \varphi$ bounded it will be enough to show that $R_\Pi \to 0$ in $L^2(\Omega)$. By a standard argument, using the fact that $R_\Pi$ is a discrete martingale, we have \begin{equation*} \begin{split} \expect|R_\Pi|^2 & = \sum_i \expect |\partial_\kappa \varphi(X_{t_i})_\nu \mathbb{X}^{2,\kappa\nu}_{\text{It\^o},t_i t_{i+1}}|^2 \le \|\varphi\|_{1,\delta} \sum_i \expect |\mathbb{X}^{2}_{\text{It\^o},t_i t_{i+1}}|^2 \\& = \|\varphi\|_{1,\delta} \expect |\mathbb{X}^{2}_{\text{It\^o},0 1}|^2 \sum_i |t_{i+1}-t_i|^2 \le \|\varphi\|_{1,\delta} \expect |\mathbb{X}^{2}_{\text{It\^o},0 1}|^2 |\Pi| |t-s| \end{split} \end{equation*} which implies that $\expect|R_\Pi|^2 \to 0$ as $|\Pi| \to 0$. As far as the integral $\delta\! J$ is concerned, we have that it is the classical limit of \begin{equation*} \begin{split} S^{\prime\prime\,\mu}_\Pi & = \sum_i \left[\varphi(X_{t_i})^\mu_\nu (X^\nu_{t_{i+1}}-X^\nu_{t_i}) + \partial_\kappa \varphi(X_{t_i})^\mu_\nu \mathbb{X}^{2,\kappa\nu}_{\text{Strat.},t_i t_{i+1}}\right] \\ & = \sum_i \left[\varphi(X_{t_i})^\mu_\nu (X^\nu_{t_{i+1}}-X^\nu_{t_i}) + \partial_\kappa \varphi(X_{t_i})^\mu_\nu \mathbb{X}^{2,\kappa\nu}_{\text{It\^o},t_i t_{i+1}} + \frac{g^{\kappa\nu}}{2} \partial_\kappa \varphi(X_{t_i})^\mu_\nu (t_{i+1}-t_i) \right] \\ & = S^{\prime\,\mu}_\Pi + \frac{g^{\kappa\nu}}{2}\sum_i \partial_\kappa \varphi(X_{t_i})^\mu_\nu (t_{i+1}-t_i) \end{split} \end{equation*} so that $$ \delta\! I^\mu_{\text{rough},st} = \delta\! J^\mu_{st} - \frac{g^{\kappa\nu}}{2}\int_s^t \partial_\kappa \varphi(X_u)^\mu_\nu du $$ as claimed and then, by the relationship between It\^o and Stratonovich integration: $$ \delta\! I_{\text{It\^o},st}^\mu = \delta\! I_{\text{Strat.},st}^\mu - \frac{g^{\kappa\nu}}{2}\int_s^t \partial_\kappa \varphi(X_u)^\mu_\nu du $$ we get $\delta\! J = \delta\! I_{\text{Strat.}}$. \qed \section{Relationship with Lyons' theory of rough paths} \label{sec:lyons} The general abstract result given in Prop.~\ref{prop:main} can also be used to provide alternative proofs of the main results in Lyons' theory of rough paths~\cite{Lyons}, i.e. the extension of multiplicative paths to any degree and the construction of a multiplicative path from an almost-multiplicative one. The main restriction is that we only consider control functions $\omega(t,s)$ (cfr. Lyons~\cite{Lyons} for details and definitions) which are given by $$ \omega(t,s) = K |t-s| $$ for some constant $K$. Given an integer $n$, $T^{(n)}(V)$ denote the truncated tensor algebra up to degree $n$: $T^{(n)}(V) := \oplus_{k=0}^n V^{\otimes k}$, $V^{\otimes 0 } = \mathbb{R}$. A tensor-valued path $Z : I^2 \to T^{(n)}(V)$ is of \emph{finite $p$-variation} if \begin{equation} \label{eq:finite_p_variation_path} \|Z^{\bar{\mu}}\|_{|\bar{\mu}|/p} \le K^{|\bar\mu|}, \qquad \forall \bar{\mu} : | \bar{\mu}| \le n \end{equation} where $\bar{\mu}$ is a tensor multi-index. 
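At degree two the multiplicative property introduced below is nothing but the algebraic relation $N\mathbb{X}^2 = \delta\! X\,\delta\! X$ used in Sec.~\ref{sec:probability}, and it is instructive to see it degree by degree on discrete data. The following Python fragment is a toy illustration only (an arbitrary piecewise linear path, left-point iterated sums in place of genuine iterated integrals): it builds the iterated sums of the increments up to degree three and checks the corresponding relations.
\begin{verbatim}
import numpy as np

# Discrete left-point iterated sums of a piecewise linear path up to degree three,
# together with a degree-by-degree check of the multiplicative property.
# The path, its length and the splitting point are arbitrary choices.

def signature3(dX):
    """Iterated left-point sums of the increments dX up to tensor degree 3."""
    d = dX.shape[1]
    S1, S2, S3 = np.zeros(d), np.zeros((d, d)), np.zeros((d, d, d))
    for dx in dX:
        S3 = S3 + np.einsum('ab,c->abc', S2, dx)   # update degree 3 with old S2
        S2 = S2 + np.outer(S1, dx)                 # update degree 2 with old S1
        S1 = S1 + dx
    return S1, S2, S3

rng = np.random.default_rng(1)
dX = rng.normal(size=(600, 2)) / 25.0              # increments of a path in R^2
s, u, t = 0, 250, 600

A1, A2, A3 = signature3(dX[s:u])                   # data on [s,u]
B1, B2, B3 = signature3(dX[u:t])                   # data on [u,t]
C1, C2, C3 = signature3(dX[s:t])                   # data on [s,t]

# N Z = sum of products of lower degree terms, checked degree by degree
print(np.max(np.abs(C1 - (A1 + B1))))
print(np.max(np.abs(C2 - (A2 + B2 + np.outer(A1, B1)))))
print(np.max(np.abs(C3 - (A3 + B3 + np.einsum('a,bc->abc', A1, B2)
                          + np.einsum('ab,c->abc', A2, B1)))))
\end{verbatim}
For the discrete sums these relations hold exactly, so the printed defects are zero up to floating-point error; the stepwise recursion inside the function is itself the degree-by-degree form of the relation being checked.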
A path $Z$ of degree $n$ and finite $p$-variation is \emph{almost multiplicative} (of roughness $p$) if $Z^\emptyset \equiv 1$, $n \ge \lfloor p \rfloor$ and \begin{equation} \label{eq:almost_multiplicative} N Z^{\bar\mu} = \sum_{\bar\nu\bar\kappa = \bar\mu} Z^{\bar\nu} Z^{\bar\kappa} + R^{\bar\mu} \end{equation} with $R^{\bar\mu} \in \Omega \mathcal{C}_2^z(I,T^{(n)}(V))$ for some $z>1$ uniformly for all $\bar\mu$. By convention the summation $\sum_{\bar\nu\bar\kappa = \bar\mu}$ does not include the terms where either $\bar\nu =\emptyset$ or $\bar\kappa = \emptyset$. A path $Z$ is \emph{multiplicative} if $Z^\emptyset \equiv 1$ and \begin{equation} \label{eq:multiplicative} N Z^{\bar\mu} = \sum_{\bar\nu\bar\kappa = \bar\mu} Z^{\bar\nu} Z^{\bar\kappa}. \end{equation} The key result is contained in the following proposition: \begin{proposition} If $Z$ is an almost-multiplicative path of degree $n$ and finite $p$-variation, $n \ge \lfloor p \rfloor$, then there exists a unique multiplicative path $\widetilde Z$ in $T^{(\lfloor p \rfloor)}(V)$ with finite $p$-variation such that \begin{equation} \label{eq:from_almost_to_multiplicative} \|Z^{\bar\mu}-\widetilde Z^{\bar\mu}\|_z \le K \end{equation} for some $z > 1$ and all multi-indices $\bar\mu$ such that $|\bar\mu| \le \lfloor p \rfloor$. \end{proposition} \proof Let us prove that there exists a multiplicative path $\widetilde Z$ such that \begin{equation} \label{eq:almost_remainder} Z = \widetilde Z + Q \end{equation} with $Q \in \Omega \mathcal{C}^z$, $z > 1$. We proceed by induction: if $|\bar\mu|=1$ we have \begin{equation*} N Z^{\bar\mu}_{sut} = R^{\bar\mu}_{sut} \end{equation*} which, given that $R^{\bar\mu} \in \Omega \mathcal{C}_2^z$ with $z >1$, implies that there exists a unique $\widetilde Z^{\bar\mu}$ such that $N \widetilde Z^{\bar\mu} = 0$ and \begin{equation*} Z^{\bar\mu} = \widetilde Z^{\bar\mu} + \Lambda R^{\bar\mu} = \widetilde Z^{\bar\mu} + Q^{\bar\mu} \end{equation*} with $Q^{\bar\mu} \in \Omega \mathcal{C}^z$. Then assume that eq.~(\ref{eq:almost_remainder}) is true up to degree $j-1$ and let us show that it is true also for a multi-index $\bar\mu$ of degree $j$: \begin{equation*} \begin{split} N Z^{\bar\mu} & = \sum_{\bar\nu \bar\kappa = \bar\mu} Z^{\bar\nu} Z^{\bar\kappa} + R^{\bar\mu} \\ & = \sum_{\bar\nu \bar\kappa = \bar\mu} (\widetilde Z^{\bar\nu}+Q^{\bar\nu})(\widetilde Z^{\bar\kappa}+Q^{\bar\kappa}) + R^{\bar\mu} \\ & = \sum_{\bar\nu \bar\kappa = \bar\mu} \widetilde Z^{\bar\nu} \widetilde Z^{\bar\kappa} + \sum_{\bar\nu \bar\kappa = \bar\mu} [ Q^{\bar\nu} \widetilde Z^{\bar\kappa} + \widetilde Z^{\bar\nu} Q^{\bar\kappa} + Q^{\bar\nu} Q^{\bar\kappa}] + R^{\bar\mu} \\ & = \sum_{\bar\nu \bar\kappa = \bar\mu} \widetilde Z^{\bar\nu} \widetilde Z^{\bar\kappa} + \widetilde R^{\bar\mu} \end{split} \end{equation*} If we can prove that $\widetilde R^{\bar\mu}$ is in the image of $N$, then writing $$ \widetilde Z^{\bar\mu} = Z^{\bar\mu} - \Lambda \widetilde R^{\bar\mu} = Z^{\bar\mu} + Q^{\bar\mu} $$ we obtain the multiplicative property for $\widetilde Z^{\bar\mu}$ $$ N \widetilde Z^{\bar\mu} = \sum_{\bar\nu \bar\kappa = \bar\mu} \widetilde Z^{\bar\nu}_{ut} \widetilde Z^{\bar\kappa}_{su} $$ with $|\bar\mu| = j$, and we are done since uniqueness is obvious. 
To prove $\widetilde R^{\bar\mu} \in \text{Im}N$ we must show that $N_2 \widetilde R^{\bar\mu} = 0$: \begin{equation*} \begin{split} N_2 \widetilde R^{\bar\mu} & = N_2 \left[ N Z^{\bar \mu}-\sum_{\bar\nu \bar\kappa = \bar\mu} \widetilde Z^{\bar\nu} \widetilde Z^{\bar\kappa}\right] = N_2 \left[\sum_{\bar\nu \bar\kappa = \bar\mu} \widetilde Z^{\bar\nu} \widetilde Z^{\bar\kappa}\right] \\ & = \sum_{\bar\nu \bar\kappa = \bar\mu} N \widetilde Z^{\bar\nu} \widetilde Z^{\bar\kappa} - \sum_{\bar\nu \bar\kappa = \bar\mu} \widetilde Z^{\bar\nu} N \widetilde Z^{\bar\kappa} \\ & = \sum_{\bar\nu \bar\kappa = \bar\mu} \sum_{\bar\sigma\bar\tau = \bar\nu} \widetilde Z^{\bar\sigma} \widetilde Z^{\bar\tau}\widetilde Z^{\bar\kappa} - \sum_{\bar\nu \bar\kappa = \bar\mu} \sum_{\bar\sigma\bar\tau = \bar\kappa} \widetilde Z^{\bar\nu} \widetilde Z^{\bar\sigma} \widetilde Z^{\bar\tau} = 0 \end{split} \end{equation*} where we used the Leibnitz rule for $N_2$ (see eq.~(\ref{eq:leibnitz_n2})). To finish we can take for the constant $K$ in eq.~(\ref{eq:from_almost_to_multiplicative}) the maximum of $\|Q^{\bar\mu}\|_z$ for all $|\bar\mu| \le \lfloor p \rfloor$. \qed \begin{proposition} Let $Z$ be a multiplicative path of degree $n$ and finite $p$-variation such that \begin{equation} \label{eq:multiplicative_bound} \sum_{\bar\mu : |\bar\mu|=k}\norm{Z^{\bar\mu}}_{k/p} \le C \frac{\alpha^{k}}{k!} \end{equation} for all $k \le n$ and with $\alpha,C >0$; then if $(n+1) > p$ and $C$ is small enough (see eq.~(\ref{eq:smallness_of_C})) there exists a unique multiplicative extension of $Z$ to any degree and eq.~(\ref{eq:multiplicative_bound}) holds for every $k$. \end{proposition} \proof By induction we can assume that $Z$ is a multiplicative path of degree $k$ for which eq.~(\ref{eq:multiplicative_bound}) holds up to degree $k$ and prove that it can be extended to degree $k+1$ with the same bound. Note that $k \ge n$ and then $(k+1) > p$. For $|\bar\mu|=k+1$ we should have \begin{equation} \label{eq:extension_decomposition} N Z^{\bar\mu} = \sum_{\bar\nu\bar \kappa = \bar\mu} Z^{\bar\nu} Z^{\bar\kappa} \in \mathcal{Z}_2^{(k+1)/p} \end{equation} Since $(k+1) > p$, this equation has a unique solution $Z^{\bar\mu} \in \Omega \mathcal{C}^{(k+1)/p}(T^{k+1}(V))$. 
Then observe that, from eq.~(\ref{eq:extension_decomposition}), $$ Z^{\bar\mu}_{st} = Z^{\bar\mu}_{ut} + Z^{\bar\mu}_{su} + \sum_{\bar\nu\bar \kappa = \bar\mu} Z^{\bar\nu}_{su} Z^{\bar\kappa}_{ut} $$ and, taking as $u$ the mid-point between $t$ and $s$, we can bound $Z^{\bar \mu}$ as follows: \begin{equation*} \sum_{|\bar\mu| = k+1} \norm{Z^{\bar\mu}_{st}}_{(k+1)/p} \le \frac{2}{2^{(k+1)/p}} \sum_{|\bar\mu| = k+1} \norm{Z^{\bar\mu}_{st}}_{(k+1)/p} +C^2 \alpha^{k+1} \sum_{i=1}^k \frac{2^{-i/p}}{i!} \frac{2^{-(k+1-i)/p}}{(k+1-i)!} \end{equation*} Now, \begin{equation*} \begin{split} \sum_{i=0}^{k+1} & \frac{2^{-i/p}}{i!} \frac{2^{-(k+1-i)/p}}{(k+1-i)!} \le \sum_{i=0}^{k+1} \frac{2^{-i}}{i!} \frac{2^{-(k+1-i)}}{(k+1-i)!} +2 \sum_{i=0}^{\lfloor p \rfloor} \frac{ (2^{-(k+1-i)/p} 2^{-i/p}-2^{-(k+1-i)} 2^{-i})}{i!(k+1-i)!} \\ & = \frac{1}{(k+1)!}\left[1+2 \sum_{i=0}^{\lfloor p \rfloor} \frac{(k+1)!}{i!(k+1-i)!} (2^{-(k+1)/p} -2^{-(k+1)}) \right] \\ & \le \frac{1+D_p k^{\lfloor p \rfloor} 2^{-(k+1)/p}}{(k+1)!} \end{split} \end{equation*} which gives \begin{equation*} \sum_{|\bar\mu| = k+1} \norm{Z^{\bar\mu}_{st}}_{(k+1)/p} \le C^2 \frac{(2^{(k+1)/p}-2)}{2^{(k+1)/p}} \frac{(1+D_p k^{\lfloor p \rfloor} 2^{-(k+1)/p})\alpha^{k+1}}{(k+1)!} \le C \frac{\alpha^{k+1}}{(k+1)!} \end{equation*} whenever $C$ is such that \begin{equation} \label{eq:smallness_of_C} 0 < C \le \min_{k \ge n} \frac{2^{(k+1)/p}}{(2^{(k+1)/p}-2)(1+D_p k^{\lfloor p \rfloor} 2^{-(k+1)/p})}. \end{equation} This concludes the proof of the induction step. \qed \section*{Acknowledgments} The author was introduced to this problem by F.~Flandoli, who supported this work with useful discussions and constant encouragement. Thanks are due to M.~Franciosi for advice on homological algebra and to Y.~Ouknine for pointing out a mistake in an earlier version of the paper. \appendix \section{Some proofs} \label{app:proofs} \subsection{Proof of Prop.~\ref{prop:main}} \label{sec:proof_main} The basic technique to prove the existence of the map $\Lambda$ is borrowed from~\cite{FGGT}. Let $\eta(x)$ be a smooth function on $\mathbb{R}$ with compact support and $\eta_{\alpha}(x) \mathrel{\raise.2\p@\hbox{:}\mathord{=}} \alpha^{-1} \eta(x/\alpha)$. Define \begin{equation*} (\Lambda_\beta A)_{st} \mathrel{\raise.2\p@\hbox{:}\mathord{=}} - \int_s^t dx \iint d\tau d\sigma \mathcal{F}_\beta(x,s;\tau,\sigma) A_{\tau x \sigma} \end{equation*} where \begin{equation*} \mathcal{F}_\beta(x,s;\tau,\sigma) \mathrel{\raise.2\p@\hbox{:}\mathord{=}} [ \eta_{\beta}(x-\tau)- \eta_{\beta}(s-\tau)] \partial_x \eta_{\beta}(x-\sigma) \end{equation*} and the integrals in $\tau$ and $\sigma$ are extended over all of $\mathbb{R}$. Given that $A \in \mathcal{Z}_2$, there exists $R \in \Omega \mathcal{C}$ such that $N R = A$ and \begin{equation*} \begin{split} (\Lambda_\beta A)_{st} & = - \int_s^t dx \iint d\tau d\sigma \mathcal{F}_\beta(x,s;\tau,\sigma) (R_{\tau \sigma}-R_{\tau x }-R_{ x \sigma}) \\ & = - \int_s^t dx \iint d\tau d\sigma \mathcal{F}_\beta(x,s;\tau,\sigma) R_{\tau \sigma} \end{split} \end{equation*} since the other terms vanish after the integrations in $\tau$ or $\sigma$. Then the following decomposition holds: \begin{equation} \label{eq:decomposition} \Lambda_\beta A = \tilde R_\beta + \delta\! \Phi_\beta(R) \end{equation} where \begin{equation*} (\tilde R_\beta)_{st} \mathrel{\raise.2\p@\hbox{:}\mathord{=}} \iint d\tau d\sigma \eta_\beta(s-\tau) [\eta_\beta(t-\sigma)-\eta_\beta(s-\sigma)] R_{\tau \sigma} \end{equation*} and \begin{equation*} \delta\! 
\Phi_\beta(R)_{st} \mathrel{\raise.2\p@\hbox{:}\mathord{=}} - \int_s^t dx \iint d\sigma d\tau \eta_\beta(x-\tau) \partial_x \eta_\beta(x-\sigma) R_{\tau \sigma} \end{equation*} In eq.~(\ref{eq:decomposition}) the l.h.s. depends only on $A = N R$ while each of the terms in the r.h.s depends explicitly on $R$. We have $N \Lambda_\beta A = N \tilde R_\beta$ and since $\lim_{\beta \to 0} \tilde R_\beta = R$ pointwise we have that $\lim_{\beta \to 0} N \Lambda_\beta A = N R = A$. So every accumulation point $X$ of $\Lambda_\beta A$ will solve the equation $N X = A$. Moreover if it exists $X \in \Omega \mathcal{C}^{z}$ with $z > 1$ and $N X = A$ then it is unique and $\lim_{\beta \to 0} \Lambda_\beta R = X$ in $\Omega \mathcal{C}^{1}$ since in this case \begin{equation*} \Lambda_\beta A = \tilde R_\beta + \delta\! \Phi_\beta(R) = \tilde X_\beta + \delta\! \Phi_\beta(X) \end{equation*} and it is easy to prove that $\Phi_\beta(X) \to 0$ in $\mathcal{C}^1$. Now we will prove that $\lim_{\beta\to 0 }\Lambda_\beta A$ exists when $A \in \mathcal{Z}_2^z$ with $z > 1$. Define $f_\tau : \mathbb{R}^2 \times \mathbb{R}_+ \to V$ as $f_\tau (x,y,\alpha) := \eta_{\alpha}(x-\tau)$ and $g_\sigma : \mathbb{R}^2 \times \mathbb{R}_+ \to V$ as $g_\sigma (x,y,\alpha) := \eta_{\alpha}(y-\sigma)$. Apply Stokes Theorem to the exact differential 2-form $\omega \mathrel{\raise.2\p@\hbox{:}\mathord{=}} df_\tau \wedge d g_\sigma = d(f_\tau d g_\sigma)$ on $D \mathrel{\raise.2\p@\hbox{:}\mathord{=}} \Delta_{t,s} \times [\beta,\beta']$ where $\Delta_{t,s} = \{(x,y) \in \mathbb{R}^2 : s < x < y < t\}$. Then \begin{equation*} \int_{\partial D} \omega = \int_D d \omega = 0 \end{equation*} where the boundary $\partial D = - c_1 + c_2 + c_3 $ is composed of $c_1 = \Delta_{t,s} \times \{ \beta \}$, $c_2 = \Delta_{t,s} \times \{ \beta' \}$, $c_3 = \partial \Delta_{t,s} \times [\beta,\beta']$. So \begin{equation*} \int_{\Delta_{t,s}} \omega |_{\alpha = \beta} = \int_{\Delta_{t,s}} \omega |_{\alpha = \beta'} + \int_{ \partial \Delta_{t,s} \times [\beta,\beta']} \omega \end{equation*} giving \begin{equation*} \int_s^t \mathcal{F}_\beta(x,s;\tau,\sigma) dx = \int_s^t \mathcal{F}_{\beta'}(x,s;\tau,\sigma) dx + \int_\beta^{\beta'} d\alpha \int_s^t \mathcal{K}(\alpha,x,t,s;\tau,\sigma) dx \end{equation*} with \begin{multline*} \mathcal{K} (\alpha,x,t,s;\tau,\sigma) = \partial_\alpha [\eta_\alpha(x-\sigma)-\eta_\alpha(s-\sigma)] \partial_x \eta_\alpha(x-\tau) \\+\partial_\alpha [\eta_\alpha(t-\tau)-\eta_\alpha(x-\tau)] \partial_x \eta_\alpha(x-\sigma) \end{multline*} Then \begin{equation} \label{eq:telescope} \Lambda_\beta A_{st} = \Lambda_{\beta'} A_{st} - \int_\beta^{\beta'} d\alpha \int_s^t dx \iint d\tau d\sigma \,\mathcal{K}(\alpha,x,t,s;\tau,\sigma) R_{\tau \sigma} \end{equation} Assume we can write $A = \sum_{i=1}^n A_i$ where $A_i \in \Omega \mathcal{C}_2^{\rho_i,z-\rho_i}$ for a choice of $n$ and $\rho_i > 0$, $i=1,\dots,n$. Write $\rho_i' = z-\rho_i$. Then consider \begin{equation*} \begin{split} I&(\alpha) = - \int_s^t dx \iint d\tau d\sigma \mathcal{K}(\alpha,x,t,s;\tau,\sigma) R_{\tau \sigma} \\ & = \iint d\tau d\sigma \left\{\partial_\alpha \eta_\alpha(s-\sigma) [\eta_\alpha(t-\tau)-\eta_\alpha(s-\tau)] \right.\\ & \qquad \left. 
- \partial_\alpha \eta_\alpha(t-\tau) [\eta_\alpha(t-\sigma)-\eta_\alpha(s-\sigma)]\right\} R_{\tau \sigma} \\ & + \int_s^t dx \iint d\tau d\sigma [\partial_\alpha \eta_\alpha(x-\tau) \partial_x \eta(x-\sigma) - \partial_\alpha \eta_\alpha(x-\sigma) \partial_x \eta(x-\tau)] R_{\tau \sigma} \\ & = \iint d\tau d\sigma \partial_\alpha \eta_\alpha(\sigma) \eta_\alpha(\tau) [R_{t+\tau,s+\sigma }-R_{s+\tau,s+\sigma }-R_{t+\tau,t+\sigma}+R_{t+\tau,s+\sigma} ] \\ & + \int_s^t dx \iint d\tau d\sigma \partial_\alpha \eta_\alpha(\tau) \partial_\sigma \eta(\sigma) [R_{x+\sigma,x+\tau}-R_{x+\tau,x+\sigma}] \\ & = \iint d\tau d\sigma \partial_\alpha \eta_\alpha(\sigma) \eta_\alpha(\tau) [N R_{t+\tau,s+\tau,s+\sigma }+N R_{t+\tau,t+\sigma,s+\sigma} ] \\ & + \int_s^t dx \iint d\tau d\sigma \partial_\alpha \eta_\alpha(\tau) \partial_\sigma \eta(\sigma) [N R_{x+\sigma,x,x+\tau}-N R_{x+\tau,x,x+\sigma}] \end{split} \end{equation*} so that we can bound \begin{equation*} \begin{split} |I(\alpha)| & \le \iint d\tau d\sigma |\partial_\alpha \eta_\alpha(\sigma)| |\eta_\alpha(\tau)| \left[|N R_{t+\tau,s+\tau,s+\sigma }|+|N R_{t+\tau,t+\sigma,s+\sigma}| \right] \\ & \qquad + \int_s^t dx \iint d\tau d\sigma |\partial_\alpha \eta_\alpha(\tau)|| \partial_\sigma \eta(\sigma)| \left[|N R_{x+\sigma,x,x+\tau}|+|N R_{x+\tau,x,x+\sigma}|\right] \\ & \le \sum_{i=1}^n \|A_i\|_{\rho_i,\rho_i'} \iint d\tau d\sigma |\partial_\alpha \eta_\alpha(\sigma)| |\eta_\alpha(\tau)| \left[|t-s|^{\rho_i} |\tau-\sigma|^{\rho_i'} +|\tau-\sigma|^{\rho_i} |t-s|^{\rho_i'} \right] \\ & \qquad + \sum_{i=1}^n \|A_i\|_{\rho_i,\rho_i'} \int_s^t dx \iint d\tau d\sigma |\partial_\alpha \eta_\alpha(\tau)|| \partial_\sigma \eta(\sigma)| \left[|\sigma|^{\rho_i} |\tau|^{\rho_i'}+|\tau|^{\rho_i} |\sigma|^{\rho_i'}\right] \end{split} \end{equation*} where each term can be bounded as follows: \begin{equation*} \iint d\tau d\sigma |\partial_\alpha \eta_\alpha(\sigma)| |\eta_\alpha(\tau)| |\tau-\sigma|^{a} = \alpha^{a-1} \iint d\tau d\sigma |\eta(\sigma)- \sigma \eta'(\sigma)| |\eta(\tau)|\, |\tau-\sigma|^{a} \le K \alpha^{a-1}, \end{equation*} \begin{equation*} \int d\tau |\partial_\alpha \eta_\alpha(\tau)| |\tau|^{a} = \alpha^{a-1} \int d\tau |\eta(\tau)-\tau\eta'(\tau)| |\tau|^{a} \le K^{1/2} \alpha^{a-1} \end{equation*} for a suitable constant $K>0$, and obtain \begin{equation*} \begin{split} |I(\alpha)| & \le K \sum_{i=1}^n (\alpha^{\rho_i-1} |t-s|^{\rho_i'} +\alpha^{\rho_i'-1} |t-s|^{\rho_i}) \|A_i\|_{\rho_i,\rho_i'} \\ & + K |t-s| \sum_{i=1}^n \alpha^{z-2} \|A_i\|_{\rho_i,\rho_i'} \end{split} \end{equation*} Upon integration in $\alpha$ we get: \begin{equation*} \int_0^1 |I(\alpha)| d\alpha \le K \sum_{i=1}^n \|A_i\|_{\rho_i,\rho_i'} \end{equation*} if $|t-s| \le 1$. By dominated convergence of the integral in eq.~(\ref{eq:telescope}), $$\lim_{\beta \to 0} \Lambda_\beta A \mathrel{\mathord{=}\raise.2\p@\hbox{:}} \Lambda A$$ exists (in $\Omega \mathcal{C}$ uniformly in bounded intervals). If we also observe that \begin{equation*} |(\Lambda_{\beta'} A)_{st}| \le K (\beta^{\prime})^{-1}|t-s| \sum_{i=1}^n \|A_i\|_{\rho_i,\rho_i'} \end{equation*} we get that \begin{equation*} |(\Lambda A)_{t,s}| \le K \sum_{i=1}^n \|A_i\|_{\rho_i,\rho_i'} \end{equation*} for $|t-s| \le 1$. Finally, let $J_{t,s}(x) \mathrel{\raise.2\p@\hbox{:}\mathord{=}} s+(t-s)(0 \vee (x\wedge 1))$ and $(J_{t,s}^* X)_{u,v,w} \mathrel{\raise.2\p@\hbox{:}\mathord{=}} X_{J_{t,s}(u),J_{t,s}(v),J_{t,s}(w)}$ for all $X \in \Omega \mathcal{C}_2$.
Then \begin{equation*} \|J_{t,s}^* X\|_{\gamma,\gamma'} \le |t-s|^{\gamma+\gamma'} \|X\|_{\gamma,\gamma'}. \end{equation*} Since $\Lambda_\beta A_{t,s} = (J_{t,s}^* \Lambda_{|t-s|\beta} A)_{0,1} = \Lambda_{|t-s|\beta} (J_{t,s}^* A)_{0,1}$ and \begin{equation*} |(\Lambda (J^*_{t,s} R))_{1,0}| \le K \sum_{i=1}^n \|J^*_{t,s} A_i\|_{\rho_i,\rho_i'} \end{equation*} this is enough to obtain the desired regularity: \begin{equation*} |(\Lambda A)_{t,s}| \le K |t-s|^{z} \sum_{i=1}^n \|A_i\|_{\rho_i,\rho_i'}. \end{equation*} The constant $K$ can be chosen to be equal to $1/(2^z-2)$. Indeed, let $\Phi = \sum_{i=1}^n \|A_i\|_{\rho_i,\rho_i'}$ and $R=\Lambda A$; since $N R = A$ we can write \begin{equation*} R_{st} = R_{ut} + R_{su} + \sum_i A_{i,sut} \end{equation*} with $t>u>s$ and $u = s+|t-s|/2$. Then estimate \begin{equation*} \begin{split} |R_{st}| & \le |R_{ut}| + |R_{su}| + \sum_i |A_{i,sut}| \\ & \le \norm{R}_z (|t-u|^z+|u-s|^z) + \sum_i \|A_i\|_{\rho_i,\rho_i'} |u-s|^{\rho_i} |t-u|^{\rho'_i} \\ & = \frac{2 \norm{R}_z + \Phi}{2^z} |t-s|^z \end{split} \end{equation*} so that \begin{equation*} \norm{R}_z \le \frac{1}{2^z-2} \Phi. \end{equation*} \qed \subsection{Some Proofs for Sec.~\ref{sec:irregular}} \subsubsection{Proof of Lemma~\ref{lemma:transitivity}} \label{sec:proof-transitivity} \proof Write down the decomposition for $Z$ and $Y$: \begin{equation*} \begin{gathered} \delta\! Z^\mu = F^{\mu}_{\nu} \delta\! Y^\nu + R^\mu_{ZY},\\ \delta\! Y^\mu = G^{\mu}_{\nu} \delta\! X^\nu + R^\mu_Y\\ \end{gathered} \end{equation*} where $F \in \mathcal{C}^{\eta-\gamma}(I,V \otimes V^*)$, $G \in \mathcal{C}^{\sigma-\gamma}(I,V \otimes V^*)$, $R_{ZY} \in \Omega \mathcal{C}^\eta(I,V)$ and $R_Y \in \Omega \mathcal{C}^\sigma(I,V)$, then \begin{equation*} \delta\! Z^\mu = F^{\mu}_{\nu} G^{\nu}_{\kappa} \delta\! X^\kappa + R^\mu_{ZY} + F^{\mu}_{\nu} R^\nu_Y = Z^{\prime\,\mu}_{\kappa} \delta\! X^\kappa + R^\mu_{ZX} \end{equation*} with $Z^{\prime\,\mu}_{\kappa} = F^{\mu}_{\nu} G^{\nu}_{\kappa}$ and $R^\mu_{ZX} = R^\mu_{ZY} + F^{\mu}_{\nu} R^\nu_{Y}$.
Let $\delta = \min(\sigma,\eta)$ and note that for $R_{ZY}$ we have \begin{equation*} \begin{gathered} \|R_{ZY}\|_{\eta,I} \le \|Z\|_{D(Y,\gamma,\eta),I}\\ \|R_{ZY}\|_{\gamma,I} \le \|Z\|_{\gamma,I} + \|F\|_{\infty,I} \|Y\|_{\gamma,I} \le \|Z\|_{D(Y,\gamma,\eta),I} (1+\|Y\|_{\gamma,I}) \end{gathered} \end{equation*} and by interpolation we obtain ($a=(\eta-\delta)/(\eta-\gamma) \le 1$) $$ \|R_{ZY}\|_{\delta,I} \le \|R_{ZY}\|_{\eta,I}^{1-a} \|R_{ZY}\|_{\gamma,I}^{a} \le \|Z\|_{D(Y,\gamma,\eta),I} (1+\|Y\|_{\gamma,I})^a \le \|Z\|_{D(Y,\gamma,\eta),I} (1+\|Y\|_{\gamma,I}) $$ and similarly $$ \|R_{Y}\|_{\delta,I} \le \|Y\|_{D(X,\gamma,\sigma),I} (1+\|X\|_{\gamma,I}) $$ Moreover $$ \|F\|_{0,I} = \sup_{t,s \in I}|F_t-F_s| \le \sup_{t,s \in I} (|F_t|+ |F_s|) = 2 \|F\|_{\infty,I} \le 2 \|Z\|_{D(Y,\gamma,\eta),I} $$ so, again by interpolation, we find $$ \|F\|_{\delta-\gamma,I} \le \|Z\|_{D(Y,\gamma,\eta),I} 2^{1-(\delta-\gamma)/(\sigma-\gamma)} \le 2 \|Z\|_{D(Y,\gamma,\eta),I} $$ and $$ \|G\|_{\delta-\gamma,I} \le 2 \|Y\|_{D(X,\gamma,\sigma),I} $$ To finish, bound the norm of $(Z,Z')$ as \begin{equation*} \begin{split} \|(Z,Z')\|_{D(X,\gamma,\delta),I} & = \|Z'\|_{\infty,I} + \|Z'\|_{\delta-\gamma,I} + \|R_{ZX}\|_{\delta,I} + \|Z\|_{\gamma,I} \\ & \le \|F\|_{\infty,I} \|G\|_{\infty,I} + \|F\|_{\delta-\gamma,I}\|G\|_{\infty,I} \\ & \qquad + \|F\|_{\infty,I} \|G\|_{\delta-\gamma,I} + \|R_{ZY}\|_{\delta,I} + \|F\|_{\infty,I} \|R_Y\|_{\delta,I} + \|Z\|_{\gamma,I} \\ & \le K \|Z\|_{D(Y,\gamma,\eta),I} (1+ \|Y\|_{D(X,\gamma,\sigma),I})(1+ \|X\|_{\gamma,I}) \end{split} \end{equation*} \qed \subsubsection{Proof of Prop.~\ref{prop:functionD}} \label{sec:proof_of_function_D} Let $y(r) = (Y_t-Y_s)r+Y_s$ so that \begin{equation*} \begin{split} Z^\mu_t-Z^\mu_s & = \varphi(y(1))^\mu-\varphi(y(0))^\mu = \int_0^1 \partial_\nu \varphi(y(r))^\mu y'(r)^\nu dr \\ & = (Y^\nu_t-Y^\nu_s) \int_0^1 \partial_\nu \varphi(y(r))^\mu dr \\ & = \partial_\nu\varphi(Y_s)^\mu (Y^\nu_t-Y^\nu_s) + (Y^\nu_t-Y^\nu_s) \int_0^1 \left[\partial_\nu \varphi(y(r))^\mu-\partial_\nu \varphi(Y_s)^\mu\right] dr \end{split} \end{equation*} Then if $\delta\! Y^\mu = Y^{\prime\,\mu}_{\nu} \delta\!
X^\nu + R^\mu$ we have \begin{equation} \label{eq:decomposition_of_Z} \begin{split} Z^\mu_t-Z^\mu_s & = \partial_\nu \varphi(Y_s)^\mu Y^{\prime\,\nu}_{\kappa,s} (X^\kappa_t-X^\kappa_s) + \partial_\nu\varphi(Y_s)^\mu R^\nu_{st} + (Y^\nu_t-Y^\nu_s) \int_0^1 \left[\partial_\nu \varphi(y(r))^\mu-\partial_\nu \varphi(Y_s)^\mu\right] dr \\ & = Z^{\prime\,\mu}_{\kappa,s} (X^\kappa_t-X^\kappa_s) + R^{\mu}_{Z,st} \end{split} \end{equation} with $Z^{\prime\,\mu}_{\kappa,s} = \partial_\nu \varphi(Y_s)^\mu Y^{\prime\,\nu}_{\kappa,s}$, \begin{equation*} \begin{split} \|Z'\|_{\sigma-\gamma} &\le \|\partial \varphi(Y_\cdot)\|_{\sigma-\gamma} \|Y'\|_\infty+ \|\partial \varphi(Y_\cdot)\|_{\infty} \|Y'\|_{\sigma-\gamma} \\ &\le (\|\partial \varphi(Y_\cdot)\|_{\delta \gamma}+\|\partial \varphi(Y_\cdot)\|_{0}) \|Y'\|_\infty+ \|\partial \varphi(Y_\cdot)\|_{\infty} (\|Y'\|_{\eta-\gamma}+\|Y'\|_{0}) \\ &\le \|\varphi\|_{1,\delta}(\|Y\|^\delta_{\gamma}+2) \|Y'\|_\infty+ 2 \|\varphi\|_{1,\delta} (\|Y'\|_{\eta-\gamma}+2\|Y'\|_{\infty}) \\ & \le K \|\varphi\|_{1,\delta} (\norm{Y}_{D(X,\gamma,\eta)}+\norm{Y}_{D(X,\gamma,\eta)}^{1+\delta}) \end{split} \end{equation*} as far as $R_Z$ is concerned we have \begin{equation*} \begin{split} |R_{Z,st}| & = |Y_t-Y_s| \left[\int_0^1 \left|\partial \varphi(y(r))-\partial \varphi(Y_s)\right| dr\right] \\ & \le \|\varphi\|_{1,\delta} \left|\int_0^1 r^\delta dr\right| |Y_t-Y_s|^{1+\delta} \le K \|\varphi\|_{1,\delta} \|Y\|_{\gamma}^{1+\delta} |t-s|^{\gamma (1+\delta)}; \end{split} \end{equation*} and \begin{equation*} \begin{split} |R_{Z,st}| & = |Y_t-Y_s| \left[\int_0^1 \left|\partial \varphi(y(r))-\partial \varphi(Y_s)\right| dr\right] \le K \|\varphi\|_{1,\delta} \|Y\|_\gamma |t-s|^\gamma. \end{split} \end{equation*} Interpolating these two inequalities we get \begin{equation*} \norm{R_Z}_{\sigma} \le K \|\varphi\|_{1,\delta} \|Y\|^{\sigma/\gamma}_\gamma \le K \|\varphi\|_{1,\delta} \|Y\|^{\sigma/\gamma}_{D(X,\gamma,\sigma)} \end{equation*} which together with the obvious bound \begin{equation*} \norm{Z}_\gamma \le \|\varphi\|_{1,\delta}\norm{Y}_\gamma \end{equation*} implies \begin{equation*} \begin{split} \|Z\|_{D(X,\gamma,\sigma)} & \le K \|\varphi\|_{1,\delta}( \norm{Y}_{D(X,\gamma,\eta)} + \norm{Y}^{1+\delta}_{D(X,\gamma,\eta)} + \norm{Y}^{\sigma/\gamma}_{D(X,\gamma,\eta)}) \end{split} \end{equation*} If $\delta\!\widetilde Y^\mu = \widetilde Y^{\prime\,\mu}_{\nu} \delta\! X^\nu + \widetilde R^\mu$ is another path, $\widetilde Z_t = \varphi(\widetilde Y_t)$ and $H = Z -\widetilde Z$ we have (see eq.~(\ref{eq:decomposition_of_Z})): \begin{equation} \label{eq:difference_H} \begin{split} \delta\! H^\mu & = H^{\prime\,\mu}_{\kappa} \delta\! X^\kappa + A^\mu + B^\mu \end{split} \end{equation} with $$ H^{\prime\,\mu}_{\kappa} = \partial_\nu \varphi(Y)^\mu Y^{\prime\,\nu}_{\kappa}-\partial_\nu \varphi(\widetilde Y)^\mu \widetilde Y^{\prime\,\nu}_{\kappa} $$ $$ A^{\mu}_{st} = \partial_\nu\varphi(Y_s)^\mu R^\nu_{st} - \partial_\nu\varphi(\widetilde Y_s)^\mu \widetilde R^\nu_{st} $$ and \begin{equation*} \begin{split} B^{\mu}_{st} & = \delta\! Y^\nu_{st} \int_0^1 \left[\partial_\nu \varphi(y(r))^\mu-\partial^\nu \varphi(y(0))^\mu\right] dr - \delta\! \widetilde Y^\nu_{st} \int_0^1 \left[\partial_\nu \varphi(\widetilde y(r))^\mu-\partial_\nu \varphi(\widetilde y(0))^\mu\right] dr \\ & = \delta\! (Y-\widetilde Y)^\nu_{st} \int_0^1 \left[\partial_\nu \varphi(y(r))^\mu-\partial_\nu \varphi(y(0))^\mu\right] dr \\ & \qquad + \delta\! 
\widetilde Y^\nu_{st} \int_0^1 \left[ \partial_\nu\varphi(y(r))^\mu-\partial_\nu \varphi(\widetilde y(r))^\mu-\partial_\nu \varphi(y(0))^\mu+\partial_\nu \varphi(\widetilde y(0))^\mu\right] dr \end{split} \end{equation*} Let $y(r,r') = (y(r) - \widetilde y(r)) r' + \widetilde y(r)$ and bound the second integral as \begin{equation*} \begin{split} \Big|\int_0^1 dr & \left[ \partial_\nu\varphi(y(r))^\mu-\partial_\nu \varphi(\widetilde y(r))^\mu-\partial_\nu \varphi(y(0))^\mu+\partial_\nu \varphi(\widetilde y(0))^\mu\right] \Big| \\ & = \Abs{\int_0^1 dr \int_0^1 dr' \left[ \partial_{\kappa} \partial_\nu\varphi(y(r,r'))^\mu-\partial_{\kappa} \partial_\nu \varphi(y(0,r'))^\mu\right] (y(r) - \widetilde y(r))^{\kappa} } \\ & \le \|\varphi\|_{2,\delta} {\int_0^1 dr \int_0^1 dr' |y(r,r')-y(0,r')|^\delta } \abs{y(r) - \widetilde y(r) } \\ & \le K \|\varphi\|_{2,\delta} (\norm{Y}_{\gamma}+\norm{\widetilde Y}_{\gamma})^{\delta} \norm{Y-\widetilde Y}_\infty |t-s|^{\gamma \delta} \end{split} \end{equation*} then \begin{equation*} \begin{split} \norm{B}_{(1+\delta)\gamma} \le \norm{Y-\widetilde Y}_\gamma \norm{\varphi}_{2,\delta} \norm{Y}_\gamma^{\delta} + K\norm{\widetilde Y}_\gamma \|\varphi\|_{2,\delta} (\norm{Y}_{\gamma}+\norm{\widetilde Y}_{\gamma})^{\delta} \norm{Y-\widetilde Y}_\infty \end{split} \end{equation*} and in the same way it is possible to obtain \begin{equation*} \norm{B}_{\gamma} \le \|\varphi\|_{2,\delta}( \norm{Y-\widetilde Y}_\gamma + \norm{\widetilde Y}_\gamma \norm{Y-\widetilde Y}_\infty) . \end{equation*} Moreover \begin{equation*} \norm{H'}_{\infty} \le \|\varphi\|_{2,\delta} \norm{Y'-\widetilde Y'}_{\infty} + \norm{Y'}_{\infty} \|\varphi\|_{2,\delta} \norm{Y-\widetilde Y}_{\infty} \end{equation*} \begin{equation*} \norm{H'}_{\gamma\delta} \le \|\varphi\|_{2,\delta} \norm{Y'-\widetilde Y'}_{\gamma\delta} + \norm{Y'}_{\gamma\delta} \|\varphi\|_{2,\delta} \norm{Y-\widetilde Y}_{\infty} \end{equation*} and \begin{equation*} \begin{split} \norm{A}_{\gamma}& \le \|\varphi\|_{2,\delta} \norm{R-\widetilde R}_{\gamma} + \norm{R}_{\gamma} \|\varphi\|_{2,\delta} \norm{Y-\widetilde Y}_{\infty} \\ & \le \|\varphi\|_{2,\delta} ( \norm{Y'-\widetilde Y'}_{\infty}\norm{X}_\gamma+\norm{Y-\widetilde Y}_{\gamma}) + (\norm{Y'}_\infty \norm{X}_{\gamma}+\norm{Y}_{\gamma}) \|\varphi\|_{2,\delta} \norm{Y-\widetilde Y}_{\infty} \end{split} \end{equation*} \begin{equation*} \norm{A}_{(1+\delta)\gamma}\le \|\varphi\|_{2,\delta} \norm{R-\widetilde R}_{(1+\delta)\gamma} + \norm{R}_{(1+\delta)\gamma} \|\varphi\|_{2,\delta} \norm{Y-\widetilde Y}_{\infty} \end{equation*} Collecting all these results together we end up with \begin{equation*} \norm{Z-\widetilde Z}_{D(X,\gamma,(1+\delta)\gamma)} \le C \norm{Y-\widetilde Y}_{D(X,\gamma,(1+\delta)\gamma)} \end{equation*} with $$ C = K \norm{\varphi}_{2,\delta} (1+\norm{X}_\gamma) (1+\norm{Y}_{D(X,\gamma,(1+\delta)\gamma)}+\norm{\widetilde Y}_{D(X,\gamma,(1+\delta)\gamma)})^{1+\delta}. $$ To finish, consider the case in which $\delta\!\widetilde Y^\mu = \widetilde Y^{\prime\,\mu}_{\nu} \delta\! \widetilde X^\nu + \widetilde R^\mu_{\widetilde Y}$ is a path controlled by $\widetilde X$. If we again let $\widetilde Z_t = \varphi(\widetilde Y_t)$ and $H = Z -\widetilde Z$ we have \begin{equation*} \begin{split} \delta\! H^\mu & = \partial_\nu \varphi(\widetilde Y_\cdot)^\mu \widetilde Y^{\prime\,\nu}_\kappa \delta\! (X^\kappa - \widetilde X^\kappa ) +H^{\prime\,\mu}_{\kappa} \delta\!
X^\kappa + A^\mu + B^\mu \end{split} \end{equation*} where the only difference with the expression in eq.~(\ref{eq:difference_H}) is the first term on the r.h.s.; then \begin{equation*} \|Z-\widetilde Z\|_\gamma + \|Z'-\widetilde Z'\|_{\delta\gamma} + \|R_Z-R_{\widetilde Z}\|_{(1+\delta)\gamma} + \|Z'-\widetilde Z'\|_{\infty} \le C (\varepsilon+\|X-\widetilde X\|_\gamma) \end{equation*} with $$ \varepsilon = \|Y-\widetilde Y\|_\gamma + \|Y'-\widetilde Y'\|_{\delta\gamma} + \|R_Y-R_{\widetilde Y}\|_{(1+\delta)\gamma} + \|Y'-\widetilde Y'\|_{\infty} $$ and this concludes the proof of prop.~\ref{prop:functionD}. \qed \subsection{Some Proofs and Lemmata used in Sec.~\ref{sec:ode}} \subsubsection{Proof of Lemma~\ref{eq:holder-patching}} \label{sec:proof-holder-patching} \proof Take $ u \in I \cap J$: \begin{equation*} \begin{split} \sup_{t \in I\backslash J ,s \in J\backslash I} \frac{|X_{st}|}{|t-s|^\gamma} & \le \sup_{t \in I\backslash J ,s \in J\backslash I} \frac{|X_{ut}|+|X_{su}|+|(N X)_{sut}|}{|t-s|^\gamma} \\ & \le \sup_{t \in I\backslash J ,s \in J\backslash I} \frac{|X_{ut}|}{|t-s|^\gamma} +\sup_{t \in I\backslash J ,s \in J\backslash I} \frac{|X_{su}|}{|t-s|^\gamma} +\sup_{t \in I\backslash J ,s \in J\backslash I} \frac{|(N X)_{sut}|}{|t-s|^\gamma} \\ & \le \sup_{t \in I\backslash J ,s \in J\backslash I} \frac{|X_{ut}|}{|t-u|^\gamma} +\sup_{t \in I\backslash J ,s \in J\backslash I} \frac{|X_{su}|}{|u-s|^\gamma} +\sup_{t \in I\backslash J ,s \in J\backslash I} \frac{|(N X)_{sut}|}{|s-u|^{\gamma_1} |t-u|^{\gamma_2}} \\ & \le \norm{X}_{\gamma,I}+\norm{X}_{\gamma,J}+\norm{X}_{\gamma_1,\gamma_2,I \cup J} \end{split} \end{equation*} then \begin{equation*} \begin{split} \norm{X}_{\gamma,I \cup J} &= \sup_{t,s \in I \cup J} \frac{|X_{t}-X_s|}{|t-s|^\gamma} \le \sup_{t ,s \in I } \frac{|X_{t}-X_s|}{|t-s|^\gamma} + \sup_{t,s \in J} \frac{|X_{t}-X_s|}{|t-s|^\gamma} + \sup_{t \in I\backslash J ,s \in J\backslash I} \frac{|X_{t}-X_s|}{|t-s|^\gamma} \\ & \le 2 (\norm{X}_{\gamma,I}+\norm{X}_{\gamma,J}) +\norm{X}_{\gamma_1,\gamma_2,I \cup J} \end{split} \end{equation*} as claimed. \qed \subsubsection{Lemmata for some bounds on the map $G$} With the notation in the proof of Prop.~\ref{eq:existence_young} we have \begin{lemma} \label{lemma:zetabound-young} For any interval $I=[t_0,t_0+T] \subseteq J$ such that $T<1$ the following bound holds \begin{equation} \label{eq:epsilonZbound-young-lemma} \begin{split} \varepsilon_{Z,I} & \le K C_{X,I} C_{Y,I}^\delta [(1+\norm{\varphi}_{1,\delta})\rho_I + T^{\gamma\delta} \varepsilon_{Y,I}] \end{split} \end{equation} \end{lemma} \proof Consider first the case when $\varphi =\widetilde \varphi$.
The integral defined in Prop.~\ref{prop:young} is a bounded bilinear application $(A,B) \mapsto \int A \, dB$; it is therefore continuous in both arguments, and it is easy to check the continuity estimate \begin{equation} \label{eq:continuity-young-x} \norm{Q_Z-Q_{\widetilde Z}}_{(1+\delta)\gamma,I} \le K(C_{X,I} \varepsilon^*_{W,I} + C_{Y,I} \rho_I) \end{equation} where we used the shorthands (defined in the proof of Prop.~\ref{eq:uniqueness_young}): $$ \varepsilon_{Z,I} = \norm{Z-\widetilde Z}_{\gamma,I}, \quad \varepsilon_{W,I}^* = \norm{W-\widetilde W}_{\delta\gamma,I}, \quad \varepsilon_{Y,I} = \norm{Y-\widetilde Y}_{\gamma,I}, \quad \varepsilon_{Y,I}^* = \norm{Y-\widetilde Y}_{\delta\gamma,I}; $$ $$ \rho_I = \norm{X-\widetilde X}_{\gamma,I} + |Y_0 - \widetilde Y_0| $$ $$ C_{X,I} = \norm{X}_{\gamma,I} + \norm{\widetilde X}_{\gamma,I} $$ $$ C_{Y,I} = \norm{Y}_{\gamma,I} + \norm{\widetilde Y}_{\gamma,I} $$ Observe that \begin{equation*} \begin{split} \norm{\varphi(Y)-\varphi(\widetilde Y)}_{\infty,I} &\le |\varphi(Y_0)-\varphi(\widetilde Y_0)| + T^{\delta\gamma} \norm{\varphi(Y)-\varphi(\widetilde Y)}_{\delta\gamma,I} \\ & \le \norm{\varphi}_{1,\delta}\rho_I + T^{\delta\gamma} \varepsilon^*_{W,I} \end{split} \end{equation*} \begin{equation*} \begin{split} \varepsilon_{Z,I} & \le \norm{\varphi(Y) \delta\! X-\varphi(\widetilde Y) \delta\! \widetilde X}_{\gamma,I} + \norm{Q_Z-Q_{\widetilde Z}}_{\gamma,I} \\ & \le \norm{\varphi(Y)-\varphi(\widetilde Y)}_{\infty,I} \norm{X}_{\gamma,I} + \norm{\varphi(\widetilde Y)}_{\infty,I} \norm{X-\widetilde X}_{\gamma,I} + T^{\delta\gamma} \norm{Q_Z-Q_{\widetilde Z}}_{(1+\delta)\gamma,I} \\ & \le \norm{\varphi}_{1,\delta} \rho_I C_{X,I}+T^{\delta\gamma} \varepsilon^*_{W,I}+ K T^{\delta\gamma} (C_{X,I} \varepsilon^*_{W,I} + C_{Y,I} \rho_I) \\ & \le \norm{\varphi}_{1,\delta} \rho_I (C_{X,I}+1+KC_{Y,I}^\delta) + T^{\gamma\delta} \varepsilon_{W,I}^* (C_{X,I}+KC_{Y,I}) \end{split} \end{equation*} It remains to bound $\varepsilon_{W,I}^*$: write $$ \varphi(x)-\varphi(y) = \int_0^1 d\alpha \partial\varphi(\alpha x + (1-\alpha)y) (x-y) = R\varphi(x,y)(x-y) $$ then $$ \norm{R\varphi}_{\infty} = \sup_{x,y \in V} |R\varphi(x,y)| \le \norm{\varphi}_{1,\delta} $$ and \begin{equation*} \begin{split} |R\varphi(x,y)-R\varphi(x',y')| & = \left|\int_0^1 (\partial \varphi(\alpha x + (1-\alpha)y)-\partial \varphi(\alpha x' + (1-\alpha)y')) d\alpha \right| \\ & \le \norm{\varphi}_{1,\delta} \int_0^1 |\alpha (x-x') + (1-\alpha)(y-y')|^\delta d\alpha \\ & \le \norm{\varphi}_{1,\delta}(|x-x'|^\delta+|y-y'|^\delta) \end{split} \end{equation*} so that \begin{equation*} \begin{split} \varepsilon^*_{W,I} & = \norm{\varphi(Y)-\varphi(\widetilde Y)}_{\delta\gamma,I} = \norm{R\varphi(Y,\widetilde Y)(Y-\widetilde Y)}_{\delta\gamma,I} \\ & \le \norm{R\varphi(Y,\widetilde Y)}_{\infty,I} \norm{Y-\widetilde Y}_{\delta\gamma,I} + \norm{R\varphi(Y,\widetilde Y)}_{\delta\gamma,I}\norm{Y-\widetilde Y}_{\infty,I} \\ & \le \norm{\varphi}_{1,\delta} \norm{Y-\widetilde Y}_{\delta\gamma,I} + \norm{Y-\widetilde Y}_{\infty,I} \norm{\varphi}_{1,\delta} (\norm{Y}^\delta_{\gamma,I} + \norm{\widetilde Y}^\delta_{\gamma,I}) \\ & \le K \norm{\varphi}_{1,\delta} C_{Y,I}^\delta \varepsilon_{Y,I}^* \\ & \le K \norm{\varphi}_{1,\delta} C_{Y,I}^\delta \varepsilon_{Y,I} \end{split} \end{equation*} concluding: \begin{equation} \label{eq:epsilonZbound-young-00} \begin{split} \varepsilon_{Z,I} & \le K
\norm{\varphi}_{1,\delta} C_{X,I} C_{Y,I}^\delta (\rho_I + T^{\gamma\delta} \varepsilon_{Y,I}) \end{split} \end{equation} The general case in which $\varphi \neq \widetilde\varphi$ can be easily derived from Eq.~(\ref{eq:epsilonZbound-young-00}) and the continuity of the integral, giving: \begin{equation*} \begin{split} \varepsilon_{Z,I} & \le K C_{X,I} C_{Y,I}^\delta [(1+\norm{\varphi}_{1,\delta})\rho_I + T^{\gamma\delta} \varepsilon_{Y,I}]. \end{split} \end{equation*} \qed Using the notation in the proof of Prop.~\ref{eq:existence_rough} we have \begin{lemma} \label{lemma:zetabound-rough} For any interval $I=[t_0,t_0+T] \subseteq J$ such that $T<1$ the following bound holds \begin{equation} \label{eq:zetabound-lemma} \varepsilon_{Z,I} \le K \norm{\varphi}_{2,\delta}(C_{X,I}^2 C_{Y,I}^3 \rho_I + T^{\delta\gamma} C_{X,I}^3 C_{Y,I}^2 \varepsilon_{Y,I}) + K \norm{\varphi-\widetilde \varphi}_{2,\delta} C_{X,I} C_{Y,I}^2 \end{equation} \end{lemma} \proof To begin, assume that $\varphi = \widetilde \varphi$. Let $W = \varphi(Y)$, $\widetilde W = \varphi(\widetilde Y)$ and write their decomposition as $$ \delta\! W^\mu = W^{\prime\,\mu}_{\nu} \delta\! X^\nu + R^\mu_W, \qquad \delta\! \widetilde W^\mu = \widetilde W^{\prime\,\mu}_{\nu} \delta\! \widetilde X^\nu + R^\mu_{\widetilde W}, \qquad $$ with $W^{\prime\,\mu}_{\nu} = \partial_\kappa \varphi(Y)^{\mu} Y^{\prime\,\kappa}_{\nu}$, $\widetilde W^{\prime\,\mu}_{\nu} = \partial_\kappa \varphi(\widetilde Y)^{\mu} \widetilde Y^{\prime\,\kappa}_{\nu}$. Moreover let \begin{equation*} \varepsilon_{W,I}^* = \norm{W'-\widetilde W'}_{\infty,I} + \norm{W'-\widetilde W'}_{\delta\gamma,I} + \norm{R_W-R_{\widetilde W}}_{(1+\delta)\gamma,I} + \norm{W -\widetilde W}_{\gamma,I} \end{equation*} Using the bound~(\ref{eq:betterbound-difference}) we have \begin{equation} \label{eq:boundQ} \norm{Q_Z- Q_{\widetilde Z}}_{(2+\delta)\gamma} \le K (D_1+D_2) \end{equation} $$ D_1 = C_X \varepsilon^*_{W,I} $$ \begin{equation*} \begin{split} D_2 & = (\norm{\varphi(Y)}_{D(X,\gamma,(1+\delta)\gamma),I}+\norm{\varphi(\widetilde Y)}_{D(\widetilde X,\gamma,(1+\delta)\gamma),I})(\norm{X-\widetilde X}_{\gamma,I} + \norm{\mathbb{X}^2-\mathbb{\widetilde X}^2}_{2\gamma,I}) \\ & \le K \norm{\varphi}_{2,\delta} C_{Y,I}^2 \rho_I \end{split} \end{equation*} where we used eq.~(\ref{eq:boundWxx}) to bound $\norm{\varphi(Y)}_{D(X,\gamma,(1+\delta)\gamma),I}$ and $\norm{\varphi(\widetilde Y)}_{D(\widetilde X,\gamma,(1+\delta)\gamma),I}$ in terms of $C_{Y,I}$.
By Prop.~\ref{prop:functionD} we have \begin{equation} \label{eq:function_difference-xx} \varepsilon^*_{W,I} \le K \norm{\varphi}_{2,\delta} C_{X,I} C_{Y,I}^{1+\delta} (\|X-\widetilde X\|_{\gamma,I} + \varepsilon^*_{Y,I}) \le K \norm{\varphi}_{2,\delta} C_{X,I} C_{Y,I}^{2} (\rho_I + \varepsilon^*_{Y,I}) \end{equation} with \begin{equation*} \begin{split} \varepsilon^*_{Y,I} & = \|Y'-\widetilde Y'\|_\infty + \|Y'-\widetilde Y'\|_{\delta\gamma} + \|R_Y-R_{\widetilde Y}\|_{(1+\delta)\gamma} +\|Y-\widetilde Y\|_\gamma \end{split} \end{equation*} and $$ C_{I} = K \norm{\varphi}_{2,\delta} C_{X,I} C_{Y,I}^{1+\delta}. $$ Taking $T < 1$ we can bound $\varepsilon^*_{Y,I} \le \varepsilon_{Y,I} + \norm{Y-\widetilde Y}_{\gamma,I}$ and \begin{equation} \label{eq:bound-epsilon-Y-xx} \begin{split} \varepsilon^*_{Y,I} & \le \|Y'-\widetilde Y'\|_\infty + \|Y'-\widetilde Y'\|_{\gamma} + \|R_Y-R_{\widetilde Y}\|_{2\gamma} + C_{X,I} \varepsilon_{Y,I} + C_{Y,I} \norm{X-\widetilde X}_{\gamma,I} \\ & \le 2 C_{X,I} \varepsilon_{Y,I} + C_{Y,I} \rho_I \end{split} \end{equation} where we used the following majorization for $\norm{Y-\widetilde Y}_{\gamma,I}$: \begin{equation} \label{eq:boundYgamma} \begin{split} \norm{Y-\widetilde Y}_{\gamma,I} & \le \norm{Y'\delta\! X-\widetilde Y' \delta\! \widetilde X}_{\gamma,I} +\norm{R_Y-R_{\widetilde Y}}_{\gamma,I} \\ & \le \norm{Y'-\widetilde Y'}_{\infty,I} \norm{X}_{\gamma,I} + (\norm{Y'}_{\infty,I}+\norm{\widetilde Y'}_{\infty,I}) \norm{X-\widetilde X}_{\gamma,I} + \norm{R_Y-R_{\widetilde Y}}_{2\gamma,I} \\ & \le C_{X,I} \varepsilon_{Y,I} + C_{Y,I} \rho_I \end{split} \end{equation} Eq.~(\ref{eq:bound-epsilon-Y-xx}) together with eq.~(\ref{eq:function_difference-xx}) imply \begin{equation*} \varepsilon^*_{W,I} \le K \norm{\varphi}_{2,\delta} (C_{X,I} C_{Y,I}^{3} \rho_I + C^2_{X,I} C_{Y,I}^{2} \varepsilon_{Y,I}) \end{equation*} and so \begin{equation} \label{eq:boundQ2} \begin{split} \norm{Q_Z- Q_{\widetilde Z}}_{(2+\delta)\gamma} & \le K C_X \varepsilon^*_{W,I} + K \norm{\varphi}_{2,\delta} C_{Y,I}^2 \rho_{I} \\ & \le K (C_X C_I (1+2 C_Y)+\norm{\varphi}_{2,\delta} C_{Y,I}^2) \rho_I + 2K C_I C^2_X \varepsilon_{Y,I} \\ & \le K \norm{\varphi}_{2,\delta}(C_X^2 C_Y^3 \rho_I + C_X^3 C_Y^2 \varepsilon_{Y,I}) \end{split} \end{equation} \begin{equation*} \begin{split} \varepsilon_{Z,I} & = \norm{\varphi(Y)-\varphi(\widetilde Y)}_{\infty,I} + \norm{\varphi(Y)-\varphi(\widetilde Y)}_{\gamma,I} + \norm{R_Z-R_{\widetilde Z}}_{2\gamma,I} \\ & \le |\varphi(Y_0)-\varphi(\widetilde Y_0)|+ 2 \norm{\varphi(Y)-\varphi(\widetilde Y)}_{\gamma,I} + \norm{R_Z-R_{\widetilde Z}}_{2\gamma,I} \end{split} \end{equation*} Proceed step by step: \begin{equation*} \begin{split} \norm{\partial \varphi(Y)-\partial \varphi(\widetilde Y)}_{\infty,I} & \le |\partial \varphi(Y_{t_0})-\partial \varphi(\widetilde Y_{t_0})|+ T^\gamma \norm{\partial \varphi(Y)-\partial \varphi(\widetilde Y)}_{\gamma,I} \\ & \le \norm{\varphi}_{2,\delta} |Y_{t_0}-\widetilde Y_{t_0}| + T^\gamma \norm{\varphi}_{2,\delta} \norm{Y-\widetilde Y}_{\gamma,I} \\ & \le T^\gamma \norm{\varphi}_{2,\delta} C_{X,I} \varepsilon_{Y,I} + 2 \norm{\varphi}_{2,\delta} C_{Y,I} \rho_I \end{split} \end{equation*} Next: \begin{equation*} \begin{split} \norm{R_Z-R_{\widetilde Z}}_{2\gamma,I} & \le \norm{\partial \varphi(Y)\mathbb{X}^2-\partial \varphi(\widetilde Y)\mathbb{\widetilde X}^2 }_{2\gamma,I} + \norm{Q_Z-Q_{\widetilde Z}}_{2\gamma,I} \\ & \le \norm{\partial \varphi(Y)-\partial
\varphi(\widetilde Y) }_{\infty,I} (\norm{\mathbb{X}^2}_{2\gamma,I}+\norm{\mathbb{\widetilde X}^2}_{2\gamma,I} ) \\ &\qquad + (\norm{\partial \varphi(Y)}_{\infty,I}+\norm{\partial \varphi(\widetilde Y)}_{\infty,I}) \norm{\mathbb{X}^2-\mathbb{\widetilde X}^2}_{2\gamma,I} \\ &\qquad+ T^{\delta\gamma} \norm{Q_Z-Q_{\widetilde Z}}_{(2+\delta)\gamma,I} \\ & \le K \norm{\varphi}_{2,\delta}(\rho_I C_X^2 C_Y^3 + \varepsilon_{Y,I} T^{\delta \gamma} C_X^3 C_Y^2) \end{split} \end{equation*} and \begin{equation*} \begin{split} \norm{\varphi(Y)&-\varphi(\widetilde Y)}_{\gamma,I} \le \norm{\partial \varphi(Y) \delta\! X - \partial \varphi(\widetilde Y)\delta\! \widetilde X}_{\gamma,I} + \norm{R_W-R_{\widetilde W}}_{\gamma,I} \\ & \le \norm{\partial \varphi(Y) - \partial \varphi(\widetilde Y)}_{\infty,I}(\norm{X}_{\gamma,I}+\norm{\widetilde X}_{\gamma,I}) \\ & \qquad + (\norm{\partial \varphi(Y) }_{\infty,I}+\norm{\partial \varphi(\widetilde Y) }_{\infty,I}) \norm{X-\widetilde X}_{\gamma,I}+ T^\gamma \norm{R_W-R_{\widetilde W}}_{2\gamma,I} \\ & \le (\norm{\varphi}_{2,\delta} |Y_{t_0}-\widetilde Y_{t_0}| + T^\gamma \norm{\varphi}_{2,\delta} C_{X,I} \varepsilon_{Y,I} + \norm{\varphi}_{2,\delta} C_{Y,I} \norm{X-\widetilde X}_{\gamma,I})(\norm{X}_{\gamma,I}+\norm{\widetilde X}_{\gamma,I}) \\ & \qquad +2 \norm{\varphi}_{2,\delta} \norm{X-\widetilde X}_{\gamma,I}+ T^\gamma \varepsilon_{W,I}^* \\ & \le K \norm{\varphi}_{2,\delta}(C_{X,I} C_{Y,I}^3 \rho_I + T^\gamma C_{X,I}^2 C_{Y,I}^2 \varepsilon_{Y,I}) \end{split} \end{equation*} Finally we have \begin{equation} \label{eq:zetabound00} \varepsilon_{Z,I} \le K \norm{\varphi}_{2,\delta}(C_{X,I}^2 C_{Y,I}^3 \rho_I + T^{\delta\gamma} C_{X,I}^3 C_{Y,I}^2 \varepsilon_{Y,I}). \end{equation} When $\varphi \neq \widetilde \varphi$ rewrite the difference $Z-\widetilde Z$ as \begin{equation*} Z_t-\widetilde Z_t = Y_{t_0} - \widetilde Y_{t_0} + \int_{t_0}^t [\varphi(Y)-\varphi(\widetilde Y)] dX + \int_{t_0}^t [\varphi(\widetilde Y)-\widetilde \varphi(\widetilde Y)] dX \end{equation*} The contribution to $\varepsilon_{Z,I}$ from the first integral is bounded by Eq.~(\ref{eq:zetabound00}) while the last integral can be bounded by $K \norm{\varphi-\widetilde\varphi}_{2,\delta} C_{X,I} C_{Y,I}^2$ (cf. Eq.~(\ref{eq:bound_on_gy})), giving the final result~(\ref{eq:zetabound-lemma}). \qed \subsection{Proof of Lemma~\ref{lemma:besov}} \label{sec:proof_besov} \proof Let $B(u,r) = \{ w \in T : |w-u| \le r \}$. Observe that by the monotonicity and convexity of $\psi$, for any pair of measurable sets $A,B \subset T$ we have \begin{equation} \label{eq:mean-estimate} \begin{split} \left|\int_{A\times B} R_{st} \frac{dt ds}{|A| |B|}\right| & \le p(d(A,B)/4) \psi^{-1}\left(\int_{A\times B} \psi\left(\frac{|R_{st}|}{p(d(t,s)/4)}\right) \frac{dt ds}{|A| |B|}\right) \\ & \qquad \le p(d(A,B)/4) \psi^{-1}\left(\frac{U}{|A| |B|}\right) \end{split} \end{equation} where $d(A,B) = \sup_{t \in A, s \in B} |t-s|$. Let \begin{equation*} \overline R(t,r_1,r_2) = \int_{B(t,r_1)} \frac{du}{| B(t,r_1)| }\int_{ B(t,r_2)} \frac{ dv}{| B(t,r_2)|} R_{uv} \end{equation*} Take $t,s \in T$, $a = |t-s|$, define the decreasing sequence of numbers $\lambda_n \downarrow 0$ as $\lambda_0 = a$, $\lambda_{n+1}$ such that $$ p( \lambda_{n}) = 2 p( \lambda_{n+1}) $$ then \begin{equation*} \begin{split} p((\lambda_n+\lambda_{n+1})/4) & \le p(\lambda_n) = 2 p(\lambda_{n+1}) \\ & = 4 p(\lambda_{n+1}) - 2 p(\lambda_{n+1}) \\ & = 4 [ p(\lambda_{n+1}) - p(\lambda_{n+2})].
\end{split} \end{equation*} Using eq.~(\ref{eq:mean-estimate}) and the fact that $|B(t,\lambda_i)| \ge \lambda_i$ for every $i \ge 0$ we have \begin{equation*} \begin{split} |\overline R(t,\lambda_{n+1},\lambda_{n})| & \le p((\lambda_{n} +\lambda_{n+1})/4) \psi^{-1}\left(\frac{U}{ \lambda_{n} \lambda_{n+1}}\right) \\ & \le 4 [ p(\lambda_{n+1}) - p(\lambda_{n+2})] \psi^{-1}\left(\frac{U}{ \lambda_{n} \lambda_{n+1}}\right) \\ & \le 4 \int_{\lambda_{n+2}}^{\lambda_{n+1}} \psi^{-1}\left(\frac{U}{ r^2}\right) dp(r). \end{split} \end{equation*} Take a sequence $\{ t_i \}_{i=0}^\infty$ of points in $T$ and note that, for every $n \ge 0$, $$ R_{t\,t_n} = R_{t\, t_{n+1}} + R_{t_{n+1}\, t_{n}} + (N R)_{t t_{n+1} t_n} $$ so that, by induction, $$ R_{t\, t_0} = R_{t\, t_{n+1}}+ \sum_{i=0}^{n} [R_{t_{i+1} t_i} + (N R)_{t\, t_{i+1} t_i} ]. $$ Average each $t_i$ over the ball $B(t,\lambda_i)$ and bound as follows \begin{equation} \label{eq:finite-rep} \overline R(t,0,\lambda_0) = \overline R(t,0,\lambda_{n+1}) + \sum_{i=0}^{n} \overline R(t,\lambda_{i+1},\lambda_{i}) + \sum_{i=0}^{n} \overline B(t,\lambda_{i+1},\lambda_{i}) \end{equation} where \begin{equation*} \overline B(t,\lambda_{i+1},\lambda_i) = \int_{B(t,\lambda_{i+1})} \frac{dv}{| B(t,\lambda_{i+1}) |} \int_{B(t,\lambda_i)} \frac{du}{| B(t,\lambda_i)|} N R_{tvu} \end{equation*} which, using~(\ref{eq:ext-bound-n}), can be majorized by \begin{equation*} \begin{split} | \overline B(t,\lambda_{i+1},\lambda_i)| & \le \psi^{-1}\left( \frac{C}{ \lambda_i^2}\right) p( \lambda_i/2) \le 4 \psi^{-1}\left( \frac{C}{ \lambda_i^2}\right) [p(\lambda_{i+1})-p(\lambda_{i+2})] \\ & \le 4 \int_{\lambda_{i+2}}^{\lambda_{i+1}} \psi^{-1}\left( \frac{C}{ r^2}\right) dp(r) \end{split} \end{equation*} Then, taking the limit as $n \to \infty$ in Eq.~(\ref{eq:finite-rep}), using the continuity of $R$ and that $R_{tt}=0$, we get \begin{equation} \label{eq:bound-from-t} \begin{split} |\overline R(t,0,\lambda_0)| & \le \sum_{i=0}^{\infty} 4 \int_{\lambda_{i+2}}^{\lambda_{i+1}} \psi^{-1}\left(\frac{U}{ r^2}\right) dp(r) + \sum_{i=0}^{\infty} 4 \int_{\lambda_{i+2}}^{\lambda_{i+1}} \psi^{-1}\left( \frac{C}{ r^2}\right) dp(r) \\ & \le 4 \int_0^{\lambda_1} \left[ \psi^{-1}\left(\frac{U}{ r^2}\right)+ \psi^{-1}\left(\frac{C}{ r^2}\right)\right] dp(r) \\ & \le 4 \int_0^{|t-s|} \left[ \psi^{-1}\left(\frac{U}{ r^2}\right)+ \psi^{-1}\left(\frac{C}{ r^2}\right)\right] dp(r) \end{split} \end{equation} and of course the analogous estimate \begin{equation} \label{eq:bound-from-s} |\overline R(s,0,\lambda_0)| \le 4 \int_0^{|t-s|} \left[ \psi^{-1}\left(\frac{U}{ r^2}\right)+ \psi^{-1}\left(\frac{C}{ r^2}\right)\right] dp(r) \end{equation} Moreover $$ R_{st} = R_{s u} + R_{uv} + R_{v t} + N R_{sut} + N R_{uvt} $$ so $$ |R_{st}| \le |R_{s u}| + |R_{v t}| + |R_{uv}| + \sup_{r \in [s,t]} |N R_{srt}| + \sup_{r \in [u,t]} |N R_{urt}| $$ by averaging $u$ over the ball $B(s,a)$ and $v$ over the ball $B(t,a)$ we get \begin{equation*} \int_{B(s,a)} \frac{du}{|B(s,a)|} \int_{B(t,a)} \frac{dv}{|B(t,a)|} |R_{uv}| \le p(3 a/4)\psi^{-1}\left(\frac{U}{4 a^2}\right) \le \int_0^{|t-s|} \psi^{-1}\left(\frac{U}{ r^2}\right) dp(r) \end{equation*} and \begin{equation*} \int_{B(s,a)} \frac{du}{|B(s,a)|} \sup_{r \in [u,t]} |N R_{urt}| \le p(a/2)\psi^{-1}\left(\frac{C}{ a^2}\right) \le \int_0^{|t-s|} \psi^{-1}\left(\frac{C}{ r^2}\right) dp(r) \end{equation*} Putting everything together we end up with \begin{equation*} |R_{st}| \le 10 \int_0^{|t-s|} \left[ \psi^{-1}\left(\frac{U}{ r^2}\right)+ \psi^{-1}\left(\frac{C}{
r^2}\right)\right] dp(r) \end{equation*} \qed \end{document}
\begin{document} \title{On logically cyclic groups} \author{\sc M. Shahryari} \thanks{} \address{ Department of Pure Mathematics, Faculty of Mathematical Sciences, University of Tabriz, Tabriz, Iran } \email{[email protected]} \date{\today} \begin{abstract} A group $G$ is called logically cyclic if it contains an element $s$ such that every element of $G$ can be defined by a first order formula with parameter $s$. The aim of this paper is to investigate the structure of such groups. \end{abstract} \maketitle {\bf AMS Subject Classification} Primary 20A15, Secondary 03C07 and 03C98.\\ {\bf Key Words} Definability; Elementary extensions; Logically cyclic groups; Divisible groups; Quantifier elimination. Let $\mathcal{L}=(\cdot, ^{-1}, 1)$ be the language of groups. Suppose $G$ is a group and $S\subseteq G$. We extend $\mathcal{L}$ to a new language $\mathcal{L}(S)$, by attaching new constant symbols $a_s$, for any $s\in S$. So we have $\mathcal{L}(S)=\mathcal{L}\cup\{ a_s: s\in S\}$. Then, $G$ becomes an $\mathcal{L}(S)$-structure if we let $s$ be the interpretation of $a_s$ in $G$. Now, suppose $g\in G$ is an arbitrary element and there exists a first order formula $\varphi(x)$ in the language $\mathcal{L}(S)$ (with a free variable $x$), such that $$ \{ g\}=\{ x\in G:\ G\vDash \varphi(x)\}. $$ Then we say that $g$ is {\em definable} by the elements of $S$ or $S$-{\em definable} for short. Let $\mathrm{def}_S(G)$ be the set of all $S$-definable elements of $G$. Clearly this subset is a subgroup. If we have $\mathrm{def}_S(G)=G$, then we say that $S$ {\em logically generates} $G$. A {\em logically cyclic} group is a group which is logically generated by a single element. A cyclic group is then clearly logically cyclic as every element can be defined as a power of the generator. The converse is not true: for example the additive group of rationals, $G=(\mathbb{Q}, +)$, is logically cyclic as every element $g=m/n$ can be defined by the formula $nx=ms$, which is clearly a first order formula having $s=1$ as a parameter. It is easy to see that all subgroups of $G=(\mathbb{Q}, +)$ are logically cyclic. In this paper, we prove that if a finite group is logically cyclic, then it is cyclic in the ordinary sense. We also determine all finitely generated logically cyclic groups as well as those logically cyclic groups which are torsion-free. \section{Preliminaries} We can use the definability theorem of Svenonius to study definable elements in groups. For the case of finite groups, a version of this theorem will be used which can be proved by an elementary argument. We briefly discuss this well-known result of model theory. Let $\mathcal{L}$ be a first order language and $M$ be a structure in $\mathcal{L}$. An $n$-ary relation $R\subseteq M^n$ is said to be definable if there exists a formula $\varphi(x_1, \ldots, x_n)$ in the language $\mathcal{L}$, such that $$ R=\{ (x_1, \ldots, x_n)\in M^n:\ M\vDash \varphi(x_1, \ldots, x_n)\}. $$ It is easy to see that if $R$ is definable, then every automorphism of $M$ preserves $R$. To see what can be said about the converse, we need the concept of {\em elementary extension}. For any $\mathcal{L}$-structure $M$, suppose $\mathrm{Th}(M)$ is the first order theory of $M$, i.e.\ the set of all first order sentences which are true in $M$. We say that an $\mathcal{L}$-structure $M^{\prime}$ is an elementary extension of $M$, if $M$ is a sub-structure of $M^{\prime}$ and for every formula $\varphi(x_1, \ldots, x_n)$ and all $a_1, \ldots, a_n\in M$ we have $M\vDash \varphi(a_1, \ldots, a_n)$ if and only if $M^{\prime}\vDash \varphi(a_1, \ldots, a_n)$; in particular $\mathrm{Th}(M)= \mathrm{Th}(M^{\prime})$.
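The following standard example, included here only as an illustration and not used in the sequel, shows why the passage to elementary extensions cannot be avoided for infinite structures. Consider the field $\mathbb{R}$ of real numbers (as a structure in the language of rings). Its only automorphism is the identity, so every element of $\mathbb{R}$ is trivially preserved by all automorphisms of $\mathbb{R}$ itself. On the other hand, by Tarski's quantifier elimination for real closed fields, every subset of $\mathbb{R}$ which is definable without parameters is a finite union of points and intervals with real algebraic endpoints, so a transcendental number such as $\pi$ is not definable. By the theorem below, this forces some elementary extension of $\mathbb{R}$ to admit an automorphism which moves $\pi$.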
We are now ready to review the theorem of Svenonius. \begin{theorem} A relation $R\subseteq M^n$ is definable if and only if every automorphism of every elementary extension of $M$ preserves $R$. \end{theorem} For a proof, the reader can see \cite{Poizat}. Suppose we want to use this theorem in the case of groups; we must assume that $G$ is a group, $S\subseteq G$ is an arbitrary subset, and $\mathcal{L}(S)$ is the extended language of groups with parameters from $S$. Clearly a singleton set $\{ g\}$ is a unary relation in $G$, so we can restate the above theorem for $S$-definability of elements in $G$. \begin{corollary} An element $g$ is $S$-definable in $G$ if and only if, for any elementary extension $G^{\prime}$ of $G$ and every automorphism $\alpha:G^{\prime}\to G^{\prime}$, if $\alpha$ fixes elements of $S$, then it also fixes $g$. \end{corollary} If $G$ is a finite group, then the only elementary extension of $G$ is $G$ itself, because there exists a first order sentence which says that $G$ has $m$ elements ($m$ is the order of $G$), so any elementary extension of $G$ must have order $m$. Hence, for the case of finite groups, we have \begin{corollary} An element $g$ is $S$-definable in a finite group $G$ if and only if, for any automorphism $\alpha:G\to G$, if $\alpha$ fixes elements of $S$, then it also fixes $g$. \end{corollary} Note that this very special case of Svenonius's theorem can be proved independently by an elementary argument. Here is the proof. \begin{proof} Let $g\in G$ be invariant under every automorphism which fixes elements of $S$. Let $$ G=\{ g_1, g_2, \ldots, g_n\} $$ be an enumeration of the elements of $G$ such that $g_1=1$, $\{g_2, \ldots, g_m\}=S$ and $g_{m+1}=g$. Consider the Cayley table of $G$, i.e. determine the unique numbers $\sigma(i,j)$ such that $g_ig_j=g_{\sigma(i,j)}$. Now we introduce a formula $\varphi(y)$ in the language $\mathcal{L}(S)$ as \begin{eqnarray*} \exists x_1, \ldots, x_n&:& (\bigwedge_{i\neq j}x_i\neq x_j)\wedge((\bigwedge_{i=2}^mx_i=g_i)\wedge x_1=1)\wedge (\bigwedge_{i,j}x_ix_j=x_{\sigma(i,j)})\\ &\ & \wedge(y=x_{m+1}). \end{eqnarray*} We show that $\varphi(y)$ defines $g$. Let $a\in G$ be an element such that $G\models \varphi(a)$. Therefore there exist distinct elements $b_1, \ldots, b_n\in G$ such that\\ 1- $b_1=1$ and $b_2=g_2, \ldots, b_m=g_m$. 2- $b_{m+1}=a$. 3- $b_ib_j=b_{\sigma(i,j)}$.\\ Now, the map $f:G\to G$ defined by $f(g_i)=b_i$ is an automorphism, and for all $g_i\in S$ we have $f(g_i)=b_i=g_i$. So, $f$ must preserve $g$. Hence, we have $g=f(g)=f(g_{m+1})=b_{m+1}=a$. This completes the proof. \end{proof} An automorphism $\alpha:G\to G$ is said to be an $S$-automorphism if it fixes $S$ elementwise. If we work in the semidirect product $\hat{G}=\mathrm{Aut}(G)\ltimes G$, then the set of all $S$-automorphisms of $G$ is just the centralizer $C_{\mathrm{Aut}(G)}(S)$. We will use this type of centralizer notation in the rest of the article. So, for an arbitrary group, we have $$ \mathrm{def}_S(G)\subseteq C_G(A), $$ where $A=C_{\mathrm{Aut}(G)}(S)$. If $G$ is finite, then by Corollary 1.3, we have the equality $$ \mathrm{def}_S(G)= C_G(A). $$ Suppose $G$ is logically cyclic. So $G=\mathrm{def}_s(G)$ for some $s$. This shows that $G=C_G(A)$; therefore we have the implication $$ \forall \alpha\in \mathrm{Aut}(G):\ \alpha(s)=s\Rightarrow \alpha=\mathrm{id}_G. $$ For finite groups, 1.3 implies that the converse is also true, i.e.
if there exists an element $s$ satisfying the above implication, then $G$ is logically cyclic. We prove that every logically cyclic group is abelian. Note that a similar argument shows that for any group $G$ and every element $s\in G$, the subgroup $\mathrm{def}_s(G)$ is also abelian (see also the discussion at the end of the next section). It is also possible to prove that, if a group $G$ is logically generated by a commuting set of elements $S$, then it is abelian. \begin{proposition} Every logically cyclic group is abelian. \end{proposition} \begin{proof} Let $G=\mathrm{def}_s(G)$. Then as above $$ \forall \alpha\in \mathrm{Aut}(G):\ \alpha(s)=s\Rightarrow \alpha=\mathrm{id}_G. $$ So, considering the inner automorphism $\alpha: x\mapsto sxs^{-1}$, we obtain $s\in Z(G)$. Now, let $g\in G$ be an arbitrary element and let $\beta: x\mapsto gxg^{-1}$. Since $[g, s]=1$, we have $\beta(s)=s$ and therefore $\beta=\mathrm{id}_G$. This shows that $g\in Z(G)$ and hence $G$ is abelian. \end{proof} One may ask this question: If a group $G$ is logically generated by a set $S$ and $\langle S\rangle$ is nilpotent, can we prove that $G$ is nilpotent? \\ Note that if every element of $G$ is definable in the language of groups, $\mathcal{L}$, then we must have $$ G=C_G(\mathrm{Aut}(G)), $$ and in this case we have $\mathrm{Aut}(G)=1$, which shows that $G=1$ or $\mathbb{Z}_2$. So, the groups $1$ and $\mathbb{Z}_2$ are the only groups in which every element is definable in the language of groups. As we saw, if a group $G$ is logically cyclic, then there exists an element $s\in G$ such that for every non-identity automorphism $\alpha$, we have $\alpha(s)\neq s$. In this case, if we consider the action of $\mathrm{Aut}(G)$ on $G$, then $$ |\mathrm{Orb}(s)|=|\mathrm{Aut}(G)|, $$ so, for logically cyclic groups, we have $$ |\mathrm{Aut}(G)|\leq |G|. $$ \section{The case of finite groups} We are now ready to prove one of our main theorems. \begin{theorem} Let $G$ be a finite logically cyclic group. Then $G$ is cyclic. \end{theorem} \begin{proof} As we said before, $G$ is abelian and so it is a direct product of abelian $p$-groups of the form $$ H_p=\mathbb{Z}_{p^{e_1}}\times \cdots\times\mathbb{Z}_{p^{e_t}}, $$ where $p$ is a prime (ranging in the set of all prime divisors of $|G|$) and $1\leq e_1\leq \cdots\leq e_t$ depend on $p$. Note that if a finite group $G=G_1\times G_2$ is logically cyclic, then both $G_1$ and $G_2$ are logically cyclic. This is because, if for example $G_1$ is not logically cyclic, then (by 1.3) for all $s_1\in G_1$, there exists a non-identity automorphism $\varphi_1\in \mathrm{Aut}(G_1)$, such that $\varphi_1(s_1)=s_1$. Hence for all $(s_1, s_2)\in G$, there exists the non-identity $(\varphi_1, \mathrm{id}_{G_2})\in \mathrm{Aut}(G)$, such that $$ (\varphi_1, \mathrm{id}_{G_2})(s_1, s_2)=(s_1, s_2), $$ and this violates the logical cyclicity of $G$. The converse is also true if the orders of $G_1$ and $G_2$ are co-prime, since in this case we have $$ \mathrm{Aut}(G)=\mathrm{Aut}(G_1)\times \mathrm{Aut}(G_2). $$ This argument shows that it is enough to assume that $G$ has the form $$ \mathbb{Z}_{p^{e_1}}\times \cdots\times\mathbb{Z}_{p^{e_t}}. $$ By \cite{Hillar}, the order of $\mathrm{Aut}(G)$ can be computed as follows. Let $$ d_i=\max\{ j:\ e_j=e_i\}, \ c_i=\min\{ j:\ e_j=e_i\}. $$ Then we have $$ |\mathrm{Aut}(G)|=\prod_{i=1}^t(p^{d_i}-p^{i-1})p^{e_i(t-d_i)+(e_i-1)(t-c_i+1)}. $$ Suppose $t\geq 2$.
Since $G$ is logically cyclic, by the above observation $A=\mathbb{Z}_{p^{e_1}}\times\mathbb{Z}_{p^{e_2}}$ is logically cyclic. We compute the order of $\mathrm{Aut}(A)$, using the above formula. Note that, in the case of the group $A$, we have $$ 1\leq d_1\leq 2,\ d_2=2,\ c_1=1,\ 1\leq c_2\leq 2. $$ So, we have $$ |\mathrm{Aut}(A)|=(p^{d_1}-1)(p-1)p^{4e_1+3e_2-d_1e_1-c_2e_2+c_2-4}. $$ Applying the requirement $|\mathrm{Aut}(A)|\leq |A|$, we obtain $$ (p^{d_1}-1)(p-1)p^{(3-d_1)e_1+(2-c_2)e_2+c_2}\leq p^4, $$ so we can consider some possibilities for $d_1$ and $c_2$.\\ 1- First, note that the case $d_1=1$ and $c_2=1$ is impossible.\\ 2- If $d_1=1$ and $c_2=2$, then we have $$ (p^1-1)(p-1)p^{3e_1+2e_2-e_1-2e_2+2}\leq p^4, $$ and hence $$ (p-1)^2p^{2e_1+2}\leq p^4. $$ Now the case $e_1>1$ is impossible and hence $e_1=1$. This shows that $p=2$ and hence $A=\mathbb{Z}_2\times \mathbb{Z}_{2^f}$ for some $f\geq 2$. It is easy to see that $|\mathrm{Aut}( \mathbb{Z}_2\times \mathbb{Z}_{2^f})|=2^{f+1}$ and therefore the whole group $A=\mathbb{Z}_2\times \mathbb{Z}_{2^f}$ must be the orbit of $s$ under the action of its automorphism group, which is not the case. So, we get a contradiction.\\ 3- Let $d_1=2$ and $c_2=1$. This shows that $e_1=e_2$, and hence $$ (p^2-1)(p-1)p^{2e_1+1}\leq p^4. $$ Again, the case $e_1>1$ is impossible and the case $e_1=1$ implies $(p^2-1)(p-1)\leq p$ which is a contradiction.\\ 4- Finally, note that the case $d_1=2$ and $c_2=2$ is also impossible.\\ This argument shows that in all cases $t\geq 2$ is impossible. So $G$ is a direct product of cyclic groups of co-prime orders and hence it is cyclic. \end{proof} A caution is necessary here: the subgroup $\mathrm{def}_s(G)$ depends strongly on $G$. If we are not careful about this dependence, we may obtain wrong conclusions. As an example, let $H=\mathrm{def}_s(G)$. Since every element of $H$ is definable by the parameter $s$, one may conclude that $H$ is logically cyclic. In other words, one might conclude from 2.1 that, for any finite group $G$ and any $s\in G$, the group $C_G(C_{\mathrm{Aut}(G)}(s))$ is cyclic. This is not true, since for example, if we let $G$ be a $p$-group of class 2 with an odd $p$, and if we assume that $\mathrm{Aut}(G)$ is also a $p$-group such that $\Omega_1(G)$ is not included in the center, then we can choose $s$ to be a non-central element of order $p$ and $u\in C_{\mathrm{Aut}(G)}(G)\cap Z(G)$ with order $p$. Now, it is easy to see that $\langle s, u\rangle\subseteq C_G(C_{\mathrm{Aut}(G)}(s))$, and so this subgroup is not cyclic. Note that for the case $p=2$, the dihedral group of order 8 is also a counterexample. These counterexamples show that in general $\mathrm{def}_s(G)$ is not logically cyclic. To see the reason, note that if $H\leq G$ and $s\in H$, then there is no trivial relation between $\mathrm{def}_s(G)$ and $\mathrm{def}_s(H)$. If $g\in \mathrm{def}_s(H)$, and $\varphi(x)$ is a formula defining $g$ in $H$, then we may have $$ |\{ a\in G:\ G\vDash \varphi(a)\}|>1, $$ or even, we may have $G\vDash \neg \varphi(g)$. This shows that in general $\mathrm{def}_s(H)$ is not contained in $\mathrm{def}_s(G)$. On the other hand, if $g\in H\cap \mathrm{def}_s(G)$, then we may not have $g\in \mathrm{def}_s(H)$, by a similar argument. Hence, despite being abelian, the subgroup $\mathrm{def}_s(G)$ does not behave so simply. In some cases, the subgroup $\mathrm{def}_s(G)$ is also logically cyclic, for example if $G$ is a divisible abelian group.
To see this, one may use the quantifier elimination property of divisible abelian groups and an argument similar to the one in the proof of Theorem 3.3 below. \section{The case of infinite groups} In this section, we will determine the structure of logically cyclic groups for the following cases:\\ 1- Finitely generated groups, 2- Divisible groups, 3- Torsion-free groups.\\ Suppose $G=\mathrm{def}_s(G)$ is a logically cyclic group and $H=\langle s\rangle$. Then as we saw in the introduction, the subgroup $H$ is an $\mathrm{Aut}$-basis of $G$ in the sense of \cite{Gio} and \cite{Gio2}. This means that every automorphism of $G$ is uniquely determined by its action on $H$. So, as is proved in \cite{Gio}, we have $$ \mathrm{Hom}(G/H, H)=0. $$ Applying results of \cite{Gio2} concerning finite $\mathrm{Aut}$-bases, we can collect the following facts about the group $G$.\\ 1- If $G$ is periodic, then it is finite, so by the previous section it is cyclic. 2- If $G$ is periodic-by-finitely generated, then it is finitely generated. 3- $\mathrm{Aut}(G)$ is countable. Note that this also follows from the inequality $|\mathrm{Aut}(G)|\leq |G|$, which is valid for all logically cyclic groups. 4- If the order of $s$ is finite then $G$ is finite and so it is cyclic. 5- The quasi-cyclic groups $\mathbb{Z}_{p^{\infty}}$ are not logically cyclic, and neither is the additive group $\mathbb{Q}/\mathbb{Z}$. We first determine the structure of all finitely generated logically cyclic groups. \begin{proposition} Let $G$ be a logically cyclic finitely generated group. Then $G$ is cyclic or $G=\mathbb{Z}\times \mathbb{Z}_2$. \end{proposition} \begin{proof} We have $G=\mathbb{Z}^n\oplus A$ for some finite group $A$. It is easy to apply Corollary 1.3 to see that $A$ is also logically cyclic, so by the previous section $A=\mathbb{Z}_m$, for some $m$. Now, suppose $n>1$ and $H=\langle s\rangle$. Since $s$ has infinite order by fact 4, the quotient $G/H$ has torsion-free rank $n-1\geq 1$ and hence there exists a non-zero homomorphism $G/H \to H$, contradicting the fact that $\mathrm{Hom}(G/H, H)=0$. Therefore $n\leq 1$. Now, suppose that $G$ is not cyclic. We show that $m=2$. Note that the group $G=\mathbb{Z}\times \mathbb{Z}_2$ is actually a logically cyclic group. To see this, we show that $G$ is logically generated by $s=(1,0)$. Clearly all elements of the form $(u,0)$ can be defined by $x=us$, so consider an element of the form $(u,1)$. Since we have $(u,1)=us+(0,1)$ and $(0,1)$ is the only element of order 2 in the whole group, we can define $(u,1)$ by the formula $$ \forall y\ ((2y=0 \wedge y\neq 0) \Rightarrow x=us+y). $$ Suppose now that $m\geq 3$ and $s=(u,v)$ is a logical generator of $G$. Note that $s$ cannot be of the form $(u,0)$, because in this case we can fix a non-identity automorphism $\alpha\in \mathrm{Aut}(\mathbb{Z}_m)$ (as we let $m\geq 3$) and then the non-identity automorphism $(\mathrm{id}, \alpha)$ will fix $s$, which is impossible. Also it is impossible to have $s=(0,v)$, since in this case $o(s)$ is finite and this implies that $G$ is also finite by fact 4 above. Recall that every endomorphism $f:G\to G$ can be represented as a matrix $$ M=\left [ \begin{array}{cc} \lambda_n& 0\\ \gamma_b& \eta_a \end{array} \right ], $$ where $\lambda_n:\mathbb{Z}\to \mathbb{Z}$ is defined by $\lambda_n(x)=nx$, $\gamma_b:\mathbb{Z}\to \mathbb{Z}_m$ is defined by $\gamma_b(x)=bx\ \ (\mathrm{mod}\ m)$, and $\eta_a:\mathbb{Z}_m\to \mathbb{Z}_m$ is defined by $\eta_a(x)=ax\ \ (\mathrm{mod}\ m)$.
We know that this is an automorphism iff the matrix $M$ is invertible, which happens exactly when $n=\pm 1$ and $(a, m)=1$. Note also that such an $M$ represents the identity iff $n=1$, $a=1$ and $m$ divides $b$. We first investigate the case when $m$ and $u$ are co-prime. Choose $a\neq 1$ co-prime to $m$ (this is possible as we assumed that $m\geq 3$). Then there is an integer $b$ such that $$ bu+(a-1)v \equiv 0\ \ (\mathrm{mod}\ m). $$ So, consider the automorphism $$ M=\left [ \begin{array}{cc} \mathrm{id}& 0\\ \gamma_b& \eta_a \end{array} \right ]. $$ We have $$ Ms=\left [ \begin{array}{cc} \mathrm{id}& 0\\ \gamma_b& \eta_a \end{array} \right ] \left [ \begin{array}{c} u\\ v \end{array} \right ] = \left [ \begin{array}{c} u\\ bu+av \end{array} \right ] = \left [ \begin{array}{c} u\\ v \end{array} \right ]=s. $$ This shows that $s$ cannot be a logical generator of $G$. So, we have $d=(m, u)>1$. Now, put $a=1$ and $b=m/d$ and consider again the automorphism $M$. This is a non-identity automorphism as $m$ does not divide $m/d$. It is now easy to see that $Ms=s$, a contradiction. \end{proof} We can use a similar argument to show that if $G$ is a torsion-free logically cyclic group, then so is $G\times \mathbb{Z}_2$. To see this, first we show that $\mathbb{Q}\times \mathbb{Z}_2$ is logically generated by $s=(1,0)$. Clearly every element of the form $(m/n, 0)$ can be defined by $nx=ms$, so consider the element $(m/n, 1)$. This element can be defined by $$ \forall y \forall z \ ((2y=0 \wedge y\neq 0 \wedge nz=ms) \Rightarrow x=y+z). $$ Now, we can apply this observation and Theorem 3.3 below to prove the general case.\\ The next proposition shows that the additive group of rationals is the only divisible logically cyclic group. \begin{proposition} Let $G$ be a non-trivial divisible logically cyclic group. Then $G=(\mathbb{Q}, +)$. \end{proposition} \begin{proof} By a well-known theorem on divisible abelian groups, $G=\mathbb{Q}^I\oplus \sum_{p\in J}\oplus\mathbb{Z}_{p^{\infty}}$, where $I$ is a set and $J$ is a set of primes. Let $J\neq \emptyset$. Then for some prime $p$ the quasi-cyclic group $\mathbb{Z}_{p^{\infty}}$ is a direct summand of $G$ and hence $\mathrm{Aut}(\mathbb{Z}_{p^{\infty}})$ embeds in $\mathrm{Aut}(G)$. But by \cite{Gio2}, the cardinality of the automorphism group of the quasi-cyclic group is uncountable, while $|\mathrm{Aut}(G)|\leq |G|$ is countable (note that a logically cyclic group is countable, since there are only countably many first order formulas over a single parameter). Therefore, $J=\emptyset$ and so $G= \mathbb{Q}^I$. Let $|I|>1$. Then for any $0\neq s\in \mathbb{Q}^I$, we can construct a non-trivial automorphism $\alpha\in \mathrm{GL}_I(\mathbb{Q})$ such that $\alpha(s)=s$. But this contradicts the assumption of logical cyclicity of $G$. \end{proof} Finally, we give a characterization of logically cyclic torsion-free groups. In the proof, we use the well-known quantifier elimination property of divisible groups, which says that in such a group, every first order formula is equivalent to a quantifier-free one. \begin{theorem} Let $G=\mathrm{def}_s(G)$ be a torsion-free logically cyclic group. Then $G$ embeds in $(\mathbb{Q}, +)$. \end{theorem} \begin{proof} Let $G^{\ast}$ be the divisible envelope of $G$, so $G$ is an essential subgroup of $G^{\ast}$, i.e. for any $0\neq u\in G^{\ast}$ the intersection $G\cap \langle u\rangle$ is non-trivial. We prove that $G^{\ast}$ is also logically cyclic and $s$ is its logical generator. Suppose $0\neq u\in G^{\ast}$. There is a non-zero integer $m$ such that $0\neq mu=v\in G$. Let $\varphi(x)$ be a formula which defines $v$ in $G$.
Since $G^{\ast}$ is divisible, it has the quantifier elimination property. Hence in $G^{\ast}$, the formula $\varphi(x)$ is equivalent to a quantifier-free formula $\psi(x, {\bf a})$, where ${\bf a}$ is a set of elements of $G^{\ast}$. Note that $\psi(x, {\bf a})$ is a Boolean combination of atomic formulae of the form $m_{ij}x=a_{ij}$ with $m_{ij}\in \mathbb{Z}$ and $a_{ij}\in G^{\ast}$, so $$ \psi(x, {\bf a})\equiv \bigvee_{i=1}^p\bigwedge_{j=1}^{q_i}(m_{ij}x=a_{ij})^{\pm}, $$ where $\pm$ indicates an atomic formula or a negation of an atomic formula. Since $v$ is a solution of $\psi(x, {\bf a})$, there is an index $i$ such that we have $$ \bigwedge_{j=1}^{q_i}(m_{ij}v=a_{ij})^{\pm}. $$ If all conjuncts in this formula are negated, then there will be infinitely many solutions for it in $G$, which is not the case. So, there is a $j$ such that $m_{ij}v=a_{ij}$. Since $G$ is torsion-free, we conclude that $v$ is defined by $m_{ij}x=a_{ij}$ in $G^{\ast}$, i.e. $$ \varphi(x)\equiv (m_{ij}x=a_{ij}). $$ Now, consider the following formula, with free variable $y$, in the language of groups with parameter $s$: $$ \forall x\ (\varphi(x)\Rightarrow my=x). $$ Clearly, this formula defines $u$ in $G^{\ast}$, and so $G^{\ast}$ is logically cyclic. By the previous proposition, $G^{\ast}=\mathbb{Q}$, and the proof is complete. \end{proof} One more problem remains unsolved: the classification of logically cyclic algebraic structures other than groups. An algebra $A$ in an algebraic language $\mathcal{L}$ is said to be cyclic if it is generated by a single element. It is called logically cyclic if every element of $A$ can be defined by a first order formula containing a fixed parameter from $A$. If $A$ is finite, then clearly $A$ is the only elementary extension of itself. So an element $a\in A$ is definable using a parameter $s$ if and only if every automorphism of $A$ which fixes $s$ also fixes $a$. What is the relation between cyclic and logically cyclic algebras? This may require further effort, because in general we have little information about $\mathrm{Aut}(A)$.\\ {\bf Acknowledgement} The author would like to thank J. S. Eyvazloo, G. Robinson and K. Bou-Rabee for their comments and suggestions. \end{document}
\begin{document}
\title{A linear condition for non-very generic discriminantal arrangements}
\begin{abstract}
The discriminantal arrangement is the space of configurations of $n$ hyperplanes in generic position in a $k$-dimensional space (see \cite{MS}). Unlike the case $k=1$, in which it corresponds to the well-known braid arrangement, the discriminantal arrangement in the case $k>1$ has a combinatorics which depends on the choice of the original $n$ hyperplanes. It is known that this combinatorics is constant on an open Zariski set $\Zt$, but assessing whether or not $n$ fixed hyperplanes in generic position belong to $\Zt$ has proved to be a nontrivial problem. Even simply providing examples of configurations not in $\Zt$ is still a difficult task. In this paper, building on a recent result in \cite{SSc}, we define a \textit{weak linear independence} condition among sets of vectors which, if imposed, allows one to build configurations of hyperplanes not in $\Zt$. We provide $3$ examples.
\end{abstract}
\section{Introduction}
In 1989 Manin and Schechtman (\cite{MS}) introduced the {\it discriminantal arrangement} ${\mathcal B}(n,k,{\mathcal A}^0)$, whose hyperplanes consist of the non-generic parallel translates of the generic arrangement ${\mathcal A}^0$ of $n$ hyperplanes in ${\NZQ C}^k$. ${\mathcal B}(n,k,{\mathcal A}^0)$ is a generalization of the braid arrangement (\cite{OT}), with which ${\mathcal B}(n,1)={\mathcal B}(n,1,{\mathcal A}^0)$ coincides. \\
These arrangements, which have several beautiful relations with diverse problems (see, for instance, Kapranov-Voevodsky \cite{KV3}, \cite{KV1}, \cite{KV2}, the vanishing of cohomology of bundles on toric varieties \cite{Per}, the representations of higher braid groups \cite{Koh}), proved to be not only very interesting, but also quite tricky. Indeed, unlike the braid arrangement ${\mathcal B}(n,1)$, their combinatorics depends on the original arrangement ${\mathcal A}^0$ when $k>1$. This generated a few misunderstandings over the years.\\
Manin and Schechtman introduced the discriminantal arrangement in order to model higher Bruhat orders. To do so, they only used the combinatorics of ${\mathcal B}(n,k,{\mathcal A}^0)$ when ${\mathcal A}^0$ varies in an open Zariski set $\Zt$. In 1991 Ziegler showed (see Theorem 4.1 in \cite{Zie}) that their description was not complete and that a slightly different construction was needed. In particular, in \cite{FZ} Felsner and Ziegler showed that, in the representable case, what was missing in Manin and Schechtman's construction was the part coming from the case ${\mathcal A}^0 \notin \Zt$.\\
As pointed out by Falk in 1994 (\cite{Falk}), the construction in \cite{MS} led to the misunderstanding that the intersection lattice of ${\mathcal B}(n,k,{\mathcal A}^0)$ was independent of the arrangement ${\mathcal A}^0$ (see, for instance, \cite{Orl}, \cite{OT}, or \cite{Law}). In \cite{Falk} Falk shows that this was not the case, providing an example of a combinatorics of ${\mathcal B}(6,3,{\mathcal A}^0)$ different from the one presented by Manin and Schechtman. Nevertheless, Falk too was not aware that such an example already existed in \cite{Crapo}.\\
Actually, Crapo had already introduced the discriminantal arrangement in 1985 under the name of \textit{geometry of circuits} (see \cite{Crapo}). In a more combinatorial setting, he defined it as the matroid $M(n,k, \mathcal{C})$ of circuits of the configuration $\mathcal{C}$ of $n$ generic points in ${\NZQ R}^k$.
The circuits of the matroid $M(n,k, \mathcal{C})$ are the hyperplanes of ${\mathcal B}(n,k,{\mathcal A}^0)$, when ${\mathcal A}^0$ is the arrangement of the hyperplanes in ${\NZQ R}^k$ orthogonal to the vectors joining the origin with the $n$ points in $\mathcal{C}$ (for further developments see \cite{CR}). In that paper, Crapo also provides an example of an arrangement ${\mathcal A}^0$ of $6$ lines in the real plane for which the combinatorics of ${\mathcal B}(6,2,{\mathcal A}^0)$ in rank $3$ is different from the one described in \cite{MS}. \\
In 1997 Bayer and Brandt (see \cite{BB}) cast light on this difference, calling \textit{very generic} the arrangements ${\mathcal A}^0 \in \Zt$ (the ones simply called generic in \cite{MS}) and non-very generic the others. They conjectured a description of the intersection lattice of ${\mathcal B}(n,k,{\mathcal A}^0)$, ${\mathcal A}^0 \in \Zt$, subsequently proved by Athanasiadis in 1999 in \cite{Atha} (as far as is known to the authors, this is the first paper in the literature on discriminantal arrangements which contains a reference to Crapo's paper \cite{Crapo}). It is worth noticing that Bayer and Brandt mentioned in \cite{BB} that, in their opinion, a candidate for a very generic arrangement could have been the cyclic arrangement. In Subsection \ref{subse:simple}, by means of Theorem 4.1 in \cite{SSY1}, we show that this is not the case, since there are cyclic arrangements which are non-very generic.\\
Moreover, even though Athanasiadis proved that the combinatorics of ${\mathcal B}(n,k,{\mathcal A}^0)$, ${\mathcal A}^0 \in \Zt$, can be described by suitably defined sets of subsets of $\{1,\ldots,n\}$, the converse is not true in general. In Section \ref{sec:example}, by means of our main result, we provide two examples of non-very generic arrangements ${\mathcal A}^0$ for which the combinatorics of ${\mathcal B}(n,k,{\mathcal A}^0)$ satisfies the Bayer-Brandt-Athanasiadis numerical properties (see Examples \ref{ex:MS(16,11)} and \ref{ex:MS(10,3)}). We point out that providing such examples is a nontrivial result since, until very recently, Crapo's 1985 example and Falk's 1994 example were, essentially, the only known examples of non-very generic arrangements.\\
Finally, in order to better understand the difficulty behind this problem, we remark that even with the Bayer-Brandt-Athanasiadis description, the enumerative problem of finding the characteristic polynomial of the discriminantal arrangement in the very generic case is nontrivial (see, for instance, \cite{NT}).\\
Recently, following a result in \cite{LS} which completely describes the rank $2$ combinatorics of ${\mathcal B}(n,k,{\mathcal A}^0)$ for any ${\mathcal A}^0$, the authors tried to better understand the combinatorics of the discriminantal arrangement (see \cite{SSY1}, \cite{SSY2}, \cite{SSc}) and its connection with special configurations of points in the space (see \cite{DPS}, \cite{SS}). In particular, in \cite{SSc} the authors generalize the dependency condition given in \cite{LS}, providing a sufficient condition for the existence, in rank $r > 2$, of non-very generic intersections, i.e.
intersections which do not appear in ${\mathcal B}(n,k,{\mathcal A}^0)$, ${\mathcal A}^0 \in \Zt$.\\
In this paper, building on the construction in \cite{SSc}, we introduce the notion of \textit{weak linear independence} among sets of vectors, proving that if there are vectors in the hyperplanes of ${\mathcal A}^0$ which form weakly linearly independent sets, then ${\mathcal A}^0$ is non-very generic. This result allows one to build several examples of non-very generic arrangements in higher dimension (see also \cite{So}).\\
The content of the paper is as follows. In Section \ref{sec:prelim}, we recall the definition of the discriminantal arrangement and, following \cite{SSc}, the definitions of \textit{simple} intersection, $K_{\NZQ T}$-translated and $K_{\NZQ T}$-configuration. In Section \ref{sec:vect}, we introduce the notions of $K_{\NZQ T}$-vector sets and weak linear independence, and we prove our main result, Theorem \ref{thm:main2}. In the last section we provide three examples of non-very generic arrangements obtained by imposing the condition stated in Theorem \ref{thm:main2}.
\section{Preliminaries}\label{sec:prelim}
\subsection{Discriminantal arrangement}
Let ${\mathcal A}^0 = \{ H_1^0, \dots, H_n^0 \}$ be a central arrangement in ${\NZQ C}^k$, $k<n$, such that any $m$ hyperplanes, $m \leq k$, intersect in codimension $m$. We will call such an arrangement a \textit{central generic\footnote{Here we use the word \textit{generic} to stress that ${\mathcal A}^0$ admits a translate which is a generic arrangement.} arrangement}. The space ${\NZQ S}[{\mathcal A}^0]$ (or simply ${\NZQ S}$ when the dependence on ${\mathcal A}^0$ is clear or not essential) will denote the space of parallel translates of ${\mathcal A}^0$, that is, the space of the arrangements ${\mathcal A}^t= \{ H_1^{x_1}, \dots, H_n^{x_n} \}$, $t = (x_1, \dots, x_n) \in {\NZQ C}^n$, $H_i^{x_i} = H_i^0 + \alpha_i x_i$, $\alpha_i$ a vector normal to $H_i^0$. There is a natural identification of ${\NZQ S}$ with the $n$-dimensional affine space ${\NZQ C}^n$ such that the arrangement ${\mathcal A}^0$ corresponds to the origin. In particular, an ordering of the hyperplanes in ${\mathcal A}^0$ determines the coordinate system in ${\NZQ S}$ (see \cite{LS}). \\
The closed subset of ${\NZQ S}$ formed by the translates of ${\mathcal A}^0$ which fail to form a generic arrangement is a union of hyperplanes $D_L \subset {\NZQ S}$ (see \cite{MS}). Each hyperplane $D_L$ corresponds to a subset $L = \{ i_1, \dots, i_{k+1} \} \subset [n] \coloneqq \{ 1, \dots, n \}$ and it consists of the $n$-tuples of translates of the hyperplanes $H_1^0, \dots, H_n^0$ in which the translates of $H_{i_1}^0, \dots, H_{i_{k+1}}^0$ fail to form a generic arrangement. The arrangement ${\mathcal B}(n, k, {\mathcal A})$ of the hyperplanes $D_L$ is called the \textit{discriminantal arrangement}; it was introduced by Manin and Schechtman in \cite{MS}.\\
It is well known (see, among others, \cite{Crapo}, \cite{MS}) that there exists an open Zariski set $\mathcal{Z}$ in the space of (central) generic arrangements of $n$ hyperplanes in ${\NZQ C}^k$, such that the intersection lattice of the discriminantal arrangement $\mathcal{B}(n,k,{\mathcal A})$ is independent of the choice of the arrangement ${\mathcal A} \in \mathcal{Z}$.
Following Bayer and Brandt \cite{BB}, we will call the arrangements ${\mathcal A} \in \mathcal{Z}$ \textit{very generic} and the others \textit{non-very generic}.
\subsection{Simple intersections}\label{subse:simple}
According to \cite{SSc}, we call an element $X$ of the intersection lattice of the discriminantal arrangement $\mathcal{B}(n,k,{\mathcal A})$ a \textbf{simple} intersection if
$$X=\bigcap_{i=1}^r D_{L_i}, \ |L_i| = k+1, \mbox{ and } \bigcap_{i \in I}D_{L_i} \neq D_S, \ \mid S \mid > k+1, \mbox{ for any } I \subset [r], \mid I \mid \geq 2 \quad .$$
We call the \textit{multiplicity} of the simple intersection $X$ the number $r$ of hyperplanes intersecting in $X$. In \cite{SSc} the authors proved that the following proposition holds.
\begin{prop}\label{pro:main}
If the intersection lattice of the discriminantal arrangement $\mathcal{B}(n,k,{\mathcal A})$ contains a simple intersection of rank strictly less than its multiplicity, then ${\mathcal A}$ is non-very generic.
\end{prop}
\noindent It is nontrivial to assess whether or not an arrangement is very generic. For example, in \cite{BB} Bayer and Brandt guessed that the cyclic arrangement could have been a good candidate to build very generic arrangements. We can now show that this is not true in general. For instance, the cyclic arrangement ${\mathcal A}^0$ in ${\NZQ R}^3$ with hyperplanes normal to the vectors $\alpha_i=(1,t_i,t_i^2)$, $(t_1,t_2,t_3,t_4,t_5,t_6)=(1,-1,a,-a,b,-b)$, $a,b\neq 1$, $a\neq b$, is generic but non-very generic. Indeed the vectors $\alpha_1 \times \alpha_2$, $\alpha_3 \times \alpha_4$ and $\alpha_5 \times \alpha_6$ are linearly dependent (a direct computation gives $\alpha_1 \times \alpha_2=(2,0,-2)$, $\alpha_3 \times \alpha_4=(2a^3,0,-2a)$ and $\alpha_5 \times \alpha_6=(2b^3,0,-2b)$, three vectors lying in the plane $x_2=0$), and by Theorem 4.1 in \cite{SSY1} this is equivalent to the intersection $X=D_{\{1,2,3,4\}}\cap D_{\{1,2,5,6\}} \cap D_{\{3,4,5,6\}}$ being a simple intersection of multiplicity $3$ in rank $2$. By Proposition \ref{pro:main}, we get that ${\mathcal A}^0$ is non-very generic.
\subsection{$\mathbf{K_{{\NZQ T}}}$-translated and $\mathbf{K_{{\NZQ T}}}$-configurations}\label{sub:KT}
Fix a set ${\NZQ T} = \{ L_1, \dots, L_r \}$ of subsets $L_i \subset [n]$, $|L_i| = k+1$. For any arrangement ${\mathcal A}=\{H_1,\ldots, H_n\}$ which is a translate of ${\mathcal A}^0$, we will denote $P_i = \bigcap_{p \in L_i} H_p$ and $H_{i,j} = \bigcap_{p \in L_i \cap L_j} H_p$. Notice that $P_i$ is a point if and only if ${\mathcal A} \in D_{L_i}$; it is empty otherwise. Following \cite{SSc}, we will call the set ${\NZQ T}$ an $r$-\textbf{set} if the conditions
\begin{equation}\label{eq:proper1}
\bigcup_{i=1}^r L_i = \bigcup_{i \in I} L_i \quad \mbox{ and} \quad L_i \cap L_j \neq \emptyset
\end{equation}
are satisfied for any subset $I \subset [r]$, $\mid I \mid=r-1$, and any two indices $1 \leq i < j \leq r$.\\
Given an $r$-set ${\NZQ T}$, the authors in \cite{SSc} defined:\\
\paragraph{$\mathbf{K_{{\NZQ T}}}$-\textbf{translated}} A translate ${\mathcal A}=\{H_1, \ldots, H_n\}$ of ${\mathcal A}^0$ will be called $\mathbf{K_{{\NZQ T}}}$, or $\mathbf{K_{{\NZQ T}}}$-translated, if each point $P_i=\bigcap_{p \in L_i} H_p \neq \emptyset$ is the intersection of exactly the $k+1$ hyperplanes indexed in $L_i$, for any $L_i \in {\NZQ T}$.
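\noindent The two conditions in (\ref{eq:proper1}) are purely combinatorial and easy to check mechanically. The following short script, a sketch added here only for illustration (the helper \texttt{is\_r\_set} is our naming and is not part of the original text), verifies them for the $4$-set used in Example \ref{ex:MS(12,8)} below.
\begin{verbatim}
from itertools import combinations

def is_r_set(T):
    # r-set conditions: every two members intersect, and dropping any
    # single member does not change the union of the family
    T = [set(L) for L in T]
    r = len(T)
    full = set().union(*T)
    pairs_ok = all(T[i] & T[j] for i, j in combinations(range(r), 2))
    unions_ok = all(set().union(*(T[:i] + T[i + 1:])) == full
                    for i in range(r))
    return pairs_ok and unions_ok

# the 4-set of Example B(12,8,A^0): L_i = [12] \ K_i with disjoint triples K_i
K = [{10, 11, 12}, {7, 8, 9}, {4, 5, 6}, {1, 2, 3}]
T = [set(range(1, 13)) - Ki for Ki in K]
print(is_r_set(T))   # True
\end{verbatim}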
\paragraph{$\mathbf{K_{{\NZQ T}}}$-\textbf{configuration} $K_{\NZQ T}({\mathcal A})$} Given a $\mathbf{K_{{\NZQ T}}}$-translated arrangement ${\mathcal A}$, the complete graph having the points $P_i$ as vertices and the vectors $P_iP_j$ as edges will be called the \textbf{$K_{{\NZQ T}}$-configuration} and denoted by $K_{\NZQ T}({\mathcal A})$. \\
\section{An algebraic condition for non-very genericity}\label{sec:vect}
The discriminantal arrangement ${\mathcal B}(n,k,{\mathcal A}^0)$ is a non-essential arrangement with center $D_{[n]}=\bigcap_{L\subset [n], \mid L \mid=k+1}D_L \simeq {\NZQ C}^k$. The center is formed by all translates ${\mathcal A}^t$ of ${\mathcal A}^0$ which are central arrangements. If we consider its essentialization $ess({\mathcal B}(n,k, {\mathcal A}^0))$ in ${\NZQ C}^{n-k} \simeq {\NZQ S} / D_{[n]}$, an element ${\mathcal A}^t \in ess({\mathcal B}(n,k, {\mathcal A}^0))$ corresponds uniquely to a translation $t \in {\NZQ C}^n/ C \simeq {\NZQ C}^{n-k}$, $C=\{t \in {\NZQ C}^n \mid {\mathcal A}^t \mbox{ is central} \}$. The following proposition arises naturally.
\begin{prop}\label{defi:indepK_TT}
Let ${\mathcal A}^0$ be a generic central arrangement of $n$ hyperplanes in ${\NZQ C}^k$. The translates ${\mathcal A}^{t_1}, \dots, {\mathcal A}^{t_d}$ of ${\mathcal A}^0$ are linearly independent vectors in ${\NZQ S} / D_{[n]}\simeq {\NZQ C}^{n-k}$ if and only if $t_1,\ldots, t_d$ are linearly independent vectors in ${\NZQ C}^n/C$.
\end{prop}
\noindent Given $K_{{\NZQ T}}$-translated arrangements ${\mathcal A}^{t_1}, \dots, {\mathcal A}^{t_d}$ of ${\mathcal A}^0$, we will say that the $K_{\NZQ T}$-configurations $K_{\NZQ T}({\mathcal A}^{t_i})$ are independent if ${\mathcal A}^{t_i}$, $i=1,\dots, d$, are.
\begin{center}
\begin{figure}[h]
\begin{tikzpicture}
\coordinate [label=above:$P_i$] (0) at (0,3); \coordinate [label=above:$P_{i+1}$] (1) at ({-3/sqrt(2)},{3/sqrt(2)}); \coordinate [label=left:$P_{i+2}$] (2) at (-3,0); \coordinate [label=below:$\dots$] (3) at (-3,-1); \coordinate [label=left:$P_{j-1}$] (4) at ({-3/sqrt(2)},{-3/sqrt(2)}); \coordinate [label=below:$P_j$] (5) at (0,-3); \coordinate [label=below:$P_{j+1}$] (6) at ({3/sqrt(2)},{-3/sqrt(2)}); \coordinate [label=below:$\dots$] (7) at (3,-1); \coordinate [label=below:$P_{i-2}$] (8) at (3,0); \coordinate [label=above:$P_{i-1}$] (9) at ({3/sqrt(2)},{3/sqrt(2)});
\begin{scope}
\draw[-latex] (0) -- node[above] {$v_{i,i+1}$} (1); \draw[-latex] (0) -- node[above] {$v_{i,i+2}$} (2); \draw[-latex] (0) -- node[left] {$v_{i,j-1}$} (4); \draw[-latex] (0) -- node[right] {$v_{i,j+1}$} (6); \draw[-latex] (0) -- node[above] {$v_{i,i-2}$} (8); \draw[-latex] (1) -- node[left] {$v_{i+1,i+2}$} (2); \draw[-latex] (4) -- node[below] {$v_{j-1,j}$} (5); \draw[-latex] (5) -- node[below] {$v_{j,j+1}$} (6); \draw[-latex] (8) -- node[right] {$v_{i-2,i-1}$} (9); \draw[-latex] (0) -- node[above] {$v_{i,i-1}$} (9); \draw[-latex] (0) -- node[right] {$v_{i,j}$} (5);
\fill (0) circle (1.5pt); \fill (1) circle (1.5pt); \fill (2) circle (1.5pt); \fill (4) circle (1.5pt); \fill (5) circle (1.5pt); \fill (6) circle (1.5pt); \fill (8) circle (1.5pt); \fill (9) circle (1.5pt);
\end{scope}
\end{tikzpicture}
\ \ \
\begin{tikzpicture}
\coordinate [label=above:$P_i$] (0) at (0,3); \coordinate [label=above:$P_{i+1}$] (1) at ({-3/sqrt(2)},{3/sqrt(2)}); \coordinate [label=left:$P_{i+2}$] (2) at (-3,0); \coordinate [label=below:$\dots$] (3) at (-3,-1); \coordinate [label=left:$P_{j-1}$] (4) at
({-3/sqrt(2)},{-3/sqrt(2)}); \coordinate [label=below:$P_j$] (5) at (0,-3); \coordinate [label=below:$P_{j+1}$] (6) at ({3/sqrt(2)},{-3/sqrt(2)}); \coordinate [label=below:$\dots$] (7) at (3,-1); \coordinate [label=below:$P_{i-2}$] (8) at (3,0); \coordinate [label=above:$P_{i-1}$] (9) at ({3/sqrt(2)},{3/sqrt(2)});
\begin{scope}
\draw[-latex] (0) -- node[above] {$v_{i,i+1}$} (1); \draw[-latex] (0) -- node[above] {$v_{i,i+2}$} (2); \draw[-latex] (0) -- node[left] {$v_{i,j-1}$} (4); \draw[-latex] (0) -- node[right] {$v_{i,j+1}$} (6); \draw[-latex] (0) -- node[above] {$v_{i,i-2}$} (8); \draw[-latex] (0) -- node[above] {$v_{i,i-1}$} (9); \draw[-latex] (0) -- node[right] {$v_{i,j}$} (5);
\fill (0) circle (1.5pt); \fill (1) circle (1.5pt); \fill (2) circle (1.5pt); \fill (4) circle (1.5pt); \fill (5) circle (1.5pt); \fill (6) circle (1.5pt); \fill (8) circle (1.5pt); \fill (9) circle (1.5pt);
\end{scope}
\end{tikzpicture}
\caption{Diagonal vectors $v_{i,j}$ and their associated $K_{\NZQ T}$-vector set.}\label{fig:K_T_vect}
\end{figure}
\end{center}
\subsection{$K_{{\NZQ T}}$-vector sets}
Let ${\mathcal A}^t$, $t = (x_1, \dots, x_n)$, be a $K_{\NZQ T}$-translated arrangement of ${\mathcal A}^0$ and let $P_i^t$ denote the intersection $\bigcap_{p \in L_i} H^{x_p}_p$. Then to the $K_{\NZQ T}$-configuration $K_{\NZQ T}({\mathcal A}^t)$ corresponds a unique family $\{v^t_{i,j}\}$ of vectors such that $P^t_i+v^t_{i,j}=P^t_j$. Notice that two different $K_{\NZQ T}$-configurations can define the same family $\{v_{i,j}\}$\footnote{It is unique in the quotient space ${\NZQ S} / D_{[n]}\simeq {\NZQ C}^{n-k}$.}. With the above notations, we provide the following definition.
\begin{defi}
Let ${\mathcal A}^0$ be a central generic arrangement, ${\NZQ T}$ an $r$-set, ${\mathcal A}^t$ a $K_{\NZQ T}$-translated arrangement of ${\mathcal A}^0$ and $i_0 \in [r]$ a fixed index. We call $K_{\NZQ T}$-\textbf{vector set} the set of vectors $\{v^t_{i_0,j}\}_{j\neq i_0}$ which satisfies $P^t_{i_0}+v^t_{i_0,j}=P^t_{j}$ for any $j\in [r]$, $j \neq i_0$.
\end{defi}
\noindent Since the vectors $\{v^t_{i,j}\}$ satisfy, by definition, the property $v^t_{k,l} = v^t_{i,l}- v^t_{i,k}$, the set $\{v^t_{i,j}\}$ is uniquely determined by any $K_{\NZQ T}$-vector set $\{v^t_{i_0,j}\}_{j\neq i_0}$ (see Figure \ref{fig:K_T_vect}). \\
Given a $K_{\NZQ T}$-vector set we can naturally define the operation of multiplication by a scalar
$$a\{v^t_{i_0,j}\}_{j\neq i_0} \coloneqq \{av^t_{i_0,j}\}_{j\neq i_0}, \quad a \in {\NZQ C}$$
and the sum of two different $K_{\NZQ T}$-vector sets
$$\{v^{t_1}_{i_0,j}\}_{j\neq i_0}+\{v^{t_2}_{i_0,j}\}_{j\neq i_0}=\{v^{t_1}_{i_0,j}+v^{t_2}_{i_0,j}\}_{j\neq i_0} \quad .$$
With the above notations and operations, we have the following definition.
\begin{defi}
For a fixed set ${\NZQ T}$, $d$ different $K_{\NZQ T}$-vector sets $\{ \{v^{t_h}_{i_0,j}\}_{j\neq i_0} \}_{h=1, \dots, d}$ are weakly linearly independent if and only if, for any $a_1,\ldots,a_d \in {\NZQ C}$ such that
\begin{equation}
\sum_{h=1}^{d} a_h \{v^{t_h}_{i_0,j}\}_{j\neq i_0} = 0,
\end{equation}
we have $a_1=\ldots=a_d=0$.
\end{defi}
\noindent The following remark is a key point to prove the connection between the linear independence of $K_{{\NZQ T}}$-configurations and the weak linear independence of the associated $K_{\NZQ T}$-vector sets.
\begin{rem}\label{rem:corres}
Let $K_{\NZQ T}({\mathcal A}^t)$ be the $K_{\NZQ T}$-configuration of the arrangement ${\mathcal A}^t$, a $K_{\NZQ T}$-translated arrangement of ${\mathcal A}^0$. Then for any $c \in {\NZQ C}$, the $K_{\NZQ T}$-configuration $K_{\NZQ T}({\mathcal A}^{ct})$ is an ``expansion'' by $c$ of $K_{\NZQ T}({\mathcal A}^t)$, that is $v_{i,j}^{ct} = c v_{i,j}^{t}$. This is a consequence of the fact that for any $i \in [r]$ the vector $OP^{ct}_i$ joining the origin with the point $P^{ct}_i$ satisfies $OP^{ct}_i=cOP^{t}_i$, by definition of translation. Hence $P^{ct}_iP^{ct}_j=cP^{t}_iP^{t}_j$, i.e.
$$v_{i,j}^{ct} = c v_{i,j}^{t} \quad .$$
Analogously we have that, if $t_1,t_2 \in {\NZQ C}^n$ are two translations, then
$$v_{i,j}^{t_1}+v_{i,j}^{t_2}=v_{i,j}^{t_1+t_2} \quad .$$
\end{rem}
\noindent We can now prove the main lemma of this section.
\begin{lem}\label{lem:K_TTvec}
Let ${\mathcal A}^0$ be a central generic arrangement of $n$ hyperplanes in ${\NZQ C}^k$ and ${\NZQ T}=\{L_1, \ldots, L_r\}$ be an $r$-set such that $[n]=\bigcup_{i=1}^r L_i$. The $K_{{\NZQ T}}$-translated arrangements ${\mathcal A}^{t_1}, \dots, {\mathcal A}^{t_d}$ of ${\mathcal A}^0$ are linearly independent if and only if their associated $K_{\NZQ T}$-vector sets $\{ \{v^{t_h}_{i_0,j}\}_{j\neq i_0} \}_{h=1, \dots, d}$ are weakly linearly independent.
\end{lem}
\begin{proof}
By definition, ${\mathcal A}^{t_1}, \dots, {\mathcal A}^{t_d}$ are linearly independent if and only if the translations $t_1,\ldots,t_d$ are linearly independent vectors in ${\NZQ C}^n/C$. Let us consider a linear combination $\sum_{h=1}^d a_h t_h$ of the vectors $t_h$ and the translated arrangements ${\mathcal A}^{a_ht_h}$. By Remark \ref{rem:corres} we have that the $K_{{\NZQ T}}$-vector sets associated to ${\mathcal A}^{a_ht_h}$ satisfy the equalities:
$$ \{v^{\sum_{h=1}^{d} a_ht_h}_{i_0,j}\}_{j\neq i_0}=\sum_{h=1}^{d} \{v^{a_h t_h}_{i_0,j}\}_{j\neq i_0}=\sum_{h=1}^{d} a_h \{v^{t_h}_{i_0,j}\}_{j\neq i_0} \quad . $$
Hence $\sum_{h=1}^{d} a_h \{v^{t_h}_{i_0,j}\}_{j\neq i_0}=0$ if and only if $v^{\sum_{h=1}^{d} a_ht_h}_{i_0,j}=0$ for any $j$, that is $P_{i_0}^{\sum_{h=1}^{d} a_ht_h} \equiv P_j^{\sum_{h=1}^{d} a_ht_h}$. This is equivalent to ${\mathcal A}^{\sum_{h=1}^{d} a_ht_h}$ being a central arrangement with center $P_{i_0}^{\sum_{h=1}^{d} a_ht_h}$, i.e. $\sum_{h=1}^d a_h t_h \in C$, and the statement follows from Proposition \ref{defi:indepK_TT}.
\end{proof}
\noindent The assumption $\bigcup_{i=1}^r L_i=[n]$ in Lemma \ref{lem:K_TTvec} amounts, in the more general case, to considering the subarrangement ${\mathcal A}'^0 \subset {\mathcal A}^0$ which only contains the hyperplanes indexed in $\bigcup_{i=1}^r L_i \subset [n]$. Indeed if a (central) generic arrangement ${\mathcal A}^0$ contains a subarrangement ${\mathcal A}'^0$ which is non-very generic, then ${\mathcal A}^0$ is obviously non-very generic. Analogously, if there exists a restriction arrangement ${\mathcal A}^{Y_{{\mathcal A}'}} = \{ H \cap Y_{{\mathcal A}'} | H \in {\mathcal A}^0 \setminus {\mathcal A}'\}$, $Y_{{\mathcal A}'}=\bigcap_{H \in {\mathcal A}'} H$, of ${\mathcal A}^0$ which is non-very generic, then ${\mathcal A}^0$ is non-very generic. The main theorem of this section follows.
\begin{thm}\label{thm:main2}
Let ${\mathcal A}^0$ be a central generic arrangement of $n$ hyperplanes in ${\NZQ C}^k$.
If there exists an $r$-set ${\NZQ T}=\{L_1, \ldots ,L_r\}$ with $\mid \bigcup_{i=1}^r L_i \mid=m$ and ${\rm rank} \bigcap_{p \in \bigcap_{i=1}^r L_i} H_p=y$, which admits $m-y-k-r'$ weakly linearly independent $K_{{\NZQ T}}$-vector sets for some $r'<r$, then ${\mathcal A}^0$ is non-very generic.
\end{thm}
\begin{proof}
Let us consider the subarrangement ${\mathcal A}'$ of ${\mathcal A}^0$ given by the hyperplanes indexed in $\bigcup_{i=1}^r L_i$ and its essentialization, i.e. the restriction arrangement ${\mathcal A}'^Y$, $Y=\bigcap_{p \in \bigcap_{i=1}^r L_i} H_p$. If $y={\rm rank}~Y$, then the arrangement ${\mathcal A}'^Y$ is a central essential arrangement in ${\NZQ C}^{m-y}$, $m=\mid \bigcup_{i=1}^r L_i \mid$. By Lemma \ref{lem:K_TTvec}, if ${\mathcal A}^{t_1},\ldots,{\mathcal A}^{t_{m-y-k-r'}}$ are $K_{{\NZQ T}}$-translated arrangements of ${\mathcal A}'^Y$ associated to the $m-y-k-r'$ weakly linearly independent $K_{{\NZQ T}}$-vector sets, then ${\mathcal A}^{t_1},\ldots,{\mathcal A}^{t_{m-y-k-r'}}$ are linearly independent vectors in ${\NZQ S}[{\mathcal A}'^Y] / D_{[m]}\simeq {\NZQ C}^{m-y-k}$. That is, ${\mathcal A}^{t_1},\ldots,{\mathcal A}^{t_{m-y-k-r'}}$ span a subspace of dimension $m-y-k-r'$. On the other hand, by construction, the ${\mathcal A}^{t_j}$ are $K_{{\NZQ T}}$-translated, i.e. ${\mathcal A}^{t_j}\in ess(X)$, $X=\bigcap_{i=1}^r D_{L_i}$, for any $j=1, \ldots , m-y-k-r'$; that is, the space spanned by ${\mathcal A}^{t_1},\ldots,{\mathcal A}^{t_{m-y-k-r'}}$ is included in $ess(X)$. This implies that the simple intersection $ess(X)$ has dimension $d \geq m-y-k-r' > m-y-k-r$, that is, its codimension is smaller than $r$, i.e. ${\rm rank}~ess(X)<r$ and hence ${\rm rank}~X<r$. This implies that ${\mathcal A}'$ is non-very generic and hence ${\mathcal A}^0$ is non-very generic.
\end{proof}
\noindent Theorem \ref{thm:main2} allows one to build non-very generic arrangements simply by imposing linear conditions on vectors $v_{i,j} \in H^0_{i,j}$. This linearity is a nontrivial achievement since the conditions to check (non-)very genericity are Pl\"ucker-type conditions. We point out that while Theorem \ref{thm:main2} provides a quite useful tool to build non-very generic arrangements, we are still far away from being able to check whether a given arrangement is very generic or not.\\
In the next section we will provide nontrivial examples of how to build non-very generic arrangements by means of Theorem \ref{thm:main2}.
\section{Examples of non-very generic arrangements}\label{sec:example}
In this section we present a few examples to illustrate how to use Theorem \ref{thm:main2} to construct non-very generic arrangements. To construct the numerical examples we used the software CoCoA-5.2.4 (see \cite{AM}).
\begin{ex}[${\mathcal B}(12, 8, {\mathcal A}^0)$ with an intersection of multiplicity 4 in rank 3]\label{ex:MS(12,8)}
Let $L_1 = [12] \setminus \{ 10,11,12 \}, L_2 = [12] \setminus \{ 7,8,9 \}, L_3 = [12] \setminus\{ 4,5,6 \}$ and $L_4 = [12] \setminus \{ 1,2,3 \}$ be subsets of $[12]$ of $k+1=9$ indices. An easy computation shows that the set ${\NZQ T} = \{ L_1, L_2, L_3, L_4 \}$ is a $4$-set. Let us consider a central generic arrangement ${\mathcal A}^0$ of $12$ hyperplanes in ${\NZQ C}^8$.
In this case $m=n=12$, $y=0$ and $m-k-r=12-8-4=0$, hence, by Theorem \ref{thm:main2}, in order for ${\mathcal A}^0$ to be non-very generic it suffices that there exists just one $K_{{\NZQ T}}$-vector set $\{v_{1,2},v_{1,3},v_{1,4}\}$, that is, the vectors $v_{2,3}=v_{1,3}-v_{1,2} \in \bigcap_{p \in L_2 \cap L_3 \setminus \{ 12 \}} H_p^0$ and $v_{2,4}=v_{1,4}-v_{1,2} \in \bigcap_{p \in L_2 \cap L_4 \setminus \{ 12 \}} H_p^0$ have to belong to $H_{12}^0$ (see Figure \ref{fig:exB(12,8)}). Notice that since $v_{3,4} = v_{2,4} - v_{2,3} \in \bigcap_{p \in L_3 \cap L_4 \setminus \{ 12 \}} H_p^0$, if $v_{2,3}, v_{2,4} \in H_{12}^0$ then $v_{3,4} \in H_{12}^0$. That is, all hyperplanes in ${\mathcal A}^0$ can be chosen freely\footnote{Here and in the rest of this section, freely means that we only impose the condition that ${\mathcal A}^0$ is a central generic arrangement. In particular this condition is always taken as given and imposed even if not written.}, except $H_{12}^0$, which has to contain the vectors $v_{2,3},v_{2,4}$.
\begin{figure}[h]
\begin{minipage}{0.48\textwidth}
\centering
\begin{tikzpicture}
\coordinate (0) at (0,0); \coordinate (1) at (4,0); \coordinate (2) at (3,4); \coordinate (3) at (1,3); \coordinate (4) at (-1,0); \coordinate (5) at (-1/2,-2/3); \coordinate (6) at (-1/3,-1); \coordinate (7) at (17/4,-1); \coordinate (8) at (9/2,-1/2); \coordinate (9) at (5,0); \coordinate (10) at (4,9/2); \coordinate (11) at (4,16/3); \coordinate (12) at (11/4,5); \coordinate (13) at (4/3,4); \coordinate (14) at (0,4); \coordinate (15) at (0,5/2);
\coordinate [label=above:$P_1$] (a) at (-1/4,0); \coordinate [label=above:$P_2$] (b) at (17/4,0); \coordinate [label=left:$P_3$] (c) at (3,25/6); \coordinate [label=above:$P_4$] (d) at (1-1/9,3+1/9);
\coordinate [label=$H_{1,2}^t$] (e) at (-1.3,-0.3); \coordinate [label=$H_{1,3}^t$] (f) at (-2/3-0.1,-1-0.1); \coordinate [label=$H_{1,4}^t$] (g) at (-1/4-0.1,-1.6); \coordinate [label=$\bigcap_{p \in L_2 \cap L_3 \setminus \{ 12 \}} H_p^t$] (h) at (4+0.3,-1.6); \coordinate [label=$\bigcap_{p \in L_2 \cap L_4 \setminus \{ 12 \}} H_p^t$] (j) at (5.7,-1-0.1); \coordinate [label=$\bigcap_{p \in L_3 \cap L_4 \setminus \{ 12 \}} H_p^t$] (i) at (5.3,25/6+0.2);
\begin{scope}
\draw[-latex] (0) -- node[below] {$v_{1,2}$} (1); \draw[-latex] (1) -- node[right] {$v_{2,3}$} (2); \draw[-latex] (2) -- node[above] {$v_{3,4}$} (3); \draw[-latex] (0) -- node[left] {$v_{1,4}$} (3); \draw[-latex] (0) -- node[left] {$v_{1,3}$} (2); \draw[-latex] (1) -- node[right] {$v_{2,4}$} (3);
\draw (4) -- (9); \draw (5) -- (11); \draw (7) -- (12); \draw (6) -- (13); \draw[dashed] (8) -- (14); \draw (10) -- (15);
\end{scope}
\end{tikzpicture}
\caption{$K_{\NZQ T}$-configuration $K_{\NZQ T}({\mathcal A}^t)$ of ${\mathcal B}(12, 8, {\mathcal A}^0).$ $v_{i,j}$ are vectors in $H_{i,j}^0$.}\label{fig:exB(12,8)}
\end{minipage}
\begin{minipage}{0.48\textwidth}
\centering
\begin{tikzpicture}
\coordinate (0) at (-1, 0); \coordinate (1) at (-1/2, -5/12); \coordinate (2) at (-1/4,-1); \coordinate (3) at (1/3,-5/6); \coordinate (4) at (5/3,-5/6); \coordinate (5) at (22/10, -4/5); \coordinate (6) at (27/10, -7/12); \coordinate (7) at (3,0); \coordinate (8) at (7/2,17/8); \coordinate (9) at (7/2,5/2); \coordinate (10) at (7/2, 35/12); \coordinate (11) at (33/10, 13/4); \coordinate (12) at (3/2, 35/8); \coordinate (13) at (12/10, 24/5); \coordinate (14) at (8/10, 24/5); \coordinate (15) at (1/2, 35/8); \coordinate
(16) at (-3/2,15/4); \coordinate (17) at (-3/2,35/12); \coordinate (18) at (-3/2, 5/2); \coordinate (19) at (-3/2,17/8);
\coordinate [label=left:$P_1$] (a) at (0,0.2); \coordinate [label=right:$P_2$] (b) at (2,0.2); \coordinate [label=$P_3$] (c) at (3,5/2); \coordinate [label=right:$P_4$] (d) at (1,4); \coordinate [label=$P_5$] (e) at (-1,5/2);
\coordinate (A) at (0,0); \coordinate (B) at (2,0); \coordinate (C) at (3,5/2); \coordinate (D) at (1,4); \coordinate (E) at (-1,5/2);
\coordinate [label=$H_{1,2}^t$] (p) at (-1.3,-0.3); \coordinate [label=$H_{1,3}^t$] (q) at (-2/3-0.1,-1); \coordinate [label=$H_{1,4}^t$] (r) at (-1/4-0.1,-1.6); \coordinate [label=$H_{1,5}^t$] (s) at (0.5, -1.4); \coordinate [label=$H_{2,3}^t$] (t) at (1.7, -1.4); \coordinate [label=$H_{2,4}^t$] (u) at (2.3, -1.4); \coordinate [label=$H_{2,5}^t$] (v) at (3.0, -1.2); \coordinate [label=$H_{3,4}^t$] (w) at (3.8, 1.6); \coordinate (x) at (4.8, 2.2); \coordinate (y) at (2.7, 4.3);
\begin{scope}
\draw (0) -- (7); \draw (1) -- (10); \draw (2) -- (13); \draw (3) -- (16); \draw (4) -- (11); \draw (5) -- (14); \draw (6) -- (17); \draw (8) -- (15); \draw [dashed] (9) -- (18); \draw [dashed] (12) -- (19);
\draw[-latex] (A) -- node {$v_{1,2}^{k}$} (B); \draw[-latex] (A) -- node[left] {$v_{1,3}^{k}$} (C); \draw[-latex] (A) -- node {$v_{1,4}^{k}$} (D); \draw[-latex] (A) -- node {$v_{1,5}^{k}$} (E); \draw[-latex] (B) -- (C); \draw[-latex] (B) -- (D); \draw[-latex] (B) -- (E); \draw[-latex] (C) -- (D); \draw[-latex] (C) -- (E); \draw[-latex] (D) -- (E);
\end{scope}
\end{tikzpicture}
\caption{$K_{\NZQ T}$-configuration $K_{\NZQ T}({\mathcal A}^t)$ of ${\mathcal B}(10, 3, {\mathcal A}^0).$ $v_{i,j}^{k}$ is a vector in $H_{i,j}^0$.}\label{fig:exB(10,3)}
\end{minipage}
\end{figure}
\noindent Let us see a numerical example. Let us consider hyperplanes of equation $H_i^0: \alpha_i \cdot x = 0$, with $\alpha_i$, $i = 1, \dots, 11$, assigned as follows:
\begin{equation}
\begin{split}
& \alpha_1 = (0,0,1,1,0,1,-1,1), \alpha_2 = (0,0,0,1,1,1,1,-1), \alpha_3 = (0,0,1,0,0,0,1,1), \\
& \alpha_4 = (0,1,0,1,1,1,0,1), \alpha_5 = (0,2,0,-1,-1,0,1,-1), \alpha_6 = (0,-1,0,2,1,-1,-1,1), \\
& \alpha_7 = (1,0,0,1,0,-1,-1,1), \alpha_8 = (-1,0,0,0,2,1,1,1), \alpha_9 = (-4,0,0,0,1,-1,1,1), \\
& \alpha_{10} = (1,1,1,-1,-1,-1,-1,1), \alpha_{11} = (1,1,1,2,2,2,0,3). \\
\end{split}
\end{equation}
In this case, we have the $K_{\NZQ T}$-vector set
\begin{equation*}
\{ v_{1,2}, v_{1,3}, v_{1,4} \} = \{ (1,0,0,0,0,0,0,0), (0,1,0,0,0,0,0,0), (0,0,1,0,0,0,0,0) \} \quad .
\end{equation*}
The other vectors are obtained by means of the relations $v_{2,3} = v_{1,3} - v_{1,2}, v_{2,4} = v_{1,4} - v_{1,2}, v_{3,4} = v_{1,4} - v_{1,3}$, that is
\begin{equation}
v_{2,3} = (-1,1,0,0,0,0,0,0), v_{2,4} = (-1,0,1,0,0,0,0,0), v_{3,4} = (0,-1,1,0,0,0,0,0)
\end{equation}
and, finally, we get $\alpha_{12} = (-2,-2,-2,3,4,-5,6,7)$ by imposing the condition that $\alpha_{12}$ has to be orthogonal to $v_{2,3}$ and $v_{2,4}$ (indeed $\alpha_{12}\cdot v_{2,3} = 2-2 = 0$ and $\alpha_{12}\cdot v_{2,4} = 2-2 = 0$).
\end{ex}
\begin{ex}[${\mathcal B}(16, 11, {\mathcal A}^0)$ with an intersection of multiplicity 4 in rank 3]\label{ex:MS(16,11)}
Let $L_1 = [16] \setminus \{ 13,14,15,16 \}, L_2 = [16] \setminus \{ 9,10,11,12 \}, L_3 = [16] \setminus \{ 5,6,7,8 \}$ and $L_4 = [16] \setminus \{ 1,2,3,4 \}$ be subsets of $[16]$ of $k+1 = 12$ indices. The set ${\NZQ T} = \{ L_1, L_2, L_3, L_4 \}$ is a $4$-set.
Let us consider a central generic arrangement ${\mathcal A}^0$ of 16 hyperplanes in ${\NZQ C}^{11}$. In this case $m = n = 16$, $y=0$ and $m - k - r = 16 - 11 - 4 = 1$, hence, by Theorem \ref{thm:main2}, in order for ${\mathcal A}^0$ to be non-very generic we need two weakly linearly independent $K_{\NZQ T}$-vector sets $\{ v_{1,2}^{1}, v_{1,3}^{1}, v_{1,4}^{1} \}$ and $\{ v_{1,2}^{2}, v_{1,3}^{2}, v_{1,4}^{2} \}$, that is, the vectors $v_{2,3}^{k} \in \bigcap_{p \in L_2 \cap L_3 \setminus \{ 16 \}} H_p^0$ and $v_{2,4}^{k} \in \bigcap_{p \in L_2 \cap L_4 \setminus \{ 16 \}} H_p^0$, $k = 1,2$, have to belong to $H_{16}^0$\footnote{The graphic representation in this case can simply be obtained by replacing the number $12$ with $16$ in Figure \ref{fig:exB(12,8)}.}. Notice that since $v_{3,4}^{k} = v_{2,4}^{k} - v_{2,3}^{k} \in \bigcap_{p \in L_3 \cap L_4 \setminus \{ 16 \}} H_p^0$, if $v_{2,3}^{k}, v_{2,4}^{k} \in H_{16}^0$ then $v_{3,4}^{k} \in H_{16}^0$. That is, all hyperplanes in ${\mathcal A}^0$ can be chosen freely, except $H_{16}^0$, which has to contain the vectors $v_{2,3}^{k}, v_{2,4}^{k}$, $k = 1,2$.\\
Let us see a numerical example. Let us consider hyperplanes of equation $H_i^0: \alpha_i \cdot x = 0$, with $\alpha_i$, $i = 1, \dots, 15$, assigned as follows.
\begin{equation}
\begin{split}
& \alpha_1 = (0,0,1,0,0,1,0,0,0,1,-1), \alpha_2 = (0,0,-1,0,0,1,1,1,1,-1,0), \alpha_3 = (0,0,2,0,0,1,1,0,1,1,0), \\
& \alpha_4 = (0,0,1,0,0,1,1,0,0,0,1), \alpha_5 = (0,-1,0,0,1,0,1,1,1,-1,0), \alpha_6 = (0,1,0,0,2,0,0,-1,-1,0,1), \\
& \alpha_7 = (0,2,0,0,-1,0,-1,0,0,1,1), \alpha_8 = (0,-1,0,0,2,0,1,1,1,0,0), \alpha_9 = (1,0,0,-3,0,0,-1,-1,1,1,1), \\
& \alpha_{10} = (2,0,0,5,0,0,1,-1,-1,1,1), \alpha_{11} = (3,0,0,1,0,0,1,-1,2,0,1), \alpha_{12} = (1,0,0,5,0,0,1,0,1,1,0), \\
& \alpha_{13} = (1,1,1,-3,-3,-3,-1,-3,2,-2,-1), \alpha_{14} = (1,1,1,0,0,0,-2,1,-8,1,1), \alpha_{15} = (0,0,0,-5,-5,-5,1,2,-3,-4,7). \\
\end{split}
\end{equation}
In this case, we have the $K_{\NZQ T}$-vector sets
\begin{equation*}
\begin{split}
& \{ v_{1,2}^{1}, v_{1,3}^{1}, v_{1,4}^{1} \} = \{ (1,0,0,0,0,0,0,0,0,0,0), (0,1,0,0,0,0,0,0,0,0,0), (0,0,1,0,0,0,0,0,0,0,0) \} \quad , \\
& \{ v_{1,2}^{2}, v_{1,3}^{2}, v_{1,4}^{2} \} = \{ (0,0,0,1,0,0,0,0,0,0,0), (0,0,0,0,1,0,0,0,0,0,0), (0,0,0,0,0,1,0,0,0,0,0) \} \quad .
\end{split}
\end{equation*}
The other vectors are obtained by means of the relations $v_{2,3}^{k} = v_{1,3}^{k} - v_{1,2}^{k}, v_{2,4}^{k} = v_{1,4}^{k} - v_{1,2}^{k}, v_{3,4}^{k} = v_{1,4}^{k} - v_{1,3}^{k}$, $k = 1,2$, that is
\begin{equation}
\begin{split}
& v_{2,3}^{1} = (-1,1,0,0,0,0,0,0,0,0,0), v_{2,4}^{1} = (-1,0,1,0,0,0,0,0,0,0,0), v_{3,4}^{1} = (0,-1,1,0,0,0,0,0,0,0,0) \quad , \\
& v_{2,3}^{2} = (0,0,0,-1,1,0,0,0,0,0,0), v_{2,4}^{2} = (0,0,0,-1,0,1,0,0,0,0,0), v_{3,4}^{2} = (0,0,0,0,-1,1,0,0,0,0,0)
\end{split}
\end{equation}
and, finally, we get $\alpha_{16}$ = $(1,1,1,-2,-2,-2,5,6,7,8,9)$ by imposing the conditions that $\alpha_{16}$ has to be orthogonal to $v_{2,3}^{k}$ and $v_{2,4}^{k}$, $k = 1,2$.
\end{ex}
\begin{ex}[${\mathcal B}(10, 3, {\mathcal A}^0)$ with an intersection of multiplicity 5 in rank 4]\label{ex:MS(10,3)}
Let $L_1 = \{ 1,2,3,4 \}, L_2 = \{ 1,5,6,7 \}, L_3 = \{ 2,5,8,9 \}, L_4 = \{ 3,6,8,10 \}$ and $L_5 = \{ 4,7,9,10 \}$ be subsets of $[10]$ of $k+1 = 4$ indices. The set ${\NZQ T} = \{ L_1, L_2, L_3, L_4, L_5 \}$ is a $5$-set.
Let us consider a central generic arrangement ${\mathcal A}^0$ of 10 hyperplanes in ${\NZQ C}^3$. In this case $m = n = 10$, $y=0$ and $m - k - r = 10 - 3 - 5 = 2$, hence, by Theorem \ref{thm:main2}, in order for ${\mathcal A}^0$ to be non-very generic we need three weakly linearly independent $K_{\NZQ T}$-vector sets $\{ v_{1,2}^{1}, v_{1,3}^{1}, v_{1,4}^{1}, v_{1,5}^{1} \}$, $\{ v_{1,2}^{2}, v_{1,3}^{2}, v_{1,4}^{2}, v_{1,5}^{2} \}$ and $\{ v_{1,2}^{3}, v_{1,3}^{3}, v_{1,4}^{3}, v_{1,5}^{3} \}$, that is, the vectors $v_{4,5}^{k}$, $k = 1,2,3$, have to belong to $H_{10}^0$ (see Figure \ref{fig:exB(10,3)}). Notice that since in this case the hyperplanes are planes, the three vectors $v_{i,j}^{k}$, $k=1,2,3$, will be linearly dependent for any choice of indices $(i,j)$, $i\neq j$. This additional condition forces at most 8 hyperplanes in ${\mathcal A}^0$ to be chosen freely, while $H_{9}^0$ and $H_{10}^0$ have to contain the dependent vectors $v_{3,5}^{k}$ and $v_{4,5}^{k}$, $k = 1,2,3$, respectively.\\
Let us see a numerical example. Let us consider hyperplanes of equation $H_i^0: \alpha_i \cdot x = 0$, with $\alpha_i$, $i = 1, \dots, 8$, assigned as follows.
\begin{equation}
\begin{split}
& \alpha_1 = (0,10,3), \alpha_2 = (20,0,-9), \alpha_3 = (2,-3,0), \alpha_4 = (3,1,0), \\
&\alpha_5 = (0,0,1), \alpha_6 = (1,-1,1), \alpha_7 = (1,2,2), \alpha_8 = (4,-1,-3).
\end{split}
\end{equation}
In this case, we have the $K_{\NZQ T}$-vector sets
\begin{equation*}
\begin{split}
& \{ v_{1,2}^{1}, v_{1,3}^{1}, v_{1,4}^{1}, v_{1,5}^{1} \} = \{ (1,-3,10), (\frac{9}{2},\frac{21}{2},10),(\frac{9}{2},3,\frac{25}{2}),(-\frac{77}{9},\frac{77}{3},-\frac{125}{9}) \}, \\
& \{ v_{1,2}^{2}, v_{1,3}^{2}, v_{1,4}^{2}, v_{1,5}^{2} \} = \{ (-2,6,-20),(-9,-47,-20),(-3,-2,-27),(-\frac{2}{3},2,-\frac{50}{3}) \}, \\
& \{ v_{1,2}^{3}, v_{1,3}^{3}, v_{1,4}^{3}, v_{1,5}^{3} \} = \{ (-3,3,-10),(-\frac{9}{2},-\frac{2391}{80},-10),(-\frac{1467}{1040},-\frac{489}{520},-\frac{16151}{1040}),(-\frac{4}{3},4,-\frac{71}{6}) \}.
\end{split}
\end{equation*}
The other vectors are obtained by means of the relations $v_{i,l}^{k} = v_{1,l}^{k} - v_{1,i}^{k}$, where $2 \leq i<l \leq 5$, $k = 1,2,3$, that is
\begin{equation}
\begin{split}
& v_{2,3}^{1} = (\frac{7}{2}, \frac{27}{2}, 0), v_{2,4}^{1} = (\frac{7}{2}, 6, \frac{5}{2}), v_{2,5}^{1} = (-\frac{86}{9}, \frac{86}{3}, -\frac{215}{9}), \\
& v_{3,4}^{1} = (0, -\frac{15}{2}, \frac{5}{2}), v_{3,5}^{1} = (-\frac{235}{18}, \frac{91}{6}, -\frac{215}{9}), v_{4,5}^{1} = (-\frac{235}{18}, \frac{68}{3}, -\frac{475}{18}), \\
& v_{2,3}^{2} = (-7, -53, 0), v_{2,4}^{2} = (-1, -8, -7), v_{2,5}^{2} = (\frac{4}{3}, -4, \frac{10}{3}), \\
& v_{3,4}^{2} = (6, 45, -7), v_{3,5}^{2} = (\frac{25}{3}, 49, \frac{10}{3}), v_{4,5}^{2} = (\frac{7}{3}, 4, \frac{31}{3}), \\
& v_{2,3}^{3} = (-\frac{3}{2}, -\frac{2631}{80}, 0), v_{2,4}^{3} = (\frac{1653}{1040}, -\frac{2049}{520}, -\frac{5751}{1040}), v_{2,5}^{3} = (\frac{5}{3}, 1, -\frac{11}{6}), \\
& v_{3,4}^{3} = (\frac{3213}{1040}, \frac{6021}{208}, -\frac{5751}{1040}), v_{3,5}^{3} = (\frac{19}{6}, \frac{2711}{80}, -\frac{11}{6}), v_{4,5}^{3} = (\frac{241}{3120}, \frac{2569}{520}, \frac{11533}{3120}) \quad .
\end{split}
\end{equation}
Finally, we get $\alpha_9$ = $(314,-40,-197)$ and $\alpha_{10}$ = $(139,30,-43)$ by imposing the conditions that $\alpha_9$ and $\alpha_{10}$ have to be orthogonal to $v_{3,5}^{k}$ and $v_{4,5}^{k}$, $k = 1,2,3$.
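\noindent As a sanity check, one can verify that $\alpha_9$ and $\alpha_{10}$ are indeed orthogonal to the vectors $v_{3,5}^{k}$ and $v_{4,5}^{k}$, $k=1,2,3$, for instance with the short script below. It uses exact rational arithmetic and is only an illustrative sketch added here; it is not part of the original CoCoA computation.
\begin{verbatim}
from fractions import Fraction as F

def dot(u, v):
    # exact scalar product of two vectors
    return sum(a * b for a, b in zip(u, v))

a9, a10 = (314, -40, -197), (139, 30, -43)
v35 = [(F(-235, 18), F(91, 6), F(-215, 9)),
       (F(25, 3), F(49), F(10, 3)),
       (F(19, 6), F(2711, 80), F(-11, 6))]
v45 = [(F(-235, 18), F(68, 3), F(-475, 18)),
       (F(7, 3), F(4), F(31, 3)),
       (F(241, 3120), F(2569, 520), F(11533, 3120))]

print(all(dot(a9, v) == 0 for v in v35))    # True
print(all(dot(a10, v) == 0 for v in v45))   # True
\end{verbatim}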
\end{ex}
\begin{rem}
Notice that Example \ref{ex:MS(10,3)} is slightly different from the other examples for two reasons. Firstly, it uses a different combinatorics. In Examples \ref{ex:MS(12,8)} and \ref{ex:MS(16,11)} the $4$-sets ${\NZQ T} = \{ L_1, L_2, L_3, L_4 \}$ are of the form $L_i = [n] \setminus K_i$ with $K_i$'s which satisfy the properties $\bigcup_{i=1}^4 K_i=[n]$ and $K_i \cap K_j = \emptyset$\footnote{Notice that this is a generalization of the combinatorics used in \cite{LS}.}, while in Example \ref{ex:MS(10,3)} this is not the case. Secondly, in Examples \ref{ex:MS(12,8)} and \ref{ex:MS(16,11)}, in order to obtain a non-very generic arrangement, we could choose all hyperplanes freely but one, while in Example \ref{ex:MS(10,3)} two hyperplanes had to be fixed, as a result of the need for three weakly linearly independent $K_{\NZQ T}$-vector sets in two-dimensional hyperplanes. Indeed this dependency condition gives rise to $27$ independent equations of the form
\begin{equation}\label{eq:rem_dep2}
v_{i,j}^3 = \alpha v_{i,j}^1 + \beta v_{i,j}^2
\end{equation}
which fix the entries of the vectors $v_{1,i}^k$, $i = 3,4,5$, uniquely for any choice of three dependent vectors $v_{1,2}^k$, $k = 1,2,3$. Hence the vectors $v_{3,5}^k$ and $v_{4,5}^k$, $k = 1,2,3$, are determined and so are the two hyperplanes $H_9^0$ and $H_{10}^0$.
\end{rem}
\begin{rem}
Notice that both Example \ref{ex:MS(16,11)} and Example \ref{ex:MS(10,3)} satisfy the Athanasiadis condition, while the first one fails it when the set $I$ has maximal cardinality. This essentially shows that the problem of describing the $r$-sets ${\NZQ T}$ that can give rise to (simple) non-very generic intersections is nontrivial.
\end{rem}
\bigskip
\noindent \textbf{Competing interests:} The author(s) declare none.
\begin{thebibliography}{9}
\bibitem{AM} J. Abbott and A. M. Bigatti, CoCoA: a system for doing Computations in Commutative Algebra, available at http://cocoa.dima.unige.it.
\bibitem{Atha} C. A. Athanasiadis, The Largest Intersection Lattice of a Discriminantal Arrangement, Contributions to Algebra and Geometry, Vol. 40, No. 2 (1999).
\bibitem{BB} M. Bayer, K. Brandt, Discriminantal arrangements, fiber polytopes and formality, J. Algebraic Comb., 6 (1997), pp. 229-246.
\bibitem{Crapo} H. Crapo, The combinatorial theory of structures, in: ``Matroid theory'' (A. Recski and L. Lov\'asz eds.), Colloq. Math. Soc. J\'anos Bolyai Vol. 40, pp. 107-213, North-Holland, Amsterdam-New York, 1985.
\bibitem{CR} H. Crapo and G.C. Rota, The resolving bracket. In Invariant methods in discrete and computational geometry (N. White ed.), pp. 197-222, Kluwer Academic Publishers, Dordrecht, 1995.
\bibitem{DPS} P. Das, E. Palezzato and S. Settepanella, The generalized Sylvester's and orchard problems via discriminantal arrangement, arXiv:2201.03007.
\bibitem{Falk} M. Falk, A note on discriminantal arrangements, Proc. Amer. Math. Soc., 122 (4) (1994), pp. 1221-1227.
\bibitem{FZ} S. Felsner and G.M. Ziegler, Zonotopes associated with higher Bruhat orders, Discrete Mathematics, 241 (2001), pp. 301-312.
\bibitem{KV1} M. Kapranov, V. Voevodsky, Free n-category generated by a cube, oriented matroids, and higher Bruhat orders, Funct. Anal. Appl., 2 (1991), pp. 50-52.
\bibitem{KV2} M. Kapranov, V. Voevodsky, Combinatorial-geometric aspects of polycategory theory: pasting schemes and higher Bruhat orders (list of results).
Cahiers de Topologie et G\'eom\'etrie Diff\'erentielle Cat\'egoriques, 32 (1991), pp. 11-27.
\bibitem{KV3} M. Kapranov, V. Voevodsky, Braided monoidal 2-categories and Manin--Schechtman higher braid groups, J. Pure Appl. Algebra, 92 (3) (1994), pp. 241-267.
\bibitem{Koh} T. Kohno, Integrable connections related to Manin and Schechtman's higher braid groups, Illinois J. Math. 34, no. 2 (1990), pp. 476-484.
\bibitem{Law} R.J. Lawrence, A presentation for Manin and Schechtman's higher braid groups, MSRI pre-print \url{http://www.ma.huji.ac.il/~ruthel/papers/premsh.html} (1991).
\bibitem{LS} A. Libgober and S. Settepanella, Strata of discriminantal arrangements, Journal of Singularities, Volume in honor of E. Brieskorn, 18 (2018).
\bibitem{MS} Yu. I. Manin and V. V. Schechtman, Arrangements of Hyperplanes, Higher Braid Groups and Higher Bruhat Orders, Advanced Studies in Pure Mathematics 17 (1989), Algebraic Number Theory -- in honor of K. Iwasawa, pp. 289-308.
\bibitem{NT} Y. Numata and A. Takemura, On computation of the characteristic polynomials of the discriminantal arrangements and the arrangements generated by generic points, Harmony of Gr\"obner Bases and the Modern Industrial Society, World Scientific (T. Hibi, ed.), pp. 228-252 (2012).
\bibitem{Orl} P. Orlik, Introduction to Arrangements, CBMS Reg. Conf. Ser. Math., vol. 72, American Mathematical Society, Providence, RI, USA (1989).
\bibitem{OT} P. Orlik, H. Terao, Arrangements of Hyperplanes, Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences], vol. 300, Springer-Verlag, Berlin (1992).
\bibitem{Per} M. Perling, Divisorial cohomology vanishing on toric varieties, Doc. Math., 16 (2011), pp. 209-251.
\bibitem{SS} T. Saito and S. Settepanella, Small examples of discriminantal arrangements associated to non-very generic arrangements, arXiv:2202.04794.
\bibitem{SSY1} S. Sawada, S. Settepanella and S. Yamagata, Discriminantal arrangement, $3\times 3$ minors of Pl\"ucker matrix and hypersurfaces in Grassmannian $Gr(3,n)$, Comptes Rendus Mathematique, Ser. I 355 (2017), pp. 1111-1120.
\bibitem{SSY2} S. Sawada, S. Settepanella and S. Yamagata, Pappus's Theorem in Grassmannian $Gr(3, {\NZQ C}^n)$, Ars Mathematica Contemporanea, Vol. 16, No. 1 (2019), pp. 257-276.
\bibitem{SSc} S. Settepanella and S. Yamagata, On the non-very generic intersections in discriminantal arrangements, to appear in Comptes Rendus Mathematique.
\bibitem{So} S. Yamagata, A classification of combinatorial types of discriminantal arrangements, arXiv:2201.01894.
\bibitem{Zie} G.M. Ziegler, Higher Bruhat orders and cyclic hyperplane arrangements, Topology 32 (1993), pp. 259-279.
\end{thebibliography}
\end{document}
\begin{document}
\begin{frontmatter}
\title{Shortcuts to adiabaticity}
\author[UPV]{E. Torrontegui}
\author[UPV]{S. Ib\'a\~nez}
\author[UPV]{S. Mart\'\i nez-Garaot}
\author[FT,IK]{M. Modugno}
\author[LA1,LA2]{A. del Campo}
\author[Toul]{D. Gu\'ery-Odelin}
\author[Cork]{A. Ruschhaupt}
\author[UPV,Shan]{Xi Chen}
\author[UPV,Shan]{J. G. Muga}
\address[UPV]{Departamento de Qu\'{\i}mica F\'{\i}sica, Universidad del Pa\'{\i}s Vasco - Euskal Herriko Unibertsitatea, Apdo. 644, Bilbao, Spain}
\address[FT]{Departamento de F\'{\i}sica Te\'orica e Historia de la Ciencia, Universidad del Pa\'{\i}s Vasco - Euskal Herriko Unibertsitatea, Apdo. 644, Bilbao, Spain}
\address[IK]{IKERBASQUE, Basque Foundation for Science, 48011 Bilbao, Spain}
\address[LA1]{Theoretical Division, Los Alamos National Laboratory, Los Alamos, NM, USA}
\address[LA2]{Center for Nonlinear Studies, Los Alamos National Laboratory, Los Alamos, NM, USA}
\address[Toul]{Laboratoire Collisions Agr\'egats R\'eactivit\'e, CNRS UMR 5589, IRSAMC, Universit\'e Paul Sabatier, 31062 Toulouse CEDEX 4, France}
\address[Cork]{Department of Physics, University College Cork, Cork, Ireland}
\address[Shan]{Department of Physics, Shanghai University, 200444 Shanghai, People's Republic of China}
\begin{abstract}
Quantum adiabatic processes --that keep constant the populations in the instantaneous eigenbasis of a time-dependent Hamiltonian-- are very useful to prepare and manipulate states, but take typically a long time. This is often problematic because decoherence and noise may spoil the desired final state, or because some applications require many repetitions. ``Shortcuts to adiabaticity'' are alternative fast processes which reproduce the same final populations, or even the same final state, as the adiabatic process in a finite, shorter time. Since adiabatic processes are ubiquitous, the shortcuts span a broad range of applications in atomic, molecular and optical physics, such as fast transport of ions or neutral atoms, internal population control and state preparation (for nuclear magnetic resonance or quantum information), cold atom expansions and other manipulations, cooling cycles, wavepacket splitting, and many-body state engineering or correlations microscopy. Shortcuts are also relevant to clarify fundamental questions such as a precise quantification of the third principle of thermodynamics and quantum speed limits. We review different theoretical techniques proposed to engineer the shortcuts, the experimental results, and the prospects.
\end{abstract}
\begin{keyword}
adiabatic dynamics \sep quantum speed limits \sep superadiabaticity \sep quantum state engineering \sep transport engineering of cold atoms, ions, and Bose-Einstein condensates \sep wave packet splitting \sep third principle of thermodynamics \sep transitionless tracking algorithm \sep fast expansions
\end{keyword}
\end{frontmatter}
\tableofcontents
\section{Introduction}
The expression ``shortcuts to adiabaticity'' (STA) was recently introduced in \cite{Ch10}, to describe protocols that speed up a quantum adiabatic process, usually, although not necessarily, through a non-adiabatic route.\footnote{ The word ``adiabatic'' may have two different meanings: the thermodynamical one (no heat transfer between system and environment) and the quantum one, as stated by \citet{BF28} in the adiabatic theorem: ``a physical system remains in its instantaneous eigenstate when a given perturbation is acting on it slowly enough and if there is a gap between the eigenvalue and the rest of the Hamiltonian's spectrum''. Here we shall always understand ``adiabatic'' in the quantum mechanical sense.} There, the Lewis-Riesenfeld invariants were used to inverse engineer the time dependence of a harmonic oscillator frequency between predetermined initial and final values so as to avoid final excitations. That paper and its companion on Bose-Einstein condensates \citep{MCRG09} have indeed triggered a surge of activity, not only for harmonic expansions \citep{energy,Muga10,Li10,Nice10,Nice11,delcampo11a,Nice11b,Li11,3d,Yidun,ErikFF,DB12,Li12}, but also for atom transport \citep{transport,BECtransport,OCTtrans,Bowler}, quantum computing \citep{PLA2011}, quantum simulations \citep{simu}, optical lattice expansions \citep{PLA2012,Yuce2}, wavepacket splitting \citep{split}, internal state control \citep{Chen11,NHSara,noise,Ban,Sara12,Yidun1}, many-body state engineering \citep{delcampo11,DB12,DRZ12,JJ}, and other applications such as sympathetic cooling of atomic mixtures \citep{Onofrio11,Onofrio12}, or cooling of nanomechanical resonators \citep{Lianao,Lianao2}.
In fact, several works had previously or simultaneously considered speeding up adiabatic processes making use of different techniques. For example, \citet{Rice03,Rice05,Rice08} and \citet{Berry09} proposed the addition of counterdiabatic terms to a reference Hamiltonian $H_0$ to achieve adiabatic dynamics with respect to $H_0$. This ``transitionless tracking algorithm'' \citep{Berry09} has been applied to manipulate the populations of two-level systems \citep{Rice05,Berry09,Ch10b,Oliver,expcd}. Another technique to design laser pulses for fast population transfer is parallel adiabatic passage \citep{PLAP1,PLAP2,PLAP3,PLAP4}. \citet{David} designed trap motions in order to perform non-adiabatic fast transport of cold atomic clouds. Also, \citet{MNProc} developed a ``fast-forward technique'' for several manipulations on wavepackets such as expansions, transport or splitting of Bose-Einstein condensates. Related work had also been carried out for wave packet splitting making use of optimal control \citep{S07,S09a,S09b}, and in the context of quantum refrigerators, to find fast ``frictionless'' expansions \citep{Salamon09,KoEPL09}. For recent developments on this line stimulated by invariant-based engineering results see \citet{KoPRE10,EPL11,KoPRL12,KoPRE12}.
The considerable number of publications on the subject, and a recent Conference on ``Shortcuts to adiabaticity'' held in Bilbao (16-20 July 2012), demonstrate much current interest, not only within the cold atoms and atomic physics communities but also from fields such as semiconductor physics and spintronics \citep{Ban}. Indeed, adiabatic processes are ubiquitous, so we may expect a broad range of applications, even beyond the quantum domain, since some of the concepts are easy to translate into optics \citep{SHAPEapp,Tseng} or mechanics \citep{NHSara}. Apart from the practical applications, the fundamental implications of shortcuts on quantum speed limits \citep{qb,Oliver,Moise2012}, time-energy uncertainty relations \citep{energy}, multiple Schr\"odinger pictures \citep{Sara12}, and the quantification of the third principle of thermodynamics and of maximal cooling rates \citep{Salamon09,KoEPL09,energy,KoPRE10,EPL11,KoPRL12,KoPRE12} are also intriguing and provide further motivation.
In this review we shall first describe different approaches to STA in Sec. \ref{gf}. While the main goal there is to construct new protocols for a fast manipulation of quantum states avoiding final excitations, additional conditions may be imposed. For example, ideally these protocols should not be state-specific but should work for an arbitrary state\footnote{Contrast this to the quantum brachistochrone \citep{qb}, in which the aim is to find a time-independent Hamiltonian that takes a given initial state to a given final state in minimal time. Studies of ``quantum speed limits'' adopt in general this state-to-state approach, as in \citet{Moise2012}.}. They should also be stable against perturbations, and keep the values of the transient energy and other variables manageable throughout the whole process. Several applications are discussed in Secs. 3 to 6. We have kept the notation consistent within each Section but not throughout the whole review, following when possible notations close to the original publications.
\section{General Formalisms\label{gf}}
\subsection{Invariant Based Inverse Engineering}
\label{inveng}
{\it Lewis-Riesenfeld invariants.--} The \citet{LR} theory is applicable to a quantum system that evolves with a time-dependent Hermitian Hamiltonian $H(t)$, which supports a Hermitian dynamical invariant $I(t)$ satisfying
\begin{equation}
i \hbar \frac{\partial I(t)}{\partial t } - [H(t), I(t)]=0.
\label{invad}
\end{equation}
Therefore its expectation values for an arbitrary solution of the time-dependent Schr\"{o}dinger equation $i \hbar \frac{\partial}{ \partial t } |\Psi (t)\rangle = H(t) |\Psi (t)\rangle$, do not depend on time. $I(t)$ can be used to expand $|\Psi(t)\rangle$ as a superposition of ``dynamical modes'' $|\psi_n (t) \rangle$,
\begin{equation}
|\Psi (t)\rangle=\sum_n c_n |\psi_n (t) \rangle,\;\; |\psi_n(t)\rangle=e^{i\alpha_n(t)}|\phi_n(t)\rangle,
\label{expan1}
\end{equation}
where $n=0,1,...$; $c_n$ are time-independent amplitudes, and $|\phi_n (t)\rangle$ are orthonormal eigenvectors of the invariant $I(t)$,
\begin{equation}
\label{invariant}
I(t) = \sum_n |\phi_n (t) \rangle \lambda_n\langle \phi_n (t)|.
The $\lambda_n$ are real constants, and the Lewis-Riesenfeld phases are defined as \citep{LR}
\begin{equation}
\label{LRphase}
\alpha_n (t) = \frac{1}{\hbar} \int_0^t \Big\langle \phi_n (t') \Big| i \hbar \frac{\partial }{ \partial t'} - H(t') \Big| \phi_n (t') \Big\rangle d t'.
\end{equation}
For simplicity we use a notation appropriate for a discrete spectrum of $I(t)$, but the generalization to a continuous or mixed spectrum is straightforward. We also assume a non-degenerate spectrum. Non-Hermitian invariants and Hamiltonians have been considered, for example, in \citet{Gao91,Gao92,Lohe,NHSara}.

{\it Inverse engineering.--} Suppose that we want to drive the system from an initial Hamiltonian $H(0)$ to a final one $H(t_f)$, in such a way that the populations in the initial and final instantaneous bases are the same, but admitting transitions at intermediate times. To inverse engineer a time-dependent Hamiltonian $H(t)$ and achieve this goal, we may first define the invariant through its eigenvalues and eigenvectors. The Lewis-Riesenfeld phases $\alpha_n(t)$ may also be chosen as arbitrary functions to write down the time-dependent unitary evolution operator ${U}$,
\begin{equation}
{U}= \sum_n e^{i \alpha_n (t)} |\phi_n (t) \rangle \langle \phi_n (0)|.
\end{equation}
$U$ obeys $i \hbar \dot{U} = H(t) {U}$, where the dot means time derivative. Solving this equation formally for $H(t)=i\hbar \dot{U}{U}^{\dag}$, we get
\begin{equation}
\label{inHa}
H(t)= - \hbar \sum_n |\phi_n(t)\rangle \dot{\alpha}_n \langle \phi_n(t)| + i \hbar \sum_n | \partial_t \phi_n (t) \rangle \langle \phi_n (t)|.
\end{equation}
According to Eq. (\ref{inHa}), for a given invariant there are many possible Hamiltonians corresponding to different choices of the phase functions $\alpha_n(t)$. In general $I(0)$ does not commute with $H(0)$, so the eigenstates of $I(0)$, $|\phi_n (0)\rangle$, do not coincide with the eigenstates of $H(0)$. $H(t_f)$ does not necessarily commute with $I(t_f)$ either. If we impose $[I(0), H(0)]=0$ and $[I(t_f), H(t_f)]=0$, the eigenstates do coincide at the boundary times, which guarantees a state transfer without final excitations.

In typical applications the Hamiltonians $H(0)$ and $H(t_f)$ are given, and set the initial and final configurations of the external parameters. Then we define $I(t)$ and its eigenvectors accordingly, so that the commutation relations are obeyed at the boundary times, and, finally, $H(t)$ is designed via Eq. (\ref{inHa}). While the $\alpha_n(t)$ may in principle be taken as fully free time-dependent phases, they may also be constrained by a pre-imposed or assumed structure of $H(t)$. Secs. 3, 4 and 5 present examples of how this works for expansions, transport, and internal state control. A generalization of this inverse method for non-Hermitian Hamiltonians was considered in \citet{NHSara}. Inverse engineering was also applied to accelerate the slow expansion of a classical particle in a time-dependent harmonic oscillator without final excitation; this system may be treated formally as a quantum two-level system with a non-Hermitian Hamiltonian \citep{Gao91,Gao92}.

{\it Quadratic-in-momentum invariants.--} \citet{LR} paid special attention to the time-dependent harmonic oscillator and its invariants quadratic in position and momentum.
Later on, \citet{LL} found, in the framework of classical mechanics, the general form of the Hamiltonian compatible with quadratic-in-momentum invariants, which includes non-harmonic potentials. This work and the corresponding quantum results of \citet{DL} constitute the basis of this subsection. A one-dimensional Hamiltonian with a quadratic-in-momentum invariant must have the form $H=p^2/2m+V(q,t)$,\footnote{$q$ and $p$ may denote operators or numbers. The context should clarify their exact meaning.} with the potential \citep{LL,DL}
\begin{equation}
\label{Vinv}
V(q,t)= -F(t)q+\frac{m}{2}\omega^2(t)q^2+\frac{1}{\rho(t)^2}U\left[\frac{q-q_c(t)}{\rho(t)}\right].
\end{equation}
$\rho$, $q_c$, $\omega$, and $F$ are arbitrary functions of time that satisfy the auxiliary equations
\begin{eqnarray}
\ddot{\rho}+\omega^2(t)\rho&=&\frac{\omega_0^2}{\rho^3},
\label{Erma}
\\
\ddot{q}_c+\omega^2(t)q_c &=& F(t)/m,
\label{alphaeq}
\end{eqnarray}
where $\omega_0$ is a constant. Their physical interpretation will be explained below and depends on the operation. A quadratic-in-$p$ dynamical invariant is given, up to a constant factor, by
\begin{equation}
\label{invaq}
I=\frac{1}{2m}[\rho(p-m\dot{q}_c)-m\dot{\rho}(q-q_c)]^2
+\frac{1}{2}m\omega_0^2\left(\frac{q-q_c}{\rho}\right)^2
+U\left(\frac{q-q_c}{\rho}\right).
\end{equation}
Now $\alpha_n$ in Eq. (\ref{LRphase}) satisfies \citep{LR,DL}
\begin{eqnarray}
\alpha_n =-\frac{1}{\hbar}\int_0^t {\rm d} t'\left(\frac{\lambda_n}{\rho^2}+ \frac{m(\dot{q}_c\rho-q_c\dot{\rho})^2}{2\rho^2}\right),
\end{eqnarray}
and the function $\phi_n$ can be written as \citep{DL}
\begin{equation}
\label{psin}
\phi_n (q,t)=e^{\frac{im}{\hbar}\left[\dot{\rho} q^2/2\rho+(\dot{q}_c\rho-q_c\dot{\rho})q/\rho\right]}\frac{1}{\rho^{1/2}}\Phi_n\bigg(\underbrace{\frac{q-q_c}{\rho}}_{=:\sigma}\bigg)
\end{equation}
in terms of the solution $\Phi_n(\sigma)$ (normalized in $\sigma$-space) of the auxiliary Schr\"odinger equation
\begin{equation}
\left[-\frac{\hbar^2}{2m}\frac{\partial^2}{\partial\sigma^2}+\frac{1}{2}m\omega_0^2\sigma^2+U(\sigma)\right]\Phi_n=\lambda_n\Phi_n.
\label{last}
\end{equation}
The strategy of invariant-based inverse engineering here is to design $\rho$ and $q_c$ first, so that $I$ and $H$ commute at the initial and final times (except for launching or stopping atoms, as in \citet{transport}). Then $H$ is deduced from Eq. (\ref{Vinv}). Applications will be discussed in Secs. \ref{secEXP} and \ref{secTRA}.

\subsection{Counterdiabatic or Transitionless Tracking Approach\label{secCD}}

For the transitionless driving or counterdiabatic approach, as formulated by \citet{Berry09} and equivalently by \citet{Rice03,Rice05,Rice08},\footnote{Berry's transitionless driving method is equivalent to the counterdiabatic approach of \citet{Rice03,Rice05,Rice08}. In Section \ref{alternative} we shall see how to further exploit this scheme together with ``superadiabatic iterations''.} the starting point is a time-dependent reference Hamiltonian,
\begin{equation}
H_0(t)=\sum_n | n_0(t)\rangle E^{(0)}_n(t) \langle n_0 (t)|.
\end{equation}
The approximate time-dependent adiabatic solution of the dynamics with $H_0$ takes the form
\begin{equation}
\label{aa}
|\psi_n^{(ad)} (t) \rangle = e^{i \xi_n (t)} |n_0(t)\rangle,
\end{equation}
where the adiabatic phase reads
\begin{equation}
\xi_n (t)=-\frac{1}{\hbar} \int^t_0 dt' E^{(0)}_n(t') + i\int^t_0 dt' \langle n_0(t')| \partial_{t'} n_0(t') \rangle.
\end{equation}
The approximate adiabatic vectors in Eq. (\ref{aa}) are defined differently from the dynamical modes of the previous subsection, but they may potentially coincide, as we shall see. Defining now the unitary operator
\begin{equation}
U= \sum_n e^{i \xi_n (t)} |n_0(t)\rangle \langle n_0(0)|,
\end{equation}
a Hamiltonian $H(t)=i \hbar \dot{U}{U}^{\dag}$ can be constructed to drive the system exactly along the adiabatic paths of $H_0(t)$, as $H(t)= H_{0}(t) + H_{cd}(t)$, where
\begin{eqnarray}
H_{cd}(t)= i \hbar \sum_n \bigg(|\partial_t n_0(t) \rangle \langle n_0(t) | -\langle n_0(t) | \partial_t n_0(t) \rangle | n_0(t) \rangle \langle n_0(t) |\bigg)
\end{eqnarray}
is purely non-diagonal in the $\{|n_0(t)\rangle\}$ basis. We may change the $E^{(0)}_n(t)$, and therefore $H_0(t)$ itself, while keeping the same $|n_0(t)\rangle$. We could, for example, make all the $E^{(0)}_n(t)$ zero, or set $\xi_n(t)=0$ \citep{Berry09}. Taking this freedom into account, the Hamiltonian for transitionless driving can generally be written as
\begin{eqnarray}
\label{NBH}
H (t)= -\hbar \sum_n |n_0 (t) \rangle \dot{\xi}_n \langle n_0 (t)|+ i \hbar \sum_n | \partial_t n_0(t) \rangle \langle n_0(t)|.
\end{eqnarray}
Subtracting $H_{cd}(t)$, the generic $H_0$ is
\begin{equation}
\label{general H_0}
H_0 (t)=\sum_n |n_0 (t) \rangle\big[i\hbar \langle n_0(t)|\partial_t n_0 (t) \rangle - \hbar \dot{\xi}_n \big]\langle n_0 (t)|.
\end{equation}
It is usually required that $H_{cd}(t)$ vanish for $t<0$ and $t>t_f$, either suddenly or continuously at the boundary times. In that case the $\{|n_0 (t) \rangle\}$ also become eigenstates of the full Hamiltonian at the extreme times (at least at $t=0^{-}$ and $t=t_f^{+}$). Using Eq. (\ref{invad}) and the orthonormality of the $\{|n_0(0)\rangle\}$, we may write invariants of $H(t)$ of the form $I(t)=\sum_n |n_0(t)\rangle \lambda_n\langle n_0(t)|$. For the simple choice $\lambda_n=E^{(0)}_n(0)$, $I(0)=H_0(0)$.

In this part, and also in Sec. \ref{inveng}, the invariant-based and transitionless-tracking-algorithm approaches have been presented in a common language to make their relations obvious. Reinterpreting the phases of Berry's method as $\xi_n (t)=\alpha_n (t)$, and the states as $|n_0(t)\rangle=|\phi_n(t)\rangle$, the Hamiltonians $H(t)$ in Eqs. (\ref{inHa}) and (\ref{NBH}) may be equated. Likewise, the $H_0(t)$ implicit in the invariant-based method is given by Eq. (\ref{general H_0}), so that the dynamical modes can also be understood as approximate adiabatic modes of $H_0(t)$ \citep{Chen11}. An important caveat is that the two methods could coincide, but they do not have to. Given $H(0)$ and $H(t_f)$, there is much freedom to interpolate between them using different invariants, phase functions, and reference Hamiltonians $H_0(t)$. In other words, these methods do not provide a unique shortcut but families of them. This flexibility enables us to optimize the path according to physical criteria and/or operational constraints.
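As a concrete illustration of the construction above, the following minimal numerical sketch (it is not taken from any of the cited works, and all parameter values are arbitrary illustrative choices, with $\hbar=1$) assembles $H_{cd}(t)$ from the instantaneous eigenvectors of a two-level reference Hamiltonian and checks that $H_0+H_{cd}$ keeps the populations of the instantaneous basis constant:
\begin{verbatim}
import numpy as np

# Sketch of transitionless tracking for a two-level reference Hamiltonian.
# H_cd is built from the instantaneous eigenvectors, then H = H_0 + H_cd is
# integrated and the final population of the instantaneous ground state is
# checked to remain ~1.  All numbers are arbitrary illustrative choices.

hbar = 1.0
tf, N = 1.0, 4000
t, dt = np.linspace(0.0, tf, N, retstep=True)
X = 1.0                                       # constant coupling
Z = 8.0 * (2.0 * t / tf - 1.0)                # linear sweep through the crossing

def H0(k):
    return np.array([[Z[k], X], [X, -Z[k]]], dtype=complex)

def eigvecs(k):
    """Instantaneous eigenvectors of H0, in a smooth gauge (real first component)."""
    _, v = np.linalg.eigh(H0(k))
    for j in range(2):
        if abs(v[0, j]) > 1e-12:
            v[:, j] *= np.exp(-1j * np.angle(v[0, j]))
    return v

psi = eigvecs(0)[:, 0]                        # start in the ground state of H0(0)
for k in range(N - 1):
    v, v_next = eigvecs(k), eigvecs(k + 1)
    dv = (v_next - v) / dt                    # finite-difference |d_t n_0(t)>
    Hcd = np.zeros((2, 2), dtype=complex)
    for j in range(2):
        Hcd += 1j * hbar * (np.outer(dv[:, j], v[:, j].conj())
                            - (v[:, j].conj() @ dv[:, j])
                            * np.outer(v[:, j], v[:, j].conj()))
    Hcd = 0.5 * (Hcd + Hcd.conj().T)          # symmetrise finite-difference noise
    w, U = np.linalg.eigh(H0(k) + Hcd)        # exponential step of the TDSE
    psi = U @ (np.exp(-1j * w * dt / hbar) * (U.conj().T @ psi))

ground_f = eigvecs(N - 1)[:, 0]
print(abs(ground_f.conj() @ psi) ** 2)        # ~ 1: transitionless driving
\end{verbatim}
For this two-level family, the numerically assembled $H_{cd}$ reduces, up to discretization error, to the closed form $\hbar(\dot{\Theta}_0/2)\sigma_y$ quoted below as Example 3 of Sec. \ref{alternative}.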
{\it Non-Hermitian Hamiltonians.--} A generalization is possible for non-Hermitian Hamiltonians in a weak non-hermiticity regime \citep{NHSara,erratum}. It was applied to engineer a shortcut laser interaction and accelerate the decay of a two-level atom subject to spontaneous decay. Note that the concept of ``population'' is problematic for non-Hermitian Hamiltonians \citep{Jolicard}. This affects in particular the definition of ``adiabaticity'' and of the shortcut concept. It is useful to rely instead on normalization-independent quantities, such as the norm of a wave-function component in a biorthogonal basis \citep{erratum}.

{\it Many-body systems.--} Following \citet{DRZ12}, transitionless quantum driving can be extended as well to many-body quantum critical systems, exploiting recent advances in the simulation of coherent $k$-body interactions \citep{kbody,Barreiro11}. In this context STA allow a finite-rate crossing of a second-order quantum phase transition without creating excitations. Consider the family of quasi-free fermion Hamiltonians in dimension $D$,
$\mathcal{H}_0 =\sum_{\mathbf{k}}\psi_{\mathbf{k}}^\dagger \left[ \vec{a}_{\mathbf{k}} (\lambda(t)) \cdot \vec{\sigma}_{\mathbf{k}} \right] \psi_{\mathbf{k}}$,
where the $\mathbf{k}$-mode Pauli matrices are $\vec \sigma_{\mathbf{k}} \equiv (\sigma_{\mathbf{k}}^x, \sigma_{\mathbf{k}}^y,\sigma_{\mathbf{k}}^z )$, $\psi_{\mathbf{k}}^\dagger = (c_{\mathbf{k},1}^\dagger,c_{\mathbf{k},2}^\dagger)$ are fermionic operators, and the sum runs over independent $\mathbf{k}$-modes. Particular instances of quantum critical models within this family of Hamiltonians are the Ising and XY models in $D=1$ \citep{Sachdev}, and the Kitaev model in $D=2$ \citep{EK} and $D=1$ \citep{1dKitaev}. The function $\vec a_{\mathbf{k}} (\lambda) \equiv (a^x_{\mathbf{k}} (\lambda),a^y_{\mathbf{k}} (\lambda),a^z_{\mathbf{k}} (\lambda))$ is specific to each model \citep{Dziarmaga10}. All these models can be written as a sum of independent Landau-Zener crossings, where the instantaneous $\mathbf{k}$-mode eigenstates have eigenenergies
$\varepsilon_{\mathbf{k},\pm}=\pm |\vec a_{\mathbf{k}}(\lambda)| = \pm \sqrt{a_{\mathbf{k}}^x(\lambda)^2+a_{\mathbf{k}}^y(\lambda)^2+a_{\mathbf{k}}^z(\lambda)^2}$.
It is possible to cross the quantum critical point adiabatically, driving the dynamics along the instantaneous eigenmodes of $\mathcal{H}_0$, provided that the dynamics is generated by the modified Hamiltonian $\mathcal{H}=\mathcal{H}_0+\mathcal{H}_{cd}$, where \citep{DRZ12}
\begin{eqnarray}
\mathcal{H}_{cd} &=& \lambda'(t)\sum_{\mathbf{k}}\frac{1}{2 |\vec a_{\mathbf{k}}(\lambda)|^2} \psi_{\mathbf{k}}^{\dag} \left[ (\vec a_{\mathbf{k}} (\lambda) \times \partial_\lambda \vec a_{\mathbf{k}} (\lambda)) \cdot \vec \sigma_{\mathbf{k}} \right] \psi_{\mathbf{k}}
\end{eqnarray}
is typically highly non-local in real space and involves many-body interactions in the spin representation. However, it was shown for the 1D quantum Ising model that a truncation of $\mathcal{H}_{cd}$ with interactions restricted to a range $M$ suffices to suppress excitations in the modes $k>M^{-1}$ \citep{DRZ12}.
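The $\mathbf{k}$-mode structure of $\mathcal{H}_{cd}$ makes it easy to evaluate term by term. The sketch below (with $\hbar=1$) is purely illustrative and not taken from the cited works; the vector $\vec a_{\mathbf{k}}$ is written in one common convention for the 1D transverse-field Ising chain, and switching models only amounts to changing that function:
\begin{verbatim}
import numpy as np

# Illustrative sketch (hbar = 1): the 2x2 counterdiabatic term of one k mode,
#   lambda_dot * (a_k x d_lambda a_k) . sigma / (2 |a_k|^2),
# with a_k(lambda) written here in one common convention for the 1D
# transverse-field Ising chain (an assumption made only for illustration).

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def a_k(lam, k):
    return np.array([np.sin(k), 0.0, lam - np.cos(k)])

def h_cd_mode(lam, lam_dot, k, eps=1e-6):
    a = a_k(lam, k)
    da = (a_k(lam + eps, k) - a_k(lam - eps, k)) / (2 * eps)   # d_lambda a_k
    coeff = lam_dot * np.cross(a, da) / (2 * np.dot(a, a))
    return coeff[0] * sx + coeff[1] * sy + coeff[2] * sz

# For this convention only the sigma^y component survives,
#   -lam_dot * sin(k) / (2 * (lam**2 - 2*lam*np.cos(k) + 1)),
# and it grows without bound as lam -> 1 and k -> 0, i.e. near the critical
# point, which is the small-k structure behind the long range of H_cd.
print(h_cd_mode(lam=0.9, lam_dot=1.0, k=0.3))
\end{verbatim}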
\subsection{Fast-forward Approach\label{secmethod}}

Based on some earlier results \citep{MNPra}, the fast-forward (FF) formalism for adiabatic dynamics, together with application examples, was worked out in \citet{MNProc,MN11,Masuda2012} for the Gross-Pitaevskii equation or the corresponding Schr\"odinger equation. The aim of the method is to accelerate a ``standard'' system subjected to a slow variation of external parameters, by canceling the divergence due to an infinitely large magnification factor against the infinitesimal slowness due to adiabaticity. A fast-forward potential is constructed which leads to the sped-up evolution but, as a consequence of the different steps and functions introduced, the method is somewhat involved, which possibly hinders a broader application. The streamlined construction of fast-forward potentials presented in \citet{ErikFF} is followed here.

The starting point is the 3D time-dependent Gross-Pitaevskii (GP) equation \citep{Dalfovo}
\begin{equation}
\label{start}
i\hbar\frac{\partial \psi(\mathbf{x},t)}{\partial t} = -\frac{\hbar^2}{2m} \nabla^2 \psi(\mathbf{x},t) + V(\mathbf{x},t)\psi(\mathbf{x},t) + g_{\rm 3}|\psi(\mathbf{x},t)|^2\psi(\mathbf{x},t).
\end{equation}
Using the ansatz $\psi(\mathbf{x},t)=r(\mathbf{x},t)e^{i\phi(\mathbf{x},t)}$, with $r(\mathbf{x},t), \phi(\mathbf{x},t) \in \mathbb{R}$, we formally solve for $V(\mathbf{x},t)$ in Eq. (\ref{start}) and get for the real and imaginary parts
\begin{eqnarray}
{\rm{Re}}[V(\mathbf{x},t)]&=&-\hbar{\dot \phi}+\frac{\hbar^2}{2m}\bigg(\frac{\nabla^2 r}{r}-(\nabla \phi)^2\bigg)-g_3r^2,
\label{real}
\\
{\rm{Im}}[V(\mathbf{x},t)]&=&\hbar\frac{\dot r}{r}+\frac{\hbar^2}{2m}\bigg(\frac{2\nabla \phi\cdot \nabla r}{r}+\nabla^2 \phi\bigg).
\label{imag}
\end{eqnarray}
Imposing ${\rm{Im}}[V(\mathbf{x},t)]=0$, i.e.,
\begin{equation}
\frac{\dot r}{r}+\frac{\hbar}{2m}\bigg(\frac{2\nabla \phi\cdot \nabla r}{r}+\nabla^2 \phi\bigg)=0,
\label{imag0}
\end{equation}
Eq. (\ref{real}) gives a real potential. In the inversion protocol it is assumed that the full Hamiltonian and the corresponding eigenstates are known at the boundary times. Then we design $r(\mathbf{x},t)$, solve for $\phi$ in Eq. (\ref{imag0}), and finally get the potential $V$ from Eq. (\ref{real}). In \citet{ErikFF} it was shown how the work of \citet{MNPra,MNProc,MN11} relates to this streamlined construction. Since the phase $\phi$ that solves Eq. (\ref{imag0}) depends in general on the particular $r(\mathbf{x},t)$, Eq. (\ref{real}) gives in principle a state-dependent potential. However, in some special circumstances the fast-forward potential remains the same for all modes. This happens in particular for the Schr\"odinger equation, $g_3=0$, and the Lewis-Leach potentials associated with quadratic-in-momentum invariants. In other words, the invariant-based approach can be formulated as a special case of this simple inverse method \citep{ErikFF}.
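In one dimension, Eq. (\ref{imag0}) is just the continuity equation $(m/\hbar)\,\partial_t r^2+\partial_x(r^2\partial_x\phi)=0$, so the inversion reduces to quadratures. The following numerical sketch is purely illustrative (the Gaussian ansatz for $r$ and all numbers are arbitrary assumptions, not taken from the cited works); once $\phi$ is known on neighbouring time slices, Eq. (\ref{real}) yields the fast-forward potential:
\begin{verbatim}
import numpy as np

# Illustrative 1D sketch of the streamlined fast-forward inversion:
# design r(x,t), get dphi/dx from the continuity form of Eq. (imag0),
#   dphi/dx = -(m/hbar) * (1/r^2) * int_{-inf}^{x} d/dt r^2(x',t) dx',
# then Re V follows from Eq. (real).  Gaussian ansatz and numbers are arbitrary.

hbar = m = 1.0
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]

def r(xx, t, sigma0=1.0, rate=0.3):
    """Designed amplitude: a normalized Gaussian whose width grows in time."""
    s = sigma0 * (1.0 + rate * t)
    return (np.pi * s**2) ** -0.25 * np.exp(-xx**2 / (2.0 * s**2))

t, dt = 1.0, 1e-5
dr2_dt = (r(x, t + dt)**2 - r(x, t - dt)**2) / (2.0 * dt)   # d/dt r^2
flux = -(m / hbar) * np.cumsum(dr2_dt) * dx                  # = r^2 * dphi/dx
dphi_dx = flux / r(x, t)**2
phi = np.cumsum(dphi_dx) * dx                                # phase, up to a constant
# Re V(x,t) = -hbar*phidot + (hbar^2/2m)*(r_xx/r - dphi_dx**2) - g1*r^2,
# with phidot obtained by repeating the construction on neighbouring time slices.
\end{verbatim}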
\subsection{Alternative Shortcuts Through Unitary Transformations\label{alternative}}

Shortcuts found via the methods described so far, or by any other approach, might be difficult to implement in practice. In the cd approach, for instance, the structure of the complementary Hamiltonian $H_{cd}$ could be quite different from the structure of the reference Hamiltonian $H_0$. Here are three examples, the first two for a particle of mass $m$ in 1D, the third one for a two-level system:

- Example 1: Harmonic oscillator expansions \citep{Muga10}, see Sec. \ref{secEXP}:
\begin{equation}
H_0=p^2/(2m)+m\omega^2 q^2/2, \;\; H_{cd}=-(pq+qp)\dot{\omega}/(4 \omega).
\end{equation}

- Example 2: Harmonic transport with a trap of constant frequency $\omega_0/2\pi$ and displacement $q_0(t)$ \citep{transport}, see Sec. \ref{secTRA}:
\begin{equation}
H_0=p^2/(2m)+(q-q_0(t))^2m\omega_0^2/2, \;\; H_{cd}=p\dot{q}_0.
\end{equation}

- Example 3: Population inversion in a two-level system \citep{Berry09,Ch10b,Sara12}, see Sec. \ref{secINT}:
\begin{equation}
H_0=\left(
\begin{array}{cc}
Z_0&X_0
\\
X_0&-Z_0
\end{array}
\right), \;\;
H_{cd}=\hbar(\dot{\Theta}_0/2)\sigma_y,
\label{28}
\end{equation}
where $\Theta_0=\arccos(Z_0/R_0)$ is a polar angle and $R_0=(X_0^2+Z_0^2)^{1/2}$.

In all these examples the experimental implementation of $H_0$ is possible, but the realization of the counterdiabatic terms is problematic. A way out is provided by unitary transformations that generate alternative shortcut protocols without the undesired terms in the Hamiltonian \citep{Sara12}. A standard tool is the use of different interaction pictures to describe one physical setting. Unitary operators $\mathcal{U} (t)$ connect the different pictures, and the goal is frequently to work in the picture that facilitates the mathematical manipulations. In this standard scenario all pictures describe the same physics, the same physical experiments and manipulations. The main idea in \citet{Sara12} is to regard the unitary transformations instead as a way to generate different physical settings and different experiments, not just as mathematical transformations.

The starting point is a shortcut described by the Schr\"odinger equation $i\hbar\partial_t \psi(t)=H(t)\psi(t)$, our reference protocol. (In all the above examples $H=H_0+H_{cd}$.) The new dynamics is given by $i\hbar\partial_t \psi'(t)=H'(t)\psi'(t)$, where $\psi'(t)={\mathcal{U}}(t)^\dagger\psi(t)$ and $H'= {\mathcal{U}}^\dagger(H-K){\mathcal{U}}$, with $K=i\hbar\dot{{\mathcal{U}}}{\mathcal{U}}^\dagger$. If ${\mathcal{U}}(0)={\mathcal{U}}(t_f)=1$, the final states coincide, i.e., $\psi'(t_f)=\psi(t_f)$ for a given initial state $\psi'(0)=\psi(0)$. If, in addition, $\dot{{\mathcal{U}}}(0)=\dot{{\mathcal{U}}}(t_f)=0$, then $H(0)=H'(0)$ and $H(t_f)=H'(t_f)$. Let us now list the unitary transformations that provide realizable Hamiltonians for the three examples \citep{Sara12}:

- Example 1: Harmonic oscillator expansions,
\begin{equation}
{\mathcal{U}}=\exp{\bigg(i\frac{m\dot{\omega}}{4\hbar \omega}q^2\bigg)}, \;\;
H'=p^2/(2m)+m{\omega'}^2 q^2/2,
\label{u1}
\end{equation}
where $\omega'=\left[\omega^2-\frac{3\dot{\omega}^2}{4\omega^2}+ \frac{\ddot{\omega}}{2\omega}\right]^{1/2}$.

- Example 2: Harmonic transport,
\begin{equation}
{\mathcal{U}}= \exp{(-im\dot{q}_0q/\hbar)}, \;\;
H'=p^2/(2m)+(q-q'_0(t))^2m\omega_0^2/2,
\label{u2}
\end{equation}
where $q_0'=q_0+\ddot{q}_0/\omega_0^2$.
- Example 3: Population inversion in a two-level system,
\begin{equation}
{\mathcal{U}}=\left(
\begin{array}{cc}
e^{-i\phi/2}&0
\\
0&e^{i\phi/2}
\end{array}
\right), \;\;
H'=\left(
\begin{array}{cc}
Z_0-\hbar\dot{\phi}/2&P
\\
P&-Z_0+\hbar\dot{\phi}/2
\end{array}
\right),
\label{u3}
\end{equation}
where $\phi=\arctan[\hbar\dot{\Theta}_0/(2X_0)]$, $0\le\phi < 2\pi$, and $P=[X_0^2+(\hbar\dot{\Theta}_0/2)^2]^{1/2}$.

Why do the ${\mathcal{U}}$s in Eqs. (\ref{u1}-\ref{u3}) have these forms? The answer lies in the symmetry possessed by the Hamiltonian. Transformations of the form ${\mathcal{U}}=e^{if(t)G_j}$, based on generators $G_j$ of the corresponding Lie algebra, produce operators within the algebra, and by suitably choosing the function $f(t)$ the undesired terms may be eliminated.
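For Example 1 the alternative protocol amounts to evaluating a single formula. The sketch below is only illustrative (the reference ramp $\omega(t)$ and all numbers are arbitrary assumptions, not taken from the cited works):
\begin{verbatim}
import numpy as np

# Illustrative sketch of Eq. (u1): given a reference omega(t), the physically
# implemented trap has
#   omega'(t)^2 = omega^2 - 3*omegadot^2/(4*omega^2) + omegaddot/(2*omega).
# The reference ramp and all numbers below are arbitrary choices.

w0, wf, tf = 2*np.pi*250.0, 2*np.pi*2.5, 6e-3        # rad/s, rad/s, s
t = np.linspace(0.0, tf, 4001)
s = t / tf
w = w0 + (wf - w0) * (10*s**3 - 15*s**4 + 6*s**5)    # smooth reference omega(t)
wdot = np.gradient(w, t)
wddot = np.gradient(wdot, t)
w_prime_sq = w**2 - 3.0*wdot**2/(4.0*w**2) + wddot/(2.0*w)
# If w_prime_sq dips below zero the alternative trap is transiently a parabolic
# repeller; for slow enough ramps it stays positive and omega'(t) is an
# ordinary time-dependent harmonic trap.
\end{verbatim}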
{\it Superadiabatic iterations.--} As discussed in Sec. \ref{secCD}, Demirplak, Rice, and Berry proposed to add a suitable counterdiabatic (cd) term\footnote{This is the $H_{cd}$ term of Sec. \ref{secCD}. The superscript $^{(0)}$ is added now to distinguish it from the higher-order cd terms introduced below.} $H_{cd}^{(0)}$ to the time-dependent Hamiltonian $H_0(t)$ so as to follow the adiabatic dynamics of $H_0$. The same $H_{cd}^{(0)}$ also appears naturally when studying the adiabatic approximation of the original system, i.e., the one evolving with $H_0$. This system behaves adiabatically, following the eigenstates of $H_0$, precisely when the counterdiabatic term is negligible. This is evident in an interaction picture (IP) based on the unitary transformation $A_0(t)=\sum_n |n_0(t)\rangle\langle n_0(0)|$, such that $\psi_1(t)=A_0^\dagger\psi_0$. In this IP the new Hamiltonian is $H_1(t)=A_0^\dagger(t)(H_0(t)-K_0(t))A_0(t)$, with $K_0(t)=i\hbar \dot{A}_0(t)A_0^\dagger(t)$. If $K_0(t)$ is zero or negligible, $H_1(t)$ becomes diagonal in the basis $\{|n_0(0)\rangle\}$, so that the IP equation is an uncoupled system with solutions
\begin{equation}
|\psi_1(t)\rangle=\sum_n |n_0(0)\rangle e^{-\frac{i}{\hbar}\int_0^t E_n^{(0)}(t')dt'} \langle n_0(0)|\psi_1(0)\rangle.
\end{equation}
Correspondingly, $|\psi_0(t)\rangle=\sum_n |n_0(t)\rangle e^{-\frac{i}{\hbar}\int_0^t E_n^{(0)}(t')dt'} \langle n_0(0)|\psi_0(0)\rangle$. The same solution, which for a non-zero $K_0$ is only approximate, is found exactly by adding to the IP Hamiltonian the counterdiabatic term $A_0^\dagger(t)K_0(t)A_0(t)$. This requires an external intervention and changes the physics of the original system. In the IP the modified Hamiltonian is $H^{(1)}\equiv H_1+A_0^\dagger(t)K_0(t)A_0(t)= A_0^\dagger(t)H_0(t)A_0(t)$, and in the Schr\"odinger picture (SP) the modified Hamiltonian is $H_0^{(1)}(t)=H_0(t)+K_0(t)$, so we identify $H_{cd}^{(0)}(t)=K_0(t)$. In other words, a ``small'' coupling term $K_0$, which makes the adiabatic approximation a good one, also implies a small counterdiabatic manipulation. However, irrespective of the size of $K_0$, $H_0^{(1)}(t)$ provides a shortcut to slow adiabatic following because it keeps the populations in the instantaneous basis of $H_0$ invariant, in particular at the final time $t_f$.

Looking for generalized adiabatic approximations, \citet{Garrido}, \citet{Berrysa} or \citet{NMR} have investigated further iterative interaction pictures and the corresponding approximations. The idea is best understood by working out the next iteration explicitly: one starts with $i\hbar\partial_t \psi_1(t)=H_1\psi_1(t)$ and diagonalizes $H_1(t)$ to produce its eigenbasis $\{|n_1(t)\rangle\}$. A unitary operator $A_1=\sum_n|n_1(t)\rangle\langle n_1(0)|$ now plays the same role as $A_0$ in the previous IP. It defines a new IP wave function $\psi_2(t)=A_1^\dagger(t)\psi_1$ that satisfies $i\hbar\partial_t \psi_2(t)=H_2\psi_2(t)$, where $H_2(t)=A_1^\dagger(t)(H_1(t)-K_1(t))A_1(t)$ and $K_1=i\hbar\dot{A}_1A_1^\dagger$. If $K_1$ is zero or ``small'' enough, i.e., if a (first-order) superadiabatic approximation is valid, the dynamics is uncoupled in the new interaction picture, namely,
\begin{equation}
\label{psi2unc}
|\psi_2(t)\rangle=\sum_n |n_1(0)\rangle e^{-\frac{i}{\hbar}\int_0^t E_n^{(1)}(t')dt'} \langle n_1(0)|\psi_2(0)\rangle.
\end{equation}
We may get the same result by changing the physics and adding $A_1^\dagger(t)K_1(t)A_1(t)$ to $H_2$ \citep{Rice08,Sara12}. In the SP the added interaction becomes a first-order counterdiabatic term $H_{cd}^{(1)}=A_0 K_1 A_0^\dagger$. Transforming back to the SP and using $A_j(0)=1$, the state (\ref{psi2unc}) becomes
\begin{equation}
|\psi_0(t)\rangle=\sum_n\sum_m |m_0(t)\rangle \langle m_0(0)|n_1(t)\rangle e^{-\frac{i}{\hbar}\int_0^t E_n^{(1)}(t')dt'} \langle n_1(0)|\psi_0(0)\rangle.
\end{equation}
Quite generally the populations of the final state in the adiabatic basis $\{|n_0(t_f)\rangle\}$ will be different from those of the adiabatic process, unless $|n_1(t_f)\rangle=|n_0(0)\rangle$ and $|n_1(0)\rangle=|n_0(0)\rangle$, up to phase factors. The first condition is satisfied if $K_0(t_f)=0$ and the second one if $K_0(0)=0$. Then the superadiabatic process actually leads to the same final populations as an adiabatic one, possibly with different phases for the individual components. Similarly, the first-order counterdiabatic term $H_{cd}^{(1)}$ provides a shortcut with $H_0^{(2)}=H_0+H_{cd}^{(1)}$ in the SP, different from the one carried out by $H_{0}^{(1)}$. Moreover, if $K_1(0)=K_1(t_f)=0$, then $H_0^{(2)}=H_0$ at $t=0,t_f$. Further iterations define higher-order superadiabatic frames.

Is there any advantage in using one or another counterdiabatic scheme? There are two reasons that could make higher-order schemes attractive in practice. One is that the structure of the $H_{cd}^{(j)}$ may change with $j$. For example, for a two-level population inversion $H_{cd}^{(0)}=\hbar(\dot{\Theta}_0/2)\sigma_y$, whereas $H_{cd}^{(1)}=\hbar(\dot{\Theta}_1/2)(\cos\Theta_0 \sigma_x-\sin\Theta_0\sigma_z)$, where $\Theta_1$ is the polar angle corresponding to the Cartesian components of $H_1=X_1\sigma_x+Y_1\sigma_y+Z_1\sigma_z$ \citep{Sara12}. The second reason is that, for a fixed process time, the cd terms become smaller in norm as $j$ increases, up to a value of $j$ at which they begin to grow, see e.g. \citet{NMR}. One should pay attention, though, not only to the size of the cd terms but also to the feasibility of the boundary conditions at the time edges, to really generate shortcuts in this manner.

\subsection{Optimal Control Theory}

Optimal control theory (OCT) is a vast field covering many techniques and applications.
As for STA, fast expansions \citep{Salamon09}, wavepacket splitting \citep{S07,S09a,S09b}, transport \citep{Calarco}, and many-body state preparation \citep{RC09} have been addressed with different OCT approaches. The combination of OCT techniques with invariant-based engineering is particularly fruitful, since the latter provides by construction families of protocols that achieve perfect fidelity or vanishing final excitation, whereas OCT may help to select, among the many possible protocols, the ones that optimize some physically relevant variable \citep{Li10,Li11,OCTtrans,Li12}. In this context the theory used so far is the maximum principle of \citet{LSP}. For a dynamical system $\dot{\mathbf{x}}=\mathbf{f}(\mathbf{x}(t),u)$, where $\mathbf{x}$ is the state vector and $u$ the scalar control, and for the cost function $J(u)=\int_0^{t_f} g(\mathbf{x}(t),u)dt$ to be minimized, the principle states that the coordinates of the extremal vector $\mathbf{x}(t)$ and of the corresponding adjoint state $\mathbf{p}(t)$, formed by Lagrange multipliers, fulfill Hamilton's equations for a control Hamiltonian $H_c=p_0g(\mathbf{x}(t),u)+\mathbf{p}^T\cdot \mathbf{f}(\mathbf{x}(t),u)$. For almost all times during the process, $H_c$ attains its maximum at $u=u(t)$ and $H_c=c$, where $c$ is a constant. We shall discuss specific applications in Secs. \ref{secEXP} and \ref{secTRA}.

\section{Expansions of trapped particles\label{secEXP}}

Performing fast expansions of trapped cold atoms without losing or exciting them is important for many applications: for example, to reduce velocity dispersion and collisional shifts in spectroscopy and atomic clocks, to decrease the temperature, to adjust the density to avoid three-body losses, to facilitate temperature and density measurements, or to change the size of the cloud for further manipulations. Of course, trap compressions are also quite common. For harmonic traps we may address expansion or compression processes with the quadratic-in-$p$ invariants theory by setting $q_c=U=F=0$ in Eq. (\ref{Vinv}). This means that Eq. (\ref{alphaeq}) does not play any role, and the important auxiliary equation is the ``Ermakov equation'' (\ref{Erma}). The physical meaning of $\rho$ is determined by its proportionality to the standard deviation of the position of the ``expanding (or contracting) modes'' $e^{i\alpha_n}\phi_n$. Here we shall discuss the expansion from $\omega(0)=\omega_0$ to $\omega(t_f)=\omega_f$ \citep{Ch10}. Choosing
\begin{equation}
\label{bct0}
\rho(0)=1,\;\; \dot{\rho}(0)=0,
\end{equation}
$H(0)$ and $I(0)$ commute; they actually become equal and have common eigenfunctions. Consistent with the Ermakov equation, $\ddot{\rho}(0)=0$ holds as well for a continuous frequency. At $t_f$ we impose\footnote{If $\ddot{\rho}(t_f)\ne 0$ the final frequency would not be $\omega_f$ but $\omega(t_f)=[\omega_f^2-\ddot{\rho}(t_f)/\gamma]^{1/2}$. If discontinuities are allowed and the frequency is changed abruptly from $\omega(t_f)$ to $\omega_f$, the excitations will also be avoided, at least in principle. A similar discontinuity is possible at $t=0$ if $\ddot{\rho}(0)\ne 0$ and the frequency jumps abruptly from $\omega_0$ to $\omega(0)=[\omega_0^2-\ddot{\rho}(0)]^{1/2}$.}
\begin{equation}
\rho(t_f)=\gamma=(\omega_0/\omega_f)^{1/2},\;\; \dot{\rho}(t_f)=0,\;\; \ddot{\rho}(t_f)=0.
\label{bctf}
\end{equation}
In this manner the expanding mode is an instantaneous eigenvector of $H$ at $t=0$ and $t_f$, regardless of the exact form of $\rho(t)$. To fix $\rho(t)$, one chooses a functional form that interpolates between these two times and is flexible enough to satisfy the boundary conditions. A simple polynomial ansatz is $\rho (t) = 6 \left(\gamma -1\right) s^5 -15 \left(\gamma-1\right) s^4 +10 \left(\gamma-1\right)s^3 + 1$, where $s=t/t_f$ \citep{Palao}. The next step is to solve for $\omega(t)$ in Eq. (\ref{Erma}). This procedure poses no fundamental lower limit to $t_f$, which could in principle be arbitrarily small. There are nevertheless practical limitations and/or prices to pay. For short enough $t_f$, $\omega(t)$ may become purely imaginary at some $t$ \citep{Ch10}, and the potential becomes a parabolic repeller. Another difficulty is that the required transient energy may be too high, as discussed in \citet{energy} and in the following subsection. Since actual traps are only approximately harmonic, large transient energies will imply perturbing effects of anharmonicities, and thus undesired excitations of the final state, or even atom losses.
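A compact numerical sketch of this inversion (with arbitrary illustrative parameter values, not those of the cited experiments) simply reads the Ermakov equation (\ref{Erma}) backwards, $\omega^2(t)=\omega_0^2/\rho^4-\ddot{\rho}/\rho$:
\begin{verbatim}
import numpy as np

# Sketch of the invariant-based expansion protocol: fix the polynomial rho(t)
# quoted above, then read omega(t) off the Ermakov equation,
#   omega^2(t) = omega_0^2 / rho^4 - rhoddot / rho.
# All parameter values are arbitrary illustrative choices.

w0, wf = 2*np.pi*250.0, 2*np.pi*2.5          # initial/final frequencies (rad/s)
tf = 2e-3                                     # expansion time (s)
gamma = np.sqrt(w0 / wf)

s = np.linspace(0.0, 1.0, 2001)               # s = t/tf
rho   = 6*(gamma-1)*s**5 - 15*(gamma-1)*s**4 + 10*(gamma-1)*s**3 + 1.0
ddrho = (120*(gamma-1)*s**3 - 180*(gamma-1)*s**2 + 60*(gamma-1)*s) / tf**2

w_sq = w0**2 / rho**4 - ddrho / rho
print(np.sqrt(w_sq[0])/(2*np.pi), np.sqrt(w_sq[-1])/(2*np.pi))  # 250.0, 2.5 Hz
print(w_sq.min() < 0.0)   # True for this short tf: the trap transiently
                          # becomes a parabolic repeller, as noted above.
\end{verbatim}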
\subsection{Transient Energy Excitation}

Knowing the transient excitation energy is also important to quantify the principle of unattainability of zero temperature, first enunciated by Nernst. This principle is usually formulated as the impossibility of reducing the temperature of any system to absolute zero in a finite number of operations, and is identified with the third law of thermodynamics. Kosloff and coworkers \citep{Salamon09} have restated the unattainability principle for quantum refrigerators as the vanishing of the cooling rate when the temperature of the cold bath approaches zero, and quantify it by the scaling law that relates the cooling rate and the cold-bath temperature. We shall examine here the consequences of the transient energy excitation on the unattainability principle in two ways: for a single, isolated expansion, and considering the expansion as one of the branches of a quantum refrigerator cycle \citep{energy}.

A lower bound ${\mathcal B}_n$ for the time-averaged energy of the $n$-th expanding mode, $\overline{E_n}$ (time averages from $0$ to $t_f$ will be denoted by a bar), is found by applying calculus of variations \citep{energy}, so that $\overline{E_n}\ge{\mathcal B}_n$. If the final frequency $\omega_f$ is small enough to satisfy $t_f \ll 1/\sqrt{\omega_0 \omega_f}$, and $\gamma \gg 1$, the lower bound has the asymptotic form ${\mathcal B}_n\approx {(2n+1) \hbar}/{(2 \omega_f t^2_f)}$. A consequence is that $t_f \geq \sqrt{(2n+1) \hbar/(2 \omega_f \overline{E_n})}$. When $\overline{E_n}$ is limited, because of anharmonicities or a finite trap depth, the scaling is fundamentally the same as the one found for bang-bang methods with real frequencies \citep{Salamon09}, and leads to a cooling rate $R\propto T_c^{3/2}$ in an inverse quantum Otto cycle (the proportionality factor may be improved by increasing the allowed $\overline{E_n}$). This dependence had previously been conjectured to be a universal one, characterizing the unattainability principle for any cooling cycle \citep{KoEPL09}. The results in \citet{energy} provide strong support for the validity of this conjecture within the set of processes defined by ordinary harmonic oscillators with time-dependent frequencies.

In \citep{EPL11} a faster rate, $\sim -T_c/\log T_c$, is found with optimal control techniques for bounded trap frequencies that are allowed to become imaginary. There is no contradiction with the previous scaling, since bounding the trap frequencies does not bound the system energy. In other words, achieving such fast cooling is not possible if the energy cannot become arbitrarily large. Independently of the participation of the harmonic-trap expansion as a branch in a refrigerator cycle, we may apply the previous analysis also to a single expansion, assuming that the initial and final states are canonical density operators characterized by temperatures $T_0$ and $T_f$. These are related by $T_f=(\omega_f/\omega_0)T_0$ for a population-preserving process. In a harmonic-potential expansion, the unattainability of zero temperature can thus be reformulated as follows: the transient excitation energy becomes infinite for any population-preserving and finite-time process when the final temperature is zero (which requires $\omega_f=0$). The excitation energy has to be provided by an external device, so a fundamental obstruction to reaching $T_f=0$ in a finite time is the need for a source of infinite power \citep{energy}.

The standard deviation of the energy was also studied numerically \citep{energy}. There it was found that the dominant dependences of its time average scale with $\omega_f$ and $t_f$ in the same way as the average energy. These dependences are different from the ones in the \citet{AA} (AA) relation $\overline{\Delta H}\, t_f \geq \frac{h}{4}$, where $\overline{{\Delta H}}= {\int^{t_f}_0 \Delta H (t) dt}/{t_f}$.

\subsection{Three Dimensional Effects}

The previous discussion is limited to one dimension (1D), but actual traps are three-dimensional and at most effectively 1D. \citet{3d} worked out the theory and performed numerical simulations of fast expansions of cold atoms in a three-dimensional Gaussian-beam optical trap. Three different methods to avoid final motional excitation were compared: inverse engineering using Lewis-Riesenfeld invariants, which provides the best overall performance; a bang-bang approach with one intermediate frequency; and a ``fast adiabatic approach''.\footnote{The adiabaticity condition for the harmonic oscillator is $|\sqrt{2}\dot{\omega}/(8 \omega^2)|\ll 1$. An efficient, but still adiabatic, strategy of \citet{Ch10} is to distribute $\dot{\omega}/\omega^2$ uniformly along the trajectory, i.e., $\dot{\omega}/\omega^2=c$, with $c$ constant. Solving this differential equation and imposing $\omega_f=\omega(t_f)$ we get $\omega(t)=\omega_0/[1-(\omega_f-\omega_0)t/(t_f\omega_f)]$. This may be enough for some applications. This function was successfully applied in \citet{Bowler}.} The optical trap considered in \citet{3d} is formed by a laser, red detuned with respect to an atomic transition, and is characterized in the harmonic approximation by longitudinal and radial frequencies. To fourth order in the coordinates the effective potential includes anharmonic terms and radial-longitudinal coupling terms. While magnetic traps allow for an independent control of longitudinal and radial frequencies \citep{Nice10,Nice11,Nice11b}, this is not the case for a simple laser trap. In \citet{3d} it was assumed that the time dependence of the longitudinal frequency is engineered to avoid final excitations according to the simple 1D harmonic theory.
The main conclusion of the study is that transitionless expansions in optical traps are feasible under realistic conditions. For the inverse engineering method, the main perturbation is due to the possible failure of adiabaticity in the radial direction, which can be suppressed or mitigated by increasing the laser waist. This waist increase would also reduce smaller perturbing effects due to longitudinal anharmonicity or radial-longitudinal coupling. The simple bang-bang approach fails because the time for the radial expansion is badly mismatched with respect to the ideal time, and the fast adiabatic method fails for short expansion times as a result of longitudinal excitations. Complications such as perturbations due to different noise types, and the consideration of condensates, gravity effects, or the transient realization of imaginary trap frequencies, are still open questions. Other extensions of \citet{3d} could involve the addition of a second laser for further control of the potential shape, or alternative trap shapes. Optical traps based on Bessel laser beams, for example, may be useful to decouple longitudinal and radial motions.

\subsection{Bose-Einstein Condensates\label{econd}}

In this section we shall discuss the possibility of realizing STA in a harmonically trapped Bose-Einstein condensate using a scaling ansatz. A mean-field description of this state of matter is based on the time-dependent Gross-Pitaevskii equation (GPE) \citep{Dalfovo},
\begin{eqnarray}
i\hbar\frac{\partial\Psi(\mathbf{x},t)}{\partial t}=\bigg[-\frac{\hbar^2}{2m}\Delta+\frac{1}{2}m\omega^2(t)\mathbf{x}^2+g_D|\Psi(\mathbf{x},t)|^2\bigg]\Psi(\mathbf{x},t).
\end{eqnarray}
Here $\Delta$ is the $D$-dimensional Laplacian operator and $g_D$ is the $D$-dimensional coupling constant. For a three-dimensional cloud, using the normalization $\int|\Psi(\mathbf{x},t)|^2d\mathbf{x}=1$, $g_{\rm 3}=\frac{4\pi\hbar^2Na}{m}$ for a condensate of $N$ atoms of mass $m$, interacting with each other through a contact Fermi-Huang pseudopotential parameterized by an $s$-wave scattering length $a$. In $D=1,2$ the corresponding expression for $g_{D}$ can be obtained by a dimensional reduction of the 3D GPE \citep{Salas}.

As a mean-field theory, the GPE overestimates the phase coherence of real Bose-Einstein condensates. The presence of phase fluctuations generally induces a breakdown of the dynamical self-similar scaling law that governs the dynamics of the expanding cloud, and the formation of density ripples. The conditions for quantum phase fluctuations to be negligible for STA were discussed in \citet{delcampo11a}. In the following we shall ignore phase fluctuations. The results of this section will be generalized to strongly correlated gases in Sec. \ref{mbsta}, including, as a particular case, the microscopic model of ultracold bosons interacting through $s$-wave scattering.

STA in the mean-field regime were designed in \citet{MCRG09}, based on the classic results by \citet{CD96} and \citet{KSS96}, who found the exact dynamics of the condensate wavefunction under a time modulation of the harmonic trap frequency. Consider a condensate wavefunction $\Psi(\mathbf{x},t=0)$, a solution of the time-independent GPE with chemical potential $\mu$ in a harmonic trap of frequency $\omega_0$, i.e., $(-\frac{\hbar^2}{2m}\Delta+\frac{1}{2}m\omega_0^2\mathbf{x}^2+g_D|\Psi(\mathbf{x},t=0)|^2-\mu)\Psi(\mathbf{x},t=0)=0$.
Under a modulation of the trap frequency $\omega(t)$, the scaling ansatz
\begin{eqnarray}
\label{tdbec}
\Psi(\mathbf{x},t)=\frac{1}{\rho^{\frac{D}{2}}}\exp\bigg[i\frac{m|\mathbf{x}|^2}{2\hbar}\frac{\dot{\rho}}{\rho}-i\frac{\mu\tau(t)}{\hbar}\bigg]\Psi\left(\frac{\mathbf{x}}{\rho},t=0\right)
\end{eqnarray}
is an exact solution of the time-dependent Gross-Pitaevskii equation provided that
\begin{eqnarray}
\ddot{\rho}+\omega(t)^2 \rho=\frac{\omega_0^2}{\rho^3}, \qquad g_D(t)=\frac{g_D(t=0)}{\rho^{2-D}},\qquad \tau(t)=\int_0^t \frac{dt'}{\rho^2}.
\end{eqnarray}
It follows that the scaling factor $\rho$ must be a solution of the Ermakov equation, precisely as in the single-particle harmonic oscillator case. This paves the way to engineering a shortcut to an adiabatic expansion or compression from the initial state $\Psi(\mathbf{x},t=0)$ to a target state $\Psi(\mathbf{x},t_f)=\Psi(\mathbf{x}/\rho,t=0)/\rho^{\frac{D}{2}}$ by designing the trajectory $\rho(t)$. The modulation of the coupling constant required in $D=1,3$ can be implemented with the aid of a Feshbach resonance \citep{MCRG09} or, in $D=1$, by a modulation of the transverse confinement \citep{Staliunas04,Engels07,delcampo11a}. The case $D=2$ requires no tuning in time of the coupling constant, as a result of the Pitaevskii-Rosch symmetry \citep{PR}. It has recently been suggested that this symmetry is broken upon quantization, constituting an instance of a quantum anomaly in ultracold gases \citep{OPL10}. To date no experiment has provided evidence in favor of this observation. We point out that observing a breakdown of shortcuts to expansions of 2D BEC clouds would help to verify this quantum-mechanical symmetry breaking.

An important simplification occurs in the Thomas-Fermi regime, where the mean-field energy dominates over the kinetic part. Assuming the validity of this regime along the dynamics, the scaling ansatz (\ref{tdbec}) becomes exact as long as the following consistency equations are satisfied,
\begin{eqnarray}
\ddot{\rho}+\omega(t)^2 \rho=\frac{\omega_0^2}{\rho^{D+1}}, \qquad g_D(t)=g_D(t=0),\qquad \tau(t)=\int_0^t \frac{dt'}{\rho^D}.
\end{eqnarray}
Hence, in the Thomas-Fermi regime it is possible to engineer a shortcut exactly, while keeping the coupling strength $g_D$ constant \citep{MCRG09}. Optimal control theory has recently been applied in this regime to find optimal protocols with a restriction on the allowed frequencies \citep{Li12}.

{\it Dimensional reduction and modulation of the non-linear interactions.--} For low-dimensional BECs, tightly confined in one or two directions, an effective tuning of the coupling constant can be achieved by modulating the trapping potential along the tightly confined axis, see e.g. \citet{Staliunas04}, a proposal explored experimentally in \citet{Engels07}. In a nutshell, the tightly confined degrees of freedom, decoupled from the weakly confined ones, are governed to a good approximation by a non-interacting Hamiltonian. It is then possible to perform a dimensional reduction of the 3D GPE and derive a lower-dimensional version for the weakly confined degrees of freedom, where the effective coupling constant inherits a dependence on the width of the transverse modes that have been integrated out. Adiabatically tuning the transverse confinement leads to a controlled tuning of the effective coupling constant. A faster-than-adiabatic modulation can be engineered by implementing a shortcut in the transverse degree of freedom.
Consider the 3D mean-field description
\begin{eqnarray}
\label{ad_3DGPE}
i\hbar\frac{\partial\Psi(\mathbf{x},t)}{\partial t}=\Big[-\frac{\hbar^2}{2m}\Delta+{\rm V^{ex}}(\mathbf{x},t) +g_{\rm 3}|\Psi(\mathbf{x},t)|^2\Big]\Psi(\mathbf{x},t),
\end{eqnarray}
with ${\rm V^{ex}}(\mathbf{x},t)=\frac{m}{2}[\omega_x(t)^2x^2+\omega_y(t)^2y^2+\omega_z(t)^2z^2]$. For tight transverse confinement ($\omega_x\sim\omega_y\gg \omega_z$ and $N|a|\sqrt{m\omega_z/\hbar}\ll 1$), the transverse excitations are frozen. The transverse mode can be approximated by the single-particle harmonic-oscillator ground state $\Phi_0(x,y,t)$, so that the wavefunction factorizes, $\Psi(\mathbf{x},t) = \Phi_0(x,y,t)\psi(z,t)$. Integrating out the transverse modes, and up to a time-dependent constant which can be gauged away, one obtains the reduced GPE
\begin{eqnarray}
i\hbar\frac{\partial\psi(z,t)}{\partial t}= \Big[-\frac{\hbar^2}{2m}\frac{\partial^2}{\partial z^2} +{\rm V^{ex}}(z)+g_{\rm 1}(t)|\psi(z,t)|^2\Big]\psi(z,t),
\end{eqnarray}
with the effective coupling $g_{\rm 1}(t) =g_{\rm 3}\iint\!\! dx\, dy\, |\Phi_0(x,y,t)|^4$. A general trajectory $g_{1}(t)$ can be implemented by modifying the frequency $\omega_{\perp}(t)$ of the transverse confinement according to
\begin{eqnarray}
\omega_{\perp}^2(t)=\omega_{\perp}^2(0)\Bigg[\frac{g_{1}(t)}{g_{1}(0)}\Bigg]^2+\frac{1}{2}\frac{\ddot{g}_{1}(t)}{g_{1}(t)}-\frac{3}{4}\Bigg[\frac{\dot{g}_{1}(t)}{g_{1}(t)}\Bigg]^2
\end{eqnarray}
in quasi-1D atomic clouds~\citep{delcampo11a}. The first term on the RHS corresponds to the adiabatic tuning discussed in \citet{Staliunas04,Engels07}, while the remaining terms are associated with the STA dynamics in the transverse modes. A similar analysis applies to the control of the effective coupling constant in a pancake condensate, in the $x$-$y$ plane, under tight confinement along the $z$ direction \citep{delcampo11a}. We note that this technique is restricted to tuning the amplitude of the coupling constant, at variance with alternative techniques based on Feshbach or confinement-induced resonances, which can change both the amplitude and the character of the interactions, e.g., from attractive to repulsive \citep{BDZ08}.
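As a short illustration of this prescription (arbitrary numbers; the target ramp of $g_1(t)/g_1(0)$ is an assumed example, not taken from the cited works), the required transverse-frequency modulation can be evaluated directly:
\begin{verbatim}
import numpy as np

# Illustrative sketch: transverse-frequency modulation that realizes a
# prescribed effective 1D coupling g1(t), following the equation above.
# Only the ratio g1(t)/g1(0) enters; all numbers are arbitrary choices.

w_perp0 = 2*np.pi*1.0e3                      # initial transverse frequency (rad/s)
tf = 5e-3                                    # ramp duration (s)
t = np.linspace(0.0, tf, 2001)
s = t / tf
g = 1.0 + 2.0*(10*s**3 - 15*s**4 + 6*s**5)   # smooth ramp g1(t)/g1(0): 1 -> 3

dg  = np.gradient(g, t)
ddg = np.gradient(dg, t)
w_perp_sq = w_perp0**2 * g**2 + 0.5*ddg/g - 0.75*(dg/g)**2
# For very fast ramps w_perp_sq can turn negative (a transiently expulsive
# transverse potential); for the gentle ramp above it stays positive and
# omega_perp(t) = sqrt(w_perp_sq) is directly implementable.
\end{verbatim}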
\subsection{Strongly Correlated Gases}\label{mbsta}

The preceding sections were focused on single-particle systems and a mean-field description of Bose-Einstein condensates. We have seen that the inversion of scaling laws is a powerful technique to design STA in those processes where the dynamics is self-similar, e.g., expansions or transport. In the following we focus on the engineering of STA in strongly correlated quantum fluids of relevance to ultracold-gas experiments. We shall consider a fairly general model in dimension $D$, consisting of $N$ indistinguishable particles with coordinates $\mathbf{x}_i\in\mathbb{R}^D$, trapped in a time-dependent isotropic harmonic potential of frequency $\omega(t)$ and interacting with each other through a two-body potential ${\rm V}(\mathbf{x}_i-\mathbf{x}_j)$. The many-body Hamiltonian describing this system reads \citep{delcampo11}
\begin{eqnarray}
\label{ad_Hamiltonian}
\mathcal{H}=\sum_{i=1}^{N}\bigg[-\frac{\hbar^2}{2m}\Delta_{i}+\frac{1}{2}m\omega^2(t)\mathbf{x}_i^2\bigg]+\epsilon\sum_{i<j}{\rm V}(\mathbf{x}_i-\mathbf{x}_j),
\end{eqnarray}
where $\Delta_{i}$ is the $D$-dimensional Laplacian operator for the $\mathbf{x}_i$ variable, and $\epsilon=\epsilon(t)$ is a dimensionless time-dependent coupling strength satisfying $\epsilon(0)=1$. We shall further assume that ${\rm V}(\lambda\mathbf{x})=\lambda^{-\alpha}{\rm V}(\mathbf{x})$ under scaling of the coordinates. Specific realizations of this model include the Calogero-Sutherland model \citep{Sutherland98}, the Tonks-Girardeau gas \citep{OS02,MG05}, the Lieb-Liniger gas \citep{BPG08}, Bose-Einstein condensates (BEC) \citep{CD96,KSS96,MCRG09}, including dipolar interactions \citep{dipolar}, and more general many-body quantum systems \citep{GBD10}. For simplicity, we leave out other cases to which similar techniques can be applied, such as strongly interacting mixtures \citep{MG07} or systems with internal structure \citep{etonks1,etonks3}.

Let us now consider an equilibrium state $\Phi$ of the system (\ref{ad_Hamiltonian}) at $t=0$, with chemical potential $\mu$. For compactness we shall use the notation $\mathbf{x}_{j:k}\equiv\{\mathbf{x}_j,\mathbf{x}_{j+1},\dots,\mathbf{x}_{k-1},\mathbf{x}_k\}$. It is possible to find a self-similar scaling solution of the form
\begin{eqnarray}
\label{ad_scaling}
\Phi\left(\mathbf{x}_{1:N},t\right)=\frac{1}{\rho^{ND/2}}\exp\bigg[i\sum_{i=1}^N\frac{ m\mathbf{x}_i^2\dot{\rho}}{2\rho\hbar}-i\mu\tau(t)/\hbar\bigg] \Phi\left(\frac{\mathbf{x}_{1:N}}{\rho},t=0\right),
\end{eqnarray}
where $\tau(t)=\int_{0}^tdt'/\rho^2(t')$, whenever the scaling factor $\rho=\rho(t)$ is the solution of the Ermakov differential equation, $\ddot{\rho}+\omega^2(t)\rho=\omega_0^2/\rho^3$, with $\omega_0=\omega(0)$, satisfying the boundary conditions $\rho(0)=1$ and $\dot{\rho}(0)=0$. This is the same consistency equation that arises in the context of the single-particle time-dependent harmonic oscillator.

Scaling laws greatly simplify the dynamics of quantum correlations. Let us consider the time evolution of the $n$-particle reduced density matrix
\begin{eqnarray}
g_n(\mathbf{x}_{1:n};\mathbf{x}_{1:n}';t)=\frac{N!}{(N-n)!} \int \prod_{i=n+1}^N d\mathbf{x}_i\, \Phi^*(\mathbf{x}_{1:N};t)\Phi(\mathbf{x}_{1:n}',\mathbf{x}_{n+1:N};t).
\end{eqnarray}
Provided the scaling law holds, its time evolution reads
\begin{eqnarray}
g_n(\mathbf{x}_{1:n};\mathbf{x}_{1:n}';t)= \rho^{-nD} g_n\left(\frac{\mathbf{x}_{1:n}}{\rho};\frac{\mathbf{x}_{1:n}'}{\rho}; 0\right) \exp\left(-\frac{i}{\rho}\frac{\dot{\rho}}{\omega_0} \frac{\sum_{i=1}^n(\mathbf{x}_i^2-\mathbf{x}_i'^2)}{2l^2_0}\right),
\end{eqnarray}
where $l_0=\sqrt{\hbar/m\omega_0}$. Local quantum correlations depend exclusively on the diagonal elements of $g_n(\mathbf{x}_{1:n};\mathbf{x}_{1:n}';t)$ and directly reflect the self-similar dynamics. For instance, the time evolution of the density profile $n(\mathbf{x})=g_1(\mathbf{x};\mathbf{x})$ reads $n(\mathbf{x},t)=\rho^{-D}\,n\left(\frac{\mathbf{x}}{\rho},t=0\right)$. The dynamics of non-local correlations is more involved, due to the presence of the oscillatory phase.
As an example, the evolution of the one-body reduced density matrix (OBRDM) under self-similar dynamics \citep{MG05,GBD10},
\begin{equation}
\label{ad_g1t}
g_1(\mathbf{x},\mathbf{y};t) = \frac{1}{\rho^D} g_1\left(\frac{\mathbf{x}}{\rho},\frac{\mathbf{y}}{\rho};0\right) \exp\left(-\frac{i}{\rho}\frac{\dot{\rho}}{\omega_0} \;\frac{\mathbf{x}^2-\mathbf{y}^2}{2l^2_0}\right),
\end{equation}
induces a non-self-similar evolution of the momentum distribution, its Fourier transform
$n(\mathbf{k},t) = \int d\mathbf{x}\, d\mathbf{y} \,e^{i\mathbf{k}\cdot(\mathbf{x}-\mathbf{y})} g_1 (\mathbf{x},\mathbf{y};t)$.
It is expected that the oscillatory phases distort quantum correlations. The case of a free expansion, where the frequency modulation in terms of the Heaviside function $\Theta(t)$ reads $\omega(t)=\omega_0\Theta(-t)$, has received much attention. The solution of the Ermakov equation for the scaling factor is $\rho(t)=\sqrt{1+\omega_0^2t^2}$, and for $t\gg\omega_0^{-1}$, $\rho(t)\sim\omega_0t$, $\dot{\rho}=\omega_0$. Using the stationary-phase method, it follows that
\begin{eqnarray}
n(\mathbf{k},t)\sim |2\pi\omega_0 l_0^2/\dot{\rho}|^D\, g_1(\omega_0 \mathbf{k} l_0^2/\dot{\rho},\omega_0 \mathbf{k} l_0^2/\dot{\rho}),
\end{eqnarray}
i.e., the asymptotic momentum distribution is mapped to the scaled density profile of the initial state \citep{Hrvoje1,Hrvoje2,GBD10}. As a result, all information about the off-diagonal elements of the OBRDM is lost. Similar effects result from an expansion in a finite time $t_f\sim \omega_0^{-1}$ and signal the breakdown of adiabaticity. Excitations manifest themselves as well in local correlation functions, e.g., as an excitation of the breathing mode of the cloud.

In the adiabatic limit ($\tau\gg \omega_0^{-1}$), the time variation of the scaling factor vanishes, $\dot{\rho}(t)\approx 0$, resulting in the adiabatic trajectory $\rho(t)=\sqrt{\omega_0/\omega(t)}$. At all times the time evolution of the OBRDM and the momentum distribution can be related by a scaling transformation of their form at $t=0$,
\begin{equation}
\label{ad_sg1}
g_1(\mathbf{x},\mathbf{y};t) = \frac{1}{\rho^D(t)} g_1\left(\frac{\mathbf{x}}{\rho(t)},\frac{\mathbf{y}}{\rho(t)};0\right),\qquad n(\mathbf{k},t) = \rho^D(t)\, n(\rho(t) \mathbf{k},0).
\end{equation}
These expressions can be applied to expansions ($\rho(t)>1$) and compressions ($\rho(t)<1$), and generally still require tuning the interaction coupling strength. Nonetheless, the required adiabatic time scale can be exceedingly long, and we next tackle the problem of achieving a final scaled state in a predetermined expansion time $t_f$. The upshot of the frictionless dynamics is that quantum correlations at the end of the quench ($t=t_f$, and only then) are those of the initial state scaled by a factor $\rho(t_f)=\gamma$ \citep{delcampo11}. In particular,
\begin{eqnarray}
\label{ad_stacorr}
g_1(\mathbf{x},\mathbf{y};t_f) = \frac{1}{\gamma^D} g_1\left(\frac{\mathbf{x}}{\gamma},\frac{\mathbf{y}}{\gamma};0\right),\qquad n(\mathbf{k},t_f) = \gamma^D\, n(\gamma \mathbf{k},0).
\end{eqnarray}
Similar expressions hold for higher-order correlations, i.e., $g_n(\mathbf{x}_{1:n},\mathbf{y}_{1:n};t_f) = \gamma^{-nD} g_n\left(\mathbf{x}_{1:n}/\gamma,\mathbf{y}_{1:n}/\gamma;0\right)$.
Moreover, as long as the initial state is an equilibrium state in the initial trap, the state at $t_f$ is an equilibrium state of the final trap, preventing any non-trivial dynamics after the quench, for $t>t_f$, provided that $\omega(t>t_f)=\omega_f$. Nonetheless, at intermediate times $t\in [0,t_f)$ the momentum distribution exhibits a rich non-equilibrium dynamics and can show, for instance, an evolution towards the scaled density profile of the initial state.

We close this section with two comments. First, the applicability of STA based on the inversion of scaling laws is not restricted to fermionic or bosonic systems; it applies as well to anyonic quantum fluids for which dynamical scaling laws are known \citep{delcampo08}. Systems with quantum statistics smoothly extrapolating between bosons and fermions might be realized in the laboratory following \citet{KLMR11}. Second, the possibility of scaling up the system while preserving quantum correlations constitutes a new type of microscopy of quantum correlations in quantum fluids \citep{delcampo11,DB12}. It is also of interest to design new protocols to reconstruct the initial quantum state of the system from the time evolution of its density profile \citep{BB87,LS97}, a tomographic technique demonstrated experimentally in \citet{KPM97} and applicable to many-body systems \citep{DMM08}.

{\it Scaling laws in other trapping potentials.--} Scaling laws for many-body systems can be found for more general types of confinement. Among them, homogeneous potentials are of particular interest, since they simplify the correspondence between ultracold-atom experiments and condensed-matter theory. The early experimental implementations of the paradigmatic particle in a box aimed at the creation of optical billiards for ultracold gases \citep{prepainters1,prepainters2}. Trapping of a BEC in an all-optical box was reported in \citet{becbox}, and analogous traps have been created in atom chips \citep{boxchip}. For the purpose of implementing STA, the dynamical optical dipole potential may be realized using the highly versatile ``painting technique'', which creates a smooth and robust time-averaged potential with a rapidly moving laser beam \citep{painters}, or, alternatively, by spatial light modulators \citep{modu}.

The breakdown of adiabaticity in a time-dependent homogeneous potential leads to quantum transients related to the diffraction-in-time (DIT) effect, see \citet{DGCM09} for a review. A matter wave sharply localised in a region of space exhibits, after sudden removal of the confinement, density ripples during its free evolution. The earliest example, discussed by \citet{Moshinsky52}, the free evolution of a truncated cut-off plane wave, exhibits an oscillatory pattern with the same functional form as the diffraction pattern of a classical light beam from a semi-infinite plane. The phenomenon is ubiquitous in matter-wave dynamics induced by a quench and, in particular, it arises in time-dependent box potentials in one \citep{GK76,Godoy02,DM06}, two and three \citep{Godoy03} dimensions. The effect manifests itself as well in strongly interacting gases, such as ultracold bosons in the Tonks-Girardeau regime \citep{DM06,delcampo08}. Moreover, when the piston walls move at a finite speed $v$, the adiabatic limit is not approached monotonically as $v\rightarrow 0$.
It was shown in \citet{DMK08,Mousavi12,Mousavi12b} that an enhancement of DIT occurs when the walls move with the dominant velocity component of the initial confined state, due to constructive interference between the expanding components and those reflected from the walls. When the reflections from the confinement walls dominate, the non-adiabatic dynamics in time-dependent homogeneous potentials leads to Talbot oscillations and weaves a quantum carpet in the time evolution of the density profile \citep{Schleich1,Schleich2}. Suppression of these excitations is dictated by the adiabatic theorem, both in the non-interacting \citep{boxlaws1,boxlaws2,boxlaws3,bookinv} and in the mean-field regime \citep{BMT02}. Further, for non-interacting systems one can prove that no shortcut based on invariants or scaling exists in time-dependent homogeneous potentials. At the single-particle level, this follows from the fact that the family of trajectories for the width $\xi(t)$ of a box-like potential for which a dynamical invariant exists \citep{boxlaws1} takes the form $\xi(t)=[at^2+bt+c]^{\frac{1}{2}}$, which is incompatible with the boundary conditions required to reduce a time-evolving scaling solution to the initial and target states. For many-body quantum fluids, the same result is derived from the consistency equations required for self-similar dynamics to occur. To find a shortcut in this scenario one has to relax the condition on the confinement and allow for an inhomogeneous auxiliary harmonic potential of the form \citep{DB12}
\begin{eqnarray}
U^{\rm aux}({\bf x},t)=-\frac{1}{2}m\frac{\ddot{\xi}(t)}{\xi(t)}|{\bf x}|^2,
\end{eqnarray}
where ${\bf x}\in\mathbb{R}^D$, $|{\bf x}|\in[0,\xi(t)]$. For $D=1$, a box with one stationary wall at $x=0$ and a moving wall at $x=\xi(t)$ is assumed. Cylindrical and spherical symmetry are imposed for $D=2,3$, respectively. This auxiliary potential can be implemented by means of a blue-detuned laser \citep{Khaykovich} or by direct painting with a rapidly moving laser \citep{prepainters1,prepainters2,painters}. Thanks to its presence it is possible to find dynamical self-similar solutions of the single-particle and many-body Schr\"odinger equations for time-dependent box-like confinements with a general modulation of the width $\xi(t)$. In particular, consider the Hamiltonian
\begin{eqnarray}
\label{mbh}
\mathcal{H}= \sum_{i=1}^N\Big[-\frac{\hbar^2}{2m}\Delta_{i} +U^{\rm aux}({\bf x}_i,t)\Big]+\epsilon\sum_{i<j}V({\bf x}_i-{\bf x}_j),
\end{eqnarray}
where ${\bf x}_i\in\mathbb{R}^D$, $r_i=|{\bf x}_i|\in[0,\xi(t)]$, and let us introduce the scaling factor $\rho(t)=\xi(t)/\xi(0)$. If $V(\lambda {\bf x})=\lambda^{-\alpha}V({\bf x})$ and $\epsilon(t)=\rho(t)^{\alpha -2}$, then, in the presence of $U^{\rm aux}({\bf x},t)$, the time evolution of an initial eigenstate of the system with chemical potential $\mu$ follows the scaling law in Eq. (\ref{ad_scaling}). Given the existence of a scaling law, a many-body shortcut can be engineered by designing the scaling factor as for the simple harmonic oscillator, ensuring that the time-evolving state reduces to the initial and target states at the beginning and end of the evolution \citep{DB12}. Naturally, this is possible as well for Bose-Einstein condensates in the mean-field regime, extending the Castin-Dum-Kagan-Surkov-Shlyapnikov scaling ansatz \citep{DB12}.
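As a rough numerical sketch of such a design (illustrative numbers only; the quintic interpolation for $\xi(t)$ is just one convenient choice, and the $^{87}$Rb mass is used merely as a placeholder), the following Python snippet evaluates the curvature of the auxiliary potential along a shortcut expansion of the box:
\begin{verbatim}
import numpy as np

# Minimal sketch: box-width modulation xi(t) and the auxiliary harmonic term
# U_aux(x,t) = -(m/2) (xi''(t)/xi(t)) x^2.  Illustrative parameters only.
m = 1.44e-25                           # 87Rb mass (kg), used as a placeholder
xi0, gamma_f, tf = 20e-6, 3.0, 5e-3    # initial width (m), expansion factor, duration (s)

t = np.linspace(0.0, tf, 1001)
s = t/tf
P   = 10*s**3 - 15*s**4 + 6*s**5       # P(0)=0, P(1)=1, P'=P''=0 at both ends
ddP = (60*s - 180*s**2 + 120*s**3)/tf**2
xi   = xi0*(1 + (gamma_f - 1)*P)
ddxi = xi0*(gamma_f - 1)*ddP

curvature = -0.5*m*ddxi/xi             # coefficient of x^2 in U_aux

print("curvature at t=0 and t=tf:", curvature[0], curvature[-1])  # both vanish
print("min/max curvature:", curvature.min(), curvature.max())
# negative (expulsive) during the early acceleration, positive (trapping) later
\end{verbatim}
The sign change of the curvature anticipates the expulsive-then-trapping sequence described next.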
Along a shortcut to an adiabatic expansion, the auxiliary potential is expulsive in an early stage of the expansion, expelling the atoms from the center and providing the required speed-up. The rapidly expanding cloud is slowed down in a second stage of the expansion, when $U^{\rm aux}({\bf x},t)$ becomes a trapping potential. The sequence is reversed in a shortcut to an adiabatic compression. In both cases, at $t=t_f$, $U^{\rm aux}({\bf x},t)$ vanishes and the cloud reaches the target state, a stationary state of the final Hamiltonian. As a result, STA provide a variant of the paradigmatic model of a quantum piston \citep{QJ12}. \subsection{Experimental Realization} Experiments on fast shortcut expansions have been performed at Nice with magnetically confined ${}^{87}$Rb atoms, for ultracold thermal clouds \citep{Nice10} and for condensates in the Thomas-Fermi regime \citep{Nice11}. Compared to the simple expansions treated in \citep{Ch10}, gravity introduces an extra linear term in the Hamiltonian and requires a treatment with additional boundary conditions. For the cold cloud, samples of $N = 10^5$ atoms at a temperature $T_0 = 1.63$ $\mu$K were used to keep the time between collisions long ($\approx 28$ ms) and the potential effectively harmonic. The initial trap frequencies for the $x,y,z$ directions were $(228.1, 22.2, 235.8)$ Hz and the final ones $(18.1, 7.1, 15.7)$ Hz. The fast (35 ms) 15-fold frequency decompression of the trap in the vertical direction yielded a residual center-of-mass oscillation of the cloud equivalent to that of a 1.3-s-long linear decompression, i.e. a reduction of the decompression time by a factor of 37. For the condensate, the number of atoms was $N =1.3\times10^5$ and the initial temperature was $T_0 =130$ nK \citep{Nice11}. The potential is $U(r, t) = \frac{1}{2}m\omega_{\perp}^2(t)(x^2 +z^2)+ \frac{1}{2}m\omega_{\parallel}^2(t)y^2 +mgz$. The initial radial $(x,z)$ and axial $(y)$ frequencies were 235.8 Hz and 22.2 Hz, respectively. The experiment performed a 30-ms-long radial decompression of the trap by a factor of 9, yielding a final radial frequency of 26.2 Hz. The axial frequency was reduced by a factor of 3 to a final value of 7.4 Hz. Using scaling techniques similar to the ones in Sec. \ref{econd}, it was shown that this decompression is a shortcut for both directions. Residual excitations were attributed to an imperfect implementation of $\omega(t)$, to anharmonicities, and to trap tilting. \subsection{Optimal Control} The time-dependent frequency of a harmonic-trap expansion based on invariants can be optimized with respect to time or to the transient excitation energy, restricting the allowed transient frequencies \citep{Li10,Li11}. Kosloff and coworkers have applied OCT to minimize the expansion time under ``frictionless conditions'', i.e., taking an initial thermal equilibrium at one temperature into thermal equilibrium at another temperature in a cooling cycle, using real or imaginary bang-bang (piecewise constant or ramped) intermediate trap frequencies, see e.g. \citet{Salamon09,EPL11}. \subsection{Other Applications} Inverse-engineered expansions based on invariant theory or scaling laws have been applied in several contexts. For example, \citet{Onofrio11} discussed the possibility of achieving deep degeneracy of Fermi gases via sympathetic cooling by changing the trapping frequency of another species (the coolant) so as to keep the Lewis-Riesenfeld invariant constant.
The identified advantages are the maximal heat capacity retained by the coolant, due to the conservation of the number of atoms, and the preservation of its phase-space density in the nondegenerate regime, where the specific heat retains its Dulong-Petit value. The limits of the approach are set by the transient excitation, which should be kept below some allowed threshold, and by the spreading of the coolant cloud, which reduces the spatial overlap with the fermionic cloud. The method is found to be quite robust with respect to broadband noise in the trapping frequency \citep{Onofrio12}. \citet{Lianao} propose a scheme to cool down a mechanical resonator in a three-mirror cavity optomechanical system. The dynamics of the mechanical resonator and cavities is reduced to that of a time-dependent harmonic oscillator, whose effective frequency can be controlled through the optical driving fields. A simpler harmonic system is studied in \citet{Lianao2}: a charged mechanical resonator coupled to electrodes via a Coulomb interaction controlled by bias gate voltages. \citet{PLA2012} designs, using scaling, fast frictionless expansions of an optical lattice with dynamically variable spacing (accordion lattice). Specifically, he considers the 1D Hamiltonian $H =p^2/(2m) + V (t) \cos\left(2k_L {x}/{\Lambda(t)}\right)+ {m\omega^2(t)}x^2/2$, where $\Lambda$ is the scale parameter, which goes from 1 at $t=0$ to $c$ at $t_f$, and the parabolic potential only acts during the expansion, according to $\omega^2(t)=-\Lambda^{-1}{\partial^2\Lambda}/{\partial t^2}$. Decreasing the potential depth as $V(t)=V_0/\Lambda^2(t)$ and making the first and second derivatives of $\Lambda$ vanish at the boundary times guarantee a frictionless expansion. In \citet{Yuce2} the results are extended to a continuously replenished BEC in a harmonic trap or in an optical lattice. \citet{simu} propose inverse engineering of the trap frequencies based on the Lewis-Riesenfeld invariants as part of the elementary operations necessary to implement a universal bosonic simulator using ions in separate traps. This method would make it possible to improve the accuracy and speed of conventional laser operations on ions, which are limited by the Lamb-Dicke approximation. \citet{JJ} develop a method to produce highly coherent spin-squeezed many-body states in bosonic Josephson junctions (BJJs). They start from the known mapping of the two-site Bose-Hubbard (BH) Hamiltonian to that of a single effective particle evolving according to a Schr\"odinger-like equation in Fock space. Since, for repulsive interactions, the effective potential in Fock space is nearly parabolic, the inversion protocols for shortcuts to adiabatic evolution in harmonic potentials may be applied to the many-body BH Hamiltonian. The procedure requires a good control of the time variation of the atom-atom scattering length during the desired period, a possibility now at hand in current experimental setups for internal BJJs. \section{Transport\label{secTRA}} The efficient transport of atoms and ions by moving the confining trap is a fundamental requirement for many applications. These include quantum information processing in multiplexed trap arrays \citep{Leibfried2002,ions,Bowler} or quantum registers \citep{MeschNature}; controlled translation from the production or cooling chamber to the interaction or manipulation zones; control of interaction times and locations, e.g.
in cavity QED experiments, quantum gates \citep{Calarco2000} or metrology \citep{Maleki}; and velocity control to stop \citep{catcher1,boxlaws3} or launch atoms \citep{Meschede}. The transport should ideally be lossless, fast and ``faithful'', i.e. the final state should be equal to the initial one apart from the translation and possibly phase factors. This is compatible with some transient excitation in the instantaneous basis at intermediate times. Many different experimental approaches have been implemented. Neutral atoms have been transported individually, as thermal atomic clouds, or as condensates, using optical or magnetic traps. Magnetic traps can be displaced by moving the coils mechanically, by time-varying currents in a lithographic pattern, or on a conveyor belt with permanent magnets \citep{Lahaye}. Optical traps can be used as optical tweezers whose focal point is translated by mechanically moving lenses \citep{David}, and traveling lattices (conveyor belts) can be made with two slightly detuned counterpropagating beams. Mixed magneto-optical approaches are also possible. To transport ions, controlled time-dependent voltages have been used in linear-trap based frequency standards \citep{Maleki} and, more recently, in quantum information applications using multisegmented Paul traps \citep{SK,SK2,Bowler} or an array of Penning traps \citep{Penning}, also in 2D configurations \citep{Wineland}. In general, a way to avoid spilling or excitation of the atoms is to perform a sufficiently slow (adiabatic) transport, but for many applications the total processing time is limited due to decoherence, and an adiabatic transport may turn out to be too long. In the context of quantum information processing, transport could occupy most of the operation time of realistic algorithms, so ``transport times'' need to be minimized \citep{ions,SK}. In summary, there are important reasons to reduce the transport time, and several theoretical and experimental works have studied ways to make fast transport also faithful \citep{David,Calarco,MNProc,Shan,transport,BECtransport}. \subsection{Invariant-based Shortcuts for Transport\label{4.1}} As done for expansions, shortcut techniques can be applied to perform fast atomic transport without final vibrational heating by combining dynamical invariants and inverse engineering. Two main scenarios can be handled in this way: shortcuts for the transport of a harmonic trap and shortcuts for the transport of an arbitrary trap. It is also possible to construct shortcuts for more complicated settings, like atom stopping or launching, and combinations of transport and expansion of harmonic traps. {\it Transport of a rigid harmonic trap.--} Suppose that a 1D harmonic trap should be moved from $q_0(0)$ at time $t=0$ to $d=q_0(t_f)$ at a time $t_f$. The potential is $V=\frac{m}{2} \omega_0^2 (x-q_0(t))^2$, with fixed frequency. Comparing this to Eq. \eqref{Vinv} implies
\begin{equation}
F=m\omega_0^2 q_0(t),\quad \omega(t)=\omega_0,\quad U=0.
\end{equation}
Note that Eq. \eqref{Erma} plays no role here and Eq. (\ref{alphaeq}) becomes the only relevant auxiliary equation,
\begin{equation}
\label{classical}
\ddot{q}_c+\omega_0^2(q_c-q_0)=0,
\end{equation}
where $q_c$ can be identified as a classical trajectory. This is the equation of a moving oscillator, for which an analytical solution is known in both classical and quantum physics.
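For completeness, and only to make the connection with the Fourier-transform criterion of Eq. (\ref{eq.amplitude}) below explicit (this is a standard driven-oscillator result, not specific to any of the cited works), write $u=q_c-q_0$ for the position of the atom relative to the trap center. Equation (\ref{classical}) then reads $\ddot{u}+\omega_0^2 u=-\ddot{q}_0$, and for an atom initially at rest at the trap center, $u(0)=0$, $\dot{u}(0)=-\dot{q}_0(0)$,
\begin{equation}
u(t)=-\frac{\dot{q}_0(0)}{\omega_0}\sin(\omega_0 t)-\frac{1}{\omega_0}\int_0^t \ddot{q}_0(s)\,\sin[\omega_0(t-s)]\,ds.
\end{equation}
If the trap is at rest before and after the transport, an integration by parts shows that the residual oscillation amplitude at $t_f$ is governed by the $\omega_0$ Fourier component of the trap velocity $\dot{q}_0$, which is precisely the content of Eq. (\ref{eq.amplitude}).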
From a classical mechanics point of view, the amplitude $\mathcal{A}$ of the oscillatory motion after transport is the modulus of the Fourier transform of the velocity profile associated with the trap trajectory \citep{David},
\begin{equation}
\mathcal{A} = |{\mathcal F}[\dot{q}_0](\omega_0)|,
\label{eq.amplitude}
\end{equation}
with ${\mathcal F}[f](\omega)=\int_{-\infty}^{+\infty} f(t)\,e^{-i\omega t}\, dt$. This Fourier formulation of the transport problem allows for many enlightening analogies. For instance, $\mathcal{A}^2$ is mathematically identical to the intensity profile of the far-field Fraunhofer diffraction pattern of an object with a transmittance having the same shape as the velocity profile of the transport. An optimal transport condition is therefore equivalent to a dark fringe in the corresponding diffraction pattern. The optimization of the conditions under which a non-adiabatic transport should be carried out with a rigid harmonic trap is thus equivalent to apodization problems in optics. If the velocity profile contains the repetition of a pattern, one expects an interference-like effect; this would be the case, for instance, for a symmetrical round-trip transport, as experimentally demonstrated in \citet{David}. Quantum mechanically, the transported wave function reads
\begin{equation}
\Psi(q,t) = \tilde{\Phi}(q-q_0(t),t)\exp\left( \frac{im(q-q_0(t))\dot{q}_0}{\hbar}\right)\exp\left( \frac{i}{\hbar}\int_0^t dt'\,{\mathcal L}(t')\right),
\label{psiexact}
\end{equation}
where ${\mathcal L}=m{\dot q}_c^2/2-m\omega_0^2(q_c-q_0(t))^2/2$ is the Lagrangian associated with the equation of motion (\ref{classical}), and $\tilde{\Phi}$ is a wave function that coincides with the initial wave function at the initial time and evolves under the action of the static harmonic potential of angular frequency $\omega_0$ located at $q=q_0(0)$. Using the boundary conditions associated with the transport, one finds from Eq.~(\ref{psiexact}) that an optimal transport, for which the system starts in the ground state and ends up in the ground state of the displaced potential, corresponds exactly to the classical criterion of a cancellation of the Fourier transform of the velocity profile, i.e. ${\mathcal A} =0$. Let us now address the application of invariant-based engineering. We first design an appropriate classical trajectory $q_c(t)$ fulfilling the boundary conditions $q_c(0)=q_0(0)=0$, $\dot{q}_c(0)=0$, $\ddot{q}_c(0)=0$ and $q_c(t_f)=q_0(t_f)=d$, $\dot{q}_c(t_f)=0$, $\ddot{q}_c(t_f)=0$, to ensure an evolution from the $n$-th state of the initial trap to the $n$-th state of the final trap. Then the trap trajectory $q_0(t)$ is deduced via Eq. (\ref{classical}); a minimal numerical illustration of this inverse design is sketched below. Some variants are vertical transport with a gravity force, so that $F=m\omega_0^2q_0-mg$ and Eq. (\ref{classical}) becomes $\ddot{q}_c+\omega_0^2(q_c-q_0)=-g$, and stopping or launching processes \citep{transport}. A major concern in practice for all these applications is to keep the harmonic approximation valid. This may require an analysis of the actual potential and of the excitations taking place along the non-adiabatic transport process. Without such a detailed analysis, the feasibility of the approach for a given transport objective, set by the pair $(d,t_f)$, can be estimated by comparing lower excitation bounds \citep{BECtransport}. These are obtained using calculus of variations, as we have discussed before for expansions.
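To make the preceding recipe concrete, the following minimal Python sketch (with arbitrary illustrative parameters; the quintic interpolation is just one convenient choice for $q_c$) designs $q_c(t)$, obtains $q_0(t)$ from Eq. (\ref{classical}), and verifies by direct integration of the classical equation of motion that the shortcut leaves no residual oscillation, unlike a naive linear ramp of the trap position:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Minimal sketch of invariant-based transport design. Illustrative parameters only.
omega0 = 2*np.pi*50.0            # trap angular frequency (rad/s)
d, tf = 1e-3, 20e-3              # transport distance (m) and duration (s)

def qc(t):                       # designed classical trajectory (quintic)
    s = t/tf
    return d*(10*s**3 - 15*s**4 + 6*s**5)

def qc_dd(t):                    # its second time derivative
    s = t/tf
    return d*(60*s - 180*s**2 + 120*s**3)/tf**2

def q0_shortcut(t):              # trap trajectory from  q_c'' + omega0^2 (q_c - q_0) = 0
    return qc(t) + qc_dd(t)/omega0**2

def q0_linear(t):                # naive reference: linear ramp of the trap
    return d*t/tf

def eom(t, y, traj):             # classical atom in the moving harmonic trap
    x, v = y
    return [v, -omega0**2*(x - traj(t))]

for name, traj in [("shortcut", q0_shortcut), ("linear ramp", q0_linear)]:
    sol = solve_ivp(eom, [0.0, tf], [0.0, 0.0], args=(traj,),
                    rtol=1e-10, atol=1e-12)
    xf, vf = sol.y[0, -1], sol.y[1, -1]
    amp = np.hypot(xf - d, vf/omega0)      # residual oscillation amplitude
    print(f"{name}: residual amplitude = {amp:.2e} m")
\end{verbatim}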
Writing the expectation value of potential energy for a transport mode as $\langle V(t)\rangle=\frac{\hbar\omega_0}{2}\left(n+1/2\right)+E_P$, the time average of $E_P$ is bounded as ${\overline{E_{P}}}\ge{6md^2}/({t_f^4 \omega_0^2})$ \citep{transport}. This scaling should be compared to the milder dependence on $t_f^{-2}$ of the time-averaged transient energy in expansions \citep{energy}. \citet{OCTtrans} have shown how to realise this bound by allowing the discontinuous acceleration of the trap at $t = 0$ and $t = t_f$ and also finite jumps in the trap position. In \citet{OCTtrans}, the invariant-based method is complemented by optimal control theory. Since actual traps are not really harmonic, the relative displacement between the center of mass and the trap center is kept bounded as a constraint. The trajectories are then optimized according to different physical criteria: time minimization, (time-averaged) displacement minimization, and (time-averaged) transient energy minimization. The minimum time solution has a ``bang-bang'' form, and the minimum displacement solution is of ``bang-off-bang'' form. In this framework discontinuities in the acceleration $\ddot{q}_c$ at the edge times and elsewhere are allowed. Physically this means that the trap may ideally jump suddenly over a finite distance, whereas the velocity $\dot{q}_c$ and the trajectory $q_c$ remain always continuous. {\it Transport of an arbitrary trap with compensating force.--} In the second main scenario, the trap potential $U(q-q_0(t))$ is arbitrary, and it is rigidly displaced along $q_0(t)$. Now, in Eq. (\ref{Vinv}), $\omega=\omega_0=0$, $F=m\ddot{q}_0$, and $q_c$ in Eq.~(\ref{alphaeq}) may be identified with the transport function $q_0$. Inverse engineering in this case is based on designing the trap trajectory $q_0$ \citep{transport}. In addition to $U$, there is a compensating linear potential term $-mq\ddot{q}_0$ in $H=p^2/2m-mq\ddot{q}_0+U(q-q_0)$. The corresponding force compensates for the inertial force due to the trap motion in the rest frame of the trap, in such a way that the wave function in that frame is not modified up to a time dependent global phase factor. This Hamiltonian was originally proposed by \citet{MNProc} using the ``fast-forward'' scaling technique. \citet{Masuda2012} has recently generalized this result for interacting, identical, spinless particles. \subsection{Transport of a Bose-Einstein Condensate} The two main scenarios of the previous subsection can be generalized for Bose-Einstein condensates \citep{BECtransport}. We first consider 1D harmonic transport. For the GPE
\begin{eqnarray}
i\hbar\frac{\partial\psi}{\partial t} (q,t)=\left[-\frac{\hbar^2}{2m}\frac{\partial^2}{\partial q^2}+\frac{m\omega_{0}^{2}}{2}(q-q_{0}(t))^2 + g_{\rm 1}|\psi (q,t)|^2\right]\psi (q,t),
\label{GP2}
\end{eqnarray}
the results of Sec. \ref{4.1} motivate the ansatz
\begin{equation}
\psi(q,t)=\exp\left\{\frac{i}{\hbar}\left(-\mu t+m\dot{q}_c q\right) - \frac{i}{\hbar} \int_{0}^{t}\!dt'\left[\frac{m}{2}\bigg(\dot{q}_c^2-\omega_0^2(q_c^2-q_0^2)\bigg)\right]\right\} \chi(\sigma),
\label{trans}
\end{equation}
where $\chi(\sigma)$ satisfies the stationary GPE
\begin{eqnarray}
\left[-\frac{\hbar^2}{2m}\nabla^2_{{\sigma}}+ \frac{m\omega^{2}_{0}}{2}|{\sigma}|^2+U(\sigma) +g_{\rm 1}|\chi(\sigma)|^2\right]\chi(\sigma)= \mu\;\chi(\sigma).
\label{GPstationary}
\end{eqnarray}
The ansatz indeed provides a solution of Eq. (\ref{GP2}) when $q_c(t)$ satisfies Eq. (\ref{classical}). Inverse engineering gives the trap trajectory $q_0 (t)$ from \eqref{classical} after designing $q_c(t)$, as for the linear dynamics. The inverse method can also be applied to the anharmonic transport of condensates by means of a compensating force \citep{transport}. In either scenario this method does not require that $t_f$ satisfy any discretization condition, as occurs with other approaches \citep{BECtransport}, and $t_f$ can in principle be made as small as desired. In practice there are of course technical and fundamental limitations \citep{transport}. Smaller values of $t_f$ increase the distance between the condensate and the trap center, and the effect of anharmonicities. There could also be geometrical constraints: for short $t_f$, $q_0(t)$ could exceed the interval $[0,d]$. OCT combined with the inverse method, see below, provides a way to design trajectories taking these restrictions into account. {\it Optimal control theory.---} An OCT trajectory has been found for the case in which the center of the physical trap must be kept inside a given range (e.g. inside the vacuum chamber), i.e. $q_{\downarrow} \le q_0 (t) \le q_{\uparrow}$ \citep{BECtransport}. At the beginning the trap is immediately set at the upper bound $q_{\uparrow}$ to accelerate the condensate as much as possible, and at a time $t_1$ the trap is moved to the lower bound $q_{\downarrow}$ to decelerate the condensate so as to leave it at rest at $t_f$. An important open question is to evaluate the effect of an approximate realization of the discontinuities found in the bang-bang solutions. {\it Effect of perturbations.--} \citet{BECtransport} also investigated the effect of anharmonicities when the harmonic transport protocol is applied. For a symmetrically perturbed potential $V=\omega_0^2m \left[(q-q_0)^2 + \alpha (q-q_0)^4\right]/2$, the fidelity increases with increasing coupling constant $g_{1}$, because of the increased width of the wavefunction. They also considered the case in which the center of the physical trap is randomly perturbed with respect to $q_0(t)$. The fidelity at $t_f$ is then found to be independent of $d$ and of the chosen $q_c(t)$, and it increases for shorter times $t_f$ and for smaller couplings $g_1$, unlike in the previous case. \section{Internal State Engineering\label{secINT}} Manipulating the internal state of a quantum system with time-dependent interacting fields is the basis of quantum information processing \citep{Allen,Vitanov-Rev,Bergmann} and of many other fields. Two major routes are resonant pulses and adiabatic methods, such as ``Rapid'' Adiabatic Passage (RAP), Stimulated Raman Adiabatic Passage (STIRAP), and their variants. Simple fixed-area resonant pulses, such as a $\pi$ pulse, may be fast if intense enough, but they are also highly sensitive to variations in the pulse area and to inhomogeneities in the sample \citep{Allen}. Composite pulses provide an alternative to the single $\pi$ pulse, with some successful applications \citep{Levitt,Collin,Torosov}, but they still need an accurate control of pulse phase and intensity. In NMR, composite pulses are being superseded by adiabatic passage methods, which have also been very successful in laser cooling, chemical reaction dynamics, metrology, atom optics, interferometry, and cavity quantum electrodynamics. Adiabatic passage is robust versus parameter variations but slow.
It is moreover prone to decoherence because of the effect of noise over the long times required. This motivates the search for shortcuts that are both fast and robust with respect to parameter variations and noise. Several methods to find STA have been put forward for two- and three-level atomic systems, among them methods that we have already discussed in Sec. 2, like transitionless driving, invariant-based engineering, or OCT. \subsection{Population Inversion in Two-level Systems\label{poin}} Using the convention $|1\rangle={1\choose 0}$, $|2\rangle={0\choose 1}$, assume a two-level system with a Hamiltonian of the form
\begin{eqnarray}
H_0 (t)= \frac{\hbar}{2} \left(\begin{array}{cc}
-\Delta(t) & \Omega_{R}(t) -i\Omega_I(t) \\
\Omega_{R}(t)+i\Omega_I(t) & \Delta(t)
\end{array}\right).
\label{H0}
\end{eqnarray}
In quantum optics it describes the semiclassical coupling of two atomic levels with a laser, in a laser-adapted interaction picture, where $\Omega_c(t)=\Omega_R(t) + i \Omega_I(t)$ is the complex Rabi frequency and $\Delta(t)$ the time-dependent detuning between the laser and transition frequencies. We will keep the language of the atom-laser interaction hereafter, but in other two-level systems, for example a spin-$1/2$ system or a Bose-Einstein condensate in an accelerated optical lattice \citep{Oliver}, $\Omega_c(t)$ and $\Delta(t)$ may correspond to different physical quantities. Initially, at time $t=0$, the atom is assumed to be in the ground state $|1\rangle$. The goal is to achieve a perfect population inversion, such that at a time $t=T$ the atom is in the excited state. For a $\pi$ pulse the laser is on resonance, i.e. $\Delta(t)=0$ for all $t$. If the Rabi frequency is chosen as $\Omega_c(t) = |\Omega_c (t)| e^{i\alpha}$, with a time-independent $\alpha$, and such that $\int_0^T dt\, |\Omega_c(t)| = \pi$, the population is inverted at time $T$. A simple example is the ``flat'' $\pi$ pulse with $\Omega_c(t) = e^{i\alpha}{\pi }/{T}$. Adiabatic schemes provide another major route to population inversion. In the ``Rapid Adiabatic Passage'' technique the radiation is swept slowly through resonance. The term ``rapid'' here means that the frequency sweep is shorter than the lifetime of spontaneous emission and other relaxation times. Many schemes corresponding to different functions $\Delta(t)$ and $\Omega_R(t)$ are possible. The simplest is a Landau-Zener approach, with $\Delta$ linear in time, $\Omega_R$ constant, and $\Omega_I=0$. {\it Transitionless shortcuts to adiabaticity.--} If an adiabatic scheme is used and the adiabaticity condition $\frac{1}{2}|\Omega_a|\ll |\Omega(t)|$ (where $\Omega= \sqrt{\Delta^2+\Omega_R^2}$, $\Omega_I=0$, and $\Omega_a \equiv [\Omega_R \dot{\Delta} - \dot{\Omega}_R \Delta]/\Omega^2$) is not fulfilled, the inversion fails. We may still get an inversion (i.e. a shortcut) by applying a counterdiabatic field such that its maximum is not larger than the maximum of $\Omega_R$ \citep{Ch10b}. The total Hamiltonian, see Sec. \ref{secCD}, for the transitionless shortcut protocol is \citep{Rice08,Berry09,Ch10b}
\begin{eqnarray}
H_{0a} (t)= \frac{\hbar}{2} \left(\begin{array}{cc}
-\Delta & \Omega_{R}- i \Omega_a \\
\Omega_{R}+ i \Omega_a & \Delta
\end{array}\right).
\label{toha}
\end{eqnarray}
{\it Invariant-based shortcuts.--} STA in two-level systems can also be found by making use of Lewis-Riesenfeld invariants \citep{Yidun,noise}. For $H_0$ in Eq. (\ref{H0}), a dynamical invariant may be parameterized as
\begin{eqnarray}
\label{I}
I (t)= \frac{\hbar}{2} \mu \left(\begin{array}{cc}
\cos\Theta(t) & \sin\Theta(t)\, e^{- i \alpha(t)} \\
\sin\Theta(t)\, e^{i \alpha(t)} & -\cos\Theta(t)
\end{array}\right),
\end{eqnarray}
where $\mu$ is a constant with units of frequency that keeps $I(t)$ with dimensions of energy. From the invariance condition the functions $\Theta(t)$ and $\alpha(t)$ must satisfy
\begin{eqnarray}
\begin{array}{l}
\dot\Theta = \Omega_I \cos \alpha - \Omega_R \sin \alpha,\\
\dot\alpha = -\Delta(t) - \cot\Theta\left(\Omega_R\cos\alpha + \Omega_I\sin\alpha\right).
\end{array}
\label{schrpure}
\end{eqnarray}
The eigenvectors of the invariant are
\begin{eqnarray}
|\phi_+(t)\rangle = \left( \begin{array}{c}
\cos(\Theta/2)\, e^{-i\alpha/2}\\
\sin(\Theta/2)\, e^{i\alpha/2}
\end{array} \right), \quad
|\phi_-(t)\rangle = \left( \begin{array}{c}
\sin(\Theta/2)\, e^{-i\alpha/2}\\
-\cos(\Theta/2)\, e^{i\alpha/2}
\end{array} \right),
\label{phiplus}
\end{eqnarray}
with eigenvalues $\pm \frac{\hbar}{2} \mu$. A general solution $|\Psi(t)\rangle$ of the Schr\"odinger equation can be written as a linear combination
\begin{eqnarray}
|\Psi(t)\rangle = c_+ e^{i\kappa_+(t)} |\phi_+(t)\rangle + c_- e^{i\kappa_-(t)} |\phi_-(t)\rangle,
\end{eqnarray}
where $c_\pm$ are complex, constant coefficients, and $\kappa_\pm$ are the phases of \citet{LR} introduced in Eq. \eqref{LRphase}. Let $\gamma = - 2 \kappa_+ = 2 \kappa_-$; then $\gamma$ must be a solution of
\begin{eqnarray}
\dot\gamma &=& \frac{1}{\sin\Theta} \left(\cos\alpha\,\Omega_R + \sin\alpha\,\Omega_I\right).
\label{dotgamma}
\end{eqnarray}
Equivalently, a solution of the Schr\"odinger equation $|\Psi(t)\rangle$ may be designed with the same parameterization as above ($|\Psi(t)\rangle\langle\Psi(t)|$ is a dynamical invariant) and, by inserting this ansatz into the Schr\"odinger equation, Eqs. (\ref{schrpure}) and (\ref{dotgamma}) are found. If $\Omega_R(t)$, $\Omega_I(t)$ and $\Delta(t)$ are given, Eqs. \eqref{schrpure} and \eqref{dotgamma} have to be solved to get $\Theta(t)$, $\alpha(t)$ and $\gamma(t)$. A particular solution of the Schr\"odinger equation is then given by
\begin{eqnarray}
|\psi(t)\rangle = |\phi_+(t)\rangle e^{-i \gamma(t)/2}.
\label{solpsi}
\end{eqnarray}
To find invariant-based shortcuts and inverse engineer the Hamiltonian, $\Theta(t)$, $\alpha(t)$, and $\gamma(t)$ are fixed first, fulfilling the boundary conditions $\Theta(0)=0$ and $\Theta(T)=\pi$. The wave function \eqref{solpsi} then corresponds to an atom in the ground state at $t=0$ and in the excited state at $t=T$, i.e. a perfect population inversion.
Then, by inverting \eqref{schrpure} and \eqref{dotgamma},
\begin{eqnarray}
\Omega_R &=& \cos\alpha\sin\Theta \,\dot\gamma - \sin\alpha\,\dot\Theta,\label{pot_R} \\
\Omega_I &=& \sin\alpha\sin\Theta\,\dot\gamma + \cos\alpha\,\dot\Theta,\label{pot_I} \\
\Delta &=& -\cos\Theta \,\dot\gamma - \dot\alpha.
\label{pot_D}
\end{eqnarray}
There is much freedom in designing such a shortcut because the auxiliary functions $\Theta(t)$, $\alpha(t)$ and $\gamma(t)$ can be chosen arbitrarily, except for the boundary conditions. \subsection{Effect of Noise and Perturbations} A key aspect when choosing among the many possible shortcuts is their stability or robustness versus different perturbations. \citet{noise} have derived optimal invariant-based shortcut protocols, maximally stable with respect to amplitude noise of the interaction and with respect to systematic errors. It turns out that the perturbations due to noise and to systematic errors require different optimal protocols. Let the ideal, unperturbed Hamiltonian be the $H_0(t)$ of Eq. \eqref{H0}. In \citet{noise}, it is assumed that the errors affect $\Omega_{R}$ and $\Omega_{I}$ but not the detuning $\Delta$, which, for an atom-laser realization of the two-level system, is more easily controlled. For systematic errors, for example if different atoms at different positions are subjected to slightly different fields due to the Gaussian shape of the laser beam, the actual, experimentally implemented Hamiltonian is $H_{01}=H_0 + \beta H_1$, where $H_1 (t)= H_0 (t)\big|_{\Delta\equiv 0}$ and $\beta$ is the amplitude of the systematic error. The second type of error considered in \citet{noise} is amplitude noise, which is assumed to affect $\Omega_R$ and $\Omega_I$ independently with the same strength parameter $\lambda^2$. This is motivated by the assumption that two lasers may be used to implement the two parts of the Rabi frequency. The final master equation describing the systematic error and the amplitude-noise error is
\begin{eqnarray}
\frac{d}{dt} \hat\rho &=& -\frac{i}{\hbar} [H_0 + \beta H_1,\hat\rho]
-\frac{\lambda^2}{2 \hbar^2} \left([H_{2R},[H_{2R},\hat\rho]] + [H_{2I},[H_{2I},\hat\rho]]\right),
\label{masterfinal}
\end{eqnarray}
where $H_{2R} (t)= H_0 (t)\big|_{\Delta\equiv\Omega_I\equiv 0}$ and $H_{2I} (t)= H_0 (t)\big|_{\Delta\equiv\Omega_R\equiv 0}$. Before studying both types of error together it is instructive to look at them separately. {\it Amplitude-noise error.--} If there is no systematic error ($\beta=0$) and only an amplitude-noise error affecting the Rabi frequencies, a noise sensitivity can be defined as
\begin{eqnarray*}
q_N := -\frac{1}{2} \left. \frac{\partial^2 P_2}{\partial \lambda^2}\right|_{\lambda=0}
= - \left.\frac{\partial P_2}{\partial (\lambda^2)}\right|_{\lambda=0},
\end{eqnarray*}
where $P_2$ is the probability of being in the excited state at the final time $T$, i.e. $P_2 \approx 1 - q_N \lambda^2$. To find an invariant-based shortcut protocol maximally stable with respect to amplitude noise, it is first assumed that the unperturbed solution (\ref{solpsi}) satisfies $\Theta(0)=0$ and $\Theta(T)=\pi$.
Using a perturbation approximation of the solution and keeping only terms up to $\lambda^2$ \citep{noise},
\begin{eqnarray}
q_N &=& \frac{1}{4} \int_0^T dt \Big[ (\cos^2\Theta + \cos^2\alpha\sin^2\Theta)(m\sin\alpha - \cos\alpha \dot\Theta)^2\nonumber\\
&& + (\cos^2\Theta + \sin^2\alpha\sin^2\Theta)(m\cos\alpha + \sin\alpha \dot\Theta)^2\Big],
\end{eqnarray}
where $m(t)=-\dot\gamma \sin\Theta$. Minimizing the noise sensitivity $q_N$ via the Euler-Lagrange equation, one finds that the optimal solutions satisfy \citep{noise} $\alpha=n \pi/4$, with $n$ odd, and
\begin{eqnarray}
(3 + \cos(2\Theta))\ddot\Theta = \sin(2\Theta) (\dot\Theta)^2.
\label{eqx}
\end{eqnarray}
The corresponding $\Omega_R$ and $\Omega_I$ can be calculated from Eqs. (\ref{pot_R}) and (\ref{pot_I}). In this case, $\Omega_{R} = \pm \dot\Theta/\sqrt{2} = \pm \Omega_I$ and $\Delta(t)=0$. The optimal noise sensitivity is $q_N = 1.82424/T< \pi^2/(4T)$, and the maximum of the Rabi frequency satisfies $\Omega_R(T/2)\, T \approx 2.70129$. An approximate solution of Eq. (\ref{eqx}) is given by $\Theta(t) = \pi t/T - \frac{1}{12}\sin(2\pi t/T)$, with a noise sensitivity of $q_N=1.82538/T$. {\it Systematic error.---} If there is no amplitude-noise error ($\lambda=0$) and only a systematic error, a systematic-error sensitivity is defined as
\begin{eqnarray*}
q_S := -\frac{1}{2}\left. \frac{\partial^2 P_2}{\partial \beta^2} \right|_{\beta=0}
= -\left. \frac{\partial P_2}{\partial (\beta^2)} \right|_{\beta=0},
\end{eqnarray*}
where $P_2$ is, as before, the probability of finding the atom in the excited state at the final time $T$. $q_S$ may be calculated with a perturbation approximation of the solution, keeping only terms up to $\beta^2$ \citep{noise}. To find an optimal scheme the invariant-based technique is used again. The evolution of the unperturbed state can be parameterized as before, $|\psi(t)\rangle$ (see Eq.~\eqref{solpsi}), with the boundary values $\Theta(0)=0$ and $\Theta(T)=\pi$. The expression for the systematic-error sensitivity is now
\begin{eqnarray*}
q_S = \left|\int_0^T dt\, e^{-i\gamma}\dot\Theta \sin^2\Theta\right|^2.
\end{eqnarray*}
The optimal value is clearly $q_S = 0$. An example of a class of protocols which fulfills $q_S=0$ is found by letting $\gamma(t) = n \left(2 \Theta - \sin(2\Theta)\right)$. It follows that $q_S = {\sin^2\left(n\pi\right)}/(4n^2)$, so for $n=1,2,3,\ldots$, $q_S = 0$. There is still some freedom left; this allows further optimization with respect to additional constraints. {\it Systematic and amplitude-noise errors.--} If both errors coexist, the optimal scheme depends on their relative importance. \citet{noise} examine numerically the behavior of different protocols. Fig. \ref{general_ex} shows that each of the optimal schemes performs better than the other depending on which type of error dominates.
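The inverse engineering of Eqs. (\ref{pot_R})-(\ref{pot_D}) is simple enough to be checked numerically. The following minimal Python sketch (with $\hbar=1$ and an arbitrary time unit; it is not a reproduction of the optimized protocols of \citet{noise}) uses the choice $\alpha=\pi/4$, $\dot\gamma=0$ and the approximate $\Theta(t)$ quoted above, builds the corresponding fields, and verifies the inversion by integrating the Schr\"odinger equation with the Hamiltonian of Eq. (\ref{H0}):
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Minimal sketch (hbar = 1, arbitrary time unit) of invariant-based inversion.
T = 1.0
alpha = np.pi/4                                   # constant, so alpha_dot = 0
dTheta = lambda t: np.pi/T - (np.pi/(6*T))*np.cos(2*np.pi*t/T)

# With alpha constant and gamma_dot = 0, Eqs. (pot_R)-(pot_D) give
#   Omega_R = -sin(alpha)*dTheta,  Omega_I = cos(alpha)*dTheta,  Delta = 0.
Omega_R = lambda t: -np.sin(alpha)*dTheta(t)
Omega_I = lambda t:  np.cos(alpha)*dTheta(t)

def rhs(t, psi):
    Oc = Omega_R(t) + 1j*Omega_I(t)               # complex Rabi frequency
    H = 0.5*np.array([[0.0, np.conj(Oc)],         # Eq. (H0) with Delta = 0
                      [Oc,  0.0]])
    return -1j*(H @ psi)

psi0 = np.array([1.0 + 0j, 0.0 + 0j])             # start in |1>
sol = solve_ivp(rhs, [0.0, T], psi0, rtol=1e-10, atol=1e-12)
print("final population in |2>:", abs(sol.y[1, -1])**2)   # ~ 1: inversion
\end{verbatim}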
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=1\linewidth]{rev_fig_1.eps}
\caption{(Color online) Probability $P_2$ versus the noise-error and systematic-error parameters; optimal systematic-stability protocol (blue), optimal noise protocol (green).}
\label{general_ex}
\end{center}
\end{figure}
Additional work is required to extend the results of \citet{noise} to different types of noise and perturbations. Apart from the invariant-based approach, \citet{Lacour} have proposed robust trajectories in the adiabatic parameter space that maximize the population transfer for a two-level system subjected to dephasing. The robustness of the ``parallel adiabatic passage'' technique (keeping the eigenvalues of the Hamiltonian parallel \citep{PLAP3}) with respect to fluctuations of the phase, amplitude and pulse area was also analyzed in \citet{PLAP4}. \subsection{Three-level Systems} Transitionless driving for stimulated rapid adiabatic passage from level 1 to level 3 in a lambda configuration, with an intermediate state 2 and making use of pump and Stokes lasers, was studied in \citet{ShoreOC,Rice03,Rice05,Ch10b}. The fast-driving cd field couples levels $|1\rangle$ and $|3\rangle$ directly. This implies in general a weak magnetic dipole transition, which limits the ability of the field to shorten the times. Invariant-based engineering solves the problem by providing alternative shortcuts that do not couple levels $|1\rangle$ and $|3\rangle$ directly \citep{Chen3}, as discussed below. It should be noted, though, that in an optical analogy of STA used to engineer multimode waveguides all these schemes (with or without 1-3 coupling) may in principle be implemented \citep{SHAPEapp,Tseng} by computer-generated holograms. In this analogy, based on the paraxial approximation, space plays the role of time, so that the effect of the shortcuts is to shorten the length of the mode converters. In \citet{Chen3}, using two lasers on resonance with the 1-2 and 2-3 transitions, two single-mode protocols that make use of one eigenstate of the invariant are described. In these protocols full fidelity requires an infinite laser intensity, and shortening the time also implies an energy cost. The first protocol, based on simple sine and cosine functions for the pump and Stokes lasers, keeps the population of level 2 small. To achieve the same fidelity, less intensity is required in the second protocol, in which the intermediate level $|2\rangle$ is populated. Populating the intermediate level is usually problematic when its decay time is shorter than the process time. While this may be a serious drawback for a slow adiabatic process, it need not be for a fast shortcut. Protocols that populate level 2 may thus be considered as useful alternatives for certain systems and sufficiently short process times. In the previous two protocols the initial state is not exactly $|1\rangle$, to avoid a divergence in the Rabi frequency. A third, multi-mode wave-function protocol is also proposed in \citet{Chen3}, using the same fields as in the first protocol but with an initial state which is simply the bare state $|1\rangle$. It provides a much less costly shortcut, so exploring the multi-mode approach for this and other systems is an interesting task for future work. Inverse engineering of four-level systems has been considered in \citet{Yidun1}, where a full Lie-algebraic classification and a detailed construction of the dynamical invariants are provided.
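For completeness, and as a preview of the experimental protocols discussed in the Experiments subsection below, the following minimal Python sketch (arbitrary units and illustrative parameters; it is not a simulation of any of the cited experiments) compares a fast Landau-Zener sweep with and without the counterdiabatic term $\Omega_a$ of Eq. (\ref{toha}):
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Minimal sketch (hbar = 1, arbitrary units): Landau-Zener sweep with and
# without the counterdiabatic term. Illustrative parameters only.
T, OmegaR, Dmax = 1.0, 3.0, 40.0
Delta   = lambda t: Dmax*(2*t/T - 1)                         # linear sweep
dDelta  = 2*Dmax/T
Omega_a = lambda t: OmegaR*dDelta/(Delta(t)**2 + OmegaR**2)  # Omega_R constant

def H0(t):
    return 0.5*np.array([[-Delta(t), OmegaR],
                         [OmegaR,     Delta(t)]], dtype=complex)

def H(t, with_cd):
    Hcd = 0.5*np.array([[0, -1j*Omega_a(t)], [1j*Omega_a(t), 0]]) if with_cd else 0
    return H0(t) + Hcd

def run(with_cd):
    w, v = np.linalg.eigh(H0(0.0))
    psi0 = v[:, np.argmax(np.abs(v[0, :]))]    # eigenstate of H0(0) closest to |1>
    rhs = lambda t, psi: -1j*(H(t, with_cd) @ psi)
    sol = solve_ivp(rhs, [0.0, T], psi0.astype(complex), rtol=1e-10, atol=1e-12)
    return abs(sol.y[1, -1])**2                # final population in |2>

print("P2, bare Landau-Zener sweep:", run(False))
print("P2, with counterdiabatic term:", run(True))   # close to 1 even for this fast sweep
\end{verbatim}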
\subsection{Spintronics} Coherent spin manipulation in quantum dots is a key element in the state-of-the-art technology of spintronics. \citet{Ban} have considered the electric control of an electron spin in a quantum dot formed in a two-dimensional electron gas confined by the material composition, under a weak magnetic field, focusing on the spin flip in the doublet of the lowest orbital state. The influence of higher orbital states can be taken into account by the L\"{o}wdin partition technique, which reduces the full Hamiltonian to an effective two-level one in which the matrix elements depend on the electric-field components. Using invariant-based inverse engineering, the time-dependent electric fields are designed so as to flip the spin rapidly and avoid decoherence effects. The results are stable with respect to environmental noise and to device-dependent noise, and may open new possibilities for high-fidelity spin-based quantum information processing. \subsection{Experiments} The counterdiabatic or transitionless approach described in Secs. \ref{secCD}, 2.4, and \ref{poin} has recently been applied to invert the population of different two-level systems. In \cite{Oliver} the effective two-level system is realized by a condensate in the bands of an accelerated optical lattice \cite{Oliver0}. Writing the Hamiltonian in Cartesian-like coordinates as $H=X\sigma_x+Y\sigma_y+Z\sigma_z$, $X$ may be controlled by the trap depth, $Z$ by the lattice acceleration \cite{Oliver0}, and $Y$ could in principle be implemented by a second, shifted lattice. The counterdiabatic term in Eqs. (\ref{28}) or (\ref{toha}) is of the form $Y\sigma_y$, whose realization is cumbersome in this setting. The alternative was to perform a unitary transformation that leads to the same final state while modifying the original $X$ and $Z$ terms. This manipulation, discussed in Sec. 2.4 (see Eq. (\ref{u3})), was interpreted as a $Z$-rotation in \cite{Sara12}, where it is compared to the one based on the first-order superadiabatic cd-term $H_{cd}^{(1)}$.\footnote{The use of the term ``superadiabatic'' in \cite{Oliver} is broader than the one in Sec. 2.4.} A Landau-Zener protocol and a ``tangent'' protocol, with a tangent function for $Z$ unaffected by the rotation, are used as references, the latter being found to be very robust versus a simulated variation of the control parameters. In \cite{expcd} the two-level system is a single nitrogen-vacancy center in diamond controlled by time-dependent microwave fields. The reference process is a Landau-Zener transition, and the $Y\sigma_y$ cd-term is implemented by a field oscillating $\pi/2$ radians out of phase with respect to the field that provides the $X\sigma_x$ term. As the maximal value of the total field amplitude is bounded, in this case to avoid undesired transitions, a ``rapid-scan'' approach is implemented to shorten the protocol time: the protocol is divided into a discrete set of time segments with varying phase, and the duration of each segment is adjusted so that the maximal allowed amplitude is applied. \section{Wavepacket Splitting} Splitting a wavefunction without exciting it is important in matter-wave interferometry \citep{S07,S09a,S09b,Augusto}. For linear waves, described by the Schr\"odinger equation, this is a peculiar operation, as adiabatic following is not robust but unstable with respect to a small asymmetry of the external potential \citep{JGB,split}.
The ground-state wavefunction ``collapses'' into the slightly lower well, so that a very slow bifurcation of the trap potential fails to split the wave except for perfectly symmetrical potentials. A fast bifurcation with a rapidly growing separating potential succeeds in splitting the wave, but at the price of a strong excitation. STA that speed up the adiabatic process along a non-adiabatic route overcome these problems \citep{split}. Numerical modelling shows that wave splitting via shortcuts is significantly more stable than adiabatic following with respect to asymmetric perturbations, and that it avoids the final excitation. Specifically, \citet{split} use the streamlined version \citep{ErikFF} of the fast-forward technique of \citet{MNProc}, see Sec. \ref{secmethod}, applied to the Gross-Pitaevskii or Schr\"odinger equations, after having found some obstacles to applying the invariant-based method (the eigenvectors of quadratic-in-momentum invariants do not satisfy the required boundary conditions \citep{ErikFF}) and the transitionless-driving algorithm \citep{Rice03} (because of difficulties in implementing the counterdiabatic terms in practice). The following discussion refers to the Schr\"odinger equation, except for a final comment on the GPE. {\it{Fast-forward approach.}---} To apply the FF approach the density $r(x,t)$ must be designed first. Assume the splitting of an initial single Gaussian $f(x,0)=e^{-x^2/(2 a_0^2)}$, where $a_0=\sqrt{\hbar/(m\omega)}$ is the width of the ground state of a harmonic oscillator of angular frequency $\omega$, into a final double Gaussian $f(x,t_f)=e^{-(x-x_f)^2/(2a_0^2)}+e^{-(x+x_f)^2/(2a_0^2)}$. The interpolation
\begin{equation}
\label{ansatzrbueno}
r(x,t)=z(t)[e^{- (x-x_0(t))^2/(2a_0^2)}+e^{- (x+x_0(t))^2/(2a_0^2)}],
\end{equation}
where $z(t)$ is a normalization function, generates simple $Y$-shaped potentials. The conditions $\dot{x}_0(0)=\dot{x}_0(t_f)=0$ are imposed, so that $\dot r=0$ at the boundary times. In \citet{split} $x_0(s)=x_f(3s^2-2 s^3)$, where $s=t/t_f$, is chosen for simplicity, and Eq. (\ref{imag0}) is solved with the initial conditions to obtain the FF potential from Eq. (\ref{real}). {\it{Effect of the perturbation}.---} The effects of an asymmetric perturbation may be studied with the potential $V_{\lambda}=V_{FF}+\lambda\theta(x)$, where $\theta$ is the step function and $V_{FF}$ is the potential obtained via Eqs. (\ref{real}), (\ref{imag0}), and (\ref{ansatzrbueno}) with $\lambda=0$. The goal is to find a stable time-dependent protocol that, even without knowing the value of $\lambda$, is able to produce the split state. {\it{Moving two-mode model}.---} Static two-mode models have been used before to analyze splitting processes \citep{Javanainen99,S09b,Aichmayr}. \citet{split} consider instead a two-level model with moving left and right basis functions to provide analytical estimates and insight, as a complement to the more detailed FF approach. Assume first the (symmetrical and orthogonal) moving left and right bare basis states $|L(t)\rangle = {0 \choose 1}$, $|R(t)\rangle ={1\choose 0}$, and a corresponding two-mode Hamiltonian model
\begin{equation}
\label{H_tm}
H(t)=\frac{1}{2} \left ( \begin{array}{cc}
\lambda & -\delta(t)\\
-\delta(t)& -\lambda
\end{array} \right),
\end{equation}
where $\delta(t)/\hbar$ is the tunneling rate \citep{Javanainen99,S09b} and $\lambda$ the energy difference between the two wells \citep{Aichmayr}.
We may simply consider $\lambda$ constant throughout a given splitting process and equal to the perturbative parameter that defines the asymmetry. Thus, the instantaneous eigenvalues are $E^{\pm}_\lambda(t)=\pm \frac{1}{2} \sqrt{\lambda^2+\delta^2(t)}$, and the normalized eigenstates are
\begin{equation}
\label{eigenstates_tm}
\begin{array}{l}
|\psi^+_\lambda(t)\rangle = \sin\left( \frac{\alpha}{2} \right) |L(t)\rangle-\cos\left( \frac{\alpha}{2} \right)|R(t)\rangle, \\
|\psi^-_\lambda(t)\rangle = \cos\left( \frac{\alpha}{2} \right)|L(t)\rangle+\sin\left( \frac{\alpha}{2} \right)|R(t)\rangle,
\end{array}
\end{equation}
where $\tan \alpha = \delta (t)/\lambda$ defines the mixing angle. When $\left \{ |L(t)\rangle,|R(t)\rangle \right\}$ are close enough initially (and $\delta(0)\gg\lambda$), the instantaneous eigenstates of $H$ are close to the symmetric ground state $|\psi^{-}_{0}(0)\rangle=\frac{1}{\sqrt{2}}(|L(0)\rangle+|R(0)\rangle)$ and the antisymmetric excited state $|\psi^{+}_{0}(0)\rangle=\frac{1}{\sqrt{2}}(|L(0)\rangle-|R(0)\rangle)$ of the single well. At $t_f$ two extreme regimes may be distinguished: {\it{i)}} For $\delta(t_f)\gg\lambda$ the final eigenstates of $H$ tend to $|\psi^{\mp}_{\lambda}(t_f)\rangle=\frac{1}{\sqrt{2}}(|L(t_f)\rangle\pm |R(t_f)\rangle)$, which correspond to the symmetric and antisymmetric split states. {\it{ii)}} For $\delta(t_f)\ll\lambda$ the final eigenfunctions of $H$ collapse and become right- and left-localized states: $|\psi^{-}_{\lambda}(t_f)\rangle=|L(t_f)\rangle$ and $|\psi^{+}_{\lambda}(t_f)\rangle=|R(t_f)\rangle$. Since $\delta(t_f)$ is set to a small value to avoid tunnelling in the final configuration, the transition from one regime to the other explains the collapse of the ground-state wave function into one of the wells at small $\lambda\approx\delta(t_f)$. {\it{Dynamics of the two-mode model}.---} In a moving-frame interaction picture the wave function $\psi^A=A^\dagger\psi^S$, where $A=\sum_{\beta=L,R} |\beta (t)\rangle\langle \beta(0)|$ and $\psi^S$ is the Schr\"odinger-picture wave function, obeys $i\hbar\dot{\psi}^A=(H_A -K_A) \psi^A$, with $H_A=A^\dagger H A$ and $K_A=i\hbar A^\dagger \dot{A}$. For real $\langle x|R(t)\rangle$ and $\langle x|L(t)\rangle$, the symmetry $\langle x|R(t)\rangle=\langle -x|L(t)\rangle$ makes $K_A=0$. We may invert Eq. (\ref{eigenstates_tm}) to write the bare states in terms of the ground and excited states, and obtain $\delta (t)$ from Eq. (\ref{H_tm}). The actual dynamics is approximated by identifying $|\psi^{\pm}_0(t)\rangle$ and $E^{\pm}_0(t)$ with the instantaneous ground and excited states and energies of the unperturbed FF Hamiltonian. They are combined to compute the bare basis in coordinate representation, and with them the matrix elements $\langle\beta'|H_{\lambda}|\beta\rangle$. The dynamics in the moving frame for the two-mode Hamiltonian may then be solved. {\it{Sudden approximation}.---} The behaviour at low $\lambda$ may be understood with the sudden approximation \citep{Messiah}.
Its validity requires \citep{Messiah} $t_f \ll \hbar/\Delta \overline{H_A}$, where $\Delta \overline{H_A}=\sqrt{\langle\psi(0)|\overline{H_A}^2|\psi(0)\rangle-\langle\psi(0)|\overline{H_A}|\psi(0)\rangle^2}$ and $\overline{H_A}=\frac{1}{t_f}\int_{0}^{t_f}dt'\,H_A(t')$. With $|\psi(0)\rangle=|\psi_{0}^{-}(0)\rangle$ the condition to apply the sudden approximation becomes $\lambda \ll \frac{2\hbar}{t_f}$. In this regime the dynamical wave function $\psi(t_f)$ is not affected by the perturbation and becomes the ideal split state $\psi^-_0(t_f)$, up to a phase factor. The previous results may be extended to weakly non-linear Bose-Einstein condensates for which $g_1/(a_0\lambda)\ll 1$. Otherwise, the instability of adiabatic splitting with respect to perturbations is strongly suppressed by the compensating effect of the non-linear term \citep{split}. Of course, the shortcuts would still be useful if the time is to be reduced. \section{Discussion} We have presented an overview of recent work on shortcuts to adiabaticity (STA), covering a broad span of methods and physical systems. STA offer many promising research and application avenues with practical and fundamental implications. Several pending tasks have been described along the text. We add here some more: to extend the set of basic physical limitations and laws for fast processes in specific operations, taking into account different constraints; to generate simple, viable shortcuts making systematic use of symmetries; to enhance robustness versus different types of noise and perturbations; to perform inverse engineering with invariants beyond the quadratic-in-momentum family; to develop shortcuts for adiabatic computing, and in general for Hamiltonians that cannot be easily diagonalized, as in \cite{Cal11}; and to design or supplement STA with optimal control theory methods. We have seen some examples, but many other optimization problems remain unexplored. Indeed, STA open interesting prospects to improve or make realizable quantum information and technology operations, by implementing new fast and robust transport or expansion approaches, internal-state manipulations, and cooling protocols; nuclear magnetic resonance is another field where the development of ideal pulses may benefit from STA. STA could also be useful beyond single- or many-body quantum systems, e.g. to build short optical mode converters or to design mechanical operations with nanoparticles and mesoscopic or macroscopic objects. In classical mechanics there are many examples of adiabatic evolution that may be shortcut. The application of this concept to the manipulation of interacting classical gases also remains an open question. We have witnessed in a few years a surge of activity and applications that could hardly have been predicted. Researchers' creativity will likely continue to surprise us at the stimulating crossroads of STA with new, unexpected concepts and applications. {\it Acknowledgment.---} We are grateful to D. Alonso, Y. Ban, M. Berry, M. G. Boshier, B. Damski, J. Garc\'ia-Ripoll, J.-S. Li, G. C. Hegerfeldt, R. Kosloff, M. A. Mart\'\i n-Delgado, I. Lizuain, D. Porras, M. B. Plenio, M. Rams, L. Santos, S. Schmidt, E. Sherman, D. Stefanatos, E. Timmermans, and W. H. Zurek. We acknowledge funding by Grants No.
IT472-10, FIS2009-12773-C02-01, 61176118, NSF PHY11-25915, BFI09.39, BFI08.151, 12QH1400800, the UPV/EHU Program UFI 11/55, the U.S. Department of Energy through the LANL/LDRD Program, a LANL J. Robert Oppenheimer fellowship (A.d.C.), and a UPV/EHU fellowship (S.M.G.). A.d.C. is grateful to KITP for hospitality.
\begin{thebibliography}{99}
\bibitem[Aichmayr(2010)]{Aichmayr} Aichmayr, A., 2010. Analyzing the Dynamics of an atomic Bose-Einstein-Condensate within a Two-Mode Model, Bachelor Thesis, Institut f\"ur Physik, Karl-Franzens-Universit\"at Graz.
\bibitem[Allen and Eberly(1987)]{Allen} Allen, L., Eberly, J. H., 1987. {\it Optical Resonance and Two-Level Atoms} (Dover, New York).
\bibitem[Anandan and Aharonov(1990)]{AA} Anandan, J., Aharonov, Y., 1990. Geometry of quantum evolution. Phys. Rev. Lett. {65}, 1697.
\bibitem[Andresen et al.(2011)]{optimal_control} Andresen, B., Hoffmann, K. H., Nulton, J., Tsirlin, A., Salamon, P., 2011. Optimal control of the parametric oscillator. Eur. J. Phys. {32}, 827.
\bibitem[Ban et al.(2012)]{Ban} Ban, Y., Chen, X., Sherman, E. Y., Muga, J. G., 2012. Fast and robust spin manipulation in a quantum dot by electric fields. Phys. Rev. Lett. 109, 206602.
\bibitem[Band et al.(2002)]{BMT02} Band, Y. B., Malomed, B., Trippenbach, M., 2002. Adiabaticity in nonlinear quantum dynamics: Bose-Einstein Condensate in a time-varying box. Phys. Rev. A 65, 033607.
\bibitem[Barreiro et al.(2011)]{Barreiro11} Barreiro, J. T., M\"uller, M., Schindler, P., Nigg, D., Monz, T., Chwalla, M., Hennrich, M., Roos, C. F., Zoller, P., Blatt, R., 2011. An open-system quantum simulator with trapped ions. Nature 470, 486.
\bibitem[Bason et al.(2012)]{Oliver} Bason, M. G., Viteau, M., Malossi, N., Huillery, P., Arimondo, E., Ciampini, D., Fazio, R., Giovannetti, V., Mannella, R., Morsch, O., 2012. High-fidelity quantum driving. Nat. Phys. {8}, 147.
\bibitem[Bender et al.(2007)]{qb} Bender, C. M., Brody, D. C., Jones, H. F., Meister, B. K., 2007. Faster than Hermitian Quantum Mechanics. Phys. Rev. Lett. {98}, 040403.
\bibitem[Bergmann et al.(1998)]{Bergmann} Bergmann, K., Theuer, H., Shore, B. W., 1998. Coherent population transfer among quantum states of atoms and molecules. Rev. Mod. Phys. {70}, 1003.
\bibitem[Berry and Klein(1984)]{boxlaws1} Berry, M. V., Klein, G., 1984. Newtonian trajectories and quantum waves in expanding force fields. J. Phys. A 17, 1805.
\bibitem[Berry(1987)]{Berrysa} Berry, M. V., 1987. Quantum phase corrections from adiabatic iteration. Proc. R. Soc. London, Ser. A {414}, 31.
\bibitem[Berry(1990)]{Berry90} Berry, M. V., 1990. Histories of adiabatic quantum transitions. Proc. R. Soc. Lond. A {429}, 61.
\bibitem[Berry(2009)]{Berry09} Berry, M. V., 2009. Transitionless quantum driving. J. Phys. A: Math. Theor. {42}, 365303.
\bibitem[Bertrand and Bertrand(1987)]{BB87} Bertrand, J., Bertrand, P., 1987. A tomographic approach to Wigner's function. Found. Phys. 17, 397.
\bibitem[Blakestad et al.(2009)]{Wineland} Blakestad, R. B., Ospelkaus, C., VanDevender, A. P., Amini, J. M., Britton, J., Leibfried, D., Wineland, D. J., 2009. High-Fidelity Transport of Trapped-Ion Qubits through an X-Junction Trap Array. Phys. Rev. Lett. {102}, 153002.
\bibitem[Bloch et al.(2008)]{BDZ08} Bloch, I., Dalibard, J., Zwerger, W., 2008. Many-Body Physics with Ultracold Gases. Rev. Mod. Phys. 80, 885.
\bibitem[Bowler et al.(2012)]{Bowler} Bowler, R., Gaebler, J., Lin, Y., Tan, T. R., Hanneke, D., Jost, J. D., Home, J. P., Leibfried, D., Wineland, D. J., 2012.
Coherent Diabatic Ion Transport and Separation in a Multi-Zone Trap Array. Phys. Rev. Lett. 109, 080502. \bibitem[Born and Fock(1928)]{BF28}Born, M., Fock, V. A., 1928. Beweis des Adiabatensatzes. Zeitschrift f\"ur Physik A 51, 165. \bibitem[Boyer et al.(2006)]{modu} Boyer, V., Godun, R. M., Smirne, G., Cassettari, D., Chandrashekar, C. M., Deb, A. B., Laczik, Z. J., Foot C. J., 2006. Dynamic Manipulation of Bose-Einstein Condensates with a Spatial Light Modulator. Phys. Rev. A 73, 031402(R). \bibitem[Bujan et al.(2008)]{BPG08} Buljan, H., Pezer, R., Gasenzer, T., 2008. Fermi-Bose Transformation for the Time-Dependent Lieb-Liniger Gas. Phys. Rev. Lett. 100, 080406. \bibitem[Calarco et al.(2000)]{Calarco2000} Calarco, T., Hinds, E. A., Jaksch, D., Schmiedmayer, J., Cirac, J. I., Zoller, P., 2000. Quantum gates with neutral atoms: Controlling collisional interactions in time-dependent traps. Phys. Rev. A {61}, 022304. \bibitem[Castin and Dum(1996)]{CD96} Castin, Y., Dum. R., 1996. Bose-Einstein Condensates in Time Dependent Traps. Phys. Rev. Lett. 77, 5315. \bibitem[Chen et al.(2009)]{boxlaws3} Chen, X., Muga, J. G., del Campo, A., Ruschhaupt, A., 2009. Atom cooling by non-adiabatic expansion. Phys. Rev. A 80, 063421. \bibitem[Chen et al.(2010a)]{Shan}Chen, D., Zhang, H., Xu, X., Li, T., Wang, Y., 2010a. Nonadiabatic transport of cold atoms in a magnetic quadrupole potential. Appl. Phys. Lett. {96}, 134103. \bibitem[Chen et al.(2010b)] {Ch10} Chen, X., Ruschhaupt, A., Schmidt, S., del Campo, A., Gu\'ery-Odelin, D., Muga, J. G., 2010b. Fast Optimal Frictionless Atom Cooling in Harmonic Traps: Shortcut to Adiabaticity. Phys. Rev. Lett. {104}, 063002. \bibitem[Chen et al.(2010c)]{Ch10b} Chen, X., Lizuain, I., Ruschhaupt, A., Gu\'{e}ry-Odelin, D., Muga, J. G., 2010c. Shortcut to adiabatic passage in two and three level atoms. Phys. Rev. Lett. { 105}, 123003. \bibitem[Chen and Muga(2010)]{energy} Chen, X., Muga, J. G., 2010. Transient energy excitation in shortcuts to adiabaticity for the time-dependent harmonic oscillator. Phys. Rev. A {82}, 053403. \bibitem[Chen et al.(2011a)]{Chen11} Chen, X., Torrontegui, E., Muga, J. G., 2011a. Lewis-Riesenfeld invariants and transitionless quantum driving. Phys. Rev. A {83}, 062116. \bibitem[Chen et al.(2011b)]{OCTtrans} Chen, X., Torrontegui, E., D. Stefanatos, J. -S. Li and Muga, J. G., 2011b. Optimal trajectories for efficient atomic transport without final excitation. Phys. Rev. A {84}, 043415. \bibitem[Chen and Muga(2012)]{Chen3} Chen, X., Muga, J. G., 2012. Engineering of fast population transfer in three-level systems. Phys. Rev. A. 86, 033405. \bibitem[Choi, Onofrio and Sundaram(2011)]{Onofrio11} Choi, S., Onofrio, R., Sundaram, B., 2011. Optimized sympathetic cooling of atomic mixtures via fast adiabatic strategies. Phys. Rev. A 84, 051601(R). \bibitem[Choi et al.(2012)]{Onofrio12} Choi, S., Onofrio, R., Sundaram, B., 2012. Squeezing and robustness of frictionless cooling strategies. Phys. Rev. A 86, 043436. \bibitem[Collin et al.(2004)]{Collin} Collin, E., Ithier, G., Aassime, A., Joyez, P., Vion, D., Esteve, D. 2004. NMR-like Control of a Quantum Bit Superconducting Circuit. Phys. Rev. Lett. {93}, 157005. \bibitem[Couvert et al.(2008a)]{David} Couvert, A.,Kawalec, T., Reinaudi, G., Gu\'ery-Odelin, D., 2008a. Optimal transport of ultracold atoms in the non-adiabatic regime. Europhys. Lett. {83}, 13001. \bibitem[Couvert et al.(2008b)]{David2} Couvert, A., Jeppesen, M., Kawalec, T., Reinaudi, G., Mathevet, R., Gu\'ery-Odelin, D., 2008b. 
A quasi-monomode guided atom laser from an all-optical Bose-Einstein condensate Europhys. Lett. {83}, 50001. \bibitem[Crick et al.(2010)]{Penning} Crick, D. R., Donnellan, S., Ananthamurthy, S., Thompson, R. C., Segal, D. M., 2010. Fast shuttling of ions in a scalable Penning trap array. Rev. Sci. Instr. {81}, 013111. \bibitem[Dalfovo et al.(1999)]{Dalfovo} Dalfovo, F., Giorgini, S., Pitaevskii, L. P., Stringari, S., 1999. Theory of Bose-Einstein condensation in trapped gases. Rev. Mod. Phys. {71}, 463. \bibitem[del Campo and Muga(2006)]{DM06} del Campo, A., Muga, J. G., 2006. Dynamics of a Tonks-Girardeau gas released from a hard-wall trap. Europhys. Lett. 74, 965. \bibitem[del Campo(2008)]{delcampo08} del Campo, A., 2008. Fermionization and bosonization of expanding one-dimensional anyonic fluids. Phys. Rev. A 78, 045602. \bibitem[del Campo et al.(2008a)]{DMK08} del Campo, A., Muga, J. G., Kleber, M., 2008a. Quantum matter wave dynamics with moving mirrors. Phys. Rev. A 77, 013608. \bibitem[del Campo et al.(2008b)]{DMM08} del Campo, A., Man'ko, V. I., Marmo, G., 2008b. Symplectic tomography of ultracold gases in tight-waveguides. Phys. Rev. A 78, 025602. \bibitem[del Campo et al.(2009)]{DGCM09} del Campo, A., Garc\'ia-Calder\'on, G., Muga, J. G., 2009. Quantum transients. Phys. Rep. 476, 1. \bibitem[del Campo(2011a)]{delcampo11a} del Campo, A., 2011a. Fast frictionless dynamics as a toolbox for low-dimensional Bose-Einstein condensates. EPL 96, 60005. \bibitem[del Campo(2011b)]{delcampo11} del Campo, A., 2011b. Frictionless quantum quenches in ultracold gases: a quantum dynamical microscope. Phys. Rev. A 84, 031606(R). \bibitem[del Campo and Boshier(2012)]{DB12} del Campo, A., Boshier, M. G., 2012. Shortcuts to adiabaticity in a time-dependent box. Sci. Rep. 2, 648. \bibitem[del Campo et al.(2012)]{DRZ12} del Campo, A., Rams, M. M., Zurek, W. H., 2012. Assisted finite-rate adiabatic passage across a quantum critical point: Exact solution for the quantum Ising model. Phys. Rev. Lett. 109, 115703. \bibitem[Demirplak and Rice(2003)]{Rice03}Demirplak, M., Rice, S. A., 2003. Adiabatic Population Transfer with Control Fields. J. Phys. Chem. A {107}, 9937. \bibitem[Demirplak and Rice(2005)]{Rice05}Demirplak, M., Rice, S. A., 2005. Assisted Adiabatic Passage Revisited. J. Phys. Chem. B {109}, 6838. \bibitem[Demirplak and Rice(2008)]{Rice08}Demirplak, M., Rice, S. A., 2008. On the consistency, extremal, and global properties of counterdiabatic fields. J. Chem. Phys. {129}, 154111. \bibitem[Deschamps et al.(2008)]{NMR} Deschamps, M., Kervern, G., Massiot, D., Pintacuda, G., Emsley, L., Grandinetti, P. J., 2008. Superadiabaticity in Magnetic Resonance. J. Chem. Phys. {129}, 204110. \bibitem[Deuretzbacher et al.(2008)]{etonks3} Deuretzbacher, F., Fredenhagen, K., Becker, D., Bong, K., Sengstock, K., Pfannkuche, D., 2008. Exact Solution of Strongly Interacting Quasi-One-Dimensional Spinor Bose Gases. Phys. Rev. Lett. 100, 160405. \bibitem[Dhara and Lawande(1984)]{DL} Dhara, A. K., Lawande, S. W., 1984. Feynman propagator for time-dependent Lagrangians possessing an invariant quadratic in momentum. J. Phys. A { 17}, 2423. \bibitem[Dodonov et al.(1993)]{boxlaws2} Dodonov, V. V., Klimov, A. B., Nikonov, D. E., 1993. Quantum particle in a box with moving walls. J. Math. Phys. 34, 3391. \bibitem[Dridi et al.(2009)]{PLAP3} Dridi, G., Gu\'erin, S., Hakobyan, V., Jauslin, H. R., Eleuch, H., 2009. Ultrafast stimulated Raman parallel adiabatic passage by shaped pulses. Phys. Rev. A {80}, 043408. 
\bibitem[Dziarmaga et al.(2010)]{Dziarmaga10} Dziarmaga, J., 2010. Dynamics of a Quantum Phase Transition and Relaxation to a Steady State. Adv. Phys. 59, 1063. \bibitem[Engels et al.(2007)]{Engels07} Engels, P., Atherton, C., Hoefer, M. A., 2007. Observation of Faraday Waves in a Bose-Einstein Condensate. Phys. Rev. Lett. 98, 095301. \bibitem[Ermakov(1880)]{Ermakov}Ermakov, V. P., 1880. Second order differential equations. Conditions of complete integrability. Univ. Izv. Kiev. Series III {9}, 1. Translated in 2008. Appl. Anal. Discrete Math. 2, 123. \bibitem[Fasihi et al.(2012)]{Yidun}Fasihi, M.-A., Wan, Y., Nakahara, M., 2012. Non-adiabatic Fast Control of Mixed States Based on Lewis-Riesenfeld Invariant. J. Phys. Soc. Jap. 81, 024007. \bibitem[Feldmann and Kosloff(2012)]{KoPRE12}Feldmann, T., Kosloff, R., 2012 Short time cycles of purely quantum refrigerators. Phys. Rev. E 85, 051114. \bibitem[Friedman et al.(2001)]{prepainters2} Friedman, N., Kaplan, A., Carasso, D., Davidson, N., 2001. Observation of chaotic and regular dynamics in atom-optics billiards. Phys. Rev. Lett. 86, 1518. \bibitem[Friesch et al.(2000)]{Schleich1} Friesch, O. M., Marzoli, I., Schleich, W. P., 2000. Quantum carpets woven by Wigner functions. New J. Phys. 2, 4. \bibitem[Gao et al.(1991)]{Gao91} Gao, X. C., Xu, J. B., Qian, T. Z., 1991. Geometric phase and the generalized invariant formulation. Phys. Rev. A {44}, 7016. \bibitem[Gao et al.(1992)]{Gao92} Gao, X. C., Xu, J. B., Qian, T. Z., 1992. Invariants and geometric phase for systems with non-Hermitian time-dependent Hamiltonians. Phys. Rev. A {46}, 3626. \bibitem[Garrido(1964)]{Garrido}Garrido, L. M., 1964. Generalized Adiabatic Invariance. J. Math. Phys. {5}, 355. \bibitem[Gea-Banaloche(2002)]{JGB} Gea-Banacloche, J., 2002. Splitting the wave function of a particle in a box. Am. J. Phys {70}, 3. \bibitem[Gerasimov and Kazarnovskii(1976)]{GK76} Gerasimov, A. S., Kazarnovskii, M. V. 1976. Possibility of observing nonstationary quantum-mechanical effects by means of ultracold neutrons. Sov. Phys. JETP 44, 892. \bibitem[Godoy(2002)]{Godoy02} Godoy, S., 2002. Diffraction in time: Fraunhofer and Fresnel dispersion by a slit. Phys. Rev. A 65, 042111. \bibitem[Godoy(2003)]{Godoy03} Godoy, S., 2003. Diffraction in time of particles released from spherical traps. Phys. Rev. A 67, 012102. \bibitem[Greiner et al.(2001)]{HanschPRA2001}Greiner, M., Bloch, I., H\"ansch, T. W., and Esslinger, T., 2001. Magnetic transport of trapped cold atoms over a large distance. Phys. Rev. A 63, 031401. \bibitem[Gritsev et al.(2010)]{GBD10} Gritsev, V., Barmettler, P., Demler, E., 2010. Scaling approach to quantum non-equilibrium dynamics of many-body systems. New J. Phys. 12, 113005. \bibitem[Grond et al.(2009a)]{S09a} Grond, J., Schmiedmayer, J., Hohenester, U., 2009a. Optimizing number squeezing when splitting a mesoscopic condensate. Phys. Rev. A {79}, 021603. \bibitem[Grond et al.(2009b)]{S09b} Grond, J., von Winckel, G., Schmiedmayer, J., Hohenester, U., 2009b. Optimal control of number squeezing in trapped Bose-Einstein condensates. Phys. Rev. A {80}, 053625. \bibitem[Gul\'erin et al.(2002)]{PLAP1} Gu\'erin, S., Thomas, S., Jauslin, H. R., 2002. Optimization of population transfer by adiabatic passage. Phys. Rev.A {65}, 023409. \bibitem[Gul\'erin et al.(2011)]{PLAP4} Gu\'erin, S., Hakobyan, V., Jauslin, H. R., 2011. Optimal adiabatic passage by shaped pulses: Efficiency and robustness. Phys. Rev. A {84}, 013423. 
\bibitem[G\"ung\"ord\"u et al.(2012)]{Yidun1} G\"ung\"ord\"u, U., Wan, Y., Fasihi, M.A., Nakahara, M. Dynamical Invariants of Four-Level Systems. arXiv:1205.3034. \bibitem[H\"ansel et al.(2001)]{H_Nature} H\"ansel, W., Hommelhoff, P., H\"ansch, T. W., Reichel, J., 2001. Bose-Einstein condensation on a microelectronic chip. {Nature} 413, 498-501. \bibitem[Henderson et al.(2009)]{painters} Henderson, K., Ryu, C., MacCormick, C., Boshier, M. G., 2009. Experimental demonstration of painting arbitrary and dynamic potentials for Bose-Einstein condensates. New J. Phys. 11, 043030. \bibitem[Hoffmann et al.(2011)]{EPL11} Hoffmann, K. H., Salamon, P., Rezek Y., Kosloff, R., 2011. Time-optimal controls for frictionless cooling in harmonic traps. EuroPhys. Lett. {96}, 60015. \bibitem[Hohenester et al.(2007)]{S07} Hohenester, U., Rekdal, P. K., Borzi, A., Schmiedmayer, J., 2007. Optimal quantum control of Bose Einstein condensates in magnetic microtraps. Phys. Rev. A {75}, 023602. \bibitem[Huber et al.(2008)]{SK} Huber, G., Deuschle, T., Schnitzler, W., Reichle, R., Singer, K., Schmidt-Kaler, F., 2008. Transport of ions in a segmented linear Paul trap in printed-circuit-board technology. New J. Phys. {10}, 013004. \bibitem[Ib\'{a}\~{n}ez et al.(2011)]{NHSara} Ib\'a\~nez, S., Mart\'{\i}nez-Garaot, S., Chen, X., Torrontegui E., Muga, J. G., 2011a. Shortcuts to adiabaticity for non-Hermitian systems. {Phys. Rev. A} {84}, 023415. \bibitem[Ib\'{a}\~{n}ez et al.(2012a)]{Sara12} Ib\'{a}\~{n}ez, S., Chen, X., Torrontegui, E., Muga, J. G., Ruschhaupt, A., 2012a. Multiple Schr\"odinger pictures and dynamics in shortcuts to adiabaticity. {Phys. Rev. Lett.} 109, 100403. \bibitem[Ib\'{a}\~{n}ez et al.(2012b)]{erratum} Ib\'a\~nez, S., Mart\'{\i}nez-Garaot, S., Chen, X., Torrontegui E., Muga, J. G., 2012b. Erratum: Shortcuts to adiabaticity for non-Hermitian systems [Phys. Rev. A 84, 023415 (2011)]. {Phys. Rev. A} 86, 019901(E). \bibitem[Javanainen and Ivanov(1999)]{Javanainen99}Javanainen J., Ivanov, M. Y., 1999. Splitting a trap containing a Bose-Einstein condensate. Atom number fluctuations. Phys. Rev. A {60}, 2351. \bibitem[Juli\'a-D\'\i az et al.(2012)]{JJ} Juli\'a-D\'\i az, B., Torrontegui, E., Martorell, J., Muga, J. G., Polls, A., 2012. Fast generation of spin-squeezed states in bosonic Josephson junctions. arXiv:1207.1483v1. \bibitem[Kagan, Surkov and Shlyapnikov(1996)]{KSS96} Kagan. Y., Surkov, E. L., Shlyapnikov, G. V., 1996. Evolution of a Bose-condensed gas under variations of the confining potential. Phys. Rev. A 54, R1753. \bibitem[Keilmann et al.(2011)]{KLMR11} Keilmann, T., Lanzmich, S., McCulloch, I., Roncaglia, M., 2011. Statistically induced phase transitions and anyons in 1D optical lattices. {Nature Communications} 2, 361. \bibitem[Khaykovich et al.(2002)]{Khaykovich} Khaykovich, L., Schreck, F., Ferrari, G., Bourdel, T., Cubizolles, J., Carr, L. D., Castin, Y., Salomon, C., 2002. Formation of a Matter-Wave Bright Soliton. Science 296, 1290. \bibitem[Kosloff and Feldmann(2010)]{KoPRE10}Kosloff, R., Feldmann, T., 2010.Optimal performance of reciprocating demagnetization quantum refrigerators. Phys. Rev. E 82, 011134. \bibitem[Kuhr et al.(2001)]{Meschede} Kuhr, S., Alt, W., Schrader, D., M\"uller, M., Gomer, V., Meschede, D., 2001. Deterministic Delivery of a Single Atom. Science {293}, 278. \bibitem[Kurtsiefer et al.(1997)]{KPM97} Kurtsiefer, Ch., Pfau, T., Mlynek, J., 1997. Measurement of the Wigner function of an ensemble of helium atoms. {Nature} 386, 150. 
\bibitem[Lacour et al.(2008)]{Lacour} Lacour, X., Gu\'{e}rin, S., Jauslin, H. R., 2008. Optimized adiabatic passage with dephasing. {Phys. Rev. A} {78}, 033417. \bibitem[Lahaye et al.(2006)]{Lahaye} Lahaye, T., Reinaudi, G., Wang, Z., Couvert, A., Gu\'ery-Odelin, D., 2006. Transport of Atom Packets in a Train of Ioffe-Pritchard Traps. Phys. Rev. A {74} 033622. \bibitem[Lau and James(2012)]{simu}Lau, H-K., James, D. F. V., 2012. Proposal for a scalable universal bosonic simulator using individually trapped ions. Phys. Rev. A 85, 062329. \bibitem[Leclerc et al.(2012)] {Jolicard} Leclerc, A., Viennot, D., Jolicard, G., 2012. The role of the geometric phases in adiabatic populations tracking for non-Hermitian Hamiltonians. J. Phys. A. 45, 415201. \bibitem[Lee et al.(2007)]{EK} Lee, D.-H., Zhang, G.-M., Xiang, T., 2007. Edge Solitons of Topological Insulators and Fractionalized Quasiparticles in Two Dimensions. Phys. Rev. Lett 99, 196805. \bibitem[Leonhardt and Schneider(1997)]{LS97} Leonhardt, U., Schneider, S., 1997. State reconstruction in one-dimensional quantum mechanics: The continuous spectrum. Phys. Rev. A 56, 2549. \bibitem[Levy and Kosloff(2012)]{KoPRL12}Levy, A., Kosloff, R., 2012. Quantum Absorption Refrigerator. Phys. Rev. Lett. 108, 070604. \bibitem[Lewvitt(1986)]{Levitt} Levitt, M. H., 1986. Composite Pulses. {Prog. Nucl. Magn. Reson. Spectrosc.} {18}, 61. \bibitem[Lewis(1967)]{Lewis1} Lewis, H. R., 1967. Classical and quantum systems with time-dependent harmonic-oscillator-type Hamiltonians. {Phys. Rev. Lett} {18}, 510. \bibitem[Lewis and Leach(1982)]{LL} Lewis, H. R., Leach, P. G., 1982. A direct approach to finding exact invariants for one-dimensional time-dependent classical Hamiltonians. J. Math. Phys. { 23}, 2371. \bibitem[Lewis and Riesenfeld(1969)]{LR} Lewis, H. R., Riesenfeld, W. B., 1969. An Exact Quantum Theory of the Time-Dependent Harmonic Oscillator and of a Charged Particle in a Time-Dependent Electromagnetic Field. J. Math. Phys. {10}, 1458. \bibitem[Li et al.(2011)]{Lianao} Li, Y., Wu, L. A., Wang, Z. D., 2011. Fast cooling of mechanical resonator with time-controllable optical cavities. Phys. Rev. A 83, 043804. \bibitem[Lin et al.(2012)]{SHAPEapp} Lin, T.-Y., Hsiao, F.-C., Jhang, Y.-W., Hu, C., S.-Y. Tseng, 2012. Mode conversion using optical analogy of shortcut to adiabatic passage in engineered multimode waveguides. Opt. Exp. 20, 24085. \bibitem[Lohe(009)]{Lohe} Lohe, M. A,, 2009. Exact time dependence of solutions to the time-dependent Schr\"odinger equation. J. Phys. A: Math. Theor. {42}, 035307. \bibitem[Juki\'c et al.(2008)]{Hrvoje1} Juki\'c, D., Pezer, R., Gasenzer, T., Buljan, H., 2008. Free expansion of a Lieb-Liniger gas: Asymptotic form of the wave functions. Phys. Rev. A 78, 053602. \bibitem[Masuda(2012)]{Masuda2012} Masuda, S., 2012. Acceleration of adiabatic transport of interacting particles and rapid manipulations of dilute Bose gas in ground state. arXiv:1208.5650 \bibitem[Masuda and Nakamura(2008)]{MNPra} Masuda S., Nakamura, K., 2008. Fast-forward problem in quantum mechanics. Phys. Rev. A {78}, 062108. \bibitem[Masuda and Nakamura(2010)]{MNProc} Masuda, S., Nakamura, K., 2010. Fast-forward of adiabatic dynamics in quantum mechanics. Proc. R. Soc. A { 466}, 1135. \bibitem[Masuda and Nakamura(2011)]{MN11} Masuda, S., Nakamura, K., 2011. Acceleration of adiabatic quantum dynamics in electromagnetic fields. Physical Review A {84}, 043434. \bibitem[Messiah(1999)]{Messiah}Messiah A., \textit{Quantum Mechanics} (Dover Publicatins, Inc. 
Mineola, New York, 1999), Vol. 2. \bibitem[Meyrath et al.(2005)]{becbox} Meyrath, T. P., Schreck, F., Hanssen , J. L., Chuu, C.-S., Raizen, M. G., 2005. Bose Einstein Condensate in a Box. Phys. Rev. A {71}, 041604(R). \bibitem[Milner et al.(2001)]{prepainters1} Milner, V., Hanssen, J. L., Campbell, W. C., Raizen, M. G., 2001. Optical billiards for atoms. Phys. Rev. Lett. 86, 1514. \bibitem[Minguzzi and Gangardt(2005)]{MG05} Minguzzi, A., Gangardt, D. M., 2005. Exact Coherent States of a Harmonically Confined Tonks-Girardeau Gas. Phys. Rev. Lett. 94, 240404. \bibitem[Minguzzi and Girardeau(2007)]{MG07} Minguzzi, A., Girardeau, M. D., 2007. Soluble Models of Strongly Interacting Ultracold Gas Mixtures in Tight Waveguides. Phys. Rev. Lett. 99, 230402. \bibitem[Miroschnychenko et al.(2008)]{MeschNature} Miroschnychenko, Y., Alt, W., Dotsenko, I., F\"orster, L., Khudaverdyan, M., Meschede, D., Schrader, D., Rauschenbeutel, A., 2006. Laser-trapped atoms in strings can be deftly rearranged and the spacing between them precisely adjusted. {Nature} {442}, 151. \bibitem[Moshinsky(1952)]{Moshinsky52} Moshinsky, M., 1952. Diffraction in time. Phys. Rev. 88, 625. \bibitem[Mostafazadeh(2001)]{bookinv} Mostafazadeh, A. {\it Dynamical invariants, adiabatic approximation and the geometric phase}, (New York: Nova, 2001). \bibitem[Mousavi et al.(2007)]{etonks1} Mousavi, S.V., del Campo, A., Lizuain, I., Muga J. G., 2007. Ramsey interferometry with a two-level generalized Tonks-Girardeau gas. Phys. Rev. A 76, 033607. \bibitem[Mousavi(2012a)]{Mousavi12} Mousavi, S. V., 2012a. Quantum dynamics in a time-dependent hard-wall spherical trap. Europhys. Lett, 99, 30002. \bibitem[Mousavi(2012b)]{Mousavi12b} Mousavi, S. V., 2012b. Quantum Particle in an Infinite Circular-Well potential with a Moving Wall: Exact Solutions and Dynamics. arXiv:1207.3854 \bibitem[Muga et al.(2009)]{MCRG09}Muga, J. G., Chen, X., Ruschhaupt, A., Gu\'ery-Odelin, D., 2009. Frictionless dynamics of Bose-Einstein condensates under fast trap variations. J. Phys. B: At. Mol. Opt. Phys. 42, 241001. \bibitem[Muga et al.(2010)]{Muga10} Muga, J. G., X. Chen, S. Ib\'{a}\~{n}ez, I. Lizuain, Ruschhaupt, A., 2010. Transitionless quantum drivings for the harmonic oscillator. J. Phys. B {43}, 085509. \bibitem[M\"uller et al.(2011)]{kbody} M\"uller, M., Hammerer, K., Zhou, Y. L., Roos, C. F., Zoller, P. 2011. Simulating open quantum systems: from many-body interactions to stabilizer pumping New J. Phys. 13, 085007. \bibitem[Murphy et al.(2009)]{Calarco} Murphy, M., Jiang, L., Khaneja, N., Calarco, T., 2009. High-fidelity fast quantum transport with imperfect controls. Phys. Rev. A 79, 020301(R). \bibitem[Nehrkorn et al.(2011)]{Cal11} Nehrkorn, J., Montangero, S., Ekert, A., Smerzi, A., Fazio, R., Calarco, T., 2011. Staying adiabatic with unknown energy gap. arXiv:1105.1707v1. \bibitem[O'Dell et al.(2004)]{dipolar} O'Dell, D. H. J., Giovanazzi, S., Eberlein, C., 2004. Exact Hydrodynamics of a Trapped Dipolar Bose-Einstein Condensate. Phys. Rev. Lett. 92, 250401. \bibitem[\"Ohberg and Santos(2002)]{OS02} \"Ohberg, P., Santos, L., 2002. Dynamical Transition from a Quasi-One-Dimensional Bose-Einstein Condensate to a Tonks-Girardeau Gas. Phys. Rev. Lett. 89, 240402. \bibitem[Olshanii et al.(2010)]{OPL10} Olshanii, O., Perrin, H., Lorent, V., 2010. Example of a Quantum Anomaly in the Physics of Ultracold Gases. Phys. Rev. Lett. 105, 095302. \bibitem[Ozcakmakli and Yuce(2012)]{Yuce2} Ozcakmakli, Z., Yuce, C., 2012. 
Shortcuts to adiabaticity for growing condensates. Phys. Scr. 86, 055001. \bibitem[Palao et al.(1998)]{Palao} Palao, J. P., Muga, J. G., Sala, R. 1998. Composite Absorbing Potentials. Phys. Rev. Lett. {80}, 5469. \bibitem[Pezze et al.(2005)]{Augusto} Pezze, L., Smerzi, A., Berman, G. P., Bishop, A. R., Collins, L. A., 2005. Dephasing and breakdown of adiabaticity in the splitting of Bose-Einstein condensates. New J. Phys. {7}, 85. \bibitem[Oezer et al.(2009)]{Hrvoje2} Pezer, R., Gasenzer, T., Buljan, H., 2009. Single-particle density matrix for a time-dependent strongly interacting one-dimensional Bose gas, Phys. Rev. A 80, 053616. \bibitem[Pitaevskii and Rosch(1997)]{PR} Pitaevskii, L. P., Rosch, A., 1997. Breathing modes and hidden symmetry of trapped atoms in two dimensions. Phys. Rev. A {55}, R853. \bibitem[Pontryagin(1962)]{LSP} Pontryagin, L. S., 1962. {The Mathematical Theory of Optimal Processes} (Interscience Publishers, New York) \bibitem[ Polkovnikov et al.(2011)]{Polkovnikov11} Polkovnikov, A., Sengupta, K., Silva, A., Vengalattore, M., 2011. Colloquium: Nonequilibrium dynamics of closed interacting quantum systems. Rev. Mod. Phys. {83}, 863. \bibitem[Prestage et al.(1993)]{Maleki} Prestage, J. D., Tjoelker, R. L., Dick, G. J., Maleki, L., 1993. Improved Linear Ion Trap Package. Proc. 1993 IEEE Freq. Control Symposium, p. 144. \bibitem[Quan and Jarzynski(2012)]{QJ12} Quan, H. T., Jarzynski C., 2012. Validity of nonequilibrium work relations for the rapidly expanding quantum piston, Phys. Rev. E {85}, 031102. \bibitem[Rahmani and Chamon(2011)]{RC09} Rahmani, A., Chamon, C., 2011. Optimal control for unitary preparation of many-body states: application to Luttinger liquids Phys. Rev. Lett. \textbf{107}, 016402. \bibitem[Reichle et al.(2006)]{ions} Reichle, R., Leibfried, D., Blakestad, R. B., Britton, J., Jost, J. D., Knill, E., Langer, C., Ozeri, R., Seidelin, S., Wineland, D. J., 2006. Transport dynamics of single ions in segmented microstructured Paul trap arrays. Fortschr. Phys. {54}, 666. \bibitem[Rezek et al.(2009)]{KoEPL09} Rezek, Y., Salamon, P., Hoffmann, K. H., Kosloff, R., 2009. The quantum refrigerator: The quest for absolute zero. Europhys. Lett. \textbf{85} 30008. \bibitem[Rowe et al.(2002)]{Leibfried2002} Rowe, M. A., Ben-Kish, A., DeMarco, B., Leibfried, D., Meyer, V., Beall, J., Britton, J., Hughes, J., Itano, W. M., Jelenkovic, B., Langer, C., Rosenband, T., Wineland, D. J., 2002. Transport of quantum states and separation of ions in a dual rf ion trap. Quant. Inf. Comp. {2}, 257. \bibitem[Ruostekoski et al.(2001)]{Schleich2} Ruostekoski, J., Kneer, B., Schleich, W. P., Rempe, G., 2001. Interference of a Bose-Einstein condensate in a hard-wall trap: From the nonlinear Talbot effect to the formation of vorticity. Phys. Rev. A 63, 043613. \bibitem[Ruschhaupt et al.(2012)]{noise}Ruschhaupt, A., Chen, X., Alonso, D., Muga, J. G., 2012. Optimally robust shortcuts to population inversion in two-level quantum systems. New J. Phys. 14, 093040. \bibitem[Sachdev(1999)]{Sachdev} Sachdev, S., {\it Quantum phase transitions} (Cambridge University Press, Cambridge, 1999). \bibitem[Salamon et al.(2009)]{Salamon09} Salamon, P., Hoffmann, K. H., Rezek, Y., Kosloff, R. 2009. Maximum work in minimum time from a conservative quantum system. {Phys. Chem. Chem. Phys.} {11}, 1027. \bibitem[Schaff et al.(2010)] {Nice10} Schaff, J. F., Song, X. L., Vignolo, P., Labeyrie, G., 2010. Fast optimal transition between two equilibrium states. {Phys. Rev. A} {82} 033430; Phys. Rev. 
A {83}, 059911(E) (2011). \bibitem[Schaff et al.(2011a)]{Nice11}Schaff, J. F., Song, X. L., Capuzzi, P., Vignolo, P., Labeyrie, G., 2011a. STA for an interacting Bose-Einstein condensate. {Europhys. Lett.} {93}, 23001. \bibitem[Schaff et al.(2011b)]{Nice11b} Schaff, J. F., Capuzzi, P., Labeyrie, G., Vignolo, P., 2011b. Shortcuts to adiabaticity for trapped ultracold gases. New J. Phys. { 13}, 113017. \bibitem[Salasnich et al.(2002)]{Salas} Salasnich, L., Parola, A., Reatto, L., 2002. Effective wave equations for the dynamics of cigar-shaped and disk-shaped Bose condensates. Phys. Rev. A {65}, 043614. \bibitem[Sarandy et al.(2011)]{PLA2011} Sarandy, M. S., Duzzioni, E. I., Serra, R. M., 2011. Quantum computation in continuous time using dynamic invariants Phys. Lett. A {375}, 3343. \bibitem[Schiff(1949)]{Schiff} Schiff, L. I., {\it Quantum Mechanics} (McGraw Hill, New York, 1949). \bibitem[Schmidt et al.(2009)] {catcher1} Schmidt, S., Muga, J. G., Ruschhaupt, A., 2009. Stopping particles of arbitrary velocities with an accelerated wall. Phys. Rev. A {80}, 023406. \bibitem[Sengupta et al.(2008)]{1dKitaev} Sengupta, K., Sen, D., Mondal, S., 2008. Exact Results for Quench Dynamics and Defect Production in a Two-Dimensional Model. Phys. Rev. Lett. {100}, 077204. \bibitem[Staliunas et al.(2004)]{Staliunas04} Staliunas, K., Longhi, S., de V\'alcarcel G. J., 2004. Faraday patterns in low-dimensional Bose-Einstein condensates. Phys. Rev. A \textbf{70}, 011601(R) (2004). \bibitem[Stefanatos et al.(2010)]{Li10} Stefanatos, D., J. Ruths, J. -S. Li, 2010. Frictionless atom cooling in harmonic traps: A time-optimal approach. Phys. Rev. A {82}, 063422. \bibitem[Stefanatos et al.(2011)]{Li11}Stefanatos, D., Schaettler, H., Li, J.-S., 2011. Minimum-time frictionless atom cooling in harmonic traps. SIAM J. Cont. Opt. 49, 2440. \bibitem[Stefanatos and Li(2012)]{Li12}Stefanatos, D., Li, J.-S., 2012. Frictionless decompression in minimum time of Bose-Einstein condensates in the Thomas-Fermi regime. Phys. Rev. A 86, 063602. \bibitem[Sutherland(1998)]{Sutherland98} Sutherland, B., 1998. Exact Coherent States of a One-Dimensional Quantum Fluid in a Time-Dependent Trapping Potential. Phys. Rev. Lett. 80, 3678. \bibitem[Torosov et al.(2011)]{Torosov} Torosov, B. T., Gu\'{e}rin, S., Vitanov, N. V., 2011. High-Fidelity Adiabatic Passage by Composite Sequences of Chirped Pulses. {Phys. Rev. Lett.} {106}, 233001. \bibitem[Torrontegui et al.(2011)]{transport} Torrontegui, E., Ib\'a\~nez, S., Chen, X., Ruschhaupt, A., Gu\'ery-Odelin, D., Muga, J. G. 2011. Fast atomic transport without vibrational heating. {Phys. Rev.} A {83}, 013415 \bibitem[Torrontegui et al.(2012a)] {ErikFF} Torrontegui, E., Mart\'{\i}nez-Garaot, S., Ruschhaupt, A., Muga, J. G., 2012a. Shortcuts to adiabaticity: Fast-forward approach. Phys. Rev. A \textbf{86}, 013601. \bibitem[Torrontegui et al.(2012b)]{split}Torrontegui, E., Mart\'\i nez-Garaot, S., Modugno, M., Chen, X., Muga, J. G., 2012b. Engineering fast and stable splitting of matter waves. arXiv:1207.3184v1. \bibitem[Torrontegui et al.(2012c)]{3d}Torrontegui, E., Chen, X., Modugno, M., Ruschhaupt, A., Gu\'ery-Odelin, D., Muga, J. G., 2012c. Fast transitionless expansion of cold atoms in optical Gaussian-beam traps. Phys. Rev. A 85, 033605. \bibitem[Torrontegui et al.(2012d)]{BECtransport} Torrontegui, E., Chen, X., Modugno, M., Schmidt, S., Ruschhaupt, A., Muga, J. G., 2012d. Fast transport of Bose-Einstein condensates. New J. Phys. {14}, 013031. 
\bibitem[Tseng and Chen(2012)]{Tseng} Tseng S.-Y., Chen, X., Engineering of fast mode conversion in multimode waveguides. Opt. Lett. 37, 5118. \bibitem[Unayan et al.(1997)]{ShoreOC} Unanyan, R. G., Yatsenko, L. P., Bergmann, K., Shore, B. W., 1997. Laser-induced adiabatic atomic reorientation with control of diabatic losses. Opt. Commun. 139, 48. \bibitem[Uzdin et al.(2012)]{Moise2012}Uzdin, R., Gunther, U., Rahav, S., Moiseyev, N., 2012. Time-dependent Hamiltonians with 100\% evolution speed efficiency. {J. Phys. A: Math. Theor.} 45, 415304. \bibitem[van Es et al.(2010)]{boxchip} van Es, J. J. P., Wicke, P., van Amerongen, A. H., R\'etif, C., Whitlock, S., van Druten, N. J., 2010. Box traps on an atom chip for one-dimensional quantum gases. {J. Phys. B: At. Mol. Opt. Phys.} 43, 155002. \bibitem[Vasilev et al.(2009)]{PLAP2} Vasilev, G. S., Kuhn, A., Vitanov, N. V., 2009. Optimum pulse shapes for stimulated Raman adiabatic passage. Phys. Rev. A {80}, 013417. \bibitem[Vitanov et al.(2001)]{Vitanov-Rev} Vitanov, N. V., Halfmann, T., Shore, B. W., Bergmann, K., 2001. Laser-induced population transfer by adiabatic passage techniques. {Annu. Rev. Phys. Chem.} {52}, 763. \bibitem[Yuce(2012)]{PLA2012}Yuce, C., 2012. Fast frictionless expansion of an optical lattice. Phys. Lett. A 376, 1717. \bibitem[Xiong et al.(2010)]{Xiong} Xiong, D., Wang, P., Fu, Z., Zhang, J., 2010. Transport of Bose-Einstein condensate in QUIC trap and separation of trapping spin states. {Opt. Exp.} 18, 1649. \bibitem[Walther et al.(2012)]{SK2} Walther, A., Ziesel, F., Ruster, T., Dawkins, S. T., Ott, K., Hettrich, M., Singer, K., Schmidt-Kaler, F., Poschinger, U., 2012. Controlling Fast Transport of Cold Trapped Ions. Phys. Rev. Lett. 109, 080501. \bibitem[Zenesini et al.(2009)]{Oliver0} Zenesini A. et al. 2009. Time-Resolved Measurement of Landau-Zener Tunneling in Periodic Potentials. Phys. Rev. Lett. {103}, 090403. \bibitem[Zhang et al.(2012a)]{Lianao2} Zhang, J.-Q., Yong, L., Feng, M., 2012a. Cooling a charged mechanical resonator with time-dependent bias gate voltages. arXiv:1211.0770. \bibitem[Zhang et al.(2012b)]{expcd} Zhang, J. et al., 2012b. Experimental implementation of assisted quantum adiabatic passage in a single spin. arXiv:1212.0832. \end{thebibliography} \end{document}
\begin{document} \title{Session Types for Link Failures (Technical Report)} \begin{abstract} We strive to use session type technology to prove behavioural properties of fault-tolerant distributed algorithms. Session types are designed to abstractly capture the structure of (even multi-party) communication protocols. The goal of session types is the analysis and verification of the protocols' behavioural properties. One important such property is progress, \ie the absence of (unintended) deadlock. Distributed algorithms often resemble (compositions of) multi-party communication protocols. In contrast to protocols that are typically studied with session types, they are often designed to cope with system failures. An essential behavioural property is (successful) termination, despite failures, but it is often laborious to prove for distributed algorithms. We extend multi-party session types (and multi-party session types with nested sessions) with optional blocks that cover a limited class of link failures. This allows us to automatically derive termination of distributed algorithms that come within these limits. To illustrate our approach, we prove termination for an implementation of the ``rotating coordinator'' Consensus algorithm. This paper is an extended version of \cite{adameitPetersNestmann17}. \end{abstract} \section{Introduction} Session types are used to statically ensure correctly coordinated behaviour in systems without global control. One important such property is progress, \ie the absence of (unintended) deadlock. As with every other static typing approach to guaranteeing behavioural properties, the main advantage is that the respective properties are then provable without unrolling the process, \ie without computing its executions. Thereby, the state explosion problem is avoided. Hence, after the often elaborate task of establishing a type system, they allow properties of processes to be proved in a quite efficient way. Session types describe global behaviours of a system---or protocols---as \emph{sessions}, \ie units of conversation. The participants of such sessions are called \emph{roles}. \emph{Global types} specify protocols from a global point of view, whereas \emph{local types} describe the behaviour of individual roles within a protocol. \emph{Projection} ensures that a global type and its local types are consistent. These types are used to reason about processes formulated in a corresponding \emph{session calculus}. Most of the existing session calculi are extensions of the well-known $ \pi $-calculus \cite{milnerParrowWalker92} with specific operators adapted to correlate with local types. Session types are designed to abstractly capture the structure of (even multi-party) communication protocols \cite{BettiniAtall08,BocciAtall10}. The literature on session types provides a rich variety of extensions. Session types with \emph{nested} protocols were introduced by \cite{DemangeonHonda12} as an extension of multi-party session types as defined \eg in \cite{BettiniAtall08,BocciAtall10}. They offer the possibility to define sub-protocols independently of their parent protocols. It is essentially the notion of nested protocols that led us to believe that session types could be applied to capture properties of distributed algorithms, especially the so-called round-based distributed algorithms. The latter are typically structured as a repeated execution of communication patterns by $n$~distributed partners.
Often, as will also be the case in our running example, such a pattern involves an exposed coordinator role, whose incarnation may differ from round to round. As such, distributed algorithms very much resemble compositions of nested multi-party communication protocols. Moreover, an essential behavioural property of distributed algorithms is (successful) termination \cite{Tel94,Lynch96}, despite failures, but it is often laborious to prove. It turns out that progress (as provided by session types) and termination (as required by distributed algorithms) are closely related. For these reasons, our goal is to apply session type technology to prove behavioural properties of distributed algorithms. Particularly interesting round-based distributed algorithms were designed in a fault-tolerant way, in order to work in a model where they have to cope with system failures---be it links dropping or manipulating messages, or processes crashing with or without recovery. As the current session type systems are not able to cover fault-tolerance (except for exception handling as in \cite{CarboneHondaYoshida08,capecchi2016}), it is necessary to add an appropriate mechanism to cover system failures. \paragraph{Optional Blocks.} While the detection of conceptual design errors is a standard property of type systems, proving correctness of algorithms despite the occurrence of uncontrollable system failures is not. In the context of distributed algorithms, various kinds of failures have been studied. Often, the correctness of an algorithm depends not only on the kinds of failures but also on the phase of the algorithm in which they occur, the number of failures, or their likelihood. Here, we only consider a very simple case, namely algorithms that terminate despite arbitrarily many link failures that may occur at any moment in the execution of the algorithm. Therefore, we extend session types with \emph{optional blocks}. Such a block specifies chunks of communication that may at some point fail due to a link failure. This partial communication protocol is protected by the optional block, to ensure that no other process can interfere before the optional block is resolved and to ensure that, in the case of failure, no parts of the failed communication attempt may influence the further behaviour. If a link fails, the ambition to guarantee progress requires that our mechanism does not block the continuation behaviour. Therefore, the continuation $ C $ of an optional block can be parametrised by a set of values that are either computed by a successful termination of the block or are provided beforehand as default values, \ie we require that for each value that $ C $ uses the optional block specifies a default value. An optional block can cover parts of a protocol or even other optional blocks. The type system ensures that communication with optional blocks requires an optional block as communication partner and that only a successful termination of an optional block releases the protection around the values derived within it. The semantics of the session calculus then allows us to abort an unguarded optional block at any point. If an optional block models a single communication, its abortion represents a message loss. In summary, optional blocks will allow us to automatically derive termination of distributed algorithms despite arbitrary link failures.
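The following minimal Python sketch (purely illustrative and not part of the formal development; all names in it are made up for this explanation) captures the intended operational reading of an optional block: the protected communications either succeed and release the computed values to the continuation, or the block is aborted---modelling a link failure---and the continuation falls back to the previously specified default values.
\begin{verbatim}
import random

def optional_block(body, defaults, loss_probability=0.3):
    """Run the protected communications given by 'body'.

    On success, the values computed by 'body' are released to the
    continuation; on a (simulated) link failure the block is aborted
    and only the default values survive.
    """
    if random.random() < loss_probability:  # non-deterministic link failure
        return defaults                     # abort: keep only the defaults
    return body()                           # success: use the computed values

# A receiver keeps its old value (default 7) if the sender's message is lost.
received = optional_block(body=lambda: {"x": 42}, defaults={"x": 7})
print(received["x"])
\end{verbatim}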
\paragraph{Running Example.} Fault-tolerant Consensus algorithms are used to solve the problem of reaching agreement on a decision in the presence of faulty processes or otherwise unreliable systems. A simple, but prominent example is the rotating coordinator algorithm \cite{Tel94} to be used in an asynchronous message-passing system model. We use this algorithm as running example throughout this paper. \begin{example}[The Rotating Coordinator Algorithm] \label{exa:RCAlgorithm} $ $ \begin{lstlisting}
*$ x_i :=\! $ input;
*for $ r := 1 $ to $ n $ do
*\{
*if $ r = i $ then broadcast($ x_i $);
*else if alive($ p_r $) then $ x_i :=\! $ input\_from\_broadcast()
*\};
*output $ x_i $;
\end{lstlisting} \end{example} \noindent The above example describes the rotating coordinator algorithm for participant~$ i $. A network then consists of $ n $ such participants composed in parallel that try to reach Consensus on a value $ x $. Each participant receives an initial value for $ x $ from the environment, then performs the $ n $ rounds of the algorithm described in Lines 2--6, and finally outputs its decision value. Within the $ n $ rounds each participant $ i $ is exactly once (if $ r = i $) the coordinator and broadcasts its current value to all other participants. In the remaining rounds it receives the values broadcast by other participants and replaces its own value with the received value. Due to the asynchronous nature of the underlying system, messages do not necessarily arrive in the order in which they were sent; also, different participants can be in different (local) rounds at the same (global) time. In case of a link failure, the system may lose an arbitrary number of messages. In case of a process crash, only the messages of that process that are still in transit may possibly be received. In a system with only crash failures but no link failures, messages that are directed from a non-failing participant to some other non-failing participants are eventually received; this is often called reliable point-to-point communication. Following \cite{Tel94, Lynch96}, a network of processes as above solves Consensus if \begin{inparaenum}[(1)] \item all non-failing participants eventually output their value (\emph{Termination}), \item all emitted output values are the same (\emph{Agreement}), and \item each output value is an initial value of some participant (\emph{Validity}). \end{inparaenum} We show that an implementation of the above algorithm reaches termination despite an arbitrary number of link failures, \ie all participants (regardless of whether their messages are lost) eventually terminate. Note that we implement broadcast by a number of binary communications---one from the sender to each receiver---and thus a link failure does not necessarily imply the loss of all messages of a broadcast. This way we consider the more general and more realistic case of such failures. Moreover, we concentrate our attention on the main part of the algorithm, \ie the implementation of the rounds in Lines 2--6. \paragraph{Related Work.} Type systems are usually designed for scenarios that are free of system failures. An exception is \cite{KouzapasGutkovasGay14}, which introduces unreliable broadcast. Within such an unreliable broadcast a transmission can be received by multiple receivers but not necessarily all available receivers. In the latter case, the receiver is deadlocked.
In contrast, we consider failure-tolerant unicast, \ie communications between a single sender and a single receiver, where in the case of a failure the receiver is not deadlocked but continues using default values. \cite{CarboneHondaYoshida08,capecchi2016} extend session types with exceptions thrown by processes within \textsc{try}-and-\textsc{catch}-blocks. Both concepts---\textsc{try}-and-\textsc{catch}-blocks and optional blocks---introduce a way to structurally and semantically encapsulate an unreliable part of a protocol and provide some means to 'detect' a failure and 'react' to it. They are, however, conceptually and technically different. An obvious difference is the limitation of the inner part of optional blocks to the computation of values; there is no such limitation in the \textsc{try}-and-\textsc{catch}-blocks of \cite{capecchi2016}. More fundamentally, these approaches differ in the way in which failures are 'detected' and 'reacted' to. Optional blocks are designed for the case of system errors that may occur non-deterministically and need not reach the whole system or even all participants of an optional block, whereas \textsc{try}-and-\textsc{catch}-blocks model controlled interruption requested by a participant. Hence these approaches differ in the source of an error: it is raised either by the underlying system structure or by a participant. Technically this means that in the presented case failures are introduced by the semantics, whereas in \cite{capecchi2016} failures are modelled explicitly as \textsc{throw}-operations. In the latter case the model also describes why a failure occurred. Here we deliberately do not model causes of failures, but let them occur non-deterministically. In particular we do not specify how a participant 'detects' a failure. Different system architectures might provide different mechanisms to do so, \eg by time-outs. As is standard for the analysis of distributed algorithms, our approach allows the verified algorithms to be ported to different system architectures, provided that the respective structure and its failure pattern preserve correctness of the considered properties. The main difference between these two approaches is how they react to failures. In \cite{capecchi2016} \textsc{throw}-messages are propagated among nested \textsc{try}-and-\textsc{catch}-blocks to ensure that all participants are consistently informed about concurrent \textsc{throws} of exceptions. In distributed systems such a reaction towards a system error is unrealistic. Distributed processes usually do not have any method to observe an error on another system part, and if a participant has crashed or a link fails permanently there is usually no way to inform a waiting communication partner. Instead, abstractions (failure detectors) are used to model the detection of failures; such detectors can \eg be implemented by time-outs. Here it is crucial to mention that failure detectors are usually considered to be local and cannot ensure global consistency. Distributed algorithms have to deal with the problem that some part of a system may consider a process/link as crashed, while at the same time the same process/link is regarded as correct by another part. This is one of the most challenging problems in the design and verification of distributed algorithms. In the case of link failures, if a participant is directly influenced by a failure on some other system part (like the receiver of a lost message) it will eventually abort the respective communication attempt.
If a participant does not depend on the failed communication (like the sender on an unreliable link), it may never know about the failure or its nature. Distributed algorithms usually deal with unexpected failures that are hard to detect and often impossible to propagate. Generating correct algorithms for this scenario is difficult and error-prone; thus we need methods to verify them. \paragraph{Contribution.} We extend multi-party session types---first a basic version similar to \cite{BettiniAtall08, BocciAtall10} and then the type system of \cite{DemangeonHonda12}---by optional blocks, \ie protected parts of sessions that either yield a value to be used in the continuation or fail and return a previously specified default value. This simple but restricted mechanism allows us to model link failures of the underlying system and to model distributed algorithms on top of such an unreliable communication infrastructure. Moreover, the type system ensures that well-typed processes progress, \ie terminate, even in the case of arbitrary occurrences of link failures. Our approach is limited with respect to two aspects: We only cover algorithms that \begin{inparaenum}[(1)] \item allow us to specify default values for all unreliable communication steps and \item terminate despite arbitrary link failures. \end{inparaenum} Accordingly, this approach is only a first step towards the analysis of distributed algorithms with session types. It shows, however, that it is possible to analyse distributed algorithms with session types and how the latter can solve the otherwise often complicated and laborious task of proving termination. We show that our attempt respects two important aspects of fault-tolerant distributed algorithms: \begin{inparaenum}[(1)] \item The modularity as \eg present in the concept of rounds in many algorithms can be expressed naturally, and \item the model respects the asynchronous nature of distributed systems such that messages are not necessarily delivered in the order they are sent and the rounds may overlap. \end{inparaenum} \paragraph{Overview.} We extend global types and restriction in \S\ref{sec:globalTypes}, local types and projection in \S\ref{sec:localTypes}, and introduce the extended session calculus in \S\ref{sec:calculus} with a mechanism to check types in \S\ref{sec:wellTypedness}. We present two examples---different variants of the rotating coordinator algorithm---as running examples alongside the presented definitions. A third example is presented in \S\ref{sec:ExaSubSessionsWithinOptionalBlocks}. In \S\ref{sec:properties} we analyse the properties of the extended type system and use them to prove termination despite link failures for our running example. We conclude with \S\ref{sec:conclusions}. This paper is an extended version of \cite{adameitPetersNestmann17}. \subsection{Properties of Optional Blocks} \label{sec:optionalBlocks} We extend standard versions of session types---as given \eg in \cite{BettiniAtall08,BocciAtall10,DemangeonHonda12}---with optional blocks. An optional block is a simple construct that encapsulates and isolates a potentially unreliable part of a protocol. Since we are interested in the proof of relatively strong system properties such as termination, we restrict the effect that the failure (and thus also the success) of the encapsulated part can impose on the remainder of the protocol. The encapsulated part can compute some values and has---for the case of failure---to provide some default values.
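To connect the idea with the running example before the formal development, the following small Python simulation (again purely illustrative, with made-up names and a deliberately simplistic, synchronised failure model) mirrors Example~\ref{exa:RCAlgorithm} over lossy links: every message of a broadcast may be dropped independently, a receiver that misses the coordinator's message keeps its previous value as default, and all participants nevertheless terminate after $ n $ rounds---the property we establish formally below.
\begin{verbatim}
import random

def rotating_coordinators(initial_values, loss_probability=0.3):
    """Simulate the rotating coordinator algorithm with lossy links.

    Each message may be lost independently; a receiver that gets no
    message keeps its previous value (its default value).  Termination
    after n rounds holds regardless of how many messages are lost.
    """
    n = len(initial_values)
    x = list(initial_values)      # x[i] is the current value of participant i
    for r in range(n):            # round r: participant r is the coordinator
        broadcast_value = x[r]
        for j in range(n):
            if j == r:
                continue          # the coordinator keeps its own value
            if random.random() >= loss_probability:
                x[j] = broadcast_value    # message delivered
            # else: message lost, participant j keeps x[j] as its default
    return x                      # every participant outputs a value

print(rotating_coordinators([3, 1, 4, 1, 5]))
\end{verbatim}
Note that this simulation is synchronous and therefore much simpler than the asynchronous setting treated in the remainder of the paper; it only illustrates why default values suffice to guarantee termination.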
In order to use optional blocks to model failures and at the same time ensure certain system properties, we designed optional blocks such that they ensure the properties encapsulation, isolation, safety, and reliance: \begin{description} \item[Encapsulation:] Optional blocks encapsulate a potentially unreliable part of a protocol such that unreliable parts and reliable parts of a system are clearly distinguished and cannot interfere except for values that might be computed differently in the case of a failure. \item[Isolation:] Communication from within an optional block is restricted to its participants. \item[Safety:] Regardless of success or failure, each participant of an optional block---if it does not loop forever---returns a (potentially empty) vector of values of the required kinds. In the case of success these return values can be computed using communications with the other participants of the block. In the case of failure the return values are the default values. \item[Reliance:] If the considered system can terminate, then there is also a way to successfully complete all optional blocks. \end{description} Here encapsulation and isolation result from the semantics of the newly introduced concepts, whereas safety and reliance are enforced by the type system. Session types introduce three different layers of abstraction: \begin{inparaenum}[(1)] \item The session calculus is a process calculus---usually a variant of the $ \pi $-calculus \cite{milnerParrowWalker92}---that allows systems to be modelled. Usually it is designed (or adapted) to provide flexibility and an easy and intuitive syntax in order to support the designer. For this purpose session calculi usually reflect only the local views of the respective participants, \ie the overall system behaviour is represented by modelling the single participants and their abilities to interact. With that, session calculi are relatively close to programming languages. As a consequence, session calculi themselves provide very few guarantees on the correctness of the modelled systems. \item Global types, on the other hand, provide a global view on a system. They are used to specify and formalise the desired properties of the overall system. \item To mediate between these two points of view and to guarantee that an implementation in a session calculus of a specification given as a global type has the desired properties, session types introduce an intermediate layer called local types. Intuitively, local types specify the consequences of a global specification for a single participant. Projection functions allow local types to be derived automatically from global types, and typing rules allow us to check automatically whether an implementation in a session calculus satisfies the global specification by comparing the process against the local type. \end{inparaenum} With that, session types provide a static and thus very efficient way to analyse and guarantee different kinds of system properties. When extending session types with optional blocks we follow these three layers, starting with global types. \section{Global Types with Optional Blocks} \label{sec:globalTypes} We want to derive a correct implementation of the algorithm of our running example from its specification. Accordingly, we start with the introduction of the type system.
Throughout the paper we use $ G $ for global types, $ T $ for local types, $ \Labe $ for communication labels, $ \Chan[s], \Chan[k] $ for session names, $ \Chan $ for shared channels, $ \Role $ for role identifiers, $ \Prot $ for protocol identifiers, and $ \Args[v] $ for values of a base type (\eg integer or string). $ \Args, \Args[y] $ are variables to represent \eg session names, shared channels, or values. We formally distinguish between roles, labels, process variables, type variables, and names---additionally to identifiers for global/local types, protocols, processes, \ldots. Formally we do however not further distinguish between different kinds of names but use different identifiers ($ \Chan, \Chan[s], \Args[v], \ldots $) to provide hints on the main intended purpose at the respective occurrence. Roles and participants are used as synonyms. To simplify the presentation, we adapt set-like notions for tuples. For example we write $ \Args_i \in \tilde{\Args} $ if $ \tilde{\Args} = \left( \Args_1, \ldots, \Args_n \right) $ and $ 1 \leq i \leq n $. We use $ \cdot $ to denote the empty tuple. Global types describe protocols from a global point of view on systems by interactions between roles. They are used to formalise specifications that describe the desired properties of a system. We extend the basic global types as used \eg in \cite{BettiniAtall08,BocciAtall10} with a global type for optional blocks. \begin{definition}[Global Types] \label{def:globalTypes} Global types with optional blocks are given by \begin{align*} G & \deffTerms \GTCom{\Role_1}{\Role_2}{\sum_{i \in \indexSet} \Set{\GTInp{\Labe_i}{\Typed{\tilde{\Args}_i}{\tilde{\Sort}_i}}{G_i}}} \sep \textcolor{blue}{ \GTOptBl{\widetilde{\Role, \Typed{\tilde{\Args}}{\tilde{\Sort}}}}{G}{G'} }\\ & \sep \GTChoi{G_1}{\Role}{G_2} \sep \GTPar{G_1}{G_2} \sep \GTRec{\TermV}{G} \sep \TermV \sep \GTEnd \end{align*} \end{definition} \noindent $ \GTCom{\Role_1}{\Role_2}{\sum_{i \in \indexSet} \Set{\GTInp{\Labe_i}{\Typed{\tilde{\Args}_i}{\tilde{\Sort}_i}}{G_i}}} $ is the standard way to specify a communication from role $ \Role_1 $ to role $ \Role_2 $, where $ \Role_1 $ has a direct choice between several labels $ \Labe_i $ proposed by $ \Role_2 $. Each branch expects values $ \tilde{\Args}_i $ of sorts $ \tilde{\Sort}_i $ and executes the continuation $ G_i $. When $ \indexSet $ is a singleton, we write $ \GTCom{\Role_1}{\Role_2}{\GTInpS{\Labe}{\Typed{\tilde{\Args}}{\tilde{\Sort}}}} $. $ \GTChoi{G_1}{\Role}{G_2} $ introduces so-called located (or internal) choice: the choice for one role $ \Role $ between two distinct protocol branches. The parallel composition $ \GTPar{G_1}{G_2} $ allows to specify independent parts of a protocol. $ \GTRec{\TermV}{G} $ and $ \TermV $ are used to allow for recursion. $ \GTEnd $ denotes the successful completion of a global type. We often omit trailing $ \GTEnd $ clauses. We add the primitive $ \GTOptBl{\widetilde{\Role, \Typed{\tilde{\Args}}{\tilde{\Sort}}}}{G}{G'} $ to describe an optional block between the roles $ \Role_1, \ldots, \Role_n $, where $ \widetilde{\Role, \Typed{\tilde{\Args}}{\tilde{\Sort}}} $ abbreviates the sequence $ \Role_1, \Typed{\tilde{\Args}_1}{\tilde{\Sort}_1}, \ldots, \Role_n, \Typed{\tilde{\Args}_n}{\tilde{\Sort}_n} $ for some natural number $ n $. Here $ G $ is the protocol that is encapsulated by the optional block and the $ \tilde{\Args}_i $ are so-called default values that are used within the continuation $ G' $ of the surrounding parent session if the optional block fails. 
There is one (possibly empty) vector of default values $ \tilde{\Args}_i $ for each role $ \Role_i $. The inner part $ G $ of an optional block is a (part of a) protocol that (in the case of success) is used to compute the vectors of return values. The typing rules ensure that for each role $ \Role_i $ the type of the computed vector coincides with the type $ \tilde{\Sort}_i $ of the specified vector of default values $ \tilde{\Args}_i $. Intuitively, if the block does not fail, each participant can use its respective vector of computed values in the continuation $ G' $. Otherwise, the default values are used. An optional block can either be completed successfully or fail completely. Optional blocks capture the main features of a failure very naturally: a part of a protocol either succeeds or fails. They also encapsulate the source and direct impact of the failure, which allows us to study their implicit effect---as \eg missing communication partners---on the overall behaviour of protocols. With that they help us to specify, implement, and verify failure-tolerant algorithms. Using optional blocks we provide a natural and simple specification of an unreliable link $ c $ between the two roles $ \Role[src] $ and $ \Role[trg] $, where in the case of success the value $ \Args[v]_{\Role[src]} $ is transmitted and in the case of failure a default value $ \Args[v]_{\Role[trg]} $ is used by the receiver. \begin{example}[Global Type of an Unreliable Link] \label{exa:GTunreliableLink} \begin{align*} \GUL{\Role[src]}{\Args[v]_{\Role[src]}}{\Role[trg]}{\Args[v]_{\Role[trg]}} = \GTOptBlS{\Role[src], \cdot, \Role[trg], \Typed{\Args[v]_{\Role[trg]}}{\Sort[V]}}{\left( \GTCom{\Role[src]}{\Role[trg]}{\GTInp{\Labe[c]}{\Typed{\Args[v]_{\Role[src]}}{\Sort[V]}}{\GTEnd}} \right)} \end{align*} \end{example} \noindent Here we have a single communication step---to model the potential loss of a single message---that is covered within an optional block. In the term $ \GUL{\Role[src]}{\Args[v]_{\Role[src]}}{\Role[trg]}{\Args[v]_{\Role[trg]}}\!.G' $ the receiver $ \Role[trg] $ may use the transmitted value $ \Args[v]_{\Role[src]} $ in the continuation $ G' $ if the communication succeeds or else uses its default value $ \Args[v]_{\Role[trg]} $. Note that the optional block above specifies the empty sequence of values as default values for the sending process $ \Role[src] $, \ie the sender needs no default values. In the remaining text, we use $ \prod_{i = 1..n} G_i \deff \GTPar{G_1}{\GTPar{\cdots}{G_n}} $ to abbreviate parallel composition and $ \bigodot_{i = 1..n} G_i \deff G_1.\cdots.G_n $ likewise for sequential composition. We naturally adapt these notations to local types and processes that are introduced later. Remember that global types specify a global point of view of the communication structure, whereas the pseudo code of Example~\ref{exa:RCAlgorithm} provides the local view for participant~$ i $ containing also the data flow. Accordingly we obtain a global type for Example~\ref{exa:RCAlgorithm} by abstracting partly from the values; concentrating on the communications. Let $ \Args[v]_{i, j} $ be the value of participant~$ i $ of Example~\ref{exa:RCAlgorithm} after round~$ j $ such that $ \Args[v]_{i, i} := \Args[v]_{i, i - 1} $ (the coordinator does not update its value) and assume a vector $ \left( \Args[v]_{1, 0}, \ldots, \Args[v]_{n, 0} \right) $ of initial values. 
Here only the initial values $ \Args[v]_{i, 0} $ are actually values, the remaining $ \Args[v]_{i, j} $ are variables that are instantiated with values during runtime. Then, in \begin{example}[Global Type for Rotating Coordinators] \label{exa:GTRC} \begin{align*} \GRC{n} ={} & \bigodot_{i = 1..n} \; \bigodot_{j = 1..n, j \neq i} \GUL{\Role[p]_i}{\Args[v]_{i, i - 1}}{\Role[p]_j}{\Args[v]_{j, i - 1}} \end{align*} \end{example} the index $ i $ is used to specify the number of the current round, while $ j $ iterates over potential communication partners in round $ i $. From a global point of view, there are $ n $~rounds such that each participant is exactly once the coordinator $ \Role[p]_i $ and transmits its value to all other participants $ \Role[p]_j $ using an unreliable link. This global type abstracts in particular from Line~$ 5 $ in Example~\ref{exa:RCAlgorithm}, since it does not specify that or how the values of the receivers are updated. For simplicity we do not consider the Lines~1 and 7. \subsection{Global Types with Optional Blocks and Sub-Sessions} As it is the case for our running example, many distributed algorithms are organised in rounds or use similar concepts of modularisation. We want to be able to directly mirror this modularity. To do so, we make use of the extension of multi-party session types with nested sessions of \cite{DemangeonHonda12}. These authors introduce two additional primitives for global types \begin{align*} \sep \GTDecl{\Prot}{\tilde{\Role}_1}{\Typed{\tilde{\Args[y]}}{\tilde{\Sort}}}{\tilde{\Role}_2}{G}{G'} \sep \GTCall{\Role}{\Prot}{\tilde{\Role}}{\tilde{\Args[y]}}{G} \end{align*} to implement sub-sessions. The type $ \GTDecl{\Prot}{\tilde{\Role}_{1}}{\Typed{\tilde{\Args[y]}}{\tilde{\Sort}}}{\tilde{\Role}_2}{G}{G'} $ describes the declaration of a sub-protocol $ G $ identified via $ \Prot $, to be called from within the main protocol $ G' $. Here $ \tilde{\Role}_1 $, $ \tilde{\Args[y]} $, and $ \tilde{\Role}_2 $ are the internally invited participants, the arguments, and the externally invited participants of $ G $, respectively. With the protocol call $ \GTCall{\Role}{\Prot}{\tilde{\Role}}{\tilde{\Args[y]}}{G} $ a formerly declared sub-protocol $ \Prot $ can be initialised, where $ \tilde{\Role} $ and $ \tilde{\Args[y]} $ specify the internally invited roles and the arguments of $ \Prot $. $ G $ is the remainder of the parent session. The sub-protocols that are introduced by these two primitives allow to specify algorithms in a modular way. \begin{definition}[Global Types with Sub-Sessions] \label{def:globalTypesWSS} \begin{align*} G & \deffTerms \GTCom{\Role_1}{\Role_2}{\sum_{i \in \indexSet} \Set{\GTInp{\Labe_i}{\Typed{\tilde{\Args}_i}{\tilde{\Sort}_i}}{G_i}}} \sep \textcolor{blue}{ \GTOptBl{\widetilde{\Role, \Typed{\tilde{\Args}}{\tilde{\Sort}}}}{G}{G'} }\\ & \sep \GTDecl{\Prot}{\tilde{\Role}_1}{\Typed{\tilde{\Args[y]}}{\tilde{\Sort}}}{\tilde{\Role}_2}{G}{G'} \sep \GTCall{\Role}{\Prot}{\tilde{\Role}}{\tilde{\Args[y]}}{G}\\ & \sep \GTChoi{G_1}{\Role}{G_2} \sep \GTPar{G_1}{G_2} \sep \GTRec{\TermV}{G} \sep \TermV \sep \GTEnd \end{align*} \end{definition} Here optional blocks can surround (a part of) a session that possibly contains sub-sessions or it may surround a part of a single sub-session. We extend the global type of our running example. 
\begin{example}[Rotating Coordinators with Sub-Sessions] \label{exa:GTRCWSS} \begin{align*} \GRCB{n} = \GTDecl{\Prot[R]_n}{\Role[src], \widetilde{\Role[trg]}}{\Typed{\Args[v]_{\Role[src]}}{\Sort[V]}}{\cdot}{\GR{n}}{\left( \bigodot_{i = 1..n} \GTCallS{\Role[p]_i}{\Prot[R]_n}{\overline{\Role[p]_i}}{\Args[v]_{i, i - 1}} \right)} \end{align*} where the global type of a round is given by: \begin{align*} \GR{n} = \bigodot_{j = 1..(n - 1)} \GUL{\Role[src]}{\Args[v]_{\Role[src]}}{\Role[trg]_j}{\Args[v]_{\Role[src]}} \end{align*} \end{example} \noindent The type of a single round basically remains the same but is transferred into the sub-session $ \GR{n} $ such that each round corresponds to its own sub-session. The overall session $ \GRCB{n} $ then consists of the declaration of this sub-protocol (using the $ \mathtt{let} $-construct) followed by the iteration over $ i $ of the rounds, where in each round the respective sub-session is called. Here $\overline{\Role[p]_i}$ is used to abbreviate the reordering $ \Role[p]_i, \Role[p]_1, \ldots, \Role[p]_{i - 1}, \Role[p]_{i + 1}, \ldots \Role[p]_n $ of the vector $ \tilde{\Role[p]} $. In $ \GR{n} $ we use $ \Args[v]_{\Role[src]} $ not only as transmitted value but also as the default value of the receiver. This violates our intuition of the algorithm. Intuitively the default value should be the last known value of the receiver, \ie $ \Args[v]_{i, j - 1} $ for the receiver $ i $ in round~$ j $. The implementation in the session calculus will use this value. However, since all $ \Args[v]_{i, j} $ are of the same type and because we consider (global) types here, we can use $ \Args[v]_{\Role[src]} $. \subsection{Restriction} $ \GRC{n} $ and $ \GRCB{n} $ have $ n $ roles: $ \Role[p]_1, \ldots, \Role[p]_n $. \emph{Restriction} maps a global type to the parts that are relevant for a certain role. We extend the restriction rules of \cite{DemangeonHonda12,Demangeon15} with a rule for optional blocks. \begin{definition}[Restriction] \label{def:restriction} $ $\\ The restriction operator $ \RestS{G}{\Role} $ goes inductively through all constructors except: \begin{align*} & \Rest{\GTDecl{\Prot}{\tilde{\Role}_1}{\tilde{\Args[y]}}{\tilde{\Role}_2}{G}{G'}}{\Role_0} = \GTDecl{\Prot}{\tilde{\Role}_1}{\tilde{\Args[y]}}{\tilde{\Role}_2}{G}{\!\Rest{G'}{\Role_0}}\\ & \Rest{\GTCall{\Role}{\Prot}{\tilde{\Role}}{\tilde{\Args[y]}}{G}}{\Role_0} = \begin{cases} \GTCall{\Role}{\Prot}{\tilde{\Role}}{\tilde{\Args[y]}}{\!\Rest{G}{\Role_0}} & \text{if } \Role_0 = \Role \text{ or } \Role_0 \in \tilde{\Role}\\ \RestS{G}{\Role_0} & \text{else} \end{cases}\\ & \Rest{\GTCom{\Role_1}{\Role_2}{\sum_{i \in \indexSet} \Set[]{\GTInp{\Labe_i}{\tilde{\Args}_i}{G_i}}}}{\Role_0} \!= \begin{cases} \GTCom{\Role_1}{\Role_2}{\sum_{i \in \indexSet} \Set[]{\GTInp{\Labe_i}{\tilde{\Args}_i}{\!\Rest{G_i}{\Role_0}}}} & \text{if } \Role_0 \in \Set[]{ \Role_1, \Role_2 }\\ \RestS{G_1}{\Role_0} & \text{else} \end{cases}\\ & \textcolor{blue}{\Rest{\GTOptBl{\widetilde{\Role, \tilde{\Args}}}{G}{G'}}{\Role_0}} \textcolor{blue}{\ = \begin{cases} \GTOptBl{\widetilde{\Role, \tilde{\Args}}}{\left( \RestS{G}{\Role_0} \right)}{\left( \RestS{G'}{\Role_0} \right)} & \text{if } \Role_0 \in \tilde{\Role}\\ \RestS{G'}{\Role_0} & \text{else} \end{cases}} \end{align*} \end{definition} \noindent For simplicity we abbreviate $ \Typed{\tilde{\Args[z]}}{\tilde{\Sort}} $ by $ \tilde{\Args[z]} $ for all vectors of values in this definition. 
This definition also captures the definition of restriction for the smaller type system without sub-sessions---in this case the first two rules are superfluous. Here, the restriction of an optional block on one of its participants results in the restriction of both its inner part and its continuation on that role. If we restrict an optional block on a role that does not participate, the result is the restriction of the continuation only. Accordingly, the restriction of an unreliable link on a role $ \Role[p]_i $ results in the link itself if $ \Role[p]_i $ is either the source or the target of the unreliable link and else removes the unreliable link. \begin{example}[Restriction on Participant~$ i $] \label{exa:RestRC} \begin{align*} \RestS{\GRC{n}}{\Role[p]_i} ={} & \bigodot_{j = 1..(i{-}1)} \GUL{\Role[p]_j}{\Args[v]_{j, j{-}1}}{\Role[p]_i}{\Args[v]_{i, j{-}1}}. & \tag{a}\label{exa:RestA}\\ & \bigodot_{j = 1..n, j \neq i} \GUL{\Role[p]_i}{\Args[v]_{i, i{-}1}}{\Role[p]_j}{\Args[v]_{j, i{-}1}}. & \tag{b}\label{exa:RestB}\\ & \bigodot_{j = (i + 1)..n} \GUL{\Role[p]_j}{\Args[v]_{j, j{-}1}}{\Role[p]_i}{\Args[v]_{i, j{-}1}} & \tag{c}\label{exa:RestC} \end{align*} \end{example} \noindent Restricting $ \GRC{n} $ on participant~$ i $ reduces all rounds~$ j $ (except for round~$ j = i $) to the single communication step of round~$ j $ in which participant~$ i $ receives a value. Accordingly, participant~$ i $ receives a value $ i{-}1 $ times in the first $ i{-}1 $ rounds in (\ref{exa:RestA}), then broadcasts its current value to all other participants (modelled by $ n{-}1 $ single communication steps) in (\ref{exa:RestB}), and then receives a value $ n{-}i $ times in the remaining rounds in (\ref{exa:RestC}). For the extended Example~\ref{exa:GTRCWSS} we have $ \RestS{\GRCB{n}}{\Role[p]_i} = \GRCB{n} $, since each call of a sub-session refers to all roles. The restriction of the protocol for rounds $ \GR{n} $ on the role that coordinates the respective round, \ie for $ \Role[p]_i = \Role[src] $, is (similar to (\ref{exa:RestB})): \begin{align*} \RestS{\GR{n}}{\Role[p]_i} ={} & \bigodot_{j = 1..n, j \neq i} \GUL{\Role[p]_i}{\Args[v]_{i, i{-}1}}{\Role[p]_j}{\Args[v]_{i, i{-}1}} \end{align*} whereas its restriction on another role, \ie for $ \Role[p]_i \in \widetilde{\Role[trg]} $ and $ \Role[src] = \Role[p]_j $, leads (similar to (\ref{exa:RestA}) and (\ref{exa:RestC})) to: \begin{align*} \RestS{\GR{n}}{\Role[p]_i} ={} & \GUL{\Role[p]_j}{\Args[v]_{j, j{-}1}}{\Role[p]_i}{\Args[v]_{j, j{-}1}} \end{align*} \subsection{Well-Formed Global Types} Following \cite{DemangeonHonda12} we type all objects appearing in global types with \emph{kinds} (types for types) $ \Sort[K] \deffTerms \Sort[Role] \mid \Sort[Val] \mid \diamond \mid \left( \Sort[K]_1 \times \ldots \times \Sort[K]_n \right) \to \Sort[K] $. $ \Sort[Val] $ are value-kinds, which are first-order types for values (like $ \mathbb{B} $ for boolean) or data types. $ \Sort[Role] $ is used for identifiers of roles. We use $ \diamond $ to denote protocol types and $ \to $ to denote parametrisation.
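As a small illustration of these kinds (only a sketch; the precise kinding rules are the ones of \cite{DemangeonHonda12}, which we recall next), consider the sub-protocol $ \Prot[R]_n $ of Example~\ref{exa:GTRCWSS}: it declares $ n $ internally invited roles $ \Role[src], \Role[trg]_1, \ldots, \Role[trg]_{n - 1} $ and one value argument of sort $ \Sort[V] $, so we would expect it to be kinded roughly as
\begin{align*}
\Typed{\Prot[R]_n}{\left( \underbrace{\Sort[Role] \times \ldots \times \Sort[Role]}_{n \text{ times}} \times \Sort[V] \right) \to \diamond}
\end{align*}
\ie as a protocol type that is parametrised by its role and value arguments.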
We adopt the definition of \emph{well-kinded} global types from \cite{DemangeonHonda12}, which basically ensures that all positions $ \Role, \Role_1, \Role_2, \tilde{\Role}_1, \tilde{\Role}_2 $ in global types can be instantiated only by objects of type $ \Sort[Role] $ and that the type of all sub-protocols in declarations and calls is of the form $ \Sort[K] \to \diamond $, where $ \Sort[K] $ is the product of the types of the internally invited roles and the arguments of the sub-protocol. According to \cite{DemangeonHonda12} a global type $ G $ is \emph{projectable} if \begin{inparaenum}[(1)] \item for each occurrence of $ \GTChoi{G_1}{\Role}{G_2} $ in the type and for any free role $ \Role' \neq \Role $ we have $ \RestS{G_1}{\Role'} = \RestS{G_2}{\Role'} $, \item for each occurrence of $ \GTCom{\Role_1}{\Role_2}{\sum_{i \in \indexSet} \Set[]{\GTInp{\Labe_i}{\tilde{\Args}_i}{G_i}}} $ in the type and for any free role $ \Role' \notin \Set{ \Role_1, \Role_2 } $ we have $ \RestS{G_i}{\Role'} = \RestS{G_j}{\Role'} $ for all $ i, j \in \indexSet $, \item for each occurrence of $ \GTPar{G_1}{G_2} $ the types $ G_1 $ and $ G_2 $ do not share the same free role. \end{inparaenum} To simplify the definition of projection, we write $ \Role \in G $ if $ \Role $ is a free role in the global type $ G $ and $ \Role \notin G $ otherwise. Additionally we require (similar to sub-sessions in \cite{DemangeonHonda12}) for a global type $ G $ to be projectable that \begin{inparaenum}[(1)] \item for each optional block $ \GTOptBl{\widetilde{\Role, \Typed{\tilde{\Args}}{\tilde{\Sort}}}}{G_1}{G_2} $ in $ G $, all roles in $ G_1 $ are either contained in $ \tilde{\Role} $ or are newly introduced by a sub-session and \item for each $ \GTDecl{\Prot}{\tilde{\Role}_1}{\tilde{\Args[y]}}{\tilde{\Role}_2}{G_1}{G_2} $ in $ G $, all roles in $ G_1 $ are either contained in $ \tilde{\Role}_1 $ or $ \tilde{\Role}_2 $ or are newly introduced by a sub-session. \end{inparaenum} A global type is \emph{well-formed} when it is well-kinded and projectable, and satisfies the standard linearity condition \cite{BettiniAtall08}. For more intuition on the notion of well-formedness and examples for non-well-formed protocols we refer to \cite{DemangeonHonda12}. In the examples, we use $ \Sort[V] $ as the type of the values $ \Args[v]_{i, j} $. Clearly, $ \GRC{n} $ and $ \GRCB{n} $ are well-formed. \section{Local Types with Optional Blocks} \label{sec:localTypes} Local types describe a local and partial point of view on a global communication protocol \wrt a single participant. They are used to validate and monitor distributed programs. We extend the basic local types as used \eg in \cite{BettiniAtall08,BocciAtall10} with a local type for optional blocks.
\begin{definition}[Local Types] \label{def:localTypes} Local types with optional blocks are given by \begin{align*} T & \deffTerms \LTGet{\Role}{_{i \in \indexSet} \Set{ \LTLab{\Labe_i}{\Typed{\tilde{\Args}_i}{\tilde{\Sort}_i}}{T_i} }} \sep \LTSend{\Role}{_{i \in \indexSet} \Set{ \LTLab{\Labe_i}{\Typed{\tilde{\Args}_i}{\tilde{\Sort}_i}}{T_i} }} \sep \textcolor{blue}{\LTOpt{\tilde{\Role}}{T}{\Typed{\tilde{\Args}}{\tilde{\Sort}}}{T'}}\\ & \sep \LTChoi{T_1}{T_2} \sep \LTPar{T_1}{T_2} \sep \LTRec{\TermV}{T} \sep \TermV \sep \LTEnd \end{align*} \end{definition} \noindent The first two operators specify endpoint primitives for communications with $ \mathtt{get} $ for the receiver side---where $ \Role $ is the sender---and $ \mathtt{send} $ for the sender side---where $ \Role $ denotes the receiver. Accordingly, they introduce the two possible local views of a global type for communication. $ \LTChoi{T_1}{T_2} $ is the local view of the global type $ \GTChoi{G_1}{\Role}{G_2} $ for a choice determined by the role $ \Role $ for which this local type is created. $ \LTPar{T_1}{T_2} $ represents the local view of the global type for parallel composition, \ie describes independent parts of the protocol for the considered role. Again $ \LTRec{\TermV}{T} $ and $ \TermV $ are used to introduce recursion and $ \LTEnd $ denotes the successful completion of a protocol. We add the local type $ \LTOpt{\tilde{\Role}}{T}{\Typed{\tilde{\Args}}{\tilde{\Sort}}}{T'} $. It initialises an optional block between the roles $ \tilde{\Role} $ around the local type $ T $, where the currently considered participant $ \Role $ (called \textit{owner}) is a participant of this block, \ie $ \Role \in \tilde{\Role} $. After the optional block the local type continues with $ T' $. Again we usually omit trailing $ \LTEnd $ clauses. \subsection{Local Types with Optional Blocks and Sub-Sessions} Local types describe a local and partial point of view on a global communication protocol \wrt a single participant. To obtain the local types that correspond to global types with sub-sessions we add the three operators of local types introduced by \cite{DemangeonHonda12}: \begin{align*} & \sep \LTCall{\Prot}{G}{\tilde{\Args[v]}}{\Typed{\tilde{\Args[y]}}{\tilde{\Sort}}}{\tilde{\Role}_2}{T} \sep \LTEnt{\Prot}{\Role_1}{\tilde{\Args[v]}}{\Role_2}{T} \sep \LTReq{\Prot}{\Role_1}{\tilde{\Args[v]}}{\Role_2}{T} \end{align*} Sub-sessions are created with the $ \mathtt{call} $ operator; internal invitations are handled by the $ \mathtt{req} $-operator for requests and the $ \mathtt{ent} $-operator to accept invitations. A $ \mathtt{call} $ creates a sub-session for protocol $ \Prot $ of the global type $ G $, where $ \tilde{\Args[v]} $ are value arguments handed to the protocol and $ \tilde{\Role}_2 $ are the external roles that are invited to this sub-session. In $ \LTEnt{\Prot}{\Role_1}{\tilde{\Args[v]}}{\Role_2}{T} $ the role $ \Role_2 $ refers to the initiator of the sub-session, $ \Role_1 $ denotes the role in the sub-protocol the participant accepts to take, and $ \tilde{\Args[v]} $ are the arguments of the respective protocol. Similarly, in $ \LTReq{\Prot}{\Role_1}{\tilde{\Args[v]}}{\Role_2}{T} $ the role $ \Role_1 $ is the role of the protocol the participant is invited to take, $ \tilde{\Args[v]} $ are the arguments of the protocol, and $ \Role_2 $ refers to the participant the invitation is directed to. 
\begin{definition}[Local Types with Sub-Sessions] \label{def:localTypesWSS} \begin{align*} T & \deffTerms \LTGet{\Role}{_{i \in \indexSet} \Set{ \LTLab{\Labe_i}{\Typed{\tilde{\Args}_i}{\tilde{\Sort}_i}}{T_i} }} \sep \LTSend{\Role}{_{i \in \indexSet} \Set{ \LTLab{\Labe_i}{\Typed{\tilde{\Args}_i}{\tilde{\Sort}_i}}{T_i} }} \sep \textcolor{blue}{\LTOpt{\tilde{\Role}}{T}{\Typed{\tilde{\Args}}{\tilde{\Sort}}}{T'}}\\ & \sep \LTCall{\Prot}{G}{\tilde{\Args[v]}}{\Typed{\tilde{\Args[y]}}{\tilde{\Sort}}}{\tilde{\Role}_2}{T} \sep \LTEnt{\Prot}{\Role_1}{\tilde{\Args[v]}}{\Role_2}{T} \sep \LTReq{\Prot}{\Role_1}{\tilde{\Args[v]}}{\Role_2}{T}\\ & \sep \LTChoi{T_1}{T_2} \sep \LTPar{T_1}{T_2} \sep \LTRec{\TermV}{T} \sep \TermV \sep \LTEnd \end{align*} \end{definition} \subsection{Projection} To ensure that a global type and its local types coincide, global types are projected to their local types. In \cite{DemangeonHonda12} projection is defined \wrt a protocol environment $ \env $ that associates protocol identifiers to their contents and is updated in $ \mathtt{let} $ constructs. For global types without sub-sessions $ \env $ remains empty. We inherit the rules to project global types onto their local types from \cite{Demangeon15,DemangeonHonda12} and add a rule to cover global types of optional blocks. \begin{figure} \caption{Projection Rules} \label{fig:projectionRules} \end{figure} Figure~\ref{fig:projectionRules} contains all projection rules for both considered type systems. Again there are rules---the rules to deal with sub-sessions---that are superfluous in the smaller type system. The projection rule for optional blocks has three cases. The last case is used to skip optional blocks when they are projected to roles that do not participate. The first two cases handle the projection of optional blocks to one of their participants. A local optional block is generated with the projection of $ G $ as content. The first two cases check whether the optional block indeed computes any values for the role we project onto. They differ only in how the continuation of the optional block and its inner part are connected. If the projected role does not specify default values---because no such values are required---the projected continuation $ \Proj{G'}{}{\Role_p} $ can be placed in parallel to the optional block (second case). Otherwise, the continuation has to be guarded by the optional block and, thus, by the computation of the return values (first case). By distinguishing between these first two cases, we follow the same line of argument as used for sub-sessions in \cite{DemangeonHonda12}, where the projected continuation of a sub-session $ \texttt{call} $ is either in parallel to the projection of the $ \texttt{call} $ itself or connected sequentially. Intuitively, whenever the continuation depends on the outcome of the optional block it has to be connected sequentially. The observant reader may have recognised that the global and the local types do not specify any mechanism to install the value computed in a successful optional block in its continuation. We do not want to restrict the way in which the result of an optional block is computed, except that it has to be derived from the knowledge of the owner together with communications with the other participants of the optional block. Hence obtaining its values in the projection function is difficult.
For the global and the local types, however, it is not necessary to derive the correct values but only their kinds, and these kinds have to coincide with the kinds of the default values. We leave the computation of return values and thus the data flow of the algorithm to its actual implementation after introducing the session calculus. The type system will, however, ensure that---if no block fails---exactly one vector of return values is computed within each optional block. \begin{example}[Projection of Unreliable Links] \begin{align*} \ProjS{\GUL{\Role[src]}{\Args[v]_{\Role[src]}}{\Role[trg]}{\Args[v]_{\Role[trg]}}}{}{\Role[src]} & = \LULS{\Role[src]}{\Args[v]_{\Role[src]}}{\Role[trg]} = \LTOptS{\Role[src], \Role[trg]}{\LTSend{\Role[trg]}{\LTLabS{\Labe[c]}{\Typed{\Args[v]_{\Role[src]}}{\Sort[V]}}}}{\cdot}\\ \ProjS{\GUL{\Role[src]}{\Args[v]_{\Role[src]}}{\Role[trg]}{\Args[v]_{\Role[trg]}}}{}{\Role[trg]} & = \LULT{\Role[src]}{\Args[v]_{\Role[src]}}{\Role[trg]}{\Args[v]_{\Role[trg]}} = \LTOptS{\Role[src], \Role[trg]}{\LTGet{\Role[src]}{\LTLabS{\Labe[c]}{\Typed{\Args[v]_{\Role[src]}}{\Sort[V]}}}}{\Typed{\Args[v]_{\Role[trg]}}{\Sort[V]}} \end{align*} \end{example} \noindent When projected onto its sender, the global type for a communication over an unreliable link of Example~\ref{exa:GTunreliableLink} results in the local type $ \LULS{\Role[src]}{\Args[v]_{\Role[src]}}{\Role[trg]} $ that consists of an optional block containing a send operation towards $ \Role[trg] $. Since the optional block for the sender does not specify any default values, the local type $ \LULS{\Role[src]}{\Args[v]_{\Role[src]}}{\Role[trg]} $ will be placed in parallel to the projection of the continuation. The projection onto the receiver results in the local type $ \LULT{\Role[src]}{\Args[v]_{\Role[src]}}{\Role[trg]}{\Args[v]_{\Role[trg]}} $ that consists of an optional block containing a receive operation from $ \Role[src] $. Here a default value is necessary for the case that the message is lost. So the type $ \LULT{\Role[src]}{\Args[v]_{\Role[src]}}{\Role[trg]}{\Args[v]_{\Role[trg]}} $ has to be composed sequentially with the projection of the continuation. \begin{example}[Local Types for Rotating Coordinators] \label{exa:LTRC} \begin{align*} \ProjS{\GRC{n}}{}{\Role[p]_i} ={} & \Proj{\RestS{\GRC{n}}{\Role[p]_i}}{}{\Role[p]_i}\\ ={} & \bigodot_{j = 1..(i{-}1)} \LULT{\Role[p]_j}{\Args[v]_{j, j{-}1}}{\Role[p]_i}{\Args[v]_{i, j{-}1}}.\\ & \left( \LTPar{\left( \prod_{j = 1..n, j \neq i} \LULS{\Role[p]_i}{\Args[v]_{i, i{-}1}}{\Role[p]_j} \right)}{\left( \bigodot_{j = (i + 1)..n} \LULT{\Role[p]_j}{\Args[v]_{j, j{-}1}}{\Role[p]_i}{\Args[v]_{i, j{-}1}} \right)} \right) \end{align*} \end{example} \noindent The projection of Example~\ref{exa:GTRC} onto participant $ \Role[p]_i $ consists of $ i{-}1 $ sequential receptions (rounds $ 1 $ to $ i{-}1 $), then $ n{-}1 $ parallel transmissions of the current value of $ \Role[p]_i $ to the remaining participants (round $ i $), and finally $ n{-}i $ more sequential receptions (rounds $ i {+} 1 $ to $ n $) in parallel to round $ i $. Due to the different cases of the projection of optional blocks, we obtain parallel optional blocks in the local type although all blocks are sequential in the global type.
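To make the shape of this local type concrete, we instantiate the formula above for $ n = 3 $ and $ i = 1 $, \ie for the coordinator of the first round (this is a plain unfolding of Example~\ref{exa:LTRC}; no new notation is involved): there is no preceding reception, the two send blocks towards $ \Role[p]_2 $ and $ \Role[p]_3 $ are in parallel, and the two later receptions remain sequentially ordered among themselves but run in parallel to the sends:
\begin{align*}
\ProjS{\GRC{3}}{}{\Role[p]_1} ={} & \LTPar{\left( \LTPar{\LULS{\Role[p]_1}{\Args[v]_{1, 0}}{\Role[p]_2}}{\LULS{\Role[p]_1}{\Args[v]_{1, 0}}{\Role[p]_3}} \right)}{\left( \LULT{\Role[p]_2}{\Args[v]_{2, 1}}{\Role[p]_1}{\Args[v]_{1, 1}}.\LULT{\Role[p]_3}{\Args[v]_{3, 2}}{\Role[p]_1}{\Args[v]_{1, 2}} \right)}
\end{align*}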
\begin{example}[Rotating Coordinators with Sub-Sessions] \label{exa:LTRCWSS} \begin{align*} \ProjS{\GRCB{n}}{}{\Role[p]_i} ={} & \ProjS{G'}{\PDec{\Prot[R]_n}{\Role[src], \widetilde{\Role[trg]}}{\Args[v]_{\Role[src]}}{\cdot}{\GR{n}}}{\Role[p]_i}\\ ={} & \bigodot_{j = 1..(i{-}1)} \LTEntS{\Prot[R]_n}{\Role[trg]_i}{\Args[v]_{j, j{-}1}}{\Role[p]_j}.\\ & \LTCall{\Prot[R]}{\GR{n}}{\Args[v]_{i, i{-}1}}{\Args[v]_{\Role[src]}}{\cdot}{\big( } \LTPar{\LTEntS{\Prot[R]_n}{\Role[src]}{\Args[v]_{i, i{-}1}}{\Role[p]_i}}{\LTReqS{\Prot[R]_n}{\Role[src]}{\Args[v]_{i, i{-}1}}{\Role[p]_i}}\\ & \quad \LTPar{}{\LTPar{\prod_{j = 1..i} \LTReqS{\Prot[R]_n}{\Role[trg]_j}{\Args[v]_{i, i{-}1}}{\Role[p]_j}}{\prod_{j = i..(n{-}1)} \LTReqS{\Prot[R]_n}{\Role[trg]_j}{\Args[v]_{i, i{-}1}}{\Role[p]_{j + 1}}}}\\ & \quad \LTPar{}{\big( \bigodot_{j = (i + 1)..n}\!\!\! \LTEntS{\Prot[R]_n}{\Role[trg]_{i{-}1}}{\Args[v]_{j, j{-}1}}{\Role[p]_j} \big)} \big) \end{align*} \end{example} \noindent To project the global type $ \GRCB{n} $ of Example~\ref{exa:GTRCWSS} to the local type of participant~$ i $, we first add the information about the declaration of the protocol $ \Prot[R] $ to the environment and then project the $ n $ rounds. The first $ i{-}1 $ rounds and the last $ n{-}i $ rounds are projected to sequentially composed acceptance notifications $ \LTEntS{\Prot[R]_n}{\Role[trg]_i}{\Args[v]_{j, j{-}1}}{\Role[p]_j} $ to participate in the sub-session for the respective round as target, \ie receiver. The projection of round $ i $ on the coordinator participant $ i $ initialises a sub-session using the $ \mathtt{call} $-operator followed by the acceptance notion of participant~$ i $ ($ \mathtt{ent} $) to participate as $ \Role[src] $ (sender) and the invitations for all participants ($ \mathtt{req} $). Similar to Example~\ref{exa:LTRC}, the projections of the rounds $ j \neq i $ are composed sequentially, whereas the projection of round $ i $---consisting of the parallel composition of the respective invitations and the acceptance of $ \Role[p]_i $---is composed in parallel to the projection of round $ i {+} 1 $. \section{A Session Calculus with Optional Blocks} \label{sec:calculus} Global types (and the local types that are derived from them) can be considered as specifications that describe the desired properties of the system we want to analyse. The process calculus, that we use to model/implement the system, is in the case of session types usually a variant of the $ \pi $-calculus \cite{milnerParrowWalker92}. We extend a basic session-calculus as used \eg in \cite{BettiniAtall08,BocciAtall10} with two operators. \begin{definition}[Processes] \label{def:processes} Processes are given by \begin{align*} P & \deffTerms \PInp{\Chan}{\tilde{\Args}}{P} \sep \POut{\Chan}{\tilde{\Chan[s]}}{P} \sep \PGet{\Chan[k]}{\Role_1}{\Role_2}{_{i \in \indexSet} \Set{ \PLab{\Labe_i}{\tilde{\Args}_i}{P_i} }} \sep \PSend{\Chan[k]}{\Role_1}{\Role_2}{\Labe}{\tilde{\Args[v]}}{P}\\ & \sep \textcolor{blue}{\POpt{\Role}{\tilde{\Role}}{P}{\tilde{\Args}}{\tilde{\Args[v]}}{P'}} \sep \textcolor{blue}{\POptEnd{\Role}{\tilde{\Args[v]}}}\\ & \sep \PRes{\Args}{P} \sep \PChoi{P_1}{P_2} \sep \PPar{P_1}{P_2} \sep \PRec{\TermV[X]}{P} \sep \PVar{\TermV[X]} \sep \PEnd \end{align*} \end{definition} The prefixes $ \PInp{\Chan}{\tilde{\Args}}{P} $ and $ \POut{\Chan}{\tilde{\Chan[s]}}{P} $ are inherited from the $ \pi $-calculus and are used for external invitations. 
Using the shared channel $ \Chan $, an external participant can be invited with the output $ \POut{\Chan}{\tilde{\Chan[s]}}{P} $ transmitting the session channels $ \tilde{\Chan[s]} $ that are necessary to participate and the external participant can accept the invitation using the input $ \PInp{\Chan}{\tilde{\Args}}{P} $. The following two operators introduce a branching input and the corresponding transmission on the session channel $ \Chan[k] $ from $ \Role_1 $ to $ \Role_2 $. These two operators correspond to the local types for $ \mathtt{get} $ and $ \mathtt{send} $. Restriction $ \PRes{\Args}{P} $ allows us to generate a fresh name that is not known outside of the scope of this operator unless it was explicitly communicated. For simplicity and following \cite{Demangeon15} we assume that only shared channels $ \Chan $ for external invitations and session channels $ \Chan[s], \Chan[k] $ for not yet initialised sub-sessions are restricted, because this covers the interesting cases\footnote{Sometimes it might be useful to allow the restriction of values, \eg for security. For this case an additional restriction operator can be introduced.} and simplifies the typing rules in Figure~\ref{fig:typingRules}. The term $ \PChoi{P_1}{P_2} $ either behaves as $ P_1 $ or $ P_2 $. $ \PPar{P_1}{P_2} $ defines the parallel composition of the processes $ P_1 $ and $ P_2 $. $ \PRec{\TermV[X]}{P} $ and $ \PVar{\TermV[X]} $ are used to introduce recursion. $ \PEnd $ denotes the completion of a process. To implement optional blocks, we add $ \POpt{\Role}{\tilde{\Role}}{P}{\tilde{\Args}}{\tilde{\Args[v]}_d}{P'} $ and $ \POptEnd{\Role}{\tilde{\Args[v]}} $. The former defines an optional block between the roles $ \tilde{\Role} $ around the process $ P $ with the default values $ \tilde{\Args[v]}_d $. We require that the owner $ \Role $ of this block is one of its participants $ \tilde{\Role} $, \ie $ \Role \in \tilde{\Role} $. In the case of success, $ \POptEnd{\Role}{\tilde{\Args[v]}} $ transmits the computed values $ \tilde{\Args[v]} $ from within the optional block to the continuation $ P' $ to be substituted for the variables $ \tilde{\Args} $ within $ P' $. If the optional block fails, the variables $ \tilde{\Args} $ of $ P' $ are replaced by the default values $ \tilde{\Args[v]}_d $ instead. Without loss of generality we assume that the roles $ \tilde{\Role} $ of optional blocks are distinct. Since optional blocks can compute only values and their defaults need to be of the same kind, $ \POptEnd{\Role}{\tilde{\Args[v]}} $ and the defaults cannot carry session names, \ie names used as session channels. The type system ensures that the inner part $ P $ of a successful optional block reaches some $ \POptEnd{\Role}{\tilde{\Args[v]}} $ and thus transmits computed values of the expected kinds in exactly one of its parallel branches. The semantics presented below ensures that every optional block can transmit at most one vector of computed values and has to fail otherwise. Similarly, optional blocks that use roles in their inner part $ P $ that are different from $ \tilde{\Role} $ and are not newly introduced as part of a sub-session within $ P $ cannot be well-typed. Since optional blocks open a context block around their inner part that separates $ P $ from the continuation $ P' $, scopes as introduced by input prefixes and restriction that are opened within $ P $ cannot cover parts of $ P' $.
If an optional block does not compute any values and consequently the vector of default values is empty, we abbreviate $ \POpt{\Role}{\tilde{\Role}}{P}{\cdot}{\cdot}{Q} $ by $ \POptNV{\Role}{\tilde{\Role}}{P}{Q} $. Again we usually omit trailing $ \PEnd $. In Definition~\ref{def:processes} all occurrences of $ \Args $, $ \tilde{\Args} $, and $ \tilde{\Args}_i $ refer to bound names of the respective operators. The set $ \FreeNames{P} $ of free names of $ P $ is the set of names of $ P $ that are not bound. A substitution $ \Set[]{ \Subst{\Args[y]_1}{\Args_1}, \ldots, \Subst{\Args[y]_n}{\Args_n} } = \Set[]{ \Subst{\tilde{\Args[y]}}{\tilde{\Args}} } $ is a finite mapping from names to names, where the $ \tilde{\Args} $ are pairwise distinct. The application of a substitution on a term $ P\!\Set[]{ \Subst{\tilde{\Args[y]}}{\tilde{\Args}} } $ is defined as the result of simultaneously replacing all free occurrences of $ \Args_i $ by $ \Args[y]_i $, possibly applying alpha-conversion to avoid capture or name clashes. For all names $ n \notin \tilde{\Args} $ the substitution behaves as the identity mapping. We use '$ . $' (as \eg in $ \PInp{\Chan}{\tilde{\Args}}{P} $) to denote sequential composition. In all operators the part before '$ . $' guards the continuation after the '$ . $', \ie the continuation cannot reduce before the guard was reduced. A subprocess of a process is \emph{guarded} if it occurs after such a guard, \ie is the continuation (or part of the continuation) of a guard. Guarded subprocesses can be \emph{unguarded} by steps that remove the guard. \begin{example}[Implementation of Unreliable Links] \label{exa:PUL} \begin{align*} \PULS{\Role[p]_1}{\Args[v]_1}{\Role[p]_2} &= \POptNVS{\Role[p]_1}{\Role[p]_1, \Role[p]_2}{\PSend{\Chan[s]}{\Role[p]_1}{\Role[p]_2}{\Labe[c]}{\Args[v]_1}{\POptEnd{\Role[p]_1}{\cdot}}}\\ \PULT{\Role[p]_1}{\Role[p]_2}{\Args[v]_2} &= \POptS{\Role[p]_2}{\Role[p]_1, \Role[p]_2}{\PGet{\Chan[s]}{\Role[p]_1}{\Role[p]_2}{\PLab{\Labe[c]}{\Args}{\POptEnd{\Role[p]_2}{\Args}}}}{\Args[y]}{\Args[v]_2} \end{align*} \end{example} \noindent $ \PULS{\Role[p]_1}{\Args[v]_1}{\Role[p]_2} $ is the implementation of a single send action on an unreliable link and $ \PULT{\Role[p]_1}{\Role[p]_2}{\Args[v]_2} $ the corresponding receive action. Here a continuation of the sender cannot gain any information from the modelled communication, not even whether it succeeded, whereas a continuation of the receiver obtains the transmitted value $ \Args[v]_1 $ in the case of success and its own default value $ \Args[v]_2 $ otherwise. To implement the rotating coordinator algorithm of Example~\ref{exa:RCAlgorithm}, we replace the check '\texttt{if alive}$ (p_r) $' by an optional block for communications. In the first $ i{-}1 $ and the last $ n{-}i $ rounds, participant~$ i $ either receives a value or (if the respective communication fails) uses its value of the previous round as default value. In round~$ i $, participant~$ i $ transmits its current value to each other participant.
\begin{example}[Rotating Coordinator Implementation] \label{exa:PRC} \begin{align*} \PRC{n} ={} & \prod_{i = 1..n} \left( \PPar{\POutS{\Chan_i}{\Chan[s]}}{\PInp{\Chan_i}{\Chan[s]}{\PIN{i}{n}}} \right)\\ \PIN{i}{n} ={} & ( \bigodot_{j = 1..(i{-}1)} \POptS{\Role[p]_i}{\Role[p]_i, \Role[p]_j}{P_{j \to i, \downarrow}}{\Args[v]_{i, j}}{\Args[v]_{i, j{-}1}} ).\\ & ( \prod_{j = 1..n, j \neq i} \POptNVS{\Role[p]_i}{\Role[p]_i, \Role[p]_j}{P_{i \to j, \uparrow}} \PPar{}{( \bigodot_{j = (i + 1)..n} \POptS{\Role[p]_i}{\Role[p]_i, \Role[p]_j}{P_{j \to i, \downarrow}}{\Args[v]_{i, j}}{\Args[v]_{i, j{-}1}} ))}\\ P_{j \to i, \downarrow} ={} & \PGet{\Chan[s]}{\Role[p]_j}{\Role[p]_i}{\PLab{\Labe[c]}{\Args[v]_{j, j{-}1}}{\POptEnd{\Role[p]_i}{\Args[v]_{j, j{-}1}}}}\\ P_{i \to j, \uparrow} ={} & \PSend{\Chan[s]}{\Role[p]_i}{\Role[p]_j}{\Labe[c]}{\Args[v]_{i, i{-}1}}{\POptEnd{\Role[p]_i}{\cdot}} \end{align*} \end{example} \noindent The overall system $ \PRC{n} $ consists of the parallel composition of the $ n $ participants. The channel $ \Chan_i $ is used to distribute the initial session channel. Since these communications on $ \tilde{\Chan} $ are used to initialise the system and not to model the algorithm, we assume that they are reliable. The term $ \PIN{i}{n} $ models participant $ i $. Each participant first optionally receives $ i{-}1 $ times a value from another participant. Therefore, an optional block surrounds the term $ P_{j \to i, \downarrow} $. If the optional block succeeds, then $ P_{j \to i, \downarrow} $ receives the value $ \Args[v]_{j, j{-}1} $ from the current leader of the round and finishes its optional block with the transmission of the computed value $ \POptEnd{\Role[p]_i}{\Args[v]_{j, j{-}1}} $. In this case, the value $ \Args[v]_{i, j} $ is instantiated with the received value $ \Args[v]_{j, j{-}1} $. If the communication with the current leader $ \Role[p]_j $ fails, then the value $ \Args[v]_{i, j} $ is instantiated instead with the default value $ \Args[v]_{i, j{-}1} $, \ie the last value of participant~$ i $. In these first $ i{-}1 $ (and also the last $ n{-}i $) rounds, participant~$ i $ is a receiver and consists of exactly one optional block per round. The last $ n{-}i $ rounds of participant~$ i $ are similar. In round~$ i $, participant~$ i $ is the sender and consists of $ n{-}1 $ optional blocks (second line of the definition of $ \PIN{i}{n} $), exactly one such block with each other participant. Here these $ n{-}1 $ optional blocks do not need a default value and accordingly do not compute a value. Note that the continuation of all these $ n{-}1 $ blocks of the coordinator is $ \PEnd $. For each other participant~$ j $ these blocks surround the term $ P_{i \to j, \uparrow} $ in which $ \Role[p]_i $ transmits its current value $ \Args[v]_{i, i{-}1} $ to $ \Role[p]_j $. Note that, in round~$ i $, participant~$ i $ does not need to update its own value, since it gains no new information. Therefore, we assumed $ \Args[v]_{i, i} := \Args[v]_{i, i{-}1} $ for all $ i < n $ in the assumed vectors of values. In Example~\ref{exa:PRC}, the $ n{-}1 $ optional blocks of round~$ i $ are pairwise in parallel and parallel to the optional block of round~$ i {+} 1 $, which guards the block of round~$ i {+} 2 $ and so forth. This matches an intuitive understanding of this process in terms of asynchronous communications. 
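To illustrate the structure described above, the term for the second participant in a system with $ n = 3 $ unfolds as follows (a plain instantiation of the definition of $ \PIN{i}{n} $; the abbreviations $ P_{j \to i, \downarrow} $ and $ P_{i \to j, \uparrow} $ are the ones defined above): the receive block of round~$ 1 $ guards the two parallel send blocks of round~$ 2 $, which are in parallel to the receive block of round~$ 3 $:
\begin{align*}
\PIN{2}{3} ={} & \POptS{\Role[p]_2}{\Role[p]_2, \Role[p]_1}{P_{1 \to 2, \downarrow}}{\Args[v]_{2, 1}}{\Args[v]_{2, 0}}.\\
& \left( \PPar{\POptNVS{\Role[p]_2}{\Role[p]_2, \Role[p]_1}{P_{2 \to 1, \uparrow}}}{\PPar{\POptNVS{\Role[p]_2}{\Role[p]_2, \Role[p]_3}{P_{2 \to 3, \uparrow}}}{\POptS{\Role[p]_2}{\Role[p]_2, \Role[p]_3}{P_{3 \to 2, \downarrow}}{\Args[v]_{2, 3}}{\Args[v]_{2, 2}}}} \right)
\end{align*}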
The sending operations emit the respective value as soon as they are unguarded, but they syntactically remain part of the term until the (possibly later) reception of the respective message consumes it. In fact, the presented session calculus is synchronous, but the examples---including the examples with sub-sessions presented later---can be considered as distributed asynchronous processes, because they use neither choice nor output continuations different from $ \PEnd $ \cite{hondaTokoro91,boudol92,palamidessi03} and because optional blocks of senders have no default values and each send action matches exactly one receive action and vice versa \cite{fossacs12_pi}. \subsection{A Session Calculus with Optional Blocks and Sub-Sessions} Again we extend the session calculus in order to obtain a mechanism to express modularity. \cite{DemangeonHonda12} introduces three operators for this purpose. \begin{align*} \sep \PDecl{\Chan[k]}{\Chan[s]}{\tilde{\Args[v]}}{\tilde{\Chan}}{\tilde{\Role}}{P} \sep \PEnt{\Chan[s]}{\Role_1}{\Role_2}{\Role_3}{\Args}{P} \sep \PReq{\Chan[s]}{\Role_1}{\Role_2}{\Role_3}{\Chan[k]}{P} \end{align*} $ \PDecl{\Chan[k]}{\Chan[s]}{\tilde{\Args[v]}}{\tilde{\Chan}}{\tilde{\Role}}{P} $ allows a process to create a sub-session $ \Chan[k] $, where $ \Chan[s] $ is the parent session, $ \tilde{\Args[v]} $ are arguments, $ \tilde{\Role} $ are external participants, and $ \tilde{\Chan} $ are the channels for external invitations. Internal invitations are handled by $ \PEnt{\Chan[s]}{\Role_1}{\Role_2}{\Role_3}{\Args}{P} $ and $ \PReq{\Chan[s]}{\Role_1}{\Role_2}{\Role_3}{\Chan[k]}{P} $, where $ \Role_1 $ invites $ \Role_2 $ to play role $ \Role_3 $ in a sub-session. Here $ \Args $ is a name that is bound in $ P $ within the operator $ \PEnt{\Chan[s]}{\Role_1}{\Role_2}{\Role_3}{\Args}{P} $. All other names of these three operators are free. Again the '$ . $' is used to refer to sequential composition, \ie in all three operators the respective continuation $ P $ is guarded. \begin{definition}[Processes with Sub-Sessions] \label{def:processesWSS} \begin{align*} P & \deffTerms \PInp{\Chan}{\tilde{\Args}}{P} \sep \POut{\Chan}{\tilde{\Chan[s]}}{P} \sep \PGet{\Chan[k]}{\Role_1}{\Role_2}{_{i \in \indexSet} \Set{ \PLab{\Labe_i}{\tilde{\Args}_i}{P_i} }} \sep \PSend{\Chan[k]}{\Role_1}{\Role_2}{\Labe}{\tilde{\Args[v]}}{P}\\ & \sep \textcolor{blue}{\POpt{\Role}{\tilde{\Role}}{P}{\tilde{\Args}}{\tilde{\Args[v]}}{P'}} \sep \textcolor{blue}{\POptEnd{\Role}{\tilde{\Args[v]}}}\\ & \sep \PDecl{\Chan[k]}{\Chan[s]}{\tilde{\Args[v]}}{\tilde{\Chan}}{\tilde{\Role}}{P} \sep \PEnt{\Chan[s]}{\Role_1}{\Role_2}{\Role_3}{\Args}{P} \sep \PReq{\Chan[s]}{\Role_1}{\Role_2}{\Role_3}{\Chan[k]}{P}\\ & \sep \PRes{\Args}{P} \sep \PChoi{P_1}{P_2} \sep \PPar{P_1}{P_2} \sep \PRec{\TermV[X]}{P} \sep \PVar{\TermV[X]} \sep \PEnd \end{align*} \end{definition} Similar to Example~\ref{exa:PRC}, we present an implementation of the rotating coordinators with a sub-session for each round.
\begin{example}[Rotating Coordinators with Sub-Sessions] \label{exa:PRCWSS} \begin{align*} \PRCB{n} ={} & \prod_{i = 1..n} \left( \PPar{\POutS{\Chan_i}{\Chan[s]}}{\PInp{\Chan_i}{\Chan[s]}{\PINB{i}{n}}} \right)\\ \PINB{i}{n} ={} & \bigodot_{j = 1..(i{-}1)} \PEnt{\Chan[s]}{\Role[p]_j}{\Role[p]_i}{\Role[trg]_i}{\Args}{P'_{\downarrow}\!\left( \Role[trg]_i, i, j \right)}.\\ & \quad \big( \PRes{\Chan[k]}{\left( \PDecl{\Chan[k]}{\Chan[s]}{\Args[v]_{i, i{-}1}}{\cdot}{\cdot}{P'_{\uparrow}\!\left( i \right)} \right)} \PPar{}{\big( \bigodot_{j = (i + 1)..n} \PEnt{\Chan[s]}{\Role[p]_j}{\Role[p]_i}{\Role[trg]_{i{-}1}}{\Args}{P'_{\downarrow}\!\left( \Role[trg]_{i{-}1}, i, j \right)} \big)} \big)\\ P'_{\downarrow}\!\left( \Role, i, j \right) ={} & \POptS{\Role}{\Role, \Role[src]}{\PGet{\Args}{\Role[src]}{\Role}{\PLab{\Labe[c]}{\Args[y]}{\POptEnd{\Role}{\Args[y]}}}}{\Args[v]_{i, j}}{\Args[v]_{i, j{-}1}}\\ P'_{\uparrow}\!\left( i \right) ={} & \PPar{\PEnt{\Chan[s]}{\Role[p]_i}{\Role[p]_i}{\Role[src]}{\Args}{P'_{\Prot[R]}\!\left( i, \Args \right)}}{\PReqS{\Chan[s]}{\Role[p]_i}{\Role[p]_i}{\Role[src]}{\Chan[k]}} \PPar{}{\PPar{\prod_{j = 1..i} \PReqS{\Chan[k]}{\Role[p]_i}{\Role[p]_j}{\Role[trg]_j}{\Chan[k]}}{\prod_{j = i..(n{-}1)} \PReqS{\Chan[s]}{\Role[p]_i}{\Role[p]_{j + 1}}{\Role[trg]_j}{\Chan[k]}}}\\ P'_{\Prot[R]}\!\left( i, \Args \right) ={} & \prod_{j = 1..(n{-}1)} \!\!\!\!\!\POptNVS{\Role[src]}{\Role[src], \Role[trg]_j}{\PSend{\Args}{\Role[src]}{\Role[trg]_j}{\Labe[c]}{\Args[v]_{i, i{-}1}}{\POptEnd{\Role[src]}{\cdot}}} \end{align*} \end{example} \noindent $ \PRCB{n} $ is the implementation of the rotating coordinator algorithm using a sub-session for each round in the global type. Accordingly, the differences between the Examples~\ref{exa:PRC} and \ref{exa:PRCWSS} are due to the initialisation of the sub-sessions. In each round in which participant~$ i $ is not the coordinator, participant~$ i $ first accepts the invitation of the current coordinator to participate as receiver in the sub-session of the round and then receives and updates its value similar to Example~\ref{exa:PRC}. If participant~$ i $ is itself the coordinator, then it initialises a new session~$ k $ for the round and then, in parallel, invites all processes (including itself) to participate and accepts to participate in this sub-session as sender, followed by the $ n{-}1 $ optional blocks to transmit its value, similar to Example~\ref{exa:PRC}. \subsection{Reduction Semantics} We identify processes up to structural congruence, where structural congruence is defined by the rules: \begin{center} \begin{tabular}{c} $ \PPar{P}{\PEnd} \equiv P $ \quad\quad $ \PPar{P_1}{P_2} \equiv \PPar{P_2}{P_1} $ \quad\quad $ \PPar{P_1}{\left( \PPar{P_2}{P_3} \right)} \equiv \PPar{\left( \PPar{P_1}{P_2} \right)}{P_3} $ \\ $ \PRec{\TermV[X]}{P} \equiv P\!\Set[]{ \Subst{\PRec{\TermV[X]}{P}}{\TermV[X]} } $ \quad\quad $ \PChoi{P_1}{P_2} \equiv \PChoi{P_2}{P_1} $ \\ $ \PRes{\Args}{\PEnd} \equiv \PEnd $ \quad\quad $ \PRes{\Args}{\PRes{\Args[y]}{P}} \equiv \PRes{\Args[y]}{\PRes{\Args}{P}} $ \quad\quad $ \PRes{\Args}{\left( \PPar{P_1}{P_2} \right)} \equiv \PPar{P_1}{\PRes{\Args}{P_2}} $ \; if $ \Args \notin \FreeNames{P_1} $ \end{tabular} \end{center} In \cite{DemangeonHonda12} the semantics is given by a set of reduction rules that are defined \wrt evaluation contexts. We extend them with optional blocks.
\begin{definition} \label{def:evaluationContexts} $ \EC \deffTerms \ECHole \sep \ECPar{P}{\EC} \sep \ECRes{\Args}{\EC} \sep \POpt{\Role}{\tilde{\Role}}{\EC}{\tilde{\Args}}{\tilde{\Args[v]}}{P'} $ \end{definition} \noindent Intuitively an evaluation context is a term with a single hole that is not guarded. Additionally, we introduce two variants of evaluation contexts and a context for blocks that are used to simplify the presentation of our new rules. \begin{definition} $ \ECR \deffTerms \ECHole \sep \ECPar{P}{\ECR} \sep \POpt{\Role}{\tilde{\Role}}{\ECR}{\tilde{\Args}}{\tilde{\Args[v]}}{P'} $\\ $ \ECO \deffTerms \POpt{\Role}{\tilde{\Role}}{\ECP}{\tilde{\Args}}{\tilde{\Args[v]}}{P'} $, where $ \ECP \deffTerms \ECHole \sep \ECPar{P}{\ECP} $ \end{definition} \noindent Accordingly, a $ \ECO $-context consists of exactly one optional block that contains an $ \ECP $-context, \ie a single hole that can occur within the parallel composition of arbitrary processes. We define the function $ \RolesOf{\POpt{\Role}{\tilde{\Role}}{\ECP}{\tilde{\Args}}{\tilde{\Args[v]}}{P'}} \deff \tilde{\Role} $ to return the roles of the optional block of a $ \ECO $-context, and the function $ \OwnerOf{\POpt{\Role}{\tilde{\Role}}{\ECP}{\tilde{\Args}}{\tilde{\Args[v]}}{P'}} \deff \Role $ to return its owner. Figure~\ref{fig:reductionRules} presents all reduction rules for the two introduced versions of the session calculus: both include optional blocks, but only the second one includes sub-sessions. For the simpler session calculus we need the Rules~$ (\mathsf{comS}) $, $ (\mathsf{choice}) $, and $ (\mathsf{comC}) $ to deal with the standard operators for communication, choice, and external invitations to sessions, respectively. Since evaluation contexts $ \EC $ contain optional blocks, these rules allow for steps within a single optional block. To capture optional blocks for this first session calculus, we introduce the new Rules~$ (\mathsf{fail}) $, $ (\mathsf{succ}) $, $ (\mathsf{cSO}) $, and $ (\mathsf{cCO}) $. For the second session calculus, \cite{DemangeonHonda12} adds the Rules~$ (\mathsf{subs}) $ and $ (\mathsf{join}) $ to deal with sub-sessions, and we introduce the new Rule~$ (\mathsf{jO}) $ to capture sub-sessions within optional blocks. Here $ \dot{=} $ means that the two compared vectors contain the same roles but not necessarily in the same order, \ie $ \dot{=} $ checks whether the sets of participants of two optional blocks are the same. \begin{figure*} \caption{Reduction Rules} \label{fig:reductionRules} \end{figure*} The Rules~(\textsf{comS}), (\textsf{comC}), and (\textsf{join}) represent three different kinds of communication. They define communications within a session, external session invitations, and internal session invitations, respectively. In all three cases communication is an axiom that requires two matching counterparts of communication primitives (of the respective kind) to be placed in parallel within an evaluation context. As a consequence of the respective communication step the continuations of the communication primitives are unguarded and the values transmitted in the communication step are instantiated (substituted) in the receiver continuation. (\textsf{choice}) allows the reduction of either side of a choice if the respective side can perform a step. (\textsf{subs}) initialises a sub-session by transmitting external invitations.
The two rules (\textsf{succ}) and (\textsf{fail}) describe the main features of optional blocks, namely how they succeed (\textsf{succ}) and what happens if they fail (\textsf{fail}). (\textsf{fail}) aborts an optional block, \ie removes it and unguards its continuation instantiated with the default values. This rule can be applied whenever an optional block is unguarded, \ie there is no way to ensure that an optional block does indeed perform any step (or terminates after successfully doing some of its steps). In combination with (\textsf{succ}), it introduces the non-determinism that is used to express the random nature in which system errors may occur. If we use optional blocks that cover a single transmission over an unreliable link, each use of the Rule~(\textsf{fail}) corresponds to a single link failure. (\textsf{succ}) is the counterpart of (\textsf{fail}); it removes a successfully completed optional block and unguards its continuation instantiated with the computed results. To successfully complete an optional block, we require that its content reduces to a single occurrence of $ \POptEnd{\Role}{\tilde{\Args[v]}} $, where $ \Role $ is the owner of the block and accordingly one of the participating roles. Since (\textsf{succ}) and (\textsf{fail}) are the only ways to reduce $ \POptEnd{\Role}{\tilde{\Args[v]}} $, this ensures that a successful optional block can compute only a single vector of return values. Other parallel branches in the inner part of an optional block have to terminate with $ \PEnd $. This ensures that no confusion can arise from the computation of different values in different parallel branches. Since at the process level an optional block covers only a single participant, this limitation does not restrict the expressive power of the considered processes. If the content of an optional block cannot reduce to $ \POptEnd{\Role}{\tilde{\Args[v]}} $, the optional block is doomed to fail. The remaining rules describe how different optional blocks can interact. Here, we need to ensure that communication from within an optional block preserves isolation, \ie that such communications are restricted to the encapsulated parts of other optional blocks. The $ \ECR $-contexts allow for two such blocks to be nested within different optional blocks. The exact definition of such a communication rule depends on the semantics of the considered calculi and their communication rules. Here, these are the Rules (\textsf{cSO}), (\textsf{cCO}), and (\textsf{jO}). They are the counterparts of (\textsf{comS}), (\textsf{comC}), and (\textsf{join}) and accordingly allow for the respective kind of communication step. As an example consider Rule (\textsf{cSO}). In comparison to (\textsf{comS}), Rule~(\textsf{cSO}) ensures that communications involving the content of an optional block are limited to two such contents of optional blocks with the same participants. This ensures that optional blocks describe the local viewpoints of the encapsulated protocol. Optional blocks do not allow for scope extrusion of restricted names, \ie a name restricted within an optional block cannot be transmitted, nor can an optional block successfully be terminated if the computed result values are subject to a restriction from the content of the optional block. Also values that are communicated between optional blocks can be used only by the continuation of the optional block and only if the optional block was completed successfully.
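To illustrate how these rules interact, consider the sender and the receiver of Example~\ref{exa:PUL} composed in parallel, where $ P $ is the continuation of the receiver. The following reduction sequence is only a sketch that uses the rule names as described above; the precise formulation of the rules, including their side conditions, is the one of Figure~\ref{fig:reductionRules}. Rule~(\textsf{cSO}) performs the communication between the contents of the two blocks, and two subsequent applications of (\textsf{succ})---one per block---yield, up to structural congruence, the continuation instantiated with the received value:
\begin{align*}
& \PPar{\PULS{\Role[p]_1}{\Args[v]_1}{\Role[p]_2}}{\PULT{\Role[p]_1}{\Role[p]_2}{\Args[v]_2}.P}\\
\longmapsto{} & \PPar{\POptNVS{\Role[p]_1}{\Role[p]_1, \Role[p]_2}{\POptEnd{\Role[p]_1}{\cdot}}}{\POptS{\Role[p]_2}{\Role[p]_1, \Role[p]_2}{\POptEnd{\Role[p]_2}{\Args[v]_1}}{\Args[y]}{\Args[v]_2}.P}\\
\longmapsto^2{} & P\Set[]{ \Subst{\Args[v]_1}{\Args[y]} }
\end{align*}
If instead the receiver's block is removed by (\textsf{fail})---\eg because the message is lost---its continuation is instantiated with the default value, \ie we obtain $ P\Set[]{ \Subst{\Args[v]_2}{\Args[y]} } $ in parallel with the remainder of the sender.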
If an optional block fails while another process is still waiting for a communication within its optional block, the latter optional block is doomed to fail. Note that the semantics of optional blocks is inherently synchronous, since an optional sending operation can realise the failing of its matching receiver (\eg by $ \POpt{\Role_1}{\Role_2}{\ldots\POptEnd{\Role_1}{\Args[ok]}}{\Args}{\Args[fail]}{P} $). Let $ \longmapsto^+ $ denote the transitive closure of $ \longmapsto $ and let $ \longmapsto^* $ denote the reflexive and transitive closure of $ \longmapsto $, respectively. \subsubsection{Reaching Consensus Despite Crash Failures} To illustrate the semantics of optional blocks and our implementation of the rotating coordinator algorithm (Example~\ref{exa:PRC}), we present one execution for the case of $ n = 3 $. Assume that $ \Args[v]_{1, 0} = 0 $, $ \Args[v]_{2, 0} = 1 = \Args[v]_{3, 0} $, and, since the coordinator does not update its value, $ \Args[v]_{i, i} = \Args[v]_{i, i-1} $ for all $ 1 \leq i \leq 3 $. Moreover, assume that participant~$ 1 $ crashes in round~$ 1 $ after delivering its value to participant~$ 3 $ but before participant~$ 2 $ obtains the value. \begin{align*} \PRC{3} &= \PPar{\left( \PPar{\POutS{\Chan_1}{\Chan[s]}}{\PInp{\Chan_1}{\Chan[s]}{\PIN{1}{3}}} \right)}{\PPar{\left( \PPar{\POutS{\Chan_2}{\Chan[s]}}{\PInp{\Chan_2}{\Chan[s]}{\PIN{2}{3}}} \right)}{\left( \PPar{\POutS{\Chan_3}{\Chan[s]}}{\PInp{\Chan_3}{\Chan[s]}{\PIN{3}{3}}} \right)}}\\ &\longmapsto^3 \PPar{\PIN{1}{3}}{\PPar{\PIN{2}{3}}{\PIN{3}{3}}} \end{align*} where \begin{align*} \PIN{i}{n} ={} & ( \bigodot_{j = 1..(i{-}1)} \POptS{\Role[p]_i}{\Role[p]_i, \Role[p]_j}{P_{j \to i, \downarrow}}{\Args[v]_{i, j}}{\Args[v]_{i, j{-}1}} ).\\ & ( \prod_{j = 1..n, j \neq i} \POptNVS{\Role[p]_i}{\Role[p]_i, \Role[p]_j}{P_{i \to j, \uparrow}} \PPar{}{( \bigodot_{j = (i + 1)..n} \POptS{\Role[p]_i}{\Role[p]_i, \Role[p]_j}{P_{j \to i, \downarrow}}{\Args[v]_{i, j}}{\Args[v]_{i, j{-}1}} ))}\\ P_{j \to i, \downarrow} ={} & \PGet{\Chan[s]}{\Role[p]_j}{\Role[p]_i}{\PLab{\Labe[c]}{\Args[v]_{j, j{-}1}}{\POptEnd{\Role[p]_i}{\Args[v]_{j, j{-}1}}}}\\ P_{i \to j, \uparrow} ={} & \PSend{\Chan[s]}{\Role[p]_i}{\Role[p]_j}{\Labe[c]}{\Args[v]_{i, i{-}1}}{\POptEnd{\Role[p]_i}{\cdot}} \end{align*} The first three steps initialise the session using three times Rule~$ (\mathsf{comC}) $. We assume here that these steps belong to the environment and do never fail. If one of these steps fails, the respective participant does not know the global session channel and thus cannot participate in the algorithm, \ie is crashed from the beginning. After the initialisation all participants consist of sequential and parallel optional blocks. Each of these optional blocks can fail any time. The coordinator of the first round $ \Role[p]_1 $ can transmit its value to the other two participants. Since the respective two blocks of the sender and each of the matching blocks of the two receivers are all in parallel, both communications can happen. Intuitively, by unguarding a sending operation, we can consider the message as already being emitted by the sender. It remains syntactically present until the receiver captures it to complete the transmission. Accordingly, $ \Role[p]_1 $ directly moves to round~$ 2 $. 
\begin{align*} \longmapsto^3 \PPar{\PIN{1}{3}'}{\PPar{\PIN{2}{3}}{\PIN{3}{3}'}} \end{align*} where \begin{align*} \PIN{1}{3}' &= \PPar{\POptNVS{\Role[p]_1}{\Role[p]_1, \Role[p]_2}{P_{1 \to 2, \uparrow}}}{\POpt{\Role[p]_1}{\Role[p]_1, \Role[p]_2}{P_{2 \to 1, \downarrow}}{\Args[v]_{1, 2}}{0}{\POptS{\Role[p]_1}{\Role[p]_1, \Role[p]_3}{P_{3 \to 1, \downarrow}}{\Args[v]_{1, 3}}{\Args[v]_{1, 2}}}}\\ \PIN{3}{3}' &= \POpt{\Role[p]_3}{\Role[p]_3, \Role[p]_2}{P_{2 \to 3, \downarrow}}{\Args[v]_{3, 2}}{0}{\left( \PPar{\POptNVS{\Role[p]_3}{\Role[p]_3, \Role[p]_1}{P_{3 \to 1, \uparrow}}}{\POptNVS{\Role[p]_3}{\Role[p]_3, \Role[p]_2}{P_{3 \to 2, \uparrow}}} \right)} \end{align*} Next $ \Role[p]_3 $ receives the value~$ 0 $ from $ \Role[p]_1 $ using Rule~$ (\mathsf{cSO}) $. Then both optional blocks that participate in this communication are completed successfully using Rule~$ (\mathsf{succ}) $ such that $ \Role[p]_3 $ updates its current value to the received $ 0 $. This completes the first round for $ \Role[p]_3 $ and it moves to round~$ 2 $. The remainder of $ \Role[p]_1 $, \ie $ \PIN{1}{3}' = \PPar{\POptNVS{\Role[p]_1}{\Role[p]_1, \Role[p]_2}{\ldots}}{\POpt{\Role[p]_1}{\Role[p]_1, \Role[p]_2}{\ldots}{\Args[v]_{1, 2}}{0}{\ldots}} $, consists of the remaining optional block towards $ \Role[p]_2 $ in parallel with the optional block for round~$ 2 $ of $ \Role[p]_1 $ that guards the optional block for round~$ 3 $. $ \PIN{3}{3}' = \POpt{\Role[p]_3}{\Role[p]_3, \Role[p]_2}{\ldots}{\Args[v]_{3,2}}{0}{\ldots} $ is guarded by its optional block to receive in round~$ 2 $, where its current value $ \Args[v]_{3, 1} $ was instantiated with $ 0 $ received from $ \Role[p]_1 $. \begin{align*} \longmapsto^5 \PPar{\PIN{2}{3}'}{\PIN{3}{3}'} \end{align*} where \begin{align*} \PIN{2}{3}' = \PPar{\POptNVS{\Role[p]_2}{\Role[p]_2, \Role[p]_3}{P_{2 \to 3, \uparrow}}}{\POptS{\Role[p]_2}{\Role[p]_2, \Role[p]_3}{P_{3 \to 2, \downarrow}}{\Args[v]_{2, 3}}{1}} \end{align*} Now $ \Role[p]_1 $ crashes, \ie its three remaining optional blocks are removed using Rule~$ (\mathsf{fail}) $ three times. Since $ \Role[p]_2 $ cannot receive a value from $ \Role[p]_1 $ after that we also remove its optional block of round~$ 1 $ using the default value $ \Args[v]_{2, 0} = 1 $ to instantiate $ \Args[v]_{2, 1} = \Args[v]_{2, 2} $. With that $ \Role[p]_2 $ moves to round~$ 2 $, unguards its two optional blocks to transmit its current value, \ie $ \Args[v]_{2,0} = \Args[v]_{2, 1} = 1 $, and, by unguarding also its optional block of round~$ 3 $, directly moves forward to round~$ 3 $. Since $ \Role[p]_1 $ is crashed, the first block of $ \Role[p]_2 $ in round~$ 2 $ towards $ \Role[p]_1 $ is doomed to fail causing another application of Rule~$ (\mathsf{fail}) $. As result we obtain $ \PIN{2}{3}' = \PPar{\POptNVS{\Role[p]_2}{\Role[p]_2, \Role[p]_3}{\ldots}}{\POptS{\Role[p]_2}{\Role[p]_2, \Role[p]_3}{\ldots}{\Args[v]_{2,3}}{1}} $. \begin{align*} \longmapsto^3 \PPar{\PIN{2}{3}''}{\PIN{3}{3}''} \end{align*} where \begin{align*} \PIN{2}{3}'' &= \POptS{\Role[p]_2}{\Role[p]_2, \Role[p]_3}{P_{3 \to 2, \downarrow}}{\Args[v]_{2, 3}}{1}\\ \PIN{3}{3}'' &= \PPar{\POptNVS{\Role[p]_3}{\Role[p]_3, \Role[p]_1}{P_{3 \to 1, \uparrow}}}{\POptNVS{\Role[p]_3}{\Role[p]_3, \Role[p]_2}{P_{3 \to 2, \uparrow}}} \end{align*} $ \Role[p]_3 $ completes round~$ 2 $ by receiving the value $ 1 $ that was transmitted by $ \Role[p]_2 $ in round~$ 2 $ using Rule~$ (\mathsf{cSO}) $. 
After this communication the respective two optional blocks are resolved by Rule~$ (\mathsf{succ}) $, which also updates the current value of $ \Role[p]_3 $ to $ \Args[v]_{3, 2} = 1 $. With that $ \Role[p]_3 $ moves to round~$ 3 $. As a result we obtain $ \PIN{2}{3}'' = \POptS{\Role[p]_2}{\Role[p]_2, \Role[p]_3}{\ldots}{\Args[v]_{2,3}}{1} $ for $ \Role[p]_2 $ and for $ \Role[p]_3 $ we obtain $ \PIN{3}{3}'' = \PPar{\POptNVS{\Role[p]_3}{\Role[p]_3, \Role[p]_1}{\ldots}}{\POptNVS{\Role[p]_3}{\Role[p]_3, \Role[p]_2}{\ldots}} $. \begin{align*} \longmapsto^4 \PEnd \end{align*} Similarly, round~$ 3 $ is completed by the reception of $ 1 $ by $ \Role[p]_2 $ from $ \Role[p]_3 $ and two steps to resolve the optional blocks. Additionally the remaining block of $ \Role[p]_3 $ towards the crashed $ \Role[p]_1 $ is removed using Rule~$ (\mathsf{fail}) $. The last values of $ \Role[p]_2 $ and $ \Role[p]_3 $ were $ 1 $. Since $ \Role[p]_1 $ (which still holds $ 0 $) crashed, this solves Consensus (although we abstract from the outputs of the results). This example visualises how the rotating coordinator algorithm allows processes to reach Consensus despite crash failures. Please observe that, due to the asynchronous nature of the processes, rounds can overlap, \ie there are derivatives in which the participants are situated in different rounds. Overlapping rounds are an important property of round-based distributed algorithms and significantly complicate their analysis. Hence it is important to model them properly. We gain overlapping rounds by \begin{inparaenum}[(1)] \item placing the optional blocks of senders in parallel and \item (for the case of sub-sessions as visualised below) using the sub-sessions of \cite{DemangeonHonda12} that similarly place acceptance notifications (and thus the content of sub-sessions) and the continuation of this sub-session in parallel. \end{inparaenum} \subsubsection{Reaching Consensus with Sub-Sessions} To show that the overlapping of rounds for these cases is the same, we map the above reduction of Example~\ref{exa:PRC} onto Example~\ref{exa:PRCWSS}. In contrast to the first example, the second wraps each round within a sub-session. We start again with \begin{align*} \PRCB{3} \longmapsto^3{} & \PPar{\PIN{1}{3}}{\PPar{\PIN{2}{3}}{\PIN{3}{3}}} \end{align*} to initialise the parent session using three times Rule~$ (\mathsf{comC}) $. This unguards the first sub-session call that is performed by the first coordinator $ \Role[p]_1 $ to initialise a sub-session for round~$ 1 $. Accordingly, in the following four steps \begin{align*} \longmapsto^4{} & \PPar{\PINB{1}{3}}{\PPar{\PINB{2}{3}}{\PINB{3}{3}}} \end{align*} $ \Role[p]_1 $ calls the sub-session using Rule~$ (\mathsf{subs}) $ and unguards the corresponding three internal session invitations $ \PReqS{\Chan[s]}{\Role[p]_1}{\Role[p]_1}{\Role[src]}{\Chan[k]} $, $ \PReqS{\Chan[s]}{\Role[p]_1}{\Role[p]_2}{\Role[trg]_1}{\Chan[k]} $, and $ \PReqS{\Chan[s]}{\Role[p]_1}{\Role[p]_3}{\Role[trg]_2}{\Chan[k]} $ that are answered using three applications of Rule~$ (\mathsf{join}) $. Thereby, the acceptance notification $ \PEntS{\Chan[s]}{\Role[p]_1}{\Role[p]_1}{\Role[src]}{\Args} $ of $ \Role[p]_1 $ is unguarded by Rule~$ (\mathsf{subs}) $ and the other two acceptance notifications $ \PEntS{\Chan[s]}{\Role[p]_2}{\Role[p]_1}{\Role[trg]_1}{\Args} $ and $ \PEntS{\Chan[s]}{\Role[p]_3}{\Role[p]_1}{\Role[trg]_2}{\Args} $ are unguarded in the first three steps. Again $ \Role[p]_1 $ directly moves to round~$ 2 $.
The following three steps \begin{align*} \longmapsto^3{} & \PPar{\PIN[P'']{1}{3}}{\PPar{\PIN[P']{2}{3}}{\PIN[P'']{3}{3}}} \end{align*} are similar to the first example. $ \Role[p]_3 $ receives the value~$ 0 $ from $ \Role[p]_1 $ using Rule~$ (\mathsf{cSO}) $. Then both optional blocks that participate in this communication are completed successfully using Rule~$ (\mathsf{succ}) $ such that $ \Role[p]_3 $ updates its current value to the received $ 0 $. This completes the first round for $ \Role[p]_3 $ and it moves to round~$ 2 $. Now $ \Role[p]_1 $ crashes, \ie its three remaining optional blocks will be removed using Rule~$ (\mathsf{fail}) $ as soon as they are unguarded but $ \Role[p]_1 $ still answers session invitations. One optional block of $ \Role[p]_1 $ is already unguarded and thus removed. Since $ \Role[p]_2 $ cannot receive a value from $ \Role[p]_1 $ after that we also remove its optional block of round~$ 1 $. \begin{align*} \longmapsto^2{} & \PPar{\PIN[P''']{1}{3}}{\PPar{\PIN[P'']{2}{3}}{\PIN[P'']{3}{3}}} \end{align*} With that $ \Role[p]_2 $ moves to round~$ 2 $. Next $ \Role[p]_2 $ initialises the sub-session for round~$ 2 $ in the same way as $ \Role[p]_1 $ did for round~$ 1 $ and $ \Role[p]_1 $ removes the next optional block using Rule~$ (\mathsf{fail}) $ \begin{align*} \longmapsto^5{} & \PPar{\PIN[P'''']{1}{3}}{\PPar{\PIN[P''']{2}{3}}{\PIN[P''']{3}{3}}} \end{align*} The session initialisation unguards its two optional blocks of $ \Role[p]_2 $ as co-ordinator to transmit its current value, \ie $ \Args[v]_{2,0} = \Args[v]_{2, 1} = 1 $. $ \Role[p]_3 $ directly moves forward to round~$ 3 $. Since $ \Role[p]_1 $ is crashed, the first block of $ \Role[p]_2 $ in round~$ 2 $ towards $ \Role[p]_1 $ is doomed to fail causing another application of Rule~$ (\mathsf{fail}) $ \begin{align*} \longmapsto & \PPar{\PIN[P'''']{1}{3}}{\PPar{\PIN[P'''']{2}{3}}{\PIN[P''']{3}{3}}} \end{align*} $ \Role[p]_3 $ completes round~$ 2 $ by receiving the value $ 1 $ that was transmitted by $ \Role[p]_2 $ in round~$ 2 $ using Rule~$ (\mathsf{cSO}) $. After this communication the respective two optional blocks are resolved by Rule~$ (\mathsf{succ}) $ that also updates the current value of $ \Role[p]_3 $ to $ \Args[v]_{3,2} = 1 $ \begin{align*} \longmapsto^3 \PPar{\PIN[P'''']{1}{3}}{\PPar{\PIN[P''''']{2}{3}}{\PIN[P'''']{3}{3}}} \end{align*} With that $ \Role[p]_3 $ moves to round~$ 3 $. The last sub-session for round~$ 3 $ is initialised and $ \Role[p]_1 $ removes it last optional block \begin{align*} \longmapsto^5 \PPar{\PIN[P'''''']{2}{3}}{\PIN[P''''']{3}{3}} \end{align*} With that $ \Role[p]_1 $ is reduced to $ \PEnd $ and $ \Role[p]_3 $ finishes its last round. The last steps \begin{align*} \longmapsto^4 \PEnd \end{align*} are used to remove the optional block of $ \Role[p]_3 $ towards the crashed $ \Role[p]_1 $, to complete the reception of the value from $ \Role[p]_3 $ by $ \Role[p]_2 $, and to resolve the remaining optional blocks. The last values of $ \Role[p]_2 $ and $ \Role[p]_3 $ were $ 1 $. As above, since $ \Role[p]_1 $ (that still holds $ 0 $) crashed, this solves Consensus. \section{Well-Typed Processes} \label{sec:wellTypedness} In the Sections~\ref{sec:globalTypes} and \ref{sec:localTypes} we provided the types of our type system. Now we connect types with processes from Section~\ref{sec:calculus} by the notion of well-typedness. 
A process $ P $ is \emph{well-typed} if it satisfies a typing judgement of the form $ \Gamma \vdash P \triangleright \Delta $, \ie under the \emph{global environment}~$ \Gamma $, $ P $ is validated by the \emph{session environment}~$ \Delta $. We extend the environments defined in \cite{DemangeonHonda12} with an additional primitive for session environments. \begin{definition}[Environments] \begin{align*} \Gamma & \deffTerms \emptyset \sep \Gamma, \Typed{\Chan}{\AT{T}{\Role}} \sep \Gamma, \TypedProt{\Prot}{\tilde{\Role}_1}{\tilde{\Args[y]}}{\tilde{\Role}_2}{G} \sep \Gamma, \Typed{\Chan[s]}{G}\\ \Delta & \deffTerms \emptyset \sep \Delta, \Typed{\AT{\Chan[s]}{\Role}}{T} \sep \Delta, \Typed{\ATE{\Chan[s]}{\Role}}{T} \sep \Delta, \Typed{\ATI{\Chan[s]}{\Role}}{T} \sep \textcolor{blue}{\Delta, \Typed{\Role}{\OV{\tilde{\Sort}}}} \end{align*} \end{definition} \noindent The global environment $ \Gamma $ relates shared channels to the type of the invitation they carry, protocol names to their code, and session channels $ \Chan[s] $ to the global type $ G $ they implement. $ \Typed{\Chan}{\AT{T}{\Role}} $ means that $ \Chan $ is used to send and receive invitations to play role $ \Role $ with local type $ T $. In $ \TypedProt{\Prot}{\tilde{\Role}_1}{\tilde{\Args[y]}}{\tilde{\Role}_2}{G} $, $ \Prot $ is a protocol of the global type $ G $ with the internal (external) participants $ \tilde{\Role}_1 $ ($ \tilde{\Role}_2 $) and the arguments $ \tilde{\Args[y]} $. The session environment $ \Delta $ relates pairs of session channels $ \Chan[s] $ and roles $ \Role $ to local types $ T $. We use $ \Typed{\ATE{\Chan[s]}{\Role}}{T} $ ($ \Typed{\ATI{\Chan[s]}{\Role}}{T} $) to denote the capability to invite externally (internally) someone to play role $ \Role $ in $ \Chan[s] $. We add the declaration $ \Typed{\Role}{\OV{\tilde{\Sort}}} $ to cover the kinds of the return values of an optional block owned by $ \Role $. A session environment is \emph{closed} if it does not contain declarations $ \Typed{\Role}{\OV{\tilde{\Sort}}} $. We assume that session environments initially contain no declarations $ \Typed{\Role}{\OV{\tilde{\Sort}}} $, \ie that they are closed. Such declarations are introduced while typing the content of an optional block, whereby the typing rules ensure that an environment never contains more than one declaration $ \Typed{\Role}{\OV{\tilde{\Sort}}} $. Let $ \left( \Delta, \Typed{\AT{\Chan[s]}{\Role}}{\LTEnd} \right) = \Delta $. Let $ \AT{\Chan[s]}{\Role}^- $ denote either $ \ATE{\Chan[s]}{\Role} $ or $ \ATI{\Chan[s]}{\Role} $ or $ \AT{\Chan[s]}{\Role} $. If $ \AT{\Chan[s]}{\Role}^- $ does not appear in $ \Delta $, we write $ \GetType[\Delta]{\AT{\Chan[s]}{\Role}} = 0 $.
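To make this bookkeeping concrete, the following minimal Python sketch encodes a session environment as a finite map; the representation (string-valued local types, a \texttt{cap} tag for the two invitation capabilities, and a single optional return-kind entry) is our own simplification for illustration and not part of the formal definition.
\begin{verbatim}
from typing import Optional, Tuple

END = "end"  # the terminated local type

class SessionEnv:
    """Sketch of a session environment Delta (simplified encoding).

    Entries map (session channel, role, capability) to a local type,
    where capability is 'use', 'ext' (invite externally) or 'int'
    (invite internally).  At most one entry records the return kinds
    of the enclosing optional block (the declaration r : S_1, ..., S_k)."""

    def __init__(self) -> None:
        self.types = {}  # (s, role, cap) -> local type
        self.return_kinds: Optional[Tuple[str, tuple]] = None  # (role, kinds)

    def add(self, s, role, local_type, cap="use"):
        # (Delta, s[r] : end) = Delta, so end-typed entries are dropped.
        if local_type != END:
            self.types[(s, role, cap)] = local_type

    def lookup(self, s, role):
        """Returns 0 if s[r] occurs in no capability of Delta."""
        for cap in ("use", "ext", "int"):
            if (s, role, cap) in self.types:
                return self.types[(s, role, cap)]
        return 0

    def is_closed(self):
        """Closed environments carry no return-kind declaration."""
        return self.return_kinds is None
\end{verbatim}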
Following \cite{DemangeonHonda12} we assume an operator $ \otimes $ such that \begin{enumerate}[(1)] \item $ \Delta \otimes \emptyset = \Delta $, \item $ \Delta_1 \otimes \Delta_2 = \Delta_2 \otimes \Delta_1 $, \item $ \Delta_1 \otimes \left( \Delta_2, \Typed{\Role}{\OV{\tilde{\Sort}}} \right) = \left( \Delta_1, \Typed{\Role}{\OV{\tilde{\Sort}}} \right) \otimes \Delta_2 $, \item $ \Delta_1 \otimes \left( \Delta_2, \Typed{\AT{\Chan[s]}{\Role}^-}{T} \right) = \left( \Delta_1, \Typed{\AT{\Chan[s]}{\Role}^-}{T} \right) \otimes \Delta_2 $ if $ \GetType[\Delta_1]{\AT{\Chan[s]}{\Role}} = 0 = \GetType[\Delta_2]{\AT{\Chan[s]}{\Role}} $, and \item $ \left( \Delta_1, \Typed{\AT{\Chan[s]}{\Role}^-}{T_1} \right) \otimes \left( \Delta_2, \Typed{\AT{\Chan[s]}{\Role}^-}{T_2} \right) = \left( \Delta_1, \Typed{\AT{\Chan[s]}{\Role}^-}{\LTPar{T_1}{T_2}} \right) \otimes \Delta_2 $. \end{enumerate} Thus $ \otimes $ allows to split parallel parts of local types. We write $ \vdash \Args[v] : \Sort $ if value $ \Args[v] $ is of kind $ \Sort $. \begin{figure*} \caption{Typing Rules} \label{fig:typingRules} \end{figure*} In Figure~\ref{fig:typingRules} we extend the typing rules of \cite{DemangeonHonda12} with the Rules~(\textsf{Opt}) and (\textsf{OptE}) for optional blocks. (\textsf{Opt}) ensures that \begin{inparaenum}[(1)] \item the process and the local type specify the same set of roles $ \tilde{\Role} \ \dot{=} \ \tilde{\Role}' $ as participants of the optional block, \item the kinds of the default values $ \tilde{\Args[v]} $, the arguments $ \tilde{\Args} $ of the continuation $ P' $, and the respective variables $ \tilde{\Args[y]} $ in the local type coincide, \item the continuation $ P' $ is well-typed \wrt the part $ \Delta' $ of the current session environment and the remainder $ T' $ of the local type of $ \AT{\Chan[s]}{\Role_1} $, \item the content $ P $ of the block is well-typed \wrt the session environment $ \Delta, \Typed{\AT{\Chan[s]}{\Role_1}}{T}, \Typed{\Role_1}{\OV{\tilde{\Sort}}} $, where $ \Typed{\Role_1}{\OV{\tilde{\Sort}}} $ ensures that $ P $ computes return values of the kinds $ \tilde{\Sort} $ if no failure occurs, and \item the return values of a surrounding optional block cannot be returned in a nested block, because of the condition $ \nexists \Role'', \tilde{\Sort[K]} \logdot \Typed{\Role''}{\OV{\tilde{\Sort[K]}}} \in \Delta $. \end{inparaenum} (\textsf{OptE}) ensures that the kinds of the values computed by a successful completion of an optional block match the kinds of the respective default values. Apart from that this rule is similar to (\textsf{N}) in Figure~\ref{fig:typingRules}. Since (\textsf{OptE}) is the only way to consume an instance of $ \Typed{\Role}{\OV{\tilde{\Sort}}} $, this rule checks that---ignoring the possibility to fail---the content of an optional block reduces to $ \POptEnd{\Role}{\tilde{\Args[v]}} $, if the corresponding local type requires it to do so. Combining these rules, (\textsf{Opt}) introduces exactly one occurrence of $ \Typed{\Role}{\OV{\tilde{\Sort}}} $ in the session environment, the function $ \otimes $ in (\textsf{Pa}) for parallel processes in Figure~\ref{fig:typingRules} ensures that this occurrence reaches exactly one of the parallel branches of the content of the optional block, and finally only (\textsf{OptE}) allows to terminate a branch with this occurrence. 
This ensures that---ignoring the possibility to fail---each block computes exactly one vector of return values $ \POptEnd{\Role}{\tilde{\Args[v]}} $ (or, more precisely, one such vector for each choice-branch). For an explanation of the remaining rules we refer to \cite{BettiniAtall08,BocciAtall10} and \cite{DemangeonHonda12}. Instead we present the derivation of the type judgements of some examples starting with Example~\ref{exa:PRC}. Applying these typing rules is elaborate but straightforward and can be automated easily, since for all processes except choice exactly one rule applies and all parameters except for restriction are determined by the respective process and the given type environments. Thus, the number of different proof-trees is determined by the number of choices and the type of restricted channels can be derived using back-tracking. \subsection{Our Implementation of the Rotating Co-ordinators is Well-Typed} When testing the type of a process, the first step is to choose a suitable global and local environment for the type judgement. The global environment initially contains \begin{inparaenum}[(1)] \item the channels for the invitations to the session to that the projection of the global type on the respective participant is assigned, \item the session channel to that the complete global type is assigned, and \item the global types of all sub-sessions. \end{inparaenum} For Example~\ref{exa:PRC} this means \begin{align*} \Gamma = \Typed{\Chan_1}{\ProjS{\GRC{n}}{}{\Role[p]_1}}, \ldots, \Typed{\Chan_n}{\ProjS{\GRC{n}}{}{\Role[p]_n}}, \Typed{\Chan[s]}{\GRC{n}} \end{align*} where Example~\ref{exa:GTRC} provides $ \GRC{n} $ and its projection $ \ProjS{\GRC{n}}{}{\Role[p]_i} $ is given in Example~\ref{exa:LTRC}. The session environment initially maps the session channel for each participant to the local type of the respective participant. Here we have: \begin{align*} \Delta = \Typed{\ATE{\Chan[s]}{\Role[p]_1}}{\ProjS{\GRC{n}}{}{\Role[p]_1}}, \ldots, \Typed{\ATE{\Chan[s]}{\Role[p]_n}}{\ProjS{\GRC{n}}{}{\Role[p]_n}} \end{align*} Notice that $ \Delta $ is closed. We have to prove $ \Gamma \vdash \PRC{n} \triangleright \Delta $, where $ \PRC{n} $ is given in Example~\ref{exa:PRC}. First we apply Rule~$ (\mathsf{Pa}) $ $ n $ times to separate the $ n $ participants $ \PPar{\POutS{\Chan_i}{\Chan[s]}}{\PInp{\Chan_i}{\Chan[s]}{\PIN{i}{n}}} $, whereby we split $ \Delta $ into $ \Delta_1 \otimes \ldots \otimes \Delta_n $ with $ \Delta_i = \Typed{\ATE{\Chan[s]}{\Role[p]_i}}{\ProjS{\GRC{n}}{}{\Role[p]_i}} $. For each participant we separate the output $ \POutS{\Chan_i}{\Chan[s]} $ and $ \PInp{\Chan_i}{\Chan[s]}{\PIN{i}{n}} $ by another application of $ (\mathsf{Pa}) $. \begin{align*} \dfrac{\dfrac{}{\Gamma \vdash \PEnd \triangleright \emptyset} (\mathsf{N}) \quad \Gamma(\Chan_i) = \ProjS{\GRC{n}}{}{\Role[p]_i}}{\Gamma \vdash \POutS{\Chan_i}{\Chan[s]} \triangleright \Typed{\ATE{\Chan[s]}{\Role[p]_i}}{\ProjS{\GRC{n}}{}{\Role[p]_i}}} (\mathsf{O}) \end{align*} \begin{align*} \dfrac{\Gamma \vdash \PIN{i}{n} \triangleright \Typed{\AT{\Chan[s]}{\Role[p]_i}}{\ProjS{\GRC{n}}{}{\Role[p]_i}} \quad \Gamma(\Chan_i) = \ProjS{\GRC{n}}{}{\Role[p]_i}}{\Gamma \vdash \PInp{\Chan_i}{\Chan[s]}{\PIN{i}{n}} \triangleright \emptyset} (\mathsf{I}) \end{align*} It remains to show that $ \Gamma \vdash \PIN{i}{n} \triangleright \Typed{\AT{\Chan[s]}{\Role[p]_i}}{\ProjS{\GRC{n}}{}{\Role[p]_i}} $. 
$ \PIN{i}{n} $ starts with $ i{-}1 $ sequentially composed optional blocks $ \POptS{\Role[p]_i}{\Role[p]_i, \Role[p]_j}{P_{j \to i, \downarrow}}{\Args[v]_{i, j}}{\Args[v]_{i, j{-}1}} $ and similarly $ \ProjS{\GRC{n}}{}{\Role[p]_i} $ with $ i{-}1 $ sequential local types $ \LULT{\Role[p]_j}{\Args[v]_{j, j{-}1}}{\Role[p]_i}{\Args[v]_{i, j{-}1}} = \LTOptS{\Role[p]_i, \Role[p]_j}{\LTGet{\Role[p]_j}{\LTLabS{\Labe[c]}{\Typed{\Args[v]_{j, j{-}1}}{\Sort[V]}}}}{\Typed{\Args[v]_{i, j{-}1}}{\Sort[V]}} $. For each of these blocks we apply Rule~$ (\mathsf{Opt}) $ and have to show: \begin{enumerate} \item $ \Gamma \vdash P_{j \to i, \downarrow} \triangleright \Typed{\AT{\Chan[s]}{\Role[p]_i}}{\LTGet{\Role[p]_j}{\LTLabS{\Labe[c]}{\Typed{\Args[v]_{j, j{-}1}}{\Sort[V]}}}}, \Typed{\Role[p]_i}{\OV{\Sort[V]}} $ \item $ \Gamma \vdash P' \triangleright \Typed{\AT{\Chan[s]}{\Role[p]_i}}{T'} $, where $ P' $ is the continuation of the optional block and $ T' $ the continuation of the local type \item $ \vdash \Typed{\Args[v]_{i, j}}{\Sort[V]} $ and $ \vdash \Typed{\Args[v]_{i, j{-}1}}{\Sort[V]} $ \end{enumerate} The third condition checks whether the variable $ \Args[v]_{i, j} $ and the default value $ \Args[v]_{i, j{-}1} $ of the optional block are values and of the same type as the value $ \Typed{\Args[v]_{i, j{-}1}}{\Sort[V]} $ of the local type. Since all $ \Args[v]_{k, l} $ are of kind $ \Sort[V] $, this condition is satisfied. The first condition checks the type of the content of the optional block, where $ P_{j \to i, \downarrow} = \PGet{\Chan[s]}{\Role[p]_j}{\Role[p]_i}{\PLab{\Labe[c]}{\Args[v]_{j, j{-}1}}{\POptEnd{\Role[p]_i}{\Args[v]_{j, j{-}1}}}} $ and \begin{align*} \dfrac{\dfrac{\vdash \Typed{\Args[v]_{j, j{-}1}}{\Sort[V]}}{\Gamma \vdash \POptEnd{\Role[p]_i}{\Args[v]_{j, j{-}1}} \triangleright \Typed{\Role[p]_i}{\OV{\Sort[V]}}} (\mathsf{OptE}) \quad \vdash \Typed{\Args[v]_{j, j{-}1}}{\Sort[V]}}{\Gamma \vdash P_{j \to i, \downarrow} \triangleright \Typed{\AT{\Chan[s]}{\Role[p]_i}}{\LTGet{\Role[p]_j}{\LTLabS{\Labe[c]}{\Typed{\Args[v]_{j, j{-}1}}{\Sort[V]}}}}, \Typed{\Role[p]_i}{\OV{\Sort[V]}}} (\mathsf{C}) \end{align*} The second condition refers to the respective next part of the process and the local type. After the first $ i{-}1 $ sequential optional blocks, $ P' $ is the parallel composition of $ n{-}1 $ optional blocks $ \POptNVS{\Role[p]_i}{\Role[p]_i, \Role[p]_j}{P_{i \to j, \uparrow}} $ and the remaining sequential blocks: \begin{align*} P'' = \bigodot_{j = (i + 1)..n} \POptS{\Role[p]_i}{\Role[p]_i, \Role[p]_j}{P_{j \to i, \downarrow}}{\Args[v]_{i, j}}{\Args[v]_{i, j{-}1}} \end{align*} Similarly, $ T' $ is the parallel composition of the $ \LULS{\Role[p]_i}{\Args[v]_{i, i{-}1}}{\Role[p]_j} $ and the remaining (sequentially composed) $ \LULT{\Role[p]_j}{\Args[v]_{j, j{-}1}}{\Role[p]_i}{\Args[v]_{i, j{-}1}} $. We use $ (\mathsf{Pa}) $ to separate the parallel components in both the process and the local type. Thus we have to show $ \Gamma \vdash \POptNVS{\Role[p]_i}{\Role[p]_i, \Role[p]_j}{P_{i \to j, \uparrow}} \triangleright \Typed{\AT{\Chan[s]}{\Role[p]_i}}{\LULS{\Role[p]_i}{\Args[v]_{i, i{-}1}}{\Role[p]_j}} $ for $ j = 1..n, i \neq j $ and $ \Gamma \vdash P'' \triangleright \Typed{\AT{\Chan[s]}{\Role[p]_i}}{\LULT{\Role[p]_j}{\Args[v]_{j, j{-}1}}{\Role[p]_i}{\Args[v]_{i, j{-}1}}.T''} $. The proof of the last typing judgement for the $ n{-}i $ last sequential blocks is very similar to the proof for the first $ i{-}1 $ sequential blocks with an application of Rule~$ (\mathsf{N}) $ in the end. 
For each $ j \in \Set[]{ 1, \ldots, i{-}1, i + 1, \ldots, n } $ we apply Rule~$ (\mathsf{Opt}) $ and have to show: \begin{enumerate} \item $ \Gamma \vdash P_{i \to j, \uparrow} \triangleright \Typed{\AT{\Chan[s]}{\Role[p]_i}}{\LTSend{\Role[p]_j}{\LTLabS{\Labe[c]}{\Typed{\Args[v]_{i, i{-}1}}{\Sort[V]}}}}, \Typed{\Role[p]_i}{\OV{\cdot}} $ \item $ \Gamma \vdash \PEnd \triangleright \emptyset $ \end{enumerate} There are no default values and thus the conditions $ \vdash \Typed{\tilde{\Args}}{\tilde{\Sort}} $ and $ \vdash \Typed{\tilde{\Args[v]}}{\tilde{\Sort}} $ of Rule~$ (\mathsf{Opt}) $ hold trivially. The second condition, for the continuation of the optional blocks, is in all cases of $ j $ the same and follows from Rule~$ (\mathsf{N}) $. For the first condition we have $ P_{i \to j, \uparrow} = \PSend{\Chan[s]}{\Role[p]_i}{\Role[p]_j}{\Labe[c]}{\Args[v]_{i, i{-}1}}{\POptEnd{\Role[p]_i}{\cdot}} $ and thus \begin{align*} \dfrac{\dfrac{\vdash \Typed{\cdot}{\cdot}}{\Gamma \vdash \POptEnd{\Role[p]_i}{\cdot} \triangleright \Typed{\Role[p]_i}{\OV{\cdot}}} (\mathsf{OptE}) \quad \vdash \Typed{\Args[v]_{i, i{-}1}}{\Sort[V]}}{\Gamma \vdash \PSend{\Chan[s]}{\Role[p]_i}{\Role[p]_j}{\Labe[c]}{\Args[v]_{i, i{-}1}}{\POptEnd{\Role[p]_i}{\cdot}} \triangleright \Typed{\AT{\Chan[s]}{\Role[p]_i}}{\LTSend{\Role[p]_j}{\LTLabS{\Labe[c]}{\Typed{\Args[v]_{i, i{-}1}}{\Sort[V]}}}}, \Typed{\Role[p]_i}{\OV{\cdot}}} (\mathsf{S}) \end{align*} \subsection{Our Implementation with Sub-Sessions is Well-Typed} To check the type of $ \PRCB{n} $ of Example~\ref{exa:PRCWSS} we need to add the type of the protocol to the global environment used for $ \PRC{n} $: \begin{align*} \Gamma = \Typed{\Chan_1}{\ProjS{\GRCB{n}}{}{\Role[p]_1}}, \ldots, \Typed{\Chan_n}{\ProjS{\GRCB{n}}{}{\Role[p]_n}}, \Typed{\Chan[s]}{\GRCB{n}}, \TypedProt{\Prot[R]_n}{\Role[scr], \tilde{\Role[trg]}}{\Args[v]_{\Role[scr]}}{\cdot}{\GR{n}} \end{align*} The session environment initially is the same as for the first example: \begin{align*} \Delta = \Typed{\ATE{\Chan[s]}{\Role[p]_1}}{\ProjS{\GRCB{n}}{}{\Role[p]_1}}, \ldots, \Typed{\ATE{\Chan[s]}{\Role[p]_n}}{\ProjS{\GRCB{n}}{}{\Role[p]_n}} \end{align*} Again $ \Delta $ is closed. We have to prove $ \Gamma \vdash \PRCB{n} \triangleright \Delta $. Also this type derivation is very similar to our first example. We provide one derivation for the sub-session of a round, to demonstrate the additional steps. 
\begin{align*} \dfrac{\begin{array}{c} \Gamma \vdash P' \triangleright \Typed{\AT{\Chan[s]}{\Role[p]_i}}{T}, \Delta_k \quad \GetType{\Prot[R]_n} = \TypedProt{\Prot[R]_n}{\Role[scr], \tilde{\Role[trg]}}{\Args[v]_{\Role[scr]}}{\cdot}{\GR{n}}\\ \ProjS{\GR{n}\!\Set[]{ \Subst{\Args[v]_{i, i - 1}}{\Args[v]_{\Role[scr]}} }}{}{\Role[scr]} = T_{\Role[scr]} \quad \forall j < n \logdot \ProjS{\GR{n}\!\Set[]{ \Subst{\Args[v]_{i, i - 1}}{\Args[v]_{\Role[scr]}} }}{}{\Role[trg]_j} = T_{\Role[trg], j}\\ \vdash \Typed{\Args[v]_{i, i - 1}}{\Sort[V]} \quad \GetType{\Chan[k]} = \GR{n}\!\Set[]{ \Subst{\Args[v]_{i, i - 1}}{\Args[v]_{\Role[scr]}} } \end{array}}{\Gamma \vdash \PDecl{\Chan[k]}{\Chan[s]}{\Args[v]_{i, i - 1}}{\cdot}{\cdot}{P'} \triangleright \; \Typed{\AT{\Chan[s]}{\Role[p]_i}}{\LTCall{\Prot[R]_n}{\GR{n}}{\Args[v]_{i, i - 1}}{\Typed{\Args[v]_{\Role[scr]}}{\!\Sort[V]}}{\cdot}{T}}}(\mathsf{New}) \end{align*} and \begin{align*} \Delta_k ={} & \Typed{\ATI{\Chan[k]}{\Role[scr]}}{\LTSend{\Role[trg]}{\LTLabS{\Labe[bc]}{\Args[v]_{i, i - 1}}}}, \Typed{\ATI{\Chan[k]}{\Role[trg]_1}}{\LTGet{\Role[scr]}{\LTLabS{\Labe[bc]}{\Args[v]_{i, i - 1}}}}, \ldots, \Typed{\ATI{\Chan[k]}{\Role[trg]_{i - 1}}}{\LTGet{\Role[scr]}{\LTLabS{\Labe[bc]}{\Args[v]_{i, i - 1}}}} \end{align*} Then we have to show that $ \Gamma \vdash P' \triangleright \Typed{\AT{\Chan[s]}{\Role[p]_i}}{T}, \Delta_k $. The internal session invitations within $ P' $ are handled by the Rules~$ (\mathsf{P}) $ and $ (\mathsf{J}) $ similar to \begin{align*} \dfrac{\dfrac{}{\Gamma \vdash \PEnd \triangleright \emptyset}(\mathsf{N}) \quad \GetType{\Prot[R]_n} = \TypedProt{\Prot[R]_n}{\Role[scr], \tilde{\Role[trg]}}{\Args[v]_{\Role[scr]}}{\cdot}{\GR{n}} \quad \ProjS{\GR{n}\!\Set[]{ \Subst{\Args[v]_{i, i - 1}}{\Args[v]_{\Role[scr]}} }}{}{\Role[scr]} = T_{\Role[scr]}}{\Gamma'' \vdash \PReqS{\Chan[s]}{\Role[p]_i}{\Role[p]_i}{\Role[scr]}{\Chan[k]} \triangleright \Typed{\AT{\Chan[s]}{\Role[p]_i}}{\LTReqS{\Prot[R]_n}{\Role[src]}{\Args[v]_{i, i - 1}}{\Role[p]_i}}, \Typed{\ATI{\Chan[k]}{\Role[scr]}}{T_{\Role[scr]}}} (\mathsf{P}) \end{align*} and \begin{align*} \dfrac{\Gamma \vdash P'' \triangleright \Typed{\AT{\Args[z]}{\Role[scr]}}{T_{\Role[scr]}'} \quad \GetType{\Prot[R]_n} = \TypedProt{\Prot[R]_n}{\Role[scr], \tilde{\Role[trg]}}{\Args[v]_{\Role[scr]}}{\cdot}{\GR{n}} \quad \ProjS{\GR{n}\!\Set[]{ \Subst{\Args[v]_{i, i}}{\Args[v]_{\Role[scr]}} }}{}{\Role[scr]} = T_{\Role[scr]}'}{\Gamma \vdash \PEnt{\Chan[s]}{\Role[p]_i}{\Role[p]_i}{\Role[scr]}{\Args[z]}{P''} \triangleright \Typed{\AT{\Chan[s]}{\Role[p]_i}}{\LTEnt{\Prot[R]_n}{\Role[scr]}{\Args[v]_{i, i - 1}}{\Role[p]_i}{T_{\Role[scr]}'}}} (\mathsf{J}) \end{align*} for the coordinator inviting himself. \section{An Example with Sub-Sessions within Optional Blocks} \label{sec:ExaSubSessionsWithinOptionalBlocks} We present a third example---again a variant of the rotating coordinator algorithm in Example~\ref{exa:RCAlgorithm}---to demonstrate the use of sub-sessions within optional blocks. 
\subsection{Global and Local Types} \begin{example}[Global Type for Rotating Coordinators] \label{exa:globalType} \begin{align*} \GRC{n} ={} & \GTDecl{\Prot[C]}{\Role[scr],\Role[trg]}{\Typed{\Args[val]}{\Sort[V]}}{\cdot}{G_{\Prot[C]}}{} \bigodot_{i=1..n} G_{\text{Round}}(i, n)\\ G_{\Prot[C]} ={} & \GTCom{\Role[scr]}{\Role[trg]}{\GTInpS{\Labe[bc]}{\Typed{\Args[val]}{\Sort[V]}}}\\ G_{\text{Round}}(i, n) ={} & \bigodot_{j = 1..n, \; j \neq i} {\GTOptBlS{\Role[p]_i, \cdot, \Role[p]_j, \Typed{\Args[v]_{j, i - 1}}{\Sort[V]}}{\left( \GTCallS{\Role[p]_i}{\Prot[C]}{\Role[p]_i, \Role[p]_j}{\Args[v]_{i, i - 1}} \right)}} \end{align*} \end{example} \noindent $ \GRC{n} $ first declares a sub-protocol and then performs the $ n $ rounds of the algorithm sequentially. The sub-protocol $ G_{\Prot[C]} $, identified with $ \Prot[C] $, specifies a single communication as part of the broadcast in Line~4 of Example~\ref{exa:RCAlgorithm}. This communication step covers the transmission of the value $ \Args[val] $ (under the label $ \Labe[bc] $ for broadcast) from $ \Role[scr] $ to $ \Role[trg] $. In each round the current coordinator participant $ \Role[p]_i $ calls this sub-protocol sequentially $ n - 1 $ times, in order to transmit its current value $ \Args[v]_{i, i - 1} $ to all other participants. To simulate link failures we capture each communication of the broadcast in a single optional block. Here the sender $ \Role[p]_i $ does not need to specify a default value, whereas the continuation of the receiver $ \Role[p]_j $ uses its last known value $ \Args[v]_{j, i - 1} $ if the communication fails. Since global types describe a global point of view, the communication steps modelled above also cover the reception of values in Line~5. For simplicity we omit the Lines~1 and 7 from our consideration. Restricting $ \GRC{n} $ on participant~$ i $ reduces all rounds~$ j $ (except for round~$ j = i $) to the single communication step of round~$ j $ in that participant~$ i $ receives a value. Accordingly, participant~$ i $ receives $ i - 1 $ times a value in $ i - 1 $ rounds, then broadcasts its current value to all other participants (modelled by $ n - 1 $ single communication steps), and then receives $ n - i $ times a value in the remaining $ n - i $ rounds. 
\begin{example}[Restriction on Participant~$ i $] \label{exa:restriction} \begin{align*} \RestS{\GRC{n}}{\Role[p]_i} ={} & \GTDecl{\Prot[C]}{\Role[scr],\Role[trg]}{\Typed{\Args[val]}{\!\Sort[V]}}{\cdot}{G_{\Prot[C]}}{\!\RestS{G'}{\Role[p]_i}}\\ \RestS{G'}{\Role[p]_i} ={} & \left( \bigodot_{j = 1..(i - 1)} \GTOptBlS{\Role[p]_j, \cdot, \Role[p]_i, \Typed{\Args[v]_{i, j - 1}}{\!\Sort[V]}}{\left( \GTCallS{\Role[p]_j}{\Prot[C]}{\Role[p]_j, \Role[p]_i}{\Args[v]_{j, j - 1}} \right)} \right).G_{\text{Round}}(i, n).\\ & \bigodot_{j = (i + 1)..n} \GTOptBlS{\Role[p]_j, \cdot, \Role[p]_i, \Typed{\Args[v]_{i, j - 1}}{\!\Sort[V]}}{\left( \GTCallS{\Role[p]_j}{\Prot[C]}{\Role[p]_j, \Role[p]_i}{\Args[v]_{j, j - 1}} \right)} \end{align*} \end{example} \begin{example}[Projection to the Local Type of Participant~$ i $] \label{exa:localType} \begin{align*} \ProjS{G_{\text{RC}}(n)}{}{\Role[p]_i} ={} & \ProjS{( \bigodot_{i=1..n} G_{\text{Round}}(i, n) )}{\PDec{\Prot[C]}{\Role[scr],\Role[trg]}{\Typed{\Args[val]\,}{\Sort[V]}}{\cdot}{G_{\Prot[C]}}}{\Role[p]_i}\\ ={} & \ProjS{( \RestS{G'}{\Role[p]_i} )}{\PDec{\Prot[C]}{\Role[scr],\Role[trg]}{\Typed{\Args[val]\,}{\Sort[V]}}{\cdot}{G_{\Prot[C]}}}{\Role[p]_i}\\ ={} & ( \bigodot_{j = 1..(i - 1)} \LTOptS{\Role[p]_i, \Role[p]_j}{T_{j \to i}}{\Typed{\Args[v]_{i, j - 1}}{\Sort[V]}} ).\\ & ( \prod_{j = 1..n, j \neq i} \LTOptS{\Role[p]_i, \Role[p]_j}{T_{i \text{ calls } j}}{\cdot} \LTPar{}{( \bigodot_{j = (i + 1)..n} \LTOptS{\Role[p]_i, \Role[p]_j}{T_{j \to i}}{\Typed{\Args[v]_{i, j - 1}}{\Sort[V]}} )} )\\ T_{j \to i} ={} & \LTEntS{\Prot[C]}{\Role[trg]}{\Args[v]_{j, j - 1}}{\Role[p]_j}\\ T_{i \text{ calls } j} ={} & \LTCall{\Prot[C]}{G_{\Prot[C]}}{\Args[v]_{i, i -1}}{\Typed{\Args[val]}{\!\Sort[V]}}{\cdot}{T_{i \to j}}\\ T_{i \to j} ={} & \LTPar{\LTReqS{\Prot[C]}{\Role[src]}{\Args[v]_{i, i - 1}}{\!\Role[p]_i}}{\LTReqS{\Prot[C]}{\Role[trg]}{\Args[v]_{i, i - 1}}{\!\Role[p]_j}} \LTPar{}{\LTEntS{\Prot[C]}{\Role[scr]}{\Args[v]_{i, i - 1}}{\Role[p]_i}} \end{align*} \end{example} \noindent To project the global type $ G_{\text{RC}}(n) $ on the local type of participant~$ i $ we first add the information about the declaration of the protocol $ \Prot[C] $ to the environment and then project the $ n $ rounds. Since the projection of optional blocks to a role that does not participate in that optional block simply removes the respective block, the projection of the $ n $ rounds on the local type of participant~$ i $ is the same as the projection of the restriction $ \Rest{G'}{\Role[p]_i} $ (see Example~\ref{exa:restriction}) on the local type of participant~$ i $. Accordingly, participant~$ i $ is $ i - 1 $ times the target $ \Role[trg] $ of the protocol $ \Prot[C] $ (\cf $ T_{j \to i} $), \ie it optionally receives $ i - 1 $ values, then initiates round~$ i $ (\cf $ T_{i \text{ calls } j} $) and broadcasts its current value by calling the protocol $ \Prot[C] $ $ n - 1 $ times as source $ \Role[scr] $ (\cf $ T_{i \to j} $), and finally participant~$ i $ is $ n - i $ more times the target $ \Role[trg] $ of $ \Prot[C] $ (\cf $ T_{j \to i} $). Observe that, due to the different cases of the projection of optional blocks, receiving values from rounds different from $ i $ guards the continuation of participant~$ i $ while broadcasting its own value is performed in parallel (although in the global type all optional blocks guard the respective continuation).
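The behaviour that these types prescribe can also be read operationally. The following Python sketch is a purely illustrative simulation of the rotating coordinator scheme under our own modelling assumptions (an explicit map of crash rounds and a per-delivery predicate standing in for the optional blocks); it is not part of the calculus, but it reproduces the run discussed in the following subsections.
\begin{verbatim}
def rotating_coordinator(n, initial, crash_round=None, deliver=None):
    """In round r the coordinator p_r broadcasts its current value; every
    single delivery sits in an 'optional block' and may fail, in which
    case the receiver keeps its previous value as default."""
    crash_round = crash_round or {}           # p -> first crashed round
    deliver = deliver or (lambda r, j: True)  # does round r reach p_j?

    def crashed(p, r):
        return crash_round.get(p, n + 1) <= r

    value = dict(initial)                     # p -> current value
    for r in range(1, n + 1):
        for j in range(1, n + 1):
            if j == r or crashed(r, r) or crashed(j, r):
                continue
            if deliver(r, j):
                value[j] = value[r]           # the optional block succeeds
            # otherwise the default value is kept, i.e. value[j] is unchanged
    return {p: value[p] for p in value if not crashed(p, n)}

# p_1 holds 0, p_2 and p_3 hold 1; p_1 crashes after delivering its value
# to p_3 but before reaching p_2.
print(rotating_coordinator(
    3, {1: 0, 2: 1, 3: 1},
    crash_round={1: 2},
    deliver=lambda r, j: not (r == 1 and j == 2)))
# -> {2: 1, 3: 1}: the surviving participants agree on the value 1.
\end{verbatim}
A delivery attempt towards a crashed receiver corresponds to the doomed optional blocks that are removed with Rule~$ (\mathsf{fail}) $ in the reduction sequences.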
\subsection{Implementation} Based on the local type of participant~$ i $ in Example~\ref{exa:localType} we provide an implementation of the rotating coordinator. Therefore we replace the check '\texttt{if alive}$ (p_r) $' by an optional block for communications. In the first $ i - 1 $ and the last $ n - i $ rounds participant~$ i $ either receives a value or (if the respective communication fails) uses as default value its value from the former round. In round~$ i $ participant~$ i $ initiates $ n - 1 $ new sub-sessions---each covered within an optional block---to transmit its current value to each other participant. \begin{example}[Rotating Coordinator Implementation] \label{exa:process} \begin{align*} P_{\text{RC}}(n) ={} & \prod_{i = 1..n} \left( \PPar{\POutS{\Chan_i}{\Chan[s]}}{\PInp{\Chan_i}{\Chan[s]}{P_i}} \right)\\ P_i ={} & ( \bigodot_{j = 1..(i - 1)} \POptS{\Role[p]_i}{\Role[p]_i, \Role[p]_j}{P_{j \to i}}{\Args[v]_{i, j}}{\Args[v]_{i, j - 1}} ).\\ & ( \prod_{j = 1..n, j \neq i} \PRes{\Chan[k]}{\POptNVS{\Role[p]_i}{\Role[p]_i, \Role[p]_j}{P_{i \text{ calls } j}}} \PPar{}{( \bigodot_{j = (i + 1)..n} \POptS{\Role[p]_i}{\Role[p]_i, \Role[p]_j}{P_{j \to i}}{\Args[v]_{i, j}}{\Args[v]_{i, j - 1}} ))}\\ P_{j \to i} ={} & \PEnt{\Chan[s]}{\Role[p]_j}{\Role[p]_i}{\Role[trg]}{\Args}{\PGet{\Args}{\Role[scr]}{\Role[trg]}{\PLab{\Labe[bc]}{\Args[v]}{\POptEnd{\Role[p]_i}{\Args[v]}}}}\\ P_{i \text{ calls } j} ={} & \PDecl{\Chan[k]}{\Chan[s]}{\Args[v]_{i, i - 1}}{\cdot}{\cdot}{P_{i \to j, \Chan[k]}}\\ P_{i \to j, \Chan[k]} ={} & \PPar{\PPar{\PReqS{\Chan[s]}{\Role[p]_i}{\Role[p]_i}{\Role[scr]}{\Chan[k]}}{\PReqS{\Chan[s]}{\Role[p]_i}{\Role[p]_j}{\Role[trg]}{\Chan[k]}}}{\PEnt{\Chan[s]}{\Role[p]_i}{\Role[p]_i}{\Role[scr]}{\Args[z]}{\PSend{\Args[z]}{\Role[scr]}{\Role[trg]}{\Labe[bc]}{\Args[v]_{i, i - 1}}{\POptEnd{\Role[p]_i}{\cdot}}}} \end{align*} \end{example} \noindent The overall system $ P_{\text{RC}}(n) $ consists of the parallel composition of the $ n $ participants. The channel $ \Chan_i $ is used to distribute the initial session channel. Since these communications on $ \tilde{\Chan} $ are used to initialise the system and not to model the algorithm, we assume that they are reliable. The term $ P_i $ models participant $ i $. Each participant first optionally receives $ i - 1 $ times a value from another participant. Therefore an optional block surrounds the term $ P_{j \to i} $ that first answers the sub-session request of participant~$ j $, then receives (as target) in the respective sub-session a value from participant~$ j $ (the source), and finally outputs $ \POptEnd{\Role[p]_i}{\Args[v]} $. This last output terminates the optional block and transmits the received value to its continuation. If this communication succeeds, the respective optional block succeeds, and the received value replaces the current value of participant~$ i $. Otherwise the default value of the former round is used, \ie the current value of participant~$ i $ remains unchanged. In round~$ i $ participant~$ i $ initiates $ n - 1 $ parallel sub-sessions; one for each other participant. For each sub-session a private version of the sub-session channel $ \Chan[k] $ is restricted and an optional block is created. Within the optional block, the term $ P_{i \text{ calls } j} $ creates a sub-session between $ i $ (source) and $ j $ (target). 
This sub-session $ P_{i \to j, \Chan[k]} $ consists of the parallel composition of the invitations of the source and the target to participate in the sub-session using $ \Chan[k] $, and the session acceptance of the source (participant~$ i $) followed by the transmission of its current value towards the target (participant~$ j $) and the empty transmission $ \POptEnd{\Role[p]_i}{\cdot} $ that terminates the optional block of participant~$ i $. Finally, participant~$ i $ optionally receives $ n - i $ more values from other participants in the same way as in its first $ i - 1 $ rounds. The sub-sessions initiated by participant~$ i $ for the broadcast are in parallel to the reception of the value for round~$ i + 1 $. The remaining rounds are composed sequentially. In round~$ i $ the value $ \Args[v]_{i, i - 1} $ is emitted, \ie the (initial value or) last value that is received in the $ i - 1 $ sequential rounds that guard the parallel composition of the sub-sessions to transmit this value. This matches an intuitive understanding of this process in terms of asynchronous communications. The sending operations emit the respective value as soon as they are unguarded but they syntactically remain part of the term until the (possibly later) reception of the respective message consumes it. In fact the presented session calculus is synchronous but, since Example~\ref{exa:process} uses neither choice nor output continuations different from $ \PEnd $ or $ \POptEnd{\Role[p]_i}{\cdot} $, the process in Example~\ref{exa:process} can be considered an asynchronous process \cite{hondaTokoro91, boudol92, palamidessi03, fossacs12_pi}. \subsection{Reaching Consensus} The first three steps initialise the outermost session using Rule~$ (\mathsf{comC}) $ three times. We assume here that these steps belong to the environment and never fail. If one of these steps fails, the respective participant does not know the global session channel and thus cannot participate in the algorithm, \ie is crashed from the beginning. \begin{align*} P_{\text{RC}}(3) & \longmapsto^3 \PPar{P_1}{\PPar{P_2}{P_3}} \intertext{After the initialisation all participants consist of sequential and parallel optional blocks. Each of these optional blocks can fail at any time. $ \Role[p]_1 $ can initialise one of its two sub-sessions to transmit its value to one of the other participants. Since these two blocks are in parallel, $ \Role[p]_1 $ can start with either of them. We assume, however, that in the next two steps it successfully initialises both sub-sessions within its unguarded optional blocks using Rule~$ (\mathsf{subs}) $ twice.
There is no external partner to invite, so the initialisation of the sub-sessions does not generate output messages but only unguards $ P_{1 \to 2, \Chan[k]} $ and $ P_{1 \to 3, \Chan[k]} $.} & \longmapsto^2 \PRes{\Chan[k]}{\POptNVS{\Role[p]_1}{\Role[p]_1, \Role[p]_2}{P_{1 \to 2, \Chan[k]}}} \PPar{\PPar{}{\PRes{\Chan[k]}{\POptNVS{\Role[p]_1}{\Role[p]_1, \Role[p]_3}{P_{1 \to 3, \Chan[k]}}}}}{\PPar{P_1'}{\PPar{P_2}{P_3}}} \intertext{Next $ \Role[p]_1 $ accepts the invitation to its own session---using Rule~$ (\mathsf{join}) $---within the second sub-session ($ P_{1 \to 3}, \Chan[k] $) and $ \Role[p]_3 $ accepts the invitation from $ \Role[p]_1 $---using Rule~$ (\mathsf{jO}) $.} & \longmapsto^2 \PPar{\PRes{\Chan[k]}{(\POptNVS{\Role[p]_1}{\Role[p]_1, \Role[p]_2}{P_{1 \to 2, \Chan[k]}})}}{\PPar{P_1'}{P_2}}\\ & \hspace{2em} \PPar{}{\PRes{\Chan[k]}{( \POptNVS{\Role[p]_1}{\Role[p]_1, \Role[p]_3}{\PSend{\Chan[k]}{\Role[scr]}{\Role[trg]}{\Labe[bc]}{0}{\POptEnd{\Role[p]_1}{\cdot}}} }} \PPar{}{\POpt{\Role[p]_3}{\Role[p]_3, \Role[p]_1}{\PGet{\Chan[k]}{\Role[scr]}{\Role[trg]}{\PLab{\Labe[bc]}{\Args[v]}}{\POptEnd{\Role[p]_3}{\Args[v]}}}{\Args[v]_{3, 1}}{1}{P_3'}}) \intertext{After transmitting its value to $ \Role[p]_3 $---using Rule~$ (\mathsf{cSO}) $---the content of the second optional block (from $ \Role[p]_1 $ to $ \Role[p]_3 $) is reduced to $ \POptEnd{\Role[p]_1}{\cdot} $ and the block can be removed by Rule~$ (\mathsf{succ}) $.} & \longmapsto^2 \PPar{\PRes{\Chan[k]}{(\POptNVS{\Role[p]_1}{\Role[p]_1, \Role[p]_2}{P_{1 \to 2, \Chan[k]}})}}{\PPar{P_1'}{P_2}} \PPar{}{\POpt{\Role[p]_3}{\Role[p]_3, \Role[p]_1}{\POptEnd{\Role[p]_3}{0}}{\Args[v]_{3, 1}}{1}{P_3'}}) \intertext{Finally, $ \Role[p]_3 $ replaces its own value by the value $ 0 $ it received from $ \Role[p]_1 $---using Rule~$ (\mathsf{succ}) $. With that $ \Role[p]_3 $ finishes round~$ 1 $ and moves to round~$ 2 $, whereas the other two participants still remain in round~$ 1 $.} & \longmapsto \PPar{\PRes{\Chan[k]}{(\POptNVS{\Role[p]_1}{\Role[p]_1, \Role[p]_2}{P_{1 \to 2, \Chan[k]}})}}{\PPar{P_1'}{\PPar{P_2}{P_3'\Set[]{\Subst{0}{\Args[v]_{3, 1}}}}}} \intertext{$ \Role[p]_1 $ and $ \Role[p]_3 $ are waiting for a value from $ \Role[p]_2 $ and $ \Role[p]_2 $ is waiting for a value from $ \Role[p]_1 $. $ \Role[p]_1 $ fails to deliver its value to $ \Role[p]_2 $. It does not really matter whether it crashes while the invitations for the sub-session are accepted or before sending the value. The result is the same: The first optional block of $ \Role[p]_1 $ is removed by Rule~$ (\mathsf{fail}) $. After that there is no other optional block with matching roles for the first optional block of $ \Role[p]_2 $ and thus no communication can take place. Hence it has to be aborted as well. With that all three participants move to round~$ 2 $. Moreover, since we assume that $ \Role[p]_1 $ is crashed, we also abort the remaining two optional blocks of $ \Role[p]_1 $ in $ P_1' $ and $ \Role[p]_1 $ completes round~$ 3 $.} & \longmapsto^4 \PPar{P_2'}{P_3'\Set[]{\Subst{0}{\Args[v]_{3, 1}}}} \intertext{$ \Role[p]_2 $ in $ P_2' $ holds the value $ 1 $ (its initial value) and $ \Role[p]_3 $ in $ P_3'\!\Set[]{ \Subst{0}{\Args[v]_{3, 1}} } $ holds the value $ 0 $. To complete round~$ 2 $ Participant~$ 2 $ aborts its attempt to send to $ \Role[p]_1 $ and (successfully) completes the sub-session with $ \Role[p]_3 $. 
Round~$ 3 $ is completed in the same way and finally the two remaining participants both hold the value $ 1 $.} & \longmapsto^7 \PPar{P_2''}{P_3''\!\Set[]{ \Subst{1}{\Args[v]_{3, 2}} }} \longmapsto^7 \PPar{P_2'''\!\Set[]{ \Subst{1}{\Args[v]_{2, 3}} }}{P_3'''\!\Set[]{ \Subst{1}{\Args[v]_{3, 2}} }} \end{align*} \subsection{Well-Typed Processes} Let \begin{align*} \Gamma = \Typed{\Chan[a]_1}{\Rest{G_{\text{RC}}^n}{\Role[p]_1}}, \ldots, \Typed{\Chan[a]_n}{\Rest{G_{\text{RC}}^n}{\Role[p]_n}}, \TypedProt{\Prot[C]}{\Role[scr], \Role[trg]}{\Args[val]}{\cdot}{G_{\Prot[C]}}, \Typed{\Chan[s]}{G_{\text{RC}}^n} \end{align*} where $ G_{\text{RC}}^n $, $ G_{\Prot[C]} $, and $ \Rest{G_{\text{RC}}^n}{\Role[p]_i} $ are provided by Examples~\ref{exa:globalType} and~\ref{exa:restriction}. Similarly, let \begin{align*} \Delta = \Typed{\ATE{\Chan[s]}{\Role[p]_1}}{\ProjS{G_{\text{RC}}^n}{}{\Role[p]_1}}, \ldots, \Typed{\ATE{\Chan[s]}{\Role[p]_n}}{\ProjS{G_{\text{RC}}^n}{}{\Role[p]_n}} \end{align*} where $ \ProjS{G_{\text{RC}}^n}{}{\Role[p]_i} $ is provided by Example~\ref{exa:localType}. We notice that $ \Delta $ is closed. We first apply the Rule~$ (\mathsf{Pa}) $ $ n $ times to separate $ P_{\text{RC}}^n = \prod_{i = 1..n} \left( \PPar{\POutS{\Chan}{\Chan[s]}}{\PInp{\Chan}{\Chan[s]}{P_i}} \right) $ into $ n $ participants $ \PPar{\POutS{\Chan}{\Chan[s]}}{\PInp{\Chan}{\Chan[s]}{P_i}} $, whereby we split $ \Delta $ into $ \Delta_1 \otimes \ldots \otimes \Delta_n $ with $ \Delta_i = \Typed{\ATE{\Chan[s]}{\Role[p]_i}}{\ProjS{G_{\text{RC}}^n}{}{\Role[p]_i}} $. Since $ \Delta_i \otimes \emptyset = \Delta_i = \Typed{\ATE{\Chan[s]}{\Role[p]_i}}{\ProjS{G_{\text{RC}}^n}{}{\Role[p]_i}} $ and $ \ProjS{G_{\text{RC}}^n}{}{\Role[p]_i} = \Proj{\Rest{G_{\text{RC}}^n}{\Role[p]_i}}{}{\Role[p]_i} $, we have: \begin{align*} \hspace*{-1em}\dfrac{\dfrac{\dfrac{}{\Gamma \vdash \PEnd \triangleright \emptyset}(\mathsf{N}) \quad \GetType{\Chan_i} = \Rest{G_{\text{RC}}^n}{\Role[p]_i}}{\Gamma \vdash \POutS{\Chan_i}{\Chan[s]} \triangleright \Delta_i}(\mathsf{O}) \quad \dfrac{\Gamma \vdash P_i \triangleright \Typed{\AT{\Chan[s]}{\Role[p]_i}}{\ProjS{G_{\text{RC}}^n}{}{\Role[p]_i}} \quad \GetType{\Chan_i} = \Rest{G_{\text{RC}}^n}{\Role[p]_i}}{\Gamma \vdash \PInp{\Chan_i}{\Chan[s]}{P_i} \triangleright \emptyset}(\mathsf{I})}{\Gamma \vdash \PPar{\POutS{\Chan_i}{\Chan[s]}}{\PInp{\Chan_i}{\Chan[s]}{P_i}} \triangleright \Delta_i}(\mathsf{Pa}) \end{align*} It remains to prove that $ \Gamma \vdash P_i \triangleright \Typed{\AT{\Chan[s]}{\Role[p]_i}}{\ProjS{G_{\text{RC}}^n}{}{\Role[p]_i}} $. By Example~\ref{exa:process}, $ P_i $ starts with $ i - 1 $ sequential optional blocks and, by Example~\ref{exa:localType}, $ \ProjS{G_{\text{RC}}^n}{}{\Role[p]_i} $ similarly starts with $ i - 1 $ sequential local types of optional blocks.
For each of theses blocks \begin{align*} \dfrac{\begin{array}{c} \Role[p]_i, \Role[p]_j \ \dot{=} \ \Role[p]_i, \Role[p]_j \quad \Gamma \vdash P_{j \to i} \triangleright \Typed{\AT{\Chan[s]}{\Role[p]_i}}{T_{j \to i}}, \Typed{\Role[p]_i}{\OV{\Sort[V]}} \quad \nexists \Role, \tilde{\Sort[K]} \logdot \Typed{\Role}{\OV{\tilde{\Sort[K]}}} \in \emptyset\\ \Gamma \vdash P_{j \to i}' \triangleright \Typed{\AT{\Chan[s]}{\Role[p]_i}}{T_{j \to i}'} \quad \vdash \Typed{\Args[v]_{i, j}}{\Sort[V]} \quad \vdash \Typed{\Args[v]_{i, j - 1}}{\Sort[V]} \end{array}}{\Gamma \vdash \POpt{\Role[p]_i}{\Role[p]_i, \Role[p]_j}{P_{j \to i}}{\Args[v]_{i, j}}{\Args[v]_{i, j - 1}}{P_{j \to i}'} \triangleright \Typed{\AT{\Chan[s]}{\Role[p]_i}}{\LTOpt{\Role[p]_i, \Role[p]_j}{T_{j \to i}}{\Typed{\Args[v]_{i, j - 1}}{\!\Sort[V]}}{T_{j \to i}'}}}(\mathsf{Opt}) \end{align*} with $ P_{j \to i} = \PEnt{\Chan[s]}{\Role[p]_j}{\Role[p]_i}{\Role[trg]}{\Args}{\PGet{\Args}{\Role[scr]}{\Role[trg]}{\PLab{\Labe[bc]}{\Args[v]}{\POptEnd{\Role[p]_i}{\Args[v]}}}} $, the type $ T_{j \to i} = \LTEntS{\Prot[C]}{\Role[trg]}{\Args[v]_{j, j - 1}}{\Role[p]_j} $ and where $ P_{j \to i}' $ and $ T_{j \to i}' $ are the respective continuations. For the content of the optional blocks we have to check the type of $ P_{j \to i} $. \begin{align*} \hspace{-0.2em}\hspace{-1.5em}\dfrac{D_1 \quad \GetType{\Prot[C]} = \TypeOfProt{\Role[scr], \Role[trg]}{\Args[val]}{\cdot}{G_{\Prot[C]}} \quad \ProjS{G_{\Prot[C]}\!\Set[]{ \Subst{\Args[v]_{j, j - 1}}{\Args[val]} }}{}{\Role[trg]} = \LTGet{\Role[scr]}{\LTLabS{\Labe[bc]}{\Typed{\Args[v]_{j, j - 1}}{\Sort[V]}}}}{\Gamma \vdash \PEnt{\Chan[s]}{\Role[p]_j}{\Role[p]_i}{\Role[trg]}{\Args}{\PGet{\Args}{\Role[scr]}{\Role[trg]}{\PLab{\Labe[bc]}{\Args[v]}{\POptEnd{\Role[p]_i}{\Args[v]}}}} \triangleright \Typed{\AT{\Chan[s]}{\Role[p]_i}}{\LTEntS{\Prot[C]}{\Role[trg]}{\Args[v]_{j, j - 1}}{\Role[p]_j}}, \Typed{\Role[p]_i}{\OV{\Sort[V]}}}(\mathsf{J}) \end{align*} with \begin{align*} D_1 = \dfrac{\dfrac{\vdash \Typed{\Args[v]}{\Sort[V]}}{\Gamma \vdash \POptEnd{\Role[p]_i}{\Args[v]} \triangleright \Typed{\Role[p]_i}{\OV{\Sort[V]}}}(\mathsf{OptE}) \quad \vdash \Typed{\Args[v]}{\Sort[V]}}{\Gamma \vdash \PGet{\Args}{\Role[scr]}{\Role[trg]}{\PLab{\Labe[bc]}{\Args[v]}{\POptEnd{\Role[p]_i}{\Args[v]}}} \triangleright \Typed{\AT{\Args[x]}{\Role[trg]}}{\LTGet{\Role[scr]}{\LTLabS{\Labe[bc]}{\Typed{\Args[v]_{j, j - 1}}{\Sort[V]}}}}, \Typed{\Role[p]_i}{\OV{\Sort[V]}}}(\mathsf{C}) \end{align*} After removing $ i - 1 $ optional blocks this way, $ P_{i - 1 \to i}' $ and $ T_{i - 1 \to i}' $ consist of $ n $ parallel components, respectively. We use the Rule~$ (\mathsf{Pa}) $ $ n $ times to separate these components. The $ n $'th component, we obtain this way, consists of $ n - 1 $ sequential optional blocks, respectively. Their type is checked similar to the first $ i - 1 $ such sequential optional blocks with a derivation for $ \PEnd $ using Rule~$ (\mathsf{N}) $ in the end. 
It remains to show that $ \Gamma \vdash \PRes{\Chan[k]}{\POptNVS{\Role[p]_i}{\Role[p]_i, \Role[p]_j}{P_{i \text{ calls } j}}} \triangleright \Typed{\AT{\Chan[s]}{\Role[p]_i}}{\LTOptS{\Role[p]_i, \Role[p]_j}{T_{i \text{ calls } j}}{\cdot}} $ holds for all $ j = 1..n $ with $ j \neq i $, where \begin{align*} P_{i \text{ calls } j} &= \PDecl{\Chan[k]}{\Chan[s]}{\Args[v]_{i, i - 1}}{\cdot}{\cdot}{P_{i \to j, \Chan[k]}}\\ T_{i \text{ calls } j} &= \LTCall{\Prot[C]}{G_{\Prot[C]}}{\Args[v]_{i, i - 1}}{\Typed{\Args[val]}{\!\Sort[V]}}{\cdot}{T_{i \to j}} \end{align*} Let $ \Gamma' = \Gamma, \Typed{\Chan[k]}{G_{\Prot[C]}\!\Set[]{ \Subst{\Args[v]_{i, i - 1}}{\Args[val]} }} $. \begin{align*} \dfrac{\dfrac{\Role[p]_i, \Role[p]_j \ \dot{=} \ \Role[p]_i, \Role[p]_j \quad \Gamma' \vdash P_{i \text{ calls } j} \triangleright \Typed{\AT{\Chan[s]}{\Role[p]_i}}{T_{i \text{ calls } j}}, \Typed{\Role[p]_i}{\OV{\cdot}} \quad \nexists \Role, \tilde{\Sort[K]} \logdot \Typed{\Role}{\OV{\tilde{\Sort[K]}}} \in \emptyset \quad \dfrac{}{\Gamma' \vdash \PEnd \triangleright \emptyset}(\mathsf{N}) \quad \vdash \Typed{\cdot}{\cdot} \quad \vdash \Typed{\cdot}{\cdot}}{\Gamma' \vdash \POptNVS{\Role[p]_i}{\Role[p]_i, \Role[p]_j}{P_{i \text{ calls } j}} \triangleright \Typed{\AT{\Chan[s]}{\Role[p]_i}}{\LTOptS{\Role[p]_i, \Role[p]_j}{T_{i \text{ calls } j}}{\cdot}}}(\mathsf{Opt})}{\Gamma \vdash \PRes{\Chan[k]}{\POptNVS{\Role[p]_i}{\Role[p]_i, \Role[p]_j}{P_{i \text{ calls } j}}} \triangleright \Typed{\AT{\Chan[s]}{\Role[p]_i}}{\LTOptS{\Role[p]_i, \Role[p]_j}{T_{i \text{ calls } j}}{\cdot}}}(\mathsf{R}) \end{align*} with $ G_{\Prot[C]}\!\Set[]{ \Subst{\Args[v]_{i, i - 1}}{\Args[val]} } = \GTCom{\Role[scr]}{\Role[trg]}{\GTInpS{\Labe[bc]}{\Typed{\Args[v]_{i, i - 1}}{\!\Sort[V]}}} $, \begin{align*} \dfrac{\begin{array}{c} \Gamma' \vdash P_{i \to j, \Chan[k]} \triangleright \Typed{\AT{\Chan[s]}{\Role[p]_i}}{T_{i \to j}}, \Delta_k, \Typed{\Role[p]_i}{\OV{\cdot}}\\ \GetType[\Gamma']{\Prot[C]} = \TypedProt{\Prot[C]}{\Role[scr], \Role[trg]}{\Args[val]}{\cdot}{G_{\Prot[C]}} \quad \ProjS{G_{\Prot[C]}\!\Set[]{ \Subst{\Args[v]_{i, i - 1}}{\Args[val]} }}{}{\Role[scr]} = \LTSend{\Role[trg]}{\LTLabS{\Labe[bc]}{\Args[v]_{i, i - 1}}}\\ \ProjS{G_{\Prot[C]}\!\Set[]{ \Subst{\Args[v]_{i, i - 1}}{\Args[val]} }}{}{\Role[trg]} = \LTGet{\Role[scr]}{\LTLabS{\Labe[bc]}{\Args[v]_{i, i - 1}}} \quad \vdash \Typed{\Args[v]_{i, i - 1}}{\Sort[V]} \quad \GetType[\Gamma']{\Chan[k]} = G_{\Prot[C]}\!\Set[]{ \Subst{\Args[v]_{i, i - 1}}{\Args[val]} } \end{array}}{\Gamma' \vdash \PDecl{\Chan[k]}{\Chan[s]}{\Args[v]_{i, i - 1}}{\cdot}{\cdot}{P_{i \to j, \Chan[k]}} \triangleright \Typed{\AT{\Chan[s]}{\Role[p]_i}}{\LTCall{\Prot[C]}{G_{\Prot[C]}}{\Args[v]_{i, i - 1}}{\Typed{\Args[val]}{\!\Sort[V]}}{\cdot}{T_{i \to j}}}, \Typed{\Role[p]_i}{\OV{\cdot}}}(\mathsf{New}) \end{align*} and $ \Delta_k = \Typed{\ATI{\Chan[k]}{\Role[scr]}}{\LTSend{\Role[trg]}{\LTLabS{\Labe[bc]}{\Args[v]_{i, i - 1}}}}, \Typed{\ATI{\Chan[k]}{\Role[trg]}}{\LTGet{\Role[scr]}{\LTLabS{\Labe[bc]}{\Args[v]_{i, i - 1}}}} $. It remains to show that $ \Gamma' \vdash P_{i \to j, \Chan[k]} \triangleright \Typed{\AT{\Chan[s]}{\Role[p]_i}}{T_{i \to j}}, \Delta_k, \Typed{\Role[p]_i}{\OV{\cdot}} $. 
By Example~\ref{exa:process}, \begin{align*} P_{i \to j, \Chan[k]} = \PPar{\PReqS{\Chan[s]}{\Role[p]_i}{\Role[p]_i}{\Role[scr]}{\Chan[k]}}{\PReqS{\Chan[s]}{\Role[p]_i}{\Role[p]_j}{\Role[trg]}{\Chan[k]}}\PPar{}{\PEnt{\Chan[s]}{\Role[p]_i}{\Role[p]_i}{\Role[scr]}{\Args[z]}{\PSend{\Args[z]}{\Role[scr]}{\Role[trg]}{\Labe[bc]}{\Args[v]_{i, i - 1}}{\POptEnd{\Role[p]_i}{\cdot}}}} \end{align*} and, by Example~\ref{exa:localType}, \begin{align*} T_{i \to j} = \LTPar{\LTReqS{\Prot[C]}{\Role[src]}{\Args[v]_{i, i - 1}}{\!\Role[p]_i}}{\LTReqS{\Prot[C]}{\Role[trg]}{\Args[v]_{i, i - 1}}{\!\Role[p]_j}} \LTPar{}{\LTEntS{\Prot[C]}{\Role[scr]}{\Args[v]_{i, i - 1}}{\Role[p]_i}} \end{align*} We apply the Rule~$ (\mathsf{Pa}) $ two times such that it remains to show: \begin{enumerate}[(1)] \item $ \Gamma' \vdash \PReqS{\Chan[s]}{\Role[p]_i}{\Role[p]_i}{\Role[scr]}{\Chan[k]} \triangleright \Typed{\AT{\Chan[s]}{\Role[p]_i}}{\LTReqS{\Prot[C]}{\Role[src]}{\Args[v]_{i, i - 1}}{\!\Role[p]_i}}, \Typed{\ATI{\Chan[k]}{\Role[scr]}}{\LTSend{\Role[trg]}{\LTLabS{\Labe[bc]}{\Args[v]_{i, i - 1}}}} $ \item $ \Gamma' \vdash \PReqS{\Chan[s]}{\Role[p]_i}{\Role[p]_j}{\Role[trg]}{\Chan[k]} \triangleright \Typed{\AT{\Chan[s]}{\Role[p]_i}}{\LTReqS{\Prot[C]}{\Role[trg]}{\Args[v]_{i, i - 1}}{\!\Role[p]_j}}, \Typed{\ATI{\Chan[k]}{\Role[trg]}}{\LTGet{\Role[scr]}{\LTLabS{\Labe[bc]}{\Args[v]_{i, i - 1}}}} $ \item $ \Gamma' \vdash \PEnt{\Chan[s]}{\Role[p]_i}{\Role[p]_i}{\Role[scr]}{\Args[z]}{\PSend{\Args[z]}{\Role[scr]}{\Role[trg]}{\Labe[bc]}{\Args[v]_{i, i - 1}}{\POptEnd{\Role[p]_i}{\cdot}}} \triangleright \Typed{\AT{\Chan[s]}{\Role[p]_i}}{\LTEntS{\Prot[C]}{\Role[scr]}{\Args[v]_{i, i - 1}}{\Role[p]_i}}, \Typed{\Role[p]_i}{\OV{\cdot}} $ \end{enumerate} For the first case we have: \begin{align*} \dfrac{\dfrac{}{\Gamma' \vdash \PEnd \triangleright \emptyset}(\mathsf{N}) \quad \GetType{\Prot[C]} = \TypedProt{\Prot[C]}{\Role[scr], \Role[trg]}{\Args[val]}{\cdot}{G_{\Prot[C]}} \quad \ProjS{G_{\Prot[C]}\!\Set[]{ \Subst{\Args[v]_{i, i}}{\Args[val]} }}{}{\Role[scr]} = \LTSend{\Role[trg]}{\LTLabS{\Labe[bc]}{\Args[v]_{i, i - 1}}}}{\Gamma' \vdash \PReqS{\Chan[s]}{\Role[p]_i}{\Role[p]_i}{\Role[scr]}{\Chan[k]} \triangleright \Typed{\AT{\Chan[s]}{\Role[p]_i}}{\LTReqS{\Prot[C]}{\Role[src]}{\Args[v]_{i, i - 1}}{\!\Role[p]_i}}, \Typed{\ATI{\Chan[k]}{\Role[scr]}}{\LTSend{\Role[trg]}{\LTLabS{\Labe[bc]}{\Args[v]_{i, i - 1}}}}}(\mathsf{P}) \end{align*} The second case is similar. 
We have: \begin{align*} \dfrac{\dfrac{}{\Gamma' \vdash \PEnd \triangleright \emptyset}(\mathsf{N}) \quad \GetType{\Prot[C]} = \TypedProt{\Prot[C]}{\Role[scr], \Role[trg]}{\Args[val]}{\cdot}{G_{\Prot[C]}} \quad \ProjS{G_{\Prot[C]}\!\Set[]{ \Subst{\Args[v]_{i, i}}{\Args[val]} }}{}{\Role[trg]} = \LTGet{\Role[scr]}{\LTLabS{\Labe[bc]}{\Args[v]_{i, i - 1}}}}{\Gamma' \vdash \PReqS{\Chan[s]}{\Role[p]_i}{\Role[p]_j}{\Role[trg]}{\Chan[k]} \triangleright \Typed{\AT{\Chan[s]}{\Role[p]_i}}{\LTReqS{\Prot[C]}{\Role[trg]}{\Args[v]_{i, i - 1}}{\!\Role[p]_j}}, \Typed{\ATI{\Chan[k]}{\Role[trg]}}{\LTGet{\Role[scr]}{\LTLabS{\Labe[bc]}{\Args[v]_{i, i - 1}}}}}(\mathsf{P}) \end{align*} For the third case we have: \begin{align*} \dfrac{\begin{array}{c} \dfrac{\dfrac{\vdash \Typed{\cdot}{\cdot}}{\Gamma' \vdash \POptEnd{\Role[p]_i}{\cdot} \triangleright \Typed{\Role[p]_i}{\OV{\cdot}}}(\mathsf{OptE}) \quad \vdash \Typed{\Args[v]_{i, i - 1}}{\Sort[V]}}{\Gamma' \vdash \PSend{\Args[z]}{\Role[scr]}{\Role[trg]}{\Labe[bc]}{\Args[v]_{i, i - 1}}{\POptEnd{\Role[p]_i}{\cdot}} \triangleright \Typed{\AT{\Args[z]}{\Role[scr]}}{\LTSend{\Role[trg]}{\LTLabS{\Labe[bc]}{\Args[v]_{i, i - 1}}}}, \Typed{\Role[p]_i}{\OV{\cdot}}}(\mathsf{S})\\ \GetType[\Gamma']{\Prot[C]} = \TypedProt{\Prot[C]}{\Role[scr], \Role[trg]}{\Args[val]}{\cdot}{G_{\Prot[C]}} \quad \ProjS{G_{\Prot[C]}\!\Set[]{ \Subst{\Args[v]_{i, i}}{\Args[val]} }}{}{\Role[scr]} = \LTSend{\Role[trg]}{\LTLabS{\Labe[bc]}{\Args[v]_{i, i - 1}}} \end{array}}{\Gamma' \vdash \PEnt{\Chan[s]}{\Role[p]_i}{\Role[p]_i}{\Role[scr]}{\Args[z]}{\PSend{\Args[z]}{\Role[scr]}{\Role[trg]}{\Labe[bc]}{\Args[v]_{i, i - 1}}{\POptEnd{\Role[p]_i}{\cdot}}} \triangleright \Typed{\AT{\Chan[s]}{\Role[p]_i}}{\LTEntS{\Prot[C]}{\Role[scr]}{\Args[v]_{i, i - 1}}{\Role[p]_i}}, \Typed{\Role[p]_i}{\OV{\cdot}}}(\mathsf{J}) \end{align*} We conclude that $ \Gamma \vdash P_{\text{RC}}^n \triangleright \Delta $ holds. \section{Properties of the Type Systems} \label{sec:properties} In the following we analyse the properties of the (two versions of the) type systems. We formally distinguish between the following sets: \begin{compactitem} \item The set $ \nameSet $ of names that captures all kinds of channel names, session names, and names for values. We often use different identifiers to hint on the different purpose of a name, \eg we use $ \Chan $ for shared channels, $ \Chan[s], \Chan[k] $ for session names, and $ \Args[v] $ for values. We do however not formally distinguish between these different kinds of names but formally distinguish names from the following sets. \item The set $ \roleSet $ of roles, usually identified by $ \Role[r], \Role[r]', \Role[r]_i, \ldots $ (in the examples we used the roles $ \Role[p]_1, \ldots, \Role[p]_n, \Role[scr] $, and $ \Role[trg] $). \item The set $ \labelSet $ of labels, usually identified by $ \Labe, \Labe', \Labe_i, \ldots $ (in the examples we used the labels $ \Labe[c] $ and $ \Labe[bc] $ for communication). \item The set $ \procVarSet $ of process variables, usually identified by $ \TermV[X] $. \item The set $ \typeVarSet $ of type variables, usually identified by $ \TermV $. \end{compactitem} Moreover notice that kinds, usually identified by $ \Sort[S], \Sort[S]', \Sort[S]_, \ldots $ (and $ \Sort[V] $ in our examples), are neither global nor local types and can be formally distinguished from every global or local type. 
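This separation of syntactic categories can be mirrored directly in an implementation. The following Python sketch (our own encoding, not part of the formal development) assigns one distinct type to each category, so that, \eg, a role can never be confused with a name even if the underlying identifiers coincide.
\begin{verbatim}
from typing import NewType

Name    = NewType("Name", str)     # shared channels, session channels, values
Role    = NewType("Role", str)     # e.g. p_1, ..., p_n, src, trg
Label   = NewType("Label", str)    # e.g. c, bc
ProcVar = NewType("ProcVar", str)  # process variables X
TypeVar = NewType("TypeVar", str)  # type variables t
Kind    = NewType("Kind", str)     # e.g. V; distinct from global/local types

v, src = Name("v"), Role("src")    # values of different categories stay apart
\end{verbatim}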
Because kinds are formally distinct from all global and local types, a statement $ \vdash \Typed{\Args[v]}{\Sort[S]} $ tells us that $ \Args[v] $ is a value and hence different from all names that are used, \eg, as a (shared or session) channel. Remember that the type environments---$ \Gamma $ for the global types and $ \Delta $ for the session types---cannot contain multiple type statements for the same name or the same combination of a name and a role, respectively. \subsection{Structural Congruence, Substitution, and Evaluation Contexts} We start with a few auxiliary results. The first Lemma tells us that the property of being well-typed is preserved by structural congruence. Note that, following \cite{DemangeonHonda12}, we handle recursion implicitly using the rules \[ \begin{array}{c} \GTPar{G_1}{G_2} \equiv \GTPar{G_2}{G_1} \hspace*{1.4em} \GTPar{G_1}{\left( \GTPar{G_2}{G_3} \right)} \equiv \GTPar{\left( \GTPar{G_1}{G_2} \right)}{G_3} \hspace*{1.4em} \GTChoi{G_1}{\Role}{G_2} \equiv \GTChoi{G_2}{\Role}{G_1} \hspace*{1.4em} \GTRec{\TermV}{G} \equiv G\!\Set{ \Subst{\GTRec{\TermV}{G}}{\TermV} } \end{array} \] for global types and the rules \[ \begin{array}{c} \LTPar{T_1}{T_2} \equiv \LTPar{T_2}{T_1} \hspace*{1.4em} \LTPar{T_1}{\left( \LTPar{T_2}{T_3} \right)} \equiv \LTPar{\left( \LTPar{T_1}{T_2} \right)}{T_3} \hspace*{1.4em} \LTChoi{T_1}{T_2} \equiv \LTChoi{T_2}{T_1} \hspace*{1.4em} \LTRec{\TermV}{T} \equiv T\!\Set{ \Subst{\LTRec{\TermV}{T}}{\TermV} } \end{array} \] for local types, and we usually equate structurally equivalent types and processes. \begin{lemma} \label{lem:typedStructuralCongruence} For both type systems: If $ \Gamma \vdash P \triangleright \Delta $ and $ P \equiv P' $ then $ \Gamma \vdash P' \triangleright \Delta $. \end{lemma} \begin{proof} We start with the larger type system, \ie the session types with optional blocks and sub-sessions. The proof is by induction on the structural congruence $ \equiv $ between processes. \begin{description} \item[Case $ \PPar{P}{\PEnd} \equiv P $:] Assume $ \Gamma \vdash \PPar{P}{\PEnd} \triangleright \Delta $. Then, by the typing rules of Figure~\ref{fig:typingRules}, the proof of this judgement has (modulo applications of Rule~(\textsf{S2}) that can be moved towards the type check of $ P $) to start with \begin{align*} \dfrac{\Gamma \vdash P \triangleright \Delta_{P} \quad \dfrac{}{\Gamma \vdash \PEnd \triangleright \emptyset}(\mathsf{N})}{\Gamma \vdash \PPar{P}{\PEnd} \triangleright \Delta}(\mathsf{Pa}) \end{align*} where $ \Delta = \Delta_P \otimes \emptyset $ and thus $ \Delta_P = \Delta $. Then also $ \Gamma \vdash P \triangleright \Delta $. Assume $ \Gamma \vdash P \triangleright \Delta $. With the Rules~$ (\mathsf{N}) $ and $ (\mathsf{Pa}) $ and because $ \Delta \otimes \emptyset = \Delta $, we then have \begin{align*} \dfrac{\Gamma \vdash P \triangleright \Delta \quad \dfrac{}{\Gamma \vdash \PEnd \triangleright \emptyset}(\mathsf{N})}{\Gamma \vdash \PPar{P}{\PEnd} \triangleright \Delta}(\mathsf{Pa}) \end{align*} Hence also $ \Gamma \vdash \PPar{P}{\PEnd} \triangleright \Delta $. \item[Case $ \PPar{P_1}{P_2} \equiv \PPar{P_2}{P_1} $:] Assume $ \Gamma \vdash \PPar{P_1}{P_2} \triangleright \Delta $.
Then, by the typing rules of Figure~\ref{fig:typingRules}, the proof of this judgement has to start with a number of applications of (\textsf{S2}) that reduce $ \Gamma \vdash \PPar{P_1}{P_2} \triangleright \Delta $ to $ \Gamma \vdash \PPar{P_1}{P_2} \triangleright \Delta' $ for some $ \Delta' $ such that \begin{align*} \dfrac{\Gamma \vdash P_1 \triangleright \Delta_{P1} \quad \Gamma \vdash P_2 \triangleright \Delta_{P2}}{\Gamma \vdash \PPar{P_1}{P_2} \triangleright \Delta'}(\mathsf{Pa}) \end{align*} where $ \Delta' = \Delta_{P1} \otimes \Delta_{P2} $. With Rule~$ (\mathsf{Pa}) $ and, because $ \Delta_{P1} \otimes \Delta_{P2} = \Delta_{P2} \otimes \Delta_{P1} $, then also $ \Gamma \vdash \PPar{P_2}{P_1} \triangleright \Delta' $. We use the same applications of (\textsf{S2}) to derive $ \Gamma \vdash \PPar{P_2}{P_1} \triangleright \Delta $. The other direction is similar. \item[Case $ \PPar{P_1}{\left( \PPar{P_2}{P_3} \right)} \equiv \PPar{\left( \PPar{P_1}{P_2} \right)}{P_3} $:] Assume $ \Gamma \vdash \PPar{P_1}{\left( \PPar{P_2}{P_3} \right)} \triangleright \Delta $. Then, by the typing rules of Figure~\ref{fig:typingRules}, the proof of this judgement has to start with a number of applications of (\textsf{S2}) that reduce $ \Gamma \vdash \PPar{P_1}{\left( \PPar{P_2}{P_3} \right)} \triangleright \Delta $ to $ \Gamma \vdash \PPar{P_1}{\left( \PPar{P_2}{P_3} \right)} \triangleright \Delta' $ for some $ \Delta' $ such that \begin{align*} \dfrac{\Gamma \vdash P_1 \triangleright \Delta_{P1} \quad \dfrac{\Gamma \vdash P_2 \triangleright \Delta_{P2} \quad \Gamma \vdash P_3 \triangleright \Delta_{P3}}{\Gamma \vdash \PPar{P_2}{P_3} \triangleright \Delta_{P2 - 3}}(\mathsf{Pa})}{\Gamma \vdash \PPar{P_1}{\left( \PPar{P_2}{P_3} \right)} \triangleright \Delta'}(\mathsf{Pa}) \end{align*} where $ \Delta' = \Delta_{P1} \otimes \Delta_{P2 - 3} $ and $ \Delta_{P2 - 3} = \Delta_{P2} \otimes \Delta_{P3} $. With Rule~$ (\mathsf{Pa}) $ and, because $ \Delta_{P1} \otimes \left( \Delta_{P2} \otimes \Delta_{P3} \right) = \left( \Delta_{P1} \otimes \Delta_{P2} \right) \otimes \Delta_{P3} $, we have \begin{align*} \dfrac{\dfrac{\Gamma \vdash P_1 \triangleright \Delta_{P1} \quad \Gamma \vdash P_2 \triangleright \Delta_{P2}}{\Gamma \vdash \PPar{P_1}{P_2} \triangleright \Delta_{P1} \otimes \Delta_{P2}}(\mathsf{Pa}) \quad \Gamma \vdash P_3 \triangleright \Delta_{P3}}{\Gamma \vdash \PPar{\left( \PPar{P_1}{P_2} \right)}{P_3} \triangleright \Delta'}(\mathsf{Pa}) \end{align*} Using the same applications of (\textsf{S2}) we obtain $ \Gamma \vdash \PPar{\left( \PPar{P_1}{P_2} \right)}{P_3} \triangleright \Delta $. The other direction is similar. \item[Case $ \PRes{\Args}{\PEnd} \equiv \PEnd $:] Assume $ \Gamma \vdash \PRes{\Chan}{\PEnd} \triangleright \Delta $. Then, by the typing rules of Figure~\ref{fig:typingRules}, the proof of this judgement has to start with a number of applications of (\textsf{S2}) that reduce $ \Gamma \vdash \PRes{\Chan}{\PEnd} \triangleright \Delta $ to $ \Gamma \vdash \PRes{\Chan}{\PEnd} \triangleright \Delta' $ for some $ \Delta' $ such that \begin{align*} \dfrac{\dfrac{}{\Gamma, \Typed{\Args}{\AT{T}{\Role}} \vdash \PEnd \triangleright \Delta'}(\mathsf{N})}{\Gamma \vdash \PRes{\Args}{\PEnd} \triangleright \Delta'}(\mathsf{R}) \end{align*} where $ \Delta' = \emptyset $. By Rule~$ (\mathsf{N}) $ and because $ \Delta' = \emptyset $, we have $ \Gamma \vdash \PEnd \triangleright \Delta' $. Using the same applications of (\textsf{S2}) we obtain $ \Gamma \vdash \PEnd \triangleright \Delta $. 
Assume $ \Gamma \vdash \PEnd \triangleright \Delta $. Then, by the typing rules of Figure~\ref{fig:typingRules}, the proof of this judgement has to start with a number of applications of (\textsf{S2}) followed by one application of (\textsf{N}), where for the last step the session environment has to be empty. With the Rules~$ (\mathsf{N}) $ and $ (\mathsf{R}) $ we then have \begin{align*} \dfrac{\dfrac{}{\Gamma, \Typed{\Args}{\AT{T'}{\Role'}} \vdash \PEnd \triangleright \emptyset}(\mathsf{N})}{\Gamma \vdash \PRes{\Args}{\PEnd} \triangleright \emptyset}(\mathsf{R}) \end{align*} Using the same applications of (\textsf{S2}) we obtain $ \Gamma \vdash \PRes{\Args}{\PEnd} \triangleright \Delta $. \item[Case $ \PRes{\Args}{\PRes{\Args[y]}{P}} \equiv \PRes{\Args[y]}{\PRes{\Args}{P}} $:] Assume $ \Gamma \vdash \PRes{\Args}{\PRes{\Args[y]}{P}} \triangleright \Delta $. Then, by the typing rules of Figure~\ref{fig:typingRules}, the proof of this judgement has to start with a number of applications of (\textsf{S2}) that reduce $ \Gamma \vdash \PRes{\Args}{\PRes{\Args[y]}{P}} \triangleright \Delta $ to $ \Gamma \vdash \PRes{\Args}{\PRes{\Args[y]}{P}} \triangleright \Delta' $ for some $ \Delta' $ such that \begin{align*} \dfrac{\dfrac{\Gamma, \Typed{\Args}{\AT{T}{\Role}}, \Typed{\Args[y]}{\AT{T'}{\Role'}} \vdash P \triangleright \Delta'}{\Gamma, \Typed{\Args}{\AT{T}{\Role}} \vdash \PRes{\Args[y]}{P} \triangleright \Delta'}(\mathsf{R})}{\Gamma \vdash \PRes{\Args}{\PRes{\Args[y]}{P}} \triangleright \Delta'}(\mathsf{R}) \end{align*} By $ \Gamma, \Typed{\Args}{\AT{T}{\Role}}, \Typed{\Args[y]}{\AT{T'}{\Role'}} \vdash P \triangleright \Delta' $, Rule~$ (\mathsf{R}) $, and because $ \Gamma, \Typed{\Args}{\AT{T}{\Role}}, \Typed{\Args[y]}{\AT{T'}{\Role'}} = \Gamma, \Typed{\Args[y]}{\AT{T'}{\Role'}}, \Typed{\Args}{\AT{T}{\Role}} $, we have \begin{align*} \dfrac{\dfrac{\Gamma, \Typed{\Args}{\AT{T}{\Role}}, \Typed{\Args[y]}{\AT{T'}{\Role'}} \vdash P \triangleright \Delta'}{\Gamma, \Typed{\Args[y]}{\AT{T'}{\Role'}} \vdash \PRes{\Args}{P} \triangleright \Delta'}(\mathsf{R})}{\Gamma \vdash \PRes{\Args[y]}{\PRes{\Args}{P}} \triangleright \Delta'}(\mathsf{R}) \end{align*} Using the same applications of (\textsf{S2}) we obtain $ \Gamma \vdash \PRes{\Args[y]}{\PRes{\Args}{P}} \triangleright \Delta $. The other direction is similar. \item[Case $ \PRes{\Args}{\left( \PPar{P_1}{P_2} \right)} \equiv \PPar{P_1}{\PRes{\Args}{P_2}} $ if $ \Args \notin \FreeNames{P_1} $:] Assume $ \Gamma \vdash \PRes{\Args}{\left( \PPar{P_1}{P_2} \right)} \triangleright \Delta $. Then, by the typing rules of Figure~\ref{fig:typingRules}, the proof of this judgement has to start with a number of applications of (\textsf{S2}) that reduce $ \Gamma \vdash \PRes{\Args}{\left( \PPar{P_1}{P_2} \right)} \triangleright \Delta $ to $ \Gamma \vdash \PRes{\Args}{\left( \PPar{P_1}{P_2} \right)} \triangleright \Delta' $ for some $ \Delta' $ such that \begin{align*} \dfrac{\dfrac{\Gamma, \Typed{\Args}{\AT{T}{\Role}} \vdash P_1 \triangleright \Delta_{P1} \quad \Gamma, \Typed{\Args}{\AT{T}{\Role}} \vdash P_2 \triangleright \Delta_{P2}}{\Gamma, \Typed{\Args}{\AT{T}{\Role}} \vdash \PPar{P_1}{P_2} \triangleright \Delta'}(\mathsf{Pa})}{\Gamma \vdash \PRes{\Args}{\left( \PPar{P_1}{P_2} \right)} \triangleright \Delta'}(\mathsf{R}) \end{align*} where $ \Delta' = \Delta_{P1} \otimes \Delta_{P2} $. The only rules that make use of type declarations of channels from the global environment are the Rules~$ (\mathsf{New}) $, $ (\mathsf{I}) $, and $ (\mathsf{O}) $.
Since $ \Args \notin \FreeNames{P_1} $, the Rules~$ (\mathsf{New}) $, $ (\mathsf{I}) $, and $ (\mathsf{O}) $---even if they are used---will not check for the type of $ \Args $ in the judgement $ \Gamma, \Typed{\Args}{\AT{T}{\Role}} \vdash P_1 \triangleright \Delta_{P1} $. Hence also $ \Gamma \vdash P_1 \triangleright \Delta_{P1} $. With $ \Gamma, \Typed{\Args}{\AT{T}{\Role}} \vdash P_2 \triangleright \Delta_{P2} $ and the Rules~$ (\mathsf{Pa}) $ and $ (\mathsf{R}) $ we then have \begin{align*} \dfrac{\Gamma \vdash P_1 \triangleright \Delta_{P1} \quad \dfrac{\Gamma, \Typed{\Args}{\AT{T}{\Role}} \vdash P_2 \triangleright \Delta_{P2}}{\Gamma \vdash \PRes{\Args}{P_2} \triangleright \Delta_{P2}}(\mathsf{R})}{\Gamma \vdash \PPar{P_1}{\PRes{\Args}{P_2}} \triangleright \Delta'}(\mathsf{Pa}) \end{align*} Using the same applications of (\textsf{S2}) we obtain $ \Gamma \vdash \PPar{P_1}{\PRes{\Args}{P_2}} \triangleright \Delta $. Assume $ \Gamma \vdash \PPar{P_1}{\PRes{\Args}{P_2}} \triangleright \Delta $. Then, by the typing rules of Figure~\ref{fig:typingRules}, the proof of this judgement has to start with a number of applications of (\textsf{S2}) that reduce $ \Gamma \vdash \PPar{P_1}{\PRes{\Args}{P_2}} \triangleright \Delta $ to $ \Gamma \vdash \PPar{P_1}{\PRes{\Args}{P_2}} \triangleright \Delta' $ for some $ \Delta' $ such that \begin{align*} \dfrac{\Gamma \vdash P_1 \triangleright \Delta_{P1} \quad \dfrac{\Gamma, \Typed{\Args}{\AT{T}{\Role}} \vdash P_2 \triangleright \Delta_{P2}}{\Gamma \vdash \PRes{\Args}{P_2} \triangleright \Delta_{P2}}(\mathsf{R})}{\Gamma \vdash \PPar{P_1}{\PRes{\Args}{P_2}} \triangleright \Delta'}(\mathsf{Pa}) \end{align*} where $ \Delta' = \Delta_{P1} \otimes \Delta_{P2} $. Since $ \Args \notin \FreeNames{P_1} $, no rule will check for the type of $ \Args $ in the judgement $ \Gamma \vdash P_1 \triangleright \Delta_{P1} $. Hence also $ \Gamma, \Typed{\Args}{\AT{T}{\Role}} \vdash P_1 \triangleright \Delta_{P1} $. With $ \Gamma, \Typed{\Args}{\AT{T}{\Role}} \vdash P_2 \triangleright \Delta_{P2} $ and the Rules~$ (\mathsf{Pa}) $ and $ (\mathsf{R}) $ we then have \begin{align*} \dfrac{\dfrac{\Gamma, \Typed{\Args}{\AT{T}{\Role}} \vdash P_1 \triangleright \Delta_{P1} \quad \Gamma, \Typed{\Args}{\AT{T}{\Role}} \vdash P_2 \triangleright \Delta_{P2}}{\Gamma, \Typed{\Args}{\AT{T}{\Role}} \vdash \PPar{P_1}{P_2} \triangleright \Delta'}(\mathsf{Pa})}{\Gamma \vdash \PRes{\Args}{\left( \PPar{P_1}{P_2} \right)} \triangleright \Delta'}(\mathsf{R}) \end{align*} Using the same applications of (\textsf{S2}) we obtain $ \Gamma \vdash \PRes{\Args}{\left( \PPar{P_1}{P_2} \right)} \triangleright \Delta $. \item[Case $ \PChoi{P_1}{P_2} \equiv \PChoi{P_2}{P_1} $:] Assume $ \Gamma \vdash \PChoi{P_1}{P_2} \triangleright \Delta $. Then, by the typing rules of Figure~\ref{fig:typingRules}, the proof of this judgement has to start with a number of applications of (\textsf{S2}) that reduce $ \Gamma \vdash \PChoi{P_1}{P_2} \triangleright \Delta $ to $ \Gamma \vdash \PChoi{P_1}{P_2} \triangleright \Delta' $ for some $ \Delta' $ such that \begin{align*} \dfrac{\Gamma \vdash P_1 \triangleright \Delta'', \Typed{\AT{\Chan[s]}{\Role}}{T_1} \quad \Gamma \vdash P_2 \triangleright \Delta'', \Typed{\AT{\Chan[s]}{\Role}}{T_2}}{\Gamma \vdash \PChoi{P_1}{P_2} \triangleright \Delta'}(\mathsf{S1}) \end{align*} where $ \Delta' = \Delta'', \Typed{\AT{\Chan[s]}{\Role}}{T_1 \oplus T_2} $. With Rule~$ (\mathsf{S1}) $, and because $ T_1 \oplus T_2 = T_2 \oplus T_1 $, we also have $ \Gamma \vdash \PChoi{P_2}{P_1} \triangleright \Delta' $.
Using the same applications of (\textsf{S2}) we obtain $ \Gamma \vdash \PChoi{P_2}{P_1} \triangleright \Delta $. The other direction is similar. \item[Case $ \PRec{\TermV[X]}{P} \equiv P\!\Set{ \Subst{\PRec{\TermV[X]}{P}}{\TermV[X]} } $:] Assume $ \Gamma \vdash \PRec{\TermV[X]}{P} \triangleright \Delta $. Then, by the typing rules of Figure~\ref{fig:typingRules}, $ \Delta $ contains (modulo some applications of (\textsf{S2})) some $ \Typed{\AT{\Chan[s]}{\Role}}{\LTRec{\TermV}{T}} $ to check the type of $ \PRec{\TermV[X]}{P} $. Because of $ \LTRec{\TermV}{T} \equiv T\!\Set[]{ \Subst{\LTRec{\TermV}{T}}{\TermV} } $, we also have $ \Gamma \vdash P\!\Set{ \Subst{\PRec{\TermV[X]}{P}}{\TermV[X]} } \triangleright \Delta $. The other direction is similar. \end{description} Since the rules of structural congruence $ \equiv $ are the same for both type systems and because none of the above cases relies on one of the Rules~$ (\mathsf{P}) $, $ (\mathsf{J}) $, or $ (\mathsf{New}) $, this lemma also holds for the session types with optional blocks but without sub-sessions. \end{proof} The next lemma allows us to substitute (session) names within type judgements if the session environment is adapted accordingly. \begin{lemma} \label{lem:typeSubstA} For both type systems: If $ \Gamma \vdash P \triangleright \Delta, \Typed{\AT{\Args}{\Role}}{T} $ then $ \Gamma \vdash P\!\Set[]{ \Subst{\Chan[s]}{\Args} } \triangleright \Delta, \Typed{\AT{\Chan[s]}{\Role}}{T} $. \end{lemma} \begin{proof} We start with the larger type system, \ie the session types with optional blocks and sub-sessions. Assume $ \Gamma \vdash P \triangleright \Delta, \Typed{\AT{\Args}{\Role}}{T} $. We perform an induction on the derivation of this judgement from the typing rules of Figure~\ref{fig:typingRules}. Note that the Rules~$ (\mathsf{N}) $ and $ (\mathsf{OptE}) $ refer to base cases, while the remaining rules refer to the induction steps. \begin{description} \item[Case Rule~$ (\mathsf{N}) $:] In this case $ P = \PEnd $ and $ \Delta, \Typed{\AT{\Args}{\Role}}{T} = \emptyset $. This is a contradiction. Hence the implication holds trivially. \item[Case Rule~$ (\mathsf{OptE}) $:] In this case $ P = \POptEnd{\Role'}{\tilde{\Args[v]}} $ and $ \Delta, \Typed{\AT{\Args}{\Role}}{T} = \Typed{\Role'}{\OV{\tilde{\Sort}}} $. Again this is a contradiction. \item[Case Rule~$ (\mathsf{I}) $:] In this case $ P = \PInp{\Chan}{\Args'}{P'} $ and $ \Delta, \Typed{\AT{\Args}{\Role}}{T} = \Delta' $ and we have $ \Gamma \vdash P' \triangleright \Delta', \Typed{\AT{\Args'}{\Role'}}{T'} $ and $ \GetType{\Chan} = \AT{T'}{\Role'} $. Using alpha-conversion before Rule~$ (\mathsf{I}) $ we can ensure that $ \Args \neq \Args' $. Then, by the induction hypothesis, $ \Gamma \vdash P' \triangleright \Delta, \Typed{\AT{\Args}{\Role}}{T}, \Typed{\AT{\Args'}{\Role'}}{T'} $ implies $ \Gamma \vdash P'\!\Set[]{ \Subst{\Chan[s]}{\Args} } \triangleright \Delta, \Typed{\AT{\Chan[s]}{\Role}}{T}, \Typed{\AT{\Args'}{\Role'}}{T'} $. With Rule~$ (\mathsf{I}) $ and $ \GetType{\Chan} = \AT{T'}{\Role'} $ we have $ \Gamma \vdash \PInp{\Chan}{\Args'}{\left( P'\!\Set[]{ \Subst{\Chan[s]}{\Args} } \right)} \triangleright \Delta, \Typed{\AT{\Chan[s]}{\Role}}{T} $. Since $ \Args \neq \Args' $, then $ \Gamma \vdash P\!\Set[]{ \Subst{\Chan[s]}{\Args} } \triangleright \Delta, \Typed{\AT{\Chan[s]}{\Role}}{T} $.
\item[Case Rule~$ (\mathsf{O}) $:] In this case $ P = \POut{\Chan}{\Chan[s]'}{P'} $ and $ \Delta, \Typed{\AT{\Args}{\Role}}{T} = \Delta', \Typed{\ATE{\Chan[s]'}{\Role'}}{T'} $ and we have $ \Gamma \vdash P' \triangleright \Delta' $ and $ \GetType{\Chan} = \AT{T'}{\Role'} $. Hence $ \Args \neq \Chan[s]' $, $ \Delta = \Delta'', \Typed{\ATE{\Chan[s]'}{\Role'}}{T'} $, and $ \Delta' = \Delta'', \Typed{\AT{\Args}{\Role}}{T} $. By the induction hypothesis, $ \Gamma \vdash P' \triangleright \Delta'', \Typed{\AT{\Args}{\Role}}{T} $ implies $ \Gamma \vdash P'\!\Set[]{ \Subst{\Chan[s]}{\Args} } \triangleright \Delta'', \Typed{\AT{\Chan[s]}{\Role}}{T} $. With Rule~$ (\mathsf{O}) $ and $ \GetType{\Chan} = \AT{T'}{\Role'} $ we have $ \Gamma \vdash \POut{\Chan}{\Chan[s]'}{\left( P'\!\Set[]{ \Subst{\Chan[s]}{\Args} } \right)} \triangleright \Delta'', \Typed{\AT{\Chan[s]}{\Role}}{T}, \Typed{\ATE{\Chan[s]'}{\Role'}}{T'} $. Since $ \Args \neq \Chan[s]' $, then $ \Gamma \vdash P\!\Set[]{ \Subst{\Chan[s]}{\Args} } \triangleright \Delta, \Typed{\AT{\Chan[s]}{\Role}}{T} $. \item[Case Rule~$ (\mathsf{C}) $:] In this case $ P = \PGet{\Chan[k]}{\Role_1}{\Role_2}{_{i \in \indexSet} \Set{ \PLab{\Labe_i}{\tilde{\Args[y]}_i}{P_i} }} $ and $ \Delta, \Typed{\AT{\Args}{\Role}}{T} = \Delta', \Typed{\AT{\Chan[k]}{\Role_2}}{\LTGet{\Role_1}{_{i \in \indexSet{}} \Set{ \LTLab{\Labe_i}{\Typed{\tilde{\Args}_i}{\tilde{\Sort}_i}}{T_i} }}} $ and we have $ \Gamma \vdash P_i \triangleright \Delta', \Typed{\AT{\Chan[k]}{\Role_2}}{T_i} $ and $ \vdash \Typed{\tilde{\Args[y]}_i}{\tilde{\Sort}_i} $ for all $ i \in \indexSet $. Using alpha-conversion before Rule~$ (\mathsf{C}) $ we can ensure that $ \Args \notin \tilde{\Args[y]}_i $ for all $ i \in \indexSet $. We distinguish between the cases (1)~$ \Args = \Chan[k] $ and (2)~$ \Args \neq \Chan[k] $. \begin{enumerate}[(1)] \item Then $ \Delta = \Delta' $, $ \Role = \Role_2 $, and $ T = \LTGet{\Role_1}{_{i \in \indexSet{}} \Set{ \LTLab{\Labe_i}{\Typed{\tilde{\Args}_i}{\tilde{\Sort}_i}}{T_i} }} $. By the induction hypothesis, $ \Gamma \vdash P_i \triangleright \Delta, \Typed{\AT{\Args}{\Role}}{T_i} $ implies $ \Gamma \vdash P_i\!\Set[]{ \Subst{\Chan[s]}{\Args} } \triangleright \Delta, \Typed{\AT{\Chan[s]}{\Role}}{T_i} $ for all $ i \in \indexSet $. With Rule~$ (\mathsf{C}) $ and $ \vdash \Typed{\tilde{\Args[y]}_i}{\tilde{\Sort}_i} $ for all $ i \in \indexSet $ we have $ \Gamma \vdash \PGet{\Chan[s]}{\Role_1}{\Role}{_{i \in \indexSet} \Set{ \PLab{\Labe_i}{\tilde{\Args[y]}_i}{\left( P_i\!\Set[]{ \Subst{\Chan[s]}{\Args} } \right)} }} \triangleright \Delta, \Typed{\AT{\Chan[s]}{\Role}}{\LTGet{\Role_1}{_{i \in \indexSet{}} \Set{ \LTLab{\Labe_i}{\Typed{\tilde{\Args}_i}{\tilde{\Sort}_i}}{T_i} }}} $. Since $ \Args \notin \tilde{\Args[y]}_i $ for all $ i \in \indexSet $, $ \Gamma \vdash P\!\Set[]{ \Subst{\Chan[s]}{\Args} } \triangleright \Delta, \Typed{\AT{\Chan[s]}{\Role}}{T} $. \item Then $ \Delta = \Delta'', \Typed{\AT{\Chan[k]}{\Role_2}}{\LTGet{\Role_1}{_{i \in \indexSet{}} \Set{ \LTLab{\Labe_i}{\Typed{\tilde{\Args}_i}{\tilde{\Sort}_i}}{T_i} }}} $ and $ \Delta' = \Delta'', \Typed{\AT{\Args}{\Role}}{T} $. By the induction hypothesis, $ \Gamma \vdash P_i \triangleright \Delta'', \Typed{\AT{\Args}{\Role}}{T}, \Typed{\AT{\Chan[k]}{\Role_2}}{T_i} $ implies $ \Gamma \vdash P_i\!\Set[]{ \Subst{\Chan[s]}{\Args} } \triangleright \Delta'', \Typed{\AT{\Chan[s]}{\Role}}{T}, \Typed{\AT{\Chan[k]}{\Role_2}}{T_i} $ for all $ i \in \indexSet $. 
With Rule~$ (\mathsf{C}) $ and $ \vdash \Typed{\tilde{\Args[y]}_i}{\tilde{\Sort}_i} $ for all $ i \in \indexSet $ we have $ \Gamma \vdash \PGet{\Chan[k]}{\Role_1}{\Role_2}{_{i \in \indexSet} \Set{ \PLab{\Labe_i}{\tilde{\Args[y]}_i}{\left( P_i\!\Set[]{ \Subst{\Chan[s]}{\Args} } \right)} }} \triangleright \Delta'', \Typed{\AT{\Chan[s]}{\Role}}{T}, \Typed{\AT{\Chan[k]}{\Role_2}}{\LTGet{\Role_1}{_{i \in \indexSet{}} \Set{ \LTLab{\Labe_i}{\Typed{\tilde{\Args}_i}{\tilde{\Sort}_i}}{T_i} }}} $. Since $ \Args \notin \tilde{\Args[y]}_i $ for all $ i \in \indexSet $, $ \Gamma \vdash P\!\Set[]{ \Subst{\Chan[s]}{\Args} } \triangleright \Delta, \Typed{\AT{\Chan[s]}{\Role}}{T} $. \end{enumerate} \item[Case Rule~$ (\mathsf{S}) $:] In this case $ P = \PSend{\Chan[k]}{\Role_1}{\Role_2}{\Labe_j}{\tilde{\Args[v]}}{P'} $ and $ \Delta, \Typed{\AT{\Args}{\Role}}{T} = \Delta', \Typed{\AT{\Chan[k]}{\Role_1}}{\LTSend{\Role_2}{_{i \in \indexSet} \Set{ \LTLab{\Labe_i}{\Typed{\tilde{\Args_i}}{\tilde{\Sort}_i}}{T_i} }}} $ and we have $ \Gamma \vdash P' \triangleright \Delta', \Typed{\AT{\Chan[k]}{\Role_1}}{T_j} $, $ \vdash \Typed{\tilde{\Args[v]}}{\tilde{\Sort}_j} $, and $ \Args \notin \tilde{\Args[v]} $. We distinguish between the cases (1)~$ \Args = \Chan[k] $ and (2)~$ \Args \neq \Chan[k] $. \begin{enumerate}[(1)] \item Then $ \Delta = \Delta' $, $ \Role = \Role_1 $, and $ T = \LTSend{\Role_2}{_{i \in \indexSet} \Set{ \LTLab{\Labe_i}{\Typed{\tilde{\Args_i}}{\tilde{\Sort_i}}}{T_i} }} $. By the induction hypothesis, $ \Gamma \vdash P' \triangleright \Delta, \Typed{\AT{\Args}{\Role}}{T_j} $ implies $ \Gamma \vdash P'\!\Set[]{ \Subst{\Chan[s]}{\Args} } \triangleright \Delta, \Typed{\AT{\Chan[s]}{\Role}}{T_j} $. With Rule~$ (\mathsf{S}) $ and $ \vdash \Typed{\tilde{\Args[v]}}{\tilde{\Sort}_j} $ we have $ \Gamma \vdash \PSend{\Chan[s]}{\Role}{\Role_2}{\Labe_j}{\tilde{\Args[v]}}{\left( P'\!\Set[]{ \Subst{\Chan[s]}{\Args} } \right)} \triangleright \Delta, \Typed{\AT{\Chan[s]}{\Role}}{\LTSend{\Role_2}{_{i \in \indexSet} \Set{ \LTLab{\Labe_i}{\Typed{\tilde{\Args_i}}{\tilde{\Sort_i}}}{T_i} }}} $. Since $ \Args \notin \tilde{\Args[v]} $, then $ \Gamma \vdash P\!\Set[]{ \Subst{\Chan[s]}{\Args} } \triangleright \Delta, \Typed{\AT{\Chan[s]}{\Role}}{T} $. \item Then $ \Delta = \Delta'', \Typed{\AT{\Chan[k]}{\Role_1}}{\LTSend{\Role_2}{_{i \in \indexSet} \Set{ \LTLab{\Labe_i}{\Typed{\tilde{\Args_i}}{\tilde{\Sort_i}}}{T_i} }}} $ and $ \Delta' = \Delta'', \Typed{\AT{\Args}{\Role}}{T} $. By the induction hypothesis, $ \Gamma \vdash P' \triangleright \Delta'', \Typed{\AT{\Args}{\Role}}{T}, \Typed{\AT{\Chan[k]}{\Role_1}}{T_j} $ implies $ \Gamma \vdash P'\!\Set[]{ \Subst{\Chan[s]}{\Args} } \triangleright \Delta'', \Typed{\AT{\Chan[s]}{\Role}}{T}, \Typed{\AT{\Chan[k]}{\Role_1}}{T_j} $. With Rule~$ (\mathsf{S}) $ and $ \vdash \Typed{\tilde{\Args[v]}}{\tilde{\Sort}_j} $ we have $ \Gamma \vdash \PSend{\Chan[k]}{\Role_1}{\Role_2}{\Labe_j}{\tilde{\Args[v]}}{\left( P'\!\Set[]{ \Subst{\Chan[s]}{\Args} } \right)} \triangleright \Delta'', \Typed{\AT{\Chan[s]}{\Role}}{T}, \Typed{\AT{\Chan[k]}{\Role_1}}{\LTSend{\Role_2}{_{i \in \indexSet} \Set{ \LTLab{\Labe_i}{\Typed{\tilde{\Args_i}}{\tilde{\Sort_i}}}{T_i} }}} $. Since $ \Args \notin \tilde{\Args[v]} $, then $ \Gamma \vdash P\!\Set[]{ \Subst{\Chan[s]}{\Args} } \triangleright \Delta, \Typed{\AT{\Chan[s]}{\Role}}{T} $. 
\end{enumerate} \item[Case Rule~$ (\mathsf{R}) $:] In this case $ P = \PRes{\Args'}{P'} $ and $ \Delta, \Typed{\AT{\Args}{\Role}}{T} = \Delta' $ and we have $ \Gamma, \Typed{\Args'}{\AT{T'}{\Role'}} \vdash P' \triangleright \Delta' $. Using alpha-conversion before Rule~$ (\mathsf{R}) $ we can ensure that $ \Args \neq \Args' $. By the induction hypothesis, $ \Gamma, \Typed{\Args'}{\AT{T'}{\Role'}} \vdash P' \triangleright \Delta, \Typed{\AT{\Args}{\Role}}{T} $ implies $ \Gamma, \Typed{\Args'}{\AT{T'}{\Role'}} \vdash P'\!\Set[]{ \Subst{\Chan[s]}{\Args} } \triangleright \Delta, \Typed{\AT{\Chan[s]}{\Role}}{T} $. With Rule~$ (\mathsf{R}) $ we have $ \Gamma \vdash \PRes{\Args'}{\left( P'\!\Set[]{ \Subst{\Chan[s]}{\Args} } \right)} \triangleright \Delta, \Typed{\AT{\Chan[s]}{\Role}}{T} $. Since $ \Args \neq \Args' $, we have $ \Gamma \vdash P\!\Set[]{ \Subst{\Chan[s]}{\Args} } \triangleright \Delta, \Typed{\AT{\Chan[s]}{\Role}}{T} $. \item[Case Rule~$ (\mathsf{P}) $:] In this case $ P = \PReq{\Chan[s]'}{\Role_1}{\Role_2}{\Role_3}{\Chan[k]}{P'} $ and $ \Delta, \Typed{\AT{\Args}{\Role}}{T} = \Delta', \Typed{\AT{\Chan[s]'}{\Role_1}}{\LTReq{\Prot}{\Role_3}{\tilde{\Args[v]}}{\Role_2}{T_1}}, \Typed{\ATI{\Chan[k]}{\Role_3}}{T_3} $ and we have $ \Gamma \vdash P' \triangleright \Delta', \Typed{\AT{\Chan[s]'}{\Role_1}}{T_1} $, $ \GetType{\Prot} = \TypeOfProt{\tilde{\Role}_4}{\tilde{\Args[y]}}{\tilde{\Role}_5}{G} $, and $ \ProjS{G\!\Set[]{ \Subst{\tilde{\Args[v]}}{\tilde{\Args[y]}} }}{}{\Role_3} = T_3 $. Hence $ \Args \neq \Chan[k] $. We distinguish between the cases (1)~$ \Args = \Chan[s]' $ and (2)~$ \Args \neq \Chan[s]' $. \begin{enumerate}[(1)] \item Then $ \Delta = \Delta', \Typed{\ATI{\Chan[k]}{\Role_3}}{T_3} $, $ \Role = \Role_1 $, and $ T = \LTReq{\Prot}{\Role_3}{\tilde{\Args[v]}}{\Role_2}{T_1} $. By the induction hypothesis, $ \Gamma \vdash P' \triangleright \Delta', \Typed{\AT{\Args}{\Role}}{T_1} $ implies $ \Gamma \vdash P'\!\Set[]{ \Subst{\Chan[s]}{\Args} } \triangleright \Delta', \Typed{\AT{\Chan[s]}{\Role}}{T_1} $. With Rule~$ (\mathsf{P}) $, $ \GetType{\Prot} = \TypeOfProt{\tilde{\Role}_4}{\tilde{\Args[y]}}{\tilde{\Role}_5}{G} $, and $ \ProjS{G\!\Set[]{ \Subst{\tilde{\Args[v]}}{\tilde{\Args[y]}} }}{}{\Role_3} = T_3 $ we have $ \Gamma \vdash \PReq{\Chan[s]}{\Role}{\Role_2}{\Role_3}{\Chan[k]}{\left( P'\!\Set[]{ \Subst{\Chan[s]}{\Args} } \right)} \triangleright \Delta', \Typed{\AT{\Chan[s]}{\Role}}{\LTReq{\Prot}{\Role_3}{\tilde{\Args[v]}}{\Role_2}{T_1}}, \Typed{\ATI{\Chan[k]}{\Role_3}}{T_3} $. Hence $ \Gamma \vdash P\!\Set[]{ \Subst{\Chan[s]}{\Args} } \triangleright \Delta, \Typed{\AT{\Chan[s]}{\Role}}{T} $. \item Then $ \Delta = \Delta'', \Typed{\AT{\Chan[s]'}{\Role_1}}{\LTReq{\Prot}{\Role_3}{\tilde{\Args[v]}}{\Role_2}{T_1}}, \Typed{\ATI{\Chan[k]}{\Role_3}}{T_3} $ and $ \Delta' = \Delta'', \Typed{\AT{\Args}{\Role}}{T} $. By the induction hypothesis, $ \Gamma \vdash P' \triangleright \Delta'', \Typed{\AT{\Args}{\Role}}{T}, \Typed{\AT{\Chan[s]'}{\Role_1}}{T_1} $ implies $ \Gamma \vdash P'\!\Set[]{ \Subst{\Chan[s]}{\Args} } \triangleright \Delta'', \Typed{\AT{\Chan[s]}{\Role}}{T}, \Typed{\AT{\Chan[s]'}{\Role_1}}{T_1} $.
With Rule~$ (\mathsf{P}) $, $ \GetType{\Prot} = \TypeOfProt{\tilde{\Role}_4}{\tilde{\Args[y]}}{\tilde{\Role}_5}{G} $, and $ \ProjS{G\!\Set[]{ \Subst{\tilde{\Args[v]}}{\tilde{\Args[y]}} }}{}{\Role_3} = T_3 $ we have $ \Gamma \vdash \PReq{\Chan[s]'}{\Role_1}{\Role_2}{\Role_3}{\Chan[k]}{\left( P'\!\Set[]{ \Subst{\Chan[s]}{\Args} } \right)} \triangleright \Delta'', \Typed{\AT{\Chan[s]}{\Role}}{T}, \Typed{\AT{\Chan[s]'}{\Role_1}}{\LTReq{\Prot}{\Role_3}{\tilde{\Args[v]}}{\Role_2}{T_1}}, \Typed{\ATI{\Chan[k]}{\Role_3}}{T_3} $. Hence $ \Gamma \vdash P\!\Set[]{ \Subst{\Chan[s]}{\Args} } \triangleright \Delta, \Typed{\AT{\Chan[s]}{\Role}}{T} $. \end{enumerate} \item[Case Rule~$ (\mathsf{J}) $:] In this case we have $ P = \PEnt{\Chan[s]'}{\Role_1}{\Role_2}{\Role_3}{\Args'}{P'} $ and $ \Delta, \Typed{\AT{\Args}{\Role}}{T} = \Delta', \Typed{\AT{\Chan[s]'}{\Role_2}}{\LTEnt{\Prot}{\Role_3}{\tilde{\Args[v]}}{\Role_1}{T_2}} $ and we have $ \Gamma \vdash P' \triangleright \Delta', \Typed{\AT{\Chan[s]'}{\Role_2}}{T_2}, \Typed{\AT{\Args'}{\Role_3}}{T_3} $, $ \GetType{\Prot} = \TypeOfProt{\tilde{\Role}_4}{\tilde{\Args[y]}}{\tilde{\Role}_5}{G} $, and $ \ProjS{G\!\Set[]{ \Subst{\tilde{\Args[v]}}{\tilde{\Args[y]}} }}{}{\Role_3} = T_3 $. Using alpha-conversion before Rule~$ (\mathsf{J}) $ we can ensure that $ \Args \neq \Args' $. We distinguish between the cases (1)~$ \Args = \Chan[s]' $ and (2)~$ \Args \neq \Chan[s]' $. \begin{enumerate}[(1)] \item Then $ \Delta = \Delta' $, $ \Role = \Role_2 $, and $ T = \LTEnt{\Prot}{\Role_3}{\tilde{\Args[v]}}{\Role_1}{T_2} $. By the induction hypothesis, $ \Gamma \vdash P' \triangleright \Delta, \Typed{\AT{\Args}{\Role}}{T_2}, \Typed{\AT{\Args'}{\Role_3}}{T_3} $ implies $ \Gamma \vdash P'\!\Set[]{ \Subst{\Chan[s]}{\Args} } \triangleright \Delta, \Typed{\AT{\Chan[s]}{\Role}}{T_2}, \Typed{\AT{\Args'}{\Role_3}}{T_3} $. With Rule~$ (\mathsf{J}) $, $ \GetType{\Prot} = \TypeOfProt{\tilde{\Role}_4}{\tilde{\Args[y]}}{\tilde{\Role}_5}{G} $, and $ \ProjS{G\!\Set[]{ \Subst{\tilde{\Args[v]}}{\tilde{\Args[y]}} }}{}{\Role_3} = T_3 $ we have $ \Gamma \vdash \PEnt{\Chan[s]}{\Role_1}{\Role}{\Role_3}{\Args'}{\left( P'\!\Set[]{ \Subst{\Chan[s]}{\Args} } \right)} \triangleright \Delta, \Typed{\AT{\Chan[s]}{\Role}}{\LTEnt{\Prot}{\Role_3}{\tilde{\Args[v]}}{\Role_1}{T_2}} $. Since $ \Args \neq \Args' $, we have $ \Gamma \vdash P\!\Set[]{ \Subst{\Chan[s]}{\Args} } \triangleright \Delta, \Typed{\AT{\Chan[s]}{\Role}}{T} $. \item Then $ \Delta = \Delta'', \Typed{\AT{\Chan[s]'}{\Role_2}}{\LTEnt{\Prot}{\Role_3}{\tilde{\Args[v]}}{\Role_1}{T_2}} $ and $ \Delta' = \Delta'', \Typed{\AT{\Args}{\Role}}{T} $. By the induction hypothesis, \begin{align*} \Gamma \vdash P' \triangleright \Delta'', \Typed{\AT{\Args}{\Role}}{T}, \Typed{\AT{\Chan[s]'}{\Role_2}}{T_2}, \Typed{\AT{\Args'}{\Role_3}}{T_3} \end{align*} implies $ \Gamma \vdash P'\!\Set[]{ \Subst{\Chan[s]}{\Args} } \triangleright \Delta'', \Typed{\AT{\Chan[s]}{\Role}}{T}, \Typed{\AT{\Chan[s]'}{\Role_2}}{T_2}, \Typed{\AT{\Args'}{\Role_3}}{T_3} $. 
With Rule~$ (\mathsf{J}) $, $ \GetType{\Prot} = \TypeOfProt{\tilde{\Role}_4}{\tilde{\Args[y]}}{\tilde{\Role}_5}{G} $, and $ \ProjS{G\!\Set[]{ \Subst{\tilde{\Args[v]}}{\tilde{\Args[y]}} }}{}{\Role_3} = T_3 $ we have: \begin{align*} \Gamma \vdash \PEnt{\Chan[s]'}{\Role_1}{\Role_2}{\Role_3}{\Args'}{\left( P'\!\Set[]{ \Subst{\Chan[s]}{\Args} } \right)} \triangleright \Delta'', \Typed{\AT{\Chan[s]}{\Role}}{T}, \Typed{\AT{\Chan[s]'}{\Role_2}}{\LTEnt{\Prot}{\Role_3}{\tilde{\Args[v]}}{\Role_1}{T_2}} \end{align*} Since $ \Args \neq \Args' $, we have $ \Gamma \vdash P\!\Set[]{ \Subst{\Chan[s]}{\Args} } \triangleright \Delta, \Typed{\AT{\Chan[s]}{\Role}}{T} $. \end{enumerate} \item[Case Rule~$ (\mathsf{New}) $:] In this case $ P = \PDecl{\Chan[k]}{\Chan[s]'}{\tilde{\Args[v]}}{\tilde{\Chan}}{\tilde{\Role}'''}{P'} $ and $ \Delta, \Typed{\AT{\Args}{\Role}}{T} = \Delta', \Typed{\AT{\Chan[s]'}{\Role'}}{\LTCall{\Prot}{G}{\tilde{\Args[v]}}{\Typed{\tilde{\Args[y]}}{\tilde{\Sort}}}{\tilde{\Role}'''}{T'}} $ and we have $ \Gamma \vdash P' \triangleright \Delta', \Typed{\AT{\Chan[s]'}{\Role'}}{T'}, \Typed{\ATI{\Chan[k]}{\Role''_1}}{T'_1}, \ldots, \Typed{\ATI{\Chan[k]}{\Role''_n}}{T'_n}, \Typed{\ATE{\Chan[k]}{\Role'''_1}}{T'_{n + 1}}, \ldots, \Typed{\ATE{\Chan[k]}{\Role'''_m}}{T'_{n + m}} $, $ \GetType{\Prot} = \TypeOfProt{\tilde{\Role}''}{\tilde{\Args[y]}}{\tilde{\Role}'''}{G} $, $ \forall i \logdot \GetType{\Chan_i} = \AT{T'_{i + n}}{\Role'''_{i + n}} $, $ \forall i \logdot \ProjS{G\!\Set[]{ \Subst{\tilde{\Args[v]}}{\tilde{\Args[y]}} }}{}{\Role''_i} = T'_i $, $ \forall j \logdot \ProjS{G\!\Set[]{ \Subst{\tilde{\Args[v]}}{\tilde{\Args[y]}} }}{}{\Role'''_j} = T'_{j + n} $, $ \vdash \Typed{\tilde{\Args[v]}}{\tilde{\Sort}} $, and $ \GetType{\Chan[k]} = {\Prot\!\Set[]{ \Subst{\tilde{\Args[v]}}{\tilde{\Args[y]}} }} $. Since $ \vdash \Typed{\tilde{\Args[v]}}{\tilde{\Sort}} $, we have $ \Args \notin \tilde{\Args[v]} $. We distinguish between the cases (1)~$ \Args = \Chan[s]' $ and (2)~$ \Args \neq \Chan[s]' $. \begin{enumerate}[(1)] \item Then $ \Delta = \Delta' $, $ \Role = \Role' $, and $ T = \LTCall{\Prot}{G}{\tilde{\Args[v]}}{\Typed{\tilde{\Args[y]}}{\tilde{\Sort}}}{\tilde{\Role}'''}{T'} $. 
By the induction hypothesis, \begin{align*} \Gamma \vdash P' \triangleright & \Delta, \Typed{\AT{\Args}{\Role}}{T'}, \Typed{\ATI{\Chan[k]}{\Role''_1}}{T'_1}, \ldots, \Typed{\ATI{\Chan[k]}{\Role''_n}}{T'_n}, \Typed{\ATE{\Chan[k]}{\Role'''_1}}{T'_{n + 1}}, \ldots, \Typed{\ATE{\Chan[k]}{\Role'''_m}}{T'_{n + m}} \end{align*} implies \begin{align*} \Gamma \vdash P'\!\Set[]{ \Subst{\Chan[s]}{\Args} } \triangleright & \Delta, \Typed{\AT{\Chan[s]}{\Role}}{T'}, \Typed{\ATI{\Chan[k]}{\Role''_1}}{T'_1}, \ldots, \Typed{\ATI{\Chan[k]}{\Role''_n}}{T'_n}, \Typed{\ATE{\Chan[k]}{\Role'''_1}}{T'_{n + 1}}, \ldots, \Typed{\ATE{\Chan[k]}{\Role'''_m}}{T'_{n + m}} \end{align*} With Rule~$ (\mathsf{New}) $ and $ \GetType{\Prot} = \TypeOfProt{\tilde{\Role}''}{\tilde{\Args[y]}}{\tilde{\Role}'''}{G} $ and $ \forall i \logdot \GetType{\Chan_i} = \AT{T'_{i + n}}{\Role'''_{i + n}} $, $ \forall i \logdot \ProjS{G\!\Set[]{ \Subst{\tilde{\Args[v]}}{\tilde{\Args[y]}} }}{}{\Role''_i} = T'_i $ and $ \forall j \logdot \ProjS{G\!\Set[]{ \Subst{\tilde{\Args[v]}}{\tilde{\Args[y]}} }}{}{\Role'''_j} = T'_{j + n} $, $ \vdash \Typed{\tilde{\Args[v]}}{\tilde{\Sort}} $ and $ \GetType{\Chan[k]} = {\Prot\!\Set[]{ \Subst{\tilde{\Args[v]}}{\tilde{\Args[y]}} }} $ we have \begin{align*} & \Gamma \vdash \PDecl{\Chan[k]}{\Chan[s]'}{\tilde{\Args[v]}}{\tilde{\Chan}}{\tilde{\Role}'''}{\left( P'\!\Set[]{ \Subst{\Chan[s]}{\Args} } \right)} \triangleright \; \Delta, \Typed{\AT{\Chan[s]}{\Role}}{\LTCall{\Prot}{G}{\tilde{\Args[v]}}{\Typed{\tilde{\Args[y]}}{\tilde{\Sort}}}{\tilde{\Role}'''}{T'}} \end{align*} Since $ \Args \notin \tilde{\Args[v]} $, we have $ \Gamma \vdash P\!\Set[]{ \Subst{\Chan[s]}{\Args} } \triangleright \Delta, \Typed{\AT{\Chan[s]}{\Role}}{T} $. \item Then $ \Delta = \Delta'', \Typed{\AT{\Chan[s]'}{\Role'}}{\LTCall{\Prot}{G}{\tilde{\Args[v]}}{\Typed{\tilde{\Args[y]}}{\tilde{\Sort}}}{\tilde{\Role}'''}{T'}} $ and $ \Delta' = \Delta'', \Typed{\AT{\Args}{\Role}}{T} $. By the induction hypothesis, $ \Gamma \vdash P' \triangleright \Delta'', \Typed{\AT{\Args}{\Role}}{T}, \Typed{\AT{\Chan[s]'}{\Role'}}{T'}, \Typed{\ATI{\Chan[k]}{\Role''_1}}{T'_1}, \ldots, \Typed{\ATI{\Chan[k]}{\Role''_n}}{T'_n}, \Typed{\ATE{\Chan[k]}{\Role'''_1}}{T'_{n + 1}}, \ldots, \Typed{\ATE{\Chan[k]}{\Role'''_m}}{T'_{n + m}} $ implies $ \Gamma \vdash P'\!\Set[]{ \Subst{\Chan[s]}{\Args} } \triangleright \Delta'', \Typed{\AT{\Chan[s]}{\Role}}{T}, \Typed{\AT{\Chan[s]'}{\Role'}}{T'}, \Typed{\ATI{\Chan[k]}{\Role''_1}}{T'_1}, \ldots, \Typed{\ATI{\Chan[k]}{\Role''_n}}{T'_n}, \Typed{\ATE{\Chan[k]}{\Role'''_1}}{T'_{n + 1}}, \ldots, \Typed{\ATE{\Chan[k]}{\Role'''_m}}{T'_{n + m}} $. 
With Rule~$ (\mathsf{New}) $ and $ \GetType{\Prot} = \TypeOfProt{\tilde{\Role}''}{\tilde{\Args[y]}}{\tilde{\Role}'''}{G} $ and $ \forall i \logdot \GetType{\Chan_i} = \AT{T'_{i + n}}{\Role'''_{i + n}} $ and $ \forall i \logdot \ProjS{G\!\Set[]{ \Subst{\tilde{\Args[v]}}{\tilde{\Args[y]}} }}{}{\Role''_i} = T'_i $ and $ \forall j \logdot \ProjS{G\!\Set[]{ \Subst{\tilde{\Args[v]}}{\tilde{\Args[y]}} }}{}{\Role'''_j} = T'_{j + n} $ and $ \vdash \Typed{\tilde{\Args[v]}}{\tilde{\Sort}} $ and $ \GetType{\Chan[k]} = {\Prot\!\Set[]{ \Subst{\tilde{\Args[v]}}{\tilde{\Args[y]}} }} $ we have \begin{align*} & \Gamma \vdash \PDecl{\Chan[k]}{\Chan[s]'}{\tilde{\Args[v]}}{\tilde{\Chan}}{\tilde{\Role}'''}{\left( P'\!\Set[]{ \Subst{\Chan[s]}{\Args} } \right)} \triangleright \; \Delta'', \Typed{\AT{\Chan[s]}{\Role}}{T}, \Typed{\AT{\Chan[s]'}{\Role'}}{\LTCall{\Prot}{G}{\tilde{\Args[v]}}{\Typed{\tilde{\Args[y]}}{\tilde{\Sort}}}{\tilde{\Role}'''}{T'}} \end{align*} Since $ \Args \notin \tilde{\Args[v]} $, we have $ \Gamma \vdash P\!\Set[]{ \Subst{\Chan[s]}{\Args} } \triangleright \Delta, \Typed{\AT{\Chan[s]}{\Role}}{T} $. \end{enumerate} \item[Case Rule~$ (\mathsf{S1}) $:] In this case $ P = \PChoi{P_1}{P_2} $ and $ \Delta, \Typed{\AT{\Args}{\Role}}{T} = \Delta', \Typed{\AT{\Chan[s]'}{\Role'}}{T_1 \oplus T_2} $ and we have $ \Gamma \vdash P_1 \triangleright \Delta', \Typed{\AT{\Chan[s]'}{\Role'}}{T_1} $ and $ \Gamma \vdash P_2 \triangleright \Delta', \Typed{\AT{\Chan[s]'}{\Role'}}{T_2} $. We distinguish between the cases (1)~$ \Args = \Chan[s]' $ and (2)~$ \Args \neq \Chan[s]' $. \begin{enumerate}[(1)] \item Then $ \Delta = \Delta' $, $ \Role = \Role' $, and $ T = T_1 \oplus T_2 $. By the induction hypothesis, $ \Gamma \vdash P_1 \triangleright \Delta, \Typed{\AT{\Args}{\Role}}{T_1} $ and $ \Gamma \vdash P_2 \triangleright \Delta, \Typed{\AT{\Args}{\Role}}{T_2} $ imply $ \Gamma \vdash P_1\!\Set[]{ \Subst{\Chan[s]}{\Args} } \triangleright \Delta, \Typed{\AT{\Chan[s]}{\Role}}{T_1} $ and $ \Gamma \vdash P_2\!\Set[]{ \Subst{\Chan[s]}{\Args} } \triangleright \Delta, \Typed{\AT{\Chan[s]}{\Role}}{T_2} $. With Rule~$ (\mathsf{S1}) $ we have $ \Gamma \vdash \PChoi{\left( P_1\!\Set[]{ \Subst{\Chan[s]}{\Args} } \right)}{\left( P_2\!\Set[]{ \Subst{\Chan[s]}{\Args} } \right)} \triangleright \Delta, \Typed{\AT{\Chan[s]}{\Role}}{\LTChoi{T_1}{T_2}} $. Hence $ \Gamma \vdash P\!\Set[]{ \Subst{\Chan[s]}{\Args} } \triangleright \Delta, \Typed{\AT{\Chan[s]}{\Role}}{T} $. \item Then $ \Delta = \Delta'', \Typed{\AT{\Chan[s]'}{\Role'}}{T_1 \oplus T_2} $ and $ \Delta' = \Delta'', \Typed{\AT{\Args}{\Role}}{T} $. By the induction hypothesis, $ \Gamma \vdash P_1 \triangleright \Delta'', \Typed{\AT{\Args}{\Role}}{T}, \Typed{\AT{\Chan[s]'}{\Role'}}{T_1} $ and $ \Gamma \vdash P_2 \triangleright \Delta'', \Typed{\AT{\Args}{\Role}}{T}, \Typed{\AT{\Chan[s]'}{\Role'}}{T_2} $ imply $ \Gamma \vdash P_1\!\Set[]{ \Subst{\Chan[s]}{\Args} } \triangleright \Delta'', \Typed{\AT{\Chan[s]}{\Role}}{T}, \Typed{\AT{\Chan[s]'}{\Role'}}{T_1} $ and $ \Gamma \vdash P_2\!\Set[]{ \Subst{\Chan[s]}{\Args} } \triangleright \Delta'', \Typed{\AT{\Chan[s]}{\Role}}{T}, \Typed{\AT{\Chan[s]'}{\Role'}}{T_2} $. With Rule~$ (\mathsf{S1}) $ we have $ \Gamma \vdash \PChoi{\left( P_1\!\Set[]{ \Subst{\Chan[s]}{\Args} } \right)}{\left( P_2\!\Set[]{ \Subst{\Chan[s]}{\Args} } \right)} \triangleright \Delta'', \Typed{\AT{\Chan[s]}{\Role}}{T}, \Typed{\AT{\Chan[s]'}{\Role'}}{\LTChoi{T_1}{T_2}} $. 
Hence $ \Gamma \vdash P\!\Set[]{ \Subst{\Chan[s]}{\Args} } \triangleright \Delta, \Typed{\AT{\Chan[s]}{\Role}}{T} $. \end{enumerate} \item[Case Rule~$ (\mathsf{S2}) $:] In this case $ \Delta, \Typed{\AT{\Args}{\Role}}{T} = \Delta', \Typed{\AT{\Chan[s]'}{\Role'}}{T_1 \oplus T_2} $ and we have $ \Gamma \vdash P \triangleright \Delta', \Typed{\AT{\Chan[s]'}{\Role'}}{T_i} $ with $ i \in \Set[]{ 1, 2 } $. We distinguish between the cases (1)~$ \Args = \Chan[s]' $ and (2)~$ \Args \neq \Chan[s]' $. \begin{enumerate}[(1)] \item Then $ \Delta = \Delta' $, $ \Role = \Role' $, and $ T = T_1 \oplus T_2 $. By the induction hypothesis, $ \Gamma \vdash P \triangleright \Delta, \Typed{\AT{\Args}{\Role}}{T_i} $ implies $ \Gamma \vdash P\!\Set[]{ \Subst{\Chan[s]}{\Args} } \triangleright \Delta, \Typed{\AT{\Chan[s]}{\Role}}{T_i} $. With Rule~$ (\mathsf{S2}) $ we have $ \Gamma \vdash P\!\Set[]{ \Subst{\Chan[s]}{\Args} } \triangleright \Delta, \Typed{\AT{\Chan[s]}{\Role}}{\LTChoi{T_1}{T_2}} $. Hence $ \Gamma \vdash P\!\Set[]{ \Subst{\Chan[s]}{\Args} } \triangleright \Delta, \Typed{\AT{\Chan[s]}{\Role}}{T} $. \item Then $ \Delta = \Delta'', \Typed{\AT{\Chan[s]'}{\Role'}}{T_1 \oplus T_2} $ and $ \Delta' = \Delta'', \Typed{\AT{\Args}{\Role}}{T} $. By the induction hypothesis, $ \Gamma \vdash P \triangleright \Delta'', \Typed{\AT{\Args}{\Role}}{T}, \Typed{\AT{\Chan[s]'}{\Role'}}{T_i} $ implies $ \Gamma \vdash P\!\Set[]{ \Subst{\Chan[s]}{\Args} } \triangleright \Delta'', \Typed{\AT{\Chan[s]}{\Role}}{T}, \Typed{\AT{\Chan[s]'}{\Role'}}{T_i} $. With Rule~$ (\mathsf{S2}) $ we have $ \Gamma \vdash P\!\Set[]{ \Subst{\Chan[s]}{\Args} } \triangleright \Delta'', \Typed{\AT{\Chan[s]}{\Role}}{T}, \Typed{\AT{\Chan[s]'}{\Role'}}{\LTChoi{T_1}{T_2}} $. Hence $ \Gamma \vdash P\!\Set[]{ \Subst{\Chan[s]}{\Args} } \triangleright \Delta, \Typed{\AT{\Chan[s]}{\Role}}{T} $. \end{enumerate} \item[Case Rule~$ (\mathsf{Pa}) $:] In this case $ P = \PPar{P_1}{P_2} $ and $ \Delta, \Typed{\AT{\Args}{\Role}}{T} = \Delta_1 \otimes \Delta_2 $ and we have $ \Gamma \vdash P_1 \triangleright \Delta_1 $ and $ \Gamma \vdash P_2 \triangleright \Delta_2 $. Hence $ \Typed{\AT{\Args}{\Role}}{T_1} \in \Delta_1 $, $ \Typed{\AT{\Args}{\Role}}{T_2} \in \Delta_2 $, and $ T = \LTPar{T_1}{T_2} $. Then $ \Delta_1 = \Delta_1', \Typed{\AT{\Args}{\Role}}{T_1} $, $ \Delta_2 = \Delta_2', \Typed{\AT{\Args}{\Role}}{T_2} $, and $ \Delta = \Delta_1' \otimes \Delta_2' $. By the induction hypothesis, $ \Gamma \vdash P_1 \triangleright \Delta_1', \Typed{\AT{\Args}{\Role}}{T_1} $ and $ \Gamma \vdash P_2 \triangleright \Delta_2', \Typed{\AT{\Args}{\Role}}{T_2} $ imply $ \Gamma \vdash P_1\!\Set[]{ \Subst{\Chan[s]}{\Args} } \triangleright \Delta_1', \Typed{\AT{\Chan[s]}{\Role}}{T_1} $ and $ \Gamma \vdash P_2\!\Set[]{ \Subst{\Chan[s]}{\Args} } \triangleright \Delta_2', \Typed{\AT{\Chan[s]}{\Role}}{T_2} $. With Rule~$ (\mathsf{Pa}) $ we have $ \Gamma \vdash \PPar{\left( P_1\!\Set[]{ \Subst{\Chan[s]}{\Args} } \right)}{\left( P_2\!\Set[]{ \Subst{\Chan[s]}{\Args} } \right)} \triangleright \left( \Delta_1', \Typed{\AT{\Chan[s]}{\Role}}{T_1} \right) \otimes \left( \Delta_2', \Typed{\AT{\Chan[s]}{\Role}}{T_2} \right) $. Hence $ \Gamma \vdash P\!\Set[]{ \Subst{\Chan[s]}{\Args} } \triangleright \Delta, \Typed{\AT{\Chan[s]}{\Role}}{T} $. 
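To illustrate the pointwise combination performed by $ \otimes $ in this case, consider a small example (under the assumption that the second environment contains an additional assignment for some $ \AT{\Chan[s]'}{\Role'} $ that does not occur in the first one): \begin{align*} \left( \Typed{\AT{\Args}{\Role}}{T_1} \right) \otimes \left( \Typed{\AT{\Args}{\Role}}{T_2}, \Typed{\AT{\Chan[s]'}{\Role'}}{T_3} \right) = \Typed{\AT{\Args}{\Role}}{\LTPar{T_1}{T_2}}, \Typed{\AT{\Chan[s]'}{\Role'}}{T_3} \end{align*} Substituting $ \Chan[s] $ for $ \Args $ in both components of the split accordingly yields the combined assignment $ \Typed{\AT{\Chan[s]}{\Role}}{\LTPar{T_1}{T_2}} $, which is exactly what the induction hypothesis and Rule~$ (\mathsf{Pa}) $ provide above.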
\item[Case Rule~$ (\mathsf{Opt}) $:] Here $ P = \POpt{\Role_1}{\tilde{\Role}}{P_1}{\tilde{\Args}'}{\tilde{\Args[v]}}{P_2} $ and $ \Delta, \Typed{\AT{\Args}{\Role}}{T} = \Delta_1 \otimes \Delta_2, \Typed{\AT{\Chan[s]'}{\Role_1}}{\LTOpt{\tilde{\Role}}{T_1}{\Typed{\tilde{\Args[y]}}{\tilde{\Sort}}}{T_2}} $, and we have $ \Gamma \vdash P_1 \triangleright \Delta_1, \Typed{\AT{\Chan[s]'}{\Role_1}}{T_1}, \Typed{\Role_1}{\OV{\tilde{\Sort}}} $, $ \nexists \Role', \Sort[K] \logdot \Typed{\Role'}{\OV{\tilde{\Sort[K]}}} \in \Delta_1 $, $ \Gamma \vdash P_2 \triangleright \Delta_2, \Typed{\AT{\Chan[s]'}{\Role_1}}{T_2} $, $ \vdash \Typed{\tilde{\Args}'}{\tilde{\Sort}} $, and $ \vdash \Typed{\tilde{\Args[v]}}{\tilde{\Sort}} $. Hence $ \Args \notin \tilde{\Args[v]} $ and $ \Args \notin \tilde{\Args[y]} $. Using alpha-conversion before Rule~$ (\mathsf{Opt}) $ we can ensure that $ \Args \notin \tilde{\Args}' $. We distinguish between the cases (1)~$ \Args = \Chan[s]' $ and (2)~$ \Args \neq \Chan[s]' $. \begin{enumerate}[(1)] \item Then $ \Delta = \Delta_1 \otimes \Delta_2 $, $ \Role = \Role_1 $, and $ T = \LTOpt{\tilde{\Role}}{T_1}{\Typed{\tilde{\Args[y]}}{\tilde{\Sort}}}{T_2} $. By the induction hypothesis, $ \Gamma \vdash P_1 \triangleright \Delta_1, \Typed{\AT{\Args}{\Role}}{T_1}, \Typed{\Role_1}{\OV{\tilde{\Sort}}} $ and $ \Gamma \vdash P_2 \triangleright \Delta_2, \Typed{\AT{\Args}{\Role}}{T_2} $ imply $ \Gamma \vdash P_1\!\Set[]{ \Subst{\Chan[s]}{\Args} } \triangleright \Delta_1, \Typed{\AT{\Chan[s]}{\Role}}{T_1}, \Typed{\Role_1}{\OV{\tilde{\Sort}}} $ and $ \Gamma \vdash P_2\!\Set[]{ \Subst{\Chan[s]}{\Args} } \triangleright \Delta_2, \Typed{\AT{\Chan[s]}{\Role}}{T_2} $. With Rule~$ (\mathsf{Opt}) $, $ \nexists \Role', \Sort[K] \logdot \Typed{\Role'}{\OV{\tilde{\Sort[K]}}} \in \Delta_1 $, $ \vdash \Typed{\tilde{\Args}'}{\tilde{\Sort}} $, and $ \vdash \Typed{\tilde{\Args[v]}}{\tilde{\Sort}} $ we have $ \Gamma \vdash \POpt{\Role_1}{\tilde{\Role}}{P_1\!\Set[]{ \Subst{\Chan[s]}{\Args} }}{\tilde{\Args}'}{\tilde{\Args[v]}}{\left( P_2\!\Set[]{ \Subst{\Chan[s]}{\Args} } \right)} \triangleright \Delta_1 \otimes \Delta_2, \Typed{\AT{\Chan[s]}{\Role}}{\LTOpt{\tilde{\Role}}{T_1}{\Typed{\tilde{\Args[y]}}{\tilde{\Sort}}}{T_2}} $. Since $ \Args \notin \tilde{\Args}' $ and $ \Args \notin \tilde{\Args[v]} $, we have $ \Gamma \vdash P\!\Set[]{ \Subst{\Chan[s]}{\Args} } \triangleright \Delta, \Typed{\AT{\Chan[s]}{\Role}}{T} $. \item Then $ \Delta = \Delta_1' \otimes \Delta_2', \Typed{\AT{\Chan[s]'}{\Role_1}}{\LTOpt{\tilde{\Role}}{T_1}{\Typed{\tilde{\Args[y]}}{\tilde{\Sort}}}{T_2}} $, $ \Delta_1 = \Delta_1', \Typed{\AT{\Args}{\Role}}{T_1'} $, $ \Delta_2 = \Delta_2', \Typed{\AT{\Args}{\Role}}{T_2'} $, and $ T = \LTPar{T_1'}{T_2'} $. By the induction hypothesis, $ \Gamma \vdash P_1 \triangleright \Delta_1', \Typed{\AT{\Args}{\Role}}{T_1'}, \Typed{\AT{\Chan[s]'}{\Role_1}}{T_1}, \Typed{\Role_1}{\OV{\tilde{\Sort}}} $ and $ \Gamma \vdash P_2 \triangleright \Delta_2', \Typed{\AT{\Args}{\Role}}{T_2'}, \Typed{\AT{\Chan[s]'}{\Role_1}}{T_2} $ imply $ \Gamma \vdash P_1\!\Set[]{ \Subst{\Chan[s]}{\Args} } \triangleright \Delta_1', \Typed{\AT{\Chan[s]}{\Role}}{T_1'}, \Typed{\AT{\Chan[s]'}{\Role_1}}{T_1}, \Typed{\Role_1}{\OV{\tilde{\Sort}}} $ and $ \Gamma \vdash P_2\!\Set[]{ \Subst{\Chan[s]}{\Args} } \triangleright \Delta_2', \Typed{\AT{\Chan[s]}{\Role}}{T_2'}, \Typed{\AT{\Chan[s]'}{\Role_1}}{T_2} $. 
With Rule~$ (\mathsf{Opt}) $, $ \nexists \Role', \Sort[K] \logdot \Typed{\Role'}{\OV{\tilde{\Sort[K]}}} \in \Delta_1 $, $ \vdash \Typed{\tilde{\Args}'}{\tilde{\Sort}} $, and $ \vdash \Typed{\tilde{\Args[v]}}{\tilde{\Sort}} $ we have that $ \Gamma \vdash \POpt{\Role_1}{\tilde{\Role}}{P_1\!\Set[]{ \Subst{\Chan[s]}{\Args} }}{\tilde{\Args}'}{\tilde{\Args[v]}}{\left( P_2\!\Set[]{ \Subst{\Chan[s]}{\Args} } \right)} \triangleright \left( \Delta_1', \Typed{\AT{\Chan[s]}{\Role}}{T_1'} \right) \otimes \left( \Delta_2', \Typed{\AT{\Chan[s]}{\Role}}{T_2'} \right), \Typed{\AT{\Chan[s]'}{\Role_1}}{\LTOpt{\tilde{\Role}}{T_1}{\Typed{\tilde{\Args[y]}}{\tilde{\Sort}}}{T_2}} $. Since $ \Args \notin \tilde{\Args}' $ and $ \Args \notin \tilde{\Args[v]} $, we have $ \Gamma \vdash P\!\Set[]{ \Subst{\Chan[s]}{\Args} } \triangleright \Delta, \Typed{\AT{\Chan[s]}{\Role}}{T} $. \end{enumerate} \end{description} The proof for the session types with optional blocks but without sub-sessions is similar but omits the cases for the Rules~$ (\mathsf{P}) $, $ (\mathsf{J}) $, and $ (\mathsf{New}) $. The remaining cases do not rely on the Rules~$ (\mathsf{P}) $, $ (\mathsf{J}) $, or $ (\mathsf{New}) $. \end{proof} Moreover, values of the same kind can be substituted in the process without changing the session environment. \begin{lemma} \label{lem:typeSubstB} For both type systems: If $ \Gamma \vdash P \triangleright \Delta $, $ \vdash \Typed{\Args[y]}{\Sort} $, and $ \vdash \Typed{\Args[v]}{\Sort} $ then $ \Gamma \vdash P\!\Set[]{ \Subst{\Args[v]}{\Args[y]} } \triangleright \Delta $. \end{lemma} \begin{proof} We start with the larger type system, \ie the session types with optional blocks and sub-sessions. Assume $ \Gamma \vdash P \triangleright \Delta $, $ \vdash \Typed{\Args[y]}{\Sort} $, and $ \vdash \Typed{\Args[v]}{\Sort} $. We perform an induction on the derivation of the judgement from the typing rules of Figure~\ref{fig:typingRules}. Note that the Rules~$ (\mathsf{N}) $ and $ (\mathsf{OptE}) $ refer to base cases, while the remaining rules refer to the induction steps. Also note that the only rules with free values that can be substituted are $ (\mathsf{OptE}) $, $ (\mathsf{S}) $, $ (\mathsf{New}) $, and $ (\mathsf{Opt}) $. For these rules we have to check that the kind of values is respected and that a substitution of a value in the process does not conflict with the session environment required by the respective rule. We explicitly avoid the substitution of bound names by using alpha-conversion. \begin{description} \item[Case Rule~$ (\mathsf{N}) $:] In this case $ P = \PEnd $ and $ \Delta = \emptyset $. By Rule~$ (\mathsf{N}) $, we have $ \Gamma \vdash \PEnd \triangleright \emptyset $. Hence $ \Gamma \vdash P\!\Set[]{ \Subst{\Args[v]}{\Args[y]} } \triangleright \Delta $. \item[Case Rule~$ (\mathsf{OptE}) $:] In this case $ P = \POptEnd{\Role}{\tilde{\Args[v]}'} $ and $ \Delta = \Typed{\Role}{\OV{\tilde{\Sort}'}} $ and we have $ \vdash \Typed{\tilde{\Args[v]}'}{\tilde{\Sort}'} $. Hence, if $ \tilde{\Args[v]}'_i = \Args[y] $, then $ \tilde{\Sort}'_i = \Sort $ and thus the kinds of $ \tilde{\Args[v]}'\!\Set[]{ \Subst{\Args[v]}{\Args[y]} } $ and $ \tilde{\Args[v]}' $ coincide. Thus $ \vdash \Typed{\tilde{\Args[v]}'}{\tilde{\Sort}'} $ implies $ \vdash \Typed{\tilde{\Args[v]}'\!\Set[]{ \Subst{\Args[v]}{\Args[y]} }}{\tilde{\Sort}'} $. With Rule~$ (\mathsf{OptE}) $ we have $ \Gamma \vdash \POptEnd{\Role}{\tilde{\Args[v]}'} \!\Set[]{ \Subst{\Args[v]}{\Args[y]} } \triangleright \Typed{\Role}{\OV{\tilde{\Sort}'}} $.
Hence $ \Gamma \vdash P\!\Set[]{ \Subst{\Args[v]}{\Args[y]} } \triangleright \Delta $. \item[Case Rule~$ (\mathsf{I}) $:] In this case $ P = \PInp{\Chan}{\Args}{P'} $ and we have $ \Gamma \vdash P' \triangleright \Delta, \Typed{\AT{\Args}{\Role}}{T} $ and $ \GetType{\Chan} = \AT{T}{\Role} $. Using alpha-conversion before Rule~$ (\mathsf{I}) $ we can ensure that $ \Args \notin \Set[]{ \Args[v], \Args[y] } $. Because of $ \vdash \Typed{\Args[y]}{\Sort} $ and $ \vdash \Typed{\Args[v]}{\Sort} $, we have $ \Chan \notin \Set[]{ \Args[v], \Args[y] } $. Then, by the induction hypothesis, $ \Gamma \vdash P' \triangleright \Delta, \Typed{\AT{\Args}{\Role}}{T} $ implies $ \Gamma \vdash P'\!\Set[]{ \Subst{\Args[v]}{\Args[y]} } \triangleright \Delta, \Typed{\AT{\Args}{\Role}}{T} $. With Rule~$ (\mathsf{I}) $ and $ \GetType{\Chan} = \AT{T}{\Role} $ we have $ \Gamma \vdash \PInp{\Chan}{\Args}{\left( P'\!\Set[]{ \Subst{\Args[v]}{\Args[y]} } \right)} \triangleright \Delta $. Since $ \Chan, \Args \notin \Set[]{ \Args[v], \Args[y] } $, then $ \Gamma \vdash P\!\Set[]{ \Subst{\Args[v]}{\Args[y]} } \triangleright \Delta $. \item[Case Rule~$ (\mathsf{O}) $:] In this case $ P = \POut{\Chan}{\Chan[s]}{P'} $ and $ \Delta = \Delta', \Typed{\ATE{\Chan[s]}{\Role}}{T} $ and we have $ \Gamma \vdash P' \triangleright \Delta' $ and $ \GetType{\Chan} = \AT{T}{\Role} $. Because of $ \vdash \Typed{\Args[y]}{\Sort} $ and $ \vdash \Typed{\Args[v]}{\Sort} $, we have $ \Chan, \Chan[s] \notin \Set[]{ \Args[v], \Args[y] } $. By the induction hypothesis, $ \Gamma \vdash P' \triangleright \Delta' $ implies $ \Gamma \vdash P'\!\Set[]{ \Subst{\Args[v]}{\Args[y]} } \triangleright \Delta' $. With Rule~$ (\mathsf{O}) $ and $ \GetType{\Chan} = \AT{T}{\Role} $ we have $ \Gamma \vdash \POut{\Chan}{\Chan[s]}{\left( P'\!\Set[]{ \Subst{\Args[v]}{\Args[y]} } \right)} \triangleright \Delta', \Typed{\ATE{\Chan[s]}{\Role}}{T} $. Since $ \Chan, \Chan[s] \notin \Set[]{ \Args[v], \Args[y] } $, we have $ \Gamma \vdash P\!\Set[]{ \Subst{\Args[v]}{\Args[y]} } \triangleright \Delta $. \item[Case Rule~$ (\mathsf{C}) $:] In this case we have $ P = \PGet{\Chan[k]}{\Role_1}{\Role_2}{_{i \in \indexSet} \Set{ \PLab{\Labe_i}{\tilde{\Args[y]}'_i}{P_i} }} $, $ \Delta = \Delta', \Typed{\AT{\Chan[k]}{\Role_2}}{\LTGet{\Role_1}{_{i \in \indexSet{}} \Set{ \LTLab{\Labe_i}{\Typed{\tilde{\Args}_i}{\tilde{\Sort}'_i}}{T_i} }}} $ and we have $ \Gamma \vdash P_i \triangleright \Delta', \Typed{\AT{\Chan[k]}{\Role_2}}{T_i} $ and $ \vdash \Typed{\tilde{\Args[y]}'_i}{\tilde{\Sort}'_i} $ for all $ i \in \indexSet $. Using alpha-conversion before Rule~$ (\mathsf{C}) $ we can ensure that $ \Args[v], \Args[y] \notin \tilde{\Args[y]}'_i $ and $ \Args[v], \Args[y] \notin \tilde{\Args}_i $ for all $ i \in \indexSet $. Because of $ \vdash \Typed{\Args[y]}{\Sort} $ and $ \vdash \Typed{\Args[v]}{\Sort} $, we have $ \Chan[k] \notin \Set[]{ \Args[v], \Args[y] } $. By the induction hypothesis, $ \Gamma \vdash P_i \triangleright \Delta', \Typed{\AT{\Chan[k]}{\Role_2}}{T_i} $ implies $ \Gamma \vdash P_i\!\Set[]{ \Subst{\Args[v]}{\Args[y]} } \triangleright \Delta', \Typed{\AT{\Chan[k]}{\Role_2}}{T_i} $ for all $ i \in \indexSet $. 
With Rule~$ (\mathsf{C}) $ and $ \vdash \Typed{\tilde{\Args[y]}'_i}{\tilde{\Sort}'_i} $ for all $ i \in \indexSet $ we have $ \Gamma \vdash \PGet{\Chan[k]}{\Role_1}{\Role_2}{_{i \in \indexSet} \Set{ \PLab{\Labe_i}{\tilde{\Args[y]}'_i}{\left( P_i\!\Set[]{ \Subst{\Args[v]}{\Args[y]} } \right)} }} \triangleright \Delta', \Typed{\AT{\Chan[k]}{\Role_2}}{\LTGet{\Role_1}{_{i \in \indexSet{}} \Set{ \LTLab{\Labe_i}{\Typed{\tilde{\Args}_i}{\tilde{\Sort}'_i}}{T_i} }}} $. Since $ \Chan[k] \notin \Set[]{ \Args[v], \Args[y] } $ and $ \Args[v], \Args[y] \notin \tilde{\Args[y]}'_i $ for all $ i \in \indexSet $, we have $ \Gamma \vdash P\!\Set[]{ \Subst{\Args[v]}{\Args[y]} } \triangleright \Delta $. \item[Case Rule~$ (\mathsf{S}) $:] In this case we have $ P = \PSend{\Chan[k]}{\Role_1}{\Role_2}{\Labe_j}{\tilde{\Args[v]}'}{P'} $, $ \Delta = \Delta', \Typed{\AT{\Chan[k]}{\Role_1}}{\LTSend{\Role_2}{_{i \in \indexSet} \Set{ \LTLab{\Labe_i}{\Typed{\tilde{\Args}_i}{\tilde{\Sort}'_i}}{T_i} }}} $ and we have $ \Gamma \vdash P' \triangleright \Delta', \Typed{\AT{\Chan[k]}{\Role_1}}{T_j} $ and $ \vdash \Typed{\tilde{\Args[v]}'}{\tilde{\Sort}'_j} $. Hence, if $ \tilde{\Args[v]}'_i = \Args[y] $, then $ \tilde{\Sort}'_i = \Sort $ and thus the kinds of $ \tilde{\Args[v]}'\!\Set[]{ \Subst{\Args[v]}{\Args[y]} } $ and $ \tilde{\Args} $ coincide. Thus $ \vdash \Typed{\tilde{\Args[v]}'}{\tilde{\Sort}_j'} $ implies $ \vdash \Typed{\tilde{\Args[v]}'\!\Set[]{ \Subst{\Args[v]}{\Args[y]} }}{\tilde{\Sort}_j'} $. Because of $ \vdash \Typed{\Args[y]}{\Sort} $ and $ \vdash \Typed{\Args[v]}{\Sort} $, we have $ \Chan[k] \notin \Set[]{ \Args[v], \Args[y] } $. By the induction hypothesis, $ \Gamma \vdash P' \triangleright \Delta', \Typed{\AT{\Chan[k]}{\Role_1}}{T_j} $ implies $ \Gamma \vdash P'\!\Set[]{ \Subst{\Args[v]}{\Args[y]} } \triangleright \Delta', \Typed{\AT{\Chan[k]}{\Role_1}}{T_j} $. With Rule~$ (\mathsf{S}) $ and $ \vdash \Typed{\tilde{\Args[v]}'\!\Set[]{ \Subst{\Args[v]}{\Args[y]} }}{\tilde{\Sort}_j'} $ we have $ \Gamma \vdash \PSend{\Chan[k]}{\Role_1}{\Role_2}{\Labe_j}{\tilde{\Args[v]}'\!\Set[]{ \Subst{\Args[v]}{\Args[y]} }}{\left( P'\!\Set[]{ \Subst{\Args[v]}{\Args[y]} } \right)} \triangleright \Delta', \Typed{\AT{\Chan[k]}{\Role_1}}{\LTSend{\Role_2}{_{i \in \indexSet} \Set{ \LTLab{\Labe_i}{\Typed{\tilde{\Args}_i}{\tilde{\Sort}'_i}}{T_i} }}} $. Since $ \Chan[k] \notin \Set[]{ \Args[v], \Args[y] } $, we have $ \Gamma \vdash P\!\Set[]{ \Subst{\Args[v]}{\Args[y]} } \triangleright \Delta $. \item[Case Rule~$ (\mathsf{R}) $:] In this case $ P = \PRes{\Args}{P'} $ and we have $ \Gamma, \Typed{\Args}{\AT{T}{\Role}} \vdash P' \triangleright \Delta $. Using alpha-conversion before Rule~$ (\mathsf{R}) $ we can ensure that $ \Args \notin \Set[]{ \Args[v], \Args[y] } $. By the induction hypothesis, $ \Gamma, \Typed{\Args}{\AT{T}{\Role}} \vdash P' \triangleright \Delta $ implies $ \Gamma, \Typed{\Args}{\AT{T}{\Role}} \vdash P'\!\Set[]{ \Subst{\Args[v]}{\Args[y]} } \triangleright \Delta $. With Rule~$ (\mathsf{R}) $ we have $ \Gamma \vdash \PRes{\Args}{\left( P'\!\Set[]{ \Subst{\Args[v]}{\Args[y]} } \right)} \triangleright \Delta $. Since $ \Args \notin \Set[]{ \Args[v], \Args[y] } $, we have $ \Gamma \vdash P\!\Set[]{ \Subst{\Args[v]}{\Args[y]} } \triangleright \Delta $.
\item[Case Rule~$ (\mathsf{P}) $:] In this case we have $ P = \PReq{\Chan[s]}{\Role_1}{\Role_2}{\Role_3}{\Chan[k]}{P'} $ and $ \Delta = \Delta', \Typed{\AT{\Chan[s]}{\Role_1}}{\LTReq{\Prot}{\Role_3}{\tilde{\Args[v]}'}{\Role_2}{T_1}}, \Typed{\ATI{\Chan[k]}{\Role_3}}{T_3} $, $ \Gamma \vdash P' \triangleright \Delta', \Typed{\AT{\Chan[s]}{\Role_1}}{T_1} $, $ \GetType{\Prot} = \TypeOfProt{\tilde{\Role}_4}{\tilde{\Args[y]}'}{\tilde{\Role}_5}{G} $, and $ \ProjS{G\!\Set[]{ \Subst{\tilde{\Args[v]}'}{\tilde{\Args[y]}'} }}{}{\Role_3} = T_3 $. Because of $ \vdash \Typed{\Args[y]}{\Sort} $ and $ \vdash \Typed{\Args[v]}{\Sort} $, we have $ \Chan[s], \Chan[k] \notin \Set[]{ \Args[v], \Args[y] } $. By the induction hypothesis, $ \Gamma \vdash P' \triangleright \Delta', \Typed{\AT{\Chan[s]}{\Role_1}}{T_1} $ implies $ \Gamma \vdash P'\!\Set[]{ \Subst{\Args[v]}{\Args[y]} } \triangleright \Delta', \Typed{\AT{\Chan[s]}{\Role_1}}{T_1} $. With Rule~$ (\mathsf{P}) $, $ \GetType{\Prot} = \TypeOfProt{\tilde{\Role}_4}{\tilde{\Args[y]}'}{\tilde{\Role}_5}{G} $, and $ \ProjS{G\!\Set[]{ \Subst{\tilde{\Args[v]}'}{\tilde{\Args[y]}'} }}{}{\Role_3} = T_3 $ we have $ \Gamma \vdash \PReq{\Chan[s]}{\Role_1}{\Role_2}{\Role_3}{\Chan[k]}{\left( P'\!\Set[]{ \Subst{\Args[v]}{\Args[y]} } \right)} \triangleright \Delta', \Typed{\AT{\Chan[s]}{\Role_1}}{\LTReq{\Prot}{\Role_3}{\tilde{\Args[v]}'}{\Role_2}{T_1}}, \Typed{\ATI{\Chan[k]}{\Role_3}}{T_3} $. Since $ \Chan[s], \Chan[k] \notin \Set[]{ \Args[v], \Args[y] } $, we have $ \Gamma \vdash P\!\Set[]{ \Subst{\Args[v]}{\Args[y]} } \triangleright \Delta $. \item[Case Rule~$ (\mathsf{J}) $:] In this case we have $ P = \PEnt{\Chan[s]}{\Role_1}{\Role_2}{\Role_3}{\Args}{P'} $ and $ \Delta = \Delta', \Typed{\AT{\Chan[s]}{\Role_2}}{\LTEnt{\Prot}{\Role_3}{\tilde{\Args[v]}'}{\Role_1}{T_2}} $, $ \Gamma \vdash P' \triangleright \Delta', \Typed{\AT{\Chan[s]}{\Role_2}}{T_2}, \Typed{\AT{\Args}{\Role_3}}{T_3} $, $ \GetType{\Prot} = \TypeOfProt{\tilde{\Role}_4}{\tilde{\Args[y]}'}{\tilde{\Role}_5}{G} $, and $ \ProjS{G\!\Set[]{ \Subst{\tilde{\Args[v]}'}{\tilde{\Args[y]}'} }}{}{\Role_3} = T_3 $. Using alpha-conversion before Rule~$ (\mathsf{J}) $ we can ensure that $ \Args \notin \Set[]{ \Args[v], \Args[y] } $. Because of $ \vdash \Typed{\Args[y]}{\Sort} $ and $ \vdash \Typed{\Args[v]}{\Sort} $, we have $ \Chan[s] \notin \Set[]{ \Args[v], \Args[y] } $. By the induction hypothesis, $ \Gamma \vdash P' \triangleright \Delta', \Typed{\AT{\Chan[s]}{\Role_2}}{T_2}, \Typed{\AT{\Args}{\Role_3}}{T_3} $ implies $ \Gamma \vdash P'\!\Set[]{ \Subst{\Args[v]}{\Args[y]} } \triangleright \Delta', \Typed{\AT{\Chan[s]}{\Role_2}}{T_2}, \Typed{\AT{\Args}{\Role_3}}{T_3} $. With Rule~$ (\mathsf{J}) $, $ \GetType{\Prot} = \TypeOfProt{\tilde{\Role}_4}{\tilde{\Args[y]}'}{\tilde{\Role}_5}{G} $, and $ \ProjS{G\!\Set[]{ \Subst{\tilde{\Args[v]}'}{\tilde{\Args[y]}'} }}{}{\Role_3} = T_3 $ we have $ \Gamma \vdash \PEnt{\Chan[s]}{\Role_1}{\Role_2}{\Role_3}{\Args}{\left( P'\!\Set[]{ \Subst{\Args[v]}{\Args[y]} } \right)} \triangleright \Delta', \Typed{\AT{\Chan[s]}{\Role_2}}{\LTEnt{\Prot}{\Role_3}{\tilde{\Args[v]}'}{\Role_1}{T_2}} $. Since $ \Chan[s], \Args \notin \Set[]{ \Args[v], \Args[y] } $, we have $ \Gamma \vdash P\!\Set[]{ \Subst{\Args[v]}{\Args[y]} } \triangleright \Delta $. 
\item[Case Rule~$ (\mathsf{New}) $:] In this case $ P = \PDecl{\Chan[k]}{\Chan[s]}{\tilde{\Args[v]}'}{\tilde{\Chan}}{\tilde{\Role}'''}{P'} $ and \begin{align*} \Delta = \Delta', \Typed{\AT{\Chan[s]}{\Role}}{\LTCall{\Prot}{G}{\tilde{\Args[v]}'}{\Typed{\tilde{\Args[y]}'}{\tilde{\Sort}'}}{\tilde{\Role}'''}{T}} \end{align*} and we have \begin{align*} \Gamma \vdash P' \triangleright \; \Delta', \Typed{\AT{\Chan[s]}{\Role}}{T}, \Typed{\ATI{\Chan[k]}{\Role''_1}}{T'_1}, \ldots, \Typed{\ATI{\Chan[k]}{\Role''_n}}{T'_n}, \Typed{\ATE{\Chan[k]}{\Role'''_1}}{T'_{n + 1}}, \ldots, \Typed{\ATE{\Chan[k]}{\Role'''_m}}{T'_{n + m}} \end{align*} and $ \GetType{\Prot} = \TypeOfProt{\tilde{\Role}''}{\tilde{\Args[y]}'}{\tilde{\Role}'''}{G} $ and $ \forall i \logdot \GetType{\Chan_i} = \AT{T'_{i + n}}{\Role'''_{i + n}} $ and $ \forall i \logdot \ProjS{G\!\Set[]{ \Subst{\tilde{\Args[v]}'}{\tilde{\Args[y]}'} }}{}{\Role''_i} = T'_i $ and $ \forall j \logdot \ProjS{G\!\Set[]{ \Subst{\tilde{\Args[v]}'}{\tilde{\Args[y]}'} }}{}{\Role'''_j} = T'_{j + n} $ and $ \vdash \Typed{\tilde{\Args[v]}'}{\tilde{\Sort}'} $ and $ \GetType{\Chan[k]} = {\Prot\!\Set[]{ \Subst{\tilde{\Args[v]}'}{\tilde{\Args[y]}'} }} $. Hence, if $ \tilde{\Args[v]}'_i = \Args[y] $, then $ \tilde{\Sort}'_i = \Sort $ and thus the kinds of $ \tilde{\Args[v]}'\!\Set[]{ \Subst{\Args[v]}{\Args[y]} } $ and $ \tilde{\Args[y]}' $ coincide. Because of $ \vdash \Typed{\Args[y]}{\Sort} $ and $ \vdash \Typed{\Args[v]}{\Sort} $, we have $ \Chan[s], \Chan[k] \notin \Set[]{ \Args[v], \Args[y] } $ and $ \Args[v], \Args[y] \notin \tilde{\Chan} $. By the induction hypothesis, \begin{align*} \Gamma \vdash P' \triangleright \Delta', \Typed{\AT{\Chan[s]}{\Role}}{T}, \Typed{\ATI{\Chan[k]}{\Role''_1}}{T'_1}, \ldots, \Typed{\ATI{\Chan[k]}{\Role''_n}}{T'_n}, \Typed{\ATE{\Chan[k]}{\Role'''_1}}{T'_{n + 1}}, \ldots, \Typed{\ATE{\Chan[k]}{\Role'''_m}}{T'_{n + m}} \end{align*} implies \begin{align*} \Gamma \vdash P'\!\Set[]{ \Subst{\Args[v]}{\Args[y]} } \triangleright \; \Delta', \Typed{\AT{\Chan[s]}{\Role}}{T}, \Typed{\ATI{\Chan[k]}{\Role''_1}}{T'_1}, \ldots, \Typed{\ATI{\Chan[k]}{\Role''_n}}{T'_n}, \Typed{\ATE{\Chan[k]}{\Role'''_1}}{T'_{n + 1}}, \ldots, \Typed{\ATE{\Chan[k]}{\Role'''_m}}{T'_{n + m}} \end{align*} With Rule~$ (\mathsf{New}) $ and $ \GetType{\Prot} = \TypeOfProt{\tilde{\Role}''}{\tilde{\Args[y]}'}{\tilde{\Role}'''}{G} $ and $ \forall i \logdot \GetType{\Chan_i} = \AT{T'_{i + n}}{\Role'''_{i + n}} $ and $ \forall i \logdot \ProjS{G\!\Set[]{ \Subst{\tilde{\Args[v]}'}{\tilde{\Args[y]}'} }}{}{\Role''_i} = T'_i $ and $ \forall j \logdot \ProjS{G\!\Set[]{ \Subst{\tilde{\Args[v]}'}{\tilde{\Args[y]}'} }}{}{\Role'''_j} = T'_{j + n} $ and $ \vdash \Typed{\tilde{\Args[v]}'}{\tilde{\Sort}'} $ and $ \GetType{\Chan[k]} = {\Prot\!\Set[]{ \Subst{\tilde{\Args[v]}'}{\tilde{\Args[y]}'} }} $ we have \begin{align*} \Gamma \vdash \PDecl{\Chan[k]}{\Chan[s]}{\tilde{\Args[v]}'}{\tilde{\Chan}}{\tilde{\Role}'''}{\left( P'\!\Set[]{ \Subst{\Args[v]}{\Args[y]} } \right)} \triangleright \Delta', \Typed{\AT{\Chan[s]}{\Role}}{\LTCall{\Prot}{G}{\tilde{\Args[v]}'}{\Typed{\tilde{\Args[y]}'}{\tilde{\Sort}'}}{\tilde{\Role}'''}{T}} \end{align*} Since $ \Chan[s], \Chan[k] \notin \Set[]{ \Args[v], \Args[y] } $ and $ \Args[v], \Args[y] \notin \tilde{\Chan} $, we have $ \Gamma \vdash P\!\Set[]{ \Subst{\Args[v]}{\Args[y]} } \triangleright \Delta $.
\item[Case Rule~$ (\mathsf{S1}) $:] In this case $ P = \PChoi{P_1}{P_2} $ and $ \Delta = \Delta', \Typed{\AT{\Chan[s]}{\Role}}{T_1 \oplus T_2} $ and we have $ \Gamma \vdash P_1 \triangleright \Delta', \Typed{\AT{\Chan[s]}{\Role}}{T_1} $ and $ \Gamma \vdash P_2 \triangleright \Delta', \Typed{\AT{\Chan[s]}{\Role}}{T_2} $. Because of $ \vdash \Typed{\Args[y]}{\Sort} $ and $ \vdash \Typed{\Args[v]}{\Sort} $, we have $ \Chan[s] \notin \Set[]{ \Args[v], \Args[y] } $. By the induction hypothesis, $ \Gamma \vdash P_1 \triangleright \Delta', \Typed{\AT{\Chan[s]}{\Role}}{T_1} $ and $ \Gamma \vdash P_2 \triangleright \Delta', \Typed{\AT{\Chan[s]}{\Role}}{T_2} $ imply $ \Gamma \vdash P_1\!\Set[]{ \Subst{\Args[v]}{\Args[y]} } \triangleright \Delta', \Typed{\AT{\Chan[s]}{\Role}}{T_1} $ and $ \Gamma \vdash P_2\!\Set[]{ \Subst{\Args[v]}{\Args[y]} } \triangleright \Delta', \Typed{\AT{\Chan[s]}{\Role}}{T_2} $. With Rule~$ (\mathsf{S1}) $ we have $ \Gamma \vdash \PChoi{\left( P_1\!\Set[]{ \Subst{\Args[v]}{\Args[y]} } \right)}{\left( P_2\!\Set[]{ \Subst{\Args[v]}{\Args[y]} } \right)} \triangleright \Delta', \Typed{\AT{\Chan[s]}{\Role}}{\LTChoi{T_1}{T_2}} $. Hence $ \Gamma \vdash P\!\Set[]{ \Subst{\Args[v]}{\Args[y]} } \triangleright \Delta $. \item[Case Rule~$ (\mathsf{S2}) $:] In this case $ \Delta = \Delta', \Typed{\AT{\Chan[s]}{\Role}}{T_1 \oplus T_2} $ and we have $ \Gamma \vdash P \triangleright \Delta', \Typed{\AT{\Chan[s]}{\Role}}{T_i} $ with $ i \in \Set[]{ 1, 2 } $. Because of $ \vdash \Typed{\Args[y]}{\Sort} $ and $ \vdash \Typed{\Args[v]}{\Sort} $, we have $ \Chan[s] \notin \Set[]{ \Args[v], \Args[y] } $. By the induction hypothesis, $ \Gamma \vdash P \triangleright \Delta', \Typed{\AT{\Chan[s]}{\Role}}{T_i} $ implies $ \Gamma \vdash P\!\Set[]{ \Subst{\Args[v]}{\Args[y]} } \triangleright \Delta', \Typed{\AT{\Chan[s]}{\Role}}{T_i} $. With Rule~$ (\mathsf{S2}) $ we have $ \Gamma \vdash P\!\Set[]{ \Subst{\Args[v]}{\Args[y]} } \triangleright \Delta', \Typed{\AT{\Chan[s]}{\Role}}{\LTChoi{T_1}{T_2}} $. Hence $ \Gamma \vdash P\!\Set[]{ \Subst{\Args[v]}{\Args[y]} } \triangleright \Delta $. \item[Case Rule~$ (\mathsf{Pa}) $:] In this case $ P = \PPar{P_1}{P_2} $ and $ \Delta = \Delta_1 \otimes \Delta_2 $ and we have $ \Gamma \vdash P_1 \triangleright \Delta_1 $ and $ \Gamma \vdash P_2 \triangleright \Delta_2 $. By the induction hypothesis, $ \Gamma \vdash P_1 \triangleright \Delta_1 $ and $ \Gamma \vdash P_2 \triangleright \Delta_2 $ imply $ \Gamma \vdash P_1\!\Set[]{ \Subst{\Args[v]}{\Args[y]} } \triangleright \Delta_1 $ and $ \Gamma \vdash P_2\!\Set[]{ \Subst{\Args[v]}{\Args[y]} } \triangleright \Delta_2 $. With Rule~$ (\mathsf{Pa}) $ we have $ \Gamma \vdash \PPar{\left( P_1\!\Set[]{ \Subst{\Args[v]}{\Args[y]} } \right)}{\left( P_2\!\Set[]{ \Subst{\Args[v]}{\Args[y]} } \right)} \triangleright \Delta_1 \otimes \Delta_2 $. Hence $ \Gamma \vdash P\!\Set[]{ \Subst{\Args[v]}{\Args[y]} } \triangleright \Delta $. 
\item[Case Rule~$ (\mathsf{Opt}) $:] In this case $ P = \POpt{\Role_1}{\tilde{\Role}}{P_1}{\tilde{\Args}}{\tilde{\Args[v]}'}{P_2} $ and $ \Delta = \Delta_1 \otimes \Delta_2, \Typed{\AT{\Chan[s]}{\Role_1}}{\LTOpt{\tilde{\Role}}{T_1}{\Typed{\tilde{\Args[y]}'}{\tilde{\Sort}'}}{T_2}} $ and we have $ \Gamma \vdash P_1 \triangleright \Delta_1, \Typed{\AT{\Chan[s]}{\Role_1}}{T_1}, \Typed{\Role_1}{\OV{\tilde{\Sort}'}} $, $ \nexists \Role', \Sort[K] \logdot \Typed{\Role'}{\OV{\tilde{\Sort[K]}}} \in \Delta_1 $, $ \Gamma \vdash P_2 \triangleright \Delta_2, \Typed{\AT{\Chan[s]}{\Role_1}}{T_2} $, $ \vdash \Typed{\tilde{\Args}}{\tilde{\Sort}'} $, and $ \vdash \Typed{\tilde{\Args[v]}'}{\tilde{\Sort}'} $. Hence, if $ \tilde{\Args[v]}'_i = \Args[y] $, then $ \tilde{\Sort}'_i = \Sort $ and thus the kinds of $ \tilde{\Args[v]}'\!\Set[]{ \Subst{\Args[v]}{\Args[y]} } $ and $ \tilde{\Args[y]}' $ coincide. Thus $ \vdash \Typed{\tilde{\Args[v]}'}{\tilde{\Sort}'} $ implies $ \vdash \Typed{\tilde{\Args[v]}'\!\Set[]{ \Subst{\Args[v]}{\Args[y]} }}{\tilde{\Sort}'} $. Using alpha-conversion before Rule~$ (\mathsf{Opt}) $ we can ensure that $ \Args[v], \Args[y] \notin \tilde{\Args} $. Because of $ \vdash \Typed{\Args[y]}{\Sort} $ and $ \vdash \Typed{\Args[v]}{\Sort} $, we have $ \Chan[s] \notin \Set[]{ \Args[v], \Args[y] } $. By the induction hypothesis, $ \Gamma \vdash P_1 \triangleright \Delta_1, \Typed{\AT{\Chan[s]}{\Role_1}}{T_1}, \Typed{\Role_1}{\OV{\tilde{\Sort}'}} $ and $ \Gamma \vdash P_2 \triangleright \Delta_2, \Typed{\AT{\Chan[s]}{\Role_1}}{T_2} $ imply $ \Gamma \vdash P_1\!\Set[]{ \Subst{\Args[v]}{\Args[y]} } \triangleright \Delta_1, \Typed{\AT{\Chan[s]}{\Role_1}}{T_1}, \Typed{\Role_1}{\OV{\tilde{\Sort}'}} $ and $ \Gamma \vdash P_2\!\Set[]{ \Subst{\Args[v]}{\Args[y]} } \triangleright \Delta_2, \Typed{\AT{\Chan[s]}{\Role_1}}{T_2} $. With Rule~$ (\mathsf{Opt}) $, $ \nexists \Role', \Sort[K] \logdot \Typed{\Role'}{\OV{\tilde{\Sort[K]}}} \in \Delta_1 $, $ \vdash \Typed{\tilde{\Args}}{\tilde{\Sort}'} $, and $ \vdash \Typed{\tilde{\Args[v]}'\!\Set[]{ \Subst{\Args[v]}{\Args[y]} }}{\tilde{\Sort}'} $ we have $ \Gamma \vdash \POpt{\Role_1}{\tilde{\Role}}{P_1\!\Set[]{ \Subst{\Args[v]}{\Args[y]} }}{\tilde{\Args}}{\tilde{\Args[v]}'\!\Set[]{ \Subst{\Args[v]}{\Args[y]} }}{\left( P_2\!\Set[]{ \Subst{\Args[v]}{\Args[y]} } \right)} \triangleright \Delta_1 \otimes \Delta_2, \Typed{\AT{\Chan[s]}{\Role_1}}{\LTOpt{\tilde{\Role}}{T_1}{\Typed{\tilde{\Args[y]}'}{\tilde{\Sort}'}}{T_2}} $. Since $ \Args[v], \Args[y] \notin \tilde{\Args} $, we have $ \Gamma \vdash P\!\Set[]{ \Subst{\Args[v]}{\Args[y]} } \triangleright \Delta $. \end{description} The proof for the session types with optional blocks but without sub-sessions is similar but omits the cases for the Rules~$ (\mathsf{P}) $, $ (\mathsf{J}) $, and $ (\mathsf{New}) $. The remaining cases do not rely on the Rules~$ (\mathsf{P}) $, $ (\mathsf{J}) $, or $ (\mathsf{New}) $. \end{proof} The next lemma deals with evaluation contexts in typing judgements (compare to \cite{Demangeon15}). If a process $ P $ is well-typed within an evaluation context then \begin{inparaenum}[(1)] \item the process $ P $ is well-typed itself and \item any other process that is well-typed \wrt the same global environment as $ P $ is also well-typed within the evaluation context.
\end{inparaenum} \begin{lemma} \label{lem:typeEC} For both type systems: If $ \Gamma \vdash \AEC{P} \triangleright \Delta $ then: \begin{enumerate} \item There exist $ \Delta_1 $, $ \Delta' $, and $ \Gamma' $ with $ \Gamma \subseteq \Gamma' $ such that $ \Gamma' \vdash P \triangleright \Delta_1 $ and $ \Delta = \Delta' \otimes \Delta_1 $. \item For all $ P_2, \Delta_2 $ such that $ \Gamma' \vdash P_2 \triangleright \Delta_2 $ and $ \left( \forall \Role, \tilde{\Sort} \logdot \Typed{\Role}{\OV{\tilde{\Sort}}} \in \Delta_1 \text{ iff } \Typed{\Role}{\OV{\tilde{\Sort}}} \in \Delta_2 \right) $, we have $ \Gamma \vdash \AEC{P_2} \triangleright \Delta_2 \otimes \Delta' $. \end{enumerate} \end{lemma} \begin{proof} By the typing rules of Figure~\ref{fig:typingRules}, the derivation of $ \Gamma \vdash \AEC{P} \triangleright \Delta $ is a tree containing a derivation of $ \Gamma' \vdash P \triangleright \Delta_1 $ for some $ \Gamma', \Delta_1 $ as subtree. Since no rule removes elements of the global environment (but Rule~$ (\mathsf{R}) $ might add elements), $ \Gamma \subseteq \Gamma' $. By the definition of evaluation contexts, the only rules that can be used in the part of the derivation of $ \Gamma \vdash \AEC{P} \triangleright \Delta $ that is below the subtree $ \Gamma' \vdash P \triangleright \Delta_1 $ are the Rules~$ (\mathsf{R}) $, $ (\mathsf{Pa}) $, and $ (\mathsf{Opt}) $. Rule~$ (\mathsf{R}) $ does not change the session environment. The Rules~$ (\mathsf{Pa}) $ and $ (\mathsf{Opt}) $ split the session environment of the original judgement into two session environments using the operator $ \otimes $ such that each of the two subtrees generated by these rules obtains one part of the session environment. Hence, moving downwards from $ \Gamma' \vdash P \triangleright \Delta_1 $ in the derivation of $ \Gamma \vdash \AEC{P} \triangleright \Delta $, we can collect all session environments that were split from $ \Delta $ and combine them with $ \otimes $---possibly adding $ \emptyset $---to obtain $ \Delta' $ such that $ \Delta = \Delta' \otimes \Delta_1 $. Rule~$ (\mathsf{Opt}) $ additionally adds an assignment $ \Typed{\Role}{\OV{\tilde{\Sort}}} $, which covers the type of the return values, to the part of the session environment that is used in $ \EC $ for the position that contains the hole. In this case either $ P $ contains $ \POptEnd{\Role}{\tilde{\Args[v]}} $ and $ \Typed{\Role}{\OV{\tilde{\Sort}}} \in \Delta_1 $ for some $ \Role $ and $ \Typed{\tilde{\Args[v]}}{\tilde{\Sort}} $, or the assignment $ \Typed{\Role}{\OV{\tilde{\Sort}}} $ was split from $ \Delta $ into the part $ \Delta' $. The typing rules ensure that $ \Delta_1 $ can contain at most one assignment of the form $ \Typed{\Role}{\OV{\tilde{\Sort}}} $. Moreover, if we have $ \Gamma' \vdash P_2 \triangleright \Delta_2 $ and $ \left( \forall \Role, \tilde{\Sort} \logdot \Typed{\Role}{\OV{\tilde{\Sort}}} \in \Delta_1 \text{ iff } \Typed{\Role}{\OV{\tilde{\Sort}}} \in \Delta_2 \right) $, we can replace the subtree for $ \Gamma' \vdash P \triangleright \Delta_1 $ in the derivation of $ \Gamma \vdash \AEC{P} \triangleright \Delta $ by a derivation of $ \Gamma' \vdash P_2 \triangleright \Delta_2 $---while substituting all occurrences of $ \Delta_1 $ by $ \Delta_2 $ below the subtree---and obtain a derivation for $ \Gamma \vdash \AEC{P_2} \triangleright \Delta_2 \otimes \Delta' $. Here, the second condition ensures that the type of the return values is checked for $ P $ if and only if it is checked for $ P_2 $. This ensures that this property holds in case the hole of the context covers the enclosed part of an optional block.
Since both type systems use evaluation contexts and type them with the Rules~$ (\mathsf{R}) $, $ (\mathsf{Pa}) $, and $ (\mathsf{Opt}) $ in the same way, both type systems fulfil this property. \end{proof} Note that, since the contexts $ \ECR $, $ \ECO $, and $ \ECP $ are strict sub-contexts of evaluation contexts $ \EC $, \ie each context of one of the former kinds is also an evaluation context, the above lemma holds for all four kinds of contexts. \subsection{Subject Reduction} \emph{Subject reduction} is a basic property of each type system. It is this property that allows us to reason statically about terms, by ensuring that whenever a process is well-typed, all its derivatives are well-typed as well. Hence, for all properties the type system ensures for well-typed terms, it is not necessary to compute executions but only to test for well-typedness of the original term. We use a strong variant of subject reduction that additionally involves the condition $ \Delta \mapsto \Delta' $, in order to capture how the local types evolve alongside the reduction of processes. More precisely, $ \Delta' $ is the session environment we obtain for the derivative $ P' $ of a process $ P $ with respect to a step $ P \longmapsto P' $. Therefore the effect of process reductions on the corresponding local types is captured within the relation $ \mapsto $. \begin{figure*} \caption{Reduction Rules for Session Environments} \label{fig:sessionTypeReductions} \end{figure*} In Figure~\ref{fig:sessionTypeReductions} we derive the rules for the evolution of session environments along the reductions of a process from the interplay of the reduction rules of processes in Figure~\ref{fig:reductionRules} and the typing rules in Figure~\ref{fig:typingRules}. Note that Rule~(\textsf{succ}') is a special case of Rule~(\textsf{fail}'). The difference between a successful completion and the abortion of an optional block cannot be observed from the session environment. Also note that the rules of Figure~\ref{fig:sessionTypeReductions} do not replace the rules for process reductions or type checks. They are used here as an auxiliary tool to simplify the argumentation about completion. Figure~\ref{fig:sessionTypeReductions} contains all rules for both considered type systems. For the smaller type system with optional blocks but without sub-sessions the Rules~$ (\mathsf{subs}') $ and $ (\mathsf{join}') $ are superfluous. Based on the rules of Figure~\ref{fig:sessionTypeReductions}, we add the condition $ \Delta \mapsto \Delta' $ to the formulation of subject reduction. Obviously this extension results in a strictly stronger requirement that naturally implies the former statement. The proof of subject reduction is by induction over the derivation of a single reduction step of the processes, \ie over the reduction rules. For each reduction rule we have to show how the proof of well-typedness of the process can be adapted to show that the derivative is also well-typed. Thus we have to relate the reduction rules and the typing rules. \begin{theorem}[Subject Reduction] \label{thm:subjectReduction} $ $\\ For both type systems: If $ \Gamma \vdash P \triangleright \Delta $ and $ P \longmapsto P' $ then there exists $ \Delta' $ such that $ \Gamma \vdash P' \triangleright \Delta' $ and $ \Delta \mapsto \Delta' $. \end{theorem} \begin{proof} Again we consider the larger type system first. Assume $ \Gamma \vdash P \triangleright \Delta $ and $ P \longmapsto P' $.
We perform an induction over the rules used to derive $ P \longmapsto P' $ with a case analysis over the rules of Figure~\ref{fig:typingRules}. \begin{description} \item[Cases $ (\mathsf{comS}) $:] In this case we have \begin{align*} P = \AEC{\PPar{\PSend{\Chan[k]}{\Role_1}{\Role_2}{\Labe_j}{\tilde{\Args[v]}}{P^*}}{\PGet{\Chan[k]}{\Role_1}{\Role_2}{_{i \in \indexSet} \Set{ \PLab{\Labe_i}{\tilde{\Args}_i}{P_i} }}}} \quad \text{ and } \quad P' = \AEC{\PPar{P^*}{P_j\!\Set[]{ \Subst{\tilde{\Args[v]}}{\tilde{\Args}_j} }}} \end{align*} With $ \Gamma \vdash P \triangleright \Delta $ and Lemma~\ref{lem:typeEC}~(1), there exist $ \Delta_P, \Delta_{\EC}, \Gamma' $ such that $ \Gamma \subseteq \Gamma' $, $ \Delta = \Delta_{\EC} \otimes \Delta_P $, and $ \Gamma' \vdash \PPar{\PSend{\Chan[k]}{\Role_1}{\Role_2}{\Labe_j}{\tilde{\Args[v]}}{P^*}}{\PGet{\Chan[k]}{\Role_1}{\Role_2}{_{i \in \indexSet} \Set{ \PLab{\Labe_i}{\tilde{\Args}_i}{P_i} }}} \triangleright \Delta_P $. By the rules in Figure~\ref{fig:typingRules} the proof of the judgement has to start (modulo Rule~(\textsf{S2})) as follows \begin{align*} \dfrac{\dfrac{\Gamma' \vdash P^* \triangleright \Delta_{P1}, \Typed{\AT{\Chan[k]}{\Role_1}}{T_j^*} \quad \vdash \Typed{\tilde{\Args[v]}}{\tilde{\Sort}_j}}{\Gamma' \vdash \PSend{\Chan[k]}{\Role_1}{\Role_2}{\Labe_j}{\tilde{\Args[v]}}{P^*} \triangleright \Delta_{P1}, \Typed{\AT{\Chan[k]}{\Role_1}}{T_{\text{send}}}} (\mathsf{S}) \quad \dfrac{\left( \Gamma' \vdash P_i \triangleright \Delta_{P2}, \Typed{\AT{\Chan[k]}{\Role_2}}{T_i} \quad \vdash \Typed{\tilde{\Args}_i}{\tilde{\Sort}_i} \right)_{i \in \indexSet}}{\Gamma' \vdash \PGet{\Chan[k]}{\Role_1}{\Role_2}{_{i \in \indexSet} \Set{ \PLab{\Labe_i}{\tilde{\Args}_i}{P_i} }} \triangleright \Delta_{P2}, \Typed{\AT{\Chan[k]}{\Role_2}}{T_{\text{get}}}} (\mathsf{C})}{\Gamma' \vdash \PPar{\PSend{\Chan[k]}{\Role_1}{\Role_2}{\Labe_j}{\tilde{\Args[v]}}{P^*}}{\PGet{\Chan[k]}{\Role_1}{\Role_2}{_{i \in \indexSet} \Set{ \PLab{\Labe_i}{\tilde{\Args}_i}{P_i} }}} \triangleright \Delta_P} (\mathsf{Pa}) \end{align*} where $ T_{\text{send}} = \LTSend{\Role_2}{_{i \in \indexSet} \Set{ \LTLab{\Labe_i}{\Typed{\tilde{\Args[z]}_i}{\tilde{\Sort}_i}}{T^*_i} }} $, $ T_{\text{get}} = \LTGet{\Role_1}{_{i \in \indexSet{}} \Set{ \LTLab{\Labe_i}{\Typed{\tilde{\Args[y]}_i}{\tilde{\Sort}_i}}{T_i} }} $, and the session environment $ \Delta_P = \Delta_{P1}, \Typed{\AT{\Chan[k]}{\Role_1}}{T_{\text{send}}} \otimes \Delta_{P2}, \Typed{\AT{\Chan[k]}{\Role_2}}{T_{\text{get}}} $. By Lemma~\ref{lem:typeSubstB}, $ \Gamma' \vdash P_j \triangleright \Delta_{P2}, \Typed{\AT{\Chan[k]}{\Role_2}}{T_j} $, $ \vdash \Typed{\tilde{\Args}_j}{\tilde{\Sort}_j} $, and $ \vdash \Typed{\tilde{\Args[v]}}{\tilde{\Sort}_j} $ imply $ \Gamma' \vdash P_j\!\Set[]{\Subst{\tilde{\Args[v]}}{\tilde{\Args}_j}} \triangleright \Delta_{P2}, \Typed{\AT{\Chan[k]}{\Role_2}}{T_j} $. 
With $ \Gamma' \vdash P^* \triangleright \Delta_{P1}, \Typed{\AT{\Chan[k]}{\Role_1}}{T_j^*} $ and since $ \Delta_{P1} \otimes \Delta_{P2} $ is defined, we obtain \begin{align*} \dfrac{\Gamma' \vdash P^* \triangleright \Delta_{P1}, \Typed{\AT{\Chan[k]}{\Role_1}}{T_j^*} \quad \Gamma' \vdash P_j\!\Set[]{\Subst{\tilde{\Args[v]}}{\tilde{\Args}_j}} \triangleright \Delta_{P2}, \Typed{\AT{\Chan[k]}{\Role_2}}{T_j}}{\Gamma' \vdash \PPar{P^*}{P_j\!\Set[]{\Subst{\tilde{\Args[v]}}{\tilde{\Args}_j}}} \triangleright \Delta_{P1}, \Typed{\AT{\Chan[k]}{\Role_1}}{T_j^*} \otimes \Delta_{P2}, \Typed{\AT{\Chan[k]}{\Role_2}}{T_j}} (\mathsf{Pa}) \end{align*} Note that $ \Delta_P $ contains some $ \Typed{\Role[p]}{\OV{\tilde{\Sort[K]}}} $ if and only if $ \Delta_{P1}, \Typed{\AT{\Chan[k]}{\Role_1}}{T_j^*} \otimes \Delta_{P2}, \Typed{\AT{\Chan[k]}{\Role_2}}{T_j} $ contains the same assignment $ \Typed{\Role[p]}{\OV{\tilde{\Sort[K]}}} $. With Lemma~\ref{lem:typeEC}~(2), we have $ \Gamma \vdash P' \triangleright \left( \Delta_{P1}, \Typed{\AT{\Chan[k]}{\Role_1}}{T_j^*} \otimes \Delta_{P2}, \Typed{\AT{\Chan[k]}{\Role_2}}{T_j} \right) \otimes \Delta_{\EC} $. It remains to show that $ \Delta_{\EC} \otimes \Delta_P \mapsto \left( \Delta_{P1}, \Typed{\AT{\Chan[k]}{\Role_1}}{T_j^*} \otimes \Delta_{P2}, \Typed{\AT{\Chan[k]}{\Role_2}}{T_j} \right) \otimes \Delta_{\EC} $. Because $ j \in \indexSet $ and \begin{align*} \Delta_P ={} & \Delta_{P1} \otimes \Delta_{P2}, \Typed{\AT{\Chan[k]}{\Role_1}}{\LTSend{\Role_2}{_{i \in \indexSet} \Set{ \LTLab{\Labe_i}{\Typed{\tilde{\Args[z]}_i}{\tilde{\Sort}_i}}{T^*_i} }}}, \Typed{\AT{\Chan[k]}{\Role_2}}{\LTGet{\Role_1}{_{i \in \indexSet{}} \Set{ \LTLab{\Labe_i}{\Typed{\tilde{\Args[y]}_i}{\tilde{\Sort}_i}}{T_i} }}} \end{align*} we obtain \begin{align*} \dfrac{\dfrac{}{\Delta_P \mapsto \Delta_{P1}, \Typed{\AT{\Chan[k]}{\Role_1}}{T_j^*} \otimes \Delta_{P2}, \Typed{\AT{\Chan[k]}{\Role_2}}{T_j}} (\mathsf{comS}')}{\Delta_{\EC} \otimes \Delta_P \mapsto \left( \Delta_{P1}, \Typed{\AT{\Chan[k]}{\Role_1}}{T_j^*} \otimes \Delta_{P2}, \Typed{\AT{\Chan[k]}{\Role_2}}{T_j} \right) \otimes \Delta_{\EC}} (\mathsf{par}) \end{align*} \item[Case $ (\mathsf{choice}) $:] In this case we have $ P_i \longmapsto P_i' $ and \begin{align*} P = \AEC{\PChoi{P_1}{P_2}} \quad \text{ and } \quad P' = \AEC{P_i'} \end{align*} With $ \Gamma \vdash P \triangleright \Delta $ and Lemma~\ref{lem:typeEC}~(1), there exist $ \Delta_P, \Delta_{\EC}, \Gamma' $ such that $ \Gamma \subseteq \Gamma' $, $ \Delta = \Delta_{\EC} \otimes \Delta_P $, and $ \Gamma' \vdash \PChoi{P_1}{P_2} \triangleright \Delta_P $. By the rules in Figure~\ref{fig:typingRules} the proof of the judgement has to start (modulo Rule~(\textsf{S2})) as follows \begin{align*} \dfrac{\Gamma' \vdash P_1 \triangleright \Delta_{P}', \Typed{\AT{\Chan[s]}{\Role}}{T_1} \quad \Gamma' \vdash P_2 \triangleright \Delta_{P}', \Typed{\AT{\Chan[s]}{\Role}}{T_2}}{\Gamma' \vdash \PChoi{P_1}{P_2} \triangleright \Delta_P} (\mathsf{S1}) \end{align*} for some $ \Chan[s] $ and $ \Role $, where $ \Delta_P = \Delta_{P}', \Typed{\AT{\Chan[s]}{\Role}}{\LTChoi{T_1}{T_2}} $. By the induction hypothesis, $ P_i \longmapsto P_i' $, $ \Gamma' \vdash P_1 \triangleright \Delta_{P}', \Typed{\AT{\Chan[s]}{\Role}}{T_1} $ and $ \Gamma' \vdash P_2 \triangleright \Delta_{P}', \Typed{\AT{\Chan[s]}{\Role}}{T_2} $ imply $ \Gamma' \vdash P_i' \triangleright \Delta_{P}'' $ for some $ \Delta_{P}'' $ such that $ \Delta_{P}', \Typed{\AT{\Chan[s]}{\Role}}{T_i} \mapsto \Delta_P'' $.
Because of $ \Delta_P = \Delta_{P}', \Typed{\AT{\Chan[s]}{\Role}}{\LTChoi{T_1}{T_2}} $, $ \Delta_{P}', \Typed{\AT{\Chan[s]}{\Role}}{T_i} \mapsto \Delta_P'' $ and since the rules of Figure~\ref{fig:sessionTypeReductions} neither remove nor add assignments of the form $ \Typed{\Role[p]}{\OV{\tilde{\Sort[K]}}} $, we have $ \Typed{\Role[p]}{\OV{\tilde{\Sort[K]}}} \in \Delta_P $ iff $ \Typed{\Role[p]}{\OV{\tilde{\Sort[K]}}} \in \Delta_P'' $. Finally, with Lemma~\ref{lem:typeEC}~(2), we have $ \Gamma \vdash P' \triangleright \Delta_P'' \otimes \Delta_{\EC} $. It remains to show that $ \Delta_{\EC} \otimes \Delta_P \mapsto \Delta_P'' \otimes \Delta_{\EC} $. Because $ \Delta_{P}', \Typed{\AT{\Chan[s]}{\Role}}{T_i} \mapsto \Delta_P'' $ and $ \Delta_P = \Delta_{P}', \Typed{\AT{\Chan[s]}{\Role}}{\LTChoi{T_1}{T_2}} $, we obtain \begin{align*} \dfrac{\dfrac{\Delta_{P}', \Typed{\AT{\Chan[s]}{\Role}}{T_i} \mapsto \Delta_P''}{\Delta_P \mapsto \Delta_P''} (\mathsf{choice}')}{\Delta_{\EC} \otimes \Delta_P \mapsto \Delta_P'' \otimes \Delta_{\EC}} (\mathsf{par}) \end{align*} \item[Cases $ (\mathsf{subs}) $:] In this case we have \begin{align*} P = \AEC{\PDecl{\Chan[k]}{\Chan[s]}{\tilde{\Args[v]}}{\tilde{\Chan}}{\tilde{\Role}}{P^*}} \quad \text{ and } \quad P' = \AEC{\PPar{P^*}{\PPar{\POutS{\Chan_1}{\Chan[k]}}{\PPar{\ldots}{\POutS{\Chan_m}{\Chan[k]}}}}} \end{align*} With $ \Gamma \vdash P \triangleright \Delta $ and Lemma~\ref{lem:typeEC}~(1), there exist $ \Delta_P, \Delta_{\EC}, \Gamma' $ such that $ \Gamma \subseteq \Gamma' $, $ \Delta = \Delta_{\EC} \otimes \Delta_P $, and $ \Gamma' \vdash \PDecl{\Chan[k]}{\Chan[s]}{\tilde{\Args[v]}}{\tilde{\Chan}}{\tilde{\Role}}{P^*} \triangleright \Delta_P $. By the rules in Figure~\ref{fig:typingRules} the proof of the judgement has to start (modulo Rule~(\textsf{S2})) as follows \begin{align*} \dfrac{\begin{array}{c} \Gamma' \vdash P^* \triangleright \Delta_P', \Typed{\AT{\Chan[s]}{\Role''}}{T}, \Typed{\ATI{\Chan[k]}{\Role_1}}{T'_1}, \ldots, \Typed{\ATI{\Chan[k]}{\Role_n}}{T'_n}, \Typed{\ATE{\Chan[k]}{\Role'_1}}{T'_{n + 1}}, \ldots, \Typed{\ATE{\Chan[k]}{\Role'_m}}{T'_{n + m}} \quad \GetType[\Gamma']{\Prot} = \TypeOfProt{\tilde{\Role}}{\tilde{\Args[y]}}{\tilde{\Role}'}{G}\\ \forall i \logdot \GetType[\Gamma']{\Chan_i} = \AT{T'_{i + n}}{\Role'_{i + n}} \quad \forall i \logdot \ProjS{G\!\Set[]{ \Subst{\tilde{\Args[v]}}{\tilde{\Args[y]}} }}{}{\Role_i} = T'_i \quad \forall j \logdot \ProjS{G\!\Set[]{ \Subst{\tilde{\Args[v]}}{\tilde{\Args[y]}} }}{}{\Role'_j} = T'_{j + n} \quad \vdash \Typed{\tilde{\Args[v]}}{\tilde{\Sort}} \quad \GetType[\Gamma']{\Chan[k]} = {\Prot\!\Set[]{ \Subst{\tilde{\Args[v]}}{\tilde{\Args[y]}} }} \end{array}}{\Gamma' \vdash \PDecl{\Chan[k]}{\Chan[s]}{\tilde{\Args[v]}}{\tilde{\Chan}}{\tilde{\Role}}{P^*} \triangleright \Delta_P} (\mathsf{New}) \end{align*} where $ \Delta_P = \Delta_P', \Typed{\AT{\Chan[s]}{\Role''}}{\LTCall{\Prot}{G}{\tilde{\Args[v]}}{\Typed{\tilde{\Args[y]}}{\tilde{\Sort}}}{\tilde{\Role}'}{T}} $. Note that Rule~$ (\mathsf{subs}) $ replaces $ P^* $ by $ \PPar{P^*}{\PPar{\POutS{\Chan_1}{\Chan[k]}}{\PPar{\ldots}{\POutS{\Chan_m}{\Chan[k]}}}} $. To obtain a type derivation for this term, we first apply Rule~$ (\mathsf{Pa}) $ $ m $ times to split the parallel components of the process.
Thereby $ \Delta_P', \Typed{\AT{\Chan[s]}{\Role''}}{T}, \Typed{\ATI{\Chan[k]}{\Role_1}}{T'_1}, \ldots, \Typed{\ATI{\Chan[k]}{\Role_n}}{T'_n}, \Typed{\ATE{\Chan[k]}{\Role'_1}}{T'_{n + 1}}, \ldots, \Typed{\ATE{\Chan[k]}{\Role'_m}}{T'_{n + m}} $ is split up into $ \Delta_P', \Typed{\AT{\Chan[s]}{\Role''}}{T}, \Typed{\ATI{\Chan[k]}{\Role_1}}{T'_1}, \ldots, \Typed{\ATI{\Chan[k]}{\Role_n}}{T'_n} $ and $ m $ instances of $ \Typed{\ATE{\Chan[k]}{\Role'_i}}{T'_{n + i}} $. Because of $ \forall i \logdot \GetType[\Gamma']{\Chan_i} = \AT{T'_{i + n}}{\Role'_{i + n}} $, for each $ \POutS{\Chan_i}{\Chan[k]} $ we have \begin{align*} \dfrac{\dfrac{}{\Gamma' \vdash \PEnd \triangleright \emptyset} (\mathsf{N}) \quad \GetType[\Gamma']{\Chan_i} = \AT{T'_{i + n}}{\Role'_{i + n}}}{\Gamma' \vdash \POutS{\Chan_i}{\Chan[k]} \triangleright \Typed{\ATE{\Chan[k]}{\Role'_i}}{T'_{n + i}}} (\mathsf{O}) \end{align*} Finally, with Lemma~\ref{lem:typeEC}~(2), we have \begin{align*} \Gamma \vdash P' \triangleright \left( \Delta_P', \Typed{\AT{\Chan[s]}{\Role''}}{T}, \Typed{\ATI{\Chan[k]}{\Role_1}}{T'_1}, \ldots, \Typed{\ATI{\Chan[k]}{\Role_n}}{T'_n}, \Typed{\ATE{\Chan[k]}{\Role'_1}}{T'_{n + 1}}, \ldots, \Typed{\ATE{\Chan[k]}{\Role'_m}}{T'_{n + m}} \right) \otimes \Delta_{\EC} \end{align*} It remains to show that: \begin{align*} \Delta_{\EC} \otimes \Delta_P \mapsto \left( \Delta_P', \Typed{\AT{\Chan[s]}{\Role''}}{T}, \Typed{\ATI{\Chan[k]}{\Role_1}}{T'_1}, \ldots, \Typed{\ATI{\Chan[k]}{\Role_n}}{T'_n}, \Typed{\ATE{\Chan[k]}{\Role'_1}}{T'_{n + 1}}, \ldots, \Typed{\ATE{\Chan[k]}{\Role'_m}}{T'_{n + m}} \right) \otimes \Delta_{\EC} \end{align*} Because of $ \forall i \logdot \ProjS{G\!\Set[]{ \Subst{\tilde{\Args[v]}}{\tilde{\Args[y]}} }}{}{\Role_i} = T'_i $ and $ \forall j \logdot \ProjS{G\!\Set[]{ \Subst{\tilde{\Args[v]}}{\tilde{\Args[y]}} }}{}{\Role'_j} = T'_{j + n} $, this follows from Rule~$ (\mathsf{subs}') $. \item[Case $ (\mathsf{comC}) $:] In this case we have \begin{align*} P = \AEC{\PPar{\POut{\Chan}{\tilde{\Chan[s]}}{P_1}}{\PInp{\Chan}{\tilde{\Args}}{P_2}}} \quad \text{ and } \quad P' = \AEC{\PPar{P_1}{P_2\!\Set[]{ \Subst{\tilde{\Chan[s]}}{\tilde{\Args}} }}} \end{align*} With $ \Gamma \vdash P \triangleright \Delta $ and Lemma~\ref{lem:typeEC}~(1), there exist $ \Delta_P, \Delta_{\EC}, \Gamma' $ such that $ \Gamma \subseteq \Gamma' $, $ \Delta = \Delta_{\EC} \otimes \Delta_P $, and $ \Gamma' \vdash \PPar{\POut{\Chan}{\tilde{\Chan[s]}}{P_1}}{\PInp{\Chan}{\tilde{\Args}}{P_2}} \triangleright \Delta_P $. By the rules in Figure~\ref{fig:typingRules} the proof of the judgement has to start (modulo Rule~(\textsf{S2})) as follows \begin{align*} \dfrac{\dfrac{\Gamma' \vdash P_1 \triangleright \Delta_{P1} \quad \GetType[\Gamma']{\Chan} = \AT{T}{\Role}}{\Gamma' \vdash \POut{\Chan}{\tilde{\Chan[s]}}{P_1} \triangleright \Delta_{P1}, \Typed{\ATE{\Chan[s]}{\Role}}{T}} (\mathsf{O}) \quad \dfrac{\Gamma' \vdash P_2 \triangleright \Delta_{P2}, \Typed{\AT{\Args}{\Role}}{T} \quad \GetType[\Gamma']{\Chan} = \AT{T}{\Role}}{\Gamma' \vdash \PInp{\Chan}{\tilde{\Args}}{P_2} \triangleright \Delta_{P2}} (\mathsf{I})}{\Gamma' \vdash \PPar{\POut{\Chan}{\tilde{\Chan[s]}}{P_1}}{\PInp{\Chan}{\tilde{\Args}}{P_2}} \triangleright \Delta_P} (\mathsf{Pa}) \end{align*} where $ \Delta_P = \Delta_{P1} \otimes \Delta_{P2}, \Typed{\ATE{\Chan[s]}{\Role}}{T} $.
By Lemma~\ref{lem:typeSubstA}, $ \Gamma' \vdash P_2 \triangleright \Delta_{P2}, \Typed{\AT{\Args}{\Role}}{T} $ implies $ \Gamma' \vdash P_2\!\Set[]{\Subst{\Chan[s]}{\Args}} \triangleright \Delta_{P2}, \Typed{\AT{\Chan[s]}{\Role}}{T} $. With $ \Gamma' \vdash P_1 \triangleright \Delta_{P1} $ and since $ \Delta_{P1} \otimes \Delta_{P2} $ is defined and there is no type for $ \AT{\Chan[s]}{\Role} $ in $ \Delta_{P1} $, we obtain \begin{align*} \dfrac{\Gamma' \vdash P_1 \triangleright \Delta_{P1} \quad \Gamma' \vdash P_2\!\Set[]{\Subst{\Chan[s]}{\Args}} \triangleright \Delta_{P2}, \Typed{\AT{\Chan[s]}{\Role}}{T}}{\Gamma' \vdash \PPar{P_1}{P_2\!\Set[]{\Subst{\Chan[s]}{\Args}}} \triangleright \Delta_{P1} \otimes \Delta_{P2}, \Typed{\AT{\Chan[s]}{\Role}}{T}} (\mathsf{Pa}) \end{align*} Finally, with Lemma~\ref{lem:typeEC}~(2), we have $ \Gamma \vdash P' \triangleright \left( \Delta_{P1} \otimes \Delta_{P2}, \Typed{\AT{\Chan[s]}{\Role}}{T} \right) \otimes \Delta_{\EC} $. It remains to show that $ \Delta_{\EC} \otimes \Delta_P \mapsto \left( \Delta_{P1} \otimes \Delta_{P2}, \Typed{\AT{\Chan[s]}{\Role}}{T} \right) \otimes \Delta_{\EC} $. Because $ \Delta_P = \Delta_{P1} \otimes \Delta_{P2}, \Typed{\ATE{\Chan[s]}{\Role}}{T} $, we obtain \begin{align*} \dfrac{\dfrac{}{\Delta_P \mapsto \Delta_{P1} \otimes \Delta_{P2}, \Typed{\AT{\Chan[s]}{\Role}}{T}} (\mathsf{comC}')}{\Delta_{\EC} \otimes \Delta_P \mapsto \left( \Delta_{P1} \otimes \Delta_{P2}, \Typed{\AT{\Chan[s]}{\Role}}{T} \right) \otimes \Delta_{\EC}} (\mathsf{par}) \end{align*} \item[Case $ (\mathsf{join}) $:] In this case we have \begin{align*} P = \AEC{\PPar{\PReq{\Chan[s]}{\Role_1}{\Role_2}{\Role_3}{\Chan[k]}{P_1}}{\PEnt{\Chan[s]}{\Role_1}{\Role_2}{\Role_3}{\Args}{P_2}}} \quad \text{ and } \quad P' = \AEC{\PPar{P_1}{P_2\!\Set[]{ \Subst{\Chan[k]}{\Args} }}} \end{align*} With $ \Gamma \vdash P \triangleright \Delta $ and Lemma~\ref{lem:typeEC}~(1), there exist $ \Delta_P, \Delta_{\EC}, \Gamma' $ such that $ \Gamma \subseteq \Gamma' $, $ \Delta = \Delta_{\EC} \otimes \Delta_P $, and $ \Gamma' \vdash \PPar{\PReq{\Chan[s]}{\Role_1}{\Role_2}{\Role_3}{\Chan[k]}{P_1}}{\PEnt{\Chan[s]}{\Role_1}{\Role_2}{\Role_3}{\Args}{P_2}} \triangleright \Delta_P $.
By the rules in Figure~\ref{fig:typingRules} the proof of the judgement has to start (modulo Rule~(\textsf{S2})) as follows \begin{align*} \dfrac{\dfrac{\Gamma' \vdash P_1 \triangleright \Delta_{P1}, \Typed{\AT{\Chan[s]}{\Role_1}}{T_1} \quad \GetType[\Gamma']{\Prot} = \TypeOfProt{\tilde{\Role}_4}{\tilde{\Args[y]}}{\tilde{\Role}_5}{G} \quad \ProjS{G\!\Set[]{ \Subst{\tilde{\Args[v]}}{\tilde{\Args[y]}} }}{}{\Role_3} = T_3}{\Gamma' \vdash \PReq{\Chan[s]}{\Role_1}{\Role_2}{\Role_3}{\Chan[k]}{P_1} \triangleright \Delta_{P1}, \Typed{\AT{\Chan[s]}{\Role_1}}{\LTReq{\Prot}{\Role_3}{\tilde{\Args[v]}}{\Role_2}{T_1}}, \Typed{\ATI{\Chan[k]}{\Role_3}}{T_3}} (\mathsf{P}) \quad D}{\Gamma' \vdash \PPar{\PReq{\Chan[s]}{\Role_1}{\Role_2}{\Role_3}{\Chan[k]}{P_1}}{\PEnt{\Chan[s]}{\Role_1}{\Role_2}{\Role_3}{\Args}{P_2}} \triangleright \Delta_P} (\mathsf{Pa}) \end{align*} with $ D = $ \begin{align*} \dfrac{\Gamma' \vdash P_2 \triangleright \Delta_{P2}, \Typed{\AT{\Chan[s]}{\Role_2}}{T_2}, \Typed{\AT{\Args}{\Role_3}}{T_3} \quad \GetType[\Gamma']{\Prot} = \TypeOfProt{\tilde{\Role}_4}{\tilde{\Args[y]}}{\tilde{\Role}_5}{G} \quad \ProjS{G\!\Set[]{ \Subst{\tilde{\Args[v]}}{\tilde{\Args[y]}} }}{}{\Role_3} = T_3}{\Gamma' \vdash \PEnt{\Chan[s]}{\Role_1}{\Role_2}{\Role_3}{\Args}{P_2} \triangleright \Delta_{P2}, \Typed{\AT{\Chan[s]}{\Role_2}}{\LTEnt{\Prot}{\Role_3}{\tilde{\Args[v]}}{\Role_1}{T_2}}} (\mathsf{J}) \end{align*} where $ \Delta_P = \Delta_{P1}, \Typed{\AT{\Chan[s]}{\Role_1}}{\LTReq{\Prot}{\Role_3}{\tilde{\Args[v]}}{\Role_2}{T_1}}, \Typed{\ATI{\Chan[k]}{\Role_3}}{T_3} \otimes \Delta_{P2}, \Typed{\AT{\Chan[s]}{\Role_2}}{\LTEnt{\Prot}{\Role_3}{\tilde{\Args[v]}}{\Role_1}{T_2}} $. By Lemma~\ref{lem:typeSubstA}, $ \Gamma' \vdash P_2 \triangleright \Delta_{P2}, \Typed{\AT{\Chan[s]}{\Role_2}}{T_2}, \Typed{\AT{\Args}{\Role_3}}{T_3} $ implies $ \Gamma' \vdash P_2\!\Set[]{\Subst{\Chan[k]}{\Args}} \triangleright \Delta_{P2}, \Typed{\AT{\Chan[s]}{\Role_2}}{T_2}, \Typed{\AT{\Chan[k]}{\Role_3}}{T_3} $. With $ \Gamma' \vdash P_1 \triangleright \Delta_{P1}, \Typed{\AT{\Chan[s]}{\Role_1}}{T_1} $ and since $ \Delta_{P1} \otimes \Delta_{P2} $ is defined, we obtain \begin{align*} \dfrac{\Gamma' \vdash P_1 \triangleright \Delta_{P1}, \Typed{\AT{\Chan[s]}{\Role_1}}{T_1} \quad \Gamma' \vdash P_2\!\Set[]{\Subst{\Chan[k]}{\Args}} \triangleright \Delta_{P2}, \Typed{\AT{\Chan[s]}{\Role_2}}{T_2}, \Typed{\AT{\Chan[k]}{\Role_3}}{T_3}}{\Gamma' \vdash \PPar{P_1}{P_2\!\Set[]{\Subst{\Chan[k]}{\Args}}} \triangleright \Delta_{P1}, \Typed{\AT{\Chan[s]}{\Role_1}}{T_1} \otimes \Delta_{P2}, \Typed{\AT{\Chan[s]}{\Role_2}}{T_2}, \Typed{\AT{\Chan[k]}{\Role_3}}{T_3}} (\mathsf{Pa}) \end{align*} Finally, with Lemma~\ref{lem:typeEC}~(2), we have $ \Gamma \vdash P' \triangleright \left( \Delta_{P1}, \Typed{\AT{\Chan[s]}{\Role_1}}{T_1} \otimes \Delta_{P2}, \Typed{\AT{\Chan[s]}{\Role_2}}{T_2}, \Typed{\AT{\Chan[k]}{\Role_3}}{T_3} \right) \otimes \Delta_{\EC} $. 
It remains to show that: \begin{align*} \Delta_{\EC} \otimes \Delta_P \mapsto \left( \Delta_{P1}, \Typed{\AT{\Chan[s]}{\Role_1}}{T_1} \otimes \Delta_{P2}, \Typed{\AT{\Chan[s]}{\Role_2}}{T_2}, \Typed{\AT{\Chan[k]}{\Role_3}}{T_3} \right) \otimes \Delta_{\EC} \end{align*} Because $ \Delta_P = \Delta_{P1}, \Typed{\AT{\Chan[s]}{\Role_1}}{\LTReq{\Prot}{\Role_3}{\tilde{\Args[v]}}{\Role_2}{T_1}}, \Typed{\ATI{\Chan[k]}{\Role_3}}{T_3} \otimes \Delta_{P2}, \Typed{\AT{\Chan[s]}{\Role_2}}{\LTEnt{\Prot}{\Role_3}{\tilde{\Args[v]}}{\Role_1}{T_2}} $, we obtain \begin{align*} \dfrac{\dfrac{}{\Delta_P \mapsto \Delta_{P1} \otimes \Delta_{P2}, \Typed{\AT{\Chan[s]}{\Role_1}}{T_1}, \Typed{\AT{\Chan[s]}{\Role_2}}{T_2}, \Typed{\AT{\Chan[k]}{\Role_3}}{T_3}} (\mathsf{join}')}{\Delta_{\EC} \otimes \Delta_P \mapsto \left( \Delta_{P1}, \Typed{\AT{\Chan[s]}{\Role_1}}{T_1} \otimes \Delta_{P2}, \Typed{\AT{\Chan[s]}{\Role_2}}{T_2}, \Typed{\AT{\Chan[k]}{\Role_3}}{T_3} \right) \otimes \Delta_{\EC}} (\mathsf{par}) \end{align*} \item[Case $ (\mathsf{fail}) $:] In this case we have \begin{align*} P = \AEC{\POpt{\Role_1}{\tilde{\Role}}{P_1}{\tilde{\Args}}{\tilde{\Args[v]}}{P_2}} \quad \text{ and } \quad P' = \AEC{P_2\!\Set[]{ \Subst{\tilde{\Args[v]}}{\tilde{\Args}} }} \end{align*} Because of $ \Gamma \vdash P \triangleright \Delta $ and Lemma~\ref{lem:typeEC}~(1), there exist $ \Delta_P, \Delta_{\EC}, \Gamma' $ such that $ \Gamma \subseteq \Gamma' $, $ \Delta = \Delta_{\EC} \otimes \Delta_P $, and $ \Gamma' \vdash \POpt{\Role_1}{\tilde{\Role}}{P_1}{\tilde{\Args}}{\tilde{\Args[v]}}{P_2} \triangleright \Delta_P $. By the rules in Figure~\ref{fig:typingRules} the proof of the judgement has to start (modulo Rule~(\textsf{S2})) as follows: \begin{align*} \dfrac{\Gamma' \vdash P_1 \triangleright \Delta_{P1}, \Typed{\AT{\Chan[s]}{\Role_1}}{T_1}, \Typed{\Role_1}{\OV{\tilde{\Sort}}} \quad \nexists \Role', \tilde{K} \logdot \Typed{\Role'}{\OV{\tilde{K}}} \in \Delta_{P1} \quad \Gamma' \vdash P_2 \triangleright \Delta_{P2}, \Typed{\AT{\Chan[s]}{\Role_1}}{T_1'} \quad \vdash \Typed{\tilde{\Args}}{\tilde{\Sort}} \quad \vdash \Typed{\tilde{\Args[v]}}{\tilde{\Sort}}}{\Gamma' \vdash \POpt{\Role_1}{\tilde{\Role}}{P_1}{\tilde{\Args}}{\tilde{\Args[v]}}{P_2} \triangleright \Delta_P}(\mathsf{Opt}) \end{align*} where $ \Delta_P = \Delta_{P1} \otimes \Delta_{P2}, \Typed{\AT{\Chan[s]}{\Role_1}}{\LTOpt{\tilde{\Role}'}{T_1}{\Typed{\tilde{\Args[y]}}{\tilde{\Sort}}}{T_1'}} $ and $ \tilde{\Role} \ \dot{=} \ \tilde{\Role}' $. By $ \Gamma' \vdash P_2 \triangleright \Delta_{P2}, \Typed{\AT{\Chan[s]}{\Role_1}}{T_1'} $, $ \vdash \Typed{\tilde{\Args}}{\tilde{\Sort}} $, $ \vdash \Typed{\tilde{\Args[v]}}{\tilde{\Sort}} $, and Lemma~\ref{lem:typeSubstB}, we have $ \Gamma' \vdash P_2\!\Set[]{ \Subst{\tilde{\Args[v]}}{\tilde{\Args}} } \triangleright \Delta_{P2}, \Typed{\AT{\Chan[s]}{\Role_1}}{T_1'} $. Because of $ \nexists \Role', \tilde{K} \logdot \Typed{\Role'}{\OV{\tilde{K}}} \in \Delta_{P1} $, we have $ \Typed{\Role'}{\OV{\tilde{K}}} \in \Delta_P $ iff $ \Typed{\Role'}{\OV{\tilde{K}}} \in \Delta_{P2} $ iff $ \Typed{\Role'}{\OV{\tilde{K}}} \in \Delta_{P2}, \Typed{\AT{\Chan[s]}{\Role_1}}{T_1'} $. With Lemma~\ref{lem:typeEC}~(2) then $ \Gamma \vdash P' \triangleright \left( \Delta_{P2}, \Typed{\AT{\Chan[s]}{\Role_1}}{T_1'} \right) \otimes \Delta_{\EC} $. 
Because $ \Delta_P = \Delta_{P1} \otimes \Delta_{P2}, \Typed{\AT{\Chan[s]}{\Role_1}}{\LTOpt{\Role_2}{T_1}{\Typed{\tilde{\Args[y]}}{\tilde{\Sort}}}{T_1'}} $ and $ \Gamma' \vdash P_1 \triangleright \Delta_{P1}, \Typed{\AT{\Chan[s]}{\Role_1}}{T_1}, \Typed{\Role_1}{\OV{\tilde{\Sort}}} $, we obtain \begin{align*} \dfrac{\dfrac{}{\Delta_P \mapsto \Delta_{P2}, \Typed{\AT{\Chan[s]}{\Role_1}}{T_1'}} (\mathsf{fail}')}{\Delta_{\EC} \otimes \Delta_P \mapsto \left( \Delta_{P2}, \Typed{\AT{\Chan[s]}{\Role_1}}{T_1'} \right) \otimes \Delta_{\EC}} (\mathsf{par}) \end{align*} \item[Case $ (\mathsf{succ}) $:] In this case we have \begin{align*} P = \AEC{\POpt{\Role_1}{\tilde{\Role}}{\POptEnd{\Role_1}{\tilde{\Args[v]}_1}}{\tilde{\Args}}{\tilde{\Args[v]}_2}{P_1}} \; \text{ and } \; P' = \AEC{P_1\!\Set[]{ \Subst{\tilde{\Args[v]}_1}{\tilde{\Args}} }} \end{align*} With $ \Gamma \vdash P \triangleright \Delta $ and Lemma~\ref{lem:typeEC}~(1), there exist $ \Delta_P, \Delta_{\EC}, \Gamma' $ such that $ \Gamma \subseteq \Gamma' $, $ \Delta = \Delta_{\EC} \otimes \Delta_P $, and $ \Gamma' \vdash \POpt{\Role_1}{\tilde{\Role}}{\POptEnd{\Role_1}{\tilde{\Args[v]}_1}}{\tilde{\Args}}{\tilde{\Args[v]}_2}{P_1} \triangleright \Delta_P $. By the rules in Figure~\ref{fig:typingRules} the proof of the judgement has to start (modulo Rule~(\textsf{S2})) as follows: \begin{align*} \dfrac{\dfrac{\vdash \Typed{\tilde{\Args[v]}_1}{\tilde{\Sort}}}{\Gamma' \vdash \POptEnd{\Role_1}{\tilde{\Args[v]}_1} \triangleright \Typed{\Role_1}{\OV{\tilde{\Sort}}}}(\mathsf{OptE}) \quad \Gamma' \vdash P_1 \triangleright \Delta_{P1}, \Typed{\AT{\Chan[s]}{\Role_1}}{T_1'} \quad \vdash \Typed{\tilde{\Args}}{\tilde{\Sort}} \quad \vdash \Typed{\tilde{\Args[v]}_2}{\tilde{\Sort}}}{\Gamma' \vdash \POpt{\Role_1}{\tilde{\Role}}{\POptEnd{\Role_1}{\tilde{\Args[v]}_1}}{\tilde{\Args}}{\tilde{\Args[v]}_2}{P_1} \triangleright \Delta_P}(\mathsf{Opt}) \end{align*} where $ \Delta_P = \Delta_{P1}, \Typed{\AT{\Chan[s]}{\Role_1}}{\LTOpt{\tilde{\Role}'}{\LTEnd}{\Typed{\tilde{\Args[z]}}{\tilde{\Sort}}}{T_1'}} $ and $ \tilde{\Role} \ \dot{=} \ \tilde{\Role}' $. Since the global environment cannot contain two different declarations of output values for the same pair $ \AT{\Chan[s]}{\Role_1} $, the kinds of the values $ \tilde{\Args[v]}_1 $ and $ \tilde{\Args[v]}_2 $ have to be the same, \ie $ \vdash \Typed{\tilde{\Args[v]}_1}{\tilde{\Sort}} $ and $ \vdash \Typed{\tilde{\Args[v]}_2}{\tilde{\Sort}} $. Because of $ \Gamma' \vdash P_1 \triangleright \Delta_{P1}, \Typed{\AT{\Chan[s]}{\Role_1}}{T_1'} $, $ \vdash \Typed{\tilde{\Args[v]}_1}{\tilde{\Sort}} $, $ \vdash \Typed{\tilde{\Args}}{\tilde{\Sort}} $, and Lemma~\ref{lem:typeSubstB}, we have $ \Gamma' \vdash P_1\!\Set[]{ \Subst{\tilde{\Args[v]}_1}{\tilde{\Args}} } \triangleright \Delta_{P1}, \Typed{\AT{\Chan[s]}{\Role_1}}{T_1'} $. With Lemma~\ref{lem:typeEC}~(2) then $ \Gamma \vdash P' \triangleright \left( \Delta_{P1}, \Typed{\AT{\Chan[s]}{\Role_1}}{T_1'} \right) \otimes \Delta_{\EC} $. It remains to show that $ \Delta_{\EC} \otimes \Delta_P \mapsto \left( \Delta_{P1}, \Typed{\AT{\Chan[s]}{\Role_1}}{T_1'} \right) \otimes \Delta_{\EC} $. 
Because $ \Delta_P = \Delta_{P1}, \Typed{\AT{\Chan[s]}{\Role_1}}{\LTOpt{\tilde{\Role}}{\LTEnd}{\Typed{\tilde{\Args[z]}}{\tilde{\Sort}}}{T_1'}} $, we obtain \begin{align*} \dfrac{\dfrac{}{\Delta_P \mapsto \Delta_{P1}, \Typed{\AT{\Chan[s]}{\Role_1}}{T_1'}} (\mathsf{succ}')}{\Delta_{\EC} \otimes \Delta_P \mapsto \left( \Delta_{P1}, \Typed{\AT{\Chan[s]}{\Role_1}}{T_1'} \right) \otimes \Delta_{\EC}} (\mathsf{par}) \end{align*} \item[Case $ (\mathsf{cCO}) $:] In this case we have \begin{align*} P & = \AEC{\PPar{\AECR{\POpt{\Role_1}{\tilde{\Role}}{\PPar{\POut{\Chan}{\Chan[s]}{P_1}}{P_2}}{\tilde{\Args}_1}{\tilde{\Args[v]}_1}{P_3}}}{\AECR[E']{\POpt{\Role_2}{\tilde{\Role}}{\PPar{\PInp{\Chan}{\Args}{P_4}}{P_5}}{\tilde{\Args}_2}{\tilde{\Args[v]}_2}{P_6}}}}\\ P' & = \AEC{\PPar{\AECR{\POpt{\Role_1}{\tilde{\Role}}{\PPar{P_1}{P_2}}{\tilde{\Args}_1}{\tilde{\Args[v]}_1}{P_3}}}{\AECR[E']{\POpt{\Role_2}{\tilde{\Role}}{\PPar{P_4\!\Set[]{ \Subst{\Chan[s]}{\Args} }}{P_5}}{\tilde{\Args}_2}{\tilde{\Args[v]}_2}{P_6}}}} \end{align*} With $ \Gamma \vdash P \triangleright \Delta $ and Lemma~\ref{lem:typeEC}~(1), there exist $ \Delta_P, \Delta_{\EC}, \Gamma' $ such that $ \Gamma \subseteq \Gamma' $, $ \Delta = \Delta_{\EC} \otimes \Delta_P $, and \begin{align*} \Gamma' \vdash \PPar{\AECR{\POpt{\Role_1}{\tilde{\Role}}{\PPar{\POut{\Chan}{\Chan[s]}{P_1}}{P_2}}{\tilde{\Args}_1}{\tilde{\Args[v]}_1}{P_3}}}{\AECR[E']{\POpt{\Role_2}{\tilde{\Role}}{\PPar{\PInp{\Chan}{\Args}{P_4}}{P_5}}{\tilde{\Args}_2}{\tilde{\Args[v]}_2}{P_6}}} \triangleright \Delta_P \end{align*} By the rules in Figure~\ref{fig:typingRules} the proof of the judgement has to start (modulo Rule~(\textsf{S2})) with Rule~$ (\mathsf{Pa}) $, that splits $ \Delta_P $ such that $ \Delta_P = \Delta_{\ECR, P1-3} \otimes \Delta_{\ECR', P4-6} $. Again by Lemma~\ref{lem:typeEC}~(1), there exist $ \Delta_{P1-3} $, $ \Delta_{P4-6} $, $ \Delta_{\ECR} $, $ \Delta_{\ECR'} $, $ \Gamma_1 $, and $ \Gamma_2 $ such that $ \Gamma' \subseteq \Gamma_1 $, $ \Gamma' \subseteq \Gamma_2 $, $ \Delta_{\ECR, P1-3} = \Delta_{\ECR} \otimes \Delta_{P1-3} $, $ \Delta_{\ECR', P4-6} = \Delta_{\ECR'} \otimes \Delta_{P4-6} $, $ \Gamma_1 \vdash \POpt{\Role_1}{\tilde{\Role}}{\PPar{\POut{\Chan}{\Chan[s]}{P_1}}{P_2}}{\tilde{\Args}_1}{\tilde{\Args[v]}_1}{P_3} \triangleright \Delta_{P1-3} $, and $ \Gamma_2 \vdash \POpt{\Role_2}{\tilde{\Role}}{\PPar{\PInp{\Chan}{\Args}{P_4}}{P_5}}{\tilde{\Args}_2}{\tilde{\Args[v]}_2}{P_6} \triangleright \Delta_{P4-6} $. 
Then: \begin{align*} \hspace*{-1em}\dfrac{\dfrac{\begin{array}{l} \dfrac{\Gamma_1 \vdash P_1 \triangleright \Delta_{P1} \quad \GetType[\Gamma_1]{\Chan} = \AT{T_1}{\Role_3}}{\Gamma_1 \vdash \POut{\Chan}{\Chan[s]}{P_1} \triangleright \Delta_{P1}, \Typed{\ATE{\Chan[s]}{\Role_3}}{T_1}}(\mathsf{O}) \quad \Gamma_1 \vdash P_2 \triangleright \Delta_{P2} \end{array}}{\Gamma_1 \vdash \PPar{\POut{\Chan}{\Chan[s]}{P_1}}{P_2} \triangleright \left( \Delta_{P1}, \Typed{\ATE{\Chan[s]}{\Role_3}}{T_1} \right) \otimes \Delta_{P2}}(\mathsf{Pa}) \begin{array}{l} \Gamma_1 \vdash P_3 \triangleright \Delta_{P3}, \Typed{\AT{\Chan[k]_1}{\Role_1}}{T_1'}\\ \vdash \Typed{\tilde{\Args}_1}{\tilde{\Sort}} \quad \vdash \Typed{\tilde{\Args[v]}_1}{\tilde{\Sort}} \end{array}}{\Gamma_1 \vdash \POpt{\Role_1}{\tilde{\Role}}{\PPar{\POut{\Chan}{\Chan[s]}{P_1}}{P_2}}{\tilde{\Args}_1}{\tilde{\Args[v]}_1}{P_3} \triangleright \Delta_{P1-3}}(\mathsf{Opt}) \end{align*} where $ \left( \Delta_{P1}, \Typed{\ATE{\Chan[s]}{\Role_3}}{T_1} \right) \otimes \Delta_{P2} = \Delta_{P1-2}, \Typed{\AT{\Chan[k]_1}{\Role_1}}{T_1}, \Typed{\Role_1}{\OV{\tilde{\Sort}}} $ and $ \nexists \Role', \tilde{\Sort[K]} \logdot \Typed{\Role'}{\OV{\tilde{\Sort[K]}}} \in \Delta_{P1-2} $ and $ \Delta_{P1-3} = \Delta_{P1-2} \otimes \Delta_{P3}, \Typed{\AT{\Chan[k]_1}{\Role_1}}{\LTOpt{\Role_2}{T_1}{\Typed{\tilde{\Args[y]}_1}{\tilde{\Sort}}}{T_1'}} $. Since $ \left( \Delta_{P1}, \Typed{\ATE{\Chan[s]}{\Role_3}}{T_1} \right) \otimes \Delta_{P2} $ is defined and because $ \Chan[s] \neq \Chan[k] $, we obtain \begin{align*} \dfrac{\dfrac{\begin{array}{l} \Gamma_1 \vdash P_1 \triangleright \Delta_{P1} \quad \Gamma_1 \vdash P_2 \triangleright \Delta_{P2} \end{array}}{\Gamma_1 \vdash \PPar{P_1}{P_2} \triangleright \Delta_{P1} \otimes \Delta_{P2}}(\mathsf{Pa}) \quad \begin{array}{l} \Gamma_1 \vdash P_3 \triangleright \Delta_{P3}, \Typed{\AT{\Chan[k]_1}{\Role_1}}{T_1'}\\ \vdash \Typed{\tilde{\Args}_1}{\tilde{\Sort}} \quad \vdash \Typed{\tilde{\Args[v]}_1}{\tilde{\Sort}} \end{array}}{\Gamma_1 \vdash \POpt{\Role_1}{\tilde{\Role}}{\PPar{P_1}{P_2}}{\tilde{\Args}_1}{\tilde{\Args[v]}_1}{P_3} \triangleright \Delta_{P1-3}'}(\mathsf{Opt}) \end{align*} where $ \Delta_{P1} \otimes \Delta_{P2} = \Delta_{P1-2}', \Typed{\AT{\Chan[k]_1}{\Role_1}}{T_1''}, \Typed{\Role_1}{\OV{\tilde{\Sort}}} $ and \begin{align*} \Delta_{P1-3}' = \Delta_{P1-2}' \otimes \Delta_{P3}, \Typed{\AT{\Chan[k]_1}{\Role_1}}{\LTOpt{\Role_2}{T_1''}{\Typed{\tilde{\Args[y]}_1}{\tilde{\Sort}}}{T_1'}} \end{align*} Note that $ \Delta_{P1-3}' $ is obtained from $ \Delta_{P1-3} $ by removing a capability on $ \AT{\Chan[s]}{\Role_3} $ and changing a capability on $ \AT{\Chan[k]_1}{\Role_1} $. With Lemma~\ref{lem:typeEC}~(2), then $ \Gamma' \vdash \AECR{\POpt{\Role_1}{\tilde{\Role}}{\PPar{P_1}{P_2}}{\tilde{\Args}_1}{\tilde{\Args[v]}_1}{P_3}} \triangleright \Delta_{\ECR, P1-3}' $, where $ \Delta_{\ECR, P1-3}' = \Delta_{P1-3}' \otimes \Delta_{\ECR} $. 
Moreover, because $ \GetType[\Gamma_2]{\Chan} = \AT{T_1}{\Role_3} $, \begin{align*} \hspace*{-1em}\dfrac{\dfrac{\begin{array}{l} \dfrac{\Gamma_2 \vdash P_4 \triangleright \Delta_{P4}, \Typed{\AT{\Args}{\Role_3}}{T_1} \quad \GetType[\Gamma_2]{\Chan} = \AT{T_1}{\Role_3}}{\Gamma_2 \vdash \PInp{\Chan}{\Args}{P_4} \triangleright \Delta_{P4}}(\mathsf{I}) \quad \Gamma_2 \vdash P_5 \triangleright \Delta_{P5} \end{array}}{\Gamma_2 \vdash \PPar{\PInp{\Chan}{\Args}{P_4}}{P_5} \triangleright \Delta_{P4} \otimes \Delta_{P5}}(\mathsf{Pa}) \begin{array}{l} \Gamma_2 \vdash P_6 \triangleright \Delta_{P6}, \Typed{\AT{\Chan[k]_2}{\Role_2}}{T_2'}\\ \vdash \Typed{\tilde{\Args}_2}{\tilde{\Sort}'} \quad \vdash \Typed{\tilde{\Args[v]}_2}{\tilde{\Sort}'} \end{array}}{\Gamma_2 \vdash \POpt{\Role_2}{\tilde{\Role}}{\PPar{\PInp{\Chan}{\Args}{P_4}}{P_5}}{\tilde{\Args}_2}{\tilde{\Args[v]}_2}{P_6} \triangleright \Delta_{P4-6}}(\mathsf{Opt}) \end{align*} where $ \Delta_{P4} \otimes \Delta_{P5} = \Delta_{P4-5}, \Typed{\AT{\Chan[k]_2}{\Role_2}}{T_2}, \Typed{\Role_2}{\OV{\tilde{\Sort'}}} $, $ \nexists \Role', \tilde{\Sort[K]} \logdot \Typed{\Role'}{\OV{\tilde{\Sort[K]}}} \in \Delta_{P4-5} $, and we have $ \Delta_{P4-6} = \Delta_{P4-5} \otimes \Delta_{P6}, \Typed{\AT{\Chan[k]_2}{\Role_2}}{\LTOpt{\tilde{\Role}}{T_2}{\Typed{\tilde{\Args[y]}_2}{\tilde{\Sort}'}}{T_2'}} $. By $ \Gamma_2 \vdash P_4 \triangleright \Delta_{P4}, \Typed{\AT{\Args}{\Role_3}}{T_1} $ and Lemma~\ref{lem:typeSubstA}, we have $ \Gamma_2 \vdash P_4\!\Set[]{ \Subst{\Chan[s]}{\Args} } \triangleright \Delta_{P4}, \Typed{\AT{\Chan[s]}{\Role_3}}{T_1} $. Then \begin{align*} \hspace*{-1em}\dfrac{\dfrac{\Gamma_2 \vdash P_4\!\Set[]{ \Subst{\Chan[s]}{\Args} } \triangleright \Delta_{P4}, \Typed{\AT{\Chan[s]}{\Role_3}}{T_1} \quad \Gamma_2 \vdash P_5 \triangleright \Delta_{P5}}{\Gamma_2 \vdash \PPar{P_4\!\Set[]{ \Subst{\Chan[s]}{\Args} }}{P_5} \triangleright \left( \Delta_{P4}, \Typed{\AT{\Chan[s]}{\Role_3}}{T_1} \right) \otimes \Delta_{P5}}(\mathsf{Pa}) \begin{array}{l} \Gamma_2 \vdash P_6 \triangleright \Delta_{P6}, \Typed{\AT{\Chan[k]_2}{\Role_2}}{T_2'}\\ \vdash \Typed{\tilde{\Args}_2}{\tilde{\Sort}'} \quad \vdash \Typed{\tilde{\Args[v]}_2}{\tilde{\Sort}'} \end{array}}{\Gamma_2 \vdash \POpt{\Role_2}{\tilde{\Role}}{\PPar{P_4\!\Set[]{ \Subst{\Chan[s]}{\Args} }}{P_5}}{\tilde{\Args}_2}{\tilde{\Args[v]}_2}{P_6} \triangleright \Delta_{P4-6}'}(\mathsf{Opt}) \end{align*} where $ \left( \Delta_{P4}, \Typed{\AT{\Chan[s]}{\Role_3}}{T_1} \right) \otimes \Delta_{P5} = \Delta_{P4-5}'', \Typed{\AT{\Chan[k]_2}{\Role_2}}{T_2}, \Typed{\Role_2}{\OV{\tilde{\Sort'}}} $ and we have $ \Delta_{P4-6}' = \Delta_{P4-5}'' \otimes \Delta_{P6}, \Typed{\AT{\Chan[k]_2}{\Role_2}}{\LTOpt{\tilde{\Role}}{T_2}{\Typed{\tilde{\Args[y]}_2}{\tilde{\Sort}'}}{T_2'}} $. Here $ \Delta_{P4-6}' $ is obtained from $ \Delta_{P4-6} $ by adding a single capability on $ \AT{\Chan[s]}{\Role_3} $. With Lemma~\ref{lem:typeEC}~(2), then $ \Gamma' \vdash \AECR[E']{\POpt{\Role_2}{\tilde{\Role}}{\PPar{P_4\!\Set[]{ \Subst{\Chan[s]}{\Args} }}{P_5}}{\tilde{\Args}_2}{\tilde{\Args[v]}_2}{P_6}} \triangleright \Delta_{\ECR', P4-6}' $, where $ \Delta_{\ECR', P4-6}' = \Delta_{P4-6}' \otimes \Delta_{\ECR'} $. Since $ \Delta_{\ECR, P1-3} \otimes \Delta_{\ECR', P4-6} $ is defined, so is $ \Delta_{\ECR, P1-3}' \otimes \Delta_{\ECR', P4-6}' $.
Hence, by Rule~$ (\mathsf{Pa}) $, the judgement $ \Gamma' \vdash \AECR{\POpt{\Role_1}{\tilde{\Role}}{\PPar{P_1}{P_2}}{\tilde{\Args}_1}{\tilde{\Args[v]}_1}{P_3}} \triangleright \Delta_{\ECR, P1-3}' $, and $ \Gamma' \vdash \AECR[E']{\POpt{\Role_2}{\tilde{\Role}}{\PPar{P_4\!\Set[]{ \Subst{\Chan[s]}{\Args} }}{P_5}}{\tilde{\Args}_2}{\tilde{\Args[v]}_2}{P_6}} \triangleright \Delta_{\ECR', P4-6}' $, we have \begin{align*} \Gamma' \vdash \PPar{\AECR{\POpt{\Role_1}{\tilde{\Role}}{\PPar{P_1}{P_2}}{\tilde{\Args}_1}{\tilde{\Args[v]}_1}{P_3}}}{\AECR[E']{\POpt{\Role_2}{\tilde{\Role}}{\PPar{P_4\!\Set[]{ \Subst{\Chan[s]}{\Args} }}{P_5}}{\tilde{\Args}_2}{\tilde{\Args[v]}_2}{P_6}}} \triangleright \; \Delta_{\ECR, P1-3}' \otimes \Delta_{\ECR', P4-6}' \end{align*} With Lemma~\ref{lem:typeEC}~(2) we conclude with $ \Gamma \vdash P' \triangleright \left( \Delta_{\ECR, P1-3}' \otimes \Delta_{\ECR', P4-6}' \right) \otimes \Delta_{\EC} $. It remains to show that $ \Delta_{\EC} \otimes \Delta_P \mapsto \left( \Delta_{\ECR, P1-3}' \otimes \Delta_{\ECR', P4-6}' \right) \otimes \Delta_{\EC} $. Because $ \Delta_P = \left( \Delta_{\ECR} \otimes \Delta_{P1-3} \right) \otimes \left( \Delta_{\ECR'} \otimes \Delta_{P4-6} \right) $ with $ \Delta_{P1-3} = \Delta_{P1-2} \otimes \Delta_{P3}, \Typed{\AT{\Chan[k]_1}{\Role_1}}{\LTOpt{\Role_2}{T_1}{\Typed{\tilde{\Args[y]}_1}{\tilde{\Sort}}}{T_1'}} $ and $ \Delta_{P4-6} = \Delta_{P4-5} \otimes \Delta_{P6}, \Typed{\AT{\Chan[k]_2}{\Role_2}}{\LTOpt{\Role_1}{T_2}{\Typed{\tilde{\Args[y]}_2}{\tilde{\Sort}'}}{T_2'}} $, we obtain \begin{align*} \dfrac{\dfrac{\dfrac{}{\left( \Delta_{P1-2} \otimes \Delta_{P3}, \Typed{\AT{\Chan[k]_1}{\Role_1}}{T_1} \right) \otimes \left( \Delta_{P4-5} \otimes \Delta_{P6}, \Typed{\AT{\Chan[k]_2}{\Role_2}}{T_2} \right) \mapsto \Delta_{P1-3}' \otimes \Delta_{P4-6}'} (\mathsf{comS}')}{\Delta_{P1-3} \otimes \Delta_{P4-6} \mapsto \Delta_{P1-3}' \otimes \Delta_{P4-6}'} (\mathsf{optCom})}{\Delta_{\EC} \otimes \Delta_P \mapsto \left( \Delta_{\ECR, P1-3}' \otimes \Delta_{\ECR', P4-6}' \right) \otimes \Delta_{\EC}} (\mathsf{par}) \end{align*} where we first reorder the session environments modulo $ \otimes $ and remove with Rule~$ (\mathsf{par}) $ all assignments on the contexts, \ie $ \Delta_{\ECR} $, $ \Delta_{\ECR'} $, and $ \Delta_{\EC} $. \item[Case $ (\mathsf{jO}) $:] In this case we have \begin{align*} P & = \AEC{\PPar{\AECR{P_{1-3}}}{\AECR[E']{P_{4-6}}}}\\ P_{1-3} & = \POpt{\Role_1}{\tilde{\Role}}{\PPar{\PReq{\Chan[s]}{\Role_3}{\Role_4}{\Role_5}{\Chan[k]}{P_1}}{P_2}}{\tilde{\Args}_1}{\tilde{\Args[v]}_1}{P_3}\\ P_{4-6} & = \POpt{\Role_2}{\tilde{\Role}}{\PPar{\PEnt{\Chan[s]}{\Role_3}{\Role_4}{\Role_5}{\Args}{P_4}}{P_5}}{\tilde{\Args}_2}{\tilde{\Args[v]}_2}{P_6}\\ P' & = \AEC{\PPar{\AECR{\POpt{\Role_1}{\tilde{\Role}}{\PPar{P_1}{P_2}}{\tilde{\Args}_1}{\tilde{\Args[v]}_1}{P_3}}}{\AECR[E']{\POpt{\Role_2}{\tilde{\Role}}{\PPar{P_4\!\Set[]{ \Subst{\Chan[k]}{\Args} }}{P_5}}{\tilde{\Args}_2}{\tilde{\Args[v]}_2}{P_6}}}} \end{align*} With $ \Gamma \vdash P \triangleright \Delta $ and Lemma~\ref{lem:typeEC}~(1), there exist $ \Delta_P, \Delta_{\EC}, \Gamma' $ such that $ \Gamma \subseteq \Gamma' $, $ \Delta = \Delta_{\EC} \otimes \Delta_P $, and $ \Gamma' \vdash \PPar{\AECR{P_{1-3}}}{\AECR[E']{P_{4-6}}} \triangleright \Delta_P $. By the rules in Figure~\ref{fig:typingRules} the proof of the judgement has to start (modulo Rule~(\textsf{S2})) with Rule~$ (\mathsf{Pa}) $, that splits $ \Delta_P $ such that $ \Delta_P = \Delta_{\ECR, P1-3} \otimes \Delta_{\ECR', P4-6} $. 
Again by Lemma~\ref{lem:typeEC}~(1), there exist $ \Delta_{P1-3} $, $ \Delta_{P4-6} $, $ \Delta_{\ECR} $, $ \Delta_{\ECR'} $, $ \Gamma_1 $, and $ \Gamma_2 $ such that $ \Gamma' \subseteq \Gamma_1 $, $ \Gamma' \subseteq \Gamma_2 $, $ \Delta_{\ECR, P1-3} = \Delta_{\ECR} \otimes \Delta_{P1-3} $, $ \Delta_{\ECR', P4-6} = \Delta_{\ECR'} \otimes \Delta_{P4-6} $, $ \Gamma_1 \vdash P_{1-3} \triangleright \Delta_{P1-3} $, and $ \Gamma_2 \vdash P_{4-6} \triangleright \Delta_{P4-6} $. Then: \begin{align*} \dfrac{\dfrac{\begin{array}{l} \dfrac{\begin{array}{l} \Gamma_1 \vdash P_1 \triangleright \Delta_{P1}, \Typed{\AT{\Chan[s]}{\Role_3}}{T_1}\\ \GetType[\Gamma_1]{\Prot} = \TypeOfProt{\tilde{\Role}_6}{\tilde{\Args[y]}_1}{\tilde{\Role}_7}{G} \quad \ProjS{G\!\Set[]{ \Subst{\tilde{\Args[z]}_1}{\tilde{\Args[y]}_1} }}{}{\Role_5} = T_5 \end{array}}{\Gamma_1 \vdash \PReq{\Chan[s]}{\Role_3}{\Role_4}{\Role_5}{\Chan[k]}{P_1} \triangleright \Delta_{P1}'}(\mathsf{P}) \quad \Gamma_1 \vdash P_2 \triangleright \Delta_{P2} \end{array}}{\Gamma_1 \vdash \PPar{\PReq{\Chan[s]}{\Role_3}{\Role_4}{\Role_5}{\Chan[k]}{P_1}}{P_2} \triangleright \Delta_{P1}' \otimes \Delta_{P2}}(\mathsf{Pa}) \begin{array}{l} \Gamma_1 \vdash P_3 \triangleright \Delta_{P3}, \Typed{\AT{\Chan[k]_1}{\Role_1}}{T_1'}\\ \vdash \Typed{\tilde{\Args}_1}{\tilde{\Sort}} \quad \vdash \Typed{\tilde{\Args[v]}_1}{\tilde{\Sort}} \end{array}}{\Gamma_1 \vdash \POpt{\Role_1}{\tilde{\Role}}{\PPar{\PReq{\Chan[s]}{\Role_3}{\Role_4}{\Role_5}{\Chan[k]}{P_1}}{P_2}}{\tilde{\Args}_1}{\tilde{\Args[v]}_1}{P_3} \triangleright \Delta_{P1-3}}(\mathsf{Opt}) \end{align*} where $ \nexists \Role', \tilde{\Sort[K]} \logdot \Typed{\Role'}{\OV{\tilde{\Sort[K]}}} \in \Delta_{P1-2} $ and \begin{align*} \Delta_{P1}' & = \Delta_{P1}, \Typed{\AT{\Chan[s]}{\Role_3}}{\LTReq{\Prot}{\Role_5}{\tilde{\Args[z]}_1}{\Role_4}{T_1}}, \Typed{\ATI{\Chan[k]}{\Role_5}}{T_5}\\ \Delta_{P1}' \otimes \Delta_{P2} & = \Delta_{P1-2}, \Typed{\AT{\Chan[k]_1}{\Role_1}}{T_1}, \Typed{\Role_1}{\OV{\tilde{\Sort}}}\\ \Delta_{P1-3} & = \Delta_{P1-2} \otimes \Delta_{P3}, \Typed{\AT{\Chan[k]_1}{\Role_1}}{\LTOpt{\tilde{\Role}}{T_1}{\Typed{\tilde{\Args[y]}_1}{\tilde{\Sort}}}{T_1'}} \end{align*} Since $ \Delta_{P1}' \otimes \Delta_{P2} $ is defined and because $ \Chan[s] \neq \Chan[k] $, we obtain \begin{align*} \dfrac{\dfrac{\begin{array}{l} \Gamma_1 \vdash P_1 \triangleright \Delta_{P1}, \Typed{\AT{\Chan[s]}{\Role_3}}{T_1} \quad \Gamma_1 \vdash P_2 \triangleright \Delta_{P2} \end{array}}{\Gamma_1 \vdash \PPar{P_1}{P_2} \triangleright \left( \Delta_{P1}, \Typed{\AT{\Chan[s]}{\Role_3}}{T_1} \right) \otimes \Delta_{P2}}(\mathsf{Pa}) \quad \begin{array}{l} \Gamma_1 \vdash P_3 \triangleright \Delta_{P3}, \Typed{\AT{\Chan[k]_1}{\Role_1}}{T_1'}\\ \vdash \Typed{\tilde{\Args}_1}{\tilde{\Sort}} \quad \vdash \Typed{\tilde{\Args[v]}_1}{\tilde{\Sort}} \end{array}}{\Gamma_1 \vdash \POpt{\Role_1}{\tilde{\Role}}{\PPar{P_1}{P_2}}{\tilde{\Args}_1}{\tilde{\Args[v]}_1}{P_3} \triangleright \Delta_{P1-3}'}(\mathsf{Opt}) \end{align*} where $ \left( \Delta_{P1}, \Typed{\AT{\Chan[s]}{\Role_3}}{T_1} \right) \otimes \Delta_{P2} = \Delta_{P1-2}', \Typed{\AT{\Chan[k]_1}{\Role^1}}{T_1} $ and \begin{align*} \Delta_{P1-3}' = \Delta_{P1-2}' \otimes \Delta_{P3}, \Typed{\AT{\Chan[k]_1}{\Role_1}}{\LTOpt{\tilde{\Role}}{T_1}{\Typed{\tilde{\Args[y]}_1}{\tilde{\Sort}}}{T_1'}}, \Typed{\Role_1}{\OV{\tilde{\Sort}}} \end{align*} Note that $ \Delta_{P1-3}' $ is obtained from $ \Delta_{P1-3} $ by removing a capability on $ \AT{\Chan[k]}{\Role_5} $ and reducing a 
capability on $ \AT{\Chan[s]}{\Role_3} $. With Lemma~\ref{lem:typeEC}~(2), then $ \Gamma' \vdash \AECR{\POpt{\Role_1}{\tilde{\Role}}{\PPar{P_1}{P_2}}{\tilde{\Args}_1}{\tilde{\Args[v]}_1}{P_3}} \triangleright \Delta_{\ECR, P1-3}' $, where $ \Delta_{\ECR, P1-3}' = \Delta_{P1-3}' \otimes \Delta_{\ECR} $. Moreover, because of $ \GetType[\Gamma_2]{\Prot} = \TypeOfProt{\tilde{\Role}_6}{\tilde{\Args[y]}_1}{\tilde{\Role}_7}{G} $ and $ \ProjS{G\!\Set[]{ \Subst{\tilde{\Args[z]}_1}{\tilde{\Args[y]}_1} }}{}{\Role_5} = T_5 $, \begin{align*} \dfrac{\dfrac{\dfrac{\begin{array}{l} \Gamma_2 \vdash P_4 \triangleright \Delta_{P4}, \Typed{\AT{\Chan[s]}{\Role_4}}{T_4}, \Typed{\AT{\Args}{\Role_5}}{T_5}\\ \GetType[\Gamma_2]{\Prot} = \TypeOfProt{\tilde{\Role}_6}{\tilde{\Args[y]}_1}{\tilde{\Role}_7}{G} \quad \ProjS{G\!\Set[]{ \Subst{\tilde{\Args[z]}_1}{\tilde{\Args[y]}_1} }}{}{\Role_5} = T_5 \end{array}}{\Gamma_2 \vdash \PEnt{\Chan[s]}{\Role_3}{\Role_4}{\Role_5}{\Args}{P_4} \triangleright \Delta_{P4}'}(\mathsf{J}) \quad \Gamma_2 \vdash P_5 \triangleright \Delta_{P5}}{\Gamma_2 \vdash \PPar{\PEnt{\Chan[s]}{\Role_3}{\Role_4}{\Role_5}{\Args}{P_4}}{P_5} \triangleright \Delta_{P4}' \otimes \Delta_{P5}}(\mathsf{Pa}) \begin{array}{l} \Gamma_2 \vdash P_6 \triangleright \Delta_{P6}, \Typed{\AT{\Chan[k]_2}{\Role_2}}{T_2'}\\ \vdash \Typed{\tilde{\Args}_2}{\tilde{\Sort}'} \quad \vdash \Typed{\tilde{\Args[v]}_2}{\tilde{\Sort}'} \end{array}}{\Gamma_2 \vdash \POpt{\Role_2}{\tilde{\Role}}{\PPar{\PEnt{\Chan[s]}{\Role_3}{\Role_4}{\Role_5}{\Args}{P_4}}{P_5}}{\tilde{\Args}_2}{\tilde{\Args[v]}_2}{P_6} \triangleright \Delta_{P4-6}}(\mathsf{Opt}) \end{align*} where $ \nexists \Role', \tilde{\Sort[K]} \logdot \Typed{\Role'}{\OV{\tilde{\Sort[K]}}} \in \Delta_{P4-5} $ and \begin{align*} \Delta_{P4}' & = \Delta_{P4}, \Typed{\AT{\Chan[s]}{\Role_4}}{\LTEnt{\Prot}{\Role_5}{\tilde{\Args[z]}_2}{\Role_3}{T_4}}\\ \Delta_{P4}' \otimes \Delta_{P5} & = \Delta_{P4-5}, \Typed{\AT{\Chan[k]_2}{\Role_2}}{T_2}, \Typed{\Role_2}{\OV{\tilde{\Sort}'}}\\ \Delta_{P4-6} & = \Delta_{P4-5} \otimes \Delta_{P6}, \Typed{\AT{\Chan[k]_2}{\Role_2}}{\LTOpt{\tilde{\Role}}{T_2}{\Typed{\tilde{\Args[y]}_2}{\tilde{\Sort}'}}{T_2'}} \end{align*} By $ \Gamma_2 \vdash P_4 \triangleright \Delta_{P4}, \Typed{\AT{\Chan[s]}{\Role_4}}{T_4}, \Typed{\AT{\Args}{\Role_5}}{T_5} $ and Lemma~\ref{lem:typeSubstA}, we have $ \Gamma_2 \vdash P_4\!\Set[]{ \Subst{\Chan[k]}{\Args} } \triangleright \Delta_{P4}, \Typed{\AT{\Chan[s]}{\Role_4}}{T_4}, \Typed{\AT{\Args[k]}{\Role_5}}{T_3'} $. 
Then \begin{align*} \dfrac{\dfrac{\begin{array}{l} \Gamma_2 \vdash P_4\!\Set[]{ \Subst{\Chan[k]}{\Args} } \triangleright \Delta_{P4}, \Typed{\AT{\Chan[s]}{\Role_4}}{T_4}, \Typed{\AT{\Args[k]}{\Role_5}}{T_5} \quad \Gamma_2 \vdash P_5 \triangleright \Delta_{P5} \end{array}}{\Gamma_2 \vdash \PPar{P_4\!\Set[]{ \Subst{\Chan[k]}{\Args} }}{P_5} \triangleright \Delta_{P4-5}''}(\mathsf{Pa}) \begin{array}{l} \Gamma_2 \vdash P_6 \triangleright \Delta_{P6}, \Typed{\AT{\Chan[k]_2}{\Role_2}}{T_2'}\\ \vdash \Typed{\tilde{\Args}_2}{\tilde{\Sort}'} \quad \vdash \Typed{\tilde{\Args[v]}_2}{\tilde{\Sort}'} \end{array}}{\Gamma_2 \vdash \POpt{\Role_2}{\tilde{\Role}}{\PPar{P_4\!\Set[]{ \Subst{\Chan[k]}{\Args} }}{P_5}}{\tilde{\Args}_2}{\tilde{\Args[v]}_2}{P_6} \triangleright \Delta_{P4-6}'}(\mathsf{Opt}) \end{align*} where $ \Delta_{P4-5}'' = \left( \Delta_{P4}, \Typed{\AT{\Chan[s]}{\Role_4}}{T_4}, \Typed{\AT{\Args[k]}{\Role_5}}{T_5} \right) \otimes \Delta_{P5} = \Delta_{P4-5}''', \Typed{\AT{\Chan[k]_2}{\Role_2}}{T_2} $ and $ \Delta_{P4-6}' = \Delta_{P4-5}''' \otimes \Delta_{P6}, \Typed{\AT{\Chan[k]_2}{\Role_2}}{\LTOpt{\tilde{\Role}}{T_2}{\Typed{\tilde{\Args[y]}^2}{\tilde{\Sort}'}}{T_2'}}, \Typed{\Role_2}{\OV{\tilde{\Sort}'}} $. Here $ \Delta_{P4-6}' $ is obtained from $ \Delta_{P4-6} $ by reducing a capability on $ \AT{\Chan[s]}{\Role_4} $ and adding a capability on $ \AT{\Chan[k]}{\Role_5} $. With Lemma~\ref{lem:typeEC}~(2), then $ \Gamma' \vdash \AECR[E']{\POpt{\Role_2}{\tilde{\Role}}{\PPar{P_4\!\Set[]{ \Subst{\Chan[k]}{\Args} }}{P_5}}{\tilde{\Args}_2}{\tilde{\Args[v]}_2}{P_6}} \triangleright \Delta_{\ECR', P4-6}' $, where $ \Delta_{\ECR', P4-6}' = \Delta_{P4-6}' \otimes \Delta_{\ECR'} $. Since $ \Delta_{\ECR, P1-3} \otimes \Delta_{\ECR', P4-6} $ is defined, so is $ \Delta_{\ECR, P1-3}' \otimes \Delta_{\ECR', P4-6}' $. Hence, by Rule~$ (\mathsf{Pa}) $, the judgement $ \Gamma' \vdash \AECR{\POpt{\Role_1}{\tilde{\Role}}{\PPar{P_1}{P_2}}{\tilde{\Args}_1}{\tilde{\Args[v]}_1}{P_3}} \triangleright \Delta_{\ECR, P1-3}' $, and $ \Gamma' \vdash \AECR[E']{\POpt{\Role_2}{\tilde{\Role}}{\PPar{P_4\!\Set[]{ \Subst{\Chan[k]}{\Args} }}{P_5}}{\tilde{\Args}_2}{\tilde{\Args[v]}_2}{P_6}} \triangleright \Delta_{\ECR', P4-6}' $, we have \begin{align*} \Gamma' \vdash \PPar{\AECR{\POpt{\Role_1}{\tilde{\Role}}{\PPar{P_1}{P_2}}{\tilde{\Args}_1}{\tilde{\Args[v]}_1}{P_3}}}{\AECR[E']{\POpt{\Role_2}{\tilde{\Role}}{\PPar{P_4\!\Set[]{ \Subst{\Chan[k]}{\Args} }}{P_5}}{\tilde{\Args}_2}{\tilde{\Args[v]}_2}{P_6}}} \triangleright \; \Delta_{\ECR, P1-3}' \otimes \Delta_{\ECR', P4-6}' \end{align*} With Lemma~\ref{lem:typeEC}~(2) we conclude with $ \Gamma \vdash P' \triangleright \left( \Delta_{\ECR, P1-3}' \otimes \Delta_{\ECR', P4-6}' \right) \otimes \Delta_{\EC} $. It remains to show that $ \Delta_{\EC} \otimes \Delta_P \mapsto \left( \Delta_{\ECR, P1-3}' \otimes \Delta_{\ECR', P4-6}' \right) \otimes \Delta_{\EC} $.
Because $ \Delta_P = \left( \Delta_{\ECR} \otimes \Delta_{P1-3} \right) \otimes \left( \Delta_{\ECR'} \otimes \Delta_{P4-6} \right) $ and because $ \Delta_{P1-3}' $ and $ \Delta_{P4-6}' $ are obtained from $ \Delta_{P1-3} $ and $ \Delta_{P4-6} $ by \begin{itemize} \item changing $ \Typed{\ATI{\Chan[k]}{\Role_5}}{T_5} $ to $ \Typed{\AT{\Chan[k]}{\Role_5}}{T_5} $, \item reducing $ \Typed{\AT{\Chan[s]}{\Role_3}}{\LTReq{\Prot}{\Role_5}{\tilde{\Args[z]}_1}{\Role_4}{T_1}} $ to $ \Typed{\AT{\Chan[s]}{\Role_3}}{T_1} $, and \item reducing $ \Typed{\AT{\Chan[s]}{\Role_4}}{\LTEnt{\Prot}{\Role_5}{\tilde{\Args[z]}_2}{\Role_3}{T_4}} $ to $ \Typed{\AT{\Chan[s]}{\Role_4}}{T_4} $ \end{itemize} we have \begin{align*} \dfrac{\dfrac{\dfrac{}{\left( \Delta_{P1-2} \otimes \Delta_{P3}, \Typed{\AT{\Chan[k]_1}{\Role_1}}{T_1} \right) \otimes \left( \Delta_{P4-5} \otimes \Delta_{P6}, \Typed{\AT{\Chan[k]_2}{\Role_2}}{T_2} \right) \mapsto \Delta_{P1-3}' \otimes \Delta_{P4-6}'} (\mathsf{join}')}{\Delta_{P1-3} \otimes \Delta_{P4-6} \mapsto \Delta_{P1-3}' \otimes \Delta_{P4-6}'} (\mathsf{optCom})}{\Delta_{\EC} \otimes \Delta_P \mapsto \left( \Delta_{\ECR, P1-3}' \otimes \Delta_{\ECR', P4-6}' \right) \otimes \Delta_{\EC}} (\mathsf{par}) \end{align*} \item[Case $ (\mathsf{cSO}) $:] In this case we have \begin{align*} P & = \AEC{\PPar{\AECR{P_{1-3}}}{\AECR[E']{P_{4-6}}}}\\ P_{1-3} & = \POpt{\Role_1}{\tilde{\Role}}{\PPar{\PSend{\Chan[k]}{\Role_3}{\Role_4}{\Labe_j}{\tilde{\Args[v]}}{P_1}}{P_2}}{\tilde{\Args}_1}{\tilde{\Args[v]}_1}{P_3}\\ P_{4-6} & = \POpt{\Role_2}{\tilde{\Role}}{\PPar{\PGet{\Chan[k]}{\Role_3}{\Role_4}{_{i \in \indexSet} \Set{ \PLab{\Labe_i}{\tilde{\Args}_i}{P_{4, i}} }}}{P_5}}{\tilde{\Args}_2}{\tilde{\Args[v]}_2}{P_6}\\ P' & = \AEC{\PPar{\AECR{\POpt{\Role_1}{\tilde{\Role}}{\PPar{P_1}{P_2}}{\tilde{\Args}_1}{\tilde{\Args[v]}_1}{P_3}}}{\AECR[E']{\POpt{\Role_2}{\tilde{\Role}}{\PPar{P_{4, j}\!\Set[]{ \Subst{\tilde{\Args[v]}}{\tilde{\Args}_j} }}{P_5}}{\tilde{\Args}_2}{\tilde{\Args[v]}_2}{P_6}}}} \end{align*} With $ \Gamma \vdash P \triangleright \Delta $ and Lemma~\ref{lem:typeEC}~(1), there exist $ \Delta_P, \Delta_{\EC}, \Gamma' $ such that $ \Gamma \subseteq \Gamma' $, $ \Delta = \Delta_{\EC} \otimes \Delta_P $, and $ \Gamma' \vdash \PPar{\AECR{P_{1-3}}}{\AECR[E']{P_{4-6}}} \triangleright \Delta_P $. By the rules in Figure~\ref{fig:typingRules} the proof of the judgement has to start (modulo Rule~(\textsf{S2})) with Rule~$ (\mathsf{Pa}) $, that splits $ \Delta_P $ such that $ \Delta_P = \Delta_{\ECR, P1-3} \otimes \Delta_{\ECR', P4-6} $. Again by Lemma~\ref{lem:typeEC}~(1), there exist $ \Delta_{P1-3} $, $ \Delta_{P4-6} $, $ \Delta_{\ECR} $, $ \Delta_{\ECR'} $, $ \Gamma_1 $, and $ \Gamma_2 $ such that $ \Gamma' \subseteq \Gamma_1 $, $ \Gamma' \subseteq \Gamma_2 $, $ \Delta_{\ECR, P1-3} = \Delta_{\ECR} \otimes \Delta_{P1-3} $, $ \Delta_{\ECR', P4-6} = \Delta_{\ECR'} \otimes \Delta_{P4-6} $, $ \Gamma_1 \vdash P_{1-3} \triangleright \Delta_{P1-3} $, and $ \Gamma_2 \vdash P_{4-6} \triangleright \Delta_{P4-6} $. 
Then: \begin{align*} \dfrac{\dfrac{\begin{array}{l} \dfrac{\Gamma_1 \vdash P_1 \triangleright \Delta_{P_1}, \Typed{\AT{\Chan[k]}{\Role^3}}{T_{1, j}} \quad \vdash \Typed{\tilde{\Args[v]}}{\tilde{\Sort}_j'}}{\Gamma_1 \vdash \PSend{\Chan[k]}{\Role_3}{\Role_4}{\Labe_j}{\tilde{\Args[v]}}{P_1} \triangleright \Delta_{P1}'}(\mathsf{S}) \quad \Gamma_1 \vdash P_2 \triangleright \Delta_{P2} \end{array}}{\Gamma_1 \vdash \PPar{\PSend{\Chan[k]}{\Role_3}{\Role_4}{\Labe_j}{\tilde{\Args[v]}}{P_1}}{P_2} \triangleright \Delta_{P1}' \otimes \Delta_{P2}}(\mathsf{Pa}) \begin{array}{l} \Gamma_1 \vdash P_3 \triangleright \Delta_{P3}, \Typed{\AT{\Chan[k]_1}{\Role_1}}{T_1'}\\ \vdash \Typed{\tilde{\Args}_1}{\tilde{\Sort}} \quad \vdash \Typed{\tilde{\Args[v]}_1}{\tilde{\Sort}} \end{array}}{\Gamma_1 \vdash \POpt{\Role_1}{\tilde{\Role}}{\PPar{\PSend{\Chan[k]}{\Role_3}{\Role_4}{\Labe_j}{\tilde{\Args[v]}}{P_1}}{P_2}}{\tilde{\Args}_1}{\tilde{\Args[v]}_1}{P_3} \triangleright \Delta_{P1-3}}(\mathsf{Opt}) \end{align*} where $ \nexists \Role', \tilde{\Sort[K]} \logdot \Typed{\Role'}{\OV{\tilde{\Sort[K]}}} \in \Delta_{P1-2} $ and \begin{align*} \Delta_{P1}' & = \Delta_{P1}, \Typed{\AT{\Chan[k]}{\Role_3}}{\LTSend{\Role_4}{_{i \in \indexSet} \Set{ \LTLab{\Labe_i}{\Typed{\tilde{\Args[z]}_i}{\tilde{\Sort}_i'}}{T_{1, i}} }}}\\ \Delta_{P1}' \otimes \Delta_{P2} & = \Delta_{P1-2}, \Typed{\AT{\Chan[k]_1}{\Role_1}}{T_1}, \Typed{\Role_1}{\OV{\tilde{\Sort}}}\\ \Delta_{P1-3} & = \Delta_{P1-2} \otimes \Delta_{P3}, \Typed{\AT{\Chan[k]_1}{\Role_1}}{\LTOpt{\tilde{\Role}}{T_1}{\Typed{\tilde{\Args[y]}_1}{\tilde{\Sort}}}{T_1'}} \end{align*} Since $ \Delta_{P1}' \otimes \Delta_{P2} $ is defined and because $ \Chan[s] \neq \Chan[k] $, we obtain \begin{align*} \dfrac{\dfrac{\begin{array}{l} \Gamma' \vdash P_1 \triangleright \Delta_{P1}, \Typed{\AT{\Chan[k]}{\Role_3}}{T_{1, j}} \quad \Gamma' \vdash P_2 \triangleright \Delta_{P2} \end{array}}{\Gamma' \vdash \PPar{P_1}{P_2} \triangleright \left( \Delta_{P1}, \Typed{\AT{\Chan[k]}{\Role_3}}{T_{1, j}} \right) \otimes \Delta_{P2}}(\mathsf{Pa}) \quad \begin{array}{l} \Gamma' \vdash P_3 \triangleright \Delta_{P3}, \Typed{\AT{\Chan[k]_1}{\Role_1}}{T_1'}\\ \vdash \Typed{\tilde{\Args}_1}{\tilde{\Sort}} \quad \vdash \Typed{\tilde{\Args[v]}_1}{\tilde{\Sort}} \end{array}}{\Gamma' \vdash \POpt{\Role_1}{\tilde{\Role}}{\PPar{P_1}{P_2}}{\tilde{\Args}_1}{\tilde{\Args[v]}_1}{P_3} \triangleright \Delta_{P1-3}'}(\mathsf{Opt}) \end{align*} where $ \left( \Delta_{P1}, \Typed{\AT{\Chan[k]}{\Role_3}}{T_{1, j}} \right) \otimes \Delta_{P2} = \Delta_{P1-2}', \Typed{\AT{\Chan[k]_1}{\Role_1}}{T_1}, \Typed{\Role_1}{\OV{\tilde{\Sort}}} $ and \begin{align*} \Delta_{P1-3}' = \Delta_{P1-2}' \otimes \Delta_{P3}, \Typed{\AT{\Chan[k]_1}{\Role_1}}{\LTOpt{\tilde{\Role}}{T_1}{\Typed{\tilde{\Args[y]}_1}{\tilde{\Sort}}}{T_1'}} \end{align*} Note that $ \Delta_{P1-3}' $ is obtained from $ \Delta_{P1-3} $ by reducing a capability on $ \AT{\Chan[k]}{\Role_3} $. With Lemma~\ref{lem:typeEC}~(2), then $ \Gamma' \vdash \AECR{\POpt{\Role_1}{\tilde{\Role}}{\PPar{P_1}{P_2}}{\tilde{\Args}_1}{\tilde{\Args[v]}_1}{P_3}} \triangleright \Delta_{\ECR, P1-3}' $, where $ \Delta_{\ECR, P1-3}' = \Delta_{P1-3}' \otimes \Delta_{\ECR} $. 
Moreover \begin{align*} \dfrac{\begin{array}{l} \dfrac{\begin{array}{l} \dfrac{\left( \Gamma_2 \vdash P_{4, i} \triangleright \Delta_{P4}, \Typed{\AT{\Chan[k]}{\Role_4}}{T_{4, i}} \quad \vdash \Typed{\tilde{\Args}_i}{\tilde{\Sort}_i'} \right)_{i \in \indexSet}}{\Gamma_2 \vdash \PGet{\Chan[k]}{\Role_3}{\Role_4}{_{i \in \indexSet} \Set{ \PLab{\Labe_i}{\tilde{\Args}_i}{P_{4, i}} }} \triangleright \Delta_{P4}'}(\mathsf{C}) \quad \Gamma_2 \vdash P_5 \triangleright \Delta_{P5} \end{array}}{\Gamma_2 \vdash \PPar{\PGet{\Chan[k]}{\Role_3}{\Role_4}{_{i \in \indexSet} \Set{ \PLab{\Labe_i}{\tilde{\Args}_i}{P_{4, i}} }}}{P_5} \triangleright \Delta_{P4}' \otimes \Delta_{P5}}(\mathsf{Pa})\\ \Gamma_2 \vdash P_6 \triangleright \Delta_{P6}, \Typed{\AT{\Chan[k]_2}{\Role_2}}{T_2'} \quad \vdash \Typed{\tilde{\Args}_2}{\tilde{\Sort}''} \quad \vdash \Typed{\tilde{\Args[v]}_2}{\tilde{\Sort}''} \end{array}}{\Gamma_2 \vdash \POpt{\Role_2}{\tilde{\Role}}{\PPar{\PGet{\Chan[k]}{\Role_3}{\Role_4}{_{i \in \indexSet} \Set{ \PLab{\Labe_i}{\tilde{\Args}_i}{P_{4, i}} }}}{P_5}}{\tilde{\Args}_2}{\tilde{\Args[v]}_2}{P_6} \triangleright \Delta_{P4-6}}(\mathsf{Opt}) \end{align*} where $ \nexists \Role', \tilde{\Sort[K]} \logdot \Typed{\Role'}{\OV{\tilde{\Sort[K]}}} \in \Delta_{P4-5} $ and \begin{align*} \Delta_{P4}' & = \Delta_{P4}, \Typed{\AT{\Chan[k]}{\Role_4}}{\LTGet{\Role_3}{_{i \in \indexSet} \Set{ \LTLab{\Labe_i}{\Typed{\tilde{\Args[z]}'}{\tilde{\Sort}_i'}}{T_{4, i}} }}}\\ \Delta_{P4}' \otimes \Delta_{P5} & = \Delta_{P4-5}, \Typed{\AT{\Chan[k]_2}{\Role_2}}{T_2}, \Typed{\Role_2}{\OV{\tilde{\Sort}''}}\\ \Delta_{P4-6} & = \Delta_{P4-5} \otimes \Delta_{P6}, \Typed{\AT{\Chan[k]_2}{\Role_2}}{\LTOpt{\tilde{\Role}}{T_2}{\Typed{\tilde{\Args[y]}_2}{\tilde{\Sort}''}}{T_2'}} \end{align*} By $ \Gamma_2 \vdash P_{4, j} \triangleright \Delta_{P4}, \Typed{\AT{\Chan[k]}{\Role_4}}{T_{4, j}} $, $ \vdash \Typed{\tilde{\Args[v]}}{\tilde{\Sort}_j'} $, $ \vdash \Typed{\tilde{\Args[x]}_j}{\tilde{\Sort}_j'} $, and Lemma~\ref{lem:typeSubstB}, we have $ \Gamma_2 \vdash P_{4, j}\!\Set[]{ \Subst{\tilde{\Args[v]}}{\tilde{\Args}_j} } \triangleright \Delta_{P4}, \Typed{\AT{\Chan[k]}{\Role_4}}{T_{4, j}} $. 
Then \begin{align*} \dfrac{\dfrac{\begin{array}{l} \Gamma_2 \vdash P_{4, j}\!\Set[]{ \Subst{\tilde{\Args[v]}}{\tilde{\Args}_j} } \triangleright \Delta_{P4}, \Typed{\AT{\Chan[k]}{\Role_4}}{T_{4, j}}\\ \Gamma_2 \vdash P_5 \triangleright \Delta_{P5} \end{array}}{\Gamma_2 \vdash \PPar{P_{4, j}\!\Set[]{ \Subst{\tilde{\Args[v]}}{\tilde{\Args}_j} }}{P_5} \triangleright \Delta_{P4-5}''}(\mathsf{Pa}) \begin{array}{l} \Gamma_2 \vdash P_6 \triangleright \Delta_{P6}, \Typed{\AT{\Chan[k]_2}{\Role_2}}{T_2'}\\ \vdash \Typed{\tilde{\Args}_2}{\tilde{\Sort}''} \quad \vdash \Typed{\tilde{\Args[v]}_2}{\tilde{\Sort}''} \end{array}}{\Gamma_2 \vdash \POpt{\Role_2}{\tilde{\Role}}{\PPar{P_{4, j}\!\Set[]{ \Subst{\tilde{\Args[v]}}{\tilde{\Args}_j} }}{P_5}}{\tilde{\Args}_2}{\tilde{\Args[v]}_2}{P_6} \triangleright \Delta_{P4-6}'}(\mathsf{Opt}) \end{align*} where $ \nexists \Role', \tilde{\Sort[K]} \logdot \Typed{\Role'}{\OV{\tilde{\Sort[K]}}} \in \Delta_{P4-5}''' $ and \begin{align*} \Delta_{P4-5}'' & = \left( \Delta_{P4}, \Typed{\AT{\Chan[k]}{\Role_4}}{T_{4, j}} \right) \otimes \Delta_{P5} = \Delta_{P4-5}''', \Typed{\AT{\Chan[k]_2}{\Role_2}}{T_2}, \Typed{\Role_2}{\OV{\tilde{\Sort}''}}\\ \Delta_{P4-6}' & = \Delta_{P4-5}''' \otimes \Delta_{P6}, \Typed{\AT{\Chan[k]_2}{\Role_2}}{\LTOpt{\tilde{\Role}}{T_2}{\Typed{\tilde{\Args[y]}_2}{\tilde{\Sort}''}}{T_2'}} \end{align*} Hence $ \Delta_{P4-6}' $ is obtained from $ \Delta_{P4-6} $ by changing a capability for $ \Role_4 $. With Lemma~\ref{lem:typeEC}~(2), then $ \Gamma' \vdash \AECR[E']{\POpt{\Role_2}{\tilde{\Role}}{\PPar{P_{4, j}\!\Set[]{ \Subst{\tilde{\Args[v]}}{\tilde{\Args}_j} }}{P_5}}{\tilde{\Args}_2}{\tilde{\Args[v]}_2}{P_6}} \triangleright \Delta_{\ECR', P4-6}' $, where $ \Delta_{\ECR', P4-6}' = \Delta_{P4-6}' \otimes \Delta_{\ECR'} $. Since $ \Delta_{\ECR, P1-3} \otimes \Delta_{\ECR', P4-6} $ is defined, so is $ \Delta_{\ECR, P1-3}' \otimes \Delta_{\ECR', P4-6}' $. Hence, by Rule~$ (\mathsf{Pa}) $ and the judgements $ \Gamma' \vdash \AECR{\POpt{\Role_1}{\tilde{\Role}}{\PPar{P_1}{P_2}}{\tilde{\Args}_1}{\tilde{\Args[v]}_1}{P_3}} \triangleright \Delta_{\ECR, P1-3}' $ and $ \Gamma' \vdash \AECR[E']{\POpt{\Role_2}{\tilde{\Role}}{\PPar{P_{4, j}\!\Set[]{ \Subst{\tilde{\Args[v]}}{\tilde{\Args}_j} }}{P_5}}{\tilde{\Args}_2}{\tilde{\Args[v]}_2}{P_6}} \triangleright \Delta_{\ECR', P4-6}' $, we have \begin{align*} \Gamma' \vdash \PPar{\AECR{\POpt{\Role_1}{\tilde{\Role}}{\PPar{P_1}{P_2}}{\tilde{\Args}_1}{\tilde{\Args[v]}_1}{P_3}}}{\AECR[E']{\POpt{\Role_2}{\tilde{\Role}}{\PPar{P_{4, j}\!\Set[]{ \Subst{\tilde{\Args[v]}}{\tilde{\Args}_j} }}{P_5}}{\tilde{\Args}_2}{\tilde{\Args[v]}_2}{P_6}}} \triangleright \Delta_{\ECR, P1-3}' \otimes \Delta_{\ECR', P4-6}' \end{align*} With Lemma~\ref{lem:typeEC}~(2) we conclude with $ \Gamma \vdash P' \triangleright \left( \Delta_{\ECR, P1-3}' \otimes \Delta_{\ECR', P4-6}' \right) \otimes \Delta_{\EC} $. It remains to show that $ \Delta_{\EC} \otimes \Delta_P \mapsto \left( \Delta_{\ECR, P1-3}' \otimes \Delta_{\ECR', P4-6}' \right) \otimes \Delta_{\EC} $.
Because $ \Delta_P = \left( \Delta_{\ECR} \otimes \Delta_{P1-3} \right) \otimes \left( \Delta_{\ECR'} \otimes \Delta_{P4-6} \right) $ and because $ \Delta_{P1-3}' $ and $ \Delta_{P4-6}' $ are obtained from $ \Delta_{P1-3} $ and $ \Delta_{P4-6} $ by \begin{itemize} \item reducing $ \Typed{\AT{\Chan[k]}{\Role_3}}{\LTSend{\Role_4}{_{i \in \indexSet} \Set{ \LTLab{\Labe_i}{\Typed{\tilde{\Args[z]}_i}{\tilde{\Sort}_i'}}{T_{1, i}} }}} $ to $ \Typed{\AT{\Chan[k]}{\Role_3}}{T_{1, j}} $, and \item reducing $ \Typed{\AT{\Chan[k]}{\Role_4}}{\LTGet{\Role_3}{_{i \in \indexSet} \Set{ \LTLab{\Labe_i}{\Typed{\tilde{\Args[z]}'}{\tilde{\Sort}_i'}}{T_{4, i}} }}} $ to $ \Typed{\AT{\Chan[k]}{\Role_4}}{T_{4, j}} $ \end{itemize} we have \begin{align*} \dfrac{\dfrac{\dfrac{}{\left( \Delta_{P1-2} \otimes \Delta_{P3}, \Typed{\AT{\Chan[k]_1}{\Role_1}}{T_1} \right) \otimes \left( \Delta_{P4-5} \otimes \Delta_{P6}, \Typed{\AT{\Chan[k]_2}{\Role_2}}{T_2} \right) \mapsto \Delta_{P1-3}' \otimes \Delta_{P4-6}'} (\mathsf{comS}')}{\Delta_{P1-3} \otimes \Delta_{P4-6} \mapsto \Delta_{P1-3}' \otimes \Delta_{P4-6}'} (\mathsf{optCom})}{\Delta_{\EC} \otimes \Delta_P \mapsto \left( \Delta_{\ECR, P1-3}' \otimes \Delta_{\ECR', P4-6}' \right) \otimes \Delta_{\EC}} (\mathsf{par}) \end{align*} \end{description} To obtain the proof for the smaller type system we simply omit the Cases~$ (\mathsf{subs}) $, $ (\mathsf{join}) $, $ (\mathsf{cSO}) $, $ (\mathsf{cCO}) $, and $ (\mathsf{jO}) $. This is possible, because no other case relies on one of the Rules~$ (\mathsf{P}) $, $ (\mathsf{J}) $, or $ (\mathsf{New}) $. \end{proof} \subsection{Progress and Completion} Apart from subject reduction we are interested in progress and completion. Following \cite{DemangeonHonda12} we use coherence to prove progress and completion. A session environment is \emph{coherent} if it is composed of the projections of well-formed global types with global types for all external invitations (also guarded once). In other words, if the session environment is coherent, we can use the projection rules in the reverse direction to reconstruct complete global types. In particular, coherence ensures that in the case of a communication from $ \Role_1 $ to $ \Role_2 $ on a channel $ \Args $ the session environment maps $ \AT{\Args}{\Role_1} $ to the type of the sender and $ \AT{\Args}{\Role_2} $ to the type of the receiver (or vice versa). This also ensures that the type of the transmitted value and the type of the received value have to correspond and that for each sender there is a matching receiver and vice versa. Most of the reduction rules preserve coherence. Only the rules to call a sub-session and to handle its internal and external invitations as well as the failing of optional blocks can temporarily invalidate this property. By removing the protocol call and a strict subset of these internal and external invitations, we obtain a process and a corresponding session type that does not directly result from the projection of a global type, since it refers neither to the session initialisation containing all internal and external invitations nor to the global type of the content of this sub-session without open invitations. A failing optional block is not a problem for the process itself, because the continuation of the process is instantiated with the default value and this process with a corresponding session environment corresponds to the projection of the global type of the continuation.
But a failing optional block may cause another part of the network, \ie a parallel process, to lose coherence. If another, parallel optional block is waiting for a communication with the former, it is doomed to fail. This situation of a single optional block without its dual communication partner cannot result from the projection of a global type. Due to the interleaving of steps, an execution starting in a process with a coherent session environment may lead to a state in which there are open internal and external invitations for several different protocols and/or several single optional blocks at the same time. However, coherence ensures that for all such reachable processes there is a finite sequence of steps that restores coherence and thus ensures progress and completion. The rules of Figure~\ref{fig:sessionTypeReductions} allow us to restore coherence. \begin{lemma} \label{lem:coherence} For both type systems:\\ If $ \Delta $ is coherent and $ \Delta \mapsto \Delta' $ then there exists $ \Delta'' $ such that $ \Delta' \mapsto^* \Delta'' $ and $ \Delta'' $ is coherent. \end{lemma} \begin{proof} Again we consider the larger type system first. The proof is by induction on the rules that are necessary to derive $ \Delta \mapsto \Delta' $. Here most of the cases are base cases; only the rules~$ (\mathsf{choice}') $, $ (\mathsf{opt}) $, $ (\mathsf{optCom}) $, and $ (\mathsf{par}) $ involve induction steps. \begin{description} \item[Case $ (\mathsf{comS}') $:] In this case $ \Delta $ contains two type statements for a channel $ \Chan[k] $ on two different roles $ \Role_1 $ and $ \Role_2 $: \begin{align*} \Typed{\AT{\Chan[k]}{\Role_1}}{\LTSend{\Role_2}{_{i \in \indexSet} \Set{ \LTLab{\Labe_i}{\Typed{\tilde{\Args}_i}{\tilde{\Sort}_i}}{T_i} }}}, \Typed{\AT{\Chan[k]}{\Role_2}}{\LTGet{\Role_1}{_{i \in \indexSet{}} \Set{ \LTLab{\Labe_i}{\Typed{\tilde{\Args}_i'}{\tilde{\Sort}_i}}{T_i'} }}} \end{align*} Since $ \Delta $ is coherent and cannot contain other type statements for $ \AT{\Chan[k]}{\Role_1} $ or $ \AT{\Chan[k]}{\Role_2} $, these two local types have to be the result of the projection of a single global type describing a communication from $ \Role_1 $ to $ \Role_2 $ on channel $ \Chan[k] $. Moreover the possible continuations of this global type are projected into the pairs of local types $ T_i $ and $ T_i' $ such that for all $ i \in \indexSet $ the combination of $ \Typed{\AT{\Chan[k]}{\Role_1}}{T_i} $ and $ \Typed{\AT{\Chan[k]}{\Role_2}}{T_i'} $ is the result of the projection of the respective continuation of the global type. Because of this, $ \Delta' $ (in which the two type statements are replaced by $ \Typed{\AT{\Chan[k]}{\Role_1}}{T_j}, \Typed{\AT{\Chan[k]}{\Role_2}}{T_j'} $) is coherent. \item[Case $ (\mathsf{choice}') $:] In this case we have $ \Delta_1, \Typed{\AT{\Chan[s]}{\Role}}{T_i} \mapsto \Delta_1', \Typed{\AT{\Chan[s]}{\Role}}{T_i'} $ for some $ i \in \Set[]{1, 2} $ and $ \Delta = \Delta_1, \Typed{\AT{\Chan[s]}{\Role}}{T_1 \oplus T_2} $. By the induction hypothesis and $ \Delta_1, \Typed{\AT{\Chan[s]}{\Role}}{T_i} \mapsto \Delta_1', \Typed{\AT{\Chan[s]}{\Role}}{T_i'} $, the resulting $ \Delta_1', \Typed{\AT{\Chan[s]}{\Role}}{T_i'} $ is coherent for both instantiations of $ i $. \item[Case $ (\mathsf{comC}') $:] In this case $ \Delta = \Delta_1, \Typed{\ATE{\Chan[s]}{\Role}}{T} $ and $ \Delta' = \Delta_1, \Typed{\AT{\Chan[s]}{\Role}}{T} $.
We observe that this rule does not change the types, but only lifts the status of $ \Typed{\ATE{\Chan[s]}{\Role}}{T} $ from 'needs to be invited with type $ T $' to 'is present'. However, because of the open external invitation $ \Typed{\ATE{\Chan[s]}{\Role}}{T} $, the session environment $ \Delta $ is not coherent and thus the implication holds trivially. We need this rule to restore coherence in the Case~$ (\mathsf{subs}') $. \item[Case $ (\mathsf{join}') $:] In this case $ \Delta = \Delta_1, \Typed{\AT{\Chan[s]}{\Role_1}}{\LTReq{\Prot}{\Role_3}{\tilde{\Args[v]}}{\Role_2}{T_1}}, \Typed{\ATI{\Chan[k]}{\Role_3}}{T_3}, \Typed{\AT{\Chan[s]}{\Role_2}}{\LTEnt{\Prot}{\Role_3}{\tilde{\Args[v]}'}{\Role_1}{T_2}} $ and $ \Delta' = \Delta_1, \Typed{\AT{\Chan[s]}{\Role_1}}{T_1}, \Typed{\AT{\Chan[s]}{\Role_2}}{T_2}, \Typed{\AT{\Chan[k]}{\Role_3}}{T_3} $. An internal invitation is accepted by reducing the corresponding request $ \mathtt{req} $ and its acceptance notification $ \mathtt{ent} $, and by lifting the status of $ \Typed{\ATI{\Chan[k]}{\Role_3}}{T_3} $ from 'needs to be invited with type $ T_3 $' to 'is present' ($ \Typed{\AT{\Chan[k]}{\Role_3}}{T_3} $). Again the session environment $ \Delta $ is not coherent, because of $ \Typed{\ATI{\Chan[k]}{\Role_3}}{T_3} $, and thus the implication holds trivially. We need this rule to restore coherence in the Case~$ (\mathsf{subs}') $. \item[Case $ (\mathsf{subs}') $:] In this case the $ \Typed{\AT{\Chan[s]}{\Role''}}{\LTCall{\Prot}{G}{\tilde{\Args[v]}}{\Typed{\tilde{\Args[y]}}{\tilde{\Sort}}}{\tilde{\Role}'}{T}} $ of $ \Delta $ is reduced to $ \Typed{\AT{\Chan[s]}{\Role''}}{T} $ in $ \Delta' $ and the statements $ \Typed{\ATI{\Chan[k]}{\Role_1}}{T'_1}, \ldots, \Typed{\ATI{\Chan[k]}{\Role_n}}{T'_n}, \Typed{\ATE{\Chan[k]}{\Role'_1}}{T'_{n + 1}}, \ldots, \Typed{\ATE{\Chan[k]}{\Role'_m}}{T'_{n + m}} $ are added to $ \Delta' $. Since $ \Delta $ is coherent, $ \LTCall{\Prot}{G}{\tilde{\Args[v]}}{\Typed{\tilde{\Args[y]}}{\tilde{\Sort}}}{\tilde{\Role}'}{T} $ results from the projection of the global type for the declaration (with $ \mathtt{let} $) of $ \Prot $ and its call $ \GTCall{\Role_A}{\Prot}{\tilde{\Role}}{\tilde{\Args[y]}}{G} $. The statements $ \Typed{\ATI{\Chan[k]}{\Role_1}}{T'_1}, \ldots, \Typed{\ATI{\Chan[k]}{\Role_n}}{T'_n}, \Typed{\ATE{\Chan[k]}{\Role'_1}}{T'_{n + 1}}, \ldots, \Typed{\ATE{\Chan[k]}{\Role'_m}}{T'_{n + m}} $ refer to the open internal and external invitations. Because of these statements, $ \Delta' $ is not coherent but we can restore coherence by accepting all open invitations, \ie by moving to the projection of $ G $, the global type for the continuation of the call $ \GTCall{\Role_A}{\Prot}{\tilde{\Role}}{\tilde{\Args[y]}}{G} $. The open internal invitations $ \Typed{\ATI{\Chan[k]}{\Role_i}}{T'_i} $ are handled by requests $ \mathtt{req} $ and acceptance notifications $ \mathtt{ent} $ that result from the projection of the call $ \GTCall{\Role_A}{\Prot}{\tilde{\Role}}{\tilde{\Args[y]}}{G} $. These are unguarded by Rule~$ (\mathsf{subs}) $ in the type judgement. Since $ \Delta $ is coherent and because all internal invitations as well as their acceptance notifications are generated by the same projection of the call, $ \Delta $ has to contain exactly one pair $ \Typed{\AT{\Chan[s]}{\Role_i'}}{\LTReq{\Prot}{\Role_i}{\tilde{\Args[v]}}{\Role_i''}{T_i''}}, \Typed{\AT{\Chan[s]}{\Role_i''}}{\LTEnt{\Prot}{\Role_i}{\tilde{\Args[v]}'}{\Role_i'}{T_i'''}} $ for each $ \Typed{\ATI{\Chan[k]}{\Role_i}}{T'_i} $.
Because of that we can reduce the open internal invitations by $ n $ applications of Rule~$ (\mathsf{join}') $. As a result the requests and acceptance notifications are reduced to their respective continuations, and the $ \Typed{\ATI{\Chan[k]}{\Role_i}}{T'_i} $ are turned into $ \Typed{\AT{\Chan[k]}{\Role_i}}{T'_i} $. Accordingly the $ n $ applications of Rule~$ (\mathsf{join}') $ lead to $ \Delta \mapsto^n \Delta_1 $, where $ \Delta_1 $ is obtained from $ \Delta $ by replacing $ \Typed{\AT{\Chan[s]}{\Role''}}{\LTCall{\Prot}{G}{\tilde{\Args[v]}}{\Typed{\tilde{\Args[y]}}{\tilde{\Sort}}}{\tilde{\Role}'}{T}} $ and the corresponding $ n - 1 $ parallel acceptance notifications $ \Typed{\AT{\Chan[s]}{\Role_i''}}{\LTEnt{\Prot}{\Role_i}{\tilde{\Args[v]}'}{\Role_i'}{T_i'''}} $ by $ T' $ and $ T_i''' $, where $ T' $ is obtained from $ T $ by replacing the corresponding requests and the acceptance notification of the caller by their continuations. The remaining open external invitations are the only reason that prevents $ \Delta_1 $ from being coherent. The open external invitations $ \Typed{\ATE{\Chan[k]}{\Role'_j}}{T'_{n + j}} $ are accepted with $ m $ applications of Rule~$ (\mathsf{comC}') $ (which does not influence other parts of the session environment and also does not require other parts of $ \Delta_1 $ to contain specific local types). As a result the $ \Typed{\ATE{\Chan[k]}{\Role'_j}}{T'_{n + j}} $ are turned into $ \Typed{\AT{\Chan[k]}{\Role'_j}}{T'_{n + j}} $ that correspond to the projection of $ m $ global types on the respective roles $ \tilde{\Role}' $ for the sub-session $ \Chan[k] $. To obtain the global type that restores coherence, these $ m $ global types of the external communication partners are placed in parallel to the global type of the continuation $ G $. \item[Case $ (\mathsf{opt}') $:] In this case we have $ \Delta_1, \Typed{\AT{\Chan[s]}{\Role_1}}{T_1} \mapsto \Delta_1', \Typed{\AT{\Chan[s]}{\Role_1}}{T_1'} $, \begin{align*} \Delta = \Delta_1, \Typed{\AT{\Chan[s]}{\Role_1}}{\LTOpt{\tilde{\Role}}{T_1}{\Typed{\tilde{\Args[y]}}{\tilde{\Sort}}}{T_2}} \quad \text{ and } \quad \Delta' = \Delta_1', \Typed{\AT{\Chan[s]}{\Role_1}}{\LTOpt{\tilde{\Role}}{T_1'}{\Typed{\tilde{\Args[y]}}{\tilde{\Sort}}}{T_2}} \end{align*} Since $ \Delta_1, \Typed{\AT{\Chan[s]}{\Role_1}}{T_1} $ results from $ \Delta $ by removing an optional block and its continuation while extracting the content of the optional block and since $ \Delta $ is coherent, $ T_1 $ is the result of projecting the global type representing the content of the optional block and thus $ \Delta_1, \Typed{\AT{\Chan[s]}{\Role_1}}{T_1} $ is also coherent. Then, by the induction hypothesis, $ \Delta_1, \Typed{\AT{\Chan[s]}{\Role_1}}{T_1} \mapsto \Delta_1', \Typed{\AT{\Chan[s]}{\Role_1}}{T_1'} $ implies that there is some $ \Delta_1'' $ such that $ \Delta_1', \Typed{\AT{\Chan[s]}{\Role_1}}{T_1'} \mapsto^* \Delta_1'' $ and $ \Delta_1'' $ is coherent. Note that this sequence may reduce the local type $ T_1' $ assigned to $ \AT{\Chan[s]}{\Role_1} $ to some $ T_1'' $, possibly $ T_1'' = \LTEnd $. By applying Rule~$ (\mathsf{opt}') $ around each step of $ \Delta_1', \Typed{\AT{\Chan[s]}{\Role_1}}{T_1'} \mapsto^* \Delta_1'' $ we obtain the derivation $ \Delta \mapsto^* \Delta_1''' $, where $ \Delta_1''' $ is obtained from $ \Delta_1'' $ by replacing $ \Typed{\AT{\Chan[s]}{\Role_1}}{T_1''} $ in $ \Delta_1'' $ by $ \Typed{\AT{\Chan[s]}{\Role_1}}{\LTOpt{\tilde{\Role}}{T_1''}{\Typed{\tilde{\Args[y]}}{\tilde{\Sort}}}{T_2}} $.
Since $ \Delta_1'' $ is coherent, $ T_1'' $ is the result of a projection of a global type and the remaining type statements add to a coherent session environment. Since $ \Delta $ is coherent, $ T_2 $ is the result of a projection of a global type and the parts of $ \Delta $ that are not changed in $ \Delta \mapsto^* \Delta_1''' $ contain the dual projection of the optional block. Because of this, $ \Delta_1''' $ is coherent. \item[Case $ (\mathsf{optCom}) $:] In this case we have $ \Delta_1, \Typed{\AT{\Chan[s]}{\Role_1}}{T_1}, \Typed{\AT{\Chan[s]}{\Role_2}}{T_2} \mapsto \Delta_1', \Typed{\AT{\Chan[s]}{\Role_1}}{T_1'}, \Typed{\AT{\Chan[s]}{\Role_2}}{T_2'} $, \begin{align*} \Delta &= \Delta_1, \Typed{\AT{\Chan[s]}{\Role_1}}{\LTOpt{\tilde{\Role}}{T_1}{\Typed{\tilde{\Args[y]}_1}{\tilde{\Sort}_1}}{T_3}}, \Typed{\AT{\Chan[s]}{\Role_2}}{\LTOpt{\tilde{\Role}}{T_2}{\Typed{\tilde{\Args[y]}_2}{\tilde{\Sort}_2}}{T_4}} \quad \text{ and}\\ \Delta' &= \Delta_1', \Typed{\AT{\Chan[s]}{\Role_1}}{\LTOpt{\tilde{\Role}}{T_1'}{\Typed{\tilde{\Args[y]}_1}{\tilde{\Sort}_1}}{T_3}}, \Typed{\AT{\Chan[s]}{\Role_2}}{\LTOpt{\tilde{\Role}}{T_2'}{\Typed{\tilde{\Args[y]}_2}{\tilde{\Sort}_2}}{T_4}} \end{align*} Since $ \Delta_1, \Typed{\AT{\Chan[s]}{\Role_1}}{T_1}, \Typed{\AT{\Chan[s]}{\Role_2}}{T_2} $ results from $ \Delta $ by removing two optional blocks and their continuations while extracting the content of the optional blocks and since $ \Delta $ is coherent, $ T_1 $ and $ T_2 $ are the result of projecting the global types representing the content of the optional blocks and thus $ \Delta_1, \Typed{\AT{\Chan[s]}{\Role_1}}{T_1}, \Typed{\AT{\Chan[s]}{\Role_2}}{T_2} $ is also coherent. Then, by the induction hypothesis, $ \Delta_1, \Typed{\AT{\Chan[s]}{\Role_1}}{T_1}, \Typed{\AT{\Chan[s]}{\Role_2}}{T_2} \mapsto \Delta_1', \Typed{\AT{\Chan[s]}{\Role_1}}{T_1'}, \Typed{\AT{\Chan[s]}{\Role_2}}{T_2'} $ implies that there is some $ \Delta_1'' $ such that $ \Delta_1', \Typed{\AT{\Chan[s]}{\Role_1}}{T_1'}, \Typed{\AT{\Chan[s]}{\Role_2}}{T_2'} \mapsto^* \Delta_1'' $ and $ \Delta_1'' $ is coherent. Let this sequence reduce $ T_1' $ and $ T_2' $ to $ T_1'' $ and $ T_2'' $. By applying Rule~$ (\mathsf{optCom}') $ around each step of $ \Delta_1', \Typed{\AT{\Chan[s]}{\Role_1}}{T_1'}, \Typed{\AT{\Chan[s]}{\Role_2}}{T_2'} \mapsto^* \Delta_1'' $ we obtain the derivation $ \Delta \mapsto^* \Delta_1''' $, where $ \Delta_1''' $ is obtained from $ \Delta_1'' $ by replacing $ \Typed{\AT{\Chan[s]}{\Role_1}}{T_1''}, \Typed{\AT{\Chan[s]}{\Role_2}}{T_2''} $ in $ \Delta_1'' $ by $ \Typed{\AT{\Chan[s]}{\Role_1}}{\LTOpt{\tilde{\Role}}{T_1''}{\Typed{\tilde{\Args[y]}_1}{\tilde{\Sort}_1}}{T_3}}, \Typed{\AT{\Chan[s]}{\Role_2}}{\LTOpt{\tilde{\Role}}{T_2''}{\Typed{\tilde{\Args[y]}_2}{\tilde{\Sort}_2}}{T_4}} $. Since $ \Delta_1'' $ is coherent, $ T_1'' $ and $ T_2'' $ are the result of a projection of a global type and the remaining type statements add to a coherent session environment. Since $ \Delta $ is coherent, $ T_3 $ and $ T_4 $ are the result of a projection of a global type. Because of this, $ \Delta_1''' $ is coherent. 
\item[Case $ (\mathsf{fail}') $:] In this case \begin{align*} \Delta = \Delta_1 \otimes \Delta_2, \Typed{\AT{\Chan[s]}{\Role_1}}{\LTOpt{\tilde{\Role}}{T_1}{\Typed{\tilde{\Args[y]}}{\tilde{\Sort}}}{T_1'}} \quad \text{ and } \quad \Delta' = \Delta_2, \Typed{\AT{\Chan[s]}{\Role_1}}{T_1'} \end{align*} and $ \Gamma \vdash P \triangleright \Delta_1, \Typed{\AT{\Chan[s]}{\Role_1}}{T}, \Typed{\Role_1}{\OV{\tilde{\Sort}}} $ for some $ \Gamma, P, \tilde{\Sort} $. Since $ \Delta $ is coherent, $ \Delta_2 $ contains all optional blocks with participants $ \tilde{\Role} $ that depend on the failed block. By one more application of Rule~$ (\mathsf{fail}') $ for each such block, we remove these optional blocks to avoid deadlocked communication attempts with the former failed block, \ie we have $ \Delta \mapsto \Delta' \mapsto^* \Delta'' $ such that $ \Delta'' $ is obtained by reducing statements of the form $ \Typed{\AT{\Chan[s]}{\Role_2}}{\LTOpt{\tilde{\Role}}{T_2}{\Typed{\tilde{\Args[y]}}{\tilde{\Sort}}}{T_2'}} $ with $ \Role_2 \in \tilde{\Role} $ in $ \Delta' $ to $ \Typed{\AT{\Chan[s]}{\Role_2}}{T_2'} $. Since $ \Delta $ is coherent, $ T_1' $ and all the $ T_2' $ are projections of global types for the continuations of the respective blocks. Because of that, $ \Delta'' $ is coherent. \item[Case $ (\mathsf{succ}') $:] In this case \begin{align*} \Delta = \Delta_1, \Typed{\AT{\Chan[s]}{\Role_1}}{\LTOpt{\tilde{\Role}}{\LTEnd}{\Typed{\tilde{\Args[y]}}{\tilde{\Sort}}}{T_1}} \quad \text{ and } \quad \Delta' = \Delta_1, \Typed{\AT{\Chan[s]}{\Role_1}}{T_1} \end{align*} Since $ \Delta $ is coherent, $ T_1 $ is a projection of a global type (for the continuation of the considered optional block). Applying Rule~$ (\mathsf{fail}') $ as in the last case, we reduce all optional blocks on the same participants. We obtain $ \Delta \mapsto \Delta' \mapsto^* \Delta'' $, where $ \Delta'' $ is obtained from $ \Delta' $ by reducing statements of the form $ \Typed{\AT{\Chan[s]}{\Role_2}}{\LTOpt{\tilde{\Role}}{T_2}{\Typed{\tilde{\Args[y]}}{\tilde{\Sort}}}{T_2'}} $ with $ \Role_2 \in \tilde{\Role} $ in $ \Delta' $ to $ \Typed{\AT{\Chan[s]}{\Role_2}}{T_2'} $. Since $ \Delta $ is coherent, $ T_1 $ and all the $ T_2' $ are projections of global types for the continuations of the respective blocks. Because of that, $ \Delta'' $ is coherent. \item[Case $ (\mathsf{par}) $:] In this case we have $ \Delta_1 \mapsto \Delta_1' $, \begin{align*} \Delta = \Delta_1 \otimes \Delta_2 \quad \text{ and } \quad \Delta' = \Delta_1' \otimes \Delta_2 \end{align*} Since $ \Delta $ is coherent, either $ \Delta_1 $ is coherent or there are some optional blocks in $ \Delta_2 $ that are missing in $ \Delta_1 $ and needed to turn it into a coherent session environment. In the latter case we can move the respective blocks over $ \otimes $ and, by applying Rule~$ (\mathsf{par}) $, obtain a derivation $ \Delta_3 \mapsto \Delta_3' $ such that $ \Delta_3 $ is coherent. Let $ \Delta_2' $ be the remainder of $ \Delta_2 $, \ie $ \Delta_1 \otimes \Delta_2 = \left( \Delta_1 \otimes \Delta_4 \right) \otimes \Delta_2' $ and $ \Delta_3 = \Delta_1 \otimes \Delta_4 $. Since we can also move $ \emptyset $ this way, the second case is more general. By the induction hypothesis, there is then some $ \Delta_3'' $ such that $ \Delta_3' \mapsto^* \Delta_3'' $ and $ \Delta_3'' $ is coherent. Since $ \Delta_3 \otimes \Delta_2' $ is defined, so is $ \Delta_3'' \otimes \Delta_2' $. Hence we obtain $ \Delta \mapsto \Delta' \mapsto^* \Delta_3'' \otimes \Delta_2' $.
Since $ \Delta $ is coherent and $ \Delta_2' $ does not contain optional blocks with counterparts in $ \Delta_3 $, we conclude that $ \Delta_2' $ is coherent. With the coherence of $ \Delta_3'' $, then $ \Delta_3'' \otimes \Delta_2' $ is coherent. \end{description} For the type system without sub-sessions the Rules~$ (\mathsf{subs}') $ and $ (\mathsf{join}') $ are superfluous. Since $ (\mathsf{subs}') $ is the only case that relies on the presence of these two rules, these two cases can be removed and the statement holds for the smaller type system. \end{proof} Let weak coherence describe the session environments that have only temporarily lost coherence. More precisely, a session environment $ \Delta $ is \emph{weakly coherent} if there is some $ \Delta' $ such that $ \Delta' $ is coherent and $ \Delta' \mapsto^* \Delta $. As can be shown easily by an induction on the rules of Figure~\ref{fig:sessionTypeReductions} and the definition of coherence, a weakly coherent session environment results from missing optional blocks for pairs of dual communication partners and/or missing $ \mathtt{call} $-type statements together with a strict subset of missing open invitations of the respective protocol. Note that, due to the open external invitations for the parent session, all presented examples are not coherent but only weakly coherent. Since weak coherence results from reducing a coherent session environment, we can always perform some more reductions to restore coherence. \begin{lemma} \label{lem:weakCoherence} For both type systems:\\ If $ \Delta $ is weakly coherent then there exists $ \Delta' $ such that $ \Delta \mapsto^* \Delta' $ and $ \Delta' $ is coherent. \end{lemma} \begin{proof} The proof for both type systems is the same except for the handling of sub-sessions and invitations that can be ignored in the simpler case. If $ \Delta $ is coherent, then choose $ \Delta' = \Delta $ and we are done. Otherwise, because $ \Delta $ is weakly coherent, there is some $ \Delta_0 $ such that $ \Delta_0 \mapsto^* \Delta $ and $ \Delta_0 $ is coherent. Recalling the proof of Lemma~\ref{lem:coherence}, $ \Delta $ is only weakly coherent because, in comparison with $ \Delta_0 $, there are missing $ \mathtt{call} $-statements (due to Rule~$ (\mathsf{subs}') $) with already reduced invitations (due to the Rules~$ (\mathsf{join}') $ or $ (\mathsf{comC}') $) or missing optional blocks (due to the Rules~$ (\mathsf{fail}') $ or $ (\mathsf{succ}') $), whose counterparts are contained in $ \Delta $. For the former case, Lemma~\ref{lem:coherence} tells us that it suffices to answer the remaining invitations. Since $ \Delta_0 $ is coherent and $ \Delta_0 \mapsto^* \Delta $, all necessary internal acceptance notifications $ \mathtt{ent} $ are contained in $ \Delta $ and thus the invitations can be removed as described in Lemma~\ref{lem:coherence} in the Case~$ (\mathsf{subs}') $ using Rule~$ (\mathsf{join}') $, followed by the removal of the external invitations using Rule~$ (\mathsf{comC}') $. In the latter case, Lemma~\ref{lem:coherence} tells us that all problematic optional blocks can be removed by Rule~$ (\mathsf{fail}') $, which can be applied whenever there is an unguarded optional block. Thus, following Lemma~\ref{lem:coherence}, we can remove all problematic open invitations and optional blocks without counterparts and obtain $ \Delta \mapsto^* \Delta' $ such that $ \Delta' $ is coherent.
\end{proof} Accordingly, our extension of the type system with optional blocks cannot cause deadlocks, because optional blocks can always be aborted using Rule~$ (\mathsf{fail}) $. Due to initial external invitations $ \PInp{\Chan[a]}{\Chan[s]}{\ldots} $ to the parent session, our examples are not coherent. Since this design decision allows for modularity using sub-sessions, we do not want to restrict our attention to coherent session environments. Instead, to better cover these cases, we relax the definition of coherence for initial session environments. Let a session environment $ \Delta $ be \emph{initially coherent} if it is obtained from a coherent environment, \ie $ \Delta_0 \mapsto^* \Delta $ for some coherent $ \Delta_0 $, and contains neither open internal invitations nor optional blocks without their counterparts. \emph{Progress} ensures that well-typed processes cannot get stuck unless their protocol requires them to. In comparison to standard formulations of progress from the literature and in comparison to \cite{DemangeonHonda12}, we add that the respective sequence of steps does not require any optional blocks to be unreliable. We call an optional block \emph{unreliable} \wrt a sequence of steps if it fails within this sequence, and \emph{reliant} otherwise. In other words we ensure progress despite arbitrary (and any number of) failures of optional blocks. \begin{theorem}[Progress] \label{thm:progress} For both type systems:\\ If $ \Gamma \vdash P \triangleright \Delta $ such that $ \Delta $ is initially coherent, then either $ P = \PEnd $ or there exists $ P' $ such that $ P \longmapsto^+ P' $, $ \Gamma \vdash P' \triangleright \Delta' $, where $ \Delta \mapsto^* \Delta' $ and $ \Delta' $ is coherent, and $ P \longmapsto^+ P' $ does not require any optional block to be unreliable. \end{theorem} \begin{proof} The proof is the same for both type systems. Assume $ \Gamma \vdash P \triangleright \Delta $ such that $ \Delta $ is initially coherent and $ P \neq \PEnd $. Then, by the Lemmata~\ref{lem:coherence} and \ref{lem:weakCoherence}, we can answer all open external invitations in the sequence $ \Delta \mapsto^* \Delta_1 $ without Rule~$ (\mathsf{fail}') $ such that $ \Delta_1 $ is coherent. Because of $ \Gamma \vdash P \triangleright \Delta $ and the typing rules of Figure~\ref{fig:typingRules}, we can map this sequence to $ P \longmapsto^* P_1 $ and, by Theorem~\ref{thm:subjectReduction}, $ \Gamma \vdash P_1 \triangleright \Delta_1 $. Since $ \Delta \mapsto^* \Delta_1 $ does not use Rule~$ (\mathsf{fail}') $, no optional block fails in $ P \longmapsto^* P_1 $. If $ P_1 = \PEnd $ then, since $ P \neq \PEnd $, there was at least one open external invitation and thus $ P \longmapsto^+ P_1 $ and we are done. If $ P_1 \neq \PEnd $ then, because of $ \Gamma \vdash P_1 \triangleright \Delta_1 $, the projection rules in Figure~\ref{fig:projectionRules}, and the coherence of $ \Delta_1 $, $ P_1 $ contains unguarded \begin{compactitem} \item both parts (sender and receiver) of the projection of a global type for communication, \item all counterparts of the projection of a global type of an optional block, or \item (in the case of the larger type system) all internal acceptance notifications and the call guarding internal invitations and one acceptance notification that result from the projection of a global type of a sub-session call.
\end{compactitem} In all three cases, coherence and the projection rules ensure that there is at least one step to reduce $ P_1 $ in which no optional block fails, \ie there is some $ P_1' $ such that $ P_1 \longmapsto P_1' $ without Rule~$ (\mathsf{fail}) $. By Theorem~\ref{thm:subjectReduction}, then $ \Gamma \vdash P_1' \triangleright \Delta_1' $ for some $ \Delta_1' $ such that $ \Delta_1 \mapsto^* \Delta_1' $. Since $ \Delta $ is initially coherent and $ P \longmapsto^+ P_1' $ does not use Rule~$ (\mathsf{fail}) $, for each optional block in $ P $ either all matching counterparts are present or $ P_1 \longmapsto P_1' $ used Rule~$ (\mathsf{succ}) $. In the former case we can use Lemma~\ref{lem:coherence} to obtain $ \Delta' $ without using Rule~$ (\mathsf{fail}') $ such that $ \Delta \mapsto^* \Delta_1 \mapsto^* \Delta_1' \mapsto^* \Delta' $ and $ \Delta' $ is coherent. With $ \Gamma \vdash P_1' \triangleright \Delta_1' $, the typing rules in Figure~\ref{fig:typingRules}, and Theorem~\ref{thm:subjectReduction}, then $ P \longmapsto^* P_1 \longmapsto P_1' \longmapsto^* P' $ such that $ \Gamma \vdash P' \triangleright \Delta' $ and $ P \longmapsto^+ P' $ does not require any optional block to be unreliable. In the latter case, coherence ensures that the counterparts of the successfully terminated optional block do not need to communicate with this optional block. By repeating the above argument for the content of the counterparts (which are, by coherence, obtained from a global type), there is some $ P' $ such that $ P_1' \longmapsto^* P' $ without Rule~$ (\mathsf{fail}) $ and $ \Gamma \vdash P' \triangleright \Delta_1' $, which successfully resolves the remaining counterparts, such that $ P \longmapsto^* P_1 \longmapsto P_1' \longmapsto^* P' $ does not require any optional block to be unreliable, $ \Delta \mapsto^* \Delta_1 \mapsto^* \Delta_1' $, and $ \Delta_1' $ is coherent. \end{proof} \emph{Completion} is a special case of progress for processes without infinite recursions. It ensures that well-typed processes, without infinite recursion or a loop resulting from calling sub-sessions infinitely often, follow their protocol and then terminate. Similarly to progress, we prove that completion holds despite arbitrary failures of optional blocks but does not require any optional block to be unreliable. \begin{theorem}[Completion] \label{thm:completion} For both type systems:\\ If $ \Gamma \vdash P \triangleright \Delta $ such that $ \Delta $ is initially coherent and $ P $ does not contain infinite recursions and cannot infinitely often call a sub-session, then $ P \longmapsto^* \PEnd $, $ \Gamma \vdash \PEnd \triangleright \emptyset $, and $ P \longmapsto^* \PEnd $ does not require any optional block to be unreliable. \end{theorem} \begin{proof} By the typing rules in Figure~\ref{fig:typingRules}, $ \Gamma \vdash P' \triangleright \Delta' $ implies that $ P' = \PEnd $ if and only if $ \Delta' = \emptyset $. By Theorem~\ref{thm:progress}, if $ \Gamma \vdash P \triangleright \Delta $ such that $ \Delta $ is initially coherent, then either $ P = \PEnd $ or there exists $ P' $ such that $ P \longmapsto^+ P' $, $ \Gamma \vdash P' \triangleright \Delta' $, where $ \Delta \mapsto^* \Delta' $ and $ \Delta' $ is coherent, and $ P \longmapsto^+ P' $ does not require any optional block to be unreliable. In the first case ($ P = \PEnd $) we are done.
Otherwise, since coherence implies initial coherence, we do perform at least one step and can apply Theorem~\ref{thm:progress} to $ \Gamma \vdash P' \triangleright \Delta' $ again. By repeating this argument we either construct an infinite reduction sequence or reach $ \PEnd $ after finitely many steps as required. Remember that we equate structurally congruent session environments. But, since we assume that $ P $ and accordingly $ \Delta $ do not perform an infinite sequence of recursions, applying structural congruence cannot increase the session environment infinitely often. Along with the reduction sequence for processes we construct a reduction sequence $ \Delta \mapsto^* \Delta' \mapsto^* \Delta'' \mapsto^* \ldots $. By inspecting the rules of Figure~\ref{fig:sessionTypeReductions}, it is easy to check that each reduction step strictly reduces the corresponding session environment. Rule~$ (\mathsf{subs}') $ introduces new parts to the session environment but in return has to reduce a $ \mathtt{call} $ in another part of the session environment. Since we assume that $ P $ cannot infinitely often call a sub-session, we can easily construct a potential function to prove that the session environment strictly decreases whenever no recursion is unfolded. Because of that and since $ \Delta $ is finite, the sequence $ \Delta \mapsto^* \Delta' \mapsto^* \Delta'' \mapsto^* \ldots $ eventually reaches $ \emptyset $, \ie $ \Delta \mapsto^* \emptyset $. With that we reach $ \PEnd $. \end{proof} \subsection{Summary} A simple but interesting consequence of the Completion property is that for each well-typed process there is a sequence of steps that successfully resolves all optional blocks. This is because we type the content of optional blocks and our type system ensures that this content reaches exactly one success reporting message $ \POptEnd{\Role}{\tilde{\Args[v]}} $ in exactly one of its parallel branches (and in each of its choice branches). \begin{corollary}[Reliance] \label{cor:reliance} For both type systems:\\ If $ \Gamma \vdash P \triangleright \Delta $ such that $ \Delta $ is initially coherent and $ P $ does not contain infinite recursions and cannot infinitely often call a sub-session, then $ P \longmapsto^* \PEnd $ such that all optional blocks are successfully resolved in this sequence. \end{corollary} To summarize, our type systems have the following properties. \begin{theorem}[Properties] \label{thm:properties} For both type systems: \begin{description} \item[Subject Reduction:] If $ \Gamma \vdash P \triangleright \Delta $ and $ P \longmapsto P' $ then there exists $ \Delta' $ such that $ \Gamma \vdash P' \triangleright \Delta' $ and $ \Delta \mapsto^* \Delta' $. \item[Progress:] If $ \Gamma \vdash P \triangleright \Delta $ such that $ \Delta $ is initially coherent, then either $ P = \PEnd $ or there exists $ P' $ such that $ P \longmapsto^+ P' $, $ \Gamma \vdash P' \triangleright \Delta' $, where $ \Delta \mapsto^* \Delta' $ and $ \Delta' $ is coherent, and $ P \longmapsto^+ P' $ does not require any optional block to be unreliable. \item[Completion:] If $ \Gamma \vdash P \triangleright \Delta $ such that $ \Delta $ is initially coherent and $ P $ does not contain infinite recursions and cannot infinitely often call a sub-session, then $ P \longmapsto^* \PEnd $, $ \Gamma \vdash \PEnd \triangleright \emptyset $, and $ P \longmapsto^* \PEnd $ does not require any optional block to be unreliable.
\item[Reliance:] If $ \Gamma \vdash P \triangleright \Delta $ such that $ \Delta $ is initially coherent and $ P $ does not contain infinite recursions and cannot infinitely often call a sub-session, then $ P \longmapsto^* \PEnd $ such that all optional blocks are successfully resolved in this sequence. \end{description} \end{theorem} $ \PRC{n} $ is well-typed \wrt the initially coherent session environment $ \Gamma $ that does not contain recursions. Thus, by the completion property of Theorem~\ref{thm:properties}, our implementation $ \PRC{n} $ of the rotating coordinator algorithm terminates despite arbitrary failures of optional blocks. Note that, although establishing the type system and proving Theorem~\ref{thm:properties} was elaborate, checking whether a process is well-typed is straightforward and can be automated easily and efficiently. Since all communication steps of the algorithm are captured in optional blocks and since the failure of an optional block containing a single communication step represents a link failure/message loss, $ \PRC{n} $ terminates despite arbitrary occurrences of link failures. Session types usually also ensure \emph{communication safety}, \ie freedom from communication errors, and \emph{session fidelity}, \ie that a well-typed process exactly follows the specification described by its global type. With optional blocks we lose these properties, because they model failures. As a consequence, communications may fail and whole parts of the protocol specified in the global type might be skipped. In order to still provide some guarantees on the behaviour of well-typed processes, we have, however, limited the effect of failures by encapsulating them in optional blocks. It is trivial to see that in the failure-free case, \ie if no optional block fails, we inherit communication safety and session fidelity from the underlying session types in \cite{BettiniAtall08,BocciAtall10} and \cite{DemangeonHonda12}. Even in the case of failing optional blocks, we inherit communication safety and session fidelity for the parts of protocols outside of optional blocks and the inner parts of successful optional blocks, since our extension ensures that all optional blocks that depend on a failure are doomed to fail and the remaining parts work as specified by the global type. \subsection{System Failures} If we use optional blocks to cover a single transmission over an unreliable link, each use of Rule~(\textsf{fail}) refers to a single link failure. Whether a specification, \ie a global type, implements link failures can be checked easily by analysing whether all communication steps on unreliable links are encapsulated by the above described binary optional blocks $ \GUL{\Role[src]}{\Args[v]_{\Role[src]}}{\Role[trg]}{\Args[v]_{\Role[trg]}} $. Notice that this way we can model systems that contain reliable as well as unreliable links. The properties encapsulation, isolation, and safety guarantee that the above described unreliable links meet our intuition of the considered class of failures and their effect. Restricting our attention to link failures, where in the case of failure a default value is provided, as well as the restriction on protocols to compute some values might appear to be a rather strong limitation. But this limitation actually matches the intuition used for many distributed algorithms. We consider systems that use some method to determine at which point a certain failure has occurred---\eg by a timeout or, more abstractly, a failure detector.
But apart from the detection of the failure, the system usually does not provide any information about it or its source. We match this intuition by restricting the way the modelled system can react to a failure. \subsection{Crash Failures} Crash failures can be considered a special case of link failures: after the first link failure, all communications with the respective sender of the first failure have to fail. Following this intuition, a system with crash failures can be obtained from a system with link failures by excluding all executions that do not meet the above criterion. Accordingly, all algorithms that terminate despite link failures also terminate despite crash failures. There are, however, algorithms that do not guarantee termination despite link failures but only despite crash failures. Consider once more Example~\ref{exa:RCAlgorithm}. This algorithm satisfies termination despite link failures, but it is not able to ensure agreement in this scenario, \ie it cannot ensure that despite link failures all participants decide consistently \cite{Lynch96}. Agreement despite crash failures is ensured. Similarly, an algorithm might satisfy termination only with respect to a maximal number of failures or under the assumption that a certain process never fails. The simplest way to express crash failures with optional blocks is to encapsulate the specification of a whole algorithm in an optional block on all participating roles. Projection then results in local types $ T_i $ for each role $ \Role_i $ that are completely encapsulated by an optional block $ \LTOptS{\tilde{\Role}}{T_i}{\cdot} $. A process crashes iff its optional block fails. Here we need to encapsulate all communications between the participating roles $ \tilde{\Role} $ in optional blocks of the form $ \GUL{\Role[src]}{\Args[v]_{\Role[src]}}{\Role[trg]}{\Args[v]_{\Role[trg]}} $, \ie we have to model all links as unreliable, to ensure that the crash of one process does not doom the whole system. With that, the specification of systems that contain both link and crash failures is easy. We can also model a process crash with recovery this way, using a recursion $ \LTRec{\TermV}{\LTOpt{\tilde{\Role}}{T_i}{\Typed{\tilde{\Args}}{\tilde{\Sort}}}}{t} $ and the default values $ \tilde{\Args} $ to capture the initial values of the process. The main difficulty lies in systems with unreliable processes but reliable links. Here we have to ensure that all communication failures result from a crashed process. An easy way to tackle this problem is to let the reduction semantics keep track of the processes that are crashed or are currently considered alive, as was done \eg in \cite{KuhnrichNestmann09,wagnerNestmann14} or for exceptions in \cite{capecchi2016}. With that the semantics can ensure that a communication error causes a process to crash---or is caused by a crashed process---and that the optional block of a crashed process will eventually fail. The interesting question here is how the type system can be used to guarantee termination in systems with unreliable processes, if the algorithm does not terminate in the presence of arbitrary link failures. Even more challenging is the analysis of algorithms that tolerate only a bounded number of failures. In the presented approach we concentrate---as a first step---on link failures/message loss and algorithms that terminate despite arbitrary link failures. \section{Conclusions} \label{sec:conclusions} We extend standard session types with optional blocks with default values.
Thereby, we obtain a type system for progress and completion/termination despite link failures that can be used to reason about fault-tolerant distributed algorithms. Our approach is limited with respect to two aspects: We only cover algorithms that \begin{inparaenum}[(1)] \item allow us to specify default values for all unreliable communication steps and \item terminate despite arbitrary link failures. \end{inparaenum} Accordingly, this approach is only a first step towards the analysis of distributed algorithms with session types. It shows, however, that it is possible to analyse distributed algorithms with session types and how the latter can solve the otherwise often complicated and elaborate task of proving termination. Note that optional blocks can contain larger parts of protocols than a single communication step. Thus they may also allow for more complicated failure patterns than simple link failures/message loss. In \cite{adameitPetersNestmann17} we extend a simple type system with optional blocks. The concept of rounds, which is interesting for many distributed algorithms, is instead obtained by using the more complicated nested protocols (as defined in \cite{DemangeonHonda12}) with optional blocks. Due to lack of space, the type systems with nested protocols/sub-sessions and optional blocks as well as more interesting examples with and without explicit (and of course overlapping) rounds were postponed to this report. As presented above, the inclusion of sub-sessions is straightforward and does not require changing the concept of optional blocks as presented in \cite{adameitPetersNestmann17}. In combination with sub-sessions our approach respects two important aspects of fault-tolerant distributed algorithms: \begin{inparaenum}[(1)] \item the modularity present \eg in the concept of rounds of many algorithms can be expressed naturally, and \item the model respects the asynchronous nature of distributed systems, such that messages are not necessarily delivered in the order they are sent and the rounds may overlap. \end{inparaenum} Our extension offers new possibilities for the analysis of distributed algorithms and widens the applicability of session types to unreliable network structures. We hope to inspire further work, in particular to cover larger classes of algorithms and system failures. \newcommand*{\doi}[1]{\href{http://dx.doi.org/#1}{doi: #1}} \end{document}
\begin{document} \title[Weighted Morrey estimates]{Some inequalities for the multilinear singular integrals with Lipschitz functions on weighted Morrey spaces} \author{FER\.{I}T G\"{U}RB\"{U}Z} \address{ Hakkari University, Faculty of Education, Department of Mathematics Education, Hakkari 30000, Turkey} \email{[email protected]} \urladdr{} \thanks{} \curraddr{ } \urladdr{} \thanks{} \date{} \subjclass[2000]{ 42B20, 42B25, 47G10} \keywords{Oscillation; variation; multilinear singular integral operators; Lipschitz space; weighted Morrey space; weights } \dedicatory{} \thanks{} \begin{abstract} The aim of this paper is to prove the boundedness of the oscillation and variation operators for the multilinear singular integrals with Lipschitz functions on weighted Morrey spaces. \end{abstract} \maketitle \section{Introduction} Let $K\left( x,y\right) $ be a continuous function defined on $\Omega =\left\{ \left( x,y\right) \in {\mathbb{ R\times R}}:x\neq y\right\} $. We say that $K$ is a Calder\'{o}n-Zygmund standard kernel if there exists $C>0$ such that \begin{equation} \left\vert K\left( x,y\right) \right\vert \leq \frac{C}{\left\vert x-y\right\vert },\qquad \forall \left( x,y\right) \in \Omega \label{1} \end{equation} and, for all $x$, $x_{0}$, $y\in {\mathbb{R}}$ with $\left\vert x-y\right\vert >2\left\vert x-x_{0}\right\vert $, \begin{eqnarray} &&\left\vert K\left( x,y\right) -K\left( x_{0},y\right) \right\vert +\left\vert K\left( y,x\right) -K\left( y,x_{0}\right) \right\vert \notag \\ &\leq &\frac{C}{\left\vert x-y\right\vert }\left( \frac{\left\vert x-x_{0}\right\vert }{\left\vert x-y\right\vert }\right) ^{\beta }, \label{1*} \end{eqnarray} where $0<\beta <1$. Suppose that $K$ satisfies (\ref{1}) and (\ref{1*}). Then, Zhang and Wu \cite {Zhang} considered the family of operators $T:=\left\{ T_{\epsilon }\right\} _{\epsilon >0}$ and a related family of commutator operators $ T_{b}:=\left\{ T_{\epsilon ,b}\right\} _{\epsilon >0}$ generated by $ T_{\epsilon }$ and $b$, which are given by \begin{equation} T_{\epsilon }f\left( x\right) =\dint\limits_{\left\vert x-y\right\vert >\epsilon }K\left( x,y\right) f\left( y\right) dy \label{3} \end{equation} and \begin{equation} T_{\epsilon ,b}f\left( x\right) =\dint\limits_{\left\vert x-y\right\vert >\epsilon }\left( b\left( x\right) -b\left( y\right) \right) K\left( x,y\right) f\left( y\right) dy. \label{0} \end{equation} In this sense, following \cite{Zhang}, the oscillation operator of $T$ is defined by \begin{equation*} \mathcal{O}\left( Tf\right) \left( x\right) :=\left( \dsum\limits_{i=1}^{\infty }\sup_{t_{i+1}\leq \epsilon _{i+1}<\epsilon _{i}\leq t_{i}}\left\vert T_{\epsilon _{i+1}}f\left( x\right) -T_{\epsilon _{i}}f\left( x\right) \right\vert ^{2}\right) ^{\frac{1}{2}}, \end{equation*} where $\left\{ t_{i}\right\} $ is a fixed decreasing sequence of positive numbers converging to $0$, and the related $\rho $-variation operator is defined by \begin{equation*} \mathcal{V}_{\rho }\left( Tf\right) \left( x\right) :=\sup_{\epsilon _{i}\searrow 0}\left( \dsum\limits_{i=1}^{\infty }\left\vert T_{\epsilon _{i+1}}f\left( x\right) -T_{\epsilon _{i}}f\left( x\right) \right\vert ^{\rho }\right) ^{\frac{1}{\rho }},\qquad \rho >2, \end{equation*} where the supremum is taken over all sequences of real numbers $\left\{ \epsilon _{i}\right\} $ decreasing to $0$.
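To motivate these operators, we record a standard observation (folklore, included here only for the reader's convenience and not taken from a specific reference): if $\mathcal{V}_{\rho }\left( Tf\right) \left( x\right) <\infty $ at a point $x$, then $\lim\limits_{\epsilon \rightarrow 0^{+}}T_{\epsilon }f\left( x\right) $ exists. Indeed, if this limit failed to exist, we could choose $\delta >0$ and a sequence $\epsilon _{i}\searrow 0$ with \begin{equation*} \left\vert T_{\epsilon _{i+1}}f\left( x\right) -T_{\epsilon _{i}}f\left( x\right) \right\vert \geq \delta \qquad \text{for all }i, \end{equation*} which would force $\dsum\limits_{i=1}^{\infty }\left\vert T_{\epsilon _{i+1}}f\left( x\right) -T_{\epsilon _{i}}f\left( x\right) \right\vert ^{\rho }=\infty $, contradicting $\mathcal{V}_{\rho }\left( Tf\right) \left( x\right) <\infty $. Thus bounds for $\mathcal{O}\left( Tf\right) $ and $\mathcal{V}_{\rho }\left( Tf\right) $ quantify the pointwise convergence of the truncated integrals $T_{\epsilon }f$.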
We also take into account the operator \begin{equation*} \mathcal{O}^{\prime }\left( Tf\right) \left( x\right) :=\left( \dsum\limits_{i=1}^{\infty }\sup_{t_{i+1}<\eta _{i}<t_{i}}\left\vert T_{t_{i+1}}f\left( x\right) -T_{\eta _{i}}f\left( x\right) \right\vert ^{2}\right) ^{\frac{1}{2}}. \end{equation*} On the other hand, it is obvious that \begin{equation*} \mathcal{O}^{\prime }\left( Tf\right) \approx \mathcal{O}\left( Tf\right) . \end{equation*} That is, \begin{equation*} \mathcal{O}^{\prime }\left( Tf\right) \leq \mathcal{O}\left( Tf\right) \leq 2 \mathcal{O}^{\prime }\left( Tf\right) . \end{equation*} Recently, Campbell et al. \cite{Campbell} proved oscillation and variation inequalities for the Hilbert transform in $L^{p}$ ($1<p<\infty $). Following \cite{Campbell}, we denote by $E$ the mixed norm Banach space of two-variable functions $h$ defined on $ \mathbb{R} \times \mathbb{N} $ such that \begin{equation*} \left\Vert h\right\Vert _{E}\equiv \left( \dsum\limits_{i}\left( \sup_{s}\left\vert h\left( s,i\right) \right\vert \right) ^{2}\right) ^{1/2}<\infty . \end{equation*} Suppose that $T:=\left\{ T_{\epsilon }\right\} _{\epsilon >0}$ is a family of operators such that $\lim\limits_{\epsilon \rightarrow 0}T_{\epsilon }f\left( x\right) =Tf\left( x\right) $ exists almost everywhere for a certain class of functions $f$, where $T_{\epsilon }$ is defined as in (\ref{3}). For a fixed decreasing sequence $\left\{ t_{i}\right\} $ with $t_{i}\searrow 0$, let $J_{i}=\left( t_{i+1},t_{i}\right] $ and define the $E$-valued operator $ U\left( T\right) :f\rightarrow U\left( T\right) f$ given by \begin{equation*} U\left( T\right) f\left( x\right) =\left\{ T_{t_{i+1}}f\left( x\right) -T_{s}f\left( x\right) \right\} _{s\in J_{i},i\in \mathbb{N} }=\left\{ \dint\limits_{\left\{ t_{i+1}<\left\vert x-y\right\vert <s\right\} }K\left( x,y\right) f\left( y\right) dy\right\} _{s\in J_{i},i\in \mathbb{N} }. \end{equation*} Then \begin{eqnarray*} \mathcal{O}^{\prime }\left( Tf\right) \left( x\right) &=&\left\Vert U\left( T\right) f\left( x\right) \right\Vert _{E}=\left\Vert \left\{ T_{t_{i+1}}f\left( x\right) -T_{s}f\left( x\right) \right\} _{s\in J_{i},i\in \mathbb{N} }\right\Vert _{E} \\ &=&\left\Vert \left\{ \dint\limits_{\left\{ t_{i+1}<\left\vert x-y\right\vert <s\right\} }K\left( x,y\right) f\left( y\right) dy\right\} _{s\in J_{i},i\in \mathbb{N} }\right\Vert _{E}. \end{eqnarray*} Let $\Phi =\left\{ \beta :\beta =\left\{ \epsilon _{i}\right\} ,\epsilon _{i}\in \mathbb{R} ,\epsilon _{i}\searrow 0\right\} $. We denote by $F_{\rho }$ the mixed norm space of two-variable functions $g\left( i,\beta \right) $ such that \begin{equation*} \left\Vert g\right\Vert _{F_{\rho }}\equiv \sup_{\beta }\left( \dsum\limits_{i}\left\vert g\left( i,\beta \right) \right\vert ^{\rho }\right) ^{1/\rho }<\infty . \end{equation*} We also take into account the $F_{\rho }$-valued operator $V\left( T\right) :f\rightarrow V\left( T\right) f$ such that \begin{equation*} V\left( T\right) f\left( x\right) =\left\{ T_{\epsilon _{i+1}}f\left( x\right) -T_{\epsilon _{i}}f\left( x\right) \right\} _{\beta =\left\{ \epsilon _{i}\right\} \in \Phi }. \end{equation*} Thus, \begin{equation*} \mathcal{V}_{\rho }\left( Tf\right) \left( x\right) =\left\Vert V\left( T\right) f\left( x\right) \right\Vert _{F_{\rho }}. \end{equation*} Let $m$ be a positive integer and $b$ a function on ${\mathbb{R}}$.
Let $R_{m+1}\left( b;x,y\right) $ be the $\left( m+1\right) $-th order Taylor series remainder of $b$ at $x$ about $y$, that is, \begin{equation*} R_{m+1}\left( b;x,y\right) =b\left( x\right) -\dsum\limits_{\gamma \leq m} \frac{1}{\gamma !}b^{\left( \gamma \right) }\left( y\right) \left( x-y\right) ^{\gamma }. \end{equation*} In this paper, we consider the family of operators $T^{b}:=\left\{ T_{\epsilon }^{b}\right\} _{\epsilon >0}$ introduced in \cite{Hu}, where the $ T_{\epsilon }^{b}$ are the multilinear singular integral operators associated with $ T_{\epsilon }$, defined by \begin{equation} T_{\epsilon }^{b}f\left( x\right) =\dint\limits_{\left\vert x-y\right\vert >\epsilon }\frac{R_{m+1}\left( b;x,y\right) }{\left\vert x-y\right\vert ^{m}} K\left( x,y\right) f\left( y\right) dy. \label{4} \end{equation} Thus, if $m=0$, then $T_{\epsilon }^{b}$ is just the commutator of $ T_{\epsilon }$ and $b$ given by (\ref{0}), while if $m>0$, then the operators $ T_{\epsilon }^{b}$ are non-trivial generalizations of the commutators. The theory of multilinear analysis has received extensive study in the last three decades (see \cite{Cohen, Gurbuz}, for example). Hu and Wang \cite{Hu} proved the weighted $\left( L^{p},L^{q}\right) $-boundedness of the oscillation and variation operators for $T^{b}$ when the $m$-th derivative of $b$ belongs to the homogeneous Lipschitz space $\dot{\Lambda}_{\beta }$. We recall the definition of the homogeneous Lipschitz space $\dot{ \Lambda}_{\beta }$ as follows: \begin{definition} $\left( \text{\textbf{Homogeneous Lipschitz space}}\right) $ Let $0<\beta \leq 1$. The homogeneous Lipschitz space $\dot{\Lambda}_{\beta }$ is defined by \begin{equation*} \dot{\Lambda}_{\beta }\left( {\mathbb{R}}\right) =\left\{ b:\left\Vert b\right\Vert _{\dot{\Lambda}_{\beta }}=\sup_{x,h\in \mathbb{R} ,h\neq 0}\frac{\left\vert b\left( x+h\right) -b\left( x\right) \right\vert }{ \left\vert h\right\vert ^{\beta }}<\infty \right\} . \end{equation*} Obviously, if $\beta >1$, then $\dot{\Lambda}_{\beta }\left( {\mathbb{R}} \right) $ contains only constant functions, so we restrict to $0<\beta \leq 1$. \end{definition} Now we recall the definitions of some basic function spaces, namely the Morrey, weighted Lebesgue and weighted Morrey spaces, and the relationships between them. Besides the Lebesgue space $L^{q}\left( {\mathbb{R}}\right) $, the Morrey space $M_{p}^{q}\left( {\mathbb{R}}\right) $ is another important function space, defined as follows: \begin{definition} $\left( \text{\textbf{Morrey space}}\right) $\label{Definition1} For $1\leq p\leq q<\infty $, the Morrey space $M_{p}^{q}\left( {\mathbb{R}}\right) $ is the collection of all measurable functions $f$ whose Morrey norm is \begin{equation*} \left\Vert f\right\Vert _{M_{p}^{q}\left( {\mathbb{R}}\right) }=\sup _{\substack{ I\subset {\mathbb{R}} \\ I:Interval}}\frac{1}{\left\vert I\right\vert ^{\frac{1}{p}-\frac{1}{q}}}\left\Vert f\chi _{I}\right\Vert _{L_{p}\left( {\mathbb{R}}\right) }<\infty . \end{equation*} \end{definition} \begin{remark} $\cdot $ If $p=q$, then \begin{equation*} \Vert f\Vert _{M_{q}^{q}\left( {\mathbb{R}}\right) }=\Vert f\Vert _{L^{q}\left( {\mathbb{R}}\right) }. \end{equation*} $\cdot $ If $p<q$, then $M_{p}^{q}\left( {\mathbb{R}}\right) $ is strictly larger than $L^{q}\left( {\mathbb{R}}\right) $. For example, $ f(x):=\left\vert x\right\vert ^{-\frac{1}{q}}\in $ $M_{p}^{q}\left( {\mathbb{ R}}\right) $ but $f\notin L^{q}\left( {\mathbb{R}}\right) $.
\end{remark} For a given weight function $w$ and any interval $I$, we denote the Lebesgue measure of $I$ by $\left\vert I\right\vert $ and set the weighted measure \begin{equation*} w\left( I\right) =\dint\limits_{I}w\left( x\right) dx. \end{equation*} For $0<p<\infty $, the weighted Lebesgue space $L_{p}(w)\equiv L_{p}({{ \mathbb{R}}},w)$ is defined by the norm \begin{equation*} \Vert f\Vert _{L_{p}(w)}=\left( \dint\limits_{{{\mathbb{R}}} }|f(x)|^{p}w(x)dx\right) ^{\frac{1}{p}}<\infty . \end{equation*} A weight $w$ is said to belong to the Muckenhoupt class $A_{p}$ for $ 1<p<\infty $ if \begin{align} \lbrack w]_{A_{p}}& :=\sup\limits_{I}[w]_{A_{p}(I)} \notag \\ & =\sup\limits_{I}\left( \frac{1}{|I|}\dint\limits_{I}w(x)dx\right) \left( \frac{1}{|I|}\dint\limits_{I}w(x)^{1-p^{\prime }}dx\right) ^{p-1}<\infty , \label{2} \end{align} where $p^{\prime }=\frac{p}{p-1}$. The condition (\ref{2}) is called the $ A_{p}$-condition, and the weights which satisfy it are called $A_{p}$ -weights. The quantity $[w]_{A_{p}}$ is called the $A_{p}$ characteristic constant of $w$. Here and in what follows, $A_{p}$ denotes the Muckenhoupt class (see \cite{Gurbuz, KomShir}). The $A_{p}$ class of weights characterizes the $L_{p}(w)$ boundedness of the maximal function, as Muckenhoupt \cite{Muckenhoupt} established in the 1970s. Subsequent works of Muckenhoupt and Wheeden \cite{Muckenhoupt1, Muckenhoupt2} and of Coifman and Fefferman \cite{Coifman} were devoted to exploring the connection of the $ A_{p}$ class with weighted estimates for singular integrals. However, it was not until the 2000s that the quantitative dependence on the so-called $A_{p}$ constant, namely $[w]_{A_{p}}$, became a central topic. When $p=1$, $w\in $ $A_{1}$ if there exists $C>1$ such that for almost every $x$, \begin{equation} Mw(x)\leq Cw\left( x\right) \label{5} \end{equation} and the infimum of $C$ satisfying the inequality (\ref{5}) is denoted by $ [w]_{A_{1}}$, where $M$ is the classical Hardy-Littlewood maximal operator, \begin{equation*} Mf(x):=\sup\limits_{I\ni x}\frac{1}{|I|}\int\limits_{I}|f(y)|dy. \end{equation*} When $p=\infty $, we define $A_{\infty }\left( {{\mathbb{R}}}\right) =\dbigcup\limits_{1\leq p<\infty }A_{p}\left( {{\mathbb{R}}}\right) $, and the (Fujii-Wilson) $A_{\infty }$ constant is given by \begin{eqnarray*} \lbrack w]_{A_{\infty }} &:&=\sup\limits_{I}[w]_{A_{\infty }(I)} \\ &=&\sup\limits_{I}\frac{1}{w\left( I\right) }\dint\limits_{I}M\left( \chi _{I}w\right) \left( x\right) dx, \end{eqnarray*} where $M\left( \chi _{I}w\right) $ denotes the Hardy-Littlewood maximal function of $\chi _{I}w$. A weight function $w$ belongs to $A_{p,q}$ (Muckenhoupt-Wheeden class) for $1<p<q<\infty $ if \begin{align} \lbrack w]_{A_{p,q}}& :=\sup\limits_{I}[w]_{A_{p,q}(I)} \notag \\ & =\sup\limits_{I}\left( \frac{1}{|I|}\dint\limits_{I}w(x)^{q}dx\right) ^{ \frac{1}{q}}\left( \frac{1}{|I|}\dint\limits_{I}w(x)^{-p^{\prime }}dx\right) ^{\frac{1}{p^{\prime }}}<\infty . \label{13} \end{align} From the definition of $A_{p,q}$, we know that $w\left( x\right) \in A_{p,q}\left( {{\mathbb{R}}}\right) $ implies $w(x)^{q}\in A_{q}\left( {{ \mathbb{R}}}\right) $ and $w(x)^{p}\in A_{p}\left( {{\mathbb{R}}}\right) $. We now record some lemmas which are needed for the proof of the main result; before doing so, we recall a standard family of examples of such weights.
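Namely (a standard example, included only for orientation and not used in the proofs below), one can check by a direct computation that the power weight $w\left( x\right) =\left\vert x\right\vert ^{\alpha }$ on ${\mathbb{R}}$ satisfies
\begin{equation*}
\left\vert x\right\vert ^{\alpha }\in A_{p}\left( {\mathbb{R}}\right) \iff -1<\alpha <p-1,\qquad \left\vert x\right\vert ^{\alpha }\in A_{p,q}\left( {\mathbb{R}}\right) \iff -\frac{1}{q}<\alpha <\frac{1}{p^{\prime }},
\end{equation*}
which is consistent with the implication $w\in A_{p,q}\Rightarrow w^{q}\in A_{q}$ noted above, since $-1<\alpha q<\frac{q}{p^{\prime }}\leq q-1$ whenever $p\leq q$.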
\begin{lemma} \label{Lemma1}\cite{GarRub} If $w\in A_{p}$, $p\geq 1$, then there exists a constant $C>0$ such that \begin{equation*} w\left( 2I\right) \leq Cw\left( I\right) \end{equation*} for any interval $I$. More precisely, for all $\lambda >1$ we have \begin{equation*} w\left( \lambda I\right) \leq C\lambda ^{p}w\left( I\right) , \end{equation*} where $C$ is a constant independent of $I$ or $\lambda $ and $w\left( I\right) =\dint\limits_{I}w\left( x\right) dx$. \end{lemma} \begin{lemma} \label{Lemma2}\cite{Cohen} Let $b$ be a function on $ \mathbb{R} $ and $b^{\left( m\right) }\in L_{u}\left( \mathbb{R} \right) $ with $m\in \mathbb{N} $ for any $u>1$. Then \begin{equation*} \left\vert R_{m}\left( b;x,y\right) \right\vert \leq C\left\vert x-y\right\vert ^{m}\left( \frac{1}{\left\vert I\left( x,y\right) \right\vert }\dint\limits_{I\left( x,y\right) }\left\vert b^{\left( m\right) }\left( z\right) \right\vert ^{u}dz\right) ^{\frac{1}{u}},C>0, \end{equation*} where $I\left( x,y\right) $ is the interval $\left( x-5\left\vert x-y\right\vert ,x+5\left\vert x-y\right\vert \right) $. \end{lemma} \begin{lemma} \label{Lemma3}\cite{Hu} Let $K\left( x,y\right) $ satisfy (\ref{1}) and ( \ref{1*}), $\rho >2$, and let $T:=\left\{ T_{\epsilon }\right\} _{\epsilon >0}$ and $T^{b}:=\left\{ T_{\epsilon }^{b}\right\} _{\epsilon >0}$ be given by ( \ref{3}) and (\ref{4}), respectively. If $\mathcal{O}\left( T\right) $ and $ \mathcal{V}_{\rho }\left( T\right) $ are bounded on $L_{p_{0}}\left( \mathbb{R} ,dx\right) $ for some $1<p_{0}<\infty $, and $b^{\left( m\right) }\in \dot{ \Lambda}_{\beta }$ with $m\in \mathbb{N} $ for $0<\beta <1$, then \begin{equation} \left\Vert \mathcal{O}^{\prime }\left( T^{b}f\right) \right\Vert _{L_{q}\left( w^{q}\right) }\leq \left\Vert \mathcal{O}\left( T^{b}f\right) \right\Vert _{L_{q}\left( w^{q}\right) }\leq C\left\Vert b^{\left( m\right) }\right\Vert _{\dot{ \Lambda}_{\beta }}\Vert f\Vert _{L_{p}(w^{p})},C>0, \label{6} \end{equation} and \begin{equation*} \left\Vert \mathcal{V}_{\rho }\left( T^{b}f\right) \right\Vert _{L_{q}\left( w^{q}\right) }\leq C\left\Vert b^{\left( m\right) }\right\Vert _{\dot{\Lambda}_{\beta }}\Vert f\Vert _{L_{p}(w^{p})},C>0, \end{equation*} for any $1<p<\frac{1}{\beta }$ with $\frac{1}{q}=\frac{1}{p}-\beta $ and $ w\in A_{p,q}$. \end{lemma} Next, in 2009, the weighted Morrey space $L_{p,\kappa }(w)$ was defined by Komori and Shirai \cite{KomShir} as follows: \begin{definition} $\left( \text{\textbf{Weighted Morrey space}}\right) $ Let $1\leq p<\infty $ , $0<\kappa <1$ and $w$ be a weight function. Then the weighted Morrey space $L_{p,\kappa }(w)\equiv L_{p,\kappa }({\mathbb{R}},w)$ is defined by \begin{equation*} L_{p,\kappa }(w)\equiv L_{p,\kappa }({\mathbb{R}},w)=\left\{ f\in L_{p,w}^{loc}\left( {\mathbb{R}}\right) :\Vert f\Vert _{L_{p,\kappa }(w)}=\sup\limits_{I}\,w(I)^{-\frac{\kappa }{p}}\,\Vert f\Vert _{L_{p,w}(I)}<\infty \right\} . \end{equation*} \end{definition} \begin{remark} $\cdot $ If $\kappa =0,$ then \begin{equation*} \Vert f\Vert _{L_{p,0}(w)}=\Vert f\Vert _{L_{p}(w)}. \end{equation*} $\cdot $ When $w\equiv 1$ and $\kappa =1-\frac{p}{q}$ with $1<p\leq q<\infty $, then \begin{equation*} \Vert f\Vert _{L_{p,1-\frac{p}{q}}(1)}=\Vert f\Vert _{M_{p}^{q}\left( { \mathbb{R}}\right) }. \end{equation*} \end{remark} Finally, we recall the definition of the weighted Morrey space with two weights as follows: \begin{definition} $\left( \text{\textbf{Weighted Morrey space with two weights}}\right) $ Let $ 1\leq p<\infty $ and $0<\kappa <1$.
Then for two weights $u$ and $v$, the weighted Morrey space $L_{p,\kappa }(u,v)\equiv L_{p,\kappa }({\mathbb{R}} ,u,v)$ is defined by \begin{equation*} L_{p,\kappa }(u,v)\equiv L_{p,\kappa }({\mathbb{R}},u,v)=\left\{ f\in L_{p,u}^{loc}\left( {\mathbb{R}}\right) :\Vert f\Vert _{L_{p,\kappa }(u,v)}=\sup\limits_{I}\,v(I)^{-\frac{\kappa }{p}}\,\Vert f\Vert _{L_{p,u}(I)}<\infty \right\} . \end{equation*} It is obvious that \begin{equation*} L_{p,\kappa }(w,w)\equiv L_{p,\kappa }(w). \end{equation*} \end{definition} In 2016, Zhang and Wu \cite{Zhang} gave the boundedness of the oscillation and variation operators for Calder\'{o}n-Zygmund singular integrals and the corresponding commutators on the weighted Morrey spaces. In 2017, Hu and Wang \cite{Hu} established the weighted $\left( L^{p},L^{q}\right) $ -inequalities of the variation and oscillation operators for the multilinear Calder\'{o}n-Zygmund singular integral with a Lipschitz function in $ \mathbb{R} $. Inspired by these results \cite{Hu, Zhang}, in this work we investigate the boundedness of the oscillation and variation operators for the family of multilinear singular integrals defined by (\ref{4}) on weighted Morrey spaces when the $m$-th derivative of $b$ belongs to the homogeneous Lipschitz space $ \dot{\Lambda}_{\beta }$. Throughout this paper, $C$ always means a positive constant independent of the main parameters involved, and may change from one occurrence to another. We also use the notation $F\lesssim G$ to mean $F\leq CG$ for an appropriate constant $C>0$, and $F\approx G$ to mean $F\lesssim G$ and $G\lesssim F$. \section{Main result} We now formulate our main result as follows. \begin{theorem} \label{Theorem1}Let $K\left( x,y\right) $ satisfy (\ref{1}) and (\ref{1*} ), $\rho >2$, and let $T:=\left\{ T_{\epsilon }\right\} _{\epsilon >0}$ and $ T^{b}:=\left\{ T_{\epsilon }^{b}\right\} _{\epsilon >0}$ be given by (\ref{3} ) and (\ref{4}), respectively. If $\mathcal{O}\left( T\right) $ and $ \mathcal{V}_{\rho }\left( T\right) $ are bounded on $L_{p_{0}}\left( \mathbb{R} ,dx\right) $ for some $1<p_{0}<\infty $, and $b^{\left( m\right) }\in \dot{ \Lambda}_{\beta }$ with $m\in \mathbb{N} $ for $0<\beta <1$, then $\mathcal{O}\left( T^{b}\right) $ and $\mathcal{V} _{\rho }\left( T^{b}\right) $ are bounded from $L_{p,\kappa }(w^{p},w^{q})$ to $L_{q,\frac{\kappa q}{p}}(w^{q})$ for any $1<p<\frac{1}{\beta }$, $\frac{1 }{q}=\frac{1}{p}-\beta $, $0<\kappa <\frac{p}{q}$ and $w\in A_{p,q}$. \end{theorem} \begin{corollary} \cite{Zhang} Let $K\left( x,y\right) $ satisfy (\ref{1}) and (\ref{1*}), $ \rho >2$, and let $T:=\left\{ T_{\epsilon }\right\} _{\epsilon >0}$ and $ T_{b}:=\left\{ T_{\epsilon ,b}\right\} _{\epsilon >0}$ be given by (\ref{3}) and (\ref{0}), respectively. If $\mathcal{O}\left( T\right) $ and $\mathcal{V }_{\rho }\left( T\right) $ are bounded on $L_{p_{0}}\left( \mathbb{R} ,dx\right) $ for some $1<p_{0}<\infty $, and $b\in \dot{\Lambda}_{\beta }$ for $0<\beta <1$, then $\mathcal{O}\left( T_{b}\right) $ and $\mathcal{V} _{\rho }\left( T_{b}\right) $ are bounded from $L_{p,\kappa }(w^{p},w^{q})$ to $L_{q,\frac{\kappa q}{p}}(w^{q})$ for any $1<p<\frac{1}{\beta }$, $\frac{1 }{q}=\frac{1}{p}-\beta $, $0<\kappa <\frac{p}{q}$ and $w\in A_{p,q}$. \end{corollary} \subsection{The Proof of Theorem \protect\ref{Theorem1}} \begin{proof} We first give the proof for $\mathcal{O}\left( T^{b}\right) $.
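Before starting the estimates, it may help to keep a concrete admissible choice of parameters in mind (given only for illustration; any parameters satisfying the hypotheses of Theorem \ref{Theorem1} may be used): taking $\beta =\frac{1}{3}$ and $p=2<\frac{1}{\beta }$, the relation $\frac{1}{q}=\frac{1}{p}-\beta $ forces $q=6$, the hypothesis $0<\kappa <\frac{p}{q}$ reads $0<\kappa <\frac{1}{3}$, and the Morrey exponent of the target space is $\frac{\kappa q}{p}=3\kappa \in \left( 0,1\right) $. In particular, the assumption $\kappa <\frac{p}{q}$ is exactly what guarantees that $\frac{\kappa q}{p}<1$, so that the target space is a weighted Morrey space in the sense of the definitions above.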
Fix an interval $I=\left( x_{0}-l,x_{0}+l\right) $ and write $ f=f_{1}+f_{2}$, where $f_{1}=f\chi _{2I}$ and $\chi _{2I}$ denotes the characteristic function of $2I$. Thus, it is sufficient to show that the conclusion \begin{eqnarray*} \left\Vert \mathcal{O}^{\prime }\left( T^{b}f\right) \left( x\right) \right\Vert _{L_{q,\frac{\kappa q}{p}}(w^{q})} &\leq &\left\Vert \mathcal{O} ^{\prime }\left( T^{b}f_{1}\right) \left( x\right) \right\Vert _{L_{q,\frac{ \kappa q}{p}}(w^{q})}+\left\Vert \mathcal{O}^{\prime }\left( T^{b}f_{2}\right) \left( x\right) \right\Vert _{L_{q,\frac{\kappa q}{p} }(w^{q})} \\ &\lesssim &\left\Vert b^{\left( m\right) }\right\Vert _{\dot{\Lambda}_{\beta }}\left\Vert f\right\Vert _{L_{p,\kappa }(w^{p},w^{q})} \end{eqnarray*} holds for every interval $I\subset {\mathbb{R}}$. Then \begin{eqnarray*} &&\left( \dint\limits_{I}\left\vert \mathcal{O}^{\prime }\left( T^{b}f\right) \left( x\right) \right\vert ^{q}w^{q}\left( x\right) dx\right) ^{\frac{1}{q}} \\ &\leq &\left( \dint\limits_{I}\left\vert \mathcal{O}^{\prime }\left( T^{b}f_{1}\right) \left( x\right) \right\vert ^{q}w^{q}\left( x\right) dx\right) ^{\frac{1}{q}}+\left( \dint\limits_{I}\left\vert \mathcal{O} ^{\prime }\left( T^{b}f_{2}\right) \left( x\right) \right\vert ^{q}w^{q}\left( x\right) dx\right) ^{\frac{1}{q}} \\ &=:&F_{1}+F_{2}. \end{eqnarray*} First, we use (\ref{6}) to estimate $F_{1}$; using also Lemma \ref{Lemma1} (applied to $w^{q}\in A_{q}$) in the last step, we obtain \begin{eqnarray*} F_{1} &=&\left( \dint\limits_{I}\left\vert \mathcal{O}^{\prime }\left( T^{b}f_{1}\right) \left( x\right) \right\vert ^{q}w^{q}\left( x\right) dx\right) ^{\frac{1}{q}}\lesssim \left\Vert b^{\left( m\right) }\right\Vert _{\dot{\Lambda} _{\beta }}\Vert f_{1}\Vert _{L_{p}(w^{p})} \\ &=&\left\Vert b^{\left( m\right) }\right\Vert _{\dot{\Lambda}_{\beta }}\left( \frac{1}{ w^{q}\left( 2I\right) ^{\kappa }}\dint\limits_{2I}\left\vert f\left( x\right) \right\vert ^{p}w^{p}\left( x\right) dx\right) ^{\frac{1}{p} }w^{q}\left( 2I\right) ^{\frac{\kappa }{p}} \\ &\lesssim &\left\Vert b^{\left( m\right) }\right\Vert _{\dot{\Lambda}_{\beta }}\left\Vert f\right\Vert _{L_{p,\kappa }(w^{p},w^{q})}w^{q}\left( I\right) ^{\frac{ \kappa }{p}}. \end{eqnarray*} Thus, \begin{equation} \left\Vert \mathcal{O}^{\prime }\left( T^{b}f_{1}\right) \left( x\right) \right\Vert _{L_{q,\frac{\kappa q}{p}}(w^{q})}\lesssim \left\Vert b^{\left( m\right) }\right\Vert _{\dot{\Lambda}_{\beta }}\left\Vert f\right\Vert _{L_{p,\kappa }(w^{p},w^{q})}. \label{20} \end{equation} Second, for $x\in I$ and $k=1,2,\ldots $, let $A_{k}=\left\{ y:2^{k}l\leq \left\vert y-x\right\vert <2^{k+1}l\right\} $, $B_{k}=\left\{ y:\left\vert y-x\right\vert <2^{k+1}l\right\} $, and \begin{equation*} b_{k}\left( z\right) =b\left( z\right) -\frac{1}{m!}\left( b^{\left( m\right) }\right) _{B_{k}}z^{m}. \end{equation*} By \cite{Cohen}, for any $y\in A_{k}$, it is obvious that \begin{equation*} R_{m+1}\left( b;x,y\right) =R_{m+1}\left( b_{k};x,y\right) . \end{equation*} Moreover, since $b^{\left( m\right) }\in \dot{\Lambda}_{\beta }$, for $y\in A_{k}$ we get \begin{eqnarray} \left\vert b^{\left( m\right) }\left( y\right) -\left( b^{\left( m\right) }\right) _{B_{k}}\right\vert &\leq &\frac{1}{\left\vert B_{k}\right\vert } \dint\limits_{B_{k}}\left\vert b^{\left( m\right) }\left( y\right) -b^{\left( m\right) }\left( z\right) \right\vert dz \notag \\ &\lesssim &\left\Vert b^{\left( m\right) }\right\Vert _{\dot{\Lambda}_{\beta }}\left( 2^{k}l\right) ^{\beta }.
\label{7} \end{eqnarray} Hence, by Lemma \ref{Lemma2} applied to $b_{k}$ and by (\ref{7}), \begin{eqnarray*} \left\vert R_{m}\left( b_{k};x,y\right) \right\vert &\lesssim &\left\vert x-y\right\vert ^{m}\left( \frac{1}{\left\vert I\left( x,y\right) \right\vert }\dint\limits_{I\left( x,y\right) }\left\vert b_{k}^{\left( m\right) }\left( z\right) \right\vert ^{u}dz\right) ^{\frac{1}{u}} \\ &\lesssim &\left\vert x-y\right\vert ^{m}\left\Vert b^{\left( m\right) }\right\Vert _{\dot{\Lambda}_{\beta }}\left( 2^{k}l\right) ^{\beta }. \end{eqnarray*} Also, following \cite{Zhang}, we have \begin{equation*} \left\Vert \left\{ \chi _{\left\{ t_{i+1}<\left\vert x-y\right\vert <u\right\} }\right\} _{u\in J_{i},i\in \mathbb{N} }\right\Vert _{E}\leq 1. \end{equation*} Thus, the estimate of $F_{2}$ can be obtained as follows: \begin{eqnarray*} \left\vert \mathcal{O}^{\prime }\left( T^{b}f_{2}\right) \left( x\right) \right\vert &=&\left\Vert U\left( T^{b}\right) f_{2}\left( x\right) \right\Vert _{E} \\ &=&\left\Vert \left\{ \dint\limits_{\left\{ t_{i+1}<\left\vert x-y\right\vert <u\right\} }\frac{R_{m+1}\left( b;x,y\right) }{\left\vert x-y\right\vert ^{m}}K\left( x,y\right) f_{2}\left( y\right) dy\right\} _{u\in J_{i},i\in \mathbb{N} }\right\Vert _{E} \\ &\leq &\dint\limits_{ \mathbb{R} }\left\Vert \left\{ \chi _{\left\{ t_{i+1}<\left\vert x-y\right\vert <u\right\} }\right\} _{u\in J_{i},i\in \mathbb{N} }\right\Vert _{E}\left\vert \frac{R_{m+1}\left( b;x,y\right) }{\left\vert x-y\right\vert ^{m}}K\left( x,y\right) f_{2}\left( y\right) \right\vert dy \\ &\leq &\dint\limits_{ \mathbb{R} }\left\vert \frac{R_{m+1}\left( b;x,y\right) }{\left\vert x-y\right\vert ^{m} }K\left( x,y\right) f_{2}\left( y\right) \right\vert dy \\ &\lesssim &\dint\limits_{\left\vert x-y\right\vert >2l}\left\vert \frac{ R_{m+1}\left( b;x,y\right) }{\left\vert x-y\right\vert ^{m}}K\left( x,y\right) f\left( y\right) \right\vert dy \\ &\lesssim &\dsum\limits_{k=1}^{\infty }\frac{1}{2^{k}l}\dint\limits_{A_{k}} \left( \left\Vert b^{\left( m\right) }\right\Vert _{\dot{\Lambda}_{\beta }}\left( 2^{k}l\right) ^{\beta }+\left\vert b^{\left( m\right) }\left( y\right) -\left( b^{\left( m\right) }\right) _{B_{k}}\right\vert \right) \left\vert f\left( y\right) \right\vert dy \\ &\lesssim &\left\Vert b^{\left( m\right) }\right\Vert _{\dot{\Lambda}_{\beta }}\dsum\limits_{k=1}^{\infty }\frac{1}{\left( 2^{k}l\right) ^{1-\beta }} \dint\limits_{A_{k}}\left\vert f\left( y\right) \right\vert dy+\dsum\limits_{k=1}^{\infty }\frac{1}{2^{k}l}\dint\limits_{A_{k}}\left \vert b^{\left( m\right) }\left( y\right) -\left( b^{\left( m\right) }\right) _{B_{k}}\right\vert \left\vert f\left( y\right) \right\vert dy \\ &=:&G_{1}+G_{2}.
\end{eqnarray*} For $G_{1}$, since \begin{equation*} \left( \dint\limits_{A_{k}}w\left( y\right) ^{-p^{\prime }}dy\right) ^{\frac{ 1}{p^{\prime }}}\lesssim w^{q}\left( B_{k}\right) ^{-\frac{1}{q}}\left\vert B_{k}\right\vert ^{\frac{1}{p^{\prime }}+\frac{1}{q}} \end{equation*} with $1<p<\frac{1}{\beta }$, $\frac{1}{q}=\frac{1}{p}-\beta $ and using H \"{o}lder's inequality, we have \begin{eqnarray} &&\dsum\limits_{k=1}^{\infty }\frac{1}{\left( 2^{k}l\right) ^{1-\beta }} \dint\limits_{A_{k}}\left\vert f\left( y\right) \right\vert dy \notag \\ &\lesssim &\dsum\limits_{k=1}^{\infty }\frac{1}{\left( 2^{k}l\right) ^{1-\beta }}\left( \dint\limits_{A_{k}}\left\vert f\left( y\right) \right\vert ^{p}w^{p}\left( y\right) dy\right) ^{\frac{1}{p}}\left( \dint\limits_{A_{k}}w\left( y\right) ^{-p^{\prime }}dy\right) ^{\frac{1}{ p^{\prime }}} \notag \\ &\lesssim &\left\Vert f\right\Vert _{L_{p,\kappa }(w^{p},w^{q})}\dsum\limits_{k=1}^{\infty }\frac{\left( 2^{k}l\right) ^{ \frac{1}{p^{\prime }}+\frac{1}{q}}}{\left( 2^{k}l\right) ^{1-\beta }} w^{q}\left( B_{k}\right) ^{\frac{\kappa }{p}-\frac{1}{q}} \notag \\ &\lesssim &\left\Vert f\right\Vert _{L_{p,\kappa }(w^{p},w^{q})}\dsum\limits_{k=1}^{\infty }w^{q}\left( B_{k}\right) ^{\frac{ \kappa }{p}-\frac{1}{q}}. \label{10} \end{eqnarray} Since $w\in A_{p,q}$, we have $w^{q}\in A_{\infty }$, so there exist constants $C>0$ and $\delta >0$ such that $w^{q}\left( S\right) \leq C\left( \left\vert S\right\vert /\left\vert J\right\vert \right) ^{\delta }w^{q}\left( J\right) $ whenever $S$ is a measurable subset of an interval $J$. Taking $S=I$ and $J=B_{k}$ (note that $I\subset B_{k}$ and $\left\vert I\right\vert /\left\vert B_{k}\right\vert \leq 2^{-k}$) yields $w^{q}\left( B_{k}\right) \gtrsim 2^{k\delta }w^{q}\left( I\right) $. Since $\frac{\kappa }{p}-\frac{1}{q}<0$, it follows that \begin{equation} \dsum\limits_{k=1}^{\infty }w^{q}\left( B_{k}\right) ^{\frac{\kappa }{p}- \frac{1}{q}}\lesssim w^{q}\left( I\right) ^{\frac{\kappa }{p}-\frac{1}{q} }\dsum\limits_{k=1}^{\infty }2^{k\delta \left( \frac{\kappa }{p}-\frac{1}{q} \right) }\lesssim w^{q}\left( I\right) ^{\frac{\kappa }{p}-\frac{1}{q}}. \label{11} \end{equation} This implies \begin{equation} G_{1}\lesssim \left\Vert b^{\left( m\right) }\right\Vert _{\dot{\Lambda} _{\beta }}\left\Vert f\right\Vert _{L_{p,\kappa }(w^{p},w^{q})}w^{q}\left( I\right) ^{\frac{\kappa }{p}-\frac{1}{q}}. \label{12} \end{equation} For $G_{2}$, by (\ref{7}), (\ref{10}) and (\ref{11}) we get \begin{eqnarray} G_{2} &\lesssim &\left\Vert b^{\left( m\right) }\right\Vert _{\dot{\Lambda} _{\beta }}\dsum\limits_{k=1}^{\infty }\frac{1}{\left( 2^{k+1}l\right) ^{1-\beta }}\dint\limits_{A_{k}}\left\vert f\left( y\right) \right\vert dy \notag \\ &\lesssim &\left\Vert b^{\left( m\right) }\right\Vert _{\dot{\Lambda}_{\beta }}\left\Vert f\right\Vert _{L_{p,\kappa }(w^{p},w^{q})}w^{q}\left( I\right) ^{\frac{\kappa }{p}-\frac{1}{q}}. \label{12*} \end{eqnarray} Thus, by (\ref{12}) and (\ref{12*}), we obtain \begin{eqnarray*} F_{2} &=&\left( \dint\limits_{I}\left\vert \mathcal{O}^{\prime }\left( T^{b}f_{2}\right) \left( x\right) \right\vert ^{q}w^{q}\left( x\right) dx\right) ^{\frac{1}{q}} \\ &\lesssim &\left\Vert b^{\left( m\right) }\right\Vert _{\dot{\Lambda}_{\beta }}\left\Vert f\right\Vert _{L_{p,\kappa }(w^{p},w^{q})}w^{q}\left( I\right) ^{\frac{\kappa }{p}-\frac{1}{q}}w^{q}\left( I\right) ^{\frac{1}{q}} \\ &=&\left\Vert b^{\left( m\right) }\right\Vert _{\dot{\Lambda}_{\beta }}\left\Vert f\right\Vert _{L_{p,\kappa }(w^{p},w^{q})}w^{q}\left( I\right) ^{\frac{\kappa }{p}}. \end{eqnarray*} Thus, \begin{equation} \left\Vert \mathcal{O}^{\prime }\left( T^{b}f_{2}\right) \left( x\right) \right\Vert _{L_{q,\frac{\kappa q}{p}}(w^{q})}\lesssim \left\Vert b^{\left( m\right) }\right\Vert _{\dot{\Lambda}_{\beta }}\left\Vert f\right\Vert _{L_{p,\kappa }(w^{p},w^{q})}.
\label{21} \end{equation} As a result, by (\ref{20}) and (\ref{21}), we get \begin{equation*} \left\Vert \mathcal{O}^{\prime }\left( T^{b}f\right) \left( x\right) \right\Vert _{L_{q,\frac{\kappa q}{p}}(w^{q})}\lesssim \left\Vert b^{\left( m\right) }\right\Vert _{\dot{\Lambda}_{\beta }}\left\Vert f\right\Vert _{L_{p,\kappa }(w^{p},w^{q})}. \end{equation*} Similarly, $\mathcal{V}_{\rho }\left( T^{b}\right) $ admits the same estimate as above (we omit the details), so the inequality \begin{equation*} \left\Vert \mathcal{V}_{\rho }\left( T^{b}f\right) \left( x\right) \right\Vert _{L_{q,\frac{\kappa q}{p}}(w^{q})}\lesssim \left\Vert b^{\left( m\right) }\right\Vert _{\dot{\Lambda}_{\beta }}\left\Vert f\right\Vert _{L_{p,\kappa }(w^{p},w^{q})} \end{equation*} is also valid. Therefore, Theorem \ref{Theorem1} is completely proved. \end{proof} \end{document}
\begin{document} \title{Einstein-Podolsky-Rosen Correlations of Ultracold Atomic Gases} \author{Nir Bar-Gill$^1$} \author{Christian Gross$^2$} \author{Gershon Kurizki$^1$} \author{Igor Mazets$^3$} \author{Markus Oberthaler$^2$} \affiliation{$^1$Weizmann Institute of Science, Rehovot, Israel.} \affiliation{$^2$Kirchhoff-Institut f\"{u}r Physik, Universit\"{a}t Heidelberg, 69120 Heidelberg, Germany.} \affiliation{$^3$Atominstitut \"{o}sterreichischer Universit\"{a}ten, TU Wien, Vienna, Austria.} \begin{abstract} Einstein, Podolsky \& Rosen (EPR) pointed out \cite{epr} that correlations induced between quantum objects will persist after these objects have ceased to interact. Consequently, their joint continuous variables (CV), e.g., the difference of their positions and the sum of their momenta, may be specified, regardless of their distance, with arbitrary precision. EPR correlations give rise to two fundamental notions\cite{mann,bell,schrodinger,peres,braunstein_rev}: {\em nonlocal ``steering''} of the quantum state of one object by measuring the other, and inseparability ({\em entanglement}) of their quantum states. EPR entanglement is a resource of quantum information (QI)\cite{braunstein_rev,drummond_reid,zoller_cve} and CV teleportation of light\cite{vaidman,braunstein} and matter waves\cite{opatrny,opatrny_deb}. It has lately been demonstrated for {\em collective} CV of distant thermal-gas clouds, correlated by interaction with a common field \cite{polzik1999,polzik_nature}. Here we demonstrate that collective CV of two species of trapped ultracold bosonic gases can be EPR-correlated (entangled) via {\em inherent} interactions between the species. This paves the way to further QI applications of such systems, which are atomic analogs of coupled superconducting Josephson Junctions (JJ)\cite{smerzi_prl,oberthaler}. A precursor of this study has been the observation of quantum correlations (squeezing) in a single bosonic JJ \cite{oberthaler_squeezing}. \end{abstract} \maketitle \paragraph*{EPR criteria --} In studying continuous variable entanglement (CVE), it is instructive to draw an analogy with the original EPR scenario \cite{epr}, wherein two particles, $1$ and $2$, are defined through their position and momentum variables $x_{1,2},p_{1,2}$. EPR saw as paradoxical the fact that, depending on whether we measure $x_1$ or $p_1$ of particle $1$, one can predict the measurement result of $x_2$ or $p_2$, respectively, with arbitrary precision, unlimited by the Heisenberg relation $\Delta x_2 \Delta p_2 \geq 1/2$ (choosing $\hbar=1$). This {\em nonlocal} dependence of the measurement results of particle 2 on those of particle 1 has been dubbed {\em ``steering''} by Schr\"odinger\cite{schrodinger2}. Equivalently, the EPR state is deemed entangled in the continuous translational variables of the two particles. The entanglement is exhibited by the collective operators $\hat{x}_{\pm} = \hat{x}_1 \pm \hat{x}_2$ and $\hat{p}_{\pm} = \hat{p}_1 \pm \hat{p}_2$. In quantum optics these variables are associated with the sum and difference of field quadratures of two light modes mixed by a symmetric beam splitter \cite{braunstein_rev,reid_rev} (Fig. 1a). In order to quantify the EPR correlations, one may adopt two distinct criteria.
The first criterion imposes an upper bound on the product of the variances of EPR-correlated {\em commuting} dimensionless operators, $\hat{x}_+$ and $\hat{p}_-$ or $\hat{x}_-$ and $\hat{p}_+$ \cite{drummond_reid,opatrny}: \begin{equation} \langle \Delta \hat{x}_{\pm}^2 \rangle \langle \Delta \hat{p}_{\mp}^2 \rangle \equiv \frac{1}{4s} \leq \frac{1}{4}. \label{eq:2mode_crit} \end{equation} The EPR correlation is then measured by the two-mode squeezing factor $\infty > s > 1$. The second is the inseparability (entanglement) criterion for {\em Gaussian} states \cite{zoller_cve,polzik_nature}, related to the sum of the variances of the correlated observables $\epsilon \equiv \langle \Delta \hat{x}_{\pm}^2 \rangle + \langle \Delta \hat{p}_{\mp}^2 \rangle - 1 < 0$. Here the maximal entanglement corresponds to the most negative $\epsilon$ obtainable. In what follows we inquire: to what extent do these EPR criteria apply to the system at hand, i.e., a two-species BEC in a symmetric double-well potential? \paragraph*{Scheme for global-mode EPR correlations in bosonic JJs --} We first consider the correlation dynamics of the two species (two internal states of the atom), in the presence of tunnel coupling between the wells. We shall analyze the EPR correlations in the basis of two global internal-state modes that are {\em not} spatially separated between the two wells. Since there is no population exchange between the internal states $\ket{A}$ and $\ket{B}$, the numbers of atoms $N_A$ and $N_B$ in these states are constants of motion. The Hamiltonian (Supplement) can then be written in this basis in terms of the left-right atom-number differences in the two internal states, $\hat{n}_A = \left( \cre{a}{L} \ann{a}{L} - \cre{a}{R} \ann{a}{R} \right)/2$ and $\hat{n}_B = \left( \cre{b}{L} \ann{b}{L} - \cre{b}{R} \ann{b}{R} \right)/2$, and their canonically conjugate phase operators $\hat{\phi}_{A,B}$, obeying the commutation relations $\left[ \hat{\phi}_{\alpha} , \hat{n}_{\alpha'} \right] = i \delta_{\alpha \alpha'}$ ($\alpha,\alpha'=A,B$). For simplicity we assume from now on that $N_A=N_B \equiv N$ (generalization to $N_A \neq N_B$ is straightforward), and consider small interwell number differences such that $\langle \hat{n}_{A,B} \rangle \ll N$. The Hamiltonian\cite{leggett_jphysb} then becomes \begin{eqnarray} H &=& (E_c)_{AA} \hat{n}_A^2 + (E_c)_{BB} \hat{n}_B^2 + 2 (E_c)_{AB} \hat{n}_A \hat{n}_B \nonumber \\ &-& JN \left( \cos \hat{\phi}_A + \cos \hat{\phi}_B \right) + \frac{2J}{N} \left( \hat{n}_A^2 \cos \hat{\phi}_A + \hat{n}_B^2 \cos \hat{\phi}_B \right). \label{Hnphi2} \end{eqnarray} Here the nonlinearity coefficients (``charging'' energies) $(E_c)_{AA}$, $(E_c)_{BB}$ and $(E_c)_{AB}$ are determined respectively by the intra- and inter-species s-wave scattering lengths. The tunneling energy $J$ is the same for the atoms in the internal states $\ket{A}$ and $\ket{B}$. Equation (\ref{Hnphi2}) displays the full dynamics used in our numerics (Fig. \ref{fig:squeezing}), that of two quantum nonlinear pendula coupled via $2 (E_c)_{AB} \hat{n}_A \hat{n}_B$. This coupling is the key to EPR correlations of modes $A$ and $B$. We may, for didactic purposes, simplify (\ref{Hnphi2}) by expanding the cosine terms. In the lowest-order approximation $\cos \hat{\phi}_{A,B} \simeq 1 - \hat{\phi}_{A,B}^2/2$, the system is described by two {\em coupled} harmonic oscillators.
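Explicitly (we spell out this intermediate step for the reader's convenience; it is implicit in the derivation that follows), keeping $\cos \hat{\phi}_{A,B} \simeq 1$ in the last term of Eq. (\ref{Hnphi2}) and dropping an additive constant, the quadratic Hamiltonian reads
\begin{equation*}
H \simeq \left( (E_c)_{AA} + \frac{2J}{N} \right) \hat{n}_A^2 + \left( (E_c)_{BB} + \frac{2J}{N} \right) \hat{n}_B^2 + 2 (E_c)_{AB}\, \hat{n}_A \hat{n}_B + \frac{JN}{2} \left( \hat{\phi}_A^2 + \hat{\phi}_B^2 \right),
\end{equation*}
so that the only coupling between the two oscillators is the cross term $2 (E_c)_{AB} \hat{n}_A \hat{n}_B$, which is removed by the $45^{\circ}$ rotation to collective variables introduced next.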
This suggests that the system under study can indeed satisfy the entanglement or two-mode squeezing criteria, if the relevant collective variables in our system are mapped onto those of two field modes mixed by a symmetric beam splitter \begin{eqnarray} \hat{n}_\pm = \frac{1}{\sqrt{2}} \left( \hat{n}_A \pm \hat{n}_B \right) \leftrightarrow \hat{x}_{\pm}, \nonumber \\ \hat{\phi}_\pm = \frac{1}{\sqrt{2}} \left( \hat{\phi}_A \pm \hat{\phi}_B \right) \leftrightarrow \hat{p}_{\pm}. \label{eq:nphi_pm} \end{eqnarray} Using the collective variables defined in (\ref{eq:nphi_pm}), we can rewrite Eq. (\ref{Hnphi2}) in the harmonic approximation, assuming $(E_c)_{AA} \simeq (E_c)_{BB} = E_c$, as: \begin{eqnarray} \hat{H} &=& \left( E_c + (E_c)_{AB} + \frac{2J}{N} \right) \hat{n}_+^2 + \frac{JN}{2} \hat{\phi}_+^2 \nonumber \\ &+& \left( E_c - (E_c)_{AB} + \frac{2J}{N} \right) \hat{n}_-^2 + \frac{JN}{2} \hat{\phi}_-^2. \label{eq:Hharmonic2} \end{eqnarray} Hence, the transformed Hamiltonian describes two {\em uncoupled} harmonic modes in the collective basis. The ``+''-mode corresponds to Josephson oscillations of the total atomic population (regardless of the internal state) between the two wells, such that the inter-species ratio in each well is constant (in-phase oscillations of the $A,B$ species). The ``-''-mode corresponds to oscillations of the inter-species ratio between the two wells, such that the total population imbalance does not change (out-of-phase oscillations of the $A,B$ species). These two modes have different fundamental frequencies, $\omega_{\pm}$ (see Supplement). We may then wonder: do the EPR correlation criteria hold in the uncoupled ($\pm$ modes) basis? Indeed, they do: for $(E_c)_{AB} > 0$ Eqs. (\ref{eq:2mode_crit}),(\ref{eq:nphi_pm}),(\ref{eq:Hharmonic2}) allow the $\pm$ modes to satisfy the EPR criteria, yielding $s=(2J/N + E_c + (E_c)_{AB})/(2J/N + E_c - (E_c)_{AB})$. We then obtain $s \gg 1$ for $E_c \simeq (E_c)_{AB} \gg 2J/N$ and the ground states of both modes, approaching the {\em ideal} EPR limit $s \rightarrow \infty$ of full CV entanglement. Thus, the fact that there is coupling between the {\em original} ($A$ and $B$) modes {\em suffices} to create {\em EPR} correlations between the collective $\pm$ modes, although there is no coupling in the latter basis. Beyond the lowest-order approximation that has led to (\ref{eq:Hharmonic2}), there is parametric coupling of the collective modes that may induce nontrivial dynamics of CV wavepackets: the slow, $-$, mode can be ``frozen'' at a low-temperature state, while the fast, $+$, mode may be kept at its ground state, conforming to the Born-Oppenheimer coupling regime (see Supplement). The occupations of thermally excited $+$ mode states must be low compared to its ground state, in order to satisfy the EPR criteria (see Supplement). For {\em exact} calculation of the dynamics we must resort to the angular momentum operators that describe the full system (Supplement). The entanglement criterion is then\cite{polzik_nature} \begin{equation} \frac{1}{| \langle \hat{L}_x \rangle |^2} \left( \langle \Delta \hat{L}_{y \pm} ^2 \rangle \langle \Delta \hat{L}_{z \mp} ^2 \rangle \right) \equiv \frac{1}{4s} < \frac{1}{4}. \label{L_crit} \end{equation} This entanglement criterion differs from those used for the number-phase operators only for significant nonlinear phase diffusion, which reduces $| \langle \hat{L}_x \rangle |$ compared to $1$ and thus diminishes the ideal limit of $s$.
Since such phase diffusion occurs due to the interatomic (nonlinear) interaction, which is also responsible for the entanglement, one needs to find the optimal charging energies in (\ref{Hnphi2}) and state-preparation that would yield the largest EPR correlations (see Methods). The optimal {\em sudden} sequence for state preparation consists of (Fig. 1a): (a) filling the original trap by a BEC in internal state $| A \rangle$; (b) sudden ramping up of the inter-well potential barrier, thus creating a two-well symmetric superposition; (c) transforming state $| A \rangle$ into a symmetric superposition of $| A \rangle$ and $| B \rangle$ by a fast $\pi/2$ pulse. This sequence yields an initial {\em coherent state in the two original modes $A$,$B$}, whose EPR entanglement then builds up with time according to their coupled-pendula dynamics (Eq. (\ref{Hnphi2})). By contrast, {\em slower} ramping up of the barrier causes them to be exposed to both nonlinear phase diffusion and environment-induced dephasing (see below) much longer, thus spoiling the entanglement criterion (\ref{L_crit}) (Fig. \ref{fig:squeezing}(b),(c)). We find an optimal value for the charging energy which results in the largest amount of EPR correlations, closest to the ideal inseparability (obtainable in the absence of nonlinear phase diffusion and for the ground-state of the coupled two-mode system). We note that it is not advantageous in this scheme to create a single-mode squeezed state in each well as an initial condition. Intuitively, this is due to the fact that such squeezing does not translate into correlations between the wells, and thus does not induce reduced variances of the two-mode coordinates. In more detail, an initial coherent state provides minimal non-correlated variances in the combined variables $\langle \Delta \hat{L}_{y \pm}^2 \rangle \langle \Delta \hat{L}_{z \mp}^2 \rangle / \left| \langle \hat{L}_x \rangle \right|^2 = 1/4$. In comparison, an initial single-mode squeezed state in each well, which is squeezed along the same quadrature, would not improve on this variance product. Finally, initial single-mode squeezed states of different quadratures would result in a larger variance product, thus limiting the two-mode squeezing reachable through the dynamics described above (see Supplement). \begin{figure} \caption{(a) State preparation scheme: the condensate is split both in real-space and in the internal-state basis, to create two-coupled modes. Then entanglement dynamics take place as a function of time, and the measurement is done in the collective beam-splitter basis. (b)-(c) Dynamics of the entanglement defined by EPR criteria for both {\em sudden} and {\em slower} ramping up of the barrier (see text).} \label{fig:squeezing} \end{figure} \paragraph*{Scheme for local-mode correlations and ``steering'' in bosonic JJs --} We now present an approach based on correlations of two squeezed {\em local} (left- and right- well) modes (Fig. \ref{fig:BS_new}(a)). The system is initialized in the left well (L) of a double-well potential, in a single internal state ($A$). Then, the barrier is suddenly dropped in order to create a coherent superposition of the vibrational ground-state $| g \rangle$ and first excited state $| e \rangle$ of the new potential. Next, a $\pi/2$ pulse creates a coherent superposition of the internal states $A,B$ of the atoms. \begin{figure} \caption{(a) Schematic sequence for the creation of ``non-local'' two-mode entanglement in analogy with the BS approach. (b) Single-mode squeezing dynamics as a function of time, for $N=100$.
Decoherence effects: the variance of $n_+$ (c) and of $\phi_-$ (d) as a function of time, in the presence of proper dephasing. We subtract the variance of the Hermitian dynamics (without dephasing) in order to single-out the effect of dephasing on the dynamics of the variances. The coherence time is here estimated to be $\sim 100$ ms at $20$ nK.} \label{fig:BS_new} \end{figure} We assume that there is no exchange term, since to lowest order the cross-coupling terms cancel, and therefore the number of particles in each external state is conserved. This conservation allows us to rewrite the Hamiltonian in terms of the internal-state number difference operator in each vibrational state, $\hat{n}_{g} = (\hat{n}_g)_A - (\hat{n}_g)_B$ and $\hat{n}_{e} = (\hat{n}_e)_A - (\hat{n}_e)_B$. The Hamiltonian then becomes (see Supplement): \begin{equation} \hat{H} = \left( (E_c)_{AA} + (E_c)_{BB} - 2 (E_c)_{AB} \right) \left( \hat{n}_{g}^2 + \hat{n}_{e}^2 \right). \label{eq:1mode_sq} \end{equation} Thus, the system evolves {\em separately} in the two vibrational modes, each undergoing dynamical single-mode squeezing in the internal-state basis\cite{ueda,sorensen}. Such internal-state squeezing in each mode was demonstrated recently \cite{oberthaler_squeezing}. Following an evolution during which each vibrational mode separately experiences internal-state squeezing, we raise the barrier quickly to create two separate symmetric wells, denoted L (left) and R (right). This sudden projection creates a BS-like transformation: \begin{eqnarray} |L \rangle &=& \frac{1}{\sqrt{2}} \left( |g \rangle + |e \rangle \right ), \nonumber \\ |R \rangle &=& \frac{1}{\sqrt{2}} \left( |g \rangle - |e \rangle \right ). \end{eqnarray} Therefore, we now have two-mode squeezing, or EPR-like entanglement, between the left and right wells. The mode in each well is defined by the number and phase differences of the internal states. Local measurements may be done in the internal-state basis in each well {\em separately}, exhibiting {\em non-classical} correlations between the $|L \rangle$ and $|R \rangle$ spatially separated modes, in the spirit of ``steering''. The scheme presented above is analogous to the quantum optics approach\cite{braunstein_rev}, in which two independent single-mode squeezed states are injected into the input ports of a beam splitter (BS), thereby creating entangled modes at the output ports of the BS. However, the {\em intrinsic} nonlinearity of each BEC mode causes their unwarranted mixing even {\em before} the BS-like transformation, causing fidelity loss (see Methods). In this sequence we wait for the {\em maximal} single-mode squeezing to develop separately, before raising the barrier to project the $|g \rangle$ and $|e \rangle$ states onto the $|L \rangle$ and $|R \rangle$ basis. Therefore, we can immediately use the maximal squeezing factor $s$ calculated for each single-mode\cite{braunstein_rev}, to extract the two-mode squeezing parameter. Then the collective two-mode squeezing is given by \begin{eqnarray} \langle \Delta \hat{n}_+^2 \rangle &=& \langle \Delta \left( \hat{n}_L + \hat{n}_R \right)^2 \rangle = \frac{ \left \langle \Delta \left( n_+^{(0)} \right)^2 \right \rangle}{s}, \nonumber \\ \langle \Delta \hat{\phi}_-^2 \rangle &=& \langle \Delta \left( \hat{\phi}_L - \hat{\phi}_R \right)^2 \rangle = \frac{\left \langle \Delta \left(\phi_-^{(0)} \right)^2 \right \rangle}{s}, \end{eqnarray} namely, the two-mode squeezing parameter is equal to that of single-mode squeezing. 
This squeezing parameter now characterizes the knowledge obtained about variables in one well upon measuring their counterparts in the other well. \paragraph*{Decoherence effects --} We now turn to the effect of environment-induced decoherence on the robustness of EPR entanglement in this system. We assume proper dephasing created by independently fluctuating (stochastic) energy shifts $\epsilon_{A(B)L(R)}$ of atoms in each internal state, $A$ or $B$, and each well, $L$ or $R$, caused by the thermal atomic or electromagnetic environment. Due to the spectroscopic similarity of the two BEC species, we reduce the number of independent stochastic processes by setting $\epsilon_{AL}/\epsilon_{BL} = \epsilon_{AR}/\epsilon_{BR} = \left(1 - \xi \right)/\left(1 + \xi \right)$, and assuming a {\em ``symmetrized environment''}, i.e. $\xi \ll 1$. Due to the small value of $\xi$, the variance of $\hat{\phi}_-$ hardly changes (in either the global or local scheme), while the variance of $\hat{n}_+$ increases linearly, and is responsible for the growing loss of entanglement. Hence, we may manipulate the system as we see fit {\em within} the coherence time. \paragraph*{Discussion --} We have addressed EPR effects in an ultracold-atom analog of two {\em coupled} Josephson junctions (JJs): a {\em two-species} Bose-Einstein condensate (BEC), each species corresponding to a different sublevel of the atomic internal ground state \cite{lobo}, trapped in a tunnel-coupled double-well potential (Fig. 1a). We have shown that such bosonic coupled JJs can induce EPR entanglement of appropriate combinations of collective {\em continuous} (phase and atom-number) variables. This entanglement has been shown to be resilient to environmental noise (decoherence). It exhibits intriguing dynamics under conditions analogous to the {\em molecular} Born-Oppenheimer regime for coupled slow and fast variables \cite{Davydov}. Alternatively, it can dynamically realize beam-splitter mixing of two squeezed modes. We acknowledge the support of GIF, DIP and EC (MIDAS STREP, FET Open), and the Humboldt Foundation (G.K.). \end{document}
\begin{document} \title{\Large\bfseries Homological stability for moduli spaces of disconnected submanifolds, I } \author{\small Martin Palmer\quad $/\!\!/$\quad 29\textsuperscript{th} April 2020 } \date{} \maketitle { \makeatletter \renewcommand*{\BHFN@OldMakefntext}{} \makeatother \footnotetext{2010 \textit{Mathematics Subject Classification}: Primary 55R80, 57S05; Secondary 57N20, 58B05.} \footnotetext{\textit{Key words}: Moduli spaces of submanifolds, homological stability, embedding spaces, configuration spaces.} \footnotetext{[---Also available at \href{https://mdp.ac/papers/hsfmsods1}{mdp.ac/papers/hsfmsods1}, where any addenda or informal related notes will also be posted.---]} } \begin{abstract} A well-known property of unordered configuration spaces of points (in an open, connected manifold) is that their homology \emph{stabilises} as the number of points increases. We generalise this result to moduli spaces of submanifolds of higher dimension, where stability is with respect to the number of components having a fixed diffeomorphism type and isotopy class. As well as for unparametrised submanifolds, we prove this also for partially-parametrised submanifolds -- where a \emph{partial parametrisation} may be thought of as a superposition of parametrisations related by a fixed subgroup of the mapping class group. In a companion paper (\cite{Palmer2018-hs-msods-II}) this is further generalised to submanifolds equipped with labels in a bundle over the embedding space, from which we deduce corollaries for the stability of diffeomorphism groups of manifolds with respect to parametrised connected sum and addition of singularities. \end{abstract} \tableofcontents \section{Introduction} Let $M$ be an open connected manifold of dimension at least $2$. The configuration space of $n$ points in $M$ is defined to be $C_n(M) = (M^n \smallsetminus \ensuremath{\Delta\!\!\!\!\Delta})/\Sigma_n$, where $\ensuremath{\Delta\!\!\!\!\Delta}$ is the ``collision set'' of $M^n$ \[ \ensuremath{\Delta\!\!\!\!\Delta} = \{ (p_1,\dotsc,p_n)\in M^n \;|\; p_i = p_j \text{ for some } i\neq j \}, \] and $\Sigma_n$ denotes the symmetric group on $n$ letters. Alternatively, it can be written as $C_n(M) = \mathrm{Emb}(n,M)/\Sigma_n$, where $\mathrm{Emb}(n,M)$ denotes the space of embeddings of the zero-dimensional manifold $n$ into $M$. More generally, one can define a labelled configuration space, for a given label space $X$, by $C_n(M,X) = \mathrm{Emb}(n,M) \times_{\Sigma_n} X^n$. An important property of these spaces is that they satisfy homological stability as $n$ varies: \begin{thm}[{\cite{Segal1973Configurationspacesand, McDuff1975Configurationspacesof, Segal1979topologyofspaces, Randal-Williams2013Homologicalstabilityunordered}}]\label{tClassical} There is a natural map $C_n(M,X)\to C_{n+1}(M,X)$ which is split-injective on homology in all degrees, and if $X$ is path-connected it induces isomorphisms on homology up to degree $\frac{n}{2}$. \end{thm} A natural question to consider is what happens for higher-dimensional embedded submanifolds. Certain special cases have been considered before, up to dimension $2$ (embedded circles and surfaces) -- see \hyperref[para:related-results]{\S\anglenumber{v.}} for a brief overview. The main theorem of this paper is a general stability result for moduli spaces of disconnected submanifolds of a fixed diffeomorphism type and isotopy class, where stabilisation occurs by adding new connected components to the submanifold. 
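For orientation, we recall a standard instance of Theorem \ref{tClassical} (included only as an illustration; it is not used in the sequel). Taking $X$ to be a single point recovers the unlabelled configuration space, $C_n(M,X) = C_n(M)$. Moreover, for $M = \mathbb{R}^2$ the space $C_n(\mathbb{R}^2)$ is aspherical with fundamental group the braid group $\beta_n$ on $n$ strands, so that
\[
H_*(C_n(\mathbb{R}^2)) \;\cong\; H_*(\beta_n),
\]
and Theorem \ref{tClassical} specialises to the classical homological stability theorem for the braid groups.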
\paragraph{\anglenumber{i.} Unparametrised submanifolds.} \addcontentsline{toc}{subsection}{Unparametrised submanifolds.} Let $P$ be a closed manifold. A first guess for the natural analogue of $C_n(M)$ in the case of embedded copies of $P$ may be \[ \mathbf{C}_{nP}(M) = \mathrm{Emb}(nP,M) / (\mathrm{Diff}(P) \wr \Sigma_n). \] The wreath product $\mathrm{Diff}(P) \wr \Sigma_n$ is naturally a subgroup of $\mathrm{Diff}(nP)$, where $nP$ is shorthand for $\{1,\ldots,n\} \times P$, and they are equal if $P$ is connected. However, in general (consider for example $M=\mathbb{R}^3$ and $P=S^1$) this has a diverging number of path-components as $n\to\infty$, so it will not satisfy homological stability even in degree zero. Instead, we study the path-components of this space separately. Assume that $M$ is the interior of a manifold with boundary $\ensuremath{{\,\,\overline{\!\! M\!}\,}}$, and choose: \begin{itemizeb} \item[$\circ$] a self-embedding $e \colon \ensuremath{{\,\,\overline{\!\! M\!}\,}} \hookrightarrow \ensuremath{{\,\,\overline{\!\! M\!}\,}}$ that is isotopic to the identity, \item[$\circ$] an embedding $\iota \colon P \hookrightarrow \partial\ensuremath{{\,\,\overline{\!\! M\!}\,}}$ such that $\iota(P) \cap e(\ensuremath{{\,\,\overline{\!\! M\!}\,}}) = \varnothing$ and $e(\iota(P)) \subseteq M$. \end{itemizeb} Then we may define a map \begin{equation}\label{eStabMapIntro} \mathbf{C}_{nP}(M) \longrightarrow \mathbf{C}_{(n+1)P}(M) \end{equation} by pushing a given collection of submanifolds inwards along $e$ and adjoining a new copy of $P$ near the boundary using $e \circ \iota$, in the region ``vacated'' by $e$. \begin{center} \begin{tikzpicture} [x=1mm,y=1mm] \draw[fill,black!5] (10,-10) rectangle (30,10); \draw[dashed] (0,10) -- (30,10) -- (30,-10) -- (0,-10); \draw (0,-10) -- (0,10); \draw (10,-10) -- (10,10); \draw[fill] (-0.3,-3) rectangle (0.3,3); \draw[fill] (9.4,-3) rectangle (10,3); \draw[fill] (-20.3,-13) rectangle (-19.7,-7); \draw[->,black!50] (2,0) -- (8,0); \draw[->,black!50] (2,5) -- (8,5); \draw[->,black!50] (2,-5) -- (8,-5); \node at (5,2.5) [font=\small,black!80] {$e$}; \node at (5,-2.5) [font=\small,black!80] {$e$}; \node at (5,-7.5) [font=\small,black!80] {$e$}; \node at (5,7.5) [font=\small,black!80] {$e$}; \draw[<-,>=right hook,black!50] (-18,-10) to ($ (-18,-10)!0.5!(-2,0) $); \draw[->,black!50] ($ (-18,-10)!0.5!(-2,0) $) to (-2,0); \node at (-11,-4) [font=\small,black!80] {$\iota$}; \node at (20,0) [font=\small] {$e(\ensuremath{{\,\,\overline{\!\! M\!}\,}})$}; \node at (-23,-10) [font=\small,black!80] {$P$}; \end{tikzpicture} \end{center} There is a natural basepoint $[e \circ \iota]$ of $\mathbf{C}_P(M) = \mathrm{Emb}(P,M)/\mathrm{Diff}(P)$. Via the maps \eqref{eStabMapIntro} this determines basepoints of each of the spaces $\mathbf{C}_{nP}(M)$. We now define $C_{nP}(M)$ to be the path-component of $\mathbf{C}_{nP}(M)$ containing this basepoint. The maps \eqref{eStabMapIntro} restrict to \emph{stabilisation maps} \begin{equation}\label{eStabMapIntro2} C_{nP}(M) \longrightarrow C_{(n+1)P}(M). \end{equation} We can now state a special case of our main theorem. \begin{thm}[Special case of Theorem \ref{tmain}] If $\dim(P)\leqslant \frac12 (\dim(M)-3)$, the stabilisation maps \eqref{eStabMapIntro2} induce split-injections on homology in all degrees and isomorphisms up to degree $\frac{n}{2}$. 
\end{thm} \paragraph{\anglenumber{ii.} Superpositions of parametrised submanifolds.} \addcontentsline{toc}{subsection}{Superpositions of parametrised submanifolds.} Instead of taking the orbit space by the action of $\mathrm{Diff}(P) \wr \Sigma_n$, we also consider the orbit space \[ \mathbf{C}_{nP}(M;G) = \mathrm{Emb}(nP,M)/(G \wr \Sigma_n) \] for an open subgroup $G \leqslant \mathrm{Diff}(P)$. The diffeomorphism group $\mathrm{Diff}(P)$ is locally path-connected, so its open subgroups are in one-to-one correspondence with the subgroups of the \emph{mapping class group} $\pi_0(\mathrm{Diff}(P))$. A point in $\mathrm{Emb}(nP,M)/(G \wr \Sigma_n)$ is a collection of $n$ pairwise disjoint submanifolds of $M$ that are each diffeomorphic to $P$ and each equipped with a parametrisation up to the action of $G$, in other words, a $G$-orbit of parametrisations, which may be more poetically described as a ``\emph{superposition}'' of parametrisations. Proceeding exactly as above (see \S\ref{s:detailed-statements} for precise constructions), we restrict to a particular sequence $C_{nP}(M;G)$ of path-components and obtain stabilisation maps \begin{equation}\label{eStabMapIntro3} C_{nP}(M;G) \longrightarrow C_{(n+1)P}(M;G). \end{equation} Our main theorem is then the following. \begin{thm}[Equivalent to Theorem \ref{tmain}]\label{t:G} If $\dim(P)\leqslant \frac12 (\dim(M)-3)$, the stabilisation maps \eqref{eStabMapIntro3} induce split-injections on homology in all degrees and isomorphisms up to degree $\frac{n}{2}$. \end{thm} \paragraph{\anglenumber{iii.} Corollaries for mixed configurations and different kinds of stability.} \addcontentsline{toc}{subsection}{Corollaries for mixed configurations and different kinds of stability.} \begin{rmk}[{\textit{Mixed configurations}.}] So far we have only discussed one path-component of the space $\mathbf{C}_{nP}(M;G)$, namely the one in which the embedded copies of $P$ are isotopic (modulo $G$) to the standard embedding $\{[e \circ \iota] , [e^2 \circ \iota] , \ldots, [e^n \circ \iota] \}$, in particular the different copies of $P$ are isotopic to each other. However, once we know Theorem \ref{t:G}, we also immediately obtain homological stability for ``mixed configurations'' of submanifolds, in which there are $n$ copies of $P$ that are embedded so as to be isotopic to the standard embedding $\{[e \circ \iota] , [e^2 \circ \iota] , \ldots, [e^n \circ \iota] \}$, as well as finitely many other embedded submanifolds (possibly with different diffeomorphism types or even dimensions), such that these two parts of the configuration may be isotoped to be ``far apart'' (more precisely, so that all copies of $P$ are in a specified collar neighbourhood of $\partial\ensuremath{{\,\,\overline{\!\! M\!}\,}}$ and all other components of the configuration are disjoint from this collar neighbourhood). A rough sketch of how to deduce homological stability for such moduli spaces of mixed configurations is as follows. We consider the forgetful map that forgets the copies of $P$ and remembers only the other components of the configuration, which turns out to be a fibre bundle. The stabilisation map for the moduli spaces of mixed configurations is then a map of fibre bundles over a fixed base space, and its restriction to each fibre is a stabilisation map of the form \eqref{eStabMapIntro3}, which is homologically stable by Theorem \ref{t:G}.
Via the induced map of Serre spectral sequences this implies that the stabilisation map for the moduli spaces of mixed configurations is also homologically stable. \end{rmk} \begin{rmk}[{\textit{Twisted homological stability}.}]\label{rmk:ths} Unordered configuration spaces of points are also homologically stable with respect to so-called \emph{finite-degree} -- or \emph{polynomial} -- \emph{twisted coefficient systems}. For the symmetric groups (corresponding to configurations in $\mathbb{R}^\infty$) this is due to \cite{Betley2002Twistedhomologyof}, and the result was extended to configuration spaces (in any connected, open manifold) in \cite{Palmer2018Twistedhomologicalstability}. The method of proof in \cite{Palmer2018Twistedhomologicalstability} is to relate the statement of homological stability with polynomial twisted coefficients, via a decomposition of the coefficients, a version of Shapiro's lemma and a spectral sequence, to homological stability with constant coefficients, and then deduce the twisted version from the constant version. This method is compatible with the setting of moduli spaces of disconnected submanifolds in this paper, so a generalisation of that argument, together with Theorem \ref{t:G}, implies that the stabilisation maps \eqref{eStabMapIntro3} are also homologically stable with respect to (appropriately-defined) polynomial twisted coefficient systems. See Theorem D of \cite{Palmer2018-hs-msods-II} for a precise statement of this result, and \S 9 of \cite{Palmer2018-hs-msods-II} for a detailed explanation of how to adapt the method of proof of \cite{Palmer2018Twistedhomologicalstability} to this situation. \end{rmk} \begin{rmk}[{\textit{Representation stability}.}] Via an argument of S{\o}ren Galatius [personal communication], which uses only elementary representation theory of the symmetric groups, representation stability \cite{ChurchFarb2013Representationtheoryand} for the rational cohomology of \emph{ordered} configuration spaces (which was first proved by other means in \cite{Church2012Homologicalstabilityconfiguration}) may be deduced from twisted homological stability for the unordered configuration spaces, for a certain polynomial twisted coefficient system. The same argument applied to Theorem \ref{t:G} (and Remark \ref{rmk:ths} above) implies that the rational cohomology of the \emph{ordered} moduli spaces of disconnected submanifolds, in which the different copies of $P$ are equipped with a total ordering, is representation stable. \end{rmk} \paragraph{\anglenumber{iv.} Labelled submanifolds and corollaries for diffeomorphism groups.}\label{para:labelled-submanifolds} \addcontentsline{toc}{subsection}{Labelled submanifolds and corollaries for diffeomorphism groups.} In the companion paper (\cite{Palmer2018-hs-msods-II}) we lift Theorem \ref{t:G} to the setting of moduli spaces of \emph{labelled} submanifolds. Recall that we have chosen an open subgroup $G \leqslant \mathrm{Diff}(P)$. Now let \[ \pi \colon Z \longrightarrow \mathrm{Emb}(P,\ensuremath{{\,\,\overline{\!\! M\!}\,}}) \] be a $G$-equivariant Serre fibration with path-connected fibres. We may consider its $n$th power $\pi^n$, and let $Z_n$ be the preimage of $\mathrm{Emb}(nP,M) \subseteq \mathrm{Emb}(P,\ensuremath{{\,\,\overline{\!\! M\!}\,}})^n$ under this map. This is a right $(G \wr \Sigma_n)$-space and the map \[ \pi_n \colon Z_n \longrightarrow \mathrm{Emb}(nP,M) \] is $(G \wr \Sigma_n)$-equivariant. We define $\mathbf{C}_{nP}(M,Z;G) = Z_n / (G \wr \Sigma_n)$. 
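For example (an illustrative special case, in the spirit of the labelled configuration spaces $C_n(M,X)$ recalled in the introduction): taking $Z = \mathrm{Emb}(P,\ensuremath{{\,\,\overline{\!\! M\!}\,}}) \times X$ for a path-connected space $X$, with $\pi$ the projection onto the first factor and with $G$ acting trivially on the $X$ factor, gives a $G$-equivariant Serre fibration with fibre $X$; in this case $Z_n \cong \mathrm{Emb}(nP,M) \times X^n$ and $\mathbf{C}_{nP}(M,Z;G) \cong \mathrm{Emb}(nP,M) \times_{G \wr \Sigma_n} X^n$ is a moduli space of submanifolds in which each component carries a label in $X$.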
The map $\pi_n$ induces a map \[ \bar{\pi}_n \colon \mathbf{C}_{nP}(M,Z;G) \longrightarrow \mathbf{C}_{nP}(M;G), \] which is again a Serre fibration with path-connected fibres (see Corollary \ref{c:fibration-orbit-spaces2}). The preimage of the path-component $C_{nP}(M;G)$ of $\mathbf{C}_{nP}(M;G)$ is therefore a single path-component of $\mathbf{C}_{nP}(M,Z;G)$, which we denote by $C_{nP}(M,Z;G)$. Using some mild auxiliary data (see \S 6 of \cite{Palmer2018-hs-msods-II} for the details) we may also lift the stabilisation maps \eqref{eStabMapIntro3} to maps \begin{equation}\label{eStabMapIntro4} C_{nP}(M,Z;G) \longrightarrow C_{(n+1)P}(M,Z;G). \end{equation} In \cite{Palmer2018-hs-msods-II} we prove that Theorem \ref{t:G} lifts to this setting: \begin{thm}[{\cite[Theorem B]{Palmer2018-hs-msods-II}}] Let $G \leqslant \mathrm{Diff}(P)$ be an open subgroup and $\pi \colon Z \to \mathrm{Emb}(P,\ensuremath{{\,\,\overline{\!\! M\!}\,}})$ a $G$-equivariant Serre fibration with path-connected fibres. Assume that $\mathrm{dim}(P) \leqslant \tfrac12(\mathrm{dim}(M) - 3)$. Then the stabilisation maps \eqref{eStabMapIntro4} induce isomorphisms on homology with field coefficients up to degree $\frac{n}{2}$ and (thus) isomorphisms on homology with integral coefficients up to degree $\frac{n}{2} - 1$. \end{thm} This allows us to deduce corollaries about homological stability of different kinds of diffeomorphism groups, including: \begin{itemizeb} \item[$\circ$] \emph{Symmetric diffeomorphism groups}, with respect to the operation of parametrised connected sum along a submanifold (see \cite[Theorem A]{Palmer2018-hs-msods-II}). This extends results of \cite{Tillmann2016Homologystabilitysymmetric}, which are concerned with symmetric diffeomorphism groups and the operation of connected sum (at a point). \item[] {[}If we have two embeddings $e \colon L \hookrightarrow M$ and $f \colon L \hookrightarrow N$ and an isomorphism of normal bundles $\theta \colon \nu_e \cong \nu_f$, the \emph{parametrised connected sum} is obtained by removing tubular neighbourhoods of $e(L)$ and $f(L)$ and identifying the resulting boundaries using $\theta$.] \item[$\circ$] Diffeomorphism groups of manifolds with \emph{conical singularities} (a special type of Baas-Sullivan singularity \cite{Sullivan1967Hauptvermutungmanifolds,Baas1973bordismtheorymanifolds}), with respect to the number of singularities of a given type (see \cite[Corollary C]{Palmer2018-hs-msods-II}). \end{itemizeb} \paragraph{\anglenumber{v.} Related results.}\label{para:related-results} \addcontentsline{toc}{subsection}{Related results.} There are several other homological stability results which are closely related to Theorem \ref{t:G}. We will give a brief overview in order of increasing dimension. \paragraph{Configurations of finite sets on a surface.} In \cite{Tran2014Homologicalstabilitysubgroups} it is shown that the \emph{partitioned surface braid groups} are homologically stable. Let $S$ be an open connected surface and write $\beta_r^S$ for the \emph{surface braid group} on $S$ on $r$ strands, namely $\beta_r^S = \pi_1(C_r(S))$. Now fix positive integers $\xi$ and $n$. There is a short exact sequence \[ 1 \to \ensuremath{P\!\beta}_{n\xi}^S \longrightarrow \beta_{n\xi}^S \longrightarrow \Sigma_{n\xi} \to 1, \] where the kernel $\ensuremath{P\!\beta}_{n\xi}^S$ is the \emph{pure surface braid group} on $n\xi$ strands. 
The wreath product $\Sigma_\xi \wr \Sigma_n = (\Sigma_\xi)^n \rtimes \Sigma_n$ is naturally a subgroup of $\Sigma_{n\xi}$, and the $n$th \emph{partitioned braid group} $\beta_n^{\xi,S}$ is defined to be its preimage in $\beta_{n\xi}^S$. Geometrically, it is the subgroup of the $n\xi$th braid group on $S$ consisting of braids that preserve a given partition $n\xi = \xi + \xi + \cdots + \xi$ of the endpoints. This is a subgroup of index \[ \frac{(n\xi)!}{n!\,(\xi!)^n}, \] corresponding to a covering space of $C_{n\xi}(S)$ with this number of sheets. Alternatively, this covering space may be thought of as the \emph{moduli space of $n$ pairwise disjoint submanifolds of $S$ of diffeomorphism type $P$}, where $P = \{ 1 ,\ldots, \xi \}$. We note that this result is not included as a special case of Theorem \ref{t:G}, since $\mathrm{dim}(P) = 0 \nleqslant \tfrac12(\mathrm{dim}(S) - 3) = -\tfrac12$. \paragraph{Unlinks in $3$-manifolds.} Going up from dimension zero to dimension one, in \cite{Kupers2013Homologicalstabilityunlinked} it is shown that the stabilisation maps \[ C_{nS^1}(M) \longrightarrow C_{(n+1)S^1}(M) \] between moduli spaces are homologically stable when $M$ is a connected $3$-manifold and $S^1 \hookrightarrow \partial M$ is an embedding into a coordinate chart. These are the spaces of $n$-component unlinks in $M$, for varying $n$. Note that Theorem \ref{t:G} only applies once the dimension of $M$ is at least $5$, so the two results are disjoint. The subspace $\mathcal{E}_n \subset C_{nS^1}(D^3)$ of unknotted, unlinked \emph{Euclidean} circles was studied by Brendle and Hatcher \cite{BrendleHatcher2013Configurationspacesrings} (the adjective ``Euclidean'' means that a circle is the image of $\{ (x,y,z)\in \mathbb{R}^3 \mid z=0, x^2+y^2=1 \}$ under rotation, translation and dilation), who showed that the inclusion is a homotopy equivalence. As a corollary, the spaces $\mathcal{E}_n$ are also homologically stable. \paragraph{String motion groups.} Closely related to this is the sequence of fundamental groups of the spaces $\mathcal{E}_n \simeq C_{nS^1}(D^3)$, which are called the \emph{circle-braid groups} or the \emph{string motion groups}. They are isomorphic to the symmetric automorphism groups $\Sigma\mathrm{Aut}(F_n)$ of the free groups $F_n$ and also to certain quotients of mapping class groups of $3$-manifolds. In this latter guise they were proved in \cite[Corollary 1.2]{HatcherWahl2010Stabilizationmappingclass} to satisfy homological stability. More generally, \cite[Corollary 1.3]{HatcherWahl2010Stabilizationmappingclass} proves homological stability for $\Sigma\mathrm{Aut}(G^{*n})$, where $G^{*n}$ denotes the iterated free product $G*\dotsb *G$, for $G=\pi_1(P)$ for certain $3$-manifolds $P$ (in particular $P=S^1 \times D^2$). By \cite[Th\'{e}or\`{e}me 1.2]{CollinetDjamentGriffin2013Stabilitehomologiquepour} these symmetric automorphism groups actually satisfy homological stability for any group $G$. Moreover, their integral homology may in principle be calculated using the methods of \cite{Griffin2013Diagonalcomplexesand}.\footnote{See Theorem C of \cite{Griffin2013Diagonalcomplexesand}. The statement is for $\mathrm{Aut}(G^{*n})$ and does not permit $G$ to be $\mathbb{Z}$, but this restriction is removed by replacing $\mathrm{Aut}(G^{*n})$ with $\Sigma\mathrm{Aut}(G^{*n})$.
In particular the integral homology of $\pi_1(\mathcal{E}_n)=\Sigma\mathrm{Aut}(F_n)$ is identified with the direct sum of $H_*((\mathbb{Z}/2)\wr\Sigma_n;\mathbb{Z})$ together with $H_{*-e}((\mathbb{Z}/2)\wr\mathrm{Aut}(f);\mathbb{Z})$ where $f$ runs over certain directed, labelled, rooted trees with $n$ vertices and $e$ is the number of edges of $f$. The wreath product is formed using the obvious homomorphism $\mathrm{Aut}(f)\to\Sigma_n$ given by the action of an automorphism of $f$ on its vertices.} Note that the spaces $\mathcal{E}_n$ are not aspherical (they are finite-dimensional manifolds whose fundamental groups contain torsion), so homological stability for $\pi_1(\mathcal{E}_n)$ is independent of homological stability for the spaces $\mathcal{E}_n$ themselves. Rationally, homological stability for $\pi_1(\mathcal{E}_n)$ is also known via a different route. There is a homomorphism to the hyperoctahedral group $\pi_1(\mathcal{E}_n) \to W_n = (\mathbb{Z}/2)\wr\Sigma_n$ which just remembers the permutation of the $n$ circles and their orientations; the kernel of this is called the \emph{pure string motion group}. By \cite{Wilson2012Representationstabilitycohomology} the rational cohomology of the pure string motion groups is \emph{uniformly representation stable} (see \cite[Definition 2.6]{ChurchFarb2013Representationtheoryand}) with respect to the action of $W_n$, from which it follows that the groups $\pi_1(\mathcal{E}_n)$ are rationally homologically stable. In fact, by Theorem 7.1 of \cite{Wilson2012Representationstabilitycohomology}, the rational homology of $\pi_1(\mathcal{E}_n)$ is trivial. \paragraph{Spaces of connected subsurfaces.} One dimension higher again, \cite{CanteroRandal-Williams2017Homologicalstabilityspaces} proves a homological stability result for spaces of \emph{connected} subsurfaces of a manifold. Denote the connected, orientable surface of genus $g$ with $b$ boundary components by $\Sigma_{g,b}$ and let \[ \mathcal{E}(\Sigma_{g,b},M) = \mathrm{Emb}(\Sigma_{g,b},M)/\mathrm{Diff}^+(\Sigma_{g,b}), \] where the embeddings are prescribed on a sufficiently small collar neighbourhood of the boundary $\partial \Sigma_{g,b}$ and the diffeomorphisms are required to be the identity on a sufficiently small collar neighbourhood of $\partial M$. Given a surface embedded in $\partial M\times [0,1]$ with the correct boundary there is an induced stabilisation map $\mathcal{E}(\Sigma_{g,b},M) \to \mathcal{E}(\Sigma_{g^\prime,b^\prime},M)$. Theorem 1.3 of \cite{CanteroRandal-Williams2017Homologicalstabilityspaces} says that these maps are homologically stable, under the assumption that $M$ is simply-connected and of dimension at least $6$ (or dimension $5$ with an extra restriction on the permissible stabilisation maps). The stable ranges for the various stabilisation maps are the same as the best-known ranges for Harer stability \cite{Harer1985Stabilityofhomology, Ivanov1993homologystabilityTeichmuller}, which were obtained in \cite{Boldsen2012Improvedhomologicalstability} and \cite{Randal-Williams2016Resolutionsmodulispaces}; see also \cite{Wahl2013Homologicalstabilitymapping}. In particular Theorem 1.3 of \cite{CanteroRandal-Williams2017Homologicalstabilityspaces} recovers Harer stability when $M=\mathbb{R}^\infty$. Moreover, \cite{CanteroRandal-Williams2017Homologicalstabilityspaces} also identifies the homology in the stable range. 
There is a well-defined scanning map from each $\mathcal{E}(\Sigma_{g,b},M)$ to a certain explicit limiting space (a union of path-components of the space of compactly-supported sections of a bundle over $M$) which induces an isomorphism on homology in the stable range, assuming that $M$ is simply-connected and at least $5$-dimensional. \paragraph{\anglenumber{vi.} Stable homology.} \addcontentsline{toc}{subsection}{Stable homology.} The next natural question after Theorem \ref{t:G} is \emph{what is the limiting homology?} As mentioned in the paragraph just above, this is known for moduli spaces of connected subsurfaces. For unordered configuration spaces this question was answered by McDuff in \cite{McDuff1975Configurationspacesof}: \[ \mathrm{colim}_{n\to\infty} H_*(C_n(M,X)) \;\cong\; H_* \bigl( \Gamma_{c,\circ} \bigl( \dot{T}M \wedge_{\mathrm{fib}} X_+ \to M \bigr)\bigr) . \] The colimit is taken along the stabilisation maps, $\dot{T}M\to M$ is the fibrewise one-point compactification of the tangent bundle of $M$, $\dot{T}M \wedge_{\mathrm{fib}} X_+$ is its fibrewise smash product with $X_+$ and $\Gamma_{c,\circ}$ denotes any path-component of the space of compactly-supported sections of this bundle. For moduli spaces of disconnected submanifolds one natural first guess is that the \emph{scanning map} is a homology equivalence in the limit. This is the map which identifies the limiting homology for unordered configuration spaces and for moduli spaces of connected subsurfaces, and it is also well-defined for moduli spaces of disconnected submanifolds. For simplicity we will consider just \emph{unparametrised} submanifolds, and discuss the scanning map for $C_{nP}(M)$. For a finite-dimensional real vector space $V$ let $\mathit{Gr}_p(V)$ be the Grassmannian of $p$-planes in $V$, let $\gamma_p^\perp(V)$ be the orthogonal complement of the canonical $p$-plane bundle over $\mathit{Gr}_p(V)$ and denote its total space by $A_p(V)$; this is the affine Grassmannian of $p$-planes in $V$. Since $\mathit{Gr}_p(V)$ is compact the Thom space $\mathrm{Th}(\gamma_p^\perp(V))$ is the one-point compactification of $A_p(V)$, denoted $\dot{A}_p(V)$. This construction is functorial in $V$, so one can apply it fibrewise to real vector bundles, and in particular form the bundle $\dot{A}_p(TM) \to M$ of affine $p$-planes in the tangent bundle of $M$. The \emph{scanning map for $C_{nP}(M)$} may be constructed exactly as for spaces of connected subsurfaces (see page 1388 of \cite{CanteroRandal-Williams2017Homologicalstabilityspaces}),\footnote{Strictly speaking, one actually constructs a zig-zag of two maps where the reversed map is a weak equivalence.} and is of the form \[ C_{nP}(M) \longrightarrow \Gamma_{c,\circ} \bigl( \dot{A}_p(TM)\to M \bigr) . \] In the case $M=\mathbb{R}^m$ the codomain is $\Omega^m_\circ \dot{A}_p(\mathbb{R}^m)$, so if we specialise to $m=3$ and $P=S^1$ we have \[ \mathcal{E}_n \simeq C_{nS^1}(\mathbb{R}^3) \longrightarrow \Omega_0^3 \dot{A}_1(\mathbb{R}^3). \] In \cite[\S 2]{Kupers2013Homologicalstabilityunlinked} it is shown that the first rational homology of the right-hand side is one-dimensional, whereas (abelianising the computation of $\pi_1(\mathcal{E}_n)$ in Proposition 3.7 of \cite{BrendleHatcher2013Configurationspacesrings}) the first integral homology of the left-hand side is $(\mathbb{Z}/2)^3$ for all $n \geqslant 2$. So the first guess that the scanning map identifies the stable homology of moduli spaces of disconnected submanifolds cannot be correct. 
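For comparison, in the classical case where $P$ is a single point this failure does not occur: $\mathit{Gr}_0(\mathbb{R}^m)$ is a point and $A_0(\mathbb{R}^m) \cong \mathbb{R}^m$, so $\dot{A}_0(\mathbb{R}^m) \cong S^m$ and the scanning map takes the form $C_n(\mathbb{R}^m) \to \Omega^m_\circ S^m$, which does identify the stable homology, by McDuff's theorem quoted above applied with $X$ a point. So the phenomenon described here is genuinely a feature of positive-dimensional $P$.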
\paragraph{\anglenumber{vii.} Open questions.} \addcontentsline{toc}{subsection}{Open questions.} Following on from the previous section, one open problem is of course to identify the stable homology of moduli spaces of disconnected submanifolds $C_{nP}(M;G)$ in the cases when it is known to stabilise. This is -- to the best of the author's knowledge -- currently unknown unless $P$ is a point. Another question is whether the ``stability slope'' may be improved beyond $\tfrac12$ (i.e., whether there is a homological stability result for the sequence of spaces $C_{nP}(M;G)$ that holds in degrees $* \leqslant \lambda n$, for some constant $\lambda > \tfrac12$), perhaps depending on the choice of coefficient ring. A further open question is whether the dimension condition $p \leqslant \tfrac12(m-3)$ on the dimensions $(m,p) = (\mathrm{dim}(M),\mathrm{dim}(P))$ can be weakened. The evidence suggests that it ``should'' be possible, since the dimension pairs $(2,0)$ and $(3,1)$ are excluded by the condition $p \leqslant \tfrac12(m-3)$, but configuration spaces of points (or collections of $\xi$ points) on surfaces are certainly homologically stable, as are moduli spaces of unlinks in $3$-manifolds, by \cite{Kupers2013Homologicalstabilityunlinked}. Almost all of the cases in which homological stability is known for spaces of submanifolds are within the range in which there can be no non-trivial knotting --- the condition $p \leqslant \tfrac12(m-3)$ ensures this, as does the condition that $M$ must have dimension at least $5$ in the case of connected oriented subsurfaces. One exception is of course $C_{nS^1}(M)$ for a $3$-manifold $M$, but in this case only the path-component of the unlink has been shown to be homologically stable; it is not known what happens if one tries to stabilise by adding new non-trivially-knotted components to a configuration. \begin{rmk}[\emph{Dependence on the dimension hypothesis.}] \label{r:dimension-hypothesis} The dimension hypothesis $p \leqslant \tfrac12(m-3)$ is used several times in the proof of homological stability in \S\ref{s:proof}. It is used in the proof of Proposition \ref{p:condition-iv} (and hence is also needed for Corollary \ref{c:condition-iv}), and again for Corollary \ref{c:Xn1}, where it serves to verify the dimension hypotheses of Lemma \ref{l:path-of-embeddings} (see Remark \ref{r:dimension-hypothesis-lemma} for a discussion of how these hypotheses enter the proof of that lemma). In addition, Remark \ref{r:connectivity-of-M-breve} uses the weaker assumption that $p \leqslant m-3$. Proposition \ref{p:iotabb} uses the hypothesis that the embedding $\iota \colon P \hookrightarrow \partial M$ admits a non-vanishing section of its normal bundle, which is automatically true if we assume that $p \leqslant \tfrac12(m-2)$. Proposition \ref{p:two-conditions} and Corollary \ref{c:two-conditions} depend on Proposition \ref{p:iotabb}, so they also use the same hypothesis (either that $p \leqslant \tfrac12(m-2)$ or the weaker hypothesis that $\iota$ admits a non-vanishing section of its normal bundle). \end{rmk} \paragraph{Outline.} \addcontentsline{toc}{subsection}{Outline.} In the next section we give a more precise statement of the main result of the paper, Theorem \ref{t:G}, which is reformulated there as Theorem \ref{tmain}. This will be proved in \S\ref{s:proof}.
In \S\ref{s:axioms} we first isolate the spectral sequence part of the argument by giving an ``axiomatic'' homological stability theorem, or ``homological stability criterion'', Theorem \ref{tAxiomatic}. In \S\ref{s:fibre-bundles} we prove some preliminary results about fibre bundles and fibrations that we will need, notably using the notion of locally retractile group actions to show that certain maps between smooth mapping spaces are fibre bundles. Then in \S\ref{s:proof} we apply the homological stability criterion Theorem \ref{tAxiomatic} to prove Theorem \ref{tmain}. The appendix \S\ref{s:appendix} contains a proof of a technical lemma about homotopy fibres of augmented semi-simplicial spaces, which is used in a key step of the proof. \paragraph{Acknowledgements.} \addcontentsline{toc}{subsection}{Acknowledgements.} The author would like to thank Federico Cantero Mor{\'a}n, S{\o}ren Galatius, Geoffroy Horel, Alexander Kupers, Oscar Randal-Williams and Ulrike Tillmann for many enlightening discussions during the preparation of this article. Additionally, he would like to thank the anonymous referee for helpful and detailed suggestions for improvements to an earlier draft. \section{Precise formulation of the main result}\label{s:detailed-statements} Let $M$ be a smooth, connected manifold of dimension $m$, with non-empty boundary. Let $P$ be a smooth, closed manifold of dimension $p$ and choose an embedding \[ \iota \colon P \lhook\joinrel\longrightarrow \partial M, \] as well as a collar neighbourhood for the boundary of $M$, in other words a proper embedding \[ \lambda \colon \partial M \times [0,2] \lhook\joinrel\longrightarrow M \] such that $\lambda(z,0) = z$ for all $z \in \partial M$. We also assume that $\lambda$ has the property that it may be extended to a slightly larger proper embedding $\partial M \times [0,2+\epsilon] \hookrightarrow M$ for some $\epsilon > 0$, but we do not fix this data. Choose an open (therefore closed) subgroup $G \leqslant \mathrm{Diff}(P)$. \begin{notation} For $\epsilon \in [0,2]$ we will write $M_\epsilon = M \smallsetminus \lambda (\partial M \times [0,\epsilon])$. In particular, $M_0$ is the interior of $M$. We will use the embedding space $\mathrm{Emb}(P,M)$ very often, so we abbreviate it to $E = \mathrm{Emb}(P,M)$. For a non-negative integer $k$, we write $kP$ for the disjoint union of $k$ copies of the manifold $P$, which is $\varnothing$ if $k=0$. \end{notation} Let $n$ be a non-negative integer. Consider the embedding space $\mathrm{Emb}(nP,M_1)$, which has a left-action of the group $G \wr \Sigma_n \leqslant \mathrm{Diff}(nP)$. The quotient space $\mathrm{Emb}(nP,M_1) / (G \wr \Sigma_n)$ may be thought of as the subspace of the symmetric product $\mathrm{Sp}^n(E/G)$ consisting of configurations $\{[\varphi_1],\ldots,[\varphi_n]\}$ such that the images $\varphi_i(P)$ are contained in $M_1 \subseteq M$ and are pairwise disjoint. Analogously, we consider the quotient $\mathrm{Emb}((n+1)P,M_0)/(G \wr \Sigma_{n+1})$ as a subspace of $\mathrm{Sp}^{n+1}(E/G)$. \begin{defn}\label{d:standard} A \emph{standard configuration} in $\mathrm{Emb}(nP,M_1) / (G \wr \Sigma_n) \subseteq \mathrm{Sp}^n(E/G)$ is one of the form \[ \{ [\lambda(-,t_1)\circ\iota], \ldots, [\lambda(-,t_n)\circ\iota] \}, \] where $t_1,\ldots,t_n$ are $n$ distinct numbers in the interval $(1,2)$. Note that all standard configurations are in the same path-component. 
A \emph{standard configuration} in $\mathrm{Emb}((n+1)P,M_0)/(G \wr \Sigma_{n+1})$ is one of the form \[ \{ [\lambda(-,t_1)\circ\iota], \ldots, [\lambda(-,t_{n+1})\circ\iota] \}, \] where $t_1,\ldots,t_{n+1}$ are $n+1$ distinct numbers in the interval $(0,2)$. \end{defn} \begin{defn}\label{d:input-data} We now define a map $f \colon X \to Y$ depending on the data $(M,P,\lambda,\iota,G,n)$. First, $X$ is the path-component of $\mathrm{Emb}(nP,M_1) / (G \wr \Sigma_n)$ containing the standard configurations and $Y$ is the path-component of $\mathrm{Emb}((n+1)P,M_0)/(G \wr \Sigma_{n+1})$ containing the standard configurations. The map $f$ is then defined as follows: \[ \{ [\varphi_1] , \ldots , [\varphi_n] \} \;\longmapsto\; \{ [\lambda(-,\tfrac12)\circ\iota] , [\varphi_1] , \ldots , [\varphi_n] \}. \] \end{defn} \begin{rmk} This is an explicit version of the more intuitively-defined stabilisation maps \eqref{eStabMapIntro3} in the introduction. \end{rmk} \begin{lem}\label{l:injectivity} The induced map $f_* \colon H_*(X) \to H_*(Y)$ is split-injective. \end{lem} This is not difficult to prove: see \S\ref{s:split-injectivity}. The main result of this paper is that these maps are also \emph{surjective} on homology in a range of degrees, if we assume that the dimension $p$ of $P$ is sufficiently small compared to the dimension $m$ of $M$. \begin{athm}\label{tmain} The induced map $f_* \colon H_*(X) \to H_*(Y)$ is an isomorphism if $* \leqslant \frac{n}{2}$ and $p \leqslant \frac12(m-3)$. \end{athm} This is equivalent to Theorem \ref{t:G}, and will be proved in \S\ref{s:proof}. Before that, in \S\ref{s:axioms} we establish some sufficient axiomatic criteria for homological stability. \section{Homological stability criteria}\label{s:axioms} In order to separate the geometric part of the proof from the more technical manipulation of spectral sequences, we give an axiomatic criterion for homological stability in this section, and apply it to moduli spaces of disconnected submanifolds in \S\ref{s:proof}. The idea of this method of proving homological stability -- finding ``resolutions'' of the maps one wishes to prove stability for -- is due to \cite{Randal-Williams2016Resolutionsmodulispaces} and has been used many times subsequently, for example in \cite{CanteroRandal-Williams2017Homologicalstabilityspaces, GalatiusRandal-Williams2018Homologicalstabilitymoduli, KupersMiller2016Homologicalstabilitytopological, Perlmutter2016Linkingformsstabilization, Randal-Williams2013Homologicalstabilityunordered}. \subsection{Setup}\label{ss:axioms-setup} Suppose that for each integer $n\geqslant 0$ we have a collection $\ensuremath{\mathscr{X}}(n)$ of maps between path-connected spaces. We want to find conditions implying that each $f\in\ensuremath{\mathscr{X}}(n)$ is $\frac{n}{2}$-homology-connected, meaning that it is an isomorphism on homology up to degree $\frac{n}{2}-1$ and a surjection up to degree $\frac{n}{2}$, equivalently that its mapping cone $Cf$ has trivial reduced homology up to degree $\frac{n}{2}$. As a convention we take $\ensuremath{\mathscr{X}}(n) = \{ \text{all continuous maps} \}$ for $n<0$.\footnote{This is compatible with our aim that each $f\in\ensuremath{\mathscr{X}}(n)$ is $\frac{n}{2}$-homology-connected: for $n=-1$ or $-2$ this says that the mapping cone $Cf$ has trivial reduced homology in degree $-1$, in other words $Cf\neq\varnothing$, but mapping cones are always non-empty (the mapping cone of $\varnothing\to X$ is $X_+$). 
For $n\leqslant -3$ the condition is vacuous.} We will show that this holds if each $f\in\ensuremath{\mathscr{X}}(n)$ admits a \emph{resolution}, each level of which can be \emph{approximated} by a map in $\ensuremath{\mathscr{X}}(k)$ for $k<n$, plus a certain factorisation condition that these data must satisfy. We begin by making this last sentence precise. \begin{notation}[Homology-connectivity.] For a map $f\colon X\to Y$, the number $\ensuremath{h\mathrm{conn}}(f)$ is the largest integer $n$ such that $f_* \colon H_*(X) \to H_*(Y)$ is an isomorphism for $*\leqslant n-1$ and surjective for $*=n$. Equivalently it is the largest integer $n$ such that the reduced homology of the mapping cone of $f$ is trivial up to degree $n$. \end{notation} \begin{defn}[Resolution of a space.]\label{dResolutionSpace} Recall that an augmented semi-simplicial space $Y_\bullet$ is a diagram of the form \begin{center} \begin{tikzpicture} [x=0.8mm,y=1mm] \node at (0,0) {$\cdots$}; \node at (20,0) {$Y_1$}; \node at (40,0) {$Y_0$}; \node at (56,0) [anchor=west] {$Y_{-1}=Y$}; \draw[->] (5,2)--(15,2); \draw[->] (5,0)--(15,0); \draw[->] (5,-2)--(15,-2); \draw[->] (25,1)--(35,1); \draw[->] (25,-1)--(35,-1); \draw[->] (45,0)--(55,0); \end{tikzpicture} \end{center} with face maps $d_i\colon Y_k\to Y_{k-1}$ for $1\leqslant i\leqslant k+1$ which satisfy the simplicial identities $d_i d_j = d_{j-1}d_i$ when $i<j$. Its geometric realisation $\geomr{Y_\bullet}$ is the quotient of $\bigsqcup_{k\geqslant 0} Y_k \times \Delta^k$ by the face relations $(d_i(y),z) \sim (y,\delta_i(z))$, where $\delta_i$ is the inclusion of the $i$th face of the standard simplex $\Delta^{k+1}$. This depends only on the unaugmented part $Y_{\geqslant 0}$ of $Y_\bullet$, and the augmentation map $Y_0\to Y$ induces a well-defined map $\geomr{Y_\bullet}\to Y$. The augmented semi-simplicial space $Y_\bullet$ is a \emph{$c$-resolution} of $Y$ if $\ensuremath{h\mathrm{conn}}(\geomr{Y_\bullet}\to Y) \geqslant \lfloor c\rfloor$. \end{defn} \begin{defn}[Resolution of a map.]\label{dResolutionMap} If we have a map of augmented semi-simplicial spaces \begin{equation}\label{eAxResolution} \centering \begin{split} \begin{tikzpicture} [x=1mm,y=1mm] \node (la) at (0,0) {$X$}; \node (ra) at (20,0) {$Y$}; \node (lb) at (0,10) {$X_0$}; \node (rb) at (20,10) {$Y_0$}; \node (lc) at (0,20) {$X_1$}; \node (rc) at (20,20) {$Y_1$}; \node (ld) at (0,32) {$\vdots$}; \node (rd) at (20,32) {$\vdots$}; \draw[->] (la) to node[above,font=\small]{$f$} (ra); \draw[->] (lb) to node[above,font=\small]{$f_0$} (rb); \draw[->] (lc) to node[above,font=\small]{$f_1$} (rc); \draw[->] (lb) to (la); \draw[->] (rb) to (ra); \draw[->] ($ (lc.south)+(-1,0) $) -- ($ (lb.north)+(-1,0) $); \draw[->] ($ (lc.south)+(1,0) $) -- ($ (lb.north)+(1,0) $); \draw[->] ($ (rc.south)+(-1,0) $) -- ($ (rb.north)+(-1,0) $); \draw[->] ($ (rc.south)+(1,0) $) -- ($ (rb.north)+(1,0) $); \draw[->] ($ (ld.south)+(-2,0) $) -- ($ (lc.north)+(-2,0) $); \draw[->] ($ (ld.south)+(0,0) $) -- ($ (lc.north)+(0,0) $); \draw[->] ($ (ld.south)+(2,0) $) -- ($ (lc.north)+(2,0) $); \draw[->] ($ (rd.south)+(-2,0) $) -- ($ (rc.north)+(-2,0) $); \draw[->] ($ (rd.south)+(0,0) $) -- ($ (rc.north)+(0,0) $); \draw[->] ($ (rd.south)+(2,0) $) -- ($ (rc.north)+(2,0) $); \end{tikzpicture} \end{split} \end{equation} where $X_\bullet$ is a $(c-1)$-resolution and $Y_\bullet$ is a $c$-resolution, then we say that the semi-simplicial map $f_\bullet \colon X_\bullet \to Y_\bullet$ is a \emph{$c$-resolution} of $f\colon X\to Y$. 
\end{defn} \begin{defn}[Approximation of a resolution.]\label{dApproximationMap} We say that a given $c$-resolution $f_\bullet \colon X_\bullet \to Y_\bullet$ is \emph{approximated} by maps $\{ f_{i\alpha}^\prime \colon X_{i\alpha}^\prime \to Y_{i\alpha}^\prime \}$ if the following holds (the index $\alpha$ runs over an indexing set which depends on $i$). For each $i\geqslant 0$ there is a commutative square \begin{equation}\label{eAxApproximation} \centering \begin{split} \begin{tikzpicture} [x=1mm,y=1mm] \node (tl) at (0,10) {$X_i$}; \node (tr) at (20,10) {$Y_i$}; \node (bl) at (0,0) {$A_i$}; \node (br) at (20,0) {$B_i$}; \draw[->] (tl) to node[above,font=\small]{$f_i$} (tr); \draw[->] (tl) to node[left,font=\small]{$p_i$} (bl); \draw[->] (tr) to node[right,font=\small]{$q_i$} (br); \draw[->] (bl) to node[below,font=\small]{$\phi_i$} (br); \end{tikzpicture} \end{split} \end{equation} in which $p_i$ and $q_i$ are Serre fibrations and $\phi_i$ is a weak equivalence. Additionally, for at least one point $a_{i\alpha}$ in each path-component $A_{i\alpha}$ of $A_i$, the restriction of $f_i$ to $p_i^{-1}(a_{i\alpha}) \to q_i^{-1}(\phi_i(a_{i\alpha}))$ is equal to $f_{i\alpha}^\prime \colon X_{i\alpha}^\prime \to Y_{i\alpha}^\prime$. \end{defn} \begin{defn}[Double resolution and approximation] A \emph{double resolution and approximation} of a map $f \colon X \to Y$ is a $c_1$-resolution $f_\bullet \colon X_\bullet \to Y_\bullet$ approximated by maps $\{ f_{i\alpha}^\prime \colon X_{i\alpha}^\prime \to Y_{i\alpha}^\prime \}$, a commutative square \begin{equation}\label{eAxApproximation2} \centering \begin{split} \begin{tikzpicture} [x=1mm,y=1mm] \node (tl) at (0,12) {$\ensuremath{{\,\overline{\! X}}}_\alpha$}; \node (tr) at (50,12) {$\ensuremath{{\overline{Y}}}_\alpha$}; \node (bl) at (0,0) {$p_0^{-1}(a_{0\alpha})$}; \node (br) at (50,0) {$q_0^{-1}(\phi_0(a_{0\alpha}))$}; \node at (-13,0) {$X_{0\alpha}^\prime =$}; \node at (65,0) {$= Y_{0\alpha}^\prime$}; \draw[->] (tl) to node[above,font=\small]{$g_\alpha$} (tr); \draw[->] (tl) to (bl); \draw[->] (tr) to (br); \draw[->] (bl) to node[below,font=\small]{$f_{0\alpha}^\prime$} (br); \end{tikzpicture} \end{split} \end{equation} for each point $a_{0\alpha} \in A_0$ as above, in which the vertical maps are weak equivalences, followed by (for each $\alpha$) a $c_2$-resolution $g_{\alpha\bullet} \colon \ensuremath{{\,\overline{\! X}}}_{\alpha\bullet} \to \ensuremath{{\overline{Y}}}_{\alpha\bullet}$ approximated by maps $\{ \ensuremath{{\,\overline{\! X}}}p_{\alpha i \beta} \to \ensuremath{{\overline{Y}}}p_{\alpha i \beta} \}$. This can of course be continued in the obvious way to define \emph{iterated resolutions and approximations}, but we will only need double resolutions and approximations later. \end{defn} \begin{defn}[Three factorisation conditions.]\label{dFactorisationConditions} Consider a commutative square of continuous maps \begin{equation}\label{e:commutative-square} \centering \begin{split} \begin{tikzpicture} [x=1mm,y=1mm] \node (tl) at (0,10) {$A$}; \node (tr) at (20,10) {$B$}; \node (bl) at (0,0) {$C$}; \node (br) at (20,0) {$D$}; \draw[->] (tl) to (tr); \draw[->] (bl) to node[below,font=\small]{$f$} (br); \draw[->] (tl) to (bl); \draw[->] (tr) to node[right,font=\small]{$h$} (br); \end{tikzpicture} \end{split} \end{equation} This satisfies the \emph{weak factorisation condition} if $h$ factors up to homotopy through $f$. 
It satisfies the \emph{strong factorisation condition} if it has a triangle decomposition \begin{equation}\label{e:triangle-decomposition} \centering \begin{split} \begin{tikzpicture} [x=1mm,y=1mm] \node (tl) at (0,10) {$A$}; \node (tr) at (20,10) {$B$}; \node (bl) at (0,0) {$C$}; \node (br) at (20,0) {$D$}; \draw[->] (tl) to (tr); \draw[->] (bl) to node[below,font=\small]{$f$} (br); \draw[->] (tl) to (bl); \draw[->] (tr) to node[right,font=\small]{$h$} (br); \draw[->] (tr) to (bl); \node at (6,7) [font=\small] {$H$}; \node at (14,4) [font=\small] {$J$}; \end{tikzpicture} \end{split} \end{equation} (meaning that there exists a diagonal map $B \to C$ and homotopies $H$ and $J$), and the composite homotopy $H \! J \colon S^1 \times A \to D$ is homotopic to the identity homotopy. It satisfies the \emph{moderate factorisation condition} if there is a triangle decomposition \eqref{e:triangle-decomposition} and a map $\ell \colon Z \to A$ such that the composite map $H \! J \circ (\mathrm{id} \times \ell) \colon S^1 \times Z \to D$ factors up to homotopy through $f$, in other words there is a dotted map making the square \begin{equation}\label{e:triangle-decomposition-2} \centering \begin{split} \begin{tikzpicture} [x=1mm,y=1mm] \node (tl) at (0,10) {$S^1 \times Z$}; \node (tr) at (30,10) {$S^1 \times A$}; \node (bl) at (0,0) {$C$}; \node (br) at (30,0) {$D$}; \draw[->] (tl) to node[above,font=\small]{$\mathrm{id} \times \ell$} (tr); \draw[->] (bl) to node[below,font=\small]{$f$} (br); \draw[->,dashed] (tl) to (bl); \draw[->] (tr) to node[right,font=\small]{$H \! J$} (br); \end{tikzpicture} \end{split} \end{equation} commute up to homotopy. \end{defn} \begin{rmk}\label{r:strong-moderate} The moderate factorisation condition implies the weak factorisation condition: if there is a triangle decomposition \eqref{e:triangle-decomposition}, then $h$ factors (via $J$) up to homotopy through $f$. Also, the strong factorisation condition implies the moderate factorisation condition: if $H \! J \colon S^1 \times A \to D$ is homotopic to the identity homotopy, then we may take $Z=A$, $\ell=\mathrm{id}$ and the dotted map $S^1 \times A \to C$ to be the projection onto the second factor followed by the map $A \to C$ from \eqref{e:commutative-square}. \end{rmk} \subsection{Sufficient criteria for stability} \begin{thm}\label{tAxiomatic} For each integer $n\geqslant 0$, let $\ensuremath{\mathscr{X}}(n)$ be a collection of maps between path-connected spaces. Then any one of the following six conditions implies that $\ensuremath{h\mathrm{conn}}(f) \geqslant \lfloor \frac{n}{2} \rfloor$ for each $f \in \ensuremath{\mathscr{X}}(n)$ and each $n\geqslant 0$, in other words, the sequence $\ensuremath{\mathscr{X}}(n)$ is homologically stable. For any $m\leqslant n$ define $\ensuremath{\mathscr{X}}_m^n = \{ f \mid f \text{ is weakly equivalent to a map in } \ensuremath{\mathscr{X}}(k) \text{ for } m \leqslant k \leqslant n \}$. The six conditions are as follows. For each $n\geqslant 2$, each $f \colon X \to Y$ of $\ensuremath{\mathscr{X}}(n)$ has \begin{itemizeb} \item[\textup{(1)}] an $\frac{n}{2}$-resolution $f_\bullet \colon X_\bullet \to Y_\bullet$ approximated by maps $f_{i\alpha}^\prime$ with $f_{0\alpha}^\prime \in \ensuremath{\mathscr{X}}_{n-2}^{n-1}$ and $f_{i\alpha}^\prime \in \ensuremath{\mathscr{X}}_{n-2i}^{n-1}$ for $i\geqslant 1$. 
In addition, for each $\alpha$, the square \begin{equation}\label{eSquare} \centering \begin{split} \begin{tikzpicture} [x=1mm,y=1mm] \node (tl) at (0,10) {$X_{0\alpha}^\prime$}; \node (tr) at (20,10) {$Y_{0\alpha}^\prime$}; \node (bl) at (0,0) {$X$}; \node (br) at (20,0) {$Y$}; \draw[->] (tl) to node[above,font=\small]{$f_{0\alpha}^\prime$} (tr); \draw[->] (bl) to node[below,font=\small]{$f$} (br); \draw[->] (tl) to (bl); \draw[->] (tr) to (br); \end{tikzpicture} \end{split} \end{equation} \begin{itemizeb} \item[\textup{(1s)}] satisfies the strong factorisation condition. \item[\textup{(1m)}] satisfies the moderate factorisation condition with $\ell \in \ensuremath{\mathscr{X}}_{n-2}^{n-1}$. \item[\textup{(1w)}] satisfies the weak factorisation condition, and we assume that every map in $\bigcup_{n\geqslant 0}\ensuremath{\mathscr{X}}(n)$ induces injections on homology in all degrees. \end{itemizeb} \item[\textup{(2)}] an $\frac{n}{2}$-resolution $f_\bullet \colon X_\bullet \to Y_\bullet$ approximated by maps $f_{i\alpha}^\prime$ with $f_{0\alpha}^\prime \in \ensuremath{\mathscr{X}}_{n-2}^{n-1}$ and $f_{i\alpha}^\prime \in \ensuremath{\mathscr{X}}_{n-2i}^{n-1}$ for $i\geqslant 1$. Also, for each $\alpha$, the map $g_\alpha \colon \ensuremath{{\,\overline{\! X}}}_\alpha \to \ensuremath{{\overline{Y}}}_\alpha$ has an $\frac{n}{2}$-resolution $g_{\alpha\bullet} \colon \ensuremath{{\,\overline{\! X}}}_{\alpha\bullet} \to \ensuremath{{\overline{Y}}}_{\alpha\bullet}$ approximated by maps $g_{\alpha i \beta}^\prime$ with $g_{\alpha 0 \beta}^\prime \in \ensuremath{\mathscr{X}}_{n-2}^{n-1}$ and $g_{\alpha i \beta}^\prime \in \ensuremath{\mathscr{X}}_{n-2i}^{n-1}$ for $i\geqslant 1$. In addition, for each $\alpha$ and $\beta$, the square \begin{equation}\label{eSquare2} \centering \begin{split} \begin{tikzpicture} [x=1mm,y=1mm] \node (tl) at (0,10) {$\ensuremath{{\,\overline{\! X}}}p_{\alpha 0 \beta}$}; \node (tr) at (25,10) {$\ensuremath{{\overline{Y}}}p_{\alpha 0 \beta}$}; \node (bl) at (0,0) {$X$}; \node (br) at (25,0) {$Y$}; \draw[->] (tl) to node[above,font=\small]{$g_{\alpha 0 \beta}^\prime$} (tr); \draw[->] (bl) to node[below,font=\small]{$f$} (br); \draw[->] (tl) to (bl); \draw[->] (tr) to (br); \end{tikzpicture} \end{split} \end{equation} \begin{itemizeb} \item[\textup{(2s)}] satisfies the strong factorisation condition. \item[\textup{(2m)}] satisfies the moderate factorisation condition with $\ell \in \ensuremath{\mathscr{X}}_{n-2}^{n-1}$. \item[\textup{(2w)}] satisfies the weak factorisation condition, and we assume that every map in $\bigcup_{n\geqslant 0}\ensuremath{\mathscr{X}}(n)$ induces injections on homology in all degrees. \end{itemizeb} \end{itemizeb} \end{thm} \begin{rmk} Our convention from the beginning of this section implies that the condition $f\in\ensuremath{\mathscr{X}}(n)$ is vacuous when $n<0$, so there is no condition on $f_{i\alpha}^\prime$ or $g_{\alpha i \beta}^\prime$ for $i>\frac{n}{2}$. \end{rmk} \begin{rmk} There is an obvious generalisation of this theorem, in which one takes arbitrarily many resolutions and approximations of a given $f \in \ensuremath{\mathscr{X}}(n)$, before verifying the strong/moderate/weak factorisation condition. The number of iterated resolutions and approximations is permitted to depend on $f$. The proof of this generalisation is an immediate extension of the proof of Theorem \ref{tAxiomatic} given below. 
\end{rmk} \begin{rmk} Theorem \ref{tAxiomatic} also admits improvements in terms of the range of homological degrees to which it applies, namely its conclusion that $\ensuremath{h\mathrm{conn}}(f) \geqslant \lfloor \tfrac{n}{2} \rfloor$ for each $f \in \ensuremath{\mathscr{X}}(n)$, which represents a \emph{stability slope} of $\tfrac12$. For example, the homological stability results of \cite{Randal-Williams2016Resolutionsmodulispaces} and \cite{CanteroRandal-Williams2017Homologicalstabilityspaces} have a stability slope of $\tfrac23$ rather than $\tfrac12$. In fact, Theorem \ref{tAxiomatic} does not apply directly to the situations of \cite{Randal-Williams2016Resolutionsmodulispaces, CanteroRandal-Williams2017Homologicalstabilityspaces}, since in those cases one has a collection of maps graded by a semigroup which is not $\mathbb{N}$. Restricting attention to a subcollection of maps graded by a subsemigroup isomorphic to $\mathbb{N}$ allows one in principle to apply Theorem \ref{tAxiomatic}, but its conclusion would not then be optimal. However, Theorem \ref{tAxiomatic} admits generalisations for collections of maps $\ensuremath{\mathscr{X}}(-)$ graded by other semigroups, which would recover this improved stability slope. Another setting to which Theorem \ref{tAxiomatic} does not apply directly is that of \cite{KupersMiller2016Homologicalstabilitytopological}, since in that case the stability slope is strictly \emph{smaller} than $\tfrac12$ in general. However, a slight modification of Theorem \ref{tAxiomatic} would apply to this setting too: modifying the condition that $f_{i\alpha}^{\prime} \in \ensuremath{\mathscr{X}}_{n-2i}^{n-1}$ to a condition of the form $f_{i\alpha}^{\prime} \in \ensuremath{\mathscr{X}}_{n-v(i)}^{n-1}$ for some affine function $v$, the argument may be modified slightly to conclude homological stability with a stability slope depending on the function $v$. \end{rmk} \begin{proof}[Proof of Theorem \ref{tAxiomatic}] The proof is by induction on $n$. First note that when $n=0,1$ the claim is just that each $f\in\ensuremath{\mathscr{X}}(n)$ induces a surjection on path-components, which is true since we have taken all spaces to be path-connected. So let $n\geqslant 2$ and assume by induction that the conclusion holds for smaller values of $n$. Let $f\in\ensuremath{\mathscr{X}}(n)$. \textbf{\slshape Spectral sequences.} We have a resolution and approximation of $f$, so we may consider the square \eqref{eAxApproximation} for each $i\geqslant 0$. First replace it by an objectwise weakly-equivalent one in which all spaces have path-components that are open. (This is possible because, for any space $Z$ with path-components $Z_\alpha$, the natural map $\bigsqcup_\alpha Z_\alpha \to Z$ is a weak equivalence.) 
It now splits as the topological disjoint union of the squares \begin{equation}\label{eAxApproximationComponent} \centering \begin{split} \begin{tikzpicture} [x=1mm,y=1mm] \node (tl) at (0,10) {$X_{i\alpha}$}; \node (tr) at (20,10) {$Y_{i\alpha}$}; \node (bl) at (0,0) {$A_{i\alpha}$}; \node (br) at (20,0) {$B_{i\alpha}$}; \draw[->] (tl) to node[above,font=\small]{$f_{i\alpha}$} (tr); \draw[->] (tl) to node[left,font=\small]{$p_{i\alpha}$} (bl); \draw[->] (tr) to node[right,font=\small]{$q_{i\alpha}$} (br); \draw[->] (bl) to node[below,font=\small]{$\phi_{i\alpha}$} (br); \end{tikzpicture} \end{split} \end{equation} where $A_{i\alpha}$ runs through the path-components of $A_i$ as $\alpha$ varies, $B_{i\alpha}$ is the path-component of $B_i$ that contains $\phi_i(A_{i\alpha})$ (hence $B_{i\alpha}$ also runs through the path-components of $B_i$ as $\alpha$ varies, since $\phi_i$ is assumed to be a weak equivalence, in particular a $\pi_0$-bijection), $X_{i\alpha} = p_i^{-1}(A_{i\alpha})$, $Y_{i\alpha} = q_i^{-1}(B_{i\alpha})$ and the maps $-_{i\alpha}$ are the corresponding restrictions of the maps $-_i$. For each $\alpha$, this square can be replaced by an objectwise homotopy-equivalent one in which $A_{i\alpha} = B_{i\alpha}$ and $\phi_{i\alpha}$ is the identity, and $p_{i\alpha}$, $q_{i\alpha}$ are still Serre fibrations. There is then, for each $i\geqslant 0$ and $\alpha$, a relative Serre spectral sequence (\textit{cf}.\ Remark 2 on page 351 of \cite{Switzer1975Algebraictopologyhomotopy} and Exercise 5.6 on page 178 of \cite{McCleary2001usersguideto}) converging to the reduced homology of $Cf_{i\alpha}$, the mapping cone of $f_{i\alpha}$, and whose $E^2$ page can be identified as \begin{equation}\label{eRSSS} E^2_{s,t} \cong H_s(A_{i\alpha};\widetilde{H}_t(Cf_{i\alpha}^\prime)). \end{equation} This is first quadrant and its $r$th differential has bidegree $(-r,r-1)$. Moreover, the edge homomorphism \begin{equation}\label{eRSSSedgehom} \widetilde{H}_t(Cf_{i\alpha}^\prime) \cong E^2_{0,t} \twoheadrightarrow E^{\infty}_{0,t} \hookrightarrow \widetilde{H}_t(Cf_{i\alpha}) \end{equation} is the map on reduced homology induced by the two inclusions $X_{i\alpha}^\prime = p_i^{-1}(a_{i\alpha})\hookrightarrow p_i^{-1}(A_{i\alpha}) = X_{i\alpha}$ and $Y_{i\alpha}^\prime = q_i^{-1}(\phi_i(a_{i\alpha}))\hookrightarrow q_i^{-1}(B_{i\alpha}) = Y_{i\alpha}$. Secondly, the map of augmented semi-simplicial spaces $f_\bullet \colon X_\bullet \to Y_\bullet$ induces a spectral sequence converging to the shifted reduced homology $\widetilde{H}_{*+1}$ of the total homotopy cofibre (twice iterated mapping cone) of \begin{equation}\label{eTotalHomotopyCofibre} \centering \begin{split} \begin{tikzpicture} [x=1mm,y=1mm] \node (tl) at (0,10) {$\lVert X_\bullet \rVert$}; \node (tr) at (20,10) {$\lVert Y_\bullet \rVert$}; \node (bl) at (0,0) {$X$}; \node (br) at (20,0) {$Y$}; \draw[->] (tl) to node[above,font=\small]{$\lVert f_\bullet \rVert$} (tr); \draw[->] (bl) to node[below,font=\small]{$f$} (br); \draw[->] (tl) to (bl); \draw[->] (tr) to (br); \end{tikzpicture} \end{split} \end{equation} and whose $E^1$ page can be identified as \begin{equation}\label{eSSSS} E^1_{s,t} \cong \widetilde{H}_t(Cf_s). \end{equation} This is slightly larger than first quadrant -- it lives in $\{t\geqslant 0, s\geqslant -1\}$ -- and its $r$th differential has bidegree $(-r,r-1)$.
The first differential $E^1_{0,t}\to E^1_{-1,t}$ can be identified with the map $\widetilde{H}_t(Cf_0) \to \widetilde{H}_t(Cf)$ on homology induced by the augmentation maps. See \cite[\S 2.3]{Randal-Williams2016Resolutionsmodulispaces} (a construction is also given in \cite[Appendix B]{Palmer2013Homologicalstabilityoriented}). \textbf{\slshape Strategy.} Since each $X_{0\alpha} \subseteq X_0$ is a union of path-components (and similarly for $Y_{0\alpha} \subseteq Y_0$), the mapping cone $Cf_0$ decomposes as the wedge $\bigvee_\alpha Cf_{0\alpha}$. For each $\alpha$ we have a map $Cf_{0\alpha}^\prime \to Cf_{0\alpha}$. Assuming one of the variants of condition (1), the strategy is to prove that the composite map $\bigvee_\alpha Cf_{0\alpha}^\prime \to \bigvee_\alpha Cf_{0\alpha} = Cf_0 \to Cf$ is both surjective on homology up to degree $\lfloor \frac{n}{2} \rfloor$ and the zero map on reduced homology in this range. If instead we assume one of the variants of condition (2), then we also have a homology equivalence $Cg_\alpha \to Cf_{0\alpha}^\prime$ for each $\alpha$, as well as a decomposition $Cg_{\alpha 0} = \bigvee_\beta Cg_{\alpha 0 \beta}$ and maps $Cg_{\alpha 0 \beta}^\prime \to Cg_{\alpha 0 \beta}$. The strategy in this case is to prove that the composite map \[ \textstyle \bigvee_{\alpha\beta} Cg_{\alpha 0 \beta}^\prime \to \bigvee_{\alpha\beta} Cg_{\alpha 0 \beta} = \bigvee_\alpha Cg_{\alpha 0} \to \bigvee_\alpha Cg_\alpha \to \bigvee_\alpha Cf_{0\alpha}^\prime \to \bigvee_\alpha Cf_{0\alpha} = Cf_0 \to Cf \] is both surjective and zero on reduced homology up to degree $\lfloor \frac{n}{2} \rfloor$. \textbf{\slshape Surjectivity on homology.} We first prove that each map $Cf_{0\alpha}^\prime \to Cf_{0\alpha}$ is surjective on homology in the required range. By condition (1) and the inductive hypothesis we have that $\ensuremath{h\mathrm{conn}}(f_{0\alpha}^\prime)\geqslant \lfloor\frac{n}{2}\rfloor -1$, in other words $\widetilde{H}_t(Cf_{0\alpha}^\prime)=0$ for $t\leqslant \lfloor\frac{n}{2}\rfloor -1$. Hence the $E^2$ page of the relative Serre spectral sequence \eqref{eRSSS} (for $i=0$) vanishes for $t\leqslant \lfloor\frac{n}{2}\rfloor -1$ and any $s\geqslant 0$. So in the slightly larger range $t\leqslant\lfloor\frac{n}{2}\rfloor$ the second map of \eqref{eRSSSedgehom} is the identity (there are no extension problems for this total degree) and therefore $\widetilde{H}_t(Cf_{0\alpha}^\prime) \to \widetilde{H}_t(Cf_{0\alpha})$ is surjective. Secondly, we show that $Cf_0 \to Cf$ is also surjective on homology in this range. By condition~(1) and the inductive hypothesis we have that $\ensuremath{h\mathrm{conn}}(f_{i\alpha}^\prime) \geqslant \lfloor\frac{n}{2}\rfloor -i$ for $i\geqslant 1$, in other words $\widetilde{H}_t(Cf_{i\alpha}^\prime)=0$ for $t\leqslant\lfloor\frac{n}{2}\rfloor -i$ and $i\geqslant 1$. Hence the $E^2$ page of the relative Serre spectral sequence \eqref{eRSSS} (for $i\geqslant 1$) vanishes for $t\leqslant\lfloor\frac{n}{2}\rfloor -i$ and any $s\geqslant 0$. Therefore in the limit we have \[ \widetilde{H}_*(Cf_{i\alpha})=0 \qquad\text{for } *\leqslant\lfloor\tfrac{n}{2}\rfloor -i\quad (\text{when } i\geqslant 1) \qquad\text{and for } *\leqslant\lfloor\tfrac{n}{2}\rfloor -1\quad (\text{when } i=0). \] The $E^1$ page of the spectral sequence \eqref{eSSSS} is the direct sum over $\alpha$ of $\widetilde{H}_t(Cf_{s\alpha})$, and so it has a trapezium of zeros as shown in Figure \ref{fSSSS}. 
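Explicitly, combining the two vanishing statements above, the $E^1$ page satisfies $E^1_{s,t} = 0$ whenever $s \geqslant 0$ and $t \leqslant \lfloor\tfrac{n}{2}\rfloor - \max(s,1)$; only the column $s = -1$, where $E^1_{-1,t} \cong \widetilde{H}_t(Cf)$, is not yet controlled.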
Note that since $f_\bullet \colon X_\bullet \to Y_\bullet$ is an $\frac{n}{2}$-resolution we have that $\widetilde{H}_{*+1}$ of the total homotopy cofibre of \eqref{eTotalHomotopyCofibre} is trivial for $*+1\leqslant\lfloor\frac{n}{2}\rfloor$, so the spectral sequence \eqref{eSSSS} converges to zero in total degree $*\leqslant\lfloor\frac{n}{2}\rfloor -1$. In particular for $t\leqslant\lfloor\frac{n}{2}\rfloor$ we have that $E^{\infty}_{-1,t}=0$. Moreover, the second, third and later differentials that hit $E^{r}_{-1,t}$ all have trivial domain and so cannot kill it. Hence $E^1_{-1,t}$ must already be killed by the first differential -- in other words, the first differential $\widetilde{H}_t(Cf_0) \cong E^1_{0,t} \to E^1_{-1,t} \cong \widetilde{H}_t(Cf)$ must be surjective. If instead we assume condition (2), then all of the arguments above remain valid, and in addition we repeat these arguments, using the spectral sequences associated to the resolutions $g_{\alpha\bullet} \colon \ensuremath{{\,\overline{\! X}}}_{\alpha\bullet} \to \ensuremath{{\overline{Y}}}_{\alpha\bullet}$ and the relative Serre spectral sequences associated to the corresponding approximations to prove that the maps $Cg_{\alpha 0 \beta}^\prime \to Cg_{\alpha 0 \beta}$ and $Cg_{\alpha 0} \to Cg_\alpha$ are all surjective on homology up to degree $\lfloor \frac{n}{2} \rfloor$. \begin{figure} \caption{Zeros in the $E^1$ page of the spectral sequence \eqref{eSSSS}.} \label{fSSSS} \end{figure} \textbf{\slshape Zero on homology.} If we assume one of the variants of condition (1), we now want to show that, for each $\alpha$, the composite map $Cf_{0\alpha}^\prime \to Cf_{0\alpha} \hookrightarrow Cf_0 \to Cf$ is zero on homology up to degree $\lfloor\frac{n}{2}\rfloor$. This is the map on mapping cones induced by the square \eqref{eSquare}. If we assume one of the variants of condition (2), we instead want to show that, for each $\alpha$ and $\beta$, the composite map \[ Cg_{\alpha 0 \beta}^\prime \to Cg_{\alpha 0 \beta} \hookrightarrow Cg_{\alpha 0} \to Cg_\alpha \to Cf_{0\alpha}^\prime \to Cf_{0\alpha} \hookrightarrow Cf_0 \to Cf \] is zero on homology up to degree $\lfloor\frac{n}{2}\rfloor$. This is the map on mapping cones induced by the square \eqref{eSquare2}. So we just have to show that either \eqref{eSquare} or \eqref{eSquare2} induces the zero map on the homology of the mapping cones (of the horizontal maps) in a range of degrees. In fact, this is what we will do under the moderate and strong factorisation conditions, but under the weak factorisation condition we will do something slightly different. We start with this case. \textbf{\slshape The weak factorisation condition.} We first assume condition (1w), the weak factorisation condition and injectivity on homology of all maps in $\bigcup_{n\geqslant 0} \ensuremath{\mathscr{X}}(n)$.
The square \eqref{eSquare} induces a map of long exact sequences on homology: \begin{center} \begin{tikzpicture} [x=1.2mm,y=1.5mm] \node (t2) at (25,10) {$\cdots$}; \node (t3) at (40,10) {$\widetilde{H}_t(Y_{0\alpha}^\prime)$}; \node (t4) at (60,10) {$\widetilde{H}_t(Cf_{0\alpha}^\prime)$}; \node (t5) at (80,10) {$\widetilde{H}_{t-1}(X_{0\alpha}^\prime)$}; \node (t6) at (105,10) {$\widetilde{H}_{t-1}(Y_{0\alpha}^\prime)$}; \node (t7) at (120,10) {$\cdots$}; \node (b1) at (5,0) {$\cdots$}; \node (b2) at (20,0) {$\widetilde{H}_t(X)$}; \node (b3) at (40,0) {$\widetilde{H}_t(Y)$}; \node (b4) at (60,0) {$\widetilde{H}_t(Cf)$}; \node (b5) at (75,0) {$\cdots$}; \draw[->] (t2) to (t3); \draw[->] (t3) to node[above,font=\small]{$(**)$} (t4); \draw[->] (t4) to (t5); \draw[->] (t5) to node[above,font=\small]{$(f_{0\alpha}^\prime)_*$} (t6); \draw[->] (t6) to (t7); \draw[->] (b1) to (b2); \draw[->] (b2) to node[below,font=\small]{$f_*$} (b3); \draw[->] (b3) to (b4); \draw[->] (b4) to (b5); \draw[->] (t3) to (b2); \draw[->] (t3) to (b3); \draw[->] (t4) to node[right,font=\small]{$(*)$} (b4); \end{tikzpicture} \end{center} The triangle on the left-hand side comes from the factorisation assumed in the weak factorisation condition. It implies that the composition $(*)\circ (**)$ is zero. Since we have assumed that maps in $\bigcup_{n\geqslant 0} \ensuremath{\mathscr{X}}(n)$ all induce injective maps on homology in all degrees, the map $(f_{0\alpha}^\prime)_*$ in the diagram is injective for any $t$, and so by exactness the map $(**)$ is surjective for any $t$.\footnote{It would not be enough to invoke the inductive hypothesis here, since it only tells us that the map $(f_{0\alpha}^\prime)_*$ in the diagram is injective for $t\leqslant\lfloor\frac{n}{2}\rfloor -1$, and we need the range $t\leqslant\lfloor\frac{n}{2}\rfloor$.} In the range $t\leqslant\lfloor\frac{n}{2}\rfloor$, and after taking the direct sum of the top line over $\alpha$, the map $(*)$ is surjective by what we proved above. So we have a map $(*) \circ (**)$ with target $\widetilde{H}_t(Cf)$ which is both surjective and zero in the required range, which finishes the inductive step of the proof.\footnote{Equivalently, one may organise the logic by deducing from the vanishing of $(*) \circ (**)$ and the surjectivity of $(**)$ that $(*)$ also vanishes, and hence, since $(*)$ is also surjective, its target must be zero.} If we assume condition (2w), then we apply exactly the same argument to the map of long exact sequences induced by the square \eqref{eSquare2} instead. \textbf{\slshape The moderate and strong factorisation conditions.} We first assume either (1m) or (1s) and show that the map $Cf_{0\alpha}^\prime \to Cf$ is zero on reduced homology up to degree $\lfloor\frac{n}{2}\rfloor$. 
The triangle decomposition \eqref{e:triangle-decomposition} of \eqref{eSquare} induces a decomposition of this map on homology as follows: \begin{center} \begin{tikzpicture} [x=1.2mm,y=1.5mm] \node (t2) at (25,10) {$\cdots$}; \node (t3) at (40,10) {$\widetilde{H}_t(Cf_{0\alpha}^\prime)$}; \node (t4) at (60,10) {$\widetilde{H}_{t-1}(X_{0\alpha}^\prime)$}; \node (t5) at (75,10) {$\cdots$}; \node (b1) at (5,0) {$\cdots$}; \node (b2) at (20,0) {$\widetilde{H}_t(Y)$}; \node (b3) at (40,0) {$\widetilde{H}_t(Cf)$}; \node (b4) at (55,0) {$\cdots$}; \draw[->] (t2) to (t3); \draw[->] (t3) to (t4); \draw[->] (t4) to (t5); \draw[->] (b1) to (b2); \draw[->] (b2) to (b3); \draw[->] (b3) to (b4); \draw[->] (t3) to (b3); \draw[->] (t4) to (b2); \end{tikzpicture} \end{center} where the diagonal map is \[ \widetilde{H}_{t-1}(X_{0\alpha}^\prime) \lhook\joinrel\xrightarrow{\;\text{K{\"u}nneth}\;} \widetilde{H}_t(S^1 \times X_{0\alpha}^\prime) \xrightarrow{\;(H \! J)_*\;} \widetilde{H}_t(Y). \] This is proved in \cite[Appendix A]{Palmer2013Homologicalstabilityoriented}, and it also follows from \cite[Lemma 7.5]{CanteroRandal-Williams2017Homologicalstabilityspaces}, which in fact gives a space-level decomposition. Putting this together with \eqref{e:triangle-decomposition-2} we have a commutative diagram \begin{center} \begin{tikzpicture} [x=1.2mm,y=1.5mm] \node (tl) at (0,20) {$\widetilde{H}_{t-1}(Z)$}; \node (tm) at (30,20) {$\widetilde{H}_{t-1}(X_{0\alpha}^\prime)$}; \node (tr) at (60,20) {$\widetilde{H}_t(Cf_{0\alpha}^\prime)$}; \node (ml) at (0,10) {$\widetilde{H}_t(S^1 \times Z)$}; \node (mm) at (30,10) {$\widetilde{H}_t(S^1 \times X_{0\alpha}^\prime)$}; \node (bl) at (0,0) {$\widetilde{H}_t(X)$}; \node (bm) at (30,0) {$\widetilde{H}_t(Y)$}; \node (br) at (60,0) {$\widetilde{H}_t(Cf)$}; \draw[->>] (tl) to node[above,font=\small]{$\ell_*$} (tm); \draw[->] (tr) to (tm); \draw[->] (ml) to (mm); \draw[->] (bl) to node[above,font=\small]{$f_*$} (bm); \draw[->] (bm) to (br); \draw[->] (ml) to (bl); \draw[->] (mm) to node[right,font=\small]{$(H \! J)_*$} (bm); \draw[->] (tr) to node[right,font=\small]{$(\dagger)$} (br); \incl{(tl)}{(ml)} \incl{(tm)}{(mm)} \draw[->] (bl.south)--(0,-4)-- node[below,font=\small]{$0$} (60,-4)--(br.south); \end{tikzpicture} \end{center} with either $\ell \in \ensuremath{\mathscr{X}}_{n-2}^{n-1}$ (if we assume (1m)) or $\ell = \mathrm{id}$ (if we assume (1s), see Remark \ref{r:strong-moderate}). In either case we know that the induced map $\ell_*$ in the diagram is surjective for $t\leqslant\lfloor\frac{n}{2}\rfloor$ (by the inductive hypothesis if we assume (1m)). The bottom two horizontal maps are consecutive maps in a long exact sequence, so their composition is zero. A diagram chase therefore shows that the map $(\dagger)$ is zero for $t\leqslant\lfloor\frac{n}{2}\rfloor$, as required. If we assume (2m) or (2s), then we use exactly the same argument, applied instead to the triangle decomposition \eqref{e:triangle-decomposition} of \eqref{eSquare2}. \end{proof} \section{Fibre bundle and fibration lemmas}\label{s:fibre-bundles} \subsection{Smooth mapping spaces.}\label{ss:smooth-mapping-spaces} In this section we prove some technical lemmas that certain maps between smooth mapping spaces are fibre bundles. First we discuss briefly the topology that we use for our smooth mapping spaces, and collect some basic facts that will be useful later. For smooth\footnote{By ``smooth'' we always mean $C^\infty$-smooth. 
Also, manifolds in this section have empty boundary unless explicitly stated otherwise.} manifolds $L$ and $N$, we will always use the \emph{strong $C^\infty$ topology} on the space $C^\infty(L,N)$ of smooth maps $L \to N$, unless explicitly stated otherwise. This topology is also sometimes known as the \emph{Whitney $C^\infty$ topology}. Whenever $L$ is compact, this coincides with the \emph{weak $C^\infty$ topology}, also known as the \emph{compact-open $C^\infty$ topology}, but when $L$ is non-compact it is strictly finer than the weak $C^\infty$ topology.\footnote{On the other hand, even when $L$ is non-compact, the strong and weak $C^\infty$ topologies do coincide on the subset $C_{\mathrm{pr}}^\infty(L,N)$ of all proper maps $L \to N$. In particular this means that they coincide on the diffeomorphism group $\mathrm{Diff}(N)$ for any manifold $N$. In practice, we will almost exclusively work with smooth mapping spaces with compact domain and diffeomorphism groups, so the distinction between strong and weak $C^\infty$ topologies does not arise.} In fact, the weak $C^\infty$ topology is metrisable and therefore paracompact, whereas the strong $C^\infty$ topology is not even first-countable when $L$ is non-compact and $\mathrm{dim}(N)>0$. On the other hand, the strong $C^\infty$ topology will be useful for us since various properties of smooth maps, notably the property of being an embedding, are open conditions in this topology, and moreover the strong $C^\infty$ topology makes the mapping space into a Baire space, so the Thom transversality theorem applies. We refer to \cite[\S 2]{Mather1969StabilityofC}, \cite[\S II.3]{GolubitskyGuillemin1973Stablemappingsand} and \cite[\S 2.1]{Hirsch1976Differentialtopology} for the definitions and further discussion of these topologies. In addition to the facts mentioned above, we record three facts more formally, for future reference. \begin{fact}\label{fact:locally-contractible} For compact $L$, the space $C^\infty(L,N)$ is locally contractible. Its open subspace $\mathrm{Emb}(L,N)$ of smooth embeddings is therefore also locally contractible, and in particular $\mathrm{Diff}(L) = \mathrm{Emb}(L,L)$ is locally contractible. For non-compact manifolds $N$, the diffeomorphism group $\mathrm{Diff}(N)$ is not even locally path-connected, but its subgroup $\mathrm{Diff}_c(N)$ of compactly-supported diffeomorphisms is locally contractible. \end{fact} \begin{proof} For the first statement, see for example \cite[Corollary of Proposition $4'$, page 281]{Cerf1961Topologiedecertains}. For the non-local-path-connectedness of $\mathrm{Diff}(N)$ when $N$ is non-compact (but $\sigma$-compact), see \cite{GuranZarichnyui1984Whitneytopologyand} or Theorem 4 of \cite{BanakhMineSakaiYagasaki2011Homeomorphismanddiffeomorphism}, which states that for such $N$ the full diffeomorphism group $\mathrm{Diff}(N)$ is locally homeomorphic to the infinite box power $\square^\omega l_2$, which is not locally path-connected. Theorem 4 of \cite{BanakhMineSakaiYagasaki2011Homeomorphismanddiffeomorphism} also states that the subgroup $\mathrm{Diff}_c(N)$ is locally homeomorphic to $l_2 \times \mathbb{R}^\infty \cong \boxdot^\omega l_2$, which is locally contractible. 
\end{proof} \begin{fact}\label{fact:composition-continuous} The function \[ C^\infty(L,M) \times C^\infty(M,N) \longrightarrow C^\infty(L,N) \] given by composition of smooth maps is in general discontinuous --- in fact it always fails to be continuous if $L$ is non-compact --- but it becomes continuous when it is restricted to the subset $C_{\mathrm{pr}}^\infty(L,M) \times C^\infty(M,N)$ of the domain, where $C_{\mathrm{pr}}^\infty$ denotes the subspace of proper smooth maps. \end{fact} This is originally due to \cite[Proposition 1 and Remark 2 on page 259]{Mather1969StabilityofC}. See also Proposition II.3.9 and the remark following it in \cite{GolubitskyGuillemin1973Stablemappingsand}. As a consequence, the right action of $\mathrm{Diff}(L)$ on the embedding space $\mathrm{Emb}(L,N)$ is always continuous, whereas we have to assume that $L$ is compact in order for the left action of $\mathrm{Diff}(N)$ on this space to be continuous. \begin{fact}\label{fact:topological-embedding} If $M$ is a submanifold of $N$, then the continuous injection \[ C_{\mathrm{pr}}^\infty(L,M) \longrightarrow C^\infty(L,N) \] given by post-composition with the inclusion is a topological embedding, i.e.\ a homeomorphism onto its image. Thus, if $e \colon L \hookrightarrow N$ is a smooth embedding, the map \[ e \circ - \colon \mathrm{Diff}(L) \longrightarrow \mathrm{Emb}(L,N) \] is a topological embedding. \end{fact} The first statement follows from Remark 1 on page 259 of \cite{Mather1969StabilityofC}. To deduce the second statement, set $M = e(L)$ and restrict the domain and codomain to embedding spaces, to obtain a topological embedding $\mathrm{Emb}_{\mathrm{pr}}(L,e(L)) \hookrightarrow \mathrm{Emb}(L,N)$. Now note that a proper embedding between manifolds of the same dimension is a diffeomorphism, so $\mathrm{Diff}(L,e(L)) = \mathrm{Emb}_{\mathrm{pr}}(L,e(L))$, and pre-compose with the homeomorphism $e \circ - \colon \mathrm{Diff}(L) \to \mathrm{Diff}(L,e(L))$. \begin{assumption}\label{notation-convention} For the rest of this section, unless otherwise specified, $G$ and $H$ denote topological groups, $L$ and $N$ denote manifolds without boundary, with $L$ assumed to be compact, such that $\mathrm{dim}(L) < \mathrm{dim}(N)$. \end{assumption} \subsection{A fibre bundle criterion.} Let $G$ be a topological group with a left-action on a space $X$, i.e.\ a group homomorphism $G\to \mathrm{Homeo}(X)$ such that the adjoint $a\colon G\times X\to X$ is continuous.\footnote{Note that we do not need to specify a topology on the group of self-homeomorphisms of $X$, since the continuity condition is on the action map $G \times X \to X$ and not on the group homomorphism $G \to \mathrm{Homeo}(X)$.} \begin{defn}\label{d:G-locally-retractile} We say that $X$ is \emph{$G$-locally retractile} with respect to this action (equivalently, that the action $G\curvearrowright X$ \emph{admits local sections}) if every $x\in X$ has an open neighbourhood $V_x \subseteq X$ and a continuous map $\gamma_x \colon V_x \to G$ sending $x$ to the identity such that $\gamma_x(y)\cdot x = y$ for all $y\in V_x$, in other words the composition \[ V_x \xrightarrow{\gamma_x} G \xrightarrow{-\cdot x} X \] is the inclusion. Equivalently, each orbit map $G \times \{ x \} \hookrightarrow G\times X \xrightarrow{a} X$ admits a (pointed) section on some open neighbourhood of $x\in X$. 
\end{defn} The notion of $G$-locally retractile spaces comes from \cite[\S 0.4.4]{Cerf1961Topologiedecertains}, where the definition is given in a more general categorical setting; the definition above corresponds to taking $\mathcal{C}$ to be the category of topological spaces and $T'$ to be the identity functor in \S 0.4.4 of \cite{Cerf1961Topologiedecertains}.\footnote{A minor difference is that the definition of \S 0.4.4 of \cite{Cerf1961Topologiedecertains} requires that $\gamma_x(y) \cdot y = x$ for all $y \in V_x$, instead of $\gamma_x(y) \cdot x = y$. But these conditions are interchangeable, by composing $\gamma_x$ with the inverse map $(-)^{-1} \colon G \to G$.} In the terminology of \cite{Palais1960Localtrivialityof}, one says that $X$ \emph{admits local cross-sections} (with respect to the action of $G$). The notion of $G$-locally retractile spaces has also recently been used in \cite{CanteroRandal-Williams2017Homologicalstabilityspaces}. Immediately from the definitions, we note: \begin{lem}\label{l:retractile-basic} Let $H \leqslant G \leqslant K$ be topological groups and let $X$ be a locally retractile $G$-space. \begin{itemizeb} \item[\textup{(i)}] If $H$ is open in $G$, then the action of $H$ on $X$ is locally retractile. \item[\textup{(ii)}] If the $G$-action on $X$ extends to a $K$-action, then this action is also locally retractile. \item[\textup{(iii)}] If $A \subseteq X$ is an open, $G$-invariant subspace, then the action of $G$ on $A$ is locally retractile. \end{itemizeb} In particular, if $G$ is locally path-connected, then the action of $G_0$ on $X$ is locally retractile, where $G_0$ is the path-component of $G$ containing the identity. \end{lem} The following lemma is very useful for checking that a given map is a fibre bundle. \begin{prop}[{\cite[Theorem A]{Palais1960Localtrivialityof}}]\label{p:G-locally-retractile} Let $f \colon X \to Y$ be a $G$-equivariant continuous map and assume that the space $Y$ is $G$-locally retractile. Then $f$ is a fibre bundle. \end{prop} \begin{proof} Let $y\in Y$ and take $\gamma_y\colon V_y \to G$ as in Definition \ref{d:G-locally-retractile}. Then the map \[ (x,v) \longmapsto \gamma_y(v)\cdot x \;\colon\; f^{-1}(y)\times V_y \longrightarrow f^{-1}(V_y) \] is a local trivialisation of $f$ over $V_y$, with inverse given by \[ x \longmapsto (\gamma_y(f(x))^{-1}\cdot x,f(x)) \;\colon\; f^{-1}(V_y) \longrightarrow f^{-1}(y) \times V_y.\qedhere \] \end{proof} In the special case where $f$ is of the form $X \to X/H$ for a right action of another group $H$ on $X$, we have a stronger conclusion, as remarked in \cite{Palais1960Localtrivialityof} on page 307. \begin{prop}\label{p:G-locally-retractile-principal} Let $X$ be a space with a left $G$-action and a right $H$-action, which commute. Assume that the $H$-action is free, and moreover that for each $x \in X$ the map $x \cdot - \colon H \to X$ is a topological embedding. Assume that the induced left $G$-action on the quotient space $X/H$ is locally retractile. Then the quotient map $q \colon X \to X/H$ is a principal $H$-bundle. \end{prop} \begin{proof} For a point $y = xH \in X/H$, the local trivialisation constructed in the proof above is \[ (x \cdot h,v) \longmapsto \gamma_y(v) \cdot x \cdot h \;\colon\; xH \times V_y \longrightarrow q^{-1}(V_y) \] and it is clearly $H$-equivariant, where we let $H$ act on $xH$ and on $q^{-1}(V_y)$ by the restriction of its action on $X$, and act trivially on $V_y$. 
By hypothesis, the map $h \mapsto x \cdot h \colon H \to xH \subseteq X$ is a homeomorphism, so we may compose the trivialisation above with the homeomorphism $(h,v) \mapsto (x \cdot h,v) \colon H \times V_y \to xH \times V_y$ to obtain an $H$-equivariant local trivialisation \[ (h,v) \longmapsto \gamma_y(v) \cdot x \cdot h \;\colon\; H \times V_y \longrightarrow q^{-1}(V_y). \] This is now a local trivialisation of $q$ over $V_y$ as a principal $H$-bundle. \end{proof} For example, we note that for any subgroup $H$ of a topological group $G$, we may take $X=G$ with $H$ acting by right-multiplication. In this case the map $H \to G$ given by $h \mapsto gh$ for fixed $g \in G$ is always a topological embedding (its inverse is given by $k \mapsto g^{-1}k$), so the proposition above says in this case that if $G/H$ is $G$-locally retractile, then $G \to G/H$ is a principal $H$-bundle. Subgroups $H$ having the property that $G \to G/H$ is a principal $H$-bundle are sometimes called \emph{admissible}, so one may rephrase this as saying that $H$ is admissible if $G/H$ is $G$-locally retractile. \subsection{Manifolds without boundary.} A very useful setting where this criterion applies is for embedding spaces between smooth manifolds. Let $\mathrm{Diff}_c(N)$ denote the topological group of compactly-supported self-diffeomorphisms of $N$, i.e.\ diffeomorphisms $\theta \colon N \to N$ such that $\{ x \in N \mid \theta(x) \neq x \}$ is relatively compact in $N$. By Fact \ref{fact:composition-continuous}, the action of $\mathrm{Diff}_c(N)$ on $\mathrm{Emb}(L,N)$ is continuous. \begin{prop}[{\cite[Theorem B]{Palais1960Localtrivialityof}\footnote{In fact, Theorem B of \cite{Palais1960Localtrivialityof} says that the action of the path-component of the identity $\mathrm{Diff}_c(N)_0$ on $\mathrm{Emb}(L,N)$ is locally retractile, but this is equivalent to the statement of Proposition \ref{p:G-locally-retractile-embeddings} by Lemma \ref{l:retractile-basic}.}}]\label{p:G-locally-retractile-embeddings} The space $\mathrm{Emb}(L,N)$ is $\mathrm{Diff}_c(N)$-locally retractile. \end{prop} There is also a 14-line proof in \cite{Lima1963localtrivialityof}. It is also proved in \cite{Cerf1961Topologiedecertains}, in a much more general setting for manifolds with corners, which we will discuss in \S\ref{ss:manifolds-with-boundary} below. We will also give a self-contained proof in \S\ref{ss:quotients-of-embedding-spaces} below, in the course of proving Proposition \ref{p:G-locally-retractile-orbitspace}, which is a variant of this result for quotients of embedding spaces. (Our proof of Proposition \ref{p:G-locally-retractile-orbitspace} will use the ideas of \cite{Lima1963localtrivialityof}.) We note that a direct corollary of Proposition \ref{p:G-locally-retractile-embeddings} is the isotopy extension theorem. \begin{thm} Suppose we are given a smooth isotopy of embeddings from $L$ into $N$, i.e.\ a path $e \colon [0,1] \to \mathrm{Emb}(L,N)$. Then there is a path $\phi \colon [0,1] \to \mathrm{Diff}_c(N)$ of compactly-supported diffeomorphisms such that $\phi(0)$ is the identity and $e(t) = \phi(t) \circ e(0)$ for all $t \in [0,1]$. \end{thm} \begin{proof} The map $- \circ e(0) \colon \mathrm{Diff}_c(N) \to \mathrm{Emb}(L,N)$ is $\mathrm{Diff}_c(N)$-equivariant, and by Proposition~\ref{p:G-locally-retractile-embeddings} the embedding space $\mathrm{Emb}(L,N)$ is $\mathrm{Diff}_c(N)$-locally retractile, so Proposition \ref{p:G-locally-retractile} implies that the map is a fibre bundle and hence a Serre fibration. Since $\mathrm{id} \circ e(0) = e(0)$, the path-lifting property of this Serre fibration, applied to the path $e$ with initial lift $\mathrm{id} \in \mathrm{Diff}_c(N)$, now yields a path $\phi \colon [0,1] \to \mathrm{Diff}_c(N)$ with $\phi(0) = \mathrm{id}$ and $e(t) = \phi(t) \circ e(0)$ for all $t \in [0,1]$, as required.
\end{proof} \begin{rmk} The fact that the map $- \circ e(0) \colon \mathrm{Diff}_c(N) \to \mathrm{Emb}(L,N)$ is a Serre fibration also implies parametrised versions of the isotopy extension theorem, for smooth isotopies of embeddings parametrised by CW complexes. In fact, since $L$ is compact, the space $\mathrm{Emb}(L,N)$ is paracompact, as noted at the beginning of \S\ref{ss:smooth-mapping-spaces}, so this implies that $- \circ e(0)$ is in fact a Hurewicz fibration. Thus we also obtain versions of the isotopy extension theorem parametrised by an arbitrary space. \end{rmk} In the proof of Theorem \ref{tmain} in \S\ref{s:proof} below we will not use Proposition \ref{p:G-locally-retractile-embeddings} itself, but rather two different extensions of it, one for manifolds with non-empty boundary (which is contained in the work of Cerf) and one for quotients of embedding spaces by open subgroups of $\mathrm{Diff}(L)$ (which we prove directly). These are discussed in the next two subsections. \subsection{Manifolds with boundary.}\label{ss:manifolds-with-boundary} For the proof of Theorem \ref{tmain} in \S\ref{s:proof} we will need a version of Proposition \ref{p:G-locally-retractile-embeddings} for manifolds with boundary. In fact, this result extends to a very general setting for manifolds with corners, as shown in the work of Cerf~\cite{Cerf1961Topologiedecertains}. We first state the special case that we will actually use, and then discuss how to deduce it from \cite{Cerf1961Topologiedecertains}. Let $L$ and $N$ be manifolds with boundary decomposed as \[ \partial L = \partial_1 L \sqcup \partial_2 L \qquad \partial N = \partial_1 N \sqcup \partial_2 N \] and assume that $L$ is compact. Let $\mathrm{Emb}_{12}(L,N)$ be the space, equipped with the (weak $=$ strong) $C^\infty$ topology, of all smooth embeddings $L \hookrightarrow N$ taking $\partial_1 L$ to $\partial_1 N$, $\partial_2 L$ to $\partial_2 N$ and $\mathrm{int}(L)$ to $\mathrm{int}(N)$. Any diffeomorphism of $N$ that is isotopic to the identity must preserve the decomposition of $\partial N$ into $\partial_1 N$ and $\partial_2 N$, so there is a well-defined action of $\mathrm{Diff}_c(N)_0$ on $\mathrm{Emb}_{12}(L,N)$. We will see in the discussion of Cerf's results below that this action is continuous. \begin{prop}\label{p:G-locally-retractile-boundary} With respect to this action, $\mathrm{Emb}_{12}(L,N)$ is $\mathrm{Diff}_c(N)_0$-locally retractile. \end{prop} Now let $L$ and $N$ be two smooth manifolds with corners, where $L$ is assumed to be compact, and define $\mathrm{Emb}(L,N)$ to be the space of smooth embeddings of $L$ into $N$ with fixed ``incidence relations'' (see the next paragraph or \cite[\nopp II.1.1.1]{Cerf1961Topologiedecertains}). Everything will be given the strong $C^\infty$ topology. Choose an embedding $f\in\mathrm{Emb}(L,N)$ and an open neighbourhood $U$ of $f(L)$ in $N$. Let $\mathrm{Diff}_U(N)$ be the group of diffeomorphisms of $N$ which restrict to the identity on $N\smallsetminus U$ and let \[ \mathrm{Iso}_U(N) \;\subseteq\; C^\infty(N\times [0,1],N) \] be the subspace of maps $f$ such that $f(-,t)\in\mathrm{Diff}_U(N)$ for all $t$ and $f(-,0)=\mathrm{id}$. It turns out that this is a topological group under pointwise composition and acts continuously on $\mathrm{Emb}(L,N)$ by taking the diffeomorphism at the endpoint of the path and composing with the embedding. 
Before stating Cerf's analogue of Proposition \ref{p:G-locally-retractile-embeddings}, we justify and explain the claim at the end of the last paragraph. First, the space $\mathrm{Iso}_U(N)$ is a topological group with respect to pointwise composition by \cite[\nopp II.1.5.2]{Cerf1961Topologiedecertains}. Second, we need to know that composing an embedding $L\hookrightarrow N$ with a diffeomorphism in $\mathrm{Diff}_U(N)$ does not change its incidence relations. To explain this we briefly recall some notions from \cite[\nopp II.1.1.1]{Cerf1961Topologiedecertains}. Each point in $N$ has an \emph{index}, which is the unique integer $k$ such that $N$ is locally homeomorphic to $\mathbb{R}^k \times [0,\infty)^{n-k}$ at that point. Writing $N^k$ for the subspace of points of index at most $k$, a \emph{face} of $N$ of index $k$ is the closure in $N$ of a path-component of $N^k \smallsetminus N^{k-1}$. An \emph{incidence relation} is then a choice of a face of $N$ for each point in $L$. A diffeomorphism of $N$ induces a permutation of the set of faces of $N$, and it preserves the incidence relations of any embedding into $N$ if and only if this permutation is the identity. It is easy to see that the induced permutation is a locally-constant invariant of diffeomorphisms of $N$, and so any diffeomorphism in the path-component of the identity sends every face to itself. Now, an element of the group $\mathrm{Iso}_U(N)$ is a path in $\mathrm{Diff}(N)$ starting at the identity, so the diffeomorphism at its endpoint sends every face of $N$ to itself, and therefore preserves the incidence relations of any embedding into $N$. Thus the group action map $\mathrm{Iso}_U(N) \times \mathrm{Emb}(L,N) \to \mathrm{Emb}(L,N)$ is well-defined. Finally, we need to know that this map is continuous. It is given by $(a,b)\mapsto a\circ i\circ b$, where $i\colon N\hookrightarrow N\times [0,1]$ takes $x$ to $(x,1)$. This is continuous by \cite[Proposition 5, page 281]{Cerf1961Topologiedecertains} since $i$ is proper and $L$ is compact. \begin{thm}[{\cite[Th{\'e}or{\`e}me 5, page 293]{Cerf1961Topologiedecertains}}]\label{t:Cerf} The action of $\mathrm{Iso}_U(N)$ on $\mathrm{Emb}(L,N)$ admits a local section at $f\in\mathrm{Emb}(L,N)$. \end{thm} This implies the analogue of Proposition \ref{p:G-locally-retractile-embeddings} for manifolds with corners. \begin{coro}\label{c:Cerf} The action of $\mathrm{Diff}_c(N)_0$ on $\mathrm{Emb}(L,N)$ is locally retractile. \end{coro} \begin{proof}[Proof of the corollary] To see this, given a point $f\in\mathrm{Emb}(L,N)$, choose a relatively compact open neighbourhood $U$ of $f(L)$ in $N$. The action of $\mathrm{Iso}_U(N)$ on $\mathrm{Emb}(L,N)$ factors through the endpoint map $\mathrm{Iso}_U(N) \to \mathrm{Diff}_c(N)_0$ so a local section at $f$ for the action of $\mathrm{Iso}_U(N)$ induces a local section at $f$ for the action of $\mathrm{Diff}_c(N)_0$.\footnote{An important point is that although the adjoint $[0,1]\to C^\infty(N,N)$ of a smooth map $g\colon N\times [0,1]\to N$ need not be continuous in general for non-compact $N$, it \emph{is} continuous if $g$ is a compactly-supported diffeotopy (\textit{cf}.\ \cite[Lemma 1.12]{Haller1995Groupsofdiffeomorphisms}). 
This is needed to see that the endpoint map $\mathrm{Iso}_U(N) \to \mathrm{Diff}_c(N)$ lands in $\mathrm{Diff}_c(N)_0$.} \end{proof} \begin{proof}[Proof of Proposition \ref{p:G-locally-retractile-boundary} from the corollary] The setting of Proposition \ref{p:G-locally-retractile-boundary} is a special case of the setting of Cerf. Note that the faces of a manifold with boundary (but no higher-codimension corners) are precisely its boundary-components, together with its interior. The condition imposed in the definition of $\mathrm{Emb}_{12}(L,N)$ is therefore a union of incidence relations in the sense of Cerf, and so $\mathrm{Emb}_{12}(L,N)$ splits as a topological disjoint union of $\mathrm{Emb}_{\mathcal{I}}(L,N)$ for various different incidence relations $\mathcal{I}$.\footnote{If $\partial_1 N$ and $\partial_2 N$ are both connected, then there is just one incidence relation in the disjoint union. In general, the number of incidence relations in the disjoint union is $\leqslant n_1^{\ell_1} n_2^{\ell_2}$, where $n_i$ is the number of components of $\partial_i N$ and $\ell_i$ is the number of components of $\partial_i L$.} The action of $\mathrm{Diff}_c(N)_0$ on $\mathrm{Emb}_{12}(L,N)$ preserves this decomposition, and its action on each piece is locally retractile by Corollary \ref{c:Cerf}, so its action on the whole space $\mathrm{Emb}_{12}(L,N)$ is also locally retractile. \end{proof} \subsection{Quotients of embedding spaces.}\label{ss:quotients-of-embedding-spaces} Another version of Proposition \ref{p:G-locally-retractile-embeddings} that we will need is for quotients of embedding spaces by groups of diffeomorphisms, in the following sense. We now return to the setting of Assumption \ref{notation-convention}, in particular all manifolds have empty boundary and $L$ is compact with $\mathrm{dim}(L) < \mathrm{dim}(N)$. Let $G$ be any open subgroup of $\mathrm{Diff}(L)$. \begin{prop}\label{p:G-locally-retractile-orbitspace} The orbit space $\mathrm{Emb}(L,N)/G$ is $\mathrm{Diff}_c(N)$-locally retractile. \end{prop} \begin{coro}\label{c:principal-H-bundle} The quotient map $\mathrm{Emb}(L,N) \to \mathrm{Emb}(L,N)/G$ is a principal $G$-bundle. \end{coro} \begin{proof}[Proof of the corollary] This follows directly from Fact \ref{fact:topological-embedding} and Propositions \ref{p:G-locally-retractile-principal} and \ref{p:G-locally-retractile-orbitspace}. \end{proof} \begin{rmk} In the special case when $G = \mathrm{Diff}(L)$, Corollary \ref{c:principal-H-bundle} was first proved in \cite{BinzFischer1981manifoldofembeddings}, and then generalised, by removing the assumption that $L$ is compact, in \cite[Theorem 10.14]{Michor1980Manifoldsofsmooth}. See also \cite[Theorem 13.14]{Michor1980Manifoldsofdifferentiable} and \cite[Theorem 44.1]{KrieglMichor1997convenientsettingof} for proofs of this result in the non-compact case. It seems likely that Proposition \ref{p:G-locally-retractile-orbitspace}, and therefore also Corollary \ref{c:principal-H-bundle} (for arbitrary open subgroups $G \leqslant \mathrm{Diff}(L)$), is also true for non-compact $L$, but we have not pursued this greater generality since we will only need the result in the case when $L$ is compact. \end{rmk} \begin{proof}[Proof of Propositions \ref{p:G-locally-retractile-embeddings} and \ref{p:G-locally-retractile-orbitspace}] Denote the projection map by \[ q \colon \mathrm{Emb}(L,N) \longrightarrow \mathrm{Emb}(L,N) / G. \] Fix $e \in \mathrm{Emb}(L,N)$. 
We will construct a local section for the action of $\mathrm{Diff}_c(N)$ on $\mathrm{Emb}(L,N)/G$ at the point $q(e)$ in three steps, as a composition of two maps $b$ and $\theta$.\footnote{At the end we mention how to modify this to obtain instead a local section for the action of $\mathrm{Diff}_c(N)$ on $\mathrm{Emb}(L,N)$ at the point $e$. Essentially, one discards the first step and uses $\theta$ directly.} \textbf{Step 1.} First choose a tubular neighbourhood for $e$, namely an embedding $t_e \colon \nu_e \hookrightarrow N$ such that $t_e \circ o_e = e$, where $\pi_e \colon \nu_e \to L$ denotes the normal bundle of $e$ and $o_e \colon L \to \nu_e$ denotes its zero section. Note that $V = t_e(\nu_e)$ is an open neighbourhood of $e(L)$ in $N$, so $\mathrm{Emb}(L,V)$ is an open neighbourhood of $e$ in $\mathrm{Emb}(L,N)$ (\textit{cf}.\ Fact \ref{fact:topological-embedding}). The map \[ \Phi_e = \pi_e \circ t_e^{-1} \circ - \colon \mathrm{Emb}(L,V) \longrightarrow C^\infty(L,L) \] is continuous by Fact \ref{fact:composition-continuous}. Define $\mathcal{W}_e = \Phi_e^{-1}(\mathrm{id})$ and $\mathcal{U}_e = \Phi_e^{-1}(G)$. Note that $e \in \mathcal{W}_e \subseteq \mathcal{U}_e$ and that $\mathcal{U}_e$ is open in $\mathrm{Emb}(L,N)$ since we assumed that $G$ is open in $\mathrm{Diff}(L)$. Alternative descriptions of these two subsets are: \begin{align*} \mathcal{W}_e &= \{ t_e \circ s \mid s \text{ is a smooth section of } \pi_e \} \\ \mathcal{U}_e &= \{ t_e \circ s \circ g \mid s \text{ is a smooth section of } \pi_e \text{ and } g \in G \} . \end{align*} Note that $q^{-1}(q(\mathcal{W}_e))$ is the $G$-orbit of $\mathcal{W}_e$, which is $\mathcal{U}_e$. Hence $q(\mathcal{W}_e)$ is an open neighbourhood of $q(e)$ in the quotient space $\mathrm{Emb}(L,N)/G$. Also note that $q|_{\mathcal{W}_e}$ is \emph{injective}, since sections are equal if and only if they have the same image. \begin{sublem} The continuous injection $q|_{\mathcal{W}_e}$ is a homeomorphism onto its image. \end{sublem} \begin{proof}[Proof of the sublemma] \let\qedsymboloriginal\qedsymbol \renewcommand{\qedsymbol}{{\small (sublemma)} \qedsymboloriginal} Let $U$ be an open subset of $\mathrm{Emb}(L,N)$. We need to show that $q(U \cap \mathcal{W}_e)$ is open in $q(\mathcal{W}_e)$. Since $q(\mathcal{W}_e)$ is open in $\mathrm{Emb}(L,N)/G$, this is equivalent to showing that $q(U \cap \mathcal{W}_e)$ is open in $\mathrm{Emb}(L,N)/G$, i.e.\ that $q^{-1}(q(U \cap \mathcal{W}_e))$ is open in $\mathrm{Emb}(L,N)$. Define \[ \Psi_e \colon \mathcal{U}_e \longrightarrow \mathcal{W}_e \] by $f \mapsto f \circ \Phi_e(f)^{-1}$. This is continuous by Fact \ref{fact:composition-continuous} and the fact that inversion is continuous for diffeomorphism groups. Thus $\Psi_e^{-1}(U \cap \mathcal{W}_e)$ is open in $\mathcal{U}_e$, and hence in $\mathrm{Emb}(L,N)$. Now we note that $\Psi_e^{-1}(U \cap \mathcal{W}_e)$ is precisely the $G$-orbit of $U \cap \mathcal{W}_e$, which is $q^{-1}(q(U \cap \mathcal{W}_e))$. \end{proof} Denote the inverse of $q|_{\mathcal{W}_e}$ by \[ b \colon q(\mathcal{W}_e) \longrightarrow \mathcal{W}_e. \] By the sublemma, it is continuous. Also, note that $b(q(e)) = e$. \textbf{Step 2.} Now (by \cite{Whitney1936Differentiablemanifolds}) we may choose a proper embedding \[ \iota \colon N \lhook\joinrel\longrightarrow \mathbb{R}^k \] for some $k$.
We also choose a tubular neighbourhood for $\iota$, namely an embedding $t \colon \nu_\iota \hookrightarrow \mathbb{R}^k$ such that $t \circ o_\iota = \iota$, where $\pi_\iota \colon \nu_\iota \to N$ is the normal bundle of $\iota$ and $o_\iota \colon N \to \nu_\iota$ is its zero section. We will choose a tubular neighbourhood for the composition $\iota \circ e = \iota e \colon L \hookrightarrow \mathbb{R}^k$ more carefully. There is an embedding of bundles \begin{center} \begin{tikzpicture} [x=1mm,y=1mm] \node (tl) at (0,12) {$\nu_{\iota e}$}; \node (tr) at (30,12) {$T\mathbb{R}^k$}; \node (bl) at (0,0) {$L$}; \node (br) at (30,0) {$\mathbb{R}^k$}; \inclusion{above}{$u$}{(tl)}{(tr)} \inclusion{below}{$\iota e$}{(bl)}{(br)} \draw[->] (tl) to node[left,font=\small]{$\pi_{\iota e}$} (bl); \draw[->] (tr) to (br); \end{tikzpicture} \end{center} so that $u(\nu_{\iota e})$ is the orthogonal complement of $T(\iota e(L))$ in $T\mathbb{R}^k|_{\iota e(L)}$ with respect to the standard Riemannian metric on $\mathbb{R}^k$. Write $D_r = D_r(\nu_{\iota e})$ for the subbundle of $\nu_{\iota e}$ consisting of vectors of norm $\leqslant r$, using the norm inherited via this embedding. Since $L$ is compact, we may choose $r>0$ sufficiently small so that the exponential map for $T\mathbb{R}^k$ restricts to an embedding \[ v \colon D_r \lhook\joinrel\longrightarrow \mathbb{R}^k \] such that $vo_{\iota e} = \iota e$, where $o_{\iota e} \colon L \to D_r$ is the zero section of \[ \hat{\pi} = \pi_{\iota e}|_{D_r} \colon D_r \longrightarrow L. \] We note that: \begin{itemizeb} \item[(i)] The restriction of $v$ to each fibre of $\hat{\pi} \colon D_r \to L$ is an isometry. \item[(ii)] Moreover, we have $v(D_r) = \{ x \in \mathbb{R}^k \mid \exists\, y \in L \text{ such that } \lvert \iota e(y) - x \rvert \leqslant r \}$. \end{itemizeb} Now we decrease $r > 0$ if necessary so that we also have \begin{itemizeb} \item[(iii)] $v(D_r) \subseteq t(\nu_\iota)$. \end{itemizeb} (Again, this is possible since $L$ is compact.) Define \[ \mathcal{V}_e = \{ f \in \mathrm{Emb}(L,N) \mid \forall\, x \in L \text{ we have } \lvert \iota f(x) - \iota e(x) \rvert < \tfrac{r}{2} \} \] and note that this is a $\mathrm{Diff}(L)$-equivariant open subset of $\mathrm{Emb}(L,N)$. Also choose a smooth map $\lambda \colon [0,\infty) \to [0,1]$ such that $\lambda(t) = 1$ for $t \leqslant \tfrac{r}{8}$ and $\lambda(t) = 0$ for $t \geqslant \tfrac{r}{4}$. For any $f \in \mathcal{V}_e$ we now attempt to define a map $\theta(f) \colon N \to N$ as follows. \begin{itemizeb} \item[(1)] If $x \in N \smallsetminus \iota^{-1}v(D_{r/4})$, we set $\theta(f)(x) = x$. \item[(2)] If $x \in \iota^{-1}v(D_{r/2})$, we set $\theta(f)(x) = \pi_\iota t^{-1} \iota_f(x)$, where \[ \iota_f(x) \;=\; \iota(x) + \lambda(\lvert \bar{x} \rvert) (\iota f \hat{\pi}(\bar{x}) - \iota e \hat{\pi}(\bar{x})), \] and $\bar{x} = v^{-1}\iota(x) \in D_{r/2}$. \end{itemizeb} To check that this is well-defined and smooth, it is enough to check that \begin{itemizeb} \item[(a)] $\iota_f(x)$ is in the image of $t$ in case (2), \item[(b)] the two cases agree for $x \in \iota^{-1}v(D_{r/2} \smallsetminus D_{r/4})$. \end{itemizeb} The verification of (b) follows from the fact that $\lambda(t) = 0$ for $t \geqslant \tfrac{r}{4}$. For condition (a), suppose that $x \in \iota^{-1}v(D_{r/2})$ and let $\bar{x} = v^{-1}\iota(x) \in D_{r/2}$. 
Then, by the triangle inequality, \[ \lvert \iota_f(x) - \iota e \hat{\pi}(\bar{x}) \rvert \;\leqslant\; \lvert \iota_f(x) - \iota(x) \rvert \; + \; \lvert \iota(x) - \iota e \hat{\pi}(\bar{x}) \rvert . \] For the first summand, we have \[ \lvert \iota_f(x) - \iota(x) \rvert \leqslant \lvert \iota f \hat{\pi}(\bar{x}) - \iota e \hat{\pi}(\bar{x}) \rvert < \tfrac{r}{2} \] since $f \in \mathcal{V}_e$. For the second summand, we have \begin{align*} \lvert \iota(x) - \iota e \hat{\pi}(\bar{x}) \rvert &= \lvert v(\bar{x}) - vo_{\iota e} \hat{\pi}(\bar{x}) \rvert \\ &= \lvert \bar{x} - o_{\iota e} \hat{\pi}(\bar{x}) \rvert \\ &\leqslant \tfrac{r}{2} \end{align*} by property (i) and since $\bar{x} \in D_{r/2}$. Hence $\iota_f(x) \in v(D_r) \subseteq t(\nu_\iota)$ by properties (ii) and (iii). This completes the verification of (a), so $\theta(f) \colon N \to N$ is a well-defined, smooth map. Note also that it is supported in $\iota^{-1}v(D_{r/2}) \subseteq N$, which is compact since $L$ (and therefore $D_{r/2}$) is compact and $\iota$ is a \emph{proper} embedding. So we have a function \[ \theta \colon \mathcal{V}_e \longrightarrow C_c^\infty(N,N), \] and it is not hard to see from Fact \ref{fact:composition-continuous}, plus the fact that pointwise addition and multiplication of smooth maps into $\mathbb{R}^k$ are continuous, that $\theta$ is continuous. \textbf{Step 3.} (Putting together $b$ and $\theta$.) Recall that $\mathcal{V}_e$ is a $G$-invariant open subset of $\mathrm{Emb}(L,N)$, so $q(\mathcal{V}_e)$ is an open subset of $\mathrm{Emb}(L,N)/G$. Therefore, the intersection $q(\mathcal{V}_e) \cap q(\mathcal{W}_e)$ is an open neighbourhood of $q(e)$ in $\mathrm{Emb}(L,N)/G$. Recall the continuous inverse $b \colon q(\mathcal{W}_e) \to \mathcal{W}_e$ of $q|_{\mathcal{W}_e}$ from Step 1. Note that \[ b(q(\mathcal{V}_e) \cap q(\mathcal{W}_e)) \subseteq \mathcal{W}_e \cap q^{-1}(q(\mathcal{V}_e)) = \mathcal{W}_e \cap \mathcal{V}_e \subseteq \mathcal{V}_e . \] We may therefore define \[ \bar{\theta} = \theta \circ b \colon q(\mathcal{V}_e) \cap q(\mathcal{W}_e) \longrightarrow C_c^\infty(N,N) \] and observe that $\bar{\theta}(q(e)) = \theta(e) = \mathrm{id}$. Since $\mathrm{Diff}_c(N)$ is an open neighbourhood of the identity in $C_c^\infty(N,N)$, its preimage \[ \mathcal{Y}_e = \bar{\theta}^{-1}(\mathrm{Diff}_c(N)) \] is an open neighbourhood of $q(e)$ in $\mathrm{Emb}(L,N)/G$ and we have a continuous map \[ \bar{\theta} \colon \mathcal{Y}_e \longrightarrow \mathrm{Diff}_c(N). \] It now remains to check that $q(\bar{\theta}(q(f)) \circ e) = q(f)$ for all $q(f) \in \mathcal{Y}_e$. We may as well assume that the representative $f \in \mathrm{Emb}(L,N)$ of $q(f)$ has been chosen so that $b(q(f)) = f$, since this is always possible, so this is equivalent to checking that $q(\theta(f) \circ e) = q(f)$. We will show that $\theta(f) \circ e = f$ as embeddings $L \hookrightarrow N$. Let $z \in L$. Then $\iota e(z) = vo_{\iota e}(z)$, so $e(z) \in \iota^{-1}v(D_{r/2})$ and hence \begin{align*} \theta(f)(e(z)) &= \pi_\iota t^{-1} ( \iota e(z) + \lambda(0)(\iota f(z) - \iota e(z)) ) \\ &= \pi_\iota t^{-1} (\iota f(z)) \\ &= \pi_\iota t^{-1} t o_\iota f(z) \\ &= f(z). \end{align*} Hence $(\mathcal{Y}_e,\bar{\theta})$ is a local section at $q(e)$ for the action of $\mathrm{Diff}_c(N)$ on $\mathrm{Emb}(L,N)/G$. \textbf{Note.} The proof also shows that $\mathrm{Emb}(L,N)$ -- without taking a quotient -- is $\mathrm{Diff}_c(N)$-locally retractile.
To see this, discard Step 1, which constructs the map $b$, and use the map $\theta$ directly, restricted to $\theta^{-1}(\mathrm{Diff}_c(N))$. \end{proof} \subsection{Fibration lemmas.} Directly from the definition, one may see that products, compositions and pullbacks of Serre/Hurewicz fibrations are again Serre/Hurewicz fibrations. In the case of Serre fibrations there is a partial converse to the statement about pullbacks: if the pullback of a map along a surjective Serre fibration is a Serre fibration, then the original map must already be a Serre fibration. More explicitly: \begin{lem}\label{l:Serre-fibration-pullback} Suppose we have a pullback diagram in topological spaces \begin{center} \begin{tikzpicture} [x=1mm,y=1mm] \node (a) at (0,15) {$A$}; \node (c) at (15,15) {$C$}; \node (b) at (0,0) {$B$}; \node (d) at (15,0) {$D$}; \draw[->] (c) to (a); \draw[->] (c) to node[right,font=\small]{$r$} (d); \draw[->] (a) to node[left,font=\small]{$l$} (b); \draw[->] (d) to node[below,font=\small]{$b$} (b); \node at (12,12) {$\llcorner$}; \end{tikzpicture} \end{center} in which $r$ and $b$ are Serre fibrations and $b$ is surjective. Then $l$ is also a Serre fibration. \end{lem} \begin{proof} Consider a lifting problem \begin{center} \begin{tikzpicture} [x=1mm,y=1mm] \node (x) at (-25,15) {$[0,1]^{k-1}$}; \node (y) at (-25,0) {$[0,1]^k$}; \node (a) at (0,15) {$A$}; \node (c) at (15,15) {$C$}; \node (b) at (0,0) {$B$}; \node (d) at (15,0) {$D$}; \draw[->] (c) to (a); \draw[->] (c) to node[right,font=\small]{$r$} (d); \draw[->] (a) to node[left,font=\small]{$l$} (b); \draw[->] (d) to node[below,font=\small]{$b$} (b); \node at (12,12) {$\llcorner$}; \inclusion{left}{$i$}{(x)}{(y)} \draw[->] (x) to node[above,font=\small]{$f$} (a); \draw[->] (y) to node[below,font=\small]{$g$} (b); \end{tikzpicture} \end{center} for $k\geqslant 0$. Since $b$ is surjective we may lift one corner of the map $g$ to $D$. Then, using the lifting property for $b$ ($k$ times) we may lift the whole map $g$ to a map $\bar{g}\colon [0,1]^k \to D$. The maps $\bar{g}\circ i$ and $f$ induce a map $\bar{f}\colon [0,1]^{k-1} \to C$ by the universal property of the pullback. We may now find a map $\bar{h}\colon [0,1]^k \to C$ solving the lifting problem $(\bar{f},\bar{g})$ and compose with the map $C\to A$ to solve the original lifting problem $(f,g)$. \end{proof} \begin{coro}\label{c:fibration-orbit-spaces} Let $f\colon X\to Y$ be an equivariant map between $G$-spaces, and assume that it is additionally a Serre fibration. Suppose that $Y \to Y/G$ is a principal $G$-bundle. Then the map of orbit spaces $\bar{f}\colon X/G \to Y/G$ is also a Serre fibration. \end{coro} \begin{proof} The facts that $Y \to Y/G$ is a principal $G$-bundle and $f \colon X \to Y$ is $G$-equivariant imply that $X \to X/G$ is also a principal $G$-bundle. Hence the square \begin{center} \begin{tikzpicture} [x=1mm,y=1mm] \node (tl) at (0,15) {$X/G$}; \node (tr) at (20,15) {$X$}; \node (bl) at (0,0) {$Y/G$}; \node (br) at (20,0) {$Y$}; \draw[->] (tr) to node[right,font=\small]{$f$} (br); \draw[->] (tl) to node[left,font=\small]{$\bar{f}$} (bl); \draw[->] (tr) to (tl); \draw[->] (br) to (bl); \end{tikzpicture} \end{center} is a morphism of principal $G$-bundles, and therefore a pullback square. Any principal $G$-bundle is a surjective Serre fibration, so this square satisfies the hypotheses of Lemma \ref{l:Serre-fibration-pullback}. 
\end{proof} \begin{coro}\label{c:fibration-orbit-spaces2} The map $\bar{\pi}_n$ in \textup{\hyperref[para:labelled-submanifolds]{\S\anglenumber{iv.}}} is a Serre fibration with path-connected fibres. \end{coro} \begin{proof} By assumption, the map $\pi \colon Z \to \mathrm{Emb}(P,\ensuremath{{\,\,\overline{\!\! M\!}\,}})$ is a $G$-equivariant Serre fibration with path-connected fibres. Hence its $n$th power $\pi^n \colon Z^n \to \mathrm{Emb}(P,\ensuremath{{\,\,\overline{\!\! M\!}\,}})^n$ is a $(G \wr \Sigma_n)$-equivariant Serre fibration with path-connected fibres. The pullback $\pi_n$ of $\pi^n$ along the inclusion $\mathrm{Emb}(nP,M) \hookrightarrow \mathrm{Emb}(P,\ensuremath{{\,\,\overline{\!\! M\!}\,}})^n$ is therefore also a $(G \wr \Sigma_n)$-equivariant Serre fibration with path-connected fibres. Since $G \wr \Sigma_n$ is an open subgroup of $\mathrm{Diff}(nP)$, Corollary \ref{c:principal-H-bundle} implies that \[ \mathrm{Emb}(nP,M) \longrightarrow \mathrm{Emb}(nP,M)/(G \wr \Sigma_n) = \mathbf{C}_{nP}(M;G) \] is a principal $(G \wr \Sigma_n)$-bundle. Then Corollary \ref{c:fibration-orbit-spaces} implies that $\bar{\pi}_n$ is a Serre fibration. Moreover, the proof of Corollary \ref{c:fibration-orbit-spaces} shows that \begin{center} \begin{tikzpicture} [x=1mm,y=1mm] \node (tl) at (-35,15) {$\mathbf{C}_{nP}(M,Z;G)$}; \node (tm) at (0,15) {$Z_n/(G \wr \Sigma_n)$}; \node (tr) at (40,15) {$Z_n$}; \node (bl) at (-35,0) {$\mathbf{C}_{nP}(M;G)$}; \node (bm) at (0,0) {$\mathrm{Emb}(nP,M)/(G \wr \Sigma_n)$}; \node (br) at (40,0) {$\mathrm{Emb}(nP,M)$}; \draw[->] (tr) to node[right,font=\small]{$\pi_n$} (br); \draw[->] (tm) to (bm); \draw[->] (tl) to node[left,font=\small]{$\bar{\pi}_n$} (bl); \draw[->] (tr) to (tm); \draw[->] (br) to (bm); \draw (tm) edge[double equal sign distance] (tl); \draw (bm) edge[double equal sign distance] (bl); \end{tikzpicture} \end{center} is a pullback square, so the fibres of $\bar{\pi}_n$ are homeomorphic to fibres of $\pi_n$ -- in particular, they are path-connected. \end{proof} \section{Proof of stability}\label{s:proof} In this section we apply Theorem \ref{tAxiomatic} to prove our main theorem. Fix integers $m$ and $p$ satisfying $0 \leqslant p \leqslant \frac12(m-3)$ and let $n$ be a non-negative integer. Let $\ensuremath{\mathscr{X}}(n)$ be the collection of all maps $f \colon X \to Y$ as constructed in Definition \ref{d:input-data}, for all choices of $(M,P,\lambda,\iota,G)$, where $M$ is a smooth, connected $m$-manifold, $P$ is a smooth, closed $p$-manifold, $\iota$ is an embedding $P \hookrightarrow \partial M$, $\lambda$ is a proper embedding $\partial M \times [0,2] \hookrightarrow M$ with $\lambda(-,0) = \text{inclusion}$ and $G$ is an open subgroup of $\mathrm{Diff}(P)$. To prove Lemma \ref{l:injectivity} and Theorem \ref{tmain} we will show that every map in $\bigcup_{n\geqslant 0}\ensuremath{\mathscr{X}}(n)$ is split-injective on homology and the sequence $\ensuremath{\mathscr{X}}(n)$ satisfies condition (2w) of Theorem \ref{tAxiomatic}. In the next section we spell this out precisely, as a guide to the proof. As a side remark, although we have fixed the dimensions $m$ and $p$ such that they satisfy the hypothesis $p \leqslant \frac12(m-3)$ throughout this section, whenever this dimension hypothesis (or something weaker than it) is used in the proof of a lemma, proposition or corollary, it will be stated explicitly, in order to keep track of where it is needed in the proof. 
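For orientation, we note (an illustrative aside, not used in the arguments below) that the standing hypothesis $0 \leqslant p \leqslant \frac12(m-3)$ may be rewritten as
\[
m \;\geqslant\; 2p + 3 ,
\]
so, for instance, it allows $P$ to be a finite set of points ($p = 0$) as soon as $\dim(M) \geqslant 3$, and a closed $1$-manifold, such as a circle, ($p = 1$) as soon as $\dim(M) \geqslant 5$.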
\subsection{Overview of the proof.}\label{s:overview} We first show in \S\ref{s:split-injectivity} that every map $f \in \bigcup_{n\geqslant 0}\ensuremath{\mathscr{X}}(n)$ induces split injections on homology in all degrees. Now fix $n\geqslant 2$ and $f \colon X \to Y$ in $\ensuremath{\mathscr{X}}(n)$. We construct in \S\ref{s:firstr} a map $f_\bullet \colon X_\bullet \to Y_\bullet$ of augmented semi-simplicial spaces whose $(-1)$st level is $f \colon X \to Y$, and show that $\ensuremath{h\mathrm{conn}}(\lVert X_\bullet \rVert \to X) \geqslant n-1$ and $\ensuremath{h\mathrm{conn}}(\lVert Y_\bullet \rVert \to Y) \geqslant n$. Fix $i\geqslant 0$. In \S\ref{s:firsta} we construct a commutative square \begin{equation}\label{e:firsta} \centering \begin{split} \begin{tikzpicture} [x=1mm,y=1mm] \node (tl) at (0,10) {$X_i$}; \node (tr) at (20,10) {$Y_i$}; \node (bl) at (0,0) {$A_i$}; \node (br) at (20,0) {$B_i$}; \draw[->] (tl) to node[above,font=\small]{$f_i$} (tr); \draw[->] (tl) to node[left,font=\small]{$p_i$} (bl); \draw[->] (tr) to node[right,font=\small]{$q_i$} (br); \incl{(bl)}{(br)} \end{tikzpicture} \end{split} \end{equation} where $A_i$ is path-connected, $p_i$ and $q_i$ are Serre fibrations and the inclusion $A_i \hookrightarrow B_i$ is a homotopy equivalence, and we show, for a certain point $a_i \in A_i$, that the restriction of $f_i$ to $p_i^{-1}(a_i) \to q_i^{-1}(a_i)$ is in $\ensuremath{\mathscr{X}}(n-i-1)$. In the case $i=0$ we also construct a commutative square \begin{equation} \centering \begin{split} \begin{tikzpicture} [x=1mm,y=1mm] \node (tl) at (0,10) {$\ensuremath{{\,\overline{\! X}}}$}; \node (tr) at (40,10) {$\ensuremath{{\overline{Y}}}$}; \node (bl) at (0,0) {$p_0^{-1}(a_0)$}; \node (br) at (40,0) {$q_0^{-1}(a_0)$}; \draw[->] (tl) to node[above,font=\small]{$g$} (tr); \draw[->] (tl) to (bl); \draw[->] (tr) to (br); \draw[->] (bl) to node[below,font=\small]{restriction of $f_0$} (br); \end{tikzpicture} \end{split} \end{equation} in which the vertical maps are homeomorphisms. (This last step is of course not very significant: the map $g$ is just slightly more convenient to work with than the restriction of $f_0$ for the next constructions.) In \S\ref{s:secondr} we construct a map $g_\bullet \colon \ensuremath{{\,\overline{\! X}}}_\bullet \to \ensuremath{{\overline{Y}}}_\bullet$ of augmented semi-simplicial spaces whose $(-1)$st level is $g \colon \ensuremath{{\,\overline{\! X}}} \to \ensuremath{{\overline{Y}}}$, and we show that $\lVert \ensuremath{{\,\overline{\! X}}}_\bullet \rVert \to \ensuremath{{\,\overline{\! X}}}$ and $\lVert \ensuremath{{\overline{Y}}}_\bullet \rVert \to \ensuremath{{\overline{Y}}}$ are weak equivalences. Fix $j\geqslant 0$. In \S\ref{s:seconda} we construct another commutative square \begin{equation}\label{e:seconda} \centering \begin{split} \begin{tikzpicture} [x=1mm,y=1mm] \node (tl) at (0,12) {$\ensuremath{{\,\overline{\! X}}}_j$}; \node (tr) at (20,12) {$\ensuremath{{\overline{Y}}}_j$}; \node (bl) at (0,0) {$\ensuremath{{\,\overline{\! A}}}_j$}; \node (br) at (20,0) {$\ensuremath{{\,\overline{\! B}}}_j$}; \draw[->] (tl) to node[above,font=\small]{$g_j$} (tr); \draw[->] (tl) to node[left,font=\small]{$\bar{p}_j$} (bl); \draw[->] (tr) to node[right,font=\small]{$\bar{q}_j$} (br); \incl{(bl)}{(br)} \end{tikzpicture} \end{split} \end{equation} where $\bar{p}_j$ and $\bar{q}_j$ are Serre fibrations and the inclusion $\ensuremath{{\,\overline{\! A}}}_j \hookrightarrow \ensuremath{{\,\overline{\! B}}}_j$ is a homotopy equivalence. 
We also show that, for every point $\bar{a} \in \ensuremath{{\,\overline{\! A}}}_j$, the restriction of $g_j$ to $\bar{p}_j^{-1}(\bar{a}) \to \bar{q}_j^{-1}(\bar{a})$ is in $\ensuremath{\mathscr{X}}(n-1)$. Finally, fix $a_0 \in A_0$ as above and any point $\bar{a} \in \ensuremath{{\,\overline{\! A}}}_0$. In \S\ref{s:weak-factorisation} we show that the composite map \[ \bar{q}_0^{-1}(\bar{a}) \lhook\joinrel\longrightarrow \ensuremath{{\overline{Y}}}_0 \longrightarrow \ensuremath{{\overline{Y}}} \xrightarrow{\;\;\cong\;\;} q_0^{-1}(a_0) \lhook\joinrel\longrightarrow Y_0 \longrightarrow Y \] factors up to homotopy through $f \colon X \to Y$. (See Figure \ref{fhomotopy} for a picture.) Combining all of these constructions from \S\S\ref{s:firstr}--\ref{s:weak-factorisation} with split-injectivity from \S\ref{s:split-injectivity} verifies condition (2w) for the sequence $\ensuremath{\mathscr{X}}(n)$, and therefore Theorem \ref{tAxiomatic} implies Theorem \ref{tmain}. \subsection{Split-injectivity.}\label{s:split-injectivity} Fix the input data $M,P,\lambda,\iota,G$ and temporarily write the map $f \colon X \to Y$ of Definition \ref{d:input-data} as $f_n \colon X_n \to Y_n$ to make the dependence on $n$ explicit (but we continue to hide the dependence on the other choices from the notation). By construction, $X_{n+1}$ is a subspace of $Y_n$. The only difference is that, in $Y_n$, the submanifolds are required to have image contained in $M_0$, whereas in $X_{n+1}$ they are required to have image contained in the slightly smaller manifold $M_1 \subset M_0$. Using the collar neighbourhood $\lambda$, it is easy to construct a deformation retraction for the inclusion $M_1 \hookrightarrow M_0$, which induces a deformation retraction for the inclusion $X_{n+1} \hookrightarrow Y_n$. Now choosing a homotopy inverse for each of these inclusions, the maps $f_n$ give us a sequence \[ \cdots \longrightarrow X_{n-1} \longrightarrow X_n \longrightarrow X_{n+1} \longrightarrow \cdots , \] and taking reduced homology in any fixed degree $*$ we get a sequence \[ 0 = \widetilde{H}_*(X_0) \longrightarrow \widetilde{H}_*(X_1) \longrightarrow \widetilde{H}_*(X_2) \longrightarrow \widetilde{H}_*(X_3) \longrightarrow \cdots \] of abelian groups and homomorphisms. (Note that $X_0$ is the one-point space.) \begin{lem}[Lemma 2 of \cite{Dold1962DecompositiontheoremsSn}]\label{l:Dold} Suppose we have a sequence of abelian group homomorphisms $s_n \colon A_n \to A_{n+1}$ for $n\geqslant 0$ with $A_0 = 0$, together with homomorphisms $\tau_{k,n} \colon A_n \to A_k$ for $1 \leqslant k \leqslant n$ such that $\tau_{n,n} = \mathrm{id}$ and $\tau_{k,n} = \tau_{k,n+1} \circ s_n \;\mathrm{mod}\; \mathrm{im}(s_{k-1})$, i.e.\ $\mathrm{im}(\tau_{k,n} - \tau_{k,n+1} \circ s_n) \subseteq \mathrm{im}(s_{k-1})$. Then every $s_n$ is split-injective. \end{lem} To prove Lemma \ref{l:injectivity} it would therefore suffice to define maps $X_n \to X_k$ for each $1 \leqslant k \leqslant n$ such that the induced maps on reduced homology satisfy the hypotheses of this lemma. In fact we will define maps $X_n \to \mathrm{Sp}^b (X_k)$ where $b = \binom{n}{k}$. These induce homomorphisms $\widetilde{H}_*(X_n) \to \widetilde{H}_*(X_k)$ by sending a cycle in $X_n$ first to a cycle in $\mathrm{Sp}^b(X_k)$ and then viewing it as a formal sum of cycles in $X_k$. 
Alternatively, we may apply the Dold-Thom theorem \cite{DoldThom1958Quasifaserungenundunendliche} that $H_* = \pi_* \circ \mathrm{Sp}^\infty$ for $* \geqslant 1$ and note that $\mathrm{Sp}^\infty \circ \mathrm{Sp}^b = \mathrm{Sp}^\infty$. \begin{defn} Let $\tau_{k,n} \colon X_n \to \mathrm{Sp}^{\binom{n}{k}}(X_k)$ be the map taking $\{[\varphi_1],\ldots,[\varphi_n]\}$ to the formal sum of $\{[\varphi_i] \mid i \in S \}$ over all subsets $S \subseteq \{1,\ldots,n\}$ with $\lvert S \rvert = k$. \end{defn} Clearly $\tau_{n,n} = \mathrm{id}$, and there are homotopies \[ \tau_{k,n+1} \circ f_n \;\simeq\; \tau_{k,n} + f_{k-1} \circ \tau_{k-1,n} \] of maps $X_n \longrightarrow \mathrm{Sp}^{\binom{n+1}{k}}(X_k)$. The hypotheses of Lemma \ref{l:Dold} are therefore satisfied after taking reduced homology, so Lemma \ref{l:Dold} implies Lemma \ref{l:injectivity}. \subsection{Resolution by subconfigurations.}\label{s:firstr} Let $n\geqslant 2$ and fix the input data $(M,P,\lambda,\iota,G)$ determining a map $f \colon X \to Y$ in $\ensuremath{\mathscr{X}}(n)$. Recall that we denote $\mathrm{Emb}(P,M)$ by $E$ and that $G$ is an open subgroup of $\mathrm{Diff}(P)$. \begin{defn} For $i\geqslant -1$ let $X_i$ be the subspace of $X \times (E/G)^{i+1}$ consisting of tuples \[ (\{ [\varphi_1],\ldots,[\varphi_n] \}, ([\psi_0],\ldots,[\psi_i]) ) \] such that the $[\psi_0],\ldots,[\psi_i]$ are pairwise distinct and $\{ [\psi_0],\ldots,[\psi_i] \} \,\subseteq\, \{ [\varphi_1],\ldots,[\varphi_n] \}$. There are obvious face maps $d_j \colon X_i \to X_{i-1}$, given by forgetting $[\psi_j]$, that turn this into an augmented semi-simplicial space $X_\bullet$ with $X_{-1} = X$. We define an augmented semi-simplicial space $Y_\bullet$ with $Y_{-1} = Y$ in exactly the same way, and there is a semi-simplicial map $f_\bullet \colon X_\bullet \to Y_\bullet$, defined by adjoining $[\lambda(-,\frac12)\circ\iota]$ to a configuration, such that $f_{-1} = f$. \end{defn} We now show that the given embedding $\iota \colon P \hookrightarrow \partial M$ may be extended to an embedding \begin{equation} \label{eq:iotabb} \ensuremath{\bar{\iota}}b \colon P \times (-1,1) \times [0,2] \lhook\joinrel\longrightarrow M, \end{equation} which will be used several times for the constructions in the following sections. To make explicit how the relative dimension hypothesis on $M$ and $P$ is used, we split this into two statements in the proposition below. Recall that $m = \mathrm{dim}(M)$ and $p = \mathrm{dim}(P)$. In this section we are assuming that $p \leqslant \tfrac12(m-3)$, so in particular $p \leqslant \tfrac12(m-2)$. \begin{prop} \label{p:iotabb} The assumption that $p \leqslant \tfrac12(m-2)$ implies that the normal bundle of the embedding $\iota \colon P \hookrightarrow \partial M$ admits a non-vanishing section. Whenever $\iota$ satisfies this property, it may be extended to an embedding of the form \eqref{eq:iotabb}. \end{prop} \begin{proof} Write $\nu_\iota \colon N \to P$ for the normal bundle of $\iota$ and $o_\iota \colon P \to N$ for its zero section. Choose a metric on $\nu_\iota$ and denote by $S(\nu_\iota) \colon S(N) \to P$ its unit sphere subbundle with respect to this metric. This has fibres homeomorphic to $S^{m-p-2}$, so the obstructions to the existence of a section of $S(\nu_\iota)$ live in cohomology groups of the form $H^i(P;\pi_{i-1}(S^{m-p-2}))$. Since $P$ has dimension $p \leqslant m-p-2$, these groups all vanish, and so there exists a section of $S(\nu_\iota)$, thus a non-vanishing section of $\nu_\iota$. 
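In a little more detail: the relevant obstruction groups are $H^i(P;\pi_{i-1}(S^{m-p-2}))$ for $1 \leqslant i \leqslant p$, and in this range
\[
i - 1 \;\leqslant\; p - 1 \;\leqslant\; (m-p-2) - 1 ,
\]
so the coefficient groups $\pi_{i-1}(S^{m-p-2})$ vanish, since the sphere $S^{m-p-2}$ is $(m-p-3)$-connected.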
For the second statement, choose a tubular neighbourhood $\ensuremath{\bar{\iota}}$ for $\iota$, i.e.\ an embedding $\ensuremath{\bar{\iota}} \colon N \hookrightarrow \partial M$ such that $\ensuremath{\bar{\iota}} \circ o_\iota = \iota$. A choice of non-vanishing section of $\nu_\iota$ determines a trivial one-dimensional subbundle $P \times \mathbb{R} \to P$ of $\nu_\iota$, in particular an embedding $P \times \mathbb{R} \hookrightarrow N$. Composing this with $\ensuremath{\bar{\iota}}$ and restricting to $P \times (-1,1)$, we obtain an embedding $\check{\iota} \colon P \times (-1,1) \hookrightarrow \partial M$. Note that $\check{\iota}(-,0) = \iota$. Now we may define the desired embedding $\ensuremath{\bar{\iota}}b$ by $\ensuremath{\bar{\iota}}b(z,s,t) = \lambda(\check{\iota}(z,s),t)$, using the given collar neighbourhood $\lambda$ of $\partial M$. Intuitively, the coordinate $s \in (-1,1)$ specifies a ``shift'' of the standard embedding $\iota \colon P \hookrightarrow \partial M$ parallel to the boundary, and the coordinate $t \in [0,2]$ specifies a shift orthogonal to the boundary, into the interior of $M$ if $t>0$. \end{proof} For a space $Z$, denote the configuration space of $n$ unordered, distinct points in $Z$ by $C_n(Z)$. There is then a map \[ \mathit{st}_n \colon C_n((-1,1) \times (1,2)) \longrightarrow X \] given by $\mathit{st}_n(\{ (s_1,t_1),\ldots,(s_n,t_n) \}) = \{ [\ensuremath{\bar{\iota}}b(-,s_1,t_1)],\ldots,[\ensuremath{\bar{\iota}}b(-,s_n,t_n)] \}$. Note that the image of $\mathit{st}_n$ contains the \emph{standard configurations} of Definition \ref{d:standard}, corresponding to the configurations with $s_1 = s_2 = \cdots = s_n = 0$. Similarly, there is a map \[ \mathit{st}_{n+1}^\prime \colon C_{n+1}((-1,1) \times (0,2)) \longrightarrow Y \] whose image contains the standard configurations in $Y$. Now we show that $\ensuremath{h\mathrm{conn}}(\lVert X_\bullet \rVert \to X) \geqslant n-1$ and $\ensuremath{h\mathrm{conn}}(\lVert Y_\bullet \rVert \to Y) \geqslant n$. We will do this just in the first case, since the second case is almost identical. We first note that all face maps (and compositions of face maps) of $X_\bullet$ are covering maps. In particular, the augmentation $X_0 \to X$ is a covering map. Now, $X$ is path-connected by definition, and it is not hard to see that $X_0$ is also path-connected: we just need to show that two points in the same fibre of $X_0 \to X$ over $x \in X$ can be connected by a path in $X_0$, which we can do by first moving $x$ into the image of $\mathit{st}_n$ and then using the image of a braid in $C_n((-1,1) \times (1,2))$ to permute the components of the configuration of submanifolds $x$. Since the space of vertices $X_0$ is path-connected, so is the geometric realisation $\lVert X_\bullet \rVert$.\footnote{An alternative argument for path-connectivity of $\lVert X_\bullet \rVert$, avoiding the use of the map $\mathit{st}_n$ and hence the hypothesis of Proposition \ref{p:iotabb}, is as follows. We show below that the homotopy fibre of $\lVert X_\bullet \rVert \to X$ is $(n-2)$-connected, where $n\geqslant 2$, so it is at least path-connected. The codomain $X$ is path-connected by definition. Hence the domain $\lVert X_\bullet \rVert$ must also be path-connected.} The relative Hurewicz theorem therefore tells us that $\ensuremath{h\mathrm{conn}}(\lVert X_\bullet \rVert \to X) \geqslant n-1$ as long as the map $\lVert X_\bullet \rVert \to X$ is $(n-1)$-connected, in other words its homotopy fibre is $(n-2)$-connected. 
We will use the following lemma, which appeared in an earlier, preprint version of \cite{Randal-Williams2016Resolutionsmodulispaces} (but not in the final, published version), and which is also very similar to Lemma 2.14 of \cite{EbertRandal-Williams2019Semi-simplicialspaces}. We also give a self-contained proof in the appendix \S\ref{s:appendix}. \begin{lem}\label{l:homotopy-fibres} For any augmented semi-simplicial space $X_\bullet$ and point $x \in X = X_{-1}$, the canonical map $\lVert \mathrm{hofib}_x(k_\bullet) \rVert \to \mathrm{hofib}_x(\lVert X_\bullet \rVert \to X)$ is a weak equivalence, where $k_n \colon X_n \to X$ denotes the unique composition of face maps $X_n \to X_{n-1} \to \cdots \to X_0 \to X$. \end{lem} We note that we are implicitly using a specific point-set model for the homotopy fibre of a map, so that $\mathrm{hofib}_x(k_\bullet)$ is a semi-simplicial \emph{space}, and not just a semi-simplicial object in the homotopy category. This is explained in more detail in \S\ref{ss:homotopy-fibres}. In our case, the maps $k_n \colon X_n \to X$ are covering maps, therefore Serre fibrations, so there is a levelwise weak equivalence $k_{\bullet}^{-1}(x) \to \mathrm{hofib}_x(k_\bullet)$, which then induces a weak equivalence on geometric realisations (\textit{cf}.\ Theorem 2.2 of \cite{EbertRandal-Williams2019Semi-simplicialspaces}). Thus the homotopy fibre of $\lVert X_\bullet \rVert \to X$ is weakly equivalent to $\lVert k_{\bullet}^{-1}(x) \rVert$. The semi-simplicial set $k_{\bullet}^{-1}(x)$ has as its set of $i$-simplices all ordered $(i+1)$-tuples of pairwise distinct elements of the set $k_0^{-1}(x) = x = \{ [\varphi_1],\ldots,[\varphi_n] \} \in X$. This is often called the \emph{complex of injective words} on $n$ letters, and its geometric realisation is known to be homotopy equivalent to a wedge of $(n-1)$-spheres, see for example \cite[Proposition 3.3]{Randal-Williams2013Homologicalstabilityunordered}. In particular, this means that $\lVert k_{\bullet}^{-1}(x) \rVert$, and therefore the homotopy fibre of $\lVert X_\bullet \rVert \to X$, is $(n-2)$-connected, as claimed. From the above discussion (and an identical argument in the case of $Y_\bullet$) we conclude: \begin{lem} The map $f_\bullet \colon X_\bullet \to Y_\bullet$ of augmented semi-simplicial spaces is an $n$-resolution of $f \colon X \to Y$, in other words $\ensuremath{h\mathrm{conn}}(\lVert X_\bullet \rVert \to X) \geqslant n-1$ and $\ensuremath{h\mathrm{conn}}(\lVert Y_\bullet \rVert \to Y) \geqslant n$. \end{lem} \subsection{The first approximation.}\label{s:firsta} Fix $i\geqslant 0$. For a space $Z$, write $\widetilde{C}_k(Z)$ for the configuration space of $k$ ordered points in $Z$ and define a map \[ \widetilde{\mathit{st}}_k \colon \widetilde{C}_k((-1,1) \times (1,2)) \longrightarrow (E/G)^k \] by $\widetilde{\mathit{st}}_k((s_1,t_1),\ldots,(s_k,t_k)) = ([\ensuremath{\bar{\iota}}b(-,s_1,t_1)],\ldots,[\ensuremath{\bar{\iota}}b(-,s_k,t_k)])$. The domain of $\widetilde{\mathit{st}}_k$ (the $k$th ordered configuration space of the plane) is path-connected, and therefore so is its image in $(E/G)^k$. \begin{defn} Let $A_i$ be the path-component of $\mathrm{Emb}((i+1)P,M_1)/G^{i+1} \subseteq (E/G)^{i+1}$ containing the standard configurations, i.e.\ the image of $\widetilde{\mathit{st}}_{i+1}$.
Equivalently, $A_i$ is the subspace of $(E/G)^{i+1}$ consisting of ordered tuples $([\psi_0],\ldots,[\psi_i])$ such that the images $\psi_\alpha(P)$ are pairwise disjoint and contained in $M_1$, and there exists a path of such configurations starting at $([\psi_0],\ldots,[\psi_i])$ and ending at a standard configuration. We define $B_i$ in exactly the same way, except that we replace $M_1$ with $M_0$. \end{defn} \begin{lem} The inclusion $A_i \hookrightarrow B_i$ is a homotopy equivalence. \end{lem} \begin{proof} Using the collar neighbourhood $\lambda$, we may construct a deformation retraction for the inclusion $M_1 \hookrightarrow M_0$, which induces a deformation retraction for the inclusion $A_i \hookrightarrow B_i$. \end{proof} \begin{defn} Define $p_i \colon X_i \to A_i$ by sending $(\{ [\varphi_1],\ldots,[\varphi_n] \}, ([\psi_0],\ldots,[\psi_i]) )$ to $([\psi_0],\ldots,[\psi_i])$. In words, $p_i$ takes a configuration of submanifolds in which $i+1$ components have been marked (and ordered), and forgets all of the non-marked components. The map $q_i \colon Y_i \to B_i$ is defined in exactly the same way. \end{defn} These are clearly well-defined, and fit into a commutative square \eqref{e:firsta}. \begin{lem}\label{l:Serre-fibrations-1} The maps $p_i \colon X_i \to A_i$ and $q_i \colon Y_i \to B_i$ are Serre fibrations. \end{lem} \begin{proof} By Proposition \ref{p:G-locally-retractile-orbitspace}, the action of $\mathrm{Diff}_c(M_1)$ on the quotient space $\mathrm{Emb}((i+1)P,M_1)/G^{i+1}$ is locally retractile. Since $\mathrm{Diff}_c(M_1)$ is locally path-connected by Fact \ref{fact:locally-contractible}, Lemma \ref{l:retractile-basic} implies that the restriction of this action to the identity path-component $\mathrm{Diff}_c(M_1)_0$ is also locally retractile. Embedding spaces are locally contractible as long as the domain manifold is compact (by Fact \ref{fact:locally-contractible} again), so in particular they are locally path-connected, and so their path-components are open. The property of having open path-components passes to quotient spaces, so $A_i$ is an open subspace of $\mathrm{Emb}((i+1)P,M_1)/G^{i+1}$. It is also $\mathrm{Diff}_c(M_1)_0$-invariant, since it is a path-component and the group $\mathrm{Diff}_c(M_1)_0$ is path-connected. Therefore Lemma \ref{l:retractile-basic} tells us that the action of $\mathrm{Diff}_c(M_1)_0$ on $A_i$ is locally retractile. There is also a well-defined action of $\mathrm{Diff}_c(M_1)_0$ on $X_i$ given by post-composition, and $p_i \colon X_i \to A_i$ is equivariant with respect to these actions. Thus Proposition \ref{p:G-locally-retractile} implies that it is a fibre bundle, in particular a Serre fibration. An almost identical argument, replacing $M_1$ by $M_0$ everywhere, shows that $q_i \colon Y_i \to B_i$ is also a fibre bundle, and therefore a Serre fibration. \end{proof} Now choose a point $a_i = ([\psi_0],\ldots,[\psi_i]) \in A_i$ such that each $\psi_\alpha(P)$ is contained in $M_2$ and define \[ \mathrm{im}(a_i) = \psi_0(P) \cup \psi_1(P) \cup \ldots \cup \psi_i(P). \] \begin{prop}\label{p:two-conditions} Assume that the normal bundle of the embedding $\iota \colon P \hookrightarrow \partial M$ admits a non-vanishing section. Then the following two conditions are equivalent, for a configuration \[ \{ [\varphi_1],\ldots,[\varphi_{n-i-1}] \} \in \mathrm{Emb}((n-i-1)P,M_1 \smallsetminus \mathrm{im}(a_i)) / (G \wr \Sigma_{n-i-1}). 
\] \begin{itemizeb} \item[\textup{(1)}] There is a path in this space from $\{ [\varphi_1],\ldots,[\varphi_{n-i-1}] \}$ to a standard configuration, i.e.\ one of the form $\{ [\lambda(\iota(-),t_1)] ,\ldots, [\lambda(\iota(-),t_{n-i-1})] \}$ for distinct $t_1,\ldots,t_{n-i-1}$ in the interval $(1,2)$, \textit{cf}.\ Definition \ref{d:standard}. \item[\textup{(2)}] There is a path in the space $\mathrm{Emb}(nP,M_1)/(G \wr \Sigma_n)$ from $\{ [\varphi_1] ,\ldots, [\varphi_{n-i-1}] , [\psi_0] ,\ldots, [\psi_i] \}$ to a standard configuration, i.e.\ one of the form $\{ [\lambda(\iota(-),t_1)] ,\ldots, [\lambda(\iota(-),t_n)] \}$ for distinct $t_1,\ldots,t_n$ in the interval $(1,2)$. \end{itemizeb} \end{prop} We defer the proof for a few paragraphs. This immediately implies: \begin{coro}\label{c:two-conditions} Under the same assumption as in Proposition \ref{p:two-conditions}, there is a canonical homeomorphism $p_i^{-1}(a_i) \cong X_{n-i-1}(M \smallsetminus \mathrm{im}(a_i))$ given by \[ (\{ [\varphi_1],\ldots,[\varphi_n] \}, ([\psi_0],\ldots,[\psi_i]) ) \;\longmapsto\; \{ [\varphi_1],\ldots,[\varphi_{n-i-1}] \} , \] where we assume that the indexing of the $[\varphi_1],\ldots,[\varphi_n]$ is chosen so that $[\psi_\alpha] = [\varphi_{n-\alpha}]$. \end{coro} \begin{rmk} The notation $X_{n-i-1}(M \smallsetminus \mathrm{im}(a_i))$ means the space $X$ from Definition \ref{d:input-data}, with the same $P,\lambda,\iota,G$ as before, but with $n$ replaced by $n-i-1$ and with $M$ replaced by its open submanifold $M \smallsetminus \mathrm{im}(a_i)$. \end{rmk} \begin{proof}[Proof of Corollary \ref{c:two-conditions}] The map given above defines a homeomorphism from $p_i^{-1}(a_i)$ onto the subspace of \[ \mathrm{Emb}((n-i-1)P,M_1 \smallsetminus \mathrm{im}(a_i)) / (G \wr \Sigma_{n-i-1}) \] consisting of those elements that satisfy condition (2) of Proposition \ref{p:two-conditions}. By definition, the space $X_{n-i-1}(M \smallsetminus \mathrm{im}(a_i))$ is the subspace of $\mathrm{Emb}((n-i-1)P,M_1 \smallsetminus \mathrm{im}(a_i)) / (G \wr \Sigma_{n-i-1})$ consisting of those elements that satisfy condition (1) of Proposition \ref{p:two-conditions}. The result therefore follows from Proposition \ref{p:two-conditions}. \end{proof} By exactly the same argument, replacing $M_1$ with $M_0$ everywhere, we also have a canonical homeomorphism $q_i^{-1}(a_i) \cong Y_{n-i-1}(M \smallsetminus \mathrm{im}(a_i))$, and these fit into a commutative diagram \begin{equation} \centering \begin{split} \begin{tikzpicture} [x=1mm,y=1mm] \node (tl) at (0,10) {$X_{n-i-1}(M \smallsetminus \mathrm{im}(a_i))$}; \node (tr) at (80,10) {$Y_{n-i-1}(M \smallsetminus \mathrm{im}(a_i))$}; \node (bl) at (0,0) {$p_i^{-1}(a_i)$}; \node (br) at (80,0) {$q_i^{-1}(a_i)$}; \draw[->] (tl) to node[above,font=\small]{$f_{n-i-1}(M \smallsetminus \mathrm{im}(a_i))$} (tr); \node at (0,5) {\rotatebox{-90}{$\cong$}}; \node at (80,5) {\rotatebox{-90}{$\cong$}}; \draw[->] (bl) to node[below,font=\small]{restriction of $f_i$} (br); \end{tikzpicture} \end{split} \end{equation} Thus we have shown that the restriction of $f_i$ to $p_i^{-1}(a_i) \to q_i^{-1}(a_i)$ is in $\ensuremath{\mathscr{X}}(n-i-1)$. 
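We note in passing a point that is implicit in this last step: the open submanifold $M \smallsetminus \mathrm{im}(a_i)$ is again connected, as required of the input data in Definition \ref{d:input-data} (compare Remark \ref{r:connectivity-of-M-breve} below, where the analogous point is checked for $\breve{M}$). This is pure bookkeeping: $\mathrm{im}(a_i)$ is a finite union of compact embedded copies of $P$, and the standing dimension hypothesis $p \leqslant \tfrac12(m-3)$ gives
\[
\mathrm{codim}\bigl(\psi_\alpha(P) \subseteq M\bigr) \;=\; m - p \;\geqslant\; m - \tfrac12(m-3) \;=\; \tfrac12(m+3) \;\geqslant\; 2,
\]
so removing these finitely many submanifolds of codimension at least $2$ from the connected manifold $M$ does not disconnect it.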
\begin{proof}[Proof of Proposition \ref{p:two-conditions}] First assume that we have a path of configurations $\{ [\varphi_1^t] ,\ldots, [\varphi_{n-i-1}^t] \}$ in $M_1 \smallsetminus \mathrm{im}(a_i)$ indexed by $t \in [0,1]$ with $[\varphi_\alpha^0] = [\varphi_\alpha]$ and $[\varphi_\alpha^1] = [\lambda(\iota(-),u_\alpha)]$ for distinct $u_1,\ldots,u_{n-i-1} \in (1,2)$. By definition of $A_i$, there is a path $([\psi_0^t] ,\ldots, [\psi_i^t])$ in $\mathrm{Emb}((i+1)P,M_1)/G^{i+1}$ with $[\psi_\alpha^0] = [\psi_\alpha]$ and $[\psi_\alpha^1] = [\lambda(\iota(-),v_\alpha)]$ for distinct $v_0,\ldots,v_i \in (1,2)$. By compactness, we may choose some $\epsilon > 0$ such that $\psi_\alpha^t(P) \subseteq M_{1+\epsilon}$ for all $\alpha$ and all $t$, in particular, $v_0,\ldots,v_i > 1+\epsilon$. Using this, we construct a path in $\mathrm{Emb}(nP,M_1)/(G \wr \Sigma_n)$ from $\{ [\varphi_1] ,\ldots, [\varphi_{n-i-1}] , [\psi_0] ,\ldots, [\psi_i] \}$ to a standard configuration in three steps. \begin{itemizeb} \item[(i)] The path $\{ [\varphi_1^t] ,\ldots, [\varphi_{n-i-1}^t] , [\psi_0] ,\ldots, [\psi_i] \}$ ends at the configuration \[ \{ [\lambda(\iota(-),u_1)] ,\ldots, [\lambda(\iota(-),u_{n-i-1})] , [\psi_0] ,\ldots, [\psi_i] \} \] for some $u_1,\ldots,u_{n-i-1} \in (1,2)$. \item[(ii)] Next, keep the $[\psi_0] ,\ldots, [\psi_i]$ fixed and move the other part of the configuration by gradually decreasing the values of $u_1,\ldots,u_{n-i-1}$ until they all lie in $(1,1+\epsilon)$. Note that this will not intersect any of $\psi_0(P), \ldots, \psi_i(P)$ since the latter are contained in $M_2$ by our assumption on the point $a_i \in A_i$. \item[(iii)] Now the path $\{ [\lambda(\iota(-),u_1)] ,\ldots, [\lambda(\iota(-),u_{n-i-1})] , [\psi_0^t] ,\ldots, [\psi_i^t] \}$ ends at a standard configuration. \end{itemizeb} Hence we have shown that $\text{(1)} \Rightarrow \text{(2)}$.\footnote{A slight variation of the argument for this implication is as follows. One may show, by similar reasoning, that the subset of configurations satisfying condition (1) is a path-component of $\mathrm{Emb}((n-i-1)P,M_1 \smallsetminus \mathrm{im}(a_i)) / (G \wr \Sigma_{n-i-1})$, and the subset of configurations satisfying condition (2) is a non-empty union of path-components. Thus, once we have proven the opposite implication $\text{(2)} \Rightarrow \text{(1)}$ (which is proven just below), the implication $\text{(1)} \Rightarrow \text{(2)}$ is automatic.} For the proof of the opposite implication, we will use the equivalent characterisation of condition (1) that \emph{there is a path from $\{ [\varphi_1] ,\ldots, [\varphi_{n-i-1}] \}$ to the image of $\mathit{st}_{n-i-1}$} and of condition (2) that \emph{there is a path from $\{ [\varphi_1] ,\ldots, [\varphi_{n-i-1}] , [\psi_0] ,\ldots, [\psi_i] \}$ to the image of $\mathit{st}_n$}. By the assumption that the normal bundle of $\iota$ admits a non-vanishing section, we may apply Proposition \ref{p:iotabb} and extend $\iota$ to an embedding $\ensuremath{\bar{\iota}}b$ of the form \eqref{eq:iotabb}, so the maps $\mathit{st}_n$ are indeed defined. Assume that we are given a path of configurations \[ \gamma \colon [0,1] \longrightarrow \mathrm{Emb}(nP,M_1)/(G \wr \Sigma_n) \] with $\gamma(0) = \{ [\varphi_1] ,\ldots, [\varphi_{n-i-1}] , [\psi_0] ,\ldots, [\psi_i] \}$ and $\gamma(1)$ in the image of $\mathit{st}_n$. 
We will write \[ \gamma(t) = \{ [\varphi_1^t] ,\ldots, [\varphi_{n-i-1}^t] , [\psi_0^t] ,\ldots, [\psi_i^t] \} \] and note that it makes sense to talk about the various components $[\varphi_\alpha^t]$ and $[\psi_\alpha^t]$ individually as well as together, since they may be distinguished using unique path-lifting for the covering space $\mathrm{Emb}(nP,M_1)/G^n \to \mathrm{Emb}(nP,M_1)/(G \wr \Sigma_n)$. Without loss of generality, we may arrange that \begin{itemizeb} \item[(a)] $\psi_\alpha^t(P) \subseteq M_{1.5}$ for all $\alpha$ and $t$. \item[(b)] $\varphi_\alpha^1(P) \subseteq M_1 \smallsetminus M_{1.5} = \lambda(\partial M \times (1,1.5])$ for all $\alpha$. \end{itemizeb} Condition (a) is possible to arrange, using the collar neighbourhood $\lambda$, since we have assumed that $\psi_\alpha^0(P) \subseteq M_2$. For condition (b): once we have arrived in the image of $\mathit{st}_n$, we may choose an appropriate path in the configuration space $C_n((-1,1) \times (1,2))$ whose image under $\mathit{st}_n$ fixes the $[\psi_\alpha^1]$ and moves the $[\varphi_\alpha^1]$ into $M_1 \smallsetminus M_{1.5}$. Define a path $\gamma^\prime \colon [0,1] \to \mathrm{Emb}((i+1)P,M_{1.5})/(G \wr \Sigma_{i+1})$ by $t \mapsto \{ [\psi_0^t], \ldots, [\psi_i^t] \}$ and define \begin{equation}\label{e:isotopy-extension} \mathrm{Diff}_c(M_{1.5}) \longrightarrow \mathrm{Emb}((i+1)P,M_{1.5})/(G \wr \Sigma_{i+1}) \end{equation} by $\Phi \mapsto \{ [\Phi \circ \psi_0], \ldots, [\Phi \circ \psi_i] \}$. By Propositions \ref{p:G-locally-retractile-orbitspace} and \ref{p:G-locally-retractile} this is a fibre bundle, thus a Serre fibration, so we may find a lift $\gamma'' \colon [0,1] \to \mathrm{Diff}_c(M_{1.5})$ of $\gamma'$ such that $\gamma''(0)$ is the identity. We now define a path \[ \gamma''' \colon [0,1] \longrightarrow \mathrm{Emb}((n-i-1)P,M_1 \smallsetminus \mathrm{im}(a_i)) / (G \wr \Sigma_{n-i-1}) \] by $\gamma'''(t) = \{ [\gamma''(t)^{-1} \circ \varphi_1^t] ,\ldots, [\gamma''(t)^{-1} \circ \varphi_{n-i-1}^t] \}$, where we are implicitly extending compactly-supported diffeomorphisms of $M_{1.5}$ to $M_1$ by the identity on $M_1 \smallsetminus M_{1.5}$. This is now a path from $\{ [\varphi_1],\ldots,[\varphi_{n-i-1}] \}$ to the image of $\mathit{st}_{n-i-1}$. Hence we have shown that $\text{(2)} \Rightarrow \text{(1)}$. \end{proof} We finish this subsection by defining a slightly more convenient (and homeomorphic) model for the restriction of $f_0$ to $p_0^{-1}(a_0) \to q_0^{-1}(a_0)$. \begin{defn} Recall from \S\ref{s:firstr} (see the proof of Proposition \ref{p:iotabb}) that we have chosen a tubular neighbourhood for the embedding $\iota \colon P \hookrightarrow \partial M$. In other words, writing $\nu_\iota \colon N \to P$ for the normal bundle of $\iota$ and $o_\iota \colon P \to N$ for its zero section, we have chosen an embedding $\ensuremath{\bar{\iota}} \colon N \hookrightarrow \partial M$ such that $\ensuremath{\bar{\iota}} \circ o_\iota = \iota$. We have also chosen a metric on the bundle $\nu_\iota$. Let $D(\nu_\iota) \colon D(N) \to P$ denote the closed unit disc subbundle of $\nu_\iota$ with respect to this metric, and define \[ T = \ensuremath{\bar{\iota}}(D(N)). \] This is a compact codimension-zero submanifold of $\partial M$ with boundary $\ensuremath{\bar{\iota}}(S(N))$, where $S(\nu_\iota) \colon S(N) \to P$ is the unit sphere subbundle of $\nu_\iota$ with respect to the chosen metric. \end{defn} \begin{defn}\label{d:xbar} Let $\ensuremath{{\,\overline{\! 
X}}}$ be the path-component of $\mathrm{Emb}((n-1)P,M_1 \smallsetminus \lambda(T \times \{2\})) / (G \wr \Sigma_{n-1})$ containing the image of $\mathit{st}_{n-1}$. Let $\ensuremath{{\overline{Y}}}$ be the path-component of $\mathrm{Emb}(nP,M_0 \smallsetminus \lambda(T \times \{2\})) / (G \wr \Sigma_n)$ containing the image of $\mathit{st}_n$. There is a continuous map \[ g \colon \ensuremath{{\,\overline{\! X}}} \longrightarrow \ensuremath{{\overline{Y}}} \] defined by $\{ [\varphi_1] ,\ldots, [\varphi_{n-1}] \} \;\longmapsto\; \{ [\ensuremath{\bar{\iota}}b(-,0,\tfrac12)] , [\varphi_1] ,\ldots, [\varphi_{n-1}] \}$. (See Proposition \ref{p:iotabb} for the construction of the embedding $\ensuremath{\bar{\iota}}b$ extending $\iota$. Recall, in particular, that it satisfies $\ensuremath{\bar{\iota}}b(-,0,0) = \iota(-)$ and more generally $\ensuremath{\bar{\iota}}b(-,0,t) = \lambda(\iota(-),t)$.) \end{defn} Recall that we have fixed a point $a_0 = [\psi_0] \in A_0$ with $\psi_0(P) \subseteq M_2$. Choose a diffeomorphism \[ \Psi \colon M \smallsetminus \psi_0(P) \longrightarrow M \smallsetminus \lambda(T \times \{2\}) \] that restricts to the identity on $M \smallsetminus M_{1.5} = \lambda(\partial M \times [0,1.5])$. This exists because, firstly, $[\psi_0]$ has a path in $E/G$ to $[\ensuremath{\bar{\iota}}b(-,0,2)]$, by definition of $A_0$. Since the map \eqref{e:isotopy-extension} (with $i=0$) is a Serre fibration, we may lift this to a path of diffeomorphisms, evaluate at $1$, extend by the identity on $M \smallsetminus M_{1.5}$ and then restrict to obtain a diffeomorphism \[ M \smallsetminus \psi_0(P) \longrightarrow M \smallsetminus \ensuremath{\bar{\iota}}b(P \times \{0\} \times \{2\}) \] that restricts to the identity on $M \smallsetminus M_{1.5}$. Now, since $\lambda(T \times \{2\})$ is a tubular neighbourhood of $\ensuremath{\bar{\iota}}b(P \times \{0\} \times \{2\}) \subset \lambda(\partial M \times \{2\})$, it is easy to construct a diffeomorphism \[ M \smallsetminus \ensuremath{\bar{\iota}}b(P \times \{0\} \times \{2\}) \longrightarrow M \smallsetminus \lambda(T \times \{2\}) \] that restricts to the identity on $M \smallsetminus M_{1.5}$. (We note that, to construct this, we use the fact (assumed at the beginning of \S\ref{s:detailed-statements}) that the collar neighbourhood $\lambda \colon \partial M \times [0,2] \hookrightarrow M$ extends to a slightly larger collar neighbourhood $\partial M \times [0,2+\epsilon] \hookrightarrow M$.) Composing these two diffeomorphisms gives the desired diffeomorphism $\Psi$. \begin{rmk}\label{r:diffeomorphism-Psi} If we are careful in how we define the second diffeomorphism above, we may ensure the following useful property of $\Psi$. Let $s \colon P \to N$ be any section of the normal bundle $\nu_\iota$. Consider the half-open path $[0,2) \to \mathrm{Emb}(P,M_1 \smallsetminus \lambda(T \times \{2\}))$ given by $t \mapsto \lambda(-,t) \circ \ensuremath{\bar{\iota}} \circ s$. Postcomposing at each time $t$ with the diffeomorphism $\Psi^{-1}$ defines a half-open path $\gamma \colon [0,2) \to \mathrm{Emb}(P,M_1 \smallsetminus \psi_0(P))$. Then this path may be extended continuously to a path $[0,2] \to \mathrm{Emb}(P,M_1)$ by setting $\gamma(2) = \psi_0$. A second useful property of $\Psi$ is that if we consider $\Psi^{-1}$ as an embedding $M \smallsetminus \lambda(T \times \{2\}) \hookrightarrow M$, then it is isotopic to the inclusion through embeddings that restrict to the identity on $M \smallsetminus M_{1.5}$. 
\end{rmk} \begin{lem}\label{l:Psi} Postcomposition with $\Psi^{-1}$ defines homeomorphisms $\ensuremath{{\,\overline{\! X}}} \to p_0^{-1}(a_0)$ and $\ensuremath{{\overline{Y}}} \to q_0^{-1}(a_0)$ such that \begin{equation}\label{e:psi-inverse} \centering \begin{split} \begin{tikzpicture} [x=1mm,y=1mm] \node (tl) at (0,10) {$\ensuremath{{\,\overline{\! X}}}$}; \node (tr) at (40,10) {$\ensuremath{{\overline{Y}}}$}; \node (bl) at (0,0) {$p_0^{-1}(a_0)$}; \node (br) at (40,0) {$q_0^{-1}(a_0)$}; \draw[->] (tl) to node[above,font=\small]{$g$} (tr); \draw[->] (tl) to (bl); \draw[->] (tr) to (br); \draw[->] (bl) to node[below,font=\small]{\textup{restriction of $f_0$}} (br); \end{tikzpicture} \end{split} \end{equation} commutes. \end{lem} \begin{proof} This is immediate from the constructions. One easy but important observation is that if a configuration $\{ [\varphi_1] ,\ldots, [\varphi_{n-1}] \}$ has a path to the image of $\mathit{st}_{n-1}$, then so does the configuration $\{ [\Psi^{-1} \circ \varphi_1] ,\ldots, [\Psi^{-1} \circ \varphi_{n-1}] \}$, since $\Psi^{-1}$ is the identity on $\lambda(\partial M \times [0,1.5])$. \end{proof} \subsection{Resolution by tubes to the boundary.}\label{s:secondr} \begin{defn}[The second resolution]\label{d:secondr} Let $\ensuremath{{\,\overline{\! X}}}_0$ be the subspace of $\ensuremath{{\,\overline{\! X}}} \times \mathrm{Emb}(P \times [0,2],M)$ consisting of all elements $(\{ [\varphi_1] ,\ldots, [\varphi_{n-1}] \} , \tau)$ with the following properties. \begin{itemizeb} \item[(a)] There exist $h \in (\tfrac12,1)$ and $\epsilon \in (0,1)$ so that $\tau(-,t) = \ensuremath{\bar{\iota}}b(-,h,t)$ for all $t \in [0,1+\epsilon] \cup [2-\epsilon,2]$. \item[(b)] The image $\tau(P \times (1,2))$ is contained in $M_1 \smallsetminus ( \varphi_1(P) \cup \varphi_2(P) \cup \cdots \cup \varphi_{n-1}(P) \cup \lambda(T \times \{2\}) )$. \end{itemizeb} There is an obvious map $\ensuremath{{\,\overline{\! X}}}_0 \to \ensuremath{{\,\overline{\! X}}}$ given by forgetting $\tau$. More generally, for $i\geqslant 0$, let $\ensuremath{{\,\overline{\! X}}}_i$ be the subspace of $\ensuremath{{\,\overline{\! X}}} \times \mathrm{Emb}(P \times [0,2],M)^{i+1}$ consisting of all elements $(\{ [\varphi_1] ,\ldots, [\varphi_{n-1}] \},(\tau_0,\ldots,\tau_i))$ with the following properties. \begin{itemizeb} \item[(ab)] Each $\tau_\alpha$ satisfies the properties (a) and (b) above. \item[(c)] For $\alpha \neq \beta$, the images $\tau_\alpha(P \times [0,2])$ and $\tau_\beta(P \times [0,2])$ are disjoint. \item[(d)] Write $h_\alpha \in (\tfrac12,1)$ for the number associated to $\tau_\alpha$ by condition (a). Then $h_0 < h_1 < \cdots < h_i$. \end{itemizeb} There are maps $d_j \colon \ensuremath{{\,\overline{\! X}}}_i \to \ensuremath{{\,\overline{\! X}}}_{i-1}$ defined by forgetting $\tau_j$. These obviously satisfy the simplicial identities, so they give $\ensuremath{{\,\overline{\! X}}}_\bullet = \{ \ensuremath{{\,\overline{\! X}}}_i \}_{i\geqslant 0} \cup \{ \ensuremath{{\,\overline{\! X}}} \}$ the structure of an augmented semi-simplicial space. The augmented semi-simplicial space $\ensuremath{{\overline{Y}}}_\bullet$ is defined similarly --- the space $\ensuremath{{\overline{Y}}}_i$ is the subspace of $\ensuremath{{\overline{Y}}} \times \mathrm{Emb}(P \times [0,2],M)^{i+1}$ consisting of all elements $(\{ [\varphi_1] ,\ldots, [\varphi_{n}] \},(\tau_0,\ldots,\tau_i))$ with the properties (c) and (d) above, as well as the following variants of (a) and (b). \begin{itemizeb} \item[({\=a})] There exist $h \in (\tfrac12,1)$ and $\epsilon \in (0,1)$ so that $\tau(-,t) = \ensuremath{\bar{\iota}}b(-,h,t)$ for all $t \in [0,\epsilon] \cup [2-\epsilon,2]$. \item[({\=b})] The image $\tau(P \times (0,2))$ is contained in $M_0 \smallsetminus ( \varphi_1(P) \cup \varphi_2(P) \cup \cdots \cup \varphi_{n}(P) \cup \lambda(T \times \{2\}) )$. \end{itemizeb} There are again maps forgetting $\tau_j$ that give $\ensuremath{{\overline{Y}}}_\bullet = \{ \ensuremath{{\overline{Y}}}_i \}_{i\geqslant 0} \cup \{ \ensuremath{{\overline{Y}}} \}$ the structure of an augmented semi-simplicial space. There is a map of augmented semi-simplicial spaces \[ g_\bullet \colon \ensuremath{{\,\overline{\! X}}}_\bullet \longrightarrow \ensuremath{{\overline{Y}}}_\bullet \] given by $(\{ [\varphi_1] ,\ldots, [\varphi_{n-1}] \} , (\tau_0,\ldots,\tau_i)) \;\longmapsto\; (\{ [\ensuremath{\bar{\iota}}b(-,0,\tfrac12)] , [\varphi_1] ,\ldots, [\varphi_{n-1}] \} , (\tau_0,\ldots,\tau_i))$ on spaces of $i$-simplices. Clearly $g_{-1} = g$, i.e.\ this extends $g \colon \ensuremath{{\,\overline{\! X}}} \to \ensuremath{{\overline{Y}}}$ to augmented semi-simplicial spaces. \end{defn} \begin{rmk} A vertex of $\ensuremath{{\,\overline{\! X}}}_\bullet$ is intuitively a ``tube'' with cross-section $P$ going from the boundary $\partial M$ to $\lambda(T \times \{2\})$, which is a thickened copy of $P$ in the interior of $M$. This tube must be ``straight'' in a certain sense near each end, and it must be disjoint from the configuration (and the interior of the tube must also be disjoint from $\lambda(T \times \{2\})$). An ordered collection of such tubes forms a simplex if and only if they are pairwise disjoint and the ordering coincides with the intrinsic ordering that they inherit from the boundary condition near $\partial M$. \end{rmk} \begin{rmk} \label{r:iotabb} Since the spaces under consideration from now on are only defined if we assume the existence of (and choose) an embedding $\ensuremath{\bar{\iota}}b$ of the form \eqref{eq:iotabb} extending $\iota$ (see condition (a) of Definition \ref{d:secondr} above and its variants), we will not mention this assumption again for the remainder of this section. Recall that we are, in any case, assuming throughout this section the dimension hypothesis $p \leqslant \tfrac12(m-3)$, which implies the existence of such an embedding $\ensuremath{\bar{\iota}}b$ by Proposition \ref{p:iotabb}. \end{rmk} Our task in this subsection is to show that the induced maps \[ \lVert \ensuremath{{\,\overline{\! X}}}_\bullet \rVert \longrightarrow \ensuremath{{\,\overline{\! X}}} \qquad\text{and}\qquad \lVert \ensuremath{{\overline{Y}}}_\bullet \rVert \longrightarrow \ensuremath{{\overline{Y}}} \] are weak equivalences. We will do this explicitly just for $\lVert \ensuremath{{\,\overline{\! X}}}_\bullet \rVert \to \ensuremath{{\,\overline{\! X}}}$, since the other case is almost identical. We will use the following theorem due to Galatius and Randal-Williams. \begin{thm}[Theorem 6.2 of \cite{GalatiusRandalWilliams2014Stablemodulispaces}]\label{t:grw} If $Z_\bullet \to Z$ is an augmented semi-simplicial space, the following conditions imply that the map $\lVert Z_\bullet \rVert \to Z$ is a weak equivalence. \begin{itemizeb} \item[\textup{(i)}] The map $Z_i \to Z_0 \times_Z \cdots \times_Z Z_0$ taking an $i$-simplex to the ordered set of its $i+1$ vertices is a homeomorphism onto an open subspace.
\item[\textup{(ii)}] Under this identification, an $(i+1)$-tuple of vertices $(v_0,v_1,\ldots,v_i)$ lies in $Z_i$ if and only if $(v_\alpha,v_\beta)$ lies in $Z_1$ for all $\alpha < \beta$. \item[\textup{(iii)}] Denote the augmentation map $Z_0 \to Z$ by $a$. For every point $v \in Z_0$ there is an open neighbourhood $U$ of $a(v) \in Z$ and a section $s \colon U \to Z_0$ of $a$ such that $s(a(v)) = v$. \item[\textup{(iv)}] For any finite set $\{ v_1,\ldots,v_k \}$ in a fibre of $a$, there is another vertex $v$ in the same fibre such that $(v_\alpha,v) \in Z_1$ for all $\alpha \in \{ 1,\ldots,k \}$. \end{itemizeb} \end{thm} \begin{rmk} We note that, in \cite{GalatiusRandalWilliams2014Stablemodulispaces}, condition (iii) is slightly weaker and more complicated to state, and incorporates the $k=0$ part of condition (iv), i.e.\ surjectivity of $a \colon Z_0 \to Z$. \end{rmk} First note that conditions (i)--(iii) are clearly true for $Z_\bullet = \ensuremath{{\,\overline{\! X}}}_\bullet$. For (i) and (ii) this is because we defined an $i$-simplex to be an $(i+1)$-tuple of vertices satisfying conditions (c) and (d), which are open conditions that may be determined by looking at ordered sub-tuples of length $2$. For condition (iii), let $v = (\{ [\varphi_1] ,\ldots, [\varphi_{n-1}] \} , \tau) \in \ensuremath{{\,\overline{\! X}}}_0$. Define $U$ to be the open subspace of $\ensuremath{{\,\overline{\! X}}}$ consisting of all configurations $\{ [\varphi_1^{\prime}] ,\ldots, [\varphi_{n-1}^\prime] \}$ such that $\bigcup_{\alpha = 1}^{n-1} \varphi_{\alpha}^\prime (P)$ is disjoint from $\tau(P \times [0,2])$. This is an open neighbourhood of $a(v) = \{ [\varphi_1] ,\ldots, [\varphi_{n-1}] \}$ and we may define a section of $a \colon \ensuremath{{\,\overline{\! X}}}_0 \to \ensuremath{{\,\overline{\! X}}}$ on $U$ by \[ s \colon U \longrightarrow \ensuremath{{\,\overline{\! X}}}_0 \qquad \{ [\varphi_1^{\prime}] ,\ldots, [\varphi_{n-1}^\prime] \} \;\longmapsto\; (\{ [\varphi_1^{\prime}] ,\ldots, [\varphi_{n-1}^\prime] \} , \tau), \] which sends $a(v)$ to $v$. In order to verify condition (iv) for $Z_\bullet = \ensuremath{{\,\overline{\! X}}}_\bullet$ we will first take a detour to discuss transversality. One corollary of Thom's transversality theorem is the following. \begin{thm}[Corollary II.4.12(b), page 56, \cite{GolubitskyGuillemin1973Stablemappingsand}]\label{t:gg} Let $L$ and $N$ be smooth manifolds without boundary and $f \colon L \to N$ a smooth map. Let $W \subseteq N$ be a smooth submanifold and $A \subseteq B \subseteq L$ open subsets such that $\ensuremath{{\,\overline{\! A}}} \subseteq B$. Let $\mathcal{U}$ be an open neighbourhood of $f \in C^\infty(L,N)$ in the Whitney $C^\infty$-topology. Then there exists $g \in \mathcal{U}$ such that \begin{itemizeb} \item[\textup{(1)}] $g|_A = f|_A$, \item[\textup{(2)}] $g$ is transverse to $W$ on $L \smallsetminus B$. \end{itemizeb} \end{thm} A useful corollary of this is the following. \begin{coro}\label{c:transversality} Let $L$ and $N$ be smooth manifolds without boundary and let $W \subseteq N$ be a smooth submanifold that is closed as a subset such that $\dim(W) + \dim(L) < \dim(N)$. Also, let $f \colon L \to N$ be a smooth map and $B \subseteq L$ an open subset such that $f(\ensuremath{{\,\overline{\! B}}}) \subseteq N \smallsetminus W$. Let $\mathcal{U}$ be an open neighbourhood of $f \in C^\infty(L,N)$ in the Whitney $C^\infty$-topology. Then, for any open subset $A \subseteq B$ with $\ensuremath{{\,\overline{\! 
A}}} \subseteq B$, there exists $g \in \mathcal{U}$ such that \begin{itemizeb} \item[\textup{(1)}] $g|_A = f|_A$, \item[\textup{({\^2})}] $g(L) \subseteq N \smallsetminus W$. \end{itemizeb} \end{coro} This says, roughly, that if we have a smooth map $f$ whose image is disjoint from $W$ on a closed subset $\ensuremath{{\,\overline{\! B}}}$, then we may find a smooth map $g$ that is arbitrarily close to $f$ (in the sense that $g \in \mathcal{U}$), agrees with $f$ on a slightly smaller subset (namely $A$), and whose entire image is disjoint from $W$. \begin{proof} Let $\mathcal{V} = \{ g \in C^\infty(L,N) \mid g(\ensuremath{{\,\overline{\! B}}}) \subseteq N \smallsetminus W \}$. This condition is equivalent to requiring that $\Gamma_g \subseteq (L \times N) \smallsetminus (\ensuremath{{\,\overline{\! B}}} \times W)$, where $\Gamma_g$ is the graph of $g$. Therefore $\mathcal{V}$ is open in the graph topology (which is the Whitney $C^0$-topology) on $C^\infty(L,N)$, and therefore it is also open in the Whitney $C^\infty$-topology on $C^\infty(L,N)$. Applying Theorem \ref{t:gg} to the open neighbourhood $\mathcal{U} \cap \mathcal{V}$ of $f$ we obtain $g \in \mathcal{U}$ such that $g|_A = f|_A$, $g(L \smallsetminus B) \subseteq N \smallsetminus W$ and $g(\ensuremath{{\,\overline{\! B}}}) \subseteq N \smallsetminus W$. The last two properties combined imply that $g(L) \subseteq N \smallsetminus W$. \end{proof} We will use this to prove: \begin{prop}\label{p:condition-iv} If $\mathrm{dim}(P) = p \leqslant \tfrac12(m-3) = \tfrac12(\mathrm{dim}(M) - 3)$, the augmented semi-simplicial space $Z_\bullet = \ensuremath{{\,\overline{\! X}}}_\bullet$ satisfies condition \textup{(iv)} of Theorem \ref{t:grw}. \end{prop} \begin{coro}\label{c:condition-iv} If $p \leqslant \tfrac12(m-3)$, the maps $\lVert \ensuremath{{\,\overline{\! X}}}_\bullet \rVert \to \ensuremath{{\,\overline{\! X}}}$ and $\lVert \ensuremath{{\overline{Y}}}_\bullet \rVert \to \ensuremath{{\overline{Y}}}$ are weak equivalences. \end{coro} \begin{proof} By the discussion above, $Z_\bullet = \ensuremath{{\,\overline{\! X}}}_\bullet$ satisfies conditions (i)--(iii) and by Proposition \ref{p:condition-iv} it also satisfies condition (iv). Theorem \ref{t:grw} therefore implies that $\lVert \ensuremath{{\,\overline{\! X}}}_\bullet \rVert \to \ensuremath{{\,\overline{\! X}}}$ is a weak equivalence. The argument for $\lVert \ensuremath{{\overline{Y}}}_\bullet \rVert \to \ensuremath{{\overline{Y}}}$ is almost identical, replacing $n$ by $n+1$ and $M_1$ by $M_0$ everywhere. \end{proof} \begin{proof}[Proof of Proposition \ref{p:condition-iv}] Fix a point $\varphi = \{ [\varphi_1] ,\ldots, [\varphi_{n-1}] \} \in \ensuremath{{\,\overline{\! X}}}$ and a collection of embeddings $\tau_1,\ldots,\tau_k \colon P \times [0,2] \hookrightarrow M$ such that $(\varphi,\tau_\alpha) \in \ensuremath{{\,\overline{\! X}}}_0$ for each $\alpha$. Let $h_\alpha \in (\tfrac12,1)$ be the number associated to $\tau_\alpha$ by condition (a) of Definition \ref{d:secondr}. We need to construct a new embedding \begin{equation}\label{e:tau} \tau \colon P \times [0,2] \lhook\joinrel\longrightarrow M \end{equation} such that $(\varphi,\tau) \in \ensuremath{{\,\overline{\! X}}}_0$ and $(\varphi,(\tau_\alpha,\tau)) \in \ensuremath{{\,\overline{\! X}}}_1$ for all $\alpha \in \{ 1,\ldots,k \}$. 
As a first step, choose $h \in (\tfrac12,1)$ such that $h > \max_{\alpha = 1}^k h_\alpha$ and define an embedding $\sigma \colon P \times [0,2] \hookrightarrow M$ by $\sigma = \ensuremath{\bar{\iota}}b(-,h,-)$. It now suffices to find an embedding \[ \sigma' \colon P \times (1,2) \lhook\joinrel\longrightarrow M_1 \smallsetminus \lambda(T \times \{2\}) \] such that \begin{itemizeb} \item $\sigma = \sigma'$ on $P \times ((1,1+\epsilon) \cup (2-\epsilon,2))$ for some $\epsilon > 0$, \item the image of $\sigma'$ is disjoint from $\varphi_1(P) \cup \cdots \cup \varphi_{n-1}(P)$ and $\tau_1(P \times (1,2)) \cup \cdots \cup \tau_k(P \times (1,2))$. \end{itemizeb} This is because we could then define \eqref{e:tau} to agree with $\sigma$ on $P \times ([0,1] \cup \{2\})$ and to agree with $\sigma'$ on $P \times (1,2)$, and it would satisfy all of the required conditions. Let $\sigma_0 = \sigma|_{P \times (1,2)}$. We will construct $\sigma'$ from $\sigma_0$ by using Corollary \ref{c:transversality} to modify it to be disjoint from the manifolds $\tau_\alpha(P \times (1,2))$ and $\varphi_\alpha(P)$ one at a time. Set $L = P \times (1,2)$, $N = M_1 \smallsetminus \lambda(T \times \{2\})$, $f=\sigma_0$ and $W = \tau_1(P \times (1,2))$. By a compactness argument, and using the fact that $h \neq h_1$ and property (a) of $\tau_1$, we may find $\delta > 0$ such that \[ \sigma_0(P \times ((1,1+2\delta] \cup [2-2\delta,2))) \subseteq N \smallsetminus W. \] We may therefore set $B = P \times ((1,1+2\delta) \cup (2-2\delta,2))$. Since being an embedding is an open condition in the Whitney $C^\infty$-topology, we may take $\mathcal{U}$ to be an open neighbourhood of $f=\sigma_0$ in $C^\infty(L,N)$ consisting of embeddings. Corollary \ref{c:transversality} therefore gives us an embedding \[ \sigma_1 \colon P \times (1,2) \lhook\joinrel\longrightarrow M_1 \smallsetminus (\lambda(T \times \{2\}) \cup \tau_1(P \times (1,2))) \] such that $\sigma_1 = \sigma_0$ on $P \times ((1,1+\delta) \cup (2-\delta,2))$. Note that here we crucially used the fact that $\dim(P) \leqslant \frac12(\dim(M)-3)$ in order to satisfy the dimension condition of Corollary \ref{c:transversality}. Iterating this, we next set $L = P \times (1,2)$, $N = M_1 \smallsetminus (\lambda(T \times \{2\}) \cup \tau_1(P \times (1,2)))$, $f = \sigma_1$ and $W = \tau_2(P \times (1,2))$ and apply Corollary \ref{c:transversality} to obtain an embedding \[ \sigma_2 \colon P \times (1,2) \lhook\joinrel\longrightarrow M_1 \smallsetminus (\lambda(T \times \{2\}) \cup \tau_1(P \times (1,2)) \cup \tau_2(P \times (1,2))) \] such that $\sigma_2 = \sigma_1$ on $P \times ((1,1+\delta') \cup (2-\delta',2))$ for some $\delta' > 0$. After a finite number of further applications of Corollary \ref{c:transversality} we obtain an embedding $\sigma'$ with the required properties. \end{proof} \subsection{The second approximation.}\label{s:seconda} \begin{defn} For $j\geqslant 0$ define $\ensuremath{{\,\overline{\! A}}}_j$ to be the subspace of $\mathrm{Emb}(P \times [0,2],M)^{j+1}$ consisting of tuples of embeddings $(\tau_0,\ldots,\tau_j)$ such that each $\tau_\alpha$ satisfies condition (a) of Definition \ref{d:secondr}, the tuple satisfies conditions (c) and (d) of Definition \ref{d:secondr} and each $\tau_\alpha$ also satisfies: \begin{itemizeb} \item[({\d b})] $\tau_\alpha(P \times (1,2)) \subseteq M_1 \smallsetminus \lambda(T \times \{2\})$. \end{itemizeb} Similarly, define $\ensuremath{{\,\overline{\!
B}}}_j$ to be the subspace of $\mathrm{Emb}(P \times [0,2],M)^{j+1}$ consisting of $(\tau_0,\ldots,\tau_j)$ satisfying conditions ({\=a}), (c) and (d) of Definition \ref{d:secondr}, as well as \begin{itemizeb} \item[({\d{\={b}}})] $\tau_\alpha(P \times (0,2)) \subseteq M_0 \smallsetminus \lambda(T \times \{2\})$. \end{itemizeb} \end{defn} \begin{lem} The inclusion $\ensuremath{{\,\overline{\! A}}}_j \hookrightarrow \ensuremath{{\,\overline{\! B}}}_j$ is a homotopy equivalence. \end{lem} \begin{proof} We will define a deformation retraction for the inclusion, i.e.\ a map \[ H \colon \ensuremath{{\,\overline{\! B}}}_j \times [0,1] \longrightarrow \ensuremath{{\,\overline{\! B}}}_j \] such that $H(-,0) = \mathrm{id}$ and $H((\ensuremath{{\,\overline{\! A}}}_j \times [0,1]) \cup (\ensuremath{{\,\overline{\! B}}}_j \times \{1\})) \subseteq \ensuremath{{\,\overline{\! A}}}_j$. For this, we choose a smooth map \[ \Upsilon \colon S \longrightarrow [0,2], \] where $S = \{ (s,t) \in [0,1] \times [0,2] \mid s \leqslant t \}$, such that \begin{itemizeb} \item[(i)] $\Upsilon(0,t) = t$, \item[(ii)] $\Upsilon(s,t) = t - s$ for $t \leqslant s+\tfrac14$, \item[(iii)] $\Upsilon(s,t) = t$ for $t \geqslant \tfrac74$, \item[(iv)] for each fixed $s \in [0,1]$ the map $\Upsilon(s,-)$ is a diffeomorphism $[s,2] \cong [0,2]$ with $\Upsilon(s,1) \leqslant 1$. \end{itemizeb} This is not hard (although a little fiddly) to construct. Note that the function $\Upsilon(s,t) = 2(\tfrac{t-s}{2-s})$ satisfies conditions (i) and (iv), but not (ii) or (iii). If we were working only with $C^0$ embeddings, then we would not need (ii) or (iii) and this version of $\Upsilon$ would work, but in order to ensure that the two parts of the definition below glue together to form a $C^\infty$ embedding we also need conditions (ii) and (iii). One valid choice is $\Upsilon(s,t) = t - s\chi(t)$ for a smooth, non-increasing function $\chi \colon [0,2] \to [0,1]$ with $\chi \equiv 1$ on $[0,\tfrac54]$ and $\chi \equiv 0$ on $[\tfrac74,2]$: conditions (i)--(iii) are immediate, and (iv) holds because $\partial_t \Upsilon(s,t) = 1 - s\chi'(t) \geqslant 1$, $\Upsilon(s,s) = 0$, $\Upsilon(s,2) = 2$ and $\Upsilon(s,1) = 1 - s \leqslant 1$. We may now use this to ``conjugate'' each $\tau_\alpha$ by reparametrising its domain by $\Upsilon(s,-)$ and its codomain by $\Upsilon(s,-)^{-1}$. More precisely, we define the deformation retraction by \[ H((\tau_0,\ldots,\tau_j),s) = (\tau_0^s,\ldots,\tau_j^s), \] where $\tau_\alpha^s \colon P \times [0,2] \hookrightarrow M$ is defined by \[ \tau_\alpha^s(z,t) = \begin{cases} \ensuremath{\bar{\iota}}b(z,h_\alpha,t) & \text{for } 0 \leqslant t \leqslant s \\ \bar{\Upsilon}_s \circ \tau_\alpha(z,\Upsilon(s,t)) & \text{for } s \leqslant t \leqslant 2, \end{cases} \] where $\bar{\Upsilon}_s \colon M \hookrightarrow M$ is the self-embedding defined by $\lambda \circ (\mathrm{id} \times \Upsilon(s,-)^{-1}) \circ \lambda^{-1}$ on the collar neighbourhood and by the identity elsewhere. The choice of smoothly varying reparametrisation $\Upsilon(s,-)$ ensures that the two pieces of this definition glue together to form a smooth embedding, and one may easily check that this $H$ is indeed a deformation retraction. \end{proof} Now define a map $\bar{p}_j \colon \ensuremath{{\,\overline{\! X}}}_j \to \ensuremath{{\,\overline{\! A}}}_j$ taking $(\{ [\varphi_1] ,\ldots, [\varphi_{n-1}] \},(\tau_0,\ldots,\tau_j))$ to the tuple $(\tau_0,\ldots,\tau_j)$, and similarly $\bar{q}_j \colon \ensuremath{{\overline{Y}}}_j \to \ensuremath{{\,\overline{\! B}}}_j$ taking $(\{ [\varphi_1] ,\ldots, [\varphi_{n}] \},(\tau_0,\ldots,\tau_j))$ to $(\tau_0,\ldots,\tau_j)$. These forgetful maps fit into a commutative square: \begin{equation} \label{eq:second-approximation} \centering \begin{split} \begin{tikzpicture} [x=1mm,y=1mm] \node (tl) at (0,12) {$\ensuremath{{\,\overline{\!
X}}}_j$}; \node (tr) at (20,12) {$\ensuremath{{\overline{Y}}}_j$}; \node (bl) at (0,0) {$\ensuremath{{\,\overline{\! A}}}_j$}; \node (br) at (20,0) {$\ensuremath{{\,\overline{\! B}}}_j$}; \draw[->] (tl) to node[above,font=\small]{$g_j$} (tr); \draw[->] (tl) to node[left,font=\small]{$\bar{p}_j$} (bl); \draw[->] (tr) to node[right,font=\small]{$\bar{q}_j$} (br); \incl{(bl)}{(br)} \end{tikzpicture} \end{split} \end{equation} \begin{prop}\label{p:Serre-fibrations-2} The maps $\bar{p}_j \colon \ensuremath{{\,\overline{\! X}}}_j \to \ensuremath{{\,\overline{\! A}}}_j$ and $\bar{q}_j \colon \ensuremath{{\overline{Y}}}_j \to \ensuremath{{\,\overline{\! B}}}_j$ are Serre fibrations. \end{prop} \begin{proof} We start by constructing a manifold with boundary $M_{\mathrm{cut}}$ whose interior is $M_1 \smallsetminus \lambda(T \times \{2\})$. Recall that $T \subseteq \partial M$ is a codimension-zero submanifold with boundary, and write $\mathring{T}$ for its interior. We will attach two pieces of boundary to $M_1 \smallsetminus \lambda(T \times \{2\})$. First, we (re-)attach $\lambda(\partial M \times \{1\})$ as one piece of boundary.\footnote{Note that we do not say one ``boundary-component'', because we are not assuming that $\partial M$ is connected.} Second, we attach $\lambda(\mathring{T} \times \{2\})$, \emph{but only along one side}, as the second piece of boundary. More precisely, we define \begin{equation}\label{e:cut-manifold} M_{\mathrm{cut}} \; = \; M \smallsetminus \lambda((\partial M \times [0,1)) \cup (T \times \{2\})) \; \underset{\lambda(\mathring{T} \times (1,2))}{\cup} \; \lambda(\mathring{T} \times (1,2]). \end{equation} The name $M_{\mathrm{cut}}$ comes from thinking of the operation of removing $\lambda(T \times \{2\})$ and re-attaching $\lambda(\mathring{T} \times \{2\})$ along one side only as making a ``cut'' in the interior of $M$ in order to get a new piece of boundary. One may see directly from the definition \eqref{e:cut-manifold} that $\mathrm{int}(M_{\mathrm{cut}}) = M_1 \smallsetminus \lambda(T \times \{2\})$ and $\partial M_{\mathrm{cut}} = \lambda(\partial M \times \{1\}) \sqcup \lambda(\mathring{T} \times \{2\})$ as described more heuristically above. Now let $L = P \times [1,2] \times \{0,\ldots,j\}$ and decompose its boundary as \[ \partial_1 L = P \times \{1\} \times \{0,\ldots,j\} \qquad \partial_2 L = P \times \{2\} \times \{0,\ldots,j\} . \] Similarly, decompose the boundary of $M_{\mathrm{cut}}$ as \[ \partial_1 M_{\mathrm{cut}} = \lambda(\partial M \times \{1\}) \qquad \partial_2 M_{\mathrm{cut}} = \lambda(\mathring{T} \times \{2\}). \] Define $\hat{A}_j = \mathrm{Emb}_{12}(L,M_{\mathrm{cut}})$, where the embedding space $\mathrm{Emb}_{12}(-,-)$ is defined as in \S\ref{s:fibre-bundles} just before Proposition \ref{p:G-locally-retractile-boundary}. Equivalently an element of $\hat{A}_j$ consists of a tuple $(\tau_0,\ldots,\tau_j)$ of embeddings $\tau_\alpha \colon P \times [1,2] \hookrightarrow M$ with pairwise-disjoint images, such that \begin{itemizeb} \item $\tau_\alpha(P \times \{1\}) \subseteq \lambda(\partial M \times \{1\})$, \item $\tau_\alpha(P \times \{2\}) \subseteq \lambda(\mathring{T} \times \{2\})$, \item $\tau_\alpha(P \times (1,2)) \subseteq M_1 \smallsetminus \lambda(T \times [2,2+\epsilon))$ for some $\epsilon > 0$. \end{itemizeb} Now define $\hat{X}_j$ to be the subspace of $\ensuremath{{\,\overline{\! 
X}}} \times \hat{A}_j$ consisting of all $(\{ [\varphi_1] ,\ldots, [\varphi_{n-1}] \} , (\tau_0,\ldots,\tau_j))$ such that \[ \bigcup_{\alpha = 1}^{n-1} \varphi_\alpha(P) \qquad\text{is disjoint from}\qquad \bigcup_{\alpha = 0}^{j} \tau_\alpha(P \times [1,2]). \] (Recall that the space $\ensuremath{{\,\overline{\! X}}}$ was defined in Definition \ref{d:xbar}.) There is a well-defined continuous action of $\mathrm{Diff}_c(M_{\mathrm{cut}})_0$ on both $\hat{X}_j$ and $\hat{A}_j$ given by post-composition, and the projection onto the second factor $\hat{p}_j \colon \hat{X}_j \to \hat{A}_j$ is equivariant with respect to these actions. (Note that it is important for the well-definedness of the actions that we are considering the path-component $\mathrm{Diff}_c(M_{\mathrm{cut}})_0$ of the identity in the group $\mathrm{Diff}_c(M_{\mathrm{cut}})$, so in particular we are considering actions of a path-connected group.) Propositions \ref{p:G-locally-retractile-boundary} and \ref{p:G-locally-retractile} now imply that $\hat{p}_j$ is a fibre bundle, in particular a Serre fibration. Now we may define a topological embedding $\ensuremath{{\,\overline{\! A}}}_j \hookrightarrow \hat{A}_j$ by $(\tau_0,\ldots,\tau_j) \mapsto (\tau_0|_{P \times [1,2]},\ldots,\tau_j|_{P \times [1,2]})$. By construction, the pullback of $\hat{p}_j$ along this embedding is exactly the map $\bar{p}_j \colon \ensuremath{{\,\overline{\! X}}}_j \to \ensuremath{{\,\overline{\! A}}}_j$. Hence $\bar{p}_j$ is a Serre fibration, as required. The argument for $\bar{q}_j$ is essentially identical, so it is omitted. \end{proof} Now fix $\bar{a} = (\tau_0,\ldots,\tau_j) \in \ensuremath{{\,\overline{\! A}}}_j$. We will show that the restriction of $g_j$ to the fibres $\bar{p}_j^{-1}(\bar{a}) \to \bar{q}_j^{-1}(\bar{a})$ lies in $\ensuremath{\mathscr{X}}(n-1)$. Define \[ \breve{M} = M \smallsetminus (\tau_0(P \times [0,2]) \cup \cdots \cup \tau_j(P \times [0,2]) \cup \lambda(T \times \{2\})) \] and note that $\partial \breve{M} = \partial M \smallsetminus \ensuremath{\bar{\iota}}b(P \times \{h_0,\ldots,h_j\} \times \{0\})$. Choose $\epsilon > 0$ such that: \begin{itemizeb} \item $\tau_\alpha(z,t) = \ensuremath{\bar{\iota}}b(z,h_\alpha,t)$ for all $t \in [0,1+\epsilon]$, $z \in P$ and $\alpha \in \{ 0,\ldots,j \}$, \item each $\tau_\alpha(P \times (1+\epsilon,2])$ is contained in $M_{1+\epsilon}$. \end{itemizeb} Choose a diffeomorphism $\theta \colon [0,2] \to [0,1+\epsilon]$ such that $\theta(t) = t$ for $t \in [0,1]$ and define \[ \mu \colon \partial\breve{M} \times [0,2] \lhook\joinrel\longrightarrow \breve{M} \] by $\mu(z,t) = \lambda(z,\theta(t))$. This is a collar neighbourhood for the boundary of $\breve{M}$. Write $X_{n-1}(\breve{M},\mu)$ for the space $X$ from Definition \ref{d:input-data} with $P,\iota,G$ the same as before but with $n,M,\lambda$ replaced by $n-1,\breve{M},\mu$ respectively, and similarly for $Y_{n-1}(\breve{M},\mu)$ and the map $f_{n-1}(\breve{M},\mu)$. \begin{rmk}\label{r:connectivity-of-M-breve} In a moment we will show that the restriction of $g_j$ to the fibres $\bar{p}_j^{-1}(\bar{a}) \to \bar{q}_j^{-1}(\bar{a})$ may be identified (up to homeomorphism) with $f_{n-1}(\breve{M},\mu)$, and from this we would like to deduce that it lies in $\ensuremath{\mathscr{X}}(n-1)$. We therefore have to check that $\breve{M}$ and $\mu$ (as well as $P,\iota,G$) are valid input data for Definition \ref{d:input-data}. The only non-trivial issue with this is to see that $\breve{M}$ is connected. 
However, this follows from the fact that $M$ is connected, together with the dimension hypothesis $m \geqslant 2p + 3$; in fact, we only need the weaker assumption that $m \geqslant p + 3$ here. To see this, note that to obtain $\breve{M}$ from $M$ we first removed $\lambda(T \times \{2\})$ and then each $\tau_\alpha(P \times [0,2])$. The latter all have codimension $m - p - 1 \geqslant 2$ and the former deformation retracts onto its core $\lambda(\iota(P) \times \{2\})$ (recall that $T \subseteq \partial M$ is a tubular neighbourhood for $\iota(P) \subseteq \partial M$), which has codimension $m - p \geqslant 3$, so neither of these operations changes $\pi_0$. Therefore $\breve{M}$ is connected since $M$ is connected. \end{rmk} A little thought about the definitions yields the following descriptions of $\bar{p}_j^{-1}(\bar{a})$ and $X_{n-1}(\breve{M},\mu)$. First we fix some temporary notation. Write \[ U \; = \; \mathrm{Emb}(P \times \{1,\ldots,n-1\} , M_1 \smallsetminus \lambda(T \times \{2\})) \; / \; (G \wr \Sigma_{n-1}) \] and set $W = \tau_0(P \times (1,2)) \cup \cdots \cup \tau_j(P \times (1,2))$, which is a properly embedded submanifold of $M_1 \smallsetminus \lambda(T \times \{2\})$. Choose a tuple $(s_1,\ldots,s_{n-1})$ of distinct points in the interval $(1,1+\epsilon)$ and let $\varphi_{\mathrm{st}}$ be the embedding \[ (z,\alpha) \; \longmapsto \; \ensuremath{\bar{\iota}}b(z,0,s_\alpha) \colon P \times \{1,\ldots,n-1\} \lhook\joinrel\longrightarrow M_1 \smallsetminus \lambda(T \times \{2\}). \] Note that the image of $\varphi_{\mathrm{st}}$ is disjoint from $W$ due to how we chose $\epsilon$ above. With this notation, we have \[ X_{n-1}(\breve{M},\mu) \; \subseteq \; \bar{p}_j^{-1}(\bar{a}) \; \subseteq \; U, \] and, given an element $[\varphi] \in U$, \begin{itemizeb} \item $[\varphi] \in \bar{p}_j^{-1}(\bar{a})$ if and only if $\mathrm{im}(\varphi)$ is disjoint from $W$ and there is a path in $U$ from $[\varphi]$ to $[\varphi_{\mathrm{st}}]$, \item $[\varphi] \in X_{n-1}(\breve{M},\mu)$ if and only if $\mathrm{im}(\varphi)$ is disjoint from $W$ and there is a path $t \mapsto [\varphi^t]$ in $U$ from $[\varphi]$ to $[\varphi_{\mathrm{st}}]$ such that $\mathrm{im}(\varphi^t)$ is disjoint from $W$ for all $t \in [0,1]$. \end{itemizeb} Replacing $M_1$ with $M_0$ and $n-1$ with $n$, we obtain a similar description of $Y_{n-1}(\breve{M},\mu) \subseteq \bar{q}_j^{-1}(\bar{a})$, and a commutative square \begin{equation} \label{eq:restriction-of-gj} \begin{split} \begin{tikzpicture} [x=1mm,y=1mm] \node (tl) at (0,10) {$\bar{p}_j^{-1}(\bar{a})$}; \node (tr) at (50,10) {$\bar{q}_j^{-1}(\bar{a})$}; \node (bl) at (0,0) {$X_{n-1}(\breve{M},\mu)$}; \node (br) at (50,0) {$Y_{n-1}(\breve{M},\mu)$}; \node at (0,5) {\rotatebox{90}{$\subseteq$}}; \node at (50,5) {\rotatebox{90}{$\subseteq$}}; \draw[->] (tl) to node[above,font=\small]{restriction of $g_j$} (tr); \draw[->] (bl) to node[below,font=\small]{$f_{n-1}(\breve{M},\mu)$} (br); \end{tikzpicture} \end{split} \end{equation} (for commutativity we use the fact that the collar neighbourhoods $\lambda(z,t)$ and $\mu(z,t)$ of $M$ and $\breve{M}$ agree for $t \leqslant 1$ due to how we chose $\theta$ above; in particular they agree on $\iota(P) \times \{\tfrac12\}$). It remains to show that in fact $X_{n-1}(\breve{M},\mu) = \bar{p}_j^{-1}(\bar{a})$ and $Y_{n-1}(\breve{M},\mu) = \bar{q}_j^{-1}(\bar{a})$. 
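Before doing so, we record the relevant dimension count; this is pure bookkeeping, but it is what allows the general lemma below to be applied (\textit{cf}.\ the proof of Corollary \ref{c:Xn1}). The configurations in question are embeddings of finite disjoint unions of copies of $P$, which have dimension $p$, into an open subset of the $m$-dimensional manifold $M$, and $W$ is a finite union of submanifolds of dimension $p+1$, so the standing hypothesis $p \leqslant \tfrac12(m-3)$ gives
\[
m \;\geqslant\; 2p + 3
\qquad\text{and}\qquad
p + 1 \;\leqslant\; m - p - 2,
\]
which are exactly the two dimension assumptions appearing in the lemma.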
This will follow immediately from the descriptions above together with the following general lemma, in which we take $L = P \times \{1,\ldots,n-1\}$ and $N = M_1 \smallsetminus \lambda(T \times \{2\})$. \begin{lem}\label{l:path-of-embeddings} Let $L$ and $N$ be manifolds without boundary such that $\mathrm{dim}(N) \geqslant 2\,\mathrm{dim}(L) + 3$ and assume that $L$ is compact. Let $W \subseteq N$ be a closed subset that is a finite union of properly embedded submanifolds, each of dimension at most $\mathrm{dim}(N) - \mathrm{dim}(L) - 2$. Let $G$ be an open subgroup of $\mathrm{Diff}(L)$. Suppose we have a path \[ \gamma \colon [0,1] \longrightarrow \mathrm{Emb}(L,N)/G \] such that $\gamma(0)(L)$ and $\gamma(1)(L)$ are disjoint from $W$. Then there exists another path \[ \gamma' \colon [0,1] \longrightarrow \mathrm{Emb}(L,N)/G \] with the same endpoints as $\gamma$, such that $\gamma'(t)(L)$ is disjoint from $W$ for all $t \in [0,1]$, in other words $\gamma'$ has image contained in $\mathrm{Emb}(L,N \smallsetminus W)/G$. \end{lem} \begin{coro} \label{c:Xn1} Assume the dimension hypothesis that $\mathrm{dim}(P) = p \leqslant \tfrac12(m-3) = \tfrac12(\mathrm{dim}(M) - 3)$. Then, in diagram \eqref{eq:second-approximation}, the restriction of $g_j$ to the fibres over any point $\bar{a} \in \ensuremath{{\,\overline{\! A}}}_j$ lies in $\ensuremath{\mathscr{X}}(n-1)$. \end{coro} \begin{proof} By the discussion preceding Lemma \ref{l:path-of-embeddings}, we have the commutative square \eqref{eq:restriction-of-gj}. Lemma \ref{l:path-of-embeddings} then tells us that the vertical inclusions in this diagram are equalities, so the restriction of $g_j$ to the fibres over $\bar{a}$ may be identified, up to homeomorphism, with the map $f_{n-1}(\breve{M},\mu)$. Here we have used the dimension hypothesis to ensure that Lemma \ref{l:path-of-embeddings} applies to $L = P \times \{1,\ldots,n-1\}$, $N = M_1 \smallsetminus \lambda(T \times \{2\})$ and $W = \tau_0(P \times (1,2)) \cup \cdots \cup \tau_j(P \times (1,2))$. This map lies in $\ensuremath{\mathscr{X}}(n-1)$, by definition (see the first paragraph of \S\ref{s:proof}), using Remark \ref{r:connectivity-of-M-breve} to verify that $\breve{M}$ is connected. \end{proof} We will use the following theorem of Whitney in the proof of Lemma \ref{l:path-of-embeddings}. \begin{thm}[{\cite[Theorem 5 in \S 11]{Whitney1936Differentiablemanifolds}}]\label{t:Whitney} Let $L$ and $N$ be smooth manifolds without boundary such that $\mathrm{dim}(N) \geqslant 2\,\mathrm{dim}(L) + 1$ and let $A \subseteq L$ be a closed subset. Suppose we are given a continuous map $\phi \colon L \to N$ such that its restriction to $A$ is a smooth injective immersion. Then there exists a smooth injective immersion $\psi \colon L \to N$ such that $\psi|_A = \phi|_A$. \end{thm} We will also use the following immediate corollary of Thom's transversality theorem. \begin{prop}\label{p:Thom-paths} Let $L$ and $N$ be smooth manifolds without boundary, $L$ assumed to be compact, and $X \subseteq N$ a countable union of submanifolds of $N$. Then given any embedding $\phi \in \mathrm{Emb}(L,N)$ there is a path $\gamma \colon [0,1] \to \mathrm{Emb}(L,N)$ with $\gamma(0) = \phi$ and $\gamma(1)$ transverse to $X$. 
\end{prop} \begin{proof} By the transversality theorem of Thom (\textit{cf}.\ \cite[Theorem II.4.9, page 54]{GolubitskyGuillemin1973Stablemappingsand}), the subset \[ \{ \phi' \in \mathrm{Emb}(L,N) \mid \phi' \text{ is transverse to } X \} \; \subseteq \; \mathrm{Emb}(L,N) \] is dense in the strong $C^\infty$ topology. Since $\mathrm{Emb}(L,N)$ is locally path-connected (indeed, locally contractible since $L$ is compact, \textit{cf}.\ Fact \ref{fact:locally-contractible}) we may choose a path-connected open neighbourhood $\mathcal{U} \subseteq \mathrm{Emb}(L,N)$ of $\phi$. By density, we may find $\phi' \in \mathcal{U}$ that is transverse to $X$, and then by path-connectedness of $\mathcal{U}$ we may find a path from $\phi$ to $\phi'$. \end{proof} \begin{proof}[Proof of Lemma \ref{l:path-of-embeddings}] We will prove this in $4 = \{ 0,1,2,3 \}$ steps. \textbf{Step 0.} First, we show that we may assume without loss of generality that $\gamma(0)(L)$ and $\gamma(1)(L)$ are disjoint subsets of $N$. So we assume temporarily that the lemma is true under this assumption, and let $\gamma \colon [0,1] \to \mathrm{Emb}(L,N)/G$ be a path such that $\gamma(0)(L)$ and $\gamma(1)(L)$ are disjoint from $W$ (but not necessarily from each other). The projection map $\mathrm{Emb}(L,N) \to \mathrm{Emb}(L,N)/G$ is a surjective Serre fibration (by Corollary \ref{c:principal-H-bundle}) so we may lift $\gamma$ to $\bar{\gamma} \colon [0,1] \to \mathrm{Emb}(L,N)$. Now $\bar{\gamma}(0)$ is an embedding $L \hookrightarrow N \smallsetminus W$, and by Proposition \ref{p:Thom-paths} (replacing $N$ by $N \smallsetminus W$ and setting $X = \bar{\gamma}(1)(L)$), there is a path $\delta \colon [-1,0] \to \mathrm{Emb}(L,N \smallsetminus W)$ such that $\delta(0) = \bar{\gamma}(0)$ and $\delta(-1)$ is transverse to $\bar{\gamma}(1)(L)$, which implies (since $2\,\mathrm{dim}(L) < \mathrm{dim}(N)$) that $\delta(-1)(L)$ is disjoint from $\bar{\gamma}(1)(L)$. Write $[\delta]$ for the composition of $\delta$ and the projection to $\mathrm{Emb}(L,N)/G$. Concatenating $[\delta]$ and $\gamma$ we obtain a path in $\mathrm{Emb}(L,N)/G$ whose endpoints $[\delta(-1)]$ and $\gamma(1)$ are (orbits of) embeddings whose images are disjoint from $W$ and from each other. Thus, by our temporary assumption, there exists a path in $\mathrm{Emb}(L,N \smallsetminus W)/G$ between $[\delta(-1)]$ and $\gamma(1)$. Concatenating this path with the reverse of $[\delta]$ we obtain a path in $\mathrm{Emb}(L,N \smallsetminus W)/G$ between $[\delta(0)] = \gamma(0)$ and $\gamma(1)$, as desired. We may from now on assume that $\gamma(0)(L)$ and $\gamma(1)(L)$ are disjoint. \textbf{Step 1.} (Extending $\gamma$ via tubular neighbourhoods.) As in step 0, since the projection $\mathrm{Emb}(L,N) \to \mathrm{Emb}(L,N)/G$ is a surjective Serre fibration (by Corollary \ref{c:principal-H-bundle}), we may lift $\gamma$ to $\bar{\gamma} \colon [0,1] \to \mathrm{Emb}(L,N)$ such that $\bar{\gamma}(0)(L)$ and $\bar{\gamma}(1)(L)$ are disjoint from $W$ and each other. Denote the normal bundle of $\bar{\gamma}(0) \colon L \hookrightarrow N$ by $V_0 = \nu(\bar{\gamma}(0)) \to L$ and its zero section by $z_0 \colon L \to V_0$. Choose a tubular neighbourhood, i.e.\ an embedding $t_0 \colon V_0 \hookrightarrow N$ such that $t_0 \circ z_0 = \bar{\gamma}(0)$. \begin{sublem} The normal bundle $V_0 \to L$ has a trivial one-dimensional subbundle. 
\end{sublem} \begin{proof}[Proof of the sublemma] \let\qedsymboloriginal\qedsymbol \renewcommand{\qedsymbol}{{\small (sublemma)} \qedsymboloriginal} It suffices to show that the unit sphere subbundle $S(V_0) \to L$ has a section. The fibres of this bundle are $k$-spheres, where $k = \mathrm{dim}(N) - \mathrm{dim}(L) - 1$, and the obstruction classes to the existence of such a section live in the cohomology groups $H^i(L;\pi_{i-1}(S^k))$. If $i > \mathrm{dim}(L)$ this clearly vanishes. If $i \leqslant \mathrm{dim}(L)$, then \[ i-1 \leqslant \mathrm{dim}(L) - 1 \leqslant \mathrm{dim}(N) - \mathrm{dim}(L) - 4 = k-3, \] so it also vanishes in this case. \end{proof} The inclusion of a trivial one-dimensional subbundle is an embedding $L \times \mathbb{R} \hookrightarrow V_0$. Let us denote by $\hat{\gamma}_0 \colon L \times \mathbb{R} \hookrightarrow N$ the composition of this embedding and the embedding $t_0$. Note that $\hat{\gamma}_0(-,0) = \bar{\gamma}(0)$. We may now choose disjoint open subsets $U_0$ and $U_1$ of $N$ such that \begin{align*} U_0 \; &\supseteq \; \hat{\gamma}_0(L \times \{0\}) \; = \; \bar{\gamma}(0)(L) \\ U_1 \; &\supseteq \; W \cup \bar{\gamma}(1)(L) \end{align*} since these are disjoint subsets of $N$ that are compact and closed respectively, and $N$ is regular (since it is a manifold). By compactness of $L$ we may find $\epsilon > 0$ such that $\hat{\gamma}_0(L \times [-\epsilon,\epsilon]) \subseteq U_0$, so in particular $\hat{\gamma}_0(L \times [-\epsilon,\epsilon])$ is disjoint from $W$ and $\bar{\gamma}(1)(L)$. We may similarly use a tubular neighbourhood of $\bar{\gamma}(1)$ to extend it to an embedding $\hat{\gamma}_1 \colon L \times \mathbb{R} \hookrightarrow N$ with $\hat{\gamma}_1(-,0) = \bar{\gamma}(1)$. As above, choose disjoint open subsets $U_0^\prime$ and $U_1^\prime$ of $N$ such that \begin{align*} U_0^\prime \; &\supseteq \; \hat{\gamma}_1(L \times \{0\}) \; = \; \bar{\gamma}(1)(L) \\ U_1^\prime \; &\supseteq \; W \cup \hat{\gamma}_0(L \times [-\epsilon,\epsilon]). \end{align*} By compactness of $L$ we may decrease $\epsilon > 0$ if necessary so that $\hat{\gamma}_1(L \times [-\epsilon,\epsilon]) \subseteq U_0^\prime$. Hence: \begin{itemizeb} \item The subsets $\hat{\gamma}_0(L \times [-\epsilon,\epsilon])$ and $\hat{\gamma}_1(L \times [-\epsilon,\epsilon])$ of $N$ are disjoint from $W$ and from each other. \end{itemizeb} Now we use $\hat{\gamma}_0$ and $\hat{\gamma}_1$ to extend $\bar{\gamma}$ to a slightly larger interval. Define $\mathring{\gamma} \colon (-3\epsilon , 1+3\epsilon) \to \mathrm{Emb}(L,N)$ by: \[ \mathring{\gamma}(t)(z) = \begin{cases} \hat{\gamma}_0(z,t+2\epsilon) & t \in (-3\epsilon,-\epsilon] \\ \hat{\gamma}_0(z,-t) & t \in [-\epsilon,0] \\ \bar{\gamma}(t)(z) & t \in [0,1] \\ \hat{\gamma}_1(z,t-1) & t \in [1,1+\epsilon] \\ \hat{\gamma}_1(z,1-t+2\epsilon) & t \in [1+\epsilon,1+3\epsilon). \end{cases} \] This is a continuous path in $\mathrm{Emb}(L,N)$ extending $\bar{\gamma}$ (i.e.\ $\mathring{\gamma}|_{[0,1]} = \bar{\gamma}$) such that \begin{itemizeb} \item $\mathring{\gamma}(-2\epsilon) = \bar{\gamma}(0)$, \item $\mathring{\gamma}(1+2\epsilon) = \bar{\gamma}(1)$, \item $\mathring{\gamma}(t)(L)$ is disjoint from $W$ for $t \leqslant 0$ and for $t \geqslant 1$. \end{itemizeb} \textbf{Step 2.} (Making the adjoint of $\mathring{\gamma}$ into an embedding.)
The adjoint of $\mathring{\gamma}$ is a continuous map \[ f_1 \colon L \times (-3\epsilon,1+3\epsilon) \longrightarrow N \] defined by $f_1(z,t) = \mathring{\gamma}(t)(z)$. Let $A_0 = L \times (-3\epsilon,-\epsilon]$ and $A_1 = L \times [1+\epsilon,1+3\epsilon)$. The restriction of $f_1$ to $A_0$ is (up to an affine shift in coordinates) equal to $\hat{\gamma}_0$, so it is an embedding. Similarly, up to an affine shift in coordinates, the restriction of $f_1$ to $A_1$ is equal to $\hat{\gamma}_1$, so it is also an embedding. Moreover, the closures of the images of $A_0$ and $A_1$ under $f_1$ are disjoint, so the restriction of $f_1$ to $A = A_0 \sqcup A_1$ is an embedding. Applying Theorem \ref{t:Whitney} to $\phi = f_1$ (note that the $L$ of the theorem is what we are calling $L \times (-3\epsilon , 1+3\epsilon)$ here) we obtain a smooth injective immersion \[ f_2 \colon L \times (-3\epsilon,1+3\epsilon) \longrightarrow N \] such that $f_2|_A = f_1|_A$. Let \[ f_3 \colon L \times (-2.5\epsilon,1+2.5\epsilon) \longrightarrow N \] be the restriction of $f_2$ to the relatively compact subset $L \times (-2.5\epsilon,1+2.5\epsilon)$. Note that \begin{itemizeb} \item $f_3$ is an embedding, since it is the restriction of an injective immersion to a relatively compact subset of the domain, \item $f_3(B_1)$ is disjoint from $W$, \end{itemizeb} where for $r \in (0,2)$ we write \[ B_r = L \times ((-2.5\epsilon , -r\epsilon) \cup (1 + r\epsilon , 1 + 2.5\epsilon)). \] \textbf{Step 3.} (Transversality and disjointness from $W$.) We may write $W = W_1 \cup \cdots \cup W_k$, where each $W_i$ is a properly embedded submanifold of $N$ of dimension at most $\mathrm{dim}(N) - \mathrm{dim}(L) - 2$. We will use Corollary \ref{c:transversality} to modify $f_3$ to be disjoint from the $W_i$ one at a time. First, since $f_3(B_1)$ is disjoint from $W_1$, we may apply Corollary \ref{c:transversality} to $f_3$ and $W_1$ to obtain an embedding \[ f_4 \colon L \times (-2.5\epsilon,1+2.5\epsilon) \lhook\joinrel\longrightarrow N \smallsetminus W_1 \] such that $f_4$ agrees with $f_3$ on $B_{1.5}$. (We note that, in order to obtain an \emph{embedding}, we chose the $\mathcal{U}$ in Corollary \ref{c:transversality} to be an open neighbourhood of $f_3$ consisting of embeddings.) Now, since $f_4(B_{1.5}) = f_3(B_{1.5})$ is disjoint from $W_2$, we may apply Corollary \ref{c:transversality} to $f_4$ and $W_2$ to obtain an embedding \[ f_5 \colon L \times (-2.5\epsilon,1+2.5\epsilon) \lhook\joinrel\longrightarrow N \smallsetminus (W_1 \cup W_2) \] such that $f_5$ agrees with $f_4$ (and therefore also with $f_3$) on $B_{1.55}$. Iterating this procedure another $k-2$ times we eventually obtain an embedding \[ f_{k+3} \colon L \times (-2.5\epsilon,1+2.5\epsilon) \lhook\joinrel\longrightarrow N \smallsetminus (W_1 \cup \cdots \cup W_k) = N \smallsetminus W \] such that $f_{k+3}$ agrees with $f_3$ on $B_{1.55\ldots 5}$. In particular, it agrees with $f_3$, and therefore with $f_1$, on $L \times \{-2\epsilon\}$ and $L \times \{1+2\epsilon\}$. 
We may now define \[ \gamma' \colon [-2\epsilon , 1+2\epsilon] \longrightarrow \mathrm{Emb}(L,N) \] by $\gamma'(t)(z) = f_{k+3}(z,t)$ and note that \begin{itemizeb} \item $\gamma'(t)(L)$ is disjoint from $W$ for all $t \in [-2\epsilon , 1+2\epsilon]$, \item $\gamma'(-2\epsilon) = f_{k+3}(\,\cdot\, ,-2\epsilon) = f_1(\,\cdot\, ,-2\epsilon) = \mathring{\gamma}(-2\epsilon) = \bar{\gamma}(0)$, \item $\gamma'(1+2\epsilon) = f_{k+3}(\,\cdot\, ,1+2\epsilon) = f_1(\,\cdot\, ,1+2\epsilon) = \mathring{\gamma}(1+2\epsilon) = \bar{\gamma}(1)$. \end{itemizeb} Rescaling the interval $[-2\epsilon , 1+2\epsilon]$ to $[0,1]$ and post-composing with the projection $\mathrm{Emb}(L,N) \to \mathrm{Emb}(L,N)/G$, we obtain a path $\gamma'$ as claimed in the lemma. \end{proof} \begin{rmk}[\emph{The use of the dimension hypothesis in the proof of Lemma \ref{l:path-of-embeddings}.}] \label{r:dimension-hypothesis-lemma} In Lemma \ref{l:path-of-embeddings} we made two assumptions about the relative dimensions of the manifolds $L$ and $N$, and the union of submanifolds $W \subseteq N$, namely: \[ \mathrm{dim}(N) \geqslant 2\,\mathrm{dim}(L) + 3 \qquad\text{and}\qquad \mathrm{dim}(W) \leqslant \mathrm{dim}(N) - \mathrm{dim}(L) - 2. \] The first assumption was essential in step 2 of the proof, in order to apply Theorem \ref{t:Whitney} to $N$ and $L \times (\text{interval})$. The second assumption was essential in step 3 of the proof, in order to apply Corollary \ref{c:transversality} to $N$, $W$ and $L \times (\text{interval})$. In step 0, only the weaker dimension hypothesis $\mathrm{dim}(N) \geqslant 2\,\mathrm{dim}(L) + 1$ was needed. In step 1, a sufficient hypothesis would have been that the orbits of embeddings $\gamma(0)$ and $\gamma(1) \in \mathrm{Emb}(L,N)/G$ both have the property that some (equivalently, any) representative embedding $L\hookrightarrow N$ admits a non-vanishing normal section. This normal section hypothesis is also guaranteed by the weaker dimension hypothesis $\mathrm{dim}(N) \geqslant 2\,\mathrm{dim}(L) + 1$. \end{rmk} \subsection{The weak factorisation condition.}\label{s:weak-factorisation} We first quickly recap some choices and constructions that we have made so far in the proof. \begin{recap} In \S\ref{s:firsta} we chose an embedding $\psi_0 \colon P \hookrightarrow M_2$ such that $a_0 = [\psi_0] \in A_0 \subseteq B_0$, and we constructed a diffeomorphism \[ \Psi \colon M \smallsetminus \psi_0(P) \longrightarrow M \smallsetminus \lambda(T \times \{2\}). \] In \S\ref{s:seconda} we chose an embedding $\tau_0 \colon P \times [0,2] \hookrightarrow M$ such that $\bar{a} = \tau_0 \in \ensuremath{{\,\overline{\! A}}}_0 \subseteq \ensuremath{{\,\overline{\! B}}}_0$. With respect to these choices, there is a composite map \begin{equation}\label{e:composite-map} \eta \colon Y_{n-1}(\breve{M},\mu) = \bar{q}_0^{-1}(\bar{a}) \lhook\joinrel\longrightarrow \ensuremath{{\overline{Y}}}_0 \longrightarrow \ensuremath{{\overline{Y}}} \xrightarrow{\;\;\Psi^{-1} \,\circ\, -\;\;} q_0^{-1}(a_0) \lhook\joinrel\longrightarrow Y_0 \longrightarrow Y, \end{equation} where $Y_0 \to Y$ and $\ensuremath{{\overline{Y}}}_0 \to \ensuremath{{\overline{Y}}}$ are the augmentation maps of the semi-simplicial spaces $Y_\bullet$ and $\ensuremath{{\overline{Y}}}_\bullet$ respectively, the two inclusions are the inclusions of fibres for the Serre fibrations $q_0 \colon Y_0 \to B_0$ and $\bar{q}_0 \colon \ensuremath{{\overline{Y}}}_0 \to \ensuremath{{\,\overline{\! 
B}}}_0$ (\textit{cf}.\ Lemma \ref{l:Serre-fibrations-1} and Proposition \ref{p:Serre-fibrations-2}) and the middle map is a homeomorphism induced by post-composing all embeddings with the inverse of $\Psi$ (\textit{cf}.\ Lemma \ref{l:Psi}). Finally, $\bar{q}_0^{-1}(\bar{a})$ was identified in \S\ref{s:seconda} with $Y_{n-1}(\breve{M},\mu)$, where $\breve{M} = M \smallsetminus (\tau_0(P \times [0,2]) \cup \lambda(T \times \{2\}))$ and $\mu$ is a collar neighbourhood for the boundary of $\breve{M}$, a restriction and rescaling of $\lambda$. In a similar way, we have a composite map $X_{n-1}(\breve{M},\mu) \to X$ and a commutative square (ignoring for now the grey diagonal map and homotopy): \begin{equation} \label{eq:final-commutative-square} \begin{split} \begin{tikzpicture} [x=1mm,y=1mm] \node (tl) at (0,15) {$X_{n-1}(\breve{M},\mu)$}; \node (tr) at (50,15) {$Y_{n-1}(\breve{M},\mu)$}; \node (bl) at (0,0) {$X$}; \node (br) at (50,0) {$Y$}; \draw[->] (tl) to node[above,font=\small]{$f_{n-1}(\breve{M},\mu)$} (tr); \draw[->] (bl) to node[below,font=\small]{$f$} (br); \draw[->] (tl) to (bl); \draw[->] (tr) to node[right,font=\small]{$\eta$} (br); \draw[->,densely dashed,black!50] (tr) to node[above,font=\small]{$\zeta$} (bl); \node (homotopy) at (40,4) [black!50] {$\Longrightarrow$}; \node at (homotopy.north) [anchor=mid,font=\small,black!50] {$\mathcal{H}$}; \end{tikzpicture} \end{split} \end{equation} obtained by gluing together the following six commutative squares: \begin{itemizeb} \item two coming from the augmentations of the semi-simplicial maps $X_\bullet \to Y_\bullet$ and $\ensuremath{{\,\overline{\! X}}}_\bullet \to \ensuremath{{\overline{Y}}}_\bullet$, \item two coming from the inclusions of the fibres of \eqref{e:firsta} and \eqref{e:seconda} with $i=j=0$, \item \eqref{e:psi-inverse}, \item \eqref{eq:restriction-of-gj} with $j=0$. \end{itemizeb} \end{recap} \textbf{Last step of the proof.} The commutative square \eqref{eq:final-commutative-square} above is the commutative square \eqref{eSquare2} of Theorem \ref{tAxiomatic} for which we must check the weak factorisation condition, i.e., we must check that the map $\eta$ in \eqref{eq:final-commutative-square} factors up to homotopy through $f$, as the last step of the proof of Theorem~\ref{tmain}. We will therefore construct a map $\zeta \colon Y_{n-1}(\breve{M},\mu) \to X$ and a homotopy $\mathcal{H}$ from $f \circ \zeta$ to $\eta$, as pictured in grey in \eqref{eq:final-commutative-square}. First we describe $f$ and $\eta$ explicitly. By definition, $f \colon X \to Y$ acts by \[ \{ [\varphi_1] ,\ldots, [\varphi_n] \} \;\longmapsto\; \{ [\ensuremath{\bar{\iota}}b(-,0,0.5)] , [\varphi_1] ,\ldots, [\varphi_n] \} . \] Combining the two augmentation maps, the diffeomorphism $\Psi$ and the identification $Y_{n-1}(\breve{M},\mu) = \bar{q}_0^{-1}(\bar{a})$, we see that $\eta \colon Y_{n-1}(\breve{M},\mu) \to Y$ acts by \begin{equation}\label{e:eta} \{ [\varphi_1] ,\ldots, [\varphi_n] \} \;\longmapsto\; \{ [\psi_0] , [\Psi^{-1} \circ \varphi_1] ,\ldots, [\Psi^{-1} \circ \varphi_n] \} . \end{equation} \textbf{Two paths of embeddings.} To construct $\zeta$ and $\mathcal{H}$ we will use two paths of embeddings $\Gamma$ and $\Delta$. 
First, recall from Remark \ref{r:diffeomorphism-Psi} that there is a path \[ \Gamma \colon [0,1] \longrightarrow \mathrm{Emb}(M \smallsetminus \lambda(T \times \{2\}),M) \] such that $\Gamma(0) = \Psi^{-1}$, $\Gamma(1)$ is the inclusion and the restriction of $\Gamma(s)$ to $M \smallsetminus M_{1.5}$ is the identity for all $s \in [0,1]$. Second, recall that in \S\ref{s:seconda} we chose an $\epsilon > 0$ such that \begin{itemizeb} \item $\tau_0(z,t) = \ensuremath{\bar{\iota}}b(z,h_0,t)$ for all $t \in [0,1+\epsilon]$ and $z \in P$, \item $\tau_0(P \times (1+\epsilon,2])$ is contained in $M_{1+\epsilon}$. \end{itemizeb} Now choose a path $\delta \colon [0,1] \to \mathrm{Emb}([0,2],[0,2])$ such that $\delta(0)$ is the identity, $\delta(1)$ takes $[0,2]$ to $[1,2]$ and $\delta(s)|_{[1+\epsilon,2]} = \mathrm{id}$ for all $s \in [0,1]$. This determines a path \[ \Delta \colon [0,1] \longrightarrow \mathrm{Emb}(M,M) \] by defining $\Delta(s)|_{M \smallsetminus \lambda(\partial M \times [0,2])} = \mathrm{id}$ and $\Delta(s)(\lambda(z,t)) = \lambda(z,\delta(s)(t))$ for $(z,t) \in \partial M \times [0,2]$. Note that $\Delta(s)(\breve{M}) \subseteq \breve{M}$ for all $s \in [0,1]$, due to how we chose $\epsilon$ and $\delta$ above, so this may be regarded as a path \[ \Delta \colon [0,1] \longrightarrow \mathrm{Emb}(\breve{M},\breve{M}) \] with $\Delta(0) = \mathrm{id}$. \textbf{The map $\zeta$.} We now define $\zeta \colon Y_{n-1}(\breve{M},\mu) \to X$ to act by \[ \{ [\varphi_1] ,\ldots, [\varphi_n] \} \;\longmapsto\; \{ [\Delta(1) \circ \varphi_1] ,\ldots, [\Delta(1) \circ \varphi_n] \} . \] This is easily seen to be well-defined and continuous. The map $f \circ \zeta \colon Y_{n-1}(\breve{M},\mu) \to Y$ therefore acts by \begin{equation}\label{e:fzeta} \{ [\varphi_1] ,\ldots, [\varphi_n] \} \;\longmapsto\; \{ [\ensuremath{\bar{\iota}}b(-,0,0.5)] , [\Delta(1) \circ \varphi_1] ,\ldots, [\Delta(1) \circ \varphi_n] \} . \end{equation} \textbf{The homotopy $\mathcal{H}$.} We will now construct a homotopy \[ \mathcal{H} \colon Y_{n-1}(\breve{M},\mu) \times [0,4] \longrightarrow Y \] between $\mathcal{H}(-,0) = \eqref{e:fzeta}$ and $\mathcal{H}(-,4) = \eqref{e:eta}$. We first define it explicitly and then explain what we are doing more intuitively. Explicitly, the definition is \begin{equation}\label{e:homotopy} \mathcal{H} (\{ [\varphi_{1,\ldots,n}] \},t) = \begin{cases} \{ [\ensuremath{\bar{\iota}}b(-,th_0,0.5)] , [\Delta(1) \circ \varphi_{1,\ldots,n}] \} & t \in [0,1] \\ \{ [\tau_0(-,0.5)] , [\Delta(2-t) \circ \varphi_{1,\ldots,n}] \} & t \in [1,2] \\ \{ [\tau_0(-,0.5)] , [\Gamma(3-t) \circ \varphi_{1,\ldots,n}] \} & t \in [2,3] \\ \{ [\Psi^{-1} \circ \tau_0(- , 1.5 t - 4)] , [\Psi^{-1} \circ \varphi_{1,\ldots,n}] \} & t \in [3,4) \\ \{ [\psi_0] , [\Psi^{-1} \circ \varphi_{1,\ldots,n}] \} & t = 4, \end{cases} \end{equation} where the notation $[\text{\small (wxyz)}_{1,\ldots,n}]$ is an abbreviation of $[\text{\small (wxyz)}_1] ,\ldots, [\text{\small (wxyz)}_n]$. A more explanatory description is as follows: \begin{itemizeb} \item[(1)] At time $t=0$ the homotopy $\mathcal{H}(-,0)$ pushes the configuration $[\varphi_{1,\ldots,n}]$ away from the boundary of $\breve{M}$ using the self-embedding $\Delta(1)$ (this is $\zeta$) and then adjoins a new embedded copy of $P$ via the embedding $[\ensuremath{\bar{\iota}}b(-,0,0.5)]$ (this is $f$). 
\item[(2)] In the interval $t \in [0,1]$ the location of the new copy of $P$ is modified (by a straight line) from $[\ensuremath{\bar{\iota}}b(-,0,0.5)]$ to $[\ensuremath{\bar{\iota}}b(-,h_0,0.5)] = [\tau_0(-,0.5)]$. Since the rest of the configuration $[\Delta(1) \circ \varphi_{1,\ldots,n}]$ is contained in $M_1$, this does not result in any collisions. \item[(3)] In the interval $t \in [1,2]$ the new copy of $P$ stays fixed at $[\tau_0(-,0.5)]$ and the rest of the configuration moves from $[\Delta(1) \circ \varphi_{1,\ldots,n}]$ to $[\Delta(0) \circ \varphi_{1,\ldots,n}] = [\varphi_{1,\ldots,n}]$. This does not result in any collisions with $\tau_0(P \times \{0.5\})$ since the original configuration $[\varphi_{1,\ldots,n}]$ is contained in $\breve{M}$, so in particular it is disjoint from \begin{equation}\label{e:p-times-1e} \tau_0(P \times [0,1+\epsilon]) = \ensuremath{\bar{\iota}}b(P \times \{h_0\} \times [0,1+\epsilon]) = \lambda(P_{h_0} \times [0,1+\epsilon]), \end{equation} where for $r \in (-1,1)$ we write $P_r = \ensuremath{\bar{\iota}}b(P \times \{r\} \times \{0\}) \subseteq \partial M$, and the self-embeddings $\Delta(s)$ act on the subset $\eqref{e:p-times-1e} \subseteq M$ by ``compressing'' the second coordinate. \item[(4)] In the interval $t \in [2,3]$ the new copy of $P$ remains fixed at $[\tau_0(-,0.5)]$ and the rest of the configuration moves from $[\varphi_{1,\ldots,n}]$ to $[\Psi^{-1} \circ \varphi_{1,\ldots,n}]$ via the isotopy of embeddings $\Gamma$. \item[(5)] In the half-open interval $t \in [3,4)$ the configuration $[\Psi^{-1} \circ \varphi_{1,\ldots,n}]$ remains fixed. Note that, since $[\varphi_{1,\ldots,n}]$ is disjoint from $\tau_0(P \times [0,2])$, the configuration $[\Psi^{-1} \circ \varphi_{1,\ldots,n}]$ is disjoint from $\Psi^{-1} \circ \tau_0(P \times [0,2])$. We may therefore move the new copy of $P$ along this embedded copy of $P \times [0,2]$ from $[\tau_0(-,0.5)] = [\Psi^{-1} \circ \tau_0(-,0.5)]$ \emph{almost} to $[\Psi^{-1} \circ \tau_0(-,2)]$ (but not quite, since $\Psi^{-1}$ is not defined exactly at $\tau_0(P \times \{2\}) \subseteq \lambda(T \times \{2\})$). As $t$ approaches $4$ (so $1.5t-4$ approaches $2$), the new copy of $P$ that we have adjoined approaches $\psi_0(P)$. More precisely: \item[(6)] For $t \in [4-\epsilon',4)$, for some $\epsilon' > 0$, the path of embeddings $t \mapsto \Psi^{-1} \circ \tau_0(-,1.5 t - 4)$ is of the form considered in Remark \ref{r:diffeomorphism-Psi}, and it may therefore be continuously extended to $t=4$ with the embedding $\psi_0$. Hence we may define the homotopy $\mathcal{H}(-,4)$ to send the configuration $[\varphi_{1,\ldots,n}]$ to the configuration $[\Psi^{-1} \circ \varphi_{1,\ldots,n}]$ with a new copy of $P$ adjoined via the embedding $[\psi_0]$. This is precisely $\eta$. \end{itemizeb} For a picture worth $\tfrac{10}{3}$ times as much as these approx.\ $300$ words, see Figure \ref{fhomotopy}. This homotopy \[ \mathcal{H} \;\colon\; f \circ \zeta \;\simeq\; \eta \] verifies the weak factorisation condition, and therefore completes the proof of Theorem \ref{tmain}. \begin{figure} \caption{The homotopy $\mathcal{H}$.}\label{fhomotopy} \end{figure} \section{Appendix. Homotopy fibres of augmented semi-simplicial spaces}\label{s:appendix} In this appendix, we give a proof of Lemma \ref{l:homotopy-fibres}, which is restated as Lemma \ref{l:rwappendix} below. \subsection{Preliminaries} We begin by mentioning an elementary lemma on colimits in topological spaces.
\begin{lem}\label{lem:closed-inclusion-colim} Let $F,G\colon I\to \mathsf{Top}$ be diagrams of topological spaces and $\alpha \colon F\Rightarrow G$ be a natural transformation with the property that $\alpha_i \colon F(i) \to G(i)$ is a closed inclusion for each $i\in I$. Suppose also that for each $i\in I$ the square \begin{equation}\label{eq:closed-inclusion-colim} \centering \begin{split} \begin{tikzpicture} [x=1mm,y=1mm] \node (tl) at (0,12) {$F(i)$}; \node (tr) at (30,12) {$\ensuremath{\mathrm{colim}}(F)$}; \node (bl) at (0,0) {$G(i)$}; \node (br) at (30,0) {$\ensuremath{\mathrm{colim}}(G)$}; \draw[->] (tl) to node[above,font=\small]{$\mu_i$} (tr); \draw[->] (bl) to node[below,font=\small]{$\nu_i$} (br); \draw[->] (tl) to node[left,font=\small]{$\alpha_i$} (bl); \draw[->] (tr) to node[right,font=\small]{$\alpha_*$} (br); \end{tikzpicture} \end{split} \end{equation} is \emph{Cartesian} in the category of sets, meaning that the canonical function from $F(i)$ to the pullback \textup{(}in sets\textup{)} of the rest of the diagram is a bijection. Then the map \[ \alpha_* \colon \ensuremath{\mathrm{colim}}(F) \longrightarrow \ensuremath{\mathrm{colim}}(G) \] is also a closed inclusion. The same holds with ``closed inclusion'' replaced by ``open inclusion''. \end{lem} \begin{rmk} It is sufficient to check that each square \eqref{eq:closed-inclusion-colim} is Cartesian in the category of sets for each $i$ in a given \emph{cofinal} subcategory $J$ of $I$, since restricting $F$ and $G$ to $J$ does not change their colimits (see \cite[\href{http://stacks.math.columbia.edu/tag/09WN}{09WN}]{stacks-project}). \end{rmk} \begin{proof} It is immediate that $\alpha_*$ is injective, so it remains to show that it is a closed (resp.\ open) map. We write the proof in the closed case; the open case is identical. Let $A$ be a closed subset of $\ensuremath{\mathrm{colim}}(F)$. By the definition of the colimit topology, it suffices to show that $\nu_i^{-1}(\alpha_*(A))$ is closed in $G(i)$ for each $i$. But since the square above is Cartesian in the category of sets, we have $\nu_i^{-1}(\alpha_*(A)) = \alpha_i(\mu_i^{-1}(A))$, which is closed since $\alpha_i$ is a closed map. \end{proof} \begin{defn} Let $Z_\bullet$ be a semi-simplicial space, and let $Y_\bullet$ be a semi-simplicial subspace. We call $Y_\bullet$ \emph{full} relative to $Z_\bullet$ if each square \begin{equation}\label{eq:def-of-full-sssubspace} \centering \begin{split} \begin{tikzpicture} [x=1mm,y=1mm] \node (tl) at (0,12) {$\Delta^n \times Y_n$}; \node (tr) at (30,12) {$\Delta^n \times Z_n$}; \node (bl) at (0,0) {$\lVert Y_\bullet \rVert^{(n)}$}; \node (br) at (30,0) {$\lVert Z_\bullet \rVert^{(n)}$}; \draw[->] (tl) to (tr); \draw[->] (bl) to (br); \draw[->] (tl) to (bl); \draw[->] (tr) to (br); \end{tikzpicture} \end{split} \end{equation} is \emph{Cartesian} in the category of sets, meaning that the canonical function from $\Delta^n \times Y_n$ to the pullback (in sets) of the rest of the diagram is a bijection. Another way of stating this is that a simplex $\sigma \in Z_n$ is contained in $Y_n$ whenever at least one of its vertices is contained in $Y_0$. Another equivalent characterisation is that the subspace $\lVert Y^{\delta}_\bullet \rVert$ of $\lVert Z^{\delta}_\bullet \rVert$ is a union of path-components, where $Y_n^\delta$ denotes $Y_n$ given the discrete topology and similarly for $Z_n^\delta$. 
\end{defn} \begin{eg}\label{eg:augmented-ssspace-fibres} Let $Z_\bullet$ be an \emph{augmented} semi-simplicial space, choose a subset $A \subseteq Z_{-1}$ and define $Y_n = f_n^{-1}(A)$ where $f_n \colon Z_n \to Z_{-1}$ is the unique composition of face maps. Then $Y_\bullet$ is a full semi-simplicial subspace of $Z_\bullet$, which can be seen as follows. Suppose $\sigma$ is a simplex in $Z_n$ and one of its vertices is in $Y_0$, in other words $d^n(\sigma)\in Y_0$, where $d^n$ is one of the possible compositions of face maps $Z_n \to Z_0$. Then we have $f_n(\sigma) = f_0(d^n(\sigma))\in A$ and so $\sigma \in Y_n$. \end{eg} The relevance of this definition is the following lemma. \begin{lem}\label{lem:closed-inclusion} Let $Z_\bullet$ be a semi-simplicial space and $Y_\bullet$ be a full semi-simplicial subspace. If each inclusion $Y_n \hookrightarrow Z_n$ is a closed inclusion then the map of geometric realisations \begin{equation}\label{eq:map-of-geometric-realisations} \lVert Y_\bullet \rVert \longrightarrow \lVert Z_\bullet \rVert \end{equation} is also a closed inclusion. Similarly when ``closed inclusion'' is replaced by ``open inclusion''. \end{lem} \begin{proof} The proof is identical for the ``open'' statement and the ``closed'' statement, so we will just prove the ``closed'' statement. We will first prove by induction that the map of skeleta \begin{equation}\label{eq:map-of-skeleta} \lVert Y_\bullet \rVert^{(n)} \longrightarrow \lVert Z_\bullet \rVert^{(n)} \end{equation} is a closed inclusion for all $n$. The case $n=0$ is part of the assumption, so we let $n\geqslant 1$ and assume the result for $n-1$ by inductive hypothesis. The map \eqref{eq:map-of-skeleta} is the map of pushouts induced by the following diagram: \begin{equation}\label{eq:closed-inclusion-inductive-step} \centering \begin{split} \begin{tikzpicture} [x=1mm,y=1mm] \node (t1) at (0,15) {$\Delta^n \times Y_n$}; \node (t2) at (30,15) {$\partial\Delta^n \times Y_n$}; \node (t3) at (60,15) {$\lVert Y_\bullet \rVert^{(n-1)}$}; \node (b1) at (0,0) {$\Delta^n \times Z_n$}; \node (b2) at (30,0) {$\partial\Delta^n \times Z_n$}; \node (b3) at (60,0) {$\lVert Z_\bullet \rVert^{(n-1)}$}; \draw[->] (t2) to (t1); \draw[->] (b2) to (b1); \draw[->] (t1) to (b1); \draw[->] (t2) to (b2); \draw[->] (t3) to (b3); \draw[->] (t2) to (t3); \draw[->] (b2) to (b3); \end{tikzpicture} \end{split} \end{equation} in which the vertical maps are closed inclusions by assumption and inductive hypothesis. To apply Lemma \ref{lem:closed-inclusion-colim} we need to check that three squares of the form \eqref{eq:closed-inclusion-colim} are Cartesian in the category of sets. The square corresponding to the left-hand side of \eqref{eq:closed-inclusion-inductive-step} is precisely \eqref{eq:def-of-full-sssubspace}, and therefore Cartesian in the category of sets by assumption. The square corresponding to the middle of \eqref{eq:closed-inclusion-inductive-step} therefore also has this property, since the left-hand square of \eqref{eq:closed-inclusion-inductive-step} is Cartesian (in spaces, not just in sets, although this is not relevant here). 
The square corresponding to the right-hand side of \eqref{eq:closed-inclusion-inductive-step} is \begin{equation}\label{eq:closed-inclusion-inductive-step-2} \centering \begin{split} \begin{tikzpicture} [x=1mm,y=1mm] \node (tl) at (0,12) {$\lVert Y_\bullet \rVert^{(n-1)}$}; \node (tr) at (30,12) {$\lVert Z_\bullet \rVert^{(n-1)}$}; \node (bl) at (0,0) {$\lVert Y_\bullet \rVert^{(n)}$}; \node (br) at (30,0) {$\lVert Z_\bullet \rVert^{(n)}$}; \draw[->] (tl) to (tr); \draw[->] (bl) to (br); \draw[->] (tl) to (bl); \draw[->] (tr) to (br); \end{tikzpicture} \end{split} \end{equation} which is also Cartesian in the category of sets.\footnote{To see this, consider everything as subsets of the set $\lVert Z_\bullet \rVert$. There is a surjection \[ k\colon \textstyle{\bigsqcup}_{m} (\Delta^m \times Z_m) \longrightarrow \lVert Z_\bullet \rVert \] and we need to show that the intersection of the subsets $k(\Delta^n \times Y_n)$ and $k(\Delta^{n-1} \times Z_{n-1})$ is precisely $k(\Delta^{n-1} \times Y_{n-1})$. Clearly this is contained in the intersection. To show the converse, consider $(t,y)\in \Delta^n \times Y_n$ and $(s,z)\in \Delta^{n-1} \times Z_{n-1}$ such that $k(t,y)=k(s,z)$. The equivalence relation defining $k$ glues each face of an $n$-simplex to exactly one $(n-1)$-simplex, so we must have that $z=d_i(y)$ for some face map $d_i$. Hence $z\in Y_{n-1}$ and so $k(s,z)\in k(\Delta^{n-1} \times Y_{n-1})$.} Lemma \ref{lem:closed-inclusion-colim} then implies that \eqref{eq:map-of-skeleta} is a closed inclusion. Finally, we show that \eqref{eq:map-of-geometric-realisations} is a closed inclusion. By the same reasoning as above, the square \begin{equation}\label{eq:closed-inclusion-final-step} \centering \begin{split} \begin{tikzpicture} [x=1mm,y=1mm] \node (tl) at (0,12) {$\lVert Y_\bullet \rVert^{(n)}$}; \node (tr) at (30,12) {$\lVert Z_\bullet \rVert^{(n)}$}; \node (bl) at (0,0) {$\lVert Y_\bullet \rVert$}; \node (br) at (30,0) {$\lVert Z_\bullet \rVert$}; \draw[->] (tl) to (tr); \draw[->] (bl) to (br); \draw[->] (tl) to (bl); \draw[->] (tr) to (br); \end{tikzpicture} \end{split} \end{equation} is Cartesian in the category of sets for all $n$. We already know that \eqref{eq:map-of-skeleta} is a closed inclusion for each $n$, so Lemma \ref{eq:closed-inclusion-colim} implies that \eqref{eq:map-of-geometric-realisations} is a closed inclusion. \end{proof} \subsection{Computing homotopy fibres levelwise}\label{ss:homotopy-fibres} \begin{convention} For a map $g\colon Y\to Z$ and point $z\in Z$, we take the following explicit model for the homotopy fibre $\mathrm{hofib}_z(g)$: denote the mapping space $\ensuremath{\mathrm{Map}}([0,1],\{0\};Z,\{z\})$ by $P_z Z$ and let $\mathrm{hofib}_z(g)$ be the pullback of $g$ and the evaluation map $\mathrm{ev}_1\colon P_z Z \to Z$. \end{convention} Let $X_\bullet$ be an augmented semi-simplicial space and write $f_n \colon X_n \to X \coloneqq X_{-1}$ for the unique composition of face maps. Note that, for any point $x\in X$, the homotopy fibres $\{\mathrm{hofib}_x(f_n)\}_{n\geqslant 0}$ naturally inherit face maps from $X_\bullet$ and so form a semi-simplicial space $\mathrm{hofib}_x(f_\bullet)$. The following lemma appeared as Lemma 2.1 in the third arXiv version of \cite{Randal-Williams2016Resolutionsmodulispaces} (but not in the final, published version) and is also very similar to Lemma 2.14 of \cite{EbertRandal-Williams2019Semi-simplicialspaces} (they assume that the maps $f_n$ are all quasifibrations). 
We give a complete proof in this appendix, which is self-contained apart from an appeal to one theorem of Dold and Thom (Theorem \ref{thm:doldthom} below). \begin{lem}\label{l:rwappendix} The sequence $\lVert \mathrm{hofib}_x(f_\bullet) \rVert \to \lVert X_\bullet \rVert \to X$ is a homotopy fibre sequence over $x$. \end{lem} A sequence $A\to B\to C$ is a \emph{homotopy fibre sequence over $c\in C$} if the square \begin{center} \begin{tikzpicture} [x=1mm,y=1mm] \node (tl) at (0,10) {$A$}; \node (tr) at (20,10) {$B$}; \node (bl) at (0,0) {$\{c\}$}; \node (br) at (20,0) {$C$}; \draw[->] (tl) to (tr); \draw[->] (bl) to (br); \draw[->] (tl) to (bl); \draw[->] (tr) to (br); \end{tikzpicture} \end{center} is homotopy Cartesian, meaning that there exists a homotopy filling the square, and the canonical map (induced by this homotopy) from $A$ to the homotopy pullback of the rest of the diagram is a weak equivalence. This property is invariant under objectwise weak equivalence of diagrams (which will be used in the proof of the lemma, \textit{cf}.\ diagram \eqref{eq:reduction-to-special-case} below). The lemma therefore equivalently states that the canonical map \begin{equation}\label{eq:canonical-map} \lVert \mathrm{hofib}_x(f_\bullet) \rVert \longrightarrow \mathrm{hofib}_x(\lVert X_\bullet \rVert \to X) \end{equation} is a weak equivalence. Before proving this, we recall that a \emph{quasifibration}, introduced in \cite{DoldThom1958Quasifaserungenundunendliche}, is a map $f\colon X\to Y$ with the property that for each $y\in Y$ the natural map $f^{-1}(y) \to \mathrm{hofib}_y(f)$ is a weak equivalence. Quasifibrations are not closed under taking pullbacks,\footnote{See Bemerkung 2.3 of \cite{DoldThom1958Quasifaserungenundunendliche}.} but they do satisfy a useful local-to-global result, proved in \cite{DoldThom1958Quasifaserungenundunendliche} (see also \cite[Theorem 15.84]{Strom2011Modernclassicalhomotopy}). \begin{thm}[{\cite[Satz 2.2]{DoldThom1958Quasifaserungenundunendliche}}]\label{thm:doldthom} Let $f\colon X\to Y$ be a surjective map and $\mathcal{U}$ a basis for the topology of $Y$. If the restriction $f^{-1}(U)\to U$ is a quasifibration for all $U\in \mathcal{U}$, then $f$ is a quasifibration. \end{thm} \begin{proof}[Proof of Lemma \ref{l:rwappendix}] For each $n\geqslant 0$ define $Y_n \coloneqq X^{[0,1]} \times_X X_n$, so the map $f_n$ factors as a weak equivalence followed by a Serre fibration $X_n \to Y_n \to X$. Denote the second map by $g_n$. The spaces $\{Y_n\}_{n\geqslant 0}$ inherit face maps from $X_\bullet$ and form an augmented semi-simplicial space $Y_\bullet \to X$ whose composition of face maps $Y_n \to X$ is $g_n$. Now choose a CW-approximation $w\colon Z \to X$ for $X$, and write $Z_\bullet \to Z$ for the levelwise pullback of $Y_\bullet$ along $w$. Note that, since $w$ is a weak equivalence and each $g_n \colon Y_n \to X$ is a Serre fibration, the map of augmented semi-simplicial spaces $Z_\bullet \to Y_\bullet$ over $w$ is a levelwise weak equivalence and the compositions of face maps $h_n \colon Z_n \to Z$ are Serre fibrations. Choose a point $z \in Z$ and a path in $X$ from $w(z)$ to $x$. This path induces homotopy equivalences $\mathrm{hofib}_{w(z)}(f_n) \simeq \mathrm{hofib}_x(f_n)$. 
Summarising, we have the following commutative diagram, where $\sim_\ell$ denotes a levelwise weak equivalence: \begin{equation}\label{eq:reduction-to-special-case} \centering \begin{split} \begin{tikzpicture} [x=1.5mm,y=1.2mm] \node (t1) at (0,20) {$\mathrm{hofib}_x(f_\bullet)$}; \node (t2) at (20,20) {$\mathrm{hofib}_{w(z)}(f_\bullet)$}; \node (t3) at (40,20) {$\mathrm{hofib}_{w(z)}(g_\bullet)$}; \node (t4) at (60,20) {$\mathrm{hofib}_z(h_\bullet)$}; \node (m2) at (20,10) {$X_\bullet$}; \node (m3) at (40,10) {$Y_\bullet$}; \node (m4) at (60,10) {$Z_\bullet$}; \node (b3) at (40,0) {$X$}; \node (b4) at (60,0) {$Z$}; \draw[->] (t1) to node[above,font=\small]{$\sim_\ell$} (t2); \draw[->] (t2) to node[above,font=\small]{$\sim_\ell$} (t3); \draw[->] (t4) to node[above,font=\small]{$\sim_\ell$} (t3); \draw[->] (m2) to node[above,font=\small]{$\sim_\ell$} (m3); \draw[->] (m4) to node[above,font=\small]{$\sim_\ell$} (m3); \draw[->] (b4) to node[above,font=\small]{$\sim$} node[below,font=\small]{$w$} (b3); \draw[->] (t1) to (m2); \draw[->] (t2) to (m2); \draw[->] (t3) to (m3); \draw[->] (t4) to (m4); \draw[->] (m2) to (b3); \draw[->] (m3) to (b3); \draw[->] (m4) to (b4); \end{tikzpicture} \end{split} \end{equation} In the induced diagram after taking (thick) geometric realisations, all horizontal maps are weak equivalences. Hence it suffices to prove that the sequence $\lVert \mathrm{hofib}_z(h_\bullet) \rVert \to \lVert Z_\bullet \rVert \to Z$ is a homotopy fibre sequence for any $z\in Z$. In other words, we would like to show that the map $t$ in the square \begin{equation}\label{eq:tblr} \centering \begin{split} \begin{tikzpicture} [x=1mm,y=1mm] \node (tl) at (0,15) {$\lVert \mathrm{hofib}_z(h_\bullet) \rVert$}; \node (tr) at (50,15) {$\mathrm{hofib}_z(h)$}; \node (bl) at (0,0) {$\lVert \mathrm{fib}_z(h_\bullet) \rVert$}; \node (br) at (50,0) {$\mathrm{fib}_z(h)$}; \draw[->] (tl) to node[above,font=\small]{$t$} (tr); \draw[->] (bl) to node[below,font=\small]{$b$} (br); \draw[->] (bl) to node[left,font=\small]{$\ell$} (tl); \draw[->] (br) to node[right,font=\small]{$r$} (tr); \end{tikzpicture} \end{split} \end{equation} is a weak equivalence, where $h$ denotes the induced map $\lVert Z_\bullet \rVert \to Z$. Since each $h_n$ is a Serre fibration, the map $\ell$ in \eqref{eq:tblr} is a weak equivalence. It is easy to see that the map $b$ is a continuous bijection (in fact the same is true for $t$, although we will not use this). To see that it is in fact a homeomorphism, consider the triangle \begin{equation}\label{eq:triangle} \centering \begin{split} \begin{tikzpicture} [x=1mm,y=1mm] \node (l) at (0,0) {$\lVert \mathrm{fib}_z(h_\bullet) \rVert$}; \node (m) at (30,0) {$\mathrm{fib}_z(h)$}; \node (r) at (60,0) {$\lVert Z_\bullet \rVert$}; \draw[->] (l) to node[above,font=\small]{$b$} (m); \draw[->] (m) to node[above,font=\small]{$i$} (r); \draw[->] (l.south east) to [out=-15,in=195] node[below,font=\small]{$\lVert i_\bullet \rVert$} (r.south west); \end{tikzpicture} \end{split} \end{equation} where $i_n \colon \mathrm{fib}_z(h_n) \hookrightarrow Z_n$ and $i\colon \mathrm{fib}_z(h) \hookrightarrow \lVert Z_\bullet \rVert$ are the inclusions. The map $\lVert i_\bullet \rVert$ is injective, and $b$ will be a homeomorphism if and only if $\lVert i_\bullet \rVert$ is an inclusion, i.e.\ a topological embedding. By Lemma \ref{lem:closed-inclusion} (and Example \ref{eg:augmented-ssspace-fibres}), the map $\lVert i_\bullet \rVert$ will be a closed inclusion as long as each $i_n$ is a closed inclusion. 
So we need to show that $\mathrm{fib}_z(h_n) = h_n^{-1}(z)$ is a closed subset of $Z_n$. But $Z$ is a CW-complex, so in particular its points are closed, so $\{z\}$ is closed in $Z$ and so $h_n^{-1}(z)$ is closed in $Z_n$ by continuity of $h_n$. Hence the map $b$ in \eqref{eq:tblr} is a homeomorphism. It remains to show that the map $r$ in \eqref{eq:tblr} is a weak equivalence for each $z\in Z$; in other words, we need to show that the map $h\colon \lVert Z_\bullet \rVert \to Z$ is a quasifibration. We will do this using Theorem \ref{thm:doldthom}. Since $Z$ is a CW-complex, it is locally contractible, so we may take $\mathcal{U}$ to be a basis for its topology consisting of contractible subsets. To apply Theorem \ref{thm:doldthom}, we need to know that $h$ is surjective, which will be true if and only if $h_0\colon Z_0 \to Z$ is surjective. But we may assume without loss of generality that $f_0\colon X_0\to X$ is $\pi_0$-surjective.\footnote{Suppose that Lemma \ref{l:rwappendix} holds with this assumption and let $X_\bullet$ be any augmented semi-simplicial space with basepoint $x\in X=X_{-1}$. Define $X^\prime$ to be the smallest union of path-components of $X$ containing the image of $\lVert X_\bullet \rVert$ (which is the same as the image of $X_0$) and define $X_n^\prime = X_n$ for $n\geqslant 0$. If $x\notin X^\prime$ then the lemma is vacuously true (since both $\mathrm{hofib}_x(\lVert X_\bullet \rVert \to X)$ and $\lVert \mathrm{hofib}_x(f_\bullet) \rVert$ are empty). Otherwise, applying the lemma to $X_{\bullet}^\prime$ we see that $\lVert \mathrm{hofib}_x(f_\bullet) \rVert \to \lVert X_\bullet \rVert \to X^\prime$ is a homotopy fibre sequence over $x$. Adding disjoint path-components to the third space in a homotopy fibre sequence does not affect the property of being a homotopy fibre sequence, so $\lVert \mathrm{hofib}_x(f_\bullet) \rVert \to \lVert X_\bullet \rVert \to X$ is also a homotopy fibre sequence over $x$.} By construction, this is equivalent to surjectivity of the map $g_0\colon Y_0\to X$. Now $h_0$ is the pullback of $g_0$ along $w$, and is therefore also surjective. Let $U\in \mathcal{U}$. We need to show that the restriction $h|_U\colon h^{-1}(U)\to U$ is a quasifibration. Let $z\in U$. Since each $h_n\colon Z_n \to Z$ is a Serre fibration, so is its restriction $h_n|_U\colon h_n^{-1}(U)\to U$. Hence we have weak equivalences \[ h_n^{-1}(z) \xrightarrow{\sim} \mathrm{hofib}_z(h_n|_U) \xrightarrow{\sim} h_n^{-1}(U), \] where the second weak equivalence is due to the fact that $U$ is contractible. We therefore have a levelwise weak equivalence $h_{\bullet}^{-1}(z) \to h_{\bullet}^{-1}(U)$, and hence a weak equivalence on thick geometric realisations $\lVert h_{\bullet}^{-1}(z) \rVert \to \lVert h_{\bullet}^{-1}(U) \rVert$. But this is the inclusion $h^{-1}(z)\to h^{-1}(U)$, so the composition of the maps \[ h^{-1}(z) \longrightarrow \mathrm{hofib}_z(h|_U) \longrightarrow h^{-1}(U) \] is a weak equivalence. The right-hand map is a weak equivalence since $U$ is contractible, so by 2-out-of-3, the left-hand map is also a weak equivalence. Since $z\in U$ was arbitrary, we have shown that $h|_U$ is a quasifibration. Theorem \ref{thm:doldthom} now tells us that $h$ is a quasifibration, and so the map $r$ in \eqref{eq:tblr} is a weak equivalence. We showed above that $\ell$ is a weak equivalence and $b$ is a homeomorphism, so the map $t$ is also a weak equivalence. 
Thus the sequence $\lVert \mathrm{hofib}_z(h_\bullet) \rVert \to \lVert Z_\bullet \rVert \to Z$ is a homotopy fibre sequence for any $z\in Z$. By diagram \eqref{eq:reduction-to-special-case} this implies that $\lVert \mathrm{hofib}_x(f_\bullet) \rVert \to \lVert X_\bullet \rVert \to X$ is a homotopy fibre sequence for any $x\in X$, as required. \end{proof} \phantomsection \addcontentsline{toc}{section}{References} \renewcommand{\normalfont\small}{\normalfont\small} \setlength{\bibitemsep}{0pt} \printbibliography \noindent {\itshape Institutul de Matematică Simion Stoilow al Academiei Române, 21 Calea Griviței, 010702 București, România} \noindent {\tt [email protected]} \end{document}
\begin{document} \newtheorem{thm}{Theorem} \newtheorem{lem}[thm]{Lemma} \title*{Polynomial estimates over exponential curves in~$\mathbb C^2$} \author{Shirali Kadyrov and Yershat Sapazhanov} \institute{Shirali Kadyrov \at Suleyman Demirel University, Kaskelen, Kazakhstan 040900, \\ \email{[email protected]} \and Yershat Sapazhanov \at Suleyman Demirel University, Kaskelen, Kazakhstan 040900,\\\email{[email protected]}} \maketitle \abstract{For any complex $\alpha$ with non-zero imaginary part we show that a Bernstein-Walsh type inequality holds on the piece of the curve $\{(e^z,e^{\alpha z}) : z \in \mathbb C\}$. Our result extends a theorem of Coman-Poletsky \cite{CP10} where they considered real-valued $\alpha$.} \section{Introduction} \label{sec:1} In pluripotential theory, one is often interested in the growth of polynomials of several variables. A classical Bernstein-Walsh inequality \cite{Ra95} gives important implications in this direction. Recently, there has been significant research work carried out on obtaining Bernstein-Walsh type inequalities, see e.g.\ \cite{AO13, Br18, BBL10, CP10, CPo03, CP03, KL16, Ne06}. We now recall the result of Coman-Poletsky in \cite{CP10}. Let $\alpha \in (0,1) \backslash \mathbb Q$ and $K \subset \mathbb C^2$ be a compact set given by $K=\{(e^z,e^{\alpha z}) : |z| \le 1\}.$ Define \begin{equation}\label{eq:En} E_n(\alpha)=\sup\{\|P\|_{\Delta^2} : P \in \mathcal P_n, \|P\|_K \le 1\}, \end{equation} where $\mathcal P_n$ is the space of polynomials in $\mathbb C[z,w]$ of degree at most $n$, $\Delta^2$ is the closed bidisk $\{(z,w) \in \mathbb C^2 : |z|,|w| \le 1\},$ and $\|\cdot\|_{\Delta^2}, \|\cdot\|_K$ are the uniform norms defined on the compact sets $\Delta^2$ and $ K$, respectively. Let $e_n(\alpha):=\log E_n(\alpha)$. Then, Coman-Poletsky prove the following. \begin{theorem}\label{thm:cp} For any Diophantine $\alpha \in (0,1)$ one has $$\frac{n^2 \log n}{2}-n^2 \le e_n(\alpha) \le \frac{n^2\log n}{2} + 9 n^2 +Cn,$$ for any $n \ge 1$, where the constant $C>0$ depends on $\alpha$. \end{theorem} Here, the term `Diophantine' comes from Diophantine approximation theory and refers to a condition quantifying how well a real number can be approximated by rationals. For a proper definition see \cite{CP10}. As a consequence of Theorem~\ref{thm:cp} one gets the Bernstein-Walsh type inequality \begin{equation} \label{eq:B} |P(z,w)| \le \|P\|_K E_n(\alpha) e^{n \log^+\max\{|z|,|w|\}}, \end{equation} for any $(z,w) \in \mathbb C^2$ and $P \in \mathcal P_n$, where $E_n(\alpha)=e^{e_n(\alpha)}$ is determined by the theorem above.
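For the reader's convenience, one way to obtain \eqref{eq:B} directly from the definition \eqref{eq:En} is the following. The one-variable Bernstein--Walsh inequality (see \cite{Ra95}), applied to the polynomial $\lambda \mapsto Q(\lambda z/R,\lambda w/R)$ of degree at most $n$ with $R=\max\{1,|z|,|w|\}$, shows that every $Q \in \mathcal P_n$ satisfies $|Q(z,w)| \le \|Q\|_{\Delta^2}\, e^{n \log^+\max\{|z|,|w|\}}$ on $\mathbb C^2$. Applying this to $Q = P/\|P\|_K$ (for $P \not\equiv 0$) and using $\|Q\|_{\Delta^2} \le E_n(\alpha)$, which holds by \eqref{eq:En} since $\|Q\|_K = 1$, we get \[ |P(z,w)| \;=\; \|P\|_K\,|Q(z,w)| \;\le\; \|P\|_K\, E_n(\alpha)\, e^{n \log^+\max\{|z|,|w|\}}. \]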
We note that the inequality (\ref{eq:B}) holds for any $\alpha \in \mathbb C$ and finding the optimal bounds for $e_n(\alpha)$ is what makes it challenging in general. The proof of Theorem~\ref{thm:cp} makes use of the well-developed continued fraction expansion theory. We note that the theorem considers real-valued $\alpha$'s only. In this note, we aim to extend Theorem~\ref{thm:cp} to complex $\alpha$'s. We now state our main result. \begin{theorem} \label{thm:main} Let $\alpha=\alpha_1 + i \alpha_2$, $\alpha_1,\alpha_2 \in \mathbb R$ be given such that $|\alpha| <1 $ and $\alpha_2 \ne 0$. Then, $$\frac{n^2 \log n}{2}-n^2 \le e_n(\alpha) \le \frac{n^2 \log n}2+8 n^2-n \log |\alpha_2|.$$ \end{theorem} We remark here that our proof of Theorem~\ref{thm:main} closely follows that of \cite{CP10}. However, in our case we do not need to appeal to continued fraction theory and as a result our proof requires less effort. Nonetheless, it holds true for \emph{all} non-real complex numbers $\alpha$. \section{Proof of main result} \label{sec:2} In this section we prove our main result, Theorem~\ref{thm:main}. For any real $x$ let $\langle x \rangle$ denote the closest integer to $x$. We need the following lemma. \begin{lem}[cf. Lemma~2.4 in \cite{CP10}]\label{prod} Let $k,x,y \in \mathbb{Z},$ $x \le y,$ $k \ge 1.$ Then (with $0^0:=1$) \[ \prod_{j=x}^{y}|j-k \alpha| \ge \left\{ \begin{array}{ll} (\frac{y-x}{2e})^{y-x} & \textrm{ , if } \langle k \alpha_1 \rangle \notin [x,y]\textrm{,}\\ (\frac{y-x}{2e})^{y-x} \cdot | k \alpha_2 | & \textrm{ , if } x \le \langle k \alpha_1 \rangle \le y \end{array} \right. \] \end{lem} \begin{proof} We argue as in Lemma~2.4 of \cite{CP10}. Using Stirling's formula $$ e^{7/8} \le \frac{m!}{(\frac{m}{e})^m \sqrt{m}} \le e, \quad \forall m \in \mathbb N, $$ one gets $$ \prod_{j=1}^{m} (j-\frac{1}{2}) = \frac{(2m)!}{2^{2m} \cdot m!} \ge \left(\frac{m}{e}\right)^m. $$ Let $j_0 =\langle k \alpha_1\rangle$; if $j \ne j_0$ then $$ |j-k \alpha| \ge |j- k \alpha_1| \ge |j-j_0|- \frac{1}{2}. $$ Hence, when $j_0 <x,$ we get $$ \prod_{j=x}^{y} |j-k\alpha| \ge \frac{1}{2}(y-x)! $$ Similarly, if $y <j_0,$ $$ \prod_{j=x}^{y} |j-k\alpha| \ge \frac{1}{2}(y-x)! $$ On the other hand, for $x \le j_0 \le y,$ we obtain $$ \prod_{j=x}^{y}|j-k \alpha| \ge \Big{(} \frac{j_0-x}{e} \Big{)} ^{j_0-x} \cdot \Big{(} \frac{y - j_0}{e} \Big{)} ^{y - j_0} \cdot |j_0 - k \alpha| \ge \Big{(}\frac{y-x}{2e} \Big{)}^{y-x} \cdot |j_0-k \alpha|. $$ Moreover, $|j_0-k \alpha| \ge |k \alpha_2|$, which finishes the proof. \qed \end{proof} It is easy to see that the space $\mathcal{P}_n$ of polynomials in $\mathbb C[z,w]$ of degree at most $n$ has dimension equal to $N+1$, where $N := (n^2+3n)/2.$ We are now ready to proceed with the proof of Theorem~\ref{thm:main}. \begin{proof} Our argument closely follows that of \cite{CP10}. We first obtain the upper estimate for the exponent $e_n(\alpha)$. For a given polynomial $R(\lambda) = \sum_{j=0}^{m} c_j \lambda^j$ of a single variable we let $D_R$ denote the following differential operator $$ D_R = R \bigg( \frac{d}{dz} \bigg) = \sum_{j=0}^{m}c_j \frac{d^j}{dz^j }. $$ Then, $\forall \alpha \in \mathbb{C},$ we have \begin{equation} \label{diffeq} D_R (e^{\alpha z})|_{z=0} = \sum_{j=0}^{m}c_j \alpha^j e^{\alpha \cdot 0} = R(\alpha). \end{equation} Let $P(z,w)=\sum_{j+k \le n} c_{jk} z^j w^k \in \mathcal{P}_n$, $ n \ge 1$, be given with $\|P\|_K \le 1$, where as before $K = \{ ( e^z , e^{\alpha z}) : |z| \le 1 \}.
$ We set $$ f(z) := P( e^z , e^{\alpha z}) = \sum_{j+k \le n} c_{jk} e^{(j+k \alpha)z}. $$ To obtain the upper bound for $e_n(\alpha)$ it suffices to estimate the coefficients $c_{lm}$ from above. To this end, we define the following polynomials $R_{lm}$ of degree $N$ $$ R_{lm}(\lambda) = \prod_{j+k \le n, (j,k) \ne (l,m)} (\lambda - j - k \alpha )=\sum _{t=0}^N a_t \lambda^t. $$ Using (\ref{diffeq}) we have \begin{eqnarray} D_{R_{lm}}f(z)|_{z=0} &=& \sum_{j+k \le n}c_{jk}R_{lm}(j+k \alpha) \nonumber\\ &=& c_{lm}R_{lm}(l+m \alpha) \nonumber\\ &=& c_{lm} \beta_{lm} \nonumber \\ \beta_{lm} &:=& \prod_{j+k \le n, (j,k)\ne (l,m)}(l-j+(m-k)\alpha ) \nonumber \end{eqnarray} Using Cauchy's estimates $|f^{(t)}(0)|\le t! \le N^t$ for $t \le N,$ we arrive at $$ \Big{|} D_{R_{lm}}f(z) |_{z=0} \Big{|} = \Bigg{|} \sum_{t=0}^{N} a_t f^{(t)}(0)\Bigg{|} \le \sum_{t=0}^{N} |a_t| N^t \le (N+n)^N, $$ where the last inequality follows from Vieta's formulas and the fact that \\ $| j+k \alpha | \le n.$ Therefore \begin{eqnarray} \label{logest} \log(|c_{lm}\beta_{lm}|) \le N \log (N+n) \le n^2 \log n + 3.7n^2. \end{eqnarray} We now study the lower estimates on $|\beta _{lm}|$ which will lead to upper estimate for $c_{lm}$. Clearly, $$ |\beta_{lm}| \ge \prod_{k=0,k\ne m}^{n} \prod_{j=0}^{n-k} |l-j+(m-k)\alpha| = A_1 A_2, $$ Where $$ A_1 = \prod_{k=0}^{m-1} \prod_{j=0}^{n-k}|j-l-(m-k)\alpha| = \prod_{k=1}^{m} \prod_{j=-l}^{n-l-m+k}|j-k \alpha|, $$ $$ A_2 = \prod_{k=m+1}^{n} \prod_{j=0}^{n-k}|l-j-(k-m)\alpha| = \prod_{k=1}^{n-m} \prod_{j=l+m+k-n}^{l}|j-k \alpha|, $$ From Lemma~\ref{prod} we see that $$ A_1 \ge \prod_{k=1}^{m} \Big{(} \frac{n-m+k}{2e} \Big{)}^{n-m+k} \cdot |k \alpha_2 |, $$$$ A_2 \ge \prod_{k=1}^{n-m} \Big{(} \frac{n-m+k}{2e} \Big{)}^{n-m+k} \cdot | k \alpha_2|. $$ Thus, \begin{eqnarray} |\beta _{lm}| \ge A_1 A_2 &\ge& n! \cdot |\alpha_2|^n \cdot \prod_{k=1}^{n} \Big{(} \frac{k}{2e} \Big{)}^{k}\nonumber\\ &\ge& |\alpha_2|^n \cdot e^{-2n^2} \cdot \prod_{k=1}^n k^k \nonumber.\end{eqnarray} Using $\prod_{k=1}^n k^k \ge \frac{n^2 \log n}{2} - \frac{n^2}{4}$ (cf. Lemma~2.1 in \cite{CP03}) we get \begin{eqnarray} \log |\beta_{lm}| &\ge& \frac{n^2 \log n}{2} - \frac{n^2}{4} + n \log|\alpha_2|-2n^2 \nonumber\\ &\ge& \frac{n^2 \log n}{2} - \frac{9n^2}{4} + n \log|\alpha_2|\nonumber \end{eqnarray} Thus, using (\ref{logest}) we arrive at \begin{eqnarray} \log |c_{lm}| &\le& n^2 \log n + 3.7 n^2 - \log |\beta_{lm}| \nonumber\\ &\le& \frac{n^2}{2} \log n + 5.95 n^2 - n \log |\alpha_{2}| \nonumber. \end{eqnarray} Clearly, $\|P\|_{\Delta^2} \le \sum_{j+k \le n} |c_{jk}| \le (N+1) \max_{j+k \le n} |c_{jk}|$. Recalling $N = (n^2+3n)/2$ and using (\ref{eq:En}) we obtain that $$e_n(\alpha)=\log E_n(\alpha) \le \log(N+1) + \frac{n^2}{2} \log n + 5.95 n^2 - n \log |\alpha_{2}|,$$ Since $\log(N+1) \le N = (n^2 +3n)/2 \le 2n^2$ for all $n \ge 1,$ we get that $$e_n(\alpha) \le \frac{n^2}{2} \log n + 8 n^2 - n \log |\alpha_{2}|,$$ which gives the upper estimate. We now turn to obtaining lower estimate for $e_n(\alpha).$ To this end, we would like to construct a polynomial whose $\Delta^2$-norm is large compared to its $K$-norm. 
We want to show that we can pick coefficients of $P(z,w)=\sum_{k+j \le n} c_{jk} z^kw^j \in \mathcal P_n$ such that the Maclaurin series expansion of $f(t):=P(e^t, e^{\alpha t})=\sum_{j +k \le n} c_{jk} e^{(k+\alpha j)t}$ is $f(t)=\sum_{k=N}^\infty a_k t^k,$ that is, $f(t)$ has a zero of order at least $N$ at $0$, where as before $\dim \mathcal P_n = N+1.$ In other words, we want $f^{(k)}(0)=0$ for all $0 \le k \le N-1.$ Thus, we get a system of $N$ linear equations $$\sum_{j+k \le n} c_{jk} (k+\alpha j)^m=0,\,\, m=0,1,\dots,N-1.$$ List $\{(k+\alpha j) : k+j \le n\}$ as $\{a_1,a_2,\dots, a_{N+1}\}$; then this system of $N$ equations in the $N+1$ unknowns $c_{jk}$ has a nonzero solution provided that the following Vandermonde matrix \[ \left( \begin{array}{ccccc} 1 & 1 & 1&\dots & 1 \\ a_1 &a_2 & a_3 &\dots & a_N \\ a_1^2 &a_2^2 & a_3^2 & \dots & a_N^2 \\ \vdots & \vdots & \vdots &\ddots & \vdots\\ a_1^{N-1} & a_2^{N-1} &a_3^{N-1} &\dots & a_N^{N-1} \end{array} \right)\] is invertible. Hence, it suffices to show that $a_j \ne a_k$ unless $j=k$. Indeed, $k_1+\alpha j_1=k_2+\alpha j_2$ implies $\alpha_2 j_1=\alpha_2 j_2,$ but $\alpha_2 \ne 0$ so that we must have $j_1=j_2.$ This in turn gives $k_1=k_2.$ So, the system has a nonzero solution and we can make sure that $f(t)=P(e^t, e^{\alpha t})=t^N g(t)$ for some entire holomorphic function $g(t).$ We set $h(t):= f(t)/\|P\|_K.$ Then, $\|h\|_{\Delta } = \|f\|_\Delta/{\|P\|_K} = 1$ as $\|P\|_K = \sup_{|z|\le 1}|P(e^z,e^{\alpha z})| = \sup_{ |t|\le 1}|f(t)|$. Fix $r \ge 1$ (to be determined later) and consider $|t| = r$; then the Maximum Modulus Principle for holomorphic functions yields $$\sup_{|t| = r} |h(t)| =\frac{\sup_{|t|= r}|f(t)|}{\|P\|_K} = \frac{r^N\sup_{|t| = r} |g(t)|}{\|P\|_K} \ge \frac{r^N \sup_{|t|= 1}|g(t)|}{\|P\|_K} \ge r^N,$$ since $\sup_{|t|=1}|g(t)| = \sup_{|t|=1}|f(t)| = \|P\|_K$. Equation (\ref{eq:B}) gives $$|f(t)|=|P(e^t,e^{\alpha t})| \le \|P\|_K E_n(\alpha) e^{n \log^+\max\{|e^t|,|e^{\alpha t}|\}}.$$ Hence, $$r^N \le \sup_{|t| = r} |h(t)| \le E_n(\alpha) e^{n r} \textrm{ for any } r \ge 1.$$ Now taking $r=N/n$ we get $$e_n(\alpha) \ge N\log(N/n) - N \ge \frac{n^2}2 \log n - n^2.$$ This finishes the proof. \qed \end{proof} \end{document}
\begin{document} \title{Ties in Multiwinner Approval Voting} \begin{abstract} We study the complexity of deciding whether there is a tie in a given approval-based multiwinner election, as well as the complexity of counting tied winning committees. We consider a family of Thiele rules, their greedy variants, Phragm{\'e}n's sequential rule, and Method of Equal Shares. For most cases, our problems are computationally hard, but for sequential rules we find an FPT algorithm for discovering ties (parameterized by the committee size). We also show experimentally that in elections of moderate size ties are quite frequent. \end{abstract} \section{Introduction} In an approval-based multiwinner election, a group of voters expresses their preferences about a set of candidates---i.e., each voter indicates which of them he or she approves---and then, using some prespecified rule, the organizer selects a winning committee (a fixed-size subset of the candidates). Multiwinner elections can be used to resolve very serious matters---such as choosing a country's parliament---or rather frivolous ones---such as choosing the tourist attractions that a group of friends would visit---or those positioned anywhere in between these two extremes---such as choosing a department's representation for the university senate. In large elections, one typically does not expect ties to occur (although surprisingly many such cases are known\footnote{\texttt{https://en.wikipedia.org/wiki/List\_of\_close\_election\_results}}), but for small and moderately sized ones the issue is unclear. While perhaps a group of friends may manage not to spoil their holidays upon discovery that they were as willing to visit one monument as another, a person not selected for a university senate due to a tie may be quite upset, especially if this tie is discovered after announcing the results. To address such possibilities, we study the following three issues: \begin{enumerate} \item We consider the complexity of detecting if two or more committees tie under a given voting rule. While for most rules this problem turns out to be intractable, for many settings we find practical solutions (in most cases it is either possible to use a natural integer linear programming trick or an FPT algorithm that we provide). \item We consider the complexity of counting the number of winning committees. We do so because being able to count winning committees would be helpful in sampling them uniformly. Unfortunately, in this case we mostly find hardness and hardness-of-approximation results. \item We generate a number of elections, both synthetic and based on real-life data, and evaluate the frequency of ties. It turns out to be surprisingly high. \end{enumerate} We consider a subfamily of Thiele rules \citep{Thie95a,azi-gas-gud-mac-mat-wal:c:approval-multiwinner,lac-sko:c:approval-thiele} that includes the multiwinner approval rule (AV), the approval-based Chamberlin--Courant rule (CCAV), and the proportional approval voting rule (PAV), as well as their greedy variants. We also study satisfaction approval voting (SAV), the Phragm{\'e}n rule, and Method of Equal Shares (MEqS).
This set includes rules appropriate for selecting committees of individually excellent candidates (e.g., AV or SAV), diverse committees (e.g., CCAV or GreedyCCAV), or proportional ones (e.g., PAV, GreedyPAV, Phragm{\'e}n, or MEqS); see the works of \citet{elk-fal-sko-sli:j:multiwinner-properties} and \citet{fal-sko-sli-tal:b:multiwinner-voting} for more details on classifying multiwinner rules with respect to their application. We summarize our results in Table~\ref{tab:results}. See also the textbook of \citet{lac-sko:b:multiwinner-approval}. The issue of ties and tie-breaking has already received quite some attention in the literature, although typically in the context of single-winner voting. For example, \citet{obr-elk:c:random-ties-matter} and \citet{obr-elk-haz:c:ties-matter} consider how various tie-breaking mechanisms affect the complexity of manipulating elections, and recently \citet{xia:c:probability-of-ties} has made a breakthrough in studying the probability that ties occur in large, randomly-generated single-winner elections. \citet{xia:t:tie-breaking} also developed a novel tie-breaking mechanisms, which can be used for some multiwinner rules, but he did not deal with such approval rules as we study here. Finally, \citet{con-rog-xia:c:mle} have shown that deciding if a candidate is a tied winner in an STV election is ${{\mathrm{NP}}}$-hard. While STV is not an approval-based rule and they focused on the single-winner setting, many of our results are in similar spirit. \begin{table}[t] \centering \begin{tabular}{l|cc} \toprule Rule & \textsc{Unique-Committee}& \textsc{\#Winning-Committees} \\ \midrule AV & ${{\mathrm{P}}}$ & ${{\mathrm{P}}}$ \\ SAV & ${{\mathrm{P}}}$ & ${{\mathrm{P}}}$ \\ \midrule CCAV & $\ensuremath{{\mathrm{coNP}}}$-hard, $\mathrm{coW[1]}$-h. ($k$) & ${{\mathrm{\#P}}}$-hard, ${{\mathrm{\#W[1]}}}$-hard ($k$)\\ PAV & $\ensuremath{{\mathrm{coNP}}}$-hard, $\mathrm{coW[1]}$-h. ($k$) & ${{\mathrm{\#P}}}$-hard, ${{\mathrm{\#W[1]}}}$-hard ($k$)\\ \midrule GreedyCCAV & $\ensuremath{{\mathrm{coNP}}}$-com., ${{\mathrm{FPT}}}(k)$ & ${{\mathrm{\#P}}}$-hard, ${{\mathrm{\#W[1]}}}$-hard ($k$)\\ GreedyPAV & $\ensuremath{{\mathrm{coNP}}}$-com., ${{\mathrm{FPT}}}(k)$ & ${{\mathrm{\#P}}}$-hard, ${{\mathrm{\#W[1]}}}$-hard ($k$)\\ Phragm{\'e}n & $\ensuremath{{\mathrm{coNP}}}$-com., ${{\mathrm{FPT}}}(k)$ & ${{\mathrm{\#P}}}$-hard, ${{\mathrm{\#W[1]}}}$-hard ($k$)\\ MEqS (Phase 1) & $\ensuremath{{\mathrm{coNP}}}$-com., ${{\mathrm{FPT}}}(k)$ & ${{\mathrm{\#P}}}$-hard\\ \bottomrule \end{tabular} \caption{\label{tab:results}Summary of our complexity results.} \end{table} \section{Preliminaries} By $\mathbb{R}_+$ we denote the set of nonnegative real numbers. For each integer $t$, we write $[t]$ to mean $\{1, \ldots, t\}$. We use the Iverson bracket notation, i.e., for a logical expression $F$, we interpret $[F]$ as~$1$ if $F$ is true and as~$0$ if it is false. Given a graph $G$, we write $V(G)$ to denote its set of vertices and $E(G)$ to denote its set of edges. For a vertex $v$, by $d(v)$ we mean its degree (i.e., the number of edges that touch it). An election $E = (C,V)$ consists of a set of candidates $C = \{c_1, \ldots, c_m\}$ and a collection of voters $V = (v_1, \ldots, v_n)$, where each voter $v_i$ has a set $A(v_i) \subseteq C$ of candidates that he or she approves. We refer to this set as~$v_i$'s approval set or $v_i$'s vote, interchangeably. 
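As a purely illustrative aside (the encoding and the helper below are ours and hypothetical; they are not used in any of the formal arguments), an approval election can be represented simply by listing the approval sets:
\begin{verbatim}
# Toy encoding of an approval election E = (C, V): candidates are strings and
# each vote is the set A(v) of candidates that voter v approves.
candidates = {"c1", "c2", "c3", "c4"}
votes = [frozenset({"c1", "c2"}),   # A(v1)
         frozenset({"c1"}),         # A(v2)
         frozenset({"c3", "c4"})]   # A(v3)

def approvers(c, votes):
    """Indices of the voters whose approval set contains candidate c."""
    return [i for i, vote in enumerate(votes) if c in vote]

print(approvers("c1", votes))  # -> [0, 1]
\end{verbatim}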
A multiwinner voting rule $f$ is a function that given an election $E = (C,V)$ and committee size $k \in [|C|]$ outputs a family of size-$k$ subsets of~$C$, i.e., a family of winning committees. Below we describe the rules that we focus on. Let $E = (C,V)$ be an election and let $k$ be the committee size. Under the multiwinner approval rule ($\textsc{AV}$), each voter assigns a single point to each candidate that he or she approves and winning committees consist of $k$ candidates with the highest scores. Satisfaction approval voting ($\textsc{SAV}$) proceeds analogously, except that each voter $v \in V$ assigns $\nicefrac{1}{|A(v)|}$ points to each candidate he or she approves. In other words, under AV each voter can give a single point to each approved candidate, but under SAV he or she needs to split a single point equally among them. Next we consider the class of Thiele rules, defined originally by \citet{Thie95a} and discussed, e.g., by \citet{lac-sko:c:approval-thiele} and \citet{azi-gas-gud-mac-mat-wal:c:approval-multiwinner}. Given a nondecreasing weight function $w \colon \mathbb{N} \rightarrow \mathbb{R}_+$ such that $w(0) = 0$, we define the $w$-Thiele score ($w$-$\textrm{score}$) of a committee $S = \{s_1, \ldots, s_k\}$ in election $E$ to be: \[ \textstyle w\hbox{-}\textrm{score}_E(S) = \sum_{v \in V} w(|A(v) \cap S|). \] The $w$-Thiele rule outputs all committees with the highest $w$-score. We require that for each of our weight functions $w$, it is possible to compute each value $w(i)$ in polynomial time with respect to~$i$. Additionally, we focus on functions such that $w(1) = 1$ and for each positive integer $i$ it holds that $w(i)-w(i-1) \geq w(i+1)-w(i)$. We refer to such functions, and the Thiele rules that they define, as $1$-concave. Three best-known 1-concave Thiele rules include the already defined AV rule, which uses function $w_\textsc{AV}(t) = t$, the approval-based Chamberlin--Courant rule ($\textsc{CCAV}$), which uses function $w_\textsc{CCAV}(t) = [t \geq 1]$, and the proportional approval voting rule (${{\mathrm{P}}}av$), which uses function $w_{{\mathrm{P}}}av(t) = \sum_{i=1}^t\nicefrac{1}{i}$. While it is easy to compute some winning committee under the AV rule in polynomial time (out of possibly exponentially many), for the other Thiele rules, including CCAV and PAV, even deciding if a committee with at least a given score exists is ${{\mathrm{NP}}}$-hard (see the works of \citet{pro-ros-zoh:j:proportional-representation} and \citet{bet-sli-uhl:j:mon-cc} for the case of CCAV, and the works of \citet{azi-gas-gud-mac-mat-wal:c:approval-multiwinner} and \citet{sko-fal-lan:j:collective} for the general case). Hence, sometimes the following greedy variants of Thiele rules are used ($E$ is the input election and $k$ is the desired committee size): \begin{enumerate} \item[] Let $f$ be a $w$-Thiele rule. Its greedy variant, denoted Greedy-$f$, first sets $W_0 := \emptyset$ and then executes $k$ iterations, where for each $i \in [k]$, in the $i$-th iteration it computes $W_i := W_{i-1} \cup \{c\}$ such that $c$ is a candidate in $C \setminus W_{i-1}$ that maximizes the $w$-score of $W_i$. Finally, it outputs $W_k$. In case of internal ties, i.e., if at some iteration there is more than one candidate that the algorithm may choose, the algorithm outputs all committees that can be obtained for some way of resolving each of these ties. In other words, we use the parallel-universes tie-breaking model~\citep{con-rog-xia:c:mle}. 
\end{enumerate} When we discuss the operation of some Greedy-$f$ rule on election $E$ and we discuss the situation after its $i$-th iteration, where, so far, subcommittee $W_i$ was selected, then by the score of a (not-yet-selected) candidate~$c$ we mean the value $w\hbox{-}\textrm{score}_E(W_i \cup \{c\}) - w\hbox{-}\textrm{score}_E(W_i)$, i.e., the marginal increase of the $w$-score that would result from selecting $c$. We refer to the greedy variants of CCAV and PAV as Greedy\-CCAV and GreedyPAV (in the literature, these rules are also sometimes called \emph{sequential} variants of CCAV and PAV, see, e.g., the book of \citet{lac-sko:b:multiwinner-approval}). Given a greedy variant of a 1-concave Thiele rule, it is always possible to compute at least one of its winning committees in polynomial time by breaking internal ties arbitrarily. Further, it is well-known that the $w$-score of this committee is at least a $1-\nicefrac{1}{e} \approx 0.63$ fraction of the highest possible $w$-score; this follows from the classic result of \citet{nem-wol-fis:j:submodular} and the fact that $w$-score is monotone and submodular. The Phragm{\'e}n (sequential) rule proceeds as follows (see, e.g., the work of \citet{san-elk-lac-fer-fis-bas-sko:c:pjr}): \begin{enumerate} \item[] Let $E = (C,V)$ be an election and let $k$ be the committee size. Each candidate costs a unit of currency. The voters start with no money, but they receive it continuously at a constant rate. As soon as there is a group of voters who approve a certain not-yet-selected candidate and who together have a unit of currency, these voters ``buy'' this candidate (i.e., they give away all their money and the candidate is included in the committee). The process stops as soon as $k$ candidates are selected. For internal ties, we use the parallel-universes tie-breaking. \end{enumerate} Method of Equal Shares (MEqS), introduced by \citet{pet-sko:laminar} and \citet{pet-pie-sko:c:meqs}, is similar in spirit, but gives the voters their ``money'' up front (we use the same notation as above): \begin{enumerate} \item[] Initially, each voter has budget equal to $\nicefrac{k}{|V|}$. The rule starts with an empty committee and executes up to $k$ iterations as follows (for each voter $v$, let $b(v)$ denote $v$'s budget in the current iteration): For each not-yet-selected candidate $c$ we check if the voters that approve $c$ have at least a unit of currency (i.e., $\sum_{v \in A(c)} b(v) \geq 1$). If so, then we compute value $\rho_c$ such that $\sum_{v \in A(c)} \min(b(v), \rho_c) = 1$, which we call the per-voter cost of $c$. We extend the committee with this candidate $c'$, whose per-voter cost $\rho_{c'}$ is lowest; the voters approving $c'$ ``pay'' for him or her (i.e., each voter $v \in A(c')$ gives away $\min(b(v), \rho_{c'})$ of his or her budget). In case of internal ties, we use the parallel-universes tie-breaking. The process stops as soon as no candidate can be selected. \end{enumerate} The above process, referred to as Phase~1 of MEqS, often selects fewer than $k$ candidates. To deal with this, we extend the committee with candidates selected by Phragm{\'e}n (started off with the budgets that the voters had at the end of Phase~1). We jointly refer to the greedy rules, Phragm{\'e}n, MEqS, and Phase~1 of MEqS as sequential rules. We assume that the reader is familiar with basic classes of computational complexity such as ${{\mathrm{P}}}$, ${{\mathrm{NP}}}$, and $\ensuremath{{\mathrm{coNP}}}$. 
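To make the greedy rules above concrete, the following minimal Python sketch (ours and purely illustrative; it is not the code used in our experiments) returns every committee that the greedy variant of a $w$-Thiele rule can output under parallel-universes tie-breaking; an output containing more than one committee is exactly a ``no''-instance of the $\textsc{Unique-Committee}$ problem defined in the next section.
\begin{verbatim}
# Greedy variant of a w-Thiele rule with parallel-universes tie-breaking:
# return the set of all committees obtainable for some way of resolving ties.
from fractions import Fraction

def w_score(w, votes, committee):
    # w-Thiele score of a committee S: sum over voters of w(|A(v) & S|).
    return sum(w(len(vote & committee)) for vote in votes)

def greedy_thiele(w, candidates, votes, k):
    results = set()
    def extend(chosen):
        if len(chosen) == k:
            results.add(chosen)
            return
        base = w_score(w, votes, chosen)
        # marginal score of every not-yet-selected candidate
        gains = {c: w_score(w, votes, chosen | {c}) - base
                 for c in candidates - chosen}
        best = max(gains.values())
        for c in (c for c, g in gains.items() if g == best):
            extend(chosen | {c})
    extend(frozenset())
    return results

# w_PAV(t) = 1 + 1/2 + ... + 1/t and w_CCAV(t) = [t >= 1], both with w(0) = 0.
w_pav = lambda t: sum(Fraction(1, i) for i in range(1, t + 1))
w_ccav = lambda t: 1 if t >= 1 else 0

candidates = frozenset({"a", "b", "c", "d"})
votes = [frozenset("ab"), frozenset("ab"), frozenset("c"), frozenset("d")]
print(greedy_thiele(w_pav, candidates, votes, 2))
# Several committees in the output: this election is a "no"-instance of
# GreedyPAV-Unique-Committee.
\end{verbatim}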
${{\mathrm{\#P}}}$ is the class of functions that can be expressed as counting accepting paths of nondeterministic polynomial-time Turing machines. Additionally, we consider parameterized complexity classes such as ${{\mathrm{FPT}}}$ and ${{\mathrm{W[1]}}}$. ${{\mathrm{\#W[1]}}}$ is a parameterized counting class which relates to ${{\mathrm{W[1]}}}$ in the same way as ${{\mathrm{\#P}}}$ relates to ${{\mathrm{NP}}}$~\citep{flu-gro:j:parameterized-counting}. When discussing counting problems, it is standard to use Turing reductions: A counting problem $\#A$ reduces to a counting problem $\#B$ if there is a polynomial-time algorithm that solves $\#A$, provided it has oracle access to $\#B$ (i.e., it can solve $\#B$ in constant time).\footnote{For $\#{{\mathrm{W[1]}}}$, the running time can even be larger, but our $\#{{\mathrm{W[1]}}}$-hardness proofs use polynomial-time reductions.} \section{Unique Winning Committee}\label{sec:unique} In this section we consider the problem of deciding if a given multiwinner rule outputs a unique committee in a given election. Formally, we are interested in the following problem. \begin{definition} Let $f$ be a multiwinner voting rule. In the $f$-$\textsc{Unique-Committee}$ problem we are given an election~$E$ and a committee size $k$, and we ask if $|f(E,k)| = 1$. \end{definition} It is a folk result that for AV and SAV this problem is in ${{\mathrm{P}}}$ (see the beginning of Section~\ref{sec:counting} for an argument). For Thiele rules other than AV, the situation is more intriguing. In particular, already the problem of deciding if a given committee is winning under the CCAV rule is $\ensuremath{{\mathrm{coNP}}}$-complete~\citep{son-dey-mis:c:multiwinner-verification}. We show that for 1-concave Thiele rules other than AV the $\textsc{Unique-Committee}$ problem is $\ensuremath{{\mathrm{coNP}}}$-hard (and we conjecture that the problem is not in $\ensuremath{{\mathrm{coNP}}}$). \begin{proposition}\label{pro:thiele-unique} Let $f$ be a 1-concave $w$-Thiele rule other than AV. Then $f$-$\textsc{Unique-Committee}$ is $\ensuremath{{\mathrm{coNP}}}$-hard. \end{proposition} \begin{proof} Let $x = w(2) - w(1)$ and assume, for now, that $x < 1$. We give a reduction from \textsc{Independent-Set} to the complement of $f$-$\textsc{Unique-Committee}$. An instance of \textsc{Independent-Set} consists of a graph $G$ and an integer $k$, and we ask if there are $k$ vertices no two of which are connected by an edge. Let $G'$ be a graph obtained from $G$ by adding $k$ vertices such that each of the new vertices is connected to each of the old ones (but the new vertices are not connected to each other). If $G$ does not have a size-$k$ independent set, then $G'$ has a unique one, and if $G$ has at least one size-$k$ independent set, then $G'$ has at least two. Let us denote the vertices of $G'$ as $V(G') = \{v_1, \ldots, v_n\}$ and its edges as $E(G') = \{e_1, \ldots, e_m\}$. Let $\delta$ be the highest degree of a vertex in $V(G')$. We fix the committee size to be $k$ and we form an election $E$ with candidate set $V(G')$ and with the following voters: \begin{enumerate} \item For each edge $e_\ell = \{v_i,v_j\}$ there is a single voter who approves $v_i$ and $v_j$. \item For each vertex $v_i$ there are $\delta-d(v_i)$ voters approving~$v_i$ (where $d(v_i)$ denotes the degree of $v_i$ in $G'$). \end{enumerate} Consider a set of $k$ vertices from $V(G')$. If this set is an independent set, then, interpreted as a committee in election~$E$, it has $w$-score equal to $\delta k$. 
On the other hand, if $S$ is not an independent set, then its score is at most $(\delta k-1)+x < \delta k$. We know that $G'$ has an independent set of size $k$. If $G$ also has one, then our election has at least two winning committees and, otherwise, the winning committee is unique. Let us now consider the case that $x= 1$. Since $f$ is not AV, there certainly is an integer $t$ such that $w(t)-w(t-1) = 1$ and $w(t+1)-w(t) < 1$. In this case, we modify the reduction by adding $t-1$ candidates approved by every voter and changing the committee size to be $t+k-1$. \end{proof} \iffalse It turns out that even a seemingly easier problem of deciding if there is a unique committee that has at least a given score is $\ensuremath{{\mathrm{US}}}$-complete (and, so, $\ensuremath{{\mathrm{coNP}}}$-hard). The point of studying this problem is that it separates the issue of finding the score of the winning committee(s) from that of deciding if only a single committee wins. \begin{definition} Let $f$ be a $w$-Thiele rule. In the $f$-\textsc{Unique-Threshold-Committee} problem we are given an election $E$, committee size $k$, score threshold $y$, and we ask if there is exactly one committee whose $w$-score is at least $y$. \end{definition} \begin{theorem}\label{thm:threshold-uniq-com-us-memb-for-thiele} For each 1-concave Thiele rule $f$, $f \neq$ AV, $f$-\textsc{Unique-Threshold-Committee} is $\ensuremath{{\mathrm{US}}}$-complete. \end{theorem} \begin{proof} We first consider membership in $\ensuremath{{\mathrm{US}}}$. For a given rule $f$, assume we are given an instance consisting of an approval election $E = (C,V)$, committee size $k$, and threshold~$y$. Assume that $C = \{c_1, \ldots, c_m\}$. A polynomial-time nondeterministic Turing machine witnessing membership of our problem in $\ensuremath{{\mathrm{US}}}$ works as follows: It guesses a sequence of $k$ candidates in such a way that if $c_i$ is guessed before $c_j$ then $i < j$. Then it verifies if the score of the committee consisting of the guessed candidates is at least $y$ and, if so, it accepts on this computation path. Otherwise, it rejects on this path. For $\ensuremath{{\mathrm{US}}}$-hardness, we give a reduction from \textsc{Unique-Independent-Set} (quite similar to a reduction provided by \citet{azi-gas-gud-mac-mat-wal:c:approval-multiwinner}, as well as to the proof of Proposition~\ref{pro:thiele-unique}). The details are in Appendix~\ref{app:threshold}. \end{proof} \begin{corollary}\label{thm:threshold-uniq-com-us-compl-for-pav-ccav} ${{\mathrm{P}}}av$-$\textsc{Unique-Threshold-Committee}$ and $\textsc{CCAV}$-$\textsc{Unique-Threshold-Committee}$ are $\ensuremath{{\mathrm{US}}}$-complete, even if each voter approves at most $2$ candidates. \end{corollary} \fi For greedy variants of Thiele rules (with the natural exception of AV) and for the Phragm{\'e}n rule, deciding if the winning committee is unique is $\ensuremath{{\mathrm{coNP}}}$-complete. Our proof for the greedy variants of Thiele rules is inspired by a complexity-of-robustness proof for GreedyPAV, provided by \citet{fal-gaw-kus:c:greedy-approval-robustness}. For Phragm{\'e}n, somewhat surprisingly, their robustness proof directly implies our desired result. We also get analogous result for Method of Equal Shares and its Phase~1. \begin{theorem}\label{thm:uniqgreedyhard} Let $f$ be a 1-concave $w$-Thiele rule, $f \neq$ AV. Greedy-$f$-\textsc{Unique-Comm\-ittee} is $\ensuremath{{\mathrm{coNP}}}$-complete. 
\end{theorem} \begin{proof} Membership in $\ensuremath{{\mathrm{coNP}}}$ is clear: Given an election and committee size, we run the greedy algorithm breaking the ties arbitrarily, and we compute some winning committee $W$. Then, we rerun the same algorithm nondeterministically, at each internal tie trying each possible choice; if a given computation completes with a committee different from $W$ then it rejects (and the whole computation rejects; indeed, we found two different winning committees) and otherwise it accepts (if all paths accept, then the whole computation accepts; indeed, all ways of handling the internal ties lead to the same final committee). In the following, we focus on showing $\ensuremath{{\mathrm{coNP}}}$-hardness. Let $\delta_1 = w(1)-w(0)$, $\delta_2 = w(2)-w(1)$, and $\delta_3 = w(3)-w(2)$. For example, for $w_\textsc{PAV}$ we would have $\delta_1 = 1$, $\delta_2 = \frac{1}{2}$ and $\delta_3 = \frac{1}{3}$. By our assumptions on weight functions, we know that (a)~these numbers are rational, (b)~$\delta_1 = 1$ (but we will not use this), and that (c)~$\delta_1 \geq \delta_2 \geq \delta_3$. We additionally assume that $\delta_1 - \delta_2 > \delta_2 - \delta_3$, but later we will show how to relax this assumption. We give a reduction from \textsc{Independent Set} to the complement of Greedy-$f$-$\textsc{Unique-Committee}$. Our input consists of a graph $G$, where $V(G) = \{v_1, \ldots, v_n\}$ and $E(G) = \{e_1, \ldots, e_m\}$, and an integer $k$. The question is if there are $k$ vertices in $V(G)$ no two of which are connected by an edge. Without loss of generality, we assume that $G$ is $3$-regular, i.e., each vertex touches exactly three edges~\citep{gar-joh-sto:j:simplified-np-complete}. Let~$\alpha$ be a positive integer such that $\alpha \delta_1$ and $\alpha\frac{\delta_1-\delta_2}{\delta_1}$ are integers and $\alpha (\delta_1-\delta_2) > \delta_1$. We fix values $t = \alpha (nmk)^3$, $T = 10\alpha(nmk)^6$, and $D=\beta T^{10}$, where $\beta$ is the smallest positive integer greater than $\frac{\delta_1}{\delta_1-\delta_2}$; while we could choose smaller ones, these suffice. We form an election where the candidate set is $V(G) \cup \{p,d\}$ and we have the following three groups of voters: \begin{enumerate} \item For each edge $e_\ell = \{v_i,v_j\}$, we have $t$ voters with approval set $\{v_i,v_j,d\}$. \item For each candidate $v_i$ we have $D + T^3 + ((m-1)n+1)T-3t$ voters with approval set $\{v_i\}$, and for each pair of distinct candidates $v_i$ and $v_j$ we have $T$ voters with approval set $\{v_i,v_j\}$. \item We have $D+T^3 + nmT + 1 - \frac{(\delta_1-\delta_2)}{\delta_1}kT -mt$ voters with approval set $\{p,d\}$, $\frac{3(\delta_1-\delta_2)}{\delta_1}kt$ voters who approve $d$, and $mt$ voters who approve $p$. \end{enumerate} We let the committee size be $n+1$. We claim that if~$G$ contains an independent set of size~$k$ then there are two Greedy-$f$ winning committees, $V(G) \cup \{d\}$ and $V(G) \cup \{p\}$, and otherwise there is only one, $V(G) \cup \{d\}$. The proof follows. Let $X =\delta_1D + \delta_1T^3 + \delta_1nmT$. Prior to the first iteration of Greedy-$f$, each candidate $v_i$ has score $X$, candidate $d$ has score $X - (\delta_1-\delta_2)kT + 3(\delta_1-\delta_2)kt + \delta_1$, and candidate $p$ has score $X - (\delta_1-\delta_2)kT + \delta_1$. During the first $k$ iterations, Greedy-$f$ selects some $k$ candidates from $V(G)$. 
This is so, because whenever some candidate~$v_i$ is selected, the scores of the remaining members of~$V(G)$ decrease by $(\delta_1-\delta_2)T$ due to the voters in the second group, and by at most $(\delta_1-\delta_2)t$, due to the voters in the first group. Hence, after the first $k-1$ iterations each remaining candidate from $V(G)$ has score at least $ X - (\delta_1-\delta_2)(k-1)T -(\delta_1-\delta_2)(k-1)t, $ which---by our choices of $\alpha$, $t$, and $T$---is larger than the scores that both $p$ and $d$ had even prior to the first iteration (note that the scores of the candidates cannot increase between iterations). On the other hand, after the $k$-th iteration, each remaining member of $V(G)$ has score at most $X - (\delta_1-\delta_2)kT$, which is less than what $p$ has (since $p$ is only approved by voters who do not approve members of $V(G)$, at this point his or her score is the same as prior to the first iteration). As a consequence, in the $(k+1)$-st iteration Greedy-$f$ either chooses $p$ or $d$. Let us now analyze which one it chooses. Let $S$ be the set of candidates from $V(G)$ selected in the first~$k$ iterations. If $S$ forms an independent set, then prior to the $(k+1)$-st iteration, the score of $d$ is $X - (\delta_1-\delta_2)kT + \delta_1$. This is so, because for each candidate $v_i$ in $S$, $d$ loses exactly $3(\delta_1-\delta_2)t$ points due to the voters in the first group that correspond to the three edges that include $v_i$ (since $S$ is an independent set, for each member of $S$ these are three different edges). In this case, Greedy-$f$ is free to choose either of $p$ and $d$. However, if $S$ is not an independent set, then the score of $d$ drops by at most $(3k-1)(\delta_1-\delta_2)t + (\delta_2-\delta_3)t$. This is so, because $S$ contains at least two candidates $v_i$ and $v_j$ that are connected by an edge; when the second one of them is included in the committee, the score of $d$ drops by at most $\left(2(\delta_1-\delta_2)+(\delta_2-\delta_3)\right)t$. In this case Greedy-$f$ is forced to select $d$ in the $(k+1)$-st iteration. In the following $n-k$ iterations, Greedy-$f$ selects the remaining members of $V(G)$ (after either $p$ or $d$ is selected in the $(k+1)$-st iteration, the score of the other one drops so much that he or she cannot be selected; this is due to the $D$ voters who approve $\{p,d\}$). It remains to observe that if $G$ contains an independent set of size~$k$, then Greedy-$f$ can choose its members in the first~$k$ iterations. This is the case, because whenever Greedy-$f$ chooses a member of the independent set, the scores of its other members never drop by more than the scores of the other remaining vertex candidates. Hence, if~$G$ has a size-$k$ independent set, then, due to the parallel-universes tie-breaking, Greedy-$f$ outputs two winning committees, $V(G) \cup \{p\}$ and $V(G) \cup \{d\}$. Otherwise we have a unique winning committee $V(G) \cup \{d\}$. This completes the proof for the case that $\delta_1 - \delta_2 > \delta_2-\delta_3$. Let us now consider the case where $\delta_1 - \delta_2 \leq \delta_2-\delta_3$. Let $\delta_4 = w(4)-w(3)$, $\delta_5 = w(5)-w(4)$, and so on. If there is some positive integer $t$ such that $\delta_{t+1} - \delta_{t+2} > \delta_{t+2} - \delta_{t+3}$ then it suffices to use the same reduction as above, extended so that we have candidates $d_1, \ldots, d_t$ that are approved by every voter and the committee size is increased by $t$. 
Greedy-$f$ will choose these~$t$ candidates in the first~$t$ iterations and then it will continue as described in the reduction, with $\delta_{t+1}, \delta_{t+2}$, and $\delta_{t+3}$ taking the roles of $\delta_1$, $\delta_2$, and $\delta_3$. In fact, such a $t$ must exist. Otherwise, if $\delta_{t+1} - \delta_{t+2} \leq \delta_{t+2} - \delta_{t+3}$ for every $t$, then either $f$ is AV (which we assumed not to be the case) or $w$ is not nondecreasing, which is forbidden by definition. \end{proof} \begin{corollary}\label{cor:cc-pav-phragmen} $\textsc{Unique-Committee}$ is $\ensuremath{{\mathrm{coNP}}}$-complete for GreedyCCAV, GreedyPAV, and Phragm{\'e}n. \end{corollary} The results for GreedyCCAV and GreedyPAV follow directly from the preceding theorem. For Phragm{\'e}n, \citet{fal-gaw-kus:c:greedy-approval-robustness} have shown that the following problem, known as Phragm{\'e}n-\textsc{Add-Robustness-Radius}, is ${{\mathrm{NP}}}$-complete: Given an election~$E$, committee size~$k$, and number~$B$, is it possible to add at most $B$ approvals to the votes so that the winning committee under the resolute variant of the Phragm{\'e}n rule (where all internal ties are resolved according to a given tie-breaking order) changes? Their proof works in such a way that adding approvals only affects how ties are broken. Hence, effectively, it also shows that $\textsc{Unique-Committee}$ is $\ensuremath{{\mathrm{coNP}}}$-complete for the (non-resolute) variant of Phragm{\'e}n. \begin{theorem}\label{thm:meqs-p1-uni} $\textsc{Unique-Committee}$ is $\ensuremath{{\mathrm{coNP}}}$-complete for Phase~1 of MEqS. \end{theorem} \begin{proof} The following nondeterministic algorithm shows membership in $\ensuremath{{\mathrm{coNP}}}$: First, we deterministically compute the output of Phase~1, breaking internal ties in some arbitrary way. This way we obtain some committee $W$. Next we rerun Phase~1, at each internal tie nondeterministically trying all possibilities. We accept on computation paths that output~$W$ and we reject on those outputting some other committee. This algorithm accepts on all computation paths if and only if the rule has a unique winning committee. Next, we give a reduction from the complement of the classic ${{\mathrm{NP}}}$-complete problem, \textsc{X3C}. An instance of \textsc{X3C} consists of a universe set $U = \{u_1, \ldots, u_{3n}\}$ and a family ${\mathcal{S}} = \{S_1, \ldots, S_{3n}\}$ of size-$3$ subsets of $U$. We ask if there are $n$ sets from ${\mathcal{S}}$ whose union is $U$ (we refer to such a family as an exact cover of $U$; note that the sets in such a cover must be disjoint). Without loss of generality, we assume that each member of $U$ belongs to exactly three sets from ${\mathcal{S}}$~\citep{gon:j:x3c} and that $n$ is even. Now we describe our election. Ideally, we would like to distribute different amounts of budget among different voters, but as MEqS splits the budget evenly, we design the election in such a way that in the initial iterations the respective voters spend appropriate amounts of money on the candidates that otherwise are not crucial for the construction. We form the following groups of voters (we reassure the reader that the analysis is more pleasant than the following two enumerations may suggest): \begin{enumerate} \item Group $B$, which contains $144n^3 - 12n$ voters. \item Group $B_U$, which contains $54n^3 + 9n^2$ voters. \item Group $U'$, which models the elements of the universe set~$U$. 
For each $u_i \in U$, there is a single corresponding voter in $U'$. We have $|U'|=3n$. \item Group $U''$, which serves a similar purpose as $U'$, but contains more voters. Specifically, for each $u_i \in U$, there are $6n$ corresponding voters in $U''$; $|U''| = 18n^2$. \item Group $V_{pd}$, which contains $12n$ voters. \item Group $V_S$, which contains $9n$ voters. \item Two voters, $d_1$ and $d_2$. \end{enumerate} In total, there are $198n^3 + 27n^2 + 12n + 2$ voters. Further, we have the following groups of candidates: \begin{enumerate} \item Group $C_B$ of $144n^3 - 12n^2$ candidates approved by the $144n^3$ voters from $B \cup V_{pd}$. \item Group $C_U$ of $54n^3 + 24n^2 + \nicefrac{5n}{2}$ candidates approved by the $54n^3 + 27n^2 + 3n$ voters from $B_U \cup U' \cup U''$. \item Candidate $p$ approved by the $12n$ voters from $V_{pd}$. \item Candidate $d$ approved by the $15n$ voters from $V_{pd} \cup U'$. \item Candidates $c_1$ and $c_2$, both approved by $d_1$ and $d_2$. \item Group $D$ of $15n^2 + \frac{45n}{2} + 5$ candidates approved by $d_1$. \item For each set $S_\ell \in {\mathcal{S}}$ such that $S_\ell = \{u_i,u_j,u_t\}$ we have a corresponding candidate $s_\ell$ approved by: (a) three unique voters from $V_S$, (b) the voters from $U'$ and $U''$ that correspond to the elements $u_i$, $u_j$, $u_t$. We write $S$ to denote this group of candidates and we refer to its members as the $S$-candidates. Each $S$-candidate is approved by $3 + 3 + 3 \cdot 6n = 18n + 6$ voters. \end{enumerate} We have $198n^3 + 27n^2 + 28n + 9$ candidates in total. We set the committee size $k$ to be equal to the number of voters, i.e., $k = 198n^3 + 27n^2 + 12n + 2$. Let us consider the following two committees (note that each of them contains fewer than $k$ candidates; indeed, Phase~1 of MEqS sometimes chooses committees smaller than requested): \begin{align*} W_d &= C_B \cup C_U \cup S \cup \{c_1,c_2\} \cup \{d\}, \\ W_p &= C_B \cup C_U \cup S \cup \{c_1,c_2\} \cup \{p\}. \end{align*} We claim that Phase~1 of MEqS always outputs committee $W_d$, and if $(U,{\mathcal{S}})$ is a \emph{yes}-instance then it also outputs $W_p$. Let us analyze how Phase~1 of MEqS proceeds on our election. Since the committee size is equal to the number of voters, initially each voter receives budget equal to $1$. At first, we will select all candidates from $C_B$. Indeed, there are $144n^3-12n^2$ candidates in this group, each approved by $144n^3$ voters (from $B \cup V_{pd}$). Each of these voters pays $\nicefrac{1}{144n^3}$ for each of the candidates (this is the lowest per-voter candidate cost at this point). After these purchases, each voter from $B \cup V_{pd}$ will be left with budget equal to $1 - (144n^3 - 12n^2) \cdot (\nicefrac{1}{144n^3}) = \nicefrac{1}{12n}$. Next, we will select all candidates from $C_U$. Indeed, this set contains $54n^3+24n^2 + \nicefrac{5n}{2}$ candidates approved by $54n^3+27n^2+3n$ voters (from $B_U \cup U' \cup U''$) who have not spent any part of their budget yet. All candidates in $C_U$ will be purchased at the same per-voter cost of $\nicefrac{1}{(54n^3+27n^2+3n)}$ (the lowest one at this point). Each voter in $B_U \cup U' \cup U''$ will be left with budget equal to $1 - (54n^3 + 24n^2 + \nicefrac{5n}{2}) \cdot \nicefrac{1}{(54n^3 + 27n^2 + 3n)} = \frac{3n^2 + \nicefrac{n}{2}}{54n^3 + 27n^2 + 3n} = \frac{6n+1}{108n^2 + 54n + 6} = \frac{6n+1}{(6n+1) \cdot (18n+6)} = \nicefrac{1}{(18n+6)}$. 
Next, we consider the $S$-candidates who, at this point, have the highest approval score among the yet unselected candidates. As each $S$-candidate is approved by exactly $18n+6$ voters and each voter still has budget greater than or equal to $\nicefrac{1}{(18n+6)}$, we keep selecting the $S$-candidates at the per-voter cost of $\nicefrac{1}{(18n+6)}$ as long as there is at least one such candidate all of whose approving voters still have budget of at least $\nicefrac{1}{(18n+6)}$. Upon selecting a given $S$-candidate, corresponding to set $S_\ell$, all the voters who approve him or her pay $\nicefrac{1}{(18n+6)}$. This includes the three unique voters from $V_S$ and the voters from $U'$ and $U''$ who correspond to the members of $S_\ell$. Prior to this payment, the voters from $U'$ and $U''$ have budget equal to $\nicefrac{1}{(18n+6)}$, so they end up with $0$ afterward (and we say that they are \emph{covered} by this $S$-candidate). Consequently, the $S$-candidates that we buy at the per-voter cost of $\nicefrac{1}{(18n+6)}$ correspond to disjoint sets. Now let us consider what happens when there is no $S$-candidate left who can be purchased at the per-voter cost of $\nicefrac{1}{(18n+6)}$. This means that for each unselected $S$-candidate, at least $6n+1$ voters approving him or her have already been covered and have no budget left. Hence, for a given $S$-candidate there are at least $6n+1$ voters (from $U'$ and $U''$) whose budget is~$0$, at most $12n+2$ voters (from $U'$ and $U''$) who each have budget of $\nicefrac{1}{(18n+6)}$, and three voters (from $V_S$) who each have budget equal to $1$. To buy this $S$-candidate, the voters from $U'$ and $U''$ would have to use up their whole budget, and the voters from $V_S$ would have to pay at least: \[ \textstyle \frac{1}{3}(1 - (12n+2) \cdot \frac{1}{18n+6}) = \frac{18n+6 - (12n+2)}{3 \cdot (18n+6)} = \frac{6n+4}{54n+18} \] each. However, at this point there are two candidates that can be purchased at a lower per-voter cost. Indeed, candidate $p$ could be purchased by the $12n$ voters from $V_{pd}$ at the per-voter cost of $\nicefrac{1}{12n}$ (after buying the candidates from $C_B$, they still have exactly this amount of budget left). Since candidate $d$ is also approved by all the voters from $V_{pd}$, and also by the voters from $U'$, candidate $d$ would either have the same per-voter cost as $p$ (in case all the members of $U'$ were already covered) or would have an even lower per-voter cost. The only other remaining candidates are $c_1$, $c_2$, and the candidates from $D$, but their per-voter costs are greater than or equal to $\nicefrac{1}{2}$. Hence, at this point, MEqS either selects $p$ or $d$. The former is possible exactly if the already selected $S$-candidates form an exact cover of $U'$ (and, hence, correspond to an exact cover for our input instance of \textsc{X3C}). If we select $p$, then the $12n$ voters from $V_{pd}$ use up all their budget. The remaining voters who approve $d$, those in $U'$, have total budget equal to at most $3n \cdot \frac{1}{18n+6} < 1$, so $d$ cannot be selected in any of the following iterations (within Phase~1). On the other hand, if we select $d$, then all the voters from $U'$ would have to pay all they had left (that is, either $0$ or $\frac{1}{18n+6}$, each) and voters from $V_{pd}$ would split the remaining cost. That is, each voter from $V_{pd}$ would have to pay at least: \[ \textstyle \frac{1 - 3n \cdot \frac{1}{18n+6}}{12n} = \frac{18n+6 - 3n}{12n \cdot (18n+6)} = \frac{15n+6}{12n \cdot (18n+6)}. 
\] Consequently, each voter from $V_{pd}$ would be left with at most: \[ \textstyle \frac{1}{12n} - \frac{15n+6}{12n \cdot (18n+6)} = \frac{18n+6 - (15n+6)}{12n \cdot (18n+6)} = \frac{1}{72n+24}. \] This would not suffice to purchase $p$, as $12n \cdot \frac{1}{72n+24} < 1$. Thus either we select $d$ (and not~$p$) or we select $p$ (and not $d$; where this is possible only if we previously purchased $S$-candidates that cover all members of $U'$). In the following iterations, we purchase all remaining $S$-candidates (because each of them is approved by three unique voters from $V_S$), as well as candidates $c_1$ and $c_2$ (voters $d_1$ and $d_2$ buy them with per-voter cost of $\nicefrac{1}{2}$ for each). This uses up the budget of $d_1$ and, so, no candidate from $D$ is selected. All in all, if there is no exact cover for the input \textsc{X3C} instance, then $W_d$ is the unique winning committee, but otherwise $W_d$ and $W_p$ tie. This finishes the proof. \end{proof} $\textsc{Unique-Committee}$ is also $\ensuremath{{\mathrm{coNP}}}$-complete for the full version of MEqS. To see this, it suffices to note that after adding enough voters with empty votes, MEqS becomes equivalent to Phragm{\'e}n (because per-voter budget is so low that Phase~1 becomes vacuous) and inherits its hardness. On the positive side, for sequential rules we can solve \textsc{Unique-Committee} in ${{\mathrm{FPT}}}$ time with respect to the committee size: In essence, we first compute some winning committee and then we try all ways of breaking internal ties to find a different one. For small values of $k$, such as, e.g., $k \leq 10$, the algorithm is fast enough to be practical. \begin{theorem}\label{thm:greedy-fpt-unique} Let $f$ be MEqS, Phase~1 of MEqS, Phragm{\'e}n, or a greedy variant of a $1$-concave Thiele rule. There is an ${{\mathrm{FPT}}}$ algorithm for $f$-$\textsc{Unique-Committee}$ parameterized by the committee size. \end{theorem} \begin{proof} Let $E$ be the input election and let $k$ be the committee size. First, we compute some committee $W$ in $f(E,k)$, by running the algorithm for $f$ and breaking the internal ties arbitrarily. Next, we rerun the algorithm, but whenever it is about to add a candidate into the constructed committee, we do as follows (let $T$ be the set of candidates that the algorithm can insert into the committee): If $T$ contains some candidate $c$ that does not belong to $W$, then we halt and indicate that there are at least two winning committees ($W$ and those that include $c$). If $T$ is a subset of $W$, then we recursively try each way of breaking the tie. If the algorithm completes without halting, we report that there is a unique winning committee. The correctness is immediate. The running time is equal to $O(k!)$ times the running time of the rule's algorithm (for the case where each tie is broken in a given way). Indeed, at the first internal tie we may need to recurse over at most $k$ different candidates, then over at most $k-1$, and so on. \end{proof} For $1$-concave Thiele rules other than $\textsc{AV}$, \textsc{Unique-Committee} is $\mathrm{co}\hbox{-}{{\mathrm{W[1]}}}$-hard when parameterized by the committee size (this follows from the proof of Proposition~\ref{pro:thiele-unique} as \textsc{Independent-Set} is ${{\mathrm{W[1]}}}$-hard for parameter $k$). To solve the problem in practice, we note that for each $1$-concave Thiele rule there is an integer linear program (ILP) whose solution corresponds to the winning committee. 
We can either use the ability of some ILP solvers to output several solutions (which yields more than one committee only in case of a tie), or we can use the following strategy: First, we compute some winning committee using the basic ILP formulation. Then, we extend the formulation with a constraint that requires the committee to be different from the previous one and compute a new one. If both committees have the same score, then there is a tie. \section{Counting Winning Committees}\label{sec:counting} Let us now consider the problem of counting the winning committees. Formally, our problem is as follows. \begin{definition} Let $f$ be a multiwinner voting rule. In the $f$-\#\textsc{Winning-Committees} problem we are given an election $E$ and a committee size $k$; we ask for $|f(E,k)|$. \end{definition} There are polynomial-time algorithms for computing the number of winning committees for $\textsc{AV}$ and $\textsc{SAV}$. For an election $E$ with committee size $k$, we first sort the candidates with respect to their scores in a non-increasing order and we let $x$ be the score of the $k$-th candidate. Then, we let $S$ be the number of candidates whose score is greater than $x$, and we let $T$ be the number of candidates with score equal to $x$. There are $\binom{T}{k-S}$ winning committees. \begin{proposition} $\{$AV, \!\! SAV$\}$-\#\textsc{Winning-Committees} $\in\! {{\mathrm{P}}}$. \end{proposition} On the other hand, whenever $f$-\textsc{Unique-Committee} is intractable, so is $f$-\#\textsc{Winning-Committees}. Indeed, it immediately follows that there is no polynomial-time $(2-\varepsilon)$-approximation algorithm for $f$-\#\textsc{Winning-Committees} for any $\varepsilon > 0$ (if such an algorithm existed then it could solve $f$-\textsc{Unique-Committee} in polynomial time: for an election with a single winning committee it would have to output~$1$, and for an election with $2$ winning committees or more, it would have to output an integer greater than or equal to $\frac{2}{2-\varepsilon} > 1$, so we could distinguish these cases\footnote{We assume here that if a solution for a counting problem is $x \in \mathbb{N}$, then an $\alpha$-approximation algorithm, with $\alpha \geq 1$, has to output an integer between $x/\alpha$ and $\alpha x$. If we allowed rational values on output, the inapproximability bound would drop to $\sqrt{2}-\varepsilon$.}). However, for all our rules a much stronger result holds. \begin{proposition}\label{pro:hard-approx} Let $f$ be a $1$-concave Thiele rule (different from AV), its greedy variant, Phragm{\'e}n, MEqS, or Phase~1 of MEqS. Unless ${{\mathrm{P}}} = {{\mathrm{NP}}}$, there is no polynomial-time approximation algorithm for $f$-\#\textsc{Winning-Committees} with polynomially-bounded approximation ratio. \end{proposition} \begin{proof} For Phase~1 of MEqS, it suffices to use the proof of Theorem~\ref{thm:meqs-p1-uni} with candidate $p$ replaced by polynomially many copies, each approved by the same voters. Either we get a unique winning committee or polynomially many tied ones. The same trick works for the greedy variants of $1$-concave Thiele rules (using Theorem~\ref{thm:uniqgreedyhard}) and for Phragm{\'e}n (using Corollary~\ref{cor:cc-pav-phragmen}). For the case of $1$-concave Thiele rules, we use the following strategy. Let $p$ be some positive integer and let $(G,k)$ be an instance of \textsc{Independent-Set}, where $G$ is a graph and $k$ is an integer. 
We form a graph $G^p$ whose vertex set is $V(G^p) = \{ v^i \mid v \in V(G), i \in [p] \}$ and where two vertices, $u^i$ and $v^j$, are connected by an edge either if $i \neq j$ or if $i=j$ and $u$ and $v$ are connected by an edge in $G$. Consequently, if $G$ has $x$ independent sets of size $k$, then $G^p$ has $px$ such sets (each independent set of $G^p$ is a copy of an independent set of $G$, using only vertices with the same superscript). Hence, if in the proof of Proposition~\ref{pro:thiele-unique} we replace graph $G$ with graph $G^p$, where $p$ is some polynomial function of the input size, then we obtain an election that either has a unique winning committee (if the input graph did not have an independent set of the required size) or an election that has polynomially many winning committees (if the graph had at least one such independent set). \end{proof} We note that the construction given in the proof of Proposition~\ref{pro:thiele-unique} also shows that for each $1$-concave Thiele rule $f \neq$ AV, $f$-\#\textsc{Winning-Committees} is both ${{\mathrm{\#P}}}$-hard and $\#{{\mathrm{W[1]}}}$-hard for parameterization by the committee size (because this reduction produces elections that have one more winning committee than the number of size-$k$ independent sets in the input graph, and counting independent sets is both ${{\mathrm{\#P}}}$-complete and $\#{{\mathrm{W[1]}}}$-complete for parameterization by~$k$~\citep{val:j:permanent,flu-gro:j:parameterized-counting}). For greedy variants of $1$-concave Thiele rules and Phragm{\'e}n, the situation is more interesting because \textsc{Unique-Committee} is in ${{\mathrm{FPT}}}$ (for the parameterization by the committee size). Yet, \#\textsc{Winning-Committees} is also hard. \begin{theorem}\label{thm:greedy-count} Let $f$ be Phragm{\'e}n or a greedy variant of a $1$-con\-cave Thiele rule (different from AV). $f$-\#\textsc{Winning-Committees} is ${{\mathrm{\#P}}}$-hard and $\#{{\mathrm{W[1]}}}$-hard (for the parameterization by the committee size). \end{theorem} \begin{proof} We first consider greedy variants of $1$-concave Thiele rules. Let $w$ be the weight function used by $f$. Let $x = w(2) - w(1)$. We have $w(1) = 1$ and we assume that $x < 1$ (we will consider the other case later). We show a reduction from the $\textsc{\#Matching}$ problem, where we are given a graph~$G$, an integer~$k$, and we ask for the number of size-$k$ matchings (i.e., the number of size-$k$ sets of edges such that no two edges in the set share a vertex). $\textsc{\#Matching}$ is $\#{{\mathrm{W[1]}}}$-hard for parameterization by $k$~\citep{cur-mar:c:param-counting-matchings}. Let $G$ and $k$ be our input. We form an election $E$ where the edges of $G$ are the candidates and the vertices are the voters. For each edge $e = \{u,v\}$, the corresponding edge candidate is approved by the vertex voters corresponding to $u$ and $v$. We also form an election $E_p$, equal to $E$ except that it has two extra voters who both approve a single new candidate, $p$. We note that every candidate in both $E$ and $E_p$ is approved by exactly two voters. Hence, the greedy procedure first keeps on choosing candidates whose score is $2$ (i.e., edges that jointly form a matching, or candidate $p$ in $E_p$). It selects the candidates with lower scores (i.e., edges that share a vertex with an already selected edge) only when score-$2$ candidates disappear. Let $W$ be some size-$k$ $f$-winning committee for election~$E_p$. 
We consider two cases: \begin{enumerate} \item If $p$ does not belong to $W$, then the edge candidates in~$W$ form a matching. If this were not the case, then before including an edge candidate with score lower than $2$, the greedy algorithm would have included $p$ in the committee. \item If $p$ belongs to $W$ then $W \setminus \{p\}$ is an $f$-winning committee of size $k-1$ for election $E$. Indeed, if we take the run of the greedy algorithm that computes $W$ and remove the iteration where $p$ is selected, we get a correct run of the algorithm for election $E$ and committee size $k-1$. Further, for every size-$(k-1)$ committee $S$ winning in $E$, $S \cup \{p\}$ is a size-$k$ winning committee in $E_p$ (because we can always select $p$ in the first iteration). \end{enumerate} So, to compute the number of size-$k$ matchings in $G$, it suffices to count the number of winning size-$k$ committees in $E_p$ and subtract from it the number of winning size-$(k-1)$ committees in $E$. If $x = 1$, then we find the smallest value $t$ such that $w(t)-w(t-1) = 1$ and $w(t+1)-w(t) < 1$ and use the same construction as above, except that there are $t-1$ dummy candidates approved by every voter (and the committee size is increased by $t-1$). Regarding Phragm{\'e}n, it turns out that the same construction as for the greedy variants of $1$-concave Thiele rules still works. At time $t=\nicefrac{1}{2}$, each voter has $\nicefrac{1}{2}$ budget and each candidate (including $p$) can be purchased (because each candidate is approved by exactly two voters and their total budget is $1$). Hence, if $W$ is a winning committee for $E_p$ but $W$ does not include $p$, then all its members were purchased at time $\nicefrac{1}{2}$. This means that these candidates were approved by disjoint sets of voters, whose corresponding edges form a size-$k$ matching. On the other hand, if $p$ belongs to $W$ then $W \setminus \{p\}$ is a winning size-$(k-1)$ committee for $E$, as in the above proof. \end{proof} \begin{corollary} \#\textsc{Winning-Committees} is ${{\mathrm{\#P}}}$-hard and $\#{{\mathrm{W[1]}}}$-hard (for the parameterization by the committee size) for GreedyCCAV, GreedyPAV, Phragm{\'e}n, and MEqS. \end{corollary} The above result holds for MEqS because of its relation to Phragm{\'e}n. For Phase~1 of MEqS, we have ${{\mathrm{\#P}}}$-hardness, but ${{\mathrm{\#W[1]}}}$-hardness so far remains elusive. \begin{theorem}\label{thm:meqs-counting} \#\textsc{Winning-Committees} is ${{\mathrm{\#P}}}$-hard for Phase~1 of MEqS. \end{theorem} \begin{proof} We give a reduction from \textsc{\#X3C}, i.e., a counting version of the problem used in the proof of Theorem~\ref{thm:meqs-p1-uni}. Let $E_{pd}$ be the same election as constructed in that proof, except for the following change: Group $B_U$ contains $9n$ fewer voters and the $9n$ voters from $V_S$ additionally approve the candidates from $C_U$. Consequently, the committee size decreases by $9n$ (because we maintain that the committee size is equal to the number of voters). Because of this change, when selecting the candidates from $C_U$, the budget of the voters from $V_S$ drops to $\nicefrac{1}{(18n+6)}$. Then, after the iterations where $S$-candidates are selected at per-voter cost of $\nicefrac{1}{(18n+6)}$, no further $S$-candidates are selected (because the voters approving them have total budget lower than $1$). 
As a consequence, Phase~1 of MEqS applied to election $E_{pd}$ chooses all committees of the following forms: \begin{enumerate} \item Committees consisting of all candidates from $C_B \cup C_U \cup \{c_1,c_2\} \cup \{d\}$ and a subset of $S$-candidates such that all other $S$-candidates include at least one covered voter from $U' \cup U''$. \item Committees consisting of all candidates from $C_B \cup C_U \cup \{c_1,c_2\} \cup \{p\}$ and a subset of $S$-candidates that correspond to an exact cover of $U$. \end{enumerate} Next, we form election $E_d$ identical to $E_{pd}$ except that it does not include candidate $p$. For $E_d$, Phase~1 of MEqS selects all the committees of the first type above. Hence, to compute the number of solutions for our instance of \textsc{\#X3C}, it suffices to subtract the number of committees selected by Phase~1 of MEqS for $E_{d}$ from the number of committees selected by Phase~1 of MEqS for $E_{pd}$. This completes the proof. \end{proof} \section{Experiments} A priori, it is not clear how frequent ties are in multiwinner elections. In this section we present experiments that show that they are indeed quite common, at least if one considers elections of moderate size. \subsection{Statistical Cultures and the Basic Experiment} Below we describe the statistical cultures that we use to generate elections (namely, the resampling model, the interval model, and PabuLib data) and how we perform our basic experiments. \paragraph{Resampling Model~\citep{szu-fal-jan-lac-sli-sor-tal:c:sampling-approval-elections}.} We have two parameters, $p$ and $\phi$, both between $0$ and $1$. To generate an election with candidate set $C = \{c_1, \ldots, c_m\}$ and with $n$ voters, we first choose uniformly at random a central vote $u$ approving exactly $\lfloor p m \rfloor$ candidates. Then, we generate the votes, for each considering the candidates independently, one by one. For a vote~$v$ and candidate~$c$, with probability $1-\phi$ we copy $c$'s approval status from $u$ to $v$ (i.e., if $u$ approves $c$, then so does $v$; if $u$ does not approve $c$ then neither does $v$), and with probability $\phi$ we ``resample'' the approval status of~$c$, i.e., we let~$v$ approve~$c$ with probability $p$ (and disapprove it with probability $1-p$). On average, each voter approves about $pm$ candidates. \paragraph{Interval Model.} In the Interval model, each voter and each candidate is a point on a $[0,1]$ interval, chosen uniformly at random. Additionally, each candidate $c$ has radius $r_c$ and a voter $v$ approves candidate $c$ if the distance between their points is at most $r_c$. Intuitively, the larger the radius, the more appealing a given candidate is. We generate the radii of the candidates by taking a base radius $r$ as input and, then, choosing each candidate's radius from the normal distribution with mean $r$ and standard deviation $\nicefrac{r}{2}$. Such spatial models are discussed in detail, e.g., by \citet{enelow1984spatial,enelow1990advances}. In the approval setting, they were recently considered, e.g., by \citet{bre-fal-kac-nie2019:experimental_ejr} and \citet{god-bat-sko-fal:c:2d}. \paragraph{PabuLib Data.} PabuLib is a library of real-life participatory budgeting (PB) instances, mostly from Polish cities~\citep{sto-szu-tal:t:pabulib}. A PB instance is a multiwinner election where the candidates (referred to as projects) have costs and the goal is to choose a ``committee'' of at most a given total cost. 
We restrict our attention to instances from Warsaw, which use approval voting, and we disregard the cost information (while this makes our data less realistic, we are not aware of other sources of real-life data for approval elections that would include sufficiently large candidate and voter sets). To generate an election with $m$ candidates and $n$ voters, we randomly select a Warsaw PB instance, remove all but the $m$ candidates with the highest approval score, and randomly draw $n$ voters (with repetition, restricting our attention only to voters who approve at least one of the remaining candidates). We consider 120 PB instances from Warsaw that include at least 30 candidates (each of them includes at least one thousand votes, usually a few thousand). \paragraph{Basic Experiment.} In a basic experiment we fix the number of candidates $m$, the committee size~$k$, and a statistical culture. Then, for each number $n$ of voters between $20$ and $100$ (with a step of $1$) we generate $1000$ elections with $m$ candidates and $n$~voters, and for each of them we compute whether our rules have a unique winning committee (we omit GreedyCCAV). Then we present a figure whose $x$ axis shows the number of voters and whose $y$ axis shows, for a given rule, the fraction of elections that had a unique winning committee. For AV and SAV, we use the algorithm from the beginning of Section~\ref{sec:counting}, for sequential rules we use the FPT algorithm from Theorem~\ref{thm:greedy-fpt-unique}, and for CCAV and PAV we use the ILP-based approach, with a solver that provides multiple solutions. \subsection{Results} \newcommand{\resampling}[3]{png_ijcai23/unique/resampling/m_#1_k_#2_reps_1000_params_('p', #3) ('phi', 0.75).png} \newcommand{\pabulib}[2]{png_ijcai23/unique/pabulib_with_replacement/m_#1_k_#2_reps_1000_params_best_cands_num_#1.png} \newcommand{\interval}[3]{png_ijcai23/unique/euclidean_cr/m_#1_k_#2_reps_1000_params_('radius', #3) ('dim', 1) ('space', 'uniform').png}
\begin{figure*}
% Placeholder: the six plots (fraction of elections with a unique winning committee vs.\ number of voters) are not reproduced here. Their panel captions were:
% (a) $m=30$, $k=5$, $k/2$ approvals/vote, resampling model, $\phi=0.75$;
% (b) $m=30$, $k=5$, $k$ approvals/vote, resampling model, $\phi=0.75$;
% (c) $m=30$, $k=5$, $2k$ approvals/vote, resampling model, $\phi=0.75$;
% (d) $m=30$, $k=5$, $k/2$ approvals/vote, Interval;
% (e) $m=30$, $k=5$, $k$ approvals/vote, Interval;
% (f) $m=30$, $k=5$, PabuLib (Warsaw).
\caption{Fraction of elections with a unique winning committee in our basic experiments.}\label{fig:resampling}
\end{figure*}
All our experiments regard $30$ candidates and committee size~$5$ (the results for $50$ and $100$ candidates, and committee size $10$, are analogous). First, we performed three basic experiments for the resampling model with the parameter $p$ (approval probability) set so that, on average, each voter approved either $k/2$, $k$, or $2k$ candidates. We used $\phi = 0.75$ (according to the results of \citet{szu-fal-jan-lac-sli-sor-tal:c:sampling-approval-elections}, this value gives elections that resemble the real-life ones). We present the results in the top row of Figure~\ref{fig:resampling}. Next, we also performed two basic experiments for the Interval model (with the base radius selected so that, on average, each voter approved either $k/2$ or $k$ candidates), and with the PabuLib data (see the second row of Figure~\ref{fig:resampling}). These experiments support the following general conclusions. 
First, for most scenarios and for most of our rules, there is a nonnegligible probability of a tie (depending on the rule and the number of voters, this probability may be as low as $5\%$ or as high as nearly $100\%$). This shows that one needs to be ready to detect and handle ties in moderately sized multiwinner elections. Second, we see that SAV generally leads to the fewest ties, CCAV leads to the most, and AV often holds a strong second position in this category (in the sense that it also leads to a high probability of having a tie in many settings). The other rules are in between. Phase~1 of MEqS often has significantly fewer ties than the other rules, but the full version of MEqS does not stand out. PAV occasionally leads to fewer ties (in particular, on PabuLib data and on the resampling model with $2k$ approvals per vote). \section{Summary} We have shown that, in general, detecting ties in multiwinner elections is intractable, but doing so for moderately-sized ones is perfectly possible. Our experiments show that ties in such elections are a realistic possibility and one should be ready to handle them. Intractability of counting winning committees suggests that tie-breaking by sampling committees may not be feasible. Looking for fair tie-breaking mechanisms is a natural follow-up research direction. \paragraph{Acknowledgments.} This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 101002854). \noindent \includegraphics[width=3cm]{erceu} \end{document}
\begin{document} \title{Consistent Weighted Sampling Made Fast, Small, and Easy} \author{Bernhard Haeupler\\Carnegie Mellon University\\[email protected] \and Mark Manasse\\[email protected] \and Kunal Talwar\footnote{This research was performed while all the authors were at Microsoft Research Silicon Valley.} \\[email protected]} \date{} \maketitle \begin{abstract} Document sketching using Jaccard similarity has been a workably effective technique in reducing near-duplicates in Web page and image search results, and has also proven useful in file system synchronization, compression and learning applications~\cite{BroderGMZ97,Broder97,BroderCFM98}. Min-wise sampling can be used to derive an unbiased estimator for Jaccard similarity and taking a few hundred independent consistent samples leads to compact sketches which provide good estimates of pairwise similarity. Early sketching papers handled weighted similarity, for integer weights, by transforming an element of weight $w$ into $w$ elements of unit weight, each requiring its own hash function evaluation in the consistent sampling. Subsequent work~\cite{GollapudiP,ManasseMT,Ioffe10} removed the integer weight restriction, and showed how to produce samples using a constant number of hash evaluations for any element, independent of its weight. Another drastic speedup for sketch computations was given by Li, Owen and Zhang~\cite{LiOZ12} who showed how to compute such (near-)independent samples in one shot, requiring only a constant number of hash function evaluations per element. Unfortunately, this latter improvement works only for the unweighted case. In this paper we give a simple, fast and accurate procedure which reduces weighted sets to unweighted sets with small impact on the Jaccard similarity. This leads to compact sketches consisting of many (near-)independent weighted samples which can be computed with just a small constant number of hash function evaluations per weighted element. The size of the produced unweighted set is furthermore a tunable parameter which enables us to run the unweighted scheme from~\cite{LiOZ12} in the regime where it is most efficient. Even when the sets involved are unweighted, our approach gives a simple solution to the densification problem that~\cite{ShrivastavaL14a,ShrivastavaL14b} attempt to address. Unlike previously known schemes, ours does not result in an unbiased estimator. However, we prove that the bias introduced by our reduction is negligible and that the standard deviation is comparable to the unweighted case. We also empirically evaluate our scheme and show that it gives significant gains in computational efficiency, without any measurable loss in accuracy. \end{abstract} \section{Introduction} Web experiments have repeatedly shown that most breadth-first collections of pages contain many unique pages, but also contain large clusters of near-duplicate pages. Typical studies have found that duplicate and near-duplicate pages account for between a third and a half of a corpus. Min-wise sampling~\cite{Broder97,BroderGMZ97,BroderCFM98} has been widely used in Web and image search since the mid-nineties to produce consistent compact sketches which provide good estimates of pairwise similarity of corpus items, computing the sketch with reference only to a single item. 
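As a concrete illustration of unweighted min-wise sampling, the following Python sketch estimates Jaccard similarity from $k$ independent salted hash functions; the hash construction and all names are our own illustrative choices, not the implementations used in the works cited here.
\begin{verbatim}
import hashlib

def minhash_sketch(elements, k=200):
    """For each of k salted hash functions, keep the smallest hash value
    over the set's elements (min-wise sampling)."""
    return [min(int(hashlib.sha1(f"{j}:{x}".encode()).hexdigest(), 16)
                for x in elements)
            for j in range(k)]

def estimate_jaccard(sketch_a, sketch_b):
    """The fraction of coordinates on which the minima agree is an
    unbiased estimate of the Jaccard similarity."""
    return sum(a == b for a, b in zip(sketch_a, sketch_b)) / len(sketch_a)

A = {"the", "quick", "brown", "fox"}
B = {"the", "quick", "brown", "dog"}
print(estimate_jaccard(minhash_sketch(A), minhash_sketch(B)))  # close to 3/5
\end{verbatim}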
Min-wise hashing computes a sketch for estimating the Jaccard (scaled L1) similarity effectively; SimHash~\cite{Charikar02} and related techniques compute sketches for estimating the angular separation (in L2) of arbitrary vectors. Both of these have been widely used in deployed commercial search engines to allow the search result pages to suppress reporting the near-duplicate pages which would otherwise often dominate the search results. SimHash, by its nature, allows vector coordinates to be arbitrary real numbers, and weights the random projections accordingly. While min-wise sampling is designed for unweighted sets, it can also be used for non-negative integer weights. For this, one simply replaces any element $e$ of weight $w$ by $w$ elements $e_1,\ldots,e_w$ of unit weight. This reduction, however, causes the running time of computing one consistent sample to be proportional to the sum of all weights. More recent papers~\cite{GollapudiP,ManasseMT,Ioffe10} removed the integer weight restriction, and showed how to produce samples using a constant number of hash evaluations for any element, independent of its weight. All these sampling techniques as described typically result in a Boolean random variable whose expectation is related to the similarity. One typically applies the same procedure repeatedly with independent randomness to produce a sketch consisting of hundreds of samples, to get independent Boolean estimates that can be averaged. In this work, we will be concerned with designing a faster estimation scheme for Jaccard similarity. A beautiful idea of Li and K\"onig~\cite{LiK11,li2010b}, known as $b$-bit min-wise hashing, helps reduce the size of a sketch by storing only a $b$-bit hash per sample. While this results in some ``accidental'' hash collisions, this can be remedied by taking into account the effect of these collisions and taking a larger number of samples. This gives more compact sketches but may require a longer time for sketch generation. For the unweighted case, Li, Owen and Zhang~\cite{LiOZ12} showed how to drastically speed up the computation of sketches by computing 200, say, (near-)independent samples in one shot using only a constant number of hash function evaluations per element, instead of computing each sample one-by-one. Unfortunately, however, this ``one permutation'' technique does not easily extend to weighted sampling. In this paper, we bring this level of performance to weighted sampling, giving an algorithm which produces a sketch in time proportional to the number of positive-weight elements in the item. We do this by picking two or more scales and converting the weighted set into an unweighted one by randomized rounding. This sampling step introduces a negligible error and bias and leads to a small unweighted set on which any unweighted sketching technique can be applied. The size of this set is a tunable parameter which can be beneficial for the subsequently used unweighted sketching step. We apply this new algorithm, and the older ones, in a variety of settings to compare the variance in accuracy of approximation. These improvements come at a marginal cost. Our algorithm takes as input an interestingness threshold $\alpha$, say $\alpha = \frac{1}{2}$, such that similarities smaller than $\alpha$ are considered uninteresting. Given two weighted sets, it either returns an accurate estimate of the similarity, or correctly declares that the similarity is below $\alpha$. 
Since, in applications, one is not usually interested in estimating similarity when it is small, we believe that this is an acceptable tradeoff. \section{Background} Given two finite sets $S$ and $T$ from a universe $U$, the {\em Jaccard similarity} of $S$ and $T$ is defined as: \begin{align*} jacc(S,T) = \frac{|S \cap T|}{|S \cup T|}. \end{align*} A weighted set associates a positive real weight to each element in it. Thus a weighted set is defined by a map $w : U \rightarrow \Re_+$, with the weight of the elements outside the set defined as 0. We denote the support of $w$ by $supp(w) = \{a \in U: w(a) >0\}$. An unweighted set is then the special case where the weight is equal to one on all of its support. This definition of Jaccard similarity can be extended to weighted sets in a natural way. Given two mappings $w_S$ and $w_T$ with supports $S$ and $T$ respectively, their {\em weighted Jaccard similarity} $jacc(w_S,w_T)$ is defined as \begin{align*} \frac{\sum_{a \in S \cap T} \min(w_S(a), w_T(a))}{\sum_{a \in S \cup T} \max(w_S(a), w_T(a))} = \frac{| \min(w_S,w_T) |_1}{| \max(w_S,w_T) |_1}. \end{align*} Given a weighted set $w$, we denote by $n(w)$ the support size $|supp(w)|$, and by $W(w)$ the total weight $|w|_1$. When the weighted set is clear from context, we will simply use $n$ and $W$ to denote these quantities. We will be concerning ourselves with fast algorithms for creating a short sketch of a weighted set from which we can quickly estimate the Jaccard similarity between two sets. Empirical observations place Jaccard similarity below $\approx$ 0.7 as probably not near-duplicates, and above $\approx$ 0.95 as likely near- or exact-duplicates. We will be concerned with fast and accurate sketching techniques for Jaccard similarity. Sampling techniques which pick without replacement can closely approximate Jaccard: the oldest such, working in a limited setting where weights are integers (or integer multiples of some fixed base constant), replaces item $a$ with weight $w(a)$ by new items $(a,1), \ldots, (a,w(a))$, all of weight 1. By taking a hash function $h$, and computing it multiple times for each input item, we can map each (item, sample number) pair to a positive real number. To select the $j^{th}$ sample, consider ${h(x, j)}$ for all items $x$, and choose the item producing the numerically smallest hash value. This leads to the generation of $k$ samples in time $O(Wk)$. Manasse, McSherry and Talwar~\cite{ManasseMT} extended this classic scheme to the case of arbitrary non-negative weights, using the ``active index'' idea of Gollapudi and Panigrahy~\cite{GollapudiP}. Their scheme had the additional advantage that only expected constant time per element is required, independent of its weight, which leads to a total expected run-time of $O(nk)$ for generating $k$ samples, independent of $W$. Ioffe~\cite{Ioffe10} improved this to worst-case constant time, by carefully analyzing the resulting distributions, reducing the per input per sample cost to choosing five random uniform values in the range between zero and one. This gives a worst case run-time of $O(nk)$. In a deployed implementation, the principal costs for this per element are: (a) seeding the pseudo-random generator with the input, (b) computing roughly 200 sets of 5 random values in $[0,1]$, and manipulating them to compute 200 hash values, and (c) comparing these hash values to a vector of the 200 smallest values discovered to date, and replacing the stored value whenever the new one is smaller. 
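The following small Python helper computes the weighted Jaccard similarity exactly as defined above, with weights stored sparsely as dictionaries; the function name and encoding are our own, and it serves only as a reference for the quantity that the sketches discussed in this paper estimate.
\begin{verbatim}
def weighted_jaccard(w_s, w_t):
    """Weighted Jaccard similarity of two weight maps, given as dicts
    from elements to positive weights (missing elements have weight 0)."""
    support = set(w_s) | set(w_t)
    num = sum(min(w_s.get(a, 0.0), w_t.get(a, 0.0)) for a in support)
    den = sum(max(w_s.get(a, 0.0), w_t.get(a, 0.0)) for a in support)
    return num / den if den > 0 else 1.0

# example: weighted_jaccard({"a": 2.0, "b": 1.0}, {"a": 1.0, "c": 3.0}) == 1/6
\end{verbatim}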
Li, Owen and Zhang~\cite{LiOZ12}, using a technique first explored by Flajolet and Martin~\cite{FlajoletM85}, compute a single hash value for each element, as well as a random sample number; the hash value then contends for smallest only among those values with equal sample number. This approach does not seem to extend to weighted sampling, because an item with very large weight may need to contend for multiple samples in order to sufficiently influence the predicted similarity. Another deficiency of this approach is that when sets are small, the accuracy of the estimator suffers. Recent works by Shrivastava and Li~\cite{ShrivastavaL14a,ShrivastavaL14b} attempt to address the latter concern. More recently, Li and K\"onig~\cite{LiK11,li2010b} found it effective to instead store up to 256 smallest values, but store only 1 or 2 bits derived randomly from the smallest values. Accidental collisions will happen a quarter or a half the time, but we can get equivalent power for estimating the Jaccard value by computing enough extra samples to account for the matches that occur due to insufficient length of the recorded value. For 2 bit samples, with 136 samples, we expect 34 to match randomly, leaving us with 102 samples that match with probability equal to the Jaccard value; na\"ive Chernoff bounds allow us to conclude that estimates of the true value will be accurate to within 0.1. For 1 bit samples, drawing 200 results in 100 accurate samples; the storage space required without these modifications is on the order of 800 bytes. The 2 bit variant takes 34 bytes to hold (with 800 bytes needed in memory prior to the 2 bit reduction, since the minimization step needs to be done accurately), while the 1 bit variant needs 25 bytes; a further variant instead computes 400 results, reduces each to 1 bit, and then computes the exclusive-or of pairs of bits to reduce back to 200 bits, each of which will match randomly half the time; this gives us an estimator for $p^2$, rather than $p$, which we can then take the square root of. In our new algorithm, we seek to gain for the weighted case the space efficiency of~\cite{LiK11}, while producing a sample set as efficiently as~\cite{LiOZ12} does for the unweighted case. \subsection*{Other Related Work} The Jaccard similarity is an $\ell_1$ version of similarity between two objects. The cosine similarity is an $\ell_2$ notion that has also been used, and SimHash gives a simple sketching scheme for it. Henzinger~\cite{Henzinger06} performed an in-depth comparison of the then state-of-the-art algorithms for MinHash and SimHash. More recently, Shrivastava and Li~\cite{ShrivastavaL14c} compare these two approaches in some other applications. While as stated, SimHash requires time $O(nk)$ to generate $k$ samples, recent work on the fast Johnson-Lindenstrauss lemma~\cite{AilonC09,AilonL09,DasguptaKS10} may be viewed as giving faster algorithms for SimHash. \if 0 \section{Algorithm rationale and design} Because of the inaccuracies inherent in allowing each input to contend for only one sample (we could weight the sample values, but the requisite weighting cannot be computed correctly in isolation for an item), we choose instead to mimic some aspects of the original technique of repeating input elements to contend for multiple samples. We do not scale arbitrarily; two sketches are only comparable when the degree of repetition matches. Accordingly, we take advantage of the typical applications of similarity: items with small Jaccard similarity are not interesting.
Suppose we decide that detecting Jaccard less than, e.g., one half is uninteresting. One simple observation from the formula for $J(U, V)$ is that it must be less than $|| U ||_1 / || V ||_1$: the numerator increases, since the minimum of two things is smaller than either one; the denominator decreases, since the maximum of two things is larger than either. Correspondingly, if two inputs differ in norm by more than a factor of $2$, the Jaccard similarity of the two cannot be as large as $0.5$. In this case we might be content with giving an answer with lower precision or even just report that the Jaccard distance is small. Our algorithm defines a cutoff factor $\alpha$ such that comparing two sketches with Jaccard similarity larger than $\alpha$ leads to very precise answers while evaluating two sketches with similarity less than $\alpha$ might lead to the simplistic answer that $J < \alpha$. We keep $\alpha = 0.5$ in mind as a typical value for this cutoff. Other tunable parameters in our algorithm are the number of samples $k$ we want for the comparison of two $\alpha$-close sketches, which governs the precision, the number of scales $t$, and the minimum expected number of unweighted items $L$ competing for each sample space. The later parameter is useful to use unweighted sampling schemes in the regime they are most efficient. As typical values for these parameters are $k = 256$ (in particular if the more space efficient $b$-bit hashing \cite{} is used a larger than usual amount of samples are required), $L = 5$ and for simplicity we go with $t=3$. With these parameters set we consider the powers of $\alpha^{-1}$ starting upwards and downwards from 1. Given two items whose L1 norms differ by at most a factor of $\alpha$, the one norms of these two items either lie in the same range (i.e., in $[2^k, 2^{k+1})$, for $\alpha=2$, $k = \floor{\log_2 W_1} = \floor{\log_2 W_2}$) or in adjacent ranges (e.g., $W_1 \in [2^k, 2^{k+1})$ and $W_2 \in [2^{k+1}, 2^{k+2})$). We scale the weights of all elements of an item by a power of $\alpha^{-1}$ so that the total falls into the range $[\frac{Lk}{t-1},\alpha^{-1}\frac{Lk}{t-1})$. Similarly by multiplying by extra $\alpha^{-1}$ factors we consider the $t-1$ scales above it. For our example values of selecting $k=256$ samples with $L=5$, $t=3$ and $\alpha = 2$ we first scale the weights to lie in $[640;1280)$ and then scale by a factor of 2 and 4 to lie in $[1280;2560)$ and $[2560;5120)$. Next, for each scale we eliminate the fractional parts using randomized rounding, preserving the expected norm: an element with weight $w$ consisting of a integer part $j = \floor{w}$ and a fractional part $f = w - j \in [0,1)$ turns into an $j$ unit weight elements with probability $f$, and into $j+1$ unit weight elements with probability $1-f$. This requires one hash function evaluation per element. Furthermore, since the error introduced in this step is negligible compared to the average sampling error one can reuse the same hash value for the rounding step of all scales. As we do this, we name the pieces for future randomization by the element name, and the associated integer of the piece. This completes the reduction to the unweighted case. For each scale we now use the any unweighted sampling scheme to create a sketch. 
In particular, we can use the single-hash function technique of Li to assign each unit element to a bin in the range of the desired number of samples, and to assign a pseudo-random hash value in the range [0, 1], retaining only the smallest in each bin. Because the expected number of unit pieces surviving randomized rounding will equal the scaled one-norm, we with high probability need at most around $5120 + 2560 + 1280 < 9000$ random values to complete this and for a typical item we expect around $6400$. We expect each potential sample to receive between $L$ and $\alpha L$ unit elements in the first scale, between $\alpha L$ and $\alpha^2 L$ in the second scale and between $\alpha^2 L$ and $\alpha^3 L$ in the last. Correspondingly, we expect that a sample will rarely receive no elements; the probability of a given sample ending with no elements when $L$ elements are used per sample is $(1 - 1/k)^{Lk}$. For our values this corresponds to a probability between $0.7\%$ and $0.004\%$ for the first scale, a probability between $0.004\%$ and one in a billion for the second scale and between $10^{-7}$ and $10{-14}$ for the last scale. Since this happens rarely, we choose to ignore this, and assign a value of zero to a bin when no sample is selected. This allows us to store the sketch compactly, without using any extra space to record the emptiness of some bin, and also simplifies the comparison of two sketches; alternatively we could, instead use a byte to indicate the position of an empty bin, if only one exists, with two special values indicating the expected case of all bins non-empty, and a second special value indicating that more than one bin ended empty. The precision of this scheme for $t = 3$ can be evaluated as follows. For any items whose weight is within a factor of $\alpha$ share between two and three scales leading to between $k$ and $1.5k$ samples and therefore a precision which is at least as large as desired. For any items whose weight is between $\alpha$ and $\alpha^2$ apart the number of samples is between $0.5k$ and $k$. Even though these items pairs are guaranteed to have a Jaccard similarity below our cutoff value $\alpha$ we still get at least half of our minimum required precision on those (and sometimes even full precision). For item pairs that are at least a factor of $\alpha^2$ apart in weight no shared scale might exist and we simply report that $J < W_1/W_2$ which is at least as strong as $J < \alpha^2$. In our example we get full (or $1.5$-fold) precision on any pairs which are within Jaccard similarity $0.5$ and still get at least half-precision answers for pairs with Jaccard similarity up to $0.25$. The space corresponds to $k \frac{t}{t-1}$ samples which with our example parameters corresponds to $256 \cdot \frac{3}{2} = 384$ samples which is $96$ byte if a $2$-bit sampling scheme is used. This is the same size as a $2$-bit scheme with $k=256$ samples if a bitmask for empty bins is stored. The number of hash function evaluations of our scheme for an item with $n$ non-zero weights is $n$ evaluations for the rounding and in the worst case $k L \sum_{i=1}^t \alpha^i = k L \frac{\alpha^t - 1}{\alpha - 1}$ unweighted elements for which a sample and priority value needs to be created. The later quantity is typically chosen to be less than $n$ which leads overall to around $2n$ to $3n$ hash function evaluations. 
In contrast, the best weighted sampling technique so far required $5$ hash function evaluations per element per sample, for a total of $5n \cdot k$ or, in our example, $1280 n$ hash function evaluations.
\begin{algorithm}[htb!]
\caption{ComputeSketch($\vec{I},k,\alpha,L,t$)}
\begin{algorithmic}[1]
\State scale = $\ceil{\log_{1/\alpha} \frac{kL}{||\vec{I}||_1}}$
\State $\vec{I}_1 = \alpha^{-s} \vec{I}$ \Comment{Scale}
\For{$i = 2$ to $t$}
\State $\vec{I}_i = \alpha^{-s} \vec{I}_{i-1}$
\EndFor
\Statex
\For{$i = 1$ to $t$} \Comment{Round}
\State $S_i = \emptyset$
\For{$j$ with $I(j) > 0$}
\State $full_j = \floor{I_1(j)}$
\If{$rand(i,j,full_j) < I_i(j) - full_j$}
\State $full_j = full_j + 1$
\EndIf
\State $S_i = S_i \cup \bigcup_{p=1}^{full_j} \{(j,p)\}$
\EndFor
\EndFor
\Statex
\For{$i = 1$ to $t$} \Comment{Unweighted Sketching}
\State Compute unweighted Sketch $s_i$ of $S_i$ with $\frac{k}{t-1}$ samples
\EndFor
\State Output sketch = $(scale,s_1,\ldots,s_t)$
\end{algorithmic}
\label{alg:hmt}
\end{algorithm}
\fi \section{Algorithm rationale and design} A sketching based similarity estimation scheme consists of two subroutines: \begin{itemize} \item A sketching algorithm $Sketch$ that takes as input a single (possibly) weighted set (and usually, a common random seed) and returns a sketch, and \item an estimating algorithm $Estimate$ that takes as input two sketches generated by the sketching algorithm and returns an estimate of the Jaccard similarity. \end{itemize} The property one wants from this pair of algorithms is that for any pair of weighted sets $w_1$ and $w_2$, the estimate $Estimate(Sketch(w_1,r),Sketch(w_2,r))$ is ``close'' to the true Jaccard similarity, with high probability over the randomness $r$. Moreover, we want the run-time and the output size of the Sketch algorithm to be small. We will present our algorithm in two parts. We first describe a reduction that, given a weighted set $w$ and a random seed $r$, outputs an {\em unweighted} set $ReduceToUnwtd(w,r)$ such that: \begin{description} \item{(a)} the expected size of the unweighted set is $|w|_1$, and \item{(b)} given any two weighted sets $w_1$ and $w_2$, the resulting unweighted sets $S_1 = ReduceToUnwtd(w_1,r)$ and $S_2 = ReduceToUnwtd(w_2,r)$ satisfy the property that $jacc(S_1,S_2)$ is approximately $jacc(w_1,w_2)$ with high probability, as long as $|w_1|_1$ and $|w_2|_1$ are large enough. \end{description} We will formalize these statements in the next section. We will then describe how such a reduction can be used along with an unweighted similarity estimation scheme to generate fast and small sketches for weighted sets. In this second part, we will assume that we are given a threshold $\alpha$ such that similarities smaller than $\alpha$ need not be estimated accurately. \subsection{Weighted to Unweighted Reduction} In this section, we describe a simple randomized reduction that transforms any weighted set to an unweighted one, such that the Jaccard similarity between sets is approximately preserved. Given a weighted set $w$ from a universe $U$, the reduction produces an unweighted set $S \subseteq U \times \N$.
We will assume access to a hash function $h$ that takes a random seed $r$ and a pair $(a,i) \in U \times \N$, and returns a real number in $[0,1)$. We assume for the proofs that, for a random $r$, the hash value $h(r,a,i)$ is uniform in $[0,1)$ and independent of $h(r,a',i')$ for any $(a',i')$ different from $(a,i)$. In practice, we will only need a small expected number of bits of this $h$, and using a pseudorandom generator will suffice. The reduction is a simple randomized rounding scheme. Consider an element $a$ with weight $w(a)$. We write $w(a)= j_a + f_a$, where $j_a= \lfloor w(a) \rfloor$ is the integer part of $w(a)$ and $f_a \in [0,1)$ is the fractional part. We add to $S$ an element $(a,i)$ for $i=1,\ldots, j_a$. Additionally, we add $(a,j_a+1)$ with probability exactly $f_a$; we do this by computing a hash $h(r,a,j_a)$ and adding $(a,j_a+1)$ to $S$ if and only if $h(r,a,j_a) < f_a$. Using the same hash function seeded with the element $a$ ensures consistency in our rounding, which is crucial for (approximately) preserving Jaccard similarity. The resulting {\em ReduceToUnwtd} algorithm looks as follows:
\begin{algorithm}[htb!]
\caption{ReduceToUnwtd($w$,$r$)}
\begin{algorithmic}[1]
\State $S = \emptyset$
\For{$a$ with $w(a) > 0$}
\State $j_a = \floor{w(a)}$
\If{$h(r,a,j_a) < w(a) - j_a$}
\State $j_a = j_a + 1$
\EndIf
\State $S = S \cup \bigcup_{p=1}^{j_a} \{(a,p)\}$
\EndFor
\State Output $S$
\end{algorithmic}
\label{alg:reduce}
\end{algorithm}
This {\em ReduceToUnwtd} algorithm furthermore provides the following guarantees, whose proof we defer to the next section. \begin{theorem} \label{thm:reduction} Let $w_1$ and $w_2$ be weighted sets and let $S_1 = ReduceToUnwtd(w_1,r)$ and $S_2 = ReduceToUnwtd(w_2,r)$ for a random seed $r$. Then for each $i=1,2$ and any $\delta > 0$, \begin{enumerate} \item{(Size Expectation)} $\mathbf{E}[|S_i|] = |w_i|_1$. \item{(Size Tail)} $\Pr[\big| |S_i| - |w_i|_1 \big| \geq 3\sqrt{|w_i|_1 \ln \frac{2}{\delta}}] \leq \delta$. \end{enumerate} Further let $W = \max(|w_1|_1, |w_2|_1)$. Then \begin{enumerate} \setcounter{enumi}{2} \item{(Bias)} $\big|\mathbf{E}[jacc(S_1,S_2)] - jacc(w_1,w_2)\big| \leq \frac{1}{W-1}$. \item{(Tail)} $\Pr[|jacc(S_1,S_2) - jacc(w_1,w_2)| \geq \sqrt{\frac{27\ln \frac{4}{\delta}}{W}}] \leq \delta$. \end{enumerate} \end{theorem} A variant of this algorithm will be useful when we want to further reduce the number of hash computations that need to be performed. In particular, we propose the algorithm {\em ReduceToUnwtdDep}, which uses $h(r,a)$ instead of $h(r,a,j_a)$ to determine whether $(a,j_a+1)$ is added. Except for this small change in Step 4, the algorithm is identical to {\em ReduceToUnwtd}.
\begin{algorithm}[htb!]
\caption{ReduceToUnwtdDep($w$,$r$)}
\begin{algorithmic}[1]
\State $S = \emptyset$
\For{$a$ with $w(a) > 0$}
\State $j_a = \floor{w(a)}$
\If{$h(r,a) < w(a) - j_a$}
\State $j_a = j_a + 1$
\EndIf
\State $S = S \cup \bigcup_{p=1}^{j_a} \{(a,p)\}$
\EndFor
\State Output $S$
\end{algorithmic}
\label{alg:reducedep}
\end{algorithm}
This slight change in {\em ReduceToUnwtdDep} compared to {\em ReduceToUnwtd} introduces some dependencies in the rounding.
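For concreteness, the following is a minimal Python rendering of the two rounding procedures above. It is our own illustrative sketch rather than the authors' implementation, and the helper \texttt{\_hash01} is an arbitrary stand-in for the idealized hash $h$.
\begin{verbatim}
import hashlib
from math import floor

def _hash01(*parts):
    """Deterministic stand-in for the idealized hash h: parts -> [0, 1)."""
    key = "|".join(str(p) for p in parts).encode()
    digest = hashlib.blake2b(key, digest_size=8).digest()
    return int.from_bytes(digest, "big") / 2**64

def reduce_to_unwtd(w, r):
    """Randomized rounding of Algorithm ReduceToUnwtd (threshold h(r, a, j_a))."""
    S = set()
    for a, wa in w.items():
        if wa <= 0:
            continue
        j = floor(wa)
        if _hash01(r, a, j) < wa - j:   # round the fractional part up w.p. w(a) - j_a
            j += 1
        S.update((a, p) for p in range(1, j + 1))
    return S

def reduce_to_unwtd_dep(w, r):
    """Dependent variant ReduceToUnwtdDep: the threshold h(r, a) ignores j_a."""
    S = set()
    for a, wa in w.items():
        if wa <= 0:
            continue
        j = floor(wa)
        if _hash01(r, a) < wa - j:
            j += 1
        S.update((a, p) for p in range(1, j + 1))
    return S
\end{verbatim}
The two functions differ only in the seed of the rounding threshold; the dependencies this introduces between different coordinates are discussed next.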
For example, if $w_a=1.5$ and $w'_a=2.5$, then the outputs of {\em ReduceToUnwtdDep} on these two weight functions, $S$ and $S'$, will have the event $(a,2) \in S$ perfectly correlated with the event $(a,3) \in S'$, whereas in the original {\em ReduceToUnwtd} algorithm these events are independent. Nevertheless, the following theorem, which is also proved in the next section, shows that the Jaccard similarity of $S$ and $S'$ is still close to that between $w$ and $w'$. \begin{theorem} \label{thm:reductiondep} Let $w_1$ and $w_2$ be weighted sets and let $S_1 = ReduceToUnwtdDep(w_1,r)$ and $S_2 = ReduceToUnwtdDep(w_2,r)$ for a random seed $r$. Then for each $i=1,2$ and any $\delta > 0$, \begin{enumerate} \item{(Size Expectation)} $\mathbf{E}[|S_i|] = |w_i|_1$. \item{(Size Tail)} $\Pr[\big| |S_i| - |w_i|_1 \big| \geq 3\sqrt{|w_i|_1 \ln \frac{2}{\delta}}] \leq \delta$. \end{enumerate} Further let $W = \max(|w_1|_1, |w_2|_1)$. Then \begin{enumerate} \setcounter{enumi}{2} \item{(Bias)} $\big|\mathbf{E}[jacc(S_1,S_2)] - jacc(w_1,w_2)\big| \leq \frac{1}{W-1}$. \item{(Tail)} $\Pr[|jacc(S_1,S_2) - jacc(w_1,w_2)| \geq \sqrt{\frac{27\ln \frac{4}{\delta}}{W}}] \leq \delta$. \end{enumerate} \end{theorem} \subsection{Similarity Estimation Scheme} We start by observing that the weighted Jaccard similarity is scale invariant: for any real number $\gamma>0$ it holds that $jacc(\gamma w_1, \gamma w_2) = jacc(w_1,w_2)$. For our scheme, however, different choices of $\gamma$ lead to different outcomes. In particular, our reduction gets more accurate as the total $\ell_1$ weight $w(U)$ increases. On the other hand, the expected size of the unweighted set resulting from the reduction is $w(U)$, so that any unweighted similarity estimation sketch we use gets more inefficient as $w(U)$ increases. We therefore would like to pick $\gamma$ to be as small as possible while keeping the accuracy loss in the reduction small. A more pressing matter though is the following: if we know the sets $w_1$ and $w_2$, we can carefully pick $\gamma$, but the whole point of the sketch is that it summarizes $w_1$ without knowing which $w_2$ we would want to compare it with. Thus, we will need to decide on one or more scaling factors $\gamma$ for a set $w$ without knowing which other weighted sets we will compare it with.
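To illustrate this trade-off numerically, the following small self-contained simulation (our own; the specific weights, seeds, and scales are arbitrary choices) rounds two fixed weighted sets at several scales and reports the mean error of the Jaccard similarity after rounding. Consistent with the theorems above, the error shrinks roughly like $1/\sqrt{W}$ as the scaled total weight $W$ grows.
\begin{verbatim}
import random

def round_weighted(w, r):
    """Randomized rounding of a weighted set into unit items (a, 1), ..., (a, j)."""
    S = set()
    for a, wa in w.items():
        j = int(wa)
        # consistent threshold seeded by (r, a, j), as in ReduceToUnwtd
        if random.Random(f"{r}|{a}|{j}").random() < wa - j:
            j += 1
        S.update((a, p) for p in range(1, j + 1))
    return S

def weighted_jaccard(w1, w2):
    keys = set(w1) | set(w2)
    num = sum(min(w1.get(a, 0.0), w2.get(a, 0.0)) for a in keys)
    den = sum(max(w1.get(a, 0.0), w2.get(a, 0.0)) for a in keys)
    return num / den

# two overlapping weighted sets with fractional weights
w1 = {f"e{i}": 0.3 + (i % 7) * 0.45 for i in range(200)}
w2 = {f"e{i}": 0.3 + ((i + 2) % 7) * 0.45 for i in range(100, 300)}
true_jacc = weighted_jaccard(w1, w2)

for scale in (1, 4, 16, 64):   # larger scale = larger total weight W
    errors = []
    for trial in range(50):
        s1 = round_weighted({a: v * scale for a, v in w1.items()}, trial)
        s2 = round_weighted({a: v * scale for a, v in w2.items()}, trial)
        errors.append(abs(len(s1 & s2) / len(s1 | s2) - true_jacc))
    print(scale, sum(errors) / len(errors))   # mean error decreases with the scale
\end{verbatim}
The simulation only exercises the rounding step; the sketching scheme described next chooses the scales automatically so that the rounded sets are just large enough for the desired accuracy.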
Finally, a parameter $\beta$ will determine the scaling factors we use, and $t$ will denote the number of scales we use for each weighted set. For simplicity, we first describe the algorithm with $\beta=\alpha$. We propose to pick, for a weighted set $w$, a small number $t$ of ``scales'', where a scale is simply an integer power of $\beta$. We pick the scales in such a way that the scaled $\ell_1$ weight $\beta^{-s} \cdot w(U)$ is in the range $[\frac{Lk}{t-1},\beta^{-1}\frac{Lk}{t-1})$ for the first scale, and in adjacent larger geometric intervals for the remaining $t-1$ scaling factors. This can be achieved by taking the first scaling factor to be $\beta^{-s}$ for $s$ given by $s=\lceil \log_{1/\beta} \frac{Lk}{(t-1)w(U)}\rceil$, and the subsequent scaling factors being given by $s+1,s+2,\ldots,s+t-1$. For each of these scales $s'$, we define the scaled weighted set $\beta^{-s'}\cdot w$, apply the reduction to it to derive an unweighted set of expected size at least $Lk/(t-1)$, and apply an unweighted sketching scheme to derive $\frac{k}{t-1}$ samples. Observe that if $w_1(U)/w_2(U) \not\in [\alpha,1/\alpha]$, then the Jaccard similarity $jacc(w_1,w_2) < \alpha$. Thus for such sets, we can safely report ``similarity $<$ $\alpha$''. If, on the other hand, the ratio $w_1(U)/w_2(U) \in [\alpha,1/\alpha]$, then the first scaling factors $s_1$ and $s_2$ chosen for $w_1$ and $w_2$ are either equal or differ by $1$. In either case, they share at least $t-1$ scaling factors, and we can use those for the estimation of the similarity. This gives us $k$ comparable samples, as desired. More generally, we can pick an integer $\tau \in [1,t]$, and set\footnote{In principle, $\beta$ can be chosen arbitrarily in $[\alpha^{1/\tau},\alpha^{1/\tau+1})$, and a value more conducive to floating point operations may be picked.} $\beta = \alpha^{1/\tau}$. We then use $\frac{k}{t-\tau}$ samples for each scale, and pick our scales starting from $s=\lceil \log_{1/\beta} \frac{Lk}{(t-\tau)w(U)}\rceil$. It is easy to verify that this choice ensures that for any pair of sets with $w_1(U)/w_2(U) \in [\alpha,1/\alpha]$, we are guaranteed to find at least $k$ comparable samples. We formalize the sketching and the estimating algorithms next. We assume access to a subroutine {\em UnwtdSketch($S,k,r$)} that takes in an unweighted set $S$, a parameter $k$ and a seed $r$, and returns a sketch consisting of $k$ samples. In addition, we assume that {\em UnwtdEstimate($sketch,sketch'$)} outputs an estimate of the Jaccard similarity based on the sketches. The $b$-bit hashing scheme of~\cite{LiK11} would give a candidate pair of instantiations of these subroutines. In the description below, we assume that the randomness source $r$ can be partitioned into sources $r_i$, $r'_i$, for $t$ different values of $i$.
\begin{algorithm}[htb!]
\caption{ComputeSketch($w,k,\alpha,\tau,L,t,r$)}
\begin{algorithmic}[1]
\State $\beta = \alpha^{\frac{1}{\tau}}$
\State $s = \ceil{\log_{1/\beta} \frac{Lk}{(t-\tau)|w|_1}}$
\State sketch = $\emptyset$
\Statex
\For{$i = s$ to $s+t-1$}
\State $w_i = \beta^{-i} w$ \Comment{Scale}
\State $S_i = ReduceToUnwtd(w_i,r_i)$ \Comment{Round}
\State $sk_i = UnwtdSketch(S_i, \frac{k}{t-\tau},r'_i)$ \Comment{Sketch}
\State Add $(i,sk_i)$ to sketch
\EndFor
\State Output sketch
\end{algorithmic}
\label{alg:hmtsketch}
\end{algorithm}
\begin{algorithm}[htb!]
\caption{EstimateJaccard($sketch,sketch'$)}
\begin{algorithmic}[1]
\State Common = $\{(i,sk_i,sk'_i): (i,sk_i) \in sketch $ and $(i,sk'_i) \in sketch' \}$
\If{Common == $\emptyset$}
\State Output ``Similarity $< \alpha$''
\Else
\For{each $(i,sk_i,sk'_i) \in$ Common}
\State $sim_i = UnwtdEstimate(sk_i,sk'_i)$
\EndFor
\State Output Average($sim_i$)
\EndIf
\end{algorithmic}
\label{alg:hmtestimate}
\end{algorithm}
We note that, as stated, the number of hash computations required in the reduction steps is $tn$. However, using the dependent version $ReduceToUnwtdDep$ of the reduction, this can be reduced to $n$, which may be a substantial saving when $n$ is large. While our algorithm can be used with any unweighted similarity estimation scheme, using it with the one permutation hashing scheme of~\cite{LiOZ12} will give us unweighted sets of expected size at most $\beta^{-1} \frac{Lk}{t-\tau},\ldots, \beta^{-t} \frac{Lk}{t-\tau}$. The number of hash evaluations in the unweighted sketching scheme is equal to the set size, so that for the typical setting of $\beta=\alpha=0.5$, $t=3$, $\tau=1$, the total cost is $7Lk$ hash evaluations. The running time is of a similar order. Here $L$ is a small constant such as $4$. Recall that the best previously known weighted scheme required $5nk$ hash evaluations, which we are reducing to $n+7Lk$. \subsection{Discussion} In this section, we discuss some finer implementation details, and how they may affect our choices of parameters. \subsection*{The benefits of tunable unweighted size} One immediate benefit of the set size being tunable is that we can arrange the parameters so as to ensure that at each of the scales that a set is involved in, the size of the unweighted set resulting from the reduction is at least $\frac{Lk}{t-\tau}$. As discussed earlier, this has the benefit of making empty samples rare enough that they can be ignored. This does not just simplify the sketch comparison but, more importantly, relieves us from having to store whether or not a sample is empty, leading to noticeable savings in the sketch size. We remark that even for unweighted sets, the one permutation approach of~\cite{LiOZ12} suffers when sets are small. With many bins being empty, the accuracy of the scheme drops drastically. Subsequent works~\cite{ShrivastavaL14a,ShrivastavaL14b} have proposed modifications to address this issue. Nevertheless, these approaches still pay for sparseness: the variance initially falls off as $\frac{1}{k}$ as $k$ increases, but flattens out once $k$ becomes much larger than the set size. We note that treating an unweighted set as a weighted one and applying our approach gives a simple solution to this problem, and results in a $1/k$ fall in variance for arbitrarily large $k$, irrespective of the support size, without paying any penalty in the running time. Thus, even for unweighted sets, the approach proposed in this work is useful. A more subtle benefit comes from the fact that the size of the unweighted set is {\em at most} $\beta^{-t} \frac{Lk}{t-\tau}$, so that an average sample gets at most $L\beta^{-t}$ items. For typical values, $L=5, t=3, \beta = 0.5$, this is $40$. Thus, when computing the hash value used to select the minimum in a bin, we can make do with, say, a 13-bit hash value. Using this many bits makes it exceedingly unlikely that some bin will not have an unambiguous minimum.
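To tie the pieces together, here is a compact, self-contained Python sketch of the whole pipeline. It is our own illustrative rendering, not the authors' code: the helper names, the simple one-permutation-style binning used in place of $UnwtdSketch$, and the exact comparison of stored samples (instead of $b$-bit compression) are all assumptions made for the illustration. The same seed \texttt{r} must be used when sketching two sets that are to be compared.
\begin{verbatim}
import hashlib
from math import ceil, floor, log

def _hash01(*parts):
    key = "|".join(str(p) for p in parts).encode()
    return int.from_bytes(hashlib.blake2b(key, digest_size=8).digest(), "big") / 2**64

def _round(w, r):
    """ReduceToUnwtd-style randomized rounding into unit items (a, p)."""
    S = set()
    for a, wa in w.items():
        j = floor(wa)
        if _hash01("round", r, a, j) < wa - j:
            j += 1
        S.update((a, p) for p in range(1, j + 1))
    return S

def _unwtd_sketch(S, bins, r):
    """One-permutation-style sketch: per bin, keep the item with smallest priority."""
    best, prio = [None] * bins, [2.0] * bins
    for item in S:
        b = int(_hash01("bin", r, item) * bins)
        p = _hash01("prio", r, item)
        if p < prio[b]:
            prio[b], best[b] = p, item
    return best

def compute_sketch(w, k=256, alpha=0.5, tau=1, L=5, t=3, r=0):
    """One unweighted sketch per scale, mirroring Algorithm ComputeSketch."""
    beta = alpha ** (1.0 / tau)
    s = ceil(log(L * k / ((t - tau) * sum(w.values())), 1.0 / beta))
    bins = k // (t - tau)
    sketch = {}
    for i in range(s, s + t):                      # scales s, s+1, ..., s+t-1
        wi = {a: (beta ** -i) * v for a, v in w.items()}
        sketch[i] = _unwtd_sketch(_round(wi, (r, i)), bins, (r, i))
    return sketch

def estimate_jaccard(sk1, sk2, alpha=0.5):
    common = set(sk1) & set(sk2)                   # scales present in both sketches
    if not common:
        return "similarity < %s" % alpha
    sims = [sum(x == y for x, y in zip(sk1[i], sk2[i])) / len(sk1[i]) for i in common]
    return sum(sims) / len(sims)
\end{verbatim}
With this end-to-end picture in mind, we return to the per-element bit budget.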
When generating hash values for $(a,1),\ldots,(a,j)$ for some $a$, we can use $\log_2 \frac{k}{t-\tau}$ bits to generate the bin, 13 bits to identify the minimum, and an additional $b$ bits to be stored for the winner in the bin. Thus we need $7+13+2=22$ bits per $(a,i)$ (with $k=256$, $t-\tau=2$ and $b=2$). We can therefore generate a sequence of pseudorandom bits seeded with $a$, and then break it up into chunks of 22 bits each, using the $i$th chunk for $(a,i)$. In the rare event that we do not get a unique minimum in some bin, we can reseed with $(a,1)$ and generate several additional bits per $i$ sequentially, and so on. Since this is a rare enough event, it does not impose a significant cost. Note that, in contrast, without such an upper bound on the required number of bits, the bucketing of any large document would lead to a large number of elements landing in the same bin, which would require a larger number of bits to identify the minimum. One thus has to either reseed for each $(a,i)$ or make other assumptions on the document size. \subsection*{The effect of the threshold for interestingness} The threshold $\alpha$, which determines what values of Jaccard similarity we consider interesting, would typically depend on the application. While we presented this work with $\alpha=0.5$, lower or higher similarity values may be preferable in other settings. When $\alpha$ is close to 1 (say 0.95), other optimizations may be possible. Indeed, note that for sets that are so similar, the sketches, even for large values of $b$, would agree in nearly all the locations. Intuitively, each stored value gives us little information, as only, say, $5$ out of $100$ $b$-bit values are different. One could ameliorate this by compressing the sketch in a careful manner. For example, one can take 10 $b$-bit values and just store their XOR, thus saving a factor of 10. In return, we can now generate 1000 $b$-bit values instead of 100, but store only the 100 resulting XORs. For similarity more than 0.95, at least half of the blocks of 10 would be identical, and thus their XOR would be the same. Accidental collisions of the $b$-bit values can be accounted for as before. A careful look at this process shows that we would get an estimate of $Sim^{10} \approx (1-10(1-Sim))$ for $Sim$ close to 1, from which a more accurate estimate of the similarity can be obtained. This idea is not new and has been suggested in Li and K\"onig~\cite{LiK11}, who show that taking $b$ to be $1$ is already better when similarity is at least $0.5$, and show that xoring pairs (the $b= \frac{1}{2}$ case) gives a further improvement for larger similarities. It is natural to pick the appropriate value of $b<1$ when $\alpha$ is close to $1$. \subsection*{Using more scales than 3} Recall that with the proposed choice of 3 scales, we get two common scales for sets whose weights are within a factor of $\alpha$ of each other. But even for sets whose weights are within a factor of $\alpha^2$, we have one common scale and thus get a similarity estimate with error commensurate with $k/2$ samples instead of $k$. By choosing more scales, we get a better trade-off between space and accuracy for every similarity value: we store samples at more scales, but have fewer samples for each scale. As $t$ increases, a larger fraction of our samples $(1-\frac{1}{t})$ is shared between two sets whose weights are within a factor of $\alpha$.
We also get a smoother fall-off in accuracy for smaller values of similarity: e.g., even sets with weights within a factor of $\alpha^2$ have $t-2$ scales in common, and thus a $(1-\frac{2}{t})$ fraction of our samples can be used for estimating similarity. By setting $\tau$ to be larger, say $\tau = 3$, and a larger $t$ (say 10), we get as much accuracy as the $\tau=1,t=3$ case for similarities around 0.5, but a smoother decay in accuracy for smaller similarities, and in fact a higher accuracy for higher similarities. The only way that we may have to ``pay'' for this is that, as the size of the unweighted sets now becomes smaller, the error in the reduction step may increase. In our experiments, even for unweighted sets of size 50, the error introduced was only a few percent, and moreover the errors over different scales seemed to largely cancel each other out. Finally, we note that the variance of the similarity estimate varies slightly from scale to scale. For the larger scales, the scaled weight is larger, so that the reduction has smaller variance. While this difference is at most a $\frac{1}{L}$ fraction of the variance coming from the unweighted sketch, it can be taken into account while averaging the estimates from the different scales. This would give a small improvement in the variance of the final estimate, at the cost of a slightly more complex estimation algorithm. \section{Proofs} In this section, we prove Theorems~\ref{thm:reduction} and~\ref{thm:reductiondep}. \subsection{A Useful Lemma} We start by stating and proving a useful inequality that relates the expectation of the inverse of a sum of independent $0$-$1$ variables to the inverse of a closely related expectation. This is a special case of a result of \cite{ChaoS72}, and we present a simple proof here for completeness. \begin{lemma} \label{lem:expinverse} Let $\{X_i\}_{i=1}^N$ be a sequence of independent Bernoulli random variables with $\mathbf{E}[X_i]=\mu_i$ and let $\mu \stackrel{def}{=} \sum_i \mu_i$. Then for $A\geq 1$, $$ \frac{1}{A+\mu} \leq \mathbf{E}\left[(A+\sum_{i=1}^N X_i)^{-1}\right] \leq \frac{1}{A+\mu-1}. $$ \end{lemma} \begin{proof} The first inequality follows by applying Jensen's inequality to the function $\phi(X)=1/(A+X)$, which is convex for $X\geq 0$. To prove the second inequality, we use a result from~\cite{ChaoS72}, which gives a formula for negative moments of random variables. We reproduce the proof of the case we use for completeness. Observe that for every $t,x>0$, $$ \frac{t^x}{x} = \int_0^t u^{x-1} \,\mathrm{d}u. $$ Setting $t=1$ and taking expectations over random $x$, we get $$ \mathbf{E}\big[\frac{1}{X}\big] = \int_0^1 \mathbf{E}\big[u^{X-1}\big] \,\mathrm{d}u. $$ For $X= A+ \sum_i X_i$, we upper bound \begin{align*} \mathbf{E}\big[u^{X-1}\big] &= u^{A-1}\prod_i \mathbf{E}\big[u^{X_i}\big]\\ &= \exp((A-1)\ln u)\prod_i (1-(1-u)\mu_i)\\ &\leq \exp((A-1)(u-1))\prod_i \exp(-(1-u)\mu_i)\\ &= \exp(-(1-u)(\mu+A-1)). \end{align*} We conclude that \begin{align*} \mathbf{E}\big[\frac{1}{X}\big] &\leq \exp(-(\mu+A-1))\int_{0}^1 \exp((\mu+A-1) u)\,\mathrm{d}u \\ &= \frac{1-\exp(-(\mu+A-1))}{\mu+A-1}<\frac{1}{\mu+A-1}. \end{align*} \end{proof} \subsection{Proof of Theorem~\ref{thm:reduction}} We set up some notation first.
Given weighted sets $w_1$ and $w_2$, we define $w_{\min} : U \to \Re$ as $$w_{\min}(a)= \min(w_1(a),w_2(a))$$ and similarly define $w_{\max} : U \to \Re$ as $$w_{\max}(a) = \max(w_1(a),w_2(a)).$$ Recall that $S_i = ReduceToUnwtd(w_i,r)$, and let $S_{\min}$ and $S_{\max}$ denote the outcomes $ReduceToUnwtd(w_{\min},r)$ and $ReduceToUnwtd(w_{\max},r)$ respectively. Our threshold-based rounding has the property that there is no loss of generality in assuming that $w_1=w_{\min}$ and $w_2=w_{\max}$. This is because the Jaccard similarity $jacc(w_1,w_2)$ equals the similarity $jacc(w_{\min},w_{\max})$, and moreover for any value of $r$ we have that $(a,j) \in S_1 \cap S_2$ if and only if $(a,j) \in S_{\min}$, and similarly $(a,j) \in S_1 \cup S_2$ if and only if $(a,j) \in S_{\max}$. Therefore, $jacc(S_1,S_2) = jacc(S_{\min},S_{\max})$ for every value of $r$. For the rest of this proof we will therefore assume that $w_1=w_{\min}$ and $w_2=w_{\max}$. Let $Z_1 : U \times \N \to \{0,1\}$ be the indicator function for the set $S_1$ and similarly define $Z_2$. Note that both $Z_1$ and $Z_2$ are random variables. For a function $f$ on a set $V$, let $f(V)$ denote $\sum_{v \in V} f(v)$. Part (1) of Theorem~\ref{thm:reduction} is now immediate by linearity of expectation, since for $i=1,2$ we have that $$\mathbf{E}[|S_i|] = \sum_{a \in U} \sum_{j\in \N} \mathbf{E}[Z_i(a,j)] = \sum_{a \in U} w_i(a) = |w_i|_1.$$ Part (2) of Theorem~\ref{thm:reduction} follows by a direct application of Chernoff bounds (see e.g.~\cite{DubhashiP09}). We now proceed to proving parts (3) and (4) of Theorem~\ref{thm:reduction}. Let us define $w_{\ensuremath{rest}} = w_{2} - w_{1}$ and similarly $Z_{\ensuremath{rest}} = Z_{2} - Z_{1}$. Let $\hat{U} = U \times \N$; we will denote a generic element of $\hat{U}$ by $e = (a,j)$. With this notation the original weighted Jaccard similarity equals $$jacc(w_1,w_2) = w_{1}(U)/w_{2}(U).$$ The Jaccard similarity between $S_1$ and $S_2$ on the other hand is $$jacc(S_1,S_2) = jacc(Z_1,Z_2) = Z_{1}(\hat{U})/Z_{2}(\hat{U}).$$ Also, note that for $i=1,2$ and for each $a\in U$, the random variables $Z_i(a,j)$ are all Bernoulli random variables. For $e=(a,j)$, define $X_e = Z_1(e)$, and $Y_e= Z_2(e)-Z_1(e)$. These random variables then satisfy the following properties: \begin{itemize} \item $X_e$ and $Y_e$ are Bernoulli random variables. \item The random variables $\{(X_e,Y_e): e \in \hat{U}\}$ are independent of each other. Thus, $X_e$ may depend on $Y_e$ but not on $X_{e'}$ for $e \neq e'$. \item For any $e$, at least one of $X_e$ and $Y_e$ is zero. Thus, $\mathbf{E}[X_eY_e]=0$. \item $\sum_{e \in \hat{U}} \mathbf{E}[X_e] = w_1(U)$. \item $\sum_{e \in \hat{U}} \mathbf{E}[X_e+Y_e] = w_2(U)$. \end{itemize} We first prove part (4). This is an easy consequence of Chernoff bounds applied to the sum of the Bernoulli random variables $X_e$, and to the sum of $(X_e+Y_e)$. Let $X$ denote $\sum_e X_e$ and $Y$ denote $\sum_e Y_e$. Let $\mu_x =\mathbf{E}[X]=w_1(U)$ and $\mu_y=\mathbf{E}[Y] = w_2(U)-w_1(U)$. Without loss of generality we can assume that $\mathbf{E}[X] \geq \mathbf{E}[Y]$ (or else we can argue about $\frac{Y}{X+Y}$). Now by standard Chernoff bounds, $$\Pr[|X-\mu_x|\geq \alpha \mu_x] \leq 2\exp(-\alpha^2 \mu_x/3)$$ and \begin{align*} \Pr[|X+Y-\mu_x-\mu_y| \geq \alpha &(\mu_x+\mu_y)] \\ &\leq 2\exp(-\alpha^2 (\mu_x+\mu_y)/3).
\end{align*} Thus, except with probability $4\exp(-\alpha^2\mu_x/3)$, we have $$X/\mu_x \in (1-\alpha,1+\alpha)$$ and $$(X+Y)/(\mu_x+\mu_y) \in (1-\alpha,1+\alpha).$$ In this case, $$\frac{X}{X+Y}\Big/\frac{\mu_x}{\mu_x+\mu_y} \in \Big(\frac{1-\alpha}{1+\alpha},\frac{1+\alpha}{1-\alpha}\Big) \subset (1-2\alpha,1+3\alpha).$$ Thus, $|\frac{X}{X+Y} - \frac{\mu_x}{\mu_x+\mu_y}| \leq 3\alpha$ and setting $\alpha = \sqrt{\frac{3\ln 4/\delta}{\mu_x}}$ implies that \begin{align*} \Pr\Big[\Big|\frac{X}{X+Y} - \frac{\mu_x}{\mu_x+\mu_y}\Big| \geq 3\sqrt{\frac{3\ln 4/\delta}{\mu_x}}\Big] \leq \delta, \end{align*} thus proving part (4). We remark that we did not attempt to optimize the constants here. Finally, we will prove the following result, which implies part (3). \begin{theorem} \label{thm:biastech} Let $(X_1,Y_1),\ldots, (X_n,Y_n)$ be a sequence of independent tuples of Bernoulli random variables such that $\mathbf{E}[X_i] = p_i$, $\mathbf{E}[Y_i]=q_i$ and $\mathbf{E}[X_i Y_i]=0$ (i.e., they are never 1 together). Let $\mu_x = \sum_i p_i$ and $\mu_y = \sum_i q_i$. Let $X=\sum_i X_i$ and $Y=\sum_i Y_i$. Then assuming\footnote{We use the convention that $0/0=1$ for the left inequality, and $0/0=0$ for the right one.} that $\mu_x+\mu_y >1$, \begin{align*} \frac{\mu_x-1}{\mu_x+\mu_y - 1} \leq \mathbf{E}\Big[\frac{X}{X+Y}\Big] \leq \frac{\mu_x}{\mu_x+\mu_y - 1}. \end{align*} \end{theorem} \begin{proof} We first write \begin{align*} \mathbf{E}&\Big[\frac{X}{X+Y}\Big] \\ &= \mathbf{E}_{(X_1,\ldots,X_n)}\Big[X\cdot \mathbf{E}_{Y_1,\ldots,Y_n}\big[\tfrac{1}{X +Y}\;\big|\; X_1,\ldots,X_n\big]\Big]. \end{align*} When $X=0$, the expression inside the outer expectation is zero. For $X \geq 1$, we apply Lemma~\ref{lem:expinverse} to conclude that \begin{align*} X\cdot\mathbf{E}_{(Y_1,\ldots,Y_n)}&\big[\tfrac{1}{X+Y}\;\big|\; X_1,\ldots,X_n\big]\\ &\leq \frac{X}{X+\mathbf{E}[Y\mid X_1,\ldots,X_n]-1}. \end{align*} Now recall that $\mathbf{E}[Y_i\mid X_i=1]=0$. It follows that $\mathbf{E}[Y_i\mid X_i] = (1-X_i)q_i/(1-p_i)$. Thus, \begin{align*} \mathbf{E}\Big[\frac{X}{X+Y}\Big] &\leq \mathbf{E}_{(X_1,\ldots,X_n)}\Big[\sum_i X_i\Big/\Big(\sum_i \big(X_i + \tfrac{(1-X_i)q_i}{1-p_i}\big) -1\Big)\Big]\\ &=\mathbf{E}_{(X_1,\ldots,X_n)}\Big[\sum_i X_i\Big/\Big(\sum_i \gamma_i X_i + \sum_i \beta_i -1\Big)\Big] \end{align*} where $\beta_i = \frac{q_i}{1-p_i}$ and $\gamma_i =1-\beta_i$. This expression is easily seen to be concave in each $X_i$ (for any fixing of the other $X_j$'s). Indeed, denoting the numerator by $f$ and the denominator by $g$, the partial derivative $\pder{f/g}{X_i} = \frac{1}{g} - \frac{\gamma_i f}{g^2}$ and $\pdertwo{f/g}{X_i} = -\frac{2\gamma_i}{g^2}(1-\gamma_i\cdot \frac{f}{g})$. Since both $f/g$ and $\gamma_i$ are at most 1, the concavity follows. Thus, using Jensen's inequality, we can one-by-one replace the random variable $X_i$ by its expectation. Rearranging, we get \begin{align*} \mathbf{E}\Big[\frac{X}{X+Y}\Big] &\leq \frac{\sum_i \mathbf{E}[X_i]}{\sum_i \gamma_i \mathbf{E}[X_i] + \sum_i \beta_i -1} = \frac{\mu_x}{\mu_x+\mu_y-1}, \end{align*} which implies the second inequality. By symmetry, \begin{align*} \mathbf{E}\Big[\frac{Y}{X+Y}\Big] &\leq \frac{\mu_y}{\mu_x+\mu_y - 1}. \end{align*} Noting that $\frac{X}{X+Y} = 1- \frac{Y}{X+Y}$ then implies the first inequality. \end{proof} This implies that both $\mathbf{E}[jacc(S_1,S_2)]$ and $jacc(w_1,w_2) = \frac{w_1(U)}{w_2(U)}$ are sandwiched in the interval $[\frac{w_1(U)-1}{w_2(U)-1},\frac{w_1(U)}{w_2(U)-1}]$.
Since this interval is of size $\frac{1}{w_2(U)-1}$, this implies part (3) and completes the proof of Theorem~\ref{thm:reduction}. \subsection{Proof of Theorem~\ref{thm:reductiondep}} In this section, we prove Theorem~\ref{thm:reductiondep}. We show that even if the random threshold for rounding $(a,j)$ is chosen as $h(r,a)$ instead of $h(r,a,j)$, the properties of the reduction hold. Once again, there is no loss of generality in assuming that $w_1=w_{\min}$ and $w_2=w_{\max}$. We note that for $i=1,2$, and for any $a$, the variables $Z_i(a,j)$ are all deterministic except possibly for $Z_i(a,\lceil w_{i}(a)\rceil)$. Thus, for each $i$, the Bernoulli random variables $\{Z_i(e): e \in U \times \N\}$ are all independent. This is sufficient for the proofs of parts (1), (2) and (4) to go through unchanged. It remains to argue that part (3) still holds. To prove this, we will need to handle additional dependencies between the tuples $(X_e,Y_e)$ as defined above. For a fixed $a$, let $e_a$ denote $(a,\lceil w_1(a)\rceil)$, and $e'_a$ denote $(a,\lceil w_2(a)\rceil)$. If $e_a=e'_a$, then the only non-deterministic random variable amongst $\{(X_e,Y_e): e=(a,j)\}$ is the tuple $(X_{e_a},Y_{e_a})$. If, on the other hand, $e_a < e'_a$, then $X_{e'_a}$ is deterministically $0$ and $Y_{e_a}=1-X_{e_a}$. Moreover, the random variables $X_{e_a}$ and $Y_{e'_a}$ are correlated through a common threshold $thresh$, with $X_{e_a} = \mathbf{1}(thresh<w_1(a) - \lfloor w_1(a)\rfloor)$ and $Y_{e'_a}= \mathbf{1}(thresh<w_2(a) - \lfloor w_2(a)\rfloor)$. For any $e$, let $Y'_e$ denote $Y_{e'_a}$ if $e=e_a$ for some $a$ with $e_a \neq e'_a$, and let $Y'_e$ be deterministically $0$ otherwise. For $e=e'_a\neq e_a$ we redefine $Y_e$ to be $0$. This essentially moves the troubling random variable $Y_{e'_a}$ to $Y'_{e_a}$. Furthermore, this regrouping ensures that all these triples $\{(X_e, Y_e,Y'_e): e \in U \times \N\}$ are independent of each other. The next result is an analog of Theorem~\ref{thm:biastech}, albeit with a more complex proof. \begin{theorem} \label{thm:biastechdep} Suppose $(X_1,Y_1,Y'_1),\ldots, (X_n,Y_n,Y'_n)$ is a sequence of independent tuples of Bernoulli random variables such that $\mathbf{E}[X_i] = p_i$, $\mathbf{E}[Y_i]=q_i$, $\mathbf{E}[Y'_i]=q'_i$, and $\mathbf{E}[X_i Y_i]=0$, i.e., $X_i$ and $Y_i$ are never 1 simultaneously. Further suppose that either (a) $q'_i=0$ or (b) $q_i=1-p_i$, $X_{i} = \mathbf{1}(thresh<p_i)$ and $Y'_{i}= \mathbf{1}(thresh<q'_i)$ for a threshold $thresh$ chosen uniformly in $[0,1]$. Let $X=\sum_i X_i$, $Y=\sum_i (Y_i+Y'_i)$, $\mu_x = \mathbf{E}[X]$ and $\mu_y=\mathbf{E}[Y]$. Then assuming\footnote{We use the convention that $0/0=1$ for the left inequality, and $0/0=0$ for the right one.} that $\mu_x+\mu_y >1$, \begin{align*} \frac{\mu_x-1}{\mu_x+\mu_y - 1} \leq \mathbf{E}\Big[\frac{X}{X+Y}\Big] \leq \frac{\mu_x}{\mu_x+\mu_y - 1}. \end{align*} \end{theorem} \begin{proof} We first write \begin{align*} \mathbf{E}&\Big[\frac{X}{X+Y}\Big] \\ &= \mathbf{E}_{(X_1,\ldots,X_n)}\Big[X\cdot \mathbf{E}_{Y_1,Y'_1,\ldots,Y_n,Y'_n}\big[\tfrac{1}{X +Y}\;\big|\; X_1,\ldots,X_n\big]\Big]. \end{align*} When $X=0$, the expression inside the outer expectation is zero. For $X \geq 1$, we once again apply Lemma~\ref{lem:expinverse}, which applies since conditioned on $X_i$, either $Y_i$ is fully determined, or $Y'_i$ is deterministically zero. In either case, at most one of $Y_i,Y'_i$ is random.
We conclude that \begin{align*} X\cdot\mathbf{E}_{(Y_1,Y'_1,\ldots,Y_n,Y'_n)}&\big[\tfrac{1}{X+Y}\;\big|\; X_1,\ldots,X_n\big]\\ &\leq \frac{X}{X+\mathbf{E}[Y\mid X_1,\ldots,X_n]-1}. \end{align*} We now need to estimate these expectations. We will compute $\mathbf{E}[X_i+Y_i+Y'_i\mid X_i]$. We consider several cases. In each case, we will show that this expectation can be written as $\beta_i + \gamma_i X_i$, with $\gamma_i \leq 1$. Case 1: $q'_i=0$. This case is similar to the setting of Theorem~\ref{thm:biastech}. Here $\mathbf{E}[Y_i\mid X_i=1]=0$. It follows that $\mathbf{E}[X_i+Y_i+Y'_i\mid X_i] = X_i+(1-X_i)q_i/(1-p_i) = \frac{q_i}{1-p_i} + X_i\frac{1-p_i-q_i}{1-p_i}$. Clearly, $\gamma_i \leq 1$. Case 2(a): $q_i=1-p_i, q'_i \geq p_i$. In this case, $X_i=\mathbf{1}(thresh<p_i)$, so that $X_i=1$ implies that $Y'_i=1$ as well. If $X_i=0$, then $Y'_i$ is $1$ with probability exactly $(q'_i-p_i)/(1-p_i)$. Moreover, $Y_i = 1-X_i$. Thus in this case, $\mathbf{E}[X_i+Y_i+Y'_i\mid X_i] = 1 + X_i + (1-X_i)(q'_i-p_i)/(1-p_i) = 1 + \frac{q'_i-p_i}{1-p_i} + X_i\frac{1-q'_i}{1-p_i}$. By assumption $\gamma_i \leq 1$. Case 2(b): $q_i=1-p_i, q'_i < p_i$. In this case, $X_i=0$ iff $thresh\geq p_i$, in which case $Y'_i=0$ as well. If $X_i=1$, then $Y'_i$ is $1$ with probability exactly $q'_i/p_i$. Once again, $Y_i=1-X_i$. Thus in this case, $\mathbf{E}[X_i+Y_i+Y'_i\mid X_i] = 1 + X_i\frac{q'_i}{p_i}$. By assumption $\gamma_i < 1$. Thus in all cases, we can write \begin{align*} \mathbf{E}\Big[\frac{X}{X+Y}\Big] &\leq \mathbf{E}_{(X_1,\ldots,X_n)}\Big[\sum_i X_i\Big/\Big(\sum_i \gamma_i X_i + \sum_i \beta_i -1\Big)\Big] \end{align*} with $\gamma_i \leq 1$. This expression is then easily seen to be concave in each $X_i$. Indeed, denoting the numerator by $f$ and the denominator by $g$, the partial derivative $\pder{f/g}{X_i} = \frac{1}{g} - \frac{\gamma_i f}{g^2}$ and $\pdertwo{f/g}{X_i} = -\frac{2\gamma_i}{g^2}(1-\gamma_i\cdot \frac{f}{g})$. Since both $f/g$ and $\gamma_i$ are at most 1, the concavity follows. Thus, using Jensen's inequality, \begin{align*} \mathbf{E}\Big[\frac{X}{X+Y}\Big] &\leq \frac{\sum_i \mathbf{E}[X_i]}{\sum_i \gamma_i \mathbf{E}[X_i] + \sum_i \beta_i -1}. \end{align*} It remains to compute the value of the denominator. Since $\mathbf{E}[X_i+Y_i+Y'_i\mid X_i] = \beta_i+\gamma_i X_i$, it follows that $\mathbf{E}[X_i+Y_i+Y'_i] = \beta_i + \gamma_i \mathbf{E}[X_i]$. Thus the denominator is exactly $\mathbf{E}[X+Y] - 1$. This implies the second inequality. To prove the first inequality, it will once again suffice to argue that $\mathbf{E}\big[\frac{Y}{X+Y}\big]$ is bounded above by $\frac{\mu_y}{\mu_x +\mu_y -1}$. Unlike Theorem~\ref{thm:biastech}, the variables $X$ and $Y$ are not symmetric. The proof, however, is relatively straightforward and we sketch it next. We will now write \begin{align*} \mathbf{E}&\Big[\frac{Y}{X+Y}\Big] \\ &= \mathbf{E}_{(Y_1,Y'_1,\ldots,Y_n,Y'_n)}\Big[Y\cdot \mathbf{E}_{X_1,\ldots,X_n}\big[\tfrac{1}{X +Y}\;\big|\; Y_1,Y'_1,\ldots,Y_n,Y'_n\big]\Big]. \end{align*} When $Y=0$, the expression inside the outer expectation is zero. For $Y \geq 1$, we once again apply Lemma~\ref{lem:expinverse} to write \begin{align*} Y\cdot &\mathbf{E}_{X_1,\ldots,X_n}\big[\tfrac{1}{X +Y}\;\big|\; Y_1,Y'_1,\ldots,Y_n,Y'_n\big] \\ &\leq \frac{Y}{Y+\mathbf{E}[X\mid Y_1,Y'_1,\ldots,Y_n,Y'_n]-1}. \end{align*} We once again handle the two cases separately.
In the case that $q_i=1-p_i$, the variable $X_i$ is fixed given $Y_i$, and the term $Y_i+Y'_i + \mathbf{E}[X_i\mid Y_i,Y'_i]$ is equal to $1+Y'_i$ (i.e., a deterministic quantity under the conditioning). If, on the other hand, $q'_i=0$, then $Y'_i$ is deterministically 0, and the term $Y_i+Y'_i + \mathbf{E}[X_i\mid Y_i,Y'_i]$ can be written as $\beta_i + \gamma_i Y_i$ with $\gamma_i \leq 1$. Thus we can apply Jensen's inequality to derive the claimed bound. \end{proof} This then implies part (3) of Theorem~\ref{thm:reductiondep} and completes its proof. \section{Synthetic Test Results} \makeatletter \newenvironment{tablehere} {\def\@captype{table}} {} \newenvironment{figurehere} {\def\@captype{figure}} {} \makeatother We produced pseudo-random artificial sets of weights, which we modified to center around a few chosen levels of exact weighted Jaccard similarity: 95\%, 90\%, 85\%, 80\%, 70\%, 65\%, 60\%, 55\%, 50\%, and 40\%. We implemented a single scaling version of our algorithm, so that we could estimate Jaccard for all pairs, and we also implemented Ioffe's algorithm. We selected full-length samples, as well as 2-bit, 1-bit, and half-bit compressions; the number of samples was chosen from 64, 128, 256, and 512. The first figures we present compare Ioffe sketching against our algorithm using exact Jaccard and our algorithm using binning, measuring the error between the values these algorithms produce, versus the underlying truth. In Figure~\ref{fig:IoffevsHMTaverageError}, the yellow bars show the average absolute error when estimating weighted Jaccard using 128 Ioffe samples, shown at a variety of underlying true Jaccard values ranging from 0.40 (on the right) to 0.96, while the blue bars show the estimate using our algorithm computing an estimated 1024 samples randomly assigned to 128 bins. The green bars show the average absolute error introduced by randomized rounding, in which every input item is reassigned to an integer close to its true weight, and the Jaccard value of these roundings is computed exactly. We display these because, following the theorems above on bias, all bias away from true Jaccard is introduced by rounding; both Ioffe's sampling and binning preserve expected Jaccard values. At high true Jaccard values, both Ioffe's algorithm and ours perform well, missing a true Jaccard value of 0.96 by approximately 0.01; this corresponds to getting a mismatch in slightly over one bin; due simply to quantization in 128 bins, we have to expect an error of at least one in 256. We observe that our algorithm is insignificantly worse than Ioffe at very high Jaccard values from 0.9 to 1.0 (largely due to errors introduced by rounding). Our algorithm is repeatably somewhat better than Ioffe at Jaccard values between 0.8 and 0.9, and is roughly equivalent for Jaccard values down to 0.5, below which point Ioffe sampling is better than our technique, although the absolute error for our technique never exceeds 0.035, corresponding to getting an average excess mismatch of approximately 4 bins. The next figure, Figure~\ref{fig:IoffevsHMTdeviation}, shows the same bars, but with a new color assignment (Ioffe is now blue, our algorithm is now red, and randomized rounding is gray), and computes the standard deviation of the observations made above corresponding to each true Jaccard value. In this graph we see that our standard deviation is smaller than Ioffe's except at the lowest true Jaccard value, at which point the mismatch in scalings starts to dominate the computation.
We also see that the standard deviation is nearly as large as the absolute error, mostly coming from estimates which are closer to the true value. \begin{figurehere} \centering \includegraphics[width=0.8 \columnwidth]{IoffevsHMTaverageError} \caption{Comparison of Ioffe's and our sampling in terms of average error} \label{fig:IoffevsHMTaverageError} \end{figurehere} \begin{figurehere} \centering \includegraphics[width=0.8 \columnwidth]{IoffevsHMTdeviation} \caption{Comparison of Ioffe's and our sampling in terms of standard deviation} \label{fig:IoffevsHMTdeviation} \end{figurehere} \nocite{LiSK13,AlonsoFM13,TheobaldSP08} \section{Conclusions} We have presented a simple scheme to reduce weighted sets to unweighted ones efficiently, in such a way that the size of the resulting unweighted set is tunable, and the Jaccard similarity between sets of comparable sizes is preserved very accurately. We have shown how to use this scheme for the problem of building sketches for Jaccard similarity estimation for weighted sets. The resulting scheme is two orders of magnitude faster than previously known schemes for typical settings of the parameters, and does not suffer any significant loss in quality. We prove that the scheme has a non-zero but negligible bias, and satisfies tail inequalities similar to the unweighted case. We also show empirical results showing that this computational benefit comes at negligible cost in accuracy in the interesting case of large similarity. \end{document}
\begin{document} \title{Twisted conjugacy in direct products of groups} \begin{abstract} Given a group \(G\) and an endomorphism \(\varphi\) of \(G\), two elements \(x, y \in G\) are said to be \(\varphi\)-conjugate if \(x = gy \inv{\varphi(g)}\) for some \(g \in G\). The number of equivalence classes for this relation is the Reidemeister number \(R(\varphi)\) of \(\varphi\). The set \(\{R(\psi) \mid \psi \in \Aut(G)\}\) is called the Reidemeister spectrum of \(G\). We investigate Reidemeister numbers and spectra on direct products of finitely many groups and determine what information can be derived from the individual factors. \end{abstract} \maketitle \section{Introduction} Let \(G\) be a group and \(\varphi: G \to G\) be an endomorphism. For \(x, y \in G\), we say that \(x\) and \(y\) are \emph{\(\varphi\)-conjugate} if there exists a \(g \in G\) such that \(x = g y \inv{\varphi(g)}\). In that case, we write \(x \Rconj{\varphi} y\) and we denote the \(\varphi\)-equivalence class (or Reidemeister class of \(\varphi\)) of \(x\) by \([x]_{\varphi}\). We define \(\Reid[\varphi]\) to be the set of all \(\varphi\)-equivalence classes and the \emph{{Reidemeister} number \(R(\varphi)\) of \(\varphi\)} as the cardinality of \(\Reid[\varphi]\). Note that \(R(\varphi) \in \N_0 \cup \{\infty\}\). Finally, we define the \emph{Reidemeister spectrum} to be \( \Spec_R(G) := \{ R(\varphi) \mid \varphi \in \Aut(G)\}. \) We say that \(G\) has the \emph{\(\Rinf\)-property} if \(\SpecR(G) = \{\infty\}\). In that case, we also write \(G \in \Rinf\). The concept of Reidemeister numbers arises from Nielsen fixed-point theory, where the topological analog is used to count and bound the number of fixed-point classes of a continuous self-map, and it is strongly related to the algebraic one introduced above, see \cite{Jiang83}. For many (families of) groups, it has been determined what their Reidemeister spectrum is and/or whether or not they possess the \(\Rinf\)-property, \eg certain subgroups of infinite symmetric groups \cite{Cox19}, extensions of \(\SL(n, \Z)\) and \(\GL(n, \Z)\) by countable abelian groups \cite{MubeenaSankaran14} and Houghton groups \cite{JoLeeLee17}. We refer the reader to \cite{FelshtynNasybullov16} for a more exhaustive list of examples. The behaviour of the Reidemeister spectrum and the \(\Rinf\)-property under group constructions has been studied as well. One of the major results is due to D.\ Gon\c{c}alves, P.\ Sankaran and P.\ Wong, who proved in \cite[Theorem~1]{GoncalvesSankaranWong20} that under some mild conditions any free product of finitely many (non-trivial) groups has the \(\Rinf\)-property. There are also several results concerning the relation among Reidemeister classes and numbers on group extensions in general, see \eg \cites{Goncalves98,Heath85,Wong01}, and wreath products of abelian groups, see \cite{GoncalvesWong06}. In this article, we investigate the following question. \begin{quest*} Let \(G\) and \(H\) be (non-isomorphic) groups and let \(n \geq 2\) be an integer. What can we say about \(\SpecR(G^{n})\) and \(\SpecR(G \times H)\) in terms of \(\SpecR(G)\) and \(\SpecR(H)\)? 
\end{quest*} There has already been done some research into Reidemeister spectra of direct products: S.\ Tertooy determined the Reidemeister spectrum of direct products of free nilpotent groups in his PhD thesis \cite[\S6.5]{Tertooy19} (see also Remark following \cref{cor:SpecRProductCentrelessDirectlyIndecomposableGroups}); K.\ Dekimpe and D.\ Gon\c{c}alves investigated in \cite{DekimpeGoncalves15} the Reidemeister spectrum of infinite abelian groups, some of which are given by an infinite direct product of finite abelian groups. However, these results concern more concrete (families of) groups, whereas we aim to find more general results. We start by providing a matrix description of the endomorphism monoid of a direct product of groups, which we then use to determine the Reidemeister number of endomorphisms of specific forms. We also derive sufficient conditions to obtain complete information on \(\SpecR(G \times H)\) if we know \(\SpecR(G)\) and \(\SpecR(H)\). \section{Matrix description of endomorphism monoid of direct product} Given a group \(G\), the set \(\End(G)\) of all endomorphisms on \(G\) forms a monoid under composition, with the identity map as neutral element. For endomorphisms of direct products of groups, there exists an alternative way to represent this monoid by means of matrices of group homomorphisms, as described by \eg F.\ Johnson in \cite[\S1]{Johnson83}, J.\ Bidwell, M.\ Curran and D.\ McCaughan in \cite[Theorem~1.1]{BidwellCurranMcCaughan06} and by J.\ Bidwell in \cite[Lemma~2.1]{Bidwell08}. If \(\varphi, \psi: G \to H\) are two homomorphisms with commuting images, then it is easily seen that \(\varphi + \psi: G \to H: g \mapsto (\varphi + \psi)(g) := \varphi(g) \psi(g)\) is a homomorphism as well. Now, let \(G_{1}, \ldots, G_{n}\) be groups. Define \[ \M = \mathopen{}\mathclose\bgroup\originalleft\{ \begin{pmatrix} \varphi_{11} & \ldots & \varphi_{1n} \\ \vdots & \ddots & \vdots \\ \varphi_{n1} & \ldots &\varphi_{nn} \end{pmatrix} \middlebar \begin{array}{ccc} \forall 1 \leq i, j \leq n: \varphi_{ij} \in \Hom(G_{j}, G_{i}) \\ \forall 1 \leq i, k, l \leq n: k \ne l \implies [\im \varphi_{ik}, \im \varphi_{il}] = 1\end{array}\aftergroup\egroup\originalright\} \] and equip it with matrix multiplication, where the addition of two homomorphisms \(\varphi, \psi \in \Hom(G_{j}, G_{i})\) with commuting images is defined as above and the multiplication of \(\varphi \in \Hom(G_{j}, G_{i})\) and \(\psi \in \Hom(G_{i}, G_{k})\) is defined as \(\varphi \circ \psi\). It is readily verified that this puts a monoid structure on \(\M\), where the diagonal matrix with the respective identity maps on the diagonal is the neutral element. \begin{lemma} For \(G = \Times\limits_{i = 1}^{n}G_{i}\), we have that \(\End (G) \cong \M\) as monoids. \end{lemma} F.\ Johnson proved this result for \(G_{1} = \ldots = G_{n}\) and although J. Bidwell, M.\ Curran and D.\ McCaughan state in both aforementioned papers that they only consider finite groups, their proof holds up for infinite groups as well. For the sake of completeness, we give a proof here as well. \begin{proof} For \(1 \leq i \leq n\), denote by \(\pi_{i}: G \to G_{i}\) the canonical projection and by \(e_{i}: G_{i} \to G\) the canonical inclusion. Given \(\varphi \in \End (G)\), put \(\varphi_{ij} := \pi_{i} \circ \varphi \circ e_{j} \in \Hom(G_{j}, G_{i})\). Fix \(i, k, l \in \{1, \ldots, n\}\) with \(k \ne l\). If \(g_{k} \in G_{k}\) and \(g_{l} \in G_{l}\), then \(e_{k}(g_{k})\) and \(e_{l}(g_{l})\) commute in \(G\). 
Hence, \(\varphi(e_{k}(g_{k}))\) and \(\varphi(e_{l}(g_{l}))\) commute as well, implying that \(\varphi_{ik}(g_{k})\) and \(\varphi_{il}(g_{l})\) commute too. Therefore, \([\im \varphi_{ik}, \im \varphi_{il}] = 1\). Since the commuting condition is satisfied, we can define \(F: \End(G) \to \M\) by putting \(F(\varphi) := (\varphi_{ij})_{ij}\). If \(\varphi, \psi \in \End(G)\), then we need to prove for all \(1 \leq i, j \leq n\) that \[ \pi_{i} \circ \varphi \circ \psi \circ e_{j} = (\varphi \circ \psi)_{ij} = \sum_{k = 1}^{n} \varphi_{ik} \psi_{kj}. \] For \(1 \leq i, j \leq n\) and \(g \in G_{j}\), we see that \begin{align*} (\pi_{i} \circ \varphi \circ \psi \circ e_{j})(g) &= (\pi_{i} \circ \varphi \circ \psi)(1, \ldots, 1, g, 1, \ldots, 1) \\ &= (\pi_{i} \circ \varphi)(\psi_{1j}(g), \ldots, \psi_{nj}(g)) \\ &= (\pi_{i} \circ \varphi) \mathopen{}\mathclose\bgroup\originalleft(e_{1}(\psi_{1j}(g))\ldots e_{n}(\psi_{nj}(g)) \aftergroup\egroup\originalright) \\ &= \prod_{k = 1}^{n} (\pi_{i} \circ \varphi \circ e_{k})(\psi_{kj}(g)) \\ &= \prod_{k = 1}^{n} (\varphi_{ik} \circ\psi_{kj})(g) \\ &= \mathopen{}\mathclose\bgroup\originalleft(\sum_{k = 1}^{n} \varphi_{ik} \psi_{kj}\aftergroup\egroup\originalright)(g), \end{align*} hence the equality holds. Therefore, \(F\) is a monoid homomorphism. It is also clear that \(F\) is injective. To prove that \(F\) is surjective, let \((\varphi_{ij})_{ij} \in \M\) and define \[ \varphi: G \to G: (g_{1}, \ldots, g_{n}) \mapsto \mathopen{}\mathclose\bgroup\originalleft(\prod_{k = 1}^{n}\varphi_{1k}(g_{k}), \ldots, \prod_{k = 1}^{n}\varphi_{nk}(g_{k})\aftergroup\egroup\originalright). \] Due to the commuting conditions and the fact that all \(\varphi_{ij}\)'s are group homomorphisms, the map \(\varphi\) is a well-defined endomorphism of \(G\) and it is clear that \(F(\varphi) = (\varphi_{ij})_{ij}\). \end{proof} We will often identify an endomorphism of \(G\) with its image under \(F\) and write \(\varphi = (\varphi_{ij})_{ij}\). In a matrix, we denote the identity map with \(1\) and the trivial homomorphism with \(0\). \begin{lemma} \label{lem:automorphismOfDirectProductImpliesNormalImages} With the notations as above, let \(\varphi \in \Aut(G)\). Then \begin{enumerate}[(i)] \item for all \(1 \leq i \leq n\), \(G_{i}\) is generated by \(\{\im \varphi_{ij} \mid 1 \leq j \leq n\}\); \item \(\im \varphi_{ij}\) is normal in \(G_{i}\) for all \(1 \leq i, j \leq n\). \end{enumerate} \end{lemma} \begin{proof} Suppose \(\varphi\) is an automorphism. Then \(\varphi\) is surjective. Let \(1 \leq i \leq n\). Then \(G_{i} = \pi_{i}(\varphi(G))\). Since \(G\) is generated by \(\{e_{j}(G_{j}) \mid 1 \leq j \leq n\}\), we see that \(G_{i}\) is generated by \(\{(\pi_{i} \circ \varphi \circ e_{j})(G_{j}) \mid 1 \leq j \leq n\} = \{\im \varphi_{ij} \mid 1 \leq j \leq n\}\). This proves the first item. For the second, fix \(1 \leq i, j \leq n\). If we pick an arbitrary \(g \in \im \varphi_{ij}\) and \(h \in G_{i}\), we can write \(h = xy\) for some \(x \in \im \varphi_{ij}\) and \(y \in \grpgen{\{\im \varphi_{ik} \mid k \ne j\}}\), as \(G_{i}\) is generated by \(\im \varphi_{i1}, \ldots, \im \varphi_{in}\) and the images of \(\varphi_{ij}\) and \(\varphi_{ik}\) commute if \(j \ne k\). Then \[ [g, h] = \inv{g} \invb{xy} g xy = \inv{g} \inv{y} \inv{x} g xy = \inv{g} \inv{x} g x = [g, x] \in \im \varphi_{ij}. \] Consequently, \([\im \varphi_{ij}, G_{i}] \leq \im \varphi_{ij}\), implying that \(\im \varphi_{ij}\) is normal in \(G_{i}\).
\end{proof} \begin{lemma} \label{lem:upperTriangularAutomorphisms} With the notations as above, suppose that all automorphisms of \(G\) have a matrix representation that is upper triangular, or all of them are lower triangular. Let \(\varphi \in \Aut(G)\). Then \(\varphi_{ii} \in \Aut(G_{i})\) for each \(i \in \{1, \ldots, n\}\) and \(\varphi_{ij} \in \Hom(G_{j}, Z(G_{i}))\) for all \(1 \leq i \ne j \leq n\). \end{lemma} \begin{proof} We prove the result for upper triangular matrices, the proof for lower triangular is similar. Let \(\varphi \in \Aut(G)\). Let \(\inv{\varphi} = (\psi_{ij})_{ij}\). Fix \(i \in \{1, \ldots, n\}\). Since both \(\varphi\) and \(\inv{\varphi}\) are upper triangular, we find that \[ \Id_{G_{i}} = (\varphi \circ \inv{\varphi})_{ii} = \varphi_{ii} \circ \psi_{ii} \] and \[ \Id_{G_{i}} = (\inv{\varphi} \circ \varphi)_{ii} = \psi_{ii} \circ \varphi_{ii}, \] showing that \(\varphi_{ii}\) must be an automorphism of \(G_{i}\). Now, let \(1 \leq i, j \leq n\) be indices with \(i \ne j\). Since \(\im \varphi_{ij}\) and \(\im \varphi_{ii} = G_{i}\) commute, we conclude that \(\im \varphi_{ij} \in Z(G_{i})\). \end{proof} Now that we have an alternative description of the endomorphism monoid, we can deduce some general results regarding Reidemeister numbers of specific endomorphisms on direct products. We define the diagonal endomorphisms to be all endomorphisms of \(\End (G)\) of the form \[ \Diag(\varphi_{1}, \ldots, \varphi_{n}): G \to G: (g_{1}, \ldots, g_{n}) \mapsto (\varphi_{1}(g_{1}), \ldots, \varphi_{n}(g_{n})), \] where each \(\varphi_{i} \in \End (G_{i})\). We denote the submonoid of all diagonal endomorphism with \(\Diag(G)\). Note that it is isomorphic with \(\End(G_{1}) \times \ldots \times \End(G_{n})\). The following is then quite straightforward. \begin{prop} \label{prop:ReidemeisterNumberDirectProductAutomorphismGroups} Let $G_{1}, \ldots, G_{n}$ be groups and put \(G = \Times\limits_{i = 1}^{n} G_{i}\). Let \(\varphi\) be an element of \(\Diag(G)\) and write \(\varphi = \Diag(\varphi_{1}, \ldots, \varphi_{n})\). Then $R(\varphi) = \prod_{i = 1}^{n} R(\varphi_{i})$. \end{prop} \begin{proof} It is clear that, for $(g_{1}, \ldots, g_{n}), (h_{1}, \ldots, h_{n}) \in G_{1} \times \ldots \times G_{n}$, we have \[ (g_{1}, \ldots, g_{n}) \Rconj{\varphi} (h_{1}, \ldots, h_{n}) \iff \forall i \in \{1, \ldots, n\}: g_{i} \Rconj{\varphi_{i}} h_{i}. \] Thus, the map \[ \Reid[\varphi] \to \Reid[\varphi_{1}] \times \ldots \times \Reid[\varphi_{n}]: [(g_{1}, \ldots, g_{n})]_{\varphi} \mapsto ([g_{1}]_{\varphi_{1}}, \ldots, [g_{n}]_{\varphi_{n}}) \] is a well-defined bijection, implying that \[ R(\varphi) = \prod_{i = 1}^{n} R(\varphi_{i}). \qedhere \] \end{proof} \begin{defin} For \(a \in \N_{0} \cup \{\infty\}\), we define the product \(a \cdot \infty\) to be equal to \(\infty\). Let \(A_{1}, \ldots, A_{n}\) be subsets of \(\N_{0} \cup \{\infty\}\). We then define \[ A_{1} \cdot \ldots \cdot A_{n} := \prod_{i = 1}^{n} A_{i} := \{a_{1} \ldots a_{n} \mid \forall i \in \{1, \ldots, n\}: a_{i} \in A_{i}\}. \] If \(A_{1} = \ldots = A_{n} =: A\), we also write \(A^{(n)}\) for the \(n\)-fold product of \(A\) with itself. \end{defin} \begin{cor} \label{cor:SpecRDirectProductCharacteristicSubgroups} Let \(G_{1}, \ldots, G_{n}\) be groups and put \(G = \Times\limits_{i = 1}^{n} G_{i}\). Then \[ \prod_{i = 1}^{n} \SpecR(G_{i}) \subseteq \SpecR(G). \] Equality holds if \(\Aut(G) = \Times\limits_{i = 1}^{n} \Aut(G_{i})\). 
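This is the case, for instance, when each \(G_{i}\) is a characteristic subgroup of \(G\): every automorphism of \(G\) then restricts to an automorphism of each factor and is the diagonal endomorphism determined by these restrictions.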
\end{cor} We now specify to the case of an \(n\)-fold direct product of a group with itself, \ie \(G^{n}\) for some group \(G\) and integer \(n \geq 1\). First of all, the symmetric group \(S_{n}\) embeds in \(\End(G^{n})\) in the following way: \[ S_{n} \to \End G^{n}: \sigma \mapsto (P_{\inv{\sigma}}: G \to G: (g_{1}, \ldots, g_{n}) \mapsto (g_{\inv{\sigma}(1)}, \ldots, g_{\inv{\sigma}(n)})), \] where the matrix representation of \(P_{\inv{\sigma}}\) is given by \[ P_{\inv{\sigma}} = \begin{pmatrix} e_{\inv{\sigma}(1)} \\ \vdots \\ e_{\inv{\sigma}(n)} \end{pmatrix}. \] Here, \(e_{i}\) is a row with a \(1\) on the \(i\)-th spot and zeroes elsewhere. To prove that is indeed a monoid morphism, let \(\sigma, \tau \in S_{n}\) and \((g_{1}, \ldots, g_{n}) \in G^{n}\) be arbitrary. Put \(h_{i} := g_{\inv{\sigma}(i)}\), then \[ P_{\inv{\tau}}(h_{1}, \ldots, h_{n}) = (h_{\inv{\tau}(1)}, \ldots, h_{\inv{\tau}(n)}) = (g_{\inv{\sigma}(\inv{\tau}(1))}, \ldots, g_{\inv{\sigma}(\inv{\tau}(n))}) \] and \[ P_{\inv{(\tau \sigma)}}(g_{1}, \ldots, g_{n}) = (g_{\inv{\sigma}(\inv{\tau}(1))}, \ldots, g_{\inv{\sigma}(\inv{\tau}(n))}), \] therefore \(P_{\inv{(\tau\sigma)}} = P_{\inv{\tau}} P_{\inv{\sigma}}\). Now, we define \(\End_{w}(G^{n})\) to be the submonoid of \(\End(G^{n})\) generated by \(S_{n}\) and \(\Diag(G^{n})\). \begin{lemma} Let \(G\) be a group and \(n \geq 1\) an integer. Then each endomorphism \(\varphi\) in \(\End_{w}(G^{n})\) can be written as \[ \varphi = \Diag(\varphi_{1}, \ldots, \varphi_{n}) P_{\inv{\sigma}} \] for some \(\varphi_{i} \in \End(G)\) and \(\sigma \in S_{n}\). Moreover, \(\varphi \in \Aut(G^{n})\) if and only if each \(\varphi_{i} \in \Aut(G)\). \end{lemma} \begin{proof} The existence of such a decomposition relies on the following equality, which we claim holds for all \(\sigma \in S_{n}\) and \(\varphi_{i} \in \End(G)\): \begin{equation} \label{eq:rewritingRulePermutationDiagonalMatrices} P_{\inv{\sigma}} \Diag(\varphi_{1}, \ldots, \varphi_{n}) = \Diag(\varphi_{\inv{\sigma}(1)}, \ldots, \varphi_{\inv{\sigma}(n)}) P_{\inv{\sigma}}. \end{equation} Indeed, evaluating the left-hand side in \((g_{1}, \ldots, g_{n})\) yields \[ P_{\inv{\sigma}} (\varphi_{1}(g_{1}), \ldots, \varphi_{n}(g_{n})) = (\varphi_{\inv{\sigma}(1)}(g_{\inv{\sigma}(1)}), \ldots, \varphi_{\inv{\sigma}(n)}(g_{\inv{\sigma}(n)})) \] whereas the right-hand side yields \[ \Diag(\varphi_{\inv{\sigma}(1)}, \ldots, \varphi_{\inv{\sigma}(n)}) (g_{\inv{\sigma}(1)}, \ldots, g_{\inv{\sigma}(n)}), \] hence we see they are equal. Thus, given an element in \(\End_{w}(G^{n})\), we can apply the equality above several times in order to gather all diagonal endomorphisms and all elements of the form \(P_{\inv{\sigma}}\) together, yielding the desired representation. The claim regarding the automorphism is immediate, as each \(P_{\inv{\sigma}}\) is an automorphism. \end{proof} The representation is not necessarily unique, as the trivial endomorphism equals \(\Diag(0, \ldots, 0) P_{\inv{\sigma}}\) for all \(\sigma \in S_{n}\). If we restrict ourselves to automorphisms, however, this yields the injective group homomorphism \[ \Psi: \Aut(G) \wr S_{n} \to \Aut(G^{n}): (\varphi, \sigma) = (\varphi_{1}, \ldots, \varphi_{n}, \sigma) \mapsto \Diag(\varphi_{1}, \ldots, \varphi_{n}) P_{\inv{\sigma}}. 
\] Here, \(\Aut(G) \wr S_{n}\) is the wreath product, \ie the semidirect product \(\Aut(G)^{n} \rtimes S_{n}\), where the action is given by \[ \sigma \cdot (\varphi_{1}, \ldots, \varphi_{n}) = (\varphi_{\inv{\sigma}(1)}, \ldots, \varphi_{\inv{\sigma}(n)}). \] To see that is indeed homomorphism, note that, for \((\varphi, \sigma), (\psi, \tau) \in \Aut(G) \wr S_{n}\), \[ \Psi((\varphi, \sigma) (\psi, \tau)) = \Psi(\varphi \circ (\sigma \cdot \psi), \sigma \tau) = \Diag(\varphi_{1}\psi_{\inv{\sigma}(1)}, \ldots, \varphi_{n}\psi_{\inv{\sigma}(n)}) P_{\invb{\sigma \tau}} \] and \begin{align*} \Psi(\varphi, \sigma)\Psi(\psi, \tau) &= \Diag(\varphi_{1}, \ldots, \varphi_{n}) P_{\inv{\sigma}} \Diag(\psi_{1}, \ldots, \psi_{n}) P_{\inv{\tau}} \\ &= \Diag(\varphi_{1}, \ldots, \varphi_{n}) \Diag(\psi_{\inv{\sigma}(1)}, \ldots, \psi_{\inv{\sigma}(n)}) P_{\inv{\sigma}} P_{\inv{\tau}} \\ &= \Diag(\varphi_{1}\psi_{\inv{\sigma}(1)}, \ldots, \varphi_{n}\psi_{\inv{\sigma}(n)}) P_{\invb{\sigma \tau}} \end{align*} where we used \eqref{eq:rewritingRulePermutationDiagonalMatrices}. We will identify \(\Aut(G) \wr S_{n}\) with its image under \(\Psi\) and thus regard it as a subgroup of \(\Aut(G^{n})\). We now determine the Reidemeister number of an arbitrary element of \(\End_{w}(G^{n})\), which also generalises \cite[Proposition~5.1.2]{DekimpeSenden21}. \begin{prop}[{see \eg \cite[Proposition~1.1.3]{DekimpeSenden21}}] \label{prop:conjugateEndomorphisms} Let \(G\) be a group, \(\varphi \in \End(G)\) and \(\psi \in \Aut(G)\). Then \(R(\varphi) = R(\varphi^\psi)\). \end{prop} \begin{prop} \label{prop:ReidemeisterNumberPermutedDiagonalEndomorphism} Let \(G\) be a group and let \(\varphi := \Diag(\varphi_{1}, \ldots, \varphi_{n}) P_{\sigma} \in \End_{w}(G^{n})\). Let \[ \sigma = (c_{1} \ldots c_{n_{1}})(c_{n_{1} + 1} \ldots c_{n_{2}})\ldots (c_{n_{k - 1} + 1} \ldots c_{n_{k}}) \] be the disjoint cycle decomposition of \(\sigma\), where \(n_{0} := 0 < n_{1} < n_{2} < \ldots < n_{k} = n\). Put \(\tilde{\varphi}_{j} := \varphi_{c_{n_{j - 1} + 1}} \circ \ldots \circ \varphi_{c_{n_{j}}}\). Then \[ R(\varphi) = \prod_{j = 1}^{k} R(\tilde{\varphi}_{j}). \] \end{prop} \begin{proof} Let \(\tau \in S_{n}\) be the permutation given by \(\tau(i) = c_{i}\) for all \(i \in \{1, \ldots, n\}\). Then \[ \sigma^{\tau} = \inv{\tau} \sigma \tau = (1 \ldots n_{1})(n_{1} + 1 \ \ldots n_{2}) \ldots (n_{k - 1} + 1 \ldots n_{k}). \] By \cref{prop:conjugateEndomorphisms}, conjugate endomorphisms have the same Reidemeister number. Since, by \eqref{eq:rewritingRulePermutationDiagonalMatrices}, \[ P_{\tau}\Diag(\varphi_{1}, \ldots, \varphi_{n}) P_{\sigma} P_{\inv{\tau}} = \Diag(\varphi_{\tau(1)}, \ldots, \varphi_{\tau(n)})P_{\sigma^{\tau}}, \] it is sufficient to determine the Reidemeister number of the latter endomorphism. Note that \(P_{\sigma^{\tau}}\) is a block matrix consisting of \(k\) square blocks of the form \[ \begin{pmatrix} 0 & 1 & 0 & 0 & \ldots & 0 & 0 \\ 0 & 0 & 1 & 0 & \ldots & 0 & 0 \\ 0 & 0 & 0 & 1 & \ldots & 0 & 0 \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots& \vdots \\ 0 & 0 & 0 & 0 & \ldots & 1 & 0 \\ 0 & 0 & 0 & 0 & \ldots & 0 & 1 \\ 1 & 0 & 0 & 0 & \ldots & 0 & 0 \end{pmatrix} \] on the diagonal and zero matrices elsewhere. This implies that \[ \Diag(\varphi_{\tau(1)}, \ldots, \varphi_{\tau(n)})P_{\sigma^{\tau}} \in \Times_{i = 1}^{k} \End(G^{n_{i} - n_{i - 1}}). 
\] Therefore, it is sufficient to determine Reidemeister numbers of endomorphisms of the form \(\psi := \Diag(\psi_{1}, \ldots, \psi_{n})P_{\alpha}\), where \(\alpha = (1 \ 2 \ldots \ n)\). Indeed, if we know that \(R(\psi) = R(\psi_{1} \circ \ldots \circ \psi_{n})\), then \begin{align*} R(\varphi) &= \prod_{i = 1}^{k} R(\varphi_{\tau(n_{i - 1} + 1)} \circ \ldots \circ \varphi_{\tau(n_{i})}) = \prod_{i = 1}^{k} R(\varphi_{c_{n_{i - 1} + 1}} \circ \ldots \circ \varphi_{c_{n_{i}}}) = \prod_{j = 1}^{k} R(\tilde{\varphi}_{j}), \end{align*} by construction of \(\tau\). So, let \((g_{1}, \ldots, g_{n}) \in G^{n}\). We claim there exists a \(g \in G\) such that \[ (g_{1}, \ldots, g_{n}) \Rconj{\psi} (g, 1, \ldots, 1). \] Note that, for \((x_{1}, \ldots, x_{n}) \in G^{n}\), \[ (x_{1}, \ldots, x_{n}) (g_{1}, \ldots, g_{n}) \inv{\psi(x_{1}, \ldots, x_{n})} = (x_{1}g_{1} \inv{\psi_{1}(x_{2})}, \ldots, x_{n}g_{n} \inv{\psi_{n}(x_{1})}). \] Put \(x_{1} = 1, x_{n} = \inv{g}_{n}\) and \(x_{i} = \psi_{i}(x_{i + 1})\inv{g}_{i}\) for \(i \in \{2, \ldots, n - 1\}\), starting with \(i = n - 1\). Finally, put \(g = x_{1} g_{1} \inv{\psi_{1}(x_{2})}\). Then \[ \begin{cases} g = x_{1} g_{1} \inv{\psi_{1}(x_{2})} \\ 1 = x_{i} g_{i} \inv{\psi_{i}(x_{i + 1})} & \mbox{for \(i \geq 2\)}, \end{cases} \] where \(x_{n + 1} = x_{1}\). This implies that \((g_{1}, \ldots, g_{n}) \Rconj{\psi} (g, 1, \ldots, 1)\). Next, put \(\tilde{\psi} = \psi_{1} \circ \ldots \circ \psi_{n}\) and suppose \((g, 1, \ldots, 1) \Rconj{\psi} (h, 1, \ldots, 1)\) for some \(g, h \in G\). Then there exist \(x_{1}, \ldots, x_{n} \in G\) such that \[ \begin{cases} g = x_{1} h\inv{\psi_{1}(x_{2})} \\ 1 = x_{i} \inv{\psi_{i}(x_{i + 1})} & \mbox{for \(i \geq 2\)}, \end{cases} \] where again \(x_{n + 1} = x_{1}\). Consequently, \(x_{i} = \psi_{i}(x_{i + 1})\) for \(i \geq 2\). This implies that \[ g = x_{1}h \inv{\psi_{1}(x_{2})} = x_{1}h \inv{\psi_{1}(\psi_{2}(x_{3}))} = \ldots = x_{1}h \inv{\tilde{\psi}(x_{1})}, \] \ie \(g \Rconj{\tilde{\psi}} h\). Conversely, if \(g \Rconj{\tilde{\psi}} h\), then there exists an \(x \in G\) such that \(g = x h \inv{\tilde{\psi}(x)}\). Put \(x_{1} = x\) and \(x_{i} = \psi_{i}(x_{i + 1})\) for \(i \geq 2\), starting with \(i = n\) and where, again, \(x_{n + 1} = x_{1}\). Then \[ \begin{cases} g = x_{1} h\inv{\psi_{1}(x_{2})} \\ 1 = x_{i} \inv{\psi_{i}(x_{i + 1})} & \mbox{for \(i \geq 2\)}, \end{cases} \] hence \((g, 1, \ldots, 1) \Rconj{\psi} (h, 1, \ldots, 1)\). Combining all results, we find that there is a bijection between the Reidemeister classes of \(\psi\) and those of \(\tilde{\psi}\). Consequently, \(R(\psi) = R(\tilde{\psi})\). \end{proof} \begin{cor} \label{cor:ReidemeisterNumbersOfAut(G)wrSn} Let \(G\) be a group and \(n \geq 1\) a natural number. Then \[ \bigcup_{i = 1}^{n} \SpecR(G)^{(i)} \subseteq \SpecR(G^{n}). \] Equality holds if \(\Aut(G^{n}) = \Aut(G) \wr S_{n}\). \end{cor} \begin{proof} Let \(1 \leq k \leq n\). Let \(\varphi_{1}, \ldots, \varphi_{k}\) be automorphisms of \(G\). We prove that \(R(\varphi_{1}) \ldots R(\varphi_{k}) \in \SpecR(G^{n})\). Consider the automorphism \[ \varphi := \Diag(\varphi_{1}, \ldots, \varphi_{k}, \Id_{G}, \ldots, \Id_{G})P_{\sigma} \] where \(\sigma = (1)(2)(3) \ldots (k - 1)(k \ k + 1 \ \ldots \ n - 1 \ n)\). By the previous proposition \(R(\varphi) = R(\varphi_{1}) \ldots R(\varphi_{k})\). 
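For instance, for \(n = 3\) and \(k = 2\) this construction gives \(\varphi = \Diag(\varphi_{1}, \varphi_{2}, \Id_{G})P_{\sigma}\) with \(\sigma = (1)(2\ 3)\), and indeed \(R(\varphi) = R(\varphi_{1})\, R(\varphi_{2} \circ \Id_{G}) = R(\varphi_{1})R(\varphi_{2})\).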
Note that this combined with the previous proposition also proves that the left-hand side equals \(\{R(\varphi) \mid \varphi \in \Aut(G) \wr S_{n} \}\), from which the additional claim follows immediately. \end{proof} \begin{cor} \label{cor:RinfWhenAut(Gn)=Aut(G)wrSn} Let \(G\) be a group and \(n \geq 1\) an integer. Suppose that \(G \in \Rinf\) and that \(\Aut(G^{n}) = \Aut(G) \wr S_{n}\). Then \(G^{n} \in \Rinf\). \end{cor} \begin{proof} Since \(\Aut(G) \wr S_{n} = \Aut(G^{n})\), the previous corollary shows that \[ \SpecR(G^{n}) = \bigcup_{i = 1}^{n} \SpecR(G)^{(i)} = \bigcup_{i = 1}^{n} \{\infty\}^{(i)} = \{\infty\}. \qedhere \] \end{proof} We can also generalise \cref{prop:conjugateEndomorphisms}. \begin{cor} \label{cor:CyclicPermutationsPreserveReidemeisterNumber} Let \(G\) be a group, \(n \geq 1\) an integer and \(\varphi_{1}, \ldots, \varphi_{n} \in \End(G)\). Then \[ R(\varphi_{1} \circ \varphi_{2} \circ \ldots \circ \varphi_{n}) = R(\varphi_{2} \circ \varphi_{3} \ldots \circ \varphi_{n} \circ \varphi_{1}) \] \end{cor}\begin{proof} Consider the endomorphism \(\varphi := \Diag(\varphi_{1}, \ldots, \varphi_{n}) P_{(1 \ 2 \ \ldots \ n)}\) of \(G^{n}\). Since \((1 \ 2 \ \ldots \ n)\) and \((2 \ \ldots \ n \ 1)\) are both cycle representations of the same permutation, \cref{prop:ReidemeisterNumberPermutedDiagonalEndomorphism} yields that \[ R(\varphi_{1} \circ \varphi_{2} \circ \ldots \circ \varphi_{n}) = R(\varphi) = R(\varphi_{2} \circ \varphi_{3} \circ \ldots \circ \varphi_{n} \circ \varphi_{1}). \qedhere \] \end{proof} \section{Direct products of two groups} We now restrict to the case of direct products of two groups. Let \(H\) and \(K\) be two groups and put \(G := H \times K\). Instead of using indices to represent endomorphisms of \(G\) as matrices, we use the notation \[ \M = \mathopen{}\mathclose\bgroup\originalleft\{ \begin{pmatrix} \alpha & \beta \\ \gamma & \delta \end{pmatrix} \middlebar \begin{array}{ccc}\alpha \in \End(H), & \beta \in \Hom(K, H), &[\im \alpha, \im \beta] = 1\\ \gamma \in \Hom(H, K), &\delta \in \End(K), &[\im \gamma, \im \delta] = 1\end{array}\aftergroup\egroup\originalright\}. \] Let \(\varphi \in \End(G)\) be an endomorphism leaving \(H\) invariant. Then, in the notation above, \(\gamma\) is the trivial homomorphism, sending everything to \(1\). Fix representatives \(\{h_{i}\}_{i \in \mathcal{I}}\) of \(\Reid[\alpha]\) and \(\{k_{j}\}_{j \in \mathcal{J}}\) of \(\Reid[\delta]\). \begin{lemma} \label{lem:representativesReidphi} \(\Reid[\varphi] = \{[(h_{i},k_{j})]_{\varphi} \mid i \in \mathcal{I}, j \in \mathcal{J}\}\). In particular, \(R(\varphi) \leq R(\alpha)R(\delta)\). \end{lemma} \begin{proof} Let \((h, k) \in G\). We have to prove that there are indices \(i\) and \(j\) such that \((h, k) \Rconj{\varphi} (h_{i}, k_{j})\). First, let \(j \in \mathcal{J}\) be such that \(k \Rconj{\delta} k_{j}\). Write \(k_{j} = y k \inv{\delta(y)}\) for some \(y \in K\), then \[ (1, y) (h, k) \inv{\varphi(1, y)} = (h \inv{\beta(y)}, k_{j}). \] Put \(h' = h \inv{\beta(y)}\), then \((h, k) \Rconj{\varphi} (h', k_{j})\). Next, let \(i \in \mathcal{I}\) be such that \(h' \Rconj{\alpha} h_{i}\) and write \(h_{i} = x h' \inv{\alpha(x)}\) for some \(x \in H\). Then \[ (x, 1)(h', k_{j})\inv{\varphi(x, 1)} = (h_{i}, k_{j}), \] finishing the proof. The inequality \(R(\varphi) \leq R(\alpha) R(\delta)\) then follows immediately. \end{proof} \begin{defin} Let \(A\) be a group, \(\varphi \in \End(A)\) and \(a \in A\). 
The \emph{\(\varphi\)-stabiliser of \(a\)} is the subgroup \[ \Stab_{\varphi}(a) = \{b \in A \mid a = b a \inv{\varphi(b)}\} \] of \(A\). We also call subgroups of this form \emph{twisted stabilisers}. \end{defin} We continue with the same notation as before. Fix \(j \in \mathcal{J}\). The subgroup \(\Stab_{\delta}(k_{j})\) acts on the right on \(\Reid[\alpha]\) in the following way: \[ \rho_{j}: \Reid[\alpha] \times \Stab_{\delta}(k_{j}) \to \Reid[\alpha]: ([h]_{\alpha}, y) \mapsto [h\beta(y)]_{\alpha}. \] Indeed, if \(h' = xh \inv{\alpha(x)}\), then \[ h' \beta(y) = xh \inv{\alpha(x)} \beta(y) = x h \beta(y) \inv{\alpha(x)}, \] since \([\im \alpha, \im \beta] = 1\). Therefore, the action is independent of the representative. Moreover, if \(y, y' \in \Stab_{\delta}(k_{j})\), then \[ \rho_{i}([h \beta(y')]_{\alpha}, y) = [h \beta(y')\beta(y)]_{\alpha} = [h \beta(y'y)]_{\alpha} = \rho_{i}([h]_{\alpha}, y'y). \] \begin{theorem} \label{theo:sumFormulaReidemeisterNumberDirectProductInvariantFactor} With the notations as above, denote by \(r_{j}\) the number of orbits of \(\rho_{j}\). Then \[ R(\varphi) = \sum_{j \in \mathcal{J}} r_{j}. \] Here, an infinite sum or a sum with one of its terms equal to \(\infty\) is to be interpreted as \(\infty\). \end{theorem} \begin{proof} By \cref{lem:representativesReidphi}, we have to decide when \((h_{i_{1}}, k_{j_{1}})\) and \((h_{i_{2}}, k_{j_{2}})\) are \(\varphi\)-conjugate. If so, then, since \(H\) is \(\varphi\)-invariant, projecting onto \(K\) yields \(k_{j_{1}} \Rconj{\delta} k_{j_{2}}\), hence \(k_{j_{1}} = k_{j_{2}} =: k\). We claim that \[ (h_{i_{1}}, k) \Rconj{\varphi} (h_{i_{2}}, k) \iff \exists (x, y) \in H \times \Stab_{\delta}(k): h_{i_{1}} = x h_{i_{2}} \inv{\alpha(x)} \inv{\beta(y)}. \] Indeed, if \((h_{i_{1}}, k) = (x, y)(h_{i_{2}}, k) \inv{\varphi(x, y)}\) for some \((x, y) \in G\), then \begin{align*} h_{i_{1}} &= x h_{i_{2}}\inv{\alpha(x)} \inv{\beta(y)} \\ k &= y k \inv{\delta(y)}, \end{align*} showing that \(y \in \Stab_{\delta}(k)\). The converse implication is clear. As \([\im \alpha, \im \beta] = 1\), we can rewrite this as \[ (h_{i_{1}}, k) \Rconj{\varphi} (h_{i_{2}}, k) \iff \exists y \in \Stab_{\delta}(k): [h_{i_{1}}]_{\alpha} = [h_{i_{2}}\inv{\beta(y)}]_{\alpha} \] \ie if and only if \(h_{i_{1}}\) and \(h_{i_{2}}\) lie in the same orbit under the action of \(\Stab_{\delta}(k)\). Therefore, the theorem is proven. \end{proof} Of course, the analogous result for \(\varphi\) leaving \(K\) invariant holds as well. We will now use this theorem to determine the Reidemeister spectrum of direct products of the form \(G \times F\), where \(F\) is a finite group and \(G\) is a finitely generated torsion-free residually finite group, generalising a result due to A.\ Fel'shtyn (see \cite[Proposition~3]{Felshtyn00}). The key ingredient will be proving that automorphisms of \(G\) with finite Reidemeister number have trivial twisted stabilisers. The following two results are well-known, see \eg \cite[Corollary~2.5]{FelshtynTroitsky07} and \cite[Lemma~1.1]{GoncalvesWong09}, respectively. \begin{lemma} \label{lem:ReidemeisterNumberInvariantInnerAutomorphisms} Let \(G\) be a group, \(\varphi \in \End(G)\) and \(g \in G\). Denote by \(\tau_{g}\) the inner automorphism corresponding to \(g\), \ie \(\tau_{g}(x) = gx\inv{g}\) for all \(x \in G\). Then \(R(\tau_{g} \circ \varphi) = R(\varphi)\). 
\end{lemma} \begin{lemma} \label{lem:ReidemeisterNumbersExactSequence} Let \(G\) be a group, \(\varphi \in \End(G)\) and \(N\) a \(\varphi\)-invariant normal subgroup of \(G\), \ie \(\varphi(N) \leq N\). Denote by \(\bar{\varphi}\) the induced endomorphism on \(G / N\) and by \(\varphi'\) the induced endomorphism on \(N\). Then the following hold: \begin{enumerate}[(i)] \item \(R(\varphi) \geq R(\bar{\varphi})\). In particular, if \(R(\bar{\varphi}) = \infty\), then also \(R(\varphi) = \infty\). \label{lemitem:ReidemeisterNumberGreaterThanQuotientReidemeisterNumber} \item If \(R(\varphi') = \infty\) and \(\size{\Fix(\bar{\varphi})} < \infty\), then \(R(\varphi) = \infty\). \label{lemitem:InfiniteSubgroupReidemeisterNumberFiniteFixedPointsImpliesInfiniteReidemeisterNumber} \end{enumerate} In particular, if \(N\) is characteristic and \(G / N\) has the \(\Rinf\)-property, then so does \(G\). \end{lemma} We also need the following bound. \begin{lemma}[{\cite[Lemma~3]{Jabara08}}] \label{lem:upperboundFixedPointsFiniteGroupReidemeisterNumber} Let \(G\) be a finite group and \(\varphi \in \Aut(G)\). Put \(R(\varphi) = r\). Then \(\size{\Fix(\varphi)} \leq 2^{2^{r}}\). \end{lemma} The following result can be found implicitly in \cite{Jabara08}. For the reader's convenience, we present it here with a complete proof. \begin{prop} \label{prop:infiniteTwistedStabilisersImpliesInfiniteReidemeisterNumberFGRF} Let \(G\) be a finitely generated residually finite group. Let \(\varphi \in \Aut(G)\). If \(\Stab_{\varphi}(g)\) is infinite for some \(g \in G\), then \(R(\varphi) = \infty\). \end{prop} \begin{proof} Suppose that \(\Stab_{\varphi}(g)\) is infinite. Note that \begin{align*} \Stab_{\varphi}(g) &= \{x \in G \mid x g \inv{\varphi(x)} = g\} \\ &= \{x \in G \mid \varphi(x) = x^{g}\} \\ &= \Fix(\tau_{g} \circ \varphi). \end{align*} As \(\varphi\) and \(\tau_{g} \circ \varphi\) have the same Reidemeister number by \cref{lem:ReidemeisterNumberInvariantInnerAutomorphisms}, it is sufficient to prove the result for \(g = 1\). So, suppose \(\varphi\) has infinitely many fixed points. Fix \(n \geq 1\) and let \(x_{1}, \ldots, x_{n}\) be \(n\) of these fixed points. Since \(G\) is finitely generated and residually finite, we can find a characteristic subgroup \(K\) of \(G\) with finite index such that \(\pi: G \to G / K\) is injective on \(\{x_{1}, \ldots, x_{n}\}\). Let \(\bar{\varphi}\) be the induced automorphism on \(G / K\). Then \(R(\varphi) \geq R(\bar{\varphi})\). Moreover, by \cref{lem:upperboundFixedPointsFiniteGroupReidemeisterNumber}, we know that \[ n = \size{\{\bar{x}_{1}, \ldots, \bar{x}_{n}\}} \leq \size{\Fix(\bar{\varphi})} \leq 2^{2^{R(\bar{\varphi})}}, \] therefore, \[ R(\varphi) \geq R(\bar{\varphi}) \geq \log_{2}(\log_{2}(n)). \] As this holds for all \(n \geq 1\) and \(\log_{2}(\log_{2}(n))\) tends to infinity as \(n\) increases, we must have that \(R(\varphi) = \infty\). \end{proof} \begin{cor} \label{cor:twistedStabilierFGTFResiduallyFinite} Let \(G\) be a torsion-free finitely generated residually finite group. Let \(\varphi \in \Aut(G)\). If \(R(\varphi) < \infty\), then all \(\varphi\)-stabilisers are trivial. \end{cor} \begin{proof} As \(\varphi\)-stabilisers are subgroups, a non-trivial one is necessarily infinite. \end{proof} The condition that \(G\) is finitely generated cannot be dropped. 
In fact, there already exists a non-finitely generated torsion-free residually finite abelian group admitting an automorphism with finite Reidemeister number and non-trivial twisted stabilisers. \begin{example}[{Based on \cite[Proposition~3.6]{DekimpeGoncalves15}}] Consider the direct sum \(A\) of countably many copies of \(\Z\), indexed by the positive integers, \ie \[ A = \bigoplus_{n = 1}^{\infty} \Z, \] and define \(\varphi: A \to A\) as \[ (a_{1}, a_{2}, a_{3}, a_{4}, \ldots) \mapsto (a_{1} + a_{2} + a_{3}, a_{2} + a_{3}, a_{3} + a_{4} + a_{5}, a_{4} + a_{5}, \ldots). \] In other words, the \((2k - 1)\)-th component is given by \(a_{2k - 1} + a_{2k} + a_{2k + 1}\) and the \(2k\)-th by \(a_{2k} + a_{2k + 1}\). This map is an endomorphism of \(A\), and even an automorphism since the map \(\psi: A \to A\) defined as \[ (a_{1}, a_{2}, a_{3}, a_{4}, \ldots) \mapsto (a_{1} - a_{2}, a_{2} - a_{3} + a_{4}, a_{3} - a_{4}, a_{4} - a_{5} + a_{6}, \ldots) \] is the inverse of \(\varphi\). The map \(\varphi\) has non-trivial fixed points, for instance \((1, 0, 0, \ldots)\), but \(R(\varphi) = 1\), since \[ [(0, 0, \ldots)]_{\varphi} = \{a - \varphi(a) \mid a \in A\} = \im(\varphi - \Id) \] and \(\varphi - \Id: A \to A\) is given by \[ (a_{1}, a_{2}, a_{3}, a_{4}, \ldots) \mapsto (a_{2} + a_{3}, a_{3}, a_{4} + a_{5}, a_{5}, \ldots), \] which is clearly surjective \end{example} \begin{theorem} \label{theo:SpecRDirectProductFGTFResiduallyFiniteCharacteristic} Let \(G\) and \(H\) be groups with \(G\) finitely generated, torsion-free residually finite. Suppose that \(H\) is characteristic in \(G \times H\). Then \[ \SpecR(G \times H) = \SpecR(G) \cdot \SpecR(H). \] \end{theorem} \begin{proof} Since \(H\) is characteristic in \(G \times H\), each automorphism of \(G \times H\) is of the form \[ \begin{pmatrix} \alpha & 0 \\ \gamma & \delta \end{pmatrix}, \] and \(\alpha \in \Aut(G)\), \(\delta \in \Aut(H)\) and \(\gamma \in \Hom(G, Z(H))\), by \cref{lem:upperTriangularAutomorphisms}. Now, fix \(\varphi \in \Aut(G \times H)\). We claim that \(R(\varphi) = R(\alpha) R(\delta)\). We know by \cref{lem:ReidemeisterNumbersExactSequence}\eqref{lemitem:ReidemeisterNumberGreaterThanQuotientReidemeisterNumber} that \(R(\varphi) \geq R(\alpha)\). Thus, \(R(\varphi) = \infty\) if \(R(\alpha) = \infty\). In that case, \(R(\varphi) = R(\alpha)R(\delta)\). So suppose that \(R(\alpha) < \infty\). Then, by \cref{cor:twistedStabilierFGTFResiduallyFinite}, we know that \(\Stab_{\alpha}(g) = 1\) for all \(g \in G\). This means that the action of \(\Stab_{\alpha}(g)\) on \(\Reid[\delta]\) is trivial for all \(g \in G\), implying that the number of orbits for each action is equal to \(R(\delta)\). Applying \cref{theo:sumFormulaReidemeisterNumberDirectProductInvariantFactor} then yields \[ R(\varphi) = \sum_{[g]_{\alpha} \in \Reid[\alpha]} R(\delta) = R(\alpha) R(\delta). \] This proves that \(\SpecR(G \times H) \subseteq \SpecR(G) \cdot \SpecR(H)\). The converse inclusion follows directly from \cref{prop:ReidemeisterNumberDirectProductAutomorphismGroups}. \end{proof} \begin{cor} \label{cor:SpecRDirectProductFGTFResiduallyFiniteFinite} Let \(G\) be a finitely generated, torsion-free residually finite group and \(F\) a group which is generated by torsion elements. Then \[ \SpecR(G \times F) = \SpecR(G) \cdot \SpecR(F). \] \end{cor} \begin{proof} Since \(G\) is torsion-free, \(F\) is characteristic in \(G \times F\) as it is the subgroup generated by the torsion elements. 
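Indeed, the torsion elements of \(G \times F\) are exactly the elements \((1, f)\) with \(f \in F\) of finite order, so the subgroup they generate is \(1 \times F\), and this subgroup is preserved by every automorphism of \(G \times F\).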
Therefore, we can apply \cref{theo:SpecRDirectProductFGTFResiduallyFiniteCharacteristic}. \end{proof} Using the fact that finitely generated linear groups are residually finite (see \eg \cite{Malcev65}) and virtually torsion-free if the underlying field has characteristic zero (see \eg \cite{Selberg62}), one can easily generate examples of groups on which the theorem applies. \section{Direct products of centreless groups} In this section we use the matrix description of the endomorphism monoid to describe the automorphism group under certain conditions. The first result is a generalisation of F.\ Johnson's result \cite[Corollary~2.2]{Johnson83}. \begin{defin} Let \(G\) be a group. We say that \(G\) is \emph{directly indecomposable} if \(G \cong H \times K\) for some groups \(H, K\) implies that \(H = 1\) or \(K = 1\). \end{defin} \begin{theorem} \label{theo:generalisationJohnson83} Let \(G_{1}, \ldots, G_{n}\) be non-isomorphic non-trivial, centreless, directly indecomposable groups. Let \(r_{1}, \ldots, r_{n}\) be positive integers. Put \(G = \Times\limits_{i = 1}^{n} G_{i}^{r_{i}}\). Then \[ \Aut(G) = \Times_{i = 1}^{n} \mathopen{}\mathclose\bgroup\originalleft(\Aut(G_{i}) \wr S_{r_{i}}\aftergroup\egroup\originalright) \] \end{theorem} In \cite{Johnson83}, there was the extra condition that there do not exist non-trivial homomorphisms \(\varphi: G_{i} \to G_{j}\) with normal image for \(1 \leq i < j \leq n\), but it is redundant. In fact, the proof we will give is nearly identical to the one F.\ Johnson gave for the case \(n = 1\), see \cite[Theorem~1.1]{Johnson83}. The following is well-known, so we omit the proof. \begin{lemma} \label{lem:equivalenceDirectProductNormalSubgroups} Let \(G\) be a group and let \(N_{1}, \ldots, N_{n}\) be commuting normal subgroups such that \(N_{1} \ldots N_{n} = G\). If \[ \mathopen{}\mathclose\bgroup\originalleft(\prod_{j \ne i} N_{j}\aftergroup\egroup\originalright) \cap N_{i} = 1 \] for all \(i \in \{1, \ldots, n\}\), then \(G \cong N_{1} \times \ldots \times N_{n}\). \end{lemma} To make the notation easier, we first prove the following (equivalent) result: \begin{theorem} \label{theo:equivalentFormulationGeneralisationJohnson} Let \(G_{1}, \ldots, G_{n}\) be non-trivial, centreless, directly indecomposable groups. Put \(G = \Times\limits_{i = 1}^{n} G_{i}\). Under the monoid isomorphism \(\End(G) \cong \M\), \(\Aut(G)\) corresponds to those matrices \((\varphi_{ij})_{ij}\) satisfying the following conditions: \begin{enumerate}[(i)] \item Each row and column contains exactly one non-trivial homomorphism. \item Each non-trivial homomorphism is an isomorphism. \end{enumerate} \end{theorem} \begin{proof} Let \(\varphi = (\varphi_{ij})_{ij} \in \Aut(G)\) and fix \(i \in \{1, \ldots, n\}\). By \cref{lem:automorphismOfDirectProductImpliesNormalImages}, we know that \(G_{i}\) is generated by \(\im \varphi_{i1}\) up to \(\im \varphi_{in}\) and that each of these images is normal in \(G_{i}\). Let \(k \in \{1, \ldots, n\}\) be arbitrary. Suppose that \(g\) is an element of \[ \mathopen{}\mathclose\bgroup\originalleft(\prod_{j \ne k} \im \varphi_{ij}\aftergroup\egroup\originalright) \cap \im \varphi_{ik} \leq G_{i}. \] For \(l \ne k\), we find that \[ [g, \im \varphi_{il}] \subseteq [\im \varphi_{ik}, \im \varphi_{il}] = 1, \] and for \(l = k\), we find that \[ [g, \im \varphi_{ik}] \subseteq \mathopen{}\mathclose\bgroup\originalleft[\prod_{j \ne k} \im \varphi_{ij}, \im \varphi_{ik}\aftergroup\egroup\originalright] = 1. \] Therefore, \(g \in Z(G_{i}) = 1\). 
Thus, we conclude that \(G_{i}\) is isomorphic to the direct product of \(\im \varphi_{i1}\) up to \(\im \varphi_{in}\) by \cref{lem:equivalenceDirectProductNormalSubgroups}. As \(G_{i}\) is directly indecomposable, exactly one of these images is non-trivial. Since \(i\) was arbitrary, we thus find a map \(\sigma: \{1, \ldots, n\} \to \{1, \ldots, n\}\) such that \(\im \varphi_{i j} \ne 1\) if and only if \(j = \sigma(i)\). If \(\sigma\) were not surjective, say, \(m \notin \im \sigma\), we find that \(\im \varphi_{im} = 1\) for all \(i \in \{1, \ldots, n\}\). Recall that \(e_{i}: G_{i} \to G\) and \(\pi_{i}: G \to G_{i}\) were the canonical inclusion and projection, respectively. Then \(e_{m}(G_{m}) \leq \ker (\pi_{i} \circ \varphi)\) for all \(i \in \{1, \ldots, n\}\). We thus find that \[ e_{m}(G_{m}) \leq \bigcap_{i = 1}^{n} \ker(\pi_{i} \circ \varphi) = \bigcap_{i = 1}^{n} \inv{\varphi}(\ker(\pi_{i})) = \inv{\varphi} \mathopen{}\mathclose\bgroup\originalleft(\bigcap_{i = 1}^{n} \ker (\pi_{i}) \aftergroup\egroup\originalright) = 1, \] which contradicts the non-triviality of \(G_{m}\). Thus, \(\sigma\) is surjective, and therefore bijective. We thus find that the matrix representation of \(\varphi\) contains exactly one non-trivial homomorphism on each row and column. As \(\varphi\) is an automorphism, each of these non-trivial homomorphisms must be both injective and surjective, therefore, they must all be isomorphisms. \end{proof} \begin{proof}[Proof of \cref{theo:generalisationJohnson83}] Let \(\varphi \in \Aut(G)\). By \cref{theo:equivalentFormulationGeneralisationJohnson}, the matrix representation \((\varphi_{ij})_{ij}\) contains exactly one non-trivial homomorphism per row and column, and each of those homomorphisms is in fact an isomorphism. Due to the fact that \(G_{i}\) and \(G_{j}\) are not isomorphic for \(i \ne j\), \((\varphi_{ij})_{ij}\) is of the form \[ \begin{pmatrix} A_{1} & 0 & 0 & \ldots & 0 \\ 0 & A_{2} & 0 & \ldots & 0 \\ 0 & 0 & A_{3} & \ldots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \ldots & A_{n}, \end{pmatrix} \] where each \(A_{i}\) is an \((r_{i} \times r_{i})\)-matrix containing exactly one non-trivial homomorphism per row and column, and each of those homomorphisms is an automorphism of \(G_{i}\). These block matrices correspond to automorphisms lying in \(\Aut(G_{i}) \wr S_{r_{i}}\). Therefore, \(\varphi\) lies in \(\Times\limits_{i = 1}^{n} \mathopen{}\mathclose\bgroup\originalleft(\Aut(G_{i}) \wr S_{r_{i}}\aftergroup\egroup\originalright)\). \end{proof} \begin{cor} \label{cor:SpecRProductCentrelessDirectlyIndecomposableGroups} Let \(G_{1}, \ldots, G_{n}\) be non-trivial, non-isomorphic, centreless, directly indecomposable groups. Let \(r_{1}, \ldots, r_{n}\) be positive integers and put \(G = \Times\limits_{i = 1}^{n}G_{i}^{r_{i}}\). Then \[ \SpecR(G) = \prod_{i = 1}^{n} \mathopen{}\mathclose\bgroup\originalleft(\bigcup_{j = 1}^{r_{i}} \SpecR(G_{i})^{(j)}\aftergroup\egroup\originalright). \] In particular, \(G\) has the \(\Rinf\)-property if and only if \(G_{i}\) has the \(\Rinf\)-property for some \(i \in \{1, \ldots, n\}\). \end{cor} \begin{proof} This follows by combining \cref{theo:generalisationJohnson83} with \cref{cor:SpecRDirectProductCharacteristicSubgroups,cor:ReidemeisterNumbersOfAut(G)wrSn}. \end{proof} \begin{remark} S.\ Tertooy proved the same equality for direct products of free nilpotent groups of finite rank in \cite[\S6.5]{Tertooy19}. 
Note that free nilpotent groups have non-trivial centre, so they do not satisfy the conditions of \cref{cor:SpecRProductCentrelessDirectlyIndecomposableGroups}, showing that those conditions are sufficient but not necessary. \end{remark} The next result provides us with sufficient conditions for a direct product to have the \(\Rinf\)-property when one of the factors has it. \begin{theorem} \label{theo:SecondGeneralisationJohnson} Let \(G_{1}, \ldots, G_{n}, H\) be non-trivial groups such that each \(G_{i}\) is centreless and directly indecomposable and such that, for each \(i \in \{1, \ldots, n\}\), \(H\) has no direct factor isomorphic with \(G_{i}\). Let \(G = \Times\limits_{i = 1}^{n} G_{i}\) Then, under the monoid isomorphism \(\End(G \times H) \cong \M\), \(\Aut(G \times H)\) corresponds to \[ \mathopen{}\mathclose\bgroup\originalleft\{ \begin{pmatrix} \alpha & 0 \\ \gamma & \delta \end{pmatrix} \middlebar \begin{array}{ccc}\alpha \in \Aut(G), &\delta \in \Aut(H), \\ \gamma \in \Hom(G, Z(H)) \end{array}\aftergroup\egroup\originalright\}. \] In other words, \(H\) is characteristic in \(G \times H\). \end{theorem} \begin{proof} Let \(\varphi \in \Aut(G \times H)\) and write \[ \varphi = \begin{pmatrix} \varphi_{11} & \ldots & \varphi_{1n} & \beta_{1} \\ \vdots & \ddots & \vdots & \vdots \\ \varphi_{n1} & \ldots & \varphi_{nn} & \beta_{n} \\ \gamma_{1} & \ldots & \gamma_{n} & \delta \end{pmatrix} \] with, for all \(1 \leq i, j \leq n\), \(\varphi_{ij} \in \Hom(G_{j}, G_{i}), \beta_{i} \in \Hom(H, G_{i}), \gamma_{j} \in \Hom(G_{j}, H)\) and \(\delta \in \End(H)\). By a similar argument as in the beginning of \cref{theo:equivalentFormulationGeneralisationJohnson}, we find that each of the first \(n\) rows of \(\varphi\) contains precisely one non-trivial homomorphism. The inverse automorphism \(\inv{\varphi}\) has a similar matrix form as \(\varphi\), hence, also each of the first \(n\) rows of \(\inv{\varphi}\) contains precisely one non-trivial homomorphism, and we denote the corresponding homomorphisms of \(\inv{\varphi}\) by \(\psi_{ij}, \beta_{i}', \gamma_{j}'\) and \(\delta'\). The goal is to prove that \(\beta_{i} = 0\) for all \(1 \leq i \leq n\). First, we make some observations. Suppose that \(\varphi_{ij} \ne 0\) for some \(i, j \in \{1, \ldots, n\}\). Then \(\beta_{i} = 0\) and \(\varphi_{ik} = 0\) for \(k \in \{1, \ldots, n\}\) different from \(j\). Hence \(\Id_{G_{i}} = (\varphi \circ \inv{\varphi})_{ii} = \varphi_{ij} \psi_{ji}\). This implies that \(\psi_{ji} \ne 0\), hence by symmetry, \(\Id_{G_{j}} = (\inv{\varphi} \circ \varphi)_{jj} = \psi_{ji} \varphi_{ij}\). Moreover, this also shows that \(\varphi_{kj} = 0\) for \(k \ne i\). Indeed, if \(\varphi_{kj} \ne 0\), the same arguments as before show that \(\psi_{jk} \ne 0\). This would imply that the \(j\)-th row of \(\inv{\varphi}\) contains two non-trivial homomorphisms, which is a contradiction. Hence, the index \(i\) is unique, \ie the \(j\)-th column of \(\varphi\) contains only \(\varphi_{ij}\) and \(\gamma_{j}\) as (potentially) non-trivial homomorphisms. By symmetry, the \(i\)-th column of \(\inv{\varphi}\) contains only \(\gamma_{i}'\) and \(\psi_{ji}\) as (potentially) non-trivial homomorphisms. Next, suppose that \(\beta_{l}\) is non-trivial for some \(l \in \{1, \ldots, n\}\). Since each of the first \(n\) rows of \(\varphi\) contains at most one non-trivial homomorphism, there is a column of \(\varphi\), say the \(j\)-th one, containing only \(\gamma_{j}\) as (potentially) non-trivial homomorphism. 
So, let \(\J\) be the set of indices of the columns of \(\varphi\) of this form and \(\I\) be the set of indices \(i\) with \(\beta_{i} \ne 0\). Suppose that \(\I\), and thus also \(\J\), is non-empty. Note that, for each \(j \in \J\), \(\gamma_{j}\) is injective, since \[ \Id_{G_{j}} = (\inv{\varphi} \circ \varphi)_{jj} = \beta_{j}' \gamma_{j}. \] From this it follows that \(\im \gamma_{j} \cap \grpgen{\im \gamma_{i} \mid i \in \J \setminus \{j\}} = 1\) for \(j \in \J\). Indeed, suppose that \(h\) lies in the intersection. Since the images of the \(\gamma_{k}\)'s commute pairwise, we can write \[ h = \prod_{i \in \J \setminus \{j\}} \gamma_{i}(g_{i}) = \gamma_{j}(g_{j}) \] for some \(g_{j} \in G_{j}\) and \(g_{i} \in G_{i}\) for \(i \in \J \setminus \{j\}\). For \(g \in G_{j}\), we then find that \[ \gamma_{j}(g) \gamma_{j}(g_{j}) = \gamma_{j}(g) \prod_{i \in \J \setminus \{j\}}\gamma_{i}(g_{i}) = \mathopen{}\mathclose\bgroup\originalleft(\prod_{i \in \J \setminus \{j\}}\gamma_{i}(g_{i})\aftergroup\egroup\originalright) \gamma_{j}(g) = \gamma_{j}(g_{j}) \gamma_{j}(g), \] again by the commuting condition. Injectivity of \(\gamma_{j}\) implies that \(gg_{j} = g_{j}g\), and as this has to hold for all \(g \in G_{j}\), we conclude that \(g_{j} \in Z(G_{j}) = 1\). Thus, \(h = 1\), proving the claim. This shows that \(\grpgen{\im \gamma_{j} \mid j \in \J}\) is isomorphic to \(\Times\limits_{j \in \J} G_{j}\), by \cref{lem:equivalenceDirectProductNormalSubgroups}. We thus have a subgroup of \(H\) isomorphic to \(\Times\limits_{j \in \J} G_{j}\). We now proceed to prove that this subgroup is in fact a direct factor, which will yield a contradiction. First, we claim that \(\delta \gamma'_{i} = 0\) for \(i \in \I\). In order to do so, we compute \({(\varphi \circ \inv{\varphi})_{n+1, i}}\) and obtain \[ 0 = \delta \gamma'_{i} + \sum_{j = 1}^{n} \gamma_{j} \psi_{ji}. \] If \(\psi_{ji}\) were non-trivial, then \(\varphi_{ij}\) would also be non-trivial, which would yield two non-trivial homomorphisms on the \(i\)-th row of \(\varphi\). Hence, \(0 = \delta \gamma_{i}'\), proving the claim. With this equality, we can prove that \(\delta \delta' \delta = \delta\), by noting that \[ \Id_{H} = (\inv{\varphi} \circ \varphi)_{n + 1, n + 1} = \delta' \delta + \sum_{i \in \I} \gamma_{i}' \beta_{i}, \] as \(\I\) contains all indices \(i\) for which \(\beta_{i}\) is non-trivial. Composing with \(\delta\) and using \(\delta \gamma_{i}' = 0\) yields \(\delta = \delta \delta' \delta\) as desired. Next, we prove the following equality for all \(j \in \{1, \ldots, n\}\): \begin{equation} \label{eq:deltadelta'gammam} \delta \delta' \gamma_{j} = \begin{cases} 0 & \mbox{if } j \in \J \\ \gamma_{j} & \mbox{if } j \notin \J \end{cases} \end{equation} Fix \(j \in \{1, \ldots, n\}\). Suppose that \(j \in \J\). Then \(\varphi_{ij} = 0\) for all \(i \in \{1, \ldots, n\}\). Consequently, \(0 = (\inv{\varphi} \circ \varphi)_{n + 1, j} = \delta' \gamma_{j}\), hence \(\delta \delta' \gamma_{j} = 0\) as well. If \(j \notin \J\), then there is a unique \(i \in \{1, \ldots, n\}\) such that \(\varphi_{ij} \ne 0\). Then also \(\psi_{ji} \ne 0\), and this will be the only non-trivial homomorphism in the first \(n\) rows of the \(i\)-th column of \(\inv{\varphi}\). Using this, we find that \[ 0 = (\inv{\varphi} \circ \varphi)_{n + 1, j} = \gamma_{i}' \varphi_{ij} + \delta' \gamma_{j} \] and \[ 0 = (\varphi \circ \inv{\varphi})_{n + 1, i} = \gamma_{j} \psi_{ji} + \delta \gamma_{i}'.
\] Composing the first equality on the left with \(\delta\) yields \begin{align*} 0 &= \delta \gamma_{i}' \varphi_{ij} + \delta \delta' \gamma_{j} \\ &= - \gamma_{j} \psi_{ji} \varphi_{ij} + \delta \delta' \gamma_{j} \\ &= - \gamma_{j} + \delta \delta' \gamma_{j}, \end{align*} where we used that \(\Id_{G_{j}} = \psi_{ji} \varphi_{ij}\). Rearranging terms proves that \(\delta \delta' \gamma_{j} = \gamma_{j}\), finishing the proof of \eqref{eq:deltadelta'gammam}. Finally, we prove that \begin{equation} \label{eq:firstTrivialIntersectionDirectProduct} \grpgen{\{\im \gamma_{j} \mid j \in \J\}} \cap \grpgen{\im \delta, \{\im \gamma_{j} \mid j \notin \J\}} = 1. \end{equation} Let \(h\) be an element of the intersection. Since all the images of the \(\gamma_{i}\)'s and \(\delta\) commute pairwise, we can write \[ h = \prod_{j \in \J} \gamma_{j}(g_{j}) = \delta(h') \prod_{j \notin \J} \gamma_{j}(g_{j}) \] where \(h' \in H\) and \(g_{j} \in G_{j}\) for all indices \(j\). Applying \(\delta \delta'\) to this equality and using \eqref{eq:deltadelta'gammam} yields on the one hand \[ \delta \delta'(h) = \prod_{j \in \J} \delta \delta' \gamma_{j}(g_{j}) = 1, \] and on the other hand, again by using \eqref{eq:deltadelta'gammam} and \(\delta = \delta \delta' \delta\), \[ \delta \delta'(h) = \delta \delta' \delta (h') \prod_{j \notin \J} \delta \delta' \gamma_{j}(g_{j}) = \delta(h')\prod_{j \notin \J} \gamma_{j}(g_{j}) = h. \] Hence, \(h = 1\). Now, both subgroups in \eqref{eq:firstTrivialIntersectionDirectProduct} are normal, being products of normal subgroups by \cref{lem:automorphismOfDirectProductImpliesNormalImages}; they generate \(H\), also by \cref{lem:automorphismOfDirectProductImpliesNormalImages}; and they intersect trivially. Therefore, we conclude that \(H\) equals the internal direct product of the groups \(\im \gamma_{j}\) with \(j \in \J\) and the group \(\grpgen{\im \delta, \{\im \gamma_{j} \mid j \notin \J\}}\), \ie \[ H \cong \mathopen{}\mathclose\bgroup\originalleft(\Times_{j \in \J} G_{j}\aftergroup\egroup\originalright) \times N \] for some normal subgroup \(N\) of \(H\). As \(|\J| \geq 1\), we obtain a contradiction, since we assume that \(H\) has no direct factors isomorphic with \(G_{j}\) for \(j \in \{1, \ldots, n\}\). Therefore, \(\J\) is empty, hence also \(\I\) is empty, meaning that all \(\beta_{i}\)'s are trivial. Thus, the matrix form of \(\varphi\) is \[ \varphi = \begin{pmatrix} \varphi_{11} & \ldots & \varphi_{1n} & 0 \\ \vdots & \ddots & \vdots & \vdots \\ \varphi_{n1} & \ldots & \varphi_{nn} & 0 \\ \gamma_{1} & \ldots & \gamma_{n} & \delta \end{pmatrix}. \] Rewrite this as \[ \varphi = \begin{pmatrix} \alpha & 0 \\ \gamma & \delta \end{pmatrix} \] with \(\alpha \in \End(G), \gamma \in \Hom(G, H), \delta \in \End(H)\). As \(\varphi\) was arbitrary, each automorphism of \(G \times H\) is of this form. \cref{lem:upperTriangularAutomorphisms} then implies that \(\alpha \in \Aut(G)\), \(\delta \in \Aut(H)\) and \(\gamma \in \Hom(G, Z(H))\). \end{proof} \begin{remark} By \cref{theo:equivalentFormulationGeneralisationJohnson}, we also know what \(\Aut(G)\) looks like. In fact, we could have merged \cref{theo:equivalentFormulationGeneralisationJohnson,theo:SecondGeneralisationJohnson} into one theorem, since the start of the proof of \cref{theo:SecondGeneralisationJohnson} is the same as that of \cref{theo:equivalentFormulationGeneralisationJohnson}.
However, for the sake of clarity, we have split the results: \cref{theo:equivalentFormulationGeneralisationJohnson} deals with the internal structure of \(\Aut(G)\) and \cref{theo:SecondGeneralisationJohnson} determines the influence of \(G\) on an additional factor \(H\) on which we impose less strict conditions. \end{remark} \begin{cor} Let \(G_{1}, \ldots, G_{n}, H\) and \(G\) be as in the previous theorem. \begin{enumerate}[(i)] \item If \(G_{i} \in \Rinf\) for some \(i \in \{1, \ldots, n\}\), then \(G \times H \in \Rinf\) as well. \item If \(G_{i}\) is finitely generated residually finite for each \(i \in \{1, \ldots n\}\) and \(H \in \Rinf\), then \(G \times H \in \Rinf\) as well. \item If \(G_{i}\) is finitely generated torsion-free residually finite for each \(i \in \{1, \ldots n\}\), then \[ \SpecR(G \times H) = \SpecR(G) \cdot \SpecR(H). \] \end{enumerate} \end{cor} \begin{proof} By the previous theorem, \(H\) is characteristic in \(G \times H\). If \(G_{i} \in \Rinf\) for some \(i \in \{1, \ldots, n\}\), then so does \(G\) by \cref{cor:SpecRProductCentrelessDirectlyIndecomposableGroups} and therefore \(G \times H\) has the \(\Rinf\)-property as well by \cref{lem:ReidemeisterNumbersExactSequence}, since \(G \cong \frac{G \times H}{H}\). Now, suppose that \(H \in \Rinf\) and that each \(G_{i}\) is finitely generated residually finite. Then also \(G\) is finitely generated residually finite. For an automorphism \(\varphi\) of \(G \times H\), denote by \(\varphi'\) the induced automorphism on \(H\) and by \(\bar{\varphi}\) the induced automorphism on \(G\). If \(R(\bar{\varphi}) = \infty\), then \cref{lem:ReidemeisterNumbersExactSequence}\eqref{lemitem:ReidemeisterNumberGreaterThanQuotientReidemeisterNumber} yields \(R(\varphi) = \infty\). If \(R(\bar{\varphi}) < \infty\), then \cref{prop:infiniteTwistedStabilisersImpliesInfiniteReidemeisterNumberFGRF} implies that \(\Fix(\bar{\varphi})\) is finite. Since \(H \in \Rinf\), we know that \(R(\varphi') = \infty\) and then \cref{lem:ReidemeisterNumbersExactSequence}\eqref{lemitem:InfiniteSubgroupReidemeisterNumberFiniteFixedPointsImpliesInfiniteReidemeisterNumber} yields that also \(R(\varphi) = \infty\). The last item follows from \cref{theo:SpecRDirectProductFGTFResiduallyFiniteCharacteristic}. \end{proof} \section{Direct products of virtually free groups} As a more concrete application of the obtained results, we investigate the \(\Rinf\)-property for direct products of finitely generated virtually free groups. \begin{defin} Let \(G\) be a group. We say that \(G\) is \emph{non-elementary} virtually free if there is a non-abelian free subgroup \(F\) of finite index in \(G\). \end{defin} \begin{prop} \label{prop:FGNonelementaryVirtuallyFreeRinf} Let \(G\) be a finitely generated non-elementary virtually free group. Then \(G\) has the \(\Rinf\)-property. \end{prop} \begin{proof} Since \(G\) is finitely generated and contains a non-abelian free subgroup of finite index, it is Gromov hyperbolic. The result then follows from \cite[Theorem~3]{Felshtyn01}. \end{proof} \begin{lemma} Let \(G\) be a non-elementary virtually free group. Then \(Z(G)\) is finite. \end{lemma} \begin{proof} Let \(F\) be a finite index non-abelian free subgroup of \(G\). If \(Z(G)\) is infinite, then \(F \cap Z(G)\) is non-trivial. However, \(F \cap Z(G)\) lies in the center of \(F\), which is trivial, since \(F\) is non-abelian free. Therefore, \(Z(G)\) is finite. 
\end{proof} \begin{lemma} \label{lem:maximalFiniteNormalSubgroupVirtuallyFree} Let \(G\) be a virtually free group. Then \(G\) has a unique maximal finite normal subgroup \(N_{0}\), which is characteristic. Moreover, \(G / N_{0}\) has no non-trivial finite normal subgroups. In particular, if \(G\) is non-elementary virtually free, then \(G / N_{0}\) is centreless. \end{lemma} \begin{proof} Let \(F\) be a finite index free subgroup of \(G\). We may assume it is normal, by intersecting all conjugates of \(F\). Denote by \(\pi: G \to G / F\) the canonical projection. Let \(E\) be a finite subgroup of \(G\). Then \(E \cap F\) is trivial, since \(E\) is torsion and \(F\) is torsion-free. Therefore, \(\size{E} = \size{\pi(E)} \leq [G : F]\). Thus, the size of finite subgroups of \(G\) is bounded by \([G : F]\). Let \(N_{0}\) be a finite normal subgroup of maximal size. Let \(N\) be an arbitrary finite normal subgroup of \(G\). Then \(NN_{0}\) is a normal finite subgroup of \(G\). Therefore, \(\size{NN_{0}} \leq \size{N_{0}}\). Also, \(N_{0} \leq N N_{0}\), thus \(N_{0} = N N_{0}\), implying that \(N \leq N_{0}\). Thus, \(N_{0}\) is the unique maximal finite normal subgroup. Since automorphisms preserve order and normality, \(N_{0}\) is characteristic. Now, if \(N / N_{0}\) is a finite normal subgroup of \(G / N_{0}\), then \(N\) is a finite normal subgroup of \(G\) containing \(N_{0}\). Therefore, \(N = N_{0}\). For the centre of \(G / N_{0}\), note that the (non-abelian) free subgroup \(F\) of \(G\) projects injectively down to \(G / N_{0}\), as \(F \cap N_{0}\) is trivial. Therefore, \(G / N_{0}\) is non-elementary virtually free, hence has finite centre. Since the centre is a finite normal subgroup, we conclude that \(Z(G / N_{0}) = 1\). \end{proof} \begin{lemma} \label{lem:FiniteCharacteristicSubgroupVirtuallyFree} Let \(G_{1}, \ldots, G_{n}\) be non-elementary virtually free groups. Let \(N_{1}, \ldots, N_{n}\) be their respective maximal normal finite subgroups. Then \(N := \Times\limits_{i = 1}^{n}N_{i}\) is characteristic in \(G := \Times\limits_{i = 1}^{n} G_{i}\). \end{lemma} \begin{proof} Let \(\varphi \in \Aut(G)\) and \(g = (g_{1}, \ldots, g_{n}) \in \varphi(N)\). Let \(\pi_{j}: G \to G_{j}\) be the projection. Then \(\pi_{j}(\varphi(N))\) is a finite normal subgroup of \(G_{j}\), therefore \(g_{j} \in \pi_{j}(\varphi(N)) \leq N_{j}\). This holds for all \(j \in \{1, \ldots, n\}\), implying that \(g \in N\). \end{proof} \begin{lemma} \label{lem:VirtuallyFreeGroupsDirectlyIndecomposable} Let \(G\) be a virtually free group with no non-trivial finite normal subgroups. Then \(G\) is directly indecomposable. \end{lemma} \begin{proof} Suppose that \(G \cong H \times K\) for some non-trivial subgroups \(H\) and \(K\) of \(G\). Since \(H\) and \(K\) must be normal in \(G\) and \(G\) has no non-trivial finite normal subgroups, both are infinite. Let \(F\) be a free subgroup of finite index in \(G\). Then \(H \cap F\) and \(K \cap F\) are finite index subgroups of \(H\) and \(K\) as well, hence they are infinite. Therefore, \(F\) contains the subgroup \((H \cap F) \times (K \cap F)\). This is a contradiction, however, since \((H \cap F) \times (K \cap F)\) must be a free group (being a subgroup of \(F\)), but free groups are directly indecomposable (see \eg \cite[Observation~p.~177]{LyndonSchupp77}). \end{proof} \begin{theorem} Let \(G_{1}, \ldots, G_{n}\) be finitely generated non-elementary virtually free groups. Then \( G := \Times\limits_{i = 1}^{n} G_{i} \) has the \(\Rinf\)-property. \end{theorem} \begin{proof} Let \(N\) be the characteristic subgroup of \(G\) provided by \cref{lem:FiniteCharacteristicSubgroupVirtuallyFree}.
By \cref{lem:ReidemeisterNumbersExactSequence}, it is sufficient to prove that \(G / N\) has the \(\Rinf\)-property. Therefore, by \cref{lem:maximalFiniteNormalSubgroupVirtuallyFree}, we may assume that each \(G_{i}\) is centreless and has no non-trivial finite normal subgroups. \cref{lem:VirtuallyFreeGroupsDirectlyIndecomposable} then implies that each \(G_{i}\) is directly indecomposable. Since each \(G_{i}\) has the \(\Rinf\)-property by \cref{prop:FGNonelementaryVirtuallyFreeRinf}, \cref{cor:SpecRProductCentrelessDirectlyIndecomposableGroups} implies that \(G\) has it as well. \end{proof} \end{document}
\begin{document} \begin{abstract} We propose a generalisation of the congruence subgroup problem for groups acting on rooted trees. Instead of only comparing the profinite completion to that given by level stabilizers, we also compare pro-$\mathcal{C}$ completions of the group, where $\mathcal{C}$ is a pseudo-variety of finite groups. A group acting on a rooted, locally finite tree has the $\mathcal{C}$-congruence subgroup property ($\mathcal{C}$-CSP) if its pro-$\mathcal{C}$ completion coincides with the completion with respect to level stabilizers. We give a sufficient condition for a weakly regular branch group to have the $\mathcal{C}$-CSP. In the case where $\mathcal{C}$ is also closed under extensions (for instance the class of all finite $p$-groups for some prime $p$), our sufficient condition is also necessary. We apply the criterion to show that the Basilica group and the GGS-groups with constant defining vector (odd prime relatives of the Basilica group) have the $p$-CSP. \end{abstract} \title{Pro-$\mathcal{C} \section{Introduction} Groups of rooted tree automorphisms have been studied intensively for the past few decades. One of the driving factors for this was the appearance in the 1980s of examples of groups with properties hitherto thought of as exotic (intermediate word growth, finitely generated infinite torsion, amenable but not elementary amenable, etc.). The theory of groups acting on rooted trees, and (weakly) branch groups in particular, has come a long way since the early days in which it just seemed a collection of curious examples and is now an important part of group theory, with connections to other areas of mathematics (see \cite{bartholdi-grigorchuk-sunic:branch,grigorchuk:unsolved,nekrashevych:self-similar}). The congruence subgroup problem (or property), first studied in the context of arithmetic groups, and $\mathrm{SL}_n(\mathbb{Z})$ in particular (\cite{bass-lazard-serre:csp}), has been adapted and generalised to several other natural contexts. The classical version of this problem asks whether every finite index subgroup of $\mathrm{SL}_n(\mathbb{Z})$ contains the kernel of the map $\mathrm{SL}_n(\mathbb{Z})\rightarrow \mathrm{SL}_n(\mathbb{Z}/m\mathbb{Z})$ for some $m\in \mathbb{N}$, the filtration consisting of these kernels being an obvious one to consider when studying finite quotients of $\mathrm{SL}_n(\mathbb{Z})$. One of the most natural generalisations of this problem is to the context of groups acting on rooted, locally finite, infinite trees (henceforth ``rooted trees''), as every residually finite group acts faithfully on some such tree. The \emph{congruence subgroup problem} then asks whether every finite index subgroup contains some level stabilizer. This can be rephrased in terms of profinite completions as follows. For a group $G$ acting faithfully on a rooted tree, taking the level stabilizers $\{\st_G(n) \mid n\geq0\}$ as a neighbourhood basis for the identity gives a topology on $G$ -- the \emph{congruence topology} -- and the completion $\overline{G}$ of $G$ with respect to this topology is a profinite group called the \emph{congruence completion} of $G$. As $G$ acts faithfully on the tree, we have $\bigcap_n \st_G(n)=1$, so $G$ embeds in $\overline{G}$. A fortiori, $G$ also embeds in its profinite completion $\widehat{G}$ which maps onto $\overline{G}$. The congruence subgroup problem asks whether the map $\widehat{G}\to \overline{G}$ is an isomorphism.
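Note that this map is always surjective, since each quotient $G/\st_G(n)$ is a finite quotient of $G$; the question is therefore whether it is injective, i.e., whether every finite index subgroup of $G$ is open in the congruence topology.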
If the answer is positive, then $G$ has the \emph{congruence subgroup property}. The congruence subgroup problem for groups acting on rooted trees has so far only really been considered for branch groups (see Section \ref{section:definitions} for the definition). It is known that a number of ``canonical'' examples have the congruence subgroup property (Grigorchuk group, Gupta--Sidki groups). The first examples of branch groups without this property were tailor-made in \cite{pervova:csp}. The problem was considered systematically for the first time in \cite{bartholdi:csp}, where the authors also show that the Hanoi towers group (see \cite{grigorchuk-sunic:hanoi}) does not have the congruence subgroup property. We propose to study a generalisation of this problem in two natural directions simultaneously. Firstly, we consider weakly branch, but not necessarily branch groups. Secondly, we allow other completions. For a class $\mathcal{C}$ of finite groups, the pro-$\mathcal{C}$ completion $\widehat{G}_{\mathcal{C}}$ of a group $G$ is the inverse limit of all quotients of $G$ that lie in $\mathcal{C}$. The congruence subgroup property can now be modified to the context of pro-$\mathcal{C}$ completions, where it is sometimes more natural because all quotients by level stabilizers lie in some class to which not all finite quotients of $G$ belong. Consider a group $G\leq \mathcal{A}ut T$ and a class $\mathcal{C}$ of finite groups. The weakest possible requirement on $\mathcal{C}$ is that it be a formation, but for our purposes $\mathcal{C}$ should also be closed under taking subgroups, i.e., a pseudo-variety. Then $G$ satisfies the \emph{$\mathcal{C}$-congruence subgroup property}, or \emph{$\mathcal{C}$-CSP} for short, if every quotient of $G$ lying in $\mathcal{C}$ is a quotient of some $G/\st_G(n)$. In other words, the congruence completion $\overline{G}$ maps onto the pro-$\mathcal{C}$ completion $\widehat{G}_{\mathcal{C}}$. If all quotients $G/\st_G(n)$ happen to be in $\mathcal{C}$, then $G$ has the $\mathcal{C}$-CSP if and only if $\overline{G}$ is isomorphic to $\widehat{G}_{\mathcal{C}}$. Our main result is a sufficient condition for a weakly regular branch group to have the $\mathcal{C}$-CSP. Let $\psi:\mathcal{A}ut T\rightarrow \mathcal{A}ut T\wr \Sym(d)$ be the isomorphism induced by the natural identification of the $d$-regular rooted tree $T$ with any of its subtrees at distance 1 from the root. The remaining notation and terms used in the next theorem are explained in Section \ref{section:definitions}. \begin{theorem}\label{thm:csp} Let $G\leq \mathcal{A}ut T$ be a weakly regular branch group over a subgroup $R$ and let $\mathcal{C}$ be a pseudo-variety of finite groups. Suppose that there exists $H\unlhd G$ such that $R\geq H\geq R'\geq L$ where $L:=\psi^{-1}(H\times \overset{d}{\dots}\times H)$. If $G$ has the $\mathcal{C}$-CSP modulo $H$ and $H$ has the $\mathcal{C}$-CSP modulo $L$, then $G$ has the $\mathcal{C}$-CSP. \end{theorem} If $\mathcal{C}$ is extension-closed, then this condition is also necessary. We then apply the criterion to some examples of weakly regular branch groups: the Basilica group acting on the binary tree, and an analogue of it acting on the $p$-regular tree for $p$ an odd prime, the GGS-group with constant defining vector. This last group was studied in \cite{bartholdi-grigorchuk:parabolic} for $p=3$ and for general $p$ in \cite{alcober-zugadi:hausdorff,alcober-garrido-uria:GGS}.
For the appropriate $p$, each of these groups is contained in a Sylow pro-$p$ subgroup of $\mathcal{A}ut T$ consisting of elements that permute vertices according to a fixed cyclic permutation of order $p$ (when $p=2$, this is already the whole of $\mathcal{A}ut T$). In particular, all quotients by level stabilizers are $p$-groups, being subgroups of an iterated wreath product $C_p\wr \dots\wr C_p$. None of these groups have the CSP for the simple reason that they virtually map onto $\mathbb{Z}$ and therefore have quotients of arbitrary order. (This is actually the same reason that many lattices in rank 1 Lie groups fail to have the CSP.) However, according to our criterion, they do have the $\mathcal{C}$-CSP when $\mathcal{C}$ is the pseudo-variety of all finite $p$-groups. This implies that, if $G$ denotes any of these groups, the kernel of $\widehat{G} \rightarrow \overline{G}$ is the inverse limit of all finite quotients of $G$ whose order is coprime to $p$. By contrast, the examples constructed by Pervova \cite{pervova:csp} still fail to have the $\mathcal{C}$-CSP as the derived subgroup, of $p$-power index, does not contain any level stabilizer. It is worth mentioning that, even though we only treat the pseudo-variety of finite $p$-groups in our examples, the criterion is valid for any pseudo-variety $\mathcal{C}$ of finite groups. Therefore it is interesting to consider weakly branch groups whose quotients by level stabilizers all lie in other pseudo-varieties of finite groups such as finite nilpotent groups ($\mathcal{C}_n$), or finite solvable groups ($\mathcal{C}_s$). The case of the Hanoi towers group $H$ is particularly interesting because, although it acts on the ternary tree, the quotients by level stabilizers are not 3-groups; they are only solvable. Despite this, and the fact that $H$ is just non-solvable (it is not solvable but all of its proper quotients are), it does not have the $\mathcal{C}_s$-CSP, because the derived subgroup $H'$ does not contain any level stabilizer. It would be interesting to see more constructions of weakly regular branch groups with ``intermediate'' CSPs. Another line of investigation worth pursuing involves calculating the kernels of the various maps between all these possible completions of a weakly branch group. In \cite{bartholdi:csp}, the authors give a general method for calculating the kernel of the map $\widehat{G}\to \overline{G}$ for a branch group $G$. Unfortunately, this method does not carry through to weakly branch but not branch groups, because it really makes use of the fact that the rigid stabilizers have finite index and that the completion of $G$ with respect to this filtration lies between $\widehat{G}$ and $\overline{G}$. It is therefore desirable to find alternative methods for this wider setting and it seems plausible that using the various pro-$\mathcal{C}$ completions as ``stepping stones'' will help with this problem. \subsection*{Acknowledgements.} We are grateful to B. Klopsch for suggesting the problem and for valuable discussions, and to G. A. Fern\'{a}ndez Alcober for suggesting several improvements. D. Francoeur, B. Klopsch and H. Sasse pointed out an inaccuracy in \cite{grigorchuk-zuk:torsion-free} that affected our calculations (but not the main result) for the Basilica group in a previous version. \section{Definitions and preliminaries}\label{section:definitions} \subsection*{Pseudo-varieties of groups} \begin{definition} Let $\mathcal{C}$ be a class of finite groups.
Say that $\mathcal{C}$ is a \emph{pseudo-variety} of finite groups if the following properties are satisfied: \begin{itemize} \item[$(\mathcal{C}_1)$] it is closed under taking subgroups, that is, if $G\in\mathcal{C}$ and $H\leq G$ then $H\in\mathcal{C}$, \item[$(\mathcal{C}_2)$] it is closed under taking quotients, that is, if $G\in\mathcal{C}$ and $N\unlhd G$ then $G/N\in\mathcal{C}$, \item[$(\mathcal{C}_3)$] it is closed under taking finite direct products, that is, if $G_1,\dots,G_k\in\mathcal{C}$ for $k\in \mathbb{N}$ then $\prod_{i=1}^k G_i\in\mathcal{C}.$ \end{itemize} If $\mathcal{C}$ is also closed under taking extensions of groups in $\mathcal{C}$, it is an \emph{extension-closed pseudo-variety.} \end{definition} To simplify notation, $N\unlhd_{\mathcal{C}}G$ will denote that $N\unlhd G$ and $G/N\in\mathcal{C}$. The following observations are straightforward from the above definition and will be used in future without reference. \begin{lemma}\label{variety_properties} Let $G$ be a group and $\mathcal{C}$ a pseudo-variety of finite groups. \begin{enumerate} \item If $N_1,N_2\unlhd_{\mathcal{C}} G$ then $N_1\cap N_2\unlhd_{\mathcal{C}} G$. \item If $N\unlhd_{\mathcal{C}} G$ and $N\leq K \unlhd G$ then $K\unlhd_{\mathcal{C}} G$. \item If $N\unlhd_{\mathcal{C}} G$ and $K\leq G$ then $N\cap K\unlhd_{\mathcal{C}} K$. \item If $\alpha:G_1\longrightarrow G_2$ is a homomorphism and $N_1\unlhd_{\mathcal{C}} G_1$ then $\alpha(N_1)\unlhd_{\mathcal{C}} \alpha(G_1).$ \end{enumerate} \end{lemma} \subsection*{Branch and weakly branch groups} A \emph{level homogeneous} rooted tree is one where all vertices at a given distance from the root have the same finite valency. A faithful action of a group $G$ on such a tree is a \emph{weakly branched action} if it is transitive on each level of the tree and if for each vertex $v$ there is a non-trivial element of $G$ whose support is contained in the subtree rooted at $v$. The set of all elements of $G$ which are only supported on the subtree rooted at $v$ is a subgroup of $G$, the \emph{rigid stabilizer} $\rst_G(v)$ of $v$. If $v, u$ are two distinct vertices of the same level, $\rst_G(u), \rst_G(v)$ commute. The subgroup $\rst_G(n):=\prod \{\rst_G(v)\mid v \text{ of level } n\}$ is called the \emph{$n$th level rigid stabilizer}. A weakly branched action is a \emph{branched action} if $\rst_G(n)$ is of finite index in $G$ for every $n\in \mathbb{N}$. A group is \emph{(weakly) branch} if it has a faithful (weakly) branched action on some level homogeneous rooted tree. Let $T$ denote the $d$-regular infinite rooted tree (all vertices have valency $d+1$, except the root, which has $d$), for an integer $d\geq 2$. Since $T$ is regular, the subtree rooted at any vertex may be identified with $T$. Under this identification, we have an isomorphism $\psi:\mathcal{A}ut T\rightarrow \mathcal{A}ut T\wr \Sym(d)$ where $$\psi(\st(1))=\mathcal{A}ut T\times \overset{d}{\dots}\times \mathcal{A}ut T.$$ Inductively, we also have $\psi_n:\mathcal{A}ut T\rightarrow \mathcal{A}ut T\wr \Sym(d)\wr \overset{n}{\dots} \wr \Sym(d)$ and $$\psi_n(\st(n))=\mathcal{A}ut T\times \overset{d^n}{\dots}\times \mathcal{A}ut T$$ for each $n\in \mathbb{N}$. A group $G\leq \mathcal{A}ut T$ is \emph{weakly regular branch} over a (non-trivial) subgroup $K$ if $$\psi(K)\geq K\times \overset{d}{\dots}\times K$$ and $G$ acts transitively on all levels of $T$.
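A degenerate but instructive example, included here purely as an illustration of the notation: the full group $\mathcal{A}ut T$ is weakly regular branch over $K=\mathcal{A}ut T$ itself, since $\psi(\mathcal{A}ut T)=\mathcal{A}ut T\wr\Sym(d)\geq \mathcal{A}ut T\times \overset{d}{\dots}\times \mathcal{A}ut T$ and $\mathcal{A}ut T$ acts transitively on every level. Moreover, the formula for $\psi_n$ shows that $\mathcal{A}ut T/\st(n)$ is the $n$-fold iterated wreath product $\Sym(d)\wr \overset{n}{\dots}\wr \Sym(d)$; for the binary tree ($d=2$) this is $C_2\wr \overset{n}{\dots}\wr C_2$, a $2$-group of order $$\left|\mathcal{A}ut T/\st(n)\right|=2^{2^n-1},$$ the prototype of the finite $p$-group quotients by level stabilizers mentioned in the introduction.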
The containment $\psi(K)\geq K\times \overset{d}{\dots}\times K$ implies that $\psi(\rst_G(1))\geq K\times \overset{d}{\dots}\times K$ and, inductively, that $\psi_n(\rst_G(n))\geq K\times \overset{d^n}{\dots}\times K$ for each $n\in \mathbb{N}$. In particular, each $\rst_G(v)$ contains a copy of $K$ and so $G$ is a weakly branch group. For convenience, let us record a fundamental lemma that can be extracted from the proof of \cite[Theorem 4]{grigorchuk:just-infinite}. \begin{lemma}\label{lemma:rist'} Let $T$ be a level homogeneous rooted tree and $G\leq \mathcal{A}ut T$ act transitively on every level of $T$. For every non-trivial normal subgroup $N$ of $G$ there exists $n$ such that $N\geq \rst_G(n)'$. \end{lemma} \begin{proof} Let $g\in N$ be a non-trivial element and choose some vertex $v$ of $T$ which is moved by $g$. Let $x, y\in \rst_G(v)$. Since $N$ is normal, it contains $[[g,x],y]$, which equals $[x,y]$, as $y^g$ commutes with $x$ and $y$. This shows that $N\geq \rst_G(v)'$, and the result follows using the fact that $N$ is normal and that all rigid stabilizers of the same level as $v$ are conjugate, because $G$ acts transitively. \end{proof} \subsection*{The $\mathcal{C}$-congruence subgroup property} Let $G\leq\mathcal{A}ut T$ and let $\mathcal{C}$ be a pseudo-variety of finite groups. \begin{definition} A group $G\leq \mathcal{A}ut T$ has the \emph{$\mathcal{C}$-congruence subgroup property} (abbreviated to \emph{$\mathcal{C}$-CSP}) if every $N \trianglelefteq G$ satisfying $G/N\in\mathcal{C}$ contains some level stabilizer in $G$. $G$ has the \emph{$\mathcal{C}$-CSP modulo $M\unlhd G$} if every normal subgroup $N \trianglelefteq G$ satisfying $G/N\in\mathcal{C}$ and $M\leq N$ also contains some level stabilizer in $G$. \end{definition} \subsection*{Independence of the weakly branch action} A weakly branch group $G$ may have several different faithful weakly branched actions. However, they are all related to each other. It was shown in \cite{garrido:thesis} (see also \cite{garrido:csp}) that for any two weakly branch actions $\sigma:G\rightarrow \mathcal{A}ut T_{\sigma}$, $\rho:G\rightarrow \mathcal{A}ut T_{\rho}$ of a group $G$ the sets of respective level stabilizers are cofinal in each other. That is, for every $n\in \mathbb{N}$ there exists $m\in\mathbb{N}$ such that $\st_{\sigma}(n)\geq \st_{\rho}(m)$, and vice-versa. This means that both filtrations define the same topology on $G$ when taken as neighbourhood bases of the identity. Thus, having the $\mathcal{C}$-CSP is independent of the weakly branch action of the group. The examples we consider in this paper are not only subgroups of $\mathcal{A}ut T$ where $T$ is the rooted $p$-adic tree, but of a Sylow pro-$p$-subgroup $\mathcal{A}$ of $\mathcal{A}ut T$, isomorphic to the infinite iterated wreath product of cyclic groups of order $p$. The above-mentioned results imply that if $\sigma:G\hookrightarrow \mathcal{A}$ is a weakly branched action then any other weakly branched action $\rho:G \hookrightarrow \mathcal{A}ut T$, on, a priori, some arbitrary level-homogeneous rooted tree $T$, must actually have image in (a conjugate of) $\mathcal{A}$. \section{A criterion for a weakly regular branch group to have the $\mathcal{C}$-CSP} We start with a simple but very useful result that will be used repeatedly. \begin{lemma}\label{lem:csp_transitivity} Let $G\leq \mathcal{A}ut T$ and $N\unlhd M\unlhd G$. If $G$ has the $\mathcal{C}$-CSP modulo $M$ and $M$ has the $\mathcal{C}$-CSP modulo $N$ then $G$ has the $\mathcal{C}$-CSP modulo $N$.
\end{lemma} \begin{proof} First note that (iii) of Lemma \ref{variety_properties} ensures that $\st_G(n)\cap M=\st_M(n)\unlhd_{\mathcal{C}} M$ for every $n\in\mathbb{N}$, so that it makes sense for $M$ to have the $\mathcal{C}$-CSP. Now let $H\unlhd_{\mathcal{C}} G$ be such that $H\geq N$. We have to prove that $H\geq\st_G(n)$ for some $n\in\mathbb{N}$. Since $M$ has the $\mathcal{C}$-CSP modulo $N$ and since $H\cap M\unlhd_{\mathcal{C}} M$, there is some $m\in\mathbb{N}$ such that $\st_M(m)\leq H\cap M$. Now since $H,\st_G(m)\unlhd_{\mathcal{C}} G$, we have $\st_G(m)\cap H\unlhd_{\mathcal{C}} G$ and $(\st_G(m)\cap H)M\unlhd_{\mathcal{C}} G$. Thus there is some $l\in\mathbb{N}$ such that $\st_G(l)\leq (\st_G(m)\cap H)M$. Taking $n:=\max \{m,l\}$, we have $$\begin{aligned} \st_G(n) &= \st_G(l)\cap \st_G(m) \leq (\st_G(m)\cap H)M \cap \st_G(m)\\ &=(\st_G(m)\cap H)(M\cap \st_G(m))\\ &\leq (\st_G(m)\cap H)(H\cap M)\leq H, \end{aligned}$$ where the second equality follows by the modular law. \end{proof} Let us also record another extension property. \begin{lemma}\label{lem:csp-subnormal} Let $\mathcal{C}$ be an extension-closed pseudo-variety of finite groups (for instance, that of all finite $p$-groups). Let $G\leq\mathcal{A}ut T$ be a group with normal subgroups $M\leq H$ such that $H\unlhd_{\mathcal{C}} G$. If $G$ has the $\mathcal{C}$-CSP modulo $M$, then so does $H$. \end{lemma} \begin{proof} Consider $K\unlhd_{\mathcal{C}} H$ with $K\geq M$. Then each of the finitely many conjugates $K_i$ of $K$ by elements of $G$ also satisfies $M\leq K_i\unlhd_{\mathcal{C}} H$, and therefore so does their intersection $N$, the normal core of $K$ in $G$. Since $\mathcal{C}$ is closed under taking extensions, $N\unlhd_{\mathcal{C}} G$ and therefore $N$ contains some level stabilizer of $G$. \end{proof} { \renewcommand{\thetheorem}{\ref{thm:csp}} \begin{theorem} Let $G\leq \mathcal{A}ut T$ be a weakly regular branch group over a subgroup $R$ and let $\mathcal{C}$ be a pseudo-variety of finite groups. Suppose that there exists $H\unlhd G$ such that $R\geq H\geq R'\geq L$ where $L:=\psi^{-1}(H\times \overset{d}{\dots}\times H)$. If $G$ has the $\mathcal{C}$-CSP modulo $H$ and $H$ has the $\mathcal{C}$-CSP modulo $L$, then $G$ has the $\mathcal{C}$-CSP. \end{theorem} \addtocounter{theorem}{-1} } \begin{proof} Put $L_0:=H$, $L_1:=L=\psi^{-1}(H\times \overset{d}{\dots}\times H)\leq R'$ and $$L_n:=\psi_{n}^{-1}(H\times\overset{d^{n}}{\dots}\times H)\leq \psi_{n-1}^{-1}(R'\times\overset{d^{n-1}}{\dots}\times R')\leq \rst_G(n-1)'$$ for $n\in\mathbb{N}, n\geq 2$. We will show by induction on $n$ that $G$ has the $\mathcal{C}$-CSP modulo $L_n$ for each $n\in\mathbb{N}$. Then, as $G$ is weakly regular branch, it is in particular transitive on all levels of $T$, so by Lemma \ref{lemma:rist'}, for each non-trivial $N\unlhd G$ there exists $n\in \mathbb{N}$ such that $N\geq\rst_G(n)'\geq L_{n+1}$, whence the result follows. There is nothing to show for the base case as we have assumed that $G$ has the $\mathcal{C}$-CSP modulo $H$. It will suffice to show that $L_n$ has the $\mathcal{C}$-CSP modulo $L_{n+1}$ for all $n\in\mathbb{N}$ and then inductively apply Lemma \ref{lem:csp_transitivity}. Fix $n\in\mathbb{N}$ and let $L_{n+1}\leq N\unlhd_{\mathcal{C}} L_n$.
Then $$L\times\overset{d^n}{\dots}\times L\leq \psi_n(N)\unlhd_{\mathcal{C}} H\times \overset{d^n}{\dots}\times H.$$ For $i=1,\dots,d^n$, denote by $H_i$ the $i$th coordinate subgroup in $\psi_n(L_n)$ and similarly for $L_i$. Then $N_i:=N\cap H_i\unlhd_{\mathcal{C}} H_i$ and $N_i\geq L_i$, so, since $H$ has the $\mathcal{C}$-CSP modulo $L$, there is some $m_i\in\mathbb{N}$ such that $\st_H(m_i)\leq N_i$. Taking the maximum, $m$, of the $m_i$, we obtain $$\st_H(m)\times\overset{d^n}{\dots}\times\st_H(m)\leq\psi_n(N). $$ Thus $$\st_{L_n}(m+n)=\psi_{n}^{-1}(\st_H(m)\times\overset{d^{n}}{\dots}\times \st_H(m))\leq N$$ as required. \end{proof} \begin{corollary} If $\mathcal{C}$ is also extension-closed and $H\unlhd_{\mathcal{C}} G$, then Lemma \ref{lem:csp-subnormal} shows that the condition in Theorem \ref{thm:csp} is also necessary for $G$ to have the $\mathcal{C}$-CSP. \end{corollary} \section{Examples: the $p$-CSP} We consider two types of weakly branch groups, one for $p$ odd (the GGS-groups with constant defining vector studied in \cite{bartholdi-grigorchuk:parabolic,alcober-zugadi:hausdorff,alcober-garrido-uria:GGS}) and another for $p=2$ (the Basilica group, studied in \cite{grigorchuk-zuk:torsion-free}). These examples are subgroups of a Sylow pro-$p$ group of $\mathcal{A}ut T$, so the quotients by level stabilizers are finite $p$-groups. It is therefore sensible to consider the $\mathcal{C}$-CSP for $\mathcal{C}$ a pseudo-variety consisting of finite $p$-groups. In this section we focus on the pseudo-variety of all finite $p$-groups and will therefore talk about the $p$-CSP. \subsection{Example: the GGS-groups with constant defining vector.} Let $p$ be an odd prime and let $\mathcal{G}=\langle a, b\rangle \leq\mathcal{A}ut T$ be the GGS-group with constant defining vector. That is, $a$ cyclically permutes the vertices of the first level as the permutation $(1\; 2\; \dots \; p)$ and $b=(a,a,\dots, a, b)$ acts as $a$ on the first $p-1$ subtrees rooted at the first level. Let $K=\langle ba^{-1}\rangle^{\mathcal{G}}$. It was shown in \cite{alcober-garrido-uria:GGS} that $\mathcal{G}$ does not have the CSP, because it virtually maps onto $\mathbb{Z}$ and therefore has many finite quotients that are not $p$-groups. We show here that it does have the $p$-CSP. This automatically gives us the answers for the cases of pseudo-varieties of solvable and nilpotent groups. For the pseudo-variety $\mathcal{C}_s$ of finite solvable groups, $\mathcal{G}$ will not have the $\mathcal{C}_s$-CSP because $\mathcal{G}/K\cong (\mathbb{Z}\times\overset{p-1}{\dots}\times \mathbb{Z}) \rtimes C_p$ and so $\mathcal{G}$ has quotients that are solvable but not of $p$-power index. On the other hand, for $\mathcal{C}_n$, the family of finite nilpotent groups, $\mathcal{G}$ has the $\mathcal{C}_n$-CSP. Suppose $N\unlhd \mathcal{G}$ is such that $\mathcal{G}/N$ is nilpotent. Then $N\geq \gamma_i(\mathcal{G})$ for some $i\in\mathbb{N}$. Since $\mathcal{G}/\mathcal{G}'$ is finite of exponent $p$, each quotient $\gamma_i(\mathcal{G})/\gamma_{i+1}(\mathcal{G})$ is also finite of exponent $p$. Thus, each $\gamma_i(\mathcal{G})$ is of finite index in $\mathcal{G}$ and moreover of index a power of $p$. If $\mathcal{G}$ has the $p$-CSP then, in particular, each $\gamma_i(\mathcal{G})$ contains some level stabilizer, and thus $\mathcal{G}$ has the $\mathcal{C}_n$-CSP.
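As a quick sanity check on the recursive definition (a standard computation, recorded here only for the reader's convenience): the rooted automorphism $a$ satisfies $a^p=1$, and since $b\in\st_{\mathcal{G}}(1)$ with $\psi(b)=(a,a,\dots,a,b)$, we have $$\psi(b^p)=(a^p,\dots,a^p,b^p)=(1,\dots,1,b^p),$$ so that, by induction on the level, $b^p\in\st_{\mathcal{G}}(n)$ for every $n$; faithfulness of the action ($\bigcap_n\st_{\mathcal{G}}(n)=1$) then forces $b^p=1$. In particular the abelianisation $\mathcal{G}/\mathcal{G}'$ is an elementary abelian $p$-group of order at most $p^2$, which is the fact about $\mathcal{G}/\mathcal{G}'$ used in the preceding paragraph.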
\begin{proposition}\label{prop:rists} For each $n\in \mathbb{N}$, the $n$th rigid stabilizer satisfies $\psi_n(\rst_{\mathcal{G}}(n))=K'\times \overset{p^n}{\dots}\times K'$. \end{proposition} \begin{proof} We know from \cite{alcober-zugadi:hausdorff} that $\mathcal{G}$ is weakly regular branch over $K'$. This means that $\psi_n(\rst_G(n))\geq K'\times\overset{p^n}{\dots} \times K'$ for all $n$. Now if we prove the statement for $n=1$, since $\psi(\rst_G(2))\leq \rst_G(1)\times\dots\times\rst_G(1)$ we get $\psi_2(\rst_G(2))\leq K'\times\overset{p^2}{\dots}\times K'$ and inductively the same for the rest of the levels. By the proof of Theorem 3.7 of \cite{alcober-garrido-uria:GGS}, we have $\psi(\rst_{\mathcal{G}'}(1))=K'\times \overset{p}{\dots}\times K'$. We need only prove that $K\geq \rst_{\mathcal{G}}(1)$, since then $\rst_{\mathcal{G}}(1)=\rst_{K}(1)=\rst_{\mathcal{G}'}(1)$, where the latter equality holds because $\mathcal{G}'=\st(1)\cap K$. We will in fact show the stronger statement $\st_{\mathcal{G}}(1)'\geq \rst_{\mathcal{G}}(x)$ for some $x\in X$ (and therefore for all $x\in X$, as $\st_{\mathcal{G}}(1)$ is normal in $G$, which acts transitively on $X$), from which the claim follows as $K \geq \st_{\mathcal{G}}(1)'$. Suppose that there is some $g\in\rst_{\mathcal{G}}(x)\setminus \st_{\mathcal{G}}(1)'$ with $\psi(g)=(h,1,\dots,1)$. Then we can write $$g=b^{i_0}(b^a)^{i_1}\dots (b^{a^{p-1}})^{i_{p-1}}t$$ where $t\in \st_{\mathcal{G}}(1)'.$ Now $$\psi(g)=(a^*b^{i_1}a^*t_1,\dots, a^*b^{i_0}a^*t_p)=(h,1,\dots,1),$$ where $t_i\in G'$ for $i=1,\dots,p$ and the $*$ denote unimportant exponents. Then, necessarily, $i_j=0$ for $j\neq 1$, and consequently the equality $$\psi(g)=(b^{i_1}t_1,a^{i_1}t_2,\dots,a^{i_1}t_{p})=(h,1,\dots,1)$$ implies that also $i_1=0$. Thus $g\in\st_{\mathcal{G}}(1)'$, as required. \end{proof} For the following, see \cite[Proposition 3.4]{alcober-garrido-uria:GGS} and \cite[Example 7.4.14, Section 8.2]{leedham-green-mckay:p-power}. \begin{proposition}\label{uniserial} The quotient $\mathcal{G}/K'$ is isomorphic to the integral uniserial space group $\mathbb{Z}[\theta]\rtimes C_p$ where $\theta$ is a primitive $p$th root of unity and the generator of $C_p$ acts by multiplication by $\theta$. In particular, each normal subgroup of $p$-power index in $\mathcal{G}/K'$ is precisely $\gamma_i(\mathcal{G})K'/K'$ for some $i\in\mathbb{N}$. \end{proposition} \begin{corollary}\label{csp_mod_K'} The groups $\mathcal{G}$ and $K$ have the $p$-CSP modulo $K'$. \end{corollary} \begin{proof} In \cite[Theorem 4.6]{alcober-zugadi:hausdorff} it is proved that $G/K'\st_G(n)$ is of maximal class and order $p^{n+1}$ for every $n\in\mathbb{N}$. Thus $\st_G(n)K'=\gamma_n(G)K'$ for every $n\in\mathbb{N}$ and by Proposition \ref{uniserial} the first claim follows. The second claim follows by Lemma \ref{lem:csp-subnormal}. \end{proof} Define $K_1:=K', K_2:=\psi^{-1}(K'\times\overset{p}{\dots}\times K')=\rst_G(1)$.
Consider the following maps: $$\begin{array}{rccc} S \colon & \st_{\mathcal{G}}(1) & \to & G/K_1\times \overset{p-2}{\dots}\times G/K_1 \\ & g=(g_1,\dots,g_p) & \mapsto & (g_1K_1,\dots,g_{p-2}K_1) \end{array},$$ and for $n\geq 3$, $$\begin{array}{rccc} \pi_n \colon & K/K_1\times \overset{p-2}{\dots}\times K/K_1 & \to & K/\st_{\mathcal{G}}(n)K_1\times \overset{p-2}{\dots}\times K/\st_{\mathcal{G}}(n)K_1\\ & (g_1K_1,\dots,g_{p-2}K_1) & \mapsto & (g_1\st_{\mathcal{G}}(n)K_1,\dots,g_{p-2}\st_{\mathcal{G}}(n)K_1) \end{array}.$$ Observe that $\ker\pi_n=\st_{\mathcal{G}}(n)K_1/K_1\times \overset{p-2}{\dots}\times \st_{\mathcal{G}}(n)K_1/K_1$. Then we have the following properties, which can be extracted from the proof of Theorem 4.5 in \cite{alcober-zugadi:hausdorff}. \begin{lemma}\label{maps} With the above notation, \begin{enumerate} \item the map $S$ restricted to $K_1$ has kernel $K_2$ and image $K/K_1\times \overset{p-2}{\dots}\times K/K_1$, \item the kernel of the composition $S_n:=\pi_n\circ S$ is $(\st_{\mathcal{G}}(n+1)\cap K_1)K_2$. \end{enumerate} \end{lemma} \begin{proposition}\label{prop:K1/K2_csp} The group $K_1$ has the $p$-CSP modulo $K_2$. \end{proposition} \begin{proof} Let $K_2\leq N\unlhd_p K_1$. Then $$S(N)\unlhd_p K/K_1\times \overset{p-2}{\dots}\times K/K_1.$$ For $i\in\{1,\dots,p-2\}$, the intersection of $S(N)$ with the $i$th direct factor $(K/K_1)_i$ in $S(K_1)$ is of $p$-power index in $(K/K_1)_i$. By Corollary \ref{csp_mod_K'}, it contains $(\st_{\mathcal{G}}(n_i)K_1/K_1)_i$ for some $n_i\in\mathbb{N}$. Taking $n=\max \{n_i\mid 1\leq i \leq p-2\}$ yields $$S(N)\geq \st_{\mathcal{G}}(n)K_1/K_1 \times \overset{p-2}{\dots} \times \st_{\mathcal{G}}(n)K_1/K_1.$$ That is, $S(N)\geq \ker \pi_n$, and thus $N\geq S^{-1}(\ker \pi_n)=\ker S_n=(\st_{\mathcal{G}}(n+1)\cap K_1)K_2.$ \end{proof} We must now separate the proof into two cases: $p=3$ and $p\geq 5$. This is because we would like to apply Theorem \ref{thm:csp} to $\mathcal{G}$ with $H=R=K_1$. The only remaining hypothesis to check is that $K_1'\geq K_2$. However, this only holds when $p\geq 5$, which is implicit in the proof of \cite[Lemma 4.2 (iii)]{alcober-zugadi:hausdorff}. In fact, by (ii) in Lemma \ref{maps}, $K_1/K_2\cong K/K_1\times\overset{p-2}{\dots}\times K/K_1$, which is abelian, so $K_1'=K_2$. In particular, this and Proposition \ref{prop:rists} imply that $\rst_{\mathcal{G}}(n)'=\rst_{\mathcal{G}}(n+1)$ for each $n\geq 1$. \begin{corollary} For every prime $p\geq 5$, the GGS-group $\mathcal{G}\leq\mathcal{A}ut T$ with constant vector has the $p$-CSP, but not the CSP. \end{corollary} Let us now prove the remaining case, so that from now on $p=3$. The following result can be found in \cite{bartholdi-grigorchuk:parabolic}. \begin{lemma}\label{K2=G''} Let $\mathcal{G}$ and $K$ be as before. Then we have $$\psi(\mathcal{G}'')=K'\times K'\times K'.$$ \end{lemma} \begin{proof} One inclusion is clear because $\psi(\mathcal{G}')\leq K\times K\times K$ by \cite[Lemma 4.2 (iii)]{alcober-zugadi:hausdorff}. For the other one, observe that $\psi([b,a])=(y_1,1,y_1^{-1})$ and $\psi([b^{-1},a]^a)=(y_0,y_0^{-1},1)$. Thus $\psi([[b,a],[b^{-1},a]^a])=([y_0,y_1],1,1)$ and, since $K'=\langle [y_0,y_1]\rangle ^{\mathcal{G}}$, the result follows.
\end{proof} In order to apply Theorem \ref{thm:csp} with $R=K_1=K'$ and $H=K_2=\mathcal{G}''$ we must check that $K''\geq\psi^{-1}(\mathcal{G}''\times \mathcal{G}''\times\mathcal{G}'').$ \begin{proposition}\label{gamma3_K} We have $\mathcal{G}''\leq\gamma_3(K).$ \end{proposition} \begin{proof} Since $\mathcal{G}'=\langle[a,b]\rangle^{\mathcal{G}}$, we have $$\mathcal{G}''=\langle [[a,b],[a,b]^g]\mid g\in\mathcal{G}\rangle^{\mathcal{G}}.$$ Because $\gamma_3(K)$ is normal in $\mathcal{G}$, it suffices to prove that $[[a,b],[a,b]^g]\in\gamma_3(K)$ for every $g\in\mathcal{G}$. We already know that $\mathcal{G}/K\cong C_3$ and we can take as coset representatives $\{1,a,a^2\}$. Write $g=ka^i$ with $i\in \mathbb{F}_3$ and $k\in K$. If $i=0$ there is nothing to prove, because \begin{align*} [[a,b],[a,b]^g]&=[[a,b],[a,b][a,b,g]]\\ &=[[a,b],[a,b,g]], \end{align*} and since $G'\leq K$, the element belongs to $\gamma_3(K)$. Suppose that $g=ka^i$ with $i=1,2$ and $k\in K$. Now we have \begin{align*} [[a,b],[a,b,ka^i]]&=[[a,b],[a,b,a^i][a,b,k]^{a^i}]\\ &=[[a,b],[a,b,k]^{a^i}][[a,b],[a,b,a^i]]^{[a,b,k]^{a^i}}. \end{align*} It is clear that the first factor is in $\gamma_3(K)$. On the other hand, \begin{align*} \psi([a,b])&=(b^{-1}a,1,a^{-1}b),\\ \psi([a,b,a])&=((a^{-1}b)^2,b^{-1}a,b^{-1}a),\\ \psi([a,b,a^2])&=(a^{-1}b,a^{-1}b,(b^{-1}a)^2), \end{align*} imply that the second factor is trivial for $i=1,2$. \end{proof} \begin{proposition} We have $\psi(K'')\geq \mathcal{G}''\times\mathcal{G}''\times \mathcal{G}''.$ \end{proposition} \begin{proof} By Proposition \ref{gamma3_K}, it suffices to prove the containment $$\psi(K'')\geq\gamma_3(K)\times\gamma_3(K)\times\gamma_3(K).$$ Since $\mathcal{G}$ is weakly regular branch over $K'$, we know that for every $k_1\in K'$ there is some $g_1\in K'$ such that $\psi(g_1)=(k_1,1,1)$. On the other hand, since $\psi([y_0,y_1])=(y_2,y_0,y_1)$ we get that $K'$ is subdirect in $K\times K\times K$. Thus, for every $k_2\in K$ there is some $g_2\in K'$ such that $\psi(g_2)=(k_2,*,*).$ Finally, we obtain $$\psi([g_1,g_2])=([k_1,k_2],1,1),$$ and the result follows. \end{proof} Now we can apply Theorem \ref{thm:csp} with $R=K'=K_1$ and $H=\mathcal{G}''=K_2$. By Proposition~\ref{prop:K1/K2_csp}, we only need to prove that $K_2=\mathcal{G}''$ has the $p$-CSP modulo $\psi^{-1}(\mathcal{G}''\times \mathcal{G}''\times \mathcal{G}'')$. Lemma~\ref{K2=G''} implies that $\mathcal{G}'' /\psi^{-1}(\mathcal{G}''\times \mathcal{G}''\times \mathcal{G}'')\cong K_1/K_2\times K_1/K_2\times K_1/K_2$, and the result then follows by applying Proposition \ref{prop:K1/K2_csp} again. \subsection{Example: Basilica group} This group was defined by R. Grigorchuk and A. Zuk in \cite{grigorchuk-zuk:torsion-free}. In the same paper they prove that this group is torsion-free and weakly branch. We recall here the definition and some auxiliary results proved there. \begin{definition} Let $T$ be the binary tree. The Basilica group $G$ is generated by two automorphisms $a$ and $b$ defined recursively as follows: $$a=(1,b) \qquad b=(1,a)\varepsilon$$ where $\varepsilon$ denotes the swap at the root. \end{definition} \begin{lemma}\label{basilica:multi-lemma} Let $G$ be the Basilica group.
Then, \begin{enumerate} \item $G$ acts transitively on all levels of $T$, \item $\psi(G')\geq G'\times G'$, so $G$ is weakly branch over $G'$, \item $G'=\psi^{-1}(G'\times G')\rtimes \langle [a,b]\rangle$, \item $G/G'=\langle a\rangle\times\langle b\rangle\cong\mathbb{Z}\times\mathbb{Z}$, \item $G$ is torsion-free. \end{enumerate} \end{lemma} Since $G/G'\cong \mathbb{Z}\times\mathbb{Z}$, and all quotients by level stabilizers are 2-groups, $G$ does not have the congruence subgroup property. We show below that it has the $2$-CSP. \begin{lemma} Let $A:=\langle a\rangle^G$ and $B:=\langle b\rangle^G$. Then $G'=A\cap B$. \end{lemma} \begin{proof} $G'=\langle [a,b]\rangle^G$ and $[a,b]\in A\cap B$, so $G'\leq A\cap B$. Since $G/(A\cap B) \cong G/A \times G/B \cong \mathbb{Z} \times \mathbb{Z}$, the result follows. \end{proof} \begin{lemma}\label{basilica:rist} With $A$ and $B$ as above we have \begin{enumerate} \item $\rst_G(1)=A$ with $\psi(A)=B\times B$, \item $\psi_{n-1}(\rst_G(n))=G'\times\overset{2^{n-1}}{\dots}\times G'$. \end{enumerate} \end{lemma} \begin{proof} The first item is Lemma 3 of \cite{grigorchuk-zuk:torsion-free}. For the rest of the levels, the fact that $$\psi(\rst_G(n))=(\rst_G(n-1)\times\rst_G(n-1))\cap \psi(\rst_G(n-1))$$ for every $n$, implies in particular that $\psi(\rst_G(2))=(A\times A)\cap (B\times B)=G'\times G'$. The claim follows because $\psi(G')\geq G'\times G'$. \end{proof} It was pointed out to us by D. Francoeur, and separately by B. Klopsch and H. Sasse, that Lemma 8 of \cite{grigorchuk-zuk:torsion-free} is inaccurate, which also affects the proof of Lemma 9 in that paper (although not the statement). Since we will make use of these results, we provide corrected versions. \begin{lemma}\label{lem:correct_lemmas89} The following hold: \begin{enumerate} \item $\psi(G'')=\gamma_3(G)\times\gamma_3(G)$; \item $\gamma_3(G)/G''\cong \mathbb{Z}^2$; \item $G'/G''\cong \mathbb{Z}^3$. \end{enumerate} \end{lemma} \begin{proof} First note that $G' \ni [a,b^{-1}]=(b,b^{-1})$ and, since $([b,a],1), (1,[b,a]) \in G'$ by Lemma \ref{basilica:multi-lemma}, we obtain $([[b,a],b],1), (1,[[b,a],b])\in G''$. It is easy to check that $[[b,a],a]=1$, so $\gamma_3(G)=\langle [[b,a],b]\rangle ^G $. This, together with the fact that $\psi(\st_G(1))\leq G\times G$, proves one inclusion of the first item. For the other inclusion, it suffices to show that $G'/(\gamma_3(G)\times\gamma_3(G))$ is abelian. For this, we use that $G'=\langle [a,b^{-1}]\rangle^G$. Then $[a, b^{-1}]^{a^{\pm 1}}=[a,b^{-1}]$. A calculation yields \begin{align*} [a,b^{-1}]^{b^{2n}} &= (b, b^{-1})^{(a^n,a^n)}\equiv (b[b, a]^n, b^{-1}[b,a]^{-n}) \mod \gamma_3(G)\times\gamma_3(G) \\ [a,b^{-1}]^{b^{2n+1}} &=(b^{-1}[b^{-1}, a], b)^{(a^n,a^n)}\equiv (b^{-1}[b, a]^{-n-1}, b[b,a]^{n}) \mod \gamma_3(G)\times\gamma_3(G) \end{align*} for every $n\in\mathbb{N}$. Since $a$ commutes with the above elements modulo $\gamma_3(G)\times \gamma_3(G)$, the group $G' / (\gamma_3(G)\times \gamma_3(G))$ is generated by the images of $[a,b^{-1}]^m$ for $m\in \mathbb{Z}$, and these clearly commute with each other, which implies that $G'' \leq \gamma_3(G) \times \gamma_3(G)$.
To show the second item, we examine the image of $\langle [[a,b^{-1}], b]\rangle^G$ modulo $\gamma_3(G)\times \gamma_3(G)$: \begin{align*} x &:=[[a,b^{-1}],b]=(b^{-1}b^{-a}, b^2)\equiv (b^{-2}[b,a]^{-1}, b^2)\\ x^{b^{2n}} &= x^{(a^n, a^n)} \equiv (b^{-2}[b,a]^{-2n-1}, b^2[b,a]^{2n})\\ x^{b^{2n+1}} &= (x^b)^{(a^n, a^n)} \equiv (b^{2}[b,a]^{2n+2}, b^{-2}[b,a]^{-2n-1}). \end{align*} Since $a$ commutes with $x^{b^m}$ for all $m\in\mathbb{Z}$, these conjugates of $x$ generate $\gamma_3(G)$. Writing $y:=x^{b^2}x^b\equiv ([b,a]^{-1}, [b,a])$, it is easily checked that $x^{b^{2n}}\equiv xy^{2n}$ and $x^{b^{2n+1}}\equiv x^{-1}y^{-2n-1}$ modulo $\gamma_3(G)\times \gamma_3(G)$, for all $n\in \mathbb{Z}$. Now, $[a,b]^n=(b^{na}, b^{-n})\equiv (b^n[b,a]^n, b^{-n}) \mod \gamma_3(G)\times\gamma_3(G)$ for every $n\in\mathbb{Z}$. Suppose that $[a,b]^n\in\gamma_3(G)$ for some $n\in\mathbb{Z}$. Then there exist $r, s\in\mathbb{Z}$ such that $$x^ry^s\equiv (b^{-2r}[b,a]^{-r-s}, b^{2r}[b,a]^s) \equiv (b^n[b,a]^n, b^{-n}) \equiv [a,b]^n \mod \gamma_3(G)\times\gamma_3(G). $$ Comparing the second coordinate, and using that $b^m\in G'$ if and only if $m=0$ (see Lemma \ref{basilica:multi-lemma}), we get $2r=-n$ and $s=0$. Using this for the first coordinate yields that $b^{n+2r}\equiv [b,a]^{-r-n} \mod \gamma_3(G)$. But this means that $n=r=-2r$, so $r=0$ and $n=0$. This now easily implies that $x$ and $y$ are of infinite order modulo $\gamma_3(G)\times \gamma_3(G)$, proving the second item. Put $c:=[a,b^{-1}]G''$, $d:=[a,b^{-1}][a,b^{-1}]^bG''=([a,b], 1)G''$ and $e:=[a,b^{-1}][a,b^{-1}]^{b^{-1}}G''=(1,[a,b])G''.$ Then $c, d, e$ clearly generate $G'/\left( \gamma_3(G)\times \gamma_3(G) \right) $ and the above argument shows that they are of infinite order and linearly independent, proving the third item. \end{proof} In view of the fact that $\psi(\gamma_3(G))\geq\psi(G'')=\gamma_3(G)\times \gamma_3(G)$, we will take $R=G'$ and $H=\gamma_3(G)$ to apply Theorem \ref{thm:csp}. Note that we even have $L_n\leq \rst_G(n)'$ for all $n\in \mathbb{N}$, in the notation of that theorem. It only remains to show that $G$ and $\gamma_3(G)$ have the $2$-CSP modulo $\gamma_3(G)$ and $\psi^{-1}(\gamma_3(G)\times\gamma_3(G))$, respectively. The rest of this section is devoted to proving this. \begin{proposition}\label{prop:infinite_mod_gamma3} The quotient $G'/\gamma_3(G)$ is infinite cyclic and $G/\gamma_3(G)$ is isomorphic to the integral Heisenberg group. \end{proposition} \begin{proof} The first statement follows from the last two items of Lemma \ref{lem:correct_lemmas89}. The second follows from the first statement and the fact that $G/G' \cong \mathbb{Z}^2$. \end{proof} \begin{lemma}\label{lem:g0g1inG'} If $g\in G'$ is such that $\psi(g)=(g_1,g_2)$ then $g_1g_2\in G'$. Similarly if $g\in G'\st_G(n)$ then $g_1g_2\in G'\st_G(n-1)$. \end{lemma} \begin{proof} Define $\varphi: G\times G\longrightarrow G/G'$ by $(g_1,g_2) \mapsto g_1g_2 G'$. We claim that $G'\leq \ker(\varphi\circ\psi)$. Clearly, $\psi^{-1}(G'\times G')$ is contained in the kernel and, since $G'\geq\psi^{-1}(G'\times G')$, it suffices to check that the image is in the kernel for the generators of $G'$ modulo $\psi^{-1}(G'\times G')$, that is, for $[a,b]$. The result follows because $\psi([a,b])=(b^{a},b^{-1})$. \end{proof} \begin{proposition}\label{prop:GhasCSPmodgamma3} The group $G$ has the $2$-CSP modulo $\gamma_3(G)$.
\end{proposition} \begin{proof} It suffices to prove that $G$, $A$, and $G'$ have the $2$-CSP modulo $A$, $G'$ and $\gamma_3(G)$, respectively, and apply Lemma \ref{lem:csp_transitivity} twice. Since $G/A\cong A/G'\cong G'/\gamma_3(G)\cong \mathbb{Z}$, it is enough to show that $|G:A\st_G(n)|$, $|A:G'\st_{A}(n)|$ and $|G':\gamma_3(G)\st_{G'}(n)|$ tend to infinity with $n$. Indeed, since in $\mathbb{Z}$ the subgroups of index a power of $2$ are totally ordered, this will imply that any normal subgroup $N$ of index a power of $2$ with, for instance, $A\leq N\leq G$ will satisfy $N\geq\st_G(n)A$ for some $n\in\mathbb{N}$. We first prove by induction that $b^{2^n}\notin A\st_G(2n+1)$. The base step, $b\notin A\st_G(1)=\st_G(1)$, is clear. Now assume that $b^{2^{n-1}}\notin A\st_G(2n)$ and suppose for a contradiction that $b^{2^n}\in A\st_G(2n+1)$. By Lemma \ref{basilica:multi-lemma}, we have $$A\st_G(2n+1)=\langle a\rangle G'\st_G(2n+1)=\langle a\rangle \langle [a,b]\rangle \psi^{-1}(G'\times G')\st_G(2n+1).$$ So there are $i, j\in \mathbb{Z}$ such that $[a,b]^ja^ib^{2^n} \in \psi^{-1}(G'\times G')\st_G(2n+1).$ Thus $$\psi([a,b]^ja^ib^{2^n})=((b^a)^ja^{2^{n-1}}, b^{i-j}a^{2^{n-1}}) \in G'\st_G(2n)\times G'\st_G(2n).$$ Consider $b^{i-j}a^{2^{n-1}}\in G'\st_G(2n)$. As $\psi(b^{i-j}a^{2^{n-1}})=(a^{(i-j)/2}, a^{(i-j)/2}b^{2^{n-1}})$, applying Lemma \ref{lem:g0g1inG'} yields $a^{i-j}b^{2^{n-1}}\in G'\st_G(2n-1)\leq A\st_G(2n-1)$. This implies that $b^{2^{n-1}}\in A\st_G(2n-1)$, a contradiction. The claim follows by induction. This easily implies that $a^{2^n}\notin G'\st_G(2n+2)$ for each $n\in\mathbb{N}$. Indeed, $a^{2^n}=(1,b^{2^n})$ and, since $b^{2^n}\notin G'\st_G(2n+1)$, Lemma \ref{lem:g0g1inG'} yields that $a^{2^n}$ cannot be in $G'\st_G(2n+2)$. Finally, let us prove that $|G':\gamma_3(G)\st_{G'}(n)|$ tends to infinity with $n$. Suppose that it does not, so there exist $M, K\in\mathbb{N}$ such that $G'/\gamma_3(G)\st_{G'}(m)\cong \mathbb{Z}/2^K\mathbb{Z}$ for all $m\geq M$. In particular, $[a,b^{-1}]\equiv [a,b]^{-1}$ has order $2^K$ modulo $\gamma_3(G)\st_{G'}(m)$. By the proof of the second item of Lemma \ref{lem:correct_lemmas89}, for each $m\geq M$ there exist $r_m, s_m\in \mathbb{Z}$ such that \begin{equation}\label{eq:[a,b]2Kingamma3st(m)} [a,b^{-1}]^{2^K}\equiv (b^{2^K}, b^{-2^K})\equiv x^{r_m}y^{s_m}\equiv (b^{-2r_m}[b,a]^{-r_m-s_m}, b^{2r_m}[b,a]^{s_m}) \end{equation} modulo $(\gamma_3(G)\times \gamma_3(G))\st_{G'}(m)\leq \gamma_3(G)\st_G(m-1)\times \gamma_3(G)\st_G(m-1)$. In particular, $b^{2^K+2r_m}\in G'\st_G(m-1)$. We have seen above that $b^{2^n}\in G'\st_G(2n)\setminus G'\st_G(2n+1)$ for all $n\in\mathbb{N}$. Thus $2^{\lfloor m/2 \rfloor}$ divides $2^K+2r_m$. In other words, denoting by $v(\cdot)$ the 2-adic valuation, $\lfloor m/2\rfloor \leq v(2^K+2r_m)$. If $v(2r_m)\neq K$ then $v(2^K+2r_m)= \min \{K, v(2r_m) \}\geq \lfloor m/2 \rfloor$, which is impossible for $m> 2K+1$, and so for those $m$ there must be some odd $t_m\in \mathbb{Z}$ such that $2r_m=2^Kt_m$. Thus equation \ref{eq:[a,b]2Kingamma3st(m)} implies that $b^{2^K+2^Kt_m}[b,a]^{2^K t_m + s_m}, b^{2^K+2^Kt_m}[b,a]^{s_m}\in \gamma_3(G)\st_G(m-1)$ for $m\geq \max \{M, 2K+2\}$, which in turn means that $[b,a]^{2^{K-1}t_m}\in\gamma_3(G)\st_G(m-1)$.
Since $G'/\gamma_3(G)\st_G(n)$ is a 2-group for all $n\in \mathbb{N}$, and $t_m$ is odd, we deduce that $[b,a]=[a,b]^{-1}$ has order at most $2^{K-1}$ modulo $\gamma_3(G)\st_G(m-1)$ for all $m\geq \max \{M,2K+2\}$, a contradiction. \end{proof} \begin{proposition} The group $\gamma_3(G)$ has the $2$-CSP modulo $\psi^{-1}(\gamma_3(G)\times \gamma_3(G))$. \end{proposition} \begin{proof} This is proved like the previous result. In the proof of Lemma \ref{lem:correct_lemmas89}, we saw that $\psi(\gamma_3(G))$ is generated by $(b^{-2}[b,a]^{-1}, b^2)$ and $([b,a]^{-1}, [b,a])$ modulo $\gamma_3(G)\times \gamma_3(G)$, so it is also generated by $\alpha:=(b^{2}[b,a], b^{-2})$ and $\beta:=(b^{-2}, b^{2}[b,a]^{-1} )$. We first show that $\gamma_3(G)$ has the 2-CSP modulo $\langle \beta\rangle (\gamma_3(G)\times \gamma_3(G))$. Suppose for a contradiction that there exist $M, K\in\mathbb{N}$ such that $\gamma_3(G)/\langle \beta\rangle (\gamma_3(G)\times\gamma_3(G))\st_{\gamma_3(G)}(m) \cong \mathbb{Z}/2^K\mathbb{Z}$ for all $m\geq M$. That is, for all $m\geq M$ there exists $r_m\in \mathbb{Z}$ such that $$\alpha^{2^K}=(b^{2^{K+1}}[b,a]^{2^K}, b^{-2^{K+1}})\equiv \beta^{r_m}=(b^{-2r_m}, b^{2r_m}[b,a]^{-r_m})$$ modulo $(\gamma_3(G)\times\gamma_3(G))\st_{\gamma_3(G)}(m)\leq \gamma_3(G)\st_G(m-1)\times\gamma_3(G)\st_G(m-1)$. In particular, $$b^{2^{K+1}+2r_m}[b,a]^{2^K}, b^{2^{K+1}+2r_m}[b,a]^{-r_m}\in \gamma_3(G)\st_G(m-1)$$ and therefore $[b,a]^{2^K}\gamma_3(G)\st_G(m-1) = [b,a]^{-r_m}\gamma_3(G)\st_G(m-1) $. We saw in the proof of Proposition \ref{prop:GhasCSPmodgamma3} that $G'/\gamma_3(G)\st_G(n)\cong \mathbb{Z}/2^{t_n}\mathbb{Z}$ where $t_n$ tends to infinity with $n$, which means that $-r_m=2^K$ for all large enough $m$. Thus $b^{2^{K+1}-2^{K+1}}[b,a]^{2^K} \in \gamma_3(G)\st_G(m-1)$ for all large enough $m$, contradicting that $G'$ has the 2-CSP modulo $\gamma_3(G)$. To show that $\langle \beta\rangle (\gamma_3(G)\times \gamma_3(G))$ has the 2-CSP modulo $\gamma_3(G)\times \gamma_3(G)$, suppose for a contradiction that there exist $M, K\in \mathbb{N}$ such that $\beta$ has order $2^K$ modulo $(\gamma_3(G)\times \gamma_3(G))\st(m)$ for all $m\geq M$. This means that $(b^{-2^{K+1}}, b^{2^{K+1}}[b,a]^{-2^K} )\in (\gamma_3(G)\times \gamma_3(G))\st(m)\leq (\gamma_3(G)\st(m-1)\times \gamma_3(G)\st(m-1))$ for all $m\geq M$. In particular, $b^{2^{K+1}}\in G'\st(m-1)$ for all $m\geq M$, a contradiction to the proof of Proposition \ref{prop:GhasCSPmodgamma3}. Lemma \ref{lem:csp_transitivity} now yields the result. \end{proof} \end{document}
\begin{document} \title[The set of toric minimal log discrepancies] {The set of toric minimal log discrepancies} \begin{abstract} We describe the set of minimal log discrepancies of toric log varieties, and study its accumulation points. \end{abstract} \maketitle \section*{Introduction} \footnotetext[1]{This work is supported by a 21st Century COE Kyoto Mathematics Fellowship, and a JSPS Grant-in-Aid No 17740011. } \footnotetext[2]{1991 Mathematics Subject Classification. Primary: 14B05. Secondary: 14M25.} Minimal log discrepancies are invariants of singularities of log varieties. A log variety $(X,B)$ is a normal variety $X$ endowed with an effective Weil ${\mathbb R}$-divisor $B$, having at most log canonical singularities. For any Grothendieck point $\eta\in X$, the minimal log discrepancy of $(X,B)$ at $\eta$, denoted by $a(\eta;X,B)$, is a non-negative real number. For example, $a(\eta;X,B)=1-\operatorname{mult}_\eta(B)$ for every codimension one point $\eta\in X$. For higher codimensional points, minimal log discrepancies can be computed on a suitable resolution of $X$. Let $A\subset [0,1]$ be a set containing $1$ and let $d$ be a positive integer. Denote by $\operatorname{Mld}_d(A)$ the set of minimal log discrepancies $a(\eta;X,B)$, where $\eta\in X$ is a Grothendieck point of codimension $d$, and $(X,B)$ is a log variety whose minimal log discrepancies in codimension one belong to $A$. For example, $\operatorname{Mld}_1(A)=A$. In connection to the termination of a sequence of log flips (see~\cite{Shokurov88, Termination}), Shokurov conjectured that if $A$ satisfies the ascending chain condition, so does $\operatorname{Mld}_d(A)$. Furthermore, under certain assumptions, the accumulation points of $\operatorname{Mld}_d(A)$ should correspond to minimal log discrepancies of smaller codimensional points. This is known to hold for $d=2$ (Shokurov~\cite{Shokurov93}, Alexeev~\cite{Alexeev93}) and for any $d$ in the case of toric varieties without boundary (Borisov~\cite{Borisov97}). The purpose of this note is to extend Borisov's result to the case of toric log varieties. Given the explicit nature of the toric case, we hope this will provide the reader with interesting examples. In order to state the main result, define $\operatorname{Mld}^{tor}_d(A) \subset\operatorname{Mld}_d(A)$ as above, except that we further require that $X$ is a toric variety and $B$ is torus invariant. Note that $\operatorname{Mld}^{tor}_1(A)=A$. \begin{theom}\label{main} The following properties hold for $d\ge 2$: \begin{itemize} \item[(1)] We have $$ \operatorname{Mld}^{tor}_d(A) =\{\sum_{i=1}^s x_i a_i \left| \begin{array}{l} 2\le s\le d \\ (x_1,\ldots,x_s)\in {\mathbb Q}^s\cap (0,1]^s, (a_1,\ldots,a_s)\in A^s\\ \operatorname{index}(x_i)\vert \operatorname{index}(x_1,\ldots,\hat{x_i},\ldots,x_s), \ \forall 1\le i\le s \\ \sum_{i=1}^s (1+(m-1)x_i-\lceil mx_i\rceil)a_i\ge 0\ \forall m\in {\mathbb Z} \end{array}\right \}, $$ where for a rational point $x\in {\mathbb Q}^n$, we denote by $\operatorname{index}(x)$ the smallest positive integer $q$ such that $qx\in {\mathbb Z}^n$. \item[(2)] If $A$ satisfies the ascending chain condition, then so does $\operatorname{Mld}^{tor}_d(A)$. \item[(3)] Assume that $A$ has no nonzero accumulation points. Then the set of accumulation points of $\operatorname{Mld}^{tor}_d(A)$ is included in $$ \{0\}\cup \bigcup_{1\le d'\le d-1}\operatorname{Mld}^{tor}_{d'} (\{\frac{1}{n};n\ge 1\}\cdot A). $$ Equality holds if $d=2$, or if $\{\frac{1}{n};n\ge 1\}\cdot A \subseteq A$. 
\end{itemize} \end{theom} We use the same methods as Borisov~\cite{Borisov97, Borisov99}. The explicit description in (1) is straightforward, whereas the accumulation behaviour in (2) and (3) relies on a result of Lawrence~\cite{Lawrence91} stating that the set of closed subgroups of a real torus, which do not intersect a given open subset, has finitely many maximal elements with respect to inclusion. Finally, we should point out that $\operatorname{Mld}^{tor}_d(A)$ is strictly smaller than $\operatorname{Mld}_d(A)$ in general. For example, even the set of accumulation points of $\operatorname{Mld}_2(A)$ (see Shokurov~\cite{Shokurov93} for an explicit description) is larger than $\{0\}\cup \{\frac{1}{n};n\ge 1\}\cdot A$, the set of accumulation points of $\operatorname{Mld}^{tor}_2(A)$. \section{Toric log varieties} In this section we recall the definition of minimal log discrepancies and their explicit description in the toric case. The reader may consult~\cite{Ambro97} for more details. A {\em log variety} $(X,B)$ consists of an algebraic variety $X$, defined over an algebraically closed field of characteristic zero, endowed with a finite combination $B=\sum_i b_i B_i$ of Weil prime divisors with real coefficients, such that $K_X+B$ is ${\mathbb R}$-Cartier. Here $K_X$ is the canonical divisor of $X$, computed as the Weil divisor of zeros and poles $(\omega)_X$ of a top rational form $\omega\in \wedge^{\dim(X)}\Omega^1_{{\mathbb C}(X)/{\mathbb C}}$; it is uniquely defined up to linear equivalence. The ${\mathbb R}$-Cartier property of $K_X+B$ means that locally on $X$, there exist finitely many non-zero rational functions $a_\alpha\in {\mathbb C}(X)^\times$ and $r_\alpha\in {\mathbb R}$ such that $K_X+B=\sum_\alpha r_\alpha(a_\alpha)$. Let $\mu\colon X'\to X$ be a proper birational morphism from a normal variety $X'$ and let $E\subset X'$ be a prime divisor. Let $\omega$ be a top rational form on $X$, defining $K_X$, and let $K_{X'}$ be the canonical divisor defined by $\mu^*\omega$. The real number $$ a(E;X,B)=1+\operatorname{mult}_E(K_{X'}-\mu^*(K_X+B)) $$ is called the {\em log discrepancy of $(X,B)$ at $E$}. For a Grothendieck point $\eta\in X$, the {\em minimal log discrepancy of $(X,B)$ at $\eta$} is defined as $$ a(\eta;X,B)=\inf_{\mu(E)=\bar{\eta}}a(E;X,B), $$ where the infimum is taken over all prime divisors $E$ on proper birational maps $\mu\colon X'\to X$. This infimum is either $-\infty$, or a non-negative real number. In the latter case, $(X,B)$ is said to have {\em log canonical singularities at} $\eta$ and the invariant is computed as follows. By Hironaka, there exists a proper birational morphism $\mu\colon X'\to X$ such that $X'$ is nonsingular, $\mu^{-1}(\bar{\eta})$ is a divisor on $X'$, and there exists a simple normal crossings divisor $\sum_i E_i$ on $X'$ which supports both $\mu^{-1}(\bar{\eta})$ and $K_{X'}-\mu^*(K_X+B)$. Then $$ a(\eta;X,B)=\min_{\mu(E_i)=\bar{\eta}}a(E_i;X,B). $$ Next we specialize these notions to the toric case. We employ standard terminology on toric varieties, cf. Oda~\cite{Oda}. A {\em toric log variety} is a log variety $(X,B)$ such that $X$ is a toric variety and $B$ is torus invariant. Thus there exists a fan $\Delta$ in a lattice $N$ such that $X=T_N\operatorname{emb}(\Delta)$ and $B=\sum_i b_i V(e_i)$, where $\{e_i\}_i$ is the set of primitive lattice points on the one-dimensional cones of $\Delta$ and $V(e_i)\subset X$ is the torus invariant prime Weil divisor corresponding to $e_i$.
The canonical divisor is $K_X=\sum_i -V(e_i)$, and the ${\mathbb R}$-Cartier property of $K_X+B$ means that there exists a function $\psi\colon \vert \Delta\vert\to {\mathbb R}$ such that $\psi(e_i)=1-b_i$ for every $i$, and $\psi\vert\sigma$ is linear for every cone $\sigma\in \Delta$. We may assume that $(X,B)$ has log canonical singularities, which is equivalent to $\psi\ge 0$ or $b_i\in [0,1]$ for all $i$. Let $e\in N^{prim}\cap \vert\Delta\vert$ be a non-zero primitive vector. The barycentric subdivision with respect to $e$ defines a subdivision $\Delta_e\prec\Delta$ and the exceptional locus of the birational morphism $T_N\operatorname{emb}(\Delta_e)\to T_N\operatorname{emb}(\Delta)$ is a prime divisor denoted $E_e$. It is easy to see that $$ a(E_e;X,B)=\psi(e). $$ Due to this property, $\psi$ is called the {\em log discrepancy function of} $(X,B)$. Minimal log discrepancies of toric log varieties are computed as follows. These are local invariants, so we only consider affine varieties. Thus $\Delta$ consists of the faces of some strongly convex rational polyhedral cone $\sigma\subset N_{\mathbb R}$ and we denote $X=T_N\operatorname{emb}(\sigma)$. Assume first that $0\in X$ is a torus invariant closed point (it is unique since $X$ is affine). Using the existence of good resolutions in the toric category, it is easy to see that $$ a(0;X,B)=\min(\psi\vert_{N\cap \operatorname{relint}(\sigma)}). $$ For the general case, let $\eta\in X$ be a Grothendieck point. There exists a unique face $\tau\prec \sigma$ such that $\eta\in \operatorname{orb}(\tau)$. Let $c$ and $d$ be the codimension of $\operatorname{orb}(\tau)$ and $\eta$ in $X$, respectively. The induced affine toric log variety $$ (X',B')= (T_{N\cap(\tau-\tau)}\operatorname{emb}(\tau),\sum_{e\in\tau(1)}\operatorname{mult}_{V(e)}(B)V(e)) $$ has a unique torus invariant closed point $0'$, and we obtain $$ a(\eta;X,B)=\operatorname{mld}(0';X',B')+d-c. $$ \section{The set of toric minimal log discrepancies} Let $A\subseteq [0,1]$ be a set containing $1$. \begin{defn} For an integer $d\ge 1$, let $\operatorname{Mld}^{tor}_d(A)$ be the set of minimal log discrepancies $a(\eta;X,B)$, where $\eta\in X$ is a Grothendieck point of codimension $d$ and $(X,B)$ is a toric log variety whose minimal log discrepancies in codimension one belong to $A$. \end{defn} It is easy to see that $\operatorname{Mld}^{tor}_1(A)=A$. \begin{defn} For an integer $d\ge 2$, define $V_d(A)$ to be the set of pairs $(x,a)\in (0,1]^d\times A^d$ satisfying the following properties: \begin{itemize} \item[(i)] $x\in {\mathbb Q}^d$. \item[(ii)] $\operatorname{index}(x_i)\vert \operatorname{index}(x_1,\ldots,\hat{x_i},\ldots,x_d)$ for $1\le i\le d$. \item[(iii)] $\sum_{i=1}^d(1+(m-1)x_i-\lceil mx_i\rceil) a_i\ge 0$ for all $m\in {\mathbb Z}$. \end{itemize} For $x\in {\mathbb Q}^n$, $\operatorname{index}(x)$ denotes the smallest positive integer $q$ such that $qx\in {\mathbb Z}^n$. \end{defn} Note that property (ii) means that $(1,0,\ldots,0),\ldots,(0,\ldots,0,1)$ are primitive vectors in the lattice ${\mathbb Z}^d+{\mathbb Z} x$. Also, it is enough to verify property (iii) for the finitely many integers $1\le m\le \operatorname{index}(x)-1$. For $(x,a)\in V_d(A)$ we denote $$ \langle x,a\rangle=\sum_{i=1}^d x_i a_i. $$ \begin{prop}\label{gutaiteki} For $d\ge 2$, we have $$ \operatorname{Mld}^{tor}_d(A)=\bigcup_{2\le s\le d} \{\langle x,a\rangle; (x,a)\in V_s(A)\}. $$ \end{prop} \begin{proof} (1) We first show that the right hand side is included in the left hand side. 
Fix $(x,a)\in V_s(A)$ for some $2\le s\le d$. If $s=d$, let $N={\mathbb Z}^d+{\mathbb Z} x$ and let $\sigma$ be the standard positive cone in ${\mathbb R}^d$, spanned by the standard basis $e_1,\ldots,e_d$ of ${\mathbb Z}^d$. Let $0\in T_N\operatorname{emb}(\sigma)$ be the invariant closed point corresponding to $\sigma$. Then the affine toric log variety $$ (T_N\operatorname{emb}(\sigma),\sum_{i=1}^{d}(1-a_i)V(e_i)) $$ has minimal log discrepancy at $0$ equal to $\langle x,a\rangle$. Indeed, the log discrepancy function $\psi=\sum_{i=1}^d a_ie_i^*$ attains its minimum at $x$, and $\psi(x)=\langle x,a\rangle$. Therefore $\langle x,a\rangle\in \operatorname{Mld}^{tor}_d(A)$. Assume now that $2\le s\le d-1$. Let $e_1,\ldots,e_d$ be the standard basis of ${\mathbb Z}^d$, let $e_{d+1}=(d-s) e_1+e_2-\sum_{i=s+1}^d e_i$, let $v=\sum_{i=1}^s x_i e_i$ and let $N={\mathbb Z}^d+{\mathbb Z} v$. Let $\sigma$ be the cone in ${\mathbb R}^d$ generated by $e_1,\ldots,e_{d+1}$. Set $a_i=a_1$ for $s+1\le i\le d$ and $a_{d+1}=a_2$. Then $$ 0\in (T_N\operatorname{emb}(\sigma),\sum_{i=1}^{d+1}(1-a_i)V(e_i)) $$ is a $d$-dimensional germ of a toric log variety with minimal log discrepancy equal to $\langle x,a\rangle$. Indeed, note first that the log variety is well defined since $a_2=(d-s)a_1+a_2-\sum_{i=s+1}^d a_1$; the log discrepancy function is $\psi=\sum_{i=1}^d a_i e_i^*$. There exists $e=\sum_{i=1}^{d+1}y_i e_i\in N\cap \operatorname{relint}(\sigma)$ where the log discrepancy function $\psi$ attains its minimum. We may assume $y_i\in [0,1]$ for every $i$. If $y_{d+1}\notin {\mathbb Z}$, then $y_{s+1}=\cdots=y_d=y_{d+1}$, hence $e=\sum_{i=1}^s y_i e_i$. Therefore $\psi(e)\ge \psi(v)$. If $y_{d+1}\in {\mathbb Z}$, then $\sum_{i=1}^s y_i e_i \in N\cap \operatorname{relint}(\sigma)$, hence $\psi(e)\ge \psi(\sum_{i=1}^s y_i e_i)\ge \psi(v)$. We conclude that $\psi$ attains its minimum at $v$. Therefore $\langle x,a\rangle=\psi(v)\in \operatorname{Mld}^{tor}_d(A)$. (2) Let $(X,B)$ be a toric log variety with codimension one log discrepancies in $A$ and let $\eta\in X$ be a Grothendieck point of codimension $d$. We want to show that $a(\eta;X,B)$ belongs to the set on the right hand side. There exists a unique cone $\sigma$ in the fan defining $X$ such that $\eta\in \operatorname{orb}(\sigma)$. Let $c$ be the codimension of $\operatorname{orb}(\sigma)$ in $X$. Then $a(\eta;X,B)$ coincides with the minimal log discrepancy of the toric log variety $$ (T_{N\cap (\sigma-\sigma)}\operatorname{emb}(\sigma),\sum_{e\in \sigma(1)}\operatorname{mult}_{V(e)}(B)V(e))\times {\mathbb C}^{d-c} $$ at the invariant closed point $0$. Therefore we may assume that $X$ is affine and $\eta$ is a torus invariant closed point $0$. We have $X=T_N\operatorname{emb}(\sigma)$, with $\dim N=d$, $B=\sum_{i\in I} (1-a_i)V(e_i)$ with $a_i\in A$ for every $i$. The log discrepancy function $\psi\in \sigma^\vee$ of $(X,B)$ satisfies $\psi(e_i)=a_i$, and we have $$ \operatorname{mld}(0;X,B)=\min(\psi\vert_{N\cap \operatorname{relint}(\sigma)}). $$ There exists $e\in N\cap \operatorname{relint}(\sigma)$ such that $\operatorname{mld}(0;X,B)=\psi(e)$. It is easy to see that there exists a subset $\{1,\ldots, s\}\subseteq I$, with $2\le s\le d$, such that $e_1,\ldots,e_s$ are linearly independent and $e$ belongs to the relative interior of the cone spanned by $e_1,\ldots,e_s$. Let $e=\sum_{i=1}^s x_ie_i$, and denote $x=(x_1,\ldots,x_s)\in (0,1]^s, a=(a_1,\ldots,a_s)\in A^s$.
It is clear that $\operatorname{mld}(0;X,B)=\langle x,a \rangle$, and we claim that $(x,a)\in V_s(A)$. Indeed, it is clear that $x\in {\mathbb Q}^s$. Since $e_i$ is a primitive lattice point of $N$, it is also primitive in the sublattice $\sum_{i=1}^s {\mathbb Z} e_i+{\mathbb Z} e$, which is equivalent to $\operatorname{index}(x_i)\vert \operatorname{index}(x_1,\ldots,\hat{x_i},\ldots,x_s)$ for every $1\le i\le s$. Finally, let $m\in {\mathbb Z}$. We have $\sum_{i=1}^s(1+mx_i-\lceil mx_i\rceil)e_i\in N\cap \operatorname{relint}(\sigma)$, hence $\psi(\sum_{i=1}^s(1+mx_i-\lceil mx_i\rceil)e_i)\ge \psi(e)$. Equivalently, $\sum_{i=1}^s(1+(m-1)x_i-\lceil mx_i\rceil) a_i\ge 0$. Therefore $(x,a)\in V_s(A)$.
\end{proof}
\section{The set $\tilde{V}_d(A)$}
By Proposition~\ref{gutaiteki}, the limiting behaviour of toric minimal log discrepancies is controlled by the limiting behaviour of the sets $V_d(A)$. The rationality properties (i) and (ii) defining $V_d(A)$ do not behave well with respect to limits, and for this reason we enlarge $V_d(A)$ to a new set $\tilde{V}_d(A)$, defined only by property (iii), which turns out to have good inductive properties and limiting behaviour.
\begin{defn}
Let $A\subseteq [0,1]$ be a subset containing $1$. Define
$$
\tilde{V}_d(A)=\{(x,a)\in (0,1]^d\times A^d; \sum_{i=1}^d(1+(m-1)x_i-\lceil mx_i\rceil)a_i\ge 0, \forall m\in {\mathbb Z}\}.
$$
\end{defn}
Equivalently, $\tilde{V}_d(A)$ is the set of pairs $(x,a)\in (0,1]^d\times A^d$ such that the group ${\mathbb Z}^d+{\mathbb Z} x$ does not intersect the set $\{y\in (0,1]^d; \langle y-x,a\rangle<0\}$. As before, we denote $\langle x,a\rangle=\sum_{i=1}^d x_i a_i$.
\begin{lem}\label{onedim}
The following equality holds:
$$
\tilde{V}_1(A)=((0,1]\times \{0\})\cup(\{\frac{1}{n}; n\ge 1\} \times A),
$$
where the first term is missing if $0\notin A$. In particular,
$$
\{\langle x,a\rangle; (x,a)\in \tilde{V}_1(A)\}= \bigcup_{n=1}^\infty \frac{1}{n}\cdot A.
$$
\end{lem}
\begin{proof}
Let $x\in (0,1]$ be such that $1+(m-1)x-\lceil mx\rceil\ge 0$ for every integer $m$. Equivalently, we have
$$
\sup_{m\in {\mathbb Z}}(\lceil mx\rceil-mx) \le 1-x.
$$
Assume by contradiction that $x\notin {\mathbb Q}$. Then the set $\{\lceil mx\rceil-mx\}_{m\ge 1}$ is dense in $[0,1]$ (cf.~\cite{Cassels57}, Chapter IV), hence $\sup_{m\in {\mathbb Z}}(\lceil mx\rceil-mx)=1$. We obtain $1\le 1-x$, hence $x\le 0$, a contradiction. Therefore $x=\frac{p}{q}$, for integers $1\le p\le q$ with $\gcd(p,q)=1$. The above inequality becomes
$$
1-\frac{1}{q}=\max_{m\in {\mathbb Z}}(\lceil mx\rceil-mx) \le 1-\frac{p}{q},
$$
hence $p=1$. Therefore $x=\frac{1}{q}$.
\end{proof}
We will need the following result of Lawrence.
\begin{thm}[\cite{Lawrence91}]\label{Law}
Let $T={\mathbb R}^d/{\mathbb Z}^d$ be a real torus.
\begin{itemize}
\item[(i)] Let $U\subset T$ be an open subset. Then the set of closed subgroups of $T$ which do not intersect $U$ has only finitely many maximal elements with respect to inclusion.
\item[(ii)] The set of finite unions of closed subgroups of $T$ satisfies the descending chain condition.
\end{itemize}
\end{thm}
\begin{thm}\label{dioacc}
Assume that $A$ satisfies the ascending chain condition. Then the set $\{\langle x,a\rangle; (x,a)\in \tilde{V}_d(A)\}$ satisfies the ascending chain condition.
\end{thm}
\begin{proof}
Assume first that $d=1$. By Lemma~\ref{onedim},
$$
\{\langle x,a\rangle; (x,a)\in \tilde{V}_1(A)\}= \{\frac{1}{n};n\ge 1\}\cdot A.
$$
Both sets $\{\frac{1}{n};n\ge 1\}$ and $A$ consist of nonnegative numbers and satisfy the ascending chain condition, hence their product satisfies the ascending chain condition.
Let now $d\ge 2$ and assume by induction the result for smaller values of $d$. Assume by contradiction that $\{(x^n,a^n)\}_{n\ge 1}$ is a sequence in $\tilde{V}_d(A)$ such that
$$
\langle x^n,a^n\rangle<\langle x^{n+1},a^{n+1}\rangle \mbox{ for } n\ge 1.
$$
Since $A$ satisfies the ascending chain condition, we may assume after passing to a subsequence that
$$
a_i^n\ge a_i^{n+1}, \forall n\ge 1, \forall 1\le i\le d.
$$
Assume first that $x^n\notin (0,1)^d$ for infinitely many $n$'s. After passing to a subsequence, we may assume $x^n_1=1$ for every $n$. Write $x^n=(1,\bar{x}^n)$ and $a^n=(a_1^n,\bar{a}^n)$. Then $\langle \bar{x}^n,\bar{a}^n\rangle<\langle \bar{x}^{n+1}, \bar{a}^{n+1}\rangle$ for every $n\ge 1$, which contradicts the ascending chain condition for the set $\{\langle \bar{x},\bar{a}\rangle; (\bar{x},\bar{a})\in \tilde{V}_{d-1}(A)\}$.
Assume now that $x^n\in (0,1)^d$ for every $n$. We set
$$
U^n=\{x\in (0,1)^d;\langle x-x^n,a^n\rangle<0\}
$$
and regard $U^n$ as an open subset of the torus $T^d={\mathbb R}^d/{\mathbb Z}^d$. Let $X^n$ be the union of the subgroups of $T^d$ which do not intersect $U^n$. By Theorem~\ref{Law}.(i), $X^n$ is a finite union of closed subgroups of $T^d$. It is easy to see that $U^n\subseteq U^{n+1}$, hence $X^n\supseteq X^{n+1}$ for $n\ge 1$. Since $(x^n,a^n)\in \tilde{V}_d(A)$, we have $U^n\cap ({\mathbb Z}^d+{\mathbb Z} x^n)=\emptyset$. Therefore $x^n\in X^n$ for every $n$. We have
$$
\langle x^n,a^{n+1}\rangle\le \langle x^n,a^n\rangle<\langle x^{n+1},a^{n+1}\rangle.
$$
Then $x^n\in U^{n+1}$, hence $x^n\notin X^{n+1}$. Therefore $X^n\supsetneqq X^{n+1}$ for every $n\ge 1$, contradicting Theorem~\ref{Law}.(ii).
\end{proof}
\begin{lem}\label{closure}
The following properties hold:
\begin{itemize}
\item[(1)] If $A$ is a closed set, then $\tilde{V}_d(A)$ is a closed subset of $(0,1]^d\times A^d$.
\item[(2)] Identify $(0,1]^s$ with the face $x_{s+1}=\cdots=x_d=1$ of $(0,1]^d$. Then
$$
\tilde{V}_d(A)\cap (0,1]^s=\tilde{V}_s(A).
$$
\item[(3)] Identify $[0,1]^s$ with the face $x_{s+1}=\cdots=x_d=0$ of $[0,1]^d$ and assume that $A$ is a closed set. Then
$$
\overline{\tilde{V}_d(A)}\cap (0,1]^s=\tilde{V}_s(A).
$$
\end{itemize}
\end{lem}
\begin{proof}
(1) Let $(x,a)\in (0,1]^d\times A^d$ be such that there exists a sequence $\{(x^n,a^n)\}_{n\ge 1}$ in $\tilde{V}_d(A)$ with $x=\lim_{n\to \infty} x^n$ and $a=\lim_{n\to \infty} a^n$. Fix $m\in {\mathbb Z}$. By assumption, we have
$$
\sum_{i=1}^d (1+(m-1)x_i^n-\lceil mx_i^n\rceil)a^n_i\ge 0, \forall n\ge 1.
$$
There exists a positive integer $n(m)$ such that $ \lceil mx_i^n \rceil\ge \lceil mx_i\rceil $ for every $1\le i\le d$ and every $n\ge n(m)$. Therefore
$$
\sum_{i=1}^d (1+(m-1)x_i^n-\lceil mx_i\rceil)a^n_i\ge 0, \forall n\ge n(m).
$$
Letting $n$ converge to infinity, we obtain
$$
\sum_{i=1}^d (1+(m-1)x_i-\lceil mx_i\rceil)a_i\ge 0.
$$
Since $m$ was arbitrary, we conclude that $(x,a)\in \tilde{V}_d(A)$.
(2) This is clear.
(3) Assume that we have a sequence $\{(x^n,a^n)\}_{n\ge 1}\subset \tilde{V}_d(A)$ such that $\lim_{n\to \infty} x^n=(x,0,\ldots,0)\in (0,1]^s$ and $\lim_{n\to \infty}a^n=(a,a_{s+1},\ldots,a_d)$. Let $m$ be a positive integer.
Note that for $s+1\le i\le d$ we have $mx^n_i\in (0,1]$ for $n\ge n(m)$, hence
$$
\lim_{n\to \infty} (1+(m-1)x_i^n-\lceil mx_i^n\rceil)=0 \mbox{ for } s+1\le i\le d.
$$
Therefore $\sum_{i=1}^s (1+(m-1)x_i-\lceil mx_i\rceil)a_i\ge 0$ for every $m\ge 1$. Since ${\mathbb Z}^s+{\mathbb Z} x$ is included in the closure of ${\mathbb Z}^d+{\mathbb Z}_{\ge 0} x$, we obtain $\sum_{i=1}^s (1+(m-1)x_i-\lceil mx_i\rceil)a_i\ge 0$ for $m\le -1$ as well. Therefore $(x,a)\in \tilde{V}_s(A)$, proving the direct inclusion. For the converse, just note that $(x,a)\in \tilde{V}_s(A)$ is the limit of the sequence $ ((x,\frac{1}{n},\ldots,\frac{1}{n}),(a,1,\ldots,1)) \in \tilde{V}_d(A). $
\end{proof}
\begin{defn}
For $x\in {\mathbb R}$ and $m\in {\mathbb Z}$, define
$$
x^{(m)}=1+mx-\lceil mx\rceil.
$$
Note that this operation induces a self-map of the half-open interval $(0,1]$. For $x\in {\mathbb R}^d$ and $m\in {\mathbb Z}$, define $x^{(m)}\in {\mathbb R}^d$ componentwise.
\end{defn}
Since $(0,1]^d\cap ({\mathbb Z}^d+{\mathbb Z} x)=\{x^{(m)}; m\in {\mathbb Z}\}$, we have the equivalent description
$$
\tilde{V}_d(A)=\{(x,a)\in (0,1]^d\times A^d; \langle x^{(m)}-x,a\rangle\ge 0,\forall m\in {\mathbb Z}\}.
$$
\begin{lem}\label{limit}
Let $x\in (0,1]^d$ and let $a\in A^d$ be such that $a_i>0$ for $1\le i\le d$. Then there exists a relatively open neighborhood $x\in U\subseteq (0,1]^d$ such that if $y\in U$ and $\langle y^{(m)}-x,a\rangle\ge 0$ for every $m\in {\mathbb Z}$, then $\langle y-x,a\rangle=0$.
\end{lem}
\begin{proof}
(1) Assume first that $x\in (0,1)^d$. By Theorem~\ref{Law}.(i), the set of closed subgroups of ${\mathbb R}^d$ which contain ${\mathbb Z}^d$ and do not intersect the nonempty open set $ \{y\in (0,1)^d; \langle y-x,a\rangle <0\} $ has finitely many maximal elements with respect to inclusion, say $H_1,\ldots, H_l$. If $x\in H_1$, then, in a suitable open neighborhood $x\in U_1\subset (0,1)^d$, the subgroup $H_1$ coincides with a rational affine subspace of ${\mathbb R}^d$. Let $v\in H_1-x$. Since $x\in (0,1)^d$, there exists $\epsilon>0$ such that $x+tv\in H_1\cap (0,1)^d$ for $\vert t\vert<\epsilon$. In particular, $\langle x+tv-x,a\rangle \ge 0$, that is $t\langle v,a\rangle \ge 0$ for $\vert t\vert<\epsilon$. We infer that $\langle v,a\rangle=0$. Therefore $H_1\cap U_1$ is contained in $\{y\in (0,1)^d; \langle y-x,a\rangle=0\}$. If $x\notin H_1$, then $U_1=(0,1)^d\setminus H_1$ is an open neighborhood of $x$. Repeating this procedure, we obtain a neighborhood $U_i$ of $x$, for each of the closed subgroups $H_i$. The intersection $U=U_1\cap \cdots \cap U_l$ is the desired neighborhood.
(2) We may assume after a reordering that $x_i=1$ for $1\le i\le s$ and $x_i\in (0,1)$ for $s< i\le d$. If $s=d$, we may take $U=(0,1]^d$. Assume now that $s<d$. By~\cite{Cassels57}, Chapter IV, there exists a {\em negative} integer $m_0$ such that
$$
\langle x^{(m_0)}-x,a\rangle<\min_{i=1}^s a_i.
$$
Let $y\in (0,1]^s\times \prod_{i=s+1}^d (\frac{\lceil m_0x_i\rceil}{m_0}, \frac{\lceil m_0x_i\rceil-1}{m_0})$ be such that $\langle y^{(m)}-x,a\rangle\ge 0$ for every $m\in {\mathbb Z}$. We claim that $y_1=\cdots=y_s=1$. Indeed, assume by contradiction that $y_j<1$ for some $1\le j\le s$. A straightforward computation gives
$$
\langle y^{(m_0)}-x,a\rangle-m_0\langle y-x,a\rangle= \langle x^{(m_0)}-x,a\rangle+\sum_{i=1}^d (\lceil m_0x_i\rceil-\lceil m_0y_i\rceil)a_i.
$$
By the choice of $y$, we obtain
$$
\sum_{i=1}^d (\lceil m_0x_i\rceil-\lceil m_0y_i\rceil)a_i =\sum_{i=1}^s (m_0-\lceil m_0y_i\rceil)a_i\le -a_j,
$$
hence $0\le \langle x^{(m_0)}-x,a\rangle-a_j$. This contradicts our choice of $m_0$. Let $\bar{x}=(x_{s+1},\ldots,x_d), \bar{y}=(y_{s+1},\ldots, y_d), \bar{a}=(a_{s+1},\ldots,a_d)$. We have $(\bar{x},\bar{a}) \in \tilde{V}_{d-s}(A)$ and $\langle \bar{y}^{(m)}-\bar{x},\bar{a}\rangle\ge 0$ for every $m\in {\mathbb Z}$. From Step 1, there exists an open neighborhood $\bar{x}\in \bar{U}\subset (0,1)^{d-s}$ such that if $\bar{y}\in \bar{U}$ then $\langle \bar{y}-\bar{x},\bar{a}\rangle=0$. Then
$$
U=(0,1]^s\times (\bar{U}\cap \prod_{i=s+1}^d (\frac{\lceil m_0 x_i\rceil}{m_0}, \frac{\lceil m_0 x_i\rceil-1}{m_0}))
$$
satisfies the required properties.
\end{proof}
\begin{lem}\label{rat}
The following equality holds for $d\ge 1$:
$$
\{\langle x,a\rangle;(x,a)\in \tilde{V}_d(A),x\in {\mathbb Q}^d\}= \{\langle x,a\rangle; (x,a)\in \tilde{V}_d(A)\}.
$$
\end{lem}
\begin{proof}
Let $(x,a)\in \tilde{V}_d(A)$. We may assume, after a reordering of the coordinates, that $x_1,\ldots,x_s<1$ and $x_{s+1}=\cdots=x_d=1$, where $0\le s\le d$. If $s=0$, then $x\in {\mathbb Q}^d$ and we are done. Assume $s\ge 1$ and set $\bar{x}=(x_1,\ldots,x_s)$ and $\bar{a}=(a_1,\ldots,a_s)$. Then $(\bar{x},\bar{a})\in \tilde{V}_s(A)$. Since $\bar{x}\in (0,1)^s$, there exist, by Step 1 of the proof of Lemma~\ref{limit}, a closed subgroup ${\mathbb Z}^s\subseteq \bar{H} \subseteq {\mathbb R}^s$ and an open neighborhood $U_{\bar{x}}$ of $\bar{x}$ such that $\bar{x}\in \bar{H} \cap U_{\bar{x}}\subset \{\bar{z}; \langle \bar{z}-\bar{x}, \bar{a}\rangle=0\}$. Since $\bar{H}$ is rational, there exists $\bar{z} \in {\mathbb Q}^s\cap \bar{H}\cap U_{\bar{x}}$. Set $x'=(\bar{z},1, \ldots,1)$. Then $(x',a)\in \tilde{V}_d(A)$, $\langle x,a\rangle=\langle x',a\rangle$ and $x'\in {\mathbb Q}^d$.
\end{proof}
\begin{prop}\label{accum}
Assume that $A$ has no positive accumulation points. Then the set of accumulation points of $\{\langle x,a\rangle; (x,a)\in \tilde{V}_d(A)\}$ is
$$
\{0\}\cup\bigcup_{1\le d'\le d-1} \{\langle x,a\rangle; (x,a)\in \tilde{V}_{d'}(A)\}.
$$
\end{prop}
\begin{proof}
Let $r>0$ be an accumulation point, that is, there exists a sequence $(x^n,a^n)\in \tilde{V}_d(A)$ with $r=\lim_{n\to \infty}\langle x^n,a^n\rangle$ and $r\ne \langle x^n,a^n\rangle$ for every $n\ge 1$. By compactness, we may assume after passing to a subsequence that $\lim_{n\to \infty} x^n=x\in [0,1]^d$ and $\lim_{n\to \infty} a^n=a\in [0,1]^d$ exist. We have $r=\langle x,a\rangle$. We claim that $a_ix_i=0$ for some $i$. Indeed, assume by contradiction that $a_ix_i>0$ for every $1\le i\le d$. Since $A$ has no nonzero accumulation points, we obtain $a^n=a$ for all sufficiently large $n$, and we may assume this holds for every $n\ge 1$. Let $U_x\subset (0,1]^d$ be the relatively open neighborhood of $x$ associated to $(x,a)$ in Lemma~\ref{limit}. Then $x^n\in U_x$ for $n$ large enough, say $n\ge n_0$. If $\langle x^n-x,a\rangle\ge 0$, then $(x^n,a)\in\tilde{V}_d(A)$ implies that $\langle z-x,a\rangle\ge 0$ for every $z\in ({\mathbb Z}^d+{\mathbb Z} x^n)\cap (0,1]^d$, hence, by Lemma~\ref{limit}, $\langle x^n-x,a\rangle=0$. This means $\langle x^n,a\rangle=r$, a contradiction. Therefore $\langle x^n,a\rangle<r$ for every $n\ge n_0$. Since $A$ has no positive accumulation points, it satisfies the ascending chain condition.
Therefore, by Theorem~\ref{dioacc}, the set of values $\langle x,a\rangle$ with $(x,a)\in \tilde{V}_d(A)$ satisfies the ascending chain condition; since the values $\langle x^n,a\rangle<r$ converge to $r$, this is a contradiction.
We may assume $a_ix_i>0$ for $1\le i\le {d'}$ and $a_i x_i=0$ for $d'+1\le i\le d$. We have $d'\ge 1$, since $\langle x,a\rangle>0$. Denote $\bar{x}=(x_1,\ldots,x_{d'})$ and $\bar{a}=(a_1,\ldots,a_{d'})$. We have $r=\langle \bar{x},\bar{a}\rangle$ and $(\bar{x},\bar{a})\in \tilde{V}_{d'}(A)$ by Lemma~\ref{closure}.
For the converse, note that
$$
((\frac{1}{k},\ldots,\frac{1}{k}), (1,\ldots,1))\in \tilde{V}_d(A)
$$
and $\langle (\frac{1}{k},\ldots,\frac{1}{k}), (1,\ldots,1) \rangle=\frac{d}{k}$ accumulates to $0$. Let now $(x',a')\in \tilde{V}_{d'}(A)$ for some $1\le d'\le d-1$. Define $x_k=(x',\frac{1}{k},\ldots,\frac{1}{k})$ and $a=(a',1,\ldots,1)$. Then $(x_k,a)\in \tilde{V}_d(A)$ and $\langle x_k,a\rangle=\langle x',a'\rangle+ \frac{d-d'}{k}$ accumulates to $\langle x',a'\rangle$.
\end{proof}
\begin{rem}
Proposition~\ref{accum} is false if $A$ has a positive accumulation point. For example, let $a>0$ be an accumulation point of a sequence of elements $a_k\in A$. Then $((1,\ldots,1),(a_k,1,\ldots,1))\in V_d(A)$ and $\langle (1,\ldots,1),(a_k,1,\ldots,1)\rangle$ accumulates to $d-1+a>d-1$, which clearly does not correspond to any element of $\tilde{V}_{d'}(A)$, for $d'\le d-1$.
\end{rem}
There are many rational points in the set $\tilde{V}_d\setminus V_d$. For example, $(\frac{1}{2},1)$ or $(\frac{l-1}{2l},\frac{1}{l})$ for $l\ge 2$. However, the following property holds.
\begin{lem}\label{bartov}
The following inclusion holds:
$$
\{\langle x,a\rangle;(x,a)\in \tilde{V}_d(A)\} \subseteq \{\langle x,a\rangle; (x,a)\in V_d(\{\frac{1}{n};n\ge 1\} \cdot A)\}.
$$
\end{lem}
\begin{proof}
Let $r=\langle x,a\rangle$ for some $(x,a)\in \tilde{V}_d(A)$. By Lemma~\ref{rat}, we may assume that $x\in {\mathbb Q}^d$. We may assume $a_i>0$ for every $i$. Let $e_1,\ldots,e_d$ be the standard basis of ${\mathbb R}^d$, spanning the standard cone $\sigma$, let $e=\sum_{i=1}^dx_ie_i$ and let $N=\sum_{i=1}^d {\mathbb Z} e_i+{\mathbb Z} e$. If we set $\psi=\sum_{i=1}^d a_ie_i^*$, then we have
$$
\min(\psi\vert_{N\cap \operatorname{relint}(\sigma)})=\psi(e)=r.
$$
There exist positive integers $n_i\ge 1$ such that $e'_i=\frac{1}{n_i}e_i$ are primitive elements of the lattice $N$. In the new coordinates, we have $\psi=\sum_{i=1}^d \frac{a_i}{n_i}{e'_i}^*$ and $e=\sum_{i=1}^d n_ix_ie'_i$. Since $\psi$ attains its minimum at $e$ and all $a_i$'s are positive, we infer that $n_ix_i<1$ for every $i$. Set $a'_i=\frac{a_i}{n_i}$ and $x'_i=n_ix_i$. Then $(x',a')\in V_d(\{\frac{1}{n};n\ge 1\}\cdot A)$ and $\langle x',a'\rangle=r$.
\end{proof}
\begin{cor}
Assume that $A=\{\frac{1}{n};n\ge 1\} \cdot A$. Then
$$
\{\langle x,a\rangle;(x,a)\in V_d(A)\} = \{\langle x,a\rangle; (x,a)\in \tilde{V}_d(A)\}.
$$
\end{cor}
\section{Accumulation points of $\operatorname{Mld}^{tor}_d(A)$}
\begin{thm}\label{mt}
The following properties hold:
\begin{itemize}
\item[(1)] If $A$ satisfies the ascending chain condition, then so does $\operatorname{Mld}^{tor}_d(A)$.
\item[(2)] Assume that $A$ has no positive accumulation points. Then the set of accumulation points of $\operatorname{Mld}^{tor}_d(A)$ is included in
$$
\{0\}\cup \bigcup_{1\le d'\le d-1}\operatorname{Mld}^{tor}_{d'}(\{\frac{1}{n};n\ge 1\}\cdot A).
$$
The inclusion is an equality if $\{\frac{1}{n};n\ge 1\}\cdot A\subset A$.
\item[(3)] Assume that $A$ has no positive accumulation points and $\{\frac{1}{n};n\ge 1\}\cdot A\subset A$. Then $\operatorname{Mld}^{tor}_d(A)$ is a closed set if and only if $0\in A$.
\end{itemize}
\end{thm}
\begin{proof}
(1) By Proposition~\ref{gutaiteki}, the inclusion $V_d(A)\subseteq \tilde{V}_d(A)$, and Theorem~\ref{dioacc}, the set $\operatorname{Mld}^{tor}_d(A)$ is a subset of a finite union of sets satisfying the ascending chain condition. Therefore $\operatorname{Mld}^{tor}_d(A)$ satisfies the ascending chain condition.
(2) Assume that $A$ has no positive accumulation points. By Proposition~\ref{accum} and Lemma~\ref{bartov}, the accumulation points of $\operatorname{Mld}^{tor}_d(A)$ belong to the set
$$
\{0\}\cup \bigcup_{1\le d'\le d-1}\operatorname{Mld}^{tor}_{d'}(\{\frac{1}{n};n\ge 1\}\cdot A).
$$
Assuming moreover that $\{\frac{1}{n};n\ge 1\}A\subset A$, we will show that all points of the above set are accumulation points of $\operatorname{Mld}^{tor}_d(A)$. If $(x,a)\in V_{d'}(A)$, then
$$
((x,1,\ldots,1),(a,\frac{1}{n},\ldots,\frac{1}{n}))\in V_d(A),
$$
and $\langle (x,1,\ldots,1), (a,\frac{1}{n},\ldots,\frac{1}{n})\rangle=\langle x,a\rangle+ \frac{d-d'}{n}$ accumulates to $\langle x,a\rangle$. Similarly,
$$
((1,1,\ldots,1),(\frac{1}{n},\frac{1}{n},\ldots,\frac{1}{n}))\in V_d(A)
$$
and $\langle (1,1,\ldots,1), (\frac{1}{n},\frac{1}{n},\ldots,\frac{1}{n})\rangle=\frac{d}{n}$ accumulates to $0$. This proves the claim.
(3) Assume that $\operatorname{Mld}^{tor}_d(A)$ is a closed set. Since
$$
((\frac{1}{k},\ldots,\frac{1}{k}),(a,\ldots,a))\in V_d(A)
$$
for $a\in A$ and every $k\ge 1$, we infer that $0=\lim_{k\to \infty}\frac{da}{k}\in \operatorname{Mld}^{tor}_d(A)$, which implies $0\in A$. Conversely, assume $0\in A$. If $(x,a)\in V_{d'}(A)$ then
$$
((x,1,\ldots,1),(a,0,\ldots,0))\in V_d(A)
$$
and $\langle (x,1,\ldots,1),(a,0,\ldots,0)\rangle = \langle x,a\rangle$. We infer from (2) that every accumulation point of $\operatorname{Mld}^{tor}_d(A)$ belongs to $\operatorname{Mld}^{tor}_d(A)$, that is, $\operatorname{Mld}^{tor}_d(A)$ is a closed set.
\end{proof}
\begin{lem}
Assume that $A$ has no positive accumulation points. Then the following properties hold:
\begin{itemize}
\item[(1)] The set of accumulation points of $\operatorname{Mld}^{tor}_2(A)$ is $ \{0\}\cup \bigcup_{k\ge 1}\frac{1}{k}A. $
\item[(2)] The set $\operatorname{Mld}^{tor}_2(A)$ is closed if and only if $0\in A$.
\end{itemize}
\end{lem}
\begin{proof}
(1) From Theorem~\ref{mt}, all accumulation points are of this form. Conversely, fix $a\in A$ and $k\in {\mathbb Z}_{\ge 1}$. Then $ ((\frac{1}{kn+1},\frac{n}{nk+1}),(a,a))\in V_2(A) $ is a sequence converging to $((0,\frac{1}{k}),(a,a))$, hence $\frac{a}{k}$ is an accumulation point of $\operatorname{Mld}^{tor}_2(A)$. Since $k$ is arbitrary, we infer that $0$ is an accumulation point as well.
(2) Assume that $\operatorname{Mld}^{tor}_2(A)$ is a closed set. Then $0\in \operatorname{Mld}^{tor}_2(A)$, which implies $0\in A$. Assume now that $0\in A$. Then for $a\in A$ and $k\in {\mathbb Z}_{\ge 1}$ we have
$$
\frac{a}{k}=\langle (\frac{1}{k},\frac{1}{k}),(0,a)\rangle \in \operatorname{Mld}^{tor}_2(A).
$$
From (1), these are all possible accumulation points, hence $\operatorname{Mld}^{tor}_2(A)$ is a closed set.
\end{proof}
\end{document}
\begin{document}
\title{Global stabilization of a Korteweg-de Vries equation with saturating distributed control\thanks{This work has been partially supported by Fondecyt 1140741, MathAmsud COSIP, and Basal Project FB0008 AC3E.}}
\slugger{sicon}{xxxx}{xx}{x}{x--x}
\renewcommand{\thefootnote}{\fnsymbol{footnote}}
\footnotetext[2]{ Gipsa-lab, Department of Automatic Control, Grenoble Campus, 11 rue des Math\'ematiques, BP 46, 38402 Saint Martin d'H\`eres Cedex, France. E-mail: {\tt [email protected], [email protected]} }
\footnotetext[3]{ Departamento de Matem\'atica, Universidad T\'ecnica Federico Santa Mar\'ia, Avda. Espa\~na 1680, Valpara\'iso, Chile. E-mail: {\tt [email protected]} }
\footnotetext[4]{ Universit\'e Lyon 1 CNRS UMR 5007 LAGEP, France and Fachbereich C - Mathematik und Naturwissenschaften, Bergische Universit\"at Wuppertal, Gau\ss stra\ss e 20, 42097 Wuppertal, Germany. E-mail: {\tt [email protected]} }
\renewcommand{\thefootnote}{\arabic{footnote}}
\begin{abstract}
This article deals with the design of saturated controls in the context of partial differential equations. It focuses on a Korteweg-de Vries equation, which is a nonlinear mathematical model of waves on shallow water surfaces. Two different types of saturated controls are considered. The well-posedness is proven by applying a Banach fixed-point theorem, using some estimates of this equation and some properties of the saturation function. The proof of the asymptotic stability of the closed-loop system is separated into two cases: i) when the control acts on the whole domain, a Lyapunov function together with a sector condition describing the saturating input is used to conclude on the stability; ii) when the control is localized, we argue by contradiction. Some numerical simulations illustrate the stability of the closed-loop nonlinear partial differential equation.
\end{abstract}
\begin{keywords} Korteweg-de Vries equation, stabilization, distributed control, saturating control, nonlinear system \end{keywords}
\begin{AMS} 93C20, 93D15, 35Q53 \end{AMS}
\pagestyle{myheadings} \thispagestyle{plain}
\markboth{S. MARX, E. CERPA, C. PRIEUR, V. ANDRIEU}{STABILIZATION OF A KORTEWEG-DE VRIES EQUATION WITH SATURATING CONTROL}
\section{Introduction}
In recent decades, a great effort has been made to take input saturations into account in control designs (see e.g.\ \cite{tarbouriech2011book_saturating}, \cite{zacc2003antiwindupLMI} or, more recently, \cite{laporte2015bounded}). In most applications, actuators are limited due to some physical constraints and the control input has to be bounded. Neglecting the amplitude limitation of the actuator can be a source of undesirable and catastrophic behaviors for the closed-loop system. The standard method to analyze the stability with such nonlinear controls follows a two-step design. First, the design is carried out without taking the saturation into account. In a second step, a nonlinear analysis of the closed-loop system is made when adding the saturation. In this way, we often get local stabilization results. Tackling this particular nonlinearity in the case of finite-dimensional systems is already a difficult problem. However, nowadays, numerous techniques are available (see e.g.\ \cite{tarbouriech2011book_saturating,teel1992globalsaturation,sussmann1991saturation}) and such systems can be analyzed with an appropriate Lyapunov function and a sector condition of the saturation map, as introduced in \cite{tarbouriech2011book_saturating}.
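To fix ideas, let us record the scalar version of such a sector condition; it is stated here only as an elementary illustration (the notation $\kappa(r)$ is introduced here for this purpose only). For the standard saturation function $\texttt{sat}$ with level $u_0>0$ recalled below, and for every $r>0$,
\begin{equation*}
\big(\texttt{sat}(s)-\kappa(r)\,s\big)\,s\ge 0,\qquad \forall\, |s|\le r,\qquad \text{with } \kappa(r)=\min\Big\{\frac{u_0}{r},1\Big\},
\end{equation*}
that is, on $[-r,r]$ the graph of $\texttt{sat}$ lies in the sector delimited by the lines of slopes $\kappa(r)$ and $1$. Lemma \ref{sat-l2-local} below provides the analogue of this inequality, with the gain $a$ included, for the two saturation operators used in this paper.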
In the literature, there are few papers studying this topic in the infinite-dimensional case. Among them, we can cite \cite{lasiecka2002saturation}, \cite{prieur2016wavecone}, where a wave equation equipped with a saturated distributed actuator is studied, and \cite{daafouz2014nonlinear}, where a coupled PDE/ODE system modeling a switched power converter with a transmission line is considered. Due to some restrictions on the system, a saturated feedback has to be designed in the latter paper. There also exist some papers using nonlinear semigroup theory and focusing on abstract systems (\cite{Logemann98time-varyingand},\cite{seidman2001note},\cite{slemrod1989mcss}). Let us note that in \cite{slemrod1989mcss}, \cite{seidman2001note} and \cite{Logemann98time-varyingand}, the study of a priori bounded controllers is tackled using abstract nonlinear theory. To be more specific, for bounded (\cite{slemrod1989mcss},\cite{seidman2001note}) and unbounded (\cite{seidman2001note}) control operators, some conditions are derived to deduce, from the asymptotic stability of an infinite-dimensional linear system in abstract form, the asymptotic stability when closing the loop with a saturating controller. These articles use nonlinear semigroup theory (see e.g.\ \cite{miyadera1992nl_sg} or \cite{brezis2010functional}).

The Korteweg-de Vries equation (KdV for short)
\begin{equation}
y_t+y_{x}+y_{xxx}+yy_x=0,
\end{equation}
is a mathematical model of waves on shallow water surfaces. Its controllability and stabilizability properties have been deeply studied with no constraints on the control, as reviewed in \cite{cerpa2013control, bible_coron, rosier-zhang}. In this article, we focus on the following controlled KdV equation
\begin{equation}
\label{nlkdv}
\left\{
\begin{array}{ll}
y_t+y_x+y_{xxx}+yy_x+f=0,& (t,x)\in [0,+\infty)\times [0,L],\\
y(t,0)=y(t,L)=y_x(t,L)=0,& t\in[0,+\infty),\\
y(0,x)=y_0(x),& x\in [0,L],
\end{array}
\right.
\end{equation}
where $y$ stands for the state and $f$ for the control. As studied in \cite{rosier1997kdv}, if $f=0$ and
\begin{equation}
\label{critical-length}
L\in\left\{ 2\pi\sqrt{\frac{k^2+kl+l^2}{3}}\,\Big\slash \,k,l\in\mathbb{N}^*\right\},
\end{equation}
then there exist solutions of the linearized version of \eqref{nlkdv}, written as follows,
\begin{equation}
\label{lkdv}
\left\{
\begin{array}{l}
y_t+y_x+y_{xxx}=0,\\
y(t,0)=y(t,L)=y_x(t,L)=0,\\
y(0,x)=y_0(x),
\end{array}
\right.
\end{equation}
for which the $L^2(0,L)$-energy does not decay to zero. For instance, if $L=2\pi$ and $y_0=1-\cos(x)$ for all $x\in [0,L]$, then $y(t,x)=1-\cos(x)$ is a stationary solution of \eqref{lkdv} conserving the energy for any time $t$. Note however that, if $L=2\pi$ and $f=0$, the origin of \eqref{nlkdv} is locally asymptotically stable, as stated in \cite{chu2013asymptotic}. It is worth mentioning that there is no hope of obtaining global stability, as established in \cite{doronin-natali}, where an equilibrium with arbitrarily large amplitude is built.

In the literature there are some methods stabilizing the KdV equation \eqref{nlkdv} with boundary \cite{cerpa2009rapid, cerpa_coron_backstepping, marx-cerpa} or distributed controls \cite{perla-vasconcellos-zuazua, pazoto2005localizeddamping}. Here we focus on the distributed control case.
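As a quick numerical illustration of the critical set \eqref{critical-length} (this snippet is purely illustrative and independent of the analysis below; the enumeration bounds are arbitrary), the smallest critical lengths can be listed as follows; the case $k=l=1$ gives the value $L=2\pi$ used in the example above.
\begin{verbatim}
import math

# List the smallest critical lengths L = 2*pi*sqrt((k^2 + k*l + l^2)/3)
# for small positive integers k, l (illustration only).
lengths = {2 * math.pi * math.sqrt((k**2 + k*l + l**2) / 3)
           for k in range(1, 6) for l in range(1, 6)}

for L in sorted(lengths)[:5]:
    print(round(L, 2))
# approximately: 6.28, 9.6, 12.57, 13.08, 15.81
\end{verbatim}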
In fact, as proven in \cite{perla-vasconcellos-zuazua, pazoto2005localizeddamping}, the feedback control $f(t,x)=a(x)y(t,x)$, where $a$ is a nonnegative function which is positive on a nonempty open subset of $(0,L)$, makes the origin an exponentially stable solution. In \cite{mcpa2015kdv_saturating}, where a linear Korteweg-de Vries equation with a saturated distributed control is considered, we used nonlinear semigroup theory. In the case of the present paper, since the term $yy_x$ is not globally Lipschitz, such a theory is harder to use. Thus, we aim here at studying a particular nonlinear partial differential equation without seeing it as an abstract control system and without using nonlinear semigroup theory.

In this paper, we introduce two different types of saturation borrowed from \cite{prieur2016wavecone,mcpa2015kdv_saturating} and \cite{slemrod1989mcss}. In finite dimension, a way to describe this constraint is to use the classical saturation function (see \cite{tarbouriech2011book_saturating} for a good introduction to saturated control problems) defined by
\begin{equation}
\texttt{sat}(s)=\left\{
\begin{array}{rl}
-u_0 &\text{ if } s\leq -u_0,\\
s &\text{ if } -u_0\leq s\leq u_{0},\\
u_0 &\text{ if } s\geq u_{0},
\end{array}
\right.
\end{equation}
for some $u_0>0$. As in \cite{prieur2016wavecone} and \cite{mcpa2015kdv_saturating}, we use its extension to infinite dimension for the following feedback law
\begin{equation}
f(t,x)=\mathfrak{sat}_{\texttt{loc}}(ay)(t,x),
\end{equation}
where, for every sufficiently smooth function $s$ and for all $x\in [0,L]$, $\mathfrak{sat}_{\texttt{loc}}$ is defined as follows
\begin{equation}
\label{sat-linf}
\mathfrak{sat}_{\texttt{loc}}(s)(x)=\texttt{sat}(s(x)).
\end{equation}
Such a saturation is called localized since its value at $x$ depends only on the value of $s$ at $x$. In this work, we also use a saturation operator in $L^2(0,L)$, denoted by $\mathfrak{sat}_2$ and defined by
\begin{equation}
\label{function-saturation}
\mathfrak{sat}_2(s)(x)=\left\{
\begin{array}{cl}
s(x) &\text{ if }\Vert s\Vert_{L^2(0,L)}\leq u_{0},\\
\frac{s(x)u_0}{\Vert s\Vert_{L^2(0,L)}} &\text{ if } \Vert s\Vert_{L^2(0,L)}\geq u_{0}.
\end{array}
\right.
\end{equation}
Note that this definition is borrowed from \cite{slemrod1989mcss} (see also \cite{seidman2001note} or \cite{lasiecka2002saturation}), where the saturation is obtained from the norm of the Hilbert space of the control operator. This saturation seems more natural when studying the stability with respect to an energy, but it is less relevant than $\mathfrak{sat}_{\texttt{loc}}$ for applications. Figure \ref{diff-sat} illustrates how different these saturations are.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.6]{diff-sat.eps}
\caption{$x\in [0,\pi]$. Red: $\mathfrak{sat}_2(\cos)(x)$ and $u_0=0.5$, Blue: $\mathfrak{sat}_{\texttt{loc}}(\cos)(x)$ and $u_0=0.5$, Dotted lines: $\cos(x)$.}
\label{diff-sat}
\end{figure}

Our first main result states that, using either the localized saturation \eqref{sat-linf} or the $L^2$ saturation map \eqref{function-saturation}, the KdV equation \eqref{nlkdv} in closed loop with a saturated control is well-posed (see Theorem \ref{nl-theorem-wp} below for a precise statement).
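For the reader who wishes to reproduce Figure \ref{diff-sat}, a minimal numerical sketch of the two saturation maps is given below. It is only an illustration: the $L^2(0,L)$ norm is approximated by a Riemann sum on a uniform grid, and the grid size and the profile $\cos$ are arbitrary choices.
\begin{verbatim}
import numpy as np

def sat_loc(s, u0):
    # Localized saturation: clip each value of s(x) to [-u0, u0].
    return np.clip(s, -u0, u0)

def sat_2(s, u0, dx):
    # L^2 saturation: rescale s when its (discretized) L^2 norm exceeds u0.
    norm = np.sqrt(np.sum(s**2) * dx)  # Riemann-sum approximation of the norm
    return s if norm <= u0 else s * (u0 / norm)

# Comparison on [0, pi] with s = cos and u0 = 0.5, as in Figure diff-sat.
x = np.linspace(0.0, np.pi, 1001)
dx = x[1] - x[0]
s = np.cos(x)
print(np.max(np.abs(sat_loc(s, 0.5))))             # 0.5: pointwise clipping
print(np.sqrt(np.sum(sat_2(s, 0.5, dx)**2) * dx))  # ~0.5: rescaled L^2 norm
\end{verbatim}
In particular, $\mathfrak{sat}_{\texttt{loc}}$ modifies the shape of the profile wherever $|s(x)|>u_0$, while $\mathfrak{sat}_2$ only rescales it.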
Our second main result states that the origin of the KdV equation \eqref{nlkdv} in closed loop with a saturated control is globally asymptotically stable. Moreover, in the case where the control acts on the whole domain and where the control is saturated with \eqref{function-saturation}, if the initial conditions are bounded in $L^2$ norm, then the solution converges exponentially with a decay rate that can be estimated (see Theorem \ref{glob_as_stab} below for a precise statement).

This article is organized as follows. In Section \ref{sec_mainresults}, we present our main results about the well-posedness and the stability of \eqref{nlkdv} in the presence of a saturating control. Sections \ref{sec_wp} and \ref{sec_stab} are devoted to proving these results by using the Banach fixed-point theorem, Lyapunov techniques and a contradiction argument. In Section \ref{sec_simu}, we provide a numerical scheme for the nonlinear equation and give some simulations of the equation in closed loop with a saturated feedback. Section \ref{sec_conc} collects some concluding remarks and possible further research lines.

\textbf{Notation:} A function $\alpha$ is said to be a class $\mathcal{K}_\infty$ function if $\alpha$ is nonnegative, increasing, vanishing at $0$ and such that $\lim_{s\rightarrow +\infty}\alpha(s)=+\infty$.
\section{Main results}
\label{sec_mainresults}
We first give an analysis of our system \eqref{nlkdv} when there is no constraint on the control $f$. To do that, let $f(t,x):=a(x)y(t,x)$ in \eqref{nlkdv}, where $a$ is a nonnegative function satisfying
\begin{equation}
\label{gain-control}
\left\{
\begin{array}{l}
0<a_0 \leq a(x)\leq a_1,\quad \forall x\in\omega,\\
\text{where $\omega$ is a nonempty open subset of $(0,L)$.}
\end{array}
\right.
\end{equation}
Then, following \cite{rosier2006global}, we get that the origin of \eqref{nlkdv} is globally asymptotically stabilized. If $\omega=[0,L]$, then any solution to \eqref{nlkdv} satisfies
\begin{equation}
\label{lyap-without-sat}
\frac{1}{2}\frac{d}{dt}\int_0^L |y(t,x)|^2 dx=-\frac{1}{2}|y_x(t,0)|^2-\int_0^L a(x) |y(t,x)|^2 dx \leq -a_0\int_0^L |y(t,x)|^2 dx,
\end{equation}
which ensures exponential stability with respect to the $L^2(0,L)$-norm. Note that the decay rate can be selected as large as we want by tuning the parameter $a_0$. Such a result is referred to as a rapid stabilization result.

Let us consider the KdV equation controlled by a saturated distributed control as follows
\begin{equation}
\label{nlkdv_sat}
\left\{
\begin{array}{l}
y_t+y_x+y_{xxx}+yy_x+\mathfrak{sat}(ay)=0,\\
y(t,0)=y(t,L)=y_x(t,L)=0,\\
y(0,x)=y_0(x),
\end{array}
\right.
\end{equation}
where $\mathfrak{sat}=\mathfrak{sat}_2$ or $\mathfrak{sat}_{\texttt{loc}}$. Since these two operators have properties in common, we will use the notation $\mathfrak{sat}$ throughout the paper. However, in some cases, we get different results. Therefore, the use of a particular saturation is specified when it is necessary. Let us state the main results of this paper.
\begin{theorem}{\em (Well-posedness)} \label{nl-theorem-wp}
For any initial condition $y_0\in L^2(0,L)$, there exists a unique mild solution $y\in C([0,T];L^2(0,L))\cap L^2(0,T;H^1(0,L))$ to \eqref{nlkdv_sat}.
\end{theorem}
\begin{theorem}{\em (Global asymptotic stability)} \label{glob_as_stab}
Given a nonempty open subset $\omega$ and the positive values $a_0$ and $u_0$, there exist a positive value $\mu^{\star}$ and a class $\mathcal{K}_\infty$ function $\alpha_0:\mathbb{R}_{\geq 0}\rightarrow \mathbb{R}_{\geq 0}$ such that, for any $y_0\in L^2(0,L)$, the mild solution $y$ of (\ref{nlkdv_sat}) satisfies
\begin{equation}
\Vert y(t,.)\Vert_{L^2(0,L)}\leq \alpha_0(\Vert y_0\Vert_{L^2(0,L)})e^{-\mu^{\star}t},\quad \forall t\geq 0.
\end{equation}
Moreover, in the case where $\omega=[0,L]$ and $\mathfrak{sat}=\mathfrak{sat}_2$, we can estimate {\em locally} the decay rate of the solution. In other words, for all $r>0$ and for any initial condition $y_0\in L^2(0,L)$ such that $\Vert y_0\Vert_{L^2(0,L)}\leq r$, the mild solution $y$ to \eqref{nlkdv_sat} satisfies
\begin{equation}
\label{local_as_stab_estim}
\Vert y(t,.)\Vert_{L^2(0,L)}\leq \Vert y_0\Vert_{L^2(0,L)}e^{-\mu t},\quad \forall t\geq 0,
\end{equation}
where $\mu$ is defined as follows
\begin{equation}
\label{formula-mu}
\mu:=\min\left\{a_0,\frac{u_0a_0}{ra_1}\right\}.
\end{equation}
\end{theorem}
The remaining part of this paper is devoted to the proof of these results (see Sections \ref{sec_wp} and \ref{sec_stab}, respectively) and to numerical simulations illustrating Theorem \ref{glob_as_stab} (see Section \ref{sec_simu}).
\section{Well-posedness}
\label{sec_wp}
\subsection{Linear system}
\label{linear_wp}
Before proving the well-posedness of \eqref{nlkdv_sat}, let us recall some useful results on the linear system \eqref{lkdv}. To do that, consider the operator defined by
$$
D(A)=\lbrace w\in H^3(0,L),\: w(0)=w(L)=w^\prime(L)=0\rbrace,
$$
$$
A:w\in D(A)\subset L^2(0,L)\longmapsto (-w^\prime-w^{\prime\prime\prime})\in L^2(0,L).
$$
It can be proved that this operator and its adjoint operator, defined by
$$
D(A^\star)=\lbrace w\in H^3(0,L),\: w(0)=w(L)=w^\prime(0)=0\rbrace,
$$
$$
A^\star:w\in D(A^\star)\subset L^2(0,L)\longmapsto w^\prime+w^{\prime\prime\prime},
$$
are both dissipative, which means that, for all $w\in D(A)$, $\int_0^L wA(w)dx\leq 0$ and, for all $w\in D(A^\star)$, $\int_0^L wA^\star(w)dx\leq 0$. Therefore, from \cite{pazy1983semigroups}, the operator $A$ generates a strongly continuous semigroup of contractions, which we denote by $W(t)$. We have the following theorem, proven in \cite{rosier1997kdv} and \cite{cerpa2013control}.
\begin{theorem}[Well-posedness of \eqref{lkdv}, \cite{rosier1997kdv},\cite{cerpa2013control}]
\label{lkdv-wp}
\begin{itemize}
\item For any initial condition $y_0\in D(A)$, there exists a unique strong solution $y\in C(0,T;D(A))\cap C^1(0,T;L^2(0,L))$ to (\ref{lkdv}).
\item For any initial condition $y_0\in L^2(0,L)$, there exists a unique mild solution $y\in C([0,T];L^2(0,L))\cap L^2(0,T;H^1(0,L))$ to (\ref{lkdv}). Moreover, there exists $C_0>0$ such that the solution to (\ref{lkdv}) satisfies
\begin{equation}
\label{dissipativity-regularity}
\Vert y\Vert_{C(0,T;L^2(0,L))}+\Vert y\Vert_{L^2(0,T;H^1(0,L))}\leq C_0\Vert y_0\Vert_{L^2(0,L)}
\end{equation}
and the extra trace regularity
\begin{equation}
\label{extra-regularity}
\Vert y_x(.,0)\Vert_{L^2(0,T)}\leq \Vert y_0\Vert_{L^2(0,L)}.
\end{equation}
\end{itemize}
\end{theorem}
To ease the reading, let us introduce, for all $T>0$, the Banach space
$$\mathcal{B}(T):=C(0,T;L^2(0,L))\cap L^2(0,T;H^1(0,L))$$
endowed with the norm
\begin{equation}
\Vert y\Vert_{\mathcal{B}(T)}=\sup_{t\in [0,T]}\Vert y(t,.)\Vert_{L^2(0,L)}+\left(\int_0^T \Vert y(t,.)\Vert^2_{H^1(0,L)}dt\right)^{\frac{1}{2}}.
\end{equation}
Before studying the well-posedness of (\ref{nlkdv_sat}), we need a well-posedness result for a problem with a right-hand side. Given $g\in L^1(0,T;L^2(0,L))$, let us consider $y$ the unique solution \footnote{With $g=0$, the existence and the uniqueness of $y$ are ensured since $A$ generates a $C_0$-semigroup of contractions. The existence and the uniqueness of $y$ when $g\in L^1(0,T;L^2(0,L))$ then follow from semigroup theory (see \cite{pazy1983semigroups}).} to the following nonhomogeneous problem:
\begin{equation}
\label{kdv-2}
\left\{
\begin{array}{l}
y_t+y_x+y_{xxx}=g,\\
y(t,0)=y(t,L)=y_x(t,L)=0,\\
y(0,.)=y_0.
\end{array}
\right.
\end{equation}
Note that we need the following property on the saturation function, which will allow us to state that this type of nonlinearity belongs to the space $L^1(0,T;L^2(0,L))$.
\begin{lemma}
\label{lipschitz-satl2}
For all $(s,\tilde{s})\in L^2(0,L)^2$, we have
\begin{equation}
\Vert \mathfrak{sat}(s)-\mathfrak{sat}(\tilde{s})\Vert_{L^2(0,L)}\leq 3\Vert s-\tilde{s}\Vert_{L^2(0,L)}.
\end{equation}
\end{lemma}
\begin{proof}
For $\mathfrak{sat}=\mathfrak{sat}_2$, we refer to \cite[Theorem 5.1.]{slemrod1989mcss} for a proof. For $\mathfrak{sat}=\mathfrak{sat}_{\texttt{loc}}$, we know from \cite[Page 73]{bible_khalil} that for all $(s,\tilde{s})\in L^2(0,L)^2$ and for all $x\in [0,L]$,
$$|\mathfrak{sat}_{\texttt{loc}}(s(x))-\mathfrak{sat}_{\texttt{loc}}(\tilde{s}(x))|\leq |s(x)-\tilde{s}(x)|.$$
Thus, we get
$$\Vert \mathfrak{sat}_{\texttt{loc}}(s)-\mathfrak{sat}_{\texttt{loc}}(\tilde{s})\Vert_{L^2(0,L)}\leq \Vert s-\tilde{s}\Vert_{L^2(0,L)},$$
which concludes the proof of Lemma \ref{lipschitz-satl2}.
\end{proof}
We have the following proposition, borrowed from \cite[Proposition 4.1]{rosier1997kdv}.
\begin{proposition}[\cite{rosier1997kdv}]
\label{proposition-reg-rosier}
If $y\in L^2(0,T;H^1(0,L))$, then $yy_x\in L^1(0,T;L^2(0,L))$ and the map $\psi_1: y\in L^2(0,T;H^1(0,L))\mapsto yy_x\in L^1(0,T;L^2(0,L))$ is continuous.
\end{proposition}
We also have the following proposition.
\begin{proposition}
\label{proposition-reg}
Assume that $a:[0,L]\rightarrow \mathbb{R}$ satisfies \eqref{gain-control}. If $y\in L^2(0,T;H^1(0,L))$, then $\mathfrak{sat}(ay)\in L^1(0,T;L^2(0,L))$ and the map $\psi_2: y\in L^2(0,T;H^1(0,L)) \mapsto \mathfrak{sat}(ay)\in L^1(0,T;L^2(0,L))$ is continuous.
\end{proposition}
\begin{proof}
Let $y,z\in L^2(0,T;H^1(0,L))$. Using Lemma \ref{lipschitz-satl2} and the H\"older inequality, we have
\begin{eqnarray}
\label{regularity-sat-l1}
\Vert \mathfrak{sat}(ay) - \mathfrak{sat}(az)\Vert_{L^1(0,T;L^2(0,L))} &&\leq 3\int_0^T \Vert a(y-z)\Vert_{L^2(0,L)}dt\nonumber \\
&&\leq 3\sqrt{L}a_1\sqrt{T}\Vert (y-z)\Vert_{L^2(0,T;H^1(0,L))}.
\end{eqnarray}
Plugging $z=0$ in (\ref{regularity-sat-l1}) yields $\mathfrak{sat}(ay)\in L^1(0,T;L^2(0,L))$ and (\ref{regularity-sat-l1}) implies the continuity of the map $\psi_2$.
This concludes the proof of Proposition \ref{proposition-reg}.
\end{proof}
Let us now study the nonhomogeneous linear KdV equation with zero initial datum $y_0=0$. For any $g\in L^1(0,T;L^2(0,L))$, it reads
\begin{equation}
\label{kdv-zero}
\left\{
\begin{split}
&y_t+y_x+y_{xxx}=g,\\
&y(t,0)=y(t,L)=y_x(t,L)=0,\\
&y(0,x)=0.
\end{split}
\right.
\end{equation}
It can be rewritten as follows
\begin{equation}
\left\{
\begin{split}
&\dot{y}=Ay+g,\\
&y(0)=0.
\end{split}
\right.
\end{equation}
By standard semigroup theory (see \cite{pazy1983semigroups}), for any positive value $t$ and any function $g\in L^1(\mathbb{R}_{\geq 0};L^2(0,L))$, the solution to \eqref{kdv-zero} can be expressed as follows
\begin{equation}
y(t)=\int_0^t W(t-\tau)g(\tau,x)d\tau.
\end{equation}
Finally, we have the following result, borrowed from \cite[Lemma 2.2]{rosier2006global}.
\begin{proposition}[\cite{rosier2006global}]
\label{prop-w(t-tau)}
There exists a positive value $C_1$ such that, for any positive value $T$ and any function $g\in L^1(0,T;L^2(0,L))$, the solution to \eqref{kdv-zero} satisfies the following inequality,
\begin{equation}
\left\Vert \int_0^t W(t-\tau)g(\tau,x)d\tau\right\Vert_{\mathcal{B}(T)} \leq C_1\int_0^T \Vert g(\tau,.)\Vert_{L^2(0,L)}d\tau.
\end{equation}
\end{proposition}
\subsection{Proof of Theorem \ref{nl-theorem-wp}}
Let us begin this section with a technical lemma.
\begin{lemma}{\em (\cite{zhang1999KdV})} \label{zhang-regularity}
For any $T>0$ and $y,z\in\mathcal{B}(T)$,
\begin{equation}
\int_0^T \Vert (y(t,.)z(t,.))_x\Vert_{L^2(0,L)}dt\leq 2 \sqrt{T}\Vert y\Vert_{\mathcal{B}(T)}\Vert z\Vert_{\mathcal{B}(T)}.
\end{equation}
\end{lemma}
The following is a local well-posedness result.
\begin{lemma}{\em (Local well-posedness)} \label{local-wp}
Let $T>0$ be given. For any $y_0\in L^2(0,L)$, there exists $T^\prime \in (0,T]$ depending on $\Vert y_0\Vert_{L^2(0,L)}$ such that \eqref{nlkdv_sat} admits a unique mild solution $y\in\mathcal{B}(T^\prime)$.
\end{lemma}
\begin{proof}
We follow the strategy of \cite{chapouly2009global} and \cite{rosier2006global}. We know from Propositions \ref{proposition-reg-rosier} and \ref{proposition-reg} that, for all $z\in L^2(0,T;H^1(0,L))$, there exists a unique mild solution to the following system
\begin{equation}
\label{kdv-fixed-point}
\left\{
\begin{array}{l}
y_t+y_x+y_{xxx}=-zz_x-\mathfrak{sat}(az),\\
y(t,0)=y(t,L)=y_x(t,L)=0,\\
y(0,x)=y_0(x).
\end{array}
\right.
\end{equation}
The solution to \eqref{kdv-fixed-point} can be written in integral form as
\begin{equation}
y(t)=W(t)y_0-\int_0^t W(t-\tau)(zz_x)(\tau)d\tau-\int_0^t W(t-\tau)\mathfrak{sat}(az(\tau,.))d\tau.
\end{equation}
For given $y_0\in L^2(0,L)$, let $r$ and $T^\prime$ be positive constants to be chosen later. We define
\begin{equation}
S_{T^\prime ,r}=\lbrace z\in \mathcal{B}(T^\prime),\: \Vert z\Vert_{\mathcal{B}(T^\prime)}\leq r\rbrace,
\end{equation}
which is a closed, convex and bounded subset of $\mathcal{B}(T^\prime)$. Consequently, $S_{T^\prime ,r}$ is a complete metric space in the topology induced from $\mathcal{B}(T^\prime)$. We define a map $\Gamma$ on $S_{T^\prime,r}$ by, for all $t\in [0,T^\prime]$,
\begin{equation}
\Gamma(z):=W(t)y_0-\int_0^t W(t-\tau)(zz_x)(\tau)d\tau-\int_0^t W(t-\tau)\mathfrak{sat}(az(\tau,.))d\tau,\: \forall z\in S_{T^\prime,r}.
\end{equation}
We aim at proving that there exists a unique fixed point of this operator. It follows from Proposition \ref{prop-w(t-tau)}, Lemma \ref{zhang-regularity} and the linear estimates given in Theorem \ref{lkdv-wp} that, for every $z\in S_{T^\prime,r}$, there exists a positive value $C_2:=C_2(a_1,T,L,C_1)$ such that
\begin{equation}
\begin{split}
\Vert \Gamma(z)\Vert_{\mathcal{B}(T^\prime)} &\leq C_0 \Vert y_0\Vert_{L^2(0,L)}+C_1\int_0^T (\Vert zz_x(\tau,.)\Vert_{L^2(0,L)}+\Vert \mathfrak{sat}(az(\tau,.))\Vert_{L^2(0,L)}) d\tau\\
& \leq C_0\Vert y_0\Vert_{L^2(0,L)} +2C_1\sqrt{T^\prime} \Vert z\Vert^2_{\mathcal{B}(T^\prime)}+C_2\sqrt{T^\prime}\Vert z \Vert_{\mathcal{B}(T^\prime)},
\end{split}
\end{equation}
where the first line has been obtained with the linear estimates given in Theorem \ref{lkdv-wp} and the estimate given in Proposition \ref{prop-w(t-tau)}, and the second line with Lemma \ref{zhang-regularity} and Proposition \ref{proposition-reg}. We choose $r>0$ and $T^\prime>0$ such that
\begin{equation}
\left\{
\begin{array}{l}
r=2C_0\Vert y_0\Vert_{L^2(0,L)},\\
2C_1\sqrt{T^\prime}r + C_2\sqrt{T^\prime}\leq \frac{1}{2},
\end{array}
\right.
\end{equation}
in order to obtain
\begin{equation}
\Vert \Gamma(z)\Vert_{\mathcal{B}(T^\prime)}\leq r,\quad \forall z\in S_{T^\prime,r}.
\end{equation}
Thus, with such $r$ and $T^\prime$, $\Gamma$ maps $S_{T^\prime ,r}$ into $S_{T^\prime ,r}$. Moreover, one can prove, with Proposition \ref{prop-w(t-tau)}, Lemma \ref{zhang-regularity} and the linear estimates given in Theorem \ref{lkdv-wp}, that
\begin{equation}
\Vert \Gamma(z_1)-\Gamma(z_2)\Vert_{\mathcal{B}(T^\prime)}\leq \frac{1}{2}\Vert z_1-z_2\Vert_{\mathcal{B}(T^\prime)},\: \forall z_1,z_2\in S_{T^\prime ,r}.
\end{equation}
The existence of a mild solution to the Cauchy problem \eqref{nlkdv_sat} follows from the Banach fixed-point theorem \cite[Theorem 5.7]{brezis2010functional}.
\end{proof}
Before proving the global well-posedness, we need the following lemma, inspired by \cite{coroncrepeau2004missed} and \cite{chapouly2009global}, which implies that if there exists a solution for some $T>0$ then the solution is unique.
\begin{lemma}
\label{uniqueness-coron-crepeau}
Let $T>0$ and let $a:[0,L]\rightarrow \mathbb{R}$ satisfy \eqref{gain-control}. There exists $C_{11}:=C_{11}(T,L)>0$ such that, for every $y_0,z_0\in L^2(0,L)$ for which there exist mild solutions $y$ and $z$ of
\begin{equation}
\label{kdv-y}
\left\{
\begin{array}{l}
y_t+y_x+y_{xxx}+yy_x+\mathfrak{sat}(ay)=0,\\
y(t,0)=y(t,L)=y_x(t,L)=0,\\
y(0,x)=y_0(x),
\end{array}
\right.
\end{equation}
and
\begin{equation}
\label{kdv-z}
\left\{
\begin{array}{l}
z_t+z_x+z_{xxx}+zz_x+\mathfrak{sat}(az)=0,\\
z(t,0)=z(t,L)=z_x(t,L)=0,\\
z(0,x)=z_0(x),
\end{array}
\right.
\end{equation}
these solutions satisfy
\begin{equation}
\int_0^T \int_0^L (z_x(t,x)-y_x(t,x))^2dxdt\leq e^{C_{11}(1+\Vert y\Vert^2_{L^2(0,T;H^1(0,L))}+\Vert z\Vert^2_{L^2(0,T;H^1(0,L))})}\int_0^L (z_0(x)-y_0(x))^2dx,
\end{equation}
\begin{equation}
\int_0^T \int_0^L (z(t,x)-y(t,x))^2dxdt\leq e^{C_{11}(1+\Vert y\Vert^2_{L^2(0,T;H^1(0,L))}+\Vert z\Vert^2_{L^2(0,T;H^1(0,L))})}\int_0^L (z_0(x)-y_0(x))^2dx.
\end{equation}
\end{lemma}
\begin{proof}
We follow the strategy of \cite{coroncrepeau2004missed} and \cite{chapouly2009global}.
Let $y_0,z_0\in L^2(0,L)$ and assume that there exist $T>0$ and two mild solutions $y$ and $z$ to \eqref{kdv-y} and \eqref{kdv-z}, respectively, defined on $[0,T]\times [0,L]$. Then $\Delta:=z-y$, defined on $[0,T]\times [0,L]$, is a mild solution of
\begin{equation}
\label{kdv-delta}
\left\{
\begin{array}{l}
\Delta_t+\Delta_x+\Delta_{xxx}=-y\Delta_x-z_x\Delta-(\mathfrak{sat}(az)-\mathfrak{sat}(ay)),\\
\Delta(t,0)=\Delta(t,L)=\Delta_x(t,L)=0,\\
\Delta(0,x)=z_0(x)-y_0(x).
\end{array}
\right.
\end{equation}
Integrating by parts in
\begin{equation}
\int_0^L 2x\Delta(\Delta_t+\Delta_x+\Delta_{xxx}+y\Delta_x+z_x\Delta+\mathfrak{sat}(az)-\mathfrak{sat}(ay))dx=0,
\end{equation}
and using the boundary conditions of \eqref{kdv-delta}, we readily get
\begin{multline}
\frac{d}{dt}\int_0^L x\Delta^2dx+3\int_0^L \Delta^2_xdx= \int_0^L \Delta^2 dx-2\int_0^L xy\Delta\Delta_xdx\\
+\int_0^L z\Delta^2 dx+4\int_0^L xz\Delta\Delta_xdx-\int_0^L x\Delta(\mathfrak{sat}(az)-\mathfrak{sat}(ay))dx.
\end{multline}
By the boundary conditions and the continuous Sobolev embedding $H^1_0(0,L)\subset C([0,L])$, there exists $C_3=C_3(L)>0$ such that
\begin{equation}
2\left|\int_0^L xy\Delta\Delta_xdx\right|\leq C_3\Vert y_x\Vert_{L^2(0,L)}\int_0^L |x\Delta\Delta_x|dx.
\end{equation}
Thus,
\begin{equation}
2\left|\int_0^L xy\Delta\Delta_xdx\right|\leq \frac{1}{2}\int_0^L \Delta_x^2dx+\frac{C_3^2}{2}\Vert y_x\Vert^2_{L^2(0,L)}L\int_0^L x\Delta^2dx.
\end{equation}
Similarly,
\begin{equation}
4\left|\int_0^L xz\Delta\Delta_xdx\right|\leq \frac{1}{2}\int_0^L \Delta_x^2 dx+2C_3^2\Vert z_x\Vert^2_{L^2(0,L)}\int_0^L x\Delta^2dx.
\end{equation}
Moreover, since $\mathfrak{sat}$ is globally Lipschitz with constant $3$ (as stated in Lemma \ref{lipschitz-satl2}) and $a(x)\leq a_1$ for all $x\in [0,L]$, we use the H\"older inequality to get
\begin{equation}
\begin{array}{rcl}
\left|\int_0^L x\Delta(\mathfrak{sat}(az)-\mathfrak{sat}(ay))dx\right| &\leq& \Vert x\Delta\Vert_{L^2(0,L)}\Vert \mathfrak{sat}(az)-\mathfrak{sat}(ay)\Vert_{L^2(0,L)}\\
&\leq& 3\Vert a(x)\Delta\Vert_{L^2(0,L)}\Vert x\Delta\Vert_{L^2(0,L)}\\
&\leq& 3a_1\int_0^L x\Delta^2 dx.
\end{array}
\end{equation}
Note that, from \cite[Lemma 16]{coroncrepeau2004missed}, for every $\phi\in H^1(0,L)$ with $\phi(0)=0$, and every $d\in [0,L]$,
\begin{equation}
\label{lemma16-cc}
\int_0^L \phi^2dx\leq \frac{d^2}{2}\int_0^L \phi_x^2dx+\frac{1}{d}\int_0^L x\phi^2dx.
\end{equation}
Thus, from \eqref{lemma16-cc} there exists $C_4>0$ such that
$$
\int_0^L \Delta^2dx\leq \frac{1}{2}\int_0^L \Delta_x^2dx+C_4\int_0^L x\Delta^2dx.
$$
Moreover, with the boundary conditions of $z$ and the Sobolev embedding $H_0^1(0,L)\subset C([0,L])$, there exists $C_5=C_5(L)>0$ such that
$$
2\left|\int_0^L z\Delta^2dx\right|\leq C_5\Vert z_x\Vert_{L^2(0,L)}\int_0^L \Delta^2dx.
$$
Hence, using the boundary conditions of $\Delta$ and \eqref{lemma16-cc} with $d:=\min\{C_5^{-1/2}\Vert z_x\Vert^{-1/2}_{L^2(0,L)},L\}$, there exists $C_6=C_6(L)>0$ such that
\begin{equation}
2\int_0^L z\Delta^2 dx\leq \frac{1}{2}\int_0^L\Delta_x^2dx+C_6(1+\Vert z_x\Vert_{L^2(0,L)}^{3/2})\int_0^L x\Delta^2dx.
\end{equation}
Finally, there exists $C_7=C_7(L)>0$ such that
\begin{equation}
\frac{d}{dt}\int_0^L x\Delta^2dx+\int_0^L \Delta_x^2dx\leq C_7(1+\Vert y_x\Vert^2_{L^2(0,L)}+\Vert z_x\Vert^2_{L^2(0,L)})\int_0^L x\Delta^2dx.
\end{equation}
In particular,
\begin{equation}
\frac{d}{dt}\int_0^L x\Delta^2dx\leq C_7(1+\Vert y_x\Vert^2_{L^2(0,L)}+\Vert z_x\Vert^2_{L^2(0,L)})\int_0^L x\Delta^2dx.
\end{equation}
Using the Gr\"onwall lemma, the last inequality and the initial condition of $\Delta$, we get, for every $t\in[0,T]$,
\begin{equation}
\int_0^L x\Delta^2(t,x)dx\leq e^{C_7\left( T+\Vert y\Vert^2_{L^2(0,T;H^1(0,L))}+\Vert z\Vert^2_{L^2(0,T;H^1(0,L))}\right)}\int_0^L x(z_0(x)-y_0(x))^2dx,
\end{equation}
and thus we obtain the existence of $C_8=C_8(T,L)$ such that
\begin{equation}
\label{unicity-1}
\int_0^T\int_0^L (z_x(t,x)-y_x(t,x))^2dxdt\leq e^{C_8\left(\Vert y\Vert^2_{L^2(0,T;H^1(0,L))}+\Vert z\Vert^2_{L^2(0,T;H^1(0,L))}\right)} \int_0^L (z_0(x)-y_0(x))^2dx.
\end{equation}
Similarly, integrating by parts in
\begin{equation}
\int_0^L \Delta(\Delta_t+\Delta_x+\Delta_{xxx}+y\Delta_x+z_x\Delta+\mathfrak{sat}(az)-\mathfrak{sat}(ay))dx=0,
\end{equation}
we get, using the boundary conditions of $\Delta$,
\begin{equation}
\frac{1}{2}\frac{d}{dt}\int_0^L \Delta^2 dx+\frac{1}{2}\Delta^2_x(t,0)=-\int_0^L (y\Delta_x-2z\Delta_x)\Delta dx-\int_0^L \Delta(\mathfrak{sat}(az)-\mathfrak{sat}(ay))dx.
\end{equation}
Moreover,
\begin{equation}
\label{unicity-linf}
-\int_0^L (y\Delta_x-2z\Delta_x)\Delta dx\leq \int_0^L \Delta_x^2dx+\int_0^L \left(\frac{1}{2}y^2+2z^2\right)\Delta^2dx,
\end{equation}
and
\begin{equation}
\label{lipschitz-sat}
\left|\int_0^L \Delta(\mathfrak{sat}(az)-\mathfrak{sat}(ay))dx\right|\leq 3a_1\int_0^L \Delta^2 dx.
\end{equation}
Thanks to the continuous Sobolev embedding $H^1_0(0,L)\subset C([0,L])$, \eqref{lipschitz-sat} and \eqref{unicity-linf}, there exists $C_9=C_9(L)>0$ such that
\begin{equation}
\frac{1}{2}\frac{d}{dt}\int_0^L \Delta^2 dx\leq \int_0^L \Delta_x^2dx+C_9\left(\Vert y_x\Vert^2_{L^2(0,L)}+\Vert z_x\Vert^2_{L^2(0,L)}+1\right)\int_0^L \Delta^2 dx.
\end{equation}
Thus, applying the Gr\"onwall lemma, we get the existence of $C_{10}=C_{10}(L)>0$ such that
\begin{equation}
\label{unicity-2}
\int_0^L (z(t,x)-y(t,x))^2dx\leq e^{C_{10}\left(1+\Vert y\Vert^2_{L^2(0,T;H^1(0,L))}+\Vert z\Vert^2_{L^2(0,T;H^1(0,L))}\right)}\int_0^L (z_0(x)-y_0(x))^2dx.
\end{equation}
Using \eqref{unicity-1} and \eqref{unicity-2}, this concludes the proof of Lemma \ref{uniqueness-coron-crepeau}.
\end{proof}
We aim at removing the smallness condition given by $T^\prime$ in Lemma \ref{local-wp}, following \cite{chapouly2009global}. Since we have the local well-posedness, we only need to prove the following a priori estimate for any mild solution to (\ref{nlkdv_sat}).
\begin{lemma}
\label{global-estimation}
For given $T>0$, there exists $G:=G(T)>0$ such that, for any $y_0\in L^2(0,L)$, for any $0< T^\prime\leq T$ and for any mild solution $y\in \mathcal{B}(T^\prime)$ to (\ref{nlkdv_sat}), it holds
\begin{equation}
\label{glob-estim-chapouly}
\Vert y\Vert_{\mathcal{B}(T^\prime)}\leq G\Vert y_0\Vert_{L^2(0,L)},
\end{equation}
and
\begin{equation}
\label{l2-dissipativity}
\Vert y(t,.)\Vert_{L^2(0,L)}\leq \Vert y_0\Vert_{L^2(0,L)},\quad \forall t\in [0,T^\prime].
\end{equation}
\end{lemma}
\begin{proof}
Let us fix $0<T^\prime\leq T$. We multiply the first equation in (\ref{nlkdv_sat}) by $y$ and integrate on $(0,L)$.
Using the boundary conditions in \eqref{nlkdv_sat}, we get the following estimates
\begin{equation*}
\int_0^L yy_xdx=0,\hspace{0.3cm}\int_0^L yy_{xxx}dx=\frac{1}{2}|y_x(t,0)|^2,\hspace{0.3cm}\int_0^L y^2y_xdx=0.
\end{equation*}
Using the fact that $y\,\mathfrak{sat}(ay)\geq 0$ (the saturation preserves the sign of its argument and $a\geq 0$), we get that
\begin{equation}
\label{l2-dissipativity1}
\frac{1}{2}\frac{d}{dt}\Vert y(t,.)\Vert^2_{L^2(0,L)} \leq -\frac{1}{2}|y_x(t,0)|^2-\int_0^L y\mathfrak{sat}(ay)dx\leq 0,
\end{equation}
which implies \eqref{l2-dissipativity}. Moreover, using again \eqref{l2-dissipativity1}, there exists $C_{12}=C_{12}(L)>0$ such that
\begin{equation}
\label{global-wp-1}
\Vert y\Vert_{L^\infty(0,T^\prime;L^2(0,L))}\leq C_{12}\Vert y_0\Vert_{L^2(0,L)}.
\end{equation}
It remains to prove a similar inequality for $\Vert y_x\Vert_{L^2(0,T^\prime;L^2(0,L))}$ to complete the proof. We multiply (\ref{nlkdv_sat}) by $xy$, integrate on $(0,L)$ and use the following
\begin{equation*}
\int_0^L xyy_xdx=-\frac{1}{2}\Vert y\Vert^2_{L^2(0,L)}, \quad \int_0^L xyy_{xxx}dx=\frac{3}{2}\Vert y_x\Vert^2_{L^2(0,L)},
\end{equation*}
and
\begin{multline}
\label{nl-kdv-diss}
-\int_0^L xy^2y_xdx =\frac{1}{3}\int_0^L y^3(t,x)dx \leq \frac{1}{3}\sup_{x\in [0,L]}|y(t,x)|\Vert y\Vert^2_{L^\infty(0,T;L^2(0,L))}\\
\leq \frac{\sqrt{L}}{3}\Vert y_x\Vert_{L^2(0,L)}\Vert y\Vert^2_{L^\infty(0,T;L^2(0,L))} \leq \frac{\sqrt{L}\delta}{6}\Vert y_x\Vert^2_{L^2(0,L)}+\frac{\sqrt{L}}{6\delta}\Vert y\Vert^4_{L^\infty(0,T;L^2(0,L))},
\end{multline}
where $\delta$ is chosen as $\delta:=\frac{3}{\sqrt{L}}$. In this way, we obtain
\begin{equation}
\frac{1}{2}\frac{d}{dt}\int_0^L |x^{1/2}y(t,.)|^2dx-\frac{1}{2} \int_0^L y^2dx+\frac{3}{2}\int_0^L |y_x|^2dx-\frac{1}{3}\int_0^L |y|^3dx =-\int_0^L x\mathfrak{sat}(ay)ydx.
\end{equation}
We get, using \eqref{nl-kdv-diss} and the fact that $x\,y\,\mathfrak{sat}(ay)\geq 0$, that
\begin{equation}
\label{H1-inequality}
\frac{1}{2}\frac{d}{dt}\int_0^L |x^{1/2}y(t,.)|^2dx +\int_0^L |y_x|^2dx \leq \frac{1}{2}\Vert y \Vert^2_{L^\infty(0,T;L^2(0,L))}+\frac{L}{18}\Vert y\Vert^4_{L^\infty(0,T;L^2(0,L))}.
\end{equation}
Using (\ref{global-wp-1}) and the Gr\"onwall inequality, we get the existence of a positive value $C_{13}=C_{13}(L)>0$ such that
\begin{equation}
\Vert y_x\Vert_{L^2(0,T^\prime;L^2(0,L))}\leq C_{13}\Vert y_0\Vert_{L^2(0,L)},
\end{equation}
which concludes the proof of Lemma \ref{global-estimation}.
\end{proof}
Using a classical extension argument, Lemmas \ref{local-wp}, \ref{global-estimation} and \ref{uniqueness-coron-crepeau}, for any $T>0$, we can conclude that there exists a unique mild solution in $\mathcal{B}(T)$ to (\ref{nlkdv_sat}). Indeed, with Lemma \ref{local-wp}, we know that there exists $T^\prime\in (0,T)$ such that there exists a unique solution to \eqref{nlkdv_sat} in $\mathcal{B}(T^\prime)$. Moreover, Lemma \ref{global-estimation} allows us to state the existence of a mild solution to \eqref{nlkdv_sat} for every $T>0$: since the solution $y$ to \eqref{nlkdv_sat} is bounded by its initial condition for every $T^\prime>0$ belonging to $[0,T]$, as stated in \eqref{l2-dissipativity}, we know that there exists a solution to \eqref{nlkdv_sat} in $\mathcal{B}(T)$. Finally, Lemma \ref{uniqueness-coron-crepeau} implies that there exists a unique mild solution to \eqref{nlkdv_sat} in $\mathcal{B}(T)$.
This concludes the proof of Theorem \ref{nl-theorem-wp}. \color{blue} \begin{remark} \label{remark-gkdv} In \cite{rosier2006global}, the following generalized Korteweg-de Vries equation is considered \begin{equation} \label{gkdv-rosier} \left\{ \begin{split} &y_t+y_x+y_{xxx}+b(y)y_x+ay=0,\\ &y(t,0)=y(t,L)=y_x(t,L)=0,\\ &y(0,x)=y_0(x), {\epsilon}nd{split} \right. {\epsilon}nd{equation} where the function $a:[0,L]\rightarrow \mathbb{R}$ satisfies {\epsilon}qref{gain-control} and where $b:\mathbb{R}\rightarrow \mathbb{R}$ satisfies the following growth condition \begin{equation} b(0)=0,\: |b^{(j)}(\mu)|\leq C\left(1+|\mu|^{p-j}\right),\quad \forall \mu\in\mathbb{R}, {\epsilon}nd{equation} for $j=0$ if $1\leq p< 2$ and for $j=0,1,2$ if $p\geq 2$. The saturated version of {\epsilon}qref{gkdv-rosier} is \begin{equation} \label{gkdv-saturated} \left\{ \begin{split} &y_t+y_x+y_{xxx}+b(y)y_x+\mathfrak{sat}(ay)=0,\\ &y(t,0)=y(t,L)=y_x(t,L)=0,\\ &y(0,x)=y_0(x). {\epsilon}nd{split} \right. {\epsilon}nd{equation} The strategy followed in \cite{rosier2006global} can be followed easily to prove the same result than Theorem \ref{nl-theorem-wp} for {\epsilon}qref{gkdv-saturated}. Note that in \cite{rosier2006global}, provided that the initial condition satisfies some compatibility conditions, the well-posedness is proved for a solutions in $C([0,T];H^s(0,L))\cap L^2(0,T;H^{s+1}(0,L))$, where $s\in [0,3]$. The authors proved this result by looking at $v=y_t$ which solves an equation equivalent to {\epsilon}qref{gkdv-rosier}. In our case, it seems harder to prove such a result. Since the saturation operator introduces some non-smoothness, $v=y_t$ does not solve an equation equivalent to {\epsilon}qref{gkdv-saturated}. {\epsilon}nd{remark} \color{black} \section{Global asymptotic stability} \label{sec_stab} Let us begin by introducing the following definition. \begin{definition} System (\ref{nlkdv_sat}) is said to be {\epsilon}m semi-globally exponentially stable {\epsilon}m in $L^2(0,L)$ if for any $r>0$ there exists two constants $K:=K(r)>0$ and $\mu:=\mu(r)>0$ such that for any $y_0\in L^2(0,L)$ such that $\Vert y_0\Vert_{L^2(0,L)}\leq r$, the mild solution $y=y(t,x)$ to (\ref{nlkdv_sat}) satisfies \begin{equation} \label{local_exp_def} \Vert y(t,.)\Vert_{L^2(0,L)}\leq K\Vert y_0\Vert_{L^2(0,L)}e^{-\mu t},\qquad \forall t\geq 0. {\epsilon}nd{equation} {\epsilon}nd{definition} Following \cite{rosier2006global}, we first show that {\epsilon}qref{nlkdv_sat} is semi-globally exponentially stable in $L^2(0,L)$. From this result, we will be able to prove the global uniform exponential stability of {\epsilon}qref{nlkdv_sat}. To do that, we state and prove a technical lemma that allows us to bound the saturation function with a linear function as long as the initial condition is bounded. Then we separate our proof into two cases. The first one deals with the case $\omega=[0,L]$ and $\mathfrak{sat}=\mathfrak{sat}_2$, while the second one deals with the case $\omega\subseteq [0,L]$ whatever the saturation is. The tools to tackle these two cases are different. The goal of \color{blue} the \color{black} next three sections is to prove the following result \begin{proposition}[Semi-global exponential stability] \label{local_as_stab} For all $y_0\in L^2(0,L)$ with $\Vert y_0\Vert_{L^2(0,L)}\leq r$, the system (\ref{nlkdv_sat}) is semi-globally exponentially stable in $L^2(0,L)$. 
Moreover, if $\omega=[0,L]$ and $\mathfrak{sat}=\mathfrak{sat}_2$, inequality \eqref{local_exp_def} holds with $K=1$ and $\mu$ can be estimated as done in Theorem \ref{glob_as_stab}.
\end{proposition}
\subsection{Technical Lemma}
Before starting the proof of Proposition \ref{local_as_stab}, let us state and prove the following lemma.
\begin{lemma}[Sector Condition]
\label{sat-l2-local}
Let $r>0$, let $a:[0,L]\rightarrow \mathbb{R}$ be a function satisfying \eqref{gain-control}, and let $k(r)$ be defined by
\begin{equation}
k(r)=\min\left\{\frac{u_0}{a_1 r},1 \right\}.
\end{equation}
\begin{itemize}
\item[(i)] Given $\mathfrak{sat}=\mathfrak{sat}_2$ and $s\in L^2(0,L)$ such that $\Vert s\Vert_{L^2(0,L)}\leq r$, we have
\begin{equation}
\Big( \mathfrak{sat}_2(a(x)s(x))-k(r) a(x)s(x)\Big)s(x)\geq 0,\quad \forall x\in [0,L],
\end{equation}
\item[(ii)] Given $\mathfrak{sat}=\mathfrak{sat}_{\texttt{loc}}$ and $s\in L^\infty(0,L)$ such that, for all $x\in [0,L]$, $|s(x)|\leq r$, we have
\begin{equation}
\Big(\mathfrak{sat}_{\texttt{loc}}(a(x)s(x))- k(r) a(x)s(x)\Big)s(x)\geq 0,\quad \forall x\in [0,L].
\end{equation}
\end{itemize}
\end{lemma}
\begin{proof}
(i) We first prove item (i) of Lemma \ref{sat-l2-local}. Two cases may occur:
\begin{itemize}
\item[1.] $\Vert as\Vert_{L^2(0,L)}\geq u_0$;
\item[2.] $\Vert as\Vert_{L^2(0,L)}\leq u_0$.
\end{itemize}
The first case implies that, for all $x\in [0,L]$
$$
\mathfrak{sat}_2(a(x)s(x))=\frac{a(x)s(x)}{\Vert as\Vert_{L^2(0,L)}}u_0.
$$
Thus, for all $x\in [0,L]$,
\begin{equation*}
\Big( \mathfrak{sat}_2(a(x)s(x))-k(r) a(x)s(x)\Big)s(x)=a(x)s(x)^2\left(\frac{u_0}{\Vert as\Vert_{L^2(0,L)}}-k(r)\right).
\end{equation*}
Since
$$
\frac{u_0}{\Vert as\Vert_{L^2(0,L)}} \geq \frac{u_0}{a_1\Vert s\Vert_{L^2(0,L)}} \geq \frac{u_0}{a_1 r} \geq k(r),
$$
we obtain
$$
\Big(\mathfrak{sat}_2(a(x)s(x))-k(r) a(x)s(x)\Big)s(x)\geq 0.
$$
Now, let us consider the case $\Vert as\Vert_{L^2(0,L)}\leq u_0$. We have, for all $x\in [0,L]$,
$$
\mathfrak{sat}_2(a(x)s(x))=a(x)s(x),
$$
and then, for all $x\in [0,L]$,
\begin{equation*}
(\mathfrak{sat}_2(a(x)s(x))-k(r) a(x)s(x))s(x)=a(x)s(x)^2(1-k(r))\geq 0.
\end{equation*}
(ii) We now deal with item (ii) of Lemma \ref{sat-l2-local}. Let us pick $x\in [0,L]$ and consider the following two cases:
\begin{itemize}
\item[1.] $|a(x)s(x)|\geq u_0$;
\item[2.] $|a(x)s(x)|\leq u_0.$
\end{itemize}
The first case implies either $a(x)s(x)\geq u_0$ or $a(x)s(x)\leq -u_0$. Since these two possibilities are symmetric, we just deal with the case $a(x)s(x)\geq u_0$. We have
\begin{equation*}
\mathfrak{sat}_{\texttt{loc}}(a(x)s(x))=u_0,
\end{equation*}
and then
\begin{multline*}
\Big(\mathfrak{sat}_{\texttt{loc}}(a(x)s(x))-k(r)a(x)s(x)\Big)s(x)=u_0s(x)-k(r)a(x)s^2(x)\\
\geq \Big(u_0-k(r)a(x)r\Big)s(x) \geq \left(u_0-\frac{u_0}{a_1 r}a(x)r\right)s(x) \geq 0.
\end{multline*}
The second case implies that $\mathfrak{sat}_{\texttt{loc}}(a(x)s(x))=a(x)s(x),$ and then $ \Big(\mathfrak{sat}_{\texttt{loc}}(a(x)s(x))-k(r)a(x)s(x)\Big)s(x)=\Big(1-k(r)\Big)a(x)s(x)^2\geq 0.$ This concludes the proof of the second item of Lemma \ref{sat-l2-local}.
\end{proof}
\subsection{Proof of Proposition \ref{local_as_stab} when $\omega=[0,L]$ and $\mathfrak{sat}=\mathfrak{sat}_2$}
\label{dead-zone-technique}
Now we are able to prove Proposition \ref{local_as_stab} when $\omega=[0,L]$ and $\mathfrak{sat}=\mathfrak{sat}_2$.
Let $r>0$ and $y_0\in L^2(0,L)$ be such that $\Vert y_0\Vert_{L^2(0,L)}\leq r$. Multiplying {\epsilon}qref{nlkdv_sat} by $y$, integrating with respect to $x$ on $(0,L)$ yields \begin{equation} \label{lyap-dz} \frac{1}{2}\frac{d}{dt}\int_0^L |y(t,x)|^2dx\leq -\int_0^L \mathfrak{sat}_2(ay(t,x))y(t,x)dx. {\epsilon}nd{equation} Note that from {\epsilon}qref{l2-dissipativity}, we get \begin{equation} \label{dissipativity-nl} \Vert y\Vert_{L^2(0,L)} \leq \Vert y_0\Vert_{L^2(0,L)} \leq r. {\epsilon}nd{equation} Thus, using Lemma \ref{sat-l2-local} and {\epsilon}qref{lyap-dz}, it implies that \begin{equation} \frac{1}{2}\frac{d}{dt}\int_0^L |y(t,x)|^2dx\leq -\int_0^L k(r) a_0 |y(t,x)|^2dx. {\epsilon}nd{equation} Applying the Gr\"onwall lemma leads to \begin{equation} \Vert y(t,.)\Vert_{L^2(0,L)}\leq e^{-\mu t}\Vert y_0\Vert_{L^2(0,L)} {\epsilon}nd{equation} where $\mu$ is defined in the statement of Theorem \ref{glob_as_stab}. It concludes the proof of Proposition \ref{local_as_stab} when $\omega=[0,L]$ and when $\mathfrak{sat}=\mathfrak{sat}_2$. \begin{remark} The constant $\mu$ depends on $u_0$, $r$ and $a_0$. Thus, although we have proven an exponential stability, the rapid stabilization is still an open question. Moreover, in the case $a(x)=a_0=a_1$ for all $x\in [0,L]$, which is the case where the gain is constant, we obtain that $$ \mu=\min\left\{a_0,\frac{u_0}{r}\right\}. $$ {\epsilon}nd{remark} \subsection{Proof of Proposition \ref{local_as_stab} when $\omega \subseteq [0,L]$} In this section, we have $\mathfrak{sat}=\mathfrak{sat}_2$ or $\mathfrak{sat}=\mathfrak{sat}_{\texttt{loc}}$. We follow the strategy of \cite{rosier2006global} and \cite{cerpa2013control}. We use a \color{blue} contradiction argument\color{black}. It is based on the following \color{blue} unique continuation result\color{black}. \begin{theorem}[\cite{saut1987unique}] \label{theorem-saut} Let $u\in L^2(0,T;H^3(0,L))$ be a solution of $$ u_t+u_x+u_{xxx}+uu_x=0 $$ such that $$u(t,x)=0,\: \forall t\in (t_1,t_2),\: \forall x\in\omega,$$ with $\omega$ an open nonempty subset of $(0,L)$. Then $$ u(t,x)=0,\: \forall t\in (t_1,t_2),\:\forall x\in (0,L). $$ {\epsilon}nd{theorem} Moreover, the following lemma will be used. \begin{lemma}[Aubin-Lions Lemma, \cite{simon1987compact}, Corollary 4] \label{aubin-lions} Let $X_0\subset X\subset X_1$ be three Banach spaces with $X_0$, $X_1$ reflexive spaces. Suppose that $X_0$ is compactly embedded in $X$ and $X$ is continuously embedded in $X_1$. Then $\lbrace h\in L^p(0,T;X_0)/\: \deltaot{h}\in L^q(0,T;X_1)\rbrace$ embeds compactly in $L^p(0,T;X)$ for any $1<p,\: q<\infty$. {\epsilon}nd{lemma} Let us now start the proof of Proposition \ref{local_as_stab}. Let $r>0$ and $y_0\in L^2(0,L)$ be such that \begin{equation} \label{alternative-r} \Vert y_0\Vert_{L^2(0,L)}\leq r. {\epsilon}nd{equation} \color{blue} As in the proof of Lemma \ref{global-estimation}, with multiplier techniques applied to {\epsilon}qref{nlkdv_sat}, we obtain \color{black} \begin{equation} \label{alternative1} \Vert y(t,.)\Vert^2_{L^2(0,L)}=\Vert y_0\Vert^2_{L^2(0,L)}-\int_0^t |y_x(\sigma,0)|^2d\sigma -2\int_0^t\int_0^L \mathfrak{sat}(ay)ydxd\sigma,\quad \forall t\in [0,T] {\epsilon}nd{equation} and \begin{equation} \label{alternative-H1} \Vert y\Vert^2_{L^2(0,T;H^1(0,L))}\leq \frac{8T+2L}{3}\Vert y_0\Vert^2_{L^2(0,L)}+\frac{TC}{27}\Vert y_0\Vert^4_{L^2(0,L)}. 
{\epsilon}nd{equation} Moreover, multiplying {\epsilon}qref{nlkdv_sat} by $(T-t)y$, we obtain after performing some integrations by parts \begin{equation} \label{rosier.3.13} T\Vert y_0\Vert^2_{L^2(0,L)}\leq \int_0^T \int_0^L |y(t,x)|^2dxdt+\int_0^T (T-t)|y_x(t,0)|^2dt+2\int_0^T (T-t) \int_0^L \mathfrak{sat}(ay)ydxdt. {\epsilon}nd{equation} Note that, since $\mathfrak{sat}$ is an odd function, {\epsilon}qref{alternative1} implies that, for all $t\in [0,T]$ \begin{equation} \label{dissipativity-alternative} \Vert y(t,.)\Vert^2_{L^2(0,L)}\leq \Vert y_0\Vert^2_{L^2(0,L)}. {\epsilon}nd{equation} From now on, we will separate the proof into two cases: $\mathfrak{sat}=\mathfrak{sat}_2$ and $\mathfrak{sat}=\mathfrak{sat}_{\texttt{loc}}$.\\ \textbf{Case 1:} $\mathfrak{sat}=\mathfrak{sat}_2$. Using {\epsilon}qref{l2-dissipativity}, we have, $$ \Vert y(T,.)\Vert_{L^2(0,L)}\leq r, $$ and we can apply the first item of Lemma \ref{sat-l2-local}. The inequality {\epsilon}qref{alternative1} becomes \begin{equation} \Vert y(T,.)\Vert^2_{L^2(0,L)}\leq \Vert y_0\Vert^2_{L^2(0,L)}-\int_0^T |y_x(t,0)|^2dt-2\int_0^T\int_0^L ak(r)y^2 dxdt. {\epsilon}nd{equation} Let us state a claim that will be useful in the following. \begin{claim} \label{claim-statement} For any $T>0$ and any $r>0$ there exists a positive constant $C_{14}=C_{14}(T,r)$ such that for any solution $y$ to {\epsilon}qref{nlkdv_sat} with an initial condition $y_0\in L^2(0,L)$ such that $\Vert y_0\Vert_{L^2(0,L)}\leq r$, it holds that \begin{equation} \label{claim} \Vert y_0\Vert^2_{L^2(0,L)}\leq C_{14}\left(\int_0^T |y_x(t,0)|^2dt+2\int_0^T\int_0^L k(r)a|y(t,x)|^2dxdt\right). {\epsilon}nd{equation} {\epsilon}nd{claim} Let us assume Claim \ref{claim-statement} for the time being. Then {\epsilon}qref{alternative1} implies \begin{equation} \Vert y(kT,.)\Vert^2_{L^2(0,L)}\leq \gamma^k\Vert y_0\Vert^2_{L^2(0,L)}\qquad \forall k\geq 0,\: \forall t\geq 0, {\epsilon}nd{equation} where $\gamma\in (0,1)$. From {\epsilon}qref{dissipativity-alternative}, we have $\Vert y(t,.)\Vert_{L^2(0,L)}\leq \Vert y(kT,.)\Vert_{L^2(0,L)}$ for $kT\leq t\leq (k+1)T$. Thus we obtain, for all $t\geq 0$, \begin{equation} \label{stability-absurd} \Vert y(t,.)\Vert^2_{L^2(0,L)}\leq \frac{1}{\gamma}\Vert y_0\Vert_{L^2(0,L)}e^{\frac{\log \gamma}{T}t}. {\epsilon}nd{equation} In order to prove Claim \ref{claim-statement}, since the solution to {\epsilon}qref{nlkdv_sat} satisfies {\epsilon}qref{rosier.3.13}, it is sufficient to prove that there exists some constant $C_{15}:=C_{15}(T,L)>0$ such that \begin{equation} \label{alternative2} \int_0^T\int_0^L |y|^2dxdt\leq C_{15}\left(\int_0^T |y_x(t,0)|^2dt+2\int_0^T\int_0^L k(r)ay^2dxdt\right) {\epsilon}nd{equation} provided that $\Vert y_0\Vert_{L^2(0,L)}\leq r$. We argue by contradiction to prove the existence of such a constant $C_{15}$. Suppose {\epsilon}qref{alternative2} fails to be true. Then, there exists a sequence of mild solutions $\lbrace y^n\rbrace_{n\in \mathbb{N}}\subseteq \mathcal{B}(T)$ of {\epsilon}qref{nlkdv_sat} with \begin{equation} \label{alternative-initial-conditions} \Vert y^n(0,.)\Vert_{L^2(0,L)}\leq r {\epsilon}nd{equation} and such that \begin{equation} \label{alternative-limit} \lim_{n\rightarrow +\infty}\frac{\Vert y^n\Vert^2_{L^2(0,T;L^2(0,L))}}{\int_0^T |y_x^n(t,0)|^2dt+2\int_0^T\int_0^L k(r)a(y^n)^2dxdt}=+\infty. 
{\epsilon}nd{equation} Note that {\epsilon}qref{alternative-initial-conditions} implies with {\epsilon}qref{dissipativity-alternative} that \begin{equation} \label{alternative3} \Vert y^n(t,.)\Vert_{L^2(0,L)}\leq r,\quad \forall t\in [0,T]. {\epsilon}nd{equation} Let $\lambda^n:=\Vert y^n\Vert_{L^2(0,T;L^2(0,L))}$ and $v^n(t,x)=\frac{y^n(t,x)}{\lambda^n}$. Notice that $\lbrace \lambda^n\rbrace_{n\in\mathbb{N}}$ is bounded, according to {\epsilon}qref{alternative3}. \color{blue} Hence, there exists a subsequence, that we continue to denote by $\lbrace\lambda^n\rbrace_{n\in\mathbb{N}}$ such that \color{black} $$ \lambda^n\rightarrow \lambda\geq 0\quad \text{as $n\rightarrow +\infty$}. $$ Then $v^n$ fullfills \begin{equation} \label{alternative4} \left\{ \begin{array}{l} v^n_t+v^n_{x}+v^n_{xxx}+\lambda^n v^nv^n_x+\frac{\mathfrak{sat}_2(a\lambda^n v^n)}{\lambda^n}=0,\\ v^n(t,0)=v^n(t,L)=v^n_x(t,L)=0,\\ \Vert v^n\Vert_{L^2(0,T;L^2(0,L))}=1, {\epsilon}nd{array} \right. {\epsilon}nd{equation} and, due to {\epsilon}qref{alternative-limit}, we obtain \begin{equation} \label{alternative-limit-l2} \int_0^T |v_x^n(t,0)|^2dt+2\int_0^T\int_0^L ak(r)(v^n)^2 dxdt\rightarrow 0\text{ as $n\rightarrow +\infty$.} {\epsilon}nd{equation} It follows from {\epsilon}qref{rosier.3.13} that $\lbrace v^n(0,.)\rbrace_{n\in\mathbb{N}}$ is bounded in $L^2(0,L)$. Note also that from {\epsilon}qref{alternative-H1} $\lbrace v^n\rbrace_{n\in\mathbb{N}}$ is bounded in $L^2(0,T;H^1(0,L))$. Thus we see that $\lbrace v^nv^n_x\rbrace_{n\in\mathbb{N}}$ is a subset of $L^2(0,T;L^1(0,L))$. In fact, \begin{equation} \Vert v^nv^n_x\Vert_{L^2(0,T;L^1(0,L))}\leq \Vert v^n\Vert_{C(0,T;L^2(0,L))}\Vert v^n\Vert_{L^2(0,T;H^1(0,L))}. {\epsilon}nd{equation} Moreover, we have that $\left\{\frac{\mathfrak{sat}_2(a\lambda^n v^n)}{\lambda^n}\right\}_{n\in\mathbb{N}}$ is a bounded sequence in $L^2(0,T;L^2(0,L))$. Indeed, from Lemma \ref{lipschitz-satl2} \begin{equation} \left\Vert \frac{\mathfrak{sat}_2(a\lambda^n v^n)}{\lambda^n}\right\Vert_{L^2(0,T;L^2(0,L))}\leq 3\Vert av^n \Vert_{L^2(0,T;L^2(0,L))}\leq 3a_1\sqrt{L}\Vert v^n\Vert_{L^2(0,T;H^1(0,L))}. {\epsilon}nd{equation} Thus $\lbrace v^nv^n_x\rbrace_{n\in\mathbb{N}}$ and $\left\{\frac{\mathfrak{sat}_2(a\lambda^n v^n)}{\lambda^n}\right\}_{n\in\mathbb{N}}$ are also subsets of $L^2(0,T;H^{-2}(0,L))$ since $L^2(0,L)\subset L^1(0,L)\subset H^{-1}(0,L)\subset H^{-2}(0,L)$. Combined with {\epsilon}qref{alternative4} it implies that $\lbrace v_t^n\rbrace_{n\in\mathbb{N}}$ is a bounded sequence in $L^2(0,T;H^{-2}(0,L))$. Since $\lbrace v^n\rbrace_{n\in\mathbb{N}}$ is a bounded sequence of $L^2(0,T;H^1(0,L))$, then we get with Lemma \ref{aubin-lions} that a subsequence of $\lbrace v^n\rbrace_{n\in\mathbb{N}}$ also denoted by $\lbrace v^n\rbrace_{n\in\mathbb{N}}$ converges strongly in $L^2(0,T;L^2(0,L))$ to a limit $v$. Moreover, with the last line of {\epsilon}qref{alternative4}, it holds that $\Vert v\Vert_{L^2(0,T;L^2(0,L))}=1$. Therefore, having in mind {\epsilon}qref{alternative-limit-l2}, we get \begin{equation} \Vert v_x(.,0)\Vert^2_{L^2(0,T)}+\int_0^T\int_0^L ak(r)v^2 dxdt\leq \liminf_{n\rightarrow +\infty} \left\{\Vert v^n_x(.,0)\Vert^2_{L^2(0,T)}+\int_0^T \int_0^L ak(r)(v^n)^2dxdt\right\}=0. {\epsilon}nd{equation} Thus, \begin{equation} ak(r)v^2(t,x)=0,\: \forall x\in [0,L],\forall t\in (0,T),\: \text{ and }\: v_x(t,0)=0,\: \forall t\in (0,T). 
\end{equation}
and therefore
\begin{equation}
\label{unique-continuation}
v(t,x)=0,\: \forall x\in \omega,\forall t\in (0,T),\: \text{ and }\: v_x(t,0)=0,\: \forall t\in (0,T).
\end{equation}
We obtain that the limit function $v$ satisfies
\begin{equation}
\label{kdv-unique-continuation}
\left\{
\begin{array}{l}
v_t+v_x+v_{xxx}+\lambda vv_x=0,\\
v(t,0)=v(t,L)=v_x(t,L)=0,\\
\Vert v\Vert_{L^2(0,T;L^2(0,L))}=1,
\end{array}
\right.
\end{equation}
with $\lambda\geq 0$. Let us consider $u:=v_t$ which satisfies
\begin{equation}
\label{kdv-unique-continuation-u}
\left\{
\begin{array}{l}
u_t+u_x+u_{xxx}+\lambda v_xu+\lambda vu_x=0,\\
u(t,0)=u(t,L)=u_x(t,L)=0,
\end{array}
\right.
\end{equation}
with $u(0,.)=-v^\prime(0,.)-v^{\prime\prime\prime}(0,.)-\lambda v(0,.)v^\prime(0,.)\in H^{-3}(0,L)$ and
$$
u(t,x)=0,\: \forall x\in \omega,\forall t\in (0,T),\: \text{ and }\: u_x(t,0)=0,\: \forall t\in (0,T).
$$
Let us recall the following result.
\begin{lemma}[\cite{pazoto2005localizeddamping}, Lemma 3.2]
There exists a constant $C_{16}=C_{16}(T,r)>0$ such that for any solution $u$ to \eqref{kdv-unique-continuation-u}, where $v$ is a solution to \eqref{kdv-unique-continuation}, it holds
\begin{equation}
\Vert u_x(.,0)\Vert^2_{L^2(0,T)}+\Vert u(0,.)\Vert^2_{H^{-3}(0,L)}\geq C_{16}\Vert u(0,.)\Vert^2_{L^2(0,L)}.
\end{equation}
\end{lemma}
Applying the result of this lemma, we get $u(0,.)\in L^2(0,L)$ and therefore $u=v_t\in\mathcal{B}(T)$. Since $v,v_t\in L^2(0,T;H^1(0,L))$ and $v\in C([0,T];H^1(0,L))$, we can conclude that $vv_x\in L^2(0,T;L^2(0,L))$. In this way, $v_{xxx}=-v_t-v_x-\lambda vv_x\in L^2(0,T;L^2(0,L))$ and therefore $v\in L^2(0,T;H^3(0,L))$. Finally, using Theorem \ref{theorem-saut}, we obtain
$$
v(t,x)=0,\quad \forall x\in [0,L],\: t\in [0,T].
$$
Thus we get a contradiction with $\Vert v\Vert_{L^2(0,T;L^2(0,L))}=1$. This concludes the proof of Claim \ref{claim-statement} and thus of Proposition \ref{local_as_stab} in the case where $\mathfrak{sat}=\mathfrak{sat}_2$. \\
\textbf{Case 2:} $\mathfrak{sat}=\mathfrak{sat}_{\texttt{loc}}$. Following the same strategy as before, we state the following claim.
\begin{claim}
\label{claim-statement1}
For any $T>0$ and any $r>0$, there exists a positive constant $C_{17}=C_{17}(T,r)$ such that for any mild solution $y$ to \eqref{nlkdv_sat} with an initial condition $y_0\in L^2(0,L)$ such that $\Vert y_0\Vert_{L^2(0,L)}\leq r$, it holds that
\begin{equation}
\label{claim1}
\Vert y_0\Vert^2_{L^2(0,L)}\leq C_{17}\left(\int_0^T |y_x(t,0)|^2dt+2\int_0^T \int_0^L \mathfrak{sat}_{\texttt{loc}}(ay(t,x))y(t,x)dtdx\right).
\end{equation}
\end{claim}
If Claim \ref{claim-statement1} holds, we also obtain \eqref{stability-absurd} for a suitable choice of $\gamma$, and we end the proof of Proposition \ref{local_as_stab} when $\mathfrak{sat}=\mathfrak{sat}_{\texttt{loc}}$. Due to \eqref{rosier.3.13}, we see that in order to prove Claim \ref{claim-statement1}, it is sufficient to obtain the existence of $C_{18}>0$ such that
\begin{equation}
\label{claim1-contradiction}
\int_0^T \int_0^L |y(t,x)|^2dtdx\leq C_{18}\left(\int_0^T |y_x(t,0)|^2dt+2\int_0^T \int_0^L \mathfrak{sat}_{\texttt{loc}}(ay(t,x))y(t,x)dtdx\right).
\end{equation}
We argue by contradiction to prove \eqref{claim1-contradiction}.
Then, we assume that there exists a sequence of mild solutions $\lbrace y^n\rbrace_{n\in\mathbb{ N}}\subseteq \mathcal{B}(T)$ to {\epsilon}qref{nlkdv_sat} with \begin{equation} \label{alternative-initial-conditions1} \Vert y^n(0,.)\Vert_{L^2(0,L)}\leq r {\epsilon}nd{equation} and such that \begin{equation} \label{alternative-limit1} \lim_{n\rightarrow +\infty}\frac{\Vert y^n\Vert^2_{L^2(0,T;L^2(0,L))}}{\int_0^T |y_x^n(t,0)|^2dt+2\int_{0}^T \int_0^L \mathfrak{sat}_{\texttt{loc}}(ay^n(t,x))y^n(t,x)dtdx}=+\infty. {\epsilon}nd{equation} Note that {\epsilon}qref{alternative-initial-conditions1} implies with {\epsilon}qref{dissipativity-alternative} that \begin{equation} \label{alternative31} \Vert y^n(t,.)\Vert_{L^2(0,L)}\leq r,\quad \forall t\in [0,T]. {\epsilon}nd{equation} Note that we have, from {\epsilon}qref{alternative-r} and {\epsilon}qref{alternative-H1} $$\Vert y^n\Vert^2_{L^2(0,T;H^1(0,L))}\leq \beta,$$ where $$ \beta:=\frac{8T+2L}{3}r^2+\frac{TC}{27}r^4. $$ Moreover, due to Poincar\'e inequality and the left Dirichlet boundary condition of {\epsilon}qref{nlkdv_sat}, we obtain \begin{equation} \sup_{x\in [0,L]} |y^n(t,x)|\leq \sqrt{L}\Vert y^n(t,.)\Vert_{H^1(0,L)},\quad \forall t\in [0,T]. {\epsilon}nd{equation} Thus, we see that \begin{equation} \label{omega1} \int_0^T |y^n(t,x)|^2dt\leq L\Vert y^n\Vert^2_{L^2(0,T;H^1(0,L))}\leq L\beta. {\epsilon}nd{equation} Now let us consider $\Omega_i\subset [0,T]$ defined as follows \begin{equation} \Omega_i=\left\{ t\in [0,T],\: \sup_{x\in [0,L]}|y(t,x)| >i\right\}. {\epsilon}nd{equation} In the following, we will denote by $\Omega_i^c$ its complement. It is defined by \begin{equation} \Omega_i^c=\left\{ t\in [0,T],\: \sup_{x\in [0,L]}|y(t,x)| \leq i\right\}. {\epsilon}nd{equation} Since the function $t\mapsto \sup_{x\in [0,L]}|y^n(t,x)|^2$ is a nonnegative function, we have \begin{equation} \label{omega2} \int_0^T \sup_{x\in [0,L]}|y^n(t,x)|^2dt \geq \int_{\Omega_i} \sup_{x\in [0,L]}|y^n(t,x)|^2dt\geq i^2 \nu(\Omega_i), {\epsilon}nd{equation} where $\nu(\Omega_i)$ denotes the Lebesgue measure of $\Omega_i$. Therefore, with {\epsilon}qref{omega1}, we obtain \begin{equation} \nu(\Omega_i)\leq \frac{L\beta}{i^2}. {\epsilon}nd{equation} We deduce from the previous equation that \begin{equation} \label{lebesgue-measure-satloc} \max\left(T-\frac{L\beta}{i^2},0\right) \leq \nu(\Omega_i^c)\leq T. {\epsilon}nd{equation} Moreover, with the second item of Lemma \ref{sat-l2-local}, we have, for all $i\in\mathbb{N}$, \begin{eqnarray} \label{sector-condition-satloc} \int_0^T \int_0^L \mathfrak{sat}_{\texttt{loc}}(ay^n)y^ndtdx= &&\int_{\Omega_i}\int_0^L \mathfrak{sat}_{\texttt{loc}}(ay^n)y^ndtdx+\int_{\Omega_i^c}\int_0^L \mathfrak{sat}_{\texttt{loc}}(ay^n)y^ndtdx\nonumber \\ \geq && \int_{\Omega_i^c}\int_0^L \mathfrak{sat}_{\texttt{loc}}(ay^n)y^ndtdx\nonumber \\ \geq && \int_{\Omega_i^c} \int_0^L ak(i)(y^n)^2dtdx. {\epsilon}nd{eqnarray} Let $\lambda^n:=\Vert y^n\Vert_{L^2(0,T;L^2(0,L))}$ and $v^n(t,x)=\frac{y^n(t,x)}{\lambda^n}$. Notice that $\lbrace\lambda^n\rbrace_{n\in\mathbb{N}}$ is bounded, according to {\epsilon}qref{alternative31}. Hence, there exists a subsequence, that we continue to denote by $\lbrace\lambda^n\rbrace_{n\in\mathbb{N}}$ such that $$ \lambda^n\rightarrow \lambda\geq 0,\text{ as $n\rightarrow +\infty$}. 
$$
Then, $v^n$ fulfills
\begin{equation}
\label{alternative41}
\left\{
\begin{array}{l}
v^n_t+v^n_{x}+v^n_{xxx}+\lambda^n v^nv^n_x+\frac{\mathfrak{sat}_{\texttt{loc}}(a\lambda^n v^n)}{\lambda^n}=0,\\
v^n(t,0)=v^n(t,L)=v^n_x(t,L)=0,\\
\Vert v^n\Vert_{L^2(0,T;L^2(0,L))}=1,
\end{array}
\right.
\end{equation}
and, due to \eqref{alternative-limit1},
$$
\int_0^T |v_x^n(t,0)|^2dt+2\int_0^T \int_0^L \frac{\mathfrak{sat}_{\texttt{loc}}(a\lambda^n v^n)}{\lambda^n}v^n dtdx\rightarrow 0\text{ as $n\rightarrow +\infty$.}
$$
Moreover, due to \eqref{sector-condition-satloc}, we have, for all $i\in\mathbb{N}$,
\begin{equation}
\label{convergence-absurde-satloc}
\int_0^T |v_x^n(t,0)|^2dt+2\int_{\Omega_i^c} \int_0^L ak(i)(v^n)^2dtdx\rightarrow 0\text{ as $n\rightarrow +\infty$.}
\end{equation}
Note that from Lemma \ref{lipschitz-satl2},
\begin{equation}
\left\Vert \frac{\mathfrak{sat}_{\texttt{loc}}(a\lambda^n v^n)}{\lambda^n}\right\Vert_{L^2(0,T;L^2(0,L))}\leq 3a_1\sqrt{L}\Vert v^n\Vert_{L^2(0,T;H^1(0,L))},
\end{equation}
and therefore the sequence $\left\{ \frac{\mathfrak{sat}_{\texttt{loc}}(a\lambda^n v^n)}{\lambda^n}\right\}_{n\in\mathbb{N}}$ is bounded in $L^2(0,T;L^2(0,L))$. In addition, $\left\{ v^nv^n_x\right\}_{n\in\mathbb{N}}$ is a bounded sequence in $L^2(0,T;L^1(0,L))$. Note that $L^2(0,L)\subset L^1(0,L)\subset H^{-2}(0,L)$, thus $\left\{ \frac{\mathfrak{sat}_{\texttt{loc}}(a\lambda^n v^n)}{\lambda^n}\right\}_{n\in\mathbb{N}}$ and $\left\{ v^nv^n_x\right\}_{n\in\mathbb{N}}$ are bounded sequences in $L^2(0,T;H^{-2}(0,L))$. Since $v^n_t=-v^n_x-v^n_{xxx}-\lambda^n v^nv^n_x-\frac{\mathfrak{sat}_{\texttt{loc}}(a\lambda^n v^n)}{\lambda^n}$, we know that $\lbrace v^n_t\rbrace_{n\in\mathbb{N}}$ is a bounded sequence in $L^2(0,T;H^{-2}(0,L))$. Since $\lbrace v^n\rbrace_{n\in\mathbb{N}}$ is a bounded sequence in $L^2(0,T;H^1(0,L))$, we obtain from Lemma \ref{aubin-lions} that $\lbrace v^n\rbrace_{n\in \mathbb{N}}$ converges strongly (up to a subsequence) to a function $v$ in $L^2(0,T;L^2(0,L))$. Furthermore, with \eqref{convergence-absurde-satloc} and due to the non-negativity of $k$, we have, for all $i\in\mathbb{N}$,
\begin{equation}
\label{unique-continuation2}
ak(i)v(t,x)=0,\: \forall x\in [0,L],\forall t\in \Omega_i^c,\: \text{ and }\: v_x(t,0)=0,\: \forall t\in (0,T).
\end{equation}
Thus, since for all $i\in\mathbb{N}$, $k(i)$ is strictly positive, we have
\begin{equation}
\label{unique-continuation3}
v(t,x)=0,\: \forall x\in \omega,\forall t\in \Omega_i^c,\: \text{ and }\: v_x(t,0)=0,\: \forall t\in (0,T).
\end{equation}
We obtain
\begin{equation}
\label{unique-continuation4}
v(t,x)=0,\: \forall x\in \omega,\forall t\in \bigcup_{i\in\mathbb{N}}\Omega_i^c ,\: \text{ and }\: v_x(t,0)=0,\: \forall t\in (0,T).
\end{equation}
Since, with \eqref{lebesgue-measure-satloc}, we know that $\nu\left(\bigcup_{i\in\mathbb{N}}\Omega_i^c\right)=T$, we get that, for almost every $t\in [0,T]$,
\begin{equation}
\label{unique-continuation5}
v(t,x)=0,\: \forall x\in \omega ,\: \text{ and }\: v_x(t,0)=0.
\end{equation}
We obtain that $v$ fulfills
\begin{equation}
\label{kdv-uc-satl}
\left\{
\begin{array}{l}
v_t+v_x+v_{xxx}+\lambda vv_x=0,\\
v(t,0)=v(t,L)=v_x(t,L)=0,\\
\Vert v\Vert_{L^2(0,T;L^2(0,L))}=1.
\end{array}
\right.
\end{equation}
Thus $v$ is a solution to a Korteweg-de Vries equation. In particular, it belongs to $\mathcal{B}(T)$ and is consequently in $C(0,T;L^2(0,L))$.
Therefore, \eqref{unique-continuation5} becomes
\begin{equation}
\label{unique-continuation6}
v(t,x)=0,\: \forall x\in \omega,\forall t\in [0,T],\: \text{ and }\: v_x(t,0)=0,\: \forall t\in (0,T).
\end{equation}
We are in the same situation as \eqref{kdv-unique-continuation}. Therefore we obtain once again a contradiction. We can conclude that Claim \ref{claim-statement1} is true. This concludes the case $\mathfrak{sat}=\mathfrak{sat}_{\texttt{loc}}$ and thus completes the proof of Proposition \ref{local_as_stab}. \null $\Box$
\begin{remark}
Since the strategy followed in the last section is to argue by contradiction, we cannot estimate the exponential rate $\mu$. However, such a proof allows us to prove the semi-global exponential stability of the solution whatever the saturation $\mathfrak{sat}$ is.
\end{remark}
\subsection{Proof of Theorem \ref{glob_as_stab}}
\label{section_astuce}
We are now in a position to prove Theorem \ref{glob_as_stab}, following \cite{rosier2006global}. By Proposition \ref{local_as_stab}, there exists $\mu^{\star}>0$ such that if
\begin{equation}
\Vert \tilde{y}_0\Vert_{L^2(0,L)}\leq 1,
\end{equation}
then the corresponding solution $\tilde{y}$ to (\ref{nlkdv_sat}) satisfies
\begin{equation}
\label{1-nl-stab-lyap}
\Vert \tilde{y}(t,.)\Vert_{L^2(0,L)}\leq K_1\Vert \tilde{y}_0\Vert_{L^2(0,L)}e^{-\mu^{\star}t}\qquad \forall t\geq 0,
\end{equation}
for some constant $K_1\geq 1$ which depends only on $\Vert \tilde{y}_0\Vert_{L^2(0,L)}$. In addition, for a given $r>0$, there exist two constants $K_r>0$ and $\mu_r>0$ such that if $\Vert y_0\Vert_{L^2(0,L)}\leq r$, then any mild solution $y$ to (\ref{nlkdv_sat}) satisfies
\begin{equation}
\Vert y(t,.)\Vert_{L^2(0,L)}\leq K_r \Vert y_0\Vert_{L^2(0,L)}e^{-\mu_r t}\qquad \forall t\geq 0.
\end{equation}
Consequently, setting $T_r:=\mu_r^{-1}\ln(rK_r)$, we have
$$
\Vert y_0\Vert_{L^2(0,L)}\leq r\Rightarrow \Vert y(T_r,.)\Vert_{L^2(0,L)}\leq 1.
$$
Therefore, using \eqref{1-nl-stab-lyap}, we obtain
\begin{equation}
\begin{array}{rcl}
\Vert y(t,.)\Vert_{L^2(0,L)} &\leq& K_1\Vert y(T_r,.)\Vert_{L^2(0,L)}e^{-\mu^{\star}(t-T_r)}\qquad \forall t\geq T_r,\\
&\leq& K_1K_r\Vert y_0\Vert_{L^2(0,L)}e^{\mu^{\star}T_r}e^{-\mu^{\star}t}\qquad \forall t\geq 0.
\end{array}
\end{equation}
This concludes the proof of Theorem \ref{glob_as_stab}. \hspace*{1pt} $\Box$
\begin{remark}
As noticed in Remark \ref{remark-gkdv}, the same result as Theorem \ref{glob_as_stab} can be obtained for \eqref{gkdv-saturated} following the strategy of \cite{rosier2006global}. Note that in \cite{rosier2006global}, a stabilization in $H^3(0,L)$ is obtained. The authors used a strategy similar to the one described in Remark \ref{remark-gkdv}. Hence, it seems harder to obtain such a result for \eqref{gkdv-saturated}, since the saturation introduces some non-smoothness.
\end{remark}
\section{Simulations}
\label{sec_simu}
In this section we provide some numerical simulations showing the effectiveness of our control design. In order to discretize our KdV equation, we use a finite difference scheme inspired by \cite{nm_KdV}. The final time is denoted $T_{final}$. We choose $(N_x+1)$ points to build a uniform spatial discretization of the interval $[0,L]$ and $(N_t+1)$ points to build a uniform time discretization of the interval $[0,T_{final}]$.
We pick a space step defined by $dx=L/N_x$ and a time step defined by $dt=T_{final}/N_t$. We approximate the solution with the notation $y(t,x)\approx Y^i_j$, where $i$ denotes the discrete time variable and $j$ the discrete space variable. The approximations of the first derivative that we use are given by
\begin{equation}
\mathcal{D}_-y=\frac{Y^i_j-Y^i_{j-1}}{dx}
\end{equation}
and
\begin{equation}
\mathcal{D}_+y=\frac{Y^i_{j+1}-Y^i_{j}}{dx}.
\end{equation}
As in \cite{nm_KdV}, we choose the numerical scheme $y_x(t,x)\approx\frac{1}{2}(\mathcal{D}_++\mathcal{D}_-)(Y^i_j):=\mathcal{D}(Y^i_j)$ and $y_t(t,x)\approx \frac{Y^{i+1}_j-Y^i_j}{dt}$. For the third-order derivative, we use $y_{xxx}(t,x)\approx \mathcal{D}_+\mathcal{D}_+\mathcal{D}_-(Y^i_j)$. Let us introduce a matrix notation. Let us consider the matrices $D_-, D_+, D \in\mathbb{R}^{N_x\times N_x}$ given by
\begin{equation}
D_-=\frac{1}{dx}\begin{bmatrix} 1 & 0 & \ldots & \ldots & 0\\ -1 & 1 & \ddots & & \vdots\\ 0 & \ddots & \ddots & \ddots & \vdots\\ \vdots & \ddots & \ddots & 1 & 0\\ 0 & \ldots & 0 & -1 & 1 \end{bmatrix},\: D_+=\frac{1}{dx}\begin{bmatrix} -1 & 1 & 0 & \ldots & 0\\ 0 & -1 & 1 & \ddots & \vdots\\ \vdots & \ddots & \ddots & \ddots & 0 \\ \vdots & & \ddots & -1 & 1\\ 0 & \ldots &\ldots & 0 & -1 \end{bmatrix}
\end{equation}
\begin{equation}
D:=\frac{1}{2}(D_++D_-)
\end{equation}
and let us define $\mathcal{A}=D_+D_+D_-+D$ and $\mathcal{C}=I+dt\,\mathcal{A}$, where $I$ is the identity matrix in $\mathcal{M}_{N_x\times N_x}(\mathbb{R})$. Note that we choose this forward difference approximation in order to obtain a positive definite matrix $\mathcal{C}$. Moreover, for each discrete time $i$, we denote $Y^i:=\begin{bmatrix} Y^i_1 & Y^i_2 & \ldots & Y_{N_x+1}^i \end{bmatrix}^{\top}$. Thus, inspired by \cite{nm_KdV}, we consider a completely implicit numerical scheme for the approximation of the nonlinear problem \eqref{nlkdv_sat} which reads as follows:
\begin{equation}
\label{numerical_scheme}
\left\{
\begin{array}{l}
\displaystyle \frac{Y_j^{i+1}-Y_j^{i}}{dt}+(\mathcal{A}Y^{i+1})_j+\frac{1}{2}\left(D[(Y^{i+1})^2]\right)_j+\mathfrak{sat}(a_\delta Y_j^{i+1})=0,\quad j=1,\ldots,N_x,\\
Y^{i}_1=Y^i_{N_x+1}=Y^i_{N_x}=0,\\
Y^1_j=\int_{x_{j-\frac{1}{2}}}^{x_{j+\frac{1}{2}}}y_0(x)dx,
\end{array}\right.
\end{equation}
where $x_{j\pm\frac{1}{2}}=(j\pm\frac{1}{2})dx$, $x_j=jdx$ and $Y^1$ denotes the discretized version of the initial condition $y_0(x)$. Note that $a_\delta$ is the approximation of the damping function $a=a(x)$ and is given by $a_\delta=\left( a_j\right)_{j=1}^{N_x}\in\mathbb{R}^{N_x}$, where each component $a_j$ is defined by $a_j:=\int_{x_{j-\frac{1}{2}}}^{x_{j+\frac{1}{2}}} a(x)dx$. Since we have the nonlinearities $yy_x$ and $\mathfrak{sat}(ay)$, we use an iterative fixed-point method to solve the nonlinear system
$$\mathcal{C}Y^{i+1}=Y^i-dt\frac{1}{2}D(Y^{i+1})^2-dt\,\mathfrak{sat}\left(a_\delta Y^{i+1}\right).$$
With $N_{iter}=5$, which denotes the number of iterations of the fixed-point method, we get good approximations of the solutions. Note that for sufficiently large $N_{iter}$ the solutions can be approximated with this fixed-point method. Given $Y^1$ satisfying \eqref{numerical_scheme}, the following is the structure of the algorithm used in our simulations.
\\
\fbox{\begin{minipage}{15cm}
\textbf{For $i=1:N_t$}\\
\begin{itemize}
\item[$\bullet$] $Y_{1}^{i}=Y^{i}_{N_x}=Y^{i}_{N_{x}+1}=0$;
\item[$\bullet$] Setting $J(1)=Y^i$, for all $k\in\lbrace 1,\ldots, N_{iter}\rbrace$, solve
$$J(k+1)=\mathcal{C}^{-1}(Y^i-dt\frac{1}{2}D(J(k))^2-dt\,\mathfrak{sat}(a_\delta J(k)))$$\\
Set $Y^{i+1}=J(N_{iter})$
\end{itemize}
\textbf{end}
\end{minipage}}\\
In order to illustrate our theoretical results, we perform some simulations with $L= 2\pi$, for which we know that the linearized KdV equation is not asymptotically stable. To be more specific, letting $y_0(x)=1-\cos(x)$ and $f=0$, the energy $\Vert y\Vert^2_{L^2(0,L)}$ of the linearized equation \eqref{lkdv} remains constant for all $t\geq 0$. Let us perform a simulation of \eqref{nlkdv_sat} with these parameters. We first simulate our system in the case where the damping is not localized. We use the saturation function $\mathfrak{sat}_2$. Given $a_0=1$, $T_{final}=6$ and $L=2\pi$, Figure \ref{figure1} shows the solution to \eqref{nlkdv}, denoted by $y_w$, with the unsaturated control $f=a_0y_w$ and starting from $y_0$. Figure \ref{figure2} illustrates the simulated solution with the same initial condition and a saturated control $f=\mathfrak{sat}_2(a_0y)$ where $u_0=0.5$. Figure \ref{figure3} gives the evolution of the control with respect to the time and the space. We check in Figures \ref{figure1} and \ref{figure2} that the solution converges to $0$ with both the unsaturated and the saturated controls, as proven in Theorem \ref{glob_as_stab}. The evolution of the $L^2$-energy of the solution in these two cases is given in Figure \ref{figure4}. With $\Vert y_0\Vert_{L^2(0,L)}\approx 3.07$ and the values of $u_0$, $a_0$ and $a_1$, the value $\mu$ is computed numerically following the formula \eqref{formula-mu} given in Theorem \ref{glob_as_stab}. It is equal to $\mu=0.3257$. We deduce from the second point of Theorem \ref{glob_as_stab} that the energy function $\Vert y\Vert^2_{L^2(0,L)}$ converges exponentially to $0$ with an explicit decay rate given by $\mu$.
\begin{figure}[H]
\begin{minipage}[c]{.50\linewidth}
\includegraphics[scale=0.6]{solutiondeuxpi-without.eps}
\caption{Solution $y_w(t,x)$ with the control $f=a_0y_w$ where $\omega=[0,L]$}
\label{figure1}
\end{minipage}
\begin{minipage}[c]{.46\linewidth}
\includegraphics[scale=0.6]{solutiondeuxpi-0_5.eps}
\caption{Solution $y(t,x)$ with the control $f=\mathfrak{sat}_2(a_0y)$ where $\omega=[0,L]$, $u_0=0.5$}
\label{figure2}
\end{minipage}
\end{figure}
\begin{figure}[H]
\begin{minipage}[c]{.46\linewidth}
\includegraphics[scale=0.6]{controldeuxpi-0_5.eps}
\caption{Control $f=\mathfrak{sat}_2(a_0 y)(t,x)$ where $\omega=[0,L]$, $u_0=0.5$}
\label{figure3}
\end{minipage}
\begin{minipage}[c]{.46\linewidth}
\includegraphics[scale=0.6]{lyapdeuxpi-0_5.eps}
\caption{Blue: Time evolution of the energy function $\Vert y\Vert^2_{L^2(0,L)}$ with a saturation level $u_0=0.5$ and $a_0=1$. Red: Time evolution of the theoretical bound $\Vert y_0\Vert^2_{L^2(0,L)}e^{-2\mu t}$. Dotted line: Time evolution of the energy of the unsaturated solution $y_w$ with $a_0=1$.}
\label{figure4}
\end{minipage}
\end{figure}
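For the reader's convenience, the following listing is a minimal NumPy sketch of the time-stepping loop summarized in the box above; it is not the code used to produce the figures. The specific parameter values, the constant gain $a\equiv a_0$ on the whole domain, the choice $\mathfrak{sat}=\mathfrak{sat}_2$, and the crude enforcement of the boundary conditions after each step are illustrative assumptions only.
\begin{verbatim}
import numpy as np

# Illustrative parameters (not necessarily those used for the figures)
L, T_final = 2.0 * np.pi, 6.0
Nx, Nt, N_iter = 100, 600, 5
dx, dt = L / Nx, T_final / Nt
a0, u0 = 1.0, 0.5

# Difference matrices D_-, D_+ and D = (D_+ + D_-)/2
Dm = (np.eye(Nx) - np.eye(Nx, k=-1)) / dx
Dp = (np.eye(Nx, k=1) - np.eye(Nx)) / dx
D  = 0.5 * (Dp + Dm)
A  = Dp @ Dp @ Dm + D            # discretization of y_xxx + y_x
C  = np.eye(Nx) + dt * A         # implicit-step matrix

a_delta = a0 * np.ones(Nx)       # constant damping on the whole domain (assumption)

def sat2(v):
    """L^2 saturation sat_2: rescale v when its L^2(0,L) norm exceeds u0."""
    n = np.sqrt(dx * np.sum(v ** 2))
    return v if n <= u0 else u0 * v / n

x = dx * np.arange(1, Nx + 1)
Y = 1.0 - np.cos(x)              # discretized initial condition y_0(x) = 1 - cos(x)

energy = [dx * np.sum(Y ** 2)]
for i in range(Nt):
    J = Y.copy()
    for _ in range(N_iter):      # fixed-point iterations on the nonlinear terms
        rhs = Y - 0.5 * dt * (D @ (J ** 2)) - dt * sat2(a_delta * J)
        J = np.linalg.solve(C, rhs)
    Y = J
    Y[0] = Y[-2] = Y[-1] = 0.0   # y(t,0) = y(t,L) = y_x(t,L) = 0 (crude enforcement)
    energy.append(dx * np.sum(Y ** 2))

print("discrete L^2 energy at t = 0 and t = T_final:", energy[0], energy[-1])
\end{verbatim}
The printed discrete energy decreases over time, in agreement with the dissipation property \eqref{l2-dissipativity}.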
We now focus on the case where the damping is localized. We close the loop with the saturated controller $f=\mathfrak{sat}_{\texttt{loc}}(ay)$ where $a$ is defined by $a(x)=a_0=1$, for all $x\in\omega:=\left[\frac{1}{3}L,\frac{2}{3}L\right].$ Given $T_{final}=6$, Figure \ref{figure7} shows the simulated solution of \eqref{nlkdv}, denoted by $y_w$, with a localized control that is not saturated and starting from $y_0$. Figure \ref{figure8} illustrates the simulated solution to \eqref{nlkdv_sat} with the same initial condition, but with a localized saturated control whose saturation level is given by $u_0=0.5$. We check, in Figures \ref{figure7} and \ref{figure8}, that the mild solution to \eqref{nlkdv_sat} converges to $0$ as stated in Theorem \ref{glob_as_stab}. Moreover, Figure \ref{figure9} gives the evolution of the control with respect to the time and the space. The evolution of the $L^2$-energy of the solution in these two last cases is given in Figure \ref{figure10}. We can see that the energy function $\Vert y\Vert^2_{L^2(0,L)}$ converges exponentially to $0$ as stated in Proposition \ref{local_as_stab}. However, in contrast to the case $\mathfrak{sat}=\mathfrak{sat}_2$ and $\omega=[0,L]$, we cannot obtain an estimate of the decay rate, since our proof is based on a contradiction argument.
\begin{figure}[H]
\begin{minipage}[c]{.46\linewidth}
\includegraphics[scale=0.6]{solutiondeuxpi-without-loc.eps}
\caption{Solution $y_w(t,x)$ with a localized feedback law without saturation}
\label{figure7}
\end{minipage}
\begin{minipage}[c]{.46\linewidth}
\includegraphics[scale=0.6]{solutiondeuxpi-0_5-loc.eps}
\caption{Solution $y(t,x)$ with a saturated localized feedback law; $u_0=0.5$}
\label{figure8}
\end{minipage}
\end{figure}
\begin{figure}[H]
\begin{minipage}[c]{.46\linewidth}
\includegraphics[scale=0.6]{controldeuxpi-0_5-loc.eps}
\caption{Control $f=\mathfrak{sat}_{\texttt{loc}}(a y)(t,x)$ where $\omega=\left[\frac{1}{3}L,\frac{2}{3}L\right]$, $u_0=0.5$}
\label{figure9}
\end{minipage}
\begin{minipage}[c]{.46\linewidth}
\includegraphics[scale=0.6]{lyapdeuxpi-without-loc.eps}
\caption{Blue: Time evolution of the energy function $\Vert y\Vert^2_{L^2(0,L)}$ with a saturation level $u_0=0.5$, $a_0=1$ and $\omega=\left[\frac{1}{3}L,\frac{2}{3}L\right]$. Dotted line: Time evolution of the energy of the unsaturated solution $y_w$ with $a_0=1$ and $\omega=\left[\frac{1}{3}L,\frac{2}{3}L\right]$.}
\label{figure10}
\end{minipage}
\end{figure}
\section{Conclusion}
\label{sec_conc}
In this paper, we have studied the well-posedness and the asymptotic stability of a Korteweg-de Vries equation with saturated distributed controls. The well-posedness issue has been tackled by using the Banach fixed-point theorem. The stability has been studied with two different methods: in the case where the control acts on the whole domain and is saturated with $\mathfrak{sat}_2$, we used a sector condition and Lyapunov theory for infinite-dimensional systems; in the case where the control acts only on a part of the domain and is saturated with either $\mathfrak{sat}_2$ or $\mathfrak{sat}_{\texttt{loc}}$, we argued by contradiction. We illustrated our results with some simulations, which show that the smaller the saturation level, the slower the convergence to zero. To conclude, let us state some questions arising in this context: 1.
Can a saturated localized damping stabilize in $H^3(0,L)$ a generalized Korteweg-de Vries equation, as done in the unsaturated case in \cite{rosier2006global} and \cite{linares2007exponential} ? \color{black} 2. \color{blue} Is it possible to saturate other damping terms, for instance the one suggested in \cite{perla-vasconcellos-zuazua} and used in \cite{massarolo} which dissipates the $H^{-1}$-norm in the unsaturated case? \color{black} 3. Some boundary controls have been already designed in \cite{cerpa_coron_backstepping}, \cite{coron2014local}, \cite{tang2013stabKdV} or \cite{cerpa2009rapid}. By saturating these controllers, are the corresponding equations still stable? 4. Another constraint than the saturation can be considered. For instance the backlash studied in \cite{tarbouriech2014stability} or the quantization \cite{ferrante2015quantization}. 5. Can we apply the same method for other nonlinear partial differential equations, for instance the Kuramoto-Sivashinsky equation \cite{cerpa2010cKSl, pato2014controlKS} ? \\%An interesting model could be the one-dimensional Burgers equation given by \textbf{Acknowledgements.} The authors would like to thank Lionel Rosier for having attracted our attention to the article \cite{rosier2006global} and for fruitful discussions. {\epsilon}nd{document}
\begin{document}
\title[On generalized max-linear models in max-stable random fields]{On generalized max-linear models in max-stable random fields}
\author{Michael Falk, Maximilian Zott}
\address{University of W\"{u}rzburg, Institute of Mathematics, Emil-Fischer-Str. 30, 97074 W\"{u}rzburg, Germany.}
\email{[email protected], [email protected]}
\subjclass[2010]{Primary 60G70}
\keywords{Multivariate extreme value distribution $\bullet$ max-stable random field $\bullet$ $D$-norm $\bullet$ max-linear model $\bullet$ stochastic interpolation}
\begin{abstract}
In practice, it is not possible to observe a whole max-stable random field. Therefore, we propose a way to reconstruct a max-stable random field in $C\left([0,1]^k\right)$ by interpolating its realizations at finitely many points. The resulting interpolating process is again a max-stable random field. This approach uses a \emph{generalized max-linear model}. Promising results have been established in the case $k=1$ in a previous paper. However, the extension to higher dimensions is not straightforward since we lose the natural order of the index space.
\end{abstract}
\maketitle
\section{Introduction and Preliminaries}
\citet{domeyri12} derive an algorithm to sample from the regular conditional distribution of a max-stable random field $\bm\eta$, say, given the marginal observations $\eta_{s_1}=z_1,\dots,\eta_{s_d}=z_d$ for some $z_1,\dots,z_d$ from the state space and $d$ locations $s_1,\dots,s_d$. This, clearly, concerns the \emph{distribution} of $\bm\eta$ and derived distributional parameters. Different to that, we try to \emph{reconstruct} $\bm\eta$ from the observations $\eta_{s_1},\dots,\eta_{s_d}$. This is done by a \emph{generalized max-linear model} in such a way that the interpolating process $\hat{\bm\eta}$ is again a (standard) max-stable random field. As our approach is deterministic once the observations $\eta_{s_1}=z_1,\dots,\eta_{s_d}=z_d$ are given, a proper way to measure the performance of our approach is the \emph{mean squared error} (MSE). Convergence of the pointwise MSE as well as of the integrated MSE (IMSE) is established if the set of grid points $s_1,\dots,s_d$ gets dense in the index space.
A \emph{max-stable random process} with index set $T$ is a family of random variables $\bm\xi=(\xi_t)_{t\in T}$ with the property that there are functions $a_n:T\to\mathbb{R}^+_0$ and $b_n:T\to\mathbb{R}$, $n\in\mathbb{N}$, such that
\[
\left(\max_{i=1,\dotsc,n}\left(\frac{\xi^{(i)}_t-b_n(t)}{a_n(t)}\right)\right)_{t\in T}=_d\bm\xi,
\]
where $\bm\xi^{(i)}=(\xi^{(i)}_t)_{t\in T}$, $i=1,\dotsc,n$, are independent copies of $\bm\xi$ and '$=_d$' denotes equality in distribution. We get a max-stable random vector (rv) on $\mathbb{R}^d$ by putting $T=\{1,\dotsc,d\}$. Different to that, we obtain a max-stable process with continuous sample paths on some compact metric space $S$, if we set $T=S$ and require that the sample paths $\bm\xi(\omega):S\to\mathbb{R}$ realize in $C(S)=\{g\in\mathbb{R}^S:~g\text{ continuous}\}$, and that the norming functions $a_n,b_n$ are continuous as well. Max-stable random vectors and processes have been investigated intensely over the last decades. For detailed reviews of max-stable rv and processes, see for instance the monographs of \citet{beirgotese04}, \citet{dehaf06}, \citet{resn08}, \citet{fahure10} and \citet{davpari12}, among others.
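As a simple numerical illustration of the defining max-stability property above, consider the univariate case with Gumbel margins, where one may take $a_n\equiv 1$ and $b_n\equiv\log n$. The following small simulation (an illustrative sketch only; the choice of Gumbel margins and of the norming constants is ours) compares the empirical distribution function of the renormalized maximum of $n$ independent copies with the Gumbel distribution function.
\begin{verbatim}
import numpy as np

# Sanity check: for Gumbel margins, max of n iid copies minus log(n)
# (i.e. a_n = 1, b_n = log n) is again Gumbel distributed.
rng = np.random.default_rng(0)
n, m = 50, 200_000                       # block size and number of blocks
xi = rng.gumbel(loc=0.0, scale=1.0, size=(m, n))
block_max = xi.max(axis=1) - np.log(n)   # (max_i xi_i - b_n) / a_n with a_n = 1

for x in (-1.0, 0.0, 1.0, 2.0):
    empirical = np.mean(block_max <= x)
    gumbel_cdf = np.exp(-np.exp(-x))
    print(f"x = {x:+.1f}: empirical {empirical:.4f}  Gumbel cdf {gumbel_cdf:.4f}")
\end{verbatim}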
Max-stable rv and processes are of enormous interest in extreme value theory since they are the only possible limit of linearly standardized maxima of independent and identically distributed rv or processes. Clearly, the univariate margins of a max-stable random process are max-stable distributions on the real line. A max-stable random object $\bm\xi=(\xi_t)_{t\in T}$ is commonly called \emph{simple max-stable} in the literature if each univariate margin is unit Fr\'{e}chet distributed, i.\,e. $P(\xi_t\leq x)=\exp\left(-x^{-1}\right)$, $x>0$, $t\in T$. Different to that, we call a random process $\bm\eta=(\eta_t)_{t\in T}$ \emph{standard max-stable} if all univariate marginal distributions are standard negative exponential, i.\,e. $P(\eta_t\leq x)=\exp\left(x\right)$, $x\leq0$, $t\in T$. The transformation to simple/standard margins does not cause any problems, neither in the case of rv (see e.\,g. \citet{dehar77} or \citet{resn08}), nor in the case of rf with continuous sample paths (see e.\,g. \citet{ginhv90}). It is well known (e.g. \citet{dehar77}, \citet{pick81}, \citet{fahure10}) that a rv $(\eta_1,\dotsc,\eta_d)$ is a \emph{standard max-stable rv} iff there exists a rv $(Z_1,\dotsc,Z_d)$ and some number $c\geq1$ with $Z_i\in[0,c]$ almost surely (a.\,s.) and $E(Z_i)=1$, $i=1,\dotsc,d$, such that for all $\bm x=(x_1,\dotsc,x_d)\leq\bm 0\in\mathbb{R}^d$ \[ P(\eta_1\leq x_1,\dotsc,\eta_d\leq x_d)=\exp\left(-\norm{\bm x}_D\right):=\exp\left(-E\left(\max_{i=1,\dotsc,d}\left(\abs{x_i}Z_i\right)\right)\right). \] The condition $Z_i\in[0,c]$ a.\,s. can be weakened to $P(Z_i\geq0)=1$. Note that $\norm\cdot_D$ defines a norm on $\mathbb{R}^{d}$, called \emph{$D$-norm}, with \emph{generator} $\bm Z$. The $D$ means dependence: We have independence of the margins of $\bm X$ iff $\norm\cdot_{D}$ equals the norm $\norm{\bm x}_1=\sum_{i=1}^d\abs{x_i}$, which is generated by $(Z_1,\dotsc,Z_d)$ being a random permutation of the vector $(d,0\dotsc,0)$. We have complete dependence of the margins of $\bm X$ iff $\norm\cdot_{D}$ is the maximum-norm $\norm{\bm x}_\infty=\max_{1\le i\le d}\abs{x_i}$, which is generated by the constant vector $(Z_1,\dotsc,Z_d)=(1,\dotsc,1)$. We refer to \citet[Section 4.4] {fahure10} for further details of $D$-norms. Let $S$ be a compact metric space. A standard max-stable process $\bm\eta=(\eta_t)_{t\in S}$ with sample paths in $\bar C^-(S):=\{g\in C(S):~g\leq 0\}$ is, in what follows, shortly called a \emph{standard max-stable process} (SMSP). Denote further by $E(S)$ the set of those bounded functions $f\in\mathbb{R}^S$ that have only a finite number of discontinuities and define $\bar E^-(S):=\{f\in E(S):~f\leq 0\}$. We know from \citet{ginhv90} that a process $\bm\eta=(\eta_t)_{t\in S}$ with sample paths in $C(S)$ is an SMSP iff there exists a stochastic process $\bm Z=(Z_t)_{t\in S}$ realizing in $\bar C^+(S):=\{g\in C(S):~g\geq 0\}$ and some $c\geq1$, such that $Z_t\leq c$ a.\,s., $E(Z_t)=1$, $t\in S$, and \[ P(\bm\eta\leq f)=\exp\left(-\norm f_D\right):=\exp\left(-E\left(\sup_{t\in S}\left(\abs{f(t)}Z_t\right)\right)\right),\qquad f\in\bar E^-(S). \] Note that $\norm\cdot_D$ defines a norm on the function space $E(S)$, again called \emph{$D$-norm} with \emph{generator process} $\bm Z$. The functional $D$-norm is topologically equivalent to the sup-norm $\norm f_{\infty}=\sup_{t\in S}\abs{f(t)}$, which is itself a $D$-norm by putting $Z_t=1$, $t\in S$, see \citet{aulfaho11} for details. At first it might seem unusual to consider the function space $E(S)$. 
The reason for that is that a suitable choice of the function $f\in\bar E^-(S)$ allows the incorporation of the finite dimensional marginal distributions by the relation $P(\bm\eta\leq f)=P(\eta_{t_i}\leq x_i,1\leq i\leq d)$. The condition $P\left(\sup_{t\in S}Z_t\leq c\right)=1$ can be weakened to \begin{equation}\label{eq:condition_generator} E\left(\sup_{t\in S}Z_t\right)<\infty, \end{equation} see \citet[Corollary 9.4.5]{dehaf06}. \section{Generalized max-linear models}\label{sec:model} \subsection*{The model and some examples} In this section we will approximate a given SMSP with sample paths in $\bar C^-\left([0,1]^k\right)$, where $k$ is some integer, by using a generalized max-linear model for the interpolation of a finite dimensional marginal distribution. The parameter space $[0,1]^k$ is chosen for convenience and could be replaced by any compact metric space $S$. Let in what follows $\bm\eta=(\eta_{t})_{t\in [0,1]^k}$ be an SMSP with generator $\bm Z=(Z_{ t})_{ t\in [0,1]^k}$ and $D$-norm $\norm\cdot_{D}$. Choose pairwise different points $ s_1,\dotsc, s_d\in [0,1]^k$ and obtain a standard max-stable rv $(\eta_{ s_1},\dotsc,\eta_{ s_d})$ with generator $(Z_{ s_1},\dotsc,Z_{ s_d})$ and $D$-norm $\norm\cdot_{D_{1,\dotsc,d}}$, i.\,e., \[ P(\eta_{ s_1}\leq x_1,\dotsc,\eta_{ s_d}\leq x_d)=\exp\left(-E\left(\max_{i=1,\dotsc,d}\left(\abs{x_i}Z_{ s_i}\right)\right)\right)=:\exp\left(-\norm{\bm x}_{D_{1,\dotsc,d}}\right), \] $\bm x=(x_1,\dotsc,x_d)\leq\bm 0$. Our aim is to find another SMSP that interpolates the above rv. Take functions $g_i\in\bar C^+\left([0,1]^k\right)$, $i=1,\dotsc,d$, with the property \begin{equation}\label{eq:norming_functions_standardization} \norm{(g_1( t),\dotsc,g_d( t))}_{D_{1,\dotsc,d}}=1\text{ for all } t\in[0,1]^k. \end{equation} Then the stochastic process $\hat{\bm\eta}=(\hat\eta_{ t})_{ t\in[0,1]^k}$ that is generated by the \emph{generalized max-linear model} \begin{equation}\label{eq:generalized_max_linear_model} \hat\eta_{ t}:=\max_{i=1,\dotsc,d}\frac{\eta_{ s_i}}{g_i( t)},\qquad t\in[0,1]^k, \end{equation} defines an SMSP with generator \begin{equation}\label{eq:generalized_max_linear_model_generator} \hat Z_{ t}=\max_{i=1,\dotsc,d}\left(g_i( t)Z_{ s_i}\right),\qquad t\in[0,1]^k, \end{equation} due to property \eqref{eq:norming_functions_standardization}, see \citet{falhz13} for details. The case $\norm\cdot_{D_{1,\dotsc,d}}=\norm\cdot_1$ leads to the regular \emph{max-linear model}, cf. \citet{wansto11}. If we want $\hat{\bm\eta}$ to interpolate $(\eta_{ s_1},\dotsc,\eta_{ s_d})$, then we only have to demand \begin{equation}\label{eq:norming_functions_interpolation} g_i( s_j)=\delta_{ij}:=\begin{cases}1,&\qquad i=j,\\0,&\qquad i\neq j,\end{cases}\quad 1\leq i,j\leq d. \end{equation} Recall that $\eta_{ s_i}$ is negative with probability one. We call $\hat{\bm\eta}$ the \emph{discretized version} of $\bm\eta$ with grid $\{ s_1,\dotsc, s_d\}$ and weight functions $g_1,\dotsc,g_d$, when the weight functions satisfy both \eqref{eq:norming_functions_standardization} and \eqref{eq:norming_functions_interpolation}. \begin{exam}\label{exam:onedimensional_model} \upshape In the one-dimensional case $k=1$ the weight functions $g_i$ can be chosen as follows. Take a grid $0:=s_1<s_2<\cdots<s_{d-1}<s_d=:1$ of the interval $[0,1]$ and denote by $\norm\cdot_{D_{i-1,i}}$ the $D$-norm pertaining to $(\eta_{s_{i-1}},\eta_{s_i})$, $i=2,\dotsc,d$. 
Put \begin{align*} g_1(t)&:=\begin{cases}\dfrac{s_{2}-t}{\norm{(s_{2}-t,t)}_{D_{1,2}}},\quad &t\in[0,s_2], \\ 0,\quad &\text{else},\end{cases}\\ g_i(t)&:=\begin{cases}\dfrac{t-s_{i-1}}{\norm{(s_i-t,t-s_{i-1})}_{D_{i-1,i}}},\quad &t\in[s_{i-1},s_i], \\ \dfrac{s_{i+1}-t}{\norm{(s_{i+1}-t,t-s_{i})}_{D_{i,i+1}}},\quad &t\in[s_i,s_{i+1}], \\ 0,\quad &\text{else},\end{cases}\quad i=2,\dotsc,d-1,\\ g_d(t)&:=\begin{cases}\dfrac{t-s_{d-1}}{\norm{(s_d-t,t-s_{d-1})}_{D_{d-1,d}}},\quad &t\in[s_{d-1},1], \\ 0,\quad &\text{else}.\end{cases} \end{align*} This model has been studied intensely in \citet{falhz13}. The functions $g_1,\dotsc,g_d$ are continuous and satisfy conditions \eqref{eq:norming_functions_standardization} and \eqref{eq:norming_functions_interpolation}, so they provide an interpolating generalized max-linear model on $C[0,1]$. \end{exam} \begin{exam}\label{exam:multidimensional_model} \upshape Choose pairwise different points $s_1,\dotsc,s_d\in [0,1]^k$ and an arbitrary norm $\norm\cdot$ on $\mathbb{R}^k$. Define \[ \tilde g_i(t):=\min_{j\neq i}\left(\norm{t-s_j}\right),\qquad t\in[0,1]^k,\; i=1,\dotsc,d. \] In order to normalize, put \[ g_i(t):=\frac{\tilde g_i(t)}{\norm{(\tilde g_1(t),\dotsc,\tilde g_d(t))}_{D_{1,\dotsc,d}}},\quad t\in[0,1]^k,\quad i=1,\dotsc,d. \] These functions $g_i$ are well-defined since the denominator never vanishes: Suppose there is $ t\in[0,1]^k$ with $\tilde g_1( t)=\cdots=\tilde g_d( t)=0$. Then $\min_{j\neq i}\left(\norm{ t- s_j}\right)=0$ for all $i=1,\dotsc,d$. Now fix $i\in\{1,\dotsc,d\}$. There is $j\neq i$ with $ t= s_j$. But on the other hand, we have also $\min_{k\neq j}\left(\norm{ t- s_k}\right)=0$ which implies that there is $k\neq j$ with $ t= s_k= s_j$ which is a contradiction. The functions $g_i$, $i=1,\dotsc,d$, are clearly functions in $\bar C^+\left([0,1]^k\right)$ that also satisfy condition \eqref{eq:norming_functions_standardization} and \eqref{eq:norming_functions_interpolation} as can be seen as follows. We have for $t\in[0,1]^k$ \begin{align*} &\norm{\big(g_1(t),\dots,g_d(t)\big)}_{D_{1,\dots,d}}\\ &=\norm{\left(\frac{\tilde g_1(t)}{\norm{(\tilde g_1(t),\dots,\tilde g_d(t))}_{D_{1,\dots,d}}},\dots, \frac{\tilde g_d(t)}{\norm{(\tilde g_1(t),\dots,\tilde g_d(t))}_{D_{1,\dots,d}}} \right)}_{D_{1,\dots,d}}\\ &= \frac{\norm{\big(\tilde g_1(t),\dots,\tilde g_d(t)\big)}_{D_{1,\dots,d}}}{\norm{\big(\tilde g_1(t),\dots,\tilde g_d(t)\big)}_{D_{1,\dots,d}}}\\ &=1, \end{align*} which is condition \eqref{eq:norming_functions_standardization}. Note, moreover, that $\tilde g_i(s_j)=0$ if $i\not=j$. But this implies condition \eqref{eq:norming_functions_interpolation}: \begin{align*} g_i(s_j)&= \frac{\tilde g_i(s_j)}{\norm{\big(\tilde g_1(s_j),\dots,\tilde g_d(s_j)\big)}_{D_{1,\dots,d}}}\\ &=\frac{\tilde g_i(s_j)}{\norm{\big(0,\dots,0,\tilde g_j(s_j),0,\dots,0\big)}_{D_{1,\dots,d}}}\\ &=\frac{\tilde g_i(s_j)}{\tilde g_j(s_j) \norm{\big(0,\dots,0,1,0,\dots,0\big)}_{D_{1,\dots,d}}}\\ &= \frac{\tilde g_i(s_j)}{\tilde g_j(s_j)}=\delta_{ij} \end{align*} by the fact that a $D$-norm of each unit vector in $\mathbb{R}^d$ is one. Thus, we have found an interpolating generalized max-linear model on $C\left([0,1]^k\right)$. \end{exam} \subsection*{The mean squared error of the discretized version} We start this section with a result that applies to bivariate standard max-stable rv in general. \begin{lemma}\label{lem:properties_bivariate_smsrv} Let $(X_1,X_2)$ be standard max-stable with generator $(Z_1,Z_2)$ and $D$-norm $\norm\cdot_D$. 
\begin{enumerate}[(i)] \item \[E(X_1X_2)=\int_0^{\infty}\frac{1}{\norm{(1,u)}^2_{D}}~du.\] \item \[E(|Z_1-Z_2|)=2\left(\norm{(1,1)}_{D}-1\right).\] \end{enumerate} \end{lemma} \begin{proof} \begin{enumerate}[(i)] \item See \citet[Lemma 3.6]{falhz13}. \item The assertion follows from the general identity $\max(a,b)=\frac12(a+b+\abs{a-b})$. \end{enumerate} \end{proof} Let $\hat{\bm\eta}=(\hat\eta_{ t})_{ t\in [0,1]^k}$ be the discretized version of $\bm\eta=(\eta_{ t})_{ t\in[0,1]^k}$ with grid $\{ s_1,\dotsc, s_d\}$ and weight functions $g_1,\dotsc,g_d$. In order to calculate the mean squared error of $\hat\eta_t$, we need the following lemma. \begin{lemma}\label{lem:eta_hateta_sms} Let $\hat{\bm Z}=(\hat Z_t)_{t\in[0,1]^k}$ be the generator of $\hat{\bm\eta}$ that is defined in \eqref{eq:generalized_max_linear_model_generator}. For each $t\in[0,1]^k$, the rv $(\eta_{ t},\hat\eta_{ t})$ is standard max-stable with generator $(Z_t,\hat Z_t)$ and $D$-norm \[ \norm{(x,y)}_{D_{ t}}=E\left(\max\left(\abs x Z_t,\abs y\hat Z_t\right)\right)=\norm{\left(x,g_1( t)y,\dotsc,g_d( t)y\right)}_{D_{ t, s_1,\dotsc, s_d}}, \] where $\norm{\cdot}_{D_{ t, s_1,\dotsc, s_d}}$ is the $D$-norm pertaining to $(\eta_{ t},\eta_{ s_1},\dotsc,\eta_{ s_d})$. \end{lemma} \begin{proof} As $\bm Z=(Z_{ t})_{ t\in[0,1]^k}$ is a generator of $\bm\eta$, we have for $x,y\leq 0$ \begin{align*} P(\eta_{ t}\leq x,\hat\eta_{ t}\leq y)&=P(\eta_{ t}\leq x,\eta_{ s_1}\leq g_1( t)y,\dotsc,\eta_{ s_d}\leq g_d( t)y)\\ &=\exp\left(-E\left(\max\left(\abs xZ_{ t},\abs y\max\left(g_1( t)Z_{ s_1},\dotsc,g_d( t)Z_{ s_d}\right)\right)\right)\right)\\ &=\exp\left(-E\left(\max\left(\abs xZ_{ t},\abs y\hat Z_t\right)\right)\right). \end{align*} Then the assertion follows from the fact that $\hat Z_t\geq 0$ and $E(\hat Z_t)=1$. \end{proof} We can now use the preceding Lemmas to compute the mean squared error. \begin{prop}\label{prop:mean squared error} The mean squared error of $\hat{\eta_{ t}}$ is given by \[ \MSE\left(\hat\eta_{t}\right):=E\left(\left(\eta_{ t}-\hat{\eta}_{ t}\right)^2\right)=2\left(2-\int_0^{\infty}\frac{1}{\norm{(1,u)}^2_{D_{ t}}}~du\right),\qquad t\in[0,1]^k. \] \end{prop} \begin{proof} Due to Lemma \ref{lem:eta_hateta_sms}, $(\eta_t,\hat\eta_t)$ is standard max-stable. Therefore, Lemma \ref{lem:properties_bivariate_smsrv} (i) and the fact that $E(\eta_t)=E(\hat\eta_t)=-1$ and $\Var(\eta_t)=\Var(\hat\eta_t)=1$ yield \[ \MSE\left(\hat\eta_{ t}\right)=E\left(\eta_t^2\right)-2E\left(\eta_t\hat\eta_t\right)+E\left(\hat \eta_t^2\right)=4-2\int_0^{\infty}\frac{1}{\norm{(1,u)}^2_{D_{ t}}}~du. \] \end{proof} \begin{lemma}\label{lem:mse_inequality} The mean squared error of $\hat\eta_t$ satisfies \[ \MSE\left(\hat\eta_t\right)\leq6 E\left(\abs{Z_t-\hat Z_t}\right),\qquad t\in[0,1]^k. \] \end{lemma} \begin{proof} We have \begin{align*} &2-\int_0^{\infty}\frac{1}{\norm{(1,u)}_{D_t}^2}~du\\ &=\int_0^{\infty}\frac{1}{\norm{(1,u)}_{\infty}^2}~du-\int_0^{\infty}\frac{1}{\norm{(1,u)}_{D_t}^2}~du\\ &=\int_0^{\infty}\left(\norm{(1,u)}_{D_t}-\norm{(1,u)}_{\infty}\right)\frac{\norm{(1,u)}_{D_t}+\norm{(1,u)}_{\infty}}{\norm{(1,u)}_{D_t}^2\norm{(1,u)}_{\infty}^2}~du\\ &=\int_0^{1}\left(\norm{(1,u)}_{D_t}-1\right)\frac{\norm{(1,u)}_{D_t}+1}{\norm{(1,u)}_{D_t}^2}~du+\int_1^{\infty}\left(\norm{(1,u)}_{D_t}-u\right)\frac{\norm{(1,u)}_{D_t}+u}{u^2\norm{(1,u)}_{D_t}^2}~du\\ &\leq 3\int_0^{1}\left(\norm{(1,u)}_{D_t}-1\right)~du+2\int_1^{\infty}\frac{\norm{(1/u,1)}_{D_t}-1}{u^2}~du\\ &=:3I_1+2I_2. 
\end{align*} Since every $D$-norm is monotone, we have \[ \norm{(1,u)}_{D_t}\leq \norm{(1,1)}_{D_t},~ u\in[0,1],\text{ and } \norm{(1/u,1)}_{D_t}\leq \norm{(1,1)}_{D_t},~ u>1, \] and, thus, by Lemma \ref{lem:properties_bivariate_smsrv} (ii) \begin{equation*} I_1+I_2\leq \norm{(1,1)}_{D_t}-1+\left(\norm{(1,1)}_{D_t}-1\right)\int_1^{\infty}u^{-2}~du=E\left(\abs{Z_t-\hat Z_t}\right). \end{equation*} \end{proof} \begin{rem}\upshape The upper bound $E\left(\abs{Z_t-\hat Z_t}\right)$ in Lemma \ref{lem:mse_inequality} gets small if the distance between $t$ and its nearest neighbor $s_j$, say, in the grid $\set{s_1,\dots,s_d}$ gets small, which can be seen as follows. The triangle inequality implies \[ \abs{Z_t-\hat Z_t} \le \abs{Z_t- Z_{s_j}} + \abs{Z_{s_j}-\max_{i=1,\dots,d}\left(g_i(t)Z_{s_i}\right)}. \] From the condition $g_i(s_j)=\delta_{ij}$ we obtain the representation \[ Z_{s_j}= \max_{i=1,\dots,d}\left(g_i(s_j)Z_{s_i}\right) \] and, thus, \begin{align*} \abs{Z_{s_j}-\max_{i=1,\dots,d}\left(g_i(t)Z_{s_i}\right)} &=\abs{ \max_{i=1,\dots,d}\left(g_i(s_j)Z_{s_i}\right) - \max_{i=1,\dots,d}\left(g_i(t)Z_{s_i}\right) }\\ &\le \max_{i=1,\dots,d}\left(\abs{g_i(t)-g_i(s_j)}Z_{s_i}\right) \end{align*} by elementary arguments. As a consequence we obtain \begin{align*} &E\left(\abs{Z_t-\hat Z_t}\right)\\ &\le E\left(\abs{Z_t-Z_{s_j}} \right) + E\left(\max_{i=1,\dots,d}\left(\abs{g_i(t)-g_i(s_j)}Z_{s_i}\right) \right)\\ &= E\left(\abs{Z_t-Z_{s_j}} \right) + \norm{\big(\abs{g_1(t)-g_1(s_j)},\dots, \abs{g_d(t)-g_d(s_j)} \big)}_{D_{1,\dots,d}}\\ &\le E\left(\abs{Z_t-Z_{s_j}} \right) + \max_{i=1,\dots,d} \abs{g_i(t)-g_i(s_j)} \; \norm{(1,\dots,1)}_{D_{1,\dots,d}}\\ &\to_{\abs{t-s_j}\to 0}0 \end{align*} by the fact that each $D$-norm $\norm\cdot_D$ is monotone, i.e., $\norm{\bm{x}}_D\le\norm{\bm{y}}_D$ if $\bm{0}\le\bm{x}\le\bm{y}\in\mathbb{R}^d$, and by the continuity of the functions $g_1,\dots,g_d$ and $\bm{Z}$. \end{rem} \begin{exam} \upshape Choose as a generator process $\bm{Z}=(Z_t)_{t\in[0,1]^k}$ of a $D$-norm \[ Z_{t}:=\exp\left(X_{t}-\frac{\sigma^2(t)}2 \right),\qquad t\in[0,1]^k, \] where $\left(X_{t}\right)_{t\in \mathbb{R}^k}$ is a continuous zero mean Gaussian process with stationary increments, $\sigma^2(t):= E\left(X_{t}^2\right)$ and $X_0=0$. This model was originally introduced by \citet{brore77}, and developed further by \citet{kaschdeh09} for max-stable random fields $\bm\vartheta=(\vartheta_{t})_{t\in[0,1]^k}$ with Gumbel margins, i.e., $P(\vartheta_t\le x)=\exp(-e^{-x})$, $x\in\mathbb{R}$. The transformation to an SMSP $(\eta_{t})_{t\in[0,1]^k}$ is straightforward by putting $\eta_{t}:=-\exp(-\vartheta_{t})$, $t\in[0,1]^k$. Explicit formulae for the corresponding $D$-norm \[ \norm{f}_D = E\left(\sup_{t\in[0,1]^k} (\abs{f(t)}Z_{t})\right),\qquad f\in E([0,1]^k), \] are only available for the bivariate $\norm\cdot_{D_{t_1,t_2}}$ and trivariate $\norm\cdot_{D_{t_1,t_2,t_3}}$ $D$-norms pertaining to the random vectors $(\eta_{t_1},\eta_{t_2})$ and $(\eta_{t_1},\eta_{t_2},\eta_{t_3})$, respectively, see \citet{huserdav13}.
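Beyond the trivariate case, the value of such a $D$-norm can at least be approximated numerically by simulating the generator process. The following Python sketch is only an illustration and not part of the formal development; it assumes the simplest case $k=1$ with $X$ a standard Brownian motion (so that $\sigma^2(t)=t$), replaces the supremum over $[0,1]$ by a maximum over a fine grid, and the helper name \texttt{d\_norm} is ours.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def brown_resnick_generator(ts, n_sim, rng):
    """Simulate n_sim paths of Z_t = exp(W_t - t/2) on the grid ts,
    where W is standard Brownian motion, i.e. sigma^2(t) = t."""
    dt = np.diff(ts, prepend=0.0)
    increments = rng.standard_normal((n_sim, len(ts))) * np.sqrt(dt)
    W = np.cumsum(increments, axis=1)
    return np.exp(W - ts / 2)          # each row is one path of (Z_t)

def d_norm(f_vals, ts, n_sim=100_000, rng=rng):
    """Monte Carlo approximation of ||f||_D = E(sup_t |f(t)| Z_t),
    with f given by its values f_vals on the grid ts."""
    Z = brown_resnick_generator(ts, n_sim, rng)
    return np.mean(np.max(np.abs(f_vals) * Z, axis=1))

ts = np.linspace(0.0, 1.0, 201)
print(d_norm(np.ones_like(ts), ts))    # E(sup_t Z_t), at least 1
\end{verbatim}
For a function supported on only two grid points, such an estimate can be checked against the closed bivariate formula given next.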
In the bivariate case we have for $(x_1,x_2)\in\mathbb{R}^2$ \begin{align*} \norm{(x_1,x_2)}_{D_{t_1,t_2}} &= \abs{x_1} \Phi\left(\frac{\sigma(\abs{t_1-t_2})}2 + \frac1{\sigma(\abs{t_1-t_2})} \log\left(\frac{\abs{x_1}}{\abs{x_2}}\right)\right)\\ &\hspace*{1cm}+ \abs{x_2} \Phi\left(\frac{\sigma(\abs{t_1-t_2})}2 + \frac1{\sigma(\abs{t_1-t_2})} \log\left(\frac{\abs{x_2}}{\abs{x_1}}\right)\right), \end{align*} where $\Phi$ denotes the standard normal distribution function and the absolute value $\abs{t_1-t_2}$ is meant componentwise, see \citet[Remark 24]{kabl09}. This Brown-Resnick model could in particular be used for the generalized max-linear model in dimension $k=1$ as in Example \ref{exam:onedimensional_model}, since in this case the approximation $\hat{\bm\eta}$ of $\bm\eta$ only uses bivariate $D$-norms $\norm\cdot_{D_{t_1,t_2}}$. \end{exam} \section{A generalized max-linear model based on kernels} \subsection*{The model} The \emph{ordinary} generalized max-linear model requires the definition of $d$ functions $g_1,\dots,g_d$ satisfying certain constraints, with $d=d(n)$ tending to infinity as the grid $s_1,\dots,s_d$ becomes dense in the index set. For the kernel approach introduced in this section, this is reduced to the choice of just one kernel and a bandwidth. In this case we can establish convergence to zero of the MSE and the IMSE as the grid becomes dense, essentially without further conditions. This approach was briefly mentioned in \citet{falhz13} and is evaluated here. The disadvantages are the following. The interpolation is not exact at the grid points, i.e., $\hat\eta_{s_j}\not=\eta_{s_j}$; this is due to the fact that the generated functions do not satisfy the condition $g_i(s_j)=\delta_{ij}$ exactly, but only in the limit as $h$ tends to zero, see Lemma \ref{lem:convergence_to_kronecker}. Moreover, the choice of an optimal bandwidth, which is statistical folklore in kernel density estimation, is still an open problem here. Again, throughout the whole section, let $\bm\eta=(\eta_t)_{t\in[0,1]^k}$ be an SMSP with generator $\bm Z=(Z_t)_{t\in[0,1]^k}$ and denote by $\norm\cdot_{D_{s_1,\dotsc,s_d}}$ the $D$-norm pertaining to $(\eta_{s_1},\dotsc,\eta_{s_d})$. Let $K:[0,\infty)\to[0,1]$ be a continuous and strictly monotonically decreasing function (kernel) with the two properties \begin{equation}\label{eq:condition_on_kernel} K(0)=1,\qquad \lim_{x\to\infty}\frac{K(ax)}{K(bx)}=0,\quad 0\le b< a. \end{equation} The exponential kernel $K_e(x)=\exp(-x)$, $x\ge 0$, is a typical example. Choose an arbitrary norm $\norm\cdot$ on $\mathbb{R}^k$ and a grid of pairwise different points $\{ s_1,\dotsc, s_d\}$ in $[0,1]^k$. Put for $i=1,\dotsc,d$ and the bandwidth $h>0$ \[ g_{i,h}( t):=\frac{K(\norm{t-s_i}/h)}{\norm{(K(\norm{t-s_1}/h),\dots,K(\norm{t-s_d}/h))}_{D_{s_1,\dotsc,s_d}}},\quad t\in[0,1]^k. \] Define for $i=1,\dots,d$ \begin{equation}\label{eq:set_closest_points} N(s_i):=\set{t\in[0,1]^k:\,\norm{t-s_i}\le \norm{t-s_j},\,j\not=i}, \end{equation} which is the set of those points $t\in[0,1]^k$ that are closest to the grid point $s_i$. \begin{lemma}\label{lem:convergence_to_kronecker} We have for arbitrary $t\in[0,1]^k$ and $1\le i\le d$ \[ g_{i,h}(t)\to_{h\downarrow 0}\begin{cases} 1&,\mbox{ if }t=s_i\\ 0&,\mbox{ if }t\not\in N(s_i) \end{cases} \] as well as $g_{i,h}(t)\le 1$. \end{lemma} \begin{proof} The convergence $g_{i,h}( s_i)\to_{h\downarrow0}1$ follows from the fact that $K(0)=1$, that $K(\norm{s_i-s_j}/h)\to_{h\downarrow0}0$ for $j\not=i$ (take $b=0$ in \eqref{eq:condition_on_kernel}), and that the $D$-norm of a unit vector is 1.
The fact that an arbitrary $D$-norm is bounded below by the sup-norm together with the monotonicity of $K$ implies for $t\in[0,1]^k$ \begin{equation*} g_{i,h}(t)\le \frac{K\left(\norm{t-s_i}/h\right)}{\max_{1\le j\le d}K\left(\norm{t-s_j}/h\right)} =\frac{K\left(\frac{\norm{t-s_i}}h\right)}{K\left(\frac{\min_{1\le j\le d}\norm{t-s_j}}h\right)}\le 1. \end{equation*} Note that $K\left(\norm{t-s_i}/h\right)/K\left(\min_{1\le j\le d}\norm{t-s_j}/h\right) \to_{h\downarrow 0}0$ if $t\not\in N(s_i)$ by the required growth condition on the kernel $K$ in \eqref{eq:condition_on_kernel}. \end{proof} The above Lemma shows in particular $g_{i,h}( s_j)\to_{h\downarrow0}\delta_{ij}$ which is close to condition \eqref{eq:norming_functions_interpolation}. Obviously, the functions $g_{i,h}$ are constructed in such a way that condition \eqref{eq:norming_functions_standardization} holds exactly. Therefore, we obtain the generalized max-linear model \begin{equation*} \hat\eta_{ t,h}=\max_{i=1,\dotsc,d}\frac{\eta_{ s_i}}{g_{i,h}( t)},\qquad t\in[0,1]^k, \end{equation*} which does not interpolate $(\eta_{ s_1},\dotsc,\eta_{ s_d})$ exactly, but $\hat\eta_{s_i,h}$ converges to $\eta_{s_i}$ as $h\downarrow0$. Note that the limit functions $\lim_{h\downarrow0}g_{i,h}$ are not necessarily continuous: For instance, there may be $t_0\in[0,1]^k$ with $\norm{t_0-s_1}=\cdots=\norm{t_0-s_d}$. Then $ t_0\in\partial N( s_1)$ and $\lim_{h\downarrow0}g_{1,h}( t_0)=1/\norm{(1,\dotsc,1)}_{D_{1,\dotsc,d}}$, but $\lim_{h\downarrow0}g_{1,h}( t)=0$ for all $ t\notin N( s_1)$ due to Lemma \ref{lem:convergence_to_kronecker}. \subsection*{Convergence of the mean squared error} In this section we investigate a sequence of kernel-based generalized max-linear models, where the diameter of the grids decreases. We analyze under which conditions the integrated mean squared error of $(\hat\eta_{t,h})_{t\in[0,1]^k}$ converges to zero. We start with a general result on generator processes. \begin{lemma}\label{lem:generator_uniformly_continuous} Let $(Z_t)_{t\in[0,1]^k}$ be a generator of an SMSP and $\varepsilon_n$, $n\in\mathbb{N}$, be a null sequence. Then \begin{equation*} E\left(\sup_{\norm{t-s}\leq \varepsilon_n}\abs{Z_t-Z_s}\right)\to_{n\to\infty}0, \end{equation*} where $\norm\cdot$ is an arbitrary norm on $\mathbb{R}^k$. \end{lemma} \begin{proof} The paths of $(Z_t)_{t\in[0,1]^k}$ are continuous, so they are also uniformly continuous. Therefore, $\sup_{\norm{t-s}\leq \varepsilon_n}\abs{Z_t-Z_s}\to_{n\to\infty}0$. Furthermore, \[ \sup_{\norm{t-s}\leq \varepsilon_n}\abs{Z_t-Z_s}\leq 2\sup_{t\in[0,1]^k}Z_t \] with $E\left(\sup_{t\in[0,1]^k}Z_t\right)<\infty$ due to property \eqref{eq:condition_generator} of a generator. The assertion now follows from the dominated convergence theorem. \end{proof} Let $\mathcal G_n:=\set{s_{1,n},\dotsc,s_{d(n),n}}$, $n\in\mathbb{N}$, be a set of distinct points in $[0,1]^k$ with the property \[ \forall n\in\mathbb{N}~\forall t\in[0,1]^k~ \exists s_{i,n}\in\mathcal G_n:~\norm{t-s_{i,n}}\leq\varepsilon_n, \] where $\varepsilon_n\to_{n\to\infty}0$. Define, for instance, $\mathcal G_n$ in such a way that \[ \varepsilon_n:=\max_{i=1,\dotsc,d}\sup_{s,t\in N(s_{i,n})}\norm{s-t}\to_{n\to\infty}0, \] with $N(s_{i,n})$ as defined in \eqref{eq:set_closest_points}. Clearly, $d:=d(n)\to_{n\to\infty}\infty$. Denote by $\norm\cdot_{D^{(n)}_{s_1,\dotsc,s_d}}$ the $D$-norm pertaining to $\eta_{s_{1,n}},\dotsc,\eta_{s_{d,n}}$. 
Let further $\hat{\bm\eta}_n=(\hat \eta_{t,n})_{t\in[0,1]^k}$ be the kernel-based discretized version of $\bm\eta$ with grid $\mathcal G_n$, that is, \begin{equation*} \hat\eta_{t,n}=\max_{i=1,\dotsc,d}\frac{\eta_{s_{i,n}}}{g_{i,n}(t)},\qquad t\in[0,1]^k, \end{equation*} where for $i=1,\dotsc,d$ \[ g_{i,n}(t)=\frac{K(\norm{t-s_{i,n}}/h_n)}{\norm{(K(\norm{t-s_{1,n}}/h_n),\dots,K(\norm{t-s_{d,n}}/h_n))}_{D^{(n)}_{s_1,\dotsc,s_d}}},\quad t\in[0,1]^k, \] $K:[0,\infty)\to[0,1]$ is the continuous and strictly decreasing kernel function satisfying condition \eqref{eq:condition_on_kernel} and $h_n$, $n\in\mathbb{N}$, is some positive sequence. We have already seen in Lemma \ref{lem:convergence_to_kronecker} that $g_{i,n}(t)\in[0,1]$, $t\in[0,1]^k$, $n\in\mathbb{N}$. Furthermore we have the following result. \begin{lemma}\label{lem:weight_functions_converge_to_one} Choose $t\in[0,1]^k$. There is a sequence $i(n)$, $n\in\mathbb{N}$, such that $t\in\bigcap_{n\in\mathbb{N}}N(s_{i(n),n})$. Define $g_{i(n),n}$ and $\varepsilon_n$ as above, $n\in\mathbb{N}$. Then \[ \lim_{n\to\infty}g_{i(n),n}(t)=1, \] if $\varepsilon_n\to_{n\to\infty}0$, $h_n\to_{n\to\infty}0$, $\varepsilon_n/h_n\to_{n\to\infty}\infty$. \end{lemma} \begin{proof} Let $t\in[0,1]^k$ and choose a sequence $i(n)$, $n\in\mathbb{N}$, as above. Put for simplicity $s_{i(n),n}=:s_{i,n}$ and $g_{i(n),n}=:g_{i,n}$. We have \begin{align*} 1\geq g_{i,n}(t)&=\frac{K\left(\norm{t-s_{i,n}}/h_n\right)}{E\left(\max_{j=1,\dotsc,d}K\left(\norm{t-s_{j,n}}/h_n\right)Z_{s_{j,n}}\right)}\\ &\geq \Bigg(\frac{E\left(\max_{j:\norm{s_{j,n}-t}\geq2\varepsilon_n}K\left(\norm{t-s_{j,n}}/h_n\right)Z_{s_{j,n}}\right)}{K\left(\norm{t-s_{i,n}}/h_n\right)}\\ &\qquad+\frac{E\left(\max_{j:\norm{s_{j,n}-t}<2\varepsilon_n}K\left(\norm{t-s_{j,n}}/h_n\right)Z_{s_{j,n}}\right)}{K\left(\norm{t-s_{i,n}}/h_n\right)}\Bigg)^{-1}\\ &=:(A_{i,n}(t)+B_{i,n}(t))^{-1}. \end{align*} From $t\in N(s_{i,n})$ we conclude $\norm{t-s_{i,n}}\leq\varepsilon_n$. Hence, we have due to \eqref{eq:condition_generator} and the properties of the kernel function $K$ \[ 0\leq A_{i,n}(t)\leq\frac{K(2\varepsilon_n/h_n)}{K(\varepsilon_n/h_n)}E\left(\sup_{t\in[0,1]^k}Z_t\right)\to_{n\to\infty}0, \] since $\varepsilon_n/h_n\to_{n\to\infty}\infty$ by assumption. Furthermore, $t\in N(s_{i,n})$ and the fact that $K$ is decreasing implies \[ \max_{j:\norm{s_{j,n}-t}<2\varepsilon_n}K\left(\norm{t-s_{j,n}}/h_n\right)=K\left(\norm{t-s_{i,n}}/h_n\right). \] Thus, \begin{align*} 1\leq B_{i,n}(t)&=\frac{1}{K\left(\norm{t-s_{i,n}}/h_n\right)}\bigg(E\bigg(\max_{j:\norm{s_{j,n}-t}<2\varepsilon_n}K\left(\norm{t-s_{j,n}}/h_n\right)Z_{s_{j,n}}\\ &\hspace*{3cm}-\max_{j:\norm{s_{j,n}-t}<2\varepsilon_n}K\left(\norm{t-s_{j,n}}/h_n\right)Z_{s_{i,n}}\bigg)\bigg)+1\\ &\leq\frac{E\left(\max_{j:\norm{s_{j,n}-t}<2\varepsilon_n}K\left(\norm{t-s_{j,n}}/h_n\right)\abs{Z_{s_{j,n}}-Z_{s_{i,n}}}\right)}{K\left(\norm{t-s_{i,n}}/h_n\right)}+1\\ &\leq E\left(\max_{j:\norm{s_{j,n}-t}<2\varepsilon_n}\abs{Z_{s_{j,n}}-Z_{s_{i,n}}}\right)+1\\ &\leq E\left(\sup_{\norm{r-s}<3\varepsilon_n}\abs{Z_{r}-Z_{s}}\right)+1\\ &\to_{n\to\infty}1, \end{align*} because of Lemma \ref{lem:generator_uniformly_continuous}. Note that $\norm{s_{j,n}-t}<2\varepsilon_n$ and $t\in N(s_{i,n})$ imply \linebreak $\norm{s_{j,n}-s_{i,n}}<3\varepsilon_n$. \end{proof} We have now gathered the tools to prove convergence of the mean squared error to zero. \begin{theorem}\label{the:mse_kernel_model} Define $\hat{\bm\eta}_n$ and $\varepsilon_n$ as above, $n\in\mathbb{N}$. 
Then for every $t\in[0,1]^k$ \[ \MSE\left(\hat\eta_{t,n}\right)\to_{n\to\infty}0, \] and \[ \IMSE\left(\hat\eta_{t,n}\right):=\int_{[0,1]^k}\MSE\left(\hat\eta_{t,n}\right)~dt\to_{n\to\infty}0, \] if $\varepsilon_n\to_{n\to\infty}0$, $h_n\to_{n\to\infty}0$, $\varepsilon_n/h_n\to_{n\to\infty}\infty$. \end{theorem} \begin{proof} Denote by \[ \hat Z_{t,n}=\max_{j=1,\dotsc,d}\left(g_{j,n}( t)Z_{ s_{j,n}}\right),\qquad t\in[0,1]^k, \] the generator of $\hat{\bm \eta}_n$. Choose $t\in[0,1]^k$ and a sequence $i:=i(n)$, $n\in\mathbb{N}$, such that $t\in\bigcap_{n\in\mathbb{N}}N\left(s_{i,n}\right)$. We have by Lemma \ref{lem:mse_inequality}, Lemma \ref{lem:weight_functions_converge_to_one} and the continuity of $\bm{Z}$ \begin{align*} \MSE\left(\hat\eta_{t,n}\right)&\leq 6E\left(\abs{Z_t-\hat Z_{t,n}}\right)\nonumber\\ &\leq 6E\left(\abs{Z_{t}-Z_{s_{i,n}}}\right)+6E\left(\abs{Z_{s_{i,n}}-g_{i,n}(t)Z_{s_{i,n}}}\right)\nonumber\\ &\quad+6E\left(\abs{g_{i,n}(t)Z_{s_{i,n}}-\hat Z_{t,n}}\right)\nonumber\\ &= 6E\left(\abs{Z_{t}-Z_{s_{i,n}}}\right)+12\left(1-g_{i,n}(t)\right)\nonumber\\ &\to_{n\to\infty}0\label{eqn:bound_for_mse}; \end{align*} recall that $g_{i,n}(t)Z_{s_{i,n}}\le \hat Z_{t,n}$. Next we establish convergence of the integrated mean squared error. The sets $N(s_{i,n})$, as defined in \eqref{eq:set_closest_points}, are typically not disjoint, but the intersections $N(s_{i,n})\cap N(s_{j,n})$, $i\neq j$, have Lebesgue measure zero on $[0,1]^k$. Clearly, $\bigcup_{i=1}^dN(s_{i,n})=[0,1]^k$. Therefore, applying Lemma \ref{lem:mse_inequality} yields \begin{align*} \IMSE\left(\hat\eta_{t,n}\right)&=\sum_{i=1}^d\int_{N(s_{i,n})}\MSE\left(\hat\eta_{t,n}\right)~dt\\ &\leq6 \sum_{i=1}^d\int_{N(s_{i,n})}E\left(\abs{Z_t-\hat Z_{t,n}}\right)~dt\\ &\leq6\bigg(\sum_{i=1}^d\int_{N(s_{i,n})}E\left(\abs{Z_t-Z_{s_i,n}}\right)~dt\\ &\qquad\qquad+\sum_{i=1}^d\int_{N(s_{i,n})}\abs{1-g_{i,n}(t)}E\left(Z_{s_i,n}\right)~dt\\ &\qquad\qquad+\sum_{i=1}^d\int_{N(s_{i,n})}E\left(\abs{g_{i,n}(t)Z_{s_i,n}-\hat Z_{t,n}}\right)~dt\bigg)\\ &=:6\left(S_{1,n}+S_{2,n}+S_{3,n}\right) \end{align*} due to Lemma \ref{lem:mse_inequality}. From Lemma \ref{lem:generator_uniformly_continuous} we conclude \begin{align*} S_{1,n}&=\sum_{i=1}^d\int_{N(s_{i,n})}E\left(\abs{Z_t-Z_{s_{i,n}}}\right)~dt\\ &\leq\sum_{i=1}^d\int_{N(s_{i,n})}E\left(\sup_{\norm{r-s}\leq\varepsilon_n}\abs{Z_r-Z_s}\right)~dt\\ &=\int_{[0,1]^k}E\left(\sup_{\norm{r-s}\leq\varepsilon_n}\abs{Z_r-Z_s}\right)~dt\\ &=E\left(\sup_{\norm{r-s}\leq\varepsilon_n}\abs{Z_r-Z_s}\right)\\ &\to_{n\to\infty}0. \end{align*} Define \[ A_n:=\frac{K(2\varepsilon_n/h_n)}{K(\varepsilon_n/h_n)}E\left(\sup_{t\in[0,1]^k}Z_t\right),\quad B_n:=E\left(\sup_{\norm{r-s}<3\varepsilon_n}\abs{Z_{r}-Z_{s}}\right)+1. \] As we have seen in the proof of Lemma \ref{lem:weight_functions_converge_to_one}, we have for $t\in N(s_{i,n})$ \[ 1\geq g_{i,n}(t)\geq (A_n+B_n)^{-1}\to1, \] and therefore \begin{align*} S_{2,n}&=\sum_{i=1}^d\int_{N(s_{i,n})}(1-g_{i,n}(t))~dt\\ &\leq\sum_{i=1}^d\int_{N(s_{i,n})}1-(A_n+B_n)^{-1}~dt\\ &=\int_{[0,1]^k}1-(A_n+B_n)^{-1}~dt\\ &=1-(A_n+B_n)^{-1}\\ &\to_{n\to\infty}0. \end{align*} Lastly, we have by the same argument as above \begin{equation*} S_{3,n}=\sum_{i=1}^d\int_{N(s_{i,n})}E\left(\hat Z_{t,n}-g_{i,n}(t)Z_{s_i,n}\right)~dt=S_{2,n}\to_{n\to\infty}0, \end{equation*} which completes the proof. 
\end{proof} \begin{rem} \upshape Given a grid $s_1,\dots,s_{d(n)}$ with pertaining $\varepsilon_n$, the bandwidth $h_n:=\varepsilon_n^2$ would, for example, satisfy the required growth conditions entailing convergence of MSE and IMSE to zero. But, it would clearly be desirable to provide some details on how to choose the bandwidth in an optimal way, which is, for example, statistical folklore in kernel density estimation. In our setup, however, this is an open problem, which requires future work. \end{rem} \section{Discretized versions of copula processes} Next we transfer the model we have established in Section \ref{sec:model} to copula processes that are in a sense close to max-stable processes. A \emph{copula process} $\bm U=(U_t)_{t\in[0,1]^k}$ is a stochastic process with continuous sample paths, such that each rv $U_t$ is uniformly distributed on the interval $[0,1]$. We say that $\bm U$ is in the \emph{functional domain of attraction} of an SMSP $\bm\eta=(\eta_t)_{t\in[0,1]^k}$, if \begin{equation}\label{eq:fdoa} \lim_{n\to\infty}P\left(n\left(\bm U-1\right)\leq f\right)^n=P\left(\bm\eta\leq f\right)=\exp\left(-\norm f_D\right),\qquad f\in\bar E^-\left([0,1]^k\right). \end{equation} Define for any $t\in[0,1]^k$ and $n\in \mathbb{N}$ \[ Y_t^{(n)}:=n\left(\max_{i=1,\dotsc,n}U_t^{(i)}-1\right), \] with $\bm U^{(1)},\bm U^{(2)},\dotsc$ being independent copies of $\bm U$. Now choose again pairwise different points $s_1,\dotsc,s_d\in[0,1]^k$ and functions $g_1,\dotsc,g_d\in\bar C^+\left([0,1]^k\right)$ with the properties \eqref{eq:norming_functions_standardization} and \eqref{eq:norming_functions_interpolation}. Condition \eqref{eq:fdoa} implies weak convergence of the finitedimensional distributions of $\bm Y^{(n)}=(Y_t^{(n)})_{t\in[0,1]^k}$, i.\,e. \[ \left(Y_{s_1}^{(n)},\dotsc,Y_{s_d}^{(n)}\right)\to_{\mathcal D}\left(\eta_{s_1},\dotsc,\eta_{s_d}\right), \] where '$\to_{\mathcal D}$' denotes convergence in distribution. Just like before, we can define the \emph{discretized version} $\hat{\bm Y}^{(n)}=(\hat Y^{(n)}_t)_{t\in[0,1]^k}$ of $\bm Y^{(n)}$ with grid $\{s_1,\dotsc,s_d\}$ and weight functions $g_1,\dotsc,g_d$ to be \[ \hat Y^{(n)}_t:=\max_{i=1,\dotsc,d}\frac{Y_{s_i}^{(n)}}{g_i(t)},\qquad t\in[0,1]^k. \] Elementary calculations show that \eqref{eq:fdoa} implies \[ \lim_{n\to\infty}P\left(\hat{\bm Y}^{(n)}\leq f\right)=P\left(\hat{\bm\eta}\leq f\right),\qquad f\in \bar E^-\left([0,1]^k\right), \] where $\hat{\bm\eta}$ is the discretized version of $\bm\eta$ as defined in \eqref{eq:generalized_max_linear_model}. Also, it is not difficult to see that for each $t\in[0,1]^k$, \begin{equation*}\label{eq:doa_bivariate_smsrv} \left(Y_t^{(n)},\hat Y_t^{(n)}\right)\to_{\mathcal D}(\eta_t,\hat\eta_t) \end{equation*} where $(\eta_t,\hat\eta_t)$ is the standard max-stable rv from Lemma \ref{lem:eta_hateta_sms}. Now applying the continuous mapping theorem, we obtain \begin{equation*} \left(Y_t^{(n)}-\hat Y_t^{(n)}\right)^2\to_{\mathcal D}(\eta_t-\hat\eta_t)^2. \end{equation*} It remains to prove uniform integrability of the sequence on the left hand side in order to obtain the next result. \begin{prop} Let $t\in[0,1]^k$. Then \[ \MSE\left(\hat Y_{t}^{(n)}\right)=E\left(\left(Y_t^{(n)}-\hat Y_t^{(n)}\right)^2\right)\to_{n\to\infty}\MSE\left(\hat\eta_t\right). \] \end{prop} \begin{proof} Fix $t\in[0,1]^k$. It remains to show that the sequence $X_t^{(n)}:=\left(Y_t^{(n)}-\hat Y_t^{(n)}\right)^2$ is uniformly integrable. 
A sufficient condition for uniform integrability is \[ \sup_{n\in\mathbb{N}}E\left(\left(X_t^{(n)}\right)^2\right)<\infty, \] see \citet[Section 3]{billi99}. Clearly, for every $n\in\mathbb{N}$, \[ E\left(\left(X_t^{(n)}\right)^2\right)\leq E\left(\left(Y_t^{(n)}\right)^4\right)+E\left(\left(\hat Y_t^{(n)}\right)^4\right). \] It is easy to verify that the rv $Y_t^{(n)}$ has the density $(1+x/n)^{n-1}$ on $[-n,0]$. Therefore, \begin{align*} E\left(\left(Y_t^{(n)}\right)^4\right)=\int_{-n}^0x^4\left(1+\frac xn\right)^{n-1}~dx=\frac{24n^5(n-1)!}{(n+4)!}\leq 24. \end{align*} Moreover, putting $c:=\min_{i=1,\dotsc,d}g_i(t)>0$, \[ \abs{\hat Y_t^{(n)}}=\min_{i=1,\dotsc,d}\frac{\abs{Y_{s_i}^{(n)}}}{g_i(t)}\leq\frac{\abs{Y_{s_1}^{(n)}}}{c}, \] and hence \[ E\left(\left(\hat Y_t^{(n)}\right)^4\right)\leq\frac{24}{c^4}, \] which completes the proof. \end{proof} \section*{Acknowledgment} The authors are grateful to two anonymous reviewers for their careful reading of the manuscript. The paper has benefitted a lot from their critical remarks. \end{document}
\begin{document} \title{\Large \textbf{Bounds for zeros of Collatz polynomials, with necessary and sufficient strictness conditions}} \author{\Large Matt Hohertz \\ Department of Mathematics, Rutgers University \\ [email protected] \\ ORCID 0000-0001-6724-1034} \maketitle \begin{abstract} In a previous paper, we introduced the Collatz polynomials $P_N\inps{z}$, whose coefficients are the terms of the Collatz sequence of the positive integer $N$. Our work in this paper expands on our previous results, using the Eneström-Kakeya Theorem to tighten our old bounds of the roots of $P_N\inps{z}$ and giving precise conditions under which these new bounds are sharp. In particular, we confirm an experimental result that zeros on the circle $\sbld{z\in\C:\aval{z} = 2}$ are rare: the set of $N$ such that $P_N\inps{z}$ has a root of modulus 2 is sparse in the natural numbers. We close with some questions for further study. \end{abstract} \textsc{keywords} Polynomial zeros, Collatz conjecture, generating functions \par \textsc{word count} 1295 \section*{Biographical note} Matt Hohertz received his Ph.D. in 2022 from Rutgers University. His thesis, titled \ic{Expanding the Geometric Modulus Principle}, explores the behavior of analytic functions near critical points. His recent research is in one-variable complex analysis, harmonic function theory, and polynomial root isolation. \section{Introduction} Let $c(N)$ be the Collatz iterate of a number $N\in\N\cup\sbld{0}$: \ic{i.e.}, \begin{equation} c(N) := \begin{cases} \frac{N}{2}, &N\mbox{ even} \\ \frac{3N+1}{2}, &N\mbox{ odd and not $1$} \\ 0, &N = 1. \end{cases} \end{equation} Also, let $c^j(N)$ have the standard meaning \begin{equation} c^j(N) := \begin{cases} N, &j = 0 \\ c\inps{c^{j-1}(N)}, &j \geq 1, \end{cases} \end{equation} and define $n(N)$ (just $n$ when $N$ is clear from context) as \begin{equation} n(N):=\min \sbld{j\in\N : c^j(N) = 1}. \end{equation} In this paper, we assume\footnote{\ic{cf.} item \ref{item:polyn} of Section \ref{sec:final-remarks}} that $n\inps{N} < \infty$ for all $N$, even though this remains an open question as of January 2022 \citep{boas}. As in \citep{hohertz_kalantari}, we define the \ic{$N^{th}$ Collatz polynomial} to be \begin{equation} P_N(z) := \sum_{j=0}^{n\inps{N}} c^j(N)\cdot z^j, \end{equation} \ic{i.e.}, the polynomial whose coefficients are the Collatz iterates of $N$, or equivalently consecutive members of the \ic{Collatz trajectory/sequence of $N$} (we will assume $N\in\N - \sbld{0,1}$ to avoid the trivial $P_0\inps{z} = 0$ and $P_1(z) = 1$). Throughout the article, let $z_N$ signify an arbitrary root of $P_N(z)$ (which we call a \ic{Collatz zero}), and let $[k] := [1,k] \cap \N$. \par We prove that \begin{equation} \frac{2\cdot M(N)}{3\cdot M(N)+1} \leq \aval{z_N} \leq 2, \end{equation} where $M(N)$ is the least odd iterate of $N$ other than 1. In fact, we go somewhat further, proving that \begin{equation} \frac{2\cdot M(N)}{3\cdot M(N)+1} < \aval{z_N} < 2 \end{equation} for almost all $N$ and giving precise conditions under which $N$ belongs to the sparse set of exceptions. \section{General bounds} \begin{lemma}[Eneström-Kakeya, Theorem A of \citep{eksharp}] \label{lemma:ek} Let $f(z) = a_kz^k + \cdots + a_1z + a_0$ have all strictly positive coefficients and set \begin{align} \alpha[f] &:= \min_{j=0,\cdots,k-1} \frac{a_j}{a_{j+1}} \\ \beta[f] &:= \max_{j=0,\cdots,k-1} \frac{a_j}{a_{j+1}}. 
\end{align} Then \begin{equation} \label{eq:ek} \alpha[f] \leq \aval{w} \leq \beta[f] \end{equation} for all roots $w$ of $f(z)$. \end{lemma} With Lemma \ref{lemma:ek}, we prove the following general bound for $\aval{z_N}$: \begin{theorem} \label{thm:bounds} \begin{equation} \label{eq:bounds} \frac{2\cdot M\inps{N}}{3\cdot M\inps{N} + 1} \leq \aval{z_N} \leq 2, \end{equation} where $M\inps{N}:=\min\sbld{\ell > 1\::\:\ell\mbox{ an odd Collatz iterate of }N}$ if $N$ has an odd iterate other than 1, and $M\inps{N}:=-1/2$ otherwise (so that the lower bound equals 2 in that case). \begin{proof} If $a_j$ is odd and not 1, then \begin{equation} \frac{a_j}{a_{j+1}} = \frac{2a_j}{3a_j + 1}, \end{equation} and if $a_j$ is even, then \begin{equation} \frac{a_j}{a_{j+1}} = \frac{2a_j}{a_j} = 2. \end{equation} Since $x\mapsto \frac{2x}{3x+1}$ is increasing, the smallest of these ratios is attained at the least odd iterate $M\inps{N}$. If $N$ has an odd iterate other than 1, then the conclusion follows by Lemma \ref{lemma:ek}. Otherwise, $P_N(z)$ is the partial sum \begin{equation} N\cdot \inps{1 + \frac{z}{2} + \cdots + \frac{z^n}{2^n}}, \end{equation} whose roots all lie on $\sbld{z\in\C : \aval{z} = 2}$. \end{proof} \end{theorem} \section{Strictness of the lower bound} \begin{lemma}[Theorem B of \citep{eksharp}] \label{lemma:thmb} For a polynomial $f(z)$ of degree $k$ with strictly positive coefficients, let $S[f]$ be the set of all $j\in[k+1]$ such that \begin{equation} \label{eq:strin} \beta\left[f\right] > \frac{a_{k-j}}{a_{k+1-j}}. \end{equation} Then the upper bound of Equation \eqref{eq:ek} is an equality if and only if $1 < d := \gcd\inps{j\in S[f]}$, in which case \begin{itemize} \item all the zeros on $\aval{z} = \beta[f]$ are simple and given by $\Big\{\beta[f]\cdot \exp\inps{\frac{2\pi i j}{d}},\;j=1,\cdots, d-1\Big\}$ and \item $f\inps{\beta[f]\cdot z} = \inps{1 + z + \cdots + z^{d-1}}\cdot q_m\inps{z^d}$ for a degree $m$ polynomial $q_m$ with strictly positive coefficients. \end{itemize} Moreover, if $m > 0$, then all zeros of $q_m$ belong to $\D$ and $\beta[q_m] \leq 1$. \end{lemma} Define the \ic{reciprocal polynomial} $\tilde{f}$ of the degree $k$ polynomial $f(z) = a_kz^k + \cdots + a_jz^j + \cdots + a_0$, for which we assume $a_0a_k\neq 0$, to be \begin{align} \tilde{f}(z) &:= z^k\cdot f\inps{\frac{1}{z}} \\ &= a_0z^k + \cdots + a_{k-j}z^j + \cdots + a_k. \end{align} Note that \begin{align} \beta[\tilde{f}] &= \max_{j=0,\cdots, k-1}\frac{a_{k-j}}{a_{k-j-1}} \\ &=\frac{1}{ \min_{j=0,\cdots, k-1}\frac{a_{j}}{a_{j+1}}} \\ &= \frac{1}{\alpha[f]}, \end{align} and that, more generally, an upper bound for the zeros of $\tilde{f}$ is the reciprocal of a lower bound for the zeros of $f$. \begin{theorem} \label{thm:lb} Equality holds for the leftmost inequality of Equation \eqref{eq:bounds} if and only if $N$ is a power of 2. \begin{proof} We have observed the ``if" direction in the proof of Theorem \ref{thm:bounds}. For the ``only if" direction, suppose that $N$ is not a power of 2 and let $M\inps{N}$ again signify the minimum odd iterate of $N$ not equal to 1. Then \begin{equation} \beta\left[\tilde{P_N}\right] = \frac{1}{\alpha\left[P_N\right]} = \frac{3}{2} + \frac{1}{2\cdot M\inps{N}}. \end{equation} Now, for $j\in[n+1]$, \begin{equation} \frac{a_{n-\inps{n-j}}}{a_{n-\inps{n+1-j}}} = \frac{a_j}{a_{j-1}} = \begin{cases} \frac{1}{2}, &a_{j-1}\mbox{ even} \\ \frac{3}{2} + \frac{1}{2a_{j-1}}, &a_{j-1}\mbox{ odd and }j\neq n+1 \\ 0, &j = n+1, \end{cases} \end{equation} so that $\beta\left[\tilde{P_N}\right] > \frac{a_j}{a_{j-1}}$ for all but the unique $j$ for which $a_{j-1} = M$; this $j$ is unique because the iterates $c^0(N),\dots,c^{n}(N)$ are pairwise distinct.
Thus, there exist at least two consecutive values of $j$ such that $\beta\left[\tilde{P_N}\right] > \frac{a_j}{a_{j-1}}$, implying that $\gcd\inps{j\in S[\tilde{P_N}]} = 1$. Therefore, by Lemma \ref{lemma:thmb}, $\beta\left[\tilde{P_N}\right]$ is a non-sharp upper bound for the zeros of $\tilde{P_N}$, whence $\alpha\left[P_N\right]$ is a non-sharp lower bound for the zeros of $P_N$. \end{proof} \end{theorem} \section{Strictness of the upper bound} For a given $N$, define the set $T_N:= \sbld{n+1}\cup\sbld{j\in[n] : c^{n-j}(N)\mbox{ is odd}}$. Then Lemma \ref{lemma:thmb} implies the following sharpness result: \begin{theorem} \label{thm:s} Let $d_N := \gcd(j\in T_N)$. Then the upper bound of Equation \eqref{eq:bounds} is an equality if and only if $d_N > 1$, in which case \begin{enumerate} \item the zeros of $P_N$ on $\sbld{\aval{z} = 2}$ are simple and given precisely by $\sbld{2\omega : \omega^{d_N} = 1\land \omega\neq 1}$ and \item $P_N$ factors as \begin{equation} P_N(2z) = \inps{1 + z + \cdots + z^{d_N-1}}\cdot Q_N(z^{d_N}),\end{equation} where $Q_N$ is a polynomial with positive coefficients. Moreover, if $Q_N$ is non-constant, then all its zeros lie in $\D$ and $\beta[Q_N] \leq 1$. \end{enumerate} \end{theorem} \begin{proof} By the proof of Theorem \ref{thm:bounds} (and the fact that $a_{-1} = 0$ because $P_N$ is a polynomial), the set $T_N$ is precisely the set of $j\in[n+1]$ such that $\frac{a_{n-j}}{a_{n+1-j}} < \beta[P_N] = 2$. Thus the result follows from applying Lemma \ref{lemma:thmb}. \end{proof} The exacting conditions of Theorem \ref{thm:s} suggest that zeros on $\aval{z} = 2$ are rare, and indeed we prove that this is so. First, we need two lemmas. \begin{lemma} \label{lemma:cons-ones} The number of length $k$ binary strings with no consecutive 1s is $F_{k+2}$, the $\inps{k+2}^{nd}$ Fibonacci number (where $F_1 = F_2 = 1$). In particular, let $p_k$ be the probability that a length $k$ binary string selected uniformly at random contains no consecutive 1s. Then $\lim_{k\rightarrow\infty} p_k = 0$. \end{lemma} \begin{proof} The first statement holds for $k = 0$ and $1$. Consider a binary string $x$ of length $k\geq 2$ and let $substring(x, j)$ be the substring of $x$ comprising its first $j$ digits. If the last digit of $x$ is 0, then $x$ has two consecutive 1s if and only if $substring(x, k-1)$ does. Otherwise, its last two digits are either $11$, in which case it has consecutive 1s, or $01$, in which case it has consecutive 1s if and only if $substring(x, k-2)$ does. Therefore, the number of length $k$ binary strings with no consecutive 1s satisfies the same recurrence relation as the Fibonacci numbers. \par Since $F_k\sim \frac{\ensuremath{\varphi}^k}{\sqrt{5}}$, we have $p_k = F_{k+2}/2^k\sim \frac{\ensuremath{\varphi}^2}{\sqrt{5}}\inps{\frac{\ensuremath{\varphi}}{2}}^k$, and this quantity has limit 0 as $k\rightarrow\infty$. \end{proof} \begin{lemma} \label{lemma:sparse-subset} Let $X$ be the set of natural numbers $N$ with the property that $$c^j\inps{N}\mbox{ odd}\Rightarrow c^{j+1}\inps{N}\mbox{ even}$$ for all integers $j\geq 0$. The set $X$ has density zero in the natural numbers. \end{lemma} \begin{proof} Let $x_j\inps{N}:= c^j\inps{N}\;\inps{\mbox{mod }2}.$ By \citep[Theorem B]{lagarias_1985}, the function $\Z\rightarrow \inps{\Z/2\Z}^{k+1}$ defined by \begin{equation} N\mapsto \inps{x_0\inps{N},\dots, x_k\inps{N}} \end{equation} is periodic with period $2^{k+1}$. For a fixed $k$, the set $X$ is a subset of the preimage of the set of length $k+1$ strings with no consecutive 1s under this function.
By Lemma \ref{lemma:cons-ones}, as $k\rightarrow\infty$, this preimage becomes sparse as a subset of the natural numbers. \end{proof} Lemmas \ref{lemma:cons-ones} and \ref{lemma:sparse-subset} culminate in the following theorem. \begin{theorem}\label{thm:zerodens} The set of $N$ such that $P_N$ has a root on $\aval{z} = 2$ has density 0 in the natural numbers. \end{theorem} \begin{proof} By Lemma \ref{lemma:sparse-subset}, $T_N$ contains consecutive natural numbers for almost all $N$, so that $\gcd\inps{j\in T_N} = 1$ for almost all $N$. The conclusion follows from Theorem \ref{thm:s}. \end{proof} \section{Final remarks} \label{sec:final-remarks} We conclude with a few suggestions for further research. \begin{enumerate} \item Strengthen Theorem \ref{thm:lb} by finding an explicit, closed-form lower bound for $\aval{z_N}$ in terms of $N$. \item Strengthen Theorems \ref{thm:s} and \ref{thm:zerodens} by finding an explicit, closed-form upper bound for $\aval{z_N}$ in terms of $N$. \item Figure \ref{fig:zeros} suggests that certain subsets of $\sbld{z\in\C\::\:\aval{z} \leq 2}$ contain large clusters of Collatz zeros while others are zero-free. Give an algorithm for proving that a subset of $\sbld{z\in\C\::\:\aval{z} \leq 2}$ is free of Collatz zeros. \item While Descartes' Rule of Signs implies that no Collatz zero is positive real, there appears to be a sequence of zeros that approaches $z = 1$ arbitrarily closely, bounded by a parabola with vertex at $z = 1$. Prove or disprove the existence of such a convergent sequence and parabola. \item Under some modest assumptions, the zeros of a polynomial with random coefficients cluster near the unit circle with uniform angular distribution \citep{hughes_nikeghbali_2008}. Prove or disprove that these assumptions hold for the Collatz polynomials. \item Find the Galois groups of $P_N$ for general classes of $P_N$. \item Expanding on Theorem \ref{thm:s}, find conditions under which $P_N$ has zeros at real multiples (other than 2) of roots of unity. \item Except for Theorem \ref{thm:lb}, all the theorems in this paper are stated in terms of the Collatz sequence of $N$ rather than in terms of $N$ itself. Prove theorems on $z_N$ which can be applied without calculating the Collatz sequence of $N$ (\ic{e.g.}, if $N\equiv 3\mbox{ mod }4$, then the zeros of $P_N$ lie in ...). \item Our results rest on the assumption, equivalent to the Collatz conjecture itself, that $P_N$ is a polynomial for all $N$. Using other methods of proof, find a property of Collatz zeros that contradicts a theorem in this paper, thereby disproving the Collatz conjecture. \label{item:polyn} \end{enumerate} \section*{Declaration of interests} The author reports there are no competing interests to declare. \section*{Figures} \begin{figure} \caption{Complex plot of the zeros of $P_N$ for $2\leq N \leq 2^{16}$} \label{fig:zeros} \end{figure} \listoffigures \setcitestyle{numbers} \end{document}
\begin{document} \title{Simplicial Complexes Obtained from Qualitative Probability Orders} \author{Paul H. Edelman} \email{[email protected]} \author{ Tatyana Gvozdeva} \email{[email protected]} \author{ Arkadii Slinko} \email{[email protected]} \maketitle \begin{abstract} In this paper we initiate the study of abstract simplicial complexes which are initial segments of qualitative probability orders. This is a natural class that contains the threshold complexes and is contained in the shifted complexes, but is equal to neither. In particular we construct a qualitative probability order on 26 atoms that has an initial segment which is not a threshold simplicial complex. Although 26 is probably not the minimal number for which such an example exists, we provide some evidence that it cannot be much smaller. We prove some necessary conditions for this class and make a conjecture as to a characterization of it. The conjectured characterization relies on some ideas from cooperative game theory. \end{abstract} \section{Introduction} \label{intro} The concept of qualitative (comparative) probability takes its origin in the attempts of de Finetti (\citeyear{dF}) to axiomatise probability theory. It also played an important role in the expected utility theory of \citet[p.32]{Sa}. The essence of a qualitative probability is that it does not give us numerical probabilities but instead tells us, for every pair of events, which one is more likely to happen. For any $n\ge 5$, the class of qualitative probability orders on an $n$-element set is strictly larger than the class of orders that arise from probability measures \citep{KPS}. Qualitative probability orders on finite sets are now recognised as an important combinatorial object \citep{KPS,PF1,PF2} that finds applications in areas as far removed from probability theory as the theory of Gr\"obner bases \citep[e.g.,][]{Mac}. Another important combinatorial object, also defined on a finite set, is an abstract simplicial complex. This is a set of subsets of a finite set, called faces, with the property that a subset of a face is also a face. This concept is dual to the concept of a simple game whose winning coalitions form a set of subsets of a finite set with the property that if a coalition is winning, then every superset of it is also a winning coalition. The most studied class of simplicial complexes is the class of threshold simplicial complexes. These arise when we assign weights to elements of a finite set, set a threshold and define faces as those subsets whose combined weight does not reach the threshold. Given a qualitative probability order one may obtain a simplicial complex in an analogous way. For this one has to choose a threshold---which now will be a subset of our finite set---and consider as faces all subsets that are earlier than the threshold in the given qualitative probability order. This initial segment of the qualitative probability order will, in fact, be a simplicial complex. The collection of complexes arising as initial segments of probability orders contains threshold complexes and is contained in the well-studied class of shifted complexes \citep{Klivans05,Klivans07}. A natural question is therefore to ask if this is indeed a new class of complexes distinct from both the threshold complexes and the shifted ones. In this paper we give an affirmative answer to both parts of this question. We present an example of a shifted complex on $7$ points that is not the initial segment of any qualitative probability order.
On the other hand we also construct an initial segment of a qualitative probability order on $26$ atoms that is not threshold. We also show that such an example cannot be too small; in particular, it is unlikely that one can be found on fewer than $18$ atoms. The structure of this paper is as follows. In Section 2 we introduce the basics of qualitative probability orders. In Section 3 we consider abstract simplicial complexes and give necessary and sufficient conditions for them to be threshold. In Section 4 we give a construction that will further provide us with examples of qualitative probability orders that are not related to any probability measure. In Sections 5 and 6 we present our main result, which is an example of a qualitative probability order on $26$ atoms that is not threshold. Section 7 concludes with a conjectured characterization of initial segment complexes that is motivated by work in the theory of cooperative games. \section{Qualitative Probability Orders and Discrete Cones} In this paper all our objects are defined on the set $[n]=\{1,2,\ldots, n\}$. By $2^{[n]}$ we denote the set of all subsets of $[n]$. An order\footnote{An order in this paper is any reflexive, complete and transitive binary relation. If it is also anti-symmetric, it is called a linear order.} $\preceq$ on $2^{[n]}$ is called a {\em qualitative probability order} on ${[n]}$ if \begin{equation} \label{nontriv} \emptyset \preceq A \end{equation} for every nonempty subset $A$ of ${[n]}$, and $\preceq$ satisfies de Finetti's axiom, namely for all $A,B,C\in 2^{[n]}$ \begin{equation} \label{deFeq} A\preceq B \ \Longleftrightarrow \ A\cup C\,\preceq\, B\cup C \ \mbox{whenever}\ \ (A\cup B)\cap C=\emptyset\,. \end{equation} Note that if we have a probability measure ${\bf p}=(\row pn)$ on ${[n]}$, where $p_i$ is the probability of $i$, then we know the probability $p(A)$ of every event $A$ and $p(A)=\sum_{i\in A}p_i$. We may now define a relation $\preceq$ on $2^{[n]}$ by \[ A\preceq B \quad \mbox{if and only if}\quad p(A)\le p(B); \] obviously $\preceq$ is a qualitative probability order on ${[n]}$, and any such order is called {\em representable} \citep[e.g.,][]{PF1,GR}. Those not obtainable in this way are called {\em non-representable}. Non-representable qualitative probability orders exist for every $n\ge 5$ \citep{KPS}. A non-representable qualitative probability order $\preceq$ on ${[n]}$ is said to {\em almost agree} with the measure ${\bf p}$ on ${[n]}$ if \begin{equation} \label{almost-repr} A\preceq B \Longrightarrow p(A)\le p(B). \end{equation} If such a measure ${\bf p}$ exists, then the order $\preceq$ is said to be {\em almost representable}. Since the arrow in (\ref{almost-repr}) is only one-sided, it is perfectly possible for an almost representable order to have $A\preceq B$ but not $B\preceq A$ while $p(A)=p(B)$. \par We begin with some standard properties of qualitative probability orders which we will need subsequently. Let $ \preceq$ be a qualitative probability order on $2^{[n]}$. As usual the following two relations can be derived from it. We write $A\prec B$ if $A\preceq B$ but not $B\preceq A$ and $A\sim B$ if $A\preceq B$ and $B\preceq A$. \begin{lem} \label{strict-equiv} Suppose that $ \preceq$ is a qualitative probability order on $2^{[n]}$, $A, B, C, D \in 2^{[n]}$, $A \preceq B$, $C \preceq D$ and $B \cap D = \emptyset$. Then $A \cup C \preceq B \cup D$. Moreover, if $A \prec B$ or $C \prec D$, then $A \cup C \prec B \cup D$.
\end{lem} \begin{proof} Firstly, let us consider the case when $A\cap C=\emptyset$. Let $B'=B\setminus C$ and $C'=C\setminus B$ and $I=B\cap C$. Then by (\ref{deFeq}) we have \[ A\cup C'\preceq B\cup C'=B'\cup C\preceq B'\cup D \] where we have $A\cup C' \prec B'\cup D$ if $A\prec B$ or $C\prec D$. Now we have \[ A\cup C'\preceq B'\cup D \Leftrightarrow A\cup C=(A\cup C')\cup I\preceq (B'\cup D) \cup I=B\cup D. \] Now let us consider the case when $A\cap C\ne \emptyset$. Let $A'=A\setminus C$. By (\ref{nontriv}) and (\ref{deFeq}) we have $A'\preceq A\preceq B$, and $A'\prec B$ whenever $A\prec B$. Since $A'\cap C=\emptyset$, the previous case applied to $A'\preceq B$ and $C\preceq D$ yields \[ A\cup C=A'\cup C\preceq B\cup D, \] with strict inequality whenever $A\prec B$ or $C\prec D$. \end{proof} A weaker version of this lemma can be found in \cite[Lemma 2.2]{Mac}. \begin{defn} \label{cancel} A sequence of subsets $(\row Aj; \row Bj)$ of $[n]$ of even length $2j$ is said to be a {\em trading transform} of length $j$ if for every $i\in [n]$ $$ \left|\{k\mid i\in A_k\}\right|=\left|\{k\mid i\in B_k\}\right|. $$ In other words, sets $\row Aj$ can be converted into $\row Bj$ by rearranging their elements. We say that an order $\preceq $ on $2^{[n]}$ satisfies the $k$-th cancellation condition $CC_k$ if there does not exist a trading transform $(\row Ak;\row Bk)$ such that $A_i\preceq B_i$ for all $i\in [k]$ and $A_i\prec B_i$ for at least one $i\in [k]$. \end{defn} The key result of \cite{KPS} can now be reformulated as follows. \begin{thm}[Kraft-Pratt-Seidenberg] \label{KPStheorem} A qualitative probability order $\preceq $ is representable if and only if it satisfies $CC_k$ for all $k=1,2,\ldots $. \end{thm} It was also shown in \citet[Section 2]{PF1} that $CC_2$ and $CC_3$ hold for linear qualitative probability orders; this follows from de Finetti's axiom and properties of linear orders. It can be shown that every qualitative probability order, linear or not, satisfies $CC_2$ and $CC_3$ as well. Hence $CC_4$ is the first nontrivial cancellation condition. As was noticed in \cite{KPS}, for $n<5$ all qualitative probability orders are representable, but for $n=5$ there are non-representable ones. For $n=5$ all orders are still almost representable \cite{PF1}, which is no longer true for $n=6$ \cite{KPS}. \par \begin{comment} {\color{blue}These cancellation conditions may look to some as rather unnatural and complicated. Since they are derived from linear algebra, it is not surprising that they can be made to look much more natural by being reformulated in terms of vectors, as was done in \cite{CS} and as we do later in this paper.\par In this paper we consider qualitative probability orders order $\preceq$ on $2^{[n]}$ is a that satisfy \[ \{1\}\succeq \{2\}\succeq \ldots \succeq \{n\} \succ \emptyset \] which is equivalent to $p_1\ge p_2\ge \ldots\ge p_n>0$ for a qualitative probability order arising from a probability measure ${\bf p}=(\row pn)$.} \par \end{comment} It will be useful for our constructions to rephrase some of these conditions in vector language. To every such linear order $\preceq$, there corresponds a {\em discrete cone} $C(\preceq)$ in $T^n$, where $T=\{-1,0,1\}$, as defined in \cite{PF1}.
\begin{defn} A subset $C\subseteq T^n$ is said to be a discrete cone if the following properties hold: \begin{enumerate} \item[{\rm D1.}] $\{{\bf e}_1,\ldots, {\bf e}_n\}\subseteq C$ and $\{-{\bf e}_1, \ldots, -{\bf e}_n\} \cap C = \emptyset$, where $\{{\bf e}_1,\ldots,{\bf e}_n\}$ is the standard basis of $\mathbb R^n$, \item[{\rm D2.}] $\{-{\bf x},{\bf x}\}\cap C\ne\emptyset$ \ for every ${\bf x}\in T^n$, \item[{\rm D3.}] ${\bf x}+{\bf y}\in C$ whenever ${\bf x},{\bf y}\in C$ and ${\bf x}+{\bf y}\in T^n$. \end{enumerate} \end{defn} We note that \cite{PF1} requires ${\bf 0}\notin C$ because his orders are anti-reflexive. In our case, condition D2 implies ${\bf 0}\in C$. Given a qualitative probability order $\preceq $ on $2^{[n]}$, for every pair of subsets $A,B$ satisfying $B \preceq A$ we construct a characteristic vector of this pair $\chi(A,B)=\chi(A)\!-\!\chi(B)\in T^n$. We define the set $C(\preceq)$ of all characteristic vectors $\chi(A,B)$, for $A,B\in 2^{[n]}$ such that $B\preceq A$. The two axioms of qualitative probability guarantee that $C(\preceq)$ is a discrete cone \citep[see][Lemma~2.1]{PF1}. Following \cite{PF1}, the cancellation conditions can be reformulated as follows: \begin{prop} A qualitative probability order $\preceq $ satisfies the $k$-th cancellation condition $CC_k$ if and only if there does not exist a set $\{\brow xk\}$ of nonzero vectors in $C(\preceq )$ such that \begin{equation} \label{axm} \bsum xk={\bf 0} \end{equation} and $-{\bf x}_i \notin C(\preceq) $ for at least one $i$. \end{prop} Geometrically, a qualitative probability order $\preceq$ is representable if and only if there exists a positive vector ${\bf u}\in \mathbb R^n$ such that \[ {\bf x}\in C(\preceq) \Longleftrightarrow ({\bf u},{\bf x})\ge 0 \quad \mbox{for all}\, \ {\bf x}\in T^n\setminus\{{\bf 0}\}, \] where $(\cdot,\cdot)$ is the standard inner product; that is, $\preceq$ is representable if and only if every non-zero vector in the cone $C(\preceq)$ lies in the closed half-space $H^{+}_{\bf u}=\{{\bf x}\in\mathbb R^n\mid ({\bf u},{\bf x})\ge 0\}$ of the corresponding hyperplane $H_{\bf u}=\{{\bf x}\in\mathbb R^n\mid ({\bf u},{\bf x})= 0\}$. \par Similarly, for a non-representable but {\em almost} representable qualitative probability order $\preceq$, there exists a vector ${\bf u}\in \mathbb R^n$ with non-negative entries such that \[ {\bf x}\in C(\preceq ) \Longrightarrow ({\bf u},{\bf x})\ge 0 \quad \mbox{for all}\, \ {\bf x}\in T^n\setminus\{{\bf 0}\}. \] In the latter case we can have ${\bf x}\in C(\preceq)$ and $-{\bf x}\notin C(\preceq)$ despite $({\bf u},{\bf x})= 0$. In both cases, the normalised vector ${\bf u}$ gives us the probability measure, namely ${\bf p}=(u_1+\ldots +u_n)^{-1}\left(\row un \right)$, from which $\preceq $ arises or with which it almost agrees.\par \section{Simplicial complexes and their cancellation conditions} \label{cancond} In this section we will introduce the objects of our study, simplicial complexes that arise as initial segments of a qualitative probability order. Using cancellation conditions for simplicial complexes, we will show that this class contains the threshold complexes and is contained in the shifted complexes. Using only these conditions it will be easy to show that the initial segment complexes are strictly contained in the shifted complexes. Showing the strict containment of the threshold complexes will require more elaborate constructions which will be developed in the rest of the paper. 
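Whether a given tuple of subsets forms a trading transform in the sense of Definition~\ref{cancel} is a purely combinatorial condition that is easy to verify by machine, and the cancellation conditions introduced below amount to the non-existence of certain trading transforms. The following Python sketch is only an illustration (it is not part of the formal development, and the function name \texttt{is\_trading\_transform} is ours); the quadruple it checks is the one used later in this section to exhibit a shifted complex that is not an initial segment complex.
\begin{verbatim}
from collections import Counter

def is_trading_transform(As, Bs):
    """(A_1,...,A_j; B_1,...,B_j) is a trading transform iff every
    element occurs in as many of the A_i as of the B_i."""
    if len(As) != len(Bs):
        return False
    return (Counter(x for A in As for x in A)
            == Counter(x for B in Bs for x in B))

As = [{1, 5, 7}, {2, 3, 4, 6}]
Bs = [{3, 4, 7}, {1, 2, 5, 6}]
print(is_trading_transform(As, Bs))   # True
\end{verbatim}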
A subset $\Delta \subseteq 2^{{[n]}}$ is an {\em (abstract) simplicial complex} if it satisfies the condition: \[\text{if } B \in \Delta \text{ and } A \subseteq B, \text{ then } A \in \Delta.\] Subsets that are in $\Delta$ are called {\em faces}. Abstract simplicial complexes arose from geometric simplicial complexes in topology \citep[e.g.,][]{Ma}. Indeed, for every geometric simplicial complex $\Delta$ the set of vertex sets of simplices in $\Delta$ is an abstract simplicial complex, also called the {\em vertex scheme} of $\Delta$. In combinatorial optimization various abstract simplicial complexes associated with finite graphs \citep{JJ} are studied, such as the independence complex, the matching complex, etc. Abstract simplicial complexes are also in one-to-one correspondence with {\em simple games} as defined by \cite{vNM:b:theoryofgames}. A simple game is a pair $G=([n],W)$, where $W$ is a subset of the power set $2^{[n]}$ which satisfies the monotonicity condition: \[ \text{ if $X\in W$ and $X\subseteq Y\subseteq [n]$, then $Y\in W$.} \] The subsets from $W$ are called {\em winning coalitions} and the subsets from $L=2^{[n]}\setminus W$ are called {\em losing coalitions}. Obviously the set of losing coalitions $L$ is a simplicial complex. The reverse is also true: if $\Delta $ is a simplicial complex, then the set $2^{[n]}\setminus \Delta $ is the set of winning coalitions of a certain simple game. A well-studied class of simplicial complexes is that of {\em threshold} complexes (mostly as an equivalent concept to the concept of a weighted majority game but also as threshold hypergraphs \citep{RRST}). A simplicial complex $\Delta$ is a threshold complex if there exist non-negative reals $w_1, \ldots, w_n$ and a positive constant $q$, such that \[ A \in \Delta \Longleftrightarrow w(A) = \sum_{i \in A} w_i < q. \] The same parameters define a {\em weighted majority game} by setting \[ A \in W \Longleftrightarrow w(A) = \sum_{i \in A} w_i \ge q. \] This game has the standard notation $[q;\row wn]$.\par A much larger but still well-understood class of simplicial complexes is that of {\em shifted} simplicial complexes \citep{Klivans05,Klivans07}. A simplicial complex is shifted if there exists an order $\trianglelefteq $ on the set of vertices $[n]$ such that for any face $F$, replacing any of its vertices $x\in F$ with a vertex $y$ such that $y \trianglelefteq x$ results in a subset $(F\setminus \{x\})\cup \{y\}$ which is also a face. Shifted complexes correspond to complete\footnote{sometimes also called linear} games \citep{FMDAM}. A complete game has an order $\trianglelefteq$ on players such that if a coalition $X$ is winning, then replacing any player $x\in X$ with a player $z$ such that $x \trianglelefteq z$ results in a coalition $(X\setminus \{x\})\cup \{z\}$ which is also winning. A related concept is the so-called Isbell desirability relation $\le_I$ \cite{tz:b:simplegames}. Given a game $G$, the relation $\le_I $ on $[n]$ is defined by setting $j \le_I i$ if for every set $X\subseteq [n]$ containing neither $i$ nor~$j$ \begin{equation} \label{isbel} X\cup \{j\}\in W \Longrightarrow X\cup \{i\} \in W. \end{equation} The idea is that if $j \le_I i$, then $i$ is more desirable as a coalition partner than $j$. The game is complete if and only if $\le_I$ is an order on $[n]$. \par Let $\preceq $ be a qualitative probability order on $[n]$ and $T\in 2^{[n]}$.
We denote \[ \Delta(\preceq, T)=\{X\subseteq [n]\mid X\prec T\}, \] where $X\prec Y$ stands for $X\preceq Y$ but not $Y\preceq X$, and call it an {\em initial segment} of $\preceq $. \begin{lem} Any initial segment of a qualitative probability order is a simplicial complex. \end{lem} \begin{proof}Suppose that $\Delta=\Delta(\preceq,T)$ and $B \in \Delta$. If $A \subset B$, then let $C=B \setminus A$. By (\ref{nontriv}) we have that $\emptyset \preceq C$, and since $A \cap C= \emptyset$ it follows from (\ref{deFeq}) that $\emptyset \cup A \preceq C \cup A$, which implies that $ A \preceq B$. Since $\Delta$ is an initial segment, $B \in \Delta$ and $A \preceq B$ imply that $A \in \Delta$, and thus $\Delta$ is a simplicial complex. \end{proof} We will refer to simplicial complexes that arise as initial segments of some qualitative probability order as {\em initial segment complexes}. \par As for qualitative probability orders, cancellation conditions will play a key role in our analysis of simplicial complexes. \begin{defn} A simplicial complex $\Delta$ is said to satisfy $CC_k^{*}$, $k\ge 2$, if there does not exist a trading transform $(A_1, \ldots, A_k;B_1, \ldots, B_k)$ such that $A_i \in \Delta$ and $B_i \notin \Delta$ for every $i \in [k]$. \end{defn} Let us show the connection between $CC_k$ and $CC_k^{*}$. \begin{thm} Suppose $\preceq$ is a qualitative probability order on $2^{[n]}$ and ${\Delta(\preceq, T)}$ is its initial segment. If $\preceq$ satisfies $CC_k$ then ${\Delta(\preceq, T)}$ satisfies $CC_k^{*}$. \end{thm} This gives us some initial properties of initial segment complexes. Since conditions $CC_k$, $k=2,3$, hold for all qualitative probability orders \citep{PF1} we obtain \begin{thm}\label {cc*>3} If an abstract simplicial complex $\Delta \subseteq 2^{[n]}$ is an initial segment complex, then it satisfies $CC_k^{*}$ for all $k \le 3$. \end{thm} From this theorem we get the following corollary, due to Caroline Klivans (personal communication): \begin{cor} Every initial segment complex is a shifted complex. Moreover, there are shifted complexes that are not initial segment complexes. \end{cor} \begin{proof} Let $\Delta$ be a non-shifted simplicial complex. Then it is known to contain an obstruction of the form: there are $i,j \in [n]$ and $A,B \in \Delta$, neither containing $i$ nor $j$, so that $A \cup i$ and $B \cup j$ are in $\Delta$ but neither $B\cup i$ nor $A\cup j$ is in $\Delta$ \citep{Klivans05}. But then $(A \cup i,B\cup j;B\cup i,A\cup j)$ is a trading transform that violates $CC_2^{*}$. Since all initial segment complexes satisfy $CC_2^{*}$, they must all be shifted. On the other hand, there are shifted complexes that fail to satisfy $CC_2^{*}$ and hence cannot be initial segments. Let $\Delta$ be the smallest shifted complex (where shifting is with respect to the usual ordering) that contains $\{1,5,7\}$ and $\{2,3,4,6\}$. Then it is easy to check that neither $\{3,4,7\}$ nor $\{1,2,5,6\}$ is in $\Delta$, but \begin{equation} (\{1,5,7\},\{2,3,4,6\};\{3,4,7\},\{1,2,5,6\}) \end{equation} is a trading transform in violation of $CC_2^{*}$. \end{proof} Similarly, the {\em terminal segment} \[ G(\preceq, T)=\{X\subseteq [n]\mid T\preceq X\} \] of any qualitative probability order is a complete simple game. Theorem~2.4.2 of \cite{tz:b:simplegames} can be reformulated to give necessary and sufficient conditions for a simplicial complex to be threshold.
\begin{thm} \label{all_cc*} An abstract simplicial complex $\Delta \subseteq 2^{[n]}$ is a threshold complex if and only if the condition $CC_k^{*}$ holds for all $k \ge 2$. \end{thm} Above we showed that the initial segment complexes are strictly contained in the shifted complexes. What is the relationship between the initial segment complexes and threshold complexes? \begin{lem} Every threshold complex is an initial segment complex. \end{lem} \begin{proof} Let $\Delta$ be the threshold complex defined by the weights $w_1, \ldots, w_n$ and a positive constant $q$, and let $\preceq$ be the representable qualitative probability order arising from the measure ${\bf p}$ with $p_i$ proportional to $w_i$, $1 \leq i\leq n$. Choose as the threshold set $T$ a subset of minimal weight among those with $w(T)\ge q$. Then $X\prec T$ means $w(X)<w(T)$, which by the choice of $T$ happens exactly when $w(X)<q$; hence $\Delta=\Delta(\preceq,T)$. \end{proof} This leaves us with the question of whether this containment is strict, i.e., are there initial segment complexes which are not threshold complexes. One might think that some initial segment of a non-representable qualitative probability order is not threshold. Unfortunately, this need not be the case. \begin{exmp} This example, adapted from \citep[Examples 2.5 and 3.9]{Mac}, gives a non-representable qualitative probability order for which every initial segment is threshold. Construct a representable qualitative probability order on $2^{[5]}$ using the $p_i$'s $7,10,16,20,22$. The order begins \begin{equation} \emptyset \prec 1 \prec 2 \prec 3 \prec 12 \prec 4 \prec 5\prec \cdots \end{equation} where $1$ denotes the singleton set $\{1\}$ and by $12$ we mean $\{1,2\}$. Since the qualitative probability order is representable, every initial segment is a threshold complex. Now suppose we interchange the order of $12$ and $4$. The new ordering, which begins \begin{equation} \emptyset \prec 1 \prec 2 \prec 3 \prec 4 \prec 12 \prec 5 \prec \cdots , \end{equation} is still a qualitative probability order but it is no longer representable \citep[Example 2.5]{Mac}. With one exception, all of the initial segments in this new non-representable qualitative probability order are initial segments in the original one and thus are threshold. The one exception is the segment \begin{equation} \emptyset \prec 1 \prec 2 \prec 3 \prec 4, \end{equation} i.e., the complex $\{\emptyset,\{1\},\{2\},\{3\},\{4\}\}$, which is obviously a threshold complex. \end{exmp} Another approach to finding an initial segment complex that is not threshold is to construct a complex that violates $CC_k^{*}$ for some small value of $k$. As noted above, all initial segment complexes satisfy $CC_2^{*}$ and $CC_3^{*}$, so the smallest condition that could fail is $CC_4^{*}$. We will now show that for small values of $n$ the cancellation condition $CC^{*}_4$ is satisfied by any initial segment. This will also give us invaluable information on how to construct a non-threshold initial segment later. \begin{defn} Two pairs of subsets $(A_1,B_1)$ and $(A_2,B_2)$ are said to be compatible if the following two conditions hold: \begin{align*} & x\in A_1\cap A_2 \Longrightarrow x\in B_1\cup B_2,\ \text{and}\\ & x\in B_1\cap B_2 \Longrightarrow x\in A_1\cup A_2. \end{align*} \end{defn} \begin{lem} \label{oneless} Let $\preceq$ be a qualitative probability order on $2^{[n]}$, $T\subseteq [n]$, and let $\Delta=\Delta_n(\preceq, T)$ be the respective initial segment. Suppose $(\row As;\row Bs)$ is a trading transform and $A_i\prec T\preceq B_j$ for all $i,j\in [s]$. If any two pairs $(A_i,B_k)$ and $(A_j,B_l)$ with $i\neq j$ and $k\neq l$ are compatible, then $\preceq$ fails to satisfy $CC_{s-1}$.
\end{lem} \begin{proof} Let us define \begin{align} &\bar{A}_i=A_i\setminus (A_i\cap B_k), \qquad\qquad \bar{B}_k=B_k\setminus (A_i\cap B_k),\\ &\bar{A}_j=A_j\setminus (A_j\cap B_l), \qquad\qquad \bar{B}_l=B_l\setminus (A_j\cap B_l). \end{align} We note that \begin{equation} \label{emptyintersection} \bar{A}_i\cap \bar{A}_j=\bar{B}_k\cap \bar{B}_l=\emptyset. \end{equation} Indeed, suppose, for example, that $x\in \bar{A}_i\cap \bar{A}_j$. Then also $x\in A_i\cap A_j$ and, by compatibility, $x\in B_k$ or $x\in B_l$. In both cases it is impossible for $x$ to lie in $\bar{A}_i\cap \bar{A}_j$. We note also that by Lemma~\ref{strict-equiv} we have \begin{equation} \label{pcompjoined} \bar{A}_i\cup \bar{A}_j\prec \bar{B}_k\cup \bar{B}_l. \end{equation} Now we observe that \begin{equation*} (\bar{A}_i,\bar{A}_j, A_{m_1},\ldots, A_{m_{s-2}};\bar{B}_k,\bar{B}_l, B_{r_1},\ldots, B_{r_{s-2}}), \end{equation*} where $\{m_1,\ldots,m_{s-2}\}=[s]\setminus\{i,j\}$ and $\{r_1,\ldots,r_{s-2}\}=[s]\setminus\{k,l\}$, is a trading transform. Hence, due to (\ref{emptyintersection}), \begin{equation*} (\bar{A}_i\cup \bar{A}_j, A_{m_1},\ldots, A_{m_{s-2}};\bar{B}_k\cup \bar{B}_l, B_{r_1},\ldots, B_{r_{s-2}}) \end{equation*} is also a trading transform. This violates $CC_{s-1}$ since (\ref{pcompjoined}) holds and $A_{m_t}\prec B_{r_t}$ for all $t=1,\ldots, s-2$. \end{proof} By the definition of a trading transform we are allowed to use repetitions of the same coalition in it. However, we will show that in order to violate $CC^{*}_{4}$ we need a trading transform $(A_1, \ldots, A_4; B_1, \ldots, B_4)$ in which all $A$'s and $B$'s are different. \begin{lem}\label{norepet} Let $\preceq$ be a qualitative probability order on $2^{[n]}$, $T\subseteq [n]$, and let $\Delta=\Delta_n(\preceq, T)$ be the respective initial segment. Suppose $(\row A4,\row B4)$ is a trading transform and $A_i\prec T\preceq B_j$ for all $i,j\in [4]$. Then \[|\{\row A4\}|=|\{\row B4\}| =4.\] \end{lem} \begin{proof} Note that no two pairs $(A_i, B_j)$ and $(A_l,B_k)$ with $i\ne l$ and $j\ne k$ are compatible: otherwise, by Lemma~\ref{oneless}, the order $\preceq$ would fail $CC_3$, which contradicts the fact that every qualitative probability order satisfies $CC_3$. Assume, to the contrary, that we have at least two identical coalitions among $\row A4$ or $\row B4$. Without loss of generality we can assume $A_1=A_2$. Clearly neither all the $A$'s nor all the $B$'s can coincide, so there are at least two different $A$'s and two different $B$'s; without loss of generality, suppose $A_1 \neq A_3$ and $B_1 \neq B_2$. The pairs $(A_1, B_1)$ and $(A_3, B_2)$ are not compatible. This means that one of the following two statements is true: either there is $x \in A_1 \cap A_3$ such that $x \notin B_1 \cup B_2$, or there is $y \in B_1 \cap B_2$ such that $y \notin A_1 \cup A_3$. Consider the first case; the other one is similar. We know that $x \in A_1 \cap A_3$, and hence there are at least three copies of $x$ among $\row A4$. At the same time $x \notin B_1 \cup B_2$, so there can be at most two copies of $x$ among $\row B4$. This is a contradiction. \end{proof} \begin{thm} \label{cc4} $CC_4^{*}$ holds for $\Delta=\Delta_n(\preceq, T)$ for all $n\le 17$. \end{thm} \begin{proof} Let us consider the set of column vectors \begin{equation} \label{36vectors} U=\{ {\bf x}\in \mathbb{R}^8\mid x_i\in \{0,1\}\ \text{and}\ x_1+x_2+x_3+x_4=x_5+x_6+x_7+x_8=2\}. \end{equation} This set has an involution ${\bf x}\mapsto {\bf \bar{x}}$, where $\bar{x}_i=1-x_i$. Say, if ${\bf x}=(1,1,0,0,0,0,1,1)^T$, then ${\bf \bar{x}}=(0,0,1,1,1,1,0,0)^T$. There are 36 vectors in $U$, and they are split into 18 pairs $\{{\bf x}, {\bf \bar{x}}\}$.
Suppose now that ${\mathcal T}=(A_1,A_2,A_3,A_4; B_1,B_2,B_3,B_4)$ is a trading transform, $A_i\prec T\preceq B_j$, and no two coalitions in the trading transform coincide. Let us write the characteristic vectors of $A_1$, $A_2$, $A_3$, $A_4$, $B_1$, $B_2$, $B_3$, $B_4$ as the rows of an $8\times n$ matrix $M$, respectively. Since $\preceq $ satisfies $CC_3$, by Lemma~\ref{oneless} we know that no two pairs $(A_i,B_a)$ and $(A_j,B_b)$ are compatible. The same can be said about the complementary pair of pairs $(A_k,B_c)$ and $(A_l,B_d)$, where $\{a,b,c,d\}=\{i,j,k,l\}=[4]$. We have \[ A_i \prec B_a,\text{ } A_j \prec B_b,\text{ } A_k \prec B_c,\text{ } A_l \prec B_d. \] Since $(A_i,B_a)$ and $(A_j,B_b)$ are not compatible, one of the following two statements is true: either there exists $x\in A_i\cap A_j$ such that $x\notin B_a\cup B_b$, or there exists $y\in B_a\cap B_b$ such that $y\notin A_i\cup A_j$. As $\mathcal T$ is a trading transform, in the first case we also have $x\in B_c\cap B_d$ and $x\notin A_k\cup A_l$; in the second, $y\in A_k\cap A_l$ and $y\notin B_c\cup B_d$. Thus in the first case the column $M_x$ of $M$ corresponding to $x$ belongs to $U$, and in the second case the column $M_y$ corresponding to $y$ belongs to $U$; moreover, the two possible columns are complementary, $M_x=\bar{M}_y$. In particular, if $(i,j,k,l)=(a,b,c,d)=(1,2,3,4)$, then the columns $M_x$ and $M_y$ will be as in the following picture \newline \hspace*{6.8cm} $x$\hspace{1.4cm}$y$ \[ M= \left[ \begin{array}{cc} \chi(A_1)\\ \chi(A_2)\\ \chi(A_3)\\ \chi(A_4)\\ \chi(B_1)\\ \chi(B_2)\\ \chi(B_3)\\ \chi(B_4) \end{array} \right] = \left[\begin{array}{ccccccccccccc} &&&&1&&&&0&&&&\\ &&&&1&&&&0&&&&\\ &&&&0&&&&1&&&&\\ &&&&0&&&&1&&&&\\ \hline &&&&0&&&&1&&&&\\ &&&&0&&&&1&&&&\\ &&&&1&&&&0&&&&\\ &&&&1&&&&0&&&& \end{array}\right] \] (we emphasize, however, that we have only one such column in the matrix, not both). We saw that one pairing of indices $(i,a), (j,b), (k,c), (l,d)$ gives us a column from one of the 18 pairs of $U$. It is easy to see that a vector from every pair of $U$ can be obtained by the appropriate choice of the pairing of indices. This means that the matrix contains at least 18 columns. That is, $n\ge 18$. \end{proof} While no initial segment complex on fewer than $18$ points can fail $CC_4^{*}$, there is such an example on $26$ points, which will show that the initial segment complexes strictly contain the threshold complexes. The next three sections are devoted to constructing such an example. The next section presents a general construction technique for producing almost representable qualitative probability orders from representable ones. This technique will be employed in section 5 to construct our example. Some of the proofs required will be done in section 6. \iffalse Lemma~\ref{oneless} can be easily generalised in the following way. \begin{lem} \label{manyless} Let $\preceq$ be a qualitative probability order on $2^{[n]}$, $T\subseteq [n]$, and let $\Delta=\Delta_n(\preceq, T)$ be the respective simplicial complex. Suppose $(\row Ak;\row Bk)$ is a trading transform and $A_i\prec T\preceq B_j$ for all $i,j\in [k]$. If any $m$ disjoint pairs are compatible, then $\preceq$ fails to satisfy $CC_{k-m}$. \end{lem} \color{red} Before showing that $CC^{*}_5$ holds for small values of $n$, we need to investigate how many identical coalitions we may have in a trading transform that violates $CC^{*}_5$.
More specifically we will prove that the trading transform $(\row A5,\row B5)$ where $A_i\prec T\preceq B_j$ for all $i,j\in [4]$ can contain two identical $A$'s or two identical $B$'s but not two identical $A$'s and $B$'s at the same time. \begin{lem}\label{norepet5} Let $\preceq$ be a qualitative probability order on $2^{[n]}$, $T\subseteq [n]$, and let $\Delta=\Delta_n(\preceq, T)$ be the respective initial segment. Suppose $(\row A5,\row B5)$ is a trading transform, and $A_i\prec T\preceq B_j$ for all $i,j\in [4]$, and pair $(A_i, B_j), (A_k, B_l)$ is not compatible for any $i,j,k,l \in [5], i \neq k$ and $j \neq l$. Then \[|\{\row A5, \row B5\}| \geq 9.\] \end{lem} \begin{proof} Note that $A_i \neq B_j$ for all $i,j \in [5]$ and \[|\{\row A5, \row B5\}| = |\{\row A5\}| + |\{\row B5\}|.\] Without loss of generality assume that at least three $A$'s are the same $A_1=A_2=A_3$. We know, that situation where $A_1=\cdots = A_5$ or $B_1= \cdots = B_5$ is impossible. Hence we can assume that $A_1 \neq A_4$ and $B_1 \neq B_2$. The pair $(A_1,B_1), (A_4, B_2)$ is not compatible, so either there is $x \in A_1 \cap A_4$ such that $x \notin B_1 \cup B_2$ or there is $y \in B_1 \cap B_2$ such that $y \notin A_1 \cup A_4 $. Consider the first situation (the second can be done in the same way). We have $x \in A_1 \cap A_4$ and $x$ can be found at least four time among $\row A5$. On the other hand $x \notin B_1 \cup B_2$ and we have at most three copies of $x$ among $\row B5$, a contradiction. Therefore \[|\{\row A5, \row B5\}| \geq 8.\] Suppose, to the contrary, that $|\{\row A5, \row B5\}| = 8$ or equivalently there are unique $i,j,k,l \in [5]$ such that $A_i = A_j$ and $B_k = B_l$. Without loss of generality assume that $i=k=1, j=l=2$. The pair $(A_1,B_1), (A_3, B_3)$ is not compatible and hence one of the following two statements is true: either there is $x \in A_1 \cap A_3$ such that $x \notin B_1 \cup B_3$ or there is $y \in B_1 \cap B_3$ such that $y \notin A_1 \cup A_3$. As before we can consider only the first case. Therefore $x \in A_1 \cap A_3$ and we can meet $x$ at least three times among $\row A5$. At the same time $x \notin B_1 \cap B_3$ and $x$ can be in at most two coalitions among $\row B5$. We have a contradiction. \end{proof} In the proof of Theorem~\ref{cc4} we exploit the fact, that there are no compatible pairs in the trading transform. However if $(\row A5; \row B5)$ shows the failure of $CC^{*}_5$ then we are allowed to have compatible pairs in this trading transform. Note that if we have more then one different compatible pairs then by Lemma~\ref{manyless} $\preceq$ fails to satisfy $CC_2$ or $CC_3$. Suppose our trading transform has a compatible pair $(A_i, B_k ), (A_j, B_l)$. Let \[\bar{A}_i = A_i \setminus B_k, \bar{A}_j = A_j \setminus B_l \text{ and } \bar{B}_k = B_k \setminus A_i, \bar{B}_l = B_l \setminus A_j.\] Then by Theorem~\ref{oneless} we have a trading transform $${T} =(\bar{A}_i \cup \bar{A}_j, A_{m_1}, A_{m_2}, A_{m_3}; \bar{B}_k \cup \bar{B}_l, B_{r_1}, B_{r_2}, B_{r_3}),$$ where $\bar{A}_i \cup \bar{A}_j \prec \bar{B}_k \cup \bar{B}_l$ and $A_{m_t} \prec T \preceq B_{r_s}$. Lets show that all coalitions on the right hand side of $ T$ are different, as well as on the left hand side. \begin{lem}\label{norepet4} Let $\preceq$ be a qualitative probability order on $2^{[n]}$. Suppose $(\row A4,\row B4)$ is a trading transform and $A_i\prec T\preceq B_j$ for all $i,j\in [3]$ and $A_4 \prec B_4$. 
Then \[|\{\row A4\}|=|\{\row B4\}| =4.\] \end{lem} \begin{proof} Clearly pairs $(A_i,B_j), (A_k, B_l)$ and $(A_4, B_4),(A_i,B_j)$ are not compatible for all $i,j,k,l \in [3]$. If at least one of them is compatible then after the suitable relabeling by Corollary~\ref{oneless_qp} we will have a failure of $CC_3$, which is impossible. Assume, to the contrary, that we have at least two identical coalitions among $A$'s and $B$'s. Without loss of generality we can consider two following cases: \begin{enumerate} \item $A_1=A_2$; \item $A_1=A_4$. \end{enumerate} By the same means as in the proofs of Lemma~\ref{norepet} and Lemma~\ref{norepet5} and using information about non-compatible pairs we can prove that these two cases are contradictory. \end{proof} \begin{thm}\label{cc*5} $CC_5^{*}$ holds for $\Delta=\Delta_n(\preceq, T)$ for all $n\le 8$. \end{thm} \begin{proof} Suppose that $(\row A5;\row B5)$ is a trading transform and $A_i\prec T\preceq B_j$ for all $i,j\in [5]$. Then by Lemma~\ref{norepet5} without loss of generality it is enough to consider the following cases: \begin{enumerate} \item there are no compatible pairs and all $A$'s and $B$'s are different; \item there are no compatible pairs and exactly two coalitions coincide $A_1=A_2$; \item there is compatible pair $(A_4,B_4), (A_5, B_5)$. \end{enumerate} Let us consider the set of column vectors \begin{equation} \label{1000vectors} U=\{ {\bf x}\in \mathbb{R}^{10}\mid x_i\in \{0,1\}\ \text{and}\ x_1+\cdots+x_5=x_6+\cdots+x_{10}=k, k \in \{2,3\}\}. \end{equation} This set has an involution ${\bf x}\mapsto {\bf \bar{x}}$, where $\bar{x}_i=1-x_i$. Say, if ${\bf x}=(1,1,0,0,0,0,0,0,1,1)^T$, then ${\bf \bar{x}}=(0,0,1,1,1,1,1,1,0,0)^T$. There are 1000 vectors from $U$ which are split into 100 pairs $\{{\bf x}, {\bf \bar{x}}\}$. Let us write the characteristic vectors of $\row A5 , \row B5$ as rows of $10 \times n$ matrix $M$, respectively. Every vector ${\bf x} \in U$ shows that six different pairs are not compatible if $\bf x$ is a column of $M$. For example, if ${\bf x}=(1,1,0,0,0,0,0,0,1,1)^T$ then pairs $(A_1,B_1),(A_2, B_2);$ $(A_1,B_1),(A_2, B_3);$ $(A_1,B_2),$ $(A_2, B_3);$ $(A_3,B_4),(A_4, B_5);$ $(A_3,B_4),(A_5, B_5);$ $(A_4,B_4),(A_5, B_5)$ are not compatible. One can see that any vector from $U$ shows the biggest possible number of non-compatible pairs. {\it Case (1).} There are no compatible pairs and all $A$'s and $B$'s are different. Therefore we need to show that ${5\choose 2}\cdot {5\choose 2} = 100$ pair are not compatible. To achieve the smallest number of columns in $M$ we clearly need to add columns that shows the biggest possible number of non-compatible pairs, i.e. vectors of $U$. Assume that every new vector ${\bf x } \in U$ cancels out new 6 non compatible pairs. Hence we need at least $\lceil \frac{100}{6} \rceil = 17$ columns in $M$ or equivalently $n \geq 17$. {\it Case (2).} There are no compatible pairs and exactly two coalitions coincide $A_1=A_2$. We know a pair $(A_1, B_i),(A_3,B_j)$ is not compatible for every $i,j \in [5]$. To cancel out all such pairs we need 10 vectors from $U$, because every vector ${\bf x} \in U$ shows that $(A_1, B_i),(A_3,B_j)$ is not compatible only for the one pair of indexes $i,j$. Hence we need at least 10 columns in $M$ to cover pairs $(A_1, B_i),(A_3,B_j)$ for all $i,j \in [5]$. Moreover pair $(A_1, B_i),(A_4,B_j)$ is not compatible for every $i,j \in [5]$. Note, that in all our ten columns of $M$ every $x \in [10]$ belongs to exactly one coalition $A_1$ or $A_4$. 
Therefore none of the existing 10 columns of $M$ can show that $(A_1, B_i),(A_4,B_j)$ is not compatible for some $i,j$. As before we need at least 10 more vectors of $U$ to cancel out all non compatible pairs $(A_1, B_i),(A_4,B_j)$ for all $i,j \in [5]$. Hence $M$ has at least 20 columns or equivalently $n \geq 20$. {\it Case (3).} There is one compatible pair $(A_4,B_4), (A_5, B_5)$. Let \[\bar{A}_4 = A_4 \setminus B_4, \bar{A}_5 = A_5 \setminus B_5 \text{ and } \bar{B}_4 = B_4 \setminus A_4, \bar{B}_5 = B_5 \setminus A_5.\] By Lemma~\ref{oneless} the trading transform \[T=({A}_1, A_2, A_3, \bar{A}_4 \cup \bar{A}_5; B_1, B_2, B_3, \bar{B}_4 \cup \bar{B}_5),\] shows the failure of $CC_4$, where $A_i \prec B_j$ for every $i,j \in [3]$ and $\bar{A}_4 \cup \bar{A}_5 \prec \bar{B}_4 \cup \bar{B}_5$. By Lemma~\ref{norepet4} all $$|\{{A}_1, A_2, A_3, \bar{A}_4 \cup \bar{A}_5\}| = |\{B_1, B_2, B_3, \bar{B}_4 \cup \bar{B}_5\}| =4.$$ Denote $\bar{A}_4 \cup \bar{A}_5$ and $\bar{B}_4 \cup \bar{B}_5$ as $A'_4$ and $B'_4$ respectivly. As we know pairs $(A_i,B_j), (A_k, B_l)$ and $(A'_4, B'_4),(A_i,B_j)$ are not compatible for all $i,j,k,l \in [3]$. If at least one of them is compatible then after the suitable relabeling by Corollary~\ref{oneless_qp} we will have a failure of $CC_3$, which is impossible. Note that ${3\choose 2}^2 + 3^2=18$ is the smallest number of non compatible pairs, that can be used to reduce $CC_4$ to $CC_3$ by Corollary~\ref{oneless_qp}. To see it we consider the following situation: \[B'_4 \prec A_i \text{ and } B'_4 \prec A'_4 \setminus B'_4 \cup A_i \setminus B_j \text{ for } i,j \in [3]. \] Let $(A'_4, B_i), (A_j, B_k)$ be compatible pair for some $i,j,k \in [3]$ and \[\bar{A}'_4 = A'_4 \setminus B'_4, \bar{A}_i = A_i \setminus B_j \text{ and } \bar{B}'_4 = B'_4 \setminus A'_4, \bar{B}_j = B_j \setminus A_i.\] Then $$(\bar{A}'_4 \cup \bar{A}_j, A_{m_1}, A_{m_2}; \bar{B}_i \cup \bar{B}_k, B_{r}, B'_{4})$$ is a trading transform, but it doesn't show the failure of $CC_3$. More precisely, $B'_4 \prec A_{m_s}$ for all $i \in [2]$ and $B'_4 \prec \bar{A}'_4 \cup \bar{A}_i$, however either $A_{m_s} \prec B_{r}$ for all $s \in [2]$ and $\bar{A}'_4 \cup \bar{A}_i \prec B_{r}$ or $ A_{m_s} \prec \bar{B}_i \cup \bar{B}_k$ for all $s \in [2]$ and $ \bar{A}'_4 \cup \bar{A}_i \prec \bar{B}_i \cup \bar{B}_k$. It means there is no relabeling of coalition $\{\bar{A}'_4 \cup \bar{A}_j, A_{m_1}, A_{m_2}\} = \{X_1,X_2,X_3\}$ and $\{\bar{B}_i \cup \bar{B}_k, B_{r}, B'_{4}\} = \{Y_1, Y_2 Y_3\}$, such that for the trading transform $(X_1,X_2,X_3; Y_1, Y_2 Y_3)$ one of the two following statements holds: $X_s \preceq Y_s$ or $Y_s \preceq X_s$ for every $s \in [3]$. Hence even if all pairs $(A'_4, B_i), (A_j, B_k)$ are compatible for all $i,j,k \in [3]$ it will not lead to the failure of $CC_3$. Let $(A_i, B_j), (A_k, B'_4)$ be compatible pair for some $i,j,k \in [3]$. Then $B'_4 \prec A_i, A_k$ and $A_i, A_k \prec B_j$. We can not predict the order of $(A_i \setminus B_j ) \cup (A_k \setminus B'_4)$ and $(B_j \setminus A_i ) \cup (B'_4 \setminus A_k)$. Hence even if $(A_i, B_j), (A_k, B'_4)$ is compatible pair there is no guarantee that $T$ could be reduced to a trading transform which shows the failure of $CC_3$. Therefore 18 is the smallest ``realizable number'' of non compatible pairs, that can be used to reduce $CC_4$ to $CC_3$ by Corollary~\ref{oneless_qp}. 
Consider the set of column vectors \begin{equation*} V=\{ {\bf x}\in \mathbb{R}^8\mid x_i\in \{0,1\}\ \text{and}\ x_1+x_2+x_3+x_4=x_5+x_6+x_7+x_8=2\}. \end{equation*} By the proof of Theorem~\ref{cc4} there are 36 vectors from $V$ which are split into 18 pairs $\{{\bf x}, {\bf \bar{x}}\}$. Let us write the characteristic vectors of $A_1, A_2, A_3, A'_4, B_1, B_2, B_3, B'_4$ as rows of $8 \times n$ matrix $N$, respectively. From above we know pairs $(A_1,B_i), (A_2, B_j)$ are not compatible for every $i,j \in [3]$. Note that every vector ${\bf x} \in V, x_1+x_2 \in \{0,2\}$ as a column of $N$ shows that $(A_1,B_i), (A_2, B_j)$ is not compatible for the only one pair $i,j \in [3]$. To cover all pairs $(A_1,B_i), (A_2, B_j)$ we need at least three different vectors of $V$ as columns of $N$. Clearly $x_1 + x_3 = x_2 + x_3 =1$ in every vector out of those three. Hence in order to show that pairs $(A_1,B_i), (A_3, B_j)$ and $(A_2,B_i), (A_3, B_j)$ are not compatible for every $i,j \in [3]$ we need at least six more different vectors of $V$. Pairs $(A'_4, B'_4),(A_i,B_j)$ are not compatible for all $i,j \in [3]$. However in the nine columns of $N$ we have already showed that all of them are not compatible, because if $(A_i,B_j), (A_k, B_l)$ is not compatible then $(A'_4, B'_4),(A_{s}, B_t)$ is not compatible as well for all $\{i,k,s\}=\{j,l,t\}=[3]$. \end{proof} \color{blue}a pair of disjoint pairs \[ \left[ \begin{array}{c} A_i\prec B_a\\A_j\prec B_b\\\end{array} \right. \qquad\qquad \left[ \begin{array}{c} A_k\prec B_c\\A_l\prec B_d\\\end{array} \right. \] can be chosen in $\frac{1}{2}{5\choose 2}^2\cdot {3\choose 2}^2=450$ ways. Note that we do not distinguish between \[ \left[ \begin{array}{c} A_i\prec B_a\\A_j\prec B_b\\\end{array} \right. \qquad \text{and}\qquad \left[ \begin{array}{c} A_i\prec B_b\\A_j\prec B_a\\\end{array} \right. \] as they are either both compatible or none of them. As in the proof of Theorem 1 we create a $10\times n$ columns we should consider are those which contain three ones among the first five coordinates and three ones among the remaining coordinates. Any such column prevent 18 pairs of pairs to be both compatible. To prevent all we need at least $450/18=25$ columns. \noindent{\bf Note.} This might also work to prove that $CC_6^{*}$ is satisfied for some $n\le n_0$ for some $n_0$ but it will certainly not work for $CC_7^{*}$. \fi \color{black} \section{Constructing almost representable orders from nonlinear representable ones} \label{almost} Our approach to finding an initial segment complex that is not threshold will be to start with a non-linear representable qualitative probability order and then perturb it so as to produce an almost representable order. By judicious breaking of ties in this new order we will be able to produce an initial segment that violates $CC_4^{*}$. The language of discrete cones will be helpful, and we begin with a technical result that will be needed in the construction. \begin{prop} Let $\preceq $ be a non-representable but almost representable qualitative probability order which almost agrees with a probability measure ${\bf p}$. Suppose that the $m$th cancellation condition $CC_m$ is violated, and that for some non-zero vectors $\{\brow xm\}\subseteq C(\preceq )$ the condition (\ref{axm}) holds, i.e., ${\bf x}_1 + \cdots + {\bf x}_m ={\bf 0}$ and $-{\bf x}_i \notin C(\preceq)$ for at least one $i \in [m]$. Then all of the vectors $\brow xm$ lie in the hyperplane $H_{\bf p}$.
\end{prop} \begin{proof} First note that for every ${\bf x}\in C(\preceq )$ which does not belong to $H_{\bf p}$, we have $({\bf p},{\bf x})>0$. Hence the condition (\ref{axm}) can hold only when all ${\bf x}_i\in H_{\bf p}$. \end{proof} We need to understand how we can construct new qualitative probability orders from old ones so we need the following investigation. Let $\preceq $ be a representable but not linear qualitative probability order which agrees with a probability measure ${\bf p}$. Let $S(\preceq )$ be the set of all vectors of $C(\preceq )$ which lie in the corresponding hyperplane $H_{\bf p}$. Clearly, if ${\bf x}\in S(\preceq )$, then $-{\bf x}$ is a vector of $S(\preceq )$ as well. Since in the definition of discrete cone it is sufficient that only one of these vectors is in $C(\preceq )$ we may try to remove one of them in order to obtain a new qualitative probability order. The new order will almost agree with ${\bf p}$ and hence will be at least almost representable. The big question is: what are the conditions under which a set of vectors can be removed from $S(\preceq )$? What can prevent us from removing a vector from $S(\preceq )$? Intuitively, we cannot remove a vector if the set comparison corresponding to it is a consequence of those remaining. We need to consider what a consequence means formally. There are two ways in which one set comparison might imply another one. The first way is by means of the de Finetti condition. This however is already built in the definition of the discrete cone as $\chi(A,B)=\chi(A\cup C,B\cup C)$. Another way in which a comparison may be implied from two other is transitivity. This has a nice algebraic characterisation. Indeed, if $C\prec B\prec A$, then $\chi(A,C)=\chi(A,B)+\chi(B,C)$. This leads us to the following definition. Following \cite{CCS} let us define a restricted sum for vectors in a discrete cone ${C}$. Let ${\bf u},{\bf v}\in {C}$. Then \[ {\bf u}\oplus {\bf v}= \left\{ \begin{array}{cl} {\bf u}+{\bf v}& \text{if ${\bf u}+{\bf v}\in T^n $}, \\\ \text{undefined} & \text{if ${\bf u}+{\bf v}\notin T^n $}. \end{array} \right. \] It was shown in \cite[Lemma~2.1]{PF1} that the transitivity of a qualitative probability order is equivalent to closedness of its corresponding discrete cone with respect to the restricted addition (without formally defining the latter). The axiom D3 of the discrete cone can be rewritten as \begin{enumerate} \item[{\rm D3.}] ${\bf x}\oplus {\bf y}\in C$ whenever ${\bf x},{\bf y}\in C$ and ${\bf x}\oplus {\bf y}$ is defined. \end{enumerate} Note that a restricted sum is not associative. \begin{thm}[Construction method] \label{constr} Let $\preceq $ be a representable non-linear qualitative probability order which agrees with the probability measure ${\bf p}$. Let $S(\preceq )$ be the set of all vectors of $C(\preceq )$ which lie in the hyperplane $H_{\bf p}$. Let $X$ be a subset of $S(\preceq )$ such that \begin{itemize} \item $X\cap \{{\bf s},-{\bf s}\}\ne\emptyset$ for every ${\bf s}\in S(\preceq )$. \item $X$ is closed under the operation of restricted sum. \end{itemize} Then $Y=S(\preceq )\setminus X$ may be dropped from $C(\preceq )$, that is $C_Y=C(\preceq )\setminus Y$ is a discrete cone. \end{thm} \begin{proof} We first note that if ${\bf x} \in C(\preceq) \setminus S(\preceq)$ and ${\bf y} \in C(\preceq)$, then ${\bf x} \oplus {\bf y}$, if defined, cannot be in $S(\preceq)$. So due to closedness of $X$ under the restricted addition all axioms of a discrete cone are satisfied for $C_Y$. 
On the other hand, if for some two vectors ${\bf x},{\bf y}\in X$ we had ${\bf x}\oplus {\bf y}\in Y$, then $C_Y$ would not be a discrete cone and we would not be able to construct a qualitative probability order associated with this set. \end{proof} \begin{exmp}[Positive example] \label{ex4} The probability measure \[ {\bf p}=\frac{1}{28}(10,7,5,4,2) \] defines a qualitative probability order $\preceq $ on $[5]$ (which is better written from the other end): \[ \emptyset\prec 5\prec 4\prec 3\prec 45\prec 35 \sim 2\prec 25\sim 34\prec 1 \prec 345\sim 24\prec 23\sim 15\prec 245\prec 14\sim 235\prec\cdots . \] (Here only the first 17 terms are shown, since the remaining ones can be uniquely reconstructed. See \cite[Proposition~1]{KPS} for details). There are only four basic equivalences here \[ \ 35 \sim 2,\ \ 25 \sim 34,\ \ 23\sim 15\ \ \hbox{and} \ \ 14\sim 235, \] and all others follow from them, namely: \begin{align*} & 35 \sim 2\ \text{implies}\ 345\sim 24,\ 135\sim 12,\ 1345\sim 124;\\ & 25 \sim 34\ \text{implies}\ 125 \sim 134;\\ & 23\sim 15\ \text{implies}\ 234 \sim 145;\\ & 14\sim 235\ \text{has no consequences} \end{align*} Let ${\bf u}_1=\chi(2,35)=(0,1,-1,0,-1)$, ${\bf u}_2=\chi(34,25)=(0,-1,1,1,-1)$, ${\bf u}_3=\chi(15,23)=(1,-1,-1,0,1)$ and ${\bf u}_4=\chi(235,14)=(-1,1,1,-1,1)$. Then \[ S(\preceq)=\{\pm{\bf u}_1, \pm{\bf u}_2, \pm{\bf u}_3, \pm{\bf u}_4\} \] and $X=\{{\bf u}_1, {\bf u}_2, {\bf u}_3, {\bf u}_4\}$ is closed under the restricted addition, as ${\bf u}_i\oplus {\bf u}_j$ is undefined for all $i\ne j$. Note that ${\bf u}_i\oplus (-{\bf u}_j)$ is also undefined for all $i\ne j$. Hence we can subtract from the cone $C(\preceq)$ any non-empty subset $Y$ of $-X=\{-{\bf u}_1, -{\bf u}_2, -{\bf u}_3, -{\bf u}_4\}$ and still get a qualitative probability order. Since \[ {\bf u}_1+{\bf u}_2+{\bf u}_3+{\bf u}_4={\bf 0}, \] it will not be representable. The new order corresponding to the discrete cone $C_{-X}$ is linear. \end{exmp} \begin{exmp}[Negative example] \label{ex5} A certain qualitative probability order is associated with the Gabelman game of order 3. Nine players are involved, each of whom we think of as associated with a certain cell of a $3\times 3$ square: \begin{center} \begin{tabular}{|c|c|c|} \hline 1 & 2 & 3\\ \hline 4 & 5 & 6\\ \hline 7 & 8 & 9\\ \hline \end{tabular} \end{center} The $i$th player is given a positive weight $w_i$, $i=1,2,\ldots, 9$, such that in the qualitative probability order associated with ${\bf w}=(\row w9)$, \[ 147\sim 258\sim 369 \sim 123\sim 456\sim 789. \] Suppose that we want to construct a qualitative probability order $\preceq $ for which \[ 147\sim 258\sim 369 \prec 123\sim 456\sim 789 . \] Then we would like to claim that it is not weighted since for the vectors \begin{align*} {\bf x}_1&=(0,1,1,-1,0,0,-1,0,0)=\chi(123,147),\\ {\bf x}_2&=(0,-1,0,1,0,1,0,-1,0)=\chi(456,258),\\ {\bf x}_3&=(0,0,-1,0,0,-1,1,1,0)=\chi(789,369) \end{align*} we have ${\bf x}_1+{\bf x}_2+{\bf x}_3={\bf 0}$. Putting the sign $\prec $ instead of $\sim$ between $369$ and $123$ will also automatically imply $147\prec 123$, $258\prec 456$ and $369\prec 789$. This means that we are dropping the set of vectors $\{-{\bf x}_1, -{\bf x}_2, -{\bf x}_3\}$ from the cone while leaving the set $\{{\bf x}_1, {\bf x}_2, {\bf x}_3\}$ there. This would not be possible since ${\bf x}_1\oplus {\bf x}_2=-{\bf x}_3$. So every $X \supseteq \{{\bf x}_1, {\bf x}_2, {\bf x}_3\}$ with $X \cap \{-{\bf x}_1, -{\bf x}_2, -{\bf x}_3\} = \emptyset$ is not closed under $\oplus$.
\end{exmp} \section{An example of a nonthreshold initial segment of a linear qualitative probability order} In this section we shall construct an almost representable linear qualitative probability order $\sqsubseteq $ on $2^{[26]}$ and a subset $T\subseteq [26]$, such that the initial segment $\Delta(\sqsubseteq,T)$ of $\sqsubseteq $ is not a threshold complex as it fails to satisfy the condition $CC^{*}_4$. The idea of the example is as follows. We will start with a representable linear qualitative probability order $\preceq$ on $[18]$ defined by weights $\row w{18}$ and extend it to a representable but nonlinear qualitative probability order $\preceq' $ on $[26]$ with weights $\row w{26}$. A distinctive feature of $\preceq'$ will be the existence of eight sets $A'_1, \ldots, A'_4$, $B'_1, \ldots, B'_4$ in $[26]$ such that: \begin{enumerate} \item The sequence $(A'_1, \ldots, A'_4;B'_1, \ldots, B'_4)$ is a trading transform. \item The sets $A'_1, \ldots, A'_4$, $B'_1, \ldots, B'_4$ are tied in $\preceq'$, that is, \[ A'_1\sim' \ldots A'_4\sim' B'_1\sim' \ldots \sim' B'_4. \] \item If any two distinct sets $X,Y \subseteq [26]$ are tied in $\preceq'$, then $\chi(X,Y) =\chi (S,T)$, where $S,T \in \{A'_1, \ldots, A'_4,B'_1, \ldots, B'_4\}$. In other words all equivalences in $\preceq'$ are consequences of $A'_i\sim' A'_j$, $A'_i\sim' B'_j$, $B'_i\sim' B'_j$, where $i,j \in [4]$. \end{enumerate} Then we will use Theorem~\ref{constr} to untie the eight sets and to construct a comparative probability order $\sqsubseteq $ for which \begin{equation*} A'_1 \sqsubset A'_2 \sqsubset A'_3 \sqsubset A'_4 \sqsubset B'_1 \sqsubset B'_2 \sqsubset B'_3 \sqsubset B'_4, \end{equation*} where $X \sqsubset Y$ means that $X \sqsubseteq Y$ is true but not $Y \sqsubseteq X$. This will give us an initial segment $\Delta(\sqsubseteq , B'_1)$ of the linear qualitative probability order $\sqsubseteq $, which is not threshold since $CC^{*}_4$ fails to hold. \par Let $\preceq$ be a representable linear qualitative probability order on $2^{[18]}$ with weights $w_1, \ldots , w_{18}$ that are linearly independent (over $\mathbb{Z}$) real numbers in the interval~$[0, 1]$. Due to the choice of weights, no two distinct subsets $X, Y \subseteq [18]$ have equal weights relative to this system of weights, i.e., \[ X\ne Y \Longrightarrow w(X)=\sum_{i\in X}w_i \ne w(Y)=\sum_{i\in Y}w_i. \] Let us consider again the set $U$ defined in (\ref{36vectors}). Let $M$ be a subset of $U$ with the following properties: $|M|=18$ and ${\bf x}\in M$ if and only if ${\bf \bar{x}}\notin M$. In other words $M$ contains exactly one vector from every pair into which $U$ is split. By $M$ we will also denote an $8\times18$ matrix whose columns are all the vectors from $M$ taken in arbitrary order. By $A_1,\ldots, A_4, B_1, \ldots, B_4 $ we denote the sets with characteristic vectors equal to the rows $\row M8$ of $M$, respectively. The way $M$ was constructed secures that the following lemma is true. \begin{lem} \label{propertiesofM} The subsets $A_1,\ldots, A_4, B_1, \ldots, B_4$ s of $[18]$ satisfy: \begin{enumerate} \item $(A_1, \ldots , A_4; B_1 , \ldots , B_4 )$ is a trading transform; \item for any choice of $i,k,j,m \in [4]$ with $i \neq k$ and $ j \neq m$ the pair $(A_i, B_j), (A_k, B_m)$ is not compatible. 
\end{enumerate} \end{lem} We shall now embed $A_1, \ldots, A_4, B_1 , \ldots, B_4$ into $[26]$ and add new elements to them, forming $A'_1, \ldots, A'_4,$ $ B'_1, \ldots, B'_4$ in such a way that the characteristic vectors $\chi(A'_1) ,\ldots, \chi(A'_4),$ $\chi(B'_1),\ldots, \chi(B'_4)$ are the rows $M_1',\ldots,M_8'$ of the following matrix \begin{equation} \label{matrix} M'=\quad \kbordermatrix{ & 1 \ldots 18 & \vrule &19 \text{ } \text{ } 20 \text{ } \text{ } 21 \text{ } \text{ } 22 & \vrule& 23 \text{ } \text{ } 24 \text{ } \text{ } 25 \text{ } \text{ } 26 \\ &\begin{array}{c}\chi(A_1) \\ \chi(A_2) \\ \chi(A_3) \\ \chi(A_4)\\ \end{array} & \vrule & I & \vrule& I \\ \hline &\begin{array}{c}\chi(B_1) \\ \chi(B_2) \\ \chi(B_3) \\ \chi(B_4) \end{array} & \vrule & J & \vrule& I } , \end{equation} respectively. Here $I$ is the $4 \times 4$ identity matrix and $$J=\left( \begin{array}{cccc} 0 & 0& 0& 1 \\ 1 & 0& 0& 0 \\ 0 & 1& 0& 0 \\ 0 & 0& 1& 0 \end{array} \right).$$ Note that if $X\subseteq [18]$, then also $X\subseteq [26]$, so the notation $\chi (X)$ is ambiguous: it may denote a vector from ${\mathbb Z}^{18}$ or from ${\mathbb Z}^{26}$, depending on the circumstances. However, the reference set will always be clear from the context and the use of this notation will create no confusion. One can see that $(A'_1, \ldots,A'_4;B'_1,\ldots,B'_4)$ is again a trading transform and there are no compatible pairs $(A'_i , B'_j ), (A'_k , B'_m)$, where $i,k,j,m \in [4]$ and $i \neq k$ or $j \neq m$. We shall now choose weights $w_{19}, \ldots, w_{26}$ of the new elements $19, \ldots, 26$ in such a way that the sets $A'_1, A'_2, A'_3, A'_4, B'_1, B'_2, B'_3, B'_4$ all have the same weight $N$, which is a sufficiently large number. It will be clear from the proof how large it should be. To find weights $w_{19}, \ldots, w_{26}$ that satisfy this condition we need to solve the following system of linear equations \begin{equation} \label{vesa} \left( \begin{array}{cc} I &I \\ J & I \end{array} \right) \left( \begin{array}{c} w_{19} \\ \vdots \\ w_{26} \end{array} \right)=N\textbf{1}- M\cdot {\bf w}, \end{equation} where $\textbf{1}=(1,\ldots, 1)^T \in {\mathbb R}^8$ and ${\bf w}=(w_1, \ldots, w_{18})^T \in \mathbb{R}^{18}$. The matrix from (\ref{vesa}) has rank~$7$, and the augmented matrix of the system has the same rank. Therefore, the solution set is not empty; moreover, there is one free variable (and any one can be chosen for this role). Let this free variable be $w_{26}$ and let us give it the value $K$, such that $K$ is large but much smaller than $N$. In particular, $126 < K < N-126$. Now we can express all other weights $w_{19}, \ldots, w_{25}$ in terms of $w_{26} = K$ as follows: \begin{equation}\label{urav} \begin{split} w_{19} = N - &K -(\chi(A_4) - \chi(B_1)+ \chi(A_1)) \cdot {\bf w} \\ w_{20} = N - & K -(\chi(A_4) - \chi(B_1) + \chi(A_1) - \chi(B_2) + \chi(A_2) ) \cdot {\bf w} \\ w_{21} = N - & K -(\chi(A_4) - \chi(B_1) + \chi(A_1)- \chi(B_2) + \chi(A_2) -\\ & \chi(B_3)+ \chi(A_3))\cdot {\bf w} \\ w_{22} = N - &K -\chi(A_4) \cdot {\bf w} \\ w_{23} = K -&(- \chi(A_4)+\chi(B_1) ) \cdot {\bf w}\\ w_{24} = K -&( - \chi(A_4)+\chi(B_1) - \chi(A_1)+ \chi(B_2) ) \cdot {\bf w} \\ w_{25} = K -&(- \chi(A_4) +\chi(B_1) - \chi(A_1) + \chi(B_2) - \chi(A_2) + \chi(B_3) )\cdot {\bf w}. \end{split} \end{equation} By the choice of $N$ and $K$, the weights $w_{19}, \ldots, w_{25}$ are positive. Indeed, all ``small'' terms on the right-hand side of~(\ref{urav}) are strictly less than $7 \cdot 18=126<\min\{K,N-K\}$ in absolute value.
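The weight computation above can be sanity-checked numerically. The sketch below (our own illustrative code; the particular choices of $M$, ${\bf w}$, $N$ and $K$ are ours and constitute only one admissible option) builds $M$ by taking from each complementary pair of $U$ the vector whose first coordinate equals $1$, evaluates~(\ref{urav}), and confirms that all eight sets $A'_1,\ldots,B'_4$ receive weight $N$ and that the new weights are positive:
\begin{verbatim}
from fractions import Fraction
from itertools import combinations

# Columns of M: one vector from each complementary pair of U,
# namely those whose first coordinate equals 1.
cols = []
for top in combinations(range(4), 2):
    if 0 not in top:
        continue
    for bot in combinations(range(4, 8), 2):
        cols.append([1 if i in top or i in bot else 0 for i in range(8)])
assert len(cols) == 18

# A_1..A_4 and B_1..B_4 as subsets of [18] (the rows of M).
A = [{p + 1 for p, c in enumerate(cols) if c[r]} for r in range(4)]
B = [{p + 1 for p, c in enumerate(cols) if c[r]} for r in range(4, 8)]

# Rational weights below 1 (rationals are not linearly independent over Q;
# this only affects genericity, not the weight identity checked here).
primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61]
w = {p: Fraction(q, 100) for p, q in zip(range(1, 19), primes)}

def wt(S):
    return sum(w[p] for p in S)

N, K = Fraction(1000), Fraction(200)          # 126 < K < N - 126
w[26] = K
w[22] = N - K - wt(A[3])
w[23] = K - (wt(B[0]) - wt(A[3]))
w[19] = N - K - (wt(A[3]) - wt(B[0]) + wt(A[0]))
w[24] = K - (wt(B[0]) - wt(A[3]) + wt(B[1]) - wt(A[0]))
w[20] = N - K - (wt(A[3]) - wt(B[0]) + wt(A[0]) - wt(B[1]) + wt(A[1]))
w[25] = K - (wt(B[0]) - wt(A[3]) + wt(B[1]) - wt(A[0]) + wt(B[2]) - wt(A[1]))
w[21] = N - K - (wt(A[3]) - wt(B[0]) + wt(A[0]) - wt(B[1]) + wt(A[1])
                 - wt(B[2]) + wt(A[2]))

# A'_i gains 18+i and 22+i;  B'_j gains a column from 19..22 according to J.
Ap = [A[i] | {19 + i, 23 + i} for i in range(4)]
Bp = [B[0] | {22, 23}, B[1] | {19, 24}, B[2] | {20, 25}, B[3] | {21, 26}]

assert all(wt(S) == N for S in Ap + Bp)       # all eight sets have weight N
assert all(w[p] > 0 for p in range(19, 27))   # the new weights are positive
print("construction verified")
\end{verbatim}
Running the sketch with other admissible choices of $M$ and ${\bf w}$ gives the same conclusion, reflecting the fact that only the trading-transform property of $(A_1,\ldots,A_4;B_1,\ldots,B_4)$ is used in the verification.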
Let $\preceq'$ be the representable qualitative probability order on $[26]$ defined by the weight vector ${\bf w}'=(w_1, \ldots, w_{26})$. Using $\preceq'$ we would like to construct a linear qualitative probability order $\sqsubseteq$ on $2^{{[26]}}$ that ranks the subsets $A_i'$ and $B_j'$ in the sequence \begin{equation} \label{eightstrict} A'_1 \sqsubset A'_2 \sqsubset A'_3 \sqsubset A'_4 \sqsubset B'_1 \sqsubset B'_2 \sqsubset B'_3 \sqsubset B'_4. \end{equation} \par We will make use of Theorem~\ref{constr} now. Let $H_{{\bf w}'}=\{x \in {\mathbb R}^n | ({\bf w}',x)=0\}$ be the hyperplane with the normal vector ${\bf w}'$ and $S(\preceq')$ be the set of all vectors of the respective discrete cone $C(\preceq')$ that lie in $H_{{\bf w}'}$. Suppose \[ X'= \{ \chi(C,D) \mid C, D \in \{A'_1, \ldots, A'_4, B'_1, \ldots, B'_4\}\ \text{and $D$ earlier than $C$ in (\ref{eightstrict})}\}. \] This is a subset of $T^{26}$, where $T=\{-1,0,1\}$. Let also $Y'=S(\preceq')\setminus X'$. To use Theorem~\ref{constr} with the goal to achieve (\ref{eightstrict}) we need to show, that \begin{itemize} \item $S(\preceq') = X' \cup -X'$ and \item $X'$ is closed under the operation of restricted sum. \end{itemize} If we could prove this, then $C(\sqsubseteq) = C(\preceq') \setminus Y'$ is a discrete cone of a linear qualitative probability order $\sqsubseteq $ on $[26]$ satisfying (\ref{eightstrict}). Then the initial segment $\Delta(\sqsubseteq, B'_1)$ will not be a threshold complex, because the condition $CC_4^{*}$ will fail for it. \par Let $Y$ be one of the sets $A_1, A_2, A_3, A_4, B_1, B_2, B_3, B_4$. By $\breve{Y}$ we will denote the corresponding superset of $Y$ from the set $\{A'_1, A'_2, A'_3, A'_4, B'_1, B'_2, B'_3, B'_4\} $. \begin{prop} \label{closedZ} The subset \[ X= \{ \chi(C,D) \mid C,D \in \{A_1, \ldots, A_4, B_1, \ldots, B_4\}\ \text{with $ \breve{D}$ earlier than $\breve{C}$ in (\ref{eightstrict})}\}. \] of $T^{18}$ is closed under the operation of restricted sum. \end{prop} \begin{proof} Let ${\bf u}$ and ${\bf v}$ be any two vectors in $X$. As we will see the restricted sum ${\bf u} \oplus {\bf v}$ is almost always undefined. Without loss of generality we can consider only five cases.\par {\bf Case 1.} ${\bf u}= \chi (B_i, A_j)$ and ${\bf v}=\chi (B_k, A_m)$, where $i \neq k$ and $j \neq m$. In this case by Lemma~\ref{propertiesofM} the pairs $(B_i, A_j)$ and $ (B_k, A_m)$ are not compatible. It means that there exists $p \in [18]$ such that either $p \in B_i \cap B_k$ and $p \notin A_j \cup A_m$ or $p \in A_j \cap A_m$ and $p \notin B_i \cup B_k$. The vector ${\bf u} + {\bf v}$ has $2$ or $-2$ at $p$th position and ${\bf u} \oplus {\bf v}$ is undefined. This is illustrated in the table below:\par \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline & $\chi(B_i)$ & $\chi (B_k)$ & $\chi(A_j)$ & $ \chi(A_m)$ & $\chi (B_i, A_j)$ & $\chi (B_k, A_m)$ & ${\bf u}+{\bf v}$\\ \hline $p$th & 1 & 1& 0& 0 & 1& 1 & 2\\ coordinate & 0 & 0& 1& 1 & -1& -1 & -2\\ \hline \end{tabular} {\bf Case 2.} ${\bf u}= \chi (B_i, A_j)$, ${\bf v}=\chi (B_i, A_m)$ or ${\bf u}=\chi (B_j, A_i)$, $ {\bf v}=\chi (B_m,A_i)$, where $j \neq m$. In this case choose $k \in [4]\setminus \{i\}$. Then the pairs $(B_i, A_j)$ and $(B_k, A_m)$ are not compatible. As above, the vector $\chi (B_i, A_j)+\chi(B_k,A_m)$ has $2$ or $-2$ at some position~$p $. Suppose $p \in B_i \cap B_k$ and $p \notin A_j \cup A_m$. 
Then $B_i$ has a $1$ in $p$th position and each of the vectors $\chi (B_i, A_j)$ and $\chi (B_i, A_m)$ has a $1$ in $p$th position as well. Therefore, ${\bf u} \oplus {\bf v}$ is undefined because ${\bf u} + {\bf v}$ has $2$ in $p$th position. Similarly, in the case when $p \in A_j \cap A_m$ and $p \notin B_i \cup B_k$ the $p$th coordinate of ${\bf u} + {\bf v}$ is $-2$. The case when ${\bf u}=\chi (B_j, A_i)$ and ${\bf v}=\chi (B_m,A_i)$ is similar.\par {\bf Case 3.} ${\bf u}= \chi (B_i, B_j)$, ${\bf v}=\chi (B_k, B_m)$ or ${\bf u}=\chi (A_i, A_j)$, ${\bf v}=\chi (A_k,A_m)$, where $\{i,j,k,m\}=[4]$. By construction of $M$ there exists $p\in [18]$ such that $p\in B_i\cap B_k$ and $p\notin B_j\cup B_m$ or $p\notin B_i\cup B_k$ and $p\in B_j\cap B_m$. So there is $p \in [18]$, such that ${\bf u} + {\bf v}$ has $2$ or $-2$ in $p$th position. Thus ${\bf u} \oplus {\bf v}$ is undefined.\par {\bf Case 4.} ${\bf u}= \chi (B_i, B_j)$, ${\bf v}=\chi (B_k, B_m)$ or ${\bf u}=\chi (A_i, A_j)$, $ {\bf v}=\chi (A_k,A_m)$, where $i = k$ or $j = m$. If $i=k$ and $j=m$, then ${\bf u} \oplus {\bf v}$ is undefined. Consider the case $i=k$, $j \neq m$ and ${\bf u}= \chi (B_i, B_j)$, ${\bf v}=\chi (B_i, B_m)$. Let $s= [4] \setminus \{i,j,m\}$. By construction of $M$ either we have $p\in [18]$ such that $p\in B_i\cap B_s$ and $p\notin B_j\cup B_m$ or $p\notin B_i\cup B_s$ and $p\in B_j\cap B_m$. In both cases ${\bf u} + {\bf v}$ has $2$ or $-2$ in position $p$. \par {\bf Case 5.} ${\bf u}= \chi (B_i, B_j)$, ${\bf v}=\chi (B_k, B_m)$ or ${\bf u}=\chi (A_i, A_j)$, $ {\bf v}=\chi (A_k,A_m)$, where $ j =k$ or $i = m$. Suppose $ j =k$. Since $i > j$ and $j> m$ we have $i> m$. This implies that $\chi (B_i, B_m)$ belongs to $X$. On the other hand ${\bf u} + {\bf v} = \chi(B_i) - \chi(B_m)= \chi(B_i, B_m)$. Therefore ${\bf u} \oplus {\bf v}={\bf u} + {\bf v} \in X$. \end{proof} \begin{cor}\label{closedX} $X'$ is closed under restricted sum. \end{cor} \begin{proof} We will have to consider the same five cases as in the Proposition~\ref{closedZ}. As above in the first four cases the restricted sum of vectors will be undefined. In the fifth case, when ${\bf u}= \chi (B'_i, B'_j)$, ${\bf v}=\chi (B'_k, B'_m)$ or ${\bf u}=\chi (A'_i, A'_j)$, $ {\bf v}=\chi (A'_k,A'_m)$, where $ j =k$ or $i = m$, we will have ${\bf u} + {\bf v} = \chi(B'_i) - \chi(B'_m)= \chi(B'_i, B'_m)\in X'$ or ${\bf u} + {\bf v} = \chi(A'_i) - \chi(A'_m)= \chi(A'_i, A'_m)\in X'$. \end{proof} To satisfy conditions of Theorem~\ref{constr} we need also to show that the intersection of the discrete cone $C(\preceq')$ and the hyperplane $H_{{\bf w}'}$ equals to $X' \cup -X'$. More explicitly we need to prove the following: \begin{prop} \label{conseq} Suppose $C,D \subseteq [26]$ are tied in $\preceq'$, that is $C\preceq' D$ and $D\preceq' C$. Then $\chi(C,D) \in X' \cup -X'$. \end{prop} \begin{proof} Assume to the contrary that there are two sets $C, D \in 2^{[26]}$ that have equal weights with respect to the corresponding system of weights defining $\preceq'$ but $\chi(C,D) \notin X' \cup -X'$. The sets $C$ and $D$ have to contain some of the elements from $[26]\setminus [18]$ since $w_1, \ldots , w_{18}$ are linearly independent. Thus $C = C_1 \cup C_2 \text{ and } D = D_1 \cup D_2$, where $C_1,D_1 \subseteq [18]$ and $C_2, D_2 \subseteq [26]\setminus [18]$ with $C_2$ and $D_2$ being nonempty. We have \[ 0=\chi(C,D)\cdot {\bf w}' = \chi(C_1, D_1)\cdot {\bf w} + \chi (C_2, D_2) \cdot {\bf w}^+, \] where ${\bf w}^+=(w_{19}, \ldots, w_{26})^T$. 
By~(\ref{urav}), we can express weights $w_{19}, \ldots, w_{26}$ as linear combinations with integer coefficients of $N, K$ and $\row w{18}$ obtaining \[ \chi (C_2, D_2)\cdot {\bf w}^+ =\left( \sum_{i=1}^4\gamma_i \chi(A_i) + \sum_{i=1}^4\gamma_{4+i} \chi(B_i)\right)\cdot {\bf w} + \beta_1 N + \beta_2 K, \] where $\gamma_i, \beta_j \in {\mathbb Z}$. Clearly the expression in the bracket on the right-hand-side is just a vector with integer entries. Let us denote it $\alpha$. Then \begin{equation} \label{introduction_of_alpha} \chi (C_2, D_2)\cdot {\bf w}^+ = {\bf \alpha}\cdot {\bf w} + \beta_1 N + \beta_2 K, \end{equation} where ${\bf \alpha} \in {\mathbb Z}^{18}$. We can now write $\chi(C,D)\cdot {\bf w}'$ in terms of ${\bf w}, K$ and $N$: $$ 0=\chi(C,D)\cdot {\bf w}' = (\chi(C_1, D_1) + {\bf \alpha})\cdot {\bf w} + \beta_1 N + \beta_2 K. $$ We recap that $K$ was chosen to be much greater then $\sum_{i\in [18]}w_i$ and $N$ is much greater then $K$. So if $\beta_1, \beta_2$ are different from zero then $|\beta_1 N +\beta_2 K |$ is a very big number, which cannot be canceled out by $(\chi(C_1, D_1) + {\bf \alpha})\cdot {\bf w}$. Weights $w_1, \ldots, w_{18}$ are linearly independent, so for arbitrary ${\bf b} \in Z^{18}$ the dot product ${\bf b}\cdot {\bf w}$ can be zero if and only if ${\bf b}={\bf 0}$. Hence $$ w(C)=w(D) \mbox{ iff } \chi(C_1, D_1) = -\alpha \text{ and } \beta_1=0, \beta_2=0. $$ Taking into account that $\chi(C_1,D_1)$ is a vector from $T^{18}$, we get \begin{equation}\label{uslovie24} {\bf \alpha} \notin T^{18} \Longrightarrow w(C) \neq w(D). \end{equation} We need the following two claims to finish the proof, their proofs are delegated to the next section. \begin{claim}\label{from'tocons} Suppose $\chi(C_1,D_1)$ belongs to $X \cup -X$. Then $\chi(C,D)$ belongs to $X' \cup -X'$. \end{claim} \begin{claim} \label{alpha} If ${\bf \alpha} \in T^{18}$, then $\alpha$ belongs to $X\cup -X$. \end{claim} Now let us show how with the help of these two claims the proof of Proposition~\ref{conseq} can be completed. The sets $C$ and $D$ have the same weight and this can happen only if ${\bf \alpha}$ is a vector in $T^{18}$. By Claim~\ref{alpha} ${\bf \alpha} \in X \cup -X$. The characteristic vector $\chi(C_1,D_1)$ is equal to $-{\bf \alpha}$, hence $\chi(C_1,D_1) \in X \cup -X$. By Claim~\ref{from'tocons} we get $\chi(C,D) \in X' \cup -X' $, a contradiction. \end{proof} \begin{thm} There exists a linear qualitative probability order $\sqsubseteq$ on $[26]$ and $T\subset [26]$ such that the initial segment $\Delta(\sqsubseteq,T)$ is not a threshold complex. \end{thm} \begin{proof} By Corollary~\ref{closedX} and Proposition~\ref{conseq} all conditions of Theorem~\ref{constr} are satisfied. Therefore $C(\preceq') \setminus (-X')$ is a discrete cone $C(\sqsubseteq )$, where $\sqsubseteq$ is a almost representable linear qualitative probability order. By construction $A'_1 \sqsubset A'_2 \sqsubset A'_3 \sqsubset A'_4 \sqsubset B'_1 \sqsubset B'_2 \sqsubset B'_3 \sqsubset B'_4$ and thus $\Delta(\sqsubseteq, B'_1)$ is an initial segment, which is not a threshold complex. \end{proof} Note that we have a significant degree of freedom in constructing such an example. The matrix $M$ can be chosen in $2^{18}$ possible ways and we have not specified the linear qualitative probability order $\preceq$. \section{Proofs of Claim~\ref{from'tocons} and Claim~\ref{alpha}} Lets fix some notation first. Suppose ${\bf b} \in {\mathbb Z}^{k}$ and ${\bf x}_i \in {\mathbb Z}^{n} $ for $i \in [k]$. 
Then we define the product \[ {\bf b} \cdot ({\bf x}_1, \ldots, {\bf x}_k) = \sum_{i \in [k]} b_i {\bf x}_i. \] It resembles the dot product (the difference is that the second argument is a sequence of vectors) and is denoted in the same way. For a sequence of vectors $({\bf x}_1, \ldots, {\bf x}_k)$ we also define $({\bf x}_1, \ldots, {\bf x}_k)_p=({\bf x}_1^{(p)}, \ldots, {\bf x}_k^{(p)})$, where ${\bf x}_i^{(j)}$ is the $j$th coordinate of vector ${\bf x}_i$. We start with the following lemma. \begin{lem} \label{scalmult} Let ${\bf b}\in \mathbb{Z}^6$. Then \begin{equation*} {\bf b}\cdot (\chi(B_1,A_4),\chi(B_2,A_1), \chi(B_3,A_2),\chi(A_2,A_1),\chi(A_3,A_1),\chi(A_4,A_1)) = {\bf 0} \end{equation*} if and only if ${\bf b}={\bf 0}$. \end{lem} \begin{proof} We know that the pairs $(B_1, A_4)$ and $(B_2, A_1)$ are not compatible. So there exists an element $p$ that lies in the intersection $B_1 \cap B_2$ (or $A_1 \cap A_4$), but $p \notin A_4 \cup A_1$ ($p \notin B_1 \cup B_2$, respectively). We have exactly two copies of every element among $A_1, \ldots, A_4$ and $B_1, \ldots, B_4$. Thus, the element $p$ belongs to $A_2 \cap A_3$ ($B_3 \cap B_4$) and doesn't belong to $B_3 \cup B_4$ ($A_2 \cup A_3$ ). The following table illustrates this: \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline & $\chi(A_1)$ & $\chi (A_2)$ & $\chi(A_3)$ & $ \chi(A_4)$ & $\chi (B_1)$ & $\chi (B_2)$ & $\chi(B_3)$ & $\chi(B_4) $\\ \hline $\text{$p$th} $ & 0 & 1& 1& 0 & 1& 1 & 0& 0\\ $\text{coordinate} $& 1 & 0& 0& 1 & 0& 0 & 1 & 1\\ \hline \end{tabular} \noindent Then at $p$th position we have \[ (\chi(B_1,A_4),\chi(B_2,A_1), \chi(B_3,A_2),\chi(A_2,A_1),\chi(A_3,A_1),\chi(A_4,A_1))_p=\pm (1,1,-1,1,1,0) \] and hence \[ b_1+b_2-b_3+b_4+b_5 = 0. \] From the fact that other pairs are not compatible we can get more equations relating $\row b6$: \[ \begin{array}{ccc} b_1- b_2 + b_3 - b_4 - b_6 = 0& \text{from}& (B_1,A_4), (B_3,A_2);\\ -b_1+ b_2 + b_3 + b_5 + b_6 = 0 & \text{from}& (B_1,A_4), (B_4,A_3);\\ b_2 + b_5 + b_6 = 0& \text{from} &(B_1,A_1), (B_2,A_2);\\ b_4 + b_6 = 0 & \text{from} &(B_1,A_1), (B_3,A_3);\\ b_3 + b_5 + b_6 = 0& \text{from}& (B_1,A_1), (B_3,A_2). \end{array} \] The obtained system of linear equations has only the zero solution. \end{proof} \begin{lem} \label{scalarw} Let ${\bf a}=(\row a8)$ be a vector in ${\mathbb Z}^{8}$ whose every coordinate $a_i$ has absolute value which is at most $100$. Then ${\bf a}\cdot {\bf w}^+ =0$ if and only if ${\bf a}={\bf 0} $. 
\end{lem} \begin{proof} We first rewrite~(\ref{urav}) in more convenient form: \begin{equation}\label{newurav} \begin{split} w_{19}& = N - K -(-\chi(B_1,A_4)+ \chi(A_1))\cdot {\bf w} \\ w_{20} & = N - K -(-\chi(B_1,A_4) -\chi(B_2,A_1)+ \chi(A_2) )\cdot {\bf w} \\ w_{21}& = N - K -(-\chi(B_1,A_4) -\chi(B_2,A_1) - \chi(B_3,A_2)+ \chi(A_3))\cdot {\bf w} \\ w_{22}& = N - K -\chi(A_4) \cdot {\bf w} \\ w_{23}& = K - \chi(B_1,A_4) \cdot {\bf w}\\ w_{24}& = K -( \chi(B_1,A_4) + \chi(B_2,A_1) ) \cdot {\bf w} \\ w_{25}& = K -(\chi(B_1,A_4) +\chi(B_2,A_1) + \chi(B_3,A_2) )\cdot {\bf w}\\ w_{26}& = K \end{split} \end{equation} We calculate the dot product ${\bf a}\cdot {\bf w}^+$ substituting the values of $w_{19}, \ldots, w_{26}$ from~(\ref{newurav}): \begin{equation} \label{wsystem} \begin{split} 0={\bf a}\cdot {\bf w}^+ &=N\sum_{i \in [4]}a_i - K \left(\sum_{i \in [4]}a_i - \sum_{i \in [4] }a_{4+i}\right) \\ &- \Bigl[ \chi(B_1,A_4)\left(\sum_{i =5}^{7}a_i - \sum_{i =1}^{3}a_i \right) +\chi(B_2,A_1)\left(\sum_{i =6}^{7}a_i - \sum_{i =2}^{3}a_i\right) \\ &+ \chi(B_3,A_2)(-a_3 +a_7)+ \sum_{i\in [4]}a_i\chi(A_i) \Bigr] \cdot {\bf w}. \end{split} \end{equation} The numbers $N$ and $K$ are very big and $\sum_{i \in [18]}w_i$ is small. Also $|a_i|\le 100$. Hence the three summands cannot cancel each other. Therefore $\sum_{i \in [4]}a_i = 0$ and $\sum_{i \in [4]}a_{4+i} = 0$. The expression in the square brackets should be zero because the coordinates of ${\bf w}$ are linearly independent. We know that $a_1 = -a_2-a_3-a_4$, so the expression in the square brackets in~(\ref{wsystem}) can be rewritten in the following form: \begin{equation} \label{fform} \begin{split} b_1\chi(B_1,A_4) +b_2 \chi(B_2,A_1)+b_3 \chi(B_3,A_2)+\\ a_2\chi(A_2,A_1)+a_3\chi(A_3,A_1)+a_4\chi(A_4,A_1), \end{split} \end{equation} where $b_1=\sum_{i =5}^{7}a_i - \sum_{i =1}^{3}a_i,$ $b_2=\sum_{i =6}^{7}a_i - \sum_{i =2}^{3}a_i$ and $b_3 = a_7-a_3.$ By Lemma~\ref{scalmult} we can see that expression~(\ref{fform}) is zero iff $b_1=0,$ $b_2=0, b_3=0$ and $ a_2= 0, a_3=0, a_4=0$ and this happens iff ${\bf a} = {\bf 0}$. \end{proof} \begin{proof}[Proof of Claim 1] Assume to the contrary that $\chi(C_1,D_1) \in X \cup -X$ and $\chi(C,D)$ does not belong to $X' \cup -X'$. Consider $\chi(\breve{C_1}, \breve{D_1}) \in X' \cup -X'$. We know that the weight of $C$ is the same as the weight of $D$, and also that the weight of $\breve{C}_1$ is the same as the weight of $\breve{D}_1$. This can be written as \begin{align*} &\chi(C_1,D_1)\cdot {\bf w} + \chi(C_2,D_2) \cdot {\bf w}^+ = 0, \\ &\chi(C_1,D_1)\cdot {\bf w} + \chi(\breve{C_1}\setminus C_1, \breve{D_1}\setminus D_1 )\cdot {\bf w}^+ = 0. \end{align*} We can now see that \[(\chi(\breve{C_1}\setminus C_1, \breve{D_1}\setminus D_1) - \chi(C_2, D_2))\cdot {\bf w}^+ = 0.\] The left-hand-side of the last equation is a linear combination of weights $w_{19}, \ldots, w_{26}$. Due to Lemma~\ref{scalarw} we conclude from here that \[ \chi(\breve{C_1}\setminus C_1,\breve{D_1}\setminus D_1 ) - \chi(C_2,D_2) = \bf 0. \] But this is equivalent to $\chi (C,D) = \chi(\breve{C_1},\breve{D_1}) \in X$, which is a contradiction. \end{proof} \begin{proof}[Proof of Claim~\ref{alpha}] We remind the reader that $\bf\alpha$ was defined in (\ref{introduction_of_alpha}). Sets $C$ and $D$ has the same weight and we established that $\beta_1=\beta_2=0$. So \begin{equation*} \label{alpha_equation} \chi(C_2,D_2) \cdot {\bf w}^+ = {\bf \alpha}\cdot {\bf w}. 
\end{equation*} If we look at the representation of the last eight weights in~(\ref{newurav}), we note that the weights $w_{19}$, $w_{20}$, $w_{21}$, $w_{22}$ are much heavier than the weights $w_{23}$, $w_{24}$, $w_{25}$, $w_{26}$. Hence $w(C)=w(D)$ implies \begin{equation} \label{equal_heavy} \begin{split} |C_2 \cap \{19, 20,21,22\}| = |D_2 \cap \{19, 20,21,22\}| & \text{ and }\\ |C_2 \cap \{23,24,25,26\}| = |D_2 \cap \{23, 24,25,26\}|. & \end{split} \end{equation} That is $C$ and $D$ have equal number of super-heavy weights and equal number of heavy ones. Without loss of generality we can assume that $C_2 \cap D_2$ is empty. Similar to derivation in the proof of Lemma~\ref{scalarw}, the vector $\bf \alpha$ can be expressed as \begin{equation} \label{alpha-repr-temp} \alpha = a_1\chi(B_1,A_4) +a_2 \chi(B_2,A_1) + a_3 \chi(B_3,A_2) + \sum_{i\in [4]}b_i \chi(A_i) \end{equation} for some $a_i, b_j \in {\mathbb Z}$. The characteristic vectors $\chi(A_1), \ldots, \chi(A_4)$ participate in the representations of super-heavy elements $w_{19}, \ldots, w_{22}$ only. Hence $b_i=1$ iff element $18+i\in C_2$ and $b_i=-1$ iff element $18+i\in D_2$. Without loss of generality we can assume that $C_2 \cap D_2 = \emptyset$. By~(\ref{equal_heavy}) we can see that if $C_2$ contains some super-heavy element $p \in \{19, \ldots, 22\}$ with $\chi(A_k)$, $ k \in [4]$, in the representation of $w_p$, then $D_2$ has a super-heavy $q \in \{19, \ldots, 22\}$, $q \neq p$ with $\chi(A_t), t \in [4] \setminus \{k\}$ in representation of $w_q$. In such case $b_k= -b_t =1$ and \[ b_k \chi(A_k)+ b_t \chi(A_t) = \chi(A_k,A_t). \] By~(\ref{equal_heavy}) the number of super-heavy element in $C_2$ is the same as the number of super-heavy elements in $D_2$. Therefore~(\ref{alpha-repr-temp}) can be rewritten in the following way: \begin{equation}\label{alpha-repr} \alpha = a_1\chi(B_1,A_4) +a_2 \chi(B_2,A_1) + a_3 \chi(B_3,A_2) +\\ a_4\chi(A_i,A_p) + a_5\chi(A_k,A_t), \end{equation} where $a_1, a_2, a_3 \in {\mathbb Z}$; $a_4,a_5 \in \{0,1\}$ and $\{i,k,t,p\} = [4]$.\par Now the series of technical facts will finish the proof. \begin{fact} \label{three} Suppose ${\bf a}=(a_1,a_2,a_3) \in \mathbb{Z}^3$ and $|\{i, k, t\}|=|\{j, m, s\}|=3$. Then \[ a_1 \chi(B_j,A_i) + a_2\chi( B_m,A_k) + a_3\chi( B_s,A_t) \in T^{18} \] if and only if \begin{equation} \label{3-list} {\bf a} \in \{(0,0,0),\ (\pm1,0,0),\ (0,\pm1,0),\ (0,0,\pm1),\ (1,1,1),\ (-1,-1,-1)\}. \end{equation} \end{fact} \begin{proof} The pairs $((B_j,A_i), (B_m,A_k))$, $((B_j,A_i), (B_s,A_t))$ and $((B_m,A_k), (B_s,A_t))$ are not compatible. Using the same technique as in the proofs of Proposition~\ref{closedZ} and Lemma~\ref{scalmult} and watching a particular coordinate we get \[ (a_1 + a_2-a_3),\ (a_1 - a_2+a_3),\ (-a_1 + a_2+a_3) \in T, \] respectively. The absolute value of the sum of every two of these terms is at most two. Add the first term to the third. Then $|2a_2| \le 2$ or, equivalently, $|a_2|\le 1$. In a similar way we can show that $|a_3|\le 1$ and $|a_1|\le 1$. The only vectors that satisfy all the conditions above are those listed in (\ref{3-list}). \end{proof} \begin{fact} \label{three+} Suppose ${\bf a}=(a_1,a_2,a_3) \in \mathbb{Z}^3$ and $|\{i, k, t\}|=|\{j, m, s\}|=3$. Then \[ a_1 \chi(B_j,A_i) + a_2\chi( B_m,A_k) + a_3\chi( B_s,A_t)+ \chi(A_k,A_t) \in T^{18} \] if and only if \begin{equation} \label{4-list} {\bf a} \in \{(0,0,0),\ (0,1,0),\ (0,0,-1),\ (0,1, -1)\}. 
\end{equation} \end{fact} \begin{proof} Considering non-compatible pairs $((B_m,A_k), (B_s,A_t))$, $((B_j,A_i), (B_m,A_k))$, $((B_j,A_i), (B_{s},A_t))$, $((B_j,A_k), (B_{s},A_i))$, $((B_j,A_t), (B_{m},A_i))$, we get the inclusions \begin{equation*} (-a_1 +a_2+ a_3),\ (a_1+a_2 - a_3 -1),\ (a_1-a_2+ a_3+1),\ (a_1-1),\ (a_1+1) \in T, \end{equation*} respectively. We can see that $|2a_2-1| \le 2$, $|2a_3+1| \le 2$ and $a_1=0$. So $a_2$ can only be $0$ or $1$, and $a_3$ can only have the values $-1$ or $0$. \end{proof} \begin{fact} \label{three_and_oneint} Suppose ${\bf a}=(a_1,a_2,a_3) \in \mathbb{Z}^3$ and $\{i, k, t, p\} = [4]$ and $|\{j, m, s\}|=3$. Then \[ a_1 \chi(B_j,A_i) + a_2\chi( B_m,A_k) + a_3\chi( B_s,A_t) + \chi(A_i,A_p) \in T^{18} \] if and only if \[ {\bf a} \in \{(0,0,0),\ (1,0,0),\ (1,1,1),\ (2,1,1)\}. \] \end{fact} \begin{proof} Let $\ell\in [4]\setminus \{j, m, s\}$. From consideration of the following non-compatible pairs \begin{align*} & ((B_j,A_i),(B_m,A_k)),\ ((B_j,A_i), (B_s,A_t)),\ ((B_m,A_k), (B_{s},A_t)),\ ((B_j,A_i), (B_{m},A_t)), \\ & ((B_j,A_i), (B_{m},A_p)),\ ((B_j,A_i), (B_{s},A_p)),\ ((B_s,A_t), (B_{\ell},A_i)) \end{align*} we get the following inclusions \begin{align*} & (a_1 +a_2- a_3-1),\ (a_1-a_2 + a_3 -1),\ (-a_1+a_2+ a_3),\\ & (a_1-1),\ (a_1-a_3),\ (a_1-a_2),\ (a_2-a_3+1)\in T, \end{align*} respectively. So we have $|2a_3-1| \le 2$ (from the second and the third inclusions) and $|2a_2-1| \le 2$ (from the first and the third inclusions), from which we immediately get $a_2, a_3 \in \{1,0\}$. We also get $a_1 \in \{2,1,0\}$ (by the fourth inclusion). \begin{itemize} \item If $a_1=2$, then by the fifth and sixth inclusions $a_3=1$ and $a_2=1$. \item If $a_1 = 1$, then $a_2$ can be either zero or one. If $a_2=0$ then we have $\chi(B_j,A_i)+ a_3\chi( B_s,A_t) + \chi(A_i,A_p) = \chi(B_j,A_p)+ a_3\chi( B_s,A_t)$. By Fact~\ref{three}, $a_3$ can only be zero. On the other hand, if $a_2=1$, then $a_3=1$ by the seventh inclusion. \item If $a_1=0$ then $a_2$ can be $0$ or $1$. Suppose $a_2=0$. Then $a_3=0$ by the first two inclusions. Assume $a_2=1$. Then $a_3=0$ by the third inclusion and, on the other hand, $a_3=1$ by the second inclusion, a contradiction. \end{itemize} This proves the statement. \end{proof} \begin{fact} \label{three_and_two} Suppose ${\bf a}=(a_1,a_2,a_3) \in \mathbb{Z}^3$ and $\{i, k, t, p\} = [4]$ and $|\{j, m, s\}|=3$. Then \[ a_1 \chi(B_j,A_i) + a_2\chi( B_m,A_k) + a_3\chi( B_s,A_t) + \chi(A_i,A_p) + \chi(A_k,A_t)\notin T^{18}. \] \end{fact} \begin{proof} Let $\ell\in [4]\setminus \{j, m, s\}$. Using the same technique as above, from consideration of the non-compatible pairs \begin{align*} & ((B_j,A_i), (B_{m},A_t)),\ ((B_s,A_t), (B_{j},A_k)),\ ((B_j,A_i), (B_s,A_t)),\\ & ((B_m,A_k), (B_{s},A_t)),\ ((B_j,A_i), (B_{m},A_p)),\ ((B_j,A_i), (B_{\ell},A_k)) \end{align*} we obtain the inclusions \[ a_1,\ a_3,\ (a_1-a_2 + a_3),\ (-a_1+a_2+ a_3),\ (a_1-a_3),\ (a_1-a_3-2) \in T, \] respectively. From the last two inclusions we can see that $a_1-a_3=1$. This, together with the first and the second inclusions, implies $(a_1, a_3) \in \{(1,0), (0,-1)\}$. Suppose $(a_1,a_3)=(1,0)$. Then \[ \chi(B_j,A_i) + a_2\chi( B_m,A_k) + \chi(A_i,A_p) + \chi(A_k,A_t) =\chi(B_j,A_p) + a_2\chi( B_m,A_k) + \chi(A_k,A_t). \] By Fact~\ref{three_and_oneint}, it does not belong to $T^{18}$ for any value of $a_2$. Suppose now that $(a_1,a_3)=(0,-1)$. Then by the third and the fourth inclusions $a_2$ can only be zero.
Then ${\bf a}=(0,0,-1)$ and \[ -\chi( B_s,A_t) + \chi(A_i,A_p) + \chi(A_k,A_t) = -\chi( B_s,A_k) + \chi(A_i,A_p). \] However, by Fact~\ref{three_and_oneint} the right-hand side of this equation is not a vector of $T^{18}$. \end{proof} \begin{fact}\label{final fact} Suppose ${\bf a} \in \mathbb{Z}^5$ and \[ {\bf v}=a_1 \chi(B_j,A_i) + a_2\chi( B_m,A_k) + a_3\chi( B_s,A_t) + a_4\chi(A_i,A_p) + a_5\chi(A_k,A_t). \] If $a_4,a_5 \in \{0,1,-1\}$ and ${\bf v} \in T^{18}$, then ${\bf v}$ belongs to $X$ or $-X$. \end{fact} \begin{proof} First of all, we will find the possible values of ${\bf a}$ in case ${\bf v} \in T^{18}$. By Facts~\ref{three}--\ref{three_and_two} one can see that ${\bf v} \in T^{18}$ iff ${\bf a}$ belongs to the set \begin{align*} Q=&\{(0,0,0,0,0),\ (\pm1,0,0,0,0),\ (0,\pm1,0,0,0), \ (0,0,\pm1,0,0),\ (1,1,1,0,0), \\ &(0,0,0,\pm1,0),\ (\pm1,0,0,\pm1,0),\ (\pm1, \pm1, \pm1,\pm1,0),\ (\pm2, \pm1, \pm1,\pm1,0),\\ &(0,0,0,0,\pm1),\ (0,\pm1,0,0,\pm1),\ (0,0,\mp1,0,\pm1),\ (0,\pm1,\mp1,0,\pm1) \}. \end{align*} By the construction of $\preceq$ the sequence $(A_1, \ldots, A_4; B_1, \ldots, B_4)$ is a trading transform. So for every $\{i_1, \ldots, i_4 \} = \{j_1, \ldots, j_4 \} = [4]$ the equation \begin{equation}\label{trading} \chi(B_{i_1},A_{j_1}) + \chi(B_{i_2},A_{j_2})+ \chi(B_{i_3},A_{j_3})+\chi(B_{i_4},A_{j_4}) = 0 \end{equation} holds. Taking~(\ref{trading}) into account one can show that for every ${\bf a} \in Q$, the vector ${\bf v}$ belongs to $X$ or $-X$. For example, if ${\bf a} = (2,1, 1,1,0)$ then \begin{multline*} 2 \chi(B_j,A_i) + \chi( B_m,A_k) + \chi( B_s,A_t) + \chi(A_i,A_p) =\\ \chi(B_j,A_i) - \chi(B_{\ell},A_p)+ \chi(A_i,A_p) = \chi(B_j,B_{\ell}), \end{multline*} where $\ell\in [4] \setminus \{j,m,s\}$. \end{proof} One can see that ${\bf v}$ from Fact~\ref{final fact} is the general form of $\alpha$. Hence $\alpha \in T^{18}$ if and only if $\alpha \in X \cup (-X)$, which is Claim~2. \end{proof} \section{Acyclic games and a conjectured characterization} So far we have shown that the initial segment complexes strictly contain the threshold complexes and are strictly contained within the shifted complexes. In this section we introduce some ideas from the theory of simple games to formulate a conjecture that characterizes initial segment complexes. The idea in this section is to start with a simplicial complex and see if there is a natural linear order available on $2^{[n]}$ which gives a qualitative probability order and has the original complex as an initial segment. We will follow the presentation of Taylor and Zwicker \cite{tz:b:simplegames}. Let $\Delta \subseteq 2^{[n]}$ be a simplicial complex. Define the \emph{Winder desirability relation}, $\leq_W$, on $2^{[n]}$ by $A \leq_W B$ if and only if for every $Z \subseteq [n] \setminus((A \setminus B)\cup(B \setminus A))$ we have that \begin{equation*} (A \setminus B) \cup Z \notin \Delta \Rightarrow (B \setminus A) \cup Z \notin \Delta. \end{equation*} Furthermore define the \emph{Winder existential ordering}, $\prec_W$, on $2^{[n]}$ to be \begin{equation*} A \prec_W B \Longleftrightarrow \textrm{It is not the case that } B \leq_W A. \end{equation*} \begin{defn}\label{d:SA} A simplicial complex $\Delta$ is called strongly acyclic if there are no $k$-cycles \begin{equation*} A_1 \ensuremath{\prec_W} A_2 \ensuremath{\prec_W} \cdots \ensuremath{\prec_W} A_k \ensuremath{\prec_W} A_1 \end{equation*} for any $k$ in the Winder existential ordering.
\end{defn} \begin{thm}\label{t:initsa} Suppose $\preceq$ is a qualitative probability order on $2^{[n]}$ and $T \in 2^{[n]}$. Then the initial segment $\Delta(\preceq,T)$ is strongly acyclic. \end{thm} \begin{proof} Let $\Delta=\Delta(\preceq,T)$. It follows from the definition that $A \ensuremath{\prec_W} B$ if and only if there exists a $Z \subseteq [n] \setminus((A \setminus B)\cup (B \setminus A))$ such that $(A \setminus B) \cup Z \in \Delta$ and $(B \setminus A) \cup Z \notin \Delta$. From the definition of $\Delta$ it follows that \begin{equation*} (A \setminus B) \cup Z \prec (B \setminus A) \cup Z \end{equation*} which, by de Finetti's axiom \ref{deFeq}, implies \begin{equation*} A \setminus B \prec B \setminus A \end{equation*} and hence, again by de Finetti's axiom \ref{deFeq}, \begin{equation*} A \prec B. \end{equation*} Thus a $k$-cycle \begin{equation*} A_1 \ensuremath{\prec_W} \cdots \ensuremath{\prec_W} A_k \ensuremath{\prec_W} A_1 \end{equation*} in $\Delta$ would imply a $k$-cycle \begin{equation*} A_1 \prec \cdots \prec A_k \prec A_1 \end{equation*} which contradicts that $\prec$ is a total order. \end{proof} \begin{con}\label{c:weak} A simplicial complex $\Delta$ is an initial segment complex if and only if it is strongly acyclic. \end{con} We will return momentarily to give some support for Conjecture~\ref{c:weak}. First, however, it is worth noting that the necessary condition of being strongly acyclic from Theorem~\ref{t:initsa} allows us to see that there is little relationship between being an initial segment complex and satisfying the conditions $CC_k^*$. \begin{cor} For every $M > 0$ there exist simplicial complexes that satisfy $CC_M^*$ but are not initial segment complexes. \end{cor} \begin{proof} Taylor and Zwicker \cite{tz:b:simplegames} construct a family of complexes $\{G_k\}$, which they call Gabelman games, that satisfy $CC_{k-1}^*$ but not $CC_k^*$. They then show \cite[Corollary 4.10.7]{tz:b:simplegames} that none of these examples are strongly acyclic. The result then follows from Theorem~\ref{t:initsa}. \end{proof} Our evidence in support of Conjecture~\ref{c:weak} is based on the idea that the Winder existential order can be used to produce the related qualitative probability order for strongly acyclic complexes. Here are two lemmas that give some support for this belief: \begin{lem} If $\Delta$ is a simplicial complex with $A \in \Delta$ and $B \notin \Delta$ then $A \ensuremath{\prec_W} B$. \end{lem} \begin{proof} Let $Z=A \cap B$. Then \begin{align*} (A \setminus B) \cup Z &= A \in \Delta \\ (B \setminus A) \cup Z &=B \notin \Delta, \end{align*} and so $A \ensuremath{\prec_W} B$. \end{proof} \begin{lem}\label{l:weprop} For any $\Delta$, the Winder existential order \ensuremath{\prec_W} satisfies the property \begin{equation*} A \ensuremath{\prec_W} B \Longleftrightarrow A\cup D \ensuremath{\prec_W} B \cup D \end{equation*} for all $D$ disjoint from $A \cup B$. \end{lem} \begin{proof} See \cite[Proposition 4.7.8]{tz:b:simplegames}. \end{proof} This pair of lemmas leads to a slightly stronger version of Conjecture~\ref{c:weak}. \begin{con}\label{c:strong} If $\Delta$ is strongly acyclic then there exists an extension of \ensuremath{\prec_W} to a qualitative probability order. \end{con} What are the barriers to proving Conjecture~\ref{c:strong}? The Winder order need not be transitive. In fact, there are examples of threshold complexes for which \ensuremath{\prec_W} \ is not transitive \cite[Proposition 4.7.3]{tz:b:simplegames}.
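For small ground sets, however, both the relation \ensuremath{\prec_W} and the strong acyclicity condition of Definition~\ref{d:SA} can be checked directly by enumeration, which at least makes Conjecture~\ref{c:weak} easy to test on examples. The following Python sketch is purely illustrative: the function names are ours, the search is brute force (so it is feasible only for very small $n$), and the cycle search is depth-limited, so it only rules out short Winder cycles rather than cycles of every length.
\begin{verbatim}
from itertools import combinations

def powerset(n):
    """All subsets of {0, ..., n-1} as frozensets."""
    return [frozenset(c) for r in range(n + 1)
            for c in combinations(range(n), r)]

def winder_leq(A, B, complex_, n):
    """A <=_W B: for every Z disjoint from the symmetric difference,
    (A - B) | Z not in Delta  implies  (B - A) | Z not in Delta."""
    diff = (A - B) | (B - A)
    rest = [x for x in range(n) if x not in diff]
    for r in range(len(rest) + 1):
        for Z in combinations(rest, r):
            Z = frozenset(Z)
            if ((A - B) | Z) not in complex_ and ((B - A) | Z) in complex_:
                return False
    return True

def precedes(A, B, complex_, n):
    """A <_W B: it is not the case that B <=_W A."""
    return not winder_leq(B, A, complex_, n)

def has_short_cycle(complex_, n, max_len=4):
    """Depth-limited search for a cycle A_1 <_W ... <_W A_k <_W A_1, k <= max_len."""
    sets_ = powerset(n)
    edges = {(A, B) for A in sets_ for B in sets_ if precedes(A, B, complex_, n)}
    def dfs(start, current, depth):
        if depth > 0 and (current, start) in edges:
            return True
        if depth == max_len:
            return False
        return any(dfs(start, w, depth + 1) for (v, w) in edges if v == current)
    return any(dfs(A, A, 0) for A in sets_)

# Example: the complex of all subsets of {0,1,2} of size at most 1.
n = 3
Delta = {S for S in powerset(n) if len(S) <= 1}
print(not has_short_cycle(Delta, n))   # True means no short Winder cycles found
\end{verbatim}
On this example the sketch should report that no short cycles exist, which is consistent with Theorem~\ref{t:initsa}, since this complex is a threshold complex (take all weights equal and a suitable threshold).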
Because of this failure of transitivity, one would have to work with the transitive closure of \ensuremath{\prec_W}, which does not seem to have a tractable description. In particular, we do not know if the analogue to Lemma~\ref{l:weprop} holds for the transitive closure of \ensuremath{\prec_W}. \section{Conclusion}\label{s:conc} In this paper we have begun the study of a class of simplicial complexes that are combinatorial generalizations of threshold complexes derived from qualitative probability orders. We have shown that this new class of complexes strictly contains the threshold complexes and is strictly contained in the shifted complexes. Although we cannot give a complete characterization of the complexes in question, we conjecture that they are the strongly acyclic complexes that arise in the study of cooperative games. We hope that this conjecture will draw attention to the ideas developed in game theory, which we believe are too often neglected in the combinatorial literature. \end{document}
\begin{document} \title{Volume growth, curvature, and Buser-type inequalities in graphs} \begin{abstract} We study the volume growth of metric balls as a function of the radius in discrete spaces, and focus on the relationship between volume growth and discrete curvature. We improve volume growth bounds under a lower bound on the so-called Ollivier curvature, and discuss similar results under other types of discrete Ricci curvature. Following recent work in the continuous setting of Riemannian manifolds (by the first author), we then bound the eigenvalues of the Laplacian of a graph under bounds on the volume growth. In particular, $\lambda_2$ of the graph can be bounded using a weighted discrete Hardy inequality and the higher eigenvalues of the graph can be bounded by the eigenvalues of a tridiagonal matrix times a multiplicative factor, both of which only depend on the volume growth of the graph. As a direct application, we relate the eigenvalues to the Cheeger isoperimetric constant. Using these methods, we describe classes of graphs for which the Cheeger inequality is tight on the second eigenvalue. We also describe a method for proving Buser's Inequality in graphs, particularly under a lower bound assumption on curvature. \end{abstract} \section{Introduction} \subsection{History and Motivation} In Riemannian geometry there is a large and celebrated body of literature relating the Ricci curvature to various properties of the manifold, such as the Laplacian operator, the volume, the diameter, and various isoperimetric properties \cite{CY75,Buser82,Bishop}. There has been much work in graphs and Markov chains studying the analogues of concepts that arise in Riemannian geometry, for example the Laplacian, isoperimetric constant and Cheeger inequalities \cite{Alon86,AC,Ch97}. These successes have motivated the problem of defining the discrete Ricci curvature. There have so far been several proposed definitions of discrete Ricci curvature \cite{S06,LV09,OV00,BE85,Oll07,EM12,CY16,Bauer2013,BS09}. It is generally unclear whether or not any of these notions of curvature are equivalent, and in some instances examples illustrate that they are not equivalent. It is preferable that a notion of discrete Ricci curvature would allow for similar results to those that hold for manifolds, such as relating global isoperimetric properties to the discrete curvature. We should also hope that it is relatively easy to compute the discrete curvature. In Riemannian geometry there are many results under the hypothesis of positive (or non-negative) curvature; if we can find similar results for graphs, we would like there to be large classes of interesting graphs that \emph{have} positive (or non-negative) curvature, and be able to make use of it in refining or strengthening various geometric and functional inequalities. As mentioned above, there have been many distinct definitions of the discrete Ricci curvature, each developed by taking a well-understood property of Ricci curvature in Riemannian manifolds and adapting it to the setting of graphs and Markov chains. In this work we will mainly focus on the Ollivier curvature, which is defined by the solutions to minimum transport problems between balls of small radius. The so-called Ollivier curvature was defined and developed significantly by Ollivier (although it was introduced earlier, independently by Sammer) \cite{Oll07,Sam05}. To motivate this definition, we first briefly discuss the relationship between optimal transport and curvature in manifolds. 
Let $M$ be a Riemannian manifold with points $x,y$ which are close enough to be connected via a unique distance minimizing geodesic $\gamma$ and let $v$ be a direction at $x.$ We denote by $w$ the direction at $y$ obtained by parallel transport of $v$ along $\gamma$ to the point $y$, using the manifold's connection. Now consider $B(x,r)$ and $B(y,r),$ the metric balls of small radius $r>0$ centered at $x$ and $y$, respectively. We can move $B(x,r)$ along a small distance $\alpha>0$ in the direction $v$ by moving each $z \in B(x,r)$ in the following way: transport $v$ from $x$ to $z$ along the distance minimizing geodesic from $x$ to $z$, and call this direction $v_z.$ Then move a distance $\alpha$ from $z$ in the direction $v_z,$ corresponding to a point $z'$ in the manifold. We can use the same procedure with the vector $w$ at $y$ to move each point in $B(y,r)$ distance $\alpha$ in the direction of $w.$ If the Ricci curvature is positive, then on average the distances between points in $B(x,r)$ and $B(y,r)$ will be greater than the distances between their counterparts under the parallel transport of these metric balls. On the other hand, if the curvature is negative, then on average the distances between points in $B(x,r)$ and $B(y,r)$ will be smaller than the distances between their counterparts under the parallel transport. Ollivier observed that the average distance can be replaced by the $L_1$-Wasserstein distance between uniform distributions on $B(x,r)$ and $B(y,r)$, and this metric is used in the definition of the so-called Ollivier curvature, which can be used to recover the manifold's Ricci curvature (up to a factor) \cite{Oll07}. Ollivier used this concept to help define the discrete Ricci curvature \cite{Oll07}. The metric balls $B(x,r)$ and $B(y,r)$ can also be defined on a graph where $r$ is a non-negative integer and $x$ and $y$ are vertices of the graph. Then the $L_1$-Wasserstein distance between the balls $B(x,r)$ and $B(y,r)$ determines a notion of curvature on the graph. While definitions of Ollivier curvature can be applied to any metric measure space, arguably its most fruitful use has been to define curvature in graphs with the graph distance and counting measure, for example \cite{BaJoLi11,BCLMP17,JoLi11}. That will also be our focus in this work. A well-known fact due to Bishop is that a Riemannian manifold with a lower bound on its Ricci curvature will have the volume growth of its metric balls controlled by this lower bound \cite{Bishop}. Under many notions of discrete curvature it is unclear whether such a volume growth bound exists. In this work we will present a volume growth bound that is interesting for regular graphs with a negative lower bound on Ollivier curvature. We will also briefly discuss the $CDE'$ curvature, which was introduced by Bauer et al.~\cite{Bauer2013}. The $CDE'$ inequality is a modification of the $CD$ inequality of Bakry-\'{E}mery, which is a discrete generalization of the Bochner formula from Riemannian geometry. Those authors demonstrated a version of the Li-Yau gradient estimate for graphs under the $CDE'$ curvature. This is a result that does not have any known analogue in the setting of Ollivier curvature. Volume growth estimates for Riemannian manifolds can also be applied to study the eigenvalues of the Laplace-Beltrami operator, denoted $\Delta_g,$ on the manifold. In fact, the relationship between the dimension, Ricci curvature, Cheeger constant, and spectrum of the Laplace-Beltrami operator on a closed Riemannian $n$-manifold has been well established.
To remain consistent with the notation of the Laplace eigenvalues on graphs, denote by $\lambda_2(M)$ the first nonzero eigenvalue of $\Delta_g$ on $M.$\footnote{In geometry, the convention is to index the least positive eigenvalue by 1. However, we adopt the convention used in graph theory throughout.} Cheeger first showed that $\lambda_2(M) \geq h^2(M)/4,$ independent of the curvature or volume growth of the manifold \cite{Ch70}. Buser then proved that if the Ricci curvature of $M$ is bounded below by $-(n-1)\delta^2$ with $\delta \geq 0,$ then \begin{equation*} \lambda_2(M) \leq 2\delta(n-1)h(M)+10h^2(M). \end{equation*} Buser's original proof of this inequality used work relating the volume growth to the lower bound on the Ricci curvature due to Bishop \cite{Bishop} and Heintze and Karcher \cite{HK}. More recently, Agol proved a quantitative improvement of the estimate \cite{Agol}. Soon after, the first author proved an analogue giving upper bounds on every eigenvalue of $\Delta_g$ using only the dimension, a lower bound on Ricci curvature, and the Cheeger constant; the same quantities used in Buser's original inequality \cite{B15}. In each of these results, the lower bound on the Ricci curvature is necessary to control the volume growth of the level sets of the distance functions from the optimal Cheeger splitting. These arguments are discussed in greater detail in Section \ref{sect:vg_lambda}. It should also be noted here that in \cite{Ledoux94}, Ledoux provided a simpler analytic proof of Buser's original result, and also followed up with a remarkable {\em dimension-free} improvement (see Theorem~5.2 in \cite{Ledoux04spectralgap}), where the constants are independent of the dimension of the manifold. A problem of particular interest for graphs is the relationship between the isoperimetric constants and the spectral gap ($\lambda_2$) of the Laplacian of the graph. The Cheeger and Buser inequalities have analogues for graphs. Such relationships are frequently referred to in the literature as Cheeger-type inequalities, and relate the algebraic and geometric expansion properties of the graph. For the isoperimetric constant $h_{out}$, defined using the outer vertex boundary (also known as {\it vertex expansion}, and reviewed in the next section), the Cheeger inequalities \cite{BHT} are \begin{align}\frac{\left (\sqrt{1+h_{out}}-1\right )^2}{2d}\leq \lambda_2 \leq h_{out}\end{align} for any $d$-regular graph. A long-standing problem of general interest is to determine the class of graphs for which the lower inequality is tight, that is, for which $\lambda_2 \approx h_{out}^2$. There is a previous proof of a discrete Buser's inequality, which states that under the condition of non-negative Ricci curvature (in the sense of the $CD$ inequality of Bakry-\'{E}mery), the lower Cheeger inequality is tight \cite{KKRT15}. The proof method relies on decomposing the indicator function of a candidate Cheeger-optimizing vertex set as a linear combination of eigenfunctions of the Laplacian, and analyzing the behavior of those functions under the heat flow operator $P_t$, which can be seen as the evolution of the random walk on the graph. This proof was recently extended to bound the higher eigenvalues of the Laplacian \cite{LMP}. \subsection{Summary of Results} We prove specific results bounding the spectrum using only volume growth.
To summarize, let $A$ be a subset of the vertex set of a graph $G.$ In Theorem \ref{theo:eigen2}, we prove that $\lambda_2(G)$ can be bounded from above via a weighted discrete Hardy inequality which depends only on bounds on the volume growth around $A.$ Such Hardy inequalities are well understood and we combine our work with results of Miclo \cite{Miclo} to give quantitative estimates on the first eigenvalues in terms of volume growth and $h_{out}(G),$ which are stated in Theorem \ref{thm:lambda_2}. We also prove in Theorem \ref{theo:eigenhigher} that the higher eigenvalues $\lambda_k(G)$, where $k\geq 2$, can be bounded above by the eigenvalues of matrices which depend only on volume growth bounds. As an application of the relationship between the spectrum and volume growth, we suggest an alternate proof method of Buser's inequality on graphs, which instead uses a bound on volume growth around a set achieving the optimal Cheeger constant. This approach is inspired by the original proof of Buser \cite{Buser82}, in the continuous setting of manifolds, as well as subsequent improvements by Agol \cite{Agol} and the first author \cite{B15}. In particular, we can extend the proof of our Buser-type inequality on graphs to bound the higher eigenvalues of the Laplacian. A similar result was demonstrated for manifolds in previous work of the first author. It is interesting to note that a bound on discrete curvature is only used in our methods to find a suitable volume growth function. If a bound on volume growth for a specific graph (or a family of graphs) exists under some other condition unrelated to curvature, our theorems immediately admit upper bounds on eigenvalues. In particular, we prove that any graph whose ``shells'' -- sets of vertices a fixed distance from a (Cheeger-optimal) isoperimetric cut-set -- have volume bounded from above by the volume of the cut-set satisfies \begin{align*}\lambda_2 \leq \frac{27}{2} h_{out}^2.\end{align*} Therefore, the lower Cheeger inequality is tight up to a multiplicative factor $c = c(d)$ depending only on the degree $d$ of a $d$-regular graph. This result appears in Example \ref{ex:constant}. In Example \ref{ex:HigherEigen}, we show that when the volume growth is bounded by a constant, higher eigenvalues can be bounded by higher Cheeger constants, specifically the higher Cheeger constant $h_{out}(n)$ arising from splitting the graph into $n$ subgraphs. Under the same volume growth assumptions, we have, for any positive integers $n$ and $k$, \begin{equation*} \lambda_k \leq k^2\left (\frac{27\pi^2}{16}+o(1)\right )h_{out}(n)^2. \end{equation*} \subsection*{Acknowledgments} The authors are indebted to the anonymous referees for a very careful reading of the original manuscript. Their many corrections and suggestions have helped remove errors and improved the presentation of the results. The first author would like to thank the School of Mathematics at the Georgia Institute of Technology for their support and hospitality during several visits to work on this project. The second and third authors were supported in part by the NSF grants DMS-1407657 and DMS-1811935. The second author was also supported in part by the AMO grant W911NF-14-1-0094. A portion of the research was conducted while the first author was a visiting assistant professor at Kansas State University and the second was a graduate student at Georgia Institute of Technology.
\section{Notation} A graph $G = (V,E)$ has a vertex set $V$ and an edge set $E$ that contains $2$-element subsets of $V$. A finite graph is one where $V$ is a finite set. If $\{x,y\}\in E$, we say that $x$ and $y$ are neighbors, denoted $x\sim y$. A common shorthand is that the edge $\{x,y\}$ may be denoted $xy$. The degree of a vertex $x$ is the number of neighbors of $x$. A locally finite graph is one where each vertex has a finite set of neighbors. For some integer $d>0$, a $d$-regular graph is one where each vertex has exactly $d$ neighbors. Clearly such a graph is also locally finite. A walk on $G$ is a sequence of vertices $v_0,v_1,\dots, v_n$ so that $v_{i-1}v_i$ is an edge for all $i = 1,\dots, n$. A graph is connected if every pair of vertices comprises the two ends of some walk. For the rest of this work, we will only consider connected graphs. Let $G$ be a $d$-regular graph. The adjacency operator $A$ on the space $\{f:V\to{\mathbb R}\}$ is defined by the equation \begin{align*}Af(x) = \tfrac{1}{d}\sum_{y:x\sim y}f(y),\end{align*} and the Laplacian operator $\Delta$ on the same space is \begin{align*}\Delta f(x) = \tfrac{1}{d}\sum_{y:y\sim x}\parens{f(x) -f(y)}.\end{align*} In other words, one has $\Delta = I- A,$ where $I$ is the identity operator satisfying $If =f.$ (In other parts of the literature, these operators are sometimes referred to as the \emph{normalized} adjacency operator and \emph{normalized} Laplacian.) Observe that for a finite graph $\Delta$ is a symmetric and positive semi-definite matrix; as such, the eigenvalues of $\Delta$ are all real and non-negative. By convention we write the eigenvalues of $\Delta$ (counting multiplicities) as $\lambda_1(\Delta),\lambda_2(\Delta),\dots$ with $\lambda_1(\Delta)\leq \lambda_2(\Delta) \leq \dots$. It is well-known that $\lambda_1(\Delta)$ is achieved by the eigenfunction $f\equiv 1$ with $\lambda_1 = 0$. The spectral gap of $G$ is the difference between the two least eigenvalues of $\Delta,$ which is $\lambda_2(\Delta)$ since $\lambda_1(\Delta)=0.$ Often we write these values as $\lambda_1(G),\lambda_2(G),\dots,$ even suppressing the graph $G$ when clear. Let $G$ be a $d$-regular, finite graph. For a vertex subset $A\subset V$, define the edge boundary $\partial A$ to be $\set{\{x,y\}\in E: x\in A; y\notin A}.$ The (Cheeger) edge isoperimetric constant is defined as $ h(G) = \min_{A}\frac{|\partial A|}{d|A|}\,,$ where the minimization is over all sets $A$ with $0<|A|\leq \tfrac{|V|}{2}$. Cheeger-type inequalities relate edge and vertex isoperimetric constants to the spectral gap of the Laplacian of the graph. In particular, classical results (e.g., \cite{AM, AC,Tan84}, to cite just a few) show that $\frac{h^2}{2}\leq \lambda_2 \leq 2h\,.$ In addition to the edge boundary of a set $A\subset V$, one can define two different vertex boundaries: The inner vertex boundary is $\partial_{in}A = \{x\in A:\exists y\sim x; y\notin A\}$, and the outer vertex boundary is $\partial_{out}A = \{y\notin A:\exists x\sim y; x\in A\}$. Following \cite{BHT}, one has the (Cheeger) vertex isoperimetric constants using the vertex boundaries: \[ h_{in}(G) = \min \frac{|\partial_{in}A|}{|A|}\ \ \mbox{ and } \ \ h_{out}(G) = \min \frac{|\partial_{out}A|}{|A|}\,. \] In all cases, the minimization is over non-empty vertex sets with $|A|\leq \tfrac{1}{2}|V(G)|$. Observe the trivial bounds $h(G)\leq h_{in}(G)\leq d\cdot h(G)$ and $h(G)\leq h_{out}(G)\leq d\cdot h(G),$ so bounds on the vertex constants imply bounds on the edge constants and vice versa.
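To fix ideas, all of the quantities above can be computed directly for small graphs. The following Python sketch is only illustrative (the helper names are ours, and the Cheeger constants are computed by exhaustive search over vertex subsets, so the sketch is feasible only for very small graphs): it builds the normalized Laplacian of a $d$-regular graph, computes $\lambda_2$, evaluates $h$, $h_{in}$ and $h_{out}$, and spot-checks the trivial bounds just stated on an $8$-cycle.
\begin{verbatim}
import numpy as np
from itertools import combinations

def normalized_laplacian(adj):
    """Delta = I - A/d for a d-regular graph given by its 0/1 adjacency matrix."""
    d = int(adj[0].sum())                    # regularity is assumed, not checked
    return np.eye(adj.shape[0]) - adj / d

def spectral_gap(adj):
    """lambda_2: the second-smallest eigenvalue of the normalized Laplacian."""
    return np.sort(np.linalg.eigvalsh(normalized_laplacian(adj)))[1]

def cheeger_constants(adj):
    """(h, h_in, h_out), by exhaustive search over vertex sets (small graphs only)."""
    n = adj.shape[0]
    d = int(adj[0].sum())
    h = h_in = h_out = float("inf")
    for k in range(1, n // 2 + 1):
        for A in combinations(range(n), k):
            A = set(A)
            edge_bd = sum(adj[x, y] for x in A for y in range(n) if y not in A)
            inner_bd = sum(1 for x in A
                           if any(adj[x, y] for y in range(n) if y not in A))
            outer_bd = sum(1 for y in range(n) if y not in A
                           and any(adj[x, y] for x in A))
            h = min(h, edge_bd / (d * len(A)))
            h_in = min(h_in, inner_bd / len(A))
            h_out = min(h_out, outer_bd / len(A))
    return h, h_in, h_out

# Example: the 8-cycle, a 2-regular graph.
n, d = 8, 2
adj = np.zeros((n, n))
for i in range(n):
    adj[i, (i + 1) % n] = adj[(i + 1) % n, i] = 1.0

lam2 = spectral_gap(adj)
h, h_in, h_out = cheeger_constants(adj)
print(lam2, h, h_in, h_out)
print(h <= h_in <= d * h and h <= h_out <= d * h)   # the trivial bounds above
\end{verbatim}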
There are also a pair of Cheeger-type inequalities for each of these isoperimetric constants \cite{BHT,Alon86}; in particular, for the outer vertex boundary, the inequalities are: $$\frac{\left (\sqrt{1+h_{out}}-1\right )^2}{2d^2}\leq \lambda_2 \leq \frac{h_{out}}{d}\,,$$ where the additional factors of $d$ in the denominators as compared to the edge Cheeger inequalities arise from the need to normalize $h_{out}$. We now define the Ollivier curvature, which relies on concepts of optimal or minimum transport. Let $X$ be a metric space with metric $\mathrm{dist}(\cdot,\cdot)$, and let $\mu,\nu$ be two probability measures on $X$. The $L_1$ Wasserstein (also known as minimum-transport or earth-mover) distance \cite{AGS11a} is \begin{align*}W_1(\mu,\nu) = \inf_m \int_{X\times X} \mathrm{dist}(x,y)\, dm(x,y), \end{align*} where the infimum is taken over all joint distributions (couplings) $m$ on $X\times X$ with left marginal $\mu$ and right marginal $\nu$. Qualitatively, we wish to transport the distribution $\mu$ to $\nu$. Here $m$ is a movement plan that moves probability mass $m(x,y)$ from $x$ to $y$, and we choose $m$ to minimize the average distance moved by the mass. There is a well-known dual to the minimization problem \cite{AG11}: \begin{align} W_1(\mu,\nu) = \sup_{f\in \text{Lip}(1)} \int_X f \,d\nu - \int_X f\,d\mu,\end{align} where $\text{Lip}(1)$ is the space of functions with Lipschitz constant equal to one. A maximizing function for this equation is sometimes known as a \emph{Kantorovich potential}. Observe that if the probability measures $\mu_x$ and $\mu_y$ (on $X$) have finite support, both the primal and dual characterizations of $W_1(\mu_x,\mu_y)$ are linear programs on a finite set of variables. All the probability distributions we will consider in our discussion of Ollivier curvature will be of this type. For these distributions we will use the notation of finite sums indexed by vertices rather than integrals over the measure space of vertices. Let $G$ be a locally finite connected graph and $x\in V(G)$ a vertex with degree $d_x$. Define a probability measure $\mu_x$ on $V$ so that \begin{align*}\mu_x(v) = \begin{cases} \tfrac{1}{2} & \text{if }v = x \\ \frac{1}{2d_x} & \text{if }v \sim x \\ 0 & \text{otherwise.} \end{cases}\end{align*} Here, think of taking one step of a random walk starting at $x$ and with laziness parameter $1/2$. \begin{definition} If $x,y\in V$, the Ollivier curvature is \begin{align}\kappa(x,y) = 1-\frac{W_1(\mu_x,\mu_y)}{d(x,y)}.\end{align} \end{definition} The choice of laziness parameter is to some extent not important: suppose we vary the value $p = \mu_x(x) = \mu_y(y)$. When $p\geq \max \parens{\frac{1}{d_x+1}, \frac{1}{d_y+1}},$ the optimal transport plans and the value $\kappa(x,y)$ vary linearly with $1-p$ \cite{BCLMP17}. For later sections, we need some basic and well-known facts about Ollivier curvature which we now briefly review. \begin{theorem}[Neighbors minimizing curvature (Y. Ollivier, \cite{Oll07})]\label{thm:oll_minim} Suppose that $\kappa(u,v)\geq k$ whenever $u,v\in V$ are neighboring vertices. Then, also for any $x,y\in V$ (not necessarily neighbors), we have $\kappa(x,y)\geq k.$\end{theorem} In other words, it is equivalent to say that $k$ is a global lower bound on curvature and that $k$ is a lower bound on the curvature between each pair of neighbors. We give a quick proof due to Ollivier \cite{Oll07}.
\begin{proof}Observe that if $u\sim v$, then $W_1(\mu_u,\mu_v) = 1 - \kappa(u,v) \leq 1 - k.$ Let $x = x_0, x_1, \ldots, x_l = y$ be a geodesic path in $G$. Because $W_1$ is a metric, it follows that $$W_1(\mu_x,\mu_y) \leq \sum_{i=1}^l W_1(\mu_{x_{i-1}},\mu_{x_i}) \leq (1-k)d(x,y),$$ and $\kappa(x,y) \geq 1 - \frac{(1-k)d(x,y)}{d(x,y)} = k.$ \end{proof} Ollivier also provided a result for estimating curvature on product graphs. Later, we will use the following result to apply our techniques to the discrete hypercube. In our notation, the graph product $G\square H$ has vertex set $V(G)\times V(H)$, and edges $(x,y)\sim (w,z)$ if $(x,y)$ and $(w,z)$ are adjacent in one component and identical in the other. \begin{theorem}[Ollivier curvature tensorization (Y. Ollivier, \cite{Oll07})]\label{thm:oll_tens} Let $G$ be a $d$-regular graph, and denote $G\square G\square \cdots \square G$ with $r$ terms in the product by $G^r$. Suppose that for every $x,y\in V(G)$, it holds that $\kappa(x,y)\geq k.$ Then for every $x',y'\in V(G^r),$ we have that $\kappa(x',y')\geq \frac{k}{r}.$ \end{theorem} Again, we provide a short proof from Ollivier's original work \cite{Oll07}. \begin{proof} Let $x$ and $y$ be neighbors in $G^r$. By Theorem \ref{thm:oll_minim}, it suffices to show $\kappa(x,y)\geq \frac{k}{r}$. Without loss of generality we may assume $x = (x_1,x_2,\dots, x_r)$ and $y = (y_1,x_2,\dots, x_r)$. Let $f_1$ be the Kantorovich potential satisfying \[\sum_v f_1(v) \mu_{y_1}(v) - \sum_v f_1(v) \mu_{x_1}(v) = 1-\kappa(x_1,y_1) \leq 1-k.\] Define $f(z_1,\dots, z_r) = f_1(z_1)$; then we see that \begin{align*}&\sum_w f(w) \mu_y(w) - \sum_w f(w) \mu_x(w) \\ = &\frac{1}{r}\parens{\sum_v f_1(v) \mu_{y_1}(v) - \sum_v f_1(v) \mu_{x_1}(v)} + \frac{r-1}{r}\parens{f_1(y_1)-f_1(x_1)} \\ \leq &\frac{1}{r}(1-k) + \frac{r-1}{r} = 1 - \frac{k}{r}.\end{align*} In other words, we have $\kappa(x,y)\geq \frac{k}{r}$. \end{proof} \section{Volume Growth and Spectral Gap in Manifolds}\label{sect:vg_lambda} In this section we will outline the proof of Buser-type results on manifolds, particularly following the work of Buser \cite{Buser82}, of Agol \cite{Agol}, and of the first author \cite{B15}. In the following sections we will develop analogous methods to bound the spectral gap and higher eigenvalues in graphs. Let $M$ be an $n$-dimensional manifold and let $A$ and $B$ be a Cheeger-minimizing partition of $M$, so that their common boundary $\Sigma = \partial A = \partial B$ satisfies \begin{align*}h(M) = \frac{\mathrm{Vol}(\Sigma)}{\min\parens{\mathrm{Vol}(A),\mathrm{Vol}(B)}}.\end{align*} The minimax principle tells us that $\lambda_2(M)\leq \max\set{\lambda_1(A),\lambda_1(B)}$ where eigenfunctions $f$ of $A$ (similarly $B$) corresponding to eigenvalue $\mu$ satisfy the Dirichlet boundary conditions \begin{align*}\begin{cases} \Delta f = \mu f \text{ on }A \\ f(\Sigma) = 0. \end{cases}\end{align*} Here, $\lambda_1(A)$ (similarly $\lambda_1(B)$) is the least non-zero value $\mu$ for which an eigenfunction exists. Without loss of generality assume that $\lambda_1(A)\geq \lambda_1(B)$. The Rayleigh principle tells us that $\lambda_1(A)$ of a manifold is achieved by minimizing the Rayleigh quotient $\int_A\abs{\nabla f}^2/\int_A f^2$ over functions satisfying the boundary condition $f(\Sigma) = 0$. For more details, see \cite{HJ, Chavel, Lablee}.
Buser's idea is to use a test-function for this Rayleigh quotient that depends linearly on the distance from $\Sigma,$ which we denote $\mathrm{dist}_{\Sigma}(p).$ Specifically, one constructs \begin{align*}f(p) = \begin{cases} \mathrm{dist}_{\Sigma}(p) &\text{ if } \mathrm{dist}_{\Sigma}(p) \leq t\\ t &\text{ if } \mathrm{dist}_{\Sigma}(p) \geq t.\end{cases}\end{align*} Define $A(t) = \{p\in A: \mathrm{dist}_{\Sigma}(p)\leq t\}$. Buser observes that \begin{align*}&\int_A \abs{\nabla f}^2 \leq \int_{A(t)}1 = \mathrm{Vol}(A(t))\text{ and}\\ &\int_A f^2 \geq \int_{A-A(t)}t^2 = t^2\parens{\mathrm{Vol}(A)-\mathrm{Vol}(A(t))}. \end{align*} Now, for any $t> 0$ satisfying $\mathrm{Vol}(A)> \mathrm{Vol}(A(t))$, one sees that \begin{align*} \lambda_2(M)\leq \frac{\mathrm{Vol}(A(t))}{t^2\parens{\mathrm{Vol}(A)-\mathrm{Vol}(A(t))}}\,. \end{align*} What remains is to bound $\mathrm{Vol}(A(t))$. In this step, Buser uses a global lower bound on Ricci curvature and the crucial assumption that $\Sigma$ is a Cheeger-optimal cut-set. Suppose $N$ is a compact hypersurface (codimension-1 submanifold) of $M$. Further, assume that the planes of $M$ containing a tangent vector of a geodesic segment which minimizes the distance to $N$ have sectional curvatures bounded below by $\delta$. A consequence of the Heintze-Karcher comparison theorem \cite{HK} is the following volume growth bound: There exists $\nu_{\delta} \in C^{\infty}[0,\infty)$ such that for all $\tau\geq 0$, we have $$\mathrm{Vol}_{n-1} \big ( \mathrm{dist}_N^{-1}(\tau)\big )\leq \mathrm{Vol}_{n-1}(N)\nu_{\delta}(\tau).$$ Applying this with $N=\Sigma$ gives the volume growth bound $\mathrm{Vol}(A(t)) \leq \int_0^t \nu(s)\mathrm{Vol}(\Sigma) \, ds$ (when clear we will suppress the $\delta$ in $\nu_{\delta}$). Specifically, Buser finds the bound \begin{align*}\lambda_2(M) &\leq\frac{\int_0^t \nu(s)\mathrm{Vol}(\Sigma)ds}{t^2 \mathrm{Vol}(A) - t^2\int_0^t \nu(s)\mathrm{Vol}(\Sigma)ds}\\ &\leq \frac{h\int_0^t\nu(s)ds}{t^2(1-h\int_0^t\nu(s)ds)} \end{align*} for $\lambda_2(M)$ in terms of the curvature (again, because the volume growth function $\nu$ depends on curvature), the Cheeger cut-set $A$ and boundary $\Sigma$. We will not reproduce the remainder of Buser's proof \cite{Buser82}, which is somewhat technical, except to state the result: \begin{theorem}[Buser's inequality (P. Buser, 1982)] If $M$ is an $n$-dimensional manifold with $-(\delta^2)(n-1)$ as a lower bound on curvature (for some $\delta\geq 0$), then \begin{align*}\lambda_2(M)\leq c(\delta h(M) + h(M)^2), \end{align*} where $c$ is a universal constant.\end{theorem} More recently, Agol observed that the constant in Buser's proof can be improved by optimizing over \emph{all} possible test-functions that depend on the distance from $\Sigma$, not just those that grow linearly up to some critical distance $t$ \cite{Agol}. While reformulating Agol's result using Sturm-Liouville theory, the first author showed that the method can be extended to give bounds on the higher eigenvalues \cite{B15}.
One begins with the observation that \begin{equation}\label{eq:Courant} \lambda_{2k}(M)\leq \max\parens{\lambda_k(A),\lambda_k(B)}, \end{equation} where $\lambda_1(M),\lambda_2(M),\dots$ are the eigenvalues of $M$ in increasing order and $A$ and $B$ have the properties that \begin{itemize} \item $B=A^{\complement},$ \item $\mathrm{Vol}(A) \leq \mathrm{Vol}(B),$ \item $A \cap B=\partial A=\partial B =: \Sigma,$ \item $h(M)=\frac{\mathrm{Vol}(\Sigma)}{\mathrm{Vol}(A)}.$ \end{itemize} We denote by $D$ the set $A$ or $B$ that achieves the maximum in Equation \ref{eq:Courant}. Here, the Rayleigh quotient is $$\lambda_k(D) =\inf_U \sup_{f \in U} \frac{\int_D \|\nabla(f)\|^2 \, d\mathrm{Vol}}{\int_D f^2 \, d\mathrm{Vol}},$$ where $U$ is the set of $k$-dimensional subspaces of the Sobolev space $H_0^1(D)$ on which $f(\Sigma) = 0$. Limiting to only those functions $f$ that depend on the distance from $\Sigma$, the co-area formula implies that \begin{align}\label{eqn:rayleigh2}\lambda_k (D) \leq \inf_V \sup_{f \in V} \frac{\int_0^{\infty}f'(s)^2\mathrm{Vol}\big (\mathrm{dist}_{\Sigma}^{-1}(s) \big )\, ds}{\int_0^{\infty} f(s)^2 \mathrm{Vol}\big (\mathrm{dist}_{\Sigma}^{-1}(s)\big )\, ds},\end{align} where $\mathrm{dist}^{-1}_{\Sigma}(s)$ is the set $\{p\in D: \mathrm{dist}(p,\Sigma) = s\}$ and $V$ is the set of all $k$-dimensional subspaces of $H_0^1[0,\infty)$ on which $f(0) = 0$. Given a Heintze-Karcher-type growth bound $\mathrm{Vol}(\mathrm{dist}_{\Sigma}^{-1}(s))\leq \nu(s)\mathrm{Vol}(\Sigma)$, observe that \begin{align*} \mathrm{Vol}(A(s)) = \int_0^s \mathrm{Vol}\left (\mathrm{dist}_{\Sigma}^{-1}(\tau)\right )\, d\tau \leq \mathrm{Vol}(\Sigma)\int_0^s \nu(\tau)d\tau. \end{align*} Because $\Sigma$ is the Cheeger-achieving boundary and $\mathrm{dist}_{\Sigma}^{-1}(s)$ is the boundary for some other non-Cheeger-achieving partition of $M$, we have \begin{equation*}h(M) = \frac{\mathrm{Vol}(\Sigma)}{\min\parens{\mathrm{Vol}(A),\mathrm{Vol}(B)}}\end{equation*} and also \begin{equation*} h(M)\leq \frac{\mathrm{Vol}(\mathrm{dist}_{\Sigma}^{-1}(s))}{\min \parens{\mathrm{Vol}(A\setminus A(s)),\mathrm{Vol}(B\cup A(s))}}. \end{equation*} In the case that $\mathrm{Vol}(B\cup A(s))\geq \mathrm{Vol}(A\setminus A(s))$, we have \begin{equation*} h(M)\leq \frac{\mathrm{Vol}(\mathrm{dist}_{\Sigma}^{-1}(s))}{\mathrm{Vol}(A\setminus A(s))}, \end{equation*} and so \begin{align*} \mathrm{Vol}(\mathrm{dist}_{\Sigma}^{-1}(s))&\geq h(M)\mathrm{Vol}(A) - h(M)\mathrm{Vol}(A(s)) \\ &\geq \mathrm{Vol}(\Sigma) - h(M)\mathrm{Vol}(\Sigma)\int_0^s\nu(\tau)d\tau\\ &=\mathrm{Vol}(\Sigma)\parens{1-h(M)\int_0^s\nu(\tau)d\tau}. \end{align*} In the other case, we have that $\mathrm{Vol}(B\cup A(s))\leq \mathrm{Vol}(A\setminus A(s))$. As such, it follows that $\mathrm{Vol}(B)\leq \mathrm{Vol}(A)$; in other words, the set $B$ is the Cheeger minimizing set. We find that \begin{align*}\mathrm{Vol}(\mathrm{dist}_{\Sigma}^{-1}(s))\geq h(M)\mathrm{Vol}(B\cup A(s))\geq h(M)\mathrm{Vol}(B) = \mathrm{Vol}(\Sigma),\end{align*} with the last equality following from the definition of $\Sigma$. Combining both cases, the first author achieves the lower bound \begin{align*}\mathrm{Vol}(\mathrm{dist}_{\Sigma}^{-1}(s))\geq \mathrm{Vol}(\Sigma)\parens{1-h(M)\int_0^s\nu(\tau)d\tau}.\end{align*} Observe that this bound is only meaningful for values of $s$ where \begin{align*}h(M)\int_0^s \nu(\tau)d\tau\leq 1.\end{align*} Because the parameter $s$ is continuous, such values of $s$ are always available.
This is one of several ways in which the discrete formulation on graphs presents a challenge which does not appear in the related continuous result on Riemannian manifolds. Define now $T$ to be the value for which $h(M)\int_0^T \nu(\tau)d\tau = 1.$ With both an upper and lower bound for $\mathrm{Vol}(\mathrm{dist}_{\Sigma}^{-1}(s))$, it is possible to plug those bounds into Equation \ref{eqn:rayleigh2}, truncating the integrals at $T$, to obtain the bound \begin{align}\label{eqn:rayleigh3}\lambda_k(D) \leq \inf_{W} \sup_{f\in W} \frac{\int_0^T f'(s)^2\nu(s)\, ds} {\int_0^T f(s)^2\left (1-h\int_0^{s} \nu(\tau)\, d\tau \right )\, ds},\end{align} where $W$ is the set of $k$-dimensional subspaces of $H_0^1[0,T]$ in which $f(0) = 0$. What remains is the technical problem of finding the function $f$ that minimizes the Rayleigh quotient in Equation \ref{eqn:rayleigh3}. Such a function is an eigenfunction of a Sturm-Liouville problem, which leads to an eigenvalue comparison. Specifically, the eigenvalues of the Laplacian on the manifold are bounded above by the eigenvalues of a Sturm-Liouville (ODE) problem which depends only on the same data as in Buser's original inequality; namely the Cheeger constant, dimension, and Ricci curvature lower bound. For more details, see \cite{B15}. We will see in Section \ref{sect:Spectrum} that in the discrete case, higher eigenvalues of the graph $\lambda_k(G)$ can be bounded by the eigenvalues of a tridiagonal matrix times a multiplicative factor. The entries of the matrix only depend on the bounds on volume growth, which can be given in terms of several notions of the graph's curvature. Further, the multiplicative factor can be interpreted using the upper bound on the volume growth of the graph and the outer vertex Cheeger constant or its analogues corresponding to splitting the graph into more than two subgraphs. \section{The Relationship Between Volume Growth and Curvature}\label{sect:vg_cde} In Section \ref{sect:Spectrum}, we develop the relationship between the spectrum of the Laplace operator on a graph and the volume growth of subsets of the graph. Our goal is to also develop the connection between the spectrum, notions of curvature, and the Cheeger constant of the graph in the form of Buser-type inequalities. To allow us to make these connections in Section \ref{sect:Spectrum}, in this section we discuss volume growth in graphs under several notions of a curvature lower bound. \subsection{Bounds under $CDE'$ curvature} We first present a volume growth bound under the so-called $CDE'$ inequality. This notion of discrete curvature is a variant of the $CD$ inequality, which was introduced by Bakry-\'Emery in \cite{BE85}. The $CDE'$ inequality was introduced by Bauer et al.~\cite{Bauer2013}. We present only the definition of $CDE'(N,K)$ here; for a full discussion the reader can consult the paper of Bauer et al. Let $f,g:V(G)\to {\mathbb R}$ be functions and $x\in V(G).$ The field-squared operator $\Gamma(f,g)(x)$ is defined by \begin{align*} \Gamma(f,g)(x) = \frac{1}{2d_x}\sum_{y:y\sim x}\parens{f(x)-f(y)}\parens{g(x)-g(y)}. \end{align*} A graph $G$ is said to satisfy the $CDE'(N,K)$ inequality at $x$ if for every function $f:V(G)\to {\mathbb R}^+$, \begin{align*}f^2 \parens{\tfrac{1}{2}\Delta \Gamma(\log f,\log f) - \Gamma(\log f, \Delta \log f)}\geq \frac{1}{N}\parens{\Delta \log f}^2 + K\Gamma(f,f).\end{align*} In this case, we say that $K$ is a lower bound on the $CDE'$ curvature of $G$ at $x$ with dimension $N$.
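Since $\Gamma$ and $\Delta$ are defined by finite sums over neighbors, the displayed inequality can at least be evaluated numerically for any particular positive test function $f$: a negative value at some vertex certifies that the inequality fails there for the chosen parameters, while nonnegative values for a single $f$ are of course only a necessary condition. The following Python sketch is only illustrative (the function names are ours); it implements the operators exactly as displayed above.
\begin{verbatim}
import numpy as np

def laplacian(adj, f):
    """(Delta f)(x) = (1/d_x) * sum_{y ~ x} (f(x) - f(y))."""
    deg = adj.sum(axis=1)
    return (deg * f - adj @ f) / deg

def gamma(adj, f, g):
    """Gamma(f, g)(x) = (1/(2 d_x)) * sum_{y ~ x} (f(x)-f(y)) * (g(x)-g(y))."""
    deg = adj.sum(axis=1)
    out = np.zeros(len(f))
    for x in range(len(f)):
        nbrs = np.nonzero(adj[x])[0]
        out[x] = np.sum((f[x] - f[nbrs]) * (g[x] - g[nbrs])) / (2 * deg[x])
    return out

def cde_prime_gap(adj, f, N, K):
    """Pointwise difference (left side minus right side) of the displayed
    CDE'(N, K) inequality for one positive test function f."""
    lf = np.log(f)
    lhs = f**2 * (0.5 * laplacian(adj, gamma(adj, lf, lf))
                  - gamma(adj, lf, laplacian(adj, lf)))
    rhs = (1.0 / N) * laplacian(adj, lf)**2 + K * gamma(adj, f, f)
    return lhs - rhs

# Example: one random positive f on the 4-cycle.  A negative entry would
# certify that CDE'(N, K) fails at that vertex for these parameters; a
# nonnegative vector for a single f is only a necessary condition.
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], dtype=float)
f = np.random.default_rng(0).uniform(0.5, 2.0, size=4)
print(cde_prime_gap(adj, f, N=4.0, K=0.0))
\end{verbatim}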
In a follow-up work \cite{HLLY15}, a volume growth bound was discovered under a lower bound on $CDE'$ curvature: \begin{theorem}(Horn, Lin, Liu \& Yau \cite{HLLY15})\label{thm:vg_HLLY} Let $G$ be a locally finite graph satisfying $CDE'(n,0)$. Then there exists a constant $C$ depending on $n$ such that for all $x \in V$ and any integers $r,s$ with $r \geq s \geq 1$: \begin{equation}\label{eq:HLLYVol} |\mathrm{dist}^{-1}_x(r)| \leq C \left ( \frac{r}{s}\right )^{\frac{\log(C)}{\log(2)}}|\mathrm{dist}^{-1}_x(s)|. \end{equation} \end{theorem} Note that a similar volume growth bound (albeit with a multiplicative factor $\sqrt{d}$ in the base of the exponent) is implicit in the recent article \cite{Munch19} under the $CD(n,0)$ inequality, and a similar corollary can be obtained by the following method. We use this bound on ball volumes to prove the following bound on shell volumes: \begin{corollary}Let $G$ be a graph satisfying $CDE'(n,0)$ at all vertices $x\in V(G)$. Let $\Sigma \subset V$, let $C = C(G)$ be the constant from Theorem \ref{thm:vg_HLLY}, and let $r\geq 2$ be an integer. Then $$|\mathrm{dist}^{-1}_{\Sigma}(r)| \leq d|\Sigma| C(r-1)^{\frac{\log(C)}{\log (2)}} \left [C \left ( \frac{r}{r-1}\right )^{\frac{\log(C)}{\log(2)}}-1 \right ].$$ \end{corollary} \begin{proof}Letting $s=r-1$ in Equation \ref{eq:HLLYVol}, the estimate becomes $$|\mathrm{dist}^{-1}_x(r)| \leq C \left ( \frac{r}{r-1} \right )^{\frac{\log(C)}{\log(2)}}|\mathrm{dist}^{-1}_x(r-1)|.$$ Since we are interested in counting vertices with distance exactly $r$ from $x$, we wish to consider $|\mathrm{dist}^{-1}_x(r)|-|\mathrm{dist}^{-1}_x(r-1)|$, so we subtract $|\mathrm{dist}^{-1}_x(r-1)|$ from both sides of the previous inequality to give $$|\mathrm{dist}^{-1}_x(r)|-|\mathrm{dist}^{-1}_x(r-1)| \leq \left [ C \left (\frac{r}{r-1} \right )^{\frac{\log(C)}{\log(2)}} -1 \right]|\mathrm{dist}^{-1}_x(r-1)|.$$ In fact, we want to consider the set of vertices with distance exactly $r$ from $\Sigma$. We can sum over all $x \in \Sigma$ on both sides of the previous equation to give $$|\mathrm{dist}^{-1}_{\Sigma}(r)|\leq \sum_{x \in \Sigma} |\mathrm{dist}^{-1}_x(r)|-|\mathrm{dist}^{-1}_x(r-1)| \leq \sum_{x \in \Sigma} \left [ C \left (\frac{r}{r-1} \right )^{\frac{\log(C)}{\log(2)}} -1 \right]|\mathrm{dist}^{-1}_x(r-1)|.$$ Simplifying, we find $$|\mathrm{dist}^{-1}_{\Sigma}(r)| \leq |\Sigma| \left [C \left ( \frac{r}{r-1}\right )^{\frac{\log(C)}{\log(2)}}-1 \right ] \max_{x \in \Sigma} |\mathrm{dist}^{-1}_{x}(r-1)|.$$ Now we wish to estimate the term $\max_{x \in \Sigma} |\mathrm{dist}^{-1}_{x}(r-1)|$ and we will again apply Equation \ref{eq:HLLYVol}, this time replacing $r$ with $r-1$ in the formula and taking $s=1$. As a result, our estimate becomes $$|\mathrm{dist}^{-1}_{\Sigma}(r)| \leq d|\Sigma| C(r-1)^{\frac{\log(C)}{\log (2)}} \left [C \left ( \frac{r}{r-1}\right )^{\frac{\log(C)}{\log(2)}}-1 \right ],$$ observing that $|\mathrm{dist}^{-1}_x(1)| = d$.\end{proof} \subsection{Bounds under Ollivier curvature} Next, we will find upper bounds on the shell volume $|\mathrm{dist}^{-1}_x(i)|$ in terms of a lower bound on Ollivier curvature. It is simple to convert such bounds into bounds on the ball volume (analogous to the Bishop-Gromov Volume Comparison Theorem \cite{Bishop}) with the equation $$|B_x(r)| = \displaystyle \sum_{i=0}^r |\mathrm{dist}^{-1}_x(i)|.$$ In this area, there is a previous result due to Paeng \cite{Paeng12}. \begin{theorem}[Paeng \cite{Paeng12}] Let $G$ be a graph with maximum degree $D$.
Let $r$ be an integer with $0\leq r\leq \mathrm{diam}(G)$. Assume that $\kappa(x,y) \geq k$ for all $x,y\in V$. Then \begin{align}|\mathrm{dist}^{-1}_x(r)| \leq D^r \prod_{m=0}^{r-1} \left ( 1-\frac{k}{2}m \right )\,.\end{align} \end{theorem} These bounds are only useful in the case that $k > 0$: if we set $k=0$ above, we see only the trivial result that $|\mathrm{dist}^{-1}_x(r)|\leq D^r$. In the case $k > 0$, we see that $|\mathrm{dist}_x^{-1}(\lceil 2/k + 1\rceil)|\leq 0$; that is, $\mathrm{diam}(G)\leq \lceil 2/k \rceil$. Because $G$ is finite, $G$ has polynomial volume growth with $|\mathrm{dist}^{-1}_x(r)|\leq |V(G)|r^0$ (depending on $G$, a much tighter bound may be possible). We develop results that are useful in the case that $G$ has a negative lower bound on curvature. We find that such graphs do not necessarily have polynomial volume growth. {\it We remark here that it remains an open question whether or not a bound of $\kappa(x,y) \geq 0$, for all $x,y\in V$, implies polynomial volume growth.} \begin{theorem}\label{thm:vg1}Let $G$ be a $d$-regular graph with $\kappa(v_1,v_2)\geq k$, for every pair of vertices $v_1,v_2$. Fix $x\in V$ and define $S_i = \mathrm{dist}^{-1}_x(i)$. Then for $i\geq 1$, $$|S_{i+1}|\leq \frac{d+1-2dk}{2}|S_i|.$$\end{theorem} \begin{proof} First, we bound $e(S_i,S_{i+1})$, the number of edges between $S_i$ and $S_{i+1}$. Let $z\in S_i$; then $z$ is adjacent to some vertex $y(z)\in S_{i-1}$. (If $z$ is adjacent to multiple vertices in $S_{i-1}$, choose $y(z)$ arbitrarily from them.) Let $T(z)$ be the set of common neighbors of $z$ and $y(z)$. Neither $y$ nor a neighbor of $y$ can be in $S_{i+1}$, so $e(z,S_{i+1})\leq d-1-|T(z)|$. (Note that we frequently suppress $y(z)$ to $y$.) Let $T^* = \sum_{z\in S_i} |T(z)|$, so $e(S_i,S_{i+1})\leq (d-1)|S_i| - T^*.$ Next, for each $z$ we wish to use the Kantorovich characterization of $W_1(\mu_y,\mu_z)$. Define the following test-function $f$:\begin{itemize} \item $f(y) = 0$. \item $f(z) = 1$. \item $f|_{T(z)} = 0$. \item For any other neighbor $v$ of $y$, $f(v) = -1.$ \item Let $W(z)$ be the set of neighbors of $z$ (besides $y$) that are not in $T(z)$ and are adjacent to a neighbor of $y$ (besides $z$) that is not in $T(z)$. We may set $f|_{W(z)} = 0$. \item Let $U(z) = N(z) \setminus\parens{ \{y\}\cup T(z)\cup W(z)}$, set $f|_{U(z)} = 1$. Here we use $N(z)$ to denote the set of neighbors of $z$. \item $f$ can be made $1$-Lipschitz by setting $f = 0$ on every other vertex. \end{itemize} We have: \[\sum_x f(x)\mu_z(x) = 1 + \frac{|U(z)|-d}{2d} \ \mbox{ and } \sum_x f(x)\mu_y(x) = \frac{|T(z)|+2-d}{2d}.\] Combining the two, we get \[(1-k)\geq \sum_x f(x)(\mu_z(x)-\mu_y(x)) \geq 1 + \frac{|U(z)|-2-|T(z)|}{2d}\,,\] and rearranging gives \[|T(z)|+2-2dk \geq |U(z)|\,.\] If a neighbor of $z$ is not in $U(z)$, the neighbor must be either $y$, adjacent to $y$ (and thus in $T(z)$), or adjacent to more than one neighbor of $y$, and hence in $W(z)$.\\ Any vertex in $S_{i+1}$ for which $z$ is the only neighbor in $S_i$ must be in $U(z)$. The total number $U^*$ of vertices in $S_{i+1}$ that are adjacent to only one vertex in $S_i$ satisfies \begin{align*}U^*\leq \sum_z |U(z)|\leq \sum_z \parens{|T(z)|+2-2dk} = T^*+|S_i|(2-2dk).\end{align*} We can now see that the number of vertices in $S_{i+1}$ that are adjacent to more than one vertex in $S_i$ is bounded above by $$\frac{(d-1)|S_i|-T^*-U^*}{2}.
$$ This is because the total number of possible edges from $S_i$ to these vertices is at most $e(S_i,S_{i+1})\leq (d-1)|S_i|-T^*$ less the $U^*$ edges that are accounted for by vertices in $S_{i+1}$ with only one neighbor in $S_i$. Every other vertex must be incident to at least $2$ of those $(d-1)|S_i|-T^*-U^*$ edges, so we divide by $2$. Now, we add the other $U^*$ vertices in $S_{i+1}$ to achieve the desired result: \begin{align*} |S_{i+1}|& \leq U^*+\frac{(d-1)|S_i|-T^*-U^*}{2} = \frac{(d-1)|S_i|-T^*+U^*}{2}\\ &\leq\frac{(d-1)|S_i|-T^* + (2-2dk)|S_i|+T^*}{2} = \frac{d+1-2dk}{2}|S_i|. \end{align*} \end{proof} Following the same proof outline, we obtain a better bound for bipartite graphs. \begin{theorem}\label{thm:vg_bipartite}Let $G$ be a $d$-regular bipartite graph with $\kappa(v_1,v_2)\geq k$ for every pair of vertices $v_1,v_2$. Fix $x\in V$ and define $S_i = \mathrm{dist}^{-1}_x(i)$. For $i\geq 1$, $$|S_{i+1}|\leq \frac{d-dk}{2}|S_i|.$$\end{theorem} \begin{proof} First, we bound $e(S_i,S_{i+1})$, the number of edges between $S_i$ and $S_{i+1}$. Let $z\in S_i$; then $z$ is adjacent to some vertex $y(z)\in S_{i-1}$. Because $y\notin S_{i+1}$, $e(z,S_{i+1})\leq d-1$. Clearly, $e(S_i,S_{i+1})\leq (d-1)|S_i|$. Next, for each $z$ we wish to use the Kantorovich characterization of $W_1(\mu_y,\mu_z)$. Define a test-function $f$: \begin{itemize} \item $f(y) = 0$. \item $f(z) = 1$. \item For any other neighbor $v$ of $y$, $f(v) = -1.$ \item Let $W(z)$ be the set of neighbors of $z$ (besides $y$) that are adjacent to a neighbor of $y$ other than $z$. Set $f|_{W(z)} = 0$. \item Let $U(z) = N(z) \setminus \parens{ \{y\}\cup W(z)}$, set $f|_{U(z)} = 2$. \item $f$ can be made $1$-Lipschitz by setting $f = 1$ on any other vertex in the same set of the bipartition as $z$ and $f = 0$ on any other vertex in the same set as $y$. \end{itemize} We have: \[\sum_x f(x)\mu_z(x) = 1 + \frac{2|U(z)|-d}{2d} \ \mbox{ and } \ \sum_x f(x)\mu_y(x) = \frac{2-d}{2d}\,.\] Combining the two, we get \[(1-k)\geq \sum_x f(x)(\mu_z(x)-\mu_y(x)) \geq 1 + \frac{2|U(z)|-2}{2d}\,,\] resulting in \[ |U(z)| \le 1-dk\,.\] If a neighbor of $z$ is not in $U(z)$, it must be either $y$ or adjacent to more than one neighbor of $y$, and thus in $W(z)$.\\ Any vertex in $S_{i+1}$ for which $z$ is the only neighbor in $S_i$ must be in $U(z)$. The total number $U^*$ of vertices in $S_{i+1}$ that are adjacent to only one vertex in $S_i$ satisfies \begin{align*}U^*\leq \sum_z |U(z)|\leq |S_i|(1-dk).\end{align*} We can now bound the number of vertices in $S_{i+1}$ that are adjacent to more than one vertex in $S_i$ from above by $$\frac{(d-1)|S_i|-U^*}{2}. $$ This is because the total number of possible edges from $S_i$ to these vertices is at most $e(S_i,S_{i+1})\leq (d-1)|S_i|$ less the $U^*$ edges that are accounted for by vertices in $S_{i+1}$ with only one neighbor in $S_i$. Each counted vertex must be incident to at least $2$ of those $(d-1)|S_i|-U^*$ edges, so we divide by $2$. Now, we add the other $U^*$ vertices to achieve the desired bound on $|S_{i+1}|$: \begin{align*}|S_{i+1}|\leq U^*+\frac{(d-1)|S_i|-U^*}{2} = \frac{(d-1)|S_i|+U^*}{2}\\ \leq\frac{(d-1)|S_i| + (1-dk)|S_i|}{2} = \frac{d(1-k)}{2}|S_i|.\end{align*} \end{proof} We continue to denote $S_i=\mathrm{dist}^{-1}_x(i)$ and summarize the results of Theorems \ref{thm:vg1} and \ref{thm:vg_bipartite} as follows.
\begin{theorem} \label{thm:vg} For any $d$-regular graph with $\kappa(v_1,v_2)\geq k$ for every pair of vertices $v_1,v_2$, and for $i\geq 1$, we have \begin{align*}|S_i|\leq d^i\biggl(\frac{1+\tfrac{1}{d}-2k}{2}\biggr)^{i-1}.\end{align*} For any $d$-regular bipartite graph satisfying the same curvature bound, for $i\geq 1$, we also have \begin{align*}|S_i|\leq d^i \biggl(\frac{1-k}{2}\biggr)^{i-1}.\end{align*} \end{theorem} \begin{proof} Observe that $|S_0| =1$ and $|S_1| = d$ for every graph. Repeated application of Theorems \ref{thm:vg1} and \ref{thm:vg_bipartite} gives the desired bounds for $S_i$ when $i\geq 2.$ This can be made formal using induction, which is left to the reader. \end{proof} As far as we are aware, these are the first non-trivial bounds on volume growth under a negative bound on Ollivier curvature. A weakness in the proof method is that vertices in $S_{i+1}$ are counted either as having exactly one neighbor in $S_i$ (i.e., in some set $U(z)$), or as having several neighbors ($W(z)$), but the upper bound on $|S_{i+1}|$ assumes the worst case: that there are a large number of vertices of type $W$, each having only $2$ neighbors in $S_i$. For graphs where that assumption is correct (or close), our bound is somewhat tight. In other graphs, the average number of neighbors in $S_i$ for any vertex in $S_{i+1}$ can be $O(d)$. For those graphs the bound is not tight. Below we give an example illustrating this issue. \begin{example} Let $T_p$ be the infinite $p$-regular tree and $T_p^q$ be the Cartesian product graph $T_p\square T_p\square \cdots \square T_p$, with the product taken $q$ times. Note that $T_p^q$ is $pq$-regular. It is easy to compute that $T_p$ has $\kappa(x,y) = \tfrac{2-p}{p}$ if $x\sim y$. By tensorization of curvature (see for instance \cite{KKRT15}), we know that $T_p^q$ has $\kappa(x,y) \geq \tfrac{2-p}{pq}$ whenever $x\sim y$. Because $T_p^q$ is bipartite, we apply the second statement of Theorem \ref{thm:vg} to find the bound \begin{align}|\mathrm{dist}^{-1}_x(i)|\leq (pq)^i\parens{\frac{1-\tfrac{2-p}{pq}}{2}}^{i-1} = pq\parens{\frac{p(q+1)}{2}-1}^{i-1},\end{align} so that \begin{align} \log(|\mathrm{dist}^{-1}_x(i)|)\leq i\log\parens{\frac{p(q+1)}{2}-1} + O(1).\end{align} A vertex $y\in \mathrm{dist}^{-1}_x(i)$ is characterized by the distance from $x$ parallel to each of the $q$ copies of $T_p$ in the product graph, and, given those distances, by the path taken in $T_p$ of that distance. There are $\binom{i+q-1}{q-1}$ choices of what distance is traveled along each copy of $T_p$. At each step of any path taken along some copy of $T_p$, there are either $p$ possibilities (for the first step) or $p-1$ possibilities (for any subsequent step). As such, we have \begin{align} \binom{i+q-1}{q-1}(p-1)^i\leq |\mathrm{dist}^{-1}_x(i)|\leq \binom{i+q-1}{q-1}p^q(p-1)^{i-q}.\end{align} It follows that \begin{align}\log(|\mathrm{dist}^{-1}_x(i)|)=i\log(p-1) + O(\log i).\end{align} Observe that $q$ is the maximum number of neighbors that $y\in \mathrm{dist}^{-1}_x(i)$ has in $\mathrm{dist}^{-1}_x(i-1).$ The difference between the bound from Theorem \ref{thm:vg} of $i\log \parens{\frac{p(q+1)}{2}-1}$ and the actual growth rate $i\log (p-1)$ results from the value of $q$. As discussed before, the reason for this is that in the proof of Theorem \ref{thm:vg}, the upper bound assumes as a worst-case scenario that every vertex in $\mathrm{dist}^{-1}_x(i)$ has either $1$ or $2$ neighbors in $\mathrm{dist}^{-1}_x(i-1)$. But in fact, as $i$ grows almost every vertex in the $i$-shell of $T_p^q$ has $q$ neighbors in the $(i-1)$-shell.
\end{example} {\it We conjecture here that $T_p^q$ actually experiences the maximum volume growth for their curvature and regularity.} \begin{conjecture}Let $G$ be a $pq$-regular graph so that if $u,v\in V(G)$, then $\kappa(u,v)\geq \frac{2-p}{pq}$. Let $x\in V(G)$ and $y\in V(T_p^q)$. Then for any $i\geq 0$, one has \[|\mathrm{dist}^{-1}_x(i)|\leq |\mathrm{dist}^{-1}_y(i)|.\]\end{conjecture} Qualitatively, the graph $T_p^q$ is conjectured to fill the same role that the space of constant curvature does in the Bishop Volume Comparison Theorem. A case of this conjecture is that the $d$-dimensional lattice $T_2^d$ is conjectured to have the fastest volume growth for any $2d$-regular graph with curvature lower bound $0$. If correct, this would prove that any such graph has polynomial volume growth. \section{Volume Growth and Spectral Estimates in Graphs} \label{sect:Spectrum} In this section we follow the methods from the continuous setting that were developed by B. Benson \cite{B15} and discussed in Section \ref{sect:vg_lambda}. First, we demonstrate an upper bound for an eigenvalue $\lambda_k(G)$ by taking the Rayleigh quotient of a function based only on distance from a cut-set $\Sigma$. Next, we optimize that quotient by treating it as a discrete Hardy-type inequality. \begin{remark} In applying our results using volume growth to bound the spectrum, we will use the relationship between notions of curvature of the graph and volume growth, as introduced and referenced in the previous section. The bounds which illustrate this relationship are the only point in our analysis that relies on the discrete curvature. Given another volume growth result (either based on another notion of discrete curvature or unrelated to curvature), it will be possible to repeat the analysis we present here and achieve similar results.\end{remark} \subsection{Bounding eigenvalues using volume growth} In this section, we will establish bounds for the spectrum of the graph Laplacian using bounds on volume growth. Our methods in this section for approximating $\lambda_k$, where $k\geq 2$, do not make any assumption about the cut-set, but the bounds we obtain will only be in terms of the generic volume growth bounds $\mu,\nu$. Later, we will give a bound for $\lambda_2$ with the assumption that $\Sigma$ is the outer vertex isoperimetric optimizing cut-set. Let $G = (V,E)$ be a graph. Let $\Sigma\subset V(G)$ be a cut-set that separates $V\setminus\Sigma$ into $V^+$ and $V^-$. Note that under this definition, it is possible that $V^+$ or $V^-$ is empty. The signed distance function $\mathrm{dist}_{\Sigma}:V\to{\mathbb Z}$ is defined so that $|\mathrm{dist}_{\Sigma}(v)| = \text{min}_{x\in \Sigma} \mathrm{dist}_G(x,v)$ where $\mathrm{dist}_G$ is the graph distance, and the sign of $\mathrm{dist}_{\Sigma}$ is positive on $V^+$ and negative on $V^-$. We will assume that we have volume growth and decay bounds for the level sets of $\mathrm{dist}_{\Sigma}.$ Specifically, let $\nu(k)$ denote a volume growth bound and $\mu (k)$ denote a uniform volume decay bound respectively. Here, for $k \in {\mathbb Z},$ the bounds $\nu (k)$ and $\mu(k)$ have the property that \begin{equation}\label{eq:GrowthBound} |\Sigma|\mu (k) \leq |\mathrm{dist}^{-1}_{\Sigma}(k)|\leq |\Sigma| \nu (k). \end{equation} \begin{definition}\label{def:spaces} Define $T^+ \in {\mathbb Z}_{>0}$ so that $\mu (k)>0$ for all $0 \leq k \leq T^+$ and define $T^- \in {\mathbb Z}_{<0}$ so that $\mu (k) >0$ for all $T^- \leq k \leq 0$. 
We denote by $U^+$ the set of all functions $f^+:V^+\to {\mathbb R}$ and by $U^-$ the set of all functions $f^-:V^-\to {\mathbb R}$. The inner products for $U^+$ and $U^-$ will be inherited from the inner product on $V$ by letting $f^+(v)=0$ for every $v \in V\setminus V^+$ and $f^-(v)=0$ for every $v \in V\setminus V^-,$ i.e., extending the functions on $U^+$ and $U^-$ to all of $V$ by zero, then taking the inner product on all of $V.$ We also define a pair of corresponding spaces of functions on $\{0,\dots, T^+\}$ and $\{T^-, T^-+1, \ldots,-1,0\}$: let $W^+$ be the space of functions $g^+:\{0,1,2,\ldots , T^+\} \to {\mathbb R}$ such that $g^+(0)=0$, and $W^-$ be the space of functions $g^-:\{T^-, T^-+1, \ldots,-1,0\} \to {\mathbb R}$ such that $g^-(0)=0.$ Further, for $u^+,v^+ \in W^+,$ define $$\langle u^+,v^+\rangle_+=\sum_{i=0}^{T^+} u^+(i)v^+(i).$$ Similarly, for $u^-,v^- \in W^-,$ define $$\langle u^-,v^-\rangle_-=\sum_{i=T^-}^{0} u^-(i)v^-(i).$$ \end{definition} To estimate $\lambda_2(G),$ we will be interested in the smallest positive constants $\rho^{+}$ and $\rho^-$ for which the respective inequalities \begin{align}\label{eq:Hardy1} \sum_{i=0}^{T^{+}} \phi(i)^2 \cdot \parens{\nu(i)+\nu(i-1)} &\leq 2\rho^{+} \sum_{k=0}^{T^{+}}\biggl [ \sum_{i=0}^k \phi(i) \biggr ]^2 \cdot \mu(k),\\ \label{eq:Hardy2} \sum_{i=T^-}^0 \phi (i)^2 \cdot \parens{\nu(i)+\nu(i-1)} &\leq 2\rho^- \sum_{k=T^-}^0 \biggl [ \sum_{i=k}^0 \phi(i) \biggr ]^2 \cdot \mu(k) \end{align} admit a solution $\phi$ with $\phi(0)=0$ and $\phi \not\equiv 0$; here $\nu$ and $\mu$ are the volume growth and decay bounds defined in Equation \ref{eq:GrowthBound}. Inequalities of this form are called weighted discrete Hardy inequalities. For a fuller discussion of this topic, we refer to \cite{Miclo}. \begin{theorem}\label{theo:eigen2} Let $\rho^+$ and $\rho^-$ be defined by Equations \ref{eq:Hardy1} and \ref{eq:Hardy2}. Then $\lambda_2(G) \leq \max \{\rho^+, \rho^-\}.$ \end{theorem} Before proving the theorem, we formulate the results for the higher eigenvalues.
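First, though, we note that for concrete weights $\nu$ and $\mu$ the constants $\rho^{+}$ and $\rho^-$ are easy to compute: reading $\rho^+$ as the infimum over admissible $\phi$ of the ratio of the left-hand side of Equation \ref{eq:Hardy1} to the sum on its right-hand side, it is the smallest generalized eigenvalue of a pair of $T^+\times T^+$ matrices. The following Python sketch (ours, assuming SciPy is available; the function name and the sample weights are illustrative only) evaluates $\rho^+$ for the constant-growth weights that reappear in Example \ref{ex:constant}.
\begin{verbatim}
# Sketch (ours): rho^+ as the smallest generalized eigenvalue of the
# weighted discrete Hardy quotient in eq:Hardy1.
import numpy as np
from scipy.linalg import eigh

def hardy_rho(nu, mu):
    # nu, mu: growth/decay weights nu(0..T), mu(0..T) with mu > 0
    T = len(mu) - 1
    zeta = np.array([nu[i] + nu[i - 1] for i in range(1, T + 1)])
    C = np.tril(np.ones((T, T)))        # (C phi)(k) = sum_{i<=k} phi(i)
    num = np.diag(zeta)                 # phi^2-weights nu(i)+nu(i-1)
    den = 2.0 * C.T @ np.diag(mu[1:]) @ C
    return eigh(num, den, eigvals_only=True)[0]

h, T = 0.05, 18                          # sample values; mu stays positive
nu = np.ones(T + 1)
mu = 1.0 - h * (np.arange(T + 1) + 1.0)  # mu(k) = 1 - h*(k+1)
print(hardy_rho(nu, mu))  # Theorem theo:eigen2: lambda_2 <= max(rho^+, rho^-)
\end{verbatim}
We now return to the formulation of the higher-eigenvalue bounds.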
To estimate the higher eigenvalues, we define a symmetric, tridiagonal matrix $A^+$ indexed by $\{1,\dots, T^+\}$ so that \begin{align*} A^+_{ij} = \begin{cases} \tfrac{2\nu(i) + \nu(i-1)+\nu(i+1)}{\mu(i)}, &\text{ if }i=j;~ 1\leq i<T^+\\ \tfrac{\nu(T^+)+\nu(T^+-1)}{\mu(T^+)},&\text{ if }i = j = T^+\\ \tfrac{-\nu(i)-\nu(j)}{\sqrt{\mu(i)\mu(j)}},&\text{ if }|i-j| = 1;~ 1\leq i,j\leq T^+\\ 0, &\text{otherwise} \end{cases} \end{align*} Similarly, we define the symmetric, tridiagonal matrix $A^-$ indexed by $\{T^-, T^-+1, \ldots , -2, -1\}$: \begin{align*} A^-_{ij} = \begin{cases} \tfrac{2\nu(i) + \nu(i-1)+\nu(i+1)}{\mu(i)}, &\text{ if }i=j; T^-<i\leq -1\\ \tfrac{\nu(T^-)+\nu(T^-+1)}{\mu(T^-)}, &\text{ if }i = j = T^-\\ \tfrac{-\nu(i)-\nu(j)}{\sqrt{\mu(i)\mu(j)}},&\text{ if }|i-j| = 1; T^- \leq i,j\leq -1\\ 0, &\text{otherwise.} \end{cases} \end{align*} \begin{theorem}\label{theo:eigenhigher} For a graph $G$ and any $k,l \in {\mathbb N}$ with $1 \leq k,l \leq \min \{\left |T^-\right |, T^+\}=:T,$ then \begin{equation}\label{eq:eigenhigher} \lambda_{k+l} (G) \leq \frac{1}{2}\max \left \{ \rho_k^+ , \rho_l^- \right \} \end{equation} where $\rho^+_k$ and $\rho^-_l$ are the $k$-th and $l$-th non-trivial eigenvalues of the respective equations $A^+g^+=\rho^+g^+$ with $g^+ \in W^+$ and $A^-g^-=\rho^-g^-$ with $g^- \in W^-.$ In particular, we have that \begin{equation*} \lambda_j(G) \leq \frac{1}{2}\min_{j=k+l} \min_{\substack{j\leq t^++|t^-|\leq 2T\\t^+,-t^- \geq 1}} \max \left \{ \rho^+_k(t^+), \rho_l^-(|t^-|) \right \} \end{equation*} where $\rho^+_k(t^+)$ and $\rho_l^-(|t^-|)$ are the eigenvalues of the symmetric, tridiagonal matrices $A^+(t^+)$ and $A^-(t^-)$ resulting from replacing $T^+$ with $t^+$ and $T^-$ with $t^-$ in the definitions of $A^+$ and $A^-,$ respectively. \end{theorem} \begin{remark} Broadly speaking, we are using estimates of volume growth and decay which act as weights and linearize the graph Laplacian eigenvalue problem on the graph. The main idea is to linearize the graph using the weights $\nu$ and $\mu.$ This can be done in multiple ways, however, we choose to do it in a way which is based around a minimizing vertex cut for $h_{out}(G),$ as we find it to be a natural way to produce Buser-type inequalities for both $\lambda_2(G)$ as well as the higher eigenvalues and higher Cheeger constants. In this form, we show that the higher eigenvalues are bounded above by eigenvalues of tridiagonal matrices, which in some cases, are known in closed form \cite{KST99}. \end{remark} We will now prove both Theorems \ref{theo:eigen2} and \ref{theo:eigenhigher} simultaneously. \begin{proof}[Proof of Theorems \ref{theo:eigen2} and \ref{theo:eigenhigher}] Using the Poincar\'e minimax principle for characterization of eigenvalues, we see that \begin{equation}\label{eq:Ray1} \lambda_k (G)= \inf_{U_k}\sup_{f\in U_k} \frac{\langle f,\Delta f\rangle }{\langle f,f\rangle }, \end{equation} where $U_k$ is the set of $k$-dimensional subspaces of functions $f \in {\mathbb R}^V.$ Expanding these inner products, we find that \begin{align*} \langle f,\Delta f\rangle &= \sum_x f(x)\sum_{y\sim x}\tfrac{1}{d}\parens{f(x)-f(y)}\\ &= \sum_{\{x,y\}:x\sim y}\tfrac{1}{d}\parens{f(x)-f(y)}^2\\ &= \sum_x\tfrac{1}{2d}\sum_{y\sim x}\parens{f(x)-f(y)}^2, \text{ and}\\ \langle f,f\rangle &= \sum_x f^2(x). 
\end{align*} Define $\Sigma \subset V$ so that $h_{out}(G)=|\Sigma|/|A|$ where $A\subset V,$ with $\Sigma=\partial_{out} A,$ and $|A|\leq |V|/2.$ Use the signed distance from $\Sigma,$ with positive distance into $A,$ to define the following vertex subsets: \begin{equation*} V_{\geq a}:=\bigcup_{n \in {\mathbb Z}: n\geq a} \mathrm{dist}^{-1}_{\Sigma}(n) \text{ and } V_{\leq a}:=\bigcup_{n \in {\mathbb Z}: n\leq a} \mathrm{dist}^{-1}_{\Sigma}(n). \end{equation*} We wish to estimate the eigenvalue $\lambda_j (G)$ of the Laplacian on $G$ by the eigenvalues of the Laplacian on the subgraphs on $V_{\geq 0}$ and $V_{\leq 0}$, wherein the test functions for the Rayleigh quotients $f^+:V_{\geq 0}\to {\mathbb R}$ satisfy $f^+(v)=0$ for every $v\in \mathrm{dist}_{\Sigma}^{-1}(0)=\Sigma$, and $f^-:V_{\leq 0} \to {\mathbb R}$ satisfy $f^-(v)=0$ for every $v \in \Sigma$. Denote the eigenvalues determined by the Rayleigh quotients of these specific test functions by $\xi_k(V_{\geq 0})$ and $\xi_l(V_{\leq 0})$, respectively. Using the Poincar\'e minimax principle it is possible to see that when $1 \leq k,l\leq \min \left \{ |V^-|,|V^+|\right \},$ it follows that \begin{equation}\label{eq:Poincare} \lambda_{k+l} (G) \leq \max \left \{ \xi_k(V_{\geq 0}), \xi_l(V_{\leq 0}) \right \}. \end{equation} This can be seen by discretizing Theorem 8.2.1 found in \cite{Buser} or Proposition 2.1 of \cite{B15}. Such an argument is given in detail by Balti for weighted directed graphs, where the result is also extended in several ways, including to the special Laplacian operator on these graphs \cite[Section 5]{Balti}. To give an upper bound for the eigenvalue, we restrict the test functions for Equation \ref{eq:Ray1} in ${\mathbb R}^V$ to functions which \begin{enumerate} \item vanish on either $V_{\geq 0}$ or $V_{\leq 0},$ \item are constant on each set $\mathrm{dist}^{-1}_{\Sigma}(i),$ \item and are constant for distance values less than or equal to $T^-$ and greater than or equal to $T^+.$ \end{enumerate} Recall the definitions of the spaces of functions $U^{+}, U^{-},$ and $W^+, W^-$ from Definition \ref{def:spaces}. We will first treat these test functions as functions in $U^+$ or $U^-,$ respectively, and then as functions in $W^+$ or $W^-$. Let $U^+_k$ and $U^-_l$ denote, respectively, arbitrary $k$- and $l$-dimensional subspaces of real-valued functions in $U^+$ and $U^-$, for values $k,l \in {\mathbb Z}_{\geq 0}.$ Similarly, $W^+_k$ and $W^-_l$ are $k$- and $l$-dimensional subspaces of $W^+$ and $W^-$, respectively. Combining Equation \ref{eq:Poincare} with the Poincar\'e minimax characterization for the Dirichlet eigenvalues $\xi_k(V_{\geq 0})$ and $\xi_l(V_{\leq 0})$, while maintaining the assumption that $1\leq k,l\leq \min\{|V^-|,|V^+|\},$ we have that \begin{equation} \label{eq:Ray_higher} \lambda_{k+l}(G) \leq \max \left \{\inf_{U_k^+}\sup_{f^+\in U_k^+} \frac{\langle f^+,\Delta f^+\rangle }{\langle f^+,f^+\rangle }, \inf_{U_l^-}\sup_{f^-\in U_l^-} \frac{\langle f^-,\Delta f^-\rangle }{\langle f^-,f^-\rangle } \right \}.
\end{equation} Now, for $g$ defined on $V_{\geq 0},$ with constant value $g(i)$ on the $i$-shell, using the volume growth estimates from Equation \ref{eq:GrowthBound}, we have the estimates \begin{align*} \langle g,\Delta g\rangle &= \sum_{i=0}^{T^+}\sum_{x\in \mathrm{dist}^{-1}_{\Sigma}(i)} \sum_{y\sim x}\frac{1}{2d}\parens{g(i)-g(y)}^2\\ &\leq \frac{1}{2}\sum_{i=0}^{T^+}\sum_{x \in \mathrm{dist}_{\Sigma}^{-1}(i)}\biggl[ \parens{g(i)-g(i+1)}^2 + \parens{g(i)-g(i-1)}^2\biggr]\\ &=\frac{1}{2}\sum_{i=0}^{T^+}|\mathrm{dist}^{-1}_{\Sigma}(i)|\biggl[ \parens{g(i)-g(i+1)}^2 + \parens{g(i)-g(i-1)}^2\biggr]\\ &\leq \frac{1}{2}\sum_{i=0}^{T^+}\nu(i)|\Sigma|\biggl[ \parens{g(i)-g(i+1)}^2 + \parens{g(i)-g(i-1)}^2\biggr]\end{align*} and \begin{align*}\langle g,g\rangle &= \sum_{i=0}^{T^+}|\mathrm{dist}^{-1}_{\Sigma}(i)|g^2(i)\\ & \geq \sum_{i=0}^{T^+}\mu(i)|\Sigma| g^2(i). \end{align*} Similar estimates hold for a function $g$ defined on $V_{\leq 0}.$ We now use these bounds in Equation \ref{eq:Ray_higher}. Because we are restricting to functions $g$ with a constant value in each shell, it is equivalent to write the expression in terms of $W^+$ and $W^-$, rather than $U^+$ and $U^-$. \begin{equation}\label{eq:Ray3} \begin{split} \lambda_{k+l} (G) &\leq \max \left \{ \inf_{W_k^+}\sup_{g\in W_k^+}\frac{\sum_{i=0}^{T^+}\biggl[ \parens{g(i)-g(i+1)}^2 + \parens{g(i)-g(i-1)}^2\biggr]\cdot \nu(i)}{2\sum_{i=0}^{T^+}g^2(i)\cdot \mu(i)}, \right.\\ &\qquad \qquad \left. \inf_{W_l^-}\sup_{g\in W_l^-}\frac{\sum_{i=T^-}^{0}\biggl[ \parens{g(i)-g(i+1)}^2 + \parens{g(i)-g(i-1)}^2\biggr]\cdot \nu(i)}{2\sum_{i=T^-}^{0}g^2(i)\cdot \mu(i)} \right \}. \end{split} \end{equation} Note that $|\Sigma|,$ a common factor appearing in the numerator and denominator, has been eliminated in the resulting estimates in Equation \ref{eq:Ray3}. For estimating $\lambda_2(G),$ we take $k=l=1$ and the Rayleigh quotient for $W^+$ in Equation \ref{eq:Ray3} becomes \begin{equation}\label{eq:HardyRay} \inf_{g \in W^+, g\not\equiv 0} \, \frac{\sum_{i=0}^{T^+}\biggl[ \parens{g(i)-g(i+1)}^2 + \parens{g(i)-g(i-1)}^2\biggr]\cdot \nu(i)}{2\sum_{i=0}^{T^+}g^2(i)\cdot \mu(i)}. \end{equation} Define $\phi(j):=g(j)-g(j-1)$ for the $W^+$ quotient in Equation \ref{eq:Ray3}. Note that $g(j)=\sum_{i=0}^j \phi(i)$, that $g(0)=\phi(0)=0,$ and that $g(i)$ is constant for all $i \geq T^+,$ which implies $\phi(i)=0$ for all $i > T^+.$ As a result, Equation \ref{eq:HardyRay} becomes \begin{equation*} \inf_{g \in W^+, g\not\equiv 0} \, \frac{\sum_{i=0}^{T^+}[\phi(i+1)^2+\phi(i)^2]\nu(i)}{2\sum_{k=0}^{T^+}\left [\sum_{i=0}^k \phi(i) \right ]^2\mu(k)} =\inf_{g \in W^+, g\not\equiv 0} \, \frac{\sum_{i=0}^{T^+}\phi(i)^2[\nu(i)+\nu(i-1)]}{2\sum_{k=0}^{T^+}\left [\sum_{i=0}^k \phi(i) \right ]^2\mu(k)}. \end{equation*} In the last equality, we have used that $\phi(T^++1)=g(T^++1)-g(T^+)=0.$ It follows from the definition of $\phi$ and a routine computation that the quotient in Equation \ref{eq:HardyRay} is bounded from above by $\rho^+$ in Equation \ref{eq:Hardy1}. A similar argument verifies the bound involving $\rho^-$ in Equation \ref{eq:Hardy2}. This establishes Theorem \ref{theo:eigen2}. $\Box$ \textbf{Bounding the higher eigenvalues:} We now continue the argument for higher eigenvalues.
Since the test function $g$ can be thought of as a test function vanishing off of $V_{\geq 1},$ we wish to find a symmetric matrix $A^+$ and a column vector $\mathbf{w}_+$ so that $$\langle \mathbf{w}_+,A^+\mathbf{w}_+ \rangle_+=\sum_{i=0}^{T^+}\left [(g(i)-g(i+1))^2+(g(i)-g(i-1))^2 \right ]\nu(i)$$ and $$\langle \mathbf{w}_+,\mathbf{w}_+ \rangle_+=\sum_{i=0}^{T^+} g(i)^2\mu(i).$$ As a result, the quotient $\tfrac{\langle \mathbf{w}_+,A^+\mathbf{w}_+ \rangle_+}{2\langle \mathbf{w}_+,\mathbf{w}_+ \rangle_+}$ is equal to the eigenvalue estimate for $V_{\geq 0}$ in Equation \ref{eq:Ray3}. Since $g(0)=0,$ we omit the $i=0$ entry in constructing the column vector $\mathbf{w}_+,$ defining $$\mathbf{w}_+= \begin{bmatrix}g(1)\sqrt{\mu(1)}\\ \vdots\\ g(T^+)\sqrt{\mu(T^+)}\end{bmatrix}.$$ In other words, we let the $i$-th entry of $\mathbf{w}_+$ be $g(i)\sqrt{\mu(i)}.$ We now wish to construct a symmetric matrix $A^+$ such that \begin{equation}\label{eq:Aplus} \mathbf{w}_+^{\intercal}A^+\mathbf{w}_+=\langle \mathbf{w}_+,A^+\mathbf{w}_+\rangle_+ =\sum_{i=0}^{T^+}[(g(i)-g(i+1))^2+(g(i)-g(i-1))^2]\nu(i), \end{equation} where $\mathbf{w}_+^{\intercal}$ is the transpose of $\mathbf{w}_+$ when written as a column vector. Now, the $i$-th term on the right hand side of Equation \ref{eq:Aplus} can be rewritten as \begin{equation} \label{eq:gForm} [2g(i)^2+g(i-1)^2+g(i+1)^2-2g(i)(g(i-1)+g(i+1))]\nu(i). \end{equation} It follows from Equation \ref{eq:gForm} that the entries of $A^+$ are quotients where the numerator can be expressed as a linear combination of the weights $\nu$ and the denominator of $A^+_{ij}$ is equal to $\sqrt{\mu(i)\mu(j)}.$ Since $A^+_{ij}$ and $A^+_{ji}$ correspond to the right hand side of Equation \ref{eq:Aplus}, terms of the form $$-2g(i)g(i-1)\nu(i)=g(i)g(i-1)(-2\nu(i))$$ contribute an additive term $-\nu(i)$ to the numerator of each entry $A_{i-1,i}^+$ and $A_{i,i-1}^+$, where the factor of $\nu(i)$ has been halved due to the fact that we require $A^+_{i-1,i}=A^+_{i,i-1}.$ For the same reason, terms of the form $$-2g(i)g(i+1)\nu(i)=g(i)g(i+1)(-2\nu(i))$$ contribute an additive term $-\nu(i)$ to the numerator of $A_{i,i+1}^+$ and $A_{i+1,i}^+.$ This implies that when $|i-j|=1,$ we have that $$A_{ij}^+=\frac{-\nu(i)-\nu(j)}{\sqrt{\mu(i)\mu(j)}}.$$ When $1\leq i<T^+,$ the terms $$[2g(i)^2+g(i-1)^2+g(i+1)^2]\nu(i)$$ contribute $2\nu(i)$ to $A_{ii}^+$ and $\nu(i)$ to $A_{i-1,i-1}^+$ and $A_{i+1,i+1}^+,$ giving $$A_{ii}^+=\frac{2\nu(i)+\nu(i-1)+\nu(i+1)}{\mu(i)}.$$ In other words, the numerator of the entry $A_{ii}^+$ contains the multiple of $g(i)^2$ in the right hand side of Equation \ref{eq:Aplus}, while the numerator of the entry $A_{ij}^+$ contains one half of the multiple of $g(i)g(j)=g(j)g(i)$ in the sum, since $A_{ij}^+=A_{ji}^+.$ Finally, the $T^+$-th term in the sum on the right hand side of Equation \ref{eq:Aplus} can be written as $$[g(T^+)^2-2g(T^+)g(T^+-1)+g(T^+-1)^2]\nu(T^+).$$ This contributes $\nu(T^+)$ to the numerators of $A_{T^+,T^+}^+$ and $A_{T^+-1,T^+-1}^+$ (and $-\nu(T^+)$ to those of $A_{T^+,T^+-1}^+$ and $A_{T^+-1,T^+}^+$). Since $g(T^+)-g(T^++1)$ vanishes, the $T^+$-th term contributes only $\nu(T^+)$, rather than $2\nu(T^+)$, to the last diagonal entry, and we have that $$A_{T^+,T^+}^+=\frac{\nu(T^+)+\nu(T^+-1)}{\mu(T^+)}.$$ Thus, we conclude that the desired symmetric matrix $A^+$ can be constructed as \begin{align*} A^+_{ij} = \begin{cases} \tfrac{2\nu(i) + \nu(i-1)+\nu(i+1)}{\mu(i)}, &\text{ if }i=j;~ 1\leq i<T^+\\ \tfrac{\nu(T^+)+\nu(T^+-1)}{\mu(T^+)},&\text{ if }i = j = T^+\\ \tfrac{-\nu(i)-\nu(j)}{\sqrt{\mu(i)\mu(j)}},&\text{ if }|i-j| = 1;~ 1\leq i,j\leq T^+\\ 0, &\text{otherwise} \end{cases}
\end{align*} where $1\leq i,j\leq T^+.$ Since the test functions corresponding to the eigenvalues $\xi_l(V_{\leq 0})$ also vanish for vertices in $\Sigma,$ the arguments for finding the matrix $A_{ij}^+$ can be repeated to find that \begin{align*} A^-_{ij} = \begin{cases} \tfrac{2\nu(i) + \nu(i-1)+\nu(i+1)}{\mu(i)}, &\text{ if }i=j; T^-<i\leq -1\\ \tfrac{\nu(T^-)+\nu(T^-+1)}{\mu(T^-)}, &\text{ if }i = j = T^-\\ \tfrac{-\nu(i)-\nu(j)}{\sqrt{\mu(i)\mu(j)}},&\text{ if }|i-j| = 1; T^- \leq i,j\leq -1\\ 0, &\text{otherwise.} \end{cases} \end{align*} We can now estimate Equation \ref{eq:Ray3} from above using the matrices $A^+$ and $A^-$: \begin{equation}\label{eq:Ray4} \lambda_{k+l}(G)\leq \frac{1}{2}\max \left \{ \inf_{W_k^+}\sup_{\mathbf{w}_+\in W_k^+}\frac{\langle \mathbf{w}_+,A^+\mathbf{w}_+ \rangle_+}{\langle \mathbf{w}_+,\mathbf{w}_+ \rangle_+}, \inf_{W_l^-}\sup_{\mathbf{w}_-\in W_l^-}\frac{\langle \mathbf{w}_-,A^-\mathbf{w}_- \rangle_-}{\langle \mathbf{w}_-,\mathbf{w}_-\rangle_-} \right \}. \end{equation} Since $A^+$ and $A^-$ are symmetric, the spectral theorem implies that there exist an orthonormal basis of $T^+$ real eigenfunctions of $A^+$ in $W^+$ with corresponding real eigenvalues and an orthonormal basis of $|T^-|$ real eigenfunctions of $A^-$ in $W^-$ having real eigenvalues. It is easy to see that if $\mathbf{w}_{\ast}\in W^+$ is an eigenfunction of $A^+$ with corresponding eigenvalue $\rho_{\ast},$ we have \begin{equation}\label{eq:SpecThm} \frac{\langle \mathbf{w}_{\ast},A^+\mathbf{w}_{\ast} \rangle_+} {\langle \mathbf{w}_{\ast},\mathbf{w}_{\ast} \rangle_+}=\rho_{\ast}. \end{equation} The same relationship holds for $A^-$ and the eigenfunctions in $W^-.$ Since these bases of eigenfunctions are orthonormal, the $k$-th eigenvalue of $A^+$ and the $l$-th eigenvalue of $A^-$, which we denote $\rho_k^+$ and $\rho_l^-,$ respectively, satisfy the following: \begin{equation}\label{eq:Spec2} \inf_{W_k^+}\sup_{\mathbf{w}_+\in W_k^+}\frac{\langle \mathbf{w}_+,A^+\mathbf{w}_+ \rangle_+}{\langle \mathbf{w}_+,\mathbf{w}_+ \rangle_+}=\rho_k^+ \hspace{.5in}\text{ and } \hspace{.5in} \inf_{W_l^-}\sup_{\mathbf{w}_- \in W_l^-}\frac{\langle \mathbf{w}_-,A^-\mathbf{w}_- \rangle_-}{\langle \mathbf{w}_-,\mathbf{w}_- \rangle_-}=\rho_l^-. \end{equation} Combining Equations \ref{eq:Ray4} and \ref{eq:Spec2}, it follows that \begin{equation*} \lambda_{k+l}(G) \leq \frac{1}{2} \max \left \{\rho_k^+,\rho_l^- \right \}. \end{equation*} This establishes Equation \ref{eq:eigenhigher}. The eigenvalue estimate on $\lambda_j(G)$ holds by taking $j=k+l$ in Equation \ref{eq:eigenhigher} while noting that the arguments above hold for any $t^+$ and $t^-$ with $1\leq t^+ \leq T^+$ and $T^-\leq t^- \leq -1,$ where one must restrict $k$ and $l$ such that $k \leq t^+$ and $l \leq |t^-|.$ \end{proof} We remark that in the continuous case, one shows that the analogue of the operator $A$ can be rewritten as a Sturm-Liouville problem depending on the same parameters of the manifold as Buser's inequality. The details can be found in Benson \cite{B15}. \subsection{Applying volume growth bounds} In this section, we use $\nu(k)$ to denote a volume growth bound around $\Sigma$; i.e., a function with the property that, for a fixed $\Sigma\subset V$, all choices of the sets $V^+,V^-$, and all $k\geq 0$, we have that $|\mathrm{dist}^{-1}_{\Sigma}(k)|\leq |\Sigma| \nu(k)$.
The function $\nu$ may depend on $\Sigma$ as well as the curvature, though previously we have only presented volume growth bounds that are independent of the choice of $\Sigma$. For convenience, we often denote $\Sigma_k=\mathrm{dist}_{\Sigma}^{-1}(k).$ \begin{remark} In this section our results are in terms of the outer vertex isoperimetric constant $h_{out}$. This is most natural because we use the counting measure on the vertex set. As stated before, there are simple bounds relating $h_{out}$ to the edge isoperimetric constant $h$: $$h\leq h_{out}\leq hd\,,$$ where $d$ is the degree of the graph. Using these inequalities, it is possible to rewrite our results in terms of $h$. \end{remark} \begin{lemma}\label{lem:d_lowerbound} Let $A\subset V$ be the set that achieves the outer vertex isoperimetric constant $h_{out}$ and let $\Sigma = \partial_{out}A$. Set either $V^+ = A$ or $V^+ = V\setminus (A\cup \Sigma)$, and let $V^-$ be the other. Use this choice of $V^{+}$ and $V^-$ to define the signs (positive and negative, respectively) of the signed distance function $\mathrm{dist}_{\Sigma}$. Let $k\geq 0,$ for $\Sigma_k = \mathrm{dist}^{-1}_{\Sigma}(k),$ it follows that \begin{align*} |\Sigma_k| \geq |\Sigma |\biggl(1 -h_{out}\sum_{i=0}^k \nu(i)\biggr). \end{align*} \end{lemma} \begin{proof} Observe that the case $k = 0$ is trivial. Assume $k>0$. Define $C^- = \bigcup_{i < k}\mathrm{dist}^{-1}_{\Sigma}(i)$ and $C^+ = \bigcup_{i > k}\mathrm{dist}^{-1}_{\Sigma}(i).$ We will split the proof into two cases. \begin{enumerate} \item In the first case, suppose $|C^-|< \tfrac{1}{2}|V|$. Since $k>0$, we have that $(V^-\cup \Sigma )\subseteq C^-$, \\ therefore $|V^-\cup \Sigma|< \tfrac{1}{2}|V|.$ By assumption $\tfrac{1}{2}|V| \leq |V\setminus A| = \left|\big (V\setminus (A\cup\Sigma)\big )\cup\Sigma \right|,$ so $V^-\neq V\setminus (A\cup\Sigma).$ It follows that $V^- = A$. Because $|A| =|V^-|<|C^-|< \tfrac{1}{2}|V|$, we have that $$\frac{|\Sigma|}{|A|}=h_{out} \leq \frac{|\Sigma_k|}{|C^-|}$$ and so $|\Sigma_k|\geq h_{out}|C^-|\geq h_{out}|A| = |\Sigma|$ and the result follows.\\ \item In the other case, we have $|C^-|\geq \tfrac{1}{2}|V|.$ Because $C^-$ and $C^+$ are disjoint,\\ we have that $|C^+|\leq \tfrac{1}{2}|V|$. Therefore \begin{align*}|\Sigma_k|\geq h_{out}|C^+| = h_{out}\left (|V^+|-\sum_{i=1}^k |\mathrm{dist}^{-1}_{\Sigma}(i)|\right ). \end{align*} Observe that since $|A|\leq |V|/2,$ we have that \begin{align*} |V^+|&\geq \min\{|V\setminus(A\cup\Sigma)|,|A|\}\\ &\geq \min\{|V\setminus A|-|\Sigma|,|A|\}\\ &\geq \min\{|A|-|\Sigma|,|A|\} \\ &= |A|-|\Sigma|. \end{align*} Applying the previous bound gives us \begin{align*}|\Sigma_k|\geq h_{out}\biggl(|V^+|-\sum_{i=1}^k |\mathrm{dist}^{-1}_{\Sigma}(i)|\biggr) \geq h_{out}\biggl(|A|-|\Sigma|-\sum_{i=1}^k |\Sigma|\nu(i) \biggr)\\ = h_{out}\biggl(|A|-\sum_{i=0}^k |\Sigma|\nu(i) \biggr) = |\Sigma |\biggl(1 -h_{out}\sum_{i=0}^k \nu(i)\biggr), \end{align*} where the first equality relies on the (always reasonable) assumption that $\nu(0) \geq 1.$ This proves the result. 
\end{enumerate} \end{proof} By this conclusion of Lemma \ref{lem:d_lowerbound}, one can think of the lower weights for vertex expansion $\mu(k)$ as $$\mu(k)=1-h_{out}\sum_{i=0}^k \nu(i).$$ As a result, we have $$|\Sigma|\nu(k)\geq |\mathrm{dist}^{-1}_{\Sigma}(k)|\geq |\Sigma|\left (1-h_{out}\sum_{i=0}^k\nu(i)\right )$$ and from the Rayleigh quotient in Equation \ref{eq:Ray3}, we obtain \begin{equation}\label{eq:Eigen} \lambda_2 \leq \inf_{W_1}\sup_{g\in W_1}\frac{\frac{1}{2}\sum_{k=0}^{T}\nu(k)\biggl[ \parens{g(k)-g(k+1)}^2 + \parens{g(k)-g(k-1)}^2\biggr]}{\sum_{k=0}^{T}g^2(k)\left (1-h_{out}\sum_{i=0}^k \nu(i)\right )}.\end{equation} where $T$ is the largest integer for which $1>h_{out}\sum_{i=0}^T \nu(i)$. Here, by assumption we have the same volume growth bounds on $V^+$ and $V^-$, so (unlike the previous section) the Rayleigh quotients are identical on both sides of the cut-set. \subsection{Bounds on $\lambda_2$} Of particular interest is the problem of bounding $\lambda_2$. Indeed, the original proofs of Buser's inequality only bound $\lambda_2$ and not the higher eigenvalues $\lambda_k:k\geq 3$. \cite{Buser82,Ledoux94,Ledoux04spectralgap}. First, we will give a short proof of a bound on $\lambda_2$ that is independent of the Cheeger cut-set. \begin{theorem}\label{thm:biglevel} Let $\Sigma\subset V$ be a set (not necessarily the Cheeger-achieving cut set) that cuts $V$ into $V^+$ and $V^-$, and define the one-sided shells $\mathrm{dist}^{-1}_{\Sigma}(k)$ as before. Let $\alpha = |\Sigma|/|V|$ . Assume that $\alpha < 1/4$. If $|\Sigma|\geq |\mathrm{dist}^{-1}_{\Sigma}(k)|$ for all $k\in {\mathbb Z}$, then $\lambda_2 \leq 8\alpha^2+ o(\alpha^2)$. \end{theorem} The proof loosely follows the method of the original proof of Buser's inequality for graphs. \begin{proof} Recall the Rayleigh quotient \begin{align*}\lambda_2(G) = \inf_f \frac{\tfrac{1}{2d}\sum_x \sum_{y\sim x} \parens{f(x)-f(y)}^2}{\sum_x f(x)^2}.\end{align*} Without loss of generality assume that $|V^+| \geq |V^-|$. Let $t = \lfloor \frac{1}{4\alpha} \rfloor$ Because $\alpha < 1/4$ and $t>0,$ we can construct the following test-function in the Rayleigh quotient to bound $\lambda_2(G)$: \begin{align*} f(x) = \begin{cases}0 \text{ if } x\in \mathrm{dist}^{-1}_{\Sigma}(i)\text{ where } i\leq 0,\\ i \text{ if } x\in \mathrm{dist}^{-1}_{\Sigma}(i)\text{ where } 0\leq i \leq t,\\ t \text{ if } x\in \mathrm{dist}^{-1}_{\Sigma}(i)\text{ where } i \geq t. \end{cases} \end{align*} For a vertex $x$, \begin{align*} \tfrac{1}{2d} \sum_{y\sim x} \parens{f(x)-f(y)}^2 \leq \begin{cases} \tfrac{1}{2} \text{ if } x\in \mathrm{dist}^{-1}_{\Sigma}(i) \text{ where }0\leq i \leq t,\\ 0 \text{ otherwise}, \end{cases} \end{align*} and \begin{align*} f(x)^2 \geq \begin{cases} t^2 \text{ if } x\in \mathrm{dist}^{-1}_{\Sigma}(i) \text{ where }i \geq t,\\ 0 \text{ otherwise}. \end{cases} \end{align*} Using these bounds, we see that \begin{align*}\sum_x \tfrac{1}{2d} \sum_{y\sim x} \parens{f(x)-f(y)}^2\leq \tfrac{1}{2}\sum_{i=0}^t |\mathrm{dist}^{-1}_{\Sigma}(i)|\leq \frac{(t+1)}{2}|\Sigma| \end{align*} and \begin{align*} \sum_x f(x)^2 \geq t^2 \sum_{i \geq t }|\mathrm{dist}^{-1}_{\Sigma}(i)| \geq t^2 \parens{|V^+|-t|\Sigma|} \geq t^2 \parens{\tfrac{1}{2}|V|-\tfrac{1}{4}|V|} = \tfrac{1}{4}t^2|V|. 
\end{align*} Combining the previous two inequalities, we find the result: \begin{align*}\lambda_2\leq \frac{2(t+1)|\Sigma|}{t^2|V|} = \parens{2/t + o(1/t)}\alpha = 8\alpha^2 + o(\alpha^2).\end{align*} \end{proof} Now we attempt to bound $\lambda_2$ in terms of the Cheeger cut-set in order to achieve a Buser-type result. Observe that the Rayleigh minimizing function for $\lambda_2$ must have certain properties. \begin{lem}\label{lem:gMonotone} The function $g(k)$ corresponding to a non-constant minimizer of the Rayleigh quotient in Equation \ref{eq:Eigen} is monotone in $k$. \end{lem} \begin{proof}[Proof of Lemma \ref{lem:gMonotone}.] We will induct on $k$. The base case is trivial since $g(0)=0$ by the Dirichlet boundary condition on $\mathrm{dist}^{-1}_{\Sigma}(0)=\Sigma$. Without loss of generality, assume that $g(1)\geq 0$, else replace $g$ with $-g$ and proceed to the induction step. Assume for contradiction that $g$ is monotone increasing up to some $k$ in its domain, but that $g(k+1)<g(k)$. Then replacing $g(k+1)$ by $2g(k)-g(k+1)$, the numerator of $R(g)$ is unchanged as $$\left [g(k)-\big (2g(k)-g(k+1)\big )\right ]^2=\big (g(k)-g(k+1)\big )^2.$$ At the same time, the denominator of $R(g)$ increases since $\big (2g(k)-g(k+1)\big )^2>g(k+1)^2$, therefore the quotient $R(g)$ decreases, contradicting the assumption that $g$ is a non-constant minimizer of $R(g)$. \end{proof} We are now able to bound the Rayleigh quotient within a constant factor. To bound $\lambda_2$, we apply Equation \ref{eq:Eigen} giving the Rayleigh quotient \begin{align}\label{eqn:Ray4} \lambda_2 \leq R:= \inf_f\frac{\frac{1}{2}\sum_{k=0}^{T}\nu(k)\biggl[ \parens{f(k)-f(k+1)}^2 + \parens{f(k)-f(k-1)}^2\biggr]}{\sum_{k=1}^{T}f^2(k)\left (1-h_{out}\sum_{i=0}^k\nu(i)\right )}, \end{align} where the infimum is taken over all functions $f:{\mathbb Z}\to{\mathbb R}$ with $f(0) = 0$, $f(1)\neq 0$, $f(i) = 0$ if $i<0$ and $f(i) = f(T)$ if $i>T$. \begin{theorem}\label{thm:vgB}The bounds on $R$ are $\frac{1}{8B}\leq R\leq \frac{1}{2B},$ where $$B = \sup_{n\geq 1}\left (\sum_{k=n}^T (1-h_{out}\sum_{i=0}^k \nu(i))\right )\left (\sum_{k=1}^n\frac{1}{\nu(k)+\nu(k-1)}\right ).$$ \end{theorem} \begin{proof} To apply a result of L. Miclo \cite{Miclo}, we write Equation \ref{eqn:Ray4} in a different form: set $g(k) = f(k)-f(k-1)$ for $k\in {\mathbb Z}$. Observe that $f(k) = \sum_{i=1}^k g(i)$. Also observe that $g(k) = 0$ if $k\leq 0$ or $k>T$. We have \begin{align*} 2R = \inf_g\frac{\sum_{k=0}^{T}\nu(k)\left [g(k+1)^2 + g(k)^2\right ]} {\sum_{k=1}^{T}\biggl(\sum_{i=1}^k g(i) \biggr)^{\!\!2}\left (1-h_{out}\sum_{i=0}^k \nu(i)\right )}\\ = \inf_g \frac{\sum_{k=1}^{T}g(k)^2\left ( \nu(k)+\nu(k-1) \right )}{\sum_{k=1}^{T}\left (\sum_{i=1}^k g(i) \right )^{\!\!2}\left (1-h_{out}\sum_{i=0}^k \nu(i)\right )},\end{align*} taken over all functions $g:{\mathbb N}\to{\mathbb R}$ that are not identically zero. To simplify, we write the volume growth and decay bounds as $\mu(k) = 1-h_{out}\sum_{i=0}^k \nu(i)$ and $\zeta(k) = \nu(k)+\nu(k-1)$ if $1\leq k\leq T$, and $\mu(k) = \zeta(k) = 0$ if $k> T$. We have \begin{align*}2R = \inf_g \frac{\sum_{k=1}^{T}g(k)^2\zeta(k)}{\sum_{k=1}^{T}\left (\sum_{i=1}^k g(i) \right )^{\!\! 2}\mu(k)}.\end{align*} The result follows from Proposition 1 in \cite{Miclo}. \end{proof} An immediate corollary is a bound on the spectral gap, obtained by combining Theorem \ref{thm:vgB} with the bound $\lambda_2\leq R$.
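The constant $B$ in Theorem \ref{thm:vgB} is a one-dimensional optimization over $n$ and is straightforward to evaluate once a growth bound $\nu$ and a value of $h_{out}$ are fixed. The following Python sketch (ours; the function name and the sample values are illustrative only) computes $B$ and the resulting bound $\tfrac{1}{2B}$ on $\lambda_2$ for the constant growth bound $\nu\equiv 1$, matching the asymptotics worked out in Example \ref{ex:constant}.
\begin{verbatim}
# Sketch (ours): evaluate B from Theorem thm:vgB for a growth bound nu
# and outer vertex isoperimetric constant h_out; then lambda_2 <= 1/(2B).
def hardy_B(nu, h_out):
    partial, s = [], 0.0
    for v in nu:                       # partial sums of nu
        s += v
        partial.append(s)
    # T: largest index with 1 > h_out * sum_{i<=T} nu(i)
    T = max(k for k in range(len(nu)) if h_out * partial[k] < 1.0)
    best = 0.0
    for n in range(1, T + 1):
        left = sum(1.0 - h_out * partial[k] for k in range(n, T + 1))
        right = sum(1.0 / (nu[k] + nu[k - 1]) for k in range(1, n + 1))
        best = max(best, left * right)
    return best

h_out = 0.01
nu = [1.0] * 200                       # constant growth, as in Example ex:constant
B = hardy_B(nu, h_out)
print(B, 1.0 / (2.0 * B))              # compare with (27/2) * h_out^2
\end{verbatim}
The resulting bound reads as follows.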
\begin{theorem}\label{thm:lambda_2} The inequality $$\lambda_2(G) \leq \frac{1}{2B}$$ holds, where $$B = \sup_{n\geq 1}\left (\sum_{k=n}^T \left (1-h_{out}\sum_{i=0}^k \nu(i)\right )\right )\left (\sum_{k=1}^n\frac{1}{\nu(k)+\nu(k-1)}\right ).$$ \end{theorem} A case of particular interest is when $|\Sigma| = \max_{i\in {\mathbb Z}}|\mathrm{dist}^{-1}_{\Sigma}(i)|$. In this case we may set $\nu\equiv 1$. \begin{corollary}\label{corr:lambda2} If the vertex-isoperimetric cut-set $\Sigma$ satisfies $\displaystyle |\Sigma| = \max_{i\in {\mathbb Z}}|\mathrm{dist}^{-1}_{\Sigma}(i)|$, then $$\lambda_2\leq \frac{27}{2}h_{out}^2(1+o(1)).$$\end{corollary} The proof is found in Example \ref{ex:constant}. Under these hypotheses the Cheeger lower bound $\lambda_2 \geq c\, h_{out}^2/d$ is tight up to a linear factor of $d$. Observe that this result is related to Theorem \ref{thm:biglevel}: assuming without loss of generality that $h_{out} = |\Sigma|/|V^+|$, this quantity behaves similarly to the term $\alpha = |\Sigma|/|V|$ in that theorem. \subsection{Results for the higher Cheeger constants} We define the {\bf higher order, outer vertex Cheeger constant} to be $$h_{out}(n)=\min_{V_1, \ldots , V_n} \max_i \left \{ \frac{|\partial_{out} V_i|}{|V_i|} \right \},$$ where $V_1, V_2, \ldots, V_n\subset V$ are non-empty, pairwise disjoint, and have the property that $\cup_{i=1}^n V_i =V.$ Our main focus in this subsection is to develop enough of the properties of $h_{out}(n)$ to give the following analogue of Corollary \ref{corr:lambda2} for the higher eigenvalues: \begin{theorem}\label{thm:higherBuser} Assume that $\nu(i)=1$ for all $i \in [T^-, T^+].$ If $n\geq 2$ and $h_{out}(n)<1,$ then we have \begin{align*}\lambda_k(G)\leq k^2h_{out}(n)^2 \parens{\frac{27\pi^2}{16}+o(1)}. \end{align*} \end{theorem} The proof of Theorem \ref{thm:higherBuser} is found in Example \ref{ex:HigherEigen}, and the remaining portion of this section is devoted to developing enough properties of $h_{out}(n)$ to support the proof of this result. The concept of the higher Cheeger constant of graphs, as well as the first Cheeger-type and Buser-type inequalities for the higher Cheeger constants (in various forms), has been studied by many authors; see for instance \cite{LOGT,LRT12, Miclo2}. We will assume that $$h_{out}(n)=\max_{i=1,2,\ldots, n} \left \{ \frac{|\partial_{out}V_i|}{|V_i|} \right \} = \frac{|\partial_{out}V_n|}{|V_n|}.$$ For convenience and without loss of generality, we assume that $$\frac{|\partial_{out} V_1|}{|V_1|} \leq \frac{|\partial_{out}V_2|}{|V_2|} \leq \cdots \leq \frac{|\partial_{out}V_n|}{|V_n|}.$$ Further, we may also construct the $V_i$ such that if $$\frac{|\partial_{out}V_{k-1}|}{|V_{k-1}|}=\frac{|\partial_{out}V_k|}{|V_k|},$$ then $|V_{k-1}| \geq |V_k|.$ To prove bounds on $\lambda_n(G)$ with respect to $h_{out}(n),$ there are two plausible approaches: \begin{enumerate} \item Prove a monotonicity-type estimate bounding $h_{out}(n)$ from below by $h_{out}(2).$ Then apply these estimates directly to Lemma \ref{lem:d_lowerbound}. \item Prove an analogue of Lemma \ref{lem:d_lowerbound} for $h_{out}(n)$ in place of $h_{out}(2).$ \end{enumerate} While we take approach 1 for convenience, we mention approach 2, since we would be interested in any work in this direction that might produce better bounds. The fact that $h_{out}(n)\geq h_{out}(2)$ follows immediately from the following result.
\begin{proposition}\label{prop:hmonotonicity2} With $h_{out}(n)$ defined as above, for $n \geq 3,$ we have $$h_{out}(n-1) \leq h_{out}(n).$$ \end{proposition} \begin{proof} Using the notation established in this section, we remind that reader that \begin{equation*} h_{out}(n)= \max_{1\leq i \leq n} \frac{|\partial_{out}V_i|}{|V_i|} =\frac{|\partial_{out}V_n|}{|V_n|}. \end{equation*} Consider the sets $V_1,\dots, V_n$ that optimize $h_{out}(n)$. We form a collection of $n-1$ sets that will be a candidate to optimize $h_{out}(n-1)$ by merging $V_1$ and $V_2$ to make $V^* = V_1\cup V_2$ and by retaining the other $n-2$ sets. Observe that $$\frac{|\partial_{out}V^*|}{|V^*|}\leq \frac{|\partial_{out}V_1| + |\partial_{out}V_2|}{|V_1|+|V_2|}\leq \max\left \{\frac{|\partial_{out}V_1|}{|V_1|}, \frac{|\partial_{out}V_2|}{|V_2|}\right \},$$ where the first inequality relies on the fact that $|\partial_{out}(V_1\cup V_2)|\leq |\partial_{out}V_1| + |\partial_{out}V_2|$. The second inequality uses the rule $\frac{a+b}{c+d}\leq \max\{\frac{a}{c},\frac{b}{d}\}$ when $a,b,c,d > 0$. Combining this bound with the monotonicity of $\frac{|\partial_{out}V_i|}{|V_i|}$, we find that $$\frac{|\partial_{out}V^*|}{|V^*|}\leq \max_{1\leq i \leq n} \frac{|\partial_{out}V_i|}{|V_i|} =\frac{|\partial_{out}V_n|}{|V_n|};$$ that is, the maximum ratio on these $n-1$ sets that partition $V$ is $\frac{|\partial_{out}V_n|}{|V_n|}$. Because $h_{out}(n-1)$ is the minimum value of the maximum ratio taken over any choice of $n-1$ sets that partition $V$, we find that $$h_{out}(n-1)\leq \frac{|\partial_{out}V_n|}{|V_n|} = h_{out}(n).$$ \end{proof} \begin{remark}\label{rmk:d_lowerbound} Recall that the terms $h_{out}(n)$ and $\sum_{i=0}^k \nu(i)$ are both positive. Using these facts, the following bound is immediate from combining Proposition \ref{prop:hmonotonicity2} with Lemma \ref{lem:d_lowerbound}. So, with the same notation and assumptions as in Lemma \ref{lem:d_lowerbound}, we have $$|\Sigma_k| \geq |\Sigma| \left ( 1-h_{out}(n) \sum_{i=0}^k \nu(i) \right ).$$ \end{remark} In the next section, we will cite this remark in the analysis of some examples. \section{Examples of spectral gap bounds using volume growth} In this section, we use Theorem \ref{thm:lambda_2} to bound the second eigenvalue by the volume growth. First we obtain several general bounds depending only on the growth function $\nu(k)$. Second, we use these results to bound $\lambda_2$ for specific graphs where the growth function is known. In each example where a bound on $\lambda_2(G)$ is computed, we compute $B$ from the statement of Theorem \ref{thm:lambda_2}. \subsection{Examples of volume growth functions} \begin{example}\label{ex:exponential} If $\nu(i)$ is exponential, i.e., $\nu(i) = c^i$ for some value $c>1$, then $T$ satisfies \begin{align*}\frac{c^{T+1}-1}{c-1}\leq \frac{1}{h_{out}}\leq \frac{c^{T+2}-1}{c-1}.\end{align*} As such, we have \begin{align*}h_{out}\leq \frac{c-1}{c^{T+1}-1}.\end{align*} Note that it is trivial that $\nu(i) = d\cdot(d-1)^{i-1} < d^i$ is a volume growth bound for all $d$-regular graphs. This bound is achieved by a tree where $|\Sigma|$ is a single vertex. So we only need to consider the case $c\leq d$. 
If $T\geq n\geq 1$, it follows that \begin{align*}\sum_{k=n}^T \left (1-h_{out}\sum_{i=0}^k \nu(i)\right ) &= (T-n+1)-\sum_{k=n}^T h_{out} \frac{c^{k+1}-1}{c-1} \\ &\geq (T-n+1) - \sum_{k=n}^T\frac{c^{k+1}-1}{c^{T+1}+1}\\ &=(T-n+1)\left (1+\frac{1}{c^{T+1}-1}\right )-\frac{c^{T+2}-c^{n+1}}{(c-1)(c^{T+1}-1)}\,.\end{align*} We also have that \begin{align*}\sum_{k=1}^n\frac{1}{\nu(k)+\nu(k-1)} = \sum_{k=1}^n \frac{1}{(c+1)c^{k-1}} =\frac{1-c^{-n}}{c-\tfrac{1}{c}} = \frac{c-c^{1-n}}{c^2-1}\,.\end{align*} Combining the previous two equations, we have \begin{align*}B\geq \sup_{T\geq n\geq 1} \left ((T-n+1)\left (1+\frac{1}{c^{T+1}-1}\right )-\frac{c^{T+2}-c^{n+1}}{(c-1)(c^{T+1}-1)}\right )\left (\frac{c-c^{1-n}}{c^2-1}\right ). \end{align*} Taking $n = 1$, we find \begin{align*}B&\geq \biggl(T+\frac{T}{c^{T+1}-1}-\frac{c^{T+2}-c^{2}}{(c-1)(c^{T+1}-1)}\biggr)\biggl(\frac{c-1}{c^2-1}\biggr) \\ &\geq\biggl(T+\frac{T}{c^{T+1}-1}-\frac{c}{c-1}\biggr)\biggl(\frac{1}{c+1}\biggr).\end{align*} On the other hand, for any value $n$ satisfying $1\leq n\leq T$, we have that \begin{align*}&\sum_{k=n}^T \left (1-h_{out}\sum_{i=0}^k \nu(i)\right ) \leq T\,, \end{align*} and, as a result, it follows that \begin{align*} \sum_{k=1}^n\frac{1}{\nu(k)+\nu(k-1)} = \sum_{k=1}^n \frac{1}{(c+1)c^{k-1}} = \frac{1-c^{-n}}{c-\tfrac{1}{c}}\leq \frac{1}{c-\tfrac{1}{c}} = \frac{c}{c^2-1}\,. \end{align*} So, combining all parts, we see that \begin{align*}\biggl(T+\frac{T}{c^{T+1}-1}-\frac{c}{c-1}\biggr)\biggl(\frac{1}{c+1}\biggr)\leq B\leq T\frac{c}{c^2-1}.\end{align*} In particular, if $c\geq 1+{\varepsilon},$ for a fixed ${\varepsilon}>0$, then $B = \Theta(T/c)$ and $\lambda_2=O(c/T).$ \end{example} \begin{example}\label{ex:exponential2} Of particular interest is the case that $\nu(0) = 1$, $\nu(i) = d c^{i-1}$ if $i\geq 1$, where $d$ is the common degree of vertices in the graph and $c>1$. This is the form of Theorem \ref{thm:vg}. Proceeding in the same way as the previous example, we see that $T$ satisfies \begin{align*}1+d\frac{c^{T}-1}{c-1}\leq \frac{1}{h_{out}}\leq 1 + d\frac{c^{T+1}-1}{c-1}.\end{align*} It follows that \begin{align*}h_{out}\leq \frac{c-1}{c-1+d(c^{T}-1)}.\end{align*} In the case where $T\geq n\geq 1$, we have \begin{align*} \sum_{k=n}^T \left (1-h_{out}\sum_{i=0}^k \nu(i)\right ) &= (T-n+1)-\sum_{k=n}^T h_{out} \frac{c-1+d(c^{k}-1)}{c-1} \\ & \geq (T-n+1) - \sum_{k=n}^T\frac{c-1+ d(c^{k}-1)}{c-1+d(c^{T}-1)}\\ &=(T-n+1)\left (1+\frac{d+1-c}{c-1+d(c^{T}-1)}\right )\\ &\hspace{.2in} -\frac{d(c^{T+1}-c^{n})}{(c-1)(c-1+d(c^{T}-1))}.\end{align*} In addition, we find that \begin{align*} \sum_{k=1}^n\frac{1}{\nu(k)+\nu(k-1)} &= \frac{1}{1+d}+\sum_{k=2}^n \frac{1}{d(c+1)c^{k-2}} \\ &=\frac{1}{1+d}+\frac{1}{d}\cdot\frac{1-c^{1-n}}{c-\tfrac{1}{c}} \\ &= \frac{1}{1+d}+\frac{1}{d}\cdot\frac{c-c^{2-n}}{c^2-1}.\end{align*} Combining the previous two equations, we have \begin{align*} B\geq \sup_{T\geq n\geq 1} \left ((T-n+1)\left (1+\frac{d+1-c}{c-1+d(c^{T}-1)}\right ) -\frac{d(c^{T+1}-c^{n})}{(c-1)(c-1+d(c^{T}-1))}\right )\\ \cdot \left (\frac{1}{1+d}+\frac{1}{d}\cdot\frac{c-c^{2-n}}{c^2-1}\right ). \end{align*} Taking $n = 1,$ we find \begin{align*}B & \geq \left (T\left (1+\frac{d+1-c}{c-1+d(c^{T}-1)}\right )- \frac{d(c^{T+1}-c)}{(c-1)(c-1+d(c^{T}-1))}\right )\\ &\hspace{.2in} \cdot \left (\frac{1}{1+d}+\frac{1}{d}\cdot\frac{c-c}{c^2-1}\right ) \\ &\geq \left (T+\frac{T(d+1-c)}{c-1+d(c^{T}-1)}-\frac{c}{c-1}\right )\left(\frac{1}{1+d}\right ). 
\end{align*} On the other hand, if $1\leq n\leq T$, we have \begin{align*} \sum_{k=n}^T \left (1-h_{out}\sum_{i=0}^k \nu(i)\right ) \leq T\,, \end{align*} and \begin{align*}\sum_{k=1}^n\frac{1}{\nu(k)+\nu(k-1)} = \frac{1}{1+d}+\frac{1}{d}\cdot\frac{c-c^{2-n}}{c^2-1}\leq \frac{1}{1+d}+\frac{1}{d}\cdot\frac{c}{c^2-1}\,.\end{align*} Thus, combining all parts, we see that \begin{align*}\biggl(T+\frac{T(d+1-c)}{c-1+d(c^{T}-1)}-\frac{c}{c-1}\biggr)\biggl(\frac{1}{1+d}\biggr)\leq B\leq T\biggl(\frac{1}{1+d}+\frac{1}{d}\cdot\frac{c}{c^2-1} \biggr).\end{align*} If $c\geq 1+{\varepsilon}$ for a fixed value ${\varepsilon} > 0$, then $B = \Theta(T/d)$ and $\lambda_2=O(d/T)$. \end{example} \begin{example}\label{ex:constant} If $\nu(i) = 1$ for all $i\geq 0$, then $T$ satisfies $T+1\leq \frac{1}{h_{out}}\leq T+2.$ \begin{align*} B &= \sup_{n\geq 1}\left (\sum_{k=n}^T\big (1-h_{out}(k+1)\big )\right ) \left (\sum_{k=1}^n \frac{1}{2}\right ) \\ &= \sup_{n\geq 1}\left ((T-n+1)-h_{out}\left [\binom{T+1}{2}-\binom{n}{2}\right ]\right ) \left(\frac{n}{2}\right ) \\ &\geq \sup_{n\geq 1}\biggl(T+1-n-\frac{1}{T+1}\frac{T^2+T-n^2+n}{2}\biggr)\cdot\frac{n}{2} \\ &= \frac{T^2}{27}(1\pm o(1)) \\ &= \frac{1}{27h_{out}^2}(1\pm o(1)). \end{align*} Here the supremum for $B$ is achieved when $n$ is roughly equal to $T/3$. It follows that $\lambda_2\leq \frac{27}{2}h_{out}^2(1+o(1))$. In this case the Cheeger lower bound $\lambda_2 \geq c\, h_{out}^2/d$ is tight up to a linear factor of $d$. Note that this is a case of Theorem \ref{thm:biglevel} in which the Cheeger cut-set is also the largest shell. This follows because for all $i$, $|\Sigma_i|\leq \nu(i)|\Sigma_0| = |\Sigma_0|$, where $\Sigma_0$ is by assumption the Cheeger cut-set. \end{example} \begin{example} \label{ex:polynomial} If $b\geq 1$ is a constant so that $\nu(i) = 1+i^b$ for all $i\geq 0,$ then $T$ satisfies \begin{align*}\frac{1}{h_{out}} \geq \sum_{i=0}^T (1+i^b)\geq \int_0^T x^b\, dx=\frac{T^{b+1}}{b+1}. \end{align*} For the computation of $B,$ we use the inequality \begin{equation}\label{eq:Referee} \sum_{i=0}^k \left (1+i^b\right ) \leq 3\cdot \sum_{i=1}^k i^b \leq 3\cdot \frac{(k+1)^{b+1}-1}{b+1}. \end{equation} The first inequality follows from $$\sum_{i=1}^k i^b \geq \sum_{i=1}^k i=\frac{k(k+1)}{2}\geq \frac{k+1}{2}.$$ So then $2\sum_{i=1}^k i^b \geq k+1$ and so $$\sum_{i=0}^k \left (1+i^b\right ) \leq (k+1)+\sum_{i=1}^k i^b \leq 3 \cdot \sum_{i=1}^k i^b.$$ The second inequality in (\ref{eq:Referee}) follows from $$\sum_{i=1}^k i^b\leq \int_1^{k+1} x^b \, dx=\frac{(k+1)^{b+1}-1}{b+1},$$ since $x^b$ is non-decreasing. Thus, we have \begin{align*}B &= \sup_{n\geq 1}\left (\sum_{k=n}^T\left (1-h_{out}\left (\sum_{i=0}^k(1+i^b)\right )\right )\right ) \left (\sum_{k=1}^n \frac{1}{2+k^b+(k-1)^b}\right )\\ &\geq \sup_{n\geq 1}\left (\sum_{k=n}^T\left (1-3h_{out}\left (\frac{k^{b+1}-1}{b+1}\right )\right )\right ) \Theta(1) \\ &\geq \sup_{n\geq 1}\left((T-n+1)-3h_{out}\left(\sum_{k=n}^T\frac{k^{b+1}-1}{b+1}\right)\right ) \Theta(1)\\ &= \sup_{n\geq 1}\Theta\left ((T-n+1)-3h_{out}\frac{T^{b+2}-n^{b+2}}{(b+1)(b+2)}(1+o(1))\right )\\ &\geq \sup_{n\geq 1}\Theta\left ((T-n+1)-3\cdot \frac{T^{b+2}-n^{b+2}}{T^{b+1}(b+2)}(1+o(1))\right )\\ &= \Theta(T). \end{align*} So we conclude that $\lambda_2= O(1/T) = O(h_{out}^{1/(b+1)})$. This example represents {\it polynomial} volume growth. Recall that in the setting of Ollivier curvature, every graph with positive curvature has polynomial volume growth with some positive integer $b$.
But the Buser bound we hoped to achieve is $\lambda_2 = O(h_{out}^2)$. The reason for the difference may be that Paeng's polynomial volume growth bound is a correct bound for the volume growth around any initial set. In this section we are only concerned with bounding volume growth around the Cheeger-achieving cut-set. For that set, a tighter bound may apply. Our next examples are instances of this phenomenon, where the volume growth is much slower around the Cheeger cut-set than around general vertex sets. \end{example} We will now provide an application of Theorem \ref{theo:eigenhigher} to Buser-type inequalities for combinations of higher eigenvalues and the higher Cheeger constants. \begin{example}\label{ex:HigherEigen} Assume that $\nu(i)=1$ for all $i \in [T^-, T^+]$ and $h_{out}(n)<1$ for some $n \geq 2.$ Due to the symmetry of this example, we abuse notation slightly to simplify the presentation, defining $B^{\pm}$ to be the $T^{\pm} \times T^{\pm}$ Toeplitz, tridiagonal matrix defined by \begin{equation*} B^{\pm}_{ij}=\begin{cases} 4, & \text{ if }i=j\\ -2, &\text{ if }|i-j|=1\\ 0, &\text{ otherwise}.\end{cases} \end{equation*} Because $B^{\pm}$ differs from $A^{\pm}$ in only the $(T^{\pm},T^{\pm})$ entry, we have that \begin{equation*} \langle g,B^{\pm} g\rangle_{\pm}-\langle g, A^{\pm}g\rangle_{\pm}= B^{\pm}_{T^{\pm},T^{\pm}} g(T^{\pm})^2-A^{\pm}_{T^{\pm},T^{\pm}}g(T^{\pm})^2=2g(T^{\pm})^2 \geq 0. \end{equation*} Note that the eigenvalues of the matrix $B^{\pm},$ denote them $\psi_k,$ are given in closed form by \begin{equation}\label{eq:ToeEigen} \psi_k=4\left (1-\cos\left ( \frac{k\pi}{T^+-T^-}\right ) \right ), \end{equation} see, for instance, Theorem~2.2 in \cite{KST99}, wherein a new approach was proposed (with extensions to Toeplitz-like matrices), while \cite{Smith85} details the classical treatment. Now we combine Equation \ref{eq:ToeEigen} with Theorem \ref{theo:eigenhigher} which implies that \begin{equation}\label{eq:higherBound} \lambda_k(G)\leq 2\cdot\min_{\left \lceil \frac{k}{2}\right\rceil \leq t \leq \min\{T^+,T^-\}} \frac{1-\cos \left (\frac{ \left \lceil \frac{k}{2}\right \rceil}{t+1}\pi \right )} {1-h_{out}(n)(t+1)}, \end{equation} where the denominator follows from Remark \ref{rmk:d_lowerbound}. In particular, the weight $\mu(k)$ from Theorem \ref{theo:eigenhigher} is given by \begin{equation*} \mu(k)=1-h_{out}(n)\sum_{i=0}^k \nu(i) =1-h_{out}(n)(k+1). \end{equation*} It remains to minimize the right hand side of Equation \ref{eq:higherBound}. We will use the simple bound that if $0 \leq x \leq \pi$, with $\frac{2}{\pi^2}x^2 \leq 1-\cos(x) \leq \frac{1}{2}x^2.$ From Equation \ref{eq:higherBound}, we obtain \begin{equation} \lambda_k(G)\leq 2\cdot\min_{\left \lceil \frac{k}{2}\right\rceil \leq t \leq \min\{T^+,T^-\}} \frac{ \left \lceil \frac{k}{2}\right \rceil^2\frac{\pi^2}{2}} {[1-h_{out}(n)(t+1)](t+1)^2}. \end{equation} Observe that in this step of our estimate, we use a bound that is tight up to a constant factor $\pi^2/4.$ One might be tempted to use a better approximation for $\cos(x)$, but this factor gives an upper bound on the potential improvement from that method. Elementary calculus reveals that the minimum is achieved when $(t+1) = \frac{2}{3h_{out}(n)}.$ Of course this may be not an integer: we will set \begin{align*} t+1 = \left \lceil \frac{2}{3h_{out}(n)}\right \rceil \text{ if }\frac{k}{2}\leq\frac{2}{3h_{out}(n)}\leq \min\{T^+,T^-\}. 
\end{align*} In this case, we find that \begin{align*}\lambda_k(G)\leq 2\cdot \frac{ \left \lceil \frac{k}{2}\right \rceil^2\frac{\pi^2}{2}} {[1-h_{out}(n)\lceil \frac{2}{3h_{out}(n)} \rceil ](\lceil \frac{2}{3h_{out}(n)} \rceil)^2} = k^2h_{out}(n)^2 \parens{\frac{27\pi^2}{16}+o(1)}.\end{align*} For this problem we have $1/h_{out}(n) < 2+\min\{T^+,T^-\}$, and so $\frac{2}{3h_{out}(n)}\leq \min\{T^+,T^-\}$ as long as $\min\{T^+,T^-\}\geq 4$. We will not analyze the case that $\frac{2}{3h_{out}(n)}<k/2$ or that $\min\{T^+,T^-\}<4$. It is easy to check that both cases give (trivial) bounds of the form $\lambda_k \leq C$ for a universal constant $C.$ \end{example} \subsection{Examples of specific graphs} We will now test our methods on several concrete examples. For these examples, information about the spectrum is already known, allowing us to compare the results. \begin{example}[Hypercube] The hypercube $\Omega_d$ is commonly expressed as the graph with vertex set $\{0,1\}^d$ and $x\sim y$ if and only if $x$ and $y$ disagree in exactly one coordinate. With this notation, we define the $k$-slice $A_k\subset V$ to be the set of vertices that are $1$ in exactly $k$ coordinates. It is clear that $|A_k| = \binom{d}{k}$. It is known that $h_{out}$ is achieved by the $\lfloor d/2\rfloor$-slice $\Sigma$, with $h_{out} = \Theta(1/\sqrt{d})$ \cite{Har66}. With this choice of $\Sigma$, we see that $\mathrm{dist}^{-1}_{\Sigma}(i) = A_{\lfloor d/2\rfloor+i}$, and $$|\mathrm{dist}^{-1}_{\Sigma}(i)| = \binom{d}{\lfloor d/2\rfloor+i}\leq \binom{d}{\lfloor d/2\rfloor} = |\Sigma|.$$ As such, we may set $\nu(i) = 1$, and we have $$T = \left \lfloor\frac{1}{h_{out}}-1\right \rfloor = \Theta(\sqrt{d}).$$ By the results of Example \ref{ex:constant}, $\lambda_2 \leq \tfrac{27}{2}h_{out}^2(1+o(1))$, thus $\lambda_2 = O(1/d)$. It is well-known that the actual value of $\lambda_2$ is indeed $\Theta(1/d)$. \end{example} \begin{example}[Discrete torus] If $C_n$ is the $n$-cycle for $n\geq 3$, the discrete torus $C_n^d$ is the $2d$-regular graph $C_n\square C_n \square \cdots \square C_n$. It is understood that $h_{out}$ is achieved by the ball $B(x,\lceil\tfrac{dn}{4}\rceil-1)$ with $\Sigma = S(x,\lceil\tfrac{dn}{4}\rceil)$, where $x$ is an arbitrary (fixed) vertex \cite{BoL06}. The level sets are $\mathrm{dist}^{-1}_\Sigma(i) = S(x,i+\lceil\tfrac{dn}{4}\rceil)$ with $|\mathrm{dist}^{-1}_\Sigma(i)|\leq |\Sigma|.$ We will give a brief argument that $\tfrac{2}{nd}(1-o(1)) < h_{out} <\tfrac{4}{n}(1+o(1))$. First note that $|\Sigma|< 2n^{d-1}$, as the latter is achieved by the boundary of the candidate cut-set bounded by two parallel $(d-1)$-planes separated by a distance $\lfloor n/2\rfloor$. It follows that $h_{out} < 2n^{d-1}/\big(\tfrac{1}{2}n^d(1+o(1))\big) = \tfrac{4}{n}(1+o(1))$. Next, consider the set $A\subset C_n^{d-1}$ defined to contain those $a$ for which there is an element of $\Sigma$ whose first $d-1$ entries are $a$. Let $\overline{x}$ be the first $d-1$ entries of $x$; then $A = \bigcup_{k=0}^{\lfloor n/2\rfloor} S(\overline{x},\lceil\tfrac{dn}{4}\rceil-k)$. Inductively, we know that $A$ contains the disjoint union of the $n/2-O(1)$ largest shells around $\overline{x}$ in $C_n^{d-1}$; as there are $nd/2 + O(1)$ shells in total and we take the largest fraction $1/d - o(1)$ to form $A$, $|A|\geq \parens{\tfrac{1}{d}-o(1)}|C_n^{d-1}| = (n^{d-1}/d)(1-o(1))$. Clearly $|A| \leq |\Sigma|$; it follows that $h_{out} > (n^{d-1}/d)/(\tfrac{1}{2}n^d)(1-o(1)) = \tfrac{2}{nd}(1-o(1))$.
And so we have determined $h_{out}$ up to a factor linear in $d$; in particular, $h_{out} = \Theta_d(\tfrac{1}{n})$. Proceeding similarly to the hypercube, we may use $\nu(i) = 1$ as in Example \ref{ex:constant} to see that $\lambda_2\leq \tfrac{27}{2}h_{out}^2(1+o(1))$, thus $$\lambda_2 = O_d(\tfrac{1}{n^2})\,.$$ It is well-known that the actual value is $\lambda_2 = \Theta(\tfrac{1}{n^2})$, so our estimate is tight up to a factor depending on $d$. \end{example} \end{document}
\begin{document} \title[$L_\infty$-estimates for degenerate SPDE]{Supremum estimates for degenerate, quasilinear stochastic partial differential equations} \author[K. Dareiotis and B. Gess]{Konstantinos Dareiotis and Benjamin Gess} \address[K. Dareiotis]{Max Planck Institute for Mathematics in the Sciences, Inselstrasse 22, 04103 Leipzig, Germany} \email{[email protected]} \address[B. Gess]{Max Planck Institute for Mathematics in the Sciences, Inselstrasse 22, 04103 Leipzig and Faculty of Mathematics, University of Bielefeld, 33615 Bielefeld, Germany} \email{[email protected]} \begin{abstract} We prove a priori estimates in $L_\infty$ for a class of quasilinear stochastic partial differential equations. The estimates are obtained independently of the ellipticity constant $\varepsilon$ and thus imply analogous estimates for degenerate quasilinear stochastic partial differential equations, such as the stochastic porous medium equation. \end{abstract} \maketitle \section{Introduction} We consider quasilinear stochastic partial differential equations (SPDEs) of the form\footnote{Throughout the article we use the summation convention with respect to integer valued repeated indices.} \begin{equation} \label{eq: quasi} \begin{aligned} du &=\left[ \D_i \left( a^{ij}_t(x,u)\D_j u +F^i_t(x,u)\right)+ F_t(x,u) \right] \, dt \\ & + \left[ \D_i \big( g^{ik}_t(x,u) \big)+G^k_t(x, u) \right] \, d \beta^k_t, \\ u_0&= \xi, \end{aligned} \end{equation} for $(t,x) \in [0,T] \times Q=:Q_T$, with zero Dirichlet conditions on $\D Q$, for some bounded open set $Q \subset \bR^d$ and $\beta^k$ being independent Wiener processes. In this work, using Moser's iteration techniques (see e.g.\ \cite{MOS}), we prove the following: First, roughly speaking, we show that if the initial condition $\xi$ is in $L_\infty$ then the solution is in $L_\infty$ for all times $t \geq 0$. Second, we show a regularizing effect, that is, if the initial condition is in $L_2$, then for all $t >0$ the solution $u(t)$ is in $L_\infty$ and the corresponding norm blows up at a rate of $t^{-\tilde{\theta}}$, for some constant $\tilde{\theta}>0$, as $t \searrow 0$. A key point in these results is that the obtained estimates are uniform with respect to the ellipticity constant of the diffusion coefficients $a^{ij}$ and thus can be applied to the case of degenerate, quasilinear SPDE, such as the porous medium equation. More precisely, under certain conditions on the coefficients $a^{ij}, F^i, F, g^{ik}, G^k$ (see Assumptions \ref{as: nd}-\ref{as:boundednessV} below for details), we prove the following $L^\infty$ bounds: \begin{theorem*}[see Theorems \ref{thm: quasilinear nd} and \ref{thm: smoothing2}] Let $\alpha>0$, {\color{black}$\mu \in [2, \infty] \cap ((d+2)/2,\infty]$}. There exists constants $N$, $\tilde \theta>0$ such that if $u$ is a solution of \eqref{eq: quasi}, then \begin{equs} \mathbb{E} \|u \|^\alpha _{L_\infty(Q_T)} \ \leq N \mathbb{E} \left( 1+\|\xi \|_{L_\infty(Q)}^\alpha {\color{black}+\|V^1\|_{L_\mu(Q_T)}^\alpha+ \|V^2\|_{L_{2\mu}(Q_T)}^\alpha } \right), \end{equs} and \begin{equation*} \mathbb{E} \|u \|^\alpha _{L_\infty((\rho, T) \times Q)} \ \leq \rho^{-\tilde{\theta}} N \mathbb{E} \left( 1+\|\xi \|_{L_2(Q)}^\alpha {\color{black}+\|V^1\|_{L_\mu(Q_T)}^\alpha+ \|V^2\|_{L_{2\mu}(Q_T)}^\alpha} \right), \end{equation*} for all $\rho \in (0,T)$. 
\end{theorem*} {\color{black}In the above theorem $V^1$ and $V^2$ are functions that can be regarded as dominating any existing ``free terms'' coming from the drift part and the noise part of the equation, respectively (cf. Assumption \ref{as: nd} below).} A key point in the two estimates above is that the constants $N$ and $\tilde{\theta}$ are independent of the ellipticity constant of the diffusion coefficients $a^{ij}$. Hence, the established estimates carry over without change to degenerate SPDE such as stochastic porous media equations \begin{equation} \label{eq: PME stratonovich} \begin{aligned} du = & \left[ \Delta \left( |u|^{m-1}u \right) +f_t(x) \right] \, dt + \sum_{i=1}^d \sigma_t\D_i u \circ d\tilde{\beta}^i_t\\ +& \sum_{k=1}^\infty\left[\nu^k_t(x) u +g^k_t(x) \right] dw^k_t \\ u_0= & \xi, \end{aligned} \end{equation} with zero Dirichlet conditions on $\D Q$ and $m \in (1, \infty)$, where $\tilde{\beta}^1_t,\ldots,\tilde{\beta}^d_t, w^1_t,w^2_t,\ldots$ are independent $\bR$-valued standard Wiener processes. The corresponding theorem reads as follows: \begin{theorem*}[see Theorems \ref{thm: theorem boundedness} and \ref{thm: smoothing PME}] Let ${\color{black}\mu \in [2, \infty] \cap ((d+2)/2,\infty]}$. There exist constants $N$, $\tilde \theta>0$ such that if $u$ is a solution of \eqref{eq: PME stratonovich}, then \begin{equation*} \mathbb{E} \|u \|^2 _{L_\infty(Q_T)} \ \leq N \mathbb{E} \left( 1+\|\xi \|_{L_2(Q)}^2{\color{black} +\|f\|^2_{L_\mu(Q_T)}+ \||g|_{l_2}\|^2_{L_{2\mu}(Q_T)}}\right), \end{equation*} and \begin{align*} \mathbb{E} \|u \|^2 _{L_\infty((\rho, T) \times Q)} \ &\leq \rho^{-\tilde{\theta}} N \mathbb{E} \left(1+\| \xi \|_{H^{-1}}^2 {\color{black}+\|f\|^2_{L_\mu(Q_T)}+ \||g|_{l_2}\|^2_{L_{2\mu}(Q_T)}} \right), \end{align*} for all $\rho \in (0,T)$. \end{theorem*} We restrict to affine operators in the noise in \eqref{eq: PME stratonovich} for the sole reason that no complete well-posedness theory in $L_p$ spaces of \eqref{eq: PME stratonovich} with non-linear noise is yet available. We emphasize that this linear structure is \textit{not} required in the derivation of the a priori bounds established in this work. Concerning nonlinear noise, we also refer the reader to \cite{BEN2} for a well-posedness theory of such equations in a kinetic framework. In the following we will briefly comment on the existing literature on the regularity of solutions to stochastic porous media equations. The existence of strong solutions (i.e.\ $|u|^{m-1}u \in L^2((0,T);H^1_0)$) has been shown in \cite{BEN1} under the assumption that the operators in the noise are bounded and Lipschitz continuous and under the assumption that $\xi \in L_{m+1}$. In the case of linear multiplicative noise (and $\sigma=0$), \eqref{eq: PME stratonovich} can be transformed into a PDE with random coefficients. Based on this, the H\"older-continuity and boundedness of solutions have been shown in \cite{G13-2,BAR}. Concerning the regularity theory for deterministic singular and degenerate quasilinear equations we refer to \cite{CAF,DIB2,SACKS} (see also the monographs \cite{DIB,VAS} and the references therein). The regularity of solutions to non-degenerate SPDE has been addressed in \cite{DENIS,HOF,KOMA,KOMA2,WANG, MATE}. For general background on SPDE and stochastic evolution equations we refer to \cite{KR,MR,BAR2,PARDOUX}. \subsection{Notation} Let us introduce some notation that will be used throughout this paper. Let $T$ be a positive real number.
Let $(\Omega, \mathcal{F}, \mathbb{F}, \bP)$ be a filtered probability space, where the filtration $\mathbb{F}=( \mathcal{F}_t)_{t \in [0,T]}$ is right continuous and $\mathcal{F}_0$ contains all $\bP$-null sets. We assume that on $\Omega$ we are given a sequence of independent one-dimensional $\mathbb{F}$ -Wiener processes $(\beta^k_t)_{k=1}^\infty$. The predictable $\sigma$-field on $\Omega_T:= \Omega \times [0,T]$ will be denoted by $\mathcal{P}$. Let $Q \subset \bR^d$ be a bounded open domain. For $t \in [0,T]$ we set $Q_t= [0,t] \times Q$. The norm in $L_p(Q)$ will be denoted by $\|\cdot\|_{L_p}$. We denote by $H^1_0$ the completion of $C^\infty_c(Q)$ under the norm $$ \|u\|_{H^1_0}^2 := \int_Q |\nabla u |^2 \, dx $$ and by $H^{-1}$ the dual of $H^1_0(Q)$. For $q \in [1, \infty)$, we denote by $\mathbb{H}_q^{-1}$ the set of all $H^{-1}$-valued, $\mathbb{F}$-adapted, continuous processes $u$, such that $u \in L_q(\Omega_T, \mathcal{P}; L_q(Q))$. Similarly, we denote by $\mathbb{L}_2$ the set of all $L_2(Q)$-valued, $\mathbb{F}$-adapted, continuous processes $u$, such that $u \in L_2(\Omega_T, \mathcal{P}; H^1_0(Q))$. We will write $(\cdot, \cdot)_H$ for the inner product in a Hilbert space $H$. For $m \geq 1$, we will consider the Gel'fand triple $$ L_{m+1}(Q) \hookrightarrow H^{-1}\hookrightarrow (L_{m+1}(Q))^*. $$ The duality pairing between $L_{m+1}(Q)$ and $(L_{m+1}(Q))^*$ will be denoted by ${}_{L_{m+1}^*}\langle \cdot, \cdot \rangle_{L_{m+1}}$. Notice that this duality is defined by means of the inner product in $H^{-1}$. Consequently, for $u,v \in C^\infty_c(Q)$ $$ {}_{L_{m+1}^*}\langle u, v \rangle_{L_{m+1}} =(u,v)_{H^{-1}}= (u, (-\Delta)^{-1}v)_{L_2(Q)} \neq \int_Q u v \, dx. $$ For more details we refer to \cite[pp. 68-70]{MR}. We will use the summation convention with respect to integer valued repeated indices. Moreover, when no confusion arises, we suppress the $(t,x)$-dependence of the functions for notational convenience. The article is organized in two sections. In Section 2 we prove our results for the non-degenerate equation. In Section 3, we verify the well-posedness of the degenerate equation, and we approximate the solution by the method of the vanishing viscosity, and by using the estimates of the previous section we pass to the limit. \section{Non-Degenerate Quasilinear SPDE} As already mentioned in the introduction, in order to obtain the desired estimates for equation \eqref{eq: PME stratonovich} we first study a class of non-degenerate SPDEs. More precisely, we consider SPDEs of the form \begin{align} \nonumber du &=\left[ \D_i \left( a^{ij}_t(x,u)\D_j u +F^i_t(x,u)\right)+ F_t(x,u) \right] \, dt \\ \label{eq: nd quasilinear} & + \left[ \D_i \big( g^{ik}_t(x,u) \big)+G^k_t(x, u) \right] \, d \beta^k_t, \\ \label{eq: initial nd quasilinear} u_0&= \xi, \end{align} for $(t,x) \in [0,T] \times Q$, with zero Dirichlet conditions on $\D Q$. \begin{assumption} \label{as: nd} $$ $$ \begin{enumerate}[(i)] \item The functions $a^{ij}, F^i, F : \Omega_T \times Q \times \bR \to \bR$ are $\mathcal{P}\otimes \mathcal{B}(Q)\otimes \mathcal{B}(\bR) $-measurable. \item The functions $g^i, G : \Omega_T \times Q \times \bR \to l_2$ are $\mathcal{P}\otimes \mathcal{B}(Q)\otimes \mathcal{B}(\bR) $-measurable. 
\item There exist constants $c > 0$, $\theta>0$ and $\tilde{m} >0$ such that for all $(\omega,t,x,r)\in \Omega_T \times Q \times \bR$ and all $\xi \in \bR^d$ \begin{equation} \label{eq: ellipticity} (a^{ij}_t(x,r) - \frac{1}{2}\D_rg_t^{ik}(x,r)\D_rg_t^{jk}(x,r)) \xi^i \xi^j \geq (c | r|^{\tilde{m}}+\theta) |\xi|^2. \end{equation} {\color{black}\item \label{item: regularity gi}For all $(\omega,t) \in \Omega_T$ we have $F^i_t \in C^1(\bar{Q} \times \bR)$, $g_t^i\in C^2( \bar{Q} \times \bR;l_2)$. Moreover, there exist predictable processes $ V^1, V^2: \Omega_T \to L_2(Q)$, such that $V^1, V^2 \in L_2((0,T); L_2(Q))$ almost surely, and a constant $K$ such that for all $(\omega, t, x, r)\in \Omega_T \times Q \times \bR$ \begin{align} \label{eq: growth condition,F} |F_t(x,r)|+|F^i_t(x,r)|+| \D_i F^i_t(x,r)|& \leq V^1_t(x)+ K |r| \\ \label{eq: growth condition} |G_t(x,r)|_{l_2}+|g^i_t(x,r)|_{l_2}+|\D_ig^i_t(x,r)|_{l_2}+ |\D^2_ig^i_t(x,r)|_{l_2}&\leq V^2_t(x)+ K |r| \\ \label{eq: boundedness of coef} |\D_rg^i_t(x,r)|_{l_2}+|\D_r\D_i g^i_t(x,r)|_{l_2} &\leq K. \end{align}} \item \label{item: regularity g} Let $ \mathbb{N}_g := \{k \in \mathbb{N} : \exists i \in \{1,...,d\}, g^{ik} \not\equiv 0\}$. We assume in addition that for all $(\omega,t)$ we have $(G^k_t)_{k \in \mathbb{N}_g} \in C^1( \bar Q \times \bR; l_2(\mathbb{N}_g))$, and for all $(\omega,t,x,r) \in \Omega_T \times Q\times \bR $ {\color{black}\begin{equation} \label{eq: one extra derivative} \sum_{k \in \mathbb{N}_g} |\D_i G^k_t(x,r)|^2 \leq V^2_t(x)+ K |r|. \end{equation}} \item The initial condition $\xi $ is an $\mathcal{F}_0$-measurable $L_2(Q)$-valued random variable. \end{enumerate} \end{assumption} {\color{black}\begin{assumption} \label{as:boundednessV} There exists a constant $N$ such that for all $(\omega, t, x) \in \Omega_T \times Q$ we have \begin{equs} \label{eq:boundednessV} |V^1_t(x)|+|V^2_t(x)| \leq N. \end{equs} \end{assumption}} {\color{black}\begin{remark} Assumption \ref{as:boundednessV} is purely technical in the sense that our estimates do not depend on the bound of $V^1$ and $V^2$. As will be seen in the next section, Assumption \ref{as:boundednessV} can be removed provided that one has solvability of the equation and some stability properties with respect to the ``free terms''. \end{remark}} \begin{definition} \label{def: definition L2} A function $u \in \mathbb{L}_2$ will be called a solution of \eqref{eq: nd quasilinear}-\eqref{eq: initial nd quasilinear} if \begin{enumerate}[(i)] \item For all $i,j \in \{1,...,d\}$, almost surely $$ \int_0^T \| a^{ij}(u) \D_iu \|^2_{L_2} \, dt < \infty. $$ \item For all $\phi \in H^1_0$, almost surely, for all $t \in [0,T]$ \begin{align*} (u_t, \phi )_{L_2} =(\xi , \phi )_{L_2} & + \int_0^t\left[ \left( F(u), \phi \right)_{L_2} - \left( a^{ij}(u) \D_j u +F^i (u), \D_i \phi \right)_{L_2} \right] ds \\ &+ \int_0^t \left[ \big( G^k(u), \phi \big)_{L_2}-\big( g^{ik}(u), \D_i \phi \big)_{L_2} \right] \, d \beta^k_s. \end{align*} \end{enumerate} \end{definition} We first present a collection of lemmas that will be used in the proofs of the main theorems. The following can be found in \cite{MY} (see Proposition IV.4.7 and Exercise IV.4.31/1). \begin{lemma}\label{lem: Revuz-Yor} Let $X$ be a non-negative, adapted, right-continuous process, and let $Y$ be a non-decreasing, adapted, continuous process such that $$ \mathbb{E} (X_{\tau}| \mathcal{F}_0 )\leq \mathbb{E} (Y_{\tau}| \mathcal{F}_0) $$ for any bounded stopping time $\tau \leq T$.
Then for any $\sigma\in(0,1)$ $$ \mathbb{E} \sup_{t\leq T}X_t^{\sigma}\leq \sigma^{-\sigma}(1-\sigma)^{-1}\mathbb{E} Y_T^{\sigma}. $$ \end{lemma} The following lemma is well known (see, e.g., \cite[p.8, Proposition 3.1]{DIB}). We provide the proof in order to emphasize that the constant $C$ can be chosen independent of $\lambda$ for $\lambda \in [1,2]$ (see below). \begin{lemma} \label{lem: embeding} There exists a constant $N$ such that for all $\lambda \in [1,2]$, $s \in [0,T]$ and all $v \in L_\infty((s,T); L_\lambda(Q))\cap L_2((s,T) ; H^1_0(Q))$, we have \begin{equation} \label{eq:emb} \int_s^T \int_Q |v|^q\, dx dt \leq N^q \left( \int_s^T \int_Q |\nabla v|^2 \, dx dt \right) \left( \esssup_{s\leq t \leq T} \int_Q |v|^\lambda \, dx \right)^{2/d}, \end{equation} where $q=q(\lambda)=2(d+\lambda)/d$. \end{lemma} \begin{proof} By the Gagliardo-Nirenberg inequality (see \cite[p.62, Theorem 2.2]{LAD}) we have (notice that $d(2-\lambda)+2\lambda>0$) for a.e. $t \in (s,T)$ $$ \|v_t\|_{L_q} \leq N(\lambda)\| \nabla v_t\|^{2/q}_{L_2} \|v_t\|^{(q-2)/q}_{L_\lambda} $$ where $$ N(\lambda):= \left( I_{d=1}\frac{1+\lambda}{\lambda}+I_{d=2}\max\left\lbrace\frac{q(d-1)}{d}, \frac{\lambda+2}{2} \right\rbrace+I_{d>2} \frac{2(d-1)}{d-2} \right)^{2/q}. $$ Since $C:=\sup_{\lambda \in [1,2]} N(\lambda) < \infty$, the result follows by taking the $q$-th power in the inequality above and integrating over $(s,T)$. \end{proof} Next is It\^o's formula for the $p$-th power of the $L_p$ norm. It can be proved as \cite[Lemma 2]{KOMA} with the help of a localization argument. \begin{lemma} \label{lem: ito formula} Let Assumption \ref{as: nd} hold and let $u$ be a solution of \eqref{eq: nd quasilinear}. Moreover, suppose that for some $p \geq 2$ and some $s \in [0,T)$, almost surely {\color{black}$$ \| u_s\|^p_{L_p}+ \int_s^T \left( \| V^1\|_{L_p}^p+ \| V^2\|_{L_p}^p \right) dt < \infty. $$} Then, almost surely \begin{equation} \label{eq: finite energy} \sup_{s \leq t \leq T} \|u_t\|_{L_p}^p+ \int_s^T \int_Q (|u|^{p-2}+|u|^{\tilde{m}+p-2}) | \nabla u|^2 \, dx dt < \infty. \end{equation} Moreover, almost surely \begin{align} \nonumber &\| u_t\|_{L_p}^p= \|u_s\|_{L_p}^p+ \int_s^t \int_Q \left( a^{ij}(x,u)\D_j u +F^i(x,u)\right) p(1-p)|u|^{p-2} \D_i u \,dxdz \\ \nonumber &+ \int_s^t \int_Q \left(\frac{1}{2} p(p-1)| \D_i(g^i(x,u))+G(x,u)|_{l_2}^2|u|^{p-2}+ p F(x,u)u |u|^{p-2} \right) dx dz \\ \label{eq: Ito formula} &+ \int_s^t\int_Q \left( p(1-p) g^{ik}(x,u) |u|^{p-2} \D_iu+pG^k(x,u) u |u|^{p-2} \right) \,dx d\beta^k_z, \end{align} for all $t \in [s,T]$. \end{lemma} {\color{black}From now on we fix $\mu \in \Gamma_d:= [2, \infty]\cap ((d+2)/2, \infty]$, we denote by $\mu'$ its conjugate exponent, that is, $ \frac{1}{\mu}+\frac{1}{\mu'}=1$, and we set \begin{align*} \gamma&:= 1+ (2/d), \\ \bar \gamma & := \gamma / \mu', \\ \mathfrak{N}&:=\{ l \in [2, \infty) : l=\tilde{m}(1+\bar \gamma+...+\bar \gamma^n)/ \mu', \ n \in \mathbb{N} \}, \\ \kappa & := \sup_{p \in \mathfrak{N}} \max\{2p/(p-1), I_{p \neq 2} 4p/(p-2)\} < \infty. \end{align*} Notice that $\bar \gamma >1$, and that $\kappa$ is indeed finite, since the elements of $\mathfrak{N}$ form an increasing sequence and both $p \mapsto 2p/(p-1)$ and $p \mapsto 4p/(p-2)$ are decreasing on their respective domains. } \begin{lemma} \label{lem: right right} Let Assumptions \ref{as: nd}-\ref{as:boundednessV} hold and let $u$ be a solution of \eqref{eq: nd quasilinear}.
Then, for all $p \in \mathfrak{N}$, $q \geq p$, and $\eta \in (0,1)$ we have \begin{align} \nonumber &\mathbb{E} \left( A_q \vee \left( \sup_{t \leq T} \|u_t\|^p_{L_p(Q)} +\int_0^T \int_Q\left| \nabla |u|^{(\tilde{m}+p)/2} \right|^2 \, dx dt \right) \right)^\eta \\ \label{eq: right right estimate} &{\color{black}\leq \frac{\eta^{-\eta}}{1-\eta} (N p^\kappa)^\eta \mathbb{E} \left( \left(A_q \vee \| u\|_{L_{\mu'p}(Q_T)}^p \right)+p^{-p}\left( \|V^1\|_{L_\mu(Q_T)}^p+ \|V^2\|_{L_{2\mu}(Q_T)}^p\right) \right)^\eta,} \end{align} where $A_q=(1+ \|\xi\|_{L_\infty})^q$, and $N$ is a constant depending only on $\tilde{m}, T, c,K,$ $d, \mu$, and $|Q|$. \end{lemma} \begin{proof} We assume that the right hand side in \eqref{eq: right right estimate} is finite, or else there is nothing to prove. Under this assumption, for each $p \in \mathfrak{N}$ we have the formula \eqref{eq: Ito formula} with $s=0$. We proceed by estimating the terms that appear on the right hand side of \eqref{eq: Ito formula}. We have $$ F^i(x,u)p(1-p)|u|^{p-2} \D_i u= -\D_i (\mathcal{R}_p(F^i)(x,u))+ \mathcal{R}_p(\D_iF^i)(x,u), $$ where for a function $f$ we have used the notation $$ \mathcal{R}_p(f)(x,r)= \int_0^rp(p-1) f(x,s)|s|^{p-2} \, ds. $$ Moreover, from \eqref{eq: finite energy}, \eqref{eq: growth condition,F}, the fact that $V^1$ is bounded and the definition of $\mathcal{R}_p(F^i)(x,u)$, it follows (see Lemma \ref{lem:Appendix}) that $\mathcal{R}_p(F^i)(\cdot,u) \in W^{1,1}_0(Q)$ for a.e. $(\omega, t)\in \Omega_T$, which in particular implies that $$ \int_Q \D_i (\mathcal{R}_p(F^i)(\cdot,u)) \, dx =0. $$ Moreover, one can see from \eqref{eq: growth condition,F} that {\color{black}\begin{align*} | \mathcal{R}_p(\D_iF^i)(x,r)| &\leq p V^1(x) |r|^{p-1}+K(p-1)|r|^p. \end{align*}} By H\"older's inequality and Young's inequality, we obtain {\color{black}\begin{equs} & \ \ \ \int_0^t \int_Q | \mathcal{R}_p(\D_iF^i)(x,u)| \, dx ds \\ & \leq p \|V^1\|_{L_\mu(Q_t)} \|u\|^{p-1}_{L_{\mu'(p-1)}(Q_t)}+K(p-1) \|u\|_{L_{p}(Q_t)}^p \\ &\leq N p \|V^1\|_{L_\mu(Q_t)} \|u\|^{p-1}_{L_{\mu'p}(Q_t)}+K(p-1) \|u\|_{L_{\mu'p}(Q_t)}^p \\ & \leq N p^{-p} \|V^1\|^p_{L_\mu(Q_t)} +N(Kp+p^{2p/(p-1)}) \|u\|_{L_{\mu'p}(Q_t)}^p. \end{equs}} Consequently, almost surely, for each $t \in [0,T]$ \begin{equation} \label{eq: estimate F} \int_0^t \int_Q F^i(x,u)p(1-p)|u|^{p-2} \D_i u \, dx ds {\color{black}\leq N p^{-p}\|V^1\|_{L_\mu(Q_t)}^p + Np^\kappa\|u\|_{L_{\mu'p}(Q_t)}^p,} \end{equation} where $N$ depends only on $K$ and $|Q|$. We continue with the estimate of the term $$ \frac{1}{2} p(p-1) \int_0^t\int_Q | \D_i(g^i(x,u))+G(x,u)|_{l_2}^2 |u|^{p-2}\, dx ds. $$ Obviously, \begin{align*} & \int_Q | \D_i(g^i(x,u))+G(x,u)|_{l_2}^2 |u|^{p-2} \, dx \\ =&\sum_{k \in \mathbb{N}_g^c}\int_Q|G^k(x,u)|^2|u|^{p-2}\, dx+ \sum_{k \in \mathbb{N}_g}\int_Q| \D_i(g^{ik}(x,u))+G^k(x,u)|^2 |u|^{p-2} \, dx. \end{align*} By the growth condition \eqref{eq: growth condition}, H\"older's inequality and Young's inequality we have \begin{equs} & \frac{1}{2}p(p-1)\sum_{k \in \mathbb{N}_g^c}\int_0^t \int_Q |G^k(x,u)|^2|u|^{p-2} \, dx ds \\ \leq & {\color{black}N p^{-p}\|V^2\|^p_{L_{2\mu}(Q_t)}+ Np^\kappa \|u\|^p_{L_{\mu'p}(Q_t)}.} \end{equs} Moreover, by Assumption \ref{as: nd} \eqref{item: regularity g} we have \begin{align*} &\sum_{k \in \mathbb{N}_g}| \D_i(g^{ik}(x,u))+G^k(x,u)|^2 \\ =&\sum_{k \in \mathbb{N}_g}|\D_rg^{ik}(x,u)\D_iu|^2+ \sum_{k \in \mathbb{N}_g}| \D_ig^{ik}(x,u)+G^k(x,u)|^2 \\ +&\sum_{k \in \mathbb{N}_g}2\D_rg^{ik}(x,u)\D_iu(\D_ig^{ik}(x,u)+G^k(x,u)).
\end{align*} By the growth condition \eqref{eq: growth condition}, H\"older's inequality and Young's inequality we have \begin{align*} &\frac{1}{2}p(p-1)\sum_{k \in \mathbb{N}_g}\int_0^t \int_Q | \D_ig^{ik}(x,u)+G^k(x,u)|^2|u|^{p-2} \, dx ds\\ \leq & {\color{black}N p^{-p}\|V^2\|^p_{L_{2\mu}(Q_t)}+ Np^\kappa \|u\|^p_{L_{\mu'p}(Q_t)}}. \end{align*} Moreover, we have \begin{align*} &p(p-1)\sum_{k\in \mathbb{N}_g } \D_rg^{ik}(x,u)\D_iu(\D_ig^{ik}(x,u)+ G^k(x,u))|u|^{p-2} \\ &=\D_i\left( \mathcal{R}_p\left( \mathfrak{g}\right) (x,u)\right)- \mathcal{R}_p\left( \D_i \mathfrak{g}\right) (x,u), \end{align*} where $$ \mathfrak{g}:= \sum_{k\in \mathbb{N}_g} \D_rg^{ik}( \D_ig^{ik}+G^k). $$ As before, it follows that $\mathcal{R}_p\left( \mathfrak{g}\right) (\cdot,u) \in W^{1,1}_0(Q)$, which in turn implies that $$ \int_Q \D_i (\mathcal{R}_p\left( \mathfrak{g}\right) (x,u)) \, dx =0. $$ By \eqref{eq: growth condition}, \eqref{eq: boundedness of coef}, and \eqref{eq: one extra derivative}, we have $$ \int_0^t \int_Q |\mathcal{R}_p\left( \D_i \mathfrak{g}\right) (x,u)|\, dx ds \leq {\color{black}N p^{-p}\|V^2\|^p_{L_{2\mu}(Q_t)}+ Np^\kappa \|u\|^p_{L_{\mu'p}(Q_t)}}. $$ Consequently, almost surely, for all $t \in [0,T]$ we have \begin{align} \nonumber &\frac{1}{2} p(p-1) \int_0^t\int_Q | \D_i(g^i(x,u))+G(x,u)|_{l_2}^2 |u|^{p-2}\, dx ds \\ \nonumber \leq & \frac{1}{2} p(p-1)\int_0^t \int_Q |\D_rg^i(x,u) \D_iu |^2_{l_2} |u|^{p-2}\, dx ds \\ \label{eq: estimate quadratic variation} + & {\color{black}N p^{-p}\|V^2\|^p_{L_{2\mu}(Q_t)}+ Np^\kappa \|u\|^p_{L_{\mu'p}(Q_t)}}, \end{align} where $N$ depends only on $K$ and $|Q|$. In a similar manner one gets $$ p \int_0^t\int_Q F(x,u)u |u|^{p-2} \, dx ds {\color{black}\leq N p^{-p}\|V^1\|^p_{L_\mu(Q_t)}+ Np^\kappa \|u\|^p_{L_{\mu'p}(Q_t)}.} $$ Using the above inequality combined with \eqref{eq: estimate F}, \eqref{eq: estimate quadratic variation}, and \eqref{eq: ellipticity} we obtain from \eqref{eq: Ito formula} \begin{align} \nonumber &\| u_t\|_{L_p}^p + cp(p-1)\int_0^t \int_Q|u|^{\tilde{m}+p-2}| \nabla u|^2 \, dx ds \\ \label{eq: consequence of Ito's} \leq & \|\xi\|_{L_p}^p + {\color{black}N \left(p^{-p}\|V^1\|^p_{L_\mu(Q_t)}+p^{-p}\|V^2\|^p_{L_{2\mu}(Q_t)}+ p^\kappa \|u\|^p_{L_{\mu'p}(Q_t)} \right)} +M_t, \end{align} where $M_t$ is the local martingale from \eqref{eq: Ito formula}. For any stopping time $ \tau \leq T$ and any $B \in \mathcal{F}_0$ we have by the Burkholder-Davis-Gundy inequality \begin{align*} \mathbb{E} \sup_{t \leq \tau} I_B|M_t| & \leq N \mathbb{E} I_B \left( \int_0^\tau \sum_k \left( p(1-p)\int_Q g^{ik}(x,u) |u|^{p-2} \D_iu \, dx \right)^2 \, ds \right)^{1/2} \\ &+N \mathbb{E} I_B \left( \int_0^\tau \sum_k \left(p\int_Q G^k(x, u)u |u|^{p-2}\, dx \right)^2 \, ds \right)^{1/2}. \end{align*} We have $$ p(1-p) g^{ik}(x, u) |u|^{p-2} \D_iu =-\D_i( \mathcal{R}_p ( g^{ik} ) (x,u))+ \mathcal{R}_p ( \D_i g^{ik} ) (x,u). $$ As before, we have $\mathcal{R}_p ( g^{ik} ) (\cdot,u) \in W^{1,1}_0(Q)$, which implies that $$ \int_Q \D_i(\mathcal{R}_p ( g^{ik} ) (x,u)) \, dx =0.
$$ Next notice that by Minkowski's integral inequality, H\"older's inequality, and Young's inequality, we have \begin{align} \nonumber \sum_k \left( \int_Q \mathcal{R}_p ( \D_i g^{ik} ) (x,u) \, dx \right)^2 &\leq \left( \int_Q | \mathcal{R}_p ( \D_i g^{ik} ) (x,u) |_{l_2} \, dx \right)^2 \\ \nonumber &\leq \left( \int_Q \int_{-|u|}^{|u|}p(p-1) |\D_ig^i(x,s)|_{l_2} |s|^{p-2} \, ds dx \right)^2 \\ \nonumber &\leq N \left( 2p \int_Q {\color{black}|V^2(x)|}|u|^{p-1}+|u|^p \, dx \right)^2 \\ \label{eq:quadratic-estimate} &\leq N \|u\|_{L_p}^p \left( p^2\int_Q{\color{black} |V^2(x)|^2 }|u|^{p-2} \, dx + p^2 \|u\|^p_{L_p}\right), \end{align} {\color{black}which implies \begin{equs} \label{eq: quadratic variation} & \int_0^t \sum_k \left( \int_Q \mathcal{R}_p ( \D_i g^{ik} ) (x,u) \, dx \right)^2 \, ds \\ \leq & N \sup_{s \leq t} \|u_s\|_{L_p}^p \left( p^{-p}\|V^2\|^p_{L_{2\mu}(Q_t)}+ p^\kappa \|u\|^p_{L_{\mu'p}(Q_t)} \right). \end{equs}} Consequently, by Young's inequality we have for any $\varepsilon>0$ \begin{align} \nonumber & N \mathbb{E} I_B \left( \int_0^\tau \sum_k \left( p(1-p)\int_Q g^{ik}(x,u) |u|^{p-2} \D_iu \, dx \right)^2 \, ds \right)^{1/2} \\ \nonumber &\leq \varepsilon \mathbb{E} I_B \sup_{t \leq \tau}\|u_t\|_{L_p}^p + \frac{1}{\varepsilon}N \mathbb{E} I_B {\color{black}\left( p^{-p}\|V^2I_{[0, \tau]}\|^p_{L_{2\mu}(Q_T)}+ p^\kappa \|uI_{[0, \tau]}\|^p_{L_{\mu'p}(Q_T)} \right) }. \end{align} In a similar manner, for any $\varepsilon>0$ we get \begin{align*} &N \mathbb{E} I_B \left( \int_0^\tau \sum_k \left(p\int_Q G^k(x,u) u |u|^{p-2} \, dx \right)^2 \, ds \right)^{1/2} \\ &\leq \varepsilon \mathbb{E} I_B \sup_{t \leq \tau}\|u_t\|_{L_p}^p + \frac{1}{\varepsilon}N \mathbb{E} I_B {\color{black} \left( p^{-p}\|V^2I_{[0, \tau]}\|^p_{L_{2\mu}(Q_T)}+ p^\kappa \|uI_{[0, \tau]}\|^p_{L_{\mu'p}(Q_T)} \right) .} \end{align*} Hence, we obtain for any $\varepsilon>0$ \begin{equs} \mathbb{E} \sup_{t \leq \tau}I_B| M_t| & \leq \varepsilon \mathbb{E}\sup_{t \leq \tau}I_B\|u_t\|_{L_p}^p \\ + \frac{1}{\varepsilon}N \mathbb{E} I_B & \left( p^{-p}\|V^2I_{[0, \tau]}\|^p_{L_{2\mu}(Q_T)}+ p^\kappa \|uI_{[0, \tau]}\|^p_{L_{\mu'p}(Q_T)} \right). \label{eq: estimate martingale} \end{equs} By \eqref{eq: consequence of Ito's} we have \begin{align} \nonumber &\mathbb{E} I_B \sup_{t \leq \tau} \| u_t\|_{L_p}^p \leq \mathbb{E} I_B\|\xi\|_{L_p}^p +\mathbb{E} I_B \sup_{t \leq \tau}|M_t| \\ +N &\mathbb{E} I_B {\color{black}\left( p^{-p}\|V^1I_{[0, \tau]}\|^p_{L_\mu(Q_T)}+ p^{-p}\|V^2I_{[0, \tau]}\|^p_{L_{2\mu}(Q_T)}+ p^\kappa \|uI_{[0, \tau]}\|^p_{L_{\mu'p}(Q_T)} \right).} \label{eq:sup-u} \end{align} By \eqref{eq: consequence of Ito's} again, after a localization argument we obtain \begin{align} \nonumber &\frac{4cp(p-1)}{(p+\tilde{m})^2}\mathbb{E} I_B \int_0^ \tau \int_Q \left| \nabla |u|^{(p+ \tilde{m})/2} \right|^2 \, dx ds \leq N \mathbb{E} I_B \|\xi\|_{L_p}^p \\ & + {\color{black}N \mathbb{E} I_B \left( p^{-p}\|V^1I_{[0, \tau]}\|^p_{L_\mu(Q_T)}+ p^{-p}\|V^2I_{[0, \tau]}\|^p_{L_{2\mu}(Q_T)}+ p^\kappa \|uI_{[0, \tau]}\|^p_{L_{\mu'p}(Q_T)} \right)} \label{eq: estimate deriv of power} \end{align} and notice that for all $p \geq 2$ $$ \frac{4cp(p-1)}{(p+\tilde{m})^2} \geq N(\tilde{m}, c), $$ and therefore it can be dropped from the left hand side of \eqref{eq: estimate deriv of power}. Let us denote by $\tau_n$ the first exit time of $\|u_t\|^p_{L_p}+ {\color{black}\| V^1 \|^p_{L_\mu(Q_t)} +\| V^2 \|^p_{L_{2\mu}(Q_t)}}$ from $(-n,n)$, and by $C_n:= \{ \|\xi\|_{L_p} \leq n\}$.
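Explicitly, with the convention that the exit time equals $T$ if the process never leaves $(-n,n)$, this means $$ \tau_n:= \inf\left\{ t \in [0,T] \, : \, \|u_t\|^p_{L_p}+ \| V^1 \|^p_{L_\mu(Q_t)} +\| V^2 \|^p_{L_{2\mu}(Q_t)} \geq n \right\} \wedge T. $$ In particular, since $\sup_{t \leq T}\|u_t\|^p_{L_p}< \infty$ almost surely by \eqref{eq: finite energy}, and since $V^1$ and $V^2$ are bounded by Assumption \ref{as:boundednessV}, we have $\tau_n=T$ for all sufficiently large $n$, almost surely.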
For an arbitrary $C \in \mathcal{F}_0$ and an arbitrary stopping time $\rho \leq T$, we apply \eqref{eq: estimate martingale} with $\tau= \tau_n \wedge \rho=: \rho_n$ and $B= C \cap C_n=:H_n$, which combined with \eqref{eq:sup-u} gives after rearrangement \begin{equs} &\mathbb{E}\sup_{t \leq \rho_n}I_{H_n}\|u_t\|_{L_p}^p \leq N \mathbb{E} I_{H_n} \| \xi \|_{L_p}^p \\ + N &{\color{black}\mathbb{E} I_{H_n} \left( p^{-p}\|V^1I_{[0, \rho_n]}\|^p_{L_\mu(Q_T)}+ p^{-p}\|V^2I_{[0, \rho_n]}\|^p_{L_{2\mu}(Q_T)}+ p^\kappa \|uI_{[0, \rho_n]}\|^p_{L_{\mu'p}(Q_T)} \right).} \end{equs} By the above inequality and \eqref{eq: estimate deriv of power} (applied with $\tau= \rho_n$, $B=H_n$) one can easily see that for all $q \geq p$ we have $$ \mathbb{E} I_C X^{n, q}_\rho \leq \mathbb{E} I_C Y^{n,q}_\rho < \infty, $$ where \begin{align*} X_t^{n,q}&:=I_{C_n}\left( A_q \vee \left( \sup_{s \leq \tau_n \wedge t} \|u_s\|^p_{L_p} +\int_0^{t \wedge\tau_n} \int_Q\left| \nabla |u|^{(\tilde{m}+p)/2} \right|^2 \, dx ds \right) \right) \\ Y_t^{n,q}&:= N p^\kappa I_{C_n}{\color{black}\left( \left( A_q \vee\|uI_{[0, t \wedge\tau_n]}\|^p_{L_{\mu'p}(Q_T)}\right) \right. } \\ & {\color{black}\left. + p^{-p}\left( \|V^1I_{[0, t \wedge\tau_n]}\|^p_{L_\mu(Q_T)}+ \|V^2I_{[0, t \wedge\tau_n]}\|^p_{L_{2\mu}(Q_T)}\right) \right)} \end{align*} and $A_q=(1+ \|\xi\|_{L_\infty})^q$. By Lemma \ref{lem: Revuz-Yor} we have $$ \mathbb{E} \left( X^{n, q}_T \right)^\eta \leq \frac{\eta^{-\eta}}{1-\eta} \mathbb{E} \left( Y^{n,q}_T \right)^\eta. $$ The assertion now follows by letting $n \to \infty$. \end{proof} \begin{lemma} \label{lem: right right local} Let Assumptions \ref{as: nd}-\ref{as:boundednessV} hold and let $u$ be a solution of \eqref{eq: nd quasilinear}. Let $\rho \in (0,1)$ and set $r_n=\rho(1-2^{-n})$. Then for all $p \in \mathfrak{N}$, $\eta \in (0,1)$, and $n \in \mathbb{N}$ we have \begin{equs} & \mathbb{E}\left( \sup_{t \in [r_{n+1},T]}\|u\|^p_{L_p} + \int_{r_{n+1}}^T \int_Q \left|\nabla |u|^{(\tilde{m} +p)/2} \right|^2 \, dx dt \right)^\eta \leq \Big(N p^\kappa \frac{2^n}{\rho}\Big)^\eta \frac{\eta^{-\eta}}{1-\eta} \\ & \times {\color{black}\mathbb{E} \left( \| uI_{[r_n, T]}\|_{L_{\mu'p}(Q_T)}^p +p^{-p}\left( \|V^1\|_{L_\mu(Q_T)}^p+ \|V^2\|_{L_{2\mu}(Q_T)}^p\right)\right) ^\eta,} \\ \label{eq: right right local} \end{equs} where $N$ is a constant depending only on $\tilde{m}, T, c,K,d, \mu$ and $|Q|$. \end{lemma} \begin{proof} We assume that the right hand side of \eqref{eq: right right local} is finite and we set $$ c_n =\rho\left( 1- \frac{3}{4}2^{-n} \right). $$ There exists a $t' \in (r_n , c_n)$ such that almost surely $$ \|u_{t'}\|_{L_p}^p+ {\color{black}\int_{t'}^T( \|V^1_s\|_{L_p}^p+\|V^2_s\|_{L_p}^p) \, ds< \infty}. $$ Let $ \psi \in C^1([0, T])$ with $0 \leq \psi \leq 1$, $\psi_t=0$ for $ 0\leq t \leq c_n$, $\psi_t=1$ for $r_{n+1}\leq t \leq T$, and $|\psi'_t| \leq 2^{n+2} \rho^{-1}$. By Lemma \ref{lem: ito formula} we obtain \begin{align} \nonumber &\psi_t\| u_t\|_{L_p}^p= p(1-p)\int_0^t \int_Q \psi \left( a^{ij}(u)\D_j u +F^i(u)\right) |u|^{p-2} \D_i u \, dx ds \\ \nonumber &+ \int_0^t \psi \left[ \frac{1}{2} p(p-1)\int_Q| \D_i(g^i(u))+G(u)|_{l_2}^2 |u|^{p-2}\, dx+ p \int_Q F(u)u |u|^{p-2} \, dx \right] \, ds \\ \nonumber &+ \int_0^t \psi \left[ p(1-p)\int_Q g^{ik}(u) |u|^{p-2} \D_iu\, dx +p\int_Q G^k(u) u |u|^{p-2}\, dx \right] \, d\beta^k_s \\ \label{eq: ito product} &+\int_0^t \| u_s \|^p_{L_p} \psi' \, ds.
\end{align} By using the estimates obtained in the proof of Lemma \ref{lem: right right}, we obtain \begin{equs} \nonumber &\psi_t\| u_t\|_{L_p}^p + cp(p-1)\int_0^t \int_Q\psi |u|^{\tilde{m}+p-2}| \nabla u|^2 \, dx ds \\ & {\color{black}\leq N p^{-p} \left( \|V^1\|_{L_\mu(Q_t)}^p + \|V^2\|_{L_{2\mu}(Q_t)}^p\right)+ N p^\kappa \| \psi^{1/p} u\|^p_{L_{\mu'p}(Q_t)} } \\ \label{eq: consequence of Ito's local} & +\int_0^t \psi'\|u\|^p_{L_p} ds +M_t, \end{equs} where $M_t$ is the local martingale from \eqref{eq: ito product}. From this, by using arguments almost identical to the ones of the proof of Lemma \ref{lem: right right} one gets \begin{equs} \nonumber &\mathbb{E}\left( \sup_{t \in [0,T]}\psi_t \|u_t\|^p_{L_p} + \int_0^T \int_Q \psi \left| \nabla |u|^{(\tilde{m} +p)/2} \right|^2 \, dx dt \right)^\eta \\ \leq & {\color{black}N^\eta \frac{\eta^{-\eta}}{1-\eta} \times \mathbb{E} \left( p^{-p} \left( \|V^1\|_{L_\mu(Q_T)}^p + \|V^2\|_{L_{2\mu}(Q_T)}^p\right) \right.} \\ + & {\color{black} \left. p^\kappa \| \psi^{1/p}u\|^p_{L_{\mu'p}(Q_T)} + \int_0^T \psi' \|u\|_{L_p}^p \, ds \right)^\eta.} \end{equs} Having in mind that {\color{black}$\mu'p>p$} and that $p^\kappa+|\psi'| \leq 8p^\kappa 2^n\rho^{-1}$, the result follows from the properties of $\psi$. \end{proof} \begin{lemma} \label{lem: stochastic Gronwal} Let Assumption \ref{as: nd} hold and let $u$ be a solution of \eqref{eq: nd quasilinear}. Then for all $p \geq 2$ and all $\alpha>0$, we have $$ \mathbb{E} \sup_{t \leq T} \|u_t\|_{L_p}^\alpha \leq N \mathbb{E}\|\xi\|_{L_p}^ \alpha + {\color{black}N \mathbb{E} \left( \int_0^T \|V^1 \|_{L_p}^p + \|V^2 \|_{L_p}^p \, ds \right)^{\alpha/p}}, $$ where $N$ depends on $\alpha, p, K, T$ and $d$. \end{lemma} \begin{proof} We assume that the right hand side is finite. {\color{black}Similarly to \eqref{eq: consequence of Ito's}, one can show that} \begin{align} \nonumber \| u_t\|_{L_p}^p \leq \|\xi\|_{L_p}^p + N {\color{black}\int_0^t \left( \|V^1\|_{L_p}^p+ \|V^2\|_{L_p}^p+ \|u\|^p_{L_p} \right) ds} +M_t, \end{align} where $M_t$ is the local martingale from \eqref{eq: Ito formula}. Moreover, as in the derivation of \eqref{eq:quadratic-estimate}, one can check that $$ \langle M\rangle_t \leq N \int_0^t\left( \|u\|_{L_p}^p{\color{black}\|V^2\|^p_{L_p} }+ \|u\|_{L_p}^{2p} \right) \, ds. $$ The result then follows from Lemma 5.2 in \cite{GGK}. \end{proof} We are now ready to present our first main result. \begin{theorem} \label{thm: quasilinear nd} Let Assumptions \ref{as: nd}-\ref{as:boundednessV} hold and let $u \in \mathbb{L}_2$ be a solution of \eqref{eq: nd quasilinear}-\eqref{eq: initial nd quasilinear}. Then, {\color{black}for all $\mu \in \Gamma_d $}, $\alpha >0$, we have \begin{equation} \label{eq: estimate nd quasilinear} \mathbb{E} \|u \|^\alpha _{L_\infty(Q_T)} \ \leq N \mathbb{E} \left( 1+\|\xi \|_{L_\infty(Q)}^\alpha +{\color{black}\|V^1\|_{L_\mu(Q_T)}^\alpha +\|V^2\|_{L_{2\mu}(Q_T)}^\alpha }\right), \end{equation} where $N$ is a constant depending only on $\alpha, \tilde{m}, T, c,K,d, \mu$ and $|Q|$. \end{theorem} \begin{proof} {\color{black}We fix $\alpha>0$, $\mu\in \Gamma_d$, and let $\mu'$ be the conjugate exponent of $\mu$. Without loss of generality we assume that the right hand side of \eqref{eq: estimate nd quasilinear} is finite. Recall also the notations $\gamma= 1+(2/d)$, $\bar\gamma= \gamma / \mu'(>1)$ and let $\delta:= \tilde{m}\bar\gamma / (\bar\gamma-1)$. 
} By Lemma \ref{lem: embeding}, after raising \eqref{eq:emb} to the power $\gamma^{-1}$, we obtain by Young's inequality (with exponents $p=\gamma$, $p^*=\gamma/(\gamma-1)$, and note that $ 2/ d(\gamma-1)=1$) \begin{equation} \label{eq:C} \left(\int_s^T \int_Q |v|^q\, dx dt\right)^{1/\gamma} \leq C^{q/\gamma} \left( \int_s^T \int_Q |\nabla v|^2 \, dx dt + \esssup_{s\leq t \leq T} \int_Q |v|^\lambda \, dx \right). \end{equation} For $p \geq \tilde{m}$, we apply this inequality with $\lambda= 2p/(\tilde{m}+p) \in [1,2]$, $q=2(1+(\lambda/d))$, $v=|u|^{(\tilde{m}+p)/2}$ (notice that $q(\tilde{m}+p)/2= \tilde{m}+\gamma p{\color{black}= \tilde{m}+\mu'\bar \gamma p}$), and we raise to the power {\color{black}$\alpha (\mu')^{n+1}/ (\delta \gamma^n)$} to conclude that \begin{align} \nonumber &{\color{black}\mathbb{E} \left( A_\alpha \vee \left( \int_0^T \int_Q |u|^{\tilde{m}+\mu'p \bar \gamma} \, dx dt \right) ^{\color{black}{\alpha /(\delta\bar\gamma^{n+1})}} \right)} \\ \nonumber =&\mathbb{E} \left( A_\alpha \vee \left( \int_0^T \int_Q |u|^{\tilde{m}+p \gamma} \, dx dt \right) ^{\color{black}{\alpha (\mu')^{n+1} /(\delta\gamma^{n+1})}} \right) \\ \nonumber \leq& N^{1/{\color{black}\bar \gamma^n}} \mathbb{E} \left( A_\alpha \vee \left( \sup_{t \leq T} \|u_t\|^p_{L_p} +\int_0^T \int_Q \left| \nabla |u|^{(\tilde{m}+p)/2} \right|^2 \, dx dt \right)^{\color{black}{\alpha (\mu')^{n+1}/ (\delta\gamma^n)}} \right) \\ \label{eq: right estimate} \leq & N^{1/{\color{black}\bar \gamma^n}} \mathbb{E} \left( A_{\delta \bar\gamma^n/\mu'} \vee \left( \sup_{t \leq T} \|u_t\|^p_{L_p} +\int_0^T \int_Q \left| \nabla |u|^{(\tilde{m}+p)/2} \right|^2 \, dx dt \right)\right)^{\color{black}{\alpha \mu'/ (\delta\bar \gamma^n)} } \end{align} where, recall that for $q \geq 0$, $A_q:=(1+\|\xi\|_{L_\infty})^q$. Let $$ {\color{black}p_n:= \tilde{m}(1+...+\bar \gamma^n)= \frac{\tilde{m}(\bar \gamma^{n+1}-1)}{\bar \gamma-1},} $$ and let $n_0$ be the minimal positive integer such that $$ {\color{black}p_{n_0} \geq 2\mu' \ \ \text{and} \ \ \alpha \mu' / (\delta \bar \gamma^{n_0} ) < 1.} $$ By combining inequality \eqref{eq: right estimate} ({\color{black}with $p=p_n/\mu'$}) with \eqref{eq: right right estimate} (with {\color{black}$\eta= \alpha \mu' \delta^{-1}\bar \gamma^{-n}$}, {\color{black}$q= \delta \bar \gamma^n /\mu' \geq p_n/\mu'$}) we obtain for all $n \geq n_0$ {\color{black}\begin{align} \nonumber & \mathbb{E} \left( A_{\alpha} \vee \left( \int_0^T \int_Q |u|^{p_{n+1}} \, dx dt \right) ^{\color{black}{\alpha /(\delta \bar \gamma^{n+1})}} \right) \\ \nonumber & =\mathbb{E} \left( A_{\alpha} \vee \left( \int_0^T \int_Q |u|^{\color{black}{\tilde{m}+\bar \gamma p_n}} \, dx dt \right) ^{\color{black}{\alpha /(\delta \bar \gamma^{n+1})}} \right) \\ \nonumber & \leq c_n \mathbb{E} \left(\left(A_{\delta \bar \gamma^n/\mu'} \vee \|u\|^{p_n/\mu'}_{L_{p_n}(Q_T)} \right) \right. \\ \nonumber &\left. 
+\left( \frac{p_n}{\mu'}\right)^{-p_n/\mu'} \left( \|V^1\|_{L_\mu(Q_T)}^{p_n/\mu'}+ \|V^2\|_{L_{2\mu}(Q_T)}^{p_n/\mu'}\right) \right)^{\alpha \mu' /(\delta \bar \gamma^n)} \\ \nonumber & \leq c_n \mathbb{E} \left(A_{\alpha } \vee \left(\int_0^T \int_Q |u|^{p_n} \,dxds \right)^{\alpha /(\delta \bar \gamma^n)} \right) \\ \label{eq: iterable} &+c_n \left( \frac{p_n}{\mu'}\right)^{-\alpha p_n/(\delta \bar \gamma^n)} \mathbb{E} \left(2+ \|V^1\|_{L_\mu(Q_T)}^\alpha+\|V^2\|_{L_{2\mu}(Q_T)}^\alpha\right), \end{align}} where {\color{black}\begin{equation} \label{eq: cn} c_n:= N^{1/ {\color{black}\bar\gamma^n}} (\delta {\color{black}\bar\gamma^n}/ (\mu'\alpha) )^{\alpha \mu' /(\delta {\color{black}\bar \gamma^n})} \frac{1}{1-(\alpha \mu' /(\delta {\color{black}\bar \gamma^n}))} \left(N \frac{p_n^\kappa}{(\mu')^\kappa}\right)^{\alpha \mu'/(\delta {\color{black}\bar \gamma^n})}, \end{equation}} $N$ does not depend on $n$, and we have used that $p_n/{\color{black}\bar \gamma^n }\uparrow \delta$. Notice that the right hand side of \eqref{eq: iterable} is finite (by the assumption that the right hand side of \eqref{eq: estimate nd quasilinear} is finite, Lemma \ref{lem: stochastic Gronwal} and \eqref{eq:boundednessV}). One can easily see that {\color{black}$$ \prod_{n=1}^\infty N^{1/{\color{black}\bar \gamma^n}}< \infty, \qquad \prod_{n=1}^\infty (\delta{\color{black} \bar \gamma^n}/ (\mu' \alpha))^{\alpha \mu' /( \delta {\color{black}\bar \gamma^n})} < \infty. $$} Also, ${\color{black}p_n \leq \tilde{m}(n+1) \bar \gamma^n}$ which implies that {\color{black}$$ \prod_{n=1}^\infty \left(N \frac{p_n^\kappa}{(\mu')^\kappa}\right)^{\alpha \mu' /(\delta {\color{black}\bar \gamma^n})}< \infty. $$} Moreover, since $e^{-2x} \leq 1-x$ for all $x$ sufficiently small, we have, for some constant $N$ and all $M \in \mathbb{N}$, {\color{black}$$ \prod_{n=1}^M \frac{1}{1-(\alpha \mu' / (\delta {\color{black}\bar \gamma^n}))} \leq N e^{\sum_{n=1}^M 2\alpha \mu' / (\delta {\color{black}\bar \gamma^n})}, $$} which implies that {\color{black}$\prod_{n=1}^\infty \frac{1}{1-(\alpha \mu' / (\delta {\color{black}\bar \gamma^n}))} < \infty$}. Consequently, there exists an $N \in \bR$ such that for any $M \in \mathbb{N}$ \begin{equation} \label{eq:bound-cn} \prod_{n=1}^M c_n \leq N. \end{equation} Since $p_n/ {\color{black}\bar \gamma^n }\uparrow \delta$, there exists an $N$ such that for all $n \in \mathbb{N}$ large enough, we have $$ p_n^{- \alpha p_n /(\delta {\color{black}\bar \gamma^n})} \leq N ({\color{black}\bar\gamma^n })^{-\alpha p_n/(\delta{\color{black}\bar\gamma^n})} \leq N\left({\color{black}\bar\gamma}^{ \alpha /2}\right)^{-n}. $$ Since $\alpha>0$ and $ {\color{black}\bar \gamma}>1$, we have \begin{equation} \label{eq:bound-pn} \sum_{n=1}^\infty\left(\frac{ p_n}{{\color{black}\mu'}}\right) ^{-\alpha p_n/ (\delta{\color{black}\bar \gamma^n}) } < \infty.
\end{equation} Consequently, by iterating \eqref{eq: iterable} and using \eqref{eq:bound-cn} we obtain \begin{equation} \label{eq:before-limit-iter} \Theta_m \leq \left( \prod_{n=n_0}^m c_n\right) \Theta_{n_0} + N\left( \sum_{n=n_0}^m \lambda_n \right) \mathbb{E} \left(1+ {\color{black}\mathcal{V}_\mu}\right)^\alpha, \end{equation} where $$ \Theta_n := \mathbb{E} \left( A_{\alpha} \vee \left( \int_0^T \int_Q |u|^{p_{n}} \, dx dt \right) ^{\alpha /(\delta {\color{black}\bar \gamma^{n})} }\right), \qquad \lambda_n: = \left( \frac{p_n}{\mu'} \right) ^{-\alpha p_n/ (\delta {\color{black}{\color{black}\bar \gamma^n}}) }, $$ and $$ {\color{black}\mathcal{V}_\mu:= \|V^1\|_{L_\mu(Q_T)}+\|V^2\|_{L_{2\mu}(Q_T)}.} $$ By virtue of \eqref{eq:bound-cn} and \eqref{eq:bound-pn}, we can let $m \to \infty$ in \eqref{eq:before-limit-iter} and use that $ p_m/(\delta{\color{black} \bar \gamma^m}) \to 1$ to obtain by Fatou's lemma \begin{align*} \mathbb{E} \| u\|_{L_\infty(Q_T)}^\alpha & \leq \liminf_{n\to \infty} \mathbb{E} \left( \int_0^T \int_Q |u|^{p_n} \, dx dt \right) ^{\alpha /(\delta {\color{black}\bar \gamma^n})} \\ & {\color{black}\leq N \mathbb{E} \| u\|_{L_{p_{n_0}}(Q_T)}^{\alpha p_{n_0}/( \delta \bar \gamma^{n_0})} + N \mathbb{E} \left(1+ \|\xi\|_{L_\infty(Q)}^\alpha+|\mathcal{V}_\mu|^\alpha\right).} \end{align*} {\color{black}Since $p_n / \bar \gamma^n $ is increasing in $n$, we have $p_{n_0} / (\delta \bar \gamma^{n_0}) \leq 1 $ and thus \begin{equation} \label{eq: estimate with n0} \mathbb{E} \| u\|_{L_\infty(Q_T)}^ {\alpha}\leq N \mathbb{E} \left(1+\| u\|_{L_{p_{n_0}}(Q_T)}^{\alpha}+ \|\xi\|_{L_\infty(Q)}^\alpha+|\mathcal{V}_\mu|^\alpha\right). \end{equation}} {\color{black}Notice that by the assumption that the right hand side of \eqref{eq: estimate nd quasilinear} is finite, combined with \eqref{eq:boundednessV} and Lemma \ref{lem: stochastic Gronwal}, we get that the right hand side of \eqref{eq: estimate with n0} is finite. By the interpolation inequality $$ \|u\|_{L_{p_{n_0}}(Q_T)} \leq \varepsilon\|u\|_{L_\infty(Q_T)} +N_\varepsilon \|u\|_{L_2(Q_T)}, $$ we obtain after rearrangement in \eqref{eq: estimate with n0} $$ \mathbb{E} \| u\|_{L_\infty(Q_T)}^ {\alpha}\leq N \mathbb{E} \left(1+\| u\|_{L_2(Q_T)}^{\alpha}+ \|\xi\|_{L_\infty(Q)}^\alpha+|\mathcal{V}_\mu|^\alpha\right), $$ which again by virtue of Lemma \ref{lem: stochastic Gronwal} gives (since $\mu \geq 2$) \begin{equation*} \mathbb{E} \| u\|_{L_\infty(Q_T)}^ {\alpha}\leq N \mathbb{E} \left(1+ \|\xi\|_{L_\infty(Q)}^\alpha+|\mathcal{V}_\mu|^\alpha \right). \end{equation*} This finishes the proof.} \end{proof} \begin{remark} In \cite{KOMA}, in the non-degenerate case, mixed $L^t_\nu L^x_\mu$-norms of the free terms appear at the right hand side of the estimates. This is also achievable in our setting provided that one has a mixed-norm version of the embedding Lemma \ref{lem: embeding} (see also \cite[Lemma 1]{KOMA}). \end{remark} {\color{black}Next we present the ``regularizing" effect. Recall that $\gamma= 1+(2/d)$, $\bar \gamma= \gamma/\mu'$, $\delta= \tilde{m} \bar \gamma / (\bar \gamma-1)$, and $p_n = \tilde{m}(1+\bar \gamma+...+ \bar \gamma^n)$. We will need the following two lemmata.} \begin{lemma} \label{lem:p0} Let $\alpha>0$, and let $q:=p_{n_0}$, where $n_0$ is the minimal positive integer such that $p_{n_0} \geq 2$ and $\alpha / ( \delta \bar \gamma ^{n_0})< 1$. Suppose that Assumptions \ref{as: nd}- \ref{as:boundednessV} are satisfied and let $u$ be a solution of \eqref{eq: nd quasilinear}. 
Then, for all $ \rho \in (0,1)$ we have \begin{equation} \label{eq: smoothing} {\color{black}\mathbb{E} \|u \|^\alpha _{L_\infty((\rho, T) \times Q)} \ \leq \rho^{-\tilde{\theta}} N \mathbb{E} \left( 1+ \|u \|^\alpha _{L_{q}((r_{n_0},T)\times Q)} + |\mathcal{V}_\mu|^\alpha \right),} \end{equation} where $$ r_{n_0}=\rho(1-2^{-n_0}), \qquad {\color{black}|\mathcal{V}_\mu|= \|V^1\|_{L_\mu(Q_T)}+\|V^2\|_{L_{2\mu}(Q_T)}}, $$ $N$ is a constant depending only on $\alpha, \tilde{m}, T, c,K,d, \mu$, and $|Q|$, and $\tilde{\theta} > 0$ is a constant depending only on $\alpha, d, \mu$ and $\tilde{m}$. \end{lemma} \begin{proof} Similarly to the proof of Theorem \ref{thm: quasilinear nd}, by Lemma \ref{lem: embeding} and Lemma \ref{lem: right right local}, we have for all $n \geq n_0$ {\color{black}\begin{align} \nonumber & \mathbb{E} \left( \int_{r_{n+1}}^T \int_Q |u|^{p_{n+1}} \, dx dt \right) ^{\alpha/(\delta \bar \gamma^{n+1})} \\ \nonumber &\leq (\rho^{-1}2^n)^{\mu' \alpha/(\delta \bar \gamma ^n)} c_n \mathbb{E} \left(\int_{r_n}^T \int_Q |u|^{p_n} \, dx ds \right)^{\alpha/(\delta \bar \gamma^n)} \\ \label{eq: iterable local} &+(\rho^{-1}2^n)^{\alpha \mu'/(\delta \bar \gamma ^n)} c_n \left( \frac{p_n}{\mu'} \right) ^{-\alpha p_n/(\delta \bar \gamma^n)} \mathbb{E} \left(2+\|V^1\|_{L_\mu(Q_T)}^\alpha+\|V^2\|_{L_{2\mu}(Q_T)}^\alpha \right), \end{align}} where $c_n$ is given in \eqref{eq: cn}. Under the assumption that the right hand side of \eqref{eq: smoothing} is finite, it follows that the right hand side of the above inequality is also finite for $n=n_0$, and by the same inequality and induction it follows that it is finite for all $n \geq n_0$. Also notice that for all $M \in \mathbb{N}$ $$ \prod_{n=n_0}^M (\rho^{-1}2^n)^{\color{black}{\alpha \mu'/(\delta \bar \gamma ^n )}} \leq N \rho^{-\tilde{\theta}}, $$ with $\tilde{\theta}= (\alpha {\color{black}\mu'} / \delta) \sum_{n}{\color{black}\bar \gamma}^{-n}$. Consequently, by iterating \eqref{eq: iterable local} and passing to the limit as $n \to \infty$ we obtain \begin{align*} \mathbb{E} \|u \|^\alpha _{L_\infty((\rho, T) \times Q)} \ & \leq \rho^{-\tilde{\theta}} N \mathbb{E} \left(\int_{r_{n_0}}^T \int_Q |u|^{p_{n_0}} \, dx ds \right)^{\sfrac{\alpha}{\delta {\color{black}\bar \gamma^{n_0}}}}\\ &+ \rho^{-\tilde{\theta}} N \mathbb{E}\left(1+ {\color{black}| \mathcal{V}_\mu|^\alpha}\right) \\ &\leq \rho^{-\tilde{\theta}} N \mathbb{E} \left( 1+ \|u \|^\alpha _{L_{p_{n_0}}((r_{n_0},T)\times Q)} + {\color{black}| \mathcal{V}_\mu|^\alpha }\right), \end{align*} where we have used that {\color{black}$p_{n_0} \leq \delta \bar \gamma^{n_0}$. } \end{proof} \begin{lemma} \label{thm: smoothing} Suppose that Assumptions \ref{as: nd}-\ref{as:boundednessV} are satisfied, let $\alpha >0$, and let $u \in \mathbb{L}_2$ be a solution of \eqref{eq: nd quasilinear}-\eqref{eq: initial nd quasilinear}. Then, for all $\rho \in (0,1)$ we have \begin{align} \label{eq:up-to-L2-1} \mathbb{E} \|u \|^\alpha _{L_\infty((\rho, T) \times Q)} \ &\leq \rho^{-\tilde{\theta}} N \mathbb{E} \left( 1+\| u \|_{L_2(Q_T)}^\alpha +{\color{black}|\mathcal{V}_{\mu}|^\alpha} \right), \end{align} where $N$ is a constant depending only on $\alpha, \tilde{m}, T, c,K,d, \mu$, and $|Q|$, and $\tilde{\theta}>0$ is a constant depending only on $\alpha, d,\mu $ and $\tilde{m}$. \end{lemma} \begin{proof} Due to Lemma \ref{lem:p0}, we only need to estimate $\mathbb{E} \|u \|^\alpha _{L_{p_{n_0}}((r_{n_0},T)\times Q)} $ by the right hand side of \eqref{eq:up-to-L2-1}.
For this, it suffices to show that for all $\beta>0$, $p>2$, and $\varrho \in (0,1)$ we have \begin{equation} \label{eq:up-to-L2} \mathbb{E} \|u \|^\beta _{L_p((\varrho, T) \times Q)} \ \leq N \varrho^{-\tilde{\theta}} \mathbb{E} \left( 1+\| u \|_{L_2(Q_T)}^\beta +{\color{black}|\mathcal{V}_\mu|}^\beta \right), \end{equation} where $N$ is a constant depending only on $\beta, p, \varrho, \tilde{m}, T, c,K,d, \mu$ and $|Q|$, and $\tilde{\theta}>0$ depends only on $\beta, d$ and $\tilde{m}$. {\color{black}We assume that the right hand side of \eqref{eq:up-to-L2} is finite. Let us set $p_0=2$, $p_{n+1}=\tilde{m}+ p_n \bar \gamma$, $n'=\min \{ n\in \mathbb{N} \colon p_n \geq p \}$, and $\varrho_k= k\varrho / n'$, for $k=0,...,n'$. Clearly, it suffices to prove that for all $k=0,...,n'-1$ we have \begin{equation} \label{eq:rk} \mathbb{E} \|u \|^\beta _{L_{p_{k+1}}((\varrho_{k+1}, T) \times Q)} \ \leq \varrho^{-\tilde \theta } N \mathbb{E} \left( 1+\| u \|_{L_{p_k}((\varrho_k,T)\times Q)}^\beta +|\mathcal{V}_\mu|^\beta \right), \end{equation}} since \eqref{eq:up-to-L2} follows by iterating \eqref{eq:rk} finitely many times. We assume that the right hand side of \eqref{eq:rk} is finite and we first prove it for {\color{black}$k\geq 1$}. Let $\varrho_k'=(\varrho_k+\varrho_{k+1})/2$. Let $ \psi \in C^1([0, T])$ with $0 \leq \psi \leq 1$, $\psi_t=0$ for $ 0\leq t \leq \varrho_k'$, $\psi_t=1$ for $\varrho_{k+1}\leq t \leq T$, and $|\psi'_t| \leq 2 n' \varrho^{-1}$. Then, similarly to \eqref{eq: consequence of Ito's local} we have for $p \geq 2$ \begin{align} \nonumber &\psi_t\| u_t\|_{L_p}^p + cp(p-1)\int_0^t \int_Q \psi |u|^{\tilde{m}+p-2}| \nabla u|^2 \, dx ds \\ \nonumber & \leq N {\color{black} \left( \|V^1\|_{L_\mu(Q_t)}^p + \|V^2\|_{L_{2\mu}(Q_t)}^p\right)+ N \| \psi^{1/p} u\|^p_{L_{\mu'p}(Q_t)} } \\ \label{eq:A} & +\int_0^t \psi'\|u\|^p_{L_p} ds +M_t, \end{align} where $M_t$ is the martingale from \eqref{eq: ito product}. If $\beta \gamma / p_{k+1} < 1$, then by virtue of Lemma \ref{lem: Revuz-Yor} and the familiar techniques of Lemma \ref{lem: right right} we obtain \begin{equs} \nonumber &\mathbb{E}\left( \sup_{t \in [0,T]}\psi_t \|u_t\|^{p}_{L_{p}} + \int_0^T \int_Q \psi \left| \nabla |u|^{(\tilde{m} +{p})/2} \right|^2 \, dx dt \right) ^{\frac{\beta \gamma} { p_{k+1} }} \\ \leq N & \mathbb{E} \left( {\color{black} \left( \|V^1\|_{L_\mu(Q_T)}^p + \|V^2\|_{L_{2\mu}(Q_T)}^p\right) + \| \psi^{1/p} u\|^p_{L_{\mu'p}(Q_T)} } +\int_0^T \psi'\|u\|^p_{L_p} ds\right) ^{\frac{\beta \gamma} { p_{k+1} }}, \\ \label{eq:B} \end{equs} where $N$ depends on $\beta, p, \varrho, \tilde{m}, T, c,K,d,\mu$ and $|Q|$. If $\beta \gamma / p_{k+1} \geq 1$ we have by the Burkholder-Davis-Gundy inequality \begin{equation} \label{eq:F} \mathbb{E}\sup_{t \leq T}|M_t|^ {\beta \gamma / p_{k+1}} \leq N \mathbb{E}\langle M \rangle_T^{\beta \gamma / 2 p_{k+1}}.
\end{equation} Again, as in the derivation of \eqref{eq: quadratic variation} we have \begin{align*} {\color{black}\langle M\rangle_T \leq \sup_{t \leq T} \psi_t\|u_t\|_{L_p}^p \left( \|V^2\|_{L_{2\mu}(Q_T)}^p + \| \psi^{1/p}u\|^p_{L_{\mu'p}(Q_T)}\right) } \end{align*} which combined with \eqref{eq:F} gives by virtue of Young's inequality, for any $\varepsilon>0$ \begin{align*} \mathbb{E}\sup_{t \leq T}|M_t|^ {\beta \gamma / p_{k+1}}&\leq \varepsilon \mathbb{E} \sup_{t \leq T} (\psi_t\|u_t\|_{L_{p}}^{p})^{\beta \gamma / p_{k+1}} \\ &+ N {\color{black}\mathbb{E}\left( \|V^2\|_{L_{2\mu}(Q_T)}^p + \| \psi^{1/p}u\|^p_{L_{\mu'p}(Q_T)}\right) ^{\beta \gamma / p_{k+1}}.} \end{align*} Using this and \eqref{eq:A} we get \eqref{eq:B} (for $\beta \gamma / p_{k+1} \geq 1$), provided that the quantity $\mathbb{E} \sup_{t \leq T} (\psi_t\|u_t\|_{L_{p}}^{{p}})^{\beta \gamma / p_{k+1}}$ is finite, which can be achieved by a localization argument. Now we use \eqref{eq:B} with {\color{black}$p= p_k/\mu'$ }{\color{black}(notice that $p_k/\mu' \geq 2$ for $k \geq 1$) }{\color{black}and using the properties of $\psi$ and the fact that $\bar \gamma / p_{k+1} \leq 1/ p_k$, \eqref{eq:B} yields \begin{align} \nonumber &\mathbb{E}\left( \sup_{t \in [\varrho_{k+1},T]} \|u_t\|^{p_k/\mu'}_{L_{p_k/\mu'}} + \int_{\varrho_{k+1}}^T \int_Q \left| \nabla |u|^{(\tilde{m} +{p_k/\mu'})/2} \right|^2 \, dx dt \right)^{\beta \gamma / p_{k+1} } \\ & \nonumber \leq \varrho^{-\tilde \theta } N \mathbb{E} \left( 1+\| u \|_{L_{p_k}((\varrho_k,T)\times Q)}^\beta +|\mathcal{V}_\mu|^\beta \right). \end{align}} An application of Lemma \ref{lem: embeding} (see also \eqref{eq:C} and \eqref{eq: right estimate}) gives \eqref{eq:rk}. Recall that we have assumed that $k \geq 1$. {\color{black}For $k=0$, instead of \eqref{eq:A}, we use the estimate \begin{align} \nonumber &\psi_t\| u_t\|_{L_p}^p + cp(p-1)\int_0^t \int_Q \psi |u|^{\tilde{m}+p-2}| \nabla u|^2 \, dx ds \\ \nonumber & \leq N \left( \|V^1\|_{L_p(Q_t)}^p + \|V^2\|_{L_p(Q_t)}^p\right)+ N \| \psi^{1/p} u\|^p_{L_{p}(Q_t)} \\ \nonumber & +\int_0^t \psi'\|u\|^p_{L_p} ds +M_t. \end{align} We apply it with $p=2$ and we proceed as above, this time raising to the power $\gamma \beta / (\tilde{m} +2\gamma)$. Following the same steps, one arrives at the estimate \begin{align} \nonumber &\mathbb{E}\left( \int_{\varrho_{1}}^T \int_Q |u|^{\tilde{m} +2 \gamma} \, dx dt \right)^{\beta /(\tilde{m} +2 \gamma ) } \\ & \nonumber \leq \varrho^{-\tilde \theta } N \mathbb{E} \left( 1+\| u \|_{L_{2}(Q_T)}^\beta +\|V^1\|^\beta_{L_2(Q_T)} +\|V^2\|^\beta_{L_2(Q_T)} \right). \end{align} This finishes the proof since $\tilde{m}+2 \gamma \geq \tilde{m}+ 2 \bar \gamma= p_1$.} \end{proof} \begin{theorem} \label{thm: smoothing2} Suppose that Assumptions \ref{as: nd}-\ref{as:boundednessV} are satisfied. Let $u \in \mathbb{L}_2$ be a solution of \eqref{eq: nd quasilinear}-\eqref{eq: initial nd quasilinear} and let $\alpha >0$ and {\color{black}$\mu \in \Gamma_d$}. Then, for all $\rho \in (0,1)$ we have \begin{align} \label{eq:D} \mathbb{E} \|u \|^\alpha _{L_\infty((\rho, T) \times Q)} \ &\leq \rho^{-\tilde{\theta}} N \mathbb{E} \left( 1+\| \xi\|_{L_2(Q)}^\alpha +{\color{black}\|V^1\|_{L_\mu(Q_T)}^\alpha +\|V^2\|^\alpha_{L_{2\mu}(Q_T)}} \right) \end{align} where $N$ is a constant depending only on $\alpha, \tilde{m}, T, c,K,d, \mu$, and $|Q|$, and $\tilde{\theta}>0$ is a constant depending only on $\alpha, d, \mu$ and $\tilde{m}$. 
\end{theorem} \begin{proof} The conclusion of the theorem follows immediately from Lemma \ref{lem: stochastic Gronwal} and Lemma \ref{thm: smoothing}. \end{proof} \begin{remark} In Theorems \ref{thm: quasilinear nd} and \ref{thm: smoothing2}, the expressions $ \| u\|_{L_\infty(Q_T)}$ and $ \| u\|_{L_\infty((\rho, T)\times Q)}$ can be replaced by $\sup_{t \in [0,T]} \|u_t\|_{L_\infty(Q)}$ and $\sup_{t \in [\rho,T]} \|u_t\|_{L_\infty(Q)}$, respectively. This follows from the fact that $u$ is a continuous $L_2(Q)$-valued process. \end{remark} \begin{remark} A cut-off argument in space, similar to the cut-off in time as it was used in the proof of Theorem \ref{thm: smoothing2}, can be used in order to derive local in space-time estimates that are applicable not only to solutions of the Dirichlet problem but to any $u$ satisfying \eqref{eq: nd quasilinear} (see, e.g., \cite{KOMA}). \end{remark} \section{Degenerate Quasilinear SPDE} In this section, we proceed with the degenerate equation \eqref{eq: PME stratonovich}. Notice that the constant $N$ in Theorems \ref{thm: quasilinear nd} and \ref{thm: smoothing2} of the previous section does not depend on the non-degeneracy constant $\theta$. Using this fact we can deduce similar estimates for the stochastic porous medium equation \eqref{eq: PME stratonovich}. Suppose that on $(\Omega, \mathcal{F}, \mathbb{F}, \bP)$ we are given independent $\bR$-valued Wiener processes $\tilde{\beta}^1_t,..., \tilde{\beta}^d_t, w^1_t, w^2_t , ...$. Moreover, in this section we will assume that the domain $Q$ is convex and that the boundary $\D Q$ is of class $C^2$. The Stratonovich integral $$ \sum_{i=1}^d \sigma_t \D_iu_t \circ d\tilde{\beta}^i_t $$ in \eqref{eq: PME stratonovich} is a short notation for $$ \frac{1}{2} \sigma_t^2 \Delta u_t \, dt + \sum_{i=1}^d \sigma_t \D_iu_t \, d \tilde{\beta}^i_t. $$ In the following, we consider a slightly more general class of equations. Namely, on $(0,T) \times Q$ we consider the stochastic porous medium equation (SPME) of the form \begin{equation} \begin{aligned}\label{eq: main equation} du = & \left[ \Delta \left( \Phi (u) \right) +H_tu +f_t(x) \right] \, dt + \sum_{i=1}^d \sigma_t\D_i u \, d\tilde{\beta}^i_t\\ +& \sum_{k=1}^\infty\left[\nu^k_t(x) u +g^k_t(x) \right] dw^k_t \\ u_0= & \xi, \end{aligned} \end{equation} with zero Dirichlet boundary condition on $\D Q$, where $$ H_tu:=\frac{\sigma_t^2}{2} \Delta u +b^i_t(x)\D_iu+c_t(x)u. $$ If we set \begin{align*} \beta^k_t:= \begin{cases} \tilde{\beta}^k_t , & \text{for } k\in \{1,...,d\} \\ w^{k-d}_t, & \text{for } k \in \{ d+1,d+2,...\} \end{cases} \end{align*} and \begin{align*} M^k_t(u):= \begin{cases} \sigma_t\D_k u , & \text{for } k\in \{1,...,d\}\\ \nu^{k-d}_t(x)u+g^{k-d}_t(x) , & \text{for } k \in \{ d+1,d+2,...\}, \end{cases} \end{align*} we can rewrite \eqref{eq: main equation} in the more compact form \begin{equation} \begin{aligned} du = & \left[ \Delta \left( \Phi(u) \right) +H_tu +f_t(x) \right] \, dt + \sum_{k=1}^\infty M^k_t(u) d\beta^k_t \\ u_0= & \xi. \end{aligned} \end{equation} \begin{assumption} \label{as: coeff} $$ $$ \begin{enumerate} \item \label{item: coercivity of phi} The function $\Phi: \bR \to \bR$ is continuously differentiable, non-decreasing, and such that $\Phi(0)=0$. There exist constants $\lambda >0$, $C \geq 0$, $m \in [1, \infty)$ such that for all $r \in \bR$ we have $$r \Phi(r) \geq \lambda |r|^{m+1}-C, \qquad |\Phi'(r)| \leq C|r|^{m-1}+C.
$$ \item For each $i =1,..., d$, the functions \label{as: coefficients drift} $ b^i, c : \Omega_T \times \overline{Q} \to \bR$ are $\mathcal{P} \otimes \mathcal{B}(\overline{Q})$-measurable, and for all $(\omega, t) \in \Omega_T$ we have $ b^i \in C^2(\overline{Q};\bR)$, $c \in C^1(\overline{Q};\bR)$, and $b^i=0$ on $\D Q$. The functions \label{as: coefficients noise} $\sigma: \Omega_T \to \bR$ and $\nu: \Omega_T\times \overline{Q} \to l_2$ are $\mathcal{P}$- and $\mathcal{P}\times \mathcal{B}(\overline{Q})$-measurable, respectively, and for all $(\omega,t) \in \Omega_T$ we have $\nu \in C^1(\overline{Q};l_2)$. Moreover, there exists a constant $K$ such that for all $(\omega, t,x) \in \Omega_T \times Q$ we have \begin{align*} &|\sigma_t| + | b^i_t(x)|+|c_t(x)|+|\nu_t(x)|_{l_2} \\ +& | \nabla b^i_t(x)|+|\nabla c_t(x)| + |\nabla \nu_t(x)|_{l_2}+| \nabla^2 b^i_t(x)|\leq K \end{align*} \item \label{as: free terms} The functions $f : \Omega_T \to H^{-1}$ and $g^k:\Omega_T \to H^{-1}$, for $k \in \mathbb{N}$, are $\mathcal{P}$-measurable and it holds that $$ \mathbb{E} \int_0^T ( \|f\|_{H^{-1}}^2+ \sum_{k=1}^\infty \|g^k\|^2_{H^{-1}}) \, dt < \infty. $$ \item \label{as:icH-1}The initial condition $\xi$ is an $\mathcal{F}_0$-measurable $H^{-1}$-valued random variable such that $\mathbb{E} \|\xi\|^2_{H^{-1}}< \infty$. \end{enumerate} \end{assumption} \begin{assumption} \label{as: extra condition} There exists a constant $\bar{c}>0$ such that $\bar{c}|r|^{m-1} \leq \Phi'(r) $ for all $r \in \bR$. \end{assumption} We note that Assumption \ref{as: extra condition} implies the first part of Assumption \ref{as: coeff} \eqref{item: coercivity of phi}, that is $r \Phi(r) \geq \bar{c}|r|^{m+1}$. Let us set $A_t(u):= \Delta \left( \Phi(u)\right) + H_tu_t +f_t $. The operators are understood in the following sense: For $u \in L_{m+1}(Q)$, we have $A_t(u) \in (L_{m+1})^*$, $M_t^k(u)\in H^{-1}$, given by \begin{align*} {}_{(L_{m+1})^*}\langle A_t(u), \phi \rangle_ {L_{m+1}}:= &-\int_Q \Phi(u) \phi \, dx- \int_Q \frac{1}{2}\sigma^2_t u \phi \, dx \\ &-\int_Q u \D_i (b^i_t (-\Delta)^{-1} \phi)+ \int_Q (c_tu+f_t) (-\Delta)^{-1} \phi \, dx, \\ \left(M^k_t(u)\right)(\psi) :=& \left\{\begin{array}{lr} -\int_Q \sigma_t u \D_k \psi \, dx , \ \ \ \ \qquad \text{for} \ k\in \{1,...,d\}\\ \int_Q (\nu_t^{k-d}u+g^{k-d}_t) \psi \, dx \ \ \text{for } k \in \{ d+1,d+2,...\}, \end{array}\right. \end{align*} for $\phi \in L_{m+1}(Q)$, $\psi \in H^1_0(Q)$, where $(-\Delta)^{-1}$ denotes the inverse of the Dirichlet Laplacian on $Q$. Notice that for $\phi \in H^{-1}$ (in particular for $\phi \in L_{m+1}(Q)$), it holds that (see, e.g., p. 69 in \cite{MR}) $$ (M^k_t(u), \phi)_{H^{-1}}=\left(M^k_t(u)\right)((-\Delta)^{-1}\phi). $$ \begin{definition} \label{def: solution} A solution of equation \eqref{eq: main equation} is a process $u \in \mathbb{H}^{-1}_{m+1}$, such that for all $\phi \in L_{m+1}(Q)$, with probability one we have $$ (u_t, \phi)_{H^{-1}}= (\xi , \phi)_{H^{-1}}+\int_0^t {}_{(L_{m+1})^*}\langle A(u), \phi \rangle_{L_{m+1}} \, ds + \int_0^t (M^k(u), \phi)_{H^{-1}} d\beta^k_s, $$ for all $t \in [0,T]$. \end{definition} \subsection{Well-posedness} In this subsection we show that the problem \eqref{eq: main equation} has a unique solution. This will be a consequence of \cite[Theorems 3.6 and 3.8]{KR}, once the respective assumptions are shown to be fulfilled. This is the purpose of the following lemmata. 
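\begin{remark} Before turning to these lemmata, we record, for orientation, how the model nonlinearity of the introduction fits into the present framework: for $\Phi(r)=|r|^{m-1}r$, which is the nonlinearity appearing in \eqref{eq: PME stratonovich}, we have $$ \Phi'(r)= m|r|^{m-1}, \qquad r\Phi(r)= |r|^{m+1}, \qquad r \in \bR, $$ so that Assumption \ref{as: coeff} \eqref{item: coercivity of phi} is satisfied with $\lambda=1$ and $C=m$, and Assumption \ref{as: extra condition} is satisfied with $\bar{c}=m$. With this choice of $\Phi$, and with $b^i=0$ and $c=0$, equation \eqref{eq: main equation} reduces to the It\^o form of \eqref{eq: PME stratonovich} described above. \end{remark}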
\begin{remark} In Definition \ref{def: solution}, the set of full probability on which the equality is satisfied can be chosen independently of $\phi \in L_{m+1}$. This follows by the fact that the expression $$ \int_0^\cdot M^k(u) \, d\beta^k_s $$ is a continuous $H^{-1}$-valued martingale, combined with the separability of $L_{m+1}$. \end{remark} \begin{lemma} Under Assumption \ref{as: coeff} there is a constant $N$ depending only on $K$ such that for all $(\omega, t) \in \Omega_T$, $u \in L_{m+1}(Q)$ we have \begin{align} \label{eq: first derivatives} \left| \int_Q u \D_i( b^i_t ( - \Delta )^{-1} u ) \, dx \right| \leq & N \|u \|_{H^{-1}}^2, \\ \label{eq: zero order} \left| \int_Q u c_t ( - \Delta )^{-1} u \, dx \right| \leq & N \|u \|_{H^{-1}}^2. \end{align} \end{lemma} \begin{proof} By continuity it suffices to show the conclusion for $u \in C^\infty_c(Q)$. We have \begin{align} \int_Q u \D_i( b^i_t ( - \Delta )^{-1} u ) \, dx& = \int_Q (\D_ib^i_t) u( - \Delta )^{-1} u \, dx +\int_Q b^i_t u\D_i( - \Delta )^{-1} u \, dx. \end{align} Recall that $\D Q \in C^2$ which implies $(-\Delta)^{-1}u \in H^2(Q)$. Hence, writing $u= (-\Delta) (-\Delta)^{-1}u$, integration by parts gives ($(-\Delta)^{-1}u$ vanishes on $\D Q$) \begin{align} \nonumber \left| \int_Q (\D_ib^i_t) u( - \Delta )^{-1} u \, dx \right| & \leq \left| \int_Q (\D_{ij} b^i_t) \left(\D_j ( - \Delta )^{-1} u\right) (-\Delta)^{-1} u \, dx \right| \\ \nonumber & +\left| \int_Q (\D_ib^i_t)\left( \D_j( - \Delta )^{-1} u\right)^2 \, dx \right| \\ \label{eq: first derivatives estimate} & \leq N \| u \|^2_{H^{-1}}, \end{align} where we have used Young's and Poincar\'e's inequalities. For the other term, since $\D _j(-\Delta)^{-1}u \in H^1(Q)$ (recall that $\D Q \in C^2$) and $b^i$ vanishes on $\D Q$, we have \begin{align} \nonumber \int_Q b^i_t u\D_i( - \Delta )^{-1} u \, dx = &\int_Q b^i_t \left( \D_j(-\Delta)^{-1}u \right) \D_j\D_i( - \Delta )^{-1} u \, dx \\ \nonumber +&\int_Q (\D_jb^i_t) \left( \D_j (-\Delta)^{-1}u \right) \D_i ( - \Delta )^{-1} u \, dx. \end{align} For the second term in the above equality we have by H\"older's inequality $$ \left| \int_Q (\D_jb^i_t) \left(\D_j (-\Delta)^{-1}u \right)\D_i( - \Delta )^{-1} u \, dx \right| \leq N \|u\|^2_{H^{-1}}, $$ while for the first term we have \begin{align} \nonumber \left| \int_Q b^i_t \left( \D_j(-\Delta)^{-1}u \right) \D_i\D_j( - \Delta )^{-1} u \, dx \right| &= \frac{1}{2}\left| \int_Q b^i_t \D_i \left( \D_j(-\Delta)^{-1} u \right)^2 \, dx \right| \\ \nonumber &= \frac{1}{2}\left| \int (\D_ib^i_t) \left(\D_j (-\Delta)^{-1} u \right)^2 \, dx \right| \\ \nonumber & \leq N \|u\|^2_{H^{-1}}, \end{align} where we have used again that $b^i=0$ on $\D Q$. Hence, \begin{equation} \label{eq: estimate first derivatives 2} \left| \int_Q b^i_t u\D_i( - \Delta )^{-1} u \, dx\right| \leq N \|u\|_{H^{-1}}^2. \end{equation} Combining \eqref{eq: first derivatives estimate} with \eqref{eq: estimate first derivatives 2} we obtain \eqref{eq: first derivatives}. Inequality \eqref{eq: zero order} follows similarly from the fact that $|c_t(x)|+|\nabla c_t(x)|\leq K$. 
\end{proof} \begin{lemma} \label{lem: monotonicity} Under Assumption \ref{as: coeff}, there exists a constant $N$ depending only on $K$ and $d$ such that for all $(\omega, t)\in \Omega_T$ and all $\varphi, \psi \in L_{m+1}(Q)$ we have \begin{align} \nonumber &2 {}_{(L_{m+1})^*}\langle H_t\phi+f_t,\phi \rangle_{L_{m+1}} +\sum_{k=1}^\infty \| M^k_t(\phi)\|_{H^{-1}}^2 \\ \label{eq: part of coercivity} &\leq N \left( \|\phi\|_{H^{-1}}^2+\|f_t\|^2_{H^{-1}}+ \sum_{k=1}^\infty\|g^k_t\|_{H^{-1}}^2\right), \end{align} and \begin{equation} \label{eq: monotonicity} 2{}_{(L_{m+1})^*}\langle A_t(\phi)-A_t(\psi), \phi-\psi \rangle_{L_{m+1}}+ \sum_{k=1}^\infty \| M^k_t (\phi) -M^k_t (\psi) \|^2_{H^{-1}} \leq N \| \phi- \psi\|^2_{H^{-1}}. \end{equation} \end{lemma} \begin{proof} We start by proving \eqref{eq: part of coercivity}. By virtue of the previous lemma, it suffices to show that \begin{align*} &-\sigma_t^2 \| \phi \|_{L_2}^2+ ( f_t, \phi)_{H^{-1}}+ \sum_{k=1}^\infty \| M^k_t(\phi) \|^2_{H^{-1}} \\ &\leq N \left(\| \phi \|^2_{H^{-1}}+\|f_t\|^2_{H^{-1}}+ \sum_{k=1}^\infty\|g^k_t\|_{H^{-1}}^2\right). \end{align*} Clearly it suffices to show the last inequality for $\phi \in C^\infty_c(Q)$. To this end, we have \begin{align*} &-\sigma_t^2 \| \phi \|_{L_2}^2+ ( f_t, \phi)_{H^{-1}}+ \sum_{k=1}^\infty \| M^k_t(\phi) \|^2_{H^{-1}} \\ =& -\sigma_t^2 \| \phi \|_{L_2}^2+ ( f_t, \phi)_{H^{-1}}+ \sigma_t^2 \sum_{i=1}^d \|\D_i \phi \|^2_{H^{-1}}+ \sum_{k=1}^\infty \| \nu_t^k\phi+g^k_t\|^2_{H^{-1}} \\ \leq & \ \sigma_t^2(\sum_{i=1}^d \|\D_i \phi \|^2_{H^{-1}}-\| \phi \|_{L_2}^2) +N \left(\| \phi \|^2_{H^{-1}}+\|f_t\|^2_{H^{-1}}+ \sum_{k=1}^\infty\|g^k_t\|_{H^{-1}}^2\right). \end{align*} Hence, we only have to show that $$ \sum_{i=1}^d \|\D_i \phi \|^2_{H^{-1}}\leq \| \phi \|_{L_2}^2. $$ Let $\zeta \in C^\infty_c(Q)$. The action of $\D_i\phi$ on $\zeta$ is given by $-(\phi, \D_i \zeta)_{L_2}$. We have \begin{align*} |(\phi, \D_i \zeta)_{L_2}|=|( (-\Delta)(-\Delta)^{-1} \phi, \D_i \zeta)_{L_2}|& =\left| \sum_{l=1}^d (\D_i \D_l (-\Delta)^{-1} \phi, \D_l \zeta)_{L_2}\right| \\ & \leq \|\zeta\|_{H^1_0}^2\sum_{l=1}^d \|\D_i \D_l (-\Delta)^{-1} \phi\|_{L_2}^2. \end{align*} Consequently, $$ \| \D_i \phi\|_{H^{-1}}^2 \leq \sum_{l=1}^d \|\D_i \D_l (-\Delta)^{-1} \phi\|_{L_2}^2. $$ It now suffices to show that $$ \sum_{l,i=1}^d \|\D_i \D_l (-\Delta)^{-1} \phi\|_{L_2}^2\leq \| \phi \|^2_{L_2}. $$ This follows from the convexity of $Q$. Namely, if $Q$ is a convex, open, bounded subset of $\bR^d$ with boundary of class $C^2$, then it holds that $$ \sum_{l,i=1}^d \|\D_i \D_l v\|_{L_2(Q)}^2\leq \| \Delta v \|^2_{L_2(Q)} $$ for all $v \in H^2(Q) \cap H^1_0(Q)$ (see \cite[p139, Theorem 3.1.2.1, inequality (3,1,2,2)]{GRI}). Applying this to $v := (-\Delta)^{-1} \phi$ finishes the proof of \eqref{eq: part of coercivity}. For \eqref{eq: monotonicity}, by considering \eqref{eq: part of coercivity} (with $f=0, \ g=0$), it is clear that we only have to show that $$ {}_{(L_{m+1})^*}\langle \Delta \left( \Phi (\phi) \right) - \Delta \left( \Phi (\psi) \right) , \phi -\psi \rangle_{L_{m+1}} \leq 0. $$ This follows from the well-known fact (see, e.g., p.71 in \cite{MR}) that $$ {}_{(L_{m+1})^*}\langle \Delta \left( \Phi (\phi) \right) - \Delta \left( \Phi (\psi) \right) , \phi -\psi \rangle_{L_{m+1}}= -(\Phi(\phi)-\Phi(\psi), \phi-\psi)_{L_2} \leq 0, $$ since $\Phi$ is non-decreasing. This completes the proof. 
\end{proof} \begin{theorem} \label{thm: well posedness} Under Assumption \ref{as: coeff} there exists a unique solution of equation \eqref{eq: main equation}. \end{theorem} \begin{proof} It is straightforward to check that the operator $A$ satisfies ($A_1$) (hemi-continuity) from \cite{KR}. The fact that $A$ and $M$ satisfy ($A_2$) (monotonicity) was proved in \eqref{eq: monotonicity}. Coercivity or ($A_3$), follows from \eqref{eq: part of coercivity} combined with \eqref{item: coercivity of phi} of Assumption \ref{as: coeff}, which implies that $$ {}_{(L_{m+1})^*}\langle \Delta \left( \Phi(\phi)\right), \phi \rangle_{L_{m+1}} = - \left( \Phi(\phi), \phi\right)_{L_2} \leq - \lambda \|\phi\|_{L_{m+1}}^{m+1}+C. $$ For the growth condition ($A_4$) we have, for $v \in L_{m+1}$, \begin{align*} \| \Delta \left( \Phi (v ) \right) + H_tv+f_t\|_{(L_{m+1})^*} & \leq \|\Phi(v)\|_{L_{(m+1)/m}} + \|H_tv+f_t\|_{(L_{m+1})^*} \\ & \leq N+ N\|v\|^{m}_{L_{m+1}}+\|H_tv+f_t\|_{(L_{m+1})^*}, \end{align*} where we have used Assumption \ref{as: coeff}, \eqref{item: coercivity of phi}. Then, notice that for $\phi \in C^\infty_c(Q)$ \begin{align*} &{}_{(L_{m+1})^*}\langle H_tv+f_t, \phi \rangle_{L_{m+1}} \\ =&- \int_Q \frac{1}{2}\sigma^2_t u \phi \, dx -\int_Q u \D_i (b^i_t (-\Delta)^{-1} \phi)\, dx+ \int_Q (c_tu+f_t) (-\Delta)^{-1} \phi \, dx \\ \leq &N\|u\|_{L_{(m+1)/m}} \|\phi\|_{L_{m+1}}+N \|u\|_{L_2}\|\phi\|_{H^{-1}}+ N\|f\|_{H^{-1}}\|\phi\|_{H^{-1}}\\ \leq & N \left( 1+ \|f_t \|_{H^{-1}}^{2m/(m+1)}+ \|v\|^{m }_{L_{m+1}} \right) \|\phi\|_{L_{m+1}}. \end{align*} Hence, $$ \| \Delta \left( \Phi (v ) \right) + H_tv+f_t\|_{(L_{m+1})^*} \leq N \left( 1+ \|f_t \|_{H^{-1}}^{2m/{m+1}}+ \|v\|^{m}_{L_{m+1}} \right), $$ where $N$ depends only on $m, K, C, d$ and $|Q|$. This finishes the verification of the assumptions of \cite[Theorems 3.6 and 3.8]{KR}, an application of which concludes the proof. \end{proof} \subsection{Regularity} In this section we add a viscosity term of magnitude $\varepsilon$ to equation \eqref{eq: main equation} and show that the corresponding equation and its solution $u^\varepsilon$ satisfy the assumptions of Theorem \ref{thm: quasilinear nd}, which yields supremum estimates for $u^\varepsilon$ uniformly in $\varepsilon$. Then, we show that $u^\varepsilon$ converges to $u$ and pass to the limit to obtain the desired estimates for the $u$. First, we consider an approximating equation where the non-linear term is Lipschitz continuous, that is, for $\varepsilon>0$, on $Q_T$ \begin{equation} \label{eq: double approximation} \begin{aligned} d u_t&= \left[\Delta \left( \bar{\Phi}(u_t)\right) + \varepsilon\Delta u_t+Hu_t+f_t\right]dt+ M^k_t(u_t) \, d\beta^k_t \\ u_0&=\xi, \end{aligned} \end{equation} with zero Dirichlet boundary conditions on $\D Q$. Let us set $$ \mathcal{K}_p := \mathbb{E}\|\xi\|_{L_p}^p +\mathbb{E} \int_0^T \left(\|f\|_{L_p}^p+\| |g|_{l_2}\|^p_{L_p}\right) \, dt. $$ As in \cite[Lemma B.1]{BenRoc} we have the following. \begin{lemma} \label{lem: Lipschitz non-linearity} Assume that Assumption \ref{as: coeff} \eqref{as: coefficients drift}, \eqref{as: free terms}, \eqref{as:icH-1} is satisfied and that $\bar{\Phi}: \bR \to \bR$ is a Lipschitz continuous, non-decreasing function with $\bar{\Phi}(0)=0$. Then, equation \eqref{eq: double approximation} has a unique solution $u$ in $\mathbb{H}^{-1}_2$. 
Moreover, if $\mathcal{K}_2< \infty$, then $u \in \mathbb{L}_2$ and for any $p \geq 2$ the following estimate holds \begin{equation} \label{eq: p estimates} \mathbb{E} \sup_{t \leq T} \|u_t\|^p_{L_p} + \mathbb{E} \int_0^T \||u|^{(p-2)/2}|\nabla u| \|_{L_2}^2 \, dt \leq N \mathcal{K}_p, \end{equation} where $N$ is a constant depending only on $\varepsilon, K, d, T$ and $p$. \end{lemma} \begin{proof} The existence and uniqueness of solutions in $\mathbb{H}^{-1}_2$ follows from Theorem \ref{thm: well posedness}. Therefore, we only have to show that $u \in \mathbb{L}_2$ and \eqref{eq: p estimates} under the assumption that $\mathcal{K}_2 < \infty$. Let $(e_i)_{i=1}^\infty$ be an orthonormal basis of $H^{-1}$ consisting of eigenvectors of $-\Delta$ and let $\Pi_n : H^{-1} \to \text{span}\{ e_1, ..., e_n\}$ be the orthogonal projection onto the span of the first $n$ eigenvectors. Consider the Galerkin approximation \begin{equation} \label{eq: Galerkin approximation} \begin{aligned} d u^n_t&= \Pi_n \left[ \Delta \left( \bar{\Phi}(u^n_t)\right) + \varepsilon\Delta u^n_t+Hu^n_t+f_t\right] dt+ \Pi_n M^k_t(u^n_t) \, d\beta^k_t \\ u^n_0&=\Pi_n \xi. \end{aligned} \end{equation} Under the assumptions of the lemma, it is well known from the theory of stochastic evolution equations (see \cite{KR}) that the Galerkin scheme above has a unique solution $u^n$ which converges weakly in $L_2( \Omega_T, \mathcal{P}; L_2(Q))$ to $u$ (in fact, this is how the solution $u$ is constructed). Notice that the restriction of $\Pi_n$ to $L_2$ is again the orthogonal projection (in $L_2$) onto $\text{span}\{ e_1, ..., e_n\}$. Consequently, for any $\phi, \psi \in C_c^\infty$ we have $-(\phi, \Pi_n \D_i \psi)_{L_2}= (\D_i \Pi_n \phi, \psi)_{L_2}$ which remains true for $\phi ,\psi \in L_2$. Hence, by It\^o's formula, we have \begin{align*} \|u^n_t\|_{L_2}^2 \leq \|\xi\|_{L_2}^2& - \int_0^t 2\left( \D_j u^n, \D_j \left( \bar{\Phi}(u^n) \right) + \varepsilon \D_j u^n+ \frac{\sigma^2}{2}\D_j u^n \right)_{L_2} \, ds \\ & +\int_0^t 2 \left( b^i \D_i u^n+c u^n+f, u^n\right)_{L_2} \, ds \\ &+ \int_0^t\left( \sum_{i=1}^d \sigma^2 \| \D_iu^n\|_{L_2}^2+ \sum_{k=1}^\infty \|\nu^ku^n+g^k\|^2_{L_2}\right) \, ds \\ &+ \int_0^t 2 \left( M^k (u^n), u^n\right)_{L_2} \, d \beta^k_s. \end{align*} Since $ \bar{ \Phi}$ is a non-decreasing Lipschitz continuous function, we have $$ \left( \D_j u^n, \D_j \left( \bar{\Phi}(u^n) \right) \right)_{L_2} \geq 0. $$ It then follows by standard arguments (see, e.g., the proof of Theorem 4 in \cite{ROZ}) that $$ \mathbb{E}\int_0^T \| u^n\|^2_{H^1_0} \, dt \leq N \mathcal{K}_2, $$ where $N$ depends only on $\varepsilon, K, d,$ and $T$. Since the Galerkin approximation $u^n$ converges weakly in $L_2( \Omega_T, \mathcal{P}; L_2(Q))$ to $u$, taking $\liminf$ as $n \to \infty$ in the above inequality gives $$ \mathbb{E}\int_0^T \| u\|^2_{H^1_0} \, dt \leq N \mathcal{K}_2. $$ Moreover, since $u \in L_2(\Omega_T, \mathcal{P}; H^1_0(Q))$ and satisfies \eqref{eq: double approximation}, it follows (see \cite[Theorem 2.16]{KR}) that it has a continuous $L_2$-valued version which implies that $u \in \mathbb{L}_2$. From here, one can deduce \eqref{eq: p estimates} by following step by step the proof of Lemma 2 from \cite{KOMA}, keeping in mind that $\bar{\Phi}'(u) \geq 0$.
\end{proof} We use the previous result to obtain the required regularity for the solution of the SPME in the presence of a non-degenerate viscosity term, \begin{equation} \label{eq: single approximation} \begin{aligned} du_t& =\left[ \Delta \left( \Phi (u) \right) +\varepsilon \Delta u_t +H_tu_t +f_t\right] \, dt + M^k_t (u_t) d\beta^k_t \\ u_0&= \xi. \end{aligned} \end{equation} \begin{lemma} \label{lem: regularity} Suppose that Assumption \ref{as: coeff} holds. Then, there exists a unique $\mathbb{H}^{-1}_{m+1}$-solution of equation \eqref{eq: single approximation}. If $\xi \in L_{m+1}(\Omega; L_{m+1}(Q))$, $ f \in L_{m+1}(\Omega_T; L_{m+1}(Q))$, and $g \in L_{m+1}(\Omega_T; L_{m+1}(Q; l_2))$, then we have that $u, \Phi(u) \in L_2(\Omega_T; H^1_0)$ and $\nabla \Phi (u) = \Phi'(u) \nabla u$. In particular, $u \in \mathbb{L}_2$. \end{lemma} \begin{proof} The fact that \eqref{eq: single approximation} has a unique $\mathbb{H}^{-1}_{m+1}$-solution follows from Theorem \ref{thm: well posedness}. For the remaining properties, let us consider the approximation \begin{equation} \label{eq: approximation with Lipschitz} \begin{aligned} du^n_t& =\left[ \Delta \left( \Phi _n(u^n_t)\right)+\varepsilon \Delta u^n_t +H_tu^n_t +f_t\right] \, dt + M^k_t (u^n_t) \, d\beta^k_t \\ u_0&= \xi, \end{aligned} \end{equation} where for $n \in \mathbb{N}$, $\Phi_n: \bR \to \bR$ is defined by $$ \Phi_n(r)= \int_0^r \min\{\Phi'(s), n\} \, ds. $$ By Lemma \ref{lem: Lipschitz non-linearity}, equation \eqref{eq: approximation with Lipschitz} has a unique solution $u^n$ in $\mathbb{H}^{-1}_2$ which moreover belongs to $\mathbb{L}_2$, and for all $q \in [2, m+1]$ we have \begin{equation} \label{eq: stability in Lp} \begin{aligned} &\mathbb{E} \sup_{t \leq T} \| u^n \|^q_{L_q}+ \mathbb{E} \int_0^T\int_Q | u^n |^{q-2}|\nabla u^n |^2 \, dx dt \\ &\leq N \left( \mathbb{E} \|\xi\|^q_{L_q} + \mathbb{E} \int_0^T \| f\|^q_{L_q} + \| |g|_{l_2} \|_{L_q}^q \, dt \right)< \infty, \end{aligned} \end{equation} where $N$ depends only on $K, T, d, q$ and $\varepsilon$. Let $\Psi_n(r) = \int_0^r \Phi_n(s) \, ds$. By Theorem 3.1 in \cite{KRITO} we have \begin{align} \nonumber \int_Q \Psi_n (u^n_t) \, dx &= \int_Q \Psi_n (\xi) \, dx- \int_0^t \int_Q | \nabla \Phi_n(u^n) |^2 + \varepsilon\Phi'(u^n) |\nabla u^n|^2 \, dx ds \\ \nonumber &+ \int_0^t ( b^i\D_iu^n+cu^n+f, \Phi_n (u^n) )_{L_2} \, ds \\ \nonumber &+\int_0^t \frac{1}{2}(\Phi'_n(u^n),| \nu u^n+g|^2_{l_2}) _{L_2}\, ds \\ \label{eq: Ito psi_n} & + \int_0^t ( M^k(u^n) , \Phi_n(u^n) )_{L_2} \, d\beta^k_s. \end{align} Notice that by Assumption \ref{as: coeff} we have \begin{align*} -\Phi'_n(u^n) |\nabla u^n|^2 \leq & 0, \\ (cu^n+f, \Phi_n(u^n) )_{L_2} \leq & N ( \|u^n\|^{m+1}_{L_{m+1}}+\|f\|^{m+1}_{L_{m+1}} ) \\ ( b^i\D_iu^n, \Phi_n(u^n))_{L_2}=&-((\D_ib^i)u^n, \Phi_n(u^n))_{L_2}- ( b^iu^n, \D_i\Phi_n(u^n))_{L_2} \\ &\leq N(1+\|u^n\|_{L_{m+1}}^{m+1})+\frac{1}{2}\|\nabla \Phi_n(u^n)\|_{L_2}^2 \end{align*} and \begin{align*} (\Phi'_n(u^n),| \nu u^n+g|^2_{l_2})_{L_2} & \leq N ( 1+\|u^n\|^{m+1}_{L_{m+1}}+\||g|_{l_2}\|^{m+1}_{L_{m+1}}), \end{align*} for a constant $N$ depending only on $C,K,d,$ and $|Q|$. 
Hence, after a localization argument we obtain for all $t \in [0,T]$ \begin{align} \nonumber &\mathbb{E}\int_Q \Psi_n (u^n_t) \, dx +\mathbb{E} \int_0^t \| \nabla \Phi_n(u^n) \|_{L_2}^2\, dt \\ \label{eq:RemarkLater} &\leq N \mathbb{E} \left(1+ \|\xi\|^{m+1}_{L_{m+1}} + \int_0^t \| f\|^{m+1}_{L_{m+1}} + \| |g|_{l_2} \|_{L_{m+1}}^{m+1} +\|u^n\|^{m+1}_{L_{m+1}}\, dt \right), \end{align} for a constant $N$ depending only on $C,K,d,$ and $|Q|$, which in particular gives by \eqref{eq: stability in Lp} \begin{align} \nonumber &\mathbb{E} \int_0^T \| \nabla \Phi_n(u^n) \|_{L_2}^2\, dt \\ \label{eq: stability phi_n} &\leq N \mathbb{E} \left(1+ \|\xi\|^{m+1}_{L_{m+1}} + \int_0^T \| f\|^{m+1}_{L_{m+1}} + \| |g|_{l_2} \|_{L_{m+1}}^{m+1} \, dt \right)< \infty, \end{align} where $N$ depends only on $K, T,d ,m,C, |Q|$ and $\varepsilon$. By \eqref{eq: stability in Lp} and \eqref{eq: stability phi_n} we have for a (non-relabeled) subsequence \begin{equation} \label{eq: weak converg Lp} \begin{aligned} u^n &\rightharpoonup v \ \text{in} \ L_2(\Omega_T ; H^1_0(Q)), \\ u^n &\rightharpoonup v \ \text{in} \ L_{m+1}(\Omega_T ; L_{m+1}(Q)), \\ \Phi_n(u_n) &\rightharpoonup \eta \ \text{in} \ L_2(\Omega_T ; H^1_0(Q)), \\ u^n_T &\rightharpoonup u^ \infty\ \ \text{in} \ L_{m+1}(\Omega; L_{m+1}(Q)), \end{aligned} \end{equation} for some $v$, $\eta$ and $u^ \infty$. Recall that we want to show that $u, \Phi(u) \in L_2(\Omega_T; H^1_0)$. For this, we will show that $u=v$ and $\Phi(v)= \eta$ by using standard techniques from the theory of monotone operators (see, e.g., \cite{KR}). Notice that by \eqref{eq: weak converg Lp} we also have \begin{equation} \label{eq: weak converg H -1} \begin{aligned} u^n &\rightharpoonup v \ \text{in} \ L_2(\Omega_T ; H^{-1}), \\ \Delta \Phi_n(u_n) &\rightharpoonup \Delta \eta \ \text{in} \ L_2(\Omega_T ; H^{-1}), \\ u^n_T &\rightharpoonup u^ \infty\ \ \text{in} \ L_2(\Omega; H^{-1}). \end{aligned} \end{equation} As in Section 3.5 in \cite{KR}, by passing to the weak limit in \eqref{eq: approximation with Lipschitz} we have in $H^{-1}$ for almost all $(\omega, t)$ \begin{equation} \label{eq: equality after limit} v_t =\xi+\int_0^t \left( \Delta \eta +\varepsilon \Delta v +Hv +f\right) \, ds + \int_0^t M^k (v) \, d\beta^k_s, \end{equation} and, almost surely, \begin{equation} u^\infty =\xi+ \int_0^T \left( \Delta \eta +\varepsilon \Delta v +Hv +f\right) \, ds + \int_0^T M^k (v) \, d\beta^k_s. \end{equation} Hence, we can choose a version of $v$ that is a continuous, adapted, $H^{-1}$-valued process. It follows that \eqref{eq: equality after limit} holds for all $t \in [0,T]$ on a set of probability one and that almost surely $v_T= u^\infty$. To ease the notation let us set \begin{align*} A^n_t(\varphi)&:=\Delta \left( \Phi _n(\varphi)\right)+\varepsilon \Delta \varphi +H_t \varphi +f_t \\ A_t(\varphi)&:=\Delta \left( \Phi (\varphi)\right) +\varepsilon \Delta \varphi +H_t \varphi +f_t \end{align*} for $\varphi \in L_{m+1}(Q)$. Let $y$ be a predictable $L_{m+1}(Q)$-valued process, such that $$ \mathbb{E} \int_0^T \| y\|^{m+1}_{L_{m+1}} \, dt < \infty. $$ For $c>0$ we set \begin{align} \nonumber \mathcal{O}_n:= & \mathbb{E} \int_0^T e^{-ct}2 {}_{(L_{m+1})^*}\langle A^n(u^n)-A(y), u^n-y \rangle_{L_{m+1}} \, dt \\ \nonumber +& \mathbb{E} \int_0^T e^{-ct}\sum_{k=1}^\infty\|M^k (u^n)- M^k( y) \|^2_{H^{-1}} \, dt -\mathbb{E} \int_0^T c e^{-ct}\|u^n - y\|^2_{H^{-1}} \, dt. 
\end{align} Notice that due to Lemma \ref{lem: monotonicity} we have for $c>0$ large enough (independent of $n$) \begin{equation*} \begin{aligned} \mathcal{O}_n&= \mathbb{E} \int_0^T 2 e^{-ct} {}_{(L_{m+1})^*}\langle A^n(u^n)-A^n(y), u^n-y \rangle_{L_{m+1}} \, dt \\ & +\mathbb{E} \int_0^T e^{-ct} \sum_{k=1}^\infty\|M^k (u^n)- M^k (y) \|^2_{H^{-1}} \, dt \\ & - \mathbb{E} \int_0^T c e^{-ct}\|u^n - y\|^2_{H^{-1}} \, dt \\ &+\mathbb{E} \int_0^T e^{-ct} 2{}_{(L_{m+1})^*}\langle A^n(y)-A(y) , u^n-y \rangle_{L_{m+1}} \, dt \\ & \leq \mathbb{E} \int_0^T e^{-ct}2{}_{(L_{m+1})^*}\langle A^n(y)-A(y) , u^n-y \rangle_{L_{m+1}} \, dt. \end{aligned} \end{equation*} Moreover, one can easily see that by the properties of $\Phi_n$ we have that $ \Phi_n (y) \to \Phi(y)$ strongly in $L_{m+1}(\Omega_T; L_{m+1}(Q))$, which combined with \eqref{eq: weak converg Lp} gives $$ \lim_{n \to \infty} \mathbb{E} \int_0^T e^{-ct} 2{}_{(L_{m+1})^*}\langle A^n(y)-A(y) , u^n-y \rangle_{L_{m+1}} \, dt =0. $$ Consequently, we have \begin{equation} \label{eq: limsup On} \limsup_{ n \to \infty} \mathcal{O}_n \leq 0. \end{equation} We also set \begin{equation*} \mathcal{O}^1_n:= \mathbb{E} \int_0^T e^{-ct} \left( 2 {}_{(L_{m+1})^*}\langle A^n(u^n), u^n \rangle_{L_{m+1}} +\sum_{k=1}^\infty \|M^k (u^n) \|^2_{H^{-1}} - c \| u^n\|^2_{H^{-1}}\right) \, dt \end{equation*} and $\mathcal{O}_n^2= \mathcal{O}_n-\mathcal{O}_n^1$. By It\^o's formula (see \cite[Theorem 2.17]{KR}) we have \begin{align*} &e^{-cT}\|u^n_T\|_{H^{-1}}^2-\|\xi\|_{H^{-1}}^2 \\ &= \int_0^T e^{-ct} \left( 2 {}_{(L_{m+1})^*}\langle A^n(u^n), u^n \rangle_{L_{m+1}} +\sum_{k=1}^\infty \|M^k (u^n) \|^2_{H^{-1}} \right)\, dt \\ &-\int_0^T e^{-ct}c \| u^n\|^2_{H^{-1}} \, dt + \int_0^T e^{-ct} (M^k(u^n), u^n)_{H^{-1}} \, d\beta^k_t. \end{align*} By the estimates in \eqref{eq: stability in Lp} one can easily see that $$ \mathbb{E} \left( \int_0^T \sum_{k=1}^\infty(M^k(u^n),u^n)_{H^{-1}}^2 \, dt \right)^{1/2}< \infty, $$ which implies that the expectation of the last term at the right hand side of the above equality vanishes. Hence, \begin{equation*} \mathcal{O}_n^1= \mathbb{E} e^{-cT}\|u^n_T\|_{H^{-1}}^2-\mathbb{E} \|\xi\|_{H^{-1}}^2, \end{equation*} from which we get that \begin{equation} \limsup_{n \to \infty} \mathcal{O}_n^1= \mathbb{E} e^{-cT}\|v_T\|_{H^{-1}}^2-\mathbb{E} \|\xi\|_{H^{-1}}^2+ \delta e^{-cT} \end{equation} with $\delta:= \limsup_{n \to \infty }\mathbb{E}\|u^n_T\|_{H^{-1}}^2-\mathbb{E}\|v_T\|_{H^{-1}}^2 \geq 0$, due to \eqref{eq: weak converg H -1}. On the other hand, by \eqref{eq: weak converg Lp} and \eqref{eq: stability in Lp} it follows that the quantity $$ \mathbb{E} \esssup_{t \in [0,T]} \|v_t\|^2_{L_2}+ \mathbb{E} \int_0^T\|\nabla v \|^2_{L_2} \, dt $$ can be estimated by the right hand side of \eqref{eq: stability in Lp} with $q=2$. In particular, this implies that $$ \mathbb{E} \left( \int_0^T \sum_{k=1}^\infty(M^k(v),v)_{H^{-1}}^2 \, dt \right)^{1/2}< \infty. $$ Hence, by \eqref{eq: equality after limit} and It\^o's formula we obtain \begin{align*} \mathbb{E} e^{-cT}\|v_T\|_{H^{-1}}^2 &= \mathbb{E} \|\xi\|_{H^{-1}}^2 \\ &+ \mathbb{E} \int_0^T e^{-ct} 2 {}_{(L_{m+1})^*}\langle \Delta \eta +\varepsilon \Delta v +Hv +f, v \rangle_{L_{m+1}} \, dt \\ &+\mathbb{E} \int_0^T e^{-ct}\sum_{k=1}^\infty \|M^k(v) \|^2_{H^{-1}} \, dt -\mathbb{E} \int_0^T e^{-ct}c \| v\|^2_{H^{-1}} \, dt. 
\end{align*} Hence, \begin{align} \nonumber \limsup_{n \to \infty} \mathcal{O}_n^1&=\mathbb{E} \int_0^T e^{-ct} 2 {}_{(L_{m+1})^*}\langle \Delta \eta +\varepsilon \Delta v +Hv +f, v \rangle_{L_{m+1}} \, dt \\ &+\mathbb{E} \int_0^T e^{-ct}\sum_{k=1}^\infty \|M^k (v) \|^2_{H^{-1}} \, dt -\mathbb{E} \int_0^T e^{-ct}c \| v\|^2_{H^{-1}} \, dt+ \delta e^{-cT}. \end{align} Moreover, by \eqref{eq: weak converg Lp} we have \begin{align} \nonumber \lim_{n \to \infty} \mathcal{O}^2_n &=\mathbb{E} \int_0^T e^{-ct} 2 \left( {}_{(L_{m+1})^*}\langle A(y), y \rangle_{L_{m+1}}-{}_{(L_{m+1})^*}\langle A(y), v \rangle_{L_{m+1}} \right) \, dt \\ \nonumber &-\mathbb{E} \int_0^T e^{-ct} 2 {}_{(L_{m+1})^*}\langle \Delta \eta +\varepsilon\Delta v +Hv+f, y \rangle_{L_{m+1}} \, dt \\ \nonumber &+\mathbb{E} \int_0^T e^{-ct}\left(\sum_{k=1}^\infty \|M^k (y) \|^2_{H^{-1}}- 2( M^k (y), M^k(v))_{H^{-1}} \right)\, dt \\ &+\mathbb{E} \int_0^T e^{-ct}c \left(2(v, y)_{H^{-1}}- \| y\|^2_{H^{-1}}\right) \, dt. \end{align} Consequently, \begin{align} \nonumber &\mathbb{E} \int_0^T e^{-ct}2 {}_{(L_{m+1})^*}\langle \Delta \eta + \varepsilon \Delta v +Hv +f-A(y), v-y \rangle_{L_{m+1}} \, dt \\ \nonumber + &\mathbb{E} \int_0^T e^{-ct}2\sum_{k=1}^\infty\|M^k (v)- M^k (y) \|^2_{H^{-1}} \, dt \\ \nonumber - &\mathbb{E} \int_0^T c e^{-ct}\|v- y\|^2_{H^{-1}} \, dt + \delta e^{-cT} = \limsup_{n \to \infty} \mathcal{O}^1_n + \lim_{n \to \infty} \mathcal{O}^2_n \\ \label{eq: testing} =& \limsup_{n \to \infty} \mathcal{O}_n \leq 0, \end{align} by \eqref{eq: limsup On}. By choosing $y=v$ in \eqref{eq: testing} we obtain that $\delta=0$. Moreover, it follows that \begin{align*} \nonumber &\mathbb{E} \int_0^T e^{-ct}2 {}_{(L_{m+1})^*}\langle \Delta \eta + \varepsilon \Delta v +Hv +f -A(y), v-y \rangle_{L_{m+1}} \, dt \\ - &\mathbb{E} \int_0^T c e^{-ct}\|v - y\|^2_{H^{-1}} \, dt \leq 0. \end{align*} Let $z$ be a predictable process with values in $L_{m+1}(Q)$ with $\mathbb{E} \int_0^T \|z\|_{L_{m+1}}^{m+1} \, dt < \infty$ and choose in the above inequality $y= v- \lambda z$ for $\lambda > 0$. Then, we have \begin{align*} \nonumber &\mathbb{E} \int_0^T e^{-ct}2 \lambda {}_{(L_{m+1})^*}\langle \Delta \eta + \varepsilon \Delta v +Hv +f -A(v - \lambda z), z \rangle_{L_{m+1}} \, dt \\ - &\mathbb{E} \int_0^T c \lambda^2 e^{-ct}\|z\|^2_{H^{-1}} \, dt \leq 0. \end{align*} Dividing by $\lambda$, letting $\lambda \to 0$ and using the hemi-continuity property we obtain $$ \mathbb{E} \int_0^T {}_{(L_{m+1})^*}\langle \Delta \eta - \Delta \left( \Phi(v)\right) , z \rangle_{L_{m+1}} \, dt \leq 0. $$ Since $z$ was arbitrary, we have $\Delta \eta = \Delta \left( \Phi (v) \right) $. This shows that $v$ is a solution of \eqref{eq: single approximation}, and by uniqueness, we have $u=v$, $\Phi(u)= \Phi(v) = \eta$. This finishes the proof. \end{proof} \begin{remark} \label{rem:H10} Suppose that Assumption \ref{as: extra condition} also holds and let $u^\varepsilon$ be the solution of \eqref{eq: single approximation}. By writing It\^o's formula for $\|\Psi(u^\varepsilon_t)\|_{L_1(Q)}$, where $\Psi(r)= \int_0^r \Phi(s) \, ds$, similarly to \eqref{eq:RemarkLater} and after applying Gronwal's lemma one has $$ \mathbb{E} \int_0^T \|\nabla \Phi(u^{\varepsilon})\|_{L_2}^2 \, dt \leq N \mathbb{E} \left(1+ \|\xi\|^{m+1}_{L_{m+1}} + \int_0^T \| f\|^{m+1}_{L_{m+1}} + \| |g|_{l_2} \|_{L_{m+1}}^{m+1} \, dt \right), $$ where $N$ is independent of $\varepsilon$. \end{remark} We will also need the following. 
\begin{lemma} \label{lem: lsc} Let $u^n$ be real-valued functions on $Q$ such that $u^n \rightharpoonup u$ in $H^{-1}$ for some $u \in H^{-1}$. Then for any $p \in [1, \infty]$ $$ \|u\|_{L_p} \leq \liminf_{n \to \infty} \|u^n\|_{L_p}. $$ \end{lemma} \begin{proof} Suppose first that $p\in [1, \infty)$. We assume that $\liminf_{n \to \infty} \|u^n\|_{L_p}< \infty$ or else there is nothing to prove. Under this assumption there exists a subsequence $u^{n_k}$ with $\lim_k \|u^{n_k}\|_{L_p}^p= \liminf_n \|u^{n}\|_{L_p}^p$ and $v \in L_p(Q)$ such that $u^{n_k} \rightharpoonup v$ in $L_p(Q)$. It follows that $u=v \in L_p(Q)$, which finishes the proof since $\|v\|_{L_p} \leq \liminf_k \|u^{n_k}\|_{L_p}$. For $p= \infty$ we have the following. We know that the conclusion holds for all $p \in [1, \infty)$. Hence, $$ \|u\|_{L_p} \leq |Q|^{1/p}\liminf_{n \to \infty} \|u^n\|_{L_\infty}. $$ The assertion follows by letting $p \to \infty$. \end{proof} We can now present our main theorems. \begin{theorem} \label{thm: theorem boundedness} Suppose that Assumptions \ref{as: coeff} and \ref{as: extra condition} are satisfied with $m>1$. Let $\mu \in \Gamma_d$ and let $u \in \mathbb{H}^{-1}_{m+1}$ be the unique solution of \eqref{eq: main equation}. Then, we have \begin{equation} \mathbb{E} \|u \|^2 _{L_\infty(Q_T)} \ \leq N \mathbb{E} \left( \|\xi \|_{L_\infty(Q)}^2 +|S_\mu(f,g)|^2 \right), \end{equation} where $$ S_\mu(f,g)=1+\|f\|_{L_\mu(Q_T)}+ \| |g|_{l_2} \|_{L_{2\mu}(Q_T)}, $$ and $N$ is a constant depending only on $ m, T, \bar{c},K, d, \mu$ and $|Q|$. \end{theorem} \begin{proof} \emph{Step 1:} In a first step we assume that $|\xi(x)|$, $|f_t(x)|$, $|g_t(x)|_{l_2}$ are bounded uniformly in $(\omega,t,x)$. Let $u^\varepsilon$ denote the unique solution of the problem \eqref{eq: single approximation}. By Assumptions \ref{as: coeff} and \ref{as: extra condition} we have that equation \eqref{eq: single approximation} satisfies Assumptions \ref{as: nd}-\ref{as:boundednessV} with $\theta= \varepsilon$, $c= \bar{c}$, $\tilde{m}=m-1$, and $$ a^{ij}_t(x,r)= \Phi'(r)I_{i=j}+\varepsilon I_{i=j}r+\sigma^2 _tr/2, $$ \begin{equs} F^i_t(x,r)&=b^i_t(x)r,\qquad\qquad \qquad F_t(x,r)&&= (c_t(x)-\D_ib^i_t(x))r+f_t(x), \\ g^{ik}_t(x,r)&=I_{i=k, k \leq d}\sigma_t r,\qquad \qquad G^k_t(x,r)&&= I_{k>d} (\nu ^{k-d}_t(x)r + g^{k-d}_t(x)), \\ V^1_t(x)&= |f_t(x)|, \qquad \qquad \qquad V^2_t(x)&&= |g_t(x)|_{l_2}. \end{equs} By Lemma \ref{lem: regularity} we have that $u^\varepsilon$ satisfies equation \eqref{eq: single approximation} also in the sense of Definition \ref{def: definition L2}. Therefore, by Theorem \ref{thm: quasilinear nd} we have \begin{equation} \label{eq: boundedness epsilon} \mathbb{E} \|u^\varepsilon \|^2 _{L_\infty(Q_T)} \ \leq N \mathbb{E} \left( 1+\|\xi \|_{L_\infty(Q)}^2 +|S_\mu(f,g)|^2 \right), \end{equation} where $N$ is a constant depending only on $ m, T, \bar{c},K,d, \mu$, and $|Q|$. By It\^o's formula, the monotonicity of $\Phi$, and \eqref{eq: part of coercivity}, one can easily see that for a constant $N$ independent of $\varepsilon$ we have $$ \|u_t-u_t^\varepsilon\|_{H^{-1}}^2 \leq N \int_0^t \|u-u^\varepsilon\|_{H^{-1}}^2 + 2\varepsilon\, {}_{(L_{m+1})^*}\langle \Delta u^\varepsilon , u^\varepsilon-u \rangle_{L_{m+1}} \, ds +M^\varepsilon_t, $$ for a local martingale $M^\varepsilon_t$.
Hence, \begin{equation} \label{eq: viscosity convergence} \mathbb{E} \|u^\varepsilon_t-u_t\|^2_{H^{-1}} \leq N \mathbb{E} \int_0^T \varepsilon | {}_{(L_{m+1})^*}\langle \Delta u^\varepsilon , u^\varepsilon-u \rangle_{L_{m+1}}| \, dt. \end{equation} Moreover, by It\^o's formula, Assumption \ref{as: coeff}, \eqref{item: coercivity of phi}, and \eqref{eq: part of coercivity} we have for a constant $N$ independent of $\varepsilon$ $$ \mathbb{E} \int_0^T \| u^\varepsilon \|_{L_{m+1}}^{m+1} \, dt \leq N \mathbb{E} \left( 1+ \| \xi\|_{H^{-1}}^2+ \int_0^T \|f\|^2_{H^{-1}}+ \sum_{k=1}^\infty\|g^k\|_{H^{-1}}^2 \, dt \right)< \infty. $$ The same estimate holds for $u$. Hence, by virtue of \eqref{eq: viscosity convergence}, we have \begin{equation*} \label{eq: u epsilon to u} \lim_{ \varepsilon \to 0} \mathbb{E} \int_0^T \| u_t-u_t^\varepsilon\|_{H^{-1}}^2\, dt=0. \end{equation*} In particular, for a sequence $\varepsilon_k \to 0$ we have $\|u^{\varepsilon_k}_t-u_t\|_{H^{-1}} \to 0$ for almost all $(\omega,t)$. By Lemma \ref{lem: lsc}, Fatou's lemma, and \eqref{eq: boundedness epsilon}, we have for any $p \in[1, \infty)$ \begin{align*} \mathbb{E} \|u \|^2 _{L_p(Q_T)} & \leq \liminf_k \mathbb{E} \|u^{\varepsilon_k} \|^2_{L_p(Q_T)} \leq \liminf_k N \mathbb{E} \|u^{\varepsilon_k} \|^2 _{L_\infty(Q_T)} \\ &\leq N \mathbb{E} \left( 1+\|\xi \|_{L_\infty(Q)}^2 +|S_\mu(f,g)|^2 \right), \end{align*} with $N$ independent of $p$, and the result follows by letting $p \to \infty$. \emph{Step 2:} For general $\xi, f , g$ we set \begin{align*} \xi^n&= ((-n)\vee \xi) \wedge n, \qquad f^n= ((-n)\vee f) \wedge n, \\ g^n&= \sum_{k=1}^{n}\left( ((-C_n)\vee g^{k})\wedge C_n \right) e_k \end{align*} where $(e_k)_{k=1}^\infty$ is the usual orthonormal basis of $l_2$ and $C_n \geq 0$ are chosen such that $ \mathbb{E} \int_0^T \| |g^{n}-g|_{l_2}\|^2_{L_2} \, dt \to 0$. Let $u^n$ be the solution of the equation corresponding to the truncated data. Then by It\^o's formula one can easily check that \begin{align*} \mathbb{E} \int_0^T\|u^n_t-u_t\|^2_{H^{-1}}\, dt & \leq N \mathbb{E} \|\xi -\xi^n\|_{H^{-1}}^2 \\ & + N \mathbb{E} \left( \int_0^T \|f-f^n\|^2_{H^{-1}}+ \sum_{k=1}^\infty\|g^k-g^{n,k}\|_{L_2}^2 \, dt \right)\to 0, \end{align*} as $n \to \infty$. Since for each $n\in \mathbb{N}$ we have $$ \mathbb{E} \|u^n \|^2 _{L_\infty(Q_T)} \ \leq N \mathbb{E} \left( 1+\|\xi \|_{L_\infty(Q)}^2 +|S_\mu(f,g)|^2 \right), $$ the result follows by virtue of Lemma \ref{lem: lsc}. \end{proof} \begin{theorem} \label{thm: smoothing PME} Suppose that Assumptions \ref{as: coeff} and \ref{as: extra condition} are satisfied with $m>1$. Let {\color{black}$\mu \in \Gamma_d$} and let $u$ be the solution of \eqref{eq: main equation}. Then, for all $\rho \in (0,1)$ we have \begin{align} \mathbb{E} \|u \|^2 _{L_\infty((\rho, T) \times Q)} \ &\leq \rho^{-\tilde{\theta}} N \mathbb{E} \left(\| \xi \|_{H^{-1}}^2 +{\color{black}|S_\mu(f,g)|^2} \right), \end{align} where $S_\mu(f,g)$ is as in Theorem \ref{thm: theorem boundedness}, $N$ is a constant depending only on $C, m, T, c,K,d, \mu$ and $|Q|$, and $\tilde{\theta}>0$ is a constant depending only on $ d, \mu$ and $m$. \end{theorem} \begin{proof} \emph{Step 1:} In a first step we assume that $\xi \in L_{m+1}(\Omega, L_{m+1}(Q))$ and that $|f_t(x)|$, $|g_t(x)|_{l_2}$ are bounded uniformly in $(\omega,t,x)$. Let $u^\varepsilon$ denote the unique solution of the problem \eqref{eq: single approximation}. 
By Lemma \ref{lem: regularity} $u^\varepsilon$ is a solution also in the sense of Definition \ref{def: definition L2}. Hence, by Lemma \ref{thm: smoothing} we have \begin{equation} \label{eq:K} \mathbb{E} \|u^\varepsilon \|^2 _{L_\infty((\rho, T) \times Q)} \ \leq \rho^{-\tilde{\theta}} N \mathbb{E} \left( 1+\| u^\varepsilon \|_{L_2(Q_T)}^2 +{\color{black}|\mathcal{S}_r(f,g)|^2} \right), \end{equation} with $N$ independent of $\varepsilon$. By It\^o's formula, Assumption \ref{as: coeff} \eqref{item: coercivity of phi} and Lemma \ref{lem: monotonicity} we have \begin{align} \nonumber &\|u^\varepsilon_t\|^2_{H^{-1}}+ \int_0^t \| u^\varepsilon\|^{m+1}_{L_{m+1}} \, ds \\ \label{eq:G} &\leq N\left(1+ \|\xi\|^2_{H^{-1}}+\int_0^t\|u^\varepsilon\|^2_{H^{-1}}+\|f\|_{H^{-1}}^2+\sum_{k=1}^\infty\|g^k\|_{H^{-1}}^2 \, ds\right) +M_t, \end{align} for a local martingale, with $N$ independent of $\varepsilon$. After a localization argument and Gronwal's lemma one gets $$ \mathbb{E} \int_0^T \| u^\varepsilon\|^{m+1}_{L_{m+1}} \, ds \leq N\mathbb{E} \left(1+ \|\xi\|^2_{H^{-1}}+\int_0^T (\|f\|_{H^{-1}}^2+\sum_{k=1}^\infty\|g^k\|_{H^{-1}}^2 ) \, ds\right). $$ Plugging this in \eqref{eq:K} ($m+1>2$) gives the desired inequality. \emph{Step 2:} For general $\xi, f$ and $g$, one can proceed as in the proof of Theorem \ref{thm: theorem boundedness}, this time choosing $\xi^n \in L_{m+1}(\Omega; L_{m+1}(Q))$ such that $\lim_n\mathbb{E}\|\xi ^n- \xi\|_{H^{-1}}^2 =0$ and $\|\xi_n\|_{H^{-1}} \leq \|\xi\|_{H^{-1}}$ almost surely. This finishes the proof. \end{proof} \begin{remark} \label{rem} As already seen, for any $\xi \in L_2(\Omega; H^{-1})$, the corresponding solution $u$ of \eqref{eq: main equation} belongs to the space $ L_{m+1}(\Omega_T; L_{m+1}(Q))$. Consequently, there exists arbitrarily small $s>0$ such that $\mathbb{E} \|u_s \|_{L_{m+1}}^{m+1}< \infty$. By Remark \ref{rem:H10} the quantity $ \mathbb{E} \|\Phi(u^{\varepsilon}) \|_{L_2(s,T;H^1_0)}^2$ (where $u^{\varepsilon}$ is the solution of \eqref{eq: single approximation} starting at time $s$ from $u^{\varepsilon}_s=u_s$) can be controlled by $\mathbb{E} \|u_s\|^{m+1}_{L_{m+1}}$. Using this, one can use again the theory of monotone operators to show that the weak limit of $\Phi(u^\varepsilon)$ in $L_2(\Omega \times (s,T);H^1_0)$ coincides with $\Phi(u)$. In particular, the solution $u$ is strong on the time interval $(s,T)$, that is, $\Phi(u_t)\in H^1_0(Q)$ for a.e. $(\omega,t) \in \Omega \times (s,T)$. \end{remark} \appendix \section{} \begin{lemma} \label{lem:Appendix} Let $Q \subset \bR^d$ be an open bounded set and let $R \in C^1 (\overline{Q} \times \bR)$ be such that there exist $N \in \bR$, $p \in [2, \infty)$ and $g \in L_p(Q)$ such that for all $(x,r) \in \bar{Q} \times \bR$ \begin{equation} |R(x,r)|+| \nabla_x R(x,r)| \leq N+|g(x)||r|^{p-2} +N |r|^{p-1}. \end{equation} Set $$ G(x,r):= \int_0^r R(x,s) \, ds, $$ and let $u \in H^1_0(Q)$ be such that \begin{equation} \label{eq:integrability-u} \int_Q |u|^p \, dx + \int_Q | \nabla u |^2|u|^{p-2} \, dx < \infty. \end{equation} Then $G(\cdot, u) \in W^{1,1}_0(Q)$. \end{lemma} \begin{proof} Let us set \[ R_n(x,r) := \left\{\begin{array}{lcr} R(x,r), &\text{for}& \qquad |r| \leq n \\ R (x,n), &\text{for}& \qquad r > n \\ R(x,-n), &\text{for}& \qquad r < -n \end{array} \right. \] and $$ G_n(x,r) : = \int_0^r R_n(x,s) \, ds. 
$$ It follows that $ \nabla_x G_n(x, r)$ and $\D_r G_n(x, r)$ are continuous in $(x,r) \in \overline{Q} \times \bR$, and they satisfy with some constant $N(n)$, for all $(x,r) \in \overline{Q} \times \bR$ $$ |\nabla_x G_n(x, r)| \leq N(n) |r|, \qquad |\D_r G_n(x, r)| \leq N(n). $$ Moreover, we have $G_n(x,0)=0$. Hence, by approximating $u$ in $H^1_0(Q)$ with $u^m \in C^\infty_c(Q)$, one concludes easily that $G_n(\cdot, u) \in W^{1,1}_0(Q)$. Notice that there exists a constant $N$, such that for all $n \in \mathbb{N}, \ (x,r) \in \overline{Q} \times \bR$ we have \begin{enumerate}[(i)] \item $|G_n(x,r)| \leq N(1+|g(x)|^p +|r|^{p})$, \item $ | \nabla_x G_n(x,r)| \leq N(1+|g(x)|^p +|r|^{p}) $, \item $|\D_rG_n(x,r)| \leq N(1+| g(x)||r|^{p-2}+|r|^{p-1})$. \end{enumerate} This implies by Young's inequality \begin{align*} |G_n(x,u)| &\leq N(1+|g(x)|^p +|u|^{p}), \\ |\nabla_x G_n(x,u)| + |\D_rG_n(x,u)| |\nabla u| &\leq N(1+| g(x)|^p+|u|^p+|u|^{p-2}|\nabla u|^2). \end{align*} By Lebesgue's theorem on dominated convergence we have $G_n(\cdot, u)\to G(\cdot,u)$ and $ \nabla_x( G_n(\cdot, u)) \to \nabla_xG(\cdot, u)+ \D_rG(\cdot, u) \nabla_x u$ in $L_1(Q)$, and the claim follows since $ G_n(\cdot,u) \in W^{1,1}_0(Q)$ for all $n \in \mathbb{N}$. \end{proof} \end{document}
\begin{document} \title{Gaussian analytic functions and operator symbols of Dirichlet type} \author{Haakan Hedenmalm} \address{ Hedenmalm: Department of Mathematics\\ KTH Royal Institute of Technology\\ S--10044 Stockholm\\ Sweden} \email{[email protected]} \author{Serguei Shimorin} \address{ Shimorin: Department of Mathematics\\ KTH Royal Institute of Technology\\ S--10044 Stockholm\\ Sweden} \email{[email protected]} \subjclass[2000]{Primary 30C40, 30B20, 30C55, 60G55} \keywords{} \thanks{This research was supported by Vetenskapsr\aa{}det (VR)} \begin{abstract} Let $\calH$ be a separable infinite-dimensional ${\mathbb C}$-linear Hilbert space, with sesquilinear inner product $\langle\cdot,\cdot\rangle_\calH$. Given any two orthonormal systems $x_1,x_2,x_3,\ldots$ and $y_1,y_2,y_3,\ldots$ in $\calH$, we show that \[ \sum_{l=2}^{+\infty}s^{l}\bigg| \sum_{j,k:j+k=l}(jk)^{-\frac12}\langle x_j,y_k\rangle_{\calH}\bigg|^2 \le 2s\log\frac{\mathrm{e}^{1/2}}{1-s},\qquad0\le s<1. \leqno{(1)} \] In terms of the weighted sums \[ S(l):=\sum_{j,k:j+k=l}\bigg(\frac{l}{jk}\bigg)^{\frac12}\, \langle x_j,y_k\rangle_{\calH}, \] this means that \[ \sum_{l=2}^{+\infty}\frac{s^{l}}{l}|S(l)|^2\le 2\log\frac{\mathrm{e}^{1/2}}{1-s},\qquad0\le s<1. \] Expressed more vaguely, $|S(l)|^2\lessapprox2$ holds in the sense of averages. Concerning the optimality of the bound (1), a construction due to Zachary Chase shows that the statement does not hold if the number $2$ is replaced by the smaller number $1.72$. In the construction, the system $y_1,y_2,y_3,\ldots$ is a permutation of the system $x_1,x_2,x_3,\ldots$. We interpret our bound in terms of the correlation ${\mathbb E} \Phi(z)\Psi(z)$ of two copies of a Gaussian analytic function with possibly intricate Gaussian correlation structure between them. The Gaussian analytic function we study arises in connection with the classical Dirichlet space, which is naturally M\"obius invariant. The study of the correlations ${\mathbb E}\Phi(z)\Psi(z)$ leads us to introduce a new space, the \emph{mock-Bloch space}, which is slightly bigger than the standard Bloch space. Our bound has an interpretation in terms of McMullen's asymptotic variance, originally considered for functions in the Bloch space. Finally, we show that the correlations ${\mathbb E}\Phi(z)\Psi(w)$ may be expressed as Dirichlet symbols of contractions on $L^2({\mathbb D})$, and show that the Dirichlet symbols of Grunsky operators associated with univalent functions find a natural characterization in terms of a nonlinear wave equation. \end{abstract} \maketitle \section{Introduction} \subsection{Basic notation in the plane} \label{subsec-1.1} We write ${\mathbb Z}$ for the integers, ${\mathbb Z}_+$ for the positive integers, ${\mathbb R}$ for the real line, and ${\mathbb C}$ for the complex plane. Moreover, we write ${\mathbb C}_\infty:={\mathbb C}\cup\{\infty\}$ for the extended complex plane (the Riemann sphere). For a complex variable $z=x+\mathrm{i} y\in{\mathbb C}$, let \[ \mathrm{d} s(z):=\frac{|\mathrm{d} z|}{2\pi},\qquad \mathrm{d} A(z):=\frac{\mathrm{d} x\mathrm{d} y}{\pi}, \] denote the normalized arc length and area measures, as indicated.
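In particular, the normalizations are chosen so that both ${\mathbb T}$ and ${\mathbb D}$ carry unit total mass: $\int_{\mathbb T}\mathrm{d} s=1$ and $\int_{\mathbb D}\mathrm{d} A=1$.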
Moreover, we shall write \[ \varDelta_z:=\frac{1}{4}\bigg(\frac{\partial^2}{\partial x^2}+ \frac{\partial^2}{\partial y^2}\bigg) \] for the normalized Laplacian, and \[ \partial_z:=\frac{1}{2}\bigg(\frac{\partial}{\partial x}-\mathrm{i} \frac{\partial}{\partial y}\bigg),\qquad \bar\partial_z:=\frac{1}{2}\bigg(\frac{\partial}{\partial x}+\mathrm{i} \frac{\partial}{\partial y}\bigg), \] for the standard complex derivatives; then $\varDelta$ factors as $\varDelta_z=\partial_z\bar\partial_z$. Often we will drop the subscript for these differential operators when it is obvious from the context with respect to which variable they apply. We let ${\mathbb D}$ denote the open unit disk, ${\mathbb T}:=\partial{\mathbb D}$ the unit circle, and ${\mathbb D}_e$ the exterior disk: \[ {\mathbb D}:=\{z\in{\mathbb C}:\,\,|z|<1\},\qquad {\mathbb D}_e:=\{z\in{\mathbb C}_\infty:\,\,|z|>1\}. \] We will find it useful to introduce the sesquilinear forms $\langle\cdot,\cdot \rangle_{\mathbb C}$ and $\langle\cdot,\cdot \rangle_{\mathbb D}$, as given by \[ \langle f,g \rangle_{\mathbb C}:=\int_{\mathbb C} f(z)\bar g(z)\mathrm{d} A(z),\qquad \langle f,g\rangle_{\mathbb D}:=\int_{\mathbb D} f(z)\bar g(z)\mathrm{d} A(z), \] where we need $f\bar g\in L^1({\mathbb C})$ in the first instance and $f\bar g\in L^1({\mathbb D})$ in the second. These are standard Lebesgue spaces with respect to normalized area measure $\mathrm{d} A$. Here, generally, for a given complex-valued function $f$, we denote by $\bar f$ the function whose values are the complex conjugates of $f$. To simplify the notation further, we write \[ \langle f\rangle_{\mathbb C}=\langle f,1\rangle_{\mathbb C},\quad \langle f\rangle_{\mathbb D}=\langle f,1\rangle_{\mathbb D}. \] As for operators ${\mathbf T}$ on a Hilbert function space, we let ${\mathbf T}^*$ denote the adjoint, while $\bar{\mathbf T}$ means the operator defined by \[ \bar{\mathbf T} f=\overline{{\mathbf T}\bar f}. \] \subsection{Complex Gaussian Hilbert space} \label{subsec-CGHS} A \emph{Gaussian Hilbert space} is a closed linear subspace $\Gspace$ of $L^2(\Omega)=L^2(\Omega,\mathrm{d} P)$, where $(\Omega,\mathrm{d} P)$ is a probability space with a given $\sigma$-algebra, with the property that each element $\gamma\in\Gspace$ has a Gaussian distribution with mean $0$. Since we will be working with the complex field ${\mathbb C}$, this means that the real and imaginary parts of $\gamma$ are jointly Gaussian, and that the mean is $0$ of each one. Here, the \emph{expectation} (or \emph{mean}) operation ${\mathbb E}$ is just given by ${\mathbb E} \gamma:=\langle\gamma\rangle_\Omega=\int_\Omega\gamma\mathrm{d} P$. We say that $\gamma$ is \emph{symmetric} if ${\mathbb E}(\gamma^2)=0$. Moreover, $\gamma$ is a \emph{standard complex Gaussian} variable if it has mean $0$, is symmetric and has ${\mathbb E}(|\gamma|^2)=1$. In other words, the values of $\gamma$ are distributed according to the density $\mathrm{e}^{-|z|^2}\mathrm{d} A(z)$ in the plane. We will assume for convenience that $\Gspace$ is \emph{conjugation-invariant}, that is, $\gamma\in\Gspace\Longleftrightarrow\bar\gamma\in\Gspace$. We refer to \cite{Janson} for an exposition on Gaussian Hilbert spaces. We will write $\langle\gamma,\gamma'\rangle_\Omega =\langle\gamma\bar\gamma'\rangle_\Omega={\mathbb E} \gamma\bar\gamma'$ for the inner product of $\Gspace$. We shall need the following observation.
If $\Gspace$ is separable and infinite-dimensional, then there exists a sequence $\gamma_1,\gamma_2,\gamma_3,\ldots$ in $\Gspace$ consisting of i.i.d. standard complex Gaussians, such that the sequence $\gamma_1,\bar\gamma_1,\gamma_2,\bar\gamma_2,\ldots$ forms an orthonormal basis in $\Gspace$. In particular, $\Gspace$ splits as an orthogonal sum $\Gspace=\Hspace\oplus\Hspace_*$, where $\Hspace$ is the closed subspace spanned by $\gamma_1,\gamma_2,\gamma_3,\ldots$, while $\Hspace_*$ is spanned by $\bar\gamma_1,\bar\gamma_2,\bar\gamma_3,\ldots$. \subsection{Gaussian analytic functions associated with the Dirichlet space} We now outline a more direct approach to the analytic part of GFF outlined in the preceding subsection. Let $A^2({\mathbb D})$ denote the subspace of $L^2({\mathbb D})$ consisting of the holomorphic functions, which is a closed subspace and hence a Hilbert space in its own right, known as the \emph{Bergman space}. The \emph{Dirichlet space} is the space $\calD({\mathbb D})$ of analytic functions $f$ with $f'\in A^2({\mathbb D})$, equipped with the Dirichlet inner product \[ \langle f,g\rangle_\nabla:=\langle f',g'\rangle_{\mathbb D}. \] The importance of the Dirichlet space comes from its conformal invariance property. For instance, if $\phi$ is a M\"obius automorphism of the unit disk ${\mathbb D}$, we have that \[ \langle f\circ\phi,g\circ\phi\rangle_\nabla=\langle f,g\rangle_\nabla. \] The Dirichlet inner product gives rise to a seminorm \[ \|f\|_\nabla^2:=\|f'\|^2_{A^2({\mathbb D})}=\langle f',f'\rangle_{\mathbb D}, \] which vanishes on the constant functions. So, to make it a norm, we could add the requirement that the functions should vanish at a given point $\lambda\in{\mathbb D}$: \[ \calD_\lambda({\mathbb D}):=\{f\in\calD({\mathbb D}):\,f(\lambda)=0\}. \] We will focus our attention on $\lambda=0$, and study the space $\calD_0({\mathbb D})$. By the M\"obius invariance of the seminorm, this choice is not restrictive as we may easily move any other point $\lambda$ to the origin using a M\"obius automorphism. In recent years, \emph{Gaussian analytic functions} have received increasing attention. For instance, see \cite{Sodin} and the book \cite{HKPV}. In the space $\calD_0({\mathbb D})$, we have a canonical orthogonal basis \[ e_j(z):=j^{-\frac12}z^j,\qquad j=1,2,3,\ldots, \] and we form a $\calD_0$-Gaussian analytic function ($\calD_0$-GAF) \begin{equation} \Phi(z):=\sum_{j=1}^{+\infty}\alpha_j\,e_j(z)= \sum_{j=1}^{+\infty}\frac{\alpha_j}{\sqrt{j}}\,z^j, \label{eq-Gauss1} \end{equation} where the $\alpha_j$ are i.i.d. (independent identically distributed) standard complex Gaussian variables, taken from a Gaussian Hilbert space $\Gspace$. Then for two points in the disk $z,w\in{\mathbb D}$, we have the complex correlation structure \begin{equation} {\mathbb E}(\Phi(z)\Phi(w))=0, \quad {\mathbb E}(\Phi(z)\bar\Phi(w))= \log\frac{1}{1-z\bar w}. \label{eq-Gauss2} \end{equation} Since Gaussian random variables are determined by their correlation structures, we may, depending on the point of view, take \eqref{eq-Gauss2} as the defining property instead of the more explicit \eqref{eq-Gauss1}. On the right-hand side of \eqref{eq-Gauss2}, we recognize the reproducing kernel for the Dirichlet space, \begin{equation} \mathrm k_{\calD_0}(z,w)=\log\frac{1}{1-z\bar w}, \end{equation} with the point evaluation property \[ f(w)=\langle f,\mathrm k_{\calD_0}(\cdot,w)\rangle_\nabla,\qquad f\in \calD_0({\mathbb D}). \]
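For completeness, we note that \eqref{eq-Gauss2} is immediate from \eqref{eq-Gauss1}: since the $\alpha_j$ are independent and symmetric with ${\mathbb E}(|\alpha_j|^2)=1$, we have ${\mathbb E}(\alpha_j\alpha_k)=0$ and ${\mathbb E}(\alpha_j\bar\alpha_k)=\delta_{j,k}$, so that \[ {\mathbb E}(\Phi(z)\Phi(w))=0,\qquad {\mathbb E}(\Phi(z)\bar\Phi(w))=\sum_{j=1}^{+\infty}\frac{(z\bar w)^j}{j} =\log\frac{1}{1-z\bar w}. \]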
It is appropriate to think of the correlation structure \eqref{eq-Gauss2} in terms of the matrix-valued correlation structure \begin{equation} \mathbb{k}_{2\times2}[\Phi](z,w)={\mathbb E} \begin{pmatrix} \Phi(z)\\ \bar \Phi(z) \end{pmatrix} \begin{pmatrix} \bar\Phi(w) & \Phi(w) \end{pmatrix}=\begin{pmatrix} {\mathbb E} \Phi(z)\bar\Phi(w) & {\mathbb E}\Phi(z)\Phi(w)\\ {\mathbb E} \bar\Phi(z)\bar\Phi(w) & {\mathbb E}\bar\Phi(z)\Phi(w) \end{pmatrix} = \begin{pmatrix} \log\frac{1}{1-z\bar w} & 0\\ 0 & \log\frac{1}{1-\bar z w} \end{pmatrix}, \label{eq-matrixcorr1} \end{equation} and the associated $4\times4$ matrix \begin{equation} \begin{pmatrix} \mathbb{k}_{2\times2}[\Phi](z,z)&\mathbb{k}_{2\times2}[\Phi](z,w) \\ \mathbb{k}_{2\times2}[\Phi](z,w)^\ast &\mathbb{k}_{2\times2}[\Phi](w,w) \end{pmatrix} \label{eq-matrixcorr1.1} \end{equation} is positive semidefinite (the asterisk $\ast$ stands for the operation of taking the adjoint of the matrix). The real part of $\Phi(z)$ may be understood, up to an additive constant, as the restriction of the Gaussian free field (GFF) on ${\mathbb C}$ conditioned to be harmonic in ${\mathbb D}$. For some background on GFF, we refer to the survey paper \cite{Sheff} as well as to \cite{HedNiem}. Alternatively, the process $\Phi(z)$ may be identified as the limit of the logarithm of the characteristic polynomial for random unitary matrices as the size of the matrices tends to infinity (see below). In analogy with \cite{PerVir}, it might be of interest to study the random zeros of the function $\Phi(z)$, but since one of them is deterministic (the origin), we should not expect full M\"obius automorphism invariance. By the Edelman-Kostlan formula (see \cite{Sodin}) the density of zeros is given by \begin{equation} \varDelta\log \mathrm k_{\calD_0}(z,z)\,\mathrm{d} A(z) =\varDelta\log\log\frac{1}{1-|z|^2} \mathrm{d} A(z), \end{equation} which has a unit point mass at the origin due to the deterministic zero there. Here, one might also be interested in the process for the critical points. We will not pursue any of these directions here. A rather interesting object appears to be the random curve (or tree) structure we obtain by following the gradient flow for the random harmonic function $\re\Phi(z)$ which stops at critical points. At each critical point we would instead choose among the possible directions, for instance by maximizing the second directional derivative (perhaps after precomposing with a M\"obius mapping to put the critical point at the origin). Although quite promising, we will not pursue this matter further here. A related setting of gradient flow for the plane defined in terms of the Bargmann-Fock space was studied by Nazarov, Sodin, and Volberg \cite{NSV}. \subsection{$\calD_0$-Gaussian analytic functions and random unitary matrices} Let $M_n$ be a random $n\times n$ unitary matrix with distribution given by Haar measure. Let \[ \chi_{M_n}(\lambda)=\det(\lambda{I}_n-M_n) \] be the associated random characteristic polynomial, where $I_n$ is the $n\times n$ identity matrix. Diaconis and Evans \cite{DiacEv} found an interesting relationship connecting the characteristic polynomial of $M_n$ with the process given by \eqref{eq-Gauss1}.
They showed that \[ \operatorname{tr}\log(I_n-zM_n^*)=\log\det(I_n-zM_n^*) =\log\frac{\chi_{M_n}(z)}{\chi_{M_n}(0)} \] converges, as $n\to+\infty$, in distribution, to the $\calD_0$-Gaussian analytic function $\Phi(z)$ given by \eqref{eq-Gauss1}. The details are supplied in Example 5.6 of \cite{DiacEv}. For the convenience of the reader, we mention that the master relationship between their random function $F_n(z)$ and $\chi_{M_n}(z)$ has a typo, and should be replaced by \[ F_n(z)=\frac{n}{2\pi}-\frac{z}{\pi}\frac{\chi_{M_n}'(z)}{\chi_{M_n}(z)}. \] \begin{rem} The matters considered here, the possible correlation structure of two jointly Gaussian $\calD_0$-GAFs, have their (finite-dimensional) counterpart for random matrices. Let $M_n$ and $M_n'$ be two copies of the random $n\times n$ unitary matrix ensemble, with possibly complicated correlation structure between $M_n$ and $M_n'$, but at least all their entries are jointly (complex) Gaussian variables. What could we say about the structure of the ${\mathbb C}^2$-valued process of normalized random characteristic polynomials \[ \bigg(\frac{\chi_{M_n}(z)}{\chi_{M_n}(0)}, \frac{\chi_{M_n'}(z)}{\chi_{M_n'}(0)}\bigg)? \] \end{rem} \subsection{Two interacting copies of the $\calD_0$-Gaussian analytic function process} The topic here involves two copies of the process \eqref{eq-Gauss1}, \begin{equation} \Phi(z):=\sum_{j=1}^{+\infty}\frac{\alpha_j}{\sqrt{j}}\,z^j,\quad \Psi(z):=\sum_{j=1}^{+\infty}\frac{\beta_j}{\sqrt{j}}\,z^j, \label{eq-Gauss2.1} \end{equation} where $\Phi(z)$ is as before and the $\beta_j$ are i.i.d. from $N_{{\mathbb C}}(0,1)$, taken from the same Gaussian Hilbert space $\Gspace\subset L^2(\Omega)$. We will refer to $(\Phi(z),\Psi(z))$ as a \emph{pair of jointly Gaussian $\calD_0$-GAFs}. Consisting of jointly Gaussian variables with zero mean, the vector-valued process $(\Phi(z),\Psi(z))$ is governed by the correlation matrix \begin{multline} {\mathbb{k}}_{4\times4}[\Phi,\Psi](z,w):= {\mathbb E}\begin{pmatrix} \Phi(z)\\ \bar\Phi(z)\\ \Psi(z)\\ \bar\Psi(z) \end{pmatrix} \begin{pmatrix} \bar\Phi(w)&\Phi(w)&\bar\Psi(w)&\Psi(w) \end{pmatrix} \\ =\begin{pmatrix} {\mathbb E}\Phi(z)\bar\Phi(w) & {\mathbb E}\Phi(z)\Phi(w) & {\mathbb E}\Phi(z)\bar\Psi(w) &{\mathbb E}\Phi(z)\Psi(w) \\ {\mathbb E}\bar\Phi(z)\bar\Phi(w) & {\mathbb E}\bar\Phi(z)\Phi(w) & {\mathbb E}\bar\Phi(z)\bar\Psi(w) & {\mathbb E}\bar\Phi(z)\Psi(w) \\ {\mathbb E}\Psi(z)\bar\Phi(w) & {\mathbb E}\Psi(z)\Phi(w) & {\mathbb E}\Psi(z)\bar\Psi(w) & {\mathbb E}\Psi(z)\Psi(w) \\ {\mathbb E}\bar\Psi(z)\bar\Phi(w) & {\mathbb E}\bar\Psi(z)\Phi(w) & {\mathbb E}\bar\Psi(z)\bar\Psi(w) & {\mathbb E}\bar\Psi(z)\Psi(w) \end{pmatrix} \\ =\begin{pmatrix} \log\frac{1}{1-z\bar w} & 0 & {\mathbb E}\Phi(z)\bar\Psi(w) &{\mathbb E}\Phi(z)\Psi(w) \\ 0 & \log\frac{1}{1-\bar z w} & {\mathbb E}\bar\Phi(z)\bar\Psi(w) & {\mathbb E}\bar\Phi(z)\Psi(w) \\ {\mathbb E}\Psi(z)\bar\Phi(w) & {\mathbb E}\Psi(z)\Phi(w) & \log\frac1{1-z\bar w} & 0 \\ {\mathbb E}\bar\Psi(z)\bar\Phi(w) & {\mathbb E}\bar\Psi(z)\Phi(w) & 0 & \log\frac{1}{1-\bar z w} \end{pmatrix}, \label{eq-4X4} \end{multline} and the associated $8\times8$ matrix \begin{equation} \begin{pmatrix} \mathbb{k}_{4\times4}[\Phi,\Psi](z,z)&\mathbb{k}_{4\times4}[\Phi,\Psi](z,w) \\ \mathbb{k}_{4\times4}[\Phi,\Psi](z,w)^\ast &\mathbb{k}_{4\times4}[\Phi,\Psi](w,w) \end{pmatrix} \label{eq-4X4.1} \end{equation} is positive semidefinite.
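For later reference, we record that in terms of the coefficients in \eqref{eq-Gauss2.1}, the mixed correlations ${\mathbb E}\Phi(z)\bar\Psi(w)$ and ${\mathbb E}\Phi(z)\Psi(w)$ have the expansions \[ {\mathbb E}\Phi(z)\bar\Psi(w)=\sum_{j,k=1}^{+\infty} \frac{\langle\alpha_j,\beta_k\rangle_\Omega}{\sqrt{jk}}\,z^j\bar w^k,\qquad {\mathbb E}\Phi(z)\Psi(w)=\sum_{j,k=1}^{+\infty} \frac{\langle\alpha_j,\bar\beta_k\rangle_\Omega}{\sqrt{jk}}\,z^jw^k, \qquad z,w\in{\mathbb D}, \] where both double series converge absolutely, since $|\langle\alpha_j,\beta_k\rangle_\Omega|\le1$ and $|\langle\alpha_j,\bar\beta_k\rangle_\Omega|\le1$ by the Cauchy--Schwarz inequality.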
Note that although there are eight unknown entries in \eqref{eq-4X4}, in fact only two are needed, as clearly, \[ {\mathbb E}(\bar\Phi(z)\bar\Psi(w))= \overline{{\mathbb E}(\Phi(z)\Psi(w))},\quad {\mathbb E}(\bar\Phi(z)\Psi(w))=\overline{{\mathbb E}(\Phi(z)\bar\Psi(w))}, \] and the remaining four only involve exchanging the variables $z$ and $w$. So we need only be concerned with the quantities \begin{equation} {\mathbb E}(\Phi(z)\bar\Psi(w))\quad\text{and}\quad {\mathbb E}(\Phi(z)\Psi(w)). \label{eq-correl1.01} \end{equation} In a sense they complement each other, as we see below. \begin{prop} We have that \[ |{\mathbb E}\Phi(z)\bar\Psi(w)|+|{\mathbb E}\Phi(z)\Psi(w)|\le \bigg(\log\frac{1}{1-|z|^2}\bigg)^{\frac12} \bigg(\log\frac{1}{1-|w|^2}\bigg)^{\frac12}, \qquad z,w\in{\mathbb D}. \] \label{prop-triang1} \end{prop} Since for a given point with $|z|=|w|$ each of the two terms on the left-hand side may reach up to the right-hand side bound, the estimate tells us they cannot do so simultaneously. The proof of this estimate is presented in Subsection \ref{subsec-thm-fund1}. \subsection{The fundamental integral estimate} The following is our basic estimate of the correlations. \begin{thm} For $a,b\in{\mathbb C}$, we have the estimate \[ \int_{\mathbb D}\big|a w{\mathbb E}\Phi(z)\Psi'(w)+b\bar w{\mathbb E}\Phi(z)\bar\Psi'(w)\big|^2 \frac{\mathrm{d} A(w)}{|w|^2}\le (|a|^2+|b|^2)\log\frac{1}{1-|z|^2},\qquad z\in{\mathbb D}. \] \label{thm-fund1} \end{thm} This may be interpreted as an estimate of the radial derivative (with respect to $w$) of the harmonic function \[ a{\mathbb E}\Phi(z)\Psi(w)+b{\mathbb E}\Phi(z)\bar\Psi(w). \] Indeed, if $F$ is holomorphic in ${\mathbb D}$, then its radial derivative is \[ \partial_r F(r\mathrm{e}^{\mathrm{i}\theta})=\mathrm{e}^{\mathrm{i}\theta}F'(r\mathrm{e}^{\mathrm{i}\theta}), \] so that the estimate of Theorem \ref{thm-fund1} asserts that ($\partial_{r(w)}$ is the radial derivative in the $w$ variable) \begin{equation} \int_{\mathbb D}\big|\partial_{r(w)}\big(a {\mathbb E}\Phi(z)\Psi(w)+b {\mathbb E}\Phi(z)\bar\Psi(w)\big)\big|^2 \mathrm{d} A(w) \le (|a|^2+|b|^2)\log\frac{1}{1-|z|^2},\qquad z\in{\mathbb D}. \label{eq-fundest2} \end{equation} Interesting estimates are obtained for instance when $(a,b)=(1,0)$ and $(a,b)=(0,1)$. We shall mainly focus on the first of these, when $(a,b)=(1,0)$. We defer the proof of this result to Section \ref{sec-hilbspaces}. \subsection{Growth of correlations in the mean along diagonals} We are interested in the behavior of the correlations \[ {\mathbb E}\Phi(z)\Psi(w),\quad{\mathbb E}\Phi(z)\bar\Psi(w) \] as $z,w\in{\mathbb D}$ approach the unit circle ${\mathbb T}$. The first one we will refer to as the \emph{analytic correlation}, and the second the \emph{sesquianalytic correlation}. We may study the growth behavior by looking along complex lines through the origin $w=\lambda z$ for some parameter $\lambda\in{\mathbb C}$, in which case our correlations are \begin{equation} {\mathbb E}\Phi(z)\Psi(\lambda z),\quad{\mathbb E}\Phi(z)\bar\Psi(\lambda z). \label{eq-twocorr1} \end{equation} The alternative study of conjugate-linear lines $w=\mu\bar z$ with $\mu\in{\mathbb C}$ is completely analogous and essentially only corresponds to reversing the order of these correlations (in the sense that $w\mapsto\bar\Psi(\mu\bar w)$ is a GAF). For this reason we will not consider such conjugate-linear lines further.
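By way of illustration, if $\Psi=\Phi$, then along the line $w=\lambda z$ we have \[ {\mathbb E}\Phi(z)\bar\Psi(\lambda z)=\log\frac{1}{1-\bar\lambda|z|^2}, \qquad {\mathbb E}\Phi(z)\Psi(\lambda z)=0, \] so that for $|\lambda|=1$ the sesquianalytic correlation remains bounded as $|z|\to1^-$ unless $\lambda=1$.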
When $|\lambda|<1$ the process $\Phi(z)$ dominates in the correlations since $\Psi(\lambda z)$ is analytic in the disk ${\mathbb D}(0,|\lambda|^{-1})$, while if $|\lambda|>1$ instead the process $\Psi(\lambda z)$ dominates. The most interesting instance seems to be the balanced case when $|\lambda|=1$, in which case the line $w=\lambda z$ might be called a \emph{generalized diagonal}. For $|\lambda|=1$, the process $\Psi(\lambda z)$ is just another copy of the $\calD_0$-GAF, so as long as $\lambda$ is fixed we might as well consider $\lambda=1$. So the study of \eqref{eq-twocorr1} for fixed $\lambda$ with $|\lambda|=1$ reduces to the diagonal case \begin{equation} {\mathbb E}\Phi(z)\Psi(z),\quad{\mathbb E}\Phi(z)\bar\Psi(z). \label{eq-twocorr2} \end{equation} We note that by Proposition \ref{prop-triang1}, \begin{equation} |{\mathbb E}\Phi(z)\Psi(z)|+|{\mathbb E}\Phi(z)\bar\Psi(z)|\le\log\frac{1}{1-|z|^2}. \label{eq-twocorr3} \end{equation} Some examples should elucidate which term, if any, may be dominant on the left-hand side. \begin{rem} We supply some examples which help us understand the size of the two contributions on the left-hand side of \eqref{eq-twocorr3}. \noindent (a) If $\Psi=\Phi$, then \[ {\mathbb E}\Phi(z)\Psi(z)={\mathbb E}(\Phi(z)^2)=0,\quad {\mathbb E}\Phi(z)\bar\Psi(z)={\mathbb E}|\Phi(z)|^2=\log\frac{1}{1-|z|^2}. \] In this case we have \emph{equality} in \eqref{eq-twocorr3}, and on the left-hand side the first term vanishes, while the second is dominant. \noindent{(b)} If $\Psi(z)$ and $\Phi(z)$ are stochastically independent, we have \[ {\mathbb E}\Phi(z)\bar\Psi(z)= {\mathbb E}\Phi(z)\Psi(z)=0, \] so that both contributions to the left-hand side of \eqref{eq-twocorr3} collapse. \noindent{(c)} Consider $\Psi(z)=\bar\Phi(\bar z)$, when \[ {\mathbb E}\Phi(z)\Psi(z)={\mathbb E}\Phi(z)\bar\Phi(\bar z)=\log\frac{1}{1-z^2}, \quad {\mathbb E}\Phi(z)\bar\Psi(z)={\mathbb E}\Phi(z)\Phi(\bar z)=0. \] So at least pointwise, ${\mathbb E}\Phi(z)\Psi(z)$ may be the dominant contribution in \eqref{eq-twocorr3}. \label{rem-1.1} \end{rem} The example in Remark \ref{rem-1.1}(a) shows that the sesquianalytic correlation ${\mathbb E}\Phi(z)\bar\Psi(z)$ may be maximally big in the sense of modulus \emph{everywhere in the disk} ${\mathbb D}$. However, the example in Remark \ref{rem-1.1}(c) only says that the analytic correlation ${\mathbb E}\Phi(z)\Psi(z)$ may be maximal in modulus along the radius $[0,1[$ emanating from the origin. This leaves open the possibility of bounding $L^2$ means along concentric circles. The fact that ${\mathbb E}\Phi(z)\Psi(z)$ represents a holomorphic function in ${\mathbb D}$ limits to some extent the possible growth of the function. However, from the work of Abakumov and Doubtsov \cite{AbDoub}, we see that this is not a very strong restriction, and effectively knowing that ${\mathbb E}\Phi(z)\Psi(z)$ is holomorphic does not add much to the growth control beyond the pointwise bound \eqref{eq-twocorr3}, which may be understood as belonging to a Korenblum-type growth space. For some other aspects of the growth behavior of functions in Korenblum-type spaces, see \cite{BoLyuMalTho}. To measure the growth of functions in the Bloch space, the asymptotic variance has been studied (see \cite{McM1}, \cite{AIPP}, \cite{IvriiQC}, \cite{Hed1}).
We recall that the \emph{Bloch space} $\calB({\mathbb D})$ consists of all complex-valued holomorphic functions $f:{\mathbb D}\to{\mathbb C}$ such that \[ \|f\|_{\calB}:=\sup_{z\in{\mathbb D}}(1-|z|^2)|f'(z)|<+\infty. \] Naturally, this defines a seminorm on $\calB({\mathbb D})$, as constants get seminorm value $0$. The \emph{asymptotic variance} of a function $f\in\calB({\mathbb D})$ is the quantity \begin{equation} \sigma(f)^2:=\limsup_{r\to1^-} \frac{1}{\log\frac{1}{1-r^2}}\int_{\mathbb T}|f(r\zeta)|^2\mathrm{d} s(\zeta). \label{eq-asvar1} \end{equation} At least in dynamical situations, it captures very well the boundary growth of the given function. From a probabilistic point of view, it is based on thinking of the evolution of the function $r\mapsto f(r\zeta)$ as a Brownian motion in time $\log\frac{1+r}{1-r}\sim\log\frac1{1-r^2}$. The analytic correlation $f(z)={\mathbb E}\Phi(z)\Psi(z)$ need not be an element of the Bloch space $\calB({\mathbb D})$. However, it has a finite asymptotic variance nevertheless. \begin{thm} For all jointly Gaussian processes $(\Phi,\Psi)$ consisting of $\calD_0$-GAFs, we have the estimate \[ \int_{\mathbb T} |{\mathbb E}\Phi(r\zeta)\Psi(r\zeta)|^2\mathrm{d} s(\zeta) \le 2r^2\log\frac{1}{1-r^2}+r^2. \] \label{thm-fund2} \end{thm} This means that in the $L^2$-average sense on concentric circles, the function ${\mathbb E}\Phi(z)\Psi(z)$ spends most of its time on $|z|=r$ with values bounded by a constant times \emph{the square root of} $\log\frac{1}{1-r^2}$, which is of course much smaller than what the bound \eqref{eq-twocorr3} would allow for. In terms of the random variables $\alpha_j,\beta_k$, the left-hand side expression in the above theorem equals \begin{equation} \int_{\mathbb T} |{\mathbb E}\Phi(r\zeta)\Psi(r\zeta)|^2\mathrm{d} s(\zeta) =\sum_{l=2}^{+\infty}r^{2l}\bigg| \sum_{j,k:j+k=l}(jk)^{-\frac12}\langle\alpha_j,\bar\beta_k\rangle_\Omega\bigg|^2. \label{eq-asvar1.01} \end{equation} It is natural to wonder if the bound $\sigma(f)^2\le2$ for the asymptotic variance of the analytic correlation $f(z)={\mathbb E}\Phi(z)\Psi(z)$ in Theorem \ref{thm-fund2} is optimal. By a construction due to Zachary Chase \cite{Chase}, we have the following. \begin{thm} {\rm(Chase)} There is a permutation $\pi:{\mathbb Z}_+\to{\mathbb Z}_+$ such that if $\beta_j=\bar \alpha_{\pi(j)}$ and $f(z)={\mathbb E}\Phi(z)\Psi(z)$, we have $\sigma(f)^2\ge 1.72$. \label{thm-1.72} \end{thm} So, it remains to investigate the universal quantity $\Sigma^2_{\mathrm{oper}}:=\sup_f\sigma(f)^2$, where $f$ runs over all possible analytic correlations ${\mathbb E}\Phi(z)\Psi(z)$. The subscript refers to the relation with norm contractive operators on $L^2({\mathbb D})$ described in Subsection \ref{subsec-symbols1.1} below. \subsection{Orthonormal systems in separable Hilbert space} In terms of the inner products $\langle\alpha_j,\bar \beta_k\rangle_\Omega$, the condition that the elements belong to a Gaussian Hilbert space is inconsequential and may be removed. \begin{cor} Let $\{x_1,x_2,x_3,\ldots\}$ and $\{y_1,y_2,y_3,\ldots\}$ be orthonormal systems in a separable complex Hilbert space $\calH$. Then, for $0\le r<1$, we have the estimate \begin{equation*} \sum_{l=2}^{+\infty}r^{2l}\bigg|\sum_{j,k:j+k=l}(jk)^{-\frac12}\langle x_j,y_k\rangle_{\calH}\bigg|^2\le 2\log\frac{\mathrm{e}}{1-r^2}.
\end{equation*} \label{cor-Hilb1} \end{cor} One possible interpretation of the corollary is that \emph{on average}, the sums \[ \bigg|\sum_{j,k:j+k=l}\bigg(\frac{l}{jk}\bigg)^{\frac12}\langle x_j, y_k\rangle_\calH\bigg|^2 \] are bounded by $2$. \subsection{The analytic correlation and Dirichlet operator symbols} \label{subsec-symbols1.1} For $z\in{\mathbb D}$, let $\mathrm s_z$ denote the Szeg\H{o} kernel \begin{equation} \mathrm s_z(\zeta):=\frac{1}{1-\bar z \zeta}. \label{eq-szego1} \end{equation} For functions in the Bergman space $A^2({\mathbb D})$, taking the inner product with $\mathrm s_z$ is the same as finding the average \begin{equation} \langle f,\mathrm s_z\rangle_{\mathbb D}=\int_0^1 f(zt)\mathrm{d} t,\qquad f\in A^2({\mathbb D}). \end{equation} \begin{defn} Let ${\mathbf T}$ be a bounded ${\mathbb C}$-linear operator on $L^2({\mathbb D})$. The \emph{Dirichlet operator symbol} associated with ${\mathbf T}$ is the function \begin{equation*} {\mathbb Z}symb[{\mathbf T}](z,w):=\langle {\mathbf T}(\bar{\mathrm s}_z),\mathrm s_w\rangle_{\mathbb D}, \qquad z,w\in{\mathbb D}, \end{equation*} which is holomorphic in ${\mathbb D}^2$, with diagonal restriction \begin{equation*} \oslash {\mathbb Z}symb[{\mathbf T}](z)=\langle {\mathbf T}(\bar{\mathrm s}_z),\mathrm s_z\rangle_{\mathbb D}, \qquad z\in{\mathbb D}. \end{equation*} \end{defn} \begin{rem} If ${\mathbf T}={\mathbf M}_\mu$, the operator of multiplication by $\mu\in L^\infty({\mathbb D})$, then \begin{equation} \oslash {\mathbb Z}symb[{{\mathbf M}_\mu}](z)= \langle {\mathbf M}_\mu(\bar{\mathrm s}_z),\mathrm s_z\rangle_{\mathbb D}=\int_{\mathbb D} \frac{\mu(\xi)\mathrm{d} A(\xi)}{(1-z\bar\xi)^2},\qquad z\in{\mathbb D}, \label{eq-Bergman1} \end{equation} which shows that $\oslash{\mathbb Z}symb[{\mathbf T}]$ is a generalization of the Bergman projection to the setting of general bounded operators. There is a way to write ${\mathbb Z}symb[{\mathbf T}]$ which makes the analogy with \eqref{eq-Bergman1} clearer: \[ {\mathbb Z}symb[{\mathbf T}](z,w)=\langle{\mathbf T},\mathrm s_z\otimes\mathrm s_w\rangle_{\mathrm{tr}}. \] Here, we use the bilinear tensor product $(f\otimes g)(h)=\langle h,\bar g\rangle f$, and the notation $\langle \mathbf{A},\mathbf{B}\rangle_{\mathrm{tr}}=\mathrm{tr}(\mathbf{A}\bar{\mathbf{B}})= \mathrm{tr}(\bar{\mathbf{B}}\mathbf{A})$ for the trace inner product. \end{rem} The next result characterizes the analytic correlations ${\mathbb E}\Phi(z)\Psi(w)$ as the Dirichlet symbols associated with contractions on $L^2({\mathbb D})$. \begin{thm} \noindent{\rm(a)} Given a pair of jointly Gaussian $\calD_0$-GAFs $(\Phi(z),\Psi(z))$ there exists a norm contraction ${\mathbf T}:L^2({\mathbb D})\to L^2({\mathbb D})$ such that \begin{equation*} {\mathbb E}\Phi(z)\Psi(w) =zw\langle{\mathbf T} \bar{\mathrm s}_z,\mathrm s_w\rangle_{\mathbb D},\qquad z,w\in{\mathbb D}. \leqno{\mathrm{(i)}} \end{equation*} \noindent{\rm(b)} Given a norm contraction ${\mathbf T}$ on $L^2({\mathbb D})$, there exists a pair of jointly Gaussian $\calD_0$-GAFs $(\Phi(z),\Psi(z))$ such that {\rm(i)} holds. \label{thm-transfer1} \end{thm} In particular, we see that in the sense of the theorem, the analytic correlations ${\mathbb E}\Phi(z)\Psi(w)$ may be identified with the Dirichlet operator symbols of contractions on $L^2({\mathbb D})$: \[ {\mathbb E}\Phi(z)\Psi(w)=zw {\mathbb Z}symb[{\mathbf T}](z,w).
\] \subsection{Analytic correlations and the Bloch space} We recall that the \mathrm{e}mph{Bloch space} $\calB({\mathbb D})$ consists of all complex-valued holomorphic functions $f:{\mathbb D}\to{\mathbb C}$ such that \[ \|f\|_{\calB}:=\sup_{z\in{\mathbb D}}(1-|z|^2)|f'(z)|<+\infty. \] This defines a seminorm on $\calB({\mathbb D})$, since constants get seminorm $0$. \begin{defn} The \mathrm{e}mph{mock-Bloch space} $\calB^{\text{mock}}({\mathbb D})$ is the space of functions \[ \big\{\oslash{\mathbb Z}symb[{\mathbf T}]:\,\,{\mathbf T}\,\,\, \text{is a bounded operator on}\,\,\,L^2({\mathbb D})\big\}. \] \mathrm{e}nd{defn} This mock-Bloch space is naturally endowed with a norm, which equals the infimum of $\|{\mathbf T}\|$ over all operators ${\mathbf T}$ representing the same symbol $\oslash{\mathbb Z}symb[{\mathbf T}]$. All functions in $\calB({\mathbb D})$ are in $\calB^{\text{mock}}({\mathbb D})$. This is well known and easy to see using multiplication operators ${\mathbf M}_\mu$, as in \cite{Hed1} (compare with \mathrm{e}qref{eq-Bergman1}). On the other hand, is $\calB^{\text{mock}}({\mathbb D})$ contained in $\calB({\mathbb D})$? This is answered in the negative by the following. \begin{thm} There exists a function $f\in\calB^{\mathrm{mock}}({\mathbb D})$ which is not in $\calB({\mathbb D})$. \label{thm-notbloch} \mathrm{e}nd{thm} It is known that $\calB({\mathbb D})$ is maximal among M\"obius-invariant spaces \cite{RubTim}, so $\calB^{\text{mock}}({\mathbb D})$ cannot be M\"obius-invariant in the standard sense. For a M\"obius automorphism $\phi:{\mathbb D}\to{\mathbb D}$, let \begin{equation} {\mathbf U}_\phi f(z):=\phi'(z)f\circ\phi(z),\quad \bar{\mathbf U}_\phi f(z):=\bar\phi'(z)f\circ\phi(z), \label{eq-unitaries1.2} \mathrm{e}nd{equation} be the associated unitary transformations of $L^2({\mathbb D})$. \begin{thm} For a M\"obius automorphism $\phi:{\mathbb D}\to{\mathbb D}$, and a bounded operator ${\mathbf T}$ on $L^2({\mathbb D})$, we write ${\mathbf T}_\phi:={\mathbf U}_\phi{\mathbf T}\bar{\mathbf U}_\phi^*$, which has the same norm as ${\mathbf T}$. If we write $\mathcal{Q}[{\mathbf T}](z,w):=zw{\mathbb Z}symb[{\mathbf T}](z,w)$ and $\oslash\mathcal{Q}[{\mathbf T}](z):=z^2 {\mathbb Z}symb[{\mathbf T}](z,z)$, we then have the identity \[ \oslash\mathcal{Q}[{{\mathbf T}_\phi}](z)=\oslash \mathcal{Q}[{\mathbf T}]\circ\phi(z)- \mathcal{Q}[{\mathbf T}](\phi(z),\phi(0))-\mathcal{Q}[{\mathbf T}](\phi(0),\phi(z))+ \oslash \mathcal{Q}[{\mathbf T}](\phi(0)). \] \label{thm-mockbloch} \mathrm{e}nd{thm} Typically, in M\"obius-invariant spaces, the correction after a M\"obius transform amounts to the subtraction of an appropriate constant. Here, we instead subtract a function in the Dirichlet space. \begin{rem} The mock-Bloch space is intimately connected with the Hankel forms on the Dirichlet space studied by Arcozzi, Rochberg, Sawyer, and Wick (see Subsection 6.2 of \cite{ARSW}). In a sense, that space of Hankel forms is predual to the mock-Bloch space. To be more precise, let $b(z)=\sum_{l=2}^{+\infty}\hat b(l)z^l$, and observe that \[ \int_{\mathbb D} \oslash\mathcal{Q}[{\mathbf T}](z)\bar b'(z)\mathrm{d} A(z)= \sum_{j,k=1}^{+\infty}\frac{\overline{\hat b(j+k)}}{\sqrt{jk}}\,\langle {\mathbf T} \bar f_j,f_k\rangle_{\mathbb D}, \] where $f_j(z)=j^{\frac12}z^{j-1}$, $j=1,2,3,\ldots$, is the standard orthonormal basis in $A^2({\mathbb D})$. This means that $b'$ is in the predual space of the mock-Bloch space if and only if the infinite matrix $\{(jk)^{-1/2}\hat b(j+k)\}_{j,k=1}^{+\infty}$ is trace class. 
This supplies the connexion with Theorem 8 of \cite{ARSW}. \mathrm{e}nd{rem} \subsection{Symbols of Grunsky operators} Let $\varphi:{\mathbb D}\to{\mathbb C}$ be a univalent function. In other words, $\varphi$ is a conformal mapping onto a simply connected domain. The associated \mathrm{e}mph{Grunsky operator} ${\boldsymbol\Gamma}_\varphi$ is given by the expression \begin{equation} {\boldsymbol\Gamma}_\varphi f(z):=\int_{\mathbb D}\bigg( \frac{\varphi'(z)\varphi'(w)}{(\varphi(z)-\varphi(w))^2} -\frac{1}{(z-w)^2}\bigg)\,f(w)\mathrm{d} A(w),\qquad z\in{\mathbb D}. \label{eq-Grunskyop1} \mathrm{e}nd{equation} It is well-known that ${\boldsymbol\Gamma}_\varphi$ is a norm contraction on $L^2({\mathbb D})$, and it maps into the Bergman space $A^2({\mathbb D})$. This contractiveness is called the \mathrm{e}mph{Grunsky inequalities}, and in this form it was studied in, e.g., \cite{BarHed}. For a given $\varphi$, we may consider instead the normalized mapping \[ \tilde\varphi(z)=\frac{\varphi(z)-\varphi(0)}{\varphi'(0)}, \] which has $\tilde\varphi(0)=0$ and $\tilde\varphi'(0)=1$. It is easy to see that ${\boldsymbol\Gamma}_{\tilde\varphi}={\boldsymbol\Gamma}_\varphi$, so we might as well replace $\varphi$ by its normalized variant $\tilde\varphi$, and require of $\varphi$ that $\varphi(0)=0$ and $\varphi'(0)=1$. The Dirichlet symbol associated with ${\boldsymbol\Gamma}_\varphi$ is then \begin{equation} \mathcal{Q}[{\boldsymbol\Gamma}_\varphi](z,w)=zw\,{\mathbb Z}symb[{\boldsymbol\Gamma}_\varphi](z,w)= \log\frac{zw(\varphi(z)-\varphi(w))}{(z-w)\varphi(z)\varphi(w)}, \qquad(z,w)\in{\mathbb D}^2, \label{eq-WboldG} \mathrm{e}nd{equation} with diagonal restriction \[ \oslash \mathcal{Q}[{\boldsymbol\Gamma}_\varphi](z)=z^2\,\oslash {\mathbb Z}symb[{\boldsymbol\Gamma}_\varphi](z) =\log\frac{z^2\varphi'(z)}{(\varphi(z))^2},\qquad z\in{\mathbb D}. \] We want to characterize the Dirichlet symbols of the above form \mathrm{e}qref{eq-WboldG} among all Dirichlet symbols $\mathcal{Q}[{\mathbf T}](z,w)$ of norm contractions ${\mathbf T}$ on $L^2({\mathbb D})$. \begin{thm} A function $Q=Q(z,w)$ which is holomorphic on ${\mathbb D}^2$ is of the form $\mathcal{Q}[{\boldsymbol\Gamma}_\varphi](z,w)$ for a normalized univalent function $\varphi:{\mathbb D}\to{\mathbb C}$ if and only if \noindent{\rm (a)} $Q(0,w)\mathrm{e}quiv0$ and $Q(z,0)\mathrm{e}quiv0$, and \noindent{\rm (b)} $Q=Q(z,w)$ solves the nonlinear wave equation \[ \partial_z\partial_w Q+(\partial_z Q)(\partial_w Q) -\frac{z^2\partial_z Q-w^2\partial_w Q}{zw(z-w)}=0. \] \label{thm-NLW} \mathrm{e}nd{thm} \begin{rem} This result ties in nicely with deformation theory. Let ${\mathbf L}$ denote the linear wave operator \[ {\mathbf L} Q(z):=\partial_z\partial_w Q -\frac{z^2\partial_z Q-w^2\partial_w Q}{zw(z-w)}. \] Let $\lambda\in{\mathbb D}$, and suppose we look for an analytic family of solutions $\lambda\mapsto Q_\lambda$ to the above nonlinear wave equation ${\mathbf L} Q+(\partial_z Q)(\partial_w Q)=0$. If $Q_0\mathrm{e}quiv0$, we Taylor expand $Q_\lambda=\sum_{j=1}^\infty\lambda^j\hat Q_j$ and see that the the nonlinear wave equation becomes a sequence of linear PDEs for the coefficient functions $\hat Q_j$. First, $\hat Q_1$ solves the homogeneous equation ${\mathbf L} \hat Q_1=0$, while for $j=2,3,4,\ldots$, $\hat Q_j$ solves an inhomogeneous equation ${\mathbf L}\hat Q_j=F$, where $F$ is a nonlinear expression involving the lower order coefficient functions $\hat Q_k$ for $1\le k<j$. 
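To illustrate this picture with a concrete example (which we include here only for orientation, and which is not needed elsewhere), consider the dilated Koebe functions $\varphi_\lambda(z):=z(1-\lambda z)^{-2}$, $\lambda\in{\mathbb D}$, which are normalized and univalent. Since $\varphi_\lambda(z)-\varphi_\lambda(w)=(z-w)(1-\lambda^2zw)(1-\lambda z)^{-2}(1-\lambda w)^{-2}$, the symbol formula \eqref{eq-WboldG} gives
\[
Q_\lambda(z,w):=\mathcal{Q}[{\boldsymbol\Gamma}_{\varphi_\lambda}](z,w)=\log(1-\lambda^2 zw),
\]
an analytic family of solutions of the nonlinear wave equation of Theorem \ref{thm-NLW} with $Q_0\equiv0$. Here $\hat Q_1\equiv0$ and $\hat Q_2(z,w)=-zw$, and a direct differentiation confirms that ${\mathbf L}\hat Q_2=0$, in line with the hierarchy just described (the forcing term vanishes because $\hat Q_1\equiv0$).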
\mathrm{e}nd{rem} \begin{rem} It is a matter of substantial interest whether $\Sigma_{\mathrm{conf}}^2:= \sup_f \sigma(f)^2>1$, where the supremum is taken over all $f$ of the form $f=\oslash\mathcal{Q}[{\boldsymbol\Gamma}_\varphi]$ for a normalized univalent function $\varphi:{\mathbb D}\to{\mathbb C}$. This question is related to the issue of whether ${\boldsymbol\Gamma}_\varphi$ is special among the contractions, which it of course is, in view of Theorem \ref{thm-NLW}. On the other hand, for general contractions, we have Chase's construction of Theorem \ref{thm-1.72}, which gives a rather big asymptotic variance $\approx1.72$. \mathrm{e}nd{rem} \subsection{Acknowledgements and a comment} This paper is the result of a joint project with Serguei Shimorin, of which a preliminary version was available earlier \cite{HedShim3}. The present treatment of the subject matter has evolved rather substantially since the preliminary writeup. Tragically, Serguei passed away in July 2016 as the result of a hiking accident. We thank several colleagues who helped organize a conference in his honor at the Mittag-Leffler Institute in June, 2018. Among the organizers were Catherine B\'en\'eteau, Dmitry Khavinson, Mihai Putinar, and Alan Sola. We also want to thank Eero Saksman for a conversation on the fact that the mock-Bloch space is bigger than the Bloch space, Oleg Ivrii and Bassam Fayad for their interest in the asymptotic variance, and Zachary Chase for his contribution with the construction of a permutation matrix with somewhat extremal properties. \section{The duality induced by the bilinear form of GAF} \subsection{The GAF as a duality} Let us for the moment write $\Phi_{\boldsymbol\alpha}(z)$ for the $\calD_0$-Gaussian analytic function given by \mathrm{e}qref{eq-Gauss1}, having in mind the notation $\boldsymbol\alpha:=(\alpha_1,\alpha_2,\alpha_3,\ldots)$ for the Gaussian vector of elements from $\Gspace$. The closure in $\Gspace$ of the linear span of the vectors $\alpha_j$, $j=1,2,3,\ldots$, will be denoted by $\mathfrak{A}$. We shall also need the closure in $\Gspace$ of the linear span of the vectors $\bar\alpha_j$, $j=1,2,3,\ldots$, and we denote it by $\mathfrak{A}_*$. The independence and symmetry of these random variables mean that the vectors $\alpha_j$ form an orthonormal system in $\Gspace$, and that $\mathfrak{A}$ is orthogonal to $\mathfrak{A}_*$. Continuing along the same line of thinking, we would write $\Phi_{\boldsymbol\beta}(z)$ for $\Psi(z)$, the second copy of the same Gaussian process. Now, if ${\mathbf M}$ is a bounded linear operator on $\mathfrak{A}$, then ${\mathbf M}\alpha_j\in \mathfrak{A}$ and hence has a convergent expansion in basis vectors: \[ {\mathbf M} \alpha_j=\sum_{k=1}^{+\infty}M_{j,k}\alpha_k, \] where the sequence $k\mapsto M_{j,k}$ is in $l^2$. If we write ${\mathbf M}\boldsymbol\alpha= ({\mathbf M}\alpha_1,{\mathbf M}\alpha_2,{\mathbf M}\alpha_3,\ldots)$, we may speak of a Gaussian analytic function process \begin{equation} \Phi_{{\mathbf M}\boldsymbol\alpha}(z)=\sum_{j=1}^{+\infty}{\mathbf M}\alpha_j\, e_j(z)=\sum_{j=1}^{+\infty} \sum_{k=1}^{+\infty}M_{j,k}\alpha_k\,e_j(z)=\sum_{k=1}^{+\infty}\alpha_k \sum_{j=1}^{+\infty}M_{j,k} \,e_j(z)=\sum_{k=1}^{+\infty}\alpha_k\, {\mathbf M}^\dagger e_k(z), \label{eq-Gauss-M} \mathrm{e}nd{equation} where $e_j(z)=j^{-\frac12}z^j$ as before. 
Moreover, the \mathrm{e}mph{GAF transpose of} ${\mathbf M}$, given by \begin{equation} {\mathbf M}^\dagger e_k(z):=\sum_{j=1}^{+\infty}M_{j,k} e_j(z) \label{eq-Gauss-MM} \mathrm{e}nd{equation} defines a bounded linear mapping on $\calD_0({\mathbb D})$, as it just corresponds to the transpose of the matrix for ${\mathbf M}$, and shifting the basis from that of the Gaussian space $\mathfrak{A}$ to that of $\calD_0({\mathbb D})$. This way we have a natural transpose mapping ${\mathbf M}\to{\mathbf M}^\dagger$, and it is perhaps also natural to let its inverse be denoted the same way, so that $({\mathbf M}^{\dagger})^{\dagger}={\mathbf M}$. Typically, \mathrm{e}qref{eq-Gauss-M} will define a Gaussian analytic function with a correlation kernel which is different from that of $\Phi_{\boldsymbol\alpha}(z)$. Indeed, while ${\mathbb E}\Phi_{{\mathbf M}\boldsymbol\alpha}(z)\Phi_{{\mathbf M}\boldsymbol\alpha}(w)=0$ automatically since $\mathfrak{A}$ is orthogonal to $\mathfrak{A}_*$, we see that \begin{equation} {\mathbb E}\Phi_{{\mathbf M}\boldsymbol\alpha}(z)\bar\Phi_{{\mathbf M}\boldsymbol\alpha}(w)=\sum_{j,k=1}^{+\infty} \langle{\mathbf M}\alpha_j,{\mathbf M}\alpha_k\rangle_\Omega\, e_j(z)\bar e_k(w), \label{eq-Gauss-M2} \mathrm{e}nd{equation} which need not coincide with the corresponding correlation for $\Phi_{\boldsymbol\alpha}$. However, in the special case when the restriction ${\mathbf M}|_\mathfrak{A}={\mathbf U}$ is unitary on $\mathfrak{A}$, so that ${\mathbf U}^*{\mathbf U}=\mathbf{I}$ on $\mathfrak{A}$, \mathrm{e}qref{eq-Gauss-M2} gives us \begin{equation} {\mathbb E}\Phi_{{\mathbf U}\boldsymbol\alpha}(z)\bar\Phi_{{\mathbf U}\boldsymbol\alpha}(w)=\sum_{j,k=1}^{+\infty} \langle{\mathbf U}^*{\mathbf U}\alpha_j,\alpha_k\rangle_\Omega\, e_j(z)\bar e_k(w) =\sum_{j=1}^{+\infty}e_j(z)\bar e_j(w)=\log\frac{1}{1-z\bar w}, \label{eq-Gauss-M3} \mathrm{e}nd{equation} that is, the same correlation structure as for $\Phi_{\boldsymbol\alpha}(z)$. In other words, $\Phi_{{\mathbf U}\boldsymbol\alpha}$ is another copy of the $\calD_0$-GAF. When ${\mathbf U}:\mathfrak{A}\to\mathfrak{A}$ is unitary, its GAF transpose ${\mathbf U}^\dagger$ acts unitarily on $\calD_0({\mathbb D})$, and the functions ${\mathbf U}^\dagger e_j(z)$ form an orthonormal basis for $\calD_0({\mathbb D})$. Naturally, this goes the other way around as well, that is, if a unitary transformation ${\mathbf V}$ on $\calD_0({\mathbb D})$ is given, this defines another unitary transformation ${\mathbf V}^\dagger$ on $\mathfrak{A}$ via \mathrm{e}qref{eq-Gauss-M} with ${\mathbf V}$ in place of ${\mathbf M}^\dagger$. An important instance is when the unitary transformation on $\calD_0({\mathbb D})$ is generated by a M\"obius automorphism $\phi$ of the disk ${\mathbb D}$. If $\phi:{\mathbb D}\to{\mathbb D}$ is a M\"obius automorphism, then the operator ${\mathbf V}_\phi$ given by \[ {\mathbf V}_\phi f(z):=f\circ\phi(z)-f\circ\phi(0) \] is unitary on $\calD_0({\mathbb D})$ and therefore corresponds to a unitary transformation ${\mathbf V}_\phi^\dagger$ acting on $\mathfrak{A}$ such that \begin{equation} \Phi_{{\mathbf V}_\phi^\dagger\boldsymbol\alpha}(z)=\sum_{j=1}^{+\infty}{\mathbf V}_\phi^\dagger\alpha_j\,e_j(z)= \sum_{j=1}^{+\infty}\alpha_j\,{\mathbf V}_\phi e_j(z)=\sum_{j=1}^{+\infty}\alpha_j\,j^{-\frac12} (\phi(z)^j-\phi(0)^j). \label{eq-Vopduality} \mathrm{e}nd{equation} \subsection{GAF and Hankel-type duality} We describe a variation on the above-mentioned GAF duality theme. 
Suppose that instead ${\mathbf M}$ is a bounded linear operator $\mathfrak{A}\to\mathfrak{A}_*$ (like a Hankel operator). In the same fashion as before, we write \[ {\mathbf M} \alpha_j=\sum_{k=1}^{+\infty}M_{j,k}\bar\alpha_k, \] and obtain that \begin{equation} \Phi_{{\mathbf M}\boldsymbol\alpha}(z)=\sum_{j=1}^{+\infty}{\mathbf M}\alpha_j\, e_j(z)=\sum_{j=1}^{+\infty} \sum_{k=1}^{+\infty}M_{j,k}\bar\alpha_k\,e_j(z)=\sum_{k=1}^{+\infty}\bar\alpha_k \sum_{j=1}^{+\infty}M_{j,k} \,e_j(z)=\sum_{k=1}^{+\infty}\bar\alpha_k\, {\mathbf M}^\ddagger e_k(z), \label{eq-Gauss-M'} \mathrm{e}nd{equation} with ${\mathbf M}^\ddagger$, \mathrm{e}mph{the GAF-Hankel transpose of} ${\mathbf M}$, given by the analogue of \mathrm{e}qref{eq-Gauss-MM}, \begin{equation} {\mathbf M}^\ddagger e_k(z):=\sum_{j=1}^{+\infty}M_{j,k} e_j(z). \label{eq-Gauss-M''} \mathrm{e}nd{equation} As with the GAF transpose, we let it be its own inverse, so that $({\mathbf M}^\ddagger)^\ddagger={\mathbf M}$. If ${\mathbf M}:\mathfrak{A}\to\mathfrak{A}_*$ is isometric and onto, then ${\mathbf M}^\ddagger$ acts unitarily on $\calD_0({\mathbb D})$. On the other hand, if ${\mathbf V}$ is unitary on $\calD_0({\mathbb D})$, we obtain the $\calD_0$-GAF \begin{equation} \sum_{k=1}^{+\infty}\bar\alpha_k\,{\mathbf V} e_k(z)=\sum_{k=1}^{+\infty}\bar\alpha_k \sum_{j=1}^{+\infty}V_{k,j} \,e_j(z)=\sum_{j=1}^{+\infty} \sum_{k=1}^{+\infty}V_{k,j}\bar\alpha_k\,e_j(z)= \sum_{j=1}^{+\infty}{\mathbf V}^\ddagger\alpha_j\,e_j(z), \label{eq-Gauss-M'''} \mathrm{e}nd{equation} where \begin{equation} {\mathbf V}^\ddagger\alpha_j=\sum_{k=1}^{+\infty}V_{k,j}\bar\alpha_k. \label{eq-Gauss-M''''} \mathrm{e}nd{equation} \subsection{Representation of the correlations ${\mathbb E} \Phi(z)\Psi(w)$ and ${\mathbb E} \Phi(z)\bar\Psi(w)$} In view of the definitions of $\Phi(z)$ and $\Psi(w)$, we have that \begin{equation} \Phi(z)\Psi(w)=\sum_{j,k=1}^{+\infty}\frac{\alpha_j\beta_k}{\sqrt{jk}}z^{j}w^{k}, \label{eq-CORR1} \mathrm{e}nd{equation} so that taking expectations, we obtain that \begin{equation} {\mathbb E}\Phi(z)\Psi(w) =\sum_{j,k=1}^{+\infty}(jk)^{-\frac12}({\mathbb E}\alpha_j\beta_k)\,z^{j}w^{k} =\sum_{j,k=1}^{+\infty}(jk)^{-\frac12}\langle\alpha_j,\bar\beta_k \rangle_\Omega\,z^{j}w^{k},\qquad z,w\in{\mathbb D}. \label{eq-CORR2} \mathrm{e}nd{equation} Next, let ${\mathbf S}:\mathfrak{G}\to\mathfrak{G}$ be the bounded linear operator which maps $\mathfrak{A}_*\to\mathfrak{B}_*$ according to ${\mathbf S} \bar\alpha_j=\bar\beta_j$ for $j=1,2,3,\ldots$, while ${\mathbf S}\gamma=0$ holds for all $\gamma\in\mathfrak{G}\ominus\mathfrak{A}_*=\mathfrak{A}\oplus\mathfrak{N}$. Then ${\mathbf S}$ is a partial isometry: it vanishes on $\mathfrak{A}\oplus\mathfrak{N}$, and acts isometrically on $\mathfrak{A}_*$. In terms of this operator, we may rewrite \mathrm{e}qref{eq-CORR2}: \begin{equation} {\mathbb E}\Phi(z)\Psi(w)= \sum_{j,k=1}^{+\infty}(jk)^{-\frac12}\langle\alpha_j,\bar\beta_k \rangle_\Omega\,z^{j}w^{k}=\sum_{j,k=1}^{+\infty}(jk)^{-\frac12} \langle\alpha_j,{\mathbf S}\bar\alpha_k\rangle\,z^{j}w^{k},\qquad z,w\in{\mathbb D}. \label{eq-CORR3} \mathrm{e}nd{equation} While the representation \mathrm{e}qref{eq-CORR3} has some good properties, it is not so convenient for deriving useful estimates. 
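Before proceeding, we record a simple illustration (included only to fix ideas). If $\beta_j=\bar\alpha_j$ for all $j$, then $\langle\alpha_j,\bar\beta_k\rangle_\Omega=\langle\alpha_j,\alpha_k\rangle_\Omega=\delta_{j,k}$, and \eqref{eq-CORR2} gives
\[
{\mathbb E}\Phi(z)\Psi(w)=\sum_{j=1}^{+\infty}\frac{(zw)^j}{j}=\log\frac{1}{1-zw},\qquad z,w\in{\mathbb D}.
\]
On the diagonal this is $f(z)=\log\frac{1}{1-z^2}$, whose Taylor coefficients are summable in square, so that $\int_{\mathbb T}|f(r\zeta)|^2\mathrm{d} s(\zeta)$ remains bounded as $r\to1^-$ and hence $\sigma(f)^2=0$. The point of Theorem \ref{thm-1.72} is that a mere permutation of the indices can instead produce an asymptotic variance of at least $1.72$.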
We split \[ \bar\beta_j=\mathbf{S}\bar\alpha_j={\mathbf P}_{\mathfrak{A}}\mathbf{S}\bar\alpha_j+ {\mathbf P}_{\mathfrak{A}}^\perp\mathbf{S}\bar\alpha_j\quad\Longleftrightarrow\quad \beta_j=\bar\mathbf{S}\alpha_j={\mathbf P}_{\mathfrak{A}_*}\bar\mathbf{S}\alpha_j+ {\mathbf P}_{\mathfrak{A}_*}^\perp\bar\mathbf{S}\alpha_j, \] so that the process $\Psi(w)$ takes the form \begin{equation*} \Psi(w)=\sum_{j=1}^{+\infty}\beta_j\,e_j(w)= \sum_{j=1}^{+\infty}{\mathbf P}_{\mathfrak{A}_*}\bar\mathbf{S}\alpha_j\,e_j(w)+ \sum_{j=1}^{+\infty}{\mathbf P}_{\mathfrak{A}_*}^\perp\bar\mathbf{S}\alpha_j\,e_j(w) =:\Psi_1(w)+\Psi_2(w), \mathrm{e}nd{equation*} with the obvious splitting of the process in two. Since \[ {\mathbb E} \Phi(z)\Psi_2(w)=\langle \Phi(z),\bar\Psi_2(w)\rangle_\Omega=0 \] as a consequence of the properties of the projections, we see that \[ {\mathbb E}\Phi(z)\Psi(w)={\mathbb E}\Phi(z)\Psi_1(w), \] and from the GAF-Hankel duality of \mathrm{e}qref{eq-Gauss-M'}, \[ \Psi_1(w)=\sum_{j=1}^{+\infty}({\mathbf P}_{\mathfrak{A}_*}\bar\mathbf{S}\alpha_j)\,e_j(w)= \sum_{j=1}^{+\infty}\bar\alpha_j\,({\mathbf P}_{\mathfrak{A}_*}\bar\mathbf{S})^\ddagger e_j(w). \] It is now immediate that \begin{equation} {\mathbb E}\Phi(z)\Psi(w)={\mathbb E}\Phi(z)\Psi_1(w) =\sum_{j=1}^{+\infty}e_j(z)\,({\mathbf P}_{\mathfrak{A}_*}\bar\mathbf{S})^\ddagger e_j(w),\qquad z\in{\mathbb D}. \label{eq-Gauss-Hankel1} \mathrm{e}nd{equation} Turning our attention to the other correlation ${\mathbb E}\Phi(z)\bar\Psi(w)$, we split \[ \bar\beta_j=\mathbf{S}\bar\alpha_j={\mathbf P}_{\mathfrak{A}_*}\mathbf{S}\bar\alpha_j+ {\mathbf P}_{\mathfrak{A}_*}^\perp\mathbf{S}\bar\alpha_j\quad\Longleftrightarrow\quad \beta_j=\bar\mathbf{S}\alpha_j={\mathbf P}_{\mathfrak{A}}\bar\mathbf{S}\alpha_j+ {\mathbf P}_{\mathfrak{A}}^\perp\bar\mathbf{S}\alpha_j, \] so that the process $\Psi(w)$ takes the form \begin{equation*} \Psi(w)=\sum_{j=1}^{+\infty}\beta_j\,e_j(w)= \sum_{j=1}^{+\infty}{\mathbf P}_{\mathfrak{A}}\bar\mathbf{S}\alpha_j\,e_j(w)+ \sum_{j=1}^{+\infty}{\mathbf P}_{\mathfrak{A}}^\perp\bar\mathbf{S}\alpha_j\,e_j(w) =:\Psi_3(w)+\Psi_4(w), \mathrm{e}nd{equation*} with the obvious splitting of the process in two. Since \[ {\mathbb E} \Phi(z)\bar\Psi_4(w)=\langle \Phi(z),\Psi_4(w)\rangle_\Omega=0 \] as a consequence of the properties of the projections, we find that \[ {\mathbb E}\Phi(z)\bar\Psi(w)={\mathbb E}\Phi(z)\bar\Psi_3(w). \] In addition, by the duality of \mathrm{e}qref{eq-Gauss-MM}, \[ \Psi_3(w)=\sum_{j=1}^{+\infty}({\mathbf P}_{\mathfrak{A}}\bar\mathbf{S}\alpha_j)\,e_j(w)= \sum_{j=1}^{+\infty}\alpha_j\,({\mathbf P}_{\mathfrak{A}}\bar\mathbf{S})^\dagger e_j(w). \] which gives the equality \begin{equation} {\mathbb E}\Phi(z)\bar\Psi(w) =\sum_{j=1}^{+\infty}e_j(z)\,\overline{({\mathbf P}_{\mathfrak{A}}\bar\mathbf{S})^\dagger e_j(w)}, \qquad z,w\in{\mathbb D}. \label{eq-Gauss-Hankel2} \mathrm{e}nd{equation} To simplify the notation, we write ${\mathbf Q}=({\mathbf P}_{\mathfrak{A}_*}\bar\mathbf{S})^\ddagger$ and ${\mathbb R}op=({\mathbf P}_{\mathfrak{A}}\bar\mathbf{S})^\dagger$ which are both contractions on $\calD_0({\mathbb D})$. Then our main formulas become, for $z,w\in{\mathbb D}$: \begin{equation} {\mathbb E}\Phi(z)\Psi(w)=\sum_{j=1}^{+\infty}e_j(z)\,{\mathbf Q} e_j(w),\qquad {\mathbb E}\Phi(z)\bar\Psi(w)=\sum_{j=1}^{+\infty}e_j(z)\,\overline{{\mathbb R}op e_j(w)}. 
\label{eq-QR1} \mathrm{e}nd{equation} \section{Proofs of the fundamental bounds} \subsection{The joint pointwise bound of correlations} \begin{proof}[Proof of Proposition \ref{prop-triang1}] Essentially, we just need to use the property that the $8\times8$ matrix \mathrm{e}qref{eq-4X4.1} is positive semidefinite. Since for complex constants $a,b,c,d$, \begin{multline*} 0\le\big|a\Phi(z)+b\bar\Phi(z)-c\Psi(w)-d\bar\Psi(w)\big|^2= (|a|^2+|b|^2)|\Phi(z)|^2+(|c|^2+|d|^2)|\Psi(w)|^2 +2\re(a\bar b(\Phi(z))^2) \\ -2\re(a\bar c\Phi(z)\bar\Psi(w))-2\re(a\bar d\Phi(z)\Psi(w))- 2\re(\bar b c\Phi(z)\Psi(w))-2\re(\bar b d\Phi(z)\bar\Psi(w)) +2\re(c\bar d (\Psi(w))^2), \mathrm{e}nd{multline*} the inequality survives after taking the expectation: \begin{multline*} 0\le{\mathbb E}\big|a\Phi(z)+b\bar\Phi(z)-c\Psi(w)-d\bar\Psi(w)\big|^2= (|a|^2+|b|^2)\log\frac{1}{1-|z|^2}+(|c|^2+|d|^2)\log\frac{1}{1-|w|^2} \\ -2\re((a\bar c+\bar bd){\mathbb E}\Phi(z)\bar\Psi(w)) -2\re((a\bar d+\bar b c){\mathbb E}\Phi(z)\Psi(w)). \mathrm{e}nd{multline*} In other words, we have the inequality \begin{multline*} 2\re((a\bar d+\bar b c){\mathbb E}\Phi(z)\Psi(w)) +2\re((a\bar c+\bar bd){\mathbb E}\Phi(z)\bar\Psi(w)) \le (|a|^2+|b|^2)\log\frac{1}{1-|z|^2}+(|c|^2+|d|^2)\log\frac{1}{1-|w|^2}. \mathrm{e}nd{multline*} We now restrict the values of our parameters, and assume that $b=\bar a$ and $d=\bar c$. The above inequality then gives that \begin{equation*} 2\re(ac{\mathbb E}\Phi(z)\Psi(w)) +2\re(a\bar c{\mathbb E}\Phi(z)\bar\Psi(w)) \le |a|^2\log\frac{1}{1-|z|^2}+|c|^2\log\frac{1}{1-|w|^2}. \mathrm{e}nd{equation*} We write $ac=|ac|\omega_1$ and $a\bar c=|ac|\omega_2$, where $|\omega_1|=|\omega_2|=1$. Then \begin{equation*} 2\re(\omega_1{\mathbb E}\Phi(z)\Psi(w)) +2\re(\omega_2{\mathbb E}\Phi(z)\bar\Psi(w))\le \frac{|a|}{|c|}\log\frac{1}{1-|z|^2}+\frac{|c|}{|a|}\log\frac{1}{1-|w|^2}. \mathrm{e}nd{equation*} On the right-hand side, we are free to minimize over $|a|$ and $|c|$, while on the left-hand side, we are free to maximize over the (freely choosable) unit vectors $\omega_1$ and $\omega_2$. After optimization, we arrive at the asserted estimate. \mathrm{e}nd{proof} \subsection{The proof of the fundamental integral estimate} \label{subsec-thm-fund1} \begin{proof}[Proof of Theorem \ref{thm-fund1}] The first observation is that by $L^2({\mathbb D})$-orthogonality, \[ \int_{\mathbb D}\big|a w{\mathbb E}\Phi(z)\Psi'(w)+b\bar w {\mathbb E}\Phi(z)\bar\Psi'(w)\big|^2 \frac{\mathrm{d} A(w)}{|w|^2} =|a|^2\int_{\mathbb D}\big|{\mathbb E}\Phi(z)\Psi'(w)\big|^2\mathrm{d} A(w)+ |b|^2\int_{\mathbb D}\big|{\mathbb E}\Phi(z)\bar \Psi'(w)\big|^2\mathrm{d} A(w). \] Next, we observe that by the representation \mathrm{e}qref{eq-QR1} and the norm contractive property of ${\mathbf Q}$, \[ \int_{\mathbb D}\big|{\mathbb E}\Phi(z)\Psi'(w)\big|^2\mathrm{d} A(w)= \Big\|\sum_{j=1}^{+\infty}e_j(z){\mathbf Q} e_j\Big\|_\nabla^2\le \Big\|\sum_{j=1}^{+\infty}e_j(z) e_j\Big\|_\nabla^2=\sum_{j=1}^{+\infty}|e_j(z)|^2 =\log\frac{1}{1-|z|^2}, \] and, that analogously, by the norm contractive property of ${\mathbb R}op$, \[ \int_{\mathbb D}\big|{\mathbb E}\Phi(z)\bar\Psi'(w)\big|^2\mathrm{d} A(w)= \Big\|\sum_{j=1}^{+\infty}\bar e_j(z){\mathbb R}op e_j\Big\|_\nabla^2 \le\Big\|\sum_{j=1}^{+\infty}\bar e_j(z) e_j\Big\|_\nabla^2= \sum_{j=1}^{+\infty}|e_j(z)|^2=\log\frac{1}{1-|z|^2}. \] The proof is complete. 
\mathrm{e}nd{proof} \section{Dirichlet symbols of contractions on $L^2({\mathbb D})$ and analytic correlations of GAFs} \subsection{The correspondence between Dirichlet symbols and the analytic correlation} We show the indicated relationship between the analytic correlation ${\mathbb E} \Phi(z)\Psi(w)$ and the Dirichlet symbols ${\mathbb Z}symb[{\mathbf T}](z,w)$ for contractions ${\mathbf T}$ on $L^2({\mathbb D})$. \begin{proof}[Proof of Theorem \ref{thm-transfer1}] We begin with part (a), so we are given the orthonormal systems $\{\alpha_j\}_j$ and $\{\beta_j\}_j$ in the Gaussian Hilbert space $\Gspace$, and need to construct the norm contractive operator ${\mathbf T}$ on $L^2({\mathbb D})$ with the indicated property. We let ${\mathbf S}:\Gspace\to\Gspace$ be the bounded linear operator with ${\mathbf S}\bar\alpha_j=\bar\beta_j$ for $j=1,2,3,\ldots$ while ${\mathbf S}\gamma=0$ for all $\gamma\in\Gspace\ominus\mathfrak{A}_*$. Given that ${\mathbf S}$ is a contraction, the product ${\mathbf P}_{\mathfrak{A}}{\mathbf S}$ is a contraction as well, and we may decompose \[ {\mathbf P}_\mathfrak{A}\bar\beta_k={\mathbf P}_{\mathfrak{A}}{\mathbf S}\bar\alpha_k= \sum_{j=1}^{+\infty}A_{k,j}\alpha_j, \] where $\sum_j|A_{k,j}|^2\le1$. For $j=1,2,3,\ldots$, we write $f_j(z)=e_j'(z)=j^{\frac12}z^{j-1}$, which constitutes an orthonormal basis in $A^2({\mathbb D})$, and put \[ {\mathbf T}^* f_k=\sum_{l=1}^{+\infty}A_{k,l}\bar f_l,\qquad k=1,2,3,\ldots. \] By linearity and norm boundedness of the matrix $(A_{j,k})_{j,k}$, this defines ${\mathbf T}^*$ on $A^2({\mathbb D})$. Then \[ \langle \bar f_j,{\mathbf T}^*f_k\rangle_{\mathbb D}=\sum_{l=1}^{+\infty}A_{k,l} \langle \bar f_j,\bar f_l\rangle_{\mathbb D}=A_{k,j}=\sum_{l=1}^{+\infty}A_{k,l} \langle\alpha_j,\alpha_l\rangle_\Omega=\langle\alpha_j, {\mathbf P}_{\mathfrak{A}}\mathbf{S}\bar\alpha_k\rangle_\Omega =\langle\alpha_j,\mathbf{S}\bar\alpha_k\rangle_\Omega= \langle\alpha_j,\bar\beta_k\rangle_\Omega, \] and since \begin{equation} \bar z\mathrm s_z(\zeta)=\frac{\bar z}{1-\bar z\zeta}=\sum_{j=1}^{+\infty}\bar z^j \zeta^{j-1}=\sum_{j=1}^{+\infty}\bar e_j(z)f_j(\zeta), \label{eq-kerneldecomp1} \mathrm{e}nd{equation} it now follows that \[ zw\,\langle \bar\mathrm s_z,{\mathbf T}^*\bar\mathrm s_w\rangle_{\mathbb D}=\sum_{j,k=1}^{+\infty} e_j(z)e_k(w)\langle \bar f_j,{\mathbf T}^*f_k\rangle_{\mathbb D}=\sum_{j,k=1}^{+\infty} \langle\alpha_j,\bar\beta_k\rangle_\Omega e_j(z)e_k(w)={\mathbb E}\Phi(z)\Psi(w), \] so that condition (i) holds if ${\mathbf T}$ is the adjoint of ${\mathbf T}^*$. But to properly define ${\mathbf T}$, we need to extend ${\mathbf T}^*$ to all of $L^2({\mathbb D})$. To this end, we simply declare that ${\mathbf T}^*f=0$ holds for $f\in L^2({\mathbb D})\ominus A^2({\mathbb D})$. It remains to check that so constructed, ${\mathbf T}^*$ is a contraction on $L^2({\mathbb D})$, for then the adjoint ${\mathbf T}$ is contractive as well. For a polynomial $f\in A^2({\mathbb D})$, we decompose it as a finite sum $f=\sum_kb_k f_k$ where $\|f\|^2_{L^2({\mathbb D})}=\sum_k|b_k|^2$, and since ${\mathbf T}^* f=\sum_{l,k}A_{k,l}b_k \bar f_l$, we find that \[ \|{\mathbf T}^*f\|_{L^2({\mathbb D})}^2=\sum_{l}\bigg|\sum_{k}A_{k,l}b_k\bigg|^2 =\bigg\|{\mathbf P}_{\mathfrak{A}}{\mathbf S}\sum_k b_k\bar\alpha_k\bigg\|^2\le \bigg\|\sum_k b_k\bar\alpha_k\bigg\|^2=\sum_k|b_k|^2=\|f\|^2_{L^2({\mathbb D})}, \] and it follows that ${\mathbf T}^*$ defines a contraction on $A^2({\mathbb D})$ and hence in a second step on all of $L^2({\mathbb D})$. 
This concludes the demonstration of part (a). We proceed with the remaining task of obtaining part (b), which amounts to constructing the Gaussian Hilbert space $\Gspace$ and the sequence $\beta_j$ and associated partial isometry $\mathbf{S}$ for a given contraction ${\mathbf T}$ on $L^2({\mathbb D})$. We recall that $\mathfrak{A}$ and $\mathfrak{A}_*$ are two orthogonal subspaces in $\Gspace$. However, the sum $\mathfrak{A}\oplus\mathfrak{A}_*$ need not be all of $\Gspace$. We will assume that $\mathfrak{N}:=\Gspace\ominus(\mathfrak{A}\oplus\mathfrak{A}_*)$ is \mathrm{e}mph{separable and infinite-dimensional}, which just amounts to considering a sufficiently big (separable) Gaussian Hilbert space $\Gspace$. We split $\mathfrak{N}=\mathfrak{M}\oplus\mathfrak{M}_*$, where $\mathfrak{M}$ is the closed linear span of certain elements $\nu_1,\nu_2,\nu_3,\ldots$ of $\mathfrak{N}$, which are all i.i.d. standard complex Gaussian variables (see Subsection \ref{subsec-CGHS}). The space $\mathfrak{M}_*$ is then the closed linear span of the complex conjugates $\bar\nu_1,\bar\nu_2, \bar\nu_3,\ldots$. As for notation, we will need the orthogonal (Bergman) projection ${\mathbf P}_{A^2}:L^2({\mathbb D})\to A^2({\mathbb D})$, and its conjugate $\bar{\mathbf P}_{A^2}$ defined by \[ \bar{\mathbf P}_{A^2}(f)=\overline{{\mathbf P}_{A^2}(\bar f)}. \] We begin with the observation that \[ \langle{\mathbf T}\bar f_j, f_k\rangle_{\mathbb D}=\langle\bar f_j, {\mathbf T}^* f_k\rangle_{\mathbb D} =\langle\bar f_j, \bar{\mathbf P}_{A^2}{\mathbf T}^* f_k\rangle_{\mathbb D}, \qquad j,k=1,2,3,\ldots. \] We need to find i.i.d. standard Gaussian vectors $\beta_1,\beta_2,\beta_3,\ldots$ in the Gaussian Hilbert space $\Gspace$ such that \[ {\mathbb E}\alpha_j\beta_k= \langle\alpha_j,\bar\beta_k\rangle_\Omega=\langle{\mathbf T}\bar f_j, f_k\rangle_{\mathbb D}= \langle\bar f_j, \bar{\mathbf P}_{A^2}{\mathbf T}^* f_k\rangle_{\mathbb D}, \qquad j,k=1,2,3,\ldots, \] since by summing over $j,k$ we arrive at \begin{multline*} {\mathbb E}\Phi(z)\Psi(w)=\sum_{j,k=1}^{+\infty}e_j(z)e_k(w)\, {\mathbb E}\alpha_j\beta_k= \sum_{j,k=1}^{+\infty}e_j(z)e_k(w)\,\langle{\mathbf T}\bar f_j, f_k\rangle_{\mathbb D} \\ =\sum_{j,k=1}^{+\infty}e_j(z)e_k(w)\, \langle\bar f_j, \bar{\mathbf P}_{A^2}{\mathbf T}^* f_k\rangle_{\mathbb D}= zw\,\langle\bar \mathrm s_z, \bar{\mathbf P}_{A^2}{\mathbf T}^* \mathrm s_w\rangle_{\mathbb D} =zw\,\langle\bar{\mathbf P}_{A^2}\bar \mathrm s_z, {\mathbf T}^* \mathrm s_w\rangle_{\mathbb D}= zw\,\langle\bar \mathrm s_z, {\mathbf T}^* \mathrm s_w\rangle_{\mathbb D}=zw\,\langle{\mathbf T}\bar \mathrm s_z, \mathrm s_w\rangle_{\mathbb D}, \mathrm{e}nd{multline*} where we used \mathrm{e}qref{eq-kerneldecomp1}. The element $\bar{\mathbf P}_{A^2}{\mathbf T}^* f_k$ is in the space of complex conjugates of $A^2({\mathbb D})$, and as such it has an expansion \[ \bar{\mathbf P}_{A^2}{\mathbf T}^* f_k=\sum_{l=1}^{+\infty}A_{k,l}\bar f_l, \] where $\sum_j|A_{k,j}|^2\le1$. We need ${\mathbf S}$ to have the property that in terms of the above expansion, \[ {\mathbf P}_{\mathfrak{A}}{\mathbf S}\bar\alpha_k=\mathbf{A}\bar\alpha_k:= \sum_{j=1}^{+\infty}A_{k,j}\alpha_j, \] which defines $\mathbf{A}$ as an operator $\mathfrak{A}_*\to\mathfrak{A}$. As such, it is a contraction. 
Indeed, if $\gamma\in\mathfrak{A}_*$ has expansion $\gamma=\sum_k b_k \bar\alpha_k$, we obtain that \[ \|\mathbf{A}\gamma\|_{\Omega}^2= \sum_{j}\bigg|\sum_{k} A_{k,j}b_k\bigg|^2 =\bigg\|\bar{\mathbf P}_{A^2}{\mathbf T}^*\sum_k b_k f_k\bigg\|^2\le \bigg\|\sum_k b_k f_k\bigg\|^2=\sum_k|b_k|^2=\|\gamma\|^2, \] which verifies the norm contractivity of $\mathbf{A}$. We proceed to define the operator ${\mathbf S}$ and hence the Gaussian vectors $\bar\beta_j=\mathbf{S}\bar\alpha_j$. To do this, we appeal to a standard procedure in operator theory. Since $\mathbf{A}$ maps $\mathfrak{A}_*\to\mathfrak{A}$, it has an adjoint $\mathbf{A}^\circledast$ which maps $\mathfrak{A}\to\mathfrak{A}_*$. We now form the \mathrm{e}mph{defect operator} \[ {\mathbb D}op:=(\mathbf{I}_{\mathfrak{A}_*}-\mathbf{A}^\circledast\mathbf{A})^{1/2}, \] which maps $\mathfrak{A}_*\to\mathfrak{A}_*$. The square root is well-defined given that we are taking the square root of a positive (semidefinite) operator. We use this defect operator to define an associated operator $\tilde{\mathbb D}op$ on $\mathfrak{M}$, by declaring that if ${\mathbb D}op\bar\alpha_j=\sum_k D_{j,k}\bar\alpha_k$, then \[ \tilde{\mathbb D}op\nu_j=\sum_{k}D_{j,k}\nu_k,\qquad j=1,2,3,\ldots. \] Then $\tilde{\mathbb D}op$ becomes a contraction on $\mathfrak{M}$, and we may now define the operator ${\mathbf S}$. For $\gamma\in\Gspace\ominus\mathfrak{A}_*$, we declare ${\mathbf S}\gamma=0$. For $\gamma\in\mathfrak{A}_*$ with expansion $\gamma=\sum_k b_k\bar\alpha_k$ in basis vectors, we define the Gaussian vectors \begin{equation} \bar\beta_k={\mathbf S}\bar\alpha_k:=\mathbf{A}\bar\alpha_k+\tilde{\mathbb D}op\nu_k \in\mathfrak{A}\oplus\mathfrak{M}, \qquad k=1,2,3,\ldots, \label{eq-defbeta1} \mathrm{e}nd{equation} and put ${\mathbf S}\gamma:=\sum_k b_k\bar\beta_k$. Since $\tilde{\mathbb D}op\nu_k\in\mathfrak{M}\subset\mathfrak{N}$, we see that \[ {\mathbf P}_{\mathfrak{A}}{\mathbf S}\bar\alpha_k={\mathbf P}_{\mathfrak{A}}\mathbf{A}\bar\alpha_k +{\mathbf P}_{\mathfrak{A}}\tilde{\mathbb D}op\nu_k=\mathbf{A}\bar\alpha_k, \] since $\mathbf{A}\bar\alpha_k\in\mathfrak{A}$ and we know that $\mathfrak{N}$ is orthogonal to $\mathfrak{A}$, so things are as they should be. Moreover, ${\mathbf S}$ acts isometrically on $\mathfrak{A}_*$, as we see from \[ \|{\mathbf S}\gamma\|_{\Omega}^2= \|\mathbf{A}\gamma\|_{\Omega}^2+\|{\mathbb D}op\gamma\|_{\Omega}^2= \|\gamma\|_{\Omega}^2. \] It follows that the vectors $\bar\beta_k:=\mathbf{S}\bar\alpha_k$ form an orthonormal system in $\Gspace$. It remains to verify that they are i.i.d. standard complex Gaussians, which requires in addition to orthonormality that ${\mathbb E}\bar\beta_j\bar\beta_k=0$ holds for all $j$ and $k$. In view of \mathrm{e}qref{eq-defbeta1}, \[ {\mathbb E}\bar\beta_j\bar\beta_k=\langle\bar\beta_j,\beta_k\rangle_\Omega=0, \] given that $\bar\beta_j\in \mathfrak{A}\oplus \mathfrak{M}$ while $\beta_k\in \mathfrak{A}_*\oplus\mathfrak{M}_*$ and the subspaces $\mathfrak{A}\oplus\mathfrak{M}$ and $\mathfrak{A}_*\oplus\mathfrak{M}_*$ are orthogonal to one another in $\Gspace$. This tells us how to construct the sequence $\beta_1,\beta_2,\beta_3,\ldots$ starting from the contraction ${\mathbf T}$ on $L^2({\mathbb D})$, and concludes the proof of part (b). \mathrm{e}nd{proof} \subsection{Orthonormal systems in Hilbert space and operator symbols} We recall the setting of Corollary \ref{cor-Hilb1}, where $x_1,x_2,x_3,\ldots$ and $y_1,y_2,y_3,\ldots$ are two orthonormal systems in a separable complex Hilbert space $\calH$. 
Let $\calX$ denote the closed linear span of the vectors $x_1,x_2,x_3,\ldots$, and ${\mathbf P}_\calX$ the orthogonal projection $\calH\to\calX$. \begin{proof}[Proof of Corollary \ref{cor-Hilb1}] We recall the notation $f_j(z)=e_j'(z)=j^{1/2}z^{j-1}$, and let ${\mathbf T}^*$ be a linear operator with the property that \begin{equation} {\mathbf T}^* f_j=\sum_{k}\langle y_j,x_k\rangle_\calH \bar f_k. \label{eq-Tope*1} \mathrm{e}nd{equation} Then we have for scalars $c_j$ (only finitely many nonzero) that \[ \bigg\|{\mathbf T}^*\sum_j c_jf_j\bigg\|^2_{\mathbb D}= \bigg\|\sum_{j,k} c_j\langle y_j,x_k\rangle_\calH \bar f_k \bigg\|^2_{\mathbb D} =\bigg\|\sum_{j,k} c_j\langle y_j,x_k\rangle_\calH x_k \bigg\|^2_\calH=\bigg\|{\mathbf P}_{\calX}\sum_{j} c_j y_j\bigg\|_\calH^2\le \bigg\|\sum_{j} c_j f_j\bigg\|_\calH^2 \] which shows that ${\mathbf T}^*$ defines a norm contraction $A^2({\mathbb D})\to\mathrm{conj}\,A^2({\mathbb D})$. In a second step, we extend ${\mathbf T}^*$ to all of $A^2({\mathbb D})$ by declaring that ${\mathbf T}^* f=0$ for all $f\in L^2({\mathbb D})\ominus A^2({\mathbb D})$, and we see that this defines a contraction on $L^2({\mathbb D})$. The Dirichlet symbol of ${\mathbf T}$ is then, in view of \mathrm{e}qref{eq-kerneldecomp1}, \[ zw\,{\mathbb Z}symb[{\mathbf T}](z,w)=zw\langle {\mathbf T}\bar\mathrm s_z,\mathrm s_w\rangle_{\mathbb D}= zw\langle \bar\mathrm s_z,{\mathbf T}^*\mathrm s_w\rangle_{\mathbb D}= \sum_{j,k=1}^{+\infty}e_j(z)e_k(w)\langle \bar f_j,{\mathbf T}^*f_k\rangle_{\mathbb D} =\sum_{j,k=1}^{+\infty}e_j(z)e_k(w)\langle x_j,y_k\rangle_{\mathbb D}. \] Taking the diagonal restriction, we have that \[ z^2{\mathbb Z}symb[{\mathbf T}](z,z)=\sum_{l=2}^{+\infty}z^l\sum_{j,k:j+k=l}(jk)^{-\frac12} \langle x_j,y_k\rangle_{\mathbb D}, \] and it follows that the claim is a direct consequence of Theorem \ref{thm-fund2}. \mathrm{e}nd{proof} \section{Hilbert spaces and diagonal restriction on the bidisk} \label{sec-hilbspaces} \subsection{Weighted Bergman spaces on the disk and bidisk} For real $\alpha>-1$, we write $A^2_\alpha({\mathbb D})$ for the Hilbert space of holomorphic functions $f:{\mathbb D}\to{\mathbb C}$ subject to the norm boundedness condition \[ \|f\|_{A^2_\alpha({\mathbb D})}^2=(\alpha+1)\int_{\mathbb D}|f(z)|^2(1-|z|^2)^\alpha\mathrm{d} A(z)<+\infty. \] Moreover, we write $A^2_{-1,0}({\mathbb D}^2)$ for the Hilbert space of holomorphic functions $f:{\mathbb D}\to{\mathbb C}$ subject to the norm boundedness condition \[ \|f\|_{A^2_{-1,0}({\mathbb D})}^2=\int_{\mathbb D}\int_{\mathbb T} |f(z,w)|^2\mathrm{d} s(z) \mathrm{d} A(w)<+\infty. \] For analytic functions $f$ on the bidisk, we let $\oslash$ denote the operation of taking the diagonal restriction, $\oslash f(z):=f(z,z)$. We may for instance write $\partial_{z}^j\oslash(\partial_w^k f)$ to denote the function \[ \partial_{z}^j\Big(\partial_w^k f(z,w)\big|_{w:=z}\Big). \] In \cite{HedShim1}, the following diagonal norm expansion theorem was obtained. \begin{thm} For $f\in A^2_{-1,0}({\mathbb D}^2)$, we have that \begin{equation*} \|f\|_{A^2_{-1,0}({\mathbb D})}^2=\sum_{n=0}^{+\infty} \frac{(n+2)_n}{(n+1)!}\bigg\|\sum_{k=0}^{n} \frac{(-1)^k(k+2)_{n-k}}{k!(n-k)!(n+k+2)_{n-k}} \partial_{z}^{n-k}\oslash(\partial_w^k f)\bigg\|^2_{A^2_{2n+1}({\mathbb D})}. 
\mathrm{e}nd{equation*} \label{thm-hedshim-DMJ} \mathrm{e}nd{thm} \subsection{The implementation of the fundamental estimate into the diagonal norm expansion} Our starting point is the instance of $(a,b)=(1,0)$ in Theorem \ref{thm-fund1}: \[ \int_{\mathbb D}\big|a(z){\mathbb E}\Phi(z)\Psi'(w)\big|^2 \mathrm{d} A(w)\le |a(z)|^2\log\frac{1}{1-|z|^2},\qquad z\in{\mathbb D}. \] We dilate each variable using $r$, $0<r<1$, multiply by $|a(z)|^2$ for some $a\in H^2({\mathbb D})$, and integrate over ${\mathbb T}\times{\mathbb D}$: \[ r^2\int_{\mathbb T}\int_{{\mathbb D}(0,\frac1r)}\big|a(z){\mathbb E}\Phi(rz)\Psi'(rw)\big|^2 \mathrm{d} A(w)\mathrm{d} s(z)\le \|a\|_{H^2}^2\,\log\frac{1}{1-r^2}. \] We now throw away a part of the domain of integration (but, by monotonicity, we may remove the $r^2$ factor at the same time): \begin{equation} \int_{\mathbb T}\int_{{\mathbb D}}\big|a(z){\mathbb E}\Phi(rz)\Psi'(rw)\big|^2 \mathrm{d} A(w)\mathrm{d} s(z)\le \|a\|_{H^2}^2\,\log\frac{1}{1-r^2}. \label{eq-aest1} \mathrm{e}nd{equation} We recognize the left-hand side expression as the norm-square in the space $A^2_{-1,0}({\mathbb D}^2)$ of the function $f(z,w)=a(z){\mathbb E}\Phi(rz)\Psi'(rw)$. Clearly, \[ \oslash(\partial_w^k f)(z)=r^k a(z){\mathbb E}\Phi(rz)\Psi^{(k+1)}(rz), \] so an application of Theorem \ref{thm-hedshim-DMJ} gives that \begin{multline} \sum_{n=0}^{+\infty} \frac{2(n+2)_n}{n!}\int_{\mathbb D}\bigg|\sum_{k=0}^{n} \frac{(-1)^k(k+2)_{n-k}\,r^k}{k!(n-k)!(n+k+2)_{n-k}} \partial_{z}^{n-k}\big(a(z){\mathbb E}\Phi(rz)\Psi^{(k+1)}(rz)\big)\bigg|^2 (1-|z|^2)^{2n+1}\mathrm{d} A(z) \\ \le \|a\|_{H^2}^2\,\log\frac{1}{1-r^2}. \label{eq-expanded1} \mathrm{e}nd{multline} We choose for simplicity $a(z)\mathrm{e}quiv1$, and expand the higher order derivative using the Leibniz rule \[ \partial_{z}^{n-k}\big({\mathbb E}\Phi(rz)\Psi^{(k+1)}(rz)\big)= r^{n-k}\sum_{l=0}^{n-k}\frac{(n-k)!}{l!(n-k-l)!}{\mathbb E}\Phi^{(n-k-l)}(rz) \Psi^{(k+l+1)}(rz). \] It follows that \begin{multline} \sum_{k=0}^{n}\sum_{l=0}^{n-k}\frac{(-1)^k(k+2)_{n-k}\,r^k}{k!(n-k)!(n+k+2)_{n-k}} \partial_{z}^{n-k}\big({\mathbb E}\Phi(rz)\Psi^{(k+1)}(rz)\big) \\ =r^n\sum_{k=0}^{n}\sum_{l=0}^{n-k}\frac{(-1)^k(k+2)_{n-k}}{k!l!(n-k-l)!(n+k+2)_{n-k}} {\mathbb E}\Phi^{(n-k-l)}(rz)\Psi^{(k+l+1)}(rz) \\ =r^n\sum_{m=0}^{n}\frac{(-1)^m(n+1)[(n-m+1)_m]^2}{m!(m+1)!(n+2)_n} \big({\mathbb E}\Phi^{(n-m)}(rz)\Psi^{(m+1)}(rz)\big) \label{eq-combid1} \mathrm{e}nd{multline} since it happens to be true for integers $m$ with $0\le m\le n$ that \begin{equation*} \sum_{k,l\ge0: k+l=m}\frac{(-1)^k(k+2)_{n-k}}{k!l!(n-m)!(n+k+2)_{n-k}} =\frac{(-1)^m(n+1)[(n-m+1)_m]^2}{m!(m+1)!(n+2)_n}. \mathrm{e}nd{equation*} As we implement \mathrm{e}qref{eq-combid1} into \mathrm{e}qref{eq-expanded1}, we arrive at \begin{multline*} \sum_{n=0}^{+\infty} \frac{2(n+1)^3\,r^{2n}}{(2n+1)!}\int_{\mathbb D}\bigg|\sum_{m=0}^{n} \frac{(-1)^m[(n-m+1)_m]^2}{m!(m+1)!} \big({\mathbb E}\Phi^{(n-m)}(rz)\Psi^{(m+1)}(rz)\big)\bigg|^2 (1-|z|^2)^{2n+1}\mathrm{d} A(z) \\ \le \log\frac{1}{1-r^2}. \mathrm{e}nd{multline*} If we only keep the first term with $n=0$ on the left-hand side we are left with \begin{equation} 2\int_{\mathbb D}\big|{\mathbb E}\Phi(rz)\Psi'(rz)\big|^2 (1-|z|^2)\mathrm{d} A(z) \le \log\frac{1}{1-r^2}. \label{eq-PhiPsi1} \mathrm{e}nd{equation} We are free to switch the roles of $\Phi$ and $\Psi$, so that we also have \begin{equation} 2\int_{\mathbb D}\big|{\mathbb E}\Phi'(rz)\Psi(rz)\big|^2 (1-|z|^2)\mathrm{d} A(z) \le \log\frac{1}{1-r^2}. 
\label{eq-PhiPsi2} \mathrm{e}nd{equation} Since \[ \partial_z{\mathbb E}\Phi(rz)\Psi(rz)= r{\mathbb E} \Phi'(rz)\Psi(rz)+r{\mathbb E}\Phi(rz)\Psi'(rz), \] it follows from \mathrm{e}qref{eq-PhiPsi1} and \mathrm{e}qref{eq-PhiPsi2} that \begin{multline} \int_{\mathbb D}\big|\partial_z{\mathbb E}\Phi(rz)\Psi(rz)\big|^2 (1-|z|^2)\mathrm{d} A(z) \\ \le2r^2\int_{\mathbb D}\big(\big|{\mathbb E}\Phi(rz)\Psi'(rz)\big|^2+ \big|{\mathbb E}\Phi'(rz)\Psi(rz)\big|^2\big)(1-|z|^2)\mathrm{d} A(z) \le 2r^2\log\frac{1}{1-r^2}. \label{eq-PhiPsi3} \mathrm{e}nd{multline} \begin{proof}[Proof of Theorem \ref{thm-fund2}] A variant of the Littlewood-Paley identity states that for an analytic function $f$ in the Hardy space $H^2({\mathbb D})$, \[ \int_{\mathbb D}|f'(z)|^2(1-|z|^2)\mathrm{d} A(z)=\int_{\mathbb T} |f(z)|^2\mathrm{d} s(z)- \int_{\mathbb D}|f(z)|^2\mathrm{d} A(z), \] so that with $F(z)={\mathbb E}\Phi(z)\Psi(z)$, \mathrm{e}qref{eq-PhiPsi3} asserts that \begin{equation} \int_{\mathbb T} |F(rz)|^2\mathrm{d} s(z)-\int_{\mathbb D}|F(rz)|^2\mathrm{d} A(z)\le 2r^2\log\frac{1}{1-r^2}. \label{eq-formula-F} \mathrm{e}nd{equation} In terms of the Taylor expansion of $F$, \[ F(z)=\sum_{j=2}^{+\infty}\hat F(j)z^j, \] the estimate \mathrm{e}qref{eq-formula-F} amounts to \begin{equation} \sum_{j=2}^{+\infty}\frac{j\,r^{2j}}{j+1}\,|\hat F(j)|^2\le 2r^2\log\frac{1}{1-r^2}. \label{eq-formula-F2} \mathrm{e}nd{equation} By integration, we see from \mathrm{e}qref{eq-formula-F2} that \begin{multline} \int_{\mathbb D}|F(rz)|^2\mathrm{d} A(z)=\sum_{j=2}^{+\infty}\frac{r^{2j}}{j+1}\,|\hat F(j)|^2 \le 2\int_0^r\sum_{j=2}^{+\infty}\frac{j\,t^{2j-1}}{j+1}\,|\hat F(j)|^2\mathrm{d} t \\ \le 2\int_0^r t\log\frac{1}{1-t^2}\mathrm{d} t=(1-r^2)\log(1-r^2)+r^2\le r^2. \label{eq-formula-F3} \mathrm{e}nd{multline} It now follows from \mathrm{e}qref{eq-formula-F} combined with the estimate \mathrm{e}qref{eq-formula-F3} that \begin{equation} \int_{\mathbb T} |F(rz)|^2\mathrm{d} s(z)\le 2r^2\log\frac{1}{1-r^2}+r^2, \mathrm{e}nd{equation} as claimed. \mathrm{e}nd{proof} \section{M\"obius invariance and the mock-Bloch space} \subsection{M\"obius invariance of the Dirichlet symbol} For a M\"obius automorphism $\phi$ of the unit disk ${\mathbb D}$, let ${\mathbf U}_\phi$ and $\bar{\mathbf U}_\phi$ be the unitary transformations on $L^2({\mathbb D})$ given by \mathrm{e}qref{eq-unitaries1.2}. If $\phi,\psi$ are two such M\"obius automorphisms, we see that \[ {\mathbf U}_\psi{\mathbf U}_\phi f={\mathbf U}_\psi(\phi'(f\circ\phi))=\psi'(\phi'\circ\psi) (f\circ\phi\circ\psi)=(\phi\circ\psi)'(f\circ\phi\circ\psi)= {\mathbf U}_{\phi\circ\psi}(f), \] which puts us in the context of representation theory. In particular, we find that ${\mathbf U}_\phi^*={\mathbf U}_\phi^{-1}={\mathbf U}_{\phi^{-1}}$. \begin{lem} We have that \[ \bar w\,{\mathbf U}_\phi^* \mathrm s_w=\bar\phi(w)\,\mathrm s_{\phi(w)} -\bar\phi(0)\,\mathrm s_{\phi(0)},\qquad w\in{\mathbb D}. \] \label{lem-Mob1.1} \mathrm{e}nd{lem} \begin{proof} This is a direct computation. 
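For the reader's convenience, we indicate one way to organize it (this elaboration is ours). Both sides of the asserted identity belong to $A^2({\mathbb D})$, so it suffices to test against an arbitrary $f\in A^2({\mathbb D})$. Writing $F$ for the primitive of $f$ with $F(0)=0$, the averaging property of the Szeg\H{o} kernel and adjointness give
\[
\langle f,\bar w\,{\mathbf U}_\phi^*\mathrm s_w\rangle_{\mathbb D}
=w\,\langle{\mathbf U}_\phi f,\mathrm s_w\rangle_{\mathbb D}
=w\int_0^1\phi'(wt)\,f(\phi(wt))\,\mathrm{d} t
=F(\phi(w))-F(\phi(0)),
\]
while
\[
\langle f,\bar\phi(w)\,\mathrm s_{\phi(w)}-\bar\phi(0)\,\mathrm s_{\phi(0)}\rangle_{\mathbb D}
=\phi(w)\int_0^1 f(\phi(w)t)\,\mathrm{d} t-\phi(0)\int_0^1 f(\phi(0)t)\,\mathrm{d} t
=F(\phi(w))-F(\phi(0)).
\]
Since $f\in A^2({\mathbb D})$ was arbitrary, the asserted identity follows.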
\mathrm{e}nd{proof} \begin{proof}[Proof of Theorem \ref{thm-mockbloch}] In view of the definition of the operator ${\mathbf T}_\phi={\mathbf U}_\phi{\mathbf T}\bar{\mathbf U}_\phi^*$, we see that \begin{equation*} \oslash \mathcal{Q}[{\mathbf T}_\phi](z)=z^2\langle{\mathbf U}_\phi{\mathbf T}\bar{\mathbf U}_\phi^*\bar\mathrm s_z, \mathrm s_z\rangle_{\mathbb D}=z^2\langle{\mathbf T}\bar{\mathbf U}_\phi^*\bar\mathrm s_z,{\mathbf U}_\phi^* \mathrm s_z\rangle_{\mathbb D}, \mathrm{e}nd{equation*} and by Lemma \ref{lem-Mob1.1}, it follows that \begin{multline*} z^2\langle{\mathbf T}\bar{\mathbf U}_\phi^*\bar\mathrm s_z,{\mathbf U}_\phi^*\mathrm s_z\rangle_{\mathbb D} =\phi(z)^2\langle {\mathbf T}\bar\mathrm s_{\phi(z)},\mathrm s_{\phi(z)}\rangle_{\mathbb D}-\phi(0)\phi(z) \langle {\mathbf T}\bar\mathrm s_{\phi(z)},\mathrm s_{\phi(0)}\rangle_{\mathbb D} \\ -\phi(0)\phi(z)\langle {\mathbf T}\bar\mathrm s_{\phi(0)},\mathrm s_{\phi(z)}\rangle_{\mathbb D}+ \phi(0)^2\langle {\mathbf T}\bar\mathrm s_{\phi(0)},\mathrm s_{\phi(0)}\rangle_{\mathbb D}, \mathrm{e}nd{multline*} which is the claimed invariance. \mathrm{e}nd{proof} \subsection{The mock-Bloch space is bigger than the Bloch space} We show that the product of two Dirichlet space functions need not be in the Bloch space. \begin{proof}[Proof of Theorem \ref{thm-notbloch}] Let $r_1,r_2,r_3,\ldots$ be an increasing sequence in $]0,1[$ tending rapidly to $1$. We let $f$ and $g$ be the functions \[ f(z):=\sum_{j=1}^{+\infty}j^{-1}(1-r_j^2)\frac{z}{1-r_j z},\quad g(z):=\sum_{j=1}^{+\infty}\frac{j^{-1}}{\sqrt{\log\frac{1}{1-r_j^2}}} \log\frac{1}{1-r_j z}. \] Then \[ \|f\|_\nabla^2=\int_{\mathbb D}|f'|^2\mathrm{d} A=\int_{\mathbb D}\bigg|\sum_{j=1}^{+\infty}j^{-1} \tfrac{1-r_j^2}{(1-r_j z)^2}\bigg|^2\mathrm{d} A(z)=\sum_{j,k=1}^{+\infty}(jk)^{-1} \frac{(1-r_j^2)(1-r_k^2)}{(1-r_j r_k)^2}<+\infty \] if the sequence $\{r_j\}_j$ is sparse enough. In a similar manner, \[ \|g\|_\nabla^2=\int_{\mathbb D}|g'|^2\mathrm{d} A=\int_{\mathbb D}\bigg|\sum_{j=1}^{+\infty} \frac{j^{-1}}{\sqrt{\log\frac{1}{1-r_j^2}}} \frac{r_j}{1-r_j z}\bigg|^2\mathrm{d} A(z)=\sum_{j,k=1}^{+\infty}(jk)^{-1} \frac{\log\frac{1}{1-r_jr_k}} {\sqrt{\log\frac{1}{1-r_j^2}}\sqrt{\log\frac{1}{1-r_k^2}}}<+\infty \] if the sequence is sparse enough. We could require for instance that simultaneously the following conditions should hold: \[ \log\frac{1}{1-r_jr_k}\le 2^{-|j-k|} \sqrt{\log\frac{1}{1-r_j^2}}\sqrt{\log\frac{1}{1-r_k^2}} \] and \[ \frac{1}{(1-r_jr_k)^2}\le 2^{-|j-k|}\frac{1}{(1-r_j^2)(1-r_k^2)}. \] By construction, we have \[ f'(z)g(z)=\sum_{j,k=1}^{+\infty}(jk)^{-1}\frac{1-r_j^2}{(1-r_j z)^2} \frac{\log\frac{1}{1-r_k z}}{\sqrt{\log\frac{1}{1-r_k^2}}}, \] so that \[ (1-r_l^2)f'(r_l)g(r_l)=(1-r_l^2)\sum_{j,k=1}^{+\infty}(jk)^{-1}\frac{1-r_j^2}{(1-r_j r_l)^2} \frac{\log\frac{1}{1-r_k r_l}}{\sqrt{\log\frac{1}{1-r_k^2}}} \ge l^{-2}\sqrt{\log\frac{1}{1-r_l^2}} \] which with a sufficiently sparse sequence $\{r_j\}_j$ can be made to tend to infinity. Since both $f$ and $g$ have nonnegative Taylor coefficients, \[ (fg)'(x)=f'(x)g(x)+f(x)g'(x)\ge f'(x)g(x),\qquad 0\le x<1, \] so it would follow that \[ \|fg\|_{\calB}=\sup_{z\in{\mathbb D}}(1-|z|^2)|(fg)'(z)|\ge \sup_l(1-r_l^2)f'(r_l)g(r_l) =+\infty. \] On the other hand, there is a rank $1$ operator ${\mathbf T}$ such that $f(z)g(z)=\oslash{\mathbb Z}symb[{\mathbf T}](z)$, so $fg$ definitely belongs to the mock-Bloch space $\calB^{\mathrm{mock}}({\mathbb D})$. 
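For completeness, we sketch one way to exhibit such a rank one operator (the particular choice below is ours). Put $u:=(\zeta f(\zeta))'$ and $v:=(\zeta g(\zeta))'$; both lie in $A^2({\mathbb D})$ since $f,g$ have finite Dirichlet integral. Define the rank one operator ${\mathbf T}h:=\langle h,\bar u\rangle_{\mathbb D}\,v$ on $L^2({\mathbb D})$. Then, by the averaging property of the Szeg\H{o} kernel,
\[
\oslash{\mathbb Z}symb[{\mathbf T}](z)=\langle{\mathbf T}\bar\mathrm s_z,\mathrm s_z\rangle_{\mathbb D}
=\langle\bar\mathrm s_z,\bar u\rangle_{\mathbb D}\,\langle v,\mathrm s_z\rangle_{\mathbb D}
=\langle u,\mathrm s_z\rangle_{\mathbb D}\,\langle v,\mathrm s_z\rangle_{\mathbb D}
=\bigg(\int_0^1 u(zt)\,\mathrm{d} t\bigg)\bigg(\int_0^1 v(zt)\,\mathrm{d} t\bigg)=f(z)\,g(z),
\]
as needed.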
\mathrm{e}nd{proof} \section{Characterization of Dirichlet symbols of Grunsky operators} \subsection{Grunsky operators} Let $\varphi:{\mathbb D}\to{\mathbb C}$ be a univalent function. In other words, $\varphi$ is a conformal mapping onto a simply connected domain. The associated \mathrm{e}mph{Grunsky operator} ${\boldsymbol\Gamma}_\varphi$ is given by \mathrm{e}qref{eq-Grunskyop1}, and it is well-known that ${\boldsymbol\Gamma}_\varphi$ is a norm contraction on $L^2({\mathbb D})$, and that it maps into the Bergman space $A^2({\mathbb D})$. This contractiveness is usually referred to as the \mathrm{e}mph{Grunsky inequalities}, and in this form it was studied in, e.g., \cite{BarHed}. Without loss of generality, we assume that $\varphi(0)=0$ and $\varphi'(0)=1$. We recall that the Dirichlet symbol associated with ${\boldsymbol\Gamma}_\varphi$ is given by \mathrm{e}qref{eq-WboldG}. \begin{proof}[Proof of Theorem \ref{thm-NLW}] We first show that any symbol $Q(z,w)=\mathcal{Q}[{\boldsymbol\Gamma}_\varphi](z,w)$ for a normalized univalent function $\varphi$ has the properties (a) and (b). Since $\mathcal{Q}[{\boldsymbol\Gamma}_\varphi](z,w)=zw{\mathbb Z}symb[{\boldsymbol\Gamma}_\varphi](z,w)$, it follows that (a) holds. We note that if $\psi(z):=1/\varphi(1/z)$ and if $\xi:=1/z$, $\mathrm{e}ta:=1/w$, then \begin{multline*} Q(z,w)=\mathcal{Q}[{\boldsymbol\Gamma}_\varphi](z,w)=\log\frac{zw(\varphi(z)-\varphi(w))} {(z-w)\varphi(z)\varphi(w)} =\log\frac{\xi^{-1}\mathrm{e}ta^{-1}(\varphi(\xi^{-1})-\varphi(\mathrm{e}ta^{-1}))} {(\xi^{-1}-\mathrm{e}ta^{-1})\varphi(\xi^{-1})\varphi(\mathrm{e}ta^{-1})}= \log\frac{\psi(\xi)-\psi(\mathrm{e}ta)}{\xi-\mathrm{e}ta}. \mathrm{e}nd{multline*} In other words, \[ \psi(\xi)-\psi(\mathrm{e}ta)=(\xi-\mathrm{e}ta)\,\mathrm{e}^{Q(\xi^{-1},\mathrm{e}ta^{-1})}, \] so that \begin{multline} 0=\partial_\xi\partial_\mathrm{e}ta (\psi(\xi)-\psi(\mathrm{e}ta)) =\partial_\xi\partial_\mathrm{e}ta\big\{(\xi-\mathrm{e}ta)\,\mathrm{e}^{Q(\xi^{-1},\mathrm{e}ta^{-1})}\big\} \\ =\bigg\{\xi^{-2}\partial_z Q(\xi^{-1},\mathrm{e}ta^{-1}) -\mathrm{e}ta^{-2}\partial_w Q(\xi^{-1},\mathrm{e}ta^{-1})+(\xi-\mathrm{e}ta)\xi^{-2}\mathrm{e}ta^{-2} \big(\partial_z\partial_w Q(\xi^{-1},\mathrm{e}ta^{-1}) \\ +(\partial_zQ(\xi^{-1},\mathrm{e}ta^{-1})) (\partial_wQ(\xi^{-1},\mathrm{e}ta^{-1}))\big) \bigg\}\,\mathrm{e}^{Q(\xi^{-1},\mathrm{e}ta^{-1})}. \label{eq-WE1} \mathrm{e}nd{multline} Changing back to $(z,w)$-coordinates, we obtain that \begin{equation*} 0=z^2\partial_z Q(z,w) -w^2\partial_w Q(z,w)+(w-z)zw \big(\partial_z\partial_w Q(z,w) +(\partial_zQ(z,w)) (\partial_wQ(z,w))\big), \mathrm{e}nd{equation*} which is the same as \begin{equation*} \frac{w^2\partial_w Q(z,w)-z^2\partial_z Q(z,w)}{(w-z)zw}= \partial_z\partial_w Q(z,w)+(\partial_zQ(z,w))(\partial_wQ(z,w)), \mathrm{e}nd{equation*} that is, property (b). We turn to the reverse implication, to show that a holomorphic function $Q$ in ${\mathbb D}^2$ with the properties (a) and (b) is necessarily of the form $\mathcal{Q} [{\boldsymbol\Gamma}_\varphi]$ for some normalized conformal mapping $\varphi$. In view of the above calculation \mathrm{e}qref{eq-WE1}, condition (b) asserts that \[ \partial_\xi\partial_\mathrm{e}ta\big\{(\xi-\mathrm{e}ta)\,\mathrm{e}^{Q(\xi^{-1},\mathrm{e}ta^{-1})}\big\}= 0 \] which means that locally in ${\mathbb D}_{\rm e}^2$, \[ (\xi-\mathrm{e}ta)\,\mathrm{e}^{Q(\xi^{-1},\mathrm{e}ta^{-1})}=G_1(\xi)+G_2(\mathrm{e}ta), \] where $G_1,G_2$ are holomorphic but with possible logarithmic branching at infinity. 
Letting $\mathrm{e}ta\to\xi$, we find that $G_1(\xi)+G_2(\xi)=0$, so that $G_2(\mathrm{e}ta)=-G_1(\mathrm{e}ta)$. So the above identity becomes \begin{equation} (\xi-\mathrm{e}ta)\,\mathrm{e}^{Q(\xi^{-1},\mathrm{e}ta^{-1})}=G_1(\xi)-G_1(\mathrm{e}ta). \label{eq-G1G2} \mathrm{e}nd{equation} We still need to know that $G_1$ is a globally well-defined function in ${\mathbb D}_{\mathrm{e}}$ (without logarithmic branching). We differentiate both sides with respect to $\xi$: \[ G_1'(\xi)=\partial_{\xi}\big((\xi-\mathrm{e}ta)\,\mathrm{e}^{Q(\xi^{-1},\mathrm{e}ta^{-1})}\big)= \big\{1-\xi^{-2}(\xi-\mathrm{e}ta)\partial_z Q(\xi^{-1},\mathrm{e}ta^{-1})\big\}\, \mathrm{e}^{Q(\xi^{-1},\mathrm{e}ta^{-1})}=\mathrm{e}^{Q(\xi^{-1},\xi^{-1})}, \] where in the last step we plugged in $\mathrm{e}ta=\xi$, which is allowed since the expression is independent of $\mathrm{e}ta$. As $|\xi|\to+\infty$, we have $Q(\xi^{-1},\xi^{-1})=\mathrm{O}(|\xi|^{-2})$, so that $\mathrm{e}^{Q(\xi^{-1},\xi^{-1})}=1+ \mathrm{O}(|\xi|^{-2})$, which rules out a $\xi^{-1}$ term, and hence there is no logarithmic branching. In addition, we see that $G_1'(\infty)=1$. If we put, for some constant $c$, $\psi:=G_1+c$, then by \mathrm{e}qref{eq-G1G2}, \begin{equation*} \mathrm{e}^{Q(\xi^{-1},\mathrm{e}ta^{-1})}=\frac{\psi(\xi)-\psi(\mathrm{e}ta)}{\xi-\mathrm{e}ta}. \mathrm{e}nd{equation*} Since the left-hand side is holomorphic and does not vanish in ${\mathbb D}_{\mathrm{e}}^2$, it follows that $\psi$ is univalent on ${\mathbb D}_{\mathrm{e}}$. But then there must exist a point in the complex plane ${\mathbb C}$ which is not in the image $\psi({\mathbb D}_{\mathrm{e}})$, and by adjusting $c$ we can make sure that $0\notin \psi({\mathbb D}_{\mathrm{e}})$. Then winding things backwards we get $\varphi$ from $\psi$ in the above fashion, and $Q(z,w)$ is seen to be of the form \mathrm{e}qref{eq-WboldG}, as claimed. \mathrm{e}nd{proof} \section{Zachary Chase's construction of a permutation} \subsection{Permutation of bases} We consider a permutation $\pi:{\mathbb Z}_+\to{\mathbb Z}_+$. We use the permutation to define that $\beta_j:=\bar\alpha_{\pi(j)}$, which in turn defines the second Gaussian process $\Psi(z)$. In this case, the formula \mathrm{e}qref{eq-asvar1.01} reduces to \begin{equation} \int_{\mathbb T} |{\mathbb E}\Phi(r\zeta)\Psi(r\zeta)|^2\mathrm{d} s(\zeta) =\sum_{l=2}^{+\infty}r^{2l}\bigg(\sum_{j,k:j+k=l}(jk)^{-\frac12}\delta_{j,\pi(k)}\bigg)^2, \label{eq-asvar1.02} \mathrm{e}nd{equation} where $\delta_{j,k}$ denotes the Kronecker delta, which equals $1$ if $j=k$ and $0$ otherwise. Since the sum of Kronecker deltas is squared, it makes sense to try to concentrate the times they equal $1$ to certain values of $l$. \begin{proof}[Proof of Theorem \ref{thm-1.72}] Let $d\ge 3$ be an integer. We define the permutation $\pi=\pi_d$ in terms of a disjoint partition into intervals ${\mathbb Z}_+=I_1\cup I_2\cup I_3\cup\ldots$, where $I_m$ is an interval on ${\mathbb Z}_+$ which moves toward the right as $m$ increases. On each interval $I_m$ we let $\pi_d$ permute the interval in question. The first interval is $I_1:=\{1,\ldots,d-1\}$, and we put $\pi_d(j):=d-j$ for $j\in I_1$. The second interval is $I_2:=\{d,\ldots,d^2-d\}$, and we put $\pi_d(j):=d^2-j$ for $j\in I_2$. The third interval is $I_3:=\{d^2-d+1,\ldots,d^3-d^2+d-1\}$ and on it we put $\pi_d(j):=d^3-j$. The fourth interval is $I_4:=\{d^3-d^2+d,\ldots,d^4-d^3+d^2-d\}$, and on it we put $\pi_d(j):=d^4-j$. 
The general formula is $\pi_d(j):=d^m-j$ on $I_m$, but the endpoints of interval $I_m$ depend on whether $m$ is even or odd. If $m$ is odd, then $m=2n-1$ for some $n=1,2,3,\ldots$, and \[ I_m=I_{2n-1}:=\bigg\{\frac{d^{2n-1}+1}{d+1},\ldots,\frac{d^{2n}-1}{d+1}\bigg\} \] while if $m$ is even, then $m=2n$ for some $n=1,2,3,\ldots$, and \[ I_m=I_{2n}:=\bigg\{\frac{d^{2n}+d}{d+1},\ldots,\frac{d^{2n+1}-d}{d+1}\bigg\}. \] The permutation $\pi_d$ is now well-defined, and we see that for $k\in I_m$, $\delta_{j,\pi_d(k)}=\delta_{j,d^m-k}=0$ unless $j+k=d^m$. This means that only the parameter values $l$ that are powers of $d$ contribute to the sum \mathrm{e}qref{eq-asvar1.02}. When $l=d^m$, we find that \[ \sum_{j,k:j+k=d^m}(jk)^{-\frac12}\delta_{j,\pi_d(k)}=\sum_{j\in I_m} j^{-\frac12}(d^m-j)^{-\frac12}=\frac{1}{d^m}\sum_{j\in I_m} \bigg(\frac{j}{d^m}\bigg)^{-\frac12}\bigg(1-\frac{j}{d^m}\bigg)^{-\frac12}= \int_{\frac{1}{d+1}}^{1-\frac{1}{d+1}}t^{-\frac12}(1-t)^{-\frac12}\mathrm{d} t+\mathrm{O}(d^{-m+1}), \] by thinking of the sum as the Riemann sum of the integral with step length $d^{-m}$. The integral is the incomplete Beta function, since by symmetry \[ \int_{\frac{1}{d+1}}^{1-\frac{1}{d+1}}t^{-\frac12}(1-t)^{-\frac12}\mathrm{d} t =\pi-2\int_{0}^{\frac{1}{d+1}}t^{-\frac12}(1-t)^{-\frac12}\mathrm{d} t= \pi-4(d+1)^{-\frac12}\, {}_2F_1\big(\tfrac12,\tfrac12;\tfrac32;\tfrac1{d+1}\big), \] where the last equality relates it to the standard hypergeometric function. As it is well-known that \[ \lim_{r\to1^-}\frac{1}{\log\frac{1}{1-r^2}} \sum_{m=1}^{+\infty}r^{2d^m}=\frac{1}{\log d}, \] it follows from the obtained asymptotics that \[ \lim_{r\to1^-} \frac{1}{\log\frac{1}{1-r^2}} \sum_{m=1}^{+\infty}r^{2d^m} \bigg(\sum_{j,k:j+k=d^m}(jk)^{-\frac12}\delta_{j,\pi_d(k)}\bigg)^2 =\frac{1}{\log d}\, \Big\{\pi-4(d+1)^{-\frac12}{}_2F_1 \big(\tfrac12,\tfrac12;\tfrac32;\tfrac1{d+1}\big)\Big\}^2. \] Finally, choosing $d=29$ gives us the value $\approx1.7208$. This is the asymptotic variance of the correlation function $f(z)={\mathbb E}\Phi(z)\Psi(z)$ with coefficients $\beta_j=\bar\alpha_{\pi_d(j)}$. \mathrm{e}nd{proof} \begin{thebibliography}{9} \bibitem{AbDoub} Abakumov, E., Doubtsov, E., \mathrm{e}mph{Moduli of holomorphic functions and logarithmically convex radial weights}. Bull. Lond. Math. Soc. \textbf{47} (2015), no. 3, 519-532. \bibitem{ARSW} Arcozzi, N., Rochberg, R., Sawyer, E., Wick, B. D., \mathrm{e}mph{Function spaces related to the Dirichlet space}. J. Lond. Math. Soc. (2) \textbf{83} (2011), no. 1, 1-18. \bibitem{AIPP} Astala, K., Ivrii, O., Per\"al\"a, A., Prause, I., \mathrm{e}mph{Asymptotic variance of the Beurling transform}. Geom. Funct. Anal. \textbf{25} (2015), no. 6, 1647-1687. \bibitem{BarHed} Baranov, A., Hedenmalm, H., \mathrm{e}mph{Boundary properties of Green functions in the plane}. Duke Math. J. \textbf{145} (2008), no. 1, 1-24. \bibitem{BoLyuMalTho} Borichev, A., Lyubarski\u\i{}, Yu., Malinnikova, E., Thomas, P., \mathrm{e}mph{Radial growth of functions in the Korenblum space}. St. Petersburg Math. J. \textbf{21} (2010), no. 6, 877-891. \bibitem{Chase} Chase, Z., \mathrm{e}mph{Oral communication}. \bibitem{DiacEv} Diaconis, P., Evans, S. N., \mathrm{e}mph{Linear functionals of eigenvalues of random matrices}. Trans. Amer. Math. Soc. \textbf{353} (2001), no. 7, 2615-2633. \bibitem{Hed1} Hedenmalm, H., \mathrm{e}mph{Bloch functions and asymptotic tail variance}. Adv. Math. \textbf{313} (2017), 947-990. \bibitem{HedNiem} Hedenmalm, H., Nieminen, P. 
J., \emph{The Gaussian free field and Hadamard's variational formula}. Probab. Theory Related Fields \textbf{159} (2014), no. 1-2, 61-73. \bibitem{HedShim1} Hedenmalm, H., Shimorin, S., \emph{Weighted Bergman spaces and the integral means spectrum of conformal mappings}. Duke Math. J. \textbf{127} (2005), no. 2, 341-393. \bibitem{HedShim2} Hedenmalm, H., Shimorin, S., \emph{On the universal integral means spectrum of conformal mappings near the origin}. Proc. Amer. Math. Soc. \textbf{135} (2007), no. 7, 2249-2255. \bibitem{HedShim3} Hedenmalm, H., Shimorin, S., \emph{A new type of operator symbols}. Manuscript, 2015. \bibitem{HKPV} Hough, J. B., Krishnapur, M., Peres, Y., Vir\'ag, B., \emph{Zeros of Gaussian analytic functions and determinantal point processes}. University Lecture Series, \textbf{51}. American Mathematical Society, Providence, RI, 2009. \bibitem{IvriiQC} Ivrii, O., \emph{Quasicircles of dimension $1+k^2$ do not exist}. arXiv:1511.07240 \bibitem{Janson} Janson, S., \emph{Gaussian Hilbert spaces}. Cambridge Tracts in Mathematics, \textbf{129}. Cambridge University Press, Cambridge, 1997. \bibitem{McM1} McMullen, C. T., \emph{Thermodynamics, dimension and the Weil-Petersson metric}. Invent. Math. \textbf{173} (2008), no. 2, 365-425. \bibitem{NSV} Nazarov, F., Sodin, M., Volberg, A., \emph{Transportation to random zeroes by the gradient flow}. Geom. Funct. Anal. \textbf{17} (2007), no. 3, 887-935. \bibitem{PerVir} Peres, Y., Vir\'ag, B., \emph{Zeros of the i.i.d. Gaussian power series: a conformally invariant determinantal process}. Acta Math. \textbf{194} (2005), no. 1, 1-35. \bibitem{RubTim} Rubel, L. A., Timoney, R. M., \emph{An extremal property of the Bloch space}. Proc. Amer. Math. Soc. \textbf{75} (1979), no. 1, 45-49. \bibitem{Sheffield} Sheffield, S., \emph{Gaussian free fields for mathematicians}. Probab. Theory Related Fields \textbf{139} (2007), no. 3-4, 521-541. \bibitem{Sodin} Sodin, M., \emph{Zeros of Gaussian analytic functions}. Math. Res. Lett. \textbf{7} (2000), no. 4, 371-381. \end{thebibliography} \end{document}
\begin{document} \begin{abstract} A \emph{double star} $S(n,m)$ is the graph obtained by joining the center of a star with $n$ leaves to a center of a star with $m$ leaves by an edge. Let $r(S(n,m))$ denote the Ramsey number of the double star $S(n,m)$. In $1979$ Grossman, Harary and Klawe have shown that $$r(S(n,m)) = \max\{n+2m+2,2n+2\}$$ for $3 \leq m \leq n\leq \sqrt{2}m$ and $3m \leq n$. They conjectured that equality holds for all $m,n \geq 3$. Using a flag algebra computation, we extend their result showing that $r(S(n,m))\leq n+~2m~+~2$ for $m \leq n \leq 1.699 m$. On the other hand, we show that the conjecture fails for $\frac{7}{4}m~+~o(m)\leq n \leq \frac{105}{41}m-o(m)$. Our examples additionally give a negative answer to a question of Erd\H{o}s, Faudree, Rousseau and Schelp from $1982$. \end{abstract} \title{Asymptotics of Ramsey numbers of double stars} \section{Introduction} \label{intro} The Ramsey number $r(G)$ of a graph $G$ is the least integer $N$ such that any 2-coloring of edges of $K_N$ contains a monochromatic copy of $G$. The difficult problem of estimating Ramsey numbers of various graph families has attracted considerable attention since its introduction in the paper of Erd\H{o}s and Szekeres~\cite{ErdSze35}. See~\cite{CFSSurvey,Radsurvey} for recent surveys. Computing Ramsey numbers exactly appears to be very difficult in general, even for trees. However, determining the Ramsey numbers of stars is fairly straightforward. Harary~\cite{HarStars} has shown that $$r(K_{1,n})=\begin{cases} 2n, &\mathrm{if}\; n\; \mathrm{is \; odd,} \\ 2n-1, &\mathrm{if}\; n\; \mathrm{is \; even.} \end{cases}$$ A natural direction in extending the above result is to consider double stars. A {\em double star} $S(n,m)$, where $n \geq m \geq 0$, is the graph consisting of the union of two stars, $K_{1,n}$ and $K_{1,m}$, and an edge called the {\em bridge}, joining the centers of these two stars. Grossman, Harary and Klawe have established the following bounds on $r(S(n,m))$. \begin{thm}[Grossman, Harary and Klawe~\cite{grossman}]\label{t:GHK} $$r(S(n,m)) = \begin{cases} \max(2n+1, n+2m+2) &\mathrm{if}\; n\; \mathrm{ is \; odd\; and\:} m\leq 2, \\ \max(2n+2, n+2m+2) &\mathrm{if}\; n\; \mathrm{ is \: even\: or\:} m\geq 3, \mathrm{and\:} n \leq \sqrt{2}m \mathrm{\:or\:} n \geq 3m, \end{cases} $$ \end{thm} They further conjectured that the restriction $n \leq \sqrt{2}m$ or $n \geq 3m$ is not necessary. \begin{conj}[Grossman, Harary and Klawe~\cite{grossman}]\label{c:GHK} $r(S(n,m)) \leq \max(2n+2, n+2m+2) $ for all $n \geq m \geq 0$. \end{conj} Our first result shows that the above conjecture is false for a wide range of values of $m$ and $n$. \begin{thm}\label{t:lower} For all $n \geq m \geq 0$, \begin{equation}\label{e:C5} r(S(n,m)) \geq \frac{5}{6}m+\frac{5}{3}n + o(m). \end{equation} Further, for $n \geq 2m$, \begin{equation}\label{e:LK7} r(S(n,m)) \geq \frac{21}{23}m+ \frac{189}{115}n + o(m). \end{equation} \end{thm} Note that the bounds in Theorem~\ref{t:lower} imply that Conjecture~\ref{c:GHK} fails for $$\frac{7}{4}m+o(m)~\leq~n~\leq~\frac{105}{41}m-o(m).$$ Theorem~\ref{t:lower} also provides a negative answer to a related more general question about Ramsey numbers of trees, which we now discuss. Let $T$ be a tree, and let $t_1$ and $t_2$, with $t_1 \leq t_2$, be the sizes of the color classes in the 2-coloring of $T$. Then $r(T) \geq 2t_1+t_2 - 1$. 
Indeed, one can color the edges of $K_{2t_1+t_2 - 2}$ in two colors so that the edges of the first color induce the complete bipartite graph $K_{t_1+t_2-1,t_1-1}$. Similarly, we have $r(T) \geq 2t_2 - 1$ by considering a 2-coloring of the edges of $K_{2t_{2}-2}$ with the first color inducing the complete bipartite graph $K_{t_2-1,t_2-1}$. Let $r_B(T) :=\max(2t_1+t_2 - 1,2t_2-1)$. Burr~\cite{Burr74} conjectured that $r(T)=r_B(T)$ for every tree $T$. Grossman, Harary and Klawe~\cite{grossman} disproved Burr's conjecture, by showing that the Ramsey number of some double stars is larger than $r_B(T)$ by one. (See Theorem~\ref{t:GHK}.) They asked whether the difference $r(T)-r_B(T)$ can be arbitrarily large. Haxell, {\L}uczak and Tingley proved that Burr's conjecture is asymptotically true for trees with relatively small maximum degree. \begin{thm}[Haxell, {\L}uczak and Tingley~\cite{haxell}]\label{thm:haxell} For every $\eta >0$ there exists $\delta>0$ satisfying the following. If $T$ is a tree with maximum degree at most $\delta|V(T)|$ then $r(T) \leq (1 + \eta)r_B(T)$. \end{thm} Finally, Erd\H{o}s, Faudree, Rousseau and Schelp~\cite{brooms} asked whether $r(T)=r_B(T)$ for trees $T$ with colors classes of sizes $|V(T)|/3$ and $2|V(T)|/3$. (Note that in the case the two quantities in the definition of $r_B(T)$ are equal, and that Theorem~\ref{t:GHK} does not cover this case for double star.) Theorem~\ref{t:lower} gives a negative answer to this question and to the above question of Grossman, Harary and Klawe by showing that $r(T)$ and $r_B(T)$ can differ substantially even for trees with colors classes of sizes $k$ and $2k$. Indeed, if $T=S(2k-1,k-1)$ we have $r_B(T)=4k-1$, but $r(T) \geq 4.2k-o(k)$ by (\ref{e:LK7}). Let us now return to upper bounds. Using Razborov's flag algebra method, we extend the results of Theorem~\ref{t:GHK} showing the following. \begin{thm} \label{mainthm} \[r(S(n,m)) \leq n+2m+2\] for $m \leq n \leq 1.699 (m+1)$. \end{thm} The paper is structured as follows. In Section~\ref{s:prelim} we show that the problem of finding $r(S(n,m))$ is essentially equivalent to the problem of characterizing the set of pairs $(\delta,\eta)$ such that there exists graph $G$ with minimum degree at least $\delta|V(G)|$, in which every two vertices have at least $\eta|V(G)|$ common non-neighbors. (See Theorem~\ref{t:graph} for the precise statement.) In Section~\ref{s:valid} we analyze this set of pairs. In Section~\ref{s:back} we continue the discussion of Ramsey numbers of double stars and prove the consequences of the results of Section~\ref{s:prelim} in this context. In particular, we prove Theorems~\ref{t:lower} and~\ref{mainthm} and establish general asymptotic upper and lower bounds on Ramsey numbers of double stars, which differ by less than $2\%$. (See Theorem~\ref{t:bounds}). The paper uses standard graph theoretic notation. In particular, $N(v)$ denotes the neighborhood of a vertex $v$ in a graph $G$, when the graph is understood from context. \section{From Ramsey numbers to degree conditions}\label{s:prelim} In this section we prove preliminary results which allow us to break the symmetry between colors and replace the original Ramsey-theoretic problem by an equivalent problem with Tur\'{a}n-type flavor. Let $(B,R)$ be a partition of the edges of $K_p$ into two color classes $B$ and $R$. For brevity we will say that $(B,R)$ is \emph{$(n,m)$-free} if $K_p$ contains no $S(n,m)$ with all the edges belonging to the same part of $(B,R)$. 
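To make the definition concrete, the following small Python sketch (ours, for illustration only; it plays no role in the proofs) tests $(n,m)$-freeness of an explicitly given colouring of $K_p$ by brute force. It uses the elementary observation that a monochromatic $S(n,m)$ with bridge $uv$ in colour $C$ exists exactly when one endpoint has at least $n$ $C$-neighbours other than the other endpoint, the other endpoint has at least $m$ such $C$-neighbours, and these neighbours together contain at least $n+m$ distinct vertices; the two leaf sets can then be chosen greedily, as in the proof of Lemma~\ref{l:degrees} below.
\begin{verbatim}
from itertools import combinations

def has_mono_double_star(p, colour, n, m):
    """colour(u, v) returns the colour ("B" or "R") of the edge uv of K_p."""
    for u, v in combinations(range(p), 2):
        C = colour(u, v)
        Nu = {w for w in range(p) if w not in (u, v) and colour(u, w) == C}
        Nv = {w for w in range(p) if w not in (u, v) and colour(v, w) == C}
        # bridge uv, with n leaves at one endpoint and m leaves at the other
        for X, Y in ((Nu, Nv), (Nv, Nu)):
            if len(X) >= n and len(Y) >= m and len(X | Y) >= n + m:
                return True
    return False

def is_nm_free(p, colour, n, m):
    return not has_mono_double_star(p, colour, n, m)

# Example: on K_4, colour the triangle {1,2,3} blue and the star at 0 red.
# Neither colour class contains S(1,1) (a path on four vertices),
# consistent with r(S(1,1)) = 5.
triangle = {frozenset(e) for e in [(1, 2), (1, 3), (2, 3)]}
colour = lambda u, v: "B" if frozenset((u, v)) in triangle else "R"
print(is_nm_free(4, colour, 1, 1))   # True
\end{verbatim}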
For $v \in [p]$ and $C \in \{B,R\}$, let $N_C(v)$ denote the set of vertices joined to $v$ by edges in $C$, and let $\deg_C(v)=|N_C(v)|$. The first lemma that we need is due to Grossman, Harary and Klawe, but we include a proof for completeness. \begin{lem}[{\cite[Lemma 3.4]{grossman}}]\label{l:grossman} Let $p \geq n+2m+2$, and let $(B,R)$ be an $(n,m)$-free partition of the edges of $K_p$. Then $\deg_C(v) \leq n+m$ for every $v \in [p]$ and $C \in \{B,R\}$. \end{lem} \begin{proof} Choose $v \in [p]$ and $C \in \{B,R\}$ such that $\deg_C(v)$ is maximum. Suppose for a contradiction that $\deg_C(v) \geq n+m+1$. We assume without loss of generality that $C=B$. If $\deg_B(u) \geq m+1$ for some $u \in N_B(v)$ then $K_p$ contains a double star $S(n,m)$ with edges in $B$ and bridge $uv$. Thus $\deg_R(u) \geq p - m-1 \geq m+n+1$ for every $u \in N_B(v)$. It follows that there exist $u,w \in N_B(v)$ such that $uw \in R$. In this case $(B,R)$ contains a double star $S(n,m)$ with edges in $R$ and the bridge $uw$, a contradiction. \end{proof} \begin{lem}\label{l:degrees} Let $p \geq n+2m+2$, and let $(B,R)$ be a partition of the edges of $K_p$. Then $(B,R)$ is $(n,m)$-free if and only if for every $C \in \{B,R\}$ and every $uv \in C$ either \begin{equation}\label{e:union} |N_C(u) \cup N_C(v)| \leq n+m+1 \end{equation} or \begin{equation}\label{e:degree} \deg_C(u) \leq n \mathrm{\ and \ } \deg_C(v) \leq n. \end{equation} \end{lem} \begin{proof} Clearly, if $uv \in C$ satisfies either (\ref{e:union}) or (\ref{e:degree}) then $uv$ is not a bridge of a monochromatic $S(n,m)$. Conversely, suppose that $uv \in C$ for some $C \in \{B,R\}$ violates both (\ref{e:union}) and (\ref{e:degree}). In particular, we may assume that $\deg_C(u) \geq n+1$. By Lemma~\ref{l:grossman}, we may further assume that $\deg_C(v) \geq p-n-m-1 \geq m+1$. Thus $(B,R)$ contains a double star $S(n,m)$ with edges in $C$ and a bridge $uv$. (It can be constructed by first choosing $n$ neighbors of $u$ from $N_C(u) \setminus \{v\}$ which will serve as the leaves of $S(n,m)$ adjacent to $u$. We choose these neighbors outside of $N_C(v)$ whenever possible. Then at least $m$ elements of $N_C(v)$ will remain, and can serve as the leaves of $S(n,m)$ adjacent to $v$.) \end{proof} The next key lemma will allow us to break the symmetry between colors and replace the original Ramsey-theoretic problem by an equivalent problem with Tur\'{a}n-type flavor. \begin{lem}\label{l:uniquecolour} Let $p \geq \max(2n+2,n+2m+2)$, and let $(B,R)$ be an $(n,m)$-free partition of the edges of $K_p$. Then there exists $C \in \{B,R\}$ such that $\deg_C(v) \leq n$ for all $v \in [p]$. \end{lem} \begin{proof} Suppose for a contradiction that there exists $v_1 \in [p]$ such that $\deg_B(v_1) \geq n+1$, and $v_2 \in [p]$ such that $\deg_R(v_2) \geq n+1$. Then, as $p \geq 2n+2$, there exists a partition $(V_B,V_R)$ of $[p]$ such that $\deg_B(v) \geq n+1$ for every $v \in V_B$, $\deg_R(v) \geq n+1$ for every $v \in V_R$, and $V_B,V_R \neq \emptyset.$ As $(B,R)$ is $(n,m)$-free it follows from Lemma~\ref{l:degrees} that \begin{equation}\label{e:triangle1} |N_C(u) \cap N_C(v)| \geq p-n-m-1 \mathrm{\ for\ all\ } u \in V_B,v\in V_R, C \in \{B,R\} \mathrm{\ such\ that\ } uv \not \in C. \end{equation} Let $b:[p]^2 \to \{0,1\}$ be the characteristic function of $B$, that is $b(uv)=1$ if and only if $\{u,v\} \in B$. Define $r:[p]^2 \to \{0,1\}$ analogously.
Then (\ref{e:triangle1}) can be rewritten as \begin{equation}\label{e:triangle2} \sum_{w \in [p]}(b(uv)r(uw)r(vw)+ r(uv)b(uw)b(vw)) \geq p-n-m-1 \end{equation} for all $ u \in V_B,v\in V_R$. Summing (\ref{e:triangle2}) over all such pairs $u$ and $v$ we obtain \begin{align}\label{e:sum1} &\sum_{(u,w) \in V_B^2, v \in V_R}(b(uv)r(uw)r(vw)+ r(uv)b(uw)b(vw)) \notag \\&+ \sum_{u \in V_B, (v,w) \in V_R^2}(b(uv)r(uw)r(vw)+ r(uv)b(uw)b(vw))\notag\\ &\geq (p-n-m-1)|V_B||V_R| \end{align} On the other hand for every $v \in V_R$, \begin{equation}\label{e:sum2} \sum_{(u,w) \in V_B^2}(b(uv)r(uw)r(vw)+ r(uv)b(uw)b(vw)) = |N_B(v) \cap V_B||N_R(v) \cap V_B| \leq \frac{1}{4}|V_B|^2 \end{equation} Similarly for every $u \in V_B$, \begin{equation}\label{e:sum3} \sum_{(v,w) \in V_R^2}(b(uv)r(uw)r(vw)+ r(uv)b(uw)b(vw)) \leq \frac{1}{4}|V_R|^2 \end{equation} Thus \begin{align}\label{e:sum4} &\sum_{(u,w) \in V_B^2, v \in V_R}(b(uv)r(uw)r(vw)+ r(uv)b(uw)b(vw)) \notag \\&+ \sum_{u \in V_B, (v,w) \in V_R^2}(b(uv)r(uw)r(vw)+ r(uv)b(uw)b(vw))\notag\\ &\leq \frac{1}{4}(|V_R||V_B|^2 + |V_B||V_R|^2) \end{align} Combining (\ref{e:sum1}) and (\ref{e:sum4}) we obtain \begin{equation}\label{e:conclusion1} p-n-m-1 \leq \frac{1}{4} (|V_R|+|V_B|)=\frac{p}{4}. \end{equation} Inequality (\ref{e:conclusion1}) can be rewritten as $3p \leq 4m+4n+4$. However, $$3p \geq 2(n+2m+2)+(2n+2)=4m+4n+6,$$ implying the desired contradiction. \end{proof} Lemma~\ref{l:uniquecolour} readily implies the following main result of this section. \begin{thm}\label{t:graph} Let $n \geq m \geq 0$ and $p \geq \max(2n+2,n+2m+2)$ be integers. Then the following are equivalent \begin{enumerate} \item[(i)] $p < r(S(n,m))$, \item[(ii)] there exists a graph $G$ with $|V(G)|=p$ such that $\deg(v) \geq p-n-1$ for every $v \in V(G)$ and $|N(v) \cup N(u)| \leq n+m+1$ for all $uv \in E(G)$. \end{enumerate} \end{thm} \begin{proof} {\bf (i) $\Rightarrow$ (ii).} Let $(B,R)$ be an $(n,m)$-free partition of the edges of $K_p$. By Lemma~\ref{l:uniquecolour}, we assume without loss of generality that $\deg_R(v) \leq n$ for every $v \in [p]$, or equivalently $\deg_B(v) \geq p-n-1 \geq n+1$. Let $G$ be the graph with $V(G)=[p]$ and $E(G)=B$. By Lemma~\ref{l:degrees} $|N(v) \cup N(u)| \leq n+m+1$ for all $uv \in E(G)$. Thus $G$ satisfies (ii). {\bf (ii) $\Rightarrow$ (i).} Let $(B,R)$ be a partition of the edges of the complete graph with the vertex set $V(G)$ such that $B=E(G)$. Then neither $B$ nor $R$ contains the edge set of a double star $S(n,m)$ by Lemma~\ref{l:degrees}. Thus $p < r(S(n,m))$. \end{proof} \section{Valid points}\label{s:valid} By Theorem~\ref{t:graph}, the function $r(S(n,m))$ is completely determined by the answer to the following question: For which triples $(p,d,s)$ does there exist a graph $G$ with $|V(G)|=p$ such that $\deg(v) \geq d$ for every $v \in V(G)$ and $|N(v) \cup N(u)| \leq s$ for all $uv \in E(G)$? We will be primarily interested in the asymptotic behavior of $r(S(n,m))$, and thus rather than answering the (likely very difficult) question above we analyze the following setting. Given $0 \leq \delta,\eta \leq 1$ we say that a graph $G$ with $|V(G)|>1$ is a \emph{$(\delta,\eta)$-graph} if \begin{itemize} \item $\deg(v)+1 \geq \delta|V(G)|$ for every $v \in V(G)$, and \item $|N(v) \cup N(u)| \leq (1 -\eta)|V(G)|$ for all $uv \in E(G)$. 
\end{itemize} We say that $(\delta,\eta) \in [0,1]^2$ is \emph{directly valid} if there exists a $(\delta,\eta)$-graph, and we say that $(\delta,\eta) \in [0,1]^2$ is \emph{valid} if it belongs to the closure of the set of directly valid points. Let $\mc{V} \subseteq [0,1]^2$ denote the set of valid points. Note, in particular, that if $0 \leq x_2 \leq x_1$, $0 \leq y_2 \leq y_1$ and $(x_1,y_1) \in \mc{V}$ then $(x_2,y_2) \in \mc{V}$. Finally, a point $(\delta,\eta) \in [0,1]^2$ is \emph{invalid} if it is not valid. In this section we approximate the set of valid points. \begin{lem}\label{l:sparsify} For $n,\delta,\eta \geq 0$, let $G$ be a $(\delta n)$-regular graph with $|V(G)|=n$ such that $|N(v) \cup N(u)| \leq (1 -\eta)n$ for all $uv \in E(G)$. Then $$\left(\frac{1}{n}+p\delta,1-\frac{2}{n} -2\left(\delta-\frac{1}{n}\right) p+(2\delta+\eta-1)p^2 \right) \in \mc{V}$$ for every $p \in [0,1]$. \end{lem} \begin{proof} We will construct a ``random sparsified blow-up'' of $G$ as follows. Let $k$ be an integer, let $U$ be a set with $|U|=kn$, and let $\phi: U \to V(G)$ be a map such that $|\phi^{-1}(v)|=k$ for every $v \in V(G)$. Let $G'$ be the random graph with $V(G')=U$ constructed as follows. Let $uv \in E(G')$ if $\phi(u)=\phi(v)$, let $uv \not \in E(G')$ if $\phi(u) \neq \phi(v)$ and $\phi(u)\phi(v) \not \in E(G)$, and finally let $uv$ be an edge of $G'$ with probability $p$ (independently for each edge) if $\phi(u)\phi(v) \in E(G)$. (It is natural to think of $G'$ as a graph obtained from $G$ by replacing every vertex by a clique of size $k$ and every edge by a random bipartite graph with density $p$.) We have almost surely $\deg(v) \geq (1+p\delta n)k -o(k)$ for each $v\in V(G')$. Furthermore, let $uv \in E(G')$ be such that $\phi(v) \neq \phi(u)$, and let $\eta'=|N(\phi(v)) \cap N(\phi(u))|/n$, then $2\delta-\eta' \leq 1-\eta$, and almost surely \begin{align*} nk&-|N(v) \cup N(u)| \geq kn(1-2\delta+\eta' + 2(\delta-\eta'-1/n)(1-p)+\eta'(1-p)^2)+o(k) \\ &=kn(1-2/n -2(\delta-1/n)p+\eta'p^2) \geq kn(1-2/n -2(\delta-1/n)p+(2\delta+\eta-1)p^2) \end{align*} Thus $G'$ is almost surely a $(1/n+p\delta-o(1), 1-2/n -2(\delta-1/n)p+(2\delta+\eta-1)p^2 -o(1))$-graph. It follows that $(1/n+p\delta,1-2/n -2(\delta-1/n) p+(2\delta+\eta-1)p^2) \in \mc{V}$. \end{proof} \begin{cor}\label{c:valid} For every $p \in [0,1]$, $$\left(\frac{1+2p}{5}, \frac{3-2p}{5} \right),\left(\frac{1+10p}{21}, \frac{19-18p+5p^2}{21}\right) \in \mc{V}.$$ \end{cor} \begin{proof} We apply Lemma~\ref{l:sparsify} to the cycle of length five ($n=5$, $\delta = 2/5$, $\eta=1/5$), and the line graph of the complete graph on seven vertices ($n=21$, $\delta = 10/21$, $\eta=6/21$), respectively. \end{proof} We use Corollary~\ref{c:valid} to approximate $\mc{V}$ from below. Approximating $\mc{V}$ from above requires the use of flag algebras. \begin{table}[h] \centering \begin{tabular}{|l| c | c |} \hline $i$ &$\delta^*_i$ & $\eta^*_i$ \\ \hline 1&0.505&0.3164\\ \hline 2&0.510&0.3080\\ \hline 3&0.515&0.3011\\ \hline 4&0.520&0.2944\\ \hline 5&0.525&0.2883\\ \hline 6&0.530&0.2823\\ \hline 7&0.535&0.2766 \\ \hline 8&0.540&0.2710\\ \hline 9&0.5406 & 0.2703 \\ \hline \end{tabular} \caption{Invalid pairs $(\delta^*_i,\eta^*_i)$.}\label{tbl:invalid} \end{table} \begin{thm}\label{t:invalid} The pairs $(\delta^*_i,\eta^*_i)$ for $1 \leq i \leq 9$ given in Table~\ref{tbl:invalid} are invalid.
\end{thm} \begin{proof} The proof is computer-generated and consists of a flag algebra computation carried out in Flagmatic \cite{sliacan}. It is accomplished by executing the following script, which produces certificates of infeasibility that can be found at \url{http://www.math.mcgill.ca/sun/double-star.html}.
\begin{verbatim}
from flagmatic.all import *
p = GraphProblem(7, density=[("3:121323",1),("3:",1)], mode="optimization")
p.add_assumption("1:",[("2:12(1)",1)],$\delta^*_i$)
p.add_assumption("2:12",[("3:12(2)",1)],$\eta^*_i$)
p.solve_sdp(show_output=True, solver="csdp")
\end{verbatim}
As the usage of flag algebra computations to obtain similar bounds has become standard in the area in recent years (see the survey~\cite{RazSurvey}), and the method is described in great detail in a number of papers (e.g.~\cite{DHMNSCliques,HKNFlags,KeevashSurvey,PVCliques}), we avoid extensive discussion of the flag algebra setting. Essentially, nonexistence of $(\delta^*_i-\varepsilon,\eta^*_i-\varepsilon)$-graphs for some positive $\varepsilon$ is proved by exhibiting a system of inequalities, involving homomorphism densities of seven-vertex graphs, which has to hold in every $(\delta^*_i-\varepsilon,\eta^*_i-\varepsilon)$-graph, but which has no solutions. \end{proof} As we will see in Section~\ref{s:back}, for the purposes of investigating Ramsey numbers we are primarily interested in the restriction of $\mc{V}$ to the region $[0.5,0.545] \times [0.265,0.32]$. The sets of points in this region, which are valid by Corollary~\ref{c:valid} or invalid by Theorem~\ref{t:invalid}, are shown in Figure~\ref{f:valid}. \begin{figure} \caption{The restrictions to $[0.5,0.545]\times[0.265,0.32]$ of the sets of points that are valid by Corollary~\ref{c:valid} and invalid by Theorem~\ref{t:invalid}.} \label{f:valid} \end{figure} Finally, in addition to Theorem~\ref{t:invalid}, we will use the following result, which can be extracted from the proof of~\cite[Theorem 3.3]{grossman}. We include the proof for completeness. \begin{thm}\label{t:invalid2} For every $\varepsilon>0$ the pair $(1/2+\varepsilon,1/3+\varepsilon)$ is invalid. \end{thm} \begin{proof} It suffices to show that, if $G$ is a graph with $|V(G)|=n$ such that $\deg(v) > n/2$ for every $v \in V(G)$, then there exists an edge $uv \in E(G)$ such that $|N(v) \cup N(u)| > 2n/3$. Suppose that no such edge exists. For every pair of (not necessarily distinct) vertices $u,w \in V(G)$, there exists $v \in V(G)$ such that $uv, wv \in E(G)$. It follows that \begin{align}\label{e:nonadj} |N(u) &\cap N(w)| \geq |N(u) \cap N(w) \cap N(v)| \geq |N(u) \cap N(v)| + |N(w) \cap N(v)| -|N(v)|\notag \\ &= (|N(u)|+|N(v)| - |N(u) \cup N(v)| ) + (|N(w)|+|N(v)| - |N(w) \cup N(v)| ) -|N(v)| \notag\\ &\geq |N(v)|+|N(u)|+|N(w)| -2\cdot \frac{2}{3}n > \frac{n}{6}. \end{align} Therefore, \begin{align*} \frac{n^3}{4} &\geq \sum_{u \in V(G)}\deg(u)(n -\deg(u)) \\ &= \sum_{u\in V(G), v\in N(u), w\not\in N(u)} 1 \\&= \sum_{\substack{(u,w) \in V(G)^2 \\ uw\not\in E(G)}} | N(u)\cap N(w)| + \sum_{\substack{(u,v) \in V(G)^2 \\ uv\in E(G)}} \big(n - |N(u)\cap N(v)|\big)\\ &\stackrel{(\ref{e:nonadj})}{>} \frac{n}{6}(n^2-2|E(G)|)+\frac{n}{3}\cdot 2|E(G)| \\ &= \frac{n^3}{6}+ \frac{n}{3}|E(G)| \geq \frac{n^3}{4}, \end{align*} a contradiction, as desired. \end{proof} \section{Back to Ramsey numbers}\label{s:back} In this section we derive bounds on Ramsey numbers of double stars from the information on the set of valid points obtained in Section~\ref{s:valid}. In particular we prove Theorems~\ref{t:lower} and~\ref{mainthm}.
Our first lemma follows immediately from the definition of a directly valid point and Theorem~\ref{t:graph}. \begin{lem}\label{l:upper} Let $n \geq m \geq 0$ be integers, and let $p=r(S(n,m))-1$. If $p \geq \max(2n+2,n+2m+2)$ then $$\left(1 - \frac{n}{p},1 - \frac{n+m+1}{p}\right)$$ is directly valid. \end{lem} The next corollary is in turn a direct consequence of Lemma~\ref{l:upper}. \begin{cor}\label{c:upper} Let $n \geq m \geq 0$ be integers, and let $(\delta, \eta)$ be an invalid point then $$r(S(n,m)) \leq \max \left(2n+2,n+2m+2,\left\lceil\frac{n}{1-\delta}\right\rceil,\left\lceil\frac{n+m+1}{1 - \eta}\right\rceil\right).$$ \end{cor} We are now ready to derive Theorem~\ref{mainthm}. \begin{proof}[Proof of Theorem~\ref{mainthm}.] By Theorem~\ref{t:invalid} the point $(0.5406,0.2703)$ is invalid. For $n \leq 1.699 (m+1)$ we have $$ \frac{n}{1 - 0.5406} \leq n+2m+2\qquad \mathrm{and} \qquad \frac{n+m+1}{1 - 0.2703} \leq n+2m+2.$$ Thus $r(S(n,m)) \leq n+2m+2$ by Corollary~\ref{c:upper}. \end{proof} Next we turn to asymptotic bound on $r(S(n,m))$. For $x \in [1,+\infty)$ define \[\hat{r}(x) := \lim_{\substack{n,m \to \infty \\n/m \to x}} \frac{r(S(n,m))}{m}. \] The next theorem expresses $\hat{r}(x)$ in terms of $\mc{V}$. \begin{thm}\label{t:asymptotic} For every $x \geq 1$, let $$\hat{r}'(x)=\max\left\{r \: : \: \left( 1-\frac{x}{r}, 1 -\frac{x+1}{r} \right) \in \mc{V} \right\}.$$ Then $$\hat{r}(x) = \max(2x,x+2,\hat{r}'(x)).$$ In particular, the limit in the definition of $\hat{r}(x)$ exists. \end{thm} \begin{proof} It follows immediately from Corollary~\ref{c:upper} that $$\limsup_{\substack{n,m \to \infty \\n/m \to x}} \frac{r(S(n,m))}{m} \leq \max(2x,x+2,\hat{r}'(x)).$$ Since $r(S(n, m))\ge \max (2n+2, n+2m + 2)$ for $n \geq m \geq 3$, it remains to show that for all $x \geq 1$ and $\varepsilon >0$ there exist $\gamma, N>0$ such that, if $m \geq N$ $n \geq (x -\gamma)m$ are integers, then $r(S(n,m)) \geq (\hat{r}'(x)-\varepsilon)m$. Let $$r = \hat{r}'(x), \; \delta= 1-\frac{x}{r}-\frac{\varepsilon}{2r^2}, \; \mathrm{and} \; \eta= 1-\frac{x+1}{r}-\frac{\varepsilon}{2r^2}.$$ The point $(\delta, \eta)$ is directly valid by definition of $\mc{V}$. Therefore there exists a graph $G$ with $|V(G)|>1$ such that $\deg(v)+1 \geq \delta |V(G)|$ for every $v \in V(G)$ and $|N(v) \cup N(u)| \leq (1 -\eta)|V(G)|$ for all $uv \in E(G)$. Let $s=|V(G)|$, $\gamma=\frac{\varepsilon^2}{4r^2}$, $N=\lceil 2s /\varepsilon\rceil$. Let $m \geq N$, $n \geq (x -\gamma)m$ be integers, and let $k= \lfloor (r - \varepsilon/2)m/s \rfloor$. Then \begin{equation}\label{e:deltabound} (1 - \delta) ks \leq \left( \frac{x}{r}+\frac{\varepsilon}{2r^2}\right) \left( r -\frac{\varepsilon}{2}\right)m\leq \left(x-\frac{\varepsilon^2}{4r^2} \right)m\leq n. \end{equation} Similarly, \begin{equation}\label{e:etabound} (1 - \eta) ks \leq \left( \frac{x+1}{r}+\frac{\varepsilon}{2r^2}\right) \left( r -\frac{\varepsilon}{2}\right)m\leq \left(x+1-\frac{\varepsilon^2}{4r^2}\right) m \leq m+n. \end{equation} Let $G'$ be the graph with $|V(G)|=ks$ obtained by replacing every vertex of $G$ by a complete graph on $k$ vertices and replacing all the edges by complete bipartite graphs. By (\ref{e:deltabound}) and (\ref{e:etabound}), the graph $G'$ satisfies the conditions in Theorem~\ref{t:graph}(ii). Therefore $$r(S(n,m)) \geq ks =\lfloor (r -\varepsilon/2)m/s \rfloor s \geq (r -\varepsilon/2)m -s \geq (r -\varepsilon)m$$ by the choice of $N$, as desired. 
\end{proof} \begin{cor}\label{c:linlower} The following inequalities hold: \begin{align} &\hat{r}(x) \geq \frac{5}{3}x+ \frac{5}{6} & \mathrm{for \ } 1 \leq x, \label{e:lower1}\\ &\hat{r}(x) \geq \frac{21}{10}x &\mathrm{for \ } 1 \leq x \leq 2,\label{e:lower2}\\ &\hat{r}(x) \geq \frac{189}{115}x+\frac{21}{23} &\mathrm{for \ } 2 \leq x.\label{e:lower3} \end{align} \end{cor} \begin{proof} For $x \geq 1$, let $p_1=(x+2)/(2x+1) \leq 1$, and let $r_1 = 5x/3+ 5/6$. Direct computations show that $1-x/r_1=(1+2p_1)/5$, and $1-(x+1)/r_1=(3-2p_1)/5$. By Corollary~\ref{c:valid}, $((1+2p_1)/5,(3-2p_1)/5) \in \mc{V}$. Thus $\hat{r}(x) \geq r_1$ by Theorem~\ref{t:asymptotic}, and (\ref{e:lower1}) holds. For the proof of (\ref{e:lower2}), let $r_2 = 21x/10$. Then $1-x/r_2 =11/21$, and $1-(x+1)/r_2 \leq 6/21$ for $1 \leq x \leq 2$. We have $(11/21,6/21) \in \mc{V}$ by Corollary~\ref{c:valid} applied with $p=1$. Thus (\ref{e:lower2}) holds. Finally, for the proof of (\ref{e:lower3}), let $x \geq 2$, let $p_3=(20+13x)/(10+18x) \leq 1$, and let $r_3 =189x/115 + 21/23$. Then $1-x/r_3=(1+10p_3)/21$ and $$1- \frac{x+1}{r_3} = \frac{19-18p_3+5p_3^2}{21} - \frac{125}{84}\left(\frac{x-2}{5+9x}\right)^2 \leq \frac{19-18p_3+5p_3^2}{21}.$$ By Corollary~\ref{c:valid}, we have $((1+10p_3)/21,(19-18p_3+5p_3^2)/21) \in \mc{V}$, implying $\hat{r}(x) \geq r_3$ by Theorem~\ref{t:asymptotic}. \end{proof} Let us note that the lower bound in (\ref{e:lower3}) can be tightened to $$\hat{r}(x) \geq \frac{7}{60}\left(5+4x+\sqrt{25+40x+106x^2} \right).$$ However, we chose to keep the bound linear, and hence hopefully more transparent. Theorem~\ref{t:lower} follows immediately from Corollary~\ref{c:linlower}. \begin{proof}[Proof of Theorem~\ref{t:lower}.] Note that $$r(S(n,m)) = \hat{r}\left(\frac{n}{m}\right)m+o(m).$$ Thus inequalities (\ref{e:C5}) and (\ref{e:LK7}) of Theorem~\ref{t:lower} follow from inequalities (\ref{e:lower1}) and (\ref{e:lower3}) in Corollary~\ref{c:linlower}, respectively. \end{proof} More generally, Corollary~\ref{c:linlower} implies the following piecewise linear lower bound on $\hat{r}(x)$. Let $$\hat{r}_l(x)= \begin{cases} x+2 &\mathrm{for \ } 1 \leq x \leq \frac{7}{4}, \\ \frac{5}{3}x+ \frac{5}{6} & \mathrm{for \ } \frac{7}{4} \leq x \leq \frac{25}{13}, \\ \frac{21}{10}x &\mathrm{for \ } \frac{25}{13} \leq x \leq 2,\\ \frac{189}{115}x+\frac{21}{23} &\mathrm{for \ } 2 \leq x \leq \frac{105}{41}, \\ 2x &\mathrm{for \ } \frac{105}{41} \leq x. \end{cases}$$ Coming back to upper bounds, define $$u_{\delta,\eta}(x) = \max\left(x+2,2x, \frac{x}{1 -\delta},\frac{x+1}{1 -\eta} \right)$$ for $(\delta,\eta) \in [0,1]^2.$ It follows from Theorem~\ref{t:asymptotic} that $\hat{r}(x) \leq u_{\delta,\eta}(x)$ for every point $(\delta,\eta)$ that lies in the closure of the set of invalid points. Accordingly we define $$\hat{r}_u(x)=\min_{i=1}^{10} u_{\delta^*_i,\eta^*_i}(x),$$ where the pairs $(\delta^*_i,\eta^*_i)$ for $1 \leq i \leq 9$ are listed in Table~\ref{tbl:invalid}, and $(\delta^*_{10},\eta^*_{10})=(1/2,1/3)$. It follows from Theorems~\ref{t:invalid} and~\ref{t:invalid2} that $\hat{r}_u(x)$ is an upper bound on $\hat{r}(x)$. The next theorem collects all the asymptotic lower and upper bounds on Ramsey numbers of double stars established in this section. \begin{thm}\label{t:bounds}$ \hat{r}_l(x) \leq \hat{r}(x) \leq \hat{r}_u(x).$ \end{thm} We present two figures which should be helpful in visualizing the bounds in Theorem~\ref{t:bounds}.
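For readers who wish to reproduce such plots, the following short Python sketch (ours; it assumes the numpy and matplotlib packages and is not part of the computations behind the results above) evaluates the piecewise linear lower bound $\hat{r}_l$ and the upper bound $\hat{r}_u$, where the invalid pairs are those of Table~\ref{tbl:invalid} together with $(1/2,1/3)$.
\begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt

# Invalid pairs (delta*_i, eta*_i) from the table above, plus (1/2, 1/3).
INVALID = [(0.505, 0.3164), (0.510, 0.3080), (0.515, 0.3011), (0.520, 0.2944),
           (0.525, 0.2883), (0.530, 0.2823), (0.535, 0.2766), (0.540, 0.2710),
           (0.5406, 0.2703), (0.5, 1/3)]

def r_lower(x):
    """Piecewise linear lower bound hat r_l(x)."""
    if x <= 7/4:      return x + 2
    if x <= 25/13:    return 5*x/3 + 5/6
    if x <= 2:        return 21*x/10
    if x <= 105/41:   return 189*x/115 + 21/23
    return 2*x

def u(delta, eta, x):
    return max(x + 2, 2*x, x/(1 - delta), (x + 1)/(1 - eta))

def r_upper(x):
    """Upper bound hat r_u(x), minimized over the invalid pairs."""
    return min(u(d, e, x) for d, e in INVALID)

xs = np.linspace(1, 3.5, 1000)
lower = np.array([r_lower(x) for x in xs])
upper = np.array([r_upper(x) for x in xs])
conjectured = np.maximum(xs + 2, 2*xs)    # the conjectured asymptotic value max(x+2, 2x)

plt.plot(xs, conjectured, "--", label="max(x+2, 2x)")
plt.plot(xs, lower, label="lower bound")
plt.plot(xs, upper, label="upper bound")
plt.xlabel("x = n/m"); plt.legend(); plt.show()
print("maximal ratio upper/lower on [1, 3.5]:", float((upper / lower).max()))
\end{verbatim}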
For comparison, let us introduce an additional function $\hat{r}^*_l(x)=\max(x+2,2x)$ equal to the value of $\hat{r}(x)$ conjectured in~\cite{grossman}. Functions $\hat{r}^*_l(x)$, $\hat{r}_l(x)$ and $\hat{r}_u(x)$ are plotted in Figure~\ref{f:bounds1}. The ratio $\hat{r}_u(x)/\hat{r}_l(x)$ is plotted in Figure~\ref{f:bounds2}. In particular, the bounds in Theorem~\ref{t:bounds} asymptotically predict the value of $r(S(n,m))$ with an error of less than $1\%$. In comparison, as mentioned in the introduction, the value of $r(S(2m,m))$ conjectured in~\cite{grossman} is asymptotically smaller than the lower bound provided in Theorem~\ref{t:lower} by $5 \%$. \begin{figure} \caption{The functions $\hat{r}^*_l(x)$, $\hat{r}_l(x)$ and $\hat{r}_u(x)$.} \label{f:bounds1} \end{figure} \begin{figure} \caption{The ratio $\hat{r}_u(x)/\hat{r}_l(x)$.} \label{f:bounds2} \end{figure} \section{Concluding remarks} \subsection*{Asymptotic value of $r(S(n,m))$.} The constructions we used to provide the new lower bounds on Ramsey numbers of double stars are not simple, and we do not attempt to conjecture their tightness. Understanding the asymptotic behavior of $r(S(2m,m))$ appears to be difficult already. \begin{question} Is $r(S(2m,m)) =4.2m +o(m)$? \end{question} Perhaps a combination of flag algebra techniques with stability methods, along the lines of the arguments in~\cite{BHLP5Cycle,BHLPVYRainbow}, can be used to resolve the above question. More ambitiously, one can ask the following. \begin{question} Let $T$ be a tree on $n$ vertices. Suppose that color classes in the 2-coloring of $T$ have sizes $n/3$ and $2n/3$. Is $r(T) \leq 1.4n+o(n)$? \end{question} \subsection*{When is $r(S(n,m))=2n+2$?} In Theorem~\ref{mainthm} we were able to substantially extend the range of known values $(m,n)$ for which the equality $r(S(n,m))=n+2m+2$ holds. We were not similarly successful in reducing the lower bound on $n$ in Theorem~\ref{t:GHK} which guarantees $r(S(n,m))=2n+2$. By Theorem~\ref{t:graph}, finding the optimal bound is essentially equivalent to answering the following question. \begin{question} Find the infimum $c_{\inf}$ of the set of real numbers $c$ for which there exists a graph $G$ with $|V(G)|=n$ such that \begin{itemize} \item $\deg(v) > n/2$ for every $v \in V(G)$, and \item $|N(v) \cup N(u)| \leq cn$ for every $uv \in E(G)$. \end{itemize} \end{question} Theorem~\ref{t:invalid2} shows that $c_{\inf} \geq 2/3$. The sparsified blow-ups of the line graph of $K_7$, introduced in the proof of Lemma~\ref{l:sparsify}, show that $c_{\inf} \leq 389/560 \approx 0.694$. We have convinced ourselves that $c_{\inf} > 2/3$, but the proof is technical and does not provide a meaningful improvement of the lower bound on $c_{\inf}$. \end{document}
\begin{document} \title[Motivic integration on smooth rigid varieties]{Motivic integration on smooth rigid varieties and invariants of degenerations} \author{Fran\c cois Loeser} \address{{\'E}cole Normale Sup{\'e}rieure, D{\'e}partement de math{\'e}matiques et applications, 45 rue d'Ulm, 75230 Paris Cedex 05, France (UMR 8553 du CNRS)} \email{[email protected]} \urladdr{http://www.dma.ens.fr/~loeser/} \author{Julien Sebag} \address{{\'E}cole Normale Sup{\'e}rieure, D{\'e}partement de math{\'e}matiques et applications, 45 rue d'Ulm, 75230 Paris Cedex 05, France (UMR 8553 du CNRS)} \email{[email protected]} \maketitle \section{Introduction}In the last years, motivic integration has shown to be a quite powerful tool in producing new invariants in birational geometry of algebraic varieties over a field $k$, say of characteristic zero, cf. \cite{K}\cite{Bat}\cite{Ba3}\cite{arcs}\cite{MK}\cite{barc}. Let us explain the basic idea behind such results. If $h : Y \rightarrow X$ is a proper birational morphism between $k$-algebraic varieties, the induced morphism ${\mathcal L} (Y) \rightarrow {\mathcal L} (X)$ between arc spaces (cf. \cite{arcs}) is an isomorphism outside subsets of infinite codimension. By a fundamental change of variable formula, motivic integrals on ${\mathcal L} (X)$ may be computed on ${\mathcal L} (Y)$ when $Y$ is smooth. In the present paper we develop similar ideas in the somewhat dual situation of degenerating families over complete discrete valuation rings with perfect residue field, for which rigid geometry appears to be a natural framework. More precisely, let $R$ be a complete discrete valuation ring with fraction field $K$ and perfect residue field $k$. We construct a theory of motivic integration for smooth\footnote{The extension to singular rigid spaces when $K$ is of characteristic zero will be considered in a separate publication.} rigid $K$-spaces, always assumed to be quasi-compact and separated. Let $X$ be a smooth rigid $K$-space of dimension $d$. Our construction assigns to a gauge form $\omega$ on $X$, {\it i.e.} a nowhere vanishing differential form of degree $d$ on $X$, an integral $\int_{X} \omega d \tilde \mu$ with value in the ring ${K_0 ({\rm Var}_k)}_{\rm loc}$. Here ${K_0 ({\rm Var}_k)}_{\rm loc}$ is the localization with respect to the class of the affine line of the Grothendieck group of algebraic varieties over $k$. In concrete terms, two varieties over $k$ define the same class in ${K_0 ({\rm Var}_k)}_{\rm loc}$ if they become isomorphic after cutting them into locally closed pieces and stabilization by product with a power of the affine line. More generally, if $\omega$ is a differential form of degree $d$ on $X$, we define an integral $\int_{X} \omega d \mu$ with value in the ring $\widehat {K_0 ({\rm Var}_k)}$, which is the completion of ${K_0 ({\rm Var}_k)}_{\rm loc}$ with respect to the filtration by virtual dimension (see \S\kern .15em \ref{GK}). The construction is done by viewing $X$ as the generic fibre of some formal $R$-scheme ${\mathcal X}$. To such a formal $R$-scheme, by mean of the Greenberg functor ${\mathcal X} \mapsto {\rm Gr} ({\mathcal X})$ one associates a certain $k$-scheme ${\rm Gr} ({\mathcal X})$, which, when $R = k [[t]]$, and ${\mathcal X}$ is the formal completion of $X_0 \otimes k [[t]]$, for $X_0$ an algebraic variety over $k$, is nothing else that the arc space ${\mathcal L} (X_0)$ considered before. 
We may then use the general theory of motivic integration on schemes ${\rm Gr} ({\mathcal X})$ which is developed in \cite{sebag}. Of course, for the construction to work one needs to check that it is independent of the chosen model. This is done by using two main ingredients: the theory of weak N{\'e}ron models developed in \cite{neron} and \cite{bs}, and the analogue for schemes of the form ${\rm Gr} ({\mathcal X})$ of the change of variable formula which is proven in \cite{sebag}. In fact the theory of weak N{\'e}ron models really pervades the whole paper and some parts of the book \cite{neron} were crying out for their use in motivic integration\footnote{cf. the remark at the bottom of p.105 in \cite{neron}.}. As an application of our theory, we are able to assign in a canonical way to any smooth quasi-compact and separated rigid $K$-space $X$ an element $\lambda (X)$ in the quotient ring ${K_0 ({\rm Var}_k)}_{\rm loc} /({\mathbf L} - 1) {K_0 ({\rm Var}_k)}_{\rm loc}$, where ${\mathbf L}$ stands for the class of the affine line. When $X$ admits a formal $R$-model with good reduction, $\lambda (X)$ is just the class of the fibre of that model. More generally, if ${\mathcal U}$ is a weak N{\'e}ron model of $X$, $\lambda (X)$ is equal to the class of the special fibre of ${\mathcal U}$ in ${K_0 ({\rm Var}_k)}_{\rm loc} /({\mathbf L} - 1) {K_0 ({\rm Var}_k)}_{\rm loc}$. In particular it follows that this class is independent of the choice of the weak N{\'e}ron model ${\mathcal U}$. This result can be viewed as a rigid analogue of a result of Serre concerning compact smooth locally analytic varieties over a local field. To such a variety $\tilde X$, Serre associates in \cite{serre}, using classical $p$-adic integration, an invariant $s (\tilde X)$ in the ring ${\mathbf Z} /(q - 1) {\mathbf Z}$, where $q$ denotes the cardinality of the finite field $k$. Counting rational points in $k$ yields a canonical morphism ${K_0 ({\rm Var}_k)}_{\rm loc} /({\mathbf L} - 1) {K_0 ({\rm Var}_k)}_{\rm loc} \rightarrow {\mathbf Z} /(q - 1) {\mathbf Z}$, and we show that the image by this morphism of our motivic invariant $\lambda (X)$ of a smooth rigid $K$-space $X$ is equal to the Serre invariant of the underlying locally analytic variety. Unless making additional assumptions on $X$, one cannot hope to lift our invariant $\lambda (X)$ to a class in the Grothendieck ring ${K_0 ({\rm Var}_k)}_{\rm loc}$, which would be a substitute for the class of the special fibre of {\bf the} N{\'e}ron model, when such a N{\'e}ron model happens to exist. In the particular situation where $X$ is the analytification of a Calabi-Yau variety over $K$, \textit{i.e.} a smooth projective algebraic variety over $K$ of pure dimension $d$ with $\Omega^d_X$ trivial, this can be achieved: one can attach to $X$ a canonical element of ${K_0 ({\rm Var}_k)}_{\rm loc}$, which, if $X$ admits a proper and smooth $R$-model ${\mathcal X}$, is equal to the class of the special fibre ${\mathcal X}_0$ in ${K_0 ({\rm Var}_k)}_{\rm loc}$. In particular, if $X$ admits two such models ${\mathcal X}$ and ${\mathcal X}'$, the class of the special fibres ${\mathcal X}_0$ and ${\mathcal X}'_0$ in ${K_0 ({\rm Var}_k)}_{\rm loc}$ are equal, which may be seen as an analogue of Batyrev's result on birational Calabi-Yau varieties \cite{BCY}. The paper is organized as follows. Section \ref{sec2} is devoted to preliminaries on formal schemes, the Greenberg functor and weak N{\'e}ron models. 
In the following section, we review the results on motivic integration on formal schemes obtained by the second named author in \cite{sebag} which are needed in the present work. We are then able in section \ref{sec4} to construct a motivic integration on smooth rigid varieties and to prove the main results which were mentionned in the present introduction. Finally, in section \ref{sec5}, guided by the analogy with arc spaces, we formulate an analogue of the Nash problem, which is about the relation between essential (\textit{i.e.} appearing in every resolution) components of resolutions of a singular variety and irreducible components of spaces of truncated arcs on the variety, for formal $R$-schemes with smooth generic fibre. In this context, analogy suggests there might be some relation between essential components of weak N{\'e}ron models of a given formal $R$-scheme ${\mathcal X}$ with smooth generic fibre and irreducible components of the truncation $\pi_n ({\rm Gr} ({\mathcal X}))$ of its Greenberg space for $n \gg 0$. As a very first step in that direction we compute the dimension of the contribution of a given irreducible component to the truncation. \section{Preliminaries on formal schemes and Greenberg functor}\label{sec2} \subsection{Formal schemes}In this paper $R$ will denote a complete discrete valuation ring with residue field $k$ and fraction field $K$. We shall assume $k$ is perfect. We shall fix once for all an uniformizing parameter $\varpi$ and we shall set $R_n := R / (\varpi)^{n +1}$, for $n \geq 0$. In the whole paper, by a formal $R$-scheme, we shall always mean a quasi-compact, separated, locally topologically of finite type formal $R$-scheme, in the sense of \S\kern .15em 10 of \cite{EGA}. A formal $R$-scheme is a locally ringed space $({\mathcal X}, {\mathcal O}_{{\mathcal X}})$ in topological $R$-algebras. It is equivalent to the data, for every $n \geq 0$, of the $R_n$-scheme $X_n = ({\mathcal X}, {\mathcal O}_{{\mathcal X}} \otimes_R R_n)$. The $k$-scheme $X_0$ is called the special fibre of ${\mathcal X}$. As a topological space ${\mathcal X}$ is isomorphic to $X_0$ and ${\mathcal O}_{{\mathcal X}} = \limproj {\mathcal O}_{X_n}$. We have $X_n = X_{n + 1} \otimes_{R_{n +1}} R_n$ and ${\mathcal X}$ is canonically isomorphic to the inductive limit of the schemes $X_n$ in the category of formal schemes. Locally ${\mathcal X}$ is isomorphic to an affine formal $R$-scheme of the form ${\Specf} A$ with $A$ an $R$-algebra topologically of finite type, \textit{i.e.} a quotient of a restricted formal series algebra $R \{T_1, \dots, T_m\}$. If ${\mathcal Y}$ and ${\mathcal X}$ are $R$-formal schemes, we denote by ${\rm Hom}_R ({\mathcal Y}, {\mathcal X})$ the set of morphisms of formal $R$-schemes ${\mathcal Y} \rightarrow {\mathcal X}$, \textit{i.e.} morphisms between the underlying locally topologically ringed spaces over $R$ (cf. \S\kern .15em 10 of \cite{EGA}). It follows from Proposition 10.6.9 of \cite{EGA}, that the canonical morphism ${\rm Hom}_R ({\mathcal Y}, {\mathcal X}) \rightarrow \limproj {\rm Hom}_{R_n} ({\mathcal Y}_n, {\mathcal X}_n)$ is a bijection. If $k$ is a field, by a variety over $k$ we mean a separated reduced scheme of finite type over $k$. \subsection{Extensions}Let $A$ be a $k$-algebra. We set $L (A) = A$ when $R$ is a ring of equal characteristic and $L (A) = W (A)$, the ring of Witt vectors, when $R$ is a ring of unequal characteristic and we denote by $R_A$ the ring $R_A := R \otimes_{L (k)} L(A)$. 
When $F$ is a field containing $k$, we denote by $K_F$ the field of fractions of $R_F$. When the field $F$ is perfect, the ring $R_F$ is a discrete valuation ring and, furthermore, the uniformizing parameter $\varpi$ in $R$ induces an uniformizing parameter in $R_F$. Hence, since $k$ is assumed to be perfect, the extension $R \rightarrow R_F$ has ramification index 1 in the terminology of \S\kern .15em 3.6 of \cite{neron}. \subsection{The Greenberg Functor}We shall recall some material from \cite{g1} and \S\kern .15em 9.6 of \cite{neron}. Let us remark, that, when $R$ is a ring of equal characteristic, we can view $R_n$ as the set of $k$-valued points of some affine space ${\mathbf A}^m_k$ which we shall denote by ${\mathcal R}_n$, in a way compatible with the $k$-algebra structure. When $R$ is a ring of unequal characteristic, $R_n$ can no longer be viewed as a $k$-algebra. However, using Witt vectors, we may still interpret $R_n$ as the set of $k$-valued points of a ring scheme ${\mathcal R}_n$, which, as a $k$-scheme, is isomorphic to some affine space ${\mathbf A}^m_k$. Remark we have canonical morphisms ${\mathcal R}_{n + 1} \rightarrow {\mathcal R}_n$. Now, for every $n \geq 0$, we consider the functor $h_n^{\ast}$ which to a $k$-scheme $T$ associates the locally ringed space $h_n^{\ast} (T)$ which has $T$ as underlying topological space and ${\cal Hom}_k (T, {\mathcal R}_n)$ as structure sheaf. In particular, for any $k$-algebra $A$, $$ h_n^{\ast} (A) = \Spec (R_n \otimes_{L (k)} L (A)). $$ Taking $A = k$, we see that $h_n^{\ast} T$ is a locally ringed space over $\Spec R_n$. By a fundamental result of Greenberg \cite{g1} (which in the equal characteristic case amounts to Weil restriction of scalars), for $R_n$-schemes $X_n$, locally of finite type, the functor $$ T \longmapsto {\rm Hom}_{R_n} (h_n^{\ast} (T), X_n) $$ from the category of $k$-schemes to the category of sets is represented by a $k$-scheme ${\rm Gr}_n (X_n)$ which is locally of finite type. Hence, for any $k$-algebra $A$, $$ {\rm Gr}_n (X_n) (A) = X_n (R_n \otimes_{L (k)} L (A)), $$ and, in particular, setting $A =k$, we have $$ {\rm Gr}_n (X_n) (k) = X_n (R_n). $$ Among basic properties the Greenberg functor respects closed immersions, open imersions, fibred products, smooth and {\'e}tale morphisms and also sends affines to affines. Now let us consider again a formal $R$-scheme ${\mathcal X}$. The canonical adjunction morphism $ h_{n +1}^{\ast} ({\rm Gr}_{n +1} (X_{n +1})) \rightarrow X_{n +1} $ gives rise, by tensoring with $R_n$, to a canonical morphism of of $R_n$-schemes $ h_{n}^{\ast} ({\rm Gr}_{n +1} (X_{n +1})) \rightarrow X_{n}, $ from which one derives, again by adjunction, a canonical morphism of $k$-schemes $$ \theta_n : {\rm Gr}_{n +1} (X_{n +1}) \longrightarrow {\rm Gr}_{n} (X_{n}). $$ In this way we attach to the formal scheme ${\mathcal X}$ a projective system $({\rm Gr}_{n} (X_{n}))_{n \in {\mathbf N}}$ of $k$-schemes. The transition morphisms $\theta_n$ being affine, the projective limit $$ {\rm Gr} ({\mathcal X}) := \limproj {\rm Gr}_{n} (X_{n}) $$ exists in the category of $k$-schemes. Let $T$ be a $k$-scheme. We denote by $h^{\ast} (T)$ the locally ringed space which has $T$ as underlying topological space and $\limproj {\cal Hom}_k (T, {\mathcal R}_n)$ as structure sheaf. It is a locally ringed space over $\Specf R$ which identifies with the projective limit of the spaces $h_n^{\ast} (T)$ in the category of locally ringed spaces. 
Furthermore one checks, similarly as in Proposition 10.6.9 of \cite{EGA}, that the canonical morphism ${\rm Hom}_R (h^{\ast} (T), {\mathcal X}) \rightarrow \limproj {\rm Hom}_{R_n} (h_n^{\ast} (T), {\mathcal X}_n)$ is a bijection for every formal $R$-scheme ${\mathcal X}$. Putting everything together we get the following: \begin{prop}Let ${\mathcal X}$ be a quasi-compact, locally topologically of finite type formal $R$-scheme. The functor $$ T \longmapsto {\rm Hom}_{R} (h^{\ast} (T), {\mathcal X}) $$ from the category of $k$-schemes to the category of sets is represented by the $k$-scheme ${\rm Gr} ({\mathcal X})$. \qed \end{prop} In particular, for every field $F$ containing $k$, there are canonical bijections $$ {\rm Gr} ({\mathcal X}) (F) \simeq {\rm Hom}_R (\Specf R_F, {\mathcal X}) \simeq {\mathcal X} (R_F). $$ One should note that, in general, ${\rm Gr} ({\mathcal X})$ is not of finite type, even if ${\mathcal X}$ is a quasi-compact, topologically of finite type formal $R$-scheme. In this paper, we shall always consider the schemes ${\rm Gr}_{n} (X_{n})$ and ${\rm Gr} ({\mathcal X})$ with their reduced structure. Sometimes, by abuse of notation, we shall write ${\rm Gr}_n ({\mathcal X})$ for ${\rm Gr}_n (X_n)$. \begin{prop}\label{glue} \begin{enumerate} \item[(1)]The functor ${\rm Gr}$ respects open and closed immersions, fibre products, and sends affine formal $R$-schemes to affine $k$-schemes. \item[(2)] Let ${\mathcal X}$ be a formal quasi-compact and separated $R$-scheme and let $({\mathcal O}_i)_{i \in J}$ be a finite covering by formal open subschemes. There are canonical isomorphisms ${\rm Gr} ({\mathcal O}_i \cap {\mathcal O}_j) \simeq {\rm Gr} ({\mathcal O}_i) \cap {\rm Gr} ({\mathcal O}_j)$ and the scheme ${\rm Gr} ({\mathcal X})$ is canonically isomorphic to the scheme obtained by glueing the schemes ${\rm Gr} ({\mathcal O}_i)$. \end{enumerate} \end{prop} \begin{proof}Assertion (1) for the functor ${\rm Gr}_n$ is proved in \cite{g1} and \cite{neron}, and follows for ${\rm Gr}$ by taking projective limits. Assertion (2) follows from (1) and the universal property defining ${\rm Gr}$. \end{proof} \begin{remark}\label{arcspace}Assume we are in the equal characteristic case, \textit{i.e.} $R = k[[\varpi]]$. For $X$ an algebraic variety over $k$, we can consider the formal $R$-scheme $X \widehat \otimes R$ obtained by base change and completion. We have canonical isomorphisms ${\rm Gr} (X \widehat \otimes R) \simeq {\mathcal L} (X)$ and ${\rm Gr}_n (X \otimes R_n) \simeq {\mathcal L}_n (X)$, where ${\mathcal L} (X)$ and ${\mathcal L}_n (X)$ are the arc spaces considered in \cite{arcs}. \end{remark} \subsection{Smoothness}\label{smooth} Let us recall the definition of smoothness for morphisms of formal $R$-schemes. A morphism $f : {\mathcal X} \rightarrow {\mathcal Y}$ of formal $R$-schemes is smooth at a point $x$ of $X_0$ of relative dimension $d$ if it is flat at $x$ and the induced morphism $f_0 : X_0 \rightarrow Y_0$ is smooth at $x$ of relative dimension $d$. An equivalent condition (cf. Lemma 1.2 of \cite{rigid2}) is that for every $n$ in ${\mathbf N}$ the induced morphism $f_n : X_n \rightarrow Y_n$ is smooth at $x$ of relative dimension $d$. The morphism $f$ is smooth if it is smooth at every point of $X_0$. The formal $R$-scheme ${\mathcal X}$ is smooth at a point $x$ of $X_0$ if the structural morphism is smooth at $x$. Let ${\mathcal X}$ be a flat formal $R$-scheme of relative dimension $d$. 
We denote by ${\mathcal X}_{\rm sing}$ the closed formal subscheme of ${\mathcal X}$ defined by the radical of the Fitting ideal sheaf ${\rm Fitt}_d \Omega_{{\mathcal X} / R}$. The formal $R$-scheme ${\mathcal X}$ is smooth at a point $x$ of $X_0$ (resp. is smooth) if and only if $x$ is not in ${\mathcal X}_{\rm sing}$ (resp. ${\mathcal X}_{\rm sing}$ is empty). \subsection{Greenberg's Theorem}The following statement, which is an adaptation of a result of Schappacher \cite{scha}, is an analogue of Greenberg's Theorem \cite{g2} in the framework of formal schemes. We refer to \cite{sebag} for a more detailled exposition. \begin{theorem}\label{gree}Let $R$ be a complete discrete valuation ring and let ${\mathcal X}$ be a formal $R$-scheme. For every $n \geq 0$, there exists an integer $\gamma_{{\mathcal X}} (n) \geq n$ such that, for every field $F$ containing $k$, and every $x$ in ${\mathcal X} (R_F/ \varpi^{\gamma_{{\mathcal X}} (n) +1})$, the image of $x$ in ${\mathcal X} (R_F/\varpi^{n +1})$ may be lifted to a point in ${\mathcal X} (R_F)$. \end{theorem} \begin{remark}\label{gf}The function $n \mapsto \gamma_{{\mathcal X}} (n)$ is called the Greenberg function of ${\mathcal X}$. \end{remark} \subsection{Rigid spaces}For $\cal X$ a flat formal $R$-scheme we shall denote by ${\mathcal X}_{K}$ its generic fibre in the sense of Raynaud \cite{table}. By Raynaud's Theorem \cite{table}, \cite{rigid1}, the functor ${\mathcal X} \mapsto {\mathcal X}_{K}$ induces an equivalence between the localization of the category of quasi-compact flat formal $R$-schemes by admissible formal blowing-ups, and the category of rigid $K$-spaces which are quasi-compact and quasi-separated. Furthemore, ${\mathcal X}$ is separated if and only if ${\mathcal X}_{K}$ is separated (cf. Proposition 4.7 of \cite{rigid1}). Recall that for the blowing-up of an ideal sheaf ${\mathcal I}$ to be admissible means that ${\mathcal I}$ contains some power of the uniformizing parameter $\varpi$. In the paper all rigid $K$-spaces will be assumed to be quasi-compact and separated. \subsection{Weak N{\'e}ron models}We shall denote by $R^{\rm sh}$ a strict henselization of $R$ and by $K^{\rm sh}$ its field of fractions. \begin{definition}Let $X$ be a smooth rigid $K$-variety. A weak formal N{\'e}ron model\footnote{We follow here the terminology of \cite{bs} which is somewhat different from that of \cite{neron}.} of $X$ is a smooth formal $R$-scheme ${\mathcal U}$, whose generic fibre ${\mathcal U}_K$ is an open rigid subspace of $X_K$, and which has the property that the canonical map ${\mathcal U} (R^{\rm sh}) \rightarrow X (K^{\rm sh})$ is bijective. \end{definition} The construction of weak N{\'e}ron models using N{\'e}ron's smoothening process presented in \cite{neron}, carries over almost literally from $R$-schemes to formal $R$-schemes, and gives, as explained in \cite{bs}, the following result: \begin{theorem}\label{wn}Let ${\mathcal X}$ be a quasi-compact formal $R$-scheme, whose generic fibre ${\mathcal X}_K$ is smooth. Then there exists a morphism of formal $R$-schemes ${\mathcal X}'\rightarrow {\mathcal X}$, which is the composition of a sequence of formal blowing-ups with centers in the corresponding special fibres, such that every $R^{\rm sh}$-valued point of ${\mathcal X}$ factors through the smooth locus of ${\mathcal X}'$. 
\end{theorem} One deduces the following omnibus statement: \begin{prop}Let $X$ be a smooth quasi-compact and separated rigid $K$-space and let ${\mathcal X}$ be a formal $R$-model of $X$, \textit{i.e} a quasi-compact formal $R$-scheme ${\mathcal X}$ with generic fibre $X$. Then there exists a weak formal N{\'e}ron model ${\mathcal U}$ of $X$ which dominates ${\mathcal X}$ and which is quasi-compact. Furthermore the canonical map ${\mathcal U} (R^{\rm sh}) \rightarrow {\mathcal X} (R^{\rm sh})$ is a bijection and for every perfect field $F$ containing $k$, the formal $R_F$-scheme ${\mathcal U} \otimes_R R_F$ is a weak N{\'e}ron model of the rigid $K_F$-space $X \otimes_K K_F$. In particular, the morphism ${\mathcal U} \rightarrow {\mathcal X}$ induces a bijection between points of ${\rm Gr} ({\mathcal U})$ and ${\rm Gr} ({\mathcal X})$. \end{prop} \begin{proof}We choose a formal model ${\mathcal X}$ of $X$ such that we are in the situation of Theorem \ref{wn}. The smooth locus ${\mathcal U}$ of ${\mathcal X}'$ is quasi-compact and is a weak N{\'e}ron model of ${\mathcal X}_K$, since, by \cite{bs} 2.2 (ii), every $K^{\rm sh}$-valued point of ${\mathcal X}_K$ extends uniquely to a $R^{\rm sh}$-valued point of ${\mathcal X}$. Also, it follows from Corollary 6 of \S\kern .15em 3.6 \cite{neron} that, if ${\mathcal U}$ is a weak N{\'e}ron model of the rigid $K$-space $X$, then for every field $F$ containing $k$, the formal $R_F$-scheme ${\mathcal U} \otimes_R R_F$ is a weak N{\'e}ron model of the rigid $K_F$-space $X \otimes_K K_F$. \end{proof} \section{Motivic integration on formal schemes}\label{sec3} The material in this section is borrowed from \cite{sebag} to which we shall refer for details. \subsection{Truncation}For ${\mathcal X}$ a formal $R$-scheme, we shall denote by $\pi_{n, {\mathcal X}}$ or $\pi_n$ the canonical projection ${\rm Gr} ({\mathcal X}) \rightarrow {\rm Gr}_n (X_n)$, for $n$ in ${\mathbf N}$. Let us first state the following corollary of Theorem \ref{gree}: \begin{prop}\label{chev}Let ${\mathcal X}$ be a formal $R$-scheme. The image $\pi_n ({\rm Gr} ({\mathcal X}))$ of ${\rm Gr} ({\mathcal X})$ in ${\rm Gr}_n (X_n)$ is a constructible subset of ${\rm Gr}_n (X_n)$. More generally, if $C$ is a constructible subset of ${\rm Gr}_m (X_m)$, $\pi_n (\pi_m^{-1} (C))$ is a constructible subset of ${\rm Gr}_n (X_n)$, for every $n \geq 0$. \end{prop} \begin{proof}Indeed, it follows from Theorem \ref{gree} that $\pi_n ({\rm Gr} ({\mathcal X}))$ is equal to the image of ${\rm Gr}_{\gamma (n)} (X_{\gamma (n)})$ in ${\rm Gr}_n (X_n)$. The morphism ${\rm Gr}_{\gamma (n)} (X_{\gamma (n)}) \rightarrow {\rm Gr}_n (X_n)$ being of finite type, first the statement follows from Chevalley's Theorem. For the second statement, one may assume $m = n$, and the proof proceeds as before. \end{proof} \begin{prop}\label{smoothtr} Let $\cal X$ be a smooth formal separated $R$-scheme (quasi-compact, locally topologically of finite type over $R$), of relative dimension $d$. \begin{enumerate} \item[(1)]For every $n$, the morphism $\pi_n : {\rm Gr} ({\mathcal X}) \rightarrow {\rm Gr}_n (X_n)$ is surjective. \item[(2)]For every $n$ and $m$ in ${\mathbf N}$, the canonical projection ${\rm Gr}_{n +m} (X_{n +m}) \rightarrow {\rm Gr}_n (X_n)$ is a locally trivial fibration for the Zariski topology with fibre ${\mathbf A}_k^{dm}$. 
\end{enumerate} \end{prop} We say that a map $\pi : A \rightarrow B$ is a piecewise morphism if there exists a finite partition of the domain of $\pi$ into locally closed subvarieties of $A$ such that the restriction of $\pi$ to any of these subvarieties is a morphism of schemes. \subsection{Away from the singular locus}Let ${\mathcal X}$ be a formal $R$-scheme and consider its singular locus ${\mathcal X}_{\rm sing}$ defined in \ref{smooth}. For any integer $e \geq 0$, we view ${\rm Gr}_{e} ({\mathcal X}_{{\rm sing}, e})$ as contained in ${\rm Gr}_{e} ({\mathcal X})$ and we set $$ {\rm Gr}^{(e)} ({\mathcal X}) := {\rm Gr} ({\mathcal X}) \setminus \pi_e^{-1} ({\rm Gr}_{e} ({\mathcal X}_{{\rm sing}, e})). $$ We say that a map $\pi : A \rightarrow B$ between $k$-constructible sets $A$ and $B$ is a piecewise trivial fibration with fibre $F$, if there exists a finite partition of $B$ into subsets $S$ which are locally closed in $B$ such that $\pi^{- 1} (S)$ is locally closed in $A$ and isomorphic, as a variety over $k$, to $S \times F$, with $\pi$ corresponding under the isomorphism to the projection $S \times F \rightarrow S$. We say that the map $\pi$ is a piecewise trivial fibration over some constructible subset $C$ of $B$, if the restriction of $\pi$ to $\pi^{- 1} (C)$ is a piecewise trivial fibration onto $C$. \begin{prop}\label{ptf}Let ${\mathcal X}$ be a flat formal $R$-scheme of relative dimension $d$. There exists an integer $c \geq 1$ such that, for all integers $e$ and $n$ in ${\mathbf N}$ such that $n \geq c e$, the projection $$ \pi_{n+1} ({\rm Gr} ({\mathcal X})) \longrightarrow \pi_{n} ({\rm Gr} ({\mathcal X})) $$ is a piecewise trivial fibration over $\pi_{n} ({\rm Gr}^{(e)} ({\mathcal X}))$ with fibre ${\mathbf A}_k^d$. \end{prop} \subsection{Dimension estimates} \begin{lem}\label{dimgr} Let ${\mathcal X}$ be a formal $R$-scheme whose generic fibre ${\mathcal X}_K$ is of dimension $\leq d$. Then \begin{enumerate} \item[(1)]For every $n$ in ${\mathbf N}$, $${\rm dim} \, \pi_n ({\rm Gr} ({\mathcal X})) \leq (n + 1) d.$$ \item[(2)]For $m \geq n$, the fibres of the projection $\pi_m ({\rm Gr} ({\mathcal X})) \rightarrow \pi_n ({\rm Gr} ({\mathcal X}))$ are of dimension $\leq (m - n) d$.
We denote by $F^m {K_0 ({\rm Var}_k)}_{\rm loc}$ the subgroup generated by $[S] {\mathbf L}^{- i}$ with ${\rm dim} \, S - i \leq -m$, and by $\widehat{{K_0 ({\rm Var}_k)}}$ the completion of ${K_0 ({\rm Var}_k)}_{\rm loc}$ with respect to the filtration $F^{\cdot}$\footnote{It is still unknown whether the filtration $F^{\cdot}$ is separated or not.}. We will also denote by $F^{\cdot}$ the filtration induced on $\widehat{{K_0 ({\rm Var}_k)}}$. We denote by $\overline {K_0 ({\rm Var}_k)}_{\rm loc}$ the image of ${K_0 ({\rm Var}_k)}_{\rm loc}$ in $\widehat {K_0 ({\rm Var}_k)}$. We put on the ring $\widehat{{K_0 ({\rm Var}_k)}}$ a structure of non-archimedean ring by setting $||a|| := 2^{-n}$, where $n$ is the largest integer such that $a \in F^{n} \widehat {K_0 ({\rm Var}_k)}$, for $a \not=0$, and $||0|| = 0$. \subsection{Cylinders}Let ${\mathcal X}$ be a formal $R$-scheme. A subset $A$ of ${\rm Gr} ({\mathcal X})$ is cylindrical of level $n \geq 0$ if $A = \pi_n^{- 1} (C)$ with $C$ a constructible subset of ${\rm Gr}_n ({\mathcal X})$. We denote by ${\mathbf C}_{{\mathcal X}}$ the set of cylindrical subsets of ${\rm Gr} ({\mathcal X})$ of some level. Let us remark that ${\mathbf C}_{{\mathcal X}}$ is a boolean algebra, \textit{i.e.} contains ${\rm Gr} ({\mathcal X})$, $\emptyset$, and is stable under finite intersection, finite union, and under taking complements. It follows from Proposition \ref{chev} that if $A$ is cylindrical of some level, then $\pi_n (A)$ is constructible for every $n \geq 0$. A basic finiteness property of cylinders is the following: \begin{lem}\label{finiteness} Let $A_i$, $i \in I$, be a countable family of cylindrical subsets of ${\rm Gr} ({\mathcal X})$. If $A := \cup_{i \in I} A_i$ is also a cylinder, then there exists a finite subset $J$ of $I$ such that $A = \cup_{i \in J} A_i$. \end{lem} \begin{proof}Since ${\rm Gr} ({\mathcal X})$ is quasi-compact, this follows from Th{\'e}or{\`e}me 7.2.5 of \cite{EGA}. \end{proof} \subsection{Motivic measure for cylinders}Let ${\mathcal X}$ be a flat formal $R$-scheme of relative dimension $d$. Let $A$ be a cylinder of ${\rm Gr} ({\mathcal X})$. We shall say $A$ is stable of level $n$ if it is cylindrical of level $n$ and if, for every $m \geq n$, the morphism $$ \pi_{m+1} ({\rm Gr} ({\mathcal X})) \longrightarrow \pi_{m} ({\rm Gr} ({\mathcal X})) $$ is a piecewise trivial fibration over $\pi_{m} (A)$ with fibre ${\mathbf A}_k^d$. We denote by ${\mathbf C}_{0, {\mathcal X}}$ the set of stable cylindrical subsets of ${\rm Gr} ({\mathcal X})$ of some level. It follows from Proposition \ref{smoothtr} that every cylinder in ${\rm Gr} ({\mathcal X})$ is stable when ${\mathcal X}$ is smooth. When ${\mathcal X}$ is no longer assumed to be smooth, ${\mathbf C}_{0, {\mathcal X}}$ is in general not a boolean algebra, but is an ideal of ${\mathbf C}_{{\mathcal X}}$: ${\mathbf C}_{0, {\mathcal X}}$ contains $\emptyset$, is stable under finite union, and the intersection of an element in ${\mathbf C}_{{\mathcal X}}$ with an element of ${\mathbf C}_{0, {\mathcal X}}$ belongs to ${\mathbf C}_{0, {\mathcal X}}$. In general ${\rm Gr} ({\mathcal X})$ is not stable, but it follows from Proposition \ref{ptf} that ${\rm Gr}^{(e)} ({\mathcal X})$ is a stable cylinder of ${\rm Gr} ({\mathcal X})$, for every $e \geq 0$. From first principles, one proves (cf.
\cite{arcs}, \cite{sebag}): \begin{def-prop}There is a unique additive morphism $$\tilde \mu : {\mathbf C}_{0, {\mathcal X}} \longrightarrow {K_0 ({\rm Var}_k)}_{\rm loc}$$ such that $\tilde \mu (A) = [\pi_n (A)] \, {\mathbf L}^{- (n +1) d}$, when $A$ is a stable cylinder of level $n$. Furthermore the measure $\tilde \mu$ is $\sigma$-additive. \end{def-prop} One deduces from Lemma \ref{dimgr} and Lemma \ref{dim2}, cf. \cite{arcs}, \cite{sebag}, the following: \begin{prop}\label{conv} \begin{enumerate} \item[(1)]For any cylinder $A$ in ${\mathbf C}_{{\mathcal X}}$, the limit $$ \mu (A) := \lim_{e \rightarrow \infty} \tilde \mu (A \cap {\rm Gr}^{(e)} ({\mathcal X})) $$ exists in $\widehat {K_0 ({\rm Var}_k)}$. \item[(2)]If $A$ belongs to ${\mathbf C}_{0, {\mathcal X}}$, then $\mu (A)$ coincides with the image of $\tilde \mu (A)$ in $\widehat {K_0 ({\rm Var}_k)}$. \item[(3)]The measure $A \mapsto \mu (A)$ is $\sigma$-additive. \item[(4)]For $A$ and $B$ in ${\mathbf C}_{{\mathcal X}}$, $|| \mu (A \cup B)|| \leq \max (||\mu (A)||, ||\mu (B)||)$. If $A \subset B$, $|| \mu (A)|| \leq || \mu (B)||$. \end{enumerate} \end{prop} \subsection{Measurable subsets of ${\rm Gr} ({\mathcal X})$} For $A$ and $B$ subsets of the same set, we use the notation $A \triangle B$ for $(A \cup B) \setminus (A \cap B)$. \begin{definition}We say that a subset $A$ of ${\rm Gr} ({\mathcal X})$ is {\emph{measurable}} if, for every positive real number $\varepsilon$, there exists an $\varepsilon$-cylindrical approximation, \textit{i.e.} a sequence of cylindrical subsets $A_{i} (\varepsilon)$, $i \in {\mathbf N}$, such that $$ \Bigl(A \triangle A_{0} (\varepsilon) \Bigr) \subset \bigcup_{i \geq 1} A_{i} (\varepsilon), $$ and $||\mu (A_{i} (\varepsilon))|| \leq \varepsilon$ for all $i \geq 1$. We say that $A$ is {\emph{strongly measurable}} if moreover we can take $A_{0} (\varepsilon) \subset A$. \end{definition} \begin{theorem}If $A$ is a measurable subset of ${\rm Gr} ({\mathcal X})$, then $$\mu (A) := \lim_{\varepsilon \rightarrow 0} \mu (A_{0} (\varepsilon))$$ exists in $\widehat {K_0 ({\rm Var}_k)}$ and is independent of the choice of the sequences $A_{i} (\varepsilon)$, $i \in {\mathbf N}$. \end{theorem} For $A$ a measurable subset of ${\rm Gr} ({\mathcal X})$, we shall call $\mu (A)$ the motivic measure of $A$. We denote by ${\mathbf D}_{{\mathcal X}}$ the set of measurable subsets of ${\rm Gr} ({\mathcal X})$. One should remark that obviously ${\mathbf C}_{{\mathcal X}}$ is contained in ${\mathbf D}_{{\mathcal X}}$. \begin{prop}\begin{enumerate} \item[(1)]${\mathbf D}_{{\mathcal X}}$ is a boolean algebra. \item[(2)]If $A_{i}$, $i \in {\mathbf N}$, is a sequence of measurable subsets of ${\rm Gr} ({\mathcal X})$ with $\lim_{i \rightarrow \infty} ||\mu (A_{i})|| = 0$, then $\cup_{i \in {\mathbf N}} A_{i}$ is measurable. \item[(3)] Let $A_{i}$, $i \in {\mathbf N}$, be a family of measurable subsets of ${\rm Gr} ({\mathcal X})$. Assume the sets $A_{i}$ are mutually disjoint and that $A := \cup_{i \in {\mathbf N}}A_{i}$ is measurable. Then $\sum_{i \in {\mathbf N}} \mu (A_{i})$ converges in $\widehat {K_0 ({\rm Var}_k)}$ to $\mu (A)$. \item[(4)] If $A$ and $B$ are measurable subsets of ${\rm Gr} ({\mathcal X})$ and if $A \subset B$, then $||\mu (A)|| \leq ||\mu (B)||$.
\end{enumerate} \end{prop} \begin{remark}\label{comp}In the situation of Remark \ref{arcspace}, one can check that the notions of cylinders, stable cylinders and measurable subsets of ${\rm Gr} (X \widehat \otimes R)$ coincide with the analogous notions introduced in \cite{MK} for subsets of ${\mathcal L} (X)$. \end{remark} \subsection{Order of the Jacobian ideal} Let $h : {\mathcal Y} \rightarrow {\mathcal X}$ be a morphism of flat formal $R$-schemes of relative dimension $d$. Let $y$ be a point of ${\rm Gr} ({\mathcal Y}) \setminus {\rm Gr} ({\mathcal Y}_{\rm sing})$ defined over some field extension $F$ of $k$. We denote by $\varphi : \Specf R_F \rightarrow {\mathcal Y}$ the corresponding morphism of formal $R$-schemes. We define ${\rm ord}_{\varpi} ({\rm Jac}_h) (y)$, the order of the Jacobian ideal of $h$ at $y$, as follows. From the natural morphism $h^{\ast} \Omega_{{\mathcal X} | R} \rightarrow \Omega_{{\mathcal Y} | R}$, one deduces, by taking the $d$-th exterior power, a morphism $h^{\ast} \Omega^d_{{\mathcal X} | R} \rightarrow \Omega^d_{{\mathcal Y} | R}$, hence a morphism $$(\varphi^{\ast}h^{\ast} \Omega^d_{{\mathcal X} | R}) /(\text{torsion}) \longrightarrow (\varphi^{\ast}\Omega^d_{{\mathcal Y} | R})/(\text{torsion}).$$ Since $L:= (\varphi^{\ast}\Omega^d_{{\mathcal Y} | R})/(\text{torsion})$ is a free ${\mathcal O}_{R_F}$-module of rank 1, it follows from the structure theorem for finite type modules over principal ideal domains that the image of $M := (\varphi^{\ast}h^{\ast} \Omega^d_{{\mathcal X} | R}) /(\text{torsion})$ in $L$ is either $0$, in which case we set ${\rm ord}_{\varpi} ({\rm Jac}_h) (y) =\infty$, or $\varpi^n L$, for some $n \in {\mathbf N}$, in which case we set ${\rm ord}_{\varpi} ({\rm Jac}_h) (y) = n$. \subsection{The change of variable formula} If $h :{\mathcal Y} \rightarrow {\mathcal X}$ is a morphism of formal $R$-schemes, we still write $h$ for the corresponding morphism ${\rm Gr} ({\mathcal Y}) \rightarrow {\rm Gr} ({\mathcal X})$. The following lemmas are basic geometric ingredients in the proofs of the change of variable formulas, Theorems \ref{cv} and \ref{cvtilde}. \begin{lem}\label{cv1}Let $h : {\mathcal Y} \rightarrow {\mathcal X}$ be a morphism between flat formal $R$-schemes of relative dimension $d$. We assume ${\mathcal Y}$ is smooth. For $e$ and $e'$ in ${\mathbf N}$ we set $$ \Delta_{e, e'} := \Bigl\{ \varphi \in {\rm Gr} ({\mathcal Y}) \Bigm \vert {\rm ord}_{\varpi} ({\rm Jac}_h) (\varphi) = e \quad \text{\rm and} \quad h(\varphi) \in {\rm Gr}^{(e')} ({\mathcal X}) \Bigr\}. $$ Then, there exists $c$ in ${\mathbf N}$ such that, for every $n$ with $n \geq 2e$ and $n \geq e + c e'$, for every $\varphi$ in $\Delta_{e, e'}$ and every $x$ in ${\rm Gr} ({\mathcal X})$ such that $\pi_n (h (\varphi)) = \pi_n (x)$, there exists $y$ in ${\rm Gr} ({\mathcal Y})$ such that $h (y) = x$ and $\pi_{n - e} (\varphi) = \pi_{n - e} (y)$. \end{lem} \begin{lem}\label{cv2}Let $h : {\mathcal Y} \rightarrow {\mathcal X}$ be a morphism between flat formal $R$-schemes of relative dimension $d$. We assume ${\mathcal Y}$ is smooth. Let $B$ be a cylinder in ${\rm Gr} ({\mathcal Y})$ and set $A = h (B)$. Assume ${\rm ord}_{\varpi} ({\rm Jac}_h)$ is constant with value $e < \infty$ on $B$ and that $A \subset {\rm Gr}^{(e')} ({\mathcal X})$ for some $e' \geq 0$. Then $A$ is a cylinder.
Furthermore, if the restriction of $h$ to $B$ is injective, then for $n \gg 0$, the following holds: \begin{enumerate} \item[(1)]If $\varphi$ and $\varphi'$ belong to $B$ and $\pi_n (h (\varphi)) = \pi_n (h (\varphi'))$, then $\pi_{n - e} (\varphi) = \pi_{n - e} (\varphi')$. \item[(2)]The morphism $\pi_n (B) \rightarrow \pi_n (A) $ induced by $h$ is a piecewise trivial fibration with fibre ${\mathbf A}_k^e$. \end{enumerate} \end{lem} For a measurable subset $A $ of ${\rm Gr} ({\mathcal X})$ and a function $\alpha : A \rightarrow {\mathbf Z} \cup \{\infty\}$, we say that ${\mathbf L}^{-\alpha}$ is {\emph{integrable}} or that $\alpha$ is {\emph{exponentially integrable}} if the fibres of $\alpha$ are measurable and if the motivic integral $$ \int_{A} {\mathbf L}^{- \alpha} d\mu := \sum_{n \in {\mathbf Z}} \mu (A \cap \alpha^{-1} (n)) {\mathbf L}^{- n} $$ converges in $\widehat {K_0 ({\rm Var}_k)}$. When all the fibres $A \cap \alpha^{-1} (n)$ are stable cylinders and $\alpha$ takes only a finite number of values on $A$, it is not necessary to go to the completion of ${K_0 ({\rm Var}_k)}_{\rm loc}$ and one may define directly $$ \int_{A} {\mathbf L}^{- \alpha} d\tilde \mu := \sum_{n \in {\mathbf Z}} \tilde \mu (A \cap \alpha^{-1} (n)) {\mathbf L}^{- n} $$ in ${K_0 ({\rm Var}_k)}_{\rm loc}$. \begin{theorem}\label{cv}Let $h : {\mathcal Y} \rightarrow {\mathcal X}$ be a morphism between flat formal $R$-schemes of relative dimension $d$. We assume ${\mathcal Y}$ is smooth. Let $B$ be a strongly measurable subset of ${\rm Gr} ({\mathcal Y})$. Assume $h$ induces a bijection between $B$ and $A := h(B)$. Then, for every exponentially integrable function $\alpha : A \rightarrow {\mathbf Z} \cup \{\infty\}$, the function $\alpha \circ h + {\rm ord}_{\varpi}({\rm Jac}_h)$ is exponentially integrable on $B$ and $$ \int_A {\mathbf L}^{- \alpha} d\mu = \int_B {\mathbf L}^{-\alpha \circ h - {\rm ord}_{\varpi}({\rm Jac}_h)} d\mu. $$ \end{theorem} We shall also need the following variant of Theorem \ref{cv}. \begin{theorem}\label{cvtilde}Let $h : {\mathcal Y} \rightarrow {\mathcal X}$ be a morphism between flat formal $R$-schemes of relative dimension $d$. We assume ${\mathcal Y}$ and ${\mathcal X}_K$ are smooth and that the morphism $h_K : {\mathcal Y}_K \rightarrow {\mathcal X}_K$ induced by $h$ is {\'e}tale (cf. \cite{rigid3}). Let $B$ be a cylinder in ${\rm Gr} ({\mathcal Y})$. Assume $h$ induces a bijection between $B$ and $A := h(B)$ and that $A$ is a stable cylinder of ${\rm Gr} ({\mathcal X})$. Then the fibres $B \cap {\rm ord}_{\varpi}({\rm Jac}_h)^{-1} (n)$ are stable cylinders, the function ${\rm ord}_{\varpi}({\rm Jac}_h)$ takes only a finite number of values on $B$, and $$ \int_A d \tilde \mu = \int_B {\mathbf L}^{- {\rm ord}_{\varpi}({\rm Jac}_h)} d \tilde\mu $$ in ${K_0 ({\rm Var}_k)}_{\rm loc}$. \end{theorem} \section{Integration on smooth rigid varieties}\label{sec4} \subsection{Order of differential forms} Let ${\mathcal X}$ be a flat formal $R$-scheme equidimensional of relative dimension $d$. Consider a differential form $\omega$ in $\Omega^d_{{\mathcal X} | R} ({\mathcal X})$. Let $x$ be a point of ${\rm Gr} ({\mathcal X}) \setminus {\rm Gr} ({\mathcal X}_{\rm sing})$ defined over some field extension $F$ of $k$. We denote by $\varphi : \Specf R_F \rightarrow {\mathcal X}$ the corresponding morphism of formal $R$-schemes.
Since $L:= (\varphi^{\ast}\Omega^d_{{\mathcal X} | R})/(\text{torsion})$ is a free ${\mathcal O}_{R_F}$-module of rank 1, it follows from the structure theorem for finite type modules over principal domains, that its submodule $M$ generated by $\varphi^{\ast} \omega$ is either $0$, in which case we set ${\rm ord}_{\varpi} (\omega) (x) = \infty$, or $\varpi^n L$, for some $n \in {\mathbf N}$, in which case we set ${\rm ord}_{\varpi} (\omega) (x) = n$. Since there is a canonical isomorphism $\Omega^d_{{\mathcal X}_{K}} ({\mathcal X}_K) \simeq \Omega^d_{{\mathcal X} | R} ({\mathcal X}) \otimes_R K$ (cf. Proposition 1.5 of \cite{rigid3}), if $\omega$ is in $\Omega^d_{{\mathcal X}_{K}} ({\mathcal X}_K)$, we write $\omega = \varpi^{-n} \tilde \omega$, with $\tilde \omega$ in $\Omega^d_{{\mathcal X} | R} ({\mathcal X})$ and $n \in {\mathbf N}$, we set ${\rm ord}_{\varpi, {\mathcal X}} (\omega) := {\rm ord}_{\varpi} (\tilde \omega) -n$. Clearly this definition does not depend on the choice of $\tilde \omega$. \begin{lem}\label{ordre}Let $h : {\mathcal Y} \rightarrow {\mathcal X}$ be a morphism between flat formal $R$-schemes equidimensional of relative dimension $d$. Let $\omega$ be in $\Omega^d_{{\mathcal X} | R} ({\mathcal X})$ (resp. in $\Omega^d_{{\mathcal X}_{K}} ({\mathcal X}_K)$). Let $y$ be a point in ${\rm Gr} ({\mathcal Y}) \setminus {\rm Gr} ({\mathcal Y}_{\rm sing})$ and assume $h (y)$ belongs to ${\rm Gr} ({\mathcal X}) \setminus {\rm Gr} ({\mathcal X}_{\rm sing})$. Then $${\rm ord}_{\varpi} (h^{\ast} \omega) (y) = {\rm ord}_{\varpi} (\omega) (h(y)) + {\rm ord}_{\varpi}({\rm Jac}_h) (y),$$ resp. $${\rm ord}_{\varpi, {\mathcal Y}} (h^{\ast}_K \omega) (y) = {\rm ord}_{\varpi, {\mathcal X}} (\omega) (h(y)) + {\rm ord}_{\varpi}({\rm Jac}_h) (y).$$ \end{lem} \begin{proof}Follows directly from the definitions. \end{proof} \begin{def-theorem}Let $X$ be a smooth rigid variety over $K$, purely of dimension $d$. Let $\omega$ be a differential form in $\Omega^d_{X} (X)$. \begin{enumerate} \item[(1)]Let ${\mathcal X}$ be a formal $R$-model of $X$. Then the function ${\rm ord}_{\varpi, {\mathcal X}} (\omega)$ is exponentially integrable on ${\rm Gr} ({\mathcal X})$ and the integral $\int_{{\rm Gr} ({\mathcal X})} {\mathbf L}^{- {\rm ord}_{\varpi, {\mathcal X}} (\omega)} d\mu$ in $\widehat {K_0 ({\rm Var}_k)}$ does not depend on the model ${\mathcal X}$. We denote it by $\int_X \omega d\mu$. \item[(2)]Assume furthermore $\omega$ is a gauge form, \textit{i.e.} that it generates $\Omega^d_{X}$ at every point of $X$, and assume some open dense formal subscheme ${\mathcal U}$ of ${\mathcal X}$ is a weak N{\'e}ron model of $X$. Then the function ${\rm ord}_{\varpi, {\mathcal X}} (\omega)$ takes only a finite number of values and its fibres are stable cylinders. Furthermore the integral $\int_{{\rm Gr} ({\mathcal X})} {\mathbf L}^{- {\rm ord}_{\varpi, {\mathcal X}} (\omega)} d\tilde \mu$ in ${K_0 ({\rm Var}_k)}_{\rm loc}$ does not depend on the model ${\mathcal X}$. We denote it by $\int_X \omega d \tilde \mu$. \end{enumerate} \end{def-theorem} \begin{proof}Let us prove (2). Write $\omega = \varpi^{-n} \tilde \omega$, with $\tilde \omega$ in $\Omega^d_{{\mathcal X} | R} ({\mathcal X})$ and $n \in {\mathbf N}$. Since ${\mathcal U}$ is smooth, $\Omega^d_{{\mathcal U} | R}$ is locally free of rank 1 and $\tilde \omega {\mathcal O}_U \otimes (\Omega^d_{{\mathcal U} | R})^{-1}$ is isomorphic to a principal ideal sheaf $(f) {\mathcal O}_U$, with $f$ in ${\mathcal O}_U$. 
Furthermore, the function ${\rm ord}_{\varpi, {\mathcal X}} (\tilde \omega)$ coincides with the function ${\rm ord}_{\varpi} (f)$ which to a point $\varphi$ of ${\rm Gr} ({\mathcal U}) = {\rm Gr} ({\mathcal X})$ associates ${\rm ord}_{\varpi} (f (\varphi))$. The fibres of ${\rm ord}_{\varpi} (f)$ are stable cylinders. Since $\omega$ is a gauge form, $f$ induces an invertible function on $X$, hence, by the maximum principle (cf. \cite{BGR}), the function ${\rm ord}_{\varpi} (f)$ takes only a finite number of values. To prove that $\int_{{\rm Gr} ({\mathcal X})} {\mathbf L}^{- {\rm ord}_{\varpi, {\mathcal X}} (\omega)} d\tilde \mu$ in ${K_0 ({\rm Var}_k)}_{\rm loc}$ does not depend on the model ${\mathcal X}$, it is enough to consider the case of another model ${\mathcal X}'$ obtained from ${\mathcal X}$ by an admissible formal blow-up $h : {\mathcal X}' \rightarrow {\mathcal X}$. We may also assume ${\mathcal X}'$ contains as an open dense formal subscheme a weak N{\'e}ron model ${\mathcal U}'$ of $X$. The equality $$\int_{{\rm Gr} ({\mathcal X}')} {\mathbf L}^{- {\rm ord}_{\varpi, {\mathcal X}'} (\omega)} d\tilde \mu = \int_{{\rm Gr} ({\mathcal X})} {\mathbf L}^{- {\rm ord}_{\varpi, {\mathcal X}} (\omega)} d\tilde \mu $$ then follows from Lemma \ref{ordre} and Theorem \ref{cvtilde}. Statement (1) follows similarly from Lemma \ref{ordre} and Theorem \ref{cv}. \end{proof} \begin{remark} A situation where gauge forms naturally occur is that of reductive groups. Let $G$ be a connected reductive group over $k$. B. Gross constructs in \cite{gross}, using Bruhat-Tits theory, a differential form of top degree $\omega_G$ on $G$ which is defined up to multiplication by a unit in $R$. One may easily check that the differential form $\omega_G$ induces a gauge form on the rigid $K$-group $G^{\rm rig} := (G \widehat \otimes R)_K$. \end{remark} \begin{lem}\label{add} Let $X$ be a smooth rigid variety over $K$, purely of dimension $d$, and let $\omega$ be a gauge form on $X$. Let ${\mathcal O}= (O_i)_{i \in J}$ be a finite admissible covering and set $O_I := \cap_{i \in I} O_i$ for $I \subset J$. Then $$ \int_X \omega d \tilde \mu = \sum_{\emptyset \not= I \subset J} (-1)^{|I| - 1} \int_{O_I} \omega_{| O_I} d \tilde \mu. $$ If $\omega$ is only assumed to be a differential form in $\Omega_X^d (X)$, then $$ \int_X \omega d \mu = \sum_{\emptyset \not= I \subset J} (-1)^{|I| - 1} \int_{O_I} \omega_{| O_I} d \mu. $$ \end{lem} \begin{proof}Let us prove the first statement, the proof of the second one being similar. It is enough to consider the case $|J| = 2$. Choose an $R$-model ${\mathcal X}$ containing a weak N{\'e}ron model ${\mathcal U}$ of $X$ as an open dense formal subscheme and such that the covering $X = O_1 \cup O_2$ is induced from a covering ${\mathcal X} = {\mathcal O}_1 \cup {\mathcal O}_2$ by open formal subschemes. 
It is sufficient to prove that \begin{equation*} \begin{split} \int_{{\rm Gr} ({\mathcal X})} {\mathbf L}^{- {\rm ord}_{\varpi, {\mathcal X}} (\omega)} d\tilde \mu = \int_{{\rm Gr} ({\mathcal O}_1)} & {\mathbf L}^{- {\rm ord}_{\varpi, {\mathcal O}_1} (\omega_{| O_1})} d\tilde \mu + \int_{{\rm Gr} ({\mathcal O}_2)} {\mathbf L}^{- {\rm ord}_{\varpi, {\mathcal O}_2} (\omega_{| O_2})} d\tilde \mu \\ &- \int_{{\rm Gr} ({\mathcal O}_1 \cap {\mathcal O}_2)} {\mathbf L}^{- {\rm ord}_{\varpi, {\mathcal O}_1 \cap {\mathcal O}_2} (\omega_{| O_1 \cap O_2})} d\tilde \mu,\\ \end{split} \end{equation*} which follows from the fact that for every open formal subscheme ${\mathcal O}$ of ${\mathcal X}$ the function ${\rm ord}_{\varpi, {\mathcal X}} (\omega)$ restricts to ${\rm ord}_{\varpi, {\mathcal O}} (\omega_{| {\mathcal O}_K})$ on ${\rm Gr} ({\mathcal O})$, and from the equalities ${\rm Gr} ({\mathcal X}) = {\rm Gr} ({\mathcal O}_1) \cup {\rm Gr} ({\mathcal O}_2)$ and ${\rm Gr} ({\mathcal O}_1) \cap {\rm Gr} ({\mathcal O}_2) = {\rm Gr} ({\mathcal O}_1 \cap {\mathcal O}_2)$, which follow from Proposition \ref{glue}. \end{proof} \begin{prop}\label{prod}Let $X$ and $X'$ be smooth rigid $K$-varieties purely of dimension $d$ and $d'$ and let $\omega$ and $\omega'$ be gauge forms on $X$ and $X'$. Then $$ \int_{X \times X'} \omega \times \omega' d \tilde \mu = \int_X \omega d \tilde \mu \times \int_{X'} \omega' d \tilde \mu. $$ If $\omega$ and $\omega'$ are only assumed to be differential forms in $\Omega_X^d (X)$ and $\Omega_{X'}^{d'} (X')$, respectively, then $$ \int_{X \times X'} \omega \times \omega' d \mu = \int_X \omega d \mu \times \int_{X'} \omega' d \mu. $$ \end{prop} \begin{proof}Let us prove the first assertion, the proof of the second one being similar. Choose $R$-models ${\mathcal X}$ and ${\mathcal X}'$ of $X$ and $X'$ respectively, containing weak N{\'e}ron models ${\mathcal U}$ of $X$ and ${\mathcal U}'$ of $X'$ as open dense formal subschemes. Also write $\omega = \varpi^{-n} \tilde \omega$ and $\omega' = \varpi^{-n'} \tilde \omega'$, with $\tilde \omega$ and $\tilde \omega'$ in $\Omega^d_{{\mathcal X} | R} ({\mathcal X})$ and $\Omega^{d'}_{{\mathcal X}' | R} ({\mathcal X}')$, respectively. It is enough to check that $\tilde \mu ({\rm ord}_{\varpi, {\mathcal X} \times {\mathcal X}'} (\tilde \omega \times \tilde \omega') = m)$ is equal to $\sum_{m'+ m'' = m} \tilde \mu ({\rm ord}_{\varpi, {\mathcal X}} (\tilde \omega) = m') \times \tilde \mu ({\rm ord}_{\varpi, {\mathcal X}'} (\tilde \omega') = m'')$, which follows from the fact that on ${\rm Gr} ({\mathcal X} \times {\mathcal X}') \simeq {\rm Gr} ({\mathcal X}) \times {\rm Gr} ({\mathcal X}') = {\rm Gr} ({\mathcal U}) \times {\rm Gr} ({\mathcal U}')$, the functions ${\rm ord}_{\varpi, {\mathcal X} \times {\mathcal X}'} (\tilde \omega \times \tilde \omega')$ and ${\rm ord}_{\varpi, {\mathcal X}} (\tilde \omega) + {\rm ord}_{\varpi, {\mathcal X}'} (\tilde \omega')$ are equal. \end{proof} \subsection{Invariants for gauged smooth rigid varieties} Let $d$ be an integer $\geq 0$.
We define $K_0 ({\rm GSRig}_K^d)$, the Grothendieck group of gauged smooth rigid $K$-varieties of dimension $d$, as follows: as an abelian group it is the quotient of the free abelian group over symbols $[X, \omega]$, with $X$ a smooth rigid $K$-variety of dimension $d$ and $\omega$ a gauge form on $X$ by the relations $$ [X', \omega'] = [X, \omega] $$ if there is an isomorphism $h : X' \rightarrow X$ with $h^{\ast} \omega = \omega'$, and $$ [X, \omega] = \sum_{\emptyset \not= I \subset J} (-1)^{|I| - 1} [O_I, \omega_{| O_I}], $$ when $(O_i)_{i \in J}$ is a finite admissible covering of $X$, with the notation $O_I := \cap_{i \in I} O_i$ for $I \subset J$. One puts a graded ring structure on $K_0 ({\rm GSRig}_K) := \oplus_d K_0 ({\rm GSRig}_K^d)$ by requiring $$[X, \omega] \times [X', \omega'] := [X \times X', \omega \times \omega'].$$ Forgetting gauge forms, one defines similarly $K_0 ({\rm SRig}_K^d)$, the Grothendieck group of smooth rigid $K$-varieties of dimension $d$, and the graded ring $K_0 ({\rm SRig}_K) := \oplus_d K_0 ({\rm SRig}_K^d)$. There are natural forgetful morphisms $$F : K_0 ({\rm GSRig}_K^d) \longrightarrow K_0 ({\rm SRig}_K^d)$$ and $$F : K_0 ({\rm GSRig}_K) \longrightarrow K_0 ({\rm SRig}_K).$$ \begin{prop}\label{mainprop} The assignment which to a gauged smooth rigid $K$-variety $(X, \omega) $ associates $\int_X \omega d \tilde \mu$ factorizes uniquely as a ring morphism $$ \tilde \mu : K_0 ({\rm GSRig}_K) \rightarrow {K_0 ({\rm Var}_k)}_{\rm loc}. $$ \end{prop} \begin{proof}This follows from Lemma \ref{add} and Proposition \ref{prod}. \end{proof} \subsection{A formula for $\int_X \omega d \tilde \mu$} Let $X$ be a smooth rigid variety over $K$ of pure dimension $d$. Let ${\mathcal U}$ be a weak N{\'e}ron model of $X$ contained in some model ${\mathcal X}$ of $X$ and let $\omega$ be a form in $\Omega^d_{{\mathcal X} | R} ({\mathcal X})$ inducing a gauge form on $X$. We denote by $U_0^i$, $i \in J$, the irreducible components of the special fibre of ${\mathcal U}$. By assumption, each $U_0^{i}$ is smooth and $U_0^{i} \cap U_0^{j} = \emptyset$ for $i \not= j$. We denote by ${\rm ord}_{U_0^{i}} (\omega)$ the unique integer $n$ such that $\varpi^{-n} \omega$ generates $\Omega^d_{{\mathcal X} | R}$ at the generic point of $U_0^{i}$. More generally, if $\omega$ is a gauge form in $\Omega^d_{{\mathcal X}_{K}} ({\mathcal X}_K)$, we write $\omega = \varpi^{-n} \tilde \omega$, with $\tilde \omega$ in $\Omega^d_{{\mathcal X} | R} ({\mathcal X})$ and $n \in {\mathbf N}$, and we set ${\rm ord}_{U_0^{i}} (\omega) := {\rm ord}_{U_0^{i}} (\tilde \omega) -n$. \begin{prop}\label{form}Let $X$ be a smooth rigid variety over $K$ of pure dimension $d$. Let ${\mathcal U}$ be a weak N{\'e}ron model of $X$ contained in some model ${\mathcal X}$ of $X$ and let $\omega$ be a gauge form in $\Omega^d_{{\mathcal X}_{K}} ({\mathcal X}_K)$. With the above notations, we have $$ \int_X \omega d \tilde \mu = {\mathbf L}^{-d} \, \sum_{i \in J} \, [U_0^{i}] \, {\mathbf L}^{- {\rm ord}_{U_0^{i}} (\omega)} $$ in ${K_0 ({\rm Var}_k)}_{\rm loc}$. \end{prop} \begin{proof}Denote by ${\mathcal U}_0^{i}$ the irreducible component of ${\mathcal X}$ with special fibre $U_0^{i}$. Since ${\rm Gr} ({\mathcal X})$ is the disjoint union of the sets ${\rm Gr} ({\mathcal U}_0^{i})$, we may assume ${\mathcal X}$ is a smooth irreducible formal $R$-scheme of dimension $d$. 
Let $\omega$ be a section of $\Omega^d_{{\mathcal X} | R} ({\mathcal X})$ which generates $\Omega^d_{{\mathcal X} | R}$ at the generic point of ${\mathcal X}$ and induces a gauge form on the generic fibre. Let us remark that the function ${\rm ord}_{\varpi, {\mathcal X}} (\omega)$ is identically equal to $0$ on ${\rm Gr} ({\mathcal X})$. Indeed, after shrinking ${\mathcal X}$, we may write $\omega = f \omega_0$ with $\omega_0$ a generator of $\Omega^d_{{\mathcal X} | R}$ at every point and $f$ in ${\mathcal O}_{{\mathcal X}} ({\mathcal X})$. By hypothesis $f$ is a unit at the generic point of ${\mathcal X}$. If at some point $x$ of ${\rm Gr} ({\mathcal X})$ we had ${\rm ord}_{\varpi} f (x) \geq 1$, it would follow that the locus $f = 0$ is non empty in ${\mathcal X}$, which contradicts the assumption that $\omega$ induces a gauge form on the generic fibre. Hence, we get $\int_{{\rm Gr} ({\mathcal X})} {\mathbf L}^{- {\rm ord}_{\varpi, {\mathcal X}} (\omega)} d\tilde \mu = {\mathbf L}^{-d} [X_0]$, and the result follows. \end{proof} \subsection{Application to Calabi-Yau varieties over $K$} Let $X$ be a Calabi-Yau variety over $K$. By this we mean a smooth projective algebraic variety over $K$ of pure dimension $d$ with $\Omega^d_X$ trivial. We denote by $X^{\rm an}$ the rigid $K$-variety associated to $X$. Since $X$ is proper, $X^{\rm an}$ is canonically isomorphic to the generic fibre of the formal completion $X \widehat \otimes R$. By GAGA (cf. \cite{lut} Theorem 2.8), $\Omega^d_{X^{\rm an}} (X^{\rm an}) = \Omega^d_{X} (X) = K$. Now we can associate to any Calabi-Yau variety over $K$ a canonical element in the ring ${K_0 ({\rm Var}_k)}_{\rm loc}$ which coincides with the class of the special fibre when $X$ has a model with good reduction. \begin{theorem}\label{CY}Let $X$ be a Calabi-Yau variety over $K$, let ${\mathcal U}$ be a weak N{\'e}ron model of $X^{\rm an}$ and let $\omega$ be a gauge form on $X^{\rm an}$. We denote by $U_0^{i}$, $i \in J$, the irreducible components of the special fibre of ${\mathcal U}$ and set $\alpha (\omega) := \inf_{i \in J} {\rm ord}_{U_0^{i}} (\omega)$. Then the virtual variety \begin{equation}\label{res} [\overline X] := \sum_{i \in J} \, [U_0^{i}] \, {\mathbf L}^{\alpha (\omega)- {\rm ord}_{U_0^{i}} (\omega)} \end{equation} in ${K_0 ({\rm Var}_k)}_{\rm loc}$ only depends on $X$. When $X$ has a proper smooth model with good reduction over $R$, $[\overline X]$ is equal to the class of the special fibre. \end{theorem} \begin{proof}Let $\omega$ be a gauge form on $X^{\rm an}$. By Proposition \ref{form}, the right hand side of (\ref{res}) is equal to ${\mathbf L}^{d + \alpha (\omega)}\int_{X^{\rm an}} \omega d \tilde \mu$, which does not depend on $\omega$: any two gauge forms on $X^{\rm an}$ differ by a scalar in $K^{\ast}$, and rescaling $\omega$ shifts $\alpha (\omega)$ by the valuation of this scalar while multiplying the integral by the inverse power of ${\mathbf L}$. \end{proof} In particular, we have the following analogue of Batyrev's result on birational projective Calabi-Yau manifolds \cite{BCY}, \cite{arcs}. \begin{cor}Let $X$ be a Calabi-Yau variety over $K$ and let ${\mathcal X}$ and ${\mathcal X}'$ be two proper and smooth $R$-models of $X$ with special fibres ${\mathcal X}_0$ and ${\mathcal X}'_0$. Then $$[{\mathcal X}_0] = [{\mathcal X}'_0]$$ in ${K_0 ({\rm Var}_k)}_{\rm loc}$.\qed \end{cor} \begin{remark} Calabi-Yau varieties over $k ((t))$, with $k$ of characteristic zero, have been considered in \cite{ks}. \end{remark} \subsection{A motivic Serre invariant for smooth rigid varieties} We can now define the motivic Serre invariant for smooth rigid varieties.
\begin{theorem}\label{motivicserre} There is a canonical ring morphism $$\lambda : K_0 ({\rm SRig}_K) \longrightarrow {K_0 ({\rm Var}_k)}_{\rm loc} / ({\mathbf L} - 1) {K_0 ({\rm Var}_k)}_{\rm loc}$$ such that the diagram \begin{equation*}\label{t}\xymatrix{ K_0 ({\rm GSRig}_K) \ar[d]^{F} \ar[r]^<<<<<<<<<<<{\tilde \mu} & {K_0 ({\rm Var}_k)}_{\rm loc} \ar[d]\\ K_0 ({\rm SRig}_K) \ar[r]^<<<<<{\lambda}&{K_0 ({\rm Var}_k)}_{\rm loc} / ({\mathbf L} - 1) {K_0 ({\rm Var}_k)}_{\rm loc} } \end{equation*} is commutative. \end{theorem} \begin{proof}Since any smooth rigid $K$-variety of dimension $d$ admits a finite admissible covering by affinoids $(O_i)_{i \in J}$, with $\Omega^d_{O_i}$ trivial, the morphism $F$ is surjective. Hence it is enough to show the following statement: let ${\mathcal X}$ be a smooth formal $R$-scheme of relative dimension $d$ with $\Omega^d_{{\mathcal X} | R}$ trivial, and let $\omega_1$ and $\omega_2$ be two global sections of $\Omega^d_{{\mathcal X} | R}$ inducing gauge forms on the generic fibre ${\mathcal X}_K$, then $\int_{{\rm Gr} ({\mathcal X})} ({\mathbf L}^{- {\rm ord}_{\varpi, {\mathcal X}} (\omega_1)}-L^{- {\rm ord}_{\varpi, {\mathcal X}} (\omega_2)}) d\tilde \mu$ belongs to $({\mathbf L} - 1) {K_0 ({\rm Var}_k)}_{\rm loc}$. To prove this, we take $\omega_0$ a global section of $\Omega^d_{{\mathcal X} | R}$ such that $\Omega^d_{{\mathcal X} | R} \simeq \omega_0 {\mathcal O}_{{\mathcal X}}$. If $\omega$ is any global section of $\Omega^d_{{\mathcal X} | R}$, write $\omega = f \omega_0$ with $f$ in ${\mathcal O}_{{\mathcal X}} ({\mathcal X})$. By the maximum principle, the function ${\rm ord}_{\varpi} (f)$ takes only a finite number of values on ${\rm Gr} ({\mathcal X})$. It follows we may write ${\rm Gr} ({\mathcal X})$ as a disjoint union of the subsets ${\rm Gr} ({\mathcal X})_{{\rm ord}_{\varpi} (f) = n}$ where ${\rm ord}_{\varpi} (f)$ takes the value $n$. These subsets are stable cylinders and only a finite number of them are non empty. Hence the equality $$ \int_{{\rm Gr} ({\mathcal X})} ({\mathbf L}^{- {\rm ord}_{\varpi, {\mathcal X}} (\omega)}-L^{- {\rm ord}_{\varpi, {\mathcal X}} (\omega_0)}) d\tilde \mu = \sum_n ({\mathbf L}^{-n} - 1) \tilde \mu ({\rm Gr} ({\mathcal X})_{{\rm ord}_{\varpi} (f) = n}) $$ holds in ${K_0 ({\rm Var}_k)}_{\rm loc}$ and the statement follows. \end{proof} \begin{remark}The ring ${K_0 ({\rm Var}_k)}_{\rm loc} / ({\mathbf L} - 1) {K_0 ({\rm Var}_k)}_{\rm loc}$ is much smaller than ${K_0 ({\rm Var}_k)}_{\rm loc}$, but still quite large. Let $\ell$ be a prime number distinct from the characteristic of $k$. Then the {\'e}tale $\ell$-adic Euler characteristic with compact supports $ X \mapsto \chi_{c, \ell} (X) := \sum (-1)^i \dim H^i_{c, \text{\'et}} (X, {\mathbf Q}_{\ell})$ induces a ring morphism $\chi_{c, \ell} : {K_0 ({\rm Var}_k)}_{\rm loc} / ({\mathbf L} - 1) {K_0 ({\rm Var}_k)}_{\rm loc} \rightarrow {\mathbf Z}$. Similarly, assume there is a natural morphism $H : {K_0 ({\rm Var}_k)}_{\rm loc} \rightarrow {\mathbf Z} [u, v]$ which to the class of a variety $X$ over $k$ assigns its Hodge polynomial $H (X)$ for de Rham cohomology with compact support. Such a morphism is known to exist when $k$ is of characteristic zero. Then if one sets $H_{1/2} (X) (u) := H (X) (u, u^{-1})$ one gets a morphism $H_{1/2} : {K_0 ({\rm Var}_k)}_{\rm loc} / ({\mathbf L} - 1) {K_0 ({\rm Var}_k)}_{\rm loc} \rightarrow {\mathbf Z} [u]$, since $H ({\mathbf A}^1_k) = uv$. 
\end{remark} \begin{theorem}\label{invner} Let $X$ be a smooth rigid variety over $K$ of pure dimension $d$. Let ${\mathcal U}$ be a weak N{\'e}ron model of $X$ and denote by $U_0$ its special fibre. Then $$ \lambda ([X]) = [U_0] $$ in ${K_0 ({\rm Var}_k)}_{\rm loc} /({\mathbf L} - 1){K_0 ({\rm Var}_k)}_{\rm loc}$. In particular, the class of $[U_0]$ in ${K_0 ({\rm Var}_k)}_{\rm loc} /({\mathbf L} - 1){K_0 ({\rm Var}_k)}_{\rm loc}$ does not depend on the weak N{\'e}ron model ${\mathcal U}$. \end{theorem} \begin{proof}By taking an appropriate admissible cover, we may assume there exists a gauge form on $X$, in which case the result follows from Proposition \ref{form}, since $[U_0] = \sum_{i \in J} [U_0^i]$. (In fact, one can also prove Theorem \ref{motivicserre} that way, but we prefered to give a proof which is quite parallel to that of Serre in \cite{serre}.) \end{proof} \subsection{Relation with $p$-adic integrals on compact locally analytic varieties} Let $K$ be a local field with residue field $k = {\mathbf F}_q$. Let us consider the Grothendieck group $K_0 ({\rm SLocAn}^d_K)$ of compact locally analytic smooth varieties over $K$ of pure dimension $d$, which is defined similarly as $K_0 ({\rm SRig}^d_K)$ replacing smooth rigid varieties by compact locally analytic smooth varieties and finite admissible covers by finite covers. Also a nowhere vanishing locally analytic $d$-form on a smooth compact locally analytic variety $X$ of pure dimension $d$ will be called a gauge form on $X$, and one defines the Grothendieck group $K_0 ({\rm GSLocAn}^d_K)$ of gauged compact locally analytic smooth varieties over $K$ of pure dimension $d$ similarly as $K_0 ({\rm GSRig}^d_K)$. There are canonical forgetful morphisms $F : K_0 ({\rm SRig}^d_K) \rightarrow K_0 ({\rm SLocAn}^d_K)$ and $F : K_0 ({\rm GSRig}^d_K) \rightarrow K_0 ({\rm GSLocAn}^d_K)$ induced from the functor which to a rigid variety (resp. gauged variety) associates the underlying locally analytic variety (resp. gauged variety). If $(X, \omega)$ is a gauged compact locally analytic smooth variety, the $p$-adic integral $\int_X |\omega|$ belongs to ${\mathbf Z} [q^{-1}]$ (cf. \cite{serre}), and by additivity of $p$-adic integrals one gets a morphism ${\rm int}_p : K_0 ({\rm GSLocAn}^d_K) \rightarrow {\mathbf Z} [q^{-1}]$. On the other hand, there is a canonical morphism $N : {K_0 ({\rm Var}_k)} \rightarrow {\mathbf Z}$ which to the class of a $k$-variety $S$ assigns the number of points of $S ({\mathbf F}_q)$, and which induces a morphism $N : {K_0 ({\rm Var}_k)}_{\rm loc} \rightarrow {\mathbf Z} [q^{-1}]$. We shall also denote by $N$ the induced morphism ${K_0 ({\rm Var}_k)}_{\rm loc} /({\mathbf L} - 1){K_0 ({\rm Var}_k)}_{\rm loc} \rightarrow {\mathbf Z} [q^{-1}] / (q - 1){\mathbf Z} [q^{-1}] \simeq {\mathbf Z} / (q - 1){\mathbf Z}$. \begin{prop}\label{compint}Let $K$ be a local field with residue field $k = {\mathbf F}_q$. Then the diagram \begin{equation*} \label{ttt}\xymatrix{ K_0 ({\rm GSRig}^d_K) \ar[d]^{F} \ar[r]^<<<<{\tilde \mu} & {K_0 ({\rm Var}_k)}_{\rm loc} \ar[d]^{N}\\ K_0 ({\rm GSLocAn}^d_K) \ar[r]^<<<<<{{\rm int}_p}&{\mathbf Z} [q^{-1}] } \end{equation*} is commutative. 
\end{prop} \begin{proof}One reduces to showing the following: let ${\mathcal X}$ be a smooth formal $R$-scheme of dimension $d$ and let $f$ be a function in ${\mathcal O}_{{\mathcal X}} ({\mathcal X})$ which induces a non vanishing function on ${\mathcal X}_K$, then $$ N (\int_{{\rm Gr}({\mathcal X})} {\mathbf L}^{- {\rm ord}_{\varpi} (f)} d\tilde \mu) = \int_{{\mathcal X} (R)} q^{- {\rm ord}_{\varpi} (f)} d\tilde \mu_p, $$ with $d\tilde \mu_p$ the $p$-adic measure on ${\mathcal X} (R)$. It is enough to check that $N (\tilde \mu ({\rm ord}_{\varpi} (f) = n))$ is equal to the $p$-adic measure of the set of points $x$ of ${\mathcal X} (R)$ with ${\rm ord}_{\varpi} (f) (x) = n$, which follows from Lemma \ref{compmes}. \end{proof} \begin{lem}\label{compmes}Let $K$ be a local field with residue field $k = {\mathbf F}_q$. Let ${\mathcal X}$ be a smooth formal $R$-scheme of dimension $d$. Let $A$ be a (stable) cylinder in ${\rm Gr} ({\mathcal X})$. Then $N (\tilde \mu (A))$ is equal to the $p$-adic volume of $A \cap {\rm Gr} ({\mathcal X}) (k)$. \end{lem} \begin{proof}Write $A = \pi_n^{-1}(C)$, with $C$ a constructible subset if ${\rm Gr}_n ({\mathcal X})$. By definition $\tilde \mu (A) = {\mathbf L}^{ -d (n + 1)} [C]$. On the other hand ${\mathcal X}$ being smooth, the morphism $A \cap {\rm Gr} ({\mathcal X}) (k) \rightarrow C (k)$ is surjective and its fibres are balls of radius $q^{-d (n + 1)}$. It follows that the $p$-adic volume of $A \cap {\rm Gr} ({\mathcal X}) (k)$ is equal to $|C (k)| q^{-d (n + 1)}$. \end{proof} Let us now explain the relation with the work of Serre in \cite{serre}. Serre shows in \cite{serre} that any compact locally analytic smooth variety over $K$ of pure dimension $d$ is isomorphic to $r B^d$, with $r$ an integer $\geq 1$ and $B^d$ the unit ball of dimension $d$ and that, futhermore, $r B^d$ is isomorphic to $r' B^d$ if and only if $r$ and $r'$ are congruent modulo $q - 1$. We shall denote by $s (X)$ the class of $r$ in ${\mathbf Z} /(q - 1) {\mathbf Z}$. It follows from Serre's results that $s$ induces an isomorphism $s : K_0 ({\rm SLocAn}^d_K) \rightarrow {\mathbf Z} /(q - 1) {\mathbf Z}$ and that the diagram \begin{equation*} \xymatrix{ K_0 ({\rm GSLocAn}^d_K) \ar[d] \ar[r]^<<<<{{\rm int}_p} & {\mathbf Z} [q^{-1}] \ar[d]\\ K_0 ({\rm SLocAn}^d_K) \ar[r]^<<<<<{s}&{\mathbf Z} /(q - 1) {\mathbf Z} } \end{equation*} is commutative. The following result then follows from Proposition \ref{compint}. \begin{cor}Let $K$ be a local field with residue field $k = {\mathbf F}_q$. Then the diagram \begin{equation*} \label{tt}\xymatrix{ K_0 ({\rm SRig}^d_K) \ar[d]^{F} \ar[r]^<<<<{\lambda} & {K_0 ({\rm Var}_k)}_{\rm loc} / ({\mathbf L} - 1){K_0 ({\rm Var}_k)}_{\rm loc} \ar[d]^N\\ K_0 ({\rm SLocAn}^d_K) \ar[r]^<<<<<<<<<<<{s}&{\mathbf Z} /(q - 1) {\mathbf Z} } \end{equation*} is commutative. \qed \end{cor} \section{Essential components of weak N{\'e}ron models}\label{sec5} \subsection{Essential components and the Nash problem} Since we shall proceed by analogy with \cite{nash}, let us begin by recalling some material from that paper. We assume in this subsection that $k$ is of characteristic zero and that $R = k [[\varpi]]$. For $X$ an algebraic variety over $k$, we denote by ${\mathcal L} (X)$ its arc space as defined in \cite{arcs}. In fact, in the present \S\kern .15em, we shall use notations and results from \cite{arcs}, even when they happen to be special cases of ones in this paper. 
As remarked in \ref{arcspace}, ${\mathcal L} (X) = {\rm Gr} (X \widehat \otimes R)$ and there are natural morphisms $\pi_n : {\mathcal L} (X) \rightarrow {\mathcal L}_n (X)$ with ${\mathcal L}_n (X) = {\rm Gr}_n (X \otimes R_n)$. By a desingularization of a variety $X$ we mean a proper and birational morphism $$ h : Y \longrightarrow X, $$ with $Y$ a smooth variety, inducing an isomorphism between $h^{-1} (X \setminus X_{\rm sing}) $ and $X \setminus X_{\rm sing} $ (some authors omit the last condition). Let $h : Y \rightarrow X$ be a desingularization of $X$ and let $D$ be an irreducible component of $h^{-1} (X_{\rm sing}) $ of codimension 1 in $Y$. If $h' : Y' \rightarrow X$ is another desingularization of $X$, the birational map $\phi := h'{}^{-1} \circ h : Y \dashrightarrow Y'$ is defined at the generic point $\xi$ of $D$, since $h'$ is proper, hence we can define the image of $D$ in $Y'$ as the closure of $\phi (\xi)$ in $Y'$. One says that $D$ is an essential divisor with respect to $X$ if, for every desingularization $h' : Y' \rightarrow X$ of $X$, the image of $D$ in $Y'$ is a divisor, and that $D$ is an essential component with respect to $X$ if, for every desingularization $h' : Y' \rightarrow X$ of $X$, the image of $D$ in $Y'$ is an irreducible component of $h'{}^{-1} (X_{\rm sing})$. In general, if $D$ is an irreducible component of $h^{-1} (X_{\rm sing}) $, we say $D$ is an essential component with respect to $X$, if there exists a proper birational morphism $p : Y'\rightarrow Y$, with $Y'$ smooth, and a divisor $D'$ in $Y'$ such that $D'$ is an essential component with respect to $X$ and $p(D') = D$. It follows from the definitions and Hironaka's Theorem that essential components of different resolutions of the same variety $X$ are in natural bijection, hence we may denote by $\tau (X)$ the number of essential components in any resolution of $X$. Let $W$ be a constructible subset of an algebraic variety $Z$. We say that $W$ is irreducible in $Z$ if the Zariski closure $\overline W$ of $W$ in $Z$ is irreducible. In general let $\overline W = \cup_{1 \leq i \leq n} W'_{i}$ be the decomposition of $\overline W$ into irreducible components. Clearly $W_{i} := W'_{i} \cap W$ is non empty, irreducible in $Z$, and its closure in $Z$ is equal to $W'_{i}$. We call the $W_{i}$'s the irreducible components of $W$ in $Z$. Let $E$ be a locally closed subset of $h^{-1} (X_{\rm sing})$. We denote by $Z_{E}$ the set of arcs in ${\mathcal L} (Y)$ whose origin lies on $E$ but which are not contained in $E$. In other words $Z_{E} = \pi^{-1}_{0} (E) \setminus {\mathcal L} (E)$. Let us remark that if $E$ is smooth and connected, $\pi_n (Z_{E})$ is constructible and irreducible in ${\mathcal L}_n (Y)$. Now we set $N_{E} := h (Z_{E})$. Since $\pi_n (N_{E})$ is the image of $\pi_n (Z_{E})$ under the morphism ${\mathcal L}_n (Y) \rightarrow {\mathcal L}_n (X)$ induced by $h$, it follows that $\pi_n (N_{E})$ is constructible and irreducible in ${\mathcal L}_n (X)$. The following result, proved in \cite{nash}, follows easily from the above remarks and Hironaka's resolution of singularities: \begin{prop}[Nash \cite{nash}]\label{nashprop}Let $X$ be an algebraic variety over $k$, a field of characteristic zero. Set ${\mathcal N} (X) := \pi^{-1}_{0} (X_{\rm sing}) \setminus {\mathcal L} (X_{\rm sing})$. For every $n \geq 0$, $\pi_n ({\mathcal N} (X))$ is a constructible subset of ${\mathcal L}_n (X)$. Denote by $W^1_n, \dots, W^{r (n)}_n$ the irreducible components of $\pi_n ({\mathcal N} (X))$.
The mapping $n \mapsto r(n)$ is nondecreasing and bounded by the number $\tau (X)$ of essential components occurring in a resolution of $X$. \end{prop} Up to renumbering, we may assume that $W^{i}_{n +1}$ maps to $\overline {W^{i}_{n}}$ for $n \gg 0$. Let us call the family $(W^{i}_{n})_{n \gg 0}$ a Nash family. Nash shows furthermore that for every Nash family $(W^{i}_{n})_{n \gg 0}$ there exists a unique essential component $E$ in a given resolution $h : Y \rightarrow X$ of $X$ such that $\overline {\pi_n (N_{E})} = \overline {W^{i}_{n}}$ for $n \gg 0$. Now, we can formulate the Nash problem: \begin{problem}[Nash \cite{nash} p.36]\label{nashprob} Is there always a corresponding Nash family for an essential component? In general, how completely do the essential components correspond to Nash families? What is the relation between $\tau (X)$ and $\lim r (n)$? \end{problem} Let $W$ be a constructible subset of some variety $X$. We denote by ${\rm dim} \, W$ the supremum of the dimensions of the irreducible components of the closure of $W$ in $X$. Let $h : Y \rightarrow X$ be a proper birational morphism with $Y$ a smooth variety. Let $E$ be a codimension 1 irreducible component of the exceptional locus of $h$ in $Y$. We denote by $\nu (E) - 1$ the length of $\Omega^d_{Y} / h^{\ast} \Omega^d_{X}$ at the generic point of $E$. Here $\Omega^d_{X}$ denotes the $d$-th exterior power of the sheaf $\Omega^1_{X}$ of differentials on $X$. \begin{prop}\label{codim2}Let $X$ be a variety of pure dimension $d$ over $k$, a field of characteristic 0. Let $h : Y \rightarrow X$ be a proper birational morphism with $Y$ a smooth variety and let $U$ be a non empty open subset of a codimension 1 irreducible component $E$ of the exceptional locus of $h$ in $Y$. Then $${\rm dim} \, \pi_{n} (N_U) = (n + 1) \, d - \nu (E) $$ for $n \gg 0$. \end{prop} \begin{proof}By Theorem 6.1 of \cite{arcs}, the image of $[\pi_{n} (N_U)] {\mathbf L}^{- (n + 1) d}$ in $\widehat{{K_0 ({\rm Var}_k)}}$ converges to $\mu (N_U)$. Since ${\rm dim} \, \pi_{n} (N_U) \leq (n+ 1)d$ by Lemma 4.3 of \cite{arcs}, one deduces that ${\rm dim} \, \pi_{n} (N_U) - (n + 1) \, d$ has a limit. To conclude we first remark that $\overline{\pi_{n} (N_{U})} = \overline{\pi_{n} (N_{E})}$ for any non empty open subset $U$ in $E$. Hence we may assume that $(h^{\ast} \Omega^d_{X}) / {\rm torsion}$ is locally free on a neighborhood of $U$. It then follows from Proposition 6.3.2 in \cite{arcs}, or rather from its proof, that $$ \mu (N_{U}) = {\mathbf L}^{-d} [U] ({\mathbf L} - 1) \sum_{\ell \geq 1} {\mathbf L}^{- \ell \nu (E)} $$ in $\widehat{{K_0 ({\rm Var}_k)}}$. Hence $\mu (N_{U})$ belongs to $F^{\nu (E)}$ and not to $F^{\nu (E) + 1}$, and the result follows. \end{proof} \subsection{Essential components of weak N{\'e}ron models}We shall return now to the setting of the present paper. We shall fix a flat formal $R$-scheme ${\mathcal X}$ of relative dimension $d$ with smooth generic fibre ${\mathcal X}_K$. By a weak N{\'e}ron model of ${\mathcal X}$, we shall mean a weak N{\'e}ron model ${\mathcal U}$ of ${\mathcal X}_K$ together with a morphism $h : {\mathcal U} \rightarrow {\mathcal X}$ inducing the inclusion ${\mathcal U}_K \hookrightarrow {\mathcal X}_K$. As before we shall denote by $U_0^{i}$, $i \in J$, the irreducible components of the special fibre of ${\mathcal U}$. Let $\xi^i$ denote the generic point of $U_0^{i}$.
We shall say $U_0^{i}$ is an essential component with respect to ${\mathcal X}$ if, for every weak N{\'e}ron model ${\mathcal U}'$ of ${\mathcal X}$, the Zariski closure of ${\pi_{0, {\mathcal U}'} (\pi_{0, {\mathcal U}}^{-1} (\xi^i))}$ is an irreducible component of the special fibre of ${\mathcal U}'$. Note that being an essential component is a property relative to ${\mathcal X}$. By their very definition, essential components in different weak N{\'e}ron models of ${\mathcal X}$ are in natural bijection. We have the following analogue of Proposition \ref{nashprop}. \begin{prop}\label{annashprop}Let ${\mathcal X}$ be a flat formal $R$-scheme of relative dimension $d$ with smooth generic fibre ${\mathcal X}_K$. Denote by $W^1_n$, \dots $W^{r (n)}_n$ the irreducible components of the constructible subset $\pi_n ({\rm Gr} ({\mathcal X}))$ of ${\rm Gr}_n ({\mathcal X})$. The mapping $n \mapsto r(n)$ is nondecreasing and bounded by the number $\tau ({\mathcal X})$ of essential components occuring in a weak N{\'e}ron model of ${\mathcal X}$. \end{prop} \begin{proof}Clearly the mapping $n \mapsto r(n)$ is nondecreasing. Let $h : {\mathcal U} \rightarrow {\mathcal X}$ be a weak N{\'e}ron model of ${\mathcal X}$, with irreducible components ${\mathcal U}^i$, $i \in J$. Since ${\mathcal U}^i$ is smooth and irreducible, $\pi_{n, {\mathcal U}} ({\rm Gr} ({\mathcal U}^i))$ is also smooth and irreducible, hence the Zariski closure of $h (\pi_{n, {\mathcal U}} ({\rm Gr} ({\mathcal U}^i))) = \pi_{n, {\mathcal X}} ({\rm Gr} ({\mathcal U}^i))$ in ${\rm Gr}_n ({\mathcal X})$ is irreducible. Since ${\rm Gr} ({\mathcal X})$ is the union of the subschemes ${\rm Gr} ({\mathcal U}^i)$, it follows that $r (n)$ is bounded by $|J|$. Now if ${\mathcal U}^i$ is not an essential component, there exists some weak N{\'e}ron model of ${\mathcal X}$, $h' : {\mathcal U}' \rightarrow {\mathcal X}$, such that, if we denote by $W^i$ the image of ${\rm Gr} ({\mathcal U}^i)$ in ${\rm Gr} ({\mathcal U}')$, $\pi_{n, {\mathcal U}'} (W^i)$ is contained in the Zariski closure of $\pi_{n, {\mathcal U}'} ({\rm Gr} ({\mathcal U}') \setminus W^i)$. It follows that $\pi_{n, {\mathcal X}} ({\rm Gr} ({\mathcal U}^i))$ is contained in the closure of $\pi_{n, {\mathcal X}} ({\rm Gr} ({\mathcal U}) \setminus {\rm Gr} ({\mathcal U}^i))$. The bound $r (n) \leq \tau ({\mathcal X})$ follows. \end{proof} As previously, we may, up to renumbering, assume that $W^{i}_{n +1}$ maps to $\overline {W^{i}_{n}}$ for $n \gg 0$. We shall still call the family $(W^{i}_{n})_{n \gg 0}$ a Nash family. Let $\xi^i_n$ be the generic point of $\overline {W^{i}_{n}}$. By construction $\xi^i_{n +1}$ maps to $\xi^i_{n}$ under the truncation morphism ${\rm Gr}_{n +1} ({\mathcal X}) \rightarrow {\rm Gr}_{n} ({\mathcal X})$, hence to the inverse system $(\xi^{i}_{n})_{n \gg 0}$ corresponds a point $\xi^{i}$ of ${\rm Gr} ({\mathcal X})$. Let $h : {\mathcal U} \rightarrow {\mathcal X}$ be weak N{\'e}ron model of ${\mathcal X}$ with irreducible components ${\mathcal U}^j$, $j \in J$. There is a unique irreducible component ${\mathcal U}^{j (i)}$ such that the point $\xi^i$ belongs to ${\rm Gr} ({\mathcal U}^{j (i)})$. Furthermore, $\overline {h (\pi_n ({\rm Gr} ({\mathcal U}^{j (i)}))} = \overline {W^{i}_{n}}$ for $n \gg 0$ and it follows from the proof of Proposition \ref{annashprop} that $U^{j (i)}_0$ is essential. We have the following analogue of the Nash problem: \begin{problem}\label{annashprob} Is there always a corresponding Nash family for an essential component? 
In general, how completely do the essential components correspond to Nash families? What is the relation between $\tau ({\mathcal X})$ and $\lim r (n)$? \end{problem} For $E$ a locally closed subset of the special fibre of ${\mathcal U}$, we set $Z_E := \pi_{0, {\mathcal U}}^{-1} (E)$ and $N_E := h (Z_E)$. Remark that, for every $n$, $\pi_n (Z_E)$ and $\pi_n (N_E)$ are constructible subsets of ${\rm Gr}_n ({\mathcal U})$ and ${\rm Gr}_n ({\mathcal X})$, respectively. Indeed, $\pi_n (Z_E)$ is constructible since ${\mathcal U}$ is smooth, hence, by Chevalley's Theorem, $\pi_n (N_E) = h (\pi_n (Z_E))$ is constructible as well. We denote by $\nu (U_0^{i}) - 1$ the length of $\Omega^d_{{\mathcal U} | R} / h^{\ast} \Omega^d_{{\mathcal X} | R}$ at the generic point $\xi^i$ of $U_0^{i}$. We also have the following analogue of Proposition \ref{codim2}. \begin{prop}\label{codim3}Let ${\mathcal X}$ be a flat formal $R$-scheme of relative dimension $d$ with smooth generic fibre ${\mathcal X}_K$. Let $h : {\mathcal U} \rightarrow {\mathcal X}$ be a weak N{\'e}ron model of ${\mathcal X}$ and let $E$ be an open dense subset of an irreducible component $U_0^{i}$ of the special fibre of ${\mathcal U}$. Then $${\rm dim} \, \pi_{n} (N_E) = (n + 1) \, d - \nu (U_0^{i}) $$ for $n \gg 0$. \end{prop} \begin{proof}Fix an integer $e \geq 0$. By Lemma \ref{cv2}, for $n \gg e$, \begin{equation*} \begin{split}\dim \pi_n (N_E \cap {\rm Gr}^{(e)} ({\mathcal X})) &= \dim \pi_n (h^{-1} (N_E \cap {\rm Gr}^{(e)} ({\mathcal X}))) - \nu (U_0^{i})\\ &= (n +1) d - \nu (U_0^{i}).\\ \end{split} \end{equation*} On the other hand, it follows from Lemma \ref{dim2} that $$ \dim \pi_n (N_E \cap ({\rm Gr} ({\mathcal X}) \setminus {\rm Gr}^{(e)} ({\mathcal X}))) < (n +1) d - \nu (U_0^{i}), $$ when $n \gg e \gg \nu (U_0^{i})$. \end{proof} \end{document}
\begin{document} \title [] {To specify surfaces of revolution with pointwise 1-type Gauss map in 3-dimensional Minkowski space} \author [Milani] {V. Milani} \author [Shojaei-Fard] {A. Shojaei-Fard} \address{Department of Mathematics, Shahid Beheshti University, 1983963113 Tehran, Iran} \email{[email protected]} \email{a\[email protected]} \subjclass{(2000 MSC) 53A10; 53A35; 53B25; 53C50} \keywords{Minkowski Space, Surfaces of Revolution, Bour's Theorem, Minimal Surfaces, Maximal Surfaces} \date{} \dedicatory{} \commby{} \maketitle \setlinespacing{1.12} \begin{abstract} In this paper, by studying the Gauss map, the Laplacian operator, the curvatures of surfaces in $\mathbb{R}_{1}^{3}$ and Bour's theorem, we identify the surfaces of revolution with the pointwise 1-type Gauss map property in 3-dimensional Minkowski space. \end{abstract} \section*{Introduction} The classification of submanifolds in Euclidean and non-Euclidean spaces is one of the interesting topics in differential geometry, and several contributions in this direction have been made in terms of finite type submanifolds \cite{BCV1, C1, C2, C3, CP1}. On the other hand, Kobayashi \cite{K1} classified space-like ruled minimal surfaces in $\mathbb{R}_{1}^{3}$, and the extension to the Lorentzian case was carried out by de Woestijne in \cite{W1}. This leads to the following problem: \emph{Classify all surfaces in 3-dimensional Minkowski space satisfying the pointwise 1-type Gauss map condition $\Delta N = kN$ for the Gauss map $N$ and some function $k$.} In 2000, D. W. Yoon and Y. H. Kim \cite{KY1} classified minimal ruled surfaces in terms of the pointwise 1-type Gauss map in $\mathbb{R}_{1}^{3}$. On a suitably oriented surface $M$ in $\mathbb{R}^{3}$ with positive Gaussian curvature $K$, one can induce a positive definite second fundamental form $II$ with component functions $e$, $f$, $g$. The second Gaussian curvature is defined by \begin{equation} K_{II}= \frac{1}{(|eg|-f^{2})^{2}}( \left| \begin{array}{rrr} -\frac{1}{2}e_{tt}+f_{st}-\frac{1}{2}g_{ss} & \frac{1}{2}e_{s} & f_{s}-\frac{1}{2}e_{t} \\ f_{t}-\frac{1}{2}g_{s} & e & f\\ \frac{1}{2}g_{t} & f & g \end{array} \right| -\left| \begin{array}{rrr} 0 & \frac{1}{2}e_{t} & \frac{1}{2}g_{s} \\ \frac{1}{2}e_{t} & e & f\\ \frac{1}{2}g_{s} & f & g \end{array} \right| ). \end{equation} (See \cite{KY2}.) This notion can be extended to surfaces in $\mathbb{R}_{1}^{3}$. In 2004, D. W. Yoon and Y. H. Kim \cite{KY2} classified ruled surfaces in terms of the second Gaussian curvature, the mean curvature and the Gaussian curvature in 3-dimensional Minkowski space. On the other hand, in 2001, Ikawa \cite{I1} proved Bour's theorem in $\mathbb{R}_{1}^{3}$. He showed that \begin{quote} \emph{A generalized helicoid is isometric to a surface of revolution in $\mathbb{R}_{1}^{3}$.} \end{quote} In this paper, the above problem is answered for surfaces of revolution in 3-dimensional Minkowski space. \section{Classification} Let $\mathbb{R}_{1}^{3}$ be the 3-dimensional Minkowski space with the scalar product $\langle \ , \ \rangle$ of index 1 defined by \begin{equation} \langle X,Y\rangle = X_{1}Y_{1} +X_{2}Y_{2} - X_{3}Y_{3} \end{equation} for all vectors $X=(X_{i})$ and $Y=(Y_{i})$ in $\mathbb{R}_{1}^{3}$. A vector $X$ of $\mathbb{R}_{1}^{3}$ is said to be {\it space-like} if $\langle X,X \rangle >0$ or $X=0$, {\it time-like} if $\langle X,X \rangle <0$, and {\it light-like} or {\it null} if $\langle X,X \rangle =0$ and $X\neq 0$.
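As a simple illustration of these notions, which we add here for concreteness, note that with respect to the scalar product above $$ \langle (1,0,0),(1,0,0)\rangle = 1>0, \qquad \langle (0,0,1),(0,0,1)\rangle = -1<0, \qquad \langle (1,0,1),(1,0,1)\rangle = 1-1=0, $$ so that $(1,0,0)$ is space-like, $(0,0,1)$ is time-like and $(1,0,1)$ is light-like.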
A time-like or light-like vector in $\mathbb{R}_{1}^{3}$ is said to be {\it causal}. \begin{lem} \label{1} There are no causal vectors in $\mathbb{R}_{1}^{3}$ orthogonal to a time-like vector \cite{G1}. \end{lem} The Lorentzian cross product $X\times Y$ is given by \begin{equation} X\times Y=(X_{2}Y_{3}-X_{3}Y_{2} , X_{3}Y_{1}-X_{1}Y_{3},X_{2}Y_{1}-X_{1}Y_{2} ). \end{equation} A curve in $\mathbb{R}_{1}^{3}$ is called space-like, time-like or light-like if the tangent vector at any point is space-like, time-like or light-like, respectively. A plane in $\mathbb{R}_{1}^{3}$ is space-like, time-like or light-like if its Euclidean unit normal is time-like, space-like or light-like, respectively. A surface in $\mathbb{R}_{1}^{3}$ is space-like, time-like or light-like if the tangent plane at any point is space-like, time-like or light-like, respectively. Let $M$ be a surface in $\mathbb{R}_{1}^{3}$. The map $N:M\longrightarrow Q^{2}(\varepsilon) \subset \mathbb{R}_{1}^{3}$ which sends each point of $M$ to the unit normal vector to $M$ at that point is called the {\it Gauss map} of the surface $M$. Here $\varepsilon(=\pm1)$ denotes the sign of the vector field $N$ and $Q^{2}(\varepsilon)$ is a 2-dimensional space form as follows: \begin{equation} Q^{2}(\varepsilon) = \left\lbrace \begin{array}{c l} S^{2}_{1}(1) & \text{in $\mathbb{R}_{1}^{3}$ $(\varepsilon = 1)$} \\ H^{2}(-1) & \text{in $\mathbb{R}_{1}^{3}$ $(\varepsilon = -1)$} \end{array} \right. \end{equation} It is well known that in terms of local coordinates $(x_{i})$ of $M$ the Laplacian can be written as \begin{equation} \label{*} \Delta =-\frac{1}{\sqrt{|\textbf{G}|}} \sum_{i,j} \frac{\partial}{\partial x^{i}}(\sqrt{|\textbf{G}|}g^{ij}\frac{\partial}{\partial x^{j}}) \end{equation} where $\textbf{G}=\det(g_{ij})$, $(g^{ij})=(g_{ij})^{-1}$ and $(g_{ij})$ are the components of the metric of $M$ with respect to $(x_{i})$; the special case of (\ref{*}) used in the propositions below is written out after the discussion of ruled surfaces. Now, we define a ruled surface $M$ in the three-dimensional Minkowski space $\mathbb{R}_{1}^{3}$. Let $I$ be an open interval in the real line $\mathbb{R}$. Let $\alpha=\alpha(s)$ be a curve in $\mathbb{R}_{1}^{3}$ defined on $I$ and $\beta=\beta(s)$ a transversal vector field along $\alpha$. For an open interval $J$ in $\mathbb{R}$, let $M$ be a ruled surface parameterized by \begin{equation} x=x(s,t)=\alpha(s)+t\beta(s), \qquad s\in I,\ t\in J. \end{equation} According to the character of the base curve $\alpha$ and the director curve $\beta$ the ruled surfaces are classified into the following six groups. If the base curve $\alpha$ is space-like or time-like, then the ruled surface $M$ is said to be of {\it type $M_{+}$} or {\it type $M_{-}$}, respectively. Also the ruled surface of type $M_{+}$ can be divided into three types. When $\beta$ is space-like, it is said to be of {\it type $M^{1}_{+}$} or {\it $M^{2}_{+}$} if $\beta'$ is non-null or light-like, respectively. By Lemma \ref{1}, when $\beta$ is time-like, $\beta'$ must be space-like. In this case, $M$ is said to be of {\it type $M^{3}_{+}$}. On the other hand, the ruled surface of type $M_{-}$ is said to be of {\it type $M^{1}_{-}$} or {\it $M^{2}_{-}$} if $\beta'$ is non-null or light-like, respectively. Note that in the case of type $M_{-}$ the director curve $\beta$ is always space-like. The ruled surface of type $M^{1}_{+}$ or $M^{2}_{+}$ (resp. $M^{3}_{+}$, $M^{1}_{-}$, $M^{2}_{-}$) is clearly space-like (resp. time-like). If the base curve $\alpha$ and the director curve $\beta$ are light-like, then the ruled surface is called a {\it null scroll}, and it is a time-like surface.
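The following special case of (\ref{*}) is all that is needed for the surfaces of revolution studied below; it is not written out in the proofs, so we record it here for the reader's convenience. If, in coordinates $(x^{1},x^{2})=(s,t)$, the induced metric is positive definite with $g_{11}=E(t)>0$, $g_{12}=g_{21}=0$ and $g_{22}=1$, then $\textbf{G}=E$, $g^{11}=1/E$, $g^{22}=1$, and (\ref{*}) becomes $$ \Delta=-\frac{1}{E}\frac{\partial^{2}}{\partial s^{2}}-\frac{\partial^{2}}{\partial t^{2}}-\frac{E'(t)}{2E}\frac{\partial}{\partial t}. $$ Applying this operator componentwise to the Gauss map $N$ gives the expressions for $\Delta N$ appearing in the propositions below.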
If the base curve $\alpha$ and the director curve $\beta$ are light-like, then the ruled surface is called a {\it null scroll}, and it is a time-like surface. Now we recall the definitions of a surface of revolution and of a generalized helicoid in $\mathbb{R}_{1}^{3}$. For an open interval $I\subset \mathbb{R}$, let $\gamma:I\longrightarrow \Pi$ be a curve in a plane $\Pi$ in $\mathbb{R}_{1}^{3}$ (the {\it profile curve}) and let $l$ be a straight line in $\Pi$ which does not intersect the curve $\gamma$ (the {\it axis}). A surface of revolution in $\mathbb{R}_{1}^{3}$ is obtained by the Lorentzian rotation of $\gamma$ around $l$. Suppose that, as the profile curve $\gamma$ rotates around the axis $l$, it is simultaneously displaced parallel to $l$ so that the speed of displacement is proportional to the speed of rotation. Then the resulting surface is called a {\it generalized helicoid}. In the following we first give some examples of surfaces of revolution in $\mathbb{R}_{1}^{3}$, which will be used later. \begin{ex} \label{2} For constants $a,b$, let $M$ be a surface in $\mathbb{R}_{1}^{3}$ with the parametric representation \begin{equation} R(s,t)=\Big( \sqrt{(t+a)^{2}-b^{2}}\cos s , \sqrt{(t+a)^{2}-b^{2}}\sin s , \int\sqrt{\frac{b^{2}}{(t+a)^{2}-b^{2}}}\,dt \Big), \end{equation} where $b^{2}<(t+a)^{2}$. It is called the {\it surface of revolution of the 1st kind as space-like}. \end{ex} \begin{ex} \label{3} For constants $a,b$, let $M$ be a surface in $\mathbb{R}_{1}^{3}$ with the parametric representation \begin{equation} R(s,t)=\Big(\sqrt{b^{2}-(t+a)^{2}}\sinh s , -b\sin^{-1}\Big(\frac{\sqrt{b^{2}-(t+a)^{2}}}{-b}\Big) , \sqrt{b^{2}-(t+a)^{2}}\cosh s\Big), \end{equation} where $(t+a)^{2}<b^{2}$. It is called the {\it surface of revolution of the 2nd kind as space-like}. \end{ex} \begin{ex} \label{4} For constants $a,b$, let $M$ be a surface in $\mathbb{R}_{1}^{3}$ with the parametric representation \begin{equation} R(s,t)=\Big( \sqrt{(t+a)^{2}-b^{2}}\cosh s ,-b\cosh^{-1}\Big(\frac{\sqrt{(t+a)^{2}-b^{2}}}{-b}\Big) ,\sqrt{(t+a)^{2}-b^{2}}\sinh s\Big), \end{equation} where $b^{2}<(t+a)^{2}$. It is called the {\it surface of revolution of the 2nd kind as time-like}. \end{ex} \begin{ex} \label{5} For constants $a,b$, let $M$ be a surface in $\mathbb{R}_{1}^{3}$ with the parametric representation \begin{equation} R(s,t)=\Big(\int \sqrt{\frac{-b^{2}} { b^{2}+(t+a)^{2}} }\, dt , \sqrt{b^{2}+(t+a)^{2}}\sinh s , \sqrt{b^{2}+(t+a)^{2}}\cosh s\Big). \end{equation} It is called the {\it surface of revolution of the 3rd kind as Lorentzian}. \end{ex} \begin{prop} \label{6} Let $M$ be the helicoid of the 1st kind as space-like, $$x(s,t)=((t+a)\cos s , (t+a)\sin s , -bs), $$ where $|a|>|b|>0$ and $t<\min(-a-b,-a+b)$ or $t>\max(-a-b,-a+b)$. This surface is isometric to a maximal surface of revolution with the pointwise 1-type Gauss map property. \end{prop} \begin{proof} According to Bour's theorem in Minkowski 3-space \cite{I1}, for each helicoidal surface there exists a surface of revolution isometric to it. In particular, the helicoid of the 1st kind as space-like is isometric to the surface of revolution of the 1st kind as space-like. By the parametrization of this surface of revolution, its Gauss map is given by $$N=\frac{R_{s}\times R_{t}}{\parallel R_{s}\times R_{t} \parallel }=-\frac{1}{\sqrt{(t+a)^{2}-b^{2}}}(b\cos s , b\sin s ,t+a). $$
The components $(g_{ij})$ of the metric (the first fundamental form) of this surface are $$E=g_{11}=\langle R_{s},R_{s}\rangle=(t+a)^{2}-b^{2},$$ $$F=g_{12}=\langle R_{s},R_{t}\rangle=0, $$ $$F=g_{21}= \langle R_{t},R_{s}\rangle=0, $$ $$G=g_{22}=\langle R_{t},R_{t}\rangle=1.$$ By (\ref{*}), $$\Delta N=2b^{2}( (t+a)^{2}-b^{2} ) ^{-\frac{5}{2}}(b\cos s , b\sin s ,t+a). $$ Hence $\Delta N = kN$ for the function $k=-2b^{2}( (t+a)^{2}-b^{2} ) ^{-2}$. In other words, this surface of revolution has the pointwise 1-type Gauss map property. On the other hand, the second fundamental form of the surface of revolution of the 1st kind as space-like has components $$e=\langle R_{ss},N\rangle=b, $$ $$f=\langle R_{st},N\rangle=\langle R_{ts},N\rangle=0,$$ $$g=\langle R_{tt},N\rangle=-b( (t+a)^{2}-b^{2} )^{-1}.$$ The mean curvature $H$ is given by $$ H=\frac{Eg-2Ff+Ge}{2|EG-F^{2}|}=0. $$ Therefore, the surface of revolution of the 1st kind as space-like is a maximal surface and its Gauss map is of pointwise 1-type. \end{proof} \begin{prop} \label{7} Let $M$ be the helicoid of the 2nd kind as space-like, $$x(s,t)=((t+a)\cosh s , -bs , (t+a)\sinh s ), $$ where $|b|>|a|$ and $\min(-a-b,-a+b)<t<\max(-a-b,-a+b)$. This surface is isometric to a maximal surface of revolution with the pointwise 1-type Gauss map property. \end{prop} \begin{proof} According to Bour's theorem in Minkowski 3-space \cite{I1}, for every helicoidal surface one can find a surface of revolution isometric to it. In particular, the helicoid of the 2nd kind as space-like is isometric to the surface of revolution of the 2nd kind as space-like. By the parametrization of this surface of revolution, its Gauss map is given by $$N=\frac{R_{s}\times R_{t}}{\parallel R_{s}\times R_{t} \parallel }=\frac{1}{\sqrt{b^{2}-(t+a)^{2}}}( -b\sinh s ,t+a , -b\cosh s). $$ The components $(g_{ij})$ of the metric of this surface are $$E=g_{11}=\langle R_{s},R_{s}\rangle=b^{2}-(t+a)^{2},$$ $$F=g_{12}=\langle R_{s},R_{t}\rangle=0, $$ $$F=g_{21}= \langle R_{t},R_{s}\rangle=0, $$ $$G=g_{22}=\langle R_{t},R_{t}\rangle=1.$$ By (\ref{*}), $$\Delta N =-2b^{2}(b^{2}-(t+a)^{2})^{-\frac{5}{2}}( -b\sinh s ,t+a , -b\cosh s).$$ Hence $\Delta N = kN$ for the function $k=-2b^{2}(b^{2}-(t+a)^{2})^{-2}$. In other words, this surface of revolution has the pointwise 1-type Gauss map property. Moreover, the second fundamental form of the surface of revolution of the 2nd kind as space-like has components $$e=\langle R_{ss},N\rangle=b, $$ $$f=\langle R_{st},N\rangle=\langle R_{ts},N\rangle=0,$$ $$g=\langle R_{tt},N\rangle=-b(b^{2}-(t+a)^{2})^{-1}.$$ The mean curvature $H$ is given by $$ H=\frac{Eg-2Ff+Ge}{2|EG-F^{2}|}=0. $$ Therefore, the surface of revolution of the 2nd kind as space-like is a maximal surface and its Gauss map has the pointwise 1-type property. \end{proof}
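As a consistency check of the computations in Propositions \ref{6} and \ref{7} (added here for illustration), the following sympy sketch reproduces, for the surface of revolution of the 1st kind, the first fundamental form, the relation $\Delta N=kN$ and $H=0$, and verifies that the helicoid of the 1st kind has the same first fundamental form, as asserted by Bour's theorem. The assumption $(t+a)^{2}>b^{2}$ of Example \ref{2} is used so that all square roots are real.
\begin{verbatim}
# Sketch verifying Proposition 6 for the surface of revolution of the 1st
# kind (Example 2): first fundamental form, Delta N = k N, and H = 0.  The
# induced metric is also compared with that of the helicoid of the 1st kind.
# Assumptions: sympy available and (t+a)^2 > b^2, so that rho is real.
import sympy as sp

s, t, a, b = sp.symbols('s t a b', real=True)
rho = sp.sqrt((t + a)**2 - b**2)

def dot(X, Y):                       # index-1 scalar product
    return X[0]*Y[0] + X[1]*Y[1] - X[2]*Y[2]

def cross(X, Y):                     # Lorentz cross product
    return sp.Matrix([X[1]*Y[2] - X[2]*Y[1],
                      X[2]*Y[0] - X[0]*Y[2],
                      X[1]*Y[0] - X[0]*Y[1]])

# tangent vectors of R(s,t); the third component of R_t is z'(t) = b/rho
Rs = sp.Matrix([-rho*sp.sin(s), rho*sp.cos(s), 0])
Rt = sp.Matrix([rho.diff(t)*sp.cos(s), rho.diff(t)*sp.sin(s), b/rho])
E, F, G = dot(Rs, Rs), dot(Rs, Rt), dot(Rt, Rt)
print(sp.simplify(E - rho**2), sp.simplify(F), sp.simplify(G - 1))   # 0 0 0

# the helicoid of the 1st kind has the same first fundamental form
x = sp.Matrix([(t + a)*sp.cos(s), (t + a)*sp.sin(s), -b*s])
print(sp.simplify(dot(x.diff(s), x.diff(s)) - E),
      sp.simplify(dot(x.diff(s), x.diff(t)) - F),
      sp.simplify(dot(x.diff(t), x.diff(t)) - G))                    # 0 0 0

# Gauss map N = -(R_s x R_t)/rho, since <R_s x R_t, R_s x R_t> = b^2-(t+a)^2
n = cross(Rs, Rt)
print(sp.simplify(dot(n, n) - (b**2 - (t + a)**2)))                  # 0
N = -n / rho

# Laplacian of the text for the metric diag(rho^2, 1): sqrt|G| = rho
lap = lambda f: sp.simplify(-(sp.diff(sp.diff(f, s)/rho, s)
                              + sp.diff(rho*sp.diff(f, t), t))/rho)
k = -2*b**2/((t + a)**2 - b**2)**2
print([sp.simplify(lap(N[i]) - k*N[i]) for i in range(3)])           # [0, 0, 0]

# mean curvature: E g - 2 F f + G e = 0, hence H = 0 (maximal surface)
e = dot(Rs.diff(s), N); f = dot(Rs.diff(t), N); g = dot(Rt.diff(t), N)
print(sp.simplify(E*g - 2*F*f + G*e))                                # 0
\end{verbatim}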
\begin{prop} \label{8} Let $M$ be the helicoid of the 2nd kind as time-like, $$x(s,t)=( (t+a)\cosh s ,-bs , (t+a)\sinh s ), $$ where $|a|>|b|>0$ and $t<\min(-a-b,-a+b)$ or $ t>\max(-a-b,-a+b)$. This surface is isometric to a minimal surface of revolution with the pointwise 1-type Gauss map property. \end{prop} \begin{proof} By Bour's theorem in Minkowski 3-space, the helicoid of the 2nd kind as time-like is isometric to the surface of revolution of the 2nd kind as time-like. The Gauss map of this surface of revolution is $$N=\frac{R_{s}\times R_{t}}{\parallel R_{s}\times R_{t} \parallel }=\frac{1}{\sqrt{(t+a)^{2}-b^{2}}}( -b\sinh s ,t+a , -b\cosh s). $$ The components $(g_{ij})$ of the metric of this surface are $$E=g_{11}=\langle R_{s},R_{s}\rangle=-((t+a)^{2}-b^{2}),$$ $$F=g_{12}=\langle R_{s},R_{t}\rangle=0, $$ $$F=g_{21}= \langle R_{t},R_{s}\rangle=0, $$ $$G=g_{22}=\langle R_{t},R_{t}\rangle =(t+a)^{2}((t+a)^{2}-b^{2})^{-1}+(t+a)^{2}((t+a)^{2}-b^{2})^{-1}\sinh^{2}\Big(\frac{((t+a)^{2}-b^{2})^{\frac{1}{2}}}{-b}\Big)\cosh^{-4}\Big(\frac{((t+a)^{2}-b^{2})^{\frac{1}{2}}}{-b}\Big).$$ By (\ref{*}), $$\Delta N =-2b^{2}((t+a)^{2}-b^{2})^{-\frac{5}{2}}( -b\sinh s ,t+a , -b\cosh s).$$ Hence $\Delta N = kN$ for the function $k=-2b^{2}((t+a)^{2}-b^{2})^{-2}$. In other words, this surface of revolution has the pointwise 1-type Gauss map property. On the other hand, by calculating the second fundamental form of the surface of revolution of the 2nd kind as time-like (in the same way as for the surface of revolution of the 2nd kind as space-like) \cite{I1}, one sees that the mean curvature $H$ is given by $$ H=\frac{Eg-2Ff+Ge}{2|EG-F^{2}|}=0. $$ Therefore, the surface of revolution of the 2nd kind as time-like is a minimal surface and its Gauss map is of pointwise 1-type. \end{proof} \begin{prop} \label{9} Let $M$ be the helicoid of the 3rd kind as Lorentzian, $$x(s,t)=( bs , (t+a)\sinh s , (t+a)\cosh s ), $$ where $|a|<|b|$ and $\min(-a-b,-a+b)<t<\max(-a-b,-a+b)$. This surface is isometric to a minimal surface of revolution with the pointwise 1-type Gauss map property. \end{prop} \begin{proof} Bour's theorem gives an isometry between the helicoid of the 3rd kind as Lorentzian and the surface of revolution of the 3rd kind as Lorentzian. Its Gauss map is given by $$N=\frac{R_{s}\times R_{t}}{\parallel R_{s}\times R_{t} \parallel }=-((t+a)^{2}+b^{2})^{-\frac{1}{2}}(t+a , ib\sinh s , ib\cosh s). $$ The components $(g_{ij})$ of the metric of this surface are $$E=g_{11}=\langle R_{s},R_{s}\rangle=(t+a)^{2}+b^{2},$$ $$F=g_{12}=\langle R_{s},R_{t}\rangle=0, $$ $$F=g_{21}= \langle R_{t},R_{s}\rangle=0, $$ $$G=g_{22}=\langle R_{t},R_{t}\rangle=-1.$$ By (\ref{*}), $$\Delta N =2b^{2}((t+a)^{2}+b^{2})^{-\frac{5}{2}}(t+a , ib\sinh s , ib\cosh s).$$ Hence $\Delta N = kN$ for the function $k=-2b^{2}((t+a)^{2}+b^{2})^{-2}$. This means that this surface of revolution has the pointwise 1-type Gauss map property. The second fundamental form of the surface of revolution of the 3rd kind as Lorentzian has components $$e=\langle R_{ss},N\rangle=ib, $$ $$f=\langle R_{st},N\rangle=\langle R_{ts},N\rangle,$$ $$g=\langle R_{tt},N\rangle=ib((t+a)^{2}+b^{2})^{-1}.$$ The mean curvature $H$ is given by $$ H=\frac{Eg-2Ff+Ge}{2|EG-F^{2}|}=0. $$ Therefore, the surface of revolution of the 3rd kind as Lorentzian is a minimal surface and its Gauss map is of pointwise 1-type. \end{proof} \begin{prop} \label{10} Let $M$ be the conjugate of Enneper's surface of the 2nd kind as space-like or time-like, $$x(s,t)=(hs^{2}+t,h(\frac{s^{3}}{3}-s)+ts, h(\frac{s^{3}}{3}+s)+ts). $$ This surface is isometric to a minimal or maximal surface of revolution with the pointwise 1-type Gauss map property.
\end{prop} \begin{proof} According to Bour's theorem in Minkowski 3-space \cite{I1,W1}, the conjugate of Enneper's surface of the 2nd kind as space-like or time-like is isometric to the minimal or maximal surface of revolution of Enneper of the 2nd or 3rd kind, $$x(s,t)=(at^{3}+t-s^{2}t+b,-2st,at^{3}-t-s^{2}t+b) $$ or $$x(s,t)=(-at^{3}+t-s^{2}t+b,-2st,-at^{3}-t-s^{2}t+b), $$ respectively, where $a>0$, $b \in \mathbb{R}$ and $t\neq 0$. On the other hand, the conjugate of Enneper's surface of the 2nd kind as space-like and the surface of revolution of Enneper of the 2nd kind have the same Gauss map. Also, the conjugate of Enneper's surface of the 2nd kind as time-like and the surface of revolution of Enneper of the 3rd kind have the same Gauss map. Since the conjugate of Enneper's surface of the 2nd kind as space-like or time-like has the pointwise 1-type Gauss map property \cite{KY1}, the proof is completed. \end{proof} \begin{prop} \label{11} Let $M$ be a non-developable ruled surface of type $M^{1}_{+}$ or $M^{3}_{+}$ in $\mathbb{R}_{1}^{3}$ satisfying one of the following conditions along each ruling: $$aK_{II}+bH=\text{constant},\ a,b\in \mathbb{R}-\{0\} ,\ 2a-b\neq 0,$$ or $$aH+bK=\text{constant},\ a\neq 0,\ b\in \mathbb{R},$$ or $$aK_{II}+bK=\text{constant},\ a\neq 0,\ b\in \mathbb{R}.$$ Then $M$ is isometric to an open part of one of the following surfaces of revolution: \begin{enumerate} \item the surface of revolution of the 1st kind as space-like, \item the surface of revolution of the 2nd kind as space-like, \item the surface of revolution of the 3rd kind as Lorentzian. \end{enumerate} \end{prop} \begin{proof} According to Theorems 4.1, 4.2 and 4.4 in \cite{KY2}, every non-developable ruled surface of type $M^{1}_{+}$ or $M^{3}_{+}$ in $\mathbb{R}_{1}^{3}$ satisfying one of the above conditions is an open part of one of the following surfaces: \begin{quote} the helicoid of the 1st or 2nd kind as space-like, \end{quote} \begin{quote} the helicoid of the 3rd kind as Lorentzian. \end{quote} On the other hand, Propositions \ref{6}, \ref{7} and \ref{9} show that these surfaces are isometric to \begin{quote} the surface of revolution of the 1st or 2nd kind as space-like, \end{quote} \begin{quote} the surface of revolution of the 3rd kind as Lorentzian, \end{quote} respectively. \end{proof} \begin{prop} \label{12} Let $M$ be a non-developable ruled surface of type $M^{1}_{-}$ in $\mathbb{R}_{1}^{3}$ which is not an open part of the helicoid of the 1st kind as time-like and which satisfies one of the following conditions along each ruling: $$aK_{II}+bH=\text{constant},\ a,b\in \mathbb{R}-\{0\},\ 2a-b\neq 0,$$ or $$aH+bK=\text{constant},\ a\neq 0,\ b\in \mathbb{R},$$ or $$aK_{II}+bK=\text{constant},\ a\neq 0,\ b\in \mathbb{R}.$$ Then $M$ is isometric to an open part of the following surface: \begin{center} the surface of revolution of the 2nd kind as time-like. \end{center} \end{prop} \begin{proof} According to Theorems 4.1, 4.2 and 4.4 in \cite{KY2}, every non-developable ruled surface of type $M^{1}_{-}$ which is not an open part of the helicoid of the 1st kind as time-like in $\mathbb{R}_{1}^{3}$ and which satisfies one of the above conditions is an open part of the helicoid of the 2nd kind as time-like. On the other hand, Proposition \ref{8} shows that this surface is isometric to the surface of revolution of the 2nd kind as time-like. This completes the proof.
\end{proof} \begin{prop} \label{13} Let $M$ be a non-developable ruled surface of type $M^{2}_{+}$ or $M^{2}_{-}$ in $\mathbb{R}_{1}^{3}$ satisfying one of the following conditions along each ruling: $$aK_{II}+bH=\text{constant},\ a,b\in \mathbb{R}-\{0\} ,\ 2a-b\neq 0,$$ or $$aH+bK=\text{constant},\ a\neq 0,\ b\in \mathbb{R}.$$ Then $M$ is isometric to an open part of the following surfaces of revolution: \begin{center} the surfaces of Enneper of the 2nd and 3rd kind. \end{center} \end{prop} \begin{proof} By Theorems 4.1 and 4.2 in \cite{KY2}, every non-developable ruled surface of type $M^{2}_{+}$ or $M^{2}_{-}$ in $\mathbb{R}_{1}^{3}$ satisfying one of the above conditions is an open part of the conjugate of Enneper's surface of the 2nd kind as space-like or time-like. On the other hand, Proposition \ref{10} shows that these surfaces are isometric to the surfaces of Enneper of the 2nd and 3rd kind. \end{proof} \begin{cor} \label{14} Let $R$ be a surface of revolution which is isometric to an open part of a non-developable ruled surface $M$ of type $M^{1}_{+}$ or $M^{3}_{+}$, where $M$ satisfies one of the following conditions along each ruling: $$aK_{II}+bH=\text{constant},\ a,b\in \mathbb{R}-\{0\},\ 2a-b\neq 0,$$ or $$aH+bK=\text{constant},\ a\neq 0,\ b\in \mathbb{R},$$ or $$aK_{II}+bK=\text{constant},\ a\neq 0,\ b\in \mathbb{R}.$$ Then $R$ has the pointwise 1-type Gauss map property. \end{cor} \begin{proof} According to Proposition \ref{11}, $R$ is an open part of one of the following surfaces of revolution: \begin{quote} the surface of revolution of the 1st kind as space-like, \end{quote} \begin{quote} the surface of revolution of the 2nd kind as space-like, \end{quote} \begin{quote} the surface of revolution of the 3rd kind as Lorentzian. \end{quote} On the other hand, Propositions \ref{6}, \ref{7} and \ref{9} show that these surfaces have the pointwise 1-type Gauss map property. \end{proof} \begin{cor} \label{15} Let $R$ be a surface of revolution which is isometric to an open part of a non-developable ruled surface $M$ of type $M^{1}_{-}$ in $\mathbb{R}_{1}^{3}$, where $M$ is not an open part of the helicoid of the 1st kind as time-like and satisfies one of the following conditions along each ruling: $$aK_{II}+bH=\text{constant},\ a,b\in \mathbb{R}-\{0\},\ 2a-b\neq 0,$$ or $$aH+bK=\text{constant},\ a\neq 0,\ b\in \mathbb{R},$$ or $$aK_{II}+bK=\text{constant},\ a\neq 0,\ b\in \mathbb{R}.$$ Then $R$ has the pointwise 1-type Gauss map property. \end{cor} \begin{proof} According to Proposition \ref{12}, $R$ is an open part of the surface of revolution of the 2nd kind as time-like. On the other hand, Proposition \ref{8} shows that this surface has the pointwise 1-type Gauss map property. \end{proof} \begin{cor} \label{16} Let $R$ be a surface of revolution which is isometric to an open part of a non-developable ruled surface $M$ of type $M^{2}_{+}$ or $M^{2}_{-}$, where $M$ satisfies one of the following conditions along each ruling: $$aK_{II}+bH=\text{constant},\ a,b\in \mathbb{R}-\{0\},\ 2a-b\neq 0,$$ or $$aH+bK=\text{constant},\ a\neq 0,\ b\in \mathbb{R}.$$ Then $R$ has the pointwise 1-type Gauss map property. \end{cor} \begin{proof} According to Proposition \ref{13}, $R$ is an open part of one of the surfaces of Enneper of the 2nd and 3rd kind. On the other hand, Proposition \ref{10} shows that these surfaces have the pointwise 1-type Gauss map property.
\end{proof} \begin{cor} \label{17} Let $R$ be one of the following surfaces: \begin{enumerate} \item the surface of revolution of the 1st kind as space-like, \item the surface of revolution of the 2nd kind as space-like or time-like, \item the surface of revolution of the 3rd kind as Lorentzian, \item the surfaces of Enneper of the 2nd and 3rd kind. \end{enumerate} Then $R$ is a part of one of the following surfaces: \begin{enumerate} \item a space-like or time-like plane, \item the catenoids of the 1st, 2nd, 3rd, 4th or 5th kind, \item the surfaces of Enneper of the 2nd and 3rd kind. \end{enumerate} \end{cor} \begin{proof} By Propositions \ref{6}, \ref{7}, \ref{8}, \ref{9} and \ref{10}, these surfaces of revolution are minimal or maximal. On the other hand, by \cite{W1}, the only minimal or maximal surfaces of revolution in Minkowski 3-space are open parts of a space-like or time-like plane, of the catenoids of the 1st, 2nd, 3rd, 4th or 5th kind, or of the surfaces of Enneper of the 2nd and 3rd kind. \end{proof} Finally, one arrives at a characterization of the minimal and maximal surfaces of revolution in terms of the pointwise 1-type property. We have \begin{prop} \label{18} Let $R$ be a surface of revolution in $\mathbb{R}_{1}^{3}$. Then $R$ has the pointwise 1-type Gauss map property if and only if $R$ is an open part of a minimal or maximal surface of revolution. \end{prop} \end{document}
\begin{document} \title{Gaussian entanglement in the turbulent atmosphere} \author{M. Bohmann}\email{[email protected]} \affiliation{Institut f\"ur Physik, Universit\"at Rostock, Albert-Einstein-Str. 23, D-18051 Rostock, Germany} \author{A. A. Semenov} \affiliation{Institut f\"ur Physik, Universit\"at Rostock, Albert-Einstein-Str. 23, D-18051 Rostock, Germany} \affiliation{Institute of Physics, NAS of Ukraine, Prospect Nauky 46, UA-03028 Kiev, Ukraine} \author{J. Sperling} \affiliation{Institut f\"ur Physik, Universit\"at Rostock, Albert-Einstein-Str. 23, D-18051 Rostock, Germany} \affiliation{Clarendon Laboratory, University of Oxford, Parks Road, Oxford OX1 3PU, United Kingdom} \author{W. Vogel} \affiliation{Institut f\"ur Physik, Universit\"at Rostock, Albert-Einstein-Str. 23, D-18051 Rostock, Germany} \begin{abstract} We provide a rigorous treatment of the entanglement properties of two-mode Gaussian states in atmospheric channels by deriving and analyzing the input-output relations for the corresponding entanglement test. A key feature of such turbulent channels is a non-trivial dependence of the transmitted continuous-variable entanglement on coherent displacements of the quantum state of the input field. Remarkably, this allows one to optimize the entanglement certification by modifying local coherent amplitudes using a finite, but optimal amount of squeezing. In addition, we propose a protocol which, in principle, renders it possible to transfer the Gaussian entanglement through any turbulent channel over arbitrary distances. Therefore, our approach provides the theoretical foundation for advanced applications of Gaussian entanglement in free-space quantum communication. \end{abstract} \date{\today} \pacs{03.67.Mn, 42.68.Bz, 42.50.Nn, 42.68.Ay} \maketitle \section{Introduction} Based on the fundamental principles of quantum mechanics, quantum protocols can increase the security of communication channels \cite{Gisin}. Quantum-based communication systems using optical fibers are already commercially available. However, one faces the disadvantages of limited flexibility concerning the positions of sender and receiver and of distances bounded to $\sim$100~km due to losses \cite{Fibre1,Fibre2,Fibre3,Fibre4}. An alternative consists in atmospheric free-space channels. In recent years, it has been demonstrated that quantum communication is possible through free-space links \cite{Ursin,Fedrizzi} and even via orbiting satellites \cite{Satellite0,Satellite2,Satellite3,Satellite4,Satellite5}. This opens the possibility of establishing a global quantum communication network. Besides the many experimental demonstrations with discrete variables \cite{Ursin,Fedrizzi,Satellite0,Satellite2,Satellite3}, a different approach is continuous-variable quantum key distribution (CV-QKD) \cite{CV-QKD1,CV-QKD2,CV-QKD3,CV-QKD4,CV-QKD5,CV-QKD6}. The latter works even in the presence of bright daylight \cite{Elser,Heim,Peuntinger}. However, the standard CV-QKD protocols require further improvements, as current methods only support rather limited communication distances \cite{Fossier,Xuan,Jouguet2012,Jouguet2013}. The generation of Gaussian entangled states is nowadays quite advanced and, thus, they serve as the main class of states for improving CV-QKD \cite{Rodo,DiGuglielmo,Su,Madsen,Furrer,Eberle2013}. Gaussian states are fully characterized by the covariance matrix of their field quadratures or, equivalently, of their complex field amplitudes.
Bipartite Gaussian entanglement can always be uncovered by the Simon criterion \cite{Simon2000}, which is based on the partially transposed covariance matrix. Another criterion by Duan {\it et al.} \cite{Duan2000} is necessary and sufficient for a particular choice of the computational or measurement basis. The description of the quantum properties of light after transmission through the turbulent atmosphere requires a quantum theory of atmospheric fading channels with fluctuating losses \cite{Semenov2009,Semenov2010,beamwandering,VSV2016}. Such channels are characterized by a probability distribution of transmission (PDT). The derivation of the PDT relies on detailed knowledge of the atmospheric properties, such as the propagation distance, the weather, and the daytime conditions. It requires one to unify the knowledge on classical atmospheric optics~\cite{Tatarskii} with quantum optics~\cite{VSV2016}. For Gaussian quantum light, one usually measures the field quadratures with balanced homodyne detection. This technique has been adapted for fading atmospheric channels \cite{Elser,Heim,Peuntinger,Semenov2012} by propagating the signal and the reference field, i.e., the local oscillator, in orthogonal polarization modes. Some scenarios with Gaussian entanglement in the atmosphere have been studied \cite{Usenko,GaussSatellites}, but a full analysis of the evolution of bipartite Gaussian entanglement in arbitrary free-space links is still missing. Such a complete analysis would allow one to optimize the transmitted entanglement, e.g., for ensuring a maximal security of CV-QKD protocols. The treatment of secure data transfer through atmospheric channels needs interdisciplinary research, combining the fields of classical atmospheric optics, quantum optics, and quantum information theory. In this contribution, we aim at a full study of the transmission of bipartite Gaussian entanglement through the turbulent atmosphere. For this purpose, we introduce input-output relations for the Simon entanglement criterion. We consider two fundamentally different cases. The first one is the case of uncorrelated fading channels where both modes are subjected to independent losses, e.g., due to propagation in different directions. The second one is the case of correlated fading channels. Based on adaptive methods, we show that any channel can attain this correlated form, which results in entanglement-preserving links. A remarkable consequence of our analysis is a dependence of the Gaussian entanglement on the local coherent displacements of the input fields. An adjustment of these parameters allows one to optimize the Gaussian entanglement transfer through the turbulent atmosphere. The article is structured as follows. In Sec.~\ref{ch:general}, we derive the output covariance matrix and the corresponding entanglement test for the transmission of bipartite Gaussian entanglement through turbulent channels. We focus on the effects of uncorrelated atmospheric losses in Sec.~\ref{ch:uncorrelated}. In Sec.~\ref{ch:adaptive}, we introduce an adaptive method to correlate the channels and discuss the Gaussian entanglement in this case. A summary and conclusions are given in Sec.~\ref{ch:summary}. \section{Gaussian entanglement in fading channels}\label{ch:general} Gaussian states are completely described by the first- and second-order moments of their field quadratures or, equivalently, of bosonic creation and annihilation operators. The Simon entanglement criterion \cite{Simon2000} in the form of Ref.
\cite{ShchukinVogel2005} states that any two-mode Gaussian state is entangled if and only if \begin{equation} \label{Witness} \mathcal{W}=\det V^{\mathrm{PT}}<0, \end{equation} where $V^{\mathrm{PT}}$ is the partial transposition of the matrix \begin{align}\label{eq:SimonMatrix} \begin{aligned} V{=}& \begin{pmatrix} \langle \Delta\hat a^\dagger\Delta \hat a\rangle & \langle \Delta\hat a^{\dagger2}\rangle & \langle \Delta\hat a^\dagger\Delta \hat b\rangle & \langle \Delta\hat a^\dagger\Delta \hat b^\dagger\rangle \\ \langle \Delta\hat a^2\rangle & \langle \Delta\hat a\Delta\hat a^\dagger\rangle & \langle \Delta\hat a\Delta \hat b\rangle & \langle \Delta\hat a\Delta \hat b^\dagger\rangle \\ \langle \Delta\hat a\Delta\hat b^\dagger\rangle & \langle \Delta\hat a^\dagger\Delta\hat b^\dagger\rangle & \langle \Delta\hat b^\dagger\Delta \hat b\rangle & \langle \Delta \hat b^{\dagger2}\rangle \\ \langle \Delta\hat a\Delta\hat b\rangle & \langle \Delta\hat a^\dagger\Delta\hat b\rangle & \langle \Delta\hat b^2\rangle & \langle \Delta \hat b\Delta \hat b^\dagger\rangle \end{pmatrix} \\{=}&\begin{pmatrix}A&C^\dagger\\C&B\end{pmatrix}. \end{aligned} \end{align} Here, $V$ is the second-order matrix in bosonic creation and annihilation operators, with $\hat{a}$ and $\hat{b}$ denoting the annihilation operators of the two field modes and $\Delta\hat x=\hat x-\langle\hat x\rangle$, with $\hat x=\hat a, \hat b$. The matrix $V$ can be given in a block form with $2\times2$ blocks $A$, $B$, and $C$, where $A$ and $B$ are related to the single-mode covariances and $C$ describes the correlations between the modes. The description with bosonic field-mode operators can always be rewritten in terms of quadratures via a linear transformation. Note that the Simon criterion~Eq.~\eqref{Witness} is an entanglement test which indicates entanglement. However, it is not aimed at quantifying the amount of entanglement. For a quantification of Gaussian entanglement, one needs to employ entanglement measures (monotones); see, e.g.,~\cite{SSV13}. In the following, we will introduce the treatment of atmospheric fading channels. The theoretical description of a general two-mode quantum state after transmission through fading channels is a bipartite generalization of the theory established in Ref. \cite{Semenov2009}. In this context, the elements of matrix~(\ref{eq:SimonMatrix}) can be expressed in terms of normally ordered moments, \begin{equation}\label{MomentForm} \langle \hat{a}^{\dagger n}\hat{a}^{m}\hat{b}^{\dagger k}\hat{b}^{l}\rangle_\mathrm{atm.} =\langle T_a^{n+m}T_b^{k+l}\rangle\langle \hat{a}^{\dagger n}\hat{a}^{m}\hat{b}^{\dagger k}\hat{b}^{l}\rangle. \end{equation} Here, $\langle \hat{a}^{\dagger n}\hat{a}^{m}\hat{b}^{\dagger k}\hat{b}^{l}\rangle$ are the field moments of the state and $\langle \hat{a}^{\dagger n}\hat{a}^{m}\hat{b}^{\dagger k}\hat{b}^{l}\rangle_\mathrm{atm.}$ denote the moments after propagating the state through the turbulent atmosphere. Moreover, $T_a$ and $T_b$ denote the amplitude transmission coefficients of the two field modes which are fluctuating according to the joint PDT, $\mathcal{P}(T_a,T_b)$, i.e., \begin{align} \langle T_a^{n+m}T_b^{k+l}\rangle=\int_0^1 dT_a\int_0^1 dT_b\,\mathcal P(T_a,T_b) T_a^{n+m}T_b^{k+l}. \end{align} Due to the negligibly small depolarization effects of the atmosphere \cite{Tatarskii} and no dephasing between different polarizations \cite{Elser,Heim,Peuntinger}, the transmission coefficients can be considered as real random variables \cite{Semenov2012}. 
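As a simple numerical illustration of Eq.~\eqref{MomentForm} (not part of the derivation), the following Python sketch estimates the moments $\langle T_a^{n+m}T_b^{k+l}\rangle$ by Monte Carlo sampling for an assumed PDT (independent beta-distributed transmission coefficients, chosen only as a stand-in for a realistic model) and rescales a few arbitrary example input moments accordingly.
\begin{verbatim}
# Minimal sketch of the moment rescaling: after the fading channel, a
# normally ordered moment <adag^n a^m bdag^k b^l> is the input moment
# multiplied by <T_a^(n+m) T_b^(k+l)>.  The beta-distributed transmission
# coefficients are only an assumed stand-in for a realistic PDT; the input
# moments are arbitrary example values.
import numpy as np

rng = np.random.default_rng(1)
Ta = rng.beta(8, 2, size=200_000)       # assumed PDT of channel a
Tb = rng.beta(8, 2, size=200_000)       # independent channel b

def T_moment(n_plus_m, k_plus_l):
    """Monte-Carlo estimate of <T_a^(n+m) T_b^(k+l)>."""
    return np.mean(Ta**n_plus_m * Tb**k_plus_l)

# input moments: keys are the exponents (n, m, k, l)
moments_in = {(1, 1, 0, 0): 1.38,       # <adag a>
              (0, 0, 1, 1): 1.38,       # <bdag b>
              (0, 1, 0, 1): 1.81}       # <a b>

moments_out = {idx: val * T_moment(idx[0] + idx[1], idx[2] + idx[3])
               for idx, val in moments_in.items()}
print(moments_out)
\end{verbatim}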
In contrast to fading channels with fluctuating loss, deterministic loss channels are characterized by a deterministic PDT, $\mathcal P(T_a,T_b)=\delta(T_a-\sqrt{\eta_a})\delta(T_b-\sqrt{\eta_b})$, where $1-\eta_{a(b)}$ denotes the constant loss in the subsystem $a(b)$. For example, deterministic losses may properly describe the attenuation in optical fibers. The above description of fading losses is general and applies to all passive polarization-preserving turbulent loss media. However, in order to give some realistic examples in the further course of this work, we will shortly comment on existing models of different atmospheric loss regimes which show good agreement with experimental data. In the case of weak turbulence, the leading effect is beam wandering, which describes the wandering of the beam spot at the receiver aperture plane due to atmospheric turbulence. For this effect, a PDT model has been derived \cite{beamwandering}, which agrees well with experiments with a propagation distance of $1.6$ km~\cite{Usenko}. Recently, the approach was generalized for including elliptic beam-shape deformation effects \cite{VSV2016}. This model even applies to conditions of strong turbulence. The derived PDT behaves similar to that in Ref. \cite{Villoresi2012} for a $144$ km free-space link. These examples show that the propagation distance is one of the relevant parameters controlling the transition from weak to strong turbulence. After passing through turbulent channels, the quantum state of light is, in general, not Gaussian anymore. The Simon criterion is solely based on moments of second order. Hence, it yields a complete entanglement characterization of bipartite Gaussian states. For non-Gaussian states, it still certifies the entanglement inherent in the second moments, denoted as the Gaussian part of entanglement. However, it cannot identify entanglement effects related to higher-order correlations. Let us formulate the input-output relations, which connect matrix~\eqref{eq:SimonMatrix} at the source with that at the receivers. For technical details, we refer to~\cite{Supplementary}. The atmospheric output matrix reads \begin{align}\label{IOR_V} V_{\rm atm.}=V_{\langle T_{a}^2\rangle,\langle T_{b}^2\rangle,\Gamma} +\begin{pmatrix} \vec\mu_a\vec\mu_a^\dagger & \Delta\Gamma \vec\mu_a\vec\mu_b^\dagger \\ \Delta\Gamma \vec\mu_b\vec\mu_a^\dagger & \vec\mu_b\vec\mu_b^\dagger \end{pmatrix}. \end{align} In the following, we discuss this result in detail including the notations and the used symbols. The effect of a deterministic loss, $1-\eta_{a(b)}$, is represented by the attenuated matrix $V_{\eta_a,\eta_b,1}$, which has been extensively studied \cite{Filippov}. Here, we have $\eta_{a(b)}=\langle T_{a(b)}^2\rangle$. The atmospheric matrix $V_{\rm atm.}$ is further characterized by the two correlation coefficients \begin{align}\label{CorrPara} \Gamma=\frac{\langle T_a T_b\rangle}{\sqrt{\langle T_a^2\rangle\langle T_b^2\rangle}} \quad\text{and}\quad \Delta\Gamma=\frac{\langle \Delta T_a\Delta T_b\rangle}{\sqrt{\langle \Delta T_a^2\rangle\langle \Delta T_b^2\rangle}}. \end{align} It is easy to see that $\Gamma,|\Delta\Gamma|\in[0,1]$ \cite{CSI}. The $\Gamma$ index in the matrix $V_{\eta_a,\eta_b,\Gamma}$ [cf. Eq.~\eqref{IOR_V}] indicates that the correlations between the two modes are diminished by a factor $\Gamma<1$. That is, the correlation block in Eq.~\eqref{eq:SimonMatrix} maps as $C\mapsto\Gamma C$. 
Moreover, the second term in Eq.~\eqref{IOR_V} depends on the vectors of local displacement $\vec\mu_a=\sqrt{\langle \Delta T_a^2\rangle}(\langle \hat a^\dagger\rangle,\langle\hat a\rangle)^{\rm T}$ and $\vec\mu_b=\sqrt{\langle \Delta T_b^2\rangle}(\langle \hat b^\dagger\rangle, \langle\hat b\rangle)^{\rm T}$. Note that these vectors are scaled with the atmospheric fluctuations of the transmission coefficients. Hence, fading channels lead to a strict dependence of the entanglement certification on the coherent amplitudes, unlike in the case of deterministic attenuation ($\langle \Delta T_{a}^2\rangle=\langle \Delta T_{b}^2\rangle=0$). Again, the off-diagonal (intermode) part of the displacement contribution in Eq.~\eqref{IOR_V} is scaled with a correlation coefficient, $\Delta\Gamma$. Thus, major channel characteristics are determined by the two correlation parameters $\Gamma$ and $\Delta\Gamma$, which will be studied for different scenarios later on. The corresponding entanglement criterion, $\mathcal{W}_{\rm atm.}=\det V_{\rm atm.}^{\rm PT}<0$, is calculated by inserting the partial transposition of $V_{\rm atm.}$ in Eq.~\eqref{IOR_V} into Eq.~\eqref{Witness}. This results in~\cite{Supplementary} \begin{align}\label{eq:WitnessIOR} \begin{aligned} \mathcal{W}_{\rm atm.}=&\Gamma^2\mathcal{W}_{\langle T_{a}^2\rangle,\langle T_{b}^2\rangle,1}+(1-\Gamma^2)\mathcal{N} \\&+(1-\Delta\Gamma^2)\mathcal{F}+ \vec\nu^\dagger \mathcal{S}\vec \nu, \end{aligned} \end{align} where $\mathcal{W}_{\langle T_{a}^2\rangle,\langle T_{b}^2\rangle,1}=\det V_{\langle T_{a}^2\rangle,\langle T_{b}^2\rangle,1}^{\rm PT}$ is the well-known deterministic loss contribution and with a displacement vector $\vec \nu=(\vec\mu_{b\perp}^\dagger, -\vec\mu_{a\perp}^T)^T$ ($\vec x_\perp$ indicates the perpendicular vector to $\vec x$). Additionally, we have the terms \begin{align}\nonumber \mathcal{N}{=}&\det\begin{pmatrix} \det \tilde A & \Gamma\det\tilde C^\dagger\\\Gamma\det\tilde C&\det\tilde B \end{pmatrix} \!,\, \mathcal{S}{=}\begin{pmatrix} S_{aa} & \Gamma \Delta\Gamma S_{ba}^\dagger \\ \Gamma\Delta\Gamma S_{ba} & S_{bb} \end{pmatrix}\!,\, \\&\text{and }\label{Eq:NSF} \mathcal{F}{=}\det\begin{pmatrix} \vec{\mu}^\dagger_{a\perp}\tilde A\vec\mu_{a\perp} & \Gamma\vec\mu^\dagger_{a\perp}\tilde C^\dagger\vec\mu_{b\perp}^\ast \\ \Gamma\vec\mu^{\rm T}_{b\perp}\tilde C\vec\mu_{a\perp} & \vec\mu^{\rm T}_{b\perp}\tilde B\vec\mu_{b\perp}^\ast \end{pmatrix}, \end{align} where $\tilde X$ (for $X{=}A,B,C$) denotes the $2\times 2$ blocks of $V_{\langle T_{a}^2\rangle,\langle T_{a}^2\rangle,1}$ [cf. Eqs.~\eqref{eq:SimonMatrix} and~\eqref{IOR_V}] after partial transposition, and $S_{aa}=\det\tilde A(\tilde B-\Gamma^2\tilde C\tilde A^{-1}\tilde C^\dagger)$, $S_{ba}=-\det\tilde C(\Gamma^2\tilde C^\dagger-\tilde A\tilde C^{-1}\tilde B)$, as well as $S_{bb}=\det \tilde B(\tilde A-\Gamma^2\tilde C^\dagger \tilde B^{-1}\tilde C)$. Let us analyze the structure of Eq.~\eqref{eq:WitnessIOR}. The first two terms represent the decrease of the correlation between the two modes by the factor $\Gamma<1$ in turbulent loss channels [see Eqs.~\eqref{IOR_V} and~\eqref{CorrPara}] where $\mathcal{N}>0$. The last two terms in Eq.~\eqref{eq:WitnessIOR}, being related to $\mathcal{F}\geq 0$ and $\mathcal{S}$, show the dependency of the Simon entanglement test on coherent displacements, which is an important finding for turbulent loss channels. 
In particular, the contribution including $\vec \nu^\dagger\mathcal{S}\vec \nu$ can be negative for $\Delta\Gamma\neq 0$ with proper choices of the displacement vectors. Hence, the entanglement transfer can be optimized, as we will discuss below. Note that, in the case of deterministic attenuation, i.e., $\langle\Delta T_{a(b)}^2\rangle=0$, this criterion reduces to its deterministic-loss form, which always preserves Gaussian entanglement \cite{Filippov,ConsatntLoss1}. Equations~\eqref{IOR_V} and~\eqref{eq:WitnessIOR} represent the most general form of the input-output relation for the Gaussian entanglement test in atmospheric links. \section{Uncorrelated fading channels} \label{ch:uncorrelated} After this full treatment, we continue our analysis with the case of uncorrelated channels, with $\langle T_a^mT_b^n\rangle=\langle T_a^m\rangle\langle T_b^n\rangle$. Thus, we have for the correlation parameters in Eq.~\eqref{CorrPara}: $\Gamma=\big(\langle T_a\rangle/\sqrt{\langle T_a^2\rangle}\big)\big(\langle T_b\rangle/\sqrt{\langle T_b^2\rangle}\big)$ and $\Delta\Gamma=0$. A natural example is the case of counterpropagation, i.e., both modes propagate in different directions through the atmosphere. The case in which one mode undergoes only a deterministic loss is also included. An example is a scenario where entanglement is established between the sender, who locally keeps one mode in a fiber loop, and a remote receiver of the other mode, connected via a free-space link \cite{Ursin}. Let us assume zero coherent displacements, $0=\vec\mu_a=\vec\mu_b=\vec\nu$. Then, the entanglement test~\eqref{eq:WitnessIOR} reduces to \begin{align}\label{eq:unnoshift} \mathcal{W}_{\rm atm.}=\Gamma^2\mathcal{W}_{\langle T_{a}^2\rangle,\langle T_{b}^2\rangle,1}+(1-\Gamma^2)\mathcal{N}. \end{align} The first term resembles the deterministic loss contribution scaled by the atmospheric coefficient $\Gamma^2$ [cf. Eq.~\eqref{CorrPara}]. The second term is clearly positive, as $\mathcal{N}>0$. With decreasing $\Gamma$, the absolute value of the negative first term becomes smaller, while the positive second term increases. Consequently, the Gaussian entanglement part vanishes, as $\mathcal{W}_{\rm atm.}$ becomes positive. \begin{figure} \caption{(Color online) The Simon entanglement test $\mathcal{W}_{\rm atm.}$ for a TMSV state transmitted through uncorrelated atmospheric channels; see the discussion in the text.} \label{fig:tmsv} \end{figure} A surprising effect can be observed when we consider the propagation of a two-mode squeezed-vacuum (TMSV) state through uncorrelated atmospheric channels; $|{\rm TMSV}\rangle=(\cosh\xi)^{-1}\sum_{n=0}^{\infty}(\tanh \xi)^n |n,n\rangle$ with squeezing parameter $\xi\geq0$. In Fig.~\ref{fig:tmsv}, we see that the increase of squeezing can frustrate the transfer of Gaussian entanglement through atmospheric links. Thus, too strong squeezing might be a hindrance in turbulent loss channels. Consequently, an optimal squeezing interval can be identified. Let us stress that this statement has to be understood in terms of the significance of the verified entanglement, which yields an operational quantification of the entanglement transferred through the atmosphere. We examine the initial partially transposed matrix $V^{\rm PT}$ of the TMSV state. All blocks of $V^{\rm PT}$ have a diagonal form, $\tilde A=\tilde B={\rm diag}(\sinh^2\xi,\cosh^2\xi)$ and $\tilde C={\rm diag}(\sinh \xi \cosh \xi,\sinh \xi \cosh \xi)$. With increasing $\xi$, all nonzero entries of $V^{\rm PT}$ grow proportionally to $e^{2\xi}$.
However, the turbulence reduces the correlations $\tilde C$ by the factor $\Gamma$. As these correlations are responsible for the entanglement, this reduction eventually yields $\mathcal{W}_{\rm atm.}>0$. A more intuitive way to understand this effect can be given by considering the squeezing ellipses. A TMSV state shows squeezing in the joint position and the joint momentum of the two modes, respectively. The stronger the squeezing, the more pronounced these ellipses are. Uncorrelated turbulent losses cause different fluctuations in the two modes, which leads to a rotational blurring of the original squeezing ellipses around the origins. For stronger squeezing, this smearing effect is more severe, as the antisqueezed parts of the ellipses fluctuate more strongly. Hence, TMSV states with stronger squeezing may disentangle faster in uncorrelated turbulent loss channels. In particular, the smaller $\Gamma$ is, the smaller is the squeezing interval for which Gaussian entanglement survives (cf.~Fig.~\ref{fig:tmsv}). In order to exemplarily demonstrate the capability of our approach to describe real atmospheric channels, we use measured experimental transmission characteristics. In Fig.~\ref{fig:tmsv} we apply turbulent loss parameters obtained from two experiments which correspond to a rather long and a short transmission channel: the first one for a 144-km-long free-space link between two Canary Islands \cite{Villoresi2012} and the second one for a 1.6-km-long atmospheric channel \cite{Usenko}. For the former, we used a PDT model which resembles the log-normal distribution \cite{VSV2016}; the latter is described by the dominant effect of beam wandering \cite{beamwandering}. Both PDT models are in good agreement with the corresponding experimental data \cite{Villoresi2012,Usenko}, which demonstrates their applicability for different atmospheric conditions. Now, we will consider the surprising effect of coherent displacements on the Simon test for uncorrelated fading channels. As $\Delta\Gamma=0$, we see that the second term in Eq.~\eqref{IOR_V} only adds a positive part to the single-mode blocks of the matrix $V_{\rm atm.}$, i.e., $A$ and $B$ get the additional summands $\vec\mu_a\vec\mu_a^\dagger$ and $\vec\mu_b\vec\mu_b^\dagger$, respectively, and $C$ has no additional displacement-dependent term. In terms of the test in Eq.~\eqref{eq:WitnessIOR}, this means that the last term is positive and increases with the local displacements. As a consequence, Gaussian entanglement vanishes with increasing values of $\langle \hat{a}\rangle$ and $\langle \hat{b}\rangle$. Here, a similar argument as given above for the frustration of entanglement transfer by strong squeezing can be employed to give a more intuitive explanation of this behavior. We already mentioned that uncorrelated turbulent losses lead to a rotational blurring in phase space which is more pronounced the further the state is displaced from the origin. Consequently, one can directly understand why coherent displacements are disadvantageous in such environments. Hence, coherent displacements should be avoided to preserve entanglement in uncorrelated atmospheric channels. However, for certain scenarios, such as CV-QKD protocols, one employs coherent displacements (see, e.g., \cite{Madsen,Ralph}). In such cases, one may optimize the displacements to conserve the entanglement.
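The frustration of the entanglement transfer by strong squeezing can be checked with a short numerical sketch (added here for illustration). It works in the equivalent quadrature picture rather than with the matrix $V$ of Eq.~\eqref{eq:SimonMatrix}: for zero coherent displacement, the single-mode blocks of the covariance matrix experience a deterministic loss $\langle T^2\rangle$, while the intermode block is scaled by $\langle T_aT_b\rangle$; the second moments are then tested with the standard PPT condition (smallest symplectic eigenvalue of the partially transposed covariance matrix below $1/2$), the quadrature-space counterpart of the Simon test. The channel moments are assumed example values and are not fitted to any experiment.
\begin{verbatim}
# Sketch: certified Gaussian entanglement of a TMSV state after two
# uncorrelated fading channels, in the quadrature picture (vacuum variance
# 1/2).  The channel moments <T^2> and <T_a T_b> = <T>^2 are assumed
# example values.
import numpy as np

def tmsv_cov(r):
    """Covariance matrix of a TMSV state, ordering (x_a, p_a, x_b, p_b)."""
    c, s = np.cosh(2*r)/2, np.sinh(2*r)/2
    Z = np.diag([1.0, -1.0])
    return np.block([[c*np.eye(2), s*Z], [s*Z, c*np.eye(2)]])

def fading_channel(cov, T2a, T2b, TaTb):
    """Second moments after the fading channels (zero coherent displacement)."""
    out = cov.copy()
    out[:2, :2] = T2a*(cov[:2, :2] - np.eye(2)/2) + np.eye(2)/2
    out[2:, 2:] = T2b*(cov[2:, 2:] - np.eye(2)/2) + np.eye(2)/2
    out[:2, 2:] = TaTb*cov[:2, 2:]
    out[2:, :2] = out[:2, 2:].T
    return out

def nu_min_pt(cov):
    """Smallest symplectic eigenvalue of the partially transposed covariance."""
    L = np.diag([1.0, 1.0, 1.0, -1.0])            # momentum flip in mode b
    omega2 = np.array([[0.0, 1.0], [-1.0, 0.0]])
    Omega = np.block([[omega2, np.zeros((2, 2))], [np.zeros((2, 2)), omega2]])
    return np.min(np.abs(np.linalg.eigvals(1j*Omega @ (L @ cov @ L))))

T2, Tmean = 0.5, 0.62                             # assumed channel moments
for r in (0.2, 0.5, 1.0, 2.0):
    nu = nu_min_pt(fading_channel(tmsv_cov(r), T2, T2, Tmean**2))
    print(f"r = {r:3.1f}:  nu_min = {nu:.3f}  entangled: {nu < 0.5}")
\end{verbatim}
For the assumed example moments, the certification is lost between $r=1$ and $r=2$, in qualitative agreement with the behavior discussed above.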
\begin{figure} \caption{(Color online) The roots of the Simon entanglement test $\mathcal W_{\rm atm.}$ in dependence on the coherent displacement; see the discussion in the text.} \label{fig:anisotropy} \end{figure} An example of the displacement dependence in phase space is shown in Fig.~\ref{fig:anisotropy}. We study an asymmetric TMSV state, which can be generated by mixing two equally squeezed single-mode states on a beam splitter with a transmission coefficient of $t^2=0.95$ and a subsequent displacement in one mode, $\alpha=\langle\hat a\rangle\neq0$ and $\langle\hat b\rangle=0$. It is worth mentioning that the standard TMSV state is obtained with a $50{:}50$ beam splitter ($t^2=0.5$). The asymmetry of the input state considered here leads to larger values of $\alpha$ in some directions of phase space for which the entanglement is still preserved. \section{Adaptive channel correlations}\label{ch:adaptive} We proceed with our analysis to characterize the case of fully correlated channels, i.e., $\langle T_a^mT_b^n\rangle=\langle T_a^{m+n}\rangle=\langle T_b^{m+n}\rangle$. Thus, we get the maximal values $\Gamma=\Delta \Gamma=1$ [see Eq.~\eqref{CorrPara}], which reduces Eq.~\eqref{eq:WitnessIOR} to \begin{align}\label{PerfectCorrWitness} \mathcal{W}_{\rm atm.}=\mathcal{W}_{\langle T_{a}^2\rangle,\langle T_{a}^2\rangle,1}+\vec\nu^\dagger \mathcal{S}\vec\nu. \end{align} Completely correlated fading channels can be established in the case of copropagation \cite{Fedrizzi, Semenov2010}. Alternatively, this ideal correlation can be produced in other kinds of communication channels by artificially monitoring and adapting the channel transmissivities. In detail, (i) one has to measure the transmission coefficients in both channels, (ii) share this information via classical communication, and (iii) attenuate the channel with the higher transmissivity to the level of the lower one. The online monitoring of the turbulence can be performed with the copropagating local oscillator beam \cite{Elser,Heim,Peuntinger,Semenov2012}. As long as the classical communication time does not exceed the coherence time of the atmosphere, such an approach is feasible. The joint PDT of our adaptive scheme, $\mathcal P'$, can be obtained straightforwardly from the initial distribution $\mathcal P$ [cf. Eq.~\eqref{MomentForm}] by mapping the random variables $T_a,T_b\mapsto\min\{T_a,T_b\}$~\cite{OrderedStatistics,OrderedStatisticsBook}, \begin{align}\label{eq:adaptivePDTC} \mathcal P'(T_a,T_b){=}\delta(T_a{-}T_b)\!\!\left[ \int\limits_{T_a}^1\!\!dT'_a\mathcal P(T'_a,T_b) {+}\!\! \int\limits_{T_b}^1\!\!dT'_b\mathcal P(T_a,T'_b) \right]\!\!. \end{align} Thus, applying the proposed steps of this protocol, the fading channels become perfectly correlated, as $T_a=T_b$ results in $\Gamma=\Delta\Gamma=1$. At first glance, it might seem counterintuitive that an additional attenuation improves the entanglement transfer. However, we will see that this is the case due to the resulting correlation. In the absence of coherent shifting, $\vec\nu=0$, we see that the Simon test for perfectly correlated channels in Eq.~\eqref{PerfectCorrWitness} reduces to the exact form of a deterministic attenuation. Because such a deterministic loss always preserves Gaussian entanglement \cite{Filippov,ConsatntLoss1}, this also holds true for any Gaussian entangled state with $\langle \hat{a}\rangle=0$ and $\langle \hat{b}\rangle=0$ propagating through the atmosphere when our adaptive protocol is applied. This is an important finding, as it shows that there is no trade-off due to the artificial attenuation introduced by the adaptive scheme, as long as $\langle T_a^2\rangle,\langle T_b^2\rangle\neq0$. The additional attenuation, due to the integration in Eq.~\eqref{eq:adaptivePDTC}, is worthwhile, as the introduced correlation between the modes ($\Gamma=\Delta\Gamma=1$) assures the survival of Gaussian entanglement.
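The effect of the adaptive protocol can also be illustrated numerically (again with an assumed PDT, here independent beta-distributed transmission coefficients). The sketch below applies the rule $T_a,T_b\mapsto\min\{T_a,T_b\}$ of Eq.~\eqref{eq:adaptivePDTC}, verifies $\Gamma=\Delta\Gamma=1$, and compares the second-moment entanglement test for a TMSV state with and without the adaptive correlation; the closed-form smallest symplectic eigenvalue used here holds for this symmetric family of states (vacuum variance set to one).
\begin{verbatim}
# Sketch of the adaptive protocol: both channels are attenuated to the
# smaller of the two sampled transmission coefficients.  The PDTs are
# assumed stand-ins.  For the symmetric TMSV family the smallest PT
# symplectic eigenvalue is nu = eta*cosh(2r) + (1 - eta) - tau*sinh(2r),
# with eta = <T^2>, tau = <T_a T_b> and vacuum variance 1; entanglement of
# the second moments is certified when nu < 1.
import numpy as np

rng = np.random.default_rng(7)
Ta = rng.beta(8, 2, size=200_000)          # assumed PDT, channel a
Tb = rng.beta(8, 2, size=200_000)          # assumed PDT, channel b
Tmin = np.minimum(Ta, Tb)                  # adaptive rule T_a, T_b -> min

def corr_params(T1, T2):
    """Correlation coefficients Gamma and DeltaGamma defined in the text."""
    gamma = np.mean(T1*T2)/np.sqrt(np.mean(T1**2)*np.mean(T2**2))
    dgamma = np.mean((T1 - T1.mean())*(T2 - T2.mean()))/np.sqrt(T1.var()*T2.var())
    return gamma, dgamma

print("uncorrelated:", corr_params(Ta, Tb))      # Gamma < 1, DeltaGamma ~ 0
print("adaptive    :", corr_params(Tmin, Tmin))  # Gamma = DeltaGamma = 1

def certified(r, eta, tau):
    """True if the second-moment (Simon-type) test certifies entanglement."""
    return eta*np.cosh(2*r) + (1 - eta) - tau*np.sinh(2*r) < 1

for r in (0.5, 1.0, 2.0, 3.0):
    unc = certified(r, np.mean(Ta**2), np.mean(Ta*Tb))
    adp = certified(r, np.mean(Tmin**2), np.mean(Tmin**2))
    print(f"r = {r}: uncorrelated {unc},  adaptive {adp}")
\end{verbatim}
With the adaptive rule the certification no longer depends on the squeezing strength, in line with the discussion above.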
The influence of the turbulence shows up when we consider coherent displacements, which result in a nonzero second term $\vec\nu^\dagger \mathcal{S}\vec\nu$ [see Eq.~\eqref{PerfectCorrWitness}]. Notably, this term is not necessarily positive (cf. the previous discussion of uncorrelated channels) and is a genuine effect of turbulent loss channels, which differs from deterministic loss scenarios. Hence, an optimal choice of the coherent shifting can ensure the entanglement transfer in the atmosphere, especially for CV-QKD applications, as we discussed before. \begin{figure} \caption{(Color online) The contours illustrate the bounds of the regions of Gaussian entanglement of a displaced TMSV state. Entanglement is preserved in the gray shaded areas. Two cases of ideal correlations $T_a{=}T_b$ with different channel characteristics are compared; see the discussion in the text.} \label{fig:CoShift} \end{figure} In Fig.~\ref{fig:CoShift} the entanglement test for a displaced TMSV state in a correlated fading channel ($T_a=T_b$) is shown for two different atmospheric characteristics. The state has a fixed, joint displacement amplitude $2|\vec\nu|^2=|\langle\hat a\rangle|^2+|\langle\hat b\rangle|^2=50$. The dependence on $|\alpha|^2=|\langle \hat a \rangle|^2$ and on the sum of the phases of the coherent amplitudes, $\phi+\chi$, with $\langle \hat a \rangle=|\langle \hat a \rangle|e^{i\phi}$ and $\langle \hat b \rangle=|\langle \hat b \rangle|e^{i\chi}$, is depicted. Whether Gaussian entanglement persists depends strictly on the choice of the phases of the coherent shifts. In particular, $\phi+\chi=0$ and $\phi+\chi=\pm\pi$ lead to the worst and the optimal case, respectively. Additionally, the sensitivity to the channel characteristics $\langle T_a^2\rangle$ and $\langle T_a\rangle$ can be seen by comparing the two cases. However, there also exists an entanglement-persisting region which does not depend on the channel characteristics~\cite{Supplementary}. \section{Summary and conclusions}\label{ch:summary} We have introduced input-output relations for the entanglement of bipartite Gaussian states propagated through the turbulent atmosphere. In particular, our rigorous studies demonstrate that the Gaussian entanglement preservation strongly depends on the initial coherent amplitudes, which is not the case for standard Gaussian channels. Moreover, we show that optimal, finite squeezing levels exist that are preferable for fading quantum communication links. Our findings open up new perspectives for optimal CV-QKD protocols in free-space links which employ entangled Gaussian states and encode information via coherent displacements. For uncorrelated fading channels, one can choose the displacement to increase the range of distributed Gaussian entanglement by steering the input state with passive optical elements. In addition, we proposed an adaptive technique which, by monitoring and controlling the channel transmittance, can correlate the atmospheric channel characteristics. In this manner, Gaussian entanglement is always preserved. Therefore, this approach renders it possible to distribute Gaussian entanglement between arbitrary points on Earth and orbiting satellites via atmospheric links.
We believe that our rigorous studies and proposed methods will find a number of applications for atmospheric continuous-variable entanglement transfer, global quantum-communication networks, and in-lab experiments to improve observable properties of Gaussian entanglement in turbulent media. An extension of our treatment to non-Gaussian entanglement certifiers might further improve the entanglement detection and the related applications in free-space channels. \begin{thebibliography}{99} \bibitem{Gisin} N. Gisin and R. Thew, Quantum communication, Nature Photon. {\bf 1}, 165 (2007). \bibitem{Fibre1} H. Takesue, S. W. Nam, Q. Zhang, R. H. Hadfield, T. Honjo, K. Tamaki, and Y. Yamamoto, Quantum key distribution over a 40-dB channel loss using superconducting single-photon detectors, Nature Photon. {\bf 1}, 343 (2007). \bibitem{Fibre2} H. H\"ubel, M. R. Vanner, T. Lederer, B. Blauensteiner, T. Lor\"unser, A. Poppe, and A. Zeilinger, High-fidelity transmission of polarization encoded qubits from an entangled source over 100 km of fiber, Opt. Express {\bf 15}, 7853 (2007). \bibitem{Fibre3} T. Honjo, H. Takesue, H. Kamada, Y. Nishida, O. Tadanaga, M. Asobe, and K. Inoue, Long-distance distribution of time-bin entangled photon pairs over 100 km using frequency up-conversion detectors, Opt. Express {\bf 15}, 13957 (2007). \bibitem{Fibre4} Q. Zhang, H. Takesue, S. W. Nam, C. Langrock, X. Xie, B. Baek, M. M. Fejer, and Y. Yamamoto, Distribution of Time-Energy Entanglement over 100 km fiber using superconducting single-photon detectors, Opt. Express {\bf 16}, 5776 (2008). \bibitem{Ursin} R. Ursin {\it et al.}, Entanglement-based quantum communication over 144 km, Nature Phys. {\bf 3}, 481 (2007). \bibitem{Fedrizzi} A. Fedrizzi, R. Ursin, T. Herbst, M. Nespoli, R. Prevedel, T. Scheidl, F. Tiefenbacher, T. Jennewein, and A. Zeilinger, High-fidelity transmission of entanglement over a high-loss free-space channel, Nature Phys. {\bf 5}, 389 (2009). \bibitem{Satellite0} S. Nauerth, F. Moll, M. Rau, C. Fuchs, J. Horwath, S. Frick, and H. Weinfurter, Air-to-ground quantum communication, Nature Photon. {\bf 7}, 382 (2013). \bibitem{Satellite2} J.-Yu Wang {\it et al.}, Direct and full-scale experimental verifications towards ground-satellite quantum key distribution, Nature Photon. {\bf 7}, 387 (2013). \bibitem{Satellite3} G. Vallone, D. Bacco, D. Dequal, S. Gaiarin, V. Luceri, G. Bianco, and P. Villoresi, Experimental Satellite Quantum Communications, Phys. Rev. Lett. {\bf 115}, 040502 (2015). \bibitem{Satellite4} D. Dequal, G. Vallone, D. Bacco, S. Gaiarin, V. Luceri, G. Bianco, and P. Villoresi, Experimental single photon exchange along a space link of 7000 km, Phys. Rev. A {\bf 93}, 010301(R) (2016). \bibitem{Satellite5} G. Vallone, D. Dequal, M. Tomasin, F. Vedovato, M. Schiavon, V. Luceri, G. Bianco, and P. Villoresi, Quantum interference along satellite-ground channels, arXiv:1509.07855 [quant-ph]. \bibitem{CV-QKD4} T. Hirano, H. Yamanaka, M. Ashikaga, T. Konishi, and R. Namiki, Quantum cryptography using pulsed homodyne detection, Phys. Rev. A {\bf 68} 42331 (2003). \bibitem{CV-QKD3} R. Namiki and T. Hirano, Security of quantum cryptography using balanced homodyne detection, Phys. Rev. A \textbf{67}, 22308 (2003). \bibitem{CV-QKD1} F. Grosshans and P. Grangier, Continuous Variable Quantum Cryptography Using Coherent States, Phys. Rev. Lett. \textbf{88}, 057902 (2002). \bibitem{CV-QKD2} C. Silberhorn, T. C. Ralph, N. L\"utkenhaus, and G. 
Leuchs, Continuous Variable Quantum Cryptography: Beating the 3 dB Loss Limit, Phys. Rev. Lett. \textbf{89}, 167901 (2002). \bibitem{CV-QKD6} C. Weedbrook,A. M. Lance, W. P. Bowen, T. Symul, T. C. Ralph, and P. K. Lam, Quantum Cryptography Without Switching, Phys. Rev. Lett. \textbf{93}, 170504 (2004). \bibitem{CV-QKD5} F. Grosshans, G. V. Assche, J. Wenger, R. Brouri, N. J. Cerf, and P. Grangier, Quantum key distribution using gaussian-modulated coherent states, Nature (London) \textbf{421}, 238 (2003). \bibitem{Elser} D. Elser, T. Bartley, B. Heim, C. Wittmann, D. Sych, and G. Leuchs, Feasibility of free space quantum key distribution with coherent polarization states, New J. Phys. \textbf{11}, 045014 (2009). \bibitem{Heim} B. Heim, D. Elser, T. Bartley, M. Sabuncu, C. Wittmann, D. Sych, C. Marquardt, and G. Leuchs, Atmospheric channel characteristics for quantum communication with continuous polarization variables, Appl. Phys. B \textbf{98}, 635 (2010). \bibitem{Peuntinger} C. Peuntinger, B. Heim, C. R. M\"uller, C. Gabriel, C. Marquardt, and G. Leuchs, Distribution of Squeezed States through an Atmospheric Channel, Phys. Rev. Lett. \textbf{113}, 060502 (2014). \bibitem{Fossier} S. Fossier, E. Diamanti, T. Debuisschert, A. Villing, R. Tualle-Brouri, and P. Grangier, Field test of a continuous-variable quantum key distribution prototype, New J. Phys. \textbf{11}, 045023 (2009). \bibitem{Xuan} Q. Dinh Xuan, Z. Zhang, and P. Voss, A 24 km fiber-based discretely signaled continuous variable quantum key distribution system, Opt. Express \textbf{17}, 24244 (2009). \bibitem{Jouguet2012} P. Jouguet {\it et al.}, Field test of classical symmetric encryption with continuous variables quantum key distribution, Opt. Express \textbf{20}, 14030 (2012). \bibitem{Jouguet2013} P. Jouguet, S. Kunz-Jacques, A. Leverrier, P. Grangier, and E. Diamanti, Experimental demonstration of long-distance continuous-variable quantum key distribution, Nature Photon. \textbf{7}, 378 (2013). \bibitem{Rodo} C. Rod\'o, O. Romero-Isart, K. Eckert, and A. Sanpera, Efficiency in Quantum Key Distribution Protocols with Entangled Gaussian States, Open Syst. Inform. Dyn. \textbf{14}, 69 (2007). \bibitem{DiGuglielmo} J. DiGuglielmo, B. Hage, A. Franzen, J. Fiur\'a\v{s}ek, and R. Schnabel, Experimental characterization of Gaussian quantum-communication channels, Phys. Rev. A \textbf{76}, 012323 (2007). \bibitem{Su} X. Su, W. Wang, Y. Wang, X. Jia, C. Xie, and K. Peng, Continuous variable quantum key distribution based on optical entangled states without signal modulation, Europhys. Lett. \textbf{87}, 20005 (2009). \bibitem{Madsen} L. S. Madsen, V. C. Usenko, M. Lassen, R. Filip, and U. L. Andersen, Continuous variable quantum key distribution with modulated entangled states, Nat. Commun. \textbf{3}, 1083 (2012). \bibitem{Furrer} F. Furrer, T. Franz, M. Berta, A. Leverrier, V. B. Scholz, M. Tomamichel, and R. F. Werner, Continuous Variable Quantum Key Distribution: Finite-Key Analysis of Composable Security against Coherent Attacks, Phys. Rev. Lett. \textbf{109}, 100502 (2012). \bibitem{Eberle2013} T. Eberle, V. H\"andchen, J. Duhme, T. Franz, F. Furrer, R. Schnabel, and R. F. Werner, Gaussian entanglement for quantum key distribution from a single-mode squeezing source, New J. Phys. \textbf{15}, 053049 (2013). \bibitem{Simon2000} R. Simon, Peres-Horodecki Separability Criterion for Continuous Variable Systems, Phys. Rev. Lett. \textbf{84}, 2726 (2000). \bibitem{Duan2000} L.-M. Duan, G. Giedke, J. I. Cirac, and P. 
Zoller, Inseparability Criterion for Continuous Variable Systems, Phys. Rev. Lett. \textbf{84}, 2722 (2000). \bibitem{Semenov2009} A. A. Semenov and W. Vogel, Quantum light in the turbulent atmosphere, Phys. Rev. A {\bf 80}, 021802(R) (2009). \bibitem{Semenov2010} A. A. Semenov and W. Vogel, Entanglement Transfer through the Turbulent Atmosphere, Phys. Rev. A {\bf 81}, 023835 (2010). \bibitem{beamwandering} D. Yu. Vasylyev, A. A. Semenov, and W. Vogel, Toward Global Quantum Communication: Beam Wandering Preserves Nonclassicality, Phys. Rev. Lett. \textbf{108}, 220501 (2012). \bibitem{VSV2016} D. Yu. Vasylyev, A. A. Semenov, and W. Vogel, Atmospheric Quantum Channels with Weak and Strong Turbulence, arXiv:1604.01373 [quant-ph]. \bibitem{Tatarskii} V. Tatarskii, {\it Effects of the Turbulent Atmosphere on Wave Propagation} (IPST, Jerusalem, 1972). \bibitem{Semenov2012} A. A. Semenov, F. T\"{o}ppel, D. Yu. Vasylyev, H. V. Gomonay, and W. Vogel, Homodyne detection for atmosphere channels, Phys. Rev. A \textbf{85}, 013826 (2012). \bibitem{Usenko} V. C. Usenko, B. Heim, C. Peuntinger, C. Wittmann, C. Marquardt, G. Leuchs, and R. Filip, Entanglement of Gaussian states and the applicability to quantum key distribution over fading channels, New J. Phys. \textbf{14}, 093048 (2012). \bibitem{GaussSatellites} N. Hosseinidehaj and R. Malaney, Gaussian Entanglement Distribution via Satellites, Phys. Rev. A \textbf{91}, 022304 (2015). \bibitem{ShchukinVogel2005} E. Shchukin and W. Vogel, Inseparability Criteria for Continuous Bipartite Quantum States, Phys. Rev. Lett. \textbf{95}, 230502 (2005). \bibitem{SSV13} F. Shahandeh, J. Sperling, and W. Vogel, Operational Gaussian Schmidt-number witnesses, Phys. Rev. A {\bf 88}, 062323 (2013). \bibitem{Villoresi2012} I. Capraro, A. Tomaello, A. Dall'Arche, F. Gerlin, R. Ursin, G. Vallone, and P. Villoresi, Impact of Turbulence in Long Range Quantum and Classical Communications, Phys. Rev. Lett. {\bf 109}, 200502 (2012). \bibitem{Supplementary} Supplemental Material for details on the conducted algebra. \bibitem{Filippov} S. N. Filippov and M. Ziman, Entanglement sensitivity to signal attenuation and amplification, Phys. Rev. A {\bf 90}, 010301(R) (2014). \bibitem{CSI} The Cauchy-Schwarz inequality implies for all random variables $A$ and $B$: $|\langle AB\rangle|^2\leq\langle A^2\rangle\langle B^2\rangle$. \bibitem{ConsatntLoss1} F. A. S. Barbosa, A. J. de Faria, A. S. Coelho, K. N. Cassemiro, A. S. Villar, P. Nussenzveig, and M. Martinelli, Disentanglement in bipartite continuous-variable systems, Phys. Rev. A {\bf 84}, 052330 (2011). \bibitem{Ralph} T. C. Ralph, Continuous variable quantum cryptography, Phys. Rev. A {\bf 61}, 010303(R) (1999). \bibitem{OrderedStatistics} From a joint probability density $\mathcal P$, we get the cumulative probability distribution for $Z=\min\{X,Y\}\leq z$ as ${\rm Pr}(X{\leq} z \vee Y{\leq} z)=1-\int_z dx'\int_z dy'\mathcal P(x',y')$. The derivative gives the desired probability density: $\mathcal P(z)=(d/dz){\rm Pr}(X{\leq} z \vee Y{\leq} z)=\int_z dy'\mathcal P(z,y'){+}\int_z dx'\mathcal P(x',z)$. \bibitem{OrderedStatisticsBook} H. A. David and H. N. Nagaraja, {\it Order Statistics}, 3rd ed. (John Wiley \& Sons, New Jersey, 2003). \end{thebibliography} \begin{widetext} \section*{Supplemental Material -- Gaussian entanglement in the turbulent atmosphere} \subsection*{Determinant expansion}\label{App:Relations} For the following treatment of bipartite covariance matrices, let us give some well-known relations. 
Those are useful for a compact formulation of our theory. Firstly, relations for sums of $2\times2$ matrices and $4\times 4$ block matrices are given. Say $A,B,C,D\in\mathbb C^{2\times2}$ and $J=\left(\begin{smallmatrix}0&1\\-1&0\end{smallmatrix}\right)=-J^T$. It holds \begin{align} \det(A+B)=&\det A+\det B-{\rm tr}(AJB^TJ),\label{eq:Rel1} \\\nonumber \det\begin{pmatrix}A&D\\C&B\end{pmatrix}=&\det A\det B+\det C\det D \\ &-{\rm tr}(AJC^TJBJD^TJ),\label{eq:Rel2} \end{align} which can be directly verified when expanding the matrices in components and comparing both sides. Secondly, in a two-dimensional system, the orthogonal vector $\vec\xi_\perp$ to $\vec\xi$ is given by \begin{align} \vec\xi_\perp=J\vec\xi^{\,\ast} \quad\text{or}\quad \vec\xi_\perp^{\,\dagger}=-\vec\xi^{\,T}J, \end{align} with identical lengths: $\vec\xi^{\,\dagger} \vec\xi={\vec\xi_\perp}^{\,\dagger} \vec\xi_\perp$. A related, useful relation for $2\times2$ matrices is $JA^TJ=-(\det A)A^{-1}$. From the relations~\eqref{eq:Rel1} and~\eqref{eq:Rel2}, let us deduce a rule for an Hermitian block matrix, i.e., $A=A^\dagger$ and $B=B^\dagger$, as well as $D=C^\dagger$. Moreover, we assume decompositions $A\mapsto A+\vec\alpha\vec\alpha^\dagger$, $B\mapsto B+\vec\beta\vec\beta^\dagger$, and $C\mapsto gC+h\vec\beta\vec\alpha^\dagger$, for $g,h\in\mathbb C$ and $\vec\alpha,\vec\beta\in\mathbb C^2$. Applying the previous formulas, we find \begin{align}\label{eq:DetFullExpansion} \begin{aligned} &\det\begin{pmatrix} A+\vec\alpha\vec\alpha^\dagger & g^\ast C^\dagger+h^\ast\vec\alpha\vec\beta^\dagger \\ gC+h\vec\beta\vec\alpha^\dagger & B+\vec\beta\vec\beta^\dagger \end{pmatrix}\\ =&|g|^2\det\begin{pmatrix}A&C^\dagger\\C&B\end{pmatrix} +(1-|g|^2)\det\begin{pmatrix}\det A&g^\ast\det C^\dagger\\g\det C&\det B\end{pmatrix} +(1-|h|^2)\det\begin{pmatrix} \vec\alpha^\dagger_\perp A\vec\alpha_\perp & g^\ast\vec\alpha^\dagger_\perp C^\dagger\vec\beta_\perp \\ g\vec\beta^\dagger_\perp C\vec\alpha_\perp & \vec\beta^\dagger_\perp B\vec\beta_\perp \end{pmatrix} \\&+\begin{pmatrix}\vec\beta_\perp\\-\vec\alpha_\perp\end{pmatrix}^\dagger\begin{pmatrix} \det A(B-|g|^2CA^{-1}C^\dagger) & -g^\ast h\det C^\dagger(|g|^2C-BC^{-1 \dagger}A) \\ -gh^\ast\det C(|g|^2C^\dagger-AC^{-1}B) & \det B(A-|g|^2C^\dagger B^{-1}C) \end{pmatrix}\begin{pmatrix}\vec\beta_\perp\\-\vec\alpha_\perp\end{pmatrix}. \end{aligned} \end{align} Let us further assume that the Hermitian $4\times4$ matrix is non-negative, i.e., $M=\left(\begin{smallmatrix}A&C^\dagger \\C&B\end{smallmatrix}\right)\geq0$. For any $|g|\leq 1$, one can observe that \begin{align} 0\leq\begin{pmatrix} A & g^\ast C^\dagger \\ g C & B\end{pmatrix} =\begin{pmatrix}g^\ast&0\\0&1\end{pmatrix}^\dagger \begin{pmatrix} A & C^\dagger \\ C & B\end{pmatrix} \begin{pmatrix}g^\ast&0\\0&1\end{pmatrix} +(1-|g|^2)\begin{pmatrix} A & 0 \\ 0 & 0\end{pmatrix}, \end{align} where both terms on the right-hand-side are non-negative. Moreover, the matrix $\left(\begin{smallmatrix}\det A&\det C^\dagger \\\det C&\det B\end{smallmatrix}\right)$ is non-negative, because of $\det A\geq0$, $\det B\geq0$, and \begin{align} 0\leq \det M=\det A\det B+|\det C|^2-{\rm tr}\big( \underbrace{A}_{\geq0} \underbrace{\left[ J C^T J B (J C^T J)^\dagger \right]}_{\geq0} \big)\leq\det A\det B+|\det C|^2 =\det\begin{pmatrix}\det A &\det C^\dagger\\\det C&\det B\end{pmatrix}, \end{align} applying Eq.~\eqref{eq:Rel1}. 
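The two determinant relations and the adjugate identity above are easy to check numerically. The following short Python/NumPy sketch is our own sanity check, not part of the derivation; the random complex test blocks are arbitrary. It verifies Eqs.~\eqref{eq:Rel1} and~\eqref{eq:Rel2} as well as $JA^TJ=-(\det A)A^{-1}$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
J = np.array([[0.0, 1.0], [-1.0, 0.0]])

def rand2():
    # random complex 2x2 block (illustrative test data)
    return rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))

A, B, C, D = rand2(), rand2(), rand2(), rand2()

# Eq. (Rel1): det(A+B) = det A + det B - tr(A J B^T J)
assert np.isclose(np.linalg.det(A + B),
                  np.linalg.det(A) + np.linalg.det(B)
                  - np.trace(A @ J @ B.T @ J))

# Eq. (Rel2) for the 4x4 block matrix [[A, D], [C, B]]
M = np.block([[A, D], [C, B]])
assert np.isclose(np.linalg.det(M),
                  np.linalg.det(A) * np.linalg.det(B)
                  + np.linalg.det(C) * np.linalg.det(D)
                  - np.trace(A @ J @ C.T @ J @ B @ J @ D.T @ J))

# Adjugate identity: J A^T J = -det(A) A^{-1}
assert np.allclose(J @ A.T @ J, -np.linalg.det(A) * np.linalg.inv(A))
\end{verbatim}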
Finally, the matrix $\left(\begin{smallmatrix}\vec\alpha^\dagger A\vec\alpha&\vec\alpha^\dagger C^\dagger\vec\beta \\\vec\beta^\dagger C\vec\alpha&\vec\beta^\dagger B\vec\beta\end{smallmatrix}\right)$ is non-negative, since for all vectors $\left(\begin{smallmatrix}x\\y\end{smallmatrix}\right)\in\mathbb C^2$ holds \begin{align} \begin{pmatrix}x\\y\end{pmatrix}^\dagger \begin{pmatrix} \vec\alpha^\dagger A\vec\alpha & \vec\alpha^\dagger C^\dagger\vec\beta \\ \vec\beta^\dagger C\vec\alpha & \vec\beta^\dagger B\vec\beta \end{pmatrix} \begin{pmatrix}x\\y\end{pmatrix} =\begin{pmatrix}x\vec\alpha\\y\vec\beta\end{pmatrix}^\dagger M \begin{pmatrix}x\vec\alpha\\y\vec\beta\end{pmatrix} \geq0. \end{align} \subsection*{Partially transposed matrices}\label{App:Covar} The initial second-order matrix in bosonic creation and annihilation operators reads \begin{align} V=V_{1,1,1}=&\begin{pmatrix} \langle \Delta\hat a^\dagger\Delta \hat a\rangle & \langle \Delta\hat a^{\dagger2}\rangle & \langle \Delta\hat a^\dagger\Delta \hat b\rangle & \langle \Delta\hat a^\dagger\Delta \hat b^\dagger\rangle \\ \langle \Delta\hat a^2\rangle & \langle \Delta\hat a\Delta\hat a^\dagger\rangle & \langle \Delta\hat a\Delta \hat b\rangle & \langle \Delta\hat a\Delta \hat b^\dagger\rangle \\ \langle \Delta\hat a\Delta\hat b^\dagger\rangle & \langle \Delta\hat a^\dagger\Delta\hat b^\dagger\rangle & \langle \Delta\hat b^\dagger\Delta \hat b\rangle & \langle \Delta \hat b^{\dagger2}\rangle \\ \langle \Delta\hat a\Delta\hat b\rangle & \langle \Delta\hat a^\dagger\Delta\hat b\rangle & \langle \Delta\hat b^2\rangle & \langle \Delta \hat b\Delta \hat b^\dagger\rangle \end{pmatrix} =\begin{pmatrix} A & C^\dagger \\ C & B \end{pmatrix}, \end{align} the latter in a $2\times 2$ block-matrix form and with $\Delta\hat x=\hat x-\langle\hat x\rangle$ as well as $\langle \Delta\hat a\Delta \hat a^\dagger\rangle=\langle \Delta\hat a^\dagger\Delta \hat a\rangle+1$ and $\langle \Delta\hat b\Delta \hat b^\dagger\rangle=\langle \Delta\hat b^\dagger\Delta \hat b\rangle+1$. The block $A$($B$) describes the covariance of the first(second) subsystem, and $C$ includes the correlations between the subsystems. The index of $V_{1,1,1}$ will be explained shortly below. 
The partially transposed matrix may be decomposed in the forms \begin{align} V_{1,1,1}^{\rm PT}=&\begin{pmatrix} \langle \Delta\hat a^\dagger\Delta \hat a\rangle & \langle \Delta\hat a^{\dagger2}\rangle & \langle \Delta\hat a^\dagger\Delta \hat b^\dagger\rangle & \langle \Delta\hat a^\dagger\Delta \hat b\rangle \\ \langle \Delta\hat a^2\rangle & \langle \Delta\hat a\Delta\hat a^\dagger\rangle & \langle \Delta\hat a\Delta \hat b^\dagger\rangle & \langle \Delta\hat a\Delta \hat b\rangle \\ \langle \Delta\hat a\Delta\hat b\rangle & \langle \Delta\hat a^\dagger\Delta\hat b\rangle & \langle \Delta\hat b^\dagger\Delta \hat b\rangle & \langle \Delta \hat b^2\rangle \\ \langle \Delta\hat a\Delta\hat b^\dagger\rangle & \langle \Delta\hat a^\dagger\Delta\hat b^\dagger\rangle & \langle \Delta\hat b^{\dagger2}\rangle & \langle \Delta \hat b\Delta \hat b^\dagger\rangle \end{pmatrix} =\begin{pmatrix}1&0\\0&X\end{pmatrix} \begin{pmatrix} A & C^\dagger \\ C & B \end{pmatrix} \begin{pmatrix}1&0\\0&X\end{pmatrix} -\begin{pmatrix}0&0\\0&Z\end{pmatrix} =\begin{pmatrix}A&C^\dagger X\\XC&B^T\end{pmatrix}, \label{Vpt} \end{align} where the transposition is performed in subsystem $b$, and with $X=\left(\begin{smallmatrix}0&1\\1&0\end{smallmatrix}\right)$ and $Z=\left(\begin{smallmatrix}1&0\\0&-1\end{smallmatrix}\right)$. If we diminish the correlations in terms, $C\mapsto\Gamma C$ for $|\Gamma|<1$, we write \begin{align} V_{1,1,\Gamma}=\begin{pmatrix} A & \Gamma^\ast C^\dagger \\ \Gamma C & B \end{pmatrix}. \end{align} Moreover, we assume a constant loss, $ \langle\hat a^{\dagger m}\hat a^n\hat b^{\dagger p}\hat b^q\rangle \stackrel{\rm loss}{\longmapsto} \sqrt{\eta_a}^{m+n}\sqrt{\eta_b}^{p+q} \langle \hat a^{\dagger m}\hat a^n\hat b^{\dagger p}\hat b^q\rangle $. This results in the attenuated matrix \begin{align}\label{eq:constLossCov} V_{\eta_a,\eta_b,1}=\begin{pmatrix}\sqrt{\eta_a}&0\\0&\sqrt{\eta_b}\end{pmatrix} \begin{pmatrix} A & C^\dagger \\ C & B \end{pmatrix} \begin{pmatrix}\sqrt{\eta_a}&0\\0&\sqrt{\eta_b}\end{pmatrix} +\begin{pmatrix}(1-\eta_a)\frac{1-Z}{2} &0\\0&(1-\eta_b)\frac{1-Z}{2} \end{pmatrix} =\begin{pmatrix} \tilde A & \tilde C^\dagger \\ \tilde C &\tilde B \end{pmatrix}^{\rm PT}. \end{align} Where we used the standard method for describing attenuations in quantum optics which is formulated in terms of beam splitters with the transmission coefficients $T_a=\sqrt{\eta_a}$ and $T_b=\sqrt{\eta_b}$. As it has been shown, the atmosphere can be modeled by treating $T_a$ and $T_b$ as random variables. This yields \begin{align} \langle \hat a^{\dagger m}\hat a^n\hat b^{\dagger p}\hat b^q\rangle \stackrel{\rm atm.}{\longmapsto} \langle T_a^{m+n}T_b^{p+q}\rangle\langle \hat a^{\dagger m}\hat a^n\hat b^{\dagger p}\hat b^q\rangle, \end{align} where the first expectation value, $\langle T_a^{m+n}T_b^{p+q}\rangle$, is given in terms of the joint probability distribution of the transmission coefficients of the atmosphere, $\mathcal P(T_a,T_b)$. 
Inserting this relation yields the matrices after the propagation in an atmospheric channel as \begin{align}\nonumber V\mapsto V_{\rm atm.}=&\begin{pmatrix} \langle T_a^2\rangle A+(1-\langle T_a^2\rangle)\frac{1-Z}{2} + \langle \Delta T_a^2\rangle\vec\mu_a\vec\mu_a^\dagger & \langle T_aT_b\rangle C^\dagger + \langle \Delta T_a\Delta T_b\rangle\vec\mu_a\vec\mu_b^\dagger \\ \langle T_aT_b\rangle C + \langle \Delta T_a\Delta T_b\rangle\vec\mu_b\vec\mu_a^\dagger & \langle T_b^2\rangle B+(1-\langle T_b^2\rangle)\frac{1-Z}{2} + \langle \Delta T_b^2\rangle\vec\mu_b\vec\mu_b^\dagger \end{pmatrix} \\=& V_{\langle T_a^2\rangle,\langle T_{b}^2\rangle,\Gamma} \label{eq:CovExpandLine1} +\begin{pmatrix} \vec\mu_a\vec\mu_a^\dagger & \Delta\Gamma \vec\mu_a\vec\mu_b^\dagger \\ \Delta\Gamma \vec\mu_b\vec\mu_a^\dagger & \vec\mu_b\vec\mu_b^\dagger \end{pmatrix} \end{align} with $\vec\mu_a=\sqrt{\langle \Delta T_a^2\rangle}\left(\begin{smallmatrix}\langle \hat a^\dagger\rangle \\ \langle\hat a\rangle \end{smallmatrix}\right)$, $\vec\mu_b=\sqrt{\langle \Delta T_b^2\rangle}\left(\begin{smallmatrix}\langle \hat b^\dagger\rangle \\ \langle\hat b\rangle \end{smallmatrix}\right)$, and the correlation coefficients \begin{align} \Gamma=\frac{\langle T_a T_b\rangle}{\sqrt{\langle T_a^2\rangle\langle T_b^2\rangle}} \quad\text{and}\quad \Delta\Gamma=\frac{\langle \Delta T_a\Delta T_b\rangle}{\sqrt{\langle \Delta T_a^2\rangle\langle \Delta T_b^2\rangle}}. \end{align} In the first term in line~\eqref{eq:CovExpandLine1}, we have the scenario of constant losses ($\eta_x=\langle T_x^2\rangle$ for $x=a,b$) if $\Gamma=1$, otherwise, $\Gamma<1$, we have a constant loss including decreased correlations, $C\mapsto\Gamma C$. In second term in line~\eqref{eq:CovExpandLine1}, we have the major contribution of the atmosphere in terms of fluctuations of the transmission coefficients, $\Delta T_a,\Delta T_b\neq0$, and the dependence on the local displacements $\vec\mu_a$ and $\vec\mu_b$. Note that, in the case of uncorrelated channels, $\langle T_a^mT_b^n\rangle=\langle T_a^m\rangle\langle T_b^n\rangle$, we have $\Delta\Gamma=0$ and $\Gamma=\frac{\langle T_a\rangle}{\sqrt{\langle T_a^2\rangle}}\frac{\langle T_b\rangle}{\sqrt{\langle T_b^2\rangle}}$, and for perfectly correlated links, $\langle T_a^mT_b^n\rangle=\langle T_a^{m+n}\rangle=\langle T_b^{m+n}\rangle$, we get the maximal values $\Gamma=\Delta \Gamma=1$. Finally, the partially transposed matrix after a propagation in an atmospheric channel takes the form \begin{align} V_{\rm atm.}^{\rm PT}=V_{\langle T_a^2\rangle,\langle T_{b}^2\rangle,\Gamma }^{\rm PT} \label{eq:CovPTExpandLine1} +\begin{pmatrix} \vec\mu_a\vec\mu_a^\dagger & \Delta\Gamma \vec\mu_a\vec\mu_b^T \\ \Delta\Gamma \vec\mu_b^\ast\vec\mu_a^\dagger & \vec\mu_b^\ast\vec\mu_b^T \end{pmatrix}. \end{align} Note that the second summand is a positive semi-definite matrix. \subsection*{Simon criterion}\label{App:Simon} The Simon entanglement test is then easily obtained by calculating the determinant $\det V_{\rm atm.}^{\rm PT}$ using Eq.~\eqref{eq:DetFullExpansion}. Bipartite Gaussian entanglement is revealed if and only if $\det V_{\rm atm.}^{\rm PT}<0$. 
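Since $\Gamma$ and $\Delta\Gamma$ only involve first and second moments of the transmission coefficients, they are easily estimated from samples of $(T_a,T_b)$. The following sketch does this for an assumed, purely illustrative fading model (correlated log-normal transmittances clipped to $[0,1]$); the model and all numerical parameters are our own choices and are not taken from the channel models discussed in the main text.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(7)

# Illustrative (assumed) joint fading model: correlated log-normal
# transmittances, clipped to the physical range 0 <= T <= 1.
n, sigma, rho = 200_000, 0.3, 0.8
g = rng.multivariate_normal([-0.4, -0.4],
                            sigma**2 * np.array([[1.0, rho], [rho, 1.0]]),
                            size=n)
Ta = np.clip(np.exp(g[:, 0]), 0.0, 1.0)
Tb = np.clip(np.exp(g[:, 1]), 0.0, 1.0)

# Correlation coefficients Gamma and DeltaGamma as defined above
Gamma = np.mean(Ta * Tb) / np.sqrt(np.mean(Ta**2) * np.mean(Tb**2))
dTa, dTb = Ta - Ta.mean(), Tb - Tb.mean()
DeltaGamma = np.mean(dTa * dTb) / np.sqrt(np.mean(dTa**2) * np.mean(dTb**2))

print(Gamma, DeltaGamma)  # both approach 1 for perfectly correlated links
\end{verbatim}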
For the case of no coherent displacement, $\langle\hat a\rangle=\langle\hat b\rangle=0$, we find \begin{align}\label{eq:AtmMinor0} \det V_{\rm atm.}^{\rm PT}=\det V_{\langle T_a^2\rangle,\langle T_{b}^2\rangle,\Gamma}^{\rm PT} =\Gamma^2\det V_{\langle T_a^2\rangle,\langle T_{b}^2\rangle,1}^{\rm PT} +(1-\Gamma^2)\underbrace{\det\begin{pmatrix} \det \tilde A & \Gamma\det \tilde C^\dagger\\\Gamma \det \tilde C&\det \tilde B \end{pmatrix}}_{=\mathcal N}, \end{align} which separates the well characterized case of constant loss $\det V_{\langle T_a^2\rangle,\langle T_{b}^2\rangle,1}$ from the part that diminishes the correlations, $C\mapsto \Gamma C$. In the most general case, including coherent shifting in Eq.~\eqref{eq:CovPTExpandLine1} and using Eq.~\eqref{eq:DetFullExpansion}, we obtain the entanglement condition \begin{align}\label{eq:AtmMinor} \begin{aligned} 0>\mathcal W_{\rm atm.}=&\det V_{\rm atm.}^{\rm PT}\\ =&\Gamma^2\det V_{\langle T_a^2\rangle,\langle T_{b}^2\rangle,1}^{\rm PT} +(1-\Gamma^2)\det\begin{pmatrix} \det \tilde A & \Gamma\det\tilde C^\dagger\\\Gamma\det\tilde C&\det\tilde B \end{pmatrix} +(1-\Delta\Gamma^2)\underbrace{\det\begin{pmatrix} \vec{\mu}^\dagger_{a\perp}\tilde A\vec\mu_{a\perp} & \Gamma\vec\mu^\dagger_{a\perp}\tilde C^\dagger\vec\mu_{b\perp}^\ast \\ \Gamma\vec\mu^{\rm T}_{b\perp}\tilde C\vec\mu_{a\perp} & \vec\mu^{\rm T}_{b\perp}\tilde B\vec\mu_{b\perp}^\ast \end{pmatrix}}_{=\mathcal F} \\&+\begin{pmatrix}\vec\mu_{b\perp}^\ast\\-\vec\mu_{a\perp}\end{pmatrix}^\dagger\begin{pmatrix} \det\tilde A(\tilde B-\Gamma^2\tilde C\tilde A^{-1}\tilde C^\dagger) & -\Gamma \Delta\Gamma\det\tilde C^\dagger(\Gamma^2\tilde C-\tilde B\tilde C^{-1\dagger}\tilde A) \\ -\Gamma\Delta\Gamma\det\tilde C(\Gamma^2\tilde C^\dagger-\tilde A\tilde C^{-1}\tilde B) & \det \tilde B(\tilde A-\Gamma^2\tilde C^\dagger \tilde B^{-1}\tilde C) \end{pmatrix}\begin{pmatrix}\vec\mu_{b\perp}^\ast\\-\vec\mu_{a\perp}\end{pmatrix}, \end{aligned} \end{align} where the last summand can also be rewritten as $\vec\nu^\dagger\mathcal S\vec \nu$. We can rewrite $\mathcal N$ and $\mathcal F$, defined in Eqs.~\eqref{eq:AtmMinor0} and~\eqref{eq:AtmMinor}, respectively, in the equivalent forms \begin{align} \mathcal N=\det\begin{pmatrix} \det\tilde A & -\Gamma\det\tilde C^\dagger \\ -\Gamma\det\tilde C & \det\tilde B \end{pmatrix}= \det\begin{pmatrix} \det(\tilde A) & \Gamma\det(\tilde C^\dagger X) \\ \Gamma\det(X \tilde C) & \det(\tilde B^T) \end{pmatrix}, \end{align} using $\det \tilde B=\det \tilde B^T$ and $(-1)\det \tilde C=\det X\det \tilde C$, and \begin{align} \mathcal F=\det\begin{pmatrix} \vec{\mu}^\dagger_{a\perp}\tilde A\vec\mu_{a\perp} & \Gamma\vec\mu^\dagger_{a\perp}(\tilde C^\dagger X)\vec\mu_{b\perp} \\ \Gamma\vec\mu^\dagger_{b\perp}(X\tilde C)\vec\mu_{a\perp} & \vec\mu^\dagger_{b\perp}\tilde B^T\vec\mu_{b\perp} \end{pmatrix} \end{align} using that $X\left(\begin{smallmatrix}t\\t^\ast\end{smallmatrix}\right)=\left(\begin{smallmatrix}t\\t^\ast\end{smallmatrix}\right)^\ast$ and $X^2=\left(\begin{smallmatrix}1&0\\0&1\end{smallmatrix}\right)$. The right-hand-side formulations show that $\mathcal F$ and $\mathcal N$ are invariant under partial transposition [cf. Eq.~\eqref{Vpt}] and, therefore, non-negative. The latter follows from expanding $\det V_{\rm atm.}$, which is non-negative; see also the discussion on non-negativity in Sec.~\ref{App:Relations}. 
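As a numerical illustration of the criterion, the sketch below evaluates $\det V_{\rm atm.}^{\rm PT}$ in the displacement-free case of Eq.~\eqref{eq:AtmMinor0}, taking a two-mode squeezed vacuum as an assumed input state and illustrative values for the transmittance moments $\langle T_a^2\rangle$, $\langle T_b^2\rangle$, and $\langle T_aT_b\rangle$; both the input state and the moment values are our own choices and are not taken from the main text. The partial transposition is applied as in Eq.~\eqref{Vpt}.
\begin{verbatim}
import numpy as np

X = np.array([[0.0, 1.0], [1.0, 0.0]])

def tmsv_blocks(r):
    # Blocks A, B, C of V_{1,1,1} for a two-mode squeezed vacuum with
    # squeezing parameter r and zero displacement (assumed input state).
    s, c = np.sinh(r)**2, np.sinh(r) * np.cosh(r)
    A = np.diag([s, s + 1.0])
    B = np.diag([s, s + 1.0])
    C = np.array([[0.0, c], [c, 0.0]])
    return A, B, C

def simon_witness(A, B, C, t2a, t2b, tab):
    # det V_atm^PT for zero displacement: the blocks are rescaled with the
    # transmittance moments <T_a^2>, <T_b^2>, <T_a T_b> as in the expansion
    # above, and the partial transposition follows Eq. (Vpt).
    Aa = t2a * A + (1.0 - t2a) * np.diag([0.0, 1.0])
    Bb = t2b * B + (1.0 - t2b) * np.diag([0.0, 1.0])
    Cc = tab * C
    Vpt = np.block([[Aa, Cc.conj().T @ X], [X @ Cc, Bb.T]])
    return np.linalg.det(Vpt).real

A, B, C = tmsv_blocks(r=1.0)
# perfectly correlated fading (<T_a T_b> = <T^2>) vs. weaker correlations
print(simon_witness(A, B, C, 0.5, 0.5, 0.5))  # negative: entanglement certified
print(simon_witness(A, B, C, 0.5, 0.5, 0.3))  # positive: not certified here
\end{verbatim}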
\subsection*{Channel-independent entanglement condition for correlated channels}\label{SupplementaryB} Let us consider the $2{\times} 2$ matrix $D^{\rm PT}$, which is built from the elements of the first and third rows and columns of the matrix $V_{1,1,1}^{\rm PT}$ in Eq.~\eqref{Vpt}, \begin{align} D^{\rm PT}=\begin{pmatrix} \langle\Delta \hat{a}^\dag \Delta \hat{a} \rangle & \langle\Delta \hat{a}^\dag \Delta \hat{b}^\dag \rangle \\ \langle\Delta \hat{a} \Delta \hat{b} \rangle & \langle\Delta \hat{b}^\dag \Delta \hat{b} \rangle \end{pmatrix}.\label{DuanMatrix} \end{align} The negative determinant of this matrix, $\det D^{\rm PT}<0$, is a sufficient condition of the Gaussian entanglement. This condition corresponds to the Duan \textit{et al.} criterion. Now, we assume a perfectly correlated channel, $T=T_a=T_b$. Performing the same consideration like in the case of the Simon criterion, one gets for the correlated channels, \begin{align} \det D^{\rm PT}_\mathrm{atm.} =\langle T^2\rangle^2\det D^{\rm PT} +\langle\Delta T^2\rangle \langle T^2\rangle \begin{pmatrix}\langle\hat a\rangle\\-\langle\hat b^\dagger\rangle\end{pmatrix}^\dagger \begin{pmatrix} \langle\Delta\hat{b}^\dag\Delta\hat{b}\rangle &\langle\Delta\hat{a}\Delta\hat{b}\rangle \\ \langle\Delta\hat{a}^\dag\Delta\hat{b}^\dag\rangle & \langle\Delta\hat{a}^\dag\Delta\hat{a}\rangle \end{pmatrix} \begin{pmatrix}\langle\hat a\rangle\\-\langle\hat b^\dagger\rangle\end{pmatrix}. \end{align} The negativity of the second term, \begin{align} \begin{pmatrix}\langle\hat a\rangle\\-\langle\hat b^\dagger\rangle\end{pmatrix}^\dagger \begin{pmatrix} \langle\Delta\hat{b}^\dag\Delta\hat{b}\rangle &\langle\Delta\hat{a}\Delta\hat{b}\rangle \\ \langle\Delta\hat{a}^\dag\Delta\hat{b}^\dag\rangle & \langle\Delta\hat{a}^\dag\Delta\hat{a}\rangle \end{pmatrix} \begin{pmatrix}\langle\hat a\rangle\\-\langle\hat b^\dagger\rangle\end{pmatrix}<0 \end{align} identifies the entanglement-persisting region, which does not depend on the channel characteristics. \end{widetext} \end{document}
\begin{document} \title{Price of Anarchy for Mechanisms with Risk-Averse Agents\footnote{This work is supported by DFG through Cluster of Excellence MMCI.}} \begin{abstract} We study the price of anarchy of mechanisms in the presence of risk-averse agents. Previous work has focused on agents with quasilinear utilities, possibly with a budget. Our model subsumes this as a special case but also captures that agents might be less sensitive to payments than in the risk-neutral model. We show that many positive price-of-anarchy results proved in the smoothness framework continue to hold in the more general risk-averse setting. A sufficient condition is that agents can never end up with negative quasilinear utility after playing an undominated strategy. This is true, e.g., for first-price and second-price auctions. For all-pay auctions, similar results do not hold: We show that there are Bayes-Nash equilibria with arbitrarily bad social welfare compared to the optimum. \end{abstract} \section{Introduction}\label{sec:introduction} Many practical, ``simple'' auction mechanisms are not incentive compatible, making it beneficial for agents to behave strategically. A standard example is the first-price auction, in which one item is sold to one of $n$ agents. Each of these agents is asked to report a valuation; the item is given to the agent reporting the highest value, who then has to pay what he/she reported. A common way to understand the effects of strategic behavior is to study resulting equilibria and to bound the \emph{price of anarchy}. That is, one compares the social welfare that is achieved at the (worst) equilibrium of the induced game to the maximum possible welfare. Typical equilibrium concepts are Bayes-Nash equilibria and (coarse) correlated equilibria, which extend mixed Nash equilibria toward incomplete-information or learning settings, respectively. A key assumption in these analyses is that agents are \emph{risk neutral}: Agents are assumed to maximize their expected quasilinear utility, which is defined to be the difference between the value associated with the outcome and the payment imposed on the agent. So, an agent having a value of $1$ for an item would be indifferent between getting this item with probability $10 \%$ for free and getting it all the time, paying $0.9$. However, there are many reasons to believe that agents are not risk neutral. For instance, in the above example the agent might prefer the certain outcome to the uncertain one. Therefore, in this paper, we ask the question: \emph{What ``simple'' auction mechanisms preserve good performance guarantees in the presence of risk-averse agents?} The standard model of risk aversion in economics (see, e.g., \cite{mas1995microeconomic}) is to apply a (weakly) concave function to the quasilinear term. That is, if agent $i$'s outcome is $x_i$ and his payment is $p_i$, his utility is given as $u_i(x_i, p_i) = h_i(v_i(x_i) - p_i)$, where $h_i\colon \mathbb{R} \to \mathbb{R}$ is a weakly concave, monotone function. That is, for $y,y'\in\mathbb{R}$ and for all $\lambda \in [0, 1]$, it holds that $h_i(\lambda y + (1-\lambda) y') \geq \lambda h_i(y) + (1-\lambda) h_i(y')$. Agent $i$ is risk neutral if and only if $h_i$ is a linear function. If the function is strictly concave, this has the effect that, by Jensen's inequality, the utility for fixed $x_i$ and $p_i$ is higher than for a randomized $x_i$ and $p_i$ with the same expected $v_i(x_i) - p_i$.
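To make this concrete, the following small sketch evaluates the two lotteries from the example above under an assumed strictly concave function $h(y)=1-e^{-y}$ (this particular $h$ is our own illustrative choice): both lotteries have the same expected quasilinear payoff of $0.1$, yet the risk-averse agent strictly prefers the certain one.
\begin{verbatim}
import numpy as np

def h(y):
    # assumed strictly concave utility transform (illustrative choice)
    return 1.0 - np.exp(-y)

v = 1.0
# Lottery 1: get the item with probability 0.1, never pay anything.
u_lottery = 0.1 * h(v - 0.0) + 0.9 * h(0.0)
# Lottery 2: get the item for sure and pay 0.9.
u_certain = h(v - 0.9)

print(u_lottery, u_certain)  # ~0.063 < ~0.095: the certain outcome is preferred
\end{verbatim}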
We compare outcomes based on their social welfare, which is defined to be the sum of utilities of all involved parties including the auctioneer. That is, it is the sum of agents' utilities and their payments $\mbox{SW}(\mathbf{x}, \mathbf{p}) = \sum_i u_i(x_i, p_i) + \sum_i p_i$. In the quasilinear setting this definition of social welfare coincides with the sum of values $\sum_i v_i(x_i)$. With risk-averse utilities they usually differ. However, all our results bound the sum of values and therefore also hold for this benchmark. We assume that the mechanisms are oblivious to the $h_i$-functions and work like in the quasilinear model. Only the individual agent's perception changes. This makes it necessary to normalize the $h_i$-functions because otherwise they could be on different scales, e.g., if $h_1(y) = y$ and $h_2(y) = 1000 \cdot y$, which would be impossible for the mechanism to cope with without additional information. Therefore, we will assume that $u_i(\mathbf{x},p_i) = v_i(\mathbf{x})$ if $p_i = 0$ and that $u_i(\mathbf{x},p_i) = 0$ if $p_i = v_i(\mathbf{x})$. That is, for the two cases that $p_i$ is either $0$ or the full value, the utility matches exactly the quasilinear one. However, due to risk aversion, the agents might be less sensitive to payments.\footnote{We note here that this will not in turn allow the mechanism to arrive at huge utility gains, as compared to the quasilinear model, for example, by increasing payments arbitrarily. Indeed, Lemma~\ref{lemma:OPT_relation} in Section~\ref{sec:model} will show that the difference between the two optima is bounded by at most a multiplicative factor of $2$.} \subsection{Our Contribution} We give bounds on the price of anarchy for Bayes-Nash and (coarse) correlated equilibria of mechanisms in the presence of risk-averse agents. Our positive results are stated within the smoothness framework, which was introduced by \cite{Roughgarden09}. We use the version that is tailored to quasilinear utilities by \cite{SyrgkanisT13}, which we extend to mechanism settings with general utilities (for a formal definition see Section~\ref{sec:general_smoothness}). Our main positive result states that the loss of performance compared to the quasilinear setting is bounded by a constant if a slightly stronger smoothness condition is fulfilled. \begin{main-result} \label{main-result-smooth} \label{MAIN-RESULT-SMOOTH} If a mechanism has price of anarchy $\alpha$ in the quasilinear model, provable via smoothness such that the deviation guarantees non-negative utility, then this mechanism has price of anarchy at most $2 \alpha$ in the risk-averse model. \end{main-result} This result relies on the fact that the deviation action to establish smoothness guarantees agents non-negative utility. A sufficient condition is that all undominated strategies never have negative utility. First-price and second-price auctions satisfy this condition; we thus get constant price-of-anarchy bounds for both of these auction formats. In an all-pay auction, every positive bid can lead to negative utility. Therefore, the positive result does not apply. As a matter of fact, this is not a coincidence because, as we show, equilibria can be arbitrarily bad. \begin{main-result} \label{main-result-allpay} \label{MAIN-RESULT-ALLPAY} The single-item all-pay auction has unbounded price of anarchy for Bayes-Nash equilibria, even with only three agents.
\end{main-result} This means that although equilibria of first-price and all-pay auctions have very similar properties with quasilinear utilities, in the risk-averse setting they differ a lot. We feel that this to some extent matches the intuition that agents should be more reluctant to participate in an all-pay auction compared to a first-price auction. In our construction for proving Main Result~\ref{main-result-allpay}, we give a symmetric Bayes-Nash equilibrium for two agents. The equilibrium is designed in such a way that a third agent of much higher value would lose with some probability with every possible bid. Losing in an all-pay auction means that the agent has to pay without getting anything, resulting in negative utility. In the quasilinear setting, this negative contribution to the utility would be compensated by respective positive amounts when winning. For the risk-averse agent in our example, this is not true. Because of the risk of negative utility, he prefers to opt out of the auction entirely. We also consider a different model of aversion to uncertainty, in which solution concepts are modified. Instead of evaluating a distribution over utilities in terms of their expectation, agents evaluate them based on the expectation minus a second-order term. We find that this model has entirely different consequences on the price of anarchy. For example, the all-pay auction has a constant price of anarchy in correlated and Bayes-Nash equilibria, whereas the second-price auction can have an unbounded price of anarchy in correlated equilibria. \subsection{Related Work}\label{subsec:related} Studying the impact of risk-averseness is a regularly reoccurring theme in the literature. A proposal to distinguish between money and the utility of money, and to model risk aversion by a utility function that is concave first appeared in~\cite{10.2307/1909829}. The expected utility theory, which basically states that the agent's behavior can be captured by a utility function and the agent behaves as a maximizer of the expectation of his utility, was postulated in~\cite{zbMATH03106184}. This theory does not capture models that are standardly used in portfolio theory, ``expectation minus variance'' or ``expectation minus standard deviation''~\cite{markowitz1968portfolio}, the latter of which we also consider in Section~\ref{sec:other-models}. In the context of mechanisms, one usually models risk aversion by concave utility functions. One research direction in this area is to understand the effects of risk aversion on a given mechanism. For example, \cite{DBLP:journals/ijgt/FibichGS06} studies symmetric equilibria in all-pay auctions with a homogenous population of risk-averse players. \tk{Due to symmetry and homogeneity, in this case, equilibria are fully efficient, that is, the price of anarchy is $1$.} In~\cite{mathews2006} a similar analysis for auctions with a buyout option is performed; \cite{DBLP:conf/wine/HoyIL16} considers customers with heterogeneous risk attitudes in mechanisms for cloud resources. In~\cite{DBLP:conf/sigecom/DuttingKT14} it is shown that for certain classes of mechanisms the correlated equilibrium is unique and has a specific structure depending on the respective valuations but independent of the actual utility function. One consequence of this result is that risk aversion does not influence the allocation outcome or the revenue. Another direction is to design mechanisms for the risk-averse setting. 
For example, the optimal revenue is higher because buyers are less sensitive to payments. In a number of papers, mechanisms for revenue maximization are proposed \cite{matthews1983,maskin1984optimal,DBLP:conf/sigecom/SundararajanY10,hu2010risk,bhalgat2012mechanism,fu2013prior}. Furthermore, randomized mechanisms that are \emph{truthful in expectation} lose their incentive properties if agents are not risk neutral. Black-box transformations from truthful-in-expectation mechanisms into ones that fulfill stronger properties are given in~\cite{DBLP:journals/corr/abs-1206-2957} and~\cite{DBLP:journals/teco/HoeferKV16}. Studying the effects of risk aversion also has a long history in game theory, where different models of agents' attitudes towards risk are analyzed. One major question is, for example, if equilibria still exist and if they can be computed \cite{DBLP:conf/sagt/FiatP10,hoy2012concavity}. Price of anarchy analyses have so far only been carried out for congestion games. Tight bounds on the price of anarchy for atomic congestion games with affine cost functions under a range of risk-averse decision models are given in~\cite{DBLP:journals/teco/PiliourasNS16}. The smoothness framework was introduced by \cite{Roughgarden09}. Among others, \cite{SyrgkanisT13} tailored it to the quasilinear case of mechanisms. It is important to remark here that our approach is different from the one taken by \cite{DBLP:journals/sigmetrics/MeirP15}. They use the smoothness framework to prove generalized price of anarchy bounds for \bk{nonatomic congestion} games in which players have biased utility functions. They assume that players are playing the ``wrong game'' and their point of comparison is the ``true'' optimal social welfare, \bk{meaning that the biases only determine the equilibira but do not affect the social welfare.} We take the utility functions as they are, including the risk aversion, to evaluate social welfare \bk{in equilibria and also to determine the optimum, which makes our models incomparable.} For precise relation of von Neumann-Morgenstern preferences to mean-variance preferences, see for instance~\cite{DBLP:journals/eor/Markowitz14}. Mean-variance preferences were explored for congestion games in~\cite{DBLP:journals/ior/NikolovaM14,DBLP:conf/sigecom/NikolovaS15}, while~\cite{klose2014auctioning} studies the bidding behavior in an all-pay auction depending on the level of variance-averseness. \section{Preliminaries}\label{sec:preliminaries} \subsection{Setting} We consider the following setting: There is a set $N$ of $n$ players and $\mathcal{X}$ is the set of possible outcomes. Each player $i$ has a utility function $u_i^{\theta_i}$, which is parameterized by her type $\theta_i \in \Theta_i$. Given a type $\theta_i$, an outcome $\mathbf{x} \in \mathcal{X}$, and a payment $p_i \geq 0$, her utility is $u_i^{\theta_i}(\mathbf{x}, p_i)$. The traditionally most studied case are quasilinear utilities, in which types are valuation functions $v_i\in\mathcal{V}_i$, $v_i\colon \mathcal{X} \to \mathbb{R}$ and $u_i^{v_i}(\mathbf{x},p_i)=v_i(\mathbf{x})-p_i$. Throughout this paper, we will refer to quasilinear utilities by $\hat{u}_i^{v_i}$. For fixed utility functions and types, the social welfare of an outcome $\mathbf{x} \in \mathcal{X}$ and payments $(p_i)_{i \in N}$ is defined as $\mbox{SW}^{\boldsymbol{\theta}}(\mathbf{x},\mathbf{p}):=\sum_{i \in N} u_i^{\theta_i}(\mathbf{x}, p_i) + \sum_{i \in N} p_i$. In the quasilinear case, this simplifies to $\sum_{i \in N} v_i(\mathbf{x})$. 
Unless noted otherwise, by $\mbox{OPT}(\boldsymbol\theta)$, we will refer to the optimal social welfare under type profile $\boldsymbol\theta$, i.e., $\max_{\mathbf{x}, \mathbf{p}} \mbox{SW}^{\boldsymbol\theta}(\mathbf{x}, \mathbf{p})$. A \emph{mechanism} is a triple $(\mathcal{A}, X, P)$, where for each player $i$, there is a set of actions $\mathcal{A}_i$ and $\mathcal{A}=\times_i\mathcal{A}_i$ is the set of action profiles, $X\colon\mathcal{A}\to \mathcal{X}$ is an allocation function that maps actions to outcomes and $P\colon\mathcal{A}\to \mathbb{R}_+^n$ is a payment function that maps actions to payments $p_i$ for each player $i$. Given an action profile $\mathbf{a} \in \mathcal{A}$, we will use the short-hand notation $u_i^{\theta_i}(\mathbf{a})$ to denote $u_i^{\theta_i}(X(\mathbf{a}), p_i)$. \subsection{Solution Concepts} In the setting of \emph{complete information}, the type profile $\boldsymbol\theta$ is fixed. We consider (coarse) correlated equilibria, which generalize Nash equilibria and are the outcome of (no-regret) learning dynamics. A \emph{correlated equilibrium (CE)} is a distribution $\bf a$ over action profiles from $\mathcal{A}$ such that for every player $i$ and every strategy $a_i$ in the support of $\mathbf{a}$ and every action $a_i'\in \mathcal{A}_i$, player $i$ does not benefit from switching to $a_i'$ whenever he was playing $a_i$. Formally, \[ \mathbb{E}_{\mathbf{a}_{-i}\vert a_i}[u_i(\mathbf{a})]\ge \mathbb{E}_{\mathbf{a}_{-i}\vert a_i}[u_i(a_i', \mathbf{a}_{-i})], \forall a_i'\in\mathcal{A}_i, \forall i\enspace. \] In \emph{incomplete information}, the type of each player is drawn from a distribution $F_i$ over her type space $\Theta_i$. The distributions are common knowledge and the draws are independent among players. The solution concept we consider in this setting is the \emph{Bayes-Nash Equilibrium}. Here, the strategy of each player is now a (possibly randomized) function $s_i\colon \Theta_i \to \mathcal{A}_i$. The equilibrium is a distribution over these functions $s_i$ such that each player maximizes her expected utility conditional on her private information. Formally, \[ \mathbb{E}_{\boldsymbol\theta_{-i}\vert \theta_i}[u_i^{\theta_i}(\mathbf{s}(\boldsymbol\theta))] \ge \mathbb{E}_{\boldsymbol\theta_{-i}\vert \theta_i}[u_i^{\theta_i}(a_i,\mathbf{s}_{-i}(\boldsymbol\theta_{-i}))], \forall a_i\in \mathcal{A}_i, \forall \theta_i\in \Theta_i, \forall i\enspace. \] The measure of efficiency is the expected social welfare over the types of the players: given a strategy profile $\mathbf{s}\colon\times_i \Theta_i \to \times_i \mathcal{A}_i$, we consider $\mathbb{E}_{\boldsymbol\theta}[\mbox{SW}^{\boldsymbol\theta}(\mathbf{s}(\boldsymbol\theta))]$. We compare the efficiency of our solution concept with respect to the expected optimal social welfare $\mathbb{E}_{\boldsymbol\theta}[\mbox{OPT}(\boldsymbol\theta)]$. The \emph{price of anarchy (PoA)} with respect to an equilibrium concept is the worst possible ratio between the optimal expected welfare and the expected welfare at equilibrium, that is \[ \mathrm{PoA} = \max_F \max_{D \in EQ(F)}\frac{\mathbb{E}_{\boldsymbol\theta \sim F}[\mbox{OPT}(\boldsymbol\theta)]}{\mathbb{E}_{\boldsymbol\theta \sim F, \mathbf{a} \sim D}[\mbox{SW}^{\boldsymbol\theta}(\mathbf{a})]}\enspace, \] where by $F=F_1\times\dots\times F_n$ we denote the product distribution of the players' type distributions and by $EQ(F)$ the set of all equilibria, which are probability distributions over action profiles. 
We assume that players always have the possibility of not participating, hence any rational outcome has non-negative utility in expectation over the unavailable information and the randomness of the other players and the mechanism. \section{Modeling Risk Aversion}\label{sec:model} When modeling risk aversion, one wants to capture the fact that a random payoff (lottery) $X$ is less preferred than a deterministic one of value $\mathbb{E}[X]$. The standard approach is, therefore, to apply a concave non-decreasing function $h\colon \mathbb{R} \to \mathbb{R}$ to $X$ and consider $h(X)$ instead. By Jensen's inequality, we now know $\mathbb{E}[h(X)] \leq h(\mathbb{E}[X])$. In the case of mechanism design, the utility of a risk-neutral agent is defined as the quasilinear utility $v_i(\mathbf{x}) - p_i$. That is, if an agent has a value of $1$ for an item and has to pay $0.9$ for it, then the resulting utility is $0.1$. The expected utility is identical if the agent only gets the item with probability $0.1$ for free. To capture the effect that the agent prefers the certain outcome to the uncertain one, we again apply a concave function $h_i\colon \mathbb{R} \to \mathbb{R}$ to the quasilinear term $v_i(\mathbf{x}) - p_i$. We then consider utility functions $u_i(\mathbf{x}, p_i) = h_i(v_i(\mathbf{x}) - p_i)$ in the setting described in Section~\ref{sec:preliminaries}. Note that the mechanisms we consider do not know the $h_i$-functions. They work as if all utility functions were quasilinear. We want to compare outcomes based on their social welfare. We use the definition of social welfare as the sum of utilities of all involved parties, including the auctioneer. That is, $\mbox{SW}(\mathbf{x}, \mathbf{p}) = \sum_{i \in N} u_i(\mathbf{x}, p_i) + \sum_{i \in N} p_i$. It is impossible for any mechanism to choose good outcomes for this benchmark if the $h_i$-functions are arbitrary and unknown. Therefore, we assume that utility functions are normalized so that the utility matches the quasilinear one for $p_i = 0$ and $p_i = v_i(\mathbf{x})$ (see Figure~\ref{fig:risk_averse_utility}). In more detail, we assume the following \emph{normalized risk-averse utilities}: \begin{enumerate} \item \label{prop:monotonicity} $u_i^{v_i}(\mathbf{x}, p_i) \geq u_i^{v_i}(\mathbf{x}, p_i')$ if $p_i \leq p_i'$ (monotonicity) \item \label{prop:first} $u_i^{v_i}(\mathbf{x}, p_i)=0$ if $p_i=v_i(\mathbf{x})$ (normalization at $p_i=v_i(\mathbf{x})$) \item \label{prop:second} $u_i^{v_i}(\mathbf{x},p_i)=v_i(\mathbf{x})$ if $p_i=0$ (normalization at $p_i=0$) \item \label{prop:third} $u_i^{v_i}(\mathbf{x},p_i)\geq v_i(\mathbf{x})-p_i$ if $0\leq p_i\leq v_i(\mathbf{x})$ and $u_i^{v_i}(\mathbf{x},p_i)\leq v_i(\mathbf{x})-p_i$ otherwise (relaxed concavity) \end{enumerate} Assumption~\ref{prop:third} is a relaxed version of concavity that suffices for our needs in the positive results. Our negative results always fulfill concavity.
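For a concrete instance, the exponential-type curve shown in Figure~\ref{fig:risk_averse_utility}, $u^{v}(\mathbf{x},p)=v(\mathbf{x})\,\bigl(1-e^{-(v(\mathbf{x})-p)}\bigr)/\bigl(1-e^{-v(\mathbf{x})}\bigr)$, satisfies all four properties. The following sketch is our own numerical check of this claim on a grid of payments; the test value $v=6$ is arbitrary.
\begin{verbatim}
import numpy as np

def u(v, p):
    # Exponential-type normalized risk-averse utility (the curve in the
    # figure); this particular choice is purely illustrative.
    return v * (1.0 - np.exp(-(v - p))) / (1.0 - np.exp(-v))

v = 6.0
p = np.linspace(0.0, 2.0 * v, 1001)

assert np.all(np.diff(u(v, p)) < 0)              # property 1: monotone in p
assert np.isclose(u(v, v), 0.0)                  # property 2: u = 0 at p = v
assert np.isclose(u(v, 0.0), v)                  # property 3: u = v at p = 0
inside = p <= v
assert np.all(u(v, p[inside]) >= v - p[inside] - 1e-12)    # property 4, p <= v
assert np.all(u(v, p[~inside]) <= v - p[~inside] + 1e-12)  # property 4, p > v
\end{verbatim}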
\begin{SCfigure} \centering \begin{tikzpicture}[yscale=0.5] \draw[->] (-1,0) -- (4.2,0) node[anchor=north] {$p_i$}; \draw[->] (0,-1) -- (0,4.2) node[anchor=east] {$u_i(\mathbf{x}, p_i)$}; \draw[scale=0.5, font=\footnotesize] (6,0) node[anchor=north east] {$v_i(\mathbf{x})$}; \draw[scale=0.5, font=\footnotesize] (0,6) node[anchor=east] {$v_i(\mathbf{x})$}; \draw[scale=0.5, domain=0:6.25,smooth,variable=\x] plot ({\x},{6-\x}); \draw[scale=0.5, domain=0:6.25,smooth,variable=\x, very thick] plot ({\x},{6/(1-e^(-6)) * (1-e^(-6+\x))}); \end{tikzpicture} \caption{Normalized risk-averse utility function (bold) and quasilinear utility function for a fixed allocation $\mathbf{x}$ and varying payment $p_i$.} \label{fig:risk_averse_utility} \end{SCfigure} As an effect of normalization, the optimal social welfare of the risk-averse setting can be bounded in terms of the optimal sum of values, which coincides with the social welfare for quasilinear utilities. \begin{lemma}\label{lemma:OPT_relation} Given valuation functions $(v_i)_{i \in N}$ and normalized risk-averse utilities $(u_i^{v_i})_{i \in N}$, let $\mbox{OPT}$ denote the optimal social welfare with respect to utilities $(u_i^{v_i})_{i \in N}$ and $\widehat\mbox{OPT}$ denote the optimal social welfare with respect to quasilinear utilities $(\hat{u}_i^{v_i})_{i \in N}$. Then, $\mbox{OPT} \leq 2 \widehat\mbox{OPT}$. \end{lemma} \begin{proof} Let $\mathbf{x}$, $\mathbf{p}$ denote the outcome and payment profile that maximizes the social welfare $\sum_{i \in N} u_i^{v_i}(\mathbf{x}, p_i) + \sum_{i \in N} p_i$. Consider a fixed player $i$. If $0 \leq p_i \leq v_i(\mathbf{x})$, then by monotonicity of $u_i^{v_i}(\mathbf{x}, \cdot)$ and Assumption~\ref{prop:second}, $u_i^{v_i}(\mathbf{x}, p_i) + p_i \leq u_i^{v_i}(\mathbf{x}, 0) + p_i \leq 2 v_i(\mathbf{x})$. If $p_i > v_i(\mathbf{x})$, then we know from Assumption~\ref{prop:third} that $u_i^{v_i}(\mathbf{x}, p_i) + p_i \leq v_i(\mathbf{x})$. So, always, $u_i^{v_i}(\mathbf{x}, p_i) + p_i \leq 2 v_i(\mathbf{x})$. By taking the sum over all players, we get $\mbox{OPT} = \sum_{i \in N} u_i^{v_i}(\mathbf{x}, p_i) + \sum_{i \in N} p_i \leq \sum_{i \in N} 2 v_i(\mathbf{x}) \leq 2 \widehat\mbox{OPT}$. \end{proof} As a consequence, the optimal social welfare changes only within a factor of 2 by risk aversion and we may as well take $\widehat\mbox{OPT}$ as our point of comparison. A VCG mechanism, for example, is still incentive compatible under risk-averse utilities but optimizes the wrong objective function. Lemma~\ref{lemma:OPT_relation} shows that it is still a constant-factor approximation to optimal social welfare. However, in simple mechanisms, the agents' strategic behavior may or may not change drastically under risk aversion, depending on the mechanism. This way, equilbria and outcomes can possibly be very different. \section{Smoothness Beyond Quasilinear Utilities}\label{sec:general_smoothness} Most of our positive results rely on the \emph{smoothness} framework. It was introduced by \cite{Roughgarden09} for general games. There are multiple adaptations to the quasilinear mechanism-design setting. We will use the one by \cite{SyrgkanisT13}. As our utility functions will not be quasilinear, in this section we first \bk{observe that the framework can be extended to general utility functions}. Note that throughout this section, the exact definition of $\mbox{OPT}(\boldsymbol\theta)$ is irrelevant. 
Therefore, it can be set to the optimal social welfare but also to weaker benchmarks depending on the setting. \begin{definition}[Smooth Mechanism]\label{def:smoothness} A mechanism $M$ is $(\lambda, \mu)$-smooth with respect to utility functions $(u_i^{\theta_i})_{\theta_i\in\Theta_i, i\in N}$ for $\lambda, \mu \ge 0$, if for any type profile $\boldsymbol\theta \in \times_i \Theta_i$ and for any action profile $\mathbf{a}$ there exists a randomized action $a^*_i(\boldsymbol\theta, a_{i})$ for each player $i$, such that $\sum_i u_i^{\theta_i}(a^*_i(\boldsymbol\theta, a_{i}), \mathbf{a}_{-i}) \ge \lambda \mbox{OPT}(\boldsymbol\theta) - \mu \sum_i p_i(\mathbf{a})$. We denote by $u_i^{\theta_i}(\mathbf{a})$ the expected utility of a player if $\mathbf{a}$ is a vector of randomized strategies. \end{definition} Mechanism smoothness implies bounds on the price of anarchy. The following theorem and its proof are analogous to the theorems in~\cite{SyrgkanisT13}, the proof is therefore deferred to the Appendix, Section~\ref{sec:missing_proofs}. In cases where the deviation required by smoothness does not depend on $a_i$, the results extend to coarse correlated equilibria. The important point is that the respective bounds mostly do not depend on the assumption of quasilinearity. \begin{theorem}\label{thm:smoothness_CCE_and_BNE} If a mechanism $M$ is $(\lambda, \mu)$-smooth w.r.t.~utility functions $(u_i^{\theta_i})_{\theta_i\in\Theta_i, i\in N}$, then any Correlated Equilibrium in the full information setting and any Bayes-Nash Equilibrium in the Bayesian setting achieves efficiency of at least $\frac{\lambda}{\max\{1, \mu\}}$ of $\mbox{OPT}(\boldsymbol\theta)$ or of $\mathbb{E}_{\boldsymbol\theta}[\mbox{OPT}(\boldsymbol\theta)]$, respectively. \end{theorem} In the standard single-item setting, one item is auctioned among $n$ players, with their valuations and actions (bids) both being real numbers. In the common auction formats, the item is given to the bidder with the highest bid. In a \emph{first-price auction}, the winner has to pay her bid; the other players do not pay anything. It is $(1 - 1/e, 1)$-smooth w.r.t.\ quasilinear utility functions. In an \emph{all-pay auction}, all players have to pay their bid. It is $(1/2, 1)$-smooth w.r.t.\ quasilinear utility functions. These smoothness results were given by \cite{SyrgkanisT13}. They also show that simultaneous and sequential compositions of smooth mechanisms are again smooth. What is remarkable here is that first-price and all-pay auctions achieve nearly the same welfare guarantees. We will show that in the risk-averse setting this is not true. While the first-price auction almost preserves its constant price of anarchy, the all-pay auction has an unbounded price of anarchy, even with only three players. \section{Quasilinear Smoothness Often Implies Risk-Averse Smoothness (Main Result~\ref{main-result-smooth})} \label{sec:positive-results} Our main positive result is that many price-of-anarchy guarantees that are proved via smoothness in the quasilinear setting transfer to the risk-averse one. First, we consider mechanisms that are $(\lambda, \mu)$-smooth with respect to quasilinear utility functions. We show that if the deviation strategy $\mathbf{a^\ast}$ that is used to establish smoothness ensures non-negative utility, then the price-of-anarchy bound extends to risk-averse settings at a multiplicative constant loss. 
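Before turning to the risk-averse extension, here is a small illustration of Definition~\ref{def:smoothness} itself in the quasilinear single-item case. The sketch checks the smoothness inequality for the first-price auction with the simple deterministic deviation $a_i^\ast=v_i/2$, which establishes the weaker $(1/2,1)$-smoothness (the $(1-1/e,1)$ bound quoted above uses a randomized deviation instead); the Monte Carlo test over random valuation and bid profiles is our own illustrative check, not a proof.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)

def first_price(bids):
    # Single-item first-price auction: highest bid wins and pays its bid.
    winner = int(np.argmax(bids))
    payments = np.zeros_like(bids)
    payments[winner] = bids[winner]
    return winner, payments

# Check the (1/2, 1)-smoothness inequality with the deterministic
# deviation a_i^* = v_i / 2 on random quasilinear instances.
for _ in range(10_000):
    n = rng.integers(2, 6)
    values = rng.uniform(0.0, 10.0, size=n)
    bids = rng.uniform(0.0, 10.0, size=n)

    deviation_utils = 0.0
    for i in range(n):
        dev_bids = bids.copy()
        dev_bids[i] = values[i] / 2.0
        winner, payments = first_price(dev_bids)
        deviation_utils += (values[i] - payments[i]) if winner == i else 0.0

    opt = values.max()              # optimal welfare: serve the highest value
    _, payments = first_price(bids)
    assert deviation_utils >= 0.5 * opt - payments.sum() - 1e-9
\end{verbatim}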
\begin{theorem}\label{thm:main_result1} If a mechanism is $(\lambda, \mu)$-smooth w.r.t.~quasilinear utility functions $(\hat{u}_i^{v_i})_{i\in N, v_i\in\mathcal{V}_i}$ and the actions in the support of the smoothness deviations satisfy $\hat{u}_i(a_i^*, \mathbf{a}_{-i})\geq0, \forall \mathbf{a}_{-i}, \forall i$, then any Correlated Equilibrium in the full information setting and any Bayes-Nash Equilibrium in the Bayesian setting achieves efficiency at least $\frac{\lambda}{2\cdot\max\{1, \mu\}}$ of the expected optimal even in the presence of risk averse bidders. \end{theorem} Using Theorem~\ref{thm:smoothness_CCE_and_BNE}, it suffices to prove the following lemma. \begin{lemma}\label{lemma:smooth} If a mechanism is $(\lambda, \mu)$-smooth w.r.t.~quasilinear utility functions $(\hat{u}_i^{v_i})_{i\in N, v_i\in\mathcal{V}_i}$ and the actions in the support of the smoothness deviations satisfy \begin{equation} \label{eq:nonnegutility} \hat{u}_i(a_i^*, \mathbf{a}_{-i})\geq0, \forall \mathbf{a}_{-i}, \forall i \enspace, \end{equation} then the mechanism is $(\lambda/2, \mu)$-smooth with respect to any normalized risk-averse utility functions $(u_i^{v_i})_{i\in N, v_i\in\mathcal{V}_i}$. \end{lemma} \begin{proof} We start from an arbitrary action profile $\mathbf{a}$ and want to satisfy Definition~\ref{def:smoothness}. Since there exist smoothness deviations s.t. $\hat{u}_i(a_i^*, \mathbf{a}_{-i})=v_i(\mathbf{x}(a_i^*, \mathbf{a}_{-i}))-p_i\geq0, \forall \mathbf{a}_{-i}, \forall i$, we know from property~\ref{prop:third} of the risk aversion definition that $u_i^{v_i}(a_i^*, \mathbf{a}_{-i})\ge \hat{u}_i^{v_i}(a_i^*, \mathbf{a}_{-i})$. Therefore, \[ \sum_i u_i^{v_i}(a_i^*, \mathbf{a}_{-i}) \ge \sum_i \hat{u}_i^{v_i}(a_i^*, \mathbf{a}_{-i})\ge \lambda \widehat\mbox{OPT} - \mu \sum_i p_i(\mathbf{a})\ge\frac{\lambda}{2} \mbox{OPT} - \mu \sum_i p_i(\mathbf{a})\enspace, \] where the last inequality follows from Lemma~\ref{lemma:OPT_relation}. \end{proof} Note that in order for~\eqref{eq:nonnegutility} to hold, it is sufficient if all undominated strategies guarantee non-negative quasilinear utility. For example, in a first-price auction, the only undominated bids are the ones from $0$ to $v_i$. Regardless of the other players' bids, these can never result in negative utilities. \begin{corollary} Under normalized risk-averse utilities, the first-price auction has a constant price of anarchy for correlated and Bayes-Nash equilibria. \end{corollary} \bk{We in addition note here that the first part of Property~\ref{prop:third} of the normalization assumption, $u_i^{v_i}(\mathbf{x}, p_i) \ge v_i(\mathbf{x})-p_i, 0\le p_i\le v_i(\mathbf{x})$, is not crucial for obtaining a result similar to Theorem~\ref{thm:main_result1}. Indeed, a relaxation of the form $u_i^{v_i}(\mathbf{x}, p_i) \ge C\cdot(v_i(\mathbf{x})-p_i), 0\le p_i\le v_i(\mathbf{x})$ for $0<C<1$, $C$ constant, would incur a loss of at most a factor of $C$ in the efficiency bound of Theorem~\ref{thm:main_result1}. More details can be found in the Appendix, Section~\ref{sec:normalization_relaxation}.} For second-price auctions and their generalizations, for example, the just stated theorems do not suffice to prove guarantees on the quality of equilibria. One in addition needs a no-overbidding assumption and this is further taken care of in the framework of \emph{weak smoothness}, also introduced in~\cite{SyrgkanisT13}. 
We defer all definitions and results that deal with weak smoothness, including the extension from quasilinear to general utility functions and risk-averse utilities yielding a constant loss as compared to the quasilinear case, to the Appendix, Section~\ref{sec:weak_smoothness}. We also consider the setting where players have hard \emph{budget constraints}. Note that in this case the players' preferences are not quasilinear already in the risk neutral case. Informally, we show that if a mechanism is $(\lambda,\mu)$-smooth w.r.t. quasilinear utility functions, then the loss of performance in the budgeted setting is bounded by a constant, even in the presence of risk-averse agents. All details can be found in Section~\ref{sec:budgets} of the Appendix. \section{Unbounded Price of Anarchy for All-Pay Auctions (Main Result~\ref{main-result-allpay})}\label{sec:negative-results} From the previous section, we infer that the constant price-of-anarchy bounds for first-price and second-price auctions immediately extend to the risk-averse setting. This is not true for all-pay auctions; by definition there is no non-trivial bid that always ensures non-negative utility. Indeed, as we show in this section, the price of anarchy is unbounded in the presence of risk-averse players. Missing calculation details can be found in the Appendix, Section~\ref{subsec:calculations}. \begin{theorem} \label{theorem:all-pay} In an all-pay auction with risk-averse players, the PoA is unbounded. \end{theorem} The general idea is to construct a Bayes-Nash equilibrium with two players that very rarely have high values and only then bid high values. We then add a third player who always has a high value. However, as the first two players bid high values occasionally, there is no possible bid that ensures he will surely win. This means, any bid has a small probability of not getting the item but having to pay. Risk-averse players are more inclined to avoid this kind of lotteries. In particular, making our third player risk-averse enough, he prefers the sure zero utility of not participating to any way of bidding that always comes with a small probability of negative utility. \begin{proof}[Proof of Theorem~\ref{theorem:all-pay}] We consider two (mildly) risk-averse players who both have the same valuation distributions and a third (very) risk-averse player with a constant value. For a large number $M>5$, the first two players have values $v_1$ and $v_2$ drawn independently from distributions with density functions of value $2\cdot(1-(M-1)\cdot\varepsilon)$ on the interval $[1/2,1)$ and value $\varepsilon$ on the interval $[1,M]$, where $\varepsilon=1/M^2$. The third player always has value $1/3\cdot \ln (M/2)$ for winning. We will construct a symmetric pure Bayes-Nash equilibrium involving only the first two players. It will be designed such that for the third player it is a best response to always bid $0$, i.e., to opt out of the mechanism and never win the item. So, the combination of these strategies will be a pure Bayes-Nash equilibrium for all three players. Note that the social welfare of any equilibrium of this form is upper-bounded by the optimal social welfare that can be achieved by the first two bidders. By Lemma~\ref{lemma:OPT_relation}, it is bounded by \[ \mathbb{E}[\mbox{SW}]\le2\cdot \mathbb{E}[\max\{v_1,v_2\}]\le2\cdot \mathbb{E}[v_1+v_2]=2\cdot (\mathbb{E}[v_1]+\mathbb{E}[v_2])=4\cdot\mathbb{E}[v_1]\le 4 \enspace. 
\] Furthermore, the third player's value $v_3 = 1/3\cdot \ln (M/2)$ is a lower bound to the optimal social welfare \bk{in the construction containing all three players}. So, as pointwise $\mbox{OPT}(v)\ge 1/3\cdot\ln(M/2)$, where $v=(v_1,v_2,v_3)\in\mathcal{V}$ denotes the valuation profile, this implies that the price of anarchy can be arbitrarily high. We define the utility functions by setting \begin{equation} u^{v_i}_i(b_i) = \begin{cases} \frac{h_i(v_i-b_i)}{h(v_i)}\cdot v_i & \text{if $b_i$ is the winning bid}\\ \frac{h_i(-b_i)}{h(v_i)}\cdot v_i & \text{otherwise } \end{cases} \end{equation} For the first two players, we use $h_i(x):=1-e^{-x}$, $i \in \{1, 2\}$, so in particular increasing and concave. For the third player, we set $h_i(x) = x$ for $x \geq 0$ and $h_i(x) = C \cdot x$ for $x < 0$, where $C = (16\cdot\frac{1}{3}\cdot\ln M/2)\cdot M^2 \geq 1$. Again this function is increasing and concave\footnote{Its slope is not an absolute constant. This is indeed necessary because the price of anarchy can be bounded in terms of the slopes of the $h_i$-functions as we show in Appendix~\ref{sec:bounded_slope}.}. \bk{Note that the utility functions also satisfy normalizations at $p_i=v_i(\mathbf{x})$ and at $p_i=0$.} We see that in this example risk aversion has the effect of heavily penalizing payments without winning the auction. \begin{claim} With the third player not participating, it is a symmetric pure Bayes-Nash equilibrium for the first two players to play according to bidding function $\beta\colon\mathcal{V}_i\to \mathbb{R}_+, i\in\{1,2\},$ such that \begin{equation}\label{eq:beta} \beta(x) =\int_{\frac{1}{2}}^x \frac{f(t)(e^t-1)}{F(t)+(1-F(t))e^t} dt\enspace, \end{equation} where $F$ denotes the cumulative distribution function of the value and $f$ denotes its density. \end{claim} \begin{proof} We will argue that playing according to $\beta$ is always the unique best response if the other player is playing according to $\beta$, too. Due to symmetry reasons, it is enough to argue about the first player. Let us fix player 1's value $v_1 = x$ and consider the function $g\colon \mathbb{R}_{\geq 0} \to \mathbb{R}$ that is defined by $g(y) = \mathbf{E}[u_1^x(b_1=y, b_2=\beta(v_2), b_3=0)]$. We claim that $g$ is indeed maximized at $y = \beta(x)$. We have\footnote{Note that the first step assumes tie breaking in favor of player 1. This is irrelevant for the future steps as the involved probability distributions are continuous.} \begin{align*} g(y)&=\Pr[\beta(v_2) \leq y]\cdot \frac{h_1(x-y)}{h_1(x)}\cdot x + \left(1-\Pr[\beta(v_2) \leq y]\right)\cdot \frac{h_1(-y)}{h_1(x)}\cdot x\\ &=\frac{x}{h_1(x)}\left[ F(\beta^{-1}(y))\Big(h_1(x-y)-h_1(-y)\Big) + h_1(-y)\right] \\ &=x e^y F(\beta^{-1}(y)) + \frac{x ( 1 - e^y )}{1 - e^{-x}}\enspace. \end{align*} The first derivative of this function is given by \begin{align*} g'(y) & = x e^y F(\beta^{-1}(y)) + x e^y \frac{d}{dy} F(\beta^{-1}(y)) - \frac{x}{1 - e^{-x}} e^y\enspace. \end{align*} The inverse function theorem implies $\frac{d}{dy} F(\beta^{-1}(y)) = \frac{f(\beta^{-1}(y))}{\beta'(\beta^{-1}(y))}$. Furthermore, as $\beta'(t) = \frac{f(t)(e^t-1)}{F(t)+(1-F(t))e^t}$, we get for all $t$ that $ \frac{f(t)}{\beta'(t)} = \frac{F(t) + (1 - F(t)) e^t}{e^t - 1}= (1-F(t)) + \frac{1}{e^t - 1}\enspace. $ This simplifies $g'(y)$ to \begin{align*} g'(y) & =xe^y+ \frac{x e^y}{e^{\beta^{-1}(y)} - 1} - \frac{xe^y}{1 - e^{-x}} =\frac{xe^y}{(1-e^{-x})(e^{\beta^{-1}(y)}-1)} \left(1 - e^{-x+\beta^{-1}(y)} \right) \enspace. 
\end{align*} Notice that the factor $\frac{xe^y}{(1-e^{-x})(e^{\beta^{-1}(y)}-1)}$ is always positive. Therefore, we observe that $g'(y) = 0$ if and only if $e^{-x+\beta^{-1}(y)} = 1$, which is equivalent to $y = \beta(x)$. Furthermore, $g'(y) > 0$ for $y < \beta(x)$ and $g'(y) < 0$ for $y > \beta(x)$. This means that $y = \beta(x)$ has to be the (unique) global maximum of $g(y)$. \end{proof} \begin{claim} If the first two players are bidding according to~(\ref{eq:beta}), then it is a best response for the third player to always bid $0$. \end{claim} \begin{proof} We now show that the very risk-averse third player with valuation $1/3\cdot \ln (M/2)$ will indeed bid $0$ because every bid $b_3' > 0$ will cause negative expected utility. We distinguish two cases. For values of $b_3' > \frac{1}{16}$, we use that with a small probability one of the two remaining players has a valuation of at least $M-1$, which leads to negative utility. For $b_3' \leq \frac{1}{16}$ on the other hand, he loses so often that his expected utility is again negative. Let us first assume that the third player bids $b_3'$ with $\frac{1}{16} < b_3' \leq v_3$. In this case, with probability more than $\varepsilon$ one of the first two players has value of at least $M-1$. The bid of this player with $v_i \geq M-1$ can be estimated as follows \begin{align*} \beta(v_i)&\geq \beta(M-1) \geq \int_{M/2}^{M-1} \frac{f(t)(e^t-1)}{1+(1-F(t))e^t}dt = \int_{M/2}^{M-1}\frac{\varepsilon (e^t-1)}{1+\varepsilon(M-t)e^t}dt\\ &\ge \frac{1}{2}(1-e^{-\frac{M}{2}}) \int_{M/2}^{M-1} \frac{\varepsilon e^t}{\varepsilon (M-t)e^t}dt = \frac{1}{2}(1-e^{-\frac{M}{2}}) \ln (M/2) > \frac{1}{3} \ln (M/2)\enspace, \end{align*} which means that by bidding $b_3'$ the third player loses with probability of at least $\varepsilon=1/M^2$. For the expected utility, this implies \begin{align*} \mathbb{E}[u_3(b_3', \mathbf{b}_{-3})] &\leq (1 - \varepsilon)(v_3 - b_3') + \varepsilon(- C\cdot b_3') < \frac{1}{3}\ln \frac{M}{2} - \frac{1}{16}\cdot 16\cdot\frac{1}{3}\ln \frac{M}{2} = 0\enspace. \end{align*} In the case where the third player bids $b_3'$, $0 < b_3' \le \frac{1}{16}$, we need to be a bit more careful with estimating the winning probability. We first give a lower bound on the bidding function of the first player \bk{for $v_1<1$} \begin{align*}\label{eq:bid_estimate} \beta(v_1)&\ge \int_{1/2}^{v_1} \frac{2(1-\frac{M-1}{M^2})(e^t-1)}{2(t-\frac{1}{2})(1-\frac{M-1}{M^2})+ (1-2(t-\frac{1}{2})(1-\frac{M-1}{M^2}))\cdot e^t} dt>\frac{1}{4} \left( v_1 - \frac{1}{2} \right)\enspace. \end{align*} \bk{Since for $v_1\ge1$, $\beta(v_1)>\frac{1}{16}$ with probability $1$,} this implies that with $b_3'$, the third player has a winning probability of at most \[ \Pr[\beta(v_1) \leq b_3'] \le \Pr\left[\frac{1}{4}\left(v_1-\frac{1}{2}\right) \leq b_3'\right] = \Pr\left[v_1 \leq 4b_3'+\frac{1}{2}\right]<2\cdot 4 b_3'\enspace. \] Now, having in mind that $C = (16v_3)\cdot M^2\ge 32\cdot v_3$, the utility can be estimated as follows \begin{align*} \mathbb{E}[u_3(b_3', \mathbf{b}_{-3})] &\le\Pr[\beta(v_1) \leq b_3']\cdot(v_3-b_3')-\Pr[\beta(v_1) > b_3']\cdot 32\cdot v_3\cdot b_3' \\ &<8b_3'\left(-v_3-\frac{1}{16}\right)<0\enspace. \end{align*} So also in this case, the expected utility is negative. \end{proof} Combining the two claims, we have constructed a class of distributions and Bayes-Nash equilibria with unbounded price of anarchy. \end{proof} As a final remark, we note that the first two bidders occasionally bid high only due to risk aversion. 
In a symmetric Bayes-Nash equilibrium of the all-pay auction in the quasilinear setting, all bids are always bounded by the expected value of a player (see Appendix~\ref{sec:quasilinear_allpay}). Therefore, such an equilibrium would not work as a point of departure. \section{Variance-Aversion Model}\label{sec:other-models} In this section, we consider a different model that tries to capture the effect that agents prefer certain outcomes to uncertain ones. It is inspired by similar models in game theory and penalizes variance of random variables. Rather than reflecting the aversion in the utility functions, it is modeled by adapting the solution concept. In the usual definition of equilibria involving randomization, the utility of a randomized strategy profile is set to be the expectation over the pure strategies. The definition we consider here is modified by subtracting the respective standard deviation. For a player $i$, the utility of a randomized strategy profile $\mathbf{a}$ is given as $u^{v_i}_i(\mathbf{a}) = \mathbb{E}_{\mathbf{b} \sim \mathbf{a}}[\hat{u}^{v_i}_i(\mathbf{b})] - \gamma\sqrt{\mathrm{Var}[\hat{u}_i^{v_i}(\mathbf{b})]}$, so a player's utility for an action profile is his expected quasilinear utility for this profile minus the standard deviation multiplied by a parameter $\gamma$ that determines the degree of variance-averseness, $0\le\gamma\le1$. As already mentioned, $\hat{u}_i(\mathbf{a})$ denotes the quasilinear utility of player $i$ for the action profile $\mathbf{a}$. Bayes-Nash Equilibria and correlated equilibria can be defined the same way as before, always replacing expectations by the difference of expectation and standard deviation. The formal definition for $s(\mathbf{v})$ being a Bayes-Nash equilibrium in this setting is that $\forall i \in N$, $\forall v_i\in \Theta_i$, $a_i\in \mathcal{A}_i$, \begin{multline*} \mathbb{E}_{\mathbf{v}_{-i}}[\hat{u}_i^{v_i}(s(\mathbf{v})) \mid v_i] - \gamma\sqrt{\mathrm{Var}[\hat{u}_i^{v_i}(s(\mathbf{v})) \mid v_i]}\\ \ge \mathbb{E}_{\mathbf{v}_{-i}}[\hat{u}_i^{v_i}(a_i,s_{-i}(\mathbf{v}_{-i})) \mid v_i] - \gamma\sqrt{\mathrm{Var}[\hat{u}_i^{v_i}(a_i, s_{-i}(\mathbf{v}_{-i})) \mid v_i]}\enspace. \end{multline*} Note that we again evaluate social welfare as agents perceive it. That is, for a randomized strategy profile $\mathbf{a}$, we set $\mbox{SW}^{\mathbf{v}}(\mathbf{a}) = \sum_i u_i^{v_i}(\mathbf{a}) + \sum_i p_i(\mathbf{a})$. Our first result shows that first-price and notably also all-pay auctions have a constant price of anarchy in this setting. Note that even though the proof looks a lot like smoothness proofs, it is not possible to phrase it within the smoothness framework, since here we are dealing with a different solution concept. \begin{theorem} Bayes-Nash Equilibria and Correlated Equilibria of the first-price and all-pay auction have a constant price of anarchy in this model. \end{theorem} \begin{proof} For simplicity, we will show the claim only for Bayes-Nash equilibria. The proof for correlated equilibria works the same way with minor modifications to the notation. Assume $\mathbf{b}$ is a Bayes-Nash equilibrium. We claim that $ \mathbb{E}_{\mathbf{v}}\left[\mbox{SW}^{\mathbf{v}}(\mathbf{b})\right]\ge \frac{1}{16} \cdot \mathbb{E}_{\mathbf{v}}[\mbox{OPT}]\enspace, $ where $\mbox{OPT}$ denotes the value of social welfare in the allocation that maximizes it, i.e. maximized sum of utility and payments of the agents. Consider a fixed player $j$ and a fixed valuation $v_j$. 
Let $q=\Pr[\max_{i\neq j}b_{i} \le \frac{1}{4}\cdot v_j]$ denote the probability that no other player's bid exceeds $\frac{1}{4}\cdot v_j$. Assume first that $q\le \frac{3}{4}$. Then, because the total social welfare is lower bounded by the payments, $ \mathbb{E}_{\mathbf{v}_{-j}\vert v_j}\left[\mbox{SW}^{\mathbf{v}}(\mathbf{b})\right] \ge (1-q)\frac{1}{4} v_j\ge\frac{1}{16} v_j\enspace. $ On the other hand, if $q\ge\frac{3}{4}$, since $\mathbb{E}_{\mathbf{v}_{-j}\vert v_j}\left[\mbox{SW}^{\mathbf{v}}(\mathbf{b})\right]\ge \mathbb{E}_{\mathbf{v}_{-j}\vert v_j}\left[u_j^{v_j}(\frac{v_j}{4},b_{-j})\right]$, \begin{multline*}\label{eq:variance_first_and_allpay} \mathbb{E}_{\mathbf{v}_{-j}\vert v_j}\left[\mbox{SW}^{\mathbf{v}}(\mathbf{b})\right] \ge v_j q - \frac{1}{4} v_j - \gamma v_j\sqrt{q(1-q)} \ge \left(\frac{2- \gamma\sqrt{3}}{4}\right)v_j\ge \frac{1}{16}v_j\enspace, \end{multline*} where the first inequality is in fact an equality for the all-pay auction. From here, by taking the expectation over $v_j$ and by weighting the right-hand side with the probability that $\mbox{OPT}$ assigns the item to the respective agent, the theorem follows. \end{proof} This is contrasted by a correlated equilibrium with $0$ social welfare in a setting with positive values. Indeed, for the special case of $\gamma=1$, we see that the variance-averse model further differs from the risk-averse model described in previous sections. \begin{observation}\label{lemma:observation} The PoA for CE of second-price auctions is unbounded if $\gamma=1$. \end{observation} The proof can be found in the Appendix, Section~\ref{sec:observation}. This is not merely an artifact of the difference between smoothness and weak smoothness: our final result is a mechanism that is $(\lambda,\mu)$-smooth for constant $\lambda$ and $\mu$ but has unbounded price of anarchy. \begin{theorem}\label{thm:variance_smoothness} For any constant $\gamma>0$ there is a mechanism that is $(\lambda,\mu)$-smooth with respect to quasilinear utility functions for constant $\lambda$ and $\mu$ but has unbounded price of anarchy in the variance-aversion model. \end{theorem} \begin{proof} Consider a setting with two items and two players, who have unit-demand valuation functions such that $\frac{1}{c} v_{i, 1} \leq v_{i, 2} \leq c v_{i, 1}$ for constant $c \geq 1$. The players' possible actions are to either report one of the two items as preferred or to opt out entirely. Our mechanism first assigns player 1 her (claimed) favorite item, then assigns player 2 the remaining one unless she opts out. There are no payments. Obviously, this mechanism is $(\frac{1}{c}, 0)$-smooth because the allocation is within a $\frac{1}{c}$-factor of the optimal allocation by construction. We will now construct a mixed Nash equilibrium of bad welfare. To this end, let $v_{1, 1} = v_{1, 2} = \varepsilon$ for some small $\varepsilon > 0$. This makes player 1 indifferent between items 1 and 2. In particular, for a parameter $q > 1$ to be chosen later, it is a best response to ask for item 1 with probability $\frac{q-1}{q}$ and for item 2 with probability $\frac{1}{q}$. We note at this point that in a Bayes-Nash equilibrium we could make this respective action the unique best response by having random types. For player 2, we set $v_{2, 1} = c$, $v_{2, 2} = 1$. She has the choice of participating or opting out.
Opting out implies utility $0$, whereas participating implies \begin{align*} u_2(\mathbf{a}) &= \frac{c+q-1}{q} - \gamma\sqrt{\frac{(c-1)^2(q-1)}{q^2}}= \frac{(c-1)(1-\gamma\sqrt{q-1})}{q}+1\enspace. \end{align*} Now, if we set $q=c-1$, then $u_2(\mathbf{a})=2-\gamma\sqrt{c-2}$, which is negative for $c>\frac{4}{\gamma^2}+2$. We further set $c=\frac{4}{\gamma^2}+3$. That is, player 2 prefers to opt out. This outcome has social welfare $\varepsilon$ whereas the optimal social welfare is $c$. \end{proof} Note that this last example shows that variance-averseness yields very strange preferences for lotteries. In our example, the variance-averse player prefers not to participate although any outcome in the (free) lottery has positive value. \appendix \section{Proof of Theorem~\ref{thm:smoothness_CCE_and_BNE}}\label{sec:missing_proofs} \subsection{Full Information Setting} \begin{proof} Let $\bf a$ be a correlated equilibrium. This means that for every $a_i$ in the support of $\bf a$ \[ \mathbb{E}_{\mathbf{a}_{-i}\vert a_i}[u_i^{\theta_i}(a_i, \mathbf{a}_{-i})]\ge \mathbb{E}_{\mathbf{a}_{-i}\vert a_i}[u_i^{\theta_i}(a_i', \mathbf{a}_{-i})], \forall a_i'\in\mathcal{A}_i, \forall i\enspace. \] Applying the equilibrium property to $a_i'=a_i^*(\boldsymbol{\theta}, a_i)$, we know that for every $a_i$ in the support of $\bf a$: \[ \mathbb{E}_{\mathbf{a}_{-i}\vert a_i}[u_i^{\theta_i}(a_i, \mathbf{a}_{-i})]\ge \mathbb{E}_{\mathbf{a}_{-i}\vert a_i}[u_i^{\theta_i}(a_i^*(\boldsymbol\theta, a_i), \mathbf{a}_{-i})], \forall i\enspace. \] If we now take the expectation over $a_i$ and sum over all players: \[ \mathbb{E}_{\mathbf{a}}[\sum_i u_i^{\theta_i}(\mathbf{a})]\ge \mathbb{E}_{\mathbf{a}}[\sum_i u_i^{\theta_i}(a_i^*(\boldsymbol\theta, a_i), \mathbf{a}_{-i})]\ge \lambda\mbox{OPT}(\boldsymbol\theta) - \mu\mathbb{E}_{\mathbf{a}}[\sum_i p_i(\mathbf{a})]\enspace, \] and further by adding $\mathbb{E}_{\mathbf{a}}[\sum_i p_i(\mathbf{a})]$ to both sides \begin{multline*} \mathbb{E}_{\mathbf{a}}[\sum_i u_i^{\theta_i}(\mathbf{a}) + \sum_i p_i(\mathbf{a})]\ge \mathbb{E}_{\mathbf{a}}[\sum_i u_i^{\theta_i}(a_i^*(\boldsymbol\theta, a_i), \mathbf{a}_{-i})] + \mathbb{E}_{\mathbf{a}}[\sum_i p_i(\mathbf{a})]\\ \ge \lambda\mbox{OPT}(\boldsymbol\theta) + (1 - \mu)\mathbb{E}_{\mathbf{a}}[\sum_i p_i(\mathbf{a})]\enspace. \end{multline*} The result follows by a case distinction over $\mu\le1$ and $\mu>1$. In the first case, we immediately get \[ \mathbb{E}_{\mathbf{a}}[\sum_i u_i^{\theta_i}(\mathbf{a}) + \sum_i p_i(\mathbf{a})] \ge \lambda\mbox{OPT}(\boldsymbol\theta) + (1 - \mu)\mathbb{E}_{\mathbf{a}}[\sum_i p_i(\mathbf{a})] \ge \lambda\mbox{OPT}(\boldsymbol\theta)\enspace, \] and in the latter case we use the fact that $\mathbb{E}_{\mathbf{a}}[\sum_i u_i^{\theta_i}(\mathbf{a})]\ge 0$. Then also \[ \mathbb{E}_{\mathbf{a}}[\sum_i u_i^{\theta_i}(\mathbf{a})] + \mathbb{E}_{\mathbf{a}}[\sum_i p_i(\mathbf{a})]\ge \mathbb{E}_{\mathbf{a}}[\sum_i p_i(\mathbf{a})]\enspace, \] which results in \[ \mathbb{E}_{\mathbf{a}}[\sum_i u_i^{\theta_i}(\mathbf{a}) + \sum_i p_i(\mathbf{a})] \ge \lambda\mbox{OPT}(\boldsymbol\theta) + (1 - \mu)\mathbb{E}_{\mathbf{a}}[\sum_i u_i^{\theta_i}(\mathbf{a}) + \sum_i p_i(\mathbf{a})] \] and finally \[\mathbb{E}_{\mathbf{a}}[\sum_i u_i^{\theta_i}(\mathbf{a}) + \sum_i p_i(\mathbf{a})] \ge \frac{\lambda}{\mu}\mbox{OPT}(\boldsymbol\theta)\enspace. \] \end{proof} \subsection{Bayesian Setting} \begin{proof} For reasons of clarity, we prove the claim for the simpler case of pure BNE.
First, we let each player $i$ sample a type profile $\boldsymbol\zeta\sim \times_i F_i$ and play $a_i^*((\theta_i, \boldsymbol\zeta_{-i}), s_i(\zeta_i))$. \begin{align*} \mathbb{E}_{\boldsymbol\theta}[u_i^{\theta_i}(s(\boldsymbol\theta))]&\geq \mathbb{E}_{\boldsymbol\theta,\boldsymbol\zeta}[u_i^{\theta_i}(a_i^*((\theta_i, \boldsymbol\zeta_{-i}), s_i(\zeta_i)), s_{-i}(\boldsymbol\theta_{-i}))]\\ &=\mathbb{E}_{\boldsymbol\theta,\boldsymbol\zeta}[u_i^{\zeta_i}(a_i^*((\zeta_i, \boldsymbol\zeta_{-i}), s_i(\theta_i)), s_{-i}(\boldsymbol\theta_{-i}))]\\ &=\mathbb{E}_{\boldsymbol\theta,\boldsymbol\zeta}[u_i^{\zeta_i}(a_i^*(\boldsymbol\zeta, s_i(\theta_i)), s_{-i}(\boldsymbol\theta_{-i}))] \end{align*} Summing over the players and using the smoothness property, we get \begin{multline*} \mathbb{E}_{\boldsymbol\theta}\Bigg[\sum_i u_i^{\theta_i}(s(\boldsymbol\theta))\Bigg] \geq \mathbb{E}_{\boldsymbol\theta, \boldsymbol\zeta}\Bigg[\sum_i u_i^{\zeta_i}(a_i^*(\boldsymbol\zeta, s_i(\theta_i)), s_{-i}(\boldsymbol\theta_{-i}))\Bigg]\\\geq\mathbb{E}_{\boldsymbol\theta, \boldsymbol\zeta}\Bigg[\lambda\mbox{OPT}(\boldsymbol\zeta)-\mu\sum_i p_i(s(\boldsymbol\theta))\Bigg]=\lambda\mathbb{E}_{\boldsymbol\theta}[\mbox{OPT}(\boldsymbol\theta)] - \mu\mathbb{E}_{\boldsymbol\theta}\Bigg[\sum_i p_i(s(\boldsymbol\theta))\Bigg]\enspace, \end{multline*} and therefore \[ \mathbb{E}_{\boldsymbol\theta}\Bigg[\sum_i u_i^{\theta_i}(s(\boldsymbol\theta)) + \sum_i p_i(s(\boldsymbol\theta))\Bigg]\geq \lambda\mathbb{E}_{\boldsymbol\theta}[\mbox{OPT}(\boldsymbol\theta)] + (1-\mu)\mathbb{E}_{\boldsymbol\theta}\Bigg[\sum_i p_i(s(\boldsymbol\theta))\Bigg]\enspace, \] from where the result follows by case distinction over $\mu$, as in the proof for the full information setting. The generalization to a mixed Bayes-Nash equilibrium is now straightforward. \end{proof} \section{Relaxation of the normalization assumption}\label{sec:normalization_relaxation} We assume the following \emph{relaxed normalized risk-averse utilities}: \begin{enumerate} \item \label{prop:monotonicity_r} $u_i^{v_i}(\mathbf{x}, p_i) \geq u_i^{v_i}(\mathbf{x}, p_i')$ if $p_i \leq p_i'$ (monotonicity) \item \label{prop:first_r} $u_i^{v_i}(\mathbf{x}, p_i)=0$ if $p_i=v_i(\mathbf{x})$ (normalization at $p_i=v_i(\mathbf{x})$) \item \label{prop:second_r} $u_i^{v_i}(\mathbf{x},p_i)=v_i(\mathbf{x})$ if $p_i=0$ (normalization at $p_i=0$) \item \label{prop:third_r} $u_i^{v_i}(\mathbf{x},p_i)\geq C\cdot(v_i(\mathbf{x})-p_i)$ if $0\leq p_i\leq v_i(\mathbf{x}), 0<C<1, C$ constant and $u_i^{v_i}(\mathbf{x},p_i)\leq v_i(\mathbf{x})-p_i$ otherwise (extra relaxed concavity) \end{enumerate} \begin{lemma}\label{lemma:smooth} If a mechanism is $(\lambda, \mu)$-smooth w.r.t. quasilinear utility functions $(\hat{u}_i^{v_i})_{i\in N, v_i\in\mathcal{V}_i}$ and the actions in the support of the smoothness deviations satisfy \begin{equation} \hat{u}_i(a_i^*, \mathbf{a}_{-i})\geq0, \forall \mathbf{a}_{-i}, \forall i \enspace, \end{equation} then the mechanism is $(C\cdot\lambda/2, C\cdot\mu)$-smooth with respect to any relaxed normalized risk-averse utility functions $(u_i^{v_i})_{i\in N, v_i\in\mathcal{V}_i}$. \end{lemma} \begin{proof} We start from an arbitrary action profile $\mathbf{a}$ and want to satisfy Definition~\ref{def:smoothness}. Since there exist smoothness deviations s.t. 
$\hat{u}_i(a_i^*, \mathbf{a}_{-i})=v_i(\mathbf{x}(a_i^*, \mathbf{a}_{-i}))-p_i\geq0, \forall \mathbf{a}_{-i}, \forall i$, we know from property~\ref{prop:third_r} of the relaxed risk aversion definition that $u_i^{v_i}(a_i^*, \mathbf{a}_{-i})\ge C\cdot\hat{u}_i^{v_i}(a_i^*, \mathbf{a}_{-i})$. Therefore, \begin{align*} \sum_i u_i^{v_i}(a_i^*, \mathbf{a}_{-i}) &\ge \sum_i C\cdot\hat{u}_i^{v_i}(a_i^*, \mathbf{a}_{-i})\ge C\cdot\lambda \widehat{\mbox{OPT}} - C\cdot\mu \sum_i p_i(\mathbf{a})\\ &\ge\frac{C\cdot\lambda}{2} \mbox{OPT} - C\cdot\mu \sum_i p_i(\mathbf{a})\enspace, \end{align*} where the last inequality follows from Lemma~\ref{lemma:OPT_relation}. \end{proof} Using Theorem~\ref{thm:smoothness_CCE_and_BNE}, we obtain the following theorem. \begin{theorem} If a mechanism is $(\lambda, \mu)$-smooth w.r.t. quasilinear utility functions $(\hat{u}_i^{v_i})_{i\in N, v_i\in\mathcal{V}_i}$ and the actions in the support of the smoothness deviations satisfy $\hat{u}_i(a_i^*, \mathbf{a}_{-i})\geq0, \forall \mathbf{a}_{-i}, \forall i$, then any Correlated Equilibrium in the full information setting and any Bayes-Nash Equilibrium in the Bayesian setting achieves efficiency at least $\frac{C\cdot\lambda}{2\cdot\max\{1, C\cdot\mu\}}$ of the expected optimal even in the presence of risk-averse bidders. \end{theorem} \section{Weak Smoothness}\label{sec:weak_smoothness} \subsection{Extension to General Utility Functions} For second-price auctions and their generalizations, for example, the theorems stated so far do not suffice to prove guarantees on the quality of equilibria. One additionally needs a no-overbidding assumption. To state this assumption, we first need the notion of willingness-to-pay that was originally defined in \cite{SyrgkanisT13}. \begin{definition}[Willingness-to-pay] Given a mechanism $(\mathcal{A}, X, P)$, a player's maximum willingness-to-pay for an allocation $\mathbf{x}$ when using strategy $a_i$ is defined as the maximum he could ever pay conditional on allocation $\mathbf{x}$: \[ W_i(a_i,\mathbf{x})=\max_{\mathbf{a}_{-i}:X(\mathbf{a})=\mathbf{x}} p_i(\mathbf{a})\enspace. \] \end{definition} Now, we can state weak smoothness. \begin{definition}[Weakly Smooth Mechanism]\label{def:weaksmoothness} A mechanism $M$ is weakly $(\lambda, \mu_1, \mu_2)$-smooth with respect to utility functions $(u_i^{\theta_i})_{\theta_i\in\Theta_i, i\in N}$ for $\lambda, \mu_1, \mu_2 \ge 0$, if for any type profile $\boldsymbol\theta \in \times_i \Theta_i$ and for any action profile $\mathbf{a}$ there exists a randomized action $a^*_i(\boldsymbol\theta, a_{i})$ for each player $i$, s.t.: \begin{equation*}\label{eq:weaksmoothness} \sum_i u_i^{\theta_i}(a^*_i(\boldsymbol\theta, a_{i}), \mathbf{a}_{-i}) \ge \lambda \mbox{OPT}(\boldsymbol\theta) - \mu_1 \sum_i p_i(\mathbf{a}) - \mu_2 \sum_i W_i(a_i, X(\mathbf{a}))\enspace. \end{equation*} We denote by $u_i^{\theta_i}(\mathbf{a})$ the expected utility of a player if $\mathbf{a}$ is a vector of randomized strategies. \end{definition} Note that $(\lambda, \mu)$-smoothness implies weak $(\lambda, \mu, 0)$-smoothness. We get the following generalization of the price-of-anarchy guarantees for equilibria that fulfill the aforementioned no-overbidding assumption on the players' willingness-to-pay: \begin{theorem}\label{thm:weak_smoothness} If a mechanism is weakly $(\lambda, \mu_1, \mu_2)$-smooth w.r.t.
utility functions $(u_i^{\theta_i})_{\theta_i\in\Theta_i, i\in N}$, then any Correlated Equilibrium in the full information setting and any Bayes-Nash Equilibrium in the Bayesian setting that satisfies \begin{equation} \mathbb{E}_{\mathbf{a}}[W_i( a_i, X( \mathbf{a}))] \leq \mathbb{E}_{\mathbf{a}}[u_i^{\theta_i}( \mathbf{a}) + p_i( \mathbf{a})] \label{eq:generic_no-overbidding} \end{equation} achieves efficiency of at least $\frac{\lambda}{(\mu_2 + \max\{\mu_1,1\})}$ of $\mbox{OPT}(\boldsymbol\theta)$ or of $\mathbb{E}_{\boldsymbol\theta}[\mbox{OPT}(\boldsymbol\theta)]$, respectively. \end{theorem} In the quasilinear setting, \eqref{eq:generic_no-overbidding} simplifies to the no-overbidding assumption $\mathbb{E}_{\mathbf{a}}[W_i(a_i, X(\mathbf{a}))]\le \mathbb{E}_{\mathbf{a}}[v_i(X(\mathbf{a}))]$ that was introduced in \cite{SyrgkanisT13}, and that is a generalization of the no-overbidding assumptions previously used in the literature~\cite{ChristodoulouKS08,BhawalkarR11,CaragiannisKKKLLT15}. That is, players cannot pay more than their respective value, regardless of the other players' actions. \begin{proof} For the complete information setting, we show that \[ \sum_i u_i^{\theta_i} (\mathbf{a})\geq \lambda\cdot \widehat{\mbox{OPT}} - \mu_1\sum_i p_i(\mathbf{a}) - \mu_2 \sum_i W_i(a_i, X(\mathbf{a}))\enspace. \] Using~\eqref{eq:generic_no-overbidding}, \[ (1+\mu_2)\Bigg[\sum_i u_i^{\theta_i}(\mathbf{a}) + \sum_i p_i(\mathbf{a})\Bigg]\ge \lambda \widehat{\mbox{OPT}} - (\mu_1-1)\sum_i p_i(\mathbf{a})\enspace. \] By doing a case distinction over $\mu_1\le1$ and $\mu_1>1$, we get the claimed result. For the incomplete information setting, we arrive at \begin{multline*} \mathbb{E}_{\boldsymbol\theta}\Bigg[\sum_i u_i^{\theta_i}(s(\boldsymbol\theta))\Bigg]\geq \lambda\mathbb{E}_{\boldsymbol\theta}[\widehat{\mbox{OPT}}] - \mu_1\mathbb{E}_{\boldsymbol\theta}\Bigg[\sum_i p_i(s(\boldsymbol\theta))\Bigg]\\ -\mu_2 \mathbb{E}_{\boldsymbol\theta}\Bigg[\sum_i W_i(s_i(\theta_i), X(s(\boldsymbol\theta)))\Bigg]\enspace. \end{multline*} The result now follows by using the no-overbidding assumption~\eqref{eq:generic_no-overbidding} and a case distinction, as in the full information case. \end{proof} In a \emph{second-price auction}, the winner has to pay the second-highest bid, while the other players do not pay anything. In the quasilinear setting it is weakly $(1, 0, 1)$-smooth. \subsection{Risk-Averse Utilities} We will assume the following pointwise condition: \begin{definition}[Pointwise No-Overbidding] A randomized strategy profile $\mathbf{a}$ satisfies the pointwise no-overbidding assumption if for every player $i$ and every action in the support of $\mathbf{a}$ the following holds: \[ W_i(a_i, \mathbf{x}):= \max_{\mathbf{a}_{-i}:X(\mathbf{a})=\mathbf{x}} p_i(\mathbf{a}) \le v_i(\mathbf{x})\enspace, \] i.e., no player is bidding in a way that she could potentially pay more than her value, subject to her allocation remaining the same.
\end{definition} \begin{theorem}\label{thm:weak_smoothness_risk} If a mechanism is weakly $(\lambda, \mu_1, \mu_2)$-smooth with respect to quasilinear utility functions $(\hat{u}_i^{v_i})_{i\in N, v_i\in\mathcal{V}_i}$ and the actions in the support of the smoothness deviations satisfy $\hat{u}_i(a_i^*, \mathbf{a}_{-i})\geq0, \forall \mathbf{a}_{-i}, \forall i$, then any Correlated Equilibrium in the full information setting and any Bayes-Nash Equilibrium in the Bayesian setting that satisfies the pointwise no-overbidding assumption achieves efficiency at least $\frac{\lambda}{2\cdot(\mu_2 + \max\{\mu_1,1\})}$ of the expected optimal even in the presence of risk-averse bidders. \end{theorem} \begin{proof} First, we show that weak smoothness with respect to quasilinear utility functions with the additional constraint that players have non-negative utility from the smoothness deviation implies weak smoothness with respect to risk-averse players. \begin{lemma}\label{lemma:weakly_smooth} If a mechanism is weakly $(\lambda, \mu_1, \mu_2)$-smooth with respect to quasilinear utility functions $(\hat{u}_i^{v_i})_{i\in N, v_i\in\mathcal{V}_i}$ and the actions in the support of the smoothness deviations satisfy $\hat{u}_i(a_i^*, \mathbf{a}_{-i})\geq0, \forall \mathbf{a}_{-i}, \forall i$, then the mechanism is weakly $(\lambda/2, \mu_1, \mu_2)$-smooth with respect to risk-averse utility functions $(u_i^{v_i})_{i\in N, v_i\in\mathcal{V}_i}$. \end{lemma} \begin{proof} We start from an arbitrary action profile $\mathbf{a}$ and want to satisfy Definition~\ref{def:smoothness}. Since there exist smoothness deviations s.t. $\hat{u}_i(a_i^*, \mathbf{a}_{-i})=v_i(\mathbf{x}(a_i^*, \mathbf{a}_{-i}))-p_i\geq0, \forall \mathbf{a}_{-i}, \forall i$, we know from property~\ref{prop:third} of the risk aversion definition that $u_i^{v_i}(a_i^*, \mathbf{a}_{-i})\ge \hat{u}_i^{v_i}(a_i^*, \mathbf{a}_{-i})$. Therefore, \begin{align*} \sum_i u_i^{v_i}(a_i^*, \mathbf{a}_{-i}) &\ge \sum_i \hat{u}_i^{v_i}(a_i^*, \mathbf{a}_{-i})\ge \lambda \widehat{\mbox{OPT}} - \mu_1 \sum_i p_i(\mathbf{a}) - \mu_2 \sum_i W_i(a_i, X(\mathbf{a}))\\ &\ge\frac{\lambda}{2} \mbox{OPT} - \mu_1 \sum_i p_i(\mathbf{a})- \mu_2 \sum_i W_i(a_i, X(\mathbf{a}))\enspace, \end{align*} where the last inequality follows from Lemma~\ref{lemma:OPT_relation}. \end{proof} Next, we will show that pointwise no-overbidding indeed implies the no-overbidding assumption~\eqref{eq:generic_no-overbidding}: Using the pointwise no-overbidding assumption $v_i(\mathbf{x})\ge p_i$, we know that $u_i^{v_i}(\mathbf{x},p_i)\ge v_i(\mathbf{x})-p_i$. From here, $W_i(a_i, \mathbf{x}) \le v_i(\mathbf{x})\le u_i^{v_i}(\mathbf{a})+p_i(\mathbf{a})$, so we can conclude that \[ \mathbb{E}_{\mathbf{a}}[W_i(a_i, X(\mathbf{a}))]\le \mathbb{E}_{\mathbf{a}}[u_i^{v_i}(\mathbf{a}) + p_i(\mathbf{a})]\enspace. \] Theorem~\ref{thm:weak_smoothness} now completes the proof. \end{proof} Using that the second-price auction is weakly $(1, 0, 1)$-smooth with respect to quasilinear utilities, we immediately get that its price of anarchy is also constant in the risk-averse setting. \begin{corollary} Under normalized risk-averse utilities, the second-price auction has a constant price of anarchy for correlated and Bayes-Nash equilibria with pointwise no-overbidding. \end{corollary} \section{Budget Constraints}\label{sec:budgets} The techniques and results so far have striking similarities to settings with budget constraints, where players do not have quasilinear preferences even in the risk-neutral case.
As it turns out, under very mild additional assumptions, we can also add (a generalized form of) hard budget constraints to our consideration. We now assume that types are pairs $\theta_i = (v_i, B_i)$, where $B_i\colon \mathcal{X}_i\to \mathbb{R}^+$ is an outcome-dependent budget function. Depending on which outcome is achieved, the agent may have different amounts of liquidity. We assume that there is a normalized risk-averse utility function $u_i^{v_i}$ such that for a player of type $\theta_i = (v_i, B_i)$ \[ u_i^{\theta_i}(\mathbf{x}, p_i) = \begin{cases} u_i^{v_i}(\mathbf{x}, p_i) & \text{ if $p_i \leq B_i(\mathbf{x})$} \\ - \infty & \text{ otherwise} \end{cases} \enspace. \] In the budgeted setting, one cannot hope to achieve full welfare. This is due to low-budget participants not being able to maximize their contribution. Therefore, we will replace $\mbox{OPT}(\boldsymbol\theta)$ in the price-of-anarchy and smoothness definition by the optimal \emph{effective} or \emph{liquid} welfare, given as $\max_{\mathbf{x}, \mathbf{p}} \sum_i \min\{u_i^{\theta_i}(\mathbf{x}, p_i) + p_i, B_i(\mathbf{x})\}$. This benchmark, introduced in~\cite{DobzinskiL14}, reflects that players with low budgets cannot be expected to be effective at maximizing their own value. The effect of budgets on efficiency in the risk-neutral case was already studied in~\cite{SyrgkanisT13}, where the authors, in order to be able to prove efficiency bounds, introduced the notion of a \emph{conservatively smooth mechanism} that has the following additional assumption on the smoothness deviations: \begin{equation}\label{eq:conservatively_smooth} \max_{\mathbf{a}_{-i}} p_i(a_i^*(\mathbf{v}, a_i), \mathbf{a}_{-i})\le \max_{\mathbf{x}} v_i(\mathbf{x})\enspace. \end{equation} Conservatively smooth mechanisms are then shown to allow the budgeted scenario without any further loss of efficiency. Note that~(\ref{eq:conservatively_smooth}) is a weaker assumption than Condition~(\ref{eq:nonnegutility}), which we require. Therefore, we can easily extend our results for risk-averse bidders to the budgeted setting. Our main result is that if the type space is chosen in a way that taking the pointwise minimum of a valuation function and a budget function yields again a feasible valuation function, meaning that we stay within the ``permitted'' valuation space when applying the budget constraints, then the price-of-anarchy guarantee is again preserved. The valuation space being closed under capping is a crucial requirement both for our result and the result in~\cite{SyrgkanisT13}. \begin{theorem}\label{thm:budgets_risk} If a mechanism is $(\lambda, \mu)$-smooth w.r.t. quasilinear utility functions $(\hat{u}_i^{v_i})_{i\in N, v_i\in\mathcal{V}_i}$, its valuation space is closed under capping with budget functions, and the actions in the support of the smoothness deviations satisfy $\hat{u}_i(a_i^*, \mathbf{a}_{-i})\geq0, \forall \mathbf{a}_{-i}, \forall i$, then the social welfare at any Correlated Equilibrium and at any Bayes-Nash Equilibrium is at least $\frac{\lambda}{2\cdot\max\{1,\mu\}}$ of the expected maximum effective welfare even in the presence of risk-averse bidders. \end{theorem} As before, we prove a lemma connecting smoothness with respect to quasilinear utilities to smoothness with respect to risk-averse ones. \begin{lemma}\label{lemma:budgets} If a mechanism is $(\lambda, \mu)$-smooth w.r.t.
quasilinear utility functions $(\hat{u}_i^{v_i})_{i\in N, v_i\in\mathcal{V}_i}$, its valuation space is closed under capping with the budget functions, and the actions in the support of the smoothness deviations satisfy $\hat{u}_i(a_i^*, \mathbf{a}_{-i})\geq0, \forall \mathbf{a}_{-i}, \forall i$, then the mechanism is $(\lambda/2, \mu)$-smooth with respect to risk-averse budgeted utility functions $(u_i^{\theta_i})_{\theta_i\in\Theta_i, i\in N}$. \end{lemma} \begin{proof} We start from an arbitrary action profile $\mathbf{a}$ and keep in mind that the risk-averse budgeted utility function $u_i^{\theta_i}$ has type $\theta_i=(v_i, B_i)$. By $\hat{u}^{\bar{v}_i}$, we denote the quasilinear utility of player $i$ with the capped valuation function $\bar{v}_i$. Formally, \[ \hat{u}_i^{\bar{v}_i}(\mathbf{x},p_i)=\bar{v}_i(\mathbf{x})-p_i=\min\{v_i(\mathbf{x}), B_i(\mathbf{x})\}-p_i\enspace. \] Since the valuation space is closed under capping with the budget function, we can find smoothness deviations $a_i^*(\bar{\mathbf{v}}, a_i)$ s.t. $\hat{u}_i^{\bar{v}_i}(X(a_i^*, \mathbf{a}_{-i}),p_i(a_i^*, \mathbf{a}_{-i}))=\bar{v}_i(X(a_i^*, \mathbf{a}_{-i}))-p_i(a_i^*, \mathbf{a}_{-i})\geq0, \forall \mathbf{a}_{-i}$ and therefore $u_i^{\bar{v}_i}(a_i^*, \mathbf{a}_{-i})\ge \hat{u}_i^{\bar{v}_i}(a_i^*, \mathbf{a}_{-i})$. It follows that \begin{align*} \sum_i u_i^{\theta_i}(a_i^*(\bar{\mathbf{v}}, a_i), \mathbf{a}_{-i})&= \sum_i u_i^{v_i}(a_i^*(\bar{\mathbf{v}}, a_i), \mathbf{a}_{-i}) \ge \sum_i u_i^{\bar{v}_i}(a_i^*, \mathbf{a}_{-i}) \ge \sum_i \hat{u}_i^{\bar{v}_i}(a_i^*, \mathbf{a}_{-i})\\ &\ge \lambda\cdot \widehat{\mbox{OPT}}_{\bar{\mathbf{v}}} - \mu\cdot\sum_i p_i(\mathbf{a})\ge\frac{\lambda}{2}\cdot \mbox{OPT}_{\bar{\mathbf{v}}} - \mu\cdot\sum_i p_i(\mathbf{a}) \enspace, \end{align*} where the first equality holds because the deviations are such that the payments are below the budgets, the first inequality because \[ u_i^{v_i}(\mathbf{a})=h\left(v_i(\mathbf{x}(\mathbf{a}))-p_i(\mathbf{a})\right)\ge h(\min\{v_i(\mathbf{x}(\mathbf{a})), B_i\}-p_i(\mathbf{a})) = u_i^{\bar{v}_i}(\mathbf{a}), \forall \mathbf{a}\enspace, \] and the third because the valuation space is closed under capping. \end{proof} The generality of Theorem~\ref{thm:smoothness_CCE_and_BNE} allows us to now obtain Theorem~\ref{thm:budgets_risk}. Note that $\mbox{OPT}_{\bar{\mathbf{v}}}$, where $\bar{\mathbf{v}}$ is the vector of capped valuation functions, indeed aligns correctly with the effective welfare benchmark. \section{Missing Details from Section~\ref{sec:negative-results}}\label{sec:calculations} \subsection{Calculations}\label{subsec:calculations} \begin{align*} \mathbb{E}[u_3(b_3', \mathbf{b}_{-3})] &\leq (1 - \varepsilon)\cdot(\frac{1}{3}\cdot\ln M/2 - b_3') + \varepsilon\cdot(- (16\cdot\frac{1}{3}\cdot\ln M/2 -1)\cdot M^2 \cdot b_3')\\ &< \frac{1}{3}\cdot\ln M/2 - b_3' - \frac{1}{M^2}\cdot \big(16\cdot\frac{1}{3}\cdot\ln M/2 -1\big)\cdot M^2 \cdot b_3'\\ &=\frac{1}{3}\cdot\ln M/2 - b_3' (1 + 16\cdot\frac{1}{3}\cdot\ln M/2 -1)\\ &< \frac{1}{3}\ln M/2 - \frac{1}{16}\cdot 16\cdot\frac{1}{3}\ln M/2 = 0\enspace.
\end{align*} \begin{align*}\label{eq:bid_estimate} \beta(v_1)&\ge \int_{1/2}^{v_1} \frac{2(1-\frac{M-1}{M^2})(e^t-1)}{2(t-\frac{1}{2})(1-\frac{M-1}{M^2})+ (1-2(t-\frac{1}{2})(1-\frac{M-1}{M^2}))\cdot e^t} dt\\ &>\int_{1/2}^{v_1}\frac{\frac{3}{2}(e^t-1)}{2t-1 + 2\cdot e^t} dt=\frac{3}{4}\int_{1/2}^{v_1}\frac{e^t-1}{e^t+t-\frac{1}{2}}dt\\ &\ge \frac{3}{4}\int_{1/2}^{v_1} \left(1-\frac{1}{\sqrt{e}}\right) dt=\frac{3}{4}\left(1-\frac{1}{\sqrt{e}}\right)\left( v_1 - \frac{1}{2} \right) >\frac{1}{4} \left( v_1 - \frac{1}{2} \right) \enspace. \end{align*} \begin{align*} \mathbb{E}[u_3(b_3', \mathbf{b}_{-3})] &\le\Pr[\beta(v_1) \leq b_3']\cdot(v_3-b_3')-\Pr[\beta(v_1) > b_3']\cdot 32\cdot v_3\cdot b_3' \\ & <2\cdot 4b_3'(v_3-b_3')-(1-2\cdot 4b_3')\cdot 32\cdot v_3\cdot b_3'\\ &=8b_3'v_3-8(b_3')^2-32b_3'v_3+8\cdot32v_3(b_3')^2 \\ &=8b_3'\big(-3v_3+b_3'\big(32v_3-1\big)\big)\le8b_3'\left(-3v_3 + 2v_3 - \frac{1}{16}\right)\\ &=8b_3'\left(-v_3-\frac{1}{16}\right)<0\enspace. \end{align*} \subsection{All-Pay Auction with Limited Risk-Aversion}\label{sec:bounded_slope} \begin{theorem} In an all-pay auction with risk-averse players whose utilities are of the form $h(v_i(\mathbf{x})-p_i)$, where $h$ is a concave function s.t. $h(x)=C\cdot x$ for $x<0$, $C\ge1$ constant, the Price of Anarchy is at most $4(C+1)$. \end{theorem} \begin{proof} We use the following smoothness deviation: the highest-value player with value $v_h$ deviates to $\frac{1}{2}v_h$ and everybody else to $0$. Now, it is easy to see that the following inequality holds independently of whether the highest-value player obtains the item or not \[ u_h^{v_h}(\frac{v_h}{2}, \mathbf{a}_{-h})\ge \frac{1}{2}v_h - (C+1)\max_{i\neq h}a_i\ge \frac{1}{2}\widehat{\mbox{OPT}} - (C+1)\sum_i a_i\enspace, \] so then \[ \sum_i u_i^{v_i}(a_i^*, \mathbf{a}_{-i}) \ge \frac{1}{2}\widehat{\mbox{OPT}} - (C+1)\sum_i p_i(\mathbf{a})\ge \frac{1}{4}\mbox{OPT} - (C+1)\sum_i p_i(\mathbf{a})\enspace. \] The claim follows by applying Theorem~\ref{thm:smoothness_CCE_and_BNE}. \end{proof} \subsection{Symmetric BNE of All-Pay Auction in the Quasilinear Setting}\label{sec:quasilinear_allpay} \begin{claim} In a symmetric BNE of the all-pay auction in the quasilinear setting, all bids are bounded by the expected value of a player. \end{claim} \begin{proof} Due to symmetry, it is enough to argue about the first player. Let $\beta$ denote the equilibrium bidding function. We fix player 1's value $v_1=x$ and consider his expected utility for bidding $y$: \begin{align*} \mathbb{E}[u_1^{x}(b_1=y, b_2=\beta(v_2))] &= \Pr[\beta(v_2)<y]\cdot(x-y) + \Pr[\beta(v_2)\ge y]\cdot (-y)\\ &= \Pr[\beta(v_2)<y]\cdot x - y = F(\beta^{-1}(y))\cdot x -y\enspace. \end{align*} By taking the derivative and setting it to zero, we arrive at \[ \frac{f(\beta^{-1}(y))}{\beta'(\beta^{-1}(y))}\cdot x -1 =0\enspace, \] which, at the equilibrium bid $y=\beta(x)$, yields \[ \beta'(x)=x\cdot f(x)\enspace. \] Now it is obvious that \[ \beta(x)=\int_0^x t\cdot f(t)\, dt \le \mathbb{E}[v_1]\enspace. \] \end{proof} \section{Proof of Observation~\ref{lemma:observation}}\label{sec:observation} \begin{proof} Consider two bidders that both have a valuation of $1$. If they both bid $1$ with probability $\frac{1}{2}$ and $0$ with the remaining probability, but in a correlated manner such that exactly one player submits a non-zero bid at a time, then they are in an equilibrium.
Let us now calculate the utilities: \[ u_i(\mathbf{b}) = \mathbb{E}_{\mathbf{a}\sim\mathbf{b}}[\hat{u}_i(\mathbf{a})] - \sqrt{\mathbb{E}[\hat{u}_i^2(\mathbf{a})] - (\mathbb{E}[\hat{u}_i(\mathbf{a})])^2} = \frac{1}{2} - \sqrt{\frac{1}{2} - \Big(\frac{1}{2}\Big)^2} = \frac{1}{2}-\frac{1}{2}=0\enspace. \] Since the payments are also $0$, the social welfare in this equilibrium is $0$, meaning that the price of anarchy is unbounded. \end{proof} \end{document}
\begin{document} \author{Daniel Pellegrino and Joedson Santos} \address[D. Pellegrino]{ Departamento de Matem\'{a}tica, Universidade Federal da Paraíba, 58038-310 - Jo\~{a}o Pessoa, Paraíba, Brazil\\ [J. Santos] {Departamento de Matem\'{a}tica, Universidade Federal de Sergipe, 49.500-000 - Itabaiana, Sergipe, Brazil.}} \title[Absolutely summing multilinear operators: a panorama]{Absolutely summing multilinear operators: a panorama} \keywords{Absolutely summing operators; multiple summing multilinear operators; strongly multiple summing multilinear operators; multi-ideals} \thanks{2010 Mathematics Subject Classification: 46G25, 47B10, 47L20, 47L22.} \thanks{Daniel Pellegrino is supported by CNPq (Edital Casadinho) Grant 620108/2008-8} \begin{abstract} This paper has a twofold purpose: to present an overview of the theory of absolutely summing operators and its different generalizations to the multilinear setting, and to sketch the beginning of a research project related to an objective search for \textquotedblleft perfect\textquotedblright\ multilinear extensions of the ideal of absolutely summing operators. The final section contains some open problems that may indicate lines for future investigation. \end{abstract} \maketitle \section{Introduction} Absolutely summing multilinear operators and homogeneous polynomials between Banach spaces were first conceived by A. Pietsch \cite{PPPP, pp22} in the eighties. Pietsch's work and R. Alencar and M.C. Matos' research report \cite{AlencarMatos} are usually quoted as the precursors of the now well-known nonlinear theory of absolutely summing operators. In the last decade this topic of investigation attracted the attention of many authors, and various different concepts related to the summability of nonlinear operators were introduced; this line of research, besides its intrinsic interest, highlighted abstract questions in the mainstream of the theory of multi-ideals, which contributed to the revitalization of the general interest in questions related to ideals of polynomials and multilinear operators (see \cite{note, botstudia, indagationes, muro, cds}). This paper has a twofold purpose: to summarize/organize some of the information obtained in the last years concerning the different multilinear generalizations of absolutely summing operators; and to sketch a research project directed to the investigation of the existence of multilinear ideals (related to the ideal of absolutely summing operators) satisfying a list of properties which we consider natural. We define the notion of maximal and minimal ideals satisfying some given properties and obtain existence results, using Zorn's Lemma. We also discuss qualitative results, posing some questions on the concrete nature of the maximal and minimal ideals. Neither of our goals is intended to be exhaustive: the overview of the multilinear theory of absolutely summing operators concentrates on selected properties and has no encyclopedic character. Moreover, our approach to the existence of multi-ideals satisfying some given properties is, of course, focused on those selected properties. \section{Absolutely summing operators: an overview} A. Dvoretzky and C. A. Rogers \cite{DR}, in 1950, solved a long-standing problem in Banach Space Theory, by showing that in every infinite-dimensional Banach space there is an unconditionally convergent series which fails to be absolutely convergent. This result answers Problem 122 of the Scottish Book \cite{Mau} (the problem was raised by S.
Banach in \cite[page 40]{Banach32}). This result attracted the interest of A. Grothendieck who, in \cite{Gro1955}, presented a different proof of the Dvoretzky-Rogers Theorem. Grothendieck's \textquotedblleft R\'{e}sum\'{e} de la th\'{e}orie m\'{e}trique des produits tensoriels topologiques\textquotedblright\ together with his thesis may be regarded, in some sense, as the birthplace of the theory of operator ideals. The concept of absolutely $p$-summing linear operators is due to A. Pietsch \cite{stu} and the notion of $(q,p)$-summing operator is due to B. Mitiagin and A. Pe\l czy\'{n}ski \cite{MPel}. Another cornerstone in the theory is J. Lindenstrauss and A. Pe\l czy\'{n}ski's paper \cite{lp}, which translated Grothendieck's ideas to a universal language and showed the intrinsic beauty of the theory and the richness of its possible applications. From now on the space of all continuous linear operators from a Banach space $E$ to a Banach space $F$ will be denoted by $\mathcal{L}(E,F).$ Let \[ \ell_{p}^{\text{weak}}(E):=\left\{ (x_{j})_{j=1}^{\infty}\subset E:\left\Vert (x_{j})_{j=1}^{\infty}\right\Vert _{w,p}:=\sup_{\varphi\in B_{E^{\ast}} }\left( {\displaystyle\sum\limits_{j=1}^{\infty}} \left\vert \varphi(x_{j})\right\vert ^{p}\right) ^{1/p}<\infty\right\} \] and \[ \ell_{p}(E):=\left\{ (x_{j})_{j=1}^{\infty}\subset E:\left\Vert (x_{j} )_{j=1}^{\infty}\right\Vert _{p}:=\left( {\displaystyle\sum\limits_{j=1}^{\infty}} \left\Vert x_{j}\right\Vert ^{p}\right) ^{1/p}<\infty\right\} . \] If $1\leq p\leq q<\infty,$ we say that a continuous linear operator $u:E\rightarrow F$ is $(q,p)$-summing if $\left( u(x_{j})\right) _{j=1}^{\infty}\in\ell_{q}(F)$ whenever $(x_{j})_{j=1}^{\infty}\in\ell _{p}^{\text{weak}}(E)$. The class of absolutely $(q,p)$-summing linear operators from $E$ to $F$ will be represented by $\Pi_{q,p}\left( E,F\right) $, or by $\Pi_{p}\left( E,F\right) $ if $p=q$ (in this case $u\in\Pi_{p}\left( E,F\right) $ is said to be absolutely $p$-summing). An equivalent formulation asserts that $u:E\rightarrow F$ is $(q,p)$-summing if there is a constant $C\geq0$ such that \[ \left( {\displaystyle\sum\limits_{j=1}^{\infty}} \left\Vert u(x_{j})\right\Vert ^{q}\right) ^{1/q}\leq C\left\Vert (x_{j})_{j=1}^{\infty}\right\Vert _{w,p} \] for all $(x_{j})_{j=1}^{\infty}\in\ell_{p}^{\text{weak}}(E)$. The above inequality can also be replaced by: there is a constant $C\geq0$ such that \[ \left( {\displaystyle\sum\limits_{j=1}^{m}} \left\Vert u(x_{j})\right\Vert ^{q}\right) ^{1/q}\leq C\left\Vert (x_{j})_{j=1}^{m}\right\Vert _{w,p} \] for all $x_{1},...,x_{m}\in E$ and all positive integers $m$. The infimum of all $C$ that satisfy the above inequalities defines a norm, denoted by $\pi_{q,p}(u)$ (or $\pi_{p}(u)$ if $p=q$), and $\left( \Pi _{q,p}\left( E,F\right) ,\pi_{q,p}\right) $ is a Banach space. From now on, if $1<p<\infty,$ the conjugate of $p$ is denoted by $p^{\ast},$ i.e., $\frac{1}{p}+\frac{1}{p^{\ast}}=1.$ For a full panorama of the linear theory of absolutely summing operators we refer to the classical book \cite{djt}. Here we restrict ourselves to five pillars of the theory: the Dvoretzky-Rogers Theorem, Grothendieck's Inequality, the Grothendieck (Lindenstrauss-Pe\l czy\'{n}ski) $\ell_{1}$-$\ell_{2}$ Theorem, the Lindenstrauss-Pe\l czy\'{n}ski Theorem (on the converse of the Grothendieck $\ell_{1}$-$\ell_{2}$ Theorem) and the Pietsch Domination Theorem.
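For illustration, a simple and standard example already shows the difference between weak and strong summability that the above definitions encode: in the Hilbert space $\ell_{2}$, the sequence $\left( e_{j}\right) _{j=1}^{\infty}$ of canonical unit vectors satisfies \[ \left\Vert \left( e_{j}\right) _{j=1}^{\infty}\right\Vert _{w,2}=\sup_{\varphi\in B_{\ell_{2}^{\ast}}}\left( \sum\limits_{j=1}^{\infty}\left\vert \varphi(e_{j})\right\vert ^{2}\right) ^{1/2}=1\text{, while }\sum\limits_{j=1}^{\infty}\left\Vert e_{j}\right\Vert ^{2}=\infty, \] so $\left( e_{j}\right) _{j=1}^{\infty}\in\ell_{2}^{\text{weak}}\left( \ell_{2}\right) \setminus\ell_{2}\left( \ell_{2}\right) $ and, in particular, $id_{\ell_{2}}$ is not absolutely $2$-summing. This is a particular instance of the Dvoretzky-Rogers Theorem stated below.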
The Dvoretzky-Rogers Theorem can be stated in the context of absolutely summing operators as follows: \begin{theorem} [Dvoretzky-Rogers Theorem, 1950]If $p\geq1,$ then $\Pi_{p}(E;E)=\mathcal{L} (E;E)$ if and only if $\dim E<\infty.$ \end{theorem} In view of the above result it is natural to ask for the existence of some $p$ and infinite-dimensional Banach spaces $E$ and $F$ for which $\Pi _{p}(E;F)=\mathcal{L}(E;F).$ This question will be answered by Theorem \ref{kyp} and Theorem \ref{uyy} below. The fundamental tool of the theory is Grothendieck's Inequality (the formulation below is due to Lindenstrauss and Pe\l czy\'{n}ski \cite{lp}). We omit the proof, but several different proofs can be easily found in the literature: \begin{theorem} [Grothendieck's Inequality (version of Lindenstrauss and Pe\l czy\'{n}ski), 1968]\label{dggg} There is a positive constant $K_{G}$ so that, for every Hilbert space $H$, all $n\in\mathbb{N}$, every matrix $\left( a_{ij}\right) _{n\times n}$ and any $x_{1},...,x_{n},$ $y_{1},...,y_{n}$ in the unit ball of $H$, the following inequality holds: \begin{equation} \left\vert \sum_{i,j=1}^{n}a_{ij}\langle x_{i},y_{j}\rangle\right\vert \leq K_{G}\sup\left\{ \left\vert \sum_{i,j=1}^{n}a_{ij}s_{i}t_{j}\right\vert :\left\vert s_{i}\right\vert ,\left\vert t_{j}\right\vert \leq1\right\} . \label{29.5} \end{equation} \end{theorem} A consequence of Grothendieck's Inequality is that every continuous linear operator from $\ell_{1}$ to $\ell_{2}$ is absolutely $1$-summing. This result was stated by Lindenstrauss-Pe\l czy\'{n}ski \cite{lp} and is, in some sense, contained in Grothendieck's R\'{e}sum\'{e}. This type of result is what is now referred to as a \textquotedblleft coincidence theorem\textquotedblright, i.e., a situation where there are Banach spaces $E$ and $F$ and real numbers $1\leq p,q<\infty$ so that \[ \Pi_{q,p}(E,F)=\mathcal{L}(E,F). \] The same terminology will be used for multilinear mappings. We sketch here one of the most elementary proofs of the Grothendieck (Lindenstrauss-Pe\l czy\'{n}ski) Theorem; the crucial role played by Grothendieck's Inequality is easily seen. \begin{theorem} [Grothendieck's $\ell_{1}$-$\ell_{2}$ Theorem (version of Lindenstrauss, Pe\l czy\'{n}ski), 1968]\label{kyp}Every continuous linear operator from $\ell_{1}$ to $\ell_{2}$ is absolutely $1$-summing. \end{theorem} \begin{proof} Let $\left( T_{n}\right) _{n=1}^{\infty}$ be the sequence of canonical projections, i.e., \begin{align*} T_{n} & :\ell_{1}\longrightarrow\ell_{1}\\ x & =\sum_{j=1}^{\infty}a_{j}e_{j}\mapsto T_{n}\left( x\right) =\sum _{j=1}^{n}a_{j}e_{j}. \end{align*} Let $\left( x_{k}\right) _{k=1}^{\infty}\in\ell_{1}^{\text{weak}}\left( \ell_{1}\right) $ with \[ \left\Vert \left( x_{k}\right) _{k=1}^{\infty}\right\Vert _{w,1} =\sup_{\varphi\in B_{\ell_{1}^{\ast}}}\sum_{k=1}^{\infty}\left\vert \varphi\left( x_{k}\right) \right\vert \leq1. \] One can easily verify that \[ \left\Vert \left( T_{n}x_{k}\right) _{k=1}^{\infty}\right\Vert _{w,1} =\sup_{\varphi\in B_{\ell_{1}^{\ast}}}\sum_{k=1}^{\infty}\left\vert \varphi\left( T_{n}x_{k}\right) \right\vert \leq1. \] Denoting \[ x_{k}=\sum_{j=1}^{\infty}a_{jk}e_{j}\text{ and }T_{n}x_{k}=\sum_{j=1} ^{n}a_{jk}e_{j}, \] for each $n,k,$ we can verify that for any positive integers $m\leq n$ and $\left( s_{j}\right) _{j=1}^{n},\left( t_{k}\right) _{k=1}^{m}\subset B_{\mathbb{K}}$, we have \[ \left\vert \sum_{j=1}^{n}\sum_{k=1}^{m}a_{jk}s_{j}t_{k}\right\vert \leq1.
\] Now, let $T\in\mathcal{L}\left( \ell_{1},\ell_{2}\right) $ and $m,n\in\mathbb{N}$, with $n\geq m.$ For each $k,$ $1\leq k\leq m,$ from the Hahn-Banach Theorem and the Riesz Representation Theorem there is a $y_{k}\in \ell_{2},$ with $\left\Vert y_{k}\right\Vert _{2}\leq1,$ so that \[ \left\Vert TT_{n}x_{k}\right\Vert _{2}=\langle TT_{n}x_{k},y_{k}\rangle. \] If $m<n$, we can choose $y_{m+1}=\cdots=y_{n}=0.$ Hence \[ \sum_{k=1}^{m}\left\Vert TT_{n}x_{k}\right\Vert _{2}=\left\vert \sum_{k=1} ^{m}\sum_{j=1}^{n}a_{jk}\langle Te_{j},y_{k}\rangle\right\vert . \] Now Grothendieck's Inequality comes into play: \begin{equation} \sum_{k=1}^{m}\left\Vert TT_{n}x_{k}\right\Vert _{2}\leq K_{G}\left\Vert T\right\Vert \sup\left\{ \left\vert \sum_{j=1}^{n}\sum_{k=1}^{n}a_{jk} s_{j}t_{k}\right\vert :\left\vert s_{j}\right\vert ,\left\vert t_{k} \right\vert \leq1\right\} \leq K_{G}\left\Vert T\right\Vert \label{mmmnnn2} \end{equation} for all $n,m$, with $n\geq m.$ Since \[ \lim_{n\rightarrow\infty}T_{n}x_{k}=x_{k}, \] letting $n\rightarrow\infty$ in (\ref{mmmnnn2}), we have \[ \sum_{k=1}^{m}\left\Vert Tx_{k}\right\Vert _{2}\leq K_{G}\left\Vert T\right\Vert \] and the proof is done. \end{proof} The next result is a kind of converse of the Grothendieck Theorem (for a proof we refer to \cite{lp}): \begin{theorem} [Lindenstrauss, Pe\l czy\'{n}ski, 1968]\label{uyy}If $E$ and $F$ are infinite-dimensional Banach spaces, $E$ has an unconditional Schauder basis and $\Pi_{1}(E,F)=\mathcal{L}(E,F)$, then $E=\ell_{1}$ and $F$ is a Hilbert space. \end{theorem} Another interesting feature of absolutely summing operators is the Pietsch Domination Theorem: \begin{theorem} [Pietsch Domination Theorem, 1967]\label{ppk}If $E$ and $F$ are Banach spaces, a continuous linear operator $T:E\rightarrow F$ is absolutely $p$-summing if and only if there is a constant $C>0$ and a Borel probability measure $\mu$ on the closed unit ball of the dual of $E,$ $\left( B_{E^{\ast}},\sigma(E^{\ast },E)\right) ,$ such that \begin{equation} \left\Vert T(x)\right\Vert \leq C\left( \int_{B_{E^{\ast}}}\left\vert \varphi(x)\right\vert ^{p}d\mu\right) ^{\frac{1}{p}}. \label{gupdt} \end{equation} \end{theorem} \begin{proof} (Sketch) If (\ref{gupdt}) holds it is easy to show that $T$ is absolutely $p$-summing. For the converse, consider the (compact) set $P(B_{E^{\ast}})$ of the probability measures in $C(B_{E^{\ast}})^{\ast}$ (endowed with the weak-star topology). For each $(x_{j})_{j=1}^{m}$ in $E,$ and $m\in \mathbb{N},$ let $g:P(B_{E^{\ast}})\rightarrow\mathbb{R}$ be defined by \[ g\left( \rho\right) =\sum_{j=1}^{m}\left[ \left\Vert T(x_{j})\right\Vert ^{p}-C^{p}\int_{B_{E^{\ast}}}\left\vert \varphi(x_{j})\right\vert ^{p} d\rho(\varphi)\right] \] and $\mathcal{F}$ be the set of all such $g.$ It is not difficult to show that $\mathcal{F}$ is concave and each $g\in\mathcal{F}$ is continuous and convex. Besides, for each $g\in\mathcal{F}$ there is a measure $\mu_{g}\in P(B_{E^{\ast}})$ such that $g(\mu_{g})\leq0.$ In fact, from the compactness of $B_{E^{\ast}}$ and Weierstrass' theorem there is a $\varphi_{0}\in B_{E^{\ast}}$ so that \[ \sum_{j=1}^{m}\left\vert \varphi_{0}(x_{j})\right\vert ^{p}=\sup_{\varphi\in B_{E^{\ast}}}\sum_{j=1}^{m}\left\vert \varphi(x_{j})\right\vert ^{p}.
\] Then, considering the Dirac measure $\mu_{g}=\delta_{\varphi_{0}},$ we deduce $g(\mu_{g})\leq0.$ So, Ky Fan's Lemma (see \cite[page 40]{mono}) ensures that there exists a $\mu\in P(B_{E^{\ast}})$ so that \[ g(\mu)\leq0 \] for all $g\in\mathcal{F}$ and by choosing an arbitrary $g$ with $m=1$ the proof is done. \end{proof} Using the canonical inclusions between $L_{p}$ spaces we get the following result: \begin{corollary} [Inclusion Theorem]If $1\leq r\leq s<\infty,$ then every absolutely $r$-summing operator is absolutely $s$-summing. \end{corollary} The 70's witnessed the emergence of the notion of cotype of a Banach space, with contributions from J. Hoffmann-J\o rgensen \cite{HJ}, B. Maurey \cite{Ma2}, S. Kwapie\'{n} \cite{K22}, E. Dubinsky, A. Pe\l czy\'{n}ski and H. P. Rosenthal \cite{DPR} among others; in 1976 the strong connection between the notions of cotype and absolutely summing operators became evident with the work of B. Maurey and G. Pisier \cite{pisier}. Let us recall the notion of cotype. The Rademacher functions \[ r_{n}:\left[ 0,1\right] \longrightarrow\mathbb{R},n\in\mathbb{N} \] are defined as \[ r_{n}\left( t\right) :=\mathrm{sign}\left( \sin2^{n}\pi t\right) . \] A Banach space $E$ is said to have cotype $q\geq2$ if there is a constant $K\geq0$ so that, for every positive integer $n$ and all $x_{1},...,x_{n}$ in $E$, we have \begin{equation} \left( \underset{i=1}{\overset{n}{ {\displaystyle\sum} }}\left\Vert x_{i}\right\Vert ^{q}\right) ^{1/q}\leq K\left( \int_{0} ^{1}\left\Vert \underset{i=1}{\overset{n}{ {\displaystyle\sum} }}r_{i}\left( t\right) x_{i}\right\Vert ^{2}dt\right) ^{1/2}\text{.} \label{2.3} \end{equation} We denote by $C_{q}\left( E\right) $ the infimum of all such $K$ satisfying $\left( \ref{2.3}\right) $ and $\cot E$ denotes the infimum of the cotypes assumed by $E$, i.e., \[ \cot E=\inf\left\{ 2\leq q\leq\infty;E\text{ has cotype }q\right\} . \] It is worth mentioning that $E$ need not have cotype $\cot E.$ The following combination of results of Maurey, Pisier \cite{pisier} and Talagrand \cite{T1} is self-explanatory: \begin{theorem} [Maurey, Pisier, 1976 + Talagrand, 1992]If a Banach space $E$ has finite cotype $q$, then $id_{E}$ is absolutely $(q,1)$-summing. The converse is true, except for $q=2$. \end{theorem} \begin{proof} (Easy part) If $E$ has cotype $q<\infty,$ then \begin{align*} \left( \underset{i=1}{\overset{n}{ {\displaystyle\sum} }}\left\Vert x_{i}\right\Vert ^{q}\right) ^{1/q} & \leq C_{q}(E)\left( \int_{0}^{1}\left\Vert \underset{i=1}{\overset{n}{ {\displaystyle\sum} }}r_{i}\left( t\right) x_{i}\right\Vert ^{2}dt\right) ^{1/2}\\ & \leq C_{q}(E)\underset{0\leq t\leq1}{\sup}\left\Vert \sum\limits_{j=1}^{n}r_{j}(t)x_{j}\right\Vert \\ & \leq C_{q}(E)\left\Vert (x_{j})_{j=1}^{n}\right\Vert _{w,1}. \end{align*} The rest of the proof is quite delicate. \end{proof} In the 80's part of the focus of the investigation related to absolutely summing operators naturally moved to the nonlinear setting, which will be treated in the next sections. However, the linear theory is still alive and there are still interesting problems being investigated (see, for example, \cite{de3, ko4}). For recent results we mention \cite{belg, PellZ, ku, ku2}; these results reinforce the important role played by cotype: \begin{theorem} [Botelho, Pellegrino, 2009]\label{oik}(\cite{belg, PellZ}) Let $E$ and $F$ be infinite-dimensional Banach spaces. (i) If $\Pi_{1}(E,F)=\mathcal{L}(E,F)$ then $\cot E=\cot F=2$.
(ii) If $2\leq r<\cot F$ and $\Pi_{q,r}(E,F)=\mathcal{L}(E,F),$ then $\mathcal{L}(\ell_{1},\ell_{\cot F})=\Pi_{q,r}(\ell_{1},\ell_{\cot F})$. (iii) If $\cot F=\infty$ and $p\geq1$, then there exists a continuous linear operator from $E$ to $F$ which fails to be $p$-summing. \end{theorem} In a completely different direction, recent papers have investigated linear absolutely summing operators in the context of the theory of lineability/spaceability (see \cite{bd, seo, timoney}). For example, in \cite{timoney} the following result (which can be interpreted as a generalization of results from \cite{davis}) is shown: \begin{theorem} [Kitson, Timoney, 2010]Let $\mathcal{K}(E,F)$ denote the space of compact linear operators from $E$ to $F$. If $E$ and $F$ are infinite-dimensional Banach spaces and $E$ is super-reflexive, then \[ A=\mathcal{K}(E,F)\smallsetminus {\displaystyle\bigcup\limits_{1\leq p<\infty}} \Pi_{p}(E,F) \] is spaceable (i.e., $A\cup\{0\}$ contains a closed infinite-dimensional vector space). \end{theorem} \section{Operator ideals and multi-ideals: generating multi-ideals} The theory of operator ideals is due to Pietsch and goes back to his monograph \cite{mono} in 1978. An operator ideal $\mathcal{I}$ is a subclass of the class $\mathcal{L}$ of all continuous linear operators between Banach spaces such that for all Banach spaces $E$ and $F$ its components \[ \mathcal{I}(E;F):=\mathcal{L}(E;F)\cap\mathcal{I} \] satisfy: (1) $\mathcal{I}(E;F)$ is a linear subspace of $\mathcal{L}(E;F)$ which contains the finite-rank operators. (2) (Ideal property) If $u\in\mathcal{I}(E;F)$, $v\in\mathcal{L}(G;E)$ and $t\in\mathcal{L}(F;H)$, then $t\circ u\circ v\in \mathcal{I}(G;H)$. The structure of operator ideals is shared by the most important classes of operators that appear in Functional Analysis, such as compact, weakly compact, nuclear, approximable, absolutely summing, strictly singular operators, among many others. The multilinear theory of operator ideals was also sketched by Pietsch in \cite{PPPP}. From now on $\mathbb{K}$ represents the field of all scalars (complex or real), and $\mathbb{N}$ denotes the set of all positive integers. For $n\geq1,$ the Banach space of all continuous $n$-linear mappings from $E_{1}\times\cdots\times E_{n}$ into $F$ endowed with the $\sup$ norm is denoted by $\mathcal{L}(E_{1},...,E_{n};F).$ An ideal of multilinear mappings (or multi-ideal) $\mathcal{M}$ is a subclass of the class of all continuous multilinear operators between Banach spaces such that, for every positive integer $n$ and all Banach spaces $E_{1},\ldots,E_{n}$ and $F$, the components \[ \mathcal{M}(E_{1},\ldots,E_{n};F):=\mathcal{L}(E_{1},\ldots,E_{n} ;F)\cap\mathcal{M} \] satisfy: (i) $\mathcal{M}(E_{1},\ldots,E_{n};F)$ is a linear subspace of $\mathcal{L} (E_{1},\ldots,E_{n};F)$ which contains the $n$-linear mappings of finite type. (ii) The ideal property: if $A\in\mathcal{M}(E_{1},\ldots,E_{n};F)$, $u_{j} \in\mathcal{L}(G_{j};E_{j})$ for $j=1,\ldots,n$ and $t\in\mathcal{L}(F;H)$, then $t\circ A\circ(u_{1},\ldots,u_{n})$ belongs to $\mathcal{M}(G_{1} ,\ldots,G_{n};H)$. Moreover, there is a function $\Vert\cdot\Vert_{\mathcal{M}}\colon \mathcal{M}\longrightarrow\lbrack0,\infty)$ satisfying (i') $\Vert\cdot\Vert_{\mathcal{M}}$ restricted to $\mathcal{M}(E_{1} ,\ldots,E_{n};F)$ is a norm, for all Banach spaces $E_{1},\ldots,E_{n}$ and $F$, which makes $\mathcal{M}(E_{1},\ldots,E_{n};F)$ a Banach space.
(ii') $\Vert A\colon\mathbb{K}^{n}\longrightarrow\mathbb{K}:A(\lambda_{1},\ldots,\lambda_{n})=\lambda_{1}\cdots\lambda_{n}\Vert_{\mathcal{M}}=1$ for all $n$,

(iii') If $A\in\mathcal{M}(E_{1},\ldots,E_{n};F)$, $u_{j}\in\mathcal{L}(G_{j};E_{j})$ for $j=1,\ldots,n$ and $v\in\mathcal{L}(F;H)$, then $\Vert v\circ A\circ(u_{1},\ldots,u_{n})\Vert_{\mathcal{M}}\leq\Vert v\Vert\Vert A\Vert_{\mathcal{M}}\Vert u_{1}\Vert\cdots\Vert u_{n}\Vert$.

However, the construction of adequate multilinear and polynomial extensions of a given operator ideal needs some care. The first concern is that, given positive integers $n_{1}$ and $n_{2}$, the respective levels of $n_{1}$-linearity and $n_{2}$-linearity need to have some inter-connection and, obviously, a strong relation with the original level $(n=1)$. This pertinent concern has appeared in several recent papers, with the notions of ideals closed for scalar multiplication, closed for differentiation and the notions of coherent and compatible multilinear ideals (see \cite{botstudia, indagationes, muro}). The following properties illustrate the essence of the aforementioned inter-connection between the levels of the multi-ideal (these concepts are natural adaptations of the analogous ones for polynomials defined in \cite{indagationes}).

\begin{definition}
[cud multi-ideal]An ideal of multilinear mappings $\mathcal{M}$ is closed under differentiation (cud) if, for all $n$, $E_{1},...,E_{n},F$ and $T\in\mathcal{M}(E_{1},...,E_{n};F)$, every linear operator obtained from $T$ by fixing $n-1$ vectors $a_{1},...,a_{j-1},a_{j+1},...,a_{n}$ belongs to $\mathcal{M}(E_{j};F)$, for all $j=1,...,n.$
\end{definition}

\begin{definition}
[csm multi-ideal]An ideal of multilinear mappings $\mathcal{M}$ is closed for scalar multiplication (csm) if for all $n$, $E_{1},...,E_{n},E_{n+1},F,$ $T\in\mathcal{M}(E_{1},...,E_{n};F)$ and $\varphi\in E_{n+1}^{\ast}$, the map $\varphi T$ belongs to $\mathcal{M}(E_{1},...,E_{n},E_{n+1};F)$.
\end{definition}

For the theory of polynomials and multilinear mappings between Banach spaces we refer to \cite{Di, Mu}.

\section{Multiple summing multilinear operators: the prized idea}

Few know that the concept of multiple $p$-summing mappings was introduced in a research report of M.C. Matos in 1992 \cite{rr}, under the terminology of \textquotedblleft strictly absolutely summing multilinear mappings\textquotedblright. The motivation of Matos was a question of Pietsch on the possible coincidence of the Hilbert-Schmidt $n$-linear functionals and the space of absolutely $(s;r_{1},...,r_{n})$-summing $n$-linear functionals for some values of $s$ and $r_{k}$, $k=1,...,n.$ In this research report, the first properties of this class are introduced, as well as the connections with Hilbert-Schmidt multilinear operators and a solution to Pietsch's question in the context of strictly absolutely summing multilinear mappings. However, this research report was not published, and only in 2003 did Matos \cite{collec} publish an improved version of this preprint, now using the terminology of \textit{fully summing multilinear mappings}. At the same time, and independently, Bombal, P\'{e}rez-Garc\'{\i}a and Villanueva \cite{bombal, jmaa} introduced the same concept, under the terminology of multiple summing multilinear operators. Since then this class has gained special attention, being considered by several authors as the most important multilinear generalization of the ideal of absolutely summing operators.
For this reason we will dedicate more attention to the description of this class. A fair description of the subject should begin in 1930, when Littlewood \cite{LLL} (see \cite{bla} for a recent approach) proved his famous $4/3$ inequality, asserting that
\[
\left( \sum\limits_{i,j=1}^{N}\left\vert U(e_{i},e_{j})\right\vert ^{\frac{4}{3}}\right) ^{\frac{3}{4}}\leq\sqrt{2}\left\Vert U\right\Vert
\]
for every bilinear form $U:\ell_{\infty}^{N}\times\ell_{\infty}^{N}\rightarrow\mathbb{C}$ and every positive integer $N.$ One year later Bohnenblust and Hille \cite{BH} (see also \cite{sevilla, Def2}) improved this result to multilinear forms by showing that for every positive integer $n$ there is a $C_{n}>0$ so that
\begin{equation}
\left( \sum\limits_{i_{1},...,i_{n}=1}^{N}\left\vert U(e_{i_{1}},...,e_{i_{n}})\right\vert ^{\frac{2n}{n+1}}\right) ^{\frac{n+1}{2n}}\leq C_{n}\left\Vert U\right\Vert \label{ju}
\end{equation}
for every $n$-linear mapping $U:\ell_{\infty}^{N}\times\cdots\times\ell_{\infty}^{N}\rightarrow\mathbb{C}$ and every positive integer $N$. Using that $\mathcal{L}\left( c_{0};E\right) $ is isometrically isomorphic to $\ell_{1}^{w}\left( E\right) $ (see \cite{djt}), the Bohnenblust-Hille inequality can be re-written as follows (details can be found in \cite{pgtese}):

\begin{theorem}
[Bohnenblust-Hille, re-written (P\'{e}rez-Garc\'{\i}a, 2003)]\label{ytr}If $n$ is a positive integer and $E_{1},...,E_{n}$ are Banach spaces, then there exists a constant $C_{n}\geq0$ such that
\begin{equation}
\left( \sum_{j_{1},\ldots,j_{n}=1}^{N}\left\vert U(x_{j_{1}}^{(1)},\ldots,x_{j_{n}}^{(n)})\right\vert ^{\frac{2n}{n+1}}\right) ^{\frac{n+1}{2n}}\leq C_{n}\prod_{k=1}^{n}\left\Vert (x_{j}^{(k)})_{j=1}^{N}\right\Vert _{w,1} \label{juo}
\end{equation}
for every $U\in\mathcal{L}(E_{1},\ldots,E_{n};\mathbb{K})$, every positive integer $N$ and all $x_{j}^{(k)}\in E_{k}$, $k=1,...,n$ and $j=1,...,N.$
\end{theorem}

In this sense the Bohnenblust-Hille theorem can be interpreted as the beginning of the notion of multiple summing operators: If $1\leq p_{1},...,p_{n}\leq q<\infty,$ $T:E_{1}\times\cdots\times E_{n}\rightarrow F$ is multiple $(q;p_{1},...,p_{n})$-summing ($T\in\mathcal{L}_{m,(q;p_{1},...,p_{n})}(E_{1},...,E_{n};F)$) if there exists $C_{n}>0$ such that
\begin{equation}
\left( \sum_{j_{1},...,j_{n}=1}^{\infty}\Vert T(x_{j_{1}}^{(1)},...,x_{j_{n}}^{(n)})\Vert^{q}\right) ^{1/q}\leq C_{n}\prod\limits_{k=1}^{n}\Vert (x_{j}^{(k)})_{j=1}^{\infty}\Vert_{w,p_{k}}\text{ } \label{jup2}
\end{equation}
for every $(x_{j}^{(k)})_{j=1}^{\infty}\in\ell_{p_{k}}^{w}(E_{k})$, $k=1,...,n$. When $p_{1}=...=p_{n}=p$ we write $\mathcal{L}_{m,(q;p)}$ instead of $\mathcal{L}_{m,(q;p_{1},...,p_{n})};$ when $p_{1}=...=p_{n}=p=q$ we write $\mathcal{L}_{m,p}$ instead of $\mathcal{L}_{m,(q;p_{1},...,p_{n})}.$ The infimum of the constants $C_{n}$ satisfying (\ref{jup2}) defines a norm in $\mathcal{L}_{m,(q;p_{1},...,p_{n})}$ and is denoted by $\pi_{q;p_{1},...,p_{n}}$ (or $\pi_{q;p}$ if $p_{1}=\cdots=p_{n}=p$, or even $\pi_{p}$ when $p_{1}=\cdots=p_{n}=p=q$). It is worth mentioning that the essence of the notion of multiple summing multilinear operators, for bilinear operators, also appears in the paper of Ramanujan and Schock \cite{Ram}.

It is well-known that the power $\frac{2n}{n+1}$ in the Bohnenblust-Hille Theorem \ref{ytr} is optimal. The constant $C_{n}$ from (\ref{juo}) is the same constant from (\ref{ju}). The optimal values are not known. For recent estimates for $C_{n}$ we refer to \cite{psseo}.
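For later use, it is worth making explicit that, in the terminology just introduced, inequality (\ref{juo}) says precisely that every continuous $n$-linear form is multiple $\left( \frac{2n}{n+1};1\right) $-summing; in other words,
\[
\mathcal{L}(E_{1},\ldots,E_{n};\mathbb{K})=\mathcal{L}_{m,(\frac{2n}{n+1};1)}(E_{1},\ldots,E_{n};\mathbb{K})\text{ \ and \ }\pi_{\frac{2n}{n+1};1}(U)\leq C_{n}\left\Vert U\right\Vert ,
\]
Littlewood's $4/3$ inequality being the bilinear case $n=2$.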
Concerning the constants $C_{n}$: in the real case, for $2\leq n\leq14,$ it is shown in \cite{psseo} that $C_{n}\leq2^{\frac{n^{2}+6n-8}{8n}}$ if $n$ is even and $C_{n}\leq2^{\frac{n^{2}+6n-7}{8n}}$ if $n$ is odd (these estimates are derived from \cite{Def2}). In the complex case, H. Qu\'{e}ffelec \cite{Que}, A. Defant and P. Sevilla-Peris \cite{sevilla} have proved that $C_{n}\leq\left( \frac{2}{\sqrt{\pi}}\right) ^{n-1}$, but for $n\geq8$ better estimates can also be found in \cite{psseo} (also derived from \cite{Def2}).

So, since the power $\frac{2n}{n+1}$ is sharp, one should not expect the class of multiple summing operators to lift the trivial coincidence situation from the linear case: although
\[
\Pi_{p}(E;\mathbb{K})=\mathcal{L}(E;\mathbb{K})
\]
for every Banach space $E$, in general
\[
\mathcal{L}_{m,p}(^{n}E;\mathbb{K})\neq\mathcal{L}(^{n}E;\mathbb{K}).
\]
The multi-ideal of multiple summing multilinear operators is, by far, the most investigated class related to the multilinear theory of absolutely summing operators (see \cite{botp, andreas david, sevilla, davidstudia} and references therein). The reason for the success of this generalization of absolutely summing operators is perhaps the nice combination of nontrivial good properties, such as coincidence theorems similar to those from the linear theory (\cite{bombal, botpams, botp}), and challenging problems, such as the inclusion theorem, which holds only in very special situations.

The main results below are presented with the respective dates. In the case of results that appeared in a thesis or dissertation and were published afterwards, we have chosen the date of the thesis/dissertation.

A first remark on the class of multiple summing multilinear operators is that it is easy to show that coincidence results for multiple summing multilinear operators always imply the respective linear ones (details can be found in \cite{spp}):

\begin{proposition}
If $\mathcal{L}(E_{1},\ldots,E_{n};F)=\mathcal{L}_{m,(q;p_{1},\ldots,p_{n})}(E_{1},\ldots,E_{n};F)$, then
\[
\mathcal{L}(E_{j};F)=\Pi_{q,p_{j}}(E_{j};F),j=1,\ldots,n.
\]
\end{proposition}

Bohnenblust-Hille type results were also studied from a different perspective (trying to replace $\frac{2n}{n+1}$ by $2$ at the cost of replacing the weak-$1$ norm by some weak-$p_{k}$ norm). If $(p_{k})_{k=0}^{\infty}$ is the sequence of real numbers given by
\[
p_{0}=2\mbox{ and }p_{k+1}=\frac{2p_{k}}{1+p_{k}}\mbox{ for }k\geq0,
\]
then the following Bohnenblust-Hille type result is valid:

\begin{theorem}
[Botelho, Braunss, Junek, Pellegrino, 2009](\cite{botpams}) Let $E_{1},\ldots,E_{n}$ be Banach spaces of cotype $2$. If $k$ is the natural number such that $2^{k-1}<n\leq2^{k}$, then
\[
\mathcal{L}(E_{1},\ldots,E_{n};\mathbb{K})=\mathcal{L}_{m(2;p_{k},\ldots,p_{k})}(E_{1},\ldots,E_{n};\mathbb{K}).
\]
\end{theorem}

A very important contribution to the theory of multiple summing multilinear operators was given in D. P\'{e}rez-Garc\'{\i}a's thesis, where several new results and techniques are presented, inspiring several related papers.
The inclusion theorems proved by P\'{e}rez-Garc\'{\i}a deserve special attention:

\begin{theorem}
[P\'{e}rez-Garc\'{\i}a, 2003](\cite{pgtese, davidstudia})\label{bbn} If $1\leq p\leq q<2$, then $\mathcal{L}_{m,p}(E_{1},...,E_{n};F)\subset\mathcal{L}_{m,q}(E_{1},...,E_{n};F).$
\end{theorem}

P\'{e}rez-Garc\'{\i}a has also shown that the above result cannot be extended, in the sense that for each $q>2$ there exists $T\in\mathcal{L}_{m,p}(^{2}\ell_{1};\mathbb{K})$, $1\leq p\leq2$, which does not belong to $\mathcal{L}_{m,q}(^{2}\ell_{1};\mathbb{K}).$ When $F$ has cotype $2$ the result is slightly better:

\begin{theorem}
[P\'{e}rez-Garc\'{\i}a, 2003](\cite{pgtese, davidstudia})\label{unv} If $1\leq p\leq q\leq2$ and $F$ has cotype $2$, then $\mathcal{L}_{m,p}(E_{1},...,E_{n};F)\subset\mathcal{L}_{m,q}(E_{1},...,E_{n};F).$
\end{theorem}

When the spaces from the domain have cotype $2$, the inclusions from Theorem \ref{bbn} become coincidences (for a simple proof we refer to \cite{michels, enama}; the main tools used in the proof are results from \cite{arregui}):

\begin{theorem}
[Botelho, Pellegrino, 2008 and Popa, 2009](\cite{enama, popa}) If $1\leq p,q<2$ and $E_{1},...,E_{n}$ have cotype $2$, then
\[
\mathcal{L}_{m,p}(E_{1},...,E_{n};F)=\mathcal{L}_{m,q}(E_{1},...,E_{n};F).
\]
\end{theorem}

Recently, in \cite{michels}, it was shown (using complex interpolation and a complexification argument) that a more general version of Theorem \ref{unv} is valid when the spaces from the domain are $\mathcal{L}_{\infty}$-spaces:

\begin{theorem}
[Botelho, Michels, Pellegrino, 2010]Let $1\leq p\leq q\leq\infty$ and $E_{1},\ldots,E_{n}$ be $\mathcal{L}_{\infty}$-spaces. Then $\mathcal{L}_{m,p}(E_{1},\ldots,E_{n};F)\subset\mathcal{L}_{m,q}(E_{1},\ldots,E_{n};F)$.
\end{theorem}

The proofs of the above results are technical and we omit them. For other related results we mention \cite{botpams, michels, davidstudia}.

Coincidence theorems are also a fruitful subject in the context of multiple summing operators. For example, D. P\'{e}rez-Garc\'{\i}a proved that Grothendieck's Theorem is valid for multiple summing multilinear operators:

\begin{theorem}
[P\'{e}rez-Garc\'{\i}a, 2003]\cite{pgtese} If $1\leq p\leq2$, then $\mathcal{L}_{m,p}(^{n}\ell_{1};\ell_{2})=\mathcal{L}(^{n}\ell_{1};\ell_{2}).$
\end{theorem}

We sketch the proof of a more general result from \cite{botp}, which is inspired by P\'{e}rez-Garc\'{\i}a's ideas:

\begin{theorem}
[Botelho, Pellegrino, 2009]\label{novoteorema} Let $r\geq s\geq1$. If $\mathcal{L}(\ell_{1};F)=\Pi_{r;s}(\ell_{1};F)$, then
\[
\mathcal{L}(^{n}\ell_{1};F)=\mathcal{L}_{m,(r;\,\min\{s,2\})}(^{n}\ell_{1};F)
\]
for every $n\in\mathbb{N}$.
\end{theorem}

\begin{proof}
(Sketch) In \cite[Theorem 3.4]{jmaa} it is shown that if $1\leq p\leq2,$ then $\mathcal{L}_{m,p}(^{2}\ell_{1};\mathbb{K})=\mathcal{L}(^{2}\ell_{1};\mathbb{K})$ and
\begin{equation}
\pi_{p}(\cdot)\leq K_{G}^{2}\Vert\cdot\Vert. \label{ljg}
\end{equation}
Let $(x_{j}^{(1)})_{j=1}^{m_{1}},\ldots,(x_{j}^{(n)})_{j=1}^{m_{n}}$ be $n$ finite sequences in $\ell_{1}$.
Using that $\widehat{\otimes}_{\pi}^{k} \ell_{1}$ is isometrically isomorphic to $\ell_{1}$ and (\ref{ljg}), one can prove that, for every $1\leq p\leq2$, \[ \left\Vert (x_{j_{1}}^{(1)}\otimes\cdots\otimes x_{j_{n}}^{(n)})_{j_{1} ,\ldots,j_{n}=1}^{m_{1},\ldots,m_{n}}\right\Vert _{w,p}\leq K_{G} ^{2n-2}\left\Vert (x_{j}^{(1)})_{j=1}^{m_{1}}\right\Vert _{w,p}\cdots \left\Vert (x_{j}^{(n)})_{j=1}^{m_{n}}\right\Vert _{w,p} \] \indent Let $A\in\mathcal{L}(^{n}\ell_{1};F)$. By $A_{L}$ we mean the linearization of $A$ on $\widehat{\otimes}_{\pi}^{n}\ell_{1}$, that is $A_{L}\in\mathcal{L}(\widehat{\otimes}_{\pi}^{n}\ell_{1};F)$ and $A_{L} (x_{1}\otimes\cdots\otimes x_{n})=A(x_{1},\ldots,x_{n})$ for every $x_{j} \in\ell_{1}$. Since $\widehat{\otimes}_{\pi}^{n}\ell_{1}$ is isometrically isomorphic to $\ell_{1}$, by assumption we have that $A_{L}$ is $(r;s)$ -summing and $\pi_{r;s}(A_{L})\leq M\Vert A_{L}\Vert=M\Vert A\Vert$, where $M$ is a constant independent of $A$. Using the claim with $p=\min\{s,2\}$ we get \begin{align*} \left( \sum_{j_{1},\ldots,j_{n}=1}^{m_{1},\ldots,m_{n}}\left\Vert A(x_{j_{1} }^{(1)},\ldots,x_{j_{n}}^{(n)})\right\Vert ^{r}\right) ^{\frac{1}{r}} & \leq\left( \sum_{j_{1},\ldots,j_{n}=1}^{m_{1},\ldots,m_{n}}\left\Vert A_{L}(x_{j_{1}}^{(1)}\otimes\cdots\otimes x_{j_{n}}^{(n)})\right\Vert ^{r}\right) ^{\frac{1}{r}}\\ & \leq\pi_{r;s}(A_{L})\left\Vert (x_{j_{1}}^{(1)}\otimes\cdots\otimes x_{j_{n}}^{(n)})_{j_{1},\ldots,j_{n}=1}^{m_{1},\ldots,m_{n}}\right\Vert _{w,s}\\ & \leq\pi_{r;s}(A_{L})\left\Vert (x_{j_{1}}^{(1)}\otimes\cdots\otimes x_{j_{n}}^{(n)})_{j_{1},\ldots,j_{n}=1}^{m_{1},\ldots,m_{n}}\right\Vert _{w,\min\{s,2\}}\\ & \leq M\Vert A\Vert K_{G}^{2n-2}\left\Vert (x_{j}^{(1)})_{j=1}^{m_{1} }\right\Vert _{w,\min\{s,2\}}\cdots\left\Vert (x_{j}^{(n)})_{j=1}^{m_{n} }\right\Vert _{w,\min\{s,2\}}, \end{align*} which shows that $A$ is multiple $(r;\min\{s,2\})$-summing. \end{proof} \begin{corollary} [Botelho, Pellegrino, 2009](\cite{botp}) Let $1\leq p\leq2$, $r\geq p$ and let $F$ be a Banach space. The following assertions are equivalent:\newline \textrm{(a)} $\mathcal{L}(\ell_{1};F)=\mathcal{L}_{m(r;p)}(^{n}\ell_{1} ;F)$.\newline\textrm{(b)} $\mathcal{L}(^{n}\ell_{1};F)=\mathcal{L} _{m(r;p)}(^{n}\ell_{1};F)$ for every $n\in\mathbb{N}$.\newline\textrm{(c)} $\mathcal{L}(^{n}\ell_{1};F)=\mathcal{L}_{m(r;p)}(^{n}\ell_{1};F)$ for some $n\in\mathbb{N}$. \end{corollary} The connection between linear coincidence results with coincidence results for multiple summing multilinear operators in indeed stronger: \begin{theorem} [Botelho, Pellegrino, 2009](\cite{botp})\label{teorema} Let $p,r\in \lbrack1,q]$ and let $F$ be a Banach space. Let $B(p,q,r,F)$ denote the set of all Banach spaces $E$ such that \[ \mathcal{L}(E;F)=\Pi_{q;p}(E;F)\mathit{~and~}\mathcal{L}(E;\ell_{q} (F))=\Pi_{q;r}(E;\ell_{q}(F)). \] Then, for every $n\geq2$, \[ \mathcal{L}(E_{1},\ldots,E_{n};F)=\mathcal{L}_{m(q;r,\ldots,r,p)}(E_{1} ,\ldots,E_{n};F) \] whenever $E_{1},\ldots,E_{n}\in B(p,q,r,F)$. \end{theorem} \begin{proof} (Sketch) Induction on $n$. For the case $n=2,$ let $E_{1},E_{2}\in B(p,q,r,F)$. By the Open Mapping Theorem there are constants $C_{1}$ and $C_{2}$ such that \[ \pi_{q;p}(u)\leq C_{1}\Vert u\Vert\mathrm{~for~every~}u\in\mathcal{L} (E_{2};F)\mathit{~and~} \] \[ \pi_{q;r}(v)\leq C_{2}\Vert v\Vert\mathrm{~for~every~}v\in\mathcal{L} (E_{1};\ell_{q}(F)). \] Let $A\in\mathcal{L}(E_{1},E_{2};F)$. 
Given two sequences $(x_{j}^{(1)} )_{j=1}^{\infty}\in\ell_{r}^{w}(E_{1})$ and $(x_{j}^{(2)})_{j=1}^{\infty} \in\ell_{p}^{w}(E_{2})$, fix $m\in\mathbb{N}$ and consider the continuous linear operator \[ A_{1}^{(m)}\colon E_{1}\longrightarrow\ell_{q}(F)~:~A_{1}^{(m)}(x)=(A(x,x_{1} ^{(2)}),\ldots,A(x,x_{m}^{(2)}),0,0,\ldots). \] So, $A_{1}^{(m)}$ is $(q;r)$-summing and $\pi_{q;r}(A_{1}^{(m)})\leq C_{2}\Vert A_{1}^{(m)}\Vert$. For each $x\in B_{E_{1}}$, consider the continuous linear operator \[ A_{x}\colon E_{2}\longrightarrow F~:~A_{x}(y)=A(x,y). \] So, $A_{x}$ is $(q;p)$-summing and $\pi_{q;p}(A_{x})\leq C_{1}\Vert A_{x} \Vert\leq C_{1}\Vert A\Vert\Vert x\Vert\leq C_{1}\Vert A\Vert$ and we can obtain \begin{equation} \left( \sum_{j=1}^{m}\sum_{k=1}^{m}\left\Vert A(x_{j}^{(1)},x_{k} ^{(2)})\right\Vert ^{q}\right) ^{\frac{1}{q}}\!\leq C_{1}C_{2}\Vert A\Vert\left\Vert (x_{j}^{(1)})_{j=1}^{m}\right\Vert _{w,r}\left\Vert (x_{k}^{(2)})_{j=1}^{m}\right\Vert _{w,p},\nonumber \end{equation} which shows that $A$ is multiple $(q;r,p)$-summing and $\pi_{q;r,p}(A)\leq C_{1}C_{2}\Vert A\Vert$.\newline\indent Suppose now that the result holds for $n$, that is: for every $E_{1},\ldots,E_{n}\in B(p,q,r,F)$, $\mathcal{L} (E_{1},\ldots,E_{n};F)=\mathcal{L}_{m(q;r,\ldots,r,p)}(E_{1},\ldots,E_{n};F)$. To prove the case $n+1$, let $E_{1},\ldots,E_{n+1}\in B(p,q,r,F)$. Since $E_{2},\ldots,E_{n+1}$ belong to $B(p,q,r,F)$, we have $\mathcal{L} (E_{2},\ldots,E_{n+1};F)=\mathcal{L}_{m(q;r,\ldots,r,p)}(E_{2},\ldots ,E_{n+1};F)$ by the induction hypotheses and hence there is a constant $C_{1}$ such that \[ \pi_{q;r,\ldots,r,p}(B)\leq C_{1}\Vert B\Vert\mathrm{~for~every~} B\in\mathcal{L}(E_{2},\ldots,E_{n+1};F). \] Since $E_{1}\in B(p,q,r,F)$, there is a constant $C_{2}$ such that \[ \pi_{q;r}(v)\leq C_{2}\Vert v\Vert\mathrm{~for~every~}v\in\mathcal{L} (E_{1};\ell_{q}(F)). \] Let $A\in\mathcal{L}(E_{1},\ldots,E_{n+1};F)$. Given sequences $(x_{j} ^{(1)})_{j=1}^{\infty}\in\ell_{r}^{w}(E_{1}),\ldots,$ $(x_{j}^{(n)} )_{j=1}^{\infty}\in\ell_{r}^{w}(E_{n})$ and $(x_{j}^{(n+1)})_{j=1}^{\infty} \in\ell_{p}^{w}(E_{n+1})$, fix $m\in\mathbb{N}$ and consider the continuous linear operator \[ A_{1}^{(m)}\colon E_{1}\longrightarrow\ell_{q}(F)~:~A_{1}^{(m)}(x)=\left( (A(x,x_{j_{2}}^{(2)},\ldots,x_{j_{n+1}}^{(n+1)}))_{j_{2},\ldots,j_{n+1}=1} ^{m},0,0,\ldots\right) . \] So, $A_{1}^{(m)}$ is $(q;r)$-summing and $\pi_{q;r}(A_{1}^{(m)})\leq C_{2}\Vert A_{1}^{(m)}\Vert$. For each $x\in B_{E_{1}}$, consider the continuous $n$-linear mapping \[ A_{x}^{n}\colon E_{2}\times\cdots\times E_{n+1}\longrightarrow F~:~A_{x} ^{n}(x_{2},\ldots,x_{n+1})=A(x,x_{2},\ldots,x_{n+1}). \] So, \[ \pi_{q;r,\ldots,r,p}(A_{x}^{n})\leq C_{1}\Vert A_{x}^{n}\Vert\leq C_{1}\Vert A\Vert\Vert x\Vert\leq C_{1}\Vert A\Vert \] and we conclude that \[ \left( \sum_{j_{1}=1}^{m}\cdots\sum_{j_{n+1}=1}^{m}\left\Vert A(x_{j_{1} }^{(1)},\ldots x_{j_{n+1}}^{(n+1)})\right\Vert ^{q}\right) ^{\frac{1}{q}}\leq C_{1}C_{2}\Vert A\Vert\left( \prod_{k=1}^{n}\left\Vert (x_{j}^{(k)} )_{j=1}^{m}\right\Vert _{w,r}\right) \left\Vert (x_{j}^{(n+1)})_{j=1} ^{m}\right\Vert _{w,p}. 
\]
\end{proof}

\begin{corollary}
[Souza, 2003, P\'{e}rez-Garc\'{\i}a, 2003](\cite{bombal, pgtese, souza}) If $F$ has cotype $q$ and $E_{1},\ldots,E_{n}$ are arbitrary Banach spaces, then
\[
\mathcal{L}(E_{1},\ldots,E_{n};F)=\mathcal{L}_{m,(q;1)}(E_{1},\ldots,E_{n};F)\mathit{~and~}
\]
\[
\pi_{q;1}(A)\leq C_{q}(F)^{n}\Vert A\Vert\mathit{~for~every~}A\in\mathcal{L}(E_{1},\ldots,E_{n};F),
\]
where $C_{q}(F)$ is the cotype $q$ constant of $F$.
\end{corollary}

\begin{proof}
Both $F$ and $\ell_{q}(F)$ have cotype $q$ (see \cite[Theorem 11.12]{djt}), so $\mathcal{L}(E;F)=\Pi_{q;1}(E;F)$ and $\mathcal{L}(E;\ell_{q}(F))=\Pi_{q;1}(E;\ell_{q}(F))$ for every Banach space $E$ by \cite[Corollary 11.17]{djt}.
\end{proof}

\begin{corollary}
[P\'{e}rez-Garc\'{\i}a, 2003](\cite{bombal, pgtese}) If $E_{1},\ldots,E_{n}$ are $\mathcal{L}_{1}$-spaces and $H$ is a Hilbert space, then
\[
\mathcal{L}(E_{1},\ldots,E_{n};H)=\mathcal{L}_{m,2}(E_{1},\ldots,E_{n};H)\mathit{~and~}
\]
\[
\pi_{2}(A)\leq K_{G}^{n}\Vert A\Vert\mathit{~for~every~}A\in\mathcal{L}(E_{1},\ldots,E_{n};H),
\]
where $K_{G}$ stands for the Grothendieck constant.
\end{corollary}

\begin{proof}
From \cite[Ex. 23.17(a)]{DF} we know that $H$ and $\ell_{2}(H)$ are $\mathcal{L}_{2}$-spaces, so $\mathcal{L}(E;H)=\Pi_{2;2}(E;H)$ and $\mathcal{L}(E;\ell_{2}(H))=\Pi_{2;2}(E;\ell_{2}(H))$ for every $\mathcal{L}_{1}$-space $E$ by \cite[Theorems 3.1 and 2.8]{djt}.
\end{proof}

\begin{corollary}
[P\'{e}rez-Garc\'{\i}a, 2003](\cite{bombal, pgtese}) If $F$ has cotype $2$ and $E_{1},\ldots,E_{n}$ are $\mathcal{L}_{\infty}$-spaces, then $\mathcal{L}(E_{1},\ldots,E_{n};F)=\mathcal{L}_{m,2}(E_{1},\ldots,E_{n};F).$
\end{corollary}

\begin{proof}
From \cite[Theorem 11.12]{djt} we know that $F$ and $\ell_{2}(F)$ have cotype $2$, so $\mathcal{L}(E;F)=\Pi_{2;2}(E;F)$ and $\mathcal{L}(E;\ell_{2}(F))=\Pi_{2;2}(E;\ell_{2}(F))$ for every $\mathcal{L}_{\infty}$-space $E$ by \cite[Theorem 11.14(a)]{djt}.
\end{proof}

By invoking \cite[Theorem 11.14(b)]{djt} instead of \cite[Theorem 11.14(a)]{djt} we get:

\begin{corollary}
If $F$ has cotype $q>2$, $E_{1},\ldots,E_{n}$ are $\mathcal{L}_{\infty}$-spaces and $r<q$, then $\mathcal{L}(E_{1},\ldots,E_{n};F)=\mathcal{L}_{m(q,r)}(E_{1},\ldots,E_{n};F).$
\end{corollary}

Very recently, in a remarkable paper \cite{Def2}, A. Defant, D. Popa and U. Schwarting introduced the notion of coordinatewise multiple summing operators and, among various interesting results, presented the following vector-valued generalization of the Bohnenblust-Hille inequality. Below, a multilinear map $U\in\mathcal{L}(E_{1},...,E_{n};F)$ is separately $(r,1)$-summing if it is absolutely $(r,1)$-summing in each coordinate separately.

\begin{theorem}
[Defant, Popa, Schwarting, 2010]Let $F$ be a Banach space with cotype $q$, and $1\leq r<q$. Then each separately $(r,1)$-summing $U\in\mathcal{L}(E_{1},...,E_{n};F)$ is multiple $(\frac{qrn}{q+(n-1)r},1)$-summing.
\end{theorem}

Using $F=\mathbb{K}$, $q=2$ and $r=1$ in the above theorem, the Bohnenblust-Hille Theorem is recovered.

A last comment about the richness of applications of the class of absolutely summing multilinear operators is related to tensor norms. A. Defant and D.
P\'{e}rez-Garc\'{\i}a \cite{andreas david} constructed an $n$-tensor norm $\alpha$, in the sense of Grothendieck (associated to the class of multiple $1$-summing multilinear forms), possessing the surprising property that the $\alpha$-tensor product $\alpha(Y_{1},...,Y_{n})$ has local unconditional structure for each choice of $n$ arbitrary $\mathcal{L}_{p_{j}}$-spaces $Y_{j}.$ This construction answers a question posed by J. Diestel. It is interesting to mention that in \cite{32} it is shown that none of Grothendieck's $14$ norms satisfies such a condition.

\section{Other attempts of multi-ideals related to absolutely summing operators: an overview}

In the last decade several classes of multilinear maps have been investigated as extensions of the linear concept of absolutely summing operators (for works comparing these different classes we refer to \cite{CD, davidarchiv}). Depending on the properties that a given class possesses, this class is usually compared with the original linear ideal and, in some sense, qualified as a good (or bad) extension of the linear ideal. In this direction, the ideals of dominated multilinear operators and multiple summing multilinear operators are mostly classified as nice generalizations of absolutely summing linear operators. Of course, the evaluation of which properties are important or not has a subjective component, but some classical properties of absolutely summing operators are naturally expected to hold in the context of a reasonable multilinear generalization.

The usual procedure in the multilinear and polynomial theory of absolutely summing operators is to define a class and study its properties. The final sections of this paper have a different purpose; we select some properties that we consider fundamental and investigate which classes satisfy them (especially whether there exist maximal and minimal classes, in a sense that will be made clear soon). Below we sketch an overview of the different multilinear approaches to summability of operators which have arisen in recent years:

\subsection{Dominated multilinear operators: the first attempt}

If $p\geq1,$ $T\in\mathcal{L}(E_{1},...,E_{n};F)$ is said to be $p$-dominated $(T\in\mathcal{L}_{d,p}(E_{1},...,E_{n};F))$ if $\left( T(x_{j}^{1},...,x_{j}^{n})\right) _{j=1}^{\infty}\in\ell_{p/n}(F)$ whenever $(x_{j}^{k})_{j=1}^{\infty}\in\ell_{p}^{w}(E_{k}).$ This concept was essentially introduced by Pietsch and explored in \cite{AlencarMatos, anais, sch} and has strong similarity with the original linear ideal of absolutely summing operators; during some time (before the emergence of the class of multiple summing multilinear operators) this ideal seemed to be considered the most promising multilinear approach to summability (however, as will become clear soon, this class is, in some sense, too small).
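As in the linear case, a standard closed graph argument shows that $T$ is $p$-dominated if and only if there is a constant $C\geq0$ such that
\[
\left( \sum\limits_{j=1}^{m}\left\Vert T(x_{j}^{1},...,x_{j}^{n})\right\Vert ^{p/n}\right) ^{n/p}\leq C\prod\limits_{k=1}^{n}\left\Vert (x_{j}^{k})_{j=1}^{m}\right\Vert _{w,p}
\]
for every $m\in\mathbb{N}$ and all $x_{j}^{k}\in E_{k}$; in particular, for $n=1$ the class $\mathcal{L}_{d,p}$ is precisely the ideal $\Pi_{p}$ of absolutely $p$-summing operators.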
The terminology \textquotedblleft$p$-dominated\textquotedblright\ is justified by the Pietsch-Domination type theorem (a detailed proof can be found in \cite{tesina} or as a consequence of a more general result \cite{pss}): \begin{theorem} [Pietsch, Geiss, 1985](\cite{geisse}) $T\in\mathcal{L}(E_{1},...,E_{n};F)$ is $p$-dominated if and only if there exist $C\geq0$ and regular probability measures $\mu_{j}$ on the Borel $\sigma$-algebras of $B_{E_{j}^{^{\prime}}}$ endowed with the weak star topologies such that \[ \left\Vert T\left( x_{1},...,x_{n}\right) \right\Vert \leq C\prod \limits_{j=1}^{n}\left( \int_{B_{E_{j}^{\prime}}}\left\vert \varphi\left( x_{j}\right) \right\vert ^{p}d\mu_{j}\left( \varphi\right) \right) ^{1/p} \] for every $x_{j}\in E_{j}$ and $j=1,...,n$. \end{theorem} \begin{corollary} If $1\leq p\leq q<\infty$, then $\mathcal{L}_{d,p}\subset\mathcal{L}_{d,q}.$ \end{corollary} This class has several other similarities with the linear concept of absolutely summing operators. We mention two results whose proofs mimic the linear analogues: \begin{theorem} [Mel\'{e}ndez-Tonge, 1999](\cite{MT}) Let $2<p<r^{\ast}<\infty.$ Let $n$ be a positive integer and $F$ be a Banach space. Then \[ \mathcal{L}_{d,1}(^{n}\ell_{p};F)=\mathcal{L}_{d,r}(^{n}\ell_{p};F). \] \end{theorem} \begin{theorem} [Extrapolation Theorem, 2005](\cite{irishd}) If $1<r<p<\infty$ and $E$ is a Banach space such that \[ \mathcal{L}_{d,p}(^{n}E;\ell_{p})=\mathcal{L}_{d,r}(^{n}E;\ell_{p}), \] then \[ \mathcal{L}_{d,p}(^{n}E;F)=\mathcal{L}_{d,1}(^{n}E;F) \] for every Banach space $F.$ \end{theorem} A consequence of Grothendieck's Inequality ensures a rare coincidence situation for this class (this result seems to be part of the folklore of the theory): \begin{theorem} \label{yb}$\mathcal{L}_{d,2}(^{2}c_{0};\mathbb{K})=\mathcal{L}(^{2} c_{0};\mathbb{K}).$ \end{theorem} \begin{proof} (Real case) It suffices to deal with $A:\ell_{\infty}^{m}\times\ell_{\infty }^{m}\rightarrow\mathbb{R}$ with $\left\Vert A\right\Vert \leq1$. Note that \[ \left\vert A(x,y)\right\vert =\left\vert A\left( {\displaystyle\sum\limits_{i=1}^{m}} x_{i}e_{i}, {\displaystyle\sum\limits_{i=1}^{m}} y_{i}e_{i}\right) \right\vert =\left\vert {\displaystyle\sum\limits_{i,j=1}^{m}} A(e_{i},e_{j})x_{i}y_{j}\right\vert . \] Let $a_{ij}=A(e_{i},e_{j})$ and $(x_{k})_{k=1}^{N},(y_{k})_{k=1}^{N}\in \ell_{2}^{w}(\ell_{\infty}^{m})$ be so that $\left\Vert (x_{k})_{k=1} ^{N}\right\Vert _{w,2},\left\Vert (y_{k})_{k=1}^{N}\right\Vert _{w,2}\leq1$, with \[ x_{k}=(x_{k}^{(1)},...,x_{k}^{(m)})\text{ and }y_{k}=(y_{k}^{(1)} ,...,y_{k}^{(m)}). \] Hence, for $i,j=1,...,m,$ consider \[ \widetilde{x_{i}}:=(x_{1}^{(i)},...,x_{N}^{(i)})\in\ell_{2}^{N}\text{ and }\widetilde{y_{j}}:=(y_{1}^{(j)},...,y_{N}^{(j)})\in\ell_{2}^{N}. \] It is well-known (see, for example, \cite[Proposici\'{o}n 5.18]{tesina}) that \[ \left\Vert (x_{k})_{k=1}^{N}\right\Vert _{w,2}^{2}=\max_{1\leq i\leq m}\left\Vert \widetilde{x_{i}}\right\Vert ^{2}\text{ and }\left\Vert (y_{k})_{k=1}^{N}\right\Vert _{w,2}^{2}=\max_{1\leq j\leq m}\left\Vert \widetilde{y_{j}}\right\Vert ^{2}. 
\] So we have $\left\Vert \widetilde{x_{i}}\right\Vert \leq1,\left\Vert \widetilde{y_{j}}\right\Vert \leq1$ for every $i,j=1,...,m,$ and, since $\left\Vert A\right\Vert \leq1,$ from Grothendieck's Inequality we have \[ \left\vert {\displaystyle\sum\limits_{i,j=1}^{m}} a_{ij}<\widetilde{x_{i}},\widetilde{y_{j}}>\right\vert \leq K_{G}, \] and therefore \[ \left\vert {\displaystyle\sum\limits_{i,j=1}^{m}} a_{ij} {\displaystyle\sum\limits_{k=1}^{N}} x_{k}^{(i)}y_{k}^{(j)}\right\vert \leq K_{G} \] i.e., \[ \left\vert {\displaystyle\sum\limits_{k=1}^{N}} \left( {\displaystyle\sum\limits_{i,j=1}^{m}} a_{ij}x_{k}^{(i)}y_{k}^{(j)}\right) \right\vert \leq K_{G} \] and \[ \left\vert {\displaystyle\sum\limits_{k=1}^{N}} A\left( x_{k},y_{k}\right) \right\vert =\left\vert {\displaystyle\sum\limits_{k=1}^{N}} A\left( {\displaystyle\sum\limits_{i=1}^{m}} x_{k}^{(i)}e_{i}, {\displaystyle\sum\limits_{j=1}^{m}} y_{k}^{(j)}e_{j}\right) \right\vert \leq K_{G}. \] Since $x_{k}$ can be replaced by $\varepsilon_{k}x_{k}$ with $\varepsilon _{k}=1$ or $-1$, we can conclude that \[ {\displaystyle\sum\limits_{k=1}^{N}} \left\vert A\left( x_{k},y_{k}\right) \right\vert \leq K_{G}. \] \end{proof} In fact the result above is valid for $\mathcal{L}_{\infty}$ spaces instead of $c_{0}$. For a direct proof of this result to $C(K)$ spaces we refer to \cite{irish}. It is also known that dominated multilinear maps satisfy a Dvoretzky-Rogers type theorem ($\mathcal{L}_{d,p}(^{n}E;E)=\mathcal{L}(^{n}E;E)$ if and only if $\dim E<\infty$). Recent results show that this class is too small, in some sense (coincidence situations are almost impossible). The proof of the next result presented here is different from the original \cite{Jar}, and appears in \cite{belg}: \begin{theorem} [Jarchow, Palazuelos, P\'{e}rez-Garc\'{\i}a and Villanueva, 2007] \label{ffy}(\cite{Jar}) For every $n\geq3$ and every $p\geq1$ and every infinite dimensional Banach space $E$ there exists $T\in\mathcal{L} (^{n}E;\mathbb{K})$ that fails to be $p$-dominated. \end{theorem} \begin{proof} Suppose that every $T\in\mathcal{L}(^{3}E;\mathbb{K})$ is $p$-dominated. From \cite[Lemma 3.4]{irish} one can conclude that every continuous linear operator from $E$ to $\mathcal{L}(E;\mathcal{L}(^{2}E;\mathbb{K}))$ is $p$-summing. From \cite[Proposition 19.17]{djt} we know that $\mathcal{L}(^{2} E;\mathbb{K})$ has no finite cotype, but from Theorem \ref{oik} (iii) this is not possible. Since the result is true for $n=3$, it is easy to conclude that it is true for $n>3$. \end{proof} For polynomial versions of this result we refer to \cite{PAMS, PAMS2} and for more results on dominated multilinear operators/polynomials we refer to \cite{irish, PAMS, cg, Jar, MT} and references therein. Since Theorem \ref{ffy} is valid for $n\geq3$, a natural question is: are there coincidence situations for $n=2$ different from the obvious variations of Theorem \ref{yb}? The answer is yes: \begin{theorem} [Botelho, Pellegrino, Rueda, 2010](\cite{jap})\label{jja} Let $E$ be a cotype $2$ space. Then $E\widehat{\otimes}_{\pi}E=E\widehat{\otimes}_{\varepsilon}E$ if and only if $\mathcal{L}_{d,1}(^{2}E;\mathbb{K})=\mathcal{L}(^{2} E;\mathbb{K})$. \end{theorem} The existence of spaces fulfilling the hypotheses of Theorem \ref{jja} is assured by G. Pisier \cite{18}. Also, $\cot E=2$ is a necessary condition for Theorem \ref{jja} since in \cite{jap} it is also proved that \[ \mathcal{L}_{d,1}(^{2}E;\mathbb{K})=\mathcal{L}(^{2}E;\mathbb{K} )\Rightarrow\cot E=2. 
\]

\subsection{Semi-integral multilinear operators}

If $p\geq1,$ $T\in\mathcal{L}(E_{1},...,E_{n};F)$ is $p$-semi-integral $(T\in\mathcal{L}_{si,p}(E_{1},...,E_{n};F))$ if there exists a $C\geq0$ such that
\[
\left( \sum\limits_{j=1}^{m}\parallel T(x_{j}^{(1)},...,x_{j}^{(n)})\parallel^{p}\right) ^{1/p}\leq C\left( \sup_{\left( \varphi_{1},...,\varphi_{n}\right) \in B_{E_{1}^{\ast}}\times\cdots\times B_{E_{n}^{\ast}}}\sum\limits_{j=1}^{m}\mid\varphi_{1}(x_{j}^{(1)})...\varphi_{n}(x_{j}^{(n)})\mid^{p}\right) ^{1/p}
\]
for every $m\in\mathbb{N}$, $x_{j}^{(l)}\in E_{l}$ with $l=1,...,n$ and $j=1,...,m.$ This ideal goes back to the research report \cite{AlencarMatos} of R. Alencar and M.C. Matos and was explored in \cite{CD}. As in the case of $p$-dominated multilinear operators, a Pietsch Domination theorem is valid in this context (for a proof we mention \cite{CD}, although the result is inspired by the case $p=1$ from the Alencar-Matos paper \cite{AlencarMatos}; see also \cite{BPRn} for a recent general argument):

\begin{theorem}
[Alencar, Matos, 1989 and \c{C}aliskan, Pellegrino, 2007]$T\in\mathcal{L}(E_{1},...,E_{n};F)$ is $p$-semi-integral if and only if there exist $C\geq0$ and a regular probability measure $\mu$ on the Borel $\sigma$-algebra $\mathcal{B}(B_{E_{1}^{\ast}}\times\cdots\times B_{E_{n}^{\ast}})$ of $B_{E_{1}^{\ast}}\times\cdots\times B_{E_{n}^{\ast}}$ endowed with the product of the weak star topologies $\sigma(E_{l}^{\ast},E_{l}),$ $l=1,...,n,$ such that
\[
\parallel T(x_{1},...,x_{n})\parallel\leq C\left( \int_{B_{E_{1}^{\ast}}\times\cdots\times B_{E_{n}^{\ast}}}\mid\varphi_{1}(x_{1})...\varphi_{n}(x_{n})\mid^{p}d\mu(\varphi_{1},...,\varphi_{n})\right) ^{1/p}
\]
for every $x_{l}\in E_{l}$, $l=1,...,n.$
\end{theorem}

\begin{corollary}
If $1\leq p\leq q<\infty,$ then $\mathcal{L}_{si,p}\subset\mathcal{L}_{si,q}.$
\end{corollary}

It is well-known that, as it happens with the ideal of $p$-dominated multilinear operators, this ideal satisfies a Dvoretzky-Rogers type theorem. The \textquotedblleft size\textquotedblright\ of this class is strongly connected to the \textquotedblleft size\textquotedblright\ of the class of $p$-dominated multilinear operators. For example, in \cite{CD} it is shown that
\begin{equation}
\mathcal{L}_{si,p}(E_{1},...,E_{n};F)\subset\mathcal{L}_{d,np}(E_{1},...,E_{n};F). \label{ww}
\end{equation}
In fact, if $T\in\mathcal{L}_{si,p}(E_{1},...,E_{n};F)$ then
\begin{align*}
\left( \sum\limits_{j=1}^{\infty}\parallel T(x_{1}^{(j)},...,x_{n}^{(j)})\parallel^{p}\right) ^{1/p} & \leq C\left( \sup_{\varphi_{l}\in B_{E_{l}^{\ast}},l=1,...,n}\sum\limits_{j=1}^{\infty}\mid\varphi_{1}(x_{1}^{(j)})...\varphi_{n}(x_{n}^{(j)})\mid^{p}\right) ^{1/p}\\
& \leq C\sup_{\varphi_{l}\in B_{E_{l}^{\ast}},l=1,...,n}\left( \sum\limits_{j=1}^{\infty}\mid\varphi_{1}(x_{1}^{(j)})\mid^{np}\right) ^{\frac{1}{np}}...\left( \sum\limits_{j=1}^{\infty}\mid\varphi_{n}(x_{n}^{(j)})\mid^{np}\right) ^{\frac{1}{np}}\\
& =C\left\Vert (x_{1}^{(j)})_{j=1}^{\infty}\right\Vert _{w,np}...\left\Vert (x_{n}^{(j)})_{j=1}^{\infty}\right\Vert _{w,np}.
\end{align*}
In view of the \textquotedblleft small size\textquotedblright\ of the class of $p$-dominated multilinear operators, the inclusion (\ref{ww}) might be viewed as a bad property.
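We also note that, for $n=1$, the supremum appearing in the definition of $p$-semi-integral operators is exactly the $p$-th power of the weak $\ell_{p}$-norm, that is,
\[
\sup_{\varphi\in B_{E_{1}^{\ast}}}\sum\limits_{j=1}^{m}\mid\varphi(x_{j}^{(1)})\mid^{p}=\left\Vert (x_{j}^{(1)})_{j=1}^{m}\right\Vert _{w,p}^{p},
\]
so, as in the case of $\mathcal{L}_{d,p}$, in the linear case $\mathcal{L}_{si,p}$ reduces to the ideal $\Pi_{p}$.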
\subsection{Strongly summing multilinear operators}

If $p\geq1,$ $T\in\mathcal{L}(E_{1},...,E_{n};F)$ is strongly $p$-summing ($T\in\mathcal{L}_{ss,p}(E_{1},...,E_{n};F)$) if there exists a constant $C\geq0$ such that
\begin{equation}
\left( \sum\limits_{j=1}^{m}\parallel T(x_{j}^{(1)},...,x_{j}^{(n)})\parallel^{p}\right) ^{1/p}\leq C\left( \underset{\phi\in B_{\mathcal{L}(E_{1},...,E_{n};\mathbb{K})}}{\sup}\sum\limits_{j=1}^{m}\mid\phi(x_{j}^{(1)},...,x_{j}^{(n)})\mid^{p}\right) ^{1/p} \label{in}
\end{equation}
for every $m\in\mathbb{N}$, $x_{j}^{(l)}\in E_{l}$ with $l=1,...,n$ and $j=1,...,m.$

The multi-ideal of strongly $p$-summing multilinear operators is due to V. Dimant \cite{dimant} and is perhaps the class that best translates to the multilinear setting the properties of the original linear concept. For example, a Grothendieck type theorem and a Pietsch-Domination type theorem are valid:

\begin{theorem}
[Dimant, 2003](\cite{dimant}) Every $T\in\mathcal{L}(^{n}\ell_{1};\ell_{2})$ is strongly $1$-summing.
\end{theorem}

\begin{theorem}
[Dimant, 2003](\cite{dimant}) $T\in\mathcal{L}\left( E_{1},...,E_{n};F\right) $ is strongly $p$-summing if, and only if, there are a probability measure $\mu$ on $B_{(E_{1}\otimes_{\pi}\cdots\otimes_{\pi}E_{n})^{\ast}}$, with the weak-star topology, and a constant $C\geq0$ so that
\begin{equation}
\left\Vert T\left( x_{1},...,x_{n}\right) \right\Vert \leq C\left( \int_{B_{(E_{1}\otimes_{\pi}\cdots\otimes_{\pi}E_{n})^{\ast}}}\left\vert \varphi\left( x_{1}\otimes\cdots\otimes x_{n}\right) \right\vert ^{p}d\mu\left( \varphi\right) \right) ^{\frac{1}{p}} \label{7out08c}
\end{equation}
for all $(x_{1},...,x_{n})\in E_{1}\times\cdots\times E_{n}.$
\end{theorem}

The following intriguing result shows that in special situations the class of strongly $p$-summing multilinear maps contains the ideal of multiple $p$-summing operators:

\begin{theorem}
[Mezrag, Saadi, 2009](\cite{mss}) Let $1<p<\infty.$ If $E_{j}$ is an $\mathcal{L}_{p}$-space for all $j=1,...,n$ and $F$ is an $\mathcal{L}_{p^{\ast}}$-space, then
\[
\mathcal{L}_{m,p^{\ast}}(E_{1},...,E_{n};F)\subset\mathcal{L}_{ss,p^{\ast}}(E_{1},...,E_{n};F).
\]
\end{theorem}

It is not hard to prove that a Dvoretzky-Rogers Theorem is also valid for this class. Besides, the class has a nice size in the sense that no coincidence theorem can hold for $n$-linear maps if there is no analogue for linear operators. This indicates that this class is not \textquotedblleft unnecessarily big\textquotedblright.
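Since $B_{\mathcal{L}(E_{1},...,E_{n};\mathbb{K})}$ can be identified isometrically with $B_{(E_{1}\otimes_{\pi}\cdots\otimes_{\pi}E_{n})^{\ast}}$, the supremum in (\ref{in}) is nothing but
\[
\left\Vert \left( x_{j}^{(1)}\otimes\cdots\otimes x_{j}^{(n)}\right) _{j=1}^{m}\right\Vert _{w,p}^{p},
\]
the weak $\ell_{p}$-norm being computed in $E_{1}\widehat{\otimes}_{\pi}\cdots\widehat{\otimes}_{\pi}E_{n}$; this also explains why the Pietsch-type domination in Dimant's theorem is given by a measure on $B_{(E_{1}\otimes_{\pi}\cdots\otimes_{\pi}E_{n})^{\ast}}$.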
\subsection{Absolutely summing multilinear operators}

If $\frac{1}{p}\leq\frac{1}{q_{1}}+\cdots+\frac{1}{q_{n}},$ $T\in\mathcal{L}(E_{1},...,E_{n};F)$ is absolutely $(p;q_{1},...,q_{n})$-summing at the point $a=(a_{1},...,a_{n})\in E_{1}\times\cdots\times E_{n}$ when
\[
\left( T(a_{1}+x_{j}^{(1)},...,a_{n}+x_{j}^{(n)})-T(a_{1},...,a_{n})\right) _{j=1}^{\infty}\in\ell_{p}(F)
\]
for every $\left( x_{j}^{(k)}\right) _{j=1}^{\infty}\in\ell_{q_{k}}^{w}(E_{k}).$ This class is denoted by $\mathcal{L}_{as,(p;q_{1},...,q_{n})}^{(a)}.$ When $a$ is the origin we simply say that $T$ is absolutely $(p;q_{1},...,q_{n})$-summing and write $T\in\mathcal{L}_{as,(p;q_{1},...,q_{n})}$ (when $q_{1}=\cdots=q_{n}=q$ we write $\mathcal{L}_{as,(p;q)}$ and when $q_{1}=...=q_{n}=q=p$ we just write $\mathcal{L}_{as,p}$)$.$ In the case that $T$ is absolutely $(p;q_{1},...,q_{n})$-summing at every $(a_{1},...,a_{n})\in E_{1}\times\cdots\times E_{n}$ we say that $T$ is absolutely $(p;q_{1},...,q_{n})$-summing everywhere and we write $T\in\mathcal{L}_{as,(p;q_{1},...,q_{n})}^{ev}(E_{1},...,E_{n};F)$ (when $q_{1}=\cdots=q_{n}=q$ we write $\mathcal{L}_{as,(p;q)}^{ev}$ and when $q_{1}=...=q_{n}=q=p$ we just write $\mathcal{L}_{as,p}^{ev}$)$.$

The class of absolutely $(p;q_{1},...,q_{n})$-summing operators (when $a=0$) seems to have appeared for the first time in \cite{AlencarMatos}. The starting point of the theory of absolutely summing multilinear operators is perhaps the result due to A. Defant and J. Voigt (see \cite{AlencarMatos}), known as the Defant-Voigt Theorem, which asserts that every continuous multilinear form is $(1;1,...,1)$-summing. We prove here a slightly more general version, which can be found in \cite{port}:

\begin{theorem}
[The generalized Defant-Voigt Theorem, 2007]\label{t1}Let $A\in\mathcal{L}(E_{1},...,E_{n};F)$ and suppose that there exist $1\leq r<n$ and $C>0$ so that for any $x_{1}\in E_{1},...,x_{r}\in E_{r},$ the $s$-linear ($s=n-r$) mapping $A_{x_{1}\cdots x_{r}}(x_{r+1},...,x_{n})=A(x_{1},...,x_{n})$ is absolutely $(p;q_{1},...,q_{s})$-summing and
\[
\left\Vert A_{x_{1}\cdots x_{r}}\right\Vert _{as(p;q_{1},...,q_{s})}\leq C\left\Vert A\right\Vert \left\Vert x_{1}\right\Vert ...\left\Vert x_{r}\right\Vert .
\]
Then $A$ is absolutely $(p;1,...,1,q_{1},...,q_{s})$-summing. In particular,
\[
\mathcal{L}(E_{1},...,E_{n};\mathbb{K})=\mathcal{L}_{as,1}(E_{1},...,E_{n};\mathbb{K}).
\]
\end{theorem}

\begin{proof}
(Sketch) Given $m\in\mathbb{N}$ and $x_{1}^{(1)},...,x_{1}^{(m)}\in E_{1},...,x_{n}^{(1)},...,x_{n}^{(m)}\in E_{n}$, let us consider $\varphi_{j}\in B_{F^{\prime}}$ such that $\left\Vert A(x_{1}^{(j)},...,x_{n}^{(j)})\right\Vert =\varphi_{j}(A(x_{1}^{(j)},...,x_{n}^{(j)}))$ for every $j=1,...,m$. Fix $b_{1},...,b_{m}\in\mathbb{K}$ so that $\sum\limits_{j=1}^{m}\left\vert b_{j}\right\vert ^{q}=1,$ where $\frac{1}{p}+\frac{1}{q}=1,$ and
\[
\left( \sum\limits_{j=1}^{m}\left\Vert A(x_{1}^{(j)},...,x_{n}^{(j)})\right\Vert ^{p}\right) ^{\frac{1}{p}}=\left\Vert \left( \left\Vert A(x_{1}^{(j)},...,x_{n}^{(j)})\right\Vert \right) _{j=1}^{m}\right\Vert _{p}=\sum\limits_{j=1}^{m}b_{j}\left\Vert A(x_{1}^{(j)},...,x_{n}^{(j)})\right\Vert .
\] If $\lambda$ is the Lebesgue measure on $I=[0,1]^{r},$ we have \[ \int\nolimits_{I}\sum\limits_{j=1}^{m}\left( \prod_{l=1}^{r}r_{j} (t_{l})\right) b_{j}\varphi_{j}A(\sum\limits_{j_{1}=1}^{m}r_{j_{1}} (t_{1})x_{1}^{(j_{1})},...,\sum\limits_{j_{r}=1}^{m}r_{j_{r}}(t_{r} )x_{r}^{(j_{r})},x_{r+1}^{(j)},...,x_{n}^{(j)})d\lambda \] \[ =\sum\limits_{j,j_{1},...,j_{r}=1}^{m}b_{j}\varphi_{j}A(x_{1}^{(j_{1} )},...,x_{r}^{(j_{r})},x_{r+1}^{(j)},...,x_{n}^{(j)})\int\limits_{0}^{1} r_{j}(t_{1})r_{j_{1}}(t_{1})dt_{1}...\int\limits_{0}^{1}r_{j}(t_{r})r_{j_{r} }(t_{r})dt_{r}\hspace*{1.3em} \] \[ =\sum\limits_{j=1}^{m}\sum\limits_{j_{1}=1}^{m}...\sum\limits_{j_{r}=1} ^{m}b_{j}\varphi_{j}A(x_{1}^{(j_{1})},...,x_{r}^{(j_{r})},x_{r+1} ^{(j)},...,x_{n}^{(j)})\delta_{jj_{1}}...\delta_{jj_{r}}=\sum\limits_{j=1} ^{m}b_{j}\varphi_{j}A(x_{1}^{(j)},...,x_{n}^{(j)}). \] For $z_{l}=\sum\limits_{j=1}^{m}r_{j}(t_{l})x_{l}^{(j)}$, $l=1,...,r,$ we get \begin{align*} & \left( \sum\limits_{j=1}^{m}\left\Vert A(x_{1}^{(j)},...,x_{n} ^{(j)})\right\Vert ^{p}\right) ^{\frac{1}{p}}=\sum\limits_{j=1}^{m} b_{j}\varphi_{j}A(x_{1}^{(j)},...,x_{n}^{(j)})\\ & \leq\int\nolimits_{I}\left\vert \sum\limits_{j=1}^{m}\left( \prod _{l=1}^{r}r_{j}(t_{l})\right) b_{j}\varphi_{j}A(\sum\limits_{j_{1}=1} ^{m}r_{j_{1}}(t_{1})x_{1}^{(j_{1})},...,\sum\limits_{j_{r}=1}^{m}r_{j_{r} }(t_{r})x_{r}^{(j_{r})},x_{r+1}^{(j)},...,x_{n}^{(j)})\right\vert d\lambda \end{align*} and after standard calculations we get \[ \left( \sum\limits_{j=1}^{m}\left\Vert A(x_{1}^{(j)},...,x_{n}^{(j)} )\right\Vert ^{p}\right) ^{\frac{1}{p}}\leq C\left\Vert A\right\Vert \left( \prod_{l=1}^{r}\left\Vert (x_{l}^{(j)})_{j=1}^{m}\right\Vert _{w,1}\right) \left( \prod_{l=r+1}^{n}\left\Vert (x_{l}^{(j)})_{j=1}^{m}\right\Vert _{w,q_{l}}\right) .\text{ } \] \end{proof} Using a generalized version of Grothendieck's Inequality, D. P\'{e}rez Garc\'{\i}a proved a striking generalization of Theorem \ref{yb}: \begin{theorem} [P\'{e}rez-Garc\'{\i}a, 2002](\cite{tesina, trace}) \label{blaw} $\mathcal{L}_{as,(1,2)}(^{n}c_{0};\mathbb{K})=\mathcal{L}(^{n}c_{0} ;\mathbb{K})$ for every $n\geq2.$ \end{theorem} A recent result from Blasco et al \cite{bla2} shows that the crucial cases of Theorem \ref{blaw} are precisely the cases $n=2$ and $n=3:$ \begin{theorem} [Blasco, Botelho, Pellegrino, Rueda, 2010]Let $1\leq r\leq2$. If $\mathcal{L}(^{2}E;\mathbb{K})=\mathcal{L}_{as,(1,r)}(^{2}E;\mathbb{K})$ and $\mathcal{L}(^{3}E;\mathbb{K})=\mathcal{L}_{as,(1,r)}(^{3}E;\mathbb{K})$, then \[ \mathcal{L}(^{n}E;\mathbb{K})=\mathcal{L}_{as,(1,r)}(^{n}E;\mathbb{K}) \] for every $n\geq2.$ \end{theorem} \begin{proof} (Sketch of the proof when $n$ is odd) Induction: Suppose that the result is valid for a fixed odd $k$ and we shall prove that it is also true for $k+2$. Let $T\in\mathcal{L}(^{k+2}E;\mathbb{K})$ and consider \begin{align*} F & =E\hat{\otimes}_{\pi}\cdots\hat{\otimes}_{\pi}E\text{ (}k\text{ times)}\\ G & =E\hat{\otimes}_{\pi}E. \end{align*} Consider a bilinear form \[ B\in\mathcal{L}(F,G;\mathbb{K}) \] so that \[ B(x^{1}\otimes\cdots\otimes x^{k},x^{k+1}\otimes x^{k+2})=T(x^{1} ,...,x^{k+2}). \] Let $x_{j}^{(s)}\in E$ for $j=1,...,m$ and $s=1,...,k+2$. 
Using the Defant-Voigt Theorem for $B$ and the induction hypothesis one can find a constant $C$ so that
\begin{align*}
\sum_{j=1}^{m}\left\vert T(x_{j}^{(1)},\ldots,x_{j}^{(k+2)})\right\vert & =\sum_{j=1}^{m}\left\vert B(x_{j}^{(1)}\otimes\ldots\otimes x_{j}^{(k)},x_{j}^{(k+1)}\otimes x_{j}^{(k+2)})\right\vert \\
& \leq\pi_{(1;1)}(B)\left\Vert (x_{j}^{(1)}\otimes\ldots\otimes x_{j}^{(k)})_{j=1}^{m}\right\Vert _{\ell_{1}^{w}(E\hat{\otimes}_{\pi}\cdots\hat{\otimes}_{\pi}E)}\left\Vert (x_{j}^{(k+1)}\otimes x_{j}^{(k+2)})_{j=1}^{m}\right\Vert _{\ell_{1}^{w}(E\hat{\otimes}_{\pi}E)}\\
& \leq C\left\Vert B\right\Vert \left\Vert (x_{j}^{(1)})_{j=1}^{m}\right\Vert _{\ell_{r}^{w}(E)}\cdots\left\Vert (x_{j}^{(k+2)})_{j=1}^{m}\right\Vert _{\ell_{r}^{w}(E)}
\end{align*}
and the proof is done.
\end{proof}

For general Banach spaces, the class $\mathcal{L}_{as,(1,2)}(^{n}E;\mathbb{K})$ also plays an important role, as an \textquotedblleft upper bound\textquotedblright\ for the classes of $p$-dominated multilinear mappings \cite{fmat, pgtese}:

\begin{theorem}
[Floret, Matos, 1995 (complex case), P\'{e}rez-Garc\'{\i}a, 2003]Let $n\in\mathbb{N}$, $n\geq2$ and $p\geq1.$ If $E$ is a Banach space, then
\[
\mathcal{L}_{d,p}(^{n}E;\mathbb{K})\subset\mathcal{L}_{as,(1,2)}(^{n}E;\mathbb{K}).
\]
\end{theorem}

At first glance the concept of absolutely summing multilinear operator seems to be the natural multilinear definition of absolute summability. However, it is easy to find bad properties which make the ideal very different from the linear one. For example, no general Inclusion Theorem is valid. In fact, the Defant-Voigt Theorem ensures that
\[
\mathcal{L}_{as,1}(^{2}\ell_{2};\mathbb{K})=\mathcal{L}(^{2}\ell_{2};\mathbb{K})
\]
but it is easy to show that
\[
\mathcal{L}_{as,2}(^{2}\ell_{2};\mathbb{K})\neq\mathcal{L}(^{2}\ell_{2};\mathbb{K}).
\]
Besides, contrary to the linear case, several coincidence theorems hold, and this behavior removes the linear essence from this class. For example, Grothendieck's Theorem is valid, but there are several other coincidence situations with absolutely no linear analogue, such as
\begin{equation}
\mathcal{L}_{as,1}(^{n}\ell_{2};F)=\mathcal{L}(^{n}\ell_{2};F) \label{kio}
\end{equation}
for all $n\geq2$ and all $F$. Since (\ref{kio}) is not true for $n=1$, from now on we call coincidence situations such as (\ref{kio}) \textquotedblleft artificial coincidence situations\textquotedblright. Moreover, the polynomial version of this class is not a holomorphy type (this is a bad property!) and, in the terminology of \cite{muro}, this bad property is reinforced since this class is not compatible with the linear ideal of absolutely summing operators. Despite its bad properties, this class has some challenging problems (see, for example, \cite{michels, junek, danielstudia}).

As it occurs for multiple summing multilinear operators, in \cite{michels} it was shown that a full Inclusion Theorem is valid when the spaces from the domain are $\mathcal{L}_{\infty}$-spaces:

\begin{theorem}
[Botelho, Michels, Pellegrino, 2010]Let $1\leq p\leq q\leq\infty$ and $E_{1},\ldots,E_{n}$ be $\mathcal{L}_{\infty}$-spaces. Then
\[
\mathcal{L}_{as,p}(E_{1},\ldots,E_{n};F)\subset\mathcal{L}_{as,q}(E_{1},\ldots,E_{n};F).
\]
\end{theorem}

In some cases, surprisingly, the inclusion theorem holds in the opposite direction to the expected one \cite{junek} (i.e.
if $p$ increases, the ideal decreases):

\begin{theorem}
[Junek, Matos, Pellegrino, 2008]\label{tt}If $E$ has cotype $2,$ $F$ is any Banach space and $n\geq2,$ then
\[
\mathcal{L}_{as,q}(^{n}E;F)\subset\mathcal{L}_{as,p}(^{n}E;F)
\]
holds true for $1\leq p\leq q\leq2$.
\end{theorem}

The class of everywhere absolutely $p$-summing multilinear operators was introduced by M.C. Matos \cite{nachmatos}, but he credits the idea to Richard Aron. It is easy to show that $\mathcal{L}_{m,p}\subset\mathcal{L}_{as,p}^{ev}$ and, as it occurs for $\mathcal{L}_{ss,p}$ and $\mathcal{L}_{m,p},$ this class has no artificial coincidence theorems (a proof can be found in \cite{spp}):

\begin{proposition}
If $\mathcal{L}(E_{1},\ldots,E_{n};F)=\mathcal{L}_{as,(q;p_{1},\ldots,p_{n})}^{ev}(E_{1},\ldots,E_{n};F)$, then
\[
\mathcal{L}(E_{j};F)=\Pi_{q,p_{j}}(E_{j};F),j=1,\ldots,n.
\]
\end{proposition}

\subsection{Strongly multiple summing multilinear operators: the last attempt}

If $p\geq1,$ $T\in\mathcal{L}(E_{1},...,E_{n};F)$ is strongly multiple $p$-summing ($T\in\mathcal{L}_{sm,p}(E_{1},...,E_{n};F)$) if there exists $C\geq0$ such that
\begin{equation}
\left( \sum\limits_{j_{1},...,j_{n}=1}^{m}\parallel T(x_{j_{1}}^{(1)},...,x_{j_{n}}^{(n)})\parallel^{p}\right) ^{1/p}\leq C\left( \underset{\phi\in B_{\mathcal{L}(E_{1},...,E_{n};\mathbb{K})}}{\sup}\sum\limits_{j_{1},...,j_{n}=1}^{m}\mid\phi(x_{j_{1}}^{(1)},...,x_{j_{n}}^{(n)})\mid^{p}\right) ^{1/p}
\end{equation}
for every $m\in\mathbb{N}$, $x_{j_{l}}^{(l)}\in E_{l}$ with $l=1,...,n$ and $j_{l}=1,...,m.$

The multi-ideal of strongly multiple $p$-summing multilinear operators was introduced in \cite{port} and has not been explored since then. It contains the ideals $\mathcal{L}_{ss,p}$ and $\mathcal{L}_{m,p}.$ All nice properties of $\mathcal{L}_{ss,p}$ remain valid, except perhaps for versions of the Pietsch Domination Theorem (and of the inclusion theorem), which are unknown. The size of this class is potentially better than the sizes of $\mathcal{L}_{ss,p}$ and $\mathcal{L}_{m,p}$: despite containing these two classes, it is known that no coincidence theorem can hold for $n$-linear maps in this class if there is no analogue for linear operators. So, even having a nice size, this class has no artificial coincidence results.

\section{Desired properties for a nice multi-ideal extension of absolutely summing operators}

In \cite{port, CD} it is shown that
\begin{align*}
\mathcal{L}_{d,p} & \subset\mathcal{L}_{si,p}\subset\mathcal{L}_{m,p}\subset\mathcal{L}_{as,p}^{ev}\subset\mathcal{L}_{as,p}.\\
\mathcal{L}_{d,p} & \subset\mathcal{L}_{si,p}\subset\mathcal{L}_{ss,p}\subset\mathcal{L}_{sm,p}.\\
\mathcal{L}_{d,p} & \subset\mathcal{L}_{si,p}\subset\mathcal{L}_{m,p}\subset\mathcal{L}_{sm,p}.
\end{align*}
It is not difficult to show that $\mathcal{L}_{d,p}$ is cud and csm, and it is well known that the Dvoretzky-Rogers Theorem is true for this class, and that a Pietsch Domination Theorem (and, of course, the inclusion theorem) holds. On the other hand, as we have mentioned before, this class is small and the Grothendieck Theorem is not true. As the above chains of inclusions show, the class $\mathcal{L}_{sm,p}$ is much bigger, and from \cite{port} we know that this class is cud and csm, and that the Dvoretzky-Rogers Theorem and Grothendieck's $\ell_{1}$-$\ell_{2}$ Theorem are valid.
More generally, this class contains the better-known class of multiple summing multilinear operators and hence it inherits all the known coincidence theorems for the class of multiple summing operators. In some sense, it is natural to expect that all reasonable multilinear extensions $\mathcal{M}=(\mathcal{M}_{p})_{p\geq1}$ of the ideal of absolutely summing operators should satisfy $\mathcal{L}_{d,p}\subset\mathcal{M}_{p}\subset\mathcal{L}_{sm,p}.$ Below we list the properties of each class:
\[
\begin{array}{l|ccccccc}
\text{Property / Class} & \mathcal{L}_{d,p} & \mathcal{L}_{si,p} & \mathcal{L}_{ss,p} & \mathcal{L}_{m,p} & \mathcal{L}_{sm,p} & \mathcal{L}_{as,p} & \mathcal{L}_{as,p}^{ev}\\
\hline
\text{cud} & \text{Yes} & \text{Yes} & \text{\textbf{Yes}} & \text{Yes} & \text{Yes} & \text{No} & \text{Yes}\\
\text{csm} & \text{Yes} & \text{Yes} & \text{\textbf{Yes}} & \text{Yes} & \text{Yes} & \text{Yes} & \text{Yes}\\
\text{Grothendieck Theorem} & \text{No} & \text{No} & \text{\textbf{Yes}} & \text{Yes} & \text{Yes} & \text{Yes} & \text{Yes}\\
\text{Inclusion Theorem} & \text{Yes} & \text{Yes} & \text{\textbf{Yes}} & \text{No} & ? & \text{No} & \text{No}\\
\text{Dvoretzky-Rogers Theorem} & \text{Yes} & \text{Yes} & \text{\textbf{Yes}} & \text{Yes} & \text{Yes} & \text{No} & \text{Yes}\\
\mathcal{L}_{d,p}\subset\cdot\subset\mathcal{L}_{sm,p} & \text{Yes} & \text{Yes} & \text{\textbf{Yes}} & \text{Yes} & \text{Yes} & \text{No} & ?
\end{array}
\]
Taking into account the main properties of the linear ideal of absolutely summing operators, we propose the following concept of \textquotedblleft desired generalization of $(\Pi_{p})_{p\geq1}$\textquotedblright:

\begin{definition}
A family of normed ideals of multilinear mappings $(\mathcal{M}_{p})_{p\geq1}$ is a desired generalization of $(\Pi_{p})_{p\geq1}$ if

\begin{itemize}
\item (i) $\mathcal{L}_{d,p}\subset\mathcal{M}_{p}\subset\mathcal{L}_{sm,p}$ for all $p$ and the inclusions have norm $\leq1.$

\item (ii) $\mathcal{M}_{p}$ is csm for all $p.$

\item (iii) $\mathcal{M}_{p}$ is cud for all $p.$

\item (iv) $\mathcal{M}_{p}\subset\mathcal{M}_{q}$ whenever $p\leq q.$

\item (v) Grothendieck's Theorem and a Dvoretzky-Rogers theorem are valid.
\end{itemize}
\end{definition}

First, observe that if $\mathcal{L}_{d,p}\subset\mathcal{M}_{p}\subset\mathcal{L}_{sm,p}$ then a Dvoretzky-Rogers theorem is valid. Note that, according to the above table, $(\mathcal{L}_{ss,p})_{p\geq1}$ is a desired generalization of the family $(\Pi_{p})_{p\geq1}$. A desired generalization will be called a \textquotedblleft desired family\textquotedblright.

\begin{definition}
A desired family $\mathcal{M}=(\mathcal{M}_{p})_{p\geq1}$ is maximal if whenever $\mathcal{M}_{p}\subset\mathcal{I}_{p}$ for all $p$ $($and the inclusion has norm $\leq1)$ and $(\mathcal{I}_{p})_{p\geq1}$ is a desired family, then $\mathcal{M}_{p}=\mathcal{I}_{p}$ for all $p.$ Similarly, a desired family $\mathcal{M}=(\mathcal{M}_{p})_{p\geq1}$ is minimal if whenever $\mathcal{I}_{p}\subset\mathcal{M}_{p}$ for all $p$ $($and the inclusion has norm $\leq1)$ and $(\mathcal{I}_{p})_{p\geq1}$ is a desired family$,$ then $\mathcal{M}_{p}=\mathcal{I}_{p}$ for all $p.$
\end{definition}

\begin{theorem}
There exists a desired family of multilinear mappings which is maximal.
\end{theorem}

\begin{proof}
Let
\[
D=\left\{ \mathcal{M}^{\lambda}=(\mathcal{M}_{p}^{\lambda})_{p\geq1}:\lambda\in\Lambda\right\}
\]
be the collection of all desired families. In $D$ we consider the partial order
\begin{equation}
\mathcal{M}^{\lambda_{1}}\leq\mathcal{M}^{\lambda_{2}}\Leftrightarrow\mathcal{M}_{p}^{\lambda_{1}}\subseteq\mathcal{M}_{p}^{\lambda_{2}}\ \text{and}\ \left\Vert \cdot\right\Vert _{\mathcal{M}_{p}^{\lambda_{2}}}\leq\left\Vert \cdot\right\Vert _{\mathcal{M}_{p}^{\lambda_{1}}}\text{ for all }p\geq1. \label{ulty}
\end{equation}
Note that $D\neq{\varnothing}$ since $(\mathcal{L}_{ss,p})_{p\geq1}\in D.$ We just need to show that Zorn's Lemma is applicable in order to yield the existence of a maximal family.

If $O\subset D$ is totally ordered and $\Lambda_{O}=\{\lambda\in\Lambda:\mathcal{M}^{\lambda}=(\mathcal{M}_{p}^{\lambda})_{p\geq1}\in O\}$, consider the class
\[
\mathcal{U}=(\mathcal{U}_{p})_{p\geq1},
\]
where, for each $p\geq1$, $\mathcal{U}_{p}=\bigcup\limits_{\lambda\in\Lambda_{O}}\mathcal{M}_{p}^{\lambda}$. In $\Lambda_{O}$ we consider the direction
\begin{equation}
\lambda_{1}\leq\lambda_{2}\Leftrightarrow\mathcal{M}^{\lambda_{1}}\leq\mathcal{M}^{\lambda_{2}} \label{ulty2}
\end{equation}
and, for each $p\geq1$, define
\begin{equation}
\left\Vert T\right\Vert _{\mathcal{U}_{p}}:=\lim\limits_{\lambda\in\Lambda_{O}}\left\Vert T\right\Vert _{\mathcal{M}_{p}^{\lambda}}. \label{nor}
\end{equation}
Note that the above limit exists in view of (\ref{ulty}) and (\ref{ulty2}). It is not difficult to show that $\left( \mathcal{U}_{p}\left( E_{1},...,E_{n};F\right) ,\left\Vert \cdot\right\Vert _{\mathcal{U}_{p}}\right) $ is a normed space, for each $E_{1},...,E_{n},F.$ Moreover, $\left( \mathcal{U}_{p},\left\Vert \cdot\right\Vert _{\mathcal{U}_{p}}\right) $ is a normed ideal and one can quickly verify that $\left( \mathcal{U}_{p},\left\Vert \cdot\right\Vert _{\mathcal{U}_{p}}\right) _{p\geq1}$ is a desired family. So $\mathcal{U}=(\mathcal{U}_{p})_{p\geq1}\in D$ and $\mathcal{U}\geq\mathcal{M}$ for all $\mathcal{M}\in O$, that is, $\mathcal{U}$ is an upper bound for the chain $O$; hence Zorn's Lemma yields that $D$ has a maximal element.
\end{proof}

We also have:

\begin{theorem}
\label{minimal} There exists a desired family of multilinear mappings which is minimal.
\end{theorem}

\begin{proof}
Consider the set $D$ as in the proof of the above theorem and the partial order
\[
\mathcal{M}^{\lambda_{2}}\leq\mathcal{M}^{\lambda_{1}}\Leftrightarrow\mathcal{M}_{p}^{\lambda_{1}}\subseteq\mathcal{M}_{p}^{\lambda_{2}}\ \text{and}\ \left\Vert \cdot\right\Vert _{\mathcal{M}_{p}^{\lambda_{2}}}\leq\left\Vert \cdot\right\Vert _{\mathcal{M}_{p}^{\lambda_{1}}}\text{ for all }p\geq1.
\]
Let also $O\subset D$ and $\Lambda_{O}$ be as before. Define
\[
\mathcal{I}=(\mathcal{I}_{p})_{p\geq1}
\]
where, for all $p\geq1$, $\mathcal{I}_{p}=\bigcap\limits_{\lambda\in\Lambda_{O}}\mathcal{M}_{p}^{\lambda}$. Note that, for all $p\geq1$, if $T\in\mathcal{I}_{p}\left( E_{1},...,E_{n};F\right) ,$ then
\begin{equation}
\left\Vert T\right\Vert _{\mathcal{I}_{p}}=\lim_{\lambda\in\Lambda_{O}}\left\Vert T\right\Vert _{\mathcal{M}_{p}^{\lambda}} \label{normini}
\end{equation}
defines a norm in $\mathcal{I}_{p}\left( E_{1},...,E_{n};F\right) $.
In fact, from our hypotheses, for each $p\geq1$, the inclusion \[ inc:\mathcal{L}_{d,p}(E_{1},...,E_{n};F)\rightarrow\mathcal{M}_{p}^{\lambda }(E_{1},...,E_{n};F) \] has norm $\leq1$ for all $\lambda\in\Lambda_{O}.$ So, it follows that $\left\{ \left\Vert T\right\Vert _{\mathcal{M}_{p}^{\lambda}}:\lambda \in\Lambda_{O}\right\} $ is bounded from above by $\left\Vert T\right\Vert _{d,p}$, and so the limit in $\left( \ref{normini}\right) $ exists$.$ Moreover, for all $p\geq1$, the inclusion \[ inc:\mathcal{M}_{p}^{\lambda}(E_{1},...,E_{n};F)\rightarrow\mathcal{L} _{sm,p}(E_{1},...,E_{n};F) \] has norm $\leq1$ for every$\ \lambda\in\Lambda_{O}$. Hence \[ \left\Vert T\right\Vert _{\mathcal{M}_{p}}\geq\left\Vert T\right\Vert _{sm,p}\geq0\ \text{for all}\ T\in\mathcal{M}_{p}^{\lambda}\left( E_{1},...,E_{n};F\right) \] and so \[ \left\Vert T\right\Vert _{\mathcal{I}_{p}}=0\text{ if and only if }T=0. \] The other properties are easily verified and hence $\left( \mathcal{I} _{p}\left( E_{1},...,E_{n};F\right) ,\left\Vert \cdot\right\Vert _{\mathcal{I}_{p}}\right) $ is a normed space for all $E_{1},...,E_{n},F$. The rest of the proof follows the lines of the previous proof. \end{proof} \section{Open Problems} From the previous section two open problems arise: \begin{problem} Is $(\mathcal{L}_{sm,p})_{p\geq1}$ a desired ideal?(we conjecture that it is not) If the answer is positive, it will be maximal. \end{problem} \begin{problem} Is $(\mathcal{L}_{ss,p})_{p\geq1}$ a maximal or minimal desired ideal? \end{problem} The answer to the next problem seems to be \textquotedblleft NO\textquotedblright, but to the best of our knowledge, it is an open problem: \begin{problem} Is $(\mathcal{L}_{ss,p})_{p\geq1}=(\mathcal{L}_{sm,p})_{p\geq1}$? \end{problem} For the next problem that we will propose, we need to define a quite artificial multilinear version of absolutely summing operators. Let $n$ be a positive integer, $k\in\{1,...,n\}$ and $p\geq1$. An $n$-linear operator $T\in\mathcal{L}(E_{1},...,E_{n};F)$ is $k$-absolutely $p$--summing if there is a constant $C\geq0$ so that \begin{equation} \left( {\displaystyle\sum\limits_{j_{k}=1}^{\infty}} \left\Vert T(x_{j_{1}},...,x_{j_{k}},...,x_{j_{n}})\right\Vert ^{p}\right) ^{1/p}\leq C\left\Vert x_{j_{1}}\right\Vert \ldots\left\Vert \left( x_{j_{k} }\right) _{j_{k}=1}^{\infty}\right\Vert _{w,p}\ldots\left\Vert x_{j_{n} }\right\Vert \label{as} \end{equation} for all $x_{j_{i}}\in E_{i}$ with $i=1,...,k-1,k+1...,n$ and all $\left( x_{j_{k}}\right) _{j_{k}=1}^{\infty}$ $\in l_{p}^{w}\left( E_{k}\right) .$ In this case we write $T\in\mathcal{L}_{s,p}^{k}(E_{1},...,E_{n};F).$ The infimum of the $C$ satisfying $\left( \ref{as}\right) $ defines a norm for $\mathcal{L}_{s,p}^{k}(E_{1},...,E_{n};F),$ denoted by $\left\Vert \cdot\right\Vert _{ks,p}.$ These maps were essentially introduced in \cite{port} as an \ example of an artificial generalization of absolutely summing operators. Now, consider the new class, $\mathcal{L}_{qs,p}$, whose components we will call quasi-absolutely $p$-summing: \[ \mathcal{L}_{qs,p}(E_{1},...,E_{n};F):=\bigcap\limits_{k=1}^{n}\mathcal{L} _{s,p}^{k}(E_{1},...,E_{n};F). \] Defining \begin{equation} \left\Vert T\right\Vert _{qs,p}:=\max_{k}\left\Vert T\right\Vert _{ks,p}, \label{nor1} \end{equation} we get a norm for $\mathcal{L}_{qs,p}(E_{1},...,E_{n};F)$ and $(\mathcal{L} _{qs,p},\left\Vert .\right\Vert _{qs,p})$ is a Banach ideal. 
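To make the definition more concrete, it may help to spell out the lowest case explicitly (this is only a restatement of the definition above in the bilinear setting). For $n=2$ and $k=1$, an operator $T\in\mathcal{L}(E_{1},E_{2};F)$ belongs to $\mathcal{L}_{s,p}^{1}(E_{1},E_{2};F)$ precisely when there is $C\geq0$ such that
\[
\left( {\displaystyle\sum\limits_{j=1}^{\infty}} \left\Vert T(x_{j},y)\right\Vert ^{p}\right) ^{1/p}\leq C\left\Vert \left( x_{j}\right) _{j=1}^{\infty}\right\Vert _{w,p}\left\Vert y\right\Vert
\]
for all $\left( x_{j}\right) _{j=1}^{\infty}\in l_{p}^{w}\left( E_{1}\right)$ and all $y\in E_{2}$; in other words, for each fixed $y$ the linear operator $T(\cdot,y):E_{1}\rightarrow F$ is absolutely $p$-summing with $\pi_{p}\left( T(\cdot,y)\right) \leq C\left\Vert y\right\Vert$, where $\pi_{p}$ denotes the absolutely $p$-summing norm. Thus $T\in\mathcal{L}_{qs,p}(E_{1},E_{2};F)$ simply means that $T$ is absolutely $p$-summing separately in each variable, uniformly on the unit ball of the remaining variable.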
The ideal $(\mathcal{L}_{qs,p},\left\Vert .\right\Vert _{qs,p})$ is not interesting in itself, since it is just the linear ideal in nonlinear disguise. So it would be desirable to show that this class is not a desired family. In order to do this it is necessary to answer the following problem:

\begin{problem}
Is $(\mathcal{L}_{sm,p})_{p\geq1}=(\mathcal{L}_{qs,p})_{p\geq1}?$
\end{problem}

Note that it is plain that $\mathcal{L}_{sm,p}\subset\mathcal{L}_{qs,p}$ and the inclusion has norm $\leq1$. Any answer to the above problem will lead to very important conclusions:

- If the answer to the above problem is YES (we conjecture it is not), then we obtain several significant pieces of information: (i) the equality is nontrivial and the result will be interesting in its own right; (ii) we conclude that $(\mathcal{L}_{sm,p})_{p\geq1}$ is a (maximal) desired family and, more importantly, that $(\mathcal{L}_{sm,p})_{p\geq1}$ indeed possesses very nice properties. For example, besides the Inclusion Theorem (which was unknown for this class), every linear coincidence situation $\Pi_{p}(E;F)=\mathcal{L}(E;F)$ extends naturally to $\mathcal{L}_{qs,p}(^{n}E;F)=\mathcal{L}(^{n}E;F)$, so we would also have $\mathcal{L}_{sm,p}(^{n}E;F)=\mathcal{L}(^{n}E;F)$; with all this information in hand, it would be natural to consider $(\mathcal{L}_{sm,p})_{p\geq1}$ as a better generalization of $\left( \Pi_{p}\right) _{p\geq1}$ than $(\mathcal{L}_{m,p})_{p\geq1}.$

- If the answer to the above problem is NO (which we conjecture), then we conclude that $(\mathcal{L}_{qs,p})_{p\geq1}$ is not a desired family, a reasonable situation, since the class $(\mathcal{L}_{qs,p})_{p\geq1}$ is artificially constructed.
\end{document}
\begin{document} \title{The GEO\,600 squeezed light source} \author{Henning Vahlbruch, Alexander Khalaidovski, Nico Lastzka, Christian Gr\"af, Karsten Danzmann, and Roman Schnabel} \address{Institut f\"ur Gravitationsphysik of Leibniz Universit\"at Hannover and Max-Planck-Institut f\"ur Gravitationsphysik (Albert-Einstein-Institut), Callinstr. 38, 30167 Hannover, Germany} \ead{[email protected]} \begin{abstract} The next upgrade of the GEO\,600 gravitational wave detector is scheduled for 2010 and will, in particular, involve the implementation of squeezed light. The required non-classical light source is assembled on a 1.5\,m$^2$ breadboard and includes a full coherent control system and a diagnostic balanced homodyne detector. Here, we present the first experimental characterization of this setup as well as a detailed description of its optical layout. A squeezed quantum noise of up to 9\,dB below the shot-noise level was observed in the detection band between 10\,Hz and 10\,kHz. We also present an analysis of the optical loss in our experiment and provide an estimation of the possible non-classical sensitivity improvement of the future squeezed light enhanced GEO\,600 detector. \end{abstract} \section{Introduction} Photon shot-noise is a limiting noise source in laser interferometric gravitational wave (GW) detectors. The signal to shot-noise ratio can be improved by increasing the laser power. For this reason the planned Advanced LIGO detectors are designed to store about a megawatt of optical power inside the interferometer arms \cite{SMITH09}. At such high laser powers thermally induced optical waveform distortion due to light absorption and the excitation of parasitic instabilities might become an issue \cite{Dambr03, Shaug04}. Alternatively, the signal to shot-noise ratio can also be improved by `squeezing' the shot-noise as proposed by Caves in 1981 \cite{Cav81}. In this case, the laser power inside the interferometer is not increased. In order to squeeze the shot-noise of a Michelson interferometer that is operated close to a dark fringe, squeezed (vacuum) states of light have to be injected into the signal output port. A squeezed state is a quantum state whose uncertainty in one of the field quadratures is reduced compared to the vacuum state, while the noise in the conjugate quadrature is increased. Later it was realized that squeezed states of light can also be used to reduce the overall quantum noise in interferometers including radiation pressure noise, thereby beating the standard-quantum-limit (SQL) \cite{Unruh82,JRe90}. Theoretical analysis of Gea-Banacloche and Leuchs \cite{GLe87} and Harms \textit{et al.}\,\cite{HCCFVDS03,SHSD04} suggested that squeezing is broadly compatible with interferometer recycling techniques \cite{DHKHFMW83pr,Mee88} thereby further promoting the application of squeezed states in GW-detectors. The first observation of squeezed states was done by Slusher \textit{et al.}\,\cite{SHYMV85} in 1985. Since then different techniques for the generation of squeezed light have evolved. One of the most successful approaches for squeezed light generation is optical parametric amplification (OPA), also called parametric down-conversion, based on second-order nonlinear crystals. Common materials like MgO:LiNbO$_3$ or periodicly poled potassium titanyl phosphate (PPKTP) can be used to produce squeezing at the carrier wavelength of today's GW-detectors operating at 1064 nm. 
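For orientation, a simple loss-free estimate illustrates the benefit (the following numbers are generic and not specific to any particular detector). A squeezed vacuum that reduces the quantum noise variance by $S$ decibels lowers the shot-noise power spectral density by a factor of $10^{-S/10}$ and thus improves the shot-noise-limited signal-to-noise ratio as much as increasing the circulating laser power by the same factor; for example, 6\,dB of detected squeezing is equivalent to a four-fold power increase. In the presence of optical loss, modeled as a single beam splitter with power transmission $\eta$, an initially pure squeezed state with normalized variance $V_{0}<1$ is degraded to
\[
V=\eta V_{0}+(1-\eta),
\]
so the achievable noise suppression is ultimately limited by the total loss $1-\eta$ rather than by $V_{0}$.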
Ground based detectors require a broadband squeezed field in the detection band from about 10 Hz up to 10 kHz. Squeezed states were combined in table top experiments with interferometer recycling techniques \cite{KSMBL02,VCHFDS05}, were demonstrated at audio frequencies \cite{MGBWGCL04, MMGLGGMC05, VCHFDS06, VCDS07}, and were tested on a suspended GW prototype interferometer \cite{GODA08}. A coherent control scheme for the generation and stable control of squeezed vacuum states was also developed for the application in GW-detectors \cite{VCHFDS06, CVDS07}. The progress achieved, now provides all the techniques for a first squeezed light upgrade of a large-scale signal-recycled gravitational wave detector at shot-noise limited frequencies as envisaged in \cite{SHSD04}. In this paper we present a detailed description of the recently assembled GEO\,600 squeezed light source. The optical layout and also elements of the control scheme are presented. Our first measurements demonstrate up to 9\,dB squeezing over the complete detection bandwidth of ground-based GW-detectors. \begin{figure} \caption{Schematic of the complete optical setup of the GEO\,600 squeezed light source including control lasers and diagnostic tools. Squeezed states of light at audio-band Fourier frequencies around the carrier frequency $\omega_0$ are produced inside a nonlinear squeezing resonator via parametric down-conversion. The squeezing resonator contains a birefringent nonlinear crystal that is pumped with a green laser beam at 532\,nm. Two auxiliary laser beams are also focussed into the crystal and serve as control beams for piezo-electric length stabilization of the cavity and the green pump phase. A balanced homodyne detector was used to characterize the squeezed light output.} \label{GeneralSetup} \end{figure} \section{Experimental} The schematic of the optical setup for the GEO\,600 squeezed light source is shown in Figure \ref{GeneralSetup}. For clarity optical components like lenses, wave plates and steering mirrors etc. are omitted. The optical setup is partly based on earlier experiments presented in \cite{VCHFDS06}-\nocite{VCDS07, CVDS07}\cite{VMCHFLGDS08}. The device has been set up in a class 100 cleanroom in order to avoid dust particles and light scattering. Most optical components like steering mirrors and lenses have super-polished surfaces. The custom-made breadboard of the GEO\,600 squeezed light source has the dimensions of 135\,cm x 113\,cm. A thickness of about 5\,cm (2 inches) in combination with a steel bottom plate and an aluminum inner structure and top plate was chosen to provide a high mechanical stability at a reasonable weight (approximately 70\,kg). The dimensions of the breadboard are adapted to the available space on the GEO\,600 detection bench where the squeezed light source will be operated. The now fully assembled squeezed light source, as shown in Fig.~\ref{GEOSqueezerBreadboard}, has a weight of about 130\,kg. \begin{figure} \caption{Photograph showing the housing of the nonlinear squeezing resonator. Nonlinear crystal, peltier and piezo-elements as well as the out-coupling mirror are embedded in a compact quasi-monolithic design for a high intrinsic mechanical stability.} \label{Ofen} \end{figure} \subsection{Squeezed-light laser resonator} The squeezed-light laser resonator (`squeezing resonator' in short) was set up as a standing-wave hemilithic cavity containing a plano-convex PPKTP crystal of about 10\,mm length. 
The convex crystal surface is high-reflectivity coated for the fundamental and second-harmonic wavelengths at 1064\,nm and 532\,nm, respectively. The planar crystal surface is anti-reflection coated for both wavelengths. The coupling mirror of the standing-wave cavity is a piezo-actuated external mirror with a power reflectivity of R=92\,\% at 1064\,nm and is placed in front of the planar crystal surface at a distance of approximately 20\,mm. The resonator is singly resonant and has a Finesse of about 75 at 1064\,nm. The PPKTP crystal is temperature stabilized to the phase matching temperature of the fundamental, down-converted squeezed (vacuum) field and the second harmonic pump field. Figure \ref{Ofen} shows a photograph of the quasi-monolithic squeezing resonator housing accommodating the nonlinear crystal, peltier and piezo elements as well as the out-coupling mirror. \subsection{Diagnostic homodyne detector} The GEO\,600 squeezed light source features an integrated diagnostic balanced homodyne detector (BHD) which could be used to characterize the performance of the squeezed light decoupled from the GEO\,600 interferometer. The required local oscillator beam of about 500\,$\mu$W at 1064\,nm is picked off from the main Nd:YAG laser (2 W, \textit{Mephisto} from Innolight) and is injected into a spatial mode cleaning traveling-wave resonator. This cavity is held on resonance using the PDH-technique at a RF-modulation frequency of 76.5\,MHz with a control bandwidth of 10\,kHz. The transmitted beam interferes with the squeezed beam on a 50/50 beamsplitter with a fringe visibility of 98.6\,\%. Each beamsplitter output field is detected with a single high quantum efficiency photodiode. Both photo currents are subtracted from each other before electronic signal amplification. \begin{figure} \caption{Photograph of the GEO\,600 squeezed light source. The breadboard dimensions are 135\,cm x 113\,cm. The three Nd:YAG Lasers are located on the upper left, at the bottom left the squeezing resonator and on the bottom right the homodyne detector with its covering box is shown. The total weight of the complete system is approximately 130\,kg.} \label{GEOSqueezerBreadboard} \end{figure} \subsection{Preparation of pump and control laser fields} The GEO\,600 squeezed light source requires altogether four different laser frequencies as illustrated in Figure \ref{GeneralSetup}. A main laser at 1064\,nm provides the optical reference frequency $\omega_0$. Two more laser fields at about 1064\,nm have frequency offsets of more than 10\,MHz with respect to $\omega_0$ and serves as optical control fields. The fourth laser provides the second-harmonic pump field for the parametric down-conversion process at precisely $2\omega_0$. All four laser fields are mode-matched and injected into the squeezing resonator. We now discuss the preparation and purpose of the four laser fields in more detail. \textit{Main local oscillator beam at fundamental frequency $\omega_0$} - The main laser source at frequency $\omega_0$ will finally be phase locked to the GEO\,600 laser source. This main laser provides the input field for the second harmonic generator and also serves as a local oscillator beam for homodyne detection of the generated squeezed vacuum field. Additionally, a small fraction of this laser beam is used for the initial alignment of the squeezing resonator. 
Subsequently this resonator's length is stabilized using the light transmitted through the resonator, which is detected on the photodiode PD$_{Alignment}$ (see Figure \ref{GeneralSetup}). An RF-demodulation scheme delivers a squeezing resonator length error signal, which is fed back onto the piezo-actuated cavity out-coupling mirror. If an intense alignment beam (about 100\,mW) is injected, a sufficient amount of infrared photons at 1064\,nm is frequency doubled, which allows the green pump path to be aligned conveniently in the counter-propagating direction. This counter-propagating green field (with respect to the pump beam) is monitored with either the photodiode PD$_{PumpBeam}$ or PD$_{MC532trans}$. This offers a convenient alignment procedure for this beam path, which could otherwise only be adjusted by measuring the parametric gain inside the squeezed light source. This latter method can be quite imprecise and time-consuming. Nevertheless, the parametric gain can alternatively be monitored using the DC-output of the photodetector PD$_{Alignment}$. Please note that during the use of this alignment beam only squeezed states at Fourier frequencies in the MHz regime can be generated, due to the technical laser noise carried by this beam at lower Fourier frequencies. Therefore, the alignment beam was switched off after the described alignment procedure.

\textit{Second harmonic pump field} - The main laser is frequency doubled by employing a second harmonic generator (SHG). This SHG produces the necessary pump field for the squeezing resonator at the frequency $2\omega_0$. The design of the SHG is almost identical to the squeezed light source except that the nonlinear medium was MgO:LiNbO$_3$ instead of PPKTP. About 35\,mW of pump power is required to achieve an appropriate parametric (de-)amplification factor. After being generated in the SHG, the pump field is guided to a ring cavity. This mode cleaner has a Finesse of 555 and a linewidth (FWHM) of 1.3\,MHz. The purpose of this resonator is to attenuate high-frequency phase noise, which is present on the green beam due to the RF phase modulation used for locking the SHG cavity length. It has been shown that phase noise on the green pump beam can degrade the squeezing strength \cite{Furus07,FHDFS06}. Utilizing the photodiode PD$_{MC532}$, an error signal for the cavity length control is generated via the Pound-Drever-Hall technique at a modulation frequency of 120\,MHz, which is much higher than the cavity linewidth.

\textit{Squeezing resonator length control beam} - A second NPRO-laser source (200\,mW, Innolight Mephisto OEM Product Line), which is frequency locked to the main laser on the squeezing breadboard, serves for the cavity length control of the squeezed light source. Since the orthogonal polarization (p-polarization) is used for locking the cavity length, no technical laser noise is introduced into the squeezed beam at the fundamental frequency ($\omega_0$, s-polarization). Due to the birefringence of the nonlinear crystal inside the squeezed light source, however, this coherent control beam has to be frequency shifted to be simultaneously resonant with the generated s-polarized squeezed field. The frequency offset was determined to be approximately 12.6\,MHz. This was measured while injecting both the alignment beam at frequency $\omega_0$ and the frequency-shifted control beam. While scanning the cavity length, both orthogonally polarized TEM$_{00}$ Airy-peaks had to be overlapped.
For monitoring the simultaneously resonant Airy-peaks, the photodetector PD$_{Alignment}$ was used. \textit{Squeezing angle control beam} - A third NPRO-laser (again with an output power of 200\,mW) is set up for coherent control of the squeezing ellipse orientation with respect to the diagnostic homodyne detector. For this it is sufficient to inject only 25$\mu$W into the squeezed light source cavity in order to generate clear error signals. This auxiliary laser is again frequency locked to the main laser using a PLL with an offset frequency of 15.2\,MHz. Operating on the GEO site, this coherent control field will finally be used to stabilize the phase relation between the squeezed vacuum field and the GEO\,600 interferometer signal output field. Furthermore, this coherent control field will be used to set up an auto alignment system for the squeezed beam into the interferometer. Two locking loops are set up for full coherent control of the squeezed vacuum beam using this coherent control beam. The first loop stabilizes the relative phase between the injected control beam and the green pump field. To this end, an error signal is obtained by detecting a fraction of the (frequency shifted) light, which is back-reflected from the squeezed light source. Thus the photocurrent of the photodetector PD$_{GreenPhase}$ has to be demodulated at \emph{twice} the offset frequency \cite{VCHFDS06,VCDS07}. The feedback is applied to a phase shifter device in the green pump path with a unity gain frequency of 6\,kHz. A second control loop is set up in order to stabilize the phase relation between the squeezed vacuum field and the local oscillator beam of the homodyne detector. The error signal is extracted from the subtracted photocurrents of both homodyne photo diodes via RF-demodulation at the coherent control beam offset frequency. A phase shifter in the local oscillator beam path serves as an actuator. The locking bandwidth of this control loop is about 10\,kHz. \section{Results and discussion} \begin{figure} \caption{Quantum noise measurements performed with a balanced homodyne detector (BHD) using a 500\,$\mu$W local oscillator power. Trace (a) constitutes the shot-noise (vacuum noise) reference of the BHD, measured with the squeezed light input blocked. Trace (b) shows the observed squeezed quantum noise from our source. A nonclassical noise suppression of up to 9\,dB below shot-noise (a) was measured throughout the complete spectrum from 10\,Hz up to 10\,kHz. The corresponding anti-squeezing (c) was 14\,dB above the shot-noise level. The electronic dark noise (not shown) was 17\,dB below the shot-noise and was not subtracted from the measured data. The peaks at 50\,Hz and 100\,Hz were due to the electric mains supply.} \label{SqueezingMeasurement} \end{figure} In Figure \ref{SqueezingMeasurement}, quantum noise measurements of the diagnostic balanced homodyne detector in the frequency band from 10\,Hz up to 10\,kHz are shown. Trace (a) represents a shot-noise measurement performed with the signal input port blocked of the homodyne detector. The local oscillator intensity was 500\,$\mu$W throughout all the measurements. This LO power delivered a large dark noise clearance of 17\,dB. When the squeezed states from our source are mode-matched into the homodyne detector signal input port the noise level changes according to the quantum noise variance of these states. Trace (b) shows the variance of the squeezed quantum noise and trace (c) the variance of the anti-squeezed noise. 
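These two traces can be compared against a simple single-loss-port model (the following is only an illustrative consistency check): if the initially pure squeezed state has normalized quadrature variances $V_{0}$ and $1/V_{0}$ and experiences a total power loss of $1-\eta$, the detected variances are $V_{\mathrm{sq}}=\eta V_{0}+(1-\eta)$ and $V_{\mathrm{anti}}=\eta/V_{0}+(1-\eta)$. With $\eta\approx0.9$ and $V_{0}\approx0.036$ (roughly 14\,dB of initial squeezing) this model yields $V_{\mathrm{sq}}\approx0.13$ (about 9\,dB below shot-noise) and $V_{\mathrm{anti}}\approx25$ (about 14\,dB above shot-noise), in line with the measured traces and with the loss estimate given below.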
In order to perform these measurements, the local oscillator phase was stabilized to either the squeezed or the anti-squeezed field quadrature of the signal input, respectively. For the upgrade of GEO\,600 one is obviously interested in the stabilization of the phase between the squeezed field and the GEO\,600 signal field in such a way that the finally detected quantum noise is squeezed. Nevertheless, a full characterization of the squeezed field is only possible through the measurement of both the squeezed and anti-squeezed quadrature field variances. The squeezing level achieved in the detection band from 10\,Hz to 10\,kHz is at least 8\,dB below the shot-noise level. At several frequencies, squeezing of up to 9\,dB is observed. The corresponding anti-squeezing level was measured to be 14\,dB above the shot-noise level over the entire bandwidth (trace (c)). From these measurements a total optical loss of approximately 10\,\% on the squeezed laser field can be derived. This loss value \textit{includes} the homodyne detection efficiency of approximately 95\,\%. Thus, if the squeezed states are directly guided into the signal output port of GEO\,600, the diagnostic homodyne detector loss can be subtracted, and more than 10\,dB squeezing will be injected into the GW-detector. In the following we estimate the expected nonclassical sensitivity improvement of the future squeezed-light enhanced GEO\,600 detector assuming that additional optical loss will reduce the squeezing strength. We consider frequencies at which GEO\,600 is currently shot-noise limited, i.e. above a few hundred Hertz. We estimate the additional optical loss for the squeezed field to 10\,\% up to 15\,\%. Here, we consider (i) a non-perfect mode matching to the GEO\,600 signal-recycling cavity, (ii) loss in the input Faraday isolator, (iii) non-perfect dielectric coatings of GEO\,600 optics, and (iv) the non-perfect photo-diode quantum efficiency. From these assumptions we conclude, that finally a 6\,dB nonclassical sensitivity improvement of GEO\,600 might be reachable. This number corresponds to a sensitivity improvement that is equivalent to an increase in laser power by a factor of 4, without however, the unwanted side-effects from a higher thermal load. \section{Conclusion} We presented a detailed description of the optical setup of the GEO\,600 squeezed light source and showed first measurement results. Up to 9\,dB of squeezing over the entire bandwidth of the earth-based gravitational wave detectors was demonstrated. To the best of our knowledge, this is the highest measured squeezing value at audio frequencies observed so far. Our result also belongs to the highest squeezing values ever measured. At radio frequencies (MHz) only recently slightly higher values between 9\,dB and 11.5 dB were reported \cite{VMCHFLGDS08,Furus07,Moritz11dB}. We have estimated the additional optical loss for the squeezed-light field when injected into GEO\,600 and come to the conclusion that a non-classical detector sensitivity improvement of 6\,dB might be possible for the shot-noise limited band of GEO\,600. After a long-term test the squeezed light source will be ready for the implementation in GEO\,600. \ack{This work has been supported by the Deutsche Forschungsgemeinschaft (DFG) through Sonderforschungsbereich 407 and the Centre for Quantum Engineering and Space-Time Research QUEST.} \section*{References} \end{document}
\begin{document}
\title{\bf Weak measurement reveals super ergotropy}
\author{ E. Faizi\thanks {E-mail:[email protected]} \hspace{1mm}and M. A. Balkanlu \\ {\small Physics Department, Azarbaijan Shahid Madani University, Tabriz, Iran}}\maketitle
\begin{abstract}
\noindent The concept of "ergotropy" has previously been introduced as the maximum work that can be extracted from a quantum state. Its enhancement induced by quantum correlations via projective measurement was formulated as the 'daemonic ergotropy'. In this paper, we investigate the ergotropy in the presence of quantum correlation via weak measurement, because of its gentle effect on the measured system. In a bipartite correlated quantum system composed of a main and an ancillary system, we show that the work extractable by a non-selective weak measurement on the ancilla is always equal to that obtained with the strong (projective) measurement. In contrast, the selective weak measurement can reveal more work than the daemonic ergotropy. The ergotropy of the total system is also greater than or equal to the daemonic ergotropy. We show that for Bell-diagonal states, at the cost of losing quantum correlation, the total extractable work, and thus the non-local extractable work, can be increased by means of measurement. We also show that for these cases there is no direct relationship between quantum correlation and non-local extractable work. \\ \\ \\ {\bf Keywords:} ergotropy, weak measurement, quantum correlation, non-local extractable work, quantum thermodynamics.
\end{abstract}
\section{Introduction}
In quantum thermodynamics, concepts such as heat, work, thermal equilibrium, temperature, entropy and heat transfer at the atomic scale need to be redefined and re-examined. Quantum thermodynamic concepts of heat and work were first defined by R. Alicki $\cite{Alicki}$, where the changes in the local Hamiltonian and the state of a system are associated with work and heat, respectively. Recently, new definitions have been proposed as “The change in the energy which is accompanied by a change in the entropy is identified as heat, while any change in the energy which does not lead to a change in the entropy is known as work.” $\cite{Ahmadi}$. For the case of an individual system it has been shown that, similarly to classical thermodynamics, the optimal extractable work is equal to its free energy change $\cite{Horodecki,Skrzypczyk}$. In general, work can only be extracted from a system whose initial state is non-passive. Thermodynamic passivity expresses that a quantum state's energy cannot be reduced by any reversible, unitary process acting on the system; such states are called passive states $\cite{Sparaciari,Pusz,Perarnau}$. However, several copies of a passive state can jointly become non-passive, which allows work to be extracted from them; if this is not possible, the state is called completely passive $\cite{Skrzypczyk2}$. It is worth mentioning that only the completely passive states are thermal states $\cite{Lenard}$. The maximal extractable work from a quantum non-passive state using a cyclic unitary transformation is defined as the ergotropy, where the unitary transformation is generated by time-dependent external potentials acting as work sources $\cite{Allahverdian}$. There are also ancilla-assisted work extraction protocols based on quantum correlations $\cite{Francica,Perarnau2,Gonzalo,Bera}$.
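As a minimal illustration of these notions, consider a single qubit with Hamiltonian $\hat{H}=\varepsilon_{0}|0\rangle\langle0|+\varepsilon_{1}|1\rangle\langle1|$ ($\varepsilon_{0}<\varepsilon_{1}$) prepared in the diagonal state $\hat{\rho}=q|1\rangle\langle1|+(1-q)|0\rangle\langle0|$. For $q>1/2$ the state is non-passive, and the cyclic unitary that swaps $|0\rangle$ and $|1\rangle$ extracts the ergotropy $W=(2q-1)(\varepsilon_{1}-\varepsilon_{0})$; for $q\leq1/2$ the state is passive and $W=0$, in agreement with the fact that a single thermal state yields no work.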
In a bipartite quantum system, the non-local effect on one party due to a measurement on the other is usually called quantum correlation. The non-local properties of quantum correlation make it a key element for extracting work from a global state consisting of local thermal states by means of nonlocal extraction operations. The nonlocal extractable work is defined as the difference between the total and the local extractable works. The work of G. Francica et al. shows that, in a closed bipartite quantum system composed of a main and an ancillary system, it is possible, thanks to quantum correlation, to reach more work than the ergotropy. In this scenario a non-interacting ancillary system is joined to the main system. Then a measurement described by a set of orthogonal projectors is implemented on the ancilla, followed by an ergotropic transformation of all the conditional states. This enhanced extractable work, the daemonic ergotropy, is the average ergotropy of the conditional states of the main system over all possible outcomes of the measurement on the ancillary system $\cite{Francica}$. The general belief that “the more information is obtained from a quantum system, the more its state is disturbed by measurement” has created a great deal of interest in finding a balance between the two $\cite{Banaszek1,Banaszek2,DAriano,Sciarrino,Sacchi}$. Performing a measurement on the system, which is necessary to observe it, disturbs the state of the system and thus its coherence. For instance, a strong measurement completely destroys coherence and irreversibly collapses the system state onto the eigenstates of its projectors $\cite{Nielsen}$. At the same time, protecting the unknown state of a quantum system from decoherence is a key requirement for quantum information processing. Recovering the initial state after a measurement is one kind of state protection. If the interaction between the system and the measuring device is weak, the system is only partially perturbed and its coherence is not completely destroyed. It is then possible to retrieve the initial state by undoing the effect of the measurement $\cite{Cheong}$. Such an observation of a quantum system is known as a weak measurement $\cite{Aharonov,Brun,Oreshkov,Alipour}$. It has recently been shown that weak measurement also helps protect quantum entanglement against decoherence, owing to its reversibility $\cite{Kim}$. In a bipartite quantum system, a weak measurement performed on one of the subsystems can reveal more quantum correlation than a strong measurement $\cite{Singh}$. In ancilla-assisted protocols, the measurement on the ancillary system is the critical instrument for extracting the enhanced work from the system. The reversibility of the system state under weak measurement motivated us to ask whether weak measurement can give a further gain in work extraction compared with projective measurement. In this work we therefore perform weak measurements instead of projective ones and compare the results. We restrict our attention to systems whose Hilbert space decomposes into two orthogonal subspaces, and consequently to measurements with two projectors onto these subspaces. The relationship between geometrically defined correlation and non-local extractable work has been confirmed for two-qubit pure and Bell-diagonal states $\cite{Kevin}$.
Therefore, we were interested in investigating the effect of projective measurement on total and consequently non-local extractable works and reexamine relationship between quantum correlation and non-local extractable work in Bell diagonal states. We showed that by performing a non-selective weak measurement on the ancillary system, the maximum extractable work is always equal to the daemonic ergotropy. That means we can reach the daemonic ergotropy by small change of the system and without losing its coherence completely (section III). Nevertheless, by using the selective weak measurement, one can reach work equal or greater than the daemonic ergotropy that we name “super ergotropy” . The internal energy of the system’s conditional states in ergotropic transformation associated with applying projective measurement operators on the ancilla play a control role in this strategy and determine which the of weak measurement’s operators must be used (section IV). We also showed that in Bell two-qubit diagonal states, due to measurement, non-local extractable work is maximized at the cost of loss of the quantum correlation (section V). \section{Ergotropy and daemonic ergotropy} In this section we are going to review the definitions of ergotropy and daemonic ergotropy. According to Ref. $\cite{Allahverdian}$, we assume a thermally isolated quantum system $S$ which is initially prepared in a non-passive state $\hat{\rho}(0)=\sum_{j} r_{j} |r_{j}\rangle\langle r_{j}| (r_{j}\geq r_{j+1}$ $ j=1,2,...) $ with respect to its Hamiltonian $\hat{H}=\sum_{k} \varepsilon_{k}|\varepsilon_{k}\rangle\langle \varepsilon_{k}| (\varepsilon_{k}\leq \varepsilon_{k+1}$ $ k=1,2,...) $ and can exchange work with a external macroscopic source. To extract the ergotropy, any arbitrary Hamiltonian perturbation $\hat{V}(t)$ is cyclically imposed on the system over a fixed period of time $[0,\tau]$. So under the action of this potential, the system undergoes a unitary transformation as $\hat{U}=\sum_{i} |\varepsilon_{i}\rangle\langle r_{i}|$ from $\hat{\rho}(0)$ to the final state $\hat{\rho}(\tau)$ with lowest final energy. then the system exchanges released energy as work with the external source. The ergotropy which is defined as the maximal extractable work from the state $\hat{\rho}(0)$ can be expressed as \begin{eqnarray} W=Tr[\hat{\rho}(0)\Hat{H}_{s}]-Tr[\hat{\rho}(\tau)\Hat{H}_{s}]=\sum_{j,k} \varepsilon_{j}r_{k}(|\langle\varepsilon_{j}|r_{k}\rangle|^2-\delta_{jk}). \end{eqnarray} The ergotropy is enhanced by using the concept of quantum correlation which is obtained in Ref. $\cite{Francica}$. The procedure is that, the system $S$ and a non-interacting ancillary system $A$ are prepared in a joint state\hspace{1mm}$\hat\rho_{SA}$. Then local measurement is done on the subsystem $A$ and finally ergotropic transformation is performed for all conditional density matrices $\hat\rho_{S|A}$. 
The optimal extracted work from the state $\hat\rho_{S}=Tr_{A}(\rho_{SA})=\sum_{k}r_{k}|r_{k}\rangle\langle r_{k}|$ of the system $S$ is as follows \begin{eqnarray} W_{\{\hat\Pi^A_a\}}=Tr[\hat{\rho}_{s}\Hat{H}_{s}]-\sum_{a} p_{a}Tr[\hat{U_{a}}\hat\rho_{S|a}\hat{U}^{\dagger}_{a}\hat{H}_s], \end{eqnarray} where $p_{a}=Tr[\hat\rho_{SA}\hat\Pi^A_a]$ is the probability of finding the system $S$ in the conditional state $ \hat\rho_{S|a}=Tr_{A}[\hat\Pi^A_a\hat\rho_{SA}\hat\Pi^A_a]/p_{a}$, $\hat{U_{a}}$ is the cyclic unitary process which transforms $\hat\rho_{S|a}$ in order to minimize its internal energy and ${\{\hat\Pi^A_a\}}$ is a set of orthogonal projectors of measurement. The extracted work $W_{\{\hat\Pi^A_a\}}$ which is known as $Daemonic Ergotropy$ is calculated as \begin{eqnarray} W_{\{\hat\Pi^A_a\}}=Tr[\hat{\rho}_{s}\Hat{H}_{s}]-\sum_{a} p_{a}\sum_{k}r^{a}_{k}\varepsilon_{j}, \end{eqnarray} where $r^{a}_{k}$ are the eigenvalues of $\hat\rho_{S|a}$. The state $\hat\rho_{S}$ can be written as $\hat\rho_{S}=\sum_{a}p_{a}\hat\rho_{S|a}=\sum_{a}p_{a}\sum_{k}r^{a}_{k}|r^{a}_{k}\rangle\langle r^{a}_{k}|$ which implies $r_{k}=\sum_{a}p_{a}\sum_{j}r^{a}_{j}|\langle r_{k}|r^{a}_{j}\rangle|^2$and finally the difference $W_{\{\hat\Pi^A_a\}}-W$ reads as \begin{eqnarray} W_{\{\hat\Pi^A_a\}}-W=\sum_{k}\varepsilon_{k}(r_{k}-\sum_{a}p_{a}r^{a}_{k})=\sum_{a}p_{a}\sum_{k,j}r^{a}_{j}\varepsilon_{k}(|\langle r_{k}|r^{a}_{j}\rangle|^2-\delta_{jk}). \end{eqnarray} This is the average of all the conditional states ergotropy relative to Hamiltonian $\hat{H}=\sum_{k} \varepsilon_{k}|r_{k}\rangle\langle r_{k}|$ which are non-negative by definition. From positivity of Eq. (4), we have $W_{\{\hat\Pi^A_a\}}\geq W$. When the subsystems are initially uncorrelated as $\hat\rho_{SA}=\hat\rho_{S}\otimes\hat\rho_{A}$, we have $\hat\rho_{S|a}=\hat\rho_{S}$ for any set $\{\hat\Pi^A_a\}$ and outcome $a$, which it means there would be no gain in work extraction and $W_{\{\hat\Pi^A_a\}}=W$. \section{Non-selective weak measurement and daemonic ergotropy} We have shown that by applying the mentioned restrictions on the system dimension and projectors, the same results as in the previous section are obtained by performing weak measurement on the ancillary system in the joint system $\hat\rho_{SA}$. Nevertheless, its advantage is to maintain correlation between subsystems. So in this section, we proceed with non-selective weak measurement. The operators $\hat {P}(x)$ and $\hat {P}(-x)$ for weak measurement has the forms as [22] \begin{align} &\hat {P}(x)=\sqrt{b_{0}(x)} \hat\Pi_0+\sqrt{b_{1}(x)} \hat\Pi_1, \end{align} \begin{align} &\hat {P}(-x)=\sqrt{b_{0}(-x)} \hat\Pi_0+\sqrt{b_{1}(-x)} \hat\Pi_1, \end{align} and the POVM elements $\hat {E(x)}$, $\hat {E(-x)}$ for this measurement is given as \begin{eqnarray} \hat {E}(x)=\hat {p}^\dagger (x)\hat {p}(x)=b_{0}(x)\hat\Pi_{0}+b_{1}(x)\hat\Pi_{1}, \end{eqnarray} \begin{eqnarray} \hat {E}(-x)=\hat {p}^\dagger (-x)\hat {p}(-x)=b_{0}(-x)\hat\Pi_{0}+b_{1}(-x)\hat\Pi_{1}, \end{eqnarray} where $b_{0}(x)=(1-tanhx)/2, b_{1}(x)=(1+tanhx)/2 $ and $x$ is the parameter to control the strength of measurement. We can write the extractable work from the systems state as average ergotropies of two possible outcomes of measurement: \begin{eqnarray} W_{\{\hat P^A_{\pm x}\}}=Tr[\hat{\rho}_{s}\Hat{H}_{s}]-\sum_{y=x, -x}p_{y}Tr[\hat{U_{y}}\hat\rho_{S|y}\hat{U}^{\dagger}_{y}\hat{H}_s]. 
\end{eqnarray} $\bf{Theorem \ 1.}$ For any system $S$ and ancillary system $A$ prepaired in a state $\hat\rho_{SA}$, by performing weak measurement on the ancillary system, we always have $W_{\{\hat P^A_{\pm x}\}}=W_{\{\hat \Pi^A_{a}\}}$.\\ $\bf{Proof. \ }$ We can write probabilities of measurement as below \begin{align} p_{x}&=Tr[(\hat I\otimes \hat {E}(x) )\hat\rho_{SA}] \\ \nonumber &=Tr\{[\hat I\otimes (b_{0}(x)\hat\Pi_{0}+b_{1}(x)\hat\Pi_{1})]\hat\rho_{SA}\}=b_{0}(x)p_{0}+b_{1}(x)p_{1}, \end{align} and $\hat\rho_{S|x}$ which is related to $\hat\rho_{S|a}$ is given by \begin{align} \hat\rho_{S|x}=\frac{Tr_{A}[(\hat I\otimes \hat {E}(x) )\hat\rho_{SA}]}{p(x)}=\frac{\sum_{a}b_{a}(x)p_{a}\hat\rho_{S|a}}{p(x)}. \end{align} By considering equations (10) and (11), Eq. (9) reduces as \begin{eqnarray} W_{\{\hat P^A_{\pm x}\}}=Tr[\hat{\rho}_{s}\Hat{H}_{s}]-\sum_{a}\sum_{y=x, -x} b_{a}(y) p_{a}Tr[\hat{U_{a}}\hat\rho_{S|a}\hat{U}^{\dagger}_{a}\hat{H}_s]. \end{eqnarray} Similar to Eq.(4) the difference $W_{\{\hat P^A_{\pm x}\}}-W$ is given by \begin{eqnarray} W_{\{\hat P^A_{\pm x}\}}-W=\sum_{k}\varepsilon_{k}(r_{k}-\sum_{a}\sum_{y=x, -x}b_{a}(y) p_{a} r^{a}_{k}). \end{eqnarray} By using the fact $\sum_{y=x, -x}b_{a}(y)=1$, we have $W_{\{\hat P^A_{\pm x}\}}=W_{\{\hat \Pi^A_{a=0, 1}\}}$. \section{Weak measurement and super ergotropy} The daemonic ergotropy can be obtained via weak measurement on the ancillary system in order to avoid quantum correlation loss. This fact motivated the authors to optimize the extractable work which lead to investigate selective weak measurement i.e. by applying one of its operators. We proceed the work extraction by measuring the ancillary system with one of the weak measurement operators. So the conditional density matrix of the system $S$ given the measurement operator $\hat P{(\pm x)}$ on the system $A$ can be written as \begin{eqnarray} \hat\rho_{S|\pm x}=\frac{\sum_{a}b_{a}(\pm x)p_{a}\hat\rho_{S|a}}{p_{(\pm x)}} \end{eqnarray} where \begin{eqnarray} p_{\pm x} =b_{0}(\pm x)p_{0}+b_{1}(\pm x)p_{1}. \end{eqnarray} The extractable work from the state of system $S$ reads as \begin{eqnarray} W_{\hat P^A_{\pm x}}=Tr[\hat{\rho}_{s}\Hat{H}_{s}]-\frac{1}{p_{(\pm x)}}\sum_{a=0}^{1} b_{a}(\pm x) p_{a}Tr[\hat{U_{a}}\hat\rho_{S|a}\hat{U}^{\dagger}_{a}\hat{H}_s]. \end{eqnarray} So the difference between the maximal extractable work by using non-selective (projective measurement) and selective (weak measurement) on the subsysatem $A$ can be obtain as \begin{align} \delta_{W(\pm x)} =W_{\hat P^A_{\pm x}}-W_{\{\hat \Pi^A_{a}\}}&=\sum_{a=0}^{1}(1-\frac{b_{a}(\pm x)}{p_{(\pm x)}})Tr[\hat{U_{a}}\hat\rho_{S|a}\hat{U}^{\dagger}_{a}\hat{H}_s]\\ \nonumber &=\pm C_{\pm}(Tr[\hat{U_{0}}\hat\rho_{S|0}\hat{U}^{\dagger}_{0}\hat{H}_s]-Tr[\hat{U_{1}}\hat\rho_{S|1}\hat{U}^{\dagger}_{1}\hat{H}_s]), \end{align} where $C_{\pm}=\frac{2p_{0}p_{1}tanh x}{1\pm(p_{1}-p_{0})tanh x}$ and signs $+$ and $-$ are related to $\hat{P}(+x)$ and $\hat{P}(-x)$ respectively. The plots in Figs. 1(a) and 1(b) show $C_{+}$ and $C_{-}$ in terms of $x$ and $p_{0}$ respectively. According to figures, these coefficients are positive and depend on the probability of measurement outcomes and its intensity. \\ $\bf{Theorem \ 2.}$ For any system $S$ and ancillary system $A$ prepaired in a state $\hat\rho_{SA}$, by performing selective weak measurement on the ancillary system we always have $W_{\hat P^A_{\pm x}}\ge W_{\{\hat \Pi^A_{a=0, 1}\}}$.\\ $\bf{Proof. \ }$ From Eq. 
(17), it is clear that, if $Tr[\hat{U_{0}}\hat\rho_{S|0}\hat{U}^{\dagger}_{0}\hat{H}_s]$ is greater than $Tr[\hat{U_{1}}\hat\rho_{S|1}\hat{U}^{\dagger}_{1}\hat{H}_s]$, we select measurement operator $P(x)$ otherwise we chose $P(-x)$ in order to $\delta_{W(\pm x)}$ be positive. If two traces are equal, the extractable work by both measurements is the same. \subsection{Example} In what follows, we introduce some examples to make the theorem more clear. If we consider the pure states $\hat{\rho}_1=|\psi\rangle_{SA}\langle\psi|$ with $|\psi\rangle_{SA}=cos(\theta)|00\rangle+sin(\theta)|00\rangle$ and $\hat{\rho}_2=|\phi\rangle_{SA}\langle\phi|$ with $|\phi\rangle_{SA}=\frac{1}{\sqrt3}(|11\rangle+|10\rangle+|01\rangle)$ the maximum work which can be extracted by using the selective weak measurement is the same as with the demonic ergotropy. Here we will show the ability of selective weak measurement to gain more work than demonic ergotropy. For another example we consider the mixed state $\hat\rho_{SA}=\frac{1}{3}(|11\rangle\langle11|+|10\rangle\langle10|+|01\rangle\langle01|)$ which its daemonic ergotropy is calculated from Eq. (2) as $W_{\{\hat \Pi^A_{a=0, 1}\}}=\frac{\varepsilon_{1}-\varepsilon_{0} }{3}=\frac{\Delta \varepsilon }{3}$, where $\varepsilon_{1}$ and $\varepsilon_{0}$ are the energy of levels $1$ and $0$ of the system $S$ respectively . By measuring the ancillary system via operator $P(x)$ followed by the ergotropic transformation, the maximal extractable work reads as \begin{eqnarray} W_{\hat P^A_{+x}}=\frac{3+tanhx}{3-tanhx}(\Delta \varepsilon /3). \end{eqnarray} The plot in Fig. 2 shows $W_{\hat P^A_{+x}}/W_{\{\hat \Pi^A_{a}\}}$ as a function of measurement strength $x$ which is greater than one. The more strongly one perturbs the subsystem $A$, the more work can be revealed from system $S$. So for sufficiently large values of $x$ we have $W_{\hat P^A_{+x}}=2W_{\{\hat \Pi^A_{a}\}}$. In this case, selective strong measurement with projector $\Pi^A_1$ emerges instead of $P(+x)$. However, when we set $x=0$, the extractable works $W_{\hat P^A_{+x}}$ and $W_{\{\hat \Pi^A_{a}\}}$ became equal. \section{Measurement effect on total and non-local extractable works} Non-local extractable work of a joint system is defined as the difference between the total and the local extractable works $\cite{Kevin}$. We are also interested in measuring the total system by a set of projective operators $\{\hat {\Pi}_{ij}\}$. We define a classical non-local extractable work as the difference between the average ergotropy of projective measurement on the general system and the local extractable work as: \begin{eqnarray} W^{\{\hat {\Pi}_{ij}\}}_{Non-loc}=W^{\{\hat {\Pi}_{ij}\}}_{tot}-W_{loc}. \end{eqnarray} The definition above enables us to compare the extractable work from quantum and classical correlated states and examine the role of measurement in this scenario.\\ In this section, we consider a general two-qubit system with computational bases $\{|11\rangle,|10\rangle,$\hspace{1mm}$|01\rangle,|00\rangle\}$ which is consisted of a non-interacting main and an ancillary subsystems. 
The density matrix of such a system can be written in the Bloch representation as
\begin{eqnarray}
\hat{\rho}_{SA}=\frac{1}{4}(\hat{I}+\sum_{i=1}^{3} s_{i}\hat{\sigma}_{i}\otimes \hat{I}+\sum_{j=1}^{3} r_{j}\hat{I}\otimes \hat{\sigma}_{j}+\sum_{i,j=1}^{3}t_{ij}\hat{\sigma}_{i}\otimes \hat{\sigma}_{j})
\end{eqnarray}
where $\hat{I}$ is the identity matrix, $s_{i}=Tr[\hat{\rho}_{SA}(\hat{\sigma}_{i}\otimes \hat{I})]$ and $r_{j}=Tr[\hat{\rho}_{SA}(\hat{I}\otimes \hat{\sigma}_{j})]$ are the local Bloch vectors, $t_{ij}=Tr[\hat{\rho}_{SA}(\hat{\sigma}_{i}\otimes \hat{\sigma}_{j})]$ is the $3\times 3$ correlation matrix, and $\hat{\sigma}_{i}$ ($\hat{\sigma}_{j}$) are the Pauli matrices. The total extractable work is obtained by the ergotropic transformation $\hat{U}_{SA}$ on the state $\hat{\rho}_{SA}$. Due to the zero Hamiltonian of the ancillary system, the local extractable work is only the ergotropy of the main system. Now we perform a measurement on the ancillary system with the set of orthogonal projectors $\hat{\Pi}_{1}$ and $\hat{\Pi}_{0}$. The conditional density matrix of the system $S$ given projector $\hat{\Pi}_{1}$ on the system $A$ reads as
\begin{eqnarray}
\hat{\rho}_{{S|1}}=\frac{1}{4}[(1+r_{3})\hat{I}+\sum_{i}(s_{i}+t_{i3})\hat{\sigma}_{i}]=\lambda_{+}|\lambda_{+}\rangle\langle \lambda_{+}| +\lambda_{-}|\lambda_{-}\rangle\langle \lambda_{-}|,
\end{eqnarray}
with
\begin{eqnarray}
\lambda_{\pm}=\frac{1+r_{3}\pm|\vec s+\vec t_{i3}|}{4},
\end{eqnarray}
where $|\vec s+\vec t_{i3}|=\sqrt{(s_{1}+t_{13})^2+(s_{2}+t_{23})^2+(s_{3}+t_{33})^2}$. The conditional density matrix of the system $S$ given projector $\hat{\Pi}_{0}$ on the system $A$ reads as
\begin{eqnarray}
\hat{\rho}_{{S|0}}=\frac{1}{4}[(1-r_{3})\hat{I}+\sum_{i}(s_{i}-t_{i3})\hat{\sigma}_{i}]=\lambda_{+}^\prime|\lambda_{+}^\prime\rangle\langle \lambda_{+}^\prime|+\lambda_{-}^\prime|\lambda_{-}^\prime\rangle\langle \lambda_{-}^\prime|,
\end{eqnarray}
with
\begin{eqnarray}
\lambda_{\pm}^\prime=\frac{1-r_{3}\pm|\vec s-\vec t_{i3}|}{4},
\end{eqnarray}
where $|\vec s-\vec t_{i3}|=\sqrt{(s_{1}-t_{13})^2+(s_{2}-t_{23})^2+(s_{3}-t_{33})^2}$. The unitary operators of the ergotropic transformation are $\hat{U}_{1}=|1\rangle\langle \lambda_{+}|+|0\rangle\langle \lambda_{-}|$ and $\hat{U}_{0}=|1\rangle\langle \lambda_{+}^\prime|+|0\rangle\langle \lambda_{-}^\prime|$, which give the daemonic ergotropy as
\begin{eqnarray}
W_{\{\hat \Pi^A_{a}\}}=(2s_{3}+|\vec s+\vec t_{i3}|+|\vec s-\vec t_{i3}|)\frac{\Delta\varepsilon}{4}.
\end{eqnarray}
On the other hand, the reduced density matrix of the system $S$ is given by $\hat\rho_{S}=Tr_{A}(\hat\rho_{SA})=\frac{1}{2}(\hat{I}+\sum_{i}s_{i}\hat{\sigma}_{i})$. The ergotropy of the reduced state is $W=(s_{3}+s)\frac {\Delta\varepsilon}{2}$ $\cite{Kevin}$. As mentioned in section II, we have $W_{\{\hat \Pi^A_{a}\}}\ge W$, which is compatible with the geometric fact that for two arbitrary vectors $\vec A$ and $\vec B$, $|\vec A+\vec B|+|\vec A-\vec B|\ge 2|\vec A|$. The classical non-local extractable work from Eq. (19) is obtained as $W^{\{\hat {\Pi}_{ij}\}}_{Non-loc}=(1-s)\frac {\Delta\varepsilon}{2}$, which is greater than or equal to zero (appendix A).
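As a simple consistency check of the expressions above, consider the classically correlated state $\hat\rho_{SA}=q|11\rangle\langle11|+(1-q)|00\rangle\langle00|$, for which $\vec s=\vec r=(0,0,2q-1)$, $t_{13}=t_{23}=0$ and $t_{33}=1$. Then $|\vec s+\vec t_{i3}|=2q$ and $|\vec s-\vec t_{i3}|=2(1-q)$, so the daemonic ergotropy becomes $W_{\{\hat \Pi^A_{a}\}}=\left(2(2q-1)+2q+2(1-q)\right)\frac{\Delta\varepsilon}{4}=q\,\Delta\varepsilon$. This agrees with a direct computation: conditioned on either outcome the state of $S$ is pure ($|1\rangle$ or $|0\rangle$) and can be brought to the lower level, so the extractable work is $Tr[\hat\rho_{S}\hat{H}_{s}]-\varepsilon_{0}=q\,\Delta\varepsilon$. The unassisted ergotropy is $W=(2q-1)\Delta\varepsilon$ for $q\geq1/2$ (and zero otherwise), so, for instance, for $q=3/4$ the measurement on the ancilla raises the extractable work from $\Delta\varepsilon/2$ to $3\Delta\varepsilon/4$.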
Up to local unitary equivalence, a general two-qubit state is always reducible to a state at the Bloch normal form as \cite{Shunlong} \begin{eqnarray} \hat{\rho}_{SA}=\frac{1}{4}(\hat{I}+\vec s\hat{\sigma}\otimes \hat{I}+\hat{I}\otimes \vec r \hat{\sigma}_{j}+\sum_{i=1}^{3}c_{i}\hat{\sigma}_{i}\otimes \hat{\sigma}_{i}), \end{eqnarray} where $\vec s$ and $\vec r$ are the Bloch vectors of the reduced density matrices of the systems $S$ and $A$ respectively and $c_{i}$ are the real numbers such that $|c_{i}|\le 1$. We assume that $\vec s=\vec r=0$ in order to reducing it to the Bell-diagonal states: \begin{align} \hat{\rho}_{SA}&=\frac{1}{4}(\hat{I}+\sum_{i=1}^{3}c_{i}\hat{\sigma}_{i}\otimes \hat{\sigma}_{i})=\sum_{i=1}^{3}\lambda_{i}|\lambda_{i}\rangle\langle \lambda_{i}| \end{align} with $\lambda_{0}=1-c_{1}-c_{2}-c_{3}, \lambda_{1}=1-c_{1}+c_{2}+c_{3}, \lambda_{2}=1 +c_{1}-c_{2}+c_{3}$ and $\lambda_{3}=1+c_{1}+c_{2}-c_{3}$. The reduced density matrices are maximally mixed states and cannot be transformed by unitary operations, so the local extractable work of $\hat{\rho}_{SA}$ will be zero $W_{loc}=0$. The conditional density matrices of the system given the projective measurement operators $\hat{\Pi}_{i} (i=0, 1)$ on the ancillary system can obtain as \begin{eqnarray} \hat{\rho}_{S|1}=\frac{1}{4}[\hat{I}+c_{3}\hat{\sigma}_{3}]=\frac{1}{4}[(1+c_{3})|1\rangle\langle1| +(1-c_{3})|0\rangle\langle0|] \text{ with} \hspace{1mm}p_{1}=\frac{1}{4}, \end{eqnarray} \begin{eqnarray} \hat{\rho}_{S|0}=\frac{1}{4}[\hat{I}-c_{3}\hat{\sigma}_{3}]=\frac{1}{4}[(1-c_{3})|1\rangle\langle1| +(1+c_{3})|0\rangle\langle0|] \text{ with} \hspace{1mm} p_{0}=\frac{1}{4}. \end{eqnarray} So the daemonic ergotropy of the system $S$ is given by \begin{eqnarray} W_{\{\hat \Pi^A_{a}\}}=|c_{3}|\frac {\Delta\varepsilon}{2}. \end{eqnarray} Depending on which $|c_{i}|$ is greatest than the others, the corresponding cyclic unitary transformation is required to extract work from the joint system state. The ergotropy of the total system is given by \begin{eqnarray} W_{tot}= \begin {cases} |c_{1}|\Delta\varepsilon/2 & \text{ with} \hspace{3mm} \hat{U}_{1}=|11\rangle\langle\lambda_{0}|+|10\rangle\langle\lambda_{1}|+|01\rangle\langle\lambda_{2}|+|00\rangle\langle\lambda_{3}|,\\ |c_{2}|\Delta\varepsilon/2 & \text{ with} \hspace{3mm} \hat{U}_{2}=|11\rangle\langle\lambda_{0}|+|10\rangle\langle\lambda_{2}|+|01\rangle\langle\lambda_{1}|+|00\rangle\langle\lambda_{3}|,\\|c_{3}|\Delta\varepsilon/2 & \text{ with} \hspace{3mm} \hat{U}_{3}=|11\rangle\langle\lambda_{0}|+|10\rangle\langle\lambda_{3}|+|01\rangle\langle\lambda_{1}|+|00\rangle\langle\lambda_{2}|, \end {cases} \end{eqnarray} which is greater than or equal to the daemonic ergotropy $W_{\{\hat \Pi^A_{a}\}}$. 
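A concrete numerical example illustrates this hierarchy (the values of $c_{i}$ below are chosen only for illustration). Take $c_{1}=0.8$, $c_{2}=-0.1$ and $c_{3}=0.1$; the normalized eigenvalues of $\hat\rho_{SA}$ (i.e. the $\lambda_{i}$ above divided by four) are $0.05$, $0.05$, $0.5$ and $0.4$. Since $|c_{1}|$ is the largest coefficient, the optimal cyclic unitary is $\hat{U}_{1}$, which, for these values, places the two largest eigenvalues on the levels with system energy $\varepsilon_{0}$; starting from the initial energy $(\varepsilon_{0}+\varepsilon_{1})/2$ this gives $W_{tot}=(0.9-0.5)\Delta\varepsilon=0.4\,\Delta\varepsilon=|c_{1}|\Delta\varepsilon/2$, whereas the daemonic ergotropy is only $W_{\{\hat \Pi^A_{a}\}}=|c_{3}|\Delta\varepsilon/2=0.05\,\Delta\varepsilon$, in accordance with $W_{tot}\geq W_{\{\hat \Pi^A_{a}\}}$.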
Now we perform the projective measurement on the total system by set of $\hat\Pi_{ij}$ and calculate the conditional density matrices as \begin{eqnarray} \begin{array}{c} \hat{\rho}^{SA}_{\hat {\Pi}_{11}}=\frac{Tr(\hat{\Pi}_{11}\hat{\rho}_{SA})}{p_{11}}=|11\rangle\langle11|\hspace{3mm} \text{ with} \hspace{3mm} p_{11}=\frac {1}{4}(1+c_{3}),\\ \hat{\rho}^{SA}_{\hat {\Pi}_{10}}=\frac{Tr(\hat{\Pi}_{10}\hat{\rho}_{SA})}{p_{10}}=|10\rangle\langle10|\hspace{3mm}\text{ with} \hspace{3mm} p_{10}=\frac {1}{4}(1-c_{3}),\\ \hat{\rho}^{SA}_{\hat {\Pi}_{01}}=\frac{Tr(\hat{\Pi}_{01}\hat{\rho}_{SA})}{p_{01}}=|01\rangle\langle01|\hspace{3mm}\text{ with} \hspace{3mm} p_{01}=\frac {1}{4}(1-c_{3}),\\ \hat{\rho}^{SA}_{\hat {\Pi}_{00}}=\frac{Tr(\hat{\Pi}_{00}\hat{\rho}_{SA})}{p_{00}}=|00\rangle\langle00|\hspace{3mm}\text{ with} \hspace{3mm} p_{00}=\frac {1}{4}(1+c_{3}).\\ \end{array} \end{eqnarray} The energy of each conditional state is reduced to $\varepsilon_{0}$ by using the cyclic unitary operators, so the average ergotropy of all the conditional states reads as follows \begin{eqnarray} W^{\{\hat{\Pi}_{ij}\}}_{tot}=Tr[\hat{\rho}_{SA}\hat{H}_{SA}]-\sum_{i,j} p_{ij}Tr[\hat{U_{ij}}\hat\rho_{SA|ij}\hat{U}^{\dagger}_{ij}\hat{H}_{SA}]=\Delta\varepsilon/2, \end{eqnarray} which is greater than or equal to the total extractable work of the joint system state. It means we can extract more work by performing the projective measurement on the state $\hat\rho_{SA}$ than the both of the total extractable work and the daemonic ergotropy. On the other hand, since $W_{loc}=0$ therefore $W^{\{\hat{\Pi}_{ij}\}}_{Non-loc}=W^{\{\hat{\Pi}_{ij}\}}_{tot}$ . Thus for the post measurement total state $\hat{\rho}_{SA}=\sum_{i,j} p_{ij}\hat\rho_{SA|ij}$ the non-local extractable work is maximized however the quantum correlation is completely lost. Before such measurement, by using a measure which is introduced in Ref. $\cite{Hui}$ one can get the quantum correlation as \begin{eqnarray} C(\hat{\rho})=\sqrt{2-\sqrt{4-[c^2_{3}(c^2_{1}+c^2_{2})+c^2_{1}c^2_{2}}}]. \end{eqnarray} As we see from the Eq. (34), the quantum correlation depend on all $c_{i}$s $(i=1, 2, 3)$, however non-local extractable work $W_{non-loc}$ is only proportional to one of them. In general, there is not an explicit relationship between these two quantities. The plot in Fig. 3 shows the quantum correlation $C(\hat{\rho})$ as the function of $c_{1}/c_{2}$ and $c_{3}/c_{2}$. As one can see this quantity is maximal when all $c_{i}$s tend to $1$. \subsection{Example} As an example of Bell-diagonal states, we plot $W^{\{\hat{\Pi}_{ij}\}}_{Non-loc}$ and $W_{Non-loc}$ in Fig. 4 for $c_{1}=\frac{1}{2}$, $c_{2}=-\frac{1}{2}$ and $c_{3}=sin(\theta)$. For post measurement classical correlated state $\hat{\rho}_{SA}=\frac{1}{4}\sum_{j,k}(1+(-1)^{j+k}c_{3})|jk\rangle\langle jk|, (j,k=0,1)$, the plot in Fig. 4 shows that $W^{\{\hat{\Pi}_{ij}\}}_{Non-loc}$ is greater than $W_{Non-loc}$ except in $\theta =\frac{\pi}{2}$ which are equals. In Fig. 5 we plot the quantum correlation in terms of $\theta$. It shows the correlation of the initial state before the work extraction process which is calculated as $\frac{1}{2}\sqrt{8-\sqrt{63-8sin^2(\theta)}}$. As one can see the quantum correlation is maximized at $\theta =\frac{\pi}{2}$, which specifies the extremum point for two types of non-local extractable work in Fig. 4. 
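At the two endpoints of this example the relevant quantities can be evaluated directly (a purely numerical check). At $\theta=0$ one has $c_{3}=0$, so $W_{Non-loc}=W_{tot}=\Delta\varepsilon/4$ while $W^{\{\hat{\Pi}_{ij}\}}_{Non-loc}=\Delta\varepsilon/2$, and the quantum correlation is $\frac{1}{2}\sqrt{8-\sqrt{63}}\approx0.13$. At $\theta=\frac{\pi}{2}$ one has $c_{3}=1$, the two non-local works coincide at $\Delta\varepsilon/2$, and the quantum correlation reaches its maximal value $\frac{1}{2}\sqrt{8-\sqrt{55}}\approx0.38$ along this family.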
\section{Conclusions} In this paper, we have investigated the concept of the gain in work extraction in closed quantum systems in the presence of quantum correlation through the performing of weak measurement on the ancillary subsystem. we considered a bipartite quantum system and showed that the extractable work induced by the weak measurement on the ancillary subsystem, is always equal to the daemonic ergotropy captured by the projective measurement. This result is independent of the strength coefficient of the weak measurement. We can reach more work by means of performing selective weak measurement on the ancillary system which we named “$Super Ergotropy$”. It depends on the ergotropies of the conditional principal system states to specify which of the weak measurement operators to be imposed. We also provide a geometric confirmation for daemonic ergotropy in the case of general two-qubit states. We investigate total system ergotropy in the case of Bell diagonal states and showed that, in the presence of quantum correlation one can decrease this quantity by means of measurement. We defined non-local extractable work for bipartite systems which are classical correlated. We confirmed that in effect of projective measurement, non-local extractable work increase for Bell diagonal states and showed that there is not relationship between quantum correlation and non-local extractable work for this states. \section {Acknowledgments} The authors would like to thank A. Ektesabi and B. Ahansaz for very useful comments and advices. \section{Appendix A} The Hamiltonian of the two-qubit state is \begin{eqnarray} \hat{H}=\hat{H}_{A}\otimes\hat{I}. \end{eqnarray} So initial energy of the total system is given by \begin{align} Tr[\hat{\rho}_{SA}\hat{H}_{SA}]&=\frac{1}{4}Tr[(\hat{I}+\sum_{i=1}^{3} s_{i}\hat{\sigma}_{i}\otimes \hat{I}+\sum_{j=1}^{3} r_{j}\hat{I}\otimes \hat{\sigma}_{j}+\sum_{i,j=1}^{3}t_{ij}\hat{\sigma}_{i}\otimes \hat{\sigma}_{j})(\hat{H}_{A}\otimes\hat{I})]\\ \nonumber &=\frac{1}{2}(\varepsilon_{0}+\varepsilon_{1}+s_{3}\Delta\varepsilon). \end{align} By performing measurement on the two-qubit system via a set of projective operators $\hat{\Pi}_{ij}(i,j=0, 1)$ the conditional density matrices can be obtain as \begin{align} \begin{array}{c} \hat{\rho}^{SA}_{\hat {\Pi}_{11}}=\frac{Tr(\hat{\Pi}_{11}\hat{\rho}_{SA})}{p_{11}}=|11\rangle\langle11| \hspace{3mm}\text{ with} \hspace{3mm} p_{11}=\frac {1}{4}(1+s_{3}+r_{3}+t_{33}),\\ \hat{\rho}^{SA}_{\hat {\Pi}_{10}}=\frac{Tr(\hat{\Pi}_{10}\hat{\rho}_{SA})}{p_{10}}=|10\rangle\langle10|\hspace{3mm}\text{ with} \hspace{3mm} p_{10}=\frac {1}{4}(1+s_{3}-r_{3}-t_{33}),\\ \hat{\rho}^{SA}_{\hat {\Pi}_{01}}=\frac{Tr(\hat{\Pi}_{01}\hat{\rho}_{SA})}{p_{01}}=|01\rangle\langle01|\hspace{3mm}\text{ with} \hspace{3mm} p_{01}=\frac {1}{4}(1-s_{3}+r_{3}-t_{33}),\\ \hat{\rho}^{SA}_{\hat {\Pi}_{00}}=\frac{Tr(\hat{\Pi}_{00}\hat{\rho}_{SA})}{p_{00}}=|00\rangle\langle00|\hspace{3mm}\text{ with} \hspace{3mm} p_{00}=\frac {1}{4}(1-s_{3}-r_{3}+t_{33}).\\ \end{array} \end{align} The cyclic unitary operators which minimize the energy of the conditional state is given by \begin{eqnarray} \begin{array}{c} \hat{U}_{11}=|01\rangle\langle11|+|11\rangle\langle01| \text {or}\hspace{1mm} \hat{U}_{11}=|00\rangle\langle11|+|11\rangle\langle00|,\\ \hat{U}_{11}=|01\rangle\langle10|+|10\rangle\langle01| \hspace{1mm}\text {or} \hspace{1mm} \hat{U}_{11}=|00\rangle\langle10|+|10\rangle\langle00|,\\ \hat{U}_{01}=\hat{I},\\ \hat{U}_{00}=\hat{I}. 
\end{array}
\end{eqnarray}
All the cyclic unitary operators bring the total system to the lowest eigenstate of the Hamiltonian. So the total extractable work is
\begin{align}
W^{\{\hat{\Pi}_{ij}\}}_{tot}&=Tr[\hat{\rho}_{SA}\hat{H}_{SA}]-\sum_{i,j} p_{ij}Tr[\hat{U}_{ij}\hat\rho_{SA|ij}\hat{U}^{\dagger}_{ij}\hat{H}_{SA}]\\ \nonumber
&=(1+s_{3})\Delta\varepsilon/2.
\end{align}
So the classical non-local extractable work is obtained as
\begin{eqnarray}
W^{\{\hat{\Pi}_{ij}\}}_{Non-loc}=W^{\{\hat{\Pi}_{ij}\}}_{tot}-W=(1-s_{3})\Delta\varepsilon/2.
\end{eqnarray}

Fig. 1. (a) The coefficient $C_{+}$ as a function of $p_{0}$ and $x$. (b) The coefficient $C_{-}$ as a function of $p_{0}$ and $x$.

Fig. 2. $\frac {W_{\hat P^A_{+x}}}{W_{\{\hat \Pi^A_{a}\}}}$ as a function of the measurement strength $x$.

Fig. 3. The quantum correlation of the Bell-diagonal state as a function of $c_{1}/c_{2}$ and $c_{3}/c_{2}$.

Fig. 4. The non-local extractable works $W_{Non-loc}$ (solid line) and $W^{\{\hat{\Pi}_{ij}\}}_{Non-loc}$ (dashed line), divided by $\Delta E$, of the Bell-diagonal state for the case $c_{1}=\frac{1}{2}, c_{2}=-\frac{1}{2}$ and $c_3=\sin(\theta)$.

Fig. 5. The quantum correlation of the Bell-diagonal state for the case $c_{1}=\frac{1}{2}, c_{2}=-\frac{1}{2}$ and $c_{3}=\sin(\theta)$.
\end{document}
\begin{document}
\title{A General Setting for Geometric Phase of Mixed States Under an Arbitrary Nonunitary Evolution}
\author{A. T. Rezakhani} \email{[email protected]}
\author{P. Zanardi} \email{[email protected]}
\affiliation{Institute for Scientific Interchange (ISI), Villa Gualino, Viale Settimio Severo 65, I-10133 Torino, Italy}
\begin{abstract}
The problem of the geometric phase for an open quantum system is reinvestigated in a unifying approach. Two of the existing methods to define the geometric phase, one based on Uhlmann's approach and the other on the kinematic approach, which have been considered to be distinct, are shown to be related in this framework. The method is based upon purification of a density matrix by its uniform decomposition and a generalization of the parallel transport condition obtained from this decomposition. It is shown that the generalized parallel transport condition can be satisfied when Uhlmann's condition holds. However, this does not mean that all solutions of the generalized parallel transport condition are compatible with those of Uhlmann's condition. It is also shown how to recover the earlier known definitions of geometric phase, as well as how to generalize them when degeneracy exists and varies in time.
\end{abstract}
\date{\today}
\pacs{03.65.Vf, 42.50.Dv}
\maketitle
The concept of geometric phase was originally introduced by Pancharatnam in the classical context of comparing two polarized light beams through their interference \cite{panch}. Later, Berry pointed out its importance even in quantum systems undergoing a cyclic adiabatic evolution \cite{berry}. Since then, this important notion has been a subject of interest from many different perspectives, which has led to many different generalizations and applications \cite{anandan,shapere}. Of course, in general, to retain the purely geometrical nature of the phase one has to impose some constraints, namely parallel transport (PT) conditions. In this manner, the geometric phase is a feature which depends only on the geometry of the path traversed by the system during its evolution. It is also worth noting that an important source of the renewed interest in geometric phases is their relevance to geometric quantum computation and holonomic quantum computation \cite{zanardi}. Indeed, it is known that quantum logic gates can be implemented by using only the concept of geometric phases. It is believed that the purely geometric nature of this phase makes such computations intrinsically fault tolerant and robust against noise \cite{rob}. A pure state is merely an idealization, and in real experiments a description of the system in terms of mixed states is usually required. This point accounts for attempts towards extending the concept of geometric phase to mixed states. In fact, Uhlmann was the first to tackle the problem, through the mathematical approach of purification of mixed states \cite{uhl1}. This method is rather general in that it is independent of the type of evolution of the system. Next, Sj\"{o}qvist {\em et al.} put forward a quantum-interferometry-based definition for the geometric phase of nondegenerate density matrices undergoing a {\em unitary} evolution \cite{sjoq1}. Later, Singh {\em et al.} proposed a kinematic description and extended the results to the case of degenerate mixed states \cite{singh}. It must be mentioned that there also exists another, differential geometric, approach to define the geometric phase for mixed states undergoing a unitary evolution \cite{chaturvedi}.
In this approach, mixed state geometric phase appears as an immediate and direct generalization of the pure state case. Indeed, in the case of environmental effects such as decoherence, one has to consider nonunitary evolutions of mixed states. Some generalizations in this direction have been addressed in Refs.~ \cite{uhl1,ericsson2,gam,pix,pati,carollo1,carollo2,nonunitary,lidar}. The proposition in Ref.~\cite{ericsson2} for completely positive maps in spite of being operationally well-defined depends on the specific Kraus representation for the map. In Refs.~\cite{carollo1,carollo2}, the problem of geometric phase of an initially pure open quantum system, based on the standard definition of pure state geometric phase, has been addressed through the quantum jump method. A more recent effort is based on a kinematic approach, with no {\em a priori} assumption about dynamics of the system \cite{nonunitary}. However, most of these different definitions do not agree with each other. In fact, Uhlmann's method even in the case of unitary evolution does not agree with the interferometric definition \cite{ericsson,slater}. The source of such disagreement is known to be the use of different types of PT conditions. Hence, it has been argued \cite{ericsson,slater,levay} that these approaches are not generally equivalent and one cannot obtain one from the other. Therefore, it could be desirable to find a more unified approach which can bring together the previous general ideas. Recently, in the unitary evolution case it has been argued that using (nonorthogonal) decompositions different from spectral decomposition can make it possible to unify the kinematic and Uhlmann's approaches \cite{gen}. In this framework, a suitable notion of PT condition of the mixed state is based on the PT condition of the vectors constituting this decomposition. In this paper, we shall use a rather similar mechanism plus uniform decomposition of density matrices, and propose a generalized kinematic approach for geometric phase of mixed states under an arbitrary nonunitary evolution. This approach vividly shows how it is possible to merge Uhlmann's approach and kinematic approach. It is also shown how to recover the earlier definitions of geometric phase from this more general approach. In addition, it is shown that the approach can be easily modified to include the more general case of degenerate mixed states. This investigation may as well be useful in the study of robustness of geometric phases against decoherence \cite{tidstrom}. Let us suppose that the density matrix of our system of interest (with the Hilbert space ${\mathcal{H}}_s$) is $\varrho(t)=\sum_{k=1}^N p_k(t)|w_k(t)\rangle\langle w_k(t)|$, in which $p_k(t)$'s ($|w_k(t)\rangle$'s) are considered to be its eigenvalues (normalized eigenvectors). In a {\em general} evolution both $p_k$ and $|w_k\rangle$ are subject to change in time. For simplicity of our discussion, we in the sequel assume that the rank of this matrix is constant at all instants, and even more the matrix is nondegenerate. In the case of {\em unitary} evolution, we have $p_k(t)=p_k(0)$ and $|w_k(t)\rangle=U(t)|w_k(0)\rangle$, where $U(t)$ is the unitary evolution operator. However, when evolution is nonunitary, the eigenvalues, $p_k$, can also vary in time. Thus, generally $U(t)=\sum_k|w_k(t)\rangle\langle w_k(0)|$ does not encompass the whole dynamical information. 
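To make this last point concrete, the following small numerical sketch (ours, not part of the original argument) considers a single qubit whose Bloch vector shrinks under pure dephasing: the instantaneous eigenvectors are constant in time, so the operator $U(t)=\sum_k|w_k(t)\rangle\langle w_k(0)|$ is unitary but cannot reproduce the nonunitary map $\varrho(0)\mapsto\varrho(t)$, precisely because the eigenvalues $p_k(t)$ change:
\begin{verbatim}
# Illustrative sketch (not from the paper): a qubit with Bloch vector
# (0.8*exp(-t), 0, 0).  The eigenvalues p_k(t) of rho(t) change in time,
# so U(t) built from the instantaneous eigenvectors misses this information.
import numpy as np

sx = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def rho(t):
    return 0.5 * (I2 + 0.8 * np.exp(-t) * sx)   # full-rank, nondegenerate

p0, w0 = np.linalg.eigh(rho(0.0))               # spectral data at t = 0
p1, w1 = np.linalg.eigh(rho(1.0))               # spectral data at t = 1
U = w1 @ w0.conj().T                            # U(t) = sum_k |w_k(t)><w_k(0)|

print(np.allclose(U.conj().T @ U, I2))                   # True: U(t) is unitary
print(np.allclose(U @ rho(0.0) @ U.conj().T, rho(1.0)))  # False: eigenvalues changed
print(np.round(p0, 3), np.round(p1, 3))                  # p_k(0) versus p_k(1)
\end{verbatim}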
In fact, in such cases, to obtain $\varrho(t)$ one often has to resort to some approximative methods in the theory of open quantum systems \cite{open}, such as the Lindblad equation \cite{lindblad}. Since in our construction we use Uhlmann's PT condition \cite{uhl1} we need to recall it briefly. Uhlmann's approach is based upon the standard purification $\textit{w}(t)$, where $\varrho(t)=\textit{w}(t){\textit{w}}^{\dagger}(t)$, for density matrices. In other words, $\textit{w}$ can be considered as a purification of $\varrho$ in the larger Hilbert space of Hilbert-Schmidt operators with scalar product $\langle \textit{w}(t),\textit{w}(t')\rangle=\text{tr}({\textit{w}}^{\dagger}(t)\textit{w}(t'))$ such that $\textit{w}{\textit{w}}^{\dagger}=\varrho$. It is clear that $\textit{w}(t)=\sqrt{\varrho(t)}V(t)$ is an acceptable purification of $\varrho$ for {\em any} unitary $V(t)$. For a special purification where each $|\langle \textit{w}(t),\textit{w}(t')\rangle|$ is constrained to its maximum value Uhlmann has defined the geometric phase associated to the evolution from $\varrho(0)$ to $\varrho(\tau)$ as $\gamma_g(\tau)=\text{arg}(\langle \textit{w}(0),\textit{w}(\tau)\rangle),$ where the PT condition ${\textit{w}}^{\dagger}(t)\dot{\textit{w}}(t)=\dot{\textit{w}}^{\dagger}(t)\textit{w}(t)$ has to be satisfied. Let us also briefly review the construction of the geometric phase in Ref. \cite{nonunitary}. Consider a purification for the density matrix $\varrho(t)$ as \begin{eqnarray}\label{purify1}&|\Psi(t)\rangle_{sa}=\sum_k\sqrt{p_k(t)}|w_k(t)\rangle_s\otimes|a_k\rangle_a,~~t\in[0,\tau]. \hskip 2mm\end{eqnarray} Now after imposing the PT condition; $\langle w_k(t)|\frac{\text{d}}{\text{d}t}|w_k(t)\rangle=0$, the geometric phase, defined a la Pancharatnam \cite{panch}; $\gamma(\tau)=\text{arg}(\langle \Psi(0)|\Psi(\tau)\rangle)$, reads as $\gamma_g(\tau)=\text{arg}(\sum_{k=1}^N\sqrt{p_k(0)p_k(\tau)}\langle w_k(0)|w_k(\tau)\rangle e^{-\int_0^{\tau}\langle w_k(t)|\dot{w}_k(t)\rangle\text{d}t})$. Indeed, by using the PT condition one fixes the general form of the unitary operators which like $U(t)$ can run system's dynamics. As is clear, in this method purification of mixed state of the system is done based on its spectral decomposition and the PT condition is considered to be the PT condition of all the vectors constituting this (spectral) decomposition. We know that a purification as in Eq.~(\ref{purify1}), is only one of the possible purifications that can give rise to the correct mixed state of the system. So, one has the freedom to choose other decompositions and study the problem of geometric phase with respect to them. In the sequel, we follow such a strategy and look for a specific purification in which all normalized terms can be treated in a naturally uniform manner, unlike Eq.~(\ref{purify1}) where the contribution of the $k$-th normalized term is the time dependent variable $\sqrt{p_k(t)}$. In other words, instead of starting from the spectral decomposition of a density matrix which is the usual starting point of purification based approaches, we start with another decomposition which can result to the mentioned uniformity. In order to do so, we need the next two important theorems on different decompositions of a density matrix $\varrho$.\\ \indent{Theorem 1 \cite{hugh}: Let $\varrho$ has the spectral ensemble $\{p_k,|w_k\rangle\}$. 
Then $\{q_l,|x_l\rangle\}$ is another ensemble for it iff there exists a {\em unitary} matrix ${\mathcal{U}}=({\mathcal{U}}_{kl})$ such that \begin{eqnarray}\label{th1}&\sqrt{q_l}|x_l\rangle=\sum_k\sqrt{p_k}~{\mathcal{U}}_{lk}|w_k\rangle.\end{eqnarray} \indent {Theorem 2} \cite{prob}: Let $\{q_l\}$ be a probability distribution. Then there exist normalized quantum states $\{|x_l\rangle\}$ such that $\varrho=\sum_lq_l|x_l\rangle\langle x_l|$, iff $\vec{q}$ is majorized by $\vec{p}$.\\ An immediate corollary of Theorem 2 is the existence of a {\em uniform} ensemble for {\em any} density matrix. Therefore, there exist normalized pure states $|x_1\rangle,\ldots,|x_{\mathcal{N}}\rangle$ such that $\varrho$ is an equal mixture of these states with probability $1/{\mathcal{N}}$ (${\mathcal{N}}\geq N$), i.e. $\varrho=\frac{1}{{\mathcal{N}}}\sum_{l=1}^{{\mathcal{N}}}|x_l\rangle\langle x_l|$. For the rest of discussion we assume that ${\mathcal{N}}=N$. Now, let us see how this uniform decomposition is related to the spectral decomposition. By using Theorem 1, we have $\frac{1}{\sqrt{N}}|x_k\rangle=\sum_{l=1}^N\sqrt{p_l}~{\mathcal{U}}_{kl}|w_l\rangle$. It is easy to see that if one chooses an $N\times N$ Fourier matrix (corresponding to discrete Fourier transformations \cite{fourier}) ${\mathcal{U}}_{kl}=\frac{1}{\sqrt{N}}e^{-2\pi i\frac{kl}{N}}$ ($k,l=0,\ldots,N-1$), and momentarily run all indices from 0 to $N-1$ (rather than 1 to $N$) this equation is satisfied. Then, by using a Fourier matrix one can find a uniform ensemble for any density matrix. If we define $C(t)=\sum_k\sqrt{p_k(t)}|w_k(0)\rangle\langle w_k(0)|$ and use the definition of $U(t)$, we can rewrite $|x_k(t)\rangle$ in the following matrix form \begin{eqnarray}\label{xw}&|x_k(t)\rangle=\sqrt{N}U(t)C(t){\mathcal{U}}|w_k(0)\rangle.\end{eqnarray} Now we show that the above mentioned uniform decomposition is useful in our discussion of geometric phase. Consider the following pure state of the combined system $sa$ \begin{eqnarray}\label{purification1} |\Phi(t)\rangle_{sa}=\frac{1}{\sqrt{N}}\sum_k|x_k(t)\rangle_s\otimes V(t)|a_k\rangle_a, \end{eqnarray} where $V(t)$ is the unitary evolution of the $|a_k\rangle$'s. This state is a legitimate purification of the density matrix $\varrho(t)$ of the system; $\varrho(t)=\text{tr}_a(|\Phi(t)\rangle_{sa}\langle\Phi(t)|)$. If $V(t)=I$, since $\langle x_k|x_k\rangle=1$ and all $|x_k(t)\rangle$ vectors enter with equal and constant probability of $\frac{1}{N}$ in the decomposition of the density matrix, it seems natural to consider our (generalized) PT conditions in the form of \begin{eqnarray}\label{xpt}\langle x_k(t)|\frac{\text{d}}{\text{d}t}|x_k(t)\rangle=0,~~k=1,\ldots,N,\end{eqnarray} that is, a density matrix undergoes a PT condition when all of the vectors in its uniform decomposition do so. Here a point is in order. It must be mentioned that, except the pure state case, this PT condition is generally different from the one considered in earlier literature \cite{sjoq1,nonunitary} \footnote{ Tong {\em et al.}'s PT condition \cite{nonunitary} imposes a constraint on the form of parallel transported evolution operator, $U^{\parallel}(t)$, as $\langle w_k(0)|U^{\parallel\dagger}\dot{U}^{\parallel}|w_k(0)\rangle=0, ~~k=1,\ldots,N,$ whereas Eq.~(\ref{xpt}) results into $\langle w_k(0)|{\mathcal{U}}^{\dagger}CU^{\parallel\dagger}\dot{U}^{\parallel} C{\mathcal{U}}|w_k(0)\rangle=0,~~k=1,\ldots,N.$}. 
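As a sanity check of the preceding construction (this numerical sketch is ours and only illustrates the statement; it is not part of the derivation), one can verify that the Fourier matrix indeed converts the spectral ensemble of an arbitrary density matrix into a uniform one:
\begin{verbatim}
# Sketch (illustration only): the Fourier matrix F_{kl} = exp(-2 pi i k l / N)/sqrt(N)
# maps the spectral ensemble {p_k, |w_k>} of a random density matrix into a
# uniform ensemble {1/N, |x_l>}, with |x_l> = sqrt(N) * sum_k sqrt(p_k) F_{lk} |w_k>.
import numpy as np

rng = np.random.default_rng(0)
N = 4
A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
rho = A @ A.conj().T
rho /= np.trace(rho).real                             # random full-rank density matrix

p, w = np.linalg.eigh(rho)                            # spectral ensemble
k = np.arange(N)
F = np.exp(-2j * np.pi * np.outer(k, k) / N) / np.sqrt(N)   # Fourier matrix

x = np.sqrt(N) * (w * np.sqrt(p)) @ F.T               # columns are the |x_l>

print(np.allclose(np.linalg.norm(x, axis=0), 1.0))    # every |x_l> is normalized
print(np.allclose((x @ x.conj().T) / N, rho))         # (1/N) sum_l |x_l><x_l| = rho
\end{verbatim}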
In general, in the purification (\ref{purification1}) ancillary vectors could also vary in time, and we have to find a natural picture for geometric phase in this case. Let us first remind a simple and useful property of Schmidt decomposition of bipartite pure states \cite{gen}. If $|\Phi\rangle_{ab}=\sum_kc_k|a_k\rangle_a|b_k\rangle_b$, then $(U\otimes V)|\Phi\rangle_{ab}=(UC{\mathcal{V}}^T\otimes I)\sum_k|a_k\rangle_a|b_k\rangle_b$, where $C$ is a diagonal matrix in the $\{|a_k\rangle\}$ basis defined as $C=\sum_kc_k|a_k\rangle\langle a_k|$ and ${\mathcal{V}}=\sum_{kk'}\langle b_{k}|V|b_{k'}\rangle|a_k\rangle\langle a_{k'}|$. Here, for notational purposes, we omit $T$ sign of ${\mathcal{V}}^T$. Now, noting this property and assuming that the basis vectors of the ancillary Hilbert space are $\{|w_k(0)\rangle\}$, one can rewrite Eq.~(\ref{purification1}) as \begin{eqnarray}\label{purification2}&|\Phi(t)\rangle_{sa}=\sum_k|\tilde{x}_k(t)\rangle_s\otimes|w_k(0)\rangle_a,\end{eqnarray} where $|\tilde{x}_k(t)\rangle=U(t)C(t){\mathcal{U}}{\mathcal{V}}(t)|w_k(0)\rangle$. This purification now results into the nonorthogonal decomposition $\varrho(t)=\sum_k|\tilde{x}_k(t)\rangle\langle\tilde{x}_k(t)|$ for the density matrix. Unlike the $\{|x_k(t)\rangle\}$ decomposition, now for a general ${\mathcal{V}}$, $\langle\tilde{x}_k(t)|\tilde{x}_k(t)\rangle$ is not time independent and, as well, is not equal for all $k$'s. However, if we consider the normalized vectors $|\hat{\tilde{x}}(t)\rangle=\frac{|\tilde{x}_k(t)\rangle}{||\tilde{x}_k(t)||}$ it still looks natural to consider our generalized PT condition to be in the following form \begin{eqnarray}\label{lastpt}\langle\hat{\tilde{x}}_k(t)|\frac{\text{d}}{\text{d}t}|\hat{\tilde{x}}_k(t)\rangle=0.\end{eqnarray} In terms of $|\tilde{x}_k(t)\rangle$ vectors this is equal to $\langle\tilde{x}_k(t)|\frac{\text{d}}{\text{d}t}|\tilde{x}_k(t)\rangle= \frac{1}{2}\frac{\text{d}}{\text{d}t}(\langle\tilde{x}_k(t)|\tilde{x}_k(t)\rangle)$, or equivalently in more detail it is \begin{eqnarray}\label{explicitPT}\aligned \langle & w_k(0)|{\mathcal{V}}^{\dagger}{\mathcal{U}}^{\dagger}CU^{\dagger}\dot{U}C{\mathcal{U}}{\mathcal{V}}+{\mathcal{V}}^{\dagger} {\mathcal{U}}^{\dagger}C\dot{C}{\mathcal{U}}{\mathcal{V}}+{\mathcal{V}}^{\dagger}{\mathcal{U}}^{\dagger}C^2{\mathcal{U}}\\&\times \dot{{\mathcal{V}}}|w_k(0)\rangle=\frac{1}{2}\frac{\text{d}}{\text{d}t}(\langle w_k(0)|{\mathcal{V}}^{\dagger}{\mathcal{U}}^{\dagger}C^2{\mathcal{U}}{\mathcal{V}}|w_k(0)\rangle).\endaligned \end{eqnarray} Now let us see what is the form of Uhlmann's PT condition. We note that $\textit{w}(t)$ operator reads as $\textit{w}(t)=U(t)C(t){\mathcal{U}}{\mathcal{V}}(t)$. Hence, the explicit form of Uhlmann's PT condition is \begin{eqnarray}\aligned\label{explicitul}&{\mathcal{V}}^{\dagger}{\mathcal{U}}^{\dagger}CU^{\dagger}\dot{U}C{\mathcal{U}}{\mathcal{V}} +{\mathcal{V}}^{\dagger} {\mathcal{U}}^{\dagger}C\dot{C}{\mathcal{U}}{\mathcal{V}}+{\mathcal{V}}^{\dagger}{\mathcal{U}}^{\dagger}C^2{\mathcal{U}} \dot{{\mathcal{V}}}\\ &\quad={\mathcal{V}}^{\dagger}{\mathcal{U}}^{\dagger}C\dot{U}^{\dagger}UC{\mathcal{U}}{\mathcal{V}}+{\mathcal{V}}^{\dagger} {\mathcal{U}}^{\dagger}\dot{C}C{\mathcal{U}}{\mathcal{V}}+\dot{{\mathcal{V}}}^{\dagger}{\mathcal{U}}^{\dagger}C^2{\mathcal{U}} {\mathcal{V}}. \endaligned\end{eqnarray} As is seen \textsc{lhs} of this equation is exactly the expression within bra-ket of the PT condition (\ref{explicitPT}). 
If sandwiched between $\langle w_k(0)|$ and $|w_k(0)\rangle$, Eq.~(\ref{explicitul}) gives rise to \begin{eqnarray}\label{dN}\aligned\text{\textsc{lhs} of (\ref{explicitPT})}&=\frac{1}{2}\langle w_k(0)|\textsc{lhs}+\text{\textsc{rhs} of (\ref{explicitul})}|w_k(0)\rangle\\&=\frac{1}{2}\frac{\text{d}}{\text{d}t}(\langle w_k(0)|{\mathcal{V}}^{\dagger}{\mathcal{U}}^{\dagger}C^2{\mathcal{U}}{\mathcal{V}}|w_k(0)\rangle)\hskip 2mm.\endaligned\end{eqnarray} This is what we wanted to show; by using Uhlmann's PT condition the generalized PT conditions (\ref{explicitPT}) are also satisfied. However, it must be noted that generally number of equations of the two PT conditions are not equal. In other words, Eq.~(\ref{explicitul}) is a matrix equation which constitutes $N^2$ different equations (for ${\mathcal{V}}$) though Eq.~(\ref{lastpt}) is just a set of $N$ equations. This simply means that there might be solutions of Eq.~(\ref{explicitPT}) that are not solutions of Eq.~(\ref{explicitul}). If it is assumed that ${\mathcal{V}}(t)=e^{-i\tilde{H}(t)}$, then solution of Eq.~(\ref{explicitul}) is as follows \cite{uhl1}\begin{eqnarray}\label{ulsol}\aligned -i\tilde{H}(t)=&-2\sum_{kk'}{\mathcal{U}}^{\dagger}|w_{k'}(0)\rangle\langle w_k(0)|{\mathcal{U}}\\&\times\int_0^t {\text{d}}t'\langle w_{k'}(t')|\dot{w}_k(t')\rangle \frac{\sqrt{p_{k'}(t')p_k(t')}}{p_{k'}(t')+p_{k}(t')}.\endaligned\end{eqnarray} Now it is easy to show that Eq.~(\ref{explicitPT}) can have solutions other than (\ref{ulsol}). For example, if we suppose that $[{\mathcal{U}}{\mathcal{V}},C]=0$, and ${\mathcal{U}}{\mathcal{V}}=\sum_k e^{-il_k(t)}|w_k(0)\rangle\langle w_k(0)|$, then Eq.~(\ref{explicitPT}) gives \begin{eqnarray}\label{li1} & l_k(t)=-i\int_0^t\text{d}t'\langle w_k(t')|\dot{w}_k(t')\rangle,\end{eqnarray} which does not generate a ${\mathcal{V}}(t)$ compatible with (\ref{ulsol}). This comes from the fact that to satisfy Eq.~(\ref{explicitPT}) we only need to have the diagonal terms of Uhlmann's PT condition, whereas off-diagonal terms of this equation may put extra constraints that are redundant for validity of Eq.~(\ref{explicitPT}). Now geometric phase can be simply defined a la Pancharatnam as \begin{eqnarray}\label{geom} \gamma_g(t)&=\text{arg}(\langle \Phi(0)|\Phi(t)\rangle)=\text{arg}(\sum_k \nu_k(t)e^{i\gamma_k(t)}),\end{eqnarray} where $\langle \tilde{x}_k(0)|\tilde{x}_k(t)\rangle=\nu_k(t)e^{i\gamma_k(t)}$, i.e. $\nu_k(t)$ ($\gamma_k(t)$) is the visibility (geometric phase) of the $k$-th component of $|\Phi(t)\rangle$. The explicit form is obtained by insertion of the definition of $|\tilde{x}_k(t)\rangle$ in this equation, which gives \begin{eqnarray}\label{}\aligned\gamma_g(t)=&\text{arg}(\sum_{kk'}\sqrt{p_{k'}(0)p_k(t)}\langle w_{k'}(0)|w_k(t)\rangle\\&\times\langle w_k(0)|{\mathcal{U}}{\mathcal{V}}(t){\mathcal{U}}^{\dagger}|w_{k'}(0)\rangle).\endaligned\end{eqnarray} This equation shows that geometric phase, as described here to be combined with Uhlmann's definition, generally retains a memory of the evolution of both system and the ancilla, that is, it is a general property of the whole system which depends on the history of the system as well as the history of the ancilla entangled with it \cite{ericsson}. In the remainder, we investigate how the earlier definitions of geometric phase \cite{sjoq1,nonunitary} can be obtained from the present framework as special cases. 
If we confine ourselves to a restriction of the solution of (\ref{ulsol}) for ${\tilde{\mathcal{V}}}(t)$, such that ${\tilde{\mathcal{V}}}_{kk'}(t)={\mathcal{V}}_{kk'}(t)\delta_{kk'}$ and has the property \begin{eqnarray}\label{commutation}&[\tilde{{\mathcal{V}}},{\mathcal{U}}^{\dagger}C^2{\mathcal{U}}]=0,\end{eqnarray} or equivalently $\tilde{{\mathcal{V}}}(t)=\sum_ke^{-il_k(t)}{\mathcal{U}}^{\dagger}|w_k(0)\rangle\langle w_k(0)|{\mathcal{U}}$, where $l(t)$ is defined as in Eq.~(\ref{li1}), then explicit form of $\gamma_g$ becomes \begin{eqnarray}\hskip -3mm&\gamma_g(t)=\text{arg}(\sum_k\sqrt{p_k(0)p_k(t)}\langle w_k(0)|w_k(t)\rangle e^{-il_k(t)}),\hskip 3mm\end{eqnarray} as in Ref. \cite{nonunitary}. Thus, in the context of the discussion of Ref.~\cite{ericsson2}, it can be said that the physical role of the commutation relation (\ref{commutation}) appears like removing memory effects of ancilla's evolution from geometric phase. Let us end by mentioning some remarks on the initial assumptions of the approach while our stress is still on derivation of earlier results and their possible generalizations. Based upon Theorem 2, it is seen that one can always choose ${\mathcal{N}}$, number of the vectors in uniform decomposition, such that ${\mathcal{N}}\geq N$. For example, we can assume that ${\mathcal{N}}=\text{dim}({\mathcal{H}}_s)$. Now we show how the whole framework can be modified in the degenerate case. Consider the evolution for the density matrix of the system from $\varrho(0)$ to $\varrho(t)=\sum_{k=1}^N\sum_{\mu=1}^{n_k}p_k(t)|w_k^{\mu}(t)\rangle\langle w_k^{\mu}(t)|$, where $p_k(t)$, $k=1,\ldots,N$, are the $n_k$-fold degenerate eigenvalues of $\varrho(t)$, and $|w_k^{\mu}(t)\rangle$, $\mu=1,\dots,n_k$, are considered the corresponding eigenvectors. In this case, the pure state of the total system is $|\Phi(t)\rangle_{sa}=\sum_{k=1}^N\sum_{\mu=1}^{n_k}|\tilde{x}_k^{\mu}(t)\rangle_s\otimes |w_{k}^{\mu}(0)\rangle_a$, where $|\tilde{x}_k^{\mu}(t)\rangle$ is defined as in Eq.~(\ref{purification2}) in which $|w_k(0)\rangle$ is replaced by $|w_k^{\mu}(0)\rangle$. Then, one notes that \begin{eqnarray}\label{deg3}\aligned\langle\Phi(0)|\Phi(t)\rangle=&\sum_{kk'\mu\mu'}\sqrt{p_k(0)p_{k'}(t)}\langle w_k^{\mu}(0)|w_{k'}^{\mu'}(t)\rangle\\&\quad\times\langle w_k^{\mu}(0)|{\mathcal{U}}{\mathcal{V}}(t){\mathcal{U}}^{\dagger}|w_{k'}^{\mu'}(0)\rangle, \endaligned\end{eqnarray} which is determined when all elements of ${\mathcal{V}}(t)$ are known. Now we choose our PT condition in this general case as \begin{eqnarray}\label{lastPT}\langle\hat{\tilde{x}}_k^{\mu}(t)|\frac{\text{d}}{\text{d}t} |\hat{\tilde{x}}_k^{\mu'}(t)\rangle=0,~~~~\mu,\mu'=1,\dots,n_k.\end{eqnarray} It can be checked that this PT condition can also be satisfied by assuming Uhlmann's PT condition, Eq.~(\ref{explicitul}). In this case, it is easily seen that the most general form for $\tilde{{\mathcal{V}}}$ which satisfies Eq.~(\ref{commutation}) is as follows \begin{eqnarray}\label{Vformdeg}&\tilde{{\mathcal{V}}}(t)=\sum_{k\mu\mu'}\alpha_k^{\mu\mu'}(t)~ {\mathcal{U}}^{\dagger}|w_k^{\mu}(0)\rangle\langle w_k^{\mu'}(0)|{\mathcal{U}}.\end{eqnarray} After some algebra and using the commutation relation (\ref{commutation}), it is obtained that $\alpha_k^{\mu\mu'}(t)= \langle w_k^{\mu}(0)|\textbf{P}e^{-\int_0^tU^{\dagger}(t')\dot{U}(t')\text{d}t'}|w_k^{\mu'}(0)\rangle$, where \textbf{P} denotes path ordering. After inserting this relation back into Eq.~(\ref{deg3}), non-Abelian factors show up in the geometric phase. 
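The path ordering entering $\alpha_k^{\mu\mu'}(t)$ can be evaluated in practice by time slicing. The following sketch (ours, purely illustrative; the two-level $U(t)$ below is an arbitrary toy choice, not taken from the text) shows the standard first-order product approximation of $\textbf{P}e^{-\int_0^t U^{\dagger}(t')\dot{U}(t')\text{d}t'}$:
\begin{verbatim}
# Sketch (illustration only): time-sliced path-ordered exponential
# P exp(- int_0^T U^dag(t) dU/dt dt) for a toy two-level unitary U(t)
# built from two non-commuting rotations.
import numpy as np

sy = np.array([[0.0, -1.0j], [1.0j, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def U(t):
    Rz = np.cos(t) * I2 - 1j * np.sin(t) * sz        # exp(-i t sigma_z)
    Ry = np.cos(t**2) * I2 - 1j * np.sin(t**2) * sy  # exp(-i t^2 sigma_y)
    return Rz @ Ry

T, steps = 1.0, 4000
ts = np.linspace(0.0, T, steps + 1)
dt = ts[1] - ts[0]
P = I2.copy()
for t0, t1 in zip(ts[:-1], ts[1:]):
    A = U(0.5 * (t0 + t1)).conj().T @ (U(t1) - U(t0)) / dt   # ~ U^dag dU/dt
    P = (I2 - A * dt) @ P                                    # later slices act on the left
print(np.round(P, 4))
print(np.allclose(P.conj().T @ P, I2, atol=1e-2))            # approximately unitary
\end{verbatim}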
In general, when degeneracies vary in time, a level--crossing like behavior can occur. In this situation, in the discussion of differentiability of the eigenvalues (and eigenvectors) the notion of ordering of the eigenvalues becomes important. For example, it can happen that the natural ordering as $p_1(t)\geq\dots\geq p_N(t)$ (for all $t$) destroys differentiability, thus, one has to seek for some ordering which respects it \cite{Bhatia}. If such an ordering can be found, then the operator $U(t)$, eigenvalues, and eigenvectors are still well--defined differentiable functions and our approach may be generalized as well. In summary, the notion of geometric phase of a mixed state undergoing nonunitary evolution has been investigated in a unifying picture in which two of the previous general definitions, Uhlmann's definition and kinematic approach, have been related to each other. In this formalism, we have used the idea of purification of state of a system by uniform decomposition of its density matrix rather than the spectral one, and by attaching a time varying ancilla to it. Then, as a natural choice for parallel transport condition, we have considered that a mixed state is undergoing parallel transport condition when all the (normalized) vectors of its corresponding purification are subject to this condition. This generalized parallel transport condition is different from the ones defined previously in the literature. It has been shown that the new conditions are satisfied when Uhlmann's condition holds. However, because of different numbers of equations in the two parallel transport conditions, the generalized parallel conditions are only diagonal equations of Uhlmann's condition. Finally, it has been shown how to recover earlier definitions of geometric phase of a mixed state. Extension of the method to the more general cases of degenerate density matrices with time varying degeneracies have also been discussed. We thank N. Paunkovic for useful discussions. This work was supported by EU project TOPQIP under Contract No. IST-2001-39215. \begin{eqnarray}gin{thebibliography}{99} \bibitem{panch} S. Pancharatnam, Proc. Indian Acad. Sci., Sect. A {\bf 44}, 247 (1956). \bibitem{berry}M. V. Berry, Proc. Roy. Soc. London, Ser. A {\bf 329}, 45 (1984). \bibitem{anandan} Y. Aharonov and J. S. Anandan, Phys. Rev. Lett. {\bf 85} 1593 (1987); J. Samuel and R. Bhandari, Phys. Rev. Lett. {\bf 60}, 2339 (1988); N. Mukunda and R. Simon, Ann. Phys. (N.Y.) {\bf 228}, 205 (1993). \bibitem{shapere}A. Shapere and F. Wilczek, {\em Geometric Phase in Physics} (World Scientific, Singapore, 1989). \bibitem{zanardi} P. Zanardi and M. Rasetti, Phys. Lett. A {\bf 264}, 94 (1999); J. Pachos, P. Zanardi, and M. Rasetti, Phys. Rev. A {\bf 61}, 010305(R) (2000); J. A. Jones {\em et al.}, Nature (London) {\bf 403}, 869 (2000); A. Ekert {\em et al.}, J. Mod. Opt. {\bf 47}, 2501 (2000); L. M. Duan, J. I. Cirac, and P. Zoller, Science {\bf 292}, 1695 (2001). \bibitem{rob}A. Nazir {\em et al.}, Phys. Rev. A {\bf 65}, 042303 (2002); A. Blais and A. -M. S. Tremblay, Phys. Rev. A {\bf 67}, 012308 (2003); S. -L. Zhu and P. Zanardi, Phys. Rev. A {\bf 72}, 020301(R) (2005); D. Parodi {\em et al.}, quant-ph/0510056. \bibitem{uhl1}A. Uhlmann, Rep. Math. Phys. {\bf 24}, 229 (1986); Ann. Phys. (Leipzig) {\bf 46}, 63 (1989); Lett. Math. Phys. {\bf 21}, 229 (1991). \bibitem{sjoq1}E. Sj\"oqvist {\em et al.}, Phys. Rev. Lett. {\bf 85}, 2845 (2000). \bibitem{singh} K. Singh {\em et al.}, Phys. Rev. A {\bf 67}, 032106 (2003). 
\bibitem{chaturvedi} S. Chaturvedi {\em et al.}, Eur. Phys. Jour. C {\bf 35}, 413 (2004). \bibitem{gam}D. Gamliel and J. H. Freed, Phys. Rev. A {\bf 39}, 3238 (1989). \bibitem{ericsson2}M. Ericsson {\em et al.}, Phys. Rev. A {\bf 67}, 020101(R) (2003). \bibitem{pix}J. G. Peixoto de Faria {\em et al.}, Europhys. Lett. {\bf 62}, 782 (2003). \bibitem{pati}A. K. Pati, Phys. Rev. A {\bf 52}, 2576 (1995). \bibitem{carollo1}A. Carollo {\em et al.}, Phys. Rev. Lett. {\bf 90}, 16402 (2003). \bibitem{carollo2}A. Carollo {\em et al.}, Phys. Rev. Lett. {\bf 92}, 020402 (2004). \bibitem{nonunitary}D. M. Tong {\em et al.}, Phys. Rev. Lett. {\bf 93}, 080405 (2004). \bibitem{lidar} M. S. Sarandy and D. A. Lidar, quant-ph/0507012. \bibitem{open} H. -P. Breuer and F. Petruccione, {\em The Theory of Open Quantum Systems } (Oxford University Press, Oxford, 2002). \bibitem{lindblad}G. Lindblad, Commun. Math. Phys. {\bf 48}, 119 (1976). \bibitem{ericsson} M. Ericsson {\em et al.}, Phys. Rev. Lett. {\bf 91}, 090405 (2003). \bibitem{slater} P. B. Slater, Lett. Math. Phys. {\bf 60}, 123 (2002). \bibitem{levay}P. L\'{e}vay, J. Phys. A: Gen. Math. {\bf 37}, 4593 (2004). \bibitem{gen} M. Shi and J. F. Du, quant-ph/0501006. \bibitem{tidstrom} J. Tidstr\"om and E. Sj\"oqvist, Phys. Rev. A {\bf 67}, 032110 (2003). \bibitem{hugh}L. P. Hughston, R. Jozsa, and W. K. Wootters, Phys. Lett. A {\bf 183}, 14 (1993). \bibitem{prob} M. A. Nielsen, Phys. Rev. A {\bf 62}, 052308 (2000). \bibitem{fourier} E. O. Brigham, {\em The Fast Fourier Transform} (Prentice--Hall, Englewood Cliffs, 1974). \bibitem{Bhatia}R. Bhatia, {\em Matrix Analysis} (Springer--Verlag, New York, 1997). \end{thebibliography} \end{document}
\begin{document}
\let\thefootnote\relax\footnote{$^{\ast}$ Permanent affiliation: University of Science, Vietnam National University, Hanoi.}
\begin{abstract}
This paper is devoted to the study of non-negative, non-trivial (classical, punctured, or distributional) solutions to the higher order Hardy--H\'enon equation
\[
(-\Delta)^m u = |x|^\sigma u^p
\]
in $\R^n$ with $p > 1$. We show that the condition
\[
n - 2m - \frac{2m+\sigma}{p-1} >0
\]
is necessary for the existence of distributional solutions. For $n \geq 2m$ and $\sigma > -2m$, we prove that any distributional solution satisfies an integral equation and a weak super polyharmonic property. We establish some sufficient conditions for a punctured or classical solution to be a distributional solution. As an application, we show that if $n \geq 2m$ and $\sigma > -2m$, there is no non-negative, non-trivial, classical solution to the equation if
\[
1 < p < \frac{n+2m+2\sigma}{n-2m}.
\]
Finally, we prove that for $n > 2m$, $\sigma > -2m$ and
$$p \geq \frac{n+2m+2\sigma}{n-2m},$$
there exist positive, radially symmetric, classical solutions to the equation.
\end{abstract}
\date{\bf \today \; at \, \currenttime}
\subjclass[2010]{Primary 35B53, 35J91, 35B33; Secondary 35B08, 35B51, 35A01}
\keywords{Hardy--H\'enon polyharmonic equation; Distributional solution; Existence and non-existence; Weak and strong super-polyharmonic properties}
\maketitle
\section{Introduction}
In this note, we are interested in non-negative, non-trivial solutions to the following higher order elliptic equation
\begin{subequations}\label{eqMAIN}
\begin{align}
(-\Delta)^m u = |x|^\sigma u^p \tag*{ \eqref{eqMAIN}$_\sigma$}
\end{align}
\end{subequations}
in $\R^n$ with $m \geq 2$, $p>1$ and $\sigma \in \R$. Traditionally, the equation \eqref{eqMAIN}$_\sigma$ with $m =1 $ is called the H\'enon (resp. Hardy or Lane--Emden) equation if $\sigma > 0$ (resp. $\sigma <0$ or $\sigma =0$). In the same way, for $m > 1$, we call \eqref{eqMAIN}$_\sigma$ the higher order H\'enon, Hardy, or Lane--Emden equation according to the sign of $\sigma$. Since we are mostly interested in $\sigma \ne 0$ and $m \geq 2$, we shall call \eqref{eqMAIN}$_\sigma$ the higher order Hardy--H\'enon equation. In the literature, equations of the form \eqref{eqMAIN}$_\sigma$ have captured a lot of attention in the last decades, since they arise in various geometric and physical problems. To tackle \eqref{eqMAIN}$_\sigma$, various types of solutions have been introduced and studied, such as classical solutions, weak solutions, distributional solutions, singular solutions, etc. For the reader's convenience, let us make precise the notions of solution that we are interested in here. First, a function $u$ is called a \textbf{classical solution} to \eqref{eqMAIN}$_\sigma$ if it belongs to the class
\begin{equation*}\label{eqClassicalSolutionClass}
C^{2m} (\R^n) \;\; \text{ if } \sigma \geq 0, \quad C(\R^n) \cap C^{2m} (\R^n \backslash \{ 0 \} ) \;\; \text{ if } \sigma < 0;
\end{equation*}
hence the equation \eqref{eqMAIN}$_\sigma$ is verified pointwise, except possibly at $x=0$ if $\sigma < 0$. Second, we can often drop the definition of $u$ at the origin, namely we only require that
\[
u \in C^{2m} (\R^n \backslash \{ 0 \} ).
\]
In this case, $u$ is called a \textbf{punctured solution} to \eqref{eqMAIN}$_\sigma$.
A typical example of punctured solutions is the \textit{standard singular solution} to \epsilonqref{eqMAIN}$_\sigmaigma$ in the form $C_0 |x|^{-\theta}$ for suitable $p>1$, $\sigmaigma > -2m$ and $C_0>0$. Here and after, the constant $\theta$ is as follows \begin{align} \label{new61} \theta := \frac{2m + \sigmaigma}{p-1} \epsilonnd{align} which, as we shall see, play an important role in the work. At last, we call $u$ a \textbf{distributional solution} to \epsilonqref{eqMAIN}$_\sigmaigma$ if \begin{equation*}\label{eqDistributionalSolutionClass} u \in L_{\rm loc}^1 (\R^n), \quad |x|^\sigmaigma u^p \in L_{\rm loc}^1 (\R^n), \epsilonnd{equation*} and \epsilonqref{eqMAIN}$_\sigmaigma$ is satisfied in the sense of distributions, that is, \[ \int_{\R^n} u (-\Delta)^m \varphi = \int_{\R^n} |x|^\sigmaigma u^p \varphi, \quad \forall\; \varphi \in C_0^\infty(\R^n). \] In the literature, a distributional solution is sometimes called \textit{very weak} solution to distinguish with \textit{weak} solutions belonging to suitable Sobolev space required by variational approach. To avoid repetition, let us presume throughout this paper that $$ \mbox{\it by a solution $u$, we always mean that $u$ is \textbf{non-negative and non-trivial}}. $$ \indent Now let us briefly go through some literature review for classical and punctured solutions to the equation \epsilonqref{eqMAIN}$_\sigmaigma$. There are two important numbers, $p_{\mathsf S}nma$ and $p_{\mathsf{C}}nma$ associated with \epsilonqref{eqMAIN}$_\sigmaigma$, called respectively the critical Sobolev and Serrin exponents, which are given by \[ p_{\mathsf S}nma = \begin{cases} \dfrac{n+2m+ 2\sigmaigma}{n-2m} & \text{ if } n > 2m,\\ +\infty & \text{ if } n \leq 2m, \epsilonnd{cases} \] and \[ p_{\mathsf{C}}nma = \begin{cases} \dfrac{n + \sigmaigma}{n-2m} & \text{ if } n > 2m,\\ +\infty & \text{ if } n \leq 2m. \epsilonnd{cases} \] These critical exponents, generalizing the classical ones for the case $\sigmaigma = 0$, are important because the solvability of the equation \epsilonqref{eqMAIN}$_\sigmaigma$ often changes when $p$ passes through them. For the autonomous case, i.e.~$\sigmaigma = 0$, the existence of classical solutions to the equation \epsilonqref{eqMAIN}$_0$ is well-understood for general $m$. It is well-known that \epsilonqref{eqMAIN}$_0$ has no classical solution if \[ 1 < p < p_{\mathsf S}nmz, \] see \cite{GS81, CLi91} for $m=1$, \cite{Lin98, Xu00} for $m=2$, and \cite{WX99} for arbitrary $m$. On the other hand, if $p \geq p_{\mathsf S}nmz$, the equation \epsilonqref{eqMAIN}$_0$ always has positive classical solutions; see for instance \cite{GS81, Lin98, WX99, LGZ06b}. For interested readers, we also refer to \cite{NNPY18} for exhaustive existence and non-existence results of classical solutions to $\Delta^m u = \partialm u^p$ in $\R^n$, with all $n, m \geq 1$ and $p \in \R$. The situation $\sigmaigma \ne 0$ is also well-known for the Laplacian. Fix $m = 1$, Ni proved in \cite{Ni82, Ni86} the existence of classical solution for $p\geq p_{\mathsf S} (1, \sigmaigma)$ with $\sigmaigma > -2$. Hence, we are left with the subcritical case $p < p_{\mathsf S}(1,\sigmaigma)$. As far as we know, the subcritical case was firstly classified by Reichel and Zou. In \cite{RZ00}, they considered a cooperative semilinear elliptic system with a new development of the moving spheres method. 
Among others, the result of Reichel and Zou indicates that \epsilonqref{eqMAIN}$_\sigmaigma$ with $m = 1$ does not admit any classical solution if $1<p<p_{\mathsf S}(1, \sigma)$ and $\sigmaigma > -2$; see \cite[Theorem 2]{RZ00}. The non-existence result of Reichel and Zou was revisited by Phan and Souplet in \cite[Theorem 1.1]{PhanSouplet}; and a new proof of non-existence of \textit{bounded} solutions in the case $n=3$ was provided by using the technique introduced in \cite{SZ96}. Recently, Guo and Wan study the case of quasilinear equations in \cite{GW17}. On the other hand, as indicated by Mitidieri and Pohozaev in \cite[Theorem 6.1]{MP01}, Dancer, Du and Guo in \cite[Theorem 2.3]{DDG11} (see also Brezis and Cabr\'e \cite{BC98}), the condition $\sigmaigma > -2$ is necessary for the existence of punctured solutions to \epsilonqref{eqMAIN}$_\sigmaigma$ in the case $m=1$. Thus we have a complete picture for the existence problem of classical solutions to \epsilonqref{eqMAIN}$_\sigmaigma$ with $m=1$ and $p > 1$. For general polyharmonic situation $m \geq 2$, the existence of solutions to \epsilonqref{eqMAIN}$_\sigmaigma$ with $\sigmaigma \ne 0$ is less understood. As above, it is natural to expect that $-2m$ serves as a threshold for $\sigmaigma$. Remarkably, Mitidieri and Pohozaev confirmed this fact by showing that if $\sigmaigma = -2m$, then there is no punctured super-solution to \epsilonqref{eqMAIN}$_{-2m}$ for any $p > 1$ and any $n, m \geq 1$, even in the distributional sense; see \cite[Theorem 9.1]{MP01}. Therefore, we will always assume $\sigmaigma \ne -2m$ throughout the paper. There are various attempts to generalize Reichel--Zou's result to polyharmonic operator $m > 1$, see for example \cite{CL16, DaiQin-Liouville-v6} and the references there in. In the situation of \epsilonqref{eqMAIN}$_\sigmaigma$, it is natural to consider the system of $(u, -\Delta u, ... , (-\Delta)^{m-1}u)$. However, notice that we cannot directly apply the non-existence result of Reichel and Zou since the sign of intermediate $(-\Delta)^i u$ are unknown yet. Indeed, as remarked already in \cite{Lin98, Xu00, WX99} for $\sigmaigma = 0$ case, the observation \[ (-\Delta)^i u > 0 \quad \text{ for all } \; 1 \leq i \leq m-1, \] which is the so-called \textit{super polyharmonic property} (SPH property for short), is essential in the study of polyharmonic equations. If the above SHP property actually holds for classical solutions to \epsilonqref{eqMAIN}$_\sigmaigma$ in $\R^n \backslash\{0\}$, then we can apply the non-existence result in \cite{RZ00} to get a Liouville result for classical solutions to \epsilonqref{eqMAIN}$_\sigmaigma$ for $\sigmaigma > -2$. From now on, we shall refer the above pointwise estimate as the \textit{strong SPH property} to distinguish with a weak version to be precised later. Technically, the strong or weak SPH property serves an important tool as a kind of maximum principle which usually lacks due to polyharmonic operator. In a series of papers starting from \cite{ChenLi13}, Chen, Li and their collaborators proposed an interesting approach to study polyharmonic equations. They showed that one can transfer a differential equation to a suitable integral equation if the strong SPH property is valid. 
More precisely, the SPH property, if holds, is crucial in order to transform \epsilonqref{eqMAIN}$_\sigmaigma$ into the integral equation (for $n > 2m$) \begin{equation*} u(x) = C(2m)\int_{\R^n} \frac{ |y|^\sigmaigma u^p(y)}{|x-y|^{n-2m} } dy \epsilonnd{equation*} by using Chen--Li's trick. Here $C(\alpha)$ denotes \begin{equation}\label{eq-CAlpha} C(\alpha) := \Gamma \big( \frac {n-\alpha}2 \big) \Big[ 2^\alpha \partiali^{n/2} \Gamma \big(\frac \alpha 2 \big) \Big]^{-1}, \quad \forall\; 0 < \alpha < n, \epsilonnd{equation} and $\Gamma$ is the Riemann Gamma function. Therefore, in order to study \epsilonqref{eqMAIN}$_\sigmaigma$, it is tempting to understand the strong SPH property of solutions, at least for the \textit{full range} $\sigmaigma > -2m$. To the best of our knowledge, the first work considering the strong SPH property to classical solutions to \epsilonqref{eqMAIN}$_\sigmaigma$ with $\sigmaigma \ne 0$ is due to Lei \cite[Theorem 2.1]{Lei13} in which the case $-2< \sigmaigma \leq 0$ was examined. The case $\sigmaigma \geq 0$ was studied by Fazly, Wei and Xu in \cite{FWX15} for $m=2$; by Cheng and Liu in \cite{CL16} for arbitrary $m \geq 1$. Recently, Dai, Peng and Qin proved in \cite{DaiPengQin-Liouville-v4} the strong SPH property for classical solutions with \begin{align}\label{dpq18} \left\{ \begin{aligned} &\text{ either } \; -2-2p \leq \sigmaigma < 0; \\ &\text{ or }\; -2m < \sigmaigma < 0 \; \text{ and } \; u(x) = o(|x|^2) \; \text{ as }\; |x| \to \infty. \epsilonnd{aligned} \right. \epsilonnd{align} Clearly, when $m>1+p$ and without assuming $u(x) = o(|x|^2)$ at infinity, there is a gap \[ -2m < \sigmaigma < -2-2p \] for $\sigmaigma$, which is not covered in \cite{DaiPengQin-Liouville-v4}. In other words, fixing any $\sigmaigma \in ({-2m}, 0)$ and $m \geq 3$, the previous works cannot cover the range $1<p<1-\sigmaigma/2$. This limitation is one of our motivations to work on \epsilonqref{eqMAIN}$_\sigmaigma$. Unlike most of existing works in the literature, which were mainly concentrated on the classical solutions, in this work we pursue a very different route. Roughly speaking, we would like to understand more about the distributional solutions, including connections between the three classes of solutions mentioned above. Following this strategy, we first show a very general non-existence result for distributional solutions to \epsilonqref{eqMAIN}$_\sigmaigma$. \begin{theorem}[Liouville result for distributional solutions]\label{thm-GeneralExistence-Distributional} Let $n, m \geq 1$, $p > 1$ and $\sigmaigma \in \R$. If \begin{align} \label{new002} n - 2m - \theta \leq 0, \epsilonnd{align} then \epsilonqref{eqMAIN}$_\sigmaigma$ has no distributional solution. \epsilonnd{theorem} Clearly, an immediate consequence of Theorem \ref{thm-GeneralExistence-Distributional} is that under the condition $\sigmaigma > -2m$, if \[ \left\{ \begin{aligned} &\text{ either } \; n \leq 2m\\ &\text{ or } \; n > 2m \; \text{ and } \; 1< p \leq p_{\mathsf{C}}nma, \epsilonnd{aligned} \right. \] then no distributional solution exists for \epsilonqref{eqMAIN}$_\sigmaigma$. As an interesting application, we obtain a Liouville result for classical solutions in the critical case $n=2m$. \begin{proposition}\label{n=2m} Let $n = 2m \geq 2$, $\sigmaigma > -2m$, and $p > 1$, then there is no classical solution to \epsilonqref{eqMAIN}$_\sigmaigma$. 
\epsilonnd{proposition} Notice that the Liouville result for $n=2m$ was already obtained in \cite[Theorem 1.1]{CDQ18} under the extra condition \epsilonqref{dpq18} when $\sigma$ is negative. Our observation is that when $n = 2m$ and $\sigmaigma > -2m$, any classical solution to \epsilonqref{eqMAIN}$_\sigmaigma$ is a distributional one, see Lemma \ref{lem-Classic->Distribution} below. Notice that for polyharmonic equation $m \geq 2$, a classical solution is not always a distributional one, see Remark \ref{rem:j1} below. Another interesting point we want to draw is that the condition $n-2m-\theta >0$ is sufficient and necessary for a punctured or classical solution to \epsilonqref{eqMAIN}$_\sigmaigma$ to be also a distributional solution; see Proposition \ref{prop-Strong->Distribution} below. We hope to understand more about distributional solutions to \epsilonqref{eqMAIN}$_\sigmaigma$. To this purpose, we use a general approach developed in the work of Caristi, D'Ambrosio and Mitidieri \cite{CAM08}, where the authors proposed an idea to gain the integral representation formula for any distributional solutions to the general equation \begin{equation*}\label{eq-CAM} (-\Delta)^m u = \mu \epsilonnd{equation*} with a Radon measure $\mu$. Following the approach developed in \cite{CAM08}, we can claim \begin{proposition}[integral equation]\label{prop-IntegralEquation} Let $n>2m$, $\sigmaigma > -2m$, and $p>1$, any distributional solution $u$ to \epsilonqref{eqMAIN}$_\sigmaigma$ solves the integral equation \begin{align} \label{eqIntegralEquation} u(x) = C(2m) \int_{\R^n} \frac{ |y|^\sigmaigma u^p(y)}{|x-y|^{n-2m} } dy \epsilonnd{align} for almost everywhere $x \in \R^n$. Here $C(2m)$ is the constant given by \epsilonqref{eq-CAlpha}. \epsilonnd{proposition} Combining with Theorem \ref{thm-GeneralExistence-Distributional} and Lemma \ref{lem-Classic->Distribution} below, the above representation formula \epsilonqref{eqIntegralEquation} yields the following Liouville theorem for classical solutions to \epsilonqref{eqMAIN}$_\sigmaigma$, which significantly improves the existing results on this subject. \begin{theorem}[Liouville result for classical solutions]\label{thm-Liouville-classical-upper} Let $n \geq 2m$, $\sigmaigma > -2m$, and \[ 1 < p < p_{\mathsf S}nma. \] Then the equation \epsilonqref{eqMAIN}$_\sigmaigma$ does not admit any classical solution. \epsilonnd{theorem} Theorem \ref{thm-Liouville-classical-upper} is \textit{optimal} seeing Theorem \ref{thm-classical-supercritical} below. In view of Theorem \ref{thm-GeneralExistence-Distributional}, it is obvious that Theorem \ref{thm-Liouville-classical-upper} is not true for distributional solutions or punctured solutions to \epsilonqref{eqMAIN}$_\sigmaigma$, because of the example $C_0|x|^{-\theta}$ with $p_{\mathsf{C}}nma < p < p_{\mathsf S}nma$. Another consequence of the representation formula \epsilonqref{eqIntegralEquation} is the following result for classical solutions, which also generalizes the previous works. \begin{proposition}[strong SPH property]\label{prop-StrongSPH} Let $n>2m$, $\sigmaigma > -2m$, and $p \geq p_{\mathsf S}nma$. Then any classical solution $u$ to \epsilonqref{eqMAIN}$_\sigmaigma$ enjoys the strong SPH property, namely \begin{align} \label{SPHi} (-\Delta)^{m - i} u > 0 \quad \mbox{in }\; \R^n\backslash\{0\} \epsilonnd{align} for any $1 \leq i \leq m-1$. Moreover, $u$ is positive in $\R^n$. 
\epsilonnd{proposition} Following the argument leading to the proof of Proposition \ref{prop-StrongSPH}, the strong SPH property \epsilonqref{SPHi} actually holds for any $p>1$. However, in view of Theorem \ref{thm-Liouville-classical-upper}, it is not necessary to treat the case $1<p<p_{\mathsf S}nma$. In section \ref{apd-NewProofSPH}, we will show a result on strong SPH property more general than Proposition \ref{prop-StrongSPH} above, by using completely different arguments; see Theorem \ref{SPHbis} below. A quick consequence of this more general SPH property indicates that \epsilonqref{SPHi} still holds for some $\sigmaigma < -2m$. Now we turn our attention to the case $p \geq p_{\mathsf S}nma$ with necessarily $n > 2m$. In this regime, we shall establish the following existence result. \begin{theorem}[existence for classical solutions]\label{thm-classical-supercritical} Let $n > 2m$ and $\sigmaigma>-2m$. For any $p \geq p_{\mathsf S}nma$, the equation \epsilonqref{eqMAIN}$_\sigmaigma$ always admits radial positive classical solutions. \epsilonnd{theorem} As can be easily recognized, the existence result in Theorem \ref{thm-classical-supercritical} consists of two cases: $p=p_{\mathsf S}nma$ and $p > p_{\mathsf S}nma$. For the former situation, the result is well-known since it is related to the existence of optimal functions for higher order Hardy-Sobolev inequality; see \cite[Section 2.4]{Lions-1985}. For the later case, as mentioned earlier, it is a natural generalization of a classical result in \cite{Ni86} for $m=1$. To obtain such an existence result, Ni first used a fixed point argument to obtain a local solution, and followed an interesting comparison argument to realize that such a solution is indeed a global one. For $m=2$ and $\sigmaigma = 0$, it was obtained in \cite{GG06}, where shooting method was applied with suitable $\Delta u(0)$. To ensure that such a solution is actually global, the authors used a comparison principle for polyharmonic operator given in \cite{KR}. The existence result with arbitrary $m \geq 1$ was proved in \cite{LGZ06b} for $\sigmaigma =0$ and $p > p_{\mathsf S} (m, 0)$. The case $\sigmaigma > -2$ and $p>p_{\mathsf S}nma$ could be covered by Villavert's approach in \cite{Vil14}; see also \cite{LV16}. As far as we know, there is no general proof for $m \geq 2$ and $\sigmaigma > -2m$. Our paper is organized as follows: \tableofcontents \sigmaection{Preliminaries} \label{sec-preliminaries} Throughout the paper, by $B_R$ we mean the Euclidean ball with radius $R > 0$, centered at the origin. For brevity, the following notation for volume and surface integrals shall be used: \[ \int_{B_R} v := \int_{B_R} v(x) dx,\quad \int_{\partialartial B_R} w := \int_{\partialartial B_R} w(x) d\sigmaigma_x. \] Since our approach is based on integral estimates, frequently we make use the following cut-off function: Let $p_{\mathsf S}i$ be a smooth radial function satisfying \begin{align} \label{cutoff} {\mathbbm 1}_{B_1} \leq p_{\mathsf S}i \leq {\mathbbm 1}_{B_2}. \epsilonnd{align} We use also the following convention: for any function $f$ \[ \Delta^{k/2} f = \begin{cases} \Delta^{k/2} f & \text{ if $k$ is even},\\ \nabla \Delta^{(k-1)/2} f & \text{ if $k$ is odd}. \epsilonnd{cases} \] In this section, we establish several elementary estimates. We start with $L^p$-estimate for distributional and punctured solutions. Such estimates can be called Serrin--Zou type estimates; see \cite{SZ96}. 
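Before stating the lemmas, let us illustrate numerically, in dimension one and for $m=p=2$, the basic test-function estimate $|\Delta^m \phi_R| \lesssim R^{-2m}\phi_R^{1/p}$ with $\phi_R = [\psi(\cdot/R)]^q$ and $q = 2mp/(p-1)$, which is used repeatedly below. The sketch is ours (it is not part of the proofs) and uses the standard exponential cut-off profile; the only point is that the relevant ratio is bounded uniformly in $R$.
\begin{verbatim}
# 1-D numerical sketch (ours, not part of the proofs): with the exponential
# cut-off profile psi and q = 2mp/(p-1), the ratio
#   |d^{2m}/dx^{2m} [psi(x/R)]^q| / ( R^{-2m} [psi(x/R)]^{q/p} )
# is a function of x/R only, hence bounded uniformly in R.
import numpy as np
import sympy as sp

m, p = 2, 2
q = sp.Rational(2 * m * p, p - 1)            # q = 8 here; note q/p = q - 2m
x, R = sp.symbols('x R', positive=True)

g = lambda t: sp.exp(-1 / t)                 # smooth transition profile
s = x / R
psi = g(2 - s) / (g(2 - s) + g(s - 1))       # equals 1 near s = 1, 0 near s = 2
phi = psi**q

ratio = sp.Abs(sp.diff(phi, x, 2 * m)) / (R**(-2 * m) * psi**(q - 2 * m))
f = sp.lambdify((x, R), ratio, 'numpy')

for Rval in (1.0, 4.0, 16.0):
    xs = np.linspace(1.001 * Rval, 1.999 * Rval, 400)   # transition region R < x < 2R
    print(Rval, float(np.max(f(xs, Rval))))              # same bound for every R
\end{verbatim}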
\begin{lemma}\label{lem:L1Estimate-u^p} Let $p > 1$ and $u$ be a distributional solution to \epsilonqref{eqMAIN}$_\sigmaigma$, then we have \[ \int_{B_R} |x|^\sigmaigma u^p \lesssim R^{n-2m -\theta}, \quad \mbox{for any $R>0$.} \] \epsilonnd{lemma} \begin{proof} Let $R>0$ and consider the following function \[ \partialhi_R (x) = \Big[ p_{\mathsf S}i \big( \frac x R \big) \Big]^q, \] where $q := 2mp/(p-1)$. Clearly \[ |\Delta^m \partialhi_R (x)| \lesssim R^{-2m} \Big[ p_{\mathsf S}i \big( \frac xR \big)\Big]^{q-2m} =R^{-2m} \partialhi_R^{1/p}. \] Testing the equation \epsilonqref{eqMAIN}$_\sigmaigma$ with the smooth function $\partialhi_R$, we obtain \begin{align*} \int_{\R^n} |x|^\sigmaigma u^p \partialhi_R &= \int_{\R^n} u (-\Delta)^m \partialhi_R \\ &\leq \int_{B_{2R} \backslash B_R } u \big| (-\Delta)^m \partialhi_R \big| \lesssim R^{-2m} \int_{B_{2R} \backslash B_R} u \partialhi_R^{1/p}. \epsilonnd{align*} Keep in mind that in $B_{2R} \backslash B_R$ there holds \[ |x|^{-\frac \sigmaigma {p-1}} \leq \begin{cases} (2R)^{-\frac \sigmaigma {p-1}} & \text{ if } \sigmaigma \leq 0,\\ R^{-\frac \sigmaigma {p-1}} & \text{ if } \sigmaigma > 0 . \epsilonnd{cases} \] Now application of H\"older's inequality gives \begin{align*} \int_{B_{2R} \backslash B_R} u \partialhi_R^{1/p} &\leq \Big(\int_{B_{2R} \backslash B_R} |x|^{-\frac \sigmaigma {p-1}} \Big)^{(p-1)/p} \Big(\int_{B_{2R} \backslash B_R} |x|^\sigmaigma u^p \partialhi_R \Big)^{1/p}\\ &\lesssim R^\frac{n(p-1)-\sigmaigma}{p} \Big(\int_{B_{2R} \backslash B_R} |x|^\sigmaigma u^p \partialhi_R \Big)^{1/p}\\ &\lesssim R^\frac{n(p-1)-\sigmaigma}{p} \Big(\int_{B_{2R} } |x|^\sigmaigma u^p \partialhi_R \Big)^{1/p} . \epsilonnd{align*} Hence \[ \int_{B_{2R}} |x|^\sigmaigma u^p \partialhi_R \lesssim R^{n-\frac{2mp+ \sigmaigma}{p-1}} =R^{n-2m -\theta} \] for any $R>0$ as claimed. \epsilonnd{proof} With exactly the same idea but a different cut-off function $${\mathbbm 1}_{B_2\backslash B_1} \leq \xi \leq {\mathbbm 1}_{B_3\backslash B_{1/2}}$$ instead of $p_{\mathsf S}i$, we get the following estimate for punctured solutions, so we omit the proof. \begin{lemma}\label{lem:new1} Let $u$ be a punctured solution to \epsilonqref{eqMAIN}$_\sigmaigma$ with $p > 1$, then \[ \int_{B_{2R}\backslash B_R} |x|^\sigmaigma u^p \lesssim R^{n-2m -\theta}, \quad \mbox{for any $R>0$.} \] \epsilonnd{lemma} Next we will prove an $L^1$-estimate for any $\Delta^i u$ on $B_R$ with $1 \leq i \leq m-1$. To this aim, we make use of the following interpolation inequality on $B_R$. \begin{lemma}\label{lem:InterpolationInequality} Let $u$ be a non-negative function such that $u \in L^1_{\rm loc}(\R^n)$ and $\Delta^m u \in L^1_{\rm loc}(\R^n)$. Then $\Delta^i u \in L^1_{\rm loc}(\R^n)$ for any $1 \leq i \leq m-1$. Furthermore, we have \begin{align} \label{new001} \int_{B_{R/2}} |\Delta^i u | \lesssim R^{2m-2i} \int_{B_R} |\Delta^m u | + R^{-2i}\int_{B_R \backslash B_{R/2}} u \epsilonnd{align} for any $R>0$. \epsilonnd{lemma} \begin{proof} The fact $\Delta^i u \in L^1_{\rm loc}(\R^n)$ for any $1 \leq i \leq m-1$ is standard. For any $R > 0$, consider the equation $\Delta^m v = \Delta^m u$ in $B_R$ with the Navier boundary conditions. Then $v \in W^{2m+1, q}(B_R)$ for suitable $q>1$. Moreover, $v-u$ is a polyharmonic function, hence smooth in $B_R$; see \cite{Mit18}. Therefore, $\Delta^i u$ is locally integrable in $B_R$ for any $1 \leq i \leq m-1$. 
More precisely, if $R = 1$, there exists $C > 0$ such that \begin{equation}\label{InterpolationInequality} \sigmaum_{i=1}^{m-1} \int_{B_{3/4}} |\Delta^i u_k | < C \Big( \int_{B_1} |\Delta^m u_k | + \int_{B_1 } |u_k | \Big). \epsilonnd{equation} \indent Now we move to \epsilonqref{new001}. By a density argument, it suffices to establish the inequality for $u \in C^{2m} (\overline B_R)$. By a simple scaling argument, it suffices to consider the case $R=1$, namely, we wish to prove \begin{align*} \sigmaum_{i=1}^{m-1} \int_{B_{1/2}} |\Delta^i u | \lesssim \int_{B_1} |\Delta^m u | + \int_{B_1 \backslash B_{1/2}} |u| \epsilonnd{align*} for any $u \in C^{2m} (\overline B_1)$. If the above claim was wrong, there would exist a sequence $(u_k) \in C^{2m}(\overline B_1)$ such that \begin{align*} \sigmaum_{i=1}^{m-1} \int_{B_{1/2}} |\Delta^i u_k | > k \Big( \int_{B_1} |\Delta^m u_k | + \int_{B_1 \backslash B_{1/2}} |u_k| \Big), \quad \forall\; k \in {\mathbb N}. \epsilonnd{align*} Seeing \epsilonqref{InterpolationInequality}, there holds \begin{align*} \int_{B_1} |\Delta^m u_k | + \int_{B_1 } |u_k | &> \frac k C \Big( \int_{B_1} |\Delta^m u_k | + \int_{B_1 \backslash B_{1/2}} |u_k| \Big). \epsilonnd{align*} Clearly, we can assume that $\| u_k \|_{L^1(B_{1/2})} = 1$ by scaling. Hence we get, for large $k$ \begin{equation}\label{new2} 1 = \int_{B_{1/2}} |u_k | \geq \frac k C \Big( \int_{B_1} |\Delta^m u_k | + \int_{B_1 \backslash B_{1/2}} |u_k| \Big). \epsilonnd{equation} Again using \epsilonqref{InterpolationInequality} and standard elliptic estimates, $(u_k)$ is bounded in $W^{2m, 1}(B_{5/8})$. Therefore, up to a subsequence, $u_k$ converges weakly to some $u_* \in W^{2m-1, q}(B_{5/8})$ for $1 < q < n/(n-1)$. Applying Sobolev's compact embedding and \epsilonqref{new2}, $u_*$ enjoys \[ \int_{B_{1/2}} |u_*| = 1, \quad \int_{B_{5/8} \backslash B_{1/2}} |u_*| = 0, \quad \Delta^m u_* = 0 \; \mbox{ in $B_{5/8}$}. \] In particular, $u_*$ is polyharmonic hence real analytic in $B_{5/8}$. The fact $u_* = 0$ in $B_{5/8} \backslash B_{1/2}$ yields then $u_* \epsilonquiv 0$ in $B_{5/8}$. However, this is impossible since there holds $\|u_*\|_{L^1(B_{1/2})} = 1$. So we are done. \epsilonnd{proof} A direct consequence of the interpolation formula \epsilonqref{InterpolationInequality} is the $L^1$-estimate for $\Delta^{i} u$ with $1 \leq i \leq m-1$. \begin{lemma}\label{lem:L1Estimate-Delta^i} Let $u$ be a distributional solution to \epsilonqref{eqMAIN}$_\sigmaigma$ in $\R^n$ with $p > 1$. Then we have $\Delta^{i} u \in L^1_{\rm loc}(\R^n)$ and \[ \int_{B_R} |\Delta^{i} u| \lesssim R^{n-2i-\theta} \] for any $R>0$ and any $1 \leq i \leq m$. \epsilonnd{lemma} \begin{proof} Since $u$ is a distributional solution to \epsilonqref{eqMAIN}$_\sigmaigma$, we know that $u \in L^1_{\rm loc}(\R^n)$ and $\Delta^m u \in L^1_{\rm loc}(\R^n)$. The case $i = m$ is given by Lemma \ref{lem:L1Estimate-u^p}. Let $1 \leq i \leq m-1$, we can apply Lemma \ref{lem:InterpolationInequality} to see that $\Delta^{m-i} u \in L^1_{\rm loc}(\R^n)$ and \begin{align*} \int_{B_R} |\Delta^{m-i} u| \lesssim R^{2i} \int_{B_{2R}} |x|^\sigmaigma u^p + R^{2i-2m} \int_{B_{2R} \backslash B_R} u . \epsilonnd{align*} As in the proof of Lemma \ref{lem:L1Estimate-u^p}, we can use H\"older's inequality to claim \begin{align*} \int_{B_{2R} \backslash B_R} u \lesssim R^\frac{n(p-1)-\sigmaigma}{p} \Big(\int_{B_{2R} } |x|^\sigmaigma u^p \Big)^{1/p} . \epsilonnd{align*} Finally, Lemma \ref{lem:L1Estimate-u^p} permits to conclude the proof. 
\end{proof} \begin{remark}\label{rmk-EllipticEstimate-NoSizeSigma} It is important to note that we do not assume any condition on $n$, $m$ or the value of $\sigma$ in Lemmas \ref{lem:L1Estimate-u^p}--\ref{lem:L1Estimate-Delta^i} above. \end{remark} \section{Liouville result for distributional solutions} \label{subsec-Liouville-distribution} We prove here Theorem \ref{thm-GeneralExistence-Distributional}, namely that, under the condition \eqref{new002}, the equation \eqref{eqMAIN}$_\sigma$ does not have any distributional solution. \begin{proof}[Proof of Theorem \ref{thm-GeneralExistence-Distributional}] Recall the condition \eqref{new002}: $n-2m-\theta \leq 0$. If $n-2m-\theta < 0$, the desired result follows from Lemma \ref{lem:L1Estimate-u^p} by letting $R \to +\infty$. Therefore, we are left with the case $n-2m-\theta =0$. In this scenario, Lemma \ref{lem:L1Estimate-u^p} gives \[ \int_{\R^n} |x|^\sigma u^p < +\infty. \] In particular, there holds \[ \lim_{R\to +\infty} \int_{B_{2R} \backslash B_R} |x|^\sigma u^p =0. \] To derive a contradiction, we take a closer look at the proof of Lemma \ref{lem:L1Estimate-u^p}. More precisely, the following estimate \begin{align*} \int_{B_R} |x|^\sigma u^p \lesssim R^{-2m} \int_{B_{2R} \backslash B_R} u \phi_R^{1/p} \lesssim R^{-2m + \frac{n(p-1)-\sigma}{p}} \Big(\int_{B_{2R} \backslash B_R} |x|^\sigma u^p \phi_R \Big)^{1/p} \end{align*} remains valid. As now $$-2m+ \frac{n(p-1)-\sigma}{p} = \frac{p-1}{p}(n - 2m - \theta) = 0,$$ sending $R \to +\infty$ we get $u \equiv 0$ almost everywhere. The proof of Theorem \ref{thm-GeneralExistence-Distributional} is now complete. \end{proof} Now we examine the condition \eqref{new002} in detail. In the case $\sigma > -2m$, \eqref{new002} is fulfilled if either $n \leq 2m$, or $n>2m$ and $1<p \leq p_{\mathsf{C}}(m,\sigma)$. Indeed, for $n > 2m$ and $p > 1$, \[ p>p_{\mathsf{C}}(m,\sigma) \quad \iff \quad n-2m-\theta >0. \] Hence, it remains to understand whether a distributional solution exists when $n>2m$ and $p>p_{\mathsf{C}}(m,\sigma)$. The answer is affirmative, which shows the sharpness of the threshold $p_{\mathsf{C}}(m,\sigma)$ for the existence of distributional solutions to \eqref{eqMAIN}$_\sigma$ under the condition $\sigma > -2m$. More precisely, a simple calculation shows that in $\R^n\backslash\{0\}$, \begin{equation}\label{eqPolyLaplacianOnTestFunction} \begin{aligned} (-\Delta)^m (|x|^{-\theta}) = \prod_{k=0}^{m-1} (\theta + 2k) \times \prod_{k=1}^{m} (n-2k-\theta) |x|^\sigma |x|^{-\theta p}. \end{aligned} \end{equation} Since $\theta > 0$ if $\sigma > -2m$ and $p > 1$, the first product is positive. As $n-2m-\theta > 0$, the second product is also positive. Thus, there exists $C_0>0$ such that $C_0|x|^{-\theta}$ is a punctured solution to \eqref{eqMAIN}$_\sigma$. By direct verification, or by Proposition \ref{prop-Strong->Distribution} below, we can check that $C_0|x|^{-\theta}$ is also a distributional solution to \eqref{eqMAIN}$_\sigma$ if $\sigma > -2m$ and $p>p_{\mathsf{C}}(m,\sigma)$. \section{From punctured or classical solutions to distributional solutions} \label{subsec-Strong->Distribution} We consider here the relationship between the three different types of solutions. Obviously, a classical solution is always a punctured solution.
As we will soon see, in the polyharmonic case $m \geq 2$, a classical solution to \eqref{eqMAIN}$_\sigma$ is not always a distributional solution. The following result provides a simple criterion guaranteeing that any punctured (or classical) solution to \eqref{eqMAIN}$_\sigma$ is a distributional one. \begin{proposition}\label{prop-Strong->Distribution} Suppose that $n, m \geq 1$, $p>1$, and $\sigma \in \R$. Then a punctured solution $u$ to \eqref{eqMAIN}$_\sigma$ is also a distributional solution to \eqref{eqMAIN}$_\sigma$ if and only if \begin{align} \label{newj1} n - 2m - \theta > 0. \end{align} The same result also holds true for classical solutions. \end{proposition} \begin{proof} In view of Theorem \ref{thm-GeneralExistence-Distributional}, we only need to prove that \eqref{newj1} is a sufficient condition. Let $u$ be a punctured solution to \eqref{eqMAIN}$_\sigma$. To prove that $u \in L_{\rm loc}^1 (\R^n)$ and $|x|^\sigma u^p\in L_{\rm loc}^1 (\R^n)$, we only need to show that $u$ and $|x|^\sigma u^p$ belong to $L^1 (B_1)$. First we verify that $|x|^\sigma u^p \in L^1 (B_1)$. For any $R > 0$, thanks to Lemma \ref{lem:new1}, there holds \begin{align*} \int_{B_{2R}\backslash B_R} |x|^\sigma u^p \lesssim R^{n-\frac{2mp+\sigma}{p-1}} = R^{n-2m- \theta}. \end{align*} If $n - 2m - \theta > 0$, using $R_k = 2^{-k}$ and summing, we readily get $|x|^\sigma u^p \in L^1 (B_1)$. Now, we prove $u \in L^1 (B_1)$. Note that the conditions $n -2m - \theta > 0$ and $p>1$ immediately imply $\sigma< n(p-1)$. There are two possible situations: \noindent{\it Case 1}. If $\sigma \leq 0$, from $|x|^\sigma u^p \in L^1 (B_1)$ we immediately get $u^p \in L^1 (B_1)$, and hence $u \in L^1(B_1)$ as well, since $p>1$. \noindent{\it Case 2}. If $0<\sigma <n(p-1)$, then by H\"older's inequality we have \begin{align*} \int_{B_1} u \leq \Big(\int_{B_1} |x|^{-\frac \sigma {p-1}} \Big)^{(p-1)/p} \Big(\int_{B_1} |x|^\sigma u^p \Big)^{1/p} < +\infty, \end{align*} proving $u \in L^1 (B_1)$ as claimed. Now we check that $u$ solves \eqref{eqMAIN}$_\sigma$ in the sense of distributions, that is, \begin{align} \label{new4} \int_{\R^n} u (-\Delta)^m \varphi = \int_{\R^n} |x|^\sigma u^p \varphi \end{align} holds for any $\varphi \in C_0^\infty(\R^n)$. Indeed, for each $0<\epsilon \ll 1$, consider the following cut-off function \[ \phi_\epsilon (x) = \Big[ 1 - \psi \big( \frac x \epsilon \big) \Big]^q, \] where $q = 2mp/(p-1)$ and $\psi$ is a standard cut-off function satisfying \eqref{cutoff}. Clearly, $\phi_\epsilon (x) = 0$ if $|x| \leq \epsilon$ and $\phi_\epsilon (x) =1$ if $|x| \geq 2\epsilon$. Moreover, there hold $|\nabla^k \phi_\epsilon | \leq C\epsilon^{-k}$ for all $1 \leq k \leq 2m$, thanks to $q>2m$.
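Let us briefly justify these gradient bounds (assuming, as in \eqref{cutoff}, that $0 \leq \psi \leq 1$): by the chain and Leibniz rules, $\nabla^k \phi_\epsilon$ is a finite sum of terms of the form
\[
\big[ 1 - \psi(x/\epsilon) \big]^{\,q-j} \times \prod \big(\text{derivatives of } \psi(x/\epsilon)\big), \qquad 1 \leq j \leq k,
\]
in which each derivative of $\psi(x/\epsilon)$ contributes a factor $\epsilon^{-1}$, for a total of at most $\epsilon^{-k}$, while the powers $[1-\psi]^{q-j}$ stay bounded by $1$ because $q > 2m \geq k \geq j$; hence $|\nabla^k \phi_\epsilon| \lesssim \epsilon^{-k}$.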
Testing \eqref{eqMAIN}$_\sigma$ with the function $\phi_\epsilon \varphi \in C_0^\infty(\R^n\backslash\{0\})$, we have \begin{align*} \int_{\R^n} \phi_\epsilon \varphi |x|^\sigma u^p =\int_{\R^n} u (-\Delta)^m (\phi_\epsilon \varphi ) =\int_{\R^n} u \big[ \phi_\epsilon (-\Delta)^m \varphi + \Phi_\epsilon \big], \end{align*} where the term $\Phi_\epsilon$ enjoys \[ |\Phi_\epsilon| \lesssim \sum_{k=1}^{2m} |\nabla^k \phi_\epsilon| | \nabla^{2m-k} \varphi| \lesssim \sum_{k=1}^{2m}\epsilon^{-k} \lesssim \epsilon^{-2m}. \] Note that $|\nabla^k \phi_\epsilon | \equiv 0$ outside $B_{2\epsilon} \backslash B_{\epsilon}$ for any $k \geq 1$, and so is $|\Phi_\epsilon|$. Hence, we easily get \begin{align*} \Big| \int_{\R^n} u \Phi_\epsilon \Big| & \lesssim \epsilon^{-2m}\int_{B_{2\epsilon} \backslash B_{\epsilon}} u \\ &\lesssim \epsilon^{-2m} \epsilon^\frac{n(p-1)-\sigma}{p} \Big( \int_{B_{2\epsilon} \backslash B_{\epsilon} } |x|^\sigma u^p \Big)^{1/p} \lesssim \epsilon^{n - 2m - \theta}. \end{align*} Here Lemma \ref{lem:new1} is applied for the last inequality. Therefore, \begin{align*} \int_{\R^n} \phi_\epsilon \varphi |x|^\sigma u^p =\int_{\R^n} u \phi_\epsilon (-\Delta)^m \varphi + O\big(\epsilon^{n - 2m - \theta}\big). \end{align*} Since $|x|^\sigma u^p \in L_{\rm loc}^1 (\R^n)$, $u \in L_{\rm loc}^1 (\R^n)$, $\varphi \in C_0^\infty(\R^n)$, $n-2m-\theta>0$, and $\phi_\epsilon \to 1$ a.e. as $\epsilon \to 0^+$, we can apply the dominated convergence theorem to conclude \eqref{new4}. This completes the proof for punctured solutions to \eqref{eqMAIN}$_\sigma$. As any classical solution to \eqref{eqMAIN}$_\sigma$ is also a punctured solution to \eqref{eqMAIN}$_\sigma$, the same conclusion is valid for classical solutions. \end{proof} We should mention that when $m = 1$, no punctured solution exists if $n - 2 - \theta \leq 0$. Indeed, for $\sigma \leq -2$, \cite{MP01, DDG11} proved the non-existence for any $p > 1$; while for $\sigma > -2$, it is shown in \cite[Theorem 4.1]{GHY18} that no solution exists in any exterior domain if $1 < p \leq p_{\mathsf{C}}(1,\sigma)$; see also \cite{AGQ16} for $\sigma = 0$. To conclude, we have the following fact for $m = 1$. \begin{corollary} Suppose that $n \geq 1$, $m=1$, $p>1$, and $\sigma \in \R$. Then any punctured solution to \eqref{eqMAIN}$_\sigma$ is also a distributional solution to \eqref{eqMAIN}$_\sigma$. \end{corollary} The situation is, however, completely different for the polyharmonic case $m \geq 2$. Recall that when $\sigma > -2m$, the inequality $n-2m-\theta < 0$ is equivalent to $1<p < p_{\mathsf{C}}(m,\sigma)$. In view of \eqref{eqPolyLaplacianOnTestFunction}, the function $|x|^{-\theta}$ yields a punctured solution to \eqref{eqMAIN}$_\sigma$ if and only if \begin{align} \label{new3} \prod_{k=0}^{m-1} (\theta + 2k) \times \prod_{k=1}^{m} (n-2k-\theta) > 0. \end{align} As $\theta > 0$ in this case, and $n-2m-\theta < 0$, it suffices to select $p \in (1, p_{\mathsf{C}}(m,\sigma))$ such that \[ \prod_{k=1}^{m-1} (n-2k-\theta) < 0. \] Indeed, this can occur for any $m \geq 2$.
For example, when $m \geq 3$, a possible choice of $p > 1$ is as follows (with $n > 2m - 4$) \[ n-2m+2-\theta<0<n-2m+4-\theta, \quad \mbox{i.e. }\; \frac{n+4+\sigma}{n-2m+4} < p < \frac{n+2+\sigma}{n-2m+2}; \] while for $n \geq m = 2$, we can choose \[ n-2m+2-\theta <0, \quad\mbox{i.e. }\; 1< p < \frac{n+2+\sigma}{n-2}. \] Another interesting remark is that, even for $\sigma < -2m$ and $m \geq 2$, there still exist $p > 1$ satisfying \eqref{new3}, so that $C_0 |x|^{-\theta}$ remains a punctured solution to \eqref{eqMAIN}$_\sigma$. This is in sharp contrast with the non-existence result in \cite{MP01, DDG11} for $m = 1$ and $\sigma \leq -2$. For example, let $n \geq m = 2$; then \eqref{new3} is satisfied whenever $\theta < -2$, which means that $$\frac{4 + \sigma}{p-1} < -2, \; p > 1, \quad \mbox{i.e. }\; 1 < p < \frac{2+\sigma}{-2}, \; \sigma < -4.$$ \begin{remark} \label{rem:j1} Notice that with $\theta < 0$, a punctured solution $C_0 |x|^{-\theta}$ is in fact a classical solution to \eqref{eqMAIN}$_\sigma$. If we take for example $n = 5$, $m = 3$ and $\theta \in (-1, 0)$, then the condition \eqref{new3} holds true. However, the corresponding classical solution $C_0 |x|^{-\theta}$ does not satisfy \eqref{eqMAIN}$_\sigma$ in the distributional sense, since $n - 2m - \theta <0$. \end{remark} We end this section with another sufficient condition, of a different nature, which also ensures that a classical solution is a distributional one. \begin{lemma} \label{lem-Classic->Distribution} Let $u$ be a classical solution to \eqref{eqMAIN}$_\sigma$ with $p > 0$ and $\sigma > -n$. Suppose that $u$ is of class $C^k$ at the origin and that $n-2m + k \geq 0$ for some $k \geq 0$. Then $u$ is also a distributional solution. In particular, if $n \geq 2m$ and $\sigma > -2m$, any classical solution of \eqref{eqMAIN}$_\sigma$ is a distributional one. \end{lemma} \begin{proof} We use notations similar to those in the proof of Proposition \ref{prop-Strong->Distribution}. Clearly, $u \in L_{\rm loc}^1 (\R^n)$, and there holds $|x|^\sigma u^p \in L^1_{\rm loc}(\R^n)$ since $\sigma > -n$. Hence we are left with the verification of the integral identity \eqref{new4}. Notice that we can assume $k \leq 2m-1$ as $n \geq 1$. Using integration by parts, \begin{align*} \int_{\R^n} \phi_\epsilon \varphi (-\Delta)^m u &= (-1)^{m-k}\int_{\R^n} \Delta^{k/2} u \, \Delta^{m-k/2} (\phi_\epsilon \varphi)\\ & = (-1)^{m-k}\int_{\R^n} \big[\Delta^{k/2} u- (\Delta^{k/2} u) (0)\big] \Delta^{m-k/2} (\phi_\epsilon \varphi). \end{align*} Since $\Delta^{m-k/2} (\phi_\epsilon \varphi) = \phi_\epsilon\Delta^{m-k/2} \varphi + \Phi_\epsilon$ for some $\Phi_\epsilon$, we then obtain \begin{align*} \int_{\R^n} \phi_\epsilon \varphi |x|^\sigma u^p & = (-1)^{m-k}\int_{\R^n} \big[\Delta^{k/2} u- (\Delta^{k/2} u) (0)\big] \big[ \phi_\epsilon\Delta^{m-k/2} \varphi + \Phi_\epsilon\big]. \end{align*} Thanks to the continuity of $\Delta^{k/2} u$ at the origin and the estimate $|\Phi_\epsilon| \lesssim \epsilon^{k-2m}{\mathbbm 1}_{B_{2\epsilon}\backslash B_\epsilon}$, there holds \begin{align*} \left|\int_{\R^n} \big[\Delta^{k/2} u- (\Delta^{k/2} u) (0)\big] \Phi_\epsilon\right| \leq o_\epsilon(1) \times \epsilon^{n-2m +k}, \end{align*} which goes to zero as $\epsilon \to 0$, because $n -2m +k \geq 0$.
Letting $\epsilon \to 0$, we conclude \begin{align*} \int_{\R^n} \varphi |x|^\sigma u^p & = (-1)^{m-k}\int_{\R^n} \big[\Delta^{k/2} u- (\Delta^{k/2} u) (0)\big] \Delta^{m-{k/2}}\varphi\\ & = (-1)^{m-k}\int_{\R^n} \Delta^{k/2} u \, \Delta^{m-{k/2}}\varphi\\ & = \int_{\R^n} u (-\Delta)^m\varphi. \end{align*} So we are done. \end{proof} In practice, Lemma \ref{lem-Classic->Distribution} is quite useful since it allows us to deduce Liouville results for classical solutions from those for distributional solutions established in Theorem \ref{thm-GeneralExistence-Distributional}. For example, combining Lemma \ref{lem-Classic->Distribution} with Theorem \ref{thm-GeneralExistence-Distributional}, we easily get Proposition \ref{n=2m}. Indeed, if $n = 2m$, $\sigma > -2m$ and $p > 1$, then $n - 2m - \theta = -\theta < 0$, so no distributional solution to \eqref{eqMAIN}$_\sigma$ can exist. In view of Theorem \ref{thm-Liouville-classical-upper}, it is natural to ask whether or not a Liouville result for classical solutions exists if $n<2m$. As far as we know, there is no such result for $m \geq 2$. However, by using Lemma \ref{lem-Classic->Distribution}, we can conditionally obtain such a result. \begin{corollary}\label{cor-Liouville-n<2m} Let $2 \leq n < 2m$, $\sigma > -2m$, and $p > 1$. Then there is no classical solution to \eqref{eqMAIN}$_\sigma$ which is of class $C^{2m-n}$ at the origin; in particular, the equation \eqref{eqMAIN}$_\sigma$ has no solution in $C^{2m-2} (\R^n) \cap C^{2m} (\R^n \backslash \{ 0 \} )$. \end{corollary} \section{Integral equation and the weak SPH property for distributional solutions} \label{Properties-ds} In this section, we establish two important properties of distributional solutions. First, for $n > 2m$, $\sigma > -2m$, and $p > 1$, we will show that any distributional solution to the differential equation \eqref{eqMAIN}$_\sigma$ solves the integral equation \eqref{eqIntegralEquation} almost everywhere. Next, we show that any distributional solution to \eqref{eqMAIN}$_\sigma$ satisfies the weak SPH property; see Proposition \ref{prop-WeakSPH} below. As an application, we get the strong SPH property for classical solutions to \eqref{eqMAIN}$_\sigma$, namely Proposition \ref{prop-StrongSPH}. Our point of departure is the following result. \begin{lemma}[ring condition] \label{lem:ring} Any distributional solution $u$ to \eqref{eqMAIN}$_\sigma$ with $\sigma > -2m$ and $p>1$ satisfies the ring condition: \begin{equation}\label{eqRingCondition} \lim_{R \to + \infty} \frac 1{R^n} \int_{R \leq |x-y| \leq 2R} u(y) dy = 0, \quad \forall\; x \in \R^n. \end{equation} \end{lemma} \begin{proof} Fix any $x \in \R^n$ and consider $R > 2|x|$. Then $\{y : R \leq |x-y| \leq 2R \} \subset B_{3R}\backslash B_{R/2}$. By H\"older's inequality and Lemma \ref{lem:L1Estimate-u^p}, there holds \begin{align*} \int_{R \leq |x-y| \leq 2R} u(y)\, dy & \leq \int_{B_{3R}\backslash B_{R/2}} u\\ &\leq \Big(\int_{B_{3R}\backslash B_{R/2}} |y|^{-\frac \sigma {p-1}} dy \Big)^{(p-1)/p} \Big(\int_{B_{3R}\backslash B_{R/2}} |y|^\sigma u^p (y) dy \Big)^{1/p}\\ & \lesssim R^{(n-\frac\sigma{p-1})\frac{p-1}p} R^\frac{n-2m-\theta}p\\ & = R^{n-\theta}. \end{align*} Hence the distributional solution $u$ enjoys the ring condition, thanks to $\theta > 0$.
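(Recall that $\theta = \frac{2m+\sigma}{p-1}$, so the assumptions $\sigma > -2m$ and $p > 1$ indeed guarantee $\theta > 0$.)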
\end{proof} \begin{remark} \label{rem:j2} Applying Lemma \ref{lem:new1}, the same proof shows that if $\sigma > -2m$ and $p > 1$, any punctured solution of \eqref{eqMAIN}$_\sigma$ also satisfies the ring condition \eqref{eqRingCondition}. \end{remark} From the ring condition \eqref{eqRingCondition} we can apply a general result of Caristi, D'Ambrosio, and Mitidieri to conclude that any distributional solution $u$ to \eqref{eqMAIN}$_\sigma$ solves \eqref{eqIntegralEquation} almost everywhere; see \cite[Theorem 2.4]{CAM08}. However, only the proof for $m=2$ was provided in \cite{CAM08}, and we are not convinced that \eqref{eqIntegralEquation} holds for all Lebesgue points of $u$, as claimed in \cite{CAM08}. We therefore give a detailed proof for all $m$, for the sake of completeness and the reader's convenience. Let us first introduce some notation. Denote \[ {\mathbf G}^\epsilon (x) = \Big( \frac 1{\epsilon^2 + |x|^2} \Big)^\frac{n-2m}2 \quad \mbox{and} \quad U_q (x) = \Big( \frac 1{1+|x|^2} \Big)^q \quad\text{with} \; \epsilon, q > 0. \] We shall use the test function \begin{align} \label{new5} \varphi(x) = \phi_R(x) {\mathbf G}^\epsilon(x) = \psi \big(\frac{x}{R} \big) {\mathbf G}^\epsilon(x), \quad R, \epsilon > 0, \end{align} where $\psi$ is a cut-off function satisfying \eqref{cutoff}. We also need an estimate for $(-\Delta)^m {\mathbf G}^\epsilon$. Toward a precise computation of this term, we use the following auxiliary result. \begin{lemma}\label{lem-IdentityU} There holds \begin{align*} (-\Delta)^m U_q =& \; 2^m\prod_{k=0}^{m-1}(q+k)\prod_{k=1}^m(n-2k-2q)U_{q+m}\\ &\; + \sum_{i=1}^{m-1}2^{m+i} \binom{m}{i} \prod_{k=0}^{m+i-1}(q+k)\prod_{k=i+1}^m(n-2k-2q)U_{q+m+i}\\ & \; + 2^{2m}\prod_{k=0}^{2m-1}(q+k)U_{q+2m}. \end{align*} \end{lemma} \begin{proof} A direct calculation gives \begin{align*} -\Delta U_q(x) &= 2q\left[\frac{n(1+|x|^2) - 2(q+1)|x|^2}{(1+|x|^2)^{q+2}}\right]\\ & = 2q(n-2-2q)U_{q+1}(x) + 4q(q+1) U_{q+2}(x). \end{align*} Hence, the lemma follows by induction on $m$. We omit the details. \end{proof} With the above notation at hand, we can now proceed with the proof of Proposition \ref{prop-IntegralEquation}. \begin{proof}[Proof of Proposition \ref{prop-IntegralEquation}] To prove \eqref{eqIntegralEquation} for a fixed point $x$, it suffices to verify that \begin{equation}\label{eq-IntegralIdentity-changed} u(x) = C(2m) \int_{\R^n} \frac{ |x-y|^\sigma u^p(x-y)}{|y|^{n-2m} } dy. \end{equation} Here the constant $C(2m)$ is given by \eqref{eq-CAlpha}. Testing our equation $$(-\Delta)^m u(x-y) = |x-y|^\sigma u^p(x-y)$$ with $\phi_R {\mathbf G}^\epsilon$ given by \eqref{new5}, integration by parts yields \begin{align*} \int_{\R^n} |x-y|^\sigma u^p(x-y) \phi_R (y) {\mathbf G}^\epsilon (y) dy & = \int_{\R^n} u(x-y) (-\Delta)^m \big( \phi_R {\mathbf G}^\epsilon \big) (y) dy\\ & =: I_1^\epsilon + I_2^\epsilon, \end{align*} where \[ I_1^\epsilon= \int_{\R^n} u(x-y) \phi_R (y) (-\Delta)^m {\mathbf G}^\epsilon (y) dy.
\] Using the notation $U_q$ as in Lemma \ref{lem-IdentityU}, we see that ${\mathbf G}^\epsilon (y) = \epsilon^{2m-n} U_{\frac{n-2m}2}(y/\epsilon)$ and \begin{align*} (-\Delta)^m {\mathbf G}^\epsilon (y) &= \epsilon^{-n} 2^{2m}\prod_{k=0}^{2m-1} \Big( \frac{n-2m}{2}+k \Big) U_\frac{n+2m}{2} \big( \frac y \epsilon \big) \\ &= \epsilon^{-n} 2^{2m} \frac{ \Gamma \big( \frac{n+2m}{2}\big) }{\Gamma \big( \frac{n-2m}{2}\big) } U_\frac{n+2m}{2} \big( \frac y \epsilon \big) . \end{align*} Clearly, \begin{align*} \int_{\R^n} \epsilon^{-n} U_\frac{n+2m}{2} \big( \frac y \epsilon \big) dy & =\int_{\R^n} U_\frac{n+2m}{2} (y) dy \\ & = |\mathbb S^{n-1}|\int_0^{+\infty} \Big( \frac 1{1+r^2} \Big)^\frac{n+2m}{2} r^{n-1}dr\\ &= \frac{2\pi^{n/2}}{\Gamma \big( \frac n2 \big)} \frac{\Gamma \big( \frac{n+2m}{2} - \frac{n}{2} \big)\Gamma \big( \frac{n}{2} \big)}{2\Gamma \big( \frac{n+2m}{2} \big)} = \frac{ \pi^{n/2} \Gamma (m) }{\Gamma \big( \frac{n+2m}{2} \big)}. \end{align*} So we can rewrite $I_1^\epsilon$ as follows \begin{align*} I_1^\epsilon &= \int_{\R^n} u (x-y) \phi_R (y) (-\Delta)^m {\mathbf G}^\epsilon (y) dy\\ & = 2^{2m} \frac{ \Gamma \big( \frac{n+2m}{2}\big) }{\Gamma \big( \frac{n-2m}{2}\big) } \int_{\R^n} u (x - \epsilon y) \phi_R (\epsilon y) U_\frac{n+2m}{2} (y) dy \\ & = 2^{2m} \frac{ \Gamma \big( \frac{n+2m}{2}\big) }{\Gamma \big( \frac{n-2m}{2}\big) } \big( f * g_\epsilon \big) (x) , \end{align*} with $f(z) = u(z)\phi_R(x-z) \in L^1(\R^n)$ and $g_\epsilon (z) = \epsilon^{-n} U_\frac{n+2m}{2} (z/\epsilon)$. By definition, it is clear that \textit{the least decreasing radial majorant} of $U_\frac{n+2m}{2}$ is integrable, i.e. \[ \int_{\R^n} \big[\sup_{|x| \geq |y|} U_\frac{n+2m}{2} (x) \big] dy =\int_{\R^n} U_\frac{n+2m}{2} (y) dy <+\infty. \] Therefore, we can apply \cite[Theorem 2(b)]{Stein} to claim that \begin{equation}\label{eq-EstimateI_1} \lim_{\epsilon \to 0^+} I_1^\epsilon =2^{2m} \frac{ \Gamma \big( \frac{n+2m}{2}\big) }{\Gamma \big( \frac{n-2m}{2}\big) } u(x) \phi_R (0) \int_{\R^n} U_\frac{n+2m}{2} (y)dy = \frac 1{C(2m)} u(x) \end{equation} for almost every $x$. On the other hand, as $u \in L^1_{loc}(\R^n)$, letting $\epsilon \to 0^+$ gives \begin{equation*} \lim_{\epsilon \to 0^+} I_2^\epsilon = \int_{B_{2R} \backslash B_R} u(x-y) L( \phi_R) (y) dy , \end{equation*} where $L$ is the operator defined by \[ L : \phi \mapsto (-\Delta)^m(\phi{\mathbf G}^0) - \phi(-\Delta)^m{\mathbf G}^0= (-\Delta)^m(\phi{\mathbf G}^0), \quad \mbox{in } \R^n\backslash\{0\}, \] with ${\mathbf G}^0(x) = |x|^{2m-n}$. Observing that $$\phi_R{\mathbf G}^0(x) = R^{2m-n}(\psi{\mathbf G}^0)(x/R),$$ we easily get $L( \phi_R) = R^{-n}L(\psi)(x/R)$, hence $$|L(\phi_R) | \leq CR^{-n}{\mathbbm 1}_{B_{2R} \backslash B_R},$$ where $C$ is a constant independent of $R > 0$. Consequently, \begin{align} \label{eq-EstimateI_2} \big|\lim_{\epsilon\to 0^+} I_2^\epsilon \big| \leq CR^{-n}\int_{B_{2R} \backslash B_R} u(x-y)dy. \end{align} Finally, for a.e.
$x$ and for any $R > |x|$, letting $\epsilon \to 0^+$ and using \eqref{eq-EstimateI_1}--\eqref{eq-EstimateI_2} together with the proof of Lemma \ref{lem:ring}, we conclude that \[ \int_{\R^n} {\mathbf G}^0(y) |x-y|^\sigma u^p(x-y) \phi_R(y) dy = \frac 1{C(2m)} u(x)+ O\big(R^{-\theta}\big)_{R \nearrow +\infty}. \] Now sending $R \to +\infty$, we obtain \[ u(x) =C(2m) \int_{\R^n} {\mathbf G}^0(y) |x-y|^\sigma u^p(x-y) dy \] for a.e.~$x$. This completes the proof of \eqref{eq-IntegralIdentity-changed}; equivalently, \eqref{eqIntegralEquation} holds for a.e. $x$. \end{proof} From the integral equation, it is easy to obtain the weak or strong SPH properties for solutions to (\ref{eqMAIN})$_\sigma$. These properties play no crucial role here for the existence or non-existence results, which illustrates a key difference between our approach and other approaches in the existing literature. It is also worth noting that the weak SPH property is stated for distributional solutions to the integral equation \eqref{eqIntegralEquation}, which is a further fundamental difference from the strong SPH property. The result below is not really new; it is indeed part of \cite[Theorem 2.4]{CAM08}. \begin{proposition}[weak SPH property]\label{prop-WeakSPH} Let $n>2m$. Then any distributional solution $u$ to the integral equation \eqref{eqIntegralEquation} enjoys the weak SPH property, namely there hold \[ \int_{\R^n} u (-\Delta)^{m - i} \phi \geq 0 \] for all $1 \leq i \leq m-1$ and for any $0 \leq \phi \in C_0^\infty(\R^n)$. \end{proposition} \begin{proof} Let $0 \leq \phi \in C_0^\infty(\R^n)$ and $1 \leq i \leq m-1$ be arbitrary. First we recall the well-known Selberg formula \[ \int_{\R^n} \frac{C(\alpha)}{|x-z|^{n-\alpha}}\frac{C(\beta)}{|y-z|^{n-\beta}} dz =\frac{C(\alpha + \beta)}{|x-y|^{n-\alpha-\beta}}, \] where $\alpha, \beta > 0$, $\alpha + \beta < n$, and $C(\alpha)$, $C(\beta)$ and $C(\alpha+\beta)$ are the constants in \eqref{eq-CAlpha}; see \cite{GM99}. Using the above formula and Fubini's theorem, we can rewrite $u$ from \eqref{eqIntegralEquation} as follows \begin{align*} u(x)& = C(2m) \int_{\R^n} \frac{ |y|^\sigma u^p(y)}{|x-y|^{n-2m} } dy\\ &=C(2m-2i) \int_{\R^n} \frac{ 1}{|x-z|^{n-2m+2i} } \Big( C(2i) \int_{\R^n} \frac{|y|^\sigma u^p(y) }{|y-z|^{n-2i} } dy \Big) dz\\ & =C(2m-2i) \int_{\R^n} \frac{1}{|x-z|^{n-2m+2i} } d\mu_i (z) \end{align*} for some positive measure $\mu_i$. Now, we multiply both sides of \eqref{eqIntegralEquation} by $(-\Delta)^{m-i} \phi$ and integrate to get \begin{align*} \int_{\R^n} u (-\Delta)^{m - i} \phi & = C(2m-2i) \int_{\R^n} \Big( \int_{\R^n} \frac{1}{|x-z|^{n-2m+2i} } d\mu_i (z) \Big) (-\Delta)^{m - i} \phi \\ & = C(2m-2i) \int_{\R^n} \phi (-\Delta)^{m - i} \Big( \int_{\R^n} \frac{1}{|x-z|^{n-2m+2i} } d\mu_i (z) \Big) \\ & = \int_{\R^n} \phi (x) d\mu_i (x) \geq 0. \end{align*} This implies that $u$ satisfies the weak SPH property. \end{proof} \section{The strong SPH property for classical solutions} \label{apd-NewProofSPH} In the existing literature, the proof of the strong SPH property \eqref{SPHi} is often based on a careful analysis of the spherical averages of $(-\Delta)^i u$, which can be rather technical and involved; see for example \cite{WX99, ChenLi13}. Here we can easily obtain the strong SPH property from its weak form.
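To make the passage from the weak to the strong form explicit (this is the mechanism used in the argument below): if $u$ is a classical solution, so that $u$ is smooth in $\R^n\backslash\{0\}$, and $u$ is also a distributional solution, then for every $0 \leq \phi \in C_0^\infty(\R^n\backslash\{0\})$ and $1 \leq i \leq m-1$ we may integrate by parts to get
\[
\int_{\R^n} \big[(-\Delta)^{m-i} u\big]\, \phi = \int_{\R^n} u \,(-\Delta)^{m-i}\phi \geq 0,
\]
and a continuous function with non-negative integral against every such $\phi$ is non-negative; hence $(-\Delta)^{m-i}u \geq 0$ pointwise in $\R^n\backslash\{0\}$.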
It is now immediate that Proposition \ref{prop-WeakSPH} implies Proposition \ref{prop-StrongSPH}. Indeed, for $n > 2m$, $\sigma > -2m$, and $p > 1$, any classical solution is a distributional one by Lemma \ref{lem-Classic->Distribution}, hence $(-\Delta)^i u \geq 0$ in $\R^n\backslash\{0\}$ for $1\leq i \leq m-1$ by Proposition \ref{prop-WeakSPH}. Furthermore, as $(-\Delta)^m u \geq 0$ and not identically zero, the strong maximum principle ensures that all $(-\Delta)^i u$ are positive in $\R^n\backslash\{0\}$. Using just \eqref{eqIntegralEquation}, we see that $u$ is positive in $\R^n$. In fact, we can prove a result more general than Proposition \ref{prop-StrongSPH}, by integral estimates for a distributional solution, without using the weak SPH property or the usual spherical average procedure. \begin{theorem}[{\it partial} SPH property] \label{SPHbis} Let $u$ be both a classical and a distributional solution to \eqref{eqMAIN}$_\sigma$ with $p > 1$ and $n \geq 3$. Assume that there exists $\ell \in {\mathbb N}$ such that $m \geq \ell+1$ and $2\ell + \theta > 0$. Then \[ (-\Delta)^i u > 0 \quad \text{ in } \; \R^n\backslash\{0\} \quad \text{ for all } \; \ell\leq i \leq m-1. \] In particular, the strong SPH property \eqref{SPHi} holds under $n \geq 3$, $m \geq 2$ and $\theta > -2$, where we select $\ell =1$. \end{theorem} Our proof is inspired by an idea from \cite[Appendix]{FWX15}, where Fazly, Wei and Xu suggested a simple argument to handle classical solutions to $\Delta^2 u = |x|^\sigma u^p$ with $\sigma \geq 0$. To show $-\Delta u \geq 0$, their idea is to estimate the harmonic function $h := \Delta u + w$ from above, where \[ w(x) =C(2) \int_{\R^n} \frac{|y|^\sigma u^p(y)}{|x-y|^{n-2}}dy \geq 0, \] and $C(2)$ is given by \eqref{eq-CAlpha}. For any $x_0 \in \R^n$, we have \[ h(x_0) = \strokedint_{\partial B_R (x_0)} (\Delta u + w) \leq \strokedint_{\partial B_R (x_0)}|\Delta u| + \strokedint_{\partial B_R (x_0)} w, \quad \forall\; R > 0. \] Therefore, if the right-hand side goes to zero for a suitable sequence $R_k \to +\infty$, then $h(x_0) \leq 0$, hence $-\Delta u(x_0) \geq 0$ as expected. Unfortunately, the authors of \cite{FWX15} encountered some difficulty in controlling $\|\Delta u\|_{L^1(\partial B_R(x_0))}$ from above. Here we generalize the idea in \cite{FWX15}; in particular, we give a new, independent proof of Proposition \ref{prop-StrongSPH}. Our proof makes use of the integral estimates above. Before proving the result, recall the following fact from \cite[Section 9.7]{LiebLoss}: for any $n \geq 3$, $x, x_0 \in \R^n$ and $r > 0$, there holds \begin{equation}\label{eq-LL} \strokedint_{\partial B_r (x_0)} \frac{d\sigma_y }{ |x - y|^{n-2} } = \max \big\{ |x-x_0|, r \big\}^{2-n}. \end{equation} \begin{proof}[Proof of Theorem \ref{SPHbis}] Let $u$ be a classical and distributional solution of \eqref{eqMAIN}$_\sigma$ with $p > 1$, $n \geq 3$, $m \geq \ell +1$, $\ell \in {\mathbb N}$ and $2\ell + \theta > 0$. Fix $\ell \leq i \leq m-1$. From Lemma \ref{lem:L1Estimate-Delta^i}, there holds \begin{align*} \int_{B_{2R} \backslash B_R} |x|^{-n+2} |\Delta^{i+1}u| \lesssim R^{- 2i - \theta}, \quad \forall \; R > 0. \end{align*} Remark that $-2i-\theta \leq -2\ell - \theta < 0$ as $i \geq \ell$.
Summing up with $R_k = 2^kR$, we get \begin{align} \label{newa2} \int_{\R^n \backslash B_R} |x|^{-n+2}|\Delta^{i+1}u| \lesssim R^{-2i - \theta}, \quad \forall \; R > 0. \end{align} We claim that \[ w_i (x) = C(2) \int_{\R^n}\frac{(-\Delta)^{i+1}u (y)}{|x - y|^{n-2}} dy \] is well defined for any $x\in \R^n\backslash\{0\}$. Indeed, let $x\ne 0$ be an arbitrary but fixed point; then: \begin{itemize} \item on $B_{|x|/2}$, the integral is bounded as $(-\Delta)^{i+1}u \in L^1_{\rm loc}(\R^n)$; \item on $B_{2|x|}\backslash B_{|x|/2}$, the integral exists since $(-\Delta)^{i+1}u$ is bounded over this set; \item on $\R^n\backslash B_{2|x|}$, the integral is easily bounded, thanks to \eqref{newa2} and the inequality $|x-y| \geq |y|/2$. \end{itemize} The fact that $w_i$ is well defined for a.e.~$x \in \R^n$, together with $(-\Delta)^{i+1}u \in L_{\rm loc}^1 (\R^n)$, allows us to apply \cite[Theorem 6.21]{LiebLoss} to deduce that $w_i$ satisfies \[ -\Delta w_i = (-\Delta)^{i+1}u \quad \mbox{in }\; {\mathcal D}'(\R^n). \] Therefore \[ h_i := w_i - (-\Delta)^{i} u \] solves $-\Delta h_i = 0$ in ${\mathcal D}'(\R^n)$. Hence $h_i$ is harmonic and smooth in $\R^n$ by the classical Weyl lemma, see \cite[Theorem 7.10]{Mit18} or \cite[page 256]{LiebLoss}; consequently $w_i \in C(\R^n \backslash \{ 0 \})$. Hence we have, for any $x_0 \in \R^n\backslash\{0\}$ and any $R > |x_0|$, \begin{align} \label{newa1} h_i (x_0) = \strokedint_{\partial B_R (x_0)} h_i \leq \strokedint_{\partial B_R (x_0)} w_i + \strokedint_{\partial B_R (x_0)}|\Delta^{i} u|. \end{align} Following the idea in \cite{FWX15}, we need to estimate the two integrals on the right-hand side of \eqref{newa1}. By Fubini's theorem, we can write \begin{align*} \frac{1}{C(2)} & \strokedint_{\partial B_R (x_0)} w_i \, d\sigma_x \\ & \leq \int_{\R^n} \Big(\strokedint_{\partial B_R (x_0)} \frac{d\sigma_x }{ |x - y|^{n-2} }\Big) |\Delta^{i+1}u(y)| dy\\ &= \Big( \int_{|y-x_0| > R} + \int_{|y-x_0| < R} \Big) \Big(\strokedint_{\partial B_R (x_0)} \frac{d\sigma_x }{ |x - y|^{n-2} }\Big) |\Delta^{i+1}u(y)| dy\\ & =: I_1 + I_2. \end{align*} For any $R > 2|x_0|$, by \eqref{eq-LL} and \eqref{newa2}, there holds \[ I_ 1 = \int_{|y-x_0| > R} \frac{|\Delta^{i+1}u(y)|}{ |x_0 - y|^{n-2} } dy \lesssim (R-|x_0|)^{-2i - \theta}. \] For $I_2$, still by \eqref{eq-LL} and using Lemma \ref{lem:L1Estimate-Delta^i}, we deduce that \begin{align*} I_2 = R^{2-n} \int_{|y-x_0| < R} |\Delta^{i+1}u(y)| dy & \leq R^{2-n} \int_{B_{R+|x_0|}} |\Delta^{i+1}u(y)| dy\\ & \lesssim R^{2-n} (R+ |x_0|)^{n- 2(i+1) - \theta} \\ & \lesssim (R+ |x_0|)^{-2i - \theta}. \end{align*} Putting the estimates for $I_1$ and $I_2$ together, we have \[ \lim_{R \to +\infty} \strokedint_{\partial B_R (x_0)} w_i = 0, \] thanks again to $-2i - \theta < 0$. Moreover, in view of Lemma \ref{lem:L1Estimate-Delta^i}, there exists a sequence $R_k \to +\infty$ such that \[ \liminf_{k \to +\infty} \strokedint_{\partial B_{R_k} (x_0)} |\Delta^i u| = 0. \] Using this sequence $R_k$ in \eqref{newa1}, we see that $h_i(x_0) \leq 0$, so $(-\Delta)^i u(x_0) \geq w_i(x_0) > 0$. Here the positivity of $w_i$ follows by downward induction on $i$: for $i = m-1$ one has $(-\Delta)^{i+1}u = |x|^\sigma u^p > 0$, while for $\ell \leq i < m-1$ the positivity of $(-\Delta)^{i+1}u$ has already been established at the previous step. The proof is complete. \end{proof} The following are some comments on Theorem \ref{SPHbis}. \begin{itemize} \item The condition $2\ell + \theta > 0$ is {\it almost} necessary for Theorem \ref{SPHbis}.
For example, let $m = 2$, $n \geq 4$, $\theta < -2$ and $\ell = 1$; then $u = C|x|^{-\theta}$ is a classical and distributional solution to \eqref{eqMAIN}$_\sigma$ for a suitable $C > 0$, but $\Delta u > 0$ in $\R^n\backslash \{0\}$. \item By Theorem \ref{thm-GeneralExistence-Distributional}, an implicit condition in Theorem \ref{SPHbis} is $n - 2m - \theta > 0$. \item Clearly, in view of Lemma \ref{lem-Classic->Distribution}, Proposition \ref{prop-StrongSPH} is a special case of Theorem \ref{SPHbis} with $\ell = 0$ and $\theta > 0$. \end{itemize} We call the property obtained in Theorem \ref{SPHbis} the {\it partial} SPH property, since we obtain the positivity of $(-\Delta)^i u$ only for $\ell\leq i \leq m-1$ instead of the whole range $1 \leq i \leq m-1$. Theorem \ref{SPHbis} could be useful to understand classical solutions for $n \geq 2m$, $\sigma < -2m$, or for $n = 2m-1$. As far as we know, the {\it partial} SPH property for \eqref{eqMAIN}$_\sigma$ with $p>1$ has not been studied before. However, for $p<0$, it was observed in \cite{DN17} that any positive smooth solution $u$ to $(-\Delta)^3 u = u^p$ in $\R^3$ with $-2 < p<-1/2$ satisfies $\Delta^2 u > 0$ in $\R^3$, while it is likely that $\Delta u$ does not have a fixed sign. \section{Existence and non-existence of classical solutions} This section is devoted to the proof of Theorems \ref{thm-Liouville-classical-upper} and \ref{thm-classical-supercritical}. We start with a quick proof of Theorem \ref{thm-Liouville-classical-upper}. \begin{proof}[Proof of Theorem \ref{thm-Liouville-classical-upper}] The case $n=2m> -\sigma$ and $p>1$ is just Proposition \ref{n=2m}. Suppose now $n > 2m > -\sigma$ and $1 < p < p_{\mathsf S}(m,\sigma)$. Using the fact that any classical solution to \eqref{eqMAIN}$_\sigma$ is a distributional solution, it follows from the proof of Proposition \ref{prop-IntegralEquation} that $u$ satisfies the integral equation \eqref{eqIntegralEquation} everywhere and $u > 0$ in $\R^n$. Now it is standard to show that $u$ is radially symmetric with respect to the origin; for example, see \cite[Theorem 3]{CL16} or \cite[Theorem 1.6]{DaiQin-Liouville-v6}. A consequence of the symmetry of $u$ is that it satisfies the following upper bound \[ u(x) \lesssim |x|^{-\theta}. \] From this estimate, there holds \[ \int_{\R^n} |x|^\sigma u^{p+1} < +\infty, \] since $p<p_{\mathsf S}(m,\sigma)$. From this fact and the positivity of $u$, we can easily obtain a contradiction by making use of a Pohozaev type identity. \end{proof} In the following, we consider the existence of classical solutions for $n > 2m > -\sigma$ and $p \geq p_{\mathsf S}(m,\sigma)$. As mentioned in the Introduction, the existence result in the case $p= p_{\mathsf S}(m,\sigma)$ is well known because it is related to the existence of optimal functions for the following higher order Hardy--Sobolev inequality \[ \Big( \int_{\R^n} |x|^\sigma |u|^\frac{2(n+\sigma)}{n-2m} \Big)^\frac{n-2m}{n+\sigma} \leq C_{\mathsf{HS}} \int_{\R^n} |\Delta^{m/2} u|^2, \quad \forall\; u \in \mathcal D^{m,2}(\R^n). \] Recall that the space $\mathcal D^{m,2}(\R^n)$ is the completion of $C_0^\infty (\R^n)$ under the Dirichlet norm, see \cite[Section 2.4]{Lions-1985}. Recall also that $\Delta^{m/2} = \nabla\Delta^{(m-1)/2}$ when $m$ is odd.
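For later reference, and consistently with the Pohozaev computation in the proof of Lemma \ref{lem-Uniqueness} below, we note how the exponent in this inequality relates to the critical power: one has
\[
p_{\mathsf S}(m,\sigma) + 1 = \frac{2(n+\sigma)}{n-2m}, \qquad \text{that is,} \qquad p_{\mathsf S}(m,\sigma) = \frac{n+2m+2\sigma}{n-2m}.
\]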
Since optimal functions for the above inequality can be characterized by \begin{align} \label{new6} \inf_{u \in \mathcal D^{m,2}(\R^n) \backslash \{ 0 \}} \frac {\|\Delta^{m/2} u\|_{L^2(\R^n)}^2} {\Big\||x|^\sigma |u|^\frac{2(n+\sigma)}{n-2m}\Big\|_{L^1(\R^n)}^\frac{n-2m}{n+\sigma}}, \end{align} it is easy to verify that any optimal function for the Hardy--Sobolev inequality yields a distributional solution to \eqref{eqMAIN}$_\sigma$. The fact that any optimal function $u$ for \eqref{new6} belongs to $C^{2m}(\R^n \backslash \{ 0 \} ) \cap C(\R^n)$ is also well known; for example, see \cite[Theorem 3]{JL14}. Hence, in the rest of this section, we will handle the supercritical case $p > p_{\mathsf S}(m,\sigma)$. To look for a solution to \eqref{eqMAIN}$_\sigma$, it is common to establish a local existence result first, which often relies either on a fixed-point argument, see \cite{Ni86, LGZ06b, Vil14}, or on the shooting method, see \cite{GG06}. However, it is not so clear how to employ a uniform fixed-point argument for \eqref{eqMAIN}$_\sigma$ in the full range of $\sigma > -2m$. It also seems difficult to apply the shooting method, since some of the quantities $(-\Delta)^i u$ may be undefined at the origin when $\sigma < 0$. Consequently, to obtain the existence result for \eqref{eqMAIN}$_\sigma$, we shall use an indirect argument. We start with positive solutions to the following auxiliary problem \begin{subequations}\label{eqA} \begin{align} \left\{ \begin{aligned} (-\Delta)^m u &= \lambda |x|^\sigma (1+u)^p & & \text{ in } B_1,\\ \nabla^i u \big|_{\partial B_1} &=0 & & \text{ for } 0 \leq i \leq m-1, \end{aligned} \right. \tag*{\eqref{eqA}$_\lambda$} \end{align} \end{subequations} with $\lambda > 0$. Under the Dirichlet boundary conditions, it is well known that the Green kernel of $(-\Delta)^m$ on balls is positive, so it is not difficult to use standard methods to get an existence result for small $\lambda > 0$. A key observation is that for a supercritical exponent $p$, the equation \eqref{eqA}$_\lambda$ admits a unique radial solution if $\lambda > 0$ is small enough. Then we study the set of radial solutions to \eqref{eqA}$_\lambda$ with different $\lambda$, and show that a suitable scaling of a sequence of solutions to \eqref{eqA}$_\lambda$ converges to a classical solution of \eqref{eqMAIN}$_\sigma$. This approach was recently used in \cite[Section 2]{ACDFGW} and in \cite{HS19}, respectively for the fractional Laplacian and the biharmonic case. For clarity, we divide the proof of Theorem \ref{thm-classical-supercritical} into several subsections. \subsection{Existence of solutions to (\ref{eqA})$_\lambda$ for all $0<\lambda \leq \lambda^*$} Here we prove the existence of the minimal solution $u_\lambda$ to \eqref{eqA}$_\lambda$ for $0<\lambda \leq \lambda^*$, where $\lambda^*$ is a critical value to be specified later. The crucial point is that $\G$, the Green function of $(-\Delta)^m$ on $B_1$ under the Dirichlet boundary conditions, is positive. Indeed, by Boggio's formula, \[ \G(x,y) = k_{n,m} |x-y|^{2m-n} \int_1^\frac{\sqrt{|x|^2 |y|^2 - 2 x\cdot y +1}}{|x-y|} (t^2-1)^{m-1} t^{1-n} dt\quad \forall\; x, y \in B_1, \] for some constant $k_{n,m}>0$; see \cite{GGS10}. From the positivity of $\G$ we can apply the standard monotone iteration method. First, $w_0 \equiv 0$ is obviously a subsolution to \eqref{eqA}$_\lambda$ for any $\lambda > 0$.
Consider \begin{align*} \left\{ \begin{aligned} (-\Delta)^m \overline w &= |x|^\sigma && \text{ in } B_1, \\ \nabla^i \overline w \big|_{\partial B_1} &=0 && \text{ for } 0 \leq i \leq m-1. \end{aligned} \right. \end{align*} As $\sigma > -2m$, $\overline w \in C(\overline B_1)$, hence there exists $\lambda_0 > 0$ such that $1 \geq \lambda_0(1+\|\overline w\|_\infty)^p$. Let $\lambda \in (0, \lambda_0]$; then $\overline w$ is readily a supersolution to \eqref{eqA}$_\lambda$. Consider the following iteration process: \begin{align*} \left\{ \begin{aligned} (-\Delta)^m w_{k+1} &= \lambda |x|^\sigma (1 + w_k)^p && \text{ in } B_1, \\ \nabla^i w_{k+1} \big|_{\partial B_1} &=0 && \text{ for } 0 \leq i \leq m-1. \end{aligned} \right. \end{align*} Clearly, using the positivity of $\G$ and the monotonicity of $t \mapsto (1+t)^p$ in $\R_+$, there holds \begin{align*} 0 \leq w_k \leq w_{k+1} \leq \overline w, \quad \forall\; k \geq 0. \end{align*} It is easy to conclude that $$u_\lambda = \lim_{k\to +\infty}w_k$$ exists and that $u_\lambda$ is a weak solution to \eqref{eqA}$_\lambda$. As $\overline w$ can be replaced by any solution of \eqref{eqA}$_\lambda$, we see that $u_\lambda$ is the minimal solution to \eqref{eqA}$_\lambda$. The uniqueness of the minimal solution and $\sigma > -2m$ guarantee that $u_\lambda \in C_{0,\rm rad} (\overline B_1)$. Here $C_{0,\rm rad} (\overline B_1)$ stands for the space of radial continuous functions in $\overline B_1$ with zero boundary value, equipped with the sup-norm $\|u \|_\infty$. Denote \[ \Lambda = \big\{ \lambda > 0 : \eqref{eqA}_\lambda \text{ admits a positive solution in } C_{0,\rm rad} (\overline B_1) \big\}. \] As any solution of \eqref{eqA}$_\lambda$ is a supersolution to \eqref{eqA}$_\mu$ for $\mu \in (0, \lambda)$, $\Lambda$ is clearly an interval. We now claim that $$\lambda^* = \sup \Lambda < +\infty.$$ Let $\Phi_{1,\sigma}$ be the first eigenfunction for the following eigenvalue problem \begin{equation*}\label{eqEVP} \left\{ \begin{aligned} (-\Delta)^m u &= \lambda_{1,\sigma} |x|^\sigma u && \text{ in }\; B_1,\\ \nabla^i u \big|_{\partial B_1} &=0 && \text{ for }\; 0 \leq i \leq m-1. \end{aligned} \right. \end{equation*} It is not hard to see that $\Phi_{1,\sigma}$ can be obtained via a standard argument since $\sigma > -2m$. Indeed, \[ 0 < \lambda_{1,\sigma} = \inf_{u \in H_0^{m,2}(B_1)\backslash\{0\}} \frac {\|\Delta^{m/2} u\|^2_{L^2(B_1) }} { \|u\|^2_{L^2(B_1, |x|^\sigma dx)}} \] is attained. Notice that the positivity of $\G$ also implies the strong maximum principle, so that the corresponding first eigenfunction $\Phi_{1,\sigma}$ can be chosen to be positive in $B_1$. Now let $\lambda \in \Lambda$ and let $u$ be a solution to \eqref{eqA}$_\lambda$; there holds \begin{align*} \lambda_{1,\sigma} \int_{B_1} |x|^\sigma u \Phi_{1,\sigma} &= \int_{B_1} u (-\Delta)^m \Phi_{1,\sigma} \\ &=\lambda \int_{B_1} |x|^\sigma (1+u)^p \Phi_{1,\sigma} \geq\lambda p\int_{B_1} |x|^\sigma u \Phi_{1,\sigma}. \end{align*} Since $u\Phi_{1,\sigma} > 0$ in $B_1$, we arrive at $\lambda \leq \lambda_{1, \sigma}/p$; in other words, $\lambda^* \leq \lambda_{1, \sigma}/p < +\infty$ as claimed.
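The elementary inequality used in the last display deserves a word: for $t \geq 0$ and $p > 1$, convexity (Bernoulli's inequality) gives
\[
(1+t)^p \;\geq\; 1 + pt \;\geq\; pt,
\]
which, applied pointwise with $t = u(x) > 0$, yields $(1+u)^p \geq p\,u$ in $B_1$.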
\begin{proposition}\label{prop-ExistenceA-Small} There exists $0<\lambda^*<+\infty$ such that \begin{itemize} \item we have a minimal solution $u_\lambda \in C_{0,\rm rad} (\overline B_1)$ to equation \eqref{eqA}$_\lambda$ for any $0<\lambda < \lambda^*$, and for any $x \in B_1$, the mapping $\lambda \mapsto u_\lambda(x)$ is increasing in $(0, \lambda^*)$; \item for $\lambda > \lambda^*$, \eqref{eqA}$_\lambda$ has no solution. \end{itemize} Furthermore, given $\mu \in (0, \lambda^*)$, there exists a constant $C_\mu$ such that for any solution $u \in C_{0,\rm rad} (\overline B_1)$ to equation \eqref{eqA}$_\lambda$ with $\lambda \geq \mu$, there holds \[ u (x) \leq C_\mu |x|^{-\theta} \] for all $x \in B_{1/4} \backslash \{ 0 \}$. \end{proposition} \begin{proof} We are only left with the uniform upper bound for solutions to \eqref{eqA}$_\lambda$ with $\lambda \geq \mu$. As $n>2m$, the Green function $\G$ satisfies the following two-sided estimate: for any $x, y \in B_1$, \begin{equation}\label{eqTwoSidedGreen} |x-y|^{2m-n} \min \Big\{ 1, \Big( \frac{(1-|x|)(1-|y|)}{|x-y|^2} \Big)^m \Big\} \lesssim \G(x,y) \lesssim |x-y|^{2m-n}; \end{equation} see \cite[equation (4.24)]{GGS10}. Let $u \in C_{0,\rm rad} (\overline B_1)$ be a solution to \eqref{eqA}$_\lambda$ and $x \in B_{1/4} \backslash \{ 0 \}$. Take any \[ y \in B_{|x|/4} \big(\frac {3x}4 \big) \subset B_{|x|/2} (x) \cap B_{|x|} (0). \] There holds $|x|/2 \leq |y| \leq |x|$, so that $1-|y| \geq |x-y|$, $1-|x| \geq |x-y|$ and $|x-y| \leq |x|/2$. By \eqref{eqTwoSidedGreen} we arrive at \[ \G(x,y) \gtrsim |x-y|^{2m-n} \gtrsim |x|^{2m-n}. \] Moreover, putting the above facts together, the solution $u$ can be estimated as follows \begin{align*} u(x) &\gtrsim \mu \int_{B_{|x|/4} (3x/4)} \frac{|y|^\sigma}{|x-y|^{n-2m}} u^p(y) dy \gtrsim \mu |x|^{2m-n+\sigma} \times \frac {|x|^n}{4^n} u^p(x) . \end{align*} Here we used the fact that $u$ is decreasing with respect to the radius, see \cite[Theorem 2]{GeYe}. The above inequality gives us the desired estimate. \end{proof} \subsection{Uniqueness of radial solutions to (\ref{eqA})$_\lambda$ for $\lambda > 0$ small} This subsection is devoted to showing the uniqueness of the solution to \eqref{eqA}$_\lambda$ in $C_{0,\rm rad}(\overline B_1)$ for small $\lambda > 0$. We will use Schaaf's idea (see \cite{Sch00}), based on the Pohozaev identity and the supercriticality of the exponent $p$. \begin{lemma}\label{lem-Pohozaev} Assume that $\Sigma \subset \R^n$ is a bounded, smooth domain and $f \in C^1(\overline \Sigma \times \R)$. Let $u$ be a $C^{2m}$-solution to \[ (-\Delta)^m u = f(x,u) \quad \text{ in } \Sigma \subset \R^n. \] Denote by $\nu$ the unit outward normal vector on $\partial\Sigma$ and \[ F(x,u) := \int_0^u f(x,t)dt. \] Then one has the following identities \begin{equation}\label{eqPohozaev1} \begin{aligned} n \int_{\Sigma} F(x,u) & + \int_{\Sigma} x \cdot \nabla_x F(x,u) - \int_{\partial\Sigma} (x\cdot\nu) F(x, u) \\ = & \frac{n-2m}2 \int_{\Sigma} |\Delta^{m/2} u|^2 + \frac 12 \int_{\partial \Sigma} A(u, u) + \frac{1}{2}\int_{\partial\Sigma} T_m(x, u) \end{aligned} \end{equation} and \begin{equation}\label{eqPohozaev2} \begin{aligned} \int_{\Sigma} |\Delta^{m/2} u|^2 = \int_{\Sigma} u f(x, u) + \frac 12 \int_{\partial \Sigma} B(u, u) .
\end{aligned} \end{equation} Here the boundary terms have the form \begin{align*} A (u,v) = \sum_{j=1, j \ne m}^{2m-1} \overline l_j (x, \nabla^j u, \nabla^{2m-j} v) +\sum_{j=0}^{2m-1} \widetilde l_j (\nabla^j u, \nabla^{2m-j-1} v) \end{align*} and \begin{align*} B (u,v) = \sum_{j=0}^{2m-1} \widehat l_j (\nabla^j u, \nabla^{2m-1-j} v), \end{align*} where $\overline l_j $ is trilinear in $(x, u, v)$, while $\widetilde l_j$ and $\widehat l_j$ are bilinear in $(u, v)$, and $$T_m(x, u) = \left\{\begin{array}{ll} \big(x\cdot\Delta^{m/2}u\big)\big(\nu\cdot\Delta^{m/2}u\big) & \;\; \mbox{if $m$ is odd},\\ \displaystyle \Delta^{m/2}u \times \sum_{1\leq i, j \leq n} x_i\nu_j \partial_{ij}\big(\Delta^{m/2 - 1}u\big) & \;\; \mbox{if $m$ is even}. \end{array} \right.$$ \end{lemma} To prove \eqref{eqPohozaev1} and \eqref{eqPohozaev2}, it is routine to use $x \cdot \nabla u$ and $u$ as test functions over $\Sigma$. Such computations are straightforward but tedious; for completeness, we provide a sketch of the proof in Appendix \ref{apd-Pohozaev}. It is important to note that the boundary terms $A$ and $B$ in \eqref{eqPohozaev1} and \eqref{eqPohozaev2} do not depend on $f$. Now we are in a position to prove the uniqueness result for radial solutions to \eqref{eqA}$_\lambda$ for small $\lambda > 0$. It is worth noting that the supercriticality condition $p> p_{\mathsf S}(m,\sigma)$ is crucial here. \begin{lemma} \label{lem-Uniqueness} Let $p> p_{\mathsf S}(m,\sigma)$. Then there exists $\lambda_*>0$ such that for every $\lambda \in (0, \lambda_*]$, the equation \eqref{eqA}$_\lambda$ has a unique solution in $C_{0,\rm rad} (\overline B_1)$, which hence coincides with the minimal solution $u_\lambda $. \end{lemma} \begin{proof} Let $\lambda \in \Lambda$ and let $u_\lambda \in C_{0,\rm rad} (\overline B_1)$ be the minimal solution to \eqref{eqA}$_\lambda$, provided by Proposition \ref{prop-ExistenceA-Small}. Let $v \in C_{0,\rm rad} (\overline B_1)$ be another solution to \eqref{eqA}$_\lambda$; then $w = v-u_\lambda \geq 0$ in $B_1$ and $w$ is a solution to \begin{equation} \label{new7} \left\{ \begin{aligned} (-\Delta)^m w &= \lambda |x|^\sigma f_\lambda (x, w) & & \text{ in } B_1,\\ \nabla^i w \big|_{\partial B_1} &=0 & & \text{ for } 0 \leq i \leq m-1, \end{aligned} \right. \end{equation} with \[ f_\lambda (x, w) =(1+ w + u_\lambda)^p - (1+u_\lambda)^p \geq 0. \] Our aim is to show that $w \equiv 0$ if $\lambda$ is small, hence concluding the claimed uniqueness for \eqref{eqA}$_\lambda$. Denote \[ F_\lambda (x, w) = \int_0^w f_\lambda (x,t) dt. \] In the sequel, we apply Lemma \ref{lem-Pohozaev} to the solution $w$ of \eqref{new7} with $\Sigma = B_1 \backslash \overline B_\epsilon$, $\epsilon \in (0, 1)$. Thanks to the Dirichlet boundary conditions, there hold $F_\lambda(x, w) = A(w, w) = B(w, w) = 0$ on $\partial B_1$. Moreover, since $w$ is a radial function, we can check readily that for any $m$ and $r > 0$, there holds \[ T_m(x, w) = (x\cdot \nu) |\Delta^{m/2} w|^2 \quad \mbox{on }\; \partial B_r. \] In particular, $T_m(x, w) \geq 0$ on $\partial B_1$.
Hence, from \eqref{eqPohozaev1} we have \begin{align*} \lambda \int_{B_1 \backslash B_\epsilon} & \Big[ |x|^\sigma F_\lambda(x, w) + \frac 1n x \cdot \nabla_x \big( |x|^\sigma F_\lambda(x, w) \big) \Big] + \frac{\lambda \epsilon^{\sigma + 1}}{n}\int_{\partial B_\epsilon} F_\lambda(x, w) \\ \geq & \; \frac{n-2m}{2n} \int_{B_1 \backslash B_\epsilon} |\Delta^{m/2} w|^2 - \frac{\epsilon}{2n} \int_{\partial B_\epsilon} |\Delta^{m/2} w|^2 + \frac 1{2n} \int_{\partial B_\epsilon} A(w, w) . \end{align*} From \eqref{eqPohozaev2}, we obtain, for any $\alpha \in \R$, \[ \alpha\int_{B_1 \backslash B_\epsilon} |\Delta^{m/2} w|^2 = \alpha \lambda \int_{B_1 \backslash B_\epsilon} |x|^\sigma w f_\lambda (x, w) + \frac \alpha 2 \int_{\partial B_\epsilon} B(w, w). \] Combining these two estimates, we arrive at \begin{align} \label{new8} \begin{split} & \Big( \frac{n-2m}{2n} - \alpha \Big) \int_{B_1 \backslash B_\epsilon} |\Delta^{m/2} w|^2 + J_\epsilon \\ \leq & \, \lambda \int_{B_1 \backslash B_\epsilon} \Big[ |x|^\sigma F_\lambda(x, w) + \frac 1n x \cdot \nabla_x \big( |x|^\sigma F_\lambda(x, w) \big) - \alpha |x|^\sigma w f_\lambda (x, w) \Big] \\ = & \, \lambda \int_{B_1 \backslash B_\epsilon} |x|^\sigma \Big[ \big( 1 + \frac \sigma n \big) F_\lambda(x, w) - \alpha w f_\lambda (x, w) + \frac 1n x \cdot \nabla_x F_\lambda(x, w) \Big], \end{split} \end{align} where \begin{align*} J_\epsilon & = - \frac{\epsilon}{2n} \int_{\partial B_\epsilon} |\Delta^{m/2} w|^2 - \frac{\lambda \epsilon^{\sigma + 1}}{n} \int_{\partial B_\epsilon} F_\lambda(x, w) \\ & \quad + \frac 1{2n} \int_{\partial B_\epsilon} A(w, w) - \frac \alpha 2 \int_{\partial B_\epsilon} B(w,w). \end{align*} We will estimate $J_\epsilon $ and $\nabla_x F_\lambda(x, w) $ appearing in \eqref{new8}. For $\nabla_x F_\lambda(x, w)$, we note that \begin{equation}\label{Fnearzero} \begin{aligned} F_\lambda(x, w) = w \int_0^1 f_\lambda (x,sw) ds & = w \int_0^1 \big[ (1+ sw + u_\lambda)^p - (1+u_\lambda)^p \big] ds\\ &= p w^2 \int_0^1 s \int_0^1 (1+u_\lambda + \tau sw)^{p-1} d\tau ds. \end{aligned} \end{equation} Therefore \[ \nabla_x F_\lambda(x, w) = \Big[ p(p-1)w^2 \int_0^1 s\int_0^1 (1+u_\lambda + \tau s w)^{p-2} d\tau ds \Big] \nabla u_\lambda (x) . \] Using \cite[Theorem 2]{GeYe}, $u_\lambda$ is decreasing with respect to the radius, namely $x \cdot \nabla u_\lambda \leq 0$, so \begin{align} \label{new9} x \cdot \nabla_x F_\lambda(x, w) \leq 0. \end{align} Now we estimate $J_\epsilon $. As $v, u_\lambda \in C_{0, \rm rad}(\overline B_1)$ and $\sigma > -2m$, the regularity theory ensures that $f_\lambda \in C^{0, \gamma}(\overline B_1)$ for some $\gamma > 0$. The scaling argument and the interior estimate, see \cite[Theorem 2.19]{GGS10}, applied to \eqref{new7} then imply \begin{align*} |\nabla^j w(x)| \leq C\left(|x|^{2m+\sigma - j} + 1\right), \quad \forall\; 1 \leq j \leq 2m, \; |x| \leq \frac{1}{2}. \end{align*} Hence \begin{align*} |A (w,w) | + |B(w,w)| + \epsilon|\Delta^{m/2} w|^2\leq C\big[\epsilon^{2(2m+\sigma) + 1 - 2m} + 1\big]\quad \mbox{on }\; \partial B_\epsilon.
\end{align*} For the term involving $F_\lambda$, as $F_\lambda$ is bounded in a neighborhood of the origin, we get \[ \epsilon^{\sigma + 1} F_\lambda(x, w) \leq C \epsilon^{\sigma + 1} \quad \mbox{on }\; \partial B_\epsilon. \] Finally, as $n > 2m > -\sigma$, there holds \begin{align} \label{new10} \lim_{\epsilon\to 0^+} J_\epsilon = 0. \end{align} Keep in mind that $F_\lambda \in L^1(B_1)$, $wf_\lambda \in L^1(B_1)$, and $\Delta^{m/2} w \in L^2(B_1)$. Putting \eqref{new8}, \eqref{new9}, and \eqref{new10} together and sending $\epsilon\to 0^+$, we conclude that \begin{align*} \Big( \frac{n-2m}{2n} - \alpha \Big) & \int_{B_1} |\Delta^{m/2} w|^2 \leq \lambda \int_{B_1 } |x|^\sigma \Big[ \big( 1 + \frac \sigma n \big) F_\lambda(x, w) - \alpha w f_\lambda (x, w) \Big] . \end{align*} From now on, we consider $\lambda \leq \lambda^*/2$, where $\lambda^*$ is given in Proposition \ref{prop-ExistenceA-Small}. By direct computation, we get \[ F_\lambda (x, t) = \frac 1{p+1} \big[(1+ t + u_\lambda)^{p+1} - (1+ u_\lambda)^{p+1} \big] - t (1+u_\lambda)^p. \] As $0 \leq u_\lambda \leq \|u_{\lambda^*/2}\|_\infty$, we claim \[ \lim_{t \to +\infty} \frac{F_\lambda (x,t)}{t f_\lambda (x,t)} =\frac 1{p+1}\lim_{t \to +\infty} \frac{ (1+ t + u_\lambda)^{p+1} - (1+ u_\lambda)^{p+1}}{t \big[(1+ t + u_\lambda)^p - (1+ u_\lambda)^p \big]} = \frac 1{p+1} \] uniformly in $B_1$ and in $\lambda \leq \lambda^*/2$. Thus, combining with \eqref{Fnearzero}, for any $\delta >0$, there is $M_\delta >0$ such that \[ F_\lambda (x,t) \leq \frac {1+\delta}{p+1} t f_\lambda (x, t) + M_\delta t^2 \] for all $(x, t, \lambda) \in B_1 \times \R_+\times (0, \lambda^*/2]$. Then we choose $\alpha, \delta > 0$ satisfying \[ \big( 1 + \frac \sigma n \big) \frac {1+\delta}{p+1} = \alpha <\frac{n-2m}{2n}. \] This can be done because \[ \big( 1 + \frac \sigma n \big) \frac 1{p+1} <\frac{n-2m}{2n} \quad \iff \quad p+1> \frac{2(n+\sigma)}{n-2m}, \] which holds since $p > p_{\mathsf S}(m,\sigma)$. With these choices, we have just shown that for $\lambda \leq \lambda^*/2$, \begin{align} \label{new11} \Big( \frac{n-2m}{2n} - \alpha \Big) \int_{B_1} |\Delta^{m/2} w|^2 & \leq \lambda \big( 1 + \frac \sigma n \big) M_\delta \int_{B_1} |x|^\sigma w^2 . \end{align} Making use of the H\"older and Hardy--Sobolev inequalities, as $\sigma > -n$, there holds \begin{align}\label{new111} \begin{split} \int_{B_1} |x|^\sigma w^2 & \leq \Big( \int_{B_1} |x|^\sigma |w|^\frac{2(n+\sigma)}{n-2m} dx\Big)^\frac{n-2m}{n+\sigma} \Big( \int_{B_1 } |x|^\sigma \Big)^\frac{\sigma+2m}{n+\sigma}\\ & \lesssim \int_{B_1} |\Delta^{m/2} w|^2. \end{split} \end{align} Putting \eqref{new111} into \eqref{new11}, we obtain $\|\Delta^{m/2} w\|_{L^2(B_1)} = 0$ if $\lambda > 0$ is small enough. Returning to \eqref{new111} and using the continuity of $w$, we get $w \equiv 0$ in $B_1$; that is, $u_\lambda$ is the unique solution in $C_{0, \rm rad}(\overline B_1)$ for $\lambda > 0$ small. \end{proof} \subsection{Existence of classical solutions to (\ref{eqMAIN})$_\sigma$} Here we finally prove the existence of a classical solution to \eqref{eqMAIN}$_\sigma$ using solutions to the auxiliary problem \eqref{eqA}$_\lambda$.
\begin{lemma}\label{lem-Bifurcation} There exists a sequence $(\lambda_k, u^{\lambda_k})$ in $(0, \lambda^*]\times C_{0,\rm rad} (\overline B_1)$, with $u^{\lambda_k}$ a solution to \eqref{eqA}$_{\lambda_k}$, such that
\[ \lim_{k \to +\infty} \lambda_k = \lambda_\infty > 0 \quad \mbox{and}\quad \lim_{k \to +\infty} \|u^{\lambda_k}\|_{\infty} = +\infty. \]
\end{lemma}
\begin{proof} Consider $\mathscr F : C_{0,\rm rad} (\overline B_1) \to C_{0,\rm rad} (\overline B_1)$ defined as follows: for $u \in C_{0,\rm rad} (\overline B_1)$, let $v = \mathscr F(u)$ be the unique solution to
\begin{align*} \left\{ \begin{aligned} (-\Delta)^m v & = |x|^\sigma (1 + |u|)^p && \mbox{in }\; B_1, \\ \nabla^i v \big|_{\partial B_1} &= 0 && \mbox{for }\; 0 \leq i \leq m-1. \end{aligned} \right. \end{align*}
As $\sigma > -2m > -n$, regularity theory and the Sobolev embedding readily show that $\mathscr F$ is compact. For each $\lambda \in (0, \lambda^*)$, any solution $u^\lambda$ to \eqref{eqA}$_\lambda$ actually solves
\begin{align} \label{new12} u^\lambda = \lambda\mathscr F (u^\lambda). \end{align}
It follows from \cite[Theorem 6.2]{Rab73} that the set of pairs $(\lambda, u^\lambda)$ satisfying \eqref{new12} is unbounded in $\R_+ \times C_{0,\rm rad} (\overline B_1)$. By Proposition \ref{prop-ExistenceA-Small}, we can extract an unbounded sequence $(\lambda_k, u^{\lambda_k}) \in (0, \lambda^*) \times C_{0,\rm rad} (\overline B_1)$. Up to a subsequence, we can assume that
\[ \lim_{k \to +\infty} \lambda_k = \lambda_\infty \in [0, \lambda^*] \quad \text{and} \quad \lim_{k \to +\infty} \|u^{\lambda_k}\|_\infty = +\infty. \]
Moreover, Lemma \ref{lem-Uniqueness} combined with the unboundedness of $\|u^{\lambda_k}\|_\infty$ forces $\lambda_\infty > 0$. \end{proof}
We are now ready to prove the existence of a classical solution to \eqref{eqMAIN}$_\sigma$ in $\R^n$.
\begin{proof}[Proof of Theorem \ref{thm-classical-supercritical}] Let $(\lambda_k, u^{\lambda_k})$ be the sequence provided by Lemma \ref{lem-Bifurcation}. As $u^{\lambda_k}$ is radially symmetric and decreasing with respect to the radius, we have
\[ u^{\lambda_k} (0)=\max_{\overline B_1} u^{\lambda_k}(x) \to +\infty. \]
Set
\[ v_k(x) = \frac{u^{\lambda_k}(r_k x)}{u^{\lambda_k}(0)} \]
with $r_k > 0$ satisfying
\[ \lambda_k r_k^{2m+\sigma} \big(u^{\lambda_k} (0) \big)^{p-1}= 1. \]
Clearly, $r_k \to 0$ as $k \to +\infty$. It is easy to check that $v_k$ satisfies in $B_{1/r_k}$
\begin{equation*}\label{eqVk} \begin{aligned} (-\Delta)^m v_k (x) &= \frac{r_k^{2m}}{u^{\lambda_k}(0)} \lambda_k |r_kx|^\sigma \Big( u^{\lambda_k}(r_k x) + 1\Big)^p\\ &=|x|^\sigma \Big(v_k (x) + \frac 1{u^{\lambda_k}(0)}\Big)^p =: h_k (x). \end{aligned} \end{equation*}
Moreover, $0 \leq v_k \leq 1$, $v_k(0)=1$, and $v_k$ is radially symmetric and decreasing with respect to the radius. Now let $R>0$ be arbitrary but fixed. As $\sigma > -2m$ and $|h_k(x)| \leq 2^p |x|^\sigma$ in $B_{1/r_k}$, we have $h_k \in L^q(B_{2R})$ for $k$ large enough and some $q > n/(2m)$ (when $\sigma < 0$, any $q \in (n/(2m), n/(-\sigma))$ works, a nonempty interval since $-\sigma < 2m$; when $\sigma \geq 0$, any $q > n/(2m)$ works). Applying the $L^q$-theory to the equation $(-\Delta)^m v_k = h_k$, see \cite[Corollary 2.21]{GGS10}, we know that the $v_k$ are bounded in $W^{2m,q}(B_R) \hookrightarrow C^{0, \gamma}(B_R)$ for some $\gamma \in (0,1)$. Therefore, up to a subsequence, there exists a function $v$ such that $v_k \to v$ locally uniformly in $\R^n$ and $v \in C_{\rm rad}(\R^n)$.
Hence
\[ \mbox{$v$ is non-increasing with the radius, }\; v(0)=1, \;\; 0 \leq v(x) \leq 1. \]
Readily, $v$ is a continuous, distributional solution to \eqref{eqMAIN}$_\sigma$ in $\R^n$. The regularity theory implies that $v$ is a classical solution. \end{proof}
\begin{remark} Using Proposition \ref{prop-IntegralEquation}, the limiting function $v \in C_{\rm rad}(\R^n)$ satisfies the integral equation \eqref{eqIntegralEquation}, which leads to the decay estimate $|v(x)| \leq C|x|^{-\theta}$ at infinity. Here we give a direct proof by Proposition \ref{prop-ExistenceA-Small}. Indeed, as $\lambda_\infty > 0$, we have
\begin{align*} v_k (x) \leq u^{\lambda_k}(r_k x) \lesssim r_k^{-\theta} |x|^{-\theta} \lesssim \lambda_k^\frac{1}{p-1} |x|^{-\theta} \leq C |x|^{-\theta} \end{align*}
for any $x \in B_{1/(4r_k)}\backslash \{ 0 \}$. So we get $v(x) \lesssim |x|^{-\theta}$ in $\R^n \backslash \{ 0 \}$. \end{remark}
A quick consequence of Theorem \ref{thm-classical-supercritical} is the existence of a fast-decay punctured solution to \eqref{eqMAIN}$_\sigma$ in $\R^n \backslash \{ 0 \}$, which is different from the slow-decay punctured solution $C_0 |x|^{-\theta}$. Given $n > 2m > -\sigma$ and $p_{\mathsf C}(m,\sigma) < p < p_{\mathsf S}(m,\sigma)$, consider \eqref{eqMAIN}$_{\widetilde\sigma}$ with $\widetilde \sigma = (n-2m)p - (n+2m+\sigma)$ and the same $p$. We check easily that $\widetilde\sigma > -2m$ and $p > p_{\mathsf S} (m, \widetilde \sigma)$. Using the proof of Theorem \ref{thm-classical-supercritical}, there exists a radial classical solution $\widetilde u$ to \eqref{eqMAIN}$_{\widetilde\sigma}$. The Kelvin transform of $\widetilde u$, namely
\[ u(x) = |x|^{2m -n} \widetilde u \big(\frac x{|x|^2} \big), \]
is then a fast-decay punctured solution to \eqref{eqMAIN}$_\sigma$. Thus, we have just shown the following.
\begin{corollary}\label{cor-punctured-subcritical} Let $n > 2m > -\sigma$ and $p_{\mathsf C}(m,\sigma) < p < p_{\mathsf S}(m,\sigma)$. Then the equation \eqref{eqMAIN}$_\sigma$ admits a radial, fast-decay, punctured solution $u$ such that
\[ u(x) \sim \left\{ \begin{aligned} & |x|^{-\theta} & & \text{ as } |x| \to 0,\\ & |x|^{2m-n} & & \text{ as } |x| \to +\infty. \end{aligned} \right. \]
\end{corollary}
In the case $m=2$, Corollary \ref{cor-punctured-subcritical} is already known; see \cite[Theorem 2.2]{HS19}.
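For the reader's convenience, the decay rates in Corollary \ref{cor-punctured-subcritical} can be read off from the Kelvin transform by a short computation; here we use the convention $\theta = \frac{2m+\sigma}{p-1}$ (consistent with the choice of $r_k$ above) and the bound $\widetilde u(y) \lesssim |y|^{-\widetilde\theta}$ at infinity, with $\widetilde\theta = \frac{2m+\widetilde\sigma}{p-1}$, provided by the preceding remark. As $|x| \to 0$ we have $|x/|x|^2| = 1/|x| \to +\infty$, hence
\[ u(x) = |x|^{2m-n}\,\widetilde u\Big(\frac{x}{|x|^2}\Big) \lesssim |x|^{2m-n}\,|x|^{\widetilde\theta} = |x|^{\,2m-n+\frac{(n-2m)p-(n+\sigma)}{p-1}} = |x|^{-\frac{2m+\sigma}{p-1}} = |x|^{-\theta}, \]
while, as $|x| \to +\infty$, $x/|x|^2 \to 0$ and $\widetilde u(x/|x|^2) \to \widetilde u(0) \in (0,+\infty)$, so that $u(x) \sim \widetilde u(0)\, |x|^{2m-n}$.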
\section{Further remarks and some open questions}

From the discussion in this paper, we see that the polyharmonic Hardy--H\'enon equation \eqref{eqMAIN}$_\sigma$ is much more complex than the Laplacian case, since many conclusions for $m = 1$ are no longer valid for $m \geq 2$. In the present work, we mainly studied the case $n \geq 2m$, $\sigma > -2m$, and $p > 1$. Under these conditions, we obtained the following:
\begin{itemize}
\item a classical solution to \eqref{eqMAIN}$_\sigma$ exists \textbf{if and only if} $p \geq p_{\mathsf S}(m,\sigma)$;
\item a distributional solution to \eqref{eqMAIN}$_\sigma$ exists \textbf{if and only if} $p > p_{\mathsf C}(m,\sigma)$;
\item a punctured solution to \eqref{eqMAIN}$_\sigma$ exists \textbf{if} $p > p_{\mathsf C}(m,\sigma)$.
\end{itemize}
There is no ``and only if'' for punctured solutions to \eqref{eqMAIN}$_\sigma$: by the examples after the proof of Proposition \ref{prop-Strong->Distribution}, we know that for any $m \geq 2$, there exist $n > 2m$ and $1 < p \leq p_{\mathsf C}(m,\sigma)$ such that punctured solutions exist, and these solutions are not distributional ones. Moreover, recall that the inequality \eqref{new3} is a sufficient condition for the existence of punctured solutions. Thus, a natural question is whether the condition \eqref{new3} is also necessary.

\noindent \textbf{Question 1}: Does a punctured solution to \eqref{eqMAIN}$_\sigma$ exist only if \eqref{new3} is satisfied? If the general answer is negative, is it true at least for $n \geq 2m$, $\sigma > -2m$, and $p > 1$?

For the existence of classical solutions to \eqref{eqMAIN}$_\sigma$, the situation seems very open for $n \leq 2m$ or $\sigma < -2m$. In Remark \ref{rem:j1}, we see some examples of classical solutions, which are not distributional ones, with $n < 2m = 6$. There are many other examples; here are some for the biharmonic case. Let $m = 2$ and $\sigma < -4$; then
\[ \begin{aligned} &\text{if } \; \; n = 2, \; \theta < 0, \; \theta \ne -2;\\ &\text{or } \; \; n = 3, \; \theta \in (-\infty, -2)\cup(-1, 0);\\ &\text{or } \; \; n \geq 4, \; \theta < -2, \end{aligned} \]
a classical solution of \eqref{eqMAIN}$_\sigma$ exists in the form $C|x|^{-\theta}$, because \eqref{new3} is satisfied. A striking observation is that none of the above examples of classical solutions satisfies the SPH property.

\noindent \textbf{Question 2}: Let $n < 2m$ or $\sigma < -2m$. For which $p > 1$ do classical solutions exist? Moreover, can we have classical solutions satisfying the strong SPH property?

There are very few results on the existence or non-existence of solutions to the polyharmonic equation \eqref{eqMAIN}$_\sigma$ with $0 < p < 1$ and $m \geq 2$. As the case $\sigma < 0$ could yield more difficulty in general, we ask

\noindent \textbf{Question 3}: Let $\sigma > 0$ and $m\geq 2$. For which $p \in (0, 1)$ does a classical solution to \eqref{eqMAIN}$_\sigma$ exist?

Choosing suitable $p < 1$ and $\sigma$, the examples after Question 1 provide some classical solutions to \eqref{eqMAIN}$_\sigma$. Once again, the situation is totally different from the second-order case. In fact, Dai and Qin proved that no classical solution to $-\Delta u = |x|^\sigma u^p$ exists for any $\sigma \in \R$ and $p \in (0, 1]$; see \cite[Theorem 1.1]{DQ20}. For the case $m \geq 2$, the results in \cite{NNPY18} suggest that the answer to Question 3 may depend on the parity of $m$.

At last, by Theorem \ref{thm-GeneralExistence-Distributional}, the condition $n - 2m - \theta > 0$ is necessary to have a distributional solution; however, apart from the case $\sigma > -2m$, we do not know whether it is also sufficient. Hence we can ask

\noindent \textbf{Question 4}: Is there always a distributional solution to \eqref{eqMAIN}$_\sigma$ when $n - 2m - \theta > 0$ and $\sigma \leq -2m$?

To conclude, the existence and non-existence problems for the polyharmonic equation \eqref{eqMAIN}$_\sigma$ keep a lot of secrets when $m \geq 2$ and $\sigma \ne 0$.
Apparently, we are still very far from a complete picture such as the one available for the case $m = 1$ or the case $\sigma = 0$; see \cite{NNPY18}, where a complete picture is known for classical solutions. Possible answers could depend on many factors, including the sign of $n- 2m$, the sign of $\sigma + 2m$, the sign of $p-1$, and the required regularity of solutions.

\section*{Acknowledgments}
This work was initiated when QAN was visiting the Center for PDEs at the East China Normal University in 2019. He would like to thank them for their hospitality and financial support. Thanks also go to Quoc Hung Phan for useful discussions on the work \cite{PhanSouplet}. QAN is supported by the Tosio Kato Fellowship awarded in 2018. DY is partially supported by the Science and Technology Commission of Shanghai Municipality (STCSM) under grant No. 18dz2271000.

\appendix
\section{Proof of Lemma \ref{lem-Pohozaev}} \label{apd-Pohozaev}

For completeness, we provide here a proof of Lemma \ref{lem-Pohozaev}. First, we need the following two identities. Let $u, v \in C^{2m}(\Sigma) \cap C^{2m-1}(\overline \Sigma)$. Then
\begin{equation}\label{eq-Identity2} \begin{aligned} \int_{\Sigma} & \big[ v (-\Delta)^m u + u (-\Delta)^m v \big] = -\int_{\partial \Sigma} B_m (u, v) + 2 \int_{\Sigma} \Delta^{m/2} u \Delta^{m/2} v \end{aligned} \end{equation}
and
\begin{equation}\label{eq-Identity1} \begin{aligned} \int_{\Sigma} \big[ (x \cdot \nabla v)& (-\Delta)^m u + (x \cdot \nabla u) (-\Delta)^m v \big] \\ &=- \int_{\partial \Sigma} C_m (u, v) - \frac{n-2m}2 \int_{\Sigma} \big[ v (-\Delta)^m u + u (-\Delta)^m v \big] , \end{aligned} \end{equation}
where the boundary term $B_m (u,v)$ is that in Lemma \ref{lem-Pohozaev}, that is
\begin{align*} B_m (u,v) = \sum_{j=0}^{2m-1} \widehat l_{m,j} (\nabla^j u, \nabla^{2m-1-j} v), \end{align*}
and the boundary term $C_m (u,v)$ is of the form
\begin{align*} C_m (u,v) = \sum_{j=1}^{2m-1} \overline l_{m,j} (x, \nabla^j u, \nabla^{2m-j} v) +\sum_{j=0}^{2m-1} \breve l_{m,j} (\nabla^j u, \nabla^{2m-j-1} v). \end{align*}
Here $\overline l_{m,j}$ is trilinear in $(x, u, v)$, while $\breve l_{m,j}$ and $\widehat l_{m,j}$ are bilinear in $(u, v)$. The identities \eqref{eq-Identity2} and \eqref{eq-Identity1} can be proved directly using integration by parts; see for example \cite[Proposition 3.3]{GPY17}. The only thing we need to verify is the precise formula for the term $\overline l_{m,m}$. In fact, this term comes from the integration by parts for
\begin{align*} \int_\Sigma \Delta^{(m-1)/2}(x\cdot \nabla v)\Delta^{(m+1)/2}u. \end{align*}
Let $m = 2k + 1$. As $\Delta^k(x\cdot \nabla v) = 2k\Delta^k v + x\cdot\nabla(\Delta^k v)$, we obtain
\begin{align*} \int_\Sigma \Delta^k(x\cdot \nabla v)\Delta^{k+1}u = &\; -\int_\Sigma \Delta^{m/2}(x\cdot \nabla v)\Delta^{m/2}u \\ & + 2k\int_{\partial \Sigma}\Delta^k v (\nu\cdot\nabla \Delta^k u) + \int_{\partial \Sigma} (x\cdot\nabla\Delta^k v)(\nu\cdot\nabla \Delta^k u). \end{align*}
We note that the first boundary term belongs to $\breve l_{m, m}$, while the last one (together with its counterpart with $u$ and $v$ interchanged) yields
$$2\overline l_{m,m}(x, \nabla^m u, \nabla^m v) = (x\cdot\nabla\Delta^k v)(\nu\cdot\nabla \Delta^k u) + (x\cdot\nabla\Delta^k u)(\nu\cdot\nabla \Delta^k v).$$
As $T_m(x, u) = \overline l_{m,m}(x, \nabla^m u, \nabla^m u)$, we are done for $m$ odd. The case for $m$ even is completely similar, so we omit the details.
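The pointwise identity $\Delta^k(x\cdot\nabla v) = 2k\,\Delta^k v + x\cdot\nabla(\Delta^k v)$ used above follows by a direct induction on $k$: first,
\[ \Delta(x\cdot\nabla v) = 2\Delta v + x\cdot\nabla(\Delta v), \]
and then, applying the identity for $k-1$ to $\Delta v$ in place of $v$,
\[ \Delta^{k}(x\cdot\nabla v) = \Delta^{k-1}\big(2\Delta v + x\cdot\nabla(\Delta v)\big) = 2\Delta^{k} v + 2(k-1)\Delta^{k} v + x\cdot\nabla(\Delta^{k} v) = 2k\,\Delta^{k} v + x\cdot\nabla(\Delta^{k} v). \]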
Now we are ready to prove Lemma \ref{lem-Pohozaev}. Using $u$ as a test function in the equation, by \eqref{eq-Identity2}, we obtain
\[ \int_{\Sigma} u f(x,u) = \int_{\Sigma} u (-\Delta)^m u = -\frac 12 \int_{\partial \Sigma} B_m (u, u) + \int_{\Sigma} |\Delta^{m/2} u|^2, \]
namely \eqref{eqPohozaev2} holds. For \eqref{eqPohozaev1}, we use $x \cdot \nabla u$ as a test function. There holds then
\begin{align}\label{eqPohozaev0} \begin{split} \int_{\Sigma} (x \cdot \nabla u) (-\Delta)^m u & = \int_{\Sigma} (x \cdot \nabla u) f(x,u) \\ & = \int_{\Sigma} x\cdot\nabla\big[F(x,u)\big] - \sum_{i=1}^n\int_{\Sigma} x_i \int_0^u \frac{\partial f}{\partial x_i}(x,t) dt\\ & =- n \int_{\Sigma} F(x,u) - \int_{\Sigma} x \cdot \nabla_x F(x,u) + \int_{\partial\Sigma} (x\cdot\nu) F(x, u), \end{split} \end{align}
where we formally denote
$$\nabla_x F(x,u) = \Big(\int_0^u \frac{\partial f}{\partial x_i}(x,t) dt\Big)_{1\leq i \leq n}.$$
Taking $v = u$ in \eqref{eq-Identity1} and combining with \eqref{eqPohozaev0}, we arrive at
\begin{align*} n \int_{\Sigma} F(x,u) + \int_{\Sigma} x \cdot \nabla_x F(x,u) = & \; \frac{n-2m}2 \int_{\Sigma} u (-\Delta)^m u\\ & + \frac 12 \int_{\partial \Sigma} C_m (u, u) + \int_{\partial\Sigma} (x\cdot\nu) F(x, u). \end{align*}
From this and \eqref{eqPohozaev2} we obtain readily \eqref{eqPohozaev1}. This completes the proof. \qed

\begin{thebibliography}{wwwww}
\bibitem[AGQ16]{AGQ16} \textsc{S. Alarc\'on, J. Garc\'ia-Meli\'an, and A. Quaas}, Optimal Liouville theorem for supersolutions of elliptic equations with the Laplacian, \textit{Ann. Sc. Norm. Super. Pisa Cl. Sci.} \textbf{168} (2016) 129--158.
\bibitem[ACD$\dagger$19]{ACDFGW} \textsc{W. Ao, H. Chan, A. DelaTorre, M.A. Fontelos, M. del Mar Gonz\'alez, and J.C. Wei}, On higher-dimensional singularities for the fractional Yamabe problem: A nonlocal Mazzeo--Pacard program, \textit{Duke Math. J.} \textbf{168} (2019) 3297--3411.
\bibitem[BC98]{BC98} \textsc{H. Brezis and X. Cabr\'e}, Some simple nonlinear PDE's without solutions, \textit{Boll. Unione Mat. Ital.} \textbf{1-B} (1998) 223--262.
\bibitem[CAM08]{CAM08} \textsc{G. Caristi, L. D'Ambrosio, and E. Mitidieri}, Representation formulae for solutions to some classes of higher order systems and related {L}iouville theorems, \textit{Milan J. Math.} \textbf{76} (2008) 27--67.
\bibitem[CDQ18]{CDQ18} \textsc{W. Chen, W. Dai, and G. Qin}, Liouville type theorems, a priori estimates and existence of solutions for critical order Hardy--H\'enon equations in $\mathbb R^n$, arXiv:1808.06609v4.
\bibitem[CLi91]{CLi91} \textsc{W. Chen and C. Li}, Classification of solutions of some nonlinear elliptic equations, \textit{Duke Math. J.} \textbf{63} (1991) 615--622.
\bibitem[CLi13]{ChenLi13} \textsc{W. Chen and C. Li}, Super polyharmonic property of solutions for PDE systems and its applications, \textit{Commun. Pure Appl. Anal.} \textbf{12} (2013) 2497--2514.
\bibitem[CL16]{CL16} \textsc{T. Cheng and S. Liu}, A Liouville type theorem for higher order Hardy--H\'enon equation in $\mathbb R^n$, \textit{J. Math. Anal. Appl.} \textbf{444} (2016) 370--389.
\bibitem[DPQ18]{DaiPengQin-Liouville-v4} \textsc{W. Dai, S. Peng, and G. Qin}, Liouville type theorems, a priori estimates and existence of solutions for non-critical higher order Lane--Emden--Hardy equations, arXiv:1808.10771.
\bibitem[DQ19]{DaiQin-Liouville-v6} \textsc{W. Dai and G.
Qin}, Liouville type theorems for fractional and higher order H\'enon--Hardy type equations via the method of scaling spheres, arXiv:1810.02752v7. \bibitem[DQ20]{DQ20} \textsc{W. Dai and G. Qin}, Liouville type theorems for Hardy--H\'enon equations with concave nonlinearities, \textit{Math. Nachr.} \textbf{293} (2020) 1084--1093. \bibitem[DDG11]{DDG11} \textsc{E.N. Dancer, Y. Du, and Z. Guo}, Finite Morse index solutions of an elliptic equation with supercritical exponent, \textit{J. Differential Equations} \textbf{250} (2011) 3281--3310. \bibitem[DN17]{DN17} \textsc{T.V. Duoc and Q.A. Ng\^o}, \textit{Exact growth at infinity for radial solutions to $\Delta^3 u +u^{-q} = 0$ in $\R^3$}, preprint, 2017. \bibitem[FWX15]{FWX15} \textsc{M. Fazly, J.C. Wei, and X. Xu}, A pointwise inequality for the fourth-order {L}ane--{E}mden equation, \textit{Anal. PDE} \textbf{8} (2015) 1541--1563. \bibitem[HS19]{HS19} \textsc{A. Hyder and Y. Sire}, Singular solutions for the constant $Q$-curvature problem, arXiv: 1911.11891 \bibitem[GG06]{GG06} \textsc{F. Gazzola and H.-C. Grunau}, Radial entire solutions for supercritical biharmonic equations, \textit{Math. Ann.} \textbf{334} (2006) 905--936. \bibitem[GGS10]{GGS10} \textsc{F. Gazzola, H.-C. Grunau, and G. Sweers}, Polyharmonic boundary value problems, \textit{Lecture Notes in Mathematics} 1991, Springer-Verlag, Berlin, 2010. \bibitem[GY02]{GeYe} \textsc{Y. Ge and D. Ye}, Monotonicity of radially symmetric supersolutions for polyharmonic-type operators, \textit{Differential Integral Equations} \textbf{15} (2002) 357--366. \bibitem[GS81]{GS81} \textsc{B. Gidas and J. Spruck}, Global and local behavior of positive solutions of nonlinear elliptic equations, \textit{Comm. Pure Appl. Math.} \textbf{34} (1981) 525--598. \bibitem[GM99]{GM99} \textsc{L. Grafakos and C. Morpurgo}, A Selberg integral formula and applications, \textit{Pacific J. Math.} \textbf{191} (1999) 85--94. \bibitem[GHY18]{GHY18} \textsc{Z. Guo, X. Huang, and D. Ye}, Existence and nonexistence results for a weighted elliptic equation in exterior domains, \textit{Z. Angew. Math. Phys.} \textbf{71} (2020) 116. \bibitem[GPY17]{GPY17} \textsc{Y. Guo, S. Peng, and S. Yan}, Local uniqueness and periodicity induced by concentration, \textit{Proc. London Math. Soc.} \textbf{114} (2017) 1005--1043. \bibitem[GW17]{GW17} \textsc{Z. Guo and F. Wan}, Further study of a weighted elliptic equation, \textit{Sci. China Math.} \textbf{60} (2017), 2391--2406. \bibitem[JL14]{JL14} \textsc{E. Jannelli and A. Loiudice}, Critical polyharmonic problems with singular nonlinearities, \textit{Nonlinear Anal.} \textbf{110} (2014) 77--96. \bibitem[Lei13]{Lei13} \textsc{Y. Lei}, Asymptotic properties of positive solutions of the Hardy-Sobolev type equations., \textit{J. Differential Equations} \textbf{254} (2013) 1774--1799. \bibitem[LV16]{LV16} \textsc{C. Li and J. Villavert}, Existence of positive solutions to semilinear elliptic systems with supercritical growth, \textit{Comm. Partial Differential Equations} \textbf{41} (2016) 1029--1039. \bibitem[LL01]{LiebLoss} \textsc{E.H. Lieb and M. Loss}, \textit{Analysis}, Graduate studies in Mathematics, {\bf 14}, American Mathematical Society, Providence, RI, 2001. \bibitem[Lio85]{Lions-1985} \textsc{P.L. Lions}, The concentration-compactness principle in the calculus of variations. The limit case, Part 2, \textit{Rev. Mat. Iberoam.} \textbf{1} (1985) 45--121. \bibitem[LGZ06]{LGZ06b} \textsc{J.Q. Liu, Y. Guo, and Y.J. 
Zhang}, Existence of positive entire solutions for polyharmonic equations and systems, \textit{J. Partial Differential Equations} \textbf{19} (2006) 256--270.
\bibitem[Lin98]{Lin98} \textsc{C.-S. Lin}, A classification of solutions of a conformally invariant fourth order equation in {${\mathbb R}^n$}, \textit{Comment. Math. Helv.} \textbf{73} (1998) 206--231.
\bibitem[MR03]{KR} \textsc{P.J. McKenna and W. Reichel}, Radial solutions of singular nonlinear biharmonic equations and applications to conformal geometry, \textit{Electron. J. Differential Equations} \textbf{37} (2003) 1--13.
\bibitem[MP01]{MP01} \textsc{E. Mitidieri and S.I. Pohozaev}, A priori estimates and the absence of solutions of nonlinear partial differential equations and inequalities, \textit{Tr. Mat. Inst. Steklova} \textbf{234} (2001) 1--384.
\bibitem[Mit18]{Mit18} \textsc{D. Mitrea}, Distributions, Partial Differential Equations, and Harmonic Analysis, Universitext, 2018.
\bibitem[NN$\dagger$18]{NNPY18} \textsc{Q.A. Ng\^o, V.H. Nguyen, Q.H. Phan, and D. Ye}, Exhaustive existence and non-existence results for some prototype polyharmonic equations, arXiv:1802.05956, 2018.
\bibitem[Ni82]{Ni82} \textsc{W.M. Ni}, On the elliptic equation $\Delta u+K(x)u^{(n+2)/(n-2)}=0$, its generalizations, and applications in geometry, \textit{Indiana Univ. Math. J.} \textbf{31} (1982) 493--529.
\bibitem[Ni86]{Ni86} \textsc{W.M. Ni}, Uniqueness, nonuniqueness and related questions of nonlinear elliptic and parabolic equations, \textit{Nonlinear functional analysis and its applications, Part 2}, 229--241, Proc. Sympos. Pure Math., 45, Part 2, Amer. Math. Soc., Providence, RI, 1986.
\bibitem[PS12]{PhanSouplet} \textsc{Q.H. Phan and P. Souplet}, Liouville-type theorems and bounds of solutions of Hardy--H\'enon equations, \textit{J. Differential Equations} \textbf{252} (2012) 2544--2562.
\bibitem[Rab73]{Rab73} \textsc{P. Rabinowitz}, Some aspects of nonlinear eigenvalue problems, \textit{Rocky Mountain J. Math.} \textbf{3} (1973) 161--202.
\bibitem[RZ00]{RZ00} \textsc{W. Reichel and H. Zou}, Non-existence results for semilinear cooperative elliptic systems via moving spheres, \textit{J. Differential Equations} \textbf{161} (2000) 219--243.
\bibitem[Sch00]{Sch00} \textsc{R. Schaaf}, Uniqueness for semilinear elliptic problems: supercritical growth and domain geometry, \textit{Adv. Differential Equations} \textbf{5} (2000) 1201--1220.
\bibitem[SZ96]{SZ96} \textsc{J. Serrin and H. Zou}, Non-existence of positive solutions of Lane--Emden systems, \textit{Differential Integral Equations} \textbf{9} (1996) 635--653.
\bibitem[Ste70]{Stein} \textsc{E.M. Stein}, Singular integrals and differentiability properties of functions, \textit{Princeton Mathematical Series}, No. 30, Princeton University Press, Princeton, N.J., 1970, xiv+290 pp.
\bibitem[Vil14]{Vil14} \textsc{J. Villavert}, Shooting with degree theory: Analysis of some weighted poly-harmonic systems, \textit{J. Differential Equations} \textbf{257} (2014) 1148--1167.
\bibitem[WX99]{WX99} \textsc{J. Wei and X. Xu}, Classification of solutions of higher order conformally invariant equations, \textit{Math. Ann.} \textbf{313} (1999) 207--228.
\bibitem[Xu00]{Xu00} \textsc{X. Xu}, Uniqueness theorem for the entire positive solutions of biharmonic equations in $\mathbb R^n$, \textit{Proc. Roy. Soc. Edinburgh Sect. A} \textbf{130} (2000) 651--670.
\end{thebibliography}
\end{document}
\begin{document}

\begin{center} \baselineskip 0.3in {\bf Klein Paradox for bound states - A puzzling phenomenon} \vspace {0.1in} {Nagalakshmi A. Rao} {\it Department of Physics, Government Science College,} {\it Bangalore-560001, Karnataka, India} {\it [email protected]} {B.A. Kagali} {\it Department of Physics, Bangalore University,} {\it Bangalore-560056, Karnataka, India} {\it [email protected]} \end{center}

\hspace{2.3in} {\bf Abstract}

\indent While the Klein Paradox is often encountered in the context of scattering of relativistic particles at a potential barrier, we presently discuss a puzzling situation that arises with the Klein-Gordon equation for bound states. With the usual minimal coupling procedure of introducing the interaction potential, a paradoxical situation results when the ``hill'' becomes a ``well'', simulating a bound-state-like situation. This puzzling phenomenon for bound states is contrary to the conventional wisdom of quantum mechanics and is analogous to the well-known Klein Paradox, a generic property of relativistic wave equations.

\noindent \hspace{-0.27in} PACS NOs.: 03.65.Ge, 03.65.Pm \indent

\section{\bf \sf \sc Introduction} \setcounter{equation}{0}
\hspace{0.1in}\, In non-relativistic quantum mechanics, the scattering of an electron by a potential barrier is known to be one of the simplest solvable problems. However, a similar problem with a potential step or a barrier in relativistic quantum mechanics often results in paradoxical situations, called the Klein Paradox.

\indent In the original work of Klein{$^1$} [1929], the scattering of electrons incident on a large potential step was addressed. The problem was treated with the Dirac equation, and it was found that for large potentials, $V(x)>E+mc^{2}$, the reflection coefficient, $R_{S}$, exceeds unity while the transmission coefficient, $T_{S}$, becomes negative. This suggested that more particles are reflected by the step than are incident on it. Such a puzzling situation, contradicting non-relativistic expectations, was termed the Klein Paradox.

\indent Several authors{$^{2-6}$} have discussed the Klein Paradox under various circumstances. Considering a potential step with sharp boundaries, Bjorken{$^7$} has illustrated that a weak potential having decaying exponential solutions inside the potential region leads to undamped oscillatory solutions for potentials exceeding $(E+mc^{2})$, consistent with the original version of the Klein Paradox. However, Greiner{$^8$}, on the basis of a group velocity treatment, has illustrated an unexpected largeness of the transmission coefficient. While Bjorken's explanation of the Klein Paradox is essentially based on pair production, Greiner's representation is that of a single-particle interpretation.

\indent Similar results exist for Klein-Gordon particles as well. Guang-Jiong Ni et al.{$^{9}$} have shown that the Klein-Gordon equation with a step potential in minimal coupling exhibits the Klein Paradox at the one-particle level. Rubin Landau{$^{10}$} has given a reasonable explanation of the Klein Paradox based on particle-antiparticle pair production.

\indent In the following section, we discuss the appearance of a paradoxical result in the context of bound states.
{\section{\bf \sf \sc The Klein-Gordon Equation With a Potential}} \setcounter{equation}{0}
\indent The time-independent one-dimensional Klein-Gordon equation for a general potential introduced as a vector field may well be written as
\begin{eqnarray} \left\{ {d^{2}\over dx^{2}}+{\left(E-V\left(x\right)\right)^{2}-m^{2}{c}^{4}\over {c}^{2}{\hbar }^{2}}\right\} \psi (x)=0. \end{eqnarray}
This equation may be cast into the form of a Schr\"odinger equation as
\begin{eqnarray} {d^{2}\psi \over d{x}^{2}}+{2m\over {\hbar }^{2}}\left(E_{\!eff}-{V\!}_{eff}\right)\psi \left(x\right)=0, \end{eqnarray}
with
\begin{eqnarray} E_{\!eff}={E^{2}-{m}^{2}{c}^{4}\over 2m{c}^{2}} \end{eqnarray}
and
\begin{eqnarray} V_{\!\!eff}={2E\ V(x)-V^{2}\left(x\right)\over 2mc^{2}}. \end{eqnarray}
\indent The concept of effective energy and effective potential, used to simulate the properties of relativistic wave equations, leads to paradoxical results. \indent We first review the finite square well problem and then address the case of a potential hill.

\noindent {\large \bf A. Potential Well} \indent For an attractive square well potential defined by
\begin{eqnarray} V\left(x\right)=\left\{ \matrix{-V_{0}\ \ \ {\rm for}\ \ \left|x\right|\leq a\cr \cr \ 0\ \ \ \ {\rm for}\ \ \ \left|x\right|>a,} \right. \end{eqnarray}
the effective potential takes the form
\begin{eqnarray} V_{\!\!eff}\left(x\right)={-2E\ V_{0}\ -\ V_{0}^{2}\over 2mc^{2}}\ \ {\rm for}\ \left|x\right|\leq a \end{eqnarray}
and vanishes elsewhere. Considering positive energy states, the effective potential looks like another square well potential and therefore, so long as $E_{\!eff}<0$ or $E<mc^{2}$, bound states are possible. This result is quite reasonable.

\noindent {\large \bf B. Potential Hill} \indent We now consider a potential barrier defined by
\begin{eqnarray} V\left(x\right)=\left\{ \matrix{+V_{\!0}\ \ \ {\rm for}\ \ \ \left|x\right|\leq a\ \cr \cr 0\ \ \ \ {\rm for}\ \ \ \left|x\right|>a,} \right. \end{eqnarray}
which is not the usual Klein step, but has better-defined boundaries. The effective potential for a `hill' takes the form
\begin{eqnarray} V_{\!\!eff}\left(x\right)={2E\ V_{0}\ -\ V_{0}^{2}\over 2mc^{2}}, \end{eqnarray}
while the effective energy remains the same. It is trivial to check that
\begin{eqnarray} \left(E_{\!eff}-V_{\!\!eff}\right)=\left\{ \matrix{{E^{2}-m^{2}{c}^{4}+V_{0}^{2}-2EV_{0}\over 2mc^{2}} & \ \ \ {\rm for} \ \ \ & \left|x\right|\leq a\cr\cr {E^{2}-{m}^{2}{c}^{4}\over {2mc}^{2}} & \ \ \ {\rm for} \ \ \ & \left|x\right|>a.} \right. \end{eqnarray}
\indent Interestingly, $V_{\!\!eff}$ can be positive, zero or even negative for a range of values of the barrier height $V_{0}$. \indent For $V_{0}<2mc^{2}$, $V_{\!\!eff}$ remains positive, and the problem is analogous to a typical scattering problem. \indent For $V_{0}=2mc^{2}$, the effective potential vanishes and the barrier becomes supercritical. \indent However, a puzzling situation arises for a barrier height exceeding $2mc^{2}$. As is seen from Eqn.(8), the effective potential becomes negative and thus a `hill' is transformed into a `well'. For $V_{0}>2mc^{2}$, particles, instead of being scattered by the potential ``hill'', are trapped inside the simulated ``well''. This means that bound states are possible for strong barriers. Such a paradoxical situation may be called the Klein Paradox for bound states.
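\indent For concreteness, note that the passage from Eqn.(1) to Eqn.(2) rests on the elementary identity
\[ {\left(E-V\right)^{2}-m^{2}c^{4}\over c^{2}\hbar^{2}} = {2m\over\hbar^{2}}\left[{E^{2}-m^{2}c^{4}\over 2mc^{2}} - {2E\,V-V^{2}\over 2mc^{2}}\right] = {2m\over\hbar^{2}}\left(E_{\!eff}-V_{\!\!eff}\right), \]
and the sign change of the effective potential is easily illustrated numerically: taking $E=mc^{2}$ in Eqn.(8), a barrier of height $V_{0}=mc^{2}$ gives $V_{\!\!eff}=+0.5\,mc^{2}$, whereas $V_{0}=3mc^{2}$ gives $V_{\!\!eff}=-1.5\,mc^{2}$, so the repulsive hill indeed acts as an attractive well.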
\indent More importantly, it may be inferred that the potential hill need not be only of the square type for such anomalous bound states. The actual number of bound states and their energies, however, will depend on the shape and size of the hill. These results may be worked out using the standard procedures of quantum mechanics.

\section{\bf \sf \sc Results and Discussion} \setcounter{equation}{0}
\indent While the original version of the Klein Paradox concerns the scattering of Dirac particles at a potential step, we have shown that an analogous paradoxical situation arises even for bound states. With the usual minimal coupling procedure of introducing the interaction, the Klein-Gordon equation leads to bound states for finitely extended potential hills, contrary to the conventional wisdom of quantum mechanics. So long as the effective vector potential remains positive $(V<2mc^{2})$ for repulsive potentials, the situation is similar to a typical scattering problem. As the potential becomes stronger and stronger, exceeding the limiting value $E+mc^{2}$, the `hill' becomes a `well', simulating a bound-state-like situation. Surprisingly, a typical scattering problem is transformed into a bound state problem. Thus {\it the Klein Paradox is retrieved for bound states in the case of a strong, repulsive, finite-ranged barrier}.

\indent Interestingly, a paradoxical situation like this does not arise for pure scalar repulsive potentials. In such a case, a barrier is transformed into another barrier, no matter how strong or weak the potential is. The paradoxical result for bound states illustrates that the Klein-Gordon equation is reasonable even at the one-particle level.

\noindent {\large \bf Acknowledgements} One of the authors (N.A. Rao) is grateful to the Director of Collegiate Education in Karnataka, Prof. K.V. Kodandaramaiah, for his support. Thanks are extended to Prof. T. Gangadaraiah, Principal, Govt. Science College, for his encouragement.

\baselineskip 0.2in \hspace{2.3in} {\Large \bf References}
\begin {enumerate}
\item O. Klein, {\it Z. Physik}, {\bf 53} (1929) 157-165.
\item Mark J. Thomson and Bruce H.J. McKellar, {\it The solution of the Dirac equation for a high square barrier}, Am. J. Phys. {\bf 59}(4) (1991) 340-346.
\item Francisco Dominguez-Adame, {\it A relativistic interaction without Klein Paradox}, Phys. Lett. {\bf A 162} (1992) 18-20.
\item Holstein B.R., {\it Klein Paradox}, Am. J. Phys. {\bf 69} (1999) 507-512.
\item Calogeracos A. and N. Dombey, {\it Klein tunneling and Klein Paradox}, Int. J. Mod. Phys. {\bf A 14} (4) (1999) 631-643.
\item Antonio S. de Castro, {\it $(n+1)$ dimensional Dirac equation and the Klein Paradox}, Am. J. Phys. {\bf 69}(10) (2001) 1111-1112.
\item James D. Bjorken and Sidney D. Drell, {\it Relativistic Quantum Mechanics}, McGraw-Hill Book Company, NY, (1964) pp 40-43.
\item Greiner, Walter, {\it Relativistic Quantum Mechanics -- Wave Equations}, Springer-Verlag, (1995).
\item Guang-Jiong Ni, Weimin Zhou and Jun Yan, {\it Klein Paradox and antiparticle}
\item Rubin H. Landau, {\it Quantum Mechanics II: A Second Course in Quantum Theory}, A Wiley Interscience Publication, New York, 1996.
\end{enumerate}

\end{document}
\begin{document}
\title{Combining Slow and Fast: Complementary Filtering for Dynamics Learning}
\begin{abstract} Modeling an unknown dynamical system is crucial in order to predict the future behavior of the system. A standard approach is training recurrent models on measurement data. While these models typically provide exact short-term predictions, accumulating errors yield deteriorated long-term behavior. In contrast, models with reliable long-term predictions can often be obtained, either by training a robust but less detailed model, or by leveraging physics-based simulations. In both cases, inaccuracies in the models yield a lack of short-term details. Thus, different models with contrastive properties on different time horizons are available. This observation immediately raises the question: \emph{Can we obtain predictions that combine the best of both worlds?} Inspired by sensor fusion tasks, we interpret the problem in the frequency domain and leverage classical methods from signal processing, in particular complementary filters. This filtering technique combines two signals by applying a high-pass filter to one signal, and low-pass filtering the other. Essentially, the high-pass filter extracts high frequencies, whereas the low-pass filter extracts low frequencies. Applying this concept to dynamics model learning enables the construction of models that yield accurate long- and short-term predictions. Here, we propose two methods, one being purely learning-based and the other being a hybrid model that requires an additional physics-based simulator. \end{abstract}
\section{Introduction} \label{section:intro}
Many physical processes $\left(x_n\right)_{n=0}^N$ with $x_n \in \mathbb{R}^{D_x}$ can be described via a discrete-time dynamical system
\begin{equation}\label{eq:dyn} \begin{aligned} x_{n+1}= f(x_n). \end{aligned} \end{equation}
Typically, it is not possible to measure the whole state space of the system \eqref{eq:dyn}; instead, a function of the states corrupted by noise, $\hat{y}_n$, can for example be measured by sensors,
\begin{equation}\label{eq:obs} \begin{aligned} y_n &= g(x_n) = Cx_n, \\ \hat{y}_n &= y_n+\epsilon_n, \textrm{ with } \epsilon_n \sim \mathcal{N}(0,\sigma^2) \end{aligned} \end{equation}
and $C \in \mathbb{R}^{D_y \times D_x}$. Our general interest is to make accurate predictions for the observable components $y_n$ in Eq. \eqref{eq:obs}. One possible way to address this problem is training a recurrent model on the noisy measurements $\hat{y}_n$ in Eq. \eqref{eq:obs}. Learning-based methods are often able to accurately reflect the system's behavior and therefore produce accurate short-term predictions. However, the errors accumulate over time, leading to deteriorated long-term behavior \citep{DBLP:conf/iclr/ZhouLXHH018}. To obtain reliable prediction behavior on each time scale, we propose to decompose the problem into two components. In particular, we aim to combine two separate models, where one component reliably predicts the long-term behavior, while the other adds short-term details, thus combining the strengths of each component. Interpreted in the frequency domain, one model tackles the low-frequency components while the other tackles the high-frequency parts. Combining high- and low-frequency information from different signals or models is well known from control engineering and signal processing tasks. One typical example is tilt estimation in robotics, where accelerometer and gyroscope data are often available simultaneously \citep{5509756, 9834094}.
On one hand, the gyroscope provides position estimates that are precise in the short term, but, due to integration in each time step, accumulating errors cause a drift in the long term. On the other hand, the accelerometer-based position estimates are long-term stable, but considerably noisy and thus not reliable in the short term. Interpreted in the frequency domain, the gyroscope is more reliable on high frequencies, whereas the accelerometer is more reliable on low frequencies. Therefore, a high-pass filter is applied to the gyroscope measurements, whereas a low-pass filter is applied to the accelerometer measurements. Both filtered components are subsequently combined in a new complementary filtered signal that is able to approximate the actual position more accurately. Here, we adopt the concept of complementary filter pairs to our task to fuse models with contrastive properties. In general, a complementary filter pair consists of a high-pass filter $H$ and a low-pass filter $L$, where the filters map signals to signals. Depending on the specific filter, certain frequencies are eliminated while others pass. Intuitively, the joint information of both filters in a complementary filter pair covers the whole frequency domain. Thus, the key concept that we leverage here is the decomposition of a signal $y=(y_n)_{n=0}^N$ into a high-pass filter component $H(y)$ and a low-pass filter component $L(y)$ via
\begin{equation}\label{eq:concept} y=H(y)+L(y). \end{equation}
Based on the decomposition, we propose to address $H(y)$ and $L(y)$ by different models that are reliable on their specific time scale. \emph{In particular, we propose two methods, one being purely learning-based and one being a hybrid method that leverages an additional physics-based simulation.} Both concepts are visualized in Figure \ref{fig:scheme}. In the purely learning-based scenario, we train separate networks that represent $H(y)$ and $L(y)$ in Eq. \eqref{eq:concept}. In order to obtain a low-frequency model that indeed provides accurate long-term predictions, we apply a downsampling technique to the training data, thus reducing the number of integration steps. During inference, the predictions are upsampled up to the original sampling rate. Applying the low-pass filter allows lossless downsampling of the signal depending on the downsampling ratio. In the hybrid scenario, only a single model is trained. Hybrid modeling addresses the problem of producing predictions by mixing different models that are either learning-based or obtained from first principles, e.g. physics \citep{yin2021augmenting,Suhartono_2017}. Here, we consider the case where access to predictions $y^{\text s}$ for the system \eqref{eq:dyn} is provided by a physics-based simulator. Additional insights, such as access to the simulator's latent space or differentiability, are not given. While physics-based approaches are typically robust and provide reliable long-term behavior, incomplete knowledge of the underlying physics leads to short-term errors in the model. Hence, we consider the case where $L(y^{\text s}) \approx L(y)$ holds. By training a model for $H(y)$, the decomposition \eqref{eq:concept} becomes a hybrid model that combines the strengths of both components. The filter pair $(L,H)$ is integrated into the training process, ensuring that the long-term behavior is indeed solely addressed by the simulator. In both scenarios, the learning-based and the hybrid, recurrent neural networks (RNNs) are trained on whole trajectories.
In summary, the main contributions of this paper are:
\begin{compactitem}
\item By leveraging complementary filters, we propose a new view on dynamics model learning;
\item we propose a purely learning-based and a hybrid method that decompose the learning problem into a long-term and a short-term component; and
\item we show that this decomposition allows for training models that provide accurate long- and short-term predictions.
\end{compactitem}
\begin{figure*} \caption{A high-level overview of our methods. Purely learning-based scheme (left): a training signal is filtered into complementary components. The low-pass filtered signal is downsampled. Two separate RNNs are trained on the decomposed signal. Hybrid model (right): The predictions of simulator and RNN are fed into the complementary filter. The resulting signal is trained end-to-end on the noisy observations by minimizing the root mean-squared error (RMSE). This structure is also applied to obtain predictions from the model.} \label{fig:scheme} \end{figure*}
\section{Related work} \label{section:rel}
In this section, we give an overview of related literature. Several works point out parallels between classical signal-theoretic concepts and neural network architectures. In particular, connections to finite-impulse response (FIR) and infinite-impulse response (IIR) filters have been drawn. The relations between these filters and feedforward models have been investigated in \citet{6795541}. Precisely, they construct different feedforward architectures by building synapses from different filters. Depending on the specific type, a locally recurrent but globally feedforward structure can be obtained. These models are revisited in \citet{Campolucci} by introducing a novel backpropagation technique. More recently, feedforward Sequential Memory Networks were introduced, which can be interpreted as FIR filters \citep{zhang2016feedforward}. Relations between fully recurrent models and filters have been drawn as well. The hidden structure of many recurrent networks can be identified with classical filters. \citet{Kuznetsov2020DIFFERENTIABLEIF} point out the relation between Elman networks and filters and introduce trainable IIR structures that are applied to sound signals in their experiments. Precisely, an Elman network can be interpreted as a simple first-order IIR filter. In \citet{oliva2017statistical}, long-term dependencies are modeled via a moving average in the hidden units. Moving averages can again be interpreted as special FIR filters. \citet{Stepleton2018LowpassRN} recover long-term dependencies via a hidden structure of memory pools that consist of first-order IIR filters. However, none of these works leverages complementary filters in order to capture effects on multiple time scales. Additionally, none of these approaches addresses hybrid dynamics models. \citet{doi:10.1177/0142331218755234, CERTIC2011419, Milic} combine learning techniques, in particular gradient descent, with complementary filters. However, they consider the automatic adaptation of the filter parameters. In contrast, we leverage complementary filters for learning, in particular dynamics learning. Filters manipulate signals in the frequency domain and thus address spectral properties. In \citet{Kutz2016DynamicMD} and \citet{Lange2021FromFT}, a signal is identified via spectral methods that are transformed into a linear model. Koopman theory is then leveraged to lift the system to the nonlinear space again.
However, in our work, we use filters in order to separate the predictions on different time horizons. Thus, in contrast to these works, our methods can be combined with different (recurrent) architectures and therefore allow for computing predictions via state-of-the-art techniques. Combining physics-based simulators with learning-based models is an emerging trend. Hybrid models produce predictions by taking both models into account. Typically, the simulator is extended or parts of the simulator are replaced. There is a vast literature that deals with hybrid models for dynamical systems or time-series data. A traditional approach is learning the errors or residuals between simulator predictions and data \citep{Forssell97combiningsemi-physical, Suhartono_2017}. Another common approach in hybrid modeling is extending a physics-based dynamics model with neural ODEs \citep{yin2021augmenting, qian2021integrating}. However, in contrast to our approach, these hybrid architectures do not explicitly exploit characteristics of the simulator, in particular the long-term behavior. \citet{10.1785/0120170293} construct a hybrid model for the prediction of seismic behavior. Similar to our setting, they consider the case where a physics-based simulation provides reliable predictions for low frequencies, while lacking a reliable model for high frequencies. However, the approach differs significantly from ours since a neural network is trained on a mapping from low to high frequencies. Furthermore, they do not consider dynamics models. Therefore, it is unclear how to apply the approach to our problem setting.
\section{Background} \label{section:background}
In this section, we provide the necessary background on signal processing and filtering. For a more detailed introduction, we refer the reader to \citet{oppenheim1999discrete}.
\subsection{Motivation}
Filters are linear time-invariant systems that aim to extract specific frequency components from a signal. Standard types are high-pass and low-pass filters. Low-pass filters extract low frequencies and attenuate high frequencies, whereas high-pass filters extract high frequencies and attenuate low frequencies. Frequencies that are allowed to pass are determined by a desired cutoff frequency. Further, additional specifications play a principal role in filter design, such as pass- and stop-band fluctuations and the width of the transition band \citep{oppenheim1999discrete}. Technically, a filter $F$ is a mapping in the time domain $F: l^\infty \to l^\infty: y \mapsto Z^{-1}(\mathcal F \cdot Z(y))$, where $\mathcal F$ is the so-called transfer function in the frequency domain, $l^\infty$ is the signal space of bounded sequences, and $Z$ is the well-known z-transform, which maps a signal to a function of a complex variable; the filter acts by pointwise multiplication with $\mathcal F$. Hence, a filter is obtained by designing a transfer function $\mathcal F$ in the frequency domain. For the type of filters considered here, the structure of $\mathcal F$ allows one to directly compute $F(y)$ via a recurrence equation in the time domain (see the appendix for more details). A typical application of filters is, for example, the denoising of signals. Noise adds a high-frequency component to the signal and can therefore be tackled by applying a low-pass filter.
\subsection{IIR filters}
Typical filter types are finite-impulse response (FIR) and infinite-impulse response (IIR) \citep{oppenheim1999discrete}. Here, we consider IIR filters. In contrast to FIR filters, IIR filters possess internal feedback.
Filtering a signal $y$ via an IIR-filter yields a recurrence equation for the filtered signal $\tilde{y}=(\tilde{y}_n)_{n=0}^N$ given by \begin{equation}\label{eq:rearange} \tilde y_n=\frac{1}{a_0}\left(\sum_{k=1}^P a_k \tilde{y}_{n-k}+ \sum_{k=0}^P b_k y_{n-k}\right), \end{equation} where $P$ describes the filter order. The filter coefficients $a_k$ and $b_k$ are obtained from filter design with respect to the desired properties in the frequency domain. A detailed derivation is given in the appendix. There are different strategies to initialize the first $P$ values $\tilde{y}_0,\dots,\tilde{y}_{P-1}$ \citep{initialize, 492552}. \subsection{Complementary filter pairs}\label{sec:complementary} A complementary filter pair consists of a high-pass filter transfer function $\mathcal{H}$ and a low-pass filter transfer function $\mathcal{L}$ \citep{4101411}, chosen in a way that they cover the whole frequency domain, thus \begin{equation}\label{eq:decomposition} y \approx L(y)+H(y) \end{equation} for any signal $y \in l^{\infty}$. Applying the complementary filter pair to two different signals $y^{\text{h}}$ and $y^{\text{l}}$ via $\tilde{y}=L(y^{\text{l}})+H(y^{\text{h}})$ directly yields a recurrence equation for the complementary filtered signal $\tilde{y}$ given by \begin{equation} \label{eq:IIR} \tilde{y}_n = \frac{1}{a_0} \left(\sum_{k=1}^P a_k \tilde{y}_{n-k}+\sum_{k=0}^P b_k y^{\text{h}}_{n-k}+\sum_{k=0}^{P} \tilde{b}_k y^{\text{l}}_{n-k}\right), \end{equation} where $a_k,b_k$ describe the high-pass filter parameters and $a_k,\tilde{b}_k$ describe the low-pass filter parameters. To obtain a joint recurrence equation, the filters are forced to share the parameters $a_k$. However, this can be done without loss of generality. \paragraph{Perfect complement} \label{section:perfect} There are different strategies to express the decomposition \eqref{eq:decomposition} mathematically. One way is to construct the perfect complement in the frequency domain such that $\mathcal{H}+\mathcal{L}=1$ \citep{s21061937}. Applying the perfect complementary filter to two identical signals $y^{\text{h}}=y^{\text{l}}$ results in the same signal as output. For the IIR complementary filter \eqref{eq:IIR} this holds if $\tilde{b}_k=a_k-b_k$. A detailed derivation is moved to the appendix. However, depending on the desired behavior of the filters, perfectly complementary filters are not always favorable. Different approaches have been investigated in \citet{VAIDYANATHAN, inproceedings}. \section{Method} \label{section:method} We present two methods that leverage the idea of complementary filters for dynamics model learning in order to produce accurate short- and long-term predictions. Our first approach is applicable to general dynamics model learning, whereas our second approach is a hybrid modeling technique. In the second case, access to trajectory data produced by a physics-based simulator is required. The key ingredient of both models is a complementary filter pair $(H,L)$ with parameters $a_k,b_k,\tilde{a}_k$ and $\tilde b_k$ (cf. Sec. \ref{sec:complementary}). While in the hybrid case reliable long-term predictions are already provided by the simulator, the long-term predictions have to be addressed by an additional model in the purely learning-based scenario. \subsection{Recurrent dynamics model learning} First, we give an overview of the recurrent dynamics model learning structure that serves as a backbone for our method. 
Here, we consider a recurrent multilayer perceptron (MLP) and a gated recurrent unit (GRU) model \citep{cho-etal-2014-learning}. However, the method is not restricted to that choice and could be combined with other recurrent architectures such as those of \citet{HochSchm97, pmlr-v80-doerr18a}. Consider a trainable neural network transition function $f_{\theta}:\mathbb{R}^{D_h} \times \mathbb{R}^{D_y} \rightarrow \mathbb{R}^{D_h}$ and a linear observation model $C_{\theta} \in \mathbb{R}^{D_y \times D_h}$. Here, $\theta$ defines the trainable parameters and $h$ the latent states with corresponding latent dimension $D_h$. Predictions are computed via
\begin{equation}\label{eq:RNN} \begin{aligned} h_{n+1} & = f_{\theta}(h_n,y_n) \\ y_n & = C_{\theta}h_n, \end{aligned} \end{equation}
where the initial hidden state $h_0$ can be obtained from the past trajectory by training a recognition model similar to \citet{pmlr-v80-doerr18a} or by performing a warmup phase. Details are provided in the appendix. The mapping $F_{\theta}:\mathbb{R}^{D_h} \times \mathbb{R}^{D_y} \times \mathbb{N} \rightarrow \mathbb{R}^{D_y \times N}$ that computes an $N$-step rollout via Eq. \eqref{eq:RNN} reads
\begin{equation} \label{eq:trajectory} F_{\theta}(h_0,y_0,N)=y_{0:N}, \end{equation}
where $y_{0:N} \in \mathbb{R}^{D_y \times N}$ defines a trajectory with $N$ steps.
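To make the rollout mapping concrete, the following minimal sketch (in plain NumPy; \texttt{f\_theta} and \texttt{C\_theta} stand for the trained transition function and observation matrix and are assumed given) unrolls Eq. \eqref{eq:RNN} for $N$ steps:
\begin{verbatim}
import numpy as np

def rollout(f_theta, C_theta, h0, y0, N):
    # Unrolls h_{n+1} = f_theta(h_n, y_n), y_n = C_theta h_n for N steps.
    h, y, ys = h0, y0, []
    for _ in range(N):
        h = f_theta(h, y)    # latent transition
        y = C_theta @ h      # linear observation model
        ys.append(y)
    return np.stack(ys, axis=-1)   # trajectory of shape (D_y, N)
\end{verbatim}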
\subsection{Purely learning-based model}
Next, we dive into the details of constructing complementary filter-based learning schemes and introduce our methods. In the purely learning-based scenario, two different models are trained, wherein one model addresses the high-frequency parts and the other addresses the low-frequency parts (see Figure \ref{fig:scheme}, left). To this end, the training signal $\hat{y}$ is decomposed into a high-frequency component $H(\hat{y})$ and a low-frequency component $L(\hat{y})$ via the complementary filter pair (cf. Sec. \ref{sec:complementary}). The models are trained separately on the decomposition. In order to obtain a model that indeed provides stable long-term behavior, the low-frequency training data is downsampled. During inference, the predicted signal is upsampled again. Downsampling yields a model that performs fewer integration steps and thus produces less error accumulation. As an additional advantage, backpropagation through fewer integration steps is computationally more efficient. Applying the low-pass filter allows lossless downsampling up to a specific ratio that is determined by the Nyquist frequency. Intuitively, only high-frequency information is removed, which is addressed by the second network during training and inference. Details are provided in the appendix. Splitting the training signal and training the models separately ensures that one model indeed addresses the low-frequency part of the signal and thus the long-term behavior. End-to-end training, on the other hand, might yield deteriorated long-term behavior since it generally allows a single network to tackle both short- and long-term behavior.
\paragraph{Up and Downsampling: } The downsampling operation $d_k:\mathbb{R}^{D_y \times N} \rightarrow \mathbb{R}^{D_y \times \lfloor N/k \rfloor}$ maps a signal to a lower resolution by considering every $k^{th}$ step of the signal via
\begin{equation} \label{eq:downsampling} d_k(y_{0:N})=(y_0,y_k,\dots,y_{k \lfloor N/k \rfloor}). \end{equation}
The reverse upsampling operation $u_k:\mathbb{R}^{D_y \times N} \rightarrow \mathbb{R}^{D_y \times kN}$ maps a signal to a higher resolution by filling in the missing data without adding high-frequency artifacts to the signal. Mathematically, this corresponds to an interpolation problem \citep{oppenheim1999discrete}. Here, we consider lossless downsampling, where tolerable downsampling ratios are determined by the cutoff frequency of the low-pass filter.
\paragraph{Training: } Consider training data $\hat{y}_{0:N} \in \mathbb{R}^{D_y \times N}$, from which the first $R<N$ steps $\hat{y}_{0:R} \in \mathbb{R}^{D_y \times R}$ are used to obtain an appropriate initial hidden state. We consider trainable models $f_{\theta}^\text{h}, C_{\theta}^\text{h}, f_{\nu}^\text{l}, C_{\nu}^\text{l}$ with corresponding rollout mappings $F_{\theta}^\text{h}$ and $F_{\nu}^\text{l}$ (cf. Eq. \eqref{eq:trajectory}), an up/downsampling ratio $k$ and a complementary filter pair $(L,H)$ (cf. Sec. \ref{sec:complementary}). The weights $\theta$ and $\nu$ are trained by minimizing the root-mean-squared error (RMSE) $\Vert y-\hat{y} \Vert_2$ via
\begin{equation} \label{eq:rec_training} \begin{aligned} \hat{\theta} & = \arg\min_{\theta} \Vert H(\hat{y})_{R:N} - F^\text{h}_{\theta}(\hat{y}_R^\text{h},h_R^\text{h},N-R) \Vert_2\\ \hat{\nu} & = \arg\min_{\nu} \Vert d_k(L(\hat{y})_{R:N})-F^\text{l}_{\nu}(\hat{y}_R^\text{l},h_R^\text{l},\tilde{N})\Vert_2, \end{aligned} \end{equation}
with $\hat{y}_R^\text{h}=H(\hat{y})_R$, $\hat{y}_R^\text{l}=L(\hat{y})_R$, the $N-R$ steps $H(\hat{y})_{R:N}$ and $L(\hat{y})_{R:N}$ from the filtered signals $H(\hat{y})$ and $L(\hat{y})$, and $\tilde{N}=\lfloor (N-R)/k \rfloor$. The hidden states $h_R^\text{h}$ and $h_R^\text{l}$ are obtained from a warmup phase that we specify in the appendix.
\paragraph{Predictions: } A prediction with $N^{\prime}-R$ steps $\tilde y_{R:N^{\prime}}$ is obtained by adding the high-frequency predictions and the upsampled low-frequency predictions
\begin{equation} \label{eq:rec_predictions} \tilde{y}_{R:N^{\prime}}=F^\text{h}_{\theta}(\tilde{y}_R^\text{h},h_R^\text{h},N^{\prime}-R)+u_k(F_{\nu}^\text{l}(\tilde{y}_R^\text{l},h_R^\text{l},\tilde{N})), \end{equation}
where $\tilde{y}_R^\text{h}=H(\tilde{y}_{0:R})$ and $\tilde{y}_R^\text{l}=L(\tilde{y}_{0:R})$ have to be provided and $\tilde{N}=\lfloor (N^{\prime}-R)/k \rfloor$. The hidden states $h_R^\text{h}$ and $h_R^\text{l}$ can, for example, be obtained from a short warmup phase. A slight modification of the method is obtained by wrapping an additional high-pass filter around the predictions, i.e., using $H(F^\text{h}_{\theta}(\hat{y}_R^\text{h},h_R^\text{h},N-R))$ in Eq. \eqref{eq:rec_training} during training and $H(F^\text{h}_{\theta}(\tilde{y}_R^\text{h},h_R^\text{h},N^{\prime}-R))$ in Eq. \eqref{eq:rec_predictions} during predictions. This adds an additional guarantee preventing the high-frequency model from producing low-frequency errors. We provide a numerical comparison of both variants in our experiments.
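As an illustration of the purely learning-based pipeline, the following sketch decomposes a training signal and recombines predictions using SciPy. It is shown for the perfect-complement variant of Sec. \ref{sec:complementary}, i.e. $H(y)=y-L(y)$; the sampling rate \texttt{fs}, cutoff frequency \texttt{f\_cut} and ratio \texttt{k} are assumed given, and the recurrent models themselves are omitted:
\begin{verbatim}
from scipy import signal

def split_and_downsample(y_hat, fs, f_cut, k, order=4):
    # Complementary decomposition of a (D_y, N) training signal; the
    # low-frequency part is additionally downsampled by the ratio k.
    b_lp, a = signal.butter(order, f_cut, btype="low", fs=fs)
    y_low = signal.lfilter(b_lp, a, y_hat, axis=-1)
    y_high = y_hat - y_low          # perfect complement: H(y) = y - L(y)
    y_low_ds = y_low[..., ::k]      # lossless as long as f_cut < fs / (2k)
    return y_high, y_low_ds

def combine_predictions(y_high_pred, y_low_pred_ds, k, n_steps):
    # Upsample the low-frequency prediction and add the high-frequency one.
    y_low_up = signal.resample_poly(y_low_pred_ds, up=k, down=1, axis=-1)
    return y_high_pred[..., :n_steps] + y_low_up[..., :n_steps]
\end{verbatim}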
Training directly with the complementary filter ensures that each model indeed stays on its time scale. By decoupling the propagation of latent states and the filtered simulator states, the method is technically applicable to a large class of simulators. It is solely required that the simulator is able to produce time-series predictions of the system given initial conditions. Neither differentiating through the simulator nor any insight into the simulator's hidden states is required. \paragraph{Training and predictions: } Consider training data $\hat{y}_{0:N} \in \mathbb{R}^{D_y \times N}$, a trainable model $f_{\theta}, C_{\theta}$ with corresponding rollout mapping $F_{\theta}$ (cf. Eq. \eqref{eq:trajectory}) and a complementary filter pair $(L,H)$. Again, the first $R$ steps $\hat y_{0:R}$ are used for providing the initial hidden state $h_R$. The weights $\theta$ are trained via \begin{equation}\label{eq:loss} \hat{\theta} = \arg \min_{\theta} \Vert H(y^{\text{r}}) + L(y^\text{s}_{R:N}) - \hat{y}_{R:N} \Vert_2, \end{equation} with $y^{\text{r}}=F_{\theta}(\hat{y}_R,h_R,N-R)$. The term $H(y^{\text{r}}) + L(y^\text{s}_{R:N})$ can be computed directly via Eq. \eqref{eq:IIR}. \subsection{Filter design} We design the filters $H$ and $L$ (cf. Eq. \eqref{eq:rec_training} and \eqref{eq:loss}) before training. In the purely learning-based scenario, a broad range of cutoff frequencies is possible, which we demonstrate empirically in the appendix. In the hybrid case, we aim to use as much correct long-term information as possible from the simulator without including its short-term errors. In general, suitable cutoff frequencies can often be derived from domain knowledge. Here, we analyze the frequency spectra of ground truth and simulator in order to find a suitable cutoff frequency. For a specific filter design, we test the plausibility of the complementary filter by applying the high-pass component to the measurements and the low-pass component to the simulator. Calculating the RMSE between the combined signal and ground truth indicates whether the filters are appropriate. For a more detailed introduction to general filter design, we refer the reader to \citet{oppenheim1999discrete}. \begin{figure*} \caption{Demonstrating the method with the double-mass spring system (i). Shown is the analysis of the frequency spectrum with marked cutoff frequency (left). This yields the predictions of the two separate GRUs (middle). The accumulated RMSE over time indicates good short- and long-term behavior, while the baseline method accumulates errors (right).} \label{fig:mass} \end{figure*} \begin{figure*} \caption{Rollouts for the purely learning-based scenario with all three methods for Systems (i) (left), (ii) (middle) and (iv) (right). The training horizon is marked with dotted lines. The results show accumulating errors of the baseline method in contrast to our approach.} \label{fig:learning_based} \end{figure*} \begin{figure*} \caption{Rollouts for the hybrid setting. Shown are the results for the Van-der-Pol oscillator (v) with RNN (left) and rollouts of the single components before combining them via the complementary filter (middle). Further, the rollouts of the drill-string system (vi) with RNN are shown (right). The training horizon is marked with dotted lines.
The results demonstrate accumulating errors in the baseline methods, while our approach provides accurate short and long-term predictions.} \label{fig:hybrid} \end{figure*} \section{Experiments} \label{section:exp} In this section, we demonstrate that our complementary filter-based methods yield accurate long and short-term predictions on simulated and real world data. In the hybrid setting, we consider additional access to a physics-based simulation that is able to predict the long-term behavior of the system but is not capable of accommodating all short-term details due to e.g., modeling simplifications. \subsection{Baselines} We consider four systems. For each system, we have access to measurement data. Either real measurements are available, or we simulate trajectories from the ground truth system and corrupt them with noise. We consider the following baselines. \textbf{RNN: } RNN structure that corresponds to an MLP that is propagated through time. \textbf{GRU: } state-of-the-art recurrent architecture for time-series learning \citep{cho-etal-2014-learning}. \textbf{Simulator: } in the hybrid setting, access to simulator predictions $y^\text{s}$ is required. \textbf{Residual GRU/ RNN: } in the hybrid case, we consider a residual model that combines RNN or GRU predictions $y^\text{r}$ with simulator predictions $y^\text{s}$ via $y=y^\text{r}+y^\text{s}$. \subsection{Constructing the filters} We use the tools for IIR filter design provided by \texttt{Scipy} \citep{2020SciPy-NMeth} and apply Butterworth filters. We construct the coefficients $b_k$ and $a_k$ for the low-pass filter and coefficients $\tilde{b}_k$ and $\tilde{a}_k$ for the high-pass filter as described in Sec. \ref{section:background}, where both filters share the cutoff frequency. An example of a frequency spectrum and choice of the cutoff frequency is shown in Figure \ref{fig:mass} (left). In the appendix, we add information on the specific design of the complementary filter pairs for each experiment. Further, we add frequency spectra for each system. \subsection{Learning task and comparison} For each system, we observe a single trajectory. The models are trained on a fixed subtrajectory of the full trajectory. Predictions are performed by computing a rollout of the model over the full trajectory. We evaluate the model accuracy by computing the RMSE along the full trajectory. On the simulated systems, the RMSE between predictions and ground truth is computed. On real world data, the RMSE between predictions and measurements is computed. Runtimes are reported in the appendix. \subsection{Purely learning-based model: } We apply the strategy derived in Sec. \ref{section:method} to GRU models (referred to by “split GRU“) and compare to a single GRU model trained on the entire bandwidth. In order to draw a fair comparison, we choose an equal number of total hidden units for the baseline GRU and the sum of hidden states in our approach. We provide architecture details in the appendix. Furthermore, we optionally wrap an additional high-pass filter around the predictions $F^h_{\theta}(\hat{y}_R^h,h_R^h,N-R)$ during training and inference (cf. Eq. \eqref{eq:rec_training} and \eqref{eq:rec_predictions}), and denote this by the suffix "+HP". In order to demonstrate the flexibility of our method, we add results with varying cutoff frequencies and downsampling ratios in the appendix. 
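Before listing the systems, we sketch how such a complementary Butterworth pair can be constructed with \texttt{Scipy}; the sampling rate, cutoff frequency and filter order below are illustrative assumptions rather than the values used in our experiments (those are listed in the appendix).
\begin{verbatim}
# Minimal sketch of the complementary filter construction (illustrative values).
import numpy as np
from scipy import signal

fs = 100.0     # sampling frequency in Hz (assumption)
f_cut = 2.0    # shared cutoff frequency in Hz (assumption)
order = 4      # filter order (assumption)

# Low-pass coefficients (b_k, a_k) and high-pass coefficients at the same cutoff.
b_lp, a_lp = signal.butter(order, f_cut, btype='low', fs=fs)
b_hp, a_hp = signal.butter(order, f_cut, btype='high', fs=fs)

def decompose(y):
    """Split a 1-D signal into low- and high-frequency components L(y) and H(y)."""
    return signal.lfilter(b_lp, a_lp, y), signal.lfilter(b_hp, a_hp, y)

# Example: a two-tone signal, loosely in the spirit of System (i).
t = np.arange(0.0, 10.0, 1.0 / fs)
y = np.sin(2 * np.pi * 0.5 * t) + 0.3 * np.sin(2 * np.pi * 8.0 * t)
y_low, y_high = decompose(y)
\end{verbatim}
Applying \texttt{decompose} to the training signal yields the two components on which the split models are trained; the low-frequency component can then be downsampled as described in Sec. \ref{section:method}.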
We train our model on the following systems: \textbf{(i) Double-mass spring system: } We simulate a double-mass spring system, whose response consists of two sinusoidal waves with different frequencies, and corrupt the simulation with additional observation noise. Training is performed on an interval of 250 steps, while predictions are computed on 1000 steps (further details can be found in the appendix). \textbf{(ii) - (iv) Double torsion pendulum: } In the second set of experiments, we consider real measurements from the double-torsion pendulum system introduced in \citet{Lisowski}. Data are obtained by exciting the system with different inputs. In particular, we consider four different excitations with varying frequencies. Training is performed on the first 600 measurements, while predictions are performed on a 2000-step interval. \subsection{Hybrid model} For the hybrid model, we train our complementary filtering method with GRU and RNN and compare against the corresponding non-hybrid models (GRU and RNN), the corresponding residual models (residual GRU/RNN), and the simulator. We consider the following systems: \textbf{(v) Van-der-Pol oscillator: } Data from a Van-der-Pol oscillator with external force is simulated from the four-dimensional ground truth system \citep{Cartwright}. It is assumed that only the first dimension, corresponding to the position, is observed. Simulator data are obtained from an unforced Van-der-Pol oscillator. For the corresponding equations, we refer to the appendix. \textbf{(vi) Drill-string: } We consider measurement data from the drill-string experiment provided in Figure 14 of \citet{AARSNES2018712} as training data and the corresponding simulated signal as simulator. \subsection{Results} The results indicate the advantage of leveraging complementary filters for dynamics model learning. In particular, the resulting predictions show stable short- and long-term behavior, while especially the GRU and RNN baselines tend to drift in the long term due to accumulating errors. For both scenarios, we provide additional plots showing the accumulated RMSE over time for each system in the appendix. \begin{table*}[!htpb] \centering \begin{tabular}{rccc} \hline \hline System & GRU & split GRU (ours) & split GRU + HP (ours) \\ \hline (i) & 0.587 (0.002) & \textbf{0.127} (0.008) & 0.168 (0.03) \\ (ii) & 1.124 (0.485) & 0.331 (0.065) & \textbf{0.318} (0.089) \\ (iii) & 0.287 (0.15) & 0.159 (0.051) & \textbf{0.13} (0.02) \\ (iv) & 0.262 (0.17) & \textbf{0.201} (0.07) & 0.18 (0.06) \\ \hline \end{tabular}\quad \caption{Total RMSEs (mean (std)) over 5 independent runs with the purely learning-based scheme.} \label{t:errors} \end{table*} \paragraph{Purely learning-based} The results in Table \ref{t:errors} indicate the advantage of our approach, as the baseline method suffers from accumulating errors. Integrating a small model error in each time step leads to a long-term drift that can also be directly observed in the rollouts (cf. Figure \ref{fig:learning_based}). Our approach, on the other hand, does not suffer from this drift due to the specific architecture and therefore outperforms the baseline method on every task. The findings are also supported by the RMSE over time $(e_n)_{n=0}^N$ with $e_n= \sqrt{\frac{1}{n+1}\sum_{k=0}^n \Vert y_k-\hat{y}_k\Vert^2}$ shown in Figure \ref{fig:mass} (right). In some cases, our method yields faster convergence than the baseline method.
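For reference, the accumulated error $(e_n)_{n=0}^N$ defined above and shown in Figure \ref{fig:mass} (right) can be computed in a few lines; the sketch assumes arrays following the $\mathbb{R}^{D_y \times N}$ convention used above, and the helper name is ours.
\begin{verbatim}
# Accumulated RMSE over time:
# e_n = sqrt( (1/(n+1)) * sum_{k<=n} ||y_k - y_hat_k||^2 ).
import numpy as np

def accumulated_rmse(y_pred, y_true):
    # y_pred, y_true: arrays of shape (D_y, N)
    sq = np.sum((y_pred - y_true) ** 2, axis=0)  # ||y_k - y_hat_k||^2 for each k
    running_mean = np.cumsum(sq) / np.arange(1, sq.shape[0] + 1)
    return np.sqrt(running_mean)                 # e_0, e_1, ..., e_{N-1}
\end{verbatim}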
For System (i) we report the results after 300 training epochs for our method, while the GRU was trained for 2000 epochs. To provide more insights, we demonstrate the functionality of our method with the double-mass spring system (i) (cf. Figure \ref{fig:mass}). Designing the filters shown in Figure \ref{fig:mass} (left) yields separate predictions from the two GRUs in Figure \ref{fig:mass} (middle). The similar results of our split GRU and our split GRU+HP indicate that the most effective part is already contained in the split GRU (cf. Table \ref{t:errors}). Here, the high-frequency model already stays on the desired time scale and the additional high-pass filter rather introduces a small distortion. Further, our split GRU+HP shows a higher error in the beginning due to transient behavior of the filter, which can be seen in Figure \ref{fig:mass} (right). However, the additional high-pass filter guarantees that the high-frequency predictions indeed affect the correct time scale. \begin{table*}[h!] \centering \begin{tabular}{rcccc} \hline \hline System & RNN & residual RNN & simulator & filtered RNN (ours) \\ \hline (v) & 1.29 (0.63) & 0.417 (0.03) & 0.418 & \textbf{0.347} (0.041) \\ (vi) & 1.1 (1.26) & 3.60 (1.62) & 0.729 & \textbf{0.487} (0.381) \\ \hline \end{tabular}\quad \caption{Total RMSEs for the hybrid model with RNN (mean (std)) over 5 independent runs.} \label{t:MLP} \end{table*} \begin{table*}[h!] \centering \begin{tabular}{rcccc} \hline \hline System & GRU & residual GRU & simulator & filtered GRU (ours) \\ \hline (v) & 0.463 (0.305) & 0.476 (0.096) & 0.418 & \textbf{0.387} (0.026) \\ (vi) & 1.140 (0.258) & \textbf{0.681} (0.055) & 0.729 & 0.765 (0.008) \\ \hline \end{tabular}\quad \caption{Total RMSEs for the hybrid model with GRU (mean (std)) over 5 independent runs.} \label{t:GRU} \end{table*} \paragraph{Hybrid model} We report the RMSEs for the hybrid setting with RNNs in Table \ref{t:MLP} and with GRUs in Table \ref{t:GRU}. The results demonstrate that our method is beneficial for different types of models, here MLP-based RNNs and GRUs. Again, the standard training with a single GRU or a single RNN shows some drift, causing poor long-term behavior. The unstable long-term behavior is demonstrated particularly clearly by the RNN results shown in Figure \ref{fig:hybrid} (left and right). While the residual RNN baseline does not suffer from the typical drift that is observed for the RNN baseline, it still shows instabilities in the long-term behavior. In particular, the results for System (vi) in Figure \ref{fig:hybrid} (right) demonstrate that low-frequency errors occur for the residual model as well. Our method, in contrast, eliminates these errors by design. However, on System (vi), our filtered GRU is outperformed by the residual GRU since our predictions stay close to the simulator predictions. We provide additional insights into our method by depicting the RNN and simulator predictions before combining them via the complementary filter for System (v) in Figure \ref{fig:hybrid} (middle). Additional plots are provided in the appendix. \section{Conclusion} \label{section:conclusion} In this paper, we propose to combine complementary filtering with dynamics model learning. In particular, we fuse the predictions of different models, where one model provides reliable long-term predictions and the other reliable short-term predictions.
Leveraging the concept of complementary filter pairs yields a model that combines the best of both worlds. Based on this idea, we propose a purely learning-based model and a hybrid model. In the hybrid scenario, the long-term predictions are addressed by a simulator, whereas in the purely learning-based scenario an additional model has to be trained. The experimental results demonstrate that our approach yields predictions with accurate long- and short-term behavior. An interesting topic for future research is an extension of the hybrid scenario that learns the relationship between simulator predictions and learning-based predictions. \end{document}
\begin{document} \title{Generating entanglement with low Q-factor microcavities} \author{A.B.~Young}\email{[email protected]} \affiliation{Merchant Venturers School of Engineering, Woodland Road Bristol, BS8 1UB} \author{C.Y. Hu}\affiliation{Merchant Venturers School of Engineering, Woodland Road Bristol, BS8 1UB} \author{J.G.~Rarity}\affiliation{Merchant Venturers School of Engineering, Woodland Road Bristol, BS8 1UB} \date{\today} \begin{abstract} We propose a method of generating entanglement using single photons and electron spins in the regime of resonance scattering. The technique involves matching the spontaneous emission rate of the spin dipole transition in bulk dielectric to the modified rate of spontaneous emission of the dipole coupled to the fundamental mode of an optical microcavity. We call this regime resonance scattering, where interference between the input photons and those scattered by the resonantly coupled dipole transition results in a reflectivity of zero. The contrast between this and the unit reflectivity when the cavity is empty allows us to perform a non-demolition measurement of the spin and to non-deterministically generate entanglement between photons and spins. The chief advantage of working in the regime of resonance scattering is that the required cavity quality factors are orders of magnitude lower than those required for strong coupling or Purcell enhancement. This makes engineering a suitable cavity much easier, particularly in materials such as diamond where etching high-quality-factor cavities remains a significant challenge. \end{abstract} \maketitle Entanglement is a fundamental resource for quantum information tasks, and generating entanglement between different qubit systems such as photons and single electron spins has been shown to be a key to building quantum repeaters, universal gates\cite{PhysRevLett.92.127902, Yao:2004uq,waks:153601,Yao:2005fk, Barrett:2005kx, PhysRevLett.104.160503, PhysRevB.78.085307, PhysRevB.80.205326, PhysRevB.78.125318}, and eventually large-scale quantum computers\cite{PhysRevA.78.032318}. These previous proposals for generating entanglement using a deterministic spin-photon interface have focussed on having the optical transitions of a spin system strongly coupled to an optical microcavity, or at least deep into the Purcell regime\cite{PhysRevB.78.085307,PhysRevB.78.125318,PhysRevB.80.205326}. Recent measurements in high quality-factor micropillars have suggested that it is hard to fulfil the requirement of strong coupling whilst maintaining the necessary input-output coupling efficiency\cite{Young:2011uq}. In order to work around this, we propose a non-deterministic spin-photon interface that works in the low Q-factor regime, where efficient in/out coupling of photons should be possible. The scheme works by operating in a regime of resonance scattering where the decay constants for the optical dipole transitions in bulk dielectric are matched to the decay parameters when resonantly coupled to an optical microcavity. \begin{figure} \caption{Schematic diagram of a single-sided cavity coupled to a dipole.
$e$ and $g$ represent the excited and ground states of the dipole transition, $\kappa$ represents the coupling rate via the input/output mirror, and $\kappa_{s}$ the decay rate into loss modes.} \label{schematic} \end{figure} If we consider the single-sided dipole-cavity system in Fig.\ref{schematic}, then the system can be parameterised by four constants: $\kappa$, the decay rate for intra-cavity photons via the input/output mirror (outcoupling); $\kappa_{s}$, the decay rate for intra-cavity photons into loss modes, which can include losses out of the side of the cavity, transmission and absorption; $g$, the dipole-cavity field coupling rate; and $\gamma$, the linewidth of the dipole transition. We may now express the photon reflectivity when incident on the input/output mirror as\cite{PhysRevB.78.085307}: \begin{eqnarray}\label{eqn:ref} &&r(\omega)=|r(\omega)|e^{i\phi}\\ &=&1-\frac{\kappa(i(\omega_{d}-\omega)+\frac{\gamma}{2})}{(i(\omega_{d}-\omega)+\frac{\gamma}{2})(i(\omega_{c}-\omega)+\frac{\kappa}{2}+\frac{\kappa_{s}}{2})+g^{2}}\nonumber \end{eqnarray} \noindent where $\omega_{d}$ and $\omega_{c}$ are the frequencies of the QD and cavity, and $\omega$ is the frequency of the incident photons. If we match the linewidth of the dipole transition in bulk dielectric ($\gamma$) to the modified spontaneous emission rate in the cavity ($4g^{2}/\kappa$)\cite{PhysRevB.60.13276}, then any input photons resonant with the dipole-cavity system are scattered into lossy modes. This is due to a destructive interference between the input light and the light that is scattered from the dipole. The reflectivity of an empty cavity, and of a cavity resonantly coupled to a dipole ($\omega_{d}=\omega_{c}$), can be seen in Fig.\ref{fig:qdrscat}. Here we consider a lossless single-sided cavity ($\kappa_{s}=0$), and have set $g^{2}=\gamma\kappa/4$ (the condition for resonance scattering). \begin{figure}\label{fig:qdrscat} \end{figure} We can see that when the dipole transition is resonantly coupled to the cavity there is a dip in the reflectivity spectrum ($r_{d}$) that goes to zero at zero detuning ($\omega_{c}=\omega_{d}=\omega$). This dip is a result of resonance scattering and has the linewidth of the dipole transition ($\gamma$), which we have set to be $\gamma=0.1\kappa$ as an upper limit; $\gamma$ is typically $\ll0.1\kappa$ for most atom-cavity \cite{tu-prl-75-4710} and quantum dot-cavity \cite{nat-432-7014, reitzenstein:251109} experiments. For the case when the cavity is empty ($r_{c}$), all of the input light is reflected. The result is a large intensity contrast between the case of a cavity resonantly coupled to a dipole and an empty cavity. If, instead of a single dipole transition, we couple a spin system to a cavity in the resonance scattering regime, and the two dipole transitions corresponding to the $\uparrow$ and $\downarrow$ states are distinguishable in some way (energy or polarisation), we can perform a quantum non-demolition measurement of the spin\cite{Young:2009fk}. From this QND measurement it is possible to generate entanglement non-deterministically between spins and photons. We will now move on to consider some specific spin-dipole systems to outline the benefits of generating this non-deterministic entanglement in the resonance scattering regime. \section{Charged quantum dot in a pillar microcavity} We consider the example of a charged quantum dot where the optical transitions for orthogonal spin states couple to orthogonal circular polarisation states of light.
By coupling to a pillar microcavity an incident photon would obey the following set of transformations on reflection: \begin{eqnarray} \ket{R}\otimes\ket{\uparrow}&\rightarrow& r_{d}\ket{R}\ket{\uparrow}\\ \ket{R}\otimes\ket{\downarrow}&\rightarrow& r_{c}\ket{R}\ket{\downarrow}\\ \ket{L}\otimes\ket{\uparrow}&\rightarrow& r_{c}\ket{L}\ket{\uparrow}\\ \ket{L}\otimes\ket{\downarrow}&\rightarrow& r_{d}\ket{L}\ket{\downarrow} \end{eqnarray} \noindent Here if the input photon has right circular polarisation $\ket{R}$, and the spin is in the state $\ket{\uparrow}$ the photon sees a dipole-coupled cavity system and has a reflectivity given by $r_{d}$. Conversely if the spin is in the state $\ket{\downarrow}$ then the input photon sees an empty cavity and has a reflectivity given by $r_{c}$. If the input photon has left-circular polarisation $\ket{L}$ then it has the opposite interaction with the spin. In the case when the electron spin of the charged QD is in a equal superposition of spin up and spin down, and two linearly polarised (horizontal) photons are sequentially reflected from the QD-cavity then the output state will be: \begin{eqnarray} \nonumber &&\frac{1}{\sqrt{8}}[(\ket{R}_{1}+\ket{L}_{1})\otimes(\ket{R}_{2}+\ket{L}_{2})\otimes(\ket{\uparrow}+\ket{\downarrow})]\\\nonumber &&=\frac{1}{\sqrt{8}}(r_{c}^{2}\ket{R}_{1}\ket{R}_{2}+r_{d}^{2}\ket{L}_{1}\ket{L}_{2}+\\\nonumber &&r_{c}r_{d}\ket{R}_{1}\ket{L}_{2}+r_{d}r_{c}\ket{L}_{1}\ket{R}_{2})\ket{\uparrow}\\\label{eq:pent} &+&\frac{1}{\sqrt{8}}(r_{c}^{2}\ket{L}_{1}\ket{L}_{2}+r_{d}^{2}\ket{R}_{1}\ket{R}_{2}\\\nonumber &&+r_{c}r_{d}\ket{R}_{1}\ket{L}_{2}+r_{d}r_{c}\ket{L}_{1}\ket{R}_{2})\ket{\downarrow} \end{eqnarray} \noindent Now after a Hadamard pulse ($\pi/2$) on the electron spin we have the state: \begin{eqnarray}\nonumber &&\ket{\psi_{out}}=\frac{1}{\sqrt{8}}[(r_{c}^{2}+r_{d}^{2})(\ket{R}_{1}\ket{R}_{2}+\ket{L}_{1}\ket{L}_{2})\\ &&+2r_{c}r_{d}(\ket{R}_{1}\ket{L}_{2}+\ket{L}_{1}\ket{R}_{2})]\ket{\uparrow}\\\nonumber +&&\frac{1}{\sqrt{8}}[(r_{c}^{2}-r_{d}^{2})\ket{R}_{1}\ket{R}_{2}+(r_{d}^{2}-r_{c}^{2})\ket{L}_{1}\ket{L}_{2}]\ket{\downarrow} \end{eqnarray} \noindent From Fig.\ref{fig:qdrscat}, we can see the terms that are proportional to $r_{d}$ will disappear, and $r_{c}=1$. If the electron spin is then measured to be "up" ($\uparrow$) with either a third photon or using the single shot readout technique outlined in previous work\cite{Young:2009fk} then the two photon state will become: \begin{equation} \ket{\psi_{out}}=\frac{1}{\sqrt{8}}(\ket{R}_{1}\ket{R}_{2}+\ket{L}_{1}\ket{L}_{2}) \end{equation} \noindent which is the $\ket{\psi^{+}}$ Bell state. Alternatively if the spin is measured to be down ($\downarrow$) we will project the two photons into the state: \begin{equation} \ket{\psi_{out}}=\frac{1}{\sqrt{8}}(\ket{R}_{1}\ket{R}_{2}-\ket{L}_{1}\ket{L}_{2}) \end{equation} \noindent which is the $\ket{\psi^{-}}$ Bell state. Thus we have generated entangled states with unit fidelity except there is a reduced efficiency of $1/4$. In order to generate larger entangled states then we simply need to reflect more photons from the system however the efficiency scales as $1/2^{n}$, which would make the scheme intractable for entangling large numbers of photons (n). \begin{figure}\label{spinentangler} \end{figure} There is an analogous procedure for entangling many spins where photons can be reflected from more than one charged-QD cavity system. 
Consider the case as in Fig.\ref{spinentangler} where the photon is sequentially reflected from two charged QD-cavity coupled devices operating in the resonance scattering regime. The joint state of the two spins and the photon at the output will be \begin{eqnarray} \nonumber \ket{\psi_{out}}=&&\frac{1}{\sqrt{8}}[(\ket{\uparrow}_{1}+\ket{\downarrow}_{1})\otimes(\ket{\uparrow}_{2}+\ket{\downarrow}_{2})\otimes(\ket{R}+\ket{L})]\\\nonumber &&=\frac{1}{\sqrt{8}}(r_{c_{1}}r_{c_{2}}\ket{\uparrow}_{1}\ket{\uparrow}_{2}+r_{d_{1}}r_{d_{2}}\ket{\downarrow}_{1}\ket{\downarrow}_{2}\\\nonumber &&+r_{c_{1}}r_{d_{2}}\ket{\uparrow}_{1}\ket{\downarrow}_{2}+r_{d_{1}}r_{c_{2}}\ket{\downarrow}_{1}\ket{\uparrow}_{2})\ket{R}\\\label{eq:entresscat} +&&\frac{1}{\sqrt{8}}(r_{c_{1}}r_{c_{2}}\ket{\downarrow}_{1}\ket{\downarrow}_{2}+r_{d_{1}}r_{d_{2}}\ket{\uparrow}_{1}\ket{\uparrow}_{2}\\\nonumber &&+r_{c_{1}}r_{d_{2}}\ket{\uparrow}_{1}\ket{\downarrow}_{2}+r_{d_{1}}r_{c_{2}}\ket{\downarrow}_{1}\ket{\uparrow}_{2})\ket{L} \end{eqnarray} \noindent where $r_{c_{1}}$ and $r_{c_{2}}$ represent the empty-cavity reflectivities of the first and second cavities respectively, and $r_{d_{1}}$ and $r_{d_{2}}$ represent the reflectivities of the dipole-coupled cavity systems in the resonant scattering regime for the first and second cavities respectively. Assuming $r_{c_{1}}=r_{c_{2}}=1$ and $r_{d_{1}}=r_{d_{2}}=0$, if a Hadamard is performed on the photon (i.e. using a polarising beam splitter), then upon detection of a horizontally polarised photon, the spins are projected into the state: \begin{equation} \ket{\psi_{out}}=\frac{1}{\sqrt{8}}(\ket{\uparrow}_{1}\ket{\uparrow}_{2}+\ket{\downarrow}_{1}\ket{\downarrow}_{2}) \end{equation} \noindent which is the $\ket{\psi^{+}}$ Bell state. Alternatively, upon detection of a vertically polarised photon, the spins are projected into the state: \begin{equation} \ket{\psi_{out}}=\frac{1}{\sqrt{8}}(\ket{\uparrow}_{1}\ket{\uparrow}_{2}-\ket{\downarrow}_{1}\ket{\downarrow}_{2}) \end{equation} \noindent which is the $\ket{\psi^{-}}$ Bell state. This is identical to the photonic entanglement generated above and again has an efficiency of $1/4$ associated with photon loss. The benefit of using this technique to entangle spins is that the spin entanglement is heralded upon detection of a photon; thus it is possible to use many photons and keep reflecting them until one is detected. \subsection{Entanglement in lossy cavities} So far, to outline this procedure, we have assumed a perfect cavity where all the photons escape through the input-output mode ($r_{c}=1$) or are lost through the resonant scattering process. However, to make the ideas presented more realistic, we must consider cavity imperfections that introduce losses. We must thus include $\kappa_{s}$ in our calculation of the reflectivity. In Fig.\ref{resscatF}.a we can see the reflectivity plotted against the ratio of the input-output coupling rate to the loss rate ($\kappa/\kappa_{s}$), where the charged QD and cavity are resonantly coupled and the probe photons are resonant with both ($\omega_{d}=\omega_{c}=\omega$). In this plot the Q-factor of the cavity remains constant, i.e. the total decay rate is not changed ($(\kappa+\kappa_{s})/\kappa_{T}=1$). We have set $g=\sqrt{\kappa_{T}\gamma/4}$, and set $\gamma=0.1\kappa$. \begin{figure}\label{resscatF} \end{figure} \noindent Let us first consider the case of an empty cavity, given by the black line $r_{c}$ in Fig.\ref{resscatF}.a.
Here we can see that in the regime where $\kappa>>\kappa_{s}$, the reflectivity at zero detuning ($\omega_{c}=\omega$) is $\approx1$. As $\kappa_{s}$ is increased, the reflectivity on resonance drops, corresponding to more light being lost from the cavity, until the point when $\kappa_{s}=\kappa$, at which the reflectivity on resonance drops to $r_{c}=0$; this corresponds to the cavity resonantly transmitting light into lossy modes. As $\kappa_{s}$ is increased further, the coupling into the cavity becomes poorer, until, in the regime where $\kappa_{s}>>\kappa$, the coupling via the input-output mode is negligible and the cavity behaves as a conventional mirror. For a charged quantum dot where the dipole transitions are resonantly coupled to a cavity (red dashed line), in the regime where $\kappa>>\kappa_{s}$ we have $r_{d}=0$. This is what we expect to observe for a resonantly coupled charged QD-cavity in the resonance scattering regime, where input photons destructively interfere with scattered photons, and all of the light is lost to non-cavity modes. As $\kappa_{s}$ is increased, an extra damping term is added; as a result, the destructive interference is no longer perfect and some light is reflected. As $\kappa_{s}$ is increased further it begins to dominate, the interference becomes constructive, and the reflectivity of a dipole-coupled cavity system ($r_{d}$) becomes greater than that of an empty cavity ($r_{c}$). In the limit $\kappa_{s}>>\kappa$, no light enters the cavity, thus no light is scattered by the dipole transition and we have $r_{d}=r_{c}$. The effect of losses on the fidelity is that the terms proportional to $r_{d}$ in Eqn.\ref{eq:entresscat} no longer disappear, and the $\ket{\psi^{+}}$ entangled state is no longer prepared with unit fidelity, but instead with a reduced fidelity given by: \begin{equation} F_{\psi^{+}}=\frac{1}{\sqrt{1+\frac{4(r_{d}r_{c})^{2}}{(r_{d}^{2}+r_{c}^{2})^{2}}}} \end{equation} \noindent for the case when we wish to entangle two photons with one spin, with an efficiency $\eta_{\psi^{+}}$ given by: \begin{equation}\label{eq:resscateff} \eta_{\psi^{+}}=\frac{(r_{d}^{2}+r_{c}^{2})^{2}}{4} \end{equation} \noindent For the case when we wish to entangle two spins with one photon, we have to slightly modify these equations so that the fidelity is now: \begin{equation} F_{\psi^{+}}=\frac{1}{\sqrt{1+\frac{2(r_{d_{1}}r_{c_{2}})^{2}+2(r_{c_{1}}r_{d_{2}})^{2}}{(r_{d_{1}}r_{d_{2}}+r_{c_{1}}r_{c_{2}})^{2}}}} \end{equation} \noindent where the efficiency is now: \begin{equation}\label{eq:resscateff2} \eta_{\psi^{+}}=\frac{(r_{d_{1}}r_{d_{2}}+r_{c_{1}}r_{c_{2}})^{2}}{4} \end{equation} \noindent Note that the fidelity is not influenced by the two charged-QD cavity systems having unequal values of $r_{c}$ and $r_{d}$, but only by the intensity contrast at each individual dipole-cavity system. This means that the two systems need not be identical, a great advantage when it comes to the fabrication of such structures. The preparation of the $\ket{\psi^{-}}$ state is not affected by changes in $r_{d}$ and $r_{c}$ and always has $F=1$, but has an efficiency given by: \begin{equation} \eta_{\psi^{-}}=\frac{(r_{d}^{2}-r_{c}^{2})^{2}}{4} \end{equation} In Fig.\ref{resscatF}.b we can see a corresponding plot of how the fidelity and efficiency are affected by changing the ratio $\kappa/\kappa_{s}$. Note that we have maintained an overall $\kappa_{T}=\mathrm{const}$, thus the Q-factor is constant.
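The curves in Fig.\ref{resscatF} can be reproduced directly from Eqn.\ref{eqn:ref} together with the fidelity and efficiency expressions above. The short Python sketch below evaluates them at zero detuning; the normalisation $\kappa_{T}=1$ and the choice $\gamma=0.1\kappa_{T}$ are illustrative assumptions in the spirit of the parameters quoted earlier, not a prescription.
\begin{verbatim}
# Reflectivities at zero detuning, and the resulting psi+ fidelity/efficiency
# (identical cavities assumed), as kappa/kappa_s is varied at fixed kappa_T.
import numpy as np

kappa_T = 1.0                       # fixed total linewidth (constant Q); assumption
gamma = 0.1 * kappa_T               # dipole linewidth; assumption
g = np.sqrt(kappa_T * gamma / 4.0)  # resonance-scattering condition

def reflectivity(kappa, kappa_s, g, gamma, detuning=0.0):
    """Reflectivity formula from the text with omega_d = omega_c."""
    d = 1j * detuning + gamma / 2.0
    return 1.0 - kappa * d / (d * (1j * detuning + (kappa + kappa_s) / 2.0) + g ** 2)

for ratio in [0.1, 1.0, 13.0, 100.0]:
    kappa_s = kappa_T / (1.0 + ratio)
    kappa = kappa_T - kappa_s
    r_c = abs(reflectivity(kappa, kappa_s, 0.0, gamma))  # empty cavity
    r_d = abs(reflectivity(kappa, kappa_s, g, gamma))    # resonantly coupled dipole
    F = 1.0 / np.sqrt(1.0 + 4.0 * (r_d * r_c) ** 2 / (r_d ** 2 + r_c ** 2) ** 2)
    eta = (r_d ** 2 + r_c ** 2) ** 2 / 4.0
    print(f"kappa/kappa_s={ratio:6.1f}  r_c={r_c:.3f}  r_d={r_d:.3f}  "
          f"F={F:.3f}  eta={eta:.3f}")
\end{verbatim}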
At the point where $r_{d}=r_{c}$ ($\kappa\approx 2\kappa_{s}$) there is a minimum in the fidelity for the preparation of the $\ket{\psi^{+}}$ state, as at this point the cross terms proportional to $\ket{R}_{1}\ket{L}_{2}+\ket{L}_{1}\ket{R}_{2}$ in Eqn.\ref{eq:pent} are a maximum. When $\kappa=\kappa_{s}$ the reflectivity for an empty cavity is zero ($r_{c}=0$), therefore there is a peak in the fidelity and $F=1$; however, since $r_{d}\approx0.5$, the efficiency is low ($\eta\approx0.016$). As we move into the region where $\kappa<\kappa_{s}$, both $r_{d}$ and $r_{c}$ increase and the efficiency increases; $r_{c}$ increases faster than $r_{d}$, until the limit $\kappa<<\kappa_{s}$ where $r_{d}=r_{c}=1$ and $\eta=1$. However, in this regime there is a minimum in fidelity ($F=1/\sqrt{2}$) for the preparation of the $\ket{\psi^{+}}$ state, again due to the two reflectivities being equal. Note that the fidelity for the preparation of the $\ket{\psi^{-}}$ state remains $F=1$; however, in a cavity with a large ratio of losses to input-output coupling, the efficiency $\eta_{\psi^{-}}$ drops to zero. In order to achieve entanglement with the highest possible efficiency and fidelity for both the $\ket{\psi^{+}}$ and $\ket{\psi^{-}}$ states it is necessary to have $\kappa>>\kappa_{s}$. This requirement at first sight is no different to the requirement for the deterministic spin-photon interface outlined in previous work\cite{PhysRevB.78.085307,PhysRevB.78.125318,PhysRevB.80.205326,waks:153601}. So seemingly the non-deterministic scheme outlined offers no advantage; however, the required cavity Q-factors are significantly lower. In order to see some of the benefits of entanglement generation using resonance scattering it is necessary to consider in more detail some experimental parameters. We consider some of the state-of-the-art QD pillar-microcavity experiments performed by Reithmaier et al.\ (2004)\cite{nat-432-7014}. Here they showed strong coupling of a QD to a pillar microcavity where the QD-cavity coupling rate was $g=80\mu$eV, the cavity linewidth was $\kappa_{T}=180\mu$eV (Q=7350), and the QD linewidth was $\gamma<10\mu$eV at low temperature. If we now assume the maximum value for the QD linewidth ($\gamma=10\mu$eV), then the required cavity linewidth in order to fulfil the requirement for resonance scattering is $\kappa_{T}=2.56$meV (Q=517). This is a significantly smaller value than would be required for a deterministic spin-photon interface in previous work\cite{PhysRevLett.92.127902, PhysRevB.78.085307}, where we would require $g>\kappa_{T}+\gamma$, meaning $\kappa_{T}=70\mu$eV (Q=18900). With the reduced Q-factor required to generate entanglement with resonance scattering comes a secondary crucial benefit. The state-of-the-art micropillars used in the experiment above, and most high-Q micropillars, are limited by losses. Small-diameter, high-Q micropillars have significant sidewall scattering and operate in the regime where $\kappa<\kappa_{s}$. Assuming the linewidth of the pillar is entirely defined by losses out of the side, $\kappa_{T}=\kappa_{s}=180\mu$eV. The Q-factor can then be reduced by removing, or growing fewer, DBR mirror pairs. This will increase $\kappa$ whilst $\kappa_{s}$ should remain constant. Reducing the Q-factor in such a way that $Q=517$ would result in $\kappa=2.38$meV and $\kappa_{s}=180\mu$eV, and thus $\kappa/\kappa_{s}\approx13$.
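For reference, these figures follow directly from the numbers quoted above; the cavity resonance energy $\hbar\omega_{c}\approx 7350\times180\,\mu\mathrm{eV}\approx1.32$ eV is inferred here from the quoted Q-factor and linewidth rather than stated explicitly in \cite{nat-432-7014}: \begin{eqnarray*} \kappa_{T}&=&\frac{4g^{2}}{\gamma}=\frac{4\times(80\,\mu\mathrm{eV})^{2}}{10\,\mu\mathrm{eV}}=2.56\,\mathrm{meV},\qquad Q=\frac{\hbar\omega_{c}}{\kappa_{T}}\approx\frac{1.32\,\mathrm{eV}}{2.56\,\mathrm{meV}}\approx517,\\ \kappa&=&\kappa_{T}-\kappa_{s}=2.56\,\mathrm{meV}-0.18\,\mathrm{meV}=2.38\,\mathrm{meV},\qquad \kappa/\kappa_{s}\approx13. \end{eqnarray*}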
So by reducing the Q-factor we simultaneously increase the input-output coupling rate and move into a regime where the losses out of the side of the pillar become negligible. This means that we can entangle two spins or two photons using charged QDs coupled to such cavities, with fidelity $F>99\%$ in both the $\ket{\psi^{+}}$ and $\ket{\psi^{-}}$ states, with an efficiency $\eta=0.14$. We have already discussed that this scheme is best employed when used to herald entanglement between many spins. Assuming perfect detection, it would be necessary to send in $\approx10$ photons to ensure one was detected, heralding the entanglement of two spins. The photons would have to be separated by a time greater than the spontaneous emission lifetime of the QD ($\approx1$ns), so it would take approximately $10$ns to entangle two spins. Pairs of spins could be entangled in parallel, and then entanglement could be generated between pairs by repeating the process between single spins from each pair. Hence a linear cluster of $N$ spins could be entangled in $\approx 20$ns, well within the $\mu$s coherence time of a charged QD spin. By parallelising the entanglement procedure we compensate for the non-deterministic nature of generating entanglement using resonance scattering, at the expense of the complexity of the photon source required to perform the experiment. The advantage of the non-deterministic scheme for generating entanglement is that the required Q-factor is clearly low. A knock-on effect is that low-Q micropillars naturally have good input-output coupling efficiency, and it is easy to achieve $\kappa>>\kappa_{s}$. To realise the spin-photon interface in the strong coupling regime requires high-Q-factor, low-loss pillars, which are much more challenging to fabricate. Further, the low Q-factor means the spectral width of the cavity is large compared to the linewidth of the dipole transitions $\gamma$. This means that charged QDs in different micropillar samples have a larger range over which they can be tuned and still be resonantly coupled to the microcavity, meaning it will be easier to realise the situation where both dipole transitions are at the same wavelength. Finally, the low Q-factor will lessen the effects of any ellipticity or mode splitting in the cavity. Since the linewidth of the $E_{x}$ and $E_{y}$ modes will be large, any mode splitting as a result of fabrication error would be small in comparison. The downside to operating in the regime of resonance scattering is that the charged QD-pillar system has to be engineered so that $g^2=\kappa_{T}\gamma/4$. Since the position of self-assembled QDs is random, fulfilling this requirement will be difficult, and may require the growth of site-controlled QDs with pillars etched out of the wafer around them. This is not a problem for the spin-photon interface in the strong coupling regime, where the coupling rate $g$ just has to be above the threshold $g>\kappa_{T}+\gamma$ but does not need to have a specific value. Hence operating in the resonance scattering regime changes the nature of the engineering problem. It is easy to achieve a low-loss micropillar, but it will be difficult to precisely control the structure to meet the condition for resonance scattering. One possible system that would lend itself to this sort of technique could be toroidal or microsphere cavities, where the Q-factor can effectively be tuned by changing the distance between the cavity and an evanescently coupled tapered fiber.
It remains to be seen if the realisation of the structures required for this non-deterministic entanglement scheme will be any easier than that of the structures required for the deterministic spin-photon interface in the strong coupling regime. \section{Application to the NV center in diamond} \begin{figure} \caption{{\bf a.} NV-center level scheme showing the transitions (1) and (2) referred to in the text. {\bf b.} Schematic of the scheme for entangling two NV-center spins: photons 1 and 2 are reflected from two NV-cavity systems and interfered on a 50:50 beamsplitter (see text).} \label{nvscheme} \end{figure} The entanglement protocol outlined here for charged QD spins could be applied to other spin systems, for example the NV center in a photonic crystal\cite{Young:2009fk}. Here, distinguishing between the two spin states can be achieved with frequency instead of polarisation. If photons are passed through an electro-optical modulator, they can be placed in a superposition of two distinct frequencies $A$ and $B$. Frequency $A$ can then be tuned to be resonant with the $^{3}A_{(m=0)} \rightarrow {}^{3}E$ transition (transition (1) in Fig.\ref{nvscheme}.a), and frequency $B$ resonant with the $^{3}A_{(m=\pm1)} \rightarrow {}^{3}E$ transition (transition (2) in Fig.\ref{nvscheme}.a). Since the linewidth of the zero-phonon line at low temperature is of order MHz\cite{0953-8984-18-21-S08}, there will be two dips in the reflectivity as a result of resonance scattering, corresponding to the $m=0$ and $m=\pm1$ spin states, each of order MHz wide and split by $\approx 2.88$GHz. The distinguishability of these two dips allows us to perform a quantum non-demolition measurement of the spin\cite{Young:2009fk}, and to generate entanglement using precisely the same protocol as outlined for the case of a charged quantum dot, using photons in a superposition of frequency instead of polarisation. Recent results\cite{Togan:2010fk} have also shown that the $m=\pm1$ spin states can be used as a qubit, and orthogonal circular polarisations of light then couple the ground states to an excited state $A_{2}$. In this instance the resonant scattering protocol outlined for the charged QD could be directly applied to an NV center coupled to an appropriate optical microcavity. An alternative method of generating entanglement in this regime, which is perhaps simpler for the case of the NV center, is to only use photons with frequency $\omega_{m=0}$ that are resonant with transition 1 in Fig.\ref{nvscheme}.a. In Fig.\ref{nvscheme}.b we can see a schematic diagram of how this could work. We can take two photons 1 and 2 that are both resonant with the spin-preserving transition 1, and reflect them from two cavity systems 1 and 2 that are both coupled to an NV center in the resonance scattering regime. After the two photons are reflected they are then interfered on a 50:50 beamsplitter. The entanglement would then be generated using the exact same protocol as outlined by Barrett and Kok\cite{Barrett:2005kx}, which could lead to the formation of large cluster states. One benefit of realising this type of scheme using a resonance scattering technique is that we do not need to use photons generated via spontaneous emission from the spin in the cavity, and can use some external source; in fact, photons 1 and 2 can be produced from the same source. This means it should be easier to ensure that the two photons are indistinguishable, which remains a challenge\cite{Bernien:2012fk}, thus effectively removing a decoherence channel from the existing Barrett and Kok protocol. Further, producing indistinguishable photons via spontaneous emission would require the emitted photons to be transform limited.
This would require some Purcell enhancement, i.e. $g^{2}>\kappa_{T}\gamma/4$, hence the required Q-factor would need to be higher. Note that this technique is also possible for other spin-cavity systems, for example the charged QD system examined earlier, where we would simply set photons 1 and 2 to have the same circular polarisation. Finally, for illustrative purposes, we can consider coupling a nitrogen vacancy centre to a photonic crystal cavity with current state-of-the-art fabrication techniques. Recent results have shown\cite{Riedrich-Moller:2012kx} the fabrication of photonic crystals in diamond with a Q-factor of $\approx700$ and a mode volume of $\approx 0.13\mu$m$^{3}$. Using this mode volume, and given a typical oscillator strength for the ground-to-excited-state triplet transitions of $f\approx0.12$\cite{nat_phs_2_408}, we can calculate the dipole-cavity coupling rate to be $g\approx13.5\mu$eV. Since the zero-phonon linewidth at low temperature is $\gamma\approx0.1\mu$eV\cite{0953-8984-16-30-R03}, the Q-factor required to meet the resonance scattering condition in such a structure would be $Q\approx256$, nearly three times smaller than what has already been experimentally realised. So, provided the input/output coupling rate $\kappa$ can be made much larger than the loss rate $\kappa_{s}$, currently realised structures in diamond would be suitable for generating entanglement using resonant scattering techniques. \section{Summary} In summary, we have shown a way to non-deterministically generate entanglement between electron spins and photons. We have shown how this can be applied to charged QD spins and nitrogen-vacancy centers coupled to optical microcavities. The idea uses resonance scattering, where orthogonal photon states are scattered and lost depending on the internal spin state of the electron. The advantage of this scheme is that it requires only low-Q micropillars, where the input-output coupling rate is intrinsically high. The disadvantage is that the non-deterministic nature makes scaling difficult compared to the spin-photon interface in the strong coupling regime. \end{document}
\begin{document} \copyrightpage \signaturepage \titlepage \begin{dedication} \index{Dedication@\emph{Dedication}} I would like to dedicate this thesis to my dear friends, without whom this would not have been possible. \end{dedication} \begin{acknowledgments} \index{Acknowledgments@\emph{Acknowledgments}} I would like to thank David Helm, Michael Starbird, and Eric Katz for many helpful conversations and their generous application of time, patience, and encouragement. I especially thank Eric for the origins of many of the ideas below, David for the guidance to make this document thorough and rigorous, and Mike for determination to see it to completion. \end{acknowledgments} \utabstract \index{Abstract} Hurwitz numbers are a weighted count of degree $d$ ramified covers of curves with specified ramification profiles at marked points on the codomain curve. Isomorphism classes of these covers can be included as a dense open set in a moduli space, called a Hurwitz space. The Hurwitz space has a forgetful morphism to the moduli space of marked, stable curves, and the degree of this morphism encodes the Hurwitz numbers. Mikhalkin has constructed a moduli space of tropical marked, stable curves, and this space is a tropical variety. In this paper, I construct a tropical analogue of the Hurwitz space in the sense that it is a connected, polyhedral complex with a morphism to the tropical moduli space of curves such that the degree of the morphism encodes the Hurwitz numbers. \tableofcontents \chapter{Introduction} \textit{This document combines Hurwitz numbers from classical enumerative geometry and the moduli space of curves from tropical geometry. This introduction establishes the basic definitions in each domain and frames them for the work to come. Then it states the original results and motivates the tools that will be used to prove them.} \section{Hurwitz Numbers} \textit{In this section, we define Hurwitz numbers and show how they can be computed in the class algebra.} \subsection{Counting Ramified Covers} \textit{In this subsection, we define Hurwitz numbers in enumerative geometry. For a fixed ramification profile, the Hurwitz number will be a weighted count of ramified covers with that profile.} \begin{dfn}\cite{GJ} Fix $d \in \N$. Then a \textbf{ramified cover of degree $d$} is a morphism $f: D \to C$, where $C$ and $D$ are compact curves over $\C$, such that $|f^{-1}(Q)| = d$ for all but a finite number of points $Q$ in $C$. The points in $C$ that do not have $d$ preimages are called \textbf{ramification values} or \textbf{branch points} of $f$. Two ramified covers $f: D \to C$ and $f': D' \to C'$ are isomorphic if there are isomorphisms $d: D \to D'$ and $c: C \to C'$ such that $f' \circ d = c \circ f$. \end{dfn} \begin{remark} We will require that $C$ be connected below, but $D$ need not be. \end{remark} \begin{lemma} Let $f: D \to C$ be a ramified cover of degree $d$ and $P \in D$. Then there exists $m_P \in \N$ such that $f$ is locally isomorphic to the map $z \mapsto z^{m_P}$. The integer $m_P$ is called the \textbf{ramification index} of $f$ at $P$. \end{lemma} \begin{proof} This is proved by Proposition IV.2.2 of Hartshorne, \cite{Hartshorne}. \end{proof} The given definition of a ramified cover requires that any point that is \textit{not} a branch point has $d$ distinct preimages. The next lemma uses the ramification indices to extend this count to \textit{all} values in the codomain, $C$. 
\begin{lemma}\label{ramificationpartition} Let $f: D \to C$ be a ramified cover of degree $d$ and fix $Q \in C$. Then $$\sum_{P \in f^{-1}(Q)} m_P = d.$$ \end{lemma} \begin{proof} This is proved by Proposition II.6.9 of Hartshorne, \cite{Hartshorne}. \end{proof} \begin{dfn} Let $f: D \to C$ be a ramified cover of degree $d$ and $Q \in C$. Then, by lemma \ref{ramificationpartition}, the ramification indices at points $P \in f^{-1}(Q)$ form an integer partition of $d$. This integer partition will be denoted by $\sigma(Q)$. \end{dfn} \begin{notation} Let $\sigma$ be an integer partition of $d$. Let $n_i$ be the number of parts of this integer partition of size $i$. Then we can represent this data as $\sigma = (1^{n_1} 2^{n_2} \cdots d^{n_d})$. There is no information lost by dropping those $i$ with $n_i = 0$ from this notation. \end{notation} \begin{example}\label{trivialram} Let $f : D \to C$ be a ramified cover of degree $d$. By definition, all but a finite number of points in $C$ have $d$ distinct preimages. Let $Q$ be one of the points with $d$ distinct preimages, and let $\{\hat{Q}_1,\ldots,\hat{Q}_d\}$ be those preimages. Lemma \ref{ramificationpartition} implies that the ramification indices are all positive integers summing to $d$, so $m_{\hat{Q}_i} = 1$ for all $i$. Thus $\sigma(Q) = (1^d)$ for all but a finite number of points $Q \in C$. \end{example} \begin{dfn}\cite{CJM} Let $d$ be a natural number and fix $n$ points, $Q_1,\ldots,Q_n$, on $\p^1$ and $\overline{\sigma} = \{\sigma_1, \ldots, \sigma_n\}$ a collection of integer partitions of $d$. Then a \textbf{Hurwitz number}, $h(\overline{\sigma})$, is defined as a weighted count of (isomorphism classes of) \begin{center} degree $d$ ramified covers, $f: D \to \p^1$ such that: \begin{itemize} \item $D$ is a smooth curve; \item $f$ is unramified over $\p^1 \setminus \{Q_1,\ldots,Q_n\}$; and \item $f$ ramifies with profile $\sigma(Q_i) = \sigma_i$. \end{itemize} Each cover $f$ is counted with weight $\frac{1}{|Aut(f)|}$. \end{center} \end{dfn} The definition above of Hurwitz numbers is a little different from the usual one. Traditionally, the genus of $D$ is specified and included in the notation. This allows for a partial collection of ramification profiles to be specified, while an unknown number of points with profile $(1^{d-2}2^1)$ are left unspecified. If you know the ramification profiles, then you can read off the degree of the covers. Moreover, from the ramification profiles, you can compute the genus, $g$, using the following famous result. The two definitions are the same except for some notation. We will assume all ramification is listed in the profiles, so we gave that version of the definition. \begin{dfn} Let $\sigma = (1^{n_1} 2^{n_2} \cdots d^{n_d})$ be an integer partition of $d$. Define $\ell(\sigma) = \sum_i n_i$, the number of parts in the integer partition $\sigma$. Also define $r(\sigma) = d - \ell(\sigma)$. \end{dfn} \begin{note} By Example \ref{trivialram}, for all but a finite number of points $Q \in C$, $r(\sigma(Q)) = r(1^d) = d - \ell(1^d) = d-d = 0$. \end{note} \begin{thm}[Riemann-Hurwitz Formula] Let $f: D \to C$ be a ramified cover of degree $d$. Let $g(C)$ and $g(D)$ be the genera of these curves. Then $$2 - 2g(D) = d(2-2g(C)) - \sum_{Q \in C} r(\sigma(Q)).$$ \end{thm} \begin{proof} This is proved by Corollary 2.4 of Hartshorne, \cite{Hartshorne}. \end{proof} \subsection{Counting Covering Spaces} \textit{Hurwitz numbers count ramified covers. 
Instead, they can be interpreted as counting related covering space maps, using the Riemann Extension Theorem.} \begin{lemma} Let $f : D \to \p^1$ be a ramified cover of degree $d$, and let $Q_1, \ldots, Q_n$ be the ramification values of $f$. Then $$f : D \setminus f^{-1}(\{Q_1, \ldots, Q_n\}) \to \p^1 \setminus \{Q_1, \ldots, Q_n\}$$ is a covering space map. \end{lemma} The Riemann Extension Theorem says that a covering space of a punctured copy of $\p^1$ can be completed to a ramified cover in a unique manner. See \cite{Miranda} by Miranda for a general reference. In the language of Hartshorne I.6 (\cite{Hartshorne}), the morphism $f$ induces a map of the function fields of the codomain and domain curves. Then this map of function fields can be realized as a morphism of smooth, projective curves. The curves in this new version of the morphism are each the unique smooth curve in the birational equivalence class of the original domain and codomain respectively, and the original domain and codomain curves live as (Zariski) open sets in these projective curves. Since curves are dimension $1$, the complements of the original curves are dimension $0$, meaning finite collections of points, as desired. The automorphisms of the covering space and the ramified cover are the same, coming from the fundamental group of the punctured $\p^1$. So, counting these covering spaces, weighted by their automorphisms, is identical to computing Hurwitz numbers. \subsection{Counting Monodromy Representations}\label{monodromy} \textit{Counts of covering maps can be interpreted as counting monodromy representations.} \begin{notation} Let $f: D' \to \p^1 \setminus \{Q_1, \ldots, Q_n\}$ be a degree $d$ covering space. Fix a point $x \in \p^1 \setminus \{Q_1, \ldots, Q_n\}$, and fix a labeling of the points in $f^{-1}(x)$ as $\{\tilde{x}_1, \tilde{x}_2, \ldots, \tilde{x}_d\}$. Let $\ell$ be an element of $\pi_1(\p^1 \setminus \{Q_1, \ldots, Q_n\}, x)$, and let $\tilde{\ell}_k$ be the lift of $\ell$ with $\tilde{\ell}_k(0) = \tilde{x}_k$, the starting point of this lift path. \end{notation} \begin{dfn} The \textbf{monodromy representation} of the covering space $f : D' \to \p^1 \setminus \{Q_1, \ldots, Q_n\}$ with basepoint $x$ is the homomorphism $$\hat{f} : \pi_1(\p^1 \setminus \{Q_1, \ldots, Q_n\}, x) \to S_d$$ satisfying $\tilde{x}_{\hat{f}(\ell)(k)} = \tilde{\ell}_k(1)$, the endpoint of this lift path. \end{dfn} \begin{remark} The previous definition is not clear geometrically. In short, each lift of a loop from the fundamental group gives an oriented path from $\tilde{x}_i$ to $\tilde{x}_j$ for every $i$. This map $i \to j$ specifies a permutation of the $d$ preimages. Alternately, this is the permutation representation on the cosets of $f_*\pi_1(D',\tilde{x})$ in $\pi_1(\p^1 \setminus \{Q_1, \ldots, Q_n\}, x)$. \end{remark} By standard covering space results, the covering space can be recovered from the map $\hat{f}$. Notice that any reordering of $\{\tilde{x}_1, \tilde{x}_2, \ldots, \tilde{x}_d\}$ produces the same covering space, so we have over-counted by a factor of $d!$. \begin{notation} Fix $\alpha \in S_d$. Then $\alpha$ can be written uniquely as a collection of disjoint cycles so that all of $\{1,\ldots,d\}$ appears in a cycle (up to reordering). An $i$-cycle is a cycle containing exactly $i$ elements from $\{1,\ldots,d\}$. Let $n_i$ be the number of $i$-cycles in this representation. Notice that the sum of the cycle lengths of $\alpha$ is always $d$, so those lengths form an integer partition of $d$. 
The \textbf{cycle-type} of the permutation $\alpha$ is the integer partition $\sigma(\alpha) = (1^{n_1}2^{n_2}\cdots d^{n_d})$. \end{notation} \begin{dfn} Let $f: D' \to \p^1 \setminus \{Q_1, \ldots, Q_n\}$ be a covering space and let $g_i$ be the loop in $\pi_1(\p^1 \setminus \{Q_1, \ldots, Q_n\}, x)$ that separates the puncture at $Q_i$ from all of the other punctures such that the puncture at $Q_i$ is on the left-hand side of the loop. \begin{center} \psset{xunit=0.8cm,yunit=0.8cm,algebraic=true,dotstyle=*,dotsize=3pt 0,linewidth=0.8pt,arrowsize=3pt 2,arrowinset=0.25} \begin{pspicture*}(-1.3,-0.8)(4,2.2) \rput{-132.88}(2.43,0.69){\psellipse(0,0)(1.7,0.56)} \rput{132.87}(0.07,0.65){\psellipse(0,0)(1.75,0.54)} \psline{->}(-0.23,0.17)(-0.06,0.01) \psline{->}(2.32,1.37)(2.17,1.23) \psdots(1.3,-0.58) \rput[bl](1.38,-0.46){$x$} \psdots(2.88,0.92) \rput[bl](2.86,1.09){$Q_1$} \rput[bl](1.8,1.46){$g_1$} \psdots(-0.32,1.04) \rput[bl](-0.64,1.16){$Q_2$} \rput[bl](0.2,1.46){$g_2$} \end{pspicture*} \end{center} \end{dfn} \begin{notation} Notice that $g_i$ does not have to act transitively on the lifts of $x$. However, the orbits of this action do partition the set of preimages; the sizes of the sets in this (set) partition form an integer partition of $d$. Write $\sigma(Q_i)$ for the integer partition of the action of $g_i$. \end{notation} \begin{remark} The partition $\sigma(Q_i)$ must be the integer partition generated by the cycle-type of the image of $g_i$ in the monodromy representation: $\sigma(Q_i) = \sigma(\hat{f}(g_i))$. In addition, because each $g_i$ goes around one puncture, their product goes around all of them and hence is trivial in the fundamental group of $\p^1$, the sphere. \end{remark} \begin{prop}\label{pigens} Giving $\hat{f}: \pi_1(\p^1 \setminus \{Q_1,\ldots,Q_n\}) \to S_d$ up to conjugacy is equivalent to giving a choice of generators, $\gamma_i$ for $\{\gamma_i | 1 \leq i \leq r\}$ with the single relation $\gamma_1 * \gamma_2 * \cdots * \gamma_n = 1$ such that $\sigma(\gamma_i) = \sigma(Q_i)$. In other words, $$h(\overline{\sigma}) = \frac{|Hom^{\overline{\sigma}}(\pi_1(\p^1 \setminus \{Q_1,\ldots,Q_n\}), S_d)|}{d!}.$$ \end{prop} \begin{remark} There may not always be $d!$ elements in each conjugacy class, but the stabilizer is in bijection with the automorphisms of the covers, giving this simple description. The notation above comes to me from a talk by R. Cavalieri, \cite{CavTalk}. \end{remark} \subsection{Computing in the Class Algebra} \textit{Counts of monodromy representations can be computed from coefficients in expressions in the class algebra. The computation uses the trace on the class algebra.} \begin{notation} Fix $d \in \N$ and consider the group ring $\R[S_d]$. Then every element $r \in \R[S_d]$ can be written as an $\R$-linear sum of the elements of $S_d$, $$r = \sum_{g \in S_d} c_g \cdot g$$ where each $c_g \in \R$. \end{notation} \begin{remark} The field $\R$ could be replaced with $\Q$ or even $\Z$ for our purposes below. We originally chose values in $\R$ because of the possibility of interpreting the coefficients as edge lengths. \end{remark} \begin{dfn} The \textbf{class algebra} is the center of $\R[S_d]$, written $Z(\R[S_d])$. \end{dfn} Note that two permutations in $S_d$ have the same cycle-type if and only if they are conjugates. So the set of conjugacy classes can be identified with the set of integer partitions of $d$. 
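As a concrete illustration (not part of the argument), the following short Python snippet groups the elements of $S_4$ by cycle-type and confirms that the conjugacy classes are indexed by the integer partitions of $4$; the function names are ours and chosen only for this example.
\begin{verbatim}
# Group the elements of S_4 by cycle-type; each class corresponds to an
# integer partition of 4, written here as a weakly decreasing tuple.
from itertools import permutations
from collections import Counter

def cycle_type(perm):
    """Cycle-type of a permutation of {0, ..., d-1}, given as a tuple of images."""
    d, seen, lengths = len(perm), set(), []
    for start in range(d):
        if start not in seen:
            length, j = 0, start
            while j not in seen:
                seen.add(j)
                j = perm[j]
                length += 1
            lengths.append(length)
    return tuple(sorted(lengths, reverse=True))

d = 4
classes = Counter(cycle_type(p) for p in permutations(range(d)))
for partition, size in sorted(classes.items()):
    print(partition, size)   # e.g. the class of type (2, 1, 1) has 6 elements
\end{verbatim}
The class sizes printed here agree with the formula for $|K_\sigma|$ given below.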
\begin{dfn}\label{basisdfn} For a fixed integer partition $\sigma$ of $d$, let $$K_\sigma = \sum_{\sigma(g) = \sigma} 1 \cdot g.$$ Through a slight abuse of notation, we can think of each element of $S_d$ as an element of $\R[S_d]$. In this parlance, $K_\sigma$ is the sum of the permutations with cycle-type $\sigma$. \end{dfn} \begin{lemma}\label{basis} The class algebra, $Z(\R[S_d])$, has as a (vector space) basis $$\{K_\sigma \mid \sigma \mbox{ an integer partition of } d\}.$$ \end{lemma} \begin{proof} First, we will show that $K_\sigma$ is in the center of the group algebra, $Z(\R[S_d])$. Consider an element $g$, which is a single permutation inside $\R[S_d]$. Then $$g K_\sigma = g(\sum_{\sigma(\alpha) = \sigma} \alpha) = \sum_{\sigma(\alpha) = \sigma} g \alpha = \sum_{\sigma(\alpha) = \sigma} g \alpha g^{-1}g \overset{*}{=} \sum_{\sigma(\alpha') = \sigma} \alpha' g = K_\sigma g.$$ The starred equality holds because two elements in $S_d$ are conjugate if and only if they have the same cycle-type (which is to say integer partition of $d$) and conjugation is a bijection. Any element $r \in \R[S_d]$ is an $\R$-linear combination of permutations, so this computation actually checks that $K_\sigma$ is central. Conversely, let $z = \sum_{g\in S_d} c_g \cdot g$ be some central element in $\R[S_d]$ and suppose there is a permutation $\alpha$ so that $c_\alpha \neq 0$. Then, for any $\beta = g \alpha g^{-1}$, the coefficient of $\beta$ in $g z g^{-1}$ is $c_\alpha$. However, because $z$ is central, this coefficient is also $c_\beta$. Hence the coefficients are constant on conjugacy classes and $z$ is in the span of the $K_\sigma$. So the set $\{K_\sigma \mid \sigma \mbox{ an integer partition of } d\}$ forms a basis for the center of the group algebra as a vector space. \end{proof} \begin{notation} We will write $|K_\sigma|$ for the size of the conjugacy class with integer partition $\sigma$. If $\sigma = (1^{n_1}2^{n_2} \cdots d^{n_d})$, then standard combinatorial techniques show that $$|K_\sigma| = \frac{d!}{1^{n_1}n_1!2^{n_2}n_2! \cdots d^{n_d}n_d!}.$$ \end{notation} \begin{dfn} Define the function $\tr: Z(\R[S_d]) \to \R$ by $$\tr(\sum_{g \in S_d} c_g \cdot g) = c_e.$$ \end{dfn} \begin{example}\label{conjsize} Let $\sigma \neq \sigma'$ be integer partitions of $d$. Then $\tr(K_\sigma K_{\sigma'}) = 0$. This is because the only way to write the identity as the product of two permutations is as the product of inverses, and in $S_d$, inverses have the same cycle-type. Similarly, $\tr(K_\sigma K_\sigma) = |K_\sigma|$ because each element has a unique inverse, and that inverse lies in the same conjugacy class. \end{example} \begin{lemma} The function $\tr$ is linear. \end{lemma} \begin{proof} The trace function reads off a single coefficient (that of the identity), and taking a fixed coefficient is a linear operation. \end{proof} \begin{prop} The Hurwitz number $h(\overline{\sigma})$, which counts monodromy representations $\hat{f}$ with profile $\overline{\sigma}$ as in \ref{pigens}, is equal to $\frac{1}{d!} \tr(K_{\sigma_1} \ldots K_{\sigma_n})$. \end{prop} \begin{proof} The trace function reads off the coefficient of the identity. So\\ $\tr(K_{\sigma_1} \ldots K_{\sigma_n})$ counts the ways to write the identity as a product of $n$ permutations with cycle-types $\overline{\sigma} = \{\sigma_1, \ldots, \sigma_n\}$. This is exactly the same count as the number of ways to choose the images of the generators for the monodromy representation, so the claim follows from \ref{pigens}. \end{proof} \begin{note} The loops $g_i$ around the punctures in the covering space give the trivial relation in any order, so it's not surprising that we end up in the class algebra, the \textit{center} of a group ring. \end{note}
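Before moving on, it may help to record the smallest nontrivial instance of this count.

\begin{example}
Take $d = 3$, $n = 2$, and $\overline{\sigma} = \{(3^1),(3^1)\}$. In $Z(\R[S_3])$ we have $K_{(3^1)} = (1\,2\,3) + (1\,3\,2)$, so $K_{(3^1)}K_{(3^1)} = 2e + (1\,2\,3) + (1\,3\,2)$ and $\tr(K_{(3^1)}K_{(3^1)}) = 2$. Therefore $h(\overline{\sigma}) = \frac{2}{3!} = \frac{1}{3}$. This matches the geometry: up to isomorphism, the only degree $3$ cover of $\p^1$ totally ramified over two points is $z \mapsto z^3$, which has exactly $3$ automorphisms and hence contributes $\frac{1}{3}$ to the weighted count.
\end{example}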
\section{Tropical Geometry} Algebraic geometry is the study of the geometry of a set through properties of its set of regular functions. Classically, algebraic geometers studied sets that can be described locally as the simultaneous zero set of a collection of polynomials; in this case, the set of regular functions is described as a quotient of a polynomial ring over some field. The field over which the polynomials above are defined is critical. For example, we might study the solutions to the equation $x^2 + y^2 = 1$. You will immediately note that the set of solutions to the equation depends on the allowable values for $x$ and $y$. Number theorists are interested in this set if $x$ and $y$ are restricted to being in an algebraic extension of $\Q$; a precalculus student might be interested in this set when $x$ and $y$ are real numbers. Most algebraic geometers consider the sets defined by algebraic or regular functions that are defined over an algebraically closed field like $\C$. Algebraic geometry has a very different character over different fields. \subsection{The Tropical Numbers, $\T$} Tropical algebraic geometry is algebraic geometry over the tropical numbers. To do tropical geometry, we must first define the tropical numbers and then specify our collections of regular functions. \begin{dfn} Let $\T = \R \cup \{-\infty\}$ be the \textbf{tropical numbers}. \end{dfn} Although the tropical numbers contain $\R$, we will give them a very different algebraic structure. We will extend the operations of $\max$ and $+$ from $\R$ to all of $\T$ to define a pair of binary operations, with $\odot$ distributing over $\oplus$, making the tropical numbers a semifield. \begin{dfn}\label{troperations} Let $a,b \in \R$. Then we define $a \oplus b = \max\{a,b\}$ and $a \odot b = a + b$ where this maximum and addition are computed using the traditional Archimedean ordering and additive structure on $\R$. Extend the operations $\oplus$ and $\odot$ to $\T$ by letting \begin{itemize} \item $-\infty \oplus a = a = a \oplus -\infty$; \hspace{.1in} $-\infty \oplus -\infty = -\infty$, and \item $-\infty \odot a = -\infty = a \odot -\infty$; \hspace{.1in} $-\infty \odot -\infty = -\infty$. \end{itemize} \end{dfn} We have extended these operations to all of $\T$ in the most naive manner. We are essentially thinking of $-\infty$ as the most negative `real' number; this makes it clear that $-\infty$ only affects a maximum if all terms are $-\infty$ and it dominates any traditional sum. In the following two expressions, the right-hand expression is an abuse of notation; we will allow this abuse because the common interpretations of these expressions are consistent with the more precise definitions above. $$-\infty \oplus 1 = 1 = \max\{-\infty,1\}$$ $$-\infty \odot 1 = -\infty = -\infty + 1$$ In short, $\oplus$ can always be interpreted on $\T$ as $\max$ and $\odot$ can always be interpreted as $+$. For $\T$ to play the role of the field of definition for a branch of algebraic geometry, it should be a field. It turns out that the tropical numbers are not a field because of the lack of additive inverses. \begin{dfn} A set with two binary operations satisfying all of the properties of a field except the existence of additive inverses is called a \textbf{semifield}. \end{dfn} As long as we avoid subtraction, we will be able to proceed. \begin{thm} With $\oplus$ and $\odot$ defined as in \ref{troperations}, $(\T, \oplus, \odot)$ is a semifield.
\end{thm} \begin{proof} Almost every property is known for values in $\R$, and the checks using $-\infty$ are straightforward. Note that $0_\T = -\infty$ and $1_\T = 0$. \end{proof} \begin{note} The tropical semifield is idempotent: for all $a \in \T$, $a \oplus a = a$. \end{note} A common first question in tropical geometry seeks an explanation for the choice of the word ``tropical''. According to Jean-Eric Pin \cite{Pin}, the term was coined by Dominique Perrin, a French computer scientist, in honor of his Brazilian friend and colleague, Imre Simon. In the words of Maclagan and Sturmfels \cite{MS}, this choice ``simply stands for the French view of Brazil... without any deeper meaning''. According to Cohen, Gaubert, and Quadrat in \cite{CGQ}, the history of max-plus algebras can be traced to at least the early 1960s. They list scheduling theory, graph theory, dynamic programming, optimal control theory, asymptotic analysis, and discrete event theory as fields that have given rise to the use of this idempotent algebraic structure. \begin{note} Some of the researchers listed above actually worked with min-plus algebras on $\R \cup \{\infty\}$. The algebraic structure of these two semifields can be shown to be identical through the transformation $x \mapsto -x$. \end{note} Tropical polynomials behave differently from classical polynomials; two distinct tropical polynomials can produce the same tropical function. \begin{example} Consider the polynomial $p(x,y) = ``x^2 + xy + y^2" = x \odot x \oplus x \odot y \oplus y \odot y = \max\{2x,x+y,2y\}$. Note that $x+y > 2x$ exactly when $y > x$, in which case the third term achieves the maximum: $\max\{2x,x+y,2y\} = 2y$. The possibility $x+y > 2y$ is similar. As a result, the middle term can never be the maximum (without agreeing with another term), so it can be removed from the polynomial without changing $p(x,y)$ as a function; $``x^2 + xy + y^2" = ``x^2 + y^2"$ as functions. \end{example} Distinct tropical polynomials need not produce distinct functions by evaluation; however, they can be grouped into equivalence classes of polynomials producing the same function by evaluation. A version of the fundamental theorem of algebra holds for equivalence classes of tropical polynomials. \begin{thm}\cite{MS} Every tropical polynomial in one variable is equivalent (as a function) to a tropical polynomial that can be written as the product of linear tropical polynomials. In other words, the tropical semifield is algebraically closed. \end{thm} We interpret this result to mean that it is reasonable to attempt to do geometry over $\T$ that is analogous to classical geometry. \subsection{Tropical Varieties}\label{TropVars} \textit{In this subsection, we give the definition of a tropical variety and specify it to dimension $1$, curves.} Tropical geometry is sometimes thought of as either a logarithmic or valuation image of classical geometry (as seen in the transformation of multiplication into addition). It is also sometimes seen as a ``dequantization'' of the algebraic structure on $\R$ \cite{MikM,Litvinov}. As a result, the tropical analogues of classical objects have $\R$-dimension equal to the $\C$-dimension of their classical counterparts, and they are a special kind of polyhedral object. \begin{dfn} The \textbf{zero locus} of a tropical polynomial is its non-linearity locus as a function. \end{dfn}
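Before looking at two variables, here is the simplest possible case in one variable.

\begin{example}
Consider $p(x) = x \oplus 0 = \max\{x,0\}$. This function is linear for $x < 0$ (where it is constantly $0$) and for $x > 0$ (where it equals $x$), and it fails to be linear exactly at $x = 0$. So the zero locus of $p$ is the single point $\{0\}$, just as a linear polynomial in one variable classically has a single root.
\end{example}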
\begin{example}\label{tripodex} Consider the tropical polynomial in two variables, $p(x,y) = x \oplus y \oplus 0 = \max\{x,y,0\}$. This maximum is non-linear exactly when two of these terms agree and achieve that maximum. The zero locus of $p(x,y)$ is then the following three rays meeting at the origin. \begin{center} \psset{xunit=0.8cm,yunit=0.8cm,algebraic=true,dotstyle=o,dotsize=3pt 0,linewidth=0.8pt,arrowsize=3pt 2,arrowinset=0.25} \begin{pspicture*}(-1.68,-1.85)(2.68,1.84) \psplot{0}{2.68}{(-0--1*x)/1} \psline(0,0)(0,-1.85) \psplot{-1.68}{0}{(-0-0*x)/-1} \psline{->}(0,0)(1,1) \psline{->}(0,0)(0,-1) \psline{->}(0,0)(-1,0) \rput[tl](0.8,0.6){$(1,1)$} \rput[tl](-1.6,0.7){$(-1,0)$} \rput[tl](0.2,-0.5){$(0,-1)$} \end{pspicture*} \end{center} \end{example} \begin{remark} If there is any validity to the analogy with classical geometry, then this example already suggests several of the connections. First, the zero locus of a linear polynomial in two variables should be a genus zero curve. So this ``tripod'' should be a tropical curve of genus zero. Second, notice that a $1$-dimensional tropical variety would then have $\R$-dimension $1$ as well; in general dimension $k$ tropical objects should have pieces that are dimension $k$ over $\R$. And third, notice that the three primitive integer vectors listed on the rays sum to the zero vector. So perhaps tropical varieties are going to be piece-wise linear objects with a similar condition at the intersections of the linear pieces. \end{remark} \begin{remark} Mikhalkin has shown that, in general, a tropical hypersurface in $\T^n$ (the zero set of a single tropical polynomial in $n$ variables) is a polyhedral complex with a balancing condition at the $(n-2)$-dimensional faces. See Property 3.2 in \cite{MikT}. \end{remark} We now give Mikhalkin's more careful definition of this balancing condition in order to define tropical varieties in general. The following definition is extremely technical, but it boils down to a `zero-tension' condition that guarantees the well-definedness of degree for tropical objects. \begin{dfn}\cite{MikM} Let $P$ be a $k$-dimensional polyhedral complex embedded in $\T^N$; to each $k$-dimensional cell, associate a rational number called the weight. Let $F \subset P \cap \R^N$ be a $(k-1)$-dimensional cell of $P$ and $F_1,\ldots, F_l$ be the $k$-dimensional cells of $P$ adjacent to $F$ whose weights are $w_1,\ldots,w_l$. Let $L \subset \R^N$ be an $(N-k)$-dimensional affine-linear space defined by integer equations such that it intersects $F$. For a generic (real) vector $v \in \R^N$ the intersection $F_j \cap (L + v)$ is either empty or a single point. Let $\Lambda_{F_j} \subset \Z^N$ be the integer vectors parallel to $F_j$ and $\Lambda_L \subset \Z^N$ be the integer vectors parallel to $L$. Set $\lambda_j$ to be the product of $w_j$ and the index of the subgroup $\Lambda_{F_j}+\Lambda_L \subset \Z^N$. We say that $P \subset \T^N$ is \textbf{balanced} if for any choice of $F$, $L$ and a small generic $v$ the sum $$\iota_L = \sum_{j | F_j\cap(L+v)\neq\emptyset} \lambda_j$$ is independent of $v$. We say that $P$ is \textbf{simply balanced} if in addition for every $j$ we can find $L$ and $v$ so that $F_j \cap (L + v) \neq \emptyset$, $\iota_L = 1$ and for every small $v$ there exists an affine hyperplane $H_v \subset L$ such that the intersection $P \cap (L + v)$ sits entirely on one side of $H_v + v$ in $L + v$ while the intersection $P \cap (H_v + v)$ is a point. \end{dfn}
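The next example is a sanity check that unwinds the balancing condition in the simplest case, the tripod of \ref{tripodex} with all weights equal to $1$.

\begin{example}
Let $P \subset \T^2$ be the tripod of \ref{tripodex}, with each ray given weight $1$, and let $F$ be the origin, the unique $0$-dimensional cell. Take $L$ to be the line $x + y = 0$, which is defined by an integer equation and meets $F$. For a small generic $v$ with $v_1 + v_2 > 0$, the translate $L + v$ meets only the ray in direction $(1,1)$; here $\Lambda_{F_1} + \Lambda_L$ is generated by $(1,1)$ and $(1,-1)$, a sublattice of index $2$ in $\Z^2$, so $\iota_L = 2$. For a small generic $v$ with $v_1 + v_2 < 0$, the translate meets the rays in directions $(-1,0)$ and $(0,-1)$; each of the corresponding lattices $\Lambda_{F_j} + \Lambda_L$ is all of $\Z^2$, so $\iota_L = 1 + 1 = 2$ again. The sum is independent of $v$, as the balancing condition requires.
\end{example}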
\begin{dfn} Let $Y$ be a subset of $\T^N$. Then we can define a sheaf of functions on $Y$, $\mathcal{O}_Y$, by taking the restrictions of the (tropical) Laurent polynomials in $N$ variables to $Y$ (and its open sets). \end{dfn} \begin{dfn}\cite{MikM} A topological space $X$ enhanced with a sheaf of tropical functions $\mathcal{O}_X$ is called a \textbf{(smooth) tropical variety} of dimension $k$ if, for every $x \in X$, there exists an open set $U \ni x$ and an open set $V$ in a simply balanced polyhedral complex $Y \subset \T^N$ such that the restrictions $\mathcal{O}_X|_U$ and $\mathcal{O}_Y|_V$ are isomorphic. \end{dfn} \begin{remark}\cite{MikM}\label{tropcurvedfn} The definition of a \textbf{tropical curve}, a smooth tropical variety of dimension $1$, is particularly simple. As in \ref{tripodex}, simply balanced $1$-dimensional objects will be unions of line segments meeting at points. The sheaf of functions induces a structure that is equivalent to a complete metric away from the $1$-valent vertices and adjacent edges. So the edges without degree $1$ vertices must have finite length and are called \textit{bounded} edges; the edges containing $1$-valent vertices must have infinite length in this metric and are called \textit{unbounded} edges. In short, a tropical curve is a certain kind of metric graph. \end{remark} \section{Tropical Hurwitz Geometry} \textit{In this section, we review some of what is known about the relationship between classical and tropical Hurwitz numbers as well as motivate the kinds of geometric goals we seek for the rest of this document.} \subsection{Classical Hurwitz Geometry} \textit{There are many ways to compute the Hurwitz numbers classically, and it's not clear which would make the best analogy for tropical geometry. In this subsection, we frame them as the degree of a morphism between two moduli spaces.} Hurwitz numbers lie at the intersection of many different areas of mathematics. Historically, Hurwitz related them to the representation theory of $S_d$ and thereby combinatorics. Ekedahl-Lando-Shapiro-Vainshtein connected Hurwitz numbers to the intersection theory on the moduli space of curves, Faber's Conjecture, and integrable systems (\cite{GJV}, \cite{ELSV}). More recently, new light has been shed on Hurwitz numbers by mathematical physics. In particular, they have applications in string theory, the study of Calabi-Yau manifolds, and Gromov-Witten theory. In addition, the set of (classical) ramified covers can be given a geometric structure, which we explain now for motivation both of the methods used below and the results. Consider a collection of interesting `objects' and a notion of isomorphism of those objects. Let $M$ be the set of isomorphism classes of these objects. Sometimes it is possible to give $M$ a geometric structure. For example, the set of lines through the origin in $\R^2$ is called $\R\p^1$ and given a geometry that makes it diffeomorphic to $\s^1$. In general, projective space is given a geometric structure in a similar manner. Similarly, the tangent space to a point $x$ in a manifold is the set of parameterized curves through $x$ up to reparameterization. This set is usually given the structure of a vector space, which has both algebraic and geometric structure. The set, $M$, of isomorphism classes, along with a geometric structure, is called a \textbf{moduli space}, which I find to be one of the most interesting and exciting concepts in algebraic geometry. Sometimes the geometric structure comes naturally from the representations of the objects (or isomorphism classes).
For example, the set of degree $1$ polynomials with real coefficients can be seen as a vector space. These polynomials are usually written $\{ax+b|a,b\in\R\}$; the values of $a$ and $b$ can vary, and we think of them as the coordinates on the moduli space. These varying coordinates are sometimes called parameters or \textit{moduli}. If the geometric structure on $M$ is natural, it may be possible to use its geometry to ask global questions about the set $M$. For example, the question ``How many curves are there passing through this set of points in the plane?'' is answered by Kontsevich's Formula, which can be shown using intersection theory on a moduli space. Each incidence condition (like passing through a fixed point) is realized as a divisor, and the computation for the formula is carried out by intersecting these divisors. Because this proof uses intersections and degenerations, it requires that the geometric structure on the moduli space be compact, a theme that will resurface below. In classical algebraic geometry, the most famous moduli spaces are $\overline{M}_g$, the moduli space of stable curves of genus $g$, and the related space $\overline{M}_{g,n}$, the moduli space of stable genus $g$ curves with $n$ distinct marked points. The notion of a ramified cover can also be expanded slightly to allow the construction of a moduli space of admissible covers, $\overline{H}(\overline{\sigma})$, called a Hurwitz space. The geometry of these spaces can be used to compute the Hurwitz numbers; there is a natural map $\phi: \overline{H}(\overline{\sigma}) \to \overline{M}_{g,n}$ such that $\deg(\phi)$ encodes the Hurwitz numbers. In tropical geometry, Mikhalkin has constructed a moduli space of genus $0$ marked curves, $\M_{0,n}$, and the goal of this document is to construct a candidate for the tropical analogue of a Hurwitz space, $\mathcal{H}(\overline{\sigma})$, along with a natural map to $\M_{0,n}$ whose degree also encodes the Hurwitz numbers in the same way. There is a partially understood map from classical geometry to tropical geometry called \textbf{tropicalization}, which is discussed very briefly at the beginning of subsection \ref{TropVars}. Tropicalization clearly loses some information, but it appears to retain certain critical enumerative information. Ideally, we would like to fill in the following commutative diagram in which the horizontal maps represent tropicalization and the vertical maps have degrees that encode the Hurwitz numbers by constructing $\mathcal{H}(\overline{\sigma})$. $$\begin{CD} \overline{H}(\overline{\sigma}) @>>> \mathcal{H}(\overline{\sigma})\\ @VVV @VVV\\ \overline{M}_{0,n} @>>> \M_{0,n} \end{CD}$$ The relationship between $\overline{M}_{0,n}$ and $\M_{0,n}$ will be highly instructive for motivating our construction, as will understanding the structure of $\overline{H}(\overline{\sigma})$. Both of these tools require an understanding of $\overline{M}_{0,n}$, so we start there. \begin{dfn} An \textbf{$n$-marked rational curve} is a copy of $\p^1$ with a list of $n$ distinct points on $\p^1$, $\overline{Q} = \{Q_1,\ldots,Q_n\}$. Two $n$-marked curves $(\p^1,\overline{Q})$ and $(\p^1,\overline{Q}')$ are isomorphic if there is an isomorphism of curves $f: \p^1 \to \p^1$ such that $f(Q_i) = Q_i'$. For $n \geq 3$, let $M_{0,n}$ be the set of isomorphism classes of $n$-marked rational curves. \end{dfn} An $n$-marked rational curve is difficult to draw on a chalkboard.
As a result, many algebraic geometers draw a line segment for the $\p^1$ and a small tick mark for each marked point. Here is an image of a $4$-marked rational curve in this notation. \begin{center} \psset{xunit=1.0cm,yunit=1.0cm,algebraic=true,dotstyle=o,dotsize=3pt 0,linewidth=0.8pt,arrowsize=3pt 2,arrowinset=0.25} \begin{pspicture*}(-.1,-.1)(1.1,1.1) \psline(0,0)(1,1) \psline(.15,.25)(.25,.15) \psline(.35,.45)(.45,.35) \psline(.55,.65)(.65,.55) \psline(.75,.85)(.85,.75) \end{pspicture*} \end{center} The set $M_{0,n}$ can be given the structure of a non-compact variety. The locations of the $Q_i$ are coordinates on this space, and there are clearly holes at the points where multiple marked points would coincide. We could compactify by allowing the points to collide; then the compactification would be $(\p^1)^n/Aut(\p^1)$. Some of these curves have an infinite number of automorphisms, which we wish to avoid here. In addition, we are going to add other information to each of the marked points, so we will not want them to be allowed to coincide. One way to do this would be to remember the direction from which the points approached each other. The projectivized tangent space to a point on $\p^1$ is again $\p^1$, so this could be accomplished by keeping the original $\p^1$, adding a second $\p^1$ intersecting the first at the point where the marked points were going to collide, and moving the colliding marked points onto the new $\p^1$. For example, if two of the marked points in $M_{0,4}$ collide, this would produce the following \textit{degeneration}. \begin{center} \psset{xunit=1.0cm,yunit=1.0cm,algebraic=true,dotstyle=o,dotsize=3pt 0,linewidth=0.8pt,arrowsize=3pt 2,arrowinset=0.25} \begin{pspicture*}(-.1,-.1)(4.1,1.1) \psline(0,0)(1,1) \psline(.15,.25)(.25,.15) \psline(.35,.45)(.45,.35) \psline(.55,.65)(.65,.55) \psline(.75,.85)(.85,.75) \psline{->}(1,0.5)(2,0.5) \psline(2,0)(3,1) \psline(2.15,.25)(2.25,.15) \psline(2.35,.45)(2.45,.35) \psline(2.8,1)(3.8,0) \psline(3.55,.15)(3.65,.25) \psline(3.35,.35)(3.45,.45) \end{pspicture*} \end{center} This object is a \textbf{tree} of $\p^1$s, meaning that each irreducible component is a copy of $\p^1$. For the curves in $\overline{M}_{0,n}$ this tree has no cycles or self-intersections. In other words, the genus has not increased. There are now two kinds of \textbf{special} points on the irreducible components, marked points and intersections with other components. We use these ideas to expand the collection of isomorphism classes that we are considering in order to compactify $M_{0,n}$. \begin{dfn} An \textbf{$n$-marked stable rational curve} is a tree of $\p^1$s, $X$, with a list of $n$ distinct, smooth points on $X$, $\overline{Q} = \{Q_1,\ldots,Q_n\}$, such that each irreducible component has at least three special points. Two $n$-marked stable curves $(X,\overline{Q})$ and $(X',\overline{Q}')$ are isomorphic if there is an isomorphism of curves $f: X \to X'$ such that $f(Q_i) = Q_i'$. For $n \geq 3$, let $\overline{M}_{0,n}$ be the set of isomorphism classes of $n$-marked stable rational curves. \end{dfn} The set $\overline{M}_{0,n}$ can be given the geometric structure of a compact variety. \begin{remark} The condition $n \geq 3$ clearly implies that an $n$-marked rational curve is an $n$-marked stable rational curve. \end{remark} \begin{remark} The term \textbf{stable} is used because these curves do not have any non-trivial automorphisms.
Recall that Mobius transformations (the automorphisms of $\p^1$) can take any three points on $\p^1$ to any other three points on $\p^1$. Once three points are fixed, the other points are determined. \end{remark} \begin{remark} By the discussion above, $M_{0,4}$ can be given the following geometric structure. Use Mobius transformations to send the first three marked points to $\{0,1,\infty\}$. This sends the fourth point to the cross ratio of the original $4$ points. This fourth point can be located at any other point on $\p^1$, so $M_{0,4}$ can be given the geometric structure of a thrice punctured sphere. This object is sometimes called a `bowling ball'. Notice that it is homeomorphic to the `pair of pants' from $2$-dimensional topological quantum field theory. There are three extra $4$-marked stable curves added to compactify this space, and they correspond to the three ways to add labels to the marked points in the image of a degenerate tree of $\p^1$s. As a result, $\overline{M}_{0,4}$ is isomorphic to $\p^1$. \end{remark} \begin{dfn} The \textbf{boundary} of this compactification is the set of isomorphism classes of the $n$-marked stable rational curves that are not $n$-marked rational curves. \end{dfn} For $n > 4$, the boundary will be more than a finite collection of points. Moreover, $\overline{M}_{0,n}$ can be stratified by the number of irreducible components in the curve $X$. The $n$-marked rational curves form a dense open set of smooth curves. Moreover, the closure of each stratum contains all of the more degenerate strata. Now we return to ramified covers. The set of isomorphism classes of ramified covers can be given the structure of a non-compact moduli space. This set can be realized as a dense open set in a compact moduli space, the moduli space of admissible covers. \begin{dfn} Given a ramification profile $\overline{\sigma} = \{\sigma_1,\ldots,\sigma_n\}$ of $n$ integer partitions of $d$, an admissible cover is a morphism of curves $f: D \to X$ such that \begin{itemize} \item $X$ is an $n$-marked stable rational curve, \item $f^{-1}(X_{non-sing}) = D_{non-sing}$, \item for each irreducible component $X_i$ of $X$, $f: f^{-1}(X_i) \to X_i$ is a ramified cover, \item $\sigma(Q_i) = \sigma_i$, \item if $P$ lies in the intersection of two irreducible components of $D$, then the ramification index, $m_P$, is independent of the component used to compute it, and \item at all other points $f$ is unramified. \end{itemize} Two admissible covers are isomorphic if there is an isomorphism of $D$ and $X$ that commutes with $f$. Given a ramification profile $\overline{\sigma}$ let $\overline{H}(\overline{\sigma})$ be the set of isomorphism classes of admissible covers, called a \textbf{Hurwitz space}. \end{dfn} \begin{remark} In section 3.G of \cite{HM}, Harris and Morrison argue that $\overline{H}(\overline{\sigma})$ is a compactification of the space of ramified covers. \end{remark} \begin{remark} As above, the degree $d$ is implicit in $\overline{\sigma}$, and the genus of the domain curve can be computed from $\overline{\sigma}$ using the Riemann-Hurwitz formula. If we instead dropped the requirement that all of the branch points are included in $\overline{Q}$, then this information would be nontrivial. \end{remark} \begin{remark} If $X$ is smooth, this definition restricts to our definition of a ramified cover above. If $X$ is not smooth, then this definition still gives us a collection of marked points giving the ramification profile $\overline{\sigma}$. 
In addition, the penultimate requirement tells us that there is also a consistent notion of the ramification profile over the singular points of $X$ (the nodes). In other words, each special point in the codomain of an admissible cover has a ramification profile, not just the marked points. \end{remark} While the codomain curves in admissible covers have no non-trivial automorphisms, the covers themselves may. However, the groups of automorphisms for each cover will be finite. In this situation, the appropriate structure to consider is a stack. The set $\overline{H}(\overline{\sigma})$ can be given the structure of a Deligne-Mumford stack; see for example \cite{Cavalieri}. The isomorphism classes of ramified covers form a dense open set, as with $\overline{M}_{0,n}$. There is a natural ``forgetful'' morphism, $\phi_{\overline{\sigma}}: \overline{H}(\overline{\sigma}) \to \overline{M}_{0,n}$ that sends a cover $f: D \to X$ to the $n$-marked curve $X$. Notice that the boundary of the Hurwitz space maps into the boundary of the moduli space of marked points. The inverse image under $\phi_{\overline{\sigma}}$ of a marked curve $C$ is exactly the set of branched covers of $C$ with ramification profile $\overline{\sigma}$, up to certain reorderings of the marked points. As a result, the Hurwitz numbers can be recovered geometrically from the degree of this forgetful morphism. Moreover, the fact that this map even has a degree tells us that the Hurwitz number does not depend on the isomorphism class of $X$, which is not clear from the definition. \begin{thm}\cite{ELSV,GV} Suppose $g, m$ are integers ($g \geq 0$, $m \geq 1$) such that $2g-2+m > 0$ (i.e. the functor $\overline{M}_{g,m}$ is represented by a Deligne-Mumford stack). Then $$H^g_\alpha = \frac{r!}{\# Aut(\alpha)} \prod_{i=1}^m \frac{\alpha_i^{\alpha_i}}{\alpha_i!} \int_{\overline{M}_{g,m}} \frac{1- \lambda_1 + \cdots \pm \lambda_g}{\prod (1-\alpha_i\psi_i)}$$ where $\lambda_i = c_i(\mathbb{E})$ ($\mathbb{E}$ is the Hodge bundle). \end{thm} The proof of the above theorem in \cite{ELSV} relies on the famous Lyashko-Looijenga mapping, $\mathcal{LL}$, that associates to a holomorphic function the (unordered) set of its ramification values, much like the branch morphism. The factor $Aut(\overline{\sigma})$ below simply accounts for issues arising from the situation in which multiple points have the same ramification profile. The following theorem frames the classical results in the form most analogous to the tropical results below. \begin{thm}\cite{ELSV} There is a morphism $\mathcal{LL}: H(\overline{\sigma}) \to \C^{\mu-1}$ so that $h(\overline{\sigma}) = \frac{\deg(\mathcal{LL})}{|Aut(\ol{\sigma})|}$. \end{thm} \subsection{Tropicalization} Recall from example \ref{tripodex} that our tropical analogue of $\p^1$ was a graph with a single vertex and three emanating rays. This inspires \textit{both} a dual picture to our trees of $\p^1$s above and a possible way to think of tropicalization. \begin{dfn} Consider an $n$-marked stable rational curve $X$. Create a graph $G_X$ in the following manner. For each irreducible component and marked point of $X$, create a vertex for $G_X$. If a marked point is on an irreducible component, add an edge to $G_X$ between their vertices. If two irreducible components of $X$ intersect, add an edge between their vertices. We will call $G_X$ the \textbf{dual graph of $X$}. \end{dfn}
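To fix ideas, here is the dual graph in the degenerate case pictured earlier.

\begin{example}
Let $X$ be the degenerate $4$-marked stable rational curve drawn above: two copies of $\p^1$ meeting at a point, each carrying two of the marked points. Then $G_X$ has two vertices for the two irreducible components, joined by an edge coming from their intersection, and four leaf vertices for the marked points, each joined to the component containing it. So $G_X$ is a tree with two internal vertices of degree $3$ and four leaves.
\end{example}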
\begin{remark} Notice that the stability condition implies that the vertices corresponding to irreducible components have degree at least $3$. This condition will still be called stability in the following chapters. Also notice that the definition of a tree of $\p^1$s tells us that the image is connected and has no circuits; in other words, it is a tree. \end{remark} We now compare the image of the dualization map to Mikhalkin's moduli space of marked tropical rational curves, $\M_{0,n}$. Recall that a tropical curve is a metric graph with a complete metric on the complement of the 1-valent vertices (\ref{tropcurvedfn}). \begin{dfn} The genus of a tropical curve $X$ is $g= \dim(H^1(X))$. \end{dfn} \begin{remark} So, the connected, genus zero tropical curves will have no circuits, meaning that they are trees. \end{remark} \begin{remark}\cite{MikM} Adding an unbounded edge to a tropical curve is the tropical analogue of deleting a point from the dual classical curve. This is an example of a more general technique, called \textbf{tropical modification}. See \cite{MikM} for more detail in higher dimensions. As a result, we can think of adding unbounded edges to a tropical curve as the analogue of marking points on a classical curve; the unbounded edges play the role of the marked points. \end{remark} \begin{dfn} Let $\M_{g,n}$ be the set of tropical curves with genus $g$ and $n$ distinct marked points (meaning $n$ distinct unbounded edges). \end{dfn} \begin{remark} The only automorphism of a tree that fixes all of its leaves is the identity. To avoid nontrivial automorphisms, we, like Mikhalkin, restrict ourselves to the case $g=0$. In addition, the moduli spaces of higher genus curves are much less well understood. \end{remark} Mikhalkin then gives a description of $\M_{0,n}$ as a polyhedral complex. \begin{dfn}\label{combtype} The \textbf{combinatorial type} of a tropical curve with $n$ marked points is its equivalence class up to homeomorphisms respecting the markings. \end{dfn} The combinatorial types partition the set $\M_{0,n}$ into disjoint subsets. The edge-length functions for the finite edges give the subset of $\M_{0,n}$ with a given combinatorial type the structure of an integral polyhedral cone $\R^M_{>0}$, where $M$ is the number of finite edges, because each of the lengths must be positive. If $G$ is a genus $0$ curve with only degree $1$ and $3$ vertices, then $G$ has $n-3$ bounded edges (and fewer otherwise). Furthermore, any face of the closure of this cone, $\R^M_{\geq0}$, gives a cone corresponding to another combinatorial type, the type in which some of the edges of the curve have been contracted. This gives an adjacency structure on $\M_{0,n}$, making it a polyhedral complex. \begin{thm} The set $\M_{0,n}$ for $n \geq 3$ admits the structure of an $(n-3)$-dimensional tropical variety such that the edge-length functions are regular within each combinatorial type. Furthermore, $\M_{0,n}$ can be tropically embedded in $\R^N$ for some $N$, meaning that $\M_{0,n}$ can be presented as a simply balanced complex. \end{thm} Although tropicalization is not understood fully, there are some relationships known for the map $\overline{M}_{0,n} \to \M_{0,n}$. \begin{example} Classically, $\overline{M}_{0,4}$ is isomorphic to $\p^1$, which we saw above. Mikhalkin's construction gives $\M_{0,4}$ the structure of three rays emanating from a point. This is exactly our candidate for the tropical analogue of $\p^1$!
Moreover, each of the rays in $\M_{0,4}$ is associated to one of the three points in the boundary of $\overline{M}_{0,4}$. The position on the ray corresponds to the length of the bounded edge in the dual graph to these degenerate trees of $\p^1$s. \end{example} For now, this discussion allows us to think of tropicalization as dualization. Notice that dualization is a ``degeneration reversing'' map between the classical and tropical moduli spaces of marked curves. \begin{center} \psset{xunit=1.0cm,yunit=1.0cm,algebraic=true,dotstyle=o,dotsize=3pt 0,linewidth=0.8pt,arrowsize=3pt 2,arrowinset=0.25} \begin{pspicture*}(-0.1,-2.1)(4.1,1.1) \psline(0,0)(1,1) \psline(.15,.25)(.25,.15) \psline(.35,.45)(.45,.35) \psline(.55,.65)(.65,.55) \psline(.75,.85)(.85,.75) \psline{->}(1.1,0.5)(1.9,0.5) \psline(2,0)(3,1) \psline(2.15,.25)(2.25,.15) \psline(2.35,.45)(2.45,.35) \psline(2.8,1)(3.8,0) \psline(3.55,.15)(3.65,.25) \psline(3.35,.35)(3.45,.45) \psline{<-}(1.1,-1.5)(1.9,-1.5) \psline(0,-1.5)(1,-1.5) \psline(0.5,-2)(0.5,-1) \psline(2.6,-1.5)(3.2,-1.5) \psline(2.6,-1.5)(2,-1) \psline(2.6,-1.5)(2,-2) \psline(3.2,-1.5)(3.8,-1) \psline(3.2,-1.5)(3.8,-2) \end{pspicture*} \end{center} \begin{prop} The map $X \to G_X$, called dualization above, gives a bijection between the boundary strata of $\overline{M}_{0,n}$ and the combinatorial types of $n$-marked, genus $0$ tropical curves. Moreover, if $Y$ is a stratum of the boundary of $\overline{M}_{0,n}$ and $Y'$ is a stratum in the closure of $Y$, then the combinatorial type of $G_Y$ is a degeneration of the combinatorial type of $G_{Y'}$. \end{prop} \begin{proof} The most general curves classically are smooth and hence dualize to a graph with a single vertex of degree $n$ with $n$ unbounded edges. This tropical curve is a degeneration of all combinatorial types. The most degenerate classical curves have exactly three special points on each irreducible component, be they marked points or intersections. The dual of such a degenerate classical curve will be a tree in which every vertex is degree $1$ or $3$, a condition that we will later call \textit{trivalence}. Classical degeneration involves the collision of at least two special points from a component with at least $4$ special points; these colliding points end up on a new irreducible component. The dualization of this process picks a vertex of degree at least $4$, splits its edges between two vertices and adds an edge between these vertices. This description is exactly the reverse of a degeneration of a stable tropical curve. \end{proof} \begin{remark} All of the positive dimensional cones in Mikhalkin's construction already correspond to the boundary of the classical moduli space. As a result, the polyhedral complex constructed below will be unbounded and hence not compact. \end{remark} Finally, we are ready to define tropical ramified covers. Taking the analogy from classical geometry, we will want to talk about tropical covers of stable graphs with ramification profiles at the degree $1$ vertices (the marked points). In addition, classical admissible covers allowed for a ramification profile at the intersection point. This point has become the edge between the high-degree vertices, so we will also want to assign ramification profiles to each of them. \begin{dfn} Consider $\overline{\sigma} = \{\sigma_1,\ldots,\sigma_n\}$, where each $\sigma_i$ is an integer partition of $d$.
Pick a tropical curve $G$ with $n$ marked points; then a \textbf{tropical ramified cover} with ramification profile $\overline{\sigma}$ will be a copy of $G$ with the following integer partitions associated to edges of $G$. Associate $\sigma_i$ to the $i^{\mbox{th}}$ marked point; also assign an integer partition of $d$ to each of the finite edges of $G$. \end{dfn} \begin{remark} This definition of a tropical ramified cover lists the codomain curve and the ramification profile, which seems substantively different from the classical definition of a ramified cover because we do \textit{not} actually include the tropical covering map. This leaves open the question of whether the information included above is sufficient for specifying a more analogous definition of a ``tropical admissible cover'' and whether that more geometric version of a ``tropical admissible cover'' can be realized as the tropicalization of the classical cover with the same data. The authors of \cite{CJM} give such a definition in the case of only two specified marked points, and their computations do use exactly the information that I specify here. In the maximally degenerate situation of a trivalent graph for the tropical curve, each internal vertex has exactly three incident edges, meaning three ramification points. There is a unique such cover classically, and the required gluing is also specified for us, so this claim seems reasonable. However, it \textit{is} clear how to get one of our tropical ramified covers from an admissible cover. Simply take the marked stable curve that is its codomain and dualize it to a graph. To each marked point, assign the integer partition given by the ramification profile over that point. To each finite edge, associate the ramification profile coming from the ramification over the associated singular point (the node corresponding to that edge). \end{remark} \textit{In summary:} There is already a tropical moduli space of genus $0$ curves with $n$ marked points, $\M_{0,n}$, which was created by Mikhalkin (\cite{MikM}). The goal of this document is, for each ramification profile $\overline{\sigma}$, to give the set of tropical ramified covers the structure of a moduli space, $\mathcal{H}(\overline{\sigma})$, such that \begin{itemize} \item $\mathcal{H}(\overline{\sigma})$ is a connected, polyhedral complex, and \item $\mathcal{H}(\overline{\sigma})$ has a natural morphism of polyhedral complexes $\phi: \mathcal{H}(\overline{\sigma}) \to \M_{0,n}$ such that $\deg(\phi)$ encodes the Hurwitz number $h(\overline{\sigma})$. \end{itemize} \chapter{Preliminaries} \textit{We establish the definitions and lemmas needed for the original mathematics in the following chapter.} \section{Graphs} \textit{Having reframed the discussion of tropical curves in the previous section in terms of metric graphs, we must solidify the foundations of our perspective in the language of graph theory.} \begin{notation} A graph is an ordered pair $G= (V,E)$, with $V$ a finite set (called the \textbf{vertices}) and $E$ a set of (unordered) pairs of elements from $V$ (called the \textbf{edges}). Note that we are disallowing ``multiple edges'' in graphs. The vertices appearing in an edge are called its \textbf{endpoints}. A \textbf{tree} is a connected graph without circuits. The \textbf{degree} of a vertex $v \in V$ is the number of times $v$ appears as an endpoint of elements in $E$. Vertices of degree $1$ are called \textbf{leaves}.
\end{notation} \begin{dfn} Edges containing a leaf will be called \textbf{unbounded edges}; the other edges will be called \textbf{bounded edges}. Vertices that are not leaves will be called \textbf{internal vertices}. \end{dfn} \begin{thm}\label{leavesexist} Any tree with at least one edge has at least two leaves. \end{thm} \begin{proof} Notice that removing any edge from $T$ disconnects it into two trees. Using this observation, the standard result can be shown by strong induction on the number of edges in the tree, starting with the unique tree with $1$ edge (and two leaves). \end{proof} \begin{notation}\label{partitionpi} For a tree $T$, let $L(T)$ be the set of leaves of $T$. For any edge $e$ of $T$, let $\pi(e) = \{S(e),S'(e)\}$ be the (set) partition of $L(T)$ obtained by deleting $e$ from $T$ and partitioning the leaves by the connected component in which they lie. \begin{remark} Notice also that \ref{leavesexist} implies that neither $S(e)$ nor $S'(e)$ can be empty. Either $e$ is an unbounded edge, in which case it clearly separates one leaf from the others, or $e$ is bounded, and the components have other edges and hence at least $2$ leaves each. \end{remark} Let $T = (V,E)$ and $T' = (V',E')$ be trees, and let $f: V \to V'$ be a bijection. Then the function $f$ extends to subsets of $L(T)$, writing $f(\pi(e)) = \{f(S(e)),f(S'(e))\}$. \end{notation} \begin{dfn} A graph is said to be \textbf{stable} if it has no vertices of degree $2$. \end{dfn} \begin{remark} This definition of stable is the tropicalization (dualization) of the classical definition of stability. In addition, degree $2$ vertices allow for an infinite number of metric structures, akin to the infinite number of automorphisms of $\p^1$ with only $2$ marked points. \end{remark} \begin{lemma}\label{distinctpartitions} Let $T = (V,E)$ be a stable tree with $e, e' \in E$. If $e \neq e'$, then $\pi(e) \neq \pi(e')$. \end{lemma} \begin{proof} If $|E| = 1$, then the lemma is vacuously true. Otherwise, pick two distinct edges $e$ and $e'$ in $E$. Find a path in $T$ containing $e$ and $e'$ as follows. If you remove $e$ from $T$, one of its endpoints is not in the component of what remains that contains $e'$; call this endpoint $v$. Similarly, let $w$ be the endpoint of $e'$ not in the component with $e$. Then the (unique) path from $v$ to $w$, $P_{(v,w)}$, contains both $e$ and $e'$. Now I will show that the partitions associated to edges in a path are all distinct, showing that $\pi(e) \neq \pi(e')$. Consider $\pi(e) = \{S(e),S'(e)\}$, labeled so that $S(e)$ comes from the component containing $e'$. Let $v_1$ be the other endpoint of $e = \{v,v_1\}$; since the path continues through $v_1$ toward $w$, the vertex $v_1$ is not a leaf, and because $T$ is stable, $\deg(v_1) \geq 3$. So there is at least one edge coming from $v_1$ that is not included in $P_{(v,w)}$; call such an edge $\hat{e}_1$. Let $e_1$ be the edge of $P_{(v,w)}$ that follows $e$ at $v_1$. For each such $\hat{e}_1$, let $S(\hat{e}_1)$ be the part of $\pi(\hat{e}_1)$ coming from the component not containing $P_{(v,w)}$. Then $S(\hat{e}_1)$ is a subset of $S(e)$. Moreover, $$\pi(e_1) = \{S(e) \setminus \left(\cup S(\hat{e}_1)\right),S'(e) \cup \left(\cup S(\hat{e}_1)\right)\},$$ where the unions are taken over all such edges $\hat{e}_1$ at $v_1$. The figure below shows this in a simple case.
\begin{center} \psset{xunit=1.0cm,yunit=1.0cm,algebraic=true,dotstyle=*,dotsize=5pt 0,linewidth=0.8pt,arrowsize=3pt 2,arrowinset=0.25} \begin{pspicture*}(-5.5,-.1)(5,2) \psline(-4.5,0)(-1.5,0) \psline(-1.5,0)(1.5,0) \psline(1.5,0)(4.5,0) \psline(-1.5,0)(-1.5,1.3) \psline(1.5,0)(1.5,1.3) \psdots(-4.5,0) \rput[bl](-4.4,0.2){$v$} \psdots(-1.5,0) \rput[bl](-1.4,0.2){$v_1$} \psdots(1.5,0) \psdots(4.5,0) \rput[bl](4.6,0.2){$w$} \rput[bl](-3,0.2){$e$} \rput[bl](0,0.2){$e_1$} \rput[bl](3,0.2){$e'$} \psdots(-1.5,1.3) \rput[bl](-1.9,0.6){$\hat{e}_1$} \psdots(1.5,1.3) \rput[tl](-5.5,.3){$S'(e)$} \rput[tl](-1.5,1.85){$S(\hat{e}_1)$} \end{pspicture*} \end{center} In short, at each step along the path $P_{(v,w)}$, some leaves move from $S$ to $S'$, and they move only in that direction. The only remaining concern is that we will somehow end up with a new partition with the roles of $S$ and $S'$ switched. However, the leaves in $S'(e)$ never leave $S'$, so there is no chance that $S$ and $S'$ switch roles. \end{proof} \begin{dfn}\label{graphisodfn} Let $G_1 = (V_1,E_1)$ and $G_2 = (V_2,E_2)$ be graphs. A \textbf{morphism} $F: G_1 \to G_2$ is a function $F_V: V_1 \to V_2$ such that $$F_E(\{v,w\}) := \{F_V(v),F_V(w)\}$$ is a function $F_E: E_1 \to E_2$. A morphism $F$ is an \textbf{isomorphism} if $F_V$ and $F_E$ are bijections. \end{dfn} \begin{dfn}\label{topotype} The \textbf{topological type} of a stable graph is its isomorphism class in the sense of \ref{graphisodfn}. \end{dfn} \begin{prop}\label{graphisomorph} Let $T_1 = (V_1,E_1)$ and $T_2 = (V_2,E_2)$ be stable trees, and let $f: L(T_1) \to L(T_2)$ be a bijection. Suppose that \begin{equation} \{\pi(e_2) | e_2 \in E_2\} = \{f(\pi(e_1)) | e_1 \in E_1\}. \end{equation} Then there exists an isomorphism of graphs, $F: T_1 \to T_2$, such that the restriction of $F_V$ to $L(T_1)$ equals $f$. In other words, $f$ can be extended to an isomorphism of graphs. \end{prop} \begin{proof} First notice that \ref{distinctpartitions} tells us that the partitions coming from edges within a single graph are distinct. Under the hypotheses of equation $(2.1)$, there is a pairing of the partitions from the two graphs. This implies that $|E_1| = |E_2|$. We will prove this lemma by induction on $n = |E_1|$, the number of edges of $T_1$. Suppose $n = 1$; then $T_1$ and $T_2$ are both the unique tree with one edge. This tree has exactly two vertices, both of which are leaves. So $F_V := f $ is already a bijection on the full set of vertices. By inspection, the bijection $F_V$ does induce the map $F_E$ that sends the one edge of $T_1$ to the one edge of $T_2$ and hence $F$ is an isomorphism of the trees. Let $n \in \N$ with $n \geq 2$ and suppose, for any $j \in \N$ with $j < n$, the statement of our theorem is true for trees with $j$ edges. Let $T_1$ and $T_2$ be two trees satisfying equation $(2.1)$ such that $T_1$ has $n$ edges. Recall that each unbounded edge partitions the leaves of a tree into a singleton and the remaining leaves. As a result, if $T_1$ does not have any bounded edges, then neither does $T_2$. In this case, they are both the star with $n$ edges, and these stars are clearly isomorphic in a way that extends the bijection $f$ on the leaves. Otherwise, choose a bounded edge $e_1 = \{v_1,v_2\}$ of $T_1$. Add two new vertices, $\hat{v}_1$ and $\hat{v}_2$ to $V_1$; remove $e_1$ from $E_1$ and replace it with two new edges, $\{v_1,\hat{v}_1\}$ and $\{v_2,\hat{v}_2\}$. This is essentially breaking the edge $e_1$ into two unbounded edges. The resulting graph is the union of two trees.
Label these trees $T_{1,1}$ and $T_{1,2}$, with the leaves $S(e_1) \cup \{\hat{v}_1\}$ in $T_{1,1}$ and the leaves $S'(e_1) \cup \{\hat{v}_2\}$ in $T_{1,2}$. \begin{center} \psset{xunit=1.0cm,yunit=1.0cm,algebraic=true,dotstyle=*,dotsize=3pt 0,linewidth=0.8pt,arrowsize=3pt 2,arrowinset=0.25} \begin{pspicture*}(-1.1,-0.1)(9.4,2.5) \psline(0,0)(-0.49,0.87) \psline(0,0)(1,-0.01) \psline(0,0)(0.51,0.86) \psline(0.51,0.86)(0.88,1.79) \psline(0.51,0.86)(1.5,1) \psline(1.5,1)(2.42,1.4) \psline(1.5,1)(1.98,0.12) \psline(2.42,1.4)(2.38,2.4) \psline(2.42,1.4)(3.23,1.99) \psline(0,0)(-1,-0.01) \psline(6,0)(5,-0.01) \psline(6,0)(5.51,0.87) \psline(6,0)(7,-0.01) \psline(6,0)(6.51,0.86) \psline(6.51,0.86)(6.88,1.79) \psline(7.5,1)(8.42,1.4) \psline(7.5,1)(7.98,0.12) \psline(8.42,1.4)(8.38,2.4) \psline(8.42,1.4)(9.23,1.99) \psline{->}(3.3,1)(4.3,1) \psline(6.49,0.87)(6.83,0.9) \psline(7.12,0.95)(7.5,1) \rput[tl](-0.61,2.3){$T_1$} \rput[tl](5.6,1.59){$T_{1,1}$} \rput[tl](8.73,1.09){$T_{1,2}$} \psdots(0,0) \psdots(1.5,1) \psdots(-0.49,0.87) \psdots(1,-0.01) \psdots(0.88,1.79) \psdots(2.42,1.4) \rput[bl](.95,0.55){$e_1$} \rput[bl](6.75,0.4){$\hat{v}_1$} \rput[bl](7,1.1){$\hat{v}_2$} \psdots(1.98,0.12) \psdots(2.38,2.4) \psdots(3.23,1.99) \psdots(0,0) \psdots(0.49,0.87) \psdots(0,0) \psdots(-1,-0.01) \psdots(0,0) \psdots(6.49,0.87) \psdots(5,-0.01) \psdots(5.51,0.87) \psdots(6,0) \psdots(7,-0.01) \psdots(6,0) \psdots(6.88,1.79) \psdots(7.5,1) \psdots(8.42,1.4) \psdots(7.5,1) \psdots(7.98,0.12) \psdots(8.42,1.4) \psdots(8.38,2.4) \psdots(8.42,1.4) \psdots(9.23,1.99) \psdots(6.83,0.9) \psdots(7.12,0.95) \end{pspicture*} \end{center} Here is another way to think of this construction: the tree $T_{1,j}$ was formed by crushing all of $T_1$ separated from $v_j$ by the edge $e_1$ down to a single point and calling that crushed point $\hat{v}_j$. By hypothesis, there is an edge $e_2$ such that $\pi(e_2) = f(\pi(e_1))$. Notice that $e_2$ must also be bounded; write $e_2 = \{w_1,w_2\}$. As with $T_1$, break $e_2$ into two unbounded edges by adding $\hat{w}_1$ and $\hat{w}_2$ to $V_2$, labelled (along with $w_1$ and $w_2$) so that $\hat{w}_1$ lies in the component containing the leaves $f(S(e_1))$. Then delete $e_2$ from $E_2$ and add $\{w_1,\hat{w}_1\}$ and $\{w_2,\hat{w}_2\}$ to $E_2$. Label these trees $T_{2,1}$ and $T_{2,2}$, with the leaves $f(S(e_1)) \cup \{\hat{w}_1\}$ in $T_{2,1}$ and the leaves $f(S'(e_1)) \cup \{\hat{w}_2\}$ in $T_{2,2}$. Notice that $f$ restricts to bijections $f_i: L(T_{1,i}) \setminus \{\hat{v}_i\} \to L(T_{2,i}) \setminus \{\hat{w}_i\}$; extend these functions to bijections $\hat{f}_i: L(T_{1,i}) \to L(T_{2,i})$ by setting $\hat{f}_i(\hat{v}_i) = \hat{w}_i$. If equation $(2.1)$ for $T_1$ and $T_2$ descends to equation $(2.1)$ for each of the pairs $T_{1,j}$ and $T_{2,j}$, then we will be able to use the inductive hypothesis. Notice that, for any edge $e \neq e_1$ in $E_1$, $\pi(e)$ is related to $\pi(e_1) = \{S(e_1),S'(e_1)\}$. If $e$ is in $T_{1,1}$, then one part of $\pi(e)$ is a subset of $S(e_1)$ and the other part contains all of $S'(e_1)$. A similar discussion holds for any of the other three new trees. This implies that the original pairing given by equation $(2.1)$ splits into two pairings for the new trees. These original partitions are not maintained in the new trees; there are different leaves. However, the relation is straightforward. Consider an edge $e$ from $T_1$, but think of it in $T_{1,j}$. I claim that we can compute $\pi(e)_{T_{1,j}}$ from $\pi(e)$ as follows.
Notice that all of $L(T_{1,3-j}) \setminus \{\hat{v}_{3-j}\}$ (the leaves of $T_1$ lying in the other subtree) lies in one part of the partition $\pi(e)$. To compute $\pi(e)_{T_{1,j}}$, simply find this collection of leaves and replace it with $\hat{v}_j$. A similar discussion holds for the edges of $T_{2,j}$. This shows that equation $(2.1)$ does descend to the appropriate equations for the pairs of new trees. In short, removing these edges gives two pairs of sub-trees satisfying all of the hypotheses of the theorem. Moreover, each subtree $T_{1,i}$ has fewer edges than $T_1$, so by strong induction, each $\hat{f}_i$ extends to an isomorphism $F_i: T_{1,i} \to T_{2,i}$. Taken together, these isomorphisms are a bijection $F_V: V_1 \cup \{\hat{v}_1,\hat{v}_2\} \to V_2 \cup \{\hat{w}_1,\hat{w}_2\}$ that induces a bijection $\hat{F}_E: (E_1 \cup \{\{v_1,\hat{v}_1\}, \{v_2,\hat{v}_2\}\}) \setminus \{e_1\} \to (E_2 \cup \{\{w_1,\hat{w}_1\},\{w_2,\hat{w}_2\}\}) \setminus \{e_2\}$. Notice that $F_V(\hat{v}_i) = \hat{w}_i$; since the only edge of $T_{2,i}$ containing $\hat{w}_i$ is $\{w_i,\hat{w}_i\}$, it follows that $F_V(v_i) = w_i$, so that the edge $\{v_i,\hat{v}_i\}$ is mapped correctly. Hence $F_E(e_1) = F_E(\{v_1,v_2\}) = \{F_V(v_1),F_V(v_2)\} = \{w_1,w_2\} = e_2$. Thus, restricting $F_V$ to $V_1$, we obtain an isomorphism $F: T_1 \to T_2$. \end{proof} \begin{dfn}\label{degeneration} Consider a graph $G = (V,E)$ with a bounded edge $e = \{v_1,v_2\}$. Build a new graph $G' = (V',E')$ as follows. \begin{itemize} \item The set $V'$ is built from $V$ by removing $v_2$. \item The set $E'$ is built from $E$ by removing $e$ and then replacing all instances of $v_2$ in edges with $v_1$. \end{itemize} Essentially, the graph $G'$ is built from $G$ by shrinking the edge $e$ to a point. As a result, we say that $G'$ is the \textbf{degeneration} of $G$ by $e$. \end{dfn} \begin{remark} We chose the term degeneration because of the parallel to classical geometry, but this graph-theoretic construction is also called \textit{contraction}. Degeneration of a graph by the edge $e$ is defined by removing one of the endpoints of $e$ and \textit{renaming} certain endpoints. Instead, this definition could have been framed in terms of \textit{identifying} the two endpoints of $e$. As a result we can talk about degenerating a graph by multiple edges simultaneously without worrying about the order of operations. \end{remark} \begin{dfn} A graph is said to be \textbf{trivalent} if all of its vertices have degree $1$ or $3$. \end{dfn} \begin{remark} The trivalent graphs are the duals of the maximally degenerate classical curves but are the most general tropical curves. \end{remark} \begin{lemma}\label{n-3} Let $G$ be a trivalent tree with $|L(G)| = n$. If $n \geq 3$, then $G$ has $n-3$ bounded edges. \end{lemma} \begin{proof} The simplest trivalent tree has exactly three (unbounded) edges meeting at a point. The claim is true for this tree. Any trivalent tree can be built from this tree by inserting a point in the middle of an existing edge and adding a new unbounded edge and leaf at the new internal vertex. This process adds a bounded edge and a leaf at each step, so the relationship remains true for all trivalent trees. Alternately, noticing that any tree is planar allows us to show this result using the Euler Characteristic Theorem. \end{proof} \begin{lemma}\label{twoleaftripod} In any trivalent tree $G$ with $n \geq 3$ leaves, there is an internal vertex in $G$ where two distinct unbounded edges meet. \end{lemma} \begin{proof} Consider the subgraph, $B$, formed from the bounded edges and internal vertices. The graph $B$ is still connected and hence a tree.
If $B$ is a single vertex, then that vertex is the intersection of three unbounded edges in $G$. Otherwise, $B$ is a tree with at least one edge and hence has at least two leaves. Each such leaf of $B$ has degree $1$ in $B$ but degree $3$ in $G$, so the remaining two edges at that vertex in $G$ must be unbounded. \end{proof} \begin{lemma}\label{stabledegen} Let $G$ be a graph. If $G$ is trivalent or stable, then any degeneration of $G$ is stable. \end{lemma} \begin{proof} We only allow degenerations by bounded edges, so the degree of a leaf will never change. When degenerating by the bounded edge, $e = \{v,w\}$, the degree of the identified vertex, $v=w$, in the degeneration will be $\deg(v) + \deg(w) - 2 \geq 3 + 3 - 2 = 4$. \end{proof} \section{Polyhedral Complexes} \textit{We are not going to be able to embed our polyhedral complexes in a single affine space. As a result, we will not be able to show that the combinatorial object built is a tropical variety. We now state a (generalized) definition of a polyhedral complex that does not need to be embedded in a single affine space.} \begin{dfn} Let $C$ be a topological space and $P$ a closed subset of $C$. A \textbf{chart} for $P$ is a homeomorphism of $P$ with a closed, possibly unbounded lattice polyhedron $X$ in $\R^j$. \end{dfn} \begin{notation} Let $X \subset \R^j$ be a lattice polyhedron. Let $V_X$ be the smallest affine space containing $X$, that is, $V_X = x_0 + \mathrm{span}\{x - x' \mid x,x' \in X\}$ for any choice of $x_0 \in X$. \end{notation} \begin{dfn} Let $X \subset \R^j$ and $Y \subset \R^k$ be lattice polyhedra. Two charts for $P$, $c_X: X \to P$ and $c_Y: Y \to P$, are \textbf{equivalent} if there is an isomorphism of affine spaces $f: V_X \to V_Y$ such that $f(X) = Y$ and $c_X = c_Y \circ f$. We say $c_X$ and $c_Y$ are \textbf{lattice-equivalent} if $f$ restricts to an (affine) isomorphism of $V_X \cap \Z^j$ with $V_Y \cap \Z^k$. Note that such an $f$ is unique if it exists because $X$ spans $V_X$. \end{dfn} \begin{dfn}\label{polycomplex} A \textbf{polyhedral complex}, $C$, is a topological space, together with a collection of closed cells $\overline{P} = \{P_i|i \in I\}$ and for each $i$ a lattice-equivalence class of charts $\overline{c} = \{c_i: X_i \to P_i|i \in I\}$ for some closed lattice polyhedra $X_i$ such that: \begin{itemize} \item $C$ is the union of the $P_i$, \item for each $i,j \in I$, the intersection $P_i \cap P_j$ is equal to $P_k$ for some $k \in I$, \item if $Y$ is a face of $X_i$, then there exists $j$ such that $c_i(Y) = P_j$, and \item if two charts have the same image, $P_i = P = P_j$, then $c_i: X_i \to P$ and $c_j: X_j \to P$, are lattice-equivalent. \end{itemize} The elements of $\overline{P}$ will be called the polyhedral cells of $C$. \end{dfn} \begin{dfn} If $c: X \to P$ is a chart and $Y$ is a face of $X$, then we say that the polyhedral cell $c(Y)$ is a \textbf{face} of $P$. \end{dfn} \begin{dfn} A \textbf{morphism} of polyhedral complexes is a continuous function $\phi: C \to C'$ such that \begin{itemize} \item any polyhedral cell $P_i$ of $C$ maps entirely into a polyhedral cell $P_j'$ of $C'$ with \item $c_j'^{-1} \circ \phi \circ c_i: X_i \to X_j'$ affine and integral where defined. \end{itemize} \end{dfn} \begin{dfn} A polyhedral cell $P$ with chart $c: X \to P$ is said to be \textbf{dimension $k$} if the lattice polyhedron $X$ is dimension $k$. A polyhedral complex is said to have \textbf{dimension $k$} if $k$ is the biggest dimension of any of its polyhedral cells and any polyhedral cell is a face of a polyhedral cell of dimension $k$. \end{dfn}
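As a quick illustration of these definitions, the tripod of \ref{tripodex} fits into this framework.

\begin{example}
The tripod of \ref{tripodex} can be viewed as a polyhedral complex of dimension $1$: its polyhedral cells are the origin and the three closed rays, each ray being the image of a chart from the lattice polyhedron $[0,\infty) \subset \R$, and the origin being a face of each ray. Every cell is a face of a $1$-dimensional cell, so the complex has dimension $1$.
\end{example}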
\end{dfn} \begin{dfn} Let $\phi: C \to C'$ be a morphism of polyhedral complexes of dimension $k$. Pick a point $p$ in the interior of a $k$-dimensional polyhedral cell, $P_i$. Then there exists a polyhedral cell $P_j'$ of $C'$ such that $c_j'^{-1} \circ \phi \circ c_i: X_i \to X_j'$ is affine and integral where defined. There is a lattice $\Lambda_i$ in $X_i$ and a lattice $\Lambda_j'$ in $X_j'$. Let $\ind(\phi)_p$ be $0$ if the dimension of $P_j'$ is not $k$ and otherwise let $\ind(\phi)_p = [c_j'^{-1} \circ \phi \circ c_i(\Lambda_i):\Lambda_j']$. \end{dfn} \begin{dfn} A polyhedral complex of dimension $k$ will be said to be \textbf{weighted} if each polyhedral cell of dimension $k$ is assigned a rational number. The weight of a polyhedral cell $P_i$ will usually be denoted by $w(P_i)$. \end{dfn} \begin{remark} Although the weights are associated to top-dimensional polyhedral cells, we can think of the weight as a function that is only defined on the interiors of the top-dimensional polyhedral cells. \end{remark} \begin{dfn} Let $C$ and $C'$ be polyhedral complexes and $\phi: C \to C'$ a morphism of polyhedral complexes of the same dimension. Suppose $C$ is weighted; the weight of the polyhedral cell $P_i$ will be written $w(P_i)$. Pick $q \in C'$ in the interior of a top-dimensional polyhedral cell. Define $$\deg(\phi)_q = \sum_{\begin{array}{c}p \in C\\\phi(p) = q\end{array}} w(p) \cdot \ind(\phi)_p.$$ If the sum is independent of the choice of $q$, then we say that the \textbf{degree} of $\phi$ is this constant value, denoted $\deg(\phi)$. \end{dfn} \section{Orthogonal Basis Lemma} \textit{The previous section shows us that computing the degree of a morphism will require us to simplify a sum. The following lemma will allow us to do that simplification later in the proof of the main theorem. This lemma is a special case of a general lemma about bilinear pairings with an orthogonal basis.} \begin{prop}\label{states} Let $\omega_1, \omega_2 \in Z(\R[S_d])$. Then $$\tr(\omega_1\omega_2) = \sum_{\nu} \frac{1}{|K_\nu|}\tr(\omega_1 K_\nu)\tr(K_\nu \omega_2),$$ where the sum is taken over integer partitions $\nu$ of $d$. \end{prop} \begin{proof} First, write $\omega_i = \sum_\alpha \omega_{i,\alpha} K_\alpha$, with $\omega_{i,\alpha} \in \R$ and $\alpha$ running over the integer partitions of $d$.
Then $\sum_\nu \frac{1}{|K_\nu|}\tr(\omega_1K_\nu)\tr(K_\nu \omega_2)$ $$\begin{array}{lll} = & \sum_\nu \frac{1}{|K_\nu|}(\sum_\alpha \omega_{1,\alpha} \tr(K_\alpha K_\nu))(\sum_\beta \omega_{2,\beta} \tr(K_\nu K_\beta)) & \mbox{linearity}\\ = & \sum_{\nu} \frac{1}{|K_\nu|}(\omega_{1,\nu} \tr(K_\nu K_\nu))(\omega_{2,\nu} \tr(K_\nu K_\nu)) & \mbox{orthogonality}\\ = & \sum_{\nu} \frac{1}{|K_\nu|}(\omega_{1,\nu} \omega_{2,\nu}) (|K_\nu|)^2 & \mbox{\ref{conjsize}}\\ = & \sum_{\nu} (\omega_{1,\nu} \omega_{2,\nu}) |K_\nu| & \mbox{}\\ = & \sum_{\nu,\epsilon} (\omega_{1,\nu} \omega_{2,\epsilon}) \tr(K_\nu K_\epsilon) & \mbox{orthogonality}\\ = & \tr( \sum_{\nu,\epsilon} (\omega_{1,\nu} \omega_{2,\epsilon}) K_\nu K_\epsilon) & \mbox{linearity}\\ = & \tr(\omega_1 \omega_2) & \mbox{}\\ \end{array}$$ \end{proof} \chapter{The Construction} \textit{In this chapter, we construct a closed (but unbounded) polyhedral complex with a morphism to the tropical moduli space of marked curves and show that the degree of this morphism captures information about the Hurwitz numbers.} \section{Constructing the Polyhedral Complex, $\mathcal{H}(\overline{\sigma})$} \textit{In this section, we specify a union of closed cones in real vector spaces and give modular meaning to points in these cones related to covers of marked tropical curves. Then we identify the points on the cones for which the modular meanings agree. Finally, we assign weights to the top-dimensional polyhedral cells.} \begin{dfn}\label{fulllabel} Fix $n \geq 3$ in $\N$, and $d \in \N$. Let $G$ be a topological type (\ref{topotype}) of trivalent trees with $n$ leaves and $m = n-3$ bounded edges (\ref{n-3}). Label the unbounded edges of $G$ by distinct elements of the set $\overline{\lambda} = \{\lambda_1,\ldots,\lambda_n\}$. For each edge, $\lambda_i$, pick an integer partition $\sigma_i$ of $d$; call this collection of choices $\overline{\sigma}$. Label the bounded edges of $G$ by distinct elements of the set $\overline{E} = \{e_1, \ldots, e_m\}$. For each edge, $e_i$, pick an integer partition $\nu_i$ of $d$; call this collection of choices $\overline{\nu}$. Let $G(\overline{\lambda},\overline{E})$ represent only the choice of the labelings on $G$. Let $G(\overline{\lambda},\overline{\sigma},\overline{E},\overline{\nu})$ represent this collection of choices, both the labelings and the associated integer partitions of $d$. \end{dfn} \begin{note} Under the hypotheses in \ref{fulllabel}, each unbounded edge of $G$ contains a single leaf. As a result, we can also think of $\overline{\lambda}$ as a labeling of the elements of $L(G)$, the leaves of $G$. \end{note} The next definition will show how to interpret $G(\overline{\lambda},\overline{E})$ as a function from $\R^m_{\geq0}$ (the closure of the positive orthant in $\R^m$) to the set of metric graphs (with labeled leaves). In other words, this function gives modular meaning to the points in $\R^m_{\geq0}$. \begin{dfn} For any labelled topological type of trivalent trees, $G(\overline{\lambda},\overline{E})$, and point $p = (p_1,\ldots,p_m) \in \R^m_{\geq0}$, construct the following metric graph. \begin{itemize} \item If there are any $p_i = 0$, then degenerate $G$ by $e_i$ in the sense of \ref{degeneration}. \item For any $p_i > 0$, assign length $p_i$ to edge $e_i$. 
\end{itemize} Notice that $\overline{\lambda}$ descends to a labeling of the unbounded edges (and leaves) of this new graph; notice also that the elements of $\overline{E}$ that were not degenerated also descend to a labeling of the bounded edges of this new graph. Call this labelled metric graph $G(\overline{\lambda},\overline{E})(p)$. \end{dfn} \begin{remark} Notice that degeneration will never change the genus of a tropical curve. \end{remark} \begin{notation} We will sometimes suppress the labelings of the edges in the notation $G(\overline{\lambda},\overline{\sigma},\overline{E},\overline{\nu})$ because they are universal, writing instead $G(\overline{\sigma},\overline{\nu})$. \end{notation} \begin{dfn}\label{acceptable} Consider a set of choices $G(\overline{\sigma},\overline{\nu})$ on a trivalent tree $G$. Let $v$ be an internal (degree $3$) vertex in $G$. Thus $v$ is the endpoint of three edges, $\{\epsilon_1,\epsilon_2,\epsilon_3\}$. Each of these edges has an associated integer partition of $d$ as either a bounded or unbounded edge; call these the three integer partitions $\{\mu_1,\mu_2,\mu_3\}$. Recall from \ref{basisdfn} that each integer partition $\mu_i$ of $d$ corresponds to a basis element $K_{\mu_i}$ in the class algebra. Define $$I(v) = \tr(K_{\mu_1}K_{\mu_2}K_{\mu_3}).$$ We will say that the vertex $v$ is \textbf{acceptable} if $I(v) \neq 0$. We say that $G(\overline{\sigma},\overline{\nu})$ is \textbf{acceptable} if every internal vertex $v$ in $G$ is acceptable. \end{dfn} \begin{remark} Recall that each vertex in a tropical curve corresponds to an irreducible component in a classical marked stable curve. The acceptable condition corresponds to whether this data can be realized as a classical cover of $\p^1$ with only three ramification points. \end{remark} \begin{dfn}\label{Dsig} Recall that we have fixed $n,d \in \N$, and let $m = n-3$. Pick a collection of $n$ integer partitions of $d$: $\overline{\sigma} = \{\sigma_1,\ldots,\sigma_n\}$. Define $D(\overline{\sigma})$ as the disjoint union of copies of $\R^m_{\geq0}$ indexed by the possible choices for acceptable $G(\overline{\sigma},\overline{\nu})$: $$D(\overline{\sigma}) = \coprod_{\begin{array}{c}G(\overline{\sigma},\overline{\nu})\\ \mbox{acceptable}\end{array}} \R^m_{\geq0}.$$ \end{dfn} \begin{remark} The data for each cone has much in common; only the topological type and $\overline{\nu}$ are allowed to vary. \end{remark} \begin{dfn} Pick a collection of $n$ integer partitions of $d$: $\overline{\sigma} = \{\sigma_1,\ldots,\sigma_n\}$. Let $\mathcal{H}(\overline{\sigma})$ be the topological space formed from $D(\overline{\sigma})$ by identifying points $p \in D(\overline{\sigma})_{G(\overline{\sigma},\overline{\nu})}$ and $q \in D(\overline{\sigma})_{G'(\overline{\sigma},\overline{\nu}')}$ exactly when there is an isomorphism (\ref{graphisodfn}) of metric graphs $F: G(\overline{\lambda},\overline{E})(p) \to G'(\overline{\lambda},\overline{E})(q)$ such that \begin{itemize} \item $F_E(\lambda_i) = \lambda_i$, and \item if $F_E(e_i) = e_j$, then $\nu_i = \nu_j'$. \end{itemize} Note that the first property says that the images of the leaves are specified, which is sufficient to specify the image of all vertices and edges in a tree. As a result, if such an isomorphism exists, it is unique. In addition, because $\overline{\sigma}$ is constant in $D(\overline{\sigma})$, the first property implies that the integer partition, $\sigma_i$, associated with each unbounded edge also matches when $p$ and $q$ are glued. 
\end{dfn} The space $\mathcal{H}(\overline{\sigma})$ is our candidate for the tropical analogue of the Hurwitz space. \begin{thm} The space $\mathcal{H}(\overline{\sigma})$ is a polyhedral complex in the sense of definition \ref{polycomplex}. \end{thm} \begin{proof} Each cone $D(\overline{\sigma})_{G(\overline{\sigma},\overline{\nu})}$ is a copy of $\R^m_{\geq0}$, which is the closure of the positive orthant in $\R^m$ with the standard Euclidean topology. The space $D(\overline{\sigma})$ is a disjoint union of these topological spaces, which gives it a natural topology. The space $\mathcal{H}(\overline{\sigma})$ is formed by identifying points in $D(\overline{\sigma})$, which gives it a natural quotient topology. First we show that each copy of $\R^m_{\geq0}$ is homeomorphic to its image in $\mathcal{H}(\overline{\sigma})$. Let $p,q \in D(\overline{\sigma})_{G(\overline{\sigma},\overline{\nu})}$. If $p$ and $q$ are identified in $\mathcal{H}(\overline{\sigma})$, then there is an isomorphism of metric graphs $F: G(\overline{\lambda},\overline{E})(p) \to G(\overline{\lambda},\overline{E})(q)$ that respects the labelings. But because they come from the same copy of $D(\overline{\sigma})$, the edges that are degenerated must be degenerated for both $p$ and $q$. Thus the coordinates with value $0$ for $p$ and $q$ must agree. Since $F$ is an isomorphism of metric graphs, all of the non-degenerated edges must be the same length, so the positive coordinates of $p$ and $q$ must also be identical, meaning that $p=q$. In other words, each projection $c_{D(\overline{\sigma})_{G(\overline{\sigma},\overline{\nu})}}: D(\overline{\sigma})_{G(\overline{\sigma},\overline{\nu})} \to \mathcal{H}(\overline{\sigma})$ is an injection. As a result, $c_{D(\overline{\sigma})_{G(\overline{\sigma},\overline{\nu})}}$ is a homeomorphism of a closed lattice polyhedron in $\R^m$ with its image in $\mathcal{H}(\overline{\sigma})$. Let $Y$ be a face of $D(\overline{\sigma})_{G(\overline{\sigma},\overline{\nu})}$; the preceding paragraph also shows that $c_{D(\overline{\sigma})_{G(\overline{\sigma},\overline{\nu})}}|_Y$ is a homeomorphism of the face $Y$ with its image. Given such a face, $Y$, let $c_{\hat{Y}}$ be the associated function and $\hat{Y}$ the image of that function. Consider the sets $$\overline{P} = \bigcup_{G,\overline{\nu}} \{\hat{Y} \mid Y \mbox{ is a face of } D(\overline{\sigma})_{G(\overline{\sigma},\overline{\nu})}\}$$ and $$\overline{c} = \bigcup_{G,\overline{\nu}} \{c_{\hat{Y}} \mid Y \mbox{ is a face of } D(\overline{\sigma})_{G(\overline{\sigma},\overline{\nu})}\}.$$ We will now show that $\overline{P}$ and $\overline{c}$ give $\mathcal{H}(\overline{\sigma})$ the structure of a polyhedral complex. Notice that $\mathcal{H}(\overline{\sigma})$ is, by definition, the union of the images of the unrestricted charts, so it is certainly the union of these polyhedral cells. Now we must show that the intersection of two polyhedral cells is a polyhedral cell. Consider two polyhedral cells $\hat{Y}, \hat{Y}' \in \overline{P}$. Then there is a trivalent graph $G$ and collection of integer partitions $\overline{\nu}$ such that $\hat{Y}$ is the image of a face $Y$ of $D(\overline{\sigma})_{G(\overline{\sigma},\overline{\nu})}$ and there is a trivalent graph $G'$ and collection of integer partitions $\overline{\nu}'$ such that $\hat{Y}'$ is the image of a face $Y'$ of $D(\overline{\sigma})_{G'(\overline{\sigma},\overline{\nu}')}$. Notice that the sets of leaves of $G$ and $G'$ have the same labels. The strategy is to degenerate $G$ and $G'$ until the labeled, decorated trees agree; the common face produced this way will turn out to be the intersection.
Let $D_1$ be the set of bounded edges, $e$, of $G$ such that there does not exist an edge $e'$ of $G'$ with $\pi(e') = \pi(e)$ (\ref{partitionpi}). Let $D_1'$ be the set of bounded edges, $e'$, of $G'$ such that there does not exist an edge $e$ of $G$ with $\pi(e') = \pi(e)$. Let $H_1$ be the degeneration of $G$ by the edges of $D_1$; let $H_1'$ be the degeneration of $G'$ by the edges of $D_1'$. Because $G$ and $G'$ were trivalent trees, $H_1$ and $H_1'$ are stable trees (\ref{stabledegen}). As above, $\overline{\lambda}$ gives a bijection of $L(H_1)$ to $L(H_1')$; moreover, by construction, the sets of (set) partitions created by bounded edges in $H_1$ and $H_1'$ are identical. So by \ref{graphisomorph}, there is an isomorphism $F: H_1 \to H_1'$ that agrees with the labeling on the vertices. Let $D_2$ be the set of edges $e_i$ of $H_1$ such that $F_E(e_i) = e_j'$ but $\nu_i \neq \nu_j'$. Let $D_2'$ be the set of edges $e_j'$ of $H_1'$ such that $F_E(e_i) = e_j'$ but $\nu_i \neq \nu_j'$. Let $H_2$ be the degeneration of $H_1$ by the edges of $D_2$, and let $H_2'$ be the degeneration of $H_1'$ by the edges of $D_2'$. Clearly, $F$ descends to an isomorphism $F: H_2 \to H_2'$, and now $F$ respects all labelings and integer partitions. Consider the face of $D(\overline{\sigma})_{G(\overline{\sigma},\overline{\nu})}$ determined by setting the coordinates associated with $D_1 \cup D_2$ to zero and intersecting with $Y$; call this face $Q$. Similarly, consider the face of $D(\overline{\sigma})_{G'(\overline{\sigma},\overline{\nu}')}$ determined by setting the coordinates associated with $D_1' \cup D_2'$ to zero and intersecting with $Y'$; call this face $Q'$. Pick a point $p \in Q$. Consider the following point $q$ in $D(\overline{\sigma})_{G'(\overline{\sigma},\overline{\nu}')}$. If $p_i > 0$ and $F_E(e_i) = e_j$, then set $q_j = p_i$. Otherwise, let $q_j = 0$. By the above discussion, $G(\overline{\lambda},\overline{E})(p)$ is isomorphic to $G'(\overline{\lambda},\overline{E})(q)$ by $F$. So the image of $p$ is in the intersection, $\hat{Y} \cap \hat{Y}'$. Similarly, the image of every point of $Q'$ is in the intersection. So $\hat{Q} = \hat{Q}'$ is contained in $\hat{Y} \cap \hat{Y}'$. Let $\hat{p} \in \hat{Y} \cap \hat{Y}'$; then $\hat{p}$ is the image of a point $p \in Y$ and of a point in $Y'$, both representing the same metric graph. The partitions of the leaves that can be realized in each version of $p$ must be a subset of those possible for each of $G$ and $G'$, so certainly $p$ lies in the set of points for which the edges of $D_1$ have been degenerated. Similarly, the two versions of $p$ must have identical integer partitions on the bounded edges, so $p$ also lives in the face on which the edges of $D_2$ have been degenerated. Hence $p$ lies in $Q$, meaning $\hat{p} \in \hat{Q}$. Thus $\hat{Q} = \hat{Q}'$ is the intersection, which is clearly a polyhedral cell. Finally, we must show that the different charts for the faces are lattice-equivalent. Suppose $\hat{Y} = \hat{Y'}$, where $Y$ is a face of $D(\overline{\sigma})_{G(\overline{\sigma},\overline{\nu})}$ and $Y'$ is a face of $D(\overline{\sigma})_{G'(\overline{\sigma},\overline{\nu}')}$. Then for any point, $p \in Y$, there is a point $p' \in Y'$ with which it is identified. The coordinates of $p$ and $p'$ corresponding to the sets $D_1 \cup D_2$ constructed above are all zero. The other coordinates are permuted by the isomorphism, and this permutation is the same for all points. Thus the affine spaces containing $Y$ and $Y'$ are identified globally by a permutation of the coordinates.
This isomorphism clearly respects the lattices, so these charts are lattice-equivalent. Having checked all conditions in \ref{polycomplex}, we see that $\mathcal{H}(\overline{\sigma})$ is a polyhedral complex. \end{proof} \begin{thm} The space $\mathcal{H}(\overline{\sigma})$ is connected. \end{thm} \begin{proof} The graphs $G(\overline{\sigma},\overline{\nu})(0,\ldots,0)$ and $G'(\overline{\sigma},\overline{\nu}')(0,\ldots,0)$ are both trees with a degree $n$ vertex at the end of $n$ unbounded edges. Clearly, there exists an isomorphism of these trees respecting $\overline{\lambda}$ (and thus $\overline{\sigma}$), and there are no bounded edges left to consider, so that isomorphism respects $\overline{E}$ and $\overline{\nu}$. So these two points are identified in $\mathcal{H}(\overline{\sigma})$. The point $(0,\ldots,0)$ is in every face of these cones as well. So, the image point is in the image of every chart (meaning every polyhedral cell). Since each polyhedral cell is connected and all of the cells share this common point, the space is connected. \end{proof} \begin{remark} Although $\mathcal{H}(\overline{\sigma})$ is not embedded, the previous theorem shows that it has a fan-like structure. \end{remark} \begin{dfn}\label{weightsdfn} Consider a top-dimensional polyhedron $P_i \in \overline{P}$, meaning that it is the image of the full copy of $\R^m_{\geq0}$ from $D(\overline{\sigma})_{G(\overline{\lambda},\overline{\sigma},\overline{E},\overline{\nu})}$, where $G = (V,E)$ is a trivalent tree. Define the weight of $P_i$ as $$w(P_i) = \frac{1}{d!} \left(\prod_{e_i \in \overline{E}} \frac{1}{|K_{\nu_i}|}\right) \left(\prod_{v \in (V \setminus L(G))} I(v)\right).$$ Notice that the condition $v \in V \setminus L(G)$ is the same as saying that $v$ is an internal vertex. \end{dfn} \section{The Morphism to $\M_{0,n}$} \textit{In this section, we define a morphism from the polyhedral complex constructed above to Mikhalkin's moduli space of marked tropical rational curves and show that the degree of that morphism captures information about the Hurwitz numbers.} Many of the ideas in this section are derived from \cite{MikM}, including the statements of \ref{doubleratio} and \ref{ratiocoord}. More detail is provided here because it is absent from the original. \begin{dfn}\label{doubleratio}\cite{MikM} Consider one of the polyhedra from \ref{Dsig}, $D(\overline{\sigma})_{G(\overline{\lambda},\overline{\sigma},\overline{E},\overline{\nu})}$. Fix $4$ distinct elements, $w,x,y,z \in \overline{\lambda}$, and think of these as labels of the leaves. Define the function $d_{(w,x),(y,z)}: D(\overline{\sigma})_{G(\overline{\lambda},\overline{\sigma},\overline{E},\overline{\nu})} \to \R$ as follows. First notice that, because $G$ is a tree, there is a unique (oriented) path, $P_{(w,x)}$, in $G$ from $w$ to $x$. Similarly, there is a unique (oriented) path, $P_{(y,z)}$, in $G$ from $y$ to $z$. Consider the intersection of the paths $P_{(w,x)}$ and $P_{(y,z)}$; call this intersection $P_{(w,x),(y,z)}$. Notice that $P_{(w,x),(y,z)}$ is connected; if the paths separate and rejoin, that would give a circuit. Notice that no unbounded edge can be used in more than one of these paths because the leaves are distinct (and paths don't repeat edges); so, $P_{(w,x),(y,z)}$ is contained in the bounded edges of $G$. Let $p \in D(\overline{\sigma})_{G(\overline{\lambda},\overline{\sigma},\overline{E},\overline{\nu})}$; then $G(\overline{\lambda},\overline{E})(p)$ contains a copy of the intersection, $P_{(w,x),(y,z)}(p)$, but here each edge has a length.
If the orientations of $P_{(w,x)}$ and $P_{(y,z)}$ agree on $P_{(w,x),(y,z)}(p)$, then let $d_{(w,x),(y,z)}(p)$ be the length of $P_{(w,x),(y,z)}(p)$. If the orientations disagree, let $d_{(w,x),(y,z)}(p)$ be the negative of the length of $P_{(w,x),(y,z)}(p)$. We call $d_{(w,x),(y,z)}$ the \textbf{double ratio} associated with these distinct pairs of ordered points. \end{dfn} \begin{lemma} On each face $Y$ of $D(\overline{\sigma})_{G(\overline{\sigma},\overline{\nu})}$, $d_{(w,x),(y,z)} \circ c_{\hat{Y}}$ is linear. \end{lemma} \begin{proof} Pick $p,q \in D(\overline{\sigma})_{G(\overline{\sigma},\overline{\nu})}$ and $r \in \R^+$. Notice that the choice of a double ratio determines all orientation issues, so we do not need to worry about the signs when working with a single function. The length of $P_{(w,x),(y,z)}(rp)$ is computed from $P_{(w,x),(y,z)}(p)$ by first scaling the length of each included edge by $r$ and then summing, which gives the same result as computing the length of $P_{(w,x),(y,z)}(p)$ and then scaling the total length by $r$. Similarly, the length of $P_{(w,x),(y,z)}(p+q)$ is computed by first adding the edge lengths for $p$ and $q$ and then computing the intersection length, which gives the same result as computing the lengths of $P_{(w,x),(y,z)}(p)$ and $P_{(w,x),(y,z)}(q)$ and then adding them. \end{proof} \begin{lemma}\label{ratiocoords}\cite{MikM} Let $p \in D(\overline{\sigma})_{G(\overline{\sigma},\overline{\nu})}$. Then, for each $i$, there exists a choice of $w,x,y,z \in \overline{\lambda}$ such that $|d_{(w,x),(y,z)}(p)| = p_i$. In other words, each edge length is a double ratio. \end{lemma} \begin{proof} Fix $1 \leq i \leq m$ and consider $e_i = \{u,v\}$. If you are familiar with the classical or tropical moduli spaces of marked stable curves, you could find this as the pull-back of the only double ratio on $\M_{0,4}$ by the forgetful morphism $\M_{0,n} \to \M_{0,4}$ defined by those $4$ marked points. But we can locate this double ratio directly as follows. We will construct two paths whose intersection is $e_i$. Because $e_i$ is bounded and $G$ is trivalent, there are two other edges coming out of each of $u$ and $v$: $\{u,a\}, \{u,b\}, \{v,c\}, \{v,d\}$. Remove $\{u,a\}$ from $G$ and consider $\pi(\{u,a\}) = \{S(\{u,a\}),S'(\{u,a\})\}$. Let $w$ be any leaf coming from the part of the partition associated with the component that contains $a$. Similarly, pick $x,y,z$, being careful to use either $c$ or $d$ to produce $x$. Then the intersection of these paths is clearly $e_i$, so $d_{(w,x),(y,z)}(p) = \pm p_i$. The same argument holds for degenerations of $G$ by simply picking $2$ pairs of adjacent vertices to play the roles of $a,b,c,d$. In the degeneration, there may be multiple ways to realize the edge's length as a double ratio. \end{proof} \begin{remark} The proof above shows that $e_i$ is part of $P_{(w,x),(y,z)}$ if $\pi(e_i)$ separates $w$ from $x$ and $y$ from $z$. We will say that $d_{(w,x),(y,z)}$ is \textbf{$e_i$-compatible} in this case. \end{remark} \begin{lemma}\label{ratiocoord} The length of $e_i$ for a point $p$ is the minimal, non-zero (absolute) value of the $e_i$-compatible double ratios evaluated at $p$. \end{lemma} \begin{proof} By \ref{ratiocoords}, the length of $e_i$ does appear on the list of values, up to a sign change. Notice, however, that the lengths of the segments are all positive. So any path containing $e_i$ and other edges must be strictly longer than the path containing only $e_i$ and hence have a larger absolute value.
\end{proof} \begin{remark} The definition of $d_{(w,x),(y,z)}$ clearly agrees on all copies of points from different cones that are identified in $\mathcal{H}(\overline{\sigma})$ because it only depends on the metric structure on the graph. So we can think of $d_{(w,x),(y,z)}$ as a function from $\mathcal{H}(\overline{\sigma})$ to $\R$. \end{remark} \begin{remark} Notice that $d_{(x,w),(y,z)} = -d_{(w,x),(y,z)} = -d_{(x,w),(z,y)}$. Also notice that $d_{(y,z),(w,x)} = d_{(w,x),(y,z)}$. We say that two double ratios are equivalent if they differ only by these kinds of reorderings (but respect the pairing). The equivalence classes depend only on a choice of $4$ leaves and a way of putting those leaves into disjoint pairs. Hence there are $N = 3\left(\begin{array}{c}n\\4\end{array}\right)$ equivalence classes. \end{remark} \begin{dfn} Pick $N = 3\left(\begin{array}{c}n\\4\end{array}\right)$ double ratios, one from each equivalence class, and order them $(d_1, \ldots, d_N)$. Define $\phi: \mathcal{H}(\overline{\sigma}) \to \R^N$ by $p \mapsto (d_1(p),\ldots,d_N(p))$. \end{dfn} \begin{cor}\label{latticeindex} Let $q \in \phi(\mathcal{H}(\overline{\sigma}))$ such that the coordinates of $q$ are integral. Then for any $p \in \mathcal{H}(\overline{\sigma})$ such that $q = \phi(p)$, the coordinates of $p$ (in any chart) are integral. \end{cor} \begin{proof} Being integral in the image means every coordinate is integral, including the ones that measure the edge lengths. So any preimage point has edge lengths that are all integral. \end{proof} \begin{cor} For each face $Y$ of $D(\overline{\sigma})_{G(\overline{\sigma},\overline{\nu})}$, $\phi \circ c_{\hat{Y}}$ is a linear isomorphism onto its image. \end{cor} \begin{proof} Linearity follows from the linearity of the coordinate functions. Injectivity follows from the fact that, by \ref{ratiocoords}, each coordinate of the domain appears (up to sign) among the double ratios. \end{proof} \begin{lemma} The image of $\phi: \mathcal{H}(\overline{\sigma}) \to \R^N$ is independent of $\overline{\sigma}$. \end{lemma} \begin{proof} The definition of $\phi$ used only the metric graph structure and did not mention the integer partitions of $d$ associated with each unbounded edge. \end{proof} \begin{remark} Different choices of double ratios to define the map $\phi$ will change the image, but as Mikhalkin points out, the image differs only by negating some coordinates, which clearly produces an isomorphic polyhedral structure. \end{remark} \begin{notation} Let $\tilde{M} = \phi(\mathcal{H}(\overline{\sigma}))$. \end{notation} \begin{thm} The collections $\phi(\overline{P}) = \{\phi(P_i) | P_i \in \overline{P}\}$ and $\phi(\overline{c}) = \{\phi \circ c_i | c_i \in \overline{c}\}$ give $\tilde{M}$ the structure of a polyhedral complex. \end{thm} \begin{proof} The space $\tilde{M}$ is a subset of $\R^N$, which gives it a topology. Because the maps are linear, the images of the closed polyhedra are still closed in $\R^N$. In addition, because the maps are linear isomorphisms, they are also homeomorphisms on the cones. The space $\tilde{M}$ is the image of the union of the polyhedra $\overline{P}$, but this is trivially the union of the images, which are the polyhedra in $\phi(\overline{P})$. As before, showing that the intersection of two polyhedra is a polyhedron is more subtle than the rest of this proof. Note that $\phi$ forgets the integer partitions, $\overline{\nu}$. Given two polyhedra in $\phi(\mathcal{H}(\overline{\sigma}))$, there are many choices of preimage polyhedra.
As long as we pick the preimages with the same choice of $\overline{\nu}$, the intersection of the lifts will have as its image the intersection of the images. We already know that the charts in $\overline{c}$ are lattice-equivalent on intersections. In $\phi(\overline{c})$, each of these is post-composed with $\phi$, the same linear isomorphism, which clearly preserves lattice-equivalence. So, $\tilde{M}$ is a polyhedral complex. \end{proof} \begin{lemma} Given the polyhedral structures above, $\phi: \mathcal{H}(\overline{\sigma}) \to \tilde{M}$ is a morphism of polyhedral complexes. \end{lemma} \begin{proof} By the very definition of $\tilde{M}$, each polyhedral cell from $\mathcal{H}(\overline{\sigma})$ maps into a polyhedral cell of $\tilde{M}$. The check that $\phi$ is locally affine and integral is trivial: substituting in the definition of the charts on $\tilde{M}$, we see that $\phi$ is composed with its inverse. This cancels, leaving the original maps from the lattice equivalence in $\mathcal{H}(\overline{\sigma})$, for which the desired property has already been shown. \end{proof} \begin{thm} Given the weights defined in \ref{weightsdfn}, $\deg(\phi) = \frac{1}{d!}\tr(K_{\sigma_1} \cdots K_{\sigma_n})$. \end{thm} \begin{proof} We will prove this theorem by induction on the number of internal vertices in the topological types, $k = n-2$. But first we simplify the computation in all cases. Pick $q$ in the interior of a top-dimensional polyhedral cell in $\tilde{M}$. All of the polyhedral cells map isomorphically through $\phi$, so every preimage of $q$ is in the interior of a top-dimensional polyhedral cell in $\mathcal{H}(\overline{\sigma})$. By \ref{latticeindex}, the lattice from each of these domain points maps onto the entire lattice in the codomain. So for any preimage $p$, $\ind(\phi)_p = 1$. Hence we must compute $$\deg(\phi)_q = \sum_{\begin{array}{c}p \in \mathcal{H}(\overline{\sigma})\\\phi(p) = q\end{array}} w(p) \cdot \ind(\phi)_p = \sum_{\begin{array}{c}p \in \mathcal{H}(\overline{\sigma})\\\phi(p) = q\end{array}} w(p).$$ Recall that, for $p$ in the interior of $D(\overline{\sigma})_{G(\overline{\lambda},\overline{\sigma},\overline{E},\overline{\nu})}$, $$w(p) = \frac{1}{d!} \left(\prod_{e_i \in \overline{E}} \frac{1}{|K_{\nu_i}|}\right) \left(\prod_{v \in (V \setminus L(G))} I(v)\right).$$ First, suppose $k=1$, the smallest possible number of internal vertices in a trivalent graph. In the unique (topological type of a) trivalent graph with only one internal vertex, there are no bounded edges. In addition, there is only one vertex of degree $3$. If $v$ is that degree $3$ vertex, then $I(v) = \tr(K_{\sigma_1} K_{\sigma_2} K_{\sigma_3})$. So for any $p \in \phi^{-1}(q)$, $$w(p) = \frac{1}{d!} \cdot 1 \cdot \tr(K_{\sigma_1} K_{\sigma_2} K_{\sigma_3}).$$ Notice also that there are no choices for $\overline{\nu}$ in a graph without bounded edges. So there is only this one preimage point $p = \phi^{-1}(q)$, and this single term is actually the degree. Now suppose that $k > 1$ and that the expression is known for all trivalent trees with $j < k$ internal vertices. The topological type of the preimages of $q$ can be determined from the coordinates of $q$, as seen in \cite{MikM}. Call this topological type $G = (V,E)$. By \ref{twoleaftripod}, $G$ has an internal vertex that is the intersection of two unbounded edges.
Permute the labeling $\overline{\lambda}$ (simultaneously on all of $\mathcal{H}(\overline{\sigma})$) so that the unbounded edges $\lambda_{n-1}$ and $\lambda_n$ intersect at the internal vertex $\hat{v}$. In addition, permute the labeling $\overline{E}$ such that the third edge at $\hat{v}$ is $e_m$. Consider the following topological type of graphs, $\hat{G} = (\hat{V},\hat{E})$, defined as follows. Thinking of $\overline{\lambda}$ as a labeling of $L(G)$, let $\hat{V} = V \setminus \{\lambda_{n-1}, \lambda_n\}$. Thinking of $\overline{\lambda}$ as a labeling of the unbounded vertices, let $\hat{E} = E \setminus \{\lambda_{n-1}, \lambda_n\}$. In short, $\hat{G}$ is formed from $G$ be removing two unbounded edges that intersect (and their leaves). \begin{center} \psset{xunit=0.5cm,yunit=0.5cm,algebraic=true,dotstyle=o,dotsize=3pt 0,linewidth=0.8pt,arrowsize=3pt 2,arrowinset=0.25} \begin{pspicture*}(-4,-1.1)(16,7.7) \psline(-2.94,0.94)(-0.84,1.34) \psline(-1.6,-0.54)(-0.84,1.34) \psline(-0.84,1.34)(1.02,2.82) \psline(1.02,2.82)(3.34,1.42) \psline(1.02,2.82)(0.76,4.3) \psline(0.76,4.3)(1.5,4.8) \psline(0.76,4.3)(0.02,4.76) \pscircle[linestyle=dashed,dash=2pt 2pt](0.76,4.5){0.6} \rput[tl](2.1,5.2){$\sigma_{n-1}$} \rput[tl](-1.55,5.2){$\sigma_n$} \rput[tl](-1.9,3.8){$\nu_m$} \rput[tl](-3.6,1.2){$\sigma_1$} \rput[tl](-2.4,-0.3){$\sigma_2$} \rput[tl](3.4,1.5){$\sigma_3$} \rput[tl](0,1.8){$\nu_1$} \psline{->}(-0.8,3.6)(0.8,3.6) \rput[tl](0,7.2){$G$} \psline{->}(4.7,1.8)(6.7,1.8) \psline(7.06,0.94)(10.16,1.34) \psline(9.4,-0.54)(10.16,1.34) \psline(10.16,1.34)(12.02,2.82) \psline(12.02,2.82)(14.34,1.42) \psline(12.02,2.82)(11.76,4.3) \rput[tl](11,5.3){$\nu_m$} \rput[tl](6.4,1.2){$\sigma_1$} \rput[tl](8.6,-0.3){$\sigma_2$} \rput[tl](14.4,1.5){$\sigma_3$} \rput[tl](11,1.8){$\nu_1$} \rput[tl](11,7.3){$\hat{G}$} \end{pspicture*} \end{center} Notice that $\hat{v}$, which was internal in $G$, is now a leaf. So $\overline{\lambda}_{\hat{G}} = \{\lambda_1,\ldots,\lambda_{n-2},e_m\}$ and $\overline{E}_{\hat{G}} = \{e_1,\ldots,e_{m-1}\}$ are labelings of the unbounded and bounded edges of $\hat{G}$ respectively. Moreover, $\overline{\sigma}_{\hat{G}} = \{\sigma_1,\ldots,\sigma_{n-2},\nu_m\}$ and $\overline{\nu}_{\hat{G}} = \{\nu_1,\ldots,\nu_{m-1}\}$ are collections of integer partitions associated to these edges. This information determines a polyhedral cell in a version of $\mathcal{H}(\overline{\sigma})$, $\hat{H}$, with one fewer internal vertices. Each polyhedral cell containing a preimage of $q$ has a distinct image in $\hat{H}$ obtained by simply forgetting the length of $e_m$. Similarly, $q$ has an analogous point in the image of these points. So $$\deg(\phi)_q = \sum_{\begin{array}{c}p \in \mathcal{H}(\overline{\sigma})\\\phi(p) = q\end{array}} w(p) = \sum_{\begin{array}{c}p \in \mathcal{H}(\overline{\sigma})\\\phi(p) = q\end{array}} \frac{1}{d!}\prod_{e_i \in \overline{E}} \frac{1}{|K_{\nu_i}|} \prod_{v \in (V \setminus L(G))} I(v)$$ Recall that the data of a preimage point is the same as a choice of $\overline{\nu}$. Also, we can factor out the parts of the product coming from the vertex $\hat{v}$. $$= \sum_{\overline{\nu}} \left(\frac{1}{|K_{\nu_m}|} I(\hat{v})\right) \left(\frac{1}{d!} \prod_{e_i \in \overline{E}_{\hat{G}}} \frac{1}{|K_{\nu_i}|} \prod_{v \in (\hat{V} \setminus L(\hat{G}))} I(v)\right)$$ The sum over $\overline{\nu}$ can be decomposed into a sum over each term in $\overline{\nu}$. 
Notice that the two factored terms only depend on $\nu_m$, so we can bring it outside that part of the sum. $$= \sum_{\nu_m} \left(\frac{1}{|K_{\nu_m}|} I(\hat{v})\right) \sum_{\overline{\nu}_{\hat{G}}} \left(\frac{1}{d!} \prod_{e_i \in \overline{E}_{\hat{G}}} \frac{1}{|K_{\nu_i}|} \prod_{v \in (\hat{V} \setminus L(\hat{G}))} I(v)\right)$$ By our inductive hypothesis, the internal sum is just the degree of the morphism $\hat{\phi}: \hat{H} \to \hat{M}$, so we may substitute. $$\deg(\phi)_q = \sum_{\nu_m} \left(\frac{1}{|K_{\nu_m}|} \tr(K_{\sigma_{n-1}} K_{\sigma_n} K_{\nu_m})\right) \left(\frac{1}{d!} \tr(K_{\sigma_1 }\cdots K_{\sigma_{n-2}} K_{\nu_m})\right)$$ Not every possible such product appears, but the ones that have been removed have value zero (see \ref{acceptable}), so we may assume they are present in this sum as well. Then the orthogonal basis lemma, \ref{states}, allows us to simplify to $$\deg(\phi)_q = \frac{1}{d!} \tr(K_{\sigma_1} \cdots K_{\sigma_n}).$$ Notice that this expression does not depend on the polyhedron containing $q$, so $$\deg(\phi) = \frac{1}{d!}\tr(K_{\sigma_1} \cdots K_{\sigma_n}).$$ \end{proof} \begin{remark} If the last proof is hard to conceptualize, think about it in a slightly different way. Unpacking the definition of $I(v) = \tr(K_{\mu_1}K_{\mu_2}K_{\mu_3})$ in the sum in the previous proof, we see that every integer partition, $\nu_i$, will appear in two distinct trace functions in the product and each integer partition $\sigma_i$ will appear in one. Repeated applications of the orthogonal basis lemma will absorb every factor of $\frac{1}{|K_{\nu_i}|}$ and combine the products of traces into a single trace containing one copy of $K_{\sigma_i}$ for each integer partition in $\overline{\sigma}$. \end{remark} \begin{remark} The space $\tilde{M}$ is exactly Mikhalkin's moduli space of tropical, genus $0$ curves with $n$ marked points, $\M_{0,n}$. He uses open cells, so his polyhedra correspond to the relative interiors of my polyhedral cells. He also uses \textit{combinatorial types} of graphs, which correspond to labeling just the leaves of the trees; my function $G(\overline{\lambda},\overline{E})$ also labels the the bounded edges, but this labeling is just notation to talk about the integer partitions $\overline{\nu}$ in a consistent manner. We then both add lengths to the bounded edges. This means that the map $\phi$ factors through his embedding of $\M_{0,n}$ into $\R^m$. \end{remark} So, we summarize: \begin{thm} Given a ramification profile, $\overline{\sigma}$, there is a connected polyhedral complex, $\mathcal{H}(\overline{\sigma})$ and a morphism of polyhedral complexes $\phi: \mathcal{H}(\overline{\sigma}) \to \M_{0,n}$ such that $\deg(\phi) = h(\ol{\sigma})$. Moreover, there is a modular interpretation of points in $\mathcal{H}(\overline{\sigma})$ as tropical ramified covers so that $\phi$ is the forgetful morphism taking a cover to its codomain with marked points at the ramification values. \end{thm} \begin{remark} Mikhalkin compactifies his space in \cite{MikM} by allowing the lengths of the bounded edges to grow to infinity and shows that the compact object is still smooth. We could do the analogous construction and extend our morphism to the compactified case. However, degree is defined for us only in the interiors of the top dimensional cones, so this adds nothing. 
In addition, the top-dimensional cones already correspond to the most degenerate classical curves, and further degenerating adds no new interesting curve from the modular perspective. However, mathematicians with a more combinatorial perspective on tropical geometry may wish to see Mikhalkin's discussion of the compactification in \cite{MikM}. \end{remark} \subsection{Discussion} \begin{remark} If we let $d = 1$ in the construction above, then we get a version of our Hurwitz space. However, notice that there would then be no choices for the labels of the edges, so $\phi$ would be an injection, and we would have recovered the construction of Mikhalkin's moduli space, $\M_{0,n}$. \end{remark} \begin{remark} The condition of being balanced as a polyhedral complex is what we need to guarantee that there can be a consistent notion of intersection theory (including the notion of degree) in tropical geometry. While we have not embedded $\mathcal{H}(\overline{\sigma})$ in a large vector space, the result above indicates that there is not an obstruction to putting a tropical structure on $\mathcal{H}(\overline{\sigma})$ compatible with the structure that Mikhalkin gives to $\M_{0,n}$ (i.e. finding an embedding of $\mathcal{H}(\overline{\sigma})$ as a balanced polyhedral complex, which would make it an honest tropical object). \end{remark} \begin{questions} There are several questions that need answers. \begin{itemize} \item Does $\mathcal{H}(\overline{\sigma})$ have an embedding as a (simply) balanced polyhedral complex? \item Does the stratification of $\mathcal{H}(\overline{\sigma})$ correspond to the stratification of the boundary of the classical Hurwitz space? \item Is there an algorithm for transforming the data from our definition of a tropical ramified cover into something that looks more like an honest tropical admissible cover? Are those covers the tropicalizations of classical admissible covers? \item Will this construction carry over into the case of higher genus? Recent work by Kozlov (\cite{Kozlov2,Kozlov}) and Caporaso (\cite{Caporaso2,Caporaso}) indicates that the moduli spaces of higher genus curves can be given tropical structures much like Mikhalkin's from genus zero. Instead of being like real manifolds, these spaces are like orbifolds. \end{itemize} \end{questions} \section{Extensions} \textit{Here we realize that most of this construction works if the class algebra is replaced by a general Frobenius algebra.} \begin{dfn} Let $V$ be a finite-dimensional, unital algebra over a field $k$. Then $V$ is a \textbf{Frobenius algebra} over $k$ if $V$ has a non-degenerate bilinear pairing $h : V \times V \to k$ such that, for any triple of elements $a,b,c \in V$, $$h(ab,c) = h(a,bc).$$ \end{dfn} \begin{thm} The class algebra, $Z(\R[S_d])$, is a Frobenius algebra. \end{thm} \begin{proof} Take $V = Z(\R[S_d])$ and define the bilinear pairing $h : V \times V \to \R$ by $$h(g,g') = \tr(gg').$$ Note that if $g$, $g'$, and $g''$ are in the class algebra, then $h(gg',g'') = \tr(gg'g'') = h(g,g'g'')$. Moreover, the basis elements $K_\nu$ are orthogonal for this pairing and $h(K_\nu,K_\nu) = \tr(K_\nu K_\nu) = |K_\nu| \neq 0$, so $h$ is non-degenerate and $V$ is a Frobenius algebra. \end{proof} There are only a handful of aspects of our construction that depended on the class algebra. \begin{enumerate} \item The bilinear pairing gives the trace function, but there is not an obvious orthogonal basis for a general Frobenius algebra. \item Instead of choosing integer partitions for each edge of a topological type of tropical graph, we would choose basis vectors for each edge. \item We already realized that $|K_\sigma| = \tr(K_\sigma K_\sigma)$.
We could replace this quantity in the expressions above with $h(v,v)$ for a basis vector $v$. It is not at all clear what role these numbers play. If we instead replace each basis vector above by $K_\sigma \to \frac{K_\sigma}{\sqrt{|K_\sigma|}}$ in order to make the basis orthonormal, then the weights become $$w(P_i) = \frac{1}{d!} \left(\prod_{\lambda_i \in \overline{\lambda}} \sqrt{|K_{\sigma_i}|} \right) \left(\prod_{v \in (V \setminus L(G))} I(v)\right).$$ The first two terms in this product no longer depend on the cone at all, but it is not clear what either term would mean in another Frobenius algebra. \end{enumerate} There are a few connections that we can make at this time. \begin{remark} Notice that the induction in the main theorem that rips off an internal vertex is really a special case of the famous Cut-and-Join formula. Also, notice that the vertex $\hat{v}$ corresponds classically to a copy of $\p^1$ with three special points. If you consider these points to be punctures, this object is the famous ``pair of pants" from a $2$-dimensional topological quantum field theory. The category of 2D-TQFTs is known to be equivalent to the category of Frobenius algebras. It is, however, not clear if there is any reasonable classical geometry interpretation of this construction in general as there was in the case of admissible covers. \end{remark} \baselineskip=15.5pt plus .5pt minus .2pt \printindex \nocite{*} \index{Bibliography@\emph{Bibliography}} \begin{vita} \index{Vita@\emph{Vita}} Brian Paul Katz \index{Brian Paul Katz} was born in Greensboro, NC, USA on November 26, 1980, the son of Jefferey David Katz and Laurie Ann North Katz. After completing his High School studies in Greensboro, NC in 1999, he entered Williams College in Williamstown, MA. He received a degree of B.\,A. in Mathematics, Music, and Chemistry cum laude with Honors from Williams College in 2003. In the fall of 2003 he started graduate studies in the department of Mathematics at The University of Texas at Austin where he was employed as a graduate research assistant and teaching assistant. In the fall of 2006 he became an assistant instructor in the same department. In the spring of 2009 he accepted a position as an assistant professor in the Department of Mathematics and Computer Science at Augustana College in Rock Island, Illinois, where he now teaches. \end{vita} \end{document}
\begin{document} \setlength{\baselineskip}{4.9mm} \setlength{\abovedisplayskip}{4.5mm} \setlength{\belowdisplayskip}{4.5mm} \renewcommand{\roman{enumi}}{\roman{enumi}} \renewcommand{(\theenumi)}{(\roman{enumi})} \renewcommand{\fnsymbol{footnote}}{\fnsymbol{footnote}} \renewcommand{\fnsymbol{footnote}}{\fnsymbol{footnote}} \allowdisplaybreaks[2] \parindent=20pt \begin{center} {\bf Enhanced variety of higher level and \\ Kostka functions associated to complex reflection groups} \par \par Toshiaki Shoji \title{} \end{center} \begin{abstract} Let $V$ be an $n$-dimensional vector space over an algebraic closure of a finite field $\BF_q$, and $G = GL(V)$. A variety $\SX = G \times V^{r-1}$ is called an enhanced variety of level $r$. Let $\SX\uni = G\uni \times V^{r-1}$ be the unipotent variety of $\SX$. We have a partition $\SX\uni = \coprod_{\Bla}X_{\Bla}$ indexed by $r$-partitions $\Bla$ of $n$. In the case where $r = 1$ or 2, $X_{\Bla}$ is a single $G$-orbit, but if $r \ge 3$, $X_{\Bla}$ is, in general, a union of infinitely many $G$-orbits. In this paper, we prove certain orthogonality relations for the characteristic functions (over $\BF_q$) of the intersection cohomology $\IC(\ol X_{\Bla},\Ql)$, and show some results, which suggest a close relationship between those characteristic functions and Kostka functions associated to the complex reflection group $S_n\ltimes (\BZ/r\BZ)^n$. \end{abstract} \maketitle \markboth{SHOJI}{KOSTKA POLYNOMIALS} \pagestyle{myheadings} \begin{center} {\sc Introduction} \end{center} Let $V$ be an $n$-dimensional vector space over an algebraic closure of a finite field $\BF_q$, and $G = GL(V) \simeq GL_n$. In 1981, Lusztig showed in [L1] that Kostka polynomials $K_{\la,\mu}(t)$ have a geometric interpretation in terms of the intersection cohomology associated to the closure of unipotent classes in $G$ in the following sense. Let $C_{\la}$ be the unipotent class corresponding to a partition $\la$ of $n$, and $K = \IC(\ol C_{\la}, \Ql)$ be the intersection cohomology complex on the closure $\ol C_{\la}$ of $C_{\la}$. He proved that $\SH^iK = 0$ for odd $i$, and that for partitions $\la, \mu$ of $n$, \begin{equation*} \tag{*} t^{n(\mu)}K_{\la,\mu}(t\iv) = t^{n(\la)}\sum_{i \ge 0}(\dim \SH^{2i}_xK)t^i, \end{equation*} where $x \in C_{\mu} \subset \ol C_{\la}$, and $n(\la)$ is the usual $n$-function. \par Kostka polynomials are polynomials indexed by a pair of partitions. In [S1], [S2], as a generalization of Kostka polynomials, Kostka functions $K_{\Bla, \Bmu}(t)$ associated to the complex reflection group $S_n \ltimes (\BZ/r\BZ)^n$ were introduced, which are a-priori rational functions in $\BQ(t)$ indexed by $r$-partitions $\Bla, \Bmu$ of $n$ (see 3.10 for the definition of $r$-partitions of $n$). It is known by [S2] that $K_{\Bla, \Bmu}(t)$ are actually polynomials if $r = 2$. Although those Kostka functions are defined in a purely combinatorial way, and have no geometric background, recently various generalizations of Lusztig's result for those Kostka functions were found. Under the notation above, consider a variety $\SX = G \times V$, which is called the enhanced variety, and its subvariety $\SX\uni = G\uni \times V$ is isomorphic to the enhanced nilpotent cone introduced by Achar-Henderson [AH] (here $G\uni$ is the unipotent variety of $G$). The set of $G$-orbits under the diagonal action of $G$ on $\SX\uni$ is parametrized by double partitions of $n$ ([AH], [T]). 
In [AH], they proved that Kostka polynomials indexed by double partitions have a geometric interpretation as in (*) in terms of the intersection cohomology associated to the closure of $G$-orbits in $\SX\uni$. \par On the other hand, let $V$ be a $2n$-dimensional symplectic vector space over an algebraic closure of $\BF_q$ with $\ch \BF_q \ne 2$, and consider $G = GL(V) \supset H = Sp(V)$. The variety $\SX = G/H \times V$ is called the exotic symmetric space, and its ``unipotent variety'' $\SX\uni$ is isomorphic to the exotic nilpotent cone introduced by Kato [K1]. $H$ acts diagonally on $\SX\uni$, and the set of $H$-orbits on $\SX\uni$ is parametrized by double partitions of $n$ ([K1]). As in the enhanced case, it is proved by [K2], and [SS1], [SS2], independently, that Kostka polynomials indexed by double partitions have a geometric interpretation in terms of the intersection cohomology associated to the closure of $H$-orbits in $\SX\uni$. \par As a generalization of the enhanced variety $G \times V$ or the exotic symmetric space $G/H \times V$, we consider $G \times V^{r-1}$ or $G/H \times V^{r-1}$ for any $r \ge 1$. $G$ acts diagonally on $G \times V^{r-1}$, and $H$ acts diagonally on $G/H \times V^{r-1}$. $\SX = G \times V^{r-1}$ is called the enhanced variety of level $r$, and a certain $H$-stable subvariety $\SX$ of $G/H \times V^{r-1}$ is called the exotic symmetric space of level $r$. For those varieties $\SX$, one can consider $G$-stable subvariety $\SX\uni$ (unipotent variety). The crucial difference for the general case is that the number of $G$-orbits (or $H$-orbits) on $\SX\uni$ is no longer finite if $r \ge 3$. Nevertheless, it was shown in [S3], for the exotic case or the enhanced case, that one can construct subvarieties $X_{\Bla}$ of $\SX\uni$ indexed by $r$-partitions $\Bla$ of $n$, and the intersection cohomologies $\IC(\ol X_{\Bla}, \Ql)$ enjoy similar properties as in the case $r = 1$ or 2, more precisely, a generalization of the Springer correspondence holds for $\SX\uni$. \par So it is natural to expect that those intersection cohomologies will have a close relation with Kostka functions indexed by $r$-partitions of $n$. In this paper, we consider this problem in the case where $\SX$ is the enhanced variety of level $r$. In this case, we have a partition $\SX\uni = \coprod_{\Bla}X_{\Bla}$ parametrized by $r$-partitions $\Bla$ of $n$. In the case where $r = 1$ or 2, $X_{\Bla}$ is a single $G$-orbit, but if $r \ge 3$, $X_{\Bla}$ is, in general, a union of infinitely many $G$-orbits. By applying the strategy employed in the theory of character sheaves in [L2], [L3] (and in [SS2]), we show (Theorem 5.5) that the characteristic functions of the Frobenius trace over $\BF_q$ associated to $\IC(\ol X_{\Bla}, \Ql)$ satisfy certain orthogonality relations, which are quite similar to the case of $r = 1$ or 2. In fact, in the case where $r = 1$ or 2, the geometric interpretation is deduced from this kind of orthogonality relations. However, in the case where $r \ge 3$, these orthogonality relations are not enough to obtain the required formula. We show some partial results which suggest an interesting relationship between those characteristic functions and Kostka functions. \par The author is very grateful to Jean Michel for making the GAP program for computing Kostka functions. The examples in 7.3 and 7.4 were computed by his program. \par \section{Complexes on the enhanced variety} \para{1.1.} Let $\Bk$ be an algebraic closure of a finite field $\Fq$. 
Let $V$ be an $n$-dimensional vector space over $\Bk$, and put $G = GL(V)$. For a fixed integer $r \ge 2$, we consider the variety $G \times V^{r-1}$, on which $G$ acts diagonally. We call $G \times V^{r-1}$ the enhanced variety of level $r$. Put $\SQ_{n,r} = \{ \Bm = (m_1, \dots, m_r) \in \BZ^r_{\ge 0} \mid \sum_im_i = n\}$. For each $\Bm \in \SQ_{n,r}$, we define integers $p_i = p_i(\Bm)$ by $p_i = m_1 + \cdots + m_i$ for $i =1, \dots, r$. Let $B = TU$ be a Borel subgroup of $G$, $T$ a maximal torus of $B$, and $U$ the unipotent radical of $B$. Let $(M_i)_{1 \le i \le n}$ be the total flag in $V$ such that the stabilizer of $(M_i)$ in $G$ coincides with $B$. We define varieties \begin{align*} \wt\SX_{\Bm} &= \{ (x, \Bv, gB) \in G \times V^{r-1} \times G/B \mid g\iv xg \in B, g\iv \Bv \in \prod_{i=1}^{r-1}M_{p_i} \}, \\ \SX_{\Bm} &= \bigcup_{g \in G}g(B \times \prod_{i=1}^{r-1}M_{p_i}), \end{align*} and the map $\pi_{\Bm} : \wt \SX_{\Bm} \to \SX_{\Bm}$ by $(x,\Bv, gB) \mapsto (x,\Bv)$. Then $\wt\SX_{\Bm}$ is smooth and irreducible, and $\pi_{\Bm}$ is a proper surjective map. In particular, $\SX_{\Bm}$ is a closed subvariety of $G \times V^{r-1}$. In the case where $\Bm = (n, 0, \dots, 0)$, we denote $\wt\SX_{\Bm}, \SX_{\Bm}$ by $\wt\SX, \SX$. Hence $\SX = G\times V^{r-1}$. \par Let $G\uni$ be the set of unipotent elements in $G$. Similarly to the above, we define \begin{align*} \wt\SX_{\Bm, \unip} &= \{ (x, \Bv, gB) \in G \times V^{r-1} \times G/B \mid g\iv xg \in U, g\iv \Bv \in \prod_{i=1}^{r-1}M_{p_i} \}, \\ \SX_{\Bm,\unip} &= \bigcup_{g \in G}g(U \times \prod_{i=1}^{r-1}M_{p_i}), \end{align*} and the map $\pi_{\Bm,1}: \wt\SX_{\Bm,\unip} \to \SX_{\Bm,\unip}$ by $(x, \Bv, gB) \mapsto (x, \Bv)$. Note that $\wt\SX_{\Bm,\unip} = \pi_{\Bm}\iv(\SX_{\Bm,\unip})$, and $\pi_{\Bm,1}$ is the restriction of $\pi_{\Bm}$ on $\wt\SX_{\Bm,\unip}$. Hence $\pi_{\Bm,1}$ is proper, and $\SX_{\Bm, \unip}$ is a closed subvariety of $G\uni \times V^{r-1}$. In the case where $\Bm = (n, 0, \dots,0)$, we denote $\wt\SX_{\Bm,\unip}, \SX_{\Bm, \unip}$ by $\wt\SX\uni, \SX\uni$. \par Let $T\reg$ be the set of regular semisimple elements in $T$, and put $G\reg = \bigcup_{g \in G}gT\reg g\iv$, $B\reg = G\reg \cap B$. We define varieties \begin{align*} \wt\SY_{\Bm} &= \{ (x,\Bv, gB) \in G\reg \times V^{r-1} \times G/B \mid g\iv xg \in B\reg, g\iv\Bv \in \prod_i M_{p_i} \}, \\ \SY_{\Bm} &= \bigcup_{g \in G}g(B\reg \times \prod_iM_{p_i}) = \bigcup_{g \in G}g(T\reg \times \prod_iM_{p_i}). \end{align*} Then $\wt\SY_{\Bm} = \pi_{\Bm}\iv(\SY_{\Bm})$ and we define the map $\psi_{\Bm} : \wt\SY_{\Bm} \to \SY_{\Bm}$ by the restriction of $\pi_{\Bm}$ on $\wt\SY_{\Bm}$. It is known that \begin{lem}[{[S3, Lemma 4.2]}] \begin{enumerate} \item $\SY_{\Bm}$ is open dense in $\SX_{\Bm}$ and $\wt\SY_{\Bm}$ is open dense in $\wt\SX_{\Bm}$. \item $\dim \SX_{\Bm} = \dim \wt\SX_{\Bm} = n^2 + \sum_{i=1}^r(r-i)m_i$. \end{enumerate} \end{lem} \para{1.3.} We fix a basis $e_1, \dots, e_n$ of $V$ such that $e_i$ are weight vectors of $T$ and that $M_i$ is spanned by $e_1, \dots, e_i$. Let $W = N_G(T)/T$ be the Weyl group of $G$, which is isomorphic to the permutation group $S_n$ of the basis $\{e_1, \dots, e_n\}$. We denote by $W_{\Bm}$ the subgroup of $W$ which permutes the basis $\{e_j\}$ of $M_{p_i}$ for each $i$. Hence $W_{\Bm}$ is isomorphic to the Young subgroup $S_{\Bm} = S_{m_1} \times \cdots \times S_{m_r}$ of $S_n$. Let $M_{p_i}^0$ be the set of $v = \sum_ja_je_j \in M_{p_i}$ such that $a_j \ne 0$ for $p_{i-1} + 1 \le j \le p_i$. 
We define a variety $\wt\SY_{\Bm}^0$ by \begin{equation*} \wt\SY_{\Bm}^0 = G \times^T(T\reg \times \prod_iM_{p_i}^0). \end{equation*} Since $\wt\SY_{\Bm} \simeq G \times^T(T\reg \times \prod_iM_{p_i})$, $\wt\SY_{\Bm}^0$ is identified with the open dense subset of $\wt\SY_{\Bm}$. Then $\SY_{\Bm}^0 = \psi_{\Bm}(\wt\SY^0_{\Bm})$ is an open dense smooth subset of $\SY_{\Bm}$. The map $\psi_{\Bm}^0 : \wt\SY_{\Bm}^0 \to \SY^0_{\Bm}$ obtained from the restriction of $\psi_{\Bm}$ turns out to be a finite Galois covering with group $W_{\Bm}$ (apply the discussion in [S3, 1.3] to the enhanced case). \par We consider the diagram \begin{equation*} \begin{CD} T @<\a << \wt\CX_{\Bm} @>\pi_{\Bm}>> \CX_{\Bm}, \end{CD} \end{equation*} where $\a$ is the map defined by $(x,\Bv, gB) \mapsto (p_T(g\iv xg))$ ($p_T : B \to T$ is the natural projection). Let $\SE$ be a tame local system on $T$. We denote by $W_{\Bm, \SE}$ the stabilizer of $\SE$ in $W_{\Bm}$. Let $\a_0$ be the restriction of $\a$ on $\wt\SY_{\Bm}$. We also denote by $\a_0$ the restriction of $\a$ on $\wt\SY_{\Bm}^0$. Since $\psi_{\Bm}^0$ is a finite Galois covering, $(\psi^0_{\Bm})_!\a_0^*\SE$ is a local system on $\SY^0_{\Bm}$ equipped with $W_{\Bm,\SE}$-action, and is decomposed as \begin{equation*} \tag{1.3.1} (\psi^0_{\Bm})_!\a_0^*\SE \simeq \bigoplus_{\r \in W_{\Bm,\SE}\wg} \r \otimes \SL_{\r}, \end{equation*} where $\SL_{\r} = \Hom (\r, (\psi^0_{\Bm})_!\a_0^*\SE)$ is the simple local system on $\CY^0_{\Bm}$. The following results were proved in Proposition 4.3 and Theorem 4.5 in [S3]. \begin{thm}[{[S3]}] Take $\Bm \in \SQ_{n,r}$, and put $d_{\Bm} = \dim \SX_{\Bm}$. \begin{enumerate} \item $(\psi_{\Bm})_!\a_0^*\SE[d_{\Bm}]$ is a semisimple perverse sheaf on $\SY_{\Bm}$ equipped with $W_{\Bm,\SE}$-action, and is decomposed as \begin{equation*} \tag{1.4.1} (\psi_{\Bm})_!\a_0^*\SE[d_{\Bm}] \simeq \bigoplus_{\r \in W_{\Bm,\SE}\wg} \r \otimes \IC(\SY_{\Bm}, \SL_{\r})[d_{\Bm}]. \end{equation*} \item $(\pi_{\Bm})_!\a^*\SE[d_{\Bm}]$ is a semisimple perverse sheaf on $\SX_{\Bm}$ equipped with $W_{\Bm,\SE}$-action, and is decomposed as \begin{equation*} \tag{1.4.2} (\pi_{\Bm})_!\a^*\SE[d_{\Bm}] \simeq \bigoplus_{\r \in W\wg_{\Bm,\SE}} \r \otimes \IC(\SX_{\Bm}, \SL_{\r})[d_{\Bm}]. \end{equation*} \end{enumerate} \end{thm} \para{1.5.} Let $P$ be the stabilizer of the partial flag $(M_{p_i})$ in $G$, which is a parabolic subgroup of $G$ containing $B$. Let $L$ be the Levi subgroup of $P$ containing $T$, and $U_P$ the unipotent radical of $P$. We consider the varieties \begin{align*} \SX_{\Bm}^P &= \bigcup_{g \in P}g(B \times \prod_{i=1}^{r-1} M_{p_i}) = P \times \prod_{i=1}^{r-1} M_{p_i}, \\ \wh \SX^P_{\Bm} &= G \times^P\SX_{\Bm}^P = G \times^P(P \times \prod_{i=1}^{r-1}M_{p_i}), \\ \wt \SX^P_{\Bm} &= P \times^{B}(B \times \prod_{i=1}^{r-1}M_{p_i}). \end{align*} We define $\pi': \wt\SX_{\Bm} \to \wh\SX^P_{\Bm}$ as the map induced from the inclusion map $G \times (B \times \prod M_{p_i}) \to G \times (P \times \prod M_{p_i})$ under the identification $\wt\SX_{\Bm} \simeq G \times^B(B \times \prod M_{p_i})$, and define $\pi'': \wh\SX_{\Bm}^P \to \SX_{\Bm}$ by $g*(x,\Bv) \mapsto (gxg\iv, g\Bv)$. (Here we denote by $g*(x,\Bv)$ the image of $(g, (x,\Bv)) \in G \times \SX^P_{\Bm}$ on $\wh\SX^P_{\Bm}$.) Thus we have $\pi_{\Bm} = \pi''\circ \pi'$. Since $\pi_{\Bm}$ is proper, $\pi'$ is proper. $\pi''$ is also proper. 
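\par For orientation, it may help to record the extreme case explicitly: if $\Bm = (n, 0, \dots, 0)$, then the partial flag $(M_{p_i})$ is trivial, so $P = L = G$ and $U_P = \{ 1 \}$, the map $\pi''$ gives an isomorphism $\wh\SX^P_{\Bm} \simeq \SX_{\Bm}$, and $\pi'$ is identified with $\pi_{\Bm}$ itself. Thus the factorization $\pi_{\Bm} = \pi''\circ \pi'$ carries new information only when $P$ is a proper parabolic subgroup of $G$.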
\par Let $B_L = B \cap L$ be the Borel subgroup of $L$ containing $T$, and put $\ol M_{p_i} = M_{p_i}/M_{p_{i-1}}$ under the convention $M_{p_0} = 0$. Then $L$ acts naturally on $\ol M_{p_i}$, and by applying the definition of $\pi_{\Bm} : \wt\SX_{\Bm} \to \SX_{\Bm}$ to $L$, we can define \begin{align*} \wt\SX^L_{\Bm} &= L \times^{B_L}(B_L \times \prod_{i=1}^{r-1}\ol M_{p_i}), \\ \SX^L_{\Bm} &= \bigcup_{g \in L}g(B_L \times \prod_{i = 1}^{r-1}\ol M_{p_i}) = L \times \prod_{i=1}^{r-1} \ol M_{p_i} \end{align*} and the map $\pi^L_{\Bm} : \wt\SX^L_{\Bm} \to \SX^L_{\Bm}$ similarly. We have the following commutative diagram \begin{equation*} \tag{1.5.1} \begin{CD} \wt\SX_{\Bm} @<\wt p<< G \times \wt\SX^P_{\Bm} @>\wt q>> \wt\SX^L_{\Bm} \\ @V\pi'VV @VV r V @VV\pi^L_{\Bm} V \\ \wh \SX^P_{\Bm} @<p << G \times \SX_{\Bm}^P @>q>> \SX^L_{\Bm} \\ @V\pi''VV \\ \SX_{\Bm}, \end{CD} \end{equation*} where the map $q$ is defined by $(g, x, \Bv) \mapsto (\ol x, \ol \Bv)$, with $x \mapsto \ol x, \Bv \mapsto \ol \Bv$ natural maps $P \to L, \prod M_{p_i} \to \prod \ol M_{p_i}$. $\wt q$ is defined as the composite of the projection $G \times \wt\SX_{\Bm}^P \to \wt\SX^P_{\Bm}$ and the map $\wt\SX_{\Bm}^P \to \wt\SX^L_{\Bm}$ induced from the projection $P \times (B \times \prod M_{p_i}) \to L \times (B_L \times \ol M_{p_i})$. The maps $p, \wt p$ are the quotients by $P$, under the identification $\wt\SX_{\Bm} \simeq G \times^P\wt\SX^P_{\Bm}$. $r = \id \times r'$, where $r'$ is the natural map $\wt\SX^P_{\Bm} \to \SX^P_{\Bm}$, $g*(x, \Bv) \mapsto (gxg\iv, g\Bv)$. \par Here both squares in the diagram are cartesian. Moreover, we have \par \noindent (i) \ $p$ is a principal $P$-bundle. \\ (ii) \ $q$ is a locally trivial fibration with fibre isomorphic to $G \times U_P \times \prod_{i=1}^{r-2}M_{p_i}$. \par Put $a = \dim P$, $b = \dim G + \dim U_P + \dim \prod_{i=1}^{r-2}M_{p_i}$. By (i) and (ii), the following property holds. \par \noindent (1.5.2) \ For any $L$-equivariant simple perverse sheaf $A_1$ on $\SX^L_{\Bm}$, $q^*A_1[b]$ is a $G \times P$-equivariant simple perverse sheaf on $G \times \SX^P_{\Bm}$, and there exists a unique $G$-equivriant simple perverse sheaf $A_2$ on $\wh\SX^P_{\Bm}$ (up to isomorphism) such that \begin{equation*} p^*A_2[a] \simeq q^*A_1[b]. \end{equation*} \par We define a perverse sheaf $K_{\Bm, T,\SE}$ on $\SX_{\Bm}$ by the right hand side of the formula (1.4.2). Thus $K_{\Bm, T,\SE} \simeq (\pi_{\Bm})_!\a^*\SE[d_{\Bm}]$. We consider the perverse sheaf $K^L_{\Bm, T, \SE}$ on $\SX^L_{\Bm}$ defined similarly to $K_{\Bm, T,\SE}$. Put $\SL = \a^*\SE$. Since $\dim \wh\SX_{\Bm}^P = d_{\Bm}$, we see by (1.5.2) that $\pi'_!\SL[d_{\Bm}]$ is a perverse sheaf on $\wh\SX^P_{\Bm}$ satisfying the property \begin{equation*} \tag{1.5.3} p^*\pi'_!\SL[d_{\Bm} + a] \simeq q^*K^L_{\Bm, T,\SE}[b]. \end{equation*} Since $K^L_{\Bm, T,\SE}$ is decomposed as \begin{equation*} \tag{1.5.4} K^L_{\Bm,T,\SE} = \bigoplus_{\r \in W_{\Bm,\SE}\wg} \r\otimes\IC(\SX_{\Bm}^L, \SL^L_{\r})[d^L_{\Bm}], \end{equation*} where $\SL^L_{\r}, d^L_{\Bm}$ are defined similarly to the theorem, again by (1.5.2), $\pi'_!\SL[d_{\Bm}]$ is a semisimple perverse sheaf, equipped with $W_{\Bm,\SE}$-action, and is decomposed as \begin{equation*} \tag{1.5.5} \pi'_!\SL[d_{\Bm}] \simeq \bigoplus_{\r \in W_{\Bm,\SE}\wg}\r \otimes A_{\r}, \end{equation*} where $A_{\r}$ is a simple perverse sheaf on $\wh\SX^P_{\Bm}$ such that $p^*A_{\r}[a] \simeq q^*\IC(\SX^L_{\Bm}, \SL^L_{\r})[d_{\Bm}^L + b]$. 
By applying $\pi''_!$ on both sides, we have a decomposition \begin{equation*} \tag{1.5.6} K_{\Bm, T,\SE} \simeq \bigoplus_{\r \in W_{\Bm,\SE}\wg}\r \otimes \pi''_!A_{\r}. \end{equation*} Comparing this with the decomposition in (1.4.2), we have \begin{prop} $\pi''_!A_{\r} \simeq \IC(\SX_{\Bm}, \SL_{\r})[d_{\Bm}]$. \end{prop} \begin{proof} By replacing $\wt\SX_{\Bm}$, etc. by $\wt\SY^0_{\Bm}$, etc., we obtain a diagram similar to (1.5.1), \begin{equation*} \tag{1.6.1} \begin{CD} \wt\SY^0_{\Bm} @<\wt p_0<< G \times \wt\SY^{P,0}_{\Bm} @>\wt q_0>> \wt\SY^{L,0}_{\Bm} \\ @V\psi'VV @VV r_0 V @VV\psi^{L,0}_{\Bm} V \\ \wh \SY^{P,0}_{\Bm} @<p_0 << G \times \SY_{\Bm}^{P,0} @>q_0>> \SY^{L,0}_{\Bm} \\ @V\psi''VV \\ \SY^0_{\Bm}, \end{CD} \end{equation*} where \begin{align*} \SY_{\Bm}^{P,0} &= \bigcup_{g \in P}g(T\reg \times \prod_i M^0_{p_i}), \\ \wh \SY^{P,0}_{\Bm} &= G \times^P\SY_{\Bm}^{P,0}, \\ \wt \SY^{P,0}_{\Bm} &= P \times^{T}(T\reg \times \prod_iM^0_{p_i}), \end{align*} and $G \times \SY^{P,0}_{\Bm} = q\iv(\SY^{L,0}_{\Bm}) = p\iv(\wh\SY^{P,0}_{\Bm})$, and $p_0, q_0$ are the restrictions of $p,q$, respectively. Properties similar to those in (1.5.1) hold also for (1.6.1). Note that $\psi^{L,0}_{\Bm}$ is a finite Galois covering with group $W_{\Bm}$, and so $\psi'$ is also a finite Galois covering with group $W_{\Bm}$. Since $\psi^0_{\Bm} = \psi''\circ\psi'$ and $\psi^0_{\Bm}$ is a finite Galois covering with group $W_{\Bm}$, we see that $\psi''$ is an isomorphism. Now $\wh\SY^{P,0}_{\Bm}$ is open dense in $\wh\SX^P_{\Bm}$, and the $W_{\Bm,\SE}$-module structure on $\pi'_!\SL$ is determined from the corresponding structure on $\psi'_!\SL$ obtained from the Galois covering $\psi'$. On the other hand, the $W_{\Bm,\SE}$-module structure on $(\pi_{\Bm})_!\SL$ is determined from the corresponding structure on $(\psi^0_{\Bm})_!\SL$ obtained from the Galois covering $\psi^0_{\Bm}$. Since $\psi''$ is an isomorphism, this shows that the operation $\pi''_!$ is compatible with the $W_{\Bm, \SE}$-module structures of $\pi'_!\SL$ and of $(\pi_{\Bm})_!\SL$. The proposition is proved. \end{proof} \par \section{Green functions} \para{2.1.} We now assume that $G$ and $V$ are defined over $\BF_q$, and let $F: G \to G, F: V \to V$ be the corresponding Frobenius maps. We fix an $F$-stable Borel subgroup $B_0$ and an $F$-stable maximal torus $T_0$ contained in $B_0$. We define $W_0$ as $W_0 = N_G(T_0)/T_0$. Let $(M_{0,i})$ be the total flag corresponding to $B_0$. Thus $M_{0,i}$ are $F$-stable subspaces. For $\Bm \in \SQ_{n,r}$, let $P_{\Bm}$ be the parabolic subgroup of $G$ containing $B_0$ which is the stabilizer of the partial flag $(M_{0,p_i})$. Then the Weyl subgroup of $W_0$ corresponding to $P_{\Bm}$ is given by $(W_0)_{\Bm}$. \par Let $T$ be an $F$-stable maximal torus of $G$, and $B \supset T$ a not necessarily $F$-stable Borel subgroup of $G$. Let $(M_i)$ be the total flag in $V$ whose stabilizer is $B$. We assume that $M_{p_i}$ is $F$-stable for each $i$. Let us construct $\wt\SX_{\Bm}, \wt\SY_{\Bm}$, etc. as in Section 1 by using these $T$ and $B$. There exists $h \in G$ such that $B = hB_0h\iv, T = hT_0h\iv$, and that $h\iv F(h) = \dw$, where $\dw$ is a representative of $w \in (W_0)_{\Bm}$ in $N_G(T_0)$. We fix an $F$-stable basis $e_1, \dots, e_n$ of $V$ which are weight vectors for $T_0$. Then $he_1, \dots, he_{p_i}$ form a basis of $M_{p_i}$ consisting of weight vectors for $T$. If we define $M_{p_i}^0$ as in 1.3 by using this basis, then $M_{p_i}^0$ is $F$-stable for each $i$.
Since $\wt\SY^0_{\Bm} \simeq G \times^T(T\reg \times \prod_iM^0_{p_i})$, $\wt\SY^0_{\Bm}$ has a natural $\Fq$-structure. $\SY^0_{\Bm}$ is $F$-stable, and the maps $\psi^0_{\Bm} : \wt\SY^0_{\Bm} \to \SY^0_{\Bm}$ and $\a_0: \wt\SY^0_{\Bm} \to T$ are $F$-equivariant. Let $\SE$ be a tame local system on $T$ such that $F^*\SE \simeq \SE$. We fix an isomorphism $\vf_0 : F^*\SE \,\raise2pt\hbox{$\underrightarrow{\sim}$}\, \SE$. Then $\vf_0$ induces an isomorphism $\wt\vf_0 : F^*\SL^{\bullet} \,\raise2pt\hbox{$\underrightarrow{\sim}$}\, \SL^{\bullet}$, where $\SL^{\bullet}$ is the local system $(\psi^0_{\Bm})_!\a_0^*\SE$ on $\SY^0_{\Bm}$. By (1.3.1), we have $\SL^{\bullet} \simeq \bigoplus_{\r \in W_{\Bm,\SE}\wg}\r \otimes \SL_{\r}$. As in 1.5, we define a complex $K_{\Bm, T,\SE}$ on $\SX_{\Bm}$ by \begin{equation*} \tag{2.1.2} K_{\Bm, T,\SE} = \IC(\SX_{\Bm}, \SL^{\bullet})[d_{\Bm}] \simeq \bigoplus_{\r \in W_{\Bm,\SE}\wg} \r \otimes \IC(\SX_{\Bm}, \SL_{\r})[d_{\Bm}]. \end{equation*} $\wt\vf_0$ can be extended to a unique isomorphism $\vf: F^*K_{\Bm, T,\SE} \,\raise2pt\hbox{$\underrightarrow{\sim}$}\, K_{\Bm, T,\SE}$. Note that by Theorem 1.4 (ii), $(\pi_{\Bm})_!\a^*\SE[d_{\Bm}]$ is isomorphic to $K_{\Bm, T,\SE}$. But the $\Fq$-structure of $(\pi_{\Bm})_!\a^*\SE[d_{\Bm}]$ is not defined directly from the construction. \para{2.2.} Let $\SD X = \SD^b_c(X)$ be the bounded derived category of $\Ql$-constructible sheaves on a variety $X$ over $\Bk$. Assume that $X$ is defined over $\Fq$, and let $F:X \to X$ be the corresponding Frobenius map. Recall that for a given $K \in \SD X$ with an isomorphism $\f : F^*K \,\raise2pt\hbox{$\underrightarrow{\sim}$}\, K$, the characteristic function $\x_{K,\f} : X^F \to \Ql$ is defined by \begin{equation*} \x_{K,\f}(x) = \sum_i(-1)^i\Tr(\f^*, \SH^i_xK), \quad (x \in X^F), \end{equation*} where $\f^*$ is the induced isomorphism on $\SH^i_xK$. \par Returning to the original setting, we consider a tame local system $\SE$ on $T$ such that $F^*\SE \simeq \SE$. Since the isomorphism $F^*\SE \,\raise2pt\hbox{$\underrightarrow{\sim}$}\, \SE$ is unique up to scalar, we fix $\vf_0: F^*\SE \,\raise2pt\hbox{$\underrightarrow{\sim}$}\, \SE$ so that it induces the identity map on the stalk $\SE_e$ at the identity element $e \in T$. We consider the characteristic function of $K_{\Bm, T,\SE}$ with respect to the map $\vf$ induced from this $\vf_0$, and denote it by $\x_{\Bm, T,\SE}$. Since $K_{\Bm,T,\SE}$ is a $G$-equivariant perverse sheaf, $\x_{\Bm, T,\SE}$ is a $G^F$-invariant function on $\SX_{\Bm}^F$. \par The following result is an analogue of Lusztig's result ([L2, (8.3.2)]) for character sheaves. The gap in the proof in [L2] was corrected in [L4], in the more general setting of character sheaves on disconnected reductive groups. The analogous statement in the case of the exotic symmetric space (of level 2) was proved in [SS2, Prop. 1.6] based on the argument in [L4]. The proof for the present case is quite similar to that of [SS2], so we omit the proof here. \begin{prop} The restriction of $\x_{\Bm, T,\SE}$ on $\SX^F_{\Bm,\unip}$ is independent of the choice of $\SE$. \end{prop} We define a function $Q_{\Bm, T} = Q^G_{\Bm, T}$ as the restriction of $\x_{\Bm, T,\SE}$ on $\SX^F_{\Bm,\unip}$, and call it the Green function on $\SX_{\Bm,\unip}$. \para{2.4.} Let $T = T_w$ be an $F$-stable maximal torus in $G$ as in 2.1, namely $T = hT_0h\iv$ with $ h\in G$ such that $h\iv F(h) = \dw$ for $w \in (W_0)_{\Bm}$.
We consider the isomorphism $\vf = \vf_T : F^*K_{\Bm,T,\SE} \,\raise2pt\hbox{$\underrightarrow{\sim}$}\, K_{\Bm,T,\SE}$ as in 2.2, defined from the specific choice of $\vf_0$. Let $\SE_0$ be the tame local system on $T_0$ defined by $\SE_0 = (\ad h)^*\SE$. Then we have an isomorphism $\vf_{T_0}: F^*K_{\Bm, T_0, \SE_0} \,\raise2pt\hbox{$\underrightarrow{\sim}$}\, K_{\Bm, T_0, \SE_0}$. For later use, we shall describe the relationship between $\vf_T$ and $\vf_{T_0}$. We write the varieties and maps $\wt\SY_{\Bm}, \wt\SY^0_{\Bm}, \a_0$, etc. as $\wt\SY_{\Bm, T}, \wt\SY^0_{\Bm, T}, \a_{0,T}$, etc. to indicate the dependence on $T$. $\wt\SY^0_{\Bm, T_0}$ has a natural Frobenius action $F: (x, \Bv, gT_0) \mapsto (F(x), F(\Bv), F(g)T_0)$, and similarly for $\wt\SY^0_{\Bm, T}$. The map $(x, \Bv, gT) \mapsto (x, \Bv, ghT_0)$ gives a morphism $\d : \wt\SY^0_{\Bm, T} \to \wt\SY^0_{\Bm, T_0}$ commuting with the projection to $\SY^0_{\Bm}$ (note that $\SY^0_{\Bm}$ is independent of the choice of $T$). We define a map $a_w : \wt\SY^0_{\Bm, T_0} \to \wt\SY^0_{\Bm, T_0}$ by $(x, \Bv, gT_0) \mapsto (x, \Bv, g\dw\iv T_0)$. Then we have a commutative diagram \begin{equation*} \tag{2.4.1} \begin{CD} \wt\SY^0_{\Bm, T} @>\d >> \wt\SY^0_{\Bm, T_0} \\ @VF VV @VVa_{w}F V \\ \wt\SY^0_{\Bm, T} @>\d >> \wt\SY^0_{\Bm, T_0} \end{CD} \end{equation*} Let $\SL^{\bullet}_0 = (\psi^0_{\Bm, T_0})_!(\a_{0,T_0})^*\SE_0$ be the local system on $\SY^0_{\Bm}$. We know $\End \SL^{\bullet}_0 \simeq \Ql[(W_0)_{\Bm, \SE_0}]$. This isomorphism is given as follows; the map $w \mapsto a_w$ gives a homomorphism $(W_0)_{\Bm} \to \Aut(\wt\SY^0_{\Bm, T_0})$. If $w \in (W_0)_{\Bm,\SE_0}$, $a_w$ induces an isomorphism $\wt a_w$ on $\SL^{\bullet}_0$, and the map $w \mapsto \wt a_w$ gives the isomorphism $\Ql[(W_0)_{\Bm, \SE_0}] \to \End \SL^{\bullet}_0$. By the property of the intermediate extensions, we have an isomorphism on $K_{\Bm, T_0, \SE_0}$ induced from $\wt a_w$, which we denote by $\th_w$. Also $\d$ induces an isomorphism $\SL^{\bullet} = (\psi^0_{\Bm,T})_!(\a_{0,T})^*\SE \to \SL^{\bullet}_0$, and so induces an isomorphism $K_{\Bm, T, \SE} \to K_{\Bm, T_0, \SE_0}$, which we denote by $\wt\d$. Then the diagram (2.4.1) implies the following commutative diagram. \begin{equation*} \tag{2.4.2} \begin{CD} F^*K_{\Bm, T,\SE} @>F^*(\wt\d)>> F^*K_{\Bm, T_0, \SE_0} \\ @V\vf_T VV @VV \vf_{T_0}\circ F^*(\th_w)V \\ K_{\Bm, T, \SE} @>\wt\d>> K_{\Bm, T_0, \SE_0}. \end{CD} \end{equation*} Note that $F^*(\th_w) = \th_{F(w)}$. Since $F$ acts trivially on $W_0$, we have $F^*(\th_w) = \th_w$. \para{2.5.} For a semisimple element $s \in G^F$, we consider $Z_G(s) \times V^{r-1}$. Then $V$ is decomposed as $V = V_1 \oplus \cdots \oplus V_t$, where $V_i$ is an eigenspace of $s$ with $\dim V_i = n_i$, and $F$ permutes the eigenspaces. $Z_G(s) \simeq G_1 \times \cdots \times G_t$ with $G_i = GL(V_i)$. Hence \begin{equation*} \tag{2.5.1} Z_G(s) \times V^{r-1} \simeq \prod_{i=1}^t (G_i \times V_i^{r-1}), \end{equation*} and the diagonal action of $Z_G(s)$ on the left hand side is compatible with the diagonal action of $G_i$ on $G_i\times V_i^{r-1}$ under the isomorphism $Z_G(s) \simeq G_1 \times \cdots \times G_t$. The definitions of $\wt\SX_{\Bm}, \SX_{\Bm},\wt\SY_{\Bm}, \SY_{\Bm}$, etc. make sense if we replace $G$ by $Z_G(s)$, hence one can define the complex $K_{\Bm, T,\SE}$ associated to $Z_G(s)$.
By (2.5.1), the varieties $\wt\SX_{\Bm}, \SX_{\Bm}$ with respect to $Z_G(s)$ are direct products of the analogous varieties appearing in 1.1 with respect to the $G_i$, hence Theorem 1.4 holds also for $Z_G(s)$ under a suitable adjustment. Concerning the $\Fq$-structure, Proposition 2.3 holds also for $Z_G(s)$. In fact, this is easily reduced to the case where $F$ permutes the factors of $Z_G(s) \simeq G_1 \times \cdots \times G_t$ transitively. In that case, $Z_G(s)^F \simeq G_1^{F^t}$, and we may assume that $s \in T_0^{F^t}$. Thus Proposition 2.3 holds in this case, hence also in the general case. In particular, one can define a Green function $Q^{Z_G(s)}_{\Bm, T}$ on $\SX_{\Bm, \unip}^{Z_G(s)}$ for $T \subset Z_G(s)$. \par For an $F$-stable torus $S$, put $(S^F)\wg = \Hom (S^F, \Ql^*)$. As in [SS2, 2.3], the set of $F$-stable tame local systems on $S$ is in bijection with the set $(S^F)\wg$ in such a way that the characteristic function $\x_{\SE, \vf_0}$ on $S^F$ gives an element of $(S^F)\wg$ (under the specific choice of $\vf_0: F^*\SE \,\raise2pt\hbox{$\underrightarrow{\sim}$}\, \SE$ as in 2.2). We denote by $\SE_{\th}$ the $F$-stable tame local system on $S$ corresponding to $\th \in (S^F)\wg$. \par The following theorem is an analogue of Lusztig's character formula for character sheaves [L2, Theorem 8.5]. A similar formula for the case of the exotic symmetric space (of level 2) was proved in [SS2, Theorem 2.4]. The proof of the theorem is quite similar to the proof given in [SS2, Theorem 2.4], so we omit the proof here. \begin{thm}[Character formula] Let $s, u \in G^F$ be such that $su = us$, where $s$ is semisimple and $u$ is unipotent. Then \begin{equation*} \x_{\Bm, T, \SE}(su, \Bv) = |Z_G(s)^F|\iv \sum_{\substack{x \in G^F \\ x\iv sx \in T^F}}Q^{Z_G(s)}_{\Bm, xTx\iv}(u,\Bv)\th(x\iv sx), \end{equation*} where $\th \in (T^F)\wg$ is such that $\SE = \SE_{\th}$. \end{thm} \para{2.7.} For later use, we shall introduce another type of Green functions. We follow the notation in 1.5. By using the basis $\{ e_1, \dots, e_n\}$ of $V$, we identify $\ol M_{p_i}$ with the subspace $M_{p_i}^+$ of $V$ so that \begin{equation*} V = M^+_{p_1} \oplus \cdots \oplus M^+_{p_r}. \end{equation*} Hence $B_L$ stabilizes each $M_{p_i}^+$. As an analogue of the construction of $\wt\SX_{\Bm}$, we consider the diagram \begin{equation*} \begin{CD} T @<\a^+ << \wt\SX^+_{\Bm} @>\pi^+_{\Bm}>> \SX_{\Bm}, \end{CD} \end{equation*} where \begin{align*} \wt\SX^+_{\Bm} &= \{ (x, \Bv, gB_L) \in G \times V^{r-1} \times G/B_L \mid g\iv xg \in B, g\iv \Bv \in \prod_{i=1}^{r-1} M^+_{p_i} \} \\ &\simeq G \times^{B_L}(B \times \prod_i M^+_{p_i}), \end{align*} and $\a^+ : (x, \Bv, gB_L) \mapsto p_T(g\iv xg), \pi^+_{\Bm} : (x, \Bv, gB_L) \mapsto (x, \Bv)$. Put \begin{equation*} \tag{2.7.1} \SX^+_{\Bm} = \bigcup_{g \in G}g(B \times \prod_iM^+_{p_i}). \end{equation*} Then $\wt\SX^+_{\Bm}$ is smooth, irreducible, and $\Im \pi^+_{\Bm} = \SX^+_{\Bm}$, but $\pi^+_{\Bm}$ is not proper, and $\SX^+_{\Bm}$ is not necessarily a locally closed subset of $\SX_{\Bm}$. We consider a variety \begin{equation*} \tag{2.7.2} \wh\SX^L_{\Bm} = G \times^L(P \times \prod_iM^+_{p_i}) \end{equation*} and put $d^+_{\Bm} = \dim \wh\SX^L_{\Bm}$. For a tame local system $\SE$ on $T$, we consider the complex $(\pi^+_{\Bm})_!(\a^+)^*\SE[d^+_{\Bm}]$ on $\SX_{\Bm}$, which we denote by $K^+_{\Bm, T,\SE}$. \para{2.8.} Let $\wh\SX^L_{\Bm}$ be as in 2.7.
We further define \begin{align*} \SX_{\Bm}^{P,+} &= \bigcup_{g \in L}g(B \times \prod_i M^+_{p_i}) = P \times \prod_i M^+_{p_i}, \\ \wt \SX^{L,+}_{\Bm} &= L \times^{B_L}(B \times \prod_i M^+_{p_i}). \end{align*} We define maps $\pi'_+: \wt\SX^+_{\Bm} \to \wh\SX^L_{\Bm}$, $\pi''_+: \wh\SX_{\Bm}^L \to \SX_{\Bm}$ in a way similar to that in 1.5. Thus we have $\pi^+_{\Bm} = \pi''_+\circ \pi'_+$. We consider a commutative diagram \begin{equation*} \tag{2.8.1} \begin{CD} \wt\SX^+_{\Bm} @<\wt p_+<< G \times \wt\SX^{L,+}_{\Bm} @>\wt q_+>> \wt\SX^L_{\Bm} \\ @V\pi'_+VV @VV r_+ V @VV\pi^L_{\Bm} V \\ \wh \SX^L_{\Bm} @<p_+ << G \times \SX_{\Bm}^{P,+} @>q_+>> \SX^L_{\Bm} \\ @V\pi''_+VV \\ \SX_{\Bm}, \end{CD} \end{equation*} where $q_+$ is the composite of the projection $G \times \SX^{P,+}_{\Bm} \to \SX^{P,+}_{\Bm}$ and the map $\SX^{P,+}_{\Bm} \to\SX^L_{\Bm}, (x, \Bv) \mapsto (\ol x, \Bv)$ under the identification $M^+_{p_i} = \ol M_{p_i}$ ($\ol x$ is as in 1.5). The map $\wt q_+$ is the composite of the projection $G \times \wt\SX_{\Bm}^{L,+} \to \wt\SX_{\Bm}^{L,+}$ and the natural map $\wt\SX^{L,+}_{\Bm} \to \wt\SX^L_{\Bm}$ induced from the map $L \times (B \times \prod_iM^+_{p_i}) \to L \times (B_L \times \prod_i\ol M_{p_i})$. $r_+ = \id \times r_+'$, where $r_+'$ is the natural map $\wt\SX^{L,+}_{\Bm} \to \SX^{P,+}_{\Bm}$. Both squares are cartesian, and we have \par \noindent (i) \ $p_+$ is a principal $L$-bundle. \par\noindent (ii) \ $q_+$ is a locally trivial fibration with fibre isomorphic to $G \times U_P$. \par \noindent Then (i) and (ii) imply a statement similar to (1.5.2). We consider the perverse sheaf $K^L_{\Bm, T, \SE}$ on $\SX^L_{\Bm}$ as in 1.5. Put $\SL^+ = (\a^+)^*\SE$. By a discussion similar to that in 1.5, $(\pi'_+)_!\SL^+[d^+_{\Bm}]$ is a perverse sheaf on $\wh\SX^L_{\Bm}$ satisfying the property \begin{equation*} \tag{2.8.2} p_+^*(\pi'_+)_!\SL^+[d^+_{\Bm} + a'] \simeq q_+^*K^L_{\Bm, T,\SE}[b'], \end{equation*} where $a' = \dim L$ and $b' = \dim G + \dim U_P$. Since $K^L_{\Bm, T,\SE}$ is decomposed as in (1.5.4), it follows that $(\pi'_+)_!\SL^+[d^+_{\Bm}]$ is a semisimple perverse sheaf on $\wh\SX^L_{\Bm}$, equipped with $W_{\Bm,\SE}$-action, and is decomposed as \begin{equation*} \tag{2.8.3} (\pi'_+)_!\SL^+[d^+_{\Bm}] \simeq \bigoplus_{\r \in W_{\Bm,\SE}\wg}\r \otimes B_{\r}, \end{equation*} where $B_{\r}$ is a simple perverse sheaf on $\wh\SX^L_{\Bm}$. \par By applying $(\pi''_+)_!$ on both sides of (2.8.3), we obtain a formula similar to (1.5.6), \begin{equation*} \tag{2.8.4} K^+_{\Bm, T,\SE} \simeq \bigoplus_{\r \in W_{\Bm,\SE}\wg} \r \otimes (\pi''_+)_!B_{\r}. \end{equation*} But note that $(\pi''_+)_!B_{\r}$ is not necessarily a perverse sheaf. \para{2.9.} We now consider the $\Fq$-structure, and assume that $T, B$ are both $F$-stable, hence $P$ and $L$ are also $F$-stable. Then all the varieties involved in the diagram (2.8.1) have natural $\Fq$-structures, and all the maps are $F$-equivariant. This is also true for the diagram (1.5.1). Recall that $K^L_{\Bm, T,\SE}$ has the natural $\Fq$-structure $\vf^L: F^*K^L_{\Bm, T,\SE} \,\raise2pt\hbox{$\underrightarrow{\sim}$}\, K^L_{\Bm, T,\SE}$. Then the decomposition (1.5.4) determines the isomorphism $\vf^L_{\r} : F^*\IC(\SX^L_{\Bm}, \SL^L_{\r}) \,\raise2pt\hbox{$\underrightarrow{\sim}$}\, \IC(\SX^L_{\Bm}, \SL^L_{\r})$ for each $\r$ such that $\vf^L = \sum_{\r}\s_{\r} \otimes \vf^L_{\r}$, where $\s_{\r}$ is the identity map on the representation space $\r$.
By (2.8.1), the map $\vf^L_{\r}$ induces an isomorphism $h_{\r} :F^*B_{\r} \,\raise2pt\hbox{$\underrightarrow{\sim}$}\, B_{\r}$. We also obtain an isomorphism $f_{\r} : F^*A_{\r} \,\raise2pt\hbox{$\underrightarrow{\sim}$}\, A_{\r}$ by a similar discussion applied to the diagram (1.5.1). \par We have the following lemma. \begin{lem} For any $z \in (\SX_{\Bm}^+)^F$ and $\r \in W_{\Bm,\SE}\wg$, we have \begin{equation*} \Tr\bigl( h^*_{\r}, \BH_c^i((\pi''_+)\iv(z), B_{\r})\bigr) = \Tr\bigl(f^*_{\r}, \BH_c^{i - \dim U_P}((\pi'')\iv(z), A_{\r})\bigr)q^{\dim U_P}, \end{equation*} where $f^*_{\r}, h^*_{\r}$ are the linear maps on the cohomologies induced from $f_{\r}, h_{\r}$. \end{lem} \begin{proof} We consider the following commutative diagram \begin{equation*} \tag{2.10.1} \begin{CD} \wh\SX^L_{\Bm} @<p_+<< G \times (P \times \prod M_{p_i}^+) @>q_+>> \SX^L_{\Bm} \\ @V\e VV @V\wt\e VV @VV\id V \\ G \times^L(P \times \prod M_{p_i}) @<p_{\bullet}<< G \times (P \times \prod M_{p_i}) @>q>> \SX^L_{\Bm} \\ @V\xi VV @VV\id V \\ \wh\SX^P_{\Bm} @<p<< G \times (P \times \prod M_{p_i}), \end{CD} \end{equation*} where $\wt\e$ is the inclusion map, and $\e$ is the map induced from $\wt\e$ by taking the quotients by $L$. $\xi$ is the natural map from the quotient by $L$ to the quotient by $P$, and $p_{\bullet}$ is the quotient by $L$. Note that $\e$ is injective, and $\xi$ is a locally trivial fibration with fibre isomorphic to $U_P$. \par By a similar construction of $A_{\r}$ and $B_{\r}$, one can define a simple perverse sheaf $\wt B_{\r}$ on $G \times^L(P \times \prod M_{p_i})$. By (2.10.1), we have \begin{equation*} \tag{2.10.2} \xi^*A_{\r}[d] \simeq \wt B_{\r}, \qquad \e^*\wt B_{\r} \simeq B_{\r}, \end{equation*} where $d = \dim U_P$. (The latter formula follows from the fact that the upper left square in (2.10.1) is cartesian.) We now define $\pi_{\bullet}'' : G \times^L(P \times \prod M_{p_i}) \to \SX_{\Bm}$ by $(g*(x,\Bv)) \mapsto (gxg\iv, g\Bv)$. Then $(\pi_{\bullet}'')\iv(\SX_{\Bm}^+) = \wh\SX^L_{\Bm}$ and the restriction of $\pi''_{\bullet}$ on $\wh\SX^L_{\Bm}$ coincides with $\pi''_+$. It follows, for $z \in (\SX^+_{\Bm})^F$, that \begin{align*} \tag{2.10.3} \BH^i_c((\pi''_+)\iv(z), B_{\r}) &\simeq \BH^i_c((\pi''_{\bullet})\iv(z), \wt B_{\r}) \\ &= \BH^i_c((\pi''_{\bullet})\iv(z), \xi^* A_{\r}[d]) \\ &\simeq \BH^i_c((\pi'')\iv(z), \xi_!\xi^*A_{\r}[d]) \\ &\simeq \BH_c^{i - d}((\pi'')\iv(z), A_{\r})(d) \end{align*} since $(\pi_{\bullet}'')\iv(z) \to (\pi'')\iv(z)$ is a locally trivial fibration with fibre isomorphic to $U_P$, and so $\xi_!\xi^*A_{\r} \simeq A_{\r}[-2d](d)$, where $(d)$ denotes the Tate twist. Note that the Frobenius actions $h^*_{\r}, f^*_{\r}$ on these cohomologies come from the Frobenius action $\vf^L_{\r}$ on $\IC(\SX^L_{\Bm}, \SL^L_{\r})$. Hence the isomorphism in (2.10.3) is compatible with the maps $h^*_{\r}, f^*_{\r}$. This proves the lemma. \end{proof} \para{2.11.} We now consider the general case. We assume that $T, L$ and $P$ are $F$-stable, but $B$ is not necessarily $F$-stable. Since $P$ is $F$-stable, in particular all the subspaces $M_{p_i}$ are $F$-stable. We consider the diagram (2.8.1). Except the top row, all the objects involved in (2.8.1) are $F$-equivariant. We consider the isomorphism $\vf^L : F^*K^L_{\Bm, T,\SE} \,\raise2pt\hbox{$\underrightarrow{\sim}$}\, K^L_{\Bm, T,\SE}$. Since $p_+, q_+$ are $F$-equivariant, $\vf^L$ induces an isomorphism $\vf': F^*((\pi'_+)_!\SL^+) \,\raise2pt\hbox{$\underrightarrow{\sim}$}\, (\pi'_+)_!\SL^+$. 
Since $\pi''_+$ is $F$-equivariant, $\vf'$ induces an isomorphism $\vf^+ : F^*K^+_{\Bm, T, \SE} \,\raise2pt\hbox{$\underrightarrow{\sim}$}\, K^+_{\Bm, T,\SE}$. We define a function $\x^+_{\Bm, T,\SE}$ by \begin{equation*} \tag{2.11.1} \x^+_{\Bm, T,\SE} = (-q)^{- \dim U_P}\x_{K^+_{\Bm, T,\SE}, \vf^+}. \end{equation*} \para{2.12.} Let $T_0 \subset B_0$ be as in 2.1. We also consider the $F$-stable parabolic subgroup $P_0$ containing $B_0$ which is conjugate to $P$ under $G^F$. Let $L_0$ be the Levi subgroup of $P_0$ containing $T_0$. Since $T$ is $G^F$-conjugate to an $F$-stable maximal torus in $L_0$, in considering the $\Fq$-structure of $K_{\Bm, T,\SE}$, we may assume that $T = T_w \subset L_0$ for $w \in (W_0)_{\Bm}$. We shall consider two complexes $K_{\Bm, T,\SE}$ and $K_{\Bm, T_0, \SE_0}$ as discussed in 2.4. We follow the notation in 2.4. In particular, let $\vf_T$ (resp. $\vf_{T_0}$) be the isomorphism $\vf$ with respect to $K_{\Bm, T,\SE}$ (resp. $K_{\Bm, T_0, \SE_0}$). By the decomposition of $K_{\Bm, T_0, \SE_0}$ in (1.4.2), $\vf_{T_0}$ determines an isomorphism $\vf_{\r} : F^*\IC(\SX_{\Bm}, \SL_{\r})[d_{\Bm}] \,\raise2pt\hbox{$\underrightarrow{\sim}$}\, \IC(\SX_{\Bm}, \SL_{\r})[d_{\Bm}]$ such that $\vf_{T_0} = \sum_{\r \in (W_0)_{\Bm,\SE_0}\wg}\s_{\r}\otimes \vf_{\r}$, where $\s_{\r}$ is the identity map on the representation space $\r$. By (2.4.2), under the isomorphism $\wt\d : K_{\Bm, T, \SE} \,\raise2pt\hbox{$\underrightarrow{\sim}$}\, K_{\Bm, T_0, \SE_0}$, the map $\vf_T$ can be described in terms of the map $\vf_{T_0}$ and $\th_w$, where $\th_w$ corresponds to the action of $w$ on each $(W_0)_{\Bm, \SE_0}$-module $\r$. It follows that \begin{equation*} \tag{2.12.1} \vf_{T} \simeq \sum_{\r \in (W_0)_{\Bm, \SE_0}\wg}w|_{\r}\otimes \vf_{\r}, \end{equation*} where $w|_{\r}$ denotes the action of $w$ on $\r$. \par Next we consider two complexes $K^+_{\Bm, T, \SE}$ and $K^+_{\Bm, T_0, \SE_0}$. Let $\vf^+_T$ (resp. $\vf^+_{T_0}$) be the isomorphism with respect to $K^+_{\Bm, T, \SE}$ (resp. $K^+_{\Bm, T_0, \SE_0}$). The isomorphisms $\vf^L_{T}$ and $\vf^L_{T_0}$ are defined similarly with respect to $K^L_{\Bm, T, \SE}$ and $K^L_{\Bm, T_0, \SE_0}$. Then a formula similar to (2.12.1) holds for $\vf^L_T$ and $\vf^L_{T_0}$. We denote by $K^+_T$ (resp. $K^+_{T_0}$) the complex $(\pi'_+)_!\SL^+[d^+_{\Bm}]$ defined with respect to $T$ (resp. $T_0$). By the diagram (2.8.1), the isomorphism $\vf^L_T$ induces an isomorphism $h_T: F^*K^+_T \,\raise2pt\hbox{$\underrightarrow{\sim}$}\, K^+_T$, and similarly $\vf^L_{T_0}$ induces $h_{T_0}: F^*K^+_{T_0} \,\raise2pt\hbox{$\underrightarrow{\sim}$}\, K^+_{T_0}$. Then again by (2.8.1), we have \begin{equation*} \tag{2.12.2} h_T \simeq \sum_{\r \in (W_0)_{\Bm, \SE_0}\wg}w|_{\r} \otimes h_{\r}, \end{equation*} where $h_{\r}$ is the isomorphism in 2.9 defined with respect to $T_0$. By applying $(\pi''_+)_!$, we see that \begin{equation*} \tag{2.12.3} \vf^+_T \simeq \sum_{\r \in (W_0)_{\Bm,\SE_0}\wg}w|_{\r} \otimes \vf^+_{\r}, \end{equation*} where $\vf^+_{\r} : F^*(\pi''_+)_!B_{\r} \,\raise2pt\hbox{$\underrightarrow{\sim}$}\, (\pi''_+)_!B_{\r}$ is an isomorphism induced from $h_{\r}$. \begin{prop} Under the setting in 2.11, we have \begin{equation*} \x^+_{\Bm, T,\SE}(z) = \begin{cases} \x_{\Bm, T,\SE}(z) &\quad\text{ if } z \in (\SX^+_{\Bm})^F, \\ 0 &\quad\text{otherwise.} \end{cases} \end{equation*} \end{prop} \begin{proof} Since $\Im \pi^+_{\Bm} = \SX^+_{\Bm}$, it is clear that $\x^+_{\Bm, T,\SE}(z) = 0$ unless $z \in (\SX^+_{\Bm})^F$. Take $z \in (\SX^+_{\Bm})^F$.
By (2.12.1), we have \begin{equation*} \x_{\Bm, T,\SE}(z) = \sum_{\r \in (W_0)_{\Bm, \SE_0}\wg}\Tr(w, \r)\x_{I_{\r}, \vf_{\r}}, \end{equation*} where $I_{\r} = \IC(\SX_{\Bm}, \SL_{\r})[d_{\Bm}]$. Here $\SH^i_z(I_{\r}) \simeq \BH^i_c((\pi'')\iv(z), A_{\r})$ by Proposition 1.6, and the isomorphism on $\SH^i_z(I_{\r})$ induced from $\vf_{\r}$ coincides with the isomorphism $f^*_{\r}$ on $\BH^i_c((\pi'')\iv(z), A_{\r})$. On the other hand, by (2.12.3) we have \begin{equation*} \x_{K^+_{\Bm, T,\SE}, \vf^+}(z) = \sum_{\r \in (W_0)_{\Bm, \SE_0}\wg}\Tr(w, \r)\x_{J_{\r}, \vf^+_{\r}}, \end{equation*} where $J_{\r} = (\pi''_+)_!B_{\r}$. Here $\SH^i_z(J_{\r}) \simeq \BH^i_c((\pi''_+)\iv(z), B_{\r})$, and the isomorphism on $\SH^i_z(J_{\r})$ induced from $\vf^+_{\r}$ coincides with $h^*_{\r}$ on $\BH^i_c((\pi''_+)\iv(z), B_{\r})$. The proposition now follows from Lemma 2.10 and (2.11.1). \end{proof} \para{2.14.} As in the case of Green functions $Q_{\Bm, T}$, we define Green functions $Q^+_{\Bm, T}$ as the restriction of $\x^+_{\Bm, T, \SE}$ on $\SX^F_{\Bm,\unip}$. In view of Proposition 2.13 and Proposition 2.3, $Q^+_{\Bm, T}$ does not depend on the choice of $\SE$. As in the case of Green functions, the definition of $Q^+_{\Bm, T}$ can be generalized to the case where $G$ is replaced by $Z_G(s)$, in which case we denote it by $Q^{+, Z_G(s)}_{\Bm, T}$. Now take $(su, \Bv) \in (\SX^+_{\Bm})^F$ under the setting in Theorem 2.6. Then by Proposition 2.13 and Theorem 2.6, the value $\x^+_{\Bm,T,\SE}(su, \Bv) = \x_{\Bm, T,\SE}(su,\Bv)$ can be described as a linear combination of various Green functions $Q^{Z_G(s)}_{\Bm, xTx\iv}(u,\Bv)$ such that $s \in xTx\iv$. One can check that if $(su, \Bv) \in \SX^+_{\Bm}$, then $(u,\Bv) \in \SX^{+,Z_G(s)}_{\Bm, \unip}$. It follows, by Proposition 2.13, that $Q^{Z_G(s)}_{\Bm, xTx\iv}(u,\Bv) = Q^{+,Z_G(s)}_{\Bm,xTx\iv}(u,\Bv)$. Thus as a corollary to Theorem 2.6, we have \begin{cor} [Character formula for $\x^+_{\Bm, T,\SE}$] Under the assumption in Theorem 2.6, we have \begin{equation*} \x^+_{\Bm, T,\SE}(su,\Bv) = |Z_G(s)^F|\iv \sum_{\substack{x \in G^F \\ x\iv sx \in T^F}}Q^{+,Z_G(s)}_{\Bm, xTx\iv}(u,\Bv)\th(x\iv sx). \end{equation*} \end{cor} \par \section{Orthogonality relations} \para{3.1.} For a fixed $\Bm \in \SQ_{n,r}$, we have defined in the previous sections $K_{\Bm, T,\SE}$, $\x_{\Bm, T,\SE}$, $Q_{\Bm, T}$, etc. and $K^+_{\Bm, T, \SE}, \x^+_{\Bm, T, \SE}, Q^+_{\Bm, T}$, etc. From this section on, changing the notation, we denote $K_{\Bm, T,\SE}, \x_{\Bm, T,\SE}, Q_{\Bm, T}$, etc. by $K^-_{\Bm, T,\SE}, \x^-_{\Bm, T,\SE}, Q^-_{\Bm, T}$, etc., by attaching the sign ``$-$''. In this section, we shall prove the orthogonality relations for the functions $\x^{\pm}_{\Bm, T,\SE}$ and $Q^{\pm}_{\Bm,T}$. Before stating the results, we prepare the following. \begin{prop} Let $K = K^{\ve}_{\Bm, T, \SE}$ $($resp. $K' = K^{\ve'}_{\Bm', T', \SE'}$$)$ be a complex on $\SX$ associated to $\Bm$ and $(M_{p_i})$ $($resp. $\Bm'$ and $(M'_{p'_i}))$, where $\ve, \ve' \in \{ +, - \}$. Assume that $\SE'$ is the constant sheaf on $T'$, and $\SE$ is a non-constant sheaf on $T$. Then we have \begin{equation*} \BH_c^i(\SX, K \otimes K') = 0 \quad\text{ for all } i. \end{equation*} \end{prop} \begin{proof} We prove the proposition by an argument similar to the proof of Proposition 7.2 in [L2]. In the discussion below, we consider the case where $\ve = -, \ve' = +$. The other cases are dealt with similarly.
Let $B'$ be the Borel subgroup of $G$ containing $T'$, which is the stabilizer of the total flag $(M'_i)$, and $P'$ the parabolic subgroup of $G$ containing $B'$ which is the stabilizer of the partial flag $(M'_{p'_i})$. Let $L'$ be the Levi subgroup of $P'$ containing $T'$, and $U_{P'}$ the unipotent radical of $P'$. Put $B'_{L'} = B' \cap L'$. We consider the fibre product $Z = \wt\SX_{\Bm} \times_{\SX} \wt\SX^+_{\Bm'}$, where $Z$ can be written as \begin{equation*} \begin{split} Z = \{ (gB, &g'B'_{L'}, x, \Bv) \in G/B \times G/B'_{L'} \times G \times V^{r-1} \\ &\mid g\iv xg \in B, g'^{-1}x{g'} \in B', g\iv \Bv \in \prod_i M_{p_i}, g'^{-1} \Bv \in \prod_i M'^+_{p'_i} \}. \end{split} \end{equation*} Let $\SL = \a^*\SE$ and $\SL' = (\a^+)^*\SE'$. Since $K = (\pi_{\Bm})_!\SL$ and $K' = (\pi^+_{\Bm'})_!\SL'$ up to shift, the K\"unneth formula gives \begin{equation*} \tag{3.2.1} \BH^i_c(\SX, K \otimes K') \simeq H_c^i(Z, \SL \boxtimes \SL') \end{equation*} up to a degree shift. Hence in order to prove the proposition, it is enough to show that the right hand side of (3.2.1) is equal to zero for each $i$. For each $G$-orbit $\SO$ of $G/B \times G/B'$, put \begin{equation*} Z_{\SO} = \{ (gB, g'B'_{L'}, (x, \Bv)) \in Z \mid (gB, g'B') \in \SO\}. \end{equation*} Then $Z = \coprod_{\SO} Z_{\SO}$ is a finite partition, and $Z_{\SO}$ is a locally closed subvariety of $Z$. Hence it is enough to show that $H_c^i(Z_{\SO}, \SL\boxtimes\SL') = 0$ for any $i$. We consider the morphism $\vf_{\SO} : Z_{\SO} \to \SO, (gB, g'B'_{L'}, (x, \Bv)) \mapsto (gB, g'B')$. Then by the Leray spectral sequence, we have \begin{equation*} H_c^i(\SO, R^j(\vf_{\SO})_!(\SL\boxtimes\SL')) \Rightarrow H_c^{i+j}(Z_{\SO},\SL\boxtimes \SL'). \end{equation*} Thus it is enough to show that $R^j(\vf_{\SO})_!(\SL\boxtimes \SL') = 0$ for any $j$, which is equivalent to the statement that $H_c^j(\vf_{\SO}\iv(\xi), \SL\boxtimes\SL') = 0$ for any $j$ and any $\xi \in \SO$. Since $\SL\boxtimes\SL'$ is a $G$-equivariant local system, it is enough to show this for a single element $\xi \in \SO$. Thus, we may choose $\xi = (B, nB') \in \SO$, where $n \in G$ is such that $nT'n\iv = T$. Then $\vf_{\SO}\iv(\xi)$ is given as \begin{equation*} \begin{split} \vf_{\SO}\iv(\xi) = \{ (B, &nuB'_{L'}, x, \Bv) \mid x \in B, n\iv xn \in B', \\ &\Bv \in \prod_iM_{p_i}, u\iv n\iv \Bv \in \prod_i M'^+_{p'_i}, u \in U_{P'} \}. \end{split} \end{equation*} Thus $Y = \vf_{\SO}\iv(\xi)$ is isomorphic to $(B \cap nB'n\iv) \times Y_1$, where \begin{equation*} Y_1 = \{(u, \Bv) \in U_{P'} \times V^{r-1} \mid \Bv \in \prod_iM_{p_i} \cap \prod_inu( M'^+_{p'_i}) \}. \end{equation*} Let $h : Y \to U_{P'}$ be the map obtained from the projection $h_1 : Y_1 \to U_{P'}$. Again by using the Leray spectral sequence associated to the map $h$, in order to show $H^i_c(Y, \SL\boxtimes\SL') = 0$, it is enough to see that $H^i_c(h\iv(u), \SL \boxtimes\SL') = 0$ for any $u \in U_{P'}$. For each $u \in U_{P'}$, the fibre $h_1\iv(u)$ has the structure of an affine space. Hence $h\iv(u)$ is isomorphic to the direct product of $(B \cap nB'n\iv)$ with an affine space. Since $B \cap nB'n\iv \simeq T \times (U \cap n U' n\iv)$, where $U'$ is the unipotent radical of $B'$, $h\iv(u)$ can be written as $h\iv(u) \simeq T \times Y_2$ with an affine space $Y_2$.
If we denote by $p : h\iv(u) \to T$ the projection on $T$, the restriction of $\SL \boxtimes \SL'$ on $h\iv(u)$ coincides with $p^*(\SE \otimes f^*\SE') = p^*\SE \simeq \SE \boxtimes \Ql$ since $\SE'$ is the constant sheaf, where $f : T \to T' = n\iv Tn$. Hence we have only to show that $H^i_c(T, \SE) = 0$ for any $i$. But since $\SE$ is a non-constant tame local system on $T$, we have $H_c^i(T,\SE) = 0$ for any $i$. This proves the proposition. \end{proof} \para{3.3.} We consider the complexes $K^{\pm}_{\Bm, T, \SE}, K^{\pm}_{\Bm', T',\SE'}$, and their characteristic functions $\x^{\pm}_{\Bm, T, \SE}, \x^{\pm}_{\Bm', T', \SE'}$. We put $N(T, T') = \{ n \in G \mid n\iv Tn = T'\}$. For $\ve, \ve' \in \{ +, -\}$ and $n \in N(T, T')$, we define $a_{\ve, \ve'}(\Bm, \Bm'; n)$ by \begin{equation*} \tag{3.3.1} a_{\ve,\ve'}(\Bm, \Bm'; n) = \sum_{i=1}^{r-1}\dim (M^{\ve}_{p_i} \cap n({M'}^{\ve'}_{p'_i})), \end{equation*} where $M^+_{p_i}$ is as before, and we put $M_{p_i} = M^-_{p_i}$. We define $M'^{\ve'}_{p'_i}$ similarly. Also put \begin{equation*} \tag{3.3.2} p_-(\Bm) = \dim \prod_{i=1}^{r-1} M_{p_i} = \sum_{i=1}^{r-1}p_i, \quad p_+(\Bm) = \dim \prod_{i=1}^{r-1} M^+_{p_i} = p_{r-1}. \end{equation*} \par We have the following orthogonality relations, which are analogues of Theorems 9.2 and 9.3 in [L2]. Also see Theorems 3.4 and 3.5 in [SS2] for the case of the exotic symmetric space of level 2. \begin{thm}[Orthogonality relations for $\x^{\pm}_{\Bm, T, \SE}$] Let $\SE = \SE_{\th}, \SE' = \SE_{\th'}$ with $\th \in (T^F)\wg, \th' \in ({T'}^F)\wg$. Then we have \begin{equation*} \tag{3.4.1} \begin{split} (-1)^{p_{\ve}(\Bm) + p_{\ve'}(\Bm')}&|G^F|\iv \sum_{z \in \SX^F} \x^{\ve}_{\Bm, T, \SE}(z)\x^{\ve'}_{\Bm',T',\SE'}(z) \\ &= |T^F|\iv|{T'}^F|\iv\sum_{\substack{n \in N(T,T')^F \\ t \in T^F}} \th(t)\th'(n\iv tn)q^{a_{\ve,\ve'}(\Bm, \Bm';n)}. \end{split} \end{equation*} \end{thm} \begin{thm}[Orthogonality relations for Green functions] \begin{equation*} \tag{3.5.1} \begin{split} (-1)^{p_{\ve}(\Bm) + p_{\ve'}(\Bm')}&|G^F|\iv \sum_{z \in \SX^F\uni} Q^{\ve}_{\Bm, T}(z)Q^{\ve'}_{\Bm', T'}(z) \\ &= |T^F|\iv |{T'}^F|\iv \sum_{n \in N(T, T')^F}q^{a_{\ve,\ve'}(\Bm, \Bm';n)}. \end{split} \end{equation*} \end{thm} \para{3.6.} As was discussed in Section 2, the functions $\x^{\pm}_{\Bm, T,\SE}, Q^{\pm}_{\Bm, T}$ make sense if we replace $G$ by its subgroup $Z_G(s)$, and $G \times V^{r-1}$ by $Z_G(s) \times V^{r-1}$ for a semisimple element $s \in G^F$. Theorems 3.4 and 3.5 are then formulated for this general setting. In what follows, we shall prove Theorems 3.4 and 3.5 simultaneously under this setting. \par First we note \par \noindent (3.6.1) \ Theorem 3.4 holds if $\th'$ is the trivial character and $\th$ is a non-trivial character. \par In fact, by the trace formula for the Frobenius maps, the left hand side of (3.4.1) coincides, up to scalar, with \begin{equation*} \sum_i(-1)^i\Tr(F^*, \BH_c^i(\SX, K^{\ve}_{\Bm, T, \SE} \otimes K^{\ve'}_{\Bm',T',\SE'})), \end{equation*} where $F^*$ is the isomorphism induced from $\vf: F^*K^{\ve}_{\Bm,T,\SE} \,\raise2pt\hbox{$\underrightarrow{\sim}$}\, K^{\ve}_{\Bm,T,\SE}$ and \linebreak $\vf': F^*K^{\ve'}_{\Bm',T',\SE'} \,\raise2pt\hbox{$\underrightarrow{\sim}$}\, K^{\ve'}_{\Bm',T',\SE'}$. Since $\SE'$ is a constant sheaf and $\SE$ is a non-constant sheaf, by Proposition 3.2, the left hand side of (3.4.1) is equal to zero. On the other hand, the right hand side of (3.4.1) is equal to zero by the orthogonality relations for irreducible characters of $T^F$. Hence (3.6.1) holds.
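\par (By way of illustration, we add an elementary remark which is not part of the original argument: with $\th'$ trivial, the inner sum over $t \in T^F$ on the right hand side of (3.4.1) reduces, for each $n$, to $\sum_{t \in T^F}\th(t)$, and this vanishes since $\th$ is a non-trivial linear character of the finite abelian group $T^F$.)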
\par Next we show \par \noindent (3.6.2) \ Theorem 3.4 holds if there exist $F$-stable Borel subgroups $B, B'$ such that $B \supset T, B' \supset T'$. \par Since $B, B'$ are $F$-stable, we can compute $\x^{\pm}_{\Bm,T,\SE}$ by using the complexes $(\pi_{\Bm})_!\a^*\SE$ and $(\pi^+_{\Bm})_!(\a^+)^*{\SE}$ defined in Sections 1 and 2. Then by the trace formula for the Frobenius map, we have \begin{align*} (-1)^{d_{\Bm}}\x_{\Bm,T,\SE}(x, \Bv) &= |B^F|\iv\sum_{\substack{g \in G^F \\ g\iv xg \in B^F \\ g\iv \Bv \in \prod_iM_{p_i}}}\x_{\SE, \vf_0}(p_T(g\iv xg)), \\ (-1)^{d^+_{\Bm'}}\x^+_{\Bm', T',\SE'}(x, \Bv) &= (-q)^{-\dim U_{P'}} |B'^F_{L'}|\iv\sum_{\substack{g \in G^F \\ g\iv xg \in B'^F \\ g\iv \Bv \in \prod_iM'^+_{p'_i}}}\x_{\SE', \vf'_0}(p_{T'}(g\iv xg)). \end{align*} Since $d_{\Bm} = n^2 + p_-(\Bm)$ by Lemma 1.2 and $d^+_{\Bm'} = n^2 + \dim U_{P'} + p_+(\Bm')$ by (2.7.2), we have \begin{equation*} \tag{3.6.3} \begin{split} (-1)^{p_{\ve}(\Bm) + p_{\ve'}(\Bm')}&|G^F|\iv\sum_{(x, \Bv) \in \SX^F} \x^{\ve}_{\Bm,T,\SE}(x, \Bv)\x^{\ve'}_{\Bm',T',\SE'}(x,\Bv) \\ &= |G^F|\iv|B^F|\iv|B'^F|\iv \sum _{(*)}\th(p_T(g\iv xg))\th'(p_{T'}(g'^{-1}xg')), \end{split} \end{equation*} where the sum (*) is taken over all $x \in G^F, \Bv \in (V^{r-1})^F, g,g' \in G^F$ such that $g\iv xg \in B^F, g'^{-1}xg' \in B'^F, g\iv \Bv \in \prod_i (M^{\ve}_{p_i})^F, g'^{-1}\Bv \in \prod_i (M'^{\ve'}_{p'_i})^F$. We change the variables; put $y = g\iv xg, h = g\iv g', \Bv' = g\iv \Bv$. Then the condition (*) is equivalent to the condition (**) $g \in G^F,h \in G^F, y \in B^F, \Bv' \in \prod_i(M^{\ve}_{p_i})^F$ such that $h\iv yh \in B'^F, h\iv \Bv' \in \prod_i (M'^{\ve'}_{p'_i})^F$. We consider the partition of $G$ into double cosets $B\backslash G/B'$. For each $F$-stable coset $BnB'$ of $G$, we may assume that $n\iv Tn = T'$ since $T$ and $T'$ are $G^F$-conjugate. Then the sum $\sum_{(*)}$ in (3.6.3) can be written as \begin{equation*} |G^F||U^F||U'^F| \sum_{n \in (B\backslash G/B')^F} \sum_{\substack{ h \in (TnT')^F \\ t \in T^F}}\th(t)\th'(n\iv tn)q^{a_{\ve,\ve'}(\Bm,\Bm';n)}. \end{equation*} Thus Theorem 3.4 holds. (3.6.2) is proved. \para{3.7.} In this subsection, we show that Theorem 3.4 holds for $G$ under the assumption that Theorem 3.5 holds for subgroups $Z_G(s)$ of $G$. By making use of the character formulas in Theorem 2.6 and Corollary 2.15, we have \begin{equation*} \begin{split} &(-1)^{p_{\ve}(\Bm) + p_{\ve'}(\Bm')}|G^F|\iv \sum_{z \in \SX^F}\x^{\ve}_{\Bm, T, \SE}(z) \x^{\ve'}_{\Bm', T', \SE'}(z) \\ &= |G^F|\iv (-1)^{p_{\ve}(\Bm) + p_{\ve'}(\Bm')}\sum_{\substack{s \in G^F \\ x, x' \in G^F \\ x\iv sx \in T^F \\ {x'}\iv sx' \in {T'}^F}} f(s, x, x')|Z_G(s)^F|^{-2} \th(x\iv sx)\th'({x'}\iv sx'), \end{split} \end{equation*} where \begin{equation*} f(s,x,x') = \sum_{\substack{u \in Z_G(s)^F\uni \\ \Bv \in (V^{r-1})^F}} Q^{\ve, Z_G(s)}_{\Bm, xTx\iv}(u, \Bv) Q^{\ve', Z_G(s)}_{\Bm', x'T'{x'}\iv}(u, \Bv). \end{equation*} By applying Theorem 3.5 to $Z_G(s)$, we see that \begin{equation*} f(s,x,x') = (-1)^{p_{\ve}(\Bm) + p_{\ve'}(\Bm')} |Z_G(s)^F||T^F|\iv |{T'}^F|\iv\sum_{\substack{ n \in Z_G(s)^F \\ n\iv xTx\iv n = x'T'{x'}\iv}} q^{a_{\ve,\ve'}(\Bm, \Bm'; n)}. \end{equation*} It follows that the previous sum is equal to \begin{equation*} \begin{split} &|G^F|\iv |T^F|\iv |{T'}^F|\iv \\ &\times \sum_{\substack{ s \in G^F \\ x, x' \in G^F \\ x\iv sx \in T^F \\ {x'}\iv sx' \in {T'}^F \\ }} |Z_G(s)^F|\iv \sum_{\substack{n \in Z_G(s)^F \\ n\iv xTx\iv n = x'T'{x'}\iv}} \th(x\iv sx)\th'({x'}\iv sx') q^{a_{\ve, \ve'}(\Bm, \Bm';n)}.
\end{split} \end{equation*} Now put $t = x\iv sx \in T^F$ and $y = x\iv nx'$. Then $y$ is an element of $G^F$ such that $y\iv Ty = T'$. Under this change of variables, the above sum can be rewritten as \begin{equation*} |G^F|\iv |T^F|\iv |{T'}^F|\iv \sum_{\substack{ x \in G^F \\ t \in T^F}} |Z_G(t)^F|\iv \sum_{\substack{ y \in N(T,T')^F\\ x' \in (xZ_G(t)y)^F}}\th(t)\th'(y\iv t y) q^{a_{\ve, \ve'}(\Bm, \Bm'; y)}, \end{equation*} which is equal to \begin{equation*} |T^F|\iv|{T'}^F|\iv\sum_{t \in T^F}\sum_{y \in N(T,T')^F} \th(t)\th'(y\iv ty)q^{a_{\ve,\ve'}(\Bm, \Bm'; y)}. \end{equation*} Thus our assertion holds. \para{3.8.} We shall show that Theorem 3.5 holds for $G$ under the assumption that it holds for $Z_G(s)$ if $s$ is not central. Hence, by 3.7, Theorem 3.4 holds for such groups. Put \begin{align*} A &= (-1)^{p_{\ve}(\Bm) + p_{\ve'}(\Bm')} |G^F|\iv\sum_{z \in \SX^F\uni} Q^{\ve}_{\Bm, T}(z)Q^{\ve'}_{\Bm', T'}(z), \\ B &= |T^F|\iv |{T'}^F|\iv \sum_{n \in N(T, T')^F}q^{a_{\ve,\ve'}(\Bm, \Bm';n)}. \end{align*} We want to show that $A = B$. By making use of a part of the arguments in 3.7 (which can be applied to the case where $s \notin Z(G)^F$), we see that \begin{equation*} \tag{3.8.1} \begin{split} &(-1)^{p_{\ve}(\Bm) + p_{\ve'}(\Bm')}|G^F|\iv \sum_{z \in \SX^F}\x^{\ve}_{\Bm, T, \SE}(z) \x^{\ve'}_{\Bm', T', \SE'}(z) - A \sum_{s \in Z(G)^F}\th(s)\th'(s) \\ &= |T^F|\iv|{T'}^F|\iv\sum_{t \in T^F}\sum_{y \in N(T,T')^F} \th(t)\th'(y\iv ty)q^{a_{\ve,\ve'}(\Bm, \Bm'; y)} - B|Z(G)^F|. \end{split} \end{equation*} This formula holds for any $\th \in (T^F)\wg$ and $\th' \in ({T'}^F)\wg$. The case where $G = T$ is included in the case discussed in (3.6.2). So we may assume that $G \ne T$, hence $Z(G) \ne G$. Now assume that $q > 2$. Then one can find a linear character $\th$ on $T^F$ such that $\th|_{Z(G)^F} = \id$ and that $\th \ne \id$. We choose $\th'$ to be the identity character on ${T'}^F$. Then by (3.6.1), the first term of the left hand side of (3.8.1) coincides with the first term of the right hand side. This implies that $A = B$ as asserted. The remaining case is the case where $q = 2$. If $T$ is not a split torus, one can still find a linear character $\th$ satisfying the above property, and the above discussion can be applied. So we may assume that both $T$ and $T'$ are $\BF_q$-split. But in this case Theorem 3.4 holds by (3.6.2). Hence the first term of the left hand side of (3.8.1) coincides with the first term of the right hand side. By choosing the identity characters $\th, \th'$, again we have $A = B$. Thus our assertion holds. \para{3.9.} We are now ready to prove Theorems 3.4 and 3.5. First note that Theorems 3.4 and 3.5 hold for $Z_G(s)$ in the case where $Z_G(s) = T$, i.e., in the case where $s$ is regular semisimple. In fact, in that case, we have $T = T'$ and $N(T, T') = T$. We have $\x^-_{\Bm, T, \SE}(t, \Bv) = (-1)^{d_{\Bm}}\th(t)$ if $\Bv \in \prod_i (M^-_{p_i})^F$ and is equal to zero otherwise. A similar formula holds for $\x^+_{\Bm, T,\SE}$ by replacing $d_{\Bm}$ by $d^+_{\Bm}$. Theorems 3.4 and 3.5 follow from this. Now by induction on the semisimple rank of $Z_G(s)$, we may assume that Theorem 3.5 holds for $Z_G(s)$ if $s$ is non-central. Then by 3.8, Theorem 3.5 holds for $G$, and by 3.7, Theorem 3.4 holds for $G$. This completes the proof of Theorems 3.4 and 3.5. \para{3.10.} An $r$-tuple of partitions $\Bla = (\la^{(1)}, \dots, \la^{(r)})$ is called an $r$-partition of $n$ if $n = \sum_{i=1}^r|\la^{(i)}|$. We denote by $\SP_{n,r}$ the set of all $r$-partitions of $n$.
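For instance (to fix ideas with a small example), $\Bla = ((2,1), \emptyset, (1))$ is a $3$-partition of $4$, since $|(2,1)| + |\emptyset| + |(1)| = 4$.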
In the case where $r = 1$, we denote $\SP_{n,1}$ simply by $\SP_n$. For each $\Bm \in \SQ_{n,r}$, we denote by $\SP(\Bm)$ the subset of $\SP_{n,r}$ consisting of $\Bla$ such that $|\la^{(i)}| = m_i$. Let $S_{\Bm} = S_{m_1} \times \cdots \times S_{m_r}$ be the Young subgroup of $S_n$ corresponding to $\Bm = (m_1, \dots, m_r) \in \SQ_{n,r}$. Recall that irreducible characters of $S_n$ are parametrized by partitions of $n$. We denote by $\x^{\la}$ the irreducible character of $S_n$ corresponding to $\la \in \SP_n$ (here $\x^{(n)}$ corresponds to the trivial character, and $\x^{(1^n)}$ corresponds to the sign character). Thus the irreducible characters of $S_{\Bm}$ are parametrized by $\SP(\Bm)$. We denote by $\x^{\Bla} = \x^{\la^{(1)}}\boxtimes\cdots \boxtimes \x^{\la^{(r)}}$ the irreducible character of $S_{\Bm}$ corresponding to $\Bla \in \SP(\Bm)$. \par We identify $(W_0)_{\Bm}$ with $S_{\Bm}$. For each $w \in S_{\Bm}$, let $T_w$ be an $F$-stable maximal torus in $G$ as given in 2.1. For each $\Bla \in \SP(\Bm)$, we define functions $Q^{\pm}_{\Bla}$ on $\SX\uni^F$ by \begin{equation*} \tag{3.10.1} Q^{\pm}_{\Bla} = |S_{\Bm}|\iv \sum_{w \in S_{\Bm}} \x^{\Bla}(w)Q^{\pm}_{\Bm, T_w}. \end{equation*} \par Let $\SC_q = \SC_q(\SX\uni)$ be the $\Ql$-space of all $G^F$-invariant $\Ql$-functions on $\SX\uni^F$. We define a bilinear form $\lp f, h\rp $ on $\SC_q$ by \begin{equation*} \tag{3.10.2} \lp f, h \rp = \sum_{z \in \SX\uni^F}f(z)h(z). \end{equation*} Concerning the functions $Q_{\Bla}^{\pm}$, the following formula holds. \begin{prop} For any $\Bla \in \SP(\Bm)$, $\Bmu \in \SP(\Bm')$ and $\ve, \ve' \in \{ +,-\}$, we have \begin{equation*} \begin{split} \lp Q^{\ve}_{\Bla}, Q^{\ve'}_{\Bmu} \rp &= (-1)^{p_{\ve}(\Bm) + p_{\ve'}(\Bm')}|S_{\Bm}|\iv |S_{\Bm'}|\iv |G^F| \\ &\times \sum_{\SO \in S_{\Bm}\backslash S_n /S_{\Bm'}} q^{a_{\ve,\ve'}(\Bm, \Bm'; n_{\SO})} \sum_{\substack{ x\in \SO \\ w \in S_{\Bm} \cap xS_{\Bm'}x\iv }} |T_w^F|\iv\x^{\Bla}(w)\x^{\Bmu}(x\iv wx), \end{split} \end{equation*} where $n_{\SO}$ is an element in $N(T_w, T_{x\iv wx})^F $ attached to $\SO$. \end{prop} \begin{proof} By making use of the orthogonality relations for Green functions (Theorem 3.5), we have \begin{align*} (-1)&^{p_{\ve}(\Bm) + p_{\ve'}(\Bm')}|G^F|\iv\lp Q^{\ve}_{\Bla}, Q^{\ve'}_{\Bmu} \rp \\ &= |S_{\Bm}|\iv |S_{\Bm'}|\iv (-1)^{p_{\ve}(\Bm) + p_{\ve'}(\Bm')} |G^F|\iv\sum_{\substack{w \in S_{\Bm} \\ w' \in S_{\Bm'}}}\x^{\Bla}(w)\x^{\Bmu}(w') \lp Q^{\ve}_{\Bm, T_w}, Q^{\ve'}_{\Bm', T_{w'}} \rp \\ &= |S_{\Bm}|\iv |S_{\Bm'}|\iv \sum_{\substack{ w \in S_{\Bm} \\ w' \in S_{\Bm'}}} \x^{\Bla}(w)\x^{\Bmu}(w') |T_w^F|\iv |T_{w'}^F|\iv \sum_{n \in N(T_w, T_{w'})^F} q^{a_{\ve, \ve'}(\Bm, \Bm';n)}. \end{align*} Let $P_{\Bm}$ be the parabolic subgroup of $G$ containing $B_0$ which is the stabilizer of the partial flag $(M_{0,p_i})$ with respect to $\Bm$, and define $P_{\Bm'}$ similarly. We put $P_{\Bm} = L_{\Bm}U_{\Bm}$, where $L_{\Bm}$ is the Levi subgroup of $P_{\Bm}$ containing $T_0$ and $U_{\Bm}$ is the unipotent radical of $P_{\Bm}$. Define $P_{\Bm'} = L_{\Bm'}U_{\Bm'}$ similarly. We may assume that $T_w \subset P_{\Bm}$ and $T_{w'} \subset P_{\Bm'}$. Thus $a_{\ve,\ve'}(\Bm,\Bm';n)$ is defined with respect to $(M^{\ve}_{0,p_i}), (M^{\ve'}_{0, p'_i})$. It follows that $a_{\ve,\ve'}(\Bm, \Bm'; n)$ is independent of the choice of $n \in \wt\SO^F$ for each orbit $\wt\SO \in L_{\Bm}\backslash G /L_{\Bm'}$. Let $\wh\SO$ be an element in $P_{\Bm}\backslash G/P_{\Bm'}$ containing $\wt\SO$. 
Then \begin{equation*} \wh\SO \cap N(T_w, T_{w'})^F = \wt\SO \cap N(T_w, T_{w'})^F \end{equation*} since $\wh\SO = \bigcup_{u \in U_{\Bm}, u' \in U_{\Bm'}}u\wt\SO u'$. It follows that $a_{\ve,\ve'}(\Bm, \Bm'; n)$ is independent of the choice of $n \in \wh\SO \cap N(T_w, T_{w'})^F$. Note that $P_{\Bm}\backslash G /P_{\Bm'} \simeq S_{\Bm}\backslash S_n /S_{\Bm'}$. We denote by $\SO \in S_{\Bm}\backslash S_n /S_{\Bm'}$ the orbit in $S_n$ corresponding to $\wh\SO$, and choose $n_{\SO} \in \wh\SO \cap N(T_w, T_{w'})^F$ for each $\SO$. Then the last sum is equal to \begin{equation*} \tag{3.11.1} \begin{split} |S_{\Bm}|\iv &|S_{\Bm'}|\iv \sum_{\substack{ w \in S_{\Bm} \\ w' \in S_{\Bm'}}} \x^{\Bla}(w)\x^{\Bmu}(w')|T_w^F|\iv |T_{w'}^F|\iv |N_{L_{\Bm}}(T_w)^F| |N_{L_{\Bm'}}(T_{w'})^F| \\ &\times \sum_{\substack{ \SO \in S_{\Bm}\backslash S_n/S_{\Bm'} \\ n_{\SO}\iv T_w n_{\SO} = T_{w'} }} |N_{L_{\Bm}}(T_w)^F \cap n_{\SO}N_{L_{\Bm'}}(T_{w'})^Fn_{\SO}\iv|\iv q^{a_{\ve,\ve'}(\Bm, \Bm'; n_{\SO})}. \end{split} \end{equation*} Here we have \begin{equation*} \tag{3.11.2} |N_{L_{\Bm}}(T_w)^F|/|T_w^F| = |Z_{S_{\Bm}}(w)|, \quad |N_{L_{\Bm'}}(T_{w'})^F|/|T_{w'}^F| = |Z_{S_{\Bm'}}(w')|. \end{equation*} Moreover since $n_{\SO}\iv T_wn_{\SO} = T_{w'}$ for $n_{\SO} \in G^F$, there exists $x_{\SO} \in S_n$ such that $x_{\SO}\iv w x_{\SO} = w'$. Now $\ad n_{\SO}\iv$ gives an isomorphism $N_G(T_w) \,\raise2pt\hbox{$\underrightarrow{\sim}$}\, N_G(T_{w'})$, which induces an isomorphism $N_G(T_w)/T_w \,\raise2pt\hbox{$\underrightarrow{\sim}$}\, N_G(T_{w'})/T_{w'}$, hence by taking the $F$-fixed point subgroups on both sides, an isomorphism $Z_{S_n}(w) \,\raise2pt\hbox{$\underrightarrow{\sim}$}\, Z_{S_n}(w')$. We may assume that this isomorphism is induced by $\ad x_{\SO}\iv$. It follows that \begin{equation*} (N_{L_{\Bm}}(T_w)^F \cap n_{\SO} N_{L_{\Bm'}}(T_{w'})^Fn_{\SO}\iv)/T_w^F \simeq Z_{S_{\Bm}}(w) \cap x_{\SO} Z_{S_{\Bm'}}(w')x_{\SO}\iv, \end{equation*} and so \begin{equation*} \tag{3.11.3} |N_{L_{\Bm}}(T_w)^F \cap n_{\SO}N_{L_{\Bm'}}(T_{w'})^Fn_{\SO}\iv| = |T_w^F||Z_{S_{\Bm}}(w) \cap x_{\SO}Z_{S_{\Bm'}}(w')x_{\SO}\iv|. \end{equation*} Substituting (3.11.2) and (3.11.3) into (3.11.1), the formula (3.11.1) turns out to be \begin{equation*} \begin{split} |S_{\Bm}|\iv&|S_{\Bm'}|\iv\sum_{\substack{ w \in S_{\Bm} \\ w' \in S_{\Bm'}}} \x^{\Bla}(w)\x^{\Bmu}(w') |T_w^F|\iv |Z_{S_{\Bm}}(w)||Z_{S_{\Bm'}}(w')| \\ &\times \sum_{\substack{ \SO \in S_{\Bm}\backslash S_n/S_{\Bm'} \\ n_{\SO}\iv T_w n_{\SO} = T_{w'}}} |Z_{S_{\Bm}}(w) \cap x_{\SO}Z_{S_{\Bm'}}(w')x_{\SO}\iv|\iv q^{a_{\ve,\ve'}(\Bm, \Bm'; n_{\SO})}. \end{split} \end{equation*} But we have \begin{equation*} |Z_{S_{\Bm}}(w)||Z_{S_{\Bm'}}(w')||{Z_{S_{\Bm}}(w)} \cap x_{\SO}Z_{S_{\Bm'}}(w')x_{\SO}\iv|\iv = |Z_{S_{\Bm}}(w)x_{\SO}Z_{S_{\Bm'}}(w')|, \end{equation*} and for given $w \in S_{\Bm}, w' \in S_{\Bm'}$, the choice of $x \in S_n$ such that $w' = x\iv wx$ is given by $x \in Z_{S_{\Bm}}(w)x_{\SO}Z_{S_{\Bm'}}(w')$. Hence the last formula is equal to \begin{equation*} |S_{\Bm}|\iv |S_{\Bm'}|\iv \sum_{\substack{ w \in S_{\Bm} \\ w' \in S_{\Bm'}}} \sum_{\substack{ \SO \in S_{\Bm}\backslash S_n/S_{\Bm'} \\ x \in \SO \\ w' = x\iv wx }} \x^{\Bla}(w)\x^{\Bmu}(w')|T_w^F|\iv q^{a_{\ve,\ve'}(\Bm,\Bm'; n_{\SO})}. \end{equation*} This proves the proposition.
\end{proof} \section{Unipotent variety} \para{4.1.} We express an $r$-partition $\Bla$ by $\Bla = (\la^{(i)}_j)$, where $\la^{(i)} = (\la^{(i)}_1, \la^{(i)}_2, \dots, \la^{(i)}_m)$ is a partition with $\la^{(i)}_m \ge 0$ for some fixed number $m$. For an $r$-partition $\Bla$, we define a sequence of non-negative integers $c(\Bla)$ associated to $\Bla$ by \begin{equation*} c(\Bla) = (\la^{(1)}_1, \la^{(2)}_1, \dots, \la^{(r)}_1, \la^{(1)}_2, \la^{(2)}_2, \dots, \la^{(r)}_2, \dots, \la^{(1)}_m, \la^{(2)}_m, \dots, \la^{(r)}_m). \end{equation*} \par For sequences of non-negative integers $\la = (\la_1, \la_2, \dots, \la_m)$ and $\mu = (\mu_1, \mu_2, \dots,\mu_m)$, we write $\la \le \mu$ if \begin{equation*} \la_1 + \cdots + \la_k \le \mu_1 + \cdots + \mu_k \end{equation*} for $k = 1, 2, \dots, m$. We define a dominance order $\le $ on $\SP_{n,r}$ by the condition that $\Bla \le \Bmu$ if $c(\Bla) \le c(\Bmu)$. In the case where $r = 1$, this is the standard dominance order on the set of partitions. In the case where $r = 2$, this coincides with the partial order used in [AH] and [SS2]. \par For a partition $\la = (\la_1, \dots, \la_m) \in \SP_n$, we define $n(\la) \in \BZ$ by $n(\la) = \sum_i(i-1)\la_i$. For $\Bla = (\la^{(1)}, \dots, \la^{(r)}) \in \SP_{n,r}$, we define $n(\Bla) \in \BZ$ by $n(\Bla) = \sum_{i=1}^rn(\la^{(i)})$. Hence if we put $\nu = \la^{(1)} + \cdots + \la^{(r)}$, we have $n(\Bla) = n(\nu)$. \para{4.2.} Let $\SX\uni = G\uni \times V^{r-1}$. In the case where $r = 2$, it is known by [AH], [T] that the $G$-orbits of $\SX\uni$ are parametrized by $\SP_{n,2}$. The parametrization is given as follows; take $(x,v) \in G\uni \times V$. Put $E^x = \{ y \in \End(V) \mid xy = yx\}$. $E^x$ is a subalgebra of $\End(V)$ containing $x$. Put $W = E^xv$. Then $W$ is an $x$-stable subspace of $V$. We denote by $\la^{(1)}$ the Jordan type of $x|_W$, and by $\la^{(2)}$ the Jordan type of $x|_{V/W} $. Thus one can define $\Bla = (\la^{(1)}, \la^{(2)}) \in \SP_{n,2}$. We denote by $\SO_{\Bla}$ the $G$-orbit containing $(x,v)$. This gives the required parametrization. Note that if $(x,v) \in \SO_{\Bla}$, the Jordan type of $x$ is given by $\la^{(1)} + \la^{(2)}$. For $(x,v) \in G\uni \times V$, we say that $(x,v)$ has type $\Bla$ if $(x,v) \in \SO_{\Bla}$. \par The normal basis of $(x,v) \in \SO_{\Bla}$ was constructed in [AH] as follows. Assume that $\nu = (\nu_1, \dots, \nu_{\ell})$, where $\nu = \la^{(1)} + \la^{(2)}$. Choose a basis $\{ u_{j,k} \mid 1 \le j \le \ell, 1 \le k \le \nu_j\}$ of $V$, and define an action of $x$ on the basis by $(x-1)u_{j,k} = u_{j, k-1}$ under the convention that $u_{j,0} = 0$. Put $v = \sum_{j = 1}^{\ell}u_{j, \la^{(1)}_j}$. Then $(x,v)$ gives a representative of the orbit $\SO_{\Bla}$. \par If $r \ge 3$, the number of $G$-orbits in $\SX\uni$ is infinite. In [S3, 5.3], the partition of $\SX\uni$ into $G$-stable pieces $X_{\Bla}$ (possibly a union of infinitely many $G$-orbits) labelled by $\Bla \in \SP_{n,r}$ is given; \begin{equation*} \tag{4.2.1} \SX\uni = \coprod_{\Bla \in \SP_{n,r}}X_{\Bla}. \end{equation*} Following [S3], we define $X_{\Bla}$ by induction on $r$ as follows. Take $(x, \Bv) \in \SX\uni$ with $\Bv = (v_1, \dots, v_{r-1})$. Put $W = E^xv_1$, $\ol V = V/W$ and $\ol G = GL(\ol V)$. We consider the variety $\SX'\uni = \ol G \times \ol V^{r-2}$. Assume that $(x,v_1) \in G\uni \times V$ is of type $(\la^{(1)}, \nu')$, where $\nu = \la^{(1)} + \nu'$ is the Jordan type of $x$. Let $\ol x$ be the restriction of $x$ on $\ol V$.
Then the Jordan type of $\ol x \in GL(\ol V)$ is $\nu'$. Put $\ol \Bv = (\ol v_2, \dots, \ol v_{r-1})$, where $\ol v_i$ is the image of $v_i$ in $\ol V$. Thus $(\ol x, \ol \Bv) \in \SX'\uni$. By induction, we have a partition $\SX'\uni = \coprod_{\Bmu \in \SP_{n',r-1}} X'_{\Bmu}$, where $\dim \ol V = n'$. Thus there exists a unique $X'_{\Bla'}$ containing $(\ol x, \ol\Bv)$. If we write $\Bla' = (\la^{(2)}, \dots, \la^{(r)})$, we have $\la^{(2)} + \cdots + \la^{(r)} = \nu'$. It follows that $\Bla = (\la^{(1)}, \dots, \la^{(r)}) \in \SP_{n,r}$. We define the type of $(x,\Bv)$ to be $\Bla$, and define a subset $X_{\Bla}$ of $\SX\uni$ as the set of all $(x,\Bv)$ with type $\Bla$. Then $X_{\Bla}$ is a $G$-stable subset of $\SX\uni$, and we obtain the required partition (4.2.1). \par $X_{\Bla}$ has an alternative description. Assume that $\Bla \in \SP(\Bm)$ for $\Bm \in \SQ_{n,r}$, and let $P = P_{\Bm} = LU_P$ be the parabolic subgroup of $G$ attached to $\Bm$ as in 1.5. Put $\ol M_{p_i} = M_{p_i}/M_{p_{i-1}}$. Then $L \simeq G_1 \times \cdots \times G_r$ with $G_i = GL(\ol M_{p_i})$. Let $\SM_{\Bla}$ be the subset of $U \times \prod_{i=1}^{r-1}M_{p_i}$ defined by the following properties; take $(x, \Bv)$ with $x \in U$ and $v_i \in M_{p_i}$. Put $x_i = x|_{\ol M_{p_i}}$, and let $\ol v_i \in \ol M_{p_i}$ be the image of $v_i$. Then $(x,\Bv) \in \SM_{\Bla}$ if the Jordan type of $x$ is $\la^{(1)} + \cdots + \la^{(r)}$, the $G_i$-orbit of $(x_i, \ol v_i) \in (G_i)\uni \times \ol M_{p_i}$ has type $(\la^{(i)}, \emptyset)$ for $i = 1, \dots, r-1$, and the Jordan type of $x_r$ is $\la^{(r)}$. Let $\ol X_{\Bla}$ be the closure of $X_{\Bla}$ in $\SX\uni$. Then by an argument similar to the proof of [S3, Lemma 6.17], we have \begin{equation*} \tag{4.2.2} \ol X_{\Bla} = \ol{\bigcup_{g \in G}g\SM_{\Bla}}. \end{equation*} By Propositions 5.4 and 5.14 in [S3], we have \begin{prop} $X_{\Bla}$ is a $G$-stable, smooth, irreducible, locally closed subvariety of $\SX\uni$ with \begin{equation*} \dim X_{\Bla} = (n^2 - n - 2n(\Bla)) + \sum_{i=1}^{r-1}(r-i)|\la^{(i)}|. \end{equation*} \end{prop} Let $\le$ be the dominance order on $\SP_{n,r}$ defined in 4.1. Concerning the closure relations of $X_{\Bla}$, we have \begin{prop}[{[S3, Prop. 5.11]}] Let $\ol X_{\Bla}$ be the closure of $X_{\Bla}$ in $\SX\uni$. Then \begin{equation*} \ol X_{\Bla} \subset \bigcup_{\Bmu \le \Bla} X_{\Bmu}. \end{equation*} \end{prop} For each $\Bm \in \SQ_{n,r}$, recall the map $\pi_{\Bm,1} : \wt\SX_{\Bm, \unip} \to \SX_{\Bm,\unip}$ in 1.1. Define $\Bla(\Bm) \in \SP_{n,r}$ by $\Bla(\Bm) = ((m_1), (m_2), \dots, (m_r))$. Then we have \begin{prop}[{[S3, Prop. 5.9]}] For $\Bm \in \SQ_{n,r}$, we have \begin{enumerate} \item $\SX_{\Bm,\unip} = \ol X_{\Bla(\Bm)}$. \item $\dim \SX_{\Bm,\unip} = n^2 - n + \sum_{i=1}^{r-1}(r-i)m_i$. \item For $\Bmu \in \SP(\Bm)$, $X_{\Bmu} \subset \SX_{\Bm, \unip}$. \end{enumerate} \end{prop} \para{4.6.} As in [S3, 5.10], we define a distinguished element $z_{\Bla} \in X_{\Bla}$ as follows. Put $\nu = (\nu_1, \dots, \nu_{\ell}) \in \SP_n$ for $\nu = \la^{(1)} + \cdots + \la^{(r)}$. Take $x \in G\uni$ of Jordan type $\nu$, and let $\{ u_{j,k}\mid 1 \le j \le \ell, 1 \le k \le \nu_j \}$ be a Jordan basis of $x$ in $V$ having the property $(x-1)u_{j,k} = u_{j,k-1}$ with the convention $u_{j,0} = 0$. We define $v_i \in V$ for $i = 1, \dots, r-1$ by the condition that \begin{equation*} \tag{4.6.1} v_i = \sum_{1 \le j \le \ell} u_{j, \la^{(1)}_j + \cdots + \la^{(i)}_j}.
\end{equation*} Put $\Bv = (v_1, \dots, v_{r-1})$ and $z_{\Bla} = (x, \Bv)$. Then $z_{\Bla} \in X_{\Bla}$. We denote by $\SO^-_{\Bla} \subset X_{\Bla}$ the $G$-orbit containing $z_{\Bla}$. We also define $z_{\Bla}^+ \in X_{\Bla}$ as follows; put \begin{equation*} \tag{4.6.2} v_i' = \sum_{1 \le j \le \ell}\ol u_{j, \la^{(1)}_j + \cdots + \la^{(i)}_j}, \end{equation*} where $\ol u_{j, \la^{(1)}_j+\cdots +\la^{(i)}_j} = u_{j,\la^{(1)}_j + \cdots + \la^{(i)}_j}$ if $\la^{(i)}_j \ne 0$ and is equal to zero if $\la^{(i)}_j = 0$. Put $\Bv' = (v'_1, \dots, v'_{r-1})$, and $z^+_{\Bla} = (x, \Bv')$. Then $z^+_{\Bla} \in X_{\Bla}$, and we denote by $\SO_{\Bla}^+$ the $G$-orbit in $X_{\Bla}$ containing $z^+_{\Bla}$. The $G$-orbits $\SO_{\Bla}^-, \SO^+_{\Bla} \subset X_{\Bla}$ are determined independently of the choice of the Jordan basis $\{ u_{j,k}\}$. \par Now take $\Bm \in \SQ_{n,r}$, and let $M_{p_i}^+$ be the subspace of $M_{p_i}$ isomorphic to $\ol M_{p_i}$ defined in 2.7. Recall the set $\SX^+_{\Bm}$ in (2.7.1). We define a set $\SX^+_{\Bm, \unip}$ by \begin{equation*} \SX^+_{\Bm, \unip} = \bigcup_{g \in G}g(U \times \prod_i M^+_{p_i}), \end{equation*} which coincides with $\SX\uni \cap \SX^+_{\Bm}$. Note that for each $\Bla \in \SP(\Bm)$, $\SO_{\Bla}^+ \subset X_{\Bla} \cap \SX^+_{\Bm, \unip}$. \par In general, $X_{\Bla} \cap \SX^+_{\Bm}$ consists of infinitely many $G$-orbits even if $\Bla \in \SP(\Bm)$. The following special case is worth mentioning. \begin{lem} Assume that $\Bm \in \SQ_{n,r}$ is of the form $m_j = 0$ for $j \ne i_0, r$ for some $i_0$ $($possibly $i_0 = r$$)$. Then $X_{\Bla} \cap \SX^+_{\Bm,\unip} = \SO_{\Bla}^+$ for any $\Bla \in \SP_{n,r}$ if it is non-empty. \end{lem} \begin{proof} Assume that $1 \le i_0 \le r-1$ and that $X_{\Bla} \cap \SX^+_{\Bm} \ne \emptyset$. Take $(x, \Bv) \in X_{\Bla} \cap \SX^+_{\Bm}$. Since $(x, \Bv) \in \SX^+_{\Bm}$, we must have $v_j = 0$ for $j \ne i_0$. Then the description of $X_{\Bla}$ in 4.2 implies that $\la^{(i)} = \emptyset$ for $i \ne i_0$. It follows that $\Bla \in \SP(\Bm')$ with $\Bm' = (m_1', \dots, m'_r)$ such that $m'_{i_0} \le m_{i_0}, m'_j = 0$ for $j \ne i_0, r$. But this is essentially the same as the case of $r = 2$, hence $X_{\Bla} \cap \SX^+_{\Bm}$ coincides with $\SO_{\Bla}^+$. The case where $i_0 = r$ is dealt with similarly. \end{proof} \par Note that irreducible representations of $S_{\Bm}$ are, up to isomorphism, parametrized by $\SP(\Bm)$ (see 3.10). We denote by $V_{\Bla}$ the irreducible representation of $S_{\Bm}$ corresponding to $\Bla \in \SP(\Bm)$. The following result gives the Springer correspondence between $\SX_{\Bm, \unip}$ and $S_{\Bm}$. \begin{thm}[{[S3, Theorem 8.13]}] For any $\Bm \in \SQ_{n,r}$, put $d'_{\Bm} = \dim \SX_{\Bm,\unip}$. Then $(\pi_{\Bm,1})_!\Ql[d'_{\Bm}]$ is a semisimple perverse sheaf on $\SX_{\Bm,\unip}$, equipped with the action of $S_{\Bm}$, and is decomposed as \begin{equation*} (\pi_{\Bm,1})_!\Ql[d'_{\Bm}] \simeq \bigoplus_{\Bla \in \SP(\Bm)} V_{\Bla} \otimes \IC(\ol X_{\Bla}, \Ql)[\dim X_{\Bla}]. \end{equation*} \end{thm} \para{4.9.} For each $z = (x, \Bv) \in \SX_{\Bm}$, we consider the Springer fibre $(\pi_{\Bm})\iv(z) \simeq \SB^{(\Bm)}_z$, where $\SB^{(\Bm)}_z$ is a closed subvariety of $\SB = G/B$ defined as \begin{equation*} \SB^{(\Bm)}_z = \{ gB \in \SB \mid g\iv xg \in B, g\iv \Bv \in \prod_i M_{p_i} \}. \end{equation*} For each $\Bla \in \SP(\Bm)$, put $d_{\Bla} = (\dim \SX_{\Bm,\unip} - \dim X_{\Bla})/2$.
One can check that $d_{\Bla} = n(\Bla)$ (see 4.1 for the definition of $n(\Bla)$). We have \begin{lem} Assume that $\Bla \in \SP(\Bm)$. Then for any $z \in X_{\Bla}$, we have $\dim \SB^{(\Bm)}_z = d_{\Bla}$. \end{lem} \begin{proof} By [S3, Lemma 8.5], we have $\dim \SB^{(\Bm)}_z \ge d_{\Bla}$. On the other hand, clearly we have $\SB^{(\Bm)}_z \subset \SB_x$, where $\SB_x = \{ gB \in \SB \mid g\iv xg \in B\}$. Here the Jordan type of $x$ is given by $\nu = \la^{(1)} + \cdots + \la^{(r)}$. By the classical result for the case of $GL_n$, we know that $\dim \SB_x = n(\nu)$. Since $n(\nu) = n(\Bla)$, we have $\dim \SB^{(\Bm)}_z \le d_{\Bla}$. The lemma is proved. \end{proof} \par \section{Kostka functions} \para{5.1.} Kostka functions associated to complex reflection groups were introduced in [S1], [S2] as a generalization of Kostka polynomials. In this section, we discuss the relationship between our Green functions and those Kostka functions. In [S1], [S2], Kostka functions $K^{\pm}_{\Bla, \Bmu}(t)$ indexed by $\Bla, \Bmu \in \SP_{n,r}$ (and depending on the sign $+$, $-$) were introduced as coefficients of the transition matrix between the basis of Schur functions and those of Hall-Littlewood functions, as in the case of original Kostka polynomials. A-priori they are rational functions in $\BQ(t)$. We define a modified Kostka function $\wt K^{\pm}_{\Bla, \Bmu}(t)$ by \begin{equation*} \tag{5.1.1} \wt K^{\pm}_{\Bla, \Bmu}(t) = t^{a(\Bmu)}K^{\pm}_{\Bla, \Bmu}(t\iv), \end{equation*} where the $a$-function $a(\Bla)$ is defined by \begin{equation*} \tag{5.1.2} a(\Bla) = r\cdot n(\Bla) + |\la^{(2)}| + 2|\la^{(3)}| + \cdots + (r-1)|\la^{(r)}| \end{equation*} for $\Bla \in \SP_{n,r}$. Note that in the case where $r = 1$, $a(\Bla)$ coincides with $n(\la^{(1)})$, and in the case where $r = 2$, $a(\Bla)$ coincides with the $a$-function on $\SP_{n,2}$ used in [AH] and [SS2] (in [AH], the notation $b(\mu;\nu)$ is used instead of $a(\Bla)$ for $\Bla = (\mu;\nu)$). \par Following [S1], we give a combinatorial characterization of modified Kostka functions $\wt K^{\pm}_{\Bla, \Bmu}(t)$. Let $W_{n,r}$ be the complex reflection group $S_n\ltimes (\BZ/r\BZ)^n$. For a (not necessarily irreducible) character $\x$ of $W_{n,r}$, we define the fake degree $R(\x)$ by \begin{equation*} \tag{5.1.3} R(\x) = \frac{\prod_{i=1}^n(t^{ir}-1)}{|W_{n,r}|}\sum_{w \in W_{n,r}} \frac{\det_{\BV}(w)\x(w)}{\det_{\BV}(t - w)}, \end{equation*} where $\BV$ is a representation space (over $\Ql$) of the reflection representation of $W_{n,r}$, and $\det_{\BV}$ means the determinant on $\BV$. Here we have $|W_{n,r}| = n!r^n$. Note that $R(\x) \in \BZ_{\ge 0}[t]$; if $\x$ is irreducible, $R(\x)$ is given as the graded multiplicity of $\x$ in the coinvariant algebra $R(W_{n,r})$ of $W_{n,r}$. Let $N^*$ be the number of reflections of $W_{n,r}$, which is given as the maximum degree of $R(W_{n,r})$, and is explicitly given as \begin{equation*} \tag{5.1.4} N^* = \frac{rn(n+1)}{2} - n = \binom{n}{2}r + (r-1)n. \end{equation*} \par It is known that irreducible characters of $W_{n,r}$ are parametrized by $\SP_{n,r}$. We denote by $\r^{\Bla}$ the irreducible character of $W_{n,r}$ corresponding to $\Bla \in \SP_{n,r}$. (For example, $\r^{\Bla}$ is the trivial character for $\Bla = (n;-;\cdots;-)$. See 5.6 for details). 
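As a small illustrative check of (5.1.2) and (5.1.4) (included here only for orientation; it is not needed in what follows): if $n = 1$ and $r = 3$, then $n(\Bla) = 0$ for every $\Bla \in \SP_{1,3}$, so that $a((-;-;1)) = 2$, $a((-;1;-)) = 1$, $a((1;-;-)) = 0$ and $N^* = 2$; these are precisely the values which appear in the example in 7.1 below.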
For $\Bla, \Bmu \in \SP_{n,r}$, we define a square matrix $\Om = (\w_{\Bla,\Bmu})_{\Bla,\Bmu \in \SP_{n,r}}$ by \begin{equation*} \w_{\Bla,\Bmu} = t^{N^*}R(\r^{\Bla}\otimes\ol{\r^{\Bmu}}\otimes\ol\det_V), \end{equation*} where $\ol \x$ denotes the complex conjugate of the character $\x$ (in fact, the function $\ol\x$ is defined by $\ol\x(w) = \x(w\iv)$ for $w \in W_{n,r}$). Here we fix a total order $\Bla \preceq \Bmu$ on $\SP_{n,r}$ compatible with the partial order $\Bla \le \Bmu$ defined in 4.1, and consider the square matrix with respect to this total order. Note that $\w_{\Bla,\Bmu} \in \BZ_{\ge 0}[t]$. We have the following result. \begin{thm}[{[S1, Theorem 5.4]}] There exist unique matrices $P^{\pm} = (p^{\pm}_{\Bla,\Bmu}), \vL = (\xi_{\Bla,\Bmu})$ over $\BQ(t)$ satisfying the equation \begin{equation*} \tag{5.2.1} P^-\vL\, {}^t\!P^+ = \Om \end{equation*} subject to the conditions that $\vL$ is a diagonal matrix and that \begin{equation*} p^{\pm}_{\Bla,\Bmu} = \begin{cases} 0 &\quad\text{ unless } \Bmu \preceq \Bla, \\ t^{a(\Bla)} &\quad\text{ if } \Bla = \Bmu. \end{cases} \end{equation*} Then the entry $p^{\pm}_{\Bla,\Bmu}$ of the matrix $P^{\pm}$ coincides with $\wt K^{\pm}_{\Bla,\Bmu}(t)$. \end{thm} \remarks{5.3.} (i) \ Since $\Om$ is non-symmetric unless $r = 1$ or 2, $P^+$ does not coincide with $P^-$ if $r \ge 3$. \par (ii) \ Our construction of Kostka functions $K_{\Bla,\Bmu}(t)$ depends on the choice of the total order $\preceq$ on $\SP_{n,r}$. In the case where $r = 1$ or 2, it is known that Kostka functions are independent of the choice of the total order whenever it is compatible with the partial order $\le$ (see [M] for $r = 1$, and [S2], [AH], [SS2] for $r = 2$). \para{5.4.} We define an involution $\t$ on $\SP_{n,r}$ by \begin{equation*} \tag{5.4.1} \t: (\la^{(1)}, \dots, \la^{(r)}) \mapsto (\la^{(r-1)}, \la^{(r-2)},\dots, \la^{(1)}, \la^{(r)}) \end{equation*} for $\Bla = (\la^{(1)}, \dots, \la^{(r)}) \in \SP_{n,r}$. We shall prove the following result. \begin{thm} For any $\Bla \in \SP(\Bm), \Bmu \in \SP(\Bm')$, we have \begin{equation*} \tag{5.5.1} \begin{split} q^{-a(\Bla) - a(\tau(\Bmu))}&\w_{\Bla,\Bmu}(q) = \\ &(-1)^{p_-(\Bm) + p_+(\Bm')}q^{-r(n(\Bla) + n(\Bmu))}\sum_{z \in \SX\uni^{F^r}} Q^-_{\Bla}(z)Q^+_{\Bmu}(z). \end{split} \end{equation*} \end{thm} \para{5.6.} The right hand side of (5.5.1) can be computed by applying Proposition 3.11 to the case where $\ve = -, \ve' = +$. Hence we will compute the left hand side of (5.5.1), i.e., $\w_{\Bla,\Bmu}(t)$ for the indeterminate $t$. In order to compute them, we recall some known facts on the characters of $W_{n,r}$. Let $\d : W_{n,r} \to \Ql^*$ be the linear character of $W_{n,r}$ defined by $\d|_{S_n} = 1_{S_n}$ and $\d(s_0) = \z$, where $s_0$ is a (simple) reflection of order $r$, and $\z$ is a primitive $r$-th root of unity in $\Ql$. Assume given $\Bla = (\la^{(1)}, \dots, \la^{(r)})\in \CP(\Bm)$. Then the irreducible character $\r^{\Bla}$ is constructed by \begin{equation*} \tag{5.6.1} \r^{\Bla} = \Ind_{W_{\Bm,r}}^{W_{n,r}} (\x^{\la^{(1)}}\boxtimes \d\x^{\la^{(2)}} \boxtimes\cdots\boxtimes \d^{r-1}\x^{\la^{(r)}}), \end{equation*} where $W_{\Bm, r} = W_{m_1,r}\times \cdots \times W_{m_r,r}$, and $\x^{\la^{(i)}}$ is the irreducible character of $S_{m_i}$ corresponding to the partition $\la^{(i)}$, which is regarded as a character of $W_{m_i,r}$ by the natural surjection $W_{m_i,r} \to S_{m_i}$, and $\d^{i-1}\x^{\la^{(i)}}$ is the irreducible character of $W_{m_i,r}$ obtained by multiplying $\d^{i-1}$ on it. 
(Here we use the same symbol $\d$ to denote the corresponding linear character of $W_{m_i,r}$ for each $i$.) Let $\ve$ be the irreducible character of $W_{n,r}$ obtained as the pull-back of the sign character of $S_n$ under the map $W_{n,r} \to S_n$. (Do not confuse the character $\ve$ of $W_{n,r}$ with the signatures $\{ \ve, \ve'\} = \{ -,+\}$. In the present discussion, the signatures are fixed.) Then we have $\det_{\BV} = \ve\d$. For $i = 1, \dots, r$, put $\Bla_i = (\la^{(1)}, \dots, \la^{(r)})$ with $\la^{(j)} = \emptyset$ for $j \ne i$ and $\la^{(i)} = (n)$, and put $\Bmu_i = (\mu^{(1)}, \dots, \mu^{(r)})$ with $\mu^{(j)} = \emptyset$ for $j \ne i$ and $\mu^{(i)} = (1^n)$. Then the following fact is easily verified from (5.6.1). \begin{equation*} \tag{5.6.2} \r^{\Bla_i} = \d^{i-1}, \qquad \r^{\Bmu_i} = \ve\d^{i-1}. \end{equation*} In particular, we have \begin{equation*} \tag{5.6.3} 1_{W_{n,r}}= \r^{\Bla_1}, \qquad \d = \r^{\Bla_2}, \qquad {\det}_{\BV} = \r^{\Bmu_2}, \qquad \ol\det_{\BV} = \r^{\Bmu_r}= \ve\d^{r-1}. \end{equation*} Although the following facts (5.6.4) $\sim$ (5.6.6) are not used in the discussion below, we record them for reference. \begin{align*} \tag{5.6.4} \ol{\r^{\Bla}} &= \r^{\Bla'} \quad\text{ for } \Bla' = (\la^{(1)}, \la^{(r)}, \la^{(r-1)}, \dots, \la^{(2)}), \\ \tag{5.6.5} \r^{{}^t\Bla} &= \ve \r^{\Bla}, \end{align*} where ${}^t\Bla = ({}^t\la^{(1)}, \dots, {}^t\la^{(r)})$. It follows from (5.6.5) that \begin{equation*} \tag{5.6.6} \w_{\Bla,\Bmu} = \w_{{}^t\Bla, {}^t\Bmu} \end{equation*} for any $\Bla, \Bmu \in \SP_{n,r}$. \par For a given $\Bm \in \SQ_{n,r}$, we consider $W_{\Bm, r} \simeq S_{\Bm} \ltimes (\BZ/r\BZ)^n$. As in 3.10, we denote by $\x^{\Bla}$ the irreducible character of $S_{\Bm}$ corresponding to $\Bla \in \SP(\Bm)$. We define a linear character $\d_{\Bm}$ of $W_{\Bm,r} = W_{m_1,r} \times \cdots \times W_{m_r,r}$ by \begin{equation*} \d_{\Bm} = \d^0\boxtimes \d^1 \boxtimes \cdots \boxtimes \d^{r-1}. \end{equation*} Let $\wt\x^{\Bla}$ be the irreducible character $\d_{\Bm}\x^{\Bla}$ on $W_{\Bm,r}$. Then (5.6.1) can be rewritten as \begin{equation*} \tag{5.6.7} \r^{\Bla} = \Ind_{W_{\Bm,r}}^{W_{n,r}}\wt\x^{\Bla}. \end{equation*} \para{5.7.} We now compute $R(\r^{\Bla}\otimes \ol\r^{\Bmu}\otimes \ol\det_{\BV})$ for $\Bla \in \SP(\Bm),\Bmu \in \SP(\Bm')$. In the computation below, we write $W_{n,r}, W_{\Bm,r}$, etc. simply as $W_n, W_{\Bm}$, etc. by omitting $r$. \begin{align*} \tag{5.7.1} R(\r^{\Bla}\otimes \ol{\r^{\Bmu}}\otimes \ol\det_{\BV}) &= \frac{\prod_{k=1}^n(t^{kr} -1)}{|W_n|} \sum_{w \in W_n}\frac{\bigl(\Ind_{W_{\Bm}}^{W_n}\wt\x^{\Bla}\bigr)(w) \ol{\bigl(\Ind_{W_{\Bm'}}^{W_n}\wt\x^{\Bmu}\bigr)(w)}} {{\det}_{\BV}(t - w)} \\ &= \frac{\prod_{k=1}^n(t^{kr}-1)}{|W_n||W_{\Bm}||W_{\Bm'}|} \sum_{\substack{ w, w_1, w_2 \in W_n \\ w_1\iv ww_1 \in W_{\Bm} \\ w_2\iv ww_2 \in W_{\Bm'}}} \frac{\wt\x^{\Bla}(w_1\iv ww_1)\ol{\wt\x^{\Bmu}(w_2\iv ww_2)}} {\det_{\BV}(t -w)} \\ &= \frac{\prod_{k=1}^n(t^{kr}-1)}{|W_{\Bm}||W_{\Bm'}|} \sum_{\substack{ x \in W_n \\ y \in W_{\Bm} \cap x W_{\Bm'}x\iv }} \frac{\wt\x^{\Bla}(y)\ol{\wt\x^{\Bmu}(x\iv yx)}}{\det_{\BV}(t - y)}, \end{align*} where the last formula follows from the change of variables $x = w_1\iv w_2, y = w_1\iv ww_1$. \par We consider the set of double cosets $W_{\Bm}\backslash W_n/W_{\Bm'}$. This set is described by a certain set of matrices as given in Section 5 in [AH].
We define $\SM_{\Bm, \Bm'}$ as the set of $r \times r$ matrices $(h_{ij})$ with entries in $\BZ_{\ge 0}$ satisfying the following conditions: \begin{equation*} \tag{5.7.2} \sum_{i=1}^{r} h_{ij} = m_j \text{ for all } j, \quad \sum_{j=1}^{r}h_{ij} = m'_i \text{ for all } i. \end{equation*} Then there exists a bijective correspondence \begin{equation*} \SM_{\Bm, \Bm'} \simeq S_{\Bm} \backslash S_n /S_{\Bm'} \simeq W_{\Bm} \backslash W_n /W_{\Bm'} \end{equation*} satisfying the properties \begin{equation*} S_{\Bm} \cap xS_{\Bm'}x\iv \simeq \prod_{i,j}S_{h_{ij}} \quad \text{ and } \quad W_{\Bm} \cap x W_{\Bm'}x\iv \simeq \prod_{i,j} W_{h_{ij}} \end{equation*} if $x$ is contained in the orbit $\SO \in S_{\Bm} \backslash S_n /S_{\Bm'}$ corresponding to $(h_{ij}) \in \SM_{\Bm, \Bm'}$. Note that if $x \in S_n$, we have $W_{\Bm} \cap x W_{\Bm'}x\iv \simeq (S_{\Bm} \cap xS_{\Bm'}x\iv) \ltimes (\BZ/r\BZ)^n$. \par By applying the above expression, we see that \begin{equation*} \frac{1}{|W_{\Bm} \cap x W_{\Bm'}x\iv|} \sum_{y \in W_{\Bm} \cap x W_{\Bm'}x\iv} \frac{\wt\x^{\Bla}(y)\ol{\wt\x^{\Bmu}(x\iv yx)}} {\det_{\BV}(t - y)} = \frac{R(\wt\x_0^{\Bla}\otimes \ol{\wt\x_0^{\Bmu}}\otimes \ol\det_{\BV})} {\prod_{i,j}\prod_{k=1}^{h_{ij}}(t^{kr} - 1)} \end{equation*} for $x \in \SO$ corresponding to $(h_{ij}) \in \SM_{\Bm,\Bm'}$, where $\wt\x^{\Bla}_0$ (resp. $\wt\x^{\Bmu}_0$) is the restriction of $\wt\x^{\Bla}$ (resp. ${}^x (\wt\x^{\Bmu})$) to $W_{\Bm} \cap xW_{\Bm'}x\iv$. Substituting this into the last formula of (5.7.1), we have \begin{equation*} \tag{5.7.3} R(\r^{\Bla}\otimes\ol{\r^{\Bmu}}\otimes\ol\det_{\BV}) = \sum_{\SO \in W_{\Bm}\backslash W_n/W_{\Bm'}} \frac{\prod_{k=1}^n(t^{kr} -1)} {\prod_{i,j}\prod_{k=1}^{h_{ij}}(t^{kr} - 1)} R(\wt\x^{\Bla}_0\otimes \ol{\wt\x^{\Bmu}_0}\otimes \ol\det_{\BV}) \end{equation*} since $|W_{\Bm}||W_{\Bm'}|/|W_{\Bm} \cap x W_{\Bm'}x\iv| = |\SO|$ for $\SO = W_{\Bm}xW_{\Bm'}$. \para{5.8.} Our next aim is to compute $R(\wt\x^{\Bla}_0\otimes\ol{\wt\x^{\Bmu}_0}\otimes\ol\det_{\BV})$. For each $\SO \subset W_n$, we choose a representative $x \in \SO$ such that $x \in S_n$. Then under the isomorphism $W_{\Bm} \cap x W_{\Bm'}x\iv \simeq (S_{\Bm} \cap xS_{\Bm'}x\iv) \ltimes (\BZ/r\BZ)^n$, we have \begin{align*} \wt\x^{\Bla}_0 &= \wt\x^{\Bla}|_{W_{\Bm} \cap x W_{\Bm'}x\iv} = \x^{\Bla}|_{S_{\Bm} \cap xS_{\Bm'}x\iv}\otimes \d_{\Bm}, \\ \wt\x^{\Bmu}_0 &= {}^x(\wt\x^{\Bmu})|_{W_{\Bm} \cap x W_{\Bm'}x\iv} = {}^x(\x^{\Bmu})|_{S_{\Bm} \cap x S_{\Bm'}x\iv}\otimes {}^x\d_{\Bm'}, \end{align*} where $\d_{\Bm}, {}^x\d_{\Bm'}$ are the restrictions of the corresponding linear characters of $W_n$ to $W_{\Bm} \cap x W_{\Bm'}x\iv$. It follows from this that \begin{equation*} \wt\x^{\Bla}_0 \otimes \ol{\wt\x^{\Bmu}_0} \otimes \ol\det_{\BV} = (\x^{\Bla}\otimes{}^x(\x^{\Bmu})\otimes\ve)|_{S_{\Bm} \cap xS_{\Bm'}x\iv} \otimes \d_1\ve, \end{equation*} where $\d_1 = \d_{\Bm} \otimes \ol{{}^x\d_{\Bm'}} \otimes \ol\det_{\BV}$, and $\ve$ is as in 5.6. Note that $\ol\det_{\BV} = \ve\d^{r-1}$ by (5.6.3). Hence we have $\d_1\ve = \d_{\Bm} \otimes \ol{{}^x\d_{\Bm'}} \otimes \d^{r-1}$, which is a linear character of $(\BZ/r\BZ)^n$ extended to that of $W_{\Bm} \cap x W_{\Bm'}x\iv$ by the trivial action of $S_{\Bm} \cap x S_{\Bm'}x\iv$.
We claim that \begin{equation*} \tag{5.8.1} R(\wt\x^{\Bla}_0\otimes\ol{\wt\x_0^{\Bmu}}\otimes\ol\det_{\BV}) = R(\x^{\Bla}\otimes{}^x(\x^{\Bmu})\otimes\ve)|_{t \to t^r}R(\d_1\ve), \end{equation*} where we denote by $f|_{t \to t^r}$ the polynomial $f(t^r)$ obtained from $f(t) \in \BZ[t]$. \par We show (5.8.1). Let $R = R(W_n)$ be the coinvariant algebra of $W_n$. Then we have \begin{equation*} R \simeq \Ql[x_1, \dots, x_n]/I_+(W_n), \end{equation*} where $I_+(W_n)$ is the ideal of $\Ql[x_1, \dots, x_n]$ generated by $e_1(x^r), \dots, e_n(x^r)$. Here $e_i(x) = e_i(x_1, \dots, x_n)$ is the $i$th elementary symmetric polynomial with variables $x_1, \dots, x_n$, and $e_i(x^r)$ denotes such a polynomial with variables $x_1^r, \dots, x_n^r$. Note that $(\BZ/r\BZ)^n$ acts on $R$ via $t_i\cdot x_i = \z x_i$ and $t_i\cdot x_j = x_j$ if $j \ne i$ for a generator $t_i$ of the $i$th factor cyclic group $\BZ/r\BZ$. Let $\p_{\Ba}$ be a linear character of $(\BZ/r\BZ)^n$ defined by $\p_{\Ba}(t_i) = \z^{a_i}$ for $\Ba = (a_1, \dots, a_n) \in \BZ^n$. Then the $\p_{\Ba}$-isotypic subspace of $R$ is given by \begin{equation*} x_1^{a_1}\cdots x_n^{a_n}\Ql[x_1^r, \dots, x_n^r]/ (I_+(W_n)\cap \Ql[x_1^r, \dots, x_n^r]) \end{equation*} with $0 \le a_i <r$. If $\p_{\Ba}$ can be lifted to the linear character of $W_n$, then $R(\p_{\Ba}) = \prod_i t^{a_i}$. We have \begin{equation*} \Ql[x_1^r, \dots x_n^r]/(I_+(W_n) \cap \Ql[x_1^r, \dots, x_n^r]) \simeq \Ql[x_1^r, \dots, x_n^r]/I_+, \end{equation*} where $I_+$ is the ideal of $\Ql[x_1^r, \dots, x_n^r]$ generated by $e_1(x^r), \dots, e_n(x^r)$, hence it is isomorphic to the coinvariant algebra of $S_n$ with respect to the variables $x_1^r, \dots, x_n^r$. Now suppose that $S_n$ stabilizes $\p_{\Ba}$. Then for an irreducible character $\x$ of $S_n$, $\r = \x\otimes \p_{\Ba}$ gives rise to an irreducible character of $W_n$. Moreover any irreducible submodule in $R$ affording $\r$ is contained in the $\p_{\Ba}$-isotypic subspace $R^{\p_{\Ba}}$ of $R$, and the graded multiplicity of $\r$ is determined by the graded multiplicity of $\x$ in the graded $S_n$-module $R^{\p_{\Ba}}$. Hence in this case we have $R(\r) = R(\x)|_{t\to t^r}R(\p_{\Ba})$. A similar argument works also for the group $W_{\Bm} \cap x W_{\Bm'}x\iv \simeq \prod_{i,j}W_{h_{ij}}$. Thus (5.8.1) holds. \par The definition of the fake degree in (5.1.3) makes sense for the case of symmetric groups, and we have \begin{equation*} R(\x^{\Bla}\otimes{}^x(\x^{\Bmu})\otimes\ve)|_{t \to t^r} = \frac{\prod_{i,j}\prod_{k=1}^{h_{ij}}(t^{kr}-1)} {|S_{\Bm} \cap x S_{\Bm'}x\iv|} \sum_{y \in S_{\Bm}\cap xS_{\Bm'}x\iv} \frac{\x^{\Bla}(y)\x^{\Bmu}(x\iv yx)} {\det_{\BV}(t^r - y)}. \end{equation*} On the other hand, $R(\d_1\ve)$ is determined by the double coset $\SO \in S_{\Bm}\backslash S_n / S_{\Bm'}$, so one can write it as $R(\d_1\ve) = t^{A_{\SO}}$ for some integer $A_{\SO}$. Substituting these data into (5.8.1), the formula (5.7.3) turns out to be \begin{equation*} \tag{5.8.2} \begin{split} R(\r^{\Bla}\otimes\ol{\r^{\Bmu}}\otimes\ol\det_{\BV}) = \frac{\prod_{k=1}^n(t^{kr}-1)}{|S_{\Bm}||S_{\Bm'}|} \sum_{\SO \in S_{\Bm}\backslash S_n/S_{\Bm'}}t^{A_{\SO}} \sum_{\substack{ x \in \SO \\ y \in S_{\Bm} \cap xS_{\Bm'}x\iv }} \frac{\x^{\Bla}(y)\x^{\Bmu}(x\iv yx)}{\det_{\BV}(t^r - y)}, \end{split} \end{equation*} \par Next we compute the value $A_{\SO}$ for each $\SO$. 
For a matrix $(h_{ij}) \in \SM_{\Bm, \Bm'}$, we introduce the notation $h_{\le i, \le j} = \sum_{k \le i}\sum_{l\le j} h_{{k,l}}$, $h_{i,\le j} = \sum_{k \le j} h_{i,k}$, etc.. Assume that $\SO$ corresponds to $(h_{ij}) \in \SM_{\Bm,\Bm'}$. We define an integer $B_{\SO}(\Bla, \Bmu)$ by \begin{equation*} B_{\SO}(\Bla,\Bmu) = \binom{n}{2} - n(\Bla) - n(\Bmu) + \sum_{i=1}^{r-1}h_{i, \le i}. \end{equation*} We have the following lemma. \begin{lem} Under the above notation, we have \begin{equation*} N^* - a(\Bla) - a(\tau(\Bmu)) + A_{\SO} = r\cdot B_{\SO}(\Bla,\Bmu). \end{equation*} \end{lem} \para{5.10.} Assuming the lemma, we continue the proof. By (5.8.2) together with Lemma 5.9, we have \begin{equation*} \tag{5.10.1} \begin{split} t^{-a(\Bla) - a(\tau(\Bmu))}\w_{\Bla,\Bmu}(t) &= \frac{\prod_{k=1}^n(t^{kr}-1)}{|S_{\Bm}||S_{\Bm'}|} \\ &\times \sum_{\SO \in S_{\Bm}\backslash S_n/S_{\Bm'}} t^{r\cdot B_{\SO}(\Bla,\Bmu)} \sum_{\substack{ x \in \SO \\ y \in S_{\Bm}\cap xS_{\Bm'}s\iv }} \frac{\x^{\Bla}(y)\x^{\Bmu}(x\iv yx)}{\det_{\BV}(t^r - y)}. \end{split} \end{equation*} Here we note that the set of double cosets $P_{\Bm}\backslash G /P_{\Bm'}$ is also in bijective correspondence with $\SM_{\Bm, \Bm'}$ in such a way that if $\wh\SO \in P_{\Bm}\backslash G /P_{\Bm'}$ corresponds to $(h_{ij}) \in \SM_{\Bm, \Bm'}$, then we have \begin{equation*} \dim (M_{p_j} \cap g(M'_{p'_i})) = h_{\le i, \le j} \text{ for } g \in \wh\SO \text{ and for any } i,j. \end{equation*} We have a decomposition $M'_{p'_i} = M_{p'_1}'^+\oplus\cdots \oplus M'^+_{p'_i}$, and the maximal torus $T = T_0$ is contained in $L_{\Bm}$ and $L_{\Bm'}$. Recall that $\{ e_1, \dots, e_n\}$ gives a basis of $V$ consisting of weight vectors of $T$, and bases of $M_{p_j}, M'^+_{p'_i}$ are given by subsets of this basis. For each $\wh\SO \in P_{\Bm}\backslash G /P_{\Bm'}$, one can choose a representative $n_{\SO} \in \wh\SO$ such that $n_{\SO} \in N_G(T)$, hence $n_{\SO}$ permutes the basis $\{ e_1, \dots, e_n\}$ up to scalar. It follows that \begin{equation*} \dim (M_{p_i} \cap n_{\SO}(M_{p'_i}'^+)) = h_{\le i, \le i} - h_{\le i-1, \le i} = h_{i, \le i}, \end{equation*} and we see that \begin{equation*} \tag{5.10.2} a_{-,+}(\Bm, \Bm'; n_{\SO}) = \sum_{i=1}^{r-1} h_{i, \le i}. \end{equation*} We now compare Proposition 3.11 with (5.10.1). Since $q^{r\binom{n}{2}}\prod_{k=1}^n(q^{kr} - 1) = |G^{F^r}|$, and $|\det_{\BV}(q^r - y)| = |T_y^{F^r}|$, we have, by (5.10.2), that \begin{equation*} (-1)^{p_-(\Bm) + p_+(\Bm')}q^{-r(n(\Bla) + n(\Bmu))}\sum_{z \in \SX_{\Bm, \unip}^{F^r}} Q_{\Bla}^-(z)Q_{\Bmu}^+(z) = q^{-a(\Bla) - a(\tau(\Bmu))}\w_{\Bla, \Bmu}(q). \end{equation*} This proves the theorem modulo Lemma 5.9. \para{5.11.} We shall prove Lemma 5.9. Note that the isomorphism $W_{\Bm} \cap x W_{\Bm'}x\iv \,\raise2pt\hbox{$\underrightarrow{\sim}$}\, $ $\prod_{i,j} W_{h_{ij}}$ is chosen so that it satisfies the following properties; the factor of $\d_{\Bm}$ corresponding to $W_{h_{ij}}$ is equal to $\d_0^{j-1}$, and the factor of ${}^x\d_{\Bm'}$ corresponding to $W_{h_{ij}}$ is equal to $\d_0^{i-1}$, where we denote by $\d_0$ the linear character of $W_{h_{ij}}$ corresponding to $\d$ given in 5.6. It follows that the factor of $\d_1\ve = \d_{\Bm}\otimes\ol{{}^x\d_{\Bm'}}\otimes\d^{r-1}$ corresponding to $W_{h_{ij}}$ is given by \begin{equation*} \tag{5.11.1} \d_0^{(j-1) + (r-i+1) + (r-1)} = \d_0^{(j-1) + (r-i)}. 
\end{equation*} Since $\d_0 = \p_{\Ba}$ with $\Ba = (1, \dots, 1)$ in the notation of 5.8, we see that \begin{equation*} \tag{5.11.2} R(\d_0^k) = t^{kh_{ij}} \end{equation*} for $k = 0, \dots, r-1$. It follows that \begin{equation*} A_{\SO} = \sum_{1 \le i,j \le r}[j - i -1]h_{ij}, \end{equation*} where $[j-i-1]$ is the integer between 0 and $r-1$ which is congruent to $(j-1) + (r-i)$ modulo $r$. On the other hand, by (5.1.2), (5.1.4) and (5.4.1), we have \begin{equation*} N^* - a(\Bla) - a(\t(\Bmu)) = r\bigl\{\binom{n}{2} - n(\Bla) - n(\Bmu)\bigr\} + C, \end{equation*} where \begin{equation*} \begin{split} C = (r-1)n &- \bigl\{ |\la^{(2)}| + 2|\la^{(3)}| + \cdots + (r-1)|\la^{(r)}|\bigr\} \\ &- \bigl\{ |\mu^{(r-2)}| + 2|\mu^{(r-3)}| + \cdots + (r-2)|\mu^{(1)}| + (r-1)|\mu^{(r)}| \bigr\}. \end{split} \end{equation*} Hence in order to prove the lemma, it is enough to show that \begin{equation*} \tag{5.11.3} C + \sum_{1 \le i, j \le r}[j - i -1]h_{ij} = r\sum_{i=1}^{r-1}h_{i,\le i}. \end{equation*} We show (5.11.3). Since $n = \sum_{1 \le i,j \le r} h_{ij}$, we have \begin{equation*} \tag{5.11.4} \begin{split} (r-1)n &+ \sum_{1 \le i, j \le r}[j - i -1]h_{ij} \\ &= \sum_{ 1 \le i,j \le r}\bigl\{[j - i -1] + (r-1)\bigr\}h_{ij}. \end{split} \end{equation*} On the other hand, since $|\la^{(j)}| = m_j$ and $|\mu^{(i)}| = m_i'$, by (5.7.2), we have \begin{align*} (j-1)|\la^{(j)}| &= \sum_{1 \le i \le r}(j-1)h_{ij} \quad\text{ for } j = 1, 2, \dots, r, \\ (r-1-i)|\mu^{(i)}| &= \sum_{1 \le j \le r}(r-1-i)h_{ij} \quad\text{ for } i = 0, 1, \dots, r-1, \end{align*} where we understand that $\mu^{(0)} = \mu^{(r)}$ and $h_{0j} = h_{rj}$. It follows that \begin{equation*} \tag{5.11.5} \begin{split} \bigl\{ |\la^{(2)}| &+ 2|\la^{(3)}| + \cdots + (r-1)|\la^{(r)}|\bigr\} \\ &+ \bigl\{ |\mu^{(r-2)}| + 2|\mu^{(r-3)}| + \cdots + (r-2)|\mu^{(1)}| + (r-1)|\mu^{(r)}| \bigr\} \\ &= \sum_{\substack{1 \le i \le r \\ 1 \le j \le r}} (j-1)h_{ij} + \sum_{\substack{0 \le i < r \\ 1 \le j \le r}} (r-1-i)h_{ij} \\ &= \sum_{\substack{1 \le i < r \\ 1 \le j \le r}} (j - i + r-2)h_{ij} + \sum_{1 \le j \le r}(j + r-2)h_{rj}. \end{split} \end{equation*} Subtracting (5.11.5) from (5.11.4), we see that the left hand side of (5.11.3) is equal to \begin{align*} &\sum_{\substack{ 1\le i < r \\ 1 \le j \le r}} \bigl([j-i-1] - (j-i-1)\bigr)h_{ij} \\ &+ \sum_{1 \le j \le r}([j-r-1] -(j-1))h_{rj} \\ &= r\sum_{i=1}^{r-1}h_{i, \le i}. \end{align*} Hence (5.11.3) holds, and the lemma is proved. The proof of Theorem 5.5 is now complete. \par \section{Geometric properties of Kostka functions} \para{6.1.} We consider the complex $K_1 = (\pi_{\Bm,1})_!\Ql$ on $\SX_{\Bm,\unip}$. By Theorem 4.8, $K_1[d'_{\Bm}]$ is a semisimple perverse sheaf equipped with $W_{\Bm}$-action, and is decomposed as \begin{equation*} K_1[d'_{\Bm}] \simeq \bigoplus_{\Bla \in \SP(\Bm)}V_{\Bla}\otimes A_{\Bla}, \end{equation*} where $A_{\Bla} = \IC(\ol X_{\Bla}, \Ql)[\dim X_{\Bla}]$ is a simple perverse sheaf on $\SX_{\Bm,\unip}$. Assume that the map $\pi_{\Bm,1} : \wt\SX_{\Bm, \unip} \to \SX_{\Bm,\unip}$ is defined with respect to an $F$-stable Borel subgroup $B$. Then $\wt\SX_{\Bm, \unip}$ has a natural $\Fq$-structure, and $\pi_{\Bm, 1}$ is $F$-equivariant. Thus one can define a canonical isomorphism $\vf_1: F^*K_1 \,\raise2pt\hbox{$\underrightarrow{\sim}$}\, K_1$.
Since each $X_{\Bla}$ is $F$-stable, we have $F^*A_{\Bla} \simeq A_{\Bla}$, and $\vf_1$ induces an isomorphism $F^*(V_{\Bla} \otimes A_{\Bla}) \,\raise2pt\hbox{$\underrightarrow{\sim}$}\, (V_{\Bla} \otimes A_{\Bla})$. It follows that there exists a unique isomorphism $\vf_{\Bla,1} : F^*A_{\Bla} \,\raise2pt\hbox{$\underrightarrow{\sim}$}\, A_{\Bla}$ such that $\vf_1 = \sum_{\Bla}\s_{\Bla} \otimes \vf_{\Bla,1}$, where $\s_{\Bla}$ is the identity map on the representation space $V_{\Bla}$. Let $\f_{\Bla}: F^*A_{\Bla} \,\raise2pt\hbox{$\underrightarrow{\sim}$}\, A_{\Bla}$ be the natural isomorphism induced from the $\Fq$-structure of $X_{\Bla}$. Since $A_{\Bla}$ is a simple perverse sheaf, $\f_{\Bla}$ coincides with $\vf_{\Bla,1}$ up to scalar. Let $d_{\Bla}$ be as in 4.9. The following result can be verified in a similar way as in [SS2, (4.1.1)]. \par \noindent (6.1.1) \ $\vf_{\Bla,1} = q^{d_{\Bla}}\f_{\Bla}$. In particular, the map $\vf_{\Bla,1}$ coincides with the scalar multiplication $q^{d_{\Bla}}$ on $A_{\Bla}|_{X_{\Bla}}$. \par In fact, for $z \in \SX_{\Bm, \unip}$, we have $\SH^i_zK_1 \simeq H^{i}(\SB_z^{(\Bm)}, \Ql)$. For $z \in X^F_{\Bla}$, we have, by Theorem 4.8 (and by [S3, Prop. 8.16]), \begin{equation*} H^{2d_{\Bla}}(\SB_z^{(\Bm)},\Ql) \simeq V_{\Bla} \otimes \SH^0_z\IC(\ol X_{\Bla},\Ql) \simeq V_{\Bla}. \end{equation*} Here $d_{\Bla} = \dim \SB_z^{(\Bm)}$ by Lemma 4.10, and $H^{2d_{\Bla}}(\SB_z^{\Bm)},\Ql)$ is an irreducible $W_{\Bm}$-module. Since the Frobenius action on $H^{2d_{\Bla}}(\SB_z^{(\Bm)},\Ql)$ commutes with the $W_{\Bm}$-action, the Frobenius map acts on $H^{2d_{\Bla}}(\SB_z^{(\Bm)},\Ql)$ as a scalar multiplication by Schur's lemma. In particular, all the irreducible components of $\SB_z^{(\Bm)}$ are $F$-stable, and this scalar is given by $q^{d_{\Bla}}$. It follows that $\vf_{\Bla,1}$ acts as a scalar multiplication by $q^{d_{\Bla}}$ on $\SH_z^0\IC(\ol X_{\Bla}, \Ql) \simeq \Ql$. Since $\f_{\Bla}$ acts as the identity map on this space, we obtain (6.1.1). \para{6.2.} In general, let $K$ be a complex on a variety $X$ defined over $\Fq$ such that $F^*K \simeq K$. An isomorphism $\f: F^*K \,\raise2pt\hbox{$\underrightarrow{\sim}$}\, K$ is said to be pure at $x \in X^F$ if the eigenvalues of $\f$ on $\SH^i_xK$ are algebraic numbers all of whose complex conjugates have absolute value $q^{i/2}$. \par We prove the following. \begin{prop} Let $\f_{\Bla}: F^*A_{\Bla} \,\raise2pt\hbox{$\underrightarrow{\sim}$}\, A_{\Bla}$ be as in 6.1. Assume that $z \in \SO^{\pm}_{\Bmu} \subset X_{\Bmu}^F$. Then $\f_{\Bla}$ is pure at $z$. \end{prop} \begin{proof} Note that if $z \in \SO^-_{\Bmu}$ (resp. $z \in \SO^{+}_{\Bmu})$, then $z = (x,\Bv)$ with $\Bv = (v_1, \dots, v_{r-1})$ satisfies the condition (a) (resp. (b)), where \par \noindent (a) \ $v_i = v_{i-1}$ if $\mu^{(i)} = \emptyset$, \\ (b) \ $v_i = 0$ if $\mu^{(i)} = \emptyset$. \par First we show \par \noindent (6.3.1) \ $\vf_1: F^*K_1 \,\raise2pt\hbox{$\underrightarrow{\sim}$}\, K_1$ is pure at $z \in X_{\Bmu}$ if $z$ satisfies the condition (a) or (b). \par Assume that $\Bmu \in \SP(\Bm')$ with $\Bm' = (m_1', \dots, m_r') \in \SQ_{n,r}$, and let $0 \le p_1'\le \cdots \le p_r'= n$ be integers associated to $\Bm'$. For given $\Bm, \Bm'$, we define a sequence of integers $\Bm'' = (\Bm^{(1)}, \dots, \Bm^{(r)})$, where $\Bm^{(i)} \in \SQ_{m_i', r_i}$ with $\sum_{i=1}^r (r_i-1) = r-1$ as follows; write the sequences $\{ p_i\}, \{ p_i'\}$ in the increasing order $0 \le p_1''\le \cdots \le p_{2r}''= n$. 
Then the sequence $\{ p_i''\}$ determines a composition of $m_i'$ which we denote by $\Bm^{(i)}$. Here $r_i$ is given by $r_i = \sharp\{ j \mid p'_{i-1} < p_j \le p'_i\} + 1$. We consider a parabolic subgroup $P = P_{\Bm'}$, which is the stabilizer of a partial flag $(M_{p'_i})$ in $G$. Let $L = L_{\Bm'}$ be the Levi subgroup of $P$ containing $T$, and $U_P$ the unipotent radical of $P$. Note that $L \simeq G_1 \times \cdots \times G_r$ with $G_i = GL_{m_i'}$. We define varieties \begin{align*} \wt\SX^L_{\Bm'',\unip} &= \wt\SX^{G_1}_{\Bm^{(1)},\unip} \times \cdots \times \wt \SX^{G_r}_{\Bm^{(r)}, \unip}, \\ \SX^L_{\Bm'',\unip} &= \SX^{G_1}_{\Bm^{(1)},\unip} \times \cdots \times \SX^{G_r}_{\Bm^{(r)}, \unip}, \end{align*} where $\wt\SX^{G_i}_{\Bm^{(i)},\unip}$ and $\SX^{G_i}_{\Bm^{(i)}, \unip}$ are varieties with respect to $G_i$ defined similarly to $\wt\SX_{\Bm \unip}$ and $\SX_{\Bm,\unip}$. Thus we can define a map $\pi^L_{\Bm'',1}: \wt\SX^L_{\Bm'',\unip} \to \SX^L_{\Bm'',\unip}$ as the product of $\pi^{G_i}_{\Bm^{(i)}, 1}: \wt\SX^{G_i}_{\Bm^{(i)},\unip} \to \SX^{G_i}_{\Bm^{(i)}, \unip}$. \par By generalizing the diagram in (1.5.1), we obtain a diagram \begin{equation*} \tag{6.3.2} \begin{CD} \wt\SX_{\Bm,\unip} @<\wt p_1<< G \times \wt\SX^P_{\Bm,\unip} @>\wt q_1>> \wt\SX^L_{\Bm'',\unip} \\ @V\pi'_1VV @VV r_1 V @VV\pi^L_{\Bm'',1} V \\ \wh \SX^P_{\Bm,\unip} @<p_1 << G \times \SX_{\Bm,\unip}^P @>q_1>> \SX^L_{\Bm'',\unip} \\ @V\pi''_1VV \\ \SX_{\Bm,\unip}, \end{CD} \end{equation*} where, by putting $P\uni = L\uni U_P$ (the set of unipotent elements in $P$), \begin{align*} \SX_{\Bm,\unip}^P &= \bigcup_{g \in P}g(U \times \prod_i M_{p_i}), \\ \wh \SX^P_{\Bm,\unip} &= G \times^P\SX_{\Bm,\unip}^P, \\ \wt \SX^P_{\Bm,\unip} &= P \times^{B}(U \times \prod_iM_{p_i}). \end{align*} The maps are defined similarly to (1.5.1). In particular, $\pi''_1\circ\pi'_1 = \pi_{\Bm,1}$. As in (1.5.1), both squares are cartesian. The map $p_1$ is a principal $P$-bundle, and the map $q_1$ is a locally trivial fibration with fibre isomorphic to $G \times U_P \times \prod_{i=1}^{r-2}M_{p'_i}$. We assume that $B$ and $L$ are $F$-stable. Then all the objects in (6.3.2) are defined over $\Fq$. Put $K_1' = (\pi'_1)_!\Ql$ and $K_1^L = (\pi^L_{\Bm'', 1})_!\Ql$. As in the discussion in 1.5, we see by (6.3.2) \begin{equation*} \tag{6.3.3} p_1^*K_1' \simeq q_1^*K_1^L. \end{equation*} Take $z \in (\SX_{\Bm, \unip} \cap X_{\Bmu})^F$ satisfying the condition (a) or (b). We have \begin{equation*} \wh\SX^P_{\Bm,\unip} \simeq \{ (y, gP) \in (G\uni \times V^{r-1}) \times G/P \mid g\iv y \in \SX^P_{\Bm, \unip} \}. \end{equation*} Up to $G$-conjugate, one can choose $\xi = (z, P) \in (\wh\SX^P_{\Bm,\unip})^F$ such that $\pi_1''(\xi) = z$. Choose $\e \in (G \times \SX^P_{\Bm,\unip})^F$ such that $p_1(\e) = \xi$. Put $z'= q_1(\e) \in (\SX^L_{\Bm'',\unip})^F$. By (6.3.3), we have \begin{equation*} \tag{6.3.4} \SH^i_{\xi}(K_1') \simeq \SH^i_{\e}(p_1^*K_1') \simeq \SH^i_{\e}(q_1^*K^L_1) \simeq \SH^i_{z'}(K^L_1). \end{equation*} Put $z' = (z_1', \dots, z_r') \in \SX^L_{\Bm'', \unip}$ with $z'_i \in \SX^{G_i}_{\Bm^{(i)}, \unip}$. If we denote $(\pi^{G_i}_{\Bm^{(i)},1})_*\Ql$ by $K_1^{G_i}$, $K^L_1$ can be written as \begin{equation*} \tag{6.3.5} K_1^L \simeq K^{G_1}_1 \boxtimes \cdots \boxtimes K^{G_r}_1 \end{equation*} and so \begin{equation*} \tag{6.3.6} \SH^i_{z'}K_1^L \simeq \bigoplus_{i_1 + \cdots + i_r = i}\SH^{i_1}_{z_1'}K^{G_1}_1 \otimes \cdots \otimes \SH^{i_r}_{z'_r}K^{G_r}_1. 
\end{equation*} The isomorphism $\vf_1^L : F^*K^L_1 \,\raise2pt\hbox{$\underrightarrow{\sim}$}\, K_1^L$ is defined similarly to $\vf_1$, and under the isomorphism (6.3.5), $\vf^L_1$ can be written as $\vf^L_1 = \vf^{G_1}_1 \otimes \cdots \otimes \vf^{G_r}_1$, where $\vf^{G_i}_1: F^*K^{G_i}_1 \,\raise2pt\hbox{$\underrightarrow{\sim}$}\, K^{G_i}_1$. Note that $z'_i \in \SX^{G_i}_{\Bm^{(i)}, \unip}$ also satisfies the condition (a) or (b). (6.3.1) holds in the case where $r = 2$ by [AH, Corrigendum, Prop. 3]. So by double induction on $n$ and $r$, we may assume that $\vf^{G_i}_1$ is pure at $z'_i$ for each $i$ unless $\Bm' = (n; -; \dots; -)$. Now assume that $\Bm' \ne (n;-;\dots;-)$. Then $\vf^L_1$ is pure at $z'$ by (6.3.6). By (6.3.4), $K_1'$ is pure at $\xi$ with respect to $\vf_1': F^*K_1' \,\raise2pt\hbox{$\underrightarrow{\sim}$}\, K_1'$. Note that $P = P_{\Bm'}$ is the unique parabolic subgroup of $G$ conjugate to $P_{\Bm'}$ which contains $x$ and is such that the image of $x$ on $L \simeq G_1 \times \cdots \times G_r$ has Jordan type $(\mu^{(1)}, \dots, \mu^{(r)})$. Thus $(\pi''_1)\iv(z) = \{ (z, P)\}$, and so $\SH^i_zK_1 \simeq \SH^i_{\xi}K_1'$. This proves (6.3.1) in the case where $\Bm' \ne (n;-;\dots;-)$. \par It remains to consider the case $\Bm' = (n;-;\dots;-)$. In this case, by our assumption we have $v_1 = v_2 = \dots = v_{r-1}$ or $v_2 = \dots = v_{r-1} = 0$. Hence the complex $K_1'$ is isomorphic to a similar complex in the case where $r = 2$. So in this case, (6.3.1) holds by [AH]. \par (6.3.1) implies that the eigenvalues of $F^*$ on $H^i(\SB_z^{(\Bm)}, \Ql)$ have absolute value $q^{i/2}$. By Theorem 4.8, we have \begin{equation*} \SH^i_zK_1 \simeq \bigoplus_{\Bla \in \SP(\Bm)}V_{\Bla} \otimes \SH_z^{i- d'_{\Bm} + \dim X_{\Bla}}\IC(\ol X_{\Bla}, \Ql). \end{equation*} Since $d_{\Bla} = (d'_{\Bm} - \dim X_{\Bla})/2$, $i - d'_{\Bm} + \dim X_{\Bla} = i-2d_{\Bla}$. Since the eigenvalues of $\vf_1$ on $\SH^i_zK_1$ have absolute value $q^{i/2}$ by (6.3.1), the eigenvalues of $\vf_{\Bla,1}$ on $\SH_z^{i - 2d_{\Bla}}\IC(\ol X_{\Bla}, \Ql)$ also have absolute value $q^{i/2}$. By (6.1.1), this implies that the eigenvalues of $\f_{\Bla}$ on $\SH^{i-2d_{\Bla}}_z\IC(\ol X_{\Bla}, \Ql)$ have absolute value $q^{(i-2d_{\Bla})/2}$, namely $\f_{\Bla}$ is pure at $z$. The proposition is proved. \end{proof} \remark{6.4.} In the case where $r = 2$, $X_{\Bla}$ is a single $G$-orbit. In this case the corresponding fact was proved in [AH, Corrigendum, Prop. 3], by making use of the contraction of a suitable transversal slice, and by the result of [MS]. The purity result for the exotic case with $r = 2$ was also proved in [SS2, Prop. 4.4] based on the discussion in [L3], again by making use of the transversal slice. However, for $r \ge 3$, the argument using the transversal slice does not work since it often happens that $Z_G(z)$ turns out to be a unipotent group for $z \in \SX_{\Bm, \unip}$, and one cannot construct a one-parameter subgroup. It is not clear whether $\f_{\Bla}$ is pure for all $z \in X_{\Bmu}^F$. \para{6.5.} For $\Bla, \Bmu \in \SP_{n,r}$, we define a polynomial $\IC^-_{\Bla, \Bmu}(t) \in \BZ[t]$ by \begin{equation*} \IC^-_{\Bla,\Bmu}(t) = \sum_{i \ge 0}\dim \SH^{2i}_z\IC(\ol X_{\Bla}, \Ql)t^i \end{equation*} for $z \in \SO^-_{\Bmu}$. We also define a polynomial $\IC^+_{\Bla, \Bmu}(t) \in \BZ[t]$ by the same formula as above if $z \in \SO^+_{\Bmu} \cap \SX^+_{\Bm, \unip}$ and by 0 if $\SO^+_{\Bmu} \cap \SX^+_{\Bm,\unip} = \emptyset$, where we assume that $\Bla \in \SP(\Bm)$.
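Note also, as a small check of these definitions, that $\IC(\ol X_{\Bla}, \Ql)$ restricts to the constant sheaf $\Ql$ on the open stratum $X_{\Bla}$, so that $\IC^-_{\Bla,\Bla}(t) = 1$; similarly $\IC^+_{\Bla,\Bla}(t) = 1$, since $\SO^+_{\Bla} \subset X_{\Bla} \cap \SX^+_{\Bm,\unip}$ as noted in 4.6, and hence the case $\SO^+_{\Bmu} \cap \SX^+_{\Bm,\unip} = \emptyset$ does not occur for $\Bmu = \Bla$. This is consistent with the matrices $(\IC^{\pm}_{\Bla,\Bmu})$ computed in Section 7, whose diagonal entries are all equal to 1.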
\para{6.6.} As in 3.10, let $\SC_q = \SC_q(\SX\uni)$ be the $\Ql$-space of all $G^F$-invariant $\Ql$-valued functions on $\SX\uni^F$. The bilinear form $\lp \ , \ \rp$ on $\SC_q$ is defined as in (3.10.2). Let $\SC^{\pm}_q$ be the $\Ql$-subspace of $\SC_q$ generated by $Q^{\pm}_{\Bla}$ for various $\Bla \in \SP_{n,r}$. Then $\{ Q^{\pm}_{\Bla} \mid \Bla \in \SP_{n,r} \}$ gives a basis of $\SC^{\pm}_q$. For an $F$-stable $G$-orbit $C$ in $G\uni \times V^{r-1}$, we denote by $y_C$ the characteristic function of $C^F$. In the case where $C = \SO^{\pm}_{\Bla}$ we denote it by $y^{\pm}_{\Bla}$. Then $Q^{\pm}_{\Bla}$ can be written as a sum of various $y_C$. For each $\Bla \in \SP_{n,r}$, we define a function $Y^{\pm}_{\Bla} \in \SC^{\pm}_q$ by the condition that $Y^{\pm}_{\Bla}$ is a linear combination of $Q^{\pm}_{\Bnu}$ with $\Bnu \le \Bla$ and that \begin{equation*} Y^{\pm}_{\Bla} = y^{\pm}_{\Bla} + \sum_{C}b^{\pm}_{\Bla,C}y_C, \end{equation*} where $b^{\pm}_{\Bla,C} = 0$ for $y_C = y^{\pm}_{\Bnu}$ with $\Bnu < \Bla$. Note that $Y^{\pm}_{\Bla}$ is determined uniquely by this condition. Although $Y^{\pm}_{\Bla}$ are not characteristic functions on $(\SO^{\pm}_{\Bla})^F$, they enjoy similar properties, \begin{equation*} \tag{6.6.1} Y^{\pm}_{\Bla}(z) = \begin{cases} 1 &\quad\text{ if } z \in \SO^{\pm}_{\Bla}, \\ 0 &\quad\text{ if $z \in \SO^{\pm}_{\Bnu}$ with $\Bnu < \Bla$}. \end{cases} \end{equation*} $\{ Y^{\pm}_{\Bla} \mid \Bla \in \SP_{n,r}\}$ gives a basis of $\SC^{\pm}_q$. We define a matrix $\wh\vL_q = (\wh\xi_{\Bla,\Bmu}(q))$ by $\wh\xi_{\Bla,\Bmu}(q) = \lp Y^-_{\Bla}, Y^+_{\Bmu} \rp$. In the case where $r = 2$, $Y^-_{\Bla} = Y^+_{\Bla}$ is the characteristic function of a single $G$-orbit, hence $\wh\vL_q$ is a diagonal matrix. For $r \ge 3$, $Y^-_{\Bla}$ and $Y^+_{\Bmu}$ are not necessarily orthogonal for $\Bla \ne \Bmu$. We consider the following condition for $\wh\vL_q$. \par \noindent (A) \ The matrix $\wh\vL_q$ is upper triangular with respect to the total order $\vl$. \par We show the following theorem. \begin{thm} Suppose that the condition (A) holds for $\wh\vL_q$. \begin{enumerate} \item $ \wt K^-_{\Bla,\Bmu}(t) = t^{a(\Bla)}\IC^-_{\Bla,\Bmu}(t^r). $ \item Assume that $z \in (\SO^-_{\Bmu})^F$ for $\Bmu \le \Bla$. Then $\SH^i_z\IC(\ol X_{\Bla},\Ql) = 0$ if $i$ is odd, and the eigenvalues of $\f_{\Bla}$ on $\SH^{2i}_z\IC(\ol X_{\Bla},\Ql)$ are $q^i$. \end{enumerate} \end{thm} \begin{proof} In view of (2.4.2), $Q^-_{\Bm, T_w}(z)$ can be written as \begin{equation*} Q^-_{\Bm, T_w}(z) = (-1)^{d_{\Bm}}\sum_{i \ge 0}(-1)^i\Tr(\vf_1w,\SH^i_zK_1). \end{equation*} Thus by Theorem 4.8, noticing that $d'_{\Bm} - \dim X_{\Bla} = 2d_{\Bla}$ is an even integer, we have \begin{equation*} \tag{6.7.1} Q^-_{\Bla}(z) = (-1)^{d_{\Bm}}\sum_{i \ge 0}(-1)^i \Tr(\vf_{\Bla,1}, \SH^i_z\IC(\ol X_{\Bla}, \Ql)). \end{equation*} $Q^+_{\Bla}(z)$ is obtained by restricting $Q^-_{\Bla}$ on $(\SX^+_{\Bm, \unip})^F$ by Proposition 2.13. \par We express $Q^{\pm}_{\Bla}$ as \begin{equation*} \tag{6.7.2} Q^{\pm}_{\Bla} = \sum_{\Bnu \le \Bla}\wh p^{\pm}_{\Bla,\Bnu}(q)Y^{\pm}_{\Bnu}, \end{equation*} Then we have \begin{equation*} \tag{6.7.3} \lp Q_{\Bla}^-, Q^+_{\Bmu}\rp = \sum_{\substack{\Bnu \le \Bla \\ \Bnu' \le \Bmu}} \wh p^-_{\Bla,\Bnu}(q)\wh \xi_{\Bnu,\Bnu'}(q)\wh p^+_{\Bmu,\Bnu'}(q). \end{equation*} We also note, by (6.1.1), that $\wh p^{\pm}_{\Bla,\Bla}(q) = (-1)^{d_{\Bm}}q^{d_{\Bla}}$. 
Let $\wt Q^{\pm}_{\Bla} \in \SC^{\pm}_{q^r}$ be the functions defined by \begin{align*} \wt Q^-_{\Bla}(z) &= (-1)^{p_-(\Bm)}q^{a(\Bla)}(q^r)^{-d_{\Bla}}Q^-_{\Bla}(z), \\ \wt Q^+_{\Bla}(z) &= (-1)^{p_+(\Bm)}q^{a(\t(\Bla))}(q^r)^{-d_{\Bla}}Q^+_{\Bla}(z). \\ \end{align*} for $z \in \SX^{F^r}_{\Bm,\unip}$. Put, for $\Bla \in \SP(\Bm), \Bmu \in \SP(\Bm')$, \begin{align*} p^-_{\Bla,\Bnu}(q) &= (-1)^{d_{\Bm}} q^{a(\Bla)}(q^r)^{-d_{\Bla}}\wh p^-_{\Bla,\Bnu}(q^r), \\ p^+_{\Bmu, \Bnu'}(q) &= (-1)^{d_{\Bm'} + p_-(\Bm') + p_+(\Bm')} q^{a(\t(\Bmu)) + a(\Bnu') - a(\t(\Bnu'))}(q^r)^{-d_{\Bmu}} \wh p^+_{\Bmu,\Bnu'}(q^r), \\ \xi_{\Bnu,\Bnu'}(q) &= q^{a(\t(\Bnu')) - a(\Bnu')} \wh \xi_{\Bnu,\Bnu'}(q^r). \end{align*} Since $d_{\Bm} = \dim G + p_-(\Bm)$ by Lemma 1.2, we have \begin{align*} d_{\Bm} + d_{\Bm'}+ p_-(\Bm') + p_+(\Bm') \equiv p_-(\Bm) + p_+(\Bm') \pmod 2. \end{align*} Hence by Theorem 5.5 together with (6.7.3), we have \begin{equation*} \tag{6.7.4} \sum_{z \in \SX^{F^r}\uni} \wt Q^-_{\Bla}(z)\wt Q^+_{\Bmu}(z) = \sum_{\substack{\Bnu \le \Bla \\ \Bnu' \le \Bmu}} p^-_{\Bla,\Bnu}(q)\xi_{\Bnu,\Bnu'}(q)p^+_{\Bmu, \Bnu'}(q) = \w_{\Bla,\Bmu}(q), \end{equation*} with $p^-_{\Bla,\Bla}(q) = q^{a(\Bla)}$, $p^+_{\Bmu,\Bmu}(q) = (-1)^{p_-(\Bm') + p_+(\Bm')}q^{a(\Bmu)}$. Moreover, $p^{\pm}_{\Bla,\Bnu} = 0$ unless $\Bnu \le \Bla$. We consider the matrix $P_1^{\pm} = (p^{\pm}_{\Bla, \Bmu})$, $\vL_1 = (\xi_{\Bnu,\Bnu'})$ and $\Om = (\w_{\Bla,\Bmu})$. We have a matrix equation \begin{equation*} \tag{6.7.5} P_1^-(\vL_1\,{}^t\!P_1^+) = \Om. \end{equation*} Since $P_1^-$ is a lower triangular matrix with diagonal entries $q^{a(\Bla)}$, and $\vL_1\,{}^t\!P_1^+$ is an upper triangular matrix with diagonal entries $\pm q^{a(\Bmu)}\xi_{\Bmu,\Bmu}(q)$ by the condition (A), $P_1^-$ and $\vL_1\,{}^t\!P_1^+$ are determined uniquely from $\Om$. In particular, the diagonal part of $\vL_1$ is determined uniquely from $\Om$. Let $\vL$ be a diagonal matrix and consider the matrix equation $P^-\vL\, {}^t\!P^+ = \Om$, where $P^{\pm}$ satisfy similar conditions as in Theorem 5.2. Then $P^{\pm}, \vL$ are determined uniquely from the equation, and by the uniqueness of the solution for (6.7.5), we have $P^- = P_1^-$ and $\vL{}^t\!P^+ = \vL_1\, {}^t\!P_1^+$. In particular, $\vL$ coincides with the diagonal entries of $\vL_1$. (But $\vL_1$ and $P_1^+$ are not determined from the equation (6.7.5).) Thus by Theorem 5.2, we have $p^-_{\Bla,\Bmu}(q) = \wt K^-_{\Bla,\Bmu}(q)$. \par By (6.6.1) and (6.7.2), we have $Q^{\pm}_{\Bla}(z) = \wh p^{\pm}_{\Bla,\Bnu}(q)$ for $z \in (\SO^{\pm}_{\Bnu})^F$. We consider the equation (6.7.1). We replace $\vf_{\Bla,1}$ by $\f_{\Bla}$, and use (6.1.1). By replacing $q$ by $q^m$ for a positive integer $m$, we have \begin{align*} \tag{6.7.6} \wt K^-_{\Bla, \Bnu}(q^m) = (q^m)^{a(\Bla)}\sum_{i \ge 0}(-1)^i \Tr(\f^{rm}_{\Bla}, \SH^i_z\IC(\ol X_{\Bla}, \Ql)). \end{align*} By Proposition 6.3, $\f_{\Bla}$ is pure at $z \in (\SO^-_{\Bnu})^F$. Thus, if we put $\SH^i_z = \SH^i_z\IC(\ol X_{\Bla},\Ql)$, one can write as \begin{equation*} \tag{6.7.7} \Tr(\f_{\Bla}^{rm}, \SH^i_z) = \sum_{j=1}^{k_i}(\a_{ij}q^{i/2})^{rm}, \end{equation*} where $k_i = \dim \SH^i_z$ and $\{ \a_{ij}q^{i/2} \mid 1 \le j \le k_i \}$ are eigenvalues of $\f_{\Bla}$ on $\SH^i_z$ such that $\a_{ij}$ are algebraic numbers all of whose complex conjugates have absolute value 1. By Theorem 5.2, $\wt K^-_{\Bla,\Bnu}(t)$ is a rational function on $t$. Here we note that \par \noindent (6.7.8) \ $\wt K^-_{\Bla,\Bnu}(t)$ is a polynomial in $t$. 
\par In fact, we can write $\wt K^-_{\Bla,\Bnu}(t)$ as $\wt K^-_{\Bla,\Bnu}(t) = P(t) + R(t)/Q(t)$, where $P,Q,R$ are polynomials with $\deg R < \deg Q$ or $R = 0$. By (6.7.6) and (6.7.7), the absolute value of the right hand side of (6.7.6) goes to $\infty$ when $m \to \infty$. It follows that the absolute value of $\wt K^-_{\Bla,\Bnu}(q^m) - P(q^m)$ goes to $\infty$ when $m \to \infty$, if it is non-zero. This implies that $R = 0$, and (6.7.8) holds. \par Now by applying Dedekind's theorem, we see that $\a_{ij} = 0$ for odd $i$, and $\a_{ij} = 1$ for even $i$ such that $\SH^i_z \ne 0$. It follows that $\SH^i_z = 0$ for odd $i$, and $\sum_{j=1}^{k_i}\a_{ij}q^{i/2} = (\dim \SH^i_z)q^{i/2}$. Thus by (6.7.6), we have $\wt K^-_{\Bla,\Bnu}(q) = q^{a(\Bla)}\IC_{\Bla,\Bnu}^-(q^r)$, which holds for any prime power $q$. The assertion (i) follows from this in view of (6.7.8). The assertion (ii) is already shown in the proof of (i). The theorem is proved. \end{proof} \par By using similar arguments, we can prove the following result. Note that in this case, we do not need to appeal to the condition (A). \begin{prop} Let $\Bnu \in \SP(\Bm'')$ with $\Bm'' = (m_1'', \dots, m_r'')$. Assume that $m_i'' = 0$ for $i = 1, \dots, r-2$. Then for any $\Bla \in \SP(\Bm)$ and $\Bmu \in \SP(\Bm')$, the following holds. \begin{enumerate} \item We have \begin{align*} \wt K^-_{\Bla,\Bnu}(t) &= t^{a(\Bla)}\IC^-_{\Bla,\Bnu}(t^r), \\ \wt K^+_{\Bmu,\Bnu}(t) &= t^{a(\t(\Bmu)) + a(\Bnu) - a(\t(\Bnu))} \IC^+_{\Bmu,\Bnu}(t^r). \end{align*} \item Assume that $z \in X_{\Bnu}^F$ for $\Bnu \le \Bla$. Then $\SH^i_z\IC(\ol X_{\Bla},\Ql) = 0$ if $i$ is odd, and the eigenvalues of $\f_{\Bla}$ on $\SH^{2i}_z\IC(\ol X_{\Bla},\Ql)$ are $q^i$. \end{enumerate} \end{prop} \begin{proof} Put $\Bla_0 = (-;\cdots;-;n;-)$. Then for any $\Bnu \le \Bla_0$, $X_{\Bnu} = \SO^{\pm}_{\Bnu}$ is a single $G$-orbit. It follows that $Y^{\pm}_{\Bla}$ coincides with the characteristic function $y_{\Bla} = y_{\Bla}^{\pm}$ of $X_{\Bla}^F$ for $\Bla \le \Bla_0$. Assume that $\Bla \le \Bla_0$ or $\Bmu \le \Bla_0$. Then (6.7.4) can be rewritten as \begin{equation*} \tag{6.8.1} \sum_{z \in \SX^{F^r}\uni} \wt Q^-_{\Bla}(z)\wt Q^+_{\Bmu}(z) = \sum_{\Bnu \le \Bla} p^-_{\Bla,\Bnu}(q)\xi_{\Bnu,\Bnu}(q)p^+_{\Bmu, \Bnu}(q) = \w_{\Bla,\Bmu}(q). \end{equation*} We consider the matrices $P_0^{\pm} = (p^{\pm}_{\Bla, \Bmu})$, $\vL_0 = (\xi_{\Bla,\Bmu})$ and $\Om_0 = (\w_{\Bla,\Bmu})$ indexed by $\Bla, \Bmu \le \Bla_0$. By (6.8.1), the matrices $P^{\pm}_0, \vL_0$ and $\Om_0$ satisfy conditions similar to those in Theorem 5.2. (Note that $p_+(\Bm') = p_-(\Bm')$ if $\Bmu \le \Bla_0$, hence $p^+_{\Bmu,\Bmu}(q) = q^{a(\Bmu)}$.) Thus the equation (6.8.1) determines uniquely the matrices $P^{\pm}_0$ and $\vL_0$. Hence by Theorem 5.2, $p^{\pm}_{\Bla,\Bmu}(q) = \wt K^{\pm}_{\Bla,\Bmu}(q)$ for any $\Bla,\Bmu \le \Bla_0$. In particular, $p^+_{\Bmu, \Bnu}(q)$ and $\xi_{\Bnu,\Bnu}$ are determined for $\Bnu \le \Bmu \le \Bla_0$. We now consider an arbitrary $\Bla \in \SP_{n,r}$. Since $p^+_{\Bnu,\Bnu}(q) = q^{a(\Bnu)}$, the equation (6.8.1) determines uniquely $p^-_{\Bla,\Bnu}(q)$ for $\Bnu \le \Bla_0$, by induction on the total order $\vl$ on $\SP_{n,r}$. Again by Theorem 5.2, we see that $p^-_{\Bla,\Bnu}(q) = \wt K^-_{\Bla,\Bnu}(q)$. A similar argument also holds for $p^+_{\Bmu,\Bnu}(q)$.
Thus we have proved \begin{equation*} \tag{6.8.2} p^-_{\Bla, \Bnu}(q) = \wt K^-_{\Bla,\Bnu}(q), \quad p^+_{\Bmu,\Bnu}(q) = \wt K^+_{\Bmu, \Bnu}(q) \qquad \text{ for any $\Bnu \le \Bla_0$.} \end{equation*} By using an argument similar to that for (6.7.6), we have \begin{align*} \tag{6.8.3} \wt K^-_{\Bla, \Bnu}(q^m) &= (q^m)^{a(\Bla)}\sum_{i \ge 0}(-1)^i \Tr(\f^{rm}_{\Bla}, \SH^i_z\IC(\ol X_{\Bla}, \Ql)), \\ \wt K^+_{\Bmu, \Bnu}(q^m) &= (q^m)^{a(\t(\Bmu)) + a(\Bnu) - a(\t(\Bnu))}\sum_{i \ge 0}(-1)^i \Tr(\f^{rm}_{\Bmu}, \SH^i_z\IC(\ol X_{\Bmu}, \Ql)). \end{align*} (Here in the latter formula, we assume that $z \in \SO_{\Bnu}^+ \cap \SX^+_{\Bm'}$ for $\Bmu \in \SP(\Bm')$. In the case where $\SO^+_{\Bnu} \cap \SX^+_{\Bm'} = \emptyset$, we have $\wt K^+_{\Bmu, \Bnu}(q^m) = 0$.) Now the proposition follows by an argument similar to that in the proof of Theorem 6.7. \end{proof} The condition (A) can be verified in small rank cases. We have the following result. \begin{prop} Assume that $n = 1, 2$, and $r$ is arbitrary. Then the condition (A) holds for $\wh\vL_q$. \end{prop} \begin{proof} First consider the case where $n = 1$. In this case, $X_{\Bnu} \cap \SX^+_{\Bm'}$ coincides with $\SO^+_{\Bnu}$ for any $\Bnu$ and $\Bm'$, if it is non-empty. Hence $Y^+_{\Bmu} = y^+_{\Bmu}$ for any $\Bmu$. One can check that $Y^-_{\Bmu}$ coincides with the characteristic function of $X_{\Bmu}^F$. Thus the condition (A) holds. In fact, in this case $\wh\vL_q$ is a diagonal matrix, and by applying the arguments in the proof of Proposition 6.8, $\wt K^-_{\Bla,\Bnu}(t), \wt K^+_{\Bmu,\Bnu}(t)$ are given as in the formulas in Proposition 6.8 (i). \par Next consider the case where $n = 2$. We determine the pairs $(\Bm', \Bnu)$ satisfying the condition (*) $X_{\Bnu} \cap \SX^+_{\Bm'}$ splits into more than one orbit. This occurs only when $\Bm'$ is of the form $\Bm' = (\cdots,1,\cdots,1,\cdots)$, where the non-zero factors occur in the $k$-th and $\ell$-th positions for some $1 \le k < \ell< r$, and $\Bnu$ is such that $\nu^{(k)} = (1^2)$, or $\Bnu = \Bla(\Bm') = (\cdots;1;\cdots;1;\cdots)$. In that case, we have $X_{\Bnu} \cap \SX^+_{\Bm'} = \SO^+_{\Bnu} \coprod X^+_{\Bnu, \Bm'}$, where $X^+_{\Bnu,\Bm'}$ consists of infinitely many $G$-orbits. Thus we have, for any $1\le k < \ell < r$, \begin{equation*} \tag{*} \Bm' = (\cdots, 1, \cdots, 1, \cdots), \quad \Bnu = (\cdots;1^2; \cdots; -; \cdots) \quad \text{ or } \quad \Bnu = \Bla(\Bm'), \end{equation*} where 1 appears in the $k$-th and $\ell$-th factors of $\Bm'$, and $1^2$ appears in the $k$-th factor of $\Bnu$. We show \par \noindent (6.9.1) \ The function $Y^+_{\Bmu}$ coincides with $y^+_{\Bmu}$ unless $\Bmu \in \SP(\Bm')$ for $\Bm'$ in (*), in which case, $Y^+_{\Bmu} = y^+_{\Bmu} + \sum_{\Bnu,C}a_Cy_C$ with $C \subset X^+_{\Bnu,\Bm'}$ for $\Bnu$ in (*). \par If $\Bmu = (\cdots;1^2;\cdots)$, any $\Bnu \le \Bmu$ has a form similar to $\Bmu$. Hence by (*), $Y^+_{\Bmu} = y^+_{\Bmu}$. If $\Bmu = (\cdots;1;\cdots;1)$, any $\Bnu \le \Bmu$ has a similar form, or a form as in the previous case. Hence again by (*), $Y^+_{\Bmu} = y^+_{\Bmu}$. Assume that $\Bmu = (\cdots;1;\cdots;1;\cdots)$ with $1 \le k < \ell<r$. Then $X_{\Bnu} \cap \SX^+_{\Bm'} \ne \emptyset$ only when $\Bnu = \Bmu$, $(\cdots;1^2;\cdots)$, or $(\cdots;1; \cdots;1)$. In the second and third cases, by the previous results, $Y^+_{\Bnu} = y^+_{\Bnu}$. Except in the case where $\Bnu$ is as in (*), $X_{\Bnu} \cap \SX^+_{\Bm'} = \SO_{\Bnu}^+$.
Hence by subtracting those functions $y^+_{\Bnu}$ from $Q^+_{\Bmu}$, we obtain a function which has support only on $\SO_{\Bmu}^+$ and $X^+_{\Bnu,\Bm'}$. Finally assume that $\Bmu = (\cdots;2;\cdots)$. Then $X_{\Bnu} \cap \SX^+_{\Bm'} \ne \emptyset$ only when $\Bnu = \Bmu$, or $\Bnu = (\cdots;1^2;\cdots), (\cdots;1; \cdots;1)$. Moreover, by (*), in each case, $X_{\Bnu} \cap \SX^+_{\Bm'} = \SO^+_{\Bnu}$. Hence by subtracting those $y^+_{\Bnu}$ such that $\Bnu \ne \Bmu$ from $Q^+_{\Bmu}$, we obtain $Y^+_{\Bmu} = y^+_{\Bmu}$. This proves (6.9.1). \par Next we determine the pairs $(\Bm, \Bnu)$ satisfying the condition (**) $X_{\Bnu} \cap \SX_{\Bm} \neq \emptyset$ and $X_{\Bnu} \not\subset \SX_{\Bm}$. If $\Bm = (\dots, 2, \dots)$, clearly $X_{\Bnu} \subset \SX_{\Bm}$. So assume that $\Bm$ is of the form $(\dots, 1, \dots, 1, \dots)$ for some $1 \le a < b \le r$. If $\Bnu$ is of the form $(\cdots; 2; \cdots)$ with $(2)$ in the $k$-th factor, then $k \ge b$ since $\Bla(\Bm) \ge \Bnu$. But in this case, $X_{\Bnu} \subset \SX_{\Bm}$. If $\Bnu = (\cdots;1;\cdots;1;\cdots)$ with $(1)$ in the $k$-th and $\ell$-th factors, then $a \le k$, $b \le \ell$, and $X_{\Bnu} \subset \SX_{\Bm}$. Hence we assume that $\Bnu = (\cdots; 1^2; \cdots)$ with $(1^2)$ in the $k$-th factor. If $k \ge b$, then $X_{\Bnu} \subset \SX_{\Bm}$. So, we have $a \le k < b$. Then $X_{\Bnu} \not\subset \SX_{\Bm}$. Hence we have \begin{equation*} \tag{**} \Bm = (\cdots, 1,\cdots, 1, \cdots), \quad \Bnu = (\cdots; 1^2; \cdots), \end{equation*} where $1^2$ appears in the $k$-th factor of $\Bnu$, and 1 appears in the $a$-th and $b$-th factors of $\Bm$, with $a \le k < b$. For each $\Bla$, let $\wt y_{\Bla}$ be the characteristic function of $X_{\Bla}^F$. We show \par \noindent (6.9.2) \ $Y^-_{\Bla}$ coincides with $\wt y_{\Bla}$ unless $\Bla = (\cdots;1;\cdots;1;\cdots)$ with $\la^{(a)} = \la^{(b)} = (1)$ for $a < b$, in which case, $Y^-_{\Bla} = \wt y_{\Bla} + \sum_Ca_Cy_C$ with $C \subset X_{\Bnu}\backslash \SO^-_{\Bnu}$ for $\Bnu = (\cdots;1^2;\cdots)$ with $\nu^{(a)} = (1^2)$. \par If $\Bla = (\cdots;1^2;\cdots)$, any $\Bnu \le \Bla$ has a form similar to $\Bla$. We have $X_{\Bnu} \subset \SX_{\Bm}$ and $\SB^{(\Bm)}_z$ has a common structure for any $z \in X_{\Bla}$. Thus by backwards induction on $k$ ($1^2$ appears in the $k$-th factor), we see that $Y^-_{\Bla} = \wt y_{\Bla}$. Assume that $\Bla = (\cdots;1;\cdots;1;\cdots)$, where $\la^{(i)} = (1)$ for $i = a, b$ with $a < b - 1$. Put $\Bla' = (\cdots;1;\cdots;1;\cdots)$, where $\la'^{(a+1)} = \la'^{(b)} = (1)$. If $\Bnu \le \Bla$, then $\Bnu$ has the type $\Bnu = (\cdots;1^2;\cdots)$ or $(\cdots;1;\cdots;1;\cdots)$. Assume that $\Bnu$ is such that $\nu^{(k)} = (1^2)$ for some $k \ge a$. We have \begin{align*} \tag{6.9.3} X_{\Bnu} &= \{ (x,\Bv) \in G\uni \times V^{r-1} \mid x = 1, v_i = 0 \text{ for } i \le k-1, v_k \ne 0\}, \\ X_{\Bnu} \cap \SX_{\Bm} &= \{(x, \Bv) \in X_{\Bnu} \mid \lp v_i \rp = \lp v_a \rp \text{ for } k +1 \le i < b\}. \end{align*} Take $z \in X_{\Bnu} \cap \SX_{\Bm}$. Then $\SB^{(\Bm)}_z$ is a single point if $k < b$, and is equal to $\SB$ if $k \ge b$. If $\Bnu = (\cdots;1;\cdots;1;\cdots)$, then $\SB^{(\Bm)}_z$ is a single point for any $z \in X_{\Bnu}$. A similar property also holds for the pair $(\Bm'', \Bnu')$ if $\Bla' \in \SP(\Bm'')$ and $\Bnu'\le \Bla'$. It follows that the function $Q^-_{\Bla} - Q^-_{\Bla'}$ has support only on $X_{\Bla}$ and $X_{\Bnu} \cap \SX_{\Bm}$ for $\Bnu$ such that $\nu^{(a)} = (1^2)$.
By subtracting $\wt y_{\Bnu}$ from this, we see that $Y^-_{\Bla} = \wt y_{\Bla} + \sum_Ca_Cy_C$, where $C \subset X_{\Bnu} \backslash \SO_{\Bnu}^-$ for $\Bnu$ with $\nu^{(a)} = (1^2)$. Next assume that $\Bla = (\cdots;1;1;\cdots)$ with $\la^{(a)} = \la^{(a+1)} = (1)$. If $a+1 = r$, it is easy to see that $Y^-_{\Bla} = \wt y_{\Bla}$. So assume that $a+1 < r$. We consider $\Bla' = (\cdots;1;-;1;\cdots)$ with $\la'^{(a)} = \la'^{(a+2)} = (1)$. Then by a consideration similar to the above, $Q^-_{\Bla} - Q^-_{\Bla'}$ has support only on $X_{\Bla}$ and $X_{\Bnu}$ with $\Bnu$ such that $\nu^{(a)} = (1^2)$. Hence $Y^-_{\Bla}$ can be written as in the previous case. Finally assume that $\Bla = (\cdots;2;\cdots)$ with $\la^{(a)} = (2)$. If $a = r$, it is easy to see that $Y^-_{\Bla} = \wt y_{\Bla}$. So assume that $a < r$, and put $\Bla' = (\cdots;1;1;\cdots)$ with $\la'^{(a)} = \la'^{(a+1)} = (1)$. As in the previous discussion, $Q^-_{\Bla} - Q^-_{\Bla'}$ has support only on $X_{\Bla}$, on $X_{\Bnu}$ for $\Bnu = (\cdots;2;\cdots)$ with $\nu^{(k)} = (2)$ for $a < k$, and on $X_{\Bnu}$ for $\Bnu = (\cdots;1^2;\cdots)$ with $\nu^{(a)} = (1^2)$. If $\Bnu$ is in the former case, by induction we may assume that $Y^-_{\Bnu} = \wt y_{\Bnu}$. By the previous results, we also have $Y^-_{\Bnu} = \wt y_{\Bnu}$ for the latter $\Bnu$. Hence we have $Y^-_{\Bla} = \wt y_{\Bla}$. This proves (6.9.2). \par Now assume that $\Bla$ is of the form $\Bla = (\cdots;1;\cdots;1;\cdots)$ with $\la^{(a)} = \la^{(b)} = (1)$ for $a < b$. By (6.9.2), $Y^-_{\Bla}$ has an additional support only for $\Bnu = (\cdots;1^2;\cdots)$ with $\nu^{(a)} = (1^2)$. Let $\Bmu = (\cdots;1;\cdots;1;\cdots) \in \SP(\Bm')$ be as in (*) with $k = a$. Then $X_{\Bnu} \cap \SX^+_{\Bm'} = \SO^+_{\Bnu} \coprod X^+_{\Bnu,\Bm'}$. We show that \par \noindent (6.9.4) \ If $b < \ell$, then $\lp Y^-_{\Bla}, Y^+_{\Bmu} \rp = 0$. In particular, if $\lp Y^-_{\Bla}, Y^+_{\Bmu}\rp \ne 0$, then $\Bla \not > \Bmu$. \par First assume that $a < b-1$, and recall the discussion in the proof of (6.9.2). Consider $Q^-_{\Bla} - Q^-_{\Bla'}$ as in (6.9.2) for $\Bla' = (\cdots;1;\cdots;1;\cdots)$ with $\la'^{(a+1)} = \la'^{(b)} = (1)$. By our assumption $b < \ell$, it follows from (6.9.3) that $X_{\Bnu} \cap \SX_{\Bm}$ contains $\SO^+_{\Bnu} \cup X^+_{\Bnu,\Bm'}$. Then if we subtract $\wt y_{\Bnu}$ from $Q^-_{\Bla} - Q^-_{\Bla'}$, the resulting function does not have support on $X^+_{\Bnu,\Bm'}$. Hence $Y^-_{\Bla}$ does not have support on $X^+_{\Bnu,\Bm'}$. Then by (6.9.1), we see that $\lp Y^-_{\Bla}, Y^+_{\Bmu} \rp = 0$. The proof for the case $a = b-1$ is similar, following the discussion of the proof of (6.9.2). Thus (6.9.4) is proved. \par We now define a total order $\vl$ on $\SP_{n,r}$ compatible with the partial order $\le$ so that $\wh\vL_q$ is upper triangular. $\SP_{n,r}$ is decomposed as $\SP_{n,r} = \coprod_{\nu \in \SP_n}\SP[\nu]$, where $\SP[\nu] = \{ \Bla \in \SP_{n,r} \mid \sum \la^{(i)} = \nu \}$. We arrange the sets $\SP[\nu]$ according to a total order compatible with the dominance order on $\SP_n$. In our case, $\SP_{2,r} = \SP[(2)] \coprod \SP[(1^2)]$. We put the lexicographic order on $\SP[(2)]$ and on $\SP[(1^2)]$. We define $\SP[(2)] \vg \SP[(1^2)]$. By (6.9.1), (6.9.2) and (6.9.4), we see that if $\lp Y^-_{\Bla}, Y^+_{\Bmu}\rp \ne 0$, then $\Bla, \Bmu \in \SP[(1^2)]$, and $\Bla \vle \Bmu$. The proposition is proved. \end{proof} \remarks{6.10.} \ (i) In the discussion of the proof of Theorem 6.7, the equation (6.7.5) holds without the assumption (A).
Thus, under the notation there, if one can show that $P_1 = P$, then we have $\vL\,{}^t\!P^+ = \vL_1\,{}^t\!P_1^+$. This implies that $\vL_1$ is upper triangular, and so the condition (A) holds. Since $Q^-_{\Bla}(z) = \wh p^-_{\Bla}(q)$, the condition $P_1 = P$ is equivalent to the formula (cf. (6.7.6)) \begin{equation*} \tag{6.10.1} \wt K_{\Bla,\Bmu}^-(q) = q^{a(\Bla)}\sum_{i \ge 0}(-1)^i \Tr(\f_{\Bla}^r, \SH^i_z\IC(\ol X_{\Bla},\Ql)). \end{equation*} Thus the condition (A) is equivalent to the formula (6.10.1). \par (ii) The matrix $\wh \vL_q$ is not diagonal even in the case where $n = 2, r= 3$. In fact, assume that $\Bla = (1;-;1), \Bmu = (1;1;-)$ and $\Bnu = (1^2;-;-)$ with $\Bla \in \SP(\Bm), \Bmu \in \SP(\Bm')$. In this case, $X_{\Bnu} \cap \SX_{\Bm}$ does not contain $X^+_{\Bnu,\Bm'}$, hence $Y^-_{\Bla} = \wt y_{\Bla} + y$, where $y$ is a function whose support is contained in $X_{\Bnu}^F$, and $y$ is non-zero on $X^+_{\Bnu,\Bm'}$. Moreover, by (6.9.1), $Y^+_{\Bmu} = y^+_{\Bmu} + \sum_Ca_Cy_C$ with $C \subset X^+_{\Bnu,\Bm'}$. It follows that $\lp Y^-_{\Bla}, Y^+_{\Bmu}\rp \ne 0$. Also we have $\lp Y^-_{\Bnu}, Y^+_{\Bmu}\rp \ne 0$ since $Y^-_{\Bnu} = \wt y_{\Bnu}$ by (6.9.2). In fact, these are the only cases such that $\lp Y^-_{\Bla'}, Y^+_{\Bmu'}\rp \ne 0$. \section{Some examples} \para{7.1.} We consider the matrix equation $P^-\vL {}^tP^+ = \Om$ with $P^{\pm} = (\wt K^{\pm}_{\Bla,\Bmu}(t))$ as in (5.2.1). In the case where $n = 1$, Kostka functions have an interpretation in terms of the polynomials $\IC^{\pm}_{\Bla, \Bmu}(t)$ by Proposition 6.9 (see also Proposition 6.8), namely, the following formula holds. \begin{align*} \tag{7.1.1} \wt K^-_{\Bla, \Bnu}(t) &= t^{a(\Bla)}\IC^-_{\Bla, \Bnu}(t^r), \\ \wt K^+_{\Bmu, \Bnu}(t) &= t^{a(\tau(\Bmu)) + a(\Bnu) - a(\tau(\Bnu))} \IC^+_{\Bmu, \Bnu}(t^r). \end{align*} In this case, $W = G(r,1,1) \simeq \BZ/r\BZ$. Put $\SP_{1,r} = \{ \Bla_1, \dots, \Bla_r\}$, arranged with respect to the dominance order, $\Bla_1 < \Bla_2 < \cdots < \Bla_r$ where $\Bla_i = (\la^{(1)}, \dots, \la^{(r)})$ with $\la^{(r+1-i)} = (1), \la^{(j)} = \emptyset$ for $j \ne r+1-i$. \par We first consider the simplest case, i.e., the case where $n = 1$ and $r = 3$. Hence $\SP_{1,r} = \{ \Bla_1, \Bla_2,\Bla_3\}$. In this case the equation $P^-\vL\, {}^t\!P^+ = \Om$ is given by \begin{equation*} \begin{pmatrix} t^2 & & \\ t & t & \\ 1 & 1 & 1 \end{pmatrix} \begin{pmatrix} 1 & & \\ & t^2 - t\iv & \\ & & t^4 - t \end{pmatrix} \begin{pmatrix} t^2 & 1 & t \\ & t & 0 \\ & & 1 \end{pmatrix} = \begin{pmatrix} t^4 & t^2 & t^3 \\ t^3 & t^4 & t^2 \\ t^2 & t^3 & t^4 \end{pmatrix}. \end{equation*} Note that the matrix $\vL$ has non-polynomial entries. However, the left hand side of this equation is changed to the form \begin{equation*} P'\vL'\, {}^t\!P'' = \begin{pmatrix} t^2 & & \\ t & t & \\ 1 & 1 & 1 \\ \end{pmatrix} \begin{pmatrix} 1 & & \\ & t^3-1 & \\ & & t^3 -1 \end{pmatrix} \begin{pmatrix} t^2 & 1 & t \\ & 1 & 0 \\ & & t \end{pmatrix}, \end{equation*} where $P' = P^-, P'' = P^+\varTheta\iv, \vL' = \vL\varTheta$ with a diagonal matrix $\varTheta = \Diag (1,t,t\iv)$. In this case, we have \begin{equation*} (\IC^-_{\Bla,\Bmu}(t^3)) = \begin{pmatrix} 1 & & \\ 1 & 1 & \\ 1 & 1 & 1 \end{pmatrix}, \quad (\IC^+_{\Bla,\Bmu}(t^3)) = \begin{pmatrix} 1 & & \\ 1 & 1 & \\ 1 & 0 & 1 \end{pmatrix}. \end{equation*} We define polynomials $\xi_{\Bla_i}(t)$ by $\xi_{\Bla_1}(t) = 1, \xi_{\Bla_2}(t) = t-1, \xi_{\Bla_3}(t) = t-1$. 
Then we have
\begin{equation*}
\vL' = \begin{pmatrix} \xi_{\Bla_1}(t^3) & & \\ & \xi_{\Bla_2}(t^3) & \\ & & \xi_{\Bla_3}(t^3) \\ \end{pmatrix}, \quad\text{ and }\quad \vL\iv \vL' = \begin{pmatrix} 1 & & \\ & t & \\ & & t\iv \end{pmatrix}.
\end{equation*}
We have $(a(\Bla_1), a(\Bla_2), a(\Bla_3)) = (2,1,0)$. In our case $\t$ is the permutation $\Bla_1 \lra \Bla_1, \Bla_2 \lra \Bla_3$ and so $(a(\t(\Bla_1)), a(\t(\Bla_2)), a(\t(\Bla_3))) = (2,0,1)$. Then $P^- = P'$ is obtained from $(\IC^-_{\Bla,\Bmu}(t^3))$ by multiplying the corresponding rows by $t^2, t, 1$. In turn, $P''$ is obtained from $(\IC^+_{\Bla,\Bmu}(t^3))$ by multiplying the corresponding rows by $t^2, 1, t$, and $P^+$ is obtained from $P''$ by multiplying the corresponding columns by $1, t, t\iv$. Moreover, we have
\begin{equation*}
\varTheta = \Diag (t^{a(\Bla_1) - a(\tau(\Bla_1))}, t^{a(\Bla_2)-a(\tau(\Bla_2))}, t^{a(\Bla_3) - a(\tau(\Bla_3))}).
\end{equation*}
\para{7.2.}
We consider the general $r$ with $n = 1$. As in 7.1, we consider the relation $P'\vL'\,{}^t\!P''= \Om$ where $P' = P^-, P'' = P^+\varTheta\iv, \vL' = \vL\varTheta$ with a diagonal matrix $\varTheta = \Diag (\dots, t^{a(\Bla) -a(\t(\Bla))}, \dots)$. We give the matrices $P^{\pm}, (\IC^{\pm}_{\Bla,\Bmu}(t^r)), \vL, \vL'$ and $\varTheta$. The matrices $P^{\pm}$ are given as follows.
\begin{equation*}
P^- = \begin{pmatrix} t^{r-1} & & & & & \\ t^{r-2} & t^{r-2} & & & & \\ t^{r-3} & t^{r-3} & t^{r-3} & & & \\ \cdots & \cdots & \cdots & \ddots & & \\ t & t & t & t & t & \\ 1 & 1 & 1 & 1 & 1 & 1 \end{pmatrix}, \quad P^+ = \begin{pmatrix} t^{r-1} & & & & & \\ 1 & t^{r-2} & & & & \\ t & 0 & t^{r-3} & & & \\ \cdots & \cdots & \cdots & \ddots & & \\ t^{r-3} & 0 & \cdots & 0 & t & \\ t^{r-2} & 0 & \cdots & 0 & 0 & 1 \end{pmatrix}.
\end{equation*}
The diagonal matrix $\vL$ is given as
\begin{equation*}
\vL = \Diag (1, t^2 - t^{-r+2}, t^4 - t^{-r+4}, \dots, t^{2r-4} - t^{r-4}, t^{2r-2} - t^{r-2} ).
\end{equation*}
The permutation of $\SP_{1,r}$ is given by $\Bla_1 \lra \Bla_1$ and $\Bla_i \lra \Bla_{r-i+2}$ for $i \ne 1$. Then $\varTheta$ is given by
\begin{equation*}
\varTheta = \Diag ( 1, t^{r-2}, t^{r-4}, \dots, t^{-r+4}, t^{-r+2}),
\end{equation*}
and the matrix $\vL' = \vL\varTheta$ is given by
\begin{equation*}
\vL' = \Diag (1, t^r-1, t^r-1,\cdots, t^r-1 ).
\end{equation*}
Finally, the matrices $(\IC^{\pm}_{\Bla,\Bmu}(t^r))$ are given by
\begin{align*}
(\IC^-_{\Bla,\Bmu}(t^r)) &= \begin{pmatrix} 1 & & & & & \\ 1 & 1 & & & & \\ 1 & 1 & 1 & & & \\ \cdots & \cdots & \cdots & \ddots & & \\ 1 & 1 & 1 & 1 & 1 & \\ 1 & 1 & 1 & 1 & 1 & 1 \end{pmatrix}, \quad (\IC^+_{\Bla,\Bmu}(t^r)) &= \begin{pmatrix} 1 & & & & & \\ 1 & 1 & & & & \\ 1 & 0 & 1 & & & \\ \cdots & \cdots & \cdots & \ddots & & \\ 1 & 0 & 0 & 0 & 1 & \\ 1 & 0 & 0 & 0 & 0 & 1 \\ \end{pmatrix}.
\end{align*}
\para{7.3.}
Assume that $W = G(3,1,2) \simeq \FS_2\ltimes (\BZ/3\BZ)^2$. We arrange the elements in $\CP_{2,3}$ in the total order $\Bla_1 \vl \Bla_2 \vl \cdots \vl \Bla_9$ compatible with the dominance order, where
\begin{align*}
\Bla_1 &= (-;-;1^2), & \Bla_2 &= (-;1^2;-), & \Bla_3 &= (-;-;2), \\ \Bla_4 &= (1^2;-;-), & \Bla_5 &= (-;1;1), & \Bla_6 &= (1;-;1), \\ \Bla_7 &= (-;2;-), & \Bla_8 &= (1;1;-), & \Bla_9 &= (2;-;-).
\end{align*}
Then the matrices $P^{\pm}$ are given as follows.
\begin{align*} P^- &= \begin{pmatrix} t^7 & & & & & & & & &\\ t^5 & t^5 & & & & & & & \\ t^4 & 0 & t^4 & & & & & & \\ t^3 & t^3 & 0 & t^3 & & & & & \\ t^6 + t^3 & t^3 & t^3 & 0 & t^3 & & & & \\ t^5 + t^2 & t^2 & t^2 & t^2 & t^2 & t^2 & & & \\ t^2 & t^2 & t^2 & 0 & t^2 & 0 & t^2 & & \\ t^4 + t & t^4 + t & t & t & t & t & t & t & & \\ 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \end{pmatrix}, \\ \\ P^+ &= \begin{pmatrix} t^7 & & & & & & & & \\ t^3 & t^5 & & & & & & & \\ t^4 & 0 & t^4 & & & & & & \\ t^5 & 0 & 0 & t^3 & & & & & \\ t^5 + t^2 & t^4 & t^2 & 0 & t^3 & & & & \\ t^6 + t^3 & 0 & t^3 & t & 0 & t^2 & & & \\ 1 & t^2 & 1 & 0 & t & 0 & t^2 & & \\ t^4 + t & t^3 & t & t^2 & t^2 & 0 & 0 & t & \\ t^2 & 0 & t^2 & 1 & 0 & t & 0 & 0 & 1 \end{pmatrix}. \end{align*} \par The diagonal matrix $\vL$ is given as \begin{align*} \vL = \Diag\biggl( &1, & &t^{-2}(t^6-1), & &(t^6-1), \\ &t^2(t^6-1), & &t^{-1}(t^3 - 1)(t^6-1), & &t(t^3-1)(t^6-1), \\ &t(t^3-1)(t^6-1), & &t^3(t^3-1)(t^6-1), & &t^5(t^3-1)(t^6-1)\biggr). \end{align*} In this case the permutation $\t$ on $\CP_{2,3}$ is given as \begin{equation*} \Bla_2 \lra \Bla_4, \quad \Bla_5 \lra \Bla_6, \quad \Bla_7 \lra \Bla_9, \quad (\text{other $\Bla_j$ are fixed}). \end{equation*} Then $\varTheta$ is given by $\varTheta = \Diag (1, t^2, 1, t^{-2}, t, t\iv, t^2, 1, t^{-2})$, and the matrix $\vL' = \vL\varTheta$ is given by \begin{align*} \vL' = \Diag\biggl( &1, & &(t^6-1),& &(t^6-1), \\ &(t^6-1), & &(t^3-1)(t^6-1), & &(t^3-1)(t^6-1), \\ &t^3(t^3-1)(t^6-1), & &t^3(t^3-1)(t^6-1), & &t^3(t^3-1)(t^6-1)\biggr). \end{align*} Now $P^{\pm} = (p^{\pm}_{\Bla,\Bmu}(t))$ can be modified to the matrices \begin{align*} (t^{-a(\Bla)}p^-_{\Bla,\Bmu}(t)) &= \begin{pmatrix} 1 & & & & & & & & \\ 1 & 1 & & & & & & & \\ 1 & 0 & 1 & & & & & & \\ 1 & 1 & 0 & 1 & & & & & \\ t^3+1 & 1 & 1 & 0 & 1 & & & & \\ t^3+1 & 1 & 1 & 1 & 1 & 1 & & & \\ 1 & 1 & 1 & 0 & 1 & 0 & 1 & & \\ t^3+1 & t^3+1 & 1 & 1 & 1 & 1 & 1 & 1 & \\ 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \end{pmatrix}, \\ \\ (t^{-a(\tau(\Bla))-a(\Bmu) + a(\tau(\Bmu))}p^+_{\Bla,\Bmu}(t)) &= \begin{pmatrix} 1 & & & & & & & & \\ 1 & 1 & & & & & & & \\ 1 & 0 & 1 & & & & & & \\ 1 & 0 & 0 & 1 & & & & & \\ t^3+1 & 1 & 1 & 0 & 1 & & & & \\ t^3+1 & 0 & 1 & 1 & 0 & 1 & & & \\ 1 & 1 & 1 & 0 & 1 & 0 & 1 & & \\ t^3+1 & 1 & 1 & t^3 & 1 & 0 & 0 & 1 & \\ 1 & 0 & 1 & 1 & 0 & 1 & 0 & 0 & 1 \end{pmatrix}. \end{align*} In this case, it is possible to compute $\IC^{\pm}_{\Bla,\Bmu}(t)$ directly. One can check that $(t^{-a(\Bla)}p^-_{\Bla,\Bmu}(t))$ coincides with $(\IC^-_{\Bla,\Bmu}(t^3))$. This shows that (6.10.1) holds, and so, by Remarks 6.10 (i), the condition (A) holds in this case. On the other hand, we have \begin{align*} \IC^+_{\Bla,\Bmu}(t^3)) = \begin{pmatrix} 1 & & & & & & & & \\ 1 & 1 & & & & & & & \\ 1 & 0 & 1 & & & & & & \\ 1 & 0 & 0 & 1 & & & & & \\ t^3+1 & 1 & 1 & 0 & 1 & & & & \\ t^3+1 & 0 & 1 & 1 & 0 & 1 & & & \\ 1 & 1 & 1 & 0 & 1 & 0 & 1 & & \\ t^3+1 & t^3+1 & 1 & 1 & 0 & 1 & 1 & 1 & \\ 1 & 0 & 1 & 1 & 0 & 1 & 0 & 0 & 1 \end{pmatrix}. \end{align*} In contrast to the cases in 7.1 and 7.2, $(t^{-a(\tau(\Bla))-a(\Bmu) + a(\tau(\Bmu))}p^+_{\Bla,\Bmu}(t))$ does not coincide with $(\IC^+_{\Bla,\Bmu}(t^3))$. \para{7.4.} We give the table of $P^{\pm}$ and $\vL$ for the case where $n = 3$ and $r = 3$. We fix the total order on $\SP_{3,3}$ as in the first column of the following table. The diagonal matrix $\vL = (\xi_{\Bla,\Bla})$ and the values of $a$-function are given as follows. 
\par \hspace*{2cm} \begin{tabular}{c|c|c} $\Bla$ & $a(\Bla)$ & $\xi_{\Bla,\Bla}$ \\ \hline $(-,-,1^3)$ & 15 & 1 \\ $(-,1^3,-)$ & 12 & $t^{-3}(t^9 - 1)$ \\ $(-,-,21)$ & 9 & $(t^3 + 1)(t^9 - 1)$ \\ $(1^3,-,-)$ & 9 & $t^3(t^9 - 1)$ \\ $(-,1,1^2)$ & 8 & $t^{-1}(t^6 - 1)(t^9 - 1)$ \\ $(-,-,3)$ & 6 & $t^{3}(t^6 - 1)(t^9 - 1)$ \\ $(1,-,1^2)$ & 7 & $t(t^6-1)(t^9 - 1)$ \\ $(-,1^2,1)$ & 7 & $t(t^6 - 1)(t^9 - 1)$ \\ $(1^2,-,1)$ & 5 & $t^5(t^6 - 1)(t^9 - 1)$ \\ $(-,21,-)$ & 6 & $t^3(t^6 - 1)(t^9 - 1)$ \\ $(-,1,2)$ & 5 & $t^2(t^3 - 1)(t^6-1)(t^9 - 1)$ \\ $(1,1^2,-)$ & 5 & $t^5(t^6 - 1)(t^9 - 1)$ \\ $(1,-,2)$ & 4 & $t^4(t^3 -1)(t^6 - 1)(t^9 - 1)$ \\ $(-,2,1)$ & 4 & $t^4(t^3 -1)(t^6 - 1)(t^9 - 1)$ \\% $ \Phi_1^3\Phi_2\Phi_3^3\Phi_6\Phi_9$ $(1^2,1,-)$ & 4 & $t^4(t^3 -1)(t^6 - 1)(t^9 - 1)$ \\% $\Phi_1^3\Phi_2\Phi_3^3\Phi_6\Phi_9$ $(-,3,-)$ & 3 & $t^6(t^3 -1)(t^6 - 1)(t^9 - 1)$ \\% $\Phi_1^3\Phi_2\Phi_3^3\Phi_6\Phi_9$ $(1,1,1)$ & 2 & $t^8(t^3 -1)(t^6 - 1)(t^9 - 1)$ \\% $\Phi_1^3\Phi_2\Phi_3^3\Phi_6\Phi_9$ $(21,-,-)$ & 3 & $t^9(t^3 -1)(t^6 - 1)(t^9 - 1)$ \\% $\Phi_1^3\Phi_2\Phi_3^2\Phi_6\Phi_9$ $(1,2,-)$ & 2 & $t^8(t^3 -1)(t^6 - 1)(t^9 - 1)$ \\% $\Phi_1^3\Phi_2\Phi_3^3\Phi_6\Phi_9$ $(2,-,1)$ & 2 & $t^8(t^3 -1)(t^6 - 1)(t^9 - 1)$ \\% $\Phi_1^3\Phi_2\Phi_3^3\Phi_6\Phi_9$ $(2,1,-)$ & 1 & $t^{10}(t^3 -1)(t^6 - 1)(t^9 - 1)$ \\ $(3,-,-)$ & 0 & $t^{12}(t^3 -1)(t^6 - 1)(t^9 - 1)$ \\ \hline \end{tabular} \normalsize \begin{center} $P^-$ for $n = 3, r= 3$ \end{center} \footnotesize \par \hspace*{2cm} \begin{sideways} \begin{tabular}{|c|cccccccccc|} \hline \phantom{$\displaystyle\prod$} & $(-,-,1^3)$ & $(-,1^3,-)$ & $(-,-,21)$ & $(1^3,-,-)$ & $(-,1,1^2)$ & $(-,-,3)$ & $(1,-,1^2)$ & $(-,1^2,1)$ & $(1^2,-,1)$ & $(-,21,-)$ \\ \hline $(-,-,1^3)$ & $t^{15}$ & \phantom{$\displaystyle\prod$} & & & & & & & &\\ $(-,1^3,-)$ & $t^{12}$ & $t^{12}$ & & & & & & & & \\ $(-,-,21)$ & $t^{12} + t^9$ & 0 & $t^9$ & & & & & & & \\ $(1^3,-,-)$ & $t^9$ & $t^9$ & 0 & $t^9$ & & & & & & \\ $(-,1,1^2)$ & $t^{14} + t^{11} + t^8$ & $t^8$ & $t^8$ & 0 & $t^8$ & & & & & \\ $(-,-,3)$ & $t^6$ & 0 & $t^6$ & 0 & 0 & $t^6$ & & & & \\ $(1,-,1^2)$ & $t^{13} + t^{10} + t^7$ & $t^7$ & $t^7$ & $t^7$ & $t^7$ & 0 & $t^7$ & & & \\ $(-,1^2,1)$ & $t^{13} + t^{10} + t^7$ & $t^{10} + t^7$ & $t^7$ & 0 & $t^7$ & 0 & 0 & $t^7$ & & \\ $(1^2,-,1)$ & $t^{11} + t^8 + t^5$ & $t^8 + t^5$ & $t^5$ & $t^8 + t^5$ & $t^5$ & 0 & $t^5$ & $t^5$ & $t^5$ & \\ $(-,21,-)$ & $t^9 + t^6$ & $t^9 + t^6$ & $t^6$ & 0 & $t^6$ & 0 & 0 & $t^6$ & 0 & $t^6$ \\ $(-,1,2)$ & $t^{11} + t^8 + t^5$ & $t^5$ & $t^8 + t^5$ & 0 & $t^5$ & $t^5$ & 0 & $t^5$ & 0 & 0 \\ $(1,1^2,-)$ & $t^{11} + t^8 + t^5$ & $t^{11} + t^8 + t^5$ & $t^5$ & $t^5$ & $t^5$ & 0 & $t^5$ & $t^5$ & 0 & $t^5$ \\ $(1,-,2)$ & $t^{10} + t^7 + t^4$ & $t^4$ & $t^7 + t^4$ & $t^4$ & $t^4$ & $t^4$ & $t^4$ & $t^4$ & $t^4$ & 0 \\ $(-,2,1)$ & $t^{10} + t^7 + t^4$ & $t^7 + t^4$ & $t^7 + t^4$ & 0 & $t^7 + t^4$ & $t^4$ & 0 & $t^4$ & 0 & $t^4$ \\ $(1^2,1,-)$ & $t^{10} + t^7 + t^4$ & $t^{10} + t^7 + t^4$ & $t^4$ & $t^7 + t^4$ & $t^4$ & 0 & $t^4$ & $t^4$ & $t^4$ & $t^4$ \\ $(-,3,-)$ & $t^3$ & $t^3$ & $t^3$ & 0 & $t^3$ & $t^3$ & 0 & $t^3$ & 0 & $t^3$ \\ $(1,1,1)$ & $t^{12} + 2t^9 + 2t^6 + t^3$ & $t^9 + 2t^6 + t^3$ & $2t^6 + t^3$ & $t^6 + t^3$ & $2t^6 + t^3$ & $t^3$ & $t^6 + t^3$ & $t^6 + t^3$ & $t^3$ & $t^3$ \\ $(21,-,-)$ & $t^6 + t^3$ & $t^6 + t^3$ & $t^3$ & $t^6 + t^3$ & $t^3$ & 0 & $t^3$ & $t^3$ & $t^3$ & $t^3$ \\ $(1,2,-)$ & $t^8 + t^5 + t^2$ & $t^8 + t^5 + t^2$ & $t^5 + t^2$ & $t^2$ & $t^5 + t^2$ & $t^2$ & $t^2$ & $t^5 + t^2$ & $t^2$ & $t^5 + t^2$ \\ 
$(2,-,1)$ & $t^8 + t^5 + t^2$ & $t^5 + t^2$ & $t^5 + t^2$ & $t^5 + t^2$ & $t^5 + t^2$ & $t^2$ & $t^5 + t^2$ & $t^2$ & $t^2$ & $t^2$ \\ $(2,1,-)$ & $t^7 + t^4 + t$ & $t^7 + t^4 + t$ & $t^4 + t$ & $t^4 + t$ & $t^4 + t$ & $t$ & $t^4 + t$ & $t^4 + t$ & $t$ & $t^4 + t$ \\ $(3,-,-)$ & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ \hline \end{tabular} \end{sideways} \normalsize \begin{center} $P^-$ for $n = 3, r= 3$ (continued) \end{center} \footnotesize \par \hspace*{2cm} \begin{sideways} \begin{tabular}{|c|cccccccccccc|} \hline \phantom{$\displaystyle\prod$} & $(-,1,2)$ & $(1,1^2,-)$ & $(1,-,2)$ & $(-,2,1)$ & $(1^2,1,-)$ & $(-,3,-)$ & $(1,1,1)$ & $(21,-,-)$ & $(1,2,-)$ & $(2,-,1)$ & $(2,1,-)$ & $(3,-,-)$ \\ \hline $(-,-,1^3)$ & \phantom{$\displaystyle\prod$} &&&&&&&&&&& \\ $(-,1^3,-)$ & &&&&&&&&&&& \\ $(-,-,21)$ &&&&&&&&&&&& \\ $(1^3,-,-)$ &&&&&&&&&&&& \\ $(-,1,1^2)$ &&&&&&&&&&&& \\ $(-,-,3)$ &&&&&&&&&&&& \\ $(1,-,1^2)$ &&&&&&&&&&&& \\ $(-,1^2,1)$ &&&&&&&&&&&& \\ $(1^2,-,1)$ &&&&&&&&&&&& \\ $(-,21,-)$ &&&&&&&&&&&& \\ $(-,1,2)$ & $t^5$ &&&&&&&&&&& \\ $(1,1^2,-)$ & 0 & $t^5$ &&&&&&&&&& \\ $(1,-,2)$ & $t^4$ & 0 & $t^4$ &&&&&&&&& \\ $(-,2,1)$ & $t^4$ & 0 & 0 & $t^4$ &&&&&&&& \\ $(1^2,1,-)$ & 0 & $t^4$ & 0 & 0 & $t^4$ &&&&&&& \\ $(-,3,-)$ & $t^3$ & 0 & 0 & $t^3$ & 0 & $t^3$ &&&&&& \\ $(1,1,1)$ & $t^3$ & $t^3$ & $t^3$ & $t^3$ & $t^3$ & 0 & $t^3$ &&&&& \\ $(21,-,-)$ & 0 & $t^3$ & 0 & 0 & $t^3$ & 0 & 0 & $t^3$ &&&& \\ $(1,2,-)$ & $t^2$ & $t^2$ & $t^2$ & $t^2$ & $t^2$ & $t^2$ & $t^2$ & 0 & $t^2$ &&& \\ $(2,-,1)$ & $t^2$ & $t^2$ & $t^2$ & $t^2$ & $t^2$ & 0 & $t^2$ & $t^2$ & 0 & $t^2$ && \\ $(2,1,-)$ & $t$ & $t^4 + t$ & $t$ & $t$ & $t$ & $t$ & $t$ & $t$ & $t$ & $t$ & $t$ & \\ $(3,-,-)$ & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ \hline \end{tabular} \end{sideways} \normalsize \begin{center} $P^+$ for $n = 3, r= 3$ \end{center} \footnotesize \par \hspace*{2cm} \begin{sideways} \begin{tabular}{|c|cccccccccc|} \hline \phantom{$\displaystyle\prod$} & $(-,-,1^3)$ & $(-,1^3,-)$ & $(-,-,21)$ & $(1^3,-,-)$ & $(-,1,1^2)$ & $(-,-,3)$ & $(1,-,1^2)$ & $(-,1^2,1)$ & $(1^2,-,1)$ & $(-,21,-)$ \\ \hline $(-,-,1^3)$ & $t^{15}$ & \phantom{$\displaystyle\prod$} & & & & & & & &\\ $(-,1^3,-)$ & $t^{9}$ & $t^{12}$ & & & & & & & & \\ $(-,-,21)$ & $t^{12} + t^9$ & 0 & $t^9$ & & & & & & & \\ $(1^3,-,-)$ & $t^{12}$ & 0 & 0 & $t^9$ & & & & & & \\ $(-,1,1^2)$ & $t^{13} + t^{10} + t^7$ & $t^{10}$ & $t^7$ & 0 & $t^8$ & & & & & \\ $(-,-,3)$ & $t^6$ & 0 & $t^6$ & 0 & 0 & $t^6$ & & & & \\ $(1,-,1^2)$ & $t^{14} + t^{11} + t^8$ & 0 & $t^8$ & $t^5$ & 0 & 0 & $t^7$ & & & \\ $(-,1^2,1)$ & $t^{11} + t^{8} + t^5$ & $t^{11} + t^8$ & $t^5$ & 0 & $t^6$ & 0 & 0 & $t^7$ & & \\ $(1^2,-,1)$ & $t^{13} + t^{10} + t^7$ & 0 & $t^7$ & $t^7 + t^4$ & 0 & 0 & $t^6$ & 0 & $t^5$ & \\ $(-,21,-)$ & $t^6 + t^3$ & $t^9 + t^6$ & $t^3$ & 0 & $t^4$ & 0 & 0 & $t^5$ & 0 & $t^6$ \\ $(-,1,2)$ & $t^{10} + t^7 + t^4$ & $t^7$ & $t^7 + t^4$ & 0 & $t^5$ & $t^4$ & 0 & $t^6$ & 0 & 0 \\ $(1,1^2,-)$ & $t^{10} + t^7 + t^4$ & $t^{10} + t^7$ & $t^4$ & $t^7$ & $t^5$ & 0 & 0 & $t^6$ & 0 & 0 \\ $(1,-,2)$ & $t^{11} + t^8 + t^5$ & 0 & $t^8 + t^5$ & $t^2$ & 0 & $t^5$ & $t^4$ & 0 & $t^3$ & 0 \\ $(-,2,1)$ & $t^{8} + t^5 + t^2$ & $t^8 + t^5$ & $t^5 + t^2$ & 0 & $t^6 + t^3$ & $t^2$ & 0 & $t^4$ & 0 & $t^5$ \\ $(1^2,1,-)$ & $t^{11} + t^8 + t^5$ & $t^{8}$ & $t^5$ & $t^8 + t^5$ & $t^6$ & 0 & 0 & 0 & $t^3$ & 0 \\ $(-,3,-)$ & 1 & $t^3$ & 1 & 0 & $t$ & 1 & 0 & $t^2$ & 0 & $t^3$ \\ $(1,1,1)$ & $t^{12} + 2t^9 + 2t^6 + t^3$ & $t^9 + t^6$ & $2t^6 + t^3$ & $t^6 + t^3$ & $t^7 + t^4$ & $t^3$ & $t^5$ & 
$t^5$ & $t^4$ & 0 \\ $(21,-,-)$ & $t^9 + t^6$ & 0 & $t^6$ & $t^6 + t^3$ & 0 & 0 & $t^5$ & 0 & $t^4$ & 0 \\ $(1,2,-)$ & $t^7 + t^4 + t$ & $t^7 + t^4$ & $t^4 + t$ & $t^4$ & $t^5 + t^2$ & $t$ & 0 & $t^3$ & $t^2$ & $t^4$ \\ $(2,-,1)$ & $t^{10} + t^7 + t^4$ & 0 & $t^7 + t^4$ & $t^4 + t$ & 0 & $t^4$ & $t^6 + t^3$ & 0 & $t^2$ & 0 \\ $(2,1,-)$ & $t^8 + t^5 + t^2$ & $t^5$ & $t^5 + t^2$ & $t^5 + t^2$ & $t^3$ & $t^2$ & $t^4$ & $t^4$ & $t^3$ & 0 \\ $(3,-,-)$ & $t^3$ & 0 & $t^3$ & 1 & 0 & $t^3$ & $t^2$ & 0 & $t$ & 0 \\ \hline \end{tabular} \end{sideways} \normalsize \begin{center} $P^+$ for $n = 3, r= 3$ (continued) \end{center} \footnotesize \par \hspace*{2cm} \begin{sideways} \begin{tabular}{|c|cccccccccccc|} \hline \phantom{$\displaystyle\prod$} & $(-,1,2)$ & $(1,1^2,-)$ & $(1,-,2)$ & $(-,2,1)$ & $(1^2,1,-)$ & $(-,3,-)$ & $(1,1,1)$ & $(21,-,-)$ & $(1,2,-)$ & $(2,-,1)$ & $(2,1,-)$ & $(3,-,-)$ \\ \hline $(-,-,1^3)$ & \phantom{$\displaystyle\prod$} &&&&&&&&&&& \\ $(-,1^3,-)$ & &&&&&&&&&&& \\ $(-,-,21)$ &&&&&&&&&&&& \\ $(1^3,-,-)$ &&&&&&&&&&&& \\ $(-,1,1^2)$ &&&&&&&&&&&& \\ $(-,-,3)$ &&&&&&&&&&&& \\ $(1,-,1^2)$ &&&&&&&&&&&& \\ $(-,1^2,1)$ &&&&&&&&&&&& \\ $(1^2,-,1)$ &&&&&&&&&&&& \\ $(-,21,-)$ &&&&&&&&&&&& \\ $(-,1,2)$ & $t^5$ &&&&&&&&&&& \\ $(1,1^2,-)$ & 0 & $t^5$ &&&&&&&&&& \\ $(1,-,2)$ & 0 & 0 & $t^4$ &&&&&&&&& \\ $(-,2,1)$ & $t^3$ & 0 & 0 & $t^4$ &&&&&&&& \\ $(1^2,1,-)$ & 0 & $t^3$ & 0 & 0 & $t^4$ &&&&&&& \\ $(-,3,-)$ & $t$ & 0 & 0 & $t^2$ & 0 & $t^3$ &&&&&& \\ $(1,1,1)$ & $t^4$ & $t^4$ & 0 & 0 & 0 & 0 & $t^3$ &&&&& \\ $(21,-,-)$ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & $t^3$ &&&& \\ $(1,2,-)$ & $t^2$ & $t^2$ & 0 & $t^3$ & $t^3$ & 0 & 0 & 0 & $t^2$ &&& \\ $(2,-,1)$ & 0 & 0 & $t^3$ & 0 & 0 & 0 & 0 & $t$ & 0 & $t^2$ && \\ $(2,1,-)$ & $t^3$ & $t^3$ & 0 & 0 & 0 & 0 & $t^2$ & $t^2$ & 0 & 0 & $t$ & \\ $(3,-,-)$ & 0 & 0 & $t^2$ & 0 & 0 & 0 & 0 & 1 & 0 & $t$ & 0 & 1 \\ \hline \end{tabular} \end{sideways} \par \noindent T. Shoji \\ Department of Mathematics, Tongji University \\ 1239 Siping Road, Shanghai 200092, P. R. China \\ E-mail: \verb|[email protected]| \end{document}
\begin{document}
\title[On Hopf hypersurfaces of the nearly K\"ahler $\mathbf{S}^3\times\mathbf{S}^3$] {On Hopf hypersurfaces of the homogeneous nearly K\"ahler $\mathbf{S}^3\times\mathbf{S}^3$}
\author[Zejun Hu and Zeke Yao]{Zejun Hu and Zeke Yao}
\address{School of Mathematics and Statistics, Zhengzhou University, Zhengzhou 450001, People's Republic of China}
\email{[email protected]; [email protected]}
\thanks{2010 {\it Mathematics Subject Classification}. 53B25, 53B35, 53C30, 53C42.}
\thanks{This project was supported by NSF of China, Grant Number 11771404.}
\keywords{Nearly K\"ahler manifold $\mathbf{S}^3\times\mathbf{S}^3$, Hopf hypersurface, principal curvature, holomorphic distribution, almost product structure.}
\thanks{{\sl School of Mathematics and Statistics, Zhengzhou University, Zhengzhou 450001, People's Republic of China}.\ \ {\bf E-mails}: [email protected]; [email protected]}
\begin{abstract} In this paper, extending our previous joint work (Hu et al., Math Nachr 291:343--373, 2018), we initiate the study of Hopf hypersurfaces in the homogeneous NK (nearly K\"ahler) manifold $\mathbf{S}^3\times\mathbf{S}^3$. First, we show that no Hopf hypersurface of the homogeneous NK $\mathbf{S}^3\times\mathbf{S}^3$ admits exactly two distinct principal curvatures. Then, for the important class of Hopf hypersurfaces with three distinct principal curvatures, we establish a complete classification under the additional condition that their holomorphic distributions $\{U\}^\perp$ are preserved by the almost product structure $P$ of the homogeneous NK $\mathbf{S}^3\times\mathbf{S}^3$. \end{abstract}
\maketitle
\section{Introduction}\label{sect:1}
Let $\bar{M}$ be an almost Hermitian manifold with almost complex structure $J$. Given a connected orientable real hypersurface $M$ of $\bar{M}$, there appears an important notion, the {\it structure vector field}, defined by $U:=-J\xi$, where $\xi$ is the unit normal vector field. If the integral curves of $U$ are geodesics, then $M$ is called a {\it Hopf hypersurface}. During the last four decades, Hopf hypersurfaces of the complex space forms and several other almost Hermitian manifolds have been extensively and deeply investigated; for details we refer to \cite{B,CR,KM,M,NR} and \cite{BBW,BS,HJZ} and the references therein.
Recall that a nearly K\"ahler (NK) manifold is an almost Hermitian manifold such that the covariant derivative of the almost complex structure $J$ is skew-symmetric. It is well known from Nagy's classification of nearly K\"ahler manifolds \cite{N} that the six-dimensional ones are the basic building blocks, and from Butruille \cite{But1, But2} that the only homogeneous $6$-dimensional NK manifolds are the $6$-sphere $\mathbf{S}^6$, the $\mathbf{S}^3\times\mathbf{S}^3$, the complex projective space $\mathbf{CP}^3$ and the flag manifold $SU(3)/U(1)\times U(1)$, and moreover from Foscolo-Haskins \cite{F-H} that both $\mathbf{S}^6$ and $\mathbf{S}^3\times\mathbf{S}^3$ admit inhomogeneous NK structures. Notice that the Riemannian geometric invariants of the homogeneous NK $\mathbf{S}^3\times \mathbf{S}^3$ were systematically presented by Bolton-Dillen-Dioos-Vrancken \cite{B-D-D-V}. Since then, the study of the canonical submanifolds of the homogeneous NK $\mathbf{S}^3\times\mathbf{S}^3$ has become quite active and many interesting results have been obtained.
These include the results about almost complex surfaces in \cite{B-D-D-V,D-L-M-V,H-Z1}, and about Lagrangian and CR submanifolds in \cite{ADMV,B-M-J-L1, B-M-J-L2, D-V-W, H-Z2, Z-D-H-V-W}. Nevertheless, results about hypersurfaces are still few; they appear only in \cite{H-Y-Z,H-Y-Z2}.
The goal of this paper is to study Hopf hypersurfaces in the homogeneous NK $\mathbf{S}^3 \times\mathbf{S}^3$. In this situation, according to Proposition 1 of \cite{BBW}, the Hopf condition is equivalent to the condition that the structure vector field is a principal curvature vector field of the hypersurface.
Our first concern is Hopf hypersurfaces with two distinct principal curvatures. The result we obtain is the following:
\begin{theorem}\label{thm:1.1} No Hopf hypersurface in the homogeneous NK $\mathbf{S}^3\times\mathbf{S}^3$ admits exactly two distinct principal curvatures. \end{theorem}
Our next concern is Hopf hypersurfaces with three distinct principal curvatures. It turns out that hypersurfaces of this class are quite complicated, and at least three families of examples appear. As the second main result of this paper, we obtain a classification of them under the additional (and natural) condition that their holomorphic distributions $\{U\}^\perp$ are preserved by the almost product structure $P$ of the homogeneous NK $\mathbf{S}^3\times\mathbf{S}^3$.
Before stating the result, we recall that, according to Moruz-Vrancken \cite{M-V} and Podest\`a-Spiro \cite{P-S}, the following three maps
\begin{enumerate}
\item[(1)] $\mathcal{F}_1:\ \mathbf{S}^3 \times \mathbf{S}^3 \rightarrow \mathbf{S}^3\times \mathbf{S}^3{\rm\ with\ } \mathcal{F}_1(p,q)=(q,p),$
\item[(2)] $\mathcal{F}_2:\ \mathbf{S}^3 \times \mathbf{S}^3 \rightarrow \mathbf{S}^3\times \mathbf{S}^3{\rm\ with\ } \mathcal{F}_2(p,q)=(\bar{p},q\bar{p}),$
\item[(3)] $\mathcal{F}_{abc}:\ \mathbf{S}^3 \times \mathbf{S}^3 \rightarrow \mathbf{S}^3\times \mathbf{S}^3{\rm\ with\ } \mathcal{F}_{abc}(p,q)=(ap\bar c, bq\bar c)$ for any unitary quaternions $a,b,c$
\end{enumerate}
are isometries of the NK $\mathbf{S}^3 \times \mathbf{S}^3$. Then, the result can be stated as follows:
\begin{theorem}\label{thm:1.2} Let $M$ be a Hopf hypersurface of the homogeneous NK $\mathbf{S}^3\times\mathbf{S}^3$ with three distinct principal curvatures. If $P\{U\}^\perp=\{U\}^\perp$, then, up to isometries of type $\mathcal{F}_{abc}$, $M$ is locally given by one of the following embeddings $f_r, f_r'$ and $f_r'': \mathbf{S}^3\times\mathbf{S}^2\rightarrow\mathbf{S}^3\times\mathbf{S}^3$ defined by:
$$ f_r(x,y)=(x,\sqrt{1-r^2}+ry),\ \ \ \ f_r'=\mathcal{F}_1\circ f_r,\ \ \ \ f_r''=\mathcal{F}_2\circ f_r, $$
where $0<r\leq1$, $x\in\mathbf{S}^3$, $y\in\mathbf{S}^2\subset\mathbb{R}^3$, and as usual $\mathbf{S}^3$ (resp. $\mathbf{S}^2$) is regarded as the set of the unitary (resp. imaginary) quaternions in the quaternion space $\mathbb{H}$. \end{theorem}
\begin{remark}\label{rem:1.1} Let $M_1^{(r)},M_2^{(r)},M_3^{(r)}$ denote the images of the three embeddings $f_r, f_r',f_r''$, respectively. Then, for $0<r\le1$, $M_1^{(r)},M_2^{(r)}$ and $M_3^{(r)}$ correspond to the three possibilities for the action of $P$ on the unit normal vector field $\xi$, which we shall establish in Proposition \ref{prop:5.1} below.
\end{remark}
\begin{remark}\label{rem:1.2} Theorem \ref{thm:1.2} is an extension of the previous result in \cite{H-Y-Z}, where the hypersurfaces $M_1^{(r)},M_2^{(r)},M_3^{(r)}$ corresponding to $r=1$ were characterized by the property of satisfying $A\phi=\phi A$, where $A$ is the shape operator of the hypersurfaces and $\phi$ is the almost contact structure induced from $J$. Moreover, it is worth mentioning that each of the hypersurfaces $M_1^{(r)},M_2^{(r)}$ and $M_3^{(r)}$ is minimal if and only if $r=1$. \end{remark}
\begin{remark}\label{rem:1.3} Theorem \ref{thm:1.2} shows that Niebergall and Ryan's observation (cf. p.234 of \cite{NR}), which states that certain interesting classes of hypersurfaces in the complex space forms can be characterized by conditions on the holomorphic distribution $\{U\}^\perp$, is similarly valid for the homogeneous NK $\mathbf{S}^3\times\mathbf{S}^3$. On the other hand, at the moment we do not know if there exist Hopf hypersurfaces of the homogeneous NK $\mathbf{S}^3\times\mathbf{S}^3$ that have three distinct principal curvatures and satisfy $P\{U\}^\perp\not=\{U\}^\perp$. \end{remark}
\section{Preliminaries}\label{sect:2}
\subsection{The homogeneous NK structure on $\mathbf{S}^3\times\mathbf{S}^3$}\label{sect:2.1}~
For a classical and comprehensive study of NK manifolds, one may consult [14]. In this section, we first collect some necessary materials from \cite{B-D-D-V}.
Let us denote by $\mathbf{S}^3$ the $3$-sphere in $\mathbb{R}^4$, regarded as the set of all unitary quaternions. By the natural identification $T_{(p,q)}(\mathbf{S}^3\times\mathbf{S}^3)\cong T_p \mathbf{S}^3\oplus T_q\mathbf{S}^3$, we write a tangent vector at $(p,q)\in \mathbf{S}^3 \times\mathbf{S}^3$ as $Z(p,q)=(U_{(p,q)},V_{(p,q)})$ or simply $Z=(U,V)$. The well-known almost complex structure $J$ on $\mathbf{S}^3\times\mathbf{S}^3$ is defined by
\begin{equation}\label{eqn:2.1} J Z(p,q)=\tfrac{1}{\sqrt{3}}(2pq^{-1}V-U, -2qp^{-1}U+V). \end{equation}
On $\mathbf{S}^3\times\mathbf{S}^3$ we can define a Hermitian metric $g$ compatible with $J$ by
\begin{equation}\label{eqn:2.2} \begin{split} g(Z, Z')& = \tfrac{1}{2}(\langle Z,Z'\rangle + \langle JZ, JZ' \rangle )\\ & = \tfrac{4}{3}(\langle U,U' \rangle + \langle V,V' \rangle) - \tfrac{2}{3}(\langle p^{-1}U, q^{-1}V' \rangle + \langle p^{-1}U', q^{-1}V \rangle), \end{split} \end{equation}
where $Z=(U, V)$ and $Z'=(U', V')$ are tangent vectors, and $\langle\cdot,\cdot\rangle$ is the standard product metric on $\mathbf{S}^3\times\mathbf{S}^3$. Then $\{g,J\}$ gives the homogeneous NK structure on $\mathbf{S}^3\times\mathbf{S}^3$.
Let $\tilde\nabla$ be the Levi-Civita connection with respect to $g$, and as usual we define a $(1,2)$-tensor field $G$ by $G(X,Y):=(\tilde\nabla_X J)Y$ for $X,Y\in T(\mathbf{S}^3\times\mathbf{S}^3)$. Then, we have the following formulas for $G$:
\begin{gather} G(X,Y)+G(Y,X)=0,\label{eqn:2.3}\\ G(X,JY)+JG(X,Y)=0,\label{eqn:2.4}\\ g(G(X,Y),Z)+g(G(X,Z),Y)=0,\label{eqn:2.5}\\ \begin{aligned}\label{eqn:2.6} g(G(X,Y),G(Z,W))=&\tfrac{1}{3}\big[g(X,Z)g(Y,W)-g(X,W)g(Y,Z)\\ &\quad +g(JX,Z)g(JW,Y)-g(JX,W)g(JZ,Y)\big]. \end{aligned} \end{gather}
An almost product structure $P$ on $\mathbf{S}^3\times\mathbf{S}^3$ is introduced by
\begin{equation}\label{eqn:2.7} PZ=(pq^{-1}V, qp^{-1}U),\ \ \forall\, Z=(U,V)\in T_{(p,q)}(\mathbf{S}^3\times\mathbf{S}^3). \end{equation}
It is easily seen that $P$ is compatible with the metric $g$, i.e., $P$ is symmetric with respect to $g$. Also, $P$ anti-commutes with $J$.
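The anti-commutation relation, as well as $J^2=-\,{\rm id}$ and $P^2={\rm id}$, can be checked directly from \eqref{eqn:2.1} and \eqref{eqn:2.7}; the short computation below, which uses only the associativity of quaternion multiplication, is included for the reader's convenience. For any $Z=(U,V)\in T_{(p,q)}(\mathbf{S}^3\times\mathbf{S}^3)$ we have
\begin{align*}
J^2Z&=\tfrac{1}{3}\big(2pq^{-1}(-2qp^{-1}U+V)-(2pq^{-1}V-U),\ -2qp^{-1}(2pq^{-1}V-U)+(-2qp^{-1}U+V)\big)\\
&=\tfrac{1}{3}(-3U,-3V)=-Z,\\
P^2Z&=\big(pq^{-1}(qp^{-1}U),\ qp^{-1}(pq^{-1}V)\big)=Z,\\
PJZ&=\tfrac{1}{\sqrt{3}}\big(pq^{-1}(-2qp^{-1}U+V),\ qp^{-1}(2pq^{-1}V-U)\big)
=\tfrac{1}{\sqrt{3}}\big(-2U+pq^{-1}V,\ 2V-qp^{-1}U\big)=-JPZ,
\end{align*}
so that $J^2=-\,{\rm id}$, $P^2={\rm id}$ and $PJ+JP=0$.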
Moreover, with respect to $G$ and $P$, we further have \begin{equation}\label{eqn:2.8} 2\DP{X}Y=JG(X,PY)+JPG(X,Y), \end{equation} \begin{equation}\label{eqn:2.9} PG(X,Y)+G(PX,PY)=0. \end{equation} Note also that in terms of $P$ the usual product structure $Q$, defined by $Q(Z)=(-U,V)$ for $Z=(U,V)$, can be expressed by \begin{equation}\label{eqn:2.10} QZ=\tfrac{1}{\sqrt{3}}(2PJZ-JZ). \end{equation} For the NK $\mathbf{S}^3\times\mathbf{S}^3$, we also need the useful relation between the NK connection $\tilde{\nabla}$ and the usual Euclidean connection $\nabla^E$ (cf. Lemma 2.2 of \cite{D-L-M-V} and Remark 2.5 of \cite{D-V-W}): \begin{equation}\label{eqn:2.11} \nabla_X^E Y=\tilde{\nabla}_X Y+\tfrac12[JG(X,PY)+JG(Y,PX)]. \end{equation} The Riemannian curvature tensor $\tilde R$ of the NK $\mathbf{S}^3\times\mathbf{S}^3$ is given by \begin{equation}\label{eqn:2.12} \begin{split} \tilde{R}(X,Y)Z=&\tfrac{5}{12}\big[g(Y,Z)X-g(X,Z)Y\big]\\ &+\tfrac{1}{12}\big[g(JY,Z)JX-g(JX,Z)JY-2g(JX,Y)JZ\big]\\ &+\tfrac{1}{3}\big[g(PY,Z)PX-g(PX,Z)PY\\ &\qquad +g(JPY,Z)JPX-g(JPX,Z)JPY\big]. \end{split} \end{equation} \subsection{Hypersurfaces of the NK $\mathbf{S}^3\times\mathbf{S}^3$}\label{sect:2.2}~ Let $M$ be a hypersurface of the NK $\mathbf{S}^3\times\mathbf{S}^3$ with unit normal vector field $\xi$. For any vector field $X$ tangent to $M$, we have the decomposition \begin{equation}\label{eqn:2.13} JX=\phi X+\eta(X)\xi, \end{equation} where $\phi X$ and $\eta(X)\xi$ are the tangent and normal parts of $JX$, respectively. Then $\phi$ is a tensor field of type (1,1), $\eta$ is a $1$-form on $M$. By definition, the following relations hold: \begin{equation}\label{eqn:2.14} \left\{ \begin{aligned} &\eta(X)=g(X,U),\ \ \eta(\phi X)=0,\ \ \phi^2X=-X+\eta(X)U,\ \ \phi U=0,\\ &g(\phi X,Y)=-g(X,\phi Y),\ \ g(\phi X,\phi Y)=g(X,Y)-\eta(X)\eta(Y), \end{aligned}\right. \end{equation} where $U:=-J\xi$ is called the {\it structure vector field} of $M$. The equations \eqref{eqn:2.14} show that $(\phi,U,\eta,g)$ determines an {\it almost contact metric structure} over $M$. Let $\nabla$ be the induced connection on $M$ and $R$ its Riemannian curvature tensor. The formulas of Gauss and Weingarten state that \begin{equation}\label{eqn:2.15} \begin{split} \tilde \nabla_X Y=\nabla_X Y + h(X,Y),\quad \tilde \nabla_X \xi=- A X , \ \ \forall\, X,Y \in TM, \end{split} \end{equation} where $h$ is the second fundamental form and $A$ is the shape operator. They are related by $h(X,Y)=g(AX,Y)\xi$. Using the formulas of Gauss and Weingarten, we can easily show that \begin{equation}\label{eqn:2.16} \nabla_X U=\phi AX-G(X,\xi). \end{equation} The Gauss and Codazzi equations of $M$ are given by \begin{equation}\label{eqn:2.17} \begin{split} R(X,Y)Z=&\tfrac{5}{12}\big[g(Y,Z)X - g(X,Z)Y\big]\\ &+ \tfrac{1}{12}\big[g(JY,Z)\phi X - g(JX,Z)\phi Y - 2g(JX,Y)\phi Z\big]\\ &+ \tfrac{1}{3}\Big[g(PY,Z)(PX)^\top - g(PX,Z)(PY)^\top\\ &\qquad + g(JPY,Z)(JPX)^\top - g(JPX,Z)(JPY)^\top\Big]\\& +g(AZ,Y)AX-g(AZ,X)AY, \end{split} \end{equation} and \begin{equation}\label{eqn:2.18} \begin{split} (\nabla_X A)Y-(\nabla_Y A)X=&\tfrac{1}{12}\big[g(X,U)\phi Y - g(Y,U)\phi X - 2g(JX,Y)U\big]\\ &+ \tfrac{1}{3}\Big[g(PX,\xi)(PY)^\top - g(PY,\xi)(PX)^\top\\ &\qquad + g(PX,U)(JPY)^\top - g(PY,U)(JPX)^\top\Big], \end{split} \end{equation} where $\cdot^\top$ means the tangential part. 
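We note in passing that \eqref{eqn:2.16} follows from the above formulas: since $U=-J\xi$, the Weingarten formula together with the definition of $G$ gives
\begin{equation*}
\tilde\nabla_X U=-(\tilde\nabla_X J)\xi-J\tilde\nabla_X\xi=-G(X,\xi)+JAX=\phi AX+\eta(AX)\xi-G(X,\xi),
\end{equation*}
and $G(X,\xi)$ is tangent to $M$ because $g(G(X,\xi),\xi)=0$ by \eqref{eqn:2.5}; taking the tangential part yields \eqref{eqn:2.16}, while the normal part is consistent with $h(X,U)=g(AX,U)\xi$.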
As in the case of the complex space forms, a hypersurface $M$ of the NK $\mathbf{S}^3\times\mathbf{S}^3$ is a Hopf hypersurface if and only if the integral curves of its structure vector field $U$ are geodesics, i.e., $\nabla_U U=0$. We denote by $\alpha$ the principal curvature function corresponding to the structure vector field $U$, i.e., $AU=\alpha U$.
First of all, we shall present two elementary lemmas for Hopf hypersurfaces of the NK $\mathbf{S}^3\times\mathbf{S}^3$ as follows:
\begin{lemma}[cf. \cite{H-Y-Z2}]\label{lemma:2.1} Let $M$ be a Hopf hypersurface in the NK $\mathbf{S}^3 \times \mathbf{S}^3$. Then we have
\begin{equation}\label{eqn:2.19} \begin{aligned} \tfrac16&g(\phi X,Y)-\tfrac23\big[g(PX,\xi)g(PY,U)-g(PX,U)g(PY,\xi)\big]\\ &=g((\alpha I-A)G(X,\xi),Y)+g(G((\alpha I-A)X,\xi),Y)\\ &\ \ \ -\alpha g((A\phi +\phi A)X,Y)+2g(A\phi AX,Y),\ \ X,Y\in \{U\}^{\bot}, \end{aligned} \end{equation}
where $\{U\}^\perp$ denotes the subdistribution of $TM$ that is orthogonal to $U$, and $I$ denotes the identity transformation. \end{lemma}
\begin{lemma}\label{lemma:2.2} Let $M$ be a Hopf hypersurface in the NK $\mathbf{S}^3\times \mathbf{S}^3$ satisfying $P\{U\}^\bot=\{U\}^\perp$. Then the function $\alpha$ is constant. \end{lemma}
\begin{proof} By using the Codazzi equation and the symmetry of $A$, we have the calculation
\begin{equation*} 0=g((\nabla_U A)Y-(\nabla_Y A)U,U)=g((\nabla_U A)U,Y)-g((\nabla_Y A)U,U)=-Y\alpha,\ Y\in\{U\}^\perp. \end{equation*}
It follows that $\nabla\alpha=(U\alpha)\,U$. Then, for $X,Y\in\{U\}^\perp$, we have
\begin{equation}\label{eqn:2.20} 0=X(Y\alpha)-Y(X\alpha)=[X,Y]\alpha=g([X,Y],U)\,U\alpha. \end{equation}
If $U\alpha\neq0$ holds on some open set, then \eqref{eqn:2.20} implies that $[X,Y]\in\{U\}^\perp$. Thus $\{U\}^\perp$ is integrable, which gives rise to four-dimensional almost complex submanifolds of the NK $\mathbf{S}^3\times \mathbf{S}^3$. This is impossible because, according to Lemma 2.2 of \cite{P-S}, any six-dimensional compact non-K\"ahler NK manifold admits no four-dimensional almost complex submanifold. Hence $U\alpha=0$ and $\alpha$ is constant. \end{proof}
\subsection{A canonical distribution related to hypersurfaces of the NK $\mathbf{S}^3\times\mathbf{S}^3$}\label{sect:2.3}~
In order to choose an appropriate local orthonormal frame of the NK $\mathbf{S}^3\times\mathbf{S}^3$ along its hypersurface $M$, following \cite{H-Y-Z2} we consider
$$ \mathfrak{D}(p):={\rm Span}\,\{\xi(p),U(p),P\xi(p),PU(p)\},\ \ p\in M. $$
It is easily seen that, since $P$ anti-commutes with $J$, $\mathfrak{D}$ defines a distribution on $M$ with dimension exactly $2$ or $4$, and that it is invariant under both $J$ and $P$. Along $M$, let $\mathfrak{D}^\perp$ denote the distribution in $T(\mathbf{S}^3\times\mathbf{S}^3)$ that is orthogonal to $\mathfrak{D}$ at each $p\in M$. For later use, we make some remarks about $\dim\mathfrak{D}$:
(1) If $\dim\mathfrak{D}=4$ holds in an open set, then there exists a unit tangent vector field $e_1\in \{U\}^\bot$ and functions $a,b,c$ with $c>0$ such that
\begin{equation}\label{eqn:2.21} P\xi=a\xi+bU+ce_1,\ \ a^2+b^2+c^2=1. \end{equation}
Put $e_2=Je_1$. Moreover, from the fact that $\dim\,\mathfrak{D}^\perp=2$ and that $\mathfrak{D}^\perp$ is invariant under the action of both $J$ and $P$, we can choose a local unit vector field $e_3\in\mathfrak{D}^\perp$ such that $Pe_3=e_3$.
Now, putting $e_4=Je_3$ and $e_5=U$, then $\{e_i\}_{i=1}^5$ is a well-defined orthonormal basis of $TM$ and, acting by $P$, it has the following properties: \begin{equation}\label{eqn:2.22} \left\{ \begin{aligned} &P\xi=a\xi+ce_1+be_5,\ \ \ \ \ Pe_1=c\xi-ae_1-be_2,\\ &Pe_2=ce_5-be_1+ae_2,\ \ Pe_3=e_3,\\ &Pe_4=-e_4,\ \hspace{18mm} Pe_5=b\xi+ce_2-ae_5. \end{aligned}\right. \end{equation} (2) If $\dim\mathfrak{D}=2$ holds in an open set, then $P\{U\}^\perp=\{U\}^\perp$ and we can write \begin{equation}\label{eqn:2.23} P\xi=a\xi+bU,\ \ a^2+b^2=1. \end{equation} Now, $\mathfrak{D}^\perp$ is a $4$-dimensional distribution that is invariant under the action of both $J$ and $P$. Hence, we can choose unit vector fields $e_1,\, e_3\in\mathfrak{D}^\perp$ such that $Pe_1=e_1,\, Pe_3=e_3$. Put $e_2=Je_1,\,e_4=Je_3$ and $e_5=U$. In this way, we obtain an orthonormal basis $\{e_i\}_{i=1}^5$ of $TM$. However, we would remark that such choice of $\{e_1, e_3\}$ (resp. $\{e_2, e_4\}$) is unique up to an orthogonal transformation. \section{The proof of Theorem \ref{thm:1.1}}\label{sect:3} Suppose on the contrary that $M$ is a Hopf hypersurface in the NK $\mathbf{S}^3 \times\mathbf{S}^3$ which has two distinct principal curvatures, say $\alpha$ and $\lambda$, with $AU=\alpha U$. We denote by $V_\alpha$ and $V_\lambda$ the corresponding eigen-distributions. By the continuity of the principal curvature functions, we know that the dimensions $(\dim V_\alpha,\dim V_\lambda)$ of the two eigen-distributions have to be one of the four possibilities: (1,4), (2,3), (3,2) and (4,1). Next, we separate the proof of Theorem \ref{thm:1.1} into the proofs of two lemmas, depending on the dimension of $\mathfrak{D}$. \begin{lemma}\label{lemma:3.1} The case $\dim\mathfrak{D}=4$ does not occur. \end{lemma} \begin{proof} To argue by contradiction we assume that $\dim\mathfrak{D}=4$ does hold on an open set. Now we check each possibility of $(\dim V_\alpha,\dim V_\lambda)$. \vskip 1mm {\bf (i) $(\dim V_\alpha,\dim V_\lambda)=(1,4)$ on $M$.} In this case, it is easy to see that $A\phi=\phi A$ holds. This is impossible because, according to Theorem 4.1 of \cite{H-Y-Z}, hypersurfaces satisfying $A\phi=\phi A$ must have three distinct principal curvatures. \vskip 1mm {\bf (ii) $(\dim V_\alpha,\dim V_\lambda)=(2,3)$ on $M$.} In this case, we can take a local orthonormal frame field $\{X_{i}\}_{i=1}^{5}$ such that $$ AX_i=\alpha X_i,\ i=1,5;\ \ \ \ AX_j=\lambda X_j,\ j=2,3,4, $$ where $X_2=JX_1,X_4=JX_3,X_5=U$. Then by using \eqref{eqn:2.3}--\eqref{eqn:2.6} we get \begin{equation}\label{eqn:3.1} G(X_1,X_4)=G(X_2,X_3)=-JG(X_1,X_3),\ \ g(G(X_1,X_3),X_i)=0\ \ {\rm for}\ \ 1\leq i\leq 4, \end{equation} \begin{equation}\label{eqn:3.2} g(G(X_1,X_3),G(X_1,X_3))=\tfrac{1}{3}. \end{equation} Let $\{e_{i}\}_{i=1}^{5}$ be the orthonormal basis as described in \eqref{eqn:2.22}. Then $$ X_1=m e_1+n e_2+u e_3+v e_4,\ \ X_3=-u e_1+v e_2+m e_3-n e_4, $$ for some functions $m,n,u,v$; and $$ X_2=-n e_1+m e_2-v e_3+u e_4,\ \ X_4=-v e_1-u e_2+n e_3+m e_4. 
$$
Now, taking in \eqref{eqn:2.19}, respectively, $(X,Y)=(X_1,X_3),\ (X_1,X_4),\ (X_2,X_3)$, $(X_2,X_4)$, we can obtain
\begin{gather} \tfrac{2}{3}c^2mv+\tfrac{2}{3}c^{2}nu=(\lambda-\alpha)g(G(X_1,\xi),X_3),\label{eqn:3.3}\\ -\tfrac{2}{3}c^2mu+\tfrac{2}{3}c^{2}nv=(\lambda-\alpha)g(G(X_1,\xi),X_4),\label{eqn:3.4}\\ -\tfrac{2}{3}c^2nv+\tfrac{2}{3}c^{2}mu=2(\lambda-\alpha)g(G(X_2,\xi),X_3),\label{eqn:3.5}\\ \tfrac{2}{3}c^2nu+\tfrac{2}{3}c^{2}mv=2(\lambda-\alpha)g(G(X_2,\xi),X_4).\label{eqn:3.6} \end{gather}
From \eqref{eqn:3.4} and \eqref{eqn:3.5}, and from \eqref{eqn:3.3} and \eqref{eqn:3.6}, respectively, we deduce that
$$ g(G(X_1,X_3),U)=0,\ \ \ g(G(X_1,X_3),\xi)=0. $$
This, combined with \eqref{eqn:3.1}, implies that $G(X_1,X_3)=0$, a contradiction to \eqref{eqn:3.2}.
\vskip 1mm
{\bf (iii) $(\dim V_\alpha,\dim V_\lambda)=(3,2)$ on $M$.} In this case, as $U\in V_\alpha$, we have $\dim (V_\alpha\cap\{U\}^\perp)=\dim V_\lambda=2$. For an orthonormal basis $\{X_1,X_2\}$ of $V_\alpha\cap\{U\}^\perp$ we consider $|g(JX_1,X_2)|$, which is obviously independent of the choice of $\{X_1,X_2\}$, and thus gives a well-defined function $\theta:=|g(JX_1,X_2)|$ on $M$, with $0\leq\theta\leq1$. Since our concern is only local, in order to prove that Case (iii) does not occur, it is sufficient to show that the following three subcases do not occur on $M$.
\vskip 1mm
{\bf (iii)-(1)}. $0<\theta<1$. In this subcase, we can take a local orthonormal frame field $\{X_{i}\}_{i=1}^{5}$ of $M$ such that
$$ AX_1=\alpha X_1,\ AX_2=\alpha X_2,\ AX_3=\lambda X_3,\ AX_4=\lambda X_4,\ X_5=U, $$
where $X_3=(JX_1-\theta X_2)/\sqrt{1-\theta^2},\ X_4=(JX_2+\theta X_1)/\sqrt{1-\theta^2}$ and $\theta=g(JX_1,X_2)$. Moreover, direct calculations give the following relations:
\begin{equation}\label{eqn:3.7} \left\{ \begin{aligned} &JX_1=\sqrt{1-\theta^2}X_3+\theta X_2,\ \ JX_2=\sqrt{1-\theta^2}X_4-\theta X_1,\\[-1mm] &JX_3=-\sqrt{1-\theta^2}X_1-\theta X_4,\ \ JX_4=-\sqrt{1-\theta^2}X_2+\theta X_3,\\[-1mm] &g(JX_1,X_2)=-g(JX_3,X_4)=\theta,\ \ g(JX_1,X_3)=g(JX_2,X_4)=\sqrt{1-\theta^2},\\[-1mm] &g(JX_1,X_4)=g(JX_2,X_3)=0,\ \ G(X_3,X_4)=-G(X_1,X_2),\\ &G(X_1,X_3)=\tfrac{-\theta}{\sqrt{1-\theta^2}}G(X_1,X_2),\ \ G(X_1,X_4)=\tfrac{-1}{\sqrt{1-\theta^2}}JG(X_1,X_2),\\ &G(X_2,X_3)=\tfrac{1}{\sqrt{1-\theta^2}}JG(X_1,X_2),\ \ G(X_2,X_4)=\tfrac{-\theta}{\sqrt{1-\theta^2}}G(X_1,X_2). \end{aligned}\right. \end{equation}
Let $\{e_{i}\}_{i=1}^{5}$ be the orthonormal basis as described in \eqref{eqn:2.22} and assume that
$$ X_{i}=\sum_{j=1} ^4 a_{ij}e_{j}, \ \ 1\le i\le4. $$
Then, by the definition of $X_3$ and $X_4$, we can derive
\begin{equation}\label{eqn:3.8} \left\{ \begin{aligned} &a_{31}=\tfrac{-a_{12}-a_{21}\theta}{\sqrt{1-\theta^2}},\ a_{32}=\tfrac{a_{11}-a_{22}\theta}{\sqrt{1-\theta^2}},\ a_{33}=\tfrac{-a_{14}-a_{23}\theta}{\sqrt{1-\theta^2}},\ a_{34}=\tfrac{a_{13}-a_{24}\theta}{\sqrt{1-\theta^2}};\\ & a_{41}=\tfrac{-a_{22}+a_{11}\theta}{\sqrt{1-\theta^2}},\ a_{42}=\tfrac{a_{21}+a_{12}\theta}{\sqrt{1-\theta^2}},\ a_{43}=\tfrac{-a_{24}+a_{13}\theta}{\sqrt{1-\theta^2}},\ a_{44}=\tfrac{a_{23}+a_{14}\theta}{\sqrt{1-\theta^2}}. \end{aligned} \right.
\end{equation} Taking, in \eqref{eqn:2.19}, $(X,Y)=(X_i,X_j)$ for $1\leq i<j\leq4$, and using \eqref{eqn:3.7} and \eqref{eqn:2.22}, we get \begin{equation}\label{eqn:3.9} -\tfrac{1}{6}\theta+\tfrac{2}{3}c^{2}(a_{11}a_{22}-a_{12}a_{21})=0, \end{equation} \begin{equation}\label{eqn:3.10} \tfrac{2}{3\sqrt{1-\theta^2}}c^2(a_{11}a_{21}+a_{12}a_{22}) +\tfrac{1}{\sqrt{1-\theta^2}}(\alpha-\lambda)g(G(X_1,X_2),U)=0, \end{equation} \begin{equation}\label{eqn:3.11} \tfrac{2}{3\sqrt{1-\theta^2}}c^2(a_{11}a_{21}+a_{12}a_{22}) -\tfrac{1}{\sqrt{1-\theta^2}}(\alpha-\lambda)g(G(X_1,X_2),U)=0, \end{equation} \begin{equation}\label{eqn:3.12} \begin{aligned} \tfrac{2}{3\sqrt{1-\theta^2}}c^2(&-a_{11}^2-a_{12}^2+(a_{22}a_{11}-a_{21}a_{12})\theta)+\tfrac{\sqrt{1-\theta^2}}{6}\\[-1mm] &-\tfrac{\theta}{\sqrt{1-\theta^2}}(\alpha-\lambda)g(G(X_1,X_2),\xi)+\alpha(\alpha-\lambda)\sqrt{1-\theta^2}=0, \end{aligned} \end{equation} \begin{equation}\label{eqn:3.13} \begin{aligned} \tfrac{2}{3\sqrt{1-\theta^2}}c^2(&-a_{21}^2-a_{22}^2+(a_{22}a_{11}-a_{21}a_{12})\theta)+\tfrac{\sqrt{1-\theta^2}}{6}\\[-1mm] &-\tfrac{\theta}{\sqrt{1-\theta^2}}(\alpha-\lambda)g(G(X_1,X_2),\xi) +\alpha(\alpha-\lambda)\sqrt{1-\theta^2}=0, \end{aligned} \end{equation} \begin{equation}\label{eqn:3.14} \begin{aligned} \tfrac{2}{3(1-\theta^2)}c^2\big[&a_{21}a_{12}-a_{11}a_{22}+(a_{22}^2+a_{11}^2+a_{21}^2+a_{12}^2)\theta +(a_{12}a_{21}-a_{11}a_{22})\theta^2\big]\\[-1mm] &-\tfrac{\theta}{6}-2(\alpha-\lambda)g(G(X_1,X_2),\xi)-2\lambda(\alpha-\lambda)\theta=0. \end{aligned} \end{equation} From \eqref{eqn:3.10}, \eqref{eqn:3.11} and $g(X_1,X_2)=0$, we have $$ g(G(X_1,X_2),U)=0,\ \ a_{11}a_{21}+a_{12}a_{22}=0,\ \ a_{13}a_{23}+a_{14}a_{24}=0. $$ From \eqref{eqn:3.9}, \eqref{eqn:3.12}, \eqref{eqn:3.13} and $g(X_1,X_1)=g(X_2,X_2)=1$, we have $$ a_{11}^2+a_{12}^2=a_{21}^2+a_{22}^2\not=0,\ \ a_{13}^2+a_{14}^2=a_{23}^2+a_{24}^2. $$ Thus, we can write $$ \left\{ \begin{aligned} &a_{11}=\sqrt{a_{11}^2+a_{12}^2}\cos \omega_{1},\ \ a_{12}=\sqrt{a_{11}^2+a_{12}^2}\sin \omega_{1};\\ &a_{21}=\sqrt{a_{11}^2+a_{12}^2}\cos \omega_{2},\ \ a_{22}=\sqrt{a_{11}^2+a_{12}^2}\sin\omega_{2}. \end{aligned} \right. $$ Then the fact $0=a_{11}a_{21}+a_{12}a_{22}=(a_{11}^2+a_{12}^2)\cos(\omega_{1}-\omega_{2})$ implies that $\omega_{1}-\omega_{2}=\tfrac{\pi}{2}(2k+1)$ for $k\in\mathbb Z$. Hence $(a_{21},a_{22})=\pm\,(a_{12},-a_{11})$. On the other hand, \eqref{eqn:3.9} implies that $a_{11}a_{22}-a_{12}a_{21}=\tfrac{\theta}{4c^2}>0$, so it should be that $(a_{21},a_{22})=-(a_{12},-a_{11})$. Similarly, we can prove that $(a_{23},a_{24})=(a_{14},-a_{13})$. It follows that $a_{11}^2+a_{12}^2=\tfrac{\theta}{4c^2}$ and $a_{13}^2+a_{14}^2=1-\tfrac{\theta}{4c^2}$. On the other hand, by definition, we can finally get $$ \theta=\sum a_{1i}a_{2j}g(Je_i,e_j)=a_{11}a_{22}-a_{12}a_{21}+a_{13}a_{24}-a_{14}a_{23} =\tfrac{\theta}{2c^2}-1, $$ and thus $\theta=\tfrac{2c^2}{1-2c^2}$. Next, from the fact $g(G(X_1,X_2),X_i)=0$ for $1\leq i \leq 5$ and that, by \eqref{eqn:2.6}, $$ g(G(X_1,X_2),G(X_1,X_2))=\tfrac13(1-\theta^2), $$ we have $G(X_1,X_2)=\pm\sqrt{(1-\theta^2)/3}\,\xi$. Since the discussion is totally similar, we just consider the case $G(X_1,X_2)=\sqrt{(1-\theta^2)/3}\,\xi$. We calculate the connections $\{\nabla_{X_{i}}X_{j}\}$ so that we can apply for the Codazzi equations. Put $\nabla_{X_i}X_j=\sum \Gamma_{ij}^{k}X_{k}$ with $\Gamma_{ij}^{k}=-\Gamma_{ik}^{j}$, $1\le i,j,k\le 5$. 
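Here the antisymmetry $\Gamma_{ij}^{k}=-\Gamma_{ik}^{j}$ simply expresses the metric compatibility of $\nabla$; indeed, for the orthonormal frame $\{X_i\}$,
\begin{equation*}
0=X_ig(X_j,X_k)=g(\nabla_{X_i}X_j,X_k)+g(X_j,\nabla_{X_i}X_k)=\Gamma_{ij}^{k}+\Gamma_{ik}^{j}.
\end{equation*}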
Then, on the one hand, by definition and the Gauss-Weingarten formulas, we have \begin{equation*}\label{eq:cs1} G(X_1,\xi)=-\sum_{i=1}^5 \Gamma_{15}^{i}X_i+\alpha JX_1. \end{equation*} On the other hand, using $G(X_1,\xi)=\sum_ig(G(X_1,\xi),X_i)X_i$, we easily get $$ G(X_1,\xi)=-\sqrt{\tfrac{1-\theta^2}{3}}X_2+\tfrac{\sqrt{3}}{3}\theta X_3. $$ From the above calculations and \eqref{eqn:3.7}, it follows that \begin{equation}\label{eqn:3.15} \Gamma_{15}^{1}=0,\ \ \Gamma_{15}^{2}=\alpha\theta+\sqrt{\tfrac{1-\theta^2}{3}},\ \ \Gamma_{15}^{3}=\alpha\sqrt{1-\theta^2}-\tfrac{\sqrt{3}}{3}\theta,\ \ \Gamma_{15}^{4}=0. \end{equation} Analogously, calculating $G(X_i,\xi)=(\tilde \nabla_{X_i} J)\xi$ for $2\leq i\leq 4$, we can further obtain \begin{equation}\label{eqn:3.16} \left\{ \begin{aligned} &\Gamma_{25}^{1}=-\alpha\theta-\sqrt{\tfrac{1-\theta^2}{3}},\ \ \Gamma_{25}^{2}=0,\ \ \Gamma_{25}^{3}=0,\ \ \Gamma_{25}^{4}=\alpha\sqrt{1-\theta^2}-\tfrac{\sqrt{3}}{3}\theta,\\ &\Gamma_{35}^{1}=-\lambda\sqrt{1-\theta^2}+\tfrac{\sqrt{3}}{3}\theta,\ \ \Gamma_{35}^{2}=0,\ \ \Gamma_{35}^{3}=0,\ \ \Gamma_{35}^{4}=-\lambda\theta-\sqrt{\tfrac{1-\theta^2}{3}},\\ &\Gamma_{45}^{1}=0,\ \ \Gamma_{45}^{2}=-\lambda\sqrt{1-\theta^2}+\tfrac{\sqrt{3}}{3}\theta,\ \ \Gamma_{45}^{3}=\lambda\theta+\sqrt{\tfrac{1-\theta^2}{3}},\ \ \Gamma_{45}^{4}=0. \end{aligned}\right. \end{equation} Now, we are ready to calculate $(\nabla_U A){e_i}-(\nabla_{e_i} A)U$ for $1\leq i\leq 4$. On the one hand, using $e_{i}=\sum_{j=1}^4{a_{ji}}X_{j}$ and the preceding results \eqref{eqn:3.15} and \eqref{eqn:3.16}, direct calculations give the ${\{U\}^\bot}$-components of $(\nabla_U A){e_i}-(\nabla_{e_i} A)U$: $$ \left( \begin{aligned} &(\nabla_UA)e_1-(\nabla_{e_1}A)U\\ &(\nabla_UA)e_2-(\nabla_{e_2}A)U\\ &(\nabla_UA)e_3-(\nabla_{e_3}A)U\\ &(\nabla_UA)e_4-(\nabla_{e_4}A)U \end{aligned} \right)_{{\{U\}^\bot}}=BC \left( \begin{aligned} &X_1\\ &X_2\\ &X_3\\ &X_4 \end{aligned} \right), $$ where $$ B=(a_{ij})^T= \left( \begin{array}{cccc} a_{11} & -a_{12} & -a_{12}\sqrt{(1-\theta)/(1+\theta)} & -a_{11}\sqrt{(1-\theta)/(1+\theta)}\\[1mm] a_{12} & a_{11} & a_{11}\sqrt{(1-\theta)/(1+\theta)} & -a_{12}\sqrt{(1-\theta)/(1+\theta)}\\[1mm] a_{13} & a_{14} & -a_{14}\sqrt{(1+\theta)/(1-\theta)} & a_{13}\sqrt{(1+\theta)/(1-\theta)}\\[1mm] a_{14} & -a_{13} & a_{13}\sqrt{(1+\theta)/(1-\theta)} & a_{14}\sqrt{(1+\theta)/(1-\theta)} \end{array} \right), $$ $$ C=(C_{ij}):= \left( \begin{array}{cccc} U(\alpha) & 0 & (\alpha-\lambda)(\Gamma_{51}^3-\Gamma_{15}^3) & (\alpha-\lambda)\Gamma_{51}^4 \\ 0 & U(\alpha) & (\alpha-\lambda)\Gamma_{52}^3 & (\alpha-\lambda)(\Gamma_{52}^4-\Gamma_{25}^4) \\ (\lambda-\alpha)\Gamma_{53}^1 & (\lambda-\alpha)\Gamma_{53}^2 & U(\lambda) & (\lambda-\alpha)\Gamma_{35}^4 \\ (\lambda-\alpha)\Gamma_{54}^1 & (\lambda-\alpha)\Gamma_{54}^2 & (\lambda-\alpha)\Gamma_{45}^3 & U(\lambda) \end{array} \right). 
$$
On the other hand, using the Codazzi equation \eqref{eqn:2.18}, $e_{i}=\sum_{j=1}^4{a_{ji}}X_{j}$ and \eqref{eqn:2.22}, another calculation for the ${\{U\}^\bot}$-components of $(\nabla_U A)e_{i}-(\nabla_{e_{i}} A)U$ can be carried out to obtain:
$$ \left( \begin{aligned} &(\nabla_UA)e_1-(\nabla_{e_1}A)U\\ &(\nabla_UA)e_2-(\nabla_{e_2}A)U\\ &(\nabla_UA)e_3-(\nabla_{e_3}A)U\\ &(\nabla_UA)e_4-(\nabla_{e_4}A)U \end{aligned} \right)_{{\{U\}^\bot}}=(D+E) \left( \begin{aligned} &X_1\\ &X_2\\ &X_3\\ &X_4 \end{aligned} \right), $$
where
$$ D=(D_{ij}):= \left( \begin{array}{cccc} -\tfrac{2ab}{3}a_{11} & \tfrac{2ab}{3}a_{12} & \tfrac{2aba_{12}}{3}\sqrt{\tfrac{1-\theta}{1+\theta}} & \tfrac{2aba_{11}}{3}\sqrt{\tfrac{1-\theta}{1+\theta}} \\[1mm] \tfrac{2ab}{3}a_{12} & \tfrac{2ab}{3}a_{11} & \tfrac{2aba_{11}}{3}\sqrt{\tfrac{1-\theta}{1+\theta}} & -\tfrac{2aba_{12}}{3}\sqrt{\tfrac{1-\theta}{1+\theta}} \\[1mm] \tfrac{b}{3}a_{13} & \tfrac{b}{3}a_{14} & -\tfrac{ba_{14}}{3}\sqrt{\tfrac{1+\theta}{1-\theta}} & \tfrac{ba_{13}}{3}\sqrt{\tfrac{1+\theta}{1-\theta}} \\[1mm] -\tfrac{b}{3}a_{14} & \tfrac{b}{3}a_{13} & -\tfrac{ba_{13}}{3}\sqrt{\tfrac{1+\theta}{1-\theta}} & -\tfrac{ba_{14}}{3}\sqrt{\tfrac{1+\theta}{1-\theta}} \end{array} \right), $$
$$ E=(E_{ij}):= \left( \begin{array}{cccc} \tfrac{8a^2-3}{12}a_{12} & \tfrac{8a^2-3}{12}a_{11} & \tfrac{(8a^2-3)a_{11}}{12}\sqrt{\tfrac{1-\theta}{1+\theta}} & -\tfrac{(8a^2-3)a_{12}}{12}\sqrt{\tfrac{1-\theta}{1+\theta}}\\ \tfrac{3-8b^2}{12}a_{11} & \tfrac{8b^2-3}{12}a_{12} & \tfrac{(8b^2-3)a_{12}}{12}\sqrt{\tfrac{1-\theta}{1+\theta}} & \tfrac{(8b^2-3)a_{11}}{12}\sqrt{\tfrac{1-\theta}{1+\theta}}\\ \tfrac{1-4a}{12}a_{14} & -\tfrac{1-4a}{12}a_{13} & \tfrac{(1-4a)a_{13}}{12}\sqrt{\tfrac{1+\theta}{1-\theta}} & \tfrac{(1-4a)a_{14}}{12}\sqrt{\tfrac{1+\theta}{1-\theta}}\\ -\tfrac{1+4a}{12}a_{13} & -\tfrac{1+4a}{12}a_{14} & \tfrac{(1+4a)a_{14}}{12}\sqrt{\tfrac{1+\theta}{1-\theta}} & -\tfrac{(1+4a)a_{13}}{12}\sqrt{\tfrac{1+\theta}{1-\theta}} \end{array} \right). $$
In this way, we obtain the equation $BC=D+E$. This can be written in the equivalent form $C_{ij}=\sum_ka_{ik}(D_{kj}+E_{kj})$ for $1\le i,j\le4$. Then, since by \eqref{eqn:3.16} we have
$$ C_{11}-C_{22}=0, \ C_{12}+C_{21}=0,\ C_{33}-C_{44}=0,\ C_{34}+C_{43}=0, $$
it follows that $LF=0$, where $L=(a_{11}^2-a_{12}^2, \, a_{13}^2-a_{14}^2, \, a_{11}a_{12},\, a_{13}a_{14})$, and
$$ F=\left( \begin{array}{cccc} -2ab & a^2-b^2 & (b^2-a^2)(1-\theta)^2 & 2ab(1-\theta)^2\\ b & a & -a(1+\theta)^2 & -b(1+\theta)^2\\ 2(a^2-b^2) & 4ab & -4ab(1-\theta)^2 & 2(b^2-a^2)(1-\theta)^2\\ -2a & 2b & -2b(1+\theta)^2 & 2a(1+\theta)^2 \end{array}\right). $$
Now, direct calculation gives that $\det F=-64\theta^2(a^2+b^2)^3$. If $\det F=0$, then $c=1$ and this contradicts $\theta=\tfrac{2c^2}{1-2c^2}\in (0,1)$. If $\det F\neq0$, then $L=0$ and thus $a_{11}=a_{12}=a_{13}=a_{14}=0$, which is also a contradiction. In summary, we have shown that {\bf (iii)-(1)} does not occur.
\vskip 1mm
{\bf (iii)-(2)}. $\theta=0$. In this case, we have $J\{V_\alpha\cap\{U\}^\perp\}=V_\lambda$. Take a local orthonormal frame field $\{X_{i}\}_{i=1}^{5}$ of $M$ such that
$$ AX_1=\alpha X_1,\ AX_2=\alpha X_2,\ AX_3=\lambda X_3,\ AX_4=\lambda X_4,\ AX_5=\alpha X_5, $$
where $X_3=JX_1,\ X_4=JX_2,\ X_5=U$. It follows that
$$ g(G(X_1,X_2),X_i)=0,\ 1\leq i\le 4;\ \ \ g(G(X_1,X_2),G(X_1,X_2))=\tfrac{1}{3}. $$
Assume that $X_{i}=\sum_{j=1} ^4 a_{ij}e_{j}$ for $1\leq i\leq 4$.
Then, taking $(X,Y)=(X_i,X_j)$ in \eqref{eqn:2.19} for each $1\le i,j\le4$, we can still get the equations \eqref{eqn:3.9}--\eqref{eqn:3.14}, but with $\theta=0$. From \eqref{eqn:3.9} and \eqref{eqn:3.14} corresponding to $\theta=0$, we get $g(G(X_1,X_2),\xi)=0$. Then, by \eqref{eqn:3.10} and \eqref{eqn:3.11}, we obtain $g(G(X_1,X_2),U)=0$. It follows that $G(X_1,X_2)=0$, a contradiction to $g(G(X_1,X_2),G(X_1,X_2))=\tfrac{1}{3}$.
\vskip 1mm
{\bf (iii)-(3)}. $\theta=1$. In this case, both $V_\alpha\cap\{U\}^\perp$ and $V_\lambda$ are $J$-invariant. Then, it is easily seen that $M$ satisfies $A\phi=\phi A$, and according to Theorem 4.1 of \cite{H-Y-Z} we once more get the desired contradiction.
\vskip 1mm
{\bf (iv) $(\dim V_\alpha,\dim V_\lambda)=(4,1)$ on $M$.} In this case, we can take a local orthonormal basis $\{X_{i}\}_{i=1}^{5}$ such that
$$ AX_1=\lambda X_1,\ AX_2=\alpha X_2,\ AX_3=\alpha X_3,\ AX_4=\alpha X_4,\ AX_5=\alpha X_5, $$
where $X_2=JX_1,\ X_4=JX_3,\ X_5=U$. Then, as before, we have
\begin{equation}\label{eqn:3.17} g(G(X_1,X_3),X_i)=0,\ 1\leq i\leq4;\ \ |G(X_1,X_3)|^2=\tfrac{1}{3}. \end{equation}
Let $\{e_{i}\}_{i=1}^{5}$ be the orthonormal basis as described in \eqref{eqn:2.22} and assume, for some functions $m,n,u,v$, that $X_1=m e_1+n e_2+ue_3+ve_4,\ \ X_3=-ue_1+ve_2+m e_3-n e_4$. Then, by definition, we have
$$ X_2=-n e_1+m e_2-ve_3+ue_4,\ \ X_4=-ve_1-ue_2+n e_3+m e_4. $$
Taking in \eqref{eqn:2.19}, respectively, $(X,Y)=(X_1,X_3),(X_1,X_4),(X_3,X_2),(X_4,X_2)$, we get
\begin{gather} \tfrac{2}{3}c^2mv+\tfrac{2}{3}c^{2}nu=(\lambda-\alpha)g(G(X_1,\xi),X_3),\label{eqn:3.18}\\ -\tfrac{2}{3}c^2mu+\tfrac{2}{3}c^{2}nv=(\lambda-\alpha)g(G(X_1,\xi),X_4),\label{eqn:3.19}\\ -\tfrac{2}{3}c^2mu+\tfrac{2}{3}c^{2}nv=0,\label{eqn:3.20}\\ \tfrac{2}{3}c^2nu+\tfrac{2}{3}c^{2}mv=0.\label{eqn:3.21} \end{gather}
From these equations we immediately obtain
$$ g(G(X_1,X_3),U)=0,\ \ \ g(G(X_1,X_3),\xi)=0. $$
This together with \eqref{eqn:3.17} gives $G(X_1,X_3)=0$, a contradiction to $|G(X_1,X_3)|^2=\tfrac{1}{3}$. This finally completes the proof of Lemma \ref{lemma:3.1}. \end{proof}
\begin{lemma}\label{lemma:3.2} The case $\dim\mathfrak{D}=2$ does not occur. \end{lemma}
\begin{proof} Suppose on the contrary that $\dim\mathfrak{D}=2$ does hold on $M$. Then, we consider each possibility of the dimensions $(\dim V_\alpha,\dim V_\lambda)$.
\vskip 1mm
{\bf (i) $(\dim V_\alpha,\dim V_\lambda)=(1,4)$ on $M$.} In this case, we can easily show that $M$ satisfies $A\phi=\phi A$. As before, by Theorem 4.1 in \cite{H-Y-Z} this is impossible.
\vskip 1mm
{\bf (ii) $(\dim V_\alpha,\dim V_\lambda)=(2,3)$ on $M$.} In this case, we take a local orthonormal frame field $\{X_{i}\}_{i=1}^{5}$ of $M$ such that
$$ AX_1=\alpha X_1,\ AX_2=\lambda X_2,\ AX_3=\lambda X_3,\ AX_4=\lambda X_4,\ AX_5=\alpha X_5, $$
where $X_2=JX_1,\ X_4=JX_3,\ X_5=U$. By \eqref{eqn:2.3}--\eqref{eqn:2.5}, $G(X_1,\xi)$ is orthogonal to ${\rm Span}\{\xi,U,X_1,X_2\}$, so $AG(X_1,\xi)=\lambda G(X_1,\xi)$. Then, taking $X=X_1$ in \eqref{eqn:2.19}, we can get
\begin{equation}\label{eqn:3.22} (\alpha-\lambda)g(G(X_1,\xi),Y)=(\alpha^2-\alpha\lambda+\tfrac{1}{6})g(X_2,Y),\ \forall\, Y\in\{U\}^\perp. \end{equation}
Notice that $g(X_2,X_3)=g(X_2,X_4)=0$ and $\alpha\neq\lambda$, so \eqref{eqn:3.22} implies that $G(X_1,\xi)=0$. However, by \eqref{eqn:2.6} we have $|G(X_1,\xi)|^2=\tfrac{1}{3}$. This is a contradiction.
\vskip 1mm
{\bf (iii) $(\dim V_\alpha,\dim V_\lambda)=(3,2)$ on $M$.} In this case, we take a local orthonormal frame field $\{X_{i}\}_{i=1}^{5}$ of $M$ such that
$$ AX_1=\alpha X_1,\ AX_2=\alpha X_2,\ AX_3=\lambda X_3,\ AX_4=\lambda X_4,\ AX_5=\alpha X_5, $$
where $X_5=U$. Taking in \eqref{eqn:2.19} $(X,Y)=(X_1,X_2)$ gives $g(\phi X_1,X_2)=0$. It follows that $J\{V_\alpha\cap\{U\}^\perp\}=V_\lambda$. Then, we can choose a local orthonormal frame field $\{\tilde{X}_i\}_{i=1}^{5}$ such that $\tilde{X}_1=X_1,\ \tilde{X}_2=J\tilde{X}_1,\ \tilde{X}_3=X_2, \ \tilde{X}_4=J\tilde{X}_3,\ \tilde{X}_5=U$, and moreover, $\tilde{X}_1,\tilde{X}_3,\tilde{X}_5\in V_\alpha$ and $\tilde{X}_2,\tilde{X}_4\in V_\lambda$. By the identity \eqref{eqn:2.19} with $(X,Y)$ equal to $(\tilde{X}_2,\tilde{X}_3),\ (\tilde{X}_2,\tilde{X}_4)$, respectively, we have $g(G(\tilde{X}_2,\xi),\tilde{X}_3)=g(G(\tilde{X}_2,\xi),\tilde{X}_4)=0$. This implies that $G(\tilde{X}_2,\xi)=0$ due to the obvious fact $G(\tilde{X}_2,\xi)\perp{\rm Span}\,\{\xi,U,\tilde{X}_1,\tilde{X}_2\}$. However, by \eqref{eqn:2.6} we have $|G(\tilde{X}_2,\xi)|^2=\tfrac{1}{3}$. This is also a contradiction.
\vskip 1mm
{\bf (iv) $(\dim V_\alpha,\dim V_\lambda)=(4,1)$ on $M$.} In this case, we take a local orthonormal frame field $\{X_{i}\}_{i=1}^{5}$ of $M$ such that
$$ AX_1=\lambda X_1,\ AX_2=\alpha X_2,\ AX_3=\alpha X_3,\ AX_4=\alpha X_4,\ AX_5=\alpha X_5, $$
where $X_2=JX_1,\ X_4=JX_3,\ X_5=U$. By \eqref{eqn:2.3}--\eqref{eqn:2.5}, $G(X_1,\xi)$ is orthogonal to ${\rm Span}\{\xi,U,X_1,X_2\}$, so $AG(X_1,\xi)=\alpha G(X_1,\xi)$. Taking in \eqref{eqn:2.19} $X=X_1$, we get
\begin{equation}\label{eqn:3.23} (\alpha-\lambda)g(G(X_1,\xi),Y)=(\alpha^2-\alpha\lambda+\tfrac{1}{6})g(X_2,Y),\ \forall\, Y\in\{U\}^\perp. \end{equation}
Then, as in case {\bf (ii)}, from \eqref{eqn:3.23}, the fact $g(X_2,X_3)=g(X_2,X_4)=0$ and $\alpha\neq\lambda$, we obtain $G(X_1,\xi)=0$. However, by \eqref{eqn:2.6}, $|G(X_1,\xi)|^2=\tfrac{1}{3}$. This is a contradiction. \end{proof}
\section{Examples of Hopf hypersurfaces in $\mathbf{S}^3\times\mathbf{S}^3$}\label{sect:4}
As usual we denote by $\mathbf{S}^3$ (resp. $\mathbf{S}^2$) the set of the unitary (resp. imaginary) quaternions in the quaternion space $\mathbb{H}$. Then, in this short section, we describe several of the simplest examples of Hopf hypersurfaces in the NK $\mathbf{S}^3\times\mathbf{S}^3$.
\begin{examples}\label{exam:4.1} For each $0<r\leq1$, we define three families of hypersurfaces $M^{(r)}_1$, $M^{(r)}_2$ and $M^{(r)}_3$ in the NK $\mathbf{S}^3\times\mathbf{S}^3$ as below:
\begin{equation*} \begin{aligned} &M^{(r)}_1:=\left\{(x,\sqrt{1-r^2}+ry)\in\mathbf{S}^3\times \mathbf{S}^3 ~|~x\in \mathbf{S}^3,\ y\in\mathbf{S}^2\right\},\\ &M^{(r)}_2:=\mathcal{F}_1(M^{(r)}_1),\\ &M^{(r)}_3:=\mathcal{F}_2(M^{(r)}_1). \end{aligned} \end{equation*}
\end{examples}
\begin{remark}\label{rem:4.1} Among the preceding hypersurfaces $M^{(r)}_1$, $M^{(r)}_2$ and $M^{(r)}_3$ of the NK $\mathbf{S}^3\times\mathbf{S}^3$, $M^{(r)}_1$, $M^{(r)}_2$ and $M^{(1)}_3$ have been carefully discussed, respectively, in Examples 5.1, 5.2 and 5.3 of \cite{H-Y-Z}. As a matter of fact, all of them are Hopf hypersurfaces with three distinct constant principal curvatures: $\alpha=0$ (i.e. $AU=0$) of multiplicity $1$, $\lambda=\tfrac{\sqrt{1-r^2}}{2r}-\tfrac{\sqrt{3-2r^2}}{2\sqrt{3}r}$ of multiplicity $2$, and $\beta=\tfrac{\sqrt{1-r^2}}{2r}+\tfrac{\sqrt{3-2r^2}}{2\sqrt{3}r}$ of multiplicity $2$.
The holomorphic distributions $\{U\}^\perp$ of these hypersurfaces are all preserved by the almost product structure $P$ of the NK $\mathbf{S}^3\times\mathbf{S}^3$, but $P$ acts differently on their unit normal vector fields. \end{remark} \begin{examples}\label{exam:4.2} For each $0<k,l<1$, $k^2+l^2=1$, we can define three families of hypersurfaces $M^{(k,l)}_4$, $M^{(k,l)}_5$ and $M^{(k,l)}_6$ in the NK $\mathbf{S}^3\times\mathbf{S}^3$ as below: \begin{equation*} \begin{aligned} &M^{(k,l)}_4:=\left\{(x,(y_1,y_2,y_3,y_4))\in \mathbf{S}^3 \times \mathbf{S}^3 ~|~x\in \mathbf{S}^3,\ y_1^2+y_2^2=k^2,\ y_3^2+y_4^2=l^2\right\},\\ &M^{(k,l)}_5:=\mathcal{F}_1(M^{(k,l)}_4),\\%\qquad \qquad &M^{(k,l)}_6:=\mathcal{F}_2(M^{(k,l)}_4). \end{aligned} \end{equation*} \end{examples} \begin{remark}\label{rem:4.2} Direct calculations show that all of these three families of hypersurfaces are Hopf ones, and they have five distinct constant principal curvatures: $\alpha=0$ (i.e. $AU=0$), $\lambda_1=\tfrac{3k-\sqrt{9k^2+3l^2}}{6l}$, $\lambda_2=\tfrac{3k+\sqrt{9k^2+3l^2}}{6l}$, $\lambda_3=\tfrac{-3l-\sqrt{3k^2+9l^2}}{6k}$, $\lambda_4=\tfrac{-3l+\sqrt{3k^2+9l^2}}{6k}$. Similarly, the holomorphic distributions $\{U\}^\perp$ of these hypersurfaces are all preserved by the almost product structure $P$ of the NK $\mathbf{S}^3\times\mathbf{S}^3$, but $P$ acts differently on their unit normal vector fields. \end{remark} \begin{remark}\label{rem:4.3} Theorem \ref{thm:1.2} gives a characterization of the Hopf hypersurfaces $M^{(r)}_1$, $M^{(r)}_2$ and $M^{(r)}_3$ in the NK $\mathbf{S}^3\times\mathbf{S}^3$. We expect that a similar interesting characterization of the Hopf hypersurfaces $M^{(k,l)}_4$, $M^{(k,l)}_5$ and $M^{(k,l)}_6$ in the NK $\mathbf{S}^3\times\mathbf{S}^3$ is possible, but at the moment it is still not achieved. \end{remark} \section{The proof of Theorem \ref{thm:1.2}}\label{sect:5} This last section is devoted to the proof of Theorem \ref{thm:1.2}, which is given in two steps. In the sequel, we assume that $M$ is a Hopf hypersurface of the NK $\mathbf{S}^3\times\mathbf{S}^3$ with three distinct principal curvatures $\alpha,\ \lambda$ and $\beta$ such that $AU=\alpha U$, and that $P\{U\}^\perp=\{U\}^\perp$. In particular, \eqref{eqn:2.23} holds. \subsection{The principal curvatures and their multiplicities}\label{sect:5.1}~ Let $V_\alpha,V_\lambda$ and $V_\beta$ denote the eigenspaces corresponding to the principal curvatures $\alpha,\lambda$ and $\beta$, respectively. By the assumption of having three distinct principal curvatures and the continuity of the principal curvature functions, we know that the dimensions $(\dim V_\alpha,\dim V_\lambda,\dim V_\beta)$ remain unchanged on $M$, which, without loss of generality, have four possibilities: $(3,1,1)$, $(2,2,1)$, $(1,3,1)$ and $(1,2,2)$. First of all, we shall determine the multiplicities of the principal curvatures. \begin{lemma}\label{lemma:5.1} The multiplicities of the three distinct principal curvature functions $\alpha,\ \lambda,\ \beta$ can only be $1,2$ and $2$, respectively. \end{lemma} \begin{proof} Suppose on the contrary that, for the multiplicities of the principal curvatures $\alpha,\lambda$ and $\beta$, one of the three possibilities $(3,1,1),(2,2,1),(1,3,1)$ does occur. Then, for each possible case, we shall derive a contradiction by using Lemma \ref{lemma:2.1}. 
\vskip 1mm {\bf (i) $(\dim V_\alpha,\dim V_\lambda,\dim V_\beta)=(3,1,1)$ on $M$.} We take a local orthonormal frame field $\{X_{i}\}_{i=1}^{5}$ of $M$ such that $$ AX_1=\lambda X_1,\ AX_2=\beta X_2,\ AX_3=\alpha X_3,\ AX_4=\alpha X_4,\ X_5=U. $$ Taking in \eqref{eqn:2.19} $(X,Y)=(X_3,X_4)$, we get $g(\phi X_3,X_4)=0$, which implies that $J\{V_\lambda\oplus V_\beta\}=V_\alpha\cap{\{U\}^{\perp}}$. So we can further choose $X_3=JX_1$ and $X_4=JX_2$. Then, we easily show that $G(X_1,X_2)\in {\rm Span}\{\xi,U\}$, and by \eqref{eqn:2.6}, we have $|G(X_1,X_2)|^2=\tfrac{1}{3}$. Now, taking in \eqref{eqn:2.19} $(X,Y)=(X_1,X_3),\ (X_2,X_4),\ (X_2,X_3),\ (X_1,X_2)$, respectively, we obtain \begin{gather} \alpha^2-\alpha\lambda=-\tfrac{1}{6},\ \ \alpha^2-\alpha\beta=-\tfrac{1}{6},\label{eqn:5.1}\\ (\alpha-\beta)g(G(X_1,X_2),U)=0, \ \ (2\alpha-\lambda-\beta)g(G(X_1,X_2),\xi)=0.\label{eqn:5.2} \end{gather} From \eqref{eqn:5.2}, $\alpha-\beta\neq0$ and the preceding results, we see that $g(G(X_1,X_2),\xi)\not=0$ and $\lambda+\beta=2\alpha$. On the other hand, from \eqref{eqn:5.1} we get $2\alpha^2-\alpha(\lambda+ \beta)=-\tfrac{1}{3}$. But this contradicts $\lambda+\beta=2\alpha$, since substituting the latter would give $0=-\tfrac{1}{3}$. \vskip 1mm {\bf (ii) $(\dim V_\alpha,\dim V_\lambda,\dim V_\beta)=(2,2,1)$ on $M$.} In this case, we can define a function $\theta:=|g(JX,Y)|$ on $M$ for unit vectors $X\in V_\alpha\cap\{U\}^\perp$ and $Y\in V_\beta$. Since $0\leq\theta\leq1$ and our concern is only local, in order to prove that Case (ii) does not occur, it is sufficient to show that the following three subcases do not occur on $M$. \vskip 1mm {\bf (ii)-(a)}. $0<\theta<1$. In this subcase, we have the decomposition $JX=W+g(JX,Y)Y$ and $0\not=W\in V_\lambda$. Then, we can take a local orthonormal frame field $\{X_{i}\}_{i=1}^{5}$ of $M$ such that $$ AX_1=\alpha X_1,\ AX_2=\beta X_2,\ AX_3=\lambda X_3,\ AX_4=\lambda X_4,\ X_5=U, $$ where $X_3=(JX_1-\theta X_2)/\sqrt{1-\theta^2},\ X_4=(JX_2+\theta X_1)/\sqrt{1-\theta^2}$ and $\theta=g(JX_1,X_2)$. It follows that $G(X_1,X_2)\in {\rm Span}\{\xi,U\}$ and, by \eqref{eqn:2.6}, $|G(X_1,X_2)|^2=(1-\theta^2)/3$. Moreover, it is easily seen that with respect to the frame field $\{X_{i}\}_{i=1}^{5}$, all relations of \eqref{eqn:3.7} hold. Then, taking in \eqref{eqn:2.19} $(X,Y)=(X_1,X_4)$ and making use of \eqref{eqn:3.7}, we get $$ 0=(\lambda-\alpha)g(G(X_1,X_2),U). $$ It follows that $g(G(X_1,X_2),U)=0$ and $G(X_1,X_2)=\pm\sqrt{(1-\theta^2)/3}\,\xi$. In case $G(X_1,X_2)=-\sqrt{(1-\theta^2)/3}\,\xi$, with respect to the normal vector $\tilde{\xi}=-\xi$, we have $G(X_1,X_2)=\sqrt{(1-\theta^2)/3}\,\tilde\xi$, and the principal curvatures become $\tilde{\alpha}=-\alpha$, $\tilde{\lambda}=-\lambda$, $\tilde{\beta}=-\beta$, and $X_1,X_5\in V_{\tilde{\alpha}}$, $X_2\in V_{\tilde{\beta}}$, $X_3,X_4\in V_{\tilde{\lambda}}$. So it is sufficient to show that $G(X_1,X_2)=\sqrt{(1-\theta^2)/3}\,\xi$.
Taking in \eqref{eqn:2.19}, respectively, $(X,Y)=(X_1,X_2), (X_1,X_3), (X_2,X_4), (X_3,X_4)$, and making use of \eqref{eqn:3.7}, we have \begin{equation}\label{eqn:5.3} -\tfrac{\theta}{6}=(\alpha-\beta)\sqrt{\tfrac{1-\theta^2}{3}} +(\alpha^2-\alpha\beta)\theta, \end{equation} \begin{equation}\label{eqn:5.4} \sqrt{3}\alpha+\tfrac{\sqrt{3}}{6(\alpha-\lambda)}=\tfrac{\theta}{\sqrt{1-\theta^2}}, \end{equation} \begin{equation}\label{eqn:5.5} -\tfrac{\sqrt{1-\theta^2}}{6}=-\tfrac{\sqrt{3}}{3}(2\alpha-\lambda-\beta)\theta +(\alpha\lambda+\alpha\beta-2\lambda\beta)\sqrt{1-\theta^2}, \end{equation} \begin{equation}\label{eqn:5.6} -\sqrt{3}\lambda-\tfrac{\sqrt{3}}{12(\alpha-\lambda)}=\tfrac{\sqrt{1-\theta^2}}{\theta}. \end{equation} From these equations, we can derive a contradiction. Indeed, from \eqref{eqn:5.4} and \eqref{eqn:5.6}, we have \begin{equation}\label{eqn:5.7} \sqrt{3}(\alpha-\lambda)+\tfrac{\sqrt{3}}{12(\alpha-\lambda)}=\tfrac{1}{\theta\sqrt{1-\theta^2}}. \end{equation} It follows that $\alpha-\lambda=\tfrac{1\pm\sqrt{1-\theta^2+\theta^4}}{2\theta\sqrt{3(1-\theta^2)}}$. Then, from \eqref{eqn:5.4}, \eqref{eqn:5.6} and \eqref{eqn:5.3} we get $$ \alpha=\tfrac{-1+\theta^2\pm\sqrt{1-\theta^2+\theta^4}}{\theta\sqrt{3(1-\theta^2)}},\ \ \lambda=\tfrac{-3+2\theta^2\pm\sqrt{1-\theta^2+\theta^4}}{2\theta\sqrt{3(1-\theta^2)}},\ \ \beta=\tfrac{\pm(2-\theta^2+\theta^4)-2(1-\theta^2)\sqrt{1-\theta^2+\theta^4}} {2\sqrt{3}\theta\sqrt{(1-\theta^2)(1-\theta^2+\theta^4)}}. $$ Now, substituting $\alpha,\lambda$ and $\beta$ into \eqref{eqn:5.5}, we get the contradiction $\tfrac{\sqrt{1-\theta^2}}{3\sqrt{1-\theta^2+\theta^4}}=0$. \vskip 1mm {\bf (ii)-(b). $\theta=1$}. In this subcase, both $(V_\alpha\cap\{U\}^\perp)\oplus V_\beta$ and $V_\lambda$ are $J$-invariant. We take a local orthonormal frame field $\{X_{i}\}_{i=1}^{5}$ of $M$ such that $$ AX_1=\alpha X_1,\ AX_2=\beta X_2,\ AX_3=\lambda X_3,\ AX_4=\lambda X_4,\ X_5=U, $$ where $X_2=JX_1$ and $X_4=JX_3$. Then $G(X_1,X_3)\in {\rm Span}\{\xi,U\}$, and by \eqref{eqn:2.6}, we have $|G(X_1,X_3)|^2=\tfrac{1}{3}$. Taking in \eqref{eqn:2.19} $(X,Y)=(X_1,X_3)$ and $(X_1,X_4)$, respectively, we easily get $(\alpha-\lambda)g(G(X_1,X_3),\xi)=(\alpha-\lambda)g(G(X_1,X_4),\xi)=0$. This together with $G(X_1,X_4)=-JG(X_1,X_3)$ implies that $G(X_1,X_3)=0$, which is a contradiction. \vskip 1mm {\bf (ii)-(c). $\theta=0$}. In this subcase, $J\{(V_\alpha\cap\{U\}^\perp)\oplus V_\beta\}=V_\lambda$. Then, we can take a local orthonormal frame field $\{X_{i}\}_{i=1}^{5}$ of $M$ such that $$ AX_1=\alpha X_1,\ AX_2=\lambda X_2,\ AX_3=\lambda X_3,\ AX_4=\beta X_4,\ X_5=U, $$ where $X_2=JX_1$ and $X_4=JX_3$. Then $G(X_1,X_3)\in {\rm Span}\{\xi,U\}$ and $|G(X_1,X_3)|^2=\tfrac{1}{3}$. Taking in \eqref{eqn:2.19} $(X,Y)=(X_1,X_3)$ and $(X_1,X_4)$, respectively, we get $$ (\alpha-\lambda)g(G(X_1,X_3),\xi)=(\alpha-\beta)g(G(X_1,X_4),\xi)=0. $$ Then similar as the last subcase we get $G(X_1,X_3)=0$, which is a contradiction. \vskip 1mm {\bf (iii) $(\dim V_\alpha,\dim V_\lambda,\dim V_\beta)=(1,3,1)$ on $M$.} In this case, we can take a local orthonormal frame field $\{X_{i}\}_{i=1}^{5}$ of $M$ such that $$ AX_1=\beta X_1,\ AX_2=\lambda X_2,\ AX_3=\lambda X_3,\ AX_4=\lambda X_4,\ X_5=U, $$ where $X_2=JX_1,X_4=JX_3$. Then $G(X_1,X_3)\in {\rm Span}\{\xi,U\}$ and $|G(X_1,X_3)|^2=\tfrac{1}{3}$. 
Taking in \eqref{eqn:2.19} $(X,Y)=(X_1,X_2),(X_1,X_3)$ and $(X_1,X_4)$, respectively, we have \begin{gather} -\tfrac{1}{6}=\alpha\lambda+\alpha\beta-2\lambda\beta,\label{eqn:5.8}\\ (2\alpha-\lambda-\beta)g(G(X_1,X_3),\xi)=(2\alpha-\lambda-\beta)g(G(X_1,X_4),\xi)=0.\label{eqn:5.9} \end{gather} Then, by \eqref{eqn:5.9} and the fact $g(G(X_1,X_4),\xi)=g(-JG(X_1,X_3),\xi)=-g(G(X_1,X_3),U)$, we get $2\alpha-\lambda-\beta=0$. This together with \eqref{eqn:5.8} gives the contradiction $(\lambda-\beta)^2=-\tfrac{1}{3}$. We have completed the proof of Lemma \ref{lemma:5.1}. \end{proof} Next, we shall determine the principal curvatures and show that they are constants. Since we have the fact $\dim V_\alpha=1$ and $\dim V_\lambda=\dim V_\beta=2$, without loss of generality, we shall assume that $\lambda>\beta$. Then, we can state our result as follows: \begin{lemma}\label{lemma:5.2} All the three distinct principal curvatures $\alpha, \lambda$ and $\beta$ are constants. More specifically, we have $\alpha=0,\ \lambda=\tfrac{\sqrt{1-\theta^2}+1}{2\sqrt{3}\theta}$ and $\beta=\tfrac{\sqrt{1-\theta^2}-1}{2\sqrt{3}\theta}$ for some $0<\theta\le1$. \end{lemma} \begin{proof} It is easily seen that $|g(JX,Y)|$, for an orthonormal basis $\{X,Y\}$ of $V_\lambda$, gives a well-defined function $\theta$ on $M$ satisfying $0\le\theta\le1$. Since our concern is only local, in order to prove Lemma \ref{lemma:5.2}, by using the continuity of the principal curvature functions and $\theta$, it is sufficient to consider the following three cases: \vskip 1mm {\bf (1). $0<\theta<1$ on $M$}. In this case, we see that $JV_\lambda\neq V_\beta$ and $V_\lambda$ is not $J$-invariant. Then, we can take a local orthonormal frame field $\{X_{i}\}_{i=1}^{5}$ of $M$ such that $\theta=g(JX_1,X_2)$ and \begin{equation}\label{eqn:5.10} AX_1=\lambda X_1,\ AX_2=\lambda X_2,\ AX_3=\beta X_3,\ AX_4=\beta X_4,\ AX_5=\alpha X_5, \end{equation} where $X_5=U,\ X_3=\tfrac{JX_1-\theta X_2}{\sqrt{1-\theta^2}},\ X_4=\tfrac{JX_2+\theta X_1}{\sqrt{1-\theta^2}}$. Thus $G(X_1,X_2)\in {\rm Span}\{\xi,U\}$ and, by \eqref{eqn:2.6}, $|G(X_1,X_2)|^2=\tfrac13(1-\theta^2)$. Moreover, it is easily seen that with respect to the frame field $\{X_{i}\}_{i=1}^{5}$, all relations of \eqref{eqn:3.7} hold. Taking, in \eqref{eqn:2.19}, $(X,Y)=(X_3,X_4)$ and $(X,Y)=(X_1,X_i)$ for $2\leq i\leq4$, respectively, and making use of \eqref{eqn:3.7}, we have \begin{equation}\label{eqn:5.11} -\tfrac{\theta}{6}=2(\alpha-\lambda)g(G(X_1,X_2),\xi)+2\lambda(\alpha-\lambda)\theta, \end{equation} \begin{equation}\label{eqn:5.12} -\tfrac16\sqrt{1-\theta^2}=-\tfrac{\theta(2\alpha-\lambda-\beta)}{\sqrt{1-\theta^2}}g(G(X_1,X_2),\xi) +(\alpha\lambda+\alpha\beta-2\lambda\beta)\sqrt{1-\theta^2}, \end{equation} \begin{equation}\label{eqn:5.13} 0=(2\alpha-\lambda-\beta)g(G(X_1,X_2),U), \end{equation} \begin{equation}\label{eqn:5.14} \tfrac{\theta}{6}=-2(\alpha-\beta)g(G(X_1,X_2),\xi)+2\beta(\beta-\alpha)\theta. \end{equation} If $2\alpha-\lambda-\beta=0$, then together with \eqref{eqn:5.12} we derive the contradiction $(\lambda-\beta)^2=-\tfrac{1}{3}$. Hence $2\alpha-\lambda-\beta\neq0$. Then from \eqref{eqn:5.13} we get $g(G(X_1,X_2),U)=0$, and therefore we obtain $G(X_1,X_2)=\pm\sqrt{(1-\theta^2)/3}\,\xi$. Without loss of generality, we shall assume that $G(X_1,X_2)=-\sqrt{(1-\theta^2)/3}\,\xi$. Actually, if $G(X_1,X_2)=\sqrt{(1-\theta^2)/3}\,\xi$ occurs, then $G(X_3,X_4)=-\sqrt{(1-\theta^2)/3}\,\xi$ and $g(JX_3,X_4)=-\theta<0$.
Now, with respect to the normal vector field $\tilde{\xi}=-\xi$, the principal curvatures become $\tilde{\alpha}=-\alpha$, $\tilde{\lambda}=-\beta$ and $\tilde{\beta}=-\lambda$, $\tilde{\lambda}>\tilde{\beta}$. Putting $\tilde{X_1}=X_3$, $\tilde{X_2}=-X_4$, $\tilde{X}_3=\tfrac{J\tilde{X}_1-\theta\tilde{X}_2} {\sqrt{1-\theta^2}}$, $\tilde{X}_4=\tfrac{J\tilde{X}_2+\theta \tilde{X}_1}{\sqrt{1-\theta^2}}$ and $\tilde{X}_5=U$, then, with respect to the orthonormal frame field $\{\tilde{X}_{i}\}_{i=1}^{5}$, as assumed we have $G(\tilde{X}_1,\tilde{X}_2)=-\sqrt{(1-\theta^2)/3}\,\tilde{\xi}$ and $g(J\tilde{X}_1,\tilde{X}_2)=\theta>0$. Having the assumption $G(X_1,X_2)=-\sqrt{(1-\theta^2)/3}\,\xi$, the equations \eqref{eqn:5.11}, \eqref{eqn:5.12} and \eqref{eqn:5.14} become \begin{equation}\label{eqn:5.15} \theta=4\sqrt{3}(\alpha-\lambda)\sqrt{1-\theta^2}+12\lambda(\lambda-\alpha)\theta, \end{equation} \begin{equation}\label{eqn:5.16} -\sqrt{1-\theta^2}=2\sqrt{3}\theta(2\alpha-\lambda-\beta) +6(\alpha\lambda+\alpha\beta-2\lambda\beta)\sqrt{1-\theta^2}, \end{equation} \begin{equation}\label{eqn:5.17} \theta=4\sqrt{3}(\alpha-\beta)\sqrt{1-\theta^2}+12\beta(\beta-\alpha)\theta. \end{equation} Then, solving $\lambda$ and $\beta$ from \eqref{eqn:5.15} and \eqref{eqn:5.17}, we obtain $$ \lambda+\beta=\tfrac{3\alpha\theta+\sqrt{3(1-\theta^2)}}{3\theta},\ \ \lambda\beta=\tfrac{4\alpha\sqrt{3(1-\theta^2)}-\theta}{12\theta}. $$ This combining with \eqref{eqn:5.16} gives $\alpha(\alpha\sqrt{1-\theta^2}+\tfrac{2\theta^2-1}{\sqrt{3}\theta})=0$. Hence, $\alpha=0$ or $\alpha=\tfrac{1-2\theta^2}{\theta\sqrt{3-3\theta^2}}$. In conclusion, we can solve the above equations to obtain two possibilities: {\bf Case (1)-(i)}: $\alpha=0,\ \ \lambda=\tfrac{\sqrt{1-\theta^2}+1}{2\sqrt{3}\theta},\ \ \beta=\tfrac{\sqrt{1-\theta^2}-1}{2\sqrt{3}\theta}$; {\bf Case (1)-(ii)}: $\alpha=\tfrac{1-2\theta^2}{\theta\sqrt{3(1-\theta^2)}},\ \ \lambda=\tfrac{2-3\theta^2+\theta}{2\theta\sqrt{3(1-\theta^2)}},\ \ \beta=\tfrac{2-3\theta^2-\theta}{2\theta\sqrt{3(1-\theta^2)}}$. \vskip 1mm Before dealing with these two subcases in more details, we need some preparations. Put $\nabla_{X_i}X_j=\sum \Gamma_{ij}^{k}X_{k}$ with $\Gamma_{ij}^{k}=-\Gamma_{ik}^{j}$, $1\le i,j,k\le 5$. First of all, we have \begin{equation*} G(X_1,\xi)=-\sum_{i=1}^5 \Gamma_{15}^{i}X_i+\lambda JX_1. \end{equation*} On the other hand, the facts $g(G(X_1,X_2),\xi)=-\sqrt{(1-\theta^2)/3}$ and $g(G(X_1,X_2),U)=0$ imply that $G(X_1,\xi)=\sqrt{\tfrac{1-\theta^2}{3}}X_2-\tfrac{\sqrt{3}}{3}\theta X_3$. Hence, we obtain \begin{equation}\label{eqn:5.18} \Gamma_{15}^{1}=0,\ \ \Gamma_{15}^{2}=\lambda\theta-\sqrt{\tfrac{1-\theta^2}{3}},\ \ \Gamma_{15}^{3}=\lambda\sqrt{1-\theta^2}+\tfrac{\sqrt{3}}{3}\theta,\ \ \Gamma_{15}^{4}=0. \end{equation} Similarly, calculating $G(X_i,\xi)$ for $2\leq i\leq 4$, we can further obtain \begin{equation}\label{eqn:5.19} \left\{ \begin{aligned} &\Gamma_{25}^{1}=-\lambda\theta+\sqrt{\tfrac{1-\theta^2}{3}},\ \ \Gamma_{25}^{2}=0,\ \ \Gamma_{25}^{3}=0,\ \ \Gamma_{25}^{4}=\lambda\sqrt{1-\theta^2}+\tfrac{\sqrt{3}}{3}\theta,\\ &\Gamma_{35}^{1}=-\beta\sqrt{1-\theta^2}-\tfrac{\sqrt{3}}{3}\theta,\ \ \Gamma_{35}^{2}=0,\ \ \Gamma_{35}^{3}=0,\ \ \Gamma_{35}^{4}=-\beta\theta+\sqrt{\tfrac{1-\theta^2}{3}},\\ &\Gamma_{45}^{1}=0,\ \ \Gamma_{45}^{2}=-\beta\sqrt{1-\theta^2}-\tfrac{\sqrt{3}}{3}\theta,\ \ \Gamma_{45}^{3}=\beta\theta-\sqrt{\tfrac{1-\theta^2}{3}},\ \ \Gamma_{45}^{4}=0. \end{aligned}\right. 
\end{equation} Now, we calculate $g((\nabla_{X_i}A){X_j}-(\nabla_{X_j}A){X_i},X_k)$ for each $1\leq i,j,k\leq4$. First, by using \eqref{eqn:2.18} we easily see that $g((\nabla_{X_i} A){X_j}-(\nabla_{X_j} A){X_i},X_k)=0$. On the other hand, by using \eqref{eqn:5.10} we can calculate $0=g((\nabla_{X_i} A){X_j}-(\nabla_{X_j} A){X_i},X_k)$ to conclude that $X_1\lambda=X_2\lambda=X_3\beta=X_4\beta=0$ that is $X_i\theta=0$ for $1\le i\le4$, and $\Gamma_{ij}^k=\Gamma_{ik}^j=0$ for $i\in\{1,2,3,4\},j\in\{1,2\}$ and $k\in\{3,4\}$. Next, by definition, the above information of $\{\Gamma_{ij}^k\}$ and \eqref{eqn:3.7}, we can get $$ 0=g(G(X_1,X_2),X_3)=g((\tilde{\nabla}_{X_1} J){X_2},X_3)=\sqrt{1-\theta^2}\,(\Gamma_{14}^3-\Gamma_{12}^1). $$ It follows that $\Gamma_{14}^3=\Gamma_{12}^1$. Similarly, by calculating $0=g(G(X_i,X_1),X_4)$ for $2\le i\le4$, we further get $\Gamma_{23}^4=\Gamma_{21}^2$, $\Gamma_{33}^4=\Gamma_{31}^2$ and $\Gamma_{43}^4=\Gamma_{41}^2$. Moreover, by using \eqref{eqn:3.7} we have $g(G(U,X_1),X_4)=-\tfrac{\sqrt{3}}{3}$, then direct calculation of its left hand side gives \begin{equation}\label{eqn:5.20} (\Gamma_{53}^4-\Gamma_{51}^2)\sqrt{1-\theta^2}+(\Gamma_{52}^4+ \Gamma_{51}^3)\theta=-\tfrac{\sqrt{3}}{3}. \end{equation} Finally, from now on we assume that $PX_i=\sum_{j=1}^4p_{ij}X_j$ for $1\leq i\leq4$, where $p_{ij}=p_{ji}$ and, by the definition of $X_3$ and $X_4$, we have the following relations: \begin{equation}\label{eqn:5.21} \left\{ \begin{aligned} &p_{23}=p_{14}-\tfrac{(p_{11}+p_{22})\theta}{\sqrt{1-\theta^2}},\ \ p_{33}=\tfrac{\theta^2p_{22}-p_{11}+2\theta^2p_{11}}{1-\theta^2}-\tfrac{2\theta p_{14}}{\sqrt{1-\theta^2}},\\ &p_{34}=\tfrac{(p_{13}-p_{24})\theta}{\sqrt{1-\theta^2}}-p_{12},\ \ p_{44}=\tfrac{2\theta p_{14}}{\sqrt{1-\theta^2}}-\tfrac{p_{22}+\theta^2 p_{11}}{1-\theta^2}. \end{aligned} \right. \end{equation} \vskip 1mm Now, we come to discuss {\bf Case (1)-(i)} and show that in this subcase $\theta$ is constant. 
For that purpose, we apply for the Codazzi equation \eqref{eqn:2.18} with $(X,Y)=(U,X_i)$ for $1\leq i\leq4$, and then checking the results we obtain the following equations: \begin{gather} 3U\lambda-p_{11}b-a(p_{12}\theta+p_{13}\sqrt{1-\theta^2}\,)=0,\label{eqn:5.22}\\ ap_{11}\theta-p_{12}b-ap_{14}\sqrt{1-\theta^2}=0,\label{eqn:5.23}\\ 2ap_{14}\theta-1-2p_{13}b+\tfrac{2\sqrt{3}}{\theta}\Gamma_{51}^3+2ap_{11}\sqrt{1-\theta^2}=0,\label{eqn:5.24}\\ \tfrac{\sqrt{3}}{\theta}\Gamma_{51}^4-p_{14}b-ap_{13}\theta+ap_{12}\sqrt{1-\theta^2}=0,\label{eqn:5.25}\\ 3U\lambda-p_{22}b+ap_{12}\theta-ap_{24}\sqrt{1-\theta^2}=0,\label{eqn:5.26} \end{gather} \begin{equation}\label{eqn:5.27} \begin{aligned} \Gamma_{52}^3\sqrt{3(1-\theta^2)}&+b\theta\big[(p_{11}+p_{22})\theta-p_{14}\sqrt{1-\theta^2}\,\big]\\[-1mm] &+a\theta(p_{12}-p_{12}\theta^2+p_{24}\theta\sqrt{1-\theta^2}\,)=0, \end{aligned} \end{equation} \begin{equation}\label{eqn:5.28} \begin{aligned} 2\sqrt{3}\Gamma_{52}^4(\theta^2-1)-\theta\big\{&\theta^2-1+2p_{24}b(\theta^2-1)\\[-1mm] &+2a\big[p_{14}\theta(\theta^2-1)+\sqrt{1-\theta^2}(p_{22}+p_{11}\theta^2)\big]\big\}=0, \end{aligned} \end{equation} \begin{equation}\label{eqn:5.29} \begin{aligned} 2p_{14}b\theta(\theta^2-1)&+\sqrt{1-\theta^2}\big[(2p_{11}b+p_{22}b+3U\beta)\theta^2-p_{11}b-3U\beta\big]\\[-1mm] &+a(\theta^2-1)\big[p_{13}-\theta(p_{24}\theta+p_{12}\sqrt{1-\theta^2}\,)\big]=0, \end{aligned} \end{equation} \begin{equation}\label{eqn:5.30} \begin{aligned} b(\theta^2-1)&\big[(p_{24}-p_{13})\theta+p_{12}\sqrt{1-\theta^2}\,\big]\\[-1mm] &+a\big[\theta\sqrt{1-\theta^2}(p_{22}+p_{11}\theta^2)+p_{14}(\theta^4-1)\big]=0, \end{aligned} \end{equation} \begin{equation}\label{eqn:5.31} \begin{aligned} 2p_{14}b\theta(\theta^2-1)&+\sqrt{1-\theta^2}\big[p_{22}b+3U\beta+(p_{11}b-3U\beta)\theta^2\big]\\[-1mm] &-a(\theta^2-1)\big[p_{24}+\theta(p_{12}\sqrt{1-\theta^2}-p_{13}\theta\,)\big]=0. \end{aligned} \end{equation} Calculating \eqref{eqn:5.22}\,-\,\eqref{eqn:5.26} and \eqref{eqn:5.29}+\eqref{eqn:5.31}, respectively, we obtain \begin{equation}\label{eqn:5.32} 0=(p_{22}-p_{11})b+a\big[-2p_{12}\theta+(p_{24}-p_{13})\sqrt{1-\theta^2}\,\big], \end{equation} \begin{equation}\label{eqn:5.33} \begin{split} 0=&a(1-\theta^2)\big[(p_{24}-p_{13})(1+\theta^2)+2p_{12}\theta\sqrt{1-\theta^2}\,\big]\\[-1mm] &+b\big\{4p_{14}\theta(\theta^2-1)+\sqrt{1-\theta^2}\big[p_{22}-p_{11}+(3p_{11}+p_{22})\theta^2\big]\big\}. \end{split} \end{equation} Now, we claim that $a\not=0$ holds on $M$. Indeed, if otherwise, we assume $a(z)=0$ for some $z\in M$. Then, carrying calculations below at $z$, we have $b=\pm1$ and, by \eqref{eqn:5.32}, \eqref{eqn:5.33}, \eqref{eqn:5.23} and \eqref{eqn:5.30}, we have \begin{equation}\label{eqn:5.34} p_{22}-p_{11}=p_{12}=p_{24}-p_{13}=0,\ \ p_{14}=\tfrac{p_{11}\theta}{\sqrt{1-\theta^2}}. \end{equation} From \eqref{eqn:5.22} and \eqref{eqn:5.31}, we obtain $U\lambda=-U\beta=\tfrac13p_{11}b$ and thus $U(\lambda+\beta)=0$. Then, as $\lambda+\beta=\tfrac{\sqrt{1-\theta^2}}{\sqrt{3}\theta}$ and $0<\theta<1$, we get $U\theta=0$ and thus $U\lambda=U\beta=p_{11}=0$. From \eqref{eqn:5.34}, we have $p_{11}=p_{12}=p_{22}=p_{14}=0$. Finally, we apply for $0=g(G(PX_1,PX_2)+PG(X_1,X_2),U)$. By direct calculation of the right hand side, making use of the fact $G(X_1,X_2)=-\sqrt{\tfrac{1-\theta^2}{3}}\,\xi$, \eqref{eqn:3.7} and \eqref{eqn:5.21}, we get the contradiction $\sqrt{1-\theta^2}b=0$, which verifies the claim. 
\vskip 2mm As $a\not=0$, from \eqref{eqn:5.23} we solve $p_{14}=\tfrac{ap_{11}\theta-p_{12}b}{a\sqrt{1-\theta^2}}$. Then, from \eqref{eqn:5.32}, \eqref{eqn:5.33} and \eqref{eqn:5.30}, we obtain a matrix equation $AB=0$, where $$ A=(p_{22}-p_{11},\ p_{12}, \ p_{24}-p_{13}), $$ $$ B=\left( \begin{array}{ccc} b & b(1+\theta^2) & -a\\ -2a\theta & \tfrac{4b^2\theta+2a^2\theta(1-\theta^2)}{a} & -2b\theta \\[1mm] a\sqrt{1-\theta^2} & a\sqrt{1-\theta^2}(1+\theta^2)& b\sqrt{1-\theta^2} \end{array} \right). $$ The fact $\det B=\tfrac{4\theta\sqrt{1-\theta^2}}{a}\neq0$ implies that $p_{22}-p_{11}=p_{12}=p_{24}-p_{13}=0$. By \eqref{eqn:5.22} and \eqref{eqn:5.31}, we have $U\lambda=-U\beta=\tfrac13(p_{11}b+ap_{13}\sqrt{1-\theta^2}\,)$. The facts $0<\theta<1$ and $\lambda+\beta=\tfrac{\sqrt{1-\theta^2}}{\sqrt{3}\theta}$ then imply that $U\theta=0$. This, combined with $X_i\lambda=X_i\beta=0$ for $1\leq i\leq4$, shows that $\theta$, and hence $\lambda$ and $\beta$, are constants on $M$. Moreover, from \eqref{eqn:5.22}--\eqref{eqn:5.31}, we can finally obtain: \begin{equation}\label{eqn:5.35} p_{13}=-\tfrac{p_{11}b}{a\sqrt{1-\theta^2}},\ \ p_{14}=\tfrac{p_{11}\theta}{\sqrt{1-\theta^2}},\ \ \Gamma_{51}^3=\Gamma_{52}^4=\tfrac{\theta(-2p_{11}+a\sqrt{1-\theta^2})}{2a\sqrt{3-3\theta^2}},\ \ \Gamma_{51}^4=\Gamma_{52}^3=0. \end{equation} Then, by $\sum_{i=1}^4 (p_{1i})^2=1$, we get $(p_{11})^2=a^2(1-\theta^2)$. Now, calculating the curvature tensor, we obtain \begin{equation}\label{eqn:5.36} \begin{aligned} g(R(X_1,X_3)X_3,X_1)&=\Gamma_{31}^5\Gamma_{53}^1-\Gamma_{13}^5\Gamma_{35}^1-\Gamma_{13}^5\Gamma_{53}^1 =\tfrac{4p_{11}(1+\theta^2)-a\sqrt{1-\theta^2}(5+3\theta^2)}{12a\sqrt{1-\theta^2}}. \end{aligned} \end{equation} On the other hand, by the Gauss equation \eqref{eqn:2.17} and the fact $a^2+b^2=1$, we have \begin{equation}\label{eqn:5.37} g(R(X_1,X_3)X_3,X_1)=\tfrac{a^2(10\theta^2-7-3\theta^4)-4(p_{11})^2(\theta^2-2)}{12a^2(\theta^2-1)}. \end{equation} Comparing these two calculations, we get $$ (p_{11})^2(2-\theta^2)+3a^2(\theta^2-1)+ap_{11}\sqrt{1-\theta^2}(1+\theta^2)=0. $$ Then, by using $(p_{11})^2=a^2(1-\theta^2)$, we finally get $p_{11}=a\sqrt{1-\theta^2}$. It follows that, by \eqref{eqn:5.20}, \eqref{eqn:5.35} and the previous results about $p_{ij}$, we have \begin{equation}\label{eqn:5.38} \left\{ \begin{aligned} &p_{11}=p_{22}=-p_{33}=-p_{44}=a\sqrt{1-\theta^2},\ \ p_{12}=p_{34}=0,\\[-1mm] &p_{13}=p_{24}=-b,\ p_{14}=-p_{23}=a\theta,\\[-1mm] &\Gamma_{51}^3=\Gamma_{52}^4=-\tfrac{\theta}{2\sqrt{3}},\ \ \Gamma_{53}^4=\Gamma_{51}^2-\sqrt{\tfrac{1-\theta^2}{3}}. \end{aligned}\right. \end{equation} Later, in Lemma \ref{lemma:5.3}, we will show that {\bf Case (1)-(ii)} occurs only if $\theta=\tfrac{\sqrt{2}}{2}$. But this implies that {\bf Case (1)-(ii)} is actually a special situation of {\bf Case (1)-(i)} with $\theta=\tfrac{\sqrt{2}}{2}$. \vskip 1mm {\bf (2). $\theta=1$ on $M$}. In this case, it is easy to see that $M$ satisfies $A\phi=\phi A$. According to Proposition 5.7 of \cite{H-Y-Z}, the principal curvatures of $M$ are $\alpha=0$, $\lambda=\tfrac{\sqrt{3}}{6}$ and $\beta=-\tfrac{\sqrt{3}}{6}$. This shows that the expressions for the principal curvatures stated in {\bf Case (1)-(i)} are also valid for $\theta=1$. \vskip 1mm {\bf (3). $\theta=0$ on $M$}. In this case, we choose a local orthonormal frame field $\{X_{i}\}_{i=1}^{5}$ of $M$ such that $$ AX_1=\lambda X_1,\ AX_2=\beta X_2,\ AX_3=\lambda X_3,\ AX_4=\beta X_4,\ X_5=U, $$ where $X_2=JX_1$ and $X_4=JX_3$.
Then $G(X_1,X_3)\in {\rm Span}\{\xi,U\}$ and $|G(X_1,X_3)|^2=\tfrac{1}{3}$. Now, taking in \eqref{eqn:2.19} $(X,Y)=(X_1,X_2)$, $(X_1,X_3)$ and $(X_1,X_4)$, respectively, we obtain \begin{equation}\label{eqn:5.39} \alpha\beta+\alpha\lambda-2\lambda\beta=-\tfrac{1}{6}, \end{equation} \begin{equation}\label{eqn:5.40} (\alpha-\lambda)g(G(X_1,X_3),\xi)=0,\ \ (2\alpha-\lambda-\beta)g(G(X_1,X_3),U)=0. \end{equation} From \eqref{eqn:5.40}, $\alpha\neq\lambda$ and $|G(X_1,X_3)|^2=\tfrac{1}{3}$, we get $2\alpha-\lambda-\beta=0$. This, combined with \eqref{eqn:5.39}, gives the contradiction $(\lambda-\beta)^2=-\tfrac{1}{3}$. We have completed the proof of Lemma \ref{lemma:5.2}. \end{proof} \begin{lemma}\label{lemma:5.3} If {\bf Case (1)-(ii)} in the proof of Lemma \ref{lemma:5.2} does occur, then $\theta=\tfrac{\sqrt{2}}2$. \end{lemma} \begin{proof} First of all, according to Lemma \ref{lemma:2.2}, $\alpha$ is constant. Hence, by the formulas for {\bf Case (1)-(ii)} of the proof of Lemma \ref{lemma:5.2}, $\theta$, $\lambda$ and $\beta$ are also constants. Now, since the local orthonormal frame field $\{X_{i}\}_{i=1}^{5}$ of $M$ satisfies \eqref{eqn:5.10}, we apply the Codazzi equation \eqref{eqn:2.18} with $(X,Y)=(U,X_i)$ for $1\leq i\leq4$. Then, by checking the results, as in {\bf Case (1)-(i)} we obtain the equations \eqref{eqn:5.22}, \eqref{eqn:5.23}, \eqref{eqn:5.26} and \eqref{eqn:5.29}--\eqref{eqn:5.31} with $U\lambda=U\beta=0$. Moreover, we have the following additional four equations: \begin{gather} \theta\big\{2\sqrt{3}\Gamma_{51}^3+\theta-2p_{13}b\sqrt{1-\theta^2} +2a\big[p_{11}(1-\theta^2)+p_{14}\theta\sqrt{1-\theta^2}\,\big]\big\}-1=0,\label{eqn:5.41}\\ \sqrt{3}\Gamma_{51}^4-p_{14}b\sqrt{1-\theta^2}+ a\big[p_{12}(1-\theta^2)-p_{13}\theta\sqrt{1-\theta^2}\,\big]=0,\label{eqn:5.42}\\ \sqrt{3}\Gamma_{52}^3+(p_{11}+p_{22})b\theta-p_{14}b\sqrt{1-\theta^2} +a\big[p_{12}(1-\theta^2)+p_{24}\theta\sqrt{1-\theta^2}\,\big]=0,\label{eqn:5.43}\\ \theta\big\{2\sqrt{3}\Gamma_{52}^4+\theta-2p_{24}b\sqrt{1-\theta^2} +2a\big[p_{22}+\theta(p_{11}\theta-p_{14}\sqrt{1-\theta^2}\,)\big]\big\}-1=0.\label{eqn:5.44} \end{gather} It follows that \eqref{eqn:5.32} and \eqref{eqn:5.33} are still valid. Then, by similar discussions to those in {\bf Case (1)-(i)}, we have $$ a\neq0,\ (p_{11})^2=a^2(1-\theta^2),\ p_{22}=p_{11},\ p_{12}=0,\ p_{13}=p_{24}=-\tfrac{p_{11}b}{a\sqrt{1-\theta^2}}, \ p_{14}=\tfrac{p_{11}\theta}{\sqrt{1-\theta^2}}. $$ Moreover, by using the equations \eqref{eqn:5.41}\,--\,\eqref{eqn:5.44}, we can get $$ \Gamma_{51}^3=\Gamma_{52}^4=\tfrac{a-2p_{11}\theta-a\theta^2}{2\sqrt{3}a\theta},\ \ \Gamma_{51}^4=\Gamma_{52}^3=0. $$ Now, calculating the curvature tensor, we obtain $$ \begin{aligned} &g(R(X_1,X_3)X_4,X_2)=\Gamma_{34}^5\Gamma_{15}^2-\Gamma_{13}^5\Gamma_{54}^2+\Gamma_{31}^5\Gamma_{54}^2 =\tfrac{a(6\theta^2-4-3\theta^4)-4p_{11}\theta(\theta^2-2)}{12a\theta^2},\\ &g(R(X_1,X_3)X_3,X_1)=\Gamma_{31}^5\Gamma_{53}^1-\Gamma_{13}^5\Gamma_{35}^1-\Gamma_{13}^5\Gamma_{53}^1 =\tfrac{a(11\theta^2-8-3\theta^4)-4p_{11}\theta(\theta^2-2)}{12a\theta^2}. \end{aligned} $$ On the other hand, by the Gauss equation \eqref{eqn:2.17} and the fact $a^2+b^2=1$, we have $$ \begin{aligned} &g(R(X_1,X_3)X_4,X_2)=\tfrac{4(p_{11})^2\theta^2+a^2(2-3\theta^2)(1-\theta^2)}{12a^2(1-\theta^2)},\\ &g(R(X_1,X_3)X_3,X_1)=\tfrac{4(p_{11})^2\theta^2(2-\theta^2)-a^2(1-\theta^2)^2(4+3\theta^2)}{12a^2\theta^2(\theta^2-1)}.
\end{aligned} $$ Comparing these two calculations, respectively, we can obtain \begin{equation}\label{eqn:5.45} (p_{11})^2\theta^4-ap_{11}\theta(2-\theta^2)(1-\theta^2)+a^2(1-\theta^2)^2=0, \end{equation} \begin{equation}\label{eqn:5.46} (p_{11})^2\theta^2(\theta^2-2)-ap_{11}\theta(2-\theta^2)(1-\theta^2)+3a^2(1-\theta^2)^2=0. \end{equation} Now the calculation \eqref{eqn:5.45}-\eqref{eqn:5.46} gives that $$ (p_{11})^2\theta^2=a^2(1-\theta^2)^2, $$ and, by using the fact $(p_{11})^2=a^2(1-\theta^2)$, we obtain $\theta=\tfrac{\sqrt{2}}{2}$. This completes the proof of Lemma \ref{lemma:5.3}. \end{proof} Based on Lemma \ref{lemma:5.2}, we can prove the following result for Hopf hypersurfaces which is an interesting counterpart of Proposition 5.8 in \cite{H-Y-Z}. \begin{proposition}\label{prop:5.1} Let $M$ be a Hopf hypersurface of the NK $\mathbf{S}^3\times\mathbf{S}^3$ with three distinct principal curvatures and assume that the almost product structure $P$ of $M$ preserves the holomorphic distribution, i.e., $P\{U\}^\perp=\{U\}^\perp$. Then either $P\xi=\tfrac12\xi+\tfrac{\sqrt{3}}{2}J\xi$, or $P\xi=\tfrac12\xi-\tfrac{\sqrt{3}}{2}J\xi$, or $P\xi=-\xi$. \end{proposition} \begin{proof} We first assume that $0<\theta<1$. Let $\{X_i\}_{i=1}^5$ be as described by \eqref{eqn:5.10}. Then, by using \eqref{eqn:3.7}, \eqref{eqn:5.38} and the fact $G(X_1,X_2)=-\sqrt{(1-\theta^2)/3}\,\xi$, we can show that the equation $0=g(G(PX_1,PX_2)+PG(X_1,X_2),\xi)$ becomes equivalently $$ (1-2a)(1+a)=0. $$ This implies the assertion that we have three possibilities for $P\xi$, namely, (1) $a=\tfrac{1}{2}$ and $b=-\tfrac{\sqrt{3}}{2}$, \ (2) $a=\tfrac{1}{2}$ and $b=\tfrac{\sqrt{3}}{2}$, \ (3) $a=-1$ and $b=0$. \vskip 1mm Next, if $\theta=1$, then as stated before the hypersurface satisfies $A\phi=\phi A$ and the assertion follows from Proposition 5.8 of \cite{H-Y-Z}. \end{proof} \vskip 1mm For the sake of later's purpose, we summarize the following conclusion that we have established. \begin{lemma}\label{lemma:5.4} For $0<\theta<1$ with $\alpha=0,\ \lambda=\tfrac{\sqrt{1-\theta^2}+1}{2\sqrt{3}\theta}$ and $\beta=\tfrac{\sqrt{1-\theta^2}-1}{2\sqrt{3}\theta}$, the vector $P\xi$ has three possibilities: $\tfrac12\xi+\tfrac{\sqrt{3}}{2}J\xi,\ \tfrac12\xi-\tfrac{\sqrt{3}}{2}J\xi,\ -\xi$. For each of these cases, we have a local orthonormal frame $\{X_i\}_{i=1}^5$, which is described by \eqref{eqn:5.10}, such that $PX_i=\sum_{j=1}^4p_{ij}X_j$ for $1\leq i\leq4$, and $\{p_{ij}\}$ satisfy \eqref{eqn:5.38}. Moreover, with respect to $\{X_i\}_{i=1}^5$, the connection coefficients $\{\Gamma_{ij}^k\}$ satisfy \eqref{eqn:5.18}, \eqref{eqn:5.19}, \eqref{eqn:5.38}, as well as the following relations: \begin{equation}\label{eqn:5.47} \left\{ \begin{aligned} &\Gamma_{ij}^k=0,\ {\rm if}\ i\in\{1,2,3,4\},\ j\in\{1,2\},\ k\in\{3,4\};\\ &\Gamma_{14}^3=\Gamma_{12}^1,\ \Gamma_{23}^4=\Gamma_{21}^2,\ \Gamma_{33}^4=\Gamma_{31}^2,\ \Gamma_{43}^4=\Gamma_{41}^2,\ \Gamma_{51}^4=\Gamma_{52}^3=0. \end{aligned}\right. \end{equation} \end{lemma} \vskip 2mm \subsection{Proof of Theorem \ref{thm:1.2}}\label{sect:5.3}~ We get the proof of Theorem \ref{thm:1.2} as a direct consequence of three results concerning the three possibilities for $P\xi$ described in Proposition \ref{prop:5.1}. First of all, we prove the following result: \begin{theorem}\label{thm:5.1} Let $M$ be a Hopf hypersurface of the NK $\mathbf{S}^3\times\mathbf{S}^3$ which possesses three distinct principal curvatures and satisfies $P\{U\}^\perp=\{U\}^\perp$ on $M$. 
If $P\xi=\tfrac{1}{2}\xi+\tfrac{\sqrt{3}}{2}J\xi$, then, up to isometries of type $\mathcal{F}_{abc}$, $M$ is locally given by the embedding $f_r$ $(0<r\leq1)$ in Theorem \ref{thm:1.2}. \end{theorem} \begin{proof} We first assume that $0<\theta<1$ and let $\{X_i\}_{i=1}^5$ be as described by \eqref{eqn:5.10}. Put \begin{equation}\label{eqn:5.48} \left\{ \begin{aligned} &\bar{e}_1=\tfrac{\sqrt{2+\sqrt{1-\theta^2}}}{2}X_1-\tfrac{\sqrt{3}}{2\sqrt{2+\sqrt{1-\theta^2}}}X_3 +\tfrac{\theta}{2\sqrt{2+\sqrt{1-\theta^2}}}X_4,\ \bar{e}_5=X_5=U,\\[-1mm] &\bar{e}_2=\tfrac{\sqrt{2+\sqrt{1-\theta^2}}}{2}X_2-\tfrac{\theta}{2\sqrt{2+\sqrt{1-\theta^2}}}X_3 -\tfrac{\sqrt{3}}{2\sqrt{2+\sqrt{1-\theta^2}}}X_4,\\ &\bar{e}_3=\tfrac{\theta}{\sqrt{2+2\sqrt{1-\theta^2}}}X_2+\tfrac{\sqrt{1+\sqrt{1-\theta^2}}}{\sqrt{2}}X_3, \ \ \bar{e}_4=\tfrac{\theta}{\sqrt{2+2\sqrt{1-\theta^2}}}X_1-\tfrac{\sqrt{1+\sqrt{1-\theta^2}}}{\sqrt{2}}X_4. \end{aligned} \right. \end{equation} Then $\{\bar{e}_i\}_{i=1}^5$ is a local (non-orthonormal) frame field of $M$. We consider the following decomposition of the tangent bundle of $M$: $TM={\rm Span}\{\bar{e}_1,\bar{e}_2\}\oplus{\rm Span}\{\bar{e}_3,\bar{e}_4,\bar{e}_5\}$. Using Lemma \ref{lemma:5.4}, we have $$ \nabla_{\bar{e}_i} {\bar{e}_j}\in {\rm Span}\{\bar{e}_1,\bar{e}_2,\bar{e}_5\}\ {\rm for}\ i,j=1,2; \ \ \ \nabla_{\bar{e}_i} {\bar{e}_j}\in {\rm Span}\{\bar{e}_3,\bar{e}_4,\bar{e}_5\}\ {\rm for}\ i,j=3,4,5. $$ Moreover, by direct calculation, we can show that $$ [\bar{e}_i,\bar{e}_j]\in {\rm Span}\{\bar{e}_1,\bar{e}_2\}\ {\rm for}\ i,j=1,2;\ \ \ [\bar{e}_i,\bar{e}_j]\in {\rm Span}\{\bar{e}_3,\bar{e}_4,\bar{e}_5\}\ {\rm for}\ i,j=3,4,5. $$ It follows that both $\rm{Span}\{\bar{e}_1,\bar{e}_2\}$ and $\rm{Span}\{\bar{e}_3,\bar{e}_4,\bar{e}_5\}$ are integrable distributions. Let $M_1$ and $M_2$ be the integral manifolds of $\rm{Span}\{\bar{e}_3,\bar{e}_4,\bar{e}_5\}$ and $\rm{Span}\{\bar{e}_1,\bar{e}_2\}$, respectively. Note also that now we have $$ g(A\bar{e}_i,\bar{e}_j)=0\ {\rm for}\ i,j=3,4,5;\ \ g(A\bar{e}_i,\bar{e}_j)=\tfrac{\sqrt{3(1-\theta^2)}}{4\theta}\delta_{ij}\ {\rm for}\ i,j=1,2. $$ So we have $\tilde{\nabla}_{\bar{e}_i}{\bar{e}_j}\in TM_1\ {\rm for}\ i,j=3,4,5$; and $\tilde{\nabla}_{\bar{e}_i}{\bar{e}_j}=\hat{\nabla}_{\bar{e}_i}{\bar{e}_j}+\hat{h}(\bar{e}_i,\bar{e}_j)\ {\rm for}\ i,j=1,2$, where $\hat{\nabla}$ is the Levi-Civita connection of $M_2$, and $\hat{h}$ is the second fundamental form of the submanifold $M_2\hookrightarrow\mathbf{S}^3\times\mathbf{S}^3$. Moreover, by direct calculations we can show that $\hat{h}(\bar{e}_i,\bar{e}_j)=(\tfrac{\sqrt{1-\theta^2}} {4\theta}U+\tfrac{\sqrt{3(1-\theta^2)}}{4\theta}\xi)\delta_{ij},i,j=1,2$. Hence $M_1$ is a totally geodesic submanifold of $\mathbf{S}^3\times\mathbf{S}^3$, whereas $M_2$ is a totally umbilical submanifold of $\mathbf{S}^3\times\mathbf{S}^3$. Applying for \eqref{eqn:2.12}, we further see that $M_1$ and $M_2$ have constant sectional curvature $\tfrac{3}{4}$ and $\tfrac{1+2\theta^2}{4\theta^2}$, respectively. Thus, $M_1$ (resp. $M_2$) is locally isometric to $\mathbf{S}^3$ (resp. $\mathbf{S}^2$) equipped with metric $\tfrac{4}{3}g_0$ (resp. $\tfrac{4\theta^2}{1+2\theta^2}g_0$), where $g_0$ denotes the standard metric of constant sectional curvature $1$ on $\mathbf{S}^3$ (resp. $\mathbf{S}^2$). In particular, $M$ is locally diffeomorphic to the product manifold $\mathbf{S}^3\times\mathbf{S}^2$. 
By the identification of $M$ with an open subset of $\mathbf{S}^3\times\mathbf{S}^2$, we can express the hypersurface $M$ by an immersion $f=(p,q)$ with the parametrization $(x,y)$ of $\mathbf{S}^3\times\mathbf{S}^2$ such that $$ \begin{aligned} f:\mathbf{S}^3\times\mathbf{S}^2 \longrightarrow\mathbf{S}^3\times\mathbf{S}^3,\ \ \ (x,y)\mapsto (p(x,y),q(x,y)). \end{aligned} $$ From \eqref{eqn:2.10}, $P\xi=\tfrac{1}{2}\xi-\tfrac{\sqrt{3}}{2}U$, \eqref{eqn:3.7}, \eqref{eqn:5.38} and \eqref{eqn:5.48}, it can be verified that $$ Q\bar{e}_1=\bar{e}_1,\ \ Q\bar{e}_2=\bar{e}_2,\ \ Q\bar{e}_3=-\bar{e}_3,\ \ Qe_4=-\bar{e}_4,\ \ QU=-U. $$ Then, by the definition of $Q$, it follows that $dp,dq:T(\mathbf{S}^3\times\mathbf{S}^2)\rightarrow T\mathbf{S}^3$ have the following properties: \begin{equation}\label{eqn:5.49} \left\{ \begin{aligned} (dp(v),0)&=\tfrac{1}{2}(df(v)-Qdf(v))=df(v),\\ (0,dq(v))&=\tfrac{1}{2}(df(v)+Qdf(v))=0, \end{aligned} \right.\ \ \ \ \forall\, v\in T(\mathbf{S}^3\times\{pt\}). \end{equation} \vskip-3mm \begin{equation}\label{eqn:5.50} \left\{ \begin{aligned} (dp(w),0)&=\tfrac{1}{2}(df(w)-Qdf(w))=0,\\ (0,dq(w))&=\tfrac{1}{2}(df(w)+Qdf(w))=df(w), \end{aligned} \right.\ \forall\, w\in T(\{pt\}\times\mathbf{S}^2). \end{equation} The first equation of \eqref{eqn:5.50} shows that $p$ depends only on the first entry $x$, and hence it can be regarded as a mapping from $\mathbf{S}^3$ to $\mathbf{S}^3$. From \eqref{eqn:5.49} we see that $p:\mathbf{S}^3\to\mathbf{S}^3$ is a local diffeomorphism. Noting that the pull-back metric $f^*g$ restricted on $\mathbf{S}^3\times \{pt\}$ is exactly $\tfrac{4}{3}g_0$, $p$ is actually an isometry. By a re-parametrization of the preimage $\mathbf{S}^3$, we can assume that $p(x)=x$. Similarly, from the second equation in \eqref{eqn:5.49} we derive that $q$ depends only on the second entry $y$, thus $q$ is actually a mapping from $\mathbf{S}^2$ to $\mathbf{S}^3$. As the second equation in \eqref{eqn:5.50} shows that $dq$ is of rank $2$, then $q(\mathbf{S}^2)$ is a $2$-dimensional submanifold in $\mathbf{S}^3$. Noting that the pull-back metric $f^*g$ restricted on $\{pt\}\times\mathbf{S}^2$ is $\tfrac{4\theta^2}{1+2\theta^2}g_0$. It follows that $\mathbf{S}^2$ is totally umbilical immersed in $\mathbf{S}^3$ and, up to an isometry of $\mathbf{S}^3$, we can assume that $q(y)=\sqrt{1-r^2}+ry$, where $r=\tfrac{\sqrt{3}\theta}{\sqrt{1+2\theta^2}}$ and $y\in\mathbf{S}^3\cap\mathrm{Im}\,\mathbb{H}$. Hence, up to isometries of type $\mathcal{F}_{abc}$, $M$ is locally the image of the embedding $f_r$, corresponding to $0<r<1$, as described in Theorem \ref{thm:1.2}. \vskip 1mm Next, we consider the case $\theta=1$. As we mentioned earlier, in this case $M$ satisfies $A\phi=\phi A$. Then, according to Theorem 5.9 of \cite{H-Y-Z}, $M$ is locally given by the embedding $f_1$ as described in Theorem \ref{thm:1.2}. This completes the proof of Theorem \ref{thm:5.1}. \end{proof} \begin{theorem}\label{thm:5.2} Let $M$ be a Hopf hypersurface of the NK $\mathbf{S}^3\times\mathbf{S}^3$ which possesses three distinct principal curvatures and satisfies $P\{U\}^\perp=\{U\}^\perp$ on $M$. If $P\xi=\tfrac{1}{2}\xi-\tfrac{\sqrt{3}}{2}J\xi$, then, up to isometries of type $\mathcal{F}_{abc}$, $M$ is locally given by the embedding $f_r'$ $(0<r\leq1)$ in Theorem \ref{thm:1.2}. 
\end{theorem} \begin{proof} Given $M$, by using the isometry $\mathcal{F}_1$, we obviously get another Hopf hypersurface $\mathcal{F}_1(M)$ of the NK $\mathbf{S}^3 \times \mathbf{S}^3$ which also possesses three distinct principal curvatures. From Theorem 5.1 of \cite{M-V}, we know that the differential of the isometry $\mathcal{F}_1$ anticommutes with the almost complex structure $J$, and commutes with the almost product structure $P$, that is, $$ d\mathcal{F}_1\circ J=-J\circ d\mathcal{F}_1,\ \ \ \ d\mathcal{F}_1\circ P= P\circ d\mathcal{F}_1. $$ Noticing that $\xi':=d\mathcal{F}_1(\xi)$ and $U':=-J\xi'=-d\mathcal{F}_1(U)$ are the unit normal vector field and the structure vector field of $\mathcal{F}_1(M)$. By using $P\xi=\tfrac{1}{2}\xi-\tfrac{\sqrt{3}}{2}J\xi$, we have \begin{equation*} \begin{aligned} P\xi'&=Pd\mathcal{F}_1(\xi)=d\mathcal{F}_1P(\xi)=d\mathcal{F}_1(\tfrac{1}{2}\xi-\tfrac{\sqrt{3}}{2}J\xi)\\ &=\tfrac{1}{2}d\mathcal{F}_1(\xi)+\tfrac{\sqrt{3}}{2}Jd\mathcal{F}_1(\xi)=\tfrac{1}{2}\xi'+\tfrac{\sqrt{3}}{2}J\xi'. \end{aligned} \end{equation*} It follows that $P\{U'\}^\perp=\{U'\}^\perp$ holds on $\mathcal{F}_1(M)$. Noticing that, for any unitary quaternions $a,b,c$, the isometries $\mathcal{F}_{abc}$ and $\mathcal{F}_1$ satisfy $(\mathcal{F}_1)^2={\rm id}$ and $\mathcal{F}_{abc}\circ\mathcal{F}_1=\mathcal{F}_1\circ\mathcal{F}_{bac}$. Then, applying for Theorem \ref{thm:5.1} to the hypersurface $\mathcal{F}_1(M)$, we immediately conclude the proof of Theorem \ref{thm:5.2}. \end{proof} \begin{theorem}\label{thm:5.3} Let $M$ be a Hopf hypersurface of the NK $\mathbf{S}^3\times\mathbf{S}^3$ which possesses three distinct principal curvatures and satisfies $P\{U\}^\perp=\{U\}^\perp$ on $M$. If $P\xi=-\xi$, then, up to isometries of type $\mathcal{F}_{abc}$, $M$ is locally given by the embedding $f_r''$ $(0<r\leq1)$ in Theorem \ref{thm:1.2}. \end{theorem} \begin{proof} Given $M$, by using the isometry $\mathcal{F}_2$, we get another Hopf hypersurface $\mathcal{F}_2(M)$ of the NK $\mathbf{S}^3\times\mathbf{S}^3$ which also possesses three distinct principal curvatures. From Theorem 5.2 of \cite{M-V}, the differential of the isometry $\mathcal{F}_2$ satisfies the following relationship with $J$ and $P$: $$ d\mathcal{F}_2\circ J=-J\circ d\mathcal{F}_2,\ \ \ \ d\mathcal{F}_2\circ P=(-\tfrac{1}{2}P+\tfrac{\sqrt{3}}{2}JP)\circ d\mathcal{F}_2. $$ Noticing that $\xi'':=d\mathcal{F}_2(\xi)$ and $U'':=-J\xi''=-d\mathcal{F}_2(U)$ are the unit normal vector field and the structure vector field of $\mathcal{F}_2(M)$. By using $P\xi=-\xi$, we have \begin{equation*} \begin{aligned} P\xi''&=Pd\mathcal{F}_2(\xi)=-2d\mathcal{F}_2P(\xi)+\sqrt{3}JPd\mathcal{F}_2(\xi)\\ &=2d\mathcal{F}_2(\xi)+\sqrt{3}JP\xi''=2\xi''+\sqrt{3}JP\xi''. \end{aligned} \end{equation*} It follows that $P\xi''=\tfrac{1}{2}(\xi''-\sqrt{3}PJP\xi'')=\tfrac{1}{2}\xi''+\tfrac{\sqrt{3}}{2}J\xi''$, and $P\{U''\}^\perp=\{U''\}^\perp$ holds on $\mathcal{F}_2(M)$. Noticing also that, for any unitary quaternions $a,b,c$, the isometries $\mathcal{F}_{abc}$ and $\mathcal{F}_2$ satisfy $(\mathcal{F}_2)^2={\rm id}$ and $\mathcal{F}_{abc} \circ \mathcal{F}_2=\mathcal{F}_2 \circ \mathcal{F}_{cba}$. Then, applying for Theorem \ref{thm:5.1} to the hypersurface $\mathcal{F}_2(M)$, we immediately conclude the proof of Theorem \ref{thm:5.3}. \end{proof} Finally, combining Proposition \ref{prop:5.1} and Theorems \ref{thm:5.1}--\ref{thm:5.3}, we have completed the proof of Theorem \ref{thm:1.2}.\qed \vskip 2mm \noindent{\bf Acknowledgements}. 
The authors are greatly indebted to the referee for his/her careful reading of the first submitted version of this paper and for giving elaborate comments and valuable suggestions for revision, so that the presentation could be greatly improved. \normalsize\noindent \end{document}
\begin{document} \title[A note on poly-Bernoulli numbers and polynomials of the second kind ]{A note on poly-Bernoulli numbers and polynomials of the second kind } \author[T. Kim, S. H. Lee, and J. J. Seo]{Taekyun Kim, Sang Hun Lee, and Jong Jin Seo} \begin{abstract} In this paper, we consider the poly-Bernoulli numbers and polynomials of the second kind and present new and explicit formulae for calculating the poly-Bernoulli numbers of the second kind and the Stirling numbers of the second kind. \end{abstract} \thanks{*Corresponding Author: } \keywords{Bernoulli polynomials of the second kind, poly-Bernoulli numbers and polynomials, Stirling number of the second kind} \maketitle \section{Introduction} As is well known, the Bernoulli polynomials of the second kind are defined by the generating function to be \begin{equation} \frac{t}{\log(1+t)}(1+t)^x=\sum_{n=0}^{\infty}b_{n} (x)\frac{t^n}{n!}, \ \textnormal{(see [5,14,16])}. \end{equation} When $x=0$, $b_{n}=b_{n}(0)$ are called the Bernoulli numbers of the second kind. The first few Bernoulli numbers $b_{n}$ of the second kind are $b_{0}=1$, $b_{1}={1}/{2}$, $b_{2}=-{1}/{12}$, $b_{3}={1}/{24}$, $b_{4}=-{19}/{720}$, $b_{5}={3}/{160}, \cdots$. From (1), we have \begin{equation} b_{n}(x)=\sum_{l=0}^{n}\binom{n}{l}b_{l}\ (x)_{n-l}, \end{equation} where $(x)_{n}=x(x-1)\cdots(x-n+1)$, $(n\geqq0)$. The Stirling number of the second kind is defined by \begin{equation}\begin{split} x^{n}=\sum_{l=0}^{n}S_{2}(n,l)(x)_{l},\ (n\geqq0). \end{split}\end{equation} The ordinary Bernoulli polynomials are given by \begin{equation}\begin{split} \frac{t}{e^{t}-1}e^{xt}=\sum_{n=0}^{\infty}B_{n}(x)\frac{t^{n}}{n!},\ \textnormal{(see [1-18])}. \end{split}\end{equation} When $x=0$, $B_{n}=B_{n}(0)$ are called the Bernoulli numbers. It is known that the classical polylogarithmic function $Li_{k}(x)$ is given by \begin{equation}\begin{split} Li_{k}(x)=\sum_{n=1}^{\infty}\frac{x^{n}}{n^{k}}, (k\in \mathbb Z),\ \textnormal{(see [6,7,8])}. \end{split}\end{equation} For $k=1$, $Li_{1}(x)=\sum_{n=1}^{\infty}\frac{x^{n}}{n}=-\log(1-x)$. The Stirling number of the first kind is defined by \begin{equation}\begin{split} (x)_{n}=\sum_{l=0}^{n}S_{1}(n,l)x^{l}, (n\geq0),\ \textnormal{(see [15])}. \end{split}\end{equation} In this paper, we consider the poly-Bernoulli numbers and polynomials of the second kind and present new and explicit formulae for calculating the poly-Bernoulli numbers and polynomials and the Stirling numbers of the second kind. \section{Poly-Bernoulli numbers and polynomials of the second kind} For $k\in \mathbb Z$, we consider the poly-Bernoulli polynomials $b_{n}^{(k)}(x)$ of the second kind as follows: \begin{equation} \frac{Li_{k}(1-e^{-t})}{\log(1+t)}(1+t)^{x}=\sum_{n=0}^{\infty}b_{n}^{(k)}(x)\frac{t^{n}}{n!}. \end{equation} When $x=0$, $b_{n}^{(k)}=b_{n}^{(k)}(0)$ are called the poly-Bernoulli numbers of the second kind. Indeed, for $k=1$, we have \begin{equation} \frac{Li_{1}(1-e^{-t})}{\log(1+t)}(1+t)^{x}=\frac{t}{\log(1+t)}(1+t)^{x}=\sum_{n=0}^{\infty}b_{n}(x)\frac{t^{n}}{n!}. \end{equation} By (7) and (8), we get \begin{equation} b_{n}^{(1)}(x)=b_{n} (x),\ (n\geq 0).
\end{equation} It is known that \begin{equation} \frac{t(1+t)^{x-1}}{\log(1+t)}=\sum_{n=0}^{\infty}B_{n}^{(n)}(x)\frac{t^{n}}{n!}, \end{equation} where $B_{n}^{(\alpha)}(x)$ are the Bernoulli polynomials of order $\alpha$ which are given by the generating function to be \begin{equation*} \left(\frac{t}{e^{t}-1}\right)^{\alpha}e^{xt}=\sum_{n=0}^{\infty}B_{n}^{(\alpha)}(x)\frac{t^{n}}{n!},\ \textnormal{(see [1-18])}. \end{equation*} By (1) and (10), we get $$b_{n}(x)=B_{n}^{(n)}(x+1),\ (n\geq 0).$$ Now, we observe that \begin{equation}\begin{split} &\frac{Li_{k}(1-e^{-t})}{\log(1+t)}(1+t)^{x}\\ =&\sum_{n=0}^{\infty}b_{n}^{(k)}(x)\frac{t^{n}}{n!}\\ =&\frac{1}{\log(1+t)}\underbrace{\int_{0}^{t}\frac{1}{e^{x}-1}\int_{0}^{t}\frac{1}{e^{x}-1} \cdots\frac{1}{e^{x}-1}}_{k-1\ times}\int_{0}^{t}\frac{x}{e^{x}-1}dx\cdots dx(1+t)^{x}. \end{split}\end{equation} Thus, by (11), we get \begin{equation}\begin{split} \sum_{n=0}^{\infty}b_{n}^{(2)}(x)\frac{t^{n}}{n!}&=\frac{(1+t)^{x}}{\log(1+t)}\int_{0}^{t}\frac{x}{e^{x}-1}dx \\ &=\frac{(1+t)^{x}}{\log(1+t)}\sum_{l=0}^{\infty}\frac{B_{l}}{l!}\int_{0}^{t}x^{l}dx \\ &=\left(\frac{t}{\log(1+t)}\right)(1+t)^{x}\sum_{l=0}^{\infty}\frac{B_{l}}{(l+1)}\frac{t^{l}}{l!}\\ &=\sum_{n=0}^{\infty}\left\{\sum_{l=0}^{n}\binom{n}{l}\frac{B_{l}b_{n-l} (x)}{l+1}\right\}\frac{t^{n}}{n!}. \end{split}\end{equation} Therefore, by (12), we obtain the following theorem. \begin{theorem} For $n\geq 0$ we have \begin{equation*} b_{n}^{(2)}(x)=\sum_{l=0}^{n}\binom{n}{l}\frac{B_{l}b_{n-l} (x)}{l+1}. \end{equation*} \end{theorem} From (11), we have \begin{equation}\begin{split} \sum_{n=0}^{\infty}b_{n}^{(k)}(x)\frac{t^{n}}{n!}=&\frac{Li_{k}(1-e^{-t})}{\log(1+t)}(1+t)^{x}\\ =&\frac{t}{\log(1+t)}\frac{Li_{k}(1-e^{-t})}{t}(1+t)^{x}. \end{split}\end{equation} We observe that \begin{equation}\begin{split} \frac{1}{t}Li_{k}(1-e^{-t})&=\frac{1}{t}\sum_{n=1}^{\infty}\frac{1}{n^{k}}(1-e^{-t})^{n}\\ &=\frac{1}{t}\sum_{n=1}^{\infty}\frac{(-1)^{n}}{n^{k}}n!\sum_{l=n}^{\infty}S_{2}(l,n)\frac{(-t)^l}{l!}\\ &=\frac{1}{t}\sum_{l=1}^{\infty}\sum_{n=1}^{l}\frac{(-1)^{n+l}}{n^{k}}n!S_{2}(l,n)\frac{t^{l}}{l!}\\ &=\sum_{l=0}^{\infty}\sum_{n=1}^{l+1}\frac{(-1)^{n+l+1}}{n^{k}}n!\frac{S_{2}(l+1,n)}{l+1}\frac{t^{l}}{l!}. \end{split}\end{equation} Thus, by (10) and (14), we get \begin{equation}\begin{split} \sum_{n=0}^{\infty}b_{n}^{(k)}(x)\frac{t^{n}}{n!} &=\left(\sum_{m=0}^{\infty}b_{m}(x)\frac{t^{m}}{m!}\right) \left\{\sum_{l=0}^{\infty}\left(\sum_{p=1}^{l+1}\frac{(-1)^{p+l+1}}{p^{k}}p!\frac{S_{2}(l+1,p)}{l+1}\right)\frac{t^{l}}{l!}\right\}\\ &=\sum_{n=0}^{\infty}\left\{\sum_{l=0}^{n}\binom{n}{l}\left(\sum_{p=1}^{l+1}\frac{(-1)^{p+l+1}p!}{p^{k}}\frac{S_{2}(l+1,p)}{l+1}\right)b_{n-l} {(x)}\right\}\frac{t^{n}}{n!}. \end{split}\end{equation} Therefore, by (15), we obtain the following theorem. \begin{theorem} For $n\geq 0$, we have \begin{equation*} b_{n}^{(k)}(x)=\sum_{l=0}^{n}\binom{n}{l}\left(\sum_{p=1}^{l+1}\frac{(-1)^{p+l+1}}{p^{k}}p!\frac{S_{2}(l+1,p)}{l+1}\right)b_{n-l}(x). 
\end{equation*} \end{theorem} By (7), we get \begin{equation}\begin{split} \sum_{n=0}^{\infty}\left(b_{n}^{(k)}(x+1)-b_{n}^{(k)}(x)\right)\frac{t^{n}}{n!} =&\frac{Li_{k}(1-e^{-t})}{\log(1+t)}(1+t)^{x+1}-\frac{Li_{k}(1-e^{-t})}{\log(1+t)}(1+t)^{x}\\ =&\frac{tLi_{k}(1-e^{-t})}{\log(1+t)}(1+t)^{x}\\ =&\left(\frac{t}{\log(1+t)}(1+t)^{x}\right)Li_{k}(1-e^{-t})\\ =&\left(\sum_{l=0}^{\infty}\frac{b_{l}(x)}{l!}t^{l}\right)\left\{\sum_{p=1}^{\infty}\left(\sum_{m=1}^{p}\frac{(-1)^{m+p}m!}{m^{k}}S_{2}(p,m)\right)\frac{t^{p}}{p!}\right\}\\ =&\sum_{n=1}^{\infty}\left(\sum_{p=1}^{n}\sum_{m=1}^{p}\frac{(-1)^{m+p}}{m^{k}}m!S_{2}(p,m)\frac{b_{n-p}(x)n!}{(n-p)!p!}\right)\frac{t^{n}}{n!}\\ =&\sum_{n=1}^{\infty}\left\{\sum_{p=1}^{n}\sum_{m=1}^{p}\binom{n}{p}\frac{(-1)^{m+p}m!}{m^{k}}S_{2}(p,m)b_{n-p}(x)\right\}\frac{t^{n}}{n!}. \end{split}\end{equation} Therefore, by (16), we obtain the following theorem. \begin{theorem} For $n\geq 1$, we have \begin{equation*} b_{n}^{(k)}(x+1)-b_{n}^{(k)}(x)=\sum_{p=1}^{n}\sum_{m=1}^{p}\binom{n}{p}\frac{(-1)^{m+p}m!}{m^{k}}S_{2}(p,m)b_{n-p}(x). \end{equation*} \end{theorem} From (13), we have \begin{equation}\begin{split} \sum_{n=0}^{\infty}b_{n}^{(k)}(x+y)\frac{t^{n}}{n!}&=\frac{Li_{k}(1-e^{-t})}{\log(1+t)}(1+t)^{x+y} \\ &=\frac{Li_{k}(1-e^{-t})}{\log(1+t)}(1+t)^{x}(1+t)^{y} \\ &=\left(\sum_{l=0}^{\infty}b_{l}^{(k)}(x)\frac{t^{l}}{l!}\right)\left(\sum_{m=0}^{\infty}(y)_{m}\frac{t^{m}}{m!}\right) \\ &=\sum_{n=0}^{\infty}\left(\sum_{l=0}^{n}(y)_{l}b_{n-l}^{(k)}(x)\frac{n!}{(n-l)!l!}\right)\frac{t^{n}}{n!} \\ &=\sum_{n=0}^{\infty}\left(\sum_{l=0}^{n}\binom{n}{l}b_{n-l}^{(k)}(x)(y)_{l}\right)\frac{t^{n}}{n!}. \\ \end{split}\end{equation} Therefore, by (17), we obtain the following theorem. \begin{theorem} For $n\geq 0$, we have \begin{equation*} b_{n}^{(k)}(x+y)=\sum_{l=0}^{n}\binom{n}{l}b_{n-l}^{(k)}(x)(y)_{l}. \end{equation*} \end{theorem} {\hskip -1pc \bf Acknowledgements}. The present research has been conducted by the Research Grant of Kwangwoon University in 2014. \noindent \noun{T. Kim\\ Department of Mathematics\\ Kwangwoon University\\ Seoul 139-701, Republic of Korea}\\ \textnormal{e-mail: [email protected]} \vskip 1pc \noindent \noun{S. H. Lee\\ Division of General Education\\ Kwangwoon University\\ Seoul 139-701, Republic of Korea}\\ \textnormal{e-mail: [email protected]} \vskip 1pc \noindent \noun{J. J. Seo\\ Department of Applied Mathematics\\ Pukyong National University\\ Pusan 698-737, Republic of Korea}\\ \textnormal{e-mail: [email protected]} \end{document}
\begin{document} \title[Short Title]{Multi-qubit non-adiabatic holonomic controlled quantum gates in decoherence-free subspaces} \author{Shi Hu} \affiliation{Department of Physics, College of Science, Yanbian University, Yanji, Jilin 133002, People's Republic of China} \author{Wen-Xue Cui} \affiliation{Department of Physics, College of Science, Yanbian University, Yanji, Jilin 133002, People's Republic of China} \author{Qi Guo} \affiliation{College of Physics and Electronics Engineering, Shanxi University, Taiyuan 030006, People's Republic of China} \author{Hong-Fu Wang\footnote{E-mail: [email protected]}} \affiliation{Department of Physics, College of Science, Yanbian University, Yanji, Jilin 133002, People's Republic of China} \author{Ai-Dong Zhu} \affiliation{Department of Physics, College of Science, Yanbian University, Yanji, Jilin 133002, People's Republic of China} \author{Shou Zhang\footnote{E-mail: [email protected]}} \affiliation{Department of Physics, College of Science, Yanbian University, Yanji, Jilin 133002, People's Republic of China} \begin{abstract} Non-adiabatic holonomic quantum gates in decoherence-free subspaces are of great practical importance due to their built-in fault tolerance, coherence stabilization virtues, and short run-time. Here we propose some compact schemes to implement two- and three-qubit controlled unitary quantum gates and a Fredkin gate. For the controlled unitary quantum gates, the unitary operator acting on the target qubit is an arbitrary single-qubit gate operation. The controlled quantum gates can be directly implemented using non-adiabatic holonomy in decoherence-free subspaces, and the resource required for the decoherence-free subspace encoding is minimal: only two neighboring physical qubits undergoing collective dephasing are used to encode a logical qubit. \pacs {03.67.Lx, 03.67.Pp, 03.65.Vf} \keywords{multi-qubit controlled gate, quantum holonomy, decoherence-free subspace} \end{abstract} \maketitle \section{Introduction}\label{sec0} Based on quantum parallelism, quantum computation is believed to be able to speed up the solution of a number of mathematical tasks and has attracted more and more interest. The key step to implement effective quantum computation is the construction of robust quantum gates. Holonomic quantum computation (HQC), which was first proposed by Zanardi and Rasetti~\cite{PMPLA9994} based on adiabatic evolution, is regarded as a promising way to implement universal sets of robust gates. It can be robust against certain types of errors in the control process and has been used to realize robust quantum computation~\cite{JVAGN00403,LJP033,XM0187,SZPRL0391,LPDPRL0595,LZSPRA0674, XQZPRA0674,XCHCPRL09103,VMDPRA1081} by taking advantage of non-Abelian geometric phases~\cite{FAPRL8452}, which only depend on global geometric properties of the evolution paths. Unfortunately, however, the long run-time requirement for the desired parametric control associated with adiabatic evolution makes the quantum gates vulnerable to open system effects and parameter fluctuations that may lead to loss of coherence. In order to remove the problem of long run-time associated with the original form of HQC~\cite{PMPLA9994}, Sj\"{o}qvist {\it et al.} developed a non-adiabatic generalization of HQC~\cite{EDLBMKNJP1214} in which high-speed universal quantum gates can be implemented using non-adiabatic non-Abelian geometric phases~\cite{JPLA8813}.
Non-adiabatic HQC has also been experimentally demonstrated in different physical systems, such as a three-level transmon qubit~\cite{AJKMSASN13496}, a nuclear magnetic resonance (NMR) quantum information processor~\cite{GGGPRL13110}, and diamond nitrogen-vacancy centers~\cite{SASGNC145,CWLWCFLN14514}. Besides errors from the control of the quantum system, decoherence, arising from the inevitable interaction between the quantum system and its environment, is another main challenge in implementing robust quantum gates. Decoherence destroys the desired coherence of the system, so it is harmful to effective quantum computation. One of the promising strategies to avoid decoherence is the use of decoherence-free subspaces (DFSs), which utilize the symmetry structure of the system-environment interaction~\cite{DIK9881}. The basic idea of a DFS is that information encoded in it still undergoes unitary evolution even when the decoherence caused by the environment is taken into account. In addition, DFSs have been experimentally demonstrated in a host of physical systems~\cite{PAJAS00290,DVMCWCDS01291,MJKAPRL0391,JDLPRL0391, MMSCAHPRL0492}. Many efforts have been devoted to combining the fault tolerance of HQC and the quantum coherence stabilization virtues of DFSs~\cite{LPDPRL0595,LZSPRA0674,XQZPRA0674}. In 2005, Wu {\it et al.}~\cite{LPDPRL0595} implemented HQC in DFSs, which was robust against some stochastic errors and collective dephasing. However, the long run-time associated with the adiabatic control of the parameters and the use of four neighboring physical qubits undergoing collective dephasing to encode a logical qubit are big challenges in experiment. After that, Xu {\it et al.}~\cite{GJDELPRL12109} developed a non-adiabatic generalization of HQC in DFSs which could overcome the long run-time requirement of its adiabatic counterpart. Later, some other schemes for non-adiabatic HQC in DFSs in different physical systems have also been proposed~\cite{ZYWZHPRA1489,JWYZOE1523,ZJZPRA1592}. However, all the above schemes only focused on one- and two-qubit gates. As is well known, it becomes too complex to implement most algorithms as the number of qubits increases if only one- and two-qubit gates are available. The direct implementation of multiqubit gates, which is generally believed to provide a simpler design, a faster operation, and a lower decoherence, is thus of great practical importance. In this paper, inspired by the above works, we propose some compact schemes to implement non-adiabatic holonomic two- and three-qubit controlled unitary quantum gates and a Fredkin gate in DFSs. Here the unitary operator acting on the target qubit in the controlled unitary quantum gates can be an arbitrary single-qubit gate operation, obtained by varying the parameters independently. These controlled quantum gates can be directly implemented, which avoids the extra work of combining two gates into one. Furthermore, they are robust against certain types of errors in the control process and against the decoherence caused by the environment, and can be implemented at high speed. This is the first scheme for implementing three-qubit controlled quantum gates using non-adiabatic holonomy in DFSs. Moreover, an attractive feature of our schemes is that the resource cost for the DFS encoding is minimal, using only two neighboring physical qubits to encode a logical qubit. \section{QUANTUM HOLONOMY AND PHYSICAL MODEL}\label{sec1} We now briefly show how quantum holonomy can arise in non-adiabatic unitary evolution before introducing our physical model.
Consider a quantum system described by an $N$-dimensional state space and governed by Hamiltonian $H(t)$. Assume that there is a time-dependent $M$-dimensional subspace $S(t)$ spanned by the orthonormal basis vectors $\{|\psi_{m}(t)\rangle\}_{m=1}^{M}$. The evolution operator $\mathcal{U}(\tau,0)$ is a holonomic matrix acting on $S(0)$ spanned by $\{|\psi_{m}(0)\rangle\}_{m=1}^{M}$ if $|\psi_{m}(t)\rangle$ satisfies the following conditions~\cite{GJDELPRL12109}: \begin{eqnarray}\label{e01} (\mathrm{i})\sum_{m=1}^{M}|\psi_{m}(\tau)\rangle\langle\psi_{m}(\tau)| =\sum_{m=1}^{M}|\psi_{m}(0)\rangle\langle\psi_{m}(0)|, \end{eqnarray} \begin{eqnarray}\label{e02} ~(\mathrm{ii})~\langle\psi_{m}(t)|H(t)|\psi_{l}(t)\rangle=0,~~m,l=1,2,...,M, \end{eqnarray} where $\tau$ is the evolution period, $|\psi_{m}(t)\rangle=\mathcal{U}(t,0)|\psi_{m}(0)\rangle= \textbf{T}\mathrm{exp}(-i\int_{0}^{t}H(t')dt')|\psi_{m}(0)\rangle$, \textbf{T} is time ordering. Here condition (i) ensures that the evolution of subspace $S(0)$ is cyclic, while condition (ii) means that the evolution is purely geometric. In order to combine the fault tolerance of HQC and the quantum coherence stabilization virtues of DFSs, we consider the following physical model. The quantum system consists of $N$ physical qubits interacting collectively with a dephasing environment. The interaction between the quantum system and its environment is described by the interaction Hamiltonian \begin{eqnarray}\label{e03} H_{I}=\Big(\sum_{k=1}^{N}Z_{k}\Big)\otimes B, \end{eqnarray} where $Z_{k}$ is the Pauli $Z$ operator for the $k$th physical qubit and $B$ is an arbitrary environment operator. Due to the symmetry of the interaction we can find a DFS to protect quantum information against decoherence. For the simplest case, i.e., the number of physical qubits is two, there exists a DFS: \begin{eqnarray}\label{e04} S^{D}=\mathrm{Span}\{|01\rangle,|10\rangle\}. \end{eqnarray} We can use this subspace to encode a logical qubit, i.e., $|0\rangle_{L}=|01\rangle$, $|1\rangle_{L}=|10\rangle$, hereafter we use the subscript $L$ to denote logical states. Obviously, the resources cost for the DFS encoding is minimal by using only two neighboring physical qubits, which undergo collective dephasing to encode a logical qubit. In the following, we will use this encoding to implement controlled quantum gates. \section{TWO-QUBIT CONTROLLED UNITARY GATE}\label{sec2} In this section we demonstrate how to implement a non-adiabatic holonomic two-qubit controlled unitary gate, denoted as $C_{1}$-$U$ gate, in DFS. Here $U$ is an arbitrary single-qubit unitary gate operation acting on the target qubit, whose matrix form is given by \begin{eqnarray}\label{e05} U=\begin{pmatrix}u_{00}&u_{01}\\ u_{10}&u_{11} \end{pmatrix}. \end{eqnarray} To this end, we consider four physical qubits interacting collectively with the dephasing environment and there exists a six-dimensional DFS: \begin{eqnarray}\label{e06} S^{D_{1}}=\mathrm{Span}\Big\{|0101\rangle,|0110\rangle,|1001\rangle,|1010\rangle, |0011\rangle\,|1100\rangle\Big\}. \end{eqnarray} We encode logical qubits in the subspace \begin{eqnarray}\label{e07} S^{L_{1}}=\mathrm{Span}\Big\{|0101\rangle,|0110\rangle,|1001\rangle,|1010\rangle\Big\}, \end{eqnarray} where the logical qubit states are denoted as $|0\rangle_{L}|0\rangle_{L}=|0101\rangle$, $|0\rangle_{L}|1\rangle_{L}=|0110\rangle$, $|1\rangle_{L}|0\rangle_{L}=|1001\rangle$, and $|1\rangle_{L}|1\rangle_{L}=|1010\rangle$. 
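To see the decoherence-free property explicitly (a short verification spelled out here for the reader), note that every basis ket listed in Eq.~(\ref{e06}) contains exactly two $0$'s and two $1$'s, so that, for instance,
\begin{equation*}
\Big(\sum_{k=1}^{4}Z_{k}\Big)|0101\rangle=(1-1+1-1)|0101\rangle=0,
\end{equation*}
and likewise for the remaining five kets. Hence the system part of $H_{I}$ in Eq.~(\ref{e03}) vanishes identically on $S^{D_{1}}$, no entanglement with the environment operator $B$ is generated, and states encoded in $S^{D_{1}}$ (in particular in $S^{L_{1}}$) evolve unitarily under the controllable system Hamiltonian alone; the same counting argument applied to two physical qubits underlies the encoding of Eq.~(\ref{e04}).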
$S^{L_{1}}$ is a subspace of $S^{D_{1}}$ and the remaining vectors $|0011\rangle$ and $|1100\rangle$ are used as ancillary states, denoted as $|a_{1}\rangle=|0011\rangle$ and $|a_{2}\rangle=|1100\rangle$ for convenience. Under the basis $\{|0\rangle_{L}|0\rangle_{L}$, $|0\rangle_{L}|1\rangle_{L}$, $|1\rangle_{L}|0\rangle_{L}$, $|1\rangle_{L}|1\rangle_{L}\}$, the $C_{1}$-$U$ gate is written as~\cite{ACRDNPTJHPRA9552} \begin{eqnarray}\label{e08} C_1-U= \begin{pmatrix}~1~~&&0~~&&0~~&0~\\ ~0~~&&1~~&&0~~&0~\\ ~0~~&&0~~&&u_{00}~~&u_{01}~\\ ~0~~&&0~~&&u_{10}~~&u_{11}~\\ \end{pmatrix}. \end{eqnarray} In order to implement $C_{1}$-$U$ gate, we consider the following Hamiltonian \begin{eqnarray}\label{e09} H_{1}&=&\frac{1}{2}\bigg\{(I_{2}+Z_{2}) \Big[\Delta_{1}(I_{1}+Z_{1}) +(\Omega_{1}R^{x}_{13} +\Omega_{2}R^{x}_{14}+\mathrm{H.c.})\Big]\cr\cr&&+(I_{1}-Z_{1}) \Big[\Delta_{2}(I_{2}-Z_{2}) +(\Omega_{3}R^{x}_{23} +\Omega_{4}R^{x}_{24}+\mathrm{H.c.})\Big]\bigg\}, \end{eqnarray} where $R^{x}_{lm}=\dfrac{1}{4}(X_{l}-iY_{l})(X_{m}+iY_{m})$, $I$ is the one-qubit identity matrix, $X$, $Y$, and $Z$ are Pauli matrices acting on corresponding physical qubit, H.c. means Hermitian conjugate, and $\Delta_{i}$ and $\Omega_{i}$ are controllable coupling parameters, with \begin{eqnarray}\label{e10} \Delta_{1}&=&-\Omega\sin\xi, ~~~~~~~~~~~~~~~~~\Delta_{2}~=~-\Omega\sin\gamma,\cr\cr \Omega_{1}&=&\Omega\cos\xi\cos\frac{\alpha}{2}, ~~~~~~~~~~~~\Omega_{3}~=~-\Omega\cos\gamma\cos\frac{\alpha}{2},\cr\cr \Omega_{2}&=&\Omega e^{i\beta}\cos\xi\sin\frac{\alpha}{2}, ~~~~~~~~~\Omega_{4}~=~\Omega e^{i\beta}\cos\gamma\sin\frac{\alpha}{2}. \end{eqnarray} The Hamiltonian $H_{1}$ can be rewritten as \begin{eqnarray}\label{e11} H_{1}'&=&-2\Omega \big(\sin\xi|a_{1}\rangle\langle a_{1}| +\sin\gamma|a_{2}\rangle\langle a_{2}|\big)\cr\cr &&+\Omega\big(\cos\xi|1\rangle_{L}|+\rangle_{L}\langle a_{1}| +\cos\gamma|1\rangle_{L}|-\rangle_{L}\langle a_{2}|+\mathrm{H.c.}\big), \end{eqnarray} where we have used two orthogonal states $|+\rangle_{L}=\cos\dfrac{\alpha}{2}|0\rangle_{L} +e^{i\beta}\sin\dfrac{\alpha}{2}|1\rangle_{L}$ and $|-\rangle_{L}=e^{-i\beta}\sin\dfrac{\alpha}{2}|0\rangle_{L} -\cos\dfrac{\alpha}{2}|1\rangle_{L}$. The subspace spanned by $\{|+\rangle_{L}$, $|-\rangle_{L}\}$ is the same as that by $\{|0\rangle_{L}$, $|1\rangle_{L}\}$. The evolution operator associated with $H_{1}$ is $\mathcal{U}_{1}(t)=e^{-iH_{1}t}$. With the choice of $\Omega\tau_{1}=\pi$, the resulting evolution operator is given by \begin{eqnarray}\label{e12} \mathcal{U}_{1}(\tau_{1})= \left(\begin{array}{cccccc} e^{i(\delta-\frac{\theta}{2})}&0&~~~0~~~&~~~0~~~&0&0\\ 0&e^{i(\delta+\frac{\theta}{2})}&0&0&0&0\\ 0&0&1&0&0&0\\ 0&0&0&1&0&0\\ 0&0&0&0&e^{i(\delta-\frac{\theta}{2})}&0\\ 0&0&0&0&0&e^{i(\delta+\frac{\theta}{2})} \end{array}\right), \end{eqnarray} in the basis $\{|a_{1}\rangle,$ $|a_{2}\rangle,$ $|0\rangle_{L}|+\rangle_{L},$ $|0\rangle_{L}|-\rangle_{L},$ $|1\rangle_{L}|+\rangle_{L},$ $|1\rangle_{L}|-\rangle_{L}\}$, where $\delta-\theta/2=\pi+\pi\sin\xi$ and $\delta+\theta/2=\pi+\pi\sin\gamma$. Since the parameters $\xi$ and $\gamma$ are mutually independent, we can vary the parameters $\delta$ and $\theta$ independently. 
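To make the structure of $\mathcal{U}_{1}(\tau_{1})$ concrete, the following minimal numerical sketch (Python/NumPy/SciPy; the values of $\xi$ and $\gamma$ are illustrative) builds $H_{1}'$ of Eq.~({\ref{e11}}) in the ordered basis $\{|a_{1}\rangle,|a_{2}\rangle,|0\rangle_{L}|+\rangle_{L},|0\rangle_{L}|-\rangle_{L},|1\rangle_{L}|+\rangle_{L},|1\rangle_{L}|-\rangle_{L}\}$ and confirms that, for $\Omega\tau_{1}=\pi$, the evolution is diagonal with exactly the phases quoted above:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

Omega, xi, gamma = 1.0, 0.7, 0.3           # Omega sets the time scale; xi, gamma illustrative
tau = np.pi / Omega                        # evolution time with Omega*tau = pi

# H1' in the basis {|a1>, |a2>, |0+>, |0->, |1+>, |1->}
H = np.zeros((6, 6))
H[0, 0] = -2 * Omega * np.sin(xi)          # -2 Omega sin(xi) |a1><a1|
H[1, 1] = -2 * Omega * np.sin(gamma)       # -2 Omega sin(gamma) |a2><a2|
H[0, 4] = H[4, 0] = Omega * np.cos(xi)     # Omega cos(xi) (|1+><a1| + h.c.)
H[1, 5] = H[5, 1] = Omega * np.cos(gamma)  # Omega cos(gamma) (|1-><a2| + h.c.)

U = expm(-1j * H * tau)

# Expected phases: delta - theta/2 = pi(1 + sin(xi)), delta + theta/2 = pi(1 + sin(gamma))
phases = [np.pi * (1 + np.sin(xi)), np.pi * (1 + np.sin(gamma)), 0.0, 0.0,
          np.pi * (1 + np.sin(xi)), np.pi * (1 + np.sin(gamma))]
print(np.allclose(U, np.diag(np.exp(1j * np.array(phases)))))  # True
\end{verbatim}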
Therefore, for the states in the logical subspace $S^{L_{1}}$, the action of the evolution operator $\mathcal{U}_{1}(\tau_{1})$ is equivalent to the $C_{1}$-$U$ gate, and the single-qubit unitary gate operation $U$ is written as
\begin{eqnarray}\label{e13} U=e^{i(\delta-\frac{\theta}{2})|+\rangle_{L}\langle +|+i(\delta+\frac{\theta}{2})|-\rangle_{L}\langle -|}. \end{eqnarray}
In the basis $\{|0\rangle_{L}, |1\rangle_{L}\}$, defining the Pauli operators as $\sigma_{x}=|0\rangle_{L}\langle1|+|1\rangle_{L}\langle0| $, $\sigma_{y}=-i|0\rangle_{L}\langle1|+i|1\rangle_{L}\langle0| $, and $\sigma_{z}=|0\rangle_{L}\langle0|-|1\rangle_{L}\langle1|$, $U$ can be rewritten as
\begin{eqnarray}\label{e14} U=\mathrm{exp}(i\delta)R_{\hat{n}}(\theta),~~~~ R_{\hat{n}}(\theta)=\mathrm{exp}\left(-i\frac{\theta}{2}\hat{n}\cdot\bm{\sigma}\right), \end{eqnarray}
with $\bm{\sigma}=(\sigma_{x},\sigma_{y},\sigma_{z})$ and the unit vector $\hat{n}=(\sin\alpha\cos\beta, \sin\alpha\sin\beta, \cos\alpha)$. Here, $R_{\hat{n}}(\theta)$ represents a single-qubit rotation around the direction $\hat{n}$ by an angle $\theta$. Thus $U$ corresponds to an arbitrary single-qubit gate operation, obtained by varying the parameters $\delta$, $\theta$, $\alpha$, and $\beta$ independently~\cite{MIQCQI}. In particular, when setting $\delta=\theta/2=\alpha=\pi/2$ ($\xi=\pi$, $\gamma=0$, $\alpha=\pi/2$) and $\beta=0$, we can implement a two-qubit controlled-NOT (CNOT) gate. Since $S^{D_{1}}$ is an invariant subspace of the evolution operator, $\mathcal{U}_{1}(\tau_{1})$ has the decoherence-free property. Next, we use conditions $(\mathrm{i})$ and $(\mathrm{ii})$ to check that $\mathcal{U}_{1}(\tau_{1})$ is a holonomic matrix acting on $S^{L_{1}}$. For condition $(\mathrm{i})$, the subspace spanned by $\{\mathcal{U}_{1}(\tau_{1})|0\rangle_{L}|0\rangle_{L},$ $\mathcal{U}_{1}(\tau_{1})|0\rangle_{L}|1\rangle_{L},$ $\mathcal{U}_{1}(\tau_{1})|1\rangle_{L}|0\rangle_{L},$ $\mathcal{U}_{1}(\tau_{1})|1\rangle_{L}|1\rangle_{L}\}$ coincides with $S^{L_{1}}$, so condition $(\mathrm{i})$ is satisfied. For condition $(\mathrm{ii})$, since $\mathcal{U}_{1}(t)$ commutes with $H_{1}$, the condition reduces to $\langle k|H_{1}|k^{'}\rangle=0$, where $|k\rangle, |k^{'}\rangle\in \{|0\rangle_{L}|0\rangle_{L}, |0\rangle_{L}|1\rangle_{L}, |1\rangle_{L}|0\rangle_{L}, |1\rangle_{L}|1\rangle_{L}\}$. From Eq.~({\ref{e11}}), it is easy to see that condition $(\mathrm{ii})$ is also satisfied. Therefore, $\mathcal{U}_{1}(\tau_{1})$ is a holonomic matrix acting on $S^{L_{1}}$ with the decoherence-free property. Through the above analysis, a non-adiabatic holonomic $C_{1}$-$U$ gate, in which $U$ is an arbitrary single-qubit gate operation, has been implemented directly in a DFS using only two- and three-body interactions. It is worth pointing out that one needs a four-body interaction~\cite{GJDELPRL12109} or the combination of a single-qubit gate and a nontrivial two-qubit gate~\cite{JWYZOE1523} to implement a non-adiabatic holonomic CNOT gate in a DFS.
\section{THREE-QUBIT CONTROLLED UNITARY GATE}\label{sec1}
It is well known that by using two CNOT gates, two $C_{1}$-$V$ gates $(V^{2}=U)$, and a $C_{1}$-$V^{\dag}$ gate, one can obtain a three-qubit controlled unitary gate with two control qubits and a unitary operator $U$ acting on a target qubit, which is denoted as the $C_{2}$-$U$ gate~\cite{ACRDNPTJHPRA9552}. Obviously, this combination is rather complex, and it is more desirable to implement the $C_{2}$-$U$ gate directly.
In this section we will show how to implement the $C_{2}$-$U$ gate directly in DFS. To this end, we need six physical qubits interacting collectively with the dephasing environment to construct a ten-dimensional DFS: \begin{eqnarray}\label{e15} S^{D_{2}}=\mathrm{Span}\Big\{|010101\rangle,|010110\rangle, |011001\rangle,|011010\rangle,|100101\rangle,~\cr\cr |100110\rangle,|101001\rangle,|101010\rangle, |100011\rangle,|101100\rangle\Big\}. \end{eqnarray} Similar to the case of $C_{1}$-{\it U} gate, we encode logical qubits in the subspace \begin{eqnarray}\label{e16} S^{L_{2}}=\mathrm{Span}\Big\{|010101\rangle,|010110\rangle, |011001\rangle,|011010\rangle,~\cr\cr |100101\rangle,|100110\rangle,|101001\rangle,|101010\rangle\Big\}, \end{eqnarray} and the logical qubit states are denoted as \begin{align}\label{e17}\notag |0\rangle_{L}|0\rangle_{L}|0\rangle_{L}&=|010101\rangle, &|0\rangle_{L}|0\rangle_{L}|1\rangle_{L}&=|010110\rangle, \end{align} \begin{align}\notag |0\rangle_{L}|1\rangle_{L}|0\rangle_{L}&=|011001\rangle, &|0\rangle_{L}|1\rangle_{L}|1\rangle_{L}&=|011010\rangle, \end{align} \begin{align}\notag |1\rangle_{L}|0\rangle_{L}|0\rangle_{L}&=|100101\rangle, &|1\rangle_{L}|0\rangle_{L}|1\rangle_{L}&=|100110\rangle, \end{align} \begin{align} |1\rangle_{L}|1\rangle_{L}|0\rangle_{L}&=|101001\rangle, &|1\rangle_{L}|1\rangle_{L}|1\rangle_{L}&=|101010\rangle. \end{align} In the case of three-qubit $C_{2}$-$U$ gate, we also use only two neighboring physical qubits to encode a logical qubit and $|a_{3}\rangle=|100011\rangle$ and $|a_{4}\rangle=|101100\rangle$ are as ancillary states. The Hamiltonian $H_{2}$ for implementing the $C_{2}$-$U$ gate is \begin{eqnarray}\label{e18} H_{2}&=&\frac{1}{4}\bigg\{(I_{1}-Z_{1})(I_{4}+Z_{4}) \Big[\Delta_{1}(I_{3}+Z_{3}) +(\Omega_{1}R^{x}_{35} +\Omega_{2}R^{x}_{36}+\mathrm{H.c.})\Big]\cr\cr &&+(I_{1}-Z_{1})(I_{3}-Z_{3}) \Big[\Delta_{2}(I_{4}-Z_{4}) +(\Omega_{3}R^{x}_{45} +\Omega_{4}R^{x}_{46}+\mathrm{H.c.})\Big]\bigg\}\cr\cr &=&\Big[2\Delta_{1}|a_{3}\rangle\langle a_{3}|+(\Omega_{1}|1\rangle_{L}|1\rangle_{L}|0\rangle_{L}\langle a_{3}|+\Omega_{2}|1\rangle_{L}|1\rangle_{L}|1\rangle_{L}\langle a_{3}|+\mathrm{H.c.})\cr\cr &&+2\Delta_{2}|a_{4}\rangle\langle a_{4}|+(\Omega_{3}|a_{4}\rangle_{L}\langle 1|_{L}\langle1|_{L}\langle1|+\Omega_{4}|a_{4}\rangle_{L}\langle 1|_{L}\langle1|_{L}\langle0|+\mathrm{H.c.})\Big], \end{eqnarray} where the controllable coupling parameters are chosen the same as in the case of $C_{1}$-$U$ gate (see Eq.~({\ref{e10}})). In this way the Hamiltonian in Eq.~({\ref{e18}}) can be rewritten as \begin{eqnarray}\label{e19} H_{2}'&=&-2\Omega(\sin\xi|a_{3}\rangle\langle a_{3}| +\sin\gamma|a_{4}\rangle\langle a_{4}|)\cr\cr &&+\Omega(\cos\xi|1\rangle_{L}|1\rangle_{L}|+\rangle_{L}\langle a_{3}| +\cos\gamma|1\rangle_{L}|1\rangle_{L}|-\rangle_{L}\langle a_{4}|+\mathrm{H.c.}). \end{eqnarray} The Hamiltonian $H_{2}'$ has the same structure as $H_{1}'$ and the states $|+\rangle_{L}$ and $|-\rangle_{L}$ are the same as that in Eq.~({\ref{e11}}). 
Similarly to the case of the $C_{1}$-$U$ gate, it is easy to obtain the evolution operator associated with $H_{2}$ in the basis $\{|0\rangle_{L}|0\rangle_{L}|0\rangle_{L},$ $|0\rangle_{L}|0\rangle_{L}|1\rangle_{L},$ $|0\rangle_{L}|1\rangle_{L}|0\rangle_{L},$ $|0\rangle_{L}|1\rangle_{L}|1\rangle_{L},$ $|1\rangle_{L}|0\rangle_{L}|0\rangle_{L},$ $|1\rangle_{L}|0\rangle_{L}|1\rangle_{L},$ $|1\rangle_{L}|1\rangle_{L}|0\rangle_{L},$ $|1\rangle_{L}|1\rangle_{L}|1\rangle_{L}\}$,
\begin{eqnarray}\label{e20} \mathcal{U}_{2}(\tau_{2})=\mathrm{Diag}\left[1,1,1,1,1,1,U\right], \end{eqnarray}
with the evolution time satisfying $\Omega\tau_{2}=\pi$. From Eq.~({\ref{e20}}), one can easily find that $\mathcal{U}_{2}(\tau_{2})$ acts as a $C_{2}$-$U$ gate on the states of $S^{L_{2}}$, where $U$ is given by Eq.~({\ref{e05}}). A Toffoli gate, which performs a NOT operation on the target qubit or not depending on the states of the two control qubits~\cite{ETIJTP8221}, is an important $C_{2}$-$U$ gate. In principle, constructing a Toffoli gate from one- and two-qubit gates requires at least six CNOT gates~\cite{VIQIC099}. Here, the Toffoli gate can be implemented directly by using the same parameters as in the CNOT case. The decoherence-free and holonomy properties of the gate can be verified easily; since the verification exactly parallels that for the $C_{1}$-$U$ gate discussed in the last section, we do not present it here. Now we turn to the implementation of a Fredkin gate, which is another important three-qubit controlled gate that performs a swap operation on the two target qubits or not, depending on the state of the control qubit. In order to achieve the Fredkin gate we consider the following Hamiltonian
\begin{eqnarray}\label{e21} H_{3}&=&\frac{1}{2\sqrt{2}}\eta(I_{1}-Z_{1})(R^{x}_{35} -R^{x}_{46}+\mathrm{H.c.})\cr\cr &=&\eta\frac{1}{\sqrt{2}}(|1\rangle_{L}|1\rangle_{L}|0\rangle_{L}\langle a_{3}|+|1\rangle_{L}|0\rangle_{L}|1\rangle_{L}\langle a_{4}|\cr\cr &&-|1\rangle_{L}|0\rangle_{L}|1\rangle_{L}\langle a_{3}|-|1\rangle_{L}|1\rangle_{L}|0\rangle_{L}\langle a_{4}|+\mathrm{H.c.})\cr\cr &=&\eta(|1\rangle_{L}|1\rangle_{L}|0\rangle_{L} -|1\rangle_{L}|0\rangle_{L}|1\rangle_{L})\langle a_{-}|+\mathrm{H.c.}, \end{eqnarray}
where $\eta$ is a controllable coupling parameter and $|a_{-}\rangle=\dfrac{1}{\sqrt{2}}(|a_{3}\rangle-|a_{4}\rangle)$. Here the encoding is the same as that used for the $C_{2}$-$U$ gate (see Eq.~({\ref{e18}})). The Hamiltonian $H_{3}$ has a $\Lambda$-type structure, with the ancillary state $|a_{-}\rangle$ at the top and the logical qubit states $|1\rangle_{L}|1\rangle_{L}|0\rangle_{L}$ and $|1\rangle_{L}|0\rangle_{L}|1\rangle_{L}$ at the bottom. The state orthogonal to $|a_{-}\rangle$ is denoted as $|a_{+}\rangle=\dfrac{1}{\sqrt{2}}(|a_{3}\rangle+|a_{4}\rangle)$, and it decouples from the evolution of the system. The subspace spanned by $\{|a_{+}\rangle$, $|a_{-}\rangle\}$ is the same as that spanned by $\{|a_{3}\rangle$, $|a_{4}\rangle\}$.
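Before presenting the resulting gate explicitly, the $\Lambda$-type structure can be checked with a minimal numerical sketch (Python/NumPy/SciPy; purely illustrative, with $\eta$ setting the time scale). Restricted to the subspace spanned by $\{|1\rangle_{L}|1\rangle_{L}|0\rangle_{L}, |1\rangle_{L}|0\rangle_{L}|1\rangle_{L}, |a_{-}\rangle\}$, evolution under $H_{3}$ for a time $\tau$ with $\eta\tau=\pi/\sqrt{2}$ exchanges the two logical states, which is exactly the swap exploited below:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

eta = 1.0                          # illustrative coupling strength
tau = np.pi / (np.sqrt(2) * eta)   # evolution time with eta*tau = pi/sqrt(2)

# H3 restricted to {|110>_L, |101>_L, |a_->}; the orthogonal state |a_+> decouples:
# H3 = eta (|110>_L - |101>_L)<a_-| + h.c.
H = eta * np.array([[0,  0,  1],
                    [0,  0, -1],
                    [1, -1,  0]], dtype=complex)

U = expm(-1j * H * tau)
print(np.round(U.real, 6))
# Approximately [[0, 1, 0], [1, 0, 0], [0, 0, -1]]: the logical states |110>_L and
# |101>_L are swapped, while the (unpopulated) ancillary state only acquires a sign.
\end{verbatim}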
When the evolution time $\tau_{3}$ satisfies $\eta\tau_{3}=\pi/\sqrt{2}$, the resulting evolution operator in the basis $\{|0\rangle_{L}|0\rangle_{L}|0\rangle_{L},$ $|0\rangle_{L}|0\rangle_{L}|1\rangle_{L},$ $|0\rangle_{L}|1\rangle_{L}|0\rangle_{L},$ $|0\rangle_{L}|1\rangle_{L}|1\rangle_{L},$ $|1\rangle_{L}|0\rangle_{L}|0\rangle_{L},$ $|1\rangle_{L}|0\rangle_{L}|1\rangle_{L},$ $|1\rangle_{L}|1\rangle_{L}|0\rangle_{L},$ $|1\rangle_{L}|1\rangle_{L}|1\rangle_{L}\}$ is given by
\begin{eqnarray}\label{e22} \mathcal{U}_{3}(\tau_{3})=\mathrm{Diag}\left[1,1,1,1, \left(\begin{array}{cccc} 1~&~0~&~0~&~0\\ 0~&~0~&~1~&~0\\ 0~&~1~&~0~&~0\\ 0~&~0~&~0~&~1\end{array}\right)\right]. \end{eqnarray}
One can see from Eq.~({\ref{e22}}) that $\mathcal{U}_{3}(\tau_{3})$ acts as a Fredkin gate on the states in the logical subspace $S^{L_{2}}$, and its decoherence-free and holonomy properties can be demonstrated easily. In this way we implement a non-adiabatic holonomic three-qubit Fredkin gate in a DFS with three-body interactions.
\section{Discussion and Conclusions}\label{sec3}
So far, we have succeeded in constructing the $C_{1}$-$U$, $C_{2}$-$U$, and Fredkin gates. We now introduce a few concepts from differential geometry to understand the nature of the above holonomic gates. The set of $K$-dimensional subspaces of an $N$-dimensional Hilbert space is a Grassmann manifold $G(N; K)$. A closed path $\mathcal{C}$ of $K$-dimensional subspaces is a loop in $G(N; K)$. We now consider the holonomic gates described above. The $C_{1}$-$U$, $C_{2}$-$U$, and Fredkin gates are associated with loops in $G(4; 2)$~\cite{VCENJP1416}, where the Hilbert spaces relevant for the holonomy are spanned by $\{|a_{1}\rangle, |a_{2}\rangle, |1\rangle_{L}|0\rangle_{L}, |1\rangle_{L}|1\rangle_{L}\}$, $\{|a_{3}\rangle, |a_{4}\rangle, |1\rangle_{L}|1\rangle_{L}|0\rangle_{L}, |1\rangle_{L}|1\rangle_{L}|1\rangle_{L}\}$, and $\{|a_{3}\rangle, |a_{4}\rangle, |1\rangle_{L}|0\rangle_{L}|1\rangle_{L}, |1\rangle_{L}|1\rangle_{L}|0\rangle_{L}\}$, respectively. By contrast, most previous schemes were associated with loops in $G(3; 2)$~\cite{EDLBMKNJP1214}. It is worth noting that the schemes proposed here can be generalized. For the $C_{1}$-$U$ gate between the $m$th and the $n$th logical qubits, the Hamiltonian has the same structure as $H_{1}$ but with the replacements $R^{x}_{13}\rightarrow R^{x}_{2m-1,2n-1}$, $R^{x}_{14}\rightarrow R^{x}_{2m-1,2n}$, $R^{x}_{23}\rightarrow R^{x}_{2m,2n-1}$, $R^{x}_{24}\rightarrow R^{x}_{2m,2n}$, $(I_{1}+Z_{1})\rightarrow (I_{2m-1}+Z_{2m-1})$, and $(I_{2}+Z_{2})\rightarrow (I_{2m}+Z_{2m})$. For the $C_{2}$-$U$ gate between the $m$th, $n$th, and $l$th logical qubits, the Hamiltonian has the same structure as $H_{2}$ but with the replacements $R^{x}_{35}\rightarrow R^{x}_{2n-1,2l-1}$, $R^{x}_{36}\rightarrow R^{x}_{2n-1,2l}$, $R^{x}_{45}\rightarrow R^{x}_{2n,2l-1}$, $R^{x}_{46}\rightarrow R^{x}_{2n,2l}$, $(I_{1}-Z_{1})\rightarrow (I_{2m-1}-Z_{2m-1})$, $(I_{3}+Z_{3})\rightarrow (I_{2n-1}+Z_{2n-1})$, and $(I_{4}+Z_{4})\rightarrow (I_{2n}+Z_{2n})$. Finally, for the Fredkin gate between the $m$th, $n$th, and $l$th logical qubits, the Hamiltonian has the same structure as $H_{3}$ but with the replacements $R^{x}_{35}\rightarrow R^{x}_{2n-1,2l-1}$, $R^{x}_{46}\rightarrow R^{x}_{2n,2l}$, and $(I_{1}-Z_{1})\rightarrow (I_{2m-1}-Z_{2m-1})$. In conclusion, we have proposed schemes for implementing $C_{1}$-{\it U}, $C_{2}$-{\it U}, and Fredkin gates directly by using non-adiabatic holonomy in DFSs.
Our schemes combine the coherence-stabilization virtues of DFSs with the built-in fault tolerance of holonomic control. These gate operations can be implemented at high speed, which avoids the extra errors and decoherence incurred in the adiabatic case due to the long evolution time. Moreover, the resource cost of the DFS encoding is minimal, since only two neighboring physical qubits undergoing collective dephasing are used to encode each logical qubit.
\begin{center} {\small {\bf ACKNOWLEDGMENTS}} \end{center}
This work was supported by the National Natural Science Foundation of China under Grant Nos. 11264042, 11465020, 61465013, 11165015, and 11564041.
\end{document}
\begin{document} \title[Coherent control and feedback cooling in an atom--optomechanical system]{Coherent control and feedback cooling in a remotely-coupled hybrid atom--optomechanical system} \author{James S. Bennett, Lars S. Madsen, Mark Baker, Halina Rubinsztein-Dunlop and Warwick P. Bowen} \address{Australian Research Council Centre of Excellence for Engineered Quantum Systems (EQuS), The University of Queensland, St Lucia, QLD 4072, Australia} \ead{[email protected]} \begin{abstract} Cooling to the motional ground state is an important first step in the preparation of nonclassical states of mesoscopic mechanical oscillators. Light-mediated coupling to a remote atomic ensemble has been proposed as a method to reach the ground state for low frequency oscillators. The ground state can also be reached using optical measurement followed by feedback control. Here we investigate the possibility of enhanced cooling by combining these two approaches. The combination, in general, outperforms either individual technique, though atomic ensemble-based cooling and feedback cooling each individually dominate over large regions of parameter space. \end{abstract} \maketitle \section{Introduction} \label{Sec:Introduction} Preparation of mesoscopic mechanical devices in high-purity nonclassical states is a long-standing goal of the opto- and electromechanical communities. In addition to promising applications in fundamental physics research---such as gravitational effects in quantum mechanics and the quantum-to-classical transition \cite{Treutlein2012,Chen2013}---mechanical devices provide outstanding opportunities for metrology \cite{Giovannetti2004,Milburn2011,Serafini2012} and emerging quantum technologies \cite{Tsukanov2011,Muschik2011,Stannigel2012}. The majority of mechanical quantum state preparation and verification schemes require that the oscillator be initialised very near to the motional ground state \cite{Kleckner2008,Rocheleau2010}. Though this has been achieved in the gigahertz and megahertz regimes \cite{OConnell2010,Teufel2011,Chan2011}, progress toward cooling low-frequency ($\omega_{m}$) oscillators has been inhibited by the lack of both cryogenic systems capable of ‘freezing out’ their thermal motion and sufficiently high quality optical cavities to achieve the good cavity (cavity linewidth $\kappa \ll \omega_{m}$) regime required for resolved-sideband cooling. Both remote coupling to the motional state of a cooled atomic ensemble \cite{Hammerer2010, Camerer2011, Vogell2013} and optical measurement followed by feedback control \cite{Mancini1998,Courty2001,Vitali2002,Hopkins2003,Genes2008a, Doherty2012} have been suggested as alternative approaches to cooling that, in principle, overcome these limitations and allow ground state cooling in the bad cavity limit ($\kappa \gg \omega_{m}$). In remote atomic ensemble-based cooling, or \textit{sympathetic cooling}, light mediates a swap between the centre of mass motional state of the ensemble and a mode of the mechanical oscillator, which may be separated from the former by a macroscopic distance ($\sim 1$ m) \cite{Hammerer2010,Camerer2011,Vogell2013}. Proposed remotely-coupled atom--optomechanical systems are theoretically capable of sympathetically cooling their mechanical elements to near the ground state. 
The directional flow of quantum information from the mechanical element---the `plant', in control parlance---to the atoms and back allows us to identify the atoms as an irreversible coherent controller (\textit{cf}.{} \cite{Jacobs2014}, pg. 3, and \cite{Lloyd2000,Hamerly2012}). The controller is necessarily imperfect, as the phase of the output optical field retains knowledge of the mechanical position. The resulting back-action on the momentum hinders the cooling process. In this article we investigate combining sympathetic cooling with feedback damping based on a phase measurement of the output field (\textit{cf}.{} Fig.~\ref{Fig:ModelGeometry}). The latter retrieves information from the optical field and allows suppression of decoherence. We derive an analytical mechanical power spectrum for such a system, from which the steady-state temperature is found. This reveals a set of criteria specifying when near-ground-state temperatures may be achieved. In general the combined scheme outperforms both individual methods; however, when the optomechanical cooperativity is sufficiently large the cooling is dominated by feedback. Conversely, one may still approach the mechanical ground state with weak optomechanical cooperativity provided that its ensemble--light counterpart (\textit{cf}.{} Eqn~\eqref{Eqn:AtomCooperativity}) is appropriately large. These statements are made quantitative in $\S$~\ref{Sec:AtomsAlone}. We also clarify the role of atom--light and optomechanical interactions in performing the atom $\leftrightarrow$ mechanical state swap which underpins sympathetic cooling. In particular, and somewhat counter-intuitively, this swap is possible even when the optomechanical cooperativity is insufficient to permit a complete state transfer between the oscillator and the optical field.
\begin{figure} \caption{\label{Fig:ModelGeometry} Schematic of the remotely-coupled hybrid atom--optomechanical system with measurement-based feedback: a ring resonator containing the atomic cloud and a one-sided optomechanical cavity are connected by an optical transmission line, and the phase of the output field is detected for feedback (image not reproduced).} \end{figure}
\section{Model} \label{Sec:Theory}
Consider the device depicted in Fig.~\ref{Fig:ModelGeometry}, in which a ring resonator, containing an atomic cloud, and a one-sided optomechanical cavity are coupled by a lossless optical transmission line. An optical lattice, formed by interference of the counter-propagating cavity modes, traps the atoms in approximately harmonic wells (\textit{cf}.{} $\S$~\ref{Sec:AtomLightInteraction}). The atomic centre of mass and the optically-coupled oscillator therefore comprise two harmonic mechanical degrees of freedom. This class of hybrid atom--optomechanical system (\textit{e.g.}{} \cite{Hammerer2010,Camerer2011,Vogell2013} and \cite{Hammerer2009EPR}) is desirable from an immediate experimental perspective, where its key advantage is circumventing the need for close integration of cryogenic and ultra-high vacuum apparatus, and within the context of future quantum networks, where atomic and solid-state processing and memory nodes are anticipated to be interfaced via{} optical photons \cite{Kimble2008}. Displacement of the atoms relative to the lattice results in an exchange of photons between the left- and right-going optical modes, which modulates the optical power incident upon the mechanical device. Conversely, changing the position of the latter alters the phase of the reflected field (\textit{cf}.{} $\S$~\ref{Sec:OptomechanicalInteraction}), causing axial translation of the optical lattice. In this way each oscillator is subject to a force which depends on the position of its counterpart \cite{Hammerer2010}.
Our model of the atom--mechanical coupling is closely related to that proposed by \cite{Vogell2013}. The primary difference, besides the inclusion of the external measurement-based feedback loop, is the addition of a second optical cavity into which the atoms are loaded. The cavity allows us to employ an intuitive two-mode description of the atom--light interaction and may be experimentally desirable in certain circumstances (\textit{cf}.{} $\S$~\ref{Sec:EffectiveDynamics}). \subsection{\textbf{Atom--light interaction}} \label{Sec:AtomLightInteraction} The heart of the atom trapping apparatus is an optical dipole trap \cite{Grimm2000} which uses the AC Stark effect to confine $N$ identical two-level atoms of mass $m$. This trap is incorporated into a ring resonator of quality $Q_{AC}$ which is driven by an input field $\baop{a}_{d}$, with a large coherent amplitude $\alpha_{d} = \expect{\baop{a}_{d}}$ at (angular) frequency $\omega_{d}$. Neglecting internal losses, the linewidth of this cavity is $\kappa_{AC} \approx \omega_{d}/Q_{AC}$ (\textit{i.e.}{} the resonator is strongly overcoupled to the transmission line). We take the majority of the cavity's modes to be far off resonance with the drive beam, leaving two relevant counter-propagating modes which are lowered by the operators $\baop{a}_{L}$ and $\baop{a}_{R}$, with the subscripts referring to the handedness of circulation depicted in Fig.~\ref{Fig:ModelGeometry}. In practice additional field modes or semiclassical potentials may be required to stably confine the atoms in the plane transverse to the counter-propagating modes introduced above. We will neglect all noise introduced by these fields/potentials in the analysis that follows on the grounds that, to harmonic order, motion in the transverse directions is decoupled from the axial motion. The Hamiltonian governing the internal (axial) dynamics of the cavity---in a frame rotating at the drive beam frequency $\omega_{d}$---is then \begin{equation*} \hat{H}_{AC} = -\hbar\Delta_{AC}\Br{\bcop{a}_{R}\baop{a}_{R} + \bcop{a}_{L}\baop{a}_{L}} +\hat{H}_{SS} +\sum\sb{j=1}^{N} \frac{\ensuremath{\hat{p}}\sb{j}^{2}}{2m}, \end{equation*} where $\Delta_{AC}$ is the detuning of the drive from the bare cavity resonance, $\ensuremath{\hat{p}}\sb{j}$ is the momentum of the $j$\textsuperscript{th} atom and $\hat{H}_{SS}$, the Stark shift operator, models the light--atom interactions. Following \cite{Hammerer2010} (\textit{cf}.{} \cite{Dalibard1985}), we will treat all atom--light interactions in the dispersive limit, suppressing any internal structure of the ground and first excited electronic states of the atom. This is a valid approximation for alkali gases provided that the detuning $\Delta_{t}$ between the drive and the electronic transition frequency is large compared to the laser linewidth and all other relevant frequency scales \cite{Grimm2000}. The one-dimensional Hamiltonian describing the atom--light interaction is therefore (neglecting off-resonant terms) \begin{equation} \hat{H}_{SS} = \sum\sb{j=1}^{N} \frac{\mu^{2}}{\hbar\Delta_{t}}\hat{E}^{\left(-\right)}\left(\ensuremath{\hat{x}}\sb{j}\right)\hat{E}^{\left(+\right)}\left(\ensuremath{\hat{x}}\sb{j}\right), \label{Eqn:AtomLight} \end{equation} where $E^{\Br{+}}$ is the positive frequency component of the electric field and each atom has a transition dipole moment of $\mu$ \cite{Vogell2013}. 
The positive-frequency part of the cavity electric field may be written \begin{equation*} \hat{E}^{\Br{+}} =\mathrm{i}\sqrt{\frac{\hbar\omega_{AC}}{2\epsilon_{0}\mathcal{V}}}\Br{\baop{a}_{R}\mathrm{e}^{-\mathrm{i} kx} + \baop{a}_{L}\mathrm{e}^{\mathrm{i} kx}}, \end{equation*} where $\omega_{AC}$ is the bare resonance frequency of the atom cavity, $k$ is the optical wavenumber, $\epsilon_{0}$ is the permittivity of free space and $\mathcal{V}$ is the cavity mode volume \cite{WallsMilburn2008}. From this we find via{} Eqn~\eqref{Eqn:AtomLight} that the strength of the single-atom--light interaction is characterised by the coupling rate \begin{equation} g_{a} = \frac{\mu^{2}\omega_{d}}{2 \hbar \Delta_{t} \epsilon_{0} \mathcal{V}}. \label{Eqn:AtomCouplingRate} \end{equation} We have used $\omega_{AC} \approx \omega_{d}$. Note that for red detuned light ($\Delta_t<0$), as we will assume from here onward, $g_{a}$ is negative. Expanding the annihilation operators $\baop{a}_{L,R} \rightarrow \alpha_{L,R} + \delta \baop{a}_{L,R}$ about the coherent field amplitudes $\expect{\baop{a}_{L,R}} = \alpha_{L,R}$, which we assume are real and satisfy $\alpha_{R} \approx \alpha_{L} \gg 1$, and truncating the oscillatory terms at second order in the Lamb-Dicke parameter \cite{Hammerer2010}, we acquire a static shift of the cavity resonance frequency, an effective harmonic trapping potential with frequency \begin{equation} \omega_{a} = 2k\sqrt{\frac{-2\hbar g_{a}\alpha_{L}\alpha_{R}}{m}} \nonumber \end{equation} and a linearised interaction between the atoms' positions and the optical phase quadrature fluctuations. Finally, setting the bare detuning to $\Delta_{AC} = -2Ng_{a}$ brings the cavity onto resonance in the presence of the mean interaction, yielding the Hamiltonian \begin{equation} \hat{H}_{AC} = \sum\sb{j=1}^{N} \Sq{\frac{\ensuremath{\hat{p}}\sb{j}^{2}}{2m} + \frac{m\omega_{a}^{2}\ensuremath{\hat{x}}\sb{j}^{2}}{2} + 2\hbar kg_{a}\ensuremath{\hat{x}}\sb{j}\Br{\alpha_{L} \delta \hat{X}_{R}^{-}-\alpha_{R} \delta \hat{X}_{L}^{-}}} \label{Eqn:AtomLightLin} \end{equation} where we have introduced the amplitude and phase quadrature fluctuation operators, $\delta \hat{X}^{+} = \delta\bcop{a} + \delta\baop{a}$ and $\delta \hat{X}^{-} = \mathrm{i}\left(\delta\bcop{a} - \delta\baop{a}\right)$ and neglected contributions of order $\ensuremath{\hat{x}}\sb{j}^{2}\delta \hat{X}^{\ensuremath{\hat{p}}m}$. Inspection of Eqn~\eqref{Eqn:AtomLightLin} reveals that the phase difference of the optical fields couples to the \textit{collective} motion of the atomic cloud, specifically the centre of mass mode. This degree of freedom may be described by a simple harmonic oscillator with coordinate $\ensuremath{\hat{x}}_{a} = \frac{1}{N}\sum\sb{j=1}^{N} \ensuremath{\hat{x}}\sb{j}$, momentum $\ensuremath{\hat{p}}_{a} = \sum\sb{j=1}^{N} \ensuremath{\hat{p}}\sb{j}$ and zero-point extension $x_{zp,a} = \sqrt{\hbar/2Nm\omega_{a}}$ (\textit{cf}.{} $\S$~\ref{Sec:EffectiveDynamics}). \subsection{\textbf{Optomechanical interaction}} \label{Sec:OptomechanicalInteraction} The canonical cavity optomechanical interaction is most easily understood in the context of a single-sided Fabry-P\'{e}rot cavity wherein the input mirror is fixed and the other is harmonically bound (\textit{e.g.}{} as depicted in Fig.~\ref{Fig:ModelGeometry}). Motion of the end mirror changes the cavity length, thereby altering the optical resonance frequency, which in turn modulates the number of photons in the cavity field. 
Finally, the photon number controls the radiation force experienced by the mirror. This interplay leads to the emergence of a rich variety of well-studied physics \cite{Kippenberg2007,KippenbergVahala2008} even at first-order in $\ensuremath{\hat{x}}_{m}$. A linearly-coupled optomechanical system, whether actuated by radiation pressure or the gradient force \cite{Thourhout2010}, may be described by the parametric coupling Hamiltonian \cite{Law1995}
\begin{equation*} \hat{H}_{MC} = \hbar\Br{g_{m}\ensuremath{\hat{x}}^{\prime}_{m}-\Delta_{MC}}\bcop{b}\baop{b} +\frac{\ensuremath{\hat{p}}_{m}^{2}}{2M} + \half M\omega_{m}^{2}\ensuremath{\hat{x}}^{\prime\;2}_{m} \end{equation*}
in a frame rotating at $\omega_{d}$. The bare detuning of the laser drive to the cavity resonance is $\Delta_{MC}$, $\ensuremath{\hat{x}}_{m}^{\prime}$ is a suitable position coordinate of a vibrational mode with effective mass $M$ and frequency $\omega_{m}$, photons are removed from the cavity field by $\baop{b}$ and $g_{m}$ is the optomechanical coupling rate (with dimensions of s\textsuperscript{-1}m\textsuperscript{-1}). As above, we assume that the internal optical mode is coupled to the drive beam at a rate $\kappa_{MC} \approx \omega_{d}/Q_{MC}$, where $Q_{MC}$ is the Q-factor of the optical resonator, leading to the build-up of a steady-state intracavity amplitude $\beta = 2\alpha_{d}/\sqrt{\kappa_{MC}}$. Linearising about this amplitude, introducing a zero-mean position coordinate $\ensuremath{\hat{x}}_{m}$ and selecting $\Delta_{MC}$ so as to bring the cavity onto resonance in the presence of the mean interaction\footnote{For a linear oscillator we may arrive at Eqn~\eqref{Eqn:OptoMechLin} by introducing $\ensuremath{\hat{x}}_{m} = \ensuremath{\hat{x}}_{m}^{\prime}-\expect{\ensuremath{\hat{x}}_{m}^{\prime}}$, with $\expect{\ensuremath{\hat{x}}_{m}^{\prime}} = \frac{\hbar g_{m}}{M\omega_{m}^{2}}\beta^{2}$, and detuning from the bare cavity resonance by $\Delta_{MC}=g_{m}\expect{\ensuremath{\hat{x}}_{m}^{\prime}}$. Alternatively, an external control force may be used to cancel the mean optical force, leaving $\Delta_{MC} = 0$.} yields the effective Hamiltonian
\begin{equation} \hat{H}_{MC} = \hbar g_{m}\ensuremath{\hat{x}}_{m}\beta\delta \hat{Y}^{+} + \frac{\ensuremath{\hat{p}}_{m}^{2}}{2M} + \half M\omega_{m}^{2}\ensuremath{\hat{x}}_{m}^{2}, \label{Eqn:OptoMechLin} \end{equation}
with the optical quadrature fluctuation operators $\delta \hat{Y}^{\pm}$. The ground state variance of the mechanical resonator is $x_{zp,m}^{2} = \hbar/2M\omega_{m}$.
\subsection{\textbf{Effective dynamics}} \label{Sec:EffectiveDynamics}
Given Eqn~\eqref{Eqn:AtomLightLin} and Eqn~\eqref{Eqn:OptoMechLin} we may determine the dynamics of the system in the Heisenberg picture. Under free evolution, neglecting coupling to reservoirs for the moment,
\begin{eqnarray} Nm\sde{\hat{x}_{a}}{t} & = & -Nm\omega_{a}^{2}\ensuremath{\hat{x}}_{a} + 2\hbar Nkg_{a}\left(\alpha_{R}\delta \hat{X}_{L}^{-}-\alpha_{L}\delta \hat{X}_{R}^{-}\right), \label{Eqn:MechAC} \\ M\sde{\hat{x}_{m}}{t} & = & -M\omega_{m}^{2}\ensuremath{\hat{x}}_{m} - \hbar g_{m}\beta\delta \hat{Y}^{+}.
\label{Eqn:MechMT} \end{eqnarray} These equations show, as expected, that the atom--light interaction depends on the optical phase fluctuations, whereas the optomechanical system responds to amplitude noise (\textit{cf}.{} \cite{Camerer2011,Vogell2013}). Dramatic simplifications may be made in the optical adiabatic limit, wherein the optical quadrature fluctuations are slaved to the positions of the mechanical and atomic elements. Conveniently, this is also the regime in which the most sensitive measurements of mechanical displacement are achieved \cite{Genes2008a}. We make the bad-cavity approximation (\textit{cf}.{} \ref{Sec:Fields}) under the requirement that, for both optical cavities, $\kappa\gg\max\left\{\omega_{a}, \omega_{m}\right\}$. What emerges is an effective direct coupling between the two oscillators; \begin{eqnarray} Nm\sde{\hat{x}_{a}}{t} & = & -Nm\omega_{a}^{2}\ensuremath{\hat{x}}_{a} + K\ensuremath{\hat{x}}_{m}, \label{Eqn:MechACEffNoLoss}\\ M\sde{\hat{x}_{m}}{t} & = & -M\omega_{m}^{2}\ensuremath{\hat{x}}_{m} +\hat{F}_{BA,m} + K\ensuremath{\hat{x}}_{a}. \label{Eqn:MechMTEffNoLoss} \end{eqnarray} $\hat{F}_{BA,m} = -4\hbar g_{m}\alpha_{d}\delta\hat{X}_{d}^{+}/\kappa_{MC}$ is an optical back-action force (\textit{cf}.{} $\S~$\ref{Sec:Langevin}) arising due to amplitude fluctuations of the input electromagnetic field ($\delta\hat{X}_{d}^{+} = \delta\bcop{a}_{d} + \delta\baop{a}_{d}$); for the case discussed here the back-action on the atoms is negligibly small (\textit{cf}.{} \ref{Sec:Fields}). The spring constant $K$ quantifies, for the moment, the coupling strength. \begin{equation} K = -8^{2} \hbar Nkg_{a}g_{m}\frac{Q_{AC}Q_{MC}}{\omega_{d}^{2}}\alpha_{d}^{2}\label{Eqn:SpringConstant} \end{equation} In this same limit the optical field which exits the system carries the fluctuations \begin{eqnarray} \delta \hat{Z}^{+} & = & \delta \hat{X}_{d}^{+} \nonumber\\ \delta \hat{Z}^{-} & = & \delta \hat{X}_{d}^{-}-\frac{4g_{m}\beta}{\sqrt{\kappa_{MC}}}\ensuremath{\hat{x}}_{m}. \label{Eqn:OutputField} \end{eqnarray} It is natural to divide the optically-mediated interactions into coherent and `incoherent' processes. The latter is simply the back-action noise. Taken together, Eqn~\eqref{Eqn:MechACEffNoLoss} and Eqn~\eqref{Eqn:MechMTEffNoLoss} imply that the former may be described by an effective direct Hamiltonian interaction between the two oscillators, \begin{eqnarray} \hat{H}_{eff} & = & - \hbar g \frac{\ensuremath{\hat{x}}_{m}}{x_{zp,m}}\frac{\ensuremath{\hat{x}}_{a}}{x_{zp,a}} \nonumber\\ & = & -\hbar g \Br{\bcop{a}_{a}\baop{a}_{m}+\bcop{a}_{m}\baop{a}_{a}+\bcop{a}_{m}\bcop{a}_{a}+\baop{a}_{m}\baop{a}_{a}}, \label{Eqn:EffectiveHamiltonian} \end{eqnarray} where the motional annihilation operators are $\baop{a}_{a}$ (atomic) and $\baop{a}_{m}$ (mechanical), and $g$ is the atom--mechanical coupling rate \cite{Hammerer2010, Vogell2013}. The latter is \begin{equation} g = \sqrt{N}\frac{g_{m}}{k}\frac{\omega_{a}}{\kappa_{MC}} \sqrt{\frac{m\omega_{a}}{M\omega_{m}}} \label{Eqn:CouplingRate} \end{equation} in accordance with the relationship $g = \frac{K}{\hbar}x_{zp,a}x_{zp,m}$. The combined system is stable if \begin{equation} g < \half \frac{1}{1/\omega_{a} + 1/\omega_{m}}; \label{Eqn:Instability} \end{equation} at higher coupling rates the harmonic potential experienced by the symmetric mode $\ensuremath{\hat{x}}_{a}+\ensuremath{\hat{x}}_{m}$ becomes inverted, anti-trapping this degree of freedom. 
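The coherent part of this effective interaction can be visualised with a minimal numerical sketch (Python/SciPy; dimensionless illustrative units with $Nm=M=\omega_{a}=\omega_{m}=1$, so that $K=2g$ follows from $g=\frac{K}{\hbar}x_{zp,a}x_{zp,m}$). Integrating the noise-free equations of motion above with a weak coupling shows that an excitation initially in the mechanical mode is transferred completely to the atomic mode after a time $\approx\pi/2g$, anticipating the state-swap picture developed below:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

g = 0.01         # illustrative atom--mechanical coupling rate
K = 2 * g        # K = 2 g sqrt(N m M omega_a omega_m) in these units

def eom(t, y):
    # Noise-free versions of Eqns (MechACEffNoLoss) and (MechMTEffNoLoss)
    xa, va, xm, vm = y
    return [va, -xa + K * xm, vm, -xm + K * xa]

# Start with all of the excitation in the mechanical oscillator
sol = solve_ivp(eom, (0.0, np.pi / (2 * g)), [0.0, 0.0, 1.0, 0.0],
                rtol=1e-10, atol=1e-12)

xa, va, xm, vm = sol.y[:, -1]
E_a = 0.5 * (va**2 + xa**2)
E_m = 0.5 * (vm**2 + xm**2)
print(E_a, E_m)   # ~0.5 and ~0: the excitation now resides in the atomic mode
\end{verbatim}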
Note that if optical losses between (and/or within) the cavities are non-negligible the effective interaction of the oscillators becomes non-Hamiltonian, as discussed in detail by \cite{Hammerer2010} (see also \ref{Sec:Fields}). Our Eqn~\eqref{Eqn:CouplingRate} agrees with that derived in \cite{Vogell2013}, which employs a free-space atom trap. One may be tempted to increase $g$ by choosing $\omega_{a} > \omega_{m}$; however, for our purposes this is counterproductive. Consider Eqn~\eqref{Eqn:EffectiveHamiltonian} in the interaction picture with respect to the free mechanical Hamiltonian ($\hbar\omega_{a}\bcop{a}_{a}\baop{a}_{a} + \hbar\omega_{m}\bcop{a}_{m}\baop{a}_{m}$);
\begin{eqnarray*} H_{eff} & = & -\hbar g \left\{a^{\dagger}_{a}a_{m}\mathrm{e}^{+\mathrm{i}\Br{\omega_{m}-\omega_{a}}t}+a^{\dagger}_{m}a_{a}\mathrm{e}^{-\mathrm{i}\Br{\omega_{m}-\omega_{a}}t}\right.\\ & & \phantom{-\hbar g g} \left. +a^{\dagger}_{m}a^{\dagger}_{a}\mathrm{e}^{-\mathrm{i}\Br{\omega_{m}+\omega_{a}}t}+a_{m}a_{a}\mathrm{e}^{+\mathrm{i}\Br{\omega_{m}+\omega_{a}}t}\right\}, \end{eqnarray*}
where the lack of a caret indicates an operator in the interaction picture. If the mechanical systems are not resonant ($\omega_{a} \neq \omega_{m}$) all four terms have explicit time dependence which, when averaged over many mechanical cycles, degrades the effective interaction strength. Conversely, on resonance ($\omega_{a} = \omega_{m}$) the interaction reduces to an atomic $\leftrightarrow$ mechanical state swap operation (since the `two-mode squeezing' terms of the lower line are far off-resonant)\footnote{Physically, the state swap terms correspond to in-phase operations which tend to correlate $\ensuremath{\hat{p}}_{a}\ensuremath{\left(t\right)}$ and $\ensuremath{\hat{p}}_{m}\ensuremath{\left(t\right)}$ with time-delayed versions of themselves ($\ensuremath{\hat{p}}_{a}\Br{t-\pi/\Omega}$ and $\ensuremath{\hat{p}}_{m}\Br{t-\pi/\Omega}$), whereas the two-mode squeezing terms correspond to in-quadrature operations which lead to correlations between the two oscillators.}. Sympathetic cooling leverages this fact by continually swapping the cold motional state of the atoms onto the mechanical device \cite{Hammerer2010}. We will therefore restrict ourselves to the special case of $\omega_{m} = \omega_{a} = \Omega$ in $\S$~\ref{Sec:Cooling}. Note that the atomic $\leftrightarrow$ mechanical state swap which these dynamics perform is \textit{not} composed of consecutive swaps between the mechanical degrees of freedom and the light. A curious corollary is that the state transfer may be performed even in a regime where the optomechanical cooperativity is too low to allow an optical $\leftrightarrow$ mechanical swap. The requirements on experimental parameters (for the optomechanical device) are therefore significantly more relaxed than those of resolved-sideband cooling (\textit{cf}.{} $\S$~\ref{Sec:AtomsAlone}). Finally, we note that there are experimental advantages to incorporating a ring resonator into an experiment---despite the moderate increase in technical complexity---even though $g$ depends only on $\alpha_{L,R}$ (\textit{i.e.}{} on $\alpha_{d}/\sqrt{\kappa_{AC}}$). This is especially true if the drive strength $\alpha_{d}$ is limited (\textit{e.g.}{} by photodetector saturation or absorptive heating).
For instance, keeping all other optical parameters constant (\textit{i.e.}{} fixed detuning, input power and transverse profile), our additional ring cavity yields a $g\propto\sqrt{\mathcal{F}_{AC}}$ improvement of the coupling rate over that given by a free space trap \cite{Vogell2013} ($\mathcal{F}_{AC}$ is the finesse of the ring cavity, and we have imagined scaling $\omega_{m}\propto\sqrt{\mathcal{F}_{AC}}$ so as to maintain the $\omega_{a}=\omega_{m}$ resonance condition). Including a ring cavity may also permit one to use a larger transverse beam distribution (\textit{e.g.}{} in a bow-tie resonator), allowing $g$ to be boosted by trapping more atoms simultaneously. Alternatively, the cavity may be used to assist in suppressing heating of the atomic motion due to spontaneous photon scattering \cite{Gordon1980}, which occurs at a rate $\Gamma_{sc}$. To see this, note that $\Gamma_{sc}\propto\alpha_{R}\alpha_{L}/\Delta_{t}^{2} \propto \mathcal{F}_{AC}/\Delta_{t}^{2}$ (for fixed $\alpha_{d}$) \cite{Grimm2000} whilst $g\propto \Br{\mathcal{F}_{AC}/\Delta_{t}}^{3/4}$; we may therefore leave $g$ unchanged but suppress $\Gamma_{sc}$ by a factor of $1/\varepsilon$ by scaling both $\mathcal{F}_{AC}$ and $\Delta_{t}$ by $\varepsilon$ (see \ref{Sec:BackAction} for further discussion).
\subsection{\textbf{Coupling to reservoirs}} \label{Sec:Langevin}
Ultimately, the performance of our cooling scheme is limited by noise sources modelled by forming the Langevin equations \cite{GardinerZoller}
\begin{eqnarray} Nm\sde{\hat{x}_{a}}{t} & = & -Nm\omega_{a}^{2}\ensuremath{\hat{x}}_{a} + K\ensuremath{\hat{x}}_{m} + \hat{F}_{CB} - \Gamma_{a}Nm\de{\ensuremath{\hat{x}}_{a}}{t}, \label{Eqn:MechACEff}\\ M\sde{\hat{x}_{m}}{t} & = & -M\omega_{m}^{2}\ensuremath{\hat{x}}_{m} +\hat{F}_{BA,m} + K\ensuremath{\hat{x}}_{a} + \hat{F}_{TH} - \Gamma_{m} M \de{\ensuremath{\hat{x}}_{m}}{t} + \hat{F}_{FB}. \label{Eqn:MechMTEff} \end{eqnarray}
The atomic motion is damped into a cold bath (\textit{e.g.}{} by application of laser cooling \cite{Phillips1998}) at a rate $\Gamma_{a}$, which introduces fluctuations $\hat{F}_{CB}$. We will assume this reservoir to be at zero temperature. Mechanical losses are due to coupling to a hot bath at a rate $\Gamma_{m}$, with an associated forcing term $\hat{F}_{TH}$ consistent with a thermal occupancy $\bar{n}_{B,m}\approx k_{B}T_{B,m}/\hbar\omega_{m}\gg1$ (\textit{cf}.{} \ref{Sec:Correlators}). Optical back-action on the mechanical oscillator is given by $\hat{F}_{BA,m}$, with the equivalent force on the atomic system vanishing within the realm of validity of our model (\textit{cf}.{} \ref{Sec:Fields}). Finally, the effects of feedback are encapsulated by $\hat{F}_{FB}$. It will be convenient to adopt a frequency domain description of the system for the purposes of treating the feedback circuit; for each time domain operator $\hat{f}\ensuremath{\left(t\right)}$ there is a corresponding frequency domain operator $\tilde{f}\ensuremath{\left(\omega\right)}$ given by the Fourier transform
\begin{equation*} \tilde{f}\ensuremath{\left(\omega\right)} = \intinf{t}\;\mathrm{e}^{\mathrm{i}\omega t}\hat{f}\ensuremath{\left(t\right)} \Leftrightarrow \hat{f}\ensuremath{\left(t\right)} = \frac{1}{2\pi}\intinf{\omega}\;\mathrm{e}^{-\mathrm{i}\omega t}\tilde{f}\ensuremath{\left(\omega\right)}.
\end{equation*} Taking the transforms of Eqn~\eqref{Eqn:MechACEff} and Eqn~\eqref{Eqn:MechMTEff} and eliminating $\tilde{x}_{a}$ yields \begin{equation} \tilde{x}_{m}\ensuremath{\left(\omega\right)} = \frac{1}{\chi_{m}^{-1}\ensuremath{\left(\omega\right)}-K^{2}\chi_{a}\ensuremath{\left(\omega\right)}}\Sq{\tilde{F}_{TH}+\tilde{F}_{BA,m}+\tilde{F}_{FB} + K\chi_{a}\ensuremath{\left(\omega\right)}\tilde{F}_{CB}}, \label{Eqn:FourierXmNaive} \end{equation} in which the mechanical susceptibility is $\chi_{m}\ensuremath{\left(\omega\right)} = \Sq{M\Br{\omega_{m}^{2}-\omega^{2}-\mathrm{i}\omega\Gamma_{m}}}^{-1}$ and the atomic motion has the transfer function $\chi_{a}\ensuremath{\left(\omega\right)} = \Sq{Nm\Br{\omega_{a}^{2}-\omega^{2}-\mathrm{i}\omega\Gamma_{a}}}^{-1}$. \subsection{\textbf{Modelling cold damping}} \label{Sec:Feedback} The fluctuation-dissipation theorem enforces our inability to cool an oscillator by increasing the rate at which it is damped into its thermal bath \cite{Pinard2000}; however, no such restrictions apply when coupled to a non-thermal environment. One method of engineering such an effective non-equilibrium reservoir is to use an external feedback circuit to apply a force $\hat{F}_{FB} \ensuremath{\hat{p}}ropto -\ensuremath{\hat{p}}_{m}$ to the oscillator, which increases its linewidth and introduces a (coloured) fluctuating force determined by noise on measurements of $\ensuremath{\hat{p}}_{m}$ \cite{Poggio2007}. The resulting mechanical steady-state is approximately thermal, with an effective temperature that may be less than that of the environment (\textit{cf}.{} $\S$~\ref{Sec:ColdDamping}). In optomechanical experiments $\ensuremath{\hat{p}}_{m}$ is typically not directly accessible; instead, one detects the phase quadrature of the output optical field (Eqn~\eqref{Eqn:OutputField}, with detection noise added as in \ref{Sec:Fields}), which carries information concerning $\ensuremath{\hat{x}}_{m}$, and feeds this signal through an electrical filter. Balanced homodyne is an appropriate detection method. The most intuitive filter is a low-pass differentiator circuit\footnote{In practice one would also include a band-pass filter centred at $\omega_{m}$ so as to isolate the mechanical mode of interest.} with bandwidth $\Delta\omega_{FB} \gg \omega_{m}$ \cite{Mancini1998}; such a filter is optimal in the limit that $c_{m} \gg \bar{n}_{B,m}$ and $\eta = 1$, in that the controlled state asymptotically approaches the ground state as $c_{m}\rightarrow\infty$ (for appropriate choice of gain, \textit{cf}.{} $\S$~\ref{Sec:ColdDamping}). The feedback force is \begin{equation} \tilde{F}_{FB} = \frac{\mathrm{i}\omega G \Gamma_{m}M}{1-\mathrm{i}\omega/\Delta\omega_{FB}} \tilde{x}_{m} + \tilde{F}_{SN} \label{Eqn:FeedbackFunction} \end{equation} where the contribution due to optical shot noise is \begin{equation*} \tilde{F}_{SN} = \frac{-\mathrm{i}\omega G}{1-\mathrm{i}\omega/\Delta\omega_{FB}}\frac{M\Gamma_{m}\kappa_{MC}}{8 g_{m}\alpha_{d}}\Br{\delta\tilde{X}_{d}^{-}+\sqrt{\frac{1}{\eta}-1}\delta\tilde{Z}_{v}^{-}}. \end{equation*} The detection efficiency is $\eta \in \Sq{0,1}$ and $\delta\tilde{Z}_{v}^{-}$ is the uncorrelated vacuum noise coupled in by imperfect detection. We have normalised the filter function such that the overall feedback gain $G$ has a transparent physical interpretation (\textit{cf}.{} Eqn~\eqref{Eqn:ModifiedSusceptibility}). 
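The behaviour of this filter can be illustrated with a minimal numerical sketch (Python/NumPy; dimensionless illustrative values with $\omega_{m}=M=1$): within the feedback bandwidth the response grows linearly with frequency at $+90^{\circ}$, i.e. it acts as the differentiator needed to synthesise a force proportional to $-\hat{p}_{m}$ under the $\mathrm{e}^{-\mathrm{i}\omega t}$ convention, while well above $\Delta\omega_{FB}$ the differentiating action is cut off.
\begin{verbatim}
import numpy as np

# Illustrative values in units where omega_m = M = 1
G, Gamma_m, M, dw_fb = 10.0, 1e-3, 1.0, 100.0

def feedback_filter(w):
    # Transfer function multiplying x_m(w) in Eqn (FeedbackFunction)
    return 1j * w * G * Gamma_m * M / (1 - 1j * w / dw_fb)

for w in [0.01, 0.1, 1.0, 10.0, 1e3, 1e4]:
    H = feedback_filter(w)
    print(f"w = {w:8.2f}  |H| = {abs(H):9.3e}  arg(H) = {np.degrees(np.angle(H)):6.1f} deg")
# For w << dw_fb, |H| grows linearly with w at +90 degrees (a differentiator); for
# w >> dw_fb the magnitude saturates at G*Gamma_m*M*dw_fb and the phase tends to 180 deg.
\end{verbatim}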
Inserting the feedback force into Eqn~\eqref{Eqn:FourierXmNaive} yields
\begin{equation} \tilde{x}_{m}\ensuremath{\left(\omega\right)} = \chi_{m}^{\prime}\ensuremath{\left(\omega\right)}\Sq{\tilde{F}_{TH}+\tilde{F}_{BA,m}+\tilde{F}_{SN} + K\chi_{a}\ensuremath{\left(\omega\right)}\tilde{F}_{CB}}. \label{Eqn:FourierXm} \end{equation}
The effective mechanical susceptibility has been modified to
\begin{equation} \chi_{m}^{\prime}\ensuremath{\left(\omega\right)} = \Sq{\chi_{m}^{-1}\ensuremath{\left(\omega\right)} - \frac{\mathrm{i}\omega}{1-\mathrm{i}\omega/\Delta\omega_{FB}} G M\Gamma_{m} -K^{2}\chi_{a}\ensuremath{\left(\omega\right)}}^{-1}. \label{Eqn:ModifiedSusceptibility} \end{equation}
In the limit $\omega_{m}\ll\Delta\omega_{FB}$ the second term describes a feedback-induced increase in the linewidth of the oscillator, with $G$ being the amount of broadening, whilst the third term is a modification due to the atomic centre of mass motion. Feedback of the form described here may be implemented in a number of ways. For example, `electrostatic' actuation may be used to apply a force directly to the mechanical oscillator \cite{Hopkins2003,Poggio2007,Lee2010,Sridaran2011}, or the feedback force may be generated by imprinting an amplitude modulation onto a bright auxiliary optical field (which does not interact with the atomic system) \cite{Pinard2000,Kleckner2006,LIGO2009}. Importantly, we note that it is generally possible to arrange the feedback apparatus such that the quantum noise originating from the actuator (\textit{e.g.}{} the RF or optical field, respectively, for the examples given above) is negligible (\textit{cf}.{} Eqn~\eqref{Eqn:FourierXm}); this approximation has been extensively employed in the optomechanical feedback literature (\textit{e.g.}{} \cite{Courty2001,Genes2008a,Doherty2012,Courty2000}).
\subsection{\textbf{Position power spectrum}} \label{Sec:CalculatingSxx}
The power spectrum of the position, $S\sb{x_{m}x_{m}}$, is found via{} the Wiener-Khinchin theorem \cite{Aspelmeyer2013review} in the limit $\Delta\omega_{FB} \rightarrow \infty$.
\begin{eqnarray} S\sb{x_{m}x_{m}}\Sq{\omega} & = & \frac{1}{2\pi}\intinf{\omega^{\prime}} \expect{\tilde{x}_{m}\ensuremath{\left(\omega\right)} \tilde{x}_{m}\left(\omega^{\prime}\right)} \nonumber \\ & = & 2\hbar \modd{\chi^{\prime}_{m}}^{2} \left[ \Gamma_{m}M\omega_{m}\Br{\half + \bar{n}_{B,m}+c_{m}+\frac{\omega}{\omega_{m}}\frac{G}{4}+\frac{\omega^{2}}{\omega_{m}^{2}}\frac{G^{2}}{4^{2}\eta c_{m}}} \right. \nonumber\\ & \phantom{=} & \phantom{2\hbar \modd{\chi^{\prime}_{m}}^{2} \; \;} \left. + K^{2}\modd{\chi_{a}}^{2}\Gamma_{a}Nm\omega_{a}\Br{\half}\right]. \label{Eqn:PowerSpectrum} \end{eqnarray}
The first two terms correspond to vacuum noise ($1/2$) and phonons entering the mechanics via{} the thermal bath (with occupancy $\bar{n}_{B,m}$), whilst the third is the optomechanical cooperativity,
\begin{equation} c_{m} = \frac{4\Br{g_{m}x_{zp,m}\beta}^{2}}{\Gamma_{m}\kappa_{MC}} = \frac{2\hbar}{M\omega_{m}\Gamma_{m}}\left(\frac{2g_{m}\alpha_{d}}{\kappa_{MC}}\right)^{2}, \label{Eqn:MechCooperativity} \end{equation}
corresponding to the effective number of additional bath phonons introduced by optical back-action. As will be seen, $c_{m}$ controls the efficacy of feedback cooling \cite{Genes2008a} and strongly contributes to the sympathetic cooling performance.
The fourth term of Eqn~\eqref{Eqn:PowerSpectrum} arises from correlations between $\hat{F}_{BA,m}$ and $\hat{F}_{SN}$, and the fifth is solely due to the latter (\textit{cf}.{} \ref{Sec:Correlators}). Finally, noise entering from the zero-temperature bath is filtered by the atomic susceptibility and appears as the sixth term in the power spectrum. It is convenient to define an associated cooperativity by analogy with $c_{m}$: \begin{equation} c_{a} = \frac{2\hbar N}{m\omega_{a}\Gamma_{a}}\left(\frac{4kg_{a}\alpha_{d}}{\kappa_{AC}}\right)^{2} = \frac{Nm\omega_{a}^{3}}{2\hbar\Gamma_{a}}\Br{\frac{1}{4k\alpha_{d}}}^{2}. \label{Eqn:AtomCooperativity} \end{equation} Although it does not appear directly in $S\sb{x_{m}x_{m}}$ (in the above form), $c_{a}$ is important in determining whether sympathetic cooling is capable of reaching the mechanical ground state. Integrating over the power spectrum yields the steady-state variance of $\ensuremath{\hat{x}}_{m}$, \textit{viz}.{} \begin{equation} \expect{\ensuremath{\hat{x}}_{m}^{2}} = \frac{1}{2\ensuremath{\hat{p}}i} \intinf{\omega}S\sb{x_{m}x_{m}}\Sq{\omega}, \label{Eqn:VarianceGeneral} \end{equation} which is the result of interest. It is generally necessary to evaluate $\expect{\ensuremath{\hat{x}}_{m}^{2}}$ numerically; however, in $\S$~\ref{Sec:Cooling} we also explore several limits in which it is possible to give approximate analytical solutions of Eqn~\eqref{Eqn:VarianceGeneral}. \section{Cooling Performance} \label{Sec:Cooling} We are now in a position to calculate the mechanical oscillator's position variance as a function of the system parameters and the applied feedback gain. In the cases examined below the mechanical steady state is well approximated by a thermal distribution, as confirmed by numerical calculations of its covariance matrix. For this reason $\expect{\ensuremath{\hat{x}}_{m}^{2}}$ serves as an excellent proxy for temperature. With this in mind, we will refer to the oscillator as `ground state cooled' if \begin{equation} \expect{\ensuremath{\hat{x}}_{m}^{2}} \leq 3x_{zp,m}^{2}. \label{Eqn:GroundStateDefinition} \end{equation} This corresponds to the requirement that the oscillator contain at most one phonon on average ($\expect{\bcop{a}_{m}\baop{a}_{m}} \leq 1$) \footnote{We note that this is the same cut-off as implicitly employed in previous works \textit{e.g.}{} \cite{Hammerer2010,Vogell2013}}. \subsection{\textbf{Feedback Cooling}} \label{Sec:ColdDamping} Let us first consider the case in which there are no atoms in the trap. We include a brief summary of results (\textit{cf}.{} sources presented in $\S$~\ref{Sec:Introduction}) here for the purposes of comparison with sympathetic cooling. It is straightforward to extremise Eqn~\eqref{Eqn:VarianceGeneral} with $c_{a}=0$ (using the relations given in \ref{Sec:Maths}), confirming the presence of a global minimum variance of \begin{equation} \frac{\expect{\ensuremath{\hat{x}}_{m}^{2}}_{opt}}{x_{zp,m}^{2}} = \frac{G_{opt}^{\Br{0}}}{4 \eta c_{m}} \label{Eqn:VarianceFeedback} \end{equation} which is achieved with the feedback gain $G_{opt}^{\Br{0}} = \sqrt{1+\ensuremath{\textup{SNR}}}-1$, where the signal-to-noise ratio \cite{Lee2010} is given by $\ensuremath{\textup{SNR}} = 16\eta c_{m}\Br{\bar{n}_{B,m}+c_{m}+1/2}$. In the experimentally-relevant regime of $\left\{\ensuremath{\textup{SNR}},\bar{n}_{B,m}\right\} \gg 1$ it is straightforward to show that ground state cooling may be realised if \begin{equation} \bar{n}_{B,m} \lesssim \Br{9\eta-1}c_{m} \leq 8 c_{m}. 
\label{Eqn:FeedbackGroundstateCriterion} \end{equation}
Thus we see that feedback cooling to the ground state is possible when the mechanical noise spectrum is dominated by radiation pressure fluctuations.
\subsection{\textbf{Sympathetic cooling}} \label{Sec:AtomsAlone}
We now consider the capacity of sympathetic cooling alone: our analysis complements and extends those performed by \cite{Hammerer2010} and \cite{Vogell2013} by explicitly determining the temperature achievable in the regime of hybridised mechanical modes. Analytical treatments are tractable in both the atomic adiabatic (weak coupling) regime, wherein the atoms are damped heavily compared to the rate of phonon transfer between the oscillators, and the strong coupling (hybridised) regime, in which the coherent interaction (Eqn~\eqref{Eqn:EffectiveHamiltonian}) is dominant. The primary challenge in either case is evaluating $\intinf{\omega} \modd{\chi_{m}^{\prime}}^{2}$ and $\intinf{\omega} \modd{\chi_{m}^{\prime}}^{2}\modd{\chi_{a}}^{2}$. These control, respectively, the response to noise acting directly on the mechanics and indirectly via{} the atoms. In the weak coupling regime ($\Gamma_{a} \gg g$ \& $\Gamma_{a} > \Gamma_{m}$) the atomic centre of mass adiabatically follows the motion of the mechanical oscillator, allowing us to expand the atomic susceptibility near the resonance frequency as
\begin{equation*} \chi_{a}\Br{\omega \approx \Omega} \approx \frac{\mathrm{i}}{Nm\omega\Gamma_{a}} \approx \frac{\mathrm{i}}{Nm\Gamma_{a}\Omega}\Br{2-\frac{\omega}{\Omega}}. \end{equation*}
Neglecting the small frequency-independent imaginary term which arises leaves us with the approximate modified mechanical susceptibility
\begin{equation} \chi^{\prime}_{m}\left(\omega\right) \approx \frac{1}{M\Sq{\Omega^{2}-\omega^{2}-\mathrm{i}\Gamma_{m}\omega\left(1+c\right)}} \label{Eqn:EffectiveChi} \end{equation}
which describes a simple harmonic oscillator with an enhanced linewidth of $\Gamma_{m}\left(1+c\right)$. Fittingly, the broadening is characterised by the atom--mechanical cooperativity
\begin{equation} c = \frac{4 g^{2}}{\Gamma_{a}\Gamma_{m}} = 4^{2}c_{a}c_{m}. \label{Eqn:Cooperativity} \end{equation}
With these approximations $\modd{\chi_{m}^{\prime}}^{2}$ may be readily integrated (\textit{cf}.{} \ref{Sec:Maths}). The remaining term, proportional to $\intinf{\omega} \modd{\chi_{m}^{\prime}}^{2}\modd{\chi_{a}}^{2}$, is not evaluated directly. Instead, as shown in \ref{Sec:ApproxAnalytical}, we approximate the integrand around the mechanical resonance frequency as a Lorentzian peak; since the majority of the spectral variance is contained within a relatively narrow bandwidth the true integral is faithfully reproduced. Using these two approximations we find
\begin{equation} \frac{\expect{\ensuremath{\hat{x}}_{m}^{2}}}{x_{zp,m}^{2}} = \frac{2}{1+c}\Sq{ \bar{n}_{B,m}+c_{m}+\half+\bar{n}_{a,eff} }, \label{Eqn:VarianceWeak} \end{equation}
where coupling to the atoms has introduced an effective number of phonons
\begin{equation*} \bar{n}_{a,eff} = \frac{c/2}{1+\frac{\Gamma_{m}}{\Gamma_{a}}\Br{1+c}}. \end{equation*}
In the opposite limit, strong coupling ($g \gg \max\Cu{\Gamma_{a},\Gamma_{m}}$), excitations are hybridised across the two local modes ($\ensuremath{\hat{x}}_{a}$ and $\ensuremath{\hat{x}}_{m}$), giving rise to a symmetric and an antisymmetric normal mode.
In this case, as detailed in \ref{Sec:ApproxAnalytical}, $\modd{\chi_{m}^{\prime}}^{2}$ and $\modd{\chi_{m}^{\prime}}^{2}\modd{\chi_{a}}^{2}$ share a similar twin-peaked structure. In both instances we exploit the rapid decay of the spectral variance away from these peaks, much as above, to integrate Eqn~\eqref{Eqn:PowerSpectrum}, yielding \begin{equation} \frac{\expect{\ensuremath{\hat{x}}_{m}^{2}}}{x_{zp,m}^{2}} = \half\Sq{\frac{\Gamma_{m}}{\Gamma_{N}}\Br{\bar{n}_{B,m} + c_{m}} + 1}{\Br{1-\frac{g^{2}}{\Omega^{2}}}^{-2}\Br{1-\frac{4g^{2}}{\Omega^{2}}}}^{-1}. \label{Eqn:VarianceStrong} \end{equation} Here $\Gamma_{N} = \half\Br{\Gamma_{a}+\Gamma_{m}}$ is the linewidth of the normal modes of the system. Our analytical expressions (Eqn~\eqref{Eqn:VarianceWeak} and Eqn~\eqref{Eqn:VarianceStrong}) are compared to numerical results in Fig.~\ref{Fig:Sympathetic}. The parameters $\Omega$ and $\Gamma_{m}$ have been chosen to be representative of a SiN nanostring \cite{Schmid2011,Brawley2012} ($\Omega/2\pi = 220$ kHz, $\Gamma_{m} = 195$ mHz) held in a dilution refrigerator such that $\bar{n}_{B,m} = 2.8\times10^{4}$ (temperature $T_{B,m} = 300$ mK, \textit{cf}.{} \cite{Thompson2008}). Optical dipole traps are readily capable of achieving vibrational frequencies on the order of $\Omega$ \cite{Camerer2011, Vetsch2012}. The remaining independent\footnote{Of course, the cooperativities \textit{do} depend on the resonance frequency and decay rates: however, it is possible to vary them independently by suitable adjustment of $g_{a}$, $N$, $g_{m}$, \textit{etc.}{}} parameters appearing in the power spectrum are $c_{a}$, $c_{m}$ and $\Gamma_{a}$ (with $G$ and $\eta$ being important when measurement feedback is included). These parameters are discussed further in Table~\ref{Tab:Params} and $\S$~\ref{Sec:CombinedCooling}. \begin{figure} \caption{\label{Fig:Sympathetic}} \end{figure} We may now ask whether sympathetic cooling is capable of producing near-ground-state-cooled mechanical oscillators. Substituting Eqn~\eqref{Eqn:VarianceWeak} into the ground state criterion (Eqn~\eqref{Eqn:GroundStateDefinition}) and using the fact that $\bar{n}_{B,m} \gg 1$ yields \begin{equation} \bar{n}_{B,m} < \Br{2+3\frac{\Gamma_{m}}{\Gamma_{a}}}\bar{n}_{a,eff}-c_{m} \label{Eqn:AtomicGroundstateCriterion} \end{equation} as a sufficient condition for ground state cooling in the adiabatic limit (we have used $c \Gamma_{m}/\Gamma_{a} \ll 1$, which holds in this case). Much as the condition $\bar{n}_{B,m} < 8c_{m}$ indicates that the mechanical spectrum must be dominated by radiation pressure fluctuations in order to ground state cool using feedback, Eqn~\eqref{Eqn:AtomicGroundstateCriterion} shows that \textit{atomic} contributions must dominate in order to sympathetically cool to the ground state. It is straightforward to show that the region in which Eqn~\eqref{Eqn:AtomicGroundstateCriterion} is satisfied is a portion of the parameter space where $c > \bar{n}_{B,m}$ and $c_{a} > 1/24$. The condition $c > \bar{n}_{B,m}$ is of great importance, as it also dictates whether near-ground-state cooling may be achieved in the strong coupling regime.
Inserting the variance in the hybridised regime (Eqn~\eqref{Eqn:VarianceStrong}) into the ground state criterion gives an inequality which may only be satisfied if $c = \bar{n}_{B,m}$ is reached in the \textit{weak} coupling limit; that is, if near-ground-state cooling is possible in the adiabatic limit then it is also possible in the case of strong coupling\footnote{We imagine tuning between the two regimes by varying $c_{a}$ and $c_{m}$, keeping $\Gamma_{a}, \Gamma_{m}$ and $\Omega$ fixed.}. The converse statement is also true; if near-ground-state cooling is \textit{not} possible in the weakly-coupled limit then it is also \textit{not} possible in the hybridised limit (\textit{i.e.}{} no near-ground cooling if $c=\bar{n}_{B,m}$ requires strong coupling). The physical interpretation of these statements is that the atomic motion must be damped into the zero-temperature bath faster than phonons enter from the hot reservoir, which is entirely consistent with our na\"{i}ve expectations. Effective steady-state cooling is therefore more difficult to realise in the strongly-coupled case simply because $\Gamma_{a}$ is bounded above by $\Omega/2$ (required for stability, \textit{cf}.{} Eqn~\eqref{Eqn:Instability}). In summary, we have shown that there exist two (slightly overlapping) parameter regimes, summarised in Table~\ref{Tab:Regimes}, in which the oscillator is prepared near the quantum ground state. \begin{table} \centering \begin{tabular}{l | l} Near-ground cooling & Condition \\ \hline Sympathetic & $\bar{n}_{B,m} < c$ \& $1/24 < c_{a}$ \& $\Gamma_{m}\bar{n}_{B,m} \ll \Gamma_{a}$\\ Feedback & $\bar{n}_{B,m} < \Br{9\eta-1}c_{m}$\\ Neither & $\max\Cu{\Br{9\eta-1}c_{m},c}<\bar{n}_{B,m}$ or $c_{a} < 1/24$\\ & or $\Gamma_{a} \lesssim \Gamma_{m}\bar{n}_{B,m}$ \\ \end{tabular} \caption{\label{Tab:Regimes} A summary of the relevant parameter regimes for sympathetic and feedback cooling to near the ground state. In the case that both sympathetic cooling and cold damping are capable of approaching the ground state there exists an overlap region if the feedback efficiency satisfies $1 \geq \eta > \frac{1}{9}\Br{1+\frac{2}{3}\Gamma_{a}\Gamma_{m}\bar{n}_{B,m}\Omega^{-2}}$.} \end{table} \FloatBarrier

\subsection{\textbf{Combined cooling}} \label{Sec:CombinedCooling} We now turn our attention to the performance of combined sympathetic and feedback cooling. Both mechanisms act to suppress the leakage of information into the environment; cold damping achieves this explicitly by measurement and feedback, whilst atomic cooling achieves the same effect by diverting a portion of the leakage into the atoms, and thence back to the mechanical system. Our main computational task is therefore to determine the optimum feedback gain to apply for a fixed sympathetic cooling capacity, and to then calculate the new---hopefully reduced---position variance. The former is typically pushed away from its $c_{a}=0$ value, $G_{opt}^{\Br{0}}$, to a new optimum, $G_{opt}$. To calculate this gain we analytically differentiate $S\sb{x_{m}x_{m}}$ with respect to $G$, numerically integrate over $\omega$ to find $\partial \expect{x_{m}^{2}} / \partial G$, and apply a numerical root-finding algorithm to determine $G_{opt}$.
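The procedure just described can be prototyped in a few lines. The sketch below is illustrative only: it uses a toy, single-peaked spectrum standing in for Eqn~\eqref{Eqn:PowerSpectrum}, in which a flat drive of weight $A$ competes with fed-back noise of weight $BG^{2}$ while the gain broadens the linewidth, so that the optimum gain is known in closed form ($\sqrt{1+A/B}-1$, mirroring the structure of $G_{opt}^{\Br{0}}$) and can be used to check the numerics.
\begin{verbatim}
# Illustrative sketch only: a toy spectrum stands in for the full S_xx of
# Eqn (PowerSpectrum). Gain G broadens the linewidth and feeds in noise B*G^2.
import numpy as np
from scipy import integrate, optimize

omega_m, Gamma_m = 1.0, 1e-3   # scaled resonance frequency and bare linewidth
A, B = 1.0, 1e-5               # arbitrary drive and fed-back noise weights

def S_xx(omega, G):
    return (A + B*G**2) / ((omega_m**2 - omega**2)**2 + (Gamma_m*(1 + G)*omega)**2)

def variance(G):
    # <x^2> = (1/2pi) * integral over the real line; the integrand is even in omega
    val, _ = integrate.quad(S_xx, 0.0, 50.0, args=(G,), points=[omega_m], limit=400)
    return val / np.pi

def dvar_dG(G, h=1e-3):        # numerical derivative of the variance w.r.t. the gain
    return (variance(G + h) - variance(G - h)) / (2*h)

G_opt = optimize.brentq(dvar_dG, 1.0, 1e4)      # root of d<x^2>/dG
print(G_opt, np.sqrt(1 + A/B) - 1)              # both ~315 for this toy model
\end{verbatim}
In the full calculation the same skeleton applies, with $S\sb{x_{m}x_{m}}$ of Eqn~\eqref{Eqn:PowerSpectrum} in place of the toy spectrum.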
The results shown in Fig.~\ref{Fig:CombinedCooling} and Fig.~\ref{Fig:Comparison} are calculated in the case of perfect feedback \textit{i.e.}{} $\eta = 1$: we note that the results with imperfect feedback efficiency are qualitatively the same for $\eta \gtrsim 50\%$ (one essentially need only renormalise the lower axis appropriately), but differ substantially for $\eta \lesssim 15\%$. These data make it quite clear that, generally speaking, including measurement-based feedback alongside sympathetic cooling significantly decreases the oscillator's temperature. The notable exception to this behaviour is in the parameter space where atoms alone are capable of achieving near-ground-state temperatures ($8 c_{m} < \bar{n}_{B,m} < c$ \& $\Gamma_{m}\bar{n}_{B,m} \ll \Gamma_{a}$), in which the addition of feedback yields little improvement ($\sim 0.01$ dB). Furthermore, the addition of atoms to the system has negligible impact if feedback cooling to the ground state is possible. Exemplary experimental parameters are given in Table~\ref{Tab:Params}; these yield the points denoted by $\square$ and $\diamond$ in Figures \ref{Fig:Sympathetic}, \ref{Fig:CombinedCooling} and \ref{Fig:Comparison}. Ideally, our example mechanical system, which operates in the mechanical back-action--dominated regime ($8c_{m}>\bar{n}_{B,m}$, left panel of Table~\ref{Tab:Params}), will reach a final variance of $1.41\,x_{zp,m}^{2}$ with feedback alone: its thermal variance is $\sim 5\times 10^{4}\,x_{zp,m}^{2}$. Our suggested hybrid system (right panel of Table~\ref{Tab:Params}: note that $g_{m}$ has been adjusted such that $8c_{m}<\bar{n}_{B,m}$ \textit{i.e.}{} cold damping to the ground state is not possible) achieves variances of $1.35\,x_{zp,m}^{2}$ and $1.33\,x_{zp,m}^{2}$ in the weakly-coupled regime---with and without feedback, respectively---whilst in the case of strong coupling the (sympathetically-cooled) variance of $603\,x_{zp,m}^{2}$ may be reduced to a mere $4.79\,x_{zp,m}^{2}$ by switching on feedback. This is essentially equal to the feedback-only steady-state variance in this regime. It is encouraging that all of these parameters are within reach of state-of-the-art optomechanical and atomic systems; sympathetic cooling (and/or cold damping) to the mechanical ground state is technically feasible. The experimental challenges lie in combining these heretofore disparate elements and eliminating technical noise (and system-specific noise sources, such as absorptive heating) which we have not considered here.
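As a consistency check on Table~\ref{Tab:Params}, the atomic cooperativity quoted there can be reproduced directly from Eqn~\eqref{Eqn:AtomCooperativity}. The short calculation below assumes the \isotope{Rb}{87} D2 wavelength $\lambda \approx 780$~nm for the optical wavenumber $k$ (a value implied, but not listed, in the table).
\begin{verbatim}
# Cross-check of c_a in Table (Tab:Params) using Eqn (AtomCooperativity).
# The D2 wavelength (780 nm) is an assumed input.
import numpy as np

hbar    = 1.0546e-34
k       = 2*np.pi/780e-9        # assumed Rb-87 D2 wavenumber [1/m]
N       = 3.1e8                 # atom number
m       = 1.44e-25              # single-atom mass [kg]
omega_a = 2*np.pi*220e3         # atomic trap frequency [rad/s]
alpha_d = 6.58e6                # drive amplitude [sqrt(Hz)]

def c_a(Gamma_a):
    return N*m*omega_a**3/(2*hbar*Gamma_a) * (1/(4*k*alpha_d))**2

for Gamma_a in (omega_a, 100*2*np.pi*31e-3):   # the two damping rates of the table
    print(10*np.log10(c_a(Gamma_a)))           # compare with the 9.54 (58.1) dB quoted
\end{verbatim}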
\begin{table} \centering \begin{tabular}{l| l} Mechanical alone \small{$\square$} \textit{cf}.{} \cite{Brawley2012} & Mechanical and atomic $\diamond$ \\ \hline \begin{tabular}{c l} $\omega_{m}/2\pi$ & $220$~kHz \\ $\Gamma_{m}/2\pi$ & $31$~mHz \\ $M$ & $1.4$~ng \\ $\kappa_{MC}$ & $20\:\omega_{m}$ \\ $\alpha_{d}$ & $6.58\times 10^{6}$~$\sqrt{\mathrm{Hz}}$ \\ $g_m$ & $9.85$~MHz/nm\\ $T_{B,m}$ & $300$~mK \cite{Blencowe2004} \\ $8c_{m}/\bar{n}_{B,m}$ & $9.03$~dB \\ \end{tabular} & \begin{tabular}{c l | c l} $\omega_{m}/2\pi$ & $220$~kHz & $\omega_{a}$ & $\omega_{m}$ \\ $\Gamma_{m}/2\pi$ & $31$~mHz & $\Gamma_{a}$ & $\omega_{m}~\Br{10^{2}\:\Gamma_{m}}$ \\ $M$ & $1.4$~ng & $m$ & $1.44\times 10^{-25}$~kg \\ $\kappa_{MC}$ & $20\:\omega_{m}$ & $\kappa_{AC}$ & $20\:\omega_{m}$ \\ $\alpha_{d}$ & $6.58\times 10^{6}$~$\sqrt{\mathrm{Hz}}$ & $N$ & $3.1\times 10^{8}$ \cite{Kerman2000} \\ $g_m$ & $3.19$~MHz/nm & $\Delta_{t}/2\pi$ & $-1$~GHz \cite{Vogell2013} \\ $T_{B,m}$ & $300$~mK \cite{Blencowe2004} & $\mathcal{V}$ & $2.8\times 10^{-8}$~$\mathrm{m}^{3}$ \\ $8c_{m}/\bar{n}_{B,m}$ & $-4.38$~dB & $c_{a}$ & $9.54~\Br{58.1}$~dB \\ \end{tabular} \\ \end{tabular} \caption{\label{Tab:Params} Feasible experimental parameters which permit preparation of a mechanical oscillator near its ground state by using sympathetic or feedback cooling.\newline The mechanical ($\square$) specifications are drawn from the literature concerning evanescently-coupled, high-tension silicon nitride nanostrings (as discussed in $\S$~\ref{Sec:AtomsAlone}); we have assumed that the damping rate is independent of temperature.\newline The atomic cavity ($\diamond$) parameters have been chosen to be comparable to those used to construct optical parametric oscillators (\textit{e.g.}{} cavity length $\sim 30$~cm, linewidth $\kappa_{AC}/2\pi = 4.4$~MHz). The transverse beam area is drawn from \cite{Camerer2011} and the detuning estimated according to \cite{Vogell2013}. We have used the transition wavelength and dipole moment of the \isotope{Rb}{87} D2 line \textit{cf}.{} \cite{Vogell2013}. Values in parentheses are valid in the small $\Gamma_{a}$ limit. It is possible to prepare large numbers of atoms in their motional ground state (\textit{e.g.}{} using Raman cooling in a three-dimensional optical lattice) \cite{Kerman2000}. \newline See $\S$~\ref{Sec:CombinedCooling} for discussion of the final temperatures achieved using these specifications.} \end{table} \begin{figure} \caption{\label{Fig:CombinedCooling}} \end{figure} \begin{figure} \caption{\label{Fig:Comparison}} \end{figure} \FloatBarrier

\section{Conclusion} \label{Sec:Conclusion} We have modelled steady-state cooling of a low-frequency mechanical oscillator using the combined effects of optical coupling to a remote atomic ensemble and measurement-based feedback. Combining these two methods is beneficial in all circumstances, although there exist distinct regions of parameter space (\textit{cf}.{} Table \ref{Tab:Regimes}) in which one technique or the other dominates the cooling. We have also demonstrated that an optically-mediated state swap between the two mechanical degrees of freedom (\textit{i.e.}{} sympathetic cooling to the ground state) may be performed even in the case that $\bar{n}_{B,m} < c_{m}$, in which a complete mechanical $\leftrightarrow$ optical swap is forbidden.
Both sympathetic and feedback cooling to the ground state are feasible with current experimental parameters. \providecommand{\newblock}{} \pagebreak \appendix

\section[\hspace{3cm} Treatment of adiabatic field evolution]{Treatment of adiabatic field evolution} \label{Sec:Fields} As explained in $\S$~\ref{Sec:Theory}, we assume that the system is operating in a regime where the field fluctuation operators rapidly reach quasistatic forms relative to the mechanical period(s). Furthermore, we suppose that both optical cavities are strongly overcoupled ($\kappa_{AC} \gg \left\{\Gamma_{AC},-2Nkg_{a}x_{zp,a}\right\}$ and $\kappa_{MC} \gg \left\{\Gamma_{MC},g_{m}x_{zp,m}\right\}$) such that perturbations of the steady-state field amplitudes away from their interaction-free values are negligibly small. In this limit we have $\alpha_{L} = \alpha_{R} = 2\alpha_{d}/\sqrt{\kappa_{AC}}$ and $\beta = 2\alpha_{d}/\sqrt{\kappa_{MC}}$.

\subsection[\hspace{3cm} Adiabatic limit of the optical Langevin equations]{\textbf{Adiabatic limit of the optical Langevin equations}} We describe the fields in the optical cavities with the Langevin equation \cite{WallsMilburn2008} \begin{equation} \de{\delta \baop{a}}{t} = \frac{1}{\mathrm{i}\hbar}\comm{\delta \baop{a}}{\hat{H}} + \sqrt{\kappa}\delta\baop{a}_{in}- \half\kappa\delta \baop{a}, \label{Eqn:OpticalLangevin} \end{equation} in which the coupling rate\footnote{It may be shown that including extraneous losses (\textit{e.g.} scattering into unguided modes and absorption loss) at a rate $\Gamma$ leads to an amount of additional noise which scales as $\Gamma/\kappa$; we therefore neglect these contributions in the regime $\kappa\gg\Gamma$.} between the resonator and drive mode $\delta \baop{a}_{in}$ is $\kappa$, and $\hat{H}$ is either Eqn~\eqref{Eqn:AtomLightLin} or Eqn~\eqref{Eqn:OptoMechLin} as appropriate. $\delta \baop{a}_{in}$ represents the fluctuations of the multi-mode input field. Specifically, the fluctuations of the drive mode obey the relationships \cite{Milburn2011} \begin{eqnarray*} \expect{\delta\baop{a}_{d}} & = & 0, \\ \expect{\delta\bcop{a}_{d}\ensuremath{\left(t\right)}\delta\baop{a}_{d}\left(t^{\prime}\right)} & = & 0, \\ \expect{\delta\baop{a}_{d}\ensuremath{\left(t\right)}\delta\bcop{a}_{d}\left(t^{\prime}\right)} & = & \delta\left(t-t^{\prime}\right). \\ \end{eqnarray*} These translate into correlations between sidebands in the frequency domain (the carrier having been translated to $\omega = 0$). \begin{eqnarray*} \expect{\delta \tilde{X}_{j}^{\pm}\ensuremath{\left(\omega\right)} \delta \tilde{X}_{k}^{\pm}\left(\omega^{\prime}\right)} & = & 2\pi \delta_{j,k}\delta\left(\omega+\omega^{\prime}\right),\\ \expect{\delta \tilde{X}_{j}^{\pm}\ensuremath{\left(\omega\right)} \delta \tilde{X}_{k}^{\mp}\left(\omega^{\prime}\right)} & = & \pm \mathrm{i} 2\pi \delta_{j,k}\delta\left(\omega+\omega^{\prime}\right). \\ \end{eqnarray*} Transfer of information between the two optical cavities is treated using the input-output formalism.
The appropriate input-output relations are (\textit{cf}.{}~Fig.~\ref{Fig:ModelGeometry}) \cite{Milburn2011} \begin{eqnarray*} \delta \baop{a}_{in,R} & = & \delta \baop{a}_{d}, \\ \delta \baop{b}_{in} & = & \sqrt{\kappa_{AC}}\delta \baop{a}_{R} - \delta \baop{a}_{d}, \\ \delta \baop{a}_{in,L} & = & \sqrt{\kappa_{MC}}\delta \baop{b} - \delta \baop{b}_{in}. \\ \end{eqnarray*} These expressions are valid in the case that the time taken for light to propagate between the two cavities is small compared to the mechanical period(s). Application of Eqn~\eqref{Eqn:OpticalLangevin} in the adiabatic limit (setting $\de{\delta \baop{a}}{t} \approx 0$ and solving the resulting algebraic equation for $\delta \baop{a}$) yields {\begin{eqnarray*} \delta \tilde{X}_{R}^{+} & = & \frac{2}{\sqrt{\kappa_{AC}}} \left(\delta \tilde{X}^{+}_{d}+\frac{4Nkg_{a}\alpha_{L}}{\sqrt{\kappa_{AC}}}\tilde{x}_{a}\right), \\ \delta \tilde{X}_{R}^{-} & = & \frac{2}{\sqrt{\kappa_{AC}}} \Br{\delta\tilde{X}_{d}^{-}}, \\ \delta \tilde{Y}^{+} & = & \frac{2}{\sqrt{\kappa_{MC}}} \Br{\delta\tilde{X}_{d}^{+}+\frac{8Nkg_{a}\alpha_{L}}{\sqrt{\kappa_{AC}}}\tilde{x}_{a}}, \\ \delta \tilde{Y}^{-} & = & \frac{2}{\sqrt{\kappa_{MC}}} \Br{\delta\tilde{X}_{d}^{-}-\frac{2g_{m}\beta}{\sqrt{\kappa_{MC}}}\tilde{x}_{m}}, \\ \delta \tilde{X}_{L}^{+} & = & \frac{2}{\sqrt{\kappa_{AC}}} \left(\delta \tilde{X}^{+}_{d}+\frac{4Nkg_{a}\alpha_{L}}{\sqrt{\kappa_{AC}}}\tilde{x}_{a}\right), \\ \delta \tilde{X}_{L}^{-} & = & \frac{2}{\sqrt{\kappa_{AC}}} \left(\delta\tilde{X}_{d}^{-}-\frac{4g_{m}\beta}{\sqrt{\kappa_{MC}}}\tilde{x}_{m}\right). \\ \end{eqnarray*}} By inspecting these equations we may see that the optomechanical interaction creates phase fluctuations which then perturb the atoms, whilst motion of the atoms modulates the amplitude fluctuations experienced by the micromechanical element. The optical field at the output may be found by calculating $\delta \baop{a}_{out} = \sqrt{\kappa_{AC}} \delta\baop{a}_{L} - \delta \baop{a}_{in,L}$. Imperfect homodyne detection (efficiency $\eta$) is modelled by applying a standard beamsplitter transformation to this field (Eqn~\eqref{Eqn:OutputField})---which introduces an amount of uncorrelated noise associated with the vacuum fluctuations $\delta \hat{Z}_{v}^{\pm}$---and treating the photodetectors as perfectly efficient. The resulting effective detected field is \begin{eqnarray*} \delta \hat{Z}^{+} & = & \sqrt{\eta}\delta \hat{X}_{d}^{+} + \sqrt{1-\eta}\delta\hat{Z}_{v}^{+}, \\ \delta \hat{Z}^{-} & = & \sqrt{\eta}\Br{\delta \hat{X}_{d}^{-}-\frac{4g_{m}\beta}{\sqrt{\kappa_{MC}}}\ensuremath{\hat{x}}_{m}} + \sqrt{1-\eta}\delta \hat{Z}_{v}^{-}. \end{eqnarray*}

\subsection[\hspace{3cm} Back-action forces]{\textbf{Back-action forces}} \label{Sec:BackAction} Substituting the above into the equations of motion (Eqn~\eqref{Eqn:MechAC} and Eqn~\eqref{Eqn:MechMT}) gives the time evolution of the mechanical elements, Eqn~\eqref{Eqn:MechACEffNoLoss} and Eqn~\eqref{Eqn:MechMTEffNoLoss}, under the effect of the coupling (Eqn~\eqref{Eqn:CouplingRate}) and the back-action noise \begin{eqnarray*} \tilde{F}_{BA,a} & = & 0 ,\\ \tilde{F}_{BA,m} & = & \frac{-4\hbar g_{m}\alpha_{d}}{\kappa_{MC}}\delta\tilde{X}_{d}^{+}. \end{eqnarray*} We briefly discuss the result $\tilde{F}_{BA,a} = 0$. Complete cancellation of the optical back-action onto the atomic motion is an artefact of our model, arising due to neglect of optical loss and near-field atom--atom interactions.
In reality, there will be some amount of heating caused by optical loss, near-field interactions between atoms and spontaneous scattering of photons out of the trap beams. As noted in $\S$~\ref{Sec:EffectiveDynamics}, any optical loss will render the effective oscillator--oscillator interaction asymmetric, and hence non-Hamiltonian. Such losses also introduce vacuum noise on the left-going field which is uncorrelated with that on the right-going, leading to imperfect cancellation of the phase fluctuations appearing in the atomic equation of motion. In the limit that the noise is completely uncorrelated the back-action heating rate is $2\Gamma_{a}c_{a}$. Even in the absence of uncorrelated noise, diffusion of the atomic centre of mass occurs due to near-field optically-mediated interactions between atoms: for instance, a sideband photon may be emitted by one atom and absorbed by another. These processes do not alter the optical far-field, which our (one-dimensional) model describes, but do lead to back-action heating of the atomic motion. This effect will be small in the far-detuned limit and scales weakly with the atom number ($\propto N^{1/3}$) \cite{Balykin2000}; we therefore neglect it. The heating rate associated with Gordon-Ashkin (GA) diffusion \cite{Gordon1980} is also negligible in the regime discussed. To illustrate this, consider the momentum diffusion coefficient $\mathcal{D}_{p}$ given by \cite{Murr2006}. Since the atoms are trapped near to an antinode of the cavity field there is (to first order) no spatial variation of the electric field amplitude or of the degree of coherence between the atoms' ground and excited states (given by $\expect{\sigma}$, where $\sigma$ is the atomic lowering operator), so the diffusion should be dominated by spontaneous scattering terms. The axial motion of each atom is thus heated at a rate (in the harmonic approximation) \[ \Gamma_{GA}\bar{n}_{GA} = \frac{\mathcal{D}_{p}}{6m\hbar\omega_{a}} \approx \frac{\omega_{a}\gamma_{e}}{8\modd{\Delta_{t}}}, \] where the excited state lifetime is $1/2\gamma_{e}$ (on the order of $10$ ns for \isotope{Rb}{87} \cite{Steck2003}), $\Gamma_{GA}$ is the coupling rate to the effective bath and $\bar{n}_{GA}$ is the phonon number characterising this bath. We may easily satisfy $\Gamma_{GA}\bar{n}_{GA} \ll \Gamma_{a}/2$ because we operate in the far-detuned and bad-cavity limits. Furthermore, independent scattering from each atom results in suppression of the centre of mass diffusion coefficient by a factor of $1/\sqrt{N}$. Heating due to Gordon-Ashkin processes may therefore be safely omitted from our model. Incorporating these imperfections into the model will leave the essential conclusions of this paper unchanged. Qualitatively, the chief difference is that the near-ground-cooled `tongue' evident in Fig.~\ref{Fig:Sympathetic} and Fig.~\ref{Fig:CombinedCooling} does not extend to arbitrarily high $c_{a}$ (at low $c_{m}$): at some point the atomic back-action heats the mechanical oscillator out of the ground state. We then expect, as predicted by \cite{Hammerer2010} and \cite{Vogell2013}, that the ground state may be approached by sympathetic cooling if both the atoms and the mechanics operate in the regime where the back-action and thermal (plus zero-point) noises are approximately equal.
\section[\hspace{3cm} Correlators]{Correlators} \label{Sec:Correlators} If the typical thermal timescale $\hbar / k_{B}T_{B,m}$ is small compared to the mechanical period it is possible to form the correlator \cite{Genes2008a} \begin{equation*} \expect{\tilde{F}_{TH}\left(\omega\right)\tilde{F}_{TH}\left(\omega^{\prime}\right)} = 4\pi\hbar\Gamma_{m}\omega_{m}M\left(\bar{n}_{B,m}+\half\right)\delta\left(\omega+\omega^{\prime}\right). \end{equation*} $\bar{n}_{B,m}$ is understood to be the mean number of excitations in the oscillator when in thermal equilibrium with its bath. When forming the product $\expect{\tilde{x}_{m}\ensuremath{\left(\omega\right)}\tilde{x}_{m}\Br{\omega^{\prime}}}$ the following correlators also arise; \begin{eqnarray*} \expect{\tilde{F}_{CB}\left(\omega\right)\tilde{F}_{CB}\left(\omega^{\prime}\right)} & = & 4\pi\hbar\Gamma_{a}Nm\omega_{a}\half\;\delta\Br{\omega+\omega^{\prime}}, \\ \expect{\tilde{F}_{BA,m}\left(\omega\right)\tilde{F}_{BA,m}\left(\omega^{\prime}\right)} & = & 4\pi\hbar\Gamma_{m}M\omega_{m}c_{m}\;\delta\Br{\omega+\omega^{\prime}}, \\ \expect{\tilde{F}_{SN}\left(\omega\right)\tilde{F}_{BA,m}\left(\omega^{\prime}\right)} & = & 4\pi\hbar\Gamma_{m}M\omega_{m}\frac{G}{4}\frac{\omega/\omega_{m}}{1-\mathrm{i}\omega/\Delta\omega_{FB}}\delta\Br{\omega+ \omega^{\prime}}, \\ \expect{\tilde{F}_{SN}\left(\omega\right)\tilde{F}_{SN}\left(\omega^{\prime}\right)} & = & -4\pi\hbar\Gamma_{m}\omega_{m}M\left(\frac{G^{2}}{4^{2}\eta c_{m}}\right) \times \\ & & \phantom{-}\frac{\omega\omega^{\prime}}{\omega_{m}^{2}}\frac{\delta\left(\omega+\omega^{\prime}\right)}{\left(1-\mathrm{i}\omega/\Delta\omega_{FB}\right)\left(1-\mathrm{i}\omega^{\prime}/\Delta\omega_{FB}\right)}. \\ \end{eqnarray*} It is important to recall that, for any observables $\hat{A}$ and $\hat{B}$, $\expect{\tilde{A}\ensuremath{\left(\omega\right)}\tilde{B}\left(\omega^{\prime}\right)} = \expect{\tilde{B}\left(-\omega^{\prime}\right)\tilde{A}\left(-\omega\right)}^{*}$. We note also that the commutation relations ensure that for any vacuum field $\expect{\delta\tilde{Z}^{\pm}\left(\omega\right)\delta\tilde{Z}^{\mp}\left(\omega^{\prime}\right)} = \pm 2\pi \mathrm{i} \delta(\omega + \omega^{\prime})$. All correlators not obtained from those above vanish.

\section[\hspace{3cm} Useful integrals]{Useful integrals} \label{Sec:Maths} The following integrals arise in the analytical evaluation of $\expect{\ensuremath{\hat{x}}_{m}^{2}}$ (\textit{cf}.{} Eqn~\eqref{Eqn:VarianceGeneral}); \begin{eqnarray*} \frac{\pi}{\Gamma\Omega^{2}} & = & \intinf{\omega} \frac{1}{\left(\omega^{2}-\Omega^{2}\right)^{2}+\Gamma^{2}\omega^{2}}, \\ \frac{\pi}{\Gamma} & = & \intinf{\omega} \frac{\omega^{2}}{\left(\omega^{2}-\Omega^{2}\right)^{2}+\Gamma^{2}\omega^{2}}.
\end{eqnarray*}

\section[\hspace{3cm} Approximations to the mechanical susceptibility]{Approximations to the mechanical susceptibility} \label{Sec:ApproxAnalytical} As noted in $\S$~\ref{Sec:AtomsAlone}, the primary challenge in analytically determining $\expect{\ensuremath{\hat{x}}_{m}^{2}}$ is finding suitable approximations to \begin{eqnarray*} \intinf{\omega} & & \modd{\chi_{m}^{\prime}}^{2},\\ \intinf{\omega} & & \modd{\chi_{m}^{\prime}}^{2}\modd{\chi_{a}}^{2} \end{eqnarray*} in the weak and strong coupling regimes. The derivation of an approximation to the former in the weak coupling limit is given in $\S$~\ref{Sec:AtomsAlone} (plotted in Fig.~\ref{Fig:SusceptibilityApproximations}~A). In the same limit the latter integral may be evaluated by using the fact that the majority of the spectral variance is concentrated near the resonance frequency $\Omega$. We therefore replace $\modd{\chi_{m}^{\prime}}^{2}\modd{\chi_{a}}^{2}$, a product of two approximately Lorentzian functions, with a single (approximate) Lorentzian, the linewidth of which is chosen to give an accurate fit in the region $\omega \approx \Omega$. We find that \begin{equation*} \modd{\chi_{m}^{\prime}}^{2}\modd{\chi_{a}}^{2} \approx \Sq{\frac{1}{NmM\Omega\Gamma_{ser}}}^{2}\frac{1}{\Br{\Omega^{2}-\omega^{2}}^{2}+\omega^{2}\Gamma_{par}^{2}} \end{equation*} is a suitable replacement (depicted in Fig.~\ref{Fig:SusceptibilityApproximations}~B). The linewidth of this function is the `parallel sum' \begin{equation*} \Gamma_{par} = \Br{\frac{1}{\Gamma_{a}}+\frac{1}{\Gamma_{m}\Br{1+c}}}^{-1} \end{equation*} of the atomic and (enhanced) mechanical motion decay rates, rather than the `serial sum' $\Gamma_{ser} = \Sq{\Gamma_{a}+\Gamma_{m}\Br{1+c}}$; this is necessary to obtain the correct behaviour when $\Gamma_{m}\Br{1+c} \sim \Gamma_{a}$. Numerical integration confirms that this replacement faithfully reproduces the true value of $\intinf{\omega} \modd{\chi_{m}^{\prime}}^{2}\modd{\chi_{a}}^{2}$, despite decaying more slowly than the true integrand as $\omega\rightarrow \infty$. In the regime that $\Omega/2 > g \gg \max\Cu{\Gamma_{m}, \Gamma_{a}}$ the atoms no longer adiabatically follow the motion of the mechanical oscillator; instead, the coherent interaction of the two resonators leads to hybridisation of the mechanical modes. Splitting of the susceptibility $\modd{\chi_{m}^{\prime}}^{2}$ into two peaks, evident in Fig.~\ref{Fig:SusceptibilityApproximations}~C, is a clear signature of hybridisation. Since the two peaks of the susceptibility account for the majority of the spectral variance we set \begin{equation} \modd{\chi_{m}^{\prime}}^{2} \approx \modd{\chi_{+}}^{2} + \modd{\chi_{-}}^{2} \label{Eqn:IntegralApproxDoublePeak} \end{equation} where the symmetric ($+$) and antisymmetric ($-$) modes have susceptibilities \begin{equation} \chi_{\pm}^{-1} = M_{\pm}\Sq{\Omega^{2}\Br{1\mp\frac{2g}{\Omega}}-\omega^{2}-\mathrm{i}\omega\Gamma_{N}}. \label{Eqn:PMSusceptibility} \end{equation} Note that when $g \ll \Omega/2$ the splitting between the symmetric and antisymmetric peaks is $2g$. The linewidth $\Gamma_{N}$ is the mean of $\Gamma_{m}$ and $\Gamma_{a}$, and the effective masses $M_{\pm} = 2M\Br{1-g^{2}/\Omega^{2}}$ may be obtained by considering the zero-frequency behaviour of $\modd{\chi_{m}^{\prime}}^{2}$.
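For reference, each of the normal-mode terms appearing in Eqn~\eqref{Eqn:IntegralApproxDoublePeak} can be integrated exactly using the results of \ref{Sec:Maths}: writing $\Omega_{\pm}^{2} = \Omega^{2}\Br{1\mp 2g/\Omega}$ for the normal-mode frequencies of Eqn~\eqref{Eqn:PMSusceptibility}, \begin{equation*} \intinf{\omega} \modd{\chi_{\pm}}^{2} = \frac{\pi}{M_{\pm}^{2}\Gamma_{N}\Omega_{\pm}^{2}}, \end{equation*} and it is these two contributions, weighted by the correlators of \ref{Sec:Correlators}, that lead to Eqn~\eqref{Eqn:VarianceStrong}.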
It is usually possible to ignore the strong suppression of the susceptibility at $\omega \approx \Omega$ (\textit{cf}.{} Fig.~\ref{Fig:SusceptibilityApproximations}~C), due to interference of the normal modes, without significantly altering the value of the integral. Noise acting on the atomic local mode appears in $S\sb{x_{m}x_{m}}\Sq{\omega}$ as a sharp peak at $\omega \approx \Omega$; fortunately, the aforementioned interference of the normal modes ensures that the product $\modd{\chi_{a}}^{2}\modd{\chi_{m}^{\prime}}^{2}$ remains dominated by the twin-peaked susceptibility, as in Fig.~\ref{Fig:SusceptibilityApproximations}~D. Thus we again note that the majority of the spectral variance lies close to the maxima, motivating the substitution \begin{equation} \modd{\chi_{a}}^{2}\modd{\chi_{m}^{\prime}}^{2} \approx \frac{\modd{\chi_{+}}^{2} + \modd{\chi_{-}}^{2}}{\Br{2Nm\Omega g}^{2}}. \label{Eqn:IntegralApproxProduct} \end{equation} The scaling factor $\Br{2Nm\Omega g}^{-2}$ accounts for the different masses of the normal and local modes. This approximate integrand yields quite accurate results, as confirmed by numerical integration. Combining Eqn~\eqref{Eqn:IntegralApproxDoublePeak} and Eqn~\eqref{Eqn:IntegralApproxProduct}, we find that the variance is given by \begin{equation*} \frac{\expect{\ensuremath{\hat{x}}_{m}^{2}}}{x_{zp,m}^{2}} = \half\Sq{\frac{\Gamma_{m}}{\Gamma_{N}}\Br{\bar{n}_{B,m} + c_{m}} + 1}{\Br{1-\frac{g^{2}}{\Omega^{2}}}^{-2}\Br{1-\frac{4g^{2}}{\Omega^{2}}}}^{-1} \end{equation*} as previously stated in $\S$~\ref{Sec:AtomsAlone}. \begin{figure} \caption{\label{Fig:SusceptibilityApproximations}} \end{figure} \end{document}
\begin{document} \title{Trend to equilibrium for a reaction-diffusion system modelling reversible enzyme reaction} \begin{abstract} A spatio-temporal evolution of chemicals appearing in a reversible enzyme reaction and modelled by a four component reaction-diffusion system with the reaction terms obtained by the law of mass action is considered. The large time behaviour of the system is studied by means of entropy methods. \\ \noindent \textit{Keywords: enzyme reaction; reaction-diffusion system; trend to equilibrium; entropy; duality method} \end{abstract}

\section{Introduction} In eukaryotic cells, responses to a variety of stimuli consist of chains of successive protein interactions where enzymes play significant roles, mostly by accelerating reactions. Enzymes are catalysts that facilitate a conversion of molecules (generally proteins) called substrates into other molecules called products, but they themselves are not changed by the reaction. In the reaction scheme proposed by Michaelis and Menten~\cite{Michaelis-1913} in 1913, an enzyme $E$ converts a substrate $S$ into a product $P$ through a two step process, schematically written as \begin{equation} \label{eq:ER:s1:00} S + E \underset{k_{-}}{\overset{k_{+}}{\rightleftharpoons}} C \overset{k_{p+}}{\longrightarrow} E + P, \end{equation} where $C$ is an intermediate complex and $k_{+}, k_{-}$ and $k_{p+}$ are positive kinetic rates of the reaction. In 1925, the enzyme reaction~(\ref{eq:ER:s1:00}) was analysed by Briggs and Haldane~\cite{Briggs-1925} by using ordinary differential equations (ODE) derived from mass action kinetics. In their quasi-steady state approximation (QSSA), the complex is assumed to reach a steady state quickly, i.e., there is no change in its concentration $\nc=[C]$ in time ($\textnormal{d} \nc/\textnormal{d} t = 0$). The analysis yields an algebraic expression, the so-called Michaelis-Menten function, for $\nc$ and a simple, though nonlinear, ODE for the substrate's concentration $\ns=[S]$. The kinetics of enzyme reactions described by Briggs and Haldane is sometimes called the Michaelis-Menten kinetics. Further details on the existing approximation techniques can be found in \cite{Cornish-Bowden-2012, Schnell-2000, Schnell-2002, Segel-1989}, a validation of the QSSA also in \cite{Perthame-LN}. Many important reactions in biochemistry are, however, reversible in the sense that a significant amount of the product $P$ exists in the reaction mixture due to a reaction of $P$ with the enzyme $E$,~\cite{Cornish-Bowden-2012}. Therefore, the Michaelis-Menten mechanism~(\ref{eq:ER:s1:00}) is incomplete for these reactions and should rather be replaced by \begin{equation} \label{eq:ER:s2:09} S + E \underset{k_{-}}{\overset{k_{+}}{\rightleftharpoons}} C \underset{k_{p-}}{\overset{k_{p+}}{\rightleftharpoons}} E + P. \end{equation} Almost all mathematical modelling of the enzyme reactions~(\ref{eq:ER:s1:00}) and~(\ref{eq:ER:s2:09}) is done using ODE approaches, \cite{Cornish-Bowden-2012, Schnell-2000, Schnell-2002, Segel-1989}. However, protein pathways occur in living cells, whose (heterogeneous) spatial structure has an impact on the enzyme efficiency and the speed of enzyme reactions. In this paper, a spatial reaction-diffusion system for the reversible enzyme reaction~(\ref{eq:ER:s2:09}) is studied without any kind of approximation.
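For orientation, the quasi-steady state reduction recalled above takes the standard textbook form \begin{equation*} \nc \approx \dfrac{e_0 \ns}{K_M + \ns}, \qquad \dfrac{\textnormal{d} \ns}{\textnormal{d} t} \approx - \dfrac{k_{p+} e_0 \ns}{K_M + \ns}, \qquad K_M = \dfrac{k_{-}+k_{p+}}{k_{+}}, \end{equation*} where $e_0 = \ne + \nc$ denotes the (conserved) total enzyme concentration; no reduction of this kind will be used in what follows.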
More precisely, we will consider a system of four equations for the concentrations $n_i$, $\isecp$, for the species appearing in~(\ref{eq:ER:s2:09}) with the reaction terms obtained by the law of mass action. Moreover, we assume that the species can diffuse freely and randomly (modelled by linear diffusion) with constant diffusion rates. Thus, we consider the system \begin{equation} \label{eq:ER:s2:10} \begin{aligned} \dfrac{\partial\ns}{\partial t} - D_S \Delta \ns & = k_{-} \nc - k_{+} \ns \ne,\\ \dfrac{\partial \ne}{\partial t} - D_E \Delta \ne &= (k_{-}+k_{p+}) \nc - k_{+} \ns \ne - k_{p-}\ne\np,\\ \dfrac{\partial\nc}{\partial t} - D_C \Delta \nc & = k_{+} \ns \ne - (k_{-}+k_{p+}) \nc + k_{p-}\ne\np, \\ \dfrac{\partial\np}{\partial t} - D_P \Delta \np &= k_{p+} \nc - k_{p-}\ne\np. \end{aligned}\end{equation} It is assumed that for each $\isecp$ $n_i = n_i(t,x)$ is defined on an open, bounded domain $\Omega \subset \mathbb{R}^d$ with a sufficiently smooth boundary $\partial \Omega$ (e.g., $C^2$) and a time interval $I = [0,T]$ for $0<T<\infty$, $Q_T = I \times \Omega$. Without loss of generality, we assume $\vert \Omega \vert = 1$. The diffusion coefficients $D_i$ are supposed to be positive constants, possibly different from each other. Further, we assume that there exist nonnegative measurable functions $n_i^0$ such that \begin{equation} \label{eq:ER:s2:11} n_i(0,x) = n_i^0(x) \; \text{ in } \Omega, \quad \int_{\Omega} n_i^0(x) \, \textnormal{d}x > 0, \quad \forall \, \isecp. \end{equation} Finally, the system is coupled with the zero-flux boundary conditions \begin{equation} \label{eq:ER:s2:12} \nabla n_i \cdot \nu = 0, \quad \forall t \in I, \; x \in \partial \Omega, \; \isecp, \end{equation} where $\nu$ is a unit normal vector pointed outward from the boundary $\partial \Omega$. Two linearly independent conservation laws can be observed, in particular, \begin{eqnarray} & {\displaystyle \int_{\Omega}} (\ne + \nc)(t,x) \, \textnormal{d}x = {\displaystyle \int_{\Omega}} (\ne^0+\nc^0)(x) \, \textnormal{d}x = M_1, \label{eq:ER:s2:02} \\ & {\displaystyle \int_{\Omega}} (\ns + \nc + \np)(t,x) \, \textnormal{d}x = {\displaystyle \int_{\Omega}} (\ns^0 + \nc^0 + \np^0) (x) \, \textnormal{d}x = M_2, \label{eq:ER:s2:03} \end{eqnarray} for each $t \ge 0$, where $M_1 >0, M_2 > 0$. These identities follow upon adding the equations for $\ne$ and $\nc$ (respectively for $\ns$, $\nc$ and $\np$) in~(\ref{eq:ER:s2:10}), in which the reaction terms cancel, integrating over $\Omega$ and using the boundary conditions~(\ref{eq:ER:s2:12}). Note that there is often $M_1 \ll M_2$ \cite{Cornish-Bowden-2012}; however, we will not assume any relation between $M_1$ and $M_2$. The conservation laws~(\ref{eq:ER:s2:02}) and~(\ref{eq:ER:s2:03}) imply uniform $L^1$ bounds on the solutions of~({\ref{eq:ER:s2:10}})-({\ref{eq:ER:s2:12}}) which are insufficient to conclude the existence of global solutions. A global weak solution in all space dimensions ($d \ge 1$), however, can be deduced from a combination of a duality argument (see Appendix), which provides estimates on the (at most quadratic) nonlinearities of the system, and an approximation method developed in \cite{PSch, DFPV}, which justifies rigorously the existence of the weak solution to~(\ref{eq:ER:s2:10})-({\ref{eq:ER:s2:12}}) built up from the solutions of the approximated system. The existence of the global weak solution with the total mass conserved by means of~(\ref{eq:ER:s2:02}) and~(\ref{eq:ER:s2:03}) can be shown constructively by the semi-implicit (Rothe) method \cite{Roubicek-2013, EliasThesis}, a method suitable for numerical simulations.
We also refer to~\cite{Bothe-2015} where a proof of the existence of the unique, global-in-time solution to~(\ref{eq:ER:s2:10})-({\ref{eq:ER:s2:12}}) with the concentration dependent diffusivities and $d \le 9$ is obtained by a combination of the duality and bootstrapping arguments. Therefore, we do not give any rigorous results on the existence of solutions; instead, we focus on the large time behaviour of the solution to its equilibrium as $t \to \infty$. However, we derive a-priori estimates which make all the integrals that appear (e.g., the entropy functional) well defined. In particular, by a direct application of a duality argument (see Appendix), we deduce that whenever $n_i^0 \in L^2(\log L)^2(\Omega)$, then $n_i \in L^2(\log L)^2(Q_T)$ for each $0< T <\infty$, $\isecp$. With the $L^2(\log L)^2$ estimates at hand, the solution $n_i$ for $\isecp$ can be shown, as in \cite{Bothe-2015}, to belong to $L^{\infty}((0,\infty)\times \overline{\Omega})$ by using the properties of the heat kernel combined with a bootstrapping argument and by assuming sufficiently regular initial data and $\partial \Omega$. Thus, we can deduce the global-in-time existence of the classical solution (that is, a bounded solution which has classical derivatives at least \emph{a.e.} and for which the equations in~({\ref{eq:ER:s2:10}}) are understood pointwise) by the standard results for reaction-diffusion systems \cite{Lady1968}. The main result of this paper is a quantitative analysis of the large time behaviour of the solution $n_i$, $\isecp$, to~({\ref{eq:ER:s2:10}})-({\ref{eq:ER:s2:12}}). It can be stated as follows: \begin{theorem} \label{theorem_ER_02} Let $(n_S, n_E, n_C, n_P)$ be a solution to~({\ref{eq:ER:s2:10}})-({\ref{eq:ER:s2:12}}) satisfying~(\ref{eq:ER:s2:02}) and~(\ref{eq:ER:s2:03}). Then there exist two explicitly computable constants $C_1$ and $C_2$ such that \begin{equation} \label{eq:ER:s2:27} \sum_{i\in\{S,E,C,P\}} \Vert n_i - n_{i, \infty} \Vert_{L^1(\Omega)}^2 \le C_2 e^{ -C_1 t} \end{equation} where $n_{i, \infty}$ is the unique, positive, detailed balance steady state defined in~(\ref{eq:ER:s2:18}). \end{theorem} \noindent In other words, we show $L^1$ convergence of the solution $n_i$, $\isecp$, of~({\ref{eq:ER:s2:10}})-({\ref{eq:ER:s2:12}}) to its respective steady state $n_{i,\infty}$, $\isecp$, at the rate $C_1/2$. We remark that by following the general theory of the detailed balance systems, e.g., \cite{Fellner-2016a} and references therein, there exists a unique detailed balance equilibrium to the system~({\ref{eq:ER:s2:10}})-({\ref{eq:ER:s2:12}}) satisfying the conservation laws \begin{equation} \label{eq:ER:s2:14} \neR + \ncR = M_1, \quad \nsR + \ncR + \npR = M_2, \end{equation} and the detailed balance conditions \begin{equation} \label{eq:ER:s2:15} k_{-}\ncR = k_{+}\nsR\neR, \quad k_{p+}\ncR = k_{p-}\npR\neR. \end{equation} It is easy to show that the unique, strictly positive equilibrium $\mathbf{n}_{\infty} = (\nsR,\neR,\ncR,\npR)$ is then \begin{equation}\label{eq:ER:s2:18} \begin{aligned} & \ncR = \dfrac{1}{2}\left(M+K - \sqrt{(M+K)^2-4 M_1M_2} \right), \\ \neR & = M_1 - \ncR, \quad \nsR = \dfrac{k_{-}\ncR}{k_{+}\neR}, \quad \npR = \dfrac{k_{p+}\ncR}{k_{p-}\neR}, \end{aligned} \end{equation} where $M=M_1+M_2$ and $K = k_{-}/k_{+}+k_{p+}/k_{p-}$.
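Indeed, eliminating $\nsR$, $\neR$ and $\npR$ from~(\ref{eq:ER:s2:14}) by means of~(\ref{eq:ER:s2:15}) shows that $\ncR$ solves the quadratic equation \begin{equation*} \ncR^2 - (M+K)\,\ncR + M_1 M_2 = 0, \end{equation*} and the smaller root is the one appearing in~(\ref{eq:ER:s2:18}): since the value $M_1$ lies strictly between the two roots, only the smaller root is compatible with $\neR = M_1 - \ncR > 0$.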
Theorem~\ref{theorem_ER_02} is proved by means of entropy methods, which are based on the idea of measuring the distance between the solution and the stationary state by the (monotone in time) entropy of the system. This entropy method has been developed mainly in the framework of the scalar diffusion equations and the kinetic theory of the spatially homogeneous Boltzmann equation, see \cite{Arnold1, Carrillo-2001, Villani-2003} and references therein. The method has already been used to obtain explicit rates for the exponential decay to equilibrium in the case of reaction-diffusion systems modelling chemical reactions $2 A_1 \rightleftharpoons A_2$, $A_1 + A_2 \rightleftharpoons A_3$, $A_1 + A_2 \rightleftharpoons A_3 + A_4$ and $A_1 + A_2 \rightleftharpoons A_3 \rightleftharpoons A_4 + A_5$ in \cite{DF2, DF3, DF1, Fellner-2016a}. The large time behaviour of a solution to a general detailed balance reaction-diffusion system comprising $R$ reversible reactions involving $N$ chemicals, \begin{equation}\label{eq:ER:r1:01} \alpha^j_1 A_1 + \ldots + \alpha^j_N A_N \rightleftharpoons \beta^j_1 A_1 + \ldots + \beta^j_N A_N \end{equation} with the nonnegative stoichiometric coefficients $\alpha^j_1, \ldots, \alpha^j_N$, $\beta^j_1, \ldots, \beta^j_N$, for $j = 1, \ldots, R$, was also studied in \cite{Fellner-2016a}. However, the convergence rates could not be explicitly calculated without knowing the explicit structure of the mass conservation laws in the general case. The present paper extends the application of the proposed entropy method to the reversible enzyme reaction~(\ref{eq:ER:s2:09}), which comprises two coupled reversible reactions. The difficulty comes from a chemical (the enzyme) that appears in both reactions, which makes~(\ref{eq:ER:s2:09}) different from the reaction $A_1 + A_2 \rightleftharpoons A_3 \rightleftharpoons A_4 + A_5$ studied in~\cite{Fellner-2016a}, in particular, in the structure of the conservation laws that is essential in the computation of the rates of convergence. Further, even though the convergence rates are obtained through a chain of rather simple but tedious calculations in \cite{DF2, DF3, DF1, Fellner-2016a}, we simplify them by means of an inequality~(\ref{eq:ER2:01}) in Lemma~\ref{lemma_ER_01}. In particular, if we denote $N_i = \sqrt{n_i}$, $N_{i,\infty} = \sqrt{n_{i,\infty}}$ and $\overline{N_i} = \int_{\Omega} N_i(x)\, \textnormal{d}x$ for some chemical $n_i$ and its equilibrium $n_{i,\infty}$, the expansion used in \cite{DF2, DF3, DF1, Fellner-2016a} (cf. equation~(2.29) in \cite{Fellner-2016a}) to measure the distance between $\overline{N_i}$ and $N_{i,\infty}$ is of the form \[\overline{N_i} = N_{i,\infty}(1+\mu_i) - \dfrac{\overline{N^2_i} - \overline{N_i}^2}{\sqrt{\overline{N^2_i}}+\overline{N_i}} \] for some constant $\mu_i \ge -1$. The fraction in this expansion may become unbounded when $\overline{N^2_i}$ approaches zero, which has to be carefully treated. On the other hand, Lemma~\ref{lemma_ER_01} allows a different expansion that consequently leads to easier calculations. For the sake of completeness, a different approach based on a convexification argument is used in \cite{Mielke-2014} to study the large time behaviour of the reaction-diffusion system for~(\ref{eq:ER:r1:01}). However, it is difficult to derive explicit convergence rates even for slightly more complex chemical reactions such as~(\ref{eq:ER:s2:09}) by using this convexification argument.
First order chemical reaction networks have been recently analysed in \cite{Fellner-2015a}. The rest of the paper is organised as follows. In Section \ref{sec:entropy} we introduce entropy and entropy dissipation functionals and provide first estimates including $L^2$ and $L^2(\log L)^2$ bounds. A main ingredient for the a-priori estimates is a duality argument that is presented in the Appendix. The large time behaviour of the solution as $t \to \infty$ studied by the entropy method is given in Section~\ref{sec:trend}.

\section{Entropy, entropy dissipation and a-priori estimates} \label{sec:entropy} Let us first mention a simple result on the non-negativity of solutions of~({\ref{eq:ER:s2:10}})-({\ref{eq:ER:s2:12}}) which follows from the so-called quasi-positivity property of the right hand sides of~({\ref{eq:ER:s2:10}}), see~\cite{Pierre1}. \begin{lemma} \label{theorem_ER_03} Let $n_i^0 \ge 0$ in $\Omega$, then $n_i \ge 0$ everywhere in $Q_T$ for each $\isecp$. \end{lemma} In the sequel, we write $\mathbf{n} = (\ns,\ne,\nc,\np)$ for short. The entropy functional $E(\textbf{n}) : [0, \infty)^4 \to [0, \infty)$ and the entropy dissipation $D(\textbf{n}) : [0, \infty)^4 \to [0, \infty)$ are defined, respectively, by \begin{equation} \label{eq:ER:s2:100} E(\textbf{n}) = \sum_{i \in \{S,E,C,P\}} \int_{\Omega} n_i\log (\sigma_i n_i) - n_i +1/\sigma_i \, \textnormal{d}x \end{equation} and \begin{equation} \label{eq:ER:s2:101} \begin{aligned} D(\textbf{n}) = & \sum_{i \in \{S,E,C,P\}} 4 D_i \int_{\Omega} \vert \nabla \sqrt{n_i} \vert^2 \, \textnormal{d}x\\ & + \int_{\Omega} \left[ \left( k_{+}\ns\ne - k_{-}\nc \right)\left( \log(\sigma_S \sigma_E \ns \ne) - \log(\sigma_C \nc) \right) \right. \\ & \left.+ \left( k_{p-}\ne\np - k_{p+}\nc \right)\left( \log(\sigma_E \sigma_P \ne \np) - \log(\sigma_C \nc) \right) \right] \, \textnormal{d}x, \end{aligned}\end{equation} where $\sigma_S$, $\sigma_E$, $\sigma_C$ and $\sigma_P$ depend on the kinetic rates. The first integral of the entropy dissipation~(\ref{eq:ER:s2:101}) is known as the relative Fisher information in information theory and as the Dirichlet form in the theory of large particle systems, since \[ 4 \int \vert \nabla \sqrt{n_i} \vert^2 = \int \dfrac{\vert \nabla n_i \vert^2}{n_i} = \int n_i \left| \nabla \left( \log n_i \right) \right|^2,\] see \cite{Villani-2003}, p. 278. Note that the function $x\log x-x+1$ is nonnegative and strictly convex on $[0,\infty)$. Thus, the entropy $E(\textbf{n})$ is nonnegative along the solution $\mathbf{n}(t, \cdot)$ for each $t \ge 0$. Also, the entropy dissipation $D(\textbf{n})$ is nonnegative along the solution $\mathbf{n}(t, \cdot)$ for $\alpha, \beta > 0$ such that \begin{equation} \label{eq:ER:s2:13} \begin{aligned} \sigma_C = \alpha k_{-}, \quad \sigma_S\sigma_E = \alpha k_{+}, \\ \sigma_C = \beta k_{p+}, \quad \sigma_E\sigma_P = \beta k_{p-}. \end{aligned}\end{equation} Indeed, with~(\ref{eq:ER:s2:13}) the last two integrands in~(\ref{eq:ER:s2:101}) have the form $(x-y)(\log x - \log y)$, which is nonnegative for all $x,y \in \mathbb{R}_{+}$. One can choose $\alpha=1$ and $\beta=k_{-}/k_{p+}$ to obtain \begin{equation} \label{eq:sig} \sigma_C=\sigma_E=k_{-}, \; \sigma_S=\dfrac{k_{+}}{k_{-}} \; \textnormal{ and } \; \sigma_P=\dfrac{k_{p-}}{k_{p+}}, \end{equation} though other options are possible.
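With the choice~(\ref{eq:sig}) the entropy production can be computed explicitly: differentiating~(\ref{eq:ER:s2:100}) along a (sufficiently smooth) solution of~({\ref{eq:ER:s2:10}})-({\ref{eq:ER:s2:12}}), integrating by parts and using the boundary conditions gives \begin{equation*} \dfrac{\textnormal{d}}{\textnormal{d} t} E(\textbf{n}) = \sum_{i \in \{S,E,C,P\}} \int_{\Omega} \log (\sigma_i n_i) \, \dfrac{\partial n_i}{\partial t} \, \textnormal{d}x = - D(\textbf{n}), \end{equation*} the diffusion terms producing the Fisher information and the reaction terms recombining into the last two integrals of~(\ref{eq:ER:s2:101}).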
This identity between entropy and entropy dissipation, $D(\textbf{n}) = - \partial_t E(\textbf{n})$, implies that $E(\textbf{n})$ is decreasing along the solution $\mathbf{n}(t, \cdot)$ and that there exists a limit of $E(\mathbf{n}(t,\cdot))$ as $t \to \infty$. By integrating this simple relation over $[t_1, \, t_2]$ ($t_2 > t_1 > 0$) we obtain \[ E(\textbf{n}(t_1,x)) - E(\textbf{n}(t_2,x)) = \int_{t_1}^{t_2} D(\textbf{n}(s,x)) \textnormal{d}s \] which implies that \begin{equation}\label{eq:ER:s2:d5} \lim_{t \to \infty}\int_t^{\infty} D(\textbf{n}(s,x)) \textnormal{d}s = 0. \end{equation} Hence, if the solution $\textbf{n}(t,x)$ tends to some $\textbf{n}_{\infty}(x)$ as $t \to \infty$, then $D(\textbf{n}_{\infty}(x))=0$ and $\textbf{n}_{\infty}$ is spatially homogeneous due to the Fisher information in~(\ref{eq:ER:s2:101}). In fact, it holds that \begin{equation}\label{eq:ER:s2:d1} D(\textbf{n}(t,x)) = 0 \Longleftrightarrow \textbf{n}(t,x) = \textbf{n}_{\infty} \end{equation} where $\textbf{n}_{\infty}$ is given by~(\ref{eq:ER:s2:14}) and~(\ref{eq:ER:s2:15}). Let us remark that the entropy $E(\textbf{n})$ is a ``D-diffusively convex Lyapunov functional'', which implies that the diffusion added to systems of ODEs is irrelevant to their long-term dynamics and that there cannot exist any other (non-constant) equilibrium to~({\ref{eq:ER:s2:10}})-({\ref{eq:ER:s2:12}}) than~(\ref{eq:ER:s2:18}), \cite{Fitzgibbon-1997}. Further, we can write \begin{equation}\label{eq:ER:s2:d3} E(\textbf{n}(t,x)) + \int_0^t D(\textbf{n}(s,x)) \textnormal{d}s = E(\textbf{n}(0,x)). \end{equation} Since the entropy and entropy dissipation are both nonnegative we can deduce from~(\ref{eq:ER:s2:d3}) that \begin{equation}\label{eq:ER:s2:d4a} \sup_{t\in[0,\infty)} \Vert n_i \log n_i \Vert_{L^1(\Omega)} \le C, \end{equation} i.e., $n_i \in L^{\infty}([0,\infty); L (\log L)(\Omega))$ for each $\isecp$, and \begin{equation}\label{eq:ER:s2:d4c} \Vert \nabla \sqrt{n_i} \Vert^2_{L^{2}([0,\infty); L^2(\Omega,\mathbb{R}^d))} \le C, \end{equation} i.e., $\sqrt{n_i} \in L^{2}([0,\infty); W^{1,2}(\Omega))$ for each $\isecp$. In addition to the above estimates, let us introduce nonnegative entropy density variables $z_i = n_i \log (\sigma_i n_i) - n_i +1/ \sigma_i$. Then, the system~({\ref{eq:ER:s2:10}})-({\ref{eq:ER:s2:12}}) becomes \begin{equation}\label{eq:ER:s4:01} \dfrac{\partial z}{\partial t} - \Delta (Az) \le 0 \textnormal{\; in \;} \Omega, \quad \nabla (Az) \cdot \nu = 0 \textnormal{\; on \;} \partial \Omega \end{equation} where $z = \sum_i z_i$, $z_d = \sum_i D_i z_i$ (the sums run over $i\in \{S,E,C,P\}$) and $A = z_d/z \in \left[ \underline{D}, \overline{D} \right]$ for $\underline{D} = \min_{i\in \{S,E,C,P\}}\{ D_i\}$ and $\overline{D} = \max_{i\in \{S,E,C,P\}}\{ D_i\}$. Indeed, after some algebra we obtain \begin{equation} \label{eq:ER:s4:02} \begin{aligned} \dfrac{\partial z}{\partial t} - \Delta (Az) = &- \sum_i D_i \dfrac{\vert \nabla n_i \vert^2}{n_i} \\ & - \left( k_{+}\ns\ne - k_{-}\nc \right)\left( \log(\sigma_S \sigma_E \ns \ne) - \log(\sigma_C \nc) \right) \\ & - \left( k_{p-}\ne\np - k_{p+}\nc \right)\left( \log(\sigma_E \sigma_P \ne \np) - \log(\sigma_C \nc) \right) \end{aligned} \end{equation} where the \textit{r.h.s.} of~(\ref{eq:ER:s4:02}) is nonpositive for the $\sigma_i$'s given by~(\ref{eq:sig}). The boundary condition in~(\ref{eq:ER:s4:01}) can also be easily verified.
Hence, a duality argument developed in \cite{PSch, Pierre1} and reviewed in the Appendix implies for each $j \in \{S,E,C,P\}$ that \begin{equation}\label{eq:ER:s2:47} \Vert n_j \log (\sigma_j n_j)-n_j+1/\sigma_j \Vert_{L^2(Q_T)} \le C \left\| \sum_{i} n_i^0 \log (\sigma_i n_i^0)-n_i^0+1/\sigma_i \right\|_{L^2(\Omega)} \end{equation} where $C = C(\Omega, \underline{D}, \overline{D}, T)$. We deduce from~(\ref{eq:ER:s2:47}) that $n_i\in L^2(\log L)^2(Q_T)$ as soon as $n_i^0 \in L^2(\log L)^2(\Omega)$ for each $\isecp$. Note that a function $v \in L^2(\log L)^2(\Omega)$ is a measurable function such that $\int_{\Omega} v^2 (\log v)^2 \, \textnormal{d}x$ is finite. Moreover, the same duality argument implies $L^2(Q_T)$ bounds by taking into account $n_i^0 \in L^2(\Omega)$ for each $\isecp$ and $z = n_S + n_E + 2n_C + n_P$, $z_d = D_S n_S + D_E n_E + 2D_C n_C + D_P n_P$ and $A = z_d/z$, for which we directly obtain~(\ref{eq:ER:s4:01}).

\section{Exponential convergence to equilibrium: an entropy method} \label{sec:trend} Let us first describe briefly a basic idea of the method. Consider an operator $A$, which can be linear or nonlinear and can involve derivatives or integrals, and an abstract problem \[\partial_t \rho = A \rho.\] Assume that we can find a Lyapunov functional $E := E(\rho)$, usually called the entropy, such that $D(\rho) = -\partial_t E(\rho) \ge 0$ and \[ D(\rho) \ge \Phi (E(\rho)-E(\rho_{eq})) \tag{EEDI} \label{eq:eedig}\] along the solution $\rho$, where $\Phi$ is a continuous function strictly increasing from 0 and $\rho_{eq}$ is a state independent of the time $t$, \cite{Arnold1, Villani-2003}. The aforementioned inequality between the entropy dissipation $D(\rho)$ and the relative entropy $E(\rho)-E(\rho_{eq})$ is known as the entropy-entropy dissipation inequality (EEDI). The EEDI and the Gronwall inequality then imply the convergence in the relative entropy $E(\rho) \to E(\rho_{eq})$ as $t \to \infty$, which can be either exponential if $\Phi(x) = \lambda x$ or polynomial if $\Phi(x) = x^{\alpha}$; in both cases $\lambda$ and $\alpha$ can be found explicitly. In the second step, the relative entropy $E(\rho) - E(\rho_{eq})$ needs to be bounded from below by the distance $\rho-\rho_{eq}$ in some topology. In our reaction-diffusion setting, the relative entropy $E(\mathbf{n} \vert \mathbf{n}_{\infty}) := E(\mathbf{n})-E(\mathbf{n}_{\infty})$ for the entropy functional defined in~(\ref{eq:ER:s2:100}) can be written as \begin{equation} \label{eq:ER:s2:24} E(\mathbf{n} \vert \mathbf{n}_{\infty}) = \sum_{i \in \{S,E,C,P\}} \int_{\Omega} n_i \log \dfrac{n_i}{n_{i,\infty}} - (n_i - n_{i,\infty}) \, \textnormal{d}x \ge 0. \end{equation} This is a consequence of the conservation laws~(\ref{eq:ER:s2:02}) and~(\ref{eq:ER:s2:03}) which together with~(\ref{eq:ER:s2:14}) and~(\ref{eq:ER:s2:15}) imply \begin{equation} \label{eq:ER:s2:24b} \sum_{i \in \{S,E,C,P\}} (\overline{n_i} - n_{i,\infty}) \log (\sigma_i n_{i,\infty}) = 0. \end{equation} Note that the relative entropy~(\ref{eq:ER:s2:24}), known also as the Kullback-Leibler divergence, is universal in the sense that it is independent of the reaction rate constants, \cite{Gorban-2010}. The relative entropy~(\ref{eq:ER:s2:24}) can then be estimated from below by using the Csisz\'{a}r-Kullback-Pinsker (CKP) inequality known from information theory, which can be stated as follows.
\begin{lemma}[Csisz\'{a}r-Kullback-Pinsker, \cite{Fellner-2016a}] \label{theorem_ER_05} Let $\Omega$ be a measurable domain in $\mathbb{R}^d$ and $u, v : \Omega \to \mathbb{R}_{+}$ measurable functions. Then \begin{equation}\label{eq:ER:s2:21} \int_{\Omega} u \log \dfrac{u}{v} - (u-v) \, \textnormal{d}x \ge \dfrac{3}{2 \Vert u \Vert_{L^1(\Omega)} + 4 \Vert v \Vert_{L^1(\Omega)}} \Vert u - v \Vert_{L^1(\Omega)}^2. \end{equation} \end{lemma} \noindent Hence, the application of the CKP inequality~(\ref{eq:ER:s2:21}) concludes the second step of the entropy method. Let us mention some other tools that will be recalled later in the proof of the first step. \begin{lemma}[Logarithmic Sobolev inequality, \cite{DF1}] Let $\Omega \subset \mathbb{R}^d$ be a bounded domain such that $\vert \Omega \vert \ge 1$. Then, \begin{equation} \label{eq:ER:s2:20} \int_{\Omega} u^2 \log u^2 \, \textnormal{d}x - \left( \int_{\Omega} u^2 \, \textnormal{d}x \right) \log \left( \int_{\Omega} u^2 \, \textnormal{d}x \right) \le L \int_{\Omega} \vert \nabla u \vert^2 \end{equation} holds for some positive $L=L(\Omega,d)$, whenever the integrals on both sides of the inequality exist. \end{lemma} \begin{lemma}[Poincar\'e-Wirtinger inequality, \cite{Perthame-LN}] Let $\Omega \subset \mathbb{R}^d$ be a bounded domain. Then \begin{equation} \label{eq:PWI} P(\Omega) \int_{\Omega} \vert u(x) - \overline{u}\vert^2 \le \int_{\Omega} \vert \nabla u \vert^2, \quad \forall u \in H^1(\Omega) \end{equation} where $\overline{u} = \int_{\Omega} u(x) \, \textnormal{d}x$ and $P(\Omega)$ is the first non-zero eigenvalue of the Laplace operator with a Neumann boundary condition. \end{lemma} The following lemma is a technical consequence of the Jensen inequality. \begin{lemma} \label{lemma_ER_01} Let $\Omega \subset \mathbb{R}^d$ be such that $\vert \Omega \vert = 1$, let $u, v \in L^1(\Omega)$ be nonnegative functions, and let $\overline{u} = \int_{\Omega} u(x) \, \textnormal{d}x$ and $\overline{v} = \int_{\Omega} v(x) \, \textnormal{d}x$. Then \begin{equation} \label{eq:ER2:01} \begin{aligned} & \left( \sqrt{\overline{u}} - \sqrt{\overline{v}} \right)^2 \le ( \overline{\sqrt{u}} - \sqrt{\overline{v}} )^2 + \Vert \sqrt{u} - \overline{\sqrt{u}} \Vert^2_{L^2(\Omega)}, \end{aligned}\end{equation} where equality occurs for $v \equiv 0$. \end{lemma} \begin{proof} Let us define an expansion of $\sqrt{u}$ around its spatial average $\overline{\sqrt{u}}$ by $\sqrt{u} = \overline{\sqrt{u}} + \delta_u(x)$, which implies immediately that $\overline{\delta_u} = 0$, \[\Vert \sqrt{u} - \overline{\sqrt{u}} \Vert^2_{L^2(\Omega)} = \Vert \delta_u \Vert^2_{L^2(\Omega)} = \overline{\delta_u^2} \quad \text{and} \quad \overline{u} = \overline{\sqrt{u}}^2 + \overline{\delta_u^2}.\] Then, with the Jensen inequality $\overline{\sqrt{u}} \le \sqrt{\overline{u}}$ we can write \[ \begin{aligned} \left( \sqrt{ \overline{u} } - \sqrt{\overline{v}} \right)^2 &= \overline{u} - 2 \sqrt{\overline{u}} \sqrt{\overline{v}} + \overline{v} \\ &\le \overline{\sqrt{u}}^2 - 2 \overline{\sqrt{u}} \sqrt{\overline{v}} + \overline{v} +\overline{\delta_u^2}\\ &= ( \overline{\sqrt{u}} - \sqrt{\overline{v}} )^2 + \overline{\delta_u^2} \end{aligned}\] which concludes the proof.
\end{proof} \noindent In fact, with the ansatz $\sqrt{v} = \overline{\sqrt{v}} + \delta_v(x)$, we can deduce that \[ \begin{aligned} \left( \sqrt{\overline{u}} - \sqrt{\overline{v}} \right)^2 & \le ( \overline{\sqrt{u}} - \overline{\sqrt{v}} )^2 + \Vert \sqrt{u} - \overline{\sqrt{u}} \Vert^2_{L^2(\Omega)} + \Vert \sqrt{v} - \overline{\sqrt{v}} \Vert^2_{L^2(\Omega)} \\ & \le \Vert \sqrt{u} - \sqrt{v} \Vert^2_{L^2(\Omega)} + \frac{1}{P(\Omega)} (\Vert \nabla \sqrt{u} \Vert^2_{L^2(\Omega)} + \Vert \nabla \sqrt{v} \Vert^2_{L^2(\Omega)}) \end{aligned}\] by the Jensen and Poincar\'e-Wirtinger inequalities. Recall that we assume $\vert \Omega \vert = 1$, $\underline{D} = \min_i \{D_i\}$, $\overline{D} = \max_i \{D_i\}$, and, for brevity, we write $\textbf{n} = (\ns,\ne,\nc,\np)$, $\textbf{n}_{\infty} = (\nsR,\neR,\ncR,\npR)$ and $\overline{\textbf{n}}(t) = (\nsRO,\neRO,\ncRO,\npRO)$, where $\overline{n_i} = \int_{\Omega} n_i \, \textnormal{d}x$ for each $\isecp$. In the summations we will omit $\isecp$ from the notation. We can finally prove the exponential convergence of the solution $\textbf{n}(t)$ of~(\ref{eq:ER:s2:10})-(\ref{eq:ER:s2:12}) to the equilibrium $\textbf{n}_{\infty}$ given by~(\ref{eq:ER:s2:18}). \begin{proof}{\it (of Theorem~\ref{theorem_ER_02})} We can deduce from~(\ref{eq:ER:s2:d1}) that \[ D(\textbf{n}) = - \dfrac{\textnormal{d}}{\textnormal{d} t}E(\textbf{n}) = - \dfrac{\textnormal{d}}{\textnormal{d} t} E(\textbf{n} \vert \textbf{n}_{\infty}). \] As suggested above, we search for a constant $C_1$ such that \begin{equation} \label{eq:ER:s2:26} D(\textbf{n}) \ge C_1 E(\textbf{n} \vert \textbf{n}_{\infty}), \end{equation} since in this case we obtain \[ \dfrac{\textnormal{d}}{\textnormal{d} t} E(\textbf{n} \vert \textbf{n}_{\infty}) \le -C_1 E(\textbf{n} \vert \textbf{n}_{\infty}), \] and, by the Gronwall inequality, \begin{equation} \label{eq:ER:s2:26b} E(\textbf{n} \vert \textbf{n}_{\infty}) \le E(\textbf{n}(0,x) \vert \textbf{n}_{\infty}) e^{ -C_1 t}, \end{equation} that is, the exponential convergence in the relative entropy as $t \to \infty$. The CKP inequality~(\ref{eq:ER:s2:21}) applied to the \textit{l.h.s.} of~(\ref{eq:ER:s2:26b}) yields \begin{equation} \begin{aligned} \label{eq:ER:s2:25} E(\textbf{n} \vert \textbf{n}_{\infty}) \ge & \dfrac{1}{2M_2}\Vert \ns - \nsR \Vert^2_{L^1(\Omega)} + \dfrac{1}{M_1 + M_2}\Vert \nc - \ncR \Vert^2_{L^1(\Omega)} \\ & + \dfrac{1}{2M_1}\Vert \ne - \neR \Vert^2_{L^1(\Omega)} + \dfrac{1}{2M_2}\Vert \np - \npR \Vert^2_{L^1(\Omega)} \end{aligned} \end{equation} due to~(\ref{eq:ER:s2:24}) and the conservation laws~(\ref{eq:ER:s2:02}) and~(\ref{eq:ER:s2:03}). Thus, with $C_1$ to be found and \[ C_2 = E(\textbf{n}(0,x) \vert \textbf{n}_{\infty}) / \min \left \{ 1/(2M_1), 1/(2M_2), 1/(M_1+M_2) \right \},\] we obtain~(\ref{eq:ER:s2:27}). To show the EEDI~(\ref{eq:ER:s2:26}), let us split the relative entropy so that \[ E(\textbf{n} \vert \textbf{n}_{\infty}) = E(\textbf{n} \vert \overline{\textbf{n}}) + E( \overline{\textbf{n}} \vert \textbf{n}_{\infty}), \] and estimate both terms separately. For the first term we obtain that \begin{equation} \label{eq:ER:s2:30} E(\textbf{n} \vert \overline{\textbf{n}}) = \sum_{i} \int_{\Omega} n_i \log n_i \, \textnormal{d}x - \overline{n_{i}} \log \overline{n_{i}} \le L \sum_{i} \int_{\Omega} \vert \nabla \sqrt{n_i} \vert^2 \, \textnormal{d}x \end{equation} by the logarithmic Sobolev inequality~(\ref{eq:ER:s2:20}).
Hence, when compared with the entropy dissipation~(\ref{eq:ER:s2:101}), we conclude that $ D(\mathbf{n}) \ge \overline{C}_1 E(\mathbf{n} \vert \overline{\mathbf{n}})$ for the constant $\overline{C}_1 = 4 \underline{D}/L$. For the second term $E( \overline{\mathbf{n}} \vert \mathbf{n}_{\infty})$ we use~(\ref{eq:ER:s2:24b}) and the elementary inequality $x \log x - x + 1 \le (x-1)^2$, which holds true for $x \ge 0$, to obtain \begin{equation} \label{eq:ER:s2:31} \begin{aligned} E( \overline{\mathbf{n}} \vert \mathbf{n}_{\infty}) & = \sum_{i} \overline{n_i} \log \dfrac{ \overline{n_i}}{n_{i,\infty}} - \overline{n_i} + n_{i,\infty} \\ & \le\sum_{i} \dfrac{1}{n_{i,\infty}} \left( \overline{n_i} - n_{i,\infty} \right)^2 \\ &\le C \sum_{i} \left( \sqrt{\overline{n_i}} - \sqrt{n_{i,\infty}} \right)^2 \\ &\le C \left( \sum_{i} \left( \overline{\sqrt{n_i}} - \sqrt{n_{i,\infty}} \right)^2 + \sum_{i} \Vert \sqrt{n_i} - \overline{\sqrt{n_i}} \Vert^2_{L^2(\Omega)} \right), \end{aligned}\end{equation} where the last inequality is due to~(\ref{eq:ER2:01}) (for $u=n_i$ and $v =\overline{v} = n_{i,\infty}$) and the constant $C = 2\max_i \{ 1/n_{i,\infty} \} \max\{ 2M_1, 2M_2, M_1+M_2\}$ is deduced from~(\ref{eq:ER:s2:02}) and~(\ref{eq:ER:s2:03}). On the other hand, the entropy dissipation $D(\mathbf{n})$ given by~(\ref{eq:ER:s2:101}) can be estimated from below by the Poincar\'e-Wirtinger inequality~(\ref{eq:PWI}) and the elementary inequality $(x-y)(\log x - \log y) \ge 4(\sqrt{x} - \sqrt{y})^2$, which holds true for $x, y \in \mathbb{R}_{+}$. We obtain \begin{equation} \label{eq:ER:s2:28} \begin{aligned} D(\mathbf{n}) \ge \; & 4 \min \{ P(\Omega) \underline{D}, 1\} \Big( \sum_{i} \Vert \sqrt{n_i} - \overline{\sqrt{n_i}} \Vert^2_{L^2(\Omega)} \\ & + \Vert \sqrt{k_{+} \ns \ne} - \sqrt{k_{-} \nc}\Vert^2_{L^2(\Omega)} + \Vert \sqrt{k_{p-} \ne \np} - \sqrt{k_{p+} \nc}\Vert^2_{L^2(\Omega)} \Big). \end{aligned} \end{equation} Hence, we can conclude the proof once we find two constants $C_3$ and $C_4$ such that \begin{equation} \label{eq:ER:s2:eedi} \begin{aligned} \sum_{i} & \left( \overline{\sqrt{n_i}} - \sqrt{n_{i,\infty}} \right)^2 + \sum_{i} \Vert \sqrt{n_i} - \overline{\sqrt{n_i}} \Vert^2_{L^2(\Omega)} \le C_3 \sum_{i} \Vert \sqrt{n_i} - \overline{\sqrt{n_i}} \Vert^2_{L^2(\Omega)} \\ & + C_4 \left( \Vert \sqrt{k_{+} \ns \ne} - \sqrt{k_{-} \nc}\Vert^2_{L^2(\Omega)} + \Vert \sqrt{k_{p-} \ne \np} - \sqrt{k_{p+} \nc}\Vert^2_{L^2(\Omega)} \right), \end{aligned} \end{equation} since in this case, by combining~(\ref{eq:ER:s2:31})-(\ref{eq:ER:s2:eedi}), we obtain \[ \dfrac{1}{C} E(\overline{\textbf{n}}(t) \vert \textbf{n}_{\infty}) \le \dfrac{\max \{C_3, C_4\}}{4 \min \{ P(\Omega) \underline{D}, 1\}} D(\mathbf{n}). \] Hence, we can derive a constant $\widetilde{C}_1$ such that $D(\mathbf{n}) \ge \widetilde{C}_1 E( \overline{\mathbf{n}} \vert \mathbf{n}_{\infty})$ and thus the convergence rate $C_1$ in the EEDI~(\ref{eq:ER:s2:26}), e.g., $C_1 = \min \{\overline{C}_1, \widetilde{C}_1\}/2$. The missing inequality~(\ref{eq:ER:s2:eedi}) is proved in Lemma~\ref{lemma_ER_06}.
\end{proof} For the sake of simplicity, let us denote $N_i = \sqrt{n_i}$ and $N_{i,\infty} = \sqrt{n_{i,\infty}}$, and thus rewrite~(\ref{eq:ER:s2:eedi}) in the form \begin{equation} \begin{aligned} \label{eq:ER:s2:48} & \sum_{i } \left( \overline{N_i} - N_{i, \infty} \right)^2 + \sum_{i} \Vert N_i - \overline{N_i} \Vert^2_{L^2(\Omega)} \le C_3 \sum_{i} \Vert N_i - \overline{N_i} \Vert^2_{L^2(\Omega)} \\ + C_4 & \left( \Vert \sqrt{k_{+}} N_S N_E - \sqrt{k_{-}} N_C \Vert^2_{L^2(\Omega)} + \Vert \sqrt{k_{p-}} N_E N_P - \sqrt{k_{p+}} N_C \Vert^2_{L^2(\Omega)} \right). \end{aligned} \end{equation} \begin{lemma} \label{lemma_ER_06} Let $N_i$, $\isecp$, be measurable functions from $\Omega$ to $\mathbb{R}_{+}$ satisfying the conservation laws~(\ref{eq:ER:s2:02}) and~(\ref{eq:ER:s2:03}), i.e. \begin{equation} \label{eq:ER:s2:63} \overline{N_C^2} + \overline{N_E^2} = M_1 \quad \textnormal{and} \quad \overline{N_S^2} + \overline{N_C^2} + \overline{N_P^2} = M_2, \end{equation} and let $n_{i,\infty} = N_{i,\infty}^2$ be defined by~(\ref{eq:ER:s2:14}) and~(\ref{eq:ER:s2:15}). Then, there exist constants $C_3$ and $C_4$, cf.~(\ref{eq:ER1:10}) and~(\ref{eq:ER1:11}), such that~(\ref{eq:ER:s2:48}) is satisfied. \end{lemma} \begin{proof} Let us use the expansion of $N_i$ around the spatial average $\overline{N_i}$ from Lemma~\ref{lemma_ER_01}, \begin{equation} \label{eq:ER:s2:49} N_i = \overline{N_i} + \delta_i(x), \quad \overline{\delta}_i = 0, \quad \isecp, \end{equation} which implies $\overline{N_i^2} = \overline{N_i}^2 + \overline{\delta_i^2}$ for each $\isecp$ and \begin{equation} \label{eq:ER:s2:51} \sum_{i} \Vert N_i - \overline{N_i} \Vert^2_{L^2(\Omega)} = \sum_{i} \overline{\delta_i^2}. \end{equation} With~(\ref{eq:ER:s2:49}) at hand, we can expand the remaining terms in~(\ref{eq:ER:s2:48}), e.g., \begin{equation} \begin{aligned} \label{eq:ER:s2:52} \Vert \sqrt{k_{+}} N_S N_E - \sqrt{k_{-}} N_C \Vert^2_{L^2(\Omega)} = & \left( \sqrt{k_{+}} \overline{N_S} \overline{N_E} - \sqrt{k_{-}} \overline{N_C}\right)^2\\ & + 2\sqrt{k_{+}}\left( \sqrt{k_{+}} \overline{N_S} \overline{N_E} - \sqrt{k_{-}} \overline{N_C}\right) \overline{\delta_S\delta_E}\\ & + \Vert \sqrt{k_{+}}\left( \overline{N_S} \delta_E + \overline{N_E} \delta_S + \delta_S \delta_E \right) - \sqrt{k_{-}}\delta_C \Vert^2_{L^2(\Omega)}\\ \ge & \left( \sqrt{k_{+}} \overline{N_S} \overline{N_E} - \sqrt{k_{-}} \overline{N_C}\right)^2 - \sqrt{k_{+}} K_1 \sum_{i} \overline{\delta^2_i}, \end{aligned}\end{equation} since the third term in~(\ref{eq:ER:s2:52}) is nonnegative and the second term can be estimated as follows, \[ \begin{aligned} 2\left( \sqrt{k_{+}} \overline{N_S} \overline{N_E} - \sqrt{k_{-}} \overline{N_C}\right) \overline{\delta_S\delta_E} &\ge -2 \left| \sqrt{k_{+}} \overline{N_S} \overline{N_E} - \sqrt{k_{-}} \overline{N_C} \right| \left| \int_{\Omega}\delta_S\delta_E\, \textnormal{d}x \right| \\ & \ge - K_1 ( \overline{\delta^2_S}+\overline{\delta^2_E}) \ge - K_1 \sum_{i} \overline{\delta^2_i}, \end{aligned}\] where $K_1 = \sqrt{k_{+} M_1 M_2} + \sqrt{k_{-}(M_1 + M_2)/2}$ is deduced from the Jensen inequality $\overline{N_i^2} \ge \overline{N_i}^2$ and~(\ref{eq:ER:s2:63}).
Analogously, we deduce for $K_2 = \sqrt{k_{p-} M_1 M_2} + \sqrt{k_{p+}(M_1 + M_2)/2}$ that \begin{equation} \label{eq:ER:s2:55} \Vert \sqrt{k_{p-}} N_P N_E - \sqrt{k_{p+}} N_C \Vert^2_{L^2(\Omega)} \ge \left( \sqrt{k_{p-}} \overline{N_P} \overline{N_E} - \sqrt{k_{p+}} \overline{N_C}\right)^2 - \sqrt{k_{p-}} K_2 \sum_{i} \overline{\delta^2_i}.\end{equation} \noindent We see that with~(\ref{eq:ER:s2:51})--(\ref{eq:ER:s2:55}) it is sufficient to find $C_3$ and $C_4$ such that \begin{equation} \label{eq:eedi2} \begin{aligned} \sum_{i} ( \overline{N_i} & - N_{i, \infty})^2 + \sum_{i} \overline{\delta_i^2} \le \left( C_3 - C_4 (\sqrt{k_{+}} K_1 + \sqrt{k_{p-}} K_2) \right) \sum_{i} \overline{\delta_i^2} \\ & + C_4 \left( \left( \sqrt{k_{+}} \overline{N_S} \overline{N_E} - \sqrt{k_{-}} \overline{N_C}\right)^2 + \left( \sqrt{k_{p-}} \overline{N_P} \overline{N_E} - \sqrt{k_{p+}} \overline{N_C}\right)^2 \right), \end{aligned}\end{equation} from which~(\ref{eq:ER:s2:48}) (and so~(\ref{eq:ER:s2:eedi})) directly follows. Let us explore how far the spatial average $\overline{N_i}$ can be from the equilibrium state $N_{i,\infty}$ for each $\isecp$, i.e., let us consider the substitution \begin{equation}\label{eq:ER1:01} \overline{N_i} = N_{i,\infty} (1+\mu_i) \end{equation} for some $\mu_i \ge -1$, $\isecp$. We obtain \begin{equation}\label{eq:ER1:02} \sum_{i} \left( \overline{N_i} - N_{i, \infty} \right)^2 = \sum_{i} \NiI^2 \mu_i^2 \end{equation} and, with~(\ref{eq:ER:s2:15}), i.e., $\sqrt{k_{+}} \NSI \NEI = \sqrt{k_{-}} \NCI$ and $\sqrt{k_{p-}} \NPI \NEI = \sqrt{k_{p+}} \NCI$, \begin{equation}\label{eq:ER1:03} \begin{aligned} \left( \sqrt{k_{+}} \overline{N_S} \overline{N_E} - \sqrt{k_{-}} \overline{N_C}\right)^2 = k_{-} \NCI^2 ((1+\mu_S)(1+ \mu_E) - (1+\mu_C))^2, \\ \left( \sqrt{k_{p-}} \overline{N_P} \overline{N_E} - \sqrt{k_{p+}} \overline{N_C}\right)^2 = k_{p+} \NCI^2 ((1+\mu_P)(1+ \mu_E) - (1+ \mu_C))^2. \end{aligned} \end{equation} Hence,~(\ref{eq:eedi2}) follows from \begin{equation} \label{eq:eedi3} \begin{aligned} \sum_{i} & \NiI^2 \mu_i^2 + \sum_{i} \overline{\delta_i^2} \le \left( C_3 - C_4 (\sqrt{k_{+}} K_1 + \sqrt{k_{p-}} K_2) \right) \sum_{i} \overline{\delta_i^2} \\ & + C_4 K_3 ( \underbrace{((1+\mu_S)(1+ \mu_E) - (1+\mu_C))^2}_{{\displaystyle = I_1}} + \underbrace{((1+\mu_P)(1+ \mu_E) - (1+\mu_C))^2}_{{\displaystyle = I_2}}) \end{aligned}\end{equation} where $K_3 = \min\{\sqrt{k_{-}}, \sqrt{k_{p+}}\}\NCI^2$. Let us note that the conservation laws~(\ref{eq:ER:s2:63}), rewritten by means of the ansatz~(\ref{eq:ER:s2:49}) and the substitution~(\ref{eq:ER1:01}), i.e., \begin{eqnarray} &\NEI^2 + \NCI^2 = \NEI^2(1+\mu_E)^2 + \delEO + \NCI^2(1+\mu_C)^2 + \delCO, \label{eq:ER1:s1} \\ & \begin{aligned} \NSI^2 + \NCI^2 + \NPI^2 = \NSI^2(1+\mu_S)^2 + \delSO & + \NCI^2(1+\mu_C)^2 + \delCO \\ & + \NPI^2(1+\mu_P)^2 + \delPO, \label{eq:ER1:s2} \end{aligned} \end{eqnarray} impose some restrictions on the signs of the $\mu_i$'s.
In particular, we remark that \begin{itemize} \item[i)] $\forall \isecp$, $-1 \le \mu_i \le \mu_{i,max}$ where $\mu_{i,max}$ depends on $\mathbf{n_{\infty}}$; \item[ii)] the conservation law~(\ref{eq:ER1:s1}) excludes the case when $\mu_E >0$ and $\mu_C >0$, since in this case $\NEO > \NEI$ and $\NCO > \NCI$ and we deduce from~(\ref{eq:ER:s2:63}),~(\ref{eq:ER1:s1}) and the Jensen inequality $ \overline{N_i^2} \ge \overline{N_i}^2$ that \[M_1 = \overline{N_E^2} + \overline{N_C^2} \ge \NEO^2 + \NCO^2 > \NEI^2 + \NCI^2 = M_1,\] which is a contradiction; \item[iii)] analogously, the conservation law~(\ref{eq:ER1:s2}) excludes the case when $\mu_S >0$, $\mu_C >0$ and $\mu_P >0$; \item[iv)] for $-1 \le \mu_E, \mu_C \le 0$, the conservation law~(\ref{eq:ER1:s1}) implies $\NEI^2 \mu_E^2 + \NCI^2 \mu_C^2 \le \sum \deliO$, since for $-1 \le s \le 0$ we have $-1 \le s \le -s^2 \le 0$ and we can deduce from~(\ref{eq:ER1:s1}) that \[ \begin{aligned} 0 = \NEI^2(2\mu_E + \mu_E^2) + \NCI^2(2\mu_C & + \mu_C^2) + \delCO + \delEO \\ &\le - \NEI^2 \mu_E^2 - \NCI^2 \mu_C^2 + \sum \deliO; \end{aligned}\] \item[v)] analogously, for $-1 \le \mu_S, \mu_C, \mu_P \le 0$, the conservation law~(\ref{eq:ER1:s2}) implies that $\NSI^2 \mu_S^2 + \NCI^2 \mu_C^2 + \NPI^2 \mu_P^2 \le \sum \deliO$. \end{itemize} To find $C_3$ and $C_4$ explicitly, we have to consider all possible configurations of the $\mu_i$'s in~(\ref{eq:eedi3}), that is, all possible quadruples $(\mu_E, \mu_C, \mu_S, \mu_P)$ depending on their signs. The remarks~(ii) and~(iii) reduce the total number of quadruples by five and the remaining 11 quadruples are shown in Table~\ref{tab:ER1:1}.\\ \renewcommand{\arraystretch}{1.4} \begin{table}[h] \centering \caption[Relations between $\mu_i$ for $\isecp$]{\label{tab:ER1:1} Eleven quadruples of possible relations among $\mu_i$, $\isecp$, which are allowed by the conservation laws~(\ref{eq:ER1:s1}) and~(\ref{eq:ER1:s2}). In the table, $``+"$ means that $\mu_i > 0$ and $``-"$ that $-1 \le \mu_i \le 0$. Each quadruple is denoted by a Roman numeral from I to XI.} \begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c} $\mu_E$ & \multicolumn{4}{c|}{$-$} & \multicolumn{4}{c|}{$+$} & \multicolumn{3}{c}{$-$} \\ \hline $\mu_C$ & \multicolumn{4}{c|}{$-$} & \multicolumn{4}{c|}{$-$} & \multicolumn{3}{c}{$+$}\\ \hline $\mu_S$ & $-$ & $-$ & $+$ & $+$ & $-$ & $-$ & $+$ & $+$ & $-$ & $-$ & $+$ \\ \hline $\mu_P$ & $-$ & $+$ & $-$ & $+$ & $-$ & $+$ & $-$ & $+$ & $-$ & $+$ & $-$ \\ \hline & (I) & (II) & (III) & (IV) & (V) & (VI) & (VII) & (VIII) & (IX) & (X) & (XI) \end{tabular} \end{table} \renewcommand{\arraystretch}{1} Ad (I). The remarks (iv) and (v) imply $\sum_i \NiI^2 \mu_i^2 \le 2 \sum_i \deliO$ and, therefore,~(\ref{eq:eedi3}) is satisfied for $C_3=3$ and $C_4=0$.\\ Ad (II) and (III). We prove~(\ref{eq:eedi3}) for $-1 \le \mu_E, \mu_C \le 0$ and $\mu_S$ and $\mu_P$ having opposite signs. Firstly, let us remark that~(\ref{eq:ER1:s1}) implies that \[ \NEI^2 = \NEI^2(1+\mu_E)^2 + \NCI^2(2\mu_C + \mu_C^2) + \delEO + \delCO,\] i.e., \[\begin{aligned} (1+\mu_E)^2 & = 1 - \frac{\NCI^2}{\NEI^2}(2\mu_C + \mu_C^2) - \frac{1}{\NEI^2} (\delEO + \delCO) \\ & \ge 1 - \frac{1}{\NEI^2} \sum \deliO, \end{aligned}\] since for $-1 \le \mu_C \le 0$ we have $-1 \le 2\mu_C + \mu_C^2 \le 0$.
Then, by using the elementary inequality $a^2+b^2 \ge (a-b)^2/2$ we obtain for \[I_1 + I_2 = ((1+\mu_S)(1+ \mu_E) - (1+\mu_C))^2 + ((1+\mu_P)(1+ \mu_E) - (1+ \mu_C))^2\] that \begin{equation} \label{eq:ER1:12} \begin{aligned} I_1 + I_2 & \ge \frac{1}{2}(\mu_S-\mu_P)^2(1+\mu_E)^2 \\ & \ge \frac{1}{2} K_4 (\NSI^2 \mu_S^2 + \NPI^2 \mu_P^2) - K_5 \sum \deliO, \end{aligned} \end{equation} since $\mu_S$ and $\mu_P$ have opposite signs and are bounded above by $\mu_{S,max}$ and $\mu_{P,max}$ (by the remark~(i)). In~(\ref{eq:ER1:12}), $K_4 = \min \left\{1/\NSI^2, 1/\NPI^2 \right\}$ would be sufficient; nevertheless, we take \begin{equation}\label{eq:ER1:09} K_4 = \min_{\isecp} \left\{\dfrac{1}{\NiI^2} \right\} \textnormal{ and } K_5 = \dfrac{1}{\NEI^2} (\mu_{S,max}^2 + \mu_{P,max}^2),\end{equation} since $K_4$ in~(\ref{eq:ER1:09}) will appear several times elsewhere. Together with $\NEI^2 \mu_E^2 + \NCI^2 \mu_C^2 \le \sum \deliO$ for $-1 \le \mu_E, \mu_C \le 0$, known from the remark~(iv), we deduce \begin{equation}\label{eq:ER:t1} \sum \NiI^2 \mu_i^2 + \sum \deliO \le 2 \left(1 + \frac{K_5}{K_4}\right) \sum \deliO + \frac{2}{K_4} (I_1 + I_2), \end{equation} and we see that~(\ref{eq:eedi3}) is satisfied for \[C_4 = \frac{2}{K_3 K_4} \quad \textnormal{and} \quad C_3 = 2\left(1 + \frac{K_5}{K_4}\right) + C_4 \left( \sqrt{k_{+}} K_1 + \sqrt{k_{p-}} K_2 \right), \] when~(\ref{eq:eedi3}) is compared with the \textit{r.h.s.} of~(\ref{eq:ER:t1}).\\ Ad (IV). Assume $-1 \le \mu_E, \mu_C \le 0$ and $\mu_S, \mu_P > 0$. A combination of~(\ref{eq:ER1:s1}) and~(\ref{eq:ER1:s2}) gives \begin{equation}\label{eq:ER3:04} \begin{aligned} \NEI^2 - \NSI^2 - \NPI^2 &= \NEO^2 + \delEO - \NSO^2 - \delSO - \NPO^2 - \delPO \\ & \le \NEI^2 - \NSO^2 - \NPO^2 + \delEO - \delSO - \delPO, \end{aligned}\end{equation} since $\NEO^2 \le \NEI^2$ for $-1 \le \mu_E \le 0$. We deduce from~(\ref{eq:ER3:04}) that \[- \NSI^2 - \NPI^2 \le - \NSI^2(1+\mu_S)^2 - \NPI^2(1+\mu_P)^2 + \delEO - \delSO - \delPO \] and \[\NSI^2(2\mu_S + \mu_S^2) + \NPI^2(2\mu_P + \mu_P^2) \le \delEO - \delSO - \delPO \le \sum \deliO. \] Thus, $\NSI^2 \mu_S^2 + \NPI^2 \mu_P^2 \le \sum \deliO$ since $\mu_S, \mu_P > 0$. This estimate together with the remark~(iv) yields $\sum \NiI^2 \mu_i^2 \le 2\sum \deliO$. Similarly as in the case (I),~(\ref{eq:eedi3}) is satisfied for $C_3=3$ and $C_4=0$. \\ Ad (V). Let us now consider the case when $-1 \le \mu_S, \mu_C, \mu_P \le 0$ and $\mu_E>0$. As in the case (IV), a combination of~(\ref{eq:ER1:s1}) and~(\ref{eq:ER1:s2}) gives \begin{equation}\label{eq:ER3:03} \begin{aligned} \NSI^2 + \NPI^2 - \NEI^2 &= \NSO^2 + \delSO + \NPO^2 + \delPO - \NEO^2 - \delEO \\ & \le \NSI^2 + \NPI^2 - \NEO^2 + \delSO + \delPO - \delEO, \end{aligned}\end{equation} since, again, $\NiO^2 \le \NiI^2$ for $-1 \le \mu_i \le 0$. Hence, for $\mu_E>0$ we deduce from~(\ref{eq:ER3:03}) that $ \NEI^2 \mu_E^2 < \sum \deliO$, which with the remark~(v) gives $\sum \NiI^2 \mu_i^2 < 2\sum \deliO$. Thus,~(\ref{eq:eedi3}) is satisfied for $C_3=3$ and $C_4=0$.\\ Ad (VI) and (VII). Assume that $\mu_E > 0$, $-1 \le \mu_C \le 0$ and $\mu_S$ and $\mu_P$ have opposite signs. Then, using the elementary inequality $a^2+b^2 \ge (a-b)^2/2$, we obtain \[I_1+I_2 \ge \dfrac{1}{2}(\mu_S-\mu_P)^2(1+\mu_E)^2 > \dfrac{1}{2}(\mu_S-\mu_P)^2 \ge \dfrac{1}{2}(\mu_S^2 + \mu_P^2),\] since $(1+\mu_E)^2>1$ and $\mu_S$ and $\mu_P$ have opposite signs.
Further, it holds that $(1+\mu_k)(1+\mu_E) > (1+\mu_E)$ for $\mu_k$ being either $\mu_S>0$ or $\mu_P>0$ (one of them is positive). This implies $(1+\mu_k)(1+\mu_E) - (1+\mu_C) > \mu_E - \mu_C > 0$ and thus ($\mu_E$ and $\mu_C$ have opposite signs) \[I_1 + I_2 > (\mu_E-\mu_C)^2 \ge \mu_E^2 + \mu_C^2.\] Altogether, we obtain for both cases that $I_1 + I_2 > \sum \mu_i^2/4 \ge K_4/4 \sum \NiI^2 \mu_i^2$, where $K_4$ is defined in~(\ref{eq:ER1:09}). We deduce that~(\ref{eq:eedi3}) is satisfied for \begin{equation} \label{eq:es1} C_4 = \frac{4}{K_3 K_4} \quad \textnormal{and} \quad C_3 = 1 + C_4 \left( \sqrt{k_{+}} K_1 + \sqrt{k_{p-}} K_2\right). \end{equation} Ad (VIII). Assume that $\mu_E, \mu_S, \mu_P > 0$ and $-1 \le \mu_C \le 0$. Using similar arguments as in the previous case, in particular, $(1+\mu_S)(1+\mu_E) > (1+\mu_S)$, $(1+\mu_S)(1+\mu_E) > (1+\mu_E)$, $(1+\mu_P)(1+\mu_E) > (1+\mu_P)$ and $(1+\mu_P)(1+\mu_E) > (1+\mu_E)$, and since $\mu_i-\mu_C > 0$ for each $i \in \{S,E,P\}$, we can write \[\begin{aligned}I_1 + I_2 &= ((1+\mu_S)(1+ \mu_E)-(1+\mu_C))^2 + ((1+\mu_P)(1+ \mu_E)-(1+\mu_C))^2 \\ &\ge \dfrac{1}{2}(\mu_S-\mu_C)^2 + (\mu_E-\mu_C)^2 + \dfrac{1}{2}(\mu_P-\mu_C)^2 \ge \dfrac{1}{2} \sum \mu_i^2 \ge \dfrac{K_4}{2} \sum \NiI^2 \mu_i^2. \end{aligned}\] Hence,~(\ref{eq:eedi3}) is satisfied for $C_4 = 2/(K_3K_4)$ and $C_3$ defined in~(\ref{eq:es1}). \\ Ad (IX). The case when $-1 \le \mu_E, \mu_S, \mu_P \le 0$ and $\mu_C > 0$ is similar to the case (VIII). Now we observe that $\mu_C-\mu_i > 0$ for each $i \in \{S,E,P\}$ and that $(1+\mu_S)(1+\mu_E) \le (1+\mu_S)$, $(1+\mu_S)(1+\mu_E) \le (1+\mu_E)$, $(1+\mu_P)(1+\mu_E) \le (1+\mu_P)$ and $(1+\mu_P)(1+\mu_E) \le (1+\mu_E)$, which can be used to conclude $I_1 + I_2 \ge \sum \mu_i^2/2\ge K_4/2 \sum \NiI^2 \mu_i^2$. The constants $C_3$ and $C_4$ are the same as in the case (VIII). \\ Ad (X). Assume that $-1 \le \mu_E \le 0$, $\mu_C > 0$, $-1 \le \mu_S \le 0$ and $\mu_P > 0$. By the same argument as in~(IX), we can write \begin{equation} \label{eq:ER1:05} I_1 + I_2 \ge I_1 \ge (\mu_C-\mu_E)^2 \ge \mu_C^2 + \mu_E^2.\end{equation} Using the same elementary inequality as in (II) and (VI), we obtain \begin{equation} \label{eq:ER1:06} I_1 + I_2 \ge \dfrac{1}{2}(\mu_S-\mu_P)^2(1+\mu_E)^2, \end{equation} where $-1 \le \mu_E \le 0$; thus we cannot proceed as in the cases~(VI) and (VII) nor as in the cases~(II) and (III), since $\mu_C$ is now positive. Nevertheless, we distinguish two subcases, $-1 < \eta \le \mu_E \le 0$ and $-1 \le \mu_E < \eta$. For example, $\eta = -1/2$ works well; however, a more suitable constant $\eta$ could possibly be found. For $\eta = -1/2$ and $\eta \le \mu_E \le 0$ we obtain from~(\ref{eq:ER1:06}) that \begin{equation} \label{eq:ER1:07} I_1 + I_2 \ge \dfrac{1}{8}(\mu_S-\mu_P)^2 \ge \dfrac{1}{8}(\mu_S^2+\mu_P^2). \end{equation} This with~(\ref{eq:ER1:05}) implies that $I_1 + I_2 \ge \sum \mu_i^2/16 \ge K_4/16 \sum \NiI^2 \mu_i^2$ and we conclude that~(\ref{eq:eedi3}) is satisfied for $C_4 = 16/(K_3K_4)$ and $C_3$ defined in~(\ref{eq:es1}).
For $\eta = -1/2$ and $-1 \le \mu_E < \eta$ we obtain, by using the elementary inequality $(a-b)^2 \ge a^2/2-b^2$, that \begin{equation} \label{eq:ER1:08} \begin{aligned} I_1 + I_2 \ge I_1 & = ((1+\mu_C) - (1+\mu_S)(1+\mu_E))^2 \\ & \ge \dfrac{1}{2} (1+\mu_C)^2 - (1+\mu_S)^2(1+\mu_E)^2 > \dfrac{1}{4}, \end{aligned} \end{equation} since $(1+\mu_C)^2 > 1$ for $\mu_C > 0$ and $(1+\mu_S)^2(1+\mu_E)^2 < 1/4$ for $-1 \le \mu_S \le 0$ and $-1 \le \mu_E < -1/2$. On the other hand, $\sum \NiI^2 \mu_i^2 \le \sum \NiI^2 \mu_{i,\max}^2$ by the remark~(i). In fact, for the given quadruple of $\mu_i$'s, we deduce from~(\ref{eq:ER1:s2}) a constant $K_6 = \NSI^2 (1 + \NPI^2 + \NCI^2) + \NEI^2$ such that $\sum \NiI^2 \mu_i^2 \le K_6$. We see that~(\ref{eq:eedi3}) is satisfied for $C_4 = K_6/(4K_3)$ and $C_3$ as in~(\ref{eq:es1}). \\ Ad (XI). Finally, assume that $-1 \le \mu_E \le 0$, $\mu_C > 0$, $\mu_S > 0$ and $-1 \le \mu_P \le 0$. This case is symmetric to the previous case (X), thus the same procedure can be applied again (it is sufficient to exchange the subscripts $S$ and $P$ everywhere they appear in (X)) to deduce the constants $C_3$ and $C_4$ in~(\ref{eq:eedi3}). In particular, for $ -1/2 \le \mu_E \le 0$ we take $C_4 = 16/(K_3K_4)$, and for $ -1 \le \mu_E < -1/2 $ we take $C_4 = K_7/(4K_3)$ with $K_7 = \NPI^2 (1 + \NSI^2 + \NCI^2) + \NEI^2$. In both subcases $C_3$ is as in~(\ref{eq:es1}).\\ From the eleven cases (I)-(XI), we need to take \begin{equation} \label{eq:ER1:10} C_4 = \frac{1}{K_3} \max \left\{\frac{16}{K_4}, \frac{K_6}{4}, \frac{K_7}{4} \right\}\end{equation} and \begin{equation} \label{eq:ER1:11}C_3 = \max \left\{3, 2\left(1 + \frac{K_5}{K_4}\right) \right\} + C_4 \left( \sqrt{k_{+}} K_1 + \sqrt{k_{p-}} K_2 \right) \end{equation} so that~(\ref{eq:eedi3}) holds in all cases, which concludes the proof. \end{proof} \section*{Appendix. A duality principle} \addcontentsline{toc}{section}{Appendix. A duality principle} We recall a duality principle \cite{Pierre1, PSch} which is used to show $L^2(\log L)^2$ and $L^2$ bounds, respectively, for the solution to~(\ref{eq:ER:s2:10})-(\ref{eq:ER:s2:12}). Note that a more general result than the one presented here is proved in \cite{Pierre1}, Chap.~6. \begin{lemma}[Duality principle] \label{lemma_ER_03} Let $0<T<\infty$ and let $\Omega$ be a bounded, open and regular (e.g., $C^2$) subset of $\mathbb{R}^d$. Consider a nonnegative weak solution $u$ of the problem \begin{equation}\label{eq:ER:s2:40} \left\{ \begin{aligned} & \partial_t u - \Delta (A u) \le 0, \\ & \nabla (A u) \cdot \nu = 0, \quad \forall t \in I, \; x \in \partial \Omega, \\ & u(0,x) = u_0(x), \\ \end{aligned} \right. \end{equation} where we assume that $A = A(t,x)$ is smooth with $0 < A_1 \le A(t,x) \le A_2 < \infty$ for strictly positive constants $A_1$ and $A_2$, $u_0 \in L^2(\Omega)$ and $\int u_0 \ge 0$. Then, \begin{equation}\label{eq:ER:s2:41} \Vert u \Vert_{L^2(Q_T)} \le C \Vert u_0 \Vert_{L^2(\Omega)}, \end{equation} where $C=C(\Omega, A_1, A_2, T)$. \end{lemma} \begin{proof} Let us consider an adjoint problem: find a nonnegative function $v \in C(I;L^2(\Omega))$ which is regular in the sense that $\partial_t v, \Delta v \in L^2(Q_T)$ and satisfies \begin{equation}\label{eq:ER:s2:42} \left\{ \begin{aligned} & - \partial_t v - A \Delta v = F, \\ & \nabla v \cdot \nu = 0, \quad \forall t \in I, \; x \in \partial \Omega, \\ & v(T,x) = 0, \\ \end{aligned} \right. \end{equation} for $F = F(t,x) \in L^2(Q_T)$ nonnegative.
The existence of such a $v$ follows from classical results on parabolic equations \cite{Lady1968}. By combining the equations for $u$ and $v$, we can readily check that \[ -\dfrac{\textnormal{d}}{\textnormal{d}t} \int_{\Omega} u v \ge \int_{\Omega} u F, \] which, by using $v(T)=0$, yields \begin{equation}\label{eq:ER:s2:43} \int_{Q_T} u F \le \int_{\Omega} u_0 v_0. \end{equation} By multiplying the equation for $v$ in~(\ref{eq:ER:s2:42}) by $-\Delta v$, integrating by parts and using the Young inequality, we obtain \[ -\dfrac{1}{2}\dfrac{\textnormal{d}}{\textnormal{d}t} \int_{\Omega} \vert \nabla v \vert^2 + \int_{\Omega} A (\Delta v)^2 = - \int_{\Omega} F \Delta v \le \int_{\Omega} \dfrac{F^2}{2A} + \dfrac{A}{2}(\Delta v)^2, \] i.e. \[ -\dfrac{\textnormal{d}}{\textnormal{d}t} \int_{\Omega} \vert \nabla v \vert^2 + \int_{\Omega} A (\Delta v)^2 \le \int_{\Omega} \dfrac{F^2}{A}. \] Integrating this over $[0,T]$ and using $v(T)=0$ gives \[ \int_{\Omega} \vert \nabla v_0 \vert^2 + \int_{Q_T} A (\Delta v)^2 \le \int_{Q_T} \dfrac{F^2}{A}. \] Thus we obtain the a priori bounds \begin{equation}\label{eq:ER:s2:44} \Vert \nabla v_0 \Vert_{L^2(\Omega,\mathbb{R}^d)} \le \left\| \dfrac{F}{\sqrt{A}} \right\|_{L^2(Q_T)} \quad \textnormal{and} \quad \Vert \sqrt{A} \Delta v \Vert_{L^2(Q_T)} \le \left\| \dfrac{F}{\sqrt{A}} \right\|_{L^2(Q_T)}. \end{equation} From the equation for $v$ we can write (again, by integrating this equation over $\Omega$ and $[0,T]$ and using $v(T)=0$) \[ \int_{\Omega} v_0 = \int_{Q_T} A \Delta v + F. \] Hence, \begin{equation}\label{eq:ER:s2:45} \begin{aligned} \int_{\Omega} v_0 = \int_{Q_T} \sqrt{A} \left( \sqrt{A} \Delta v + \dfrac{F}{\sqrt{A}}\right) & \le \Vert \sqrt{A} \Vert_{L^2(Q_T)} \left\| \sqrt{A}\Delta v + \dfrac{F}{\sqrt{A}} \right\|_{L^2(Q_T)} \\ & \le 2 \Vert \sqrt{A} \Vert_{L^2(Q_T)} \left\| \dfrac{F}{\sqrt{A}} \right\|_{L^2(Q_T)}, \end{aligned} \end{equation} which follows from the H\"{o}lder inequality and~(\ref{eq:ER:s2:44}). To conclude the proof, let us return to~(\ref{eq:ER:s2:43}) and write \[ \begin{aligned} 0 \le \int_{Q_T} u F \le \int_{\Omega} u_0 v_0 &= \int_{\Omega} u_0( v_0 - \overline{v_0}) + u_0\overline{v_0}\\ & \le \Vert u_0 \Vert_{L^2(\Omega)}\Vert v_0 - \overline{v_0} \Vert_{L^2(\Omega)} + \int_{\Omega} \overline{u_0} v_0 \\ & \le C(\Omega) \Vert u_0 \Vert_{L^2(\Omega)} \Vert \nabla v_0 \Vert_{L^2(\Omega,\mathbb{R}^d)} + \overline{u_0} \int_{\Omega} v_0, \end{aligned}\] where we have used the H\"{o}lder and Poincar\'{e}-Wirtinger inequalities, respectively. Recall that $\overline{v} = \dfrac{1}{\vert \Omega \vert} {\displaystyle \int_{\Omega} v \, \textnormal{d}x}.$ The norm of the gradient of $v_0$ can be estimated by~(\ref{eq:ER:s2:44}) and the last remaining integral by~(\ref{eq:ER:s2:45}), so that we obtain \begin{equation}\label{eq:ER:s2:46} \int_{Q_T} u F \le \left( C(\Omega) \Vert u_0 \Vert_{L^2(\Omega)} + 2\overline{u_0} \Vert \sqrt{A} \Vert_{L^2(Q_T)} \right) \left\| \dfrac{F}{\sqrt{A}} \right\|_{L^2(Q_T)}, \end{equation} which holds true for any nonnegative $F \in L^2(Q_T)$. Thus, for $F = Au$ we can finally write \begin{equation}\label{eq:ER:s2:46b} \Vert \sqrt{A}u\Vert_{L^2(Q_T)} \le C(\Omega) \Vert u_0 \Vert_{L^2(\Omega)} + 2 \overline{u_0} \Vert \sqrt{A} \Vert_{L^2(Q_T)}, \end{equation} and deduce~(\ref{eq:ER:s2:41}) by using the boundedness of $A$, i.e. $A_1 \le A(t,x) \le A_2$. \end{proof} \end{document}
\begin{document} \title[Newton's Method Backpropagation]{Newton's Method Backpropagation for Complex-Valued Holomorphic Multilayer Perceptrons} \author[Diana Thomson La Corte and Yi Ming Zou]{Diana Thomson La Corte and Yi Ming Zou} \maketitle \begin{abstract} The study of Newton's method in complex-valued neural networks faces many difficulties. In this paper, we derive Newton's method backpropagation algorithms for complex-valued holomorphic multilayer perceptrons, and investigate the convergence of the one-step Newton steplength algorithm for the minimization of real-valued complex functions via Newton's method. To provide experimental support for the use of holomorphic activation functions, we perform a comparison of using sigmoidal functions versus their Taylor polynomial approximations as activation functions by using the algorithms developed in this paper and the known gradient descent backpropagation algorithm. Our experiments indicate that the Newton's method based algorithms, combined with the use of polynomial activation functions, provide significant improvement in the number of training iterations required over the existing algorithms. \end{abstract} \section{Introduction} \par The use of fully complex-valued neural networks to solve real-valued as well as complex-valued problems in physical applications has become increasingly popular in the neural network community in recent years \cite{Hirose2012,Hirose2011,Mukherjee2012}. Complex-valued neural networks pose unique problems, however. Consider the problem of choosing the activation functions for a neural network. Real-valued activation functions for real-valued neural networks are commonly taken to be everywhere differentiable and bounded. Typical activation functions used for real-valued neural networks are the sigmoidal, hyperbolic tangent, and hyperbolic secant functions \begin{equation*} f(x)=\frac{1}{1+\exp(-x)}, \textrm{ } \tanh(x)=\frac{e^x-e^{-x}}{e^x+e^{-x}}, \textrm{ and } \textrm{sech}(x)=\frac{2}{e^x+e^{-x}}. \end{equation*} For activation functions of complex-valued networks, an obvious choice is to use the complex counterparts of these real-valued functions. However, as complex-valued functions, these functions are no longer bounded and differentiable everywhere, since they have poles near $0$. Different approaches have been proposed in the literature to address this problem. \par Liouville's theorem tells us that there is no non-constant complex-valued function which is both bounded and differentiable on the whole complex plane \cite{Conway1978}. On the basis of Liouville's theorem, \cite{Georgiou1992} asserts that an entire function is not suitable as an activation function for a complex-valued neural network and claims boundedness as an essential property of the activation function. Some authors follow this same reasoning and use the so-called ``split'' functions of the type $f(z)=f(x+iy)=f_1(x)+if_2(y)$ where $f_1,f_2$ are real-valued functions, typically taken to be one of the sigmoidal functions \cite{Jalab2011,Kim2001,Pande2007}. Such activation functions have the advantage of easily modeling data with symmetry about the real and imaginary axes. However, this yields complex-valued neural networks which are close to real-valued networks of double dimensions and are not fully complex-valued \cite{Hirose2012}. Amplitude-phase-type activation functions have the type $f(z)=f_3(\vert z\vert) \exp (i\text{arg}(z))$ where $f_3$ is a real-valued function.
These process wave information well, but have the disadvantage of preserving phase data, making the training of a network more difficult \cite{Hirose2012,Kim2001}. Some authors forgo complex-valued activation functions entirely, choosing instead to scale the complex inputs using bounded real-valued functions which are differentiable with respect to the real and imaginary parts \cite{Amin2009-2,Amin2009,Amin2008}. While this approach allows for more natural grouping of data for classification problems, it requires a modified backpropagation algorithm to train the network, and again the networks are not fully complex-valued. Other authors choose differentiability over boundedness and use elementary transcendental functions \cite{Hanna2003,Kim2001,Kim2002}. Such functions have been used in complex-valued multilayer perceptrons trained using the traditional gradient descent backpropagation algorithm and in other applications \cite{Burse2011,Li2005,Savitha2011}. However, the problem of the existence of poles in a bounded region near $0$ arises again. Though one can try to scale the data to avoid the regions which contain poles \cite{Leung1991}, this does not solve the problem, since for unknown composite functions the locations of the poles are not known {\it a priori}. The exponential function $\exp(z)$ has been proposed as an alternative to the elementary transcendental functions for some complex-valued neural networks, and experimental evidence suggests better performance of the entire exponential function as activation function than those with poles \cite{Savitha2009}. \par In this paper, we will derive the backpropagation algorithm for fully complex-valued neural networks based on Newton's method. We compare the performances of using the complex-valued sigmoidal activation function and its Taylor polynomial approximations. Our results give strong supporting evidence for the use of holomorphic functions, in particular polynomial functions, as activation functions for complex-valued neural networks. Polynomials have been used in fully complex-valued functional link networks \cite{Amin2011,Amin2012}; however, their use as activation functions for fully complex-valued multilayer perceptrons has been limited. Polynomial functions are differentiable on the entire complex plane, they underlie our computations via Taylor's Theorem, and they are bounded on any bounded region. Moreover, the complex Stone-Weierstrass Theorem implies that any continuous complex-valued function on a compact subset of the complex plane can be approximated by a polynomial \cite{Reed1980}. Due to the nature of the problems associated with activation functions in complex-valued neural networks, different choices of activation functions suit different types of neural networks, and one should only expect an approach to be better than the others in certain applications. \par We will allow a more general class of complex-valued functions for activation functions, namely the holomorphic functions. There are two important reasons for this. The first one is that holomorphic functions encompass a general class of functions that are commonly used as activation functions. They allow a wide variety of choices both for activation functions and training methods. The second is that the differentiability of holomorphic functions leads to much simpler formulas in the backpropagation algorithms.
For application purposes, we will also consider the backpropagation algorithm using the pseudo-Newton's method, since it has a computational advantage. Our main results are given by Theorem \ref{thm:complexnewtbackprop}, Corollary \ref{cor:complexp-newtbackprop}, and Theorem \ref{thm:ConvergenceNewton}. Theorem \ref{thm:complexnewtbackprop} gives a recursive algorithm to compute the entries of the Hessian matrices in the application of Newton's method to the backpropagation algorithm for complex-valued holomorphic multilayer perceptrons, and Corollary \ref{cor:complexp-newtbackprop} gives a recursive algorithm for the application of the pseudo-Newton's method to the backpropagation algorithm based on Theorem \ref{thm:complexnewtbackprop}. The recursive algorithms we developed are analogous to the known gradient descent backpropagation algorithm as stated in Section III, and hence can be readily implemented in real-world applications. A problem with Newton's method is the choice of steplengths that ensure the algorithm actually converges in applications. Our setting enables us to perform a rigorous analysis of the one-step Newton steplength algorithm for the minimization of real-valued complex functions using Newton's method. This is done in Section VI. Our experiments, reported in Section VII, show that the algorithms we developed use significantly fewer iterations to achieve the same results as the gradient descent algorithm. We believe that the Newton's method backpropagation algorithm provides a valuable tool for fast learning for complex-valued neural networks as a practical alternative to the gradient descent methods. \par The rest of the paper is organized as follows. In Section II, we define holomorphic multilayer perceptrons and set up our notation for the network architecture we use throughout the rest of the paper. In Section III, we give a reformulation of the gradient descent backpropagation algorithm based on our setting of holomorphic neural networks. In Section IV, we derive the backpropagation algorithm for holomorphic multilayer perceptrons using Newton's method, and in Section V we restrict the results of Section IV to the pseudo-Newton's method. In Section VI we state the one-step Newton steplength algorithm, and in Section VII we report our experiments. The appendices provide the detailed computations omitted from Section IV and a detailed proof, omitted from Section VI, of the convergence of the one-step Newton steplength algorithm for the minimization of real-valued complex functions. \section{Holomorphic MLPs: Definition and Network Architecture} \par A well-used type of artificial neural network is the multilayer perceptron (MLP). An MLP is built of several layers of single neurons hooked together by a network of weight vectors. Usually the activation function is taken to be the same within a single layer of the network; the defining characteristic of the MLP is that in at least one layer the activation function must be nonlinear. If there is no nonlinear activation function, the network can be collapsed to a two-layer network \cite{Buchholz2008}. \begin{definition} A {\bf holomorphic MLP} is a complex-valued MLP in which the activation function in the layer indexed by $p$ of the network is holomorphic on some domain $\Omega_p\subseteq\mathbb{C}$. \label{def:HolomorphicMLP} \end{definition} \par Most of the publications on complex-valued neural networks with holomorphic activation functions deal with functions that have poles.
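For instance, the fully complex sigmoid $f(z)=1/(1+\exp(-z))$ has poles at $z=i\pi(2k+1)$, whereas its Taylor polynomials about $0$ are entire. The following minimal numerical sketch evaluates the complex sigmoid and one of its Taylor polynomial approximations at a sample point; it is an illustration only, not the experimental code of this paper, and the degree, evaluation point, and function names are arbitrary choices made here for demonstration.
\begin{verbatim}
import numpy as np

def sigmoid(z):
    # Fully complex sigmoid 1/(1 + exp(-z)); it has poles at i*pi*(2k+1).
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_taylor(z, degree=5):
    # Taylor polynomial of the sigmoid about 0 (an entire, pole-free function).
    # Standard expansion: 1/2 + z/4 - z^3/48 + z^5/480 - ...
    coeffs = [0.5, 0.25, 0.0, -1.0/48, 0.0, 1.0/480]
    return sum(c * z**k for k, c in enumerate(coeffs[:degree + 1]))

z = 0.3 + 0.2j
print(sigmoid(z))         # value of the sigmoid
print(sigmoid_taylor(z))  # value of its degree-5 Taylor approximation
\end{verbatim}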
We will mainly focus on entire functions for the purpose of applying Newton's method. For these functions, we do not have to worry about the entries of a Hessian matrix hitting the poles. However, we will allow some flexibility in our setting and set up our notation for a general $L$-layer holomorphic MLP as follows (see Figure \ref{fig:networkdiagram}). \begin{figure}[b]\begin{center} \includegraphics[scale=0.6]{network_diagram.jpg} \caption{Network Architecture} \label{fig:networkdiagram}\end{center} \end{figure} \begin{itemize} \item The input layer has $m=K_0$ input nodes denoted $$z_1=x^{(0)}_1,...,z_m=x^{(0)}_m.$$ \item There are $L-1$ hidden layers of neurons, and the $p$th ($1\le p\le L-1$) hidden layer contains $K_p$ nodes. We denote the output of node $j$ ($j=1,...,K_p$) in the $p$th layer by $x^{(p)}_j.$ The inputted weights to the $p$th layer are denoted $w^{(p-1)}_{ji}$ ($j=1,...,K_p$, $i=1,...,K_{p-1}$), where $j$ denotes the target node of the weight in the $p$th layer and $i$ denotes the source node in the $(p-1)$th layer. With these conventions we define the weighted net sum and the output of node $j$ of the $p$th layer by \begin{equation*} \left( x^{(p)}_j\right)^{\textrm{net}} = \sum_{i=1}^{K_{p-1}} w^{(p-1)}_{ji} x^{(p-1)}_i \textrm{ and } x^{(p)}_j = g_p \left(\left( x^{(p)}_j\right)^{\textrm{net}}\right), \label{eq:hidlayeroutput} \end{equation*} where $g_p$, which is assumed to be holomorphic on some domain $\Omega_p\subseteq \mathbb{C}$, is the activation function for all the neurons in the $p$th layer. \item The output layer has $C=K_L$ output nodes denoted $y_1=x^{(L)}_1,...,y_C=x^{(L)}_C.$ We define the weighted net sum and the output of node $l$ ($l=1,...,C$) by \begin{equation*} y_l^{\textrm{net}} = \sum_{k=1}^{K_{L-1}} w^{(L-1)}_{lk} x^{(L-1)}_k \textrm{ and }y_l = g_L \left(y_l^{\textrm{net}}\right), \label{eq:outlayeroutput} \end{equation*} where $g_L$, which is assumed to be holomorphic on some domain $\Omega_L\subseteq \mathbb{C}$, is the activation function for all the neurons in the output layer. \end{itemize} \par To train the network, we use a training set with $N$ data points $\left\{(z_{t1},...,z_{tm},d_{t1},...,d_{tC}) \, \vert \, t=1,...,N \right\}$, where $(z_{t1},...,z_{tm})$ is the input vector corresponding to the desired output vector $(d_{t1},...,d_{tC})$. As the input vector $(z_{t1},...,z_{tm})$ of the $t$th training point is propagated throughout the network, we update the subscripts of the network calculations with an additional $t$ subscript to signify that those values correspond to the $t$th training point, for example, $( x^{(p)}_{tj})^{\textrm{net}},\textrm{ } x^{(p)}_{tj},\textrm{ } y_{tl}^{\textrm{net}}, \textrm{ and } y_{tl}.$ Finally, we train the network by minimizing the standard sum-of-squares error function \begin{equation*} E=\frac{1}{N}\sum_{t=1}^N \sum_{l=1}^C \vert y_{tl}-d_{tl}\vert^2 = \frac{1}{N}\sum_{t=1}^N \sum_{l=1}^C \left(y_{tl}-d_{tl}\right)\left( \overline{y_{tl}}-\overline{d_{tl}}\right). \label{eq:error} \end{equation*} \section{The Gradient Descent Backpropagation Algorithm} \par Minimization of the error function can be achieved through the use of the backpropagation algorithm.
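Before turning to the update rules, the following minimal sketch illustrates the forward pass and the sum-of-squares error defined in Section II. It is for illustration only and is not the code used in our experiments; the layer sizes, the random weights, and the cubic polynomial activation are arbitrary choices made here, not prescribed by the paper.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Toy holomorphic MLP: K_0 = 3 inputs, K_1 = 4 hidden nodes, C = K_2 = 2 outputs.
sizes = [3, 4, 2]
weights = [0.1 * (rng.standard_normal((sizes[p + 1], sizes[p]))
                  + 1j * rng.standard_normal((sizes[p + 1], sizes[p])))
           for p in range(len(sizes) - 1)]

def g(z):
    # Entire (polynomial) activation used in every layer of this sketch.
    return z + 0.1 * z ** 3

def forward(z):
    # x^(p)_j = g_p( sum_i w^(p-1)_{ji} x^(p-1)_i ), applied layer by layer.
    x = z
    for W in weights:
        x = g(W @ x)
    return x

def error(Z, D):
    # E = (1/N) sum_t sum_l |y_tl - d_tl|^2 over the training set.
    Y = np.array([forward(z) for z in Z])
    return np.mean(np.sum(np.abs(Y - D) ** 2, axis=1))

Z = rng.standard_normal((5, 3)) + 1j * rng.standard_normal((5, 3))
D = rng.standard_normal((5, 2)) + 1j * rng.standard_normal((5, 2))
print(error(Z, D))
\end{verbatim}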
Backpropagation trains the network by updating the output layer weights first in each step (via an update rule from some numerical minimization algorithm), then using the updated output layer weights to update the first hidden layer weights, and so on, ``backpropagating'' the updates throughout the network until a desired level of accuracy is achieved (usually, this is when the error function drops below a pre-fixed value). In the case of real-valued neural networks, minimization of the error function by Newton's method is generally thought to be too computationally ``expensive,'' and several different methods are commonly used to approximate the Hessian matrices instead of computing them directly: for example the conjugate gradient, truncated Newton, Gauss-Newton and Levenberg-Marquardt algorithms \cite{Al-Haik2003,Beigi1993,Hagan1994,Mukherjee2012,Yu2011}. In contrast, for complex-valued neural networks, gradient descent methods, which are known to give stable (albeit slow) convergence, are commonly used due to their relatively simple formulations, and a number of such minimization algorithms exist \cite{Goh2005,Leung1991,Zimmermann2011}. \par We reformulate a backpropagation algorithm using gradient descent according to our setting of the neural networks defined in Section II for two reasons: the algorithm has a much simpler formulation compared with the known ones \cite{Buchholz2008,Leung1991} due to the activation functions being taken to be holomorphic, and we will use it for comparison purposes. A similar formulation of the backpropagation algorithm to ours is presented in \cite{Li2008}. The formulas of gradient descent for complex functions can be found in \cite{Kreutz-Delgado2009}. We use the following vector notation. For $1\le p\le L$, we denote the weights that input into the $p$th layer of the network using a vector whose components correspond to the target nodes: \begin{equation*} \mathbf{w}^{(p-1)}:=\left( w^{(p-1)}_{11},...,w^{(p-1)}_{1K_{p-1}},...,w^{(p-1)}_{K_p1},...,w^{(p-1)}_{K_pK_{p-1}} \right)^T, \end{equation*} that is, the components of $\mathbf{w}^{(p-1)}$ are \begin{equation}\mathbf{w}^{(p-1)}\left[ (j-1)\cdot K_{p-1}+i\right] =w^{(p-1)}_{ji}, \label{eq:weightcomponents} \end{equation} where $j=1,...,K_p$, $i=1,...,K_{p-1}$. Using this notation, the update steps for backpropagation take the form \begin{equation} \mathbf{w}^{(p-1)}(n+1)=\mathbf{w}^{(p-1)}(n)+\mu(n) \Delta \mathbf{w}^{(p-1)}, \label{eq:updatestep} \end{equation} where $\mathbf{w}^{(p-1)}(n)$ denotes the weight value after the $n$th iteration of the training algorithm, and $\mu(n)$ denotes the learning rate or steplength, which is allowed to vary with each iteration. \par Using the gradient descent method, the update for the $(p-1)$th layer of a holomorphic complex-valued neural network is (\cite{Kreutz-Delgado2009}, p.
60) \begin{equation*}\Delta \mathbf{w}^{(p-1)}=-\left(\frac{\partial E}{\partial \mathbf{w}^{(p-1)}} \right)^*.\end{equation*} Suppose the activation function $g_p$ for the $p$th layer of the network, $p=1,...,L$, satisfies $$\overline{g_p(z)}=g_p(\overline{z}).$$ Coordinate-wise, the partial derivatives $\frac{\partial E}{\partial w^{(L-1)}_{lk}}$, taken with respect to the output layer weights $w^{(L-1)}_{lk}$, $l=1,...,C$, $k=1,...,K_{L-1}$, are given by \begin{equation*} \begin{split} \frac{\partial E}{\partial w^{(L-1)}_{lk}} &= \frac{\partial}{\partial w^{(L-1)}_{lk}} \left[ \frac{1}{N}\sum_{t=1}^N \sum_{h=1}^C (y_{th}-d_{th})(\overline{y_{th}}-\overline{d_{th}}) \right]\\ &=\frac{1}{N} \sum_{t=1}^N \left[ \frac{\partial y_{tl}}{\partial w^{(L-1)}_{lk}} (\overline{y_{tl}}-\overline{d_{tl}}) + (y_{tl}-d_{tl}) \frac{\partial \overline{y_{tl}}}{\partial w^{(L-1)}_{lk}}\right]\\ &=\frac{1}{N}\sum_{t=1}^N \left( \frac{\partial y_{tl}}{\partial y_{tl}^{\textrm{net}}}\frac{\partial y_{tl}^{\textrm{net}}}{\partial w^{(L-1)}_{lk}} +\frac{\partial y_{tl}}{\partial \overline{y_{tl}^{\textrm{net}}}}\frac{\partial \overline{y_{tl}^{\textrm{net}}}}{\partial w^{(L-1)}_{lk}}\right) (\overline{y_{tl}}-\overline{d_{tl}})\\ &=\frac{1}{N}\sum_{t=1}^N (\overline{y_{tl}}-\overline{d_{tl}}) g'_L\left( y_{tl}^{\textrm{net}}\right) x^{(L-1)}_{tk}, \end{split} \label{eq:gradupdateder} \end{equation*} so that \begin{equation*} \left( \frac{\partial E}{\partial w^{(L-1)}_{lk}} \right)^* = \frac{1}{N}\sum_{t=1}^N (y_{tl}-d_{tl}) g'_L\left( \overline{y_{tl}^{\textrm{net}}}\right) \overline{x^{(L-1)}_{tk}}. \end{equation*} The partial derivatives $\left( \frac{\partial E}{\partial w^{(p-1)}_{ji}}\right)^*$, taken with respect to the hidden layer weights $w^{(p-1)}_{ji}$, $1\le p \le L-1$, $j=1,...,K_p$, $i=1,...,K_{p-1}$, are computed recursively.
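As an illustration of the output layer formula just obtained, the following sketch evaluates $\left(\partial E/\partial w^{(L-1)}_{lk}\right)^{*}$ numerically for a whole training set at once. It is a non-authoritative sketch: the array shapes and the helper name are assumptions made for this example, and the activation derivative \texttt{dgL} is supplied by the user.
\begin{verbatim}
import numpy as np

def output_layer_conjugate_cogradient(Y_net, Y, D, X_prev, dgL):
    # (dE/dw^(L-1)_{lk})^* = (1/N) sum_t (y_tl - d_tl)
    #                                    * g_L'(conj(y_tl^net)) * conj(x^(L-1)_tk)
    # Y_net, Y, D : arrays of shape (N, C); X_prev : array of shape (N, K_{L-1});
    # dgL : derivative of the holomorphic output activation g_L.
    N = Y.shape[0]
    factor = (Y - D) * dgL(np.conj(Y_net))        # shape (N, C)
    return factor.T @ np.conj(X_prev) / N         # shape (C, K_{L-1})

# Example with g_L(z) = z^2, so g_L'(z) = 2 z:
# grad_conj = output_layer_conjugate_cogradient(Y_net, Y, D, X_prev, lambda z: 2 * z)
\end{verbatim}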
The partial derivatives $\frac{\partial E}{\partial w^{(L-2)}_{ji}}$, taken with respect to the $(L-2)$th hidden layer weights, are computed using the updated $(L-1)$th output layer weights: \begin{equation*} \begin{split} \frac{\partial E}{\partial w^{(L-2)}_{ji}} &= \frac{\partial}{\partial w^{(L-2)}_{ji}} \left[ \frac{1}{N}\sum_{t=1}^N \sum_{l=1}^C (y_{tl}-d_{tl})(\overline{y_{tl}}-\overline{d_{tl}}) \right]\\ &= \frac{1}{N} \sum_{t=1}^N \sum_{l=1}^C \left[ \frac{\partial y_{tl}}{\partial w^{(L-2)}_{ji}} (\overline{y_{tl}}-\overline{d_{tl}}) + (y_{tl}-d_{tl}) \frac{\partial \overline{y_{tl}}}{\partial w^{(L-2)}_{ji}} \right], \end{split} \label{eq:gradupdateder2} \end{equation*} where \begin{equation*} \begin{split} \frac{\partial y_{tl}}{\partial w^{(L-2)}_{ji}} &= \frac{\partial y_{tl}}{\partial y_{tl}^{\textrm{net}}} \frac{\partial y_{tl}^{\textrm{net}}}{\partial w^{(L-2)}_{ji}} + \frac{\partial y_{tl}}{\partial \overline{y_{tl}^{\textrm{net}}}} \frac{\partial \overline{y_{tl}^{\textrm{net}}}}{\partial w^{(L-2)}_{ji}}\\ &=g'_L\left( y_{tl}^{\textrm{net}}\right)\left( \frac{\partial y_{tl}^{\textrm{net}}}{\partial x_{tj}^{(L-1)}}\frac{\partial x_{tj}^{(L-1)}}{\partial w^{(L-2)}_{ji}} + \frac{\partial y_{tl}^{\textrm{net}}}{\partial \overline{x_{tj}^{(L-1)}}}\frac{\partial \overline{x_{tj}^{(L-1)}}}{\partial w^{(L-2)}_{ji}} \right)\\ &= g'_L \left( y_{tl}^{\textrm{net}}\right) w^{(L-1)}_{lj} \left( \frac{\partial x_{tj}^{(L-1)}}{\partial \left( x_{tj}^{(L-1)}\right)^{\textrm{net}}} \frac{\partial \left( x_{tj}^{(L-1)}\right)^{\textrm{net}}}{\partial w^{(L-2)}_{ji}}\right.\\ &\hspace{50mm}\left.+ \frac{\partial x_{tj}^{(L-1)}}{\partial \overline{\left( x_{tj}^{(L-1)}\right)^{\textrm{net}}}} \frac{\partial \overline{\left( x_{tj}^{(L-1)}\right)^{\textrm{net}}}}{\partial w^{(L-2)}_{ji}} \right)\\ &= g'_L \left( y_{tl}^{\textrm{net}}\right) w_{lj}^{(L-1)} g'_{L-1}\left( \left( x_{tj}^{(L-1)}\right)^{\textrm{net}}\right) x_{ti}^{(L-2)} \end{split} \end{equation*} and \begin{equation*} \begin{split} \frac{\partial \overline{y_{tl}}}{\partial w_{ji}^{(L-2)}} &= \frac{\partial \overline{y_{tl}}}{\partial \overline{y_{tl}^{\textrm{net}}}} \frac{\partial \overline{y_{tl}^{\textrm{net}}}}{\partial w^{(L-2)}_{ji}} + \frac{\partial \overline{y_{tl}}}{\partial y_{tl}^{\textrm{net}}} \frac{\partial y_{tl}^{\textrm{net}}}{\partial w^{(L-2)}_{ji}}\\ &=g'_L\left( \overline{y_{tl}^{\textrm{net}}}\right)\left( \frac{\partial \overline{y_{tl}^{\textrm{net}}}}{\partial \overline{x_{tj}^{(L-1)}}}\frac{\partial \overline{x_{tj}^{(L-1)}}}{\partial w^{(L-2)}_{ji}} + \frac{\partial \overline{y_{tl}^{\textrm{net}}}}{\partial x_{tj}^{(L-1)}}\frac{\partial x_{tj}^{(L-1)}}{\partial w^{(L-2)}_{ji}} \right)\\ &=g'_L \left( \overline{y_{tl}^{\textrm{net}}}\right) \overline{w^{(L-1)}_{lj}} \left( \frac{\partial \overline{x_{tj}^{(L-1)}}}{\partial \overline{\left( x_{tj}^{(L-1)}\right)^{\textrm{net}}}} \frac{\partial \overline{\left( x_{tj}^{(L-1)}\right)^{\textrm{net}}}}{\partial
w^{(L-2)}_{ji}} \right.\\ &\hspace{50mm} \left.+ \frac{\partial \overline{x_{tj}^{(L-1)}}}{\partial \left( x_{tj}^{(L-1)}\right)^{\textrm{net}}} \frac{\partial \left( x_{tj}^{(L-1)}\right)^{\textrm{net}}}{\partial w^{(L-2)}_{ji}} \right)\\ &=0, \end{split} \end{equation*} so that \begin{equation*} \begin{split} \left(\frac{\partial E}{\partial w_{ji}^{(L-2)}} \right)^*&=\frac{1}{N}\sum_{t=1}^N \left( \sum_{l=1}^C (y_{tl}-d_{tl}) g'_L\left( \overline{y_{tl}^{\textrm{net}}}\right) \overline{w_{lj}^{(L-1)}}\right) \\ & \hspace{35mm} \cdot g'_{L-1}\left( \overline{\left( x_{tj}^{(L-1)}\right)^{\textrm{net}}}\right) \overline{x_{ti}^{(L-2)}}, \end{split} \end{equation*} and so on. We summarize the partial derivatives by \begin{eqnarray} \left( \frac{\partial E}{\partial w^{(p-1)}_{ji}}\right)^* = \frac{1}{N} \sum_{t=1}^N E^{(p)}_{tj} \overline{x^{(p-1)}_{ti}}, \label{eq:grad}\end{eqnarray} $1\le p\le L$, where $j=1,...,K_p$, $i=1,...,K_{p-1}$, and the $E_{tj}^{(p)}$ are given recursively by \begin{eqnarray} E^{(L)}_{tl}=\left( y_{tl}-d_{tl}\right) g_L'\left( \overline{y^{\textrm{net}}_{tl}}\right), \label{eq:deltaL}\end{eqnarray} where $l=1,...,C$, $t=1,...,N$; and for $1\le p\le L-1$, \begin{eqnarray} E^{(p)}_{tj} = \left[ \sum_{\alpha =1}^{K_{p+1}} E^{(p+1)}_{t\alpha} \overline{w^{(p)}_{\alpha j}}\right] g_p'\left(\overline{\left(x^{(p)}_{tj}\right)^{\textrm{net}}}\right), \label{eq:deltap}\end{eqnarray} where $j=1,...,K_p$, $t=1,...,N$. The gradient descent method is well known to be rather slow in the convergence of the error function. We next derive formulas for the backpropagation algorithm using Newton's method (compare with \cite{Buchholz2008,Leung1991}). \section{Backpropagation Using Newton's Method} \par The weight updates for Newton's method with complex functions are given by formula (111) of \cite{Kreutz-Delgado2009} (we omit the superscripts, which index the layers, to simplify our writing): \begin{equation} \Delta\mathbf{w} = \left( \mathcal{H}_{\mathbf{w}\mathbf{w}} - \mathcal{H}_{\overline{\mathbf{w}}\mathbf{w}}\mathcal{H}_{\overline{\mathbf{w}}\overline{\mathbf{w}}}^{-1}\mathcal{H}_{\mathbf{w}\overline{\mathbf{w}}} \right)^{-1} \left[ \mathcal{H}_{\overline{\mathbf{w}}\mathbf{w}}\mathcal{H}_{\overline{\mathbf{w}}\overline{\mathbf{w}}}^{-1} \left( \frac{\partial E}{\partial \overline{\mathbf{w}}} \right)^* - \left( \frac{\partial E}{\partial \mathbf{w}} \right)^* \right]. \label{eq:NewUp} \end{equation} \par To apply the Newton algorithm we need to compute the Hessian matrices (again omitting the superscripts) \begin{equation} \mathcal{H}_{\mathbf{w}\mathbf{w}}=\frac{\partial}{\partial \mathbf{w}} \left( \frac{\partial E}{\partial \mathbf{w}}\right)^* \textrm{ and } \mathcal{H}_{\overline{\mathbf{w}}\mathbf{w}}=\frac{\partial}{\partial \overline{\mathbf{w}}} \left( \frac{\partial E}{\partial \mathbf{w}}\right)^*, \label{eq:hessdef} \end{equation} where the entries of $\left(\frac{\partial E}{\partial \mathbf{w}}\right)^*$ are given by (\ref{eq:grad}).
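The following minimal sketch shows how the update~(\ref{eq:NewUp}) could be evaluated numerically for one layer once the cogradients and the two Hessians in~(\ref{eq:hessdef}) are available. It is an illustration under assumptions (dense matrices, invertibility of the relevant blocks) rather than part of the algorithms derived in this paper, and the function and variable names are chosen here for the example only.
\begin{verbatim}
import numpy as np

def newton_weight_update(H_ww, H_wbar_w, grad_w_conj, grad_wbar_conj):
    # Evaluates Delta w from (eq:NewUp) for a single layer.
    # H_ww       : d/dw (dE/dw)^*,    H_wbar_w      : d/dwbar (dE/dw)^*
    # grad_w_conj: (dE/dw)^*,         grad_wbar_conj: (dE/dwbar)^*
    # The remaining Hessian matrices follow by conjugation:
    H_wbar_wbar = np.conj(H_ww)
    H_w_wbar = np.conj(H_wbar_w)
    A = H_ww - H_wbar_w @ np.linalg.solve(H_wbar_wbar, H_w_wbar)
    b = H_wbar_w @ np.linalg.solve(H_wbar_wbar, grad_wbar_conj) - grad_w_conj
    return np.linalg.solve(A, b)   # apply as w <- w + mu * Delta w, cf. (eq:updatestep)
\end{verbatim}
The conjugation relations used above for the two remaining Hessian matrices are recalled in the following paragraph.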
Note that although (\ref{eq:NewUp}) asks for the four Hessian matrices $\mathcal{H}_{\mathbf{w}\mathbf{w}}$, $\mathcal{H}_{\overline{\mathbf{w}}\mathbf{w}}$, $\mathcal{H}_{\mathbf{w}\overline{\mathbf{w}}}$, and $\mathcal{H}_{\overline{\mathbf{w}}\overline{\mathbf{w}}}$, we have $\mathcal{H}_{\mathbf{w}\overline{\mathbf{w}}} =\overline{\mathcal{H}_{\overline{\mathbf{w}}\mathbf{w}}} \textrm{ and } \mathcal{H}_{\overline{\mathbf{w}}\overline{\mathbf{w}}}=\overline{\mathcal{H}_{\mathbf{w}\mathbf{w}}}.$ Thus we only need to compute two of them. \par We consider the entries of the Hessian matrices $\mathcal{H}_{\mathbf{w}\mathbf{w}}$ and $\mathcal{H}_{\overline{\mathbf{w}}\mathbf{w}}$. For the $(p-1)$th layer, the entries of $\mathcal{H}_{\mathbf{w}\mathbf{w}}$ are given by (see (\ref{eq:weightcomponents})) \begin{equation*} \begin{split} \mathcal{H}_{\mathbf{w}\mathbf{w}} \left[ (j-1) \cdot K_{p-1}+i , (b-1)\cdot K_{p-1}+a\right] =\frac{\partial}{\partial w^{(p-1)}_{ba}}\left( \frac{\partial E}{\partial w^{(p-1)}_{ji}}\right)^*, \end{split} \end{equation*} where $j,b=1,...,K_p$ and $i,a=1,...,K_{p-1}$, and the entries of $\mathcal{H}_{\overline{\mathbf{w}}\mathbf{w}}$ are given by \begin{equation*} \begin{split} \mathcal{H}_{\overline{\mathbf{w}}\mathbf{w}} \left[ (j-1) \cdot K_{p-1}+i , (b-1)\cdot K_{p-1}+a\right] = \frac{\partial}{\partial \overline{w^{(p-1)}_{ba}}}\left( \frac{\partial E}{\partial w^{(p-1)}_{ji}}\right)^*, \end{split} \end{equation*} where $j,b=1,...,K_p$ and $i,a=1,...,K_{p-1}$. \par First we derive an explicit formula for the entries of the Hessian $\mathcal{H}_{\mathbf{w}\mathbf{w}}$. We start with the output layer and compute $\frac{\partial}{\partial w^{(L-1)}_{kq}}\left( \frac{\partial E}{\partial w^{(L-1)}_{lp}}\right)^*$, where $k,l=1,...,C$ and $q,p=1,...,K_{L-1}$. Observe that if $k\neq l$, then each term $(y_{tl}-d_{tl})g_L'\left(\overline{y^{\textrm{net}}_{tl}}\right)\overline{x^{(L-1)}_{tp}}$ in the cogradient given by (\ref{eq:grad}) and (\ref{eq:deltaL}) does not depend on the weights $w^{(L-1)}_{kq}$, hence this entry of the Hessian will be $0$.
So the Hessian matrix for the output layer has a block diagonal form: \begin{equation*}\mathcal{H}_{\mathbf{w}^{(L-1)}\mathbf{w}^{(L-1)}}=\textrm{diag} \left\{ \left[ \frac{\partial}{\partial w^{(L-1)}_{lq}}\left( \frac{\partial E}{\partial w^{(L-1)}_{lp}} \right)^* \right]_{1\le p\le K_{L-1} \atop 1\le q\le K_{L-1}} : l=1,...,C\right\}.\end{equation*} Now: \begin{equation} \begin{split} \frac{\partial}{\partial w^{(L-1)}_{lq}} &\left( \frac{\partial E}{\partial w^{(L-1)}_{lp}} \right)^* = \frac{\partial}{\partial w^{(L-1)}_{lq}} \left[ \frac{1}{N} \sum_{t=1}^N (y_{tl}-d_{tl}) g_L'\left( \overline{y^{\textrm{net}}_{tl}} \right) \overline{x^{(L-1)}_{tp}} \right]\\ &= \frac{1}{N}\sum_{t=1}^N \left[ (y_{tl}-d_{tl}) \frac{\partial g_L'\left( \overline{y^{\textrm{net}}_{tl}}\right)}{\partial w^{(L-1)}_{lq}} + g_L'\left( \overline{y^{\textrm{net}}_{tl}} \right) \frac{\partial y_{tl}}{\partial w^{(L-1)}_{lq} } \right] \overline{x^{(L-1)}_{tp}} \end{split}\label{eq:outcomp} \end{equation} where \begin{equation*} \frac{\partial y_{tl}}{\partial w^{(L-1)}_{lq}} = \frac{\partial y_{tl}}{\partial y^{\textrm{net}}_{tl}}\frac{\partial y^{\textrm{net}}_{tl}}{\partial w^{(L-1)}_{lq}}+\frac{\partial y_{tl}}{\partial \overline{y^{\textrm{net}}_{tl}}}\frac{\partial \overline{y^{\textrm{net}}_{tl}}}{\partial w^{(L-1)}_{lq}}=g_L'\left( y^{\textrm{net}}_{tl}\right) x^{(L-1)}_{tq} \end{equation*} since $g_L$ is holomorphic and therefore $\frac{\partial y_{tl}}{\partial \overline{y_{tl}^{\textrm{net}}}}=0$ (Cauchy-Riemann condition), and similarly \begin{equation*} \frac{\partial g_L'\left( \overline{y^{\textrm{net}}_{tl}}\right)}{\partial w^{(L-1)}_{lq}} = \frac{\partial g_L'\left( \overline{y^{\textrm{net}}_{tl}}\right)}{\partial \overline{y^{\textrm{net}}_{tl}}}\frac{\partial \overline{y^{\textrm{net}}_{tl}}}{\partial w^{(L-1)}_{lq}}+\frac{\partial g_L'\left( \overline{y^{\textrm{net}}_{tl}}\right)}{\partial y^{\textrm{net}}_{tl}}\frac{\partial y^{\textrm{net}}_{tl}}{\partial w^{(L-1)}_{lq}}=0. \end{equation*} Combining these two partial derivatives with (\ref{eq:outcomp}) gives the following formula for the entries of the output layer Hessian matrix: \begin{equation} \begin{split} \frac{\partial}{\partial w^{(L-1)}_{kq}}&\left( \frac{\partial E}{\partial w^{(L-1)}_{lp}}\right)^*\\ &=\left\{ \begin{array}{ll} \frac{1}{N}\sum_{t=1}^N g_L'\left( \overline{y^{\textrm{net}}_{tl}} \right)g_L'\left( y^{\textrm{net}}_{tl} \right) \overline{x^{(L-1)}_{tp}}x^{(L-1)}_{tq} & \textrm{if }k=l,\\ 0 & \textrm{if }k\neq l. \end{array}\right. \end{split} \label{eq:outputhess} \end{equation} \par After updating the output layer weights, the backpropagation algorithm updates the hidden layer weights recursively.
We compute the entries of the Hessian $\mathcal{H}_{\mathbf{w}^{(p-1)}\mathbf{w}^{(p-1)}}$ for the $(p-1)$th layer using (\ref{eq:grad}): \begin{equation} \begin{split} \frac{\partial}{\partial w^{(p-1)}_{ba}}\left( \frac{\partial E}{\partial w^{(p-1)}_{ji}}\right)^* &= \frac{\partial}{\partial w^{(p-1)}_{ba}} \left[ \frac{1}{N}\sum_{t=1}^N E^{(p)}_{tj} \overline{x^{(p-1)}_{ti}} \right]\\ &= \frac{1}{N}\sum_{t=1}^N \frac{\partial E^{(p)}_{tj}}{\partial w^{(p-1)}_{ba}} \overline{x^{(p-1)}_{ti}}. \end{split} \label{eq:hidcomp} \end{equation} Applying the chain rule to (\ref{eq:deltap}), we have \begin{equation} \begin{split} &\frac{\partial E^{(p)}_{tj}}{\partial w^{(p-1)}_{ba}} = \frac{\partial}{\partial w^{(p-1)}_{ba}}\left[\left( \sum_{\eta =1}^{K_{p+1}} E^{(p+1)}_{t\eta} \overline{w^{(p)}_{\eta j}}\right) g_p'\left(\overline{\left(x^{(p)}_{tj}\right)^{\textrm{net}}}\right)\right]\\ &= g_p'\left(\overline{\left(x^{(p)}_{tj}\right)^{\textrm{net}}}\right) \sum_{\eta =1}^{K_{p+1}} \frac{\partial E^{(p+1)}_{t\eta}}{\partial w^{(p-1)}_{ba}} \overline{w^{(p)}_{\eta j}}\\ &=g_p'\left(\overline{\left(x^{(p)}_{tj}\right)^{\textrm{net}}}\right) \sum_{\eta =1}^{K_{p+1}} \left[ \frac{\partial E^{(p+1)}_{t\eta}}{\partial x^{(p)}_{tb}} \frac{\partial x^{(p)}_{tb}}{\partial w^{(p-1)}_{ba}}\right.+ \left.\frac{\partial E^{(p+1)}_{t\eta}}{\partial \overline{x^{(p)}_{tb}}} \frac{\partial \overline{x^{(p)}_{tb}}}{\partial w^{(p-1)}_{ba}} \right] \overline{w^{(p)}_{\eta j}}\\ &= g_p'\left(\overline{\left(x^{(p)}_{tj}\right)^{\textrm{net}}}\right) \sum_{\eta =1}^{K_{p+1}} \frac{\partial E^{(p+1)}_{t\eta}}{\partial x^{(p)}_{tb}} \left[ \frac{\partial x^{(p)}_{tb}}{\partial (x^{(p)}_{tb})^{\textrm{net}}} \frac{\partial (x^{(p)}_{tb})^{\textrm{net}}}{\partial w^{(p-1)}_{ba}}\right.\\ & \hspace{40mm} \left.+ \frac{\partial x^{(p)}_{tb}}{\partial \overline{(x^{(p)}_{tb})^{\textrm{net}}}} \frac{\partial \overline{(x^{(p)}_{tb})^{\textrm{net}}}}{\partial w^{(p-1)}_{ba}} \right] \overline{w^{(p)}_{\eta j}}\\ &= g_p'\left(\overline{\left(x^{(p)}_{tj}\right)^{\textrm{net}}}\right)\sum_{\eta =1}^{K_{p+1}}\frac{\partial E^{(p+1)}_{t\eta}}{\partial x^{(p)}_{tb}} g_p'\left(\left(x^{(p)}_{tb}\right)^{\textrm{net}}\right) x^{(p-1)}_{ta} \overline{w^{(p)}_{\eta j}}. \end{split} \label{eq:deltaw} \end{equation} In the above computation, we have used the fact that $g_p$ is holomorphic and hence $\frac{\partial g'_p\left( \overline{( x^{(p)}_{tj})^{\textrm{net}}}\right)}{\partial w^{(p-1)}_{ba}}=0$ and $\frac{\partial \overline{x_{tb}^{(p)}}}{\partial w_{ba}^{(p-1)}}=0$, $\frac{\partial x^{(p)}_{tb}}{\partial \overline{( x^{(p)}_{tb})^{\textrm{net}}}}=0$, and $\frac{\partial \overline{ (x^{(p)}_{tb})^{\textrm{net}}}}{\partial w^{(p-1)}_{ba}} =0$.
Combining (\ref{eq:hidcomp}) and (\ref{eq:deltaw}), we have: \begin{equation} \begin{split} \frac{\partial}{\partial w^{(p-1)}_{ba}}\left( \frac{\partial E}{\partial w^{(p-1)}_{ji}}\right)^* &=\frac{1}{N}\sum_{t=1}^N \left[ \sum_{\eta=1}^{K_{p+1}} \frac{\partial E^{(p+1)}_{t\eta}}{\partial x^{(p)}_{tb}} \overline{w^{(p)}_{\eta j}} \right]\\ & \hspace{10mm} \cdot g_p'\left(\overline{\left(x^{(p)}_{tj}\right)^{\textrm{net}}}\right) g_p'\left(\left(x^{(p)}_{tb}\right)^{\textrm{net}}\right) \overline{x^{(p-1)}_{ti}} x^{(p-1)}_{ta}. \end{split} \label{eq:hiddeltas} \end{equation} \par Next, we derive a recursive rule for finding the partial derivatives $\frac{\partial E^{(p+1)}_{t\eta}}{\partial x^{(p)}_{tb}}$. For computational purposes, an explicit formula for $\frac{\partial E^{(L)}_{t\eta}}{\partial x^{(L-1)}_{tb}}$ is not necessary. What we need is a recursive formula for these partial derivatives, as will be apparent shortly. Using (\ref{eq:deltap}) we have the following: \begin{equation} \begin{split} \frac{\partial E^{(p+1)}_{t\eta}}{\partial x^{(p)}_{tb}} &= \frac{\partial}{\partial x^{(p)}_{tb}} \left[\left( \sum_{\alpha =1}^{K_{p+2}} E^{(p+2)}_{t\alpha} \overline{w^{(p+1)}_{\alpha \eta}}\right) g_{p+1}'\left(\overline{\left(x^{(p+1)}_{t\eta}\right)^{\textrm{net}}}\right)\right]\\ &=g_{p+1}'\left(\overline{\left(x^{(p+1)}_{t\eta}\right)^{\textrm{net}}}\right) \sum_{\alpha =1}^{K_{p+2}} \frac{\partial E^{(p+2)}_{t\alpha}}{\partial x^{(p)}_{tb}} \overline{w^{(p+1)}_{\alpha \eta}}\\ &=g_{p+1}'\left(\overline{\left(x^{(p+1)}_{t\eta}\right)^{\textrm{net}}}\right) \sum_{\alpha =1}^{K_{p+2}}\sum_{\beta=1}^{K_{p+1}} \left[ \frac{\partial E^{(p+2)}_{t\alpha}}{\partial x^{(p+1)}_{t\beta}} \frac{\partial x^{(p+1)}_{t\beta}}{\partial x^{(p)}_{tb}}\right.\\ & \hspace{40mm} \left.+\frac{\partial E^{(p+2)}_{t\alpha}}{\partial \overline{x^{(p+1)}_{t\beta}}} \frac{\partial \overline{x^{(p+1)}_{t\beta}}}{\partial x^{(p)}_{tb}} \right] \overline{w^{(p+1)}_{\alpha \eta}}\\ \end{split} \label{eq:deltapbyx} \end{equation} \begin{equation*} \begin{split} &=g_{p+1}'\left(\overline{\left(x^{(p+1)}_{t\eta}\right)^{\textrm{net}}}\right) \sum_{\alpha =1}^{K_{p+2}}\sum_{\beta=1}^{K_{p+1}}\frac{\partial E^{(p+2)}_{t\alpha}}{\partial x^{(p+1)}_{t\beta}}\\ & \hspace{3mm} \cdot\left[ \frac{\partial x^{(p+1)}_{t\beta}}{\partial (x^{(p+1)}_{t\beta})^{\textrm{net}}} \frac{\partial (x^{(p+1)}_{t\beta})^{\textrm{net}}}{\partial x^{(p)}_{tb}}+\frac{\partial x^{(p+1)}_{t\beta}}{\partial \overline{(x^{(p+1)}_{t\beta})^{\textrm{net}}}} \frac{\partial \overline{(x^{(p+1)}_{t\beta})^{\textrm{net}}}}{\partial x^{(p)}_{tb}} \right] \overline{w^{(p+1)}_{\alpha \eta}}\\ &=\sum_{\beta=1}^{K_{p+1}}\left[\sum_{\alpha =1}^{K_{p+2}}\frac{\partial E^{(p+2)}_{t\alpha}}{\partial x^{(p+1)}_{t\beta}} \overline{w^{(p+1)}_{\alpha \eta}}\right]\\ & \hspace{20mm} \cdot
g_{p+1}'\left(\overline{\left(x^{(p+1)}_{t\eta}\right)^{\textrm{net}}}\right)g_{p+1}'\left(\left(x^{(p+1)}_{t\beta}\right)^{\textrm{net}}\right)w^{(p)}_{\beta b}. \end{split} \end{equation*} This gives a recursive formula for computing the partial derivatives $\frac{\partial E^{(p+1)}_{t\eta}}{\partial x^{(p)}_{tb}}$. We will combine the above calculations to give a more concise recursive algorithm for computing the entries of the matrices $\mathcal{H}_{\mathbf{w}\mathbf{w}}$ in Theorem \ref{thm:complexnewtbackprop}, below. \par Next we consider the Hessians $\mathcal{H}_{\overline{\mathbf{w}}\mathbf{w}}$. Again we start with the output layer and compute $\frac{\partial}{\partial \overline{w^{(L-1)}_{kq}}} \left( \frac{\partial E}{\partial w^{(L-1)}_{lp}} \right)^*$. Using the fact that $\frac{\partial E}{\partial w^{(L-1)}_{lp}}$ does not depend on $\overline{w^{(L-1)}_{kq}}$ if $k\neq l$, we see that the output layer Hessian $\mathcal{H}_{\overline{\mathbf{w}^{(L-1)}}\mathbf{w}^{(L-1)}}$ is also block diagonal with blocks \begin{eqnarray*} \left[ \frac{\partial}{\partial \overline{w^{(L-1)}_{lq}}} \left( \frac{\partial E}{\partial w^{(L-1)}_{lp}} \right)^*\right]_{1\le p\le K_{L-1} \atop 1\le q\le K_{L-1}} \end{eqnarray*} for $l=1,...,C$. Computing the entries in these blocks, \begin{equation*} \begin{split} \frac{\partial}{\partial \overline{w^{(L-1)}_{lq}}} & \left( \frac{\partial E}{\partial w^{(L-1)}_{lp}} \right)^* = \frac{\partial}{\partial \overline{w^{(L-1)}_{lq}}} \left[ \frac{1}{N} \sum_{t=1}^N (y_{tl}-d_{tl}) g_L'\left(\overline{y^{\textrm{net}}_{tl}}\right) \overline{x^{(L-1)}_{tp}}\right]\\ &= \frac{1}{N} \sum_{t=1}^N \left[ (y_{tl}-d_{tl})\frac{\partial g_L'\left(\overline{y^{\textrm{net}}_{tl}}\right)}{\partial \overline{w^{(L-1)}_{lq}}}+ g_L'\left(\overline{y^{\textrm{net}}_{tl}}\right) \frac{\partial y_{tl}}{\partial \overline{w^{(L-1)}_{lq}}} \right]\overline{x^{(L-1)}_{tp}} \end{split} \label{eq:outconjcomp} \end{equation*} where $\frac{\partial y_{tl}}{\partial \overline{w^{(L-1)}_{lq}}} =0$, and \begin{equation*} \begin{split} \frac{\partial g_L'\left(\overline{y^{\textrm{net}}_{tl}}\right)}{\partial \overline{w^{(L-1)}_{lq}}} &=\frac{\partial g_L'\left(\overline{y^{\textrm{net}}_{tl}}\right)}{\partial \overline{y_{tl}^{\textrm{net}}}} \frac{\partial \overline{y_{tl}^{\textrm{net}}}}{\partial \overline{w^{(L-1)}_{lq}}}+\frac{\partial g_L'\left(\overline{y^{\textrm{net}}_{tl}}\right)}{\partial y_{tl}^{\textrm{net}}} \frac{\partial y_{tl}^{\textrm{net}}}{\partial \overline{w^{(L-1)}_{lq}}}\\ &= g_L''\left(\overline{y_{tl}^{\textrm{net}}}\right)\overline{x^{(L-1)}_{tq}}. \end{split} \end{equation*} Thus: \begin{equation} \begin{split} \frac{\partial}{\partial \overline{w^{(L-1)}_{kq}}} & \left( \frac{\partial E}{\partial w^{(L-1)}_{lp}} \right)^*\\ &=\left\{ \begin{array}{ll} \frac{1}{N}\sum_{t=1}^N (y_{tl}-d_{tl}) g_L''\left(\overline{y_{tl}^{\textrm{net}}}\right)\overline{x^{(L-1)}_{tq}}\overline{x^{(L-1)}_{tp}} & \textrm{if } k=l,\\ 0 & \textrm{if }k\neq l. \end{array}\right.
\end{split} \label{eq:outconj} \end{equation} \par The entries of the Hessian $\mathcal{H}_{\overline{\mathbf{w}^{(p-1)}}\mathbf{w}^{(p-1)}}$ for the $(p-1)$th layer can be computed similarly. We record the formula here and provide the detailed computations in Appendix A. We have: \begin{equation} \begin{split} &\frac{\partial}{\partial \overline{w^{(p-1)}_{ba}}} \left( \frac{\partial E}{\partial w^{(p-1)}_{ji}} \right)^*\\ &=\left\{ \begin{array}{l} \frac{1}{N} \sum_{t=1}^N \left\{ \left[ \sum_{\eta=1}^{K_{p+1}}\frac{\partial E^{(p+1)}_{t\eta}}{\partial \overline{x^{(p)}_{tb}}} \overline{w^{(p)}_{\eta j}} \right] g_p'( \overline{(x^{(p)}_{tj})^{\textrm{net}}})g_p'( \overline{(x^{(p)}_{tb})^{\textrm{net}}}) \right.\\ \hspace{10mm} \left. +\left[\sum_{\eta=1}^{K_{p+1}} E_{t\eta}^{(p+1)} \overline{w^{(p)}_{\eta j}} \right] g_p''( \overline{(x^{(p)}_{tj})^{\textrm{net}}}) \right\} \overline{x^{(p-1)}_{ti}}\overline{x^{(p-1)}_{ta}}\textrm{ if }j=b,\\ \frac{1}{N} \sum_{t=1}^N \left\{ \left[ \sum_{\eta=1}^{K_{p+1}}\frac{\partial E^{(p+1)}_{t\eta}}{\partial \overline{x^{(p)}_{tb}}} \overline{w^{(p)}_{\eta j}} \right] g_p'( \overline{(x^{(p)}_{tj})^{\textrm{net}}})g_p'( \overline{(x^{(p)}_{tb})^{\textrm{net}}}) \right\}\\ \hspace{70mm}\cdot \overline{x^{(p-1)}_{ti}}\overline{x^{(p-1)}_{ta}} \hspace{3.5mm}\textrm{if }j\neq b,\\ \end{array}\right. \end{split} \label{eq:hidconjdeltas} \end{equation} where $j,b=1,...,K_p$ and $i,a=1,...,K_{p-1}$, and the partial derivatives $\frac{\partial E^{(p+1)}_{t\eta}}{\partial \overline{x^{(p)}_{tb}}}$ are given recursively by \begin{equation} \begin{split} \frac{\partial E^{(p+1)}_{t\eta}}{\partial \overline{x^{(p)}_{tb}}} &=\sum_{\beta=1}^{K_{p+1}} \left[ \sum_{\alpha=1}^{K_{p+2}} \frac{\partial E^{(p+2)}_{t\alpha}}{\partial \overline{x^{(p+1)}_{t\beta}}}\overline{w^{(p+1)}_{\alpha \eta}} \right]\\ &\hspace{20mm}\cdot g'_{p+1}\left(\overline{\left( x^{(p+1)}_{t\eta}\right)^{\textrm{net}}}\right) g'_{p+1}\left(\overline{\left( x^{(p+1)}_{t\beta}\right)^{\textrm{net}}}\right) \overline{w^{(p)}_{\beta b}}\\ &\hspace{10mm}+\left[\sum_{\alpha=1}^{K_{p+2}} E^{(p+2)}_{t\alpha} \overline{w^{(p+1)}_{\alpha \eta}} \right]g''_{p+1}\left(\overline{\left( x^{(p+1)}_{t\eta}\right)^{\textrm{net}}}\right) \overline{w^{(p)}_{\eta b}}. \end{split} \label{eq:deltapbyxconj} \end{equation} \par We now summarize the formulas we have derived in the following theorem.
\begin{theorem}[Newton Backpropagation Algorithm for Holomorphic Neural Networks] The weight updates for the holomorphic MLP with activation functions satisfying $$\overline{g_p(z)}=g_p(\overline{z}), \qquad p=1,...,L,$$ using the backpropagation algorithm with Newton's method are given by \begin{equation} \begin{split} \Delta\mathbf{w}^{(p-1)} & = \left( \mathcal{H}_{\mathbf{w}^{(p-1)}\mathbf{w}^{(p-1)}} - \mathcal{H}_{\overline{\mathbf{w}^{(p-1)}}\mathbf{w}^{(p-1)}}\mathcal{H}_{\overline{\mathbf{w}^{(p-1)}}\overline{\mathbf{w}^{(p-1)}}}^{-1}\mathcal{H}_{\mathbf{w}^{(p-1)}\overline{\mathbf{w}^{(p-1)}}} \right)^{-1}\\ &\hspace{5mm}\cdot\left[ \mathcal{H}_{\overline{\mathbf{w}^{(p-1)}}\mathbf{w}^{(p-1)}}\mathcal{H}_{\overline{\mathbf{w}^{(p-1)}}\overline{\mathbf{w}^{(p-1)}}}^{-1} \left( \frac{\partial E}{\partial \overline{\mathbf{w}^{(p-1)}}} \right)^* - \left( \frac{\partial E}{\partial \mathbf{w}^{(p-1)}} \right)^* \right], \end{split} \label{eq:NewUpThm} \end{equation} where: \begin{enumerate} \item the entries of the Hessian matrices $\mathcal{H}_{\mathbf{w}^{(p-1)}\mathbf{w}^{(p-1)}}$ for $p=1,...,L$ are given by \begin{equation} \frac{\partial}{\partial w^{(p-1)}_{ba}}\left( \frac{\partial E}{\partial w^{(p-1)}_{ji}}\right)^* = \frac{1}{N} \sum_{t=1}^N \gamma^{(p)}_{tjb} \overline{x^{(p-1)}_{ti}} x^{(p-1)}_{ta} \label{eq:hessp} \end{equation} for $j,b=1,...,K_p$ and $i,a=1,...,K_{p-1}$, where the $\gamma^{(p)}_{tjb}$ are defined for $t=1,...,N$ recursively on $p$ by \begin{equation*} \gamma^{(L)}_{tkl} = \left\{ \begin{array}{ll} g_L'(\overline{y^{\textrm{net}}_{tl}})g_L'(y^{\textrm{net}}_{tl}) & \textrm{if } k=l,\\ 0 & \textrm{if } k\neq l, \end{array}\right. \label{eq:gammaL} \end{equation*} for $k,l=1,...,C$, and for $p=1,...,L-1$, \begin{equation} \gamma^{(p)}_{tjb} = \left[ \sum_{\eta=1}^{K_{p+1}} \sum_{\beta=1}^{K_{p+1}} \gamma^{(p+1)}_{t\eta \beta} \overline{w^{(p)}_{\eta j}}w^{(p)}_{\beta b}\right] g_p'\left( \overline{(x^{(p)}_{tj})^{\textrm{net}}} \right)g_p'\left( (x^{(p)}_{tb})^{\textrm{net}} \right) \label{eq:gammap} \end{equation} for $j,b=1,...,K_{p}$, \item the entries of the Hessian matrices $\mathcal{H}_{\overline{\mathbf{w}^{(p-1)}}\mathbf{w}^{(p-1)}}$ for $p=1,...,L$ are given by \begin{equation} \frac{\partial}{\partial \overline{w^{(p-1)}_{ba}}} \left( \frac{\partial E}{\partial w^{(p-1)}_{ji}}\right)^* = \frac{1}{N} \sum_{t=1}^N \left( \psi^{(p)}_{tjb} +\theta^{(p)}_{tjb}\right) \overline{x^{(p-1)}_{ti}}\overline{x^{(p-1)}_{ta}} \label{eq:conjhessp} \end{equation} for $j,b=1,...,K_p$ and $i,a=1,...,K_{p-1}$, where the $\theta^{(p)}_{tjb}$ are defined for $t=1,...,N$ by \begin{equation*} \theta^{(L)}_{tkl}=\left\{ \begin{array}{ll} (y_{tl}-d_{tl})g_L''\left(\overline{y_{tl}^{\textrm{net}}} \right) & \textrm{if }k=l, \\ 0 & \textrm{if }k\neq l, \end{array}\right. \label{thetaL} \end{equation*} for $k,l=1,...,C$, and for $p=1,...,L-1$, \begin{equation} \theta^{(p)}_{tjb} =\left\{ \begin{array}{ll} \left[ \sum_{\eta=1}^{K_{p+1}} E^{(p+1)}_{t\eta} \overline{w^{(p)}_{\eta j}} \right] g_p'' \left( \overline{\left( x^{(p)}_{tj}\right)^{\textrm{net}}}\right) & \textrm{if } j=b, \\ 0 & \textrm{if } j\neq b, \end{array}\right.
\label{eq:thetap} \end{equation} for $j,b=1,...,K_{p}$, where the $E^{(p)}_{t\eta}$ are given by (\ref{eq:deltaL}) and (\ref{eq:deltap}), and the $\psi^{(p)}_{tjb}$ are defined for $t=1,...,N$ recursively on $p$ by $\psi^{(L)}_{tkl}=0$ for $k,l=1,...,C$, and for $p=1,...,L-1$, \begin{equation} \begin{split} \psi^{(p)}_{tjb}=\left[ \sum_{\eta=1}^{K_{p+1}}\sum_{\beta=1}^{K_{p+1}} \left( \psi^{(p+1)}_{t\eta\beta} \overline{w^{(p)}_{\beta b}}\right.\right. &+\left.\left.\theta^{(p+1)}_{t\eta\beta} \overline{w^{(p)}_{\eta b}}\right)\overline{w^{(p)}_{\eta j}} \right]\\ &\cdot g_p'\left(\overline{\left(x^{(p)}_{tj}\right)^{\textrm{net}}} \right) g_p'\left(\overline{\left(x^{(p)}_{tb}\right)^{\textrm{net}}} \right) \end{split} \label{eq:psip} \end{equation} for $j,b=1,...,K_{p}$, and \item for the other two Hessian matrices we have $\mathcal{H}_{\mathbf{w}^{(p-1)}\overline{\mathbf{w}^{(p-1)}}}=\overline{\mathcal{H}_{\overline{\mathbf{w}^{(p-1)}}\mathbf{w}^{(p-1)}}}$ and $\mathcal{H}_{\overline{\mathbf{w}^{(p-1)}}\overline{\mathbf{w}^{(p-1)}}} = \overline{\mathcal{H}_{\mathbf{w}^{(p-1)}\mathbf{w}^{(p-1)}}}.$ \end{enumerate} \label{thm:complexnewtbackprop} \end{theorem} \begin{proof}\begin{enumerate} \item Setting $\gamma^{(L)}_{tkl}$ as defined above, Equation (\ref{eq:hessp}) follows immediately from (\ref{eq:outputhess}). For the hidden layer Hessian matrix entries, set \begin{equation} \gamma^{(p)}_{tjb} = \left[\sum_{\eta=1}^{K_{p+1}} \frac{\partial E^{(p+1)}_{t\eta}}{\partial x^{(p)}_{tb}} \overline{w^{(p)}_{\eta j}}\right] g_p'\left( \overline{(x^{(p)}_{tj})^{\textrm{net}}} \right)g_p'\left( (x^{(p)}_{tb})^{\textrm{net}} \right) \label{eq:proof1a} \end{equation} in (\ref{eq:hiddeltas}), giving us (\ref{eq:hessp}). Then using (\ref{eq:deltapbyx}) we have \begin{equation} \frac{\partial E_{t\eta}^{(p+1)}}{\partial x_{tb}^{(p)}} = \sum_{\beta=1}^{K_{p+1}} \gamma^{(p+1)}_{t\eta\beta} w^{(p)}_{\beta b}. \label{eq:proof1b} \end{equation} So substituting (\ref{eq:proof1b}) into (\ref{eq:proof1a}) we get the recursive formula (\ref{eq:gammap}). \item The formula (\ref{eq:conjhessp}) for $p=L$ follows directly from the way we defined $\theta^{(L)}_{tkl}$, $\psi^{(L)}_{tkl}$, and equation (\ref{eq:outconj}). Next, define the $\theta^{(p)}_{tjb}$ as above, and set \begin{equation} \psi^{(p)}_{tjb}=\left[ \sum_{\eta=1}^{K_{p+1}} \frac{\partial E^{(p+1)}_{t\eta}}{\partial \overline{x^{(p)}_{tb}}} \overline{w^{(p)}_{\eta j}}\right]g_p'\left(\overline{\left(x^{(p)}_{tj}\right)^{\textrm{net}}} \right)g_p'\left(\overline{\left(x^{(p)}_{tb}\right)^{\textrm{net}}} \right) \label{eq:proof2a} \end{equation} in (\ref{eq:hidconjdeltas}). Substituting (\ref{eq:proof2a}) and (\ref{eq:thetap}) into (\ref{eq:hidconjdeltas}) gives us (\ref{eq:conjhessp}).
For the $\psi^{(p)}_{tjb}$, using (\ref{eq:deltapbyxconj}) with our definition of the $\psi^{(p)}_{tjb}$ in (\ref{eq:proof2a}) we have: \begin{equation} \frac{\partial E^{(p+1)}_{t\eta}}{\partial \overline{x^{(p)}_{tb}}}=\sum_{\beta=1}^{K_{p+1}} \left( \psi^{(p+1)}_{t\eta\beta} \overline{w^{(p)}_{\beta b}}+\theta^{(p+1)}_{t\eta\beta} \overline{w^{(p)}_{\eta b}}\right) \label{eq:proof2b} \end{equation} so substituting (\ref{eq:proof2b}) into (\ref{eq:proof2a}) we get (\ref{eq:psip}).\end{enumerate} \end{proof} \section{Backpropagation Using the Pseudo-Newton Method} To simplify the computation in the implementation of Newton's method, we can use the pseudo-Newton algorithm, an alternative algorithm that is also known to provide good quadratic convergence. For the pseudo-Newton algorithm, we take $\mathcal{H}_{\overline{\mathbf{w}^{(p-1)}}\mathbf{w}^{(p-1)}}=0=\mathcal{H}_{\mathbf{w}^{(p-1)}\overline{\mathbf{w}^{(p-1)}}}$ in (\ref{eq:NewUpThm}), thus reducing the weight updates to \begin{equation*} \Delta \mathbf{w}^{(p-1)} = -\mathcal{H}_{\mathbf{w}^{(p-1)}\mathbf{w}^{(p-1)}}^{-1} \left( \frac{\partial E}{\partial \mathbf{w}^{(p-1)}} \right)^*. \label{eq:PseudUp}\end{equation*} Convergence using the pseudo-Newton algorithm will generally be faster than with gradient descent. The trade-off for its computational efficiency over Newton's method is somewhat slower convergence, though if the activation functions in the holomorphic MLP are in addition onto, the performance of the pseudo-Newton and Newton algorithms should be similar \cite{Kreutz-Delgado2009}. \begin{corollary}[Pseudo-Newton Backpropagation Algorithm for Holomorphic Neural Networks] The weight updates for the holomorphic MLP with activation functions satisfying $$\overline{g_p(z)}=g_p(\overline{z}), \qquad p=1,...,L,$$ using the backpropagation algorithm with the pseudo-Newton method are given by \begin{equation*} \Delta \mathbf{w}^{(p-1)} = -\mathcal{H}_{\mathbf{w}^{(p-1)}\mathbf{w}^{(p-1)}}^{-1} \left( \frac{\partial E}{\partial \mathbf{w}^{(p-1)}} \right)^*, \label{eq:PseudUpCor}\end{equation*} where the entries of the Hessian matrices $\mathcal{H}_{\mathbf{w}^{(p-1)}\mathbf{w}^{(p-1)}}$ for $1\le p\le L$ are given by (\ref{eq:hessp}) in Theorem \ref{thm:complexnewtbackprop}. \label{cor:complexp-newtbackprop} \end{corollary} \section{The One-Step Newton Steplength Algorithm for Real-Valued Complex Functions} \par A significant problem encountered with Newton's method and other minimization algorithms is the tendency of the iterates to ``overshoot.'' If this happens, the iterates may not decrease the function value at each step \cite{Ortega1970}. For functions on real domains, it is known that for any minimization algorithm, a careful choice of the sequence of steplengths via various steplength algorithms will guarantee a descent method. Steplength algorithms for minimization of real-valued functions on complex domains have been discussed in the literature \cite{Ang2001,Hanna2003,Manton2002,Sorber2011}. In \cite{Manton2002}, the problem was addressed by imposing unitary conditions on the input vectors. In \cite{Sorber2011}, steplength algorithms were proposed for the BFGS method, which is an approximation to Newton's method.
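\par The recursions of Theorem \ref{thm:complexnewtbackprop} and the simplified update of Corollary \ref{cor:complexp-newtbackprop} translate directly into code. The following minimal NumPy sketch (the array names and the stored forward-pass quantities are ours, chosen for illustration only) advances the $\gamma$ recursion (\ref{eq:gammap}) by one layer, assembles the Hessian block (\ref{eq:hessp}), and applies one pseudo-Newton layer update.
\begin{verbatim}
import numpy as np

# Assumed stored quantities for layer p (notation ours):
#   x_prev[t, i]        : x^{(p-1)}_{ti}, previous-layer activations
#   d_p[t, j]           : g_p'((x^{(p)}_{tj})^net)
#   E_p[t, j]           : E^{(p)}_{tj} from eqs. (deltaL)/(deltap)
#   gamma_next[t, e, f] : gamma^{(p+1)} from the previous recursion step
#   W_p[e, j]           : w^{(p)}_{ej}, weights of layer p+1

def gamma_step(gamma_next, W_p, d_p):
    # gamma^{(p)}_{tjb}; note g_p'(conj(net)) = conj(g_p'(net))
    # because conj(g_p(z)) = g_p(conj(z)).
    core = np.einsum('tef,ej,fb->tjb', gamma_next, np.conj(W_p), W_p)
    return core * np.conj(d_p)[:, :, None] * d_p[:, None, :]

def pseudo_newton_step(gamma_p, E_p, x_prev):
    # Assemble H_{w^(p-1) w^(p-1)} and return Delta w^{(p-1)}.
    N, K_prev = x_prev.shape
    K_p = E_p.shape[1]
    H = np.einsum('tjb,ti,ta->jiba', gamma_p, np.conj(x_prev), x_prev) / N
    H = H.reshape(K_p * K_prev, K_p * K_prev)
    grad = np.einsum('tj,ti->ji', E_p, np.conj(x_prev)).reshape(-1) / N
    return -np.linalg.solve(H, grad).reshape(K_p, K_prev)
\end{verbatim}
In practice the linear solve fails when $\mathcal{H}_{\mathbf{w}^{(p-1)}\mathbf{w}^{(p-1)}}$ is singular; this failure mode is logged in the experiments below.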
With regard to applications in neural networks, variable steplength algorithms exist for least mean square error algorithms, and these algorithms have been adapted to the gradient descent backpropagation algorithm for fully complex-valued neural networks with analytic activation functions \cite{Ang2001,Goh2005}. Fully adaptive gradient descent algorithms for complex-valued neural networks have also been proposed \cite{Hanna2003}. However, these algorithms do not apply to the Newton backpropagation algorithm. \par To provide a steplength algorithm that guarantees convergence of Newton's method for real-valued complex functions, we need the following definitions. Let $f:\Omega\subseteq\mathbb{C}^k\to\mathbb{R}$. The function $f$ is called real differentiable ($\mathbb{R}$-differentiable) if it is (Frechet) differentiable as a mapping $$f(\mathbf{x},\mathbf{y}):D:=\left\{ \left( \begin{array}{c} \mathbf{x} \\ \mathbf{y}\end{array}\right)\in\mathbb{R}^{2k} \left\vert \begin{array}{c} \mathbf{x},\mathbf{y}\in\mathbb{R}^k \\ \mathbf{z}=\mathbf{x}+i\mathbf{y}\in\Omega \end{array} \right.\right\}\subseteq\mathbb{R}^{2k}\to\mathbb{R}.$$ We then define a stationary point of $f$ to be a stationary point in the sense of the function $f(\mathbf{x},\mathbf{y}):D\subseteq\mathbb{R}^{2k}\to\mathbb{R}.$ If $f$ is twice $\mathbb{R}$-differentiable, let $\mathcal{H}_{\mathbf{z}\mathbf{z}}$ and $\mathcal{H}_{\overline{\mathbf{z}}\mathbf{z}}$ denote the Hessian matrices of $f$ with respect to $\mathbf{z}$ given by (\ref{eq:hessdef}). \par Let $\mathbf{z}(0)\in\Omega$. If $\Omega$ is open, we define the level set of $\mathbf{z}(0)$ under $f$ on $\Omega$ to be \begin{equation} L_{\mathbb{C}^k}(f(\mathbf{z}(0)))= \left\{ \mathbf{z}\in\Omega \, \vert \, f(\mathbf{z})\le f(\mathbf{z}(0)) \right\}, \label{eq:levelset} \end{equation} and let $L_{\mathbb{C}^k}^0(f(\mathbf{z}(0)))$ be the path-connected component of $L_{\mathbb{C}^k}(f(\mathbf{z}(0)))$ containing $\mathbf{z}(0)$. To discuss the rate of convergence, recall that the root-convergence factors (R-factors) of a sequence $\{\mathbf{z}(n)\}\subseteq\mathbb{C}^k$ that converges to $\hat{\mathbf{z}}\in\mathbb{C}^k$ are \begin{equation} R_p\{ \mathbf{z}(n)\} = \left\{ \begin{array}{ll} \limsup_{n\to\infty} \Vert \mathbf{z}(n)- \hat{\mathbf{z}}\Vert^{1/n}_{\mathbb{C}^k} & \textrm{if }p=1, \\ \limsup_{n\to\infty} \Vert \mathbf{z}(n)- \hat{\mathbf{z}}\Vert^{1/p^n}_{\mathbb{C}^k} & \textrm{if } p>1, \end{array}\right. \label{eq:Rfactors} \end{equation} and the sequence is said to have at least an R-linear rate of convergence if $R_1\{ \mathbf{z}(n)\} <1$. The following theorem gives the one-step Newton steplength algorithm to adjust the sequence of steplengths for minimization of a real-valued complex function using Newton's method. We provide the detailed proof in Appendix B. \begin{theorem}[Convergence of the Complex Newton Algorithm with Complex One-Step Newton Steplengths] Let $f:\Omega\subseteq\mathbb{C}^k\to\mathbb{R}$ be twice-continuously $\mathbb{R}$-differentiable on the open convex set $\Omega$ and assume that $ L^0_{\mathbb{C}^k}(f(\mathbf{z}(0)))$ is compact for $\mathbf{z}(0)\in\Omega$.
Suppose for all $\mathbf{z}\in\Omega$, $$\mathrm{Re}\{ \mathbf{h}^* \mathcal{H}_{\mathbf{z}\mathbf{z}}(\mathbf{z})\mathbf{h} + \mathbf{h}^* \mathcal{H}_{\overline{\mathbf{z}}\mathbf{z}}(\mathbf{z})\overline{\mathbf{h}}\}>0 \textrm{ for all }\mathbf{h}\in\mathbb{C}^k.$$ Assume $f$ has a unique stationary point $\hat{\mathbf{z}}\in L^0_{\mathbb{C}^k}(f(\mathbf{z}(0)))$, and fix $\epsilon\in (0,1]$. Consider the iteration \begin{equation} \mathbf{z}(n+1)=\mathbf{z}(n)-\omega(n)\mu(n)\mathbf{p}(n), \textrm{ } n=0,1,..., \label{eq:iteration} \end{equation} where the $\mathbf{p}(n)$ are the nonzero complex Newton updates \begin{equation} \begin{split} \mathbf{p}(\mathbf{z}(n))= &-\left[\mathcal{H}_{\mathbf{z}\mathbf{z}}(\mathbf{z}(n))-\mathcal{H}_{\overline{\mathbf{z}}\mathbf{z}}(\mathbf{z}(n))\mathcal{H}_{\overline{\mathbf{z}}\overline{\mathbf{z}}}(\mathbf{z}(n))^{-1}\mathcal{H}_{\mathbf{z}\overline{\mathbf{z}}}(\mathbf{z}(n))\right]^{-1}\\ & \cdot\left[ \mathcal{H}_{\overline{\mathbf{z}}\mathbf{z}}(\mathbf{z}(n))\mathcal{H}_{\overline{\mathbf{z}}\overline{\mathbf{z}}}(\mathbf{z}(n))^{-1} \left(\frac{\partial f}{\partial \overline{\mathbf{z}}}(\mathbf{z}(n)) \right)^*-\left(\frac{\partial f}{\partial \mathbf{z}}(\mathbf{z}(n)) \right)^*\right], \end{split} \label{eq:Newtupdatesone-step} \end{equation} the steplengths $\mu(n)$ are given by \begin{equation*} \mu(n)=\frac{\mathrm{Re} \{ \frac{\partial f}{\partial \mathbf{z}} (\mathbf{z}(n))\mathbf{p}(n)\}}{\mathrm{Re}\{ \mathbf{p}(n)^* \mathcal{H}_{\mathbf{z}\mathbf{z}}(\mathbf{z}(n))\mathbf{p}(n) + \mathbf{p}(n)^*\mathcal{H}_{\overline{\mathbf{z}}\mathbf{z}}(\mathbf{z}(n))\overline{\mathbf{p}(n)}\}}, \end{equation*} and the underrelaxation factors $\omega(n)$ satisfy \begin{equation} 0\le \epsilon \le \omega(n) \le \frac{2}{\gamma(n)}-\epsilon, \label{eq:OmegaDef} \end{equation} where, taking $\mathbf{z}=\mathbf{z}(n)$ and $\mathbf{p}=\mathbf{p}(n)$, \begin{equation} \begin{split} \gamma(n)&=\sup \left. \left\{ \frac{\mathrm{Re}\{\mathbf{p}^* \mathcal{H}_{\mathbf{z}\mathbf{z}}(\mathbf{z}-\mu\mathbf{p})\mathbf{p} + \mathbf{p}^* \mathcal{H}_{\overline{\mathbf{z}}\mathbf{z}}(\mathbf{z}-\mu\mathbf{p})\overline{\mathbf{p}}\}}{\mathrm{Re}\{\mathbf{p}^* \mathcal{H}_{\mathbf{z}\mathbf{z}}(\mathbf{z})\mathbf{p} + \mathbf{p}^* \mathcal{H}_{\overline{\mathbf{z}}\mathbf{z}}(\mathbf{z})\overline{\mathbf{p}}\}} \right. \right\vert \\ &\hspace{50mm} \left.\begin{array}{c} \mu>0, \textrm{ } f(\mathbf{z}-\nu\mathbf{p})<f(\mathbf{z})\\ \textrm{for all }\nu\in(0,\mu]\end{array}\right\}. \end{split} \label{eq:GammaOmegaDef} \end{equation} Then $\lim_{n\to\infty}\mathbf{z}(n)=\hat{\mathbf{z}},$ and the rate of convergence is at least R-linear.
\label{thm:ConvergenceNewton} \end{theorem} \par To apply the one-step Newton steplength algorithm to the Newton or pseudo-Newton backpropagation algorithm for complex-valued holomorphic multilayer perceptrons, at the $n$th iteration in the training process, the one-step Newton steplength for the $p$th step in the backpropagation ($1\le p\le L$) is \begin{equation} \mu_p(n)=\frac{-\mathrm{Re} \left( \frac{\partial E}{\partial \mathbf{w}} \Delta \mathbf{w}\right)}{\mathrm{Re} \left\{ (\Delta\mathbf{w})^* \mathcal{H}_{\mathbf{w}\mathbf{w}}\Delta\mathbf{w} + (\Delta\mathbf{w})^* \mathcal{H}_{\overline{\mathbf{w}}\mathbf{w}}\overline{\Delta\mathbf{w}} \right\} }, \label{eq:steplengthforbackprop} \end{equation} where $\Delta \mathbf{w}=\Delta\mathbf{w}^{(p-1)}$ is the weight update for the $p$th layer of the network given by Theorem \ref{thm:complexnewtbackprop} or Corollary \ref{cor:complexp-newtbackprop}, respectively, and $\mathbf{w}=\mathbf{w}^{(p-1)}$. (Recall (\ref{eq:updatestep}), so that here $\mathbf{p}(n)=-\Delta\mathbf{w}^{(p-1)}$ in (\ref{eq:iteration}).) For the pseudo-Newton backpropagation, we set $\mathcal{H}_{\overline{\mathbf{w}^{(p-1)}}\mathbf{w}^{(p-1)}}=\mathcal{H}_{\mathbf{w}^{(p-1)}\overline{\mathbf{w}^{(p-1)}}}=0$ in (\ref{eq:Newtupdatesone-step}) to obtain the pseudo-Newton updates $\Delta \mathbf{w}^{(p-1)}$ given in Corollary \ref{cor:complexp-newtbackprop}, but leave $\mathcal{H}_{\overline{\mathbf{w}^{(p-1)}}\mathbf{w}^{(p-1)}}$ as calculated in Theorem \ref{thm:complexnewtbackprop} in (\ref{eq:steplengthforbackprop}). In theory, for the $n$th iteration in the training process, we should choose the underrelaxation factor $\omega_p(n)$ for the $p$th step in the backpropagation $(1\le p\le L)$ according to (\ref{eq:OmegaDef}) and (\ref{eq:GammaOmegaDef}). In practice, however, it suffices to take the underrelaxation factors to be constant; they may be chosen experimentally to yield convergence of the error function (see our results in Section VII). It is also not necessary in practice to verify all the conditions of Theorem \ref{thm:ConvergenceNewton}. In particular, we may assume that the error function has a stationary point sufficiently close to the initial weights, since the initial weights were chosen specifically to be ``nearby'' a stationary point, and that the stationary point is unique in the appropriate compact level set of the initial weights, since the set of zeros of the error function has measure zero. \section{Experiments} \begin{table}[b] \begin{center} \begin{tabular}{|c|c|}\hline Input Pattern & Output\\ \hline 0 0 & 0\\ 1 0 & 1\\ 0 1 & 1\\ 1 1 & 0\\ \hline \end{tabular} \caption{XOR Training Set} \label{tab:XORdata}\end{center} \end{table} \par To test the efficiency of the algorithms in the previous sections, we compare the results of applying the gradient descent method, Newton's method, and the pseudo-Newton method to a holomorphic MLP trained with data from the real-valued exclusive-or (XOR) problem (see Table \ref{tab:XORdata}). Note that the complex-valued XOR problem has different criteria for the data set \cite{Savitha2009}. We use the real-valued XOR problem because we want a complex-valued network to process real as well as complex data. \par The XOR problem is frequently encountered in the literature as a test case for backpropagation algorithms \cite{Pande2007}.
A multilayer network is required to solve it: without hidden units the network is unable to distinguish overlapping input patterns which map to different output patterns, e.g. $(0,0)$ and $(1,0)$ \cite{Rumelhart1986}. We use a two-layer network with $m=2$ input nodes, $K=4$ hidden nodes, and $C=1$ output node. Any Boolean function of $m$ variables can be realized by a two-layer real-valued neural network with $2^m$ hidden units. Modeling after the real case, we choose $K=2^m$, although this could perhaps be accomplished with fewer hidden units, as $2^{m-1}$ is a smaller upper bound for real-valued neural networks \cite{Hassoun1995}. Some discussion of approximating Boolean functions, including the XOR and parity problems, using complex-valued neural networks is given in \cite{Nemoto1992}. \par In our experiments, the activation functions are taken to be the same for both the hidden and output layers of the network. The activation function is either the sigmoidal function or its third-degree\footnote{One can take a higher degree Taylor polynomial approximation, but this is sufficient for our purposes.} Taylor polynomial approximation $$g(z)=\frac{1}{1+\exp(-z)} \textrm{ or } T(z)=\frac{1}{2} +\frac{1}{4}z-\frac{1}{48}z^3.$$ Notice that while $g(z)$ has poles near zero, the polynomial $T(z)$ is analytic on the entire complex plane and bounded on bounded regions (see Figure \ref{fig:graphsigvspoly}). \begin{figure}[t]\begin{center} \includegraphics[scale=0.2]{complex_sigmoidal_3D_graph.jpg} \includegraphics[scale=0.2]{complex_polynomial_3D_graph.jpg} \caption{The sigmoidal function (left) has two poles in a region near $0$, while a Taylor polynomial approximation (right) of the sigmoidal function is bounded on the same region.} \label{fig:graphsigvspoly}\end{center} \end{figure} \par For each activation function we trained the network using the gradient descent backpropagation algorithm, the Newton backpropagation algorithm, and the pseudo-Newton backpropagation algorithm. The real and imaginary parts of the initial weights for each trial were chosen randomly from the interval $[-1,1]$ according to a uniform distribution. In each case the network was trained to within $0.001$ error. One hundred trials were performed for each activation function and each backpropagation algorithm (the same set of random initial weights was used for each set of trials). For the trials using the gradient descent backpropagation algorithm, a constant learning rate ($\mu$) was used. It is known that for the gradient descent algorithm for real-valued neural networks, some learning rates result in nonconvergence of the error function \cite{LeCun1991}. There is experimental evidence that for elementary transcendental activation functions used in complex-valued neural networks, sensitivity of the gradient descent algorithm to the choice of the learning rate can result in nonconvergence of the error function as well, and this is not necessarily affected by changes in the initial weight distribution \cite{Savitha2009}. To avoid these problems, a learning rate of $\mu=1$ was chosen both to guarantee convergence and to yield fast convergence (as compared to other values of $\mu$).
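\par For reference, the two activation functions above, together with the first and second derivatives $g'$ and $g''$ required by the recursions of Theorem \ref{thm:complexnewtbackprop}, can be coded directly. The sketch below is ours and is included for illustration only; it is not taken from any cited implementation.
\begin{verbatim}
import numpy as np

def sigmoid(z):      # g(z) = 1/(1 + exp(-z)); poles on the imaginary
    return 1.0 / (1.0 + np.exp(-z))   # axis, nearest at z = +/- i*pi

def sigmoid_d1(z):   # g'(z) = g(z)(1 - g(z))
    s = sigmoid(z)
    return s * (1.0 - s)

def sigmoid_d2(z):   # g''(z) = g'(z)(1 - 2 g(z))
    s = sigmoid(z)
    return s * (1.0 - s) * (1.0 - 2.0 * s)

def taylor(z):       # T(z) = 1/2 + z/4 - z^3/48
    return 0.5 + z / 4.0 - z**3 / 48.0

def taylor_d1(z):    # T'(z) = 1/4 - z^2/16
    return 0.25 - z**2 / 16.0

def taylor_d2(z):    # T''(z) = -z/8
    return -z / 8.0
\end{verbatim}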
For the trials using the Newton and pseudo-Newton backpropagation algorithms, a variable learning rate (steplength) was chosen according to the one-step Newton steplength algorithm (Theorem \ref{thm:ConvergenceNewton}) to control the problem of ``overshooting'' of the iterates and nonconvergence of the error function observed when a fixed learning rate was used. For both the Newton and pseudo-Newton trials, a constant underrelaxation factor of $\omega=0.5$ was used; this was chosen to yield the best chance for convergence of the error function. The results are summarized in Table \ref{tab:experResults}. \begin{table}[!t]\begin{center}\tiny \begin{tabular}{|c|c|c|c|c|c|}\hline & & & & \bf{Number of} & \bf{Average} \\ \bf{Activation} & \bf{Training} & \bf{Learning} & \bf{Underrelaxation} & \bf{Successful} & \bf{Number of} \\ \bf{Function} & \bf{Method} & \bf{Rate ($\mathbf{\mu}$)} & \bf{Factor ($\omega$)} & \bf{Trials} & \bf{Iterations*} \\ \hline Sigmoidal & Gradient & $\mu=1$ & None & 93 & 1258.9 \\ & Descent & & & & \\ \hline Sigmoidal & Newton & One-Step & $\omega=0.5$ & 5 & 7.0 \\ & & Newton & & & \\ \hline Sigmoidal & Pseudo- & One-Step & $\omega=0.5$ & 78 & 7.0 \\ & Newton & Newton & & & \\ \hline Polynomial & Gradient & $\mu=1$ & None & 93 & 932.2 \\ & Descent & & & & \\ \hline Polynomial & Newton & One-Step & $\omega=0.5$ & 53 & 107.9 \\ & & Newton & & & \\ \hline Polynomial & Pseudo- & One-Step & $\omega=0.5$ & 99 & 23.7 \\ & Newton & Newton & & & \\ \hline \end{tabular} \tiny{*Over the successful trials.} \caption{XOR Experiment Results} \label{tab:experResults}\end{center} \end{table} \normalsize \begin{table}[!t]\begin{center}\tiny \begin{tabular}{|c|c|c|c|c|c|c|}\hline & & & & \bf{Undefined} & & \bf{Total} \\ \bf{Activation} & \bf{Training} & \bf{Local} & \bf{Blow} & \bf{Floating} & \bf{Singular} & \bf{Unsuccessful} \\ \bf{Function} & \bf{Method} & \bf{Minimum} & \bf{Up} & \bf{Point} & \bf{Matrix} & \bf{Trials} \\ \hline Sigmoidal & Gradient & 1 & 0 & 6 & N/A & 7 \\ & Descent & & & & & \\ \hline Sigmoidal & Newton & 0 & 0 & 68 & 27 & 95 \\ \hline Sigmoidal & Pseudo- & 0 & 0 & 14 & 8 & 22 \\ & Newton & & & & & \\ \hline Polynomial & Gradient & 0 & 0 & 7 & N/A & 7 \\ & Descent & & & & & \\ \hline Polynomial & Newton & 26 & 2 & 2 & 17 & 47 \\ \hline Polynomial & Pseudo- & 1 & 0 & 0 & 0 & 1 \\ & Newton & & & & & \\ \hline \end{tabular} \caption{Unsuccessful Trials} \label{tab:unsuccess}\end{center} \end{table} \normalsize \par We define a successful trial to be one in which the error function dropped below $0.001$. Over the successful trials, the polynomial activation function performed just as well as the traditional sigmoidal function for the gradient descent backpropagation algorithm and yielded more successful trials than the sigmoidal function for the Newton and pseudo-Newton backpropagation algorithms. We logged four different types of unsuccessful trials (see Table \ref{tab:unsuccess}). Convergence of the error function to a local minimum occurred when, after at least 50,000 iterations for gradient descent and 5,000 iterations for the Newton and pseudo-Newton algorithms, the error function remained above $0.001$ but had stabilized to within $10^{-10}$ between successive iterations. This occurred more frequently in the Newton's method trials than in the gradient descent trials, which was expected due to the known sensitivity of Newton's method to the initial points.
A blow up of the error function occurred when, after the same minimum number of iterations as above, the error function had increased to above $10^{10}$. The final value of the error function was sometimes an undefined floating point number, probably the result of division by zero; this occurred less frequently with the polynomial activation function than with the sigmoidal activation function. Finally, the last type of unsuccessful trial resulted from a singular Hessian matrix (occurring only in the Newton and pseudo-Newton trials). This necessarily halted the backpropagation process, and it too occurred less frequently with the polynomial activation function than with the sigmoidal activation function. \par As for efficiency, the Newton and pseudo-Newton algorithms required significantly fewer iterations of the backpropagation algorithm to train the network than the gradient descent method for each activation function. In addition to producing fewer unsuccessful trials, the pseudo-Newton algorithm yielded a lower average number of iterations than the Newton algorithm for the polynomial activation function and the same average number of iterations as the Newton algorithm for the sigmoidal activation function. The network with the polynomial activation function trained using the pseudo-Newton algorithm produced the fewest unsuccessful trials. Overall, we conclude that the use of the polynomial activation function yields more consistent convergence of the error function than the use of the sigmoidal activation function, and that the Newton and pseudo-Newton algorithms require significantly fewer training iterations than the gradient descent method. \section{Conclusion} \par We have developed the backpropagation algorithm using Newton's method for complex-valued holomorphic multilayer perceptrons. The extension of real-valued neural networks to complex-valued neural networks is natural, and it allows the proper treatment of phase information. However, the choice of nonlinear activation functions poses a challenge in the backpropagation algorithm. The usual complex counterparts of the commonly used real-valued activation functions are no longer bounded: they have poles near zero, while other choices are not fully complex-valued functions. To provide experimental evidence, in addition to mathematical reasoning, for the choice of holomorphic functions as activation functions, we compared the results of using the complex-valued sigmoidal function as the activation function with the results of using its Taylor polynomial approximation. Our experiments showed that when Newton's method was used for the XOR example, Taylor polynomial approximations are the better choice. The use of polynomials as activation functions allows the possibility of rigorous analysis of the performance of the algorithm, as well as connections with other topics in complex analysis, which are virtually absent from complex-valued neural network studies so far. These topics are currently under investigation.
\appendix \section{Derivation of the Entries of the Hessian Matrices for the Newton's Method Backpropagation Algorithm} We give the details, omitted from the main body of the paper, of the computation of the entries of the Hessian matrices $\mathcal{H}_{\overline{\mathbf{w}}\mathbf{w}}$ for the $(p-1)$th layer of the holomorphic multilayer perceptron, $1\le p\le L$, which are given recursively by (\ref{eq:hidconjdeltas}) and (\ref{eq:deltapbyxconj}), in a manner similar to the computation of the Hessian matrices $\mathcal{H}_{\mathbf{w}\mathbf{w}}$ given in Section IV. Using the cogradients (\ref{eq:grad}) we have: \begin{equation} \frac{\partial}{\partial \overline{w^{(p-1)}_{ba}}} \left(\frac{\partial E}{\partial w^{(p-1)}_{ji}} \right)^* = \frac{1}{N} \sum_{t=1}^N \frac{\partial E^{(p)}_{tj}}{\partial \overline{w^{(p-1)}_{ba}}} \overline{x^{(p-1)}_{ti}}, \label{eq:hidconjcomp} \end{equation} where $j,b=1,...,K_p$ and $i,a=1,...,K_{p-1}$. Using (\ref{eq:deltap}), \begin{equation*} \begin{split} &\frac{\partial E^{(p)}_{tj}}{\partial \overline{w^{(p-1)}_{ba}}} = \frac{\partial}{\partial \overline{w^{(p-1)}_{ba}}} \left[\left(\sum_{\eta=1}^{K_{p+1}} E^{(p+1)}_{t\eta} \overline{w^{(p)}_{\eta j}}\right) g_p'\left(\overline{\left(x^{(p)}_{tj}\right)^{\textrm{net}}}\right)\right]\\ &=g_p'\left(\overline{\left(x^{(p)}_{tj}\right)^{\textrm{net}}}\right) \sum_{\eta=1}^{K_{p+1}} \frac{\partial E^{(p+1)}_{t\eta}}{\partial \overline{w^{(p-1)}_{ba}}} \overline{w^{(p)}_{\eta j}}+ \frac{\partial g_p'\left(\overline{\left(x^{(p)}_{tj}\right)^{\textrm{net}}}\right)}{\partial \overline{w^{(p-1)}_{ba}}}\sum_{\eta=1}^{K_{p+1}} E^{(p+1)}_{t\eta} \overline{w^{(p)}_{\eta j}},\\ \end{split} \end{equation*} where \begin{equation*} \begin{split} \frac{\partial E^{(p+1)}_{t\eta}}{\partial \overline{w^{(p-1)}_{ba}}} &= \frac{\partial E^{(p+1)}_{t\eta}}{\partial x^{(p)}_{tb}} \frac{\partial x^{(p)}_{tb}}{\partial \overline{w^{(p-1)}_{ba}}} +\frac{\partial E^{(p+1)}_{t\eta}}{\partial \overline{x^{(p)}_{tb}}} \frac{\partial \overline{x^{(p)}_{tb}}}{\partial \overline{w^{(p-1)}_{ba}}}\\ &= \frac{\partial E^{(p+1)}_{t\eta}}{\partial \overline{x^{(p)}_{tb}}} \left[ \frac{\partial \overline{x^{(p)}_{tb}}}{\partial \overline{\left(x^{(p)}_{tb}\right)^{\textrm{net}}}} \frac{\partial \overline{\left(x^{(p)}_{tb}\right)^{\textrm{net}}}}{\partial \overline{w^{(p-1)}_{ba}}} + \frac{\partial \overline{x^{(p)}_{tb}}}{\partial \left(x^{(p)}_{tb}\right)^{\textrm{net}}} \frac{\partial \left(x^{(p)}_{tb}\right)^{\textrm{net}}}{\partial \overline{w^{(p-1)}_{ba}}}\right]\\ &= \frac{\partial E^{(p+1)}_{t\eta}}{\partial \overline{x^{(p)}_{tb}}} g_p'\left(\overline{\left(x^{(p)}_{tb}\right)^{\textrm{net}}}\right) \overline{x^{(p-1)}_{ta}} \end{split} \label{eq:halfcrossdelta1} \end{equation*} and \begin{equation*} \begin{split} &\frac{\partial g_p'\left(\overline{\left(x^{(p)}_{tj}\right)^{\textrm{net}}}\right)}{\partial \overline{w^{(p-1)}_{ba}}}\\ &\hspace{10mm}= \frac{\partial g_p'\left(\overline{\left(x^{(p)}_{tj}\right)^{\textrm{net}}}\right)}{\partial \overline{\left(x^{(p)}_{tj}\right)^{\textrm{net}}}}
\frac{\mathbf{p}artial \overline{\left(x^{(p)}_{tj}\right)^{\textrm{net}}}}{\mathbf{p}artial \overline{w^{(p-1)}_{ba}}}+ \frac{\mathbf{p}artial g_p'\left(\overline{\left(x^{(p)}_{tj}\right)^{\textrm{net}}}\right)}{\mathbf{p}artial \left(x^{(p)}_{tj}\right)^{\textrm{net}}} \frac{\left(x^{(p)}_{tj}\right)^{\textrm{net}}}{\mathbf{p}artial \overline{w^{(p-1)}_{ba}}}\\ &\hspace{10mm}=\left\{ \begin{eqnarray*}gin{array}{ll} g_p''\left(\overline{\left(x^{(p)}_{tj}\right)^{\textrm{net}}}\right)\overline{x_{ta}^{(p-1)}} & \textrm{if } j=b,\\ 0 & \textrm{if } j\neq b, \end{array}\right. \end{split} \label{eq:halfcrossdelta2} \end{equation*} so that \begin{eqnarray*}gin{equation} \frac{\mathbf{p}artial E^{(p)}_{tj}}{\mathbf{p}artial \overline{w^{(p-1)}_{ba}}} =\left\{ \begin{eqnarray*}gin{array}{l} \left\{\left[ \sum_{\eta=1}^{K_{p+1}} \frac{\mathbf{p}artial E^{(p+1)}_{t\eta}}{\mathbf{p}artial \overline{x^{(p)}_{tb}}} \overline{w^{(p)}_{\eta j}}\right] g_p'\left(\overline{\left(x^{(p)}_{tj}\right)^{\textrm{net}}}\right)g_p'\left(\overline{\left(x^{(p)}_{tb}\right)^{\textrm{net}}}\right) \right. \\ \hspace{5mm} \left. + \left[ \sum_{\eta=1}^{K_{p+1}} E^{(p+1)}_{t\eta} \overline{w^{(p)}_{\eta j}} \right]g_p''\left( \overline{\left( x^{(p)}_{tj}\right)^{\textrm{net}}}\right)\right\} \overline{x^{(p-1)}_{ta}}\\ \hspace{55mm}\textrm{if }j=b, \\ \left[ \sum_{\eta=1}^{K_{p+1}} \frac{\mathbf{p}artial E^{(p+1)}_{t\eta}}{\mathbf{p}artial \overline{x^{(p)}_{tb}}} \overline{w^{(p)}_{\eta j}}\right] g_p'\left(\overline{\left(x^{(p)}_{tj}\right)^{\textrm{net}}}\right)g_p'\left(\overline{\left(x^{(p)}_{tb}\right)^{\textrm{net}}}\right)\overline{x^{(p-1)}_{ta}}\\ \hspace{55mm}\textrm{if }j\neq b. \\ \end{array}\right. \label{eq:deltawconj} \end{equation} Combining (\ref{eq:hidconjcomp}) and (\ref{eq:deltawconj}), we get \begin{eqnarray*}gin{equation} \begin{eqnarray*}gin{split} &\frac{\mathbf{p}artial}{\mathbf{p}artial \overline{w^{(p-1)}_{ba}}} \left( \frac{\mathbf{p}artial E}{\mathbf{p}artial w^{(p-1)}_{ji}} \right)^*\\ &=\left\{ \begin{eqnarray*}gin{array}{l} \frac{1}{N} \sum_{t=1}^N \left\{ \left[ \sum_{\eta=1}^{K_{p+1}}\frac{\mathbf{p}artial E^{(p+1)}_{t\eta}}{\mathbf{p}artial \overline{x^{(p)}_{tb}}} \overline{w^{(p)}_{\eta j}} \right] g_p'( \overline{(x^{(p)}_{tj})^{\textrm{net}}})g_p'( \overline{(x^{(p)}_{tb})^{\textrm{net}}}) \right.\\ \hspace{10mm} \left. +\left[\sum_{\eta=1}^{K_{p+1}} E_{t\eta}^{(p+1)} \overline{w^{(p)}_{\eta j}} \right] g_p''( \overline{(x^{(p)}_{tj})^{\textrm{net}}}) \right\} \overline{x^{(p-1)}_{ti}}\overline{x^{(p-1)}_{ta}} \textrm{ if }j=b,\\ \frac{1}{N} \sum_{t=1}^N \left\{ \left[ \sum_{\eta=1}^{K_{p+1}}\frac{\mathbf{p}artial E^{(p+1)}_{t\eta}}{\mathbf{p}artial \overline{x^{(p)}_{tb}}} \overline{w^{(p)}_{\eta j}} \right] g_p'( \overline{(x^{(p)}_{tj})^{\textrm{net}}})g_p'( \overline{(x^{(p)}_{tb})^{\textrm{net}}}) \right\}\\ \hspace{45mm}\cdot \overline{x^{(p-1)}_{ti}}\overline{x^{(p-1)}_{ta}}\hspace{8mm}\textrm{ if }j\neq b, \end{array}\right. 
\end{split} \label{eq:hidconjdeltas2} \end{equation} where $\frac{\mathbf{p}artial E^{(p+1)}_{t\eta}}{\mathbf{p}artial \overline{x^{(p)}_{tb}}}$ can be computed recursively: \begin{eqnarray*}gin{equation} \begin{eqnarray*}gin{split} \frac{\mathbf{p}artial E^{(p+1)}_{t\eta}}{\mathbf{p}artial \overline{x^{(p)}_{tb}}} &= \frac{\mathbf{p}artial}{\mathbf{p}artial \overline{x^{(p)}_{tb}}} \left[ \left( \sum_{\alpha=1}^{K_{p+2}} E^{(p+2)}_{t\alpha} \overline{w^{(p+1)}_{\alpha \eta}}\right) g'_{p+1}\left(\overline{\left( x^{(p+1)}_{t\eta}\right)^{\textrm{net}}}\right) \right]\\&=g'_{p+1}\left(\overline{\left( x^{(p+1)}_{t\eta}\right)^{\textrm{net}}}\right) \sum_{\alpha=1}^{K_{p+2}} \frac{\mathbf{p}artial E^{(p+2)}_{t\alpha}}{\mathbf{p}artial \overline{x^{(p)}_{tb}}} \overline{w^{(p+1)}_{\alpha \eta}}\\ &\hspace{20mm} + \frac{\mathbf{p}artial g'_{p+1}\left(\overline{\left( x^{(p+1)}_{t\eta}\right)^{\textrm{net}}}\right)}{\mathbf{p}artial \overline{x^{(p)}_{tb}}}\sum_{\alpha=1}^{K_{p+2}} E^{(p+2)}_{t\alpha} \overline{w^{(p+1)}_{\alpha \eta}}\\ \end{split} \label{eq:deltapbyxconj2} \end{equation} \begin{eqnarray*}gin{equation*} \begin{eqnarray*}gin{split} &=g'_{p+1}\left(\overline{\left( x^{(p+1)}_{t\eta}\right)^{\textrm{net}}}\right) \sum_{\alpha=1}^{K_{p+2}} \sum_{\begin{eqnarray*}ta=1}^{K_{p+1}} \left[ \frac{\mathbf{p}artial E^{(p+2)}_{t\alpha}}{\mathbf{p}artial x^{(p+1)}_{t\begin{eqnarray*}ta}} \frac{\mathbf{p}artial x^{(p+1)}_{t\begin{eqnarray*}ta}}{\mathbf{p}artial \overline{x^{(p)}_{tb}}} \right.\\ &\hspace{45mm} \left. + \frac{\mathbf{p}artial E^{(p+2)}_{t\alpha}}{\mathbf{p}artial \overline{x^{(p+1)}_{t\begin{eqnarray*}ta}}} \frac{\mathbf{p}artial \overline{x^{(p+1)}_{t\begin{eqnarray*}ta}}}{\mathbf{p}artial \overline{x^{(p)}_{tb}}}\right] \overline{w^{(p+1)}_{\alpha \eta}}\\ &\hspace{10mm} + \left[ \frac{\mathbf{p}artial g'_{p+1}\left(\overline{\left( x^{(p+1)}_{t\eta}\right)^{\textrm{net}}}\right)}{\mathbf{p}artial \overline{\left( x^{(p+1)}_{t\eta}\right)^{\textrm{net}}}} \frac{\mathbf{p}artial \overline{\left( x^{(p+1)}_{t\eta}\right)^{\textrm{net}}}}{\mathbf{p}artial \overline{x^{(p)}_{tb}}} \right.\\ &\hspace{15mm} +\left. \frac{\mathbf{p}artial g'_{p+1}\left(\overline{\left( x^{(p+1)}_{t\eta}\right)^{\textrm{net}}}\right)}{\mathbf{p}artial \left( x^{(p+1)}_{t\eta}\right)^{\textrm{net}}} \frac{\mathbf{p}artial \left( x^{(p+1)}_{t\eta}\right)^{\textrm{net}}}{\mathbf{p}artial \overline{x^{(p)}_{tb}}}\right] \sum_{\alpha=1}^{K_{p+2}} E^{(p+2)}_{t\alpha} \overline{w^{(p+1)}_{\alpha \eta}}\\ &= g'_{p+1}\left(\overline{\left( x^{(p+1)}_{t\eta}\right)^{\textrm{net}}}\right) \sum_{\alpha=1}^{K_{p+2}} \sum_{\begin{eqnarray*}ta=1}^{K_{p+1}} \frac{\mathbf{p}artial E^{(p+2)}_{t\alpha}}{\mathbf{p}artial \overline{x^{(p+1)}_{t\begin{eqnarray*}ta}}} \left[ \frac{\mathbf{p}artial \overline{x^{(p+1)}_{t\begin{eqnarray*}ta}}}{\mathbf{p}artial \left(x^{(p+1)}_{t\begin{eqnarray*}ta}\right)^{\textrm{net}}} \frac{\mathbf{p}artial \left(x^{(p+1)}_{t\begin{eqnarray*}ta}\right)^{\textrm{net}}}{\mathbf{p}artial \overline{x^{(p)}_{tb}}} \right.\\ &\hspace{40mm}+\left. 
\frac{\partial \overline{x^{(p+1)}_{t\beta}}}{\partial \overline{\left(x^{(p+1)}_{t\beta}\right)^{\textrm{net}}}} \frac{\partial \overline{\left(x^{(p+1)}_{t\beta}\right)^{\textrm{net}}}}{\partial \overline{x^{(p)}_{tb}}}\right] \overline{w^{(p+1)}_{\alpha \eta}}\\ & \hspace{10mm} +g''_{p+1}\left(\overline{\left( x^{(p+1)}_{t\eta}\right)^{\textrm{net}}}\right) \overline{w^{(p)}_{\eta b}} \sum_{\alpha=1}^{K_{p+2}} E^{(p+2)}_{t\alpha} \overline{w^{(p+1)}_{\alpha \eta}}\\ &=g'_{p+1}\left(\overline{\left( x^{(p+1)}_{t\eta}\right)^{\textrm{net}}}\right) \sum_{\alpha=1}^{K_{p+2}} \sum_{\beta=1}^{K_{p+1}} \frac{\partial E^{(p+2)}_{t\alpha}}{\partial \overline{x^{(p+1)}_{t\beta}}} \\ &\hspace{20mm} \cdot g'_{p+1}\left(\overline{\left( x^{(p+1)}_{t\beta}\right)^{\textrm{net}}}\right) \overline{w^{(p)}_{\beta b}} \overline{w^{(p+1)}_{\alpha \eta}}\\ &\hspace{10mm}+g''_{p+1}\left(\overline{\left( x^{(p+1)}_{t\eta}\right)^{\textrm{net}}}\right) \overline{w^{(p)}_{\eta b}} \sum_{\alpha=1}^{K_{p+2}} E^{(p+2)}_{t\alpha} \overline{w^{(p+1)}_{\alpha \eta}}\\ &=\sum_{\beta=1}^{K_{p+1}} \left[ \sum_{\alpha=1}^{K_{p+2}} \frac{\partial E^{(p+2)}_{t\alpha}}{\partial \overline{x^{(p+1)}_{t\beta}}}\overline{w^{(p+1)}_{\alpha \eta}} \right]\\ &\hspace{20mm} \cdot g'_{p+1}\left(\overline{\left( x^{(p+1)}_{t\eta}\right)^{\textrm{net}}}\right) g'_{p+1}\left(\overline{\left( x^{(p+1)}_{t\beta}\right)^{\textrm{net}}}\right) \overline{w^{(p)}_{\beta b}}\\ &\hspace{10mm}+\left[\sum_{\alpha=1}^{K_{p+2}} E^{(p+2)}_{t\alpha} \overline{w^{(p+1)}_{\alpha \eta}} \right]g''_{p+1}\left(\overline{\left( x^{(p+1)}_{t\eta}\right)^{\textrm{net}}}\right) \overline{w^{(p)}_{\eta b}}. \end{split} \label{eq:deltabyxconj3} \end{equation*} \section{Convergence of the One-Step Newton Steplength Algorithm for Real-Valued Complex Functions} \par Let $f:\Omega\subseteq\mathbb{C}^k \to \mathbb{R}$, and consider a general minimization algorithm with sequence of iterates $\{ \mathbf{z}(n)\}$ given recursively by \begin{equation} \mathbf{z}(n+1)=\mathbf{z}(n)-\mu(n)\mathbf{p}(n), \textrm{ } n=0,1,..., \label{eq:GeneralIterate} \end{equation} where $\mathbf{p}(n)\in\mathbb{C}^k$ is such that $-\mathbf{p}(n)$ is the direction from the $n$th iterate to the $(n+1)$th iterate, and $\mu(n)\in \mathbb{R}$ is the learning rate or steplength, which we allow to vary with each step. We are interested in guaranteeing that the minimization algorithm is a descent method, that is, that at each stage of the iteration the inequality $f(\mathbf{z}(n+1))\le f(\mathbf{z}(n))$, $n=0,1,...$, holds. Here, we provide details of the proof of the one-step Newton steplength algorithm for the minimization of real-valued functions on complex domains. Our treatment follows the exposition in \cite{Ortega1970}, with the application to the complex Newton algorithm providing a proof of Theorem \ref{thm:ConvergenceNewton}. \par \begin{lemma} Suppose that $f:\Omega\subseteq\mathbb{C}^k \to \mathbb{R}$ is $\mathbb{R}$-differentiable at $\mathbf{z}\in\mathrm{int} (\Omega)$ and that there exists $\mathbf{p}\in\mathbb{C}^k$ such that $\mathrm{Re} \left( \frac{\partial f}{\partial \mathbf{z}}(\mathbf{z}) \mathbf{p}\right) >0$.
Then there exists a $\delta>0$ such that $f(\mathbf{z}-\mu \mathbf{p})<f(\mathbf{z}) \textrm{ for all } \mu\in (0,\delta)$. \label{lem:ExistenceofMu} \end{lemma}
\begin{proof} Let $\mathbf{z}=\mathbf{x}+i\mathbf{y}\in \textrm{int}(\Omega)$ with $\mathbf{x},\mathbf{y}\in\mathbb{R}^k$. The function $f:\Omega\subseteq\mathbb{C}^k\to\mathbb{R}$ is $\mathbb{R}$-differentiable at $\mathbf{z}$ if and only if $f:D\subseteq\mathbb{R}^{2k}\to\mathbb{R}$ is (Frechet) differentiable at $(\mathbf{x},\mathbf{y})^T\in\mathrm{int}(D)$, where $D$ is defined as in (\ref{eq:Ddef}) and the (Frechet) derivative (equal to the G\^{a}teaux derivative) at $(\mathbf{x},\mathbf{y})^T$ is given by $\left( \frac{\partial f}{\partial \mathbf{x}}, \frac{\partial f}{\partial \mathbf{y}} \right)$. Suppose there exists $\mathbf{p}=\mathbf{p}_R+i\mathbf{p}_I\in\mathbb{C}^k$ with $\mathbf{p}_R,\mathbf{p}_I\in \mathbb{R}^k$ such that $\mathrm{Re} \left( \frac{\partial f}{\partial \mathbf{z}}(\mathbf{z}) \mathbf{p}\right) >0$. Then using the coordinate and cogradient transformations (\ref{eq:CoordTransform}) and (\ref{eq:CogradTransform}) and the fact that $f$ is real-valued, we have the following (\cite{Kreutz-Delgado2009}, pg. 34):
\begin{equation} \begin{split} &\left( \frac{\partial f}{\partial \mathbf{x}}(\mathbf{x},\mathbf{y}), \frac{\partial f}{\partial \mathbf{y}}(\mathbf{x},\mathbf{y})\right)\left( \begin{array}{c} \mathbf{p}_R \\ \mathbf{p}_I \end{array}\right) = \left( \frac{\partial f}{\partial \mathbf{z}}(\mathbf{z},\overline{\mathbf{z}}), \frac{\partial f}{\partial \overline{\mathbf{z}}}(\mathbf{z},\overline{\mathbf{z}})\right) J \cdot \frac{1}{2} J^* \left( \begin{array}{c} \mathbf{p} \\ \overline{\mathbf{p}} \end{array}\right)\\ & = \frac{\partial f}{\partial \mathbf{z}}(\mathbf{z},\overline{\mathbf{z}}) \mathbf{p} +\frac{\partial f}{\partial \overline{\mathbf{z}}}(\mathbf{z},\overline{\mathbf{z}})\overline{\mathbf{p}} = \frac{\partial f}{\partial \mathbf{z}}(\mathbf{z}) \mathbf{p} +\overline{\frac{\partial f}{\partial \mathbf{z}}(\mathbf{z})\mathbf{p}} = 2\mathrm{Re} \left( \frac{\partial f}{\partial \mathbf{z}}(\mathbf{z}) \mathbf{p}\right) >0. \end{split} \label{eq:ConditionTransform} \end{equation}
By (8.2.1) in \cite{Ortega1970} there exists a $\delta>0$ such that $f((\mathbf{x},\mathbf{y})-\mu(\mathbf{p}_R,\mathbf{p}_I))<f(\mathbf{x},\mathbf{y}) \textrm{ for all } \mu\in (0,\delta).$ Viewing $f$ again as a function on the complex domain $\Omega$, this is equivalent to the statement that $f(\mathbf{z}-\mu \mathbf{p})<f(\mathbf{z}) \textrm{ for all } \mu\in (0,\delta)$.\end{proof}
\par Recall from Section VI that a stationary point of $f$ is defined to be a stationary point in the sense of the function $f(\mathbf{z})=f(\mathbf{x},\mathbf{y}):D\subseteq\mathbb{R}^{2k}\to\mathbb{R}.$ If $\hat{\mathbf{z}}=\hat{\mathbf{x}}+i\hat{\mathbf{y}}$ with $\hat{\mathbf{x}},\hat{\mathbf{y}}\in\mathbb{R}^k$, then $\hat{\mathbf{z}}$ is a stationary point of $f$ if and only if $\frac{\partial f}{\partial \mathbf{x}}(\hat{\mathbf{x}},\hat{\mathbf{y}})=\frac{\partial f}{\partial \mathbf{y}}(\hat{\mathbf{x}},\hat{\mathbf{y}})=0$.
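\par As a simple illustration of these notions (not needed in what follows), consider $f(\mathbf{z})=\mathbf{z}^*\mathbf{z}$ on $\Omega=\mathbb{C}^k$. Here $\frac{\partial f}{\partial \mathbf{z}}(\mathbf{z})=\mathbf{z}^*$, so for $\mathbf{z}\neq 0$ the choice $\mathbf{p}=\mathbf{z}$ gives $\mathrm{Re}\left(\frac{\partial f}{\partial \mathbf{z}}(\mathbf{z})\mathbf{p}\right)=\Vert\mathbf{z}\Vert_{\mathbb{C}^k}^2>0$, and indeed $f(\mathbf{z}-\mu\mathbf{z})=(1-\mu)^2\Vert\mathbf{z}\Vert_{\mathbb{C}^k}^2<f(\mathbf{z})$ for all $\mu\in(0,2)$, in agreement with Lemma \ref{lem:ExistenceofMu}. In real coordinates $f(\mathbf{x},\mathbf{y})=\Vert\mathbf{x}\Vert^2+\Vert\mathbf{y}\Vert^2$, so $\frac{\partial f}{\partial \mathbf{x}}=2\mathbf{x}^T$ and $\frac{\partial f}{\partial \mathbf{y}}=2\mathbf{y}^T$, and the only stationary point is $\hat{\mathbf{z}}=0$.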
Note that if $\mathrm{Re} \left( \frac{\partial f}{\partial \mathbf{z}}(\mathbf{z})\right)\neq 0$ for $\mathbf{z}\in\textrm{int} (\Omega)$ (i.e. $\mathbf{z}$ is not a stationary point), then there always exists a $\mathbf{p}\in\mathbb{C}^k$ such that $\mathrm{Re} \left( \frac{\partial f}{\partial \mathbf{z}}(\mathbf{z}) \mathbf{p}\right)>0$. This result is classical in the real domain, and the proof of Lemma \ref{lem:ExistenceofMu} merely translates it from the real domain to the complex domain.
\par For the sequence of iterates $\{ \mathbf{z}(n)\}$ given by (\ref{eq:GeneralIterate}), we can find a sequence $\{ \mathbf{p}(n)\}$ such that $\mathrm{Re}\left( \frac{\partial f}{\partial \mathbf{z}}(\mathbf{z}(n))\mathbf{p}(n)\right)>0$ for $n=0,1,...$. By Lemma \ref{lem:ExistenceofMu}, for each $n$ there is at least one $\mu(n) \in (0,\infty)$ such that $f(\mathbf{z}(n)-\mu(n) \mathbf{p}(n))<f(\mathbf{z}(n))$. At each step in the algorithm we would like to make as large a descent in the value of $f$ as possible, so finding a desirable steplength $\mu(n)$ to guarantee descent translates into the real one-dimensional problem of minimizing $f(\mathbf{z}(n)-\mu\mathbf{p}(n))$ as a function of $\mu$. For each $n$ let $\mathbf{z}(n)=\mathbf{x}(n)+i\mathbf{y}(n)$ and $\mathbf{p}(n)=\mathbf{p}_R(n)+i\mathbf{p}_I(n)$ with $\mathbf{x}(n),\mathbf{y}(n),\mathbf{p}_R(n),\mathbf{p}_I(n)\in\mathbb{R}^k$ and write \begin{equation*} f(\mathbf{z}(n)-\mu\mathbf{p}(n))=f((\mathbf{x}(n),\mathbf{y}(n))-\mu(\mathbf{p}_R(n),\mathbf{p}_I(n))). \label{eq:RealMinProblem} \end{equation*}
Suppose $f$ is twice $\mathbb{R}$-differentiable on $\Omega$. As an approximate solution to this one-dimensional minimization problem we take $\mu(n)$ to be the minimizer of the second-degree Taylor polynomial (in $\mu$)
\begin{equation} \begin{split} T_2(\mu) &= f(\mathbf{x}(n),\mathbf{y}(n))\\ &\hspace{10mm}-\mu \left( \frac{\partial f}{\partial \mathbf{x}}(\mathbf{x}(n),\mathbf{y}(n)),\frac{\partial f}{\partial \mathbf{y}} (\mathbf{x}(n),\mathbf{y}(n))\right)\left( \begin{array}{c}\mathbf{p}_R(n) \\ \mathbf{p}_I(n) \end{array}\right)\\ &\hspace{10mm}+ \frac{1}{2} \mu^2 \left( \begin{array}{c}\mathbf{p}_R(n) \\ \mathbf{p}_I(n) \end{array}\right)^T\mathcal{H}_{\mathbf{r}\mathbf{r}}(\mathbf{x}(n),\mathbf{y}(n)) \left( \begin{array}{c}\mathbf{p}_R(n) \\ \mathbf{p}_I(n) \end{array}\right) \end{split} \label{eq:2ndTaylorExp} \end{equation}
where $\mathcal{H}_{\mathbf{r}\mathbf{r}}$ denotes the real Hessian matrix \begin{equation*} \mathcal{H}_{\mathbf{r}\mathbf{r}} = \left( \frac{\partial}{\partial \mathbf{x}},\frac{\partial}{\partial \mathbf{y}}\right) \left( \frac{\partial f}{\partial \mathbf{x}}, \frac{\partial f}{\partial \mathbf{y}}\right)^T.
\label{eq:RealHess} \end{equation*} If \begin{equation*} \left(\begin{array}{c}\mathbf{p}_R(n) \\ \mathbf{p}_I(n) \end{array}\right)^T \mathcal{H}_{\mathbf{r}\mathbf{r}}(\mathbf{x}(n),\mathbf{y}(n)) \left( \begin{array}{c}\mathbf{p}_R(n) \\ \mathbf{p}_I(n) \end{array}\right) >0 \end{equation*}
then $T_2$ has a minimum at \begin{equation} \mu(n)=\frac{\left( \frac{\partial f}{\partial \mathbf{x}}(\mathbf{x}(n),\mathbf{y}(n)),\frac{\partial f}{\partial \mathbf{y}} (\mathbf{x}(n),\mathbf{y}(n))\right)\left( \begin{array}{c}\mathbf{p}_R(n) \\ \mathbf{p}_I(n) \end{array}\right)}{\left(\begin{array}{c}\mathbf{p}_R(n) \\ \mathbf{p}_I(n) \end{array}\right)^T \mathcal{H}_{\mathbf{r}\mathbf{r}}(\mathbf{x}(n),\mathbf{y}(n)) \left( \begin{array}{c}\mathbf{p}_R(n) \\ \mathbf{p}_I(n) \end{array}\right)}. \label{eq:ApproxSolnReal} \end{equation}
(Note this is equivalent to taking one step toward minimizing $f$ over $\mu$ via the real Newton algorithm.) Using a computation similar to (\ref{eq:ConditionTransform}) in the proof of Lemma \ref{lem:ExistenceofMu}, the denominator of (\ref{eq:ApproxSolnReal}) translates back into complex coordinates as (\cite{Kreutz-Delgado2009}, pg. 38):
\begin{equation} \begin{split} &\left(\begin{array}{c}\mathbf{p}_R(n) \\ \mathbf{p}_I(n) \end{array}\right)^T \mathcal{H}_{\mathbf{r}\mathbf{r}}(\mathbf{x}(n),\mathbf{y}(n)) \left( \begin{array}{c}\mathbf{p}_R(n) \\ \mathbf{p}_I(n) \end{array}\right)\\ &\hspace{10mm}=2 \mathrm{Re} \left\{\mathbf{p}(n)^* \mathcal{H}_{\mathbf{z}\mathbf{z}}(\mathbf{z}(n))\mathbf{p}(n) + \mathbf{p}(n)^*\mathcal{H}_{\overline{\mathbf{z}}\mathbf{z}}(\mathbf{z}(n))\overline{\mathbf{p}(n)} \right\}. \end{split} \label{eq:BilinearFormEq} \end{equation}
Combining (\ref{eq:ApproxSolnReal}) with (\ref{eq:BilinearFormEq}) and (\ref{eq:ConditionTransform}), if \begin{equation} \mathrm{Re} \left\{\mathbf{p}(n)^* \mathcal{H}_{\mathbf{z}\mathbf{z}}(\mathbf{z}(n))\mathbf{p}(n) + \mathbf{p}(n)^*\mathcal{H}_{\overline{\mathbf{z}}\mathbf{z}}(\mathbf{z}(n))\overline{\mathbf{p}(n)} \right\}>0 \label{eq:MinCondition} \end{equation}
we can take the approximate solution to the minimization problem to be \begin{equation} \mu(n)=\frac{\mathrm{Re} \left\{ \frac{\partial f}{\partial \mathbf{z}}(\mathbf{z}(n)) \mathbf{p}(n)\right\}}{\mathrm{Re} \left\{\mathbf{p}(n)^* \mathcal{H}_{\mathbf{z}\mathbf{z}}(\mathbf{z}(n))\mathbf{p}(n) + \mathbf{p}(n)^*\mathcal{H}_{\overline{\mathbf{z}}\mathbf{z}}(\mathbf{z}(n))\overline{\mathbf{p}(n)} \right\}}. \label{eq:ApproxSteplength} \end{equation}
Notice that (\ref{eq:MinCondition}) is in fact both a necessary and sufficient condition to obtain an approximate solution using (\ref{eq:2ndTaylorExp}) to the one-dimensional minimization problem of $f(\mathbf{z}(n)-\mu\mathbf{p}(n))$ over $\mu$, for if $$\mathrm{Re} \left\{\mathbf{p}(n)^* \mathcal{H}_{\mathbf{z}\mathbf{z}}(\mathbf{z}(n))\mathbf{p}(n) + \mathbf{p}(n)^*\mathcal{H}_{\overline{\mathbf{z}}\mathbf{z}}(\mathbf{z}(n))\overline{\mathbf{p}(n)} \right\}<0,$$ the Taylor polynomial (\ref{eq:2ndTaylorExp}) attains only a maximum, and if this quantity equals zero, $T_2$ is linear in $\mu$ and has no minimizer.
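\par For readers who prefer a computational view, the following short Python sketch (our own illustration; the function and variable names are not taken from any of the cited references) evaluates the steplength (\ref{eq:ApproxSolnReal}), that is, the real-coordinate form of (\ref{eq:ApproxSteplength}), from the stacked real gradient, the real Hessian $\mathcal{H}_{\mathbf{r}\mathbf{r}}$ and a direction $(\mathbf{p}_R,\mathbf{p}_I)$ at the current iterate.
\begin{verbatim}
import numpy as np

def one_step_newton_steplength(grad_xy, hess_rr, p_ri):
    """Minimizer of the quadratic Taylor model T_2(mu) along the step -mu*p.

    grad_xy : stacked real gradient (df/dx, df/dy), shape (2k,)
    hess_rr : real Hessian H_rr of f at the current iterate, shape (2k, 2k)
    p_ri    : stacked direction (p_R, p_I), shape (2k,)
    """
    curvature = p_ri @ hess_rr @ p_ri
    if curvature <= 0.0:
        raise ValueError("T_2 has no minimum along this direction")
    return (grad_xy @ p_ri) / curvature

# Illustration with f(x, y) = x^2 + 4 y^2 (k = 1) at the point (1, 1):
grad = np.array([2.0, 8.0])      # (df/dx, df/dy) at (1, 1)
hess = np.diag([2.0, 8.0])       # constant real Hessian of f
p = grad.copy()                  # steepest-descent direction
print(one_step_newton_steplength(grad, hess, p))
\end{verbatim}
For a quadratic $f$, as in this illustration, the returned value of $\mu(n)$ is the exact minimizer of $f(\mathbf{z}(n)-\mu\mathbf{p}(n))$ over $\mu$, which is why the choice (\ref{eq:ApproxSteplength}) is called a one-step Newton steplength.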
\par Since defining the sequence of steplengths $\{ \mu(n)\}$ by (\ref{eq:ApproxSteplength}) is only an approximate method, to guarantee the descent of the iteration, we consider further modification of the steplengths. From Lemma \ref{lem:ExistenceofMu}, it is clear that we can choose a sequence of underrelaxation factors $\{ \omega(n)\}$ such that $$f(\mathbf{z}(n)-\omega(n)\mu(n)\mathbf{p}(n))<f(\mathbf{z}(n))$$ which guarantees that the iteration \begin{equation} \mathbf{z}(n+1)=\mathbf{z}(n)-\omega(n)\mu(n)\mathbf{p}(n), \textrm{ } n=0,1,... \label{eq:DampedGeneralIterate} \end{equation} is a descent method. We describe a way to choose the sequence $\{ \omega(n)\}$.
\par First, recall some notation from Section VI. Suppose $\Omega$ is open and let $\mathbf{z}(0)\in\Omega$. The level set of $\mathbf{z}(0)$ under $f$ on $\Omega$ is defined by (\ref{eq:levelset}), and $L_{\mathbb{C}^k}^0(f(\mathbf{z}(0)))$ is the path-connected component of $L_{\mathbb{C}^k}(f(\mathbf{z}(0)))$ containing $\mathbf{z}(0)$. Let $\Vert \cdot \Vert_{\mathbb{C}^k}:\mathbb{C}^k\to\mathbb{R}$ denote the Euclidean norm on $\mathbb{C}^k$, with $\Vert \mathbf{z} \Vert_{\mathbb{C}^k} =\sqrt{\mathbf{z}^* \mathbf{z}}$.
\begin{lemma}[Complex Version of the One-Step Newton Steplength Algorithm] Let $f:\Omega\subseteq\mathbb{C}^k\to\mathbb{R}$ be twice-continuously $\mathbb{R}$-differentiable on the open set $\Omega$. Suppose $L_{\mathbb{C}^k}^0(f(\mathbf{z}(0)))$ is compact for $\mathbf{z}(0)\in\Omega$ and that \begin{equation} \eta_0 \mathbf{h}^*\mathbf{h} \le \mathrm{Re}\{\mathbf{h}^* \mathcal{H}_{\mathbf{z}\mathbf{z}}(\mathbf{z})\mathbf{h} + \mathbf{h}^* \mathcal{H}_{\overline{\mathbf{z}}\mathbf{z}}(\mathbf{z})\overline{\mathbf{h}}\} \le \eta_1\mathbf{h}^*\mathbf{h} \label{eq:EtaCondition} \end{equation} for all $\mathbf{z}\in L_{\mathbb{C}^k}^0(f(\mathbf{z}(0)))$ and $\mathbf{h}\in\mathbb{C}^k$, where $0<\eta_0\le \eta_1$. Fix $\epsilon \in (0,1]$. Define the sequence $\{ \mathbf{z}(n)\}$ using (\ref{eq:DampedGeneralIterate}) with $\mathbf{p}(n)\neq0$ satisfying \begin{equation} \mathrm{Re}\left( \frac{\partial f}{\partial \mathbf{z}} (\mathbf{z}(n))(\mathbf{p}(n))\right) \ge 0, \label{eq:ExistenceLemmaCondition} \end{equation} $\mu(n)$ defined by (\ref{eq:ApproxSteplength}), and \begin{equation} 0<\epsilon\le \omega(n) \le \frac{2}{\gamma(n)}-\epsilon, \label{eq:UnderrelaxFactors} \end{equation} where, setting $\mathbf{z}=\mathbf{z}(n)$ and $\mathbf{p}=\mathbf{p}(n)$, \begin{equation} \begin{split} \gamma(n)=&\sup \left. \left\{ \frac{\mathrm{Re}\{\mathbf{p}^* \mathcal{H}_{\mathbf{z}\mathbf{z}}(\mathbf{z}-\mu\mathbf{p})\mathbf{p} + \mathbf{p}^* \mathcal{H}_{\overline{\mathbf{z}}\mathbf{z}}(\mathbf{z}-\mu\mathbf{p})\overline{\mathbf{p}}\}}{\mathrm{Re}\{\mathbf{p}^* \mathcal{H}_{\mathbf{z}\mathbf{z}}(\mathbf{z})\mathbf{p} + \mathbf{p}^* \mathcal{H}_{\overline{\mathbf{z}}\mathbf{z}}(\mathbf{z})\overline{\mathbf{p}}\}} \right. \right\vert \\ &\hspace{45mm} \left.\begin{array}{c} \mu>0, \textrm{ } f(\mathbf{z}-\nu\mathbf{p})<f(\mathbf{z})\\ \textrm{for all }\nu\in(0,\mu]\end{array}\right\}.
\end{split} \label{eq:GammaDef} \end{equation}
Then $\{ \mathbf{z}(n)\}\subseteq L_{\mathbb{C}^k}^0(f(\mathbf{z}(0)))$, \begin{equation*} \lim_{n\to\infty} \frac{\mathrm{Re}\left( \frac{\partial f}{\partial \mathbf{z}} (\mathbf{z}(n))(\mathbf{p}(n))\right)}{\Vert \mathbf{p}(n)\Vert_{\mathbb{C}^k}}=0, \label{eq:Limitto0Condition} \end{equation*} and $\lim_{n\to\infty} (\mathbf{z}(n)-\mathbf{z}(n+1))=0$. \label{lem:SteplengthAlg} \end{lemma}
\begin{proof} Let $f:\Omega\subseteq\mathbb{C}^k\to\mathbb{R}$ be twice-continuously $\mathbb{R}$-differentiable on the open set $\Omega$, and define $D$ as in (\ref{eq:Ddef}). Then $D$ is open and $f(\mathbf{x},\mathbf{y}):D\subseteq\mathbb{R}^{2k}\to\mathbb{R}$ is twice-continuously differentiable on $D$. Let $\mathbf{z}(0)=\mathbf{x}(0)+i\mathbf{y}(0)\in\Omega$ with $\mathbf{x}(0),\mathbf{y}(0)\in\mathbb{R}^k$ and set $$L^0_{\mathbb{R}^{2k}}(f(\mathbf{x}(0),\mathbf{y}(0)))=\left\{\left.\left(\begin{array}{c}\mathbf{x} \\ \mathbf{y} \end{array}\right)\in D \, \right\vert\, \begin{array}{c} \mathbf{x},\mathbf{y}\in\mathbb{R}^k, \\ \mathbf{z}=\mathbf{x}+i\mathbf{y}\in L^0_{\mathbb{C}^{k}}(f(\mathbf{z}(0))) \end{array}\right\}.$$ It is clear that since $L_{\mathbb{C}^k}^0(f(\mathbf{z}(0)))$ is assumed to be compact, the real level set $L^0_{\mathbb{R}^{2k}}(f(\mathbf{x}(0),\mathbf{y}(0)))$ is also compact.
\par Next, observe that for $\mathbf{z}=\mathbf{x}+i\mathbf{y}\in\mathbb{C}^k$ with $\mathbf{x},\mathbf{y}\in\mathbb{R}^k$, if $\Vert \cdot\Vert_{\mathbb{R}^{2k}}:\mathbb{R}^{2k}\to\mathbb{R}$ denotes the Euclidean norm on $\mathbb{R}^{2k}$, then $$\Vert \mathbf{z} \Vert_{\mathbb{C}^k}^2=\mathbf{z}^*\mathbf{z}=\left\Vert \left( \begin{array}{c} \mathbf{x} \\ \mathbf{y} \end{array} \right) \right\Vert_{\mathbb{R}^{2k}}^2.$$ Using this fact and (\ref{eq:BilinearFormEq}) we see that for $\mathbf{z}=\mathbf{x}+i\mathbf{y}\in L_{\mathbb{C}^k}^0(f(\mathbf{z}(0)))$ and $\mathbf{h}=\mathbf{h}_R+i\mathbf{h}_I\in\mathbb{C}^k$ with $\mathbf{x},\mathbf{y},\mathbf{h}_R,\mathbf{h}_I\in\mathbb{R}^k$ the condition (\ref{eq:EtaCondition}) is equivalent to \begin{equation*} \eta_0' \left\Vert \left(\begin{array}{c} \mathbf{h}_R \\ \mathbf{h}_I\end{array} \right)\right\Vert_{\mathbb{R}^{2k}}^2 \le \left(\begin{array}{c} \mathbf{h}_R \\ \mathbf{h}_I\end{array} \right)^T \mathcal{H}_{\mathbf{r}\mathbf{r}}(\mathbf{x},\mathbf{y}) \left(\begin{array}{c} \mathbf{h}_R \\ \mathbf{h}_I\end{array} \right) \le \eta_1' \left\Vert \left(\begin{array}{c} \mathbf{h}_R \\ \mathbf{h}_I\end{array} \right)\right\Vert_{\mathbb{R}^{2k}}^2, \label{eq:RealEtaCondition} \end{equation*} where again $\mathcal{H}_{\mathbf{r}\mathbf{r}}$ denotes the real Hessian matrix of $f(\mathbf{x},\mathbf{y}):D\subseteq\mathbb{R}^{2k}\to\mathbb{R}$, and $0<\eta_0'=\frac{\eta_0}{2}\le\frac{\eta_1}{2}=\eta_1'$.
\par We have already seen in the proof of Lemma \ref{lem:ExistenceofMu} (see the calculation (\ref{eq:ConditionTransform})) that the condition (\ref{eq:ExistenceLemmaCondition}) on the vectors $\mathbf{p}(n)=\mathbf{p}_R(n)+i\mathbf{p}_I(n)$ with $\mathbf{p}_R(n),\mathbf{p}_I(n)\in\mathbb{R}^k$ is equivalent to the real condition \begin{equation*} \left( \frac{\partial f}{\partial \mathbf{x}}(\mathbf{x}(n),\mathbf{y}(n)),\frac{\partial f}{\partial \mathbf{y}}(\mathbf{x}(n),\mathbf{y}(n))\right) \left( \begin{array}{c}\mathbf{p}_R(n)\\ \mathbf{p}_I(n) \end{array}\right) \ge 0. \label{eq:RealExistenceLemma} \end{equation*} We have also seen that our choice (\ref{eq:ApproxSteplength}) for $\mu(n)$ is equal to (\ref{eq:ApproxSolnReal}).
\par Finally, for $\epsilon\in (0,1]$, using (\ref{eq:BilinearFormEq}) again we have the real analogue of (\ref{eq:GammaDef}): \begin{equation*} \begin{split} &\gamma(n)=\\ &\sup \left. \left\{ \frac{\left(\begin{array}{c}\mathbf{p}_R(n)\\ \mathbf{p}_I(n)\end{array}\right)^T \mathcal{H}_{\mathbf{r}\mathbf{r}}((\mathbf{x}(n),\mathbf{y}(n))-\mu(\mathbf{p}_R(n),\mathbf{p}_I(n)))\left(\begin{array}{c}\mathbf{p}_R(n)\\ \mathbf{p}_I(n)\end{array}\right)}{\left(\begin{array}{c}\mathbf{p}_R(n)\\ \mathbf{p}_I(n)\end{array}\right)^T \mathcal{H}_{\mathbf{r}\mathbf{r}}(\mathbf{x}(n),\mathbf{y}(n))\left(\begin{array}{c}\mathbf{p}_R(n)\\ \mathbf{p}_I(n)\end{array}\right)} \right. \right\vert \\ &\hspace{10mm} \left.\begin{array}{c} \mu>0, \textrm{ } f((\mathbf{x}(n),\mathbf{y}(n))-\nu(\mathbf{p}_R(n),\mathbf{p}_I(n)))<f(\mathbf{x}(n),\mathbf{y}(n))\\ \textrm{for all }\nu\in(0,\mu]\end{array}\right\}. \end{split} \label{eq:RealGamma} \end{equation*}
By (\ref{eq:ConditionTransform}), \begin{equation*} \frac{\left( \frac{\partial f}{\partial \mathbf{x}}(\mathbf{x}(n),\mathbf{y}(n)),\frac{\partial f}{\partial \mathbf{y}}(\mathbf{x}(n),\mathbf{y}(n))\right) \left(\begin{array}{c}\mathbf{p}_R(n)\\ \mathbf{p}_I(n)\end{array}\right)}{\left\Vert \left(\begin{array}{c}\mathbf{p}_R(n)\\ \mathbf{p}_I(n)\end{array}\right)\right\Vert_{\mathbb{R}^{2k}}}=\frac{2\mathrm{Re} \left(\frac{\partial f}{\partial \mathbf{z}}(\mathbf{z}(n)) \mathbf{p}(n) \right)}{\Vert \mathbf{p}(n)\Vert_{\mathbb{C}^k}}, \end{equation*} so applying (14.2.9) in \cite{Ortega1970}, $\left\{ (\mathbf{x}(n), \mathbf{y}(n))^T\right\} \subseteq L^0_{\mathbb{R}^{2k}}(f(\mathbf{x}(0),\mathbf{y}(0)))$, $$\lim_{n\to\infty}\frac{\left( \frac{\partial f}{\partial \mathbf{x}}(\mathbf{x}(n),\mathbf{y}(n)),\frac{\partial f}{\partial \mathbf{y}}(\mathbf{x}(n),\mathbf{y}(n))\right) \left(\begin{array}{c}\mathbf{p}_R(n)\\ \mathbf{p}_I(n)\end{array}\right)}{\left\Vert \left(\begin{array}{c}\mathbf{p}_R(n)\\ \mathbf{p}_I(n)\end{array}\right)\right\Vert_{\mathbb{R}^{2k}}}=0,$$ and $$\lim_{n\to\infty} \left( \left( \begin{array}{c} \mathbf{x}(n)\\ \mathbf{y}(n)\end{array}\right)-\left( \begin{array}{c} \mathbf{x}(n+1)\\ \mathbf{y}(n+1)\end{array}\right)\right)=0.$$ Translating back to complex coordinates yields the desired conclusion.\end{proof}
\par Assume that there is a unique stationary point $ \hat{\mathbf{z}}$ in $L^0_{\mathbb{C}^k}(f(\mathbf{z}(0)))$.
We desire to guarantee that the sequence of iterates $\{ \mathbf{z}(n)\}$ converges to $\hat{\mathbf{z}}$. Before we give conditions for convergence of the complex version of the one-step Newton steplength algorithm, recall from Section VI that the R-factors of a sequence $\{\mathbf{z}(n)\}\subseteq\mathbb{C}^k$ that converges to $\hat{\mathbf{z}}\in\mathbb{C}^k$ are given by (\ref{eq:Rfactors}), and the sequence has at least an R-linear rate of convergence if $R_1\{\mathbf{z}(n)\}<1$.
\begin{lemma}[Convergence of the Complex Version of the One-Step Newton Steplength Algorithm] Let $f:\Omega\subseteq\mathbb{C}^k\to\mathbb{R}$ be twice-continuously $\mathbb{R}$-differentiable on the open convex set $\Omega$ and assume that $L^0_{\mathbb{C}^k}(f(\mathbf{z}(0)))$ is compact for $\mathbf{z}(0)\in\Omega$. Assume the notation of Lemma \ref{lem:SteplengthAlg}. Suppose for all $\mathbf{z}\in\Omega$, \begin{equation} \mathrm{Re}\{ \mathbf{h}^* \mathcal{H}_{\mathbf{z}\mathbf{z}}(\mathbf{z})\mathbf{h}+\mathbf{h}^* \mathcal{H}_{\overline{\mathbf{z}}\mathbf{z}}(\mathbf{z})\overline{\mathbf{h}}\}>0 \textrm{ for all nonzero } \mathbf{h}\in\mathbb{C}^k, \label{eq:PosDefEquivCondition} \end{equation} and assume that the $\mathbf{p}(n)$ are nonzero vectors satisfying \begin{equation} \mathrm{Re}\left( \frac{\partial f}{\partial \mathbf{z}}(\mathbf{z}(n))\mathbf{p}(n)\right)\ge C \left\Vert \left( \frac{\partial f}{\partial \mathbf{z}}(\mathbf{z}(n))\right)^T\right\Vert_{\mathbb{C}^k} \Vert \mathbf{p}(n)\Vert_{\mathbb{C}^k}, \textrm{ } n=0,1,... \label{eq:pCondition} \end{equation} for some fixed $C>0$. Assume $f$ has a unique stationary point $\hat{\mathbf{z}}$ in $L^0_{\mathbb{C}^k}(f(\mathbf{z}(0)))$. Then $\lim_{n\to\infty} \mathbf{z}(n)= \hat{\mathbf{z}}$, and the rate of convergence is at least R-linear. \label{lem:GeneralConvergence} \end{lemma}
\begin{proof} As in the proof of Lemma \ref{lem:SteplengthAlg}, given the assumptions of this lemma, $f:D\subseteq\mathbb{R}^{2k}\to\mathbb{R}$ is twice-continuously (Frechet) differentiable on the open convex set $D$, and the set $L_{\mathbb{R}^{2k}}^0(f(\mathbf{x}(0),\mathbf{y}(0)))$ is compact for $\mathbf{z}(0)=\mathbf{x}(0)+i\mathbf{y}(0)\in\Omega$, where $\mathbf{x}(0),\mathbf{y}(0)\in\mathbb{R}^k$.
\par Using (\ref{eq:BilinearFormEq}), for $\mathbf{z}=\mathbf{x}+i\mathbf{y}\in\Omega$ the condition (\ref{eq:PosDefEquivCondition}) is equivalent to the condition \begin{equation*} \left( \begin{array}{c} \mathbf{h}_1 \\ \mathbf{h}_2 \end{array}\right)^T \mathcal{H}_{\mathbf{r}\mathbf{r}}(\mathbf{x},\mathbf{y})\left( \begin{array}{c} \mathbf{h}_1 \\ \mathbf{h}_2 \end{array}\right)>0 \textrm{ for all nonzero } \left( \begin{array}{c} \mathbf{h}_1 \\ \mathbf{h}_2 \end{array}\right)\in\mathbb{R}^{2k} \textrm{ with }\mathbf{h}_1,\mathbf{h}_2\in\mathbb{R}^k. \end{equation*} Thus for all $(\mathbf{x},\mathbf{y})^T\in D$, the real Hessian $\mathcal{H}_{\mathbf{r}\mathbf{r}}(\mathbf{x},\mathbf{y})$ of $f$ is positive definite.
\par Also as in the proof of Lemma \ref{lem:SteplengthAlg}, the real versions of the definitions of $\mu(n)$ and $\omega(n)$ given by (\ref{eq:ApproxSteplength}) and (\ref{eq:UnderrelaxFactors}), respectively, satisfy the real one-step Newton steplength algorithm (14.2.9) in \cite{Ortega1970}.
\par Finally, for $\mathbf{z}=\mathbf{x}+i\mathbf{y}\in\mathbb{C}^k$ with $\mathbf{x},\mathbf{y}\in\mathbb{R}^k$, a simple calculation shows that $$2\left\Vert\left( \frac{\partial f}{\partial \mathbf{z}} (\mathbf{z})\right)^T \right\Vert_{\mathbb{C}^k} = \left\Vert \left( \frac{\partial f}{\partial \mathbf{x}}(\mathbf{x},\mathbf{y}),\frac{\partial f}{\partial \mathbf{y}}(\mathbf{x},\mathbf{y})\right)^T\right\Vert_{\mathbb{R}^{2k}},$$ so using the calculation (\ref{eq:ConditionTransform}) in the proof of Lemma \ref{lem:ExistenceofMu}, the condition (\ref{eq:pCondition}) for the nonzero vectors $\mathbf{p}(n)=\mathbf{p}_R(n)+i\mathbf{p}_I(n)$ with $\mathbf{p}_R(n),\mathbf{p}_I(n)\in\mathbb{R}^k$ is equivalent to the real condition \begin{equation*} \begin{split} &\left( \frac{\partial f}{\partial \mathbf{x}}(\mathbf{x},\mathbf{y}),\frac{\partial f}{\partial \mathbf{y}}(\mathbf{x},\mathbf{y})\right) \left( \begin{array}{c} \mathbf{p}_R(n) \\ \mathbf{p}_I(n) \end{array}\right) \\ &\hspace{15mm}\ge C \left\Vert \left( \frac{\partial f}{\partial \mathbf{x}}(\mathbf{x},\mathbf{y}),\frac{\partial f}{\partial \mathbf{y}}(\mathbf{x},\mathbf{y})\right)^T \right\Vert_{\mathbb{R}^{2k}} \left\Vert \left( \begin{array}{c} \mathbf{p}_R(n) \\ \mathbf{p}_I(n) \end{array}\right) \right\Vert_{\mathbb{R}^{2k}}. \end{split} \end{equation*} Thus we may apply Theorem (14.3.6) in \cite{Ortega1970} and transfer back to complex coordinates to obtain that $\lim_{n\to\infty}\mathbf{z}(n)= \hat{\mathbf{z}}$, where $ \hat{\mathbf{z}}= \hat{\mathbf{x}}+i \hat{\mathbf{y}}$ with $\hat{\mathbf{x}},\hat{\mathbf{y}}\in\mathbb{R}^k$ is the unique stationary point of $f$ in $L^0_{\mathbb{C}^k}(f(\mathbf{z}(0)))$, and the rate of convergence is at least R-linear. \end{proof}
\par We now apply the previous results to the complex Newton algorithm. Let $f:\Omega\subseteq\mathbb{C}^k\to\mathbb{R}$ be twice-continuously $\mathbb{R}$-differentiable on the open convex set $\Omega$. Let $\mathbf{z}(0)\in\Omega$ and assume that the level set $L^0_{\mathbb{C}^k}(f(\mathbf{z}(0)))$ is compact. Suppose for all $\mathbf{z}\in\Omega$, \begin{equation*} \mathrm{Re}\{ \mathbf{h}^* \mathcal{H}_{\mathbf{z}\mathbf{z}}(\mathbf{z})\mathbf{h} + \mathbf{h}^* \mathcal{H}_{\overline{\mathbf{z}}\mathbf{z}}(\mathbf{z})\overline{\mathbf{h}}\}>0 \textrm{ for all nonzero }\mathbf{h}\in\mathbb{C}^k. \label{eq:PosDefEquiv2} \end{equation*} As in the proof of Lemma \ref{lem:GeneralConvergence}, this condition is equivalent to the positive definiteness of the real Hessian matrix $\mathcal{H}_{\mathbf{r}\mathbf{r}}(\mathbf{x},\mathbf{y})$ of $f$ for all $(\mathbf{x},\mathbf{y})^T\in D$. Since $f$ is twice-continuously $\mathbb{R}$-differentiable, the Hessian operator $\mathcal{H}_{\mathbf{r}\mathbf{r}}(\cdot):D\subseteq\mathbb{R}^{2k}\to L(\mathbb{R}^{2k})$ (where $L(\mathbb{R}^{2k})$ denotes the set of linear operators $\mathbb{R}^{2k}\to\mathbb{R}^{2k}$) is continuous.
Restricting to the compact set $L^0_{\mathbb{R}^{2k}}(f(\mathbf{x}(0),\mathbf{y}(0)))$ we have that $\mathcal{H}_{\mathbf{r}\mathbf{r}}(\cdot):L^0_{\mathbb{R}^{2k}}(f(\mathbf{x}(0),\mathbf{y}(0)))\to L(\mathbb{R}^{2k})$ is a continuous mapping such that $\mathcal{H}_{\mathbf{r}\mathbf{r}}(\mathbf{x},\mathbf{y})$ is positive definite for each vector $(\mathbf{x},\mathbf{y})^T\in L^0_{\mathbb{R}^{2k}}(f(\mathbf{x}(0),\mathbf{y}(0)))$. For each $(\mathbf{x},\mathbf{y})^T\in L^0_{\mathbb{R}^{2k}}(f(\mathbf{x}(0),\mathbf{y}(0)))$ set \begin{equation*} \tilde{\mathbf{p}}(\mathbf{x},\mathbf{y})=\mathcal{H}_{\mathbf{r}\mathbf{r}}(\mathbf{x},\mathbf{y})^{-1}\left( \frac{\partial f}{\partial \mathbf{x}}(\mathbf{x},\mathbf{y}),\frac{\partial f}{\partial \mathbf{y}}(\mathbf{x},\mathbf{y}) \right)^T. \label{eq:TildepDefReal} \end{equation*} By Lemma (14.4.1) in \cite{Ortega1970}, there exists a constant $C>0$ such that \begin{equation} \begin{split} &\left( \frac{\partial f}{\partial \mathbf{x}}(\mathbf{x},\mathbf{y}),\frac{\partial f}{\partial \mathbf{y}}(\mathbf{x},\mathbf{y}) \right) \tilde{\mathbf{p}}(\mathbf{x},\mathbf{y})\\ &\hspace{20mm}\ge C \left\Vert \left( \frac{\partial f}{\partial \mathbf{x}}(\mathbf{x},\mathbf{y}),\frac{\partial f}{\partial \mathbf{y}}(\mathbf{x},\mathbf{y}) \right)^T \right\Vert_{\mathbb{R}^{2k}}\Vert \tilde{\mathbf{p}}(\mathbf{x},\mathbf{y}) \Vert_{\mathbb{R}^{2k}} \end{split} \label{eq:RealGradRelated} \end{equation} for all $(\mathbf{x},\mathbf{y})^T\in L^0_{\mathbb{R}^{2k}}(f(\mathbf{x}(0),\mathbf{y}(0)))$. As in the proof of Lemma \ref{lem:GeneralConvergence}, (\ref{eq:RealGradRelated}) is equivalent to the inequality \begin{equation*} \mathrm{Re}\left( \frac{\partial f}{\partial \mathbf{z}}(\mathbf{z})\tilde{\mathbf{p}}(\mathbf{z})\right)\ge C \left\Vert \left( \frac{\partial f}{\partial \mathbf{z}}(\mathbf{z})\right)^T\right\Vert_{\mathbb{C}^k} \Vert \tilde{\mathbf{p}}(\mathbf{z})\Vert_{\mathbb{C}^k} \label{eq:ComplexGradRelated} \end{equation*} for all $\mathbf{z}\in L^0_{\mathbb{C}^{k}}(f(\mathbf{z}(0)))$, where \begin{equation*} \begin{split} \tilde{\mathbf{p}}(\mathbf{z})&= -\left[\mathcal{H}_{\mathbf{z}\mathbf{z}}(\mathbf{z})-\mathcal{H}_{\overline{\mathbf{z}}\mathbf{z}}(\mathbf{z})\mathcal{H}_{\overline{\mathbf{z}}\overline{\mathbf{z}}}(\mathbf{z})^{-1}\mathcal{H}_{\mathbf{z}\overline{\mathbf{z}}}(\mathbf{z})\right]^{-1} \\ & \hspace{10mm} \cdot\left[ \mathcal{H}_{\overline{\mathbf{z}}\mathbf{z}}(\mathbf{z})\mathcal{H}_{\overline{\mathbf{z}}\overline{\mathbf{z}}}(\mathbf{z})^{-1} \left(\frac{\partial f}{\partial \overline{\mathbf{z}}}(\mathbf{z}) \right)^*-\left(\frac{\partial f}{\partial \mathbf{z}}(\mathbf{z}) \right)^*\right] \end{split} \label{eq:pTildeDefComplex} \end{equation*} is obtained from $\tilde{\mathbf{p}}(\mathbf{x},\mathbf{y})$ (where $\mathbf{z}=\mathbf{x}+i\mathbf{y}$) using the coordinate and cogradient transformations (\ref{eq:CoordTransform}) and (\ref{eq:CogradTransform}), respectively \cite{Kreutz-Delgado2009}. Suppose $f$ has a unique stationary point $\hat{\mathbf{z}}$ in $L^0_{\mathbb{C}^k}(f(\mathbf{z}(0)))$.
Consider the iteration \begin{equation*} \mathbf{z}(n+1)=\mathbf{z}(n)-\omega(n)\mu(n)\mathbf{p}(n), \textrm{ } n=0,1,..., \label{eq:DampedGeneralIterate2} \end{equation*} where the $\mathbf{p}(n)$ are the nonzero complex Newton updates defined by $\mathbf{p}(n)=\tilde{\mathbf{p}}(\mathbf{z}(n))$, and assume the notation of Lemma \ref{lem:SteplengthAlg}. Then $\{ \mathbf{z}(n)\}\subseteq L^0_{\mathbb{C}^k}(f(\mathbf{z}(0)))$. The vectors $\mathbf{p}(n)$ satisfy (\ref{eq:pCondition}), so by Lemma \ref{lem:GeneralConvergence} the sequence of iterates $\{ \mathbf{z}(n)\}$ converges to $\hat{\mathbf{z}}$, and the rate of convergence is at least R-linear. Thus we have proved Theorem \ref{thm:ConvergenceNewton}. \end{document}
\begin{document} \title{\sc On the strong chromatic index and maximum induced matching of tree-cographs, permutation graphs and chordal bipartite graphs}
\begin{abstract} We show that there exist linear-time algorithms that compute the strong chromatic index and a maximum induced matching of tree-cographs when the decomposition tree is a part of the input. We also show that there exist efficient algorithms for the strong chromatic index of (bipartite) permutation graphs and of chordal bipartite graphs. \end{abstract} \setcounter{page}{0}
\section{Introduction}
\begin{definition}[\cite{kn:cameron}] An {\em induced matching\/} in a graph $G$ is a set of edges, no two of which meet a common vertex or are joined by an edge of $G$. The size of an induced matching is the number of edges in the induced matching. An induced matching is maximum if its size is largest among all possible induced matchings. \end{definition}
\begin{definition}[\cite{kn:fouquet}] Let $G=(V,E)$ be a graph. A {\em strong edge coloring\/} of $G$ is a proper edge coloring such that no edge is adjacent to two edges of the same color. A strong edge-coloring of a graph is a partition of its edges into induced matchings. The {\em strong chromatic index\/} of $G$ is the minimal integer $k$ such that $G$ has a strong edge coloring with $k$ colors. We denote the strong chromatic index of $G$ by $s\chi^{\prime}(G)$. \end{definition}
Equivalently, a strong edge coloring of $G$ is a vertex coloring of $L(G)^2$, the square of the linegraph of $G$. The strong chromatic index problem can be solved in polynomial time for chordal graphs~\cite{kn:cameron} and for partial $k$-trees~\cite{kn:salavatipour}, and can be solved in linear time for trees~\cite{kn:faudree}. However, it is NP-complete to find the strong chromatic index for general graphs \cite{kn:cameron,kn:mahdian,kn:stockmeyer} or even for planar bipartite graphs~\cite{kn:hocquard}. In this paper, we show that there exist linear-time algorithms that compute the strong chromatic index and a maximum induced matching of tree-cographs when the decomposition tree is a part of the input. We also show that there exist efficient algorithms for the strong chromatic index of (bipartite) permutation graphs and of chordal bipartite graphs. The class of tree-cographs was introduced by Tinhofer in~\cite{kn:tinhofer}.
\begin{definition} Tree-cographs are defined recursively by the following rules. \begin{enumerate}[\rm 1.] \item Every tree is a tree-cograph. \item If $G$ is a tree-cograph then also the complement $\Bar{G}$ of $G$ is a tree-cograph. \item For $k \geq 2$, if $G_1,\ldots,G_k$ are connected tree-cographs then also the disjoint union of $G_1,\ldots,G_k$ is a tree-cograph. \end{enumerate} \end{definition}
Let $G$ be a tree-cograph. A decomposition tree for $G$ consists of a rooted binary tree $T$ in which each internal node, including the root, is labeled as a join node $\otimes$ or a union node $\oplus$. The leaves of $T$ are labeled by trees or complements of trees. It is easy to see that a decomposition tree for a tree-cograph $G$ can be obtained in $O(n^3)$ time.
\section{The strong chromatic index of tree-cographs}
The {\em linegraph} $L(G)$ of a graph $G$ is the intersection graph of the edges of $G$~\cite{kn:beineke}. It is well-known that, when $G$ is a tree, the linegraph $L(G)$ of $G$ is a claw-free blockgraph~\cite{kn:harary}. A graph is {\em chordal} if it has no induced cycles of length more than three~\cite{kn:dirac}. Notice that blockgraphs are chordal.
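Before continuing, we note that the equivalence between strong edge colorings of $G$ and vertex colorings of $L(G)^2$ is easy to experiment with on very small graphs. The following brute-force Python sketch (our illustration only; it runs in exponential time and is not one of the algorithms advocated in this paper) computes $s\chi^{\prime}(G)$ directly by trying all edge colorings and checking the two adjacency conditions that define $L(G)^2$.
\begin{verbatim}
from itertools import combinations, product

def strong_chromatic_index(edges):
    """Brute-force s-chi'(G): chromatic number of the square of the linegraph.

    Two edges are adjacent in L(G)^2 iff they share an endpoint or are
    joined by an edge of G.  Only suitable for graphs with few edges.
    """
    edges = [frozenset(e) for e in edges]
    edge_set = set(edges)

    def adjacent(e, f):
        if e & f:                              # common endpoint
            return True
        return any(frozenset((u, v)) in edge_set for u in e for v in f)

    m = len(edges)
    for k in range(1, m + 1):                  # try the smallest number of colors first
        for coloring in product(range(k), repeat=m):
            if all(coloring[i] != coloring[j]
                   for i, j in combinations(range(m), 2)
                   if adjacent(edges[i], edges[j])):
                return k
    return 0

print(strong_chromatic_index([("a", "b"), ("b", "c"), ("c", "d")]))   # prints 3
\end{verbatim}
On the path with edges $ab$, $bc$, $cd$ the sketch returns $3$, in agreement with the formula for trees given later in this section.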
A vertex $x$ in a graph $G$ is {\em simplicial} if its neighborhood $N(x)$ induces a clique in $G$. Chordal graphs are characterized by the property of having a {\em perfect elimination ordering}, which is an ordering $[v_1,\ldots,v_n]$ of the vertices of $G$ such that $v_i$ is simplicial in the graph induced by $\{v_i,\ldots,v_n\}$. A perfect elimination ordering of a chordal graph can be computed in linear time~\cite{kn:rose}. This implies that chordal graphs have at most $n$ maximal cliques, and the clique number can be computed in linear time, where the {\em clique number} of $G$, denoted by $\omega(G)$, is the number of vertices in a maximum clique of $G$. \begin{theorem}[\cite{kn:cameron}] If $G$ is a chordal graph then $L(G)^2$ is also chordal. \end{theorem} \begin{theorem}[\cite{kn:cameron3}] \label{weakly chordal} Let $k \in \mathbb{N}$ and let $k \geq 4$. Let $G$ be a graph and assume that $G$ has no induced cycles of length at least $k$. Then $L(G)^2$ has no induced cycles of length at least $k$. \end{theorem} \begin{lemma} Tree-cographs have no induced cycles of length more than four. \end{lemma} \begin{proof} Let $G$ be a tree-cograph. First observe that trees are bipartite. It follows that complements of trees have no induced cycles of length more than four. We prove the claim by induction on the depth of a decomposition tree for $G$. If $G$ is the union of two tree-cographs $G_1$ and $G_2$ then the claim follows by induction since any induced cycle is contained in one of $G_1$ and $G_2$. Assume $G$ is the join of two tree-cographs $G_1$ and $G_2$. Assume that $G$ has an induced cycle $C$ of length at least five. We may assume that $C$ has at least one vertex in each of $G_1$ and $G_2$. As one of $G_1$ and $G_2$ has more than two vertices of $C$, $C$ has a vertex of degree at least three, which is a contradiction. \qed\end{proof} \begin{lemma} \label{complement of tree} Let $T$ be a tree. Then $L(\Bar{T})^2$ is a clique. \end{lemma} \begin{proof} Consider two non-edges $\{a,b\}$ and $\{p,q\}$ of $T$. If the non-edges share an endpoint then they are adjacent in $L(\Bar{T})^2$ since they are already adjacent in $L(\Bar{T})$. Otherwise, since $T$ is a tree, at least one pair of $\{a,p\}$, $\{a,q\}$, $\{b,p\}$ and $\{b,q\}$ is a non-edge in $T$, otherwise $T$ has a 4-cycle. By definition, $\{a,b\}$ and $\{p,q\}$ are adjacent in $L(\Bar{T})^2$. \qed\end{proof} If $G$ is the union of two tree-cographs $G_1$ and $G_2$ then \[\omega(L(G)^2)=\max\{\omega(L(G_1)^2),\omega(L(G_2)^2)\}.\] The following lemma deals with the join of two tree-cographs. \begin{lemma} \label{join} Let $P$ and $Q$ be tree-cographs and let $G$ be the join of $P$ and $Q$. Let $X$ be the set of edges that have one endpoint in $P$ and one endpoint in $Q$. Then \begin{enumerate}[\rm (a)] \item $X$ forms a clique in $L(G)^2$, \item every edge of $X$ is adjacent in $L(G)^2$ to every edge of $P$ and to every edge of $Q$, and \item every edge of $P$ is adjacent in $L(G)^2$ to every edge of $Q$. \end{enumerate} \end{lemma} \begin{proof} This is an immediate consequence of the definitions. \qed\end{proof} For $k \geq 3$, a {\em $k$-sun} is a graph which consists of a clique with $k$ vertices and an independent set with $k$ vertices. There exist orderings $c_1,\ldots,c_k$ and $s_1,\ldots,s_k$ of the vertices in the clique and independent set such that each $s_i$ is adjacent to $c_i$ and to $c_{i+1}$ for $i=1,\ldots,k-1$ and such that $s_k$ is adjacent to $c_k$ and $c_1$. 
A graph is {\em strongly chordal} if it is chordal and contains no induced $k$-sun, for $k \geq 3$~\cite{kn:farber}. \begin{figure} \caption{A $3$-sun, a gem and a claw} \label{3-sun} \end{figure}
\begin{lemma} \label{strongly chordal} Let $T$ be a tree. Then $L(T)^2$ is strongly chordal. \end{lemma}
\begin{proof} When $T$ is a tree, $L(T)$ is a blockgraph. Obviously, blockgraphs are strongly chordal. Lubiw proves in~\cite{kn:lubiw} that all powers of strongly chordal graphs are strongly chordal. \qed\end{proof}
We strengthen the result of Lemma~\ref{strongly chordal} as follows. {\em Ptolemaic graphs} are graphs that are both distance hereditary and chordal~\cite{kn:howorka}. Ptolemaic graphs are gem-free chordal graphs. The following theorem characterizes ptolemaic graphs.
\begin{theorem}[\cite{kn:howorka}] A connected graph is ptolemaic if and only if for all pairs of maximal cliques $C_1$ and $C_2$ with $C_1 \cap C_2 \neq \varnothing$, the intersection $C_1 \cap C_2$ separates $C_1 \setminus C_2$ from $C_2 \setminus C_1$. \end{theorem}
\begin{lemma} \label{ptolemaic} Let $T$ be a tree. Then $L(T)^2$ is ptolemaic. \end{lemma}
\begin{proof} Consider $L(T)$. Let $C$ be a block and let $P$ and $Q$ be two blocks that each intersects $C$ in one vertex. Since $L(T)$ is claw-free, the intersections $P \cap C$ and $Q \cap C$ are distinct vertices. The intersection of the maximal cliques $P \cup C$ and $Q \cup C$, which is $C$, separates $P \setminus Q$ and $Q \setminus P$ in $L(T)^2$. Since all intersecting pairs of maximal cliques are of this form, this proves the lemma. \qed\end{proof}
\begin{corollary} \label{char} Let $G$ be a tree-cograph. Then $L(G)^2$ has a decomposition tree with internal nodes labeled as join nodes and union nodes and where the leaves are labeled as ptolemaic graphs. \end{corollary}
{F}rom Corollary~\ref{char} it follows that $L(G)^2$ is perfect~\cite{kn:chudnovsky}, that is, $L(G)^2$ has no odd holes or odd antiholes~\cite{kn:lovasz}. This implies that the chromatic number of $L(G)^2$ is equal to the clique number. Therefore, to compute the strong chromatic index of a tree-cograph $G$ it suffices to compute the clique number of $L(G)^2$.
\begin{theorem} Let $G$ be a tree-cograph and let $T$ be a decomposition tree for $G$. There exists a linear-time algorithm that computes the strong chromatic index of $G$. \end{theorem}
\begin{proof} First assume that $G=(V,E)$ is a tree. Then the strong chromatic index of $G$ is \begin{equation} \label{form1} s\chi^{\prime}(G)=\max \;\{\; d(x)+d(y)-1 \;|\; (x,y) \in E\;\} \end{equation} where $d(x)$ is the degree of the vertex $x$. To see this notice that Formula~(\ref{form1}) gives the clique number of $L(G)^2$. Assume that $G$ is the complement of a tree. By Lemma~\ref{complement of tree} the strong chromatic index is the number of edges of $G$, that is, the number of non-edges of the tree $\Bar{G}$, which is \[s\chi^{\prime}(G)=\binom{n}{2} - (n-1).\] Assume that $G$ is the union of two tree-cographs $G_1$ and $G_2$. Then, obviously, \[s\chi^{\prime}(G)= \max \;\{\; s\chi^{\prime}(G_1), \;s\chi^{\prime}(G_2)\;\}.\] Finally, assume that $G$ is the join of two tree-cographs $G_1$ and $G_2$. Let $X$ be the set of edges of $G$ that have one endpoint in $G_1$ and the other in $G_2$. Then, by Lemma~\ref{join}, we have \[s\chi^{\prime}(G) = |X|+s\chi^{\prime}(G_1) + s\chi^{\prime}(G_2).\] The decomposition tree for $G$ has $O(n)$ nodes. For the trees the strong chromatic index can be computed in linear time. In all other cases, the evaluation of $s\chi^{\prime}(G)$ takes constant time.
It follows that this algorithm runs in $O(n)$ time, when a decomposition tree is a part of the input. \qed\end{proof} \section{Induced matching in tree-cographs} Consider a strong edge coloring of a tree-cograph $G$. Then each color class is an induced matching in $G$, which is an independent set in $L(G)^2$~\cite{kn:cameron}. In this section we show that the maximal value of an induced matching in $G$ can be computed in linear time. Again, we assume that a decomposition tree is a part of the input. \begin{theorem} \label{induced matching} Let $G$ be a tree-cograph and let $T$ be a decomposition tree for $G$. Then the maximal number of edges in an induced matching in $G$ can be computed in linear time. \end{theorem} \begin{proof} In this proof we denote the cardinality of a maximum induced matching in a graph $G$ by $i\nu(G)$. First assume that $G$ is a tree. Since the maximum induced matching problem can be formulated in monadic second-order logic, there exists a linear-time algorithm to compute the cardinality of a maximal induced matching in $G$ \cite{kn:brandstadt,kn:fricke}. Assume that $G$ is the complement of a tree. By Lemma~\ref{complement of tree} $L(G)^2$ is a clique. Thus the cardinality of a maximum induced matching in $G$ is one if $G$ has a nonedge and otherwise it is zero. Assume that $G$ is the union of two tree-cographs $G_1$ and $G_2$. Then \[i\nu(G) = i\nu(G_1)+i\nu(G_2).\] Assume that $G$ is the join of two tree-cographs $G_1$ and $G_2$. Then \[i\nu(G)= \max\;\{\;i\nu(G_1), \;i\nu(G_2), \;1\;\}.\] This proves the theorem. \qed\end{proof} \section{Permutation graphs} A permutation diagram on $n$ points is obtained as follows. Consider two horizontal lines $L_1$ and $L_2$ in the Euclidean plane. For each line $L_i$ consider a linear ordering $\prec_i$ of $\{1,\ldots,n\}$ and put points $1,\ldots,n$ on $L_i$ in this order. For $k=1,\ldots,n$ connect the two points with the label $k$ by a straight line segment. \begin{definition}[\cite{kn:golumbic}] A graph $G$ is a permutation graph if it is the intersection graph of the line segments in a permutation diagram. \end{definition} \begin{figure} \caption{A permutation graph and a permutation diagram} \end{figure} Consider two horizontal lines $L_1$ and $L_2$ and on each line $L_i$ choose $n$ intervals. Connect the left - and right endpoint of the $k^{\mathrm{th}}$ interval on $L_1$ with the left - and right endpoint of the $k^{\mathrm{th}}$ interval on $L_2$. Thus we obtain a collection of $n$ trapezoids. We call this a trapezoid diagram. \begin{definition} A graph is a trapezoid graph if it is the intersection graph of a collection of trapezoids in a trapezoid diagram. \end{definition} \begin{lemma} \label{permutation} If $G$ is a permutation graph then $L(G)^2$ is a trapezoid graph. \end{lemma} \begin{proof} Consider a permutation diagram for $G$. Each edge of $G$ corresponds to two intersecting line segments in the diagram. The four endpoints of a pair of intersecting line segments define a trapezoid. Two vertices in $L(G)^2$ are adjacent exactly when the corresponding trapezoids intersect (see Proposition~1 in~\cite{kn:cameron2}). \qed\end{proof} \begin{theorem} There exists an $O(n^4)$ algorithm that computes a strong edge coloring in permutation graphs. \end{theorem} \begin{proof} Dagan, {\em et al.\/},~\cite{kn:dagan} show that a trapezoid graph can be colored by a greedy coloring algorithm. It is easy to see that this algorithm can be adapted so that it finds a strong edge-coloring in permutation graphs. 
\qed\end{proof} \begin{remark} A somewhat faster coloring algorithm for trapezoid graphs appears in~\cite{kn:felsner}. Their algorithm runs in $O(n \log n)$ time where $n$ is the number of vertices in the trapezoid graph. An adaption of their algorithm yields a strong edge coloring for permutation graphs that runs in $O(m \log n)$ time, where $n$ and $m$ are the number of vertices and edges in the permutation graph. \end{remark} \subsection{Bipartite permutation graphs} A graph is a {\em bipartite permutation graph} if it is not only a bipartite graph but also a permutation graph~\cite{kn:spinrad}. Let $G=(A,B,E)$ be a bipartite permutation graph with color classes $A$ and $B$. \begin{lemma} \label{bip perm} Let $G$ be a bipartite permutation graph. Then $L(G)^2$ is an interval graph. \end{lemma} \begin{proof} We first show that $L(G)^2$ is chordal. We may assume that $L(G)^2$ is connected. Let $x$ and $y$ be two non-adjacent vertices in a graph $H$. An $x,y$-separator is a set $S$ of vertices which separates $x$ and $y$ in distinct components. The separator is a minimal $x,y$-separator if no proper subset of $S$ separates $x$ and $y$. A set $S$ is a minimal separator if there exist non-adjacent vertices $x$ and $y$ such that $S$ is a minimal $x,y$-separator. Recall that Dirac characterizes chordal graphs by the property that every minimal separator is a clique~\cite{kn:dirac}. Consider the trapezoid diagram. Let $S$ be a minimal separator in the trapezoid graph $L(G)^2$ and consider removing the trapezoids that are in $S$ from the diagram. Every component of $L(G)^2-S$ is a connected part in the diagram. Consider the left-to-right ordering of the components in the diagram. Since $S$ is a minimal separator there must exist two {\em consecutive\/} components $C_1$ and $C_2$ such that every vertex of $S$ has a neighbor in both $C_1$ and $C_2$~\cite{kn:bodlaender}. Assume that $S$ has two non-adjacent trapezoids $t_1$ and $t_2$. Each of $t_i$ is characterized by two crossing line segments $\{a_i,b_i\}$ of the permutation diagram. Since $t_1$ and $t_2$ are not adjacent, any pair of line-segments with one element in $\{a_1,b_1\}$ and the other element in $\{a_2,b_2\}$ are parallel. Each trapezoid $t_i$ intersects each component $C_1$ and $C_2$. Since pairs of line-segments are parallel, we have that, for some $i \in \{1,2\}$ \begin{enumerate}[\rm (1)] \item $N_G(a_i) \cap C_1 \subseteq N_G(a_{3-i}) \cap C_1$, \item $N_G(a_i) \cap C_1 \subseteq N_G(b_{3-i}) \cap C_1$, \item $N_G(b_i) \cap C_1 \subseteq N_G(a_{3-i}) \cap C_1$ and \item $N_G(b_i) \cap C_1 \subseteq N_G(b_{3-i}) \cap C_1$, \end{enumerate} and the reverse inequalities hold for $C_2$. Each trapezoid $t_i$ has at least one line segment of $\{a_i,b_i\}$ intersecting with a line segment of $C_i$. By the neighborhood containments this implies that $G$ has a triangle, which contradicts that $G$ is bipartite. This proves that $S$ is a clique and by Dirac's characterization $L(G)^2$ is chordal. Lekkerkerker and Boland prove in~\cite{kn:lekkerkerker} that a graph $H$ is an interval graph if and only if $H$ is chordal and $H$ has no asteroidal triple. It is easy to see that a permutation graph has no asteroidal triple~\cite{kn:kloks}. Cameron proves in~\cite{kn:cameron2} (and independently Chang proves in~\cite{kn:chang}) that $L(H)^2$ is AT-free whenever a graph $H$ is AT-free. Thus, since $L(G)^2$ is chordal and AT-free, $L(G)^2$ is an interval graph. This proves the lemma. 
\qed\end{proof} Chang proves in~\cite{kn:chang} that there exists a linear-time algorithm that computes a maximum induced matching in bipartite permutation graphs. We show that there is a simple linear-time algorithm that computes the strong chromatic index of bipartite permutation graphs. \begin{theorem} There exists a linear-time algorithm that computes the strong chromatic index of bipartite permutation graphs. \end{theorem} \begin{proof} Let $G=(A,B,E)$ be a bipartite permutation graph and consider a permutation diagram for $G$. Let $[a_1,\ldots,a_s]$ and $[b_1,\ldots,b_t]$ be left-to-right orderings of the vertices of $A$ and $B$ on the topline of the diagram. Assume that $a_1$ is the left-most endpoint of a line segment on the topline. We may assume that the line segment of $a_1$ intersects the line segment of $b_1$. Consider the set of edges in the maximal complete bipartite subgraph $M$ in $G$ that consists of the following vertices. \begin{enumerate}[\rm (a)] \item $M$ contains $a_1$ and $b_1$, \item $M$ contains all the vertices of $A$ of which the endpoint on the topline is to the left of $b_1$, \item $M$ contains all the vertices of $B$ of which the endpoint on the bottom line is to the left of $a_1$. \end{enumerate} Notice that $M$ is the set of edges in the complete bipartite subgraph in $G$ induced by \[N[a_1] \cup N[b_1].\] Extend the set of edges in $M$ with the edges in $G$ that have one endpoint in $M$. Call the set of edges in $M$ plus the edges with one endpoint in $N(a_1) \cup N(b_1)$ the extension $\Bar{M}$ of $M$. That is, \[\Bar{M}=\{\;\{p,q\}\in E\;|\; p \in M \quad\text{or}\quad q \in M\;\}.\] Notice that $\Bar{M}$ is the unique maximal clique that contains the simplicial edge $\{a_1,b_1\}$ in $L(G)^2$. The second maximal clique in $L(G)^2$ is found by the process described above for the line segments induced by $A \setminus \{a_1\} \cup B$. Likewise, the third maximal clique in $L(G)^2$ is found by repeating the process for the line segments induced by $A \cup B \setminus \{b_1\}$. Next, remove the vertices $a_1$ {\em and\/} $b_1$ and repeat the three steps described above. It is easy to see that the list obtained in this manner contains all the maximal cliques of $L(G)^2$. Notice also that this algorithm can be implemented to run in linear time. Since $L(G)^2$ is perfect the chromatic number is equal to the clique number, so it suffices to keep track of the cardinalities of the maximal cliques that are found in the process described above. \qed\end{proof} \section{Chordal bipartite graphs} \begin{definition}[\cite{kn:golumbic2,kn:huang}] A bipartite graph is chordal bipartite if it has no induced cycles of length more than four. \end{definition} In contrast to bipartite permutation graphs, $L(G)^2$ is not necessarily chordal when $G$ is chordal bipartite. An example to the contrary is shown in Figure~\ref{counterexample}. \begin{figure} \caption{A chordal bipartite graph $G$ for which $L(G)^2$ is not chordal.} \label{counterexample} \end{figure} A graph is {\em weakly chordal} if it has no induced cycle of length more than four or the complement of such a cycle~\cite{kn:hayward2}. Weakly chordal graphs are perfect. Notice that chordal bipartite graphs are weakly chordal. Cameron, Sritharan and Tang prove in~\cite{kn:cameron3} (and independently Chang proves in~\cite{kn:chang}) that $L(G)^2$ is weakly chordal whenever $G$ is weakly chordal. 
Thus, if $G$ is chordal bipartite then $L(G)^2$ is perfect and so, in order to compute the strong chromatic index of $G$ it is sufficient to compute the clique number in $L(G)^2$ (see also~\cite{kn:abueida}). It is well-known that the clique number of a perfect graph can be computed in polynomial time~\cite{kn:grotschel}.\footnote{Actually, this paper shows that for any graph $G$ with $\omega(G)=\chi(G)$ the values of these parameters can be determined in polynomial time. The reason is that Lov\'asz' bound $\vartheta(G)$ for the Shannon capacity of a graph can be computed in polynomial time for all graphs, {\em via\/} the ellipsoid method, and the parameter $\vartheta(G)$ is sandwiched between $\omega(G)$ and $\chi(G)$.} The algorithm presented in~\cite{kn:hayward} to compute the clique number of weakly chordal graphs runs in $O(n^3)$ time, where $n$ is the number of vertices in the graph. A direct application of their algorithm to solve the strong chromatic index of chordal bipartite graphs $G$ involves computing the graph $L(G)^2$. This graph has $m$ vertices, where $m$ is the number of edges in $G$. This gives a time bound of $O(n^6)$ for computing the strong chromatic index of a chordal bipartite graph (see also~\cite{kn:abueida}). In this section we show that there is a more efficient method.
\begin{definition}[\cite{kn:yannakakis}] A bipartite graph $G=(A,B,E)$ is a chain graph if there exists an ordering $a_1, a_2, \ldots, a_{|A|}$ of the vertices in $A$ such that $N(a_1) \subseteq N(a_2) \subseteq \cdots \subseteq N(a_{|A|})$. \end{definition}
Chain graphs are sometimes called {\em difference graphs}~\cite{kn:hammer}. Equivalently, a graph $G=(V,E)$ is a chain graph if there exists a positive real number $T$ and a real number $w(x)$ for every vertex $x \in V$ such that $|w(x)| < T$ for every $x \in V$ and such that, for any pair of vertices $x$ and $y$, $\{x,y\} \in E$ if and only if $|w(x)-w(y)| \geqslant T$ $($see Figure~\ref{fig difference} for an example$)$.
\begin{figure} \caption{An example of a difference-graph representation of a chain graph.} \label{fig difference} \end{figure}
Chain graphs can be characterized in many ways~\cite{kn:hammer}. For example, a graph is a chain graph if and only if it has no induced $K_3$, $2K_2$ or $C_5$~\cite[Proposition~2.6]{kn:hammer}. Thus a bipartite graph is a chain graph if it has no induced $2K_2$. Abueida, {\em et al.\/}, prove that, if $G$ is a bipartite graph that does not contain an induced $C_6$, then a maximal clique in $L(G)^2$ is a maximal chain subgraph of $G$~\cite{kn:abueida}. Notice that $C_6$ has three consecutive edges that form a clique in $L(G)^2$, however, these edges do not form a chain subgraph. Thus, computing the clique number of $L(G)^2$ for $C_6$-free bipartite graphs $G$ is equivalent to finding the maximal number of edges that form a chain graph in $G$. In~\cite{kn:abueida} the authors prove that, if $G$ is a $C_6$-free bipartite graph, then \[\chi(G^{\ast})=ch(G),\] where $G^{\ast}$ is the complement of $L(G)^2$ and $ch(G)$ is the minimum number of chain subgraphs of $G$ that cover the edges of $G$. An {\em antimatching} in a graph $G$ is a collection of edges which forms a clique in $L(G)^2$~\cite{kn:mahdian}. It is easy to see that finding a chain subgraph with the maximal number of edges in general graphs is NP-complete. Mahdian mentions in his paper that the complexity of maximum antimatching in simple bipartite graphs is open~\cite{kn:mahdian}.
\begin{lemma} Let $G$ be a bipartite graph.
Finding a maximum set of edges that form a chain subgraph of $G$ is NP-complete. \end{lemma}
\begin{proof} Let $G=(A,B,E)$ be a bipartite graph. Let $C(G)$ be the graph obtained from $G$ by making cliques of $A$ and $B$. Notice that $G$ is a chain graph if and only if $C(G)$ is chordal. Yannakakis shows in~\cite{kn:yannakakis} that adding a minimum set of edges to $C(G)$ such that this graph becomes chordal is NP-complete. \noindent Consider the bipartite complement $G^{\prime}$ of $G$. Adding a minimum set of edges such that $G$ becomes a chain graph is equivalent to removing a minimum set of edges from $G^{\prime}$ such that the remaining graph is a chain graph. This completes the proof. \qed\end{proof}
In the following theorem we present our result for chordal bipartite graphs.
\begin{theorem} \label{chordal bip} There exists an $O(n^4)$ algorithm that computes the strong chromatic index of chordal bipartite graphs. \end{theorem}
\begin{proof} Let $G$ be chordal bipartite with color classes $C$ and $D$. Consider the bipartite adjacency matrix $A$ in which rows correspond with vertices of $C$ and columns correspond with vertices of $D$. An entry of this matrix is one if the corresponding vertices are adjacent and it is zero if they are not adjacent. \noindent It is well-known that $G$ is chordal bipartite if and only if $A$ is totally balanced. Notice that a chain graph has a bipartite adjacency matrix that is triangular. So we look for a maximal submatrix of $A$ which is triangular after permuting rows and columns. \noindent Anstee and Farber, and Lehel, prove that a totally balanced matrix which has no repeated columns can be completed into a `maximal totally balanced matrix'~\cite{kn:anstee2,kn:lehel}. If $A$ has $n$ rows then this completion has $\binom{n+1}{2}+1$ columns. The rows and columns of a maximal totally balanced matrix can be permuted such that the adjacency matrix gets the following form. \begin{figure} \caption{The canonical form of a maximal totally balanced matrix after permuting rows and columns.} \label{fig totallybalanced} \end{figure} \noindent One can easily deal with repeated columns in $A$ by giving the vertices a weight. When the matrix has the desired form, one can easily find the maximal triangular submatrix in linear time. Anstee and Farber give a rough bound of $O(n^5)$ to find the completion. But faster algorithms are given by Paige and Tarjan and by Lubiw~\cite{kn:paige,kn:lubiw}. \qed\end{proof} \end{document}
\begin{document} \sloppy \begin{abstract} I generalize a theorem of Predd, et al.~(2009) on domination and strictly proper scoring rules to the case of non-additive scoring rules. \end{abstract} \title{Proper Scoring Rules and Domination} Let $\Omega$ be a finite space. Let $\mathcal C$ be the set of all functions from the power set $\mathcal P\Omega$ to the reals: these we call credence functions. Let $\mathcal P$ be the subset of $\mathcal C$ which consists of the functions satisfying the axioms of probability. A scoring or inaccuracy rule is a function $s$ from $\mathcal C$ to $[0,\infty]^\Omega$. Thus, $s(c)$ for a $c\in\mathcal C$ is a function from $\Omega$ to $[0,\infty]$, and its value at $\omega\in\Omega$ represents how inaccurate having credence $c$ is if the true state is $\omega$. Given a probability $p \in \mathcal P$, let $E_p$ be the expected value with respect to $p$. Then a scoring rule $s$ is said to be proper provided that for all $p\in \mathcal P$ and $c\in\mathcal C$, $E_p s(p) \le E_p s(c)$, where $$ E_p f = \sum_{\omega \in \Omega, p(\{\omega\})\ne 0} p(\{\omega\}) f(\omega) $$ is the mathematical expectation with a tweak to ensure well-definition in case $f$ is infinite at some points with probability zero. If the inequality $E_p s(p) \le E_p s(c)$ is strict whenever $c \ne p$, the rule is said to be strictly proper. Enumerate the elements of $\Omega$ as $\omega_1,\dots,\omega_n$. Let $T$ be the set of vectors $v=(v_1,\dots,v_n)$ in $\mathbb R^n$ with $v_i\ge 0$ and $\sum_i v_i = 1$. For $v\in T$, let $p_v$ be the probability function such that $p_v(\{\omega_i\}) = v_i$ for all $i$. Say that a scoring rule is continuous on probabilities if the function $\hat s(v) = s(p_v)$ is a continuous function from $T$ to $[0,\infty]^\Omega$, where the latter space is equipped with the product topology. Say that a member of $[0,\infty]^\Omega$ is finite if it never takes on the value $\infty$; otherwise, say it's infinite. Let $S_s = \{ s(p) : p\in \mathcal P \}$ be the set of all scores of probabilities, and let $F_s = S_s \cap [0,\infty)^\Omega$ be the set of finite scores. \begin{thm}\label{th:domination} Suppose $s$ is strictly proper and that $S_s$ is the closure of $F_s$. Then for any $c\in \mathcal C\backslash\mathcal P$, there is a probability $p$ such that $s(p) < s(c)$ everywhere on $\Omega$. \end{thm} In other words, given the stated conditions, every score of a non-probability is dominated by the score of a probability (lower scores are better). \begin{cor}\label{cor:domination} Suppose $s$ is strictly proper and continuous on the probabilities. Then for any $c\in \mathcal C\backslash\mathcal P$, there is a probability $p$ such that $s(p) < s(c)$ everywhere on $\Omega$. \end{cor} In the special case of additive scoring rules, this was proved by Predd, et al. (2009). This result in the case of strict propriety was announced by Richard Pettigrew, but the proof appears deficient. Michael Nielsen has discovered a proof of this domination result using different methods at approximately the same time as I did. \begin{proof}[Proof of Corollary~\ref{cor:domination}] The set $S_s$ is the image of the compact set $T$ under the continuous mapping $v\mapsto s(p_v)$ so it is compact. It remains to show that it is equal to $\overline {F_s}$. Note that $[0,\infty]^\Omega$ is metrizable (e.g., one can use the metric $d(a,b)=\sum_i |\arctan a_i-\arctan b_i|$), so we can work in terms of sequences. Suppose $(s_n)$ is a convergent sequence in $F_s$.
Then $s_n = \hat s(v_n)$ for a sequence $(v_n)$ in $T$. Passing to a subsequence, by compactness of $T$, we may assume that $v_n$ converges to some $v\in T$. Thus, $s_n$ converges to $\hat s(v)$ by continuity of $\hat s$, and hence the limit of $s_n$ is in $S_s$. Thus, the closure of $F_s$ is a subset of $S_s$. Next, observe that if $v \in T^\circ$, so that all the coordinates $v_i$ of $v$ are non-zero, and $c$ is any member of $\mathcal C\backslash \{ p_v \}$, then by strict propriety $E_{p_v}(\hat s(v)) < E_{p_v}(s(c))$, so $E_{p_v}(\hat s(v))<\infty$. If $\hat s(v)=(a_1,...,a_n)$, then $E_{p_v}(\hat s(v)) = \sum_i v_i a_i$ since the $v_i$ are all non-zero, and so $a_i$ is finite for all $i$. Thus, $\hat s[T^\circ] \subseteq F_s$. Fix $u \in S_s\backslash F_s$. Then $u=\hat s(v)$ for some $v\in\partial T$. Then there will be a sequence in $T^\circ$ converging to $v$, and its image under $\hat s$ will be a sequence in $F_s$ converging to $\hat s(v)$ by continuity. Thus, the closure of $F_s$ equals $S_s$, and the proof is complete by Theorem~\ref{th:domination}. \end{proof} To prove Theorem~\ref{th:domination}, we will transform the setting to a geometric one. Let $z = \hat s(c)$. Given two points $a$ and $b$ in $(-\infty,\infty]^n$, define the inner product $$ \langle a, b \rangle = \sum_{i\in \{ i : 1\le i\le n, a_i\ne 0, b_i\ne 0 \}} a_i b_i. $$ If both $a$ and $b$ are in $\mathbb R^n$ this is the usual inner product. It is easy to see that $\langle \cdot, v \rangle$ is a continuous function on $(-\infty,\infty]^n$. A closed half-space in $\mathbb R^n$ with outward normal $v\in \mathbb R^n\backslash \{ 0 \}$ is any set of the form: $$ \{ z \in \mathbb R^n : \langle z,v \rangle \le a \} $$ for some $a\in\mathbb R$. An open half-space is one defined as above but with strict inequality. The interior $H^\circ$ of a closed half-space is the corresponding open half-space. If $H$ is a half-space in $\mathbb R^n$ with outward normal $v$, say that its extension $H^*$ is the set $$ \{ z \in (-\infty,\infty]^n : \exists w \in H\, (\langle z, v\rangle \le \langle w, v \rangle) \}. $$ Say that a closed half-space $H$ separates a set $A$ from a set $B$ provided that $A\subseteq H$ and $B \subseteq \mathbb R^n\backslash H^\circ$. The hyperplane separation theorem says that if $A$ and $B$ are convex and disjoint subsets of $\mathbb R^n$, there is such an $H$. Let $C = (-\infty,0]^n$ be the non-positive orthant of $\mathbb R^n$. For a point $z \in (-\infty,\infty]^n$, let $z_i$ be its $i$th coordinate. For any $v\ne 0$ and $D\subseteq \mathbb R^n$, let $$ \sigma_D(v) = \sup_{z\in D} \langle z, v \rangle $$ be the support function of $D$. \begin{lem}\label{lem:cone1} Fix a point $z \in [0,\infty]^n$. Let $D$ be a subset of $\mathbb R^n$. Suppose that for all $v\in C$, the support function $\sigma_D(v)$ is finite. Let $H_v = \{ w : \langle w, v \rangle \le \sigma_D(v) \}$. Suppose further that $z \in (H_v^\circ)^*$ for every non-zero $v\in C$. Then there is a $z'$ in the convex hull $\operatorname{Conv}(D)$ such that $z'_i < z_i$ for all $i$. \end{lem} \begin{lem}\label{lem:cone2} Let $D$ be a closed subset of $\mathbb R^n$ such that for every non-zero $v\in C$, there is a unique closed half-space $H_v$ with outward normal $v$ such that $D\subseteq H_v$ and the boundary of $H_v$ intersects $D$ at exactly one point. Then for any $z\in \operatorname{Conv}(D)$, there is a $z'\in D$ such that $z'_i \le z_i$ for all $i$.
\end{lem} \begin{proof}[Proof of Lemma~\ref{lem:cone1}] Let $Q = \{ w \in \mathbb R^n : \forall i(w_i < z_i) \}$. Suppose first that $Q$ does not intersect the convex hull $\operatorname{Conv}(D)$. Then let $H$ be a half-space with outward normal $v$ that separates $\operatorname{Conv}(D)$ from $Q$. It follows that there is an $a\in\mathbb R$ such that for every $w \in Q$, we have $\langle w, v \rangle \ge a$. Suppose $v_j > 0$ for some $j$. Choose any member $w'$ of $Q$. Let $b = \langle w', v \rangle$. Let $w_i = w'_i$ for $i\ne j$ and let $w_j = w'_j - (b-a+1)/v_j$. Then $w \in Q$ and $\langle w, v \rangle = b - (b-a+1) = a-1 < a$, a contradiction. Thus, $v_j \le 0$ for all $j$, and hence $v \in C$. Then $H_v$ and $H$ have the same outward normal, and since $H_v$ is the smallest closed half-space with that outward normal to contain $D$, we must have $H_v \subseteq H$. It follows that $H_v$ separates $\operatorname{Conv}(D)$ from $Q$. Let $a=\sigma_D(v)$. Then $H_v = \{ z : \langle z,v \rangle \le a \}$, and for all $w\in Q$, we have $\langle w,v \rangle \ge a$. Taking a sequence of members of $Q$ converging to $z$ and using the continuity of $\langle \cdot, v \rangle$, we conclude that $\langle z,v \rangle \ge a$. But this contradicts the claim that $z \in (H_v^\circ)^*$. Thus, $Q$ intersects $\operatorname{Conv}(D)$. Let $z'$ be any element of their intersection. \end{proof} \begin{proof}[Proof of Lemma~\ref{lem:cone2}] Observe that $\operatorname{Conv}(D)\cap (z+C)$ is bounded. To see this, let $v=(-1/n,...,-1/n)$. Then $\operatorname{Conv}(D) \subseteq H_v$. But it is easy to see that $H_v \cap (z+C)$ must be bounded. Thus, $\operatorname{Conv}(D)\cap (z+C)$ is a compact set. Let $z'$ be the point of that set furthest from $z$. Any point in $z'+C$ other than $z'$ will be even further from $z$, so $\operatorname{Conv}(D)\cap (z'+C)$ contains no point other than $z'$. Let $H'$ be a closed half-space that separates $\operatorname{Conv}(D)$ from the interior of $z'+C$. Since $C$ is self-dual, $H'$ has outward normal $v'$ in $C$. Then $D$ is contained in $H_{v'}$. The only way $z'$ can be a convex combination of members of $D$, then, is if it is a convex combination of members of $D$ on the boundary of $H_{v'}$. But there is only one member of $D$ on the boundary of $H_{v'}$ for $v'\in C$, and hence $z'$ must be equal to that one member. Thus, $z'\in D$, and the proof is complete. \end{proof} \begin{proof}[Proof of Theorem~\ref{th:domination}] Let $\phi:(-\infty,\infty]^\Omega \to (-\infty,\infty]^n$ be the homeomorphism such that $\phi(f)=u$ where $u_i = f(\omega_i)$. Observe that $E_{p_v}(f) = \langle \phi(f), v \rangle$ for $v\in T$. Let $s^*(v) = \phi(\hat s(v))$. Let $D=\phi[F_s]$. Since all scores are non-negative, if $u\in C$ and $v\in D$, we have $\langle v,u \rangle \le 0$, and so $\sigma_D(u)\le 0<\infty$ for all $u\in C$. Fix any $c\in \mathcal C\backslash \mathcal P$. For each $v \in T$ with $s^*(v) \in D$, we can let $H_v = \{ w \in (-\infty,\infty]^n : \langle w, -v \rangle \le \langle s^*(v), -v \rangle \}$. Strict propriety together with the closure assumption then ensures that if $z=\phi(s(c))$, then the conditions of Lemma~\ref{lem:cone1} are satisfied. Let $z'$ be as in the conclusion of Lemma~\ref{lem:cone1}. Then the conditions of Lemma~\ref{lem:cone2} yield a $z'' \in D$ such that $z''_i \le z'_i$ for all $i$. Letting $p\in\mathcal P$ be such that $z''=\phi(s(p))$, our proof is complete. \end{proof} \begin{thebibliography}{99} \bibitem{Predd} Joel B.
Predd, Robert Seiringer, Elliott H. Lieb, Daniel N. Osherson, H. Vincent Poor, and Sanjeev R. Kulkarni. 2009. ``Probabilistic Coherence and Proper Scoring Rules'', \textit{IEEE Transactions on Information Theory} 55:4786--4792. \end{thebibliography} \end{document}
\begin{document} \title[A Stabilized CutFEM for the Darcy Surface Problem]{\bf A Stabilized Cut Finite Element Method for the Darcy Problem on Surfaces} \subjclass[2010]{Primary 65N30; Secondary 65N85, 58J05.} \author[P. Hanbso]{Peter Hansbo} \address[Peter Hansbo]{Department of Mechanical Engineering, J\"onk\"oping University, SE-55111 J\"onk\"oping, Sweden.} \email{[email protected]} \author[M. G. Larson]{Mats G.\ Larson} \author[A. Massing]{Andr\'e Massing} \address[Mats G.\ Larson, Andr\'e Massing]{Department of Mathematics and Mathematical Statistics, Ume{\aa} University, SE-90187 Ume{\aa}, Sweden} \email{[email protected]} \email{[email protected]} \date{\today} \keywords{Surface PDE, Darcy Problem, cut finite element method, stabilization, condition number, a priori error estimates} \maketitle \begin{abstract} {We develop a cut finite element method for the Darcy problem on surfaces. The cut finite element method is based on embedding the surface in a three dimensional finite element mesh and using finite element spaces defined on the three dimensional mesh as trial and test functions. Since we consider a partial differential equation on a surface, the resulting discrete weak problem might be severely ill conditioned. We propose a full gradient and a normal gradient based stabilization computed on the background mesh to render the proposed formulation stable and well conditioned irrespective of the surface positioning within the mesh. Our formulation extends and simplifies the Masud-Hughes stabilized primal mixed formulation of the Darcy surface problem proposed in~\cite{HansboLarson2016} on fitted triangulated surfaces. The tangential condition on the velocity and the pressure gradient is enforced only weakly, avoiding the need for any tangential projection. The presented numerical analysis accounts for different polynomial orders for the velocity, pressure, and geometry approximation which are corroborated by numerical experiments. In particular, we demonstrate both theoretically and through numerical results that the normal gradient stabilized variant results in a high order scheme. } \end{abstract} \section{Introduction} \label{sec:introduction} \subsection{Background and Earlier Work} In recent years, there has been a rapid development of cut finite element methods, also called trace finite element methods, for the numerical solution of partial differential equations (PDEs) on complicated or evolving surfaces embedded into $\mathbb{R}^d$. The main idea is to use the restriction of finite element basis functions defined on a $d$-dimensional background mesh to a discrete, piecewise smooth surface representation which is allowed to cut through the mesh in an arbitrary fashion. The active background mesh then consists of all elements which are cut by the discrete surface, and the finite element space restricted to the active mesh is used to discretize the surface PDE. This approach was first proposed in \cite{OlshanskiiReuskenGrande2009} for the Laplace-Beltrami on a closed surface, see also~\cite{BurmanClausHansboEtAl2014} and the references therein for an overview of cut finite element techniques. Depending on the positioning of the discrete surface within the background mesh, the resulting system matrix might be severely ill conditioned and either preconditioning \cite{OlshanskiiReusken2014} or stabilization \cite{BurmanHansboLarson2015} is necessary to obtain a well conditioned linear system. 
The stabilization introduced and analyzed in~\cite{BurmanHansboLarson2015} is based on so called face stabilization or ghost penalty, which provides control over the jump in the normal gradient across interior faces in the active mesh. In particular, it was shown that the condition number scaled in an optimal way, independent of how the surface cut the background mesh. Thanks its versatility, the face based stabilization can naturally be combined with discontinuous cut finite element methods as demonstrated in~\cite{BurmanHansboLarsonEtAl2016a}. To reduce the matrix stencil and ease the implementation, a particular simple low order, full gradient stabilization using continuous piecewise linears was developed and analyzed in~\cite{BurmanHansboLarsonEtAl2016c} for the Laplace-Beltrami operator. Then a unifying abstract framework for analysis of cut finite element methods on embedded manifolds of arbitrary codimension was developed in~\cite{BurmanHansboLarsonEtAl2016} and, in particular, the normal gradient stabilization term was introduced and analyzed. Further developments include convection problems~\cite{BurmanHansboLarsonEtAl2015b, OlshanskiiReuskenXu2014}, coupled bulk-surface problems~\cite{BurmanHansboLarsonEtAl2014, GrossOlshanskiiReusken2014} and higher order versions of trace fem for the Laplace-Beltrami operator~\cite{Reusken2014,GrandeLehrenfeldReusken2016}. Moreover, extensions to time-dependent, parabolic-type problems on evolving domains were proposed in~\cite{HansboLarsonZahedi2016, OlshanskiiReuskenXu2014a}. So far, with their many applications to fluid dynamics, material science and biology, e.g., \cite{GanesanTobiska2009, GrossReusken2011, NovakGaoChoiEtAl2007, EilksElliott2008, ElliottStinnerVenkataraman2012, BarreiraElliottMadzvamuse2011}, mainly scalar-valued, second order elliptic and parabolic type equations have been considered in the context of cut finite element methods for surface PDEs. Only very recently, vector-valued surface PDEs in combination with unfitted finite element technologies have been considered, for instance in the numerical discretization of surface-bulk problems modeling flow dynamics in fractured porous media~\cite{FumagalliScotti2013, DelPraFumagalliScotti2015, AntoniettiFacciolaRussoEtAl2016, FlemischFumagalliScotti2016}. But while these contributions employed cut finite element type methods to discretize the bulk equations, only fitted (mixed and stabilized) finite elements methods on triangulated surfaces have been developed for vector surface equation such as the Darcy surface problem, see for instance~\cite{FerroniFormaggiaFumagalli2016,HansboLarson2016}. The present contribution is the first where a cut finite element method for a system of partial differential equations on a surfaces involving tangent vector fields of partial differential equations is developed. \subsection{New Contributions} We develop a stabilized cut finite element method for the numerical solution of the Darcy problem on a surface. The proposed variational formulation follows the approach in~\cite{HansboLarson2016} for the Darcy problem on triangulated surfaces which is based on the stabilized primal mixed formulation by~\citet{MasudHughes2002}. Note that standard mixed type elements are typically not available on discrete cut surfaces. 
Combining the ideas from~\cite{HansboLarson2016} with the stabilized full gradient formulations of the Laplace-Beltrami problem from~\cite{BurmanHansboLarsonEtAl2016c,BurmanHansboLarsonEtAl2016}, the tangent condition on both the velocity and the pressure gradient is enforced only weakly. When employing finite element function from the full $d$-dimensional background mesh, such a weak enforcement of the tangential condition is convenient and rather natural. To render the proposed formulation stable and well conditioned irrespective of the relative surface position in the background mesh, we consider two stabilization forms: the full gradient stabilization introduced in \cite{BurmanHansboLarsonEtAl2016c} which is convenient for low order elements, and the normal gradient stabilization introduced in \cite{BurmanHansboLarsonEtAl2016} which also works for higher order elements. Through these stabilizations, we gain control of the variation of the solution orthogonal to the surface, which in combination with the Masud-Hughes variational formulation results in a coercive formulation of the Darcy surface problem. In practice, the exact surface is approximated leading to a geometric error which we take into account in the error analysis. We show stability of the method and establish optimal order a priori error estimates. The presented numerical analysis also accounts for different polynomial orders for the velocity, pressure, and geometry approximation. \subsection{Outline} The paper is organized as follows: In Section~\ref{sec:darcy-problem-cont} we present the Darcy problem on a surface together with its Masud-Hughes weak formulation, followed by the formulation of the cut finite element method in Section~\ref{sec:darcy-problem-cutfem}. In Section~\ref{sec:preliminaries} we collect a number of preliminary theoretical results, which will be needed in Section~\ref{sec:a-priori-est} where the main a priori error estimates in the energy and $L^2$ norm are established. Finally, in Section~\ref{sec:numerical-results} we present numerical results illustrating the theoretical findings of this work. \section{The Darcy Problem on a Surface} \label{sec:darcy-problem-cont} \subsection{The Continuous Surface} \label{ssec:preliminaries} In what follows, $\Gamma$ denotes a smooth compact hypersurface without boundary which is embedded in ${{\mathbb{R}}}^{k}$ and equipped with a normal field $n: \Gamma \to \mathbb{R}^{d}$ and signed distance function $\rho$. Defining the tubular neighborhood of $\Gamma$ by $U_{\delta_0}(\Gamma) = \{ x \in \mathbb{R}^{d} : \dist(x,\Gamma) < \delta_0 \}$, the closest point projection $p(x)$ is the uniquely defined mapping given by \begin{align} p(x) = x - \rho(x) n(p(x)) \label{eq:closest-point-projection} \end{align} which maps $x \in U_{\delta_0}(\Gamma)$ to the unique point $p(x) \in \Gamma$ such that $| p(x) - x | = \dist(x, \Gamma)$ for some $\delta_0 > 0$, see~\cite{GilbargTrudinger2001}. The closest point projection allows the extension of a function $u$ defined on $\Gamma$ to its tubular neighborhood $U_{\delta_0}(\Gamma)$ using the pull back \begin{equation} \label{eq:extension} u^e(x) = u \circ p (x) \end{equation} In particular, we can smoothly extend the normal field $n_{\Gamma}$ to the tubular neighborhood $U_{\delta_0}(\Gamma)$. 
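As a simple illustration, added here only for orientation and not used in the sequel, consider the unit sphere $\Gamma = \{ x \in \mathbb{R}^3 : |x| = 1 \}$. Then $\rho(x) = |x| - 1$, the normal field is $n(x) = x$ on $\Gamma$, and the closest point projection is $p(x) = x/|x|$ for $x \neq 0$, since \begin{align} x - \rho(x)\, n(p(x)) = x - (|x| - 1) \frac{x}{|x|} = \frac{x}{|x|} = p(x) \end{align} in accordance with~\eqref{eq:closest-point-projection}. The extension of a function $u$ defined on $\Gamma$ is then simply $u^e(x) = u(x/|x|)$, which is constant along radial lines.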
On the other hand, for any subset $\widetilde{\Gamma} \subseteq U_{\delta_0}(\Gamma)$ such that $p: \widetilde{\Gamma} \to \Gamma $ is bijective, a function $w$ on $\widetilde{\Gamma}$ can be lifted to $\Gamma$ by the push forward satisfying \begin{align} (w^l)^e = w^l \circ p = w \quad \text{on } \widetilde{\Gamma} \end{align} Finally, for any function space $V$ defined on $\Gamma$, we denote the space consisting of extended functions by $V^e$ and correspondingly, the notation $V^l$ refers to the lift of a function space~$V$ defined on $\widetilde{\Gamma}$. {{} \subsection{The Surface Darcy Problem} To formulate the Darcy problem on a surface, we first recall the definitions of the surface gradient and divergence. For a function $p: \Gamma \rightarrow \mathbb{R}$ the tangential gradient of $p$ can be expressed as \begin{equation} \nabla_\Gamma p = {P_{\Gamma}} \nabla p^e, \label{eq:tangent-gradient} \end{equation} where $\nabla$ is the standard ${{\mathbb{R}}}^{d}$ gradient and ${P_{\Gamma}} = {P_{\Gamma}}(x)$ denotes the orthogonal projection of $\mathbb{R}^{d}$ onto the tangent plane $T_x\Gamma$ of $\Gamma$ at $x \in \Gamma$ given by \begin{equation} {P_{\Gamma}} = I - n \otimes n, \end{equation} where $I$ is the identity matrix. Since $p^e$ is constant in the normal direction, we have the identity \begin{equation}\label{eq:grad-full} \nabla p^e = {P_{\Gamma}} \nabla p^e \quad \text{on $\Gamma$}. \end{equation} Next, the surface divergence of a vector field $u:\Gamma \rightarrow \mathbb{R}^d$ is defined by \begin{equation} \Div_{\Gamma}(u) = \text{tr}(u \otimes \nabla_\Gamma ) = \text{div}(u^e) - n \mathrm{cd}ot (u^e \otimes \nabla)\mathrm{cd}ot n. \end{equation} With these definitions, the Darcy problem on the surface $\Gamma$ is to seek the tangential velocity vector field $u_t:\Gamma \rightarrow T(\Gamma)$ and the pressure $p:\Gamma\rightarrow \mathbb{R}$ such that \begin{subequations} \label{eq:darcy-problem-cont} \begin{alignat}{2}\label{eq:Darcya} \Div_{\Gamma} u_t &= f \qquad &\text{on $\Gamma$}, \\ \label{eq:Darcyb} u_t + \nabla_\Gamma p &= g \qquad &\text{on $\Gamma$}. \end{alignat} \end{subequations} Here, $f:\Gamma \rightarrow \mathbb{R}$ is a given function such that $\int_\Gamma f = 0$ and $g:\Gamma \rightarrow \mathbb{R}^d$ is a given tangential vector field. \subsection{The Masud-Hughes Weak Formulation} We follow \cite{HansboLarson2016} and base our finite element method on an extension of the Masud-Hughes weak formulation, originally proposed in \cite{MasudHughes2002} for planar domains, to the surface Darcy problem. 
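Before deriving the weak form we record, as an added consistency check, why the compatibility condition $\int_\Gamma f = 0$ imposed on the data is natural: since $\Gamma$ is a closed surface and $u_t$ is tangential, the surface divergence theorem yields \begin{align} \int_\Gamma f \, d\Gamma = \int_\Gamma \Div_{\Gamma} u_t \, d\Gamma = 0 . \end{align}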
Using Green's formula \begin{equation}\label{eq:Greensdiv} (\Div_{\Gamma} v_t, q)_\Gamma = -(v_t, \nabla_\Gamma q)_\Gamma \end{equation} valid for tangential vector fields $v_t$, a direct application of the original Masud-Hughes formulation is built upon the fact that the Darcy problem~(\ref{eq:darcy-problem-cont}) solves the weak problem \begin{equation} \label{eq:darcy-problem-weak-tang} a((u_t, p), (v_t, q)) = l((v_t, q)) \end{equation} for test functions $v \in [L^2(\Gamma)]$ and $q \in H^1(\Gamma)/\mathbb{R}$ with \begin{align} \label{eq:darcy-problem-weak-tang-a} a((u_t, p), (v_t, q)) &= (u_t, v_t)_{\Gamma} + (\nabla_\Gamma p, v_t)_{\Gamma} - (u_t, \nabla_\Gamma q)_{\Gamma} + \onehalf (u_t + \nabla_\Gamma p, - v_t + \nabla_\Gamma q)_{\Gamma} \\ \label{eq:darcy-problem-weak-tang-l} l((v,q)) &= (f, q)_{\Gamma} + (g, v_t)_{\Gamma} + \onehalf (g, - v_t + \nabla_\Gamma q)_{\Gamma} \end{align} As in \cite{HansboLarson2016} we enforce the tangent condition on the velocity weakly by using full velocity fields in formulation~\eqref{eq:darcy-problem-weak-tang} and not only their tangential projection. But in contrast to~\cite{HansboLarson2016} we also employ the identity (\ref{eq:grad-full}) to replace the pressure related tangent gradients with the full gradient in order to simplify the implementation further. Earlier, such full gradient based formulation have been developed for the Poisson problem on the surface, see~\cite{Reusken2014,BurmanHansboLarsonEtAl2016c}. With $\mathcal{V} = [L^2(\Gamma)]^3$ as the velocity space, $\mathcal{Q}=H^1(\Gamma)/\mathbb{R}$ as the pressure space and $\mathbb{V} = \mathcal{V} \times \mathcal{Q}$ as the total space, the resulting Masud-Hughes weak formulation of the Darcy surface problem~(\ref{eq:darcy-problem-cont}) is to seek $U := (u,p) \in \mathbb{V}$ satisfying \begin{equation} \label{eq:mhweak} A(U, V) = L(V) \quad \end{equation} for all $V := (v,q) \in \mathbb{V}$, where \begin{align} A(U,V) &=(u,v)_\Gamma +(\nabla p,v)_{\Gamma} -(u, \nabla q)_\Gamma +\frac{1}{2}(u + \nabla p,-v + \nabla q)_\Gamma, \label{eq:a} \\ L(V) &= (f,q)_\Gamma + (g,v)_\Gamma +\frac{1}{2}(g,-v + \nabla q)_\Gamma. \label{eq:l} \end{align} Expanding the forms, the bilinear form $A$ and linear form $L$ can be rewritten as \begin{align} A(U,V) &= \frac{1}{2}(u,v)_\Gamma + \frac{1}{2}(\nabla p,\nabla q)_\Gamma +\frac{1}{2}(\nabla p,v)_{\Gamma} -\frac{1}{2}(u, \nabla q)_\Gamma, \\ L(V) &= (f,q)_\Gamma + \frac{1}{2}(g,v + \nabla q)_\Gamma \end{align} and consequently, the bilinear form $A$ consists of a symmetric positive definite part and a skew symmetric part. For a more detailed discussion on various weak formulation of the surface Darcy problem, we refer to~\cite{HansboLarson2016}. 
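To make the decomposition into a symmetric positive definite part and a skew symmetric part explicit, we add the following short verification: the skew symmetric terms cancel on the diagonal, so that \begin{align} A(V,V) = \tfrac{1}{2} \| v \|_{\Gamma}^2 + \tfrac{1}{2} \| \nabla q \|_{\Gamma}^2 \qquad \text{for all } V = (v,q) \in \mathbb{V}. \end{align} The same computation applies verbatim to the discrete form $A_h$ introduced below, which is the source of the coercivity of the stabilized formulation.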
Finally, note that since $\Gamma$ is smooth and $p\in \mathcal{Q}$ is the solution to the elliptic problem $\Div_{\Gamma}(\nabla_\Gamma p) = \Div_{\Gamma} g - \Div_{\Gamma} u_t = \Div_{\Gamma} g - f$, the following elliptic regularity estimate holds \begin{equation}\label{eq:ellreg} \| u_t \|_{H^{s+1}(\Gamma)} + \| p \|_{H^{s+2}(\Gamma)} \lesssim \|f \|_{H^{s}(\Gamma)} + \| g \|_{H^{s+1}(\Gamma)} \end{equation} \section{Cut Finite Element Methods for the Surface Darcy Problem} \label{sec:darcy-problem-cutfem} \subsection{The Discrete Surface and Active Background Mesh} \label{ssec:domain-disc} For $\Gamma$, we assume that the discrete surface approximation ${\Gamma_h}$ is represented by a piecewise smooth surface consisting of smooth $d-1$ dimensional surface parts $\mathcal{K}_h = \{ K\} $ associated with a piecewise smooth normal field $n_h$. On $\Gamma_h = \bigcup_{K \in \mathcal{K}_h} K$ we can then define the discrete tangential projection ${P_{\Gamma}}h$ as the pointwise orthogonal projection on the $(d-1)$-dimensional tangent space defined for each $x \in K$ and $K\in \mathcal{K}_h$. We assume that: \begin{itemize} \item $\Gamma_h \subset U_{\delta_0}(\Gamma)$ and that the closest point mapping $p:\Gamma_h \rightarrow \Gamma$ is a bijection for $0 < h \leq h_0$. \item The following estimates hold \begin{equation} \| \rho \|_{L^\infty(\Gamma_h)} \lesssim h^{k_g+1}, \qquad \| n^e - n_h \|_{L^\infty(\Gamma_h)} \lesssim h^{k_g}. \label{eq:geometric-assumptions-II} \end{equation} \end{itemize} Let $\widetilde{\mathcal{T}}_{h}$ be a quasi-uniform mesh, with mesh parameter $0<h\leq h_0$, which consists of shape regular closed simplices and covers some open and bounded domain $\Omega \subset \mathbb{R}^{d}$ containing the embedding neighborhood $U_{\delta_0}(\Gamma)$. For the background mesh $\widetilde{\mathcal{T}}_{h}$ we define the \emph{active} (background) mesh $\mathcal{T}_h$ \begin{align} \mathcal{T}_h &= \{ T \in \widetilde{\mathcal{T}}_{h} : T \cap \Gamma_h \neq \emptyset \}, \label{eq:narrow-band-mesh} \end{align} see~Figure~\ref{fig:domain-set-up} for an illustration. Finally, for the domain covered by $\mathcal{T}_h$ we introduce the notation \begin{align} N_h &= \cup_{T \in \mathcal{T}_h} T. \label{eq:Nh-def} \end{align} \begin{figure} \caption{Set-up of the continuous and discrete domains. (Left) Continuous surface $\Gamma$ enclosed by a $\delta$ tubular neighborhood $U_{\delta}(\Gamma)$.\label{fig:domain-set-up}} \end{figure} \subsection{Stabilized Cut Finite Element Methods} On the active mesh $\mathcal{T}_h$ we introduce the discrete space of continuous piecewise polynomials of order $k$, \begin{equation} X_h^k = \{ v \in C(N_h) : v|_T \in P_k(T) \; \forall\, T \in \mathcal{T}_h \}. \label{eq:Xh-def} \end{equation} Occasionally, if the order is not of particular importance, we simply drop the superscript $k$. Next, the discrete velocity, pressure and total approximation spaces are defined, respectively, by \begin{align} \mathcal{V}_h = [X_h^{k_u}]^d, \qquad \mathcal{Q}_{h} = \{ v \in X_h^{k_p} : \lambda_{\Gamma_h}(v) = 0 \}, \label{eq:Qh-def} \qquad \mathbb{V}_h = \mathcal{V}_h \times \mathcal{Q}_h, \end{align} where we explicitly permit different approximation orders $k_u$ and $k_p$ for the velocity and pressure space. As in the continuous case, $\lambda_{\Gamma_h}(\cdot)$ denotes the average operator on ${\Gamma_h}$ defined by $\lambda_{\Gamma_h}(v) = \tfrac{1}{|\Gamma_h|}\int_{\Gamma_h} v$.
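To fix ideas, we illustrate how the active mesh~\eqref{eq:narrow-band-mesh} can be assembled in practice. The following minimal sketch is purely illustrative and not part of the method; it assumes the hypothetical situation that $\Gamma_h$ is the zero level set of a nodal function $\phi_h$ which is interpolated linearly on each simplex of the background mesh, in which case a simplex is cut exactly when the nodal values change sign or vanish on it.
\begin{verbatim}
import numpy as np

def active_mesh(cells, phi):
    # cells: (n_cells, d+1) array of vertex indices of the background simplices
    # phi:   (n_vertices,) nodal values of the level set function phi_h
    vals = phi[np.asarray(cells)]
    cut = (vals.min(axis=1) <= 0.0) & (vals.max(axis=1) >= 0.0)
    return np.flatnonzero(cut)   # indices of the active simplices forming T_h

# usage: a single tetrahedron cut by the plane {z = 0.25}
verts = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
cells = np.array([[0, 1, 2, 3]])
phi = verts[:, 2] - 0.25
print(active_mesh(cells, phi))   # -> [0]
\end{verbatim}
For higher order surface descriptions the simple sign test above is only a first filter and a more careful intersection test is required.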
Now the stabilized, full gradient cut finite element method for the surface Darcy problem~(\ref{eq:darcy-problem-cont}) is to seek $U_h := (u_h, p_h) \in \mathbb{V}_h$ such that for all $V := (v, q) \in \mathbb{V}_h$, \begin{align} B_h(U_h, V) = L_h(V) \label{eq:darcy-probl-cutfem} \end{align} where \begin{align} B_h(U_h, V) &= A_h(U_h, V) + S_h(U_h, V), \\ \label{eq:Ah-def} A_h(U_h, V) &= (u_h,v)_{{\Gamma_h}} + (\nabla p_h, v)_{{\Gamma_h}} - (u_h,\nabla q)_{{\Gamma_h}} + \onehalf ( u_h + \nabla p_h, -v + \nabla q)_{{\Gamma_h}}, \\ S_h(U_h, V) &= s_h(u_h,v) + s_h(p_h,q), \label{eq:Sh-def} \\ L_h(V) &= (f^e,q)_{{\Gamma_h}} + \onehalf (g^e, v + \nabla q)_{{\Gamma_h}}. \label{eq:Lh-def} \end{align} For the stabilization form $s_h$, two realizations will be proposed in this work. First, we consider a full gradient based stabilization originally introduced for the Laplace-Beltrami surface problem in~\cite{BurmanHansboLarsonEtAl2016c}, \begin{alignat}{3} s_h^1(u_h, v_h) &= h (\nabla u_h, \nabla v_h)_{\mathcal{T}_h}, \qquad & & s_h^1(p_h, q_h) = h (\nabla p_h, \nabla q_h)_{\mathcal{T}_h}. \\ \intertext{Second, to devise a higher order approximation scheme, the normal gradient stabilization} s_h^2(u_h, v_h) &= h (n_h \cdot \nabla u_h, n_h \cdot \nabla v_h)_{\mathcal{T}_h}, \qquad & &s_h^2(p_h, q_h) = h (n_h \cdot \nabla p_h, n_h \cdot \nabla q_h)_{\mathcal{T}_h} \end{alignat} first proposed and analyzed in~\cite{BurmanHansboLarsonEtAl2016} and then later also considered in~\cite{GrandeLehrenfeldReusken2016}, will be employed. In the remainder of this work, we simply write $S_h$ and $s_h$ without superscripts as long as no distinction between the stabilization forms is needed. \begin{remark} For the normal gradient stabilization, any $h$-scaling of the form $h^{\alpha - 1}$ with $\alpha \in [0,2]$ gives an eligible stabilization operator, as pointed out in~\cite{BurmanHansboLarsonEtAl2016}. The condition $\alpha \leqslant 2$ guarantees that the stabilization result of Lemma~\ref{lem:scaled-L2-norm-Nh-stab} for a properly scaled $L^2$ norm holds, while the condition $\alpha \geqslant 0$ ensures that the condition number of the discrete linear system scales with the mesh size similarly to the triangulated surface case. We refer to~\cite{BurmanHansboLarsonEtAl2016} for further details. \end{remark} \section{Preliminaries} \label{sec:preliminaries} To prepare the forthcoming analysis of the proposed cut finite element method in Section~\ref{sec:a-priori-est}, we here collect and state a number of useful definitions, approximation results and estimates. In particular, we introduce suitable continuous and discrete norms, review the construction of a proper interpolation operator and recall the fundamental geometric estimates needed to quantify the quadrature errors introduced by the discretization of $\Gamma$. \subsection{Discrete Norms and Poincar\'e Inequalities} \label{ssec:discrete-norms-poincare} The symmetric parts of the bilinear forms $A$ and $A_h$ give rise to the following natural continuous and discrete ``energy''-type norms \begin{align} |\mspace{-1mu}|\mspace{-1mu}| U |\mspace{-1mu}|\mspace{-1mu}|^2 = \| u \|_{\Gamma}^2 + \| \nabla p \|_{\Gamma}^2, \qquad |\mspace{-1mu}|\mspace{-1mu}| U_h |\mspace{-1mu}|\mspace{-1mu}|_h^2 = \| u_h \|_{{\Gamma_h}}^2 + \| \nabla p_h \|_{{\Gamma_h}}^2 + | U_h |_{S_h}^2, \label{eq:triple-norm-def} \end{align} where $|\cdot|_{S_h}$ denotes the semi-norm induced by the stabilization form $S_h$, \begin{align} | U_h |_{S_h}^2 = S_h(U_h, U_h).
\label{eq:sh-norm-def} \end{align} To show that $|\mspace{-1mu}|\mspace{-1mu}| \cdot |\mspace{-1mu}|\mspace{-1mu}|_h$ actually defines a proper norm, we recall the following result from~\cite{BurmanHansboLarsonEtAl2016}. \begin{lemma} \label{lem:scaled-L2-norm-Nh-stab} For $v \in X_h$, the following estimate holds \begin{align} h^{-1}\| v \|^2_{\mathcal{T}_h} & \lesssim \| v \|_{\Gamma_h}^2 + s_h^i(v,v) \quad \text{for } i = 1,2, \label{eq:discrete-poincare-Nh-normal-grad} \end{align} for $0<h \leq h_0$ with $h_0$ small enough. \end{lemma} \begin{remark} Simple counterexamples show that the sole expression $\| v_h \|_{{\Gamma_h}} + \| \nabla q_h \|_{{\Gamma_h}}$ defines only a semi-norm on $\mathcal{V}_h \times \mathcal{Q}_h$. For instance, let $ \Gamma = \{ \phi = 0 \}$ be defined as the $0$ level set of a smooth, scalar function $\phi$ such that $\nabla \phi \neq 0$ on $\Gamma$. Take $k_u = 1$, $k_p = 2$ and let ${\Gamma_h} = \{\phi_h = 0 \}$ where $\phi_h \in X_h^1$ is the Lagrange interpolant of $\phi$. Then $v_h = [\phi_h]^d \neq 0$ on $\mathcal{T}_h$ but clearly $\| v_h\|_{{\Gamma_h}} = 0$. Defining $q_h = \phi_h^2 \in \mathcal{Q}_h^2$ gives $\nabla q_h = 2 \phi_h \nabla \phi_h \neq 0$ but $\| \nabla q_h\|_{{\Gamma_h}} = 0$ in this particular case. \end{remark} Next, we state a simple surface-based discrete Poincar\'e estimate. For a proof we refer to~\cite{BurmanHansboLarson2015}. \begin{lemma} \label{lem:discrete-poincare-Gammah-stab} Let $v \in X_h$, then it holds \begin{align} \| v - \lambda_{{\Gamma_h}}(v)\|_{\Gamma} \lesssim \| \nabla_\Gammah v \|_{{\Gamma_h}} \label{eq:discrete-poincare-Gammah} \end{align} for $0 < h \leqslant h_0$ with $h_0$ chosen small enough. \end{lemma} Finally, the previous two lemmas can be combined to obtain the following discrete Poincar\'e inequality for the discrete ``energy'' norm $|\mspace{-1mu}|\mspace{-1mu}| V |\mspace{-1mu}|\mspace{-1mu}|_h$. \begin{theorem} \label{thm:discrete-poincare-Nh-stab} For $(v,q) = V \in \mathbb{V}_h$, the following estimate holds \begin{align} h^{-1} \left( \| v \|^2_{\mathcal{T}_h} + \| q - \lambda_{\Gamma_h}(q) \|^2_{\mathcal{T}_h} \right) & \lesssim |\mspace{-1mu}|\mspace{-1mu}| V |\mspace{-1mu}|\mspace{-1mu}|_h^2 \label{eq:discrete-poincare-Nh} \end{align} for $0<h \leq h_0$ with $h_0$ small enough. \end{theorem} \begin{figure} \caption{$L^2$ control mechanisms for the full gradient and normal gradient stabilization.\label{fig:condition_number-example}} \end{figure} \subsection{Trace Estimates and Inverse Inequalities} \label{ssec:trace-inverse-est} First, we recall the following trace inequality for $v \in H^1(\mathcal{T}_h)$ \begin{equation} \| v \|_{\partial T} \lesssim h^{-1/2} \|v \|_{T} + h^{1/2} \|\nabla v\|_{T} \quad \forall\, T \in \mathcal{T}_h, \label{eq:trace-inequality} \end{equation} while for the intersection $\Gamma \cap T$ the corresponding inequality \begin{align} \| v \|_{\Gamma \cap T} \lesssim h^{-1/2} \| v \|_{T} + h^{1/2} \|\nabla v\|_{T} \quad \forall\, T \in \mathcal{T}_h \label{eq:trace-inequality-cut-faces} \end{align} holds whenever $h$ is small enough, see \cite{HansboHansboLarson2003} for a proof.
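The scaling in~\eqref{eq:trace-inequality} can be made plausible by a simple dimensional count, which we add here as an informal check rather than a proof: for the constant function $v \equiv 1$ on a shape regular simplex $T$ of diameter $h$ one has $\| v \|_{\partial T}^2 \sim h^{d-1}$ and $\| v \|_{T}^2 \sim h^{d}$, so that $\| v \|_{\partial T} \sim h^{-1/2} \| v \|_{T}$, while the term $h^{1/2} \| \nabla v \|_{T}$ accounts for the non-constant part of $v$.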
In the following, we will also need some well-known inverse estimates for $v_h \in V_h$: \begin{gather} \label{eq:inverse-estimate-grad} \| \nabla v_h\|_{T} \lesssim h^{-1} \| v_h\|_{T} \quad \forall\, T \in \mathcal{T}_h, \\ \| v_h\|_{\partial T} \lesssim h^{-1/2} \| v_h\|_{T}, \qquad \| \nabla v_h\|_{\partial T} \lesssim h^{-1/2} \| \nabla v_h\|_{T} \quad \forall\, T \in \mathcal{T}_h, \label{eq:inverse-estimates-boundary} \end{gather} and the following ``cut versions'' when $K \cap T \not \subseteq \partial T$ \begin{alignat}{5} \|v_h \|_{K \cap T} &\lesssim h^{-1/2} \|v_h\|_{T}, & & \qquad \| \nabla v_h \|_{K \cap T} &\lesssim h^{-1/2} \|\nabla v_h\|_{T} & &\quad \forall\, K \in \mathcal{K}_h, \; \forall\, T \in \mathcal{T}_h, \label{eq:inverse-estimate-cut-v-on-K} \end{alignat} which are an immediate consequence of similar inverse estimates presented in~\cite{HansboHansboLarson2003}. \subsection{Geometric Estimates} We now summarize some standard geometric identities and estimates which typically are used in the numerical analysis of surface PDE discretizations when passing from the discrete surface to the continuous one and vice versa. For a detailed derivation, we refer to \cite{Dziuk1988,OlshanskiiReuskenGrande2009,Demlow2009,DziukElliott2013,BurmanHansboLarsonEtAl2016a}. Starting with the Hessian of the signed distance function \begin{align} \mathcal{H} = \nabla \otimes \nabla \rho \quad \text{on } U_{\delta_0}(\Gamma), \end{align} the derivative of the closest point projection and of an extended function $v^e$ is given by \begin{gather} Dp = {P_{\Gamma}} (I - \rho \mathcal{H}) = {P_{\Gamma}} - \rho \mathcal{H}, \label{eq:derivative-closest-point-projection} \\ Dv^e = D(v \circ p) = Dv Dp = Dv P_{\Gamma}(I - \rho \mathcal{H}). \label{eq:derivative-extended-function} \end{gather} The self-adjointness of ${P_{\Gamma}}$, ${P_{\Gamma}}h$, and $\mathcal{H}$, and the fact that $ {P_{\Gamma}} \mathcal{H} = \mathcal{H} = \mathcal{H} {P_{\Gamma}}$ and ${P_{\Gamma}}^2 = {P_{\Gamma}}$ then leads to the identities \begin{align} \nabla v^e &= {P_{\Gamma}}(I - \rho \mathcal{H}) \nabla v = {P_{\Gamma}}(I - \rho \mathcal{H}) \nabla_\Gamma v, \label{eq:ve-full gradient} \\ \nabla_\Gammah v^e &= {P_{\Gamma}}h(I - \rho \mathcal{H}){P_{\Gamma}} \nabla v = B^{T} \nabla_\Gamma v, \label{eq:ve-tangential-gradient} \end{align} where the invertible linear mapping \begin{align} B = P_{\Gamma}(I - \rho \mathcal{H}) P_{\Gamma_h}: T_x({\Gamma_h}) \to T_{p(x)}(\Gamma) \label{eq:B-def} \end{align} maps the tangential space of $\Gamma_h$ at $x$ to the tangential space of $\Gamma$ at $p(x)$. Setting $v = w^l$ and using the identity $(w^l)^e = w$, we immediately get that \begin{align} \nabla_\Gamma w^l = B^{-T} \nabla_\Gammah w \end{align} for any elementwise differentiable function $w$ on $\Gamma_h$ lifted to $\Gamma$. We recall from \cite[Lemma 14.7]{GilbargTrudinger2001} that for $x\in U_{\delta_0}(\Gamma)$, the Hessian $\mathcal{H}$ admits a representation \begin{equation}\label{Hform} \mathcal{H}(x) = \sum_{i=1}^k \frac{\kappa_i^e}{1 + \rho(x)\kappa_i^e}a_i^e \otimes a_i^e, \end{equation} where $\kappa_i$ are the principal curvatures with corresponding principal curvature vectors $a_i$. Thus \begin{equation} \|\mathcal{H}\|_{L^\infty(U_{\delta_0}(\Gamma))} \lesssim 1 \label{eq:Hesse-bound} \end{equation} for $\delta_0 > 0$ small enough. 
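For instance, as a worked example added purely for illustration, for the unit sphere one has $\rho(x) = |x| - 1$ and all principal curvatures equal to one, so that~\eqref{Hform} reduces to \begin{align} \mathcal{H}(x) = \frac{1}{|x|} \Bigl( I - \frac{x \otimes x}{|x|^2} \Bigr), \end{align} and hence $\|\mathcal{H}\|_{L^\infty(U_{\delta_0}(\Gamma))} \leqslant (1 - \delta_0)^{-1}$ for any $\delta_0 < 1$, consistent with~\eqref{eq:Hesse-bound}.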
In the course of the a priori analysis in Section~\ref{sec:a-priori-est}, we will need to estimate various operator compositions involving $B$, the continuous and discrete tangential and normal projection operators. More precisely, using the definition ${Q_{\Gamma}}h := I - {P_{\Gamma}}h = n_h \otimes n_h$, the following bounds will be employed at several occasions. \begin{lemma} \label{lem:composed-operator-bounds} \begin{alignat}{5} \| {P_{\Gamma}} - {P_{\Gamma}} {P_{\Gamma}}h {P_{\Gamma}} \|_{L^\infty(\Gamma)} & \lesssim h^{2 k_g}, \qquad & & \| {Q_{\Gamma}}h {P_{\Gamma}}\|_{L^\infty(\Gamma)} &\lesssim h^{k_g}, \qquad & & \|{P_{\Gamma}} {Q_{\Gamma}}h \|_{L^\infty({\Gamma_h})} &\lesssim h^{k_g}, \label{eq:projector-bounds} \\ \| B \|_{L^\infty(\Gamma_h)} &\lesssim 1, \qquad & & \| B^{-1} \|_{L^\infty(\Gamma)} & \lesssim 1, \qquad & & \| P_\Gamma - B B^T \|_{L^\infty(\Gamma)} &\lesssim h^{k_g+1}. \label{eq:BBTbound} \end{alignat} \end{lemma} \begin{proof} All these estimate have been proved earlier, see \cite{Dziuk1988,DziukElliott2013,BurmanHansboLarsonEtAl2016c} and we only include a short proof for the reader's convenience. We start with the bounds summarized in~\eqref{eq:projector-bounds}. An easy calculation shows that ${P_{\Gamma}} - {P_{\Gamma}} {P_{\Gamma}}h {P_{\Gamma}} = {P_{\Gamma}} ({P_{\Gamma}} - {P_{\Gamma}}h)^2 {P_{\Gamma}}$ from which the desired bound follows by observing that $ {P_{\Gamma}} - {P_{\Gamma}}h = (n - n_h) \otimes n + n_h \otimes (n - n_h) $ and thus $ \| ({P_{\Gamma}} - {P_{\Gamma}}h)^2 \|_{L^{\infty}(\Gamma_h)} \lesssim \| n - n_h \|_{L^{\infty}(\Gamma_h)}^2 \lesssim h^{2 k_g}. $ Next, observe that \begin{align} \| {Q_{\Gamma}}h {P_{\Gamma}} \|_{L^{\infty}(\Gamma)} &= \| n_h \otimes n_h - (n_h,n)_{\mathbb{R}^d} n_h \otimes n \|_{L^{\infty}(\Gamma)} \\ &= \|(1 - (n_h,n)_{\mathbb{R}^d}) n_h \otimes n_h \|_{L^{\infty}(\Gamma)} + \| (n_h,n)_{\mathbb{R}^d} n_h \otimes (n_h - n) \|_{L^{\infty}(\Gamma)} \\ &\lesssim h^{2 k_g} + h^{k_g}. \end{align} Turning to~\eqref{eq:BBTbound}, the first two bounds follow directly from~\eqref{eq:B-def} and ~\eqref{eq:Hesse-bound} together with the assumption $\|\rho\|_{L^{\infty}(\Gamma_h)} \lesssim h^{k_g +1}$. Finally, unwinding the definition of $B$, we find that $ {P_{\Gamma}} - B B^T = {P_{\Gamma}} - {P_{\Gamma}} {P_{\Gamma}}h {P_{\Gamma}} + O(h^{k_g+1}), $ which together with the previously derived estimate for ${P_{\Gamma}} - {P_{\Gamma}} {P_{\Gamma}}h {P_{\Gamma}}$ gives the stated operator bound. \end{proof} The previous lemma allows us to quantify the error introduced by using the full gradient in~\eqref{eq:Ah-def} instead of $\nabla_\Gammah$. To do so we decompose the full gradient as $\nabla = \nabla_\Gammah + {Q_{\Gamma}}h \nabla$ with ${Q_{\Gamma}}h = I - {P_{\Gamma}}h = n_h \otimes n_h$. We then have \begin{corollary} \label{cor:normal-grad-est} For $v \in H^1(\Gamma)$ and $w \in V_h$ it holds \begin{align} \| {Q_{\Gamma}}h \nabla v^e \|_{\Gamma_h} \lesssim h^{k_g} \| \nabla_\Gamma v \|_{\Gamma}, \qquad \| {P_{\Gamma}} {Q_{\Gamma}}h \nabla w \|_{\Gamma_h} \lesssim h^{k_g} \| \nabla w \|_{\Gamma_h}. \label{eq:normal-grad-est} \end{align} \end{corollary} \begin{proof} Since $\| {Q_{\Gamma}}h {P_{\Gamma}} \|_{L^{\infty}(\Gamma_h)} \lesssim h^{k_g}$, the first estimate follows directly from the identity $\nabla v^e$ = $ {P_{\Gamma}}(I - \rho \mathcal{H})\nabla_\Gamma v$ from~\eqref{eq:ve-full gradient}, while the second estimate is a immediate consequence of~\eqref{eq:projector-bounds}. 
\end{proof} Next, for a subset $\omega\subset {\Gamma_h}$, we have the change of variables formula \begin{equation} \int_{\omega^l} g^l d\Gamma = \int_{\omega} g |B|d\Gamma_h \end{equation} with $|B|$ denoting the absolute value of the determinant of $B$. The determinant $|B|$ satisfies the following estimates. \begin{lemma} It holds \label{lem:detBbounds} \begin{alignat}{5} \| 1 - |B| \|_{L^\infty(\mathcal{K}_h)} &\lesssim h^{k_g+1}, & &\qquad \||B|\|_{L^\infty(\mathcal{K}_h)} &\lesssim 1, & &\qquad \||B|^{-1}\|_{L^\infty(\mathcal{K}_h)} &\lesssim 1. \label{eq:detBbound} \end{alignat} \end{lemma} \noindent Combining the various estimates for the norm and the determinant of $B$ shows that for $m = 0,1$ \begin{alignat}{3} \| v \|_{H^{m}(\mathcal{K}_h^l)} &\sim \| v^e \|_{H^{m}(\mathcal{K}_h)} & &\quad \text{for } v \in H^m(\mathcal{K}_h^l), \label{eq:norm-equivalences-ve} \\ \| w^l \|_{H^{m}(\mathcal{K}_h^l)} &\sim \| w \|_{H^{m}(\mathcal{K}_h)} & &\quad \text{for } w \in V_h. \label{eq:norm-equivalences-wh} \end{alignat} Next, we observe that thanks to the coarea-formula (cf. \citet{EvansGariepy1992}) \begin{align*} \int_{U_{\delta}} f(x) \,dx = \int_{-\delta}^{\delta} \left(\int_{\Gamma(r)} f(y,r) \, \mathrm{d} \Gamma_r(y)\right)\,\mathrm{d}r, \end{align*} the extension operator $v^e$ defines a bounded operator $H^m(\Gamma) \ni v \mapsto v^e \in H^m(U_{\delta}(\Gamma))$ satisfying the stability estimate \begin{align} \| v^e \|_{k,U_{\delta}(\Gamma)} \lesssim \delta^{1/2} \| v \|_{k,\Gamma}, \qquad 0 \leqslant k \leqslant m \label{eq:stability-estimate-for-extension} \end{align} for $0 < \delta \leqslant \delta_0$, where the hidden constant depends only on the curvature of $\Gamma$. \subsection{Interpolation Operator} Next, we recall from~\cite{ErnGuermond2004} that for $v \in H^{k+1}(N_h)$, the Cl\'ement interpolant $\pi_h:L^2(\mathcal{T}_h) \rightarrow X_h^k$ satisfies the local interpolation estimates \begin{alignat}{3} \| v - \pi_h v \|_{m,T} & \lesssim h^{k + 1 -m}| v |_{k+1,\omega(T)}, & &\quad 0\leqslant m \leqslant k+1, \quad &\forall\, T\in \mathcal{T}_h, \label{eq:interpest0} \end{alignat} where $\omega(T)$ consists of all elements sharing a vertex with $T$. Now with the help of the extension operator $(\mathrm{cd}ot)^e$, an interpolation operator $\pi_h: L^2(\Gamma) \to X^k_h$ can be constructed by setting $\pi_h v = \pi_h v^e$, where we took the liberty of using the same symbol. The resulting interpolation operator satisfies the following error estimate. \begin{lemma} \label{lem:interpolenergy} For $V = (v,p) \in [H^{k_u}(\Gamma)]^d \times H^{k_p+1}(\Gamma)$ and $k_u, k_p \geqslant 1$, the interpolant defined by $\Pi_h V^e = (\pi_h v^e, \pi_h q^e) \in \mathcal{V}_h^{k_u} \times \mathcal{Q}_h^{k_p}$ satisfies the interpolation estimate \begin{align} \label{eq:interpolenergy} |\mspace{-1mu}|\mspace{-1mu}| V^e - \Pi_h V^e |\mspace{-1mu}|\mspace{-1mu}|_h \lesssim h^{k_u} \| v \|_{k_u, \Gamma} + h^{k_p} \| q \|_{k_p+1, \Gamma}. \end{align} \end{lemma} \begin{proof} Choosing $\delta_0 \sim h$, it follows directly from combining the trace inequality~\eqref{eq:trace-inequality-cut-faces}, the interpolation estimate~\eqref{eq:interpest0}, and the stability estimate~\eqref{eq:stability-estimate-for-extension} that the first two terms in the definition of $ |\mspace{-1mu}|\mspace{-1mu}| V^e - \Pi_h V^e |\mspace{-1mu}|\mspace{-1mu}|_h^2 = \| v^e - \pi_h v^e \|_{\Gamma}^2 + \| \nabla(p^e - \pi_h p^e) \|_{\Gamma}^2 + | V^e - \Pi_h V^e |_{S_h}^2 $ satisfies the desired estimate. 
Since $|\mathrm{cd}ot|_{S_h^2} \leqslant |\mathrm{cd}ot|_{S_h^1}$ it is enough to focus on the full gradient stabilization $S_h = S_h^1$ for the remaining part. With the same chain of estimates we find that \begin{align} | V^e - \Pi_h V^e |_{S_h}^2 &= h \bigl( \| \nabla(v^e - \pi_h v^e) \|_{\mathcal{T}_h}^2 + \| \nabla(p^e - \pi_h p^e) \|_{\mathcal{T}_h}^2 \bigr) \\ & \lesssim h^{2k_u-1} \| v^e \|_{k_u, \mathcal{T}_h}^2 + h^{2k_p+1} \| q^e \|_{k_p+1, \mathcal{T}_h}^2 \\ & \lesssim h^{2k_u} \| v \|_{k_u, \Gamma}^2 + h^{2k_p+2} \| q \|_{k_p+1, \Gamma}^2 \end{align} which concludes the proof. \end{proof} \section{A Priori Error Estimates} \label{sec:a-priori-est} We now state and prove the main a priori error estimates for the stabilized cut finite element method~(\ref{eq:darcy-probl-cutfem}). The proofs rest upon a Strang-type lemma splitting the total error into an interpolation error, a consistency error arising from the additional stabilization term $S_h$ and finally, a geometric error caused by the discretization of the surface. We start with establishing suitable estimates for the consistency and quadrature error before we present the final a priori error estimates at the end of this section. \subsection{Estimates for the Quadrature and Consistency Error} \label{ssec:quadr-error-estim} The purpose of the next lemma is two-fold. First, it shows that the full gradient stabilization will not affect the expected convergence order when low-order elements are used. Second, it demonstrates that only the normal gradient stabilization is suitable for high order discretizations where the geometric approximation order $k_g$ needs to satisfy $k_g > 1$. \begin{lemma} Let $U = (u,p) \in [H^1(\Gamma)]^d_t \times H^1(\Gamma)$. Then it holds \label{lem:Sh-const-est} \begin{align} |U^e|_{S_h^1} &\lesssim h ( \| \nabla_\Gamma u \|_{\Gamma} + \| \nabla_\Gamma p \|_{\Gamma} ), \\ |U^e|_{S_h^2} & \lesssim h^{k_g + 1} ( \| \nabla_\Gamma u \|_{\Gamma} + \| \nabla_\Gamma p \|_{\Gamma} ). \label{eq:Sh1-const-est} \end{align} \end{lemma} \begin{proof} A simple application of stability estimate~\eqref{eq:stability-estimate-for-extension} with $\delta \sim h$ shows that for $S_h^1$, \begin{align} S_h^1(U^e, U^e) = h \|\nabla u^e\|_{\mathcal{T}_h}^2 + h \|\nabla p^e\|_{\mathcal{T}_h}^2 \lesssim h^2 (\| u \|_{1,\Gamma}^2 + \| p \|_{1,\Gamma}^2). \end{align} Turning to $S_h^2$, the pressure part of the normal gradient stabilization can be estimated by \begin{align} s_p(p^e, p^e) = h \| {Q_{\Gamma}}h \nabla p^e \|_{\mathcal{T}_h}^2 = h \| ({Q_{\Gamma}}h - {Q_{\Gamma}}) \nabla p^e \|_{\mathcal{T}_h}^2 \lesssim h^{2k_g + 1} \| \nabla p^e \|_{\mathcal{T}_h}^2 \lesssim h^{2 k_g + 2} \| \nabla_\Gamma p \|_{\Gamma}^2, \end{align} and similarly, $|u^e|_{s_h^2} \lesssim h^{k_g + 1} \| \nabla_\Gamma u \|_{\Gamma}$ for $u \in [H^1(\Gamma)]^d$. \end{proof} \begin{lemma} Let $U = (u,p) \in [L^2(\Gamma)]_t^d \times H^1(\Gamma)/\mathbb{R} $ be the solution to weak problem~(\ref{eq:darcy-problem-weak-tang}) and assume that $V \in \mathbb{V}_h$. Then \begin{align} | L(V^l) - L_h(V)| + |A(U, V^l) - A_h(U^e, V_h)| &\lesssim h^{k_g} (\|f\|_{\Gamma} + \|g\|_{\Gamma}) |\mspace{-1mu}|\mspace{-1mu}| V |\mspace{-1mu}|\mspace{-1mu}|_h. 
\label{eq:Lh-quad-est-primal} \end{align} Furthermore, if $\Phi = (\phi_u, \phi_p) \in [H^1(\Gamma)]^d_t \times H^2(\Gamma)/\mathbb{R}$ and $\Phi_h := \Pi_h \Phi = (\pi_h \phi_u, \pi_h \phi_p)$, we have the improved estimate \begin{align} | L(\Phi_h^l) - L_h(\Phi_h)| + |A(U, \Phi_h^l) - A_h(U^e, \Phi_h)| &\lesssim h^{k_g+1} (\|f\|_{\Gamma} + \|g\|_{\Gamma}) ( \| \phi_u \|_{1,\Gamma} + \| \phi_p \|_{2,\Gamma}). \label{eq:Lh-quad-est-dual} \end{align} \end{lemma} \begin{proof} We start with the term $L(\mathrm{cd}ot) - L_h(\mathrm{cd}ot)$. Unwinding the definition of the linear forms $L$ and $L_h$, we get \begin{align} L(V^l) - L_h(V) &= \Bigl( (f, q^l)_{\Gamma} - (f^e, q)_{\Gamma_h} + \onehalf \left( (g, v^l)_{\Gamma} - (g^e, v)_{\Gamma_h} \right) \Bigr) \\ &\quad + \onehalf \left( (g, \nabla q^l)_{\Gamma} - (g^e, \nabla q)_{\Gamma_h} \right) = I + II. \end{align} For the first term, a change of variables together with estimate~\eqref{eq:detBbound} for the determinant $|B|$ yields \begin{align} I &= (f, (1 - |B|^{-1}) q^l)_{\Gamma} + \onehalf (g, (1 - |B|^{-1}) v^l)_{\Gamma} \\ &\lesssim h^{k_g +1} \left( \| f \|_{\Gamma} + \|g\|_{\Gamma} \right) (\| q^l \|_{\Gamma} + \| v^l \|_{\Gamma}) \\ &\lesssim h^{k_g + 1} \left( \| f \|_{\Gamma} + \|g\|_{\Gamma} \right) |\mspace{-1mu}|\mspace{-1mu}| V |\mspace{-1mu}|\mspace{-1mu}|_h, \end{align} where in the last step, we used the norm equivalences~(\ref{eq:norm-equivalences-wh}) and the discrete Poincar\'e inequality~(\ref{eq:discrete-poincare-Gammah}) to pass to $|\mspace{-1mu}|\mspace{-1mu}| V_h |\mspace{-1mu}|\mspace{-1mu}|_h$. To estimate $II$, we split $\nabla q$ into its tangential and normal part \begin{align} \nabla q = \nabla_\Gammah q + {Q_{\Gamma}}h \nabla q. \end{align} Note that for the tangential field $g$, the identities \begin{align} (g, \nabla q^l)_{\Gamma} = (g, \nabla_\Gamma q^l)_{\Gamma}, \quad (g^e, \nabla q)_{\Gamma_h} = ({P_{\Gamma}} g^e, \nabla q)_{{\Gamma_h}} \label{eq:tangential-g-identities} \end{align} hold and thus using ${P_{\Gamma}} g = g$ once more and the fact that ${P_{\Gamma}}^T = {P_{\Gamma}}$ allows us to rewrite $II$ as \begin{align} 2 II &= (g, \nabla_\Gamma q^l)_{\Gamma} - (g^e, \nabla_\Gammah q)_{\Gamma_h} - (g^e, {Q_{\Gamma}}h \nabla q)_{\Gamma_h} \\ &= (g, ({P_{\Gamma}} - |B|^{-1}B^T) \nabla_\Gamma q^l)_{\Gamma} - ({P_{\Gamma}} g^e,{Q_{\Gamma}}h \nabla q)_{{\Gamma_h}} \\ &= (g, {P_{\Gamma}} ({P_{\Gamma}} - |B|^{-1}B^T) \nabla_\Gamma q^l)_{\Gamma} + (g^e,{P_{\Gamma}}{Q_{\Gamma}}h \nabla q)_{{\Gamma_h}} \\ &= II_t + II_n. \end{align} Unwinding the definition of $B$ given in~(\ref{eq:B-def}) together with the estimates for the determinant $|B|$ from Lemma~\ref{lem:detBbounds} reveals that \begin{align} {P_{\Gamma}} ({P_{\Gamma}} - |B|^{-1}B^T) &= {P_{\Gamma}} ({P_{\Gamma}} - B^T) + {P_{\Gamma}}(|B|^{-1} - 1)B^T \\ &\sim {P_{\Gamma}} ({P_{\Gamma}} - {P_{\Gamma}}h(I - \rho \mathcal{H}){P_{\Gamma}}) + h^{k_g + 1} \\&\sim {P_{\Gamma}} - {P_{\Gamma}}{P_{\Gamma}}h{P_{\Gamma}} + h^{k_g + 1}. 
\end{align} Consequently, using the bounds for ${P_{\Gamma}} - {P_{\Gamma}} {P_{\Gamma}}h {P_{\Gamma}}$ and ${P_{\Gamma}}{Q_{\Gamma}}h$ from Lemma~\ref{lem:composed-operator-bounds}, we deduce that \begin{align} II_t & \lesssim h^{k_g + 1} \| g \|_{\Gamma} \| \nabla_\Gamma q^l \|_{\Gamma} \lesssim h^{k_g + 1} \| g \|_{\Gamma} |\mspace{-1mu}|\mspace{-1mu}| V |\mspace{-1mu}|\mspace{-1mu}|_h, \\ II_n & \lesssim h^{k_g} \| g \|_{\Gamma} \| \nabla q \|_{{\Gamma_h}} \lesssim h^{k_g} \| g \|_{\Gamma} |\mspace{-1mu}|\mspace{-1mu}| V |\mspace{-1mu}|\mspace{-1mu}|_h. \end{align} In the special case $q = \pi_h \phi_p^e$, the bound for $II_n$ can be further improved to \begin{align} II_n &= (g^e,{P_{\Gamma}}{Q_{\Gamma}}h {P_{\Gamma}} \nabla \phi_p^e)_{{\Gamma_h}} + (g^e,{P_{\Gamma}}{Q_{\Gamma}}h \nabla ( \pi_h \phi_p^e - \phi_p^e))_{{\Gamma_h}}, \label{eq:Lhdual_IIn-1} \\ &\lesssim h^{k_g + 1} \| g \|_{\Gamma} \| \nabla \phi_p \|_{\Gamma} + h^{k_g + 1} \| g \|_{\Gamma} \| \phi_p \|_{2,\Gamma}, \label{eq:Lhdual_IIn-2} \end{align} where we once more employed the identity $\nabla \phi_p^e = {P_{\Gamma}} \nabla\phi_p^e$, the estimates~(\ref{eq:projector-bounds}) for the operators ${P_{\Gamma}} {Q_{\Gamma}}h {P_{\Gamma}} $ and ${P_{\Gamma}}{Q_{\Gamma}}h$ and finally, the interpolation estimate~(\ref{eq:interpolenergy}). Turning to the term $A(U, \mathrm{cd}ot) - A(U^e, \mathrm{cd}ot)$ in~(\ref{eq:Lh-quad-est-primal}) and~(\ref{eq:Lh-quad-est-dual}) and recalling the definition of bilinear forms $A$ and $A_h$, we rearrange terms to obtain \begin{align} 2 \left( A(U, V^l) - A_h(U^e, V) \right) &= \left( (u, v^l)_{\Gamma} - (u^e, v)_{\Gamma_h} \right) + \left( (\nabla p, v^l)_{\Gamma} - (\nabla p^e, v)_{{\Gamma_h}} \right) \\ &\quad - \left( (u, \nabla q^l)_{\Gamma} - (u^e, \nabla q)_{{\Gamma_h}} \right) + \left( (\nabla p, \nabla q^l)_{\Gamma} -(\nabla p^e, \nabla q)_{\Gamma_h} \right) \\ &= I + II - III + IV. \end{align} To estimate the term $I$--$IV$, we proceed along the same lines as in the previous part. As before, the first term can be bounded as follows \begin{align} I &= (u, (1 - |B|^{-1}) v^l)_{\Gamma} \lesssim h^{k_g + 1} \| u \|_{\Gamma} \| v^l \|_{\Gamma} \lesssim h^{k_g + 1} \| u \|_{\Gamma} |\mspace{-1mu}|\mspace{-1mu}| V |\mspace{-1mu}|\mspace{-1mu}|_h. \end{align} For the remaining terms, the appearance of the full gradient necessitates a similar split into its normal and tangential part as before, followed by a lifting of the tangential part and the use of the operator estimates~(\ref{eq:projector-bounds}) and~(\ref{eq:BBTbound}). Recall that $\nabla p = \nabla_\Gamma p^e$ and consequently, \begin{align} II &= (\nabla_\Gamma p, v^l)_{\Gamma} - (\nabla_\Gammah p^e, v)_{{\Gamma_h}} - ({Q_{\Gamma}}h \nabla p^e, v)_{{\Gamma_h}} \\ &= (({P_{\Gamma}} - |B|^{-1}B^T) \nabla_\Gamma p, v^l)_{\Gamma} - ({Q_{\Gamma}}h {P_{\Gamma}} \nabla p^e, v)_{{\Gamma_h}} \\ &= II_t + II_n. 
\end{align} Now expand $B$ to see that ${P_{\Gamma}} - |B|^{-1}B^T \sim {P_{\Gamma}} - {P_{\Gamma}}h {P_{\Gamma}} + h^{k_g+1} \sim {Q_{\Gamma}}h {P_{\Gamma}} + h^{k_g + 1}$ and apply the operator bounds from Lemma~\ref{lem:composed-operator-bounds} to ${Q_{\Gamma}}h {P_{\Gamma}}$, followed by the norm equivalences~(\ref{eq:norm-equivalences-wh}) to arrive at the following estimates \begin{align} II_t &\lesssim |({Q_{\Gamma}}h {P_{\Gamma}} \nabla_\Gamma p, v^l)_{\Gamma}| + h^{k_g + 1} \| \nabla_\Gamma p \|_{\Gamma} |\mspace{-1mu}|\mspace{-1mu}| V |\mspace{-1mu}|\mspace{-1mu}|_h \lesssim (h^{k_g} + h^{k_g + 1}) \| \nabla_\Gamma p \|_{\Gamma} |\mspace{-1mu}|\mspace{-1mu}| V |\mspace{-1mu}|\mspace{-1mu}|_h, \label{eq:Ah-dual-IIt} \\ II_n & \lesssim h^{k_g} \| \nabla_\Gamma p \|_{\Gamma} |\mspace{-1mu}|\mspace{-1mu}| V |\mspace{-1mu}|\mspace{-1mu}|_h. \end{align} In the special case $v = \pi_h \phi_u$, exploiting that $\phi_u$ is a $H^1$ regular, tangential field and applying the proper operator and interpolation estimates, the bounds for $II_n$ can be improved to \begin{align} II_n &= ({Q_{\Gamma}}h {P_{\Gamma}} \nabla_\Gamma p^e, \pi_h \phi_u^e)_{{\Gamma_h}} \\ &= ( {P_{\Gamma}} {Q_{\Gamma}}h {P_{\Gamma}} \nabla_\Gamma p^e, \phi_u^e)_{{\Gamma_h}} + ({Q_{\Gamma}}h {P_{\Gamma}} \nabla_\Gamma p^e, \pi_h \phi_u^e - \phi_u^e)_{{\Gamma_h}} \\ &\lesssim h^{k_g + 1} \| \nabla_\Gamma p \|_{\Gamma} \|\phi_u \|_{\Gamma} + h^{k_g} \| \nabla_\Gamma p \|_{\Gamma} \|\pi_h \phi_u^e - \phi_u^e\|_{\Gamma} \lesssim h^{k_g + 1} \| \nabla_\Gamma p \|_{\Gamma} \|\phi_u \|_{1,\Gamma}, \end{align} and similarly for $II_t$, the improvement of first term in~\eqref{eq:Ah-dual-IIt} gives \begin{align} II_t &\lesssim h^{k_g + 1} \| \nabla_\Gamma p \|_{\Gamma} \|\phi_u \|_{1,\Gamma}. \end{align} Turning to the third term, we rewrite $III$ as \begin{align} III &= ({P_{\Gamma}} u, \nabla_\Gamma q^l)_{\Gamma} - ({P_{\Gamma}} u^e, \nabla_\Gammah q)_{{\Gamma_h}} - ({P_{\Gamma}} u_e, {Q_{\Gamma}}h \nabla q)_{{\Gamma_h}} \\ &= ({P_{\Gamma}} u, ({P_{\Gamma}} - |B|^{-1}B^T) \nabla_\Gamma q^e)_{{\Gamma_h}} - (u^e, {P_{\Gamma}} {Q_{\Gamma}}h \nabla q)_{{\Gamma_h}} = III_t + III_n. \end{align} Using ${P_{\Gamma}} ({P_{\Gamma}} - |B|^{-1}B^T) \sim {P_{\Gamma}} - {P_{\Gamma}} {P_{\Gamma}}h {P_{\Gamma}} + h^{k_g+1}$ and applying the operator bounds~(\ref{eq:projector-bounds}) yields \begin{align} III_t &\lesssim h^{k_g+1} \| u \|_{\Gamma} \| \nabla_\Gamma q^l \|_{\Gamma} \lesssim h^{k_g+1} \| u \|_{\Gamma} |\mspace{-1mu}|\mspace{-1mu}| V |\mspace{-1mu}|\mspace{-1mu}|_h, \\ III_n &\lesssim h^{k_g} \| u^e \|_{{\Gamma_h}} \| \nabla q \|_{{\Gamma_h}} \lesssim h^{k_g} \| u \|_{\Gamma} |\mspace{-1mu}|\mspace{-1mu}| V |\mspace{-1mu}|\mspace{-1mu}|_h. \end{align} Following precisely steps~(\ref{eq:Lhdual_IIn-1})--(\ref{eq:Lhdual_IIn-2}), the term $III_n$ can be improved if $q = \pi_h \phi_p$, showing that \begin{align} III_t &\lesssim h^{k_g + 1} \| u \|_{\Gamma} \| \phi_p \|_{2,\Gamma}. 
\end{align} Finally, starting from the fact that $\nabla p = \nabla_\Gamma p$, similar steps lead to the following bound for $IV$ \begin{align} IV &= (\nabla_\Gamma p, \nabla_\Gamma q^l)_{\Gamma} - (\nabla_{\Gamma_h} p^e, \nabla_{\Gamma_h} q)_{{\Gamma_h}} - (Q_{\Gamma_h} \nabla p^e, Q_{\Gamma_h} \nabla q)_{{\Gamma_h}} \\ &= (({P_{\Gamma}} - |B|^{-1}B B^T) \nabla_\Gamma p, \nabla q^l)_{\Gamma} - (Q_{\Gamma_h} \nabla p^e, Q_{\Gamma_h} \nabla q)_{{\Gamma_h}} = IV_t + IV_n, \\ \intertext{and as before, thanks to~(\ref{eq:BBTbound}), (\ref{eq:normal-grad-est}) and the interpolation estimate~(\ref{eq:interpolenergy}), we see that } IV_t &\lesssim h^{k_g +1} \|\nabla_\Gamma p \|_{\Gamma} \| \nabla_\Gamma q \|_{{\Gamma_h}} \lesssim h^{k_g +1} \|\nabla_\Gamma p \|_{\Gamma} |\mspace{-1mu}|\mspace{-1mu}| V |\mspace{-1mu}|\mspace{-1mu}|_h, \\ IV_n &\lesssim h^{k_g} \|\nabla_\Gamma p \|_{\Gamma} \| \nabla q \|_{{\Gamma_h}} \lesssim h^{k_g} \|\nabla_\Gamma p \|_{\Gamma} |\mspace{-1mu}|\mspace{-1mu}| V |\mspace{-1mu}|\mspace{-1mu}|_h, \\ IV_n &\lesssim h^{k_g + 1} \| \nabla_\Gamma p \|_{\Gamma} \| \phi_p \|_{2,\Gamma}, \end{align} assuming $q = \pi_h \phi_p$ in the last case. Collecting the estimates for $I$--$IV$ and using the stability estimate $|\mspace{-1mu}|\mspace{-1mu}| U |\mspace{-1mu}|\mspace{-1mu}| \lesssim (\|f\|_{\Gamma} + \|g\|_{\Gamma})$ concludes the proof. \end{proof} \subsection{A Priori Error Estimates} \label{ssec:priori-error-estim} We start by establishing an a priori estimate for the error measured in the natural ``energy'' norm. \begin{theorem} \label{thm:priori-error-estim} Let $U = (u,p)$ be the solution to the continuous problem~(\ref{eq:darcy-problem-cont}). Assume that $(u,p) \in [H^{k_u+1}(\Gamma)]^d_t \times H^{k_p+1}(\Gamma)$ and that the geometric assumptions~(\ref{eq:geometric-assumptions-II}) hold. Then for the full gradient stabilized form $B_h = A_h + S_h^1$, the solution $U_h = (u_h, p_h) \in \mathcal{V}_h^k \times \mathcal{Q}_h^l$ to the discrete problem~(\ref{eq:darcy-probl-cutfem}) satisfies the a priori estimate \begin{align} |\mspace{-1mu}|\mspace{-1mu}| U^e - U_h |\mspace{-1mu}|\mspace{-1mu}|_h \lesssim h^{k_u +1} \| u\|_{k_u+1,\Gamma} + h^{k_p} \| p\|_{k_p+1,\Gamma} + h^{k_g} (\| f \|_{\Gamma} + \| g\|_{\Gamma}) + h (\|u\|_{1,\Gamma} + \|p\|_{1,\Gamma}). \label{eq:aprior-est-energy-sh1} \end{align} If the normal gradient stabilization $S_h = S_h^2$ is employed instead, the discretization error satisfies the improved estimate \begin{align} |\mspace{-1mu}|\mspace{-1mu}| U^e - U_h |\mspace{-1mu}|\mspace{-1mu}|_h \lesssim h^{k_u +1} \| u\|_{k_u+1,\Gamma} + h^{k_p} \| p\|_{k_p+1,\Gamma} + h^{k_g} (\| f \|_{\Gamma} + \| g\|_{\Gamma}) + h^{k_g+1} (\|u\|_{1,\Gamma} + \|p\|_{1,\Gamma}). \label{eq:aprior-est-energy-sh2} \end{align} \end{theorem} \begin{proof} We start by considering the ``discrete error'' $E_h = U_h - V_h$. Observe that \begin{align} |\mspace{-1mu}|\mspace{-1mu}| E_h |\mspace{-1mu}|\mspace{-1mu}|_h^2 & = B_h(U_h - V_h,E_h) \\ & = L_h(E_h) - B_h(U^e, E_h) + B_h(U^e - V_h,E_h) \\ & \lesssim \biggl( \sup_{V_h \in \mathbb{V}_h} \dfrac{ L_h(V_h) - B_h(U^e,V_h) }{ |\mspace{-1mu}|\mspace{-1mu}| V_h |\mspace{-1mu}|\mspace{-1mu}|_h } + |\mspace{-1mu}|\mspace{-1mu}| U^e - V_h |\mspace{-1mu}|\mspace{-1mu}|_h \biggr) |\mspace{-1mu}|\mspace{-1mu}| E_h |\mspace{-1mu}|\mspace{-1mu}|_h.
\label{eq:strang-energy-lemma-step-3} \end{align} Dividing by $|\mspace{-1mu}|\mspace{-1mu}| E_h |\mspace{-1mu}|\mspace{-1mu}|_h$ and applying the identity \begin{align} L_h(V_h) - B_h(U^e, V_h) &= \bigl( L_h(V_h) - L(V_h^l) \bigr) + \bigl(A(U,V_h^l) - A_h(U^e, V_h) \bigr)- S_h(U^e, V_h) \end{align} gives, together with the triangle inequality $|\mspace{-1mu}|\mspace{-1mu}| U^e - U_h |\mspace{-1mu}|\mspace{-1mu}|_h \leqslant |\mspace{-1mu}|\mspace{-1mu}| U^e - V_h |\mspace{-1mu}|\mspace{-1mu}|_h + |\mspace{-1mu}|\mspace{-1mu}| E_h |\mspace{-1mu}|\mspace{-1mu}|_h$, the following Strang-type estimate for the energy error, \begin{align} |\mspace{-1mu}|\mspace{-1mu}| U^e - U_h |\mspace{-1mu}|\mspace{-1mu}|_h &\lesssim \inf_{V_h \in \mathbb{V}_h} |\mspace{-1mu}|\mspace{-1mu}| U^e - V_h |\mspace{-1mu}|\mspace{-1mu}|_h + \sup_{V_h \in \mathbb{V}_h} \dfrac{ L_h(V_h) - B_h(U^e,V_h) }{ |\mspace{-1mu}|\mspace{-1mu}| V_h |\mspace{-1mu}|\mspace{-1mu}|_h } \label{eq:strang-energy-1} \\ &\lesssim \inf_{V_h \in \mathbb{V}_h} |\mspace{-1mu}|\mspace{-1mu}| U^e - V_h |\mspace{-1mu}|\mspace{-1mu}|_h + \sup_{V_h \in \mathbb{V}_h} \dfrac{ L_h(V_h) - L(V_h^l) }{ |\mspace{-1mu}|\mspace{-1mu}| V_h |\mspace{-1mu}|\mspace{-1mu}|_h } + \sup_{V_h \in \mathbb{V}_h} \dfrac{ A(U,V_h^l) - A_h(U^e,V_h) }{ |\mspace{-1mu}|\mspace{-1mu}| V_h |\mspace{-1mu}|\mspace{-1mu}|_h } \nonumber \\ &\phantom{\leqslant} + \sup_{V_h \in \mathbb{V}_h} \dfrac{ S_h(U^e,V_h) }{ |\mspace{-1mu}|\mspace{-1mu}| V_h |\mspace{-1mu}|\mspace{-1mu}|_h }. \label{eq:strang-energy-2} \end{align} Estimates~(\ref{eq:aprior-est-energy-sh1}) and~(\ref{eq:aprior-est-energy-sh2}) now follow directly from inserting the interpolation estimate~(\ref{eq:interpolenergy}), the quadrature error estimate~\eqref{eq:Lh-quad-est-primal} and, depending on the choice of $S_h$, the proper consistency error estimate from Lemma~\ref{lem:Sh-const-est} into~\eqref{eq:strang-energy-2}. \end{proof} Next, we provide bounds for the $L^2$ error of the pressure approximation as well as the $H^{-1}$ error of the tangential component of the velocity approximation. \begin{theorem} \label{thm:priori-error-estim-dual} Under the same assumptions as in Theorem~\ref{thm:priori-error-estim}, the following a priori error estimate holds \begin{align} \| p - p_h^l \|_{\Gamma} + \| {P_{\Gamma}} (u - u_h^l) \|_{-1,\Gamma} &\lesssim h C_U, \end{align} with $C_U$ denoting the error bound provided by Theorem~\ref{thm:priori-error-estim}. \end{theorem} \begin{proof} The proof uses a standard Aubin-Nitsche duality argument employing the dual problem \begin{subequations} \label{eq:dual-problem} \begin{alignat}{2} -\Div_{\Gamma} \phi_u &= \psi_p &\quad \text{on } \Gamma, \\ \phi_u - \nabla_\Gamma \phi_p &= \psi_u &\quad \text{on $\Gamma$}, \end{alignat} \end{subequations} with $(\psi_u, \psi_p) \in [H^1(\Gamma)]^d_t \times L^2_0(\Gamma)$. For the error representation to be derived it is sufficient to consider $(\psi_u, \psi_p)$ such that $ \| \psi_u \|_{1,\Gamma} + \| \psi_p \|_{\Gamma} \lesssim 1$. Thanks to the regularity result~(\ref{eq:ellreg}), the solution $(\phi_u, \phi_p) \in [H^1(\Gamma)]^d_t \times H^2(\Gamma) \cap L^2_0(\Gamma)$ then satisfies the stability estimate \begin{align} \| \phi_u \|_{1,\Gamma} + \| \phi_p \|_{2,\Gamma} \lesssim 1. \label{eq:stability-dual-problem} \end{align} Set $E = U - U_h^l$ and insert the dual solution $\Phi$ as test function into $A(E,\cdot)$.
Then adding and subtracting suitable terms leads us to \begin{align} A(E,\Phi) &= A(E,\Phi - \Phi_h^l) + A(E, \Phi_h^l) \label{eq:ep-error-repres-I} \\ &= A(E,\Phi - \Phi_h^l) + L(\Phi_h^l) - A(U_h^l, \Phi_h^l) \\ &= A(E,\Phi - \Phi_h^l) + \left( L(\Phi_h^l) - L_h(\Phi_h) \right)+ \left( A_h(U_h, \Phi_h) - A(U_h^l, \Phi_h^l) \right) \\ &\quad + S_h(U_h, \Phi_h) \\ &= I + II + III + IV. \label{eq:ep-error-repres-final} \end{align} where in the last step, we employed the identity $B_h(U_h, \Phi_h) - L_h(\Phi_h) = 0$. Interpolation estimate~(\ref{eq:interpolenergy}) together with stability estimate~(\ref{eq:stability-dual-problem}) implies that \begin{align} I &\lesssim |\mspace{-1mu}|\mspace{-1mu}| E |\mspace{-1mu}|\mspace{-1mu}| |\mspace{-1mu}|\mspace{-1mu}| \Phi - \Phi_h^l |\mspace{-1mu}|\mspace{-1mu}| \lesssim h |\mspace{-1mu}|\mspace{-1mu}| E |\mspace{-1mu}|\mspace{-1mu}|_h ( \| \phi_u \|_{1,\Gamma} + \| \phi_p \|_{2,\Gamma} ) \lesssim h |\mspace{-1mu}|\mspace{-1mu}| E |\mspace{-1mu}|\mspace{-1mu}|_h. \end{align} Next, the improved quadrature error estimates~(\ref{eq:Lh-quad-est-dual}) and the stability bound~(\ref{eq:stability-dual-problem}) imply that \begin{align} II + III & \lesssim h^{k_g+1} (\|f\|_{\Gamma} + \|g\|_{\Gamma}) (\| \phi_u \|_{1,\Gamma} + \|\phi_p\|_{2,\Gamma}) \lesssim h^{k_g+1} (\|f\|_{\Gamma} + \|g\|_{\Gamma}). \end{align} Finally, after adding and subtracting $U^e$ and $\Phi^e$, the consistency error can be bounded by \begin{align} IV &= S_h(U_h - U^e, \Phi_h - \Phi^e) + S_h(U_h - U^e, \Phi^e) + S_h(U^e, \Phi_h - \Phi^e) + S_h(U^e, \Phi^e) \\ &\lesssim |\mspace{-1mu}|\mspace{-1mu}| U_h - U^e |\mspace{-1mu}|\mspace{-1mu}|_h |\mspace{-1mu}|\mspace{-1mu}| \Phi_h - \Phi^e |\mspace{-1mu}|\mspace{-1mu}|_h + |\mspace{-1mu}|\mspace{-1mu}| U_h - U^e |\mspace{-1mu}|\mspace{-1mu}|_h | \Phi^e|_{S_h} + | U^e |_{S_h} |\mspace{-1mu}|\mspace{-1mu}| \Phi_h - \Phi^e |\mspace{-1mu}|\mspace{-1mu}|_h + | \Phi^e|_{S_h} | U^e |_{S_h} \\ &\lesssim h C_U, \end{align} where in the last step, the energy error estimate from Theorem~\ref{thm:priori-error-estim}, the interpolation estimate~(\ref{eq:interpolenergy}), the consistency error estimates from Lemma~\ref{lem:Sh-const-est} and the stability bound~(\ref{eq:stability-dual-problem}) were successively applied. Collecting the estimates for term $I$--$IV$ shows that \begin{align} |A(E,\Phi)| \lesssim h C_U. \label{eq:Ah-with-Phi-bound} \end{align} Next, using the shorthand notation $E = (e_u, e_p) = (u-u_h^l, p-p_h^l)$, we exploit the properties of the dual problem to derive an error representation for $\|e_p\|_{\Gamma}$ and $\|{P_{\Gamma}} e_u \|_{-1,\Gamma}$ in terms of $A(E, \Phi)$ to establish the desired bounds using~\eqref{eq:Ah-with-Phi-bound}. Since \mbox{$\lambda_{\Gamma_h}(p_h) = 0$} but not necessarily $\lambda_{\Gamma}(p_h^l)$, we first decompose the pressure error $e_p$ into a normalized part $\widetilde{e}_p$ satisfying $\lambda_{\Gamma}(\widetilde{e}_p) = 0$ and a constant part $\overline{e}_p$, \begin{align} e_p = p - p_h^l = \underbrace{p - (p_h^l - \lambda_{\Gamma}(p_h^ l))}_{\widetilde{e}_p} + \underbrace{\lambda_{\Gamma}(p_h^ l) - \lambda_{{\Gamma_h}}(p_h) }_{\overline{e}_p}. 
\end{align} Then the properties of the dual solution $\Phi$ together with the observations that $\phi_u = {P_{\Gamma}}\phi_u$, $\nabla \phi_p = \nabla_\Gamma \phi_p$ and $\nabla e_p = \nabla \widetilde{e}_p$ lead us to the identity \begin{align} A(E,\Phi) &= (e_u, \phi_u)_{\Gamma} - (e_u, \nabla \phi_p)_{\Gamma} + (\nabla \widetilde{e}_p, \phi_u)_{\Gamma} + \onehalf (e_u + \nabla e_p, -\phi_u + \nabla \phi_p)_{\Gamma} \\ &= (e_u, \psi_u)_{\Gamma} - (\widetilde{e}_p, \Div_{\Gamma} \phi_u)_{\Gamma} - \onehalf (e_u + \nabla e_p, \psi_u)_{\Gamma} \\ &= \onehalf (e_u, \psi_u)_{\Gamma} + (\widetilde{e}_p, \psi_p)_{\Gamma} + \onehalf (e_p, \Div_{\Gamma} \psi_u)_{\Gamma}. \label{eq:dual-error-repres} \end{align} Thus, choosing $\psi_u = 0$ and $\psi_p \in L^2_0(\Gamma)$, the normalized pressure error can be bounded as follows: \begin{align} \| \widetilde{e}_p \|_{\Gamma} &= \sup_{\psi_p \in L^2_0(\Gamma), \|\psi_p\|_{\Gamma} = 1} (e_p, \psi_p)_{\Gamma} = \sup_{\psi_p \in L^2_0(\Gamma), \|\psi_p\|_{\Gamma} = 1} A(E,\Phi(0,\psi_p)) \lesssim h C_U. \end{align} Turning to the constant error part $\overline{e}_p$ and unwinding the definition of the average operators $\lambda_{{\Gamma_h}}(\cdot)$ and $\lambda_{\Gamma}(\cdot)$ yields \begin{align} \|\overline{e}_p\|_{\Gamma} = |\Gamma|^{\onehalf} \left| \dfrac{1}{|\Gamma|} \int_{\Gamma} p_h^l \,\mathrm{d} \Gamma - \dfrac{1}{|\Gamma_h|} \int_{\Gamma_h} p_h \,\mathrm{d} \Gamma_h \right| \lesssim \dfrac{|\Gamma_h|^{\onehalf}}{|\Gamma_h|} \int_{\Gamma_h} |1-c| |p_h| \,\mathrm{d} \Gamma_h, \label{eq:error-of-average-est} \end{align} with $c = |\Gamma_h||\Gamma|^{-1} |B|$. We note that $\| 1 - c \|_{L^\infty(\Gamma)} \lesssim h^{k_g+1}$ thanks to~\eqref{eq:detBbound}. Consequently, after successively applying a Cauchy-Schwarz inequality, the Poincar\'e inequality~(\ref{eq:discrete-poincare-Gammah}) and the stability bound $\| \nabla_{\Gamma_h} p_h \|_{{\Gamma_h}} \lesssim \|f \|_{\Gamma} + \|g \|_{\Gamma}$, we arrive at \begin{align} \|\overline{e}_p\|_{\Gamma} \lesssim h^{k_g+1} \| p_h\|_\Gamma \lesssim h^{k_g+1} \| \nabla p_h\|_{{\Gamma_h}} \lesssim h^{k_g+1} (\|f \|_{\Gamma} + \|g \|_{\Gamma}), \end{align} which concludes the derivation of the desired estimate for $\| e_p \|_{\Gamma}$. Finally, to estimate $\|{P_{\Gamma}}(u-u_h^l)\|_{-1,\Gamma}$, we let $\Phi$ be the solution to the dual problem~(\ref{eq:dual-problem}) for right-hand side data $(\psi_u, 0)$ with $\psi_u \in [H^1(\Gamma)]^d_t$. Inserting $\Phi$ into~\eqref{eq:dual-error-repres} gives \begin{align} |(e_u, \psi_u)_{\Gamma}| \lesssim |A(E, \Phi)| + \|e_p\|_{\Gamma} \|\psi_u\|_{1,\Gamma}, \end{align} and consequently, the general bound~(\ref{eq:Ah-with-Phi-bound}) for $A(E,\Phi)$ together with the bound for the $L^2$ error of the pressure allows us to derive the final estimate for $e_u$, \begin{align} \| {P_{\Gamma}} e_u \|_{-1, \Gamma} &= \sup_{\psi_u \in [H^1(\Gamma)]^d_t, \|\psi_u\|_{1,\Gamma} = 1} (e_u, \psi_u)_{\Gamma} \lesssim |A(E, \Phi)| + \|e_p\|_{\Gamma} \lesssim h C_U. \end{align} \end{proof} \section{Numerical Results} \label{sec:numerical-results} To numerically examine the rate of convergence predicted by the a priori error estimates derived in Section~\ref{ssec:priori-error-estim}, we now perform a series of convergence studies.
Following the numerical example presented in~\cite{HansboLarson2016}, we consider the Darcy problem posed on the torus surface $\Gamma$ defined by \begin{align} \Gamma = \{ x \in \mathbb{R}^3 : r^2 = x_3^2 + (\sqrt{x_1^2 + x_2^2} -R)^2 \}, \label{eq:torus-levelset} \end{align} with major radius $R = 1.0$ and minor radius $r = 0.5$, and define a manufactured solution $(u,p)$ by \begin{align} u_t = \Bigl( 2xz, -2yz, 2(x^2 - y^2)(R- \sqrt{x^2 + y^2})/\sqrt{x^2 + y^2} \Bigr), \quad u_n = 0, \quad p = z, \end{align} which satisfies the Darcy problem~(\ref{eq:darcy-problem-cont}) with right-hand sides $f = \Div_{\Gamma} u = 0$ and \begin{align} g = \begin{pmatrix} xz(2 - (1 - \tfrac{R}{\sqrt{x^2 + y^2}})/A) \\ yz(-2 - (1 - \tfrac{R}{\sqrt{x^2 + y^2}})/A) \\ 1 - \tfrac{2(x^2 - y^2)(\sqrt{x^2 + y^2} -R)}{\sqrt{x^2 + y^2}} - z^2/A \end{pmatrix}, \quad \text{with } A = (R^2 + x^2 + y^2 - 2R\sqrt{x^2 + y^2} + z^2). \end{align} A sequence of meshes $\{\mathcal{T}_k\}_{k=0}^l$ with uniform mesh sizes $h_k = 2^{-k} h_0$ with $h_0 \approx 0.24$ is generated by uniformly refining an initial, structured background mesh $\widetilde{\mathcal{T}}_0$ for $\Omega = [-1.65,1.65]^3 \supset \Gamma$ and extracting at each refinement level $k$ the active (background) mesh as defined by \eqref{eq:narrow-band-mesh}. \begin{figure} \caption{Plots of the velocity (top) and pressure (bottom) approximations computed for $(k_u, k_p, k_g) = (1,2,2)$ on the finest refinement level. Each plot shows the solution as computed on the active mesh.} \label{fig:solution-plots} \end{figure} For a given error norm, the corresponding experimental order of convergence (EOC) at refinement level $k$ is calculated using the formula \begin{align*} \text{EOC}(k) = \dfrac{\log(E_{k-1}/E_{k})}{\log(2)}, \end{align*} with $E_k$ denoting the error of the computed discrete velocity $u_k$ or pressure $p_k$ at refinement level $k$. To study the combined effect of choosing various approximation orders $k_u$, $k_p$ and $k_g$ and stabilization forms $S_h^i$ on the overall approximation quality of the discrete solution, we conduct convergence experiments for 6 different scenarios. For each scenario, we compute the $L^2$ norm of the velocity error $e_u = u^e - u_h$ as well as the $H^1$ and $L^2$ norms of the pressure error $e_p = p^e - p_h$, which are displayed in Table~\ref{tab:eoc-tables}. A short summary of the cases considered and the theoretically expected convergence rates are given in Table~\ref{tab:eoc-cases}. The computed EOC data in Table~\ref{tab:eoc-tables} clearly confirms the predicted convergence rates. In particular, we observe that increasing the pressure approximation to $k_p = 2$ increases the convergence order for all considered error norms by one only if both a high-order approximation $k_g = 2$ of ${\Gamma_h}$ and the higher-order consistent normal stabilization $S_h^2$ are used. Finally, the discrete solution components computed for $(k_u, k_p, k_g) = (1,2,2)$ at the finest refinement level are visualized in Figure~\ref{fig:solution-plots}.
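The EOC values reported below can be recomputed from the raw error numbers with a few lines of code. The following Python sketch is a minimal illustration on our part (it is not the post-processing script used for the actual computations) and applies the formula above to the $L^2$ velocity errors of Case~6 from Table~\ref{tab:eoc-tables}.
\begin{verbatim}
import math

def eoc(errors):
    # Experimental orders of convergence for errors obtained on meshes
    # with h_k = 2^{-k} h_0: EOC(k) = log(E_{k-1}/E_k) / log(2).
    rates = [None]
    for e_prev, e_curr in zip(errors, errors[1:]):
        rates.append(math.log(e_prev / e_curr) / math.log(2.0))
    return rates

# L2 velocity errors of Case 6, transcribed from the table below.
errors_case6_u = [1.55e0, 2.94e-1, 4.73e-2, 8.64e-3]
print(["--" if r is None else f"{r:.2f}" for r in eoc(errors_case6_u)])
# expected output: ['--', '2.40', '2.63', '2.45']
\end{verbatim}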
\begin{table}[htb]
\centering \footnotesize
\begin{tabular}{c c c c c c c c} \toprule
Case & $k_u$ & $k_p$ & $k_g$ & $S_h$ & $\| e_u \|_{{\Gamma_h}}$ & $\| e_p \|_{1,{\Gamma_h}}$ & $\| e_p \|_{{\Gamma_h}}$ \\ \midrule
1 & 1 & 1 & 1 & $S_h^1$ & 1 & 1 & 2 \\
2 & 1 & 1 & 1 & $S_h^2$ & 1 & 1 & 2 \\
3 & 1 & 2 & 1 & $S_h^1$ & 1 & 1 & 2 \\
4 & 1 & 2 & 1 & $S_h^2$ & 1 & 1 & 2 \\
5 & 1 & 2 & 2 & $S_h^1$ & 1 & 1 & 2 \\
6 & 1 & 2 & 2 & $S_h^2$ & 2 & 2 & 3 \\ \bottomrule
\end{tabular}
\caption{Summary of the 6 cases considered in the convergence experiments and the corresponding theoretical convergence rates predicted by Theorem~\ref{thm:priori-error-estim} and Theorem~\ref{thm:priori-error-estim-dual}.}
\label{tab:eoc-cases}
\end{table}
\begin{table}[htb]
\centering
\begin{subtable}[t]{1.0\textwidth} \centering
\begin{tabular}{ccccccc} \toprule
$k$ & $\|u_k - u^e\|_{\Gamma_h}$ & EOC & $\|p_k - p^e\|_{1,\Gamma_h}$ & EOC & $\|p_k - p^e\|_{\Gamma_h}$ & EOC \\ \midrule
0 & $9.71 \cdot 10^{-1}$ & -- & $1.10 \cdot 10^{0}$ & -- & $1.69 \cdot 10^{-1}$ & -- \\
1 & $3.06 \cdot 10^{-1}$ & 1.67 & $6.46 \cdot 10^{-1}$ & 0.77 & $5.29 \cdot 10^{-2}$ & 1.68 \\
2 & $9.04 \cdot 10^{-2}$ & 1.76 & $3.10 \cdot 10^{-1}$ & 1.06 & $1.18 \cdot 10^{-2}$ & 2.16 \\
3 & $3.08 \cdot 10^{-2}$ & 1.55 & $1.53 \cdot 10^{-1}$ & 1.02 & $2.80 \cdot 10^{-3}$ & 2.08 \\
4 & $1.30 \cdot 10^{-2}$ & 1.25 & $7.72 \cdot 10^{-2}$ & 0.99 & $6.86 \cdot 10^{-4}$ & 2.03 \\ \bottomrule
\end{tabular} \\[1ex]
\caption{Case 1: $(k_u, k_p, k_g) = (1,1,1)$ and full gradient stabilization $S_h = S_h^1$.}
\end{subtable}
\begin{subtable}[t]{1.0\textwidth} \centering
\begin{tabular}{ccccccc} \toprule
$k$ & $\|u_k - u^e\|_{\Gamma_h}$ & EOC & $\|p_k - p^e\|_{1,\Gamma_h}$ & EOC & $\|p_k - p^e\|_{\Gamma_h}$ & EOC \\ \midrule
0 & $4.70 \cdot 10^{-1}$ & -- & $1.28 \cdot 10^{0}$ & -- & $7.31 \cdot 10^{-2}$ & -- \\
1 & $1.43 \cdot 10^{-1}$ & 1.72 & $6.45 \cdot 10^{-1}$ & 0.99 & $2.08 \cdot 10^{-2}$ & 1.82 \\
2 & $4.75 \cdot 10^{-2}$ & 1.58 & $3.14 \cdot 10^{-1}$ & 1.04 & $4.84 \cdot 10^{-3}$ & 2.10 \\
3 & $2.06 \cdot 10^{-2}$ & 1.21 & $1.57 \cdot 10^{-1}$ & 0.99 & $1.19 \cdot 10^{-3}$ & 2.02 \\
4 & $9.92 \cdot 10^{-3}$ & 1.05 & $7.80 \cdot 10^{-2}$ & 1.01 & $2.82 \cdot 10^{-4}$ & 2.08 \\ \bottomrule
\end{tabular} \\[1ex]
\caption{Case 2: $(k_u, k_p, k_g) = (1,1,1)$ and normal gradient stabilization $S_h = S_h^2$.}
\end{subtable}
\begin{subtable}[t]{1.0\textwidth} \centering
\begin{tabular}{ccccccc} \toprule
$k$ & $\|u_k - u^e\|_{\Gamma_h}$ & EOC & $\|p_k - p^e\|_{1,\Gamma_h}$ & EOC & $\|p_k - p^e\|_{\Gamma_h}$ & EOC \\ \midrule
0 & $4.69 \cdot 10^{-1}$ & -- & $1.30 \cdot 10^{0}$ & -- & $7.42 \cdot 10^{-2}$ & -- \\
1 & $1.43 \cdot 10^{-1}$ & 1.72 & $6.46 \cdot 10^{-1}$ & 1.01 & $2.08 \cdot 10^{-2}$ & 1.83 \\
2 & $4.75 \cdot 10^{-2}$ & 1.58 & $3.14 \cdot 10^{-1}$ & 1.04 & $4.85 \cdot 10^{-3}$ & 2.10 \\
3 & $2.06 \cdot 10^{-2}$ & 1.21 & $1.57 \cdot 10^{-1}$ & 0.99 & $1.19 \cdot 10^{-3}$ & 2.02 \\
4 & $9.92 \cdot 10^{-3}$ & 1.05 & $7.80 \cdot 10^{-2}$ & 1.01 & $2.82 \cdot 10^{-4}$ & 2.08 \\ \bottomrule
\end{tabular} \\[1ex]
\caption{Case 3: $(k_u, k_p, k_g) = (1,2,1)$ and full gradient stabilization $S_h = S_h^1$.}
\end{subtable}
\begin{subtable}[t]{1.0\textwidth} \centering
\begin{tabular}{ccccccc} \toprule
$k$ & $\|u_k - u^e\|_{\Gamma_h}$ & EOC & $\|p_k - p^e\|_{1,\Gamma_h}$ & EOC & $\|p_k - p^e\|_{\Gamma_h}$ & EOC \\ \midrule
0 & $4.69 \cdot 10^{-1}$ & -- & $1.30 \cdot 10^{0}$ & -- & $7.42 \cdot 10^{-2}$ & -- \\
1 & $1.43 \cdot 10^{-1}$ & 1.72 & $6.46 \cdot 10^{-1}$ & 1.01 & $2.08 \cdot 10^{-2}$ & 1.83 \\
2 & $4.75 \cdot 10^{-2}$ & 1.58 & $3.14 \cdot 10^{-1}$ & 1.04 & $4.85 \cdot 10^{-3}$ & 2.10 \\
3 & $2.06 \cdot 10^{-2}$ & 1.21 & $1.57 \cdot 10^{-1}$ & 0.99 & $1.19 \cdot 10^{-3}$ & 2.02 \\ \bottomrule
\end{tabular} \\[1ex]
\caption{Case 4: $(k_u, k_p, k_g) = (1,2,1)$ and normal gradient stabilization $S_h = S_h^2$.}
\end{subtable}
\begin{subtable}[t]{1.0\textwidth} \centering
\begin{tabular}{ccccccc} \toprule
$k$ & $\|u_k - u^e\|_{\Gamma_h}$ & EOC & $\|p_k - p^e\|_{1,\Gamma_h}$ & EOC & $\|p_k - p^e\|_{\Gamma_h}$ & EOC \\ \midrule
0 & $3.61 \cdot 10^{0}$ & -- & $1.56 \cdot 10^{0}$ & -- & $3.58 \cdot 10^{-1}$ & -- \\
1 & $2.47 \cdot 10^{0}$ & 0.55 & $5.03 \cdot 10^{-1}$ & 1.63 & $1.44 \cdot 10^{-1}$ & 1.31 \\
2 & $9.04 \cdot 10^{-1}$ & 1.45 & $1.34 \cdot 10^{-1}$ & 1.91 & $3.40 \cdot 10^{-2}$ & 2.09 \\
3 & $2.65 \cdot 10^{-1}$ & 1.77 & $4.09 \cdot 10^{-2}$ & 1.71 & $8.94 \cdot 10^{-3}$ & 1.93 \\ \bottomrule
\end{tabular} \\[1ex]
\caption{Case 5: $(k_u, k_p, k_g) = (1,2,2)$ and full gradient stabilization $S_h = S_h^1$.}
\end{subtable}
\begin{subtable}[t]{1.0\textwidth} \centering
\begin{tabular}{ccccccc} \toprule
$k$ & $\|u_k - u^e\|_{\Gamma_h}$ & EOC & $\|p_k - p^e\|_{1,\Gamma_h}$ & EOC & $\|p_k - p^e\|_{\Gamma_h}$ & EOC \\ \midrule
0 & $1.55 \cdot 10^{0}$ & -- & $1.77 \cdot 10^{0}$ & -- & $1.71 \cdot 10^{-1}$ & -- \\
1 & $2.94 \cdot 10^{-1}$ & 2.40 & $4.12 \cdot 10^{-1}$ & 2.10 & $1.72 \cdot 10^{-2}$ & 3.31 \\
2 & $4.73 \cdot 10^{-2}$ & 2.63 & $9.63 \cdot 10^{-2}$ & 2.10 & $1.49 \cdot 10^{-3}$ & 3.53 \\
3 & $8.64 \cdot 10^{-3}$ & 2.45 & $2.33 \cdot 10^{-2}$ & 2.05 & $1.15 \cdot 10^{-4}$ & 3.69 \\ \bottomrule
\end{tabular} \\[1ex]
\caption{Case 6: $(k_u, k_p, k_g) = (1,2,2)$ and normal gradient stabilization $S_h = S_h^2$.}
\end{subtable}
\caption{Experimental order of convergence for all 6 cases, computed with a stabilization parameter $\tau = 0.1$.}
\label{tab:eoc-tables}
\end{table}
\end{document}
\begin{document} \title{The dichotomy property of ${\rm SL}_2(R)$-A short note} \author{Alexander A. Trost} \address{Fakult\"{a}t f\"{u}r Mathematik, Ruhr Universit\"{a}t Bochum, D-44780 Bochum, Germany} \email{[email protected]} \begin{abstract} A recent paper by Polterovich, Shalom and Shem-Tov has shown that non-discrete, conjugation invariant norms on arithmetic Chevalley groups of higher rank give rise to very restricted topologies. Namely, such topologies always have profinite norm-completions. In this note, we sketch an argument showing that this also holds for ${\rm SL}_2(R)$ for $R$ a ring of algebraic integers with infinitely many units. \end{abstract} \maketitle \section{Introduction} \label{intro} The preprint \cite{polterovich2021norm} by Polterovich, Shalom and Shem-Tov studies (among other things) the topologies induced by conjugation-invariant norms on ${\rm SL}_n(R)$ for $n\geq 3$ and rings $R$ for which ${\rm SL}_n(R)$ has a certain finiteness property, namely that there is a natural number $L(R,n)$ such that each element of ${\rm SL}_n(R)$ can be written as a product of at most $L(R,n)$ elementary matrices. This property is called \textit{bounded elementary generation.} They show in this case that the following theorem holds: \begin{theorem}\cite[Theorem~1.8]{polterovich2021norm}\label{old_theorem} Let $R$ be an integral domain, $n\geq 3$ and assume that ${\rm SL}_n(R)$ is boundedly elementary generated and that each non-zero ideal of $R$ has finite index. Then each conjugation-invariant norm on a finite index subgroup of ${\rm SL}_n(R)$ is either discrete or has a profinite norm-completion. \end{theorem} A group satisfying the conclusion of the theorem is said to satisfy the \textit{dichotomy property}. In any case, there are quite many rings of interest that satisfy the assumption of the theorem, for example rings of integers in global and local fields. The assumption of $n\geq 3$ in Theorem~\ref{old_theorem} is also quite common in the study of conjugation-invariant norms on Chevalley groups. This is mostly due to the fact that the normal subgroup structure of ${\rm SL}_2(R)$ is more complicated than the one of ${\rm SL}_n(R)$ for $n\geq 3.$ Namely, the normal subgroups of the latter are essentially parametrized by ideals of the ring $R$ (or more precisely so-called \textit{admissible pairs of ideals} as shown by Abe \cite{MR991973}), whereas normal subgroups of ${\rm SL}_2(R)$ are given by more complicated subgroups of $(R,+)$ called \textit{radices} as proven by Costa and Keller \cite{MR1114610}. However, as seen in our preprint \cite{SL_2_strong_bound}, it is often possible to generalize results of the study of conjugation invariant norms from $n\geq 3$ to $n=2$ by introducing additional assumptions on the existence of a lot of units in the ring. In this context, \cite{polterovich2021norm} raises the following question: \begin{conjecture}\label{conj} Let $p$ be a prime in $\mathbb{Z}.$ Does ${\rm SL}_2(\mathbb{Z}[1/p])$ satisfy the dichotomy property? \end{conjecture} We will answer the question in Conjecture~\ref{conj} affirmatively with the following more general result: \begin{theorem}\label{main_theorem} Let $R$ be the ring of S-algebraic integers in a number field such that $R$ has infinitely many units. Then every finite index subgroup of ${\rm SL}_2(R)$ has the dichotomy property. \end{theorem} The proof is almost identical to the proof of \cite[Theorem~1.8]{polterovich2021norm} itself and only requires a small modification in a technical lemma. 
\section*{Acknowledgments} This note was written during a research visit at the Mathematische Forschungsinstitut Oberwolfach and I am very grateful for the support I received there. I also want to thank Zvi Shem-Tov for his helpful remarks. \section{Basic definitions and notions} \label{sec_basic_notions} \begin{definition} Let $G$ be a group with neutral element $1_G$. A \textit{conjugation-invariant norm} $\|\cdot\|:G\to\mathbb{R}_{\geq 0}$ is a function satisfying the properties \begin{align*} &\forall a\in G:\|a\|=0\iff a=1_G,\\ &\forall a\in G:\|a\|=\|a^{-1}\|,\\ &\forall a,b\in G:\|ab\|\leq\|a\|+\|b\|\text{ and}\\ &\forall a,b\in G:\|aba^{-1}\|=\|b\|. \end{align*} \end{definition} Further, we recall the following two concepts from \cite{polterovich2021norm}: \begin{definition} A group $G$ is said to satisfy the \textit{dichotomy property}, if each non-discrete conjugation-invariant norm $\|\cdot\|$ on $G$ has a profinite norm-completion. If $G$ is itself a topological group, then $G$ is called \textit{norm-complete}, if each non-discrete conjugation-invariant norm $\|\cdot\|$ on $G$ induces the topology of $G$ as a topological group. \end{definition} Let us next recall the definition of ${\rm SL}_2:$ \begin{definition} Let $R$ be a commutative ring with $1$. Then ${\rm SL}_2(R):=\{A=(a_{ij})\in R^{2\times 2}\mid a_{11}a_{22}-a_{12}a_{21}=1\}.$ \end{definition} Obviously, for any commutative ring $R$ and any $x\in R,$ the matrices \begin{equation*} E_{12}(x)= \begin{pmatrix} 1 & x\\ 0 & 1 \end{pmatrix} \text{ and } E_{21}(x)= \begin{pmatrix} 1 & 0\\ x & 1 \end{pmatrix} \end{equation*} are elements of ${\rm SL}_2(R).$ They are called \textit{elementary matrices} and we denote the set of elementary matrices $\{E_{12}(x),E_{21}(x)\mid x\in R\}$ by ${\rm EL}.$ The subgroup of ${\rm SL}_2(R)$ generated by ${\rm EL}$ is denoted by $E(2,R).$ Furthermore, for an ideal $I\unlhd R,$ we denote by $E(2,R,I)$ the normal subgroup of $E(2,R)$ generated by the $E(2,R)$-conjugates of elements of $\{E_{12}(x)\mid x\in I\}.$ Additionally, for an ideal $I\subset R$, the subgroups $E_{12}(I)$ and $E_{21}(I)$ of $E(2,R)$ are defined as $E_{12}(I):=\{E_{12}(x)\mid x\in I\}$ and $E_{21}(I):=\{E_{21}(x)\mid x\in I\}.$ Further, for a unit $u\in R^*$ the element \begin{equation*} h(u)= \begin{pmatrix} u & 0\\ 0 & u^{-1} \end{pmatrix} \end{equation*} is also an element of ${\rm SL}_2(R).$ \section{Proof of the main result} \label{proof_main} We first introduce the needed concept of $R$ containing a large number of units: \begin{definition}\label{many_units} Let $R$ be a commutative ring with $1$ such that for each $c\in R-\{0\},$ there is a unit $u\in R$ such that $u-1\in c^2R$ and $u^8-1\neq 0.$ Then we call $R$ a \textit{ring with many units.} \end{definition} \begin{remark} Note that this is similar to, but still slightly different from, the concept of rings with many units introduced in \cite{SL_2_strong_bound}. \end{remark} This enables us to formulate: \begin{theorem}\label{main_thm_tech_version} Let $R$ be an integral domain with many units such that $E(2,R)$ is boundedly generated by elementary matrices and such that each non-zero ideal of $R$ has finite index in $R$. Then each finite index subgroup $H$ of $E(2,R)$ has the dichotomy property. \end{theorem} To prove this, we need the following modified version of \cite[Lemma~2.3]{polterovich2021norm}: \begin{lemma}\label{tech_lemma_1} Let $R$ be as in Theorem~\ref{main_thm_tech_version}.
Let $\|\cdot\|$ further be the restriction of a non-discrete conjugation-invariant norm on $E(2,R,I)$ to the elementary subgroup $E_{12}(I).$ Then the norm completion $\overline{E_{12}(I)}$ of $E_{12}(I)$ with respect to $\|\cdot\|$ is profinite. \end{lemma} To prove this we need the following technical lemma, which in turn is a modified version of \cite[Lemma~A9]{polterovich2021norm}: \begin{lemma}\label{tech_lemma_2} Let $R$ be an integral domain, $c\in R$ non-zero and $u\in R$ a unit such that $u-1\in c^2R$. Assume further that \begin{equation*} A= \begin{pmatrix} a & b\\ c & d \end{pmatrix} \end{equation*} is an element of ${\rm SL}_2(R).$ Then each element of $E_{12}((u^8-1)cR)$ is a product of at most four $E(2,R,cR)$-conjugates of $A$ and $A^{-1}.$ \end{lemma} \begin{proof} As $u-1$ is an element of the ideal $cR$ by assumption, so is $u^4-1.$ Hence choose $x\in R$ with $u^4-1=cx$ and set $t:=ax.$ But note that as $u-1\in c^2R$, there is an $y\in R$ such that $u-1=c^2y$ holds. Hence \begin{equation*} cx=u^4-1=(u-1)\cdot(u^3+u^2+u+1)=c^2y\cdot(u^3+u^2+u+1) \end{equation*} and so $x=cy\cdot(u^3+u^2+u+1)\in cR.$ Thus $t$ is an element of $cR.$ Then the matrix $Y:=E_{12}(t)A^{-1}E_{12}(-t)h(u^2)Ah(u^{-2})$ is a product of two $E(2,R,cR)$-conjugates of $A$ and $A^{-1}$ and has the form \begin{equation*} Y= \begin{pmatrix} u^{-4} & q\\ 0 & u^4 \end{pmatrix} \end{equation*} for some $q\in cR.$ But then note for $p\in R$ that \begin{equation*} E_{12}((u^{-4}-u^4)(p+q))=Y\cdot \begin{pmatrix} u^4 & p\\ 0 & u^{-4} \end{pmatrix} \cdot Y^{-1} \cdot \begin{pmatrix} u^{-4} & -p\\ 0 & u^4 \end{pmatrix} \end{equation*} Hence choosing $p:=-q+z$ for $z\in cR$, we obtain that $E_{12}((u^4-u^{-4})z)$ is a product of four $E(2,R,cR)$-conjugates of $A$ and $A^{-1}.$ \end{proof} \begin{remark} Implicit in this proof is the claim that $h(u)$ is an element of $E(2,R,cR)$. We will not prove this, but it follows from a short calculation with the standard decomposition of $h(u)$ into elementary matrices. \end{remark} Using this lemma, we can prove Lemma~\ref{tech_lemma_1}: \begin{proof} For each ideal $J$ of $R$ contained in $I$ consider the closure $U_J$ of $E_{12}(J)$ in $\overline{E_{12}(I)}$. 
As a non-zero ideal $J$ in $R$ has finite index, this implies that the closed subgroup $U_J$ has finite index in $\overline{E_{12}(I)}.$ Thus $U_J$ is also open in $\overline{E_{12}(I)}.$ Hence to show that $\overline{E_{12}(I)}$ is profinite, it suffices to show that the identity in $\overline{E_{12}(I)}$ has a neighborhood basis of subgroups of the form $U_J.$ First, note that $\|\cdot\|$ is the restriction of a non-discrete norm on $E(2,R,I),$ which we will also denote by $\|\cdot\|.$ Thus for each $\epsilon>0,$ there is an $A\in E(2,R,I)$ with $\|A\|\leq\epsilon/4.$ As there are only finitely many scalar matrices in $E(2,R)$, we may by possibly choosing a smaller $\epsilon$ or by considering a suitable conjugate or commutator of $A$ assume that $A$ has the $(2,1)$-entry $c$ with $c\neq 0.$ But $R$ is a ring with many units, so we may choose a unit $u$ in $R$ with $u^8-1\neq 0$ and $u\equiv 1\text{ mod }c^2R.$ According to Lemma~\ref{tech_lemma_2}, the subgroup $E_{12}((u^8-1)cR)$ is then contained in the $4\cdot\|A\|$ ball around $E_2$ as $c$ is an element of $I.$ Hence $J_{\epsilon}:=(u^8-1)cR$ is contained in $I$ and $E_{12}(J_{\epsilon})$ is contained in the $\epsilon$-ball around $E_2.$ But this implies that $U_{J_{\epsilon}}$ is a subgroup of $\overline{E_{12}(I)}$ contained in the $\epsilon$-ball around $E_2.$ \end{proof} One can now prove Theorem~\ref{main_thm_tech_version} in essentially the same way as \cite[Theorem~1.8]{polterovich2021norm}: \begin{proof} We first prove the dichotomy property in the case of $H=E(2,R)$. So let $\|\cdot\|$ be a non-discrete, conjugation-invariant norm on $E(2,R)$ and let $G$ be the norm-completion of $E(2,R)$ with respect to $\|\cdot\|.$ Further, let $U_1$ and $U_2$ be the closures of $E_{12}(R)$ and $E_{21}(R)$ within $G$ respectively. Then by assumption the group $E(2,R)$ is boundedly generated by its elementary subgroups. Thus $G$ is boundedly generated by its subgroups $U_1,U_2$ and so there is a natural number $N(R)$ such that $G=U_{i_1}\cdot U_{i_2}\cdots U_{i_{N(R)}}$ holds for $i_1,\dots,i_{N(R)}\in\{1,2\}$. But according to Lemma~\ref{tech_lemma_1} the subgroups $U_1,U_2$ of $G$ are profinite. But it was observed in the proof of \cite[Theorem~1.8(i)]{polterovich2021norm}, that if a topological group $G$ is a set-theoretic product of finitely many subgroups profinite in the relative topology, then $G$ itself is profinite. This finishes the proof for $E_2(R)$. The general case works the same way as in \cite{polterovich2021norm} as well: Let $H$ be a finite index subgroup of $E(2,R)$. Then following precisely the line of arguments from \cite{polterovich2021norm} and the already shown case $H=E(2,R)$, one reduces the dichotomy proof for $H$ to the claim that for a non-zero ideal $I$ and a conjugation-invariant, non-discrete norm $\|\cdot\|$ on $E(2,R,I)$, said norm restricts to norms on $E_{12}(I)$ (and $E_{21}(I)$) having profinite norm-completions. But this is precisely the claim of Lemma~\ref{tech_lemma_1}. \end{proof} If $R$ is an integral domain containing a unit $v$ of infinite order and each of its non-zero ideals has finite index, then it has many units: Namely, let $c\in R-\{0\}$ be given. Assuming wlog that $c$ is not a unit in $R$, we note that $R/c^2R$ is finite. Then consider $\overline{v}:=v+c^2R$. 
This element must have finite order in $(R/c^2R)^*$ and so there is a $k>0$ such that $\overline{v}^k=1+c^2R.$ Note that $u:=v^k$ has infinite order and so $u^8$ can not be $1.$ Also $u-1\in c^2R$ by choice of $u.$ This argument applies for example for rings of S-algebraic integers with infinitely many units as they have units of infinite order according to Dirichlet's Theorem \cite[Corollary~11.7]{MR1697859}. Additionally ${\rm SL}_2(R)$ has bounded elementary generation \cite[Theorem~1.1]{MR3892969}. Thus Theorem~\ref{main_thm_tech_version} implies Theorem~\ref{main_theorem}. Additionally, we note the following version of \cite[Theorem~1.12]{polterovich2021norm}: \begin{theorem} Let $R$ be a compact metrizable ring with many units such that each of its non-zero ideals has finite index and such that $R$ is also an integral domain. Then ${\rm SL}_2(R)$ equipped with the relative topology from $R^{2\times 2}$ as well as any of its finite index subgroups are norm-complete. \end{theorem} The proof is virtually identical to the proof of Theorem~\ref{main_thm_tech_version} above, so we will skip it. We do however want to note, that this yields a proof of \cite[Theorem~3.6]{polterovich2021norm}, which does not require the rather technical \cite[Lemma~3.7]{polterovich2021norm}. \section{Closing remarks} To round out this short note, we note that one can prove a generalization of \cite[Theorem~1.8]{polterovich2021norm} also for all other split Chevalley groups besides the ${\rm SL}_n$. The main difference is not the statement, which would be the same, but that a bit of additional care has to be taken in the proof: Namely, the root subgroups associated to short and long roots require slightly different approaches to prove the appropriate form of Lemma~\ref{tech_lemma_1}. Also the exceptional groups ${\rm Sp}_4$ and $G_2$ will require different strategies in case the ring $R$ in question has the bad characteristics $2$ or $3$. Ultimately though, all the split cases are very similar; the more interesting cases are likely those arising from more complicated algebraic groups. \end{document}
\begin{document} \maketitle \begin{abstract} The novelty of the Jean Pierre Badiali last scientific works stems to a quantum approach based on both (i) a return to the notion of trajectories (Feynman paths) and (ii) an irreversibility of the quantum transitions. These iconoclastic choices find again the Hilbertian and the von Neumann algebraic point of view by dealing statistics over loops. This approach confers an external thermodynamic origin to the notion of a quantum unit of time (Rovelli Connes' thermal time). This notion, basis for quantization, appears herein as a mere criterion of parting between the quantum regime and the thermodynamic regime. The purpose of this note is to unfold the content of the last five years of scientific exchanges aiming to link in a coherent scheme the Jean Pierre's choices and works, and the works of the authors of this note based on hyperbolic geodesics and the associated role of Riemann zeta functions. While these options do not unveil any contradictions, nevertheless they give birth to an intrinsic arrow of time different from the thermal time. The question of the physical meaning of Riemann hypothesis as the basis of quantum mechanics, which was at the heart of our last exchanges, is the backbone of this note. \keywords path integrals, fractional differential equation, zeta functions, arrow of time \pacs 05.30.-d, 05.45.-a, 11.30.-j, 03.65.Vf \end{abstract} \section{From algebraic analysis of quantum mechanics to ``irreversible'' Feynman paths integral} Despite the unstoppable success of the technosciences based on both quantum mechanics, standard particle model and cosmological model, at least two questions must be investigated among many issues that the theories leave open \cite{Penrose,smolin}: (i) the question of the ontological status of the time and (ii) the obsessive interrogation concerning the existence or the absence of an intrinsic ``arrow of time''. The origin of these questions comes from the equivocal equivalence of the status of time in any types of mechanical formalisms. For example, within Newtonian vision, the observable $f$ can be analysed algebraically using action-integral through the Lagrangian $L$ while Poisson brackets gives time differential representations $\rd f/\rd t=\{H,f\}$. According to Noether theorem, the energy, referred to the Hamiltonian $H$, is no other than the tag of a time-shift independence of physical laws, namely a compact commutativity. The statistical knowledge of the high dimensions system requires (i) the definition of a Liouville measure $\mu_{\text L}$ based on the symplectic structure of the phase space and (ii) the value of the configuration distribution $Z_{\text C}$, therefore $\rd\mu\sim(1/Z_{\text C})\,\re^{-\beta H}$, with $\beta=1/k_{\text B}T$ related to the inverse of the temperature. This point of view is discretized in quantum mechanics (QM). With regard to quantum perspectives, mechanical formalism introduces (i) a thickening of the mechanical dot, (ii) the substitution of real variables through the spectrum of operators and (iii) an emphasis on the role of probability. According to von Neumann, the stable core of the operator algebra required to fit the quantum data must be based upon groupoids acting on observables. 
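Before passing to the Hilbertian picture, the following symbolic sketch (a purely illustrative addition of ours, with an arbitrary harmonic-oscillator Hamiltonian) makes the classical evolution law $\rd f/\rd t=\{H,f\}$ recalled above concrete; the sign convention of the bracket is chosen so as to match this form of the law.
\begin{verbatim}
import sympy as sp

x, p, m, k = sp.symbols('x p m k', positive=True)

def poisson(f, g):
    # Poisson bracket {f, g}; the sign is chosen so that df/dt = {H, f}
    # reproduces the usual Hamilton equations.
    return sp.diff(f, p) * sp.diff(g, x) - sp.diff(f, x) * sp.diff(g, p)

H = p**2 / (2*m) + k * x**2 / 2      # harmonic-oscillator Hamiltonian

print(poisson(H, x))                 # p/m   -> dx/dt
print(poisson(H, p))                 # -k*x  -> dp/dt
print(sp.simplify(poisson(H, H)))    # 0: energy conserved (time-shift invariance)
\end{verbatim}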
In the Heisenberg framework, the observable $\hat{f}$ (for instance the paradigmatic example of the set of the rays of materials emissions) is represented by self-adjoint operators in Hilbert space $l^{2}(\mathbb{R}^{3},\mathbb{C}$) which values can be reduced within Born-matrix representation to a set of eigenvectors $|\varphi_{n}\rangle$ chosen in the spectrum $\spec(\hat{f})$ of the groupoid. Energy distribution is given through the linear relations $\hat{H}|\varphi_{n}\rangle=E|\varphi_{n}\rangle$, where the Hamiltonian $\hat{H}$ represents the energy self-adjoint operator. The dynamics is implemented by using the commutator: $[\hat{H},\hat{f}]$ which replaces the Poisson bracket, namely $\rd\hat{f}(t)/\rd t=\frac{2\piup \ri}{h}[\hat{H},\hat{f}]$. The capability of giving cyclic representations of von Neumann algebra (extended to Weyl non-commutative algebra for standard model) leads to expressing the dynamics via the eigenvectors Fourier components $|\psi_{n}(t)\rangle=|\varphi_{n}\rangle\exp(-\ri E_{n}t/\hbar)$. This representation is unitarily equivalent to a wave mechanics usually expressed through the Schr\"odinger equation, $\ri\hbar\frac{\rd}{\rd t}\psi(r,t)=[-\frac{\hbar^{2}}{2m}\nabla^{2}+V(r,t)]\psi(r,t)$. The shift from non-linear finite to linear infinite system must be based upon the statistics dealing with a $\Lambda$-extension of the system, through a linear and positive forms $\hat{f}\in A$ $\Phi_{\Lambda}(A)=(1/Z_{\text C})\Tr\exp(-\beta H_{\Lambda_{A}})$. Hence, the average value of the observable $\hat{f}\in A$ is a trace of an exponential operator. Usually, the distribution of physical data must be given by a measure of probability on $\spec(A)$. Thus, we cannot deal with QM without dealing with Gaussian randomness imposed by some external thermostat. At this step, a useful notion is the notion of density matrix given by: $\rho N=\exp(-\beta H)$. Unfortunately, $N$ the normalization constant suffers from all misgivings involved in thermodynamics, by the ``shaky'' notion of equilibrium. \footnote{The extension of $l^{2}(\mathbb{R}^{3},\mathbb{C}$) toward $l^{2}(\mathbb{R}^{3},\mathbb{C}^{2\times2}$) shifts the second order equation onto a first order equation with observables then based upon matrix values. This shift gives birth to the Dirac operator whose algebra founds the spineur standard model of physics. It is clearly based on inner automorphisms and internal symmetries \cite{Connes-1,Connes 2}.}Each item of the above visions imposes its own algebraic constraints but enforces a paradigmatic concept of time parameter \cite{Connes-1} as a reversible ingredient of the physics. At this step, the statistics appears as the only loophole capable of introducing irreversibility as a path to an assumptive equilibrium state for finite $\beta$ value. Nevertheless, as shown above, this assumption requires the $\Lambda$-extension, namely, the transfer of the operator algebra in the framework of \textit{C{*}}-algebra in which the $A$-algebra of its Hermitian elements patterns the transfer (rays) between a set of perfectly well defined states. Starting from the notion of groupoid and from the algebra of magma upon the states and by analysing the symmetries, a mathematician can also consider the equilibrium from a set of cyclic states $\Omega$ of $\hat{f}$, based on Gelfand, Naimark, Segal construction (\textit{GNS construction)} \cite{GNS} binding quantum states and the cyclic states (cyclic transfer which assumes a specific role of scalar operators, called $M$-factors). 
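To make the density matrix $\rho N=\exp(-\beta H)$ and the trace average of an observable tangible, the following minimal numerical sketch (our illustration; the two-level Hamiltonian and the observable are arbitrary choices, not a model taken from the original work) normalizes a Gibbs state and evaluates $\langle\hat{f}\rangle=\Tr(\rho\hat{f})$.
\begin{verbatim}
import numpy as np

beta = 2.0                        # inverse temperature, arbitrary units
H = np.array([[0.0, 0.3],
              [0.3, 1.0]])        # hypothetical two-level Hamiltonian
f = np.array([[1.0, 0.0],
              [0.0, -1.0]])       # some Hermitian observable

# exp(-beta H) via the spectral decomposition of the Hermitian matrix H
evals, evecs = np.linalg.eigh(H)
rho = evecs @ np.diag(np.exp(-beta * evals)) @ evecs.T
Z = np.trace(rho)                 # partition (configuration) sum
rho /= Z                          # normalized density matrix, Tr(rho) = 1

print(np.trace(rho))              # 1.0 up to round-off
print(np.trace(rho @ f))          # thermal average Tr(rho f)
\end{verbatim}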
At this stage, two points of view must be matched together to make the irreversibility emerge from $M$: (i) Tomita Takesaki's dynamic theory \cite{Takezaki 1} extended by Connes \cite{Connes-1,Connes 2} and (ii) Kubo, Martin and Schwinger KMS physical principle~\cite{KMS}. \begin{itemize} \item According to Tomita-Takesaki, if $A$ is a von Neumann algebra, there exists a modular automorphism group $\Delta$ based on a sole parameter $t$: $\alpha_{t}=\Delta^{-\ri t}A\Delta^{+\ri t}$ which leaves the algebra invariant: $\rd\alpha_{t}(A)/\rd t=\lim_{\Lambda\rightarrow\infty}(2\piup \ri/h)[H_{\Lambda}A]$. There is a canonical homomorphism from the additive group of reals to the outer automorphism group of $A$: $B$, that is independent of the choice of ``faithful'' state. Therefore, $\langle\Omega,B(\alpha_{t+j}A)\Omega\rangle=\langle\Omega,(\alpha_{t}A)B\Omega\rangle$, where $(\,,)$ is the inner product. \item The link with KMS physical constraint extends this abstract point of view. The dynamics expression using the Kubo density matrix allows one to change the ``shaky'' hypothesis of thermodynamic equilibrium by giving it a dynamical expression. KMS suggested to define the equilibrium from a correlation function $[(\gamma_{t}A)B]=[B(\gamma_{t+\ri\beta h}A)]$ allowing to associate the equilibrium with a Hamiltonian according to $\gamma_{t}A=\exp(\ri tH/h).A.\exp(-\ri tH/h)$. \end{itemize} The matching of both sections leads: $\beta h=1$ which is nothing but the emergence of a thermodynamic gauge of time while the time variable stays perfectly reversible \cite{Connes-1}. Starting from this analysis Jean Pierre Badiali (JPB) decided the exploration of QM by using the local irreversible transfer joined to Feynman \cite{Badiali 1} path integrals model based on an iconoclast existence of 2D self-similar ``trajectories''. While this model suffers from mathematical divergences and requires questionable renormalisation operations, Feynman model efficiency was rapidly attested. Nevertheless, many physicists still considered that Feynman integrals are meaningless because the concept of trajectory should \textquotedblleft obviously\textquotedblright{} not be relevant in QM. The discernment of JPB was to take the same trail as Feynman, by imagining irreversible series of transition giving birth to real self-similar paths at particles. By using a Feynman Kac transfer formula for conditional expectation of transfer, he writes $q(x_{0},t_{0},x,t)=\int Dx(t)\exp[-\frac{1}{\hbar}A(x_{0},t_{0};x,t)]$ in which the rules of transfer are based on a Newtonian action $A(x_{0},t_{0};x,t)=\intop\{\frac{1}{2}m[\frac{\rd x(s)}{\rd s}]^{2}+u[x(s)]\}\rd s$, he wrote the solution required for discretizing the trajectories \cite{Badiali 2}. These notions are not associated with any natural Hamiltonian and require a coarse graining of the space-time. To overcome this constraint, JPB considered the couple of functional probabilities $\phi(x,t)=\int \rd y\phi_{0}(y)q(y,t_{0};x,t)$ and $\hat{\phi}(xt)=\int q(t,x;t_{1},y)\phi_{1}(y)\rd y$ with $t_{1}>t>t_{0}$. The evolution of a system is given by a Laplacian propagator in which $\phi(x,t)$ is bended out by geometrical potential $u(x,t)$ according to $\pm\frac{\partial}{\partial t}\phi(x,t)+D\Delta\phi(x,t)=\frac{1}{\hbar}u(x,t)\phi(x,t)$, where $D=\hbar/2m$ is the quantic expression of a diffusion constant and $\phi(x,t)$ cannot be normed. These equations are neither Chapman-Kolmogorov equations nor Schr\"odinger like equations. 
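A minimal numerical sketch of the well-posed member of this forward/backward pair may help the reader; it is our own illustration (the sign choice, the grid, the confining potential $u(x)$ and all parameter values are assumptions, not taken from the original work) and propagates $\partial_{t}\phi=D\Delta\phi-u\phi/\hbar$ by an explicit finite-difference step in natural units.
\begin{verbatim}
import numpy as np

hbar, D = 1.0, 0.5                     # natural units: hbar = m = 1, D = hbar/(2m)
nx, L = 200, 10.0
x = np.linspace(-L/2, L/2, nx)
dx = x[1] - x[0]
dt = 0.4 * dx**2 / D                   # explicit-scheme stability restriction

u = 0.5 * x**2                         # hypothetical confining potential u(x)
phi = np.exp(-x**2)                    # initial (unnormalized) functional probability

def step(phi):
    # one explicit Euler step of  d(phi)/dt = D*Laplacian(phi) - u*phi/hbar
    lap = (np.roll(phi, 1) - 2*phi + np.roll(phi, -1)) / dx**2
    return phi + dt * (D*lap - u*phi/hbar)

for _ in range(500):
    phi = step(phi)

print(phi.sum() * dx)                  # the "mass" decays: phi is damped, not conserved
\end{verbatim}
The damping term plays the role of the local irreversible transfer; the propagation is not norm-preserving, in agreement with the statement that $\phi(x,t)$ cannot be normed.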
Thenceforth, which physical and geometrical meaning may we attribute to the discreet arithmetic site on which the fractal-paths are based? How do the morphisms between states and trajectories determine the dynamical topos? How the statistical or non-statistical regularizations ruling the dynamics may smooth the experimental behavior? All these issues are open. To solve them, JPB point of view required a new visitation of the thermodynamics and in conformity with KMS point of view, a new definition of the equilibrium expressed via the irreversibility of the local transfer. To do this, he considered the class of the paths reduced to loops: $\phi(x_{0},t_{0},x_{0},t-t_{0})$ and their fluctuations in energy. Assuming an average energy $U$ determined by a thermostat, the overall fluctuations are ruled by a deviation, on the one hand, from the reference value $U$ and, on the other hand, from the number of loops concerned. As Feynman had imagined it, an entropy function: $S_{\text{path}}=k_{\text B}\ln\int q(x_{0}t_{0},x_{0},t-t_{0})\rd x_{0}$ can be built which is ruled by the concept of path temperature $T_{\text{path}}$: $ \frac{\hbar}{k_{\text B}}(1/T_{\text{path}})=\tau+[U-(\langle u_{K}\rangle_{\text{path}}+\langle u_{p}\rangle_{\text{path}})]\frac{\partial\tau}{\partial u}$. The emergence of an equilibrium is figured dynamically through a critical time scale $1/\beta h$, which possesses a strictly quantum statistical origin merely based on loops $\tau=(\hbar/k_{\text B})T_{\text{path}}$ if it can be assumed that the temperature of the integral of the path is none other than the usual thermodynamic temperature. From this step, JPB finds again the Rovelli-Connes assertions regarding thermal-time \cite{Rovelli 1} and he proves the Boltzmann $H$-theorem. By means of subtle analysis using the duality of the couple propagators (forward and backward dynamics), he built a complex function $\psi,$ solution of the Schr\"odinger equation. The thought of JPB appears as a subtle adventure which --- inscribed in the footsteps of Richard Feynman, and implemented from a deep knowledge of QM, thermodynamics, thermochemistry and irreversible processes --- changes the traditional point of view and builds a perspective that we have to analyze now, from an alternative point of view which replaces the transport along the fractal trail by a transfer across an interface, both perspectives being strongly related. In brief, $1/\beta$ provides a scale of energy which smooths the regime of quantum fluctuations according to an uncertainty relation: $\Delta E=\hbar/\Delta t<1/\beta$ namely $\Delta t>\beta\hbar$. $\beta\hbar$ is the value of the time defining the cut-off between quantum fluctuations and thermodynamic fluctuations. The propagation function imparts a quadratic form to the spatial fluctuations, namely $\delta x^{2}=(\beta\hbar/\partial t-1)\partial t^{2}/m$. If $\Delta t=\beta\hbar/2$ then $\delta x^{2}=\beta\hbar^{2}/m=2\beta\hbar D\sim\delta t$, the value that, with the reserve of taking into account the entropy constant $k_{\text B}$, must be compared to de Broglie's length. Thus, the coarse graining of the time will be considered as the dual of the quadratic quantification of space, when a length in this space can be reduced to the constraints imposed by the geometrical pattern of non-derivable trajectories (herein with a dimension two attributed implicitly and for quantum physical reasons to the set of Feynman paths). 
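To give orders of magnitude for this cut-off, the following back-of-the-envelope sketch (added by us; the choice of the electron mass and of room temperature is purely illustrative) evaluates the thermal time $\beta\hbar$, the associated length $\sqrt{\beta\hbar^{2}/m}$ and, for comparison, the thermal de Broglie wavelength.
\begin{verbatim}
import math

hbar = 1.054571817e-34    # J s
h    = 6.62607015e-34     # J s
kB   = 1.380649e-23       # J/K
m_e  = 9.1093837015e-31   # kg, electron mass (illustrative choice)
T    = 300.0              # K, room temperature (illustrative choice)

beta = 1.0 / (kB * T)
print(beta * hbar)                          # ~ 2.5e-14 s  (thermal time beta*hbar)
print(math.sqrt(beta * hbar**2 / m_e))      # ~ 1.7e-9  m  (= sqrt(2 beta hbar D))
print(h / math.sqrt(2*math.pi*m_e*kB*T))    # ~ 4.3e-9  m  (de Broglie), same order
\end{verbatim}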
The aim of this note is to show that this ``cognitive skeleton'' does not only give birth to thermal statistical time, but through a generalization of fractal dimension, to a purely geometric irreversible time unit: an arrow of time. \section{Zeta function and ``$\alpha$-expon\textit{a}ntiation''} In addition to B. Mandelbrot initial friendship, we owe to J.P. Badiali and Professors I. Epelboin and P.G. de Gennes the first academic support for the development of the industrial TEISI model energy transfer on self-similar (i.e., fractal interfaces). This was at the end of seventies shortly before the premature death of Professor Epelboin. The purpose of this model was to explain the electrodynamic behavior of the lithium-ion batteries which were then at the stage of their first industrial predevelopment \cite{ALM 1982,ALM 2 1883,ALM 3 83}. The interpretation and the patterning of electronic and ionic transfer coupled together in 2D layered positive materials $(\text{TiS}_{2},\text{NiPS}_{3})$ are very similar to the JPB model. The electrode is characterized by a fractal dimension $d$ which, due to the symmetries of real space, must be such as $d\in[1,2]$. When the fractal structure of the electrode is scanned by the transfer dynamics through electrochemical exchanges, the electrode does not behave like an Euclidean interface, as a straightforward separation between two media, but like an infinite set of sheets of approximations normed by $\eta(\omega)$ or a multi-sheet manifold, thick set of self-similar interfaces working as paralleled interfaces \cite{Ttricot}, where $\omega$ is a Fourier variable. Each $\eta(\omega)$-interface is tuned by a Fourier component of the electrochemical dynamics. The overall exchange is ruled by a transfer of energy either supplied by a battery (discharge) or stored inside the device (charge). The impedance of positive electrode is expressed through convolution operators coupled with the distributions of the sites of exchanges (electrode), giving birth at macroscopic level to a class of non-integer differential operators which take into account the laws of scaling, from quantum scales of transfer up to the macroscopic scales of measurement. This convolution between the discreet structure of the geometry and the dynamics must be written in Fourier space by using an extension of the Mandelbrot like fractal measure namely $N\eta^{d}=1$, into operator-algebra with $N=\ri\omega\tau$ \cite{ALM 1982}. Mainly, the model emphasizes the concept of fractal capacity (fractance) --- implicitly Choquet non-additive measure and integrals --- whose charge is ruled by the non-integer differential equation $\ri\sim \rd^{\alpha}U/\rd t^{\alpha}$ with $\alpha=1/d$ \cite{Oldham 2,Machado}, where $U$ is the experimental potential. In the simplest case of the first order local transfer, hence, for canonical transfer, the Fourier transform must be expressed through Cole and Cole type of impedance: $Z_{\alpha}(\omega)\sim1/[1+(\ri\omega\tau)^{\alpha}]$ \cite{Penton 1,Nig 1,Jonsher 2,heliod1} which is a generalization of the exponential transfer turned by convolving with the $d$-fractal geometry. Many other interesting expressions and forms can be found, but being basically related to exponential operator, the canonic form appears as seminal. The model was confirmed experimentally in the frame of many convergent experiments concerning numerous types of batteries and dielectric devices. JPB has advised all these developments especially within controversies and intellectual showdowns. 
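As a numerical complement (our own sketch, with arbitrary values of $\alpha$ and $\tau$), the canonical impedance $Z_{\alpha}(\omega)\sim1/[1+(\ri\omega\tau)^{\alpha}]$ can be evaluated directly; at high frequency its phase locks onto the constant value $-\alpha\piup/2$, the constant-phase signature associated with the $d=1/\alpha$ fractal interface.
\begin{verbatim}
import numpy as np

alpha, tau = 0.8, 1.0              # hypothetical exponent alpha = 1/d and time constant
omega = np.logspace(-3, 3, 7)      # angular frequencies

Z = 1.0 / (1.0 + (1j * omega * tau) ** alpha)   # Cole and Cole impedance

for w, z in zip(omega, Z):
    print(f"omega={w:9.3f}  |Z|={abs(z):6.3f}  phase={np.degrees(np.angle(z)):7.2f} deg")

print(-alpha * 90.0)               # high-frequency phase limit -alpha*pi/2 (in degrees)
\end{verbatim}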
For instance, even if energy storage is at the heart of all engineering purposes \cite{ALM 2 1883,ALM 3 83}, the use of non-integer operators renders the model accountable for the fact that energy is no longer a natural Noetherian invariant of the new renormalizable representation. Therefore, algebraic and topological extensions must be considered, whose results are the emergence of time dissymmetry and of entropic effects. Fortunately, $Z_{\alpha}(\omega)$ clearly appears as a geodesic of a hyperbolic space $\eta^{d}(\omega)=\frac{u}{v}$ authorizing (figure~\ref{fig1}), on the one hand, the use of non-Euclidean metrics to establish a distance between $\eta(\omega)$-interfacial sheets and, on the other hand, the tricky algebraic and topological extension of the dynamics, practically a dual fractional expression of the exponential. \begin{figure} \caption{(Color online) \textit{Main characteristics of the exponential transfer function convoluted by the $d$-fractal geometry.}} \label{fig1} \end{figure} \begin{figure} \caption{(Color online) \textit{Dynamical meaning of Riemann's hypothesis, phases and arrow of time.}} \label{fig2} \end{figure} The extension to the ``dual $\alpha$-geodesic'' $(Z_{\alpha}^{\tau_{n}}=Z_{\alpha}\cup\{\tau_{n}\})$ shown in figure~\ref{fig1} is able to formalize the main characteristics of a global fractional dynamics \cite{Machado} which retrieves, as we show below, a capability to rebuild, through the addition of entropic factors, the contextual meaning of the physical process. We called this new global dynamics ``$\alpha$-expon\textbf{a}ntiation'' (with an ``a'') (figure~\ref{fig1}). This denomination integrates the phase angle $\varphi(\alpha)$ (figure~\ref{fig1}), namely the symmetries and inner dynamic automorphisms caused by the $d$-fractal geometry. Let us observe that if $s=\alpha+\ri\varphi(\alpha)$ is taken as a new reference, then $0<\varphi(s)<\varphi(\alpha)$ defines a compact set $K$ which can be considered as a base for Tomita's shift $s\rightarrow s+\ri t$ implemented in the complex field (see below the $\mathbb{N}$ fibration). In addition, let us observe that the expression of $\varphi(s)$ requires a referential system which can be given either with respect to experimental data (figure~\ref{fig1}) or with regard to an \textit{a priori} referential obtained after a $\piup/4$ rotation, supposing the use of a Laplacian paradigm in which $\Delta(s)$ (figure~\ref{fig2}) is used as a new expression of the phase reference in place of $\varphi(s)$. The relevance of this duality has an extremely deep physical meaning: according to the non-integer dynamic model, if the transfer process is implemented across a Peano curve (Feynman paths, nil co-dimension, no outer operator), then $\alpha=1/2$ and $\Delta(s)=0$, and the overall impedance recovers an inverse Fourier transform while the dynamic measure fits a probability. The traditional concept of energy recovers its practical relevance and the space-time relationship becomes coherent with the use of the Laplacian and Dirac operators. The time used is the reversible time of mechanics. Conversely, if $\alpha\neq1/2$, the inverse Fourier transform does not exist and, therefore, the traditional concept of time vanishes, retrieving the mere arithmetic operator status implemented in the TEISI model, namely $N=\ri\omega\tau$. Time no longer has any straightforward usual meaning. 
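To fix ideas about the two phase references just introduced, here is a minimal worked computation (added for illustration only; it uses nothing beyond the canonical Cole and Cole form recalled above): for $\omega\tau\gg1$,
\[
Z_{\alpha}(\omega)\sim\frac{1}{1+(\ri\omega\tau)^{\alpha}}\approx(\ri\omega\tau)^{-\alpha}=(\omega\tau)^{-\alpha}\exp\left(-\ri\,\alpha\frac{\piup}{2}\right),
\]
so that the impedance locks onto the frequency-independent phase $-\alpha\piup/2$. For the Peano case $\alpha=1/2$ this constant phase equals $-\piup/4$ and thus coincides with the Laplacian reference obtained by the $\piup/4$ rotation, consistently with the cancellation $\Delta(s)=0$ stated above.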
These strange conclusions about ``time'' as well as the issues about ``energy'', left many academic colleagues dubious late in 1970-ies, but not JPB who found in these issues many reasons for reviving the electrochemical and electrodynamical concepts. Although these disturbing issues did stay open, the TEISI experimental efficiency suggested that there was something deeper and more fundamental behind the model; but which thing?\,\ldots Our obstinacy to believe in the physical meaning of the TEISI model was rewarded early this century, by discovering that the canonical Cole and Cole impedance is closely related with the Riemann zeta function properties \cite{ALM CMA 2010} and that these properties are explicitly associated to the phase-locking of fractional differential operators. $Z_{\alpha}^{N}(n)$, the integer discretization of the Cole and Cole impedance $Z_{\alpha}(\omega)$, is characterized by the hyperbolic-dynamic metric given by $(u/v)=1/n^{\alpha}$. Therefore, the overall discretization of $Z_{\alpha}^{\tau_{n}}$ appears as a possible grounding for the definition of both Riemann zeta function $\zeta(s)$ and $\zeta(1-\bar{s})$, where $s=\alpha+\ri\varphi(\alpha)\in\mathbb{C}$, and $1/2\leqslant \Re s<1$ \cite{ALM CMA 2010} with an emphasis given to the arithmetic site $\{S_{\alpha}^{\tau_{n}}\}= \,_{0}^{\infty}L=\{Z_{\alpha}\cap\{\tau_{n}\}\}$ (see \cite{Machado}, page 231). An extension of the concept of time to complex field $t\in\mathbb{R}\Rightarrow s\in\mathbb{C}$ is a natural result of this discretization. In addition, as suggested in recent studies \cite{Wolf,Snaith}, a heuristic reasoning about the symmetries and automorphisms backed on $Z_{\alpha}^{\tau_{n}}$ led us to assume (i) that the Riemann conjecture concerning the distribution of the non-trivial zeros of zeta function $\zeta(s)=0$ could be validated starting from physical arguing by using self-similar properties of $\zeta(s)$ obvious from Cole and Cole impedance and recursive dynamics \cite{ALM CMA 2010,Lapidus}, (ii) that the complex variable $s=\alpha+\ri\varphi$ also associated to the metric of the geometry through $d=1/\alpha$ accommodates, through its complex component, something of the formal nature of the concept of arrow of time and (iii) that according to the consequence of Montgomery hypothesis, QM states should be related to the set of zeros, but also joined to the disappearance of above time intrinsic arrow. We recall that the Riemann conjecture states that the non-trivial zeros of the zeta function $\zeta(s)=0$ are such as if $\Re s=\alpha=1/2$ (phase locking for $d=2$), namely, in TEISI model geometrical terms, Riemann hypothesis would be related to Peano interfaces (2D embedded geometry without any external environment). Due to the similarity between JPB model and TEISI model which use the irreversible transfer as test functions of the distribution of the sites of exchanges, the morphisms concerning the scaling and the role of the metric in this operation, leads to guess that the theory of categories and, moreover, the theory of Topos should be hidden behind the morphism escorted by the role of zeta function. The authors will consider in these following paragraphs only the theory of categories. \subsection{Universality of zeta Riemann function} It is well known \cite{Hauet} that Riemann zeta function can be expressed by means of two distinct formulations: (i) additive series $\zeta(s)=\sum_{n\in\mathbb{N}}n^{-s}$ and (ii) multiplicative series $\zeta(s)=\prod_{p\in\wp}(1-p^{-s})^{-1}$. 
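For completeness, let us recall the elementary classical step which makes the two formulations equivalent (a standard computation inserted here for the reader's convenience; it is not part of the original argument): for $\Re s>1$, expanding each Euler factor as a geometric series gives
\[
\prod_{p\in\wp}\left(1-p^{-s}\right)^{-1}=\prod_{p\in\wp}\,\sum_{r\geqslant 0}p^{-rs}=\sum_{n\in\mathbb{N}}n^{-s},
\]
since, by unique factorization $n=\prod_{i}p_{i}^{r_{i}}$, every integer is generated exactly once when the product is expanded. The multiplicative series thus encodes the \textit{partition} (divisibility) order, while the additive series follows the \textit{construction} order discussed below.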
It is also well known that any analytic function can be expressed through an additive series $f(s)=\sum_{n\in\mathbb{N}}(a_{n}s^{n})$. A duality exists that associates $f(s)$, based on $s^{n}$, and $\zeta(s)$, based on $n^{-s}$; up to the sign, there is an inversion of the roles played by the complex argument $s$ and by the integer $n$. At this step, the key point is as follows: the dual functions can be compared using the Voronin theorem \cite{Voronin 1975,Voronin 1992,Bagchi 1982}. This theorem states that any analytic function, for example a geodesic on a hyperbolic manifold, can be approximated, under conditions set out below, by so-called universal functions. The archetype of these functions is precisely nothing else than the Riemann zeta function $\zeta(s)$, namely: for $K$ a compact subset of the critical band $1/2\leqslant \alpha<1$ with connected complement, and for $f(s)$ a continuous function on $K$, analytic in its interior and \textbf{without zeros on $K$}, we have, $\forall\varepsilon>0$, $\liminf_{T\rightarrow\infty}(1/T)\, \text{mes}\{t\in[0,T]:\max_{s\in K}|\zeta(s+\ri t)-f(s)|<\varepsilon\}>0$. The zeta function being used as a reference, the extension of the abscissa according to $T\rightarrow\infty$ leads to a ``crushing'' of the analytic function on the reference $\zeta(s)$. In addition, ten years after Voronin established his theorem, Bagchi demonstrated \cite{Bagchi 1982} that the validity of the Riemann hypothesis (RH) is equivalent to the verification of the universality theorem of Voronin in the particular case where the function $f(s)$ is replaced by $\zeta(s)$ itself, namely: $\forall\varepsilon>0$, $\liminf_{T\rightarrow\infty}(1/T)\, \text{mes}\{t\in[0,T]:\max_{s\in K}|\zeta(s+\ri t)-\zeta(s)|<\varepsilon\}>0$. Therefore, Bagchi's inequality asserts that the nexus of RH is the self-similarity of the Riemann function, the property explicitly contained in its link with $Z_{\alpha}^{\tau_{n}}$ \cite{ALM CMA 2010,Riot 2016,REE}. Nevertheless, since the distribution of the zeta zeros is unknown, it must be observed that the restriction concerning the absence of zeros inside the compact set $K$ does not allow one to apply the Voronin theorem to $\zeta(s)$. Therefore, if the validity of RH is related to the self-similarity of $\zeta(s)$, this property must emerge within a theoretical status at the margin of the field of analysis. Practically, this observation urges us to consider RH as a singularity in an enlarged field of mathematical categories, and that is why the authors suggested introducing category theory \cite{Cat 2 Law,Cat 3 MLa,Cat 4 Bor,Hines 1} for handling the RH issue \cite{Riot 2016,REE}. The use of this theory is justified for at least two reasons: (i)~according to the work of Rota \cite{Rota}, the function $\zeta(s)$ can easily be expressed in the framework of partially ordered sets (which form the basis of all standard dynamics), particular cases of categories; (ii)~since the works of Leinster \cite{Leinster 2,Cat 1 Lie}, self-similarity, as a fixed-point property, can easily be expressed by using the language of categories. Experts in algebraic geometry will consult the reference \cite{connes 4} with profit. The reader of this note will be able to find in this essay and in the associated lectures \cite{Connes-1} the reasons for which some engineers seek to illustrate the profound but also practical signification of the famous hypothesis. 
Both approaches should be theoretically bonded via the existence of a renormalization group over $Z_{\alpha}^{\tau_{n}}$ capable of compressing the scaling ambiguities characterizing the singularities of the fractional dynamics: scaling extension of figure~\ref{fig1} for tiling the Poincar\'{e} half plan~\cite{Machado,alm98}. \subsection{Design of $E_{pr}$-space } A category is a collection of objects $(a,b,\ldots)$ and of morphisms between these objects. Morphisms are represented by arrows $(a\rightarrow b)$ which can physically account for structural analogies or dynamics relations. Two axioms basically rule the theory: (i) an algebraic composition of arrows $a\rightarrow b\rightarrow c$, pointing out in the framework of set theory the homomorphism: $\text{hom}(a,c)$, and (ii) the identity principle which accounts for an absence of any internal dynamics of the objects $(1_{a}$: $a\rightarrow a)$. We must point out that the compositions of ``arrows'' can in practice be thought of as order-structures. Within the framework of enriched categories, there is, in addition, a close link between categories and metric-structures \cite{Cat 2 Law}. For instance, the additive monoid $(N,+,\geqslant)$ may be substituted by $\text{hom}(a,b)$, after the introduction of the notion of a distance through a normalization of the length of the arrows. $\mathbb{N}$ is naturally associated to the additive law (\textit{construction}) which provides, through the monoid $(N,+)$, an ordered list of its elements $[1,2,3,\ldots, n,n+1,\ldots]$, but it is also associated through the monoid $(N,\times)$ to a distinct order structure. The question of matching both monoids makes it possible to consider it as a mere arithmetic issue, but the order associated to $(N,\times)$ or $(N,\div)$ must over here, --- the\textit{ partition} of the set $\mathbb{N}$, --- be defined in the following way: $p<q$ if and only if $p$ \textit{divides} $q$. \textit{The order is, therefore, only partial} because, as it is well known, any integer may be written in a unique way according to $n=\prod_{i}p_{i}^{r_{i}}$ with $p_{i}$ prime and $r_{i}$ integer while $i$ scans a finite collection of $n\in\mathbb{N}$; the order appearing through the set of $p_{i}$ is mainly different from the order of the set of $(N,+)$. By taking the logarithm, one obtains $\log(n)=\sum_{i}r_{i}\times \log p_{i}$ for all $n$. Mathematically, $\mathbb{N}$ with partial order is a ``\textit{lattice}'': any pair of elements $x$ and $y$ has a single smallest upper bound, in this case, the LCM (Lowest Common Multiple) and a single GLD (Greatest Lower Divisor). The total order structure itself also constitutes a lattice for which the operators max and min can be substituted by LCM and the GLD. The elements of a lattice can be quantified by associating a value $v(p)$ for each $p$ in such a way that for $p\geqslant q$ we have $v(p)\geqslant v(q)$. According to Aczel theorem \cite{Aczel 1,aczel 2,Asso 1}, there is always a function possessing an inverse such that the linear ordered discrete set of values makes it possible to match the partial order associated with the multiplicative monoid $(N,\times)$ and the order associated to $(N,+,\leqslant )$. In the above particular case, this result takes the following form: both monoids $(N,+,\leqslant )$ and $(N,\times)$ are in correspondence by means of a logarithm, so that: $\log[\text{LCM}(p,q)]=\log p+\log q-\log[\text{GLD}(p,q)]$. 
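To make the last identity concrete, here is a small worked example (added only as an illustration): take $p=12=2^{2}\times3$ and $q=18=2\times3^{2}$; then $\text{LCM}(p,q)=2^{2}\times3^{2}=36$ while the GLD (i.e., the usual greatest common divisor) is $2\times3=6$, so that
\[
\log[\text{LCM}(12,18)]=\log 36=\log 12+\log 18-\log 6,
\]
both sides being equal to $2\log2+2\log3$. Prime by prime, the exponent on the left-hand side is $\max(r_{i},r'_{i})$ and on the right-hand side $r_{i}+r'_{i}-\min(r_{i},r'_{i})$, which is precisely the lattice mechanism by which the additive and multiplicative monoids are matched.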
The dissymmetry between \textit{construction} and \textit{partition} explains the role of the non-linear logarithm function, hence the paradigm of the exponential function in the physics of closed additive systems, while, conversely, the treatment of non-extensive systems always remains an open issue. Although very elementary, these characteristics of $\mathbb{N}$ invite us to introduce a space of countably infinite dimension $E_{pr}$ which is characterized by an orthogonal vector basis indexed by the quantities $\log p_{i}$, where $p_{i}$ is any prime integer, and wherein the vectors have a finite number of non-zero integer coordinates $r_{i}$, the other coordinates being reduced to zero \cite{Riot 2016,REE}. Indeed, $E_{pr}$ is remarkably well adapted to a linearization of the self-similar properties expressed from the discrete framework of $\mathbb{N}$. The orthogonal character of the basis axes accounts for the fact that the set of prime numbers constitutes an anti-chain in the partially ordered set associated with divisibility, hence in $\mathbb{Q}$, the set of rational numbers, as defined above. The space $E_{pr}$ corresponds to the positive quadrant of a Hilbert space in which the norm of the $i$-th unit vector is equal to $\log p_{i}$; for instance, $n=12=2^{2}\times3$ is represented by the coordinates $(2,1,0,0,\ldots)$ along the axes of norms $(\log2,\log3,\log5,\ldots)$, so that $\log n=2\log2+\log3$. It is then easy to introduce a scaling using a parameter based on the complex number $-s\in\mathbb{C}$: at each coordinate point, $r_{i}$ is then associated with the coordinate $-s\times r_{i}$. The space obtained by applying the scaling function $s$ may be denoted by $N(s)$ \cite{REE}. The construction of this kind of space using the logarithmic function is all the more relevant in that its inverse, i.e., the exponential function, can be applied in return. Therefore, the total measure of the exponential operator can then be easily computed upon the set of integer points constrained by a complex power law $n^{-s}\in N(s)$ on $E_{pr}$ for any chosen parameter $s$ (in the example above, $\exp[-s(2\log2+\log3)]=12^{-s}$). This operation gives birth to the Riemann zeta function $\zeta(s)=\sum_{n\in\mathbb{N}}n^{-s}$ which, therefore, finds in $E_{pr}$ its natural mathematical sphere of definition. The Riemann zeta function is the total measure of the exponentiation operator applied upon the set $N(s)$ when expressed in $E_{pr}$, and $\zeta(s)$ is, therefore, merely the trace of this operator in $E_{pr}$: $\zeta(s)=\Tr_{E_{pr}}\{\exp[-s\log N(s)]\}$. At this step, it is interesting to confront the above analysis with quantum mechanics. For example, one can observe that the Montgomery-Odlyzko hypothesis (MOH) could be based on a specific interpretation of the $E_{pr}$ space. Let us recall that Montgomery considers the identity of distributions between the pair correlations of the zeros of the Riemann zeta function and the pair correlations of the eigenvalues of Hermitian random matrices \cite{Snaith}. The conjecture asserts the possibility of regularizing divergent integrals by using a Laplace operator whose spectrum is based upon the $\mathbb{N}$-ordered series $0\leqslant \lambda_{1}\leqslant \ldots\leqslant \lambda_{n}\leqslant \ldots<\lambda_{\infty}$. Then, one can define the spectral zeta function according to the equation $\zeta_{\lambda}(s)=\sum_{n\in N}(1/\lambda_{n})^{s}$. This series converges only for $\Re s$ large enough, but it has an extension to the complex plane. For the Hermitian operator $H,$ we have $\zeta_{\lambda}(s)=\Tr[\exp(-s\log H)]$ while $\det H=\prod_{n\in N}\lambda_{n}$. 
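Before turning to the Mellin representation used below, let us recall the elementary identity on which it rests (a standard computation inserted here for the reader's convenience; it is not part of the original text): for $\Re s>0$ and $\lambda>0$, the change of variable $v=\lambda t$ gives
\[
\int_{0}^{\infty}t^{s-1}\re^{-\lambda t}\rd t=\lambda^{-s}\int_{0}^{\infty}v^{s-1}\re^{-v}\rd v=\Gamma(s)\,\lambda^{-s},
\]
so that, summing over the spectrum $\{\lambda_{n}\}$ and exchanging sum and integral (when licit), one obtains $\Gamma(s)\,\zeta_{\lambda}(s)=\int_{0}^{\infty}t^{s-1}\sum_{n}\re^{-\lambda_{n}t}\,\rd t$.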
Therefore, with respect to $\zeta(s)$, the description requires an introduction of the concept of energy according to $\log(\det H)=\Tr(\log H)$. The Mellin transform of the kernel of the ``\textit{heat equation}'' can then be expressed as: $\Gamma(s)\,\zeta_{\lambda}(s)=\int_{0}^{\infty}t^{s-1}\Tr[\exp(-tH)]\rd t$ with $\Tr[\exp(-tH)]=\sum_{n\in N}\exp(-t\lambda_{n})$, leading to the Riemann hypothesis. But, standing upstream of this specific problem, by highlighting the role of any partial order even for $\alpha\neq1/2$, $E_{pr}$ grounds and, in a certain sense, generalizes the implicit assumptions of MOH. $E_{pr}$ overcomes the limiting role of the Laplacian operator and of the Hermitian hypothesis, which implicitly and \textit{a priori} imply the additive properties of the systems concerned, or, in other words, admit \textit{a priori} the validity of the Riemann hypothesis [the existence of well defined random states associated to the zeros of the zeta function: $\Re s=\alpha=1/2$]. The categorical link, described above for any value of $s$, between $(N(s),+,\leqslant)$ and $(N(s),\div)$ [i.e., $(N(s),\times)$], referring, respectively, to the total order (forward construction) and to the partial order (backward partition), is well adapted for dealing with non-additive systems, namely a dynamical conception of a steady state of arithmetic exchanges, without any additional hypothesis regarding $\Re s$. Obviously, the above conception can be narrowed to additive systems or steady states if $\alpha=1/2$. According to this overall point of view, the categorical matching between \textit{construction} and \textit{partition}, which gives birth to a renormalization group, might be physically expressed through gauge constraints, namely, intrinsic automorphisms required for closing the system over itself \cite{Connes 2,connes 4}. Many other essential properties of multi-scaled systems could be unveiled by formalizing the theory from the $E_{pr}(s)$ space, even if very singular and interesting properties arise when, in accordance with the Riemann hypothesis $\Re s=1/2$, one introduces additional specific symmetries in $E_{pr}$ such as $\zeta(s)=\zeta(1-\bar{s})$. In general, whatever the value of $\alpha$, the function $\zeta(s)$ is the total measure of the exponentiation operator on the support space $N(s)$ at a certain scale $s$, while $\zeta(s)$ is constrained by the Bagchi inequality based upon a time-shift from $s$ to $s+\ri t$, very similar to the one used in the Tomita and KMS relations. In order to analyse a possible analogy between both approaches, it then appears necessary to analyze how the space $N(s)$ behaves under the shift when $\ri^{2}=-1$. \subsection{$E_{pr}$ fibration} Let us consider the parameter $s=\alpha+\ri\varphi$ varying in a compact domain $K\subset\mathbb{C}$ such that $\alpha\in[1/2,1]$ and $0\leqslant \varphi\leqslant \varphi_{\text{max}}(\alpha)\leqslant \piup/4$ (figure~\ref{fig2}). According to the Borel-Lebesgue theorem, a compact domain in $\mathbb{C}$ is a closed and bounded set for the usual topology of $\mathbb{C}$, directly inherited from the topology associated with~$\mathbb{R}$. The boundedness of $K$ is essential for backing the reasoning based on the shift from $s$ to $s+\ri t$. 
Indeed, by choosing a parameter $t\in\mathbb{\mathbb{N}}$ as a value sufficiently high with respect to the diameter of the domain $K$, the shift from $s$ to $s+\ri t$ makes it possible to create a translation of the domain $K$ \cite{REE} with a creation of copies of $K$: $K_{t}$ capable of avoiding any overlapping if a relevant period $t=\tau$ is rightly chosen. Thus, $t$-shifts uplift a fiber above $K$. In the frame of $\alpha$-expon\textbf{a}ntial representation, this characteristic may be practically applied for folding the dynamic and zeta function if, for instance, $K$ is associated to the field of definition of $Z_{\alpha}^{\tau_{n}}$, while $Z_{\alpha}(\omega)$ is used to root $\zeta(s)$ on the set $\{\alpha,\varphi_{\text{max}}(\alpha)\}$. Let us observe in advance that $\re^{\pm \ri t}$ implements the fibration by starting from the gauge-phase angle $\varphi_{\text{max}}$ (figure~\ref{fig2}). This way for understanding the fibration is equivalent to replacing the additive operation ($s$ to $s+\ri t$) by a Cartesian product. If we now replace $s$ with $s+\ri m\tau$, where $m$ scans the countless infinite set of integers; the reciprocal image along the base change is then the fiber product of space $N(s)$ by a discrete straight line defined by $\ri\times m\times\tau$. The total space is characterized by $N(s)\times\{\ri\times m\times\tau\}\simeq N(s)\times N(s)\simeq N(s)$. Thus, the change of the basis does not realize anything else but the bijection $\mathbb{\mathbb{N}\times\mathbb{N}=N}$, characterizing a well-known quadratic self-similarity characteristic of the set of integers. The self-similar characteristics of $\mathbb{N}$ can be approached by using a particular class of polycyclic semigroups or monoids \cite{fibre ,Nivat,Howies,Lawson 1}. They are representable as bounded linear operators of a traditional Hilbert space, of type $N(s)$ herein. The change of base consists in introducing such a semigroup realizing a fibration based on the self-similarity $N(s)\times N(s)=N(s)$, or a partition within subspaces with co-dimension 1. Each sheet corresponds to the space above the variable $s+\ri\times m\times\tau_{0}$. The value of the Riemann function $\zeta(s+\ri\times m\times\tau)$ is obtained as the total measure of the exponential operator on each sheet, namely this value is a truncation of $\zeta(s)$. This truncation is the basis of Riemann hypothesis. Let us observe that $m$ which is obtained after a rearrangement of the numerical featuring corresponding to the isomorphism $\mathbb{\mathbb{N}\times\mathbb{N}=N}$ imposes a distribution of points $\alpha+\ri\times m\times\tau$ that, in the complex plan, does not mesh the total order given from $(N,+,\leqslant )$. Above each complex number $s\in K$ and along the fiber, an appropriate category exists in $E_{pr}$ based upon both initial and terminal object $N(s)$ \cite{SmyPlot,Belaiche1,Lambek} leading to the folding of the $\mathbb{N}$ and, therefore, to the second different order. The ``disharmony'' between both orders involved by the relation $\mathbb{N}{}^{2}=\mathbb{N}$ has its equivalent in the TEISI model when the previous self-similarity is expressed by $\eta^{2}\times(\ri\omega\tau)=1$. The interface of transfer is then a Peano interface, where the complex variable $\ri$ expresses the fibration and $\omega\tau=n\in N(s)$ is used for the computation of $\zeta(s)$. 
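The quadratic self-similarity invoked above can also be made explicit by a completely elementary pairing (mentioned here only as an illustration; the polycyclic-monoid realizations quoted above are the structured version of the same fact): the Cantor map
\[
\pi(m,n)=\frac{(m+n)(m+n+1)}{2}+n
\]
is a bijection from $(\mathbb{N}\cup\{0\})\times(\mathbb{N}\cup\{0\})$ onto $\mathbb{N}\cup\{0\}$. However, the ordering carried back from $(\mathbb{N},+,\leqslant)$ through such a pairing does not respect the componentwise order of the pairs, which is an elementary face of the very ``disharmony'' between the two orders discussed here.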
However, via the operator general equation $\eta^{d}\times(\ri\omega\tau)=1$, this \textquotedblleft disharmony\textquotedblright{} is notified by tagging the sign of $t$ through a phase factor $\ri^{1/d}$ generally different from $\ri^{1/(1-d)}$. Therefore, one should distinguish at least two cases: \textit{First case: time symmetry and the absence of junction phase}. To avoid dissymmetry of the phase at boundary, the singularity of the phase angle must be canceled, namely, $\varphi(\alpha)=\piup/4$, $\Delta=0$ (figure~\ref{fig2}). The dynamics basis must be expressed through $Z_{1/2}^{\tau_{n}}(s)$, which gives birth to a folding of $\zeta(1/2\pm \ri t)$. RH becomes associated to the expression of the invariance of $t$ under a change of sign. In terms of phase transition, $t$ is a parameter of order and $\alpha=1/2$ is the tag which points out a singularity of ``order'' within the ``disorder'' ruled by $\alpha\neq1/2$, $\zeta(s)\neq0$. The main order which must be considered whatever $s\in\mathbb{C}$ is naturally given through $\zeta(s)=0$. \textit{Second case: existence junction phase.} If we take into account the fact that TEISI relation is more general than the quadratic one and must be considered under its general form: $\eta^{d}(\ri\omega\tau)=1$ namely $\eta\times\ri^{\alpha}(\omega\tau)^{\alpha}=1$, $\ri^{\alpha}$ introduces a critical phase angle when fibration is implemented $\mathbb{\mathbb{N}\times_{\varphi}\!\mathbb{N}=N}$. If $t$ is the physical time parameter, this relation proves the existence of an arrow of time emerging from the underlying fractal geometry, if the metric of this geometry requires an environment. The main mathematical issue revealed by the controversies is then our capability or not of reducing the fractal dynamics to a stochastic process, namely $(\omega\tau)^{\alpha}\rightarrow(\omega'\tau')^{1/2}$. Provided we take into account the phase angle, the presence of $\Delta(s)$ suggests that this transformation could be rightful if a thermodynamical free energy were considered (Legendre transform). The question which must be also addressed within an universe characterized by an $\alpha$-exponantiation with $\alpha\neq1/2$, namely, the disappearance of perfectly defined Hilbert states, concerns the class of groupoids capable of replacing Hilbert-Poincar\'{e} principle. These troublesome issues occupied the latest scientific conversations I had with Jean Pierre Badiali. \section{\textit{Pro tempore} conclusion regarding an arrow of time } The definition of a concept of time requires a unit which, within a progressively restrained point of view from $\mathbb{R}$ to $\mathbb{N}$ (or $\mathbb{Q}$) should match the set $[0,1]\cup]1,\infty]$ onto $[0,1]$, namely a basic loop. Backed on the TEISI model and a general $\alpha$-geodesic which provides a dynamic hyperbolic meaning to Riemann zeta functions, the use of $E_{pr}$ space and $\mathbb{N}$ self-similar category, offers the chance to understand the ambivalence of the concept of physical time. The ambivalence, that unfolds through a complex value of time, may be expressed using a pair of clocks HL and HG. HL is related to the additive monoids. Indeed, as shown from $\alpha$-expon\textbf{a}ntial model, $n\in N(s=\alpha+\ri\varphi)$ can be associated with the scanning of a hyperbolic distance $l_{\text{HL}}=(u/v)^{d}=(1/n)^{d}$ defined on the geodesic $Z_{\alpha}(\omega)$ according to the additive monoid $(N,+,\leqslant )$ (figure~\ref{fig1}). 
The computed hyperbolic ``path integral'' \cite{ALM CMA 2010} is no other than $\zeta(s)=\sum_{n\in\mathbb{N}}n^{-s}$. Therefore, the evolution of the $n$ along the geodesic is ruled by pulsing $n$. This reversible time can be easily extended to $\mathbb{Q}$ and to $\mathbb{R}$ as usually done. However, by arguing the concept of time from such a dynamic context, one reveals the existence of a second tempo on HG: $t=\omega\tau$ with $t\in\mathbb{N}$ capable of being tuned to the first clock only in the frame of von Neumann (operator) algebra, taking into account an appropriate phase chord $u/v=(\ri\omega\tau)^{1/d}$. Indeed, through the fractal metric, determination of the absolute values of this tempo and, therefore, the matching of both approaches implies the critical role of the phase $\varphi(\alpha)$ [or $\Delta(\alpha)$ if referred to $1/2$-geodesic], the phase which has an impact, without any possible avoidance, on the sign of the fibration $\mathbb{\mathbb{N}\times_{\varphi}\!\mathbb{N}=N}$. The second clock HG can be tuned in upon the pulse of the first one HL, like $\mathbb{N}\times\mathbb{N}$ must be tuned on $\mathbb{N}$ through a product, by adjusting the edges. Practically, two situations must be taken into consideration: \begin{itemize} \item $\alpha=1/2$, in the frame of the dynamic model, the base of the fibration is the $1/2$-expon\textbf{a}ntial geodesic which is a degenerate form of the dynamics characterized by the removal of any exteriority. This form is associated to Riemann hypothesis. The Laplace transform of non-integer operator exists. The energy fills in its usual Noetherian meaning. The spectrum of the operator applied on $\mathbb{N}$ can be built upon the set of prime numbers giving birth to the category of Hilbert eigenstates. Due to the quadratic form, the chord of both clocks can be easily obtained. The characteristics of Laplacian natural equation may account for this tuning which originates in the quadratic self-similar structure of $\mathbb{N}$: $\mathbb{N}^{2}$$=\mathbb{N}$ also expressed in the TEISI equation $\ri\omega\tau.[\eta(\omega\tau)]^{2}\simeq1$. The irreversibility of the time can only have an external origin; the thermal time unit is then nothing else than the unit of time associated to the Gaussian spatial correlations meshed by the temperature associated to an external thermostat, which, by locking the type of fluctuations, smooths the Peano interfacial geometry via a stochastic process. Fortunately, for energy efficiency, the engineering of batteries is not based on this principle. \item $\alpha\in]1/2,1],$ the dynamics is based on the incomplete $\alpha$-expon\textbf{a}ntial geodesic. There is not any natural Laplace transform for such geodesics and the spectrum over $\mathbb{N}$ cannot provide any simple basis for the representation of inner automorphisms joined together in a ``bundle'' $\{\tau_{n}\}$ which assures a completion, but an entanglement when the closing of the degrees of freedom becomes the heart of the physical issues. Fortunately, an integral involution can be built whose minimal expression can be based upon the hybrid complex set of couples $\left\{\zeta(s)\oplus\zeta(1-\bar{s});\zeta(\bar{s})\oplus\zeta(1-s)\right\}$ in which $\oplus$ expresses the disjoint sum of the basic ``geodesics''. $\{\zeta(1-\bar{s});\zeta(1-s)\}$ plays the role usually devoted to the inverse Laplace transform. 
These couples of functions that take into account, through $s$ and $\bar{s}$, the sign of the fibration (rotation in the complex plane of geodesics) assure the tuning of the complex dynamics and fix the status of the time taking into account the sign of fibration. This analytical context brings the two main issues to light. \begin{itemize} \item The question of commensurability of the couple clocks HL and HG which, as above, can be physically tuned by using a thermal regularization (entropy production). This regularization can be based upon a Legendre transform defined from the upper limit of the $\alpha$-geodesics. This transform is allowed by the possible thermodynamic involution between $\alpha$-geodesics and $1/2$-geodesics whose equation $|\Delta(\alpha)|+|\varphi(\alpha)|=\piup/4$ provides the insurance. This involution might explain the dissipative auto-organizations, well known in physics as well as the existence of some optimal values of fractal dimensions in irreversible processes, especially the critical dimensions, $d_{a}=4/3$ and $d_{g}=7/4$. \item Infinitely more meaningful is the presence of the phase angle $\pm\Delta(\alpha)\neq0$ which imposes an absolute distingue between both possible signs of the parameter of fibration and a non-commutativity of the associated operators for folding. In this context, and exclusively in this context, the reversibility of the cyclic operators, along the fiber must be expressed by $t_{1}.t_{2}=t_{2}.t_{1}\re^{\pm2\ri\Delta}$, non-commutative expression from which the notion of ``arrow of time'' takes on an irrefutable geometrical signification and, herein, an interfacial physical meaning. The irreversibility is then clearly based on the freedom of a boundary phase, namely the initial conditions, when $\mathbb{N}(\times\mathbb{N})_{\varphi(\alpha)}$ the fibration realized the matching between \textit{construction} and \textit{partition}. Intrinsic irreversibility should then originate from the boundary property. It is then in the thermodynamic framework that the $\pm$, namely, the difference between ``future'' and ``past'', must be analyzed, by assigning the emergence of time-energy to the distingue between the work and the heat. The arrow of time justifies the practical emergence of the distingue of HL and HG while Legendre transformation can ensure the mathematical validity of the passage from one to the other of the notions. \end{itemize} \end{itemize} These last elements very exactly summarize the content of the ultimate discussions shared with my friend Jean Pierre Badiali. Starting from Feynman analysis, his talent had assumed that the reversibility of the time usually required for representing the dynamics of quantum processes should be a very specific case (closed path integrals) of a more general situation (local dissipation, open path integrals and non-extensive set) fundamentally based on the local irreversibility and ultimately complicated by the convolution with a set of non-differential discrete paths. The problem of the ``open loops'' and their non-additive properties, will stay as an open issue for him. He assured with courage this uncomfortable position during his last ten years of research, exploring with me all trails capable of conferring a coherence to his mechanical approach. 
His vision matched, at least partially, the mainstream choices of quantum mechanics according to which the basis of macroscopic irreversibility should be the result of a statistical scaling closure, settled by the contact with a thermostat or an experimenter. In this paradigmatic framework, the concept of thermal time has no physical origin other than these externalities. As we have tried to show synthetically in this note, our last exchanges concerned the possibility of going beyond this option by building a hybrid point of view using the role of zeta functions. He attempted without success to introduce this function into his own model, but he understood the deep significance of the Riemann hypothesis for describing complex systems which possess well defined internal states. We had imagined writing a book together, titled ``Issues of Time''. The disappearance of Jean Pierre has not only suspended this project, but has left us scientifically orphaned in front of (i) the complexity of all open physical questions, (ii) the urgency of ensuring that science never reduces to technoscience alone and, furthermore, (iii) the conviction that the search for all truths still hidden in the shadows preserves forever its human dynamics. \ukrainianpart \title{From the arrow of time in Badiali's quantum approach to the dynamical meaning of the Riemann hypothesis} \author{P. Riot\refaddr{label1}, A. Le M\'ehaut\'e\refaddr{label1,label2,label3} } \addresses{ \addr{label1} Franco-Quebec Institute, Paris, France \addr{label2} Departments of Physics and Information Systems, Kazan Federal University, \\ Kazan, Tatarstan, Russian Federation \addr{label3} Materials Design, Montrouge, France } \makeukrtitle \begin{abstract} The novelty of the last scientific works of Jean-Pierre Badiali originates from a quantum approach based on (i) a return to the notion of trajectories (Feynman paths) and (ii) the irreversibility of quantum transitions. These iconoclastic choices re-establish the Hilbertian and von Neumann algebraic points of view by dealing with statistics over cycles. This approach gives an external thermodynamic origin to the notion of a quantum unit of time (the Rovelli-Connes thermal time). This notion, a basis for quantization, emerges here as a simple criterion of distinction between the quantum regime and the thermodynamic regime. The aim of this article is to reveal the content of the last five years of scientific discussions aimed at joining, within a coherent scheme, both the preferences and works of Jean-Pierre and the works of the present authors based on hyperbolic geodesics and on the unifying role of the Riemann zeta function. Although these options present no contradictions, they nevertheless give birth to their own arrow of time, different from the thermal time. The question of the physical content of the Riemann hypothesis as a basis of quantum mechanics, which was at the centre of our last discussions, is the essence of this article. \keywords path integrals, partial differential equations, zeta functions, arrow of time \end{abstract} \end{document}
\begin{document} \begin{center} \textbf{Unique continuation properties for Schr\"{o}dinger operators in Hilbert spaces} \textbf{Veli Shakhmurov} Department of Mechanical Engineering, Okan University, Akfirat, Tuzla 34959 Istanbul, Turkey, E-mail: [email protected] Institute of Mathematics and Mechanics, Azerbaijan National Academy of Sciences, Azerbaijan, AZ1141, Baku, F. Agaev, 9, E-mail: [email protected] \textbf{Abstract} \end{center} Here, the Morgan type uncertainty principle and unique continuation properties of abstract Schr\"{o}dinger equations with time dependent potentials are obtained in Hilbert space valued function classes. The equations contain a linear operator, depending on the space variables, in an abstract Hilbert space $H$. So, by selecting appropriate spaces $H$ and operators, we derive unique continuation properties for numerous classes of Schr\"{o}dinger type equations and their systems, which occur in a wide variety of physical systems. \textbf{Key Words: }Schr\"{o}dinger equations\textbf{, }Positive operators\textbf{, }Semigroups of operators, Unique continuation, Morgan type uncertainty principle \begin{center} \ \ \textbf{AMS 2010: 35Q41, 35K15, 47B25, 47Dxx, 46E40 } \textbf{1. Introduction, definitions} \end{center} In this paper, the unique continuation properties of the abstract Schr\"{o}dinger equation \begin{equation} i\partial _{t}u+\Delta u+A\left( x\right) u+V\left( x,t\right) u=0,\text{ } x\in R^{n},\text{ }t\in \left[ 0,1\right] , \tag{1.1} \end{equation} are studied, where $A=A\left( x\right) $ is a linear operator, $V\left( x,t\right) $ is a given potential operator function in a Hilbert space $H$, the subscript $t$ indicates the partial derivative with respect to $t$, $\Delta $ denotes the Laplace operator in $R^{n}$ and $u=u(x,t)$ is the $H$-valued unknown function. The goal is to obtain sufficient conditions on $A$, the potential $V$ and the behavior of the solution $u$ at two different times, $t_{0}=0$ and $t_{1}=1,$ which guarantee that $u\equiv 0$ in $R^{n}\times \lbrack 0,1]$. This linear result is then applied to show that two regular solutions $u_{1}$ and $u_{2}$ of the non-linear Schr\"{o}dinger equation \begin{equation} i\partial _{t}u+\Delta u+Au=F\left( u,\bar{u}\right) ,\text{ }x\in R^{n}, \text{ }t\in \left[ 0,1\right] , \tag{1.2} \end{equation} for general non-linearities $F$, must agree in $R^{n}\times \lbrack 0,1]$ when $u_{1}-u_{2}$ and its gradient decay faster than any quadratic exponential at times $0$ and $1$. Unique continuation properties for Schr\"{o}dinger equations have been studied in $\left[ \text{5-8}\right] ,$ $\left[ \text{21-23}\right] $ and the references therein. In contrast to the results mentioned above, we will study the unique continuation properties of abstract Schr\"{o}dinger equations with operator potentials. Abstract differential equations were studied e.g. in $\left[ 1\right] $, $\left[ 10\right] ,$ $\left[ 13-19\right] $, $\left[ 25\text{, }26\right] .$ Since the Hilbert space $H$ is arbitrary and $A$, $V$ are general linear operators, by choosing $H$, $A$ and $V$ we can obtain numerous classes of Schr\"{o}dinger type equations and their systems which occur in different applications. 
If we\ choose the abstract space $H$ a concrete Hilbert space, for example $H=L^{2}\left( G\right) $, $A=L,$ $ V=q\left( x,t\right) $, where $G$ is a domain in $R^{m}$ with smooth boundary, $L$ is a elliptic operator with respect to the variable $y\in G$ and $q$ is a complex valued function, then we obtain the unique continuation properties of the following Schr\"{o}dinger equation \begin{equation} \partial _{t}u=i\left[ \Delta u+Lu+q\left( x,t\right) u\right] ,\text{ }x\in R^{n},\text{ }y\in G,\text{ }t\in \left[ 0,1\right] , \tag{1.3} \end{equation} where $u=u\left( x,y,t\right) $. \ Moreover, let $H=L^{2}\left( 0,1\right) $ and let $A$ to be a differential operator with generalized Wentzell-Robin boundary condition defined by \begin{equation} D\left( A\right) =\left\{ u\in W^{2,2}\left( 0,1\right) ,\text{ } B_{j}u=Au\left( j\right) =0\text{, }j=0,1\right\} ,\text{ } \tag{1.4} \end{equation} \[ \text{ }Au=au^{\left( 2\right) }+bu^{\left( 1\right) }, \] where $a$ and $b$ are real valued functions on $\ R^{n}\times \left[ 0,1 \right] $. Then, we obtain the unique continuation properties of the Wentzell-Robin type boundary value problem (BVP) for the nonlinear Schredinger type equation \begin{equation} i\partial _{t}u+\Delta u+a\frac{\partial ^{2}u}{\partial y^{2}}+b\frac{ \partial u}{\partial y}=F\left( u,\bar{u}\right) ,\text{ } \tag{1.5} \end{equation} \[ u=u\left( x,y,t\right) \text{, }x\in R^{n},\text{ }y\in G,\text{ }t\in \left[ 0,1\right] , \] \ \ \ \begin{equation} a\left( j\right) u_{yy}\left( x,j,t\right) +b\left( j\right) u_{y}\left( x,j,t\right) =0\text{, }j=0,1\text{, for a.e. }x\in R^{n},\text{ }t\in \left( 0,1\right) . \tag{1.6} \end{equation} Note that, the regularity properties of Wentzell-Robin type BVP for elliptic equations have been studied e.g. in $\left[ \text{11, 12 }\right] $ and the references therein. Moreover, if put $H=l_{2}$ and choose $A$ and $V\left( x,t\right) $ as infinite matrices $\left[ a_{mj}\right] $, $\left[ b_{mj}\left( x,t\right) \right] ,$ $m,$ $j=1,2,...,\infty $ respectively, then we obtain the unique continuation properties of the following system of Schredinger equation \begin{equation} \partial _{t}u_{m}=i\left[ \Delta u_{m}+\sum\limits_{j=1}^{N}\left( a_{mj}+b_{mj}\left( x,t\right) \right) u_{j}\right] ,\text{ }x\in R^{n}, \text{ }t\in \left( 0,1\right) . \tag{1.7} \end{equation} Let $E$ be a Banach space and $\gamma =\gamma \left( x\right) $ be a positive measurable function on a domain $\Omega \subset R^{n}.$ Here, $ L_{p,\gamma }\left( \Omega ;E\right) $ denotes the space of strongly measurable $E$-valued functions that are defined on $\Omega $ with the norm \[ \left\Vert f\right\Vert _{_{p,\gamma }}=\left\Vert f\right\Vert _{L_{p,\gamma }\left( \Omega ;E\right) }=\left( \int \left\Vert f\left( x\right) \right\Vert _{E}^{p}\gamma \left( x\right) dx\right) ^{\frac{1}{p}}, \text{ }1\leq p<\infty , \] \[ \left\Vert f\right\Vert _{L_{\infty ,\gamma }\left( \Omega ;E\right) }=ess\sup\limits_{x\in \Omega }\left\Vert f\left( x\right) \right\Vert _{E}\gamma \left( x\right) ,\text{ }p=\infty . \] For $\gamma \left( x\right) \equiv 1$ the space $L_{p,\gamma }\left( \Omega ;E\right) $ will be denoted by $L^{p}=L^{p}\left( \Omega ;E\right) $ for $ p\in \left[ 1,\infty \right] .$ Let $\mathbf{p=}\left( p_{1},p_{2}\right) $ and $\Omega =\Omega _{1}\times \Omega _{2}$, where $\Omega _{k}\in R^{n_{k}}$. 
$L_{x}^{p_{1}}L_{t}^{p_{2}}\left( \Omega ;E\right) $ will denote the space of all $E$-valued, $\mathbf{p}$-summable functions with mixed norm, i.e., the space of all measurable functions $f$ defined on $\Omega $ equipped with the norm \[ \left\Vert f\right\Vert _{L_{x}^{p_{1}}L_{t}^{p_{2}}\left( \Omega ;E\right) }=\left( \dint\limits_{\Omega _{1}}\left( \dint\limits_{\Omega _{2}}\left\Vert f\left( x,t\right) \right\Vert _{E}^{p_{2}}dt\right) ^{\frac{p_{1}}{p_{2}}}dx\right) ^{\frac{1}{p_{1}}}. \] Let $H$ be a Hilbert space and \[ \left\Vert u\right\Vert =\left\Vert u\right\Vert _{H}=\left( u,u\right) _{H}^{\frac{1}{2}}=\left( u,u\right) ^{\frac{1}{2}}\text{ for }u\in H. \] For $p=2$ and $E=H$ a Hilbert space, $L^{2}\left( \Omega ;H\right) $ is a Hilbert space with the inner product \[ \left( f,g\right) _{L^{2}\left( \Omega ;H\right) }=\int\limits_{\Omega }\left( f\left( x\right) ,g\left( x\right) \right) _{H}dx \] for $f$, $g\in L^{2}\left( \Omega ;H\right) $. Let $C\left( \Omega ;E\right) $ denote the space of $E$-valued, bounded uniformly continuous functions on $\Omega $ with the norm \[ \left\Vert u\right\Vert _{C\left( \Omega ;E\right) }=\sup\limits_{x\in \Omega }\left\Vert u\left( x\right) \right\Vert _{E}. \] $C^{m}\left( \Omega ;E\right) $ will denote the space of $E$-valued bounded uniformly strongly continuous and $m$-times continuously differentiable functions on $\Omega $ with the norm \[ \left\Vert u\right\Vert _{C^{m}\left( \Omega ;E\right) }=\max\limits_{0\leq \left\vert \alpha \right\vert \leq m}\sup\limits_{x\in \Omega }\left\Vert D^{\alpha }u\left( x\right) \right\Vert _{E}. \] Moreover, $C_{0}^{\infty }\left( \Omega ;E\right) $ denotes the space of $E$-valued infinitely differentiable functions with compact support. Let \[ O_{r}=\left\{ x\in R^{n},\text{ }\left\vert x\right\vert <r\right\} \text{, } r>0. \] Let $\mathbb{N}$ denote the set of all natural numbers and $\mathbb{C}$ the set of all complex numbers. Let $\Omega $ be a domain in $R^{n}$ and let $m$ be a positive integer. $W^{m,p}\left( \Omega ;E\right) $ denotes the space of all functions $u\in L^{p}\left( \Omega ;E\right) $ that have generalized derivatives $\frac{\partial ^{m}u}{\partial x_{k}^{m}}\in L^{p}\left( \Omega ;E\right) ,$ $1\leq p\leq \infty ,$ with the norm \[ \left\Vert u\right\Vert _{W^{m,p}\left( \Omega ;E\right) }=\left\Vert u\right\Vert _{L^{p}\left( \Omega ;E\right) }+\sum\limits_{k=1}^{n}\left\Vert \frac{\partial ^{m}u}{\partial x_{k}^{m}}\right\Vert _{L^{p}\left( \Omega ;E\right) }<\infty . \] Let $E_{0}$ and $E$ be two Banach spaces and suppose $E_{0}$ is continuously and densely embedded into $E$. Here, $W^{m,p}\left( \Omega ;E_{0},E\right) $ denotes the space $W^{m,p}\left( \Omega ;E\right) \cap L^{p}\left( \Omega ;E_{0}\right) $ equipped with the norm \[ \left\Vert u\right\Vert _{W^{m,p}\left( \Omega ;E_{0},E\right) }=\left\Vert u\right\Vert _{L^{p}\left( \Omega ;E_{0}\right) }+\sum\limits_{k=1}^{n}\left\Vert \frac{\partial ^{m}u}{\partial x_{k}^{m}}\right\Vert _{L^{p}\left( \Omega ;E\right) }<\infty . \] Let $E_{1}$ and $E_{2}$ be two Banach spaces. $L\left( E_{1},E_{2}\right) $ will denote the space of all bounded linear operators from $E_{1}$ to $E_{2}.$ For $E_{1}=E_{2}=E$ it will be denoted by $L\left( E\right) .$ Let $A$ be a symmetric operator in a Hilbert space $H$ with domain $D\left( A\right) .$ Here, $H\left( A\right) $ denotes the domain of $A$ equipped with the graph norm, i.e. \[ \left\Vert u\right\Vert _{H\left( A\right) }=\left\Vert Au\right\Vert +\left\Vert u\right\Vert . 
\] The symmetric operator $A$ is positive definite if $D\left( A\right) $ is dense in $H$ and there exists a positive constant $C_{A}$, depending only on $A$, such that \[ \left( Au,u\right) \geq C_{A}\left\Vert u\right\Vert ^{2}. \] It is known (see e.g. $\left[ \text{24, \S 1.15.1}\right] $) that there exist fractional powers $A^{\theta }$ of the positive definite operator $A.$ Let $\left[ A,B\right] $ denote the commutator, i.e. \[ \left[ A,B\right] =AB-BA \] for linear operators $A$ and $B.$ Sometimes we use one and the same symbol $C$ without distinction in order to denote positive constants which may differ from each other even in a single context. When we want to specify the dependence of such a constant on a parameter, say $\alpha $, we write $C_{\alpha }$. \begin{center} \textbf{2.1. Main results for the abstract Schr\"{o}dinger equation} \end{center} Consider the problem $\left( 1.1\right) $. Here, \[ X=L^{2}\left( R^{n};H\right) \text{, }X\left( A\right) =L^{2}\left( R^{n};H\left( A\right) \right) ,\text{\ }Y^{k}=W^{2,k}\left( R^{n};H\right) \text{, }k\in \mathbb{N}. \] \textbf{Definition} \textbf{2.1}. A function $u\in L^{\infty }\left( 0,T;H\left( A\right) \right) $ is called a local weak solution to $\left( 1.1\right) $ on $\left( 0,T\right) $ if $u$ belongs to $L^{\infty }\left( 0,T;H\left( A\right) \right) \cap W^{1,2}\left( 0,T;H\right) $ and satisfies $\left( 1.1\right) $ in the sense of $L^{\infty }\left( 0,T;H\left( A\right) \right) .$ In particular, if $\left( 0,T\right) $ coincides with $\mathbb{R}$, then $u$ is called a global weak solution to $\left( 1.1\right) .$ If the solution of $\left( 1.1\right) $ belongs to $C\left( \left[ 0,T\right] ;H\left( A\right) \right) \cap W^{2,2}\left( 0,T;H\right) ,$ then it is called a strong solution. \textbf{Condition 1. }Assume: (1) $A=A\left( x\right) $ is a symmetric operator in the Hilbert space $H$ whose domain $D\left( A\right) $ is independent of $x$ and dense in $H;$ (2) $\frac{\partial A}{\partial x_{k}}$ are symmetric operators in the Hilbert space $H$ with domains $D\left( \frac{\partial A}{\partial x_{k}}\right) =D\left( A\right) $ independent of $x$, and \[ \dsum\limits_{k=1}^{n}\left( x_{k}\left[ A\frac{\partial f}{\partial x_{k}}-\frac{\partial A}{\partial x_{k}}f\right] ,f\right) _{X}\geq 0\text{ } \] for each $f\in L^{\infty }\left( 0,T;Y^{1}\left( A\right) \right) $; (3) \[ \dsum\limits_{k=1}^{n}\left( x_{k}\left[ A\frac{\partial f}{\partial x_{k}}+\frac{\partial A}{\partial x_{k}}f\right] ,f\right) _{X}\geq 0\text{ } \] for each $f\in L^{\infty }\left( 0,T;Y^{1}\left( A\right) \right) $; (4) $iA$ generates a Schr\"{o}dinger group and $V\left( x,t\right) \in L\left( H\right) $ for $\left( x,t\right) \in R^{n}\times \left[ 0,1\right] ;$ (5) there is a constant $C_{0}>0$ so that \[ \func{Im}\left( \left( A+V\right) \upsilon ,\upsilon \right) _{H}\geq C_{0}\mu \left( x,t\right) \left\Vert \upsilon \left( x,t\right) \right\Vert ^{2} \] for $x\in R^{n},$ $t\in \left[ 0,T\right] ,$ $T\in \left( 0,1\right] $ 
and $\upsilon \in D\left( A\right) $, where $\mu $ is a positive function in $L^{1}\left( 0,T;L^{\infty }\left( R^{n}\right) \right) ;$ (6) \[ \left\Vert V\right\Vert _{L^{\infty }\left( R^{n}\times \left( 0,1\right) ;L\left( H\right) \right) }\leq C\text{, }\lim\limits_{R\rightarrow \infty }\left\Vert V\right\Vert _{L_{t}^{1}L_{x}^{\infty }\left( L\left( H\right) \right) }=0, \] where \[ L_{t}^{1}L_{x}^{\infty }\left( L\left( H\right) \right) =L^{1}\left( 0,1;L^{\infty }\left( R^{n}\setminus O_{R};L\left( H\right) \right) \right) . \] Let $A=A\left( x\right) $ be a symmetric operator in the Hilbert space $H$ whose domain $D\left( A\right) $ is independent of $x$, and set \[ X\left( A\right) =L^{2}\left( R^{n};H\left( A\right) \right) ,\text{ } Y^{k}\left( A\right) =W^{2,k}\left( R^{n};H\left( A\right) \right) \text{, } k\in \mathbb{N}. \] Our main result in this paper is the following: \textbf{Theorem 1. }Assume Condition 1 holds and there exist constants $a_{0},$ $a_{1},$ $a_{2}>0$ such that for any $k\in \mathbb{Z}^{+}$ a solution $u\in C\left( \left[ 0,1\right] ;X\left( A\right) \right) $ of $\left( 1.1\right) $ satisfies \begin{equation} \dint\limits_{R^{n}}\left\Vert u\left( x,0\right) \right\Vert ^{2}e^{2a_{0}\left\vert x\right\vert ^{p}}dx<\infty ,\text{ for }p\in \left( 1,2\right) , \tag{2.1} \end{equation} \begin{equation} \dint\limits_{R^{n}}\left\Vert u\left( x,1\right) \right\Vert ^{2}e^{2k\left\vert x\right\vert ^{p}}dx<a_{2}e^{2a_{1}k^{\frac{q}{q-p}}}, \text{ }\frac{1}{p}+\frac{1}{q}=1. \tag{2.2} \end{equation} Moreover, there exists $M_{p}>0$ such that \begin{equation} a_{0}a_{1}^{p-2}>M_{p}. \tag{2.3} \end{equation} Then $u\left( x,t\right) \equiv 0.$ \textbf{Corollary 1. }Assume Condition 1 holds and \[ \lim\limits_{R\rightarrow \infty }\dint\limits_{0}^{1}\sup\limits_{\left\vert x\right\vert >R}\left\Vert V\left( x,t\right) \right\Vert _{L\left( H\right) }dt=0. \] Suppose there exist positive constants $\alpha $, $\beta $ such that a solution $u\in C\left( \left[ 0,1\right] ;X\left( A\right) \right) $ of $\left( 1.1\right) $ satisfies \begin{equation} \dint\limits_{R^{n}}\left\Vert u\left( x,0\right) \right\Vert ^{2}e^{2\left\vert \alpha x\right\vert ^{p}/p}dx+\dint\limits_{R^{n}}\left\Vert u\left( x,1\right) \right\Vert ^{2}e^{\frac{2\left\vert \beta x\right\vert ^{q}}{q}}dx<\infty ,\text{ } \tag{2.4} \end{equation} with \[ p\in \left( 1,2\right) \text{, }\frac{1}{p}+\frac{1}{q}=1 \] and there exists $N_{p}>0$ such that \begin{equation} \alpha \beta >N_{p}.\text{ } \tag{2.5} \end{equation} Then $u\left( x,t\right) \equiv 0.$ As a direct consequence of Corollary 1 we have the following result regarding the uniqueness of solutions for the nonlinear equation $\left( 1.2\right) $: \textbf{Theorem 2. 
}Assume Condition 1 holds and let $u_{1},$ $u_{2}\in C\left( \left[ 0,1\right] ;Y^{2,k}\left( A\right) \right) $ be strong solutions of $(1.2)$ with $k\in \mathbb{Z}^{+},$ $k>\frac{n}{2}.$ Suppose $F:H\times H\rightarrow H,$ $F\in C^{k}$, $F\left( 0\right) =\partial _{u}F\left( 0\right) =\partial _{\bar{u}}F\left( 0\right) =0$ and there exist positive constants $\alpha $, $\beta $ such that \begin{equation} e^{\frac{\left\vert \alpha x\right\vert ^{p}}{p}}\left( u_{1}\left( .,0\right) -u_{2}\left( .,0\right) \right) \in X,\text{ }e^{\frac{\left\vert \beta x\right\vert ^{q}}{q}}\left( u_{1}\left( .,1\right) -u_{2}\left( .,1\right) \right) \in X, \tag{2.6} \end{equation} with \[ p\in \left( 1,2\right) \text{, }\frac{1}{p}+\frac{1}{q}=1 \] and there exists $N_{p}>0$ such that \begin{equation} \alpha \beta >N_{p}.\text{ } \tag{2.7} \end{equation} Then $u_{1}\left( x,t\right) \equiv u_{2}\left( x,t\right) .$ \textbf{Corollary 2.} Assume Condition 1 holds and there exist positive constants $\alpha $ and $\beta $ such that a solution $u\in C\left( \left[ 0,1\right] ;X\left( A\right) \right) $ of $\left( 1.1\right) $ satisfies \begin{equation} \dint\limits_{R^{n}}\left\Vert u\left( x,0\right) \right\Vert ^{2}e^{\frac{2\left\vert \alpha x_{j}\right\vert ^{p}}{p}}dx+\dint\limits_{R^{n}}\left\Vert u\left( x,1\right) \right\Vert ^{2}e^{\frac{2\left\vert \beta x_{j}\right\vert ^{q}}{q}}dx<\infty ,\text{ } \tag{2.8} \end{equation} for $j=1,2,...,n$ and $p\in \left( 1,2\right) $, $\frac{1}{p}+\frac{1}{q}=1.$ Moreover, there exists $N_{p}>0$ such that \begin{equation} \alpha \beta >N_{p}.\text{ } \tag{2.9} \end{equation} Then $u\left( x,t\right) \equiv 0.$ \textbf{Remark 2.1. }Theorem 2 still holds, with a different constant $N_{p}>0$, if one replaces the hypothesis $(2.6)$ by \[ e^{\left\vert \alpha x_{j}\right\vert ^{p}/p}\left( u_{1}\left( .,0\right) -u_{2}\left( .,0\right) \right) \in X\left( A\right) ,\text{ }e^{\left\vert \beta x_{j}\right\vert ^{q}/q}\left( u_{1}\left( .,1\right) -u_{2}\left( .,1\right) \right) \in X\left( A\right) , \] for $j=1,2,...,n.$ Next, we shall extend the method used in the proofs of the above results to study the blow-up of the nonlinear Schr\"{o}dinger equations \begin{equation} i\partial _{t}u+\Delta u+Au+F\left( u,\bar{u}\right) u=0,\text{ }x\in R^{n}, \text{ }t\in \left[ 0,1\right] , \tag{2.10} \end{equation} where $A$ is a linear operator in a Hilbert space $H.$ Let $u\left( x,t\right) $ be a solution of the equation $\left( 2.10\right) .$ Then it can be shown that the function \begin{equation} v\left( x,t\right) =U\left( x,1-t\right) u\left( \frac{x}{1-t},\frac{t}{1-t}\right) \text{ } \tag{2.11} \end{equation} is a focusing $L^{2}$-critical solution of the abstract Schr\"{o}dinger equation \begin{equation} i\partial _{t}u+\Delta u+Au+\left\Vert u\right\Vert ^{\frac{4}{n}}u=0,\text{ }x\in R^{n},\text{ }t\in \left[ 0,1\right] \tag{2.12} \end{equation} which blows up at time $t=1,$ where $U\left( x,t\right) $ is a fundamental solution of the Schr\"{o}dinger equation \[ i\partial _{t}u+\Delta u+Au=0,\text{ }x\in R^{n},\text{ }t\in \left[ 0,1\right] , \] i.e. \[ U\left( x,t\right) =t^{-n/2}\exp \left\{ i\left( A+\left\vert x\right\vert ^{2}\right) /4t\right\} . \] By using the above result we will prove the following main result: \textbf{Theorem 3. 
}Assume Condition 1 holds and there exist positive constants $b_{0}$ and $\theta $ such that a solution $u\in C\left( \left[ -1,1\right] ;X\left( A\right) \right) $ of $\left( 2.10\right) $ satisfies $\left\Vert F\left( u,\bar{u}\right) \right\Vert \leq b_{0}\left\Vert u\right\Vert ^{\theta }$ for $\left\Vert u\right\Vert >1$. Suppose \[ \left\Vert u\left( .,t\right) \right\Vert _{X}=\left\Vert u\left( .,0\right) \right\Vert _{X}=\left\Vert u_{0}\right\Vert _{X}=a,\text{ }t\in \left( -1,1\right) \] and \begin{equation} \left\Vert u\left( x,t\right) \right\Vert \leq \left( 1-t\right) ^{-\frac{n}{2}}Q\left( \frac{\left\vert x\right\vert }{1-t}\right) ,\text{ }t\in \left( -1,1\right) , \tag{2.14} \end{equation} where \begin{equation} Q\left( x\right) =b_{1}^{-\frac{n}{2}}e^{-b_{2}\left\vert x\right\vert ^{p}}\text{, }b_{1}\text{, }b_{2}>0\text{, }p>1. \tag{2.15} \end{equation} If $p>p\left( \theta \right) =\frac{2\left( \theta n-2\right) }{\left( \theta n-1\right) },$ then $a=0.$ \begin{center} \textbf{2.2. Some auxiliary results} \end{center} First of all, we generalize the result of G. W. Morgan (see e.g. $\left[ 7\right] $) on the Morgan type uncertainty principle for the Fourier transform. \textbf{Lemma 2.0. }Let $f\left( x\right) \in L^{1}\left( R^{n};H\right) \cap X$ and \[ \dint\limits_{R^{n}}\dint\limits_{R^{n}}\left\Vert f\left( x\right) \right\Vert \left\Vert \hat{f}\left( \xi \right) \right\Vert e^{\left\vert x\cdot \xi \right\vert }\text{ }dxd\xi <\infty \text{.} \] Then $f\left( x\right) \equiv 0.$ In particular, using Young's inequality this implies: \textbf{Result 2.1.} Let \[ f\left( x\right) \in L^{1}\left( R^{n};H\right) \cap X,\text{ }p\in \left( 1,2\right) ,\text{ }\frac{1}{p}+\frac{1}{q}=1,\text{ }\alpha ,\beta >0 \] and \[ \dint\limits_{R^{n}}\left\Vert f\left( x\right) \right\Vert e^{\frac{\alpha ^{p}\left\vert x\right\vert ^{p}}{p}}dx+\dint\limits_{R^{n}}\left\Vert \hat{f}\left( \xi \right) \right\Vert e^{\frac{\beta ^{q}\left\vert \xi \right\vert ^{q}}{q}}d\xi <\infty \text{, }\alpha \beta >1. \] Then $f\left( x\right) \equiv 0.$ Indeed, for $\alpha \beta \geq 1$ Young's inequality gives $\left\vert x\cdot \xi \right\vert \leq \alpha \beta \left\vert x\right\vert \left\vert \xi \right\vert \leq \frac{\alpha ^{p}\left\vert x\right\vert ^{p}}{p}+\frac{\beta ^{q}\left\vert \xi \right\vert ^{q}}{q}$, so the integrability assumed in Result 2.1 implies the hypothesis of Lemma 2.0. In terms of the solution of the free Schr\"{o}dinger equation, the Morgan type uncertainty principle reads as follows: Let \[ u_{0}\left( .\right) \in L^{1}\left( R^{n};H\right) \cap X\left( A\right) \] and \[ \dint\limits_{R^{n}}\left\Vert u_{0}\left( x\right) \right\Vert e^{\frac{\alpha ^{p}\left\vert x\right\vert ^{p}}{p}}dx+\dint\limits_{R^{n}}\left\Vert e^{it\left( \Delta +A\right) }u_{0}\left( x\right) \right\Vert e^{\frac{\beta ^{q}\left\vert x\right\vert ^{q}}{q\left( 2t\right) ^{q}}}dx<\infty \text{, }\alpha \beta >1 \] for some $t\neq 0$. Then $u_{0}\left( x\right) \equiv 0.$ Let $L^{2}\left( H\right) =L^{2}\left( \left( 0,1\right) \times R^{n};H\right) .$ Consider the abstract Schr\"{o}dinger equation \begin{equation} \partial _{t}u=\left( a+ib\right) \left[ \Delta u+A\left( x\right) u+V\left( x,t\right) u+F\left( x,t\right) \right] ,\text{ }x\in R^{n},\text{ }t\in \left[ 0,1\right] , \tag{2.16} \end{equation} where $a$, $b$ are real numbers, $A$ is a linear operator, $V\left( x,t\right) $ is a given potential operator function in $H$ and $F\left( x,t\right) $ is a given $H$-valued function. 
Let\ $S=S\left( t\right) $ be a symmetric, $K=K\left( t\right) $ be a skew-symmetric operators in $H$, \ $f=f\left( .,t\right) \in X$ for $t\in \left[ 0,1\right] $ and \[ \text{ }Q\left( t\right) =\left( f,f\right) _{X}\text{, }D\left( t\right) =\left( Sf,f\right) _{X},\text{ }N\left( t\right) =D\left( t\right) Q^{-1}\left( t\right) , \] \[ \text{ }\partial _{t}S=S_{t}\text{ and }\left\vert \nabla \upsilon \right\vert _{H}^{2}=\dsum\limits_{k=1}^{n}\left\Vert \frac{\partial \upsilon }{\partial x_{k}}\right\Vert _{H}^{2}\text{ for }\upsilon \in W^{1,2}\left( R^{n};H\right) . \] Let \[ \eta \left( x\right) =\frac{\gamma a\left\vert x\right\vert ^{2}}{a+4\gamma \left( a^{2}+b^{2}\right) T},\text{ }L^{1,\infty }\left( 0,T;H\right) =L^{1}\left( 0,T;L^{\infty }\left( R^{n};H\right) \right) . \] \textbf{Lemma 2.1. \ }Let the Condition 1 holds$.$ Assume that $u\in L^{\infty }\left( \left[ 0,1\right] ;X\left( A\right) \right) \cap L^{2}\left( 0,1;Y^{1}\right) $ is a solution of $\left( 2.16\right) $ for $ a\in \mathbb{R}_{+}$ and $b\in \mathbb{R}.$ Then, \[ e^{-M_{T}}\left\Vert e^{\eta \left( x\right) }u\left( .,T\right) \right\Vert _{X}\leq \] \[ \left\Vert e^{\gamma \left\vert x\right\vert ^{2}}u\left( .,0\right) \right\Vert _{X}+\left( a^{2}+b^{2}\right) \left\Vert e^{\eta \left( x\right) }F\left( .,t\right) \right\Vert _{L^{1}\left( 0,T;X\right) }+\left\Vert A^{\frac{1}{2}}u\left( .,t\right) \right\Vert _{X}^{2}, \] uniformly in $t\in \left[ 0,1\right] ,$\ when $\gamma \geq 0,$ $0\leq T\leq 1 $ and \[ M_{T}=\left\Vert a\left( \func{Re}V\right) ^{+}-b\func{Im}V\right\Vert _{L^{1,\infty }\left( 0,T;H\right) }. \] \textbf{Proof. }Set $\upsilon $ $=e^{\varphi }u$, where $\varphi $ is a real-valued function to be chosen later. The function $\upsilon =\upsilon \left( x,t\right) $ verifies \begin{equation} \partial _{t}\upsilon =S\upsilon +K\upsilon +\left( a+ib\right) \left( Vf+e^{\gamma \varphi }F\right) \text{ in }R^{n}\times \left[ 0,1\right] \text{,} \tag{2.17} \end{equation} where $S,$ $K$ are symmetric and skew-symmetric operator, respectively given by \[ S=aA_{1}-ib\gamma B_{1}+\varphi _{t}+a\func{Re}V-b\func{Im}V,\text{ } K=ibA_{1}-a\gamma B_{1}+ \] \begin{equation} i\left( b\func{Re}\upsilon +a\func{Im}\upsilon \right) , \tag{2.18} \end{equation} here \[ A_{1}=\Delta +A\left( x\right) +\gamma ^{2}\left\vert \nabla \varphi \right\vert ^{2},\text{ }B_{1}=2\nabla \varphi .\nabla +\Delta \varphi . \] Formally, \ \[ \partial _{t}\left\Vert \upsilon \right\Vert _{X}^{2}=2\func{Re}\left( S\upsilon ,\upsilon \right) _{X}+2\func{Re}\left( \left( a+ib\right) e^{\gamma \varphi }F,\upsilon \right) _{X} \] for $t\geq 0.$ Again a formal integration by parts gives that \[ \func{Re}\left( S\upsilon ,\upsilon \right) _{X}=-a\dint\limits_{R^{n}}\left\vert \nabla \upsilon \right\vert _{H}^{2}dx+a\dint\limits_{R^{n}}\left( A\upsilon ,\upsilon \right) dx+a\gamma ^{2}\dint\limits_{R^{n}}\left\vert \nabla \varphi \right\vert ^{2}\left\Vert \upsilon \left( x,t\right) \right\Vert ^{2}dx+ \] \[ \dint\limits_{R^{n}}\varphi _{t}\left\Vert \upsilon \left( x,t\right) \right\Vert ^{2}dx+b\func{Im}\dint\limits_{R^{n}}\left( \bar{\upsilon}\nabla \varphi ,\nabla \upsilon \right) dx+\dint\limits_{R^{n}}\left( \left( a\func{ Re}V-b\func{Im}V\right) \upsilon ,\upsilon \right) dx. 
\] By Cauchy-Schwarz's inequality and in view of assumption on $A,$ $V$ we get \[ \partial _{t}\left\Vert \upsilon \right\Vert _{X}^{2}\leq 2a\left\Vert A^{ \frac{1}{2}}\upsilon \right\Vert _{X}^{2}+2\left\Vert \left( a\func{Re}V-b \func{Im}V\right) \upsilon \right\Vert _{L^{\infty }\left( L\left( H\right) \right) }\left\Vert \upsilon \left( .,t\right) \right\Vert _{X}^{2}+ \] \begin{equation} 2\sqrt{a^{2}+b^{2}}\left\Vert e^{\gamma \varphi }F\left( .,t\right) \right\Vert _{X}\left\Vert \upsilon \left( .,t\right) \right\Vert _{X}, \tag{2.19} \end{equation} when \begin{equation} \left( a+\frac{b^{2}}{a}\right) \left\vert \nabla \upsilon \right\vert ^{2}+\varphi _{t}\leq 0,\text{ for }x\in R^{n},\text{ }t\geq 0. \tag{2.20} \end{equation} Let \begin{equation} \phi _{r}\left( x\right) =\left\{ \begin{array}{c} \left\vert x\right\vert ^{2},\text{ \ \ }\left\vert x\right\vert \leq r \\ r^{2},\text{ \ }\left\vert x\right\vert >r \end{array} \right. \text{, }\varphi _{\rho ,r}=d\left( t\right) \theta _{\rho }\ast \phi _{r}\left( x\right) \text{, }\upsilon _{\rho ,r}=e^{\varphi _{\rho ,r}}u, \tag{2.21} \end{equation} where $\theta _{\rho }$ is a radial mollifier and \[ d\left( t\right) =\frac{\gamma a}{a+4\gamma \left( a^{2}+b^{2}\right) t}. \] Then by reasoning as in the end of proof $\left[ \text{8, Lemma 1}\right] ,$ from $\left( 2.19\right) $ we obtain that the estimate holds \[ \left\Vert \upsilon \left( .,T\right) \right\Vert _{X}\leq e^{M_{T}}\left[ \left\Vert e^{\gamma \left\vert x\right\vert ^{2}}u\left( .,T\right) \right\Vert _{X}\right. +2a\left\Vert A^{\frac{1}{2}}u\left( .,t\right) \right\Vert _{X}^{2}+ \] \[ \left. 2\sqrt{a^{2}+b^{2}}\left\Vert e^{\varphi _{\rho ,r}}F\right\Vert _{L^{1}\left( 0,T;X\right) }\right] \] holds uniformly in $\rho $, $r$ and $t\in \left[ 0,1\right] .$ Lemma 2.1 follows after letting $\rho $ tend to zero and $r$ to infinity. \textbf{Lemma 2.2. \ }Let $S=S\left( x,t\right) $ be a symmetric operator, $ K=K\left( x,t\right) $ be skew-symmetric in Hilbert space $H$ with independent on $x$ domain $D\left( S\right) $ and $D\left( K\right) ,$ respectively$.$\textbf{\ }Assume\ $G\left( .,t\right) \in L^{2}\left( R^{n}\right) $ is a positive funtion and $f(x,t)$ is a $H-$valued reasonable function. Then, \[ \frac{d^{2}}{dt^{2}}Q\left( t\right) =2\partial _{t}\func{Re}\left( \partial _{t}f-Sf-Kf,f\right) _{X}+2\left( S_{t}f+\left[ S,K\right] f,f\right) _{X}+ \] \[ \left\Vert \partial _{t}f-Sf+Kf\right\Vert _{X}^{2}-\left\Vert \partial _{t}f-Sf-Kf\right\Vert _{X}^{2} \] and \[ \partial _{t}N\left( t\right) \geq Q^{-1}\left( t\right) \left[ \left( S_{t}f+\left[ S,K\right] f,f\right) _{X}-\frac{1}{2}\left\Vert \partial _{t}f-Sf-Kf\right\Vert _{X}^{2}\right] . \] Moreover, if \[ \left\Vert \partial _{t}f-Sf-Kf\right\Vert _{H}\leq M_{1}\left\Vert f\right\Vert _{H}+G\left( x,t\right) ,\text{ }S_{t}+\left[ S,K\right] \geq -M_{0}\text{ } \] for $x\in R^{N}$, $t\in \left[ 0,1\right] $ and \[ M_{2}=\sup\limits_{t\in \left[ 0,1\right] }\left\Vert G\left( .,t\right) \right\Vert _{L^{2}\left( R^{n}\right) }\left\Vert f\left( .,t\right) \right\Vert _{X}^{-1}<\infty . \] Then $Q\left( t\right) $ is logarithmically convex in $[0,1]$ and there is a constant $M$ such that \[ Q\left( t\right) \leq e^{M\left( M_{0}+M_{1}+M_{2}+M_{1}^{2}+M_{2}^{2}\right) }Q^{1-t}\left( 0\right) Q^{t}\left( 1\right) \text{, }0\leq t\leq 1. 
\] \textbf{Proof.} It is clear that \[ \dot{Q}\left( t\right) =2\func{Re}\left( \frac{d}{dt}f,f\right) _{X}=2\func{ Re}\left( \frac{d}{dt}f-Sf-Kf,f\right) _{X}+2\left( Sf,f\right) _{X}. \] Also, \[ \dot{Q}\left( t\right) =\func{Re}\left( \frac{d}{dt}f+Sf,f\right) _{X}+\func{ Re}\left( \frac{d}{dt}f-Sf,f\right) _{X}, \] \[ \frac{d}{dt}D\left( t\right) =\frac{1}{2}\func{Re}\left( \frac{d}{dt} f+Sf,f\right) _{X}-\frac{1}{2}\func{Re}\left( \frac{d}{dt}f-Sf,f\right) _{X}. \] Then by reasoning as in $\left[ \text{8, Lemma 2}\right] $ we obtain the assertion. Consider the following abstract Schredinger equation \begin{equation} \partial _{t}u=\left( a+ib\right) \left[ \Delta u+A\left( x\right) u+V\left( x,t\right) u\right] +F\left( x,t\right) ,\text{ }x\in R^{n},\text{ }t\in \left[ 0,1\right] , \tag{2.22} \end{equation} Let \[ \Phi \left( A,V\right) \upsilon =a\func{Re}\left( \left( A+V\right) \upsilon ,\upsilon \right) -b\func{Im}\left( \left( A+V\right) \upsilon ,\upsilon \right) , \] \[ \text{ for }\upsilon =\upsilon \left( .,t\right) \in D\left( A\right) . \] \textbf{Lemma 2.3. \ }Let the Condition 1 holds. Assume that $a$, $ \gamma >0,$ $b\in \mathbb{R}.$ Let $u\in L^{\infty }\left( 0,1;X\left( A\right) \right) \cap L^{2}\left( 0,1;Y^{1}\right) $ be solution of the equation $\left( 2.22\right) $ and \[ \left\vert \Phi \left( A,V\right) u\left( x,t\right) \right\vert \leq C_{0}\eta \left( x,t\right) \left\Vert u\left( x,t\right) \right\Vert ^{2} \] for $x\in R^{n},$ $t\in \left[ 0,1\right] $ and $u\in D\left( A\right) $, where $\eta $ is a positive function in $L^{1}\left( 0,T;L^{\infty }\left( R^{n}\right) \right) $. Moreover, suppose \[ \sup\limits_{t\in \left[ 0,1\right] }\left\Vert V\left( .,t\right) \right\Vert _{B}\leq M_{1}\text{, }\left\Vert e^{\gamma \left\vert x\right\vert ^{2}}u\left( .,0\right) \right\Vert _{X}<\infty ,\text{ } \left\Vert e^{\gamma \left\vert x\right\vert ^{2}}u\left( .,1\right) \right\Vert _{X}<\infty \text{ } \] and \[ M_{2}=\sup\limits_{t\in \left[ 0,1\right] }\frac{\left\Vert e^{\gamma \left\vert x\right\vert ^{2}}F\left( .,t\right) \right\Vert _{X}}{\left\Vert u\right\Vert _{X}}<\infty . \] Then, $e^{\gamma \left\vert x\right\vert ^{2}}u\left( .,t\right) $ is logarithmically convex in $[0,1]$ and there is a constant $N$ such that \begin{equation} \left\Vert e^{\gamma \left\vert x\right\vert ^{2}}u\left( .,t\right) \right\Vert _{X}\leq e^{NM\left( a,b\right) }\left\Vert e^{\gamma \left\vert x\right\vert ^{2}}u\left( .,0\right) \right\Vert _{X}^{1-t}\left\Vert e^{\gamma \left\vert x\right\vert ^{2}}u\left( .,1\right) \right\Vert _{X}^{t} \tag{2.23} \end{equation} where \[ M\left( a,b\right) =\left( a^{2}+b^{2}\right) \left( \gamma M_{1}^{2}+M_{2}^{2}\right) +\sqrt{a^{2}+b^{2}}\left( M_{1}+M_{2}\right) \] when $0\leq t\leq 1.$ \textbf{Proof. }Let $f=e^{\gamma \varphi }u,$ where $\varphi $ is a real-valued function to be chosen. The function $f\left( x,t\right) $ verifies \begin{equation} \partial _{t}f=Sf+Kf+\left( a+ib\right) \left( Vf+e^{\gamma \varphi }F\right) \text{ in }R^{n}\times \left[ 0,1\right] \text{,} \tag{2.24} \end{equation} where $S$, $K$ are symmetric and skew-symmetric operator, respectively given by \[ S=aA_{1}-ib\gamma B_{1}+\gamma \varphi _{t},\text{ }K=ibA_{1}-a\gamma B_{1}, \] where \[ A_{1}=\Delta +A+\gamma ^{2}\left\vert \nabla \varphi \right\vert ^{2},\text{ }B_{1}=2\nabla \varphi .\nabla +\Delta \varphi . 
\] A calculation shows that \begin{equation} S_{t}+\left[ S,K\right] =\gamma \partial _{t}^{2}\varphi +2\gamma ^{2}a\nabla \varphi .\nabla \varphi _{t}-2ib\gamma \left( 2\nabla \varphi _{t}.\nabla +\Delta \varphi _{t}\right) - \tag{2.25} \end{equation} \[ \gamma \left( a^{2}+b^{2}\right) \left[ 4\nabla .\left( D^{2}\varphi \nabla \right) -4\gamma ^{2}D^{2}\varphi \nabla \varphi +\Delta ^{2}\varphi \right] +2\left[ A\left( x\right) \nabla \varphi .\nabla -\nabla \varphi .\nabla A \right] . \] If we put $\varphi =\left\vert x\right\vert ^{2}$, then $\left( 2.25\right) $ reduce the following \[ S_{t}+\left[ S,K\right] =-\gamma \left( a^{2}+b^{2}\right) \left[ 8\Delta -32\gamma ^{2}\left\vert x\right\vert ^{2}\right] +2\left[ A\left( x\right) \nabla \varphi .\nabla -\nabla \varphi .\nabla A\right] . \] Moreover by assumtion (2) of Condition 1, \[ \left( S_{t}f+\left[ S,K\right] f,f\right) =\gamma \left( a^{2}+b^{2}\right) \dint\limits_{R^{n}}\left( 8\left\Vert \left\vert \nabla f\right\vert \right\Vert ^{2}+32\gamma ^{2}\left\vert x\right\vert ^{2}\left\Vert f\right\Vert ^{2}\right) dx+ \] \[ 4\dsum\limits_{k=1}^{n}\dint\limits_{R^{n}}x_{k}\left( \left[ A\frac{ \partial f}{\partial x_{k}}-\frac{\partial A}{\partial x_{k}}f\right] ,f\right) dx\geq 0\text{ for }f\in L^{\infty }\left( 0,T;Y^{1}\left( A\right) \right) . \] This identity, the condition on $A=A\left( x\right) ,$ $V=V\left( x,t\right) $ and $\left( 2.24\right) $ imply that \begin{equation} \text{ }S_{t}+\left[ S,K\right] \geq 0, \tag{2.26} \end{equation} \[ \left\Vert \partial _{t}f-Sf-Kf\right\Vert _{X}\leq \sqrt{a^{2}+b^{2}}\left( M_{1}\left\Vert f\right\Vert _{X}+e^{\gamma \varphi }\left\Vert F\right\Vert _{X}\right) \text{.} \] If we knew that the quantities and calculations involved in the proof of Lemma 2.2. ( this fact are derived \i n a similar way as in $\left[ \text{8, Lemma 2}\right] $) were finite and correct, when $f=e^{\gamma \left\vert x\right\vert ^{2}}u$, we would have the logarithmic convexity of $ Q(t)=\left\Vert e^{\gamma \left\vert x\right\vert ^{2}}u\left( .,t\right) \right\Vert _{X}$ and get $(2.23)$ from Lemma 2.2. To justify the validity of the previous arguments now we consider the function $\varphi _{a}\left( x\right) $ constructed in $\left[ \text{8, Lemma 3}\right] $, i.e., \[ \varphi _{a}\left( x\right) =\left\{ \begin{array}{c} \left\vert x\right\vert ^{2},\text{ \ \ }\left\vert x\right\vert \leq 1 \\ \left( 2-a\right) ^{-1}\left( 2\left\vert x\right\vert ^{2-a}-a\right) , \text{ \ }\left\vert x\right\vert \geq 1 \end{array} \right. \] and replace $\varphi =\left\vert x\right\vert ^{2}$ by $\varphi _{a,\rho }=\theta _{\rho }\ast \varphi _{a},$ where $a,$ $\rho \in \left( 0,1\right) $ and $\theta \in C_{0}^{\infty }\left( R^{n}\right) $ is a radial function. From the proof $\left[ \text{8, Lemma 3}\right] $ we get that $\varphi _{a,\rho }$ is convex function and \begin{equation} \left\Vert \Delta \varphi _{a,\rho }\right\Vert _{\infty }\leq C\left( n,\rho \right) a. \tag{2.27} \end{equation} Put then, $f_{a,\rho }=e^{\gamma \varphi _{a,\rho }}u$ for $u\in X$ and $ Q_{a,\rho }\left( t\right) =$ $\left\Vert f_{a,\rho }\left( .,t\right) \right\Vert _{X}^{2}$ in Lemma 2.2. The decay bound in Lemma 2.1 and the interior regularity for solutions of $(2.23)$ can now be used qualitatively to make sure that the quantities or calculations involved in the proof of $ \left[ \text{8, Lemma 2}\right] $ ( with replicing $L^{2}$ by $X)$ are finite and correct for $f_{a,\rho }$. 
In this case, $f_{a,\rho }$ verifies \[ \partial _{t}f_{a,\rho }=S^{a,\rho }f_{a,\rho }+K^{a,\rho }f_{a,\rho }+\left( a+ib\right) \left[ A\left( x\right) f_{a,\rho }+e^{\gamma \varphi _{a,\rho }}F\left( x,t\right) \right] \] with symmetric and skew-symmetric operators $S^{a,\rho }$ and $K^{a,\rho }$ given by $(2.24)$ with $\varphi $ replaced by $\varphi _{a,\rho }$. The formula $\left( 2\text{.}25\right) $, the convexity of $\varphi _{a,\rho }$, the bounds $(2.26)$ and $(2.27)$ imply that the inequalities \[ \text{ }S_{t}^{a,\rho }+\left[ S^{a,\rho },K^{a,\rho }\right] \geq 0, \] \[ \left\Vert \partial _{t}f^{a,\rho }-Sf^{a,\rho }-K^{a,\rho }f\right\Vert _{X}\leq \sqrt{a^{2}+b^{2}}\left( M_{1}\left\Vert f^{a,\rho }\right\Vert _{X}+e^{\gamma \varphi _{a,\rho }}\left\Vert F\right\Vert _{X}\right) \text{. } \] hold and $M_{2}\left( a,\rho \right) \leq e^{C\left( n\right) \rho ^{2}}M_{2},$ when $a,\rho \in \left( 0,1\right) .$ Particularly, $Q_{a,\rho } $ is logarithmically convex in $[0,1]$ and \begin{equation} Q_{a,\rho }\left( t\right) \leq e^{N\left[ \left( a^{2}+b^{2}\right) \left( M_{1}^{2}+M_{2}^{2}\right) +\left( M_{1}+M_{2}\right) \sqrt{a^{2}+b^{2}} \right] }Q_{a,\rho }^{1-t}\left( 0\right) Q_{a,\rho }^{t}\left( 1\right) . \tag{2.28} \end{equation} Then, $(2.23)$ follows after taking first the limit, when a tends to zero in $(2.28)$ and then, when $\rho $ tends to zero. \begin{center} \textbf{3. Some properties of solutions of the abstract Schredinger equations } \end{center} Let \begin{center} \[ \sigma \left( t\right) =\left[ \alpha \left( 1-t\right) +\beta t\right] ^{-1},\text{ }\eta \left( x,t\right) =\left( \alpha -\beta \right) )|x|^{2} \left[ 4i(\alpha (1-t)+\beta t)\right] ^{-1},\text{ } \] \end{center} \[ \nu \left( s\right) =\left[ \gamma \alpha \beta \sigma ^{2}\left( s\right) + \frac{\left( \alpha -\beta \right) a}{4\left( a^{2}+b^{2}\right) }\sigma \left( s\right) \right] ,\text{ }\phi \left( x,t\right) =\text{ }\frac{ \gamma a\left\vert x\right\vert ^{2}}{a+4\gamma \left( a^{2}+b^{2}\right) t} . \] Let $u=u\left( x,s\right) $ be a solution of the equation \[ \partial _{s}u=i\left[ \Delta u+Au+V\left( y,s\right) u+F\left( y,s\right) \right] ,\text{ }y\in R^{n},\text{ }s\in \left[ 0,1\right] . \] and $a+ib\neq 0$, $\gamma \in \mathbb{R}$, $\alpha $, $\beta \in \mathbb{R} _{+}$. Set \begin{equation} \tilde{u}\left( x,t\right) =\left( \sqrt{\alpha \beta }\sigma \left( t\right) \right) ^{\frac{n}{2}}u\left( \sqrt{\alpha \beta }x\sigma \left( t\right) ,\beta t\sigma \left( t\right) \right) e^{\eta }. \tag{3.1} \end{equation} Then, $\tilde{u}\left( x,t\right) $\ verifies the equation \begin{equation} \partial _{t}\tilde{u}=i\left[ \Delta \tilde{u}+A\left( x\right) \tilde{u}+ \tilde{V}\left( x,t\right) \tilde{u}+\tilde{F}\left( x,t\right) \right] , \text{ }x\in R^{n},\text{ }t\in \left[ 0,1\right] \tag{3.2} \end{equation} with \begin{equation} \tilde{V}\left( x,t\right) =\alpha \beta \sigma ^{2}\left( t\right) V\left( \sqrt{\alpha \beta }x\sigma \left( t\right) ,\beta t\sigma \left( t\right) \right) , \tag{3.3} \end{equation} \begin{equation} \text{ }\tilde{F}\left( x,t\right) =\left( \sqrt{\alpha \beta }\sigma \left( t\right) \right) ^{\frac{n}{2}+2}\left( \sqrt{\alpha \beta }x\sigma \left( t\right) ,\beta t\sigma \left( t\right) \right) . 
\tag{3.4} \end{equation} Moreover, \begin{equation} \left\Vert e^{\gamma \left\vert x\right\vert ^{2}}\tilde{F}\left( .,t\right) \right\Vert _{X}=\alpha \beta \sigma ^{2}\left( t\right) e^{\nu \left\vert y\right\vert ^{2}}\left\Vert F\left( s\right) \right\Vert _{X}\text{, } \left\Vert e^{\gamma \left\vert x\right\vert ^{2}}\tilde{u}\left( .,t\right) \right\Vert _{X}=e^{\nu \left\vert y\right\vert ^{2}}\left\Vert u\left( s\right) \right\Vert _{X} \tag{3.5} \end{equation} when $s=\beta t\sigma \left( t\right) $. \textbf{Remark 3.1. }Let $\beta =\beta \left( k\right) .$ By assumption we have \[ \left\Vert e^{a_{0}\left\vert x\right\vert ^{p}}u\left( x,0\right) \right\Vert _{X}=a_{0}, \] \begin{equation} \left\Vert e^{k\left\vert x\right\vert ^{p}}u\left( x,0\right) \right\Vert _{X}=a_{k}\leq a_{2}e^{2a_{1}k^{\frac{q}{q-p}}}=a_{2}e^{2a_{1}k^{\frac{1}{2-p }}}. \tag{3.6} \end{equation} Thus, for $\gamma =\gamma (k)$ $\in \lbrack 0,\infty )$ to be chosen later, one has \begin{equation} \left\Vert e^{\gamma \left\vert x\right\vert ^{p}}\tilde{u}_{k}\left( x,0\right) \right\Vert _{X}=\left\Vert e^{\gamma \left( \frac{\alpha }{\beta }\right) ^{p/2}\left\vert x\right\vert ^{p}}u_{k}\left( x,0\right) \right\Vert _{X}=b_{0}, \tag{3.7} \end{equation} \[ \left\Vert e^{\gamma \left\vert x\right\vert ^{p}}\tilde{u}_{k}\left( x,1\right) \right\Vert _{X}=\left\Vert e^{\gamma \left( \frac{\beta }{\alpha }\right) ^{p/2}\left\vert x\right\vert ^{p}}u_{k}\left( x,1\right) \right\Vert _{X}=a_{k}. \] Let we take \[ \gamma \left( \frac{\alpha }{\beta }\right) ^{p/2}=a_{0}\text{ and }\gamma \left( \frac{\beta }{\alpha }\right) ^{p/2}=k, \] i.e. \begin{equation} \gamma =\left( ka_{0}\right) ^{\frac{1}{2}}\text{, }\beta =k^{\frac{1}{p}}, \text{ }\alpha =a_{0}^{\frac{1}{p}}. \tag{3.8} \end{equation} Let \[ M=\dint\limits_{0}^{1}\left\Vert V\left( .,t\right) \right\Vert _{L^{\infty }\left( R^{n};H\right) }dt=\dint\limits_{0}^{1}\left\Vert V\left( .,s\right) \right\Vert _{L^{\infty }\left( R^{n};H\right) }ds. \] From $\left( 3.2\right) $, using energy estimates it follows \begin{equation} e^{-M}\left\Vert u\left( .,0\right) \right\Vert _{X}\leq \left\Vert u\left( .,t\right) \right\Vert _{X}=\left\Vert \tilde{u}\left( .,s\right) \right\Vert _{X}\leq e^{M}\left\Vert u\left( .,0\right) \right\Vert _{X} \text{, }t,s\in \left[ 0,1\right] , \tag{3.9} \end{equation} where \[ s=\beta t\sigma \left( t\right) . \] Consider the following problem \begin{equation} i\partial _{t}u+\Delta u+A\left( x\right) u+V\left( x,t\right) u+F\left( x,t\right) =0,\text{ }x\in R^{n},\text{ }t\in \left[ 0,1\right] , \tag{3.10} \end{equation} \[ u\left( x,0\right) =u_{0}\left( x\right) ,\text{ } \] where $A=A\left( x\right) $ is a linear operator$,$ $V\left( x,t\right) $ is a given potential operator function in a Hilbert space $H$ and $F$ is a $H$ -valued function. Let as define operator valued integral operators in $L^{p}\left( \Omega ;H\right) $. Let $k$: $R^{n}\backslash \left\{ 0\right\} \rightarrow L\left( H\right) .$ We say $k\left( x\right) $ is a $L\left( H\right) $-valued Calderon-Zygmund kernel ($C-Z$ kernel) if $k\in C^{\infty }\left( R^{n}\backslash \left\{ 0\right\} ,L\left( H\right) \right) ,$ $k$ is homogenous of degree $-n,$ $\dint\limits_{B}k\left( x\right) d\sigma =0,$ where \[ B=\left\{ x\in R^{n}\text{: }\left\vert x\right\vert =1\right\} . 
\] For $f\in L^{p}\left( \Omega ;H\right) ,$ $p\in \left( 1,\infty \right) ,$ $ a\in L^{\infty }\left( R^{n}\right) ,$ and $x\in \Omega $ we set the Calderon-Zygmund operator \[ K_{\varepsilon }f=\dint\limits_{\left\vert x-y\right\vert >\varepsilon ,y\in \Omega }k\left( x,y\right) f\left( y\right) dy,\text{ }Kf=\lim\limits_{ \varepsilon \rightarrow 0}K_{\varepsilon }f \] and commutator operator \[ \left[ K;a\right] f=a\left( x\right) Kf\left( x\right) -K\left( af\right) \left( x\right) = \] \[ \lim\limits_{\varepsilon \rightarrow 0}\dint\limits_{\left\vert x-y\right\vert >\varepsilon ,y\in \Omega }k\left( x,y\right) \left[ a\left( x\right) -a\left( y\right) \right] f\left( y\right) dy. \] By using Calder\'{o}n's first commutator estimates $\left[ 4\right] ,$ convolution operators on abstract functions $\left[ 2\right] $ and abstract commutator theorem in $\left[ 20\right] $ we obtain the following result: \textbf{Theorem A}$_{2}.$ Assume $k(.)$ is $L\left( H\right) -$valued $C-Z$ kernel that have locally integrable first-order derivatives in $\left\vert x\right\vert >0$, and \[ \left\Vert k\left( x,y\right) -k\left( x^{\prime },y\right) \right\Vert _{L\left( E\right) }\leq M\left\vert x-x^{\prime }\right\vert \left\vert x-y\right\vert ^{-\left( n+1\right) }\text{ for }\left\vert x-y\right\vert >2\left\vert x-x^{\prime }\right\vert . \] Let $a(.)$ have first-order derivatives in $L^{r}\left( R^{n}\right) $,$ 1<r\leq \infty $. Then for $p$, $q\in \left( 1,\infty \right) ,$ $ q^{-1}=p^{-1}+r^{-1}$ the following estimates hold \[ \left\Vert \left[ K;a\right] \partial _{x_{j}}f\right\Vert _{L^{q}\left( R^{n};H\right) }\leq C\left\Vert f\right\Vert _{L^{p}\left( R^{n};H\right) }, \] \[ \left\Vert \partial _{x_{j}}\left[ K;a\right] f\right\Vert _{L^{q}\left( R^{n};H\right) }\leq C\left\Vert f\right\Vert _{L^{p}\left( R^{n};H\right) }, \] for $f\in C_{0}^{\infty }\left( R^{n};H\right) ,$\ where the constant $C>0$ is independent of $f.$ Let $X_{\gamma }$ denote the weithed Lebesque space $L_{\gamma }^{2}\left( R^{n}\text{:}H\right) $ with $\gamma \left( x\right) =e^{2\lambda .x}.$ By following $\left[ \text{5, Lemma 2.1}\right] $ let us show: \textbf{Lemma 3.1.} Assume that the Condition 1 holds and there exists $ \varepsilon _{0}>0$ such that \[ \left\Vert V\right\Vert _{L_{t}^{1}L_{x}^{\infty }\left( R^{n}\times \left[ 0,1\right] ;L\left( H\right) \right) }<\varepsilon _{0}. \] Moreover, suppose $u\in C\left( \left[ 0,1\right] ;X\left( A\right) \right) $ is a stronge solution of $\left( 3.10\right) $ with \[ u_{0},\text{ }u_{1}=u\left( x,1\right) \in X_{\gamma },\text{ }F\in L^{1}\left( 0,1;X_{\gamma }\right) \] for $\gamma \left( x\right) =e^{2\lambda .x}$ $\ $and for some $\lambda \in R^{n}.$ Then there exists a positive constant $M_{0}=M_{0}\left( n,A,H\right) $ independent of $\lambda $ such that \begin{equation} \sup\limits_{t\in \left[ 0,1\right] }\left\Vert u\left( .,t\right) \right\Vert _{X_{\gamma }}\leq M_{0}\left[ \left\Vert u_{0}\right\Vert _{X_{\gamma }}+\left\Vert u_{1}\right\Vert _{X_{\gamma }}+\dint\limits_{0}^{1}\left\Vert F\left( .,t\right) \right\Vert _{X_{\gamma }}dt\right] . \tag{3.11} \end{equation} \textbf{Proof. 
}First, we consider the case, when $\gamma \left( x\right) =\beta \left( x\right) =e^{2\beta x_{1}}.$ Without loss of generality we shall assume $\beta >0.$ Let $\varphi _{n}\in C^{\infty }\left( \mathbb{R} \right) $ such that $\varphi _{n}\left( \tau \right) =1$, $\tau \leq n$ and $ \varphi _{n}\left( \tau \right) =0$ for $\tau \geq 10n$ with $0\leq \varphi _{n}\leq 1,$ $\left\vert \varphi _{n}^{\left( j\right) }\left( \tau \right) \right\vert \leq C_{j}\tau n^{-j}.$ Let \[ \theta _{n}(\tau )=\beta \dint\limits_{0}^{\tau }\varphi _{n}^{2}\left( s\right) ds \] so that $\theta _{n}\in C^{\infty }\left( \mathbb{R}\right) $ nondecreasing with $\theta _{n}(\tau )=\beta \tau $ for $\tau <n,$ $\theta _{n}(\tau )=C_{n}\beta $ for $\tau >10n$ and \begin{equation} \theta _{n}^{\prime }(\tau )=\beta \varphi _{n}^{2}\left( \tau \right) \leq \beta ,\text{ }\theta _{n}^{\left( j\right) }(\tau )=\beta C_{j}n^{1-j}, \text{ }j=1,2,.... \tag{3.12} \end{equation} Let $\phi _{n}\left( \tau \right) =\exp \left( 2\theta _{n}(\tau )\right) $ so that $\phi _{n}\left( \tau \right) \leq \exp \left( 2\beta \tau \right) $ and $\phi _{n}\left( \tau \right) \rightarrow \exp \left( 2\beta \tau \right) $ for $n\rightarrow \infty .$ Let $u\left( x,t\right) $ be a solution of the equation $\left( 3.10\right) $, then one gets the equation $\upsilon _{n}\left( x,t\right) =\phi _{n}\left( x_{1}\right) u\left( x,t\right) $ satisfies the following \begin{equation} i\partial _{t}\upsilon _{n}+\Delta \upsilon _{n}+A\upsilon _{n}=V_{n}\left( x,t\right) \upsilon _{n}+\phi _{n}\left( x_{1}\right) F\left( x,t\right) , \tag{3.13} \end{equation} where \[ V_{n}\left( x,t\right) \upsilon _{n}=V\left( x,t\right) \upsilon _{n}+4\beta \varphi _{n}\left( x_{1}\right) \partial _{x_{1}}\upsilon _{n}+\left[ 4\beta \varphi _{n}\left( x_{1}\right) \varphi _{n}^{\prime }\left( x_{1}\right) -4\beta ^{2}\upsilon _{n}^{4}\right] \upsilon _{n}. \] Now, we consider a new function \[ w_{n}\left( x,t\right) =e^{\mu }\upsilon _{n}\left( x,t\right) ,\text{ }\mu =-i4\beta ^{2}\varphi _{n}^{4}\left( x_{1}\right) t. \] Then from $\left( 3.13\right) $ we get \[ i\partial _{t}w_{n}+\Delta w_{n}+Aw_{n}=\tilde{V}_{n}\left( x,t\right) w_{n}+ \tilde{F}_{n}\left( x,t\right) , \] where \[ \tilde{V}_{n}\left( x,t\right) w_{n}=V\left( x,t\right) w_{n}+h\left( x_{1},t\right) +a^{2}\left( x_{1}\right) \partial _{x_{1}}w_{n}+itb\left( x_{1}\right) \partial _{x_{1}}w_{n} \] \[ \tilde{F}_{n}\left( x,t\right) =e^{\mu }\phi _{n}\left( x_{1}\right) F\left( x,t\right) , \] when \[ h\left( x_{1},t\right) =\left( i16\beta ^{2}\varphi _{n}^{3}\varphi _{n}^{\prime }t\right) ^{2}+i48\beta ^{2}\varphi _{n}^{2}\left( \varphi _{n}^{\prime }\right) ^{2}t+i16\beta ^{2}\varphi _{n}^{3}\varphi _{n}^{\left( 2\right) }t+ \] \[ 4\beta \varphi _{n}\varphi _{n}^{\prime }+i64\beta ^{2}\varphi _{n}^{3}\varphi _{n}^{\prime }t,\text{ }a^{2}=4\beta \varphi _{n}^{2}\left( x_{1}\right) \text{, }b=-32\beta ^{2}\varphi _{n}^{3}\varphi _{n}^{\prime }. 
\] It is clear to see that \[ \left\Vert \partial _{x_{1}}^{j}h\left( x_{1},t\right) \right\Vert _{L^{\infty }\left( \mathbb{R}\times \left[ 0,1\right] \right) }\leq C_{j}n^{-\left( j+1\right) },\text{ }j=1,2,..., \] \begin{equation} a^{2}\left( x_{1}\right) \geq 0,\text{ }\left\Vert \partial _{x_{1}}^{j}a\left( x_{1}\right) \right\Vert _{L^{\infty }\left( \mathbb{R} \right) }\leq C_{j}n^{-j},\text{ }j=1,2,..., \tag{3.15} \end{equation} \[ \left\Vert \partial _{x_{1}}^{j}b\left( x_{1}\right) \right\Vert _{L^{\infty }\left( \mathbb{R}\right) }\leq C_{j}n^{-j},\text{ }j=1,2,.... \] Then by reasoning as in $\left[ \text{5, Lemma 2.1}\right] $ and by using the properties of symmetric operators $A$ and $V$, we obtain \[ \partial _{t}\left\vert \left( P_{\varepsilon }P_{+}w_{n},\upsilon \right) \right\vert ^{2}+2\func{Im}\left( \Delta P_{\varepsilon }P_{+}w_{n},\upsilon \right) \left( \bar{K}\left( P_{\varepsilon }P_{+}w_{n},\upsilon \right) \right) + \] \[ 2\func{Im}\left( AP_{\varepsilon }P_{+}w_{n},\upsilon \right) \left( \bar{K} \left( P_{\varepsilon }P_{+}w_{n},\upsilon \right) \right) =2\func{Im}\left( P_{\varepsilon }P_{+}\left( Vw_{n}\right) ,\upsilon \right) \left( \bar{K} \left( P_{\varepsilon }P_{+}w_{n},\upsilon \right) \right) + \] \[ 2\func{Im}\left( \left( P_{\varepsilon }P_{+}hw_{n}\right) ,\upsilon \right) \left( \bar{K}\left( P_{\varepsilon }P_{+}w_{n},\upsilon \right) \right) +2 \func{Im}\left( \left( P_{\varepsilon }P_{+}w_{n},\upsilon \right) \right) \bar{K}\left( P_{\varepsilon }P_{+}\left( \tilde{F}_{n}\right) ,\upsilon \right) + \] \begin{equation} 2\func{Im}\left( \left( P_{\varepsilon }P_{+}a^{2}\left( x_{1}\right) \partial _{x_{1}}w_{n}\right) ,\upsilon \right) \left( \bar{K}\left( P_{\varepsilon }P_{+}w_{n},\upsilon \right) \right) + \tag{3.16} \end{equation} \[ 2\func{Im}\left( \left( P_{\varepsilon }P_{+}ib\left( x_{1}\right) \partial _{x_{1}}w_{n}\right) ,\upsilon \right) \left( \bar{K}\left( P_{\varepsilon }P_{+}w_{n},\upsilon \right) \right) . \] Since for all $n\in \mathbb{Z}^{+}$, $w_{n}\left( .\right) \in X$, $ \tilde{F}_{n}\left( .,t\right) \in X$ and for a.e. $t\in \left[ 0,1\right] $ by integrating both sides of $\left( 3.16\right) $ on $R^{n}$ we get \[ \dint\limits_{R^{n}}\func{Im}\left( \Delta P_{\varepsilon }P_{+}w_{n},\upsilon \right) \left( \bar{K}\left( P_{\varepsilon }P_{+}w_{n},\upsilon \right) \right) dx=0. \] It is clear to see that \[ \left( u,\upsilon \right) _{X}=\left( u\left( .\right) ,\upsilon \left( .\right) _{H}\right) _{L^{2}\left( R^{n}\right) }\text{, for }u\text{, } \upsilon \in X. \] Then applying the Cauchy-Schwartz and Holder inequalites for a.e. 
$ t\in \left[ 0,1\right] $ we obtain \begin{equation} \dint\limits_{R^{n}}\func{Im}\left( P_{\varepsilon }P_{+}Vw_{n},\upsilon \right) \left( \bar{K}\left( P_{\varepsilon }P_{+}w_{n},\upsilon \right) \right) dx\leq C\left\Vert V\right\Vert _{B}\left\Vert w_{n}\right\Vert _{X}^{2}\left\Vert \upsilon \right\Vert _{X}^{2}, \tag{3.17} \end{equation} \begin{equation} \dint\limits_{R^{n}}\func{Im}\left( P_{\varepsilon }P_{+}hw_{n},\upsilon \right) \bar{K}\left( P_{\varepsilon }P_{+}w_{n},\upsilon \right) dx\leq C\left\Vert h\right\Vert _{L^{\infty }}\left\Vert w_{n}\right\Vert _{X}^{2}\left\Vert \upsilon \right\Vert _{X}^{2}, \tag{3.18} \end{equation} \begin{equation} \dint\limits_{R^{n}}\func{Im}\left( P_{\varepsilon }P_{+}\tilde{F} _{n}w_{n},\upsilon \right) \left( \bar{K}\left( P_{\varepsilon }P_{+}w_{n},\upsilon \right) \right) dx\leq C\left\Vert \tilde{F} _{n}\right\Vert _{X}\left\Vert w_{n}\right\Vert _{X}^{2}\left\Vert \upsilon \right\Vert _{X}^{2}. \tag{3.19} \end{equation} Moreover, again applying the Cauchy-Schwartz and Holder inequalities due to symmetricity of the operator $A,$ for a.e. $t\in \left[ 0,1\right] $ we get \begin{equation} \dint\limits_{R^{n}}\func{Im}\left( AP_{\varepsilon }P_{+}w_{n},\upsilon \right) \left( \bar{K}\left( P_{\varepsilon }P_{+}w_{n},\upsilon \right) \right) dx\leq C\left\Vert Aw_{n}\right\Vert _{X}\left\Vert w_{n}\right\Vert _{X}\left\Vert \upsilon \right\Vert _{X}^{2} \tag{3.20} \end{equation} where, the constant $C$ in $\left( 3.18\right) -\left( 3.20\right) $ is independent of $\upsilon \in C_{0}^{\infty }\left( R^{n};H\right) ,$ $ \varepsilon \in \left( 0,\left. 1\right] \right. $ and $n\in \mathbb{Z}^{+}.$ Since $C_{0}^{\infty }\left( R^{n};H\right) $ is dense in $X,$ from $\left( 3.17\right) $-$\left( 3.20\right) $ in view of operator theory in Hilbert spaces, we obtain the following \[ \left\vert \dint\limits_{R^{n}}\func{Im}\left( P_{\varepsilon }P_{+}Vw_{n}, \bar{K}P_{\varepsilon }P_{+}w_{n}\right) dx\right\vert \leq C\left\Vert V\right\Vert _{B}\left\Vert w_{n}\right\Vert _{X}^{2}, \] \[ \left\vert \dint\limits_{R^{n}}\func{Im}\left( P_{\varepsilon }P_{+}hw_{n}, \bar{K}P_{\varepsilon }P_{+}w_{n}\right) dx\right\vert \leq C\left\Vert h\right\Vert _{L^{\infty }}\left\Vert w_{n}\right\Vert _{X}^{2}\leq C\frac{1 }{n}\left\Vert w_{n}\right\Vert _{X}^{2}, \] \begin{equation} \left\vert \dint\limits_{R^{n}}\func{Im}\left( P_{\varepsilon }P_{+}\tilde{F} _{n}w_{n},\bar{K}P_{\varepsilon }P_{+}w_{n}\right) dx\right\vert \leq C\left\Vert \tilde{F}_{n}\right\Vert _{X}\left\Vert w_{n}\right\Vert _{X}^{2}, \tag{3.21} \end{equation} \[ \left\vert \dint\limits_{R^{n}}\func{Im}\left( AP_{\varepsilon }P_{+}w_{n}, \bar{K}P_{\varepsilon }P_{+}w_{n}\right) dx\right\vert \leq C\left\Vert Aw_{n}\right\Vert _{X}\left\Vert w_{n}\right\Vert _{X}^{2} \] For bounding the last two terms in $\left( 3.16\right) $ we will use the abstract version of Calder\'{o}n's first commutator estimates $\left[ 4 \right] .$ Really, by Cauchy-Schvartz inequality and in view of Theorem A$ _{2}$ we get \begin{equation} \left\Vert \left( \left[ P_{\pm };a\right] \partial _{x_{1}}f,\upsilon \right) \right\Vert _{X}\leq C\left\Vert \partial _{x_{1}}a\right\Vert _{L^{\infty }}\left\Vert f\right\Vert _{X}\left\Vert \upsilon \right\Vert _{X},\text{ } \tag{3.22} \end{equation} \begin{equation} \left\Vert \partial _{x_{1}}\left( \left[ P_{\pm };a\right] f,\upsilon \right) \right\Vert _{X}\leq C\left\Vert \partial _{x_{1}}a\right\Vert _{L^{\infty }}\left\Vert f\right\Vert _{X}\left\Vert \upsilon 
\right\Vert _{X},\text{ } \tag{3.23} \end{equation} Also, from the calculus of pseudodifferential operators with operator coefficients (see e.g. $\left[ 5\right] $ ) and the inequality $ (3.15)$, we have \begin{equation} \left\Vert \left( \left[ P_{\varepsilon };a\right] \partial _{x_{1}}f,\upsilon \right) \right\Vert _{X}\leq \frac{C}{n}\left\Vert f\right\Vert _{X}\left\Vert \upsilon \right\Vert _{X},\text{ } \tag{3.24} \end{equation} \begin{equation} \left\Vert \partial _{x_{1}}\left( \left[ P_{\varepsilon };a\right] f,\upsilon \right) \right\Vert _{X}\leq \frac{C}{n}\left\Vert f\right\Vert _{X}\left\Vert \upsilon \right\Vert _{X},\text{ } \tag{3.25} \end{equation} where the constant $C$ in $\left( 3.22\right) -(3.25)$ is independent of $ \varepsilon \in \lbrack 0,1]$ and $n.$ We remark that estimates $ (3.22)-(3.25)$ also hold with $b\left( x_{1}\right) $ replacing $a(x_{1})$. Since $C_{0}^{\infty }\left( R^{n};H\right) $ is dense in $X,$ from $\left( 3.22\right) $-$\left( 3.25\right) $ in view of operator theory in Hilbert spaces, we obtain \[ \left\Vert \left[ P_{\pm };a\right] \partial _{x_{1}}f\right\Vert _{X}\leq C\left\Vert \partial _{x_{1}}a\right\Vert _{L^{\infty }}\left\Vert f\right\Vert _{X},\text{ }\left\Vert \partial _{x_{1}}\left[ P_{\pm };a \right] f\right\Vert _{X}\leq C\left\Vert \partial _{x_{1}}a\right\Vert _{L^{\infty }}\left\Vert f\right\Vert _{X}, \] \begin{equation} \left\Vert \left( \left[ P_{\varepsilon };a\right] \partial _{x_{1}}f,\upsilon \right) \right\Vert _{X}\leq \frac{C}{n}\left\Vert f\right\Vert _{X},\text{ }\left\Vert \partial _{x_{1}}\left( \left[ P_{\varepsilon };a\right] f,\upsilon \right) \right\Vert _{X}\leq \frac{C}{n} \left\Vert f\right\Vert _{X}\text{ ,} \tag{3.26} \end{equation} and the same estimates $\left( 3.26\right) $ with $b(x_{1}$) replacing $ a(x_{1})$. By reasoning as in $\left[ \text{6, Lemma 2.1}\right] $ (claim 1 and 2 ) from $\left( 3.26\right) $\ we obtain \begin{equation} \left\vert \func{Im}\left( \left( P_{\varepsilon }P_{+}a^{2}\left( x_{1}\right) \partial _{x_{1}}w_{n},\bar{K}P_{\varepsilon }P_{+}w_{n}\right) \right) \right\vert \leq O\left( n^{-1}\left\Vert w_{n}\right\Vert _{X}\right) , \tag{3.27} \end{equation} \[ \left\vert \func{Im}\left( \left( P_{\varepsilon }P_{+}b\left( x_{1}\right) \partial _{x_{1}}w_{n},\bar{K}P_{\varepsilon }P_{+}w_{n}\right) \right) \right\vert \leq O\left( n^{-1}\left\Vert w_{n}\right\Vert _{X}\right) . \] Now, the estimates $\left( 3.21\right) $ and $\left( 3.27\right) $ implay the assertion. \begin{center} \textbf{4.} \textbf{Proof of Theorem 1} \end{center} \textbf{\ }We will apply Lemma 3.1 to a solution of the equation $(3.2)$. Since $0<\alpha <\beta =$ $\beta (k)$ for $k>k_{0}$ it follows that $\alpha \leq \sigma \left( t\right) \leq \beta $ for any $t\in \lbrack 0,1]$. 
Therefore if $y=\sqrt{\alpha \beta }x\sigma \left( t\right) $, then from $ \left( 3.8\right) $ we get \begin{equation} \sqrt{\alpha \beta ^{-1}}\left\vert x\right\vert \leq \left\vert y\right\vert \sqrt{\alpha ^{-1}\beta }\left\vert x\right\vert =\left( ka_{0}^{-1}\right) ^{\frac{1}{2p}}\left\vert x\right\vert \tag{4.1} \end{equation} Thus, \begin{equation} \left\Vert \alpha \beta \sigma ^{2}\left( t\right) V\left( \sqrt{\alpha \beta }x\sigma \left( t\right) ,\beta t\sigma \left( t\right) \right) \right\Vert _{L\left( H\right) }\leq \alpha ^{-1}\beta \left\Vert V\right\Vert _{B}=\left( ka_{0}^{-1}\right) ^{\frac{1}{p}}\left\Vert V\right\Vert _{B} \tag{4.2} \end{equation} and so, \begin{equation} \left\Vert \tilde{V}\left( .,t\right) \right\Vert _{L^{\infty }\left( R^{n};H\right) }\leq \left( ka_{0}^{-1}\right) ^{\frac{1}{p}}\left\Vert V\left( .,t\right) \right\Vert _{L^{\infty }\left( R^{n};H\right) }. \tag{4.3} \end{equation} Also, for $s=\beta t\sigma \left( t\right) $ it is clear that \begin{equation} \frac{ds}{dt}=\alpha \beta \sigma ^{2}\left( t\right) \text{, }dt=\left( \alpha \beta \right) ^{-1}\sigma ^{-2}\left( t\right) ds. \tag{4.4} \end{equation} Therefore, \[ \dint\limits_{0}^{1}\left\Vert \tilde{V}\left( .,t\right) \right\Vert _{L^{\infty }\left( R^{n};H\right) }dt=\dint\limits_{0}^{1}\left\Vert V\left( .,s\right) \right\Vert _{L^{\infty }\left( R^{n};H\right) }ds, \] and from $\left( 4.1\right) $ we get \begin{equation} \dint\limits_{0}^{1}\left\Vert \tilde{V}\left( .,t\right) \right\Vert _{L^{\infty }\left( \left\vert x\right\vert >r;H\right) }dt=\dint\limits_{0}^{1}\left\Vert V\left( .,s\right) \right\Vert _{L^{\infty }\left( \left\vert y\right\vert >\varkappa ;H\right) }ds, \tag{4.5} \end{equation} where \[ \varkappa =\left( a_{0}k^{-1}\right) ^{\frac{1}{2p}}r. \] So, if $\dint\limits_{0}^{1}\left\Vert V\left( .,s\right) \right\Vert _{L^{\infty }\left( \left\vert y\right\vert >\varkappa ;H\right) }ds<\varepsilon _{0}$ then, \[ \dint\limits_{0}^{1}\left\Vert \tilde{V}\left( .,t\right) \right\Vert _{L^{\infty }\left( \left\vert y\right\vert >r;H\right) }ds<\varepsilon _{0}, \text{ for }r=\varkappa \left( ka_{0}^{-1}\right) ^{\frac{1}{2p}} \] and we can applay Lemma 3.1 to the equation $\left( 3.2\right) $ with \[ \tilde{V}\mathbb{=}\tilde{V}_{\chi \left( \left\vert x\right\vert >r\right) }\left( x,t\right) \text{, }\tilde{F}\mathbb{=}\tilde{V}_{\chi \left( \left\vert x\right\vert <R\right) }\left( x,t\right) \tilde{u}\left( x,t\right) \] to get the following estimate \[ \sup\limits_{t\in \left[ 0,1\right] }\left\Vert e^{\nu }\tilde{u}\left( .,t\right) \right\Vert _{X}\leq M_{0}\left( \left\Vert e^{\nu }\tilde{u} \left( .,0\right) \right\Vert _{X}+\left\Vert e^{\nu }\tilde{u}\left( .,1\right) \right\Vert _{X}\right) + \] \begin{equation} M_{0}e^{M}e^{\nu _{0}}\left\Vert \tilde{V}\right\Vert _{B}\left\Vert u\left( .,0\right) \right\Vert _{X}, \tag{4.6} \end{equation} where $M$ a positive constant defined in Remark 2.1 and \[ B=L^{\infty }\left( R^{n}\times \left[ 0,1\right] ;L\left( H\right) \right) \text{, }\nu =\left( 2p\right) ^{\frac{1}{p}}\gamma ^{\frac{1}{p}}\lambda . \frac{x}{2},\text{ }\nu _{0}=\left\vert \lambda \right\vert \left( 2p\right) ^{\frac{1}{p}}\gamma ^{\frac{1}{p}}\frac{r}{2}. 
\] From $\left( 4.6\right) $ we have \[ \sup\limits_{t\in \left[ 0,1\right] }\dint\limits_{R^{n}}\left\Vert e^{\nu } \tilde{u}\left( .,t\right) \right\Vert _{H}^{2}dx\leq M_{0}\dint\limits_{R^{n}}e^{\nu }\left( \left\Vert \tilde{u}\left( .,0\right) \right\Vert _{H}^{2}+\left\Vert \tilde{u}\left( .,1\right) \right\Vert _{H}^{2}\right) dx+ \] \[ M_{0}e^{M}e^{\left\vert \lambda \right\vert \left( 2p\right) ^{\frac{1}{p} }\gamma ^{\frac{1}{p}r}}\left\Vert \tilde{V}\right\Vert _{B}\left\Vert u\left( .,0\right) \right\Vert _{X}^{2}, \] and multiply the above inequality by $e^{\left\vert \lambda \right\vert /q}\left\vert \lambda \right\vert ^{n\left( q-2\right) /2}$, integrate in $ \lambda $ and in $x$, use Fubini theorem and the following formula \begin{equation} e^{\gamma \left\vert x\right\vert ^{p}/p}\thickapprox \dint\limits_{R^{n}}e^{\gamma ^{\frac{1}{p}}\lambda .x-\left\vert \lambda \right\vert ^{q/q}}\left\vert \lambda \right\vert ^{n\left( q-2\right) /2}d\lambda , \tag{4.7} \end{equation} proven in $\left[ \text{7, Appendx}\right] $ to obtain \[ \dint\limits_{\left\vert x\right\vert >1}e^{2\gamma \left\vert x\right\vert ^{p}}\left\Vert \tilde{u}\left( .,t\right) \right\Vert _{H}^{2}dx\leq M_{0}\dint\limits_{R^{n}}e^{2\gamma \left\vert x\right\vert ^{p}}\left( \left\Vert \tilde{u}\left( .,0\right) \right\Vert _{H}^{2}+\left\Vert \tilde{ u}\left( .,1\right) \right\Vert _{H}^{2}\right) dx+ \] \begin{equation} M_{0}e^{2M}e^{2\gamma r^{p}}r^{C_{p}}\left\Vert \tilde{V}\right\Vert _{B}\left\Vert u\left( .,0\right) \right\Vert _{X}^{2}. \tag{4.8} \end{equation} Hence, the esimates $\left( 3.6\right) $, $\left( 3.8\right) $, $\left( 3.9\right) ,$ $\left( 4.3\right) $ and $\left( 4.8\right) $ imply \[ \sup\limits_{t\in \left[ 0,1\right] }\left\Vert e^{\gamma \left\vert x\right\vert ^{p}}\tilde{u}\left( .,t\right) \right\Vert _{X}\leq M_{0}\left( \left\Vert e^{\gamma \left\vert x\right\vert ^{p}}\tilde{u} \left( .,0\right) \right\Vert _{X}+\left\Vert e^{\gamma \left\vert x\right\vert ^{p}}\tilde{u}\left( .,1\right) \right\Vert _{X}\right) + \] \begin{equation} M_{0}e^{M}e^{\gamma }\left\Vert u\left( .,0\right) \right\Vert _{X}+M_{0}\left( ka_{0}^{-1}\right) ^{C_{p}}e^{M}e^{\varkappa ^{p}\gamma ^{ \frac{1}{2}}\left( ka_{0}^{-1}\right) }\left\Vert u\left( .,0\right) \right\Vert _{X}\left\Vert V\right\Vert _{B}\leq \tag{4.9} \end{equation} \[ M_{0}\left( a_{0}+a_{k}\right) +M_{0}e^{M}\left\Vert u\left( .,0\right) \right\Vert _{X}\left( e^{\gamma }+\left( ka_{0}^{-1}\right) ^{C_{p}}\left\Vert V\right\Vert _{B}\right) e^{k\varkappa ^{p}}\leq \] \[ M_{0}a_{k}=M_{0}e^{a_{1}k^{1/2-p}}\text{ for }k>k_{0}\left( M_{0}\right) \text{ sufficiently large.} \] Next, we shall obtain bounds for the $\nabla \tilde{u}$. Let $\tilde{\gamma}= \frac{\gamma }{2}$ and $\varphi $ be a strictly convex complex valued function on compact sets of $R^{n}$, radial such that (see $[7]$) \[ D^{2}\varphi \geq p(p-1)|x|^{(p-2)},\text{ for }|x|\geq 1, \] \[ \varphi \geq 0,\text{ }\left\Vert \partial ^{\alpha }\varphi \right\Vert _{L^{\infty }}\leq C\text{, }2\leq |\alpha |\leq 4,\text{ }\left\Vert \partial ^{\alpha }\varphi \right\Vert _{L^{\infty }\left( \left\vert x\right\vert <2\right) }\leq C\text{ for }|\alpha |\leq 4, \] \[ \varphi (x)=|x|^{p}+O(|x|)\text{, for }|x|>1. 
\] Let us consider the equation \begin{equation} \partial _{t}\upsilon =i\left( \Delta \upsilon +A\upsilon +F\left( x,t\right) \right) ,\text{ }x\in R^{n},\text{ }t\in \left[ 0,1\right] , \tag{4.10} \end{equation} where $F\left( x,t\right) =\tilde{V}\upsilon ,$ $A$ is a symmetric operator in $H$ and $\tilde{V}$ is a operator in $H$ defined by $\left( 3.3\right) .$ Let \[ f(x,t)=e^{\tilde{\gamma}\varphi }\upsilon (x,t),\text{ }Q\left( t\right) =\left( f(x,t),f(x,t)\right) _{H}, \] where $\upsilon $ is a solution of $\left( 4.10\right) $. Then, by reasoning as in Lemma 2.3 we have \begin{equation} \partial _{t}f=Sf+Kf+i\left[ A+e^{\tilde{\gamma}\varphi }F\right] \text{, } \left( x,t\right) \in R^{n}\times \left[ 0,1\right] , \tag{4.11} \end{equation} here $S$, $K$ are symmetric and skew-symmetric operator, respectively given by \begin{equation} S=-i\tilde{\gamma}\left( 2\nabla \varphi .\nabla +\Delta \varphi \right) \text{, }K=i\left( \Delta +A+\tilde{\gamma}^{2}\left\vert \nabla \varphi \right\vert ^{2}\right) . \tag{4.12} \end{equation} Let \[ \left[ S,K\right] =SK-KS\text{.} \] A calculation shows that, \[ SK=\tilde{\gamma}\left( 2\nabla \varphi .\nabla +\Delta \varphi \right) \left( \Delta +A+\tilde{\gamma}^{2}\left\vert \nabla \varphi \right\vert ^{2}\right) =\tilde{\gamma}\left( 2\nabla \varphi .\nabla +\Delta \varphi \right) \Delta + \] \[ \tilde{\gamma}\left( 2\nabla \varphi .\nabla +\Delta \varphi \right) A+ \tilde{\gamma}^{3}\left\vert \nabla \varphi \right\vert ^{2}\left( 2\nabla \varphi .\nabla +\Delta \varphi \right) , \] \[ KS=\tilde{\gamma}\left[ \Delta \left( 2\nabla \varphi .\nabla +\Delta \varphi \right) +A\left( 2\nabla \varphi .\nabla +\Delta \varphi \right) \right] +\tilde{\gamma}^{3}\left\vert \nabla \varphi \right\vert ^{2}\left( 2\nabla \varphi .\nabla +\Delta \varphi \right) , \] \[ \left[ S,K\right] =\tilde{\gamma}\left[ \left( 2\nabla \varphi .\nabla +\Delta \varphi \right) \Delta -\Delta \left( 2\nabla \varphi .\nabla +\Delta \varphi \right) \right] +2\tilde{\gamma}\left( \nabla \varphi .\nabla A-A\nabla \varphi .\nabla \right) , \] \begin{equation} \text{ }S_{t}+\left[ S,K\right] =-2i\tilde{\gamma}\left( \nabla \varphi .\partial _{t}\nabla +\Delta \varphi \partial _{t}\right) + \tag{4.13} \end{equation} \[ \tilde{\gamma}\left[ \left( 2\nabla \varphi .\nabla +\Delta \varphi \right) \Delta -\Delta \left( 2\nabla \varphi .\nabla +\Delta \varphi \right) \right] +2\tilde{\gamma}\left( \nabla \varphi .\nabla A-A\nabla \varphi .\nabla \right) . \] By Lemma 2.2 \[ Q^{^{\prime \prime }}\left( t\right) =2\partial _{t}\func{Re}\left( \partial _{t}f-Sf-Kf,f\right) _{X}+2\left( S_{t}f+\left[ S,K\right] f,f\right) _{X}+ \] \begin{equation} \left\Vert \partial _{t}f-Sf+Kf\right\Vert _{X}^{2}-\left\Vert \partial _{t}f-Sf-Kf\right\Vert _{X}^{2}, \tag{4.14} \end{equation} so, \begin{equation} Q^{^{\prime \prime }}\left( t\right) \geq 2\partial _{t}\func{Re}\left( \partial _{t}f-Sf-Kf,f\right) _{X}+2\left( S_{t}f+\left[ S,K\right] f,f\right) _{X}. \tag{4.15} \end{equation} Multiplying $(4.15)$ by $t(1-t)$ and integrating in $t$ we obtain \[ \dint\limits_{0}^{1}t(1-t)\left( S_{t}f+\left[ S,K\right] f,f\right) _{X}dt\leq \] \[ M_{0}\left[ \sup\limits_{t\in \left[ 0,1\right] }\left\Vert e^{\tilde{\gamma} \varphi }\upsilon \left( .,t\right) \right\Vert _{X}+\sup\limits_{t\in \left[ 0,1\right] }\left\Vert e^{\tilde{\gamma}\varphi }F\left( .,t\right) \right\Vert _{X}\right] . 
\] This computation can be justified by parabolic regularization using the fact that we already know the decay estimate for the solution of $\left( 4.10\right) $. Hence, combining $(3.8)$, $(4.3)$ and $(4.9$) it follows that \[ \tilde{\gamma}\dint\limits_{0}^{1}\dint\limits_{R^{n}}t(1-t)D^{2}\varphi \left( x,t\right) \left( \nabla f,\nabla f\right) _{H}dxdt+\tilde{\gamma} ^{3}\dint\limits_{0}^{1}\dint\limits_{R^{n}}t(1-t)D^{2}\varphi \left( x,t\right) \left( \nabla f,\nabla f\right) _{H}dxdt\leq \] \begin{equation} M_{0}\left[ \sup\limits_{t\in \left[ 0,1\right] }\left\Vert e^{\tilde{\gamma} \varphi }\upsilon \left( .,t\right) \right\Vert _{X}\left( 1+\left\Vert \tilde{V}\left( .,t\right) \right\Vert _{L^{\infty }\left( R^{n};L\left( H\right) \right) }\right) +\tilde{\gamma}\sup\limits_{t\in \left[ 0,1\right] }\left\Vert e^{\tilde{\gamma}\varphi }\upsilon \left( .,t\right) \right\Vert _{X}\right] \leq \tag{4.16} \end{equation} \[ M_{0}k^{C_{p}}a_{k}. \] It is clear to see that \[ \nabla f=\tilde{\gamma}\upsilon e^{\tilde{\gamma}\varphi }\nabla \varphi +e^{ \tilde{\gamma}\varphi }\nabla \upsilon \text{.} \] So, by using the properties of $\varphi $ we get \[ \text{ }\left\vert e^{2\tilde{\gamma}\varphi }D^{2}\varphi \left\vert \nabla \varphi \right\vert ^{2}\right\vert \leq C_{p}e^{3\gamma \varphi \mid 2}. \] From here, we can conclude that \[ \gamma \dint\limits_{0}^{1}\dint\limits_{R^{n}}t(1-t)\left( 1+\left\vert x\right\vert \right) ^{p-2}\left\Vert \nabla \upsilon \left( x,t\right) \right\Vert _{H}e^{\gamma \left\vert x\right\vert ^{p}}dxdt+ \] \begin{equation} \sup\limits_{t\in \left[ 0,1\right] }\left\Vert e^{\frac{\gamma \left\vert x\right\vert ^{p}}{2}}\upsilon \left( .,t\right) \right\Vert _{X}\leq C_{0}k^{C_{p}}a_{k}^{2}=C_{0}k^{C_{p}}e^{a\left( k,p\right) } \tag{4.17} \end{equation} for $k\geq k_{0}(M_{0})$ sufficiently large, where \[ a\left( k,p\right) =C_{\mu }M_{0}k^{C_{p}}e^{2a_{1}k^{1\mid \left( 2-p\right) }}. \] For proving Theorem 1 first, we deduce the following estimate \begin{equation} \dint\limits_{\left\vert x\right\vert <\frac{R}{2}}\dint\limits_{\nu _{1}}^{\nu _{2}}\left\Vert \tilde{u}\left( x,t\right) \right\Vert _{H}dtdx\geq C_{\nu }e^{-M}\left\Vert u\left( .,0\right) \right\Vert _{X}, \tag{4.18} \end{equation} for $r$ sufficiently large, $\nu _{1}$, $\nu _{2}\in \left( 0,1\right) $, $ \nu _{1}<\nu _{2}$ and $\nu =\left( \nu _{1},\nu _{2}\right) $. 
From $\left( 3.1\right) $ by using the change of variables $s=\beta t\sigma \left( t\right) $ and $y=\sqrt{\alpha \beta }x\sigma \left( t\right) $ we get \begin{equation} \dint\limits_{\left\vert x\right\vert <\frac{r}{2}}\dint\limits_{\nu _{1}}^{\nu _{2}}\left\Vert \tilde{u}\left( x,t\right) \right\Vert _{H}^{2}dtdx= \tag{4.19} \end{equation} \[ \left( \alpha \beta \right) ^{\frac{n}{2}}\dint\limits_{\left\vert x\right\vert <\frac{R}{2}}\dint\limits_{\nu _{1}}^{\nu _{2}}\left\vert \sigma \left( t\right) \right\vert ^{n}\left\Vert u\left( \sqrt{\alpha \beta }x\sigma \left( t\right) ,\beta t\sigma \left( t\right) \right) \right\Vert _{H}^{2}dtdx\geq \] \[ M_{0}\frac{\beta }{\alpha }\dint\limits_{\left\vert y\right\vert <R_{0}}\dint\limits_{s\nu _{1}}^{s\nu _{2}}\left\Vert u\left( y,s\right) \right\Vert _{H}^{2}\frac{dsdy}{s^{2}}\geq M_{0}\frac{\beta }{\alpha } \dint\limits_{\left\vert y\right\vert <R_{0}}\dint\limits_{s\nu _{1}}^{s\nu _{2}}\left\Vert u\left( y,s\right) \right\Vert _{H}^{2}dsdy \] for $k>M_{0},$ $s\nu _{1}>\frac{1}{2}$ and $r_{0}=r\left( ka_{0}^{-1}\right) ^{\frac{1}{2p}}.$ Thus, taking \begin{equation} r>\omega \left( ka_{0}^{-1}\right) ^{\frac{1}{2p}} \tag{4.20} \end{equation} with $\omega =\omega (u)$ a constant to be determined, it follows that \[ \Phi \geq M_{0}\frac{\beta }{\alpha }\dint\limits_{\left\vert y\right\vert <\omega }\dint\limits_{s\nu _{1}}^{s\nu _{2}}\left\Vert u\left( y,s\right) \right\Vert _{H}^{2}dsdy, \] where the interval $I=I_{k}=[s\nu _{1},s\nu _{2}]$ satisfies $I\subset \lbrack 1/2,1]$ for $k$ sufficiently large. Moreover, given $\varepsilon >0$ there exists $k_{0}(\varepsilon )>0$ such that for any $k\geq k_{0}$ one has that $I_{k}\subset \lbrack 1-\varepsilon ,1]$. By hypothesis on $u(x,t)$, i.e. the continuity of $\left\Vert u(\text{\textperiodcentered } ,s)\right\Vert _{X}$ at $s=1$, it follows that there exists $\omega >1$ and $ K_{0}=K_{0}(u)$ such that for any $k\geq K_{0}$ and for any $s\in I_{k}$ \[ \dint\limits_{\left\vert y\right\vert <\omega }\left\Vert u\left( y,s\right) \right\Vert _{H}^{2}dy\geq C_{\nu }e^{-M}\left\Vert u\left( .,0\right) \right\Vert _{X}, \] which yields the desired result. Next, we deduce the following estimate \begin{equation} \dint\limits_{\left\vert x\right\vert <R}\dint\limits_{\mu _{1}}^{\mu _{2}}\left( \left\Vert \tilde{u}\left( x,t\right) \right\Vert _{H}^{2}+\left\Vert \nabla \tilde{u}\left( x,t\right) \right\Vert _{H}^{2}+\left\Vert A\tilde{u}\left( x,t\right) \right\Vert _{H}^{2}\right) dtdx\leq \tag{4.21} \end{equation} \[ C_{n\mu }C_{0}k^{C_{p}}e^{a\left( k,p\right) }, \] for $r$ sufficiently large, $\nu _{1},$ $\nu _{2}\in \left( 0,1\right) $, $ \mu _{1}=\frac{\left( \nu _{2}-\nu _{1}\right) }{8}$, $\mu _{2}=1-\mu _{1},$ $\mu _{1}<\mu _{2}$ and $\mu =\left( \mu _{1},\mu _{2}\right) $. 
Indeed, from $\left( 3.9\right) $ and $\left( 4.17\right) $\ we obtain \[ \dint\limits_{\left\vert x\right\vert <R}\dint\limits_{\mu _{1}}^{\mu _{2}}\left\Vert \tilde{u}\left( x,t\right) \right\Vert _{H}dtdx\leq C_{\mu }e^{2M}\left\Vert u\left( .,0\right) \right\Vert _{X}, \] \begin{equation} \dint\limits_{\mu _{1}}^{\mu _{2}}\dint\limits_{\left\vert x\right\vert <R}\left\Vert \nabla \tilde{u}\left( x,t\right) \right\Vert _{H}dtdx\leq C_{n\mu }\dint\limits_{\mu _{1}}^{\mu _{2}}\dint\limits_{\left\vert x\right\vert <R}t(1-t)\left\Vert \nabla \upsilon \left( x,t\right) \right\Vert _{H}e^{\gamma \left\vert x\right\vert ^{p}}dtdx\leq \tag{4.22} \end{equation} \[ C_{\mu }\gamma ^{-1}r^{2-p}C_{0}k^{C_{p}}A_{k}^{2}\leq C_{\mu }C_{0}k^{C_{p}}e^{2a_{1}k^{\left( 2-p\right) ^{-1}}}. \] Hence, from $\left( 4.22\right) $ we get $\left( 4.21\right) $ for $k\geq k_{0}(C_{0})$ sufficiently large. Let $Y=L^{2}\left( R^{n}\times \left[ 0,1\right] ;H\right) $. By reasoning as in $\left[ 6\text{, Lemma 3.1}\right] $ we obtain \textbf{Lemma 4.1}. Assume the assumpt\i ions (1) and \ (3) of Condition 1 are satisfied. Suppose that $r>0$ and $\varphi $ : $[0,1]\rightarrow \mathbb{ R}$ is a smooth function. Then, there exists $C=C(n,\varphi ,H,A)>0$ such that, the inequality \[ \dsum\limits_{k=1}^{n}x_{k}\left[ \left( \frac{\partial A}{\partial x_{k}} g,g\right) +\left( \frac{\partial g}{\partial x_{k}},Ag\right) _{X}\right] + \frac{\varkappa ^{\frac{3}{2}}}{r^{2}}\left\Vert e^{\varkappa \left\vert \psi \right\vert }g\right\Vert _{Y}\leq C\left\Vert e^{\varkappa \left\vert \psi \right\vert }i\left( \partial _{t}g+\Delta g+Ag\right) \right\Vert _{Y} \] holds, for $\varkappa \geq CR^{2}$ and $g\in C_{0}^{\infty }\left( R^{n+1};H\left( A\right) \right) $ with support contained in the set \[ \left\{ x,t:\text{ }\left\vert \psi \left( x,t\right) \right\vert =\left\vert \frac{x}{r}+\varphi \left( t\right) e_{1}\right\vert \geq 1\right\} . \] \textbf{Proof. }Let $f=e^{\alpha \left\vert \psi \left( x,t\right) \right\vert ^{2}}g.$ Then, by acts of Schredinger operator $\left( i\partial _{t}+\Delta +A\right) $ to $f\in X\left( A\right) $ we get \[ e^{\alpha \left\vert \psi \left( x,t\right) \right\vert ^{2}}\left( \left( i\partial _{t}g+\Delta g+Ag\right) \right) =S_{\alpha }f-4\alpha K_{\alpha }f, \] where \[ S_{\alpha }=\left( i\partial _{t}+\Delta +A\right) +\frac{4\alpha ^{2}}{r^{2} }\text{ }\left\vert \psi \left( x,t\right) \right\vert ^{2},\text{ } \] \[ K_{\alpha }=\frac{1}{r}\left( \frac{x}{r}+\varphi \left( t\right) e_{1}\right) .\nabla +\frac{n}{r^{2}}+\frac{i\varphi ^{\prime }}{2}\left( \frac{x_{1}}{r}+\varphi \left( t\right) \right) . \] Hence, \[ \left( S_{\alpha }\right) ^{\ast }=S_{\alpha }\text{, }\left( K_{\alpha }\right) ^{\ast }=K_{\alpha } \] and \[ \left\Vert e^{\alpha \left\vert \psi \left( x,t\right) \right\vert ^{2}}\left( i\partial _{t}g+\Delta g+Ag\right) \right\Vert _{X}^{2}=\left( S_{\alpha }f-4\alpha K_{\alpha }f,S_{\alpha }f-4\alpha K_{\alpha }f\right) _{X}\geq \] \[ -4\alpha \left( \left( S_{\alpha }K_{\alpha }-K_{\alpha }S_{\alpha }\right) f,f\right) _{X}=-4\alpha \left( \left[ S_{\alpha },K_{\alpha }\right] f,f\right) _{X}. 
\] A calculation shows that \begin{equation} \left[ S_{\alpha },K_{\alpha }\right] =\frac{2}{r^{2}}\Delta -\frac{4\alpha ^{2}}{r^{4}}\left\vert \frac{x}{r}+\varphi e_{1}\right\vert ^{2}-\frac{1}{2}+ \frac{2i\varphi ^{\prime }}{r}\partial _{x_{1}}+\left[ A,K_{\alpha }\right] , \tag{4.23} \end{equation} where \[ \left[ A,K_{\alpha }\right] f=\left[ \left( \frac{x_{1}}{r}+\varphi \right) A \frac{\partial f}{\partial x_{1}}+\frac{1}{r}\dsum\limits_{k=2}^{n}x_{k}A \frac{\partial f}{\partial x_{k}}\right] - \] \[ \left( \frac{x_{1}}{r}+\varphi \right) \left( \frac{\partial A}{\partial x_{1}}f+A\frac{\partial f}{\partial x_{1}}\right) +\frac{1}{r} \dsum\limits_{k=2}^{n}x_{k}\left( \frac{\partial A}{\partial x_{k}}f+A\frac{ \partial f}{\partial x_{k}}\right) . \] Since $A$ is a symmetric operator in $H,$\ from $\left( 4.23\right) $ we have \begin{equation} \left\Vert e^{\alpha \left\vert \psi \left( x,t\right) \right\vert ^{2}}\left( i\partial _{t}g+\Delta g+Ag\right) \right\Vert _{X}^{2}\geq \tag{4.24} \end{equation} \[ \frac{16\alpha ^{3}}{r^{4}}\dint \left\vert \frac{x}{r}+\varphi e_{1}\right\vert ^{2}\left\Vert f\left( x,t\right) \right\Vert ^{2}dxdt+ \frac{8\alpha }{r^{2}}\dint \left\Vert \nabla f\left( x,t\right) \right\Vert ^{2}dxdt+ \] \[ 2\alpha \dint \left[ \left( \frac{x_{1}}{r}+\varphi \right) \varphi ^{\prime \prime }+\left( \varphi ^{\prime }\right) ^{2}\right] \left\Vert f\left( x,t\right) \right\Vert ^{2}dxdt-\frac{8\alpha }{r}\dint \varphi ^{\prime }\left( \partial _{x_{11}}f,\bar{f}\right) dxdt+ \] \[ C\dsum\limits_{k=1}^{n}x_{k}\left[ \left( \frac{\partial A}{\partial x_{k}} f,f\right) +\left( \frac{\partial f}{\partial x_{k}},Af\right) _{X}\right] . \] In view of the hypothesis on $g$ and the Cauchy--Schwarz inequality, the absolute value of the third fourth terms in $(4.23)$ can be bounded by a fraction of the first two terms on the right-hand side of $(4.24)$, when $ \alpha >Cr^{2}$ for some large $C$ depending on $\left\Vert \varphi ^{\prime }\right\Vert _{\infty }+\left\Vert \varphi ^{\prime \prime }\right\Vert _{\infty }.$ Moreover, by using\ the assumption on $A=A\left( x\right) $, we get that the last two terms are nonnegative. This yields the assertion. Now, from $\left( 3.3\right) $ we have \ \ \ \ \ \[ \left\Vert \tilde{V}\left( x,t\right) \right\Vert _{H}\leq \frac{\alpha }{ \beta }\mu _{1}^{-2}\left\Vert V\right\Vert _{B}\leq \mu _{1}^{-2}a_{0}^{ \frac{1}{p}}k^{-\frac{1}{p}}\left\Vert V\right\Vert _{B}. \] Then from $\left( 4.20\right) $ we get \begin{equation} \left\Vert \tilde{V}\right\Vert _{L^{\infty }\left( R^{n}\times \left[ \mu _{1},\mu _{2}\right] ;L\left( H\right) \right) }<R. \tag{4.25} \end{equation} Define \[ \delta ^{2}\left( r\right) = \] \[ \dint\limits_{\mu _{1}}^{\mu _{2}}\dint\limits_{R-1\leq \left\vert x\right\vert \leq r}\left( \left\Vert \tilde{u}\left( x,t\right) \right\Vert _{H}^{2}+\left\Vert \nabla \tilde{u}\left( x,t\right) \right\Vert _{H}^{2}+\left\Vert A\tilde{u}\left( x,t\right) \right\Vert _{H}^{2}\right) dtdx. \] Let $\nu _{1}$, $\nu _{2}\in \left( 0,1\right) $, $\nu _{1}<\nu _{2}$ and $ \nu _{2}<2\nu _{1}$. 
We choose $\varphi \in C^{\infty }\left( \left[ 0,1 \right] \right) $ and $\theta $, $\theta _{r}\in C_{0}^{\infty }\left( R^{n}\right) $ satisfying \[ 0\leq \varphi \left( t\right) \leq 3\text{, }\varphi \left( t\right) =3\text{ for }t\in \left[ \nu _{1},\nu _{2}\right] \text{, }\varphi \left( t\right) =0 \text{ for } \] \[ t\in \left[ 0,\nu _{2}-\nu _{1}\right] \cup \left[ \nu _{2}+\frac{\nu _{2}-\nu _{1}}{2},1\right] , \] \[ \theta _{r}\left( x\right) =1\text{ for }\left\vert x\right\vert <r-1,\text{ }\theta _{R}\left( x\right) =0\text{, for }\left\vert x\right\vert >r\text{,} \] and \[ \theta \left( x\right) =1\text{ for }\left\vert x\right\vert <1,\text{ } \theta \left( x\right) =0\text{, for }\left\vert x\right\vert \geq 2\text{.} \] Let \begin{equation} g\left( x,t\right) =\theta _{r}\left( x\right) \theta \left( \psi \left( x,t\right) \right) \tilde{u}\left( x,t\right) , \tag{4.26} \end{equation} where $\tilde{u}\left( x,t\right) $ is a solution of $\left( 3.2\right) $ when $\tilde{V}=\tilde{F}\equiv 0$. It is clear to see that \[ \left\vert \psi \left( x,t\right) \right\vert \geq \frac{5}{2}\text{ for } \left\vert x\right\vert <\frac{r}{2}\text{ and }t\in \left[ \nu _{1},\nu _{2} \right] . \] Hence, \[ g\left( x,t\right) =\tilde{u}\left( x,t\right) \text{ and }e^{\varkappa \left\vert \psi \left( x,t\right) \right\vert ^{2}}\geq e^{\frac{25}{4} \varkappa }\text{ for }\left\vert x\right\vert <\frac{r}{2}\text{, }t\in \left[ \nu _{1},\nu _{2}\right] . \] Moreover, from $\left( 4.26\right) $ also we get that \[ g\left( x,t\right) =0\text{ for }\left\vert x\right\vert \geq r\text{ or } t\in \left[ 0,\nu _{2}-\nu _{1}\right] \cup \left[ \nu _{2}+\frac{\nu _{2}-\nu _{1}}{2},1\right] , \] so \[ \text{supp }g\subset \left\{ \left\vert x\right\vert \leq R\right\} \times \left[ \nu _{2}-\nu _{1},\nu _{2}+\frac{\nu _{2}-\nu _{1}}{2}\right] \cap \left\{ \left\vert b\left( x,t\right) \right\vert \geq 1\right\} . \] Then, for $\xi =\psi \left( x,t\right) $ we have \[ \left( i\partial _{t}+\Delta +A+\tilde{V}\right) g=\left[ \theta \left( \xi \right) \left( 2\nabla \theta _{r}\left( x\right) .\tilde{u}+\tilde{u}\Delta \theta _{r}\left( x\right) \right) +2\nabla \theta \left( \xi \right) .\nabla \theta _{r}\tilde{u}\right] + \] \[ \theta _{r}\left( x\right) \left[ 2r^{-1}\nabla \theta \left( \xi \right) .\nabla \tilde{u}+r^{-2}\tilde{u}\Delta \theta \left( \xi \right) +i\varphi ^{\prime }\partial x_{1}\theta \left( \xi \right) u\right] =B_{1}+B_{2}\text{ .} \] Note that, \[ \text{supp }B_{1}\subset \left\{ \left( x,t\right) :\text{ }r-1\leq \left\vert x\right\vert \leq r,\text{ }\mu _{1}\leq t\leq \mu _{2}\right\} \] and \[ \text{supp }B_{2}\subset \left\{ \left( x,t\right) \in R^{n}\times \left[ 0,1 \right] \text{, }1\leq \left\vert \psi \left( x,t\right) \right\vert \leq 2\right\} . 
\] Now applying Lemma 4.1 choosing $\varkappa =d_{n}^{2}R^{2},$ $d_{n}^{2}\geq \left\Vert \varphi ^{\prime }\right\Vert _{\infty }+\left\Vert \varphi ^{\prime \prime }\right\Vert _{\infty }$ it follows that \begin{equation} R\left\Vert e^{\varkappa \left\vert \psi \right\vert ^{2}}g\right\Vert _{Y}\leq C\left\Vert e^{\varkappa \left\vert \psi \right\vert ^{2}}i\left( \partial _{t}g+\Delta g+Ag\right) \right\Vert _{Y}\leq \tag{4.27} \end{equation} \[ C\left[ \left\Vert e^{\varkappa \left\vert \psi \right\vert ^{2}}\tilde{V} g\right\Vert _{Y}+\left\Vert e^{\varkappa \left\vert \psi \right\vert ^{2}}B_{1}\right\Vert _{Y}+\left\Vert e^{\varkappa \left\vert \psi \right\vert ^{2}}B_{2}\right\Vert _{Y}\right] = \] \[ D_{1}+D_{2}+D_{3}. \] Since \[ \left\Vert \tilde{V}\right\Vert _{L^{\infty }\left( R^{n}\times \left[ \mu _{1},\mu _{2}\right] ;L\left( H\right) \right) }<r, \] $D_{1}$ can be absorbed in the left hand side of $\left( 4.27\right) .$ Moreover, $\left\vert \psi \left( x,t\right) \right\vert \leq 4$ on the support of $B_{1},$ thus \ \[ D_{2}\leq C\delta \left( R\right) e^{16\varkappa }. \] Let \[ R_{\mu }^{n}=\left\{ \left( x,t\right) :\left\vert x\right\vert \leq r,\text{ }\mu _{1}\leq t\leq \mu _{2}\right\} \] Then $R_{\mu }^{n}\subset $supp $B_{2}$, and $1\leq \left\vert \psi \left( x,t\right) \right\vert \leq 2$, so \[ D_{3}\leq Ce^{4\varkappa }\left\Vert \tilde{u}+\nabla \tilde{u}\right\Vert _{L^{2}\left( R^{n}\times \left[ \mu _{1},\mu _{2}\right] ;H\right) }. \] By using $\left( 4.18\right) $ and $\left( 4.22\right) $ we have \begin{equation} C_{\mu }e^{-M}e^{\frac{25}{4}\varkappa }\left\Vert u\left( .,0\right) \right\Vert _{X}\leq re^{\frac{25}{4}\varkappa }\left[ \dint\limits_{\left \vert x\right\vert <\frac{R}{2}}\dint\limits_{\nu _{1}}^{\nu _{2}}\left\Vert \tilde{u}\left( x,t\right) \right\Vert _{H}dtdx\right] ^{\frac{1}{2}}\leq \tag{4.28} \end{equation} \[ C_{\mu }\delta \left( r\right) e^{\frac{25}{4}\varkappa }+C_{\mu }e^{4\varkappa }\left\Vert \tilde{u}+\nabla \tilde{u}\right\Vert _{L^{2}\left( R_{\mu }^{n};H\right) }\leq C_{\mu }\delta \left( r\right) e^{16\varkappa }+C_{\mu }C_{0}k^{C_{p}}e^{4\varkappa }e^{2a_{1}k^{\left( 2-p\right) ^{-1}}}. \] Puting $\varkappa =d_{n}r^{2}=2a_{1}k^{\frac{1}{2-p}}$ it follows from $ \left( 4.28\right) $ that, if $\left\Vert u\left( .,0\right) \right\Vert _{X}\neq 0$ then \begin{equation} \delta \left( r\right) \geq C_{\mu }\left\Vert u\left( .,0\right) \right\Vert _{X}e^{-\left( M+10\varkappa \right) }=C_{\mu }\left\Vert u\left( .,0\right) \right\Vert _{X}e^{-\left( M+20\right) a_{1}k^{\frac{1}{ 2-p}}} \tag{4.29 } \end{equation} for $k\geq k_{0}(C_{\mu })$ sufficiently large. 
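For the reader's convenience, let us indicate one way of passing from $\left( 4.28\right) $ to $\left( 4.29\right) $; the following lines are only a rough sketch of the absorption argument, and the constants are not traced precisely. With the choice $\varkappa =2a_{1}k^{\left( 2-p\right) ^{-1}}$ the last term in $\left( 4.28\right) $ is
\[
C_{\mu }C_{0}k^{C_{p}}e^{4\varkappa }e^{2a_{1}k^{\left( 2-p\right) ^{-1}}}=C_{\mu }C_{0}k^{C_{p}}e^{5\varkappa },
\]
while the left-hand side of $\left( 4.28\right) $ contains the factor $e^{\frac{25}{4}\varkappa }$. Since $\frac{25}{4}>5$ and $\left\Vert u\left( .,0\right) \right\Vert _{X}\neq 0$, this term can be absorbed into one half of the left-hand side for $k\geq k_{0}(C_{\mu })$ sufficiently large, and after dividing by $e^{16\varkappa }$ we arrive at
\[
\delta \left( r\right) \geq C_{\mu }\left\Vert u\left( .,0\right) \right\Vert _{X}e^{-M}e^{\left( \frac{25}{4}-16\right) \varkappa }\geq C_{\mu }\left\Vert u\left( .,0\right) \right\Vert _{X}e^{-\left( M+10\varkappa \right) },
\]
which is exactly $\left( 4.29\right) $.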
Now, by $\left( 4.22\right) $ we get \[ \delta ^{2}\left( r\right) =\dint\limits_{\mu _{1}}^{\mu _{2}}\dint\limits_{r-1\leq \left\vert x\right\vert \leq r}\left( \left\Vert \tilde{u}\left( x,t\right) \right\Vert _{H}^{2}+\left\Vert \nabla \tilde{u} \left( x,t\right) \right\Vert _{H}^{2}+\left\Vert A\tilde{u}\left( x,t\right) \right\Vert _{H}^{2}\right) dtdx\leq \] \[ \dint\limits_{\mu _{1}}^{\mu _{2}}\dint\limits_{r-1\leq \left\vert x\right\vert \leq r}\left( \left\Vert \tilde{u}\left( x,t\right) \right\Vert _{H}^{2}+\left\Vert A\tilde{u}\left( x,t\right) \right\Vert _{H}^{2}\right) dtdx+ \] \[ C_{\mu }\dint\limits_{\mu _{1}}^{\mu _{2}}\dint\limits_{r-1\leq \left\vert x\right\vert \leq r}t\left( 1-t\right) \left\Vert \nabla \tilde{u}\left( x,t\right) \right\Vert _{H}^{2}dtdx\leq \] \[ C_{\mu }e^{-\gamma \left( r-1\right) ^{p}}\sup\limits_{t\in \left[ 0,1\right] }\left\Vert e^{\gamma \left\vert x\right\vert ^{p/2}}\tilde{u}\left( x,t\right) \right\Vert _{X}^{2}+C_{\mu }\gamma ^{-1}r^{2-p}e^{-\gamma \left( r-1\right) ^{p}}\times \] \begin{equation} \dint\limits_{\mu _{1}}^{\mu _{2}}\dint\limits_{r-1\leq \left\vert x\right\vert \leq r}\left[ \frac{t\left( 1-t\right) }{\left( 1+\left\vert x\right\vert \right) ^{2/p}}\left\Vert \nabla \tilde{u}\left( x,t\right) \right\Vert _{H}^{2}+\left\Vert A\tilde{u}\left( x,t\right) \right\Vert _{H}^{2}\right] dtdx\leq \tag{4.30} \end{equation} \[ C_{\mu }\gamma ^{-1}k^{C_{p}}e^{\eta \left( p\right) },\text{ }\eta \left( p\right) =2a_{1}k^{\left( 2-p\right) ^{-1}}-\gamma \left( r-1\right) ^{p}. \] The estimates $\left( 4.28\right) -\left( 4.30\right) $ imply \begin{equation} C_{\mu }e^{-2M}e^{\frac{25}{4}\varkappa }\left\Vert u\left( .,0\right) \right\Vert _{X}\leq C_{0}k^{C_{p}}e^{\omega \left( p\right) }+O\left( k^{1/2\left( 2-p\right) }\right) \text{,} \tag{4.31} \end{equation} where \[ \omega \left( p\right) =42a_{1}k^{1/\left( 2-p\right) }-a_{0}^{-\frac{1}{2} }\left( \frac{2a_{1}}{d_{n}}\right) ^{\frac{p}{2}}k^{1/\left( 2-p\right) }. \] Hence, if $42a_{1}<\sqrt{a_{1}^{p}a_{0}}\left( \frac{2}{d_{n}}\right) ^{ \frac{p}{2}}$ by letting $k$ tends to infinity it follows from $\left( 4.31\right) $ that $\left\Vert u\left( .,0\right) \right\Vert _{X}=0$, which gives $u\left( x,t\right) \equiv 0$. \textbf{Proof of Corollary 1. }Since \[ \dint\limits_{R^{n}}\left\Vert u\left( x,1\right) \right\Vert _{H}^{2}e^{2b\left\vert x\right\vert ^{q}}dx<\infty \text{ for }b=\frac{ \beta ^{q}}{q} \] one has that \[ \dint\limits_{R^{n}}\left\Vert u\left( x,1\right) \right\Vert _{H}^{2}e^{2k\left\vert x\right\vert ^{q}}dx\leq \left\Vert e^{2k\left\vert x\right\vert ^{q}-2b\left\vert x\right\vert ^{q}}\right\Vert _{\infty }\dint\limits_{R^{n}}\left\Vert u\left( x,1\right) \right\Vert _{H}^{2}e^{2b\left\vert x\right\vert ^{q}}dx. \] Then, by reasoning as in $\left[ \text{7, Corollary 1}\right] $ we obtain the assertion. \textbf{\ Proof of Theorem 2. }Indeed, just applying Corollary 1 with \[ u\left( x,t\right) =u_{1}\left( x,t\right) -u_{2}\left( x,t\right) \] and \[ V\left( x,t\right) =\frac{F\left( u_{1},\bar{u}_{1}\right) -F\left( u_{2}, \bar{u}_{2}\right) }{u_{1}-u_{2}} \] we obtain the assertion of Theorem 2. \begin{center} \textbf{5. Proof of Theorem 3. } \end{center} First, we deduce the corresponding upper bounds. Assume \[ \left\Vert u\left( .,t\right) \right\Vert _{X}=a\neq 0. 
\] Fix $\bar{t}\in \left( 0,1\right) $ near $1,$ and let \[ \upsilon \left( x,t\right) =u\left( x,t-1+\bar{t}\right) \text{, }t\in \left[ 0,1\right] \] which satisfies the equation $\left( 2.10\right) $ with \begin{equation} \left\vert \upsilon \left( x,0\right) \right\vert \leq \frac{b_{1}}{\left( 2- \bar{t}\right) ^{n/2}}e^{-\frac{b_{2}\left\vert x\right\vert ^{p}}{\left( 2- \bar{t}\right) ^{p}}},\text{ }\left\vert \upsilon \left( x,1\right) \right\vert \leq \frac{b_{1}}{\left( 1-\bar{t}\right) ^{n/2}}e^{-\frac{ b_{2}\left\vert x\right\vert ^{p}}{\left( 1-\bar{t}\right) ^{p}}} \tag{5.1} \end{equation} where $A$ is a linear operator$,$ $V\left( x,t\right) $ is a given potential operator function in a Hilbert space $H.$ From $\left( 5.1\right) $ we get \[ \dint\limits_{R^{n}}\left\Vert \upsilon \left( x,0\right) \right\Vert _{H}^{2}e^{A_{0}\left\vert x\right\vert ^{q}}dx=a_{0}^{2},\text{ } \dint\limits_{R^{n}}\left\Vert \upsilon \left( x,1\right) \right\Vert _{H}^{2}e^{A_{1}\left\vert x\right\vert ^{q}}dx=A_{1}^{2}, \] where \begin{equation} A_{0}=\frac{b_{2}}{\left( 2-\bar{t}\right) ^{p}}\text{, }A_{1}=\frac{b_{2}}{ \left( 1-\bar{t}\right) ^{p}}. \tag{5.2} \end{equation} For $V\left( x,t\right) =F\left( u,\bar{u}\right) $, by hypothesis \[ \left\Vert V\left( x,t\right) \right\Vert _{H}\leq C\left\Vert u\left( x,t-1+ \bar{t}\right) \right\Vert _{H}^{\theta }\leq \frac{C}{\left( 2-t-\bar{t} \right) ^{\theta n/2}}e^{\frac{C\left\vert x\right\vert ^{p}}{\left( 2-t- \bar{t}\right) ^{p}}}. \] By using Appell transformation if we suppose that $\upsilon \left( y,s\right) $ is a solution of \[ \partial _{s}\upsilon =i\left( \Delta \upsilon +A\upsilon +V\left( y,s\right) \upsilon \right) \text{, }y\in R^{n},\text{ }t\in \left[ 0,1 \right] , \] $\alpha $ and $\beta $ are positive, then \begin{equation} \tilde{u}\left( x,t\right) =\left( \sqrt{\alpha \beta }\sigma \left( t\right) \right) ^{\frac{n}{2}}u\left( \sqrt{\alpha \beta }x\sigma \left( t\right) ,\beta t\sigma \left( t\right) \right) e^{\eta }. \tag{5.3} \end{equation} \ verifies the equation \begin{equation} \partial _{t}\tilde{u}=i\left[ \Delta \tilde{u}+A\tilde{u}+\tilde{V}\left( x,t\right) \tilde{u}+\tilde{F}\left( x,t\right) \right] ,\text{ }x\in R^{n}, \text{ }t\in \left[ 0,1\right] \tag{5.4} \end{equation} with $\tilde{V}\left( x,t\right) ,$ $\tilde{F}\left( x,t\right) $ defined by $\left( 3.3\right) $, $\left( 3.4\right) $ and \begin{equation} \left\Vert e^{\gamma \left\vert x\right\vert ^{p}}\tilde{u}_{k}\left( x,0\right) \right\Vert _{X}=\left\Vert e^{\gamma \left( \frac{\alpha }{\beta }\right) ^{p/2}\left\vert x\right\vert ^{p}}\upsilon \left( x,0\right) \right\Vert _{X}=a_{0}, \tag{5.5} \end{equation} \[ \left\Vert e^{\gamma \left\vert x\right\vert ^{p}}\tilde{u}_{k}\left( x,1\right) \right\Vert _{X}=\left\Vert e^{\gamma \left( \frac{\beta }{\alpha }\right) ^{p/2}\left\vert x\right\vert ^{p}}\upsilon \left( x,1\right) \right\Vert _{X}=a_{1}. \] \ It follows from expressions $\left( 3.6\right) $ and $\left( 3.8\right) $\ that \begin{equation} \gamma \sim \frac{1}{\left( 1-\bar{t}\right) ^{p/2}}\text{, }\beta \sim \frac{1}{\left( 1-\bar{t}\right) ^{p/2}}\text{, }\alpha \sim 1. \tag{5.6} \end{equation} Next, we shall estimate \[ \left\Vert \tilde{V}\left( x,t\right) \right\Vert _{L_{t}^{1}L_{x}^{\infty }\left( L\left( H\right) ,R\right) },\text{ } \] for a $r>0$, where \[ \text{ }L_{t}^{1}L_{x}^{\infty }\left( L\left( H\right) ,r\right) =L^{1}\left( 0,1;L^{\infty }\left( R^{n}/O_{r}\right) ;L\left( H\right) \right) . 
\] Thus, \[ \left\Vert \tilde{V}\left( x,t\right) \right\Vert _{L\left( H\right) }\leq \frac{\beta }{\alpha }\left\Vert V\left( y,s\right) \right\Vert _{L\left( H\right) }\leq \frac{\beta }{\alpha }\frac{C}{\left( 1-\bar{t}\right) ^{\theta n/2}}e^{C\left\vert y\right\vert ^{p}}, \] with \[ \left\vert y\right\vert =\sqrt{\alpha \beta }\left\vert x\right\vert \sigma \left( t\right) \geq R\sqrt{\frac{\alpha }{\beta }\sim }\frac{R}{\sqrt{\beta }}=Cr\left( 1-\bar{t}\right) ^{1/2}. \] Hence, \[ \left\Vert \tilde{V}\left( .,t\right) \right\Vert _{L^{\infty }\left( R^{n};L\left( H\right) \right) }\leq \frac{\beta }{\alpha }\left\Vert V\left( .,s\right) \right\Vert _{L^{\infty }\left( R^{n};L\left( H\right) \right) }\leq \frac{C}{\left( 1-\bar{t}\right) ^{1+\theta n/2}} \] and \begin{equation} \left\Vert \tilde{V}\left( x,t\right) \right\Vert _{L_{t}^{1}L_{x}^{\infty }\left( L\left( H\right) ,r\right) }\leq \frac{\beta }{\alpha }\left\Vert V\left( y,s\right) \right\Vert _{L_{t}^{1}L_{x}^{\infty }\left( r;\beta \right) }\leq \tag{5.7} \end{equation} \[ \frac{C}{\left( 1-\bar{t}\right) ^{1+\theta n/2}}e^{-Cr^{p}\left( 1-\bar{t} \right) ^{1/2}}, \] where \[ L_{t}^{1}L_{x}^{\infty }\left( r,\beta \right) =L^{1}\left( 0,1;L^{\infty }\left( R^{n}/O_{Cr/\sqrt{\beta }};L\left( H\right) \right) \right) . \] To apply Lemma 3.1 we need \begin{equation} \left\Vert \tilde{V}\left( x,t\right) \right\Vert _{L_{t}^{1}L_{x}^{\infty }\left( L\left( H\right) ,R\right) }\leq \frac{C}{\left( 1-\bar{t}\right) ^{1+\theta n/2}}e^{-Cr^{p}\left( 1-\bar{t}\right) ^{1/2}}\leq \delta _{0}. \tag{5.8} \end{equation} for some $R,$ i.e., \begin{equation} r\sim \frac{C}{\left( 1-\bar{t}\right) ^{p/2}}\delta \left( t\right) \text{, } \tag{5.9} \end{equation} where \[ \delta \left( t\right) =\log ^{\frac{1}{p}}\phi \left( t\right) \text{, } \phi \left( t\right) =\frac{C}{\delta _{0}\left( 1-\bar{t}\right) ^{\theta n/2}}. \] Let \begin{equation} \mathbb{V=}\tilde{V}_{\chi \left( \left\vert x>R\right\vert \right) }\left( x,t\right) ,\text{ }\mathbb{F}=\tilde{V}_{\chi \left( \left\vert x<r\right\vert \right) }\left( x,t\right) \tilde{u}\left( x,t\right) . \tag{5.10} \end{equation} By using $\left( 5.5\right) -\left( 5.10\right) ,$ by virtue of Lemma 3.1 and $\left( 4.7\right) $ we deduced \[ \sup\limits_{t\in \left[ 0,1\right] }\left\Vert e^{\gamma \left\vert x\right\vert ^{p}}\tilde{u}\left( .,t\right) \right\Vert _{X}^{2}\leq C\left( \left\Vert e^{\gamma \left\vert x\right\vert ^{p}}\tilde{u}\left( .,0\right) \right\Vert _{X}^{2}+\left\Vert e^{\gamma \left\vert x\right\vert ^{p}}\tilde{u}\left( .,1\right) \right\Vert _{X}^{2}\right) + \] \[ Ca^{2}e^{C\gamma r^{p}}\left\Vert \tilde{V}\left( x,t\right) \right\Vert \leq \frac{Ca^{2}}{\left( 1-\bar{t}\right) ^{p}}e^{\delta \left( t\right) }, \text{ }a=\left\Vert u_{0}\right\Vert _{X}. \] Next, using the same argument given in section $4$, $(4.10)-(4.22)$, one finds that \[ \gamma \dint\limits_{0}^{1}\dint\limits_{R^{n}}t(1-t)\left( 1+\left\vert x\right\vert \right) ^{p-2}\left\Vert \nabla \tilde{u}\left( x,t\right) \right\Vert _{H}e^{\gamma \left\vert x\right\vert ^{p}}dxdt\leq \frac{Ca^{2} }{\left( 1-\bar{t}\right) ^{p}}e^{\delta \left( t\right) }. \] Now we turn to the lower bounds estimates. Since they are similar to those given in detail in Section 3, we obtain that the estimate $(4.24)$ for potential operator function $\tilde{V}\left( x,t\right) $ when \[ \frac{\theta n}{2}-1<\frac{p}{2\left( 2-p\right) }\text{, i.e., }p>\frac{ 2\left( \theta n-2\right) }{\theta n-1}. 
\] Finally, we get
\[
e^{\frac{C}{\left( 1-\bar{t}\right) ^{p}}\log \phi \left( t\right) }\leq e^{C\gamma r^{p}}\leq e^{\frac{C}{\left( 1-\bar{t}\right) ^{p/2}}\vartheta \left( t\right) },\text{ }\vartheta \left( t\right) =C\left( 1-\bar{t}\right) ^{\frac{-p^{2}}{2\left( 2-p\right) }}
\]
for $p<\frac{p}{2}+\frac{p^{2}}{2\left( 2-p\right) }$, i.e. for $p>1$, which is assumed. This gives the assertion of Theorem 3.

\textbf{Remark 5.1. }Let us consider the case $\theta =4/n$ in Theorem 3, i.e. $p>4/3$. Then from Theorem 3 we obtain

\textbf{Result 5.1}. Assume that the conditions of Theorem 3 are satisfied for $p>4/3$. Then $u\left( x,t\right) \equiv 0.$

\begin{center}
\textbf{6. Unique continuation properties for systems of Schr\"{o}dinger equations }
\end{center}

Consider the Cauchy problem for the finite or infinite system of Schr\"{o}dinger equations
\begin{equation}
\frac{\partial u_{m}}{\partial t}=i\left[ \Delta u_{m}+\sum\limits_{j=1}^{N}a_{mj}\left( x\right) u_{j}+\sum\limits_{j=1}^{N}b_{mj}\left( x,t\right) u_{j}\right] ,\text{ }x\in R^{n},\text{ }t\in \left( 0,T\right) ,  \tag{6.1}
\end{equation}
where $u=\left( u_{1},u_{2},...,u_{N}\right) ,$ $u_{j}=u_{j}\left( x,t\right) ,$ and $a_{mj}=a_{mj}\left( x\right) $, $b_{mj}=b_{mj}\left( x,t\right) $ are complex-valued functions. Let $l_{2}=l_{2}\left( N\right) $ and $l_{2}^{s}=l_{2}^{s}\left( N\right) $ (see $\left[ \text{23, \S\ 1.18}\right] $). Let $A$ be the operator in $l_{2}\left( N\right) $ defined by
\[
D\left( A\right) =\left\{ u=\left\{ u_{j}\right\} \in l_{2},\text{ }\left\Vert u\right\Vert _{l_{2}^{s}\left( N\right) }=\left( \sum\limits_{j=1}^{N}2^{sj}\left\vert u_{j}\right\vert ^{2}\right) ^{\frac{1}{2}}<\infty \right\} ,
\]
\[
A=\left[ a_{mj}\right] \text{, }a_{mj}=G_{m}\left( x\right) 2^{sj},\text{ }s>0,\text{ }m,j=1,2,...,N,\text{ }N\in \mathbb{N}
\]
and
\[
D\left( V\left( x,t\right) \right) =l_{2},
\]
\[
V\left( x,t\right) =\left[ b_{mj}\left( x,t\right) \right] \text{, }b_{mj}\left( x,t\right) =g_{m}\left( x,t\right) 2^{sj},\text{ }m,j=1,2,...,N.
\]
Let
\[
X_{2}=L^{2}\left( R^{n};l_{2}\right) ,\text{ }X_{2}\left( A\right) =L^{2}\left( R^{n};l_{2}^{s}\right) ,\text{ }Y^{k,2}=W^{k,2}\left( R^{n};l_{2}\right) .
\]
From Theorem 1 we obtain the following result.

\textbf{Theorem 6.1. }Suppose $a_{mj}$ are bounded and continuous on $R^{n}$ and $b_{mj}$ are bounded functions on $R^{n}\times \left[ 0,T\right] .$ Assume that there exist constants $a_{0},$ $a_{1},$ $a_{2}>0$ such that for any $k\in \mathbb{Z}^{+}$ a solution $u\in C\left( \left[ 0,1\right] ;X_{2}\left( A\right) \right) $ of $\left( 6.1\right) $ satisfies
\[
\dint\limits_{R^{n}}\left\Vert u\left( x,0\right) \right\Vert _{l_{2}}^{2}e^{2a_{0}\left\vert x\right\vert ^{p}}dx<\infty ,\text{ for }p\in \left( 1,2\right) ,
\]
\[
\dint\limits_{R^{n}}\left\Vert u\left( x,1\right) \right\Vert _{l_{2}}^{2}e^{2k\left\vert x\right\vert ^{p}}dx<a_{2}e^{2a_{1}k^{\frac{q}{q-p}}},\text{ }\frac{1}{p}+\frac{1}{q}=1.
\]
Moreover, assume that there exists $M_{p}>0$ such that
\[
a_{0}a_{1}^{p-2}>M_{p}.
\]
Then $u\left( x,t\right) \equiv 0.$

\textbf{Proof.} It is easy to see that $A$ is a symmetric operator in $l_{2}$ and that the other conditions of Theorem 1 are satisfied. Hence, from Theorem 1 we obtain the conclusion.

\begin{center}
\textbf{7.
Unique continuation properties for nonlinear anisotropic Schr\"{o}dinger equations}
\end{center}

The regularity properties of BVPs for elliptic equations were studied, e.g., in $\left[ \text{1, 2}\right] $. Let $\Omega =R^{n}\times G$, where $G\subset R^{d},$ $d\geq 2,$ is a bounded domain with $\left( d-1\right) $-dimensional boundary $\partial G$. Let us consider the following problem
\begin{equation}
i\partial _{t}u+\Delta _{x}u+\sum\limits_{\left\vert \alpha \right\vert \leq 2m}a_{\alpha }\left( x,y\right) D_{y}^{\alpha }u\left( x,y,t\right) +F\left( u,\bar{u}\right) u=0,\text{ }  \tag{7.1}
\end{equation}
\[
\text{ }x\in R^{n},\text{ }y\in G,\text{ }t\in \left[ 0,1\right] ,\text{ }
\]
\begin{equation}
B_{j}u=\sum\limits_{\left\vert \beta \right\vert \leq m_{j}}\ b_{j\beta }\left( y\right) D_{y}^{\beta }u\left( x,y,t\right) =0\text{, }x\in R^{n},\text{ }y\in \partial G,\text{ }j=1,2,...,m,  \tag{7.2}
\end{equation}
where $a_{\alpha },$ $b_{j\beta }$ are complex-valued functions, $\alpha =\left( \alpha _{1},\alpha _{2},...,\alpha _{d}\right) $, $\beta =\left( \beta _{1},\beta _{2},...,\beta _{d}\right) ,$ $m_{j}<2m,$ and
\[
D_{x}^{k}=\frac{\partial ^{k}}{\partial x^{k}},\text{ }D_{j}=-i\frac{\partial }{\partial y_{j}},\text{ }D_{y}=\left( D_{1},...,D_{d}\right) ,\text{ }y=\left( y_{1},...,y_{d}\right) .
\]
Let
\[
\xi ^{\prime }=\left( \xi _{1},\xi _{2},...,\xi _{d-1}\right) \in R^{d-1},\text{ }\alpha ^{\prime }=\left( \alpha _{1},\alpha _{2},...,\alpha _{d-1}\right) \in Z^{d-1},\text{ }
\]
\[
\text{ }A\left( y_{0},\xi ^{\prime },D_{y}\right) =\sum\limits_{\left\vert \alpha ^{\prime }\right\vert +j\leq 2m}a_{\alpha ^{\prime }}\left( y_{0}\right) \xi _{1}^{\alpha _{1}}\xi _{2}^{\alpha _{2}}...\xi _{d-1}^{\alpha _{d-1}}D_{y}^{j}\text{ for }y_{0}\in \bar{G},
\]
\[
B_{j}\left( y_{0},\xi ^{\prime },D_{y}\right) =\sum\limits_{\left\vert \beta ^{\prime }\right\vert +j\leq m_{j}}b_{j\beta ^{\prime }}\left( y_{0}\right) \xi _{1}^{\beta _{1}}\xi _{2}^{\beta _{2}}...\xi _{d-1}^{\beta _{d-1}}D_{y}^{j}\text{ for }y_{0}\in \partial G.
\]
Let
\[
X_{2}=L^{2}\left( R^{n};L^{2}\left( G\right) \right) =L^{2}\left( \Omega \right) ,\text{ }X_{2}\left( A\right) =L^{2}\left( R^{n};W^{2m,2}\left( G\right) \right) ,\text{ }Y^{k,2}=W^{k,2}\left( R^{n};L^{2}\left( G\right) \right) .
\]
\textbf{Theorem 7.1}.
Let the following conditions be satisfied:

(1) $G\in C^{2}$, $a_{\alpha }\in C\left( \bar{\Omega}\right) $ for each $\left\vert \alpha \right\vert =2m$ and $a_{\alpha }\in L_{\infty }\left( G\right) $ for each $\left\vert \alpha \right\vert <2m$;

(2) $b_{j\beta }\in C^{2m-m_{j}}\left( \partial G\right) $ for each $j$, $\beta $ and $m_{j}<2m$, $\sum\limits_{j=1}^{m}b_{j\beta }\left( y^{\prime }\right) \sigma _{j}\neq 0,$ for $\left\vert \beta \right\vert =m_{j},$ $y^{\prime }\in \partial G,$ where $\sigma =\left( \sigma _{1},\sigma _{2},...,\sigma _{d}\right) \in R^{d}$ is a normal vector to $\partial G$;

(3) for $y\in \bar{G}$, $\xi \in R^{d}$, and $\lambda $ with $\left\vert \arg \lambda \right\vert \leq \varphi $ for $0\leq \varphi <\pi $, $\left\vert \xi \right\vert +\left\vert \lambda \right\vert \neq 0$, one has $\lambda +\sum\limits_{\left\vert \alpha \right\vert =2m}a_{\alpha }\left( y\right) \xi ^{\alpha }\neq 0$;

(4) for each $y_{0}\in \partial G$ the local BVP in local coordinates corresponding to $y_{0}$,
\[
\lambda +A\left( y_{0},\xi ^{\prime },D_{y}\right) \vartheta \left( y\right) =0,
\]
\[
B_{j}\left( y_{0},\xi ^{\prime },D_{y}\right) \vartheta \left( 0\right) =h_{j}\text{, }j=1,2,...,m,
\]
has a unique solution $\vartheta \in C_{0}\left( \mathbb{R}_{+}\right) $ for all $h=\left( h_{1},h_{2},...,h_{m}\right) \in \mathbb{C}^{m}$ and for $\xi ^{\prime }\in R^{d-1};$

(5) there exist positive constants $b_{0}$ and $\theta $ such that a solution $u\in C\left( \left[ -1,1\right] ;X_{2}\left( A\right) \right) $ of $\left( 7.1\right) -\left( 7.2\right) $ satisfies
\[
\left\Vert F\left( u,\bar{u}\right) \right\Vert _{L^{2}\left( G\right) }\leq b_{0}\left\Vert u\right\Vert _{L^{2}\left( G\right) }^{\theta }\text{ for }\left\Vert u\right\Vert _{X_{2}\left( A\right) }>1;
\]

(6) Suppose that
\[
\left\Vert u\left( .,t\right) \right\Vert _{L^{2}\left( \Omega \right) }=\left\Vert u\left( .,0\right) \right\Vert _{L^{2}\left( \Omega \right) }=\left\Vert u_{0}\right\Vert _{L^{2}\left( \Omega \right) }=a
\]
for $t\in \left[ -1,1\right] $ and that $\left( 2.15\right) $ holds with $Q\left( .\right) $ satisfying $\left( 2.16\right) $ for $H=L^{2}\left( G\right) .$

If $p>p\left( \theta \right) =\frac{2\left( \theta n-2\right) }{\left( \theta n-1\right) },$ then $a\equiv 0.$

\textbf{Proof. }Let us consider the operators $A$ and $V\left( x,t\right) $ in $H=L^{2}\left( G\right) $ defined by the equalities
\[
D\left( A\right) =\left\{ u\in W^{2m,2}\left( G\right) \text{, }B_{j}u=0,\text{ }j=1,2,...,m\text{ }\right\} ,\ Au=\sum\limits_{\left\vert \alpha \right\vert \leq 2m}a_{\alpha }\left( y\right) D_{y}^{\alpha }u\left( y\right) .
\]
Then the problem $\left( 7.1\right) -\left( 7.2\right) $ can be rewritten as the problem $\left( 2.12\right) $, where $u\left( x\right) =u\left( x,.\right) ,$ $f\left( x\right) =f\left( x,.\right) $, $x\in R^{n}$, are functions with values in $H=L^{2}\left( G\right) $. By virtue of $\left[ \text{1}\right] $ the operator $A+\mu $ is positive in $L^{2}\left( G\right) $ for sufficiently large $\mu >0$. Moreover, in view of (1)-(6), all conditions of Theorem 3 hold. Then Theorem 3 implies the assertion.

\begin{center}
\textbf{8.} \textbf{The Wentzell-Robin type mixed problem for Schr\"{o}dinger equations}
\end{center}

Consider the problem $\left( 1.5\right) -\left( 1.6\right) $. Let
\[
\sigma =R^{n}\times \left( 0,1\right) \text{, }X_{2}=L^{2}\left( \sigma \right) ,\text{ }Y^{2,k}=W^{2,k}\left( \sigma \right) .
\]
Suppose $\nu =\left( \nu _{1},\nu _{2},...,\nu _{n}\right) $ are nonnegative real numbers. In this section, from Theorem 2 we obtain the following result.

\textbf{Theorem 8.1. }Suppose the following conditions are satisfied:

(1) $a$ and $b$ are real-valued functions on $\left( 0,1\right) $ and $a\left( t\right) >0$. Moreover, $a\left( .\right) $ is a bounded continuous function on $R^{n}\times \left[ 0,1\right] $ and
\[
\exp \left( -\dint\limits_{\frac{1}{2}}^{x}b\left( t\right) a^{-1}\left( t\right) dt\right) \in L_{1}\left( 0,1\right) ;
\]

(2) $u_{1},$ $u_{2}\in C\left( \left[ 0,1\right] ;Y^{2,k}\right) $ are strong solutions of $(1.2)$ with $k>\frac{n}{2};$

(3) $F:\mathbb{C}\times \mathbb{C}\rightarrow \mathbb{C},$ $F\in C^{k}$, $F\left( 0\right) =\partial _{u}F\left( 0\right) =\partial _{\bar{u}}F\left( 0\right) =0;$

(4) there exist positive constants $\alpha $ and $\beta $ such that
\[
e^{\frac{\left\vert \alpha x\right\vert ^{p}}{p}}\left( u_{1}\left( .,0\right) -u_{2}\left( .,0\right) \right) \in X_{2},\text{ }e^{\frac{\left\vert \beta x\right\vert ^{q}}{q}}\left( u_{1}\left( .,1\right) -u_{2}\left( .,1\right) \right) \in X_{2},
\]
with
\[
p\in \left( 1,2\right) \text{, }\frac{1}{p}+\frac{1}{q}=1;
\]

(5) there exists $N_{p}>0$ such that
\[
\alpha \beta >N_{p}.
\]
Then $u_{1}\left( x,t\right) \equiv u_{2}\left( x,t\right) .$

\textbf{Proof.} Let $H=L^{2}\left( 0,1\right) $ and let $A$ be the operator defined by $\left( 1.4\right) .$ Then the problem $\left( 1.5\right) -\left( 1.6\right) $ can be rewritten as the problem $\left( 1.2\right) $. By virtue of $\left[ \text{10, 11}\right] $ the operator $A$ generates an analytic semigroup in $L^{2}\left( 0,1\right) $. Hence, by virtue of (1)-(5), all conditions of Theorem 2 are satisfied. Then Theorem 2 implies the assertion.

\begin{center}
\textbf{Acknowledgements}
\end{center}

The author would like to express his gratitude to Dr. Neil Course for his useful advice on the English in the preparation of this paper.

\textbf{References}

\begin{enumerate}
\item H. Amann, Linear and quasi-linear equations, 1, Birkh\"{a}user, Basel (1995).

\item A. Benedek, A. Calder\'{o}n, R. Panzone, Convolution operators on Banach space valued functions, Proc. Nat. Acad. Sci. USA, 48 (1962), 356--365.

\item A. Bonami, B. Demange, A survey on uncertainty principles related to quadratic forms, Collect. Math., V. Extra, (2006), 1--36.

\item A.-P. Calder\'{o}n, Commutators of singular integral operators, Proc. Nat. Acad. Sci. U.S.A. 53 (1965), 1092--1099.

\item C. E. Kenig, G. Ponce, L. Vega, On unique continuation for nonlinear Schr\"{o}dinger equations, Comm. Pure Appl. Math. 60 (2002), 1247--1262.

\item L. Escauriaza, C. E. Kenig, G. Ponce, L. Vega, On uniqueness properties of solutions of Schr\"{o}dinger equations, Comm. PDE. 31, 12 (2006), 1811--1823.

\item L. Escauriaza, C. E. Kenig, G. Ponce, and L. Vega, Uncertainty principle of Morgan type and Schr\"{o}dinger evolution, J. London Math. Soc. 81 (2011), 187--207.

\item L. Escauriaza, C. E. Kenig, G. Ponce, and L. Vega, Hardy's uncertainty principle, convexity and Schr\"{o}dinger evolutions, J. European Math. Soc. 10, 4 (2008), 883--907.

\item L. H\"{o}rmander, A uniqueness theorem of Beurling for Fourier transform pairs, Ark. Mat. 29, 2 (1991), 237--240.

\item J. A. Goldstein, Semigroups of Linear Operators and Applications, Oxford University Press, Oxford (1985).

\item A. Favini, G. R. Goldstein, J. A. Goldstein and S.
Romanelli, Degenerate second order differential operators generating analytic semigroups in $L_{p}$ and $W^{1,p}$, Math. Nachr. 238 (2002), 78--102.

\item V. Keyantuo, M. Warma, The wave equation with Wentzell--Robin boundary conditions on $L_{p}$-spaces, J. Differential Equations 229 (2006), 680--697.

\item S. G. Krein, Linear Differential Equations in Banach Space, American Mathematical Society, Providence (1971).

\item A. Lunardi, Analytic Semigroups and Optimal Regularity in Parabolic Problems, Birkh\"{a}user (2003).

\item J.-L. Lions, E. Magenes, Non-homogeneous Boundary Value Problems, Mir, Moscow (1971).

\item V. B. Shakhmurov, Imbedding theorems and their applications to degenerate equations, Differential Equations, 24 (4) (1988), 475--482.

\item V. B. Shakhmurov, Linear and nonlinear abstract equations with parameters, Nonlinear Anal., Theory Methods Appl., 73 (2010), 2383--2397.

\item R. Shahmurov, On strong solutions of a Robin problem modeling heat conduction in materials with corroded boundary, Nonlinear Anal., Real World Appl., 13 (1) (2011), 441--451.

\item R. Shahmurov, Solution of the Dirichlet and Neumann problems for a modified Helmholtz equation in Besov spaces on an annulus, J. Differential Equations, 249 (3) (2010), 526--550.

\item C. Segovia, J. L. Torrea, Vector-valued commutators and applications, Indiana Univ. Math. J. 38 (4) (1989), 959--971.

\item E. M. Stein, R. Shakarchi, Princeton Lectures in Analysis II: Complex Analysis, Princeton University Press (2003).

\item A. Stefanov, Strichartz estimates for the Schr\"{o}dinger equation with radial data, Proc. Amer. Math. Soc. 129 (2001), 1395--1401.

\item B. Simon, M. Schechter, Unique continuation for Schr\"{o}dinger operators with unbounded potentials, J. Math. Anal. Appl., 77 (1980), 482--492.

\item H. Triebel, Interpolation Theory, Function Spaces, Differential Operators, North-Holland, Amsterdam (1978).

\item S. Yakubov and Ya. Yakubov, Differential-Operator Equations. Ordinary and Partial Differential Equations, Chapman and Hall/CRC, Boca Raton (2000).

\item Y. Xiao and Z. Xin, On the vanishing viscosity limit for the 3D Navier-Stokes equations with a slip boundary condition, Comm. Pure Appl. Math. 60 (7) (2007), 1027--1055.
\end{enumerate}

\end{document}
\begin{document} \newtheorem*{lem}{Lemma} \newtheorem{teo}{Theorem} \pagestyle{plain} \title{Differential Equations for $\mathbf F_q$-Linear Functions} \author{Anatoly N. Kochubei\\ \footnotesize Institute of Mathematics,\\ \footnotesize Ukrainian National Academy of Sciences,\\ \footnotesize Tereshchenkivska 3, Kiev, 01601 Ukraine \\ \footnotesize E-mail: \ [email protected]} \date{} \maketitle \vspace*{2cm} Copyright Notice: This work has been submitted to Academic Press for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible. \vspace*{8cm} \begin{abstract} We study certain classes of equations for $\mathbf F_q$-linear functions, which are the natural function field counterparts of linear ordinary differential equations. It is shown that, in contrast to both classical and $p$-adic cases, formal power series solutions have positive radii of convergence near a singular point of an equation. Algebraic properties of the ring of $\mathbf F_q$-linear differential operators are also studied. \end{abstract} {\bf Key words: }\ $\mathbf F_q$-linear function; differential equation; radius of convergence; Noetherian ring; Ore domain \section{INTRODUCTION} A standard model of a non-discrete, locally compact field of a positive characteristic $p$ is the field $K$ of formal Laurent series with coefficients from the Galois field $\mathbf F_q$, $q=p^\upsilon $, $\upsilon \in \mathbf Z_+$. Let $\overline{K}_c$ be a completion of an algebraic closure of $K$. A function defined on a $\mathbf F_q$-subspace $K_0$ of $K$, with values in $\overline{K}_c$, is called $\mathbf F_q$-linear if $f(t_1+t_2)=f(t_1)+f(t_2)$ and $f(\alpha t)=\alpha f(t)$ for any $t,t_1,t_2\in K$, $\alpha \in \mathbf F_q$. $\mathbf F_q$-linear functions often appear in analysis over $\overline{K}_c$; see, in particular, the works by Carlitz \cite{C1,C2}, Wagner \cite{W}, Goss \cite{G1,G2}, Thakur \cite{T1,T2}, and the author \cite{K1,K2}. This class of functions includes, in particular, analogues of the exponential, logarithm, Bessel, and hypergeometric functions. It has been noticed (see e.g. \cite{T2,K1}) that the above functions satisfy some equations, which can be seen as function field analogues of first and second order linear differential equations with polynomial coefficients. The role of a derivative is played by the operator $$ d=\sqrt[q]{\ }\circ \Delta $$ where $(\Delta u)(t)=u(xt)-xu(t)$, $x$ is a prime element in $K$. The operator $d$ is also basic in the calculus of $\mathbf F_q$-linear functions developed in \cite{K2}. The meaning of a polynomial coefficient in the function field case is not a usual multiplication by a polynomial, but the action of a polynomial in the operator $\tau$, $\tau u=u^q$. This was most clearly demonstrated in \cite{T2}, where an equation for the hypergeometric functions was derived. The aim of this paper is to start a general theory of differential equations of the above kind. In fact we consider equations (or systems) with holomorphic coefficients. As in the classical theory, we have to make a distinction between the regular and singular cases. In the analytic theory of linear differential equations over $\mathbb C$ a regular equation has a constant leading coefficient (which can be assumed equal to 1). A leading coefficient of a singular equation is a holomorphic function having zeroes at some points. 
One can divide the equation by its leading coefficient, but then poles would appear at other coefficients, and the solution can have singularities (not only poles but in general also essential singularities) at those points. Similarly, in our case we understand a regular equation as the one with the coefficient 1 at the highest order derivative. As usual, a regular higher-order equation can be transformed into a regular first-order system. For the regular case we obtain a local existence and uniqueness theorem, which is similar to analogous results for equations over $\mathbb C$ or $Q_p$ (for the latter see \cite{Lu}). The only difference is a formulation of the initial condition, which is specific for the function field case. The leading coefficient $A_m(\tau )$ of a singular $\mathbf F_q$-linear equation of an order $m$ is a non-constant holomorhic function of the operator $\tau$. Now one cannot divide the equation $$ A_m(\tau )d^mu(t)+A_{m-1}(\tau )d^{m-1}u(t)+\ldots +A_0(\tau )u(t)=f(t) $$ for an $\mathbf F_q$-linear function $u(t)$ (note that automatically $u(0)=0$) by $A_m(\tau )$. If $A_m(\tau )=\sum\limits_{i=0}^\infty a_{mi}\tau ^i$, $a_{mi}\in \overline{K}_c$, then $$ A_m(\tau )d^mu= \sum\limits_{i=0}^\infty a_{mi}\left( d^mu\right) ^{q^i}, $$ and even when $A_m$ is a polynomial, in order to resolve our equation with respect to $d^mu$ one has to solve an algebraic equation. Thus for the singular case the situation looks even more complicated than in the classical theory. However we show that the behavior of the solutions cannot be too intricate. Namely, in a striking contrast to the classical theory, any formal series solution converges in some (sufficiently small) neighbourhood of the singular point $t=0$. Note that in the $p$-adic case a similar phenomenon takes place for equations satisfying certain strong conditions upon zeros of indicial polynomials \cite{Cla,Set,Bald,Put1}. In our case such a behavior is proved for any equation, which resembles the (much simpler) case \cite{Put1} of differential equations over a field of characteristics zero, whose residue field also has characteristic zero. We also study some algebraic properties of the ring of all polynomial differential operators, that is the ring generated by $\overline{K}_c$, $\tau$, and $d$. It is interesting to compare our results with the ones for the case of characteristic zero \cite{Bj,D}, and the ones for usual differential operators over a field of positive characteristic \cite{Put2}. It appears that the main difference from the former case is caused by nonlinearity of $\tau$ and $d$, while the latter case is totally different. For example, the centre of our ring equals $\mathbf F_q$, the centre of the ring of polynomial differential operators over a field $k$ with $\mbox{char }k=0$ equals $k$. Meanwhile for the situation studied in \cite{Put2} the centre is a ``big'' polynomial ring. The author is grateful to David Goss for his constructive criticism, which helped much to improve the exposition. \section{ANALYTIC PROPERTIES} Let us introduce some notation. If $t\in K$, $$ t=\sum _{i=n}^\infty \theta_ix^i,\quad n\in \mathbf Z,\ \theta_i\in \mathbf F_q \ , \ \theta_n\ne 0, $$ the absolute value $|t|$ is defined as $$ |t|=q^{-n}. $$ We preserve the notation $|\cdot |$ for the extension of the absolute value onto $\overline{K}_c$. The norm $|P|$ of a matrix $P$ with elements from $\overline{K}_c$ is defined as the maximum of absolute values of the elements. 
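To fix ideas, we record a simple illustration of this absolute value (added here only for the reader's orientation; everything follows at once from the definitions). One has
$$
|x|=q^{-1},\qquad |x^{-3}+x^{2}|=q^{3},\qquad |t_1+t_2|\le \max (|t_1|,|t_2|),
$$
with equality in the last inequality whenever $|t_1|\ne |t_2|$, since the absolute value is non-Archimedean. In particular, $|x^{q^i}|=q^{-q^i}<q^{-1}$ for every $i\ge 1$, so that $|x^{q^i}-x|=q^{-1}$; this elementary observation will be used repeatedly below.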
We will use systematically the Carlitz factorial $D_i$, $i\ge 0$, defined as $$ D_i=[i][i-1]^q\ldots [1]^{q^{i-1}},\ i\ge 1;\ D_0=1, $$ where $[i]=x^{q^i}-x$ (this notation should not be confused with the one for the commutator $[\cdot ,\cdot ]$). It is easy to see \cite{K2} that \begin{equation} d\left( \frac{t^{q^i}}{D_i}\right) = \frac{t^{q^{i-1}}}{D_{i-1}},\ i\ge 1;\quad d(\mbox{const})=0; \end{equation} \begin{equation} \tau \left( \frac{t^{q^{i-1}}}{D_{i-1}}\right) =[i]\frac{t^{q^i}}{D_i},\ i\ge 1. \end{equation} It is known \cite{K2,T2} that $[d,\tau ]=[1]^{1/q}$. \subsection{{\it Equations without Singularities}} Let us consider an equation \begin{equation} dy(t)=P(\tau )y(t)+f(t) \end{equation} where for each $z\in \left(\overline{K}_c\right) ^m$, $t\in K$, \begin{equation} P(\tau )z=\sum \limits _{k=0}^\infty \pi _kz^{q^k},\quad f(t)=\sum \limits _{j=0}^\infty \varphi _j\frac{t^{q^j}}{D_j}, \end{equation} $\pi _k$ are $m\times m$ matrices with elements from $\overline{K}_c$, $\varphi _j\in \left(\overline{K}_c\right) ^m$, and it is assumed that the series (4) have positive radii of convergence. The action of the operator $\tau $ upon a vector or a matrix is defined component-wise, so that $z^{q^k}=\left( z_1^{q^k},\ldots ,z_m^{q^k}\right)$ for $z=(z_1,\ldots ,z_m)$. Similarly, if $\pi =(\pi _{ij})$ is a matrix, we write $\pi ^{q^k}=\left( \pi_{ij}^{q^k}\right)$. We will seek a $\mathbf F_q$-linear solution of (3) in some neighbourhood of the origin, of the form \begin{equation} y(t)=\sum \limits _{i=0}^\infty y_i\frac{t^{q^i}}{D_i},\quad y_i\in \left(\overline{K}_c\right) ^m, \end{equation} where $y_0$ is a given element, so that the ``initial'' condition for our situation is \begin{equation} \lim \limits _{t\to 0}t^{-1}y(t)=y_0. \end{equation} Note that a function (5), provided the series has a positive radius of convergence, tends to zero for $t\to 0$, so that the right-hand side of (3) makes sense for small $|t|$. \begin{teo} For any $y_0\in \left(\overline{K}_c\right) ^m$ the equation (3) has a unique local solution of the form (5), which satisfies (6), with the series having a positive radius of convergence. \end{teo} {\it Proof}. Making (if necessary) the substitutions $t=c_1t'$, $y=c_2y'$, with sufficiently small $|c_1|$, $|c_2|$, we may assume that the coefficients in (4) are such that $\varphi _j\to 0$ for $j\to \infty$, \begin{equation} |\pi _k^q|\cdot q^{-\frac{q^{k+1}-q}{q-1}}\le 1,\quad k=0,1,\ldots . \end{equation} Using (1) and (2) we substitute (5) into (3), which results in the recurrent formula for the coefficients $y_i$: \begin{equation} y_{l+1}=\sum \limits _{n+k=l}\pi _k^qy_n^{q^{k+1}}[n+1]^{q^k}\ldots [n+k]^q+\varphi _l^q, \quad l=0,1,2,\ldots , \end{equation} where the expressions in square brackets are omitted if $k=0$. It is seen from (8) that a solution of (3), (6) (if it exists) is unique. Since $|[n]|=q^{-1}$ for all $n>0$, we find that $$ \left| [n+1]^{q^k}\ldots [n+k]^q\right| =q^{-(q^k+\cdots +q)}= q^{-\frac{q^{k+1}-q}{q-1}}, $$ and it follows from (7),(8) that $$ |y_{l+1}|\le \max \left\{ |\varphi _l|^q,|y_0|^{q^{l+1}},|y_1|^{q^l},\ldots ,|y_l|^q\right\} . $$ Since $\varphi _n\to 0$, there exists such a number $l_0$ that $|\varphi _l|\le 1$ for $l\ge l_0$. Now either $|y_l|\le 1$ for all $l\ge l_0$ (and then the series (5) is convergent in a neighbourhood of the origin), or $|y_{l_1}|>1$ for some $l_1\ge l_0$. In the latter case $$ |y_{l+1}|\le \max \left\{ |y_0|^{q^{l+1}},|y_1|^{q^l},\ldots , |y_l|^q\right\} ,\quad l\ge l_1. 
$$ Let us choose $A>0$ in such a way that $$ |y_l|\le A^{q^l},\quad l=1,2,\ldots ,l_1. $$ Then it follows easily by induction that $|y_l|\le A^{q^l}$ for all $l$, which implies the convergence of (5) near the origin.$\quad \blacksquare $ \subsection{{\it Singular Equations}} We will consider scalar equations of arbitrary order \begin{equation} \sum \limits _{j=0}^mA_j(\tau )d^ju=f \end{equation} where $$ f(t)=\sum \limits _{n=0}^\infty \varphi _n\frac{t^{q^n}}{D_n}, $$ $A_j(\tau )$ are power series having (as well as the one for $f$) positive radii of convergence. It will be convenient to start from the model equation \begin{equation} \sum \limits _{j=0}^ma_j\tau ^jd^ju=f,\quad a_j\in \overline{K}_c,\ a_m\ne 0. \end{equation} Suppose that $u(t)$ is a formal solution of (10), of the form \begin{equation} u(t)=\sum \limits _{n=0}^\infty u_n\frac{t^{q^n}}{D_n}. \end{equation} Then $$ a_0\sum \limits _{n=0}^\infty u_n\frac{t^{q^n}}{D_n}+\sum \limits _{j=1}^ma_j\sum \limits _{n=j}^\infty u_n[n-j+1]\ldots [n] \frac{t^{q^n}}{D_n}=\sum \limits _{n=0}^\infty \varphi _n\frac{t^{q^n}}{D_n}. $$ Changing the order of summation we find that for $n\ge m$ \begin{equation} u_n\left( a_0+\sum \limits _{j=1}^ma_j[n-j+1]\ldots [n]\right) = \varphi _n. \end{equation} Let us consider the expression $$ \Phi _n=a_0+\sum \limits _{j=1}^ma_j[n-j+1]\ldots [n], \quad n\ge m. $$ Using repeatedly the identity $[i]^q+[1]=[i+1]$ we find that $$ \Phi _n^{q^m}=a_o^{q^m}+\sum \limits _{j=1}^ma_j^{q^m}\prod \limits _{k=0}^{j-1}[n-k]^{q^m}= a_o^{q^m}+\sum \limits _{j=1}^ma_j^{q^m}\prod \limits _{k=0}^{j-1}\left( [n]^{q^{m-k}}-\sum \limits _{l=1}^k[1]^{q^{m-l}}\right), $$ that is $\Phi _n^{q^m}=\Phi ^{(m)}([n])$ where $$ \Phi ^{(m)}(t)=a_o^{q^m}+\sum \limits _{j=1}^ma_j^{q^m}\prod \limits _{k=0}^{j-1}\left( t^{q^{m-k}}-\sum \limits _{l=1}^k[1]^{q^{m-l}}\right) $$ is a polynomial on $\overline{K}_c$ of a certain degree $N$ not depending on $n$. Let $\theta _1,\ldots ,\theta _N$ be its roots. Then $$ \Phi ^{(m)}([n])=a_m^{q^m}\prod \limits _{\nu =1}^N([n]-\theta _\nu ). $$ As $n\to \infty $, $[n]\to -x$ in $\overline{K}_c$. We may assume that $\theta _\nu \ne [n]$ for all $\nu$, if $n$ is large enough. If $\theta _\nu \ne -x$ for all $\nu$, then for large $n$, say $n\ge n_0\ge m$, $$ \left| \Phi ^{(m)}([n])\right| \ge \mu >0. $$ If $k\le N$ roots $\theta _\nu$ coincide with $-x$, then $$ \left| \Phi ^{(m)}([n])\right| \ge \mu q^{-kq^n},\quad n\ge n_0. $$ Combining the inequalities and taking the root we get \begin{equation} \left| \Phi _n\right| \ge \mu _1 q^{-\mu _2q^n},\quad n\ge n_0. \end{equation} where $\mu _1,\mu _2>0$. Now it follows from (12) and (13) that the series (11) has (together with the series for $f$) a positive radius of convergence. Turning to the general equation (9) we note first of all that one can apply an operator series $A(\tau )=\sum \limits _{k=0}^\infty \alpha _k\tau ^k$ (even without assuming its convergence) to a formal series (11), setting $$ \tau ^ku(t)=\sum \limits _{n=0}^\infty u_n^{q^k}[n+1]^{q^{k- 1}}\ldots [n+k]\frac{t^{q^{n+k}}}{D_{n+k}},\quad k\ge 1, $$ and $$ A(\tau )u(t)=\sum \limits _{l=0}^\infty \frac{t^{q^l}}{D_l}\sum \limits _{n+k=l}\alpha _ku_n^{q^k}[n+1]^{q^{k-1}}\ldots [n+k] $$ where the factor $[n+1]^{q^{k-1}}\ldots [n+k]$ is omitted for $k=0$. Therefore the notion of a formal solution (11) makes sense for the equation (9). We will need the following elementary estimate. 
\begin{lem} Let $k\ge 2$ be a natural number, with a given partition $k=i_1+\cdots +i_r$, where $i_1,\ldots ,i_r$ are positive integers, $r\ge 1$. Then $$ q^{i_1+\cdots +i_r}+q^{i_2+\cdots +i_r}+\cdots +q^{i_r}\le q^{k+1}. $$ \end{lem} {\it Proof}. The assertion is obvious for $k=2$. Suppose it has been proved for some $k$ and consider a partition $$ k+1=i_1+\cdots +i_r. $$ If $i_1>1$ then $k=(i_1-1)+i_2+\cdots +i_r$, so that $$ q^{(i_1-1)+i_2+\cdots +i_r}+q^{i_2+\cdots +i_r}+\cdots +q^{i_r}\le q^{k+1} $$ whence $$ q^{i_1+i_2+\cdots +i_r}+q^{i_2+\cdots +i_r}+\cdots +q^{i_r}\le q^{k+2}. $$ If $i_1=1$ then $k=i_2+\cdots +i_r$, $$ q^{i_2+\cdots +i_r}+q^{i_3+\cdots +i_r}+\cdots +q^{i_r}\le q^{k+1} $$ and $$ q^{i_1+\cdots +i_r}+q^{i_2+\cdots +i_r}+\cdots +q^{i_r}\le 2q^{k+1}\le q^{k+2}.\quad \blacksquare $$ Now we are ready to formulate our main result. \begin{teo} Let $u(t)$ be a formal solution (11) of the equation (9), where the series for $A_j(\tau )z$, $z\in \overline{K}_c$, and $f(t)$, have positive radii of convergence. Then the series (11) has a positive radius of convergence. \end{teo} {\it Proof}. Applying (if necessary) the operator $\tau $ a sufficient number of times to both sides of (9) we may assume that $$ A_j(\tau )=\sum \limits _{i=0}^\infty a_{ji}\tau ^{i+j},\quad a_{ji}\in \overline{K}_c ,\ j=0,1,\ldots ,m, $$ where $a_{j0}\ne 0$ at least for one value of $j$. Let us assume, for example, that $a_{m0}\ne 0$ (otherwise the reasoning below would need an obvious adjustment). Denote by $L$ the operator at the left-hand side of (9), and by $L_0$ its ``principal part'', $$ L_0u=\sum \limits _{j=0}^ma_{j0}\tau ^jd^ju $$ (the model operator considered above; we will maintain the notations introduced there). Note that $L_0$ is a linear operator. As we have seen, $$ L_0\left( \frac{t^{q^n}}{D_n}\right) =\Phi _n\frac{t^{q^n}}{D_n}, \quad n\ge n_0, $$ where $\Phi _n$ satisfies the inequality (13). This means that $L_0$ is an automorphism of the vector space $X$ of formal series $$ u=\sum \limits _{n=n_0}^\infty u_n\frac{t^{q^n}}{D_n},\quad u_n\in \overline{K}_c, $$ as well of its subspace $Y$ consisting of series with positive radii of convergence. Let us write the formal solution $u$ of the equation (9) as $u=v+w$, where $$ v=\sum \limits _{n=0}^{n_0-1} u_n\frac{t^{q^n}}{D_n}, \quad w=\sum \limits _{n=n_0}^\infty u_n\frac{t^{q^n}}{D_n}. $$ Then (9) takes the form \begin{equation} Lw=g, \end{equation} with $g=\sum g_n\frac{t^{q^n}}{D_n}\in Y$. In order to prove our theorem, it is sufficient to verify that $w\in Y$. For any $y\in X$ we can write $$ Ly=(L_0-L_1)y=L_0(I-L_0^{-1}L_1)y $$ where \begin{equation} L_1y=-\sum \limits _{j=0}^m\sum \limits _{i=1}^\infty a_{ji}\tau ^{i+j}d^jy. \end{equation} In particular, it is seen from (14) that $$ (I-L_0^{-1}L_1)w=L_0^{-1}g,\quad L_0^{-1}g\in Y. $$ Writing formally $$ (I-L_0^{-1}L_1)^{-1}=\sum \limits _{k=0}^\infty \left( L_0^{- 1}L_1\right) ^k $$ and noticing that $L_0^{-1}L_1:\ X\to \tau X$, we find that \begin{equation} w=\sum \limits _{k=0}^\infty \left( L_0^{-1}L_1\right) ^kh, \end{equation} where $h=L_0^{-1}g=\sum \limits _{n=n_0}^\infty h_n\frac{t^{q^n}}{D_n}$, $h_n=\Phi _n^{-1}g_n$, and the series in (16) converges in the natural non-Archimedean topology of the space $X$. 
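Let us add a word of explanation about this convergence (an elementary remark inserted here for completeness; it is implicit in the argument). By (15), every term of $L_1$ contains $\tau ^{i+j}d^j$ with $i\ge 1$; applying such a term to $t^{q^n}/D_n$ produces a multiple of $t^{q^{n+i}}/D_{n+i}$, while $L_0^{-1}$ acts diagonally on the basis $\{ t^{q^n}/D_n\} _{n\ge n_0}$. Hence
$$
\left( L_0^{-1}L_1\right) ^kh=\sum \limits _{n\ge n_0+k}c_n^{(k)}\frac{t^{q^n}}{D_n}
$$
with some coefficients $c_n^{(k)}\in \overline{K}_c$, so that for each fixed $l$ only the terms with $k\le l-n_0$ contribute to the coefficient of $t^{q^l}/D_l$ in (16); the series (16) is therefore coefficient-wise finite.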
A direct calculation shows that for any $\lambda \in \overline{K}_c$ $$ \left( L_0^{-1}L_1\right) \left( \lambda \frac{t^{q^n}}{D_n}\right) =- \sum \limits _{i=1}^\infty \lambda ^{q^i}\Phi ^{-1}_{n+i}\Psi _i^{(n)}\frac{t^{q^{n+i}}}{D_{n+i}} $$ where $$ \Psi _i^{(n)}=[n+1]^{q^{i-1}}[n+2]^{q^{i-2}}\ldots [n+i]\sum _{j=0}^m[n-j+1]^{q^i}\ldots [n]^{q^i}a_{ji}, $$ and the coefficient at $a_{j0}$ in the last sum is assumed to equal 1. Proceeding by induction we get \begin{multline*} \left( L_0^{-1}L_1\right) ^r \left( \lambda \frac{t^{q^n}}{D_n}\right) = (-1)^r\sum \limits _{i_1,\ldots ,i_r=1}^\infty \left( \Psi _{i_1}^{(n)}\right) ^{q^{i_2+\cdots +i_r}}\left( \Psi _{i_2}^{(n+i_1)}\right) ^{q^{i_3+\cdots +i_r}}\ldots \\ \times \left( \Psi _{i_r}^{(n+i_1+\cdots +i_{r-1})}\right) \lambda ^{q^{i_1+\cdots +i_r}}\Phi _{n+i_1}^{-q^{i_2+\cdots +i_r}} \Phi _{n+i_1+i_2}^{-q^{i_3+\cdots +i_r}}\ldots \Phi ^{-1}_{n+i_1+\cdots +i_r}\frac{t^{q^{n+i_1+\cdots +i_r}}}{D_{n+i_1+\cdots +i_r}},\quad r=1,2,\ldots . \end{multline*} Substituting this into (16) and changing the order of summation we find an explicit formula for $w(t)$: \begin{multline} w(t)=\sum \limits _{l=n_0}^\infty \frac{t^{q^l}}{D_l}\sum \limits _{\genfrac{}{}{0pt}{}{n+i_1+\cdots +i_r=l}{n\ge n_0,\, i_i,\ldots ,i_r\ge 1}}(-1)^rh_n^{q^{l-n}}\left( \Psi _{i_1}^{(n)}\right) ^{q^{i_2+\cdots +i_r}}\left( \Psi _{i_2}^{(n+i_1)}\right) ^{q^{i_3+\cdots +i_r}}\ldots \\ \times \left( \Psi _{i_r}^{(n+i_1+\cdots +i_{r-1})}\right) \Phi _{n+i_1}^{-q^{i_2+\cdots +i_r}} \Phi _{n+i_1+i_2}^{-q^{i_3+\cdots +i_r}}\ldots \Phi ^{-1}_{n+i_1+\cdots +i_r} \end{multline} Observe that $$ \left| \Psi _i^{(n)}\right| \le (q^{-1})^{q^{i-1}+q^{i-2}+\cdots +1}\sup \limits _j|a_{ji}|,\quad |g_n|\le M_1^{q^n},\quad |a_{ji}|\le M_2^{q^i}, $$ $M_1,M_2\ge 1$ (due to positivity of the corresponding radii of convergence). We have $$ \left| h_n^{q^{l-n}}\right| \le |\Phi _n|^{-q^{i_1+\cdots +i_r}}M_1^{q^l}, $$ and by the above Lemma \begin{multline*} |\Phi _n|^{-q^{i_1+\cdots +i_r}}|\Phi _{n+i_1}|^{-q^{i_2+\cdots +i_r}}\cdots |\Phi _{n+i_1+\cdots +i_r}|^{-1}\\ \le \mu _1^{q^{i_1+\cdots +i_r}+q^{i_2+\cdots +i_r}+\cdots +q^{i_r}+1}q^{\mu _2\left( q^{n+i_1+\cdots +i_r}+q^{n+i_2+\cdots +i_r}+ \cdots +q^{n+i_r}+q^n\right) }\le \mu _1^{q^{l-n+1}+1}q^{\mu_2q^n\left( q^{l-n+1}+1\right) }\\ \le q^{\mu _3q^{l+1}},\quad \mu_3>0. \end{multline*} The Lemma also yields $$ \left| \Psi_{i_1}^{(n)}\right| ^{q^{i_2+\cdots +i_r}}\left| \Psi _{i_2}^{(n+i_1)}\right| ^{q^{i_3+\cdots +i_r}}\ldots \left| \Psi _{i_r}^{(n+i_1+\cdots +i_{r-1})}\right| \\ \le M_2^{q^{i_1+\cdots +i_r}+q^{i_2+\cdots +i_r}+\cdots +q^{i_r}}\le M_2^{q^{l+1}}. $$ Writing (17) as $$ w(t)=\sum \limits _{l=n_0}^\infty w_l\frac{t^{q^l}}{D_l} $$ we find that $$ \limsup \limits_{l\to \infty}|w_l|^{q^{-l}}\le \limsup \limits_{l\to \infty} \left( q^{\mu_3q^{l+1}}M_1^{q^l}M_2^{q^{l+1}}\right) ^{q{- l}}<\infty , $$ which implies positivity of the radius of convergence. $\quad \blacksquare $ The function field analogue $$ u(t)={}_2F_1(a,b;c;t),\quad a,b,c\in \mathbf Z, $$ of the Gauss hypergeometric function, which was introduced by Thakur [17], satisfies the equation [18] $$ A_2(\tau )d^2u+A_1(\tau )du+A_0(\tau )u=0, $$ where $A_2(\tau )=(1-\tau )\tau$, $A_1(\tau )=([-1]^q+[-b]+[-c])\tau -[-c]$, $A_0(\tau )=-[-a][-b]$ (note that the element $[i]=x^{q^i}-x\in \overline{K}_c$ is defined for any $i\in \mathbf Z$). The radii of convergence for solutions of this equation are found explicitly in [18]. 
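Before turning to algebraic questions, let us illustrate the treatment of the model equation (10) in the simplest case; this illustration is added only for orientation and is not used in the sequel. Consider the first-order equation
$$
a_1\tau du+a_0u=f,\qquad a_0,a_1\in \overline{K}_c,\ a_1\ne 0.
$$
By (12), $u_n\Phi _n=\varphi _n$ for $n\ge 1$, where
$$
\Phi _n=a_0+a_1[n]=(a_0-a_1x)+a_1x^{q^n}.
$$
If $a_0\ne a_1x$, then $|\Phi _n|=|a_0-a_1x|>0$ for all sufficiently large $n$, and the radius of convergence of the series (11) is at least that of the series for $f$. If $a_0=a_1x$, then $|\Phi _n|=|a_1|q^{-q^n}$, so that $|u_n|=|a_1|^{-1}q^{q^n}|\varphi _n|$; since $q^{q^n}|t|^{q^n}=(q|t|)^{q^n}$, the extra factor shrinks the radius of convergence at most by the fixed factor $q$, so that it remains positive, in accordance with (13).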
\section{ALGEBRAIC PROPERTIES} In this section we consider the associative ring $\mathcal A$ of ``polynomial differential operators'', that is finite sums \begin{equation} a=\sum \limits _{i,j}\lambda_{ij}\tau ^id^j,\quad \lambda_{ij}\in \overline{K}_c. \end{equation} Operations in $\mathcal A$ are defined in the natural way, with the use of the commutation relations $$ \tau \lambda =\lambda ^q\tau ,\quad d\lambda =\lambda ^{1/q}d\ (\lambda \in \overline{K}_c),\quad d\tau -\tau d=[1]^{1/q}. $$ Note that a representation of an operator $a$ in the form (18) is unique. Indeed, suppose that $$ a=\sum \limits _{i=0}^m\sum \limits _{j=0}^n\lambda_{ij}\tau ^id^j=0. $$ Let $\psi _l(t)=\frac{t^{q^l}}{D_l}$. Using (1), (2), we find that $$ 0=a(\psi _0)=\sum \limits _{i=0}^m\lambda _{i0}\tau ^i\psi _0=\lambda _{00}\psi _0+\sum \limits _{i=1}^m\lambda _{i0}[1]^{q^{i-1}}[2]^{q^{i-2}}\ldots [i]\psi _i $$ whence $\lambda _{i0}=0$. Then we proceed by induction; if $\lambda _{ij}=0$ for $j\le \nu<n$, then $$ a(\psi _{\nu +1})=\sum \limits _{i=0}^m\sum \limits _{j=\nu +1}^n \lambda_{ij}\tau ^id^j\psi _{\nu +1}=\sum \limits _{i=0}^m\lambda _{i,\nu +1}\tau ^i\psi_0, $$ so that $\lambda _{i,\nu +1}=0$ ($i=0,1,\ldots ,m$) as before. Some algebraic properties of the ring $\mathcal A$ are collected in the following theorem. \begin{teo} (i) The centre of the ring $\mathcal A$ coincides with $\mathbf F_q$. (ii) The ring $\mathcal A$ possesses no non-trivial two-sided ideals stable with respect to the mapping $$ P\left( \sum \limits _{i,j}\lambda_{ij}\tau ^id^j\right) = \sum \limits _{i,j}\lambda_{ij}^q\tau ^id^j. $$ (iii) The ring $\mathcal A$ is Noetherian. (iv) $\mathcal A$ is an Ore domain, that is $\mathcal A$ has no zero-divisors and $\mathcal A a\cap \mathcal A b\ne \{ 0\}$, $a\mathcal A \cap b\mathcal A \ne \{ 0\}$ for all pairs of non-zero elements $a,b\in \mathcal A$. \end{teo} {\it Proof}. (i) It is easily proved (by induction) that \begin{equation} [d,\tau ^i]=[i]^{1/q}\tau ^{i-1},\quad [d^j,\tau ]=[j]^{q^{- j}}d^{j-1} \end{equation} for any natural numbers $i,j$. Suppose that $a=\sum \limits _{i=0}^m\sum \limits _{j=0}^n\lambda_{ij}\tau ^id^j$ belongs to the centre of $\mathcal A$. Then $[\tau ,a]=[d,a]=0$. Repeatedly using (19), we find that $$ 0=[\tau ,a]=\sum \limits _{i=0}^m\sum \limits _{j=0}^n \left( \lambda_{ij}^q-\lambda _{ij}\right) \tau^{i+1}d^j-\sum \limits _{j=1}^n\lambda _{0j}[j]^{q^{-j}}d^{j-1}-\sum \limits _{i=0}^{m-1} \sum \limits _{j=0}^{n-1}\lambda_{i+1,j+1}[j+1]^{q^{i-j}} \tau^{i+1}d^j $$ whence \begin{equation} \lambda _{0j}=0,\quad j=1,\ldots ,n, \end{equation} \begin{equation} \lambda _{ij}^q-\lambda _{ij}-[j+1]^{q^{i-j}}\lambda _{i+1,j+1}=0, \quad i=0,1,\ldots ,m-1;\ j=0,1,\ldots ,n-1. \end{equation} It follows from (20), (21) that $\lambda _{ij}=0$ for $i<j$. Next, \begin{multline*} 0=[d,a]=\sum \limits _{i=1}^m\sum \limits _{j=0}^i \left( \lambda_{ij}^{1/q}-\lambda _{ij}\right) \tau^id^{j+1} -\sum \limits _{i=0}^{m-1}\sum \limits _{j=0}^i\lambda_ {i+1,j+1}^{1/q}[i+1]^{1/q}\tau^id^{j+1}\\ +\left( \lambda _{00}^{1/q}-\lambda _{00}\right) d-\sum \limits _{i=1}^m\lambda _{i0}^{1/q}[i]^{1/q}\tau ^{i-1}, \end{multline*} so that \begin{equation} \lambda _{i0}=0,\quad i=1,\ldots ,n; \end{equation} \begin{equation} \lambda _{ij}^{1/q}-\lambda _{ij}-\lambda_{i+1,j+1}^{1/q}[i+1]^{1/q}=0, \quad i=1,\ldots ,n-1;\ j=0,1,\ldots ,i; \end{equation} \begin{equation} \lambda _{00}^{1/q}-\lambda _{00}-\lambda_{11}^{1/q}[1]^{1/q}=0. \end{equation} From (22), (23) we get $\lambda _{ij}=0$ for $i>j$. 
Raising (24) to the power $q$ we can compare the resulting equality with (21) (with $i=j=0$). Then we find that $\lambda _{11}=0$, and by virtue of (21) also $\lambda _{ii}=0$, $i=2,\ldots ,m$. Finally, it follows from (24) that $\lambda _{00}^q=\lambda _{00}$, so that $a=\lambda _{00}\in \mathbf F_q$.

(ii) Let $D$ be a two-sided ideal in $\mathcal A$, $PD\subset D$, containing a non-zero element
$$
a=\sum \limits _{i=0}^m\sum \limits _{j=0}^n\lambda_{ij}\tau ^id^j.
$$
Then $D$ contains the element $a_1=P(a\tau )-\tau a$. It follows as above that
$$
a_1=\sum \limits _{i=0}^m\sum \limits _{j=1}^n\lambda_{ij}^q[j]^{q^{i-j+1}}\tau ^id^{j-1}.
$$
It is clear that either $\lambda _{ij}=0$ for $j\ge 1$, or $a_1\ne 0$, and the maximal degree of $d$ in $a_1$ is smaller by 1 than the one in $a$. Repeating the procedure (if necessary) we obtain a non-zero element of $D$ of the form
$$
b=\sum \limits _{i=0}^m\mu _i\tau ^i.
$$
If not all the coefficients $\mu _i$, $i\ge 1$, are equal to zero, we find a non-zero $b_1\in D$, $b_1=P(db)-bd$,
$$
b_1=\sum \limits _{i=1}^m\mu _i[i]\tau ^{i-1}.
$$
After an appropriate repetition we obtain that $D$ contains a non-zero constant, so that $D=\mathcal A$.

(iii) Let
$$
A_\nu =\left\{ \sum \limits _{i=0}^m\sum \limits _{j=0}^n\lambda_{ij}\tau ^id^j\in \mathcal A\ :\ m+n\le \nu \right\} .
$$
The sequence $\{ A_\nu \}$ of $\overline{K}_c$-vector spaces is increasing, and we can define a graded ring
$$
\mbox{gr}\, (\mathcal A) =A_0\oplus A(1)\oplus A(2)\oplus \ldots
$$
where $A(\nu )=A_\nu /A_{\nu -1}$, $\nu \ge 1$. The multiplication in $\mbox{gr}\, (\mathcal A)$ is defined as follows. If $f\in A(\nu )$, $g\in A(k)$, $\varphi \in A_\nu$ and $\psi \in A_k$ are arbitrary representatives of $f$ and $g$ respectively, then $\varphi \psi \in A_{\nu +k}$, and we define $fg$ as the class of $\varphi \psi$ in $A(\nu +k)$. It can be checked easily that the multiplication is well-defined. The ring $\mbox{gr}\, (\mathcal A)$ is generated by the classes $\overline{\tau },\overline{d}\in A(1)$ of the elements $\tau ,d\in A_1$, and constants from $\overline{K}_c$, with the commutation relations
$$
\overline{\tau }\overline{d}-\overline{d}\overline{\tau }=0, \quad \overline{\tau }c=c^q\overline{\tau }, \quad \overline{d}c=c^{1/q}\overline{d}\ \ (c\in \overline{K}_c ).
$$
It follows from the generalization of the Hilbert basis theorem given in \cite{Rt} that $\mbox{gr}\, (\mathcal A)$ is a Noetherian ring.

Let $\mathcal L$ be a left ideal in $\mathcal A$. We have to prove that $\mathcal L$ is finitely generated as an $\mathcal A$-submodule. Let $\Gamma _\nu =A_\nu \cap \mathcal L$. Then $\{ \Gamma _\nu \}$ is a filtration in $\mathcal L$. As above, we can construct a graded module $\mbox{gr}\, (\mathcal L)$, which is a left ideal in $\mbox{gr}\, (\mathcal A)$. Since $\mbox{gr}\, (\mathcal A)$ is Noetherian, $\mbox{gr}\, (\mathcal L)$ has a finite system of generators $\sigma _1,\ldots ,\sigma _n$, and we can write finite decompositions
$$
\sigma _j=\sum \limits _k\sigma _j(k),\quad \sigma _j(k)\in \Gamma _k/\Gamma _{k-1}.
$$
Let $\mu _1,\ldots ,\mu _N$ be the set of all non-zero elements $\sigma _j(k)$. Denote by $\gamma $ the canonical imbedding $\Gamma _k\to \Gamma _k/\Gamma _{k-1}=\Gamma (k)$ extended to the mapping $\mathcal L \to \mbox{gr}\, (\mathcal L)$.
Choose $m_i\in \mathcal L$ in such a way that $\gamma (m_i)=\mu _i$. Then the $m_i$ are generators of $\mathcal L$, that is \begin{equation} \Gamma _k\subset \sum \limits _{i=1}^N\mathcal A m_i\quad \mbox{for each }k. \end{equation} Indeed, (25) is obvious for $k=0$. Suppose that \begin{equation} \Gamma _{k-1}\subset \sum \limits _{i=1}^N\mathcal A m_i. \end{equation} Consider an element $l\in \Gamma _k\setminus \Gamma _{k-1}$. We have $\gamma (l)\in \Gamma (k)$, $$ \gamma (l)=\sum \limits _{j=1}^nc_j\sigma _j, \quad c_j\in \mbox{gr}\, (\mathcal A) . $$ Writing each $c_j$ as a sum of homogeneous components $$ c_j=\sum \limits _\nu c_j(\nu ),\quad c_j(\nu )\in A(\nu ), $$ and taking into account that $A(\nu )\Gamma (k)\subset \Gamma (k+\nu )$ for any $k,\nu $, we find that $$ \gamma (l)=\sum \limits _{j+\nu =k}c_j(\nu )\sigma _j(k)= \sum \limits _{j=0}^kc_j(k-j)\sigma _j(k). $$ Choosing $C_j\in A_{k-j}$ in such a way that $c_j(k-j)$ is a class of $C_j$ in $A(k-j)$, we obtain the inclusion $$ l-\sum \limits _{j=0}^kC_jm'_j\in \Gamma _{k-1} $$ where $\{ m'_j\}$ is a subset of $\{ m_i\} _{i=1}^N$. Together with (26) this implies (25). We have proved that $\mathcal A$ is left Noetherian. The proof that $\mathcal A$ is right Noetherian is similar. (iv) Let $ab=0$ for $$ a=\sum \limits _{i=0}^{m_1}\sum \limits _{j=0}^{n_1}\lambda_{ij}\tau ^id^j,\quad b=\sum \limits _{k=0}^{m_2}\sum \limits _{l=0}^{n_2} \mu_{kl}\tau ^kd^l, $$ and $a\ne 0$, $b\ne 0$, that is \begin{equation} \sum \limits _{i=0}^{m_1}\lambda _{in_1}\tau ^i\ne 0,\quad \sum \limits _{k=0}^{m_2}\mu _{kn_2}\tau ^k\ne 0. \end{equation} It follows from (19) that $$ d^j\tau ^k=\tau ^kd^j+O\left( d^{j-1}\right) $$ where $O\left( d^{j-1}\right)$ means a polynomial in the variable $d$, of degree $\le j-1$, with coefficients from the composition ring $\overline{K}_c \{ \tau \}$ of polynomials in the operator $\tau $. Therefore the coefficient of $d^{n_1+n_2}$ in the expression for the operator $ab$ equals $$ \sum \limits _{i,k}\lambda _{in_1}\mu _{kn_2}^{q^{i-n_1}}\tau ^{i+k}= P_1(P_2(\tau )) $$ where $$ P_1(\tau )=\sum \limits _{i=0}^{m_1}\lambda _{in_1}\tau ^i,\quad P_2(\tau )=\sum \limits _{k=0}^{m_2}\mu _{kn_2}^{q^{-n_1}}\tau ^k, $$ which contradicts (27), since the ring $\overline{K}_c \{ \tau \}$ has no zero-divisors \cite{G2}. Now (iv) follows from (iii) (see Sect. 4.5 in \cite{Bok}). However, we will also give an elementary direct proof (which does not use the Hilbert basis theorem or its generalizations). Let us prove the left Ore condition (the proof of the right condition is simpler and does not differ from the one in characteristic zero, see \cite{Bj}). Thus let $a,b\ne 0$; we will prove that $a\mathcal A \cap b\mathcal A \ne \{ 0\}$. As above, we will use the filtration $\{ A_\nu \}$ in $\mathcal A$. Let $\nu _1$ be a number such that $a,b\in A_{\nu _1}$. Then $aA_\nu \subset A_{\nu +\nu _1}$, $bA_\nu \subset A_{\nu +\nu _1}$. Suppose that $a\mathcal A \cap b\mathcal A =\{ 0\}$. Then in particular \begin{equation} aA_\nu \cap bA_\nu =\{ 0\} . \end{equation} Let us prove that the set $\{ a\tau ^id^j,\ i+j\le \nu \}$ forms a basis of $aA_\nu$, that is, its elements are linearly independent. This assertion is evident for $\nu =0$. Suppose that it holds for all $\nu \le N-1$, and consider the case $\nu =N$.
Let \begin{equation} \sum \limits _{i+j\le N}c_{ij}a\tau ^id^j=0,\quad c_{ij}\in \overline{K}_c. \end{equation} We may write $$ a=\sum \limits _{k+l\le \kappa }\lambda _{kl}\tau ^kd^l,\quad \lambda _{kl}\in \overline{K}_c,\ \kappa \le \nu _1, $$ where $\lambda _{kl}\ne 0$ for at least one pair $(k,l)$ with $k+l=\kappa$. Since $$ d^l\tau^i\equiv \tau ^id^l\pmod{A_{i+l-1}}, $$ we find that $$ \sum \limits _{i+j=N}c_{ij}\left( \sum \limits _{k+l= \kappa }\lambda _{kl}\tau ^kd^l\right) \tau ^id^j\equiv \sum \limits _{i+j=N}\sum \limits _{k+l=\kappa }c_{ij}\lambda _{kl} \tau ^{i+k}d^{j+l}\pmod{A_{N+\kappa -1}}. $$ By (29), the component of degree $N+\kappa$ must therefore vanish, that is $$ \sum \limits _{i+j=N}\sum \limits _{k+l=\kappa }c_{ij}\lambda _{kl} \tau ^{i+k}d^{j+l}=0 $$ whence $$ 0=\sum \limits _{i=0}^Nc_{i,N-i}\sum \limits _{k=0}^\kappa \lambda _{k,\kappa -k}\tau ^{i+k}d^{N+\kappa -(i+k)}=\sum \limits _{m=0}^{N+\kappa }\left( \sum \limits _{i+k=m}c_{i,N-i} \lambda _{k,\kappa -k}\right) \tau ^md^{N+\kappa -m}, $$ so that $$ \sum \limits _{i+k=m}c_{i,N-i}\lambda _{k,\kappa -k}=0,\quad m=0,1,\ldots ,N+\kappa . $$ The expression on the left-hand side coincides with the $m$-th coefficient of the product of the two polynomials with coefficients $c_{i,N-i}$ and $\lambda _{k,\kappa -k}$; since $\overline{K}_c$ has no zero-divisors and not all the $\lambda _{k,\kappa -k}$ vanish, we get $c_{ij}=0$ for $i+j=N$. Hence the summation in (29) is actually performed for $i+j\le N-1$, and by the induction assumption $c_{ij}=0$ for all $i,j$. Now $\dim (aA_\nu )=\dim (bA_\nu)=\dim A_\nu $, and it follows from (28) that \begin{equation} \dim (A_{\nu +\nu _1})\ge \dim (aA_\nu \oplus bA_\nu)=2\dim (A_\nu ). \end{equation} Note that $$ \dim A_\nu =\mbox{card }\{ (i,j):\ i+j\le \nu \} =\frac{(\nu +1)(\nu +2)}2, $$ and we see that $$ \frac{\dim (A_{\nu +\nu _1})}{\dim A_\nu }\longrightarrow 1\ \ \mbox{as}\ \ \nu \to \infty , $$ which contradicts (30). $\qquad \blacksquare$ \end{document}
\begin{document} \begin{frontmatter} \title{Experimental implementation of quantum information processing by Zeeman perturbed nuclear quadrupole resonance} \author[ufscar]{J. Teles\corref{cor1}} \ead{[email protected]} \author[ifsc]{C. Rivera-Ascona} \author[ifsc]{R. S. Polli} \author[ifsc]{R. Oliveira-Silva} \author[ifsc]{E. L. G. Vidoto} \author[ifsc]{J. P. Andreeta} \author[ifsc]{T. J. Bonagamba} \cortext[cor1]{Corresponding author} \address[ufscar]{Departamento de Ci\^{e}ncias da Natureza, Matem\'{a}tica e Educa\c{c}\~{a}o, Universidade Federal de S\~{a}o Carlos, Caixa Postal 153, 13600-970, Araras, SP, Brazil} \address[ifsc]{Instituto de F\'{i}sica de S\~{a}o Carlos, Universidade de S\~{a}o Paulo, Caixa Postal 369, 13560-970, S\~{a}o Carlos, SP, Brazil} \begin{abstract} Nuclear magnetic resonance (NMR) has been widely used in the context of quantum information processing (QIP). However, despite the great similarities between NMR and nuclear quadrupole resonance (NQR), no experimental implementation of QIP using NQR has been reported. We describe the implementation of basic quantum gates and their application to the creation of pseudopure states using linearly polarized radiofrequency pulses under static magnetic field perturbation. The NQR quantum operations were implemented using a single crystal sample of KClO$_{3}$ and observing $^{35}$Cl nuclei, which possess spin 3/2 and give rise to a 2-qubit system. The results are very promising and indicate that NQR can be successfully used for performing fundamental experiments in QIP. One advantage of NQR in comparison to NMR is that the main interaction is internal to the sample, which makes the system more compact, lowering its cost and making it easier to miniaturize into solid-state devices. \end{abstract} \begin{keyword} NQR \sep NMR \sep quantum information processing \sep experimental realization \end{keyword} \end{frontmatter} \section{Introduction} Many nuclear magnetic resonance (NMR) systems have been used in quantum information processing (QIP) \cite{Vandersypen2004,Ramanathan2004,Suter2008}. The first proposals were performed using liquid state NMR, followed by solid state experiments. Nano-scale devices and optically detected NMR were also proposed \cite{Fisher2003, Leskowitz2003, Itoh2005, Yusa2005, Baugh2006, Childress2006, Hanson2008}. In all these cases, the Zeeman interaction with an external magnetic field is the main interaction, with the perturbations ranging from scalar couplings in liquids to direct dipole couplings in solids and quadrupole interactions in solids and liquid crystals. In nuclear quadrupole resonance (NQR), the main interaction is due to the nuclear electric quadrupole moment in the presence of a local electric field gradient. As in the NMR case, the main interaction alone does not allow general transitions between all logic states. The first theoretical proposal for the use of pure nuclear quadrupole resonance in the context of QIP was made by Furman and Goren \cite{Furman2002}, in which two linearly polarized radiofrequency (RF) fields at arbitrary relative angles are responsible for lifting the degeneracy of the four magnetic states in the case of a spin 3/2, while a third RF field or the amplitude modulation of one of the two former fields promotes the energy transitions. Possa et al. \cite{Possa2011} proposed a simulation of the NMR-NQR system in an arbitrary configuration of elliptically polarized RF fields and quadrupole and Zeeman interactions.
In particular, aiming at QIP applications, they showed how to prepare pseudo-pure states and how to implement basic quantum gates in pure NQR using circularly polarized RF fields and double-quantum excitation. Here we describe the first NQR experimental protocol to implement QIP operations in a two-q-bit spin 3/2 system using linearly polarized radiofrequency pulses under static magnetic field perturbation. This protocol was implemented on the spin 3/2 $^{35}$Cl nuclei of a single crystal sample of KClO$_3$ \cite{Zeldes1957}. As a result, we obtained all four pseudo-pure states associated with the computational basis and applied the controlled-NOT (CNOT) and Hadamard gates to them. The reading of the resulting states was performed by approximately $\pi/2$ RF pulses. One of the advantages of using NQR systems for QIP resides in the compactness of the experimental system: since the main interaction is produced internally by the sample itself, the high magnetic fields normally produced by superconducting magnets, as in the case of NMR, become unnecessary. In the case of NQR, the perturbative magnetic field can be generated by a small Helmholtz coil pair, and adds little cost to the system. Moreover, under Zeeman perturbation we avoid the crossed RF coils that are necessary in the pure NQR case to distinguish different transitions by means of circularly polarized RF fields. \section{Theory} Here we will describe the form of the main operators in the quadrupole interaction picture using the secular approximation in the simplified case of a cylindrically symmetric electric field gradient. The expressions for the energy levels and for the RF pulse operators will be applied in section \ref{Experimental} in order to implement the quantum gate optimizations and the experimental setup. \subsection{The Zeeman perturbed NQR Hamiltonian} We will consider the case in which the main contribution to the nuclear Hamiltonian is due to the interaction of the nuclear electric quadrupole moment with the local electric field gradient (EFG), where this gradient is assumed to be cylindrically symmetric. The symmetry axis of the EFG tensor will be designated by the vector $\vec{G}$ and will be oriented along the $z$ direction, as usual. We will consider the case of a single-crystalline sample having only one EFG symmetry axis direction per unit cell. To the main term will be added an external static magnetic field $\vec{B}_{0}$, oriented in an arbitrary direction constrained to the $xz$ plane and making an angle $\theta$ with the $z$-axis. The time dependent perturbation necessary for the quantum transitions (quantum gates) will be accomplished by a linearly polarized oscillating magnetic field $\vec{B}_{1}$ (RF pulses) constrained to the $xy$ plane and making an angle $\phi$ with the $x$-axis. Fig.~1 illustrates these choices. \begin{figure} \caption{Definition of the angles and orientations of the main interactions in the Zeeman perturbed NQR system.} \end{figure} Under those conditions, the NQR Hamiltonian is written in the laboratory frame as: \begin{align}\label{eq1} H(t)&=H_{0}+H_{RF}(t)\;. \end{align} The RF-unperturbed term, $H_{0}=H_{Q}+H_{Z}$, contains the contribution of the quadrupole interaction and the Zeeman perturbation with the static magnetic field, while the $H_{RF}$ term is due to the RF field.
The quantum operators are given by: \begin{equation} H_{Q}=\hbar\frac{\omega_{Q}}{2}\left[I_{z}^{2}-\hat{1}I(I+1)/3\right]\;, \end{equation} \begin{equation}\label{eq1p4} H_{Z}=-\hbar\omega_{0}(I_{x}\sin\theta+I_{z}\cos\theta)\;\mathrm{, and} \end{equation} \begin{equation}\label{eq1p5} H_{RF}(t)=-2\hbar\omega_{1}\cos(\omega t+\alpha)e^{-i\phi I_{z}}I_{x}e^{i\phi I_{z}}\;, \end{equation} where $I$ is the spin quantum number, $I_{x}$, $I_{y}$, and $I_{z}$ are the dimensionless Cartesian angular momentum operator components, and $\omega_{0}=\gamma B_{0}$ and $\omega_{1}=\gamma B_{1}$ are the Zeeman couplings with the static and RF fields of amplitudes $B_{0}$ and $B_{1}$, respectively. The RF field is applied with frequency $\omega$ and initial phase $\alpha$. Since the Zeeman term is a perturbation of the main Hamiltonian, the quadrupole coupling $\omega_{Q}$ is much greater than $\omega_{0}$. In the absence of the $B_{0}$ field, the quantum states are two-fold degenerate, and the quantum transitions occur at frequencies that are multiples of $\omega_{Q}$, in accordance with Fig.~2. \begin{figure} \caption{Energy states and the allowed transitions in pure NQR ($H_Q$) and Zeeman perturbed NQR ($H_{Q}+H_{Z}$).} \end{figure} \subsection{Quadrupole interaction picture} To facilitate the solution of the Schr\"{o}dinger equation for the nuclear state $|\psi\rangle$, we apply the following unitary transformation, going to the quadrupole interaction picture: \begin{equation} U(t)=e^{\frac{i}{2}\omega I_{z}^{2}t}\;. \end{equation} The transformed Hamiltonian is given by: \begin{equation} \tilde{H}=UHU^{-1}-i\hbar U\frac{d}{dt}U^{-1}\;. \end{equation} Applying the above transformation to Eq.~(\ref{eq1}) results in: \begin{equation}\label{eq2} \tilde{H}(t)=\frac{\hbar}{2}\Delta\omega_{Q}I_{z}^{2}+\tilde{H}_{Z} +\tilde{H}_{RF}(t)\;, \end{equation} where $\Delta\omega_{Q}=\omega_{Q}-\omega$ and we neglected the term proportional to the identity. All operators in the quadrupole picture are represented by the tilde symbol. The transformed operators in Eq.~(\ref{eq2}) are calculated by: \begin{equation} U\frac{d}{dt}U^{-1}=-\frac{i}{2}\omega I_{z}^{2}\;, \end{equation} \begin{equation} UI_{z}U^{-1}=I_{z}\;, \end{equation} \begin{equation} UI_{z}^{2}U^{-1}=I_{z}^{2}\;\mathrm{, and} \end{equation} \begin{equation} \left[UI_{x}U^{-1}\right]_{i,i\pm1}=\exp\left[\frac{i}{2}\omega t\left(m_{i}^{2}-m_{i\pm1}^2\right)\right][I_{x}]_{i,i\pm1}\;. \end{equation} For the $I_x$ operator, only the non-null matrix elements in the $I_z$ eigenstate basis are shown. The RF field will induce transitions between the quantum states for values of $\omega$ close to $\omega_{Q}$. Under this condition, we can neglect, in a first-order approximation, any terms in Eq.~(\ref{eq2}) that oscillate with frequencies of the order of $\omega$, since these frequencies are much higher than $\Delta\omega_{Q}$ and $\omega_{0}$. In evaluating the matrix elements of $\tilde{H}_{Z}$, we see that only the central transition elements $(\pm1/2,\mp1/2)$ are independent of $\omega$, making these elements the only ones that must be kept. On the other hand, the matrix elements of $\tilde{H}_{RF}$ are multiplied by the function $\cos(\omega t+\alpha)$ in Eq.~(\ref{eq1p5}). In this case, the oscillating factors of the satellite transition elements $(\pm3/2,\pm1/2)$ are cancelled by the cosine terms. The same does not happen with the central transition elements, which are the only ones to be neglected in the $\tilde{H}_{RF}$ operator.
To illustrate the above discussion, let us calculate the matrix elements for the specific case of a spin~3/2 nucleus in the $I_z$ eigenstate basis: \begin{align} \tilde{I}_{x}&=UI_{x}U^{-1}\nonumber \\ &=\frac{1}{2} \begin{bmatrix} 0 & \sqrt{3}e^{i\omega t} & 0 & 0 \\ \sqrt{3}e^{-i\omega t} & 0 & 2 & 0 \\ 0 & 2 & 0 & \sqrt{3}e^{-i\omega t} \\ 0 & 0 & \sqrt{3}e^{i\omega t} & 0 \end{bmatrix}\;, \end{align} which in a first-order approximation reduces to: \begin{equation}\label{eq2p5} \tilde{I}_{x}\approx\begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}\;. \end{equation} The same approximation applied to the RF Hamiltonian results in: \begin{align}\label{eq3} \frac{\tilde{H}_{RF}}{\hbar\omega_{1}}&=-2\cos(\omega t+\alpha)U(I_{x}\cos\phi+I_{y}\sin\phi)U^{-1}\nonumber \\ &\approx -\frac{\sqrt{3}}{2} \begin{bmatrix} 0 & e^{-i\phi_{+}} & 0 & 0 \\ e^{i\phi_{+}} & 0 & 0 & 0 \\ 0 & 0 & 0 & e^{-i\phi_{-}} \\ 0 & 0 & e^{i\phi_{-}} & 0 \end{bmatrix}\;, \end{align} where $\phi_{\pm}\equiv\phi\pm\alpha$. Therefore, for a time-independent $B_{1}$, we obtain a time-independent effective Hamiltonian $\tilde{H}$ in a first-order approximation. \subsection{Subspace diagonalization} Observing the representation of the operator $\tilde{H}_{RF}$ in Eq.~(\ref{eq3}), we see that there are non-null elements only in the individual subspaces of each q-bit. That could be a problem for producing general two-q-bit logic gates. However, the $\tilde{I}_{x}$ term arising from the static field perturbation makes the conditional evolution of the q-bits possible, as can be seen from the $(\pm1/2,\mp1/2)$ subspace in Eq.~(\ref{eq2p5}). In order to obtain the $\tilde{H}_{0}$ eigenvalues, it is necessary to diagonalize the RF-unperturbed Hamiltonian. The operator $R$ that diagonalizes $\tilde{H}_{0}$ in the case of spin 3/2 is given by: \begin{equation}\label{eq4} R=\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & f_{+} & f_{-} & 0 \\ 0 & f_{-} & -f_{+} & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}\;, \end{equation} where: \begin{equation} f_{\pm}=\sqrt{\frac{1}{2}\pm\frac{1}{2\sqrt{1+4\tan^{2}\theta}}}\;. \end{equation} For spin 3/2, the eigenvalues of the RF-unperturbed Hamiltonian obtained with the $R$ transformation are: \begin{equation}\label{eq5} E_{(+3/2)_R}=\frac{\hbar}{8}(9\Delta\omega_{Q}-12\omega_{0}\cos\theta)\;, \end{equation} \begin{equation} E_{(+1/2)_R}=\frac{\hbar}{8}(\Delta\omega_{Q}-4\omega_{0}g\cos\theta)\;, \end{equation} \begin{equation} E_{(-1/2)_R}=\frac{\hbar}{8}(\Delta\omega_{Q}+4\omega_{0}g\cos\theta)\;\mathrm{, and} \end{equation} \begin{equation}\label{eq5p4} E_{(-3/2)_R}=\frac{\hbar}{8}(9\Delta\omega_{Q}+12\omega_{0}\cos\theta)\;, \end{equation} where $g=\sqrt{1+4\tan^{2}\theta}$. The RF operator, in the new $R$ basis, is given by: \begin{equation}\label{eq5p5} \frac{\hat{H}_{RF}}{\hbar\omega_{1}}=-\frac{\sqrt{3}}{2} \begin{bmatrix} 0 & e^{-i\phi_{+}}f_{+} & e^{-i\phi_{+}}f_{-} & 0 \\ e^{i\phi_{+}}f_{+} & 0 & 0 & e^{-i\phi_{-}}f_{-} \\ e^{i\phi_{+}}f_{-} & 0 & 0 & -e^{-i\phi_{-}}f_{+} \\ 0 & e^{i\phi_{-}}f_{-} & -e^{i\phi_{-}}f_{+} & 0 \end{bmatrix}\;. \end{equation} All operators in the quadrupole picture and transformed by $R$ are represented by the hat symbol.
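As a numerical illustration of the two transformations above (our own sketch, not part of the original analysis), the short script below builds the spin-3/2 operators, checks that $U(t)I_{x}U^{-1}(t)$ has the phase structure shown for $\tilde{I}_x$, and verifies that the matrix $R$ of Eq.~(\ref{eq4}) diagonalizes the RF-unperturbed Hamiltonian with the eigenvalues of Eqs.~(\ref{eq5}) to (\ref{eq5p4}); the numerical values of $\Delta\omega_Q$, $\omega_0$ and $\theta$ are arbitrary choices made only for this check, and $\hbar=1$.
\begin{verbatim}
# Sketch (not from the original work): numerical check of the quadrupole
# interaction picture and of the R transformation for a spin-3/2 nucleus.
import numpy as np
from scipy.linalg import expm

m = np.array([1.5, 0.5, -0.5, -1.5])           # |3/2>, |1/2>, |-1/2>, |-3/2>
Iz = np.diag(m)
Ip = np.zeros((4, 4))
for i in range(1, 4):                          # I+|m> = sqrt(I(I+1)-m(m+1)) |m+1>
    Ip[i - 1, i] = np.sqrt(3.75 - m[i] * (m[i] + 1))
Ix = (Ip + Ip.T) / 2

w, t = 2 * np.pi * 28.1e6, 1.7e-9              # arbitrary RF frequency and time
U = expm(0.5j * w * t * Iz @ Iz)
Ix_tilde = U @ Ix @ U.conj().T
# expected phases exp(+-i w t) on the satellite elements, none on the central one
expected = np.array([[0, np.sqrt(3) * np.exp(1j * w * t), 0, 0],
                     [np.sqrt(3) * np.exp(-1j * w * t), 0, 2, 0],
                     [0, 2, 0, np.sqrt(3) * np.exp(-1j * w * t)],
                     [0, 0, np.sqrt(3) * np.exp(1j * w * t), 0]]) / 2
assert np.allclose(Ix_tilde, expected)

# RF-unperturbed Hamiltonian in the quadrupole picture (secular approximation)
dwQ, w0, theta = 2 * np.pi * 10e3, 2 * np.pi * 3e3, np.deg2rad(71.3)
Ix_sec = np.zeros((4, 4)); Ix_sec[1, 2] = Ix_sec[2, 1] = 1.0   # central block only
H0 = 0.5 * dwQ * Iz @ Iz - w0 * (np.sin(theta) * Ix_sec + np.cos(theta) * Iz)

g = np.sqrt(1 + 4 * np.tan(theta) ** 2)
fp, fm = np.sqrt(0.5 + 0.5 / g), np.sqrt(0.5 - 0.5 / g)
R = np.array([[1, 0, 0, 0], [0, fp, fm, 0], [0, fm, -fp, 0], [0, 0, 0, 1]])
E = np.array([9 * dwQ - 12 * w0 * np.cos(theta),
              dwQ - 4 * w0 * g * np.cos(theta),
              dwQ + 4 * w0 * g * np.cos(theta),
              9 * dwQ + 12 * w0 * np.cos(theta)]) / 8
assert np.allclose(R @ H0 @ R, np.diag(E))     # R is symmetric and orthogonal
print("interaction-picture and R-transformation checks passed")
\end{verbatim}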
\subsection{State evolution and measurement} The Zeeman perturbed NQR signal can be calculated by the following trace equation: \begin{equation}\label{eq6} S(t)\propto\mathrm{Tr}\left\{\hat{\rho}\cdot \hat{U}_{0}^{-1}(t)\hat{H}_{RF}\hat{U}_{0}(t)\right\}\;, \end{equation} where $\hat{U}_{0}(t)=e^{-\frac{i}{\hbar}\hat{H}_{0}t}$ is the evolution operator during the reading interval in the laboratory frame, that is, considering $\Delta\omega_{Q}=\omega_{Q}$ in Eqs.~(\ref{eq5}) to (\ref{eq5p4}). The resulting density matrix $\hat{\rho}=\hat{P}\rho_{eq}\hat{P}^{-1}$ is obtained by the application of the propagator: \begin{equation}\label{eq6p1} \hat{P}=\prod_{n}e^{-\frac{i}{\hbar}\hat{H}_{n}\Delta t_{n}}\;, \end{equation} where $\Delta t_{n}$ is sufficiently small such that $\hat{H}_{n}$ can be considered time independent in the corresponding interval. It is possible to factorize the operator $e^{-i\phi(I_{z_1}+I_{z_2})}$ from $\hat{H}_{RF}$ and $\hat{H}_{0}$, where $I_{z_1}$ and $I_{z_2}$ are spin 1/2 operators, such that the signal $S(t)$ is independent of the polarization angle $\phi$. The initial equilibrium state $\rho_{eq}$ is given by the Boltzmann statistics in the high temperature approximation: \begin{equation}\label{eq6p2} \rho_{eq}=\frac{1}{Z}\left(\hat{1}-\frac{H_{0}}{kT}\right)\;, \end{equation} where $Z$ is the partition function, $k$ the Boltzmann constant, and $T$ the sample temperature. Substituting Eqs.~(\ref{eq5}) to (\ref{eq5p5}) into Eq.~(\ref{eq6}) results in the following signal expression for spin 3/2: \begin{align}\label{eq7} S(t)\propto&\;\hat{\rho}^{*}_{1,2}f_{+}e^{i\nu_{1,2}t} +\hat{\rho}^{*}_{1,3}f_{-}e^{i\nu_{1,3}t}+\nonumber\\ &+\hat{\rho}_{2,4}f_{-}e^{-i\nu_{2,4}t} -\hat{\rho}_{3,4}f_{+}e^{-i\nu_{3,4}t}\;, \end{align} where the indexes $(1,2,3,4)$ correspond, respectively, to the quantum numbers $(\frac{3}{2},\frac{1}{2},-\frac{1}{2},-\frac{3}{2})_{R}$, and the Bohr frequencies $\nu_{i,j}=(E_{i}-E_{j})/\hbar$ are obtained from Eqs.~(\ref{eq5}) to (\ref{eq5p4}). Therefore, in the presence of the static magnetic field perturbation, the twofold degeneracy of the spin 3/2 states is lifted, producing up to four distinct energy levels and up to four observable transitions, as obtained in Eq.~(\ref{eq7}) and illustrated by Fig.~2. The density matrix elements can also be computed by means of Eq.~(\ref{eq7}), where reading pulses can be applied prior to acquisition in order to transfer the various coherences of $\hat{\rho}$ to the coherences indicated in Eq.~(\ref{eq7}) -- a process known as quantum state tomography in QIP terminology, a procedure which was not developed in this work but is under development in our group. \section{\label{Experimental} Experimental} \subsection{KClO$_3$ Single-crystal} The $^{35}$Cl nuclei possess spin 3/2 with a gyromagnetic ratio of 4.176~MHz/T, and in a single crystal of KClO$_3$ they present a quadrupole coupling of $\omega_{Q}/2\pi=28.1$~MHz. There is only one EFG symmetry axis per unit cell, since all molecules are crystallographically equivalent \cite{Zeldes1957}. The potassium chlorate single crystals were grown by the aqueous solution method. A potassium chlorate solution was prepared by dissolving the Merck GR (for analysis) KClO$_3$ powder in distilled water. The solution, after being filtered at a temperature of 35~$^\circ$C, was sealed (to prevent solvent loss by evaporation) and placed in a heat bath. The crystal seeds were nucleated using a slow cooling process down to 30~$^\circ$C with a cooling rate of 0.5-0.3~$^\circ$C/day.
Although irregular growths were observed in some of the experiments, with tendencies to grow dendrites, needle-shaped crystals and rough surfaces, transparent and high optical quality potassium chlorate single crystals were grown using slow temperature reduction -- 0.1~$^\circ$C/day -- on a rotating seed holder at 15~rpm. The whole preparation process, the stable growth conditions, as well as the growth habit control, will be described elsewhere. \subsection{Crystal orientation and the choice of $\theta$} The Zeeman perturbed NQR Hamiltonian and, consequently, the evolution of the quantum states depend on the angle $\theta$ between the static magnetic field $\vec{B}_{0}$ and the direction of the principal axis of the EFG tensor, $\vec{G}$. In order to easily implement any angle $\theta$, we built an RF probe inside the goniometer illustrated in Fig.~3. In this system, the crystalline sample is kept fixed inside the RF coil and both stay fixed inside the probe holder box. The latter, in turn, can be rotated around the vertical $\hat{v}$ and the horizontal $\hat{h}$ axes. The field $\vec{B}_{0}$ is stationary and lies in the horizontal plane. \begin{figure} \caption{a) Initial configuration of the goniometer, where the spectral line frequencies are measured as a function of $\phi_{h}$. b) Configuration after bringing $\vec{G}$ into the horizontal plane, in which $\theta$ is set by rotating about the vertical axis.} \end{figure} The crystal was positioned with its longer dimension aligned with the RF coil axis in order to have a better filling factor, which was further improved by using a solenoid of elliptical cross section. The drawback of this squeezed coil design was a decrease in the $B_1$ amplitude homogeneity. With those choices, $\vec{G}$ points in a direction that, in general, does not lie in the horizontal plane. Fig.~3a illustrates that situation, where $\vec{G}$ makes an initial angle $\theta$ with $\vec{B}_{0}$. The first procedure is, therefore, to bring $\vec{G}$ into the horizontal plane. This can be done by setting $\hat{h}$ perpendicular to $\vec{B}_{0}$ and observing the frequencies of the spectral lines as a function of the horizontal rotation angle $\phi_{h}$. It can be shown that the $\theta$ dependence in Eqs.~(\ref{eq5}) to (\ref{eq5p4}) is: \begin{equation}\label{eq8} \cos\theta=\sin\theta_{h}\cos\phi_{h}\;, \end{equation} where $\theta_{h}$ is the angle between $\vec{G}$ and the rotation axis $\hat{h}$. With the aid of Eq.~(\ref{eq8}) it can be shown that the maxima and minima of all frequency functions $\nu_{i,j}(\phi_{h})$ occur at $\phi_h=n\pi$ for $n$ integer -- angles at which $\vec{G}$ lies in the horizontal plane. Fig.~4 shows the experimental $^{35}$Cl spectral line frequencies as a function of $\theta$. From these data, it was possible to choose any angle $\theta$ by simply rotating the goniometer about the vertical axis in the configuration illustrated in Fig.~3b. \begin{figure} \caption{Experimental data of the $^{35}$Cl spectral line frequencies as a function of $\theta$.} \end{figure} Since only the component of $\vec{B}_{1}$ perpendicular to $\vec{G}$ is effective in inducing transitions, we obtain the following expression: \begin{equation}\label{eq10} B_{1_\perp}=B_{1}\sqrt{1+ \sin^{2}\gamma\left[\sin^{2}(\theta+\delta)-1\right]}\;, \end{equation} where $\gamma$ is the angle that $\vec{B}_{1}$ makes with $\hat{v}$ and $\delta$ is the angle that the projection of $\vec{B}_{1}$ onto the $xz$ plane makes with $\vec{B}_{0}$. This geometry is indicated in Fig.~3b.
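As a quick numerical illustration of Eq.~(\ref{eq10}) (our own sketch, not part of the original work), the snippet below evaluates $B_{1_\perp}/B_1$ using the angle values quoted in the next subsection ($\gamma=65^{\circ}$, $\delta=70^{\circ}$, $\theta\approx71.3^{\circ}$):
\begin{verbatim}
# Sketch: evaluate the B1_perp expression for the geometry reported below
# (gamma = 65 deg, delta = 70 deg, theta ~ 71.3 deg).
import numpy as np

gamma, delta, theta = map(np.deg2rad, (65.0, 70.0, 71.3))
ratio = np.sqrt(1 + np.sin(gamma) ** 2 * (np.sin(theta + delta) ** 2 - 1))
print(f"B1_perp / B1 = {ratio:.3f}")   # ~0.707, i.e. the 70.7% quoted in the text
\end{verbatim}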
We envisage at least three criteria for the choice of the angle $\theta$: \begin{enumerate} \item Maximize $B_{1_{\perp}}$ in order to apply shorter pulses and thus minimize relaxation effects. According to Eq.~(\ref{eq10}), this condition is met for $\theta=\pi/2-\delta$. \item Choose angles where the frequency functions $\nu_{i,j}(\theta)$ are most sensitive to $\theta$ in order to minimize errors in its determination. From Fig.~4b we can see that this condition is fairly well satisfied in the intervals $75^{\circ}\leq\theta\leq105^{\circ}$ and $255^{\circ}\leq\theta\leq285^{\circ}$. \item Make the spectral lines evenly spaced, such that single transitions can be individually excited with shorter RF pulses, again minimizing relaxation. This condition corresponds to $\cos^{2}\theta=4/39$. In fact, the greatest separation between spectral lines would occur for $\theta=n\pi$ with $n$ integer; however, there would then be no contribution from the $I_{x}$ term in Eq.~(\ref{eq1p4}), preventing the implementation of two-q-bit gates. \end{enumerate} Since the three criteria cannot be simultaneously satisfied, we chose the third one. That corresponds to $\theta\approx 71.3^{\circ}$, which is near the most sensitive region given by criterion two. Moreover, in our experimental setup we found $\gamma=65^{\circ}$ and $\delta=70^{\circ}$, which gives a suitably high $B_{1_{\perp}}/B_{1}$ ratio of 70.7\%. \subsection{Relaxation times} In order to estimate the transverse and longitudinal relaxation times of the $^{35}$Cl nuclei in KClO$_3$, we used spin-echo and progressive saturation techniques, respectively. In the case of the spin-echo sequence, the experiment was run using 10~$\mu$s and 20~$\mu$s RF pulses, with the single crystal oriented in such a way that the EFG and the perturbative external magnetic field were aligned, allowing the detection of the echo intensities but yielding spectra with only two spectral lines. The echo time was changed from 2 to 27~ms in equally spaced steps of 0.5~ms. The recycle delay was 100~ms. The estimated value of $T_2$ was approximately $(4.6\pm 0.2)$~ms for both lines. For the progressive saturation experiments, we used only a single 10~$\mu$s RF pulse, with the EFG and the perturbative external magnetic field making an angle of 71.3$^\circ$, allowing the observation of the four expected equally spaced peaks in the spectra. The delay times ranged from 5~ms to 205~ms in steps of 5~ms. The estimated $T_1$ value for all the observed lines was $(32\pm 2)$~ms. \subsection{Quantum gates and pseudo-pure states implementation} Fig.~5(a) shows the modulus of the $^{35}$Cl spectrum obtained for the equilibrium state after the application of an approximately $\pi/2$ reading pulse, which corresponds to a single RF pulse of 10~$\mu$s. The angle between the static magnetic field and the EFG symmetry axis was kept at $\theta\approx71.3^{\circ}$, giving rise to four equally spaced spectral lines. The frequency separation between adjacent lines was 3.5~kHz, which gives a static magnetic field of 730~$\mu$T. This field was produced by the stray field of a wide horizontal bore superconducting magnet of 2~T. This choice took advantage of the laboratory setup; however, a small Helmholtz pair could produce a similar magnetic field. \begin{figure} \caption{$^{35}$Cl spectra obtained after an approximately $\pi/2$ reading pulse: (a) equilibrium state and (b) the four pseudo-pure states.} \end{figure} The quantum gates were numerically optimized using Eq.~(\ref{eq6p1}), with the Hilbert-Schmidt inner product used for fidelity evaluation \cite{Fortunato2002,Teles2007}.
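Returning to the choice of $\theta$ made above, criterion three can be checked numerically; the sketch below (ours, not part of the original work) verifies that $\cos^{2}\theta=4/39$ gives $\theta\approx71.3^{\circ}$ and that, for this angle, the four transition frequencies obtained from Eqs.~(\ref{eq5}) to (\ref{eq5p4}) are indeed evenly spaced. The values of $\omega_Q$ and $\omega_0$ are arbitrary and $\hbar=1$.
\begin{verbatim}
# Sketch: check that cos^2(theta) = 4/39 makes the four Zeeman-perturbed NQR
# lines evenly spaced (lab frame, Delta_wQ = wQ; arbitrary wQ and w0, hbar = 1).
import numpy as np

theta = np.arccos(np.sqrt(4 / 39))
print(np.degrees(theta))                       # ~71.3 degrees

wQ, w0 = 2 * np.pi * 28.1e6, 2 * np.pi * 3.0e3
g = np.sqrt(1 + 4 * np.tan(theta) ** 2)        # equals 6 for this theta
E = np.array([9 * wQ - 12 * w0 * np.cos(theta),
              wQ - 4 * w0 * g * np.cos(theta),
              wQ + 4 * w0 * g * np.cos(theta),
              9 * wQ + 12 * w0 * np.cos(theta)]) / 8
# transitions appearing in the signal expression: (1,2), (1,3), (2,4), (3,4)
lines = np.sort(np.abs([E[0] - E[1], E[0] - E[2], E[1] - E[3], E[2] - E[3]]))
gaps = np.diff(lines)
assert np.allclose(gaps, gaps[0])              # evenly spaced spectral lines
print((lines - wQ) / (2 * np.pi))              # offsets from wQ, in Hz
\end{verbatim}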
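The optimization procedure itself is not detailed in the text; as a rough illustration (our own sketch, not the authors' code), the snippet below assembles the two ingredients mentioned: a piecewise-constant propagator of the form of Eq.~(\ref{eq6p1}), built from the $R$-basis Hamiltonians of Eqs.~(\ref{eq5}) to (\ref{eq5p5}) for on-resonance pulses, and a Hilbert-Schmidt figure of merit between that propagator and a target gate. All numerical values (RF amplitude, phases, durations) and the CNOT-like target are placeholders, not the experimental parameters.
\begin{verbatim}
# Sketch (illustration only): piecewise-constant propagator and Hilbert-Schmidt
# fidelity between the propagator and a target gate. Pulse amplitudes, phases
# and durations are placeholders; hbar = 1 and the pulses are on resonance.
import numpy as np
from scipy.linalg import expm

w0, theta = 2 * np.pi * 3.0e3, np.deg2rad(71.3)
g = np.sqrt(1 + 4 * np.tan(theta) ** 2)
fp, fm = np.sqrt(0.5 + 0.5 / g), np.sqrt(0.5 - 0.5 / g)
# RF-unperturbed Hamiltonian in the R basis, on resonance (Delta_wQ = 0)
H0 = np.diag([-12 * w0 * np.cos(theta), -4 * w0 * g * np.cos(theta),
              4 * w0 * g * np.cos(theta), 12 * w0 * np.cos(theta)]) / 8

def H_rf(w1, phi_p, phi_m):
    """RF Hamiltonian in the R basis for given amplitude and phases."""
    M = np.zeros((4, 4), dtype=complex)
    M[0, 1], M[0, 2] = np.exp(-1j * phi_p) * fp, np.exp(-1j * phi_p) * fm
    M[1, 3], M[2, 3] = np.exp(-1j * phi_m) * fm, -np.exp(-1j * phi_m) * fp
    return -0.5 * np.sqrt(3) * w1 * (M + M.conj().T)

def propagator(segments):
    """Time-ordered product of exponentials over piecewise-constant segments."""
    P = np.eye(4, dtype=complex)
    for (w1, phi_p, phi_m, dt) in segments:
        P = expm(-1j * (H0 + H_rf(w1, phi_p, phi_m)) * dt) @ P
    return P

def hs_fidelity(P, target):
    """Hilbert-Schmidt overlap |Tr(target^dagger P)| / dim."""
    return abs(np.trace(target.conj().T @ P)) / P.shape[0]

# placeholder two-segment pulse and a CNOT-like target (permutation of states 3,4)
segments = [(2 * np.pi * 5e3, 0.0, np.pi / 2, 20e-6),
            (2 * np.pi * 5e3, np.pi / 2, 0.0, 20e-6)]
target = np.eye(4)[:, [0, 1, 3, 2]]
print("fidelity of the placeholder pulse:", hs_fidelity(propagator(segments), target))
\end{verbatim}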
Five quantum gates were optimized: (i) controlled-NOT at q-bit $a$ (CNOT$_{a}$), (ii) controlled-NOT at q-bit $b$ (CNOT$_{b}$), (iii) permutation of populations $\frac{3}{2}\leftrightarrow\frac{1}{2}$ (P$_{12}$), (iv) permutation of populations $\frac{3}{2}\leftrightarrow-\frac{1}{2}$ (P$_{13}$), and (v) Hadamard at both q-bits $a$ and $b$ (H$_{ab}$). All gates were implemented with pulses on resonance. The longest gate was the P$_{24}$, whose pulse sequence lasted 230~$\mu$s. Nevertheless, this is approximately one-twentieth of the transverse relaxation time, preventing irreversible processes from taking place in the experiments performed in this work. As in the NMR case, the NQR thermal equilibrium states are far from pure states ($kT\gg\hbar\omega_{Q}$). Nevertheless, the state evolution is almost unitary for times much shorter than the spin relaxation times. These properties characterize thermal NMR and NQR as ensemble quantum computing techniques \cite{Gershenfeld1997,Cory1997}. A common procedure to relate the highly mixed equilibrium states to the pure states used in quantum computing is to apply operator sums to them. It can be shown that the following operator sums: \begin{equation}\label{eq11p1} S_{00}=\hat{1}+\mathrm{CNOT}_{a}+\mathrm{CNOT}_{b}\;, \end{equation} \begin{equation}\label{eq11p2} S_{01}=\hat{1}+\mathrm{CNOT}_{a}+\mathrm{P}_{13}\;, \end{equation} \begin{equation}\label{eq11p3} S_{10}=\hat{1}+\mathrm{CNOT}_{b}+\mathrm{P}_{12}\;\mathrm{, and} \end{equation} \begin{equation}\label{eq11p4} S_{11}=\hat{1}+\mathrm{P}_{13}+\mathrm{P}_{12}\;, \end{equation} when applied to the deviation equilibrium state: \begin{equation} \Delta\rho_{eq}=\frac{\hat{1}}{Z}-\rho_{eq}\propto I_{z}^{2}\;, \end{equation} generate the states associated with the computational basis, which are called pseudo-pure states (PPS). To perform the operations of Eqs.~(\ref{eq11p1}) to (\ref{eq11p4}), the pulse sequences corresponding to each quantum gate were implemented one at a time, and the resulting spectra were added. Fig.~6 illustrates the transformation $S_{00}$ on the deviation density matrix populations in order to obtain the pseudo-pure state PPS$_{00}$. \begin{figure} \caption{Illustration of the procedure for construction of the PPS$_{00}$ state: the operator sum $S_{00}$ applied to the populations of the deviation density matrix.} \end{figure} It can be shown that the application of $\pi/2$ reading pulses on each PPS results in the following normalized spectral amplitudes: \begin{equation}\label{eq34} \pi/2 \rightarrow \mathrm{PPS}_{00} = 1:0:1:0\;, \end{equation} \begin{equation} \pi/2 \rightarrow \mathrm{PPS}_{01} = 0:0:1:1\;, \end{equation} \begin{equation} \pi/2 \rightarrow \mathrm{PPS}_{10} = 1:1:0:0\;, \end{equation} \begin{equation}\label{eq37} \pi/2 \rightarrow \mathrm{PPS}_{11} = 0:1:0:1\;, \end{equation} where each amplitude is normalized relative to the equilibrium spectrum. Fig.~5(b) shows the modulus of the spectra obtained for each PPS after the application of the $\pi/2$ reading pulse. We can see a good agreement with the expected amplitudes in Eqs.~(\ref{eq34}) to (\ref{eq37}). The worst result was for PPS$_{01}$, whose peak amplitudes presented a standard deviation from the expected values, relative to the equilibrium spectrum, of 6.8\%. Fig.~7 shows the experimental results of the CNOT$_a$ and CNOT$_b$ gate implementations on each one of the PPS shown in Fig.~5, where the spectra were also obtained after a $\pi/2$ reading pulse. We can see that the spectra are fairly close to the expected states after the application of the CNOT gates.
In this case, the worst result was the operation CNOT$_b|01\rangle$, whose spectrum presented a standard deviation of the peak amplitudes from the expected values, relative to the equilibrium spectrum, of 11\%. \begin{figure} \caption{$^{35}$Cl spectra obtained after the application of the CNOT$_a$ and CNOT$_b$ gates on each PPS, followed by a $\pi/2$ reading pulse.} \end{figure} We also created entangled pseudo-pure states by applying the Hadamard gate on each PPS state of the computational basis. The results shown in Fig.~8(a) correspond to the modulus of the spectra obtained after a $\pi/2$ reading pulse. The black dots are the theoretical amplitudes and the horizontal bars represent the standard deviation of the experimental amplitudes relative to the theoretical ones. We can see a good agreement between the theoretical and experimental data. Only the third spectral line of the H$_{ab}$PPS$_{10}$ and the fourth line of the H$_{ab}$PPS$_{11}$ presented a strong deviation from the expected values. For this reason, the fourth line of the H$_{ab}$PPS$_{11}$ was not included in the calculation of the amplitude standard deviation represented by the horizontal bars. With that exception, the largest standard deviation was 12\%, for H$_{ab}$PPS$_{00}$. Fig.~8(b) shows a second successive application of the Hadamard gate which, since it is a self-adjoint gate, must restore the original state. We can see a very good agreement with the amplitudes of the corresponding states in Eqs.~(\ref{eq34}) to (\ref{eq37}). The deviations from the expected results can be mainly attributed to the RF amplitude inhomogeneity, since, in order to reach a good filling factor, the sample extended to the edges of the RF coil. NQR is also very sensitive to the sample temperature, and the quadrupole transition frequency variations can be of the order of kHz per kelvin \cite{utton1967}. Therefore, small variations in the sample temperature are responsible for off-resonance pulse imperfections. In order to minimize such effects, the pulse sequences were implemented with a large number of dummy scans and a relatively small number of scans. In this way, the thermal equilibrium in the presence of the RF radiation was established as well as possible before the actual signal acquisition. Another source of error, to a lesser extent, is the determination of the angle $\theta$ between the static magnetic field and the EFG symmetry axis. The best way to evaluate the experimental performance of a quantum operation would be to implement quantum state or quantum process tomography \cite{chuang1998b,bonk2004,kampermann2005}. We are already developing such a method, which will be presented in a forthcoming article. With the $\pi/2$ rotation operation we have only partial access to the two-q-bit system density matrix. Therefore, states that present spectral amplitudes very close to the theoretical expectations, such as those presented in Fig.~5(b), can in fact have undesired coherences which do not appear in such spectra but can be unveiled by a Hadamard gate operation, which mixes together many different coherences. That could be an explanation for the strong deviation in the fourth spectral line of the H$_{ab}$PPS$_{11}$ state in Fig.~8(a). \begin{figure} \caption{$^{35}$Cl spectra obtained after (a) the application of the Hadamard gate H$_{ab}$ on each PPS and (b) a second successive application of H$_{ab}$, both followed by a $\pi/2$ reading pulse.} \end{figure} \section{Conclusion} The results presented in this work show that the nuclear spin quantum states can be finely controlled and read out in a quantum information NQR context, with results as good as those of solid-state NMR.
This was demonstrated by the experimental implementation of the four pseudo-pure states associated with the spin 3/2 of the chlorine nuclei in the potassium chlorate single crystal. In our approach, a small static magnetic field and a linearly polarized RF field perturbation were applied in order to implement CNOT and Hadamard gates, whose results are in accordance with the expected ones. This indicates that the coherence control and the quantum transitions available in the Zeeman perturbed NQR technique can be used to implement basic QIP protocols. Besides its performance being similar to that of NMR, NQR presents other experimental physical parameters that can enhance studies involving the decoherence and relaxation of quantum states \cite{auccaise2008,soarespinto2011}. A current subject of research is the study of quantum correlations at room temperature, specifically the determination of quantum discord in NMR, which could also be studied in the NQR context \cite{auccaise2011,maziero2013}. Many recent proposals of QIP in NMR quadrupole systems, such as the study of coherent spin states \cite{estrada2013} and bifurcation \cite{araujo2013}, could also be studied by NQR. In order to fully explore the evolution of quantum states in low-dimensional Hilbert spaces using NQR, it will be interesting to develop a quantum state tomography procedure. Future improvements include the design of pulse sequences that are more robust against pulse imperfections \cite{khaneja2005} and the design of coils with more homogeneous fields. Stable temperature control with a sensitivity on the order of millikelvin would also avoid off-resonance RF pulse imperfections. \end{document}
\begin{document} \begin{frontmatter} \title{A refined consumer behavior model for energy systems: Application to the pricing and energy-efficiency problems} \author[1]{Chao Zhang} \author[2]{Samson Lasaulce} \author[3]{Li Wang\corref{cor1}} \ead{[email protected]} \author[4]{Lucas Saludjian} \author[5]{H. Vincent Poor} \cortext[cor1]{Corresponding author} \address[1]{School of Computer Science and Engineering, Central South University, Changsha 410083, China} \address[2]{CRAN (CNRS and University of Lorraine), 54000 Nancy, France} \address[3]{School of Mathematics and Statistics, Central South University, Changsha 410083, China} \address[4]{RTE France, 92800 Puteaux, France} \address[5]{Princeton University, 08544 Princeton, United States} \begin{abstract} \textcolor{black}{The sum-utility maximization problem is known to be important in the energy systems literature. The conventional assumption to address this problem is that the utility is concave. But for some key applications, such an assumption is not reasonable and does not reflect well the actual behavior of the consumer. To address this issue, the authors pose and address a more general optimization problem, namely by assuming the consumer's utility to be sigmoidal and in a given class of functions. The considered class of functions is very attractive for at least two reasons. First, the classical NP-hardness issue associated with sum-utility maximization is circumvented. Second, the considered class of functions encompasses well-known performance metrics used to analyze the problems of pricing and energy-efficiency. This allows one to design a new and optimal inclining block rates (IBR) pricing policy which also has the virtue of flattening the power consumption and reducing the peak power. We also show how to maximize the energy-efficiency by a low-complexity algorithm. When compared to existing policies, simulations fully support the benefit from using the proposed approach.} \end{abstract} \begin{keyword} Smart grid, demand response, inclining block rates, energy efficiency, game theory, prospect theory. \end{keyword} \end{frontmatter} \section{Introduction} \label{sec:intro} The smart grid concept encompasses many technological breakthroughs when compared to existing energy networks, and electricity networks in particular. Smart grid deployment will involve big changes at all levels of the electricity networks, at the generation side, at the transmission side, and at the distribution side \cite{Fang-survey-2012}. In particular, the possibility of having bidirectional information and energy flows will be an instrumental element of modern electricity networks. To be specific, demand response (DR) has been proposed to shape the demand at the consumers' side based on the received information from the energy providers \cite{Wang-survey-2017}\cite{Hui-AE-2020}. One fundamental benefit from such an approach is to be able to overcome one of the main limitations of existing power grids, namely, the peak power problem. This approach has been shown to be beneficial not only at the consumers' side, but also for flattening the overall power consumption and thus to reduce the cost incurred by the provider. Price-based DR is one of the most widely considered DR techniques, where demands are shaped through time-dependent pricing. \textcolor{black}{By implementing pricing-based demand response programs, the consumption behaviors are guided through pricing tariffs. 
To flatten power consumption, peak demand results in price escalation, which can in turn reduce the actual consumption.} Several different pricing strategies have been proposed in previous works, for instance, time-of-use pricing \cite{Wang-AE-2015}, critical peak pricing \cite{Liang-AE-2021}, peak-load pricing \cite{Liang-TSG-2013}, and real-time pricing (RTP) \cite{Chen-ICASSP-2011}\cite{Zhang-AE-2021}. Although most of these pricing tariffs have constant flat rates that are independent of the consumption, they can be combined with inclining block rates (IBRs) \cite{Reiss-RES-2005}, for which the electricity price rises when the consumption is beyond a given threshold. The ways of obtaining these pricing tariffs can be categorized according to their objectives in DR problems, such as minimization of the electricity cost \cite{Khayyam-JPS-2012}, maximization of the sum-utility (social welfare) \cite{Shi-SGC-2016}, and minimization of the aggregated power consumption \cite{Jiang-EP-2011}. In this paper, we consider an optimization problem where the pricing tariff is designed to induce a consumption behavior that allows social welfare to be maximized, social welfare being usually defined as the aggregate gain brought by all the consumers minus the electricity cost. A first prerequisite to conducting the analysis of the social welfare maximization problem is to choose a model for the behavior of the consumers. With the approach adopted in this paper, this choice amounts to defining an appropriate utility function for the consumers. In the related literature, quadratic utility functions are intensively used; in particular, they are chosen to account for what is called the satisfaction level of a consumer. By introducing satisfaction parameters in the quadratic utility function, the authors of the seminal paper \cite{MR-SGC-2010} design an RTP scheme which differentiates the energy usage in time and at a level that can achieve the globally optimal performance. With the same utility function, the authors of \cite{Samadi-TSG-2012} propose a Vickrey-Clarke-Groves (VCG) mechanism for demand side management to maximize social welfare. Modelling the satisfaction level as the Euclidean distance between the real consumption and the target consumption, the authors of \cite{Jiang-CDC-2011} and \cite{Jiang-Allerton-2011} provide a distributed algorithm to balance the supply and demand in the presence of random renewable energy. \textcolor{black}{Other types of utility functions have also been considered, such as the logarithm function in \cite{Fan-TSG-2012} and the linear function in \cite{Li-SGC-2011}.} These utility functions are assumed to be non-decreasing and concave, which means that the comfort obtained by the consumer gradually saturates as the energy consumption approaches the target level. This is a reasonable assumption when the real consumption is close to its target level. However, concavity also implies that the marginal benefit, i.e., the additional satisfaction that a consumer receives from an additional unit of consumption, attains its maximum at zero consumption. Unlike some cases in economics where the decreasing marginal benefit assumption holds, this might be in contradiction with practical experience in power systems \cite{Good-AE-2019}, namely, what is actually experienced by the consumer.
For instance, if the available power at the consumer is less than a threshold, the consumer may be unable to realize the desired task, meaning a very low quality of experience (QoE) \cite{Valentin-2015} as long as the threshold is not reached. If the consumer is an individual, a typical situation would be that an appliance cannot be turned on if the available power is too small. If the consumer is a company or an aggregator, not having enough power at its disposal may prevent a service from being provided or a transaction from occurring on the energy market. Assuming a concave utility therefore appears inappropriate, and a more general utility function model needs to be considered and studied. This is precisely the purpose of this paper. Indeed, in contrast with the state-of-the-art literature, instead of using concave utility functions, we propose a sigmoidal (S-shaped) utility function to better model the behavior of consumers, especially in the low-power-consumption regime. Inspired by prospect theory, we introduce a new degree of freedom or parameter $\lambda$ in the utility function, which can represent the effect of utility framing \cite{PT-1979}\cite{Saad}. Regarding demand response problems, as the sigmoidal function is not concave, conventional approaches to solve the corresponding optimization problems (e.g., utility maximization or cost minimization) need to be modified or reformulated. To this end, several analytical results have been found and new approaches are accordingly proposed. In addition, significant improvements have been observed in simulations when comparing the proposed schemes with existing techniques. In what follows, we describe in more detail the literature related to the analysis conducted in this paper. Enabling two-way communications in smart grid systems, DR is performed at the consumer side (residential district) to promote the interaction between consumers and the energy provider, with the aim of not only cutting down their energy bills but also enhancing their comfort level, represented by utility functions. Smart pricing tariffs designed by power utilities emerge as a promising technology for incentivizing consumers to reschedule their energy usage patterns. To maximize social welfare with the current grid capacity, \cite{MR-SGC-2010} has demonstrated the optimality of RTP with quadratic utility functions. Even though the unit rate of electricity varies from one time slot to another, it remains unchanged regardless of the power consumption level. Alternatively, RTP combined with IBR has been proposed in \cite{MR-TSG-2010}\cite{Zhuang-TSG-2013}, either to achieve a desired trade-off between the electricity payment and the waiting time of appliances, or to reduce both the electricity cost and the peak-to-average ratio. \textcolor{black}{In our model, since IBR-based pricing tariffs and sigmoidal functions both have two segments (two price levels for IBR, a convex part and a concave part for sigmoidal functions), it could be beneficial to use pricing tariffs with IBR.} Although this pricing tariff has been shown to be efficient and attracts a lot of attention in different applications, its optimality has been neither formally discussed nor proved.
\textcolor{black}{One of the main contributions of the present work is to mathematically prove the optimality of IBR pricing tariffs in terms of social welfare, but not only by experiments/simulations validation.} In addition, increasing electricity prices and concerns related to greenhouse gas effects have given more momentum to the problem of designing energy-efficient systems. To enhance energy savings, information and communication technologies are playing key roles in new grid infrastructures \cite{TWC-Bu-2012}\cite{Survey-EK-2015}. As energy-efficiency problems have been widely studied in communication systems, interdisciplinary approaches can be anticipated to tackle the energy-efficiency problems in smart grids. Some well-defined metrics to assess the energy efficiency in communication systems can be applied in power systems as well, such as overall gains divided by the total consumption (see \cite{Survey-Poor-2016}\cite{Zhang-TVT-2019}), i.e., the benefit brought by unit consumption. \textcolor{black}{The main \textbf{contributions} of this paper can be listed as follows: 1) We propose sigmoidal utility functions instead of the classical concave functions to refine the model of consumers' behavior; 2) We pose and study the associated and new social welfare optimization problem, which consists in maximizing a sum of sigmoidal functions over the unit simplex, and we provide expressions for the optimal solutions; 3) By exploiting the solutions obtained from the aforementioned optimization problem, we propose a new inclining block rates pricing tariff and prove its optimality in terms of social welfare; 4) A bisection-like algorithm is proposed to maximize energy-efficiency while ensuring the minimum requirements of each consumer.} The structure of the paper is as follows. The model of the utility function is presented in Sec. II. To prove the optimality of our pricing tariff, in Sec. III we firstly introduce the optimal way for the provider to allocate a given consumption to consumers such that the aggregate gain of all consumers can be maximized. The IBR pricing tariff is presented in Sec. IV, along with the maximization of energy efficiency. Simulation results are provided in Sec. V. \section{System model} \label{sec:sysm} Each consumer of the power system represents an entity which can behave independently; a consumer may represent e.g., an individual (say a subscriber), a household, a company, an energy aggregator, or a player of the energy market. The energy demand of each consumer may depend on various factors such as the time of day, climate conditions, and also the price of electricity. The energy demand also depends on the type of the users. For example, household users may have different responses to the same price than industrial sites. Different responses of end users to various pricing tariffs can be modeled analytically by adopting the concept of utility function from microeconomics. For all users, one can define the corresponding utility function as $\widetilde{U}(x;r)$, where $x \in \mathbb{R}$ is the power consumption level of the consumer and $r \in \mathbb{R}$ is the reference point which reflects individual expectations of power consumption and may vary among users. \textcolor{black}{The reference point is proportional to the power demand and also depends on the consumer's characteristics and consumption history (see Fig.~1). 
A simple example could be that the reference point $r$ is linearly proportional to the power demand $\omega$, i.e., $r=\alpha\omega$, where $\alpha$ represents the linear coefficient describing the relationship between the demand and the reference point.} More precisely, for each user, the utility function represents the level of satisfaction obtained by the consumer as a function of its power consumption and the reference point $r$. The concept of utility framing in prospect theory indicates that humans perceive the value of a utility in terms of gains and losses relative to their own reference point $r$. Consistent with empirical evidence, it is assumed that the utility function satisfies the following properties: \begin{figure} \caption{\textcolor{black}{Dependence of the reference point $r$ on the consumer's power demand, characteristics, and consumption history.}} \end{figure} 1) \textit{Property I:} Utility functions are non-decreasing. Consumers prefer to consume more power until the saturation power consumption is achieved. \textcolor{black}{Note that we define the utility functions for the aggregate load of different operations/tasks, rather than for the power consumption of each individual appliance. Therefore, consumers can complete more tasks if they consume more power.} Mathematically, when $\frac{\partial \widetilde{U}(x;r)}{\partial x}$ exists, it should fulfil the following relation: \begin{equation} \frac{\partial \widetilde{U}(x;r)}{\partial x}\geq 0. \end{equation} For notational convenience, we define \begin{equation} \widetilde{V}(x;r)\overset{\Delta}{=}\frac{\partial \widetilde{U}(x;r)}{\partial x} \end{equation} as the marginal benefit. 2) \textit{Property II:} The marginal benefit of consumers is first increasing and becomes decreasing when the power consumption exceeds a threshold $\theta$. This implies that \begin{equation} \forall x < \theta, \ \frac{\partial \widetilde{V}(x;r)}{\partial x}\geq 0, \end{equation} \begin{equation} \forall x \geq\theta,\ \frac{\partial \widetilde{V}(x;r)}{\partial x}\leq 0, \end{equation} where $\theta$ is a threshold depending on $r$. The utility function is thus no longer assumed to be concave but is sigmoidal. Unlike the utility functions proposed in \cite{MR-SGC-2010}, the maximum of the marginal benefit to consumers is achieved at a positive consumption $\theta$ rather than at zero consumption. This property is in line with the motivations provided in the introduction section. \textcolor{black}{In economics, the law of diminishing marginal utility states that, as the consumption (investment) increases, the marginal utility derived from each additional unit declines, which yields a decreasing marginal benefit. However, over recent years, researchers have found that this law is not suited to some applications in practical systems. As shown in \cite{Yang-TEM-2010}, supported by a large amount of empirical evidence from a wide variety of industrial sectors, including plastics, automotive, energy, transportation, and chemicals, the relationship between investment in research and development and firm performance can be better described by sigmoidal functions. The sigmoidal shape has also been shown to be effective in depicting ecological benefit functions \cite{Drechsler-EE-2020} and the alliance experience-performance relationship \cite{Tseng-CJAS-2017}.
Regarding the electrical consumer utility, empirical results (actual experience) obtained from consumers indicate that the marginal benefit at a very low consumption should be very small, since a low consumption cannot support the appliances in completing basic tasks (such as heating and lighting). The marginal benefit should be higher at the critical points at which basic goals are achieved, and it should diminish as the power consumption continues to increase in order to execute optional tasks. According to these practical experiences and to the use of the sigmoidal shape for benefit-consumption relationships in the literature, we propose sigmoidal utility functions here.} 3) \textit{Property III:} When the power consumption level is less than the reference point, it is assumed that a larger $r$ yields a smaller $\widetilde{U}(x;r)$, i.e., \begin{equation} \forall x<r, \ \frac{\partial \widetilde{U}(x;r)}{\partial r}\leq 0. \end{equation} A larger $r$ implies that the consumer is more difficult to satisfy, and thus leads to a lower satisfaction level. \textcolor{black}{Also, a higher reference point implies a higher demand level; thus, when consuming the same amount of power $x$, the consumer with the higher demand is further from its target consumption, resulting in a lower satisfaction level.} 4) \textit{Property IV:} We assume that zero power consumption brings no benefit to the consumer, i.e., \begin{equation} \widetilde{U}(x=0;r)=0. \end{equation} \textcolor{black}{According to these properties, the utility function should be increasing and sigmoidal, i.e., first convex and then concave. In addition, in a real-life setting, empirical studies have shown that decision makers tend to deviate noticeably from the rationality axioms in the presence of uncertain rewards \cite{PT-1979}\cite{Saad-2016}\cite{Tversky-1988}. According to prospect theory, consumers make decisions based on potential gains or losses relative to their specific situation (the reference point) rather than on an absolute value, and they feel greater aggravation when losing a certain amount of consumption than satisfaction when gaining the same amount of consumption (referred to as loss aversion). Very interestingly, the classical value functions used in prospect theory, such as power functions with an exponent less than one, are increasing and sigmoidal. Therefore, adjusting classical value functions to our model, we consider the following sigmoidal utility functions: \begin{equation}\label{eq:utility} \widetilde{U}(x;r)= \left\{ \begin{array}{lll} -\lambda(r-x)^{\alpha}+\lambda r^{\alpha} & \hbox{ if \,\,\,$0\leq x< r$} \\\\ (x-r)^{\alpha}+\lambda r^{\alpha} & \hbox{ if \,\,\,$r\leq x< x_{\max}$} \\\\ (x_{\max}-r)^{\alpha}+\lambda r^{\alpha} & \hbox{ if \,\,\,$x\geq x_{\max}$} \\\\ \end{array} \right. \end{equation} where choosing $0<\alpha<1$ ensures the S-shape property of the utility function, and $\lambda\geq 1$ represents a loss aversion coefficient. The reference point $r$ is typically different for each consumer and originates from its past experiences and future aspirations of profits. In our model, apart from the common term $\lambda r^{\alpha}$, which is independent of $x$, the terms $(x-r)^{\alpha}$ and $-\lambda(r-x)^{\alpha}$ can be seen as the gain and the loss, respectively. $x_{\max}$ is the saturation power of the consumer; consuming more than $x_{\max}$ brings no additional benefit.
The shape of the function is shown in Fig.~2.} \begin{figure} \caption{Utility functions for the consumers by setting $\alpha=0.6$ and $\lambda=1.5$} \end{figure} \section{Sum-utility maximization problem} In this paper, we consider a system consisting of one provider and $K$ consumers. Before presenting the pricing problems associated with the sigmoidal utility functions, we start by studying the allocation problem in which the provider has to allocate a fixed total power consumption budget $\chi$ among the consumers in order for the sum-utility to be maximized. Each utility consists of a benefit function associated with consuming minus a cost induced by electricity purchase/generation. It is worth noting that the saturation part does not change the structure of the power allocation policy but only involves adding one more constraint. For the sake of clarity, we assume the maximal consumption power level $x_{\max}$ is sufficiently large and the saturation part is neglected in the rest of the paper. Hence, the utility function can be simplified as \begin{equation}\label{eq:utility_simplified} U(x;r)= \left\{ \begin{array}{lll} -\lambda(r-x)^{\alpha}+\lambda r^{\alpha} & \hbox{ if \,\,\,$0\leq x< r$} \\\\ (x-r)^{\alpha}+\lambda r^{\alpha} & \hbox{ if \,\,\,$x\geq r$}. \\\\ \end{array} \right. \end{equation} Always for the sake of clarity, the $\lambda$ and $\alpha$ are assumed to be the same for all the consumers, and thus the different consumers can be mainly distinguished by their reference points. The extension to the most general case can be conducted quite easily. By denoting $\underline{x} = (x_1,\dots,x_K)$, the sum-utility maximization problem can be written as \begin{equation} \begin{array}{lll} &\underset{\underline{x}}{\max} &\sum_{i=1}^K U(x_i,r_i) \\\\ &\mathrm{s.t.} & x_i\geq 0 \\\\ &&\sum_{i=1}^K x_i\leq \chi \\\\ \end{array} \tag{OP-G} \end{equation} where $x_i$ represents the power consumption of consumer $i$, $r_i$ represents the reference point of Consumer $i$. \textcolor{black}{Note that the generic problem of maximizing a sum of sigmoidal functions over a convex constraint set is a non-trivial problem; it has been shown to be NP-hard \cite{Boyd-2013}. In \cite{Boyd-2013} the authors have proposed a general algorithm to find approximate solutions only, but for some structured problems such as the problem under consideration, the problem of finding the optimal solution turns out to be tractable. For this purpose, we first introduce two subproblems and show later that the original sigmoidal programming problem OP-G can be decomposed into these two subproblems. Consequently, the solution of OP-G can be fully expressed.} \subsection{Subproblem A} Assume $0<a_1\leq \lambda$ and $0<\alpha<1$, the first subproblem can be written as: \begin{equation} \begin{array}{lll} &\underset{x,y}{\max} &a_1x^{\alpha}+U(y;r) \\\\ &\mathrm{s.t.} & x,y\geq 0 \\\\ &&x+y\leq C_1 \\\\ \end{array} \tag{OP-A} \end{equation} \textcolor{black}{where $C_1$ is a given constant.} It is worth noting that $a_1x^{\alpha}$ is increasing w.r.t $x$ and $U(y;r)$ is increasing w.r.t $y$, and thus the inequality $x+y\leq C_1$ can be replaced by the equality $x+y=C_1$. Consequently, the first subproblem can be simplified as \begin{equation} \begin{array}{lll} &\underset{x}{\max} & a_1x^{\alpha}+U(C_1-x,r) \\\\ &\mathrm{s.t.} & 0\leq x\leq C_1.\\\\ \end{array} \end{equation} For notational convenience we define \begin{equation} f_1(x)=a_1x^{\alpha}+U(C_1-x,r). 
\end{equation} The problem OP-A boils down to finding the maximum of the function $f_1(x)$ in the interval $[0,C_1]$. Due to the discontinuity of the derivative at $x=C_1-r$, it is difficult to express the maximum point through a single formula. However, one can check that there is at most one local maximum point for $f_1(x)$ in the interval $[0,C_1]$. By studying the first derivative of $f_1(x)$ and comparing the value of local maximum with the value at boundaries, i.e., $f_1(0)$ and $f_1(C_1)$, the solution of OP-A can be classified and written as \begin{equation}\label{eq:solution_x_OPA} x^{\mathrm{A}}(C_1,r,a_1)= \left\{ \begin{array}{lll} C_1 & \hbox{ if \,\,\,$C_1\leq r\left(\frac{\lambda}{a_1}\right)^{\frac{1}{\alpha-1}}$} \\\\ \frac{r-C_1}{(\frac{\lambda}{a_1})^{\frac{1}{1-\alpha}}-1} & \hbox{ if \,\,\,$ r\left(\frac{\lambda}{a_1} \right)^{\frac{1}{\alpha-1}}<C_1\leq r$}, \\\\ \frac{(C_1-r)a_1^{\frac{1}{1-\alpha}}}{1+a_1^{\frac{1}{1-\alpha}}} & \hbox{ if \,\,\,$C_1>r$} \\\\ \end{array} \right. \end{equation} \begin{equation}\label{eq:solution_y_OPA} y^{\mathrm{A}}(C_1,r,a_1)= \left\{ \begin{array}{lll} 0 & \hbox{ if \,\,\,$C_1\leq r\left(\frac{\lambda}{a_1} \right)^{\frac{1}{\alpha-1}}$} \\\\ C_1-\frac{r-C_1}{(\frac{\lambda}{a_1})^{\frac{1}{1-\alpha}}-1} & \hbox{ if \,\,\,$ r\left(\frac{\lambda}{a_1}\right)^{\frac{1}{\alpha-1}}<C_1\leq r$}. \\\\ r+\frac{C_1-r}{1+a_1^{\frac{1}{1-\alpha}}} & \hbox{ if \,\,\,$C_1>r$} \\\\ \end{array} \right. \end{equation} \subsection{Subproblem B} Suppose $0<\lambda<a_2$, $0<\alpha<1$ the second subproblem can be written as \begin{equation} \begin{array}{lll} &\underset{x,y}{\max} &a_2x^{\alpha}+U(y;r) \\\\ &\mathrm{s.t.} & x,y\geq 0 \\\\ &&x+y\leq C_2 \\\\ \end{array} \tag{OP-B} \end{equation} \textcolor{black}{where $C_2$ is a given constant.} Similarly to the first subproblem, the second subproblem can be further simplified as \begin{equation} \begin{array}{lll} &\underset{x}{\max} & a_2x^{\alpha}+U(C_2-x;r) \\\\ &\mathrm{s.t.} & 0\leq x\leq C_2. \\\\ \end{array} \end{equation} For notational convenience we define \begin{equation} f_2(x)=a_2x^{\alpha}+U(C_2-x;r). \end{equation} Similarly to OP-A, by studying the first derivative of $f_2(x)$, the solution of OP-B can be derived and expressed for the different possible cases: \begin{equation}\label{eq:solution_x_OPB} x^B(C_2)= \left\{ \begin{array}{lll} C_2 & \hbox{ if \,\,\,$C_2< r\gamma_1$} \\\\ \frac{(C_2-r)a_2^{\frac{1}{1-\alpha}}}{1+a_2^{\frac{1}{1-\alpha}}} & \hbox{ if \,\,\,$C_2\geq r\gamma_1$} \\\\ \end{array} \right. \end{equation} \begin{equation}\label{eq:solution_y_OPB} y^B(C_2)= \left\{ \begin{array}{lll} 0 & \hbox{ if \,\,\,$C_2< r\gamma_1$} \\\\ r+\frac{C_2-r}{1+a_2^{\frac{1}{1-\alpha}}} & \hbox{ if \,\,\,$C_2\geq r\gamma_1$} \\\\ \end{array} \right. \end{equation} where $\gamma_1>1$ and being the unique solution of the following equation: \begin{equation} a_2\gamma_1^{\alpha}-\frac{(\gamma_1-1)^{\alpha}}{\left(1+a_2^{\frac{1}{1-\alpha}}\right)^{\alpha-1}}={\lambda}. \end{equation} \subsection{Optimal solution of OP-G} By exploiting the previous results derived for the two auxiliary subproblems, it is possible to fully express the optimal power allocation policy a provider should use to maximize the system social welfare. Denote by $\chi$ the total power budget. Without loss of generality, it is assumed $r_1\leq r_2\leq\dots\leq r_K$, i.e., the values of reference points are in ascending order. 
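For later use, we record the marginal utility associated with (\ref{eq:utility_simplified}); this short added derivation (valid for $0<\alpha<1$ and $\lambda\geq 1$) is what drives the allocation order discussed next:
\begin{equation}
\frac{\partial U(x;r)}{\partial x}=
\left\{
\begin{array}{ll}
\lambda\alpha(r-x)^{\alpha-1} & \hbox{ if \,\,\,$0\leq x< r$} \\\\
\alpha(x-r)^{\alpha-1} & \hbox{ if \,\,\,$x> r$.} \\\\
\end{array}
\right.
\end{equation}
Since $\alpha-1<0$, for a fixed $x<r$ the marginal utility $\lambda\alpha(r-x)^{\alpha-1}$ is decreasing in $r$ and increasing in $x$; the latter is precisely the convexity of $U(\cdot\,;r)$ below the reference point.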
Due to the ascending order of reference points, the derivatives of consumers' utilities are in descending order for a given consumption power level $x_1=x_2=\dots=x_K\leq r_1$, that is, $\frac{\partial U(x,r_1)}{\partial x}\geq\frac{\partial U(x,r_2)}{\partial x}\geq\dots\geq\frac{\partial U(x,r_K)}{\partial x} (x\leq r_1)$. Since the utility functions are increasing and convex when $x<r_1$, to maximize the sum-utility, it can be easily checked that the first $r_1$ power of the total budget $\chi$ should be allocated to Consumer $1$. After allocating $r_1$ power to Consumer $1$, the utility function of the Consumer $1$ becomes concave and the increasing speed of the utility function slows down. In this situation, it can be noticed that the problem becomes to decide whether continue to allocate power to the first consumer or start to allocate power to other consumers. Similarly, one can observe that the utility of consumer $2$ increases more rapidly than other consumers for a common power $x_2=\dots=x_K\leq r_2$. Hence, knowing the fact that the first $r_1$ amount of power has been allocated to Consumer $1$, the problem regarding the allocation of the following power $\widehat{C}_2\leq r_2$ can be simplified to decide whether keeping allocating power to Consumer $1$ or starting to allocate power to Consumer $2$. This problem can be formulated as \begin{equation} \begin{array}{lll} &\underset{x_1,x_2}{\max} &(x_1-r_1)^{\alpha}+U(x_2,r_2) \\\\ &\mathrm{s.t.} & x_1-r_1\geq 0,\,\,x_2\geq 0 \\\\ &&x_1-r_1+x_2\leq \widehat{C}_2 \\\\ \end{array} \end{equation} which can be seen as a special case of subproblem A by setting $x=x_1-r_1$, $y=x_2$ and $C_1=\widehat{C}_2$. According to (\ref{eq:solution_y_OPA}), when $\widehat{C}_2=r_2$, one can obtain $x_2=\widehat{C}_2=r_2$, indicating that the second part of power $r_2$ will be fully allocated to the Consumer $2$. In addition, after firstly allocating $r_1$ to $x_1$ and $r_2$ to $x_2$, the utility function of consumer $1$ coincides with the utility function of Consumer $2$, i.e., $U(x_1=r_1+\Delta,r_1)=U(x_2=r_2+\Delta,r_2)$ for any $\Delta\geq 0$. As a consequence, the following allocation policy to consumer $1$ and $2$ should be the same, namely, no matter how much power will be allocated to the Consumer $1$, the same amount of power should be allocated to the Consumer $2$. Similarly, note that the utility function of Consumer $3$ increases more rapidly than other consumers for a common power $x_3=\dots=x_K<r_3$, the problem regarding the third part of the power $\widehat{C}_3\leq r_3$ can be simplified to decide whether continue to allocate power to $x_1$ and $x_2$ or start to allocate power to consumer $3$. This problem can be written as \begin{equation} \begin{array}{lll} &\underset{x_1,x_2,x_3}{\max} &(x_1-r_1)^{\alpha}+(x_2-r_2)^{\alpha}+U(x_3,r_3) \\\\ &\mathrm{s.t.} & x_1-r_1=x_2-r_2\geq 0,\,\,x_3\geq 0 \\\\ &&x_1-r_1+x_2-r_2+x_3\leq \widehat{C}_3 \\\\ \end{array} \end{equation} Replace $x_1-r_1$, $x_2-r_2$ by $\frac{x}{2}$, and {$x_3$ by $y$}, this problem can be rewritten as: \begin{equation} \begin{array}{lll} &\underset{x,y}{\max} &2^{1-\alpha}x^{\alpha}+U(y,r_3) \\\\ &\mathrm{s.t.} & x,y\geq 0 \\\\ && x+y\leq \widehat{C}_3 \\\\ \end{array} \end{equation} which can be seen as a special case of subproblem A when $2^{1-\alpha}\leq \lambda$. In the rest of the paper, it is assumed that $(K-1)^{1-\alpha}\leq \lambda$ except otherwise stated, and thus the rest allocation policy for other consumers ($i>3$) can be done in the same manner. 
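Since each step of the argument above is an instance of Subproblem A, a minimal numerical sketch of its closed-form solution (\ref{eq:solution_x_OPA})--(\ref{eq:solution_y_OPA}) may help fix ideas; the function name and interface below are ours and not part of the model, and the sketch assumes $0<\alpha<1$ and $0<a_1\leq\lambda$, as in the statement of (OP-A).
\begin{verbatim}
# Illustrative sketch of the closed-form solution of Subproblem A.
# Assumes 0 < alpha < 1 and 0 < a1 <= lam, as in the text.
def solve_subproblem_A(C1, r, a1, alpha, lam):
    """Return (x, y) maximizing a1*x**alpha + U(y; r)
    subject to x, y >= 0 and x + y <= C1 (the constraint is tight)."""
    t = (lam / a1) ** (1.0 / (alpha - 1.0))  # case threshold r*t
    if C1 <= r * t:
        x = C1                                # all power to the x-term
    elif C1 <= r:
        x = (r - C1) / ((lam / a1) ** (1.0 / (1.0 - alpha)) - 1.0)
    else:
        s = a1 ** (1.0 / (1.0 - alpha))
        x = (C1 - r) * s / (1.0 + s)
    return x, C1 - x                          # y = C1 - x at the optimum
\end{verbatim}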
OP-G can be decomposed into a sequence of instances of Subproblem A. Without loss of generality, assume that the power constraint fulfills the following condition (with $r_{K+1}=\infty$):
\begin{equation}
\sum_{i=1}^{J}r_i\leq \chi<\sum_{i=1}^{J+1}r_i, \,\,J\in\{1,\dots,K\}.
\end{equation}
Based on what we have shown before, Consumer $k$ (with $k\in\{1,\dots,J\}$) will be allocated at least $r_k$ units of power plus the same amount of power beyond $r_k$ as the other consumers in $\{1,\dots,J\}$, i.e., $x_k-r_k=\frac{x_0}{J}\geq 0$, where $x_0=\chi-\sum_{k=1}^J r_k- x_{J+1}$. When $i>J+1$, zero power is allocated to Consumer $i$. Regarding Consumer $J+1$, its power consumption, $x_{J+1}$, can be obtained jointly with $x_0$ by solving the following problem:
\begin{equation}
\begin{array}{lll}
&\underset{x_0,x_{J+1}}{\max} &J^{1-\alpha}{x_0}^{\alpha}+U(x_{J+1},r_{J+1}) \\\\
&\mathrm{s.t.} & x_0,x_{J+1}\geq 0 \\\\
&& x_0+x_{J+1}\leq \chi-\sum_{i=1}^{J}r_i. \\\\
\end{array}
\label{eq:OP-G}
\end{equation}
\textcolor{black}{Implementing the solution of OP-A, the optimal power allocation policy of OP-G can be written as:
{\footnotesize
\begin{equation}\label{eq:solution_x_OPG}
x_i^{\star}(\chi)=
\left\{
\begin{array}{lll}
r_i+\frac{1}{J}x^A(\chi-\sum_{k=1}^{J}r_k,r_{J+1},J^{1-\alpha}) & \hbox{ if \,\,\,$i\leq J$} \\\\
y^A(\chi-\sum_{k=1}^{J}r_k,r_{J+1},J^{1-\alpha})& \hbox{ if \,\,\,$ i=J+1$} \\\\
0 & \hbox{ if \,\,\,$ i>J+1$} \\\\
\end{array}
\right.
\end{equation}}
where $x^A(.)$ and $y^A(.)$ are defined by (\ref{eq:solution_x_OPA}) and (\ref{eq:solution_y_OPA}), respectively. To maximize the sum-utility, the power is first allocated to the consumers with the lowest reference points. Once the consumers with the lowest reference points have reached their reference points, the system begins to allocate power to consumers with higher reference points. If the power budget is sufficiently large (e.g., $\chi\geq\sum_{i=1}^K r_i$), the difference in allocated power between any two consumers equals the difference between their reference points.}\textcolor{black}{More precisely, by defining $T_i$ (with $T_{K+1}=\infty$) as the threshold such that power can be allocated to consumer $i$ if and only if $\chi\geq T_i$, the flowchart of the allocation policy is shown in Fig.~3.}
\begin{figure}
\caption{Flowchart of the allocation policy.}
\end{figure}
\textbf{Remark:} When $(K-1)^{1-\alpha}>\lambda$ (for instance, when subjectivity is not considered and thus $\lambda=1$), the power allocation of (OP-G) can be solved by combining the two subproblems (OP-A) and (OP-B). More precisely, as long as $k^{1-\alpha}\leq\lambda$ ($k\in\{1,\dots,K\}$), (OP-A) is used. As $k$ increases to a value such that $k^{1-\alpha}>\lambda$, the second subproblem (OP-B) is applied instead. For the sake of clarity of the following discussions on pricing policies, we solely consider the scenario where $(K-1)^{1-\alpha}\leq\lambda$.
\section{Application to the pricing and energy-efficiency problems}
In the previous section, we have considered the problem of maximizing the sum-utility for a fixed total power consumption. From the perspective of benefits, the optimal total power consumption should also depend on the cost of purchasing (or generating) this amount of power. By using the optimal allocation policy obtained in the preceding section, we consider two more practical scenarios in this section.
We consider social welfare (retained gain with cost eliminated) and the global energy-efficiency (benefit brought by unit power consumption) defined by the ratio between the gain modelled by sigmoidal function and the power consumption. \subsection{Maximization of social welfare with inclining block rates pricing policy} To maintain a high consumer satisfaction level, we assume that the provider is regulated so that its objective is not to maximize its own profit through electricity trade, but rather to induce users' consumption in a way that maximizes social welfare (see \cite{MR-SGC-2010}\cite{Li-PES-2011}). Social welfare can be seen as the profit of the system, that is, the sum of all consumers' utility minus the cost of providing the electricity demanded by all the consumers: \begin{equation} W=\sum_{i=1}^K U(x_i,r_i)-f_c(\chi) \end{equation} where $f_c(\chi)$ is the cost function and increasing in $\chi$. For instance, in \cite{MR-SGC-2010}, $f_c(\chi)$ is assumed to be a quadratic function under the form $a\chi^2+b\chi+c$. In \cite{MR-SGC-2010}, while the utility function is a quadratic concave function w.r.t. $x$, it has been proved that the maximization of social welfare can be achieved by a real-time pricing scheme designed by the provider, with the deployment of demand response. Although the electricity rate of this pricing tariff can vary over time slots, the unit electricity price remains to be constant regardless of the power consumption. However, when pure concavity is no longer available in the model with sigmoidal utility functions, common flat rates are no longer optimal to induce consumer's consumption to maximize social welfare. An alternative to the common flat rates in retail electricity market is the inclining block rates, where the unit rate of electricity changes when the consumption is beyond a certain threshold \cite{MR-TSG-2010}. \textcolor{black}{Intuitively, as the sigmoidal function proposed here has two segments (the first segment with increasing marginal benefit and the second segment with decreasing marginal benefit), the IBR with two price levels and two different linear segments inherently match better our proposed scheme.} Motivated by this intuition, we want to prove that the IBR is the optimal or near optimal pricing tariff to maximize social welfare. \textcolor{black}{Before introducing the pricing scheme, we first describe the relations between the aforementioned models and the pricing scheme. The sigmoidal function is proposed to model the consumer's benefit. Based on these benefit functions, we study the sum-utility problem aiming to maximize the sum of benefit functions with a fixed overall consumption. The reason to solve the sum-utility problem is to find an allocation policy that is needed to solve the following social welfare maximization problem defined by (25). After obtaining the optimum consumption profiles to maximize social welfare, the last step is to find a corresponding pricing scheme that could induce consumers' consumption in accordance with the consumption to maximize social welfare. In the rest of this subsection, we elaborate on the derivation of the pricing scheme.} As explained before, for a given $\chi$, the optimal power consumption $x_i^{\star}(\chi)$ can be chosen according to (\ref{eq:solution_x_OPG}). Therefore, the maximization of $W$ can be rewritten as follows: \begin{equation} \underset{\chi}{\max} \sum_{i=1}^K U(x_i^{\star}(\chi),r_i)-f_c(\chi). 
\label{eq:x_sum_op}
\end{equation}
\textcolor{black}{Interestingly, it can be checked that there is at most one local maximum in each interval $(\sum_{i=1}^{j}r_i,\sum_{i=1}^{j+1}r_i)$, $j\in\{1,\dots,K\}$ (with $r_{K+1}=\infty$). Hence one can use conventional approaches (e.g., gradient descent or Newton's method) to find these local maxima and compare them to obtain the global maximum $\chi^{\star}$. Alternatively, a brute-force search with higher complexity can be used to find $\chi^{\star}$. Since the form of the function $U(x_i^{\star}(\chi),r_i)$ is known, the computational complexity of the single-variable brute-force approach is not prohibitive.} More importantly, the value of $\chi^{\star}$ does not change the main structure of the proof of optimality of IBR pricing. For the sake of clarity, suppose the optimal total power $\chi^{\star}$ meets the following condition:
\begin{equation}
\sum_{i=1}^{J^{\star}}r_i\leq \chi^{\star}<\sum_{i=1}^{J^{\star}+1}r_i,
\end{equation}
where $J^{\star}\in\{1,\dots,K\}$. As explained in Sec. III-C, for any Consumer $i\in\{1,\dots,J^{\star}\}$, the power consumption beyond its reference point should be the same, namely, $x_1^{\star}(\chi^{\star})-r_1=\dots=x_{J^{\star}}^{\star}(\chi^{\star})-r_{J^{\star}}$. Therefore, the first derivative of $U(x_i,r_i)$ at the optimal power consumption is the same for $1\leq i\leq {J^{\star}}$, that is,
\begin{equation}
\frac{\partial U(x;r_1)}{\partial x}|_{x=x_1^{\star}(\chi^{\star})}=\dots=\frac{\partial U(x;r_{J^{\star}})}{\partial x}|_{x=x_{J^{\star}}^{\star}(\chi^{\star})}=p
\label{eq:equal_der}
\end{equation}
where $p$ is a constant related to the value of $\chi^{\star}$. Due to the lower benefit brought by higher values of the reference point, the power allocated to any consumer $i\in\{{J^{\star}}+2,\dots,K\}$ is zero. Regarding Consumer ${J^{\star}}+1$, according to (\ref{eq:solution_y_OPA}), if $\chi^{\star}-\sum_{i=1}^{{J^{\star}}}r_i\geq r_{{J^{\star}}+1}(\frac{\lambda}{{J^{\star}}^{1-\alpha}})^{\frac{1}{\alpha-1}}$, the first derivative of $U(x,r_{{J^{\star}}+1})$ at the optimal consumption can be checked to be the same as for the first $J^{\star}$ consumers, i.e., $\frac{\partial U(x,r_{{J^{\star}}+1})}{\partial x}|_{x=x_{{J^{\star}}+1}^{\star}(\chi^{\star})}=p$. Otherwise, if $\chi^{\star}-\sum_{i=1}^{{J^{\star}}}r_i<r_{{J^{\star}}+1}(\frac{\lambda}{{J^{\star}}^{1-\alpha}})^{\frac{1}{\alpha-1}}$, one can obtain $x_{{J^{\star}}+1}^{\star}(\chi^{\star})=x_{{J^{\star}}+2}^{\star}(\chi^{\star})=\dots=x_{K}^{\star}(\chi^{\star})=0$. Assuming demand response is implemented on the consumer side, the objective of each consumer is to maximize its own benefit, namely, the individual satisfaction brought by consumption minus the cost of purchasing electricity from the provider, which can be defined as follows:
\begin{equation}
u_i(x_i,r_i)=U(x_i,r_i)-p_i(x_i)
\label{eq:OP_consumer}
\end{equation}
where $p_i(.)$ represents the cost incurred by user $i$ when consuming $x_i$ units of power.
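As an illustration of the consumer-side objective (\ref{eq:OP_consumer}), the following sketch evaluates a consumer's best response to an arbitrary tariff $p_i(\cdot)$ by a simple grid search; it is only meant to make the objective concrete, and the grid range, resolution and parameter values ($\alpha=0.8$, $\lambda=1.5$, borrowed from the numerical section) are illustrative choices of ours.
\begin{verbatim}
import numpy as np

def U(x, r, alpha=0.8, lam=1.5):
    """Reference-dependent utility U(x; r) of the simplified model (sketch)."""
    if x < r:
        return -lam * (r - x) ** alpha + lam * r ** alpha
    return (x - r) ** alpha + lam * r ** alpha

def best_response(price_fn, r, x_max=10.0, n_grid=10001, alpha=0.8, lam=1.5):
    """Grid search for x maximizing U(x; r) - price_fn(x)."""
    grid = np.linspace(0.0, x_max, n_grid)
    payoffs = [U(x, r, alpha, lam) - price_fn(x) for x in grid]
    return grid[int(np.argmax(payoffs))]

# Example: best response to a flat tariff with (hypothetical) unit price 0.4.
x_star = best_response(lambda x: 0.4 * x, r=2.0)
\end{verbatim}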
\textcolor{black}{According to demand response programs, the power consumption is determined by the consumer to maximize their own benefit, so it is significant to guide consumers through tariffs to preserve social welfare.} To maximize social welfare, the provider aims to design appropriate pricing policies such that the optimal power consumption $x_i^{\mathrm{OP}}$ to maximize $u_i(x_i,r_i)$, namely, \begin{equation} x_i^{\mathrm{OP}}\in \arg\underset{x_i}{\max}\,\, u_i(x_i,r_i), \end{equation} coincides with $x_i^{\star}(\chi^{\star})$. In the following proposition, we propose an IBR pricing policy such that $x_i^{\mathrm{OP}}=x_i^{\star}(\chi^{\star})$ always holds for all $i\neq {J^{\star}}+1$, and $x_{{J^{\star}}+1}^{\mathrm{OP}}=x_{{J^{\star}}+1}^{\star}(\chi^{\star})$ holds under certain conditions. Hence the IBR pricing policy is optimal or near optimal to induce consumers' consumption such that social welfare can be maximized. \begin{proposition} \textcolor{black}{The optimal power consumption $x_i^{\mathrm{OP}}$ to maximize the individual benefit coincides with the optimal power consumption $x_{i}^{\star}(\chi^{\star})$ to maximize social welfare for all $i\neq {J^{\star}}+1$ by implementing the following pricing policy: {\footnotesize \begin{equation}\label{eq:pricing} p_i^{\star}(x_i)= \left\{ \begin{array}{lll} qx_i & \hbox{ if \,\,\,$x_i\leq r_i-\left(\frac{p}{\lambda\alpha}\right)^{\frac{1}{\alpha-1}}$} \\\\ (q-p)(r_i-\left(\frac{p}{\lambda\alpha}\right)^{\frac{1}{\alpha-1}})+px_i & \hbox{ if \,\,\,$x_i> r_i-\left(\frac{p}{\lambda\alpha}\right)^{\frac{1}{\alpha-1}}$} \\\\ \end{array} \right. \end{equation}} where \begin{equation} q=\frac{\lambda r_{J^{\star}}^{\alpha}+(\frac{p}{\alpha})^{\frac{\alpha}{\alpha-1}}-((\frac{p}{\alpha})^{\frac{1}{\alpha-1}}+(\frac{p}{\lambda\alpha})^{\frac{1}{\alpha-1}})p}{r_{J^{\star}}-(\frac{p}{\lambda\alpha})^{\frac{1}{\alpha-1}}}+\Delta \label{eq:q_expression} \end{equation} with $\Delta$ can be any negative value lower bounded by \begin{equation} \begin{split} &-\frac{\lambda r_{J^{\star}}^{\alpha}+(\frac{p}{\alpha})^{\frac{\alpha}{\alpha-1}}-((\frac{p}{\alpha})^{\frac{1}{\alpha-1}}+(\frac{p}{\lambda\alpha})^{\frac{1}{\alpha-1}})p}{r_{J^{\star}}-(\frac{p}{\lambda\alpha})^{\frac{1}{\alpha-1}}}+\\ &\frac{ \lambda r_{{J^{\star}}+1}^{\alpha}+(\frac{p}{\alpha})^{\frac{\alpha}{\alpha-1}}-((\frac{p}{\alpha})^{\frac{1}{\alpha-1}}+(\frac{p}{\lambda\alpha})^{\frac{1}{\alpha-1}})p}{r_{{J^{\star}}+1}-(\frac{p}{\lambda\alpha})^{\frac{1}{\alpha-1}}}. \end{split} \label{eq:delta_expression} \end{equation} } In particular, $x_{{J^{\star}}+1}^{\mathrm{OP}}=x_{{J^{\star}}+1}^{\star}(\chi^{\star})$ can be attained (and hence social welfare maximization can be reconstructed perfectly by the proposed pricing policy) when the following condition is met \begin{equation} 0<\chi^{\star}-\sum_{i=1}^{{J^{\star}}}r_i\leq r_{{J^{\star}}+1}(\frac{\lambda}{{J^{\star}}^{1-\alpha}})^{\frac{1}{\alpha-1}} \end{equation} \end{proposition} \begin{proof} See Appendix. \end{proof} It is worth mentioning that the proposed pricing scheme is a piecewise function, where the unit price is $q$ before the threshold and becomes $p$ after the threshold. Note that the two unit prices are the same for all the consumers, whereas the threshold of the pricing policy depends on the reference point $r_i$ and thus being different for consumers. To implement the proposed IBR scheme, the provider needs to first find $\chi^{\star}$ as the solution of (\ref{eq:x_sum_op}). 
Then, plugging (\ref{eq:solution_x_OPG}) into (\ref{eq:equal_der}), the constant $p$ can be derived. Finally, the other rate $q$ can be determined from (\ref{eq:q_expression}) and (\ref{eq:delta_expression}). Considering the difference between the two price levels, we provide different interpretations for the following three scenarios: $q>p$, $q<p$, and $q=p$. When $q>p$, the optimal pricing policy is to encourage the users to consume a large amount of power (a discount for large consumption), corresponding to the practical case where the power utility has strong generation capacity that has not been fully exploited. In the second case, $q<p$, the optimal pricing policy is to discourage the users from consuming a large amount of power (a penalty for large consumption), corresponding to the practical case where the system has heavy loads and prefers the consumers to cut down their consumption. In the last case, $q=p$, the proposed IBR pricing policy to maximize social welfare boils down to a constant flat-rate pricing policy. This phenomenon can be seen in cases with special cost functions, e.g., the linear function $f_c(\chi)=a\chi+b$. According to Prop. IV.1, for any increasing cost function $f_c(.)$, IBR pricing can induce at least $K-1$ consumers' consumption in the way that maximizes social welfare, even though Consumer ${J^{\star}}+1$ might behave in a different way. Next, with a widely used quadratic cost function, we explore sufficient conditions under which all the consumers' consumption (including that of Consumer ${J^{\star}}+1$) coincides with the optimal consumption to maximize social welfare.
\begin{Corollary}
When $C(x)=ax^2+bx+c$, the proposed pricing policy can perfectly reconstruct the optimal power consumption vector to maximize social welfare if the following condition is fulfilled:
\begin{equation}
a\leq\alpha(1-\alpha)r_K^{\alpha-2}.
\end{equation}
\end{Corollary}
\begin{proof}
See Appendix.
\end{proof}
To conclude this part, the proposed IBR pricing policy is proved to induce $K-1$ consumers' consumption in a way that maximizes social welfare, and all the consumers will follow the optimal consumption rule under the proposed IBR pricing policy if the cost function satisfies certain conditions. In addition, the proposed pricing tariff is easy to implement in power systems, as it only requires solving low-complexity optimization problems.
\textbf{Remark:} When a minimum need $m_i$ is imposed for each consumer, the problem can be solved in the same way by using an adapted reference point $r_i^{\prime}=r_i-m_i$. The IBR can still be proved to be optimal to induce consumers' behavior in a way that maximizes social welfare.
\subsection{Energy-efficiency with sigmoidal utility}
In the preceding subsection we have shown how the proposed optimization framework can be exploited to maximize social welfare. In this subsection, we show that it can also be exploited to maximize functions with a different structure; namely, we want to maximize the energy-efficiency when it is measured by the ratio of the sum-benefit to the sum-cost (see \cite{Debbah-2016}\cite{Zappone-2016}). The problem can be formulated as
\begin{equation}
\begin{array}{lll}
&\underset{x_1,\dots,x_K}{\max} & \frac{\sum_{i=1}^{K}U(x_i; r_i)}{\sum_{i=1}^{K}x_i} \\\\
&\mathrm{s.t.} & \forall i \in \{1,\dots,K\}, \ x_i \geq 0 \\\\
\end{array}
\tag{OP-EE}
\end{equation}
Before deriving the solution of (OP-EE), we first introduce some basic definitions.
Define $x_i^{\mathrm{IEE}}$ as the power consumption that maximizes the individual energy-efficiency, that is,
\begin{equation}
x_i^{\mathrm{IEE}}=\arg\underset{x_i > 0}{\max} \,\,\frac{U(x_i ; r_i)}{x_i}
\end{equation}
and define the maximum energy-efficiency that can be achieved by consumer $i$ as
\begin{equation}
u_i^{\mathrm{IEE}}=\underset{x_i > 0}{\max} \,\,\frac{U(x_i ; r_i)}{x_i}.
\end{equation}
Note that $U(x_i; r_i)$ is a sigmoidal function, and thus it can be verified that $x_i^{\mathrm{IEE}}>r_i$ is the unique (nonzero) solution of the following equation:
\begin{equation}
x\alpha(x-r_i)^{\alpha-1}-(x-r_i)^{\alpha}-r_i^{\alpha}=0
\label{eq:IEE_solution}
\end{equation}
The following proposition compares the values of $x_i^{\mathrm{IEE}}$ and $u_i^{\mathrm{IEE}}$ across consumers.
\begin{proposition}
When $1\leq i_2<i_1\leq K$, the following inequalities hold:
\begin{equation}
x_{i_1}^{\mathrm{IEE}}-r_{i_1}> x_{i_2}^{\mathrm{IEE}}-r_{i_2}
\end{equation}
\begin{equation}
u _{i_1}^{\mathrm{IEE}}< u_{i_2}^{\mathrm{IEE}}
\end{equation}
\end{proposition}
\begin{proof}
See Appendix.
\end{proof}
According to this proposition, one can observe that the maximum individual energy-efficiency increases when $r$ decreases, which implies that the first consumer (who has the minimum reference point) achieves the highest individual energy-efficiency among all consumers. In the following proposition, it can be seen that the solution of (OP-EE) is fully determined by the individual energy-efficiency solutions.
\begin{proposition}
The solution of (OP-EE), defined as $(x_1^{\mathrm{SEE}},\dots,x_K^{\mathrm{SEE}})$, can be written as
\begin{equation}
x_1^{\mathrm{SEE}}=x_1^{\mathrm{IEE}}
\end{equation}
\begin{equation}
x_i^{\mathrm{SEE}}=0,\,\,\forall i\geq 2
\end{equation}
The maximum sum-energy-efficiency, defined as $u^{\mathrm{SEE}}$, is $ {u_1^{\mathrm{IEE}}}$.
\end{proposition}
\begin{proof}
This proposition can be seen as a special case of Proposition 1 in \cite{Meshkati-2006}.
\end{proof}
Based on this proposition, we see that in order to maximize the energy-efficiency of the network, only the first consumer should be allocated power, which might not be acceptable for systems where fairness among users matters. Hence, we consider a more practical scenario in which each consumer obtains its minimum need, $m_i<r_i$, and the objective is to maximize the energy-efficiency while satisfying the minimum power need. In addition, the minimum need constraint can be explained from the perspective of the power utility. Shut-down and ramp-up of a power plant can be costly or sometimes not technically feasible, and a very low consumption can be detrimental to the power system sustainability. Hence, a minimum supply level can be imposed by the provider as well. The more practical problem can be formulated as follows:
\begin{equation}
\begin{array}{lll}
&\underset{x_1,\dots,x_K}{\max} & \frac{\sum_{i=1}^{K}U(x_i,r_i)}{\sum_{i=1}^{K}x_i} \\\\
&\mathrm{s.t.} & x_i\geq m_i, \,\,\,\forall 1\leq i\leq K\\\\
\end{array}
\tag{OP-EE-P1}
\end{equation}
Note that $U(x_i,r_i)=U(m_i,r_i)+U(x_i-m_i,r_i-m_i) $.
Replacing $x_i-m_i$ by $\widehat{x}_i$ and $r_i-m_i$ by $\widehat{r}_i$, (OP-EE-P1) can be equivalently written as
\begin{equation}
\begin{array}{lll}
&\underset{\widehat{x}_1,\dots,\widehat{x}_K}{\max} & \frac{M_1+\sum_{i=1}^{K}U(\widehat{x}_i,\widehat{r}_i)}{M_2+\sum_{i=1}^{K}\widehat{x}_i} \\\\
&\mathrm{s.t.} & \widehat{x}_i\geq 0, \,\,\,\forall 1\leq i\leq K\\\\
\end{array}
\tag{OP-EE-P2}
\end{equation}
where $M_1$ and $M_2$ are two constants defined as $M_1=\sum_{i=1}^KU(m_i,r_i)$ and $M_2=\sum_{i=1}^Km_i$, respectively. Due to the presence of the two constants $M_1$ and $M_2$, $x_i^{\mathrm{SEE}}$ is no longer a solution of (OP-EE-P2). Even though the solution of (OP-EE-P2) cannot be expressed in closed form, some properties of (OP-EE-P2) can still be extracted. Suppose
\begin{equation}
E^{\star}=\underset{\widehat{x}_1,\dots,\widehat{x}_K}{\max} \frac{M_1+\sum_{i=1}^{K}U(\widehat{x}_i,\widehat{r}_i)}{M_2+\sum_{i=1}^{K}\widehat{x}_i}
\label{eq:E_star}
\end{equation}
By studying the first derivative of $U$, one can observe that the power consumption that maximizes the EE, defined as $\widehat{x}_i(E^{\star})$, should satisfy either $\frac{\partial U(\widehat{x}_i,\widehat{r}_i)}{\partial \widehat{x}_i}|_{\widehat{x}_i=\widehat{x}_i(E^{\star})}=E^{\star}$ or $\widehat{x}_i=0$. It can be proved that this power consumption can be written as
\begin{equation}\label{eq:EE_solution_constraints}
\widehat{x}_i(E^{\star})=
\left\{
\begin{array}{lll}
0 & \hbox{ if \,\,\,$\frac{\lambda \widehat{r}_i^{\alpha}+(\frac{E^{\star}}{\alpha})^{\frac{\alpha}{\alpha-1}}}{ \widehat{r}_i+(\frac{E^{\star}}{\alpha})^{\frac{1}{\alpha-1}}}\leq E^{\star}$} \\\\
\widehat{r}_i+(\frac{E^{\star}}{\alpha})^{\frac{1}{\alpha-1}} & \hbox{ otherwise } \\\\
\end{array}
\right.
\end{equation}
Define the function
\begin{equation}
g(E)= \frac{M_1+\sum_{i=1}^{K}U(\widehat{x}_i(E),\widehat{r}_i)}{M_2+\sum_{i=1}^{K}\widehat{x}_i(E)}-E
\end{equation}
According to (\ref{eq:E_star}), the optimal EE point $E^{\star}$ is a root of the function $g(E)$, i.e.,
\begin{equation}
g(E^{\star})=0
\end{equation}
Moreover, $E^{\star}$ is shown to be the unique root of $g(E)$ in the following proposition:
\begin{proposition}
There exists a unique $E^{\star}$ such that
\begin{equation}
g(E^{\star})=0.
\end{equation}
\end{proposition}
\begin{proof}
See Appendix.
\end{proof}
Note that $\frac{M_1}{M_2}<E^{\star}<u_1^{\mathrm{IEE}}$ since $g(\frac{M_1}{M_2})>0$ and $g(u_1^{\mathrm{IEE}})<0$. Therefore, to recover the optimal EE point, one can search for the root of $g(E)$ in the interval $(\frac{M_1}{M_2},u_1^{\mathrm{IEE}})$. We resort to numerical approaches to find $E^{\star}$. As $g(E)$ has a unique root in the interval $(\frac{M_1}{M_2},u_1^{\mathrm{IEE}})$, the bisection method can be implemented to find this root (see Algorithm~1).
\begin{algorithm}
{\bf{Inputs:}} $\mathrm{ITER}_{\max}$, $\epsilon$\\
{\bf{Outputs:}} $E^{\star}$\\
{\bf{Initialization:}} Set the iteration index $\mathrm{ITER}=0$. Initialize $x^{(0)}=\frac{M_1}{M_2}$, $y^{(0)}=u_1^{\mathrm{IEE}}$ and $D=2\epsilon$.
\\
{\bf{While}} {$ D>\epsilon$ and $\mathrm{ITER}<\mathrm{ITER}_{\max}$}{
\\\quad Calculate the sign of $g(\frac{x^{\mathrm{(ITER)}}+y^{\mathrm{(ITER)}}}{2})$.\\
\quad \quad {\bf{If}} $g(\frac{x^{\mathrm{(ITER)}}+y^{\mathrm{(ITER)}}}{2})\leq 0$
\\\quad \quad\quad$y^{\mathrm{(ITER+1)}}=\frac{x^{\mathrm{(ITER)}}+y^{\mathrm{(ITER)}}}{2}$\\
\quad \quad\quad$x^{\mathrm{(ITER+1)}}=x^{\mathrm{(ITER)}}$\\
\quad \quad {\bf{else}} $x^{\mathrm{(ITER+1)}}=\frac{x^{\mathrm{(ITER)}}+y^{\mathrm{(ITER)}}}{2}$\\
\quad \quad\quad $y^{\mathrm{(ITER+1)}}=y^{\mathrm{(ITER)}}$\\
\quad\quad{\bf{end If}} \\
\quad Update the iteration index: $\mathrm{ITER} \gets \mathrm{ITER}+1$.\\
\quad Update $D=\min \left(|g(x^{\mathrm{(ITER)}})|,|g(y^{\mathrm{(ITER)}})|\right) $.\\
{\bf{end While}}
}
$E^{\star}= \frac{x^{\mathrm{(ITER)}}+y^{\mathrm{(ITER)}}}{2}$
\caption{\small Bisection algorithm to obtain the root of $g(E)$}
\label{algo_vector}
\end{algorithm}
As a consequence, the optimum power consumption, which maximizes the EE under the minimum need constraints, can be written as
\begin{equation}
x_i^{\mathrm{SEEC}}=m_i+\widehat{x}_i(E^{\star})
\end{equation}
Taking advantage of the uniqueness of the root, the energy-efficiency can be maximized with a low-complexity and fast-converging algorithm.
\section{Numerical analysis}
In this section, we provide simulation results and assess the performance of the proposed schemes by comparing them with the most relevant existing techniques. For this purpose, the cost function $f_c(x)$ is chosen to be a quadratic function: $f_c(x)=ax^2+bx+c$ with $a=0.05$ and $b=0.5$. Unless stated otherwise, the exponential parameter is chosen as $\alpha=0.8$ and the loss aversion coefficient as $\lambda=1.5$. The system under consideration is deliberately taken to be simple to make the analysis and interpretations easier; from the computational point of view, however, the analysis conducted in the paper allows larger systems to be analyzed. The system consists of $K=5$ consumers connected to a single provider. In the following, we want to compare different power allocation policies and different pricing policies in terms of sum-utility. We start with a power allocation scheme that is sum-utility maximizing, namely, $\max\,\, \sum_i U(x_i,r_i)$.
\subsection{Comparison among different power allocation policies to maximize the sum-utility}
First, we study the problem discussed in Sec. III. The performance comparison is shown for the following three power allocation (PA) strategies: the proposed power allocation policy (optimal), the proportional power allocation policy (PPA), and the uniform power allocation policy (UPA) \cite{Bansal-TWC-2008}. For the PPA, the consumption of consumer $i$ is proportional to its reference point $r_i$, i.e., $x_i=\frac{r_i}{\sum_i r_i}\times \chi$. The reference point vector $\underline{r}=(r_1,\dots,r_K)$ is assumed to be $(1, 1.5,2,2.5,3)$ kW. We select the relative improvement as the performance metric, that is, the difference in performance between our approach and the existing technique, divided by the performance of our method. From Fig.~4, it can be seen that the proposed PA policy can bring up to $30\%$ improvement to the sum-utility. When the total power consumption is less than ${\sum_i r_i}=10$ kW, four peaks can be observed, and the locations of the four peaks are very close to $r_1$, $r_1+r_2$, $r_1+r_2+r_3$ and $r_1+r_2+r_3+r_4$, respectively.
This can be explained by the fact that the utility function $U(x_i, r_i)$ changes rapidly around the point $x_i=r_i$, and thus at the sensitive points $\sum_{i=1}^k r_i$ ($k\in \{1,2,3,4\}$) the significance of a good PA is highlighted. Once $\chi$ reaches a certain level and continues to increase, the improvement begins to decrease and becomes negligible, especially compared to the PPA policy.
\begin{figure}
\caption{The reference points for one day, obtained from the \textit{PecanStreet} data.}
\end{figure}
\begin{figure}
\caption{Social welfare comparison between the two pricing policies. The proposed scheme brings a significant improvement when the demand levels are high.}
\end{figure}
\begin{figure}
\caption{Relative improvement brought by the proposed scheme against demand levels. The improvement increases rapidly as the system demand levels grow.}
\end{figure}
\begin{figure}
\caption{Aggregate load at different time slots with different pricing policies. With the proposed pricing policy, the aggregate load remains stable over time, even though the demands in different time slots are quite different, which implies that our policy can be a good candidate to minimize the peak power. By contrast, stable power consumption cannot be guaranteed when using state-of-the-art techniques.}
\end{figure}
\subsection{Comparison of pricing policies}
\textcolor{black}{To quantify the improvement brought by the proposed IBR real-time pricing scheme, we compare it with the real-time pricing scheme proposed in \cite{MR-SGC-2010}, which is independent of consumption and has been proved to be optimal in terms of social welfare for quadratic utility functions. The two pricing policies can be described as follows:
\begin{itemize}
\item{Proposed pricing policy: The IBR pricing tariff is given by (\ref{eq:pricing}). The unit electricity price $q$ in the low-consumption regime is given by (\ref{eq:q_expression}), whereas, when the consumption is beyond a certain threshold, the unit electricity price $p$ is given by (\ref{eq:equal_der}). In each time slot, due to the different values of the reference points (or demand levels), the unit electricity price varies.}
\item{Real-time pricing in \cite{MR-SGC-2010}: The unit electricity price is independent of the consumption and chosen as $p$. The rationale behind this choice is that the marginal benefit of the optimal consumption to maximize social welfare is $p$, and this choice was proven to be optimal in terms of social welfare maximization when the consumers' behaviors are modelled by quadratic utility functions \cite{MR-SGC-2010}. In each time slot, due to the different values of the reference points (or demand levels), the unit electricity price varies.}
\end{itemize}
To make a reasonable connection between the proposed sigmoidal model and real data, it is assumed that the reference point is a representation of past power consumption realizations. We consider five households recorded in \textit{PecanStreet} (Household $26$, Household $4998$, Household $6910$, Household $9499$, Household $9609$) \cite{pecan}\cite{Data-sharing} and use their average power consumption of the year $2013$ as the reference point for the current time. One day is divided into 24 time slots representing the average power consumption per hour. Fig.~5 shows the sum of the reference points (referred to as the demand level) and the reference points of some consumers in each hour. The rush hour is in the evening, where consumers have higher demand, and the lowest value is reached at 5 a.m.
when most people are still sleeping. In the rest of the simulations, we use this real data as the reference points except otherwise stated. Fig.~6 depicts social welfare with different pricing policies. One can observe that the RTP with constant rates coincides with our proposed RTP with IBR in the morning and early afternoon, but our proposed pricing tariff outperforms the classical RTP in the evening. In fact, it is not necessary to control the consumption when the demand levels are not high, and thus using a well-selected constant rate is sufficient for social welfare maximization. However, as the demand levels increase, an intelligent pricing tariff is required to flatten the consumption and maximize social welfare. With IBR, the consumption behaviors can be optimized by different unit electricity prices. In addition, to see the case in which our proposed scheme can bring more improvement, we conduct simulations to see the improvement with different demand levels. In Fig.~7, we consider a system consisting of five consumers, and the reference point of each consumer is randomly selected from the interval $[\Delta-0.5,\Delta+0.5]$, where $\Delta\in[1,2.5]$. We define the system demand levels as the sum of each individual reference point. For each given $\Delta$, the average performance is computed over 5000 realizations. To better illustrate the improvement with different demand levels, we define the relative improvement as \begin{equation} R_i=\frac{U_{\mathrm{IBR}}-U_{\mathrm{CRTP}}}{U_{\mathrm{CRTP}}}\times 100\% \end{equation} where $U_{IBR}$ represents the average social welfare by using our proposed IBR pricing tariff and $U_{CRTP}$ represents the average social welfare by using classical RTP. One can observe that the improvement becomes more significant as the demand levels grows, since our proposed scheme can change the unit electricity price to reduce the load in peak hours.} \textcolor{black}{To study the reason why social welfare can be enhanced with our pricing tariff, the aggregate loads at different time slot of the system are illustrated in Fig.~8. Implementing our proposed pricing policy, the aggregate load keeps almost invariant with time, even the demand of different time slots are quite different. This implies that our policy can be a good candidate to minimize the peak power or peak-to-average ratio (PAR). However, with the pricing policy in \cite{MR-SGC-2010}, the aggregate load suddenly increases or decreases in rush hours, even the aggregate load coincides with our proposed policy while the demand level is not high (from 0h to 15h). The fixed pricing policy chooses the same price for the whole day, and thus the power consumption is proportional to the reference points, resulting in very large peak power.} \subsection{Sum-energy-efficiency with different techniques} The performance of sum-energy-efficiency is assessed for the following four techniques: the sum-energy-efficiency maximization without constraints (SEE), the sum-energy-efficiency maximization considering constraints (SEE-C), the individual energy efficiency maximization (IEE) and the UPA. Here, the minimum need is set to be $m_i=\frac{1}{2}r_i$. From Fig.~9, it can be observed that the performance by SEE-C is very close to that of SEE, which means the constraints just bring marginal degradation by using Algo.~1 to find the optimal power vector. 
Furthermore, both SEE and SEE-C is shown to be better than IEE and UPA, where one aims at maximizing EE of each consumer and another allocate the power uniformly to every consumer. At last, the sum-energy-efficiency decreases when the demand level is higher, for the reason that for higher demands it is more difficult to satisfy the consumer. \section{Conclusion} In this paper we propose a refined model for the behavior of an energy consumer. Mathematically, this model consists of a sigmoidal utility function. Moving from the conventional behavior model (namely, a concave utility function) to the proposed refined model implies that some important tasks become more difficult. In particular, the sum-utility (or social welfare) maximization problem becomes more difficult. Although this problem is generally hard computationally speaking, we show that for the considered class of utility functions, the problem can be completely solved and the optimal solutions can be expressed and interpreted. The complete analysis of the considered optimization problem allows one not only to analyze the sum-utility maximization problem but also the problem of a global efficiency whenever measured as the ratio of the sum-benefit to the sum-cost. Solving the sum-utility maximization problem allows us to derive a new pricing policy and more precisely a new inclining block rates policy. The new policy has three attractive features: 1. It is, by construction, optimal in terms of social welfare; 2. It allows the peak-to-average ratio to be managed; 3. In contrast with the conventional real-time pricing policies, the derived pricing policy is not only time-slot dependent but also adapts to the power consumption level. Concerning the energy-efficiency problem, we show that the profit associated with a unit power consumption can be maximized by using a bisection-based algorithm. By constructing a function for which the unique root corresponds to the maximum energy-efficiency, our algorithm is shown to always converge to the global maximum with high convergence speed. The research work reported in this paper can be extended in many ways. One may better explore the relationship between the reference point and the real energy need, and tune the aversion parameter $\lambda$ accordingly. The parameter selection problem associated with the considered sigmoidal function may be posed and analyzed. To this end, one possible approach is to optimize these parameters by training a deep neural network and using supervised learning. Moreover, the satisfactory level under consideration is modeled by single-stage functions; an interesting and challenging extension might be to use multi-stage utility functions to represent the satisfactory level. \textcolor{black}{Moreover, the model where each consumer has its individual $\lambda$ and $\alpha$ instead of a common $\lambda$ and $\alpha$ can be considered as future works, especially in heterogeneous systems with a great variety of consumer behaviors.} \textcolor{black}{Finally, the model considering the characteristics of the consumers could be of interest to future works, the approach to the case where consumers have the individual priority might be using hierarchical structure or formulating the problem as a weighted sum-utility problem.} \section*{Appendix} \subsection*{A.1. Proof of Prop~4.1} \begin{proof} For the piecewise pricing schemes proposed above, it can be easily checked that $x_i^{\mathrm{OP}}\in\{0,r_i+(\frac{p}{\alpha})^{\frac{1}{\alpha-1}}\}$. 
When the price before the threshold is selected as in (\ref{eq:q_expression}), it can be checked that $x_i^{\mathrm{OP}}=r_i+(\frac{p}{\alpha})^{\frac{1}{\alpha-1}}$ if $i\leq {J^{\star}}$ and $x_i^{\mathrm{OP}}=0$ for $i>{J^{\star}}$. According to (\ref{eq:solution_x_OPA}) and (\ref{eq:solution_y_OPA}), it can be verified that $x_{i}^{\star}(\chi^{\star})=x_i^{\mathrm{OP}}$ if $i\neq {J^{\star}}+1$ and that $x_{{J^{\star}}+1}^{\star}(\chi^{\star})$ can be written as
{\tiny\begin{equation}
x_{{J^{\star}}+1}^{\star}(\chi^{\star})=
\left\{
\begin{array}{lll}
0 & \hbox{ if \,\,\,$\chi^{\star}-\sum_{i=1}^{{J^{\star}}}r_i\leq r_{{J^{\star}}+1}(\frac{\lambda}{{J^{\star}}^{1-\alpha}})^{\frac{1}{\alpha-1}}$} \\\\
r_{{J^{\star}}+1}-(\frac{p}{\lambda\alpha})^{\frac{1}{\alpha-1}} & \hbox{ if \,\,\,$ \chi^{\star}-\sum_{i=1}^{{J^{\star}}}r_i>r_{{J^{\star}}+1}(\frac{\lambda}{{J^{\star}}^{1-\alpha}})^{\frac{1}{\alpha-1}}$} \\\\
\end{array}
\right.
\end{equation}}
When the first condition is met, $x_{{J^{\star}}+1}^{\mathrm{OP}}$ coincides with $x_{{J^{\star}}+1}^{\star}(\chi^{\star})$. Therefore, by implementing the proposed pricing policy, the power consumption of at least $K-1$ consumers coincides with the power consumption that maximizes social welfare; the $({J^{\star}}+1)$-th consumer is the only one whose consumption might differ from the targeted consumption that maximizes social welfare. The pricing policy can perfectly reconstruct every $x_i^{\star}(\chi^{\star})$ when the following condition is fulfilled:
\begin{equation}
0<\chi^{\star}-\sum_{i=1}^{{J^{\star}}}r_i\leq r_{{J^{\star}}+1}(\frac{\lambda}{{J^{\star}}^{1-\alpha}})^{\frac{1}{\alpha-1}}
\end{equation}
\end{proof}
\subsection*{A.2. Proof of Corollary 1}
\begin{proof}
Define $H(x)=\sum_{i=1}^KU(x_i^{\star}(x),r_i)$ and its derivative as
\begin{equation}
h(x)=\frac{\partial H(x)}{\partial x}, \quad x\neq \sum_{i=1}^k r_i,\,\forall 1\leq k\leq K
\end{equation}
It can be checked that $h(x)$ is convex on each of the intervals $(0,r_1)$ and $(\sum_{i=1}^{k}r_{i},\sum_{i=1}^{k+1}r_{i})$, $1\leq k\leq K$, but not globally convex on the whole interval $(0,\sum_{i=1}^Kr_{i})$. In the interval $(\sum_{i=1}^{{J^{\star}}}r_{i},\sum_{i=1}^{{J^{\star}}+1}r_{i})$, $h(x)$ first decreases until the point $r_{J^{\star}}^{\star}=\sum_{i=1}^{{J^{\star}}}r_{i}+(\frac{\lambda}{{J^{\star}}^{1-\alpha}})^{\frac{1}{\alpha-1}}r_{{J^{\star}}+1}$, and then increases when $x>r_{J^{\star}}^{\star}$. The solution $\chi^{\star}$ should satisfy $h(\chi^{\star})=C'(\chi^{\star})$, thus we study the position of the intersections between $h(x)$ and $C'(x)$. Two scenarios are possible: both intersections are larger than $r_{J^{\star}}^{\star}$ (case I), or one is larger than $r_{J^{\star}}^{\star}$ and the other is smaller than $r_{J^{\star}}^{\star}$ (case II). In both cases, one can check that $h(x)$ is larger than $C'(x)$ when $x$ is smaller than the left intersection or larger than the right intersection. Thus, $\chi^{\star}$ can only be the left intersection. According to Proposition IV.1, if the left intersection point is always less than $r_{J^{\star}}^{\star}$, the perfect reconstruction can be attained by the IBR pricing policy. Therefore, it boils down to finding a sufficient condition such that only case II can happen. Note that $h(x)$ is not differentiable at $x=r_i^{\star}$, and thus there exists a minimum $|h'(x)|>0$.
It can be checked that, if the second derivative of $C(x)$, i.e., $C''(x)$, is lower than the minimum second derivative of $H(x)$, i.e., $h'(x)$, then the occurrence of case I can be always averted. By some derivations, it can be demonstrated that \begin{equation} h'(x)>\alpha(1-\alpha)r_K^{\alpha-2} \end{equation} Consequently, our claim is proved. \end{proof} \subsection*{A.3. Proof of Prop~4.2} \begin{proof} According to (\ref{eq:IEE_solution}), we can write \begin{equation} x_{i_1}^{\mathrm{IEE}}\alpha(x_{i_1}^{\mathrm{IEE}}-r_{i_1})^{\alpha-1}-(x_{i_1}^{\mathrm{IEE}}-r_{i_1})^{\alpha}-r_{i_1}^{\alpha}=0 \label{eq:proof_IEE1} \end{equation} \begin{equation} x_{i_2}^{\mathrm{IEE}}\alpha(x_{i_2}^{\mathrm{IEE}}-r_{i_2})^{\alpha-1}-(x_{i_2}^{\mathrm{IEE}}-r_{i_2})^{\alpha}-r_{i_2}^{\alpha}=0 \label{eq:proof_IEE1} \end{equation} Assume $x=x_{i_2}^{\mathrm{IEE}}+r_{i_1}-r_{i_2}$, we have \begin{equation} \begin{split} &x\alpha(x-r_{i_1})^{\alpha-1}-(x-r_{i_1})^{\alpha}-r_{i_1}^{\alpha}\\ =&(x_{i_2}^{\mathrm{IEE}}+r_{i_1}-r_{i_2})\alpha(x_{i_2}^{\mathrm{IEE}}-r_{i_2})^{\alpha-1}-(x_{i_2}^{\mathrm{IEE}}-r_{i_2})^{\alpha}-r_{i_1}^{\alpha}\\ =&(r_{i_1}-r_{i_2})\alpha(x_{i_2}^{\mathrm{IEE}}-r_{i_2})^{\alpha-1}+r_{i_2}^{\alpha}-r_{i_1}^{\alpha}\\ >&(r_{i_1}-r_{i_2})\alpha(x_{i_2}^{\mathrm{IEE}}-r_{i_2})^{\alpha-1}-(r_{i_1}-r_{i_2})(\alpha r_{i_2}^{\alpha-1})\\ =&\alpha(r_{i_1}-r_{i_2})((x_{i_2}^{\mathrm{IEE}}-r_{i_2})^{\alpha-1}-r_{i_2}^{\alpha-1}). \end{split} \end{equation} Note that \begin{equation} 2r_{i_2}\alpha(2r_{i_2}-r_{i_2})^{\alpha-1}-(2r_{i_2}-r_{i_2})^{\alpha}-r_{i_2}^{\alpha}<0 \end{equation} Therefore, we can obtain $r_{i_2}<x_{i_2}^{\mathrm{IEE}}<2r_{i_2}$. Consequently, it can be checked that \begin{equation} x\alpha(x-r_{i_1})^{\alpha-1}-(x-r_{i_1})^{\alpha}-r_{i_1}^{\alpha}>0 \end{equation} which implies that \begin{equation} x_{i_1}^{\mathrm{IEE}}-r_{i_1}>x_{i_2}^{\mathrm{IEE}}-r_{i_2} \end{equation} According to the definition of U in (\ref{eq:utility_simplified}), one can easily get \begin{equation} u _{i_1}^{\mathrm{IEE}}< u_{i_2}^{\mathrm{IEE}} \end{equation} \end{proof} \subsection*{A.4. Proof of Prop~4.4} \begin{proof} Here, we use the proof by contradiction. Suppose there exists $E_1$ and $E_2$ ($E_1>E_2$) such that $g(E_1)=g(E_2)=0$. According to (\ref{eq:EE_solution_constraints}) and the properties of sigmoidal function, it can be verified that if $\frac{\lambda \widehat{r}_i^{^\alpha}+(\frac{E_1}{\alpha})^{\frac{\alpha}{\alpha-1}}}{ \widehat{r}_i+(\frac{E_1}{\alpha})^{\frac{1}{\alpha-1}}}\leq E_1$, \begin{equation} U(\widehat{x}_i(E_2);\widehat{r}_i)-U(\widehat{x}_i(E_1),\widehat{r}_i)=0. \end{equation} Otherwise, we have \begin{equation} U(\widehat{x}_i(E_2),\widehat{r}_i)-U(\widehat{x}_i(E_1),\widehat{r}_i)\geq E_2(\widehat{x}_i(E_2)-\widehat{x}_i(E_1)). \end{equation} Consequently, one can obtain that \begin{equation}\label{eq:frac1_proof} \frac{\sum_{i=1}^{K}(U(\widehat{x}_i(E_2),\widehat{r}_i)-U(\widehat{x}_i(E_1),\widehat{r}_i))}{\sum_{i=1}^{K}(\widehat{x}_i(E_2)-\widehat{x}_i(E_1))}\geq E_2 \end{equation} Furthermore, note that \begin{equation}\label{eq:frac2_proof} \frac{{M_1+}\sum_{i=1}^{K}U(\widehat{x}_i(E_1),\widehat{r}_i)}{{M_2+}\sum_{i=1}^{K}(\widehat{x}_i(E_1))}=E_1>E_2. 
\end{equation}
Hence the combination of the two fractions defined in (\ref{eq:frac1_proof}) and (\ref{eq:frac2_proof}) (adding numerators and denominators, respectively) is also larger than $E_2$, i.e.,
\begin{equation}
\frac{{M_1+}\sum_{i=1}^{K}U(\widehat{x}_i(E_2),\widehat{r}_i)}{{M_2+}\sum_{i=1}^{K}(\widehat{x}_i(E_2))}>E_2
\end{equation}
This implies
\begin{equation}
g(E_2)>0,
\end{equation}
which contradicts $g(E_2)=0$. Therefore, there exists a unique root of $g(E)$.
\end{proof}
\section*{References}
\end{document}
\begin{document} \twocolumn[ \aistatstitle{Contextual Bandit Algorithms with Supervised Learning Guarantees} \aistatsauthor{Alina Beygelzimer \And John Langford \And Lihong Li} \aistatsaddress{IBM Research\\ Hawthorne, NY\\{[email protected]} \And Yahoo!\ Research \\ New York, NY\\{[email protected]} \And Yahoo!\ Research\\ Santa Clara, CA\\{[email protected]}} \aistatsauthor{Lev Reyzin \And Robert E. Schapire} \aistatsaddress{ Georgia Institute of Technology\\ Atlanta, GA\\ {[email protected]} \And Princeton University\\ Princeton, NJ\\ {[email protected]}} \runningauthor{Alina Beygelzimer, John Langford, Lihong Li, Lev Reyzin, Robert E. Schapire} ] \begin{abstract} We address the problem of competing with any large set of $N$ policies in the non-stochastic bandit setting, where the learner must repeatedly select among $K$ actions but observes only the reward of the chosen action. We present a modification of the \texttt{Exp4} algorithm of Auer et al.~\cite{AuerCFS02}, called \texttt{Exp4.P}, which with high probability incurs regret at most $O(\sqrt{KT\ln N})$. Such a bound does not hold for \texttt{Exp4} due to the large variance of the importance-weighted estimates used in the algorithm. The new algorithm is tested empirically in a large-scale, real-world dataset. For the stochastic version of the problem, we can use \texttt{Exp4.P} as a subroutine to compete with a possibly infinite set of policies of VC-dimension $d$ while incurring regret at most $ O(\sqrt{Td\ln T})$ with high probability. These guarantees improve on those of all previous algorithms, whether in a stochastic or adversarial environment, and bring us closer to providing guarantees for this setting that are comparable to those in standard supervised learning. \end{abstract} \section{INTRODUCTION} A learning algorithm is often faced with the problem of acting given feedback only about the actions that it has taken in the past, requiring the algorithm to explore. A canonical example is the problem of personalized content recommendation on web portals, where the goal is to learn which items are of greatest interest to a user, given such observable context as the user's search queries or geolocation. Formally, we consider an online bandit setting where at every step, the learner observes some contextual information and must choose one of $K$ actions, each with a potentially different reward on every round. After the decision is made, the reward of the chosen action is revealed. The learner has access to a class of $N$ policies, each of which also maps context to actions; the learner's performance is measured in terms of its \emph{regret} to this class, defined as the difference between the cumulative reward of the best policy in the class and the learner's reward. This setting goes under different names, including the ``partial-label problem"~\cite{KakadeST08}, the ``associative bandit problem"~\cite{SMLH06}, the ``contextual bandit problem"~\cite{LangfordZ07} (which is the name we use here), the ``$k$-armed (or multi-armed) bandit problem with expert advice"~\cite{AuerCFS02}, and ``associative reinforcement learning''~\cite{Kaelbling94AssociativeFunctions}. Policies are sometimes referred to as hypotheses or experts, and actions are referred to as arms. 
If the total number of steps $T$ (usually much larger than $K$) is known in advance, and the contexts and rewards are sampled independently from a fixed but unknown joint distribution, a simple solution is to first choose actions uniformly at random for $O(T^{2/3})$ rounds, and from that point on use the policy that performed best on these rounds. This approach, a variant of $\epsilon$-greedy (see \cite{SuttonB98}), sometimes called $\epsilon$-first, can be shown to have a regret bound of $\textstyle O\left( T^{2/3} (K \ln N)^{1/3} \right)$ with high probability~\cite{LangfordZ07}. In the full-label setting, where the entire reward vector is revealed to the learner at the end of each step, the standard machinery of supervised learning gives a regret bound of $\textstyle O(\sqrt{T \ln N})$ with high probability, using the algorithm that predicts according to the policy with the currently lowest empirical error rate. This paper presents the first algorithm, \texttt{Exp4.P}, that {with high probability} achieves $O(\sqrt{T K \ln N})$ regret in the {adversarial} contextual bandit setting. This improves on the $O(T^{2/3} (K \ln N)^{1/3})$ high probability bound in the {stochastic} setting. Previously, this result was known to hold {\em in expectation\/} for the algorithm \texttt{Exp4}~\cite{AuerCFS02}, but a {high probability\/} statement did not hold for the same algorithm, as per-round regrets on the order of $O(T^{-1/4})$ were possible~\cite{AuerCFS02}. Succeeding with high probability is important because reliably useful methods are preferred in practice. The \texttt{Exp4.P} analysis addresses competing with a finite (but possibly exponential in $T$) set of policies. In the stochastic case, $\epsilon$-greedy or epoch-greedy style algorithms~\cite{LangfordZ07} can compete with an infinite set of policies with a finite VC-dimension, but the worst-case regret grows as $O(T^{2/3})$ rather than $O(T^{1/2})$. We show how to use \texttt{Exp4.P} in a black-box fashion to guarantee a high probability regret bound of $O(\sqrt{Td\ln T})$ in this case, where $d$ is the VC-dimension. There are simple examples showing that it is impossible to compete with a VC-set with an online adaptive adversary, so some stochastic assumption seems necessary here. This paper advances a basic argument, namely, that such exploration problems are solvable in almost exactly the same sense as supervised learning problems, with suitable modifications to existing learning algorithms. In particular, we show that learning to compete with any set of strategies in the contextual bandit setting requires only a factor of $K$ more experience than for supervised learning (to achieve the same level of accuracy with the same confidence). \texttt{Exp4.P} does retain one limitation of its predecessors---it requires keeping explicit weights over the experts, so in the case when $N$ is too large, the algorithm becomes inefficient. On the other hand, \texttt{Exp4.P} provides a practical framework for incorporating more expressive expert classes, and it is efficient when $N$ is polynomial in $K$ and $T$. It may also be possible to run \texttt{Exp4.P} efficiently in certain cases when working with a family of experts that is exponentially large, but well structured, as in the case of experts corresponding to all prunings of a decision tree~\cite{HelmboldSc97}. A concrete example of this approach is given in Section~\ref{s:experiments}, where an efficient implementation of \texttt{Exp4.P} is applied to a large-scale real-world problem. 
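For concreteness, here is a minimal Python sketch of the interaction protocol and of the $\epsilon$-first baseline just described (an illustration of ours, not code used later in the paper; all names are hypothetical).
\begin{verbatim}
import numpy as np

def eps_first(policies, contexts, rewards, K, n_explore, seed=0):
    """policies: list of maps from a context to an arm in {0,...,K-1};
    contexts[t] is the context at round t; rewards[t] is the length-K
    reward vector at round t (the learner only ever sees rewards[t][j_t]).
    Explore uniformly for n_explore rounds, then follow the policy with
    the best importance-weighted reward estimate."""
    rng = np.random.default_rng(seed)
    est = np.zeros(len(policies))     # estimated return of each policy
    total = 0.0
    for t in range(len(contexts)):
        if t < n_explore:
            j = rng.integers(K)       # uniform exploration
            r = rewards[t][j]
            for i, pi in enumerate(policies):
                if pi(contexts[t]) == j:
                    est[i] += K * r   # unbiased: reward / (1/K)
        else:
            j = policies[int(np.argmax(est))](contexts[t])
            r = rewards[t][j]
        total += r                    # cumulative reward G_A
    return total
\end{verbatim}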
\noindent {\bf Related work:}\quad The non-contextual $K$-armed bandit problem was introduced by Robbins~\cite{Robbins52Some}, and analyzed by Lai and Robbins~\cite{LaiR85} in the i.i.d.\ case for fixed reward distributions. An adversarial version of the bandit problem was introduced by Auer~et~al.~\cite{AuerCFS02}. They gave an exponential-weight algorithm called \texttt{Exp3} with expected cumulative regret of $\tilde{O}(\sqrt{KT})$ and also \texttt{Exp3.P} with a similar bound that holds with high probability. They also showed that these are essentially optimal by proving a matching lower bound, which holds even in the i.i.d. case. They were also the first to consider the $K$-armed bandit problem with expert advice, introducing the \texttt{Exp4} algorithm as discussed earlier. Later, McMahan and Streeter~\cite{McMahanS09} designed a cleaner algorithm that improves on their bounds when many irrelevant actions (that no expert recommends) exist. Further background on online bandit problems appears in \cite{Cesa-BianchiL06}. \texttt{Exp4.P} is based on a careful composition of the \texttt{Exp4} and \texttt{Exp3.P} algorithms. We distill out the exact exponential moment method bound used in these results, proving an inequality for martingales (Theorem~\ref{thm:0}) to derive a sharper bound more directly. Our bound is a Freedman-style inequality for martingales~\cite{Freedman75}, and a similar approach was taken in Lemma 2 of Bartlett~et~al.~\cite{BDHKRT08}. Our bound, however, is more elemental than Bartlett~et~al.'s since our Theorem can be used to prove (and even tighten) their Lemma, but not vice versa. With respect to competing with a VC-set, a claim similar to our Theorem~\ref{thm:VC} (Section~\ref{sec:VCVE}) appears in a work of Lazaric and Munos~\cite{LazaricM09}. Although they incorrectly claimed that \texttt{Exp4} can be analyzed to give a regret bound of $\tilde{O}(KT\ln{N})$ with high probability, one can use \texttt{Exp4.P} in their proof instead. Besides being correct, our analysis is tighter, which is important in many situations where such a risk-sensitive algorithm might be applied. Related to the bounded VC-dimension setting, Kakade and Kalai~\cite{KakadeK05} give a $O(T^{3/4})$ regret guarantee for the transductive online setting, where the learner can observe the rewards of all actions, not only those it has taken. In \cite{BenDavidPS09}, Ben-David~et~al.\ consider agnostic online learning for bounded Littlestone-dimension. However, as VC-dimension does not bound Littlestone dimension, our work provides much tighter bounds in many cases. \noindent {\bf Possible Approaches for a High Probability Algorithm}\quad To develop a better intuition about the problem, we describe several naive strategies and illustrate why they fail. These strategies fail even if the rewards of each arm are drawn independently from a fixed unknown distribution, and thus certainly fail in the adversarial setting. {\bf Strategy 1}: Use confidence bounds to maintain a set of plausible experts, and randomize uniformly over the actions predicted by at least one expert in this set. To see how this strategy fails, consider two arms, 1 and 0, with respective deterministic rewards 1 and 0. The expert set contains $N$ experts. At every round, one of them is chosen uniformly at random to predict arm 0, and the remaining $N-1$ predict arm 1. All of the experts have small regret with high probability. The strategy will randomize uniformly over both arms on every round, incurring expected regret of nearly $T/2$. 
{\bf Strategy 2}: Use confidence bounds to maintain a set of plausible experts, and follow the prediction of an expert chosen uniformly at random from this set. To see how this strategy fails, let the set consist of $N > 2T$ experts predicting in some set of arms, all with reward 0 at every round, and let there be a good expert choosing another arm, which always has reward 1. The probability we never choose the good arm is $(1-1/N)^T$. We have $-T\log(1-\frac{1}{N}) < T\frac{\frac{1}{N}}{1-\frac{1}{N}} \le \frac{2T}{N} < 1$, using the elementary inequality $-\log(1-x) < x/(1-x)$ for $x\in (0,1]$. Thus $(1-1/N)^T > \frac{1}{2}$, and the strategy incurs regret of $T$ with probability greater than $1/2$ (as it only observes 0 rewards and is unable to eliminate any of the bad experts). \section{PROBLEM SETTING AND NOTATION} Let ${\boldsymbol{r}}(t)\in [0,1]^K$ be the vector of rewards, where $r_j(t)$ is the reward of arm $j$ on round $t$. Let ${\boldsymbol \xi}^i(t)$ be the $K$-dimensional advice vector of expert $i$ on round $t$. This vector represents a probability distribution over the arms, in which each entry $\xi^i_j(t)$ is the (expert's recommendation for the) probability of choosing arm $j$. For readability, we always use $i\in\{1,\ldots,N\}$ to index experts and $j\in\{1,\ldots,K\}$ to index arms. For each policy $\pi$, the associated expert predicts according to $\pi(x_t)$, where $x_t$ is the context available in round $t$. As the context is only used in this fashion here, we talk about expert predictions as described above. For a deterministic $\pi$, the corresponding prediction vector has a $1$ in component $\pi(x_t)$ and $0$ in the remaining components. On each round $t$, the world commits to ${\boldsymbol{r}}(t) \in [0,1]^K$. Then the $N$ experts make their recommendations ${\boldsymbol \xi}^1(t), \ldots, {\boldsymbol \xi}^N(t)$, and the learning algorithm $A$ (seeing the recommendations but not the rewards) chooses action $j_t\in \{1,\ldots,K\}$. Finally, the world reveals reward $r_{j_t}(t)$ to the learner, and this game proceeds to the next round. We define the \emph{return} (cumulative reward) of $A$ as $ G_{A} \doteq \sum_{t=1}^{T}{r_{j_t}(t)}. $ Letting $y_{i}(t)={\boldsymbol \xi}^{i}(t)\cdot {\boldsymbol{r}}(t)$, we also define the expected return of expert $i$, \[ G_{i}\doteq\sum_{t=1}^{T}y_{i}(t), \] and $G_{\max}=\max_i{G_i}$. The expected \emph{regret} of algorithm $A$ is defined as \[ G_{\max} - \mathbf{E}[G_{A}]. \] We can also think about bounds on the regret which hold with arbitrarily high probability. In that case, we can say that the regret is bounded by $\epsilon$ with probability $1- \delta$, if we have \[ \mathbf{Pr}[G_{\max} - G_{A} > \epsilon] \le \delta. \] In the definitions of expected regret and the high probability bound, the probabilities and expectations are taken w.r.t.\ both the randomness in the rewards ${\boldsymbol{r}}(t)$ and the algorithm's random choices. \section{A GENERAL RESULT FOR MARTINGALES} Before proving our main result (Theorem~\ref{thm:main}), we prove a general result for martingales in which the variance is treated as a random variable. It is used in the proof of Lemma \ref{lem:1} and may also be of independent interest. The technique is the standard one used to prove Bernstein's inequality for martingales \cite{Freedman75}. The useful difference here is that we prove the bound for any fixed \emph{estimate} of the variance rather than any \emph{bound} on the variance.
Let $X_{1},\ldots,X_{T}$ be a sequence of real-valued random variables. Let $\Expt{Y}=\Exp{Y | X_1,\ldots,X_{t-1}}$. \begin{thm} \label{thm:0} Assume, for all $t$, that $X_t\leq R$ and that $\Expt{X_t}=0$. Define the random variables \[ S \doteq \sum_{t=1}^T X_t, \qquad V \doteq \sum_{t=1}^T \Expt{X_t^2}. \] Then for any $\delta>0$, with probability at least $1-\delta$, we have the following guarantee: For any $V'\in \left[ \frac{R^2 \ln (1/\delta)}{e-2},\infty \right)$, \[ S \leq \sqrt{(e-2)\ln(1/\delta)} \left( \frac{V}{\sqrt{V'}} + \sqrt{V'} \right) \] and for $V' \in \left[0,\frac{R^2 \ln (1/\delta)}{e-2} \right]$, \[ S \leq R \ln (1/\delta) + (e-2) \frac{V}{R}. \] \end{thm} Note that a simple corollary of this theorem is the more typical Freedman-style inequality, which depends on an \emph{a priori} upper bound on the variance that can be substituted for both $V'$ and $V$. \begin{proof} For a fixed $\lambda \in [0,1/R]$, conditioning on $X_1,\ldots,X_{t-1}$ and computing expectations gives \begin{eqnarray} \Expt{e^{\lambda X_t}} &\leq& \Expt{1 + \lambda X_t + (e-2) \lambda^2 X_t^2} \label{eq:1} \\ &=& 1 + (e-2)\lambda^2 \Expt{X_t^2} \label{eq:2} \\ &\leq& \exp\paren{(e-2)\lambda^2 \Expt{X_t^2}}. \label{eq:3} \end{eqnarray} \eqref{eq:1} uses the fact that $e^{z} \leq 1 + z + (e-2)z^2$ for $z\leq 1$. \eqref{eq:2} uses $\Expt{X_t}=0$. \eqref{eq:3} uses $1+z\leq e^z$ for all $z$. Let us define random variables $Z_0=1$ and, for $t\geq 1$, \[ Z_t = Z_{t-1} \cdot \exp\paren{\lambda X_t - (e-2)\lambda^2 \Expt{X_t^2}}. \] Then, \begin{eqnarray*} \Expt{Z_t} &=& Z_{t-1} \cdot \exp\paren{- (e-2)\lambda^2 \Expt{X_t^2}} \cdot \Expt{e^{\lambda X_t}} \\ &\leq& Z_{t-1} \cdot \exp\paren{- (e-2)\lambda^2 \Expt{X_t^2}} \\ & & \cdot \exp\paren{(e-2)\lambda^2 \Expt{X_t^2}} = Z_{t-1}. \end{eqnarray*} Therefore, taking expectation over all of the variables $X_1,\ldots,X_T$ gives \[ \Exp{Z_T} \leq \Exp{Z_{T-1}} \leq \cdots \leq \Exp{Z_0} = 1. \] By Markov's inequality, $ \pr{Z_T \geq 1/\delta} \leq \delta$. Since \[ Z_T = \exp\paren{\lambda S - (e-2)\lambda^2 V}, \] we can substitute $\lambda = \min \left\{\frac{1}{R}, \sqrt{\frac{\ln (1/\delta)}{(e-2)V'}}\right\}$ and apply algebra to prove the theorem. \end{proof} \section{A HIGH PROBABILITY ALGORITHM}\label{ss:main} \begin{algorithm}[t] \begin{raggedright} \caption{Exp4.P}\label{a:exp4.p} \textbf{parameters:} $\delta>0,$ ${p_{\min}} \in [0,1/K]$ $\left(\textrm{we set }{p_{\min}} = \sqrt{\frac{\ln N}{KT}}\right)$ \par\end{raggedright} \begin{raggedright} \textbf{initialization: } Set $w_{i}(1)=1$ for $i=1,\ldots,N$. \medskip \par\end{raggedright} \begin{raggedright} \textbf{for each} $t=1,2,\ldots$ \par\end{raggedright} \begin{enumerate} \item get advice vectors ${\boldsymbol \xi}^{1}(t),\ldots,{\boldsymbol \xi}^{N}(t)$ \item set $W_{t}=\sum_{i=1}^{N}w_{i}(t)$ and for $j=1,\ldots,K$ set\[ p_{j}(t)=\left(1-K {p_{\min}} \right)\sum_{i=1}^{N}\frac{w_{i}(t)\xi_{j}^{i}(t)}{W_{t}}+{p_{\min}} \] \item draw action $j_t$ randomly according to the probabilities $p_{1}(t),\ldots,p_{K}(t)$.
\item receive reward $r_{j_{t}}(t)\in[0,1].$ \item for $j=1,\ldots,K$ set\[ \hat{r}_{j}(t)=\begin{cases} r_{j}(t)/p_{j}(t) & \textrm{if }j=j_{t}\\ 0 & \textrm{otherwise}\end{cases}\] \item for $i=1,\ldots,N$ set \begin{eqnarray*} \hat{y}_{i}(t) & = & {\boldsymbol \xi}^{i}(t)\cdot\hat{{\boldsymbol{r}}}(t)\\ {\hat{v}}_{i}(t) & = & \sum_{j}\xi_{j}^{i}(t)/p_{j}(t)\\ w_{i}(t+1) & = & w_{i}(t)e^{\left(\frac{{p_{\min}}}{2}\left(\hat{y}_i(t)+ {\hat{v}}_i(t) \sqrt{\frac{\ln(N/\delta)}{KT}} \right)\right)} \end{eqnarray*} \end{enumerate} \end{algorithm} The \texttt{Exp4.P} algorithm is given in Algorithm~\ref{a:exp4.p}. It comes with the following guarantee. \begin{thm}\label{thm:main} Assume that $\ln(N/\delta) \leq KT$, and that the set of experts includes one which, on each round, selects an action uniformly at random. Then, with probability at least $1-\delta$, \[G_{\ExpP} \geq G_{\max} - 6\sqrt{KT \ln(N/\delta)}.\] \end{thm} The proof of this theorem relies on two lemmas. The first lemma gives an upper confidence bound on the expected reward of an expert given the estimated reward of that expert. The estimated reward of an expert is defined as \[ \hat{G}_{i}\doteq\sum_{t=1}^{T}\hat{y}_{i}(t).\] We also define \[ \hat{\sigma}_{i}\doteq\sqrt{KT}+\frac{1}{\sqrt{KT}}\sum_{t=1}^{T} {\hat{v}}_i(t). \] \begin{lem} \label{lem:1} Under the conditions of Theorem~\ref{thm:main}, \[ \mathbf{Pr}\left[ \exists i: G_i \geq \hat{G}_{i}+ \sqrt{\ln(N/\delta)}\hat{\sigma}_{i}\right] \leq\delta.\] \end{lem} \begin{proof} Fix $i$. Recalling that $y_i(t)={\boldsymbol \xi}^{i}(t)\cdot {\boldsymbol{r}}(t)$ and the definition of $\hat{y}_i(t)$ in Algorithm~\ref{a:exp4.p}, let us further define the random variables $X_t = y_i(t) - \hat{y}_i(t)$ to which we will apply Theorem~\ref{thm:0}. Then $\Expt{\hat{y}_i(t)} = y_i(t)$ so that $\Expt{X_t}=0$ and $X_t\leq 1$. Further, we can compute \begin{eqnarray*} \Expt{X_t^2} &=& \Expt{(y_{i}(t)-\hat{y}_{i}(t))^{2}} \\ & = & \Expt{\hat{y}_i(t)^2} - y_i(t)^2 \leq \mathbf{E}_{t}[\hat{y}_{i}(t)^{2}] \\ & = & \mathbf{E}_{t}\left[\left({\boldsymbol \xi}^{i}(t)\cdot\hat{{\boldsymbol{r}}}(t)\right)^{2}\right] \\ & = & \sum_{j}p_{j}(t)\left(\xi_{j}^{i}(t)\cdot\frac{r_{j}(t)}{p_{j}(t)}\right)^{2} \\ & \le & \sum_{j}\frac{{\xi_{j}^{i}(t)}}{p_{j}(t)} \\ & = & {\hat{v}}_{i}(t). \end{eqnarray*} Note that \[ G_i - \hat{G}_i = \sum_{t=1}^T{X_{t}}. \] Using $\delta/N$ instead of $\delta$, and setting $V'=KT$ in Theorem~\ref{thm:0} gives us \[ \pr{G_i - \hat{G}_i \geq \sqrt{(e-2)\ln\left(\frac{N}{\delta}\right)} \left(\frac{\sum_{t=1}^T{{\hat{v}}_i(t)}}{\sqrt{KT}}+\sqrt{KT} \right)} \le \delta/N. \] Noting that $e-2< 1$, and applying a union bound over the $N$ experts gives the statement of the lemma. \end{proof} To state the next lemma, define $$ {\hat{U}} = \max_i \paren{\hat{G}_i + \hat{\sigma}_i \cdot\sqrt{\ln(N/\delta)} }. $$ \begin{lem} \label{lem:2} Under the conditions of Theorem~\ref{thm:main}, \begin{eqnarray*} G_{\ExpP} &\ge& \left(1-2\sqrt{\frac{K \ln N}{T}}\right){\hat{U}} - 2\sqrt{KT \ln(N/\delta)} \\ & & - \sqrt{KT \ln N} - \ln(N/\delta). \end{eqnarray*} \end{lem} We can now prove Theorem~\ref{thm:main}.
\begin{proof} Combining the statement of Lemma~\ref{lem:2} with the result of Lemma~\ref{lem:1}, we get, with probability at least $1-\delta$, \begin{eqnarray} G_{\textrm{Exp4.P}} \label{eq:gmaxlesst}&\geq& G_{\max} - 2\sqrt{\frac{K \ln N}{T}} T - \ln(N/\delta)\\ && - \sqrt{KT \ln N} - 2\sqrt{KT \ln(N/\delta)}\nonumber\\ &\geq& G_{\max} - 6 \sqrt{KT \ln(N/\delta)},\nonumber \end{eqnarray} with Eq.\ (\ref{eq:gmaxlesst}) using $G_{\max}\leq T$. \end{proof} \section{COMPETING WITH SETS OF FINITE VC DIMENSION}\label{sec:VCVE} A standard VC-argument in the online setting can be used to apply \texttt{Exp4.P} to compete with an infinite set of policies $\Pi$ with a finite VC dimension $d$, when the data is drawn independently from a fixed, unknown distribution. For simplicity, this section assumes that there are only two actions ($K=2$), as that is standard for the definition of VC-dimension. The algorithm \texttt{VE} chooses an action uniformly at random for the first $\tau = \sqrt{T (2d \ln \frac{eT}{d} +\ln \frac{2}{\delta})}$ rounds. This step partitions $\Pi$ into equivalence classes according to the sequence of advice on the first $\tau$ rounds. The algorithm constructs a finite set of policies $\Pi' $ by taking one (arbitrary) policy from each equivalence class, and runs \texttt{Exp4.P} for the remaining $T-\tau$ steps using $\Pi'$ as its set of experts. For a set of policies $\Pi$, define $G_{\max(\Pi)}$ as the return of the best policy in $\Pi$ at time horizon $T$. \begin{thm}\label{thm:VC} For all distributions $D$ over contexts and rewards, for all sets of policies $\Pi$ with VC dimension $d$, with probability $1-\delta$, \[ G_{\textrm{VE}} \geq G_{\max(\Pi)} - 9\sqrt{2T \left(d \ln \frac{eT}{d} +\ln \frac{2}{\delta} \right) }. \] \end{thm} \begin{proof} The regret of the initial exploration is bounded by $\tau$. We first bound the regret of \texttt{Exp4.P} to $\Pi'$, and the regret of $\Pi'$ to $\Pi$. We then optimize with respect to $\tau$ to get the result. Sauer's lemma implies that $|\Pi'| \leq \left( \frac{e \tau}{d} \right)^d$ and hence with probability $1-\delta/2$, we can bound $G_{\ExpP}(\Pi',T-\tau)$ from below by \begin{eqnarray*} G_{\max(\Pi')} - 6\sqrt{2(T-\tau)(d \ln(e \tau/ d)+ \ln(2/ \delta))}. \end{eqnarray*} To bound the regret of $\Pi'$ to $\Pi$, pick any sequence of feature observations $x_1,...,x_T$. Sauer's Lemma implies the number of unique functions on the observation sequence in $\Pi$ is bounded by $\left( \frac{e T}{d} \right)^d$. For a uniformly random subset $S$ of size $\tau$ of the feature observations we bound the probability that two functions $\pi,\pi'$ agree on the subset. Let $n=n(\pi,\pi')$ be the number of disagreements on the $T$-length sequence. Then \[ \mathbf{Pr}_{S}\left[ \forall x \in S\,\,\, \pi(x)=\pi'(x) \right] = \left(1 - \frac{n}{T} \right)^\tau \leq e^{-\frac{n\tau}{T}}. \] Thus for all $\pi,\pi'\in \Pi$ with $n(\pi,\pi') \geq \frac{T}{\tau} \ln 1/\delta_0$, we have $\mathbf{Pr}_{S}\left[ \forall x \in S\,\,\, \pi(x)=\pi'(x) \right] \leq \delta_0$. Setting $\delta_0 = \frac{\delta}{2} \left( \frac{d}{e T} \right)^{2d}$ and using a union bound over every pair of policies, we get \begin{eqnarray*} \mathbf{Pr}_{S}( \exists \pi,\pi' & \textrm{ s.t.\ } n(\pi,\pi') \geq \frac{T}{\tau} \left(2d \ln \frac{eT}{d} +\ln \frac{2}{\delta}\right) \\ & \mbox{and}\ \forall x \in S\,\,\, \pi(x)=\pi'(x) ) \leq \delta/2.
\end{eqnarray*} In other words, for all sequences $x_1,...,x_T$, with probability $1 - \delta/2$ over a random subset of size $\tau$, \[ G_{\max(\Pi')} \geq G_{\max(\Pi)} - \frac{T}{\tau} \left(2d \ln \frac{eT}{d} +\ln \frac{2}{\delta}\right).\] Because the above holds for any sequence $x_1,...,x_T$, it holds in expectation over sequences drawn i.i.d.\ from $D$. Furthermore, we can regard the first $\tau$ samples as the random draw of the subset since i.i.d.\ distributions are exchangeable. Consequently, with probability $1-\delta$, we have \begin{eqnarray*} G_{\textrm{VE}} &\ge& G_{\max(\Pi)} - \tau - \frac{T}{\tau} \left(2d \ln \frac{eT}{d} +\ln \frac{2}{\delta}\right) \\ && - 6\sqrt{2T(d \ln(e \tau/ d)+ \ln(2/ \delta))}. \end{eqnarray*} Letting $\tau = \sqrt{T (2d \ln \frac{eT}{d} +\ln \frac{2}{\delta})}$ and substituting $T \geq \tau$, we get \[ G_{\textrm{VE}} \geq G_{\max(\Pi)} - 9\sqrt{2T (d \ln \frac{eT}{d} +\ln \frac{2}{\delta})}. \] \end{proof} This theorem easily extends to more than two actions ($K > 2$) given generalizations of the VC-dimension to multiclass classification and of Sauer's lemma~\cite{HausslerL95}. \section{A PRACTICAL IMPROVEMENT TO EXP4.P}\label{s:msimp} Here we give a variant of Step $2$ of Algorithm~\ref{a:exp4.p} for setting the probabilities $p_j(t)$, in the style of \cite{McMahanS09}. For our analysis of \texttt{Exp4.P}, the two properties we need to ensure in setting the probabilities $p_j(t)$ are \begin{enumerate} \item $p_j(t) \approx \sum_{i=1}^{N}{\frac{w_i(t)\xi_j^i(t)}{W_t}}$. \item The value of each $p_j(t)$ is at least ${p_{\min}}$. \end{enumerate} One way to achieve this, as is done in Algorithm~\ref{a:exp4.p}, is to mix in the uniform distribution over all arms. While this yields a simpler algorithm and achieves optimal regret up to a multiplicative constant, in general, this technique can add unnecessary probability mass to badly-performing arms; for example, it can double the probability of arms whose probability would already be set to ${p_{\min}}$. \begin{algorithm}[t] \caption{An Alternate Method for Setting Probabilities in Step 2 of Algorithm~\protect\ref{a:exp4.p}} \label{a:probset} \textbf{parameters:} $w_1(t), w_2(t), \ldots w_N(t)$ and ${\boldsymbol \xi}^{1}(t),\ldots,{\boldsymbol \xi}^{N}(t)$ and ${p_{\min}}$\\ set \[ W_{t}=\sum_{i=1}^{N}w_{i}(t)\] for $j = 1$ to $K$ set \[p_{j}(t)=\sum_{i=1}^{N}\frac{w_{i}(t)\xi_{j}^{i}(t)}{W_{t}}\] let $\Delta := 0$ and $l := 1$ \\ \textbf{for each} action $j$ in increasing order according to $p_j$ \begin{enumerate} \item \textbf{if} $p_j \left(1- \Delta/l \right) \geq {p_{\min}}$ \\ for all actions $j'$ with $p_{j'}\geq p_j$\\ $p'_{j'} = p_{j'} \left(1- \Delta/l \right)$\\ return $\forall j\,\,\,p'_j$ \item \textbf{else} $p'_j = {p_{\min}}$, $\ \Delta := \Delta + p'_j - p_j$, $\ l := l - p_j$. \end{enumerate} \end{algorithm} A fix to this, first suggested by \cite{McMahanS09}, is to ensure the two requirements via a different mechanism. We present a variant of their suggestion in Algorithm~\ref{a:probset}, which can be used to make \texttt{Exp4.P} perform better in practice with a computational complexity of $O(K \ln K)$ for computing the probabilities $p_j(t)$ per round. The basic intuition of this algorithm is that it enforces the minimum probability in order from smallest to largest action probability, while otherwise minimizing the ratio of the initial to final action probability.
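To make Algorithms~\ref{a:exp4.p} and~\ref{a:probset} concrete, the following Python sketch (an illustration of ours, not the implementation used in Section~\ref{s:experiments}; \texttt{pull} is a hypothetical stand-in for the environment revealing the chosen arm's reward) performs a single round of \texttt{Exp4.P} with the probability-setting step above in place of uniform mixing.
\begin{verbatim}
import numpy as np

def floor_probabilities(p, p_min):
    """Algorithm 2: raise each probability to at least p_min and
    rescale the larger probabilities to absorb the added mass."""
    p = np.asarray(p, dtype=float)
    p_new = p.copy()
    order = np.argsort(p)              # actions from smallest to largest
    delta, ell = 0.0, 1.0
    for idx, j in enumerate(order):
        if p[j] * (1.0 - delta / ell) >= p_min:
            rest = order[idx:]         # remaining (larger) probabilities
            p_new[rest] = p[rest] * (1.0 - delta / ell)
            break
        p_new[j] = p_min               # floor this action
        delta += p_new[j] - p[j]
        ell -= p[j]
    return p_new

def exp4p_round(w, xi, p_min, delta, K, T, rng):
    """One round of Exp4.P.  w: length-N expert weights; xi: (N, K)
    advice matrix, each row a distribution over arms.  pull(j) stands
    in for the environment returning the reward of arm j in [0,1]."""
    N = len(w)
    q = (w / w.sum()) @ xi                # mixture of expert advice
    p = floor_probabilities(q, p_min)     # Algorithm 2 instead of uniform mixing
    j = rng.choice(K, p=p)
    r = pull(j)
    r_hat = np.zeros(K)
    r_hat[j] = r / p[j]                   # importance-weighted estimate
    y_hat = xi @ r_hat                    # \hat{y}_i(t)
    v_hat = (xi / p).sum(axis=1)          # \hat{v}_i(t)
    w = w * np.exp((p_min / 2.0) *
                   (y_hat + v_hat * np.sqrt(np.log(N / delta) / (K * T))))
    return w, j                           # in practice, store log-weights
\end{verbatim}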
This technique ensures our needed properties, and it is easy to verify that by setting probabilities using Algorithm~\ref{a:probset} the proof in Section~\ref{ss:main} remains valid with little modification. We use this variant in the experiments in Section~\ref{s:experiments}. \section{EXPERIMENTS}\label{s:experiments} In this section, we apply \texttt{Exp4.P} with the improvement in Section~\ref{s:msimp} to a large-scale contextual bandit problem. The purpose of the experiments is two-fold: to give a proof-of-concept demonstration of the performance of \texttt{Exp4.P} on a non-trivial problem, and to illustrate how the algorithm may be implemented efficiently for special classes of experts. The problem we study is personalized news article recommendation on the Yahoo!\ front page~\cite{AgarwalCEMPRRZ08,LCLS10}. Each time a user visits the front page, a news article out of a small pool of hand-picked candidates is highlighted. The goal is to highlight the most interesting articles to users, or formally, maximize the total number of user clicks on the recommended articles. In this problem, we treat articles as arms, and define the payoff to be $1$ if the article is clicked on and $0$ otherwise. Therefore, the average per-trial payoff of an algorithm/policy is the overall \textbf{click-through rate} (or \textbf{CTR} for short). Following \cite{LCLS10}, we created $B=5$ user clusters and thus each user, based on \emph{normalized} Euclidean distance to the cluster centers, was associated with a $B$-dimensional \emph{membership feature} $\mathbf{d}$ whose (non-negative) components always sum up to 1. Experts are designed as follows. Each expert is associated with a mapping from user clusters to articles, that is, with a vector $\mathbf{a}\in\{1,\ldots,K\}^B$ where $a_b$ is the article to be displayed for users from cluster $b\in\{1,\ldots,B\}$. When a user arrives with feature $\mathbf{d}$, the prediction ${\boldsymbol \xi}^{\mathbf{a}}$ of expert $\mathbf{a}$ is $\xi^{\mathbf{a}}_j=\sum_{b:a_b=j}d_b$. There are a total of $K^B$ experts. Now we show how to implement \texttt{Exp4.P} efficiently. Referring to the notation in \texttt{Exp4.P}, we have \begin{eqnarray*} \hat{y}_{{\bf a}}(t) &=& {\bxi^{\veca}}(t) \cdot \hat{{\boldsymbol{r}}}(t) = \sum_j \sum_{b:a_b=j} d_b (t) \hat{r}_{j}(t) \\ && = \sum_b d_b(t) \hat{r}_{a_b}(t), \\ \hat{v}_{{\bf a}}(t) &=& \sum_j \sum_{b:a_b=j} \frac{d_b(t)}{p_j(t)} = \sum_b \frac{d_b(t)}{p_{a_b}(t)}. \end{eqnarray*} Thus, \begin{align*} w_{{\bf a}}&(t+1) \\ &= w_{{\bf a}}(t) \exp\left(\frac{{p_{\min}}}{2}\left(\hat{y}_{\bf a}(t)+\hat{v}_{\bf a}(t) \sqrt{\frac{\ln(N/\delta)}{KT}}\right) \right) \\ &= w_{{\bf a}}(t) \exp\left(\sum_b d_b(t) f_{a_b}(t)\right), \end{align*} where \[ f_j(t) = \frac{{p_{\min}}}{2}\left(\hat{r}_j(t) + \frac{1}{p_j(t)}\sqrt{\frac{\ln(N/\delta)}{KT}}\right). \] Unraveling the recurrence, we rewrite $w_{{\bf a}}(t+1)$ by \begin{eqnarray*} w_{{\bf a}}(t+1) &=& \exp\left(\sum_{\tau=1}^t \sum_b d_b(\tau) f_{a_b}(\tau)\right) \\ &=& \exp\left(\sum_b \sum_{\tau=1}^t d_b(\tau) f_{a_b}(\tau)\right) \\ &=& \prod_b g_{b,a_b}(t), \end{eqnarray*} implying that $w_{{\bf a}}(t+1)$ can be computed implicitly by maintaining the quantity $g_{b,j}(t) = \exp\left(\sum_{\tau=1}^t d_b(\tau) f_j(\tau)\right)$ for each $b$ and $j$.
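In code, this bookkeeping amounts to maintaining a $B\times K$ array of the logarithms $\log g_{b,j}(t)$ (a minimal sketch of ours, with illustrative names; storing logarithms avoids overflow):
\begin{verbatim}
import numpy as np

def update_scores(log_g, d, f):
    """log_g: (B, K) array holding log g_{b,j}(t-1); d: length-B
    membership feature d(t); f: length-K vector of f_j(t) as above.
    Returns log g_{b,j}(t) = log g_{b,j}(t-1) + d_b(t) * f_j(t)."""
    return log_g + np.outer(d, f)

def log_weight(log_g, a):
    """log w_a(t+1) = sum_b log g_{b, a_b}(t) for an expert
    a in {0,...,K-1}^B."""
    return sum(log_g[b, a[b]] for b in range(len(a)))
\end{verbatim}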
Next, we compute $W_t$ as follows: $W_t = \sum_{{\bf a}} w_{\bf a}(t) = \sum_{{\bf a}} \prod_b g_{b,a_b}(t) = \prod_b \left( \sum_j g_{b,j}(t) \right).$ Repeating the same trick, we have \[ \sum_{{\bf a}}\frac{w_{{\bf a}}(t){\bxi^{\veca}}(t)}{W_t} = \sum_b \frac{d_b(t) g_{b,j}(t)}{\sum_{j'=1}^K g_{b,j'}(t)}, \] which are the inputs to Algorithm~\ref{a:probset} to produce the final arm-selection probabilities, $p_j(t)$ for all $j$. Therefore, for this structured set of experts, the time complexity of \texttt{Exp4.P} is only linear in $K$ and $B$ despite the exponentially large size of this set. To compare algorithms, we collected historical user visit events with a random policy that chose articles uniformly at random for a fraction of user visits on the Yahoo!\ front page from May 1 to 9, 2009. This data contains over $41$M user visits, a total of $253$ articles, and about $21$ candidate articles in the pool per user visit. (The pool of candidate articles changes over time, requiring corresponding modifications to \texttt{Exp4.P}\footnote{Our modification ensured that a new article's initial score was the average of all currently available ones'.}). With such random traffic data, we were able to obtain an unbiased \emph{estimate of the CTR} (called \textbf{eCTR}) of a bandit algorithm as if it is run in the real world~\cite{LCLS10}. Due to practical concerns when applying a bandit algorithm, it is common to randomly assign each user visit to one of two ``buckets'': the \emph{learning bucket}, where the bandit algorithm is run, and the \emph{deployment bucket}, where the greedy policy (learned by the algorithm in the learning bucket) is used to serve users without receiving payoff information. Note that since the bandit algorithm continues to refine its policy based on payoff feedback in the learning bucket, its greedy policy may change over time. Its eCTR in the deployment bucket thus measures how good this greedy policy is. And as the deployment bucket is usually much larger than the learning bucket, the deployment eCTR is deemed a more important metric. Finally, to protect business-sensitive information, we only report \emph{normalized eCTR}s, which are the actual eCTRs divided by the random policy's eCTR. Based on estimates of $T$ and $K$, we ran \texttt{Exp4.P} with $\delta=0.01$. The same estimates were used to set $\gamma$ in \texttt{Exp4} to minimize the regret bound in Theorem~7.1 of \cite{AuerCFS02}. Table~\ref{tbl:results} summarizes eCTRs of all three algorithms in the two buckets. All differences are significant due to the large volume of data. \begin{table} \begin{center} \begin{tabular}{l|c|c|c} & \texttt{Exp4.P} & \texttt{Exp4} & $\epsilon$-greedy \\ \hline learning CTR & $1.0525$ & $1.0988$ & $1.3827$ \\ deployment CTR & $1.6512$ & $1.5309$ & $1.4290$ \end{tabular} \end{center} \caption{Overall click-through rates (eCTRs) of various algorithms on the May 1--9 data set.} \label{tbl:results} \end{table} First, \texttt{Exp4.P}'s eCTR is slightly worse than \texttt{Exp4} in the learning bucket. This gap is probably due to the more conservative nature of \texttt{Exp4.P}, as it uses the additional $\hat{v}_i$ terms to control variance, which in turn encourages further exploration. In return for the more extensive exploration, \texttt{Exp4.P} gained the highest deployment eCTR, implying its greedy policy is superior to \texttt{Exp4}. Second, we note a similar comparison to the $\epsilon$-greedy variant of \texttt{Exp4.P}. 
It was the most greedy among the three algorithms and thus had the highest eCTR in the learning bucket, but lowest eCTR in the deployment bucket. This fact also suggests the benefits of using the somewhat more complicated soft-max exploration scheme in \texttt{Exp4.P}. \subsubsection*{Acknowledgments} We thank Wei Chu for assistance with the experiments and Kishore Papineni for helpful discussions. This work was done while Lev Reyzin and Robert E.\ Schapire were at Yahoo!\ Research, NY. Lev Reyzin acknowledges this material is based upon work supported by the NSF under Grant \#0937060 to the CRA for the Computing Innovation Fellowship program. \appendix \section{PROOF OF LEMMA 4} Recall that the estimated reward of expert $i$ is defined as \[ \hat{G}_{i}\doteq\sum_{t=1}^{T}\hat{y}_{i}(t).\] Also \[ \hat{\sigma}_{i}\doteq\sqrt{KT}+\frac{1}{\sqrt{KT}}\sum_{t=1}^{T} {\hat{v}}_i(t) \] and that \[ {\hat{U}} = {p_{\min}}ax_i \paren{\hat{G}_i + \hat{\sigma}_i \cdot\sqrt{\ln(N/\delta)} }. \] {\bf Lemma 4.}\quad Under the conditions of Theorem 2, \begin{eqnarray*} G_{\ExpP} &\ge& \left(1-2\sqrt{\frac{K \ln N}{T}}\right){\hat{U}} - 2\sqrt{KT \ln(N/\delta)}\\ & & - \sqrt{KT \ln N} - \ln(N/\delta). \end{eqnarray*} \begin{proof} For the proof, we use $\gamma = \sqrt{\frac{K \ln N}{T}}$.\\ We have $$p_{j}(t)\ge {p_{\min}} = \sqrt{\frac{\ln N}{KT}}$$ and $$\hat{r}_{j}(t)\le 1/{p_{\min}}$$ so that \[ \hat{y}_i(t) \leq 1/{p_{\min}} \ \ \ \ \ {p_{\min}}athrm{and} \ \ \ \ \ {\hat{v}}_i(t)\leq 1/{p_{\min}}. \] Thus, \begin{eqnarray*} \frac{{p_{\min}}}{2}\paren{\hat{y}_{i}(t)+\sqrt{\frac{\ln(N/\delta)}{KT}} {\hat{v}}_i(t)} &\leq& \frac{{p_{\min}}}{2} (\hat{y}_{i}(t)+{\hat{v}}_i(t)) \\ &\leq& 1. \end{eqnarray*} Let $\bar{w}_{i}(t)=w_{i}(t)/W_{t}$. We will need the following inequality: \begin{inequality}\label{ineq1}\quad $\sum_{i}^{N} \bar{w}_{i}(t) {\hat{v}}_i(t) \le \frac{K}{1-\gamma }$. \end{inequality} As a corollary, we have \begin{eqnarray*} \sum_{i}^{N} {\bar{w}_{i}(t)}{{\hat{v}}_{i}(t)^2} & \le & \sum_{i}^{N} {\bar{w}_{i}(t)}{{\hat{v}}_{i}(t)}\frac{1}{{p_{\min}}}\\ & \le & \sqrt{\frac{KT}{\ln N}} \frac{K}{1-\gamma}. \end{eqnarray*} Also, \cite{AuerCFS02} (on p.67) prove the following two inequalities (with a typo). For completeness, the proofs of all three inequalities are given below this proof. \begin{inequality}\label{ineq3}\quad $ \sum_{i=1}^{N}\bar{w}_{i}(t)\hat{y}_{i}(t)\le\frac{r_{j_t}(t)}{1-\gamma}$. \end{inequality} \begin{inequality}\label{ineq4}\quad $ \sum_{i=1}^{N}\bar{w}_{i}(t)\hat{y}_{i}(t)^{2}\le\frac{\hat{r}_{j_t}(t)}{1-\gamma}.$ \end{inequality} Now letting $b=\frac{{p_{\min}}}{2}$ and $c=\frac{{p_{\min}} \sqrt{\ln (N/\delta)}}{2\sqrt{KT}}$ we have \begin{eqnarray} \frac{W_{t+1}}{W_{t}} & = & \sum_{i=1}^{N}\frac{w_{i}(t+1)}{W_{t}}\nonumber \\ & = & \sum_{i=1}^{N}\bar{w}_{i}(t)\exp \left(b\hat{y}_i(t)+ c{\hat{v}}_i(t)\right)\nonumber \\ \label{eqn:2ineq} & \le & \sum_{i=1}^{N}\bar{w}_{i}(t)\left[1 +b\hat{y}_{i}(t) +c{\hat{v}}_i(t) \right] \\ & &+ \sum_{i=1}^{N}\bar{w}_{i}(t)\left[ 2b^2\hat{y}_{i}(t)^2 +2c^2{\hat{v}}_i(t)^2 \right]\nonumber \\ &=& 1 +b \sum_{i=1}^N\bar{w}_{i}(t)\hat{y}_{i}(t) +c \sum_{i=1}^N\bar{w}_{i}(t) {\hat{v}}_i(t)\nonumber \\ && +2b^2\sum_{i=1}^N\bar{w}_{i}(t)\hat{y}_{i}(t)^2 +2c^2\sum_{i=1}^N\bar{w}_{i}(t) {\hat{v}}_i(t)^2\nonumber \\ &\leq& \label{eqn:4ineq} 1 +b\frac{r_{j_t}(t)}{1-\gamma} +c\frac{{K}}{1-\gamma} +2b^2\frac{\hat{r}_{j_t}(t)}{1-\gamma} \\ & & +2c^2\sqrt{\frac{KT}{\ln N}}\frac{K}{1-\gamma}\nonumber. 
\end{eqnarray} Eq.\ (\ref{eqn:2ineq}) uses $e^{a}\le1+a+ (e-2)a^{2}$ for $a\le1$, $(a+b)^2 \leq 2a^2 + 2 b^2$, and $e-2 < 1$. Eq.\ (\ref{eqn:4ineq}) uses inequalities 1 through 3. Now take logarithms, use the inequality $\ln(1+x)\le x$, sum both sides over $T$, and we obtain \begin{eqnarray*} \ln\left(\frac{W_{T+1}}{W_{1}}\right) &\le& \frac{b}{1-\gamma}\sum_{t=1}^T {r_{j_t}(t)} +c\frac{KT}{1-\gamma} \\ &&+\frac{2b^2}{1-\gamma} \sum_{t=1}^T \hat{r}_{j_t}(t) +2c^2\sqrt{\frac{KT}{\ln N}}\frac{KT}{1-\gamma} \\ &\le& \frac{b}{1-\gamma} G_{\textrm{Exp4.P}} +c\frac{KT}{1-\gamma} +\frac{2b^2}{1-\gamma} K {\hat{U}} \\ & & +2c^2\sqrt{\frac{KT}{\ln N}}\frac{KT}{1-\gamma}. \end{eqnarray*} Here, we used \[ G_{\textrm{Exp4.P}} = \sum_{t=1}^T {r_{j_t}(t)} \] and \[ \sum_{t=1}^{T}\hat{r}_{j_t}(t)=K\sum_{t=1}^{T}\frac{1}{K}\sum_{j=1}^{K}\hat{r}_{j}(t)\le K\hat{G}_{\textrm{uniform}}\le K {\hat{U}}. \] because we assumed that the set of experts includes one who always selects each action uniformly at random. We also have $\ln(W_{1})=\ln(N)$ and \begin{eqnarray*} \ln(W_{T+1}) &\ge& {p_{\min}}ax_i \left( \ln w_{i}(T+1) \right) \\ & = & {p_{\min}}ax_i \paren{b\hat{G}_i + c \sum_{t=1}^{T} {\hat{v}}_i(t)}\\ &=& b {\hat{U}} - b \sqrt{KT \ln (N/\delta)}. \end{eqnarray*} Combining then gives \begin{eqnarray*} &b {\hat{U}} - b \sqrt{KT \ln (N/\delta)} - \ln N & \\ & \leq &\\ & \frac{b}{1-\gamma} G_{\textrm{Exp4.P}} +c\frac{KT}{1-\gamma} +\frac{2b^2}{1-\gamma} K {\hat{U}} +2c^2\sqrt{\frac{KT}{\ln N}}\frac{KT}{1-\gamma}.& \end{eqnarray*} Solving for $G_{\textrm{Exp4.P}}$ now gives \begin{eqnarray} \nonumber G_{\textrm{Exp4.P}} &\geq& \left(1-\gamma-2bK\right){\hat{U}} - \left(\frac{1-\gamma}{b}\right)\ln N\nonumber \\ & & - (1-\gamma) \sqrt{KT \ln(N/\delta)} - \frac{c}{b}KT\nonumber \\ & & -2\frac{c^2}{b}\sqrt{\frac{KT}{\ln N}}KT\nonumber \nonumber \\ \label{eq:5} &\geq& \left(1-\gamma-2bK\right){\hat{U}} - \sqrt{KT \ln(N/\delta)} \\ & & - \frac{1}{b}\ln N - \frac{c}{b}KT -2\frac{c^2}{b}\sqrt{\frac{KT}{\ln N}}KT\nonumber\\ &=& \label{eq:6} \left(1-2\sqrt{\frac{K \ln N}{T}}\right){\hat{U}} - \ln(N/\delta)\\ & & - 2 \sqrt{KT \ln N} - \sqrt{KT \ln(N/\delta)},\nonumber \end{eqnarray} using $\gamma>0$ in \eqref{eq:5} and plugging in the definition of $\gamma,b,c$ in \eqref{eq:6}. \end{proof} We prove Inequalities \ref{ineq1} through \ref{ineq4} below. Let $\bar{w}_{i}(t)=w_{i}(t)/W_{t}$. \begin{inequalityrestate}\quad $\sum_{i}^{N} \bar{w}_{i}(t) {\hat{v}}_i(t) \le \frac{K}{1-\gamma }$. \end{inequalityrestate} \begin{proof} \begin{eqnarray*} \sum_{i}^{N}\bar{w}_{i}(t){\hat{v}}_i(t) & = & \sum_{i}^{N}\bar{w}_{i}(t)\sum_{j}^{K}\frac{\xi_{j}^{i}(t)}{p_{j}(t)}\\ & = & \sum_{j=1}^{K}\frac{1}{p_{j}(t)}\sum_{i}^{N}\bar{w}_{i}(t)\xi_{j}^{i}(t)\\ & = & \sum_{j=1}^{K}\frac{1}{p_{j}(t)}\left(\frac{p_{j}(t)- {p_{\min}} }{1-\gamma}\right)\\ & \le & \sum_{j=1}^{K}\frac{1}{1-\gamma}\\ & = & \frac{{K}}{1-\gamma}. \end{eqnarray*} \end{proof} \begin{inequalityrestate}\quad $ \sum_{i=1}^{N}\bar{w}_{i}(t)\hat{y}_{i}(t)\le\frac{r_{j_t}(t)}{1-\gamma}$. \end{inequalityrestate} \begin{proof} \begin{eqnarray*} \sum_{i=1}^{N}\bar{w}_{i}(t)\hat{y}_{i}(t)&=& \sum_{i=1}^{N}\bar{w}_{i}(t)\left(\sum_{j=1}^{K}{\xi_j^i(t)\hat{r}_j(t)}\right)\\ &= & \sum_{j=1}^{K}\left( \sum_{i=1}^{N}{\bar{w}_i(t)\xi^{i}_j(t)} \right)\hat{r}_j(t)\\ &=& \sum_{j=1}^{K}\left(\frac{p_{j}(t)- {p_{\min}} }{1-\gamma}\right)\hat{r}_j(t)\\ &\le&\frac{r_{j_t}(t)}{1-\gamma}. 
\end{eqnarray*} \end{proof} \begin{inequalityrestate}\quad $ \sum_{i=1}^{N}\bar{w}_{i}(t)\hat{y}_{i}(t)^{2}\le\frac{\hat{r}_{j_t}(t)}{1-\gamma}.$ \end{inequalityrestate} \begin{proof} \begin{eqnarray*} \sum_{i=1}^{N}\bar{w}_{i}(t)\hat{y}_{i}(t)^{2}&=& \sum_{i=1}^{N}\bar{w}_{i}(t)\left(\sum_{j=1}^{K}{\xi_j^i(t)\hat{r}_j(t)}\right)^2\\ &=&\sum_{i=1}^{N}\bar{w}_{i}(t)\left(\xi_{j_t}^i(t)\hat{r}_{j_t}(t)\right)^2\\ &\le&\left(\frac{p_{j_t}(t)}{1-\gamma}\right)\hat{r}_{j_t}(t)^2\\ &\le&\frac{\hat{r}_{j_t}(t)}{1-\gamma}. \end{eqnarray*} \end{proof} \end{document}
\begin{document} \begin{abstract} We introduce a new invariant, the \textit{positive idempotent group}, for strongly asymptotically dynamically convex contact manifolds. This invariant can be used to distinguish different contact structures. As an application, for any complex dimension $n>8$ and any positive integer $k$, we can construct $n$-dimensional Stein manifolds $V_0,V_1,\cdots,V_k$ such that $\tilde{H}_j(V_i)=0$ for $j\neq n-1,n$, the $V_i$ are almost symplectomorphic, and their boundaries are in the same almost contact class but are not contactomorphic. \end{abstract} \title{STEIN DOMAINS WITH EXOTIC CONTACT BOUNDARIES.} \tableofcontents \section{Introduction} In this paper, we will introduce a new invariant $I_+(\Sigma)$, the \textit{positive idempotent group}, for strongly asymptotically dynamically convex contact manifolds $(\Sigma,\xi,\Phi)$ (see the definition in Section~\ref{section : def of ADC}). The definition of the positive idempotent group $I_+(W)$ depends on the filling $W$: it is well defined when $SH_*(W)\neq 0$ for some Liouville filling $W$, and it is independent of the filling when $(\Sigma,\xi,\Phi)$ is a strongly ADC contact manifold. The main purpose of this paper is to prove the following theorem: \begin{theorem}\label{Main Thm} If $(\Sigma,\xi,\Phi)$ is a strongly asymptotically dynamically convex contact structure with a Liouville filling $W$ such that $SH_*(W)\neq 0$, then all connected Liouville fillings of $(\Sigma,\xi,\Phi)$ with nonzero symplectic homology have isomorphic positive idempotent group $I_+$. \end{theorem} \begin{remark} Here a Liouville filling $W$ of $(\Sigma,\xi,\Phi)$ means that $W$ is a filling of $(\Sigma,\xi)$ and the trivialization $\Phi$ of the canonical bundle extends over $W$. Now that all these Liouville fillings have isomorphic positive idempotent group, we can regard $I_+$ as an invariant of strongly ADC contact manifolds. We will prove the result in Section~\ref{section:indepnedence of I+}. \end{remark} As an application, we will use the \textit{positive idempotent group} to distinguish contact boundaries of Stein manifolds, a problem with a long history. Y.~Eliashberg \cite{eliashberg1991symplectic} constructed an exotic contact structure representing the standard almost contact structure on $S^{4k+1}$, and I.~Ustilovsky \cite{ustilovsky1999contact} proved that every almost contact class on $S^{4k+1}$ has infinitely many different contact structures. M.~McLean \cite{mclean2007lefschetz} has shown that there are infinitely many exotic Stein structures $\mathbb{C}_k^n$ on $\mathbb{C}^n, n\geq 4$. Using flexible Weinstein structures, O.~Lazarev \cite{lazarev2016contact} proved that any contact manifold admitting an almost Weinstein filling admits infinitely many exotic contact structures with flexible fillings. We have the following theorem: \begin{theorem}\label{big theorem} For any complex dimension $n>8$ and any positive integer $k$, there are Stein domains $V_0,V_1,\cdots, V_k$ such that: \begin{itemize} \item the $V_i$'s are almost symplectomorphic, \item the contact boundaries $\partial V_i$ of $V_i$ are in the same almost contact class, \item the $\partial V_i$ are mutually non-contactomorphic, \item $\tilde{H}_j(V_i)=0$ for $j\neq n,n-1$.
\end{itemize} \end{theorem} \begin{remark} In Theorem 1.14 \cite{lazarev2016contact}, O.Lazarev proved that if $V$ is almost symplectomorphic to a domain containing a closed (regular) Lagrangian, then there are infinitely symplectic structures $V_k$ almost symplectomorphic to $V$ that are not symplectomorphic and their contact boundaries are not contactomorphic either. The Stein domains constructed in this paper are different from Lazarev's examples. \end{remark} \subsection{Sketch of the proof} The contact structure on ${\partial} V_i$ in Theorem~\ref{big theorem} is asymptotically dynamically convex. In the case when $I_+(\Sigma)$ is finite, we can define the \textit{positive idempotent index} $i(\Sigma):=|I_+(\Sigma)|$ (see Section~\ref{ss:definition of positive idempotent group}). The theorem~\ref{big theorem} is based on the following theorem, which will be proved in Section~\ref{Finite}: \begin{theorem}\langlebel{main technical theorem} There exists connected Weinstein domains $(W^{2n},\langlembda,{\partial}si),$ for any $n>8$ such that \begin{itemize} \item $({\partial} W,\langlembda)$ is asymptotically dynamically convex, \item $SH_*(W,\mathbb{Z}/2\mathbb{Z})\neq 0$, \item $|I(W,\mathbb{Z}/2\mathbb{Z})|<\infty$. \item $\tilde{H}_i(W,\mathbb{Z}/2\mathbb{Z})=0$, for $i\neq n,n-1$. \end{itemize} \end{theorem} \begin{remark} The definition of $I$ is in equation~\ref{def of I}. \end{remark} The basic idea to construct the Weinstein domain is to use Brieskorn variety. First we take the complement of a specific Brieskorn variety and then attach a Weinstein 2-handle to kill the fundamental group. With the help of a covering trick we can show that the resultant manifold has asymptotically dynamically convex boundary. The full proof is at the end of this paper, see Section~\ref{Finite}. \begin{comment} \begin{remark} In \cite{mclean2007lefschetz}, for $n>4$, M.McLean constructed infinitely many exotic Stein structures $\mathbb{C}_k^n$ on $\mathbb{C}^n$, which are Weinstein homotopic to $(W_k,\langlembda_k,{\partial}si_k)$. So the boundaries of $\mathbb{C}_k^n$ are all ADC, and non-contactomorphic. \end{remark} \end{comment} We will need the fact that any almost Weinstein domain admits a flexible Weinstein structure in the same almost symplectic class (See Section~\ref{ss:Weinstein handle & contact surgery}). Moreover, if a contact manifold admits a flexible filling, then it is asymptotically dynamically convex, as stated in the following lemma: \begin{lemma}[Corollary 4.1 \cite{lazarev2016contact}]\langlebel{ADC boundary} If $(Y^{2n-1},\mathrm{x}i),n\geq 3$, has a flexible filling, then $(Y,\mathrm{x}i)$ is asymptotically dynamically convex. \end{lemma} \begin{proof}[Proof of Theorem~\ref{big theorem}] Let $(W,\langlembda,{\partial}si)$ be in Theorem~\ref{main technical theorem}. There is a flexible Weinstein domain $(W_1,\langlembda_1,{\partial}si_1)$ that is almost symplectomorphic to $W$. Let \[(W_i,\langlembda_i,{\partial}si_i):=\underbrace{(W,\langlembda,{\partial}si)\natural(W,\langlembda,{\partial}si)\natural\cdots\natural(W,\langlembda,{\partial}si)}_i\natural\underbrace{(W_1,\langlembda_1,{\partial}si_1)\natural(W_1,\langlembda_1,{\partial}si_1)\natural\cdots\natural(W_1,\langlembda_1,{\partial}si_1)}_{k-i}. \] That is, $W_i$ is the boundary connect sum of $i$ copies of $W$ and $k-i$ copies of the flexibilization of $W_1$. The boundary connect sum is equivalent to attaching a Weinstein 1-handle, so $W_i$ is a Weinstein domain. 
By construction, they are all almost symplectomorphic, see subsection~\ref{sec: formal structures}, and their boundaries are in the same almost contact class by lemma~\ref{lemma for almost contact}. Theorem~\ref{from weinstein to stein} allows us to deform a Weinstein structure into a Stein structure, which is denoted by $V_i$. The last condition is obvious. There's only the third condition left to be verified. Indeed, we have $({\partial} V_i,\langlembda_i)$ is asymptotically dynamically convex. Furthermore, we have: \begin{prop}\langlebel{new prop} $|I_+({\partial} W_i)|\neq |I_+({\partial} W_j)|,i\neq j.$ \end{prop} The proof of Proposition~\ref{new prop} will be defer to Subsection~\ref{subsection:effect of conatct surgery}. \end{proof} \section{Background}\langlebel{section:background} \subsection{Conventions and notation} (See Section~\ref{section:background} for detailed definitions.) Let $\langlembda$ be a Liouville 1-form on a Liouville manifold $W$. $$ d\langlembda(\cdot,J\cdot)=g_J \qquad\text{(Riemannian metric)}, $$ $$ d\langlembda(X_H,\cdot)=-dH,\qquad X_H=J\nabla H \qquad\text{(Hamiltonian vector field)}, $$ $$ \mathcal{L}\widehat W:=C^\infty(S^1,\widehat W), \qquad S^1=\mathbb{R}/\mathbb{Z} \qquad \text{(loop space)}, $$ $$ A_H:\mathcal{L}\widehat W\to \mathbb{R},\qquad A_H(x):=\int_{S^1}x^*\langlembda - \int_{S^1}H(t,x(t))\,dt \qquad\text{(action)}, $$ $$ \nabla A_H(x)=-J(x)(\dot x-X_H(t,x)) \qquad\text{($L^2$-gradient)}, $$ $$ u:\mathbb{R}\to\mathcal{L} W,\qquad {\partial}_su=\nabla A_H(u(s,\cdot)) \qquad\text{(gradient line)} $$ \begin{equation}\langlebel{eq:Floer} \Longleftrightarrow {\partial}_s u + J(u)({\partial}_t u-X_H(t,u))=0 \qquad\text{(Floer equation)}, \end{equation} $$ \mathcal{P}(H):=\mbox{Crit}(A_H) = \{\text{$1$-periodic orbits of the Hamiltonian vector field $X_H$}\} , $$ \[ \textrm{For each } h\in H_1(W), \mathcal{P}^h(H):=\mbox{Crit}_h(A_H) = \{ x\in \mathcal{P}(H) \big| [x]=h\in[S^1\to W] \} \] $$ \hspace{.5cm} \mathcal M(x_-,x_+;H,J)=\{u:\mathbb{R}\times S^1\to W \mid {\partial}_su= \nabla A_H(u(s,\cdot)),\ u({\partial}m\infty,\cdot)=x_{\partial}m\}/\mathbb{R} $$ $$ \mbox{(moduli space of Floer trajectories connecting $x_{\partial}m\in\mathcal{P}(H)$)}, $$ \[ \hspace{.5cm} \mathcal M_h(x_-,x_+;H,J)=\{u:\mathbb{R}\times S^1\to W \mid {\partial}_su= \nabla A_H(u(s,\cdot)),\ u({\partial}m\infty,\cdot)=x_{\partial}m\in \mathcal{P}^h(H)\}/\mathbb{R} \] $$ \mbox{(moduli space of Floer trajectories connecting $x_{\partial}m\in\mathcal{P}^h(H)$)}, $$ $$ \dim \mathcal{M}(x_-,x_+;H,J)=\mu_{CZ}(x_+)-\mu_{CZ}(x_-)-1, $$ $$ A_H(x_+)-A_H(x_-) = \int_{\mathbb{R}\times S^1}|{\partial}_su|^2ds\,dt = \int_{\mathbb{R}\times S^1}u^*(d\langlembda-dH\wedge dt). $$ Here the formula expressing the dimension of the moduli space in terms of Conley-Zehnder indices is to be understood with respect to a symplectic trivialization of $u^*TW$. Let $\mathbb{K}$ be a {\color{black}field} and $a<b$ with $a,b\notin\mbox{Spec}({\partial} W,\alpha)$. We define the filtered Floer chain groups with coefficients in $\mathbb{K}$ by \[ SC_*^{<b}(H) := \bigoplus_{\scriptsize \begin{array}{c} x\in \mathcal{P}(H) \\ A_H(x)<b \end{array}} \mathbb{K}\cdot x,\qquad SC_*^{(a,b)}(H) = SC_*^{<b}(H)/SC_*^{<a}(H), \] with the differential $d:SC_*^{(a,b)}(H)\to SC_{*-1}^{(a,b)}(H)$ given by \[ d x_+=\sum_{\mu_{CZ}(x_-)=\mu_{CZ}(x_+)-1} \#\mathcal M(x_-,x_+;H,J)\cdot x_-. \] Here $\#$ denotes the signed count of points with respect to suitable orientations. 
We think of the cylinder $\mathbb{R}\times S^1$ as the twice punctured Riemann sphere, with the positive puncture at $+\infty$ as incoming, and the negative puncture at $-\infty$ as outgoing. This terminology makes reference to the corresponding asymptote being an input, respectively an output for the Floer differential. Note that the differential decreases both the action $A_H$ and the Conley-Zehnder index. The filtered Floer homology is now defined as $$ SH_*^{(a,b)}(H) = \ker d/\mbox{im}\, d. $$ Note that for $a<b<c$ the short exact sequence $$ 0 \to SC_*^{(a,b)}(H) \to SC_*^{(a,c)}(H) \to SC_*^{(b,c)}(H) \to 0 $$ induces a {\em tautological exact triangle} \begin{equation}\langlebel{eq:taut1} SH_*^{(a,b)}(H) \to SH_*^{(a,c)}(H) \to SH_*^{(b,c)}(H) \to SH_*^{(a,b)}(H)[-1]. \end{equation} {\color{black}\noindent {\bf Remark.} We will suppress the field $\mathbb{K}$ from the notation. As noted in the Introduction, the definition can also be given with coefficients in a commutative ring. In this paper, $\mathbb{K}=\mathbb{Z}_2$.} \begin{notation*} Let $\mathbf{a}=(a_0,a_1,\cdots,a_n)$ be an $(n+1)$-tuple of integers $a_i>1,\mathbf{z}:=(z_0,z_1,\cdots,z_n)\in \mathbb{C}^{n+1}$, and set $f(\mathbf{z}):=z_0^{a_0}+z_1^{a_1}+\cdots+z_n^{a_n}$, and let $B(s)$ to be the closed ball of radius $s$. \[V_\mathbf{a}(t):=\{(z_0,z_1,\cdots,z_n)\in \mathbb{C}^{n+1}| f(\mathbf{z})=t\}.\] We will often suppress $\mathbf{a}$ from the notation. Let \[X_t^s=V(t)\cap B(s). \] and let $\beta\in C^\infty(\mathbb{R})$ be a smooth monotone decreasing cut-off function with $\beta(x)=1,x\leq \frac{1}{4}$ and $\beta(x)=0,x\geq \frac{3}{4}$, \[U_\mathbf{a}(\epsilon):=\{\mathbf{z}\in \mathbb{C}^{n+1}|z_0^{a_0}+\cdots+z_n^{a_n}=\epsilon\cdot\beta(||\mathbf{z}||^2)\}.\] Likewise $\mathbf{a}$ will often be suppressed. Moreover let \[W_\epsilon^s=U(\epsilon)\cap B(s).\] \end{notation*} \subsection{Symplectic and contact structures} A \textit{symplectic manifold} $(M,\omega) $ is a smooth $2n$-dimensional manifold $M$ together with a nondegenerate, closed 2-form $\omega$. A function $H\in C^{\infty}(M)$ on a symplectic manifold $(M,\omega)$ is called \textit{Hamiltonian}. We define its \textit{Hamiltonian vector field} $X_H$ via \[dH=-\iota_{X_H}\omega=-\omega(X_H,\cdot)=\omega(\cdot,X_H).\] A \textit{contact manifold} $\Sigma$ is a smooth $(2n-1)$-dimensional manifold together with a completely non-integrable smooth hyperplane distribution $\mathrm{x}i \in T\Sigma$. The distribution is called a \textit{contact structure}. It can be locally defined as $\mathrm{x}i=\ker\alpha$ for some local 1-form $\alpha$ such that $\alpha\wedge (d\alpha)^{n-1}\neq 0$ pointwise. If $\alpha$ is globally defined, then $\alpha$ is called a \textit{contact form}. We will always assume $\alpha$ is globally defined. Under this assumption $\alpha\wedge (d\alpha)^{n-1}$ gives rise to a volume form and hence $\Sigma$ is orientable. Once an orientation is chosen we require that $\alpha\wedge (d\alpha)^{n-1} > 0$. Associated with a contact form $\alpha$ one has a \textit{Reeb vector field} $R$, uniquely defined by the equations \begin{align*} &\iota_R(d\alpha)=0, \\ &\iota_R \alpha=1. \end{align*} Clearly $R$ is transverse to $\mathrm{x}i$. If we have two different forms $\alpha, \alpha^{'}$ which define the same contact structure, then we can find a nowhere vanishing function $f$ such that $\alpha^{'}=f\cdot\alpha$. Indeed, $f=\alpha^{'}(R)$. 
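A standard basic example: on $\mathbb{R}^{2n-1}$ with coordinates $(x_1,y_1,\ldots,x_{n-1},y_{n-1},z)$, the $1$-form \[ \alpha_{\mathrm{std}}=dz-\sum_{i=1}^{n-1}y_i\,dx_i, \qquad d\alpha_{\mathrm{std}}=\sum_{i=1}^{n-1}dx_i\wedge dy_i, \] satisfies $\alpha_{\mathrm{std}}\wedge(d\alpha_{\mathrm{std}})^{n-1}\neq 0$, so it is a contact form, and its Reeb vector field is $R=\partial_z$, since $\iota_{\partial_z}d\alpha_{\mathrm{std}}=0$ and $\alpha_{\mathrm{std}}(\partial_z)=1$.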
The flow of a Reeb vector field is called Reeb flow, and closed trajectories of Reeb flow are called the \textit{Reeb orbits}. The \textit{action} of a Reeb orbit $\gamma$ is defined as \[A(\gamma):=\int_{S^1}\gamma^{*}\alpha \] Note that $A(\gamma)$ is always positive and equals the period of $\gamma$. The \textit{spectrum} $spec(\Sigma,\alpha)$ is the set of actions of all Reeb orbits of $\alpha$. We will need the following definition for Reeb trajectories which is part of a closed Reeb orbit. \begin{defn} $\gamma:[0,T]\to X$ is called a \textit{fractional Reeb orbit} for contact manifold $(X,\mathrm{x}i)$ if there is a closed Reeb orbit $\gamma_0$ of $(X,\mathrm{x}i)$ such that $\gamma(t)=\gamma_0(t), t\in [0,T]$. \end{defn} We say that a Reeb orbit $\gamma$ of $\alpha$ is \textit{non-degenerate} if the linearized Reeb flow along $\gamma$ from $\mathrm{x}i_p$ to itself for some $p \in \gamma$ has no eigenvalue 1. Moreover we say that a contact form is \textit{non-degenerate} if all Reeb orbits of $\alpha$ are non-degenerate. We can always assume a contact form is non-degenerate after a $\mathcal{C}^0$-small perturbation, since a generic contact form is non-degenerate. Notice that when $\alpha$ is non-degenerate, $spec(\Sigma,\alpha)$ is a discrete subspace of $\mathbb{R}^{+}$. \subsection{Liouville and Weinstein domains}\langlebel{ssec: domains} A \textit{Liouville domain} is a pair $(W^{2n}, \langlembda)$ such that \begin{itemize} \item $W^{2n}$ is a compact manifold with boundary, \item $d\langlembda$ is a symplectic form on $W$ , \item the Liouville field $X_\langlembda$, defined by $i_{X}d\langlembda = \langlembda$, is outward transverse along ${\partial}artial W$. \end{itemize} Let $\alpha:=\langlembda|_{{\partial}artial W}$ be a contact one-form on ${\partial}artial W$. The negative flow of $X$ gives rise to a collar: \begin{align*} &{\partial}hi: (1-\epsilon,1]\times{\partial}artial W\rightarrow W, \\ &{\partial}hi^*\langlembda = r\alpha,\quad {\partial}hi^*X=r{\partial}artial_r. \end{align*} We can attach an infinite cone to it, which is called the \textit{completion} of $(W,\langlembda)$: \begin{align*} &\widehat{W}=W\cup_{{\partial}artial W}([1,\infty)\times {\partial}artial W), \qquad \hat{\langlembda}|_W=\langlembda\\ &\hat{\langlembda}|([1,\infty)\times {\partial}artial W)= r\alpha,\quad \hat{X}|([0,\infty)\times {\partial}artial W)= r{\partial}artial_r, \quad\hat{\omega}=d\hat{\langlembda}. \end{align*} A \textit{Liouville isomorphism} between domains $W_0,W_1$ is a diffeomorphism ${\partial}si: \widehat{W_0}\to\widehat{W_1}$ satisfying ${\partial}si^*\hat{\langlembda_1}=\hat{\langlembda_0}+df$, for some $f$ compactly supported. We also say that $\widehat{W_0}$ and $\widehat{W_1}$ are Liouville isomorphic. Clearly ${\partial}si$ is compatible with the Liouville flow at infinity. \begin{defn} A Liouville domain $(W,\langlembda)$ is called $G$-equivariant if a group $G$ acts on $W$ and $\langlembda$ is $G$-invariant, i.e, $g^{*}\langlembda=\langlembda,\forall g\in G$. A diffeomorphism $f$ between two $G$-equivariant Liouville domains is called $G$-equivariant if the following diagram commutes, for all $g \in G$: \[\begin{tikzcd} (W_1,\langlembda_1) \arrow[r, "f"] \arrow[d, "g*"] & (W_0,\langlembda_0) \arrow[d, "g*" ] \\ (W_1,\langlembda_1) \arrow[r, "f" ] & (W_0,\langlembda_0) \end{tikzcd} \] \end{defn} \begin{remark} A manifold $M$ is called $G$-equivariant if $G$ acts on it. 
\end{remark} \begin{prop}[Proposition 11.8 \cite{cieliebak2012stein}]\langlebel{Liouville homotopy theorem of Cieliebak and Eliashberg} Let $W$ be a compact symplectic manifold with contact type boundary and $\langlembda_t,t\in [0,1]$ be a homotopy of Liouville forms on $W$. Then there exits a diffeomorphism of the completions $f:\widehat{W_0}\to \widehat{W_1}$ such that $f^*\hat{\langlembda_1}-\hat{\langlembda_0}=dg$ where $g$ is a compactly supported function. \end{prop} We have an immediate corollary for Proposition~\ref{Liouville homotopy theorem of Cieliebak and Eliashberg}: \begin{corollary}\langlebel{liouville homotopy} Let $(\langlembda_t)_{0\leq t\leq 1}$ be a family of ($G-$equivariant) Liouville structures on $W$. Then all the $(W,\langlembda_t)$ ($(\widehat{W},\hat{\langlembda_t})$) are mutually ($G-$equivariantly) Liouville isomorphic. \end{corollary} A \textit{Weinstein domain} is a triple $(W^{2n}, \langlembda, {\partial}hi)$ such that \begin{itemize} \item $(W, \langlembda)$ is a Liouville domain, \item ${\partial}hi: W \rightarrow \mathbb{R}$ is an exhausting Morse function with ${\partial}artial W$ being a regular level set, \item $X_\langlembda$ is a gradient-like vector field for ${\partial}hi$. \end{itemize} Since $W$ is compact and ${\partial}hi$ is an exhausting Morse function with ${\partial}artial W$ as a regular level set, ${\partial}hi$ has finitely many critical points. Liouville and Weinstein \textit{cobordisms} are defined similarly. If a contact manifold $(Y, \mathrm{x}i)$ is contactomorphic to ${\partial}artial (W, \langlembda)$, then we say that $(W, \langlembda)$ is a Liouville or Weinstein \textit{filling} of $(Y, \mathrm{x}i)$. \begin{defn}\langlebel{stein domain} A \textit{Stein manifold} $(M,J,{\partial}hi)$ is a complex manifold $(M,J)$ with an exhausting plurisubharmonic function ${\partial}hi : M\to \mathbb{R} $. A manifold of the form ${\partial}hi^{-1}((-\infty,c])$ is called a \textit{Stein domain}, where c is a regular value of ${\partial}hi$. \end{defn} We also have the following famous theorem by Eliashberg: \begin{theorem}[Theorem 1.1 \cite{cieliebak2012stein}]\langlebel{from weinstein to stein} Given a Weinstein structure $\mathfrak{M}=(\omega,X,{\partial}hi)$ on $V$, there exists a Stein structure $(J,{\partial}hi)$ on $V$ such that $\mathfrak{M}(J,{\partial}hi)$ is Weinstein homotopic to $\mathfrak{M}$ with fixed ${\partial}hi$. \end{theorem} \subsection{Symplectic homology}\langlebel{subsec: symhom} This section is mainly taken out from \cite{lazarev2016contact}. The convention used here agrees with \cite{cieliebak2018symplectic} . \subsubsection{Admissible Hamiltonians and almost complex structures}\langlebel{sss:Ad Hamiltonian} Let $\mathcal{H}_{std}(W)$ denote the class of \textit{admissible Hamiltonians}, which are functions on $\widehat{W}$ defined up to smooth approximation as follows: \begin{itemize} \item $H^s \equiv 0$ in $W$, \item $H^s$ is linear in $r$ with slope $s \not\in Spec(Y, \alpha)$ in $\widehat{W}\setminus W = Y\times [1, \infty)$. \end{itemize} To be more precise, $H$ is a $\mathcal{C}^2$-small Morse function in $W$ and $H=h(r)$ in $\widehat{W}\setminus W$ for some function $h$ such that \begin{itemize} \item $h$ is increasing convex in a small region $(Y \times [1, 1+\epsilon], r\alpha)$ of $Y$, \item $h$ is linear with slope $s$ outside this region. \end{itemize} For $H \in \mathcal{H}_{std}(W)$, the Hamiltonian vector field $X_H$ is defined by $d\hat\langlembda(\cdot, X_H ) = dH$. 
The time-1 orbits of $X_H$ are called the Hamiltonian orbits of $H$. Depending on their location in $\widehat{W}$, we can classify them into two categories: \begin{itemize} \item In the interior of $W$, the only Hamiltonian orbits are constants corresponding to critical points of $H|_W$. \item In $\widehat{W}\setminus W$, we have $X_H = h'(r)R_\alpha$, where $R_\alpha$ is the Reeb vector field of $(Y, \alpha)$. Therefore all Hamiltonian orbits lie on level sets of $r$ and corresponding to some Reeb orbit of $\alpha$ with period $h'(r)$. \end{itemize} The slope $s$ of $H$ at infinity is not in $Spec(Y, \alpha)$, as a consequence, every non-constant Hamiltonian orbit lies in a small neighborhood of $Y$ in $\widehat{W}$. After a $\mathcal{C}^2$-small time-dependent perturbation of $H$, the orbits become \textit{non-degenerate}. These non-degenerate orbits also lie in a neighborhood of $W$ and so their number is finite. An almost complex structure $J$ is \textit{cylindrical} on the symplectization $(Y \times (0, \infty), r\alpha)$ if \begin{itemize} \item $J$ is independent of $r$, \item $J(r{\partial}artial_r) = R_\alpha$, \item $J$ preserves $\mathrm{x}i = \ker \alpha,$ $J|_\mathrm{x}i$, \item $J$ is compatible with $d(r\alpha)|_\mathrm{x}i$. \end{itemize} Now we define the \textit{admissible} almost complex structures $J$ on $\widehat{W}$, denoted by $\mathcal{J}_{std}(W)$: \begin{itemize} \item $J$ is cylindrical on $\widehat{W}\backslash W = (Y \times [1, \infty), r\alpha)$ \item $J$ is compatible with $\omega$ on $\widehat{W}$. \end{itemize} \subsubsection{Floer complex} For $H \in \mathcal{H}_{std}(W), J\in \mathcal{J}_{std}(W)$, the Floer complex $SC(W,\langlembda, H, J)$ is generated as a free abelian group by Hamiltonian orbits of $H$. In this paper we need to consider all Hamiltonian orbits, as opposed to only the contractible ones, see \cite{wendlbeginner}. First, let's fix a reference loop \[l_h:S^1\to W \] with $[l_h]=h\in H_1(W,\mathbb{Z})$. Denote by $\mathcal{P}^h(H)$ the set of all $1-$periodic orbits of $X_{H_t}$ in the homology class $h$. \begin{comment} Denote by $\Gamma(W)$ the set of all pairs $(\gamma,[\sigma])$, where $\gamma\in C^\infty(S^1,W)$ and $[\sigma]$ is an equivalence class of smooth maps $\sigma:\Sigma \to W$ with $\Sigma$ a compact oriented surface whose two oriented boundary components are ${\partial}artial \Sigma ={\partial}artial_1\Sigma \cup (-{\partial}artial_0 \Sigma), \sigma|_{{\partial}artial_1\Sigma}=\gamma,\sigma|_{{\partial}artial_1\Sigma}=l_[\gamma]$, and we define \[\sigma\sim \sigma' \Leftrightarrow [\sigma]-[\sigma']=0\in H_2(W)/\mathcal{R}. \] We now define the symplectic action functional \[\mathcal{A}_H:\Gamma(W)\to \mathbb{R}:(\gamma,[\sigma])\mapsto \int_{\Sigma}\sigma^*\omega-\int_{S^1}H(t,\gamma(t))dt, \] whose linearization at $\tilde{\gamma}=(\gamma,[\sigma])$ is \[d\mathcal{A}_H(\tilde{\gamma})\eta=\int_{S^1} \omega(\eta,\dot{\gamma}-X_{H_t}(\gamma))dt. \] The critical points of $\mathcal{A}_H$ are thus the pairs $(\gamma,[\sigma])$ for which $\gamma$ is a $1-$periodic orbit. We'll denote all these orbits by \[\widetilde{\mathcal{P}^h}(H)=\{(\gamma,[\sigma]) \in Crit(\mathcal{A}_H)| [\gamma]=h. \} \] \begin{remark} We can modify the definition to that the maps $\sigma$ are homotopies $S^1\times [0,1]\to W$ between $l_{[\gamma]}$ and $\gamma$. \end{remark} There is a natural action of $H_2(W)/\mathcal{R}$ on $\Gamma(W)$ which preserves Crit($\mathcal{A}_H$). 
Indeed, we define \[A\cdot(\gamma,[\sigma])=(\gamma,A+[\sigma]), \] then we have $\mathcal{A}_H(A\cdot \tilde{\gamma})=\mathcal{A}_H(\tilde{\gamma})+\omega(A).$ In the case $H_2(W)=0$, the Floer chain complex split into $\bigoplus SC^h(W,\langlembda,H,J)$ with respect to $H_1(W)$ class. To define a grading on $SC^h(W,\langlembda,H,J)$, we must choose a symplectic trivialization of $TW$ along the reference loop $l_h$. Note that this choice is arbitrary and the grading will generally depend on it, see subsection\ref{subsection:index}. Once such trivialization is chosen, we have \[\mu_{CZ}(A\cdot \tilde{\gamma})=\mu_{CZ}(\tilde{\gamma})+2c_1(A). \] so we can have an integer grading when $H_2(W,\mathbb{Z})=0$. \end{comment} For a fixed reference class $h$, we will often write the chain complex generated as a free abelian group by orbits in $\mathcal{P}^h(H)$ as $SC^h(H, J)$ when we do not need to specify $(W, \langlembda)$. We will suppress $h$ when it causes no confusion. The differential is given by counts of Floer trajectories. In particular, for two Hamiltonian orbits $x_-, x_+$ of $H$, let $\widehat{\mathcal{M}}(x_-, x_+; H, J)$ be the moduli space of smooth maps $u: \mathbb{R}\times S^1 \rightarrow \widehat{W}$ such that $ \underset{s\rightarrow {\partial}m \infty}{\lim} u(s, \cdot) = x_{\partial}m$ and $u$ satisfies Floer's equation \begin{equation} {\partial}artial_s u + J({\partial}artial_t u - X_H) =0. \end{equation} Here $s, t$ denotes the $\mathbb{R},\, S^1$ coordinates on $\mathbb{R}\times S^1$ respectively. Since the Floer equation is $\mathbb{R}$-invariant, there is a free $\mathbb{R}$-action on $\widehat{\mathcal{M}}(x_-, x_+; H, J)$ for $x_- \ne x_+$. Let $\mathcal{M}(x_-, x_+: H, J)$ be the quotient by this $\mathbb{R}$-action, that is, $\widehat{\mathcal{M}}(x_-, x_+; H, J)/ \mathbb{R}$. After a small time-dependent perturbation of $(H,J)$, $\mathcal{M}(x_-, x_+, H, J)$ is a smooth finite-dimensional manifold. A maximal principle ensures us that Floer trajectories will not escape to infinity in $\widehat{W}$. Let $V \subset (W, \langlembda_W)$ be a \textit{Liouville subdomain}, that is, $(V, \langlembda_W|_V)$ is a Liouville domain and $(Z,\alpha_Z) = {\partial}artial (V, \langlembda)$ a contact manifold. Since $V$ is a Liouville subdomain, there is a collar of $Z$ in $W$ that is symplectomorphic to $(Z \times [1, 1+\delta], d(t\alpha_Z))$ for some small $\delta$. We have the following lemma: \begin{lemma} \cite{abouzaid2010open}\langlebel{lem: maximal_principle} Consider $H: \widehat{W} \rightarrow \mathbb{R}$ such that $H = h(r)$ is increasing near $Z$, where $r$ is the cylindrical coordinate and $J \in \mathcal{J}_{std}(W)$ is cylindrical near $Z$. If both asymptotic orbits of a $(H, J)$-Floer trajectory $u: \mathbb{R}\times S^1 \rightarrow \widehat{W}$ are contained in $V$, then $u$ is contained in $V$. \end{lemma} Apply this result to $V = W$, then we can proceed as if $W$ were closed. Therefore $\mathcal{M}(x_-, x_+; H, J)$ has a codimension one compactification by the Gromov-Floer compactness theorem. This implies that $\mathcal{M}_h(x_-, x_+; H, J)$, the zero-dimensional component of $\mathcal{M}(x_-, x_+; H, J)$, is finite and the map ${\partial}: SC(H, J) \rightarrow SC(H, J), $ defined by \[ {\partial} x_+ := \sum_{x_-} \# \mathcal{M}_h(x_-, x_+; H, J)\cdot x_-{\partial}mod2 \] is a differential. Notice that the underlying vector space $SC(H,J)$ depends only on $H$ while the differential ${\partial}$ depends on both $H$ and $J$. 
The resulting homology $HF(H, J)$ is independent of $J$ and compactly supported deformations of $H$. \begin{remark} If $c_1(W, \omega) = 0$, then $HF(H, J)$ has a $\mathbb{Z}$-grading due to the fact that $c_1(W, \omega) = 0$ implies the canonical line bundle of $(W, \omega)$ being trivial. For all our purposes, the canonical line bundle will always be trivial in this paper. Once we fix a global trivialization of this bundle, we can assign to each Hamiltonian orbit $x$ an integer, known as the Conley-Zehnder index $\mu_{CZ}(x)$ (see Subsection~\ref{subsection:index}). \end{remark} Generally speaking, the orbit $x$, Conley-Zehnder index $\mu_{CZ}(x)$ depend on the choice of trivialization of the canonical bundle. For a Hamiltonian orbit corresponding to a critical point $p$ of the Morse function $H|_W$, the Conley-Zehnder index $\mu_{CZ}(p)$ coincides with $n- Ind(p)$, where $Ind(p)$ is the Morse index of $H|_W$ at $p$. \subsubsection{Continuation map}\langlebel{sssec: continuation_map} Although $HF(H, J)$ is independent of $J$ and compactly supported deformations of $H$, $HF(H, J)$ does depend on the slope of $H$ at infinity and therefore is not an invariant of $W$. Indeed, $HF(H, J)$ only sees Reeb orbits of period less than the slope of $H$ at infinity. To incorporate all Reeb orbits, we have to consider Hamiltonians with arbitrarily large slope. More formally, this can be done by considering continuation maps between $SC(H, J)$ for different $H$. Given $H_-, H_+ \in \mathcal{H}_{std}(W)$, let $H_s \in \mathcal{H}_{std}(W), s\in \mathbb{R},$ be a family of Hamiltonians such that $H_s = H_-$ for $s \ll 0$ and $H_s = H_+$ for $s\gg 0$. Similarly, let $J_s \in \mathcal{J}_{std}(W)$ interpolate between $J_-, J_+$. For Hamiltonian orbits $x_-, x_+$ of $H_-, H_+$ respectively, let $\mathcal{M}(x_-, x_+; H_s, J_s)$ be the moduli space of parametrized Floer trajectories, i.e. maps $u: \mathbb{R}\times S^1 \rightarrow \widehat{W}$ $$ {\partial}artial_s u + J_s({\partial}artial_t u - X_{H_s}) = 0 $$ To ensure that parametrized Floer trajectories do not escape to infinity, we have to use a maximal principle. For this principle to hold, it is crucial that the homotopy of Hamiltonian functions is decreasing, that is, ${\partial}artial H_s/ {\partial}artial s \le 0$. If $J_s$ is $s$-independent, we use the following parametrized version of `no escape' Lemma \ref{lem: maximal_principle}, which is proven in Proposition 3.1.10 of \cite{gutt2015positive}. If $J_s$ does depend on $s$ and $V = W$, then we use the maximal principle from \cite{seidel2006biased}. \begin{lemma}[\cite{gutt2015positive}, \cite{seidel2006biased}]\langlebel{lem: maximal_principle_param} Consider a decreasing homotopy $H_s: \widehat{W} \rightarrow \mathbb{R}$ such that $H_s = h_s(t)$ is increasing in $t$ near $Z = {\partial}artial V$ and $H_s|_Z$ is $s$-independent; let $J \in \mathcal{J}_{std}(W)$ be cylindrical near $Z$. If $u: \mathbb{R}\times S^1 \rightarrow \widehat{W}$ is a $(H_s, J)$-Floer trajectory with both asymptotes in $V$, then $u$ is contained in $V$. If $V = W$, the same claim also holds for a homotopy $J_s \in J_{std}(W)$ that is cylindrical near $Z$. \end{lemma} By applying the second part of Lemma \ref{lem: maximal_principle_param}, we can conclude that $\mathcal{M}(x_-, x_+; H_s, J_s)$ has a codimension one compactification. 
The continuation map \[ {\partial}hi_{H_s, J_s}: SC(H_+, J_+) \rightarrow SC(H_-, J_-) \] is defined by \[ {\partial}hi_{H_s, J_s}(x_+) = \sum_{x_-} \#\mathcal{M}_h(x_-, x_+; H_s, J_s) x_-{\partial}mod 2. \] This map is independent of $J_s$ and $H_s$, up to chain homotopy. Notice that there is no $\mathbb{R}$-action since the parametrized Floer equation is not $\mathbb{R}$-invariant. As a result, ${\partial}hi_{H_s, J_s}$ is \textit{degree-preserving}. Now, we can define symplectic homology as the direct limit (taken over continuation maps ${\partial}hi_{H_s, J_s}: HF(H_+, J_+) \rightarrow HF(H_-, J_-)$): \[ SH(W, \langlembda):= \lim_{\rightarrow} HF(H, J). \] It is worth mentioning that $SH(W, \langlembda)$ depends only on the symplectomorphism type of $(\widehat{W}, d\hat{\langlembda})$ \cite{seidel2006biased}. \subsection{Positive symplectic homology}\langlebel{ssec: postive_sym_hom} For a small time-dependent perturbation of $H\in\mathcal{H}_{std}(W)$, the action functional $A_H: C^\infty(S^1, \widehat{W}) \rightarrow \mathbb{R}$ is $$ A_H(x) := \int_{S^1} x^* \langlembda - \int_{S^1} H(x(t)) dt. $$ Under our conventions, the Floer equation is the \textit{positive} gradient flow of the action functional which means if $u\in \mathcal{M}(x_-, x_+)$ is a non-constant Floer trajectory, then $A_H(x_+) > A_H(x_-)$. Let $SC^{<a}(H, J)$ be generated by orbits of action less than $a$. Since action increases along Floer trajectories, the differential decreases action and therefore $SC^{<a}(H, J)$ is a subcomplex of $SC(H, J)$. we define \[SC^{>a}(H, J):=SC(H, J)/ SC^{<a}(H, J). \] For $H\in \mathcal{H}_{std}(W)$, the constant orbits corresponding to Morse critical points $p \in W$ have action $-H(p)$. The non-constant orbits corresponding to Reeb orbits have \emph{positive} action close to the action of the corresponding Reeb orbit. Indeed, for sufficiently small $\epsilon$, $SC^{< \epsilon}(H, J)$ corresponds to the Morse complex of $-H|_W$(with a grading shift). To be specific, \[H_k(SC^{< \epsilon}(H, J)) \cong H^{n-k}(W; \mathbb{Z}). \] Let's define $SC^+(H, J):=SC(H, J)/SC^{< \epsilon}(H, J)$ to be the quotient complex and $HF^+(H, J)$ the resulting homology. We can also define $HF^+(W)$ by a direct limit construction. More precisely, suppose $H_s$ satisfies: \begin{itemize} \item $H_s$ is a decreasing homotopy, \item $H_s = H_+,\, s\gg 0$, \item $H_s = H_-,\, s\ll 0$. \end{itemize}Then the continuation Floer trajectories are also action increasing and induce chain map \[{\partial}hi_{H_s, J_s}^+: SC^+(H_+, J_+) \rightarrow SC^+(H_-, J_-).\] We define $SH^+(W)$ by \begin{equation} SH^+(W, \langlembda):= \lim_{\rightarrow} HF^+(H, J). \end{equation} The direct limit is taken over the continuation maps \[ {\partial}hi_{H_s, J_s}^+: HF^+(H_+, J_+) \rightarrow HF^+(H_-, J_-) \] on homology. $SC^+(H, J)$ is essentially dependent only on $(Y, \alpha)$ and not on the interior $(W, \langlembda)$. This is due to the fact that $SC^+(H,J)$ is generated by non-constant Hamiltonian orbits, which live in the cylindrical end of $W$ and correspond to Reeb orbits of $(Y, \alpha)$. On the other hand, the differential for $SC^+(H, J)$ may depend on the filling $W$ of $(Y, \alpha)$ since Floer trajectories between non-constant orbits may go into the filling, so different Liouville fillings of $(Y, \mathrm{x}i)$ might have different $SH^+$. 
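The action values quoted above can be verified directly; the following is only a sketch under the conventions of this subsection. For a constant orbit $x\equiv p$ at a critical point $p$ of $H|_W$ we have $x^*\hat{\lambda}=0$, so
\[
A_H(x)=-\int_{S^1}H(p)\,dt=-H(p).
\]
For a non-constant orbit $x$ on the level $\{r=r_0\}$ of the cylindrical end we have $\dot{x}=h'(r_0)R_\alpha$, hence
\[
A_H(x)=\int_{S^1}\hat{\lambda}(\dot{x})\,dt-\int_{S^1}h(r_0)\,dt=r_0\,h'(r_0)-h(r_0),
\]
which is positive and, since the non-constant orbits occur near $r_0=1$ where $h$ is small, is close to $h'(r_0)$, the period of the underlying Reeb orbit.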
The short exact sequence on chain-level \begin{equation} 0 \rightarrow SC^{< \epsilon}(H, J) \rightarrow SC(H, J) \rightarrow SC^+(H, J) \rightarrow 0 \end{equation} induces ``tautological" long exact sequence in homology \begin{equation} \cdots \rightarrow H^{n-k}(W; \mathbb{Z}) \rightarrow SH_k(W, \langlembda) \rightarrow SH_k^+(W, \langlembda) \rightarrow H^{n-k+1}(W; \mathbb{Z})\rightarrow \cdots . \end{equation} \begin{comment} Viterbo \cite{viterbo1999functors} showed that if $V \subset W$ is a Liouville subdomain, then there is a \textit{transfer map} $SH(W) \rightarrow SH(V)$. To define the transfer map, we introduce a new class of \textit{step} Hamiltonians and almost complex structures. Since $V$ is a Liouville subdomain of $W$, there is a collar $U$ of $(Z, \alpha_Z) = ({\partial}artial V, \langlembda_V|_{{\partial}artial V})$ in $W\backslash V$ such that $(U, \omega_W)$ is symplectomorphic to $(Z \times [1, 1+\epsilon_V], d(t \alpha_Z))$. Let $\mathcal{H}_{step}(W, V)$ denote the class of smooth functions $H$ on $\widehat{W}$ defined up to smooth approximation as follows: \begin{itemize} \item $H \equiv 0$ in $V$ \item $H$ is linear in $t$ with slope $s_V$ in $U$ \item $H \equiv s_V \epsilon_V$ in $W \backslash ( V \cup U)$ \item $H$ is linear in $r$ with slope $s_W$ in $\widehat{W} \backslash W = Y \times [1, \infty)$. \end{itemize} See Figure \ref{fig: step_hamiltonian}. More precisely, $H$ is a $C^2$-small Morse function in $V$, $C^2$-close to $s_V\epsilon_V$ in $W\backslash (V\cup U)$, increasing convex in $t$ near $Z \times \{1\}$, increasing concave in $t$ near $Z \times \{1 + \epsilon_V\}$, and increasing convex in $r$ near $Y \times \{1\}$; furthermore, $s_W, s_V \not \in Spec(Z, \alpha_Z) \cup Spec(Y, \alpha_Y)$. \begin{figure} \caption{A step Hamiltonian in $\mathcal{H} \end{figure} As depicted in Figure \ref{fig: step_hamiltonian}, the Hamiltonian orbits of $X_{H}$ fall into five classes which we denote by I, II, III, IV, V: \begin{itemize} \item[(I)] Morse critical points of $H|_V$ \item[(II)] Orbits near ${\partial}artial V = Z \times \{1\}$ corresponding to parameterized Reeb orbits of $(Z, \alpha_Z)$ \item[(III)] Orbits near ${\partial}artial (V\cup U) = Z \times \{1+\epsilon_V\}$ corresponding to parameterized Reeb orbits of $(Z, \alpha_Z)$ \item[(IV)] Morse critical points of $H|_{W\backslash (V\cup U)}$ \item[(V)] Orbits near ${\partial}artial W= Y \times \{1\}$ corresponding to parameterized Reeb orbits of $(Y, \alpha_Y)$. \end{itemize} For $H_{W, V} \in \mathcal{H}_{step}(W, V)$, let $i(H_{W, V})\in \mathcal{H}_{std}(V)$ denote the Hamiltonian obtained by extending $H_{W,V}|_{V\cup U}$ to $\widehat{V}$. Similarly, let $\mathcal{J}_{step}(W,V)$ denote the class of almost complex structures $J \in \mathcal{J}_{std}(W)$ that are cylindrical in $U$, i.e. $J|_U$ preserves $\mathrm{x}i_Z = \ker \alpha_Z$, $J|_{\mathrm{x}i_Z}$ is $t$-independent, and $J(t{\partial}artial_t) = R_{\alpha_Z}$. For $J_{W, V} \in \mathcal{J}_{step}(W, V)$, let $i(J_{W, V})\in \mathcal{J}_{std}(V)$ denote the almost complex structure obtained by extending $J_{W,V}|_{V\cup U}$ to $\widehat{V}$. Next we construct a map \begin{equation}\langlebel{eqn: transfer_portion} SH(H_{W,V}, J_{W, V})\rightarrow SH(i(H_{W , V}), i(J_{W , V})). \end{equation} This relies on the following proposition, some formulation of which is essential in all constructions of the transfer map. 
\begin{prop} \cite{CieliebakOancea} If $\epsilon_V\ge\frac{s_W}{s_V}$, then the subspace $SC^{III, IV, V}(H_{W, V}, J_{W,V})$ generated by III, IV, V orbits is a subcomplex of $SC(H_{W, V}, J_{W,V})$. \end{prop} \begin{remark} In the proof of this proposition, we follow the approach taken in Lemma 4.1 of \cite{cieliebak2018symplectic}, which helps us achieve this particular bound on $\epsilon_V$; this bound will be important in Section \ref{ssec: independencelin}. \end{remark} Assuming that $\epsilon_V \ge \frac{s_W}{s_V}$, let $SC^{I,II}(H_{W, V}, J_{W,V})$ be the quotient complex\\ $SC(H_{W, V}, J_{W,V})/ SC^{III, IV, V}(H_{W, V}, J_{W,V})$. This complex is generated by the I, II orbits, which are precisely the generators of $SH(i(H_{W, V}), i(J_{W,V}))$. Note that $H_{W,V}= h(t)$ is increasing near $Z = {\partial}artial V$, where $I, II$ orbits occur. Therefore, we can apply the `no escape' Lemma \ref{lem: maximal_principle} to $V$, which shows that all $(H_{W,V}, J_{W,V})$-Floer trajectories in $\widehat{W}$ between $I,II$ orbits actually stay in $V$ and hence are $(i(H_{W, V}), i(J_{W,V}))$-trajectories. Therefore, we have an isomorphism $SH^{I,II}(H_{W, V}, J_{W,V}) \cong SH(i(H_{W, V}), i(J_{W,V}))$. So the map in Equation \ref{eqn: transfer_portion} is obtained by projecting to the quotient $SH^{I,II}(H_{W, V}, J_{W,V}) $ and then using this isomorphism. For any $H_W \in \mathcal{H}_{std}(W)$, take $H_{W, V} \in \mathcal{H}_{step}(W, V)$ such that $\epsilon_V \ge \frac{s_W}{s_V}$ and $H_{W,V}, H_W$ differ by the constant $s_V \epsilon_W$ in $\widehat{W}\backslash W$. In particular, $H_{W, V} \ge H_V$ and so there exists a decreasing homotopy $H_s$ from $H_{W,V}$ to $H_W$; also take a homotopy $J_s \in \mathcal{J}_{std}(W)$ from $J_{W,V}$ to $J_W$. Then $(H_s, J_s)$-trajectories define the continuation map $$ {\partial}hi_{H_s, J_s}: SH(H_W, J_W) \rightarrow SH(H_{W, V}, J_{W,V}). $$ Since $H_s$ is decreasing, we can apply the second part of Lemma \ref{lem: maximal_principle_param} to $W$ to ensure that all trajectories stay in $W$ and hence that ${\partial}hi_{H_s}$ is well-defined; in Section \ref{ssec: independencelin}, we will also assume that $H_s = h_s(t)$ is increasing in $t$ and $J_s$ is $s$-independent near $Z$. By composing this continuation map with the projection to the quotient $SH^{I,II}(H_{W, V}, J_{W,V}) \cong SH(i(H_{W, V}), i(J_{W,V}))$, we get a map $$ SH(H_W, J_W) \rightarrow SH(i(H_{W, V}), i(J_{W,V})). $$ This map commutes with continuation maps and hence induces a map on symplectic homology $$ {\partial}hi_{W, V}: SH(W) \rightarrow SH(V), $$ which we call the transfer map. To construct a transfer map on $SH^+$, we first note that since the homotopy $H_s$ is decreasing, action increases along the parametrized Floer trajectories and hence we get a map $$ {\partial}hi_{H_s, J_s}^+: SH^{+}(H_W, J_W) \rightarrow SH^{+}(H_{W,V}, J_{W,V}). $$ We can further quotient out III orbits from $SH^{+}(H_{W,V}, J_{W,V})$; this is because the differential of $SC(H_{W,V}, J_{W,V})$ must map III orbits to III, IV, V orbits by Lemma \ref{lem: rise_above} and IV, V orbits have already been quotiented out in $SH^{>0}(H_{W, V}, J_{W,V})$ since they have negative action (for $\epsilon_V \ge \frac{s_W}{s_V}$). We use $SH^{II}(H_W, J_W)$ to denote the complex obtained by quotienting out III orbits from $SH^{+}(H_{W,V}, J_{W,V})$; $SH^{II}(H_W, J_W)$ is generated by II orbits since these have positive action. 
Also, note that II orbits are precisely the generators of $SH^+(i(H_{W, V}), i(J_{W,V}))$ and by the `no escape' Lemma \ref{lem: maximal_principle} applied to $V$, we have $SH^{II}(H_W, J_W) \cong SH^+(i(H_{W, V}), i(J_{W,V}))$. By composing ${\partial}hi_{H_s, J_s}^+$ with the projection to the quotient $SH^{II}(H_W, J_W) \cong SH^+(i(H_{W, V}), i(J_{W,V}))$, we get a map $$ SH^+(H_W, J_W) \rightarrow SH^+(i(H_{W,V}), i(J_{W,V})). $$ Again this commutes with continuation maps in $W,V$ and hence we get the transfer map for $SH^+$ $$ {\partial}hi_{W,V}^+: SH^+(W) \rightarrow SH^+(V). $$ \end{comment} \subsection{Summary of the TQFT structure on $SH_*(W)$}\langlebel{Subsection TQFT on SH summary} This is taken out of chapter 6 in \cite{ritter2013topological}. For a detailed construction, see chapter 16 of \cite{ritter2013topological}. Note that both the grading and action functional differ from ours by a negative sign, and our homology $SH_*(W^{2n})$ is cohomology $SH^*(W^{2n})$ in \cite{ritter2013topological}. We summarize here the TQFT structure. Suppose we are given: \begin{enumerate} \item\langlebel{TQFTitem1}\langlebel{TQFTitem2} a Riemann surface $(S,j)$ with $p+q$ punctures, with fixed complex structure $j$; \item\langlebel{TQFTitem3} \emph{ends}: a cylindrical parametrization $s+it$ near each puncture, with $j{\partial}artial_s = {\partial}artial_t$; \item\langlebel{TQFTitem1b} $p\geq 1$ of the punctures are \emph{negative} (i.e, we converge to the puncture as $s\to -\infty$), they are indexed by $a=1,\ldots, p$; \item\langlebel{TQFTitem1c} $q\geq 0$ of the punctures are \emph{positive} (i.e, we converge to the puncture as $s\to +\infty$), they are indexed by $b=1,\ldots,q$; \item\langlebel{TQFTitem4} \emph{weights}: constants $A_a,B_b>0$ satisfying $\sum A_a - \sum B_b\geq 0$; \item\langlebel{TQFTitem5} a $1$-form $\beta$ on $S$ with $d\beta \leq 0$, and on the ends $\beta=A_a\,dt$, $\beta=B_b\,dt$ for large $|s|$. \end{enumerate} \begin{remark} Negative/positive parametrizations are modelled on $(-\infty,0]\times S^1$ and $[0,\infty)\times S^1$, respectively. In (\ref{TQFTitem5}), $d\beta\leq 0$ means $d\beta(v,jv)\leq 0$ for all $v\in TS$. By Stokes' theorem, $\sum A_a - \sum B_b = -\int_S d\beta \geq 0$. This forces $p\geq 1$ and (\ref{TQFTitem4}). Subject to this inequality, such $\beta$ exists. See Lemma 16.1 \cite{ritter2013topological}. \end{remark} Fix a Hamiltonian $H:\widehat{W}\to \mathbb{R}$ linear at infinity with $H\geq 0$ (required in Section 16.3 \cite{ritter2013topological}), this defines $X=X_H$. Fix an almost complex structure $J$ on $W$ of contact type at infinity. The moduli space $\mathcal{M}(x_a; y_b;S,\beta)$ of \emph{Floer solutions} consists of smooth maps $u:S\to \widehat{W}$ such that $du-X\otimes \beta$ is $(j,J)$-holomorphic, and $u$ converges on the ends to $1$-orbits $x_a,y_b$ of $A_a H$, $B_b H$ which we call the \emph{asymptotics}. After a small generic $S$-dependent perturbation $J_{z}$ of $J$, $\mathcal{M}(x_a; y_b;S,\beta)$ is a smooth manifold. One can ensure that on the ends $J_z$ does not depend on $z\!=\!s+it\!\in\! S$ for $|s|\gg 0$. Just as for Floer continuations maps (\ref{sssec: continuation_map}), a maximum principle and an a priori energy estimate $E(u) = \sum \mathcal{A}_{B_b H}(y_b)-\sum \mathcal{A}_{A_a H}(x_a)$ holds, so the $\mathcal{M}(x_a; y_b;S,\beta)$ have compactifications by broken Floer solutions: Floer trajectories for $A_a H,B_bH$ can break off at the respective ends. 
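The simplest instance of this data is worth keeping in mind; the following is only a sketch, recorded to connect the TQFT picture with the continuation maps already defined. Take $S=\mathbb{R}\times S^1$ with one negative and one positive puncture ($p=q=1$), weights $A_1\geq B_1$, and
\[
\beta=f(s)\,dt,\qquad f'(s)\leq 0,\qquad f(s)=A_1 \ \text{for } s\ll 0,\qquad f(s)=B_1 \ \text{for } s\gg 0 ,
\]
so that $d\beta=f'(s)\,ds\wedge dt\leq 0$. Floer solutions for this data are exactly continuation solutions for the decreasing homotopy $f(s)H$, and the associated operation is the continuation map $SC_*(B_1H)\to SC_*(A_1H)$ of Section~\ref{sssec: continuation_map}.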
When gradings are defined (\ref{subsection:index}), \begin{align} \dim \mathcal{M}(x_a;y_b;S,\beta) &= -\sum \mu_{CZ}(x_a) +\sum \mu_{CZ}(y_b) +n\chi(S)\\ &=\sum \mu_{CZ}(y_b)-\sum \mu_{CZ}(x_a)+n(2-2g-p-q) .\langlebel{dimension of moduli space of product} \end{align} Define ${\partial}si_S: \otimes_{b=1}^q SC_*(B_b H) \to \otimes_{a=1}^p SC_*(A_a H)$ on generators by counting isolated Floer solutions \[ {\partial}si_S(y_1 \otimes\cdots \otimes y_q) = \sum_{u\in \mathcal{M}_0(x_a;y_b;S,\beta)} \epsilon_u \; x_1\otimes \cdots \otimes x_p, \] where $\epsilon_u \in \{ {\partial}m 1 \}$ are orientation signs (In this paper we use $\mathbb{Z}_2$ coefficients, so these signs don't matter. In general, see Section 17 of \cite{ritter2013topological}). Then extend ${\partial}si_S$ linearly. The ${\partial}si_S$ are chain maps. On homology, $ {\partial}si_S: \otimes_{b=1}^q SH_*(B_b H) \to \otimes_{a=1}^p SH_*(A_a H) $ is independent of the choices $(\beta,j,J)$ relative to the ends. Taking direct limits, we get induced maps: $${\partial}si_S: SH_*(W)^{\otimes q} \to SH_*(W)^{\otimes p} \qquad (p\geq 1, q\geq 0).$$ So $SH_*(W)$ has a unit ${\partial}si_C(1)$. \subsubsection{The product}\langlebel{Subsection Product}\langlebel{Section Ring structure} \begin{figure} \caption{Pair of pants product: the operation ${\partial} \end{figure} The pair of pants surface $P$ (Figure \ref{The pair of pants p }) defines the product \\[1mm] \begin{tabular*}{\textwidth}{l@{\extracolsep{\fill}}cr@{\extracolsep{0pt}}} \strut & $ {\partial}si_P: SH_i(W)\otimes SH_j(W) \to SH_{i+j}(W),\; x \cdot y = {\partial}si_P(x,y), $ & \strut \end{tabular*} which is graded-commutative and associative. \begin{remark}\langlebel{NOproduct} The pair of pants product also respects the action filtration. As mentioned in \cite{uebele2015periodic} and in Section 16.3 of \cite{ritter2013topological}, we have \[\mathcal{A}_{2H}(x_3)\leq \mathcal{A}_{H}(x_1)+\mathcal{A}_{H}(x_2). \] Hence the product restricts to a map \[SH_*^{[a,b)}(W)\times SH_*^{[a',b')}(W)\rightarrow SH_*^{[\max\{a+b',a'+b\},b+b')}(W), \] where on the right hand side it is necessary to divide out all generators with action less than $\max\{a+b',a'+b\}$ to make the map well defined. So one does not get a product on the whole positive symplectic homology, but we can define maps: \[ SH_*^{[\delta,b)}(W)\times SH_*^{[\delta,b)}(W)\rightarrow SH_*^{[b+\delta,2b)}(W) \] \end{remark} \subsubsection{The unit} \langlebel{Subsection Definition of the unit} Let $C=\mathbb C$ with $p=1$, $q=0$. The end is parametrized by $(-\infty,0]\times S^1$ via $s+it \mapsto e^{ - 2{\partial}i(s+it) }$. On this end, $\beta=f(s)dt$ with $f'(s)\leq 0$, $f(s)=1$ for $s\leq -2$ and $f(s)=0$ for $s\geq -1$. Extend by $\beta=0$ away from the end (See Figure~\ref{Figure Unit}). Thus we get a map ${\partial}si_C: \mathbb{K} \to SH_*(H)$. \begin{figure} \caption{A cap $C$, and its interpretation as a continuation cylinder.} \end{figure} \begin{defn} Let $e_H\!=\!{\partial}si_{C}(1)\!\in\! SH_n(H)$. We can define $e\! =\! \varinjlim e_H \!\in\! SH_n(W).$ \end{defn} \begin{theorem}[Theorem 6.1 \cite{ritter2013topological}]\langlebel{Theorem unital ring structure} $e$ is the unit for the production on $SH_*(W)$. \end{theorem} \begin{proof} By the gluing illustrated in the Figure~\ref{Fig:unit for pair of pants}, ${\partial}si_P(e,\cdot) = {\partial}si_{P\# C}(\cdot) = {\partial}si_Z(\cdot)=\textrm{id}$. 
\end{proof} \begin{figure} \caption{Unit for pair of pants product} \end{figure} \begin{remark} For ``gluing = compositions'' results, see Theorems 16.10, 16.12, 16.14 in \cite{ritter2013topological}. Before taking direct limits, the above is the continuation map \[SH_*(H) \stackrel{e_H\otimes \cdot}{\longrightarrow} SH_*(H)^{\otimes 2} \stackrel{{\partial}si_P}{\longrightarrow} SH_*(2H).\] \end{remark} \begin{lemma}[Lemma 6.2 \cite{ritter2013topological}]\langlebel{Lemma unit is a count of continuation solutions} $e_H$ is a count of the isolated finite energy Floer continuation solutions $u:\mathbb{R}\times S^1 \to \widehat{W}$ for the homotopy $f(s)H$ from $H$ to $0$. \end{lemma} \begin{lemma}[Lemma 6.3 \cite{ritter2013topological}]\langlebel{Lemma unit is sum of minima} For $H$ as in Section \ref{sss:Ad Hamiltonian}, $e_H = $ sum of the local minima of $H$. \end{lemma} \begin{theorem}[Theorem 6.4 \cite{ritter2013topological}]\langlebel{Theorem unit is image of 1} $e=\varinjlim e_H$ is the image of $1$ under $c_*:H_*(W) \to SH_*(W)$, and $e_H = c_{*,H}(1)$ where $c_{*,H}:H_*(M)\cong SH_*^{<\delta}(H) \to SH_*(H)$ is the inclusion map. \end{theorem} \subsubsection{The TQFT structure on ${SH_*(W)}$ is compatible with the grading by ${H_1(W)}$} \langlebel{Subsection TQFT is compatible with filtrations} We can grade $SC_*(H)=\bigoplus\limits_{h\in H_1(W)} SC^{h}_*(H)$ by the homology classes $h\in H_1(\widehat{W})$ of the generators. The Floer differential preserves the $H_1$ grading, and so do Floer operations on a cylinder and a cap. The pair of pants product respects this grading as follows: ${\partial}si_S: SH^{h_1}_*(W)\otimes SH^{h_2}_*(W) \to SH^{h_1+h_2}_*(W)$. We can also grade $SH_*(W)=\bigoplus\limits_{h}SH^{h}_*(M)$ by the free homotopy classes $h\in [S^1,M]$ of the generators. The TQFT operations for genus zero surfaces are compatible with the grading (the equation above holds after replacing $\sum$ by concatenation of free loops). \begin{remark}\langlebel{remark on pair of pants product about contractible loops} Let $SH_*^0(W)$ denote the summand corresponding to the contractible loops. Considering only contractible loops determines a TQFT with operations ${\partial}si_S:SH_*^0(W)^{\otimes q} \to SH_*^0(W)^{\otimes p}$ $(p\geq 1,q\geq 0)$. Also $c_*:H_*(W)\to SH_*^0(W)\subset SH_*(W)$ naturally lands in $SH_*^0(W)$. \end{remark} \subsubsection{Viterbo Functoriality} \langlebel{Subsection Viterbo Functoriality} For Liouville subdomains $W\subset \widehat{M}$, Viterbo \cite{viterbo1999functors} constructed a restriction map $SH_*(M)\to SH_*(W)$ and McLean \cite{mclean2007lefschetz} proved that it is a ring homomorphism. \begin{comment} Ritter proved a stronger statement: \begin{theorem}[Theorem 9.5 \cite{ritter2013topological}]\langlebel{Theorem Viterbo Functoriality} Let $ i:(W,\theta_W) \hookrightarrow (\widehat{M},\theta) $ be a Liouville subdomain. Then there exists a $\mathrm{restriction\; map}$, $SH_*(i)_{\eta}:SH_*(M)_{\eta} \to SH_*(W)_{i_*\eta},$ which is a TQFT map fitting into a commutative diagram which respects TQFT structures: $$ \mathrm{x}ymatrix@R=12pt{SH^*(W) \ar@{<-}[r]^-{SH_*(i)} \ar@{<-}[d]_-{c_*} & SH_*(M) \ar@{<-}[d]^-{c_*} & & SH_*(W)_{i_*\eta} \ar@{<-}[r]^-{SH_*(i)_{\eta}} \ar@{<-}[d]^-{c_*} & SH_*(M)_{\eta} \ar@{<-}[d]_-{c_*} \\ H_*(W) \ar@{<-}[r]^-{i_*} & H_*(M) & & H_*(W)\otimes \Lambda \ar@{<-}[r]^-{i_*} & H_*(M)\otimes \Lambda} $$ In particular, all maps are unital ring homomorphisms. 
\end{theorem} Therefore we have the following corollary: \end{comment} \begin{theorem}[\cite{mclean2007lefschetz}, \cite{cieliebak2018symplectic}]\label{invariant of SH} Let $W$ and $V$ be compact symplectic manifolds with contact type boundary and assume that the Conley-Zehnder index is well-defined on $W$. If $V$ is obtained from $W$ by attaching to $\partial W\times [0,1]$ a subcritical symplectic handle $H_k^{2n}$, $k<n$, then \[SH_*(V,\mathbb{Z}_2)\cong SH_*(W,\mathbb{Z}_2) \] as rings. \end{theorem} \begin{remark} A. Ritter proved a stronger statement in Theorem 9.5 of \cite{ritter2013topological}. \end{remark} \subsection{Conley-Zehnder index}\label{subsection:index} In this section we discuss the Conley-Zehnder index, following Fauck \cite{fauck2016rabinowitz}. To define $\mu_{CZ}$, let $Sp(2n)$ denote the group of $2n\times2n$ symplectic matrices. We will in fact work with a generalization, the Robbin-Salamon index, defined as follows. Any smooth path $\Psi:[a,b]\to Sp(2n)$ satisfies an ordinary differential equation \[\Psi'(t)=J_0S(t)\Psi(t),\qquad \Psi(a)\in Sp(2n), \] where $t\mapsto S(t)=S(t)^{T}$ is a smooth path of symmetric matrices and $J_0$ is the standard almost complex structure. A time $t\in [a,b]$ is called a crossing if $\det(\mathrm{id}-\Psi(t))=0$. The crossing form at time $t$ is the quadratic form $\Gamma(\Psi,t)$ defined for $v\in \ker(\mathrm{id}-\Psi(t))$ by \[\Gamma(\Psi,t)v=\langle v,S(t)v\rangle. \] A crossing $t$ is called regular if $\Gamma(\Psi,t)$ is non-degenerate. For a path with only regular crossings, the Robbin-Salamon index is defined by \[\mu_{CZ}(\Psi,a,b):=\frac{1}{2}\operatorname{sign} \Gamma(\Psi,a)+\sum_{a<t<b} \operatorname{sign} \Gamma(\Psi,t)+\frac{1}{2} \operatorname{sign} \Gamma(\Psi,b), \] where the sum runs over all crossings $t\in(a,b)$, and $\operatorname{sign}(M)$ denotes the signature of the matrix $M$, i.e.\ the number of positive eigenvalues minus the number of negative eigenvalues. Here we use $\mu_{CZ}$ to denote the Robbin-Salamon index. The fact that the Robbin-Salamon index coincides with the Conley-Zehnder index when $\det(\mathrm{id}-\Psi(b))\ne 0$ justifies this abuse of notation. We have the following properties of $\mu_{CZ}$: \begin{itemize} \item (\textit{Naturality}) For any path $\Phi:[a,b]\to Sp(2n)$, $\mu_{CZ}(\Phi\Psi\Phi^{-1})=\mu_{CZ}(\Psi)$. \item (\textit{Homotopy}) $\mu_{CZ}(\Psi_s)$ is constant for any homotopy $\Psi_s$ with fixed endpoints. \item (\textit{Product}) If $Sp(2n)\oplus Sp(2n')$ is identified with a subgroup of $Sp(2(n+n'))$ in the natural way, then $\mu_{CZ}(\Psi\oplus \Psi')=\mu_{CZ}(\Psi)+\mu_{CZ}(\Psi').$ \end{itemize} The homotopy property allows us to define $\mu_{CZ}(\Psi,a,b)$ also for paths with non-regular crossings, since having only regular crossings is a $\mathcal{C}^\infty$-generic property among paths with fixed endpoints. \begin{remark}[Lemma 59 \cite{fauck2016rabinowitz}]\label{Index formula} Let $\Psi_1,\Psi_2,\Psi_3:[0,T]\to Sp(2)$ be the following paths: \[\Psi_1(t)=e^{it}, \quad \Psi_2(t)=e^{-it},\quad \Psi_3(t)=\textrm{diag}\big(e^{f(t)},e^{-f(t)}\big),\quad f\in C^1(\mathbb{R}). \] Then their Conley-Zehnder indices are given as follows: \begin{align*} \mu_{CZ}(\Psi_1)&=\Bigg\lfloor\frac{T}{2\pi}\Bigg\rfloor+\Bigg\lceil\frac{T}{2\pi}\Bigg\rceil,\\ \mu_{CZ}(\Psi_2)&=\Bigg\lfloor\frac{-T}{2\pi}\Bigg\rfloor+\Bigg\lceil\frac{-T}{2\pi}\Bigg\rceil=-\mu_{CZ}(\Psi_1),\\ \mu_{CZ}(\Psi_3)&=0.
\end{align*} \end{remark} \subsubsection*{Trivialization} Suppose we have a symplectic manifold $(M,\omega)$ with $c_1(M)=0$ and $J$ is an $\omega-$compatible almost complex structure. Then the anti-canonical bundle of $M$ is the highest exterior power of $(TM,J)$, i.e, $\kappa_J^*=\wedge^n(TM,J)$. The canonical bundle $\kappa_J$ is the dual of $\kappa_J^*$. In the same manner, we can define the canonical bundle of a contact manifold $(C,\mathrm{x}i)$ with a choice of one form $\alpha$ and $d\alpha$-compatible almost complex structure on $\mathrm{x}i$. A \textit{trivialization of the canonical bundle} is a bundle isomorphism $\Phi:\kappa_J\to M\times \mathbb{C}$. A \textit{trivialization} of $(\gamma^*TM,J)$( where $\gamma$ is a loop in $M$) is a bundle isomorphism $\Psi:\gamma^*TM\to S^1\times \mathbb{C}^n$. Such a trivialization has a one-to-one correspondence (up to homotopy) with the trivialization of $\gamma^{*}\kappa_J^*$ and hence the trivialization of the canonical bundle via: \[\det\nolimits_\mathbb{C}(\Psi):\wedge^n(\gamma^*TM)=\gamma^*\kappa_J^*\rightarrow S^1\times \mathbb{C}. \] For a 1-periodic Hamiltonian orbit $x$, we fix a trivialization of $(x^*TM,J)$ along $x$ as: \[\Psi:x^*TM\to S^1\times \mathbb{C}^n \] Suppose ${\partial}si$ is the Hamiltonian flow and $d{\partial}si_t:TM|_{x(0)}\to TM|_{x(t)}$ is its linearization, then define \[M_t(x):=\Psi_t\circ d{\partial}si_t\circ \Psi_0^{-1} \] The Conley-Zehnder index of $x$ is defined as $\mu_{CZ}(x):=\mu_{CZ}(M_t(x))$. Similarly, we can define the Conley-Zehnder index of a Reeb orbit. In particular, let $(W,\langlembda)$ be a Liouville domain and $(C:={\partial}artial W,\mathrm{x}i:=\ker \langlembda|_C)$ its boundary. We have $TM|_C=\mathrm{x}i\oplus<X_{Reeb}>\oplus<X>$, where $X_{Reeb},X$ are a Reeb vector field and a Liouville vector field, respectively. Since $<X_{Reeb}>=J<X>$ , we can identify $<X_{Reeb}>\oplus<X>$ with $\mathbb{C}$, i.e. $\gamma^*TM=\gamma^*\mathrm{x}i\oplus\mathbb{C}$, where $\gamma$ is a Reeb orbit. Due to the fact that Reeb flow preserves $X_{Reeb}$ and extends to the symplectization, we have \[M_{t,M}(\gamma)=M_{t,\mathrm{x}i}(\gamma)\oplus \left[ {\begin{array}{cc} 1 & 0 \\ 0 & 1 \\ \end{array} } \right] \] where $M_{t,M}(\gamma)$ is the symplectic matrix associated to the linearization of the Reeb flow with respect to a trivialization of $TM|_\gamma$, i.e, $M_{t,\mathrm{x}i}(\gamma)$ is defined in the same manner. The product property of Conley-Zehnder index implies that $\mu_{CZ}(M_{t,M}(\gamma))=\mu_{CZ}(M_{t,\mathrm{x}i}(\gamma))$. Hence we will not specify which index we are referring to in the rest of this paper. Now consider a $G-$equivariant Liouville domain $(W,\langlembda)$. Suppose the group action is free and $|G|<\infty$. Then we have that the quotient map \[{\partial}i_G: (W,\langlembda)\rightarrow (W/G,\langlembda) \] is a finite covering map. Each Reeb orbit $\gamma$ in ${\partial}(W/G)$ then lifts to a fractional orbit $\widetilde{\gamma}$ in ${\partial} W$. That is, $\widetilde{\gamma}(t)=\gamma_0(t),t\in[0,T]$ for some closed Reeb orbit $\gamma_0$ in ${\partial}(W)$. In particular, we can choose $\gamma_0$ with period of $|G|\cdot T$. If we choose a $G-$equivariant trivialization for the canonical bundle $\kappa_W$, then such trivialization descends down to $\kappa_{W/G}$. 
Equivalently, if we choose $G$-equivariant trivialization of $\mathrm{x}i|_{\gamma_0}$, and $M_{t,\mathrm{x}i}(\gamma_0)$ is the matrix of the linearized map, then we have for some $M_G\in Sp(2n,\mathbb{R})$, \[ M_G\cdot M_{t,\mathrm{x}i}(\gamma_0)=M_{t+T,\mathrm{x}i}(\gamma_0). \] where $M_G$ satisfies $M_G^{|G|}=M_{|G|\cdot T,\mathrm{x}i}(\gamma_0)$ is a constant matrix, which only depends on the homotopy class of our $G$-equivariant trivialization. In particular, $M_{T,\mathrm{x}i}(\gamma_0)=M_G$, so $\mu_{CZ}(M_{t,\mathrm{x}i}(\gamma_0)), t\in [0,T]$ is well defined since the Conley-Zehnder index is constant for any homotopy with fixed endpoints. We can therefore define the Conley-Zehnder index of such a fractional Reeb orbit of $\gamma$ to be the Conley-Zehnder index of $M_{t,\mathrm{x}i}(\gamma_0), t\in[0,T]$. As a consequence, we have \[\mu_{CZ}(\gamma)=\mu_{CZ}(\widetilde{\gamma}). \] \begin{lemma}\langlebel{index for the cylinder component} Let $(\mathbb{R}\times S^1, d(rd\theta))$ be the symplectization of $(S^1,\theta)$. Choose the canonical trivialization of $T(\mathbb{R}\times S^1)=T\mathbb{R}\times TS^1$, then all fractional Reeb orbits of $(S^1,\theta)$ have Conley-Zehnder index (Robbin-Salamon index) zero, with respect to any cyclic group action rotating the cylinder. \end{lemma} \begin{proof} Since Reeb flow preserves $({\partial}artial_r,{\partial}artial_\theta)$, so the matrix for linearized return map is\[ M(t)=\left[ {\begin{array}{cc} 1 & 0 \\ 0 & 1 \\ \end{array} } \right]. \]Therefore, the Conley-Zehnder index is zero. \end{proof} The following lemma gives a formula for the Reeb vector field in terms of the Hamiltonian and Liouville vector field. \begin{lemma}\langlebel{formula for reeb vector} Let $(W,\langlembda)$ be a Liouville manifold. Suppose $H$ is a function on $W$ with $0$ as its regular value and the Liouville vector field $X$ is transverse to the 0-level set. Then $(\Sigma:=H^{-1}(0),\langlembda)$ is a contact manifold whose Reeb vector field is given by $X_{Reeb}=\frac{X_H}{X(H)}$, where $X_H$ is the Hamiltonian vector field of $H$. \end{lemma} \begin{proof} $(\Sigma,\langlembda)$ is well known to be contact. We only need to prove the latter part of the lemma. Since \[\iota_{X_H}d\langlembda|_{\Sigma}=-dH|_\Sigma=0\] and \[\iota_{X_H}\langlembda=\iota_{X_H}\iota_{X}d\langlembda=d\langlembda(X,X_H)=dH(X)=X(H), \] it follows $X_{Reeb}=\frac{X_H}{X(H)}$. \end{proof} \begin{lemma}[Lemma 5.20 \cite{mclean2016reeb}] \langlebel{lemma:hamiltonianconleyzehnderindexcomparisonstandard} Let $(C,\mathrm{x}i)$ be a contact manifold with associated contact form $\alpha$ and let $h : \mathbb{R} \to \mathbb{R}$ be a function with $h' > 0,h''>0$ and $h'(0) = 1$. Let $\widehat{C} := C \times \mathbb{R}$ be the symplectization of $C$ with symplectic form $d(e^r \alpha)$ where $r$ parameterizes $\mathbb{R}$. Let $\gamma(t)$ be a Reeb orbit of $\alpha$ of period $L$ with a choice of trivialization of the symplectic vector bundle $\oplus_{j=1}^{N}TM$ along this orbit. This choice of trivialization induces a choice of trivialization of $\gamma^* \oplus_{j=1}^{N} \mathrm{x}i$ in a natural way. Then the Hamiltonian $L h(e^r)$ has a $1$ periodic orbit x equal to $\gamma(Lt)$ inside $C \times \{0\} = C$ and its Conley-Zehnder index is equal to $\mu_{CZ}(\gamma)+\frac{1}{2}$. \end{lemma}\langlebel{remark: index difference} \begin{remark} Notice that the Hamiltonian vector field in \cite{mclean2016reeb} differs from ours by a minus sign. 
We have \[M_{t,M}(x)=M_{t,\mathrm{x}i}(\gamma)\oplus \left[ {\begin{array}{cc} 1 & 0 \\ ah''t & 1 \\ \end{array} } \right], \]for some constant $a>0$. If instead, $h''< 0$, then the index equals $\mu_{CZ}(\gamma)-\frac{1}{2}$. And if $h'<0$, then the Hamiltonian orbit goes in the opposite direction of the Reeb orbit, and the index differs by a minus sign. \end{remark} We will conclude this subsection with a lemma relating Morse index of critical point with Conley-Zehnder index of the corresponding constant Hamiltonian orbit. \begin{lemma} If $S$ is an invertible symmetric matrix with $||S||<2{\partial}i$ and $\Psi(t)=\exp(tJ_0S)$, then \[\mu_{CZ}(\Psi)=n-Ind(S) \] where $Ind(S)$ is the number of negative eigenvalues of $S$. \end{lemma} \begin{corollary}[Corollary 7.2.2\textbf{ \cite{audin2014morse}}]\langlebel{index of critical pt} Let $W$ be a symplectic manifold of dimension $2n$, let \[H:W\rightarrow \mathbb{R} \] be a Hamiltonian and $x$ be a critical point of $H$. We assume that $H$ is $\mathcal{C}^2$-small (in this case, we can choose a Darboux chart centered at $x$ such that the usual norm $||Hess_x(H)||<2{\partial}i$). Then the Conley-Zehnder index $\mu_{CZ}(x)$ of $x$ as a periodic solution of the Hamiltonian system and its Morse index Ind(x) as a critical point of the function $H$ are connected by \[\mu_{CZ}(x)=n-Ind(x). \] \end{corollary} \subsection{Weinstein handle attachment and contact surgery}\langlebel{ss:Weinstein handle & contact surgery} \subsubsection{Contact surgery} This section is already included in Chapter 6 of \cite{geiges2008introduction}. We will highlight the parts which should be paid attention to in this paper, namely, the trivialization of the conformal symplectic normal bundle. \begin{defn} Let $(M,\mathrm{x}i)$ be a contact manifold. A submanifold $L$ of $(M,\mathrm{x}i)$ is called an \textit{isotropic submanifold} if $T_pL\subset \mathrm{x}i_p$ for all point $p\in L$. \end{defn} Let $L\subset (M,\mathrm{x}i=\ker \alpha)$ be an isotropic submanifold in a contact manifold with cooriented contact structure. Let $(TL)^{\partial}erp\subset \mathrm{x}i_L$ be the subbundle of $\mathrm{x}i_L$ that is symplectically orthogonal to $TL$ with respect to the symplectic bundle structure $d\alpha|_\mathrm{x}i$.The conformal structure of this bundle does not depend on the choice of contact form and therefore $(TL)^{\partial}erp $ is determined by $\mathrm{x}i$. The fact $L$ is isotropic implies that $TL\subset (TL)^{\partial}erp$. So we have the following definition, \begin{defn} The quotient bundle\[ CSN_M(L):=(TL)^{\partial}erp/TL\] with the conformal symplectic structure induced by $d\alpha$ is called the \textit{conformal symplectic normal bundle} of $L$ in $M$. \end{defn} So we have \[\mathrm{x}i|_L=\mathrm{x}i|_L/(TL)^{\partial}erp\oplus(TL)^{\partial}erp/TL\oplus TL=TL\oplus\mathrm{x}i|_L/(TL)^{\partial}erp\oplus CSN_M(L). \] Let $J:\mathrm{x}i\to \mathrm{x}i$ be a complex bundle structure on $\mathrm{x}i$ compatible with the symplectic structure given by $d\alpha$. Then the bundle $\mathrm{x}i|_L/(TL)^{\partial}erp$ is isomorphic to $J(TL)$. So the contact structure has the following natural splitting on the isotropic submanifold: \begin{lemma}\langlebel{spliting of the contact structure} \[\mathrm{x}i|_L=TL\oplus J(TL)\oplus CSN_M(L) \] \end{lemma} Therefore, if we fix a trivialization of $TL\oplus J(TL)$, then the trivialization of $CSN_M(L)$ is determined by the trivialization of $\mathrm{x}i|_L$. 
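For orientation, here is a quick rank count (an elementary observation, recorded for later use, writing $\dim M=2n-1$ and $\dim L=\ell$):
\[
\operatorname{rank}_{\mathbb{R}}\xi|_L=2n-2,\qquad \operatorname{rank}_{\mathbb{R}}(TL)^{\perp}=2n-2-\ell,\qquad \operatorname{rank}_{\mathbb{R}}CSN_M(L)=2n-2-2\ell .
\]
In particular, for an isotropic sphere $\Lambda^{k-1}$ the conformal symplectic normal bundle has complex rank $n-k$, matching the trivial factor $\epsilon^{n-k}$ in the handle model below; in the Legendrian case $k=n$ it has rank zero, so the choice of trivialization carries no information.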
Now we can state the contact surgery theorem: \begin{theorem}[Theorem 6.2.5 \cite{geiges2008introduction}]\label{thm for contact surgery} Let $\Lambda^{k-1}$ be an isotropic sphere in a contact manifold $(M,\xi=\ker \alpha)$ with a trivialization of the conformal symplectic normal bundle $CSN_M(\Lambda^{k-1})$. Then there is a symplectic cobordism from $(M,\xi)$ to the manifold $M'$ obtained from $M$ by surgery along $\Lambda^{k-1}$ with the natural framing. In particular, the surgered manifold $M'$ carries a contact structure that coincides with the one on $M$ away from the surgery region. \end{theorem} \begin{remark} The resulting contact structure on $M'$ is uniquely determined up to isotopy by the isotopy class of $\Lambda^{k-1}$ through isotropic spheres and the homotopy class of the trivialization of $CSN_M(\Lambda^{k-1})$. \end{remark} \subsubsection{Weinstein handlebodies}\label{weinstein handlebody} For the purposes of this paper, we need to attach a handle to a Weinstein domain. We will follow Section 13 in \cite{cieliebak2006symplectic}. The standard handle of index $k$ is the bidisk in $\mathbb{C}^n$: \[\bigg\{\sum_{j=1}^{k}x_j^2\leq (1+\epsilon)^2,\ \sum_{j=1}^{k}y_j^2+\sum_{j=k+1}^{n}|z_j|^2\leq \epsilon^2\bigg\}, \] where $z_j=x_j+iy_j$, $j=1,2,\cdots,n$, are the complex coordinates in $\mathbb{C}^n$. In particular, the handle $H$ carries the standard complex structure $i$, along with the standard symplectic structure $\omega_{std}$. The symplectic form $\omega_{std}$ on $H$ admits a hyperbolic Liouville field \[X_{std}=\sum_{j=1}^{k}\big(-x_j\frac{\partial}{\partial x_j} + 2y_j\frac{\partial}{\partial y_j}\big)+\frac{1}{2}\sum_{l=k+1}^{n}\big(x_l\frac{\partial}{\partial x_l}+y_l\frac{\partial}{\partial y_l}\big). \] Let us denote by $\xi^-$ the contact structure $\{\alpha_{st}|_{\partial^-H}=0\}$ defined on $\partial^- H$ by the Liouville form $\alpha_{st}=\iota_{X_{std}}\omega_{std}$, where $\partial^- H:=\partial D_1^k\times D_\epsilon^{2n-k}$ is the \textit{lower boundary}. Notice that the bundle $\xi^-|_{\Lambda^{k-1}}$ canonically splits as $T\Lambda^{k-1}\oplus J(T\Lambda^{k-1})\oplus \epsilon^{n-k}$, where $\epsilon^{n-k}$ is a trivial $(n-k)$-dimensional complex bundle. We will denote by $\sigma_\Lambda$ the isomorphism \[T\Lambda^{k-1}\oplus J(T\Lambda^{k-1})\oplus \epsilon^{n-k}\to \xi^-|_\Lambda. \] Suppose we are given a real $k$-dimensional bundle $E$, a complex $n$-dimensional bundle $\tau$ with $n\geq k$, and an injective totally real homomorphism $\phi:E\to \tau$. Then $\phi$ canonically extends to a complex homomorphism $\phi\otimes \mathbb C: E\otimes\mathbb C \to \tau$. If $\phi\otimes \mathbb C$ extends to a fiberwise complex isomorphism $\Phi: E\otimes\mathbb C \oplus \epsilon^{n-k}\to \tau$, then $\Phi$ is called a \textit{saturation} of $E$ covering $\phi$. When $n=k$ the saturation is unique. Let $(V,\omega,X,\phi)$ be a Weinstein manifold, let $p$ be a critical point of index $k$ of the function $\phi$, and let $a<b=\phi(p)$, where $a$ is a regular value of $\phi$. Denote $W:=\{ \phi \leq a\}$. Suppose that the stable manifold of $p$ intersects $V\setminus \mathrm{Int}\,W$ along a disc $D^k$, and let $\Lambda^{k-1}=\partial D^k$ be the attaching sphere.
The inclusion $T\Lambda^{k-1}\hookrightarrow \xi$ extends canonically to an injective complex homomorphism $T\Lambda^{k-1}\oplus J(T\Lambda^{k-1})\hookrightarrow \xi$, while the inclusion $TD^k\hookrightarrow TV$ extends to an injective complex homomorphism $TD^k\oplus J(TD^k)\hookrightarrow TV$. There exists a homotopically unique complex trivialization of the conformal symplectic normal bundle $CSN_{\partial W}(\Lambda^{k-1})$ in $\xi$ which extends to $D^k$ as a trivialization of the conformal symplectic normal bundle to $D^k$ in $TV$. This trivialization provides a canonical isomorphism $\Phi_{D^k}:T\Lambda\oplus J(T\Lambda)\oplus \epsilon^{n-k} \to \xi|_{\Lambda^{k-1}}$, and we will call this the canonical saturation of the inclusion $\Lambda^{k-1} \hookrightarrow \partial W$. We have the following theorem on attaching a handle to a Weinstein domain: \begin{theorem}[Proposition 13.11 \cite{cieliebak2006symplectic}, \cite{weinstein1991contact}]\label{weinstein handle attaching} Let $(W,\omega,X,\phi)$ be a $2n$-dimensional Weinstein domain with boundary $\partial W$ and let $\xi$ be the induced contact structure $\{\alpha|_{\partial W}=0\}$ on $\partial W$ defined by the Liouville form $\alpha=\iota_X\omega$. Let $h:\Lambda\to \partial W$ be an isotropic embedding of the $(k-1)$-sphere $\Lambda$. Let $\Phi:T\Lambda\oplus J(T\Lambda)\oplus \epsilon^{n-k} \to \xi$ be a saturation covering the differential $dh:T\Lambda\to \xi$. Then there exists a Weinstein domain $(\tilde{W},\tilde{\omega},\tilde{X},\tilde{\phi})$ such that $W\subset \mathrm{Int}\,\tilde{W}$, and \begin{itemize} \item[(i)] $(\tilde{\omega},\tilde{X},\tilde{\phi})|_W=(\omega,X,\phi)$; \item[(ii)] the function $\tilde{\phi}|_{\tilde{W}\setminus \mathrm{Int}\,W}$ has a unique critical point $p$ of index $k$; \item[(iii)] the stable disc $D$ of the critical point $p$ is attached to $\partial W$ along the sphere $h(\Lambda)$, and the canonical saturation $\Phi_D$ coincides with $\Phi$. \end{itemize} Given any two Weinstein extensions $(W_0,\omega_0,X_0,\phi_0)$ and $(W_1,\omega_1,X_1,\phi_1)$ of $(W,\omega,X,\phi)$ which satisfy properties (i)-(iii), there exists a diffeomorphism $g:W_0\to W_1$, fixed on $W$, such that $(\omega_0,X_0,\phi_0)$ and $(g^*\omega_1,g^*X_1,g^*\phi_1)$ are homotopic in the class of Weinstein structures which satisfy (i)-(iii). In particular, the completions of these two Weinstein domains are symplectomorphic via a symplectomorphism fixed on $W$. \end{theorem} We say that the Weinstein domain $(\tilde{W},\tilde{\omega},\tilde{X},\tilde{\phi})$ is obtained from $(W,\omega,X,\phi)$ by attaching a handle of index $k$ along the isotropic sphere $h:\Lambda\to \partial W$ with the given trivialization $\Phi$. \begin{defn}\label{def: weinstein_flexible} A Weinstein domain $(W^{2n}, \lambda, \phi)$ is \textit{flexible} if there exist regular values $c_1, \cdots, c_{k}$ of $\phi$ such that $c_1 < \min \phi < c_2 < \cdots < c_{k-1} < \max \phi < c_{k}$ and for all $i = 1, \cdots, k-1$, $\{c_i \le \phi \le c_{i+1} \}$ is a Weinstein cobordism with a single critical point $p$ whose attaching sphere $\Lambda_p$ is either subcritical or a loose Legendrian in $(Y^{c_i}, \lambda|_{Y^{c_i}})$, where $Y^{c_i}$ denotes the contact level set $\phi^{-1}(c_i)$. \end{defn} Flexible Weinstein \textit{cobordisms} are defined similarly.
Also, a Weinstein handle attachment or contact surgery is called flexible if the attaching Legendrian is loose. So any flexible Weinstein domain can be constructed by iteratively attaching subcritical or flexible handles to $(B^{2n}, \omega_{std})$. A Weinstein domain that is Weinstein homotopic to a Weinstein domain satisfying Definition \ref{def: weinstein_flexible} will also be called flexible. Loose Legendrians have dimension at least $2$ so if $(Y_+,\mathrm{x}i_+)$ is the result of flexible contact surgery on $(Y_-, \mathrm{x}i_-)$, then by Proposition \ref{prop: c1equivalence} $c_1(Y_+)$ vanishes if and only if $c_1(Y_-)$ does. Finally, we note that subcritical domains are automatically flexible. Since they are built using loose Legendrians and subcritical spheres, which satisfy an h-principle, flexible Weinstein domains also satisfy an h-principle \cite{cieliebak2012stein}. Again, the h-principle has an existence and uniqueness part: \begin{itemize} \item any almost Weinstein domain admits a flexible Weinstein structure in the same almost symplectic class \item any two flexible Weinstein domains that are almost symplectomorphic are Weinstein homotopic (and hence have exact symplectomorphic completions and contactomorphic boundaries). \end{itemize} \subsubsection{Formal structures}\langlebel{sec: formal structures} There are also formal versions of symplectic, Weinstein, and contact structures that depend on just the underlying algebraic topological data. For example, an \textit{almost symplectic structure} $(W, J)$ on $W$ is an almost complex structure $J$ on $W$; this is equivalent to having a non-degenerate (but not necessarily closed) 2-form on $W$. An almost symplectomorphism between two almost symplectic manifolds $(W_1, J_1), (W_2, J_2)$ is a diffeomorphism ${\partial}hi: W_1 \rightarrow W_2$ such that ${\partial}hi^*J_2 $ can be deformed to $J_1$ through almost complex structures on $W_1$. Equivalently, it also means that there is a family of non-degenerate 2-forms $\omega_t$ interpolating between $\omega_1$ and $\omega_2$. An \textit{almost Weinstein domain} is a triple $(W, J, {\partial}hi)$, where $(W,J)$ is a compact almost symplectic manifold with boundary and ${\partial}hi$ is a Morse function on $W$ with no critical points of index greater than $n$ and maximal level set ${\partial}artial W$. An \textit{almost contact structure} $(Y, J)$ on $Y$ is an almost complex structure $J$ on the stabilized tangent bundle $TY \oplus \epsilon^1$ of $Y$. Therefore an almost symplectic domain $(W, J)$ has almost contact boundary $({\partial}artial W, J|_{{\partial}artial W})$; it is an almost symplectic filling of this almost contact manifold. Therefore a family of almost symplectic structures give rise to a family of almost contact structures on the boundary: \begin{lemma}\langlebel{lemma for almost contact} Almost symplectomorphic Liouville domains have almost contactomorphic boundaries. \end{lemma} Note that any symplectic, Weinstein, or contact structure can also be viewed as an almost symplectic, Weinstein, or contact structure by considering just the underlying algebraic topological data. Note that the first Chern class $c_1(J)$ is an invariant of almost symplectic, almost Weinstein, or almost contact structures. In this paper, we will often need to assume that $c_1(J)$ vanishes. 
The following proposition, which will be used several times in this paper, shows that the vanishing of $c_1(Y, J)$ is often preserved under contact surgery and furthermore implies the vanishing of $c_1(W, J)$. \begin{prop}[Proposition 2.1 \cite{lazarev2016contact}]\label{prop: c1equivalence} Let $(W^{2n}, J)$, $n \ge 3$, be an almost Weinstein cobordism between $\partial_-W = (Y_-, J_-)$ and $\partial_+W = (Y_+, J_+)$. If $H^2(W,Y_-) = 0$, the following are equivalent: \begin{itemize} \item $c_1(J_-)=0$ and $c_1(J_+)=0$; \item $c_1(J)=0$. \end{itemize} If $\partial_-W =\emptyset$, the vanishing of $c_1(J_+)$ and the vanishing of $c_1(J)$ are equivalent. \end{prop} \begin{proof} Let $i_\pm: Y_\pm \hookrightarrow W$ be the inclusions. Then $i_\pm^*c_1(J) = c_1(J_\pm)$, so the vanishing of $c_1(J)$ implies the vanishing of $c_1(J_-)$ and $c_1(J_+)$. To prove the converse, consider the cohomology long exact sequences of the pairs $(W, Y_-)$ and $(W, Y_+)$: $$ H^2(W, Y_\pm; \mathbb{Z}) \rightarrow H^2(W; \mathbb{Z}) \xrightarrow{i_\pm^*} H^2(Y_\pm; \mathbb{Z}). $$ By assumption, $H^2(W, Y_-; \mathbb{Z})$ vanishes and hence $i_-^*$ is injective. By Poincar\'e-Lefschetz duality, $H^2(W, Y_+; \mathbb{Z}) \cong H_{2n-2}(W, Y_-;\mathbb{Z})$. Since $2n-2 \ge n+1$ for $n \ge 3$ and $W$ is a Weinstein cobordism, $H_{2n-2}(W, Y_-;\mathbb{Z})$ vanishes and hence $i_+^*$ is also injective. Then if either $c_1(J_-) = i_-^*c_1(J)$ or $c_1(J_+) = i_+^*c_1(J)$ vanishes, so does $c_1(J)$. If $\partial_-W = \emptyset$, we just need the vanishing of $H^2(W, Y_+; \mathbb{Z})$, which holds for $n\ge 3$. \end{proof} \subsection{Morse-Bott case} The results of this section largely come from \cite{mclean2016reeb}. \begin{defn} A \textit{Morse-Bott family of Reeb orbits of $(C,\alpha)$ of period $T$} is a closed, path-connected submanifold $B\subset C$ which is contained in the union of the images of the closed Reeb orbits of period $T$ and satisfies $\ker(D\psi_T-\mathrm{id})|_B=TB$, where $\psi_t$ denotes the time-$t$ Reeb flow of $\alpha$ on $C$. \end{defn} \begin{comment} \begin{defn} Suppose $B_T$ is a subset of $C$ so that: \begin{enumerate} \item $B_T$ is a union of Reeb orbits of period $T$. \item There is a neighborhood $\mathcal{N}_{B_T}$ of $B_T$ and a constant $\delta>0$ so that any Reeb orbit with period in the interval $[T-\delta,T+\delta]$ meeting $\mathcal{N}_{B_T}$ is in fact contained in $B_T$ and has period $T$. \end{enumerate} Then we say $B_T$ {\it is an isolated family of Reeb orbits of period} $T$. We will say that $\mathcal{N}_{B_T}$ is {\it an isolating neighborhood for} $B_T$. \end{defn} \begin{defn} \label{defn:morsebottfamily} A {\it pseudo Morse-Bott family} is an isolated family of Reeb orbits $B_T$ of period $T$ with the additional property that $B_T$ is path connected and for each point $p \in B_T$ we have $\text{Size}_p(B_T) := \text{dim ker}(D_p\psi_T|_\xi - \text{id})$ is constant along $B_T$ (recall that $D_p\psi_t|_\xi : \ker(\alpha)_p \to \ker(\alpha)_{\psi_t(p)}$ is the restriction of the linearization of $\psi_t$ to $\ker(\alpha)_p$). \end{defn} An example of a pseudo Morse-Bott family of period $T$ is a Morse-Bott family of Reeb orbits of period $T$. \end{comment} We are interested in indices of Reeb orbits and so from now on we assume that we work with a fixed trivialization of a fixed power of the canonical bundle of $(C,\alpha)$.
Note that the Conley-Zehnder index of the period $T$ orbits starting in $B$ is the same for all of them because $B$ is path connected. Hence we define the {\it Conley-Zehnder index of} $B$, $\mu_{CZ}(B)$, to be the Conley-Zehnder index of any one of its period $T$ Reeb orbits. We can define an index closely related to the Conley-Zehnder index, called the \textit{lower SFT index} $lSFT(\gamma)$, as follows: \[lSFT(\gamma):=\mu_{CZ}(\gamma)-\frac{1}{2}\dim\ker(D_{\gamma(0)}\psi_T|_\xi-\mathrm{id})+(n-3). \] Note that for a non-degenerate Reeb orbit the kernel above is trivial, so $lSFT(\gamma)=\mu_{CZ}(\gamma)+n-3$. Similarly, we have the following definition: \begin{defn} Let $K$ be a Hamiltonian on a symplectic manifold $(X,\omega_X)$ and let $B$ be a set of fixed points of its time $T$ flow. We say that $B$ is \textit{isolated} if any such fixed point near $B$ is contained in $B$. Suppose that $B$ is a path connected topological space and that we have fixed a symplectic trivialization of the canonical bundle of $TX$. Then every such Hamiltonian orbit has the same Conley-Zehnder index, and we will write $\mu_{CZ}(B,K)$ for this Conley-Zehnder index. The set $B$ is said to be \textit{Morse-Bott} if $B$ is a submanifold and $\ker(D\psi_K^T-\mathrm{id})=TB$ along $B$, where $\psi_K^T:X\to X$ is the time $T$ Hamiltonian flow of $K$. \end{defn} The following technical lemma relates the index of Reeb orbits in a contact hypersurface (which is a regular level set of a Hamiltonian) to the index of the corresponding Hamiltonian orbits. \begin{lemma}[Lemma 5.22 \cite{mclean2016reeb}]\label{Reeb--Hamiltonian index relation} Let $(W,\omega_W)$ be a symplectic manifold with a choice of symplectic trivialization of the canonical bundle of $TW$. Let $\theta_W$ be a 1-form satisfying $d\theta_W=\omega_W$, and let $K$ be a Hamiltonian with the property that $b:=\iota_{X_{\theta_W}}dK>0$. This means that $C_r:=K^{-1}(r)$ is a contact manifold with contact form $\alpha_r:=\theta_W|_{C_r}$. Let $B\subset W$ be a connected submanifold transverse to $C_r$ for each $r$, so that $B_r:=C_r\cap B$ is a Morse-Bott submanifold of the contact manifold $(C_r,\alpha_r)$ of period $L_r$, where $L_r$ depends smoothly on $r$. Suppose that $b = L_0$ along $B_0$ and that $db(V) > \frac{d(L_r)}{dr}|_{r = 0}$ along $B_0$, where $V$ is a vector field tangent to $B$ satisfying $dK(V) = 1$. Then $B_0$ is Morse-Bott for $K$ and $\mu_{CZ}(B_0,K)=\mu_{CZ}(B_0,\alpha_0)+\frac{1}{2}$. \end{lemma} \begin{remark} Our sign convention differs from McLean's in \cite{mclean2016reeb} since we use $\omega(\cdot,X_H)=dH$, so the condition $b:=\iota_{X_{\theta_W}}dK>0$ differs by a sign. If $b\neq L_0$ along $B_0$, we have to rescale either $b$ or $L_r$. \end{remark} \begin{remark}\label{remark: index of orbits on cylinder} In light of Lemma~\ref{index for the cylinder component}, we have $\mu_{CZ}(\gamma,r^2)=\frac{1}{2}$, where $\gamma$ is any orbit in a Morse-Bott manifold of Hamiltonian orbits. \end{remark} \section{ADC structures and positive idempotent group} We first define (strongly) asymptotically dynamically convex contact structures. Then we introduce a new invariant, called the \textit{positive idempotent group}, based on $SH_*(W)$. It does not depend on the filling for ADC contact structures and can therefore be seen as a contact invariant. The proof will be deferred to Section~\ref{section:indepnedence of I+}. In Subsection~\ref{subsection:effect of conatct surgery}, we show that the (strongly) ADC property is preserved under subcritical contact surgery.
\subsection{Reeb orbits and asymptotically dynamically convex contact structures}\label{section : def of ADC} Let us first look at the degree of Reeb orbits, which is essential for the definition of ADC contact structures. For any contact manifold $(\Sigma,\alpha)$ with $c_1(\Sigma,\xi)=0$, as will always be the case in this paper, the canonical line bundle of $\xi$ is trivial. After choosing a global trivialization of this bundle, we can assign an integer to each Reeb orbit $\gamma$ of $(\Sigma,\alpha)$, the reduced Conley-Zehnder index: \[|\gamma|:=\mu_{CZ}(\gamma)+n-3.\] For a general Reeb orbit, $|\gamma|$ depends on the choice of trivialization of the canonical bundle. However, if the Reeb orbit $\gamma$ is contractible in $\Sigma$, then the grading does not depend on the trivialization. We will consider both contractible and non-contractible Reeb orbits in this paper. Let $\mathcal{P}^{<D}_\Phi(\Sigma,\alpha)$ be the set of Reeb orbits $\gamma$ of $(\Sigma,\alpha)$ satisfying $A(\gamma)<D$, where $\Phi$ is a specific trivialization of the canonical bundle. In a similar manner, we define $\mathcal{P}^{<D}_0(\Sigma,\alpha)$ to be the set of contractible Reeb orbits $\gamma$ of $(\Sigma,\alpha)$ satisfying $A(\gamma)<D$. Here we drop the subscript $\Phi$ since the degree of contractible Reeb orbits does not depend on the choice of trivialization. \begin{lemma}[Proposition 3.1 \cite{lazarev2016contact}]\label{prop: easy} For any $D, s>0$, there is a grading preserving bijection between $\mathcal{P}^{< D}_\Phi(Y, s\alpha)$ and $\mathcal{P}^{< D/s}_\Phi(Y, \alpha)$. \end{lemma} \begin{proof} Note that $R_{s\alpha} = \frac{1}{s}R_\alpha$. So if $\gamma_\alpha:[0, T]\rightarrow Y$ is a Reeb trajectory of $\alpha$ with action $T$, then $\gamma_{s\alpha} = \gamma_{\alpha}\circ m_{\frac{1}{s}}: [0, s T] \rightarrow Y$ is a Reeb trajectory of $s\alpha$ with action $sT$; here $m_{\frac{1}{s}}: [0, sT] \rightarrow [0, T]$ is multiplication by $\frac{1}{s}$. The map $\gamma_\alpha \mapsto \gamma_{s\alpha}$ is a bijection between the sets of Reeb orbits. If $T < D/s$, then $sT < D$, and so it restricts to a bijection between $\mathcal{P}_\Phi^{< D/s}(Y, \alpha)$ and $\mathcal{P}_\Phi^{< D}(Y, s\alpha)$. This bijection is grading-preserving since the Conley-Zehnder index of a Reeb orbit is determined by the linearized Reeb flow on the trivialized contact planes $\xi$ but does not depend on the speed of the flow. \end{proof} We will also need the following notation. If $\alpha_1, \alpha_2$ are contact forms for $\xi$, then there exists a unique $f: Y \rightarrow \mathbb{R}^+$ such that $\alpha_2 = f \alpha_1$. We write $\alpha_2 > \alpha_1$ (respectively $\alpha_2 \ge \alpha_1$) if $f > 1$ (respectively $f\ge 1$). Note that if $\alpha_2 > \alpha_1$ or $\alpha_2 \ge \alpha_1$, then for any diffeomorphism $\Psi: Y' \rightarrow Y$, we have $\Psi^*\alpha_2 > \Psi^*\alpha_1$ or $\Psi^*\alpha_2 \ge \Psi^*\alpha_1$, respectively. \begin{defn}\label{def-adc} A contact manifold $(\Sigma,\xi)$ is \textit{asymptotically dynamically convex} (respectively \textit{strongly asymptotically dynamically convex with respect to $\Phi$}) if there exists a non-increasing sequence of contact forms $\alpha_1\geq \alpha_2\geq \alpha_3\cdots$ for $\xi$ and an increasing sequence of positive numbers $D_1<D_2<D_3\cdots$ going to infinity such that all elements of $\mathcal{P}_0^{<D_k}(\Sigma,\alpha_k)$ (respectively $\mathcal{P}^{<D_k}_\Phi(\Sigma,\alpha_k)$) have positive lower SFT index.
\end{defn} \begin{remark} The ADC property defined in Definition 3.6 of \cite{lazarev2016contact} requires the non-degeneracy of the $\alpha_i$. Here we define the (strongly) ADC property (with respect to $\Phi$) using the lower SFT index, so the contact forms $\alpha_i$ in the definition do not have to be non-degenerate. That this relaxation is harmless is an immediate consequence of Lemma~\ref{mark's perturbation}; see also Remark 3.7 (2) of \cite{lazarev2016contact}. \end{remark} \begin{lemma}[Lemma 4.10 \cite{mclean2016reeb}]\label{mark's perturbation} Let $\gamma$ be any Reeb orbit of $\alpha$ of period $T$ and define $K:=\dim \ker(D\psi_T|_{\xi(\gamma(0))}-\mathrm{id})$. Fix some Riemannian metric on $C$. There is a constant $\delta > 0$ and a neighborhood $N$ of $\gamma(0)$ so that for any contact form $\alpha_1$ with $||\alpha-\alpha_1||_{\mathcal{C}^2}<\delta$ and any Reeb orbit $\gamma_1$ of $\alpha_1$ starting in $N$ of period in $[T-\delta, T + \delta]$ we have $\mu_{CZ}(\gamma_1) \in [\mu_{CZ}(\gamma)-\frac{K}{2}, \mu_{CZ}(\gamma) + \frac{K}{2}]$. \end{lemma} \subsection{Positive idempotent group $I_+$}\label{ss:definition of positive idempotent group} Now we consider a strongly asymptotically dynamically convex contact manifold $(\Sigma,\xi,\Phi)$ with Liouville filling $(W,\lambda)$. We have the following result due to Lazarev: \begin{theorem}[Proposition 3.8 \cite{lazarev2016contact}]\label{Lazarev main} If $(\Sigma,\xi,\Phi)$ is a strongly asymptotically dynamically convex contact structure, then all Liouville fillings of $(\Sigma,\xi,\Phi)$ have isomorphic $SH^+$. \end{theorem} \subsubsection{Definition of positive idempotent group $I_+$} We would also like to use the ring structure. However, as in Remark~\ref{NOproduct}, we cannot define a product on $SH^+$. Nevertheless, we can use the pair-of-pants product on $SH_*(W)$ to define an invariant for $SH_*^+$ which is independent of the Liouville filling. First, let us recall the tautological short exact sequence: \[0\rightarrow SC_*^{< \epsilon}(W)\rightarrow SC_*(W)\rightarrow SC_*(W)/SC^{<\epsilon}_*(W)\rightarrow 0. \] It induces a long exact sequence: \begin{equation}\label{eqn:exact sequence} \cdots\to SH_*^{<\epsilon}(W)\to SH_*(W)\to SH_*^+(W)\to SH_{*-1}^{<\epsilon}(W)\to\cdots \end{equation} We also have $H^{n-*}(W)\cong SH_*^{<\epsilon}(W,H)$ since the admissible Hamiltonian $H$ is $\mathcal{C}^2$ small in $W$. Therefore we can replace the $SH_*^{<\epsilon}$ terms in equation~\ref{eqn:exact sequence} by $H^{n-*}$; in particular, we have a long exact sequence \begin{equation}\label{eqn: exact sequence for defn} \cdots\to H^0(W)\to SH_n(W)\to SH_n^+(W)\to H^1(W)\to\cdots \end{equation} In fact, the map $H^0(W)\to SH_n(W)$ in equation~\ref{eqn: exact sequence for defn} is a ring homomorphism; see Appendix A of \cite{cieliebak2018symplectic}. Suppose $SH_*(W)\neq 0$. Then $1_W$, the unit of $H^0(W)$, maps to the unit of $SH_*(W)$, which is \textit{not} zero, by Theorem~\ref{Theorem unit is image of 1} (see also Lemma A.3 of \cite{cieliebak2018symplectic}). Therefore $H^0(W)\to SH_n(W)$ is injective, and we will regard $H^0(W)$ as a subring of $SH_n(W)$. We can thus identify elements of $SH_n(W)/H^0(W)$ with elements of $SH^+_n(W)$; in particular, $SH_n(W)/H^0(W)\cong SH^+_n(W)$ if $H^1(W)=0$. Now let us consider the following subgroup of $SH_n(W)$: \begin{equation}\label{def of I} I(W):=\{ \,\alpha \in SH_n(W)\,\big| \, \alpha^2-\alpha \in H^0(W)\}. \end{equation} Notice that the group operation here is addition.
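To make the defining condition in equation~\ref{def of I} concrete, the following short script is a purely illustrative sketch (a hypothetical toy model, not part of the construction above): for a small commutative $\mathbb{Z}_2$-algebra given by a multiplication table on a basis, it enumerates by brute force the elements $x$ with $x^2-x$ lying in the span of the unit, which is exactly the shape of the condition defining $I(W)$.
\begin{verbatim}
from itertools import product

def mult(x, y, table, m):
    """Multiply two elements of a Z/2-algebra, encoded as m-tuples of
    coefficients with respect to a basis e[0], ..., e[m-1]."""
    out = [0] * m
    for i in range(m):
        for j in range(m):
            if x[i] and y[j]:
                for k, c in enumerate(table[i][j]):
                    out[k] ^= c          # addition in characteristic 2
    return tuple(out)

def idempotent_like(table, m):
    """Return all x with x*x - x in the span of the unit e[0]."""
    unit_span = {tuple([0] * m), tuple([1] + [0] * (m - 1))}
    candidates = product([0, 1], repeat=m)
    return [x for x in candidates
            if tuple(a ^ b for a, b in zip(mult(x, x, table, m), x))
            in unit_span]

# Hypothetical toy algebra R = Z/2[t]/(t^2 - t), with basis (1, t):
table = [[(1, 0), (0, 1)],
         [(0, 1), (0, 1)]]
print(idempotent_like(table, 2))   # all four elements of R qualify
\end{verbatim}
In this language, $I(W)$ is the corresponding set computed inside $SH_n(W)$ with the pair-of-pants product.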
Define the \textit{positive idempotent group} $I_+(W)$ by \[I_+(W):=I(W)/H^0(W). \] By the previous analysis, we can regard $I_+(W)$ as a subgroup of $SH_n^+(W)$. In the case where $I_+(W)$ is finite, we can further define the \textit{positive idempotent index} $i(W):=|I_+(W)|$. \subsubsection{Properties of $I_+$} Since $H^0(W,\mathbb{Z}_2)\cong \mathbb{Z}_2$, $I_+(W,\mathbb{Z}_2)$ is determined by $I(W,\mathbb{Z}_2)$. Recall that $SH_*(W,\mathbb{Z}_2)$ has an $H_1(W,\mathbb{Z})/Tors$ grading. The first observation is that elements of $H^0(W)$ have $H_1/Tors$ grading zero. Indeed, this is true for all elements of $I(W)$. Suppose $R$ is an algebra over $\mathbb{Z}_2$ which is graded by a finitely generated torsion-free abelian group $K$. This means that, as a vector space, $R=\bigoplus\limits_{k \in K} R_k$ with the property that if $a\in R_{k_1}, b\in R_{k_2}$ then $ab\in R_{k_1\cdot k_2}$. Define $I_0(R):=\{0,1\}$ and $I(R):=\{ x\in R\,|\,x^2-x\in I_0\}$. \begin{lemma}[Lemma 7.6 \cite{mclean2007lefschetz}] If $a\in I(R)$ then $a\in R_e$, where $e$ is the identity of the group $K$. \end{lemma} \begin{proof} We argue by contradiction. Suppose we have $a=a_{k_1}+\cdots+a_{k_n}$ where $k_i\in K$ and $a_{k_i}\in R_{k_i}$, $k_1\neq e$. Then $a^2=a_{k_1^2}+\cdots+a_{k_n^2}$ (working in characteristic $2$ with a commutative product, the cross terms cancel). Since $K$ is torsion free, there is a group homomorphism $p: K\to \mathbb{Z}$ such that $p(k_1)\neq 0$. This map gives $R$ a $\mathbb{Z}$-grading. Any element $b$ of $R$ can be uniquely written as $b=b_1+\cdots+b_m$, where the $b_i$ are non-zero homogeneous elements of $R$ of grading $d_i\in \mathbb{Z}$. We can define a function $f$ as follows: \[f(b):=\min\{ |d_i| \,:\, d_i\neq 0\}. \] Note that $f$ is well-defined only if at least one of the $d_i$'s is non-zero. When it is well-defined, $f(b+1)=f(b)$ because $1\in R_e$ has grading $0$. The assumption $p(k_1)\neq 0$ implies that $f(a)$ is well defined and positive. On the other hand, we have $a\in I(R)$, which means $a^2=a$ or $a^2=a+1$. Either way, this implies $f(a^2)=f(a)$, which contradicts the fact that $f(a^2)\geq 2f(a)$. \end{proof} \begin{corollary}\label{nullhomologous of idempotents} Any element of $I(W)$ is null-homologous in $H_1(W,\mathbb{Z})/Tors$, i.e.\ it has trivial $H_1(W,\mathbb{Z})/Tors$ grading. \end{corollary} Therefore, we can refine our definition of $I(W)$ to \begin{equation*} I(W):=\{ \,\alpha \in SH^0_n(W)\,\big| \, \alpha^2-\alpha \in H^0(W)\}, \end{equation*} where $SH^0_*(W)$ is generated by all null-homologous Reeb orbits. In the case of strongly asymptotically dynamically convex contact manifolds (with respect to a certain framing $\Phi$), different Liouville fillings have isomorphic positive idempotent groups, as stated in Theorem~\ref{Main Thm}; the proof is deferred to Section~\ref{section:indepnedence of I+}. \subsection{Effect of contact surgery}\label{subsection:effect of conatct surgery} \begin{theorem}[Theorem 3.15 \cite{lazarev2016contact}, \cite{yau2004cylindrical}]\label{subcritical surgery of ADC} If $(Y_1^{2n-1},\xi_1), n\geq 2$, is an asymptotically dynamically convex contact structure and $(Y_2,\xi_2)$ is the result of index $k\neq 2$ subcritical contact surgery on $(Y_1,\xi_1)$, then $(Y_2,\xi_2)$ is also asymptotically dynamically convex. \end{theorem} Now we are in a position to prove Proposition~\ref{new prop}. \begin{proof}[Proof of Proposition~\ref{new prop}] Recall that $W_1$ is a flexible Weinstein domain, so $SH_*(W_1)=0$ (see \cite{bourgeois2012effect}).
By Lemma~\ref{ADC boundary}, $\partial W_1$ is asymptotically dynamically convex, and so is $\partial W$ by assumption. Moreover, $\partial V_k$ is obtained by attaching a Weinstein $1$-handle to an asymptotically dynamically convex contact manifold, and therefore it is asymptotically dynamically convex by Theorem~\ref{subcritical surgery of ADC}. A well-known fact is that subcritical surgery does not change symplectic homology as a ring; see Theorem~\ref{invariant of SH}. We have \[ SH_n(W_i)\cong \underbrace{SH_n(W)\oplus\cdots\oplus SH_n(W)}_i \oplus \underbrace{SH_n(W_1)\oplus\cdots\oplus SH_n(W_1)}_{k-i}=\bigoplus_{j=1}^i SH_n(W), \] so we have \[I(W_i)\cong \underbrace{I(W)\oplus\cdots\oplus I(W)}_i. \] Since $SH_*(W)\neq 0$ and is finite dimensional, $\{0,1_{W}\}\subset I(W)$. We therefore have $2\leq | I(W)|<\infty$, so the numbers $|I(W_i)|=|I(W)|^i$ are mutually distinct. Therefore, $|I_+(W_i)|\neq |I_+(W_j)|$ for $i\neq j$. \end{proof} \begin{theorem}[ \cite{lazarev2016contact}, \cite{yau2004cylindrical}]\label{theorem:contact surgery} Let $(\Sigma_1 ,\xi_1)$ be a strongly asymptotically dynamically convex contact structure with respect to $\Phi$, with $(\alpha_k,D_k)$ as in Definition~\ref{def-adc}, and let $(\Sigma_2,\xi_2)$ be the result of index $2$ contact surgery on $\Lambda^1\subset\Sigma_1$ such that the trivialization $\Phi$ extends to the handle. Then $(\Sigma_2,\xi_2)$ is also strongly asymptotically dynamically convex with respect to $\Phi$. \end{theorem} \begin{remark} Since the trivialization $\Phi$ of the canonical bundle of $(\Sigma_1 ,\xi_1)$ extends over the attaching handle, by abuse of notation the trivialization of the canonical bundle of $(\Sigma_2,\xi_2)$ obtained by extending $\Phi$ over the handle is still denoted by $\Phi$. \end{remark} \begin{prop}[Proposition 5.5 \cite{lazarev2016contact}, \cite{yau2004cylindrical}]\label{key} Let $\Lambda^{k-1}\subset (\Sigma_1^{2n-1},\alpha_1),n>1,$ be an isotropic sphere with $k<n$. For any $D>0$ and integer $i>0$, there exists $\epsilon=\epsilon(D,i)>0$ such that if $(\Sigma_2,\alpha_2)$ is the result of contact surgery on $U^\epsilon(\Lambda,\alpha)$ with respect to the trivialization $\Phi$, then there is a grading preserving bijection between $\mathcal{P}^{<D}_\Phi(\Sigma_2,\alpha_2)$ and $\mathcal{P}^{<D}_\Phi(\Sigma_1,\alpha_1)\cup \{\gamma^1,\cdots,\gamma^l\}$, where $|\gamma^i|=2n-k-4+2i$. \end{prop} \begin{remark} The proof largely follows Proposition 5.5 of \cite{lazarev2016contact}, with only minor changes regarding the non-contractible Reeb orbits. The difference in the strongly ADC case is that we need to choose the trivialization to define the Conley-Zehnder index. \end{remark} \begin{proof}[Proof of Proposition~\ref{key}] As explained in \cite{yau2004cylindrical}, the surgery belt sphere $S^{2n-k-1}$ contains a contact sphere $(S^{2n-2k-1},\xi_{std})$. After taking an appropriate sequence of contact forms on $(\Sigma_2,\xi_2)$, the Reeb orbits of $(\Sigma_2,\xi_2)$ correspond to the old Reeb orbits of $(\Sigma_1,\xi_1)$, plus the new orbits of $(S^{2n-2k-1},\xi_{std})$ inside the belt sphere of action less than $D$. The correspondence is natural since the trivialization of the canonical bundle extends over the surgery. These new orbits correspond to the iterates $\gamma^1,\cdots,\gamma^l$ of a single Reeb orbit $\gamma$; see \cite{yau2004cylindrical}.
Moreover, $\mu_{CZ}(\gamma^i)=n-k-1+2i$ and therefore $|\gamma^i|=2n-k-4+2i$. Meanwhile, by shrinking the handle, the action of $\gamma$ can be made arbitrarily small, and therefore we can ensure that arbitrarily large iterates of $\gamma$ have action less than $D$. \end{proof} For a Legendrian $\Lambda \subset \Sigma$, recall that $J^1(\Lambda) \simeq T^*\Lambda\times\mathbb{R}$. Choose a Riemannian metric on $\Lambda$ and let $U^\epsilon(\Lambda) \subset (J^1(\Lambda),\alpha_{std})$ be the set $\{ ||y||<\epsilon,|z|<\epsilon\}$, where the metric on $\Lambda$ is used to define $||y||$ on the fibers and $z$ is the coordinate on $\mathbb{R}$. If $\Lambda \subset (Y,\alpha)$ is Legendrian, let $U^{\epsilon}(\Lambda,\alpha) \subset (Y,\alpha)$ be a neighborhood of $\Lambda$ that is strictly contactomorphic to $U^{\epsilon}(\Lambda)$. \begin{prop}[Proposition 6.7 \cite{lazarev2016contact}]\label{prop6.7} Let $\alpha_1>\alpha_2$ be contact forms for $(\Sigma,\xi)$ and let $\Lambda \subset (\Sigma,\xi)$ be an isotropic submanifold with trivial symplectic conormal bundle. Then for any sufficiently small $\delta_1,\delta_2$, there exists a contactomorphism $h$ of $(\Sigma,\xi)$ such that \begin{itemize} \item $h$ is supported in $U^\epsilon(\Lambda,\alpha_1)$, $h|_\Lambda=Id$, and $h^*\alpha_2<4\alpha_1$ \item $h^*\alpha_2|_{U^{\delta_1}(\Lambda,\alpha_1)}=c\alpha_1|_{U^{\delta_1}(\Lambda,\alpha_1)}$ for some constant $c$ (depending on $\delta_1,\delta_2$) \item $h(U^{\delta_1}(\Lambda,\alpha_1))\subset U^{\delta_2}(\Lambda,\alpha_2)$. \end{itemize} \end{prop} \begin{prop}[Remark 6.5 \cite{lazarev2016contact}]\label{remark6.5} Let $\Lambda \subset (\Sigma_1^{2n-1},\xi_1),n>2$, be an isotropic sphere and let $(\Sigma_2,\xi_2)$ be the result of contact surgery on $\Lambda$ which extends the chosen trivialization $\Phi$ of the canonical bundle. Suppose $(\Sigma_1,\xi_1)$ is a strongly asymptotically dynamically convex contact structure with respect to the trivialization $\Phi$, with $(\alpha_k,D_k)$ as in Definition~\ref{def-adc}. If $\alpha_k|_{U^\epsilon(\Lambda,\alpha_1)}=c_k\alpha_1|_{U^\epsilon(\Lambda,\alpha_1)}$ for some constants $\epsilon,c_k$, then $(\Sigma_2,\xi_2)$ is also strongly asymptotically dynamically convex with respect to the trivialization $\Phi$. \end{prop} \begin{proof}[Proof of Theorem~\ref{theorem:contact surgery}] We proceed exactly as Lazarev did, keeping in mind that we are dealing with the strongly ADC property: we apply Proposition~\ref{prop6.7} so that the conditions of Proposition~\ref{remark6.5} are satisfied. \end{proof} \section{$I_+$ is an invariant of ADC contact manifolds}\label{section:indepnedence of I+} We will follow Lazarev's approach and use the neck-stretching procedure (for details, see Section 3.3 of~\cite{lazarev2016contact}). \begin{prop}[Proposition 3.10 \cite{lazarev2016contact}]\label{lazarev proof for differential} Suppose that $(Z,\alpha)$ is strongly ADC with respect to the trivialization $\Phi$ and all elements of $\mathcal{P}^{<D}_\Phi(Z,\alpha)$ have positive degree. If $A_{H_+}(x_+)-A_{H_-}(x_-)<D$, then there exists $R_0 \in (0,1-\delta)$ such that for any $R\leq R_0$, all rigid $(H_s,J_{R,s})$-Floer trajectories are contained in $\widehat{W}\setminus V$. \end{prop} \begin{remark} If $H_s$ is independent of $s$, then the Floer trajectories define the differential; if $H_s$ is a decreasing homotopy, then the $(H_s,J_{R,s})$-Floer trajectories define the continuation map.
\end{remark} In P.~Uebele's paper \cite{uebele2015periodic}, the pair-of-pants product is defined for ``index-positive'' contact manifolds, where the symplectic homology used is actually Rabinowitz-Floer homology. As that paper points out, though, the ring structure is not well defined on $SH^+$. However, at the chain level, if the pair-of-pants curves are asymptotic to Hamiltonian orbits of positive action, then by the neck-stretching technique one can show that they do not enter the interior of the Liouville filling. In \cite{uebele2015periodic}, this is proved for index-positive contact manifolds. It is not true for strongly ADC contact manifolds in general. That being said, the pair-of-pants curves do \textit{not} enter the interior of the filling when the indices of the Hamiltonian orbits at the asymptotes are high enough. To be precise, we have the following: \begin{prop}\label{pair of pants product --nonberaking } Suppose that $(Z,\alpha)$ is strongly ADC with respect to the trivialization $\Phi$ and all elements of $\mathcal{P}^{<D}_\Phi(Z,\alpha)$ have positive reduced Conley-Zehnder index. Furthermore, let $x_i$ ($i=1,2,3$) be non-constant Hamiltonian orbits with $A_H(x_i)<D/2$ such that $\mu_{CZ}(x_1)+\mu_{CZ}(x_2)-\mu_{CZ}(x_3)=n$ and $\mu_{CZ}(x_i)\geq n$ for $i=1,2$. Then there exists $R_0 \in (0,1-\delta)$ such that for any $R\leq R_0$, all rigid pair-of-pants curves are contained in $\widehat{W}\setminus V$. \end{prop} \begin{figure} \caption{Top of the Floer building is connected (such breaking does \textit{not} occur).}\label{Fig: connected top floer biulding} \end{figure} \begin{proof} The proof is a combination of Proposition 3.10 of \cite{lazarev2016contact} and Lemma 3.12 of \cite{uebele2015periodic}. First of all, we have to rule out the breaking shown in Figure~\ref{Fig: connected top floer biulding} (and similarly with $x_1$ and $x_2$ exchanged). Suppose we have such a breaking; then the top level has positive dimension, and we have (see Lemma 3.10 of \cite{uebele2015periodic}) \[\mu_{CZ}(x_1)-\mu_{CZ}(x_3)-\mu_{CZ}(\gamma_1)-n+3\geq 0 \] and \[\mu_{CZ}(x_2)-\mu_{CZ}(\gamma_2)+3\geq 0. \] Since \[\mu_{CZ}(x_1)+\mu_{CZ}(x_2)-\mu_{CZ}(x_3)=n, \] these conditions reduce to \[\mu_{CZ}(\gamma_1)\leq 3-\mu_{CZ}(x_2) \quad \text{and}\quad \mu_{CZ}(\gamma_2)\leq 3+\mu_{CZ}(x_2).\] In particular, \[\mu_{CZ}(\gamma_1)\leq 3-n.\] Meanwhile, the Floer energy of the top level is \[0\leq E= A_H(x_1)-A_H(x_3)-A(\gamma_1),\] so \[0<A(\gamma_1)\leq A_H(x_1)-A_H(x_3)< D. \] Hence $\gamma_1 \in \mathcal{P}^{<D}_\Phi(Z,\alpha)$, which implies $\mu_{CZ}(\gamma_1)>3-n$, a contradiction. Now that we know that the top of the Floer building is connected, we can proceed as in the proof of Proposition 3.10 of \cite{lazarev2016contact}. Suppose, for contradiction, that the pair-of-pants curve breaks after neck-stretching and that $\gamma_k$ are the Reeb orbits in the top of the Floer building as in \cite{lazarev2016contact}. The virtual dimension of the moduli space of the top Floer building is \[ |x_1|+|x_2|-|x_3|-\sum |\gamma_k|<0, \] a contradiction. \end{proof}
Now if we further require that the admissible Hamiltonian $H$ has a unique minimum (which is always possible and compatible with our requirements on admissible Hamiltonians), then the Floer chain complex splits as $SC_*(W,H,J)=O_*(W,H,J)\oplus C_*(W,H,J)$, where $O_*(W,H,J)$ is generated by all non-constant Hamiltonian orbits and $C_*(W,H,J)$ is generated by all constant Hamiltonian orbits (critical points of $H$). Since $H$ is $\mathcal{C}^2$ small in $W$, the action of the critical points is small, and the Floer differential $d$ coincides with the Morse boundary operator $d_1$ there. We therefore have $(SC_*^{<\delta}(W,H,J),d)= (C_*(W,H,J),d_1)$ and $SC_*^+(W,H,J)=O_*(W,H,J)$. For degree reasons, $C_n(W,H,J)=\mathbb{Z}_2\langle p\rangle$, where $p$ is the unique minimum of $H$. Note that $d(p)=d_1(p)= 0$, so $d(x+p)=0$ implies $d(x)=d(p)=0$. \begin{figure} \caption{Pair-of-pants product for different fillings $W$ and $V$ and the natural identification of $C_n^{<D}$.}\label{Fig:difference in pair of pants product} \end{figure} \begin{proof}[Proof of Theorem~\ref{Main Thm}] As shown in Figure~\ref{Fig:difference in pair of pants product}, let $W,V$ be two different Liouville fillings of a strongly ADC contact manifold $(\Sigma,\lambda)$. Suppose $H_W^D,H_V^D$ are Hamiltonians (as in Subsection~\ref{sss:Ad Hamiltonian}) whose slopes at infinity are $D \notin Spec(\Sigma,\lambda)$. We can further assume that they have unique minima, denoted by $p$ and $q$ respectively. Note that any element $x \in O_*(W,H_W,J_W)$ has action $\mathcal{A}_{H_W}(x)<D$. As shown above, $O_*(W,H_W,J_W)=SC_*^+(W,H_W,J_W)$. After neck-stretching, we can assume that \[ (H_W,J_W)|_{\Sigma \times [R,\infty)}\equiv(H_V,J_V)|_{\Sigma \times [R,\infty)}, \] so we have $O_*(W,H_W,J_W)=O_*(W,H_V,J_V)$. Proposition~\ref{lazarev proof for differential} shows that Floer cylinders with asymptotes in $SC_*^+(W,H_W,J_W)$ are entirely contained in $\Sigma \times [R,\infty)$. Therefore the Floer differentials of $O_*(W,H_W,J_W)$ and $O_*(W,H_V,J_V)$ coincide.
We will suppress $W$ and $V$ in the notation and write $(O_*(H,J),\partial)$ (or $(O_*^{<K}(H,J),\partial)$ when it is filtered above by action $K$). We have the pair-of-pants product $\otimes_W$ on \[SC_n^{<D/2}(W,H^D,J)=O_n^{<D/2} (H^D,J)\oplus C_n^{<D/2} =O_n^{<D/2} (H^D,J)\oplus \mathbb{Z}_2\langle p\rangle, \] defined as \begin{align} SC_n^{<D/2}(W,H^D,J) \otimes SC_n^{<D/2}(W,H^D,J) &\rightarrow SC_n^{<D}(W,H^D,J)\\ (x,y)&\mapsto x\otimes_W y. \end{align} By Proposition~\ref{pair of pants product --nonberaking }, $\otimes_W$ coincides with $\otimes_V$ on the components in $O_n^{<D} (H^D,J)$; that is, for $x,y \in O_n^{<D/2} (H^D,J)$, $x\otimes_W y=z+\delta_W(x,y)$, where $z \in O_n^{<D} (H^D,J)$ and $\delta_W(x,y) \in \mathbb{Z}_2\langle p\rangle$. Note that $\delta_W(x,y)$ is closed in $SC_n^{<D}(H,J)$. Likewise, we have $x\otimes_V y=z+\delta_V(x,y)$, where $z \in O_n^{<D} (H^D,J)$ and $\delta_V(x,y) \in \mathbb{Z}_2\langle q\rangle$. Now for any $\alpha \in I^{<D/2}(W,H_W,J_W) \subset SH_n^{<D/2}(W,H_W,J_W)$, we have \[\alpha=[x+\epsilon p]_W=[x]_W+\epsilon[p]_W=[x]_W+\epsilon e_{H_W}, \] where $x\in O_n^{<D/2}(H,J)$ and $\epsilon=0$ or $1$. The relation $x\otimes_W x=z+\delta_W(x,x)$ implies (working over $\mathbb{Z}_2$, so that the cross term vanishes and $\epsilon^2 e_{H_W}^2=\epsilon e_{H_W}$) \[ \alpha^2-\alpha=[x]_W^2-[x]_W=[z-x+\delta_W(x,x)]_W=[z-x]_W+[\delta_W(x,x)]_W.\] So $\alpha \in I^{<D/2}(W,H_W,J_W)$ is equivalent to \[ [z-x]_W+[\delta_W(x,x)]_W \in H^0(W).\] But since $[\delta_W(x,x)]_W \in H^0(W)$, the condition $\alpha \in I^{<D/2}(W,H_W,J_W)$ is equivalent to $[z-x]_W \in H^0(W)$. Hence for $x\in O_n^{<D/2}(H,J)$ with $\partial(x)=0$ (where $\partial$ is the Floer differential on $O_n^{<D/2}(H,J)$), \[[x]_W^+ \in I_+(W,H_W,J_W) \Longleftrightarrow [z-x]_W^+ =0 \text{ in } SH_n^{+,<D/2}(H,J),\] where $[y]_W^+$ stands for the equivalence class of $y\in O_*(H,J)$ in $SH_n^+(W,H_W,J_W)$. We can prove the same result for $V$ similarly. Therefore we have an isomorphism between $I_+^{<D/2}(W,H_W,J_W)$ and $I_+^{<D/2}(V,H_V,J_V)$: $$ [x]^+_W \mapsto [x]^+_V.$$ Since $SH_n^{+,<D/2}(W,H_W,J_W)$ and $SH_n^{+,<D/2}(V,H_V,J_V)$ can both be computed from $(\Sigma\times[R,\infty), H, J)$, as the Floer cylinders never enter the interior, we have the identification \[ SH_*^{+,<D/2}(W,H_W,J_W)\cong H_*(O_*^{<D/2}(H,J),\partial)\cong SH_n^{+,<D/2}(V,H_V,J_V). \] The inclusion maps $SH_n^{+,<D/2} \to SH_n^{+}$ commute with the above isomorphisms, \[\begin{tikzcd} I_+^{<D/2}(W,H_W,J_W) \arrow[r,"i"] \arrow[d,"\cong"] & SH_n^{+,<D/2}(W,H_W,J_W) \arrow[r,"i"]\arrow[d ,"\cong"] & SH_n^{+}(W,H_W,J_W) \arrow[d,"\cong"] \\ I_+^{<D/2}(V,H_V,J_V) \arrow[r ,"i"] & SH_n^{+,<D/2}(V,H_V,J_V)\arrow[r,"i"]&SH_n^{+}(V,H_V,J_V) \end{tikzcd} \] and we can therefore take the direct limit with respect to $H_W$. Since we already know that $SH_*^+(W)$ is isomorphic to $SH_*^+(V)$ by Theorem~\ref{Lazarev main}, it follows that $I_+(W)\cong I_+(V)$.
\end{proof} \begin{remark} We can also proceed exactly as in the proof of Proposition 3.8 in \cite{lazarev2016contact}. The key point is to use the \textit{essential complex} defined in that proof. \end{remark} \section{Brieskorn Manifolds} \subsection{Definition of Brieskorn manifolds}\label{brieskorn mfld} Let $\mathbf{a}=(a_0,a_1,\cdots,a_n)$ be an $(n+1)$-tuple of integers with $a_i>1$, write $\mathbf{z}:=(z_0,z_1,\cdots,z_n)\in \mathbb{C}^{n+1}$, and set $f(\mathbf{z}):=z_0^{a_0}+z_1^{a_1}+\cdots+z_n^{a_n}$. We define the \textit{Brieskorn variety} as \begin{equation} V_\mathbf{a}(t):=\{(z_0,z_1,\cdots,z_n)\in \mathbb{C}^{n+1}| f(\mathbf{z})=t\} \quad \text{for each}\quad t\in \mathbb C. \end{equation} We will often suppress $\mathbf{a}$ when it causes no confusion, and we define $X_t^s:=V(t)\cap B(s)$.
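For instance (a standard example, recalled only for orientation and not used in the arguments below), for $\mathbf{a}=(2,2,\cdots,2)$ the polynomial $f$ is a non-degenerate quadratic form: $V_\mathbf{a}(0)$ is the affine quadric cone, the simplest ($A_1$) singularity, while $V_\mathbf{a}(t)$ for $t\neq 0$ is a smooth affine quadric, diffeomorphic to $T^*S^n$.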
Further, with $S^{2n+1}$ denoting the unit sphere in $\mathbb{C}^{n+1}$, we define the \textit{Brieskorn manifold} as the intersection of the Brieskorn variety $V_\mathbf{a}(0)$ with the unit sphere: \[\Sigma(\mathbf{a}):=V_\mathbf{a}(0)\cap S^{2n+1}.\] \begin{lemma}[Lemma 96 \cite{fauck2016rabinowitz}, Lemma 7.1.1 \cite{geiges2008introduction}]\label{lemma} $\Sigma(\mathbf{a})$ and $V_\mathbf{a}(t),t\neq 0,$ are smooth manifolds. \end{lemma} \begin{proof} We set $\rho(z):=||z||^2=\sum z_k\bar{z_k}$ and consider the maps \[f:\mathbb{C}^{n+1} \to \mathbb{C} \qquad \text{and} \qquad (f,\rho):\mathbb{C}^{n+1}\to \mathbb{C}\times\mathbb{R}.\] Since $V_\mathbf{a}(t)=f^{-1}(t)$ and $\Sigma(\mathbf{a})=(f,\rho)^{-1}(0,1)$, it suffices to show that $t$ (respectively $(0,1)$) is a regular value. With a little Wirtinger calculus (and using the fact that $f$ is holomorphic) we find the Jacobian matrix \[ D(f,\rho)= \begin{bmatrix} a_0z_0^{a_0-1} &\cdots & a_nz_n^{a_n-1} & 0 & \cdots & 0 \\ 0 & \cdots & 0 & a_0\bar{z_0}^{a_0-1} & \cdots & a_n\bar{z_n}^{a_n-1}\\ \bar{z_0}& \cdots& \bar{z_n} & z_0 & \cdots &z_n \end{bmatrix} \] For $\mathbf{z}\neq 0$ the first two rows of $D(f,\rho)$ are linearly independent, which implies that every $t\neq0$ is a regular value of $f$. If $\mathbf{z}$ is a point where this matrix has rank smaller than 3, there exists a non-zero complex number $\lambda$ such that $\bar{z_k}=\lambda a_k z_k^{a_k-1}$ for all $k$ and hence \[ \sum\limits_{k=0}^{n}\frac{z_k\bar{z_k}}{a_k}=\lambda\sum\limits_{k=0}^n z_k^{a_k}=\lambda\cdot f(\mathbf{z}). \] This equality is incompatible with the conditions $\rho(\mathbf{z})=1$ and $f(\mathbf{z})=0$ at a point $\mathbf{z}\in \Sigma(\mathbf{a})$, so $(0,1)$ is a regular value of $(f,\rho)$. \end{proof} \subsection{Topology of Brieskorn manifolds} We now collect some topological facts about Brieskorn manifolds without proof. \begin{prop}[Theorem 5.2 \cite{milnor2016singular}]\label{high connectedness of brieskorn manifold} A Brieskorn manifold $\Sigma(\mathbf{a})^{2n-1}$ is $(n-2)$-connected.
\end{prop} \subsection{Trivialization and Conley-Zehnder index} Let us consider on $\mathbb{C}^{n+1}$ the Hermitian form given by \[<\xi,\zeta>_\mathbf{a}:=\frac{1}{2}\sum_{k=0}^{n}a_k\xi_k\bar{\zeta_k}.\] It defines a symplectic 2-form \[\omega_\mathbf{a}:=\frac{i}{4}\sum_{k=0}^{n}a_kdz_k\wedge d\bar{z_k}.\] Notice that $Y_\lambda(\mathbf{z}):=\frac{\mathbf{z}}{2}$ is a Liouville vector field for $\omega_\mathbf{a}$, with the corresponding 1-form \[\lambda_\mathbf{a}:=\omega_\mathbf{a}(Y_\lambda,\cdot)=\frac{i}{8}\sum_{k=0}^{n}a_k(z_kd\bar{z_k}-\bar{z_k}dz_k).\] \begin{prop}[Proposition 97 \cite{fauck2016rabinowitz}, \cite{lutz1976structures}]\label{Lutz} The restriction $\alpha_\mathbf{a}:=\lambda_\mathbf{a}|_{\Sigma(\mathbf{a})}$ is a contact form on $\Sigma(\mathbf{a})$ with Reeb vector field $R_\mathbf{a}$ given by \[R_\mathbf{a}=4i(\frac{z_0}{a_0},\frac{z_1}{a_1},\cdots,\frac{z_n}{a_n}).\] \end{prop} \begin{proof} The gradient of $f$ with respect to $<\cdot,\cdot>_\mathbf{a}$ is given by \[\nabla_\mathbf{a}f:=2(\bar{z_0}^{a_0-1},\bar{z_1}^{a_1-1},\cdots,\bar{z_n}^{a_n-1}).\] The Liouville vector field $Y_V$ of the restricted 1-form $\lambda_\mathbf{a}|_{V_\mathbf{a}(0)}$ with respect to the restricted symplectic form $\omega_\mathbf{a}|_{V_\mathbf{a}(0)}$ is given by \[Y_V:=Y_\lambda-\frac{<\nabla_\mathbf{a}f,Y_\lambda>_\mathbf{a}}{||\nabla_\mathbf{a}f||^2_\mathbf{a}}\cdot\nabla_\mathbf{a}f.\] Note that $TV_\mathbf{a}(t)=\ker df=\ker<\nabla_\mathbf{a}f,\cdot>_\mathbf{a}$, which shows that $Y_V\in TV_\mathbf{a}(0)$. Furthermore, for any $\xi \in TV_\mathbf{a}(0)$ we have \[\omega_\mathbf{a}(Y_V,\xi)=\omega_\mathbf{a}(Y_\lambda,\xi)-\frac{<\nabla_\mathbf{a}f,Y_\lambda>_\mathbf{a}}{||\nabla_\mathbf{a}f||^2_\mathbf{a}}\cdot\omega_\mathbf{a}(\nabla_\mathbf{a}f,\xi)=\lambda_\mathbf{a}(\xi)+\frac{<\nabla_\mathbf{a}f,Y_\lambda>_\mathbf{a}}{||\nabla_\mathbf{a}f||^2_\mathbf{a}}\cdot \underbrace{\mathrm{Im}<\nabla_\mathbf{a}f,\xi>_\mathbf{a}}_{=0}=\lambda_\mathbf{a}(\xi).\] This shows that $Y_V$ is the Liouville vector field for the pair $(\omega_\mathbf{a}|_{V_\mathbf{a}(0)} ,\lambda_\mathbf{a}|_{V_\mathbf{a}(0)})$. Now notice that $d\rho=\sum\limits_{k=0}^{n}\bar{z_k}dz_k+z_kd\bar{z_k}$ ($\rho$ is defined in the proof of Lemma \ref{lemma}), and we have \[d\rho(Y_V)=\sum \frac{z_k\bar{z_k}}{2}-\frac{<\nabla_\mathbf{a}f,Y_\lambda>_\mathbf{a}}{||\nabla_\mathbf{a}f||^2_\mathbf{a}}\sum 2\bar{z_k}\cdot\bar{z_k}^{a_k-1}=\frac{\rho(\mathbf{z})}{2} -\frac{<\nabla_\mathbf{a}f,Y_\lambda>_\mathbf{a}}{||\nabla_\mathbf{a}f||^2_\mathbf{a}}\cdot2\overline{f(\mathbf{z})}=\frac{1}{2}, \] since $\rho(\mathbf{z})=1$ and $f(\mathbf{z})=0$. It follows that $Y_V$ points out of the unit sphere and hence out of $\Sigma(\mathbf{a})$ in $V_\mathbf{a}(0)$; hence $\Sigma(\mathbf{a})$ is a contact hypersurface in $V_\mathbf{a}(0)$. Now we check that $R_\mathbf{a}$ is the Reeb vector field of $\alpha_\mathbf{a}$. For any $\mathbf{z}\in \Sigma(\mathbf{a})$, we have \[<R_\mathbf{a},\nabla_\mathbf{a}f>_\mathbf{a}=4i\sum_{k=0}^{n}z_k^{a_k}=4if(\mathbf{z})=0,\] \[d\rho(R_\mathbf{a})=\sum_{k=0}^{n}\Big(\bar{z_k}\cdot\frac{4iz_k}{a_k}+z_k\cdot\Big(-\frac{4i\bar{z_k}}{a_k}\Big)\Big)=0. \] The two equations above show that $R_\mathbf{a}$ is tangent to $\Sigma(\mathbf{a})$.
We also have \[\alpha_\mathbf{a}(R_\mathbf{a})=\lambda_\mathbf{a}(R_\mathbf{a})=\frac{i}{8}\sum_{k=0}^{n}a_k\Big(z_k\cdot\Big(-\frac{4i}{a_k}\bar{z_k}\Big)-\bar{z_k}\cdot\frac{4i}{a_k}z_k\Big)=\sum_{k=0}^{n}z_k\bar{z_k}=\rho(\mathbf{z})=1,\] \[\iota_{R_\mathbf{a}}d\alpha_\mathbf{a}=\frac{i}{4}\sum_{k=0}^{n}a_k\Big(\frac{4iz_k}{a_k}d\bar{z_k}+\frac{4i\bar{z_k}}{a_k}dz_k\Big)=-\sum_{k=0}^{n}(z_kd\bar{z_k}+\bar{z_k}dz_k)=-d\rho.\] The latter form vanishes on vectors in $T\Sigma(\mathbf{a})$; therefore $R_\mathbf{a}$ is the Reeb vector field. \end{proof} \begin{prop}[Corollary 98 \cite{fauck2016rabinowitz}]\label{trivialization of the symplectic complement} The symplectic complement $\xi_\mathbf{a}^{\bot}$ with respect to $\omega_\mathbf{a}$ of the contact structure $\xi_\mathbf{a}:= \ker \alpha_\mathbf{a}$ inside $\mathbb C^{n+1}$ is symplectically trivialized by the following 4 vector fields: \begin{itemize} \item $X_1:=\frac{\nabla_\mathbf{a}f}{||\nabla_\mathbf{a}f||_\mathbf{a}}$ \item $Y_1:=i\cdot X_1$ \item $X_2:=Y_V$ \item $Y_2:=R_\mathbf{a}$. \end{itemize} \end{prop} \begin{proof} $X_1,Y_1$ generate the complex complement of $TV_\mathbf{a}(0)$ while $X_2,Y_2$ generate the symplectic complement of $\xi_\mathbf{a}$ in $TV_\mathbf{a}(0)$, so we have \[\omega_\mathbf{a}(X_1,X_2)=\omega_\mathbf{a}(X_1,Y_2)=\omega_\mathbf{a}(Y_1,X_2)=\omega_\mathbf{a}(Y_1,Y_2)=0.\] Meanwhile we have \[\omega_\mathbf{a}(X_1,Y_1)=1,\quad\omega_\mathbf{a}(X_2,Y_2)=\lambda_\mathbf{a}(R_\mathbf{a})=1,\] where the latter equation comes from the proof of Proposition~\ref{Lutz}. \end{proof} The Reeb vector field $R_\mathbf{a}=4i(\frac{z_0}{a_0},\frac{z_1}{a_1},\cdots,\frac{z_n}{a_n})$ generates the flow \[\psi_\mathbf{a}^t(\mathbf{z})=(e^{\frac{4it}{a_0}}\cdot z_0,\cdots, e^{\frac{4it}{a_n}}\cdot z_n). \] The submanifolds $\Sigma_T$ of Reeb orbits of period $T\in \frac{\pi}{2}\mathbb{Z}$ are given by \[\Sigma_T=\Big\{\mathbf{z}\in \Sigma(\mathbf{a})\,\Big|\,z_k=0 \, \text{ if } \, \frac{T}{a_k} \notin \frac{\pi}{2}\mathbb{Z}\,\Big\}. \] $\Sigma_T$ is not empty if and only if the relation $\frac{T}{a_k} \in \frac{\pi}{2}\mathbb{Z}$ is satisfied by at least $2$ different $k$, as every $\mathbf{z} \in \Sigma(\mathbf{a})$ has at least $2$ non-zero entries. Note that $\Sigma_T$ is the intersection $\Sigma(\mathbf{a})\cap V(\mathbf{a},T)$, where $V(\mathbf{a},T)$ denotes the complex linear subspace \[V(\mathbf{a},T):=\Big\{\mathbf{z}\in \mathbb{C}^{n+1}\,\Big|\,z_k=0 \text{ if } \frac{T}{a_k} \notin \frac{\pi}{2}\mathbb{Z}\,\Big\}, \] whose complex dimension is given by \[\dim_{\mathbb{C}} V(\mathbf{a},T)=\Bigg|\Big\{k\,\Big|\, 0\leq k\leq n,\,\frac{T}{a_k} \in \frac{\pi}{2}\mathbb{Z}\, \Big\}\Bigg|, \] where $|S|$ denotes the cardinality of the set $S$. We notice that $\Sigma_T$ can therefore be identified with the Brieskorn manifold $\Sigma(\mathbf{a}(T))$, where \[ \mathbf{a}(T)=(a_0,\cdots,\hat{a_i},\cdots,a_n) \] is a subtuple of $\mathbf{a}$; here $\hat{a_i}$ means that the entry $a_i$ is omitted whenever $\frac{T}{a_i} \notin \frac{\pi}{2}\mathbb{Z}$. The differential of the flow $\psi_\mathbf{a}^t$ at time $t$ is given by \[D\psi_\mathbf{a}^t=\mathrm{diag}\big(e^{4it/a_0},\cdots,e^{4it/a_n}\big). \] It follows that \[\ker(D_\mathbf{z}\psi_\mathbf{a}^T\big|_{T_\mathbf{z}\Sigma(\mathbf{a})}-\mathrm{id})=T_\mathbf{z}\Sigma(\mathbf{a})\cap V(\mathbf{a},T)=T_\mathbf{z}\Sigma_T. \] Therefore $\Sigma_T$ is a Morse-Bott submanifold.
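The following small script is an illustrative sketch (with a hypothetical exponent tuple; it is not used anywhere in the arguments): given $\mathbf{a}$, it lists the periods $T=m\pi/2$ within one full period of the Reeb flow for which $\Sigma_T$ is non-empty, together with $\dim_{\mathbb{C}}V(\mathbf{a},T)=\#\{k : a_k \mid m\}$, using the description of $\Sigma_T$ above.
\begin{verbatim}
from math import lcm

def morse_bott_strata(a):
    """For T = m*pi/2, Sigma_T is nonempty iff at least two exponents a_k
    divide m; dim_C V(a, T) equals the number of such k."""
    L = lcm(*a)                      # the Reeb flow has full period L*pi/2
    strata = []
    for m in range(1, L + 1):
        dim = sum(1 for ak in a if m % ak == 0)
        if dim >= 2:
            strata.append((m, dim))
    return strata

# Hypothetical example a = (2, 2, 2, 3):
for m, d in morse_bott_strata((2, 2, 2, 3)):
    print(f"T = {m}*pi/2 : dim_C V(a,T) = {d}")
# prints the strata at T = pi, 2*pi, 3*pi with dimensions 3, 3, 4
\end{verbatim}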
The calculation of the indices of all closed Reeb orbits can be found in the literature; see \cite{kwon2016brieskorn}, \cite{ustilovsky1999contact}. We conclude this subsection with the following proposition: \begin{prop}[ \cite{kwon2016brieskorn}, \cite{fauck2016rabinowitz}]\label{index calculation} Let $\gamma$ be a fractional Reeb orbit of $\Sigma(\mathbf{a})$ of period $t$. We have \[\mu_{CZ}(\gamma)=\sum_{k=0}^{n}\Bigg(\Bigg\lfloor\frac{2t}{a_k\pi}\Bigg\rfloor+\Bigg\lceil\frac{2t}{a_k\pi}\Bigg\rceil\Bigg)-\Bigg(\Bigg\lfloor\frac{2t}{\pi}\Bigg\rfloor+\Bigg\lceil\frac{2t}{\pi}\Bigg\rceil\Bigg). \] \end{prop} \begin{proof} First we notice that the indices are canonically defined when $n\geq 4$, by Proposition~\ref{high connectedness of brieskorn manifold}. Recall the Reeb vector field from Proposition~\ref{Lutz}, $R_\mathbf{a}=4i(\frac{z_0}{a_0},\frac{z_1}{a_1},\cdots,\frac{z_n}{a_n})$. The associated Reeb flow is \[\psi_\mathbf{a}^t(\mathbf{z})=(e^{\frac{4it}{a_0}}\cdot z_0,\cdots, e^{\frac{4it}{a_n}}\cdot z_n). \] We regard this as a flow on $\mathbb{C}^{n+1}$ rather than on $\Sigma(\mathbf{a})$. This perspective gives us the advantage of calculating the indices directly on $\mathbb{C}^{n+1}$. If we take the standard trivialization of $T\mathbb{C}^{n+1}$, then the linearized return map is \[D\psi_\mathbf{a}^t=\mathrm{diag}\big(e^{4it/a_0},\cdots,e^{4it/a_n}\big)=:\Psi_t. \] By Proposition~\ref{trivialization of the symplectic complement}, we have a trivialization of the symplectic complement $\xi_\mathbf{a}^\perp$. The linearization of the flow on $\xi_\mathbf{a}^\perp$ gives: \begin{itemize} \item $D\psi_\mathbf{a}^t(X_1(\mathbf{z}))=e^{4it}\cdot X_1(\psi_\mathbf{a}^t(\mathbf{z}))$, \item $D\psi_\mathbf{a}^t(Y_1(\mathbf{z}))=e^{4it}\cdot Y_1(\psi_\mathbf{a}^t(\mathbf{z}))$, \item $D\psi_\mathbf{a}^t(X_2(\mathbf{z}))=X_2(\psi_\mathbf{a}^t(\mathbf{z}))$, \item $D\psi_\mathbf{a}^t(Y_2(\mathbf{z}))=Y_2(\psi_\mathbf{a}^t(\mathbf{z}))$. \end{itemize} It follows that the linearization of $D\psi_\mathbf{a}^t$ on $\xi_\mathbf{a}^\perp$ under the prescribed trivialization is the diagonal matrix \[ \Psi_2^t:= \left[ {\begin{array}{cc} e^{4it} & 0 \\ 0 & 1 \\ \end{array} } \right]. \] A trivialization of $\xi_\mathbf{a}$ along the Reeb orbit gives us the linearization $\Psi_1^t$ of $\psi_\mathbf{a}^t$ on $\xi_\mathbf{a}$. Any trivialization of $\xi_\mathbf{a}$ and $\xi_\mathbf{a}^\perp$ combined gives rise to a trivialization of $T\mathbb{C}^{n+1}$, which is homotopic to the standard one. Therefore, by the product property of the Conley-Zehnder index and using Remark~\ref{Index formula}, we find that \begin{align*} \mu_{CZ}(\gamma)=\mu_{CZ}(\Psi_1)=&\mu_{CZ}(\Psi)-\mu_{CZ}(\Psi_2)\\ =&\sum_{k=0}^{n}\Bigg(\Bigg\lfloor\frac{2t}{a_k\pi}\Bigg\rfloor+\Bigg\lceil\frac{2t}{a_k\pi}\Bigg\rceil\Bigg)-\Bigg(\Bigg\lfloor\frac{2t}{\pi}\Bigg\rfloor+\Bigg\lceil\frac{2t}{\pi}\Bigg\rceil\Bigg). \end{align*} \end{proof} \begin{lemma}\label{minimal index} Let $\mathbf{a}=(a_0,a_1,a_2,\cdots,a_n)$, where the $a_i$'s are positive integers and $\sum\frac{1}{a_k}\geq 1$.
Then the function $f_\mathbf{a}:\mathbb{R}_+\to \mathbb{Z}$, \[f_\mathbf{a}(x)=\sum_{k=0}^{n}\Bigg(\Bigg\lfloor\frac{x}{a_k}\Bigg\rfloor+\Bigg\lceil\frac{x}{a_k}\Bigg\rceil\Bigg)-\Big(\Big\lfloor x\Big\rfloor+\Big\lceil x\Big\rceil\Big),\] has a minimum, denoted by $m(\mathbf{a})$. In particular, if $\mathbf{a}=(2,2,2,a_1,\cdots,a_n)$ with $n\geq 2$, where the $a_k$'s are positive integers, then $m(\mathbf{a})\geq 2$. \end{lemma} \begin{proof} Noticing that $2x-1<\lfloor x\rfloor+\lceil x \rceil <2x+1$, we have \[ f_\mathbf{a}(x)>2\big( \sum_{k=0}^{n}\frac{1}{a_k}-1\big)x-(n+2)\geq -(n+2). \] Since $f_\mathbf{a}$ is integer-valued and bounded below, it attains its minimum, which proves the first part. For the second part, we have \[f_\mathbf{a}(x)=3\Bigg(\Bigg\lfloor\frac{x}{2}\Bigg\rfloor+\Bigg\lceil\frac{x}{2}\Bigg\rceil\Bigg)+\sum_{k=1}^{n}\Bigg(\Bigg\lfloor\frac{x}{a_k}\Bigg\rfloor+\Bigg\lceil\frac{x}{a_k}\Bigg\rceil\Bigg)-\Big(\Big\lfloor x\Big\rfloor+\Big\lceil x\Big\rceil\Big). \] Note that $f_\mathbf{a}(x+2)\geq f_\mathbf{a}(x)+2$, so the minimum is attained for some $x\in (0,2]$. On this interval, we have $f_\mathbf{a}(x)=3\Bigg(\Bigg\lfloor\frac{x}{2}\Bigg\rfloor+\Bigg\lceil\frac{x}{2}\Bigg\rceil\Bigg)+n-\Big(\Big\lfloor x \Big\rfloor+\Big\lceil x \Big\rceil\Big)$, which is \[f_\mathbf{a}(x)= \begin{cases} 2+n, \quad& x \in (0,1),\\ 1+n, \quad& x=1,\\ n, \quad& x\in (1,2),\\ 4+n, \quad& x=2, \end{cases} \] hence our conclusion. \end{proof} \section{Exotic contact manifolds} \subsection{Liouville domains admitting group actions}\label{cover} We need to find a Liouville domain $(W,\lambda)$ with the contact manifold $\Sigma(\mathbf{a})$ as its boundary. While $V_\mathbf{a}(0)$ has a singularity at the origin, $V_\mathbf{a}(\epsilon)$ is smooth. Therefore we will follow Alexander Fauck's approach \cite{fauck2016rabinowitz} and overcome this by constructing an interpolation between $V_\mathbf{a}(0)$ and $V_\mathbf{a}(\epsilon)$. First, we choose a smooth monotone decreasing cut-off function $\beta\in C^\infty(\mathbb{R})$ with $\beta(x)=1$ for $x\leq \frac{1}{4}$ and $\beta(x)=0$ for $x\geq \frac{3}{4}$. Then we define (we will often omit $\mathbf{a}$) \[U_\mathbf{a}(\epsilon):=\{\mathbf{z}\in \mathbb{C}^{n+1}\,|\,z_0^{a_0}+\cdots+z_n^{a_n}=\epsilon\cdot\beta(||\mathbf{z}||^2)\}\] and \[W_\epsilon^s:=U_\epsilon\cap B(s).\] We have: \begin{prop}[Proposition 99 \cite{fauck2016rabinowitz}]\label{Liouville form} For sufficiently small $\epsilon$, $(X_\epsilon^1,\lambda)$ is a Liouville domain with boundary $(\Sigma(\mathbf{a}),\alpha_\mathbf{a})$ and vanishing first Chern class. \end{prop} Moreover, we have a cyclic group \[C(L):=\{\, e^{\frac{2\pi ki}{L}} \in \mathbb C \, |\, k\in\mathbb{Z}\, \}=\langle\zeta\rangle \] acting on $(\mathbb{C}^{n+1})^*$, generated by \begin{align*} \zeta_*: (\mathbb{C}^{n+1})^* & \longrightarrow (\mathbb{C}^{n+1})^*\\ (z_0,z_1,\cdots,z_n) & \mapsto (z_0\zeta^{b_0},z_1\zeta^{b_1},\cdots,z_n\zeta^{b_n}), \end{align*} where $L:=\mathrm{lcm}(a_0,\cdots,a_n)$, $b_j:=L/a_j$ and $\zeta:=e^{\frac{2\pi i}{L}}$. One easily checks that the 1-form $\lambda_\mathbf{a}$ is $C(L)$-invariant. We can restrict this group action to the subsets of $(\mathbb{C}^{n+1})^*$ mentioned above and obtain a $C(L)$-action on the manifolds $X_\epsilon^s$ and $W_\epsilon^s$. By definition, $X_\epsilon^{1/2}= V(\epsilon)\cap B(1/2)=U(\epsilon)\cap B(1/2)=W_\epsilon^{1/2}$.
We have the following proposition: \begin{prop} For sufficiently small $\epsilon>0$, there is a $C(L)$-equivariant isotopy between the following pairs of Liouville domains: \begin{itemize} \item $X_\epsilon^1$ and $X_\epsilon^{1/2}$, \item $W_\epsilon^1$ and $W_\epsilon^{1/2}$. \end{itemize} \end{prop} \begin{proof} We only give a proof of the existence of a $C(L)$-equivariant isotopy between $W_\epsilon^1$ and $W_\epsilon^{1/2}$; the same argument applies verbatim to $X_\epsilon^1$ and $X_\epsilon^{1/2}$. Consider the function $\rho(\mathbf{z})=||\mathbf{z}||^2$ on $U_\mathbf{a}(\epsilon)$. If, for sufficiently small $\epsilon$, the critical values of $\rho$ restricted to $W_\epsilon^1$ are less than $1/4$, then we are done by Lemma~\ref{morse}. Indeed, we have $f_\epsilon(\mathbf{z}):=f(\mathbf{z})-\epsilon\cdot\beta(||\mathbf{z}||^2)$ on $\mathbb{C}^{n+1}$, and its differential is given by \[Df_\epsilon=Df-\epsilon\cdot\beta^{'}(||\mathbf{z}||^2)\cdot D\rho,\] so the map \[(f_\epsilon,\rho): \mathbb{C}^{n+1}\rightarrow \mathbb{C}\times\mathbb{R}\] has Jacobian matrix $(Df-\epsilon\cdot\beta^{'}(||\mathbf{z}||^2)\cdot D\rho,D\rho)$, which has the same rank as $(Df,D\rho)$. So, by the same argument as in the proof of Lemma~\ref{lemma}, if $\mathbf{z}$ is a point where the Jacobian does not have full rank, then for some complex number $\lambda$ we have $\bar{z_k}=\lambda a_k z_k^{a_k-1}$ for all $k$. For $||\mathbf{z}||\geq 1/2$, we have $|z_{k_0}|\geq \frac{1}{2\sqrt{n+1}}$ for some $k_0$, so \[|z_{k_0}|=|\lambda|\cdot a_{k_0}\cdot |z_{k_0}|^{a_{k_0}-1},\] i.e., \begin{equation}\label{equ for lambda} |\lambda|=\frac{|z_{k_0}|^{2-a_{k_0}}}{a_{k_0}} \leq \frac{(2\sqrt{n+1})^{a_{k_0}-2}}{a_{k_0}}\leq C(\mathbf{a}), \end{equation} where $C(\mathbf{a}):= \underset{0\leq k \leq n}{\max} \{\frac{(2\sqrt{n+1})^{a_{k}-2}}{a_{k}}\}$ only depends on $\mathbf{a}$ and $n$. Meanwhile, we have \begin{equation}\label{equ} \sum\limits_{k=0}^{n}\frac{z_k\bar{z_k}}{a_k}=\lambda\sum\limits_{k=0}^n z_k^{a_k}=\lambda\cdot f(\mathbf{z})=\lambda\cdot\epsilon\beta(||\mathbf{z}||^2). \end{equation} Combining equations~(\ref{equ}) and (\ref{equ for lambda}), we have \begin{equation}\label{equ upper} \sum\limits_{k=0}^{n}\frac{z_k\bar{z_k}}{a_k}=\lambda\cdot\epsilon\beta(||\mathbf{z}||^2)\leq \epsilon\cdot C(\mathbf{a}). \end{equation} On the other hand, \begin{equation}\label{equ 2} \sum\limits_{k=0}^{n}\frac{z_k\bar{z_k}}{a_k}\geq \frac{1}{\max\{a_j\}}\sum_{k=0}^{n}z_k\bar{z_k}= \frac{1}{\max\{a_j\}}\cdot ||\mathbf{z}||^2\geq\frac{1}{4\max\{a_j\}}. \end{equation} Equations~(\ref{equ 2}) and (\ref{equ upper}) cannot both hold for sufficiently small $\epsilon$, and therefore the function $\rho$ has no critical points in the region $||\mathbf{z}||\geq 1/2$; hence all critical values are less than $1/4$. \end{proof} \begin{lemma}[Theorem 2.2.2 \cite{nicolaescu2011invitation}]\label{morse} Suppose a finite group $G$ acts on a manifold $M$ and $f$ is a $G$-invariant exhausting function on $M$. Assume moreover that no critical value of $f$ is contained in $[a,b]\subset \mathbb{R}$. Then there is a $G$-equivariant isotopy $\phi_t$ between the sublevel sets $M^a:=f^{-1}((-\infty,a])$ and $M^b:=f^{-1}((-\infty,b])$, and $\phi_t$ coincides with the identity outside a compact set.
\end{lemma} \begin{proof} Since there are no critical values of $f$ in $[a, b]$ and the sublevel sets are compact, we deduce that there exists $\epsilon > 0$ such that \[\{a-\epsilon<f<b+\epsilon\}\subset M\setminus \mathrm{Crit}(f).\] First we fix a gradient-like $G$-invariant vector field $Y$ and construct a compactly supported $G$-invariant smooth function \[g:M\to [0,\infty)\] such that \[g(x)= \begin{cases} \frac{1}{Yf},\quad & a\leq f(x)\leq b,\\ 0,&f(x)\notin (a-\epsilon,b+\epsilon). \end{cases} \] We can now form the $G$-invariant vector field $X:=gY$ on $M$, and we denote by \[\phi:\mathbb{R}\times M\to M,\quad (t,x)\mapsto \phi_t(x) \] the flow generated by $X$. Clearly the flow commutes with the group action, so $\phi_t$ is $G$-equivariant. If $u(t)$ is an integral curve of $X$, then differentiating $f$ along $u(t)$ in the region $\{a\leq f\leq b \}$ gives \[ \frac{df}{dt}=Xf= \frac{1}{Yf}Yf=1.\] This implies \[\phi_{b-a}(M^a)=M^b,\] and $\phi_t$ is the identity outside the region $\{a-\epsilon<f<b+\epsilon\}$. \end{proof} \begin{remark}\label{stein} By Proposition~\ref{Liouville form}, $(X_\epsilon^{1/2},\Phi_{t}^*\lambda)$ is a family of $C(L)$-equivariant Liouville structures. Then, by Corollary~\ref{liouville homotopy}, $(X_\epsilon^{1/2},\lambda)$ is $C(L)$-equivariantly Liouville isomorphic to $(X_\epsilon^1,\lambda)$. By the same token, $(W_\epsilon^1,\lambda)$ is $C(L)$-equivariantly Liouville isomorphic to $(W_\epsilon^{1/2},\lambda)$, and therefore to $(X_\epsilon^1,\lambda)$. \end{remark} Let $\phi_t(\mathbf{z}):=\frac{1}{8}\sum\limits_{j=0}^{n}c_j(t)|z_j|^2$, where $c_j(t)$ is a linear interpolation with $c_j(0)=1,c_j(1)=a_j$. It is easy to check that $\phi_t$ is plurisubharmonic on $V_\mathbf{a}(\epsilon)$. Indeed, $\phi_t$ is $i$-convex on $\mathbb{C}^{n+1}$ since $\Delta \phi_t>0$, and $V_\mathbf{a}(\epsilon)$ is a smooth complex submanifold. So the $(V_\mathbf{a}(\epsilon),i,\phi_t)$ are $C(L)$-equivariant Stein manifolds. Since $X_\epsilon^1=\phi_0^{-1}((-\infty,1/8])$, $(X_\epsilon^1,i,\phi_0)$ is a $C(L)$-equivariant Stein domain. Seen as a Liouville domain, $(X_\epsilon^1,-d^\mathbb C\phi_0)$ is $C(L)$-equivariantly Liouville isomorphic to $(X_\epsilon^1,\lambda)$, as follows: \begin{prop}\label{Liouville equivalence for stein domain} There is a $C(L)$-equivariant Liouville homotopy between $(X_\epsilon^1,-d^\mathbb C\phi_0)$ and $(X_\epsilon^1,\lambda)$, for sufficiently small $\epsilon$. \end{prop} \begin{proof} Notice that $\lambda=-d^\mathbb C\phi_1$, so it suffices to prove that the critical points of $\phi_t$ are contained in the compact set $\{||\mathbf{z}||\leq 1/3\}$: then $\nabla_{\phi_t}\phi_t$ is transverse to the boundary, $-d^\mathbb C\phi_t$ is a family of $C(L)$-equivariant Liouville structures on $X_\epsilon^1$, and we can conclude by Corollary~\ref{liouville homotopy}. In the following we prove that all critical points satisfy $||\mathbf{z}||\leq 1/3$.
Consider the map \[(f,\phi_t):\mathbb{C}^{n+1}\rightarrow \mathbb{C}\times\mathbb{R}. \] Its Jacobian matrix is \[ D(f,\phi_t)= \begin{bmatrix} a_0z_0^{a_0-1} &\cdots & a_nz_n^{a_n-1} & 0 & \cdots & 0 \\ 0 & \cdots & 0 & a_0\bar{z_0}^{a_0-1} & \cdots & a_n\bar{z_n}^{a_n-1}\\ \frac{1}{8}c_0(t)\bar{z_0}& \cdots& \frac{1}{8}c_n(t)\bar{z_n} & \frac{1}{8} c_0(t) z_0 & \cdots &\frac{1}{8}c_n(t)z_n \end{bmatrix} \] If $\mathbf{z}$ is a point where this matrix has rank smaller than 3, there exists a non-zero complex number $\lambda\in \mathbb C$ such that $\frac{c_k(t)}{8}\bar{z_k}=\lambda a_k z_k^{a_k-1}$ for all $k$, and hence \begin{equation} \sum\limits_{k=0}^{n}\frac{c_k(t)z_k\bar{z_k}}{8a_k}=\lambda\sum\limits_{k=0}^n z_k^{a_k}=\lambda\cdot f(\mathbf{z})=\lambda\cdot\epsilon. \end{equation} For $||\mathbf{z}||>\frac{1}{3}$, we have $|z_r|>\frac{1}{3(n+1)}$ for some $0\leq r\leq n$. So we have \[\frac{c_r(t)}{8}\cdot |\bar{z_r}|=|\lambda| \cdot a_r \cdot |z_r|^{a_r-1}, \] i.e., \begin{equation*} |\lambda|=\frac{c_r(t)}{8a_r|z_r|^{a_r-2}}<\frac{(3(n+1))^{a_r-2}}{8}\leq C, \end{equation*} where $C=\max \limits_{0\leq i\leq n}\frac{(3(n+1))^{a_i-2}}{8}$ only depends on $\mathbf{a}$ and $n$. On one hand, we have \begin{equation}\label{equation <} \sum\limits_{k=0}^{n}\frac{c_k(t)z_k\bar{z_k}}{8a_k}=\lambda\sum\limits_{k=0}^n z_k^{a_k}=\lambda\cdot f(\mathbf{z})=\lambda\cdot\epsilon<C\cdot \epsilon. \end{equation} On the other hand, we have \begin{equation}\label{equation >} \sum\limits_{k=0}^{n}\frac{c_k(t)z_k\bar{z_k}}{8a_k}\geq \sum\limits_{k=0}^{n}\frac{|z_k|^2}{8a_k}\geq \frac{1}{72 \max\limits_{0\leq i\leq n}\{a_i\}}. \end{equation} So for $\epsilon$ small enough, equations~(\ref{equation <}) and (\ref{equation >}) cannot both hold, which implies that the critical points of $\phi_t$ are contained in $\{||\mathbf{z}||\leq 1/3\}$. \end{proof} \begin{remark}\label{Liouville completion and stein domain} Since $\phi_0(\mathbf{z})=\frac{||\mathbf{z}||^2}{8}$, the gradient $\nabla_{\phi_0}\phi_0=\sum\limits_{i=0}^{n}(z_i\partial_{z_i}+\bar{z_i}\partial_{\bar{z_i}})/2$ is complete in $\mathbb C^{n+1}$. Therefore $\phi_0$ is a completely exhausting function on $V_\mathbf{a}(\epsilon)$. By the proof of Proposition~\ref{Liouville equivalence for stein domain}, all critical points of $\phi_0$ are in the interior of $X_\epsilon^1$. It follows that $V_\mathbf{a}(\epsilon)$ is the completion of $X_\epsilon^1$, obtained by matching the corresponding trajectories of the Liouville fields. \end{remark} \subsection{Topology of manifolds $M_0$ and $M_1$} Now let us consider the $C(L)$-equivariant Stein manifold $(\mathbb C^*,i,(\log|z|)^2/2)$, where the $C(L)$-action is by multiplication. The parametrization \[ \mathbb{R}\times (\mathbb{R}/2\pi\mathbb{Z}) \longrightarrow \mathbb C^*,\quad (r,\theta)\mapsto e^{r+\theta i} \] gives polar coordinates, in which the same Stein manifold takes the form $(\mathbb{R}\times S^1,j,r^2/2)$ and the Liouville vector field is $r\partial_r$, which is complete. Now we consider the product of the Stein manifolds $(\mathbb C^*,i,(\log|z|)^2/2)$ and $(V_\mathbf{a}(\epsilon),i,\phi_0)$. It carries a free $C(L)$-action given by \begin{align*} \zeta_*: V_\mathbf{a}(\epsilon)\times \mathbb C^* &\longrightarrow V_\mathbf{a}(\epsilon)\times \mathbb C^*\\ (z_0,z_1,\cdots,z_n,\eta)&\mapsto(z_0\zeta^{b_0},z_1\zeta^{b_1},\cdots,z_n\zeta^{b_n},\eta\zeta), \end{align*} where $b_i=L/a_i$ and $\zeta \in C(L)$.
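As a quick illustrative sanity check (with hypothetical sample data; not part of the construction), the generator $\zeta_*$ indeed preserves the Brieskorn polynomial, because $b_ka_k=L$ and $\zeta^L=1$, and it acts freely on the product because the last coordinate is simply multiplied by $\zeta$:
\begin{verbatim}
import cmath
from math import lcm

a = (2, 3, 4, 5)                       # hypothetical exponents
L = lcm(*a)
b = [L // ak for ak in a]
zeta = cmath.exp(2j * cmath.pi / L)

f = lambda w: sum(wk ** ak for wk, ak in zip(w, a))

z = [0.3 + 0.1j, -0.2 + 0.4j, 0.5 - 0.3j, 0.1 + 0.2j]   # sample point
zw = [zeta ** bk * zk for bk, zk in zip(b, z)]            # zeta_* applied

print(abs(f(z) - f(zw)) < 1e-12)       # True: f, hence V_a(eps), is preserved
\end{verbatim}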
The product function ${\partial}hi:=(\log|z|)^2/2+{\partial}hi_0$ is a completely exhausting $J-$convex Morse function, and the product Stein manifold is of finite type. By abuse of notation, we use ${\partial}hi$ to denote the induced function on the quotient manifold as well. Also, $M_0:=\{ {\partial}hi\leq C\}$ is a Stein domain, where $C$ is greater than all critical values of ${\partial}hi$. Hence the completion satisfies $\widehat{M_0(\mathbf{a})}=(V_\mathbf{a}(\epsilon)\times \mathbb C^*)/C(L)$, since the Liouville vector field $\nabla_{{\partial}hi}{\partial}hi$ is complete. We will often suppress $\mathbf{a}$ from the notation. If we consider the Weinstein structure instead, the Weinstein domain can be cut out in other ways, as stated in the following lemma: \begin{lemma}\langlebel{lemma for cutout weinstein domain} Suppose $(W,\langlembda,{\partial}hi)$ is a finite type Weinstein manifold. Let ${\partial}si:W\to \mathbb{R}$ be an exhausting Morse function. Suppose $X_\langlembda$ is nondegenerate and gradient-like for ${\partial}si$ outside $\{ {\partial}si\leq 0\}$. Then $\{ {\partial}si\leq 0 \}$ together with $\langlembda$ is Liouville homotopic to a Weinstein domain $W_1:=\{ {\partial}hi \leq K\}$, for $K$ sufficiently large. \end{lemma} \begin{proof} Let $K$ satisfy \[\{ {\partial}si\leq 0\}\subset W_1 \subset W_2:=\{ {\partial}si\leq C\} \] for some large enough $C$ (the precise conditions will become evident in the course of the proof). Notice that $\{ {\partial}si\leq 0\}$ is Liouville homotopic to $W_2$. Fix a smooth function $\rho$ (it can be constructed on the level sets of ${\partial}hi$) such that \begin{itemize} \item $\rho=1$ in $W_1$, $\rho=0$ outside $W_2$. \item $X_\langlembda(\rho)\leq 0$. \end{itemize} Let $M:=\max\limits_{p\in W_2\setminus W_1}({\partial}hi-{\partial}si)$. Now consider the function $f=\rho{\partial}hi+(1-\rho)({\partial}si+M)$. We will show that $f$ is Morse and $X_\langlembda$ is gradient-like for $f$. We only need to verify that $X_\langlembda$ is gradient-like in $W_2\setminus W_1$. We have \[X_\langlembda(f)=\rho X_\langlembda({\partial}hi)+(1-\rho)X_\langlembda({\partial}si)+({\partial}hi-{\partial}si-M)(X_\langlembda(\rho))\geq \rho X_\langlembda({\partial}hi)+(1-\rho)X_\langlembda({\partial}si)>0, \] where the inequality uses ${\partial}hi-{\partial}si-M\leq 0$ on $W_2\setminus W_1$ and $X_\langlembda(\rho)\leq 0$. So $X_\langlembda$ is gradient-like for $f$ and $f$ does not have new critical points outside $W_1$. Because $f|_{W_1}={\partial}hi|_{W_1}$, $f$ is Morse. Hence $(\langlembda,f)$ is also a Weinstein structure on $W_2$, and a linear interpolation between $f$ and ${\partial}hi$ gives rise to a family of Weinstein structures. In particular, it gives rise to a Liouville homotopy. \end{proof} In fact, we have an explicit description of the topology of $M_0$. The map \begin{align*} {\partial}i:V_\mathbf{a}(\epsilon)\times \mathbb{C}^*&\rightarrow \mathbb{C}^{n+1}\setminus V_\mathbf{a}(0)\\ (z_0,z_1,\cdots,z_n,t)&\mapsto(z_0t^{b_0},z_1t^{b_1},\cdots,z_nt^{b_n}) \end{align*} coincides with the quotient by the $C(L)$-action. Therefore $\widehat{M_0(\mathbf{a})}$ (hence $M_0$) is diffeomorphic to $\mathbb{C}^{n+1}\setminus V_\mathbf{a}(0)$. We have the following proposition about $M_0$: \begin{prop}\langlebel{fundamental group of M0} Let $M_0(\mathbf{a})$, $n\geq 3$, be the manifold defined above. Then ${\partial}i_1(M_0)=\mathbb{Z}$ and $H_i(M_0)=0$ for $i\geq 2$, $i\neq n,\, n+1$. \end{prop} \begin{proof} It suffices to prove the results for $\mathbb{C}^{n+1}\setminus V_\mathbf{a}(0)$.
We have a deformation retraction \[r:\mathbb{C}^{n+1}\setminus V_\mathbf{a}(0)\to S^{2n+1}\setminus \Sigma(\mathbf{a}), \] and we have the Milnor fibration: \begin{align*} S^{2n+1}\setminus \Sigma(\mathbf{a})&\longrightarrow S^1\\ (z_0,z_1,\cdots,z_n)&\mapsto \frac{f(\mathbf{z})}{||f(\mathbf{z})||} \end{align*} The fibers are homotopic to a bouquet of $n-$spheres, which is simply connected since $n\geq 3$, the long exact sequence gives us ${\partial}i_1(M_0)=\mathbb{Z}$. Meanwhile, $H_*(M_0)=H_*(S^{2n+1}\setminus \Sigma(\mathbf{a}))$, and for $1<i<2n$, by Alexander duality we have \[ \tilde{H}_i(S^{2n+1}\setminus \Sigma(\mathbf{a}))= \tilde{H}^{2n-i}(\Sigma(\mathbf{a})). \] The conclusion follows Theorem~\ref{high connectedness of brieskorn manifold}. \end{proof} \begin{prop}\langlebel{contractible after surgery} Let $M_0$ be a manifold with ${\partial}i_1(M_0)=\mathbb{Z}, H_i(M_0)=0,i\geq 2, i\neq n, n+1$. Suppose $\gamma$ is a generator for ${\partial}i_1(M_0)$ and $M_1$ is the result of attaching a 2-handle along $\gamma$. Then $\tilde{H}_i(M_1)=0, i\neq n, n+1$. \end{prop} \begin{proof} The attaching 2-handle kills the generator $[\gamma]$ so ${\partial}i_1(M_1)=0$. Meanwhile, $H_i(M_0)=0,i\geq 2,i\neq n,n+1$ implies $H_k(M_1)=0,k\geq 3, k\neq n, n+1$ since attaching a 2-handle does not change higher homology. Let's denote the 2-handle by $H$. We have the Mayer-Vietoris sequence: \[\cdots\to H_2(H)\oplus H_2(M_0)\to H_2(M_1)\to H_1(M_0\cap H)\mathrm{x}rightarrow{i_*} H_1(H)\oplus H_1(M_0)\to H_1(M_1)\to \cdots \] Here $[\gamma]$ is the generator of both $H_1(M_0\cap H) $ and $H_1(M_0)$, so $i_*$ is isomorphism. Hence we have \[ \cdots \to 0\to H_2(M_1) \to \mathbb{Z}\mathrm{x}rightarrow{\cong } \mathbb{Z}\to 0 \to \cdots \] So $H_2(M_1)=0$. The conclusion follows. \end{proof} \begin{comment} \begin{lemma}[Corollary 2.30 \cite{mclean2007lefschetz}]\langlebel{lemma for contractible} Let $M$ be a contractible Stein manifold of finite type. If $n:=\dim_\mathbb{C}M\geq 3$ then $M$ is diffeomorphic to $\mathbb{C}^n$. \end{lemma} \begin{proof} The proof is taken out of McLean \cite{mclean2007lefschetz}. Let $(J,{\partial}hi)$ be the Stein structure associated with $M$. We can also assume that ${\partial}hi$ is a Morse function. For $R$ large enough, the domain $M_R:=\{{\partial}hi<R\}$ is diffeomorphic to the whole of $M$ as $M$ is of finite type. It suffices to show that the boundary of $\bar{M_R}:=\{{\partial}hi\leq R\}$ is simply connected, then the result follows from the $h-$cobordism theorem. The function ${\partial}si:=R-{\partial}hi$ only has critical points of index $\geq n\geq 3$ because the function only has critical points of index $\leq n$. So $\bar{M_R}$ can be reconstructed by attaching handles of index $\geq 3$ with the help of a Morse function ${\partial}si$. This does not change the fundamental group, hence ${\partial}artial \bar{M_R}$ is simply connected because $\bar{M_R}$ is. \end{proof} \end{comment} \subsubsection{Handle attachment and trivialization}\langlebel{specific trivialization} Now we need to fix a trivialization of the canonical bundle $\kappa_{\widehat{M_0}}$ of $(T\widehat{M_0},J)$. 
Since we have the $C(L)-$equivariant quotient map $V_\mathbf{a}(\epsilon)\times\mathbb C^* \to \widehat{M_0}$, it suffices to fix $C(L)-$equivariant trivializations on both $V_\mathbf{a}(\epsilon)$ and $\mathbb C^*$, since \[T(V_\mathbf{a}(\epsilon)\times\mathbb C^*)=TV_\mathbf{a}(\epsilon)\times T\mathbb{C}^*. \] Notice that the trivialization of the symplectic complement in Proposition~\ref{trivialization of the symplectic complement} is $C(L)-$equivariant, and the standard trivialization of $T\mathbb C^{n+1}$ is also $C(L)-$equivariant, as long as $\sum_{i=0}^{n}\frac{1}{a_i}\in \mathbb{Z}$. Indeed, if we take $\Omega=dz_0\wedge dz_1\wedge\cdots \wedge dz_n$, then the $C(L)-$action on $\Omega$ is given on the generator $\eta$ by \[\eta^*(\Omega)=e^{\frac{2{\partial}i i}{a_0}}dz_0\wedge\cdots\wedge e^{\frac{2{\partial}i i}{a_n}}dz_n=e^{2{\partial}i i\sum\frac{1}{a_i} }\Omega=\Omega.\] Therefore a $C(L)-$equivariant trivialization of $TV_\mathbf{a}(\epsilon)$ exists. Since $V_\mathbf{a}(\epsilon)$ is simply connected, the trivialization of $TV_\mathbf{a}(\epsilon)$ is homotopically unique. We take the natural trivialization $T\mathbb{C}^*\to \mathbb{C}^*\times \mathbb{C}$, which determines the trivialization $\Phi$ of $T(V_\mathbf{a}(\epsilon)\times\mathbb C^*)$. We fix $\Phi$ for the rest of this paper; it will be crucial in two places: \begin{itemize} \item Determining the framing for the Weinstein 2-handle attachment in Proposition~\ref{handle attachment for construction}. \item Determining the trivialization for the calculation of the Conley-Zehnder index in Proposition~\ref{main proposition}. \end{itemize} \begin{prop}\langlebel{handle attachment for construction} There is a contractible Weinstein domain $(M_1,\omega_1,X_1,{\partial}si_1)$ obtained from the Weinstein domain $(M_0,-d d^\mathbb C{\partial}hi,\nabla_{{\partial}hi}{\partial}hi,{\partial}hi)$ by attaching a 2-handle such that the canonical saturation (see Subsection~\ref{weinstein handlebody}) coincides with the trivialization $\Phi$. \end{prop} \begin{proof} If we can find an isotropic circle in $M_0$ which generates the fundamental group, then by Theorem~\ref{weinstein handle attaching} we can attach a Weinstein handle in such a way that the trivialization of the contact structure extends to the Weinstein handlebody. The existence of such an isotropic circle is guaranteed by the $h-$principle in lemma~\ref{h-principle}, which states that a subcritical embedding can be perturbed into an isotropic one. \end{proof} \begin{comment} \begin{remark} By Proposition~\ref{contractible after surgery}, $M_1$ is contractible. \end{remark} \end{comment} Let $M$ be a contact manifold of dimension $2n+1$ and $V$ a smooth manifold of subcritical dimension, i.e. $\dim V \leq n$. Let $Mono^{emb}$ be the space of monomorphisms $TV\to TM$ which cover embeddings $V\to M$, and $Mono^{emb}_{isot}$ its subspace consisting of isotropic monomorphisms $F: TV\to TM$. Let $\widetilde{Mono}^{\,emb}_{isot}$ be the space of homotopies \[\widetilde{Mono}^{\,emb}_{isot}=\{ F_t,\,t\in [0,1]\,|\, F_t \in Mono^{emb},\ F_0=df_0,\ F_1 \in Mono^{emb}_{isot}\}. \] The space $Emb_{isot}$ of isotropic embeddings $V\to M$ can be viewed as a subspace of $\widetilde{Mono}^{\,emb}_{isot}$: indeed, we can associate to $f\in Emb_{isot}$ the constant homotopy $F_t\equiv df$, $t\in [0,1]$, in $\widetilde{Mono}^{\,emb}_{isot}$. \begin{lemma}[Proposition 12.4.1 \cite{eliashberg2002introduction}]\langlebel{h-principle} The inclusion \[Emb_{isot} \hookrightarrow \widetilde{Mono}^{\,emb}_{isot} \] is a homotopy equivalence.
\end{lemma} The above $h-$principle also holds in the relative and $\mathcal{C}^0-$dense forms. \begin{remark}\langlebel{it is actually stein} By Theorem~\ref{from weinstein to stein}, $(M_1,\omega_1,X_1,{\partial}si_1)$ is homotopic to a Stein domain through Weinstein structures. We denote the Stein structure by the same notation $(M_1,J_1,{\partial}hi_1)$. \end{remark} \begin{comment} In light of proposition~\ref{fundamental group of M0}, ${\partial}i_1(M_0)=\mathbb{Z}$ (when $\Sigma(\mathbf{a})$ is a sphere). Suppose $\gamma$ is the generator of ${\partial}i_1(M_0)$ and let $(M_1,\omega_1,X_1,{\partial}si_1)$ be the Weinstein domain obtained by attaching a Weinstein 2-handle to $M_0$. It is contractible by proposition~\ref{contractible after surgery}. As remark~\ref{it is actually stein} shows above, $(M_1,\omega_1,X_1,{\partial}hi_1)$ admits a Stein structure. As a matter of fact, since $(M_0,J_0,{\partial}hi_0)$ is of finite type, so is $(M_1,J_1,{\partial}si_1)$. Then by lemma~\ref{lemma for contractible}, we have the following: \begin{lemma}\langlebel{topology of M_1} $M_1$ is diffeomorphic to $B^{2n}$. \end{lemma} \end{comment} \subsection{The Weinstein domain $M_0$} We notice $(V_\mathbf{a}(\epsilon),-d^\mathbb C{\partial}hi_0)=(\widehat{X_\epsilon^1},-d^\mathbb C{\partial}hi_0)$ while $(X_\epsilon^1,-d^\mathbb C{\partial}hi_0)$ is $C(L)-$equivariant Liouville homotopic to $(W_\epsilon^1,\langlembda)$, and in light of lemma~\ref{lemma for cutout weinstein domain}, we can define different Weinstein domains in $(\widehat{W_\epsilon^1}\times \mathbb C^*,\langlembda_0:=\langlembda+rd\theta)$ by different functions. First of all, we need the following technical proposition. \begin{prop}\langlebel{functions for the smoothing} Let $(M,\langlembda)$ be a $G$-equivariant Liouville domain,and $R$ be the coordinate for its cylindrical end. Assume ${\partial}hi$ is a $G$-equivariant Morse function on $M$ such that $X({\partial}hi)<0$ near the boundary of $M$. Then for any $\epsilon>0$, there exists $\delta_1\gg \delta_2>0$ and a $G$-equivariant Morse function $f$(see Figure~\ref{Fig:Morse function f}) such that: \begin{itemize} \item $||1-f||_{\mathcal{C}^2}<\epsilon$ in the region $M\setminus \{R>1-\delta_1+2 \delta_2\}$. \item f and ${\partial}hi$ have same set of critical points, and the Morse indices are the same. \item f satisfies the equation \begin{equation}\langlebel{equ: def of f} \Big(\frac{f}{a}\Big)^2+\Big(\frac{R-(1-\delta_1)}{\delta_1}\Big)^6=1 \end{equation} on the region $1-\delta_1+\delta_2<R\leq 1$, for some $0<a<1$. \end{itemize} \begin{figure} \caption{$G-$equivariant Morse function $f$.} \end{figure} \end{prop} \begin{proof} First, we can fix the canonical collar of the boundary using the negative Liouville flow \begin{align*} &\iota:(1-\epsilon_1,1]\times {\partial}artial M \longrightarrow M\\ &\iota^{*}\langlembda=R\langlembda,\quad \iota^{*} X=R{\partial}artial_R \end{align*} where $\epsilon_1>0$ is sufficiently small, so that $X({\partial}hi)<0$ in the canonical collar and $R$ is the cylindrical coordinate. Notice that $R$ is $G$-equivariant and so is any function in $R$. Now let's fix a sufficiently small $\epsilon_1>\delta_1\gg \delta_2\gg \epsilon_2>0$ ( the exact constraints on $\delta_1,\delta_2,\epsilon_2$ will be clear along the proof), and an increasing bump function $\rho$ such that $\rho(R)=1$ for $R\geq 1$ and $\rho(R)=0$ for $R\leq 0$. 
Let $\hat{\rho}(R):=\rho\big(\frac{R-(1-\delta_1)}{\delta_2}\big)$; then we have \begin{equation}\langlebel{equ:estimation} ||\hat{\rho}||_{\mathcal{C}^2}\leq \frac{1}{\delta_2^2}||\rho||_{\mathcal{C}^2}. \end{equation} Extend $\hat{\rho}$ to a bump function on $M$ by letting it equal $\hat{\rho}(R)$ on the canonical collar and $0$ elsewhere. Clearly $\hat{\rho}$ is $G$-equivariant. Let $h>0$ be a function of the radial coordinate $R$ on $[1-\delta_1,1]\times {\partial} M$ satisfying \begin{equation*} \Big(\frac{h}{1-\epsilon_2}\Big)^2+\Big(\frac{R-(1-\delta_1)}{\delta_1}\Big)^6=1. \end{equation*} Then $h$ can be extended to a smooth function on $M$. Without loss of generality, we can assume $||1-{\partial}hi||_{\mathcal{C}^2}<\epsilon_2$; otherwise we can simply replace ${\partial}hi$ by $1+c{\partial}hi$ for $c>0$ sufficiently small. We claim that the function \[f={\partial}hi\cdot(1-\hat{\rho})+h\hat{\rho} \] satisfies all conditions in this proposition. Firstly, $h$ is well-defined and $G$-equivariant, and since $f$ coincides with $h$ on the region $\{R\geq 1-\delta_1+\delta_2\}$, equation~\ref{equ: def of f} is satisfied. Secondly, we only need to show that $f$ has no critical points in the region $\{1-\delta_1\leq R\leq 1-\delta_1+\delta_2\}$, for which we have \[{\partial}artial_R(f)=h'\hat{\rho}+h\hat{\rho}'+(1-\hat{\rho}){\partial}artial_R({\partial}hi)-{\partial}hi\hat{\rho}' =(h-{\partial}hi)\hat{\rho}'+h'\hat{\rho}+(1-\hat{\rho}){\partial}artial_R({\partial}hi)<0, \] since $h'\leq 0$, $\hat{\rho}'\geq 0$, $R{\partial}artial_R({\partial}hi)=X({\partial}hi)<0$ and $h\leq 1-\epsilon_2\leq {\partial}hi$. Therefore $f$ has no critical point in the canonical collar. Since $f\equiv{\partial}hi$ outside the canonical collar, the second condition follows. Now we show that $f$ also satisfies the first condition. In the region $M\setminus \{R>1-\delta_1\}$ we have $f\equiv {\partial}hi$, so we only need to check the region $\{ 1-\delta_1\leq R\leq 1-\delta_1+2\delta_2\}$, where \begin{align*} ||f-1||_{\mathcal{C}^2}&=||({\partial}hi-1)+(h-{\partial}hi)\hat{\rho}||_{\mathcal{C}^2}\\ &\leq ||{\partial}hi-1||_{\mathcal{C}^2}+||({\partial}hi-h)\hat{\rho}||_{\mathcal{C}^2}\\ &\leq \epsilon_2+2||({\partial}hi-1)+(1-h)||_{\mathcal{C}^2}\cdot||\hat{\rho}||_{\mathcal{C}^2}\\ &\leq\epsilon_2+\frac{1}{\delta_2^2}(||{\partial}hi-1||_{\mathcal{C}^2}+||1-h||_{\mathcal{C}^2})||\rho||_{\mathcal{C}^2}\\ &\leq \epsilon_2 +\frac{1}{\delta_2^2}(\epsilon_2+||1-h||_{\mathcal{C}^2})||\rho||_{\mathcal{C}^2}. \end{align*} The Taylor expansion of $1-h$ at $R=1-\delta_1$ is \[1- h((1-\delta_1)+t)=\epsilon_2+Ct^6+O(t^{12})\, ,\quad C=\frac{1-\epsilon_2}{2\delta_1^6}. \] Therefore $||1-h||_{\mathcal{C}^2}\leq \epsilon_2+ C_1 \delta_2^4$ on the region $\{R\leq 1-\delta_1+2\delta_2\}$ (i.e. for $t\leq 2\delta_2\ll \delta_1$), where $C_1=C_1(\delta_1)$. Thus we have \[||1-f||_{\mathcal{C}^2}\leq \epsilon_2+\frac{2\epsilon_2+C_1\delta_2^4}{\delta_2^2}<\epsilon. \] The last inequality holds as long as $\epsilon_2\leq \delta_2^4$ and $\delta_2\ll \delta_1$. \end{proof} \begin{lemma}\langlebel{remark: c1 small vector field} Let $f$, $\rho$ and $\hat{\rho}$ be as above, and set $g(R):=\rho\big(\frac{(1-\delta_1+2\delta_2)-R}{\delta_2}\big)$, so that $g\equiv 1$ for $R\leq 1-\delta_1+\delta_2$ and $g\equiv 0$ for $R\geq 1-\delta_1+2\delta_2$. Then $g\cdot X_f$ is $\mathcal{C}^1$-small, where $X_f$ is the Hamiltonian vector field of $f$ with respect to $d\langlembda_0$. \end{lemma} \begin{proof} We only need to prove this in $\{ 1-\delta_1+\delta_2<R<1-\delta_1+2\delta_2\}$; elsewhere either $g=0$, or $g\equiv 1$ and $X_f$ is already $\mathcal{C}^1$-small by the first condition of Proposition~\ref{functions for the smoothing}. Notice that \[||X_f||_{\mathcal{C}^1}\leq ||1-f||_{\mathcal{C}^2}<K\delta_2^2,\] where $K$ is independent of $\delta_2$.
Meanwhile, we have \begin{align*} ||g\cdot X_f||_{\mathcal{C}^{1}}& \leq |g|\cdot||X_f||+||dg||\cdot||X_f||+|g|\cdot||dX_f||\\ &\leq (|g|+||dg||)\cdot(||X_f||+||dX_f||)\\ &\leq ||g||_{\mathcal{C}^{1}}\cdot||X_f||_{\mathcal{C}^{1}}\\ &\leq \frac{||\rho||_{\mathcal{C}^1}}{\delta_2}\cdot K\delta_2^2\\ &\leq K'\delta_2 \end{align*} \end{proof} Suppose $G$ is a finite group and $M$ is a $G-$manifold. Let $\mathcal{M}^G(M,\mathbb{R})$ denote the set of $G-$equivariant Morse functions on $M$ and ${C}(M,\mathbb{R})$ the set of smooth functions. \begin{lemma}[Density Lemma 4.8 \cite{wasserman1969equivariant}]\langlebel{density lemma} $\mathcal{M}^G(M,\mathbb{R})$ is dense in ${C}(M,\mathbb{R})$ with respect to the $C^k$ topology. \end{lemma} \begin{remark}\langlebel{morse function index at least n } Note that $(W_\epsilon^1,\langlembda)$ is $G-$equivariantly Liouville isomorphic to $(X_\epsilon^1,-d^\mathbb C{\partial}hi_0)$. Since ${\partial}hi_0$ is $i-$convex on $X_\epsilon^1$ (and we can perturb it into a $G-$equivariant Morse function if necessary), the index of each critical point of $-{\partial}hi_0$ is at least $n$ (half of the dimension of a Stein Manifold). Therefore we can find such function ${\partial}hi'$ on $W_\epsilon^1$ as well. \end{remark} Apply proposition~\ref{functions for the smoothing} to $(W_\epsilon^1,\langlembda)$, with ${\partial}hi'$ as in remark~\ref{morse function index at least n }. Then consider the function $F$ on the product Liouville manifold $(\widehat{W_\epsilon^1}\times \mathbb{R}\times S^1,\langlembda_0:=\langlembda+rd\theta) $ defined as: \begin{align}\langlebel{equ:the defining equation} F: \widehat{W_\epsilon^1} \times \mathbb{R}\times S^1 &\rightarrow \mathbb{R},\\ (p,(r,\theta))&\mapsto r^2-f(p)^2 \quad \text{for}\quad p \in W_\epsilon^1\\ ((q,R),(r,\theta)&\mapsto r^2-a^2\Bigg(1-\Big(\frac{R-(1-\delta_1)}{\delta_1}\Big)^6\Bigg) \quad \text{for}\quad (q,R)\in {\partial}artial W_\epsilon^1 \times (1-\delta_1+2\delta_2,\infty). \end{align} where $a= 1-\epsilon_2$. It is easy to check that $F$ is a smooth $C(L)$-equivariant function on $\widehat{W_\epsilon^1}\times \mathbb{R}\times S^1$, and $0$ is a regular value. Furthermore, the following lemma shows that the Liouville vector filed $Y:=Y_\langlembda+r{\partial}_r$( where $Y_\langlembda$ is the Liouville field on $(W_\epsilon^1,\langlembda)$) is gradient-like for $F$ on $\{ F\geq 0\}$. \begin{lemma}\langlebel{lemma for transversality} The Liouville vector field $Y$ of $(\widehat{W_\epsilon^1}\times \mathbb{R}\times S^1, \langlembda+rd\theta )$ is gradient-like for $F$ outside $W_0:=\{F\leq 0\}$. \end{lemma} \begin{proof} We will verify the statement on the regions $\{R>1-\delta_1+\delta_2 \}$ and $\{ R>1-\delta_1+2\delta_2\}^c$ separately. In the region $\{R>1-\delta_1+\delta_2 \}$, $Y=r{\partial}_r+R{\partial}_R$ with $F=r^2-a^2\Bigg(1-\Big(\frac{R-(1-\delta_1)}{\delta_1}\Big)^6\Bigg)$, the claim is trivial. In the region $\{ R>1-\delta_1+2\delta_2\}^c$, we have $Y=r{\partial}_r+Y_\langlembda$. Notice that this region is a product $W'\times (\mathbb{R}\times S^1)$, where $W'=\widehat{W_\epsilon^1}\setminus\{ R>1-\delta_1+2\delta_2\}$ is $W_\epsilon^1$ attached with a cylindrical cobordism. Now, \begin{equation} Y(F)=(r{\partial}_r+Y_\langlembda)(r^2-f^2)=2(r^2-fY_\langlembda(f)) \end{equation} Since $1-f$ is $\mathcal{C}^2$ small, the coordinate $r$ is nonzero in the region $W_0^c\cap \{ R>1-\delta\}^c$, and $W'$ is compact, we have $2(r^2-fY_\langlembda(f))>0$. The conclusion follows. 
\end{proof} \begin{remark}\langlebel{def of key block} Note that $(\{F\leq 0\},\langlembda+rd\theta)$ is a $C(L)$-equivariant Liouville domain, and $C(L)$ acts freely on it. The quotient domain is Liouville homotopic to the Stein domain $(M_0,J,{\partial}hi)$, by Lemma~\ref{lemma for cutout weinstein domain}. Since the properties of interest are invariant under Liouville isomorphism, we will also denote the quotient domain$(\{F\leq 0\},\langlembda_0)/C(L)$ by $(M_0,\langlembda_0)$. \end{remark} \begin{remark}\langlebel{containment} The region $(U:=\{R>1/2\}^c\cap \{|r|\leq 1/2\},\langlembda)$ is a Liouville domain with corners. We can smooth out the corner with a $\mathcal{C}^\infty$-small perturbation. By abuse of notation, the boundary of this Liouville domain is denoted by $M=\{R=1/2\}\times\{|r|\leq1/2\}\cup \{R>1/2\}^c\times\{|r|=1/2\}$, with Liouville vector field $Y=R{\partial}artial_R+r{\partial}artial_r$. The time 1 flow of $Y$ sends $M$ to a new boundary $ \{R=e/2\}\times\{|r|\leq e/2\}\cup \{R>e/2\}^c\times\{|r|=e/2\}$, that is, $U\cup M\times[0,1]=\{R>e/2\}^c\cap \{|r|\leq e/2\}$. It's easy to check that $U\subset \{F\leq 0\}\subset U\cup M\times[0,1]$. \end{remark} \subsection{Strongly ADC property of $M_0$}\langlebel{ADC property after smoothing} In this subsection, we will prove that the contact boundary of $(M_0,\langlembda_0)$(as in remark~\ref{def of key block}) with respect to the trivialization $\Phi$ is strongly asymptotically dynamically convex. Let us first state what the framing is. Since $(\widehat{W_\epsilon^1}\times \mathbb{R}\times S^1, \langlembda_0 )$ is a product, it suffices to choose the $G$-equivariant trivialization on both components, since it descends naturally to the quotient $(W_0,\langlembda_0)$ (see subSection~\ref{specific trivialization}). We denote the boundary of $(M_0,\langlembda_0)$ by $(\Sigma_0,\langlembda_0)$. \begin{theorem}\langlebel{main theorem} Let $F$ be the function of Lemma~\ref{lemma for transversality}. Then $(\Sigma_0,\langlembda_0)$ satisfies the strongly ADC property with respect to a trivialization $\Phi$, provided $\mathbf{a}$ satisfies the conditions $n\geq 3$ and $m(\mathbf{a})\geq 2$. \end{theorem} \begin{comment} Now we need a lemma that relates the index of a Reeb orbits in a product with index in its components. Let $(M_1,\alpha_1),(M_1,\alpha_2)$ be two contact manifolds, and $(M_1\times \mathbb{R},d(e^{r_1}\cdot \alpha_1)),(M_2\times \mathbb{R},d(e^{r_2}\cdot \alpha_2)) $ be their respective symplectizations. Since the canonical bundle $\kappa({E_1\oplus E_2})=\kappa({E_1})\otimes\kappa({E_2})$, where $E_1,E_2$ are two complex vector bundles with $c_1(E_i)=0,i=0,1$. As a trivialization of $\kappa({E_i}), i=1,2$ is given by a section $s_i$, the trivialization of $\kappa({E_1\oplus E_2})$ is determined by a section $s_1\otimes s_2$. And $(M:=(M_1\times \mathbb{R})\times (M_2\times \mathbb{R}),\omega:= d(e^{r_1}\cdot \alpha_1)+d(e^{r_2}\cdot \alpha_2)$ is the product symplectic manifold. The trivializations of the canonical bundles of$(M_1\times \mathbb{R},d(e^{r_1}\cdot \alpha_1))$ and $(M_2\times \mathbb{R},d(e^{r_2}\cdot \alpha_2)) $ determine the trivialization on their product. As such, we have the following lemma: \begin{lemma}\langlebel{lemma for product index} Let $(M,\omega)$ be defined as above with the trivialization of its canonical bundle determined by the trivializations of the canonical bundles of its product components. 
Suppose $H=h_1(r_1)+h_2(r_2)$ is a function on $M$ and $C=H^{-1}(0)$ is a regular level set and $h_i'>0,h_i''>0, i=1,2$. Then any fractional Reeb orbit $\gamma$ on $(C,e^{r_1}\alpha_1+e^{r_2}\alpha_2)$ has constant $r_1,r_2$ coordinates. Furthermore, $\gamma$ has the form $\gamma=(\gamma_1,\gamma_2)$, where $\gamma_i$ is a fractional Reeb orbit in the contact manifold $(M_i,c_i\alpha_i),c_i=e^{r_i}\cdot \frac{h_1'+h_2'}{h_i'},i=1,2$. As a consequence, $\mu_{CZ}(\gamma)=\mu_{CZ}(\gamma_1)+\mu_{CZ}(\gamma_2)+\frac{1}{2}$. \end{lemma} \begin{proof} We have the Liouville vector field $Y={\partial}artial_{r_1}+{\partial}artial_{r_2}$, By lemma~\ref{formula for reeb vector}, \[X_{Reeb}=\frac{X_H}{Y(H)}=\frac{h_1'\cdot X_1+h_2'\cdot X_2}{h_1'+h_2'}, \] where $X_i$ is Reeb vector field on $(M_i,e^{r_i}\alpha_i)$. Note that the Reeb vector field has no ${\partial}artial_{r_i}$ component, $i=1,2$. It follows that the $r_i$ coordinates of $\gamma$ are constant, and $\gamma_i$ is indeed a Reeb orbit on the contact manifold $(M_i,c_i\alpha_i),c_i=c_i=e^{r_i}\cdot \frac{h_1'+h_2'}{h_i'},i=1,2$. We have $Y(H)=h_1'+h_2'>0$ and $Y(Y(H))=h_1''+h_2''>0$. Therefore, by lemma~\ref{lemma:hamiltonianconleyzehnderindexcomparisonstandard}, \[\mu_{CZ}(\gamma,H)=\mu_{CZ}(\gamma,e^{r_1}\alpha_1+e^{r_2}\alpha_2)+\frac{1}{2}. \] Meanwhile, \[\mu_{CZ}(\gamma,H)=\mu_{CZ}(\gamma_1,h_1)+\mu_{CZ}(\gamma_2,h_2) \] due to the product property of Conley-Zehnder index. The conclusion follows. \end{proof} \end{comment} \begin{prop}\langlebel{main proposition} For any $K>0$, there exits a $C(L)-$equivariant function $F$ as defined in \ref{equ:the defining equation} on the Liouville domain $(\widehat{W_\epsilon^1}\times \mathbb{R}\times S^1, \langlembda_0 )$ with a chosen trivialization $\Phi$ such that \begin{itemize} \item[(1)] $\Sigma:=\{F=0\}$ is a regular level set and the Liouville vector field $Y$ points outwards along $\Sigma$. \item[(2)] The quotient $\Sigma_0:=\Sigma/G$ has the property that all elements of $\mathcal{P}^{<K}_{\partial}hi(\Sigma_0,\langlembda_0)$ have lower SFT index at least $\min \{m(\mathbf{a})-3/2,n-5/2\}$. \end{itemize} \begin{figure} \caption{Lower SFT index of Fractional Reeb orbits in $\Sigma$.} \end{figure} \end{prop} \begin{proof} We will show that by choosing a proper $\mathcal{C}^2$-small function $f$ as in Proposition~\ref{functions for the smoothing}, the corresponding function $F$ satisfies the required conditions. The first condition is satisfied by the construction of $F$, as proved in Lemma~\ref{lemma for transversality}, we only need to show the second condition is also satisfied. Recall the quotient map \[ {\partial}i: \Sigma \to \Sigma_0 \] is an $L-$sheeted covering map. Therefore, the Reeb orbits in $\Sigma_0$ lift to fractional Reeb orbits in $\Sigma$. To be precise, if $\gamma(t), t\in [0,T]$ is a Reeb orbit in $\Sigma_0$, then the $L-$ fold Reeb orbit $\gamma(t), t\in [0,qT]$ can be lifted to a Reeb orbit $\widetilde{\gamma(t)}, t \in [0,LT]$ in $\Sigma$. It follows that the index of $\gamma$ in $\Sigma_0$ can be calculated through the index of $\widetilde{\gamma}$ in $\Sigma_0$. We will proceed by investigating the Reeb orbits in three regions: \begin{itemize} \item[(a)] $\Sigma\cap \{1>R>1-\delta_1+\delta_2\}$, where $\widetilde{\gamma}$ has constant $r,R$ coordinates. \item[(b)] $\Sigma\cap \{1=R\}$, where $\gamma$ is contractible, and $\gamma$ lifts to closed Reeb orbit $\widetilde{\gamma}$ in $\Sigma$. 
\item[(c)] $\Sigma\cap \{R>1-\delta_1+2\delta_2\}^c$, where $\widetilde{\gamma}$ has constant coordinate in the $W_\epsilon^1$ component. \end{itemize} We will show that all elements of $\mathcal{P}^{<K}_{\partial}hi(\Sigma_0,\langlembda_0)$(see Figure~\ref{Fig:index of Reeb orbits}) can be lifted to fractional Reeb orbits either entirely contained in part (a), (b) or (c) and \begin{itemize} \item[(a)] orbits in part(a) have lower SFT index at least $m(\mathbf{a})-3/2$ ; \item[(b)] orbits in part (b) have lower SFT index at least $m(\mathbf{a})-3/2$ ; \item[(c)] orbits in part (c) have lower SFT index at least $n-5/2$. \end{itemize} First, in region (a), by lemma~\ref{formula for reeb vector}, we have \[X_{Reeb}=\frac{X_F}{Y(F)}=\frac{2r{\partial}artial_\theta-2fX_f}{2r^2-2fY_\langlembda(f)}=\frac{2r{\partial}_\theta-2ff'J{\partial}_R}{2r^2-2Rff'}. \] So $X_{Reeb}$ has no ${\partial}_R$ component, and therefore the Reeb flow in the region (a) has constant $R$ coordinate. So any Reeb orbits $\gamma$ intersecting $ \{R>1-\delta_1+\delta_2\} $ remains entirely in region (a). Let us begin the proof with a lemma: \begin{lemma}\langlebel{lemma: Hamiltonian and Reeb case relation verified} With $W_0$ and $F$ defined as in Lemma~\ref{lemma for transversality}, the conditions in Lemma~\ref{Reeb--Hamiltonian index relation} are satisfied. \end{lemma} \begin{proof} We will verify the conditions in three cases: \begin{itemize} \item[a.] in the region $W_0\setminus \{ R>1-\delta_1+2\delta_2\}$, where $||1-f||_{\mathcal{C}^2}<\epsilon$; \item[b.] in the region $\{ 1-\delta_1+\delta_2<R< 1\}$, where $Y=r{\partial}_r+R{\partial}_R$, and $f=f(R)$. \item [c.] in the region $R=1, r=0$. \end{itemize} First of all, note that $b=dF(Y)=2r^2-2fY_\langlembda(f)>0$ by Lemma~\ref{lemma for transversality}. Case (a): Let $p$ be a critical point of $f$ and define \[A:=\{(p,\sqrt{f(p)^2+t},\theta)\in W_\epsilon^1\times \mathbb{R}\times S^1\,\big|\, t\in (-\epsilon_0,\epsilon_0)\}. \] Define $C_t:=F^{-1}(t)$, which is transverse to $A$ as ${\partial}artial_r$ is transverse to it. Let \[A_t:=C_t\cap A=\{(p,\sqrt{f(p)^2+t},\theta)\in W_\epsilon^1\times \mathbb{R}\times S^1\} \] and $L_t=\sqrt{f(p)^2+t}$, $b/L_0=2f(p)$. We can rescale $L_t$ by $2f(p)$, and with $V=\frac{{\partial}_r}{2r}$, we have \[db(V)=2>2f(p)\frac{dL_t}{dt}\Big|_{t=0}=1 \] Case(b): Let $B$ be a Morse-Bott manifold of the Brieskorn manifold $(\Sigma(\mathbf{a}),\langlembda)$, and $g(R)=-f^2(R)$. Then $F(t)=r(t)^2+g(R(t)), t\in (-\epsilon_0,\epsilon_0)$ . We have the following: \[ dF=2rdr+g'(R)dR,\quad b=dF(Y)=2r^2+Rg'(R),\] \[X_{Reeb}=X_F/Y(F)=(2r{\partial}_\theta+g'(R)J{\partial}_R)/b\] Now, define for any constant $a>0$ ($-1/a$ is the slope of tangent line of $F$ at $(r,R)$), \[A(a):=\{(q,R(t),\theta,r(t))\in B\times(1-\delta_1+\delta_2,1)\times S^1\times\mathbb{R}|r^2-f(R)^2=t, r=ag'(R), t\in (-\epsilon_0,\epsilon_0)\} \] Again let $C_t:=F^{-1}(t)$, which is transverse to $A(a)$, and \[A_t:=C_t\cap A=\{(q,R(t),\theta,r(t))\, \} \] Then $A(t)$ is a Morse-Bott manifold in $C_t$. Since \begin{equation}\langlebel{equ:period} L_t=b/2r(t)=r+\frac{Rg'(R)}{2r}=r+\frac{R}{2a},\quad b/L_0=2r(0) \end{equation} \begin{equation}\langlebel{equ: derivative of period} 2r\frac{dL_t}{dt}\Big|_{t=0}=2rr'+\frac{rR'}{a} \end{equation} and on the other hand, $V=r'{\partial}_r+R'{\partial}_R$, $db=4rdr+(g'(R)+Rg''(R))dR$, we have that \[db(V)= 4rr'+R'g'(R)+RR'g''(R)\geq 2rr'+\frac{rR'}{a}=2r\frac{dL_t}{dt}\Big|_{t=0} \] since $r'>0, R'>0, g''(R)>0$. 
Case (c): Let $g=-f^2(R), B$ defined as above, define \[A:=\{(q,R(t),\theta,0)\in B\times\mathbb{R}\times S^1\times\mathbb{R}\,|\,g(R)=t, t\in (-\epsilon_0,\epsilon_0)\} \] Once more, $C_t:=F^{-1}(t)$, which is transverse to $A$ as ${\partial}artial_R$ is transverse to it, and \[A_t:=C_t\cap A=\{(q,R(t),\theta,0)\in B\times\mathbb{R}\times S^1\times\mathbb{R}\,\} \] are pseudo Morse-Bott manifolds. Moreover, \[dF=g'(R)dR, \, b=dF(Y)=Rg'(R),\, L_t=R(t),\, b/L_0=g'(1)\] Here, \[ g'(1)\frac{dL_t}{dt}\Big|_{t=0}=g'(R)R'|_{R=1}=\frac{d}{dt}(g(R(t)))|_{t=0}=1 \] and with $V=\frac{{\partial}_R}{g'(R)}$, we have \[db(V)=1+\frac{Rg''(R)}{g'(R)}>1=g'(1)\frac{dL_t}{dt}\Big|_{t=0}. \] \end{proof} Now let us compute the index of the Reeb orbits in region (a). Any Reeb orbit $\gamma$ can be lifted to a fractional Reeb orbit $\widetilde{\gamma}$ in the region $\Sigma\cap \{R>1-\delta_1+\delta_2\}$, where $F=r^2-f(R)^2$. The Reeb orbit can be written as $\widetilde{\gamma}=(\gamma_1,\gamma_2)$, where $\gamma_1,\gamma_2$ are fractional Reeb orbits of $(\Sigma(\mathbf{a}),R\langlembda)$ and $(S^1,r\theta)$, for fixed $r,R$, so by Lemma~\ref{lemma: Hamiltonian and Reeb case relation verified}, $$\mu_{CZ}(\gamma,F)=\mu_{CZ}(\gamma,\langlembda_0)+\frac{1}{2}.$$ Meanwhile, \[ \mu_{CZ}(\gamma,F)= \mu_{CZ}(\gamma_1,-f(R)^2)+\mu_{CZ}(\gamma_2,r^2) \] follows the product property of Conley-Zehnder index. Note that $$(-f(R)^2)'>0, (-f(R)^2)''>0,$$ therefore we have \[\mu_{CZ}(\gamma_1,-f(R)^2)=\mu_{CZ}(\gamma_1,c_1\langlembda)+\frac{1}{2}. \] Moreover, by Remark~\ref{remark: index of orbits on cylinder}, \[ \mu_{CZ}(\gamma_2,r^2)=\frac{1}{2}. \] Notice $\gamma_1$ is a fractional Reeb orbits on the Brieskorn manifold $(\Sigma(\mathbf{a}),R\langlembda)$, which has the same index as $(\Sigma(\mathbf{a}),\langlembda)$. By Lemma~\ref{index calculation} and Lemma~\ref{minimal index}, we then have $\mu_{CZ}(\gamma_1)\geq m(\mathbf{a}).$ Putting all equations together: \begin{align*} lSFT(\gamma)&=\mu_{CZ}(\gamma,\langlembda_0)-\frac{1}{2}\dim B+(n+1)-3\\ &\geq \big(\mu_{CZ}(\gamma,F)-\frac{1}{2}\big)-n+(n+1)-3\\ &\geq \big(\mu_{CZ}(\gamma_1,\langlembda)+\frac{1}{2}\big)+\mu_{CZ}(\gamma_2,r^2)-\frac{5}{2}\\ &\geq m(\mathbf{a})+\frac{1}{2}+\frac{1}{2}-\frac{5}{2}\\ &=m(\mathbf{a})-3/2. \end{align*} For the region (b), the claim will be proved in Lemma~\ref{lemma for contractible orbits}. Now suppose $\gamma_0(t),t\in [0,T],T<K$ is a Reeb orbit in $\Sigma_0$ and can be lifted to a fractional Reeb orbit in region (c). Then the $L-$ fold Reeb orbit $\gamma(t):= \gamma_0(t), t\in [0,LT]$ can be lifted to a closed Reeb orbit $\widetilde{\gamma(t)}$ in this region. Let $g(R)$ be a smooth function defined in Lemma~\ref{remark: c1 small vector field}. So $g(R)=1$ for $R<1-\delta_1+\delta_2$ and $g(R)=0$ for $R>1-\delta_1+2\delta_2$. By abuse of notation, $g$ can be regarded as a function on $\Sigma\cap \{1-\delta_1+\delta_2<R<1-\delta_1+2\delta_2\}$. We extend $g$ to $\Sigma$ by a constant. Now define a new vector field $X=g\cdot X_{Reeb}$. Let $X_W$ be the projection of $X$ to $W_\epsilon^1$, i.e. \[X_W=g\cdot \frac{-X_f}{f-Y_\langlembda(f)}=\frac{-1}{f-Y_\langlembda(f)}\cdot gX_f.\] Since $(1-f)$ is $\mathcal{C}^2$-small, $||\frac{1}{f-Y_\langlembda(f)}||_{\mathcal{C}^1}<2$. By Lemma~\ref{remark: c1 small vector field}, $X_W$ is $\mathcal{C}^1$-small. 
Then by Corollary~\ref{small norm vector field}, for $f$ sufficiently $\mathcal{C}^2$-small, any periodic orbit of period less than $LK$ is a constant orbit, and therefore corresponds to a critical point of $f$. We claim that any such Reeb orbit $\gamma_0$ has lower SFT index at least $n-5/2$, which will be proved in Proposition~\ref{proof of index of critical point}. \end{proof} \begin{remark}\langlebel{noncontactible remark} Reeb orbits in $W_0$ can be graded by their $H_1/Tors$ class. Let's have a closer look at the Reeb orbits with $H_1/Tors$ grading $0$. In the proof of Proposition~\ref{main proposition}, the Reeb orbits in the regions (a) and (c) are never null-homologous. \end{remark} \begin{lemma}\langlebel{lemma for contractible orbits} Any Reeb orbit $\gamma$ in region (b) is contractible in $\Sigma_0$, and its lower SFT index is at least $m(\mathbf{a})-3/2$. \end{lemma} \begin{proof} In region (b), $X_{Reeb}=J{\partial}artial_R$. In fact, $\Sigma\cap \{1=R\}=\Sigma(\mathbf{a})\times S^1$. The Reeb flow is stationary on $S^1$ and coincides with the Reeb flow on the Brieskorn manifold $\Sigma(\mathbf{a})$. Therefore, any Reeb orbit $\gamma$ is contractible. Suppose $\widetilde{\gamma}$ is a lift of $\gamma$. Let $B\subset \Sigma(\mathbf{a})$ be a Morse-Bott manifold for $(\Sigma(\mathbf{a}),\langlembda)$. In light of Lemma~\ref{lemma: Hamiltonian and Reeb case relation verified}, we have \[\mu_{CZ}(B\times S^1, F)=\mu_{CZ}(B\times S^1, \langlembda_0)+\frac{1}{2}. \] By the product property of Conley-Zehnder index, \[\mu_{CZ}(B\times S^1, F)=\mu_{CZ}(B,-f^2(R))+\mu_{CZ}(S^1,r^2)=\mu_{CZ}(B,-f^2(R))+\frac{1}{2}. \] \begin{comment} Since $X_r|_{r=0}=0$, we have ${\partial}si_t|_{S^1\times\mathbb{R}} =\textrm{id}$. So $\mu_{CZ}(S^1,r^2)=0$, and \[D{\partial}si_t=D{\partial}si_t|_{\Sigma\times\mathbb{R}}\oplus D{\partial}si_t|_{S^1\times\mathbb{R}}=D{\partial}si_t|_{\Sigma\times\mathbb{R}}\oplus\text{id}|_{S^1\times\mathbb{R}}, \] so \[\dim\ker (D{\partial}si_T-\text{id})|_{B\times S^1}=\dim B +2\,,\] where $T$ is the period of Reeb orbit. Thence $D{\partial}si_T-\text{id}$ has constant rank along $B\times S^1$. For Reeb orbits in a neighborhood of $B\times S^1$, the period $L= r+\frac{Rg'(R)}{2r}$, by equation~\ref{equ:period}. So for $r\neq 0$, the period of Reeb orbit near $B\times S^1$ is very large, so $B\times S^1$ is an isolated family of Reeb orbits hence is pseudo Morse-Bott. \end{comment} On the other hand, \[ \mu_{CZ}(B,-f^2(R))=\mu_{CZ}(B,\langlembda)+\frac{1}{2}\geq m(\mathbf{a})+\frac{1}{2}.\] So we conclude that \[ \mu_{CZ}(B\times S^1, \langlembda_0)= \mu_{CZ}(B\times S^1,F)-\frac{1}{2}\geq m(\mathbf{a})+\frac{1}{2} \] and \begin{align*} lSFT(\gamma)&=\mu_{CZ}(\gamma)-\frac{1}{2}\dim\ker(D_{\gamma(0)}{\partial}si_T-\textrm{id})+(n+1-3)\\ &=\mu_{CZ}(B\times S^1, \langlembda_0)-\frac{1}{2}(\dim B+1)+(n+1-3)\\ &\geq m(\mathbf{a})+1/2-n+n-2=m(\mathbf{a})-3/2. \end{align*} \end{proof} \begin{remark}\langlebel{contractible generator of spectral sequence} Let $MB(p), p\in \mathbb{Z} $ be the Morse-Bott manifold of return time $\frac{p{\partial}i}{2}$ in the Brieskorn manifold, then $MB(p)\times S^1/C(L)$ is a Morse-Bott manifold in $\Sigma_0$. Conversely, any Morse-Bott manifold of contractible Reeb orbits in $\Sigma_0$ can be lifted to $\Sigma$. By Lemma~\ref{lemma for contractible orbits} and Remark~\ref{noncontactible remark}, the contractible Morse-Bott manifolds in $\Sigma_0$ can be lifted to $\Sigma(\mathbf{a})\times S^1 \subset \Sigma$. 
Indeed, each Reeb orbit in $\Sigma_0$ has $L$ different lifts in $\Sigma$. In terms of Morse-Bott manifolds of contractible Reeb orbits, we have a one-to-one correspondence: \begin{align*} {\partial}i: \Sigma(\mathbf{a})\times S^1&\rightarrow (\Sigma(\mathbf{a})\times S^1)/C(L)\subset \Sigma_0\\ MB(p)\times S^1 &\mapsto (MB(p)\times S^1)/C(L). \end{align*} The group action is trivial on the first factor, therefore \[(MB(p)\times S^1)/C(L)= MB(p)\times (S^1/C(L)) \cong MB(p)\times S^1\] and \[ \mu_{CZ}(MB(p)\times S^1,\langlembda_0)=\mu_{CZ}(MB(p),\langlembda)+\frac{1}{2}. \] \end{remark} \begin{prop}\langlebel{proof of index of critical point} As defined in the proof of part (c) of Proposition~\ref{main proposition}, the Reeb orbit $\gamma_0$ has lower SFT index at least $n-5/2$. \end{prop} \begin{proof} The $L-$fold iterate $\gamma(t)$ can be lifted to a Reeb orbit $\widetilde{\gamma(t)}$ in region (c). Its $W_\epsilon^1$ component is a critical point $p$ of $f$. Since the conditions of Lemma~\ref{Reeb--Hamiltonian index relation} are satisfied, \[\mu_{CZ}(B_0,\langlembda_0)+\frac{1}{2}=\mu_{CZ}(B_0,F). \] Everything descends to the quotient $M_0$, and we will use the same notations for the quotient. We have the Hamiltonian orbit $\gamma_0=(p,\gamma_2)$, where $p$ is a constant orbit in $W_\epsilon^1$ while $\gamma_2$ is an orbit in $\mathbb{R}\times S^1$. The index is \[\mu_{CZ}(B_0,F)=\mu_{CZ}(p,-f^2)+\mu_{CZ}(\gamma_2,r^2)=\mu_{CZ}(p,-f^2)+\frac{1}{2}. \] Since $f(p)\neq 0$, $Ind_p(f^2)=Ind_p(f)$, hence \[\mu_{CZ}(p,-f^2)=Ind_p(f^2)-n=Ind_p(f)-n\] by Corollary~\ref{index of critical pt}. Since the index of each critical point of $f$ is at least $n$ (see remark~\ref{morse function index at least n }), we have $\mu_{CZ}(p,-f^2)\geq 0$. Thus the lower SFT index satisfies \begin{align*} lSFT(\gamma)&=\mu_{CZ}(B_0,\langlembda_0)-\frac{1}{2}\dim B_0 +(n+1) -3\\ &=\mu_{CZ}(B_0,F)-\frac{1}{2}-\frac{1}{2}+(n+1)-3\\ &\geq 0+\frac{1}{2}+n-3= n-5/2, \end{align*} where the Morse-Bott manifold is $B_0=S^1$. \end{proof} \begin{proof}[Proof of Theorem~\ref{main theorem}] Recall the definition of a strongly ADC contact manifold: there exists a non-increasing sequence of contact forms $\alpha_i$ and increasing positive numbers $D_i$ going to infinity such that all elements of $\mathcal{P}_\Phi^{<D_i}(\Sigma,\alpha_i)$ have positive lower SFT index. In light of Proposition~\ref{main proposition}, let $K_i=K^i$ (where $K$ is a fixed large number; the precise condition on $K$ will be clear later in this proof). Then there exists a $C(L)-$equivariant function $F_i$ such that all elements of $\mathcal{P}_\Phi^{<K_i}(\Sigma_i,\langlembda_0|_{\Sigma_i})$ have positive lower SFT index (since $\min\{m(\mathbf{a})-3/2,n-5/2\}>0$), where $\Sigma_i:=F_i^{-1}(0)/C(L)$ is the boundary of the quotient manifold. By Remark~\ref{containment}, the conditions of Corollary~\ref{corollary for non-increasing form } are satisfied, so there exists a contactomorphism $f_i:\Sigma_0\to \Sigma_{i}$ and a constant $C$ independent of $F_i$, such that \[\frac{1}{C}\cdot\langlembda_0|_{\Sigma_0}<f_i^{*}(\langlembda_0|_{\Sigma_{i}})<C\cdot\langlembda_0|_{\Sigma_0}. \] So the non-increasing contact forms $\alpha_i$ can be defined as $\alpha_{i}=\frac{1}{C^i}f_i^*(\langlembda_0|_{\Sigma_{i}})<\alpha_{i-1}$, and $D_i:=K_i/C^i$, which goes to infinity as long as $K>C$. Then $\mathcal{P}_\Phi^{<D_i}(\Sigma_0,\alpha_i)=\mathcal{P}_\Phi^{<K_i}(\Sigma_i,\langlembda_0|_{\Sigma_i})$, which shows that all elements have positive lower SFT index.
\end{proof} We follow an idea of F. Laudenbach in the proof of the following lemma. \begin{lemma}[Proposition 6.1.5 \cite{audin2014morse}, \cite{laudenbach2004symplectic}] Let $X$ be a vector field on $\mathbb{R}^{2n}$. If $||dX||_{\mathcal{C}^0} < \frac{2 {\partial}i}{L}$, then the only periodic orbits of $X$ with period less than $L$ are constant orbits. \end{lemma} \begin{proof} Consider a periodic solution $u(t)$ of period $T\leq L$ and take its Fourier expansion, together with those of $\dot{u}$ and $\ddot{u}$: \[u(t)=\sum_{k}c_k(u)e^{2k{\partial}i i t/T},\quad \dot{u}(t)=\sum_{k}\frac{2k{\partial}i i}{T}c_k(u)e^{2k{\partial}i i t/T}. \] By Parseval's identity, we have \[||\ddot{u}||^2_{L^2}=\sum \frac{4k^2{\partial}i ^2}{T^2}|c_k(\dot{u})|^2\geq \sum_{k\ne 0}\frac{4{\partial}i^2}{T^2}|c_k(\dot{u})|^2=\frac{4{\partial}i^2}{T^2}||\dot{u}(t)||^2_{L^2}, \] since $c_0(\dot{u})=0$. Hence \[||\ddot{u}||_{L^2}\geq \frac{2{\partial}i}{T}||\dot{u}(t)||_{L^2}. \] On the other hand, since $\ddot{u}=(dX)(\dot{u})$ and $||dX||_{\mathcal{C}^0} < \frac{2 {\partial}i}{L}$, we get \[||\ddot{u}||_{L^2}< \frac{2{\partial}i}{L}||\dot{u}(t)||_{L^2} \leq \frac{2{\partial}i}{T}||\dot{u}(t)||_{L^2}\] if $\dot{u}\ne 0$. Therefore $u(t)$ is a constant orbit. \end{proof} \begin{corollary}[ \cite{laudenbach2004symplectic}]\langlebel{small norm vector field} Let $M$ be a compact manifold with boundary and let $X$ be a vector field which vanishes in a neighborhood of the boundary. Then for any $L>0$ and for sufficiently $\mathcal{C}^1$-small $X$, the flow generated by $X$ has no non-constant periodic orbit with period less than $L$. \end{corollary} \begin{proof} First we get rid of the boundary by doubling $M$ (gluing $M$ with itself along the boundary); $X$ extends smoothly to the double since it vanishes in a neighborhood of the boundary. Denote the resulting closed manifold by $\tilde{M}$ and fix a finite collection of compact coordinate charts $K_i$ covering it. Since $X$ is $\mathcal{C}^1$-small, every closed orbit of the flow of $X$ with period at most $L$ has small diameter ($D\leq ||X||_{\mathcal{C}^0}\cdot L$), which implies that the entire orbit remains in one of the charts $K_i$. On each chart the $\mathcal{C}^1$ norm is equivalent to the Euclidean one, so the lemma above applies. \end{proof} \begin{comment} \begin{lemma}\langlebel{lemma for c-1 small} Let $f$ be a compactly supported smooth function and $X$ a smooth vector field. $f\cdot X$ is $\mathcal{C}^{1}-$small if $X$ is. \end{lemma} \begin{proof} \end{proof} \end{comment} \begin{lemma}\langlebel{Liouville embedding} Let $(U,\langlembda)$ be a Liouville domain, $(\widehat{U},\hat{\langlembda})$ its completion, and $\Sigma_1:={\partial}artial U$ its contact boundary. Suppose we have a Liouville domain $(V,\hat{\langlembda})$ such that $U\subset V\subset U\cup\Sigma_1\times [0,M]$. Then there is a contactomorphism \[\Psi:(\Sigma_1,\langlembda_1:=\hat{\langlembda}|_{\Sigma_1})\to (\Sigma_2:={\partial}artial V,\langlembda_2:=\hat{\langlembda}|_{\Sigma_2}) \] such that $\langlembda_1\leq \Psi^{*}\langlembda_2\leq e^M\langlembda_1$. \end{lemma} \begin{proof} By assumption \[U\subset V \subset U\cup\Sigma_1\times [0,M]. \] Let ${\partial}si$ be the flow generated by the Liouville vector field and let $t(p)$ be the time when the flow starting at $p \in \Sigma_1$ reaches $\Sigma_2$, i.e., ${\partial}si_{t(p)}(p)\in \Sigma_2$. Then $0\leq t(p)\leq M$. Let $\rho$ be a function on $\widehat{U}$ supported on $(-\epsilon,M+1)\times\Sigma_1$, such that $\rho((r,p))\equiv t(p)$ on the region $[0,M]\times \Sigma_1$.
Now consider the vector field $Y:=\rho\cdot{\partial}artial_r$ and we denote by \[\Psi:\mathbb{R}\times \widehat{U} \to \widehat{U},\quad (t,p)\mapsto \Psi_t(p) \] the flow generated by $Y$. Clearly we have $\Psi_1(\Sigma_1)=\Sigma_2$ and $\Psi^{*}\langlembda_1=e^{t(p)}\langlembda_2$. Now the conclusion follows. \end{proof} \begin{remark} If the Liouville domains in the Lemma above are $G$-equivariant, then there is $G$-equivariant contactomorphism satisfying the above statement. \end{remark} \begin{corollary}\langlebel{corollary for non-increasing form } Let $(U,\langlembda)$ be a Liouville domain, suppose we have two Liouville domains $V_1,V_2$ with $\Sigma_1={\partial}artial V_1,\Sigma_2={\partial}artial V_2$ such that $U\subset V_i\subset U\cup {\partial}artial U\times [0,M]$. Then there exists a contactomorphism $f$ and a constant $C$ independent of $V_i$, such that \[\frac{1}{C}\cdot\langlembda|_{\Sigma_1}<f^{*}\langlembda|_{\Sigma_2}<C\cdot\langlembda|_{\Sigma_1}. \] \end{corollary} \section{Finiteness of positive idempotent group }\langlebel{Finite} We are going to show that the positive idempotent group $I_+(\Sigma_0)$ is finite. Let's recall the definitions: for any filling $W$ of $\Sigma_0$ such that $SH_*(W)\neq 0$, we have \[I(W)=\{\, \alpha \in SH_n^0(W)\,\big| \, \alpha^2-\alpha \in H^0(W)\,\} \] and $I_+(W)=I(W)/H^0(W)$, hence it suffices to prove $I(W)$ is a finite group. Indeed, for the Liouville filling $(M_0,\langlembda_0)$ as in remark~\ref{def of key block}, $SH_k^0(M_0,\mathbb{Z}_2)$ is finite. We begin by introducing a spectral sequence which converges to $SH_*^0(M_0,\mathbb{Z}_2)$: \begin{theorem}[Theorem 5.4 \cite{kwon2016brieskorn}]\langlebel{spectral sequence} Let$(W,\omega=d\langlembda)$ be a Liouville domain satisfying the assumptions: \begin{itemize} \item[1] The Reeb flow on ${\partial}artial W$ is periodic with minimal periods $T_1\cdot \frac{{\partial}i}{2},T_2\cdot \frac{{\partial}i}{2},\cdots,T_k\cdot \frac{{\partial}i}{2}$, where $T_k\cdot \frac{{\partial}i}{2}$ is the common period, i.e. the period of a principal orbit. We assume that all $T_k$ are integers. \item[2] The restriction of the tangent bundle to the symplectization of ${\partial}artial W$,$T(\mathbb{R}\times {\partial}artial W)|_{{\partial}artial W}$, is trivial as a symplectic vector bundle, $c_1(W)=0$ and we have a choice of the trivialization of the canonical bundle. \item[3] There is a compatible complex structure $J$ for $(\mathrm{x}i:=\ker \langlembda_{{\partial}artial W},d\langlembda_{{\partial}artial W})$ such that for every periodic Reeb orbit $\gamma$ the linearized Reeb flow is complex linear with respect to some unitary trivialization of $(\mathrm{x}i,J,d\alpha)$ along $\gamma$. \end{itemize} For each positive integer $p$ define $C(p)$ to be the set of Morse-Bott manifolds with return time $p$, and for each Morse-Bott manifold $\Sigma\in C(p)$ put \[ \Delta(\Sigma)=\mu_{CZ}(\Sigma)-\frac{1}{2}\dim \Sigma/S^1, \] where the Robbin-Salamon index is computed for a symplectic path defined on $[0,p]$. Then there is a spectral sequence converging to $SH(W;R)$, whose $E^1-$page is given by \[E_{pq}^1=\begin{cases} \bigoplus\limits_{\Sigma\in C(p)} H_{p+q-\Delta(\Sigma)} (\Sigma;R) & p>0\\ H_{q+n}(W,{\partial}artial W;R) & p=0\\ 0 & p<0. \end{cases} \] \end{theorem} \begin{remark} The above spectral sequence respects the $H_1$ grading. Therefore, to compute $SH_n^0(M_0)$, we only need to focus on the Morse-Bott manifolds of null-homologous Reeb orbits. 
\end{remark} \begin{lemma}\langlebel{finiteness of group} $SH_k^0(M_0,\mathbb{Z}_2)$ is finite for all $k$. \end{lemma} \begin{proof}[Proof of Lemma~\ref{finiteness of group}] Note that it suffices to find all the Morse-Bott manifolds. By Remark~\ref{contractible generator of spectral sequence}, the first page of the spectral sequence which converges to $SH_*^0(M_0,\mathbb{Z}_2)$ is \[E_{pq}^1=\begin{cases} \bigoplus H_{p+q-\Delta(MB(p))} (MB(p)\times S^1 ;\mathbb{Z}_2) & p>0\\ H_{q+n}(M_0,{\partial}artial M_0;\mathbb{Z}_2) & p=0\\ 0 & p<0. \end{cases} \] The finiteness of $SH_k^0(M_0,\mathbb{Z}_2)$ follows from the following two facts. First, there are only finitely many Morse-Bott manifolds $MB(p)$ satisfying $\Delta(MB(p))=k$, i.e. \[k=\mu_{CZ}(MB(p)\times S^1)-\frac{1}{2}(\dim(MB(p)\times S^1)/S^1)=f_\mathbf{a}(p)-\frac{1}{2}(\dim MB(p)-1); \] the above equation can only be satisfied by finitely many $p\in \frac{1}{2L}\mathbb{Z}$, and for any $p$ there is at most one Morse-Bott manifold with return time $p{\partial}i/2$ in the Brieskorn manifold $\Sigma(\mathbf{a})$. Secondly, \[H_* (MB(p)\times S^1 ;\mathbb{Z}_2) =0\quad \text{for } *<0\ \text{or}\ *>2n, \] and $H_* (MB(p)\times S^1 ;\mathbb{Z}_2)$ is finite dimensional for $0\leq * \leq 2n$. Therefore $SH_k^0(M_0,\mathbb{Z}_2)$ is finite for each $k$, since the dimension of $\bigoplus\limits_{p+q= k}E_{pq}^1$ is finite for each $k$. \end{proof} Now we are going to prove that $SH_*^0(M_0(\mathbf{a}),\mathbb{Z}_2)\neq 0$, where $\mathbf{a}$ is defined as in Remark~\ref{remark for sphere}. \begin{lemma}\langlebel{lemma: nonvanishing} For $\mathbf{a}=(2,2,2,p_1,\cdots,p_k)$, we have $SH_*^0(M_0(\mathbf{a}),\mathbb{Z}_2)\neq 0$, where $k+3=n$, $n>8$, and the $p_i$ are sufficiently large integers. \end{lemma} \begin{proof} It suffices to prove that $SH_{n-1}^0(M_0,\mathbb{Z}_2)\neq 0$. To that end we will focus on the total degrees $p+q=n-2,n-1,n$ in the spectral sequence above. First of all, for $p=0$, we have \[E_{0q}^1=H_{n+q}(M_0,{\partial} M_0;\mathbb{Z}_2)=\begin{cases*} \mathbb{Z}_2, & $q=n$\\ 0, & $q\neq n$. \end{cases*} \] On the other hand, \begin{align*} \Delta(MB(p)\times S^1) &= \mu_{CZ}(MB(p)\times S^1)-\frac{1}{2}(\dim(MB(p)\times S^1)/S^1)\\ &= f_\mathbf{a}(p)-\frac{1}{2}(\dim MB(p)-1), \end{align*} where the period is $p{\partial}i/2$ with $p\in \mathbb{Z}$. Meanwhile, \begin{align*} f_\mathbf{a}(p)&=3\Bigg(\Bigg\lfloor\frac{p}{2}\Bigg\rfloor+\Bigg\lceil\frac{p}{2}\Bigg\rceil\Bigg)+\sum\Bigg(\Bigg\lfloor\frac{p}{p_i}\Bigg\rfloor+\Bigg\lceil\frac{p}{p_i}\Bigg\rceil\Bigg)-\Big(\Big\lfloor p\Big\rfloor+\Big\lceil p\Big\rceil\Big)\\ &\geq 3p+k-2p=p+n-3, \end{align*} so $\Delta(MB(p)\times S^1)\geq p-4>n+1$ for any $p>n+5$; that is, for a Morse-Bott manifold to contribute to the homology of degree at most $n$, the corresponding $p$ must satisfy $p\leq n+5$. Thus, if we require $p_i>n+5$, then the only Morse-Bott manifolds that could possibly contribute to total degree $p+q\leq n$ are $MB(p)\times S^1$ with $p=2l$, $2l<n+5$, for some integer $l>0$ (see Subsection 5.5 \cite{kwon2016brieskorn}). For such $p=2l$ we have $MB(p)=\Sigma(2,2,2)\cong \mathbb{R}\mathbb{P}^3$, and \[ H_i(\mathbb{R}\mathbb{P}^3\times S^1;\mathbb{Z}_2)=\begin{cases*} \mathbb{Z}_2, \quad &$i=0,4$\\ \mathbb{Z}_2\oplus\mathbb{Z}_2, & $i=1,2,3$\\ 0, & otherwise. \end{cases*} \] In this case, \[\Delta(MB(2l)\times S^1)= 6l+n-3-4l-1=2l+n-4=p+n-4. \] So for $l>2$, $\Delta(MB(2l)\times S^1)>n$.
For $l=1,2$, we have (see Figure~\ref{fig:Epq}) \[E_{pq}^1=H_{q-(n-4)}(\mathbb{R}\mathbb{P}^3\times S^1;\mathbb{Z}_2)=\begin{cases*} \mathbb{Z}_2, & $q=n-4, n$\\ \mathbb{Z}_2\oplus\mathbb{Z}_2, & $q=n-3, \, n-2,\, n-1$\\ 0, &\text{otherwise}. \end{cases*} \] Hence, $E_{2,n-3}^k(M_0,\mathbb{Z}_2)\neq 0$ stabilizes at the second page, so $SH_{n-1}^0(M_0,\mathbb{Z}_2)\neq 0$. It follows that $SH_*^0(M_0,\mathbb{Z}_2)\neq 0$. In particular, $SH_n^0(M_0,\mathbb{Z}_2)\neq 0$, since the unit lives in degree $n$. \begin{figure} \caption{$E_{pq} \end{figure} \end{proof} \begin{remark} Lemma~\ref{lemma: nonvanishing} shows that $I_+(\Sigma_0(\mathbf{a}))$ is well-defined since $SH_*(M_0)\neq 0$. Furthermore, $I_+(\Sigma_0(\mathbf{a}))$ is a finite group. \end{remark} Now we are ready to prove Theorem~\ref{main technical theorem}: first, we will take $\mathbf{a}=(2,2,2,p_1,\cdots,p_k)$ satisfying \begin{itemize} \item $p_i> k+8,$ \item $\sum\frac{1}{p_k}=\frac{1}{2}$. \end{itemize} Recall \[U_\mathbf{a}(\epsilon)=\{\mathbf{z}\in \mathbb{C}^{n+1}|z_0^{a_0}+\cdots+z_n^{a_n}=\epsilon\cdot\beta(||\mathbf{z}||^2)\},\] and \[W_\epsilon^1=U_\epsilon\cap B(1),\quad {\partial} W_\epsilon^1=\Sigma(\mathbf{a}).\] Let $(M_0(\mathbf{a}),\langlembda_0)$ be defined as in Remark~\ref{def of key block}. Then we have the following facts: \begin{enumerate} \item $(M_0,\langlembda_0)$ is strongly ADC; \item $SH_n^0(M_0)\neq 0$ and is finitely dimensional. \item $H_i(M_0)=0, i>1,i\neq n,n+1$. \end{enumerate} The first claim is true due to Proposition~\ref{main theorem}. We only need to check the condition that $m(\mathbf{a})\geq 3$, which in turn is the result of Lemma~\ref{minimal index}. The second claim is proved in Lemma~\ref{finiteness of group}. On the other hand, the Liouville vector field $Y_\langlembda$ is gradient-like (Lemma~\ref{lemma for transversality}) for the function $F$ which we used to define the Weinstein domain. Therefore, it is Liouville homotopic to Stein domain $(M_0,J,{\partial}hi)$(Remark~\ref{def of key block}). ${\partial}i_1(M_0)=\mathbb{Z}$ since $M_0$ is diffeomorphic to $\mathbb{C}^{n+1}\setminus V_\mathbf{a}(0)$ (Proposition~\ref{fundamental group of M0}). Let $\gamma$ be an isotropic circle generating ${\partial}i_1(M_0)$. Such $\gamma$ exists by the $h-$principle(Lemma~\ref{h-principle}). Let $M_1$ be Weinstein manifold obtained from $M_0$ by attaching a Weinstein 2-handle with respect to the trivialization $\Phi$ (Proposition~\ref{handle attachment for construction}). $M_1$ is of finite type because $M_0$ is. Furthermore, attaching 2-handle along $\gamma$ kills the fundamental group. Now we are going to prove that$(M_1,\langlembda_1,{\partial}si_1)$ satisfies all conditions in Theorem~\ref{main technical theorem} \begin{proof}[proof of Theorem~\ref{main technical theorem}]\langlebel{proof 1.5} Indeed,we have the following facts about $(M_1,\langlembda_1,{\partial}si_1)$: \begin{enumerate} \item $({\partial} M_1, \langlembda_1)$ is asymptotically dynamically convex; \item $SH_*(M_1)\cong SH_*(M_0)$ as rings. \item $\tilde{H}_i(M_1)=0, i\neq n,n+1$ \end{enumerate} The first statement is true because subcritical surgery preserves the ADC property, by Theorem~\ref{theorem:contact surgery}. The second statement is due to the fact that subcritical surgery doesn't change the ring structure of symplectic homology, see Theorem~\ref{invariant of SH}. The last statement on homology follows Proposition~\ref{contractible after surgery}. 
\end{proof} \begin{comment} \begin{proof}[proof of Theorem~\ref{main technical theorem}] We will construct the $(Y_i,\mathrm{x}i)$ as follows: First, take a Brieskorn manifold $\Sigma(\mathbf{a})$ with $\mathbf{a}=(2,2,2,2,2,p_1,\cdots,p_n)$. $(W_0,\langlembda_0)$ is constructed from $\Sigma(\mathbf{a})$ as in Section~\ref{cover},with respect to a specific trivialization ${\partial}hi$ as in subSection~\ref{specific trivialization}, its boundary $(\Sigma_0,\langlembda_0)$is strongly asymptotically dynamically convex with respect to ${\partial}hi$, by Theorem~\ref{main theorem}(since $n>5$ and $m(\mathbf{a})\geq 3$ by lemma~\ref{minimal index}.) Let $(W_1,\langlembda_1)$ be the result of attaching a Weinstein $2-$handle along a generator $\gamma$ of ${\partial}i_1(W_0)$ to $(W_0,\langlembda_0)$ with respect to ${\partial}hi$. By lemma~\ref{topology of W_1}, $W_1$ is diffeomorphic to $\mathbb{C}^n$. On the other hand, by theorem~\ref{theorem:contact surgery}, $(\Sigma_1,\langlembda_1):=({\partial}artial W_1,\langlembda_1)$ is strongly asymptotically dynamically convex with respect to ${\partial}hi$. Since $\Sigma_1$ is a sphere, there is a unique trivialization of the canonical bundle up to homotopy. So $(\Sigma_1,\langlembda_1)$ is ADC. Now if we take $W_i=W_1\natural\cdots \natural W_1$, then $W_i$ is diffeomorphic to $\mathbb{C}^n$. And we have ${\partial}artial W_i$ is ADC contact manifold. In the meantime, we have $SH_*(W_i)=SH_*(W_1)\oplus\cdots\oplus SH_*(W_1)$. And since $SH_*(W_1)=SH_*(W_0)\neq 0$, we have $SH_*(W_i)\neq 0$. Moreover, since $\{0,1\}$ are idempotents in $SH_*(W_0)$, there are at least $2^i$ idempotents in $SH_*(W_i)$, which evidently lie in $I(W_i)$, so $I_+(W_i)=I(W_i)/\mathbb{Z}_2$ is unbounded. The finiteness of $I_+(W_i)$ comes from the finiteness of $SH_n^{contra}(W_i)$, which in turn follows from the finiteness of $SH_n^{contra}(W_0)$. \end{proof} \end{comment} \end{document}
\begin{document} \title{The $l_1$ Norm of Coherence of Assistance} \author{Ming-Jing Zhao$^1$} \author{Teng Ma$^{2,3}$ } \author{Quan Quan$^{4}$ } \author{Heng Fan$^5$ } \author{Rajesh Pereira$^6$ } \affiliation{ $^1$School of Science, Beijing Information Science and Technology University, Beijing, 100192, China\\ $^2$ Shenzhen Institute for Quantum Science and Engineering and Department of Physics, South University of Science and Technology of China, Shenzhen 518055, China\\ $^3$Shenzhen Key Laboratory of Quantum Science and Engineering, Shenzhen 518055, China\\ $^4$State Key Laboratory of Low-Dimensional Quantum Physics and Department of Physics, Tsinghua University, Beijing, 100084, China\\ $^5$Institute of Physics, Chinese Academy of Sciences, Beijing 100190, China\\ $^6$ Department of Mathematics and Statistics, University of Guelph, N1G2W1, Canada\\ } \pacs{03.65.Ud, 03.67.-a} \begin{abstract} We introduce and study the $l_1$ norm of coherence of assistance both theoretically and operationally. We first provide an upper bound for the $l_1$ norm of coherence of assistance and show a necessary and sufficient condition for the saturation of the upper bound. For two and three dimensional quantum states, the analytical expression of the $l_1$ norm of coherence of assistance is given. Operationally, the mixed quantum coherence can always be increased with the help of another party's local measurement and one way classical communication since the $l_1$ norm of coherence of assistance, as well as the relative entropy of coherence of assistance, is shown to be strictly larger than the original coherence. The relation between the $l_1$ norm of coherence of assistance and entanglement is revealed. Finally, a comparison between the $l_1$ norm of coherence of assistance and the relative entropy of coherence of assistance is made. \end{abstract} \maketitle \section{Introduction} Quantum coherence is an important feature in quantum physics and is of practical significance in quantum computation and quantum communication \cite{A. Streltsov-rev,E. Chitambar-review,M. Hu,M. Hillery}. The formulation of the resource theory of coherence was initiated in Ref. \cite{T. Baumgratz}, in which some intuitive and computable measures of coherence are identified, for example, the $l_1$ norm of coherence and the relative entropy of coherence. These coherence measures quantify coherence by using the minimal distance between the quantum state and the set of incoherent states. Intrinsic randomness of coherence \cite{X. Yuan} and coherence concurrence \cite{X. Qi} are coherence measures defined using the convex roof construction. Robustness of coherence is a coherence monotone and quantifies the minimal mixing required to make the state incoherent \cite{C. Napoli}, of which the witness observable has been demonstrated \cite{W. Zheng}. Since quantum coherence is a quantum resource, the interconversion between coherent states is a major focus study. Coherence distillation is a transformation process which transforms a quantum state to the maximally coherent state by incoherent operations. The reverse transformation from the maximally coherent state to a general quantum state is known as coherence cost \cite{A. Winter}. The distillable coherence has been proven to be equal to the relative entropy of coherence and the coherence cost has been proven to be equal to the coherence formation \cite{A. Winter}. 
Since the coherence distillation cannot be accomplished with certainty, the framework of probabilistic coherence distillation characterizing the relation between the maximal success probability and the fidelity of distillation in the one-shot setting has been developed \cite{K. Fang}. Similar to standard coherence distillation, assisted coherence distillation is another process aided by another party typically holding a purifying quantum state and performing local measurement and one way classical communication \cite{E. Chitambar}. A mathematical framework for the characterization of the assisted coherence distillation was then proposed in Ref. \cite{B. Regula}. These characterizations imply that the best achievable rate of assisted coherence distillation is the same no matter which class of incoherent operations the assistant performs \cite{B. Regula}. The assisted coherence distillation has been generalized to more general settings where both parties can perform local measurements and communicate their outcomes via a classical channel \cite{A. Streltsov-2017}, and the assisted coherence distillation of some mixed states has been discussed in Ref. \cite{X. L. Wang}. In the context of assisted coherence distillation, a quantity called the coherence of assistance was introduced to characterize the distillation rate assisted by another party \cite{E. Chitambar}; it is defined as the maximal average relative entropy of coherence, and we denote it as the relative entropy of coherence of assistance throughout this paper to avoid confusion. Analogously, we introduce the maximal average $l_1$ norm of coherence and denote it as the $l_1$ norm of coherence of assistance. One reason we study the coherence of assistance based on the $l_1$ norm of coherence is that the $l_1$ norm of coherence is usually easy to evaluate and to manipulate algebraically for a given quantum state. Any continuous weak coherence monotone which is a symmetric function of the nonzero off-diagonal entries of the state must be a nondecreasing function of the $l_1$ norm of coherence \cite{H. Zhu}. Furthermore, the $l_1$ norm of coherence is an important link between different coherence measures and entanglement. For example, the $l_1$ norm of coherence is equal to the robustness of coherence for qubit states and acts as an upper bound for the robustness of coherence in higher dimensional systems \cite{C. Napoli}. The logarithmic $l_1$ norm of coherence is an upper bound for the relative entropy of coherence \cite{S. Rana}. Additionally, the $l_1$ norm of coherence is the maximum entanglement created by incoherent operations acting on the system and an incoherent ancilla \cite{H. Zhu}. Motivated by the usefulness of the $l_1$ norm of coherence, we aim to give a characterization for the corresponding coherence of assistance. In this paper, we study the $l_1$ norm of coherence of assistance theoretically and operationally. We first provide an upper bound for the $l_1$ norm of coherence of assistance in terms of a function of the diagonal entries of the quantum state. A necessary and sufficient condition for the $l_1$ norm of coherence of assistance to attain this upper bound is given. In the special case of two and three dimensional quantum states, the $l_1$ norm of coherence of assistance always achieves its upper bound. Then we show that the $l_1$ norm of coherence of assistance, as well as the relative entropy of coherence of assistance, is strictly larger than the original coherence for mixed states.
The relation between the $l_1$ norm of coherence of assistance and entanglement is revealed. Finally, a comparison between the $l_1$ norm of coherence of assistance and the relative entropy of coherence of assistance is made. \section{The $l_1$ Norm of Coherence of Assistance} Under a fixed reference basis $\{|i\rangle\}$, a quantum state $\rho$ is said to be incoherent if the state is diagonal in this basis, i.e. $\rho=\sum_{i} \rho_{i}|i\rangle \langle i|$. Otherwise the quantum state is coherent. For coherent states, the $l_1$ norm of coherence and the relative entropy of coherence are two commonly used coherence measures \cite{T. Baumgratz}. The $l_1$ norm of coherence of the quantum state $\rho=\sum_{i,j} \rho_{ij}|i\rangle \langle j|$ is the sum of the magnitudes of all the off diagonal entries \begin{equation} C_{l_1}(\rho)=\sum_{i\neq j} |\rho_{ij}|. \end{equation} The relative entropy of coherence is the difference of von Neumann entropy between the density matrix and the diagonal matrix given by its diagonal entries, \begin{equation} C_r(\rho)=S(\Delta(\rho))-S(\rho), \end{equation} where $\Delta(\rho)$ denotes the state given by the diagonal entries of $\rho$, $S(\rho)$ is the von Neumann entropy. Later, the relative entropy of coherence of assistance is introduced as the maximal average relative entropy of coherence \begin{eqnarray} C_a^{r}(\rho)=\max \sum_k p_k C_r(|\psi_k\rangle), \end{eqnarray} where the maximization is taken over all pure state decompositions of $\rho=\sum_k p_k |\psi_k\rangle\langle\psi_k|$ \cite{E. Chitambar}. Inspired by the relative entropy of coherence of assistance, we introduce the $l_1$ norm of coherence of assistance as the maximal average $l_1$ norm of coherence. \begin{definition} For any quantum state $\rho$, its $l_1$ norm of coherence of assistance is defined as \begin{equation} C_a^{l_1}(\rho)=\max \sum_k p_k C_{l_1}(|\psi_k\rangle), \end{equation} where the maximization is taken over all pure state decompositions of $\rho=\sum_k p_k |\psi_k\rangle\langle\psi_k|$. \end{definition} Analogous to the relative entropy of coherence of assistance, the $l_1$ norm of coherence of assistance $C_a^{l_1}$ has an operational interpretation. Suppose Alice holds a state $\rho^A$ with the $l_1$ norm of coherence $C_{l_1}(\rho^A)$. Bob holds another part of the purified state of $\rho^A$. With the help of Bob performing local measurements and informing Alice of his measurement outcomes using classical communication, Alice's quantum state will be in one pure state ensemble $\{ p_k,\ |\psi_k\rangle\}$ with the $l_1$ norm of coherence $\sum_k p_k C_{l_1}(|\psi_k\rangle)$. The $l_1$ norm of coherence of Alice's state is then increased from $C_{l_1}(\rho^A)$ to $\sum_k p_k C_{l_1}(|\psi_k\rangle)$ since the $l_1$ norm of coherence is a convex function. Maximally, the $l_1$ norm of coherence can be increased to $C_a^{l_1}(\rho^A)$ in this process. Due to the use of optimization in the definition, both the $l_1$ norm of coherence of assistance and the relative entropy of coherence of assistance are usually difficult to calculate. However, for the relative entropy of coherence of assistance, it is bounded from above by $S(\Delta(\rho))$ \cite{E. Chitambar}, $C_a^{r}(\rho)\leq S(\Delta(\rho))$, and equality holds if and only if there is a pure state decomposition $\{p_k,\ |\psi_k\rangle\}$ of $\rho$ such that $\Delta(|\psi_k\rangle\langle\psi_k|)=\Delta(\rho)$ for all $k$ \cite{M. J. Zhao}. 
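Before turning to bounds for the coherence of assistance, we note that the underlying measures are straightforward to evaluate numerically. The following short Python sketch (using only \texttt{numpy}; the helper names are ours and not from any reference) computes $C_{l_1}$ and $C_r$ for a density matrix written in the reference basis.
\begin{verbatim}
import numpy as np

def c_l1(rho):
    """l_1 norm of coherence: sum of the moduli of the off-diagonal entries."""
    return np.sum(np.abs(rho)) - np.sum(np.abs(np.diag(rho)))

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho log2 rho), computed from the eigenvalues."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # drop numerical zeros
    return float(-np.sum(evals * np.log2(evals)))

def c_r(rho):
    """Relative entropy of coherence: S(Delta(rho)) - S(rho)."""
    delta = np.diag(np.diag(rho))
    return von_neumann_entropy(delta) - von_neumann_entropy(rho)

# Example: equal mixture of |+> = (|0>+|1>)/sqrt(2) and |0>.
plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho = 0.5 * np.outer(plus, plus) + 0.5 * np.diag([1.0, 0.0])
print(c_l1(rho), c_r(rho))   # C_l1 = 0.5, C_r is about 0.21
\end{verbatim}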
For the $l_1$ norm of coherence of assistance, it is obvious that $0\leq C_a^{l_1}(\rho)\leq n-1$ for an $n$ dimensional quantum state $\rho$. $C_a^{l_1}(\rho)=0$ if and only if $\rho$ is an incoherent pure state. $C_a^{l_1}(\rho)\leq n-1$ because the maximum of the $l_1$ norm of coherence of an $n$ dimensional quantum state is $n-1$. Now we show an analytical upper bound for the $l_1$ norm of coherence of assistance. \begin{theorem}\label{th ca upper bound} The $l_1$ norm of coherence of assistance of $\rho=\sum_{i,j} \rho_{ij}|i\rangle \langle j|$ is bounded in terms of its diagonal entries as \begin{equation}\label{eq upper bound} C_a^{l_1}(\rho)\leq \sum_{i\neq j}\sqrt{\rho_{ii}\rho_{jj}}. \end{equation} Equality holds if and only if there exists a pure state decomposition $\{p_k,\ |\psi_k\rangle\}$ of $\rho$ such that $\Delta(|\psi_k\rangle\langle\psi_k|)=\Delta(\rho)$ for all $k$. \end{theorem} \begin{proof} Suppose $\{p_k,\ |\psi_k\rangle\}$ is an optimal decomposition for $\rho$ such that $C_a^{l_1}(\rho)= \sum_k p_k C_{l_1}(|\psi_k\rangle)$. Since $|\psi_k\rangle\langle\psi_k|$ is a pure state of rank one, we may write $|\psi_k\rangle\langle\psi_k|=\sum_{i,j} \sqrt{a_{ii}^{(k)}a_{jj}^{(k)}}e^{i\theta_{ij}^{(k)}}|i\rangle\langle j|$ with $\theta_{ij}^{(k)}=-\theta_{ji}^{(k)}$ and nonnegative $a_{ii}^{(k)}$ for all $i$, $j$ and $k$. The $l_1$ norm of coherence of $|\psi_k\rangle\langle\psi_k|$ is $\sum_{i\neq j} \sqrt{a_{ii}^{(k)}a_{jj}^{(k)}}$, which gives rise to the $l_1$ norm of coherence of assistance of $\rho$ as \begin{equation}\label{eq upper bound hold eq} \begin{array}{rcl} C_a^{l_1}(\rho)&=&\sum_{i\neq j} \sum_k p_k \sqrt{a_{ii}^{(k)}a_{jj}^{(k)}}\\ &\leq& \sum_{i\neq j} \sqrt{\sum_k p_k a_{ii}^{(k)}} \sqrt{\sum_k p_k a_{jj}^{(k)}}\\ &=& \sum_{i\neq j} \sqrt{\rho_{ii} \rho_{jj}}, \end{array} \end{equation} where we have used the Cauchy--Schwarz inequality $(\sum_k a_k b_k)^2 \leq \sum_k a_k^2 \sum_k b_k^2$ (with $a_k=\sqrt{p_k a_{ii}^{(k)}}$ and $b_k=\sqrt{p_k a_{jj}^{(k)}}$) in the inequality, and $\rho_{ii}=\sum_k p_k a_{ii}^{(k)}$ in the last equality. In Eq. (\ref{eq upper bound hold eq}), the left hand side equals the right hand side if and only if the vectors $\vec{v}_k=(a_{11}^{(k)}, a_{22}^{(k)},\cdots,a_{nn}^{(k)})$ are all parallel. This implies that the diagonal entries of $|\psi_k\rangle\langle\psi_k|$ are the same for all $k$ and equal to those of the quantum state $\rho$. Hence, the $l_1$ norm of coherence of assistance reaches its upper bound in Eq. (\ref{eq upper bound}) if and only if there exists a pure state decomposition $\{p_k,\ |\psi_k\rangle\}$ of $\rho$ such that $\Delta(|\psi_k\rangle\langle\psi_k|)=\Delta(\rho)$ for all $k$. \end{proof} Notice that if $\rho$ is an $n$ dimensional state, then $\sum_{i\neq j} \sqrt{\rho_{ii} \rho_{jj}}\leq n-1$ with equality if and only if all diagonal entries of $\rho$ are equal to $1/n$. So the upper bound in Theorem \ref{th ca upper bound} is always at least as good as the trivial $n-1$ bound. Theorem \ref{th ca upper bound} not only provides an upper bound for the $l_1$ norm of coherence of assistance, but also gives a necessary and sufficient condition, in terms of pure state decompositions, for the $l_1$ norm of coherence of assistance to reach the upper bound in Eq. (\ref{eq upper bound}). Coincidentally, this condition is the same as the condition that the relative entropy of coherence of assistance $C_a^{r}(\rho)$ reaches its upper bound $S(\Delta(\rho))$ \cite{E. Chitambar, M. J. Zhao}.
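As a quick numerical sanity check of Theorem \ref{th ca upper bound} (a sketch only, with helper names of our own choosing), note that the spectral decomposition is one admissible pure state decomposition, so its average coherence must lie between $C_{l_1}(\rho)$ and the bound $\sum_{i\neq j}\sqrt{\rho_{ii}\rho_{jj}}$ for every state.
\begin{verbatim}
import numpy as np

def c_l1(rho):
    return np.sum(np.abs(rho)) - np.sum(np.abs(np.diag(rho)))

def upper_bound(rho):
    # The bound of Theorem 1: sum_{i != j} sqrt(rho_ii * rho_jj).
    d = np.real(np.diag(rho))
    s = np.sqrt(np.outer(d, d))
    return np.sum(s) - np.trace(s)

def spectral_average(rho):
    # Average l1-coherence over the eigendecomposition: a lower bound on C_a^{l1}.
    vals, vecs = np.linalg.eigh(rho)
    return sum(p * c_l1(np.outer(v, v.conj()))
               for p, v in zip(vals, vecs.T) if p > 1e-12)

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
rho = A @ A.conj().T
rho /= np.trace(rho)          # a random 3x3 density matrix
# Printed in nondecreasing order: C_l1(rho) <= average <= upper bound.
print(c_l1(rho), spectral_average(rho), upper_bound(rho))
\end{verbatim}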
In other words, $C_a^{l_1}(\rho)= \sum_{i\neq j}\sqrt{\rho_{ii}\rho_{jj}} \Longleftrightarrow C_a^{r}(\rho)=S(\Delta(\rho))$. In Ref. \cite{B. Regula}, it is shown that the relative entropy of coherence of assistance $C_a^{r}(\rho)$ is equal to its upper bound $S(\Delta(\rho))$ for all two and three dimensional quantum states. That is, all quantum states $\rho$ in two and three dimensional systems have a pure state decomposition $\{p_k,\ |\psi_k\rangle\}$ such that the density matrix of each pure state $|\psi_k\rangle\langle\psi_k|$ has the same diagonal entries as the original density matrix, $\Delta(|\psi_k\rangle\langle \psi_k|)=\Delta(\rho)$ for all $k$. Such a decomposition is an optimal decomposition reaching the upper bound in Theorem \ref{th ca upper bound}. Therefore, the $l_1$ norm of coherence of assistance is equal to its upper bound in Eq. (\ref{eq upper bound}) for all two and three dimensional quantum states. Unfortunately, this result is not true for four dimensional systems, as a counterexample has been pointed out in Ref. \cite{E. Chitambar}. \begin{corollary}\label{th qubit ca} In two and three dimensional systems, the $l_1$ norm of coherence of assistance of the state $\rho=\sum_{i,j} \rho_{ij}|i\rangle \langle j|$ is $C_a^{l_1}(\rho)=\sum_{i\neq j}\sqrt{\rho_{ii}\rho_{jj}}$. \end{corollary} For two dimensional systems, we can give an optimal decomposition for all quantum states using the following lemma \cite{M. J. Zhao}. \begin{lemma}\label{lemma different magnitude} For any $2\times 2$ Hermitian matrix $ A=\left( \begin{array}{cc} a_{11} & a_{12}\\ a_{12}^* & a_{22} \end{array} \right)$, there is a decomposition \begin{equation} A= p_0 |\psi_0\rangle\langle\psi_0|+p_1 |\psi_1\rangle\langle\psi_1|, \end{equation} where \begin{equation}\label{eq in lemma different magnitude} \begin{array}{rcl} |\psi_0\rangle&=&\sqrt{a_{11}}|1\rangle + \sqrt{a_{22}}e^{-{\rm i}\arg(a_{12})}|2\rangle,\\ |\psi_1\rangle&=&\sqrt{a_{11}}|1\rangle - \sqrt{a_{22}}e^{-{\rm i}\arg(a_{12})}|2\rangle, \end{array} \end{equation} and $p_0=\frac{1}{2}(1+|a_{12}|/\sqrt{a_{11}a_{22}})$, $p_1=\frac{1}{2}(1-|a_{12}|/\sqrt{a_{11}a_{22}})$ for nonzero $a_{11}$ and $a_{22}$, where $\arg(a_{12})$ is the argument of $a_{12}$. \end{lemma} This lemma gives a pure state decomposition for all $2\times 2$ Hermitian matrices, in particular for $2\times 2$ density matrices. Such a decomposition is an optimal decomposition for both the $l_1$ norm of coherence of assistance and the relative entropy of coherence of assistance. \section{A Strict $l_1$ Norm of Coherence Inequality for Mixed States} Coherence distillation is one process that extracts pure maximal coherence from a mixed state by incoherent operations \cite{A. Winter}. This process gives an operational way to obtain maximal coherence as a resource, which requires many copies of the quantum state experimentally and yields the maximally coherent state with some probability. It is proved that there is no bound coherence and all coherent states are distillable \cite{A. Winter}. Another operational way to obtain the coherence resource is the assisted coherence distillation with the assistance of another party's local measurement and one way classical communication. Here we shall prove that all mixed state coherence can be increased in this scenario.
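The Lemma translates directly into code. The sketch below (with helper names of our own; it assumes, as in the Lemma, that both diagonal entries are nonzero, and uses array indices $0,1$ for the paper's basis labels $|1\rangle,|2\rangle$) builds the decomposition of Eq. (\ref{eq in lemma different magnitude}) for a qubit density matrix, checks that it reproduces the state, and checks that its average $l_1$ norm of coherence equals $2\sqrt{\rho_{11}\rho_{22}}$, the value given by Corollary \ref{th qubit ca}.
\begin{verbatim}
import numpy as np

def lemma_decomposition(A):
    """Rank-one decomposition of a 2x2 matrix as in the Lemma above.
    Returns [(p0, psi0), (p1, psi1)] with A = p0 |psi0><psi0| + p1 |psi1><psi1|."""
    a11, a22, a12 = A[0, 0].real, A[1, 1].real, A[0, 1]
    phase = np.exp(-1j * np.angle(a12))
    psi0 = np.array([np.sqrt(a11),  np.sqrt(a22) * phase])
    psi1 = np.array([np.sqrt(a11), -np.sqrt(a22) * phase])
    p0 = 0.5 * (1 + abs(a12) / np.sqrt(a11 * a22))
    p1 = 0.5 * (1 - abs(a12) / np.sqrt(a11 * a22))
    return [(p0, psi0), (p1, psi1)]

rho = np.array([[0.7, 0.2 - 0.1j], [0.2 + 0.1j, 0.3]])
dec = lemma_decomposition(rho)
rebuilt = sum(p * np.outer(v, v.conj()) for p, v in dec)
avg_c = sum(p * 2 * abs(v[0] * np.conj(v[1])) for p, v in dec)
print(np.allclose(rebuilt, rho))                   # True: the decomposition reproduces rho
print(np.isclose(avg_c, 2 * np.sqrt(0.7 * 0.3)))   # True: it attains 2*sqrt(rho_11 rho_22)
\end{verbatim}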
\begin{lemma}\label{lemma different decomposition} Suppose $\rho=\sum_{l=1}^n \lambda_l |\psi_l\rangle \langle \psi_l|$ and $\rho=\sum_{k=1}^n p_k |\phi_k\rangle \langle \phi_k|$ are two arbitrary pure state decompositions of a given quantum state $\rho$ with $\sum_{l=1}^n \lambda_l=\sum_{k=1}^n p_k=1$, $0\leq \lambda_l\leq 1$, $0\leq p_k\leq 1$ for $l,k=1,\cdots,n$. Then these two pure state decompositions are related by a unitary transformation: \begin{equation}\label{eq relation between two decomposition} \sqrt{p_k}|\phi_k\rangle=\sum_{l=1}^n U_{kl} \sqrt{\lambda_l}|\psi_l\rangle,\ \ \ k=1,\cdots,n, \end{equation} where $U=(U_{kl})$ is a unitary matrix \cite{E. Sch}. \end{lemma} In fact, the normalization condition for $\rho$ in this lemma is not necessary, and the states $\{|\psi_l\rangle\}$ and $\{|\phi_k\rangle\}$ need not be normalized. \begin{theorem}\label{th increase coh} For an arbitrary mixed quantum state $\rho=\sum_{i,j=1}^n \rho_{ij}|i\rangle \langle j|$, we have \begin{equation}\label{eq strict ineq} C_a^{l_1}(\rho)>C_{l_1}(\rho). \end{equation} \end{theorem} \begin{proof} Note that the $l_1$ norm of coherence is a convex function, $\sum_i p_i C_{l_1}(\rho_i)\geq C_{l_1}(\sum_i p_i\rho_i)$, so it is obvious that $C_a^{l_1}(\rho)\geq C_{l_1}(\rho)$. Next we prove that this inequality is strict. Suppose $\rho=\sum_{s=1}^n \lambda_s |\psi_s\rangle \langle \psi_s|$ is the spectral decomposition with eigenstates $|\psi_s\rangle=\sum_{j=1}^n a_j^{(s)} |j\rangle$, $\sum_{s=1}^n \lambda_s=1$, $0\leq \lambda_s\leq 1$, $s=1,\cdots, n$. Since $\rho$ is mixed, it has at least two nonzero eigenvalues; from the two corresponding eigenstates one can pick out two two dimensional subvectors, supported on the same pair of basis elements, which are not parallel. Without loss of generality, we assume that (1) $\lambda_1$ and $\lambda_2$ are nonzero, and (2) the two unnormalized two dimensional vectors formed by the first two components of $|{\psi}_1\rangle$ and $|{\psi}_2\rangle$, denoted by $|\tilde{\psi}_1\rangle=(a_1^{(1)},a_2^{(1)})^T$ and $|\tilde{\psi}_2\rangle=(a_1^{(2)},a_2^{(2)})^T$, are not parallel, where the superscript $T$ denotes transposition. In the first case, if $a_1^{(1)*}a_2^{(1)}$ and $a_1^{(2)*}a_2^{(2)}$ have different arguments, then \begin{equation}\label{eq proof different magnitude} \begin{array}{rcl} C_a^{l_1}(\rho)&\geq& \sum_{s=1}^n \lambda_s C_{l_1}(|\psi_s\rangle)\\ &=& \sum_{s=1}^n \sum_{i\neq j} \lambda_s |a_i^{(s)*}a_j^{(s)}|\\ &>& \sum_{i\neq j} |\sum_{s=1}^n \lambda_s a_i^{(s)*}a_j^{(s)}|\\ &=& \sum_{i\neq j} |\rho_{ij}|\\ &=&C_{l_1}(\rho), \end{array} \end{equation} where the strict inequality holds because, in the $(1,2)$ term, the contributions $\lambda_1 a_1^{(1)*}a_2^{(1)}$ and $\lambda_2 a_1^{(2)*}a_2^{(2)}$ have different arguments, so the triangle inequality is strict. In the second case, suppose $a_1^{(1)*}a_2^{(1)}$ and $a_1^{(2)*}a_2^{(2)}$ have the same argument. Let \begin{equation} U=\sum_{s,t=1,2} U_{st} |\psi_s\rangle \langle \psi_t|+\sum_{s=3}^n |\psi_s\rangle \langle \psi_s| \end{equation} be a unitary transformation and let \begin{equation}\label{eq two dim U} \tilde{U}=\left( \begin{array}{cc} U_{11} & U_{12}\\ U_{21} & U_{22} \end{array} \right) \end{equation} be the two dimensional unitary transformation induced by $U$. Under the transformation $U$, we get another pure state decomposition of $\rho$, $\rho=\sum_{k=1}^n p_k |\phi_k\rangle \langle \phi_k|$, with pure states $|\phi_k\rangle=\sum_{i=1}^n b_i^{(k)} |i\rangle$, $\sum_{k=1}^n p_k=1$, $0\leq p_k\leq 1$, $k=1,\cdots,n$, which is related to $\{|\psi_s\rangle\}$ according to Eq. (\ref{eq relation between two decomposition}).
Denote by \begin{equation} |\tilde{\phi}_k\rangle=(b_1^{(k)},b_2^{(k)})^T \end{equation} the unnormalized vector composed of the first two entries of $|{\phi}_k\rangle$ for $k=1,2$. Then $\{p_k,\ |\tilde{\phi}_k\rangle\}_{k=1,2}$ and $\{\lambda_s,\ |\tilde{\psi}_s\rangle\}_{s=1,2}$ can be regarded as two decompositions of the $2\times 2$ Hermitian matrix $\tilde{A}$ formed by the entries of the Hermitian matrix $A=\sum_{s=1,2} \lambda_s |{\psi}_s\rangle \langle {\psi}_s|=\sum_{k=1,2} p_k |{\phi}_k\rangle \langle {\phi}_k|$ in positions $(1,1)$, $(1,2)$, $(2,1)$, $(2,2)$, \begin{eqnarray} \tilde{A}=\sum_{s=1,2} \lambda_s |\tilde{\psi}_s\rangle \langle \tilde{\psi}_s|=\sum_{k=1,2} p_k |\tilde{\phi}_k\rangle \langle \tilde{\phi}_k|. \end{eqnarray} These two decompositions $\{p_k,\ |\tilde{\phi}_k\rangle\}_{k=1,2}$ and $\{\lambda_s,\ |\tilde{\psi}_s\rangle\}_{s=1,2}$ are connected by the unitary transformation $\tilde{U}$ of Eq. (\ref{eq two dim U}) through the relation Eq. (\ref{eq relation between two decomposition}). Since the unnormalized vectors $|\tilde{\psi}_1\rangle$ and $|\tilde{\psi}_2\rangle$ are not parallel, the two dimensional Hermitian matrix $\tilde{A}$ has full rank. According to Lemma \ref{lemma different magnitude}, there exists a unitary transformation $\tilde{U}$ such that $\{p_k,\ |\tilde{\phi}_k\rangle\}_{k=1,2}$ is a decomposition of the form of Eq. (\ref{eq in lemma different magnitude}) with $0<p_k<1$, $k=1,2$. This means that $b_1^{(1)*}b_2^{(1)}$ and $b_1^{(2)*}b_2^{(2)}$ have different arguments. Therefore we obtain a decomposition $\{p_k,\ |{\phi}_k\rangle\}_{k=1}^n$ of $\rho$ satisfying the condition of the first case. As in Eq. (\ref{eq proof different magnitude}), one gets $C_a^{l_1}(\rho)\geq \sum_{k=1}^n p_k C_{l_1}(|\phi_k\rangle)>C_{l_1}(\rho)$. \end{proof} Since the $l_1$ norm of coherence is nonnegative and the $l_1$ norm of coherence of assistance of a mixed state is strictly larger than it, the $l_1$ norm of coherence of assistance of a mixed state is strictly positive. \begin{corollary} For any mixed quantum state $\rho$, the $l_1$ norm of coherence of assistance is strictly positive, $C_a^{l_1}(\rho)>0$. \end{corollary} Here we describe a protocol for obtaining a larger $l_1$ norm of coherence with the help of another party using local measurements and one way classical communication, giving an example in a four dimensional system. Let $\rho_A=\frac{1}{4}\sum_{i=0}^3 |i\rangle\langle i|$ be the maximally mixed state held by Alice, whose initial coherence is zero. As a purification shared with another party, Bob, we first prepare the pure entangled state $|\psi\rangle_{AB}=\frac{1}{2}\sum_{i=0}^3 |\psi_i\rangle_A|i \rangle_B$, with maximally coherent states $|\psi_i\rangle=\frac{1}{2}\sum_j (-1)^{\delta_{ij}}|j\rangle$, where $\delta_{ij}=1$ when $i=j$ and zero otherwise, so that $\rho_A=\frac{1}{4}\sum_{i=0}^3 |i\rangle\langle i|=\frac{1}{4}\sum_{i=0}^3 |\psi_i\rangle_{A}\langle\psi_i|$. Then Bob performs a von Neumann measurement in the basis $\{|i\rangle_B\}$. If Bob's component is projected onto the state $|i\rangle_B$, the state of Alice collapses to $|\psi_i\rangle_A$, which has maximal $l_1$ norm of coherence, $i=0,1,2,3$. After receiving Bob's measurement outcome via a classical communication channel, Alice holds her state as a four state ensemble $\{p_i=\frac{1}{4},\ |\psi_i\rangle\}$ with average $l_1$ norm of coherence $\frac{1}{4}\sum_{i=0}^3 C_{l_1}(|\psi_i\rangle_{A}) =3$. Therefore the final $l_1$ norm of coherence for Alice is increased from $C_{l_1}(\rho_A)=0$ to the maximum $C_a^{l_1}(\rho_A)=3$.
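A few lines of Python (a sketch, with variable names of our own) confirm the arithmetic of this example: the four states $|\psi_i\rangle$ form an orthonormal ensemble for the maximally mixed state, and each carries the maximal $l_1$ norm of coherence $3$.
\begin{verbatim}
import numpy as np

# The four maximally coherent states |psi_i> = (1/2) * sum_j (-1)^{delta_ij} |j>.
psis = [np.array([(-1.0) ** (i == j) for j in range(4)]) / 2 for i in range(4)]

rho_A = sum(0.25 * np.outer(v, v) for v in psis)
print(np.allclose(rho_A, np.eye(4) / 4))          # True: the ensemble averages to I/4

coherences = [np.sum(np.abs(np.outer(v, v))) - 1 for v in psis]
print(coherences)                                 # each equals 3, the maximum in dimension 4
\end{verbatim}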
\section{Relation Between the $l_1$ Norm of Coherence of Assistance and Entanglement} The curious observation that the coherence of $\rho=\sum_{i,j} \rho_{ij}|i\rangle \langle j|$ is closely related to the entanglement of $\rho_{mc}=\sum_{i,j} \rho_{ij}|ii\rangle \langle jj|$ \cite{E. Rains} was first noticed in Ref. \cite{A. Winter}: \begin{equation} \rho=\sum_{i,j} \rho_{ij}|i\rangle \langle j|\ \longleftrightarrow \ \rho_{mc}=\sum_{i,j} \rho_{ij}|ii\rangle \langle jj|. \end{equation} For example, the coincidence of coherence cost and coherence of formation is identified with the coincidence of entanglement cost and entanglement of formation \cite{A. Winter}. The relative entropy of coherence of assistance of $\rho$ is equal to the entanglement of assistance \cite{D. DiVincenzo} of $\rho_{mc}$, $C_a^{r}(\rho)=E_a(\rho_{mc})$ with $E_a(\rho)=\max \sum_i p_i E(|\psi_i\rangle)=\max \sum_i p_i S(Tr_B(|\psi_i\rangle\langle\psi_i|))$, where the maximization is taken over all pure state decompositions of $\rho=\sum_i p_i |\psi_i\rangle\langle\psi_i|$ \cite{E. Chitambar}. Similarly, we shall show that the $l_1$ norm of coherence of assistance corresponds to the entanglement measure called the convex-roof extended negativity of assistance \cite{J. S. Kim}, defined as $N_a(\rho)=\max \sum_i p_i N(|\psi_i\rangle)$, where the maximization is taken over all pure state decompositions of $\rho=\sum_i p_i |\psi_i\rangle\langle\psi_i|$. Here $N(\rho)$ is the negativity of $\rho$, a well known entanglement measure defined as the sum of the absolute values of the negative eigenvalues of $\rho^{PT}$, where the superscript $PT$ denotes partial transposition \cite{G. Vidal}. \begin{theorem} The $l_1$ norm of coherence of assistance of $\rho=\sum_{i,j} \rho_{ij}|i\rangle \langle j|$ is related to the convex-roof extended negativity of assistance of the maximally correlated state $\rho_{mc}=\sum_{i,j} \rho_{ij}|ii\rangle \langle jj|$ as \begin{equation} C_a^{l_1}(\rho)=2N_a(\rho_{mc}). \end{equation} \end{theorem} \begin{proof} Note that for a maximally correlated state $\rho_{mc}$, the pure states in any of its decompositions are all of the Schmidt form $|\psi^\prime\rangle=\sum_i a_i|ii\rangle$ \cite{M. J. Zhao2008}. The negativity of $|\psi^\prime\rangle=\sum_i a_i|ii\rangle$ is related to the $l_1$ norm of coherence of $|\psi\rangle=\sum_i a_i|i\rangle$ as $2N(|\psi^\prime\rangle)=C_{l_1}(|\psi\rangle)=\sum_{i\neq j} |a_i^*a_j|$ \cite{H. Zhu, S. Rana}. Therefore, if $\{ p_k,\ |\psi^{\prime}_k\rangle\}$ is an optimal decomposition for $\rho_{mc}$ such that $N_a(\rho_{mc})=\sum_k p_k N(|\psi^{\prime}_k\rangle)$, with $|\psi^{\prime}_k\rangle=\sum_i a_i^{(k)}|ii\rangle$, then $\{ p_k,\ |\psi_k\rangle\}$ with $|\psi_k\rangle=\sum_i a_i^{(k)}|i\rangle$ is an optimal decomposition for $\rho$ such that $C_a^{l_1}(\rho)=\sum_k p_k C_{l_1}(|\psi_k\rangle)$. \end{proof} \section{Comparison between the $l_1$ norm of coherence of assistance and the relative entropy of coherence of assistance} The $l_1$ norm of coherence of assistance and the relative entropy of coherence of assistance exhibit many similarities. Parallel to the result in Theorem \ref{th increase coh}, we can prove that the relative entropy of coherence of assistance is strictly larger than the original relative entropy of coherence for all mixed states. \begin{theorem} For any mixed quantum state $\rho$, we have \begin{eqnarray}\label{eq strict ineq for r} C_a^{r}(\rho)>C_{r}(\rho).
\end{eqnarray} \end{theorem} \begin{proof} Suppose $\rho=\sum_{s=1}^m \lambda_s |\psi_s\rangle \langle \psi_s|$ is the spectral decomposition with $\sum_{s=1}^m \lambda_s=1$ and $0< \lambda_s\leq 1$. Then \begin{equation} \begin{array}{rcl} C_a^{r}(\rho)&\geq& \sum_{s=1}^m \lambda_s C_r(|\psi_s\rangle \langle \psi_s|)\\ &=& \sum_{s=1}^m \lambda_s S(\Delta(|\psi_s\rangle \langle \psi_s|))\\ &\geq & S(\sum_{s=1}^m \lambda_s\Delta(|\psi_s\rangle \langle \psi_s|))-H(\lambda_s)\\ &=& S(\Delta(\rho))-S(\rho)\\ &=&C_{r}(\rho), \end{array} \end{equation} where $H(\lambda_s)=-\sum_{s=1}^m \lambda_s \log \lambda_s$; the inequality in the third line follows from the entropy inequality $S(\sum_s \lambda_s \sigma_s)\leq \sum_s \lambda_s S(\sigma_s)+H(\lambda_s)$ and becomes an equality if and only if the states $\{\Delta(|\psi_s\rangle \langle \psi_s|)\}$ have support on orthogonal subspaces \cite{M. A. Nielsen}. Since the $m$ projected eigenstates $\{\Delta(|\psi_s\rangle \langle \psi_s|)\}$ lie in the $m$ dimensional space spanned by $\{ |i\rangle \langle i| \}_{i=1}^m$, they are orthogonal if and only if each is supported on a one dimensional subspace spanned by one of the elements of $\{ |i\rangle \langle i| \}_{i=1}^m$, which implies that all eigenstates $\{|\psi_s\rangle \langle \psi_s|\}$ are themselves incoherent with respect to the reference basis. Therefore, this inequality becomes an equality if and only if $\rho$ is incoherent, which means that it is strict for all mixed coherent quantum states. In this case the relative entropy of coherence of assistance $C_a^{r}(\rho)$ is strictly larger than the relative entropy of coherence $C_{r}(\rho)$ for a mixed coherent state $\rho$. For mixed incoherent states, the relative entropy of coherence is zero, $C_{r}(\rho)=0$. However, by applying a suitable unitary transformation to the eigenstate decomposition, we can obtain another pure state decomposition consisting of coherent states, with positive average relative entropy of coherence. This means that the relative entropy of coherence of assistance $C_a^{r}(\rho)$ is strictly larger than the relative entropy of coherence $C_{r}(\rho)$ for all mixed incoherent quantum states. Therefore, inequality (\ref{eq strict ineq for r}) holds true for all mixed states. \end{proof} \begin{corollary} For any mixed quantum state $\rho$, the relative entropy of coherence of assistance is strictly positive, $C_a^{r}(\rho)>0$. \end{corollary} The $l_1$ norm of coherence of assistance can be shown to be an upper bound for the relative entropy of coherence of assistance. \begin{theorem} For any quantum state $\rho$, the $l_1$ norm of coherence of assistance and the relative entropy of coherence of assistance satisfy the relation \begin{equation} C_a^{r}(\rho)\leq C_a^{l_1}(\rho). \end{equation} \end{theorem} \begin{proof} For any quantum state $\rho$, suppose $\{ p_k,\ |\psi_k\rangle\}$ is an optimal decomposition of $\rho$ such that $C_a^{r}(\rho)=\sum_k p_k C_r(|\psi_k\rangle)$. Since the relative entropy of coherence is bounded from above by the $l_1$ norm of coherence for all pure states \cite{S. Rana}, we have $C_a^{r}(\rho)=\sum_k p_k C_r(|\psi_k\rangle)\leq \sum_k p_k C_{l_1}(|\psi_k\rangle)\leq C_a^{l_1}(\rho)$. \end{proof} Now we come to compare the $l_1$ norm of coherence of assistance and the relative entropy of coherence of assistance in the following table.
Although the original coherence measures are defined from different points of view, one using the $l_1$ norm from mathematics and the other using the entropy from information theory, the corresponding coherences of assistance exhibit many similarities, including their definitions, their upper bounds with the saturation condition, their optimal pure state decompositions and their strict positivity for mixed states. The reason lies in the fact that both are defined via a maximization over pure state decompositions and inherit the properties of the underlying coherence measures. \begin{table}[!h] \tabcolsep 0pt \vspace*{-12pt} \newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}} \begin{center} {\rule{0.9\textwidth}{1pt}} \begin{tabular*} {0.9\textwidth}{@{\extracolsep{\fill}}|c|c|c|} & $C_a^{r}$ & $C_a^{l_1}$ \\ \hline Definition & $C_a^{r}(\rho)=\max \sum_k p_k C_r(|\psi_k\rangle)$ & $C_a^{l_1}(\rho)=\max \sum_k p_k C_{l_1}(|\psi_k\rangle)$ \\ \hline Upper bound & $C_a^{r}(\rho)\leq S(\Delta(\rho))$ & $C_a^{l_1}(\rho)\leq \sum_{i\neq j} \sqrt{\rho_{ii}\rho_{jj}}$ \\ \hline \tabincell{c} {Analytical formula \\for qubit and qutrit states} & $C_a^{r}(\rho)= S(\Delta(\rho))$ & $C_a^{l_1}(\rho)= \sum_{i\neq j} \sqrt{\rho_{ii}\rho_{jj}}$ \\ \hline Range & $0\leq C_a^{r}(\rho) \leq \log n$ & $0\leq C_a^{l_1}(\rho)\leq n-1$ \\ \hline Relation with original coherence & \tabincell{c} {$C_a^{r}(\rho)>C_r(\rho) \text{ if $\rho$ is mixed}$ \\ $C_a^{r}(\rho)=C_r(\rho) \text{ if $\rho$ is pure}$ }& \tabincell{c} {$C_a^{l_1}(\rho)>C_{l_1}(\rho) \text{ if $\rho$ is mixed}$\\$C_a^{l_1}(\rho)=C_{l_1}(\rho) \text{ if $\rho$ is pure}$ } \\ \hline Relation with Entanglement & $C_a^{r}(\rho)=E_a(\rho_{mc})$ & $C_a^{l_1}(\rho)=2N_a(\rho_{mc})$ \\ \hline \end{tabular*} {\rule{0.9\textwidth}{1pt}} \caption{Table of comparison between $C_a^{r}$ and $C_a^{l_1}$ for an $n$ dimensional quantum state $\rho=\sum_{i,j=1}^n \rho_{ij}|i\rangle \langle j|$. } \end{center} \end{table} Based on these results, we can divide all quantum states into three classes with respect to the relation between the coherence of assistance and the original coherence. Here we do not need to distinguish between the $l_1$ norm of coherence and the relative entropy of coherence, because the classification is the same for both. The first class $\mathcal{S}_1$ is the set of all pure incoherent states. This class of states is incoherent, and it stays incoherent even with the help of another party performing local measurements and one way classical communication. Such states satisfy $C_a(\rho)=C(\rho)=0$. The second class $\mathcal{S}_2$ is the set of all pure coherent states. This class of states is coherent, but its coherence cannot be increased with the help of another party performing local measurements and one way classical communication. Such states satisfy $C_a(\rho)=C(\rho)>0$. The third class $\mathcal{S}_3$ is the set of all mixed states. Whether such a state is incoherent or coherent, its coherence can be increased with the help of another party performing local measurements and one way classical communication. Such states satisfy $C_a(\rho)>C(\rho)\geq 0$. In Ref. \cite{T. Ma}, the difference between the coherence of assistance and the original coherence is called the accessible coherence, which measures the increase in coherence. For quantum states in $\mathcal{S}_3$, the accessible coherence is always positive: one can gain more coherence if one knows the corresponding ensemble of the state.
\section{Conclusions and Discussions} To summarize, we have proposed and systematically studied the $l_1$ norm of coherence of assistance. We have provided an upper bound for the $l_1$ norm of coherence of assistance in terms of a function of the diagonal entries of the quantum state. A necessary and sufficient condition for the $l_1$ norm of coherence of assistance to reach this upper bound has been given. In the special case of two and three dimensional quantum states, the $l_1$ norm of coherence of assistance is always equal to this upper bound. After that, we have shown that the $l_1$ norm of coherence of assistance, as well as the relative entropy of coherence of assistance, is strictly larger than the original coherence for mixed states. The relation between the $l_1$ norm of coherence of assistance and entanglement has been revealed, and finally a comparison between the $l_1$ norm of coherence of assistance and the relative entropy of coherence of assistance has been made. Operationally, the $l_1$ norm of coherence of assistance is the maximal $l_1$ norm of coherence one can gain with the help of another party's local measurement and one way classical communication. This may provide a more readily available and useful resource for quantum information processing. An experimental realization in linear optical systems of obtaining the maximal relative entropy of coherence for two dimensional quantum states in the assisted distillation protocol has been presented \cite{K. D. Wu}. We hope this work will promote the further study of coherence as a resource. \noindent{\bf Acknowledgments}\, Ming-Jing Zhao thanks the Department of Mathematics and Statistics, University of Guelph, Canada for hospitality. This work is supported by the NSF of China (Grant No. 11401032) and the China Scholarship Council (Grant No. 201808110022). The authors appreciate the valuable suggestions and comments by the anonymous referee. \end{document}
\begin{document} \title{Finite models, stability, and Ramsey's theorem} \author{Doug Ensley \\ Department of Math \\ and Computer Science \\ Shippensburg University \\ Shippensburg, PA 17257 \and Rami Grossberg \\ Department of \\ Mathematical Sciences \\ Carnegie Mellon University \\ Pittsburgh, PA 15213} \date{\today} \maketitle \begin{abstract} We prove some results on the border of Ramsey theory (finite partition calculus) and model theory. Also a beginning of classification theory for classes of finite models is undertaken. \end{abstract} \section{Introduction} Frank Ramsey in his fundamental paper (see \cite{ra} and pages 18-27 of \cite{grs}) was interested in ``a problem of formal logic.'' He proved the result now known as ``(finite) Ramsey's theorem'' which essentially states \begin{quotation} For all $k, r, c < \omega$, there is an $n < \omega$ such that however the $r$ -- subsets of $\{ 1, 2, \ldots, n \}$ are $c$ -- colored, there will exist a $k$ -- element subset of $\{ 1, 2, \ldots, n \}$ which has all its $r$ -- subsets the same color. \end{quotation} (We will let $n(k,r,c)$ denote the smallest such $n$.) Ramsey proved this theorem in order to construct a finite model for a given finite universal theory so that the universe of the model is canonical with respect to the relations in the language. (For model theorists ``canonical'' means $\Delta$ -- indiscernible as in Definition \ref{def1}). Much is known about the order of magnitude of the function $n(k,r,c)$ and some of its generalizations (see \cite{ehmr}, for example). An upper bound on $n(k, r, c)$ is an $(r-1)$ -- times iterated exponential of a polynomial in $k$ and $c$. Many feel that the upper bound is tight. However especially for $r \geq 3$ the gap between the best known lower and upper bounds is huge. In 1956 A. Ehrenfeucht and A. Mostowski \cite{ehmo} rediscovered the usefulness of Ramsey's theorem in logic and introduced the notion we now call indiscernibility. Several people continued exploiting the connections between partition theorems and logic (i.e. model theory), among them M. Morley (see \cite{mo1} and \cite{mo2}) and S. Shelah who has published a virtually uncountable number of papers related to indiscernibles (see \cite{sh}). Morley \cite{mo2} used indiscernibles to construct models of very large cardinality (relative to the cardinality of the reals) --- specifically, he proved that the Hanf number of $L_{\omega _1,\omega }$ is $\beth _{\omega _1}$. One of the most important developments in mathematical logic --- certainly the most important in model theory --- in the last 30 years is what is known as ``classification theory'' or ``stability theory''. There are several books dedicated entirely to some aspects of the subject, including books by J. Baldwin \cite{bal}, D. Lascar \cite{las}, S. Shelah \cite{sh}, and A. Pillay \cite{pi}. Lately Shelah and others have done extensive work in extending classification theory from the context of first order logic, to the classification of arbitrary classes of models, usually for infinitary logics extending first order logic (for example see \cite{BlSh:330}, \cite{BlSh:360}, \cite{BlSh:393}, \cite{gr2}, \cite{[GrSh 266]}, \cite{MaSh}, \cite{sh4}, \cite{shh}, \cite{Sh:299}).
\cite{Sh:299} contains several philosophical and personal comments about why this research is interesting, and \cite{shtape} is the video tape of Shelah's plenary talk at the International Congress of Mathematics at Berkeley in 1986. This raises a question of fundamental importance: \textit{Is there a classification theory for finite structures?} In a more philosophical context: \textit{Is the beautiful classification theory of Shelah completely detached from finite mathematics?} One of the fundamental difficulties in developing model theory for finite structures is the choice of an appropriate ``submodel'' relation --- in category-theoretic terminology, the choice of a natural morphism. In classification theory for elementary classes (models of a first order, usually complete theory) the right notion of morphism is ``elementary embedding'' defined using the relation $M \prec N$, which is a strengthening of the notion of submodel (denoted by $M \subseteq N$). Unfortunately for finite structures, $M \prec N$ always implies $M=N$. Moreover, in many cases even $M \subseteq N$ implies $M=N$ (e.g., when $N$ is a group of prime order). We need a substitute. One of the basic observations to make is that when we limit our attention to structures in a relational language only (i.e., no function symbols), then $M\subseteq N$ does not imply $M=N$. In general this seems to be insufficient to force the substructure to inherit some of the properties of the bigger structure. It was observed already by Ramsey (in \cite{ra}) that if $M\subseteq N$, then for every universal sentence $\phi$, $N\models \phi$ implies $M \models \phi$. So when studying the class of models of a universal first order theory, the relation $M \subseteq N$ is reasonable, but it is not for more complicated theories (e.g. not every subfield of an algebraically closed field is algebraically closed). Such a concept for classes of finite structures is introduced in Definition \ref{submodel}. This paper has several goals: \begin{enumerate} \item \textit{Study Ramsey numbers for definable coloring inside models of a stable theory.} This can be viewed as a direct extension of Ramsey's work, namely by taking into account the first order properties of the structures. A typical example is the field of complex numbers $\langle \mathbf{C},+,\cdot\rangle$. It is well known that its first order theory $Th(\mathbf{C})$ has many nice properties --- it is $\aleph _1$ -- categorical and thus is $\aleph _0$ -- stable and has neither the order property nor the finite cover property. We will be interested in the following general situation. Given a first order (complete) theory $T$ and an (infinite) model $M\models T$, let $k,r,$ and $c$ be natural numbers, and let $F$ be a coloring of a set of $r$ -- tuples from $M$ by $c$ colors which is definable by a first order formula in the language $L(T)$ (maybe with parameters from $M$). Let $n \stackrel{\mathrm{def}}{=} n_F(k,r,c)$ be the least natural number such that for every $S \subseteq |M|$ of cardinality $n$, if $F:[S]^r \rightarrow c$ then there exists $S^{*}\subseteq S$ of cardinality $k$ such that $F$ is constant on $[S^{*}]^r$. It turns out that for stable theories (or even for theories without the independence property) we get better upper bounds than for the general Ramsey numbers. This indicates that one cannot improve the lower bounds by looking at stable structures. \item \textit{Introduce stability-like properties (e.g. the
$n$-order property, the $k$-independence property, and the $d$-cover property), as well as averages of finite sequences of indiscernibles.} Some of the interconnections and the effect on the existence of indiscernibles are presented. \item \textit{Develop classification theory for classes of finite structures. In particular, introduce a notion that corresponds to stable amalgamation, and show that it is symmetric for many models.} See Example \ref{ex.fields}. \item \textit{Bring down uncountable techniques to a finite context.} We believe that much of the machinery developed (mainly by Shelah) to deal with problems concerning categoricity of infinitary logics and the behavior of the spectrum function at cardinalities $\geq \beth _{\omega _1}$ depends on some very powerful combinatorial ideas. We try here to extract some of these ideas and present them in a finite context. \end{enumerate} Shelah \cite{sh} proved that instability is equivalent to the presence of either the strict order property or the independence property. In a combinatorial setting, stability implies that for arbitrarily large sets, the number of types over a set is polynomial in the cardinality of the set. We address the finite case here in which we restrict our attention to when the number of $\phi$ -- types over a finite set is bounded by a polynomial in the size of the set of parameters. First we find precisely the degree of the polynomial bound on the number of these types given to us by the absence of the strict order or independence properties. This is an example of something relevant in the finite case which is of no concern in the usual classification theory framework. Once we have these sharper bounds we can find sequences of indiscernibles in the spirit of \cite{sh}. It should be noted here that everything we do is ``local'', involving just a single formula (or equivalently a finite set of formulas). We then work through the calculations for uniform hypergraphs as a case study. This raises questions about ``stable'' graphs and hypergraphs which we begin to answer. In the second half of the paper, we examine classes of finite structures in the framework of Shelah's classification for non-elementary classes (see \cite{sh3}). In particular, we make an analogy to Shelah's ``abstract elementary classes'' and prove results similar to his. \noindent \textbf{Notation}: Everything is standard. We will typically treat natural numbers as ordinals (i.e., $n = \{0, 1, \ldots, n-1 \}$). Often $x$, $y$, and $z$ will denote free variables, or finite sequences of variables --- it should be clear from the context whether we are dealing with variables or with sequences of variables. When $x$ is a sequence, we let $l(x)$ denote its length. $L$ will denote a similarity type (a.k.a. language or signature), $\Delta$ will stand for a finite set of $L$ formulas. $M$ and $N$ will stand for $L$ -- structures, $|M|$ the universe of the structure $M$, and $\Vert M\Vert$ the cardinality of the universe of $M$. Given a fixed structure $M$, subsets of its universe will be denoted by $A$, $B$, $C$, and $D$. So when we write $A \subseteq M$ we really mean that $A\subseteq |M|$, while $N\subseteq M$ stands for ``$N$ is a submodel of $M$''. When $M$ is a structure then by $a \in M$ we mean $a \in |M|$, and when $a$ is a finite sequence of elements, then $a\in M$ stands for ``all the elements of the sequence $a$ are elements of $|M|$''.
Since all of our work will be inside a given structure $M$ (with the exception of Section \ref{graphsec}), all the notions are relative to it. For example, for $a\in M$ and $A\subseteq M$ we denote by $tp_\Delta (a,A)$ the type $tp_\Delta (a,A,M)$, which is $\{\phi (x;b):M\models \phi [a;b],\ b\in A,\ \phi (x;y)\in \Delta \}$, and if $A\subseteq M$ then $S_\Delta (A,M) \stackrel{\mathrm{def}}{=}\{tp_\Delta (a,A):a\in M\}$. Note that in \cite{sh} $S_\Delta (A,M)$ denotes the set of all complete $\Delta$ -- types with parameters from $A$ that are consistent with $Th(\langle M,c_a\rangle _{a\in A})$. It is important for us to limit attention to the types realized in $M$ in order to avoid dependence on the compactness theorem. It is usually important that $\Delta$ is closed under negation, so when $\Delta =\{\phi ,\neg \phi \}$, instead of writing $tp_\Delta (\cdots )$ and $S_\Delta (\cdots )$ we will write $tp_\phi (\cdots )$ and $S_\phi (\cdots )$, respectively. \section{The effect of the order and independence properties on the number of local types} In this section, we fix some notation and terms and then define the first important concepts. In the following definition, the first three parts are from \cite{sh}, $(4)$ is a generalization of a definition of Shelah, and $(5)$ is from Grossberg and Shelah \cite{GrSh}. \begin{definition} \label{def1} \begin{enumerate} \item For a set $\Delta$ of $L$ -- formulas and a natural number $n$, a \underline{$(\Delta ,n)$ -- type over a set} $A$ is a set of formulas of the form $\phi ({x};a)$ where $\phi (x;y)\in \Delta$ and $a\in A$ with $l(x)=n$. If $\Delta =L$, we omit it, and we just say ``$\phi$ -- type'' for a $(\{\phi (x;y),\neg \phi (x;y)\},l(x))$ -- type. \item Given a $(\Delta ,n)$ -- type $p$ over $A$, define $dom(p)=\{a\in A\;:\;$ for some $\phi \in \Delta ,\,\phi (x;a)\in p\}$. \item A type $p$ \underline{$(\Delta _0,\Delta _1)$ -- splits over} $B\subseteq dom(p)$ if there is a $\phi (x;y)\in \Delta _0$ and $b,c\in dom(p)$ such that $tp_{\Delta _1}(b,B)=tp_{\Delta _1}(c,B)$ and $\phi (x;b),\neg \phi (x;c)\in p$. If $p$ is a $\Delta$ -- type and $\Delta _0=\Delta _1=\Delta$, then we just say $p$ \underline{splits over $B$}. \item \label{def1d} We say that $(M,\phi (x;y))$ has \underline{the $k$ -- independence property} if there are $\{a_i:i<k\}\subseteq M$, and $\{b_w:w\subseteq k\}\subseteq M$, such that $M\models \phi [a_i;b_w]$ if and only if $i\in w$. We will say that \underline{$M$ has the $k$ -- independence property} when there is a formula $\phi$ such that $(M,\phi )$ does. \item $(M,\phi (x;y))$ has \underline{the $n$ -- order property} (where $l(x)=l(y)=k$) if there exists a set of $k$ -- tuples $\{a_i:i<n\}\subseteq M$ such that $i<j$ if and only if $M\models \phi [a_i,a_j]$ for all $i,j<n$. We will say that $M$ has \underline{the $n$ -- order property} if there is a formula $\phi$ so that $(M,\phi )$ has the $n$ -- order property. \end{enumerate} \end{definition} \noindent \textsc{Warning:} This use of ``order property'' corresponds to neither the order property nor the strict order property in \cite{sh}. The definition comes rather from \cite{gr}. The following monotonicity property is immediate from the definitions. \begin{proposition} Let sets $B\subseteq C\subseteq A$ and a complete $(\Delta, n)$ -- type $p$ be given with $dom(p)\subseteq A$. If $p$ does not split over $B$, then $p$ does not split over $C$.
\end{proposition} \begin{fact} \label{unstable} (Shelah, see \cite{sh}) Let $T$ be a complete first order theory. The following conditions are equivalent: \begin{enumerate} \item $T$ is unstable. \item There are $\phi (x;y)\in L(M)$, $M\models T$, and $\{a_n:n<\omega \}\subseteq M$ such that $l(x)=l(y)=l(a_n)$, and for every $n,k<\omega $ we have \\$n<k\Leftrightarrow M\models \phi [a_n;a_k]$. \end{enumerate} \end{fact} The compactness theorem gives us the following \begin{corollary} \label{finitecor} Let $T$ be a stable theory, and suppose that $M\models T$ is an infinite model. \begin{enumerate} \item For every $\phi (x,y)\in L(M)$ there exists a natural number $n_{\phi}$ such that $(M,\phi )$ does not have the $n_{\phi}$ -- order property. \item For every $\phi (x,y)\in L(M)$ there exists a natural number $k_{\phi}$ such that $(M,\phi )$ does not have the $k_{\phi}$ -- independence property. \item If $T$ is categorical in some cardinality greater than $|T|$, then for every $\phi (x,y)\in L(M)$ there exists a natural number $d_{\phi}$ such that $(M,\phi )$ does not have the $d_{\phi}$ -- cover property (see Definition \ref{coverp}). \end{enumerate} \end{corollary} We first establish that the failure of either the independence property or the order property for $\phi$ implies that there is a polynomial bound on the number of $\phi$ -- types. The more complicated of these to deal with is the failure of the order property. At the same time this is perhaps the more natural property to look for in a given structure. The bounds in this case are given in Theorem \ref{thm2}. The failure of the independence property gives us a far better bound (i.e., smaller degree polynomial) with less work. Theorem \ref{nip_poly} reproduces this result of Shelah, paying attention to the specific connection between the bound and where the independence property fails. This first lemma is a finite version of Lemma 5 from \cite{gr}. \begin{lemma} \label{lem1} Let $\phi (x;y)$ be a formula in $L$, $n$ a positive integer, $s=l(y)$, $r=l(x)$, $\psi (y;x)=\phi (x;y)$. Suppose that $\{A_i\subseteq M \;:\;i\leq 2n\}$ is an increasing chain of sets such that for every $B\subseteq A_i$ with $|B|\leq 3sn$, every type in $S_\phi (B,M)$ is realized in $A_{i+1}$. Then if there is a type $p\in S_\phi (A_{2n},M)$ such that for all $i < 2n$, $p|A_{i+1}$ $(\psi ,\phi )$ -- splits over every subset of $A_i$ of size at most $3sn$, then $(M,\rho )$ has the $n$ -- order property, where \[ \rho (x_0,x_1,x_2;y_0,y_1,y_2)\stackrel{\mathrm{def}}{=}[\phi (x_0;y_1)\leftrightarrow \phi (x_0;y_2)] \] \end{lemma} {{\sc Proof}} Choose $d \in M$ realizing $p$. Define $\{a_i,b_i,c_i\in A_{2i+2}\;:\;i<n\}$ by induction on $i$. Assume for $j<n$ that we have defined these for all $i<j$. Let $B_j=\bigcup \{a_i,b_i,c_i\;:\;i<j\}$. Notice that $|B_j|\leq 3sj<3sn$, so by the assumption, $p|A_{2j+1}$ $(\psi ,\phi )$ -- splits over $B_j$. That is, there are $a_j$, $b_j\in A_{2j+1}$ such that \[ tp_\psi (a_j,B_j,M)=tp_\psi (b_j,B_j,M), \] and \[ M\models \phi [d;a_j]\wedge \neg \phi [d;b_j]. \] Now choose $c_j\in A_{2j+1}$ realizing $tp(d,B_j\cup \{ a_j,b_j\},M)$ (which can be done since $|B_j\cup\{a_j,b_j\}|\leq 3sj+2s<3s(j+1)\leq 3sn$). This completes the inductive definition. For each $i$, let $d_i=c_ia_ib_i$. We will check that the sequence of $d_i$ and the formula \[ \rho (x_0,x_1,x_2;y_0,y_1,y_2)\stackrel{\mathrm{def}}{=}[\phi (x_0;y_1)\leftrightarrow \phi (x_0;y_2)] \] witness the $n$ -- order property for $M$.
If $i<j<n$, then $c_i\in B_j$. By choice of $a_j$ and $b_j$, $tp_\psi (a_j,B_j,M)=tp_\psi (b_j,B_j,M)$, so in particular, \[ M\models \phi [c_i;a_j]\leftrightarrow \phi [c_i;b_j] \] That is, $M\models \rho [d_i;d_j]$. On the other hand, if $i\leq j<n$, then $\phi (x;a_i)\in tp_\phi (d,B_j \cup \{a_j,b_j\},M)$ and $\phi (x;b_i) \notin tp_\phi (d,B_j \cup \{a_j, b_j\},M)$, and so, by the choice of $c_j$, we have that \[ M\models \phi [c_j;a_i]\wedge \neg \phi [c_j;b_i]. \] That is, $M\models \neg \rho [d_j;d_i]$ in this case. {\qed} In order to see the relationship between this definition of the order property and Shelah's, we mention Corollary \ref{cor1} below. Note that it is the formula $\phi$, not the $\rho$ of Lemma \ref{lem1}, which has the weak order property in the Corollary. \begin{definition} $(M,\phi )$ has \underline{the weak $m$ -- order property} if there exist $\{d_i:i<m\}\subseteq M$ such that for each $j<m$, \[ M\models \exists x\bigwedge_{i<m}\phi (x;d_i)^{\mbox{if}(i\geq j)} \] \end{definition} \noindent\textsc{Remark:} This is what Shelah \cite{sh} calls the $m$ -- order property. \begin{definition} \label{arrowdef} We write $x\rightarrow (y)_b^a$ if for every partition $\Pi$ of the $a$ -- element subsets of $\{1,\ldots ,x\}$ with $b$ parts, there is a $y$ -- element subset of $\{1,\ldots ,x\}$ with all of \emph{its} $a$ -- element subsets in the same part of $\Pi$. \end{definition} \begin{corollary} \label{cor1} \begin{enumerate} \item If in addition to the hypotheses of Lemma \ref{lem1} we have that $(2n)\rightarrow (m+1)_2^2$, then $\phi$ has the weak $m$ -- order property in $M$. \item If in addition to the hypotheses of Lemma \ref{lem1} we have that $n\geq \frac{2^{2m-1}}{\pi m}$, then $\phi$ has the weak $m$ -- order property in $M$. \end{enumerate} \end{corollary} {{\sc Proof}} (This is essentially \cite{sh} I.2.10(2)) \begin{enumerate} \item Let $a_i$, $b_i$, $c_i$ for $i<n$ be as in the proof of Lemma \ref{lem1}. For each pair $i<j\leq n$, define \[ \chi ({i,j}):=\left\{ \begin{array}{ll} 1 & \mbox{ if $M \models \phi[c_i; a_j]$} \\ 0 & \mbox{otherwise.} \end{array} \right. \] Since $(2n)\rightarrow (m+1)_2^2$, we can find a subset $I$ of $2n$ of cardinality $m+1$ on which $\chi$ is constant and which we can enumerate as $I = \{i_0,\ldots ,i_m\}$. If $\chi$ is 1 on $I$, then for every $k$ with $1\leq k\leq m+1$ \[ \{\neg \phi (x;b_{i_j})^{\mbox{\em if}(j>k)}\;:\;1\leq j<m\} \] is realized by $c_{i_{k-1}}$. Therefore, the sequence $\{b_{i_0},\ldots ,b_{i_m}\}$ witnesses the weak $m$ -- order property of $\phi$ in $M$. On the other hand, if $\chi$ is 0 on $I$, then for every $k$ with $1\leq k\leq m+1$ \[ \{\neg \phi (x;a_{i_j})^{\mbox{\em if}(j>k)}\;:\;1\leq j<m\} \] is realized by $c_{i_{k-1}}$. Therefore, the sequence $\{a_{i_0},\ldots ,a_{i_m}\}$ witnesses the weak $m$ -- order property of $\neg \phi$ in $M$. Of course, it is equivalent for $\phi$ and $\neg \phi$ to have the weak $m$ -- order property in $M$. \item By Stirling's formula, $n\geq \frac{2^{2m-1}}{\pi m}$ implies that $n\geq \frac{1}{2}\binom{2m}{m}$, and from \cite{grs}, $n\geq \frac{1}{2}\binom{2m}{m}$ implies that $(2n)\rightarrow (m+1)_2^2$. \end{enumerate} {\qed} We can now establish the relationship between the number of types and the order property.
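Before doing so, we remark that the arrow relation of Definition \ref{arrowdef} can be checked by brute force for very small parameters. The Python sketch below (function and variable names are ours) verifies $6\rightarrow (3)_2^2$ and $5\not\rightarrow (3)_2^2$, which together give the classical fact $n(3,2,2)=6$.
\begin{verbatim}
from itertools import combinations, product

def arrows(x, y, a=2, b=2):
    # Brute-force check of x -> (y)^a_b: every b-coloring of the a-subsets of
    # {0,...,x-1} admits a y-subset all of whose a-subsets receive the same color.
    a_subsets = list(combinations(range(x), a))
    for coloring in product(range(b), repeat=len(a_subsets)):
        color = dict(zip(a_subsets, coloring))
        if not any(len({color[s] for s in combinations(ys, a)}) == 1
                   for ys in combinations(range(x), y)):
            return False   # this coloring has no monochromatic y-subset
    return True

print(arrows(6, 3))   # True:  6 -> (3)^2_2
print(arrows(5, 3))   # False: the pentagon 2-coloring has no monochromatic triangle
\end{verbatim}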
\begin{theorem} \label{thm2} If $\phi (x;y)\in L(M)$ is such that \[ \rho (x_0,x_1,x_2;y_0,y_1,y_2)\stackrel{\mathrm{def}}{=}[\phi (x_0;y_1)\leftrightarrow \phi (x_0;y_2)] \] does not have the $n$ -- order property in $M$, then for every set $A\subseteq M$ with $|A|\geq 2$, we have that $|S_\phi (A,M)|\leq 2n|A|^k$, where $k=2^{(3ns)^{t+1}}$ with $r=l(x)$, $s=l(y)$, and $t=\max \{r,s\}$. \end{theorem}
{{\sc Proof}} Suppose that there is some $A\subseteq M$ with $|A|\geq 2$ so that $|S_\phi (A,M)|>(2n)|A|^k$. Let $\psi (y;x)=\phi (x;y)$, $m=|A|$, and let $\{a_i\;:\;i\leq (2n)m^k\}\subseteq M$ be witnesses to the fact that $|S_\phi (A,M)|>(2n)m^k$. (That is, each of these tuples realizes a different $\phi $ -- type over $A$.) Define $\{A_i:i<2n\}$, satisfying \begin{enumerate} \item $A\subseteq A_i\subseteq A_{i+1} \subseteq M$, \item $|A_i|\leq c^{e(i)}m^{(3ns)^i}$, where $c:=2^{2+(3sn)^t}$ and $e(i):=\frac{(3ns)^i-1}{3ns-1}$, and \item for every $B\subseteq A_i$ with $|B|\leq 3sn$, every $p\in S_\phi (B,M)\cup S_\psi (B,M)$ is realized in $A_{i+1}$. \end{enumerate}
To see that this can be done, we need only check the cardinality constraints. There are at most $|A_i|^{3sn}$ subsets of $A_i$ with cardinality at most $3sn$, and over each such subset $B$, there are at most $2^{(3sn)^r}$ and $2^{(3sn)^s}$ types in $S_\psi (B,M)$ and $S_\phi (B,M)$, respectively, so there are at most $2^{(3sn)^r}+2^{(3sn)^s}\leq 2^{1+(3sn)^t}$ types in $S_\psi (B,M)\cup S_\phi (B,M)$ for each such $B$. Therefore, $A_{i+1}$ can be defined so that \begin{eqnarray*} |A_{i+1}| & \leq & |A_i|+(2^{1+(3sn)^t})|A_i|^{3sn} \\ & \leq & c|A_i|^{3sn} \\ & \leq & c(c^{e(i)}m^{(3ns)^i})^{3sn} \\ & = & c^{1+e(i)(3sn)}m^{(3sn)^{i+1}} \\ & = & c^{e(i+1)}m^{(3sn)^{i+1}} \end{eqnarray*}
\begin{claim} \label{thm2c1} There is a $j<(2n)m^k$ such that for every $i<2n$ and every $B\subseteq A_i$ with $|B|\leq 3sn$, $tp(a_j,A_{i+1})$ $(\psi ,\phi )$ -- splits over $B$. \end{claim}
{{\sc Proof}} (Of Claim \ref{thm2c1}) Suppose not. That is, for every $j\leq (2n)m^k$, there is an $i(j)<2n$ and a $B\subseteq A_{i(j)}$ with $|B|\leq 3sn$, so that $tp(a_j,A_{i(j)+1})$ does not $(\psi ,\phi )$ -- split over $B$. Since $i$ is a function from $1+(2n)m^k$ to $2n$, there must be a subset $S$ of $1+(2n)m^k$ with $|S|>m^k$, and an integer $i_0<2n$ such that for all $j\in S$, $i(j)=i_0$. Now similarly, there are fewer than $|A_{i_0}|^{3sn}$ subsets of $A_{i_0}$ with cardinality at most $3sn$, so there is a $T\subseteq S$ with \[ |T|>\frac{m^k}{|A_{i_0}|^{3sn}} \] and a $B_0\subseteq A_{i_0}$, with $|B_0|\leq 3sn$ such that for all $j\in T$, $tp(a_j,A_{i_0+1})$ does not $(\psi ,\phi )$ -- split over $B_0$. Since $|A_{i_0}|\leq c^{e(i_0)}m^{(3ns)^{i_0}}\leq (cm)^{(3sn)^{2n}}$, then \[ |T|\geq \frac{m^k}{(cm)^{(3sn)^{2n}}} \] Let $C\subseteq A_{i_0+1}$ be obtained by adding to $B_0$ realizations of every type in $S_\phi (B_0,M)\cup S_\psi (B_0,M)$. This can clearly be done so that $|C|\leq 3ns+2^{(3ns)^r}+2^{(3ns)^s}$. The maximum number of $\phi $ -- types over $C$ is at most $2^{|C|^s}\leq 2^{c^s}$.
\begin{claim} \label{claim11} \label{thm2c2} $m^{k-(3ns)^{2n}}>(2^{c^s})(c^{(3ns)^{2n}})$ \end{claim}
{{\sc Proof}} (Of Claim \ref{thm2c2}) Since $c=2^{2+(3ns)^t}$, we have $c^s+(3ns)^{2n}(2+(3ns)^t)$ as the exponent on the right-hand side above.
Since $m\geq 2$, it is enough to show that \begin{eqnarray*} k &>& c^s+(3ns)^{2n}(2+(3ns)^t)+(3ns)^{2n} \\ &=& 2^{s(2+(3ns)^t)}+(3ns)^{2n}(3+(3ns)^t) \end{eqnarray*} This follows from the definition of $k$ (recall that $k=2^{(3ns)^{t+1}}$), so we have established Claim \ref{thm2c2}. {\qed}$_{\ref{claim11}}$
Therefore, $|T|$ is greater than the number of $\phi $ -- types over $C$, so there must be $i\neq j\in T$ such that $tp_\phi (a_i,C)=tp_\phi (a_j,C)$. Since $tp_\phi (a_i,A)\neq tp_\phi (a_j,A)$, we may choose $a\in A$ so that $M\models \phi [a_i,a]\wedge \neg \phi [a_j,a]$. Now choose $a^{\prime }\in C$ so that $tp_\psi (a,B_0)=tp_\psi (a^{\prime },B_0)$ (this is how $C$ is defined after all). Since $tp_\phi (a_i,A_{i_0+1})$ does not $(\psi ,\phi )$ -- split over $B_0$, we have that \[ \phi (x;a)\in tp_\phi (a_i,A_{i_0+1})\mbox{ if and only if }\phi (x;a^{\prime })\in tp_\phi (a_i,A_{i_0+1}) \] so $M\models \phi [a_i,a^{\prime }]\wedge \neg \phi [a_j,a^{\prime }]$, contradicting the fact that $tp_\phi (a_i,C)=tp_\phi (a_j,C)$ and thus completing the proof of Claim \ref{thm2c1}.
Now letting $j$ be as in Claim \ref{thm2c1} above and applying Lemma \ref{lem1} completes the proof of Theorem \ref{thm2}. {\qed}
Theorem \ref{thm4} below gives a better result under different assumptions. The next lemma is II, 4.10, (4) in \cite{sh}. It comes from a question due to Erd\H os about the so-called ``trace'' of a set system which was answered by Shelah and Perles \cite{sh2} in 1972. Purely combinatorial proofs (i.e., proofs in the language of combinatorics) can also be found in most books on extremal set systems (e.g., Bollob\'as \cite{bo}).
\begin{lemma} \label{lem3} If $S$ is any family of subsets of the finite set $I$ with \[ |S|>\sum_{i<k}{\left( \begin{array}{c} {|I|} \\ {i} \end{array} \right) } \] then there exist $\alpha _i\in I$ for $i<k$ such that for every $w\subseteq k$ there is an $A_w\in S$ so that $i\in w\Leftrightarrow \alpha _i\in A_w$. (The conclusion here is equivalent to $trace(I)\geq k$ in the language of \cite{bo}.) \end{lemma}
{{\sc Proof}} See Theorem 1 in Section 17 of \cite{bo} or Ap.1.7(2) in \cite{sh}. {\qed}
\begin{theorem} \label{thm4}\label{nip_poly} If $\phi (x;y)\in L(M)$ ($r=l(x)$, $s=l(y)$) does not have the $k$ -- independence property in $M$, then for every set $A\subseteq M$, if $|A|\geq 2$, then $|S_\phi (A,M)|\leq |A|^{s(k-1)}$. \end{theorem}
{{\sc Proof}} (Essentially \cite{sh}, II.4.10(4)) Let $F$ be the set of $\phi $ -- formulas over $A$. Then \[ |F|<|A|^s. \] So if $|S_\phi (A,M)|>|A|^{s(k-1)}$, then certainly \[ |S_\phi (A,M)|>\sum_{i<k}\left( \begin{array}{c} {|F|} \\ {i} \end{array} \right) , \] in which case Lemma \ref{lem3} can be applied to $F$ and $S_\phi (A,M)$ to get witnesses to the $k$ -- independence property in $M$, a contradiction. {\qed}
The ``moral'' of Theorem \ref{thm2} and Theorem \ref{thm4} is that when $\phi $ has some nice properties, there is a bound on the number of $\phi$ -- types over $A$ which is polynomial in $\vert A \vert$. Note that the difference between the two properties is that the degree of the polynomial in the absence of the $k$ -- independence property is linear in $k$ while in the absence of the $n$ -- order property the degree is exponential in $n$. Also the bounds on $\phi$ -- types in the latter case hold when a formula $\rho$ related to $\phi$ (as opposed to $\phi$ itself) is without the $n$ -- order property.
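To make this ``moral'' concrete, here is a small worked instantiation (our own illustration; the particular numbers play no role in what follows). Let $\phi (x;y)=R(x,y)$ be a binary relation, so that $r=s=t=1$. If $(M,\phi )$ does not have the $2$ -- independence property, then Theorem \ref{nip_poly} (applied with $k=2$) gives \[ |S_\phi (A,M)|\leq |A|^{s(k-1)}=|A|, \] while if the formula $\rho $ associated to $\phi $ does not have the $2$ -- order property, then Theorem \ref{thm2} (applied with $n=2$, so that its exponent is $k=2^{(3ns)^{t+1}}=2^{6^{2}}=2^{36}$) gives only \[ |S_\phi (A,M)|\leq 2n|A|^{k}=4\,|A|^{2^{36}}. \] Both bounds are polynomial in $|A|$, but the degrees are of a completely different order of magnitude.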
Another property discovered by Keisler (in order to study saturation of ultrapowers, see \cite{ke}), and studied extensively by Shelah is the ``finite cover property'' (see \cite{sh}) whose failure essentially provides us with a strengthening of the compactness theorem.
\begin{definition} \label{coverp} We say that $(M,\phi )$ does not have \underline{the $d$ - cover property} if for every $n\geq d$ and $\{b_i:i<n\}\subseteq M$, \textbf{if} \[ \left( \forall w\subseteq n \left[ |w|<d\Rightarrow M\models \exists x\bigwedge_{i\in w}\phi (x;b_i)\right] \right) \] \textbf{then} \[ M\models \exists x \bigwedge_{i<n}\phi (x;b_i). \] \end{definition}
\begin{example} If $M=(M,R)$ is the countable random graph, then $(M,R)$ fails to have the 2 -- cover property. If $M$ is the countable universal homogeneous triangle-free graph, then $(M,R)$ fails to have the 3 -- cover property. \end{example}
\section{Indiscernible sequences in large finite sets}
\noindent\textsc{Note:} The next definition is an interpolant of Shelah's \cite{sh}, I.2.3, and Ramsey's notion of canonical sequence.
\begin{definition} \begin{enumerate} \item A sequence $I=\langle a_i\;:\;i<n\rangle \subseteq M$ is called a $(\Delta ,m)$ -- \underline{indiscernible sequence over} $A\subseteq M$ (where $\Delta $ is a set of $L(M)$ -- formulas) if for every $i_0<\ldots <i_{m-1}\in I$, $j_0<\ldots <j_{m-1}\in I$ we have that $tp_\Delta (a_{i_0}\cdots a_{i_{m-1}},A,M)=tp_\Delta (a_{j_0}\cdots a_{j_{m-1}},A,M)$ \item A set $I=\{ a_i\;:\;i<n \} \subseteq M$ is called a $(\Delta ,m)$ -- \underline{indiscernible set over} $A\subseteq M$ if and only if for every $\{i_0,\ldots ,i_{m-1}\}$, $\{j_0,\ldots ,j_{m-1}\}\subseteq I$ we have $tp_\Delta (a_{i_0}\cdots a_{i_{m-1}},A,M)=tp_\Delta (a_{j_0}\cdots a_{j_{m-1}},A,M)$. \end{enumerate} \end{definition}
Note that if $\phi(x; b) \in tp(a_0 \ldots a_{m-1}, B, M)$, then necessarily $l(x) = m \cdot l(a_0)$.
\begin{example} \begin{enumerate} \item In the model $M^m_n = \langle m,0,1,\chi \rangle $ ($n\leq m<\omega $) where $\chi $ is a function from the increasing $n$ -- tuples of $m$ to $\{0,1\}$, any increasing enumeration of a monochromatic set is an example of a $(\Delta ,1)$ -- indiscernible sequence over $\emptyset$ where $\Delta =\{\chi(x)=0,\chi (x)=1\}$. \item In a graph $(G,R)$, cliques and independent sets are examples of $(R,2)$ -- indiscernible sets over $\emptyset $. \end{enumerate} \end{example}
Recall that in a stable first order theory, every sequence of indiscernibles is a set of indiscernibles. In our finite setting this is also true if the formula fails to have the $n$ -- order property. The argument below follows closely that of Shelah \cite{sh}.
\begin{theorem} \label{thm6} If $M$ does not have the $n$ -- order property, then any sequence $I=\langle a_i\;:\; i < n + m - 1 \rangle \subseteq M$ which is $(\phi,m)$ -- indiscernible over $B\subseteq M$ is a set of $(\phi,m)$ -- indiscernibles over $B$. \end{theorem}
{{\sc Proof}} Since any permutation of $\{1,\ldots ,n\}$ is a product of transpositions $(k,k+1)$, and since $I$ is a $\phi$ -- indiscernible sequence over $B$, it is enough to show that for each $b\in B$ and $k < m$, \[ M\models \phi [a_0\cdots a_{k-1}a_{k+1}a_k\cdots a_{m-1};b]\leftrightarrow \phi [a_0\cdots a_{k-1}a_ka_{k+1}\cdots a_{m-1};b]. \] Suppose this is not the case.
Then we may choose $b\in B$ and $k<m$ so that \[ M\models \neg \phi [a_0\cdots a_{k-1}a_{k+1}a_k\cdots a_{m-1};b]\wedge \phi [a_0\cdots a_{k-1}a_ka_{k+1}\cdots a_{m-1};b]. \] Let $c=a_0\cdots a_{k-1}$ and $d=a_{n+k+1}\cdots a_{n+m-2}$ (making $l(c)=k$ and $l(d)=m-k-2$). By the indiscernibility of $I$, \[ M\models \neg \phi [ca_{k+1}a_kd;b]\wedge \phi [ca_ka_{k+1}d;b]. \] For each $i$ and $j$ with $k\leq i<j<n+k$, we have (again by the indiscernibility of the sequence $I$) that \[ M\models \neg \phi [ca_ja_id;b]\wedge \phi [ca_ia_jd;b]. \] Thus the formula $\psi (x,y;cdb)\stackrel{\mathrm{def}}{=}\phi (c,x,y,d;b)$ defines an order on $\langle a_i\;:\;k\leq i<n+k\rangle $ in $M$, a contradiction. {\qed}
The following definition is a generalization of the notion of end-homogeneous sets in combinatorics (see section 15 of \cite{ehmr}) to the context of $\Delta$ -- indiscernible sequences.
\begin{definition} A sequence $I=\langle a_i\;:\;i<n\rangle \subseteq M$ is called a $(\Delta ,m)$ -- \underline{end-indiscernible sequence over} $A\subseteq M$ (where $\Delta $ is a set of $L(M)$ -- formulas) if for every $\{i_0,\ldots ,i_{m-2}\} \subseteq n$ and $j_0,j_1<n$ both larger than $\max \{i_0,\ldots ,i_{m-2}\}$, we have \[ tp_\Delta (a_{i_0}\cdots a_{i_{m-2}}a_{j_0},A,M)=tp_\Delta (a_{i_0}\cdots a_{i_{m-2}}a_{j_1},A,M) \] \end{definition}
\begin{definition} For the following lemma, let $F:\omega \rightarrow \omega $ be given, and fix the parameters $\alpha $, $r$, and $m$. We define the function $F^{*}$ for each $k \geq m$ as follows: \begin{itemize} \item $F^{*}(0) = 1$, \item $F^{*}(j+1) = 1 + F^{*}(j)\cdot F(\alpha + m \cdot r \cdot j)$ for $j < k - 2 -m$, and \item $F^{*}(j+1) = 1 + F^{*}(j)$ for $k-2-m \leq j<k-2$. \end{itemize} We will not need $j\geq k-2$. \end{definition}
\begin{lemma} \label{lem7} Let $\psi (x;y) = \phi (x_1,\ldots ,x_{m - 1},x_0;y)$. If for every $B\subseteq M$, $|S_\psi (B,M)|<F(|B|)$, and $I=\{c_i\;:\;i\leq F^{*}(k-2)\}\subseteq M$ (where $l(c_i)=l(x_i)=r$, $\alpha =|A|$), then there is a $J\subseteq I$ such that $|J|\geq k$ and $J$ is a $(\phi,m)$ -- end-indiscernible sequence over $A$. \end{lemma}
{{\sc Proof}} (For notational convenience when we have a subset $S \subseteq I$, we will write $\min S$ instead of the clumsier $c_{\min \{i \; : \; c_i \in S \}}$.) We now construct $A_j=\{a_i \; : \; i\leq j\} \subseteq I$ and $S_j \subseteq I$ by induction on $j<k-1$ so that \begin{enumerate} \item \label{c71} $a_j=\min S_j$, \item \label{c72} $S_{j+1} \subseteq S_j$, \item \label{c73} $|S_j| > F^{*}(k-2-j)$, and \item \label{c74} whenever $\{i_0,\ldots ,i_{m-2}\}\subseteq j$ and $b\in S_j$, \[ tp_\phi (a_{i_0}\cdots a_{i_{m-2}}a_j,A,M)=tp_\phi (a_{i_0}\cdots a_{i_{m-2}}b,A,M). \] \end{enumerate}
The construction is completed by taking an arbitrary $a_{k-1}\in S_{k-2}-\{a_{k-2}\}$ (which is possible by (\ref{c73}) since $F^{*}(0)=1$), and letting $J=\langle a_i\;:\;i<k\rangle $. We claim that $J$ will be the desired $(\phi,m)$ -- end-indiscernible sequence over $A$. To see this, let $\{i_0,\ldots ,i_{m-2},j_0,j_1\}\subseteq k$ with $\max \{i_0,\ldots ,i_{m-2}\}<j_0<j_1<k$ be given. Certainly then $\{i_0,\ldots ,i_{m-2}\}\subseteq j_0$ and $a_{j_1}\in S_{j_0}$, so by (\ref{c74}) we have that \[ tp_\phi (a_{i_0}\cdots a_{i_{m-2}}a_{j_0},A,M)=tp_\phi (a_{i_0}\cdots a_{i_{m-2}}a_{j_1},A,M). \] To carry out the construction, first set $a_j = c_j$ and $S_j = \{c_i \; : \; j \leq i\leq F^{*}(k-2)\}$ for $0\leq j\leq m - 1$.
Clearly, we have satisfied all conditions in this. Now assume for some $j\geq m$ that $A_{j-1}$ and $S_{j-1}$ have been defined satisfying the conditions. Define the equivalence relation $\sim $ on $S_{j-1}-\{a_{j-1}\}$ by $c\sim d$ if and only if for all $\{i_1,\ldots ,i_{m-1}\}\subseteq j$, \[ tp_\phi (a_{i_1}\cdots a_{i_{m-1}}c,A,M)=tp_\phi (a_{i_1}\cdots a_{i_{m-1}}d,A,M) \] The number of $\sim $ -- classes then is at most $|S_\psi (A \cup A_{j-1},M)|<F(\alpha +m \cdot r \cdot j)$. Therefore, at least one class, which we take to be $S_j$, has cardinality at least $\frac{|S_{j-1}|-1}{F(\alpha + m \cdot r \cdot j)}$. Let $a_j=\min S_j$. By definition of $F^{*}$, $\frac{ F^{*}(k-2-j+1)}{F(\alpha + m \cdot r \cdot j)}>F^{*}(k-2-j)$, so we have that $|S_j|>F^{*}(k-2-j)$. It is easy to see that condition (\ref{c74}) is satisfied. {\qed}
For the following lemma, we once again need a function defined in terms of the parameters of the problem. We will need the parameter $r$ and the function $F^*$ defined for Lemma \ref{lem7} (which depends on $r$, $\alpha$, and $m$). Let $f_i$ be the $F^*$ that we get when $m = i$ (and $\alpha$ and $r$ are fixed) in Lemma \ref{lem7}. For the following lemma define \[ g_i:=\left\{ \begin{array}{ll} id & \mbox{if }i=0 \\ f_{i-1}\circ (g_{i-1}-2) & \mbox{otherwise} \end{array} \right. \]
\begin{lemma} \label{lem8} If $J=\{a_i\;:\;i\leq g_{m-1}(k-1)\}\subseteq M$ is a $(\phi,m)$ -- end-indiscernible sequence over $A\subseteq M$, then there is a $J^{\prime }\subseteq J$ such that $|J^{\prime }|\geq k$ and $J^{\prime }$ is a $(\phi,m)$ -- indiscernible sequence over $A$. \end{lemma}
{{\sc Proof}} (By induction on $m$) Note that if $m=1$, there is nothing to do since end - indiscernible \emph{is} indiscernible in this case. Now let $m \geq 1$ be given, and assume that the result is true of all $(\theta, m)$ -- end-indiscernible sequences. Let a formula $\phi$ and a $(\phi, m+1)$ -- end-indiscernible sequence $J$ over $A$ be given. Let $c$ be the last element in $J$. Define $\psi $ so that \[ M\models \psi [a_0,\ldots ,a_{m-1};b]\mbox{ if and only if } M\models \phi [a_0\cdots a_{m-1}c;b] \] for all $a_0,\ldots ,a_{m-1}\in J$, $b\in M$. Note then that $|S_\psi (B,M)|\leq |S_\phi (B,M)|$ for all $B \subseteq M$, so we can use the same $F^{*}$ for $\psi $ as for $\phi $. (This result can be improved by using a sharper bound on the number of $\psi $ -- types.) By the definition of $g_m$ and Lemma \ref{lem7}, there must be a subset $J^{\prime \prime }$ of $J$ with cardinality at least $g_{m-1}(k-2)$ which is $\psi $ -- end-indiscernible over $A$. By the inductive hypothesis, there is a subsequence of $J^{\prime \prime }$ with cardinality at least $k - 1$ which is $(\psi,m)$ -- indiscernible over $A$. Form $J^{\prime }$ by adding $c$ to the end of this sequence. It follows from the $(\phi,m+1)$ -- end-indiscernibility of $J$ and the $(\psi,m)$ -- indiscernibility of $J^{\prime \prime }$ that $J^{\prime }$ is $(\phi,m+1)$ -- indiscernible over $A$. {\qed}
\begin{theorem} \label{thm9} For any $A\subseteq M$ and any sequence $I$ from $M$ with $|I|\geq g_m(k-1)$, there is a subsequence $J$ of $I$ with cardinality at least $k$ which is $(\phi,m)$ -- indiscernible over $A$. \end{theorem}
{{\sc Proof}} By Lemmas \ref{lem7} and \ref{lem8}. {\qed}
Our goal now is to apply this to theories with different properties to see how these properties affect the size of a sequence in which one must look to be assured of finding an indiscernible subsequence.
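Before doing so, we note that the recursion defining $F^{*}$ is easy to unwind mechanically. The following short script (our own sketch, not part of the argument; all names in it are ours) computes $F^{*}(0),\ldots ,F^{*}(k-2)$ from a given type-counting bound $F$, exactly as in the definition preceding Lemma \ref{lem7}:
\begin{verbatim}
def f_star(F, alpha, m, r, k):
    # F^*(0) = 1; for j < k-2-m, F^*(j+1) = 1 + F^*(j) * F(alpha + m*r*j);
    # for k-2-m <= j < k-2, F^*(j+1) = 1 + F^*(j).
    vals = [1]
    for j in range(k - 2):
        if j < k - 2 - m:
            vals.append(1 + vals[j] * F(alpha + m * r * j))
        else:
            vals.append(1 + vals[j])
    return vals

# Example: a polynomial type bound F(i) = i^2, with alpha = |A| = 2.
print(f_star(lambda i: i ** 2, alpha=2, m=2, r=1, k=6))
\end{verbatim}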
First we will do a basic comparison between the cases when we do and do not have a polynomial bound on the number of types over a set. In each of these cases, we will give the bound to find a sequence indiscernible over $\emptyset $. We will use the notation $\log ^{(i)}$ for \[ \begin{array}{c} \underbrace{\log _2\circ \log _2\circ \cdots \circ \log _2} \\ i\mbox{ times} \end{array} \]
\begin{corollary} \label{cor9} \begin{enumerate} \item If $F(i)=2^{i^m}$ (which is the worst possible case), then $\log ^{(m)}g_m(k-1)\leq 4k$. \item If $F(i)=i^p$, then $\log ^{(m)}g_m(k-1)\leq 2m k+\log _2 k+\log _2p$. \end{enumerate} \end{corollary}
We now combine part (2) above with the results from the previous section to see what happens in the specific cases of structures without the $n$ -- order property and structures without the $n$ -- independence property. We define by induction on $i$ the function \[ \beth(i, x) = \left\{ \begin{array}{ll} x & \mbox{if $i=0$} \\ 2^{\beth(i - 1, x)} & \mbox{if $i>0$} \end{array} \right. \] Recall that for the formula $\phi(x; y)$ we have defined the parameters $r= l(x)$, $s = l(y)$, and $t = \max\{ r, s\}$. \addtocounter{definition}{-1}
\begin{corollary} \begin{enumerate} \item[3.] \setcounter{enumi}{3} If $(M,\phi )$ fails to have the $n$ -- independence property and $I=\{a_i\;:\;i<\beth (m,2k+\log _2k+\log _2n+\log _2m)\}\subseteq M$, then there is a $J\subseteq I$ so that $|J|\geq k$ and $J$ is a $(\phi,m)$ -- indiscernible sequence over $\emptyset $. \item If $(M,\phi )$ fails to have the $n$ -- order property and $I=\{a_i\;:\;i<\beth (m,2k+\log _2k+(3ns)^{t+1})\}\subseteq M$, then there is a $J\subseteq I$ so that $|J|\geq k$ and $J$ is a $(\phi,m)$ -- indiscernible sequence over $\emptyset $. \end{enumerate} \end{corollary}
Finally, note that with the additional assumption of failure of the $d$ -- cover property, if $d$ is smaller than $n$, then from the assumptions in (3) and (4) above, we could infer a failure of the $d$ -- independence property or the $d$ -- order property improving the bounds even further.
\section{Applications to Graph Theory} \label{graphsec}
In this section we look to graph theory to illustrate some applications. The reader should be warned that the word ``independent'' has a graph - theoretic meaning, so care must be taken when reading ``independent set'' versus ``independence property''.
\subsection{The independence property in random graphs}
A first question is ``How much independence can one expect a random graph to have?'' We will approach the answer to this question along the lines of Albert \& Frieze \cite{af}. There an analogy is made to the Coupon Collector Problem, and we will continue this here. The Coupon Collector Problem (see Feller \cite{fe}) asks: if $n$ distinct balls are independently and randomly distributed among $m$ labeled boxes (so each distribution has the same probability $m^{-n}$ of occurring), what is the probability that no box is empty? Letting $q(n, m)$ be this probability, it is easy to compute that \[ q(n, m) = \sum_{i = 0}^m (-1)^i {\left( \begin{array}{c} {m} \\ {i} \end{array} \right) } \left( 1 - \frac{i}{m} \right)^n = \frac{m! \, S_{n,m}}{m^n} \] where $S_{n,m}$ is the Stirling number of the second kind. It is well - known that, for $\lambda = m e^{-n/m}$, $q(n, m) - e^{- \lambda} $ tends to 0 as $n$ and $m$ get large with $\lambda$ bounded.
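As a quick numerical sanity check of this classical approximation (our own aside; none of the particular numbers below are used in the sequel), one can compare $q(n,m)$ with $e^{-\lambda }$ directly:
\begin{verbatim}
from math import comb, exp

def q(n, m):
    # Probability that no box is empty when n distinct balls are thrown
    # independently and uniformly into m labeled boxes (inclusion-exclusion).
    return sum((-1) ** i * comb(m, i) * (1 - i / m) ** n for i in range(m + 1))

n, m = 100, 20
lam = m * exp(-n / m)          # lambda = m e^{-n/m}
print(q(n, m), exp(-lam))      # the two values already agree to about 0.01
\end{verbatim}
The way that this will be applied in our context is as follows.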
We will say that a certain set $\{v_1,\ldots ,v_k\}$ of vertices \emph{witnesses the $k$ -- independence property in $G$} if $(G,R)$ has the $k$ -- independence property with $a_i=v_i$ (see Definition \ref{def1d}). Notice that any $k$ vertices $\{v_1,\ldots ,v_k\}$ determine $2^k$ ``boxes'' defined by all possible Boolean combinations of formulas $\{R(x,v_1),\ldots ,R(x,v_k)\}$ (a vertex being ``in a box'' meaning it witnesses the corresponding formula in $G$). The remaining $n-k$ vertices are then equally likely to fill each of the $2^k$ boxes, so the probability that these $k$ vertices witness the $k$ -- independence property in $G$ is just $q(n-k,2^k)$. So for $\lambda =\lambda (n,k)=2^k\exp (-(n-k)/2^k)$ bounded (as $n,k\rightarrow \infty $) we will have that the probability that $k$ particular vertices witness the $k$ -- independence property in a graph on $n$ vertices tends to $e^{-\lambda }$, and the probability that a graph on $n$ vertices has the $k$ -- independence property is at most \[ {\left( \begin{array}{c} {n} \\ {k} \end{array} \right) }e^{-\lambda }\leq n^ke^{-\lambda }\mbox{ as }n,k\rightarrow \infty \] If $n = k + k 2^k$, then $q(n-k, 2^k) \rightarrow 1$, so the particular vertices $\{1, \ldots, k \}$ witness $k$ -- independence in a graph on $k + k 2^k$ vertices almost surely. On the other hand,
\begin{theorem} \label{thmg1} A random graph on $n=k+2^k(\log k)$ vertices almost surely fails to have the $k$ -- independence property. \end{theorem}
{{\sc Proof}} For $n=k+2^k\log k$, the $\lambda $ from above is $\frac{2^k}k$, and $\log (n^ke^{-\lambda })=k\log (k+2^k\log k)-2^k/k$ which clearly goes to $-\infty $ as $k\rightarrow \infty $, so the probability that a graph on $n$ vertices has the $k$ -- independence property goes to 0. {\qed}
\subsection{Ramsey's theorem for finite hypergraphs}
We can improve (for the case of hypergraphs without $n$ -- independence) the best known upper bounds for the Ramsey number $R_r(a, b)$. First we should say what this means.
\begin{definition} \begin{enumerate} \item An \emph{$r$ -- graph} is a set of \emph{vertices} $V$ along with a set of $r$ -- element subsets of $V$ called \emph{edges}. The edge set will be identified in the language by the $r$ -- ary predicate $R$. \item A \emph{complete $r$ -- graph} is one in which all $r$ -- element subsets of the vertices are edges. An \emph{empty $r$ -- graph} is one in which none of the $r$ -- element subsets of the vertices are edges. \item $R_r(a,b)$ denotes the smallest positive integer $N$ so that in any $r$ -- graph on $N$ vertices there will be an induced subgraph which is either a complete $r$ -- graph on $a$ vertices or an empty $r$ -- graph on $b$ vertices. \item We say that an $r$ -- graph $G$ \emph{has the $n$ -- independence property} if $(G,R(x))$ does (where $l(x)=r$). \end{enumerate} \end{definition}
Note that the first suggested improvement of Lemma \ref{lem7} applies in this situation -- namely, the edge relation is symmetric. We can immediately make the following computations.
\begin{lemma} \label{lemg2} \begin{enumerate} \item In an $r$ -- graph $G$, $F$ is given by $F(i)=2^q$ where $q={\left( \begin{array}{c} {i} \\ {r-1} \end{array} \right) }$. Consequently, $F^{*}(k)\leq 2^{k^r}$ in this case. \item In an $r$ -- graph $G$ which does not have the $n$ -- independence property, $F$ is defined by \[ F(i):=\left\{ \begin{array}{ll} 1 & \mbox{for }i<r \\ i^{(r-1)(n-1)} & \mbox{otherwise} \end{array} \right. \] Consequently $F^{*}(k)\leq k^{(r-1)(n-1)k}$ in this case.
\end{enumerate} \end{lemma}
For a fixed natural number $p$, define the functions $E_p ^{(j)}$ by \begin{itemize} \item $E_p^{(1)}=E_p=(\alpha \mapsto (\alpha +1)^{p(\alpha +1)})$, and \item $E_p^{(i+1)}=E_p\circ E_p^{(i)}$ for $i\geq 1$. \end{itemize}
\begin{theorem} \label{thmg2} Let $n \geq 2$ and $k \geq 3$ be given, and let $p = (r-1)(n-1)$. If an $r$ -- graph $G$ on at least $E_p ^{(r-1)}(k-1)$ vertices does not have the $n$ -- independence property, then $G$ has an induced subgraph on $k$ vertices which is either complete or empty. \end{theorem}
{{\sc Proof}} (By induction on $r$) For $r=2$, the graph has at least $E_{n-1}^{(1)}(k-1) = k^{(n-1)k}\geq 2^{2k}$ vertices, and it is well-known (see e.g. \cite{grs}) that $2^{2k}\rightarrow (k)_2^2$. Let $r \geq 3$ be given, and let $G=(V,R)$ be an $r$ -- graph as described and set $C = E_p ^{(r-2)}(k)$, where $p = (r-1)(n-1)$. Using $F(i)=i^p$ for $i\geq 2$ ($F(0)=F(1)=1$) and computing $F^{*}$ in Lemma \ref{lemg2}, we first see from Lemma \ref{lem7} that any $r$ -- graph on at least $(C+1)^{p(C+1)}$ vertices will have an $(R, 1)$ -- end-indiscernible sequence $J$ over $\emptyset $ of cardinality $C$. Let $v$ be the last vertex in $J$ and define the relation $R^{\prime}$ on the $(r-1)$ -- sets from (the range of) $J$ by \begin{center} $R^{\prime }(X)$ if and only if $R(X \cup \{v\})$. \end{center} Now $(J,R^{\prime })$ is an $(r-1)$ -- graph of cardinality $C$, so by the inductive hypothesis there is an $R^{\prime }$ -- indiscernible subsequence $J_0$ of $J$ with cardinality $k$. Clearly $I=\{A \cup \{v\} : A \in J_0\}$ is an $R$ -- indiscernible sequence over $\emptyset $ of cardinality $k$. {\qed}
\noindent\textsc{Remark:} Another way to say this is that in the class of $r$ -- graphs without the independence property $R_r(k,k) \leq E_p ^{(r-1)}(k-1)$.
\subsection*{Comparing upper bounds for $r=3$}
Note that for $r=3$ in Theorem \ref{thmg2}, we have $p=2(n-1)$, and so we get $E_p^{(2)}(k-1) = (2^{2k}+1)^{p(2^{2k}+1)}$ which is roughly $2^{nk(2^{2k+2})}$. The upper bound for $R_3(k,k)$ in \cite{ehmr} is roughly $2^{2^{4k}}$. So $\log _2\log _2($their bound$)=4k$ and \[ \log _2\log _2(\mbox{our bound})=\log _2(nk(2^{2k+2}))=\log _2p+\log _2k+(2k+2) \] which is smaller than $4k$ as long as $2k-2-\log _2k>\log _2n$. This is true as long as $n<2^{2k-2}/k$. For example, for $k = 10$ our bound is about $2^{c(n-1)}$ where $c$ is roughly $4 \times 10^7$ and theirs is about $2^{2^{40}}$. Since $2^{40}$ is roughly $10^{12}$, this is a significant improvement in the exponent for 3 -- graphs without the $n$ -- independence property.
\subsection*{Comparing upper bounds in general}
Let $a_r$ be the upper bound for $R_r(k,k)$ given in \cite{ehmr} and $b_r$ be the upper bound as computed for the class of $r$ -- graphs without the $n$ -- independence property in Theorem \ref{thmg2} (both as a function of $k$, the size of the desired indiscernible set). Since we have $b_{r+1} \leq b_r^{p\,b_r}$, we get the relationship \begin{eqnarray*} \log^{(r)} b_{r+1} & \leq & \log^{(r-1)} [p \, b_r (\log b_r)] \\ & = & \log^{(r-2)} (\log (r-1) + \log (n-1) + \log b_r + \log \log b_r) \end{eqnarray*} for $r \geq 3$, $\log \log b_3 = 2k + \log_2 k + \log_2 n + \log_2 r$, and $\log b_2 = 2k$. It follows that $\log^{(r)} b_{r+1}$ is less than (roughly) $2k + \log_2 k + \log_2 n$ for every $r$.
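Returning for a moment to the $r=3$ comparison above, the arithmetic there is easy to verify numerically (our own aside; the constants below are exactly the ones quoted in the text):
\begin{verbatim}
k = 10
# our bound beats the one of [ehmr] exactly when 2k - 2 - log2(k) > log2(n),
# i.e. when n is below the following threshold:
print(2 ** (2 * k - 2) / k)        # ~26214.4, the admissible range of n for k = 10
# the constant c in "our bound is about 2^{c(n-1)}" for k = 10:
print(k * 2 ** (2 * k + 2))        # 41943040, i.e. roughly 4 * 10^7
print(2 ** (4 * k))                # 2^40 ~ 10^12, the other side of the comparison
\end{verbatim}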
In \cite{ehmr}, the bounds $a_r$ satisfy $\log a_2 = 2k$, $\log \log a_3 = 4k$, and for $r \geq 3$, \begin{eqnarray*} \log^{(r)} a_{r+1} & = & \log^{(r-1)} (a_r^r) \;=\; \log^{(r-2)}(r \log a_r) \\ & = & \log^{(r-3)} [\log r + \log \log a_r]. \end{eqnarray*} We can then show that $\log^{(r-1)} a_r < 4k + 2$ for all $r$. Clearly for each $r \geq 3$, $\frac{b_r}{a_r} \rightarrow 0$ as $k$ gets large.
\noindent\textsc{Final Remark:} On a final note, the above comparison is only given for $r$ -- graphs with $r \geq 3$ because the technique employed does not give an improvement in the case of graphs. This has not been pursued in this paper because it seems to be of no interest in the general study. However, the techniques may be of interest to the specialist.
\section{Toward a classification theory}
\subsection{Introduction}
One of the most powerful concepts of model theory (discovered by Shelah) is the notion of forking, which from a certain point of view can be considered as an instrument to discover the structure of combinatorial geometry in certain definable subsets of models. We were unable to capture the notion of forking (or a forking - like concept) for finite structures. What we \emph{can} do is to present an alternative, more global property called stable amalgamation (which is the main innovation in \cite{sh3} in dealing with non-elementary classes). We do this by imitating \cite{sh4}. We have reasonable substitutes for $\kappa(T)$ and $Av(I,A,M)$. The most important property we manage to prove is the symmetry property for stable amalgamation. This is the property corresponding to the exchange principle in combinatorial geometry.
The ultimate goal of the project started here is to have a decomposition theorem not unlike the theorem for finite abelian groups. We hope to identify some properties $P_1, \ldots, P_n$ of a class of finite models $K$ in such a way that the following conjecture will hold:
\begin{conjecture} \label{dream} If $\langle K,\prec_K \rangle$ satisfies $P_1,\ldots,P_n$ then for every $M\in K$ large enough there exists a finite tree $T\subseteq {}^{<\omega }\omega$ and $\{M_{\eta} \prec_K M\; : \; \eta\in T \}$ such that \begin{enumerate} \item $\{M_{\eta} \; : \; \eta \in T \}$ is a ``stable'' tree \footnote{ Defined using the notion of stable amalgamation introduced below.} \item For every $\eta\in T$ we have that $\| M_{\eta} \| \leq n(K)$ \footnote{ We acknowledge that cardinality of the universe may not be an appropriate measure of ``smallness'' for a substructure in this context. The reader should also consider a restriction on the cardinality of a set of generators for $M_{\eta}$ as another possibility.} \item $M$ is uniquely determined by $\bigcup_{\eta\in T}|M_{\eta}|$. \end{enumerate} \end{conjecture}
Ideally $P_1,\ldots,P_n$ is a minimal list of properties sufficient to derive the above decomposition. We hope to be able to eventually emulate Theorem XI.2.17 in \cite{sh}. Considering our present state of knowledge it seems that our conjecture is closer to a fantasy than to a mathematical statement. However, we seem to have a start. Much of this section together with some of the earlier results can be viewed as a search for candidates for the above mentioned list of properties $P_1,\ldots,P_n$. We hope that this section might form an infrastructure for the classification project.
\subsection{Abstract properties}
We now begin to look at some of the abstract properties of a class $K$ of finite $L$ -- structures with an appropriate partial ordering denoted by $\prec_K$. These properties come from Shelah's list of axioms in \S 1 of \cite{sh3}.
\begin{definition} Let $L$ be a given similarity type, let $\Delta $ be a set of $L$ -- formulas, and let $n<\omega $; by $\Delta _n^{*}$ we denote the minimal set of $L$ -- formulas containing the following set and all its subformulas: \[ \{\exists x[\bigwedge_{i\in w}\phi (x;y_i)\wedge \bigwedge_{i\in k \setminus w} \neg \phi (x;y_i)]:\phi (x;y)\in \Delta , k \leq n, w\subseteq k,l(y_i)=l(y)\}. \] \end{definition}
We will now look into natural values of $k$ from the previous section.
\begin{theorem} Suppose the formula $\phi $ fails to have the weak $n$ -- order property (and hence fails to have the $n$ -- independence property in $K$), and let $\Delta \supseteq \{ \phi \}^*_n$ be given. Then for every $(\Delta,n)$ -- indiscernible sequence $I$ over $\emptyset$ and every $c \in M$, either $|\{a \in I \; : \; M \models \phi [c;a] \}| < n$ or $|\{a \in I \; : \; M \models \neg \phi [c;a] \}| < n$. \end{theorem}
{{\sc Proof}} We may assume the length of $I$ is at least $2n$ since otherwise the result is trivial. We proceed by contradiction. Suppose the result is not true. Then there is a $(\Delta,n)$ -- indiscernible sequence $I$ from $M$ of length at least $2n$ and a $c\in M$ such that both $|\{a\in I\;:\;M\models \phi [c;a]\}|\geq n$ and $|\{a\in I\;:\;M\models \neg \phi [c;a]\}|\geq n$. Let $\{a_0,\ldots ,a_{2n-1}\}\subseteq I$ be such that \begin{equation} M\models \bigwedge_{i<n}\phi [c,a_i]\;\wedge \;\bigwedge_{n \leq i < 2n} \neg \phi [c,a_i] \label{num1} \end{equation} We complete the proof by showing that $\{a_0,\ldots ,a_{n-1}\}$ exemplifies the $n$ -- independence property. Let $w\subseteq n$ be given. Consider the formula \[ \psi _w(y_0,\ldots ,y_{n-1})\stackrel{\mathrm{def}}{=}(\exists x)\left[ \bigwedge_{i\in w}\phi (x;y_i)\;\wedge \;\bigwedge_{i\in n \setminus w}\neg \phi (x;y_i) \right] \] Let $\{i_0,\ldots ,i_{k-1}\}$ be an increasing enumeration of $w$. By (\ref{num1}) the following holds \[ M\models \psi _w[a_{i_0},\ldots ,a_{i_{k-1}},a_n,\ldots ,a_{2n-1-k}] \] Since $\psi _w\in \{\phi \}^*_n\subseteq \Delta $, by the indiscernibility of $I$ we have that also $M\models \psi _w[a_0,\ldots ,a_{n-1}]$. So for every $w\subseteq n$, we may choose $b_w\in M$ so that \[ M\models \bigwedge_{i\in w}\phi [b_w,a_i]\;\wedge \;\bigwedge_{i\in n\setminus w}\neg \phi [b_w,a_i]. \] We are done since $\{a_0,\ldots ,a_{n-1}\}$ and $\{b_w:w\subseteq n\}$ witness the fact that $(M,\phi )$ has the $n$ -- independence property. {\qed}
The following definition is inspired by $\kappa (T)$ in Chapter III of \cite{sh}.
\begin{definition} Let $n<\omega $ be given, and let $\Delta $ be a finite set of formulas. $\kappa _{\Delta ,n}(K)$ is the least positive integer such that for every $M\in K$, every sequence $I=\langle a_i\;:\;i<\beta <\omega \rangle \subseteq M$ which is $\Delta _n^{*}$ -- indiscernible over $\emptyset$, every $\phi \in \Delta $, and every $c\in M$, either $M\models \phi [c;a_i]$ holds for fewer than $\kappa _{\Delta ,n}(K)$ elements $a_i$ of $I$ or $M\models \neg \phi [c;a_i]$ holds for fewer than $\kappa _{\Delta ,n}(K)$ elements $a_i$ of $I$. Recalling that $\Delta$ is to be closed under negation, we will write $\kappa_{\phi, n}$ instead of $\kappa_{ \{ \phi, \neg \phi \},n}$.
\end{definition}
So the previous theorem states that if the formula $\phi$ fails to have the $n$ -- independence property in $M$, then $\kappa_{\phi,n}(M) \leq n$. When $\phi$ is understood to not have the $n$ -- independence property, $\kappa_{\phi}$ will stand for $\kappa_{\phi,n}$. In this case, the following definition makes sense.
\begin{definition} Let $n < \omega$ be given, and let $\Delta $ be a finite set of formulas as above. Suppose $I$ is a sequence of $(\Delta_n^{*},n)$-indiscernibles over $\emptyset $. Define \begin{eqnarray*} Av_\Delta (I,A,M)& = & \{\phi (x;a):a\in A,\phi (x;y)\in \Delta \\ & & \mbox{ and } |\{c\in I:M\models \phi [c;a]\}|\geq \kappa _{\Delta,n} (K)\}. \end{eqnarray*} If $M$ is understood, it will often be omitted. \end{definition}
\begin{theorem} Let $\psi(y; x) = \phi(x; y)$. If $\phi $ has neither the $n$ -- independence property nor the $d$ -- cover property in $M$, $\Delta \supseteq \{\phi \}_n^{*}$ is finite, and $I$ is a sequence of $(\Delta,n)$ -- indiscernibles over $\emptyset $ of length greater than $\max \{d \cdot \kappa _\psi (M),2n\}$, then $Av_\phi (I,A,M)$ is a complete $\phi $ -- type over $A$. \end{theorem}
{{\sc Proof}} That $Av_\phi (I,A,M)$ is complete follows from the previous theorem. To see that it is consistent, we need only establish that any $d$ formulas from it are consistent (by failure of the $d$ -- cover property), and this follows from the size of $I$ and the pigeonhole principle. {\qed}
So we will use the following term to denote when we are in a model in which the notion of average type is well defined.
\begin{definition} Let $\psi(y; x) = \phi(x; y)$, and $\Delta = \{ \phi, \psi, \neg\phi, \neg\psi \}$. We will say that $M$ is \underline{$(\phi ,n,d)$ -- good} if $(M,\Delta )$ has neither the $n$ -- independence property nor the $d$ -- cover property. In this case, we will define $\lambda _\phi (M)= \max \{d \cdot \kappa _{\Delta,n}(M),2n \}$. We will sometimes refer to this same situation by saying $(M, \phi)$ is $(n,d)$ -- good. If $K$ is a class of $(\phi, n, d)$ -- good structures which all include a common set $A$, then we will say that $K$ is $(\phi, n, d)$ -- good, and define $\lambda (K)= \kappa _{\Delta,n}(K) \cdot |A|^s$, where $s = \max \{l(y) : \phi(x; y) \in \Delta \}$. \end{definition}
\begin{example} \label{ex.fields}Let $T$ be an $\aleph _1$ -- categorical theory in a relational language (no function symbols), and let $M\models T$ be an uncountable model (e.g., an uncountable algebraically closed field of positive characteristic). Let $K:=\{N\subseteq M:\Vert N\Vert <\aleph _0\}$, and let $\phi \in L(T)$. By $\aleph _1$ -- categoricity there exist integers $n$ and $d$ such that $K$ is $(\phi, n,d)$ -- good (see Corollary \ref{finitecor}). \end{example}
\begin{definition} \label{submodel} For a fixed (finite) relational language $L$ and an $L$ -- formula $\phi $ (and $\psi(y; x) = \phi(x; y)$), let $K$ be a class of finite $(\phi ,n,d)$ -- good $L$ -- structures all of which include a common set $A$. Fix a $k < \omega$ and define $\prec _{K}$ as follows: $N\prec _{K} M$ if and only if \begin{enumerate} \item $N \subseteq M$, and for all $a \in A$ and $b \in N$, $M \models \phi[ b; a]$ if and only if $N \models \phi[b; a]$. \item For every $a_0, \ldots, a_{k-1} \in A$, if $M\models \exists x\bigwedge_{i<k}\phi (x;a_i)$, then $M\models \bigwedge_{i<k} \phi [b;a_i]$ for some $b\in N$.
\item For every $a\in M$, there is a sequence $I \subseteq N$ which is $(\{\psi \}^*_n, n)$ -- indiscernible over $A$ with length at least $\lambda (K)$ so that $tp_\phi (a,A,M)=Av_\phi(I,A,M)$. \end{enumerate} We define the same relation for a set $\Delta $ of formulas simply by requiring that the above holds for each $\phi \in \Delta$ in the case that $K$ is a class of finite $(\phi ,n,d)$ -- good structures. \end{definition}
\noindent\textsc{Remarks on Definition \ref{submodel}:}
\begin{itemize} \item Condition (1) ensures that the fact for elementary classes that forms the basis of the Tarski - Vaught test (namely, that $N \subseteq M \Rightarrow N \prec_{qf} M$) holds for $\phi$ -- formulas even if $\phi$ is not quantifier free. \item Condition (2) is like $k$ -- saturation relative to $\phi $ -- formulas with parameters from $A$ (i.e. every $\phi$ -- type with at most $k$ parameters from $A$ which is realized in $M$ is also realized in $N$). It can be thought of as a generalization of the Tarski - Vaught test relativized to formulas from $\{ \phi \}^*_k$. \item Condition (3) is a property like the one that guarantees in the first order case that types over a model are stationary. Here we are requiring a strong closure condition on $N$ --- namely, if $a \in M \setminus N$, then there is a strong reason why $a$ does not belong to $N$: there is a long sequence of indiscernibles in $N$ averaging the same $\phi $ -- type over $A$. \item It should be emphasized that $K$ (and hence $\prec_K$) has parameters $A$, $k$, and $\phi$ which are suppressed for notational convenience. \end{itemize}
\subsection{Properties of $\prec_K$}
We prove the following facts about the relation $\prec_{K}$. The Roman numerals in parentheses indicate the corresponding Axioms in \cite{sh3}. $\Delta$ is closed under negation and fixed throughout, and $K$ is a class of $(\Delta, n, d)$ -- good structures which all include a common set $A$.
\begin{lemma} \begin{enumerate} \item (I) If $N\prec _K M$, then $N\subseteq M$. \item (II) $M_0 \prec_K M_1 \prec_K M_2$ implies $M_0 \prec_K M_2$. Also $M\prec _KM$ for all $M \in K$. \item (V) If $N_0\subseteq N_1\prec _KM$ and $N_0\prec _KM$, then $N_0\prec _KN_1$. \end{enumerate} \end{lemma}
{{\sc Proof}} Statement (I) is trivial, and the second part of (II) only requires that one choose a constant sequence for $I$ in Condition (3). Note that for all three statements, checking Condition (1) is routine.
For the first part of (II), assume the hypothesis is true and first look at Condition (2). Let $\phi \in \Delta$ and $a_i \in A$ for $i<k$ be given, and assume that $M_2 \models \exists x\bigwedge_{i<k}\phi (x;a_i)$. Since $M_1 \prec_K M_2$, we may choose $b \in M_1$ so that $M_2 \models \bigwedge_{i<k} \phi [b;a_i]$. Of course, $b, a_0, \ldots, a_{k-1}$ are all from $M_1$, so we can conclude that $M_1 \models \bigwedge_{i<k}\phi [b;a_i]$, or, less specifically, that $M_1 \models \exists x\bigwedge_{i<k}\phi (x;a_i)$. Since $M_0 \prec_K M_1$, this in turn allows us to choose $b^{\prime} \in M_0$ so that $M_1 \models \bigwedge_{i<k}\phi [b^{\prime};a_i]$, which means necessarily that $M_2 \models \bigwedge_{i<k}\phi [b^{\prime};a_i]$.
For condition (3), let $a \in M_2$ be given. Since $M_1 \prec_K M_2$, we may choose $I$ from $M_1$ of length $\lambda(K)$ which is $(\{\psi \}^*_n, n)$ -- indiscernible over $A$ so that $tp_\phi (a,A,M_2) = Av_\phi(I,A,M_2)$.
Because the length of $I$ exceeds $|A|^{l(y)} \cdot \kappa_{\phi}(M)$, we may choose one element $b_{i_0}$ in $I$ so that $tp_{\phi}(b_{i_0}, A, M_2) = Av_{\phi}(I, A, M_2)$. (This can be accomplished by throwing out fewer than $\kappa_{\phi}(M)$ elements of $I$ for each instance $\phi(x;b)$ with $b \in A$ so that the elements of $I$ that are left all realize the same instances of $\phi$ over $A$.) Since $M_0 \prec_K M_1$, we may choose a sequence $J$ in $M_0$ which is $(\{\psi \}^*_n, n)$ -- indiscernible over $\emptyset$ so that $Av_{\phi}(J, A, M_1) = tp_{\phi}(b_{i_0}, A, M_1)$. But then we have $Av_{\phi}(J, A, M_1) = Av_{\phi}(I, A, M_1)$, and consequently $Av_{\phi}(J, A, M_2) = Av_{\phi}(I, A, M_2) = tp_{\phi}(a,A,M_2)$, as desired.
For (V), consider first Condition (2). Assuming the hypotheses in (V) are true, we let $\phi \in \Delta$ and $a_i \in A$ for $i<k$ be given, and assume that $N_1 \models \exists x\bigwedge_{i<k}\phi (x;a_i)$. It then follows from $N_1 \prec_K M$ (Condition (2)) that $M \models \exists x\bigwedge_{i<k}\phi (x;a_i)$, and since $N_0 \prec_K M$, we may choose $b \in N_0$ so that $M\models \bigwedge_{i<k} \phi [b;a_i]$. Of course, $b, a_0, \ldots, a_{k-1}$ are all from $N_1$, so from Condition (1) of $N_1 \prec_K M$, we can conclude that $N_1 \models \bigwedge_{i<k}\phi [b;a_i]$.
For condition (3), let $a \in N_1$ be given. Since $a \in M$ and $N_0 \prec_K M$, we may choose $I$ from $N_0$ of length $\lambda(K)$ which is $(\{\psi \}^*_n, n)$ -- indiscernible over $\emptyset$ so that $tp_\phi (a,A,M) = Av_\phi(I,A,M)$. Since $N_1 \subseteq M$, and $a$, $I$, and $A$ are all included in $N_1$, it follows that $tp_\phi (a,A,N_1)=Av_\phi(I,A,N_1)$. {\qed}
\begin{definition} Here again $\psi (y;x)=\phi (x;y)$. Given $(\phi ,n,d)$ -- good structures $M$, $M_0$, $M_1$, and $M_2$ with $M_l\prec _KM$ (for $l=0,1,2$), $M_0\prec _KM_1$, and $M_0\prec _KM_2$, we say that $(M_0,M_1,M_2)$ is in $\phi $ -- \underline{stable amalgamation} inside $M$ if for every $c\in M_2$ with $l(c)=l(x)$ there is a $(\{\psi \}_n^{*},n)$ -- indiscernible sequence $I\subseteq M_0$ over $\emptyset $, of length at least $\lambda (K)$ such that $Av_\phi (I,M_1,M)=tp_\phi (c,M_1,M)$. \end{definition}
To prove symmetry of stable amalgamation (with the assumption of non-order), we must first establish the following lemma (corresponding to I.3.1 in \cite{sh4}).
\begin{lemma} Let $\psi (y;x)=\phi (x;y)$ and $\Delta =\{\phi ,\psi ,\neg \phi ,\neg \psi \}$, and let $\lambda =\max \{\lambda _\Delta (M),\kappa _\phi (M)+\kappa _\psi (M)+\kappa _\phi (M)\cdot \kappa _\psi (M)\}$. Assume $M$ is $(\Delta ,n,d)$ -- good, $I_0=\langle a_k^0:k<m_0\rangle $ is a $\{\psi \}_n^{*}$ -- indiscernible sequence (over $\emptyset $) in $M$ of length greater than $\lambda $, and $I_1=\langle a_k^1:k<m_1\rangle $ is a $\{\phi \}_n^{*}$ -- indiscernible sequence (over $\emptyset $) in $M$ of length greater than $\lambda $. The following are equivalent: \begin{enumerate} \item[(i)] There exists $i_k<m_0$ for $k<m_0-\kappa _\phi (M)$ such that for each $k$, \[ \phi (a_{i_k}^0,y)\in Av_\psi (I_1,|M|,M). \] \item[(ii)] There exists $j_l<m_1$ for $l<m_1-\kappa _\psi (M)$ such that for each $l$, \[ \phi (x,a_{j_l}^1)\in Av_\phi (I_0,|M|,M). \] \end{enumerate} \end{lemma}
{{\sc Proof}} Assume (i) holds. Choose $i_k<m_0$ for $k<m_0-\kappa _{\phi }(M)$ and $j_{k,l}<m_1$ for $l<m_1-\kappa _{\psi }(M)$ witnessing (i).
Since $m_0 > \kappa _{\psi }(M)+\kappa _{\phi }(M)\kappa _{\psi }(M)$, we can find $ \kappa _{\psi }(M)$ of the $j_{k,l}$ each of which occurs for at least $ \kappa _\phi (M)$ different $i_k$. Thus for each of these, $ \phi(x;a_{j_{k,l}}^1)\in Av_\phi (I_0,|M|,M)$. Now assume (ii) does not hold. That is, there are $j_l<m_1$ for each $ l<m_1-\kappa _{\psi }(M)$ such that $\neg \phi (x;a_{j_l}^1)\in Av_\phi (I_0,|M|,M)$. Clearly one of these $j_l$ must correspond to one of the $ j_{k,l}$ from before that occurs at least $\kappa _{\psi }(M)$ times. But as we noted above $\phi (x;a_{j_{k,l}}^1)\in Av_\phi (I_0,|M|,M)$, a contradiction. Note that (ii) implies (i) by the symmetric argument. {\qed} \begin{theorem}[Symmetry] Let $\psi (y;x)=\phi (x;y)$ and $\Delta =\{\phi ,\psi ,\neg \phi ,\neg \psi \}$. Let $K$ be a class of $(\Delta ,n,d)$ -- good structures which all include a common set $A$. Suppose $M_0$, $M_1$, $M_2\prec _KM$, $M_0\prec _KM_1$, and $M_0\prec _KM_2$. Then $(M_0,M_1,M_2)$ is in $\Delta $ -- stable amalgamation inside $M$ if and only if $(M_0,M_2,M_1)$ is in $\Delta $ -- stable amalgamation inside $M$. \end{theorem} {{\sc Proof}} We show that $(M_0, M_1, M_2)$ in $\phi $ -- stable amalgamation implies that $(M_0,M_2,M_1)$ is in $\psi $ -- stable amalgamation. The result follows from this. Assume that $(M_0,M_1,M_2)$ is in $\phi $ -- stable amalgamation in $M$. Let $c\in M_1$ with $l(c)=l(x)$ be given. (We need to find a $(\{\phi\}^*_n, n) $ -- indiscernible sequence $I\subseteq M$ , with $Av_\psi (I,M_2,M)=tp_\psi(c,M_2,M)$.) By the definition of $M_0\prec _KM_1$, we may choose a $(\{\phi\}^*_n, n)$ -- indiscernible $I\subseteq M_0$ of length at least $\lambda (K)$ so that $tp_\psi (c,M_0,M_1)=Av_\psi (I,M_0,M_1)$ (and so $tp_\psi (c,M_0,M)=Av_\psi (I,M_0,M)$). We claim that $Av_\psi (I,M_2,M)=tp_\psi (c,M_2,M)$. (Note that the first type is defined since $I$ is long enough.) To see this, let $b\in M_2$ be given such that $M\models \psi [c;b]$, and we will show that $\psi (x;b)\in Av_\psi (I,M_2,M)$. Since $(M_0,M_1,M_2)$ is in $\phi $ -- stable amalgamation in $M$, we may choose a $(\{\psi\}^*_n, n)$ -- indiscernible set $J\subseteq M_0$ of length at least $\lambda (K)$ so that $tp_\phi (b,M_1,M)=Av_\phi (J,M_1,M)$. Since $ M\models \phi [b;c]$, we have $\phi (x;c)\in Av_\phi (J,M_1,M)$, so a large number of $b_i$ from $J$ have $M\models \phi [b_i;c]$, or rather $\psi (y;b_i)\in tp_\psi (c,M_0,M)=Av_\psi (I,M_0,M)$ for each of these $b_i$. So $ \psi (y;b_i)\in Av_\psi(I,M_0,M)$ for each of these $b_i$. But then by the previous Lemma, we may choose a large number of $c_j$ from $I $ for which $\phi (x;c_j)\in Av_{\phi}(J,M_1,M) = tp_{\phi}(b, M_1, M)$. That is, $M\models \phi[b;c_j]$ for each of these $c_j$, and so $\psi (y;b)\in Av_\psi (I,M_2,M)$ as desired. {\qed} \end{document}
\begin{document} \title{On String Contact Representations in 3D} \begin{abstract} An \emph{axis-aligned string} is a simple polygonal path, where each line segment is parallel to an axis in $\mathbb{R}^3$. Given a graph $G$, a \emph{string contact representation} $\Psi$ of $G$ maps the vertices of $G$ to interior disjoint axis-aligned strings, where no three strings meet at a point, and two strings share a common point if and only if their corresponding vertices are adjacent in $G$. The \emph{complexity of $\Psi$} is the minimum integer $r$ such that every string in $\Psi$ is a \emph{$B_r$-string}, i.e., a string with at most $r$ bends. While a result of Duncan et al. implies that every graph $G$ with maximum degree 4 has a string contact representation using $B_4$-strings, we examine constraints on $G$ that allow string contact representations with complexity 3, 2 or 1. We prove that if $G$ is Hamiltonian and triangle-free, then $G$ admits a contact representation where all the strings but one are $B_3$-strings. If $G$ is 3-regular and bipartite, then $G$ admits a contact representation with string complexity 2, and if we further restrict $G$ to be Hamiltonian, then $G$ has a contact representation, where all the strings but one are $B_1$-strings (i.e., $L$-shapes). Finally, we prove some complementary lower bounds on the complexity of string contact representations. \end{abstract} \section{Introduction} A \emph{contact system} of a geometric shape $\xi$ (e.g., line segment, rectangle, etc.) is an arrangement of a set of geometric objects of shape $\xi$, where two objects may touch, but cannot cross each other. Representing graphs as a contact system of geometric objects is an active area of research in graph drawing. Besides the intrinsic theoretical interest, such representations find application in many applied fields such as cartography, VLSI floor-planning, and data visualization. In this paper we examine contact systems of \emph{axis-aligned strings}, where each object is a simple polygonal path with axis-aligned straight line segments. No two strings are allowed to cross, i.e., any shared point must be an end point of one of these strings. \begin{figure} \caption{(a) A string contact representations of $K_4$. We use arrows to mark the two ends of each string. (b) Some invalid configurations. (c) A graph $G$. (d) A string contact representation of $G$. (e) The string corresponding to vertex $v_1$.} \label{fig:pcr} \end{figure} A \emph{string contact representation} of a graph $G$ is a contact system $\Psi$ of axis-aligned strings in $\mathbb{R}^3$, where each vertex is represented as a distinct string in $\Psi$, no three strings meet at a point, and two strings touch if and only if the corresponding vertices are adjacent in $G$, e.g., see Fig.~\ref{fig:pcr}. The reason we forbid more than two strings to meet at a point is to avoid degenerate cycles. By a $B_k$-string we denote a string with at most $k$ bends. The \emph{complexity of $\Psi$} is the minimum integer $r$ such that every string in $\Psi$ is a $B_r$-string. We discuss the related research in two broad categories, first in 2D and then in 3D. \textbf{Two Dimensions:} Contact representations date back to the 1930's, when Koebe~\cite{Koebe} proved that every planar graph can be represented as a contact system of circles in the Euclidean plane. 
A rich body of literature examines contact representation of planar graphs in $\mathbb{R}^2$ using axis-aligned rectangles~\cite{BhaskerS88,KantH93,KozminskiK85} and polygons of bounded size~\cite{AlamEGKP14,AlamBFGKK13,DuncanGHKK12}. In 1994, de Fraysseix et al.~\cite{fraysseix_T} proved that every planar graph admits a triangle contact representation, and showed how to transform it into a contact system of $T$- or $Y$-shaped objects. Subsequent studies involve constructing contact representations with simpler shapes such as axis-aligned segments ($B_0$-strings), and axis-aligned $L$ shapes ($B_1$-strings). Not all planar graphs can be represented using these shapes. Planar bipartite graphs~\cite{CzyzowiczKU98} and planar Laman graphs~\cite{KobourovUV13} can be represented using $B_0$-strings and $B_1$-strings, respectively. Recently, Aerts and Felsner~\cite{f2015} examined contact representations of planar graphs using general strings.
\emph{Intersection representation} (or, \emph{$B_kVPG$-representation}, where all the strings are $B_k$-strings) is another related concept, where the strings are allowed to cross. Graphs with $B_rVPG$-representations do not necessarily have contact representations with $B_r$-strings (e.g., $K_{3,3}$ with $r=0$ has a $B_rVPG$-representation, but does not have a contact representation with $B_0$-strings, as shown in Section~\ref{lb}). We refer the reader to~\cite{DBLP:journals/dcg/ChalopinGO10,DBLP:journals/jgaa/ChaplickU13,BiedlD15,FelsnerKMU16} for further background on $B_kVPG$-representation of planar and non-planar graphs.
\textbf{Three Dimensions:} Contact representation in three dimensions has been examined using axis-aligned boxes~\cite{AdigaC12,Thomassen86} and polyhedra~\cite{AlamEKPTU15}. In the context of geometric thickness, Duncan et al.~\cite{DuncanEK04} proved that the edges of every graph $G=(V,E)$ with maximum degree 4 can be partitioned into two planar graphs $G_1=(V,E_1)$ and $G_2=(V,E_2)$, each consisting of a set of paths and cycles. They showed that $G_1$ and $G_2$ can be drawn simultaneously on two planar layers with vertices at the same location and edges as $1$-bend polygonal paths. Such a drawing can easily be transformed into a contact representation of $B_4$-strings (e.g., see Figs.~\ref{fig:pcr}(c)--(e), details are in Appendix~\ref{sec:4-string}), and hence, every graph with maximum degree 4 has a string contact representation with complexity 4.
Not much is known about string contact representations with low complexity strings in $\mathbb{R}^3$. The challenge is evident even in extremely restricted scenarios: Given a graph along with a label $East$, $West$, $North$, $South$, $Up$, or $Down$, the problem of computing a no-bend orthogonal drawing in $\mathbb{R}^3$ respecting the label constraints has led to significant research outcomes~\cite{DBLP:journals/dcg/BattistaKLLW12,DBLP:conf/gd/GiacomoLP02}, even for apparently simple structures such as paths, cycles, or graphs with at most three cycles. Orthogonal drawings can sometimes be turned into string contact representations. Consider a graph $G$ that admits an edge orientation such that the outdegree of every vertex is at most two (e.g., a $(2,0)$-sparse graph~\cite{DBLP:journals/corr/AlamE0KPSU15}). A string contact representation with complexity 14 can easily be computed for such graphs, e.g., see Fig.~\ref{general} in Appendix~\ref{app:general}.
Specifically, if $G$ admits a $k$-bend orthogonal drawing, then the drawing can be turned into a string contact representation with complexity $(2k+1)$ by forming, for each vertex, a string that consists of the outward edges. However, computing orthogonal drawings with a low number of bends per edge is a challenging problem~\cite{HandbookOrtho}. To the best of our knowledge, the complexity of deciding whether a graph has an orthogonal drawing in $\mathbb{R}^3$ with one bend per edge is open.
\textbf{Contributions.} We present significant progress in characterizing graphs (possibly non-planar) that admit string contact representations in $\mathbb{R}^3$. We prove that every Hamiltonian and triangle-free graph $G$ has a contact representation, where all the strings but one are $B_3$-strings. Using a slightly different construction we show that every bipartite 3-regular graph admits a string contact representation with complexity 2. Most interestingly, we prove that every 3-regular graph that is Hamiltonian and bipartite has a contact representation, where all the strings but one are $B_1$-strings (i.e., $L$-shapes). This construction relies on a deep understanding of the graph structure and the geometry of $L$-contact systems. All proofs are constructive, and can be carried out in polynomial time. In contrast, we prove (by a simple counting argument) that 5-regular graphs do not have string contact representations, even with arbitrarily large complexity. Moreover, the 4-regular graph $K_5$ (resp., the 3-regular graph $K_{3,3}$) cannot be represented using $B_1$ strings (resp., $B_0$-strings).
\section{Preliminaries} \label{pre}
We assume familiarity with basic graph-theoretical notation. A \emph{straight-line drawing} of a graph $G$ is a drawing in $\mathbb{R}^d$, where each vertex of $G$ is mapped to a point, and each edge of $G$ is mapped to a straight line segment between its end vertices. The \emph{geometric thickness} of $G$ is the minimum integer $\theta$ such that $G$ admits a straight-line drawing $\Psi$ in $\mathbb{R}^2$ and a partition of its edges into $\theta$ sets, where no two edges of the same set cross (except possibly at their common end points), e.g., see Fig.~\ref{even}(a).
Let $\Psi$ be a contact representation of a graph $G$, where all the strings are \emph{$L$-shapes}, i.e., $B_1$-strings. For any vertex $v$ of $G$, we denote by $L_v$ the $L$-shape corresponding to $v$ in $\Psi$. Let $a,o,b$ be the polygonal path representing $L_v$. We refer to $o$ as the \emph{joint} of $L_v$, and the line segments $ao$ and $bo$ as the \emph{hands} of $L_v$. The points $a$ and $b$ are called the \emph{peaks} of $ao$ and $bo$, respectively.
By $\Pi_{xy},\Pi_{yz},\Pi_{xz}$ we denote the families of planes parallel to the $XY$, $YZ$, or $XZ$-plane, respectively. By $\Pi(t)$ we denote the plane $z=t$. For $Q\in\{X,Y,Z\}$, a \emph{$(+Q)$-arrow} is a directed straight line segment, which is aligned to the $Q$-axis and directed to the positive $Q$-axis. Define a $(-Q)$-arrow symmetrically. A \emph{$Q$-line} (resp., \emph{segment}) is a straight line (resp., segment) parallel to the $Q$-axis. Throughout the paper the terms `horizontal' and `vertical' denote alignment with the $X$- and $Y$-axes, respectively.
\begin{figure} \caption{(a) A geometric thickness two representation. (b)--(d) Construction of a staircase representation. } \label{even} \end{figure}
Let $\mathcal{W}=(w_1,w_2,\ldots,w_k)$ be a cycle of length $k\ge 4$.
We define a \emph{staircase representation} of $\mathcal{W}$ as a contact system of directed line segments (\emph{arrows}, or degenerate $L$-shapes), as illustrated in Figs.~\ref{even}(b)--(c). If $k$ is even, then the origins of the $L$-shapes are in \emph{general position}, i.e., no two of them have the same $x$- or $y$-coordinate. Otherwise, all the origins except for those of the two topmost horizontal arrows are in general position. Appendix~\ref{defn} includes a formal definition. \section{String Contact Representations of Complexity 2 or 3} \label{string-contact} \begin{theorem} \label{4-reg-gu} Every triangle-free Hamiltonian graph $G$ with maximum degree four has a contact representation where all strings but one are $B_3$-strings. \end{theorem} \begin{proof}[Proof Outline] Let $C=(v_1,\ldots,v_n)$ be a Hamiltonian cycle of $G$. Let $H$ be the graph obtained after removing the Hamiltonian edges from $G$. Observe that $H$ is a union of vertex disjoint cycles and paths. We transform each path $P=(w_1,\ldots,w_k)$ of $H$ into a cycle by adding a subpath of one or two dummy vertices between $w_1$ and $w_k$, depending on whether $P$ has more than one vertex or only one. Let $Q_1,\ldots, Q_k$ be the cycles in $H$. For each cycle $Q_i$, where $1\le i\le k$, we construct a staircase representation $\Psi_i$ of $Q_i$ on $\Pi(0)$. If $Q_i$ is a cycle with an odd number of vertices, then we construct the staircase representation such that the leftmost segment among the topmost horizontal segments corresponds to the vertex with the lowest index in $Q_i$. For example, see the topmost staircase of Fig.~\ref{3-string-short}(a). We then place the staircase representations diagonally along a line with slope $+1$. We ensure that the horizontal and vertical slabs containing $\Psi_i$ do not intersect $\Psi_j$, where $1\le j(\not=i)\le k$. We refer to this representation as $\Psi_H$. Consider now the edges of the Hamiltonian cycle $C=(v_1,\ldots,v_n)$. Note that each vertex $v_j$, where $1\le j\le n$, is represented using an axis-aligned arrow $r_j$ in $\Psi_H$. For each $r_j$, we construct a $(+Z)$-arrow $r'_j$ of length $j$ that starts at the origin of $r_j$. Consequently, the plane $\Pi(j)$ intersects only those arrows $r'_q$, where $j\le q\le n$. Let $I_j$ be the set of intersection points on $\Pi(j)$. By construction $\Psi_H$ satisfies the following sparseness property: Any vertical (resp., horizontal) line on $\Pi(j)$ contains at most one point (resp., two points) from $I_j$. For every pair of points $p,q$ that belong to $I_j$ and lie on the same horizontal line, the corresponding vertices are adjacent in $H$, and they belong to a cycle of $H$ with an odd number of vertices. \begin{figure} \caption{Illustration for the proof of Theorem~\ref{4-reg-gu}.} \label{3-string-short} \end{figure} For each $j$ from $1$ to $n-1$, we realize the edge $(v_j,v_{j+1})$ by extending $r'_j$ on $\Pi(j)$. Note that it suffices to use two bends to route $r'_j$ to touch $r'_{j+1}$, where one bend is to enter $\Pi(j)$ and the other is to reach $r'_{j+1}$. Figs.~\ref{3-string-short}(b)--(c) illustrate the extension of $r'_j$. We use the sparseness property of $\Psi_H$ to show that we can find such an extension of $r'_j$ without introducing any crossing. Details are in Appendix~\ref{sec:4-string}. Finally, it is straightforward to realize $(v_1,v_n)$ by routing $r'_n$ on $\Pi(n)$ using two bends, and then moving downward to touch $r_1$. Therefore, the string representing $v_n$ is a $B_4$-string.
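To summarize the bend count implicit in the construction above (a quick accounting for the reader, not an additional claim): for $1\le j\le n-1$ the string of $v_j$ consists of the arrow $r_j$ on $\Pi(0)$, one bend where the $(+Z)$-arrow $r'_j$ leaves the origin of $r_j$, one bend to enter $\Pi(j)$, and one bend on $\Pi(j)$ to reach $r'_{j+1}$, that is,
\begin{displaymath}
\underbrace{1}_{r_j\text{ to }r'_j}+\underbrace{1}_{\text{enter }\Pi(j)}+\underbrace{1}_{\text{reach }r'_{j+1}}=3
\end{displaymath}
bends in total, so every such string is indeed a $B_3$-string, while the string of $v_n$ uses one additional bend to move downward to $r_1$ and is therefore the single $B_4$-string.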
\end{proof} \begin{theorem} \label{3-reg-b} Every 3-regular bipartite graph has a string contact representation with complexity 2. \end{theorem} \begin{proof} Let $G$ be a 3-regular bipartite graph with edge set $E$. Since $G$ is regular and bipartite, Hall's condition~\cite{Hall} holds, and hence $G$ contains a perfect matching $M\subset E$. Let $H$ be the graph obtained by removing the edges of $M$ from $G$. Since $H$ is 2-regular, $H$ is a union of disjoint cycles. We now construct a contact representation $\Psi_H$ of $H$ in the same way as in the proof of Theorem~\ref{4-reg-gu}. However, while constructing the $(+Z)$-arrows, we take the matching into consideration. For each edge $(v_a,v_b)\in M$, we set the length of $r'_a$ and $r'_b$ to $\beta = \min(a,b)$. Consequently, we can route both $r'_a$ and $r'_b$ to touch each other on $\Pi(\beta)$, e.g., see Fig.~\ref{3-string-short}(d). Since $G$ is bipartite, $H$ contains only cycles with an even number of vertices. Consequently, the origins of the $(+Z)$-arrows are in general position, and hence the extensions of $r'_a$ and $r'_b$ do not create any unnecessary adjacency. \end{proof} \section{$\mathbf{\it L}$-Contact Representations} \label{sec:lcontact} The techniques used in Section~\ref{string-contact} inherently require strings with two or more bends. In this section we restrict our attention to contact representations of $B_1$-strings ($L$-shapes). We prove that every 3-regular Hamiltonian bipartite graph $G$ has a contact representation where all strings but one are $B_1$-strings. Appendix~\ref{app:last} illustrates a walkthrough example. \textbf{Technical Details:} Let $(v_1,\ldots,v_n,v_1)$ be a Hamiltonian cycle of $G$, and let $P=(v_1,\ldots,v_n)$ be the corresponding Hamiltonian path. Let $G'$ be the graph obtained after deleting the edge $(v_1,v_n)$ from $G$. We first construct an $L$-contact system for $G'$, and then extend this contact system to compute the representation for $G$. Color all the vertices of $G'$, as follows: Order the vertices from left to right in the order they appear on $P$ (in the increasing order of indices). For each non-Hamiltonian edge $(v_i,v_j)$, where $j>i+1$, color $v_i$ and $v_j$ with red and blue colors, respectively. Since each vertex is incident to exactly one non-Hamiltonian edge, all the vertices are now colored. This vertex coloring creates \emph{red} and \emph{blue chains} (maximal subpaths containing vertices of the same color) on $P$. Let $\mathcal{C}_1,\mathcal{C}_2,\ldots, \mathcal{C}_k$ be all the red chains in $P$ in the left to right order, e.g., see Fig.~\ref{decomposition}(a). For each $\mathcal{C}_i$, where $1\le i\le k$, there is a blue chain $\mathcal{C}'_i$ that follows $\mathcal{C}_i$. We refer to $(\mathcal{C}_i,\mathcal{C}'_i)$ as a \emph{chain pair}. Let $\mathcal{C}_i$ be the red chain $v_j,\ldots,v_k$, where $1\le j,k \le n$. Since $\mathcal{C}_i$ is maximal, the vertices $v_{j-1}$ and $v_{k+1}$ (if they exist) are blue. We call $v_{j-1}$ and $v_{k+1}$ the \emph{head} and \emph{tail vertex} of $\mathcal{C}_i$, e.g., see Fig.~\ref{decomposition}(b). The set $blue(\mathcal{C}_i)$ consists of all blue vertices of $G'$ (following $v_k$ on $P$) that are adjacent to the vertices of $\mathcal{C}_i$. For example, in Fig.~\ref{decomposition}, $blue(\mathcal{C}_2)$ contains 4 blue vertices. For the $j$th blue vertex $w$ (from left) on $\mathcal{C}'_1$, define $\alpha(w)$ to be $\lfloor\frac{j}{2}\rfloor+1$, e.g., see Fig.~\ref{decomposition}(c).
For $i>1$, define $\alpha(w)$ (for the $j$th vertex $w$ on $\mathcal{C}'_i$) to be $\delta_{i-1} + \lfloor\frac{j}{2}\rfloor+1$, where $\delta_{i-1}$ is the maximum $\alpha$ value in $\mathcal{C}'_{i-1}$ (decreased by $1$ if this maximum is even and unique); e.g., see Fig.~\ref{decomposition}(c). Finally, define $G'_i$ to be the graph induced by the edges of $\mathcal{C}_1,\mathcal{C}'_1,\ldots,\mathcal{C}_i,\mathcal{C}'_i$, along with the edges that connect blue vertices of $G'$ to these chains, e.g., see Fig.~\ref{decomposition}(d). A vertex $v$ is \emph{unsaturated} in $G'_i$ if $v$ has a neighbor in $G'$ that does not belong to $G'_i$. Otherwise, $v$ is a \emph{saturated} vertex. Every blue vertex $w$ in $G'$ must be incident to a non-Hamiltonian edge $(u,w)$ such that $u$ is red and appears before $w$ on $P$. We call $u$ the \emph{red parent} of $w$, and $w$ the \emph{blue child} of $u$. \textbf{Idea:} We construct the $L$-contact representation of $G'$ incrementally, starting from $G'_1$, and then at the $i$th step, adding the chain pair $(\mathcal{C}_i,\mathcal{C}'_i)$ and the edges that connect $blue(\mathcal{C}_i)$ to $\mathcal{C}_i$. In other words, after the $i$th step, we will have an $L$-contact representation $\Psi'_i$ of $G'_i$. For each $i$ from $1$ to $k$, we construct $\Psi'_i$ maintaining some drawing invariants. In brief, we will draw the red chain $\mathcal{C}_1$ as a contact representation of arrows (degenerate $L$-shapes), where the arrows will be arranged along an $xy$-monotone polygonal path lying on the plane $\Pi(1)$, e.g., see Fig.~\ref{Gamma1}(a). For each red vertex $v$ and non-Hamiltonian edge $(v,w)$, we draw the other hand of $L_v$ as a $(+Z)$-arrow that stops at $\Pi(\alpha(w))$. The intuition is that the joints of the $L$-shapes of the blue vertices will be drawn on the planes determined by their $\alpha(\cdot)$ values. Since every blue vertex $w$ in $\mathcal{C}'_1$ and $blue(\mathcal{C}_1)$ has a red parent $v$ in $\mathcal{C}_1$, we draw $L_w$ initially as a point (a degenerate $L$-shape) at the peak of $L_v$, e.g., see Fig.~\ref{Gamma1}(c). Thus to complete the drawing of $\Psi'_1$, we only need to realize the edges of $\mathcal{C}'_1$, which is done by extending the degenerate blue $L$-shapes. The $\alpha(\cdot)$ values play a crucial role in ensuring that the blue $L$-shapes follow an increasing $Z$-direction, and thus can be drawn without introducing any unnecessary adjacency. For $i>1$, there are two key differences between $(\mathcal{C}_i,\mathcal{C}'_i)$ and $(\mathcal{C}_1,\mathcal{C}'_1)$. First, $\mathcal{C}_i$ has a head vertex $v_h$, which is already drawn in $\Psi'_{i-1}$. Second, the vertices of $\mathcal{C}'_i$ and $blue(\mathcal{C}_i)$ may have red parents that do not belong to $\mathcal{C}_i$, and are thus already drawn in $\Psi'_{i-1}$. The most favorable scenario would be to construct a drawing of $(\mathcal{C}_i,\mathcal{C}'_i)$ and the edges connecting them to $blue(\mathcal{C}_i)$ independently (following the drawing method of $\Psi'_1$), and then insert it into $\Psi'_{i-1}$ to obtain the drawing $\Psi'_i$. If the red parents of all the vertices in $\mathcal{C}'_i$ and $blue(\mathcal{C}_i)$ belong to $\mathcal{C}_i$, then we can easily construct $\Psi'_i$ using the above idea. Otherwise, merging the drawings properly seems challenging. However, using the drawing invariants we can find certain properties in $\Psi'_{i-1}$ that make such a merging possible. \textbf{Drawing Details:} For each $i$ from $1$ to $k$, we construct $\Psi'_i$ maintaining the following drawing invariants.
\begin{figure} \caption{(a) $\mathcal{C} \label{decomposition} \end{figure} \begin{compactenum} \item [$I_1$.] $\Psi'_i$ is an $L$-contact representation of $G'_i$. \item[$I_2$.] Every blue vertex $v$ of degree one in $G'_i$ is drawn as a point on $\Pi(\alpha(v))$. The projection of these points on $\Pi_{xy}$ are in general position. \item [$I_3$.] Let $w_j$ be the $j$th blue vertex on $\mathcal{C}'_i$ (from left to right). If $j$ is odd, then $\Pi(\alpha(w_{j+1}))$ contains only one point (peak) of $L_{w_j}$, where the rest of $L_{w_j}$ lies below $\Pi(\alpha(w_{j+1}))$. Otherwise, $\alpha(w_j) = \alpha(w_{j+1})$, and $\Pi(\alpha(w_{j}))$ contains entire $L_{w_j}$. Moreover, $L_{w_j}$ is non-degenerate, and the $Y$-line through the joint of $w_{j+1}$ must intersect (the extension of) the horizontal hand of $L_{w_j}$. \end{compactenum} \noindent For simplicity we do not introduce drawing invariants for the red $L$-shapes. Their drawing will be obvious from the context. Informally, for a red chain $\mathcal{C}_i$, one hand of the corresponding $L$-shapes will be drawn on the plane $\Pi(1)$ (if $i=1$) or on the plane determined by the $\alpha(\cdot)$ value of its head (if $i>1$). The remaining hand (if needed) is drawn as a $(+Z)$-arrow that stops at some plane determined by the $\alpha(\cdot)$ value of its blue child. \subsection{Construction of $\Psi'_1$} Let $\mathcal{C}_1$ be the red chain $v_1,v_2,\ldots,v_q$. The tail $v_t$ of $\mathcal{C}_1$, which is blue, is adjacent to exactly two vertices of $\mathcal{C}_1$: One is the red vertex $v_q$, and the other is its red parent $v_r$. While constructing $\Psi'_1$, we first realize the red-red adjacencies, then the red-blue (equivalently, blue-red) adjacencies, and finally, the blue-blue adjacencies of $G'_1$. \textbf{Red-Red Adjacencies:} Red-red adjacencies correspond to Hamiltonian edges, and thus appear in $\mathcal{C}_1$. To realize these adjacencies, we draw the $L$-shapes of the vertices of $\mathcal{C}_1$ using arrows that lie along an $xy$-monotone path on $\Pi(1)$, as illustrated in Fig.~\ref{Gamma1}(a). In brief, we ensure that the arrows are horizontal and vertical alternatively, and all the joints (origins) are in general position. We then extend the joint of $L_{v_r}$ such that the $Y$-line through it intersects $L_{v_q}$, e.g., see Fig.~\ref{Gamma1}(b). All these conditions are straightforward to achieve. \begin{figure} \caption{Illustration for the drawing of $\Psi'_1$. (a) Initial drawing. (b) Construction for $L_{v_r} \label{Gamma1} \end{figure} \textbf{Red-Blue Adjacencies:} For each red vertex $v$ (except for $v_r$), we create a $(+Z)$-arrow (the other hand of $L_v$) that stops at $\Pi(\alpha(w))$, where $w$ is the blue child of $v$. We then draw $w$ as a point at the peak of the arrow, e.g., see Fig.~\ref{Gamma1}(c). We will refer to such an initial point representation of $w$ as the \emph{initiator of $w$}, and denote the point as $init(w)$. Although the joint of such a point representation of $w$ coincides with $init(w)$, it is important to note that we may later extend the point representation of $w$ to an arrow or a full $L$-shape, and the joint of the new $L$-representation does not necessarily coincide with $init(w)$. The only remaining red-blue adjacencies are $(v_q,v_t)$ and $(v_r,v_t)$. Recall that the $Y$-line through the joint of $L_{v_r}$ intersects $L_{v_q}$. Therefore, we can draw a $(+Y)$-arrow representing $L_{v_t}$ that touches both $L_{v_r}$ and $L_{v_q}$, e.g., see Fig.~\ref{Gamma1}(d). 
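Before turning to the blue-blue adjacencies, it may help to record the $\alpha(\cdot)$ values on a small hypothetical chain (an illustration added for concreteness only; the numbers are not used anywhere else). If $\mathcal{C}'_1$ consists of six blue vertices $w_1,\ldots,w_6$, then $\alpha(w_j)=\lfloor\frac{j}{2}\rfloor+1$ gives
\begin{displaymath}
\alpha(w_1)=1,\quad \alpha(w_2)=\alpha(w_3)=2,\quad \alpha(w_4)=\alpha(w_5)=3,\quad \alpha(w_6)=4,
\end{displaymath}
that is, the value strictly increases from each odd position to the following even position and stays the same from each even position to the following odd position. This is exactly the alternation that Invariant $I_3$ refers to.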
\textbf{Blue-Blue Adjacencies:} Let $w_1(=v_t),\ldots,w_k$ be the blue chain $\mathcal{C}'_1$. If $\mathcal{C}'_1$ does not include all the blue vertices of $G'$, then let $w_{k+1}$ be the first blue vertex following $w_k$ on $P$. Note that $w_k$ and $w_{k+1}$ are the head and tail of $\mathcal{C}_2$, respectively, e.g., see Fig.~\ref{decomposition}(b). On the other hand, if $\mathcal{C}'_1$ contains all the blue vertices of $G'$, then consider a dummy vertex $w_{k+1}$. If $k=1$, then there is no blue-blue adjacency to be realized. We only construct a $(+Z)$-arrow that starts at $init(w_1)$ and stops at $\Pi(\alpha(w_2))$. This satisfies the invariant $I_3$ (since $w_1$ is at an odd position on $\mathcal{C}'_1$). \begin{figure} \caption{Illustration for blue-blue adjacencies.} \label{invariants} \end{figure} If $k>1$, then we modify $L_{w_j}$, where $1\le j\le k$, to realize the blue-blue adjacencies. Observe that each $L_{w_j}$, except $L_{w_1}$, is currently represented as a point on $\Pi(\alpha(w_j))$. We first construct a $(+Z)$-arrow which starts at $init(w_1)$ and stops at $\Pi(\alpha(w_2))$, i.e., $L_{w_1}$ satisfies the invariant $I_3$, e.g., see Fig.~\ref{invariants}(a). Consider now the modification for $L_{w_j}$, where $j>1$. Assume that the $L$-shapes $L_{w_1}, \ldots,L_{w_{j-1}}$ already satisfy Invariant $I_3$. If $j$ is even, then $(j-1)$ is odd and by definition of $\alpha(\cdot)$, $\alpha(w_{j-1})<\alpha(w_j)$. By Invariant $I_3$, $L_{w_{j-1}}$ has only one point (a peak) $o$ on $\Pi(\alpha(w_j))$. We now have two options to create $L_{w_j}$ connecting $o$ and $init(w_j)$. One of these two options would satisfy Invariant $I_3$, e.g., see Figs.~\ref{invariants}(b)--(c). If $j$ is odd, then by the definition of $\alpha(\cdot)$, we have $\alpha(w_{j-1}) = \alpha(w_j)$. Since $(j-1)$ is even, by Invariant $I_3$, $L_{w_{j-1}}$ lies entirely on $\Pi(\alpha(w_j))$, and the $Y$-line through $init(w_j)$ intersects (the extension of) the horizontal hand of $L_{w_{j-1}}$. We construct a vertical arrow for $L_{w_j}$ that starts at $init(w_j)$ and touches $L_{w_{j-1}}$ (we extend $L_{w_{j-1}}$ if necessary), e.g., see Fig.~\ref{invariants}(d). We then construct the other hand of $L_{w_j}$ using a $(+Z)$-arrow that starts at $init(w_j)$ and stops at $\Pi(\alpha(w_{j+1}))$, so that $L_{w_j}$ satisfies Invariant $I_3$. Note that for the last vertex $w_k$, $L_{w_k}$ either has a peak on $\Pi(\alpha(w_{k+1}))$ or lies entirely on $\Pi(\alpha(w_{k+1}))$ (depending on the parity of $k$). This completes the construction of $\Psi'_1$, which already satisfies $I_3$. Therefore, it remains to show that $\Psi'_1$ satisfies $I_1$ and $I_2$. It is straightforward to observe that all the adjacencies have been realized. We thus need to show that we did not create any unnecessary adjacency. The only nontrivial part of the construction is the modification of the blue $L$-shapes to realize the blue-blue adjacencies, and it suffices to show that we do not intersect any unnecessary blue $L$-shape or any red $L$-shape during this process. By construction, the polygonal path determined by the blue $L$-shapes is monotonically increasing along the $Z$-axis, and hence the modification does not create any unnecessary blue-blue adjacency. Moreover, by construction, the joints of the red $L$-shapes, and thus the initiators of the blue vertices, are also in general position. Therefore, the modification does not introduce any unnecessary red-blue adjacency. Hence $\Psi'_1$ satisfies $I_1$.
Since the blue vertices of degree one in $\Psi'_1$ are represented as points directly above (with respect to $\Pi_{xy}$) the joints of the red $L$-shapes, $\Psi'_1$ satisfies $I_2$. \subsection{Construction of $\Psi'_i$} We now assume that $i>1$ and for every $q<i$, $\Psi'_q$ satisfies the Invariants $I_1$--$I_3$. Here we describe the construction of $\Psi'_i$. \textbf{Red-Red and Red-Blue (Equivalently, Blue-Red) Adjacencies:} Let $\mathcal{C}_i$ be the red chain $v_j,v_{j+1},\ldots,v_{j+q}$ with head $v_h$ and tail $v_t$. The tail $v_t$ has two red neighbors preceding it on $P$: one is $v_{j+q}$, and the other one is its red parent $v_r$. We distinguish the following two cases. \textbf{Case 1 ($v_r$ belongs to $\mathcal{C}_i$):} By Invariant $I_3$ and the choice of $\alpha(\cdot)$ values, $L_{v_h}$ either lies entirely on $\Pi(\alpha(v_t))$, or contains only a peak on $\Pi(\alpha(v_t))$. If $L_{v_h}$ contains only a peak $o$ on $\Pi(\alpha(v_t))$, then the idea is to draw $\mathcal{C}_i$ and $blue(\mathcal{C}_i)$ independently, and then merge the drawing such that $L_{v_j}$ touches $o$. Specifically, we find a rectangle $R$ on $\Pi(\alpha(v_t))$ with the bottom-left corner at $o$. We construct a drawing $D$ of $\mathcal{C}_i$ and $blue(\mathcal{C}_i)$ by mimicking the construction of $\Psi'_1$, and place $D$ (possibly by scaling down) inside $R$, e.g., see Fig.~\ref{GammaI}(a). Note that $D$ does not contain any blue-blue adjacencies. By construction, one hand $r$ of $L_{v_j}$ lies on $R$ (the other is represented by a $(+Z)$-arrow). We adjust the placement of $D$ such that the peak of $r$ coincides with $o$. We then perturb $D$ such that the initiators of $blue(\mathcal{C}_i)$ and the degree-one blue vertices of $\Psi'_{i-1}$ lie in general position. \begin{figure} \caption{(a)--(b) Construction of $\Psi'_i$, where $v_r$ belongs to $\mathcal{C} \label{GammaI} \end{figure} If $L_{v_h}$ lies entirely on $\Pi(\alpha(v_t))$, then by Invariant $I_3$, $L_{v_h}$ is non-degenerate. We find a rectangle $R$ on $\Pi(\alpha(v_t))$ with one side along the horizontal hand of $L_{v_h}$. We then construct a drawing $D$ of $\mathcal{C}_i$ and $blue(\mathcal{C}_i)$ by mimicking the construction of $\Psi'_1$. Recall that such a construction forces $L_{v_{j+q}}$ to contain an $X$-segment. Instead, we use a symmetric construction such that $L_{v_{j}}$ contains a $Y$-segment on $\Pi(\alpha(v_t))$, and thus the hand $\ell$ of $L_{v_{j+q}}$ that lies on $\Pi(\alpha(v_t))$ may be horizontal or vertical (depending on the number of vertices in $\mathcal{C}'_i$). If $\ell$ is vertical (resp., horizontal), then we represent $L_{v_t}$ as a horizontal (resp., vertical) arrow with origin at $init(v_t)$. It is now straightforward to place $D$ (possibly after a vertical reflection) inside $R$ such that $L_{v_j}$ touches the horizontal hand of $L_{v_h}$, e.g., see Fig.~\ref{GammaI}(b). Observe that in both cases above, we have a special scenario, as follows: If $v_r$ coincides with $v_j$, then, by the construction of the red $L$-shapes, $L_{v_r}$ does not contain any $(+Z)$-arrow, e.g., see Fig.~\ref{invariants}(e). This is fine as long as $v_j \not = v_{j+q}$, because $L_{v_r}$ already realizes three incidences on its current hand. If $v_j$ coincides with $v_{j+q}$, then we create a $(+Z)$-arrow for $L_{v_j}$ that stops at $\Pi(\alpha(w))$, where $w$ is a blue child of $v_j$, e.g., see Fig.~\ref{invariants}(f).
\textbf{Case 2 ($v_r$ belongs to $\Psi'_{i-1}$):} In this scenario, the degree of $v_t$ in $G'_{i-1}$ is one, and by Invariant $I_2$, $v_t$ is represented as a point in $\Psi'_{i-1}$. We distinguish two subcases depending on the size of $\mathcal{C}_i$. \textit{Case 2a ($\mathcal{C}_i$ has two or more vertices):} If $L_{v_h}$ contains only a peak $o$ on $\Pi(\alpha(v_t))$, then we represent $L_{v_t}$ using a rightward arrow $r$ that starts at $init(v_t)$ and stops at some point $o'$ to the right of the $Y$-line through $o$. We then construct a drawing $D$ of $\mathcal{C}_i$ and $blue(\mathcal{C}_i)$ on $\Pi(\alpha(v_t))$ mimicking the construction of $\Psi'_1$. However, this is simpler since the red parent of $v_t$ does not belong to $\mathcal{C}_i$. We ensure that $L_{v_j}$ has a $Y$-segment. Figs.~\ref{GammaI}(c)--(f) show all distinct scenarios. Assume now that $L_{v_h}$ lies entirely on $\Pi(\alpha(v_t))$. By Invariant $I_3$ and the choice of $\alpha(\cdot)$ values, $L_{v_h}$ is non-degenerate and (the extension of) its horizontal hand intersects the vertical line through $init(v_t)$ in $\Psi'_{i-1}$. The drawing in this case is illustrated in Figs.~\ref{GammaI2}(a)--(b). Appendix~\ref{case2b} includes the details. Note that in both cases we may need to perturb the drawing $D$ such that the $L$-shapes in $D$ do not create any unnecessary intersections, and $blue(\mathcal{C}_i)$ and the degree-one blue vertices of $\Psi'_{i-1}$ lie in general position. \begin{figure} \caption{(a)--(b) Different scenarios while constructing $D$. (c)--(d) Case 2b.} \label{GammaI2} \end{figure} \textit{Case 2b ($\mathcal{C}_i$ has only one vertex):} This case is straightforward to process, e.g., see Figs.~\ref{GammaI2}(c)--(d). Details are included in Appendix~\ref{case2b}. \textbf{Blue-Blue Adjacencies:} Let $w_1(=v_t),\ldots,w_k$ be the blue chain $\mathcal{C}'_i$. If $\mathcal{C}'_i$ does not include all the blue vertices of $G'$, then let $w_{k+1}$ be the first blue vertex following $w_k$ on $P$. Note that $w_k$ is the head of $\mathcal{C}_{i+1}$, and $w_{k+1}$ is the tail of $\mathcal{C}_{i+1}$. On the other hand, if $\mathcal{C}'_i$ contains all the blue vertices of $G'$, then consider a dummy vertex $w_{k+1}$. If $k=1$, then all the blue-blue adjacencies in $G'_i$ are present in $G'_{i-1}$, and we only construct a $(+Z)$-arrow which starts at $init(w_1) = init(v_t)$ and stops at $\Pi(\alpha(w_2))$. Otherwise, we modify $L_{w_j}$, where $1\le j\le k$, to realize the blue-blue adjacencies. By Invariant $I_2$ and the initial construction of $blue(\mathcal{C}_i)$, all $L_{w_j}$ except $L_{w_1}$ are represented as distinct points on $\Pi(\alpha(w_j))$, which are in general position. Therefore, we can modify $L_{w_j}$ so as to satisfy Invariant $I_3$ in the same way as we realized the blue-blue adjacencies in $\Psi'_1$. The argument that $\Psi'_i$ satisfies the induction invariants is similar to that for $\Psi'_1$, but we also need to consider the drawing $\Psi'_{i-1}$. While drawing $\mathcal{C}'_i$ and $blue(\mathcal{C}_i)$, we ensured the general position property, and thus satisfied Invariant $I_2$. This general position property leads to the argument that no unnecessary adjacency is created during the modification of the blue $L$-shapes (i.e., Invariant $I_1$). Invariant $I_3$ follows directly from the modification of the blue $L$-shapes. Finally, we modify $L_{v_n}$ to realize the edge $(v_1,v_n)$.
Since $v_n$ is blue, by Invariant $I_3$, one of the hands of $L_{v_n}$ can be extended, and we extend this hand using about three more bends to touch $L_{v_1}$. The following theorem summarizes the result of this section. \begin{theorem} \label{3-reg-bh} Every 3-regular Hamiltonian bipartite graph has a contact representation where all strings but one are $B_1$-strings. \end{theorem} \section{Lower Bounds}\label{lb} \begin{theorem} \label{5-reg} No 5-regular graph admits a string contact representation. \end{theorem} \begin{proof}[Proof Outline] Let $G$ be a 5-regular graph, and suppose for a contradiction that $G$ admits a string contact representation $D$. For each edge $(u,w)$, if the string of $u$ touches the string of $w$, then direct the edge from $u$ to $w$. Note that $G$ has exactly $\frac{5n}{2}$ edges (where $n$ is the number of vertices) and every edge receives at least one direction, so some vertex has out-degree $\ge 3$. However, since no three strings in $D$ meet at a point, the out-degree of every vertex is at most two, a contradiction. \end{proof} \begin{theorem} \label{4-reg} $K_5$ (a $4$-regular graph) does not have an $L$-contact representation. \end{theorem} \begin{proof}[Proof Outline] Suppose for a contradiction that $K_5$ admits an $L$-contact representation, and let $D$ be such a representation of $K_5$. Let $v_1,\ldots, v_5$ be the vertices of $K_5$. Observe that any axis-aligned $L$-shape must entirely lie on one of the three types of plane: $\Pi_{xy}$, $\Pi_{yz}$, and $\Pi_{xz}$. Since there are five $L$-shapes in $D$, the plane types for at least two $L$-shapes must be the same. Without loss of generality assume that $L_{v_1}$ and $L_{v_2}$ both lie on $\Pi_{xy}$. Since $v_1$ and $v_2$ are adjacent, the planes of $L_{v_1}$ and $L_{v_2}$ cannot be distinct. Therefore, without loss of generality assume that they coincide with $\Pi(0)$. Since $v_i$, where $3\le i\le 5$, is adjacent to both $v_1$ and $v_2$, $L_{v_i}$ must share a point $a_i$ with $L_{v_1}$ and a point $b_i$ with $L_{v_2}$. Since no three strings meet at a point in $D$, the points $a_i$ and $b_i$ are distinct. The rest of the proof claims that the polygonal path $P_i$ of $L_{v_i}$ that starts at $a_i$ and ends at $b_i$ lies entirely on $\Pi(0)$. This property of $D$ can be used to argue that $D$ is a string contact representation of $K_5$ on $\Pi(0)$, which contradicts that $K_5$ is a non-planar graph. Appendix~\ref{applb} includes the details. \end{proof} \begin{theorem} \label{3-reg} $K_{3,3}$ (a $3$-regular graph) does not have a segment contact representation. \end{theorem} \begin{proof}[Proof Outline] The proof is based on the observation that any contact representation of a 4-cycle, i.e., a cycle of four vertices, with axis-aligned $B_0$-strings, lies entirely on a single plane. Furthermore, two adjacent segments completely determine this plane. Since the vertices of $K_{3,3}$ can be covered by two 4-cycles that share an edge, any string contact representation of $K_{3,3}$ must lie on a single plane. A detailed proof is in Appendix~\ref{applb}. \end{proof} \section{Directions for Future Research} Improving the complexity bound of the string contact representations for the graph classes we discussed in Theorems~\ref{4-reg-gu}--\ref{3-reg-b} is a natural avenue to explore. But the most fascinating question is whether every $3$-regular graph admits an $L$-contact representation in $\mathbb{R}^3$, even with the `triangle-free' constraint. \noindent{\bf Acknowledgments.} The author is thankful to Anna Lubiw and the anonymous reviewers for their detailed comments, which improved the presentation of the paper.
\appendix \section{Representations with String Complexity 4} \label{sec:4-string} \begin{theorem} \label{4-reg-u} Every graph with maximum degree $4$ admits a string contact representation with complexity $4$. \end{theorem} \begin{proof} The proof is based on the concept of geometric thickness. Duncan et al.~\cite{DuncanEK04} proved that every graph with maximum degree 4 has geometric thickness two, and if the edges are allowed to be drawn as orthogonal (bent) paths, then such a drawing $\mathcal{D}$ can be computed satisfying the following properties. \begin{enumerate} \item[A.] Every vertex in $\mathcal{D}$ has unique $x$- and $y$-coordinates, and each edge $e$ in $\mathcal{D}$ is drawn as a sequence of two axis-aligned line segments between the end vertices of $e$. \item[B.] Each planar layer in $\mathcal{D}$ consists of paths and cycles. Each path or cycle $v_1,\ldots,v_k$ in the first (resp., second) layer is drawn inside a vertical (resp., horizontal) slab, where the path $v_1,\ldots,v_k$ is drawn as an $x$-monotone (resp., $y$-monotone) polygonal path. \end{enumerate} Fig.~\ref{4-string}(a) illustrates such a drawing $\mathcal{D}$; the edges of one planar layer are drawn using thin lines, and the other planar layer is drawn using thick lines. \begin{figure} \caption{(a) Illustration for $\mathcal{D} \label{4-string} \end{figure} For each cycle $C$ in the first (resp., second) layer, we direct the edges on $C$ in clockwise order, and for each path $P$, we direct the edges of $P$ from left to right (resp., bottom to top). Consequently, each vertex now has out-degree at most one in each layer. We lift the edges on the second layer up by one unit, representing each vertex using a unit $Z$-line. Fig.~\ref{4-string}(b) illustrates a schematic representation of the resulting drawing. This yields a contact representation of $G$ using $B_4$-strings, where the string of each vertex consists of its outgoing edges and the $Z$-line that connects these outgoing edges. Fig.~\ref{4-string}(c) illustrates such a $B_4$-string. \end{proof} \noindent\textbf{Theorem~\ref{4-reg-gu}.} \emph{ Every triangle-free Hamiltonian graph $G$ with maximum degree four has a contact representation where all strings but one are $B_3$-strings. } \begin{proof} Let $C=(v_1,\ldots,v_n)$ be a Hamiltonian cycle of $G$. Let $H$ be the graph obtained after removing the Hamiltonian edges from $G$. Since every vertex of $H$ is of degree at most two, $H$ is a union of vertex disjoint cycles and paths. We transform each path $P=(w_1,\ldots,w_k)$ of $H$ into a cycle by adding a subpath of one or two dummy vertices between $w_1$ and $w_k$, depending on whether $P$ has more than one vertex or only one. Let $Q_1,\ldots, Q_k$ be the cycles in $H$. For each cycle $Q_i$, where $1\le i\le k$, we construct a staircase representation $\Psi_i$ of $Q_i$ on $\Pi(0)$. If $Q_i$ is a cycle with an odd number of vertices, then we construct the staircase representation such that the leftmost segment among the topmost horizontal segments corresponds to the vertex with the lowest index in $Q_i$. For example, see the topmost staircase of Fig.~\ref{3-string}(a). We then place the staircase representations diagonally along a line with slope $+1$. We ensure that the horizontal and vertical slabs containing $\Psi_i$ do not intersect $\Psi_j$, where $1\le j(\not=i)\le k$. We refer to this representation as $\Psi_H$. Consider now the edges of the Hamiltonian cycle $C=(v_1,\ldots,v_n)$. Note that each vertex $v_j$, where $1\le j\le n$, is represented using an axis-aligned arrow $r_j$ in $\Psi_H$.
For each arrow $r_j$, we construct a $(+Z)$-arrow $r'_j$ of length $j$ that starts at the origin of $r_j$. Consequently, the plane $\Pi(j)$ intersects only those arrows $r'_q$, where $j\le q\le n$. Let $I_j$ be the set of intersection points on $\Pi(j)$. By construction $\Psi_H$ satisfies the following sparseness property: \begin{description} \item[Sparseness of $\Psi_H$:] Any vertical (resp., horizontal) line on $\Pi(j)$ contains at most one point (resp., two points) from $I_j$. For every pair of points $p,q$ that belong to $I_j$ and lie on the same horizontal line, the corresponding vertices are adjacent in $H$, and belong to a distinct cycle with odd number of vertices in $H$. \end{description} \begin{figure} \caption{ Illustration for the proof of Theorem~\ref{4-reg-gu} \label{3-string} \end{figure} For each $j$ from $1$ to $n-1$, we realize the adjacency between $v_j$ and $v_{j+1}$ by extending $r'_j$ on $\Pi(j)$. Note that it suffices to use two bends to route $r'_j$ to touch $r'_{j+1}$, where one bend is to enter $\Pi(j)$ and the other is to reach $r'_{j+1}$. Figs.~\ref{3-string}(b)--(c) illustrate the extension of $r'_j$. We now claim that one can find such an extension of $r'_j$ without introducing any crossing. Assume that $r'_j$ and $r'_{j+1}$ intersect $\Pi(j)$ at points $p$ and $q$, respectively, and suppose for a contradiction that any 2-bend extension of $r'_j$ to touch $r'_{j+1}$ on $\Pi(j)$ would introduce an unnecessary adjacency. We now consider the following scenarios. \textbf{Case 1 ($p$ lies below and to the left of $q$):} We refer to the configuration of Fig.~\ref{3-string}(d). Let $R_{pq}$ be the rectangle determined by $p$ and $q$ on $\Pi(j)$. Let $r$ and $s$ be the top-left and bottom-right corners of $R_{pq}$. Assume that both $p,r,q$ and $p,s,q$ introduce unnecessary adjacencies, e.g., $p,r,q$ intersects some arrow $r'_a$ and $p,s,q$ intersects some arrow $r'_b$. By the sparseness property of $\Psi_H$, the arrows $r'_a$ and $r'_b$ cannot lie on $pr$ or $qs$, and hence, they must intersect the segments $rq$ and $ps$, respectively. Since the intersection point with $r'_a$ lies to the left of $q$, we have $a<j+1$. Moreover, since $v_a$ and $v_j$ are distinct vertices, we have $a<j$. Consequently, $v_a$ cannot intersect $\Pi(j)$, and we can extend $r'_j$ along $p,r,q$. \textbf{Case 2 ($p$ lies above and to the right of $q$):} This scenario is similar to Case 1, e.g., see Fig.~\ref{3-string}(e). Since the intersection point of $r'_a$ is to the left of $p$, $a<j$. Consequently, $v_a$ cannot intersect $\Pi(j)$, and we can extend $r'_j$ along $p,r,q$. \textbf{Case 3 (Otherwise):} Since $(v_j,v_{j+1})$ is a Hamiltonian edge, by the sparseness property of $\Psi_H$, $p$ and $q$ cannot lie on the same horizontal line on $\Pi(j)$. The remaining cases are as follows: (I) $p$ lies below and to the right of $q$, and (II) $p$ lies above and to the left of $q$. Figs.~\ref{3-string}(f)--(g) illustrate these two cases. By the sparseness property, $\{v_{j+1},v_a\}$ and $\{v_{j},v_b\}$ correspond to distinct cycles in $H$. Since the cycles of $H$ are placed diagonally along a line with slope $+1$, none of these two configurations can arise. Finally, it is straightforward to realize $(v_1,v_n)$ by routing $r'_n$ on $\Pi(n)$ using two bends, and then moving downward to touch $r_1$. Therefore, the string representing $v_n$ is a $B_4$-string. \end{proof} \section{Details of Section~\ref{pre}} \label{defn} \textbf{Staircase representation:} Consider first the case when $k$ is even. 
We first draw an $xy$-monotone orthogonal polyline $\mathcal{O}$ with $k-2$ unit-length segments $l_2,\ldots,l_{k-1}$, where the segments $l_2,l_4,\ldots$ are vertical, and $l_3,l_5,\ldots$ are horizontal. We then join the end points of $\mathcal{O}$ using a horizontal line segment $l_1$ and a vertical line segment $l_k$, as shown in Fig.~\ref{even2}(a). We order the edges of the resulting orthogonal polygon $O$ in counterclockwise order, and assign $w_i$ the segment $l_i$, where $1\le i\le k$. We then extend the horizontal segments, except $l_1$, one-half unit to the right, and the vertical segments, except $l_k$, one-half unit upward. Finally, we extend the segments corresponding to $l_1$ and $l_k$ one-half unit to the left and downward, respectively. Observe that the extended segments do not introduce any crossing. Consequently, each vertex $w_i$ can now be represented as an axis-aligned arrow $r_i$, where the extended ends of the segments correspond to the origins, e.g., see Fig.~\ref{even2}(b). Furthermore, the origins of $r_1,\ldots,r_k$ are in \emph{general position}, i.e., no two of them have the same $x$- or $y$-coordinate. \begin{figure} \caption{Construction of a staircase representation. } \label{even2} \end{figure} If $k$ is odd, then we take a staircase representation of a cycle of $k-1$ vertices, and then subdivide the topmost horizontal segment to create a new arrow, as illustrated in Fig.~\ref{even2}(c). \section{Details of Section~\ref{sec:lcontact}} \label{case2b} \textit{Case 2a ($\mathcal{C}_i$ has two or more vertices):} If $L_{v_h}$ contains only a peak $o$ on $\Pi(\alpha(v_t))$, then we represent $L_{v_t}$ using a rightward arrow $r$ that starts at $init(v_t)$ and stops at some point $o'$ to the right of the $Y$-line through $o$. We then construct a drawing $D$ of $\mathcal{C}_i$ and $blue(\mathcal{C}_i)$ on $\Pi(\alpha(v_t))$ mimicking the construction of $\Psi'_1$. However, this is simpler since the red parent of $v_t$ does not belong to $\mathcal{C}_i$. We ensure that $L_{v_j}$ has a $Y$-segment, and thus the hand of $L_{v_{j+q}}$ on $\Pi(\alpha(v_t))$ may be horizontal or vertical. It is now straightforward to place $D$ (possibly after a vertical reflection) such that $L_{v_j}$ touches $L_{v_h}$ at $o$ and $L_{v_t}$ touches $L_{v_{j+q}}$ at $o'$. Figs.~\ref{GammaI}(c)--(f) show all distinct scenarios. Assume now that $L_{v_h}$ lies entirely on $\Pi(\alpha(v_t))$. By Invariant $I_3$ and the choice of $\alpha(\cdot)$ values, $L_{v_h}$ is non-degenerate and (the extension of) its horizontal hand intersects the vertical line through $init(v_t)$ in $\Psi'_{i-1}$. We now construct a drawing $D$ of $\mathcal{C}_i$ and $blue(\mathcal{C}_i)$ on $\Pi(\alpha(v_t))$ mimicking the construction of $\Psi'_1$, but ensuring that $L_{v_{j}}$ contains a $Y$-segment on $\Pi(\alpha(v_t))$. Consequently, the hand $\ell$ of $L_{v_{j+q}}$ that lies on $\Pi(\alpha(v_t))$ may be horizontal or vertical (depending on the number of vertices in $\mathcal{C}'_i$). If $\ell$ is vertical (resp., horizontal), then we represent $L_{v_t}$ as a horizontal (resp., vertical) arrow with origin at $init(v_t)$. It is now straightforward to place $D$ (possibly after a vertical reflection) on $\Pi(\alpha(v_t))$ such that $L_{v_j}$ touches the horizontal hand of $L_{v_h}$, and $L_{v_t}$ touches $L_{v_{j+q}}$ at $o'$, e.g., see Figs.~\ref{GammaI2}(a)--(b).
Note that in both cases we may need to perturb the drawing $D$ such that the $L$-shapes in $D$ do not create any unnecessary intersections, and $blue(\mathcal{C}'_i)$ and the degree-one blue vertices of $\Psi'_{i-1}$ lie in general position. \textit{Case 2b ($\mathcal{C}_i$ has only one vertex):} If $L_{v_h}$ contains only a peak $o$ on $\Pi(\alpha(v_t))$, then we construct $L_{v_j}$ as a horizontal line segment $ab$ that passes through $o$, and represent $L_{v_t}$ as a vertical arrow that touches $L_{v_j}$, e.g., see Fig.~\ref{GammaI2}(c). We then construct another hand of $L_{v_j}$ using a $(+Z)$-arrow $r$ that starts at $b$, and ends at $\Pi(\alpha(w))$, where $w$ is the blue child of $v_j$. We create the initiator of $w$ at the peak of $r$. Assume now that $L_{v_h}$ lies entirely on $\Pi(\alpha(v_t))$. By Invariant $I_3$, the $Y$-line through $init(v_t)$ intersects (the extension of) the horizontal hand of $L_{v_h}$. It is thus straightforward to construct $L_{v_j}$ as a $Y$-segment $ab$ that touches the horizontal hand of $L_{v_h}$ at $a$, and then construct $L_{v_t}$ as a horizontal arrow with origin $init(v_t)$ that touches $L_{v_j}$. We then construct another hand of $L_{v_j}$ using a $(+Z)$-arrow $r$ that starts at $b$, and ends at $\Pi(\alpha(w))$, where $w$ is the blue child of $v_j$. We create the initiator of $w$ at the peak of $r$, e.g., see Fig.~\ref{GammaI2}(d). In both cases we choose $b$ carefully to ensure the general position property of the initiators. \section{Details of Section~\ref{lb}} \label{applb} \subsubsection{Proof of Theorem~\ref{5-reg}: } Let $G$ be a 5-regular graph, and suppose for a contradiction that $G$ admits a string contact representation $D$. For each edge $(u,w)$, if the string of $u$ touches the string of $w$, then direct the edge from $u$ to $w$. Note that $G$ has exactly $5n/2$ edges. Since each edge is either unidirected or bidirected, the sum of all out-degrees is at least $5n/2$. Therefore, there exists a vertex $v$ with out-degree 3 or more. However, by definition, no three strings in $D$ can meet at a point. Therefore, the out-degree of $v$ cannot be larger than two, a contradiction. \subsubsection{Proof of Theorem~\ref{4-reg}: } Suppose for a contradiction that $K_5$ admits an $L$-contact representation, and let $D$ be such a representation of $K_5$. Let $v_1,\ldots, v_5$ be the vertices of $K_5$. Observe that any axis-aligned $L$-shape must entirely lie on one of the three types of plane: $\Pi_{xy}$, $\Pi_{yz}$, and $\Pi_{xz}$. Since there are five $L$-shapes in $D$, by pigeonhole principle, the plane types for at least two $L$-shapes must be the same. Without loss of generality assume that $L_{v_1}$ and $L_{v_2}$ both lie on $\Pi_{xy}$. Since $v_1$ and $v_2$ are adjacent, the planes of $L_{v_1}$ and $L_{v_2}$ cannot be distinct. Therefore, without loss of generality assume that they coincide with $\Pi(0)$. Since $v_i$, where $3\le i\le 5$, is adjacent to both $v_1$ and $v_2$, $L_{v_i}$ must share a point $a_i$ with $L_{v_1}$ and a point $b_i$ with $L_{v_2}$. Since no three strings meet at a point in $D$, the points $a_i$ and $b_i$ are distinct. The rest of the proof claims that the polygonal path $P_i$ of $L_i$ that starts at $a_i$ and ends at $b_i$, lies entirely on $\Pi(0)$, and the common point of $L_{v_i}$ and $L_{v_j}$, where $3\le i<j\le 5$, lies on $\Pi(0)$. These properties can be used to argue that $D$ is a string contact representation of $K_5$ on $\Pi(0)$, which contradicts that $K_5$ is a non-planar graph. 
We now claim that the polygonal path $P_i$ of $L_{v_i}$ that starts at $a_i$ and ends at $b_i$ lies entirely on $\Pi(0)$. Since $a_i$ and $b_i$ both lie on $\Pi(0)$, the claim is straightforward to verify when $P_i$ is a straight line segment. Therefore, assume that $P_i$ contains the joint $o_i$ of $L_{v_i}$. In this scenario, both the segments $a_io_i$ and $b_io_i$ are perpendicular to $\Pi(0)$. Since $P_i$ does not contain any line segment other than $a_io_i$ and $b_io_i$, $a_i$ must coincide with $b_i$, a contradiction. Observe now that at least one hand of $L_{v_i}$ lies on $\Pi(0)$. Therefore, the other hand of $L_{v_i}$ either lies on $\Pi(0)$ or is perpendicular to $\Pi(0)$. Therefore, the common point of $L_{v_i}$ and $L_{v_j}$, where $3\le i<j\le 5$, must lie on $\Pi(0)$. Consequently, $D$ is a string contact representation of $K_5$ on $\Pi(0)$, which contradicts that $K_5$ is a non-planar graph. \subsubsection{Proof of Theorem~\ref{3-reg}: } Suppose for a contradiction that $K_{3,3}$ admits a segment contact representation, and let $D$ be such a representation of $K_{3,3}$. Let $\{v_1,v_2,v_3\}$ and $\{w_1,w_2,w_3\}$ be the two vertex sets corresponding to $K_{3,3}$. Observe now that any axis-aligned polygon of four line segments must lie on one of the following three types of plane: $\Pi_{xy}$, $\Pi_{yz}$, and $\Pi_{xz}$. Therefore, without loss of generality we may assume that the segments corresponding to the cycle $v_1,w_1,v_2,w_2$ lie entirely on $\Pi(0)$. Since the segments corresponding to $v_1,w_1,v_2,w_2$ bound a non-degenerate region of $\Pi(0)$, the segments corresponding to $v_1,w_1$ cannot be collinear, and hence they determine the plane $\Pi(0)$. Consequently, the cycle $v_1,w_1,v_3,w_3$ forces the segments of $v_3$ and $w_3$ to lie on $\Pi(0)$. Consequently, $D$ must be a string contact representation of $K_{3,3}$ on $\Pi(0)$, which contradicts that $K_{3,3}$ is a non-planar graph. \begin{figure} \caption{Transforming a graph orientation into a string contact representation.} \label{general} \end{figure} \section{From Graph Orientation to String Contact Representations} \label{app:general} Given an edge-oriented graph (each edge is unidirected), where every vertex has outdegree at most two, we can transform it into a string contact representation using a constant number of bends, as follows: Represent the vertices by parallel boxes, as illustrated in Fig.~\ref{general}(a). For each edge $(a,b)$, draw a polygonal path between the corresponding boxes $a$ and $b$ by following the directions Up, Right, Down, and ensure that the edges lie on distinct planes parallel to $\Pi_{xy}$. The general setup for each box is illustrated in Fig.~\ref{general}(a). One can now construct the string corresponding to a box by connecting the outgoing edges by a polygonal path with a constant number of bends, as shown in Fig.~\ref{general}(b). \section{A Walkthrough Example} \label{app:last} Figs.~\ref{test}--\ref{summary} illustrate a walkthrough example according to the incremental construction described in Section~\ref{sec:lcontact}. \begin{figure} \caption{(a) A 3-regular Hamiltonian bipartite graph $G$. (b) A schematic representation of an $L$-contact representation of $G$ minus one edge (computed by our algorithm). } \label{test} \end{figure} \begin{figure} \caption{(a) A 3-regular Hamiltonian bipartite graph $G$ minus one Hamiltonian edge. (b) Preliminary setup. (c) Incremental Construction. } \label{summary} \end{figure} \end{document}
\begin{document} \title{Definability in the embeddability ordering of finite directed graphs, II} \author{\'{A}d\'{a}m Kunos} \maketitle \begin{abstract} We deal with first-order definability in the embeddability ordering $(\mathcal{D}; \leq)$ of finite directed graphs. A directed graph $G\in \mathcal{D}$ is said to be embeddable into $G' \in \mathcal{D}$ if there exists an injective graph homomorphism $\varphi \colon G \to G'$. We describe the first-order definable relations of $(\mathcal{D}; \leq)$ using the first-order language of an enriched small category of digraphs. The description yields the main result of the author's paper \cite{Kunos2015} as a corollary and a lot more. For example, the set of weakly connected digraphs turns out to be first-order definable in $(\mathcal{D}; \leq)$. Moreover, if we allow the use of a constant, a particular digraph $A$, in our first-order formulas, then the full second-order language of digraphs becomes available. \end{abstract} \section{Introduction} \blfootnote{In the beginning, this research was supported by T\'{A}MOP 4.2.4. A/2-11-1-2012-0001 ``National Excellence Program---Elaborating and operating an inland student and researcher personal support system''. This project was subsidized by the European Union and co-financed by the European Social Fund. Later, the author was supported by OTKA grant K115518.} In 2009--2010 J. Je\v{z}ek and R. McKenzie published a series of papers \cite{Jezek2009_1, Jezek2010, Jezek2009_3, Jezek2009_4} in which they examined (among other things) first-order definability in the substructure orderings of finite mathematical structures with a given type and determined the automorphism group of these orderings. They considered finite semilattices \cite{Jezek2009_1}, ordered sets \cite{Jezek2010}, distributive lattices \cite{Jezek2009_3} and lattices \cite{Jezek2009_4}. Similar investigations \cite{Kunos2015, Wires2016, Ramanujam2016, Thinniyam2017} have emerged since. The current paper is one such investigation, a continuation of the author's paper \cite{Kunos2015}, which dealt with the embeddability ordering of finite directed graphs. That whole paper centers around one main theorem. In the current paper we extend this theorem significantly. Let us consider a nonempty set $V$ and a binary relation $E\subseteq V^2$. We call the pair $G=(V,E)$ a {\it directed graph} or just {\it digraph}. The elements of $V(=V(G))$\label{qwqepwrpdg} and $E(=E(G))$\label{qweoweru3} are called the {\it vertices} and {\it edges} of $G$, respectively. The directed graph $G^T:=(V,E^{-1})$\label{aqweiqportw8} is called the {\it transpose} of $G$, where $E^{-1}$ denotes the inverse relation of $E$. A digraph $G$ is said to be embeddable into $G'$, and we write $G\leq G'$, if there exists an injective homomorphism $\varphi : G \to G'$. Let $\mathcal{D}$\label{xcvshgw8} denote the set of isomorphism types of finite digraphs. It is easy to see that $\leq$ is a partial order on $\mathcal{D}$. Let $(\mathcal{A},\leq)$ be an arbitrary poset. An $n$-ary relation $R$ is said to be (first-order) definable in $(\mathcal{A},\leq)$ if there exists a first-order formula $\Psi(x_1,x_2,\dots, x_n)$ with free variables $x_1,x_2,\dots, x_n$ in the language of partially ordered sets such that for any $a_1,a_2,\dots, a_n\in \mathcal{A}$, $\Psi(a_1,a_2,\dots, a_n)$ holds in $(\mathcal{A},\leq)$ if and only if $(a_1,a_2,\dots, a_n)\in R$. A subset of $\mathcal{A}$ is definable if it is definable as a unary relation.
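To illustrate this definition with a standard example (included here only for orientation; it is not needed later), the covering relation of an arbitrary poset $(\mathcal{A},\leq)$ is first-order definable: the formula
\begin{displaymath}
\Psi(x_1,x_2)\;:=\;x_1\leq x_2 \;\wedge\; x_1\neq x_2 \;\wedge\; \forall z\,\big((x_1\leq z \wedge z\leq x_2)\rightarrow(z=x_1 \vee z=x_2)\big)
\end{displaymath}
holds for a pair $(a_1,a_2)$ exactly when $a_2$ covers $a_1$, that is, when $a_1<a_2$ and no element lies strictly between them.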
An element $a\in \mathcal{A}$ is said to be definable if the set $\{a\}$ is definable. In the poset $(\mathcal{D},\leq)$ let $G\prec G'$ denote that $G'$ covers $G$. Obviously $\prec$ is a definable relation in $(\mathcal{D},\leq)$. In \cite{Kunos2015}, the main result is \begin{theor}[Theorem 2.38 \cite{Kunos2015}]\label{vxmknfsj} In the poset $(\mathcal{D};\leq)$, the set $\{G,G^T\}$ is first-order definable for every finite digraph $G\in \mathcal{D}$. \end{theor} This theorem is the best possible in the following sense. Observe that $G\mapsto G^T$ is an automorphism of $(\mathcal{D};\leq)$. This implies that the digraphs $G$ and $G^T$ cannot be distinguished with first-order formulas of $(\mathcal{D};\leq)$. What does Theorem \ref{vxmknfsj} tell us about first-order definability in $(\mathcal{D};\leq)$? It tells us the following \begin{kor} A finite set $H$ of digraphs is definable if and only if $$\forall G\in\mathcal{D}: \;\; G\in H \Rightarrow G^T \in H.$$ \end{kor} So the first-order definability of finite subsets in $(\mathcal{D};\leq)$ is settled. What about infinite subsets? One might ask whether the set of weakly connected digraphs is first-order definable in $(\mathcal{D};\leq)$, as a standard model-theoretic argument shows that it is not definable in the first-order language of digraphs. The answer to this question appears to be out of reach with the result of \cite{Kunos2015}. In this paper we build the apparatus to handle some such questions. In doing so we follow a path laid by Je\v{z}ek and McKenzie in \cite{Jezek2010}. In particular, the set of weakly connected digraphs turns out to be definable. Our method is the following. We add a constant $A$---a particular digraph that is not isomorphic to its transpose---to the structure $(\mathcal{D};\leq)$ to get $(\mathcal{D};\leq, A)$. We define an enriched small category $\mathcal{CD}'$ and show that its first-order language is quite strong: it contains the full second-order language of digraphs. Finally, we show that first-order definability in $\mathcal{CD}'$ (after factoring by isomorphism) is equivalent to first-order definability in $(\mathcal{D};\leq, A)$. This result gives Theorem \ref{vxmknfsj} as an easy corollary and a lot more. The paper offers two approaches for the proof of the main theorem. Either we use the result of \cite{Kunos2015}, Theorem \ref{vxmknfsj}, in which case we do not get it as a corollary but obtain a more elegant proof of our main result; or we do not use it, in which case we get it as a corollary but the proof of the main result becomes somewhat more tiresome. Section \ref{jelolesj} contains a table of notations to help the reader find the definitions of the many notations used in the paper, which might otherwise get frustrating. \section{Precise formulation of the main theorem and some display of its power}\label{pakdhtgft} Once more, we emphasize that the approach we present in this section is from Je\v{z}ek and McKenzie \cite{Jezek2010}. \\ Let $[n]$ denote the set $\{1,2,\dots,n\}$ for all $n\in\mathbb{N}$. Let us define the small category $\mathcal{CD}$\label{qoiue2833} of finite digraphs in the following way. The set $\text{ob}(\mathcal{CD})$\label{dfjhawkd} of objects consists of digraphs on $[n]$ for some $n\in\mathbb{N}$. For all $A,B\in \text{ob}(\mathcal{CD})$ let $\mathrm{hom}(A,B)$\label{1382721} consist of triples $f=(A,\alpha,B)$\label{yxcmmnbcvn16235} where $\alpha: A\to B$ is a homomorphism, meaning that $(x,y)\in E(A)$ implies $(\alpha(x),\alpha(y))\in E(B)$. Composition of morphisms is defined in the following way.
For arbitrary objects $A,B,C\in \text{ob}(\mathcal{CD})$, if $f=(A,\alpha,B)$ and $g=(B,\beta,C)$, then $$fg=(A,\beta\circ\alpha, C).$$ It is easy to see that $f\in \mathrm{hom}(A,B)$ is injective if and only if for all $X\in \text{ob}(\mathcal{CD})$ $$ \forall g,h\in \mathrm{hom}(X,A): \;\; gf=hf\Leftrightarrow g=h.$$ Similarly, $f\in \mathrm{hom}(A,B)$ is surjective if and only if for all $X\in \text{ob}(\mathcal{CD})$ $$ \forall g,h\in \mathrm{hom}(B,X): \;\; fg=fh\Leftrightarrow g=h.$$ These are first-order definitions in the (first-order) language of categories, hence in $\mathcal{CD}$, isomorphism and embeddability are first-order definable. This implies that all first-order definable relations in $(\mathcal{D},\leq)$ are definable in $\mathcal{CD}$ too. To put it more precisely, if $\rho\subseteq \mathcal{D}^n$ is an $n$-ary relation definable in $(\mathcal{D};\leq)$ then $$\{(A_1,\dots,A_n): A_i\in \text{ob}(\mathcal{CD}), (\bar{A_1},\dots,\bar{A_n})\in \rho\}$$ is definable in $\mathcal{CD}$, where $\bar{A_i}$ denotes the isomorphism type of $A_i$. \begin{definition}\label{xmcvnbsjdiqawe} Let us introduce some objects and morphisms: \begin{displaymath} \begin{split} {\bf E}_1\in \text{ob}(\mathcal{CD})&: \;\; V({\bf E}_1)=[1], \; E({\bf E}_1)=\emptyset, \\ {\bf I}_2\in \text{ob}(\mathcal{CD})&: \;\; V({\bf I}_2)=[2], \; E({\bf I}_2)=\{(1,2)\}, \\ {\bf f}_1\in \mathrm{hom}({\bf E}_1, {\bf I}_2)&: \;\; {\bf f}_1=({\bf E}_1,\{(1, 1)\} ,{\bf I}_2), \\ {\bf f}_2\in \mathrm{hom}({\bf E}_1, {\bf I}_2)&: \;\; {\bf f}_2=({\bf E}_1,\{(1, 2)\} ,{\bf I}_2). \end{split} \end{displaymath} Adding these four constants to $\mathcal{CD}$, we get $\mathcal{CD}'$. \end{definition} In the first-order language of $(\mathcal{D},\leq)$, formulas can only operate with the facts of whether digraphs as a whole are embeddable into each other or not; the inner structure of digraphs is (officially) unavailable. In the first-order language of $\mathcal{CD'}$, though, we can not only capture embeddability (as we have seen above) but also the first-order language of digraphs. The latter is far from trivial, but the following argument explains it. For any $X\in \text{ob}(\mathcal{CD})$ the set of morphisms $\mathrm{hom}({\bf E}_1, X)$ is naturally bijective with the set of elements of $X$. Observe that if $f,g\in \mathrm{hom}({\bf E}_1, X)$ are $$f=({\bf E}_1,\{(1, x)\}, X), \;\; g=({\bf E}_1, \{(1, y)\}, X)\;\; (x,y\in V(X)),$$ then $(x,y)\in E(X)$ holds if and only if \begin{equation}\label{aoosudzhac} \exists h\in \mathrm{hom}({\bf I}_2, X): \;\; {\bf f}_1h=f, \; {\bf f}_2h=g. \end{equation} To put it briefly, $X\cong CD_X$, where $$V(CD_X)=\mathrm{hom}({\bf E}_1, X), \;\; E(CD_X)=\{(f,g): f,g\in \mathrm{hom}({\bf E}_1, X), \; (\ref{aoosudzhac}) \text{ holds}\}.$$ This shows how we can reach the inner structure of digraphs with the first-order language of $\mathcal{CD'}$. So the first-order language of $\mathcal{CD'}$ is much richer than that of $(\mathcal{D},\leq)$. We can go even further. One can show that the first-order language of $\mathcal{CD'}$ can express the full second-order language of digraphs.
To formulate this more precisely, the first-order language of $\mathcal{CD'}$ can express a language containing not only variables ranging over objects and morphisms of $\mathcal{CD'}$ but also \begin{enumerate}[(I)] \item quantifiable variables ranging over \begin{enumerate}[(a)] \item elements of any object, \item arbitrary subsets of objects, \item arbitrary functions between two objects, \item\label{uajjshz} arbitrary subsets of products of finitely many objects (heterogeneous relations), \end{enumerate} \item dependent variables giving the universe and the edge relation of an object, \item the apparatus to denote \begin{enumerate}[(a)] \item the edge relation between elements, \item the application of a function to an element, \item\label{jhatztsvx} the membership of a tuple of elements in a relation. \end{enumerate} \end{enumerate} For example, let us see how (Ib), (\ref{uajjshz}) and (\ref{jhatztsvx}) can be ``modelled'' in $\mathcal{CD'}$. \\ Let us start with (Ib). Let $E_n\in \text{ob}(\mathcal{CD}')$ denote the empty digraph on $[n]$. The set $$E=\{E_n\in \text{ob}(\mathcal{CD}'): n\in \mathbb{N}\}$$ is easily definable in $\mathcal{CD}'$. Let $A\in \text{ob}(\mathcal{CD}')$ be an arbitrary object and $S\subseteq A$ a subset of it. Let $\gamma$ be a bijection $V(E_{|S|})\to S$. Let us define the morphism $$p: E_{|S|} \to A, \;\; p(x)=\gamma(x) \;\; (x\in V(E_{|S|})).$$ It is easy to see that we represented the subset $S$ with the pair $(E_{|S|}, p)$. For example, a universal quantification over the subsets of $A$ would look like $$(\forall E_{|S|}\in E)(\forall p\in \mathrm{hom}(E_{|S|}, A)).$$ Next, let us consider (\ref{uajjshz}). Let $A_1,\dots, A_n\in \text{ob}(\mathcal{CD}')$ be arbitrary objects and let $R\subseteq A_1\times\dots \times A_n$ be nonempty. Let $\pi_i(r)$ be the $i$th projection of $r\in R$. The functions $\pi_1,\dots,\pi_n$ ``determine'' the relation $R$ in the following sense: $$(a_1,\dots,a_n)\in R \;\; \Leftrightarrow \;\; \exists r\in R: \pi_i(r)=a_i \;\;(i=1,\dots,n). $$ We will represent the functions $\pi_i$ in the following way. Let $\gamma: V(E_{|R|})\to R$ be a bijection. Let us define the morphisms $p_i$: $$p_i: E_{|R|}\to A_i, \;\; p_i(x)=\pi_i(\gamma(x)) \;\; (x\in V(E_{|R|})).$$ It is easy to see that we represented the relation $R$ uniquely with $(E_{|R|},p_1,\dots,p_n)$. So an example of an existential quantification of type (\ref{uajjshz}) is $$(\exists E_{|R|}\in E)(\exists p_1\in \mathrm{hom}(E_{|R|}, A_1))\dots (\exists p_n\in \mathrm{hom}(E_{|R|}, A_n)).$$ For (\ref{jhatztsvx}), an element of $A_1\times\dots \times A_n$ is represented with an element of \begin{equation}\label{xckfgsd} \mathrm{hom}(E_1, A_1) \times \dots \times \mathrm{hom}(E_1, A_n) \end{equation} and if $(E_{|R|},p_1,\dots,p_n)$ represents $R\subseteq A_1\times\dots \times A_n$ and $(f_1, \dots, f_n)$, an element of \eqref{xckfgsd}, represents $x \in A_1\times\dots \times A_n$, then $x\in R$ can be expressed as $$ (\exists f \in \mathrm{hom}(E_1, E_{|R|}))(fp_1=f_1 \mathrel{\wedge} \dots \mathrel{\wedge} fp_n=f_n).$$ Let ${\bf A}\in \text{ob}(\mathcal{CD})$\label{xcvnsbndasiiwq} denote the digraph with $V({\bf A})=[3]$ and $E({\bf A})=\{(1,3),(2,3)\}$. Now from the fact that in $\mathcal{CD}'$ isomorphism and embeddability are definable and from Theorem \ref{vxmknfsj}, the set $$\{X\in \text{ob}(\mathcal{CD}): X\cong {\bf A}\text{ or } X\cong {\bf A}^T\}$$ is definable in $\mathcal{CD}'$.
From this set, the formula $$(\exists x\in X)(\forall y\in X)(y\neq x \; \Rightarrow \; (y,x)\in E(X))$$ chooses the set $$\{X\in \text{ob}(\mathcal{CD}): X\cong {\bf A}\}.$$ This shows that the first-order language of $\mathcal{CD'}$ is stronger than the first-order language of $(\mathcal{D},\leq)$ because in the latter, the isomorphism type of ${\bf A}$ is not definable as it is not isomorphic to its transpose.
\begin{definition}\label{xcmvnyxjdaskdqiwo} By adding the isomorphism type of ${\bf A}$ as a constant to $(\mathcal{D},\leq)$ we get $(\mathcal{D};\leq, A)$. Let us denote this structure by $\mathcal{D}'$. \end{definition}
We say that the relation $\rho\subseteq (\text{ob}(\mathcal{CD}))^n$ is {\it isomorphism invariant} if whenever $A_i\cong B_i$ for $A_i, B_i\in \text{ob}(\mathcal{CD})$ ($1\leq i\leq n$), then $$(A_1,\dots,A_n)\in \rho \; \Leftrightarrow \; (B_1,\dots,B_n)\in \rho.$$ The set of isomorphism invariant relations of $\text{ob}(\mathcal{CD})$ is naturally bijective with the relations of $\mathcal{D}$. The main result of the paper is the following
\begin{theor}\label{9ahgfa} A relation is first-order definable in $\mathcal{D}'$ if and only if the corresponding isomorphism invariant relation of $\mathcal{CD}'$ is first-order definable in $\mathcal{CD}'$. \end{theor}
We have already seen the proof of the easy (``only if'') direction of this theorem. We prove the difficult direction in Section \ref{aspioerfdsc} by creating a model of $\mathcal{CD}'$ in $\mathcal{D}'$.
\begin{definition} A relation $R\subseteq \text{ob}(\mathcal{CD})^n$ is called {\it transposition invariant} if it is isomorphism invariant and $(G_1, \dots, G_n)\in R$ implies $(G_1^T,\dots, G_n^T)\in R.$ \end{definition}
\begin{kor}\label{cvbhjeuiw} A relation is first-order definable in $\mathcal{D}$ if and only if the corresponding isomorphism invariant relation of $\mathcal{CD}'$ is transposition invariant and first-order definable in $\mathcal{CD}'$. \end{kor}
\begin{proof} The ``only if'' direction is obvious. For the ``if'' direction, let $R\subseteq \mathcal{D}^n$ be a relation that corresponds to a transposition invariant and first-order definable relation of $\mathcal{CD}'$. We need to show that $R$ is first-order definable in $\mathcal{D}$. We know, by Theorem \ref{9ahgfa}, that it is first-order definable in $\mathcal{D}'$. Let $\Phi(x_1, \dots, x_n)$ be a formula that defines it. Let $\Phi'(y,x_1, \dots, x_n)$ denote the formula that we get from $\Phi(x_1, \dots, x_n)$ by replacing the constant $A$ with $y$ at all of its occurrences. The set $\{A, A^T\}$ is easily defined (even without the usage of Theorem \ref{vxmknfsj}) in $\mathcal{D}$. Let us define $$\Phi''(x_1, \dots, x_n):=\exists y (y\in\{A, A^T\} \wedge \Phi'(y,x_1, \dots, x_n)).$$ We claim that for $S:=\{(x_1, \dots, x_n) :\Phi''(x_1, \dots, x_n)\}$, $S=R$ holds. $R\subseteq S$ is clear as $\Phi'(A,x_1, \dots, x_n)$ defines $R$. Let $s\in S$. If the existential quantifier in $\Phi''$ can be witnessed by $y=A$ for this particular tuple $s$, then $s\in R$ is obvious. If it is witnessed by $y=A^T$, then, since transposition is an automorphism of $(\mathcal{D},\leq)$, the quantifier can be witnessed by $y=A$ for the componentwise transpose $s^T$, and this yields $s^T\in R$. Finally, the transposition invariance of $R$ implies $s\in R$. \end{proof}
We have already seen that in the first-order language of $\mathcal{CD}'$ we have access to the first-order language of digraphs. Let $G=(V,E)$ be an arbitrary fixed digraph with $V=\{v_1, \dots, v_n\}$.
Then the formula \begin{multline}\exists x_1 \dots \exists x_n \forall y \bigg( \bigwedge_{1\leq i \neq j \leq n }x_i \neq x_j \;\; \wedge \bigvee_{i=1}^n y=x_i \;\; \wedge \\ \nonumber \bigwedge_{(v_i,v_j)\in E}(x_i, x_j)\in E \;\; \wedge \bigwedge_{(v_i,v_j)\notin E} (x_i, x_j)\notin E \bigg) \end{multline} defines $G$ (up to isomorphism) in the first-order language of digraphs. This leads to the following corollary of Theorem \ref{9ahgfa}.
\begin{kor}\label{allsppvnbt} In $\mathcal{D}'$, all elements are first-order definable. \qed \end{kor}
\begin{kor}[=Theorem \ref{vxmknfsj}]\label{vncxbfui8} For all $G\in \mathcal{D}$, the set $\{G,G^T\}$ is first-order definable in $(\mathcal{D}, \leq)$. \end{kor}
\begin{proof} The proof goes by essentially the same argument as in the proof of Corollary \ref{cvbhjeuiw}. \end{proof}
The previous two statements truly earn the title ``corollary'' only if Theorem \ref{9ahgfa} is proven without using them; this will be one of the ways to approach the proof of Theorem \ref{9ahgfa}. In the second-order language of digraphs---which has turned out to be available in the first-order language of $\mathcal{CD'}$---the formula $$\exists H\subseteq G(\exists v,w\in G(v\in H \wedge w\notin H)\; \wedge \; \forall x, y \in G(x\to y \Rightarrow (x,y\in H \; \vee\; x,y \notin H))) $$ defines the set of not weakly connected digraphs. This means that the set of weakly connected digraphs is first-order definable in $\mathcal{D}$, by Corollary \ref{cvbhjeuiw}. That fact seems quite nontrivial to prove without Theorem \ref{9ahgfa}. This definability is surprising as the set of weakly connected digraphs is not definable in the first-order language of digraphs (by a standard model-theoretic argument).
\section{Some notations and definitions needed from \cite{Kunos2015}}
In this section we recall additional notations and definitions from \cite{Kunos2015} that will be needed.
\begin{definition}\label{349587897819} For digraphs $G, G'\in \mathcal{D}$, let $G\mathrel{\dot{\cup}} G'$ denote their disjoint union, as usual. \end{definition}
\begin{definition}\label{defE_n} Let ${E_n}$ $(n=1,2,\dots )$ denote the ``empty'' digraph with $n$ vertices and ${F_n}$ $(n=1,2,\dots )$ denote the ``full'' digraph with $n$ vertices: $$V(E_n)=\{v_1,v_2,\dots, v_n\},\;\; E(E_n)=\emptyset,$$ $$V(F_n)=\{v_1,v_2,\dots, v_n\}, \;\; E(F_n)=V(F_n)^2.$$ \end{definition}
\begin{definition}\label{defIOL} Let $I_n$, $O_n$, $L_n$ $(n=2,3,\dots)$ be the following (Fig.~\ref{O_n abra}) digraphs: \begin{displaymath} V(I_n)=V(O_n)=V(L_n)=\{v_1,v_2,\dots, v_n\}, \end{displaymath} \begin{displaymath} E(I_n)=\{(v_1,v_2),(v_2,v_3),\dots, (v_{n-1}, v_n) \}, \end{displaymath} \begin{displaymath} E(O_n)=\{(v_1,v_2),(v_2,v_3),\dots, (v_{n-1}, v_n), (v_n, v_1)\}, \end{displaymath} \begin{displaymath} E(L_n)=\{(v_1,v_1),(v_2,v_2),\dots, (v_n, v_n)\}. \end{displaymath} \end{definition}
\begin{figure} \caption{$I_5$, $O_6$, $L_6$} \label{O_n abra} \end{figure}
\begin{definition}\label{defOnto} Let $\mathcal{O}_{n}^{\to}$ denote the set of digraphs $X$ which we get by adding an edge that is not a loop to $O_n$. \end{definition}
Note that $X\succ O_n$ for all $X\in \mathcal{O}_{n}^{\to}$.
\begin{definition}\label{defLfuggveny} For $G\in \mathcal{D}$, let $L(G)$ denote the digraph that we get from $G$ by adding all loops possible. For $\mathcal{G}\subseteq \mathcal{D}$, let us define $\mathcal{L}(\mathcal{G})=\{L(G) : G\in \mathcal{G}\}$.
\end{definition} We would like to mention that this definition was slightly different in \cite{Kunos2015}: there we assumed that $G$ has no loops, which we do not assume here.
\begin{definition}\label{defMfuggveny} For $G\in \mathcal{D}$, let $M(G)$ be the digraph that we get from $G$ by leaving out all the loops. For $\mathcal{G}\subseteq \mathcal{D}$, let us define $\mathcal{M}(\mathcal{G}):=\{M(G):G\in \mathcal{G}\}.$ \end{definition}
\begin{definition}\label{defonl} Let $O_{n,L}$ be the following digraph: $V(O_{n,L})=\{v_1,v_2,\dots, v_n\}$, $E(O_{n,L})=E(O_n)\cup\{(v_1,v_1)\}$, meaning \begin{displaymath} E(O_{n,L})=\{(v_1,v_1),(v_1,v_2),(v_2,v_3),\dots, (v_{n-1}, v_n), (v_n, v_1)\}. \end{displaymath} \end{definition}
\begin{figure} \caption{$O_{3,L}$} \end{figure}
\begin{definition}\label{deffiugraf} Let $\male_n$ be the digraph with $n+1$ vertices $V(\male_n)=\{v_1,\dots , v_{n+1}\}$ for which $v_1$, $v_2$, \dots, $v_{n}$ constitute a circle $O_n$ and the only additional edge in $\male_n$ is $(v_n,v_{n+1})$. Let $\male_n^L$ be the previous digraph plus one loop: \begin{displaymath} E(\male_n^L)=E(\male_n )\cup \{(v_{n+1},v_{n+1})\}. \end{displaymath} \end{definition}
\begin{figure} \caption{$\male_6$ and $\male_6^L$} \end{figure}
\section{The proof of the main theorem (Theorem \ref{9ahgfa})}\label{aspioerfdsc}
In this section we prove the ``if'' direction of Theorem \ref{9ahgfa}. Here, if we just write ``definability'', we will always mean first-order definability in $\mathcal{D}'$.\\ In the proof presented in this section, the statement of Corollary \ref{allsppvnbt} turns out to be very useful as there are a number of specific digraphs whose definability is used throughout our proofs. There are two different approaches to the proof in this section, depending on our intentions regarding the main result of \cite{Kunos2015}, that is, Theorem \ref{vxmknfsj}. \\ EITHER \begin{itemize} \item We use Corollary \ref{allsppvnbt}, considering it as a consequence of Theorem \ref{vxmknfsj}, see the proof of \cite[Theorem 3.3]{Kunos2015}. In this case, we use the paper \cite{Kunos2015}, therefore its result cannot be considered a corollary of Theorem \ref{9ahgfa}. \end{itemize} OR \begin{itemize} \item We use the following lemma in place of the statement of Corollary \ref{allsppvnbt} in the special cases of those specific digraphs for which we would otherwise use the statement of Corollary~\ref{allsppvnbt}. This way Corollary \ref{allsppvnbt} and the main result of \cite{Kunos2015} can both be viewed as corollaries of Theorem \ref{9ahgfa}. \end{itemize} The latter approach requires the following lemma.
\begin{lemm}\label{vbncxmze} The following digraphs (of at most 9 elements) are first-order definable in $\mathcal{D'}$: $I_2$, $L_1$, $E_2$, $A$, $A^T$, and the digraphs under (\ref{hxgetrirow}), (\ref{ncbhzuippqw}), and (\ref{cnjusiuduh}). \end{lemm}
\begin{proof} The proof of this lemma must proceed without the use of Corollary \ref{allsppvnbt} (and Theorem \ref{vxmknfsj}). We only need to consider some (finite) levels at the ``bottom'' of the poset $\mathcal{D}$, so producing such a proof is only a matter of patience. The detailed proof would be technical and would bring nothing new to the table, so we skip it. \end{proof}
From now on, either of the two approaches above can be followed---the proof in the remainder of this section is the same in both cases. It is up to the reader which approach to favor and keep in mind while reading the rest of the paper.
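Although nothing in the formal development below depends on it, the finite verifications behind Lemma \ref{vbncxmze}, and the many concrete embeddability conditions used in the following proofs, can in principle be checked mechanically for small digraphs. The following brute-force sketch is purely illustrative and is not part of the paper's argument; the digraph encoding and all function names are our own, and we assume the (non-induced) notion of embeddability used throughout, i.e.\ $G\leq H$ means that $G$ is isomorphic to a subdigraph of $H$.
\begin{verbatim}
from itertools import permutations

# A digraph is a pair (n, edges): vertex set range(n), edges a set of pairs.
def leq(G, H):
    """Brute-force test of G <= H: is G isomorphic to a subdigraph of H?"""
    nG, eG = G
    nH, eH = H
    if nG > nH:
        return False
    # try every injective map from the vertices of G into those of H
    for img in permutations(range(nH), nG):
        if all((img[u], img[v]) in eH for (u, v) in eG):
            return True
    return False

def L(G):   # add all possible loops
    n, e = G
    return (n, e | {(v, v) for v in range(n)})

def M(G):   # leave out all loops
    n, e = G
    return (n, {(u, v) for (u, v) in e if u != v})

E = lambda n: (n, set())                                  # empty digraph E_n
I2 = (2, {(0, 1)})                                        # a single edge
L1 = (1, {(0, 0)})                                        # a single loop
O = lambda n: (n, {(i, (i + 1) % n) for i in range(n)})   # circle O_n

# E_3 contains neither I_2 nor L_1 (the characterization of the set of
# empty digraphs used in the first lemma of this section):
assert not leq(I2, E(3)) and not leq(L1, E(3))
# adding and then removing loops gives back a loop-free digraph:
assert M(L(O(4))) == O(4)
\end{verbatim}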
\begin{lemm} The sets $\mathcal{E}:=\{E_n : n\in \mathbb{N}\}$, $\mathcal{L}:=\{L_n:n\in \mathbb{N}\}$ and the relation $\{(L_n,E_n): n\in\mathbb{N}\}$ are definable. \end{lemm}
\begin{proof} $\mathcal{E}$ is the set of $X\in \mathcal{D}$ for which $I_2\nleq X$ and $L_1\nleq X$. $\mathcal{L}$ is the set of those digraphs $X\in \mathcal{D}$ for which there exists $E_i\in\mathcal{E}$ such that $X$ is maximal with the properties $E_i\leq X$, $E_{i+1}\nleq X$ and $I_2\nleq X$. ($E_{i+1}$ is easily defined using $E_i$ as it is the only cover of $E_i$ in the set $\mathcal{E}$.) \\ The relation consists of those pairs $(X,Y)\in\mathcal{D}^2$ for which $X\in \mathcal{L}$, and $Y$ is the maximal element of $\mathcal{E}$ that is embeddable into $X$. \end{proof}
The relations \begin{equation}\label{yiteja} \{(G,E_n):E_n\leq G, \;E_{n+1}\nleq G\},\; \text{ and} \end{equation} \begin{equation}\label{leitaf} \{(G,L_n):L_n\leq G, \;L_{n+1}\nleq G\} \end{equation} are obviously definable, from which the following relations are definable too:
\begin{definition}\label{xmvbyxhu8231} $$ \mathfrak{E}:=\{(G, K): \exists \; E_n\in\mathcal{E},\text{\;\;for which\;\;} (G,E_n),(K,E_n)\in (\ref{yiteja})\}, $$ $$ \mathfrak{L}:=\{(G,K): \exists \; L_n\in\mathcal{L},\text{\;\;for which\;\;} (G,L_n),(K,L_n)\in (\ref{leitaf})\}. $$ \end{definition}
\begin{definition}\label{123897834611} Let $\mathcal{O}$ denote the set of those digraphs that are disjoint unions of circles ($O_n$ for $n\geq 2$) of not necessarily different sizes. \end{definition}
\begin{lemm} $\mathcal{O}$ is definable. \end{lemm}
\begin{proof} Let $\mathcal{H}$ be the set consisting of those $X\in\mathcal{D}$ for which there exists $E_n\in\mathcal{E}$ such that $X$ is maximal with the properties \begin{equation}\label{aiuueelllbv} E_2\leq X, \; A\nleq X, \; A^T\nleq X, \; L_1\nleq X, \; \text{and}\;\; (X,E_n)\in\mathfrak{E}. \end{equation} We claim that \begin{equation}\label{karuervcy} \mathcal{H}=\mathcal{O}\cup \{G\mathrel{\dot{\cup}} E_1: G\in \mathcal{O}\}. \end{equation} Let $G\in \mathcal{H}$. It is easy to see that there can be at most one weakly connected component of $G$ that has only one vertex (and hence is isomorphic to $E_1$) as the opposite would contradict the maximality of $G$. The conditions $A\nleq X$ and $A^T\nleq X$ mean there is no vertex in $G$ that is either an ending point or a starting point of two distinct edges, respectively. Therefore every weakly connected component of $G$ is either a circle or a single vertex. Finally, $\mathcal{O}$ is the set of $X\in \mathcal{D}$ for which $X\in\mathcal{H}$ but there is no $Y\in \mathcal{H}$ such that $Y\prec X$. \end{proof}
\begin{lemm}\label{ppghhfjasd} The following sets and relations are definable: $$\mathcal{O}_{\cup}:=\{O_n:n\geq2\}, \;\;\; \{(O_n,E_n): n\geq 2\},$$ \begin{equation}\label{yxcnyxbmieuwqieo99} \{F_n:n\in\mathbb{N}\}, \;\;\; \{(F_n,E_n): n\in\mathbb{N}\}, \end{equation} \begin{equation}\label{adubvcipqw} \{(G,M(G)): G\in\mathcal{D}\}, \end{equation} $$\mathfrak{M}:=\{(X,Y): \exists Z ((X,Z),(Y,Z)\in (\ref{adubvcipqw}))\},$$ \begin{equation}\label{cvjnhdufasid} \{(G,L(G)): G\in\mathcal{D}\}. \end{equation} \end{lemm}
\begin{proof} $\mathcal{O}_{\cup}$ is the set of digraphs $X\in \mathcal{D}$ for which $X\in \mathcal{O}$ but there is no $Y\in \mathcal{O}$ such that $Y<X$. The corresponding relation $\{(O_n,E_n): n\geq 2\}$ is definable with \eqref{yiteja}.
\\ The set under (\ref{yxcnyxbmieuwqieo99}) consists of those $X\in\mathcal{D}$ for which $X<Y$ implies $(X,Y)\notin \mathfrak{E}$. The corresponding relation is defined as above.\\ (\ref{adubvcipqw}) is the set of pairs $(X,Y)\in \mathcal{D}^2$ for which $Y$ is maximal with the conditions $Y\leq X$ and $L_1\nleq Y$. \\ $\mathfrak{M}$ is already given by a first-order definition. \\ (\ref{cvjnhdufasid}) is the set of pairs $(X,Y)\in \mathcal{D}^2$ for which $Y$ is maximal with the property that $(X,Y)\in \mathfrak{M}$. \end{proof}
\begin{lemm}\label{238971875642} The following relation is definable: $$\mathfrak{E}_+:=\{(E_n,E_m,E_{n+m}): n,m\in\mathbb{N}\}.$$ \end{lemm}
\begin{proof} The relation $\mathfrak{E}_+$ consists of the triples $(X,Y,Z)\in\mathcal{D}^3$ that satisfy the following conditions. $X,Y\in\mathcal{E}$, meaning $X=E_i$ and $Y=E_j$ for some $i,j\in\mathbb{N}$. With Lemma \ref{ppghhfjasd}, $M(F_j)$ can be defined (with $E_j$). Let $F_j^*$ denote the digraph that we get from $M(F_j)$ by adding one loop. This is the only digraph $W\in\mathcal{D}$ for which $M(F_j)\prec W$ and $L_1\leq W$. Now the digraph $L_i \mathrel{\dot{\cup}} M(F_j)$ is definable as the digraph $Q\in \mathcal{D}$ which is minimal with the conditions $L_i\leq Q$, $M(F_j)\leq Q$ and $F_j^*\nleq Q$. Finally, $Z$ is the element of $\mathcal{E}$ for which $(Z,L_i \mathrel{\dot{\cup}} M(F_j))\in \mathfrak{E}$. \end{proof}
\begin{lemm} The following relation is definable: \begin{equation}\label{hatluevp} \{(E_n,E_m):1\leq n<m\leq 2n\}. \end{equation} \end{lemm}
\begin{proof} The relation is the set of those pairs $(X,Y)\in\mathcal{D}^2$ which satisfy the following conditions. For $X\in \mathcal{E}$, meaning $X=E_n$, we can define $E_{2n}$ to be the element from the set $\mathcal{E}$ for which $(E_n,E_n,E_{2n})\in\mathfrak{E}_+$. Finally, $Y\in \mathcal{E}$ and $E_n<Y\leq E_{2n}$. \end{proof}
\begin{lemm}\label{983489887772543214} Let $O_n^*:=O_{n+1} \mathrel{\dot{\cup}} O_{n+2} \mathrel{\dot{\cup}} \dots \mathrel{\dot{\cup}} O_{2n}$. The relation \begin{equation} \label{gmcuyalu} \{(O_n^*,E_n): n\in\mathbb{N}\} \end{equation} and the set $\{O_n^*:n\in\mathbb{N}\}$ are definable. \end{lemm}
\begin{proof} The relation (\ref{gmcuyalu}) can be defined as the set of pairs $(X,Y)\in \mathcal{D}^2$ satisfying the following conditions. $Y\in \mathcal{E}$, meaning $Y=E_n$. $X$ satisfies $X\in \mathcal{O}$ and is minimal with the following property: for all $O_i \in \mathcal{O}_{\cup}$ for which $(E_n,E_i)\in (\ref{hatluevp})$ holds, $O_i\leq X$. \\ With the relation (\ref{gmcuyalu}), the set is easily defined in the usual way. \end{proof}
\begin{lemm} The following relation is definable: $$\{(X,E_n): \; 2\leq n, \;\; X\in\mathcal{O}_n^{\to}\}. $$ \end{lemm}
\begin{proof} The relation consists of those pairs $(X,Y)\in\mathcal{D}^2$ that satisfy the following conditions. $Y\in \mathcal{E}$ and $E_2\leq Y$, meaning $Y=E_n$, where $2\leq n$. $O_n\prec X$, $L_1\nleq X$, and $(X,O_n)\in\mathfrak{E}$.
\end{proof} \begin{definition}\label{13541235} Let $1<i,j$ be integers and let us consider the circles $O_i, O_j$ and $E_1$ with $$ V(O_i)=\{v_1,\dots,v_i\},\;\; V(O_j)=\{v^1,\dots, v^j\}, \;\; V(E_1)=\{u\}.$$ Let $\male_{i,j}^L$ denote the following digraph: $$V(\male_{i,j}^L):=V(O_i)\cup V(O_j) \cup V(E_1), \;\; E(\male_{i,j}^L):=E(O_i)\cup E(O_j)\cup \{(v_1,u),(v^1,u),(u,u)\}.$$ \end{definition} \begin{lemm}\label{apoowjsbc} Let $$O_{n,L}^*:=O_{n+1,L} \mathrel{\dot{\cup}} O_{n+2,L} \mathrel{\dot{\cup}} \dots \mathrel{\dot{\cup}} O_{2n,L}.$$ The following sets and relations are definable: \begin{equation}\label{telgapnn} \{(\male_n,E_n): n\geq2\}, \;\;\; \{\male_n: n\geq2\}, \end{equation} \begin{equation}\label{bahhstz} \{(\male_n^L, E_n): n\geq2\}, \;\; \{\male_n^L: n\geq2\}, \end{equation} \begin{equation}\label{vlapkaaa} \{(\male_{i,j}^L,E_i,E_j):1<i,j,\; i\neq j\}, \;\; \{\male_{i,j}^L:1<i,j,\; i\neq j\}, \end{equation} \begin{equation}\label{98782133567236} \{O_{n,L}: n\geq2\}, \;\;\; \{(O_{n,L},E_n):n\geq2 \}, \end{equation} \begin{equation}\label{745673682987123} \{O_{n,L}^*:n\in\mathbb{N}\}, \;\;\; \{(O_{n,L}^*,E_n):n\in\mathbb{N} \}. \end{equation} \end{lemm} \begin{proof} The relation (\ref{telgapnn}) consists of those pairs $(X,Y)\in \mathcal{D}^2$ that satisfy the following. $Y\in \mathcal{E}$, meaning $Y=E_n$. There exists $Z\in \mathcal{D}$ for which $O_n\prec Z\prec X$, $(E_{n+1}, X)\in \mathfrak{E}$, and $L_1\nleq X$. There exists no $Z\in \mathcal{O}_n^{\to}$ for which $Z\leq X$. Finally, $A^T\leq X$. The corresponding set is easily defined using the relation we just defined. \\ The set under (\ref{98782133567236}) consists of those digraphs $X\in \mathcal{D}$ for which there exists $O_n\in\mathcal{O}_{\cup}$ such that $O_n\prec X$ and $L_1\leq X$. The corresponding relation is easily defined. \\ The relation (\ref{bahhstz}) consists of those pairs $(X,Y)\in \mathcal{D}^2$ that satisfy the following. $Y\in \mathcal{E}$, meaning $Y=E_n$. With the relation (\ref{telgapnn}), $\male_n$ is definable. Now $X$ is determined by the following properties: $\male_n\prec X$, $L_1\leq X$ and $O_{n,L}\nleq X$. The corresponding set is easily defined using the relation we just defined. \\ The relation (\ref{vlapkaaa}) consists of those triples $(X,Y,Z)\in\mathcal{D}^3$ that satisfy the following. $Y,Z\in\mathcal{E}$ such that $E_2\leq Y,Z$ and $Y\neq Z$, meaning $Y=E_i$, $Z=E_j$ for some $1<i,j$, $i\neq j$. Now $O_i \mathrel{\dot{\cup}} O_j$ is the digraph $W\in\mathcal{D}$ determined by $W\in \mathcal{O}$, $(W,E_{i+j})\in\mathfrak{E}$ and $O_i, O_j\leq W$. $O_i \mathrel{\dot{\cup}} O_j \mathrel{\dot{\cup}} E_1$ is the digraph $W$ determined by $O_i \mathrel{\dot{\cup}} O_j \prec W$ and $(O_i \mathrel{\dot{\cup}} O_j, W)\notin \mathfrak{E}$. Finally, $X$ is defined by: $$\exists W_1,W_2: O_i \mathrel{\dot{\cup}} O_j \mathrel{\dot{\cup}} E_1 \prec W_1\prec W_2\prec X, \;\; (X,O_i \mathrel{\dot{\cup}} O_j \mathrel{\dot{\cup}} E_1) \in \mathfrak{E},$$ $$ L_1\leq X, \;\; O_{i,L}\nleq X, \;\; O_{j,L}\nleq X, \;\; \male_i^L\leq X, \;\; \male_j^L\leq X.$$ The corresponding set is easily defined using the relation we just defined.\\ The relation $\{(O_{n,L}^*,E_n):n\in\mathbb{N} \}$ consists of those pairs $(X,Y)\in \mathcal{D}^2$ that satisfy the following conditions. $Y\in\mathcal{E}$, meaning $Y=E_n$. 
For $X$, the following properties hold: \begin{itemize} \item $O_n^*\leq X$ and $(X,O_n^*)\in\mathfrak{E}$, \item $O_i\leq O_n^* \Rightarrow ( \; Y\in \mathcal{O}_i^{\to} \Rightarrow Y\nleq X),$ \item $O_i\leq O_n^* \Rightarrow \male_i\nleq X,$ \item $O_i\leq O_n^* \Rightarrow O_{i,L}\leq X,$ \item $L_{n+1}\nleq X.$ \end{itemize} With the relation we just defined the corresponding set is easily defined. \end{proof}
\begin{definition}\label{756728767198470} Let us denote the vertices of $O_i$ and $O_j$ with $$ V(O_i)=\{v_1,\dots,v_i\},\;\; V(O_j)=\{v^1,\dots, v^j\}. $$ Let $O_{i\to j}$ denote the digraph $$V(O_{i\to j})=V(O_i)\cup V(O_j), \;\;\; E(O_{i\to j})=E(O_i)\cup E(O_j) \cup \{(v_1, v^1)\}.$$ \end{definition}
\begin{lemm} The following relation and set are definable: \begin{equation}\label{haywskz} \{(O_{i\to j}, E_i, E_j): i,j\geq2\}, \;\;\; \{O_{i\to j}: i,j\geq2\} \end{equation} \end{lemm}
\begin{proof} The relation (\ref{haywskz}) consists of those triples $(X,Y,Z)\in \mathcal{D}^3$ for which the following conditions hold. $Y,Z\in \mathcal{E}$ satisfy $E_2\leq Y,Z$, meaning $Y=E_i$ and $Z=E_j$, where $i,j\geq2$. $(X,E_{i+j})\in \mathfrak{E}$ and $W\prec X$, where $W\in \mathcal{O}$ is such that $(W,X)\in \mathfrak{E}$ and precisely $O_i$ and $O_j$ are embeddable into $W$ from the set $\mathcal{O}_{\cup}$ (here $i=j$ is possible). Finally, $\male_i\leq X$. The set is easily defined using the relation. \end{proof}
The proof of the crucial Lemma \ref{fiduhweori} requires a lot of nontrivial preparation which we begin here.
\begin{definition}\label{7358712837283791} Let $\mathcal{W}(G)$ denote the set of weakly connected components of $G$. \end{definition}
\begin{definition}\label{7538761239874} Let $$G \sqsubseteq G' \Leftrightarrow M(G)\leq M(G'), \; \text{ and }\; G \sqsubset G' \Leftrightarrow M(G)<M(G'),$$ $$G\equiv G' \; \Leftrightarrow \; M(G)=M(G') \; (\Leftrightarrow \; (G,G')\in\mathfrak{M}), \text{ that is } \equiv \;\; = \;\; \sqsubseteq \cap \sqsubseteq^{-1},$$ $$\equiv^C_G\;\; :=\{H\in \mathcal{W}(G): H\equiv C\}.$$ \end{definition}
Let us use the abbreviation {\it wcc}=``weakly connected component'' and {\it wccs} for the plural. $\sqsubseteq$ is obviously a quasiorder and $\equiv^C_G$ is the set of the wccs of $G$ that are equivalent to $C$ with respect to the equivalence $\equiv$. We say that a wcc $W$ of $G$ is {\it raised} by the embedding $\varphi: G\to G'$ if for the wcc $W'$ of $G'$ that it embeds into, i.e.\ $\varphi(W)\subseteq W'$, $W\sqsubset W'$ holds. In this case, we say that {\it$W$ is raised into $W'$}. A wcc $W$ of $G$ is either raised or embeds into $\equiv^W_{G'}$ (considered now as a subgraph of $G'$).
\begin{lemm}\label{nvbdfuzer} Let $G$ and $G'$ be digraphs having $n$ vertices such that $G\equiv G'$. Let $\varphi$ be an embedding $G\to G' \mathrel{\dot{\cup}} O_n^*$. Let us suppose that $W$ and $W'$ are wccs of $G$ and $G'$, respectively, such that $W$ is raised into $W'$. Then $W'\equiv I_m$ for some $m$, and consequently $W\equiv I_{m'}$ for some $m'< m$. \end{lemm}
\begin{proof} It suffices to show that $M(W')$ can be embedded into $O_n^*$, and that is what we are going to do. For an arbitrary wcc $V$ of $G$, it is clear that $\equiv^V_G$ and $\equiv^V_{G'}$ are either bijective under $\varphi$ (considered as subgraphs of $G$ and $G'$) or a wcc of $\equiv^V_G$ is raised.
The fact that $W$ is raised into $W'$ excludes $\equiv^{W'}_G$ and $\equiv^{W'}_{G'}$ from being bijective: these two subgraphs are $\equiv$--equivalent, so a bijection would only be possible if only the wccs of $\equiv^{W'}_G$ were mapped into $\equiv^{W'}_{G'}$. This means that a wcc $W_1$ of $\equiv^{W'}_G$ is raised into some wcc $W'_1$. If $W'_1$ is a wcc of $O_n^*$, then we are done as clearly $$W\sqsubset W_1 \sqsubset W'_1.$$ If this is not the case, then we repeat the same argument to get wccs $W_2 \in \equiv^{W'_1}_G$, and $W'_2$ such that $W_2$ is raised into $W'_2$. Again, if $W'_2$ is in $O_n^*$, then we are done as $$W\sqsubset W_1 \sqsubset W_2 \sqsubset W'_2.$$ If not, we repeat the argument. Since an infinite chain of wccs with strictly increasing size is impossible, we will reach our claim eventually. \end{proof}
We are in the middle of the preparation for Lemma \ref{fiduhweori}. The following Lemma \ref{weiurzwe} is the key, the most difficult part of the paper. Before the lemma we give an example to aid the understanding of its statement. We consider the digraphs $G$ and $G'\mathrel{\dot{\cup}} O_n^*$ and we are interested in whether the assumptions \begin{itemize} \item $G\leq G'\mathrel{\dot{\cup}} O_n^*$, \item $G\equiv G'$, and \item $G$ and $G'$ have the same number of loops \end{itemize} force $G=G'$. The answer is negative and a counterexample is shown in Figure \ref{sdfkjfhsufi}. To prove Lemma \ref{fiduhweori} we will need to ensure that $G=G'$ with a first-order definition. Observe the following. Let $\overline{G}$ denote the digraph we get from $G$ by adding a loop to the vertex labeled with $v$. Now it is impossible to add one loop to $G'$ such that we get a $\overline{G'}$ for which $\overline{G}\leq \overline{G'}\mathrel{\dot{\cup}} O_3^*$ holds. We just showed the following property: we can add some loops to $G$, getting $\overline{G}$, such that it is impossible to add the same number of loops to $G'$, getting $\overline{G'}$, such that $\overline{G}\leq \overline{G'}\mathrel{\dot{\cup}} O_3^*$ holds. If $G=G'$, this property obviously does not hold. Have we found a property that, together with the three above, ensures $G=G'$? The following lemma answers this question affirmatively.
\begin{figure} \caption{A $G$ and a corresponding $G'\mathrel{\dot{\cup}} O_3^*$} \label{sdfkjfhsufi} \end{figure}
\begin{lemm}\label{weiurzwe} Let $G,G'$ be digraphs with $n$ vertices and with the same number of loops. Let us suppose $G\equiv G'$ and $G\leq G'\mathrel{\dot{\cup}} O_n^*$. Then $G\neq G'$ holds if and only if we can add some loops to $G$ so that we get the digraph $\overline{G}$ such that it is impossible to add the same number of loops to $G'$, getting the digraph $\overline{G'}$, such that $\overline{G}\leq \overline{G'}\mathrel{\dot{\cup}} O_n^*$. In formulas this is: there exists a digraph $\overline{G}$ for which $$ G\leq \overline{G}, \;\; G\equiv \overline{G}$$ such that there exists no digraph $X$ for which $$ G'\mathrel{\dot{\cup}} O_n^* \leq X, \;\; X\equiv G'\mathrel{\dot{\cup}} O_n^*, \;\; X\leq L(G)\mathrel{\dot{\cup}} O_n^*, \;\; (\overline{G}, X)\in\mathfrak{L}.$$ \end{lemm}
\begin{proof} The direction $\Leftarrow$ (or rather its contrapositive) is obvious. Accordingly, let us suppose $G\neq G'$. Let $C$ denote the largest common subgraph consisting of whole wccs of both $G$ and $G'$. Let us introduce the so-called {\it reduced subgraphs}: \begin{equation}\label{cvxyxopp} G=C\mathrel{\dot{\cup}} G_R, \text{ and } G'=C\mathrel{\dot{\cup}} G'_R.
\end{equation} Observe that the digraphs $G_R$ and $G'_R$ are not empty and $G_R \equiv G'_R$. Let $W$ denote a $\sqsubseteq$-maximal wcc of $G_R$. We claim $W\equiv I_k$ for some $k>1$, and \begin{equation}\label{sdsjhjsdp} |=_G^{I_k}|-|=_{G'}^{I_k}|=|\equiv_{G_R}^{I_k}|, \end{equation} or equivalently, all wccs of $\equiv_{G_R}^{I_k}$ are loop-free. (Here, for a digraph $C$, $=^{C}_{G}$ denotes the set of those wccs of $G$ that are isomorphic to $C$.) Let $\varphi$ be an embedding $G\to G'\mathrel{\dot{\cup}} O_n^*$. Observe that $\varphi$ raises a wcc isomorphic to $W$ as $G'$ has fewer wccs isomorphic to $W$ by the definitions of the reduced subgraphs. Hence, by Lemma \ref{nvbdfuzer}, we have $W\equiv I_k$ for some $k\geq 1$. This is less than what we claimed; the exclusion of the case $k=1$ remains to be seen. It is easy to see from the definitions that \eqref{sdsjhjsdp} is equivalent to the fact that all wccs of $\equiv_{G_R}^{I_k}$ are loop-free. Let us suppose, for contradiction, that a wcc $V$ of $\equiv_{G_R}^{I_k}$ has a loop in it. Observe that the loops of $G$ and $G'$ are bijective under $\varphi$. Moreover, from the maximality of $W$, it is easy to see that for a wcc $U\sqsupset I_k$ of $G$, the loops of $\equiv^U_G$ are bijective with the loops of $\equiv^U_{G'}$ under $\varphi$. Consequently, none of the wccs of $=^V_G$ is raised as, by our previous argument, there is no component to be raised into. Hence $|=^V_G|\leq |=^V_{G'}|$, which clearly contradicts the fact that $V$ is an element of $\equiv_{G_R}^{I_k}$. We have proven \eqref{sdsjhjsdp}, only the exclusion of $k=1$ remains from our claim above. Let us suppose $k=1$ for contradiction. An arbitrary wcc $K$ of $G$ is either $K\equiv I_1$ or $K\sqsupset I_1$. In the latter case, as we have seen above, the loops of $\equiv_G^K$ are bijective with those of $\equiv_{G'}^K$. If $K\equiv I_1$, then we have shown above that the nonempty set $\equiv_{G_R}^{I_1}$ is loop-free, while, as a consequence, $\equiv_{G'_R}^{I_1}$, which has the same number of elements, contains loops. This means $G$ has more loops than $G'$ does, a contradiction. We have entirely proven our claim. Observe that, from our claim above, the nonempty set $\equiv_{G'_R}^{I_k}$ contains no loop-free elements. Take $W' \in \; \equiv_{G'_R}^{I_k}$. We make the digraph $\overline{W}$ from $I_k$ by adding 1 loop so that $\overline{W}\neq W'$. This is possible because either $W'$ has loops on all of its vertices, in which case (using $k>1$) adding the loop arbitrarily suffices, or there is a vertex of $W'$ that has no loop on it, in which case adding the loop to this vertex in $I_k$ does. Now we create the digraph $\overline{G}$ of the lemma by adding 1 loop to each loop-free wcc of $G$. To the wccs of $=^{I_k}_G$ we add 1 loop each such that they all become $\overline{W}$. To all other loop-free wccs of $G$, we add 1 loop each arbitrarily. To prove that $\overline{G}$ is sufficient, we suppose, for contradiction, that, by adding the same number of loops to $G'$, we can get some $\overline{G'}$ for which $\overline{G}\leq \overline{G'}\mathrel{\dot{\cup}} O_n^*$. Let $\phi$ be an embedding $\overline{G}\to \overline{G'}\mathrel{\dot{\cup}} O_n^*$. Since each wcc of $\overline{G}$ has a loop in it, $\phi$ is in fact an isomorphism $\phi: \overline{G}\to \overline{G'}$. Our final claim is \begin{equation}\label{sdouw8q4u} |=^{\overline{W}}_{\overline{G}}|>|=^{\overline{W}}_{\overline{G'}}|, \end{equation} which contradicts the existence of the isomorphism $\phi: \overline{G}\to \overline{G'}$. Once \eqref{sdouw8q4u} is proven, we are done.
Using the decomposition \eqref{cvxyxopp} and the knowledge of how $\overline{G}$ was created, the left side of \eqref{sdouw8q4u} is \begin{equation}\label{i99834udj} |=^{\overline{W}}_{\overline{G}}|\;\;=\;\; |=^{I_k}_G|+|=^{\overline{W}}_{G_R}|+|=^{\overline{W}}_{C}|\;\;=\;\; |=^{I_k}_G|+|=^{\overline{W}}_{C}|, \end{equation} since $\equiv^{W}_{G_R} \;=\;\; \equiv^{I_k}_{G_R}$ was shown to be loop-free above. Observe that even though we do not know exactly how $\overline{G'}$ was created, a component isomorphic to $\overline{W}$ can only appear in it if either it was already in $G'$ and no loop was added to that specific component, or the component was isomorphic to $I_k$ in $G'$, but a loop was added to the right place. This implies \begin{equation}\label{1234743} |=^{\overline{W}}_{\overline{G'}}|\;\;\leq\;\; |=^{I_k}_{G'}|+|=^{\overline{W}}_{G'_R}|+|=^{\overline{W}}_{C}|. \end{equation} Using \eqref{i99834udj} and \eqref{1234743}, it is enough to show that $$|=^{I_k}_G|+|=^{\overline{W}}_{C}|\;\;>\;\; |=^{I_k}_{G'}|+|=^{\overline{W}}_{G'_R}|+|=^{\overline{W}}_{C}|,$$ or equivalently, $$|=^{I_k}_G|-|=^{I_k}_{G'}|\;\;>\;\; |=^{\overline{W}}_{G'_R}|.$$ Using \eqref{sdsjhjsdp}, this turns into $|\equiv_{G_R}^{I_k}| > |=^{\overline{W}}_{G'_R}|$, which is obvious considering how $\overline{W}$ was created. We have proven \eqref{sdouw8q4u}, and we are done. \end{proof}
\begin{lemm}\label{fiduhweori} The following relation is definable: \begin{equation}\label{yyaajjjqwe} \{(G,G\mathrel{\dot{\cup}} O_n^*): G\in\mathcal{D}, \; |V(G)|=n\}. \end{equation} \end{lemm}
\begin{proof} The relation in question is the set of pairs $(X,Y)\in\mathcal{D}^2$ that satisfy the following conditions. Let $E_n\in\mathcal{E}$ be such that $(X,E_n)\in\mathfrak{E}$. Now $L(X)\mathrel{\dot{\cup}} O_n^*$ is the minimal digraph $W\in \mathcal{D}$ with the following conditions: $L(X)\leq W$, $O_n^*\leq W$, and there is no $O_n^*\prec Z$ for which $L_1\leq Z$ and $Z\leq W$. (Here we used the fact that the circles of $O_n^*$ are too big to fit into $X$.) Now Lemma \ref{weiurzwe} tells us that the following first-order conditions suffice: \begin{itemize} \item $Y \equiv L(X)\mathrel{\dot{\cup}} O_n^*$, \item $X \leq Y$, \item $(X,Y) \in\mathfrak{L}$, and \item (taken from the end of the statement of Lemma \ref{weiurzwe}:) there exists \emph{no} digraph $\overline{X}$ for which: \begin{itemize} \item $ X\leq \overline{X}$, $X\equiv \overline{X}$, and \item there exists no digraph $Z$ for which $ Y \leq Z$, $Z\equiv Y$, $Z\leq L(X)\mathrel{\dot{\cup}} O_n^*$, and $(\overline{X}, Z)\in\mathfrak{L}$. \end{itemize} \end{itemize} \end{proof}
\begin{definition}\label{iuyxcuieiqiufjsf} Let $G\in \mathcal{D}$ be a digraph having $n$ vertices. Let us denote the vertices of $O_n^*$ with $$V(O_n^*):=\{v_{i,j}: 1\leq i \leq n, \; 1\leq j\leq n+i\} $$ such that $V(O_{n+i})=\{v_{i,j}: 1\leq j\leq n+i\}$. Let $\underline{v}:=(v^1,\dots,v^n)$ be a tuple of the vertices of $G$. Let us define the digraph $G\overset{\underline{v}}{\leftarrow} O_n^*$ in the following way: $$ V(G\overset{\underline{v}}{\leftarrow} O_n^*):=V(G\mathrel{\dot{\cup}} O_n^*), \;\;\; E(G\overset{\underline{v}}{\leftarrow} O_n^*):=E(G\mathrel{\dot{\cup}} O_n^*) \cup \{(v_{i,1},v^i): 1\leq i \leq n\}.$$ \end{definition}
\begin{lemm} The following relation is definable: \begin{equation}\label{baassppwe} \{(G,G\overset{\underline{v}}{\leftarrow} O_n^*): G\in\mathcal{D},\; |V(G)|=n \text{ and $\underline{v}$ is a tuple of the vertices of $G$}\}.
\end{equation} \end{lemm}
\begin{proof} First, we define the relation \begin{equation}\label{ffbbzzzt} \{(G,L(G)\overset{\underline{v}}{\leftarrow} O_n^*): G\in\mathcal{D},\; |V(G)|=n \text{ and $\underline{v}$ is a tuple of the vertices of $L(G)$}\}. \end{equation} This relation consists of those pairs $(X,Y)\in \mathcal{D}^2$ for which the following holds. Let $(X,E_n)\in\mathfrak{E}$. From $X$, $L(X)$ is definable. Hence, with the relation (\ref{yyaajjjqwe}), $L(X)\mathrel{\dot{\cup}} O_n^*$ is definable. Now $Y$ is minimal with the following properties: \begin{itemize} \item $L(X)\mathrel{\dot{\cup}} O_n^* \leq Y$ and $(Y, L(X)\mathrel{\dot{\cup}} O_n^*)\in \mathfrak{E}$. \item There is no $L(X)\prec Z$ for which $(L(X),Z)\in\mathfrak{E}$ and $Z\leq Y$. \item There is no $O_n^*\prec Z$ for which $(O_n^*,Z)\in\mathfrak{E}$ and $Z\leq Y$. \item For all $O_i\in \mathcal{O}_{\cup}$, $O_i\leq O_n^*$ implies $\male_i^L\leq Y$. \item There are no $O_i, O_j\in \mathcal{O}_{\cup}$ for which $O_i\neq O_j$, $O_i, O_j\leq O_n^*$ and $\male_{i,j}^L\leq Y$. \end{itemize} Finally, the relation (\ref{baassppwe}) consists of those pairs $(X,Y)\in \mathcal{D}^2$ which satisfy the following conditions. Let $(X,E_n)\in\mathfrak{E}$ again. Then $Y$ satisfies: there exists $L(X)\overset{\underline{v}}{\leftarrow} O_n^*$ for which \[ (L(X)\overset{\underline{v}}{\leftarrow} O_n^*, Y)\in \mathfrak{M}, \;\;\; X \mathrel{\dot{\cup}} O_n^*\leq Y\leq L(X)\overset{\underline{v}}{\leftarrow} O_n^*, \;\;\; (X,Y)\in\mathfrak{L}. \] \end{proof}
\begin{definition}\label{748298471239874} Let $v_1$ and $v^1$ denote the vertices of $\male_i$ and $\male_j$ with degree 1. Let us define $\male_i\to\male_j$ in the following way: $$V(\male_i\to\male_j):=V(\male_i \mathrel{\dot{\cup}} \male_j), \;\; E(\male_i\to\male_j):=E(\male_i \mathrel{\dot{\cup}} \male_j)\cup \{(v_1,v^1)\}.$$ \end{definition}
\begin{lemm} The following relation is definable: $$ \{(\male_i\to\male_j,E_i,E_j):1<i,j, \;\; i\neq j\}. $$ \end{lemm}
\begin{proof} The relation above consists of those triples $(X,Y,Z)\in \mathcal{D}^3$ which satisfy the following. $Y,Z\in \mathcal{E}$, $E_2\leq Y, Z$ and $Y\neq Z$, meaning $Y=E_i$, $Z=E_j$, where $1<i, j$ and $i\neq j$. Now $O_i\mathrel{\dot{\cup}} O_j\mathrel{\dot{\cup}} E_1$ can be defined similarly as in Lemma \ref{apoowjsbc}. From this, $O_i\mathrel{\dot{\cup}} O_j\mathrel{\dot{\cup}} E_2$ is the only digraph $W\in \mathcal{D}$ for which $O_i\mathrel{\dot{\cup}} O_j\mathrel{\dot{\cup}} E_1\prec W$ and $(W,O_i\mathrel{\dot{\cup}} O_j\mathrel{\dot{\cup}} E_1)\notin\mathfrak{E}.$ Now $O_i\mathrel{\dot{\cup}} O_j\mathrel{\dot{\cup}} L_2$ is the only digraph $W\in \mathcal{D}$ for which there exists $V\in\mathcal{D}$ such that $$O_i\mathrel{\dot{\cup}} O_j\mathrel{\dot{\cup}} E_2\prec V\prec W,\;\;\; L_2\leq W, \;\; O_{i,L}\nleq W \;\text{ and }\; O_{j,L}\nleq W. $$ $\male_i^L \mathrel{\dot{\cup}} \male_j^L$ is the only digraph $W\in \mathcal{D}$ for which there exists $V\in\mathcal{D}$ such that $$O_i\mathrel{\dot{\cup}} O_j\mathrel{\dot{\cup}} L_2\prec V\prec W, \;\;\; \male_i^L\leq W, \;\; \male_j^L\leq W, \; \text{ but }\; \male_{i,j}^L\nleq W. $$ Let $I$ denote the digraph \begin{equation}\label{hxgetrirow} V(I)=\{u,v\}, \;\; E(I):=\{(u,v),(u,u),(v,v)\}. \end{equation} The set \begin{equation}\label{yylalasdf} \{\male_i^L\to\male_j^L,\male_j^L\to\male_i^L\} \end{equation} consists of those $W\in \mathcal{D}$ for which $\male_i^L \mathrel{\dot{\cup}} \male_j^L \prec W$ and $I\leq W$.
The digraph $\male_i^L \mathrel{\dot{\cup}} E_1$ is defined as usual. From this, the digraph $\male_i^L \mathrel{\dot{\cup}} L_1$ is definable as the only $W\in \mathcal{D}$ for which $\male_i^L \mathrel{\dot{\cup}} E_1 \prec W$, $L_2\leq W$ and there is no $V\in\mathcal{D}$ such that $\male_i^L\prec V$, $L_2\leq V$ and $V\leq W$. Let $v$ denote the vertex of $\male_i^L$ that has a loop on it and let $x$ be the only vertex of $L_1$. Let $\male_i^{L\to}$ and $I^*$ be the following digraphs: $$V(\male_i^{L\to}):=V(\male_i^L \mathrel{\dot{\cup}} L_1), \;\; E(\male_i^{L\to}):=E(\male_i^L \mathrel{\dot{\cup}} L_1)\cup \{(v,x)\}$$ \begin{equation}\label{ncbhzuippqw} V(I^*):=\{u,v,w\}, \;\; E(I^*):=\{(v,v),(w,w),(u,v),(v,w)\}. \end{equation} Now $\male_i^{L\to}$ is the only digraph $W\in\mathcal{D}$ for which $\male_i^L \mathrel{\dot{\cup}} L_1 \prec W$ and $I^*\leq W$. From the set (\ref{yylalasdf}) we can choose $\male_i^L\to\male_j^L$ with the fact $$\male_i^{L\to} \leq \male_i^L\to\male_j^L, \;\;\; \male_i^{L\to} \nleq \male_j^L\to\male_i^L.$$ Finally, $X=M(\male_i^L\to\male_j^L)$. \end{proof} \begin{lemm}\label{74678719187892643} The following relation and set are definable: \begin{equation}\label{mazstgf} \{(O_{i,i},E_i):1<i\}, \;\; \{O_{i,i}:1<i\}, \end{equation} where $O_{i,i}:=O_i\mathrel{\dot{\cup}} O_i$. \end{lemm} \begin{proof} The relation (\ref{mazstgf}) consists of those pairs $(X,Y)\in \mathcal{D}^2$ for which the following holds. $Y\in\mathcal{E}$, meaning $Y=E_i$. $X\in \mathcal{O}$, $(X,E_{2i})\in\mathfrak{E}$, and from the set $\mathcal{O}_{\cup}$, $O_i$ is the only element that is embeddable into $X$. The corresponding set can now be easily defined. \end{proof} \begin{lemm} The following relation is definable: $$\{(O_i^*\mathrel{\dot{\cup}} O_{j,L}^*,E_i,E_j):1<i, j\}.$$ \end{lemm} \begin{proof} The relation above is the set of triples $(X,Y,Z)\in \mathcal{D}^3$ which satisfy the following. $Y,Z\in \mathcal{E}$, $E_2\leq Y, Z$, meaning $Y=E_i$, $Z=E_j$, where $1<i, j$. Now $O_i^*\mathrel{\dot{\cup}} O_{j}^*$ is the digraph $W$ satisfying the following: \begin{itemize} \item $W\in \mathcal{O}$ \item If $E_x,E_y\in \mathcal{E}$ satisfy $(O_i^*,E_x)\in \mathfrak{E}$ and $(O_j^*,E_y)\in \mathfrak{E}$, then $(W,E_{x+y})\in\mathfrak{E}$. \item For all $O_n\in\mathcal{O}_{\cup}$ that satisfy $O_n\leq O_i^*$ or $O_n\leq O_j^*$, $O_n\leq W$ holds. \item For all $O_n\in\mathcal{O}_{\cup}$ which satisfy $O_n\leq O_i^*$ and $O_n\leq O_j^*$, $O_{n,n}\leq W$ holds. \end{itemize} Finally, $X$ is the minimal digraph with $O_i^*\mathrel{\dot{\cup}} O_{j}^*\leq X\leq L(O_i^*\mathrel{\dot{\cup}} O_{j}^*)$ and $O_{j,L}^*\leq X$. \end{proof} \begin{definition}\label{75387671982738788} Let us denote the vertices of $O_n^*$ by $$V(O_n^*):=\{v_{i,j}: 1\leq i \leq n, \; 1\leq j\leq n+i\} $$ such that the circle $O_{n+i}$ consists of $\{v_{i,j}: 1\leq j\leq n+i\}$. Similarly, let us denote the vertices of $O_{m,L}^*$ by $$V(O_{m,L}^*):=\{v^{i,j}: 1\leq i \leq m, \; 1\leq j\leq m+i\} $$ such that the circle $O_{m+i,L}$ consists of $\{v^{i,j}: 1\leq j\leq m+i\}$ and the loops are on the vertices $\{v^{i,1}: 1\leq i\leq m\}$. 
For a map $\alpha: [n]\to [m]$, we define the digraph $F_{\alpha}(n,m)$ as $$V(F_{\alpha}(n,m)):=V(O_n^*\mathrel{\dot{\cup}} O_{m,L}^*),$$ $$E(F_{\alpha}(n,m)):=E(O_n^*\mathrel{\dot{\cup}} O_{m,L}^*)\cup \{(v_{i,1},v^{\alpha(i),1}):1\leq i\leq n\}.$$ Let $$\mathcal{F}(n,m):=\{F_{\alpha}(n,m): \; \alpha: [n]\to [m]\}.$$ \end{definition}
\begin{lemm}The following relation is definable: \begin{equation}\label{1kajszt} \{(F_{\alpha}(n,m),E_n,E_m):1\leq n,m, \;\; \alpha: [n]\to [m]\}. \end{equation} \end{lemm}
\begin{proof} The relation above consists of those triples $(X,Y,Z)\in \mathcal{D}^3$ that satisfy the following. $Y,Z\in \mathcal{E}$, meaning $Y=E_n$, $Z=E_m$, where $1\leq n, m$. Now $X$ is a minimal digraph with the following conditions: \begin{itemize} \item $O_n^*\mathrel{\dot{\cup}} O_{m,L}^*\leq X$ and $(O_n^*\mathrel{\dot{\cup}} O_{m,L}^*,X)\in\mathfrak{E}\cap\mathfrak{L}$. \item $O_i\leq O_n^*$ implies $\male_i^L\leq X$. \item $O_i\leq O_n^*\mathrel{\dot{\cup}} O_{m,L}^*$ implies there is no $W\in\mathcal{O}_i^{\to}$ for which $W\leq X$. \item There is no $V$ for which $V\leq X$ and $\male_i\prec V$, such that $\male_i\leq X$ and $\male_i^L \nleq V$. \item There is no $\male_i^L\prec V$ for which $V\leq X$ and $L_2 \leq V$. \end{itemize} \end{proof}
\begin{lemm}\label{7682} The following relation is definable: \begin{equation}\label{jajybccvtep} \{(F_{\id_{[n]}}(n,n),E_n,E_n):1\leq n\}. \end{equation} \end{lemm}
\begin{proof} The relation in question consists of those triples $(X,Y,Z)\in (\ref{1kajszt})$ for which $Y=Z\in\mathcal{E}$ and for $i,j \geq 2$ we have \[O_{i\to j}\leq X \Rightarrow E_i=E_j. \] \end{proof}
\begin{lemm}The following relation is definable: \begin{equation}\label{vyniurteaal} \{(F_{\alpha}(n,m),F_{\beta}(m,l),F_{\beta\circ\alpha}(n,l), E_n,E_m,E_l):1\leq n,m,l, \;\; \alpha: [n]\to [m], \;\; \beta: [m]\to [l]\}. \end{equation} \end{lemm}
\begin{proof} The relation in question is the set of those 6-tuples $(X_1,\dots,X_6)\in\mathcal{D}^6$ which satisfy the following. $X_4,X_5,X_6\in \mathcal{E}$, meaning $X_4=E_n$, $X_5=E_m$ and $X_6=E_l$ where $1\leq n, m,l$. $X_1\in \mathcal{F}(n,m)$, $X_2\in \mathcal{F}(m,l)$ and $X_3\in \mathcal{F}(n,l)$. Finally: \[(O_{i \to j}\leq X_1\text{ and } O_{j\to k}\leq X_2) \; \Rightarrow \; O_{i \to k}\leq X_3.\] \end{proof}
\begin{definition}\label{723789811980012} There is a bijection between the digraphs $G\overset{\underline{v}}{\leftarrow} O_n^*$ and the elements of $\text{ob}(\mathcal{CD})$. Let us observe that the vertices of $G$ are labeled with the circles $O_{n+1},O_{n+2}, \dots, O_{2n}$ in $G\overset{\underline{v}}{\leftarrow} O_n^*$. On the other hand, in $\text{ob}(\mathcal{CD})$, they are labeled with $1,\dots,n$. The element of $\text{ob}(\mathcal{CD})$ that corresponds to $G\overset{\underline{v}}{\leftarrow} O_n^*$ will be denoted by $(G\overset{\underline{v}}{\leftarrow} O_n^*)_{\mathcal{CD}}$ from now on. \end{definition}
\begin{lemm}\label{5634782}The following relation is definable: \begin{equation}\label{ghajsvxclhhg} \begin{split} \{(X, F_{\alpha}(n,m), Y)\in\mathcal{D}^3: \; &X=G\overset{\underline{v}}{\leftarrow} O_n^*, \; Y=H\overset{\underline{w}} {\leftarrow} O_m^* \text{ for some $\underline{v}$ and $\underline{w}$}, \text{ and}\\ &((X)_{\mathcal{CD}}, \alpha, (Y)_{\mathcal{CD}})\in\mathrm{hom}((X)_{\mathcal{CD}},(Y)_{\mathcal{CD}})\} \end{split} \end{equation} \end{lemm}
\begin{proof} The relation in question is the set of those triples $(X,F,Y)\in \mathcal{D}^3$ which satisfy the following.
There exist $G$ and $H$ such that $(G,X)\in (\ref{baassppwe})$ and $(H,Y)\in (\ref{baassppwe})$. Finally, $F$ satisfies \begin{itemize} \item $(F,E_n,E_m)\in (\ref{1kajszt})$, \item $(\male_i\to\male_j\leq G\overset{\underline{v}}{\leftarrow} O_n^*(=X),\;\; O_i,O_j\leq O_n^*\text{ and } O_{i\to k},O_{j\to l}\leq F) \Longrightarrow$ \\ $((O_k\neq O_l \text{ and } \male_k\to\male_l\leq H\overset{\underline{w}}{\leftarrow} O_m^*) \;\; \vee \;\; (O_k=O_l \text{ and } \male_k^L\leq H\overset{\underline{w}}{\leftarrow} O_m^*))$, \item $(\male_i^L\leq G\overset{\underline{v}}{\leftarrow} O_n^*, \;\; O_i\leq O_n^*\text{ and } O_{i\to k}\leq F) \Rightarrow \male_k^L\leq H\overset{\underline{w}}{\leftarrow} O_m^*.$ \end{itemize} \end{proof}
The proof of Theorem \ref{9ahgfa} is now properly prepared; we only need to put the pieces together.
\begin{proof}[Proof of the main theorem: Theorem \ref{9ahgfa}] We have already seen in Section \ref{pakdhtgft} that all relations first-order definable in $\mathcal{D}'$ are definable in $\mathcal{CD}'$ as well. So we only need to deal with the converse. We wish to build a copy of $\mathcal{CD}'$ inside $\mathcal{D}'$ so that everything we can formulate in the first-order language of $\mathcal{CD}'$ becomes accessible in its model in $\mathcal{D}'$. Let the set of objects be $$\{G\overset{\underline{v}}{\leftarrow} O_n^*: G\in\mathcal{D},\; |V(G)|=n \text{ and $\underline{v}$ is a tuple of the vertices of $G$}\},$$ and the set of morphisms be \eqref{ghajsvxclhhg}. We can define both as Lemma \ref{5634782} shows. Identity morphisms can be defined with Lemma \ref{7682}. For the triples $$(X_1, Z_1, Y_1),(X_2, Z_2, Y_2),(X_3, Z_3, Y_3) \in \mathcal{D}^3$$ the condition $(X_i, Z_i, Y_i)\in (\ref{ghajsvxclhhg})$ ensures that there exist $\alpha_i$ such that $$((X_i)_{\mathcal{CD}}, \alpha_i, (Y_i)_{\mathcal{CD}})\in \mathrm{hom}((X_i)_{\mathcal{CD}},(Y_i)_{\mathcal{CD}}).$$ Moreover, if we suppose $Y_1=X_2$, $X_3=X_1$, $Y_3=Y_2$ and that there exists a 6-tuple in (\ref{vyniurteaal}) of the form $(Z_1, Z_2, Z_3,\ast,\ast,\ast)$, we have forced $$((X_1)_{\mathcal{CD}}, \alpha_1, (Y_1)_{\mathcal{CD}}) ((X_2)_{\mathcal{CD}}, \alpha_2, (Y_2)_{\mathcal{CD}}) =((X_3)_{\mathcal{CD}}, \alpha_3, (Y_3)_{\mathcal{CD}}) .$$ The four constants in $\mathcal{CD}'$ require 4 digraphs, say, \begin{equation}\label{cnjusiuduh} C_1, C_2, C_3, \text{ and } C_4 \end{equation} of $\mathcal{D}'$ to be defined such that $$ (C_1)_{\mathcal{CD}}={\bf E}_1, \; (C_2)_{\mathcal{CD}}={\bf I}_2$$ and $C_3$ and $C_4$ are the elements of the set $\mathcal{F}(1,2)$. Now we have all the ``tools'' of $\mathcal{CD}'$ accessible. Finally, the relation (\ref{baassppwe}) lets us ``convert'' the elements of $\mathcal{D}'$ and $\mathcal{CD}'$ back and forth. We are done. \end{proof}
\section{Table of notations}\label{jelolesj}
\begin{longtable}{| l | p{2cm} | l | } \hline Notation & Definition, theorem, etc.
& Page number \\ \hline $\dot{\cup}$ & \ref{349587897819} & \pageref{349587897819} \\ \hline $\sqsubset$, $ \sqsubseteq$, $\equiv$, $\equiv_G^C$ & \ref{7538761239874} & \pageref{7538761239874} \\ \hline $(.)_{\mathcal{CD}}$ & \ref{723789811980012} & \pageref{723789811980012} \\ \hline $\male_n$ & \ref{deffiugraf} & \pageref{deffiugraf} \\ \hline $\male_n^L$ & \ref{deffiugraf} & \pageref{deffiugraf} \\ \hline $\male_{i,j}^L$ & \ref{13541235} & \pageref{13541235} \\ \hline $\male_i\to\male_j$ & \ref{748298471239874} & \pageref{748298471239874} \\ \hline $A$ & \ref{xcmvnyxjdaskdqiwo} & \pageref{xcmvnyxjdaskdqiwo} \\ \hline ${\bf A}$ & & \pageref{xcvnsbndasiiwq} \\ \hline $(A,\alpha,B)$ & & \pageref{yxcmmnbcvn16235} \\ \hline $\mathcal{CD}$ & & \pageref{qoiue2833} \\ \hline $\mathcal{CD}'$ & \ref{xmcvnbsjdiqawe} & \pageref{xmcvnbsjdiqawe} \\ \hline $\mathcal{D}$ & & \pageref{xcvshgw8} \\ \hline $\mathcal{D}'$ & \ref{xcmvnyxjdaskdqiwo} & \pageref{xcmvnyxjdaskdqiwo} \\ \hline $\mathfrak{E}$ & \ref{xmvbyxhu8231} & \pageref{xmvbyxhu8231} \\ \hline $\mathfrak{E}_+$ & \ref{238971875642} & \pageref{238971875642} \\ \hline $E(G)$ & & \pageref{qweoweru3} \\ \hline ${\bf E}_1$ &\ref{xmcvnbsjdiqawe} & \pageref{xmcvnbsjdiqawe} \\ \hline $E_n$ & \ref{defE_n} & \pageref{defE_n} \\ \hline $F_{\alpha}(n,m)$ & \ref{75387671982738788} & \pageref{75387671982738788} \\ \hline $\mathcal{F}(n,m)$ & \ref{75387671982738788} & \pageref{75387671982738788} \\ \hline $E_n$ & \ref{defE_n} & \pageref{defE_n} \\ \hline ${\bf f}_1$, ${\bf f}_2$ & \ref{xmcvnbsjdiqawe} & \pageref{xmcvnbsjdiqawe} \\ \hline $F_n$ & \ref{defE_n} & \pageref{defE_n} \\ \hline $G^T$ & & \pageref{aqweiqportw8} \\ \hline $G\overset{\underline{v}}{\leftarrow} O_n^*$ & \ref{iuyxcuieiqiufjsf} & \pageref{iuyxcuieiqiufjsf} \\ \hline $\text{hom}(A, B)$ & & \pageref{1382721} \\ \hline ${\bf I}_2$ & \ref{xmcvnbsjdiqawe} & \pageref{xmcvnbsjdiqawe} \\ \hline $I_n$ & \ref{defIOL} & \pageref{defIOL} \\ \hline $\mathfrak{L}$ & \ref{xmvbyxhu8231} & \pageref{xmvbyxhu8231} \\ \hline $L_n$ & \ref{defIOL} & \pageref{defIOL} \\ \hline $L(.)$ & \ref{defLfuggveny} & \pageref{defLfuggveny} \\ \hline $\mathcal{L}(.)$ & \ref{defLfuggveny} & \pageref{defLfuggveny} \\ \hline $\mathfrak{M}$ & \ref{ppghhfjasd} & \pageref{ppghhfjasd} \\ \hline $M(.)$ & \ref{defMfuggveny} & \pageref{defMfuggveny} \\ \hline $\mathcal{M}(.)$ & \ref{defMfuggveny} & \pageref{defMfuggveny} \\ \hline $\mathcal{O}$ & \ref{123897834611} & \pageref{123897834611} \\ \hline $\mathcal{O}_{\cup}$ & \ref{ppghhfjasd} & \pageref{ppghhfjasd} \\ \hline $O_n$ & \ref{defIOL} & \pageref{defIOL} \\ \hline $\mathcal{O}_n^{\to}$ & \ref{defOnto} & \pageref{defOnto} \\ \hline $O_n^*$ & \ref{983489887772543214} & \pageref{983489887772543214} \\ \hline $O_{n,L}^{*}$ & \ref{apoowjsbc} & \pageref{apoowjsbc} \\ \hline $O_{n,L}$ & \ref{defonl} & \pageref{defonl} \\ \hline $O_{i,i}$ & \ref{74678719187892643} & \pageref{74678719187892643} \\ \hline $O_{i \to j}$ & \ref{756728767198470} & \pageref{756728767198470} \\ \hline $\text{ob}(\mathcal{CD})$ & & \pageref{dfjhawkd} \\ \hline $V(G)$ & & \pageref{qwqepwrpdg} \\ \hline $\mathcal{W}$ & \ref{7358712837283791} & \pageref{7358712837283791} \\ \hline \end{longtable} {\it Acknowledgements.} The results of this paper were born in an MSc thesis. The author thanks Mikl\'{o}s Mar\'{o}ti, who as his supervisor gave him this research topic. \end{document}
\begin{document} \title{A general approach to transforming finite elements} \begin{abstract} The use of a \emph{reference element} on which a finite element basis is constructed once and mapped to each cell in a mesh greatly expedites the structure and efficiency of finite element codes. However, many famous finite elements such as Hermite, Morley, Argyris, and Bell do not possess the kind of equivalence needed to work with a reference element in the standard way. This paper gives a generalized approach to mapping bases for such finite elements by means of studying relationships between the finite element nodes under push-forward. {\bf MSC 2010}: 65N30. \emph{Keywords}: Finite element method, basis function, pull-back. \end{abstract}
\section{Introduction}
At the heart of any finite element implementation lies the evaluation of basis functions and their derivatives on each cell in a mesh. These values are used to compute local integral contributions to stiffness matrices and load vectors, which are assembled into a sparse matrix and then passed on to an algebraic solver. While it is fairly easy to parametrize local integration routines over basis functions, one must also provide an implementation of those basis functions. Frequently, finite element codes use a \emph{reference element}, on which a set of basis functions is constructed once and mapped via coordinate change to each cell in a mesh. Alternatively, many finite element bases can be expressed in terms of barycentric coordinates, in which case one must simply convert between the physical and barycentric coordinates on each cell in order to evaluate basis functions. Although we refer the reader to recent results on \emph{Bernstein polynomials}~\citep{ainsworth2011bernstein, kirby2011fast} for interesting algorithms in the latter case, the prevalence of the reference element paradigm in modern high-level finite element software~\citep{bangerth_deal_2007,intrepid,LoggMardalEtAl2012a,long2010unified,prud2012feel,rathgeber2016firedrake} leads us to restrict ourselves to the former. The development of FIAT~\citep{Kir04} has had a significant impact on finite element software, especially through its adoption in high-level software projects such as FEniCS~\citep{LoggMardalEtAl2012a} and Firedrake~\citep{rathgeber2016firedrake}. FIAT provides tools to describe and construct reference bases for arbitrary-order instances of many common and unusual finite elements. Composing it with a domain-specific language for variational problems like UFL~\citep{AlnaesEtAl2012} and a form compiler mapping UFL into efficient code for element integrals~\citep{tsfc,KirbyLogg2006a,luporini2014coffee} gives a powerful, user-friendly tool chain. However, any code based on the reference element paradigm operates under the assumption that finite elements satisfy a certain kind of \emph{equivalence}. Essentially, one must have a pull-back operation that puts basis functions on each cell into one-to-one correspondence with the reference basis functions. Hence, the original form of \texttt{ffc}~\citep{KirbyLogg2006a} used only (arbitrary order) Lagrange finite elements, although this was generalized to $H(\mathrm{div})$ and $H(\mathrm{curl})$ elements using Piola transforms in~\citep{RognesKirbyEtAl2009a}. Current technology captures the full simplicial discrete de Rham complex and certain other elements, but many famous elements are not included.
Although it is possible to construct reference elements in FIAT or in some other way, current form compilers or other high-level libraries do not provide correct code for mapping them.
\begin{figure} \caption{Cubic Lagrange} \label{lag3} \caption{Cubic Hermite} \label{herm3} \caption{Morley} \label{morley} \caption{Quintic Argyris} \label{arg} \caption{Bell} \label{bell} \caption{Some famous triangular elements. Solid dots represent point value degrees of freedom, smaller circles represent gradients, and larger circles represent the collection of second derivatives. The arrows indicate directional derivatives evaluated at the tail of the arrow.} \label{fig:els} \end{figure}
Elements such as Hermite~\citep{ciarlet1972general}, Argyris~\citep{argyris1968tuba}, Morley~\citep{morley1971constant}, and Bell~\citep{bell1969refined}, shown alongside the Lagrange element in Figure~\ref{fig:els}, do not satisfy the proper equivalence properties to give a simple relationship between the reference basis and nodal basis on a general cell. Typically, implementations of such elements require special-purpose code for constructing the basis functions separately on each element, which can cost nearly as much in terms of work and storage as building the element stiffness matrix itself. It also requires a different internal workflow in the code. Although Dom\'\i nguez and Sayas~\citep{dominguez2008algorithm} give a technique for mapping bases for the Argyris element (a separate computer implementation is available at \url{https://github.com/VT-ICAM/ArgyrisPack}), and Jardin~\citep{jardin2004triangular} gives a per-element construction technique for the Bell element, these represent the exception rather than the rule. The literature contains no general approach for constructing and mapping finite element bases in the absence of affine equivalence or a suitable generalization thereof. In this paper we provide such a general theory for transforming finite elements that supplements the theory on which FIAT is based for constructing those elements. Our focus is on the case of scalar-valued elements in affine spaces, although we indicate how the techniques generalize on both counts. We begin the rest of the paper by recalling definitions in \S~\ref{sec:prelim}. The bulk of the paper occurs in \S~\ref{sec:theory}, where we show how to map finite element bases under affine equivalence, affine-interpolation equivalence, and when neither holds. We also sketch briefly how the theory is adapted to the case of more general pullbacks such as non-affine coordinate mappings or Piola transforms. All the theory in \S~\ref{sec:theory} assumes that the natural pull-back operation (\textit{i}.\textit{e}.~composition with coordinate change) exactly preserves the function spaces between reference and physical space. However, in certain notable cases such as the Bell element, this condition fails to hold. In \S~\ref{sec:whatthebell}, we give a more general theory with application to the Bell element. Finally, in \S~\ref{sec:num}, we present some numerical results using these elements.
\section{Definitions and preliminaries} \label{sec:prelim}
Throughout, we let $C_b^k(\Omega)$ denote the space of functions with continuous and bounded derivatives up to and including order $k$ over $\Omega$, and $C_b^k(\Omega)^\prime$ its topological dual.
\begin{definition} A \emph{finite element} is a triple $(K, P, N)$ such that \begin{itemize} \item $K \subset \mathbb{R}^d$ is a bounded domain.
\item $P \subset C^k_b(K)$ for some integer $k \geq 0$ is a finite-dimensional function space. \item \(N = \{n_i\}_{i=1}^{\nu} \subset C^k_b(K)^\prime\) is a collection of linearly independent functionals whose actions restricted to $P$ form a basis for $P^\prime$. \end{itemize} \end{definition} The nodes in $N$ are taken as objects in the full infinite-dimensional dual, although sometimes we will only require their restrictions to members of $P$. For any \( n \in C^k_b(K)^\prime \), define $\pi n \in P^\prime$ by restriction. That is, define \( \pi n (p) = n(p) \) for any \( p \in P \). Further, with a slight abuse of notation, we will let $N = \begin{bmatrix} n_1 & n_2 & \dots & n_\nu \end{bmatrix}^T$ denote a functional on $P^\nu$, or equivalently, a vector of $\nu$ members of the dual space. As shorthand, we define these spaces consisting of vectors of functions or functionals by \begin{equation} \label{eq:XXdag} \begin{split} X & \equiv \left( P \right)^\nu, \\ X^\dagger & \equiv \left( C_b^k( K )^\prime \right)^\nu. \end{split} \end{equation} We can ``vectorize'' the restriction operator \( \pi \), so that for any \( N \in X^\dagger \), \( \pi N \in (P^\nu)^\prime \) has \( (\pi N)_i = \pi(n_i) \). Galerkin methods work in terms of a basis for the approximating space, and these are typically built out of local bases for each element: \begin{definition} Let \((K, P, N)\) be a finite element with $\dim P = \nu$. The \emph{nodal basis} for $P$ is the set \(\{\psi_i\}_{i=1}^{\nu}\) such that \(n_i(\psi_j) = \delta_{i,j}\) for each \(1 \leq i,j \leq \nu\). \end{definition} The nodal basis can also be written as \( X \ni \Psi = \begin{bmatrix} \psi_1 & \psi_2 & \dots & \psi_\nu \end{bmatrix} \). Traditionally, finite element codes construct the nodal basis for a \emph{reference} finite element \(\left(\hat{K}, \hat{P}, \hat{N}\right)\) and then map it into the basis for \( \left(K, P, N\right) \) for each $K$ in the mesh. Let $F:K \rightarrow \hat{K}$ be the geometric mapping, as in Figure~\ref{fig:affmap}. We let \( J \) denote the Jacobian matrix of this transformation. \begin{figure} \caption{Affine mapping to a reference cell \(\hat{K}\).} \label{fig:affmap} \end{figure} Similarly to~\eqref{eq:XXdag}, we define the vector spaces relative to the reference cell: \begin{equation} \label{eq:XXdaghat} \begin{split} \hat{X} & \equiv \left( \hat{P} \right)^\nu, \\ \hat{X}^\dagger & \equiv \left( C_b^k( \hat{K} )^\prime \right)^\nu. \end{split} \end{equation} As with \( \pi \), we define \( \hat{\pi} \hat{n} \) as the restriction of \( \hat{n} \) to \( \hat{P} \), and can vectorize it over \( \hat{X}^\dagger \) accordingly. This geometric mapping induces a mapping between spaces of functions over $K$ and $\hat{K}$ as well as between the dual spaces. These are called the pull-back and push-forward operations, respectively: \begin{definition} The \emph{pull-back} operation mapping \( C^k_b(\hat{K}) \rightarrow C^k_b(K) \) is defined by \begin{equation} \label{eq:pullback} F^*\left(\hat{f}\right) = \hat{f} \circ F \end{equation} for each \(\hat{f} \in C^k_b(\hat{K})\). \end{definition} \begin{definition} The \emph{push-forward} operation mapping the dual space \( C^k_b(K)^\prime\) into \( C^k_b(\hat{K})^\prime \) is defined by \begin{equation} F_*(n) = n\circ F^* \end{equation} for each \(n \in C^k_b(K)^\prime \). \end{definition} It is easy to verify that the pull-back and push-forward are linear operations. Moreover, they are invertible iff $F$ itself is.
Therefore, we have \begin{proposition} \label{prop:iso} Given finite elements $(K,P,N)$ and $(\hat{K},\hat{P},\hat{N})$ such that $F(K)=\hat{K}$ and $F^*(\hat{P}) = P$, $F^*:\hat{P} \rightarrow P$ and \( F_*:P^\prime \rightarrow \hat{P}^\prime \) are isomorphisms. \end{proposition} The pull-back and push-forward operations are also defined over the vector spaces \( X \), \(X^\dagger\), \( \hat{X} \), and \( \hat{X}^\dagger \). If $N$ is a vector of functionals and \( \Phi \) a vector of functions, then the vector push-forward and pull-back are, respectively \begin{equation} \begin{split} F_*(N) \in \hat{X}^\dagger, \ \ \ \left( F_*(N) \right)_i & = F_*(n_i), \\ F^*(\hat{\Phi}) \in X, \ \ \ \left( F^*(\hat{\Phi}) \right)_i & = F^*(\hat{\phi_i}). \end{split} \end{equation} It will also be useful to consider vectors of functionals acting on vectors of functions. We define this to produce a matrix as follows. If \(N = \begin{bmatrix} n_1 & n_2 & \dots & n_k \end{bmatrix}^T \) is a collection of functionals and \( \Phi = \begin{bmatrix} \phi_1 & \phi_2 & \dots & \phi_\ell \end{bmatrix}^T \) a collection of functions, then we define the (outer) product $N(\Phi)$ to be the $k \times \ell$ matrix \begin{equation} \label{eq:nonphi} \left( N(\Phi) \right)_{ij} = n_i(\phi_j). \end{equation} For example, if $N$ is the vector of nodes of a finite element and $\Psi$ contains the nodal basis functions, then the Kronecker delta property is expressed as \( N(\Psi) = I. \) If $M$ is a matrix of numbers of appropriate shape and $\Phi \in X$ members of a function space $P$, then $M\Phi$ is just defined by \( (M \Phi)_i = \sum_{j=1}^\nu M_{ij} \Phi_j, \) according to the usual rule for matrix-vector multiplication. \begin{lemma} Let $N \in X^\dagger$ and $\Phi \in X$ and \(M \in \mathbb{R}^{\nu \times \nu}\). Then \begin{equation} N(M\Phi) = N(\Phi) M^T. \end{equation} \end{lemma} \begin{proof} The proof is a simple calculation: \[ \left( N(M\Phi) \right)_{ij} = n_i \left( \left( M \Phi \right)_j \right) = n_i \left( \sum_{k=1}^\nu M_{jk} \phi_k \right) = \sum_{k=1}^\nu M_{jk}\, n_i \left( \phi_k\right) = \sum_{k=1}^\nu \left(N(\Phi)\right)_{ik} M_{jk}. \] \end{proof} The relationship between pull-back and push-forward also leads to the vectorized relation \begin{lemma} Let $N \in X^\dagger$ and $\hat{\Phi} \in \hat{X}$. Then \begin{equation} N(F^*(\hat{\Phi})) = F_*(N)(\hat{\Phi}). \end{equation} \end{lemma} \begin{definition} Let $(K,P,N)$ and $(\hat{K}, \hat{P}, \hat{N})$ be finite elements and $F$ an affine mapping on $K$. Then $(K,P,N)$ and $(\hat{K},\hat{P}, \hat{N})$ are \emph{affine equivalent} if \begin{itemize} \item $F(K) = \hat{K}$, \item the pull-back satisfies $F^*(\hat{P}) = P$ (in the sense of equality of vector spaces), \item $F_*(N) = \hat{N}$ (in the sense of equality of finite sets). \end{itemize} \end{definition} \begin{definition} Let \( (K, P, N) \) be a finite element of class $C^k$ and \(\Psi \in X \) its nodal basis. The \emph{nodal interpolant} \( \mathcal I_N: C_b^k(K) \rightarrow P \) is defined by \begin{equation} \mathcal{I}_N(f) = \sum_{i=1}^\nu n_i(f) \psi_i. \end{equation} \end{definition} This interpolant plays a fundamental role in establishing approximation properties of finite elements via the Bramble-Hilbert Lemma~\citep{bramble1971bounds,dupontscott1980}.
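The vectorized notation above is easy to exercise numerically. The following sketch is our own illustration (not part of FIAT or any other package), using a one-dimensional quadratic Lagrange element with point-evaluation nodes at $0$, $\tfrac{1}{2}$, and $1$; it checks the Kronecker property \( N(\Psi) = I \) and the identity \( N(M\Phi) = N(\Phi)M^T \) for a randomly chosen matrix \( M \):
\begin{verbatim}
import numpy as np

pts = np.array([0.0, 0.5, 1.0])             # nodes: point evaluation at these points
vander = np.vander(pts, 3, increasing=True) # rows are [1, x_i, x_i^2]
coeffs = np.linalg.inv(vander)              # column j: monomial coefficients of psi_j

def apply_nodes(funcs):
    """N(Phi): evaluate each point-evaluation node on each function (columns)."""
    return vander @ funcs

Psi = coeffs
print(np.allclose(apply_nodes(Psi), np.eye(3)))   # Kronecker property N(Psi) = I

M = np.random.rand(3, 3)
MPhi = Psi @ M.T                                  # (M Phi)_i = sum_j M_ij phi_j
print(np.allclose(apply_nodes(MPhi), apply_nodes(Psi) @ M.T))   # N(M Phi) = N(Phi) M^T
\end{verbatim}
Here the functions are represented by their monomial coefficients, so that forming linear combinations of basis functions is simply a matrix product on the coefficient columns.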
The homogeneity arguments in fact go through for the following generalized notion of element equivalence: \begin{definition} Two finite elements $(K, P, N)$ and $(K, P, \tilde{N})$ are \emph{interpolation equivalent} if $\mathcal{I}_{N} = \mathcal{I}_{\tilde{N}}$. \end{definition} \begin{definition} If $(K, P, \tilde{N})$ is affine equivalent to $(\hat{K}, \hat{P}, \hat{N})$ and interpolation equivalent to $(K, P, N)$, then $(K, P, N)$ and $(\hat{K}, \hat{P}, \hat{N})$ are \emph{affine-interpolation equivalent}. \end{definition} Brenner and Scott~\citep{BreSco} give the following result, of which we shall make use: \begin{proposition} Finite elements \( (K, P, N) \) and \( (K, P, \tilde{N}) \) are interpolation equivalent iff the spans of \( N \) and \( \tilde{N} \) (viewed as subsets of \( C_b^k(K)^\prime \)) are equal. \end{proposition} For Lagrange and certain other finite elements, one simply has that $F^*(\hat{\Psi}) = \Psi$, which allows for the traditional use of reference elements in FEniCS, Firedrake, and countless other codes. However, for many other elements this is not the case. It is our goal in this paper to give a general approach that expresses $\Psi$ as a linear transformation $M$ applied to $F^*(\hat{\Psi})$. Before proceeding, we note that approximation theory for Argyris and other families without affine-interpolation equivalence can proceed by means of establishing the \emph{almost-affine} property~\citep{ciarlet1978finite}. Such proofs can involve embedding the inequivalent element family into an equivalent one with the requisite approximation properties. For example, the Argyris element is proved almost-affine by comparison to the ``type (5)'' quintic Hermite element. Although we see definite computational consequences of affine equivalence, affine-interpolation equivalence, or the lack of either among our element families, our approach to transforming inequivalent families does not make use of any almost-affine properties. \section{Transformation theory when \( F^*(\hat{P}) = P \)} \label{sec:theory} For now, we assume that the pull-back operation~\eqref{eq:pullback} appropriately converts the reference element function space into the physical function space and discuss the construction of nodal bases based on relationships between the reference nodes $\hat{N}$ and the pushed-forward physical nodes $F_*(N)$. We focus on the simplicial case, although generalizations do not have a major effect, as we note later. Throughout, we will use the following convention, developed in~\citep{RognesKirbyEtAl2009a} for handling facet orientation in mixed methods but also useful in ordering higher-order Lagrange degrees of freedom. Since our examples are triangles (2-simplices), it is not necessary to expand on the entire convention. Given a triangle with vertices \( \left(\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\right) \), we define edge $\gamma_i$ of the triangle to connect the vertices other than $\mathbf{v}_i$. The (unit) tangent vector \( \mathbf{t}_i = \begin{bmatrix}t^\mathbf{x}_i & t^\mathbf{y}_i\end{bmatrix}^T \) points in the direction from the lower- to the higher-numbered vertex. When triangles share an edge, then, they agree on its orientation. The normal to an edge is defined by rotating the tangent by applying the matrix \( R = \begin{bmatrix}0 & 1 \\ -1 & 0 \end{bmatrix} \) so that \(\mathbf{n}_i = R \mathbf{t}_i = \begin{bmatrix} n^\mathbf{x}_i & n^\mathbf{y}_i\end{bmatrix}^T\). We also let \( \mathbf{e}_i \) denote the midpoint of \( \gamma_i \).
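As a concrete illustration of this convention, the following sketch (our own; the function name \texttt{edge\_geometry} is chosen for illustration) computes the tangents, normals, midpoints, and lengths of the edges of a triangle directly from its vertex coordinates:
\begin{verbatim}
import numpy as np

R = np.array([[0.0, 1.0], [-1.0, 0.0]])   # rotation taking tangents to normals

def edge_geometry(v):
    """v: 3x2 array of the vertex coordinates v_1, v_2, v_3 (0-based rows).
    Returns unit tangents t_i, normals n_i = R t_i, midpoints e_i, and lengths,
    where edge gamma_i connects the two vertices other than v_i, oriented from
    the lower- to the higher-numbered vertex."""
    t, n, e, lengths = [], [], [], []
    for i in range(3):
        a, b = sorted(j for j in range(3) if j != i)
        d = v[b] - v[a]
        ell = np.linalg.norm(d)
        tangent = d / ell
        t.append(tangent)
        n.append(R @ tangent)
        e.append(0.5 * (v[a] + v[b]))
        lengths.append(ell)
    return np.array(t), np.array(n), np.array(e), np.array(lengths)

# example: the unit right triangle
t, n, e, lengths = edge_geometry(np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]))
\end{verbatim}
Because neighboring triangles agree on the edge orientation, the quantities produced this way are consistent across a mesh.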
Now, we fix some notation for describing nodes. First, we define $\delta_{\mathbf{x}}$ acting on any continuous function by pointwise evaluation. That is: \begin{equation} \delta_{\mathbf{x}}(p) = p(\mathbf{x}). \end{equation} We let $\delta^{\mathbf{s}}_\mathbf{x}$ denote the directional derivative in direction $\mathbf{s}$ at a point $\mathbf{x}$, so that \begin{equation} \delta^{\mathbf{s}}_\mathbf{x}(p) = \mathbf{s}^T \nabla p(\mathbf{x}). \end{equation} We use repeated superscripts to indicate higher-order derivatives, so that \( \delta^{\mathbf{x}\mathbf{x}}_{\mathbf{x}} \) defines the second directional derivative along the $x$-axis at point $\mathbf{x}$. It will also be convenient to use block notation, with a single symbol representing two or more items. For example, the gradient notation \[ \nabla_{\mathbf{x}} = \begin{bmatrix} \delta^{\mathbf{x}}_{\mathbf{x}} & \delta^{\mathbf{y}}_{\mathbf{x}} \end{bmatrix}^T \] gives the pair of functionals evaluating the Cartesian derivatives at a point $\mathbf{x}$. To denote a gradient in a different basis, we append the directions as superscripts so that \[ \nabla^{\mathbf{nt}}_{\mathbf{x}} = \begin{bmatrix} \delta^{\mathbf{n}}_{\mathbf{x}} & \delta^{\mathbf{t}}_{\mathbf{x}} \end{bmatrix}^T \] contains the normal and tangential derivatives at a point \(\mathbf{x}\). Similarly, we let \[ \bigtriangleup_{\mathbf{x}} = \begin{bmatrix} \delta^{\mathbf{xx}}_{\mathbf{x}} & \delta^{\mathbf{xy}}_{\mathbf{x}} & \delta^{\mathbf{yy}}_{\mathbf{x}} \end{bmatrix}^T \] denote the vector of three functionals evaluating the unique (supposing sufficient smoothness) second partials at \(\mathbf{x}\). Let \( \Psi = \{\psi_i\}_{i=1}^{\nu}\) be the nodal basis for a finite element \((K, P, N)\) and \( \hat{\Psi} = \{ \hat{\psi}_i \}_{i=1}^{\nu}\) that for a reference element \( \left(\hat{K},\hat{P},\hat{N} \right) \). We also assume that \(F(K) = \hat{K}\) and \( F^*(\hat{P})=P \). Because the pull-back is invertible, it maps linearly independent sets to linearly independent sets. So, \(F^*(\hat{\Psi})\) must also be a basis for \(P\). Hence there exists an invertible $\nu \times \nu$ matrix $M$ such that \begin{equation} \label{eq:M} \Psi = M F^*(\hat{\Psi}), \end{equation} or equivalently, each nodal basis function is some linear combination of the pull-backs of the reference nodal basis functions. Our theory for transforming the basis functions (\textit{i}.\textit{e}.~computing the matrix $M$) will work via duality -- relating the matrix $M$ to how the nodes, or at least their restrictions to the finite-dimensional spaces, push forward. It will be useful to define the intermediate $\nu \times \nu$ matrix \( B=F_*(N)(\hat{\Psi}) \). Recall from~\eqref{eq:nonphi} that its entries for $1 \leq i, j \leq \nu$ are \begin{equation} \label{eq:Fmat} B_{ij} \equiv F_*(n_i)(\hat{\psi}_j) = n_i(F^*(\hat{\psi}_j)). \end{equation} This matrix, having nodes applied only to members of \( P \), is indifferent to restrictions, and so \( B = F_*(\pi N)(\hat{\Psi}) \) as well. Because of Proposition~\ref{prop:iso} and finite-dimensionality, the nodal sets \( \hat{\pi}\hat{N} \) and \( F_*(\pi N) \) are both bases for $\hat{P}^\prime$, and so there exists an invertible $\nu \times \nu$ matrix $V$ such that \begin{equation} \label{eq:V} \hat{\pi} \hat{N} = V F_*(\pi N). \end{equation} Frequently, it may be easier to express the pushed-forward nodes as a linear combination of the reference nodes. In this case, one obtains the matrix $V^{-1}$.
At any rate, the matrices \( V \) and \( M \) are closely related. \begin{theorem} For finite elements $(K,P,N)$ and $(\hat{K},\hat{P},\hat{N})$ with $F(K)=\hat{K}$ and $F^*(\hat{P}) = P$, the matrices in~\eqref{eq:M} and~\eqref{eq:V} satisfy \begin{equation} M = V^T. \end{equation} \label{thm:MVt} \end{theorem} \begin{proof} We proceed by relating both matrices to $B$ defined in~\eqref{eq:Fmat} via the Kronecker property of nodal bases. First, we have \[ I = N(\Psi) = N(MF^*(\hat{\Psi})) = N(F^*(\hat{\Psi})) M^T = B M^T, \] so that \begin{equation} M = B^{-T}. \end{equation} Similarly, \[ I = \hat{N}(\hat{\Psi}) = \left(VF_*(N)\right)(\hat{\Psi}) = V F_*(N)(\hat{\Psi}) = V B, \] so that \(V=B^{-1}\) and the result follows. \end{proof} That is, to relate the pullback of the reference element basis functions to any element's basis functions, it is sufficient to determine the relationship between the nodes. \subsection{Affine equivalence: The Lagrange element} \label{sec:lagrange} When elements form affine-equivalent families, the matrix $M$ has a particularly simple form. \begin{theorem} \label{thm:aeI} If $(K,P,N)$ and $(\hat{K},\hat{P},\hat{N})$ are affine-equivalent finite elements then the transformation matrix $M$ is the identity. \end{theorem} \begin{proof} Suppose the two elements are affine-equivalent, so that $F_*(N) = \hat{N}$. Then, a direct calculation gives \[ N(F^*(\hat{\Psi})) = F_*(N)(\hat{\Psi}) = \hat{N}(\hat{\Psi}) = I, \] so that $M=I$. \end{proof} The \emph{Lagrange} elements are the most widely used finite elements and form the prototypical affine-equivalent family~\citep{BreSco}. For a simplex $K$ in dimension $d$ and integer $r \geq 1$, one defines $P = P_r(K)$ to be the space of polynomials over $K$ of total degree no greater than $r$, which has dimension $\binom{r+d}{d}$. The nodes are taken to be pointwise evaluation at a lattice of $\binom{r+d}{d}$ points. Classically, these are taken to be regular and equispaced, although options with superior interpolation and conditioning properties for large $r$ are also known~\citep{HesWar}. One must ensure that nodal locations are chosen at the boundary to enable $C^0$ continuity between adjacent elements. A cubic Lagrange triangle ($r=3$ and $d=2$) is shown earlier in Figure~\ref{lag3}. The practical effect of Theorem~\ref{thm:aeI} is that the reference element paradigm ``works.'' That is, a computer code contains a routine to evaluate the nodal basis \( \hat{\Psi} \) and its derivatives for a reference element \( (\hat{K}, \hat{P}, \hat{N}) \). Then, this routine is called at a set of quadrature points in \( \hat{K} \). One obtains values of the nodal basis at quadrature points on each cell \( K \) by pull-back, so no additional work is required. To obtain the gradients of each basis function at each quadrature point, one simply multiplies each basis gradient at each point by \( J^T \). On the other hand, when \( M \neq I \), the usage of tabulated reference values is more complex.
Given a table \begin{equation} \label{eq:varpsi} \hat{\varPsi}_{iq} = \hat{\psi}_i(\hat{\xi}_q) \end{equation} of the reference basis at the reference quadrature points, one finds the nodal basis for \( (K, P, N) \) by constructing \( M \) for that element and then computing the product \( M \hat{\varPsi} \) so that \begin{equation} \label{eq:psiiq} \psi_i(\xi_q) = \sum_{k=1}^{\nu} M_{i,k} \hat{\varPsi}_{k,q}. \end{equation} Mapping gradients from the reference element requires both multiplication by \( M \) as well as application of \( J^T \) by the chain rule. We define \( D\hat{\varPsi} \in \mathbb{R}^{\nu \times |\xi| \times 2} \) by \begin{equation} D\hat{\varPsi}_{i,q,:} = \hat{\nabla}\hat{\psi}_i(\hat{\xi}_q). \end{equation} Then, the basis gradients require contraction with \( M \) \begin{equation} \label{eq:dvarpsiprime} D\varPsi^\prime_{i,q,:} := \sum_{k=1}^{\nu} M_{i,k} D\hat{\varPsi}_{k,q,:}, \end{equation} followed by the chain rule \begin{equation} \label{eq:dvarpsi} D\varPsi_{i,q,:} := J^T D\varPsi^\prime_{i,q,:}. \end{equation} In fact, the application of \( M \) and \( J^T \) can be performed in either order. Note that applying \( M \) requires a \( \nu \times \nu \) matrix-vector multiplication and in principle couples all basis functions together, while applying \( J^T \) works pointwise on each basis function separately. When \( M \) is quite sparse, one expects this to be a small additional cost compared to the other required arithmetic. We present further details for this in the case of Hermite elements, to which we now turn. \subsection{The Hermite element: affine-interpolation equivalence} The Hermite triangle~\citep{ciarlet1972general}, shown in Figure~\ref{herm3}, is based on cubic polynomials, although higher-order instances can also be defined~\citep{BreSco}. In contrast to the Lagrange element, its node set includes function values and gradients at the vertices, as well as an interior function value. The resulting finite element spaces have $C^0$ continuity with $C^1$ continuity at vertices. They provide a classic example of elements that are not affine equivalent but instead give affine-interpolation equivalent families. We will let $(K,P,N)$ be a cubic Hermite triangle, specifying the gradient at each vertex in terms of the Cartesian derivatives -- see Figure~\ref{physherm}. Let $\{\mathbf{v}_i\}_{i=1}^3$ be the three vertices of $K$ and $\mathbf{v}_4$ its barycenter. We order the nodes $N$ by \begin{equation} N = \begin{bmatrix} \delta_{\mathbf{v}_1} & \nabla_{\mathbf{v}_1}^T & \delta_{\mathbf{v}_2} & \nabla_{\mathbf{v}_2}^T & \delta_{\mathbf{v}_3} & \nabla_{\mathbf{v}_3}^T & \delta_{\mathbf{v}_4} \end{bmatrix}^T, \end{equation} using block notation. \begin{figure} \caption{Reference Hermite element} \label{refherm} \caption{Physical Hermite element} \label{physherm} \caption{Reference and physical cubic Hermite elements with gradient degrees of freedom expressed in terms of local Cartesian directional derivatives.} \label{fig:hermrefandphys} \end{figure} Now, we fix the reference element $(\hat{K}, \hat{P}, \hat{N})$ with \( \hat{K} \) as the unit right triangle and express the gradient by the derivatives in the direction of the reference Cartesian coordinates, as in Figure~\ref{refherm}. Let \( \{\hat{\mathbf{v}}_i\}_{i=1}^3 \) be the three vertices of \( \hat{K} \) and \( \hat{\mathbf{v}}_4 \) its barycenter. We define \( \hat{N} \) analogously to \( N \).
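Before examining how the Hermite nodes transform, we remark that the mapping of tabulated values in~\eqref{eq:psiiq}--\eqref{eq:dvarpsi} amounts to a pair of tensor contractions. The following NumPy sketch is our own illustration; the array names \texttt{refvals} and \texttt{refgrads}, holding \( \hat{\varPsi} \) and \( D\hat{\varPsi} \), are assumptions made for the example:
\begin{verbatim}
import numpy as np

def map_tabulation(M, JT, refvals, refgrads):
    """Map reference tabulations to a physical cell.
    M:        (nu, nu) basis transformation matrix
    JT:       (2, 2) transpose of the Jacobian of F
    refvals:  (nu, nq) reference basis values at quadrature points
    refgrads: (nu, nq, 2) reference basis gradients at quadrature points
    """
    vals = M @ refvals                                # eq. (psiiq)
    grads = np.einsum("ik,kqd->iqd", M, refgrads)     # eq. (dvarpsiprime)
    grads = np.einsum("ed,iqd->iqe", JT, grads)       # eq. (dvarpsi): chain rule
    return vals, grads
\end{verbatim}
As noted above, the two contractions commute, so the chain rule may equally be applied before the basis transformation.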
Consider the relationship between the nodal basis functions $\Psi$ and the pulled-back $F^*(\hat{\Psi})$. For any $\hat{\psi} \in \hat{P}$, the chain rule leads to \begin{equation} \label{eq:justthechainrule} \nabla (\hat{\psi} \circ F) = J^T \hat{\nabla} \hat{\psi} \circ F. \end{equation} Now, suppose that $\hat{\psi}$ is a nodal basis function corresponding to evaluation at a vertex or the barycenter, so that \( \delta_{\hat{\mathbf{v}}_i}\hat{\psi} = 1 \) for some \( 1 \leq i \leq 4 \), with the remaining reference nodes vanishing on \( \hat{\psi} \). We compute that \[ \delta_{\mathbf{v}_i} F^*(\hat{\psi}) = ( \hat{\psi} \circ F )\left( \mathbf{v}_i \right) = \hat{\psi}(\hat{\mathbf{v}}_i) = 1, \] while \( \delta_{\mathbf{v}_j} F^*(\hat{\psi}) = 0 \) for \( 1 \leq j \leq 4 \) with \( j \neq i \). Also, since the reference gradient of $\hat{\psi}$ vanishes at each vertex,~\eqref{eq:justthechainrule} implies that the physical gradient of \( F^*(\hat{\psi}) \) must also vanish at each vertex. So, pulling back \( \hat{\psi} \) gives the corresponding nodal basis function for \( (K, P, N) \). The situation changes for the derivative basis functions. Now take $\hat{\psi}$ to be the basis function with unit-valued derivative in, say, the \( \hat{\mathbf{x}} \) direction at vertex \( \hat{\mathbf{v}}_i \) and other degrees of freedom vanishing. Since it vanishes at each vertex and the barycenter of \( \hat{K} \), \( F^*(\hat{\psi}) \) will vanish at each vertex and the barycenter of \( K \). The reference gradient of \(\hat{\psi}\) vanishes at the vertices other than \( i \), so the physical gradient of its pullback must also vanish at the corresponding vertices of \( K \). However,~\eqref{eq:justthechainrule} shows that \( \nabla(\hat{\psi} \circ F) \) will typically not yield \( \begin{bmatrix} 1 & 0 \end{bmatrix}^T \) at \( \mathbf{v}_i \). Consequently, the pull-backs of the reference derivative basis functions do not produce the physical basis functions. Equivalently, we may express this failure in terms of the nodes -- pushing forward \( N \) does not yield \( \hat{N} \). We demonstrate this pictorially in Figure~\ref{fig:hermpushforward}, showing that the images of the derivative nodes under push-forward do not correspond to the reference derivative nodes. Taking this view allows us to address the issue using Theorem~\ref{thm:MVt}. \begin{figure} \caption{Pushing forward the Hermite derivative nodes in physical space does \emph{not} produce the reference derivative nodes.} \label{fig:hermpushforward} \end{figure} This discussion using the chain rule can be summarized by the matrix-valued equation \begin{equation} F_*(N) = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & J^T & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & J^T & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & J^T & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix} \hat{N}, \label{eq:blockguy} \end{equation} noting that the second, fourth, and sixth rows and columns of this matrix are blocks of two, and each ``$0$'' is taken to be the zero matrix of appropriate size. This is exactly the inverse of \( V \) from Theorem~\ref{thm:MVt}. In this case, the transformation $V$ is quite local -- that is, only the push-forward of nodes at a given point are used to construct the reference nodes at the image of that point. This seems to be generally true for interpolation-equivalent elements, although functionals with broader support (e.g. integral moments over the cell or a facet thereof) would require a slight adaptation.
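For the cubic Hermite triangle, the block structure of~\eqref{eq:blockguy} makes the transformation simple to assemble in code. The following sketch is our own illustration (the function name \texttt{hermite\_transformation} is ours); it builds \( V \), and hence \( M = V^T \), from the Jacobian \( J \) using the node ordering above:
\begin{verbatim}
import numpy as np

def hermite_transformation(J):
    """Build V (so that N-hat = V F_*(N)) and M = V^T for the cubic Hermite
    triangle with nodes ordered value/gradient per vertex, then the barycenter."""
    JinvT = np.linalg.inv(J).T
    V = np.zeros((10, 10))
    for v in range(3):
        off = 3 * v
        V[off, off] = 1.0                              # vertex point value
        V[off + 1:off + 3, off + 1:off + 3] = JinvT    # vertex gradient block
    V[9, 9] = 1.0                                      # barycenter point value
    return V, V.T
\end{verbatim}
The resulting \( M \) is then applied to tabulated reference values exactly as in~\eqref{eq:psiiq}--\eqref{eq:dvarpsi}.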
We will see presently for the Morley and Argyris elements that the transformation need not be block diagonal for elements without interpolation equivalence. At any rate, the following elementary observation from linear algebra suggests the sparsity of \( V \): \begin{proposition} \label{prop:span} Let \( W \) be a vector space with sets of vectors \( W_1 = \{ w^1_i \}_{i=1}^m \subset W \) and \( W_2 = \{ w^2_i \}_{i=1}^n \subset W \). Suppose that \( \mathrm{span}\, W_1 \subset \mathrm{span}\, W_2 \) so that there exists a matrix \( A \in \mathbb{R}^{m \times n} \) such that \(w^1_i = \sum_{k=1}^n A_{ik} w^2_k \). If we further have that some \( w^1_i \in \mathrm{span} \{ w^2_j \}_{j\in \mathcal{J}} \) for some \( \mathcal{J} \subset [1, n] \), then \( A_{ij} = 0 \) for all \( j \notin \mathcal{J} \). \end{proposition} Our theory applies equally to the general family of Hermite triangles of degree \( k \geq 3\). In those cases, the nodes consist of gradients at vertices together with point-wise values at appropriate places. All higher-order cases generate $C^0$ families of elements with $C^1$-continuity at vertices. The $V$ matrix remains analogous to the cubic case, with $J^{-T}$ on the diagonal in three places corresponding to the vertex derivative nodes. No major differences appear for the tetrahedral Hermite elements, either. As we saw earlier, Hermite and other elements for which \( M \neq I \) incur an additional cost in mapping from the reference element, as one must compute basis function values and gradients via~\eqref{eq:psiiq} and~\eqref{eq:dvarpsi}. The key driver of this additional cost is the application of \( M \). Since \( M \) is very sparse for Hermite elements -- just 12 nonzeros beyond the 1's on the diagonal -- evaluating~\eqref{eq:psiiq} requires just \( 12 \) operations per column, so a 10-point quadrature rule requires 120 operations. Evaluating~\eqref{eq:dvarpsiprime} requires twice this, or 240 operations. Applying \( J^T \) in~\eqref{eq:dvarpsi} is required whether Hermite or Lagrange elements are used. It requires $4 \times 10$ times the number of quadrature points used -- so a 10-point rule would require 400 operations. Hence, the chain rule costs more than the application of \( M \) in this situation. On the other hand, building an element stiffness matrix requires a double loop over these 10 basis functions nested with a loop over the, say, 10 quadrature points. Hence, the loop body requires 1000 iterations, and with even a handful of operations will easily dominate the additional cost of multiplying by \( M \). \subsection{The Morley and Argyris elements} The construction of \(C^1\) finite elements, required for problems such as plate bending or the Cahn-Hilliard equations, is a long-standing difficulty. Although it is possible to work around this requirement by rewriting the fourth-order problem as a lower-order system or by using \( C^0 \) elements in conjunction with a variational form penalizing the jumps in derivatives~\citep{engel2002continuous, wells2007c}, these approaches do not actually give a \( C^1 \) solution. The quadratic Morley triangle~\citep{morley1971constant}, shown in Figure~\ref{morley}, finds application in plate-bending problems and also provides a relatively simple motivation for and application of the theory developed here.
The six degrees of freedom, vertex values and the normal derivatives at each edge midpoint, lead to an assembled finite element space that is neither \( C^0 \) nor \( C^1 \), but it is still suitable as a convergent nonconforming approximation for fourth-order problems. The quintic Argyris triangle~\citep{argyris1968tuba}, shown in Figure~\ref{arg}, with its 21 degrees of freedom, gives a proper \( C^1 \) finite element. Hence it can be used generically for fourth-order problems as well as second-order problems for which a continuously differentiable solution is desired. The Argyris elements use the values, gradients, and second derivatives at each triangle vertex plus the normal derivatives at edge midpoints as the twenty-one degrees of freedom. It has been suggested that the Bell element~\citep{bell1969refined} represents a simpler \( C^1 \) element than the Argyris element, on the grounds that it has fewer degrees of freedom. Shown in Figure~\ref{bell}, we see that the edge normal derivatives have been removed from the Argyris element. However, this comes at the cost of a smaller but more complicated function space. Rather than full quintic polynomials, the Bell element uses quintic polynomials that have normal derivatives on each edge of only third degree. This constraint on the polynomial space turns out to complicate the transformation of Bell elements compared to Hermite or even Argyris. For the rest of this section, we focus on Morley and Argyris, returning to Bell later. As with the Hermite element, it can readily be seen that the standard affine mapping will not preserve nodal bases. Unlike the Hermite element, however, the Morley and Argyris elements do not form affine-interpolation equivalent families -- the spans of the nodes are not preserved under push-forward thanks to the edge normal derivatives -- see Figure~\ref{fig:morleypushforward}. As the Morley and Argyris nodal sets do not contain a full gradient at edge midpoints, the technique used for Hermite elements cannot be directly applied. \begin{figure} \caption{Pushing forward the Morley derivative nodes in physical space does \emph{not} produce the reference derivative nodes.} \label{fig:morleypushforward} \end{figure} To work around this, we introduce the following idea: \begin{definition} Let $(K,P,N)$ and $(\hat{K}, \hat{P}, \hat{N})$ be finite elements of class $C^k$ with affine mapping $F:K\rightarrow \hat{K}$ and associated pull-back and push-forward $F^*$ and $F_*$. Suppose also that \( F^*(\hat{P}) = P \). Let \(N^c = \left\{n^c_i\right\}_{i=1}^\mu \subset C_b^k(K)^\prime \) and \( \hat{N}^c = \left\{ \hat{n}^c_i \right\}_{i=1}^\mu \subset C_b^k(\hat{K})^\prime \) be such that \begin{itemize} \item \( N \subset N^c \) (taken as sets rather than vectors), \item \( \hat{N} \subset \hat{N}^c \) (again as sets), \item \( \mathrm{span}(F_*(N^c)) = \mathrm{span}(\hat{N}^c) \) in \( C_b^k(\hat{K})^\prime \). \end{itemize} Then \( N^c \) and \( \hat{N}^c \) form a \emph{compatible nodal completion} of \( N \) and \( \hat{N} \). \end{definition} \begin{example} Let $(K,P,N)$ and $(\hat{K}, \hat{P}, \hat{N})$ be the Morley triangle and reference triangle. Take $N^c$ to contain all the nodes of $N$ together with the tangential derivatives at the midpoint of each edge of $K$ and similarly for $\hat{N}^c$. In this case, $\mu = 9$. Then, both $N^c$ and $\hat{N}^c$ contain complete gradients at each edge midpoint and function values at each vertex.
The push-forward of $N^c$ has the same span as $\hat{N}^c$ and so $N^c$ and $\hat{N}^c$ form a compatible nodal completion of \( N \) and \( \hat{N} \). This is shown pictorially in Figure~\ref{fig:morleybridge}. \end{example} \begin{figure} \caption{Nodal sets \( \hat{N}^c \) and \( F_*(N^c) \) for the compatible nodal completion of the Morley element.} \label{fig:morleybridge} \end{figure} A similar completion -- supplementing the nodes with tangential derivatives at edge midpoints -- exists for the Argyris nodes and reference nodes~\citep{dominguez2008algorithm}. Now, since the spans of $\hat{N}^c$ and $F_*(N^c)$ agree (even in \( C_b^k(\hat{K})^\prime \)), there exists a $\mu \times \mu$ matrix $V^c$, typically block diagonal, such that \begin{equation} \label{eq:vc} \hat{N}^c = V^c F_*(N^c). \end{equation} Let \( E \in \mathbb{R}^{\nu \times \mu} \) be the Boolean matrix with \( E_{ij} = 1 \) iff \( \hat{n}_i = \hat{n}_j^c \) so that \begin{equation} \label{eq:e} \hat{N} = E \hat{N}^c, \end{equation} and it is clear that \begin{equation} \hat{N} = E V^c F_*(N^c). \end{equation} That is, the reference nodes are linear combinations of the pushed-forward completed nodes, but we must have the linear combination in terms of the pushed-forward original nodes alone. Recall that building the nodal basis only requires the action of the nodes on the polynomial space. Because \( \mu > \nu \), the set of nodes \( \pi N^c \) must be linearly dependent. So, we seek a matrix \( D \in \mathbb{R}^{\mu \times \nu} \) such that \begin{equation} \label{eq:d} \pi N^c = D \pi N. \end{equation} Since \( F_* \) is an isomorphism, such a \( D \) also gives \begin{equation} \hat{\pi} F_*(N^c) = D \hat{\pi} F_*(N). \end{equation} Row \( i \) of the matrix \( D \), for indices \( i \) such that \( n^c_i = n_j \) for some \( j \), will just have \( D_{ik} = \delta_{kj} \) for \( 1 \leq k \leq \nu \). The remaining rows must be constructed somehow via an interpolation argument, although the details will vary by element. This discussion suggests a three-stage process, each encoded by matrix multiplication, for converting the push-forwards of the physical nodes to the reference nodes, hence giving a factored form of $V$ in~\eqref{eq:V}. Before working examples, we summarize this in the following theorem: \begin{theorem} Let $(K,P,N)$ and $(\hat{K}, \hat{P}, \hat{N})$ be finite elements with affine mapping $F:K\rightarrow \hat{K}$ and suppose that $F^*(\hat{P}) = P$. Let $N^c$ and $\hat{N}^c$ be a compatible nodal completion of \( N \) and \( \hat{N} \). Then given the matrices $E \in \mathbb{R}^{\nu \times \mu}$ from~\eqref{eq:e}, $V^c \in \mathbb{R}^{\mu \times \mu}$ from~\eqref{eq:vc}, and $D \in \mathbb{R}^{\mu \times \nu}$ from~\eqref{eq:d} that builds the restrictions of the extended nodes out of the given physical nodes, the nodal transformation matrix $V$ satisfies \begin{equation} V = E V^c D. \end{equation} \end{theorem} This gives a general outline for mapping finite elements, and we illustrate now by turning to the Morley element. \subsubsection{The Morley element} Following our earlier notation for the geometry and nodes, we order the nodes of a Morley triangle by \begin{equation} \label{eq:morleyN} N = \begin{bmatrix} \delta_{\mathbf{v}_1} & \delta_{\mathbf{v}_2} & \delta_{\mathbf{v}_3} & \delta^{\mathbf{n}_1}_{\mathbf{e}_1} & \delta^{\mathbf{n}_2}_{\mathbf{e}_2} & \delta^{\mathbf{n}_3}_{\mathbf{e}_3} \end{bmatrix}^T. \end{equation} The completed nodes $N^c$ will also include tangential derivatives at the edge midpoints.
We put \begin{equation} \label{eq:morleyNc} N^c = \begin{bmatrix} \delta_{\mathbf{v}_1} & \delta_{\mathbf{v}_2} & \delta_{\mathbf{v}_3} & (\nabla^{\mathbf{n}_1\mathbf{t}_1}_{\mathbf{e}_1})^T & (\nabla^{\mathbf{n}_2\mathbf{t}_2}_{\mathbf{e}_2})^T & (\nabla^{\mathbf{n}_3\mathbf{t}_3}_{\mathbf{e}_3})^T \end{bmatrix}^T. \end{equation} Again, this is a block vector; the last three entries each consist of two values. We give the same ordering of reference element nodes \( \hat{N} \) and \( \hat{N}^c \). The matrix $E$ simply extracts the members of $N^c$ that are also in $N$, so with $\eta = \begin{bmatrix} 1 & 0 \end{bmatrix}$, we have the block matrix \begin{equation} E = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & \eta & 0 & 0 \\ 0 & 0 & 0 & 0 & \eta & 0 \\ 0 & 0 & 0 & 0 & 0 & \eta \end{bmatrix}. \end{equation} Because the gradient nodes in \( N^c \) use normal and tangential coordinates, \( V^c \) will be slightly more complicated than \( V \) for the Hermite element. For local edge \( \gamma_i \), we define the (orthogonal) matrix \[ G_i = \begin{bmatrix} \mathbf{n}_i & \mathbf{t}_i \end{bmatrix}^T \] with the normal and tangent vector in the rows. Similarly, we let \[ \hat{G}_i = \begin{bmatrix} \hat{\mathbf{n}}_i & \hat{\mathbf{t}}_i \end{bmatrix}^T \] contain the unit normal and tangent to edge \( \hat{\gamma}_i \) of the reference cell \( \hat{K} \). It is clear that \begin{equation} F_*(\nabla^{\mathbf{n}_i \mathbf{t}_i}_{\mathbf{e}_i}) = F_*(G_i \nabla_{\mathbf{e}_i}) = G_i F_*(\nabla_{\mathbf{e}_i}) = G_i J^T \hat{\nabla}_{\hat{\mathbf{e}}_i} = G_i J^T \hat{G}_i^T \hat{\nabla}^{\hat{\mathbf{n}}_i\hat{\mathbf{t}}_i}_{\hat{\mathbf{e}}_i}, \end{equation} so, defining \begin{equation} \label{eq:B} B^i = (G_i J^T \hat{G}_i^T)^{-1} = \hat{G}_i J^{-T} G_i^T, \end{equation} we have that \begin{equation} V^c = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & B^1 & 0 & 0 \\ 0 & 0 & 0 & 0 & B^2 & 0 \\ 0 & 0 & 0 & 0 & 0 & B^3 \\ \end{bmatrix}. \end{equation} Now, we turn to the matrix \( D \in \mathbb{R}^{9 \times 6} \), writing members of \( \pi N^c \) in terms of \( \pi N \) alone. The challenge is to express the tangential derivative nodes in terms of the remaining six nodes -- vertex values and normal derivatives. In fact, only the vertex values are needed. Along any edge, any member of $P$ is just a univariate quadratic polynomial, and so the tangential derivative is linear. Linear functions attain their average value over an interval at its midpoint. But the average value of the derivative over the edge is just the difference between vertex values divided by the edge length. The matrix $D$ must be \begin{equation} D = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & -\ell_1^{-1} & \ell_1^{-1} & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ -\ell_2^{-1} & 0 & \ell_2^{-1} & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \\ -\ell_3^{-1} & \ell_3^{-1} & 0 & 0 & 0 & 0 \\ \end{bmatrix}. \end{equation} We can also arrive at this formulation of \( D \) in another way, which sets up the discussion used for the Argyris and later Bell elements. Consider the following univariate result: \begin{proposition} Let \( p(x) \) be any quadratic polynomial on \( [-1, 1] \). Then \begin{equation} p^\prime(0) = \tfrac{1}{2} \left( p(1) - p(-1) \right). \end{equation} \end{proposition} \begin{proof} Write \( p(x) = a + b x + c x^2 \).
Then \( p^\prime(x) = b + 2 c x \) so that \( p^\prime(0) = b \). Also note that \( p(1) = a + b + c \) and \( p(-1) = a - b + c \). Wanting to write \( p^\prime(0) = d_1 p(1) + d_{-1} p(-1) \) for constants \( d_1 \) and \( d_{-1} \) leads to a \( 2 \times 2 \) linear system, which is readily solved to give \( d_1 = -d_{-1} = \tfrac{1}{2} \). \end{proof} Then, by a change of variables, this rule can be mapped to \( \left[ -\tfrac{\ell}{2}, \tfrac{\ell}{2} \right] \) so that \[ p^\prime(0) = \tfrac{1}{\ell} \left( p(\tfrac{\ell}{2}) - p(-\tfrac{\ell}{2}) \right). \] Finally, one can apply this rule on the edge of a triangle running from \( \mathbf{v}_a \) to \( \mathbf{v}_b \) to find that \[ \pi \delta^{\mathbf{t}_i}_{\mathbf{e}_i} = \tfrac{1}{\ell_i} \left( \pi \delta_{\mathbf{v}_b} - \pi \delta_{\mathbf{v}_a} \right). \] It is interesting to explicitly compute the product \(V = E V^c D\), as giving a single formula rather than a product of matrices is more useful in practice. Multiplying through gives: \begin{equation} V = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & \tfrac{-B^1_{12}}{\ell_1} & \tfrac{B^1_{12}}{\ell_1} & B^1_{11} & 0 & 0 \\ \tfrac{-B^2_{12}}{\ell_2} & 0 & \tfrac{B^2_{12}}{\ell_2} & 0 & B^2_{11} & 0 \\ \tfrac{-B^3_{12}}{\ell_3} & \tfrac{B^3_{12}}{\ell_3} & 0 & 0 & 0 & B^3_{11} \end{bmatrix}. \end{equation} From the definition of $B^i$, it is possible to calculate its entries explicitly in terms of those of the Jacobian and the normal and tangent vectors for $K$ and $\hat{K}$. Only the first row of each $B^i$ is needed: \begin{equation} \begin{split} B^i_{11} & = \hat{n}^{\mathbf{x}}_i \left( n^{\mathbf{x}}_i \tfrac{\partial x}{\partial \hat{x}} + t^{\mathbf{x}}_i \tfrac{\partial y}{\partial \hat{x}} \right) + \hat{t}^{\mathbf{x}}_i \left( n^{\mathbf{x}}_i \tfrac{\partial x}{\partial \hat{y}} + t^{\mathbf{x}}_i \tfrac{\partial y}{\partial \hat{y}} \right) \\ B^i_{12} & = \hat{n}^{\mathbf{x}}_i \left( n^{\mathbf{y}}_i \tfrac{\partial x}{\partial \hat{x}} + t^{\mathbf{y}}_i \tfrac{\partial y}{\partial \hat{x}} \right) + \hat{t}^{\mathbf{x}}_i \left( n^{\mathbf{y}}_i \tfrac{\partial x}{\partial \hat{y}} + t^{\mathbf{y}}_i \tfrac{\partial y}{\partial \hat{y}} \right) \\ \end{split} \end{equation} We can also recall that the normal and tangent vectors are related by $n^{\mathbf{x}} = t^{\mathbf{y}}$ and $n^{\mathbf{y}} = -t^{\mathbf{x}}$ to express these entries purely in terms of either the normal or tangent vectors. Each entry of the Jacobian and the normal and tangent vectors of $K$ and $\hat{K}$ enters into the transformation. In this form, \( V \) has 12 nonzero entries, although the formation of those entries, which depend on normal and tangent vectors and the Jacobian, from the vertex coordinates requires an additional amount of arithmetic. The Jacobian will typically be computed anyway, and the cost of working with \( M = V^T \) will again be subdominant to the nested loops over basis functions and quadrature points required to form element matrices, much like Hermite. \subsubsection{The Argyris element} Because it is of higher degree than Morley and contains second derivatives among the nodes, the Argyris transformation is more involved. However, it is a prime motivating example and also demonstrates that the general theory here reproduces the specific technique in~\citep{dominguez2008algorithm}. The classical Argyris element has $P$ as polynomials of degree 5 over a triangle \( K \), a 21-dimensional space.
The 21 associated nodes $N$ are selected as the point values, gradients, and all three unique second derivatives at the vertices together with the normal derivatives evaluated at edge midpoints. These nodal choices lead to a proper $C^1$ element, and $C^2$ continuity is obtained at vertices. Since the Argyris elements do not form an affine-interpolation equivalent family, we will need to embed the physical nodes into a larger set. Much as with Morley elements, the edge normal derivatives will be augmented by the tangential derivatives. With this notation, $N$ is a vector of 21 functionals and $N^c$ a vector of 24 functionals written as \begin{equation} \begin{split} N & = \left[ \begin{array}{cccccccccccc} \delta_{\mathbf{v}_1} & \nabla_{\mathbf{v}_1} & \bigtriangleup_{\mathbf{v}_1} & \delta_{\mathbf{v}_2} & \nabla_{\mathbf{v}_2} & \bigtriangleup_{\mathbf{v}_2} & \delta_{\mathbf{v}_3} & \nabla_{\mathbf{v}_3} & \bigtriangleup_{\mathbf{v}_3} & \delta^{\mathbf{n}_1}_{\mathbf{e}_1} & \delta^{\mathbf{n}_2}_{\mathbf{e}_2} & \delta^{\mathbf{n}_3}_{\mathbf{e}_3} \end{array} \right]^T, \\ N^c &= \left[\begin{array}{cccccccccccc} \delta_{\mathbf{v}_1} & \nabla_{\mathbf{v}_1} & \bigtriangleup_{\mathbf{v}_1} & \delta_{\mathbf{v}_2} & \nabla_{\mathbf{v}_2} & \bigtriangleup_{\mathbf{v}_2} & \delta_{\mathbf{v}_3} & \nabla_{\mathbf{v}_3} & \bigtriangleup_{\mathbf{v}_3} & \nabla^{\mathbf{n_1t_1}}_{\mathbf{e}_1} & \nabla^{\mathbf{n_2t_2}}_{\mathbf{e}_2} & \nabla^{\mathbf{n_3t_3}}_{\mathbf{e}_3} \end{array} \right]^T, \end{split} \end{equation} with corresponding ordering of reference nodes \( \hat{N} \) and \( \hat{N}^c \). The \(21 \times 24\) matrix $E$ just selects out the items in $N^c$ that are also in $N$, so that \[ E_{ij} = \begin{cases} 1, & \text{for } 1 \leq i=j \leq 19 \ \text{or } (i,j) \in \left\{(20, 21), (21, 23)\right\} \\ 0, & \text{otherwise.} \end{cases} \] The matrix $V^c$ relating the push-forward of the extended nodes to the extended reference nodes is block diagonal and similar to our earlier examples. We use~\eqref{eq:justthechainrule} to map the vertex gradient nodes as in the Hermite case.
Mapping the three unique second derivatives by the chain rule requires the matrix \begin{equation} \Theta = \begin{bmatrix} \left( \tfrac{\partial \hat{x}}{\partial x} \right)^2 & 2 \tfrac{\partial \hat{x}}{\partial x} \tfrac{\partial \hat{y}}{\partial x} & \left( \tfrac{\partial \hat{y}}{\partial x} \right)^2 \\ \tfrac{\partial \hat{x}}{\partial y} \tfrac{\partial \hat{x}}{\partial x} & \tfrac{\partial{\hat{x}}}{\partial y} \tfrac{\partial\hat{y}}{\partial x} + \tfrac{\partial \hat{x}}{\partial x} \tfrac{\partial \hat{y}}{\partial y} & \tfrac{\partial \hat{y}}{\partial x} \tfrac{\partial \hat{y}}{\partial y} \\ \left(\tfrac{\partial \hat{x}}{\partial y} \right)^2 & 2 \tfrac{\partial \hat{x}}{\partial y} \tfrac{\partial \hat{y}}{\partial y} & \left( \tfrac{\partial \hat{y}}{\partial y} \right)^2 \end{bmatrix}. \end{equation} The edge midpoint nodes transform by $B$ just as in~\eqref{eq:B}, so that $V^c$ is \begin{equation} \label{eq:VC} V^c = \left[ \begin{array}{cccccccccccc} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & J^{-T} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & \Theta^{-1} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & J^{-T} & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & \Theta^{-1} & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & J^{-T} & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \Theta^{-1} & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & B^1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & B^2 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & B^3 \\ \end{array} \right]. \end{equation} Constructing $D$ is, as for Morley, slightly more delicate. The additional nodes acting on quintic polynomials -- tangential derivatives at edge midpoints -- must be written in terms of the remaining nodes. The first aspect of this involves a univariate interpolation-theoretic question. On the biunit interval $[-1, 1]$, we seek a rule of the form \[ f'(0) \approx a_1 f(-1) + a_2 f(1) + a_3 f^\prime(-1) + a_4 f^\prime(1) + a_5 f^{\prime\prime}(-1) + a_6 f^{\prime\prime}(1) \] that is exact when $f$ is a quintic polynomial. The coefficients may be determined by writing a $6\times 6$ linear system asserting correctness on the monomial basis. The answer, given in~\citep{dominguez2008algorithm}, is that \begin{proposition} Any quintic polynomial \( p \) defined on \( [-1, 1] \) satisfies \begin{equation} p^\prime(0) = \tfrac{15}{16} \left( p(1) - p(-1) \right) - \tfrac{7}{16} \left( p^\prime(1) + p^\prime(-1) \right) + \tfrac{1}{16} \left( p^{\prime\prime}(1) - p^{\prime\prime}(-1) \right). \end{equation} \end{proposition} This can be mapped to the interval $[-\tfrac{\ell}{2}, \tfrac{\ell}{2}]$ by a change of variables: \begin{equation} p^\prime(0) = \tfrac{15}{8\ell} \left( p\left(\tfrac{\ell}{2}\right) - p\left(\tfrac{-\ell}{2}\right) \right) - \tfrac{7}{16} \left( p^\prime\left(\tfrac{\ell}{2}\right) + p^\prime\left(\tfrac{-\ell}{2}\right)\right) + \tfrac{\ell}{32} \left( p^{\prime\prime}\left(\tfrac{\ell}{2}\right) - p^{\prime\prime}\left(\tfrac{-\ell}{2}\right) \right). \end{equation} Now, we can use this to compute the tangential derivative at an edge midpoint, expanding the tangential first and second derivatives in terms of the Cartesian derivatives.
If $\mathbf{v}_a$ and $\mathbf{v}_b$ are the beginning and ending vertex of edge \(\gamma_i\) with midpoint $\mathbf{e}_i$ and length $\ell_i$, we write the tangential derivative acting on quintics as \begin{equation} \begin{split} \pi \delta^{\mathbf{t}_i}_{\mathbf{e}_i} = & \tfrac{15}{8\ell_i} \left( \delta_{\mathbf{v}_b} - \delta_{\mathbf{v}_a} \right) - \tfrac{7}{16} \left( t^{\mathbf{x}}_i \left( \delta^{\mathbf{x}}_{\mathbf{v}_b} + \delta^{\mathbf{x}}_{\mathbf{v}_a} \right) + t^{\mathbf{y}}_i \left( \delta^{\mathbf{y}}_{\mathbf{v}_b} + \delta^{\mathbf{y}}_{\mathbf{v}_a} \right) \right) \\ & + \tfrac{\ell_i}{32} \left( (t_i^{\mathbf{x}})^2 \left( \delta^{\mathbf{xx}}_{\mathbf{v}_b} - \delta^{\mathbf{xx}}_{\mathbf{v}_a} \right) + 2 t_i^{\mathbf{x}} t_i^{\mathbf{y}} \left( \delta^{\mathbf{xy}}_{\mathbf{v}_b} - \delta^{\mathbf{xy}}_{\mathbf{v}_a} \right) +(t_i^{\mathbf{y}})^2 \left( \delta^{\mathbf{yy}}_{\mathbf{v}_b} - \delta^{\mathbf{yy}}_{\mathbf{v}_a} \right) \right). \end{split} \end{equation} For each edge \( \gamma_i \), define the vector \( \mathbf{\tau}_i\) by \[ \mathbf{\tau}_i = \begin{bmatrix} (t^{\mathbf{x}}_i)^2 & 2 t^{\mathbf{x}}_i t^{\mathbf{y}}_i & (t^{\mathbf{y}}_i)^2 \end{bmatrix}^T. \] The end result is that \begin{equation} D = \left[ \begin{array}{cccccccccccc} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & I_2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & I_3 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & I_2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & I_3 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & I_2 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & I_3 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & \tfrac{-15}{8\ell_1} & \tfrac{-7}{16} \mathbf{t}^T_1 & \tfrac{-\ell_1}{32} \mathbf{\tau}^T_1 & \tfrac{15}{8\ell_1} & \tfrac{-7}{16} \mathbf{t}^T_1 & \tfrac{\ell_1}{32} \mathbf{\tau}^T_1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ \tfrac{-15}{8\ell_2} & \tfrac{-7}{16} \mathbf{t}^T_2 & \tfrac{-\ell_2}{32} \mathbf{\tau}^T_2 & 0 & 0 & 0 & \tfrac{15}{8\ell_2} & \tfrac{-7}{16} \mathbf{t}^T_2 & \tfrac{\ell_2}{32} \mathbf{\tau}^T_2 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ \tfrac{-15}{8\ell_3} & \tfrac{-7}{16} \mathbf{t}^T_3 & \tfrac{-\ell_3}{32} \mathbf{\tau}^T_3 & \tfrac{15}{8\ell_3} & \tfrac{-7}{16} \mathbf{t}^T_3 & \tfrac{\ell_3}{32} \mathbf{\tau}^T_3 & 0 & 0 & 0 & 0 & 0 & 0 \\ \end{array} \right]. \end{equation} If this transformation is kept in factored form, \( D \) contains 57 nonzero entries and \( V^c \) contains 54 nonzero entries. \(E \) is just a Boolean matrix and its application requires only copies. So, application of \( M \) requires no more than \( 111 \) floating-point operations, besides the cost of forming the entries themselves. While this is about ten times the cost of the Hermite transformation, it is for about twice the number of basis functions and still well-amortized over the cost of integration loops. Additionally, one can multiply out the product \( E V^c D \) symbolically and find only 81 nonzero entries, which reduces the cost of multiplication accordingly. \subsection{Generalizations} \subsubsection{Non-affine mappings} Non-affine geometric transformations, whether for simplicial or other element shapes, present no major complications to the theory.
In this case, \( K \) and \( \hat{K} \) are related by a non-affine map, and \( P \) is taken to be the image of \( \hat{P} \) under pull-back \begin{equation} P = \left\{ F^*(\hat{p}) : \hat{p} \in \hat{P} \right\}, \end{equation} although this space need not consist of polynomials for non-affine \( F \). At any rate, one may define Hermite elements on curvilinear cells~\citep{ciarlet1978finite,dautray2012mathematical}. In this case, the Jacobian matrix varies spatially so that each instance of \( J^T \) in~\eqref{eq:blockguy} must be replaced by the particular value of \( J^T \) at each vertex. \subsubsection{Generalized pullbacks} Many vector-valued finite element spaces make use of pull-backs other than composition with affine maps. For example, the Raviart-Thomas and N\'ed\'elec elements use contravariant and covariant Piola maps, respectively. Because these preserve either normal or tangential components, one can put the nodal basis functions of a given element \( (K, P, N) \) and reference element \( (\hat{K}, \hat{P}, \hat{N}) \) into one-to-one correspondence by means of the Piola transform, a fact used heavily in~\citep{RognesKirbyEtAl2009a}. It would be straightforward to give a generalization of affine equivalence to equivalence under an arbitrary pull-back \( F^* \), with push-forward defined in terms of \( F^* \). In this case, the major structure of \S~\ref{sec:lagrange} would be unchanged. However, not all \( H(\mathrm{div}) \) elements form equivalent families under the contravariant Piola transform. For example, Mardal, Tai, and Winther~\citep{mardal2002robust} give an element that can be paired with discontinuous polynomials to give uniform inf-sup stability on a scale of spaces between \( H(\mathrm{div}) \) and \( (H^1)^2 \), although it is \( H^1 \)-nonconforming. The degrees of freedom include constant and linear moments of normal components on edges, which are preserved under Piola mapping. However, the nodes also include the constant moments of the tangential component on edges, which are \emph{not} preserved under Piola transform. One could push forward both the normal and tangential constant moments, then express them as a linear combination of the normal and tangential moments on the reference cell in a manner like~\eqref{eq:blockguy}. One could see the Mardal--Tai--Winther element as satisfying a kind of ``Piola-interpolation equivalence'' and readily adapt the techniques used for Hermite elements. \subsection{A further note on computation} We have commented on the added cost of multiplying the set of basis functions by \( M \) during local integration. It is also possible to apply the transformation in a different way that perhaps more fully leverages pre-existing computer routines. With this approach, \( M \) can also be included in local matrix assembly by means of a congruence transform acting on the ``wrong'' element matrix as follows. Given a finite element \( (K, P, N) \) with nodal basis \( \Psi = \{ \psi_i \}_{i=1}^\nu \) and bilinear form \( a_K(\cdot, \cdot) \) over the domain \( K \), we want to compute the matrix \begin{equation} A^K_{ij} = a_K(\psi_j, \psi_i). \end{equation} Suppose that a computer routine existed for evaluating \( A^K \) via a reference mapping for affine-equivalent elements. That is, given the mapping \( F:K \rightarrow \hat{K} \), this routine maps all integration to the reference domain \( \hat{K} \) assuming that the integrand over \( K \) is just the affine pull-back of something on \( \hat{K} \).
Consider the following computation: \begin{equation} \begin{split} A^K_{ij} & = a_K(\psi_j, \psi_i) \\ & = a_K \left( \sum_{\ell_2=1}^\nu M_{j\ell_2} F^*(\hat{\psi}_{\ell_2}) , \sum_{\ell_1=1}^\nu M_{i\ell_1} F^*(\hat{\psi}_{\ell_1}) \right) \\ & = \sum_{\ell_1, \ell_2 = 1}^\nu M_{j\ell_2} M_{i\ell_1} a_K( F^*(\hat{\psi}_{\ell_2}), F^*(\hat{\psi}_{\ell_1})) \\ \end{split} \end{equation} Now, this is expressed entirely in terms of the affine pull-backs of reference-element basis functions and so could use the hypothesized computer routine. Writing \( \tilde{A}^K_{\ell_1 \ell_2} = a_K( F^*(\hat{\psi}_{\ell_2}), F^*(\hat{\psi}_{\ell_1}) ) \) for the matrix that routine produces, we then have \begin{equation} A^K_{ij} = \sum_{\ell_1, \ell_2 = 1}^\nu M_{i\ell_1} M_{j\ell_2} \tilde{A}^K_{\ell_1\ell_2}, \end{equation} or, more compactly, \[ A^K = M \tilde{A}^K M^T, \] where \( \tilde{A}^K \) is the matrix one would obtain by using the pull-back of the reference element nodal basis functions instead of the actual nodal basis for \( (K, P, N) \). Hence, rather than applying \( M \) invasively at each quadrature point, one may use existing code for local integration and pre- and post-multiply the resulting matrix by the basis transformation. In the case of Hermite, for example, applying \( M \) to a vector costs 12 operations, so applying \( M \) to all 10 columns of \( \tilde{A}^K \) costs 120 operations, plus another 120 for the transpose. This adds 240 extra operations to the cost of building \( \tilde{A}^K \), or just 2.4 extra FLOPs per entry of the matrix. One may also apply this idea in a ``matrix-free'' context. Given a routine for applying \( \tilde{A}^K \) to a vector, one may simply apply \( M^T \) to the input vector, apply \( \tilde{A}^K \) to the result, and post-multiply by \( M \). Hence, one has the cost of multiplying by \( \tilde{A}^K \) plus the cost of applying \( M \) and its transpose to a single vector. In the case of Hermite, one has the cost of computing the ``wrong'' local matrix-vector product via an existing kernel plus 24 additional operations. Finally, we comment on evaluating discrete functions over elements requiring such transforms. Discrete function evaluation is frequently required in matrix-free computation, nonlinear residual evaluation, and in bilinear form evaluation when a coefficient is expressed in a finite element space. Suppose one has on a local element \( K \) a function expressed by \[ u = \sum_{j=1}^\nu c_j \psi_j, \] where \( c \in \mathbb{R}^\nu \) is the vector of coefficients and \( \{ \psi_j \} \) is the nodal basis for \( (K, P, N) \). In terms of pulled-back reference basis functions, \( u \) is given by \[ u = \sum_{j=1}^\nu c_j \left( \sum_{k=1}^\nu M_{jk} F^*(\hat{\psi}_k) \right) = \sum_{j, k=1}^\nu M_{jk} c_j F^*(\hat{\psi}_k), \] which can also be written as \begin{equation} u = \sum_{k=1}^\nu (M^T c)_k F^*(\hat{\psi}_k) = \sum_{k=1}^\nu (V c)_k F^*(\hat{\psi}_k). \end{equation} Just as one can build element matrices by means of the ``wrong'' basis functions and a patch-up operation, one can also evaluate functions by transforming the coefficients and then using the standard pullback of the reference basis functions. Such observations may make incorporating nonstandard element transformations into existing code more practical. \section{What if $P \neq F^*(\hat{P})$?} \label{sec:whatthebell} The theory so far has been predicated on $F^*$ providing an isomorphism between the reference and physical function spaces. In certain cases, however, this fails.
Our main motivation here is to transform the Bell element, a near-relative of the quintic Argyris element. In this case, one takes $P$ to be the subspace of $P_5$ that has cubic normal derivatives on edges rather than the typical quartic values. This reduction of $P$ by three dimensions is accompanied by removing the three edge normal derivatives at midpoints from $N$. In general, however, the pull-back $F^*(\hat{P})$ does not coincide with $P$. Instead of cubic normal derivatives on edges, $F^*(\hat{P})$ has reduced degree in some other direction corresponding to the image of the normal under affine mapping. The theory developed earlier can be extended somewhat to resolve this situation. \subsection{General theory: extending the finite element} Abstractly, one may view the Bell element or other spaces built by constraint as the intersection of the null spaces of a collection of functionals acting on some larger space as follows. Let $(K, P, N)$ be a finite element. Suppose that $P \subset \tilde{P}$ and that \( \{ \lambda_i \}_{i=1}^\kappa \subset \left( C^k_b \right)^\prime \) are linearly independent functionals that, when acting on \( \tilde{P} \), satisfy \begin{equation} \label{eq:PfromPtilde} P = \cap_{i=1}^\kappa \mathrm{null}(\lambda_i). \end{equation} The following result is not difficult to prove: \begin{proposition} Let \( (K, P, N) \) be a finite element with \( \cap_{i=1}^\kappa\mathrm{null}(\lambda_i) = P \subset \tilde{P} \) as per~\eqref{eq:PfromPtilde}. Similarly, let \( (\hat{K}, \hat{P}, \hat{N})\) be a reference element with \( \cap_{i=1}^\kappa\mathrm{null}(\hat{\lambda}_i) = \hat{P} \subset \tilde{\hat{P}} \). Suppose that \(\tilde{P} = F^*(\tilde{\hat{P}}).\) Then \(P = F^*(\hat{P})\) iff \begin{equation} \label{eq:spancondition} \mathrm{span}\{F_*(\lambda_i)\}_{i=1}^\kappa = \mathrm{span}\{\hat{\lambda}_i\}_{i=1}^\kappa. \end{equation} \end{proposition} In the case of the Bell element, the span condition~\eqref{eq:spancondition} fails, and so the function space is not preserved under affine mapping. Consequently, the theory of the previous section predicated on this preservation does not directly apply. Instead, we proceed by making the following observation. \begin{proposition} \label{prop:extendedelement} Let $(K, P, N)$ be a finite element with $P \subset \tilde{P}$ satisfying \( P = \cap_{i=1}^\kappa \mathrm{null}(\lambda_i)\) for linearly independent functionals \( \{\lambda_i\}_{i=1}^\kappa \). Define \[ \tilde{N} = \begin{bmatrix} N \\ L \end{bmatrix} \] to include the nodes of $N$ together with $L = \begin{bmatrix} \lambda_1 & \lambda_2 & \dots & \lambda_\kappa \end{bmatrix}^T$. Then $(K, \tilde{P}, \tilde{N})$ is a finite element. \end{proposition} \begin{proof} Since we have a finite-dimensional function space, it remains to show that $\tilde{N}$ is linearly independent and hence spans $\tilde{P}^\prime$. Consider a linear combination in $\tilde{P}^\prime$ \[ \sum_{i=1}^\nu c_i n_i + \sum_{i=1}^\kappa d_i \lambda_i = 0. \] Apply this linear combination to any $p \in P$ to find \[ \sum_{i=1}^\nu c_i n_i(p) = 0 \] since \( \lambda_i(p) = 0 \) for \( p \in P\). Because $(K, P, N)$ is a finite element, the $n_i$ are linearly independent in \( P^\prime \), so $c_i = 0$ for $1 \leq i \leq \nu$. Applying the same linear combination to any $p \in \tilde{P} \backslash P$ then gives that $d_i=0$ since the constraint functionals are also linearly independent.
\end{proof} Given a nodal basis \( (K, \tilde{P}, \tilde{N})\), it is easy to obtain one for \( (K, P, N) \). \begin{proposition} \label{prop:extendedelementbasis} Let $(K, P, N)$, $\{ \lambda_i\}_{i=1}^\kappa$, and $(K, \tilde{P}, \tilde{N})$ be as in Proposition~\ref{prop:extendedelement}. Order the nodes in $\tilde{N}$ by \( \tilde{N} = \begin{bmatrix} N \\ L \end{bmatrix} \) with $L_i = \lambda_i$ for $1 \leq i \leq \kappa$. Let $\{\tilde{\psi}_{i}\}_{i=1}^{\nu + \kappa}$ be the nodal basis for $(K, \tilde{P}, \tilde{N})$. Then \( \{ \tilde{\psi}_i \}_{i=1}^\nu \) is the nodal basis for $(K, P, N)$. \end{proposition} \begin{proof} Clearly, \(n_i(\tilde{\psi}_j) = \delta_{ij}\) for \( 1 \leq i, j \leq \nu \) by the ordering of the nodes in \(\tilde{N}\). Moreover, \(\{\tilde{\psi}_i\}_{i=1}^\nu \subset P\) because \(\lambda_i(\tilde{\psi}_j) = 0\) for each \(1 \leq i \leq \kappa\). \end{proof} \subsection{The Bell element} So, we can obtain a nodal basis for the Bell element or others with similarly constrained function spaces by mapping the nodal basis for a slightly larger finite element and extracting a subset of the basis functions. Let \( (K, P, N) \) and \( (\hat{K}, \hat{P}, \hat{N})\) be the Bell elements over \(K\) and reference cell \( \hat{K} \). Recall that the Legendre polynomial of degree $n$ is orthogonal to polynomials of degree $n-1$ or less. Let $\mathcal{L}^n$ be the Legendre polynomial of degree $n$ mapped from the biunit interval to edge \( \gamma_i \) of $K$. Define a functional \begin{equation} \label{eq:elldef} \lambda_i(p) = \int_{\gamma_i} \mathcal{L}^4(s) \left( \mathbf{n}_i \cdot \nabla p \right)ds. \end{equation} For any \( p \in P_5(K) \), its normal derivative on edge \( i \) is cubic iff $\lambda_i(p) = 0$. So, the constraint functionals are given in \( L = \begin{bmatrix} \lambda_1 & \lambda_2 & \lambda_3 \end{bmatrix}^T \) and $\tilde{N} = \begin{bmatrix} N \\ L \end{bmatrix}$ as in Proposition~\ref{prop:extendedelement}. We define \begin{equation} \label{eq:ellhatdef} \hat{\lambda}_i(p) = \int_{\hat{\gamma}_i} \mathcal{L}^4(s) \left( \hat{\mathbf{n}}_i \cdot \nabla p \right)ds \end{equation} and hence $(\hat{K}, \hat{P}, \hat{N})$ as well as \( \hat{L} \) and \( \tilde{\hat{N}} \) in a similar way. $P$ and $\hat{P}$ are the constrained spaces -- quintic polynomials with cubic normal derivatives on edges, while $\tilde{P}$ and $\tilde{\hat{P}}$ are the spaces of full quintic polynomials over $K$ and $\hat{K}$, respectively. We must construct a nodal basis for \( (\hat{K}, \tilde{\hat{P}}, \tilde{\hat{N}}) \), map it to a nodal basis for \( (K, \tilde{P}, \tilde{N}) \) by the techniques in Section~\ref{sec:theory}, and then take the subset of basis functions corresponding to the Bell basis. This is accomplished by specifying a compatible nodal extension of \( \tilde{N} \) and \( \tilde{\hat{N}} \) by including the edge moments of \emph{tangential} derivatives against \( \mathcal{L}^4 \) with those of \(\tilde{N}\) and \(\tilde{\hat{N}} \). We define \begin{equation} \label{eq:lamprime} \begin{split} \lambda_i^\prime(p) & = \int_{\gamma_i} \mathcal{L}^4(s) \left( \mathbf{t}_i \cdot \nabla p \right) ds, \\ \hat{\lambda}_i^\prime(p) & = \int_{\hat{\gamma}_i} \mathcal{L}^4(s) \left( \hat{\mathbf{t}}_i \cdot \nabla p \right) ds. \\ \end{split} \end{equation} We must specify the \( E \), \( V^c \), and \( D \) matrices for this extended set of finite element nodes. 
We focus first on \( D \), needing to compute each \( \lambda_i^\prime \) in terms of the remaining functionals. As with Morley and Argyris, we begin with univariate results. The following is readily confirmed, for example, by noting the right-hand side is a quintic polynomial and computing values and first and second derivatives at $\pm 1$: \begin{proposition} Let $p$ be any quintic polynomial on \([-1, 1]\). Then \begin{equation} \label{eq:interp} \begin{split} 16 p(x) & = -\left( x-1 \right)^3 \left( p^{\prime\prime}(-1) \left( x+1 \right)^2 + p^\prime(-1) \left( x+1\right) \left( 3x + 5 \right) + p(-1) \left( 3 x^2 + 9x + 8 \right) \right) \\ & + \left( x + 1 \right)^3 \left( p^{\prime\prime}(1) \left( x-1 \right)^2 - p^\prime(1) \left( x-1 \right)\left( 3x - 5 \right) + p(1) \left( 3 x^2 - 9 x + 8 \right) \right). \end{split} \end{equation} \end{proposition} The formula~\eqref{eq:interp} can be differentiated and then integrated against \( \mathcal{L}^4 \) to show that \begin{equation} \int_{-1}^1 p^\prime(x) \mathcal{L}^4(x) dx = \tfrac{1}{21} \left[p(1) - p(-1) - p^\prime(1) - p^\prime(-1) + \tfrac{1}{3} \left(p^{\prime\prime}(1) - p^{\prime\prime}(-1)\right) \right]. \end{equation} Then, this can be mapped to a general interval $[\tfrac{-\ell}{2}, \tfrac{\ell}{2}]$ by a simple change of variables: \begin{equation} \int_{-\tfrac{\ell}{2}}^{\tfrac{\ell}{2}} p^\prime(x) \mathcal{L}^4(x) dx = \tfrac{1}{21} \left[p\left(\tfrac{\ell}{2}\right) - p\left(-\tfrac{\ell}{2}\right) - \tfrac{\ell}{2} \left(p^\prime\left(\tfrac{\ell}{2}\right) + p^\prime\left(-\tfrac{\ell}{2}\right)\right) + \tfrac{\ell^2}{12}\left( p^{\prime\prime}\left(\tfrac{\ell}{2}\right) - p^{\prime\prime}\left(-\tfrac{\ell}{2}\right)\right) \right]. \end{equation} Now, we can use this to express the functionals \( \lambda_i^\prime \) from~\eqref{eq:lamprime} as linear combinations of the Bell nodes: \begin{proposition} Let $K$ be a triangle, and let $\mathbf{v}_a$ and $\mathbf{v}_b$ be the beginning and ending vertices of edge \( \gamma_i \), which has length $\ell_i$. Let $p$ be any bivariate quintic polynomial over $K$ and let $\lambda_i^\prime$ be defined as in~\eqref{eq:lamprime}. Then the restriction of $\lambda_i^\prime$ to bivariate quintic polynomials satisfies \begin{equation} \pi \lambda_i^\prime = \tfrac{1}{21} \left[ \pi \delta_{\mathbf{v}_b} - \pi\delta_{\mathbf{v}_a} - \tfrac{\ell_i}{2} \left(\pi\delta_{\mathbf{v}_b}^{\mathbf{t}_i} + \pi\delta_{\mathbf{v}_a}^{\mathbf{t}_i} \right) + \tfrac{\ell_i^2}{12} \left( \pi\delta_{\mathbf{v}_b}^{\mathbf{t}_i\mathbf{t}_i} - \pi\delta_{\mathbf{v}_a}^{\mathbf{t}_i\mathbf{t}_i} \right) \right], \end{equation} and hence \begin{equation} \begin{split} \pi \lambda_i^\prime = & \tfrac{1}{21} \left[ \pi\delta_{\mathbf{v}_b} - \pi\delta_{\mathbf{v}_a} \right] \\ & -\tfrac{\ell_i}{42} \left[ t_i^{\mathbf{x}} \left( \pi\delta_{\mathbf{v}_b}^\mathbf{x} + \pi\delta_{\mathbf{v}_a}^\mathbf{x} \right) + t_i^{\mathbf{y}} \left( \pi\delta_{\mathbf{v}_b}^\mathbf{y} + \pi \delta_{\mathbf{v}_a}^\mathbf{y} \right) \right] \\ & + \tfrac{\ell_i^2}{252} \left( \left(t_i^{\mathbf{x}}\right)^2 \left( \pi\delta_{\mathbf{v}_b}^{\mathbf{xx}} -\pi\delta_{\mathbf{v}_a}^{\mathbf{xx}} \right) + 2 t_i^{\mathbf{x}}t_i^{\mathbf{y}} \left( \pi\delta_{\mathbf{v}_b}^{\mathbf{xy}} -\pi\delta_{\mathbf{v}_a}^{\mathbf{xy}} \right) + \left(t_i^{\mathbf{y}}\right)^2 \left( \pi\delta_{\mathbf{v}_b}^{\mathbf{yy}} -\pi\delta_{\mathbf{v}_a}^{\mathbf{yy}} \right) \right).
\end{split} \end{equation} \end{proposition} Now, \( V^c \) is quite similar to that for the Argyris element. There is a slight difference in the handling of the edge nodes, for we have an integral moment instead of a point value and must account for the edge length accordingly. By converting between normal/tangent and Cartesian coordinates via the matrix \( G_i \) and mapping to the reference element, we find that for any \( p \), \begin{equation} \begin{split} \begin{bmatrix} \lambda_i(p) \\ \lambda_i^\prime(p) \end{bmatrix} & = \int_{\gamma_i} \mathcal{L}^4(s) \left( G_i \nabla p \right) ds \\ & = \int_{\hat{\gamma}_i} \left|\tfrac{d\hat{s}}{ds} \right|\mathcal{L}^4(\hat{s}) \left( G_i J^T \hat{G}_i^T \hat{\nabla}^{\hat{\mathbf{n}}_i\hat{\mathbf{t}}_i} \hat{p} \right) d\hat{s}\\ & = \left|\tfrac{d\hat{s}}{ds} \right|G_i J^T \hat{G}_i^T \begin{bmatrix} \hat{\lambda}_i(\hat{p}) \\ \hat{\lambda}_i^\prime(\hat{p}) \end{bmatrix}. \end{split} \end{equation} This calculation shows that \( V^c \) for the Bell element is identical to~\eqref{eq:VC} for Argyris, except with a geometric scaling of the \( B \) matrices. The extraction matrix \( E \) for the extended Bell elements consisting of full quintics now is identical to that for Argyris. Then, when evaluating basis functions, one multiplies the affinely mapped set of basis values by \( V^T \) and then takes only the first 18 entries to obtain the local Bell basis. \subsection{A remark on the Brezzi-Douglas-Fortin-Marini element} In~\citep{Kir04}, we describe a two-part process for computing the triangular Brezzi-Douglas-Fortin-Marini (BDFM) element~\cite{fortin1991mixed}, an \(H(\mathrm{div})\) conforming finite element based on polynomials of degree \( k \) with normal components constrained to have degree \( k - 1 \). This is a reduction of the Brezzi-Douglas-Marini element~\cite{brezzi1985two} somewhat as Bell is of Argyris. However, as both elements form Piola-equivalent families, the transformation techniques developed here are not needed. Like the Bell element, one can define constraint functionals (integral moments of normal components against the degree \( k \) Legendre polynomial) for BDFM. In~\citep{Kir04}, we formed a basis for the intersection of the null spaces of these functionals by means of a singular value decomposition. A nodal basis for the BDFM space then followed by building and inverting a generalized Vandermonde matrix on the basis for this constrained space. In light of Propositions~\ref{prop:extendedelement} and~\ref{prop:extendedelementbasis}, however, this process was rather inefficient. Instead, we could have merely extended the BDFM nodes by the constraint functionals, building and inverting a single Vandermonde-like matrix. If one takes the BDM edge degrees of freedom as moments of normal components against Legendre polynomials up to degree $(k-1)$ instead of pointwise normal values, then one can even build a basis for BDM that includes a basis for BDFM as a proper subset. \section{Numerical results} \label{sec:num} Incorporation of these techniques into high-level software tools such as Firedrake is the subject of ongoing investigation. In the meantime, we provide some basic examples written in Python, with sparse matrix assembly and solvers using petsc4py~\citep{Dalcin:2011}.
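Although we do not reproduce those scripts here, the patch-up operations of the preceding subsections amount to only a few lines of code. The following NumPy sketch is purely illustrative (the function names are ours, and the matrices \( \tilde{A}^K \) and \( M \) are assumed to have been computed elsewhere); it applies the congruence transform \( A^K = M \tilde{A}^K M^T \), its matrix-free variant, and the coefficient transform for discrete function evaluation described above.
\begin{verbatim}
import numpy as np

def patch_up_matrix(A_wrong, M):
    # Congruence transform: A^K = M * Atilde^K * M^T.
    return M @ A_wrong @ M.T

def patch_up_matvec(A_wrong, M, x):
    # Matrix-free action: apply M^T, then the "wrong" matrix, then M.
    return M @ (A_wrong @ (M.T @ x))

def patch_up_coefficients(c, M):
    # Coefficients for evaluation via pulled-back reference basis functions.
    return M.T @ c
\end{verbatim}
In each case the correction costs a small number of dense matrix or matrix-vector products on top of whatever existing kernel produces \( \tilde{A}^K \).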
\subsection{Scaling degrees of freedom} Before considering the accuracy of the \( L^2 \) projection, achieved via the global mass matrix, we comment on the conditioning of the mass and other matrices when both derivative and point value degrees of freedom appear. The Hermite element is illustrative of the situation. On a cell of typical diameter \( h \), consider a basis function corresponding to the point value at a given vertex. Since the vertex basis function has a size of \( \mathcal{O}(1) \) on a triangle of area \( \mathcal{O}(h^2) \), its \( L^2 \) norm should be \( \mathcal{O}(h) \). Now, consider a basis function corresponding to a vertex derivative. Its \emph{derivative} is now \( \mathcal{O}(1) \) on the cell, so that the \( H^1 \) seminorm is \( \mathcal{O}(h) \). Inverse inequalities suggest that the \( L^2 \) norm could then be as large as \( \mathcal{O}(1) \). That is, the different kinds of nodes introduce multiple scales of basis function sizes under transformation, which manifests in ill-conditioning. Where one expects a mass matrix to have an \( \mathcal{O}(1) \) condition number, one now obtains an \( \mathcal{O}(h^{-2}) \) condition number. This is observed even on a unit square mesh, in Figure~\ref{fig:masscond}. All condition numbers are computed by converting the PETSc mass matrix to a dense matrix and using LAPACK via scipy~\citep{scipy}. However, there is a simple solution. For the Hermite element, one can scale the derivative degrees of freedom locally by an ``effective \( h \)''. All cells sharing a given vertex must agree on that \( h \), which could be the average cell diameter among cells sharing a vertex. Scaling the nodes/basis functions (which amounts to multiplying \(V \) on the right by a diagonal matrix with 1's or \( h\)'s) removes the scale separation among basis functions and leads again to an \( \mathcal{O}(1) \) condition number for mass matrices, also seen in Figure~\ref{fig:masscond}. From here, we will assume that all degrees of freedom are appropriately scaled to give \(\mathcal{O}(1) \) conditioning for the mass matrix. \begin{figure} \caption{Condition numbers for cubic Lagrange and Hermite mass matrices on an \( N \times N \) mesh divided into right triangles. This demonstrates the \( \mathcal{O}(h^{-2}) \) growth of the condition number for the unscaled Hermite basis and the return to \( \mathcal{O}(1) \) conditioning once the derivative degrees of freedom are scaled.} \label{fig:masscond} \end{figure} \subsection{Accuracy of $L^2$ projection} Now, we demonstrate that optimal-order accuracy is obtained by performing \( L^2 \) projection of smooth functions into the Lagrange, Hermite, Morley, Argyris, and Bell finite element spaces. In each case we use an \( N \times N \) mesh divided into right triangles. Defining \( u(x, y) = \sin(\pi x) \sin(2 \pi y) \) on \( [0,1]^2 \), we seek \( u_h \) such that \begin{equation} \left( u_h , v_h \right) = \left( u , v_h \right) \end{equation} for each \( v_h \in V_h \), where \( V_h \) is one of the finite element spaces. Predicted asymptotic convergence rates -- third for Morley, fourth for Hermite and Lagrange, fifth for Bell, and sixth for Argyris -- are observed in Figure~\ref{fig:l2proj}. Note that the Hermite and Lagrange elements have the same order of approximation, but the Lagrange element delivers a slightly lower error. This is to be expected, as the space spanned by cubic Hermite triangles is a proper subset of that spanned by Lagrange. \begin{figure} \caption{Accuracy of \( L^2 \) projection using cubic Lagrange, Hermite, Morley, Argyris, and Bell elements.
All approach theoretically optimal rates.} \label{fig:l2proj} \end{figure} \subsection{The Laplace operator} As a simple second-order elliptic operator, we consider the Dirichlet problem for the Laplace operator on the unit square \( \Omega\): \begin{equation} \label{eq:laplace} -\Delta u = f, \end{equation} equipped with homogeneous Dirichlet boundary conditions \( u = 0 \) on \( \partial \Omega \). We divide \( \Omega \) into an \( N \times N \) mesh of triangles and let \( V_h \) be one of the Lagrange, Hermite, Argyris, or Bell finite element spaces, all of which are \( H^1 \)-conforming, over this mesh. The Morley element is not a suitable \( H^1 \) nonconforming element, so we do not use it here. We then seek \( u_h \in V_h \) such that \begin{equation} \left( \nabla u_h , \nabla v_h \right) = \left( f , v_h \right) \end{equation} for all \( v_h \in V_h \). Enforcing strong boundary conditions on elements with derivative degrees of freedom is delicate in general. However, with grid-aligned boundaries, it is less difficult. To force a function to be zero on a given boundary segment, we simply require that the vertex values and all derivatives tangent to the edge vanish. This amounts to setting to zero the \(x\)-derivatives on the top and bottom edges of the box and the \(y\)-derivatives on the left and right for the Hermite, Argyris, and Bell elements. Dirichlet conditions for Lagrange are enforced in the standard way. By the method of manufactured solutions, we select \( f(x, y) = 8 \pi^2 \sin(2 \pi x) \sin(2 \pi y) \) so that \( u(x,y) = \sin(2 \pi x) \sin(2 \pi y) \). In Figure~\ref{fig:laplaceerror}, we show the \( L^2 \) error in the computed solution for each element family. As the mesh is refined, all curves approach the expected orders of convergence -- fourth for Hermite and Lagrange, fifth for Bell, and sixth for Argyris. Again, the error for Lagrange is slightly smaller than for Hermite, albeit with more global degrees of freedom. \begin{figure} \caption{Convergence study of various elements for the second-order elliptic equation~\eqref{eq:laplace}.} \label{fig:laplaceerror} \end{figure} \subsection{The clamped plate problem} We now turn to a fourth-order problem for which the Argyris and Bell elements provide conforming \( H^2 \) discretizations and Morley a suitable nonconforming one. Following~\citep{BreSco}, we take the bilinear form defined on \( H^2( \Omega ) \) to be \begin{equation} a(u, v) = \int_{\Omega} \Delta u \Delta v - \left( 1 - \nu \right) \left( 2 u_{xx} v_{yy} + 2 u_{yy} v_{xx} - 4 u_{xy} v_{xy} \right) dx dy, \end{equation} where \( 0 < \nu < 1 \) yields a coercive bilinear form for any closed subspace of \( H^2 \) that does not contain nontrivial linear polynomials. We fix \( \nu = 0.5 \). Then, we consider the variational problem \begin{equation} a(u, v) = F(v) = \int_\Omega f v \ dx, \label{eq:plate} \end{equation} posed over suitable subspaces of \( H^2 \). It is known~\citep{BreSco} that solutions of~\eqref{eq:plate} that lie in \(H^4(\Omega) \) satisfy the biharmonic equation \( \Delta^2 u = f \) in an \( L^2 \) sense. We consider the clamped plate problem, in which both the function value and outward normal derivative are set to vanish, which removes nontrivial linear polynomials from the space. Again, we use the method of manufactured solutions on the unit square to select \( f(x,y) \) such that \( u(x,y) = \left( x(1-x) y(1-y) \right)^2 \), which satisfies clamped boundary conditions.
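The corresponding forcing term can be generated symbolically rather than by hand; the following short sympy sketch (illustrative only, and not part of the solver scripts described here) computes \( f = \Delta^2 u \) for the manufactured solution used in this test.
\begin{verbatim}
import sympy as sp

x, y = sp.symbols('x y')
u = (x*(1 - x)*y*(1 - y))**2                      # manufactured solution
lap = lambda w: sp.diff(w, x, 2) + sp.diff(w, y, 2)
f = sp.expand(lap(lap(u)))                        # biharmonic forcing term
print(f)
\end{verbatim}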
We solve this problem with Argyris and Bell elements, and then also use the nonconforming Morley element in the bilinear form. Again, expected orders of convergence are observed in Figure~\ref{fig:plateerr}. \begin{figure} \caption{Convergence study of the Morley, Argyris, and Bell elements for the clamped plate biharmonic problem~\eqref{eq:plate}.} \label{fig:plateerr} \end{figure} \section{Conclusions} Many users have wondered why FEniCS, Firedrake, and most other high-level finite element tools lack the full array of triangular elements, including Argyris and Hermite. One answer is that fundamental mathematical aspects of mapping such elements have remained relatively poorly understood. This work demonstrates the challenges involved with mapping such elements from a reference cell, but also proposes a general paradigm for overcoming those challenges by embedding the nodes into a larger set that transforms more cleanly and using interpolation techniques to relate the additional nodes back to the original ones. In the future, we hope to incorporate these techniques in FInAT (\url{https://github.com/FInAT/FInAT}), a successor project to FIAT that produces abstract syntax for finite element evaluation rather than flat tables of numerical values. TSFC~\cite{tsfc} already relies on FInAT to enable sum-factorization of tensor-product bases. If FInAT can provide rules for evaluating the matrix \( M \) in terms of local geometry on a per-finite element basis, then TSFC and other form compilers should be able to seamlessly (from the end-users' perspective) generate code for many new kinds of finite elements. \end{document}
\begin{document} \author{Jihun Han\thanks{[email protected]}\; and Hyungbin Park\thanks{[email protected], [email protected]}\\ \\ Courant Institute of Mathematical Sciences,\\ New York University, New York, USA \\ \\ {\normalsize First version: Nov 09, 2014} \\ {\normalsize Final version: Mar 03, 2015}} \title{The Intrinsic Bounds on the Risk Premium of Markovian Pricing Kernels} \date{} \maketitle \begin{abstract} The risk premium is one of main concepts in mathematical finance. It is a measure of the trade-offs investors make between return and risk and is defined by the excess return relative to the risk-free interest rate that is earned from an asset per one unit of risk. The purpose of this article is to determine upper and lower bounds on the risk premium of an asset based on the market prices of options. One of the key assumptions to achieve this goal is that the market is Markovian. Under this assumption, we can transform the problem of finding the bounds into a second-order differential equation. We then obtain upper and lower bounds on the risk premium by analyzing the differential equation. \end{abstract} \section{Introduction} \label{sec:intro} The {\em risk premium} or {\em market price of risk} is one of main concepts in mathematical finance. The risk premium is a measure of the trade-offs investors make between return and risk and is defined by the excess return relative to the risk-free interest rate earned from an asset per one unit of risk. The risk premium determines the relation between an objective measure and a risk-neutral measure. An objective measure describes the actual stochastic dynamics of markets, and a risk-neutral measure determines the prices of options. Recently, many authors have suggested that the risk premium (or, equivalently, objective measure) can be determined from a risk-neutral measure. Ross \cite{Ross13} demonstrated that the risk premium can be uniquely determined by a risk-neutral measure. His model assumes that there is a finite-state Markov process $X_{t}$ that drives the economy in discrete time $t\in\mathbb{N}.$ Many authors have extended his model to a continuous-time setting using a Markov diffusion process $X_{t}$ with state space $\mathbb{R}$; see, e.g., \cite{Borovicka14},\cite{Carr12},\cite{Dubynskiy13},\cite{Goodman14},\cite{Park14b},\cite{Qin14b} and \cite{Walden13}. Unfortunately, in the continuous-time model, the risk premium is not uniquely determined from a risk-neutral measure \cite{Goodman14}, \cite{Park14b}. To determine the risk premium uniquely, all of the aforementioned authors assumed that some information about the objective measure was known or restricted the process $X_t$ to some class. Borovicka, Hansen and Scheinkman \cite{Borovicka14} made the assumption that the process $X_t$ is {\em stochastically stable} under the objective measure. In \cite{Carr12}, Carr and Yu assumed that the process $X_{t}$ is a {\em bounded} process. Dubynskiy and Goldstein \cite{Dubynskiy13} explored Markov diffusion models with {\em reflecting boundary} conditions. In \cite{Park14b}, Park assumed that $X_t$ is non-attracted to the left (or right) boundary under the objective measure. Qin and Linetsky \cite{Qin14b} and Walden \cite{Walden13} assumed that the process $X_{t}$ is {\em recurrent} under the objective measure. Without these assumptions, one cannot determine the risk premium uniquely. The purpose of this article is to investigate the bounds of the risk premium. 
As mentioned above, without further assumptions, the risk premium is not uniquely determined, but one can determine upper and lower bounds on the risk premium. To determine these bounds, we need to consider how the risk premium of an asset is determined in a financial market. A key assumption of this article is that the reciprocal of the pricing kernel is expressed in the form $e^{\beta t}\,\phi(X_{t})$ for some positive constant $\beta$ and positive function $\phi(\cdot).$ For example, in the {\em consumption-based capital asset model} \cite{Campbell99}, \cite{Karatzas98}, the pricing kernel is expressed in the above form. We will see that in this case the risk premium $\theta_t$ is given by \begin{equation} \label{eqn:theta} \theta_t=(\sigma\phi'\phi^{-1})(X_t)\;, \end{equation} where $\sigma(X_t)$ is the volatility of $X_t.$ The problem of determining the bounds of the risk premium can be transformed into a second-order differential equation. We will demonstrate that $\phi(\cdot)$ satisfies the following differential equation: $$\mathcal{L}\phi(x):=\frac{1}{2}\sigma^2(x){\phi ''(x)}+k(x)\phi '(x) -r(x)\phi (x) =-\beta \,\phi (x)$$ for some unknown positive number $\beta.$ Thus, we can determine the bounds of the risk premium by investigating the bounds of $(\sigma\phi'\phi^{-1})(\cdot)$ for a positive solution $\phi(\cdot).$ It will be demonstrated that two special solutions of $\mathcal{L}h=0$ play an important role for the bounds of the risk premium $\theta_t.$ The following provides an overview of this article. In Section \ref{sec:Markovian_pricing}, we state the notion of Markovian pricing kernels. In Section \ref{sec:risk_premium}, we investigate the risk premium of an asset and see how the problem of determining the bounds of the risk premium is transformed into a second-order differential equation. In Section \ref{sec:intrinsic_bounds}, we find upper and lower bounds on the risk premium of an asset, which is the main result of this article. In Section \ref{sec:appli}, we see how this result can be applied to determine the range of return of an asset. Finally, Section \ref{sec:conclusion} summarizes this article. \section{Markovian pricing kernels} \label{sec:Markovian_pricing} A financial market is defined as a probability space $(\Omega,\mathcal{F},\mathbb{P})$ having a Brownian motion $B_{t}$ with the filtration $\mathcal{F}=(\mathcal{F}_{t})_{t=0}^{\infty}$ generated by $B_{t}$. All the processes in this article are assumed to be adapted to the filtration $\mathcal{F}$. $\mathbb{P}$ is the objective measure of this market. \begin{assume} In the financial market, there are two assets. One is a {\em money market account} $e^{\int_0^t\,r_s\,ds}$ with an {\em interest rate} process $r_{t}$ and the other is a risky asset $S_{t}$ satisfying $$dS_t=\mu_tS_t\,dt+v_tS_t\,dB_t\;.$$ \end{assume} \noindent Throughout this article, the stochastic discount factor is the money market account. 
Let $\mathbb{Q}$ be a risk-neutral measure in the market $(\Omega,\mathcal{F},\mathbb{P})$ such that $S_t\,e^{-\int_0^tr_s\,ds}$ is a local martingale under $\mathbb{Q}.$ Put the Radon-Nikodym derivative \begin{equation*} \Sigma_{t}=\left.\frac{d \mathbb{Q}}{d \mathbb{P}} \right|_{\mathcal{F}_{t}} \; , \end{equation*} which is known to be a martingale process on $(\Omega,\mathcal{F},\mathbb{P}).$ We can write in the SDE form \begin{equation*} d\Sigma_{t}=-\theta_{t}\Sigma_{t}\, dB_{t} \end{equation*} where \begin{equation}\label{eqn:rho} \theta_{t}:=\frac{\mu_t-r_t}{v_t} \end{equation} is the {\em risk-premium} or {\em market price of risk.} It is well-known that $W_{t}$ defined by \begin{equation}\label{eqn:Girsanov} dW_{t}=\theta_{t}dt+dB_{t} \end{equation} is a Brownian motion under $\mathbb{Q}.$ We define the reciprocal of the {\em pricing kernel} by $L_{t}=e^{\int_0^tr_s\,ds}/\Sigma_{t}.$ Using the Ito formula, \begin{equation} \label{eqn:RN_SDE} \begin{aligned} dL_t&=(r_{t}+\theta_{t}^2)\,L_t\,dt +\theta_{t}L_t\, dB_{t}\\ &=r_{t}L_t\,dt + \theta_{t}L_t\, dW_{t} \end{aligned} \end{equation} is obtained. \begin{assume} \label{assume:Markovian} Assume that (the reciprocal of) the pricing kernel $L_t$ is Markovian in the sense that there are a positive function $\phi\in C^{2}(\mathbb{R}),$ a positive number $\beta$ and a state variable $X_t$ such that \begin{equation} \label{eqn:Markovian} L_{t}=e^{\beta t}\,\phi(X_{t})\,\phi^{-1}(X_{0})\; . \end{equation} In this case, we say $(\beta,\phi)$ is a {\em principal pair} of $X_{t}.$ \end{assume} \noindent We imposed a special structure on the pricing kernel. This specific form is commonly assumed in the recovery literature as in \cite{Borovicka14},\cite{Carr12},\cite{Dubynskiy13},\cite{Goodman14},\cite{Park14b},\cite{Qin14b},\cite{Ross13} and \cite{Walden13}. In general, $L_t$ can be expressed as $$L_{t}=e^{\beta t}\,\phi(X_{t})\,\phi^{-1}(X_{0})\,M_t$$ where $M_t$ is a $\mathbb{Q}$-martingale. Refer to \cite{Hansen09} for this general expression. Assumption \ref{assume:Markovian} has an implication that the martingale term $M_t$ is equal to $1.$ We now shift our attention to the assumption that $\beta>0.$ In lots of literature on asset pricing theory, $\beta$ is the discount rate of the representative agent, which is typically a positive number. For example, in the {\em consumption-based capital asset model} \cite{Campbell99}, \cite{Karatzas98}, (the reciprocal of) the pricing kernel is expressed by $$e^{\beta t}\frac{U'(c_0)}{U'(c_t)}$$ where $U$ is the utility of the representative agent, $c_t$ is the aggregate consumption process and $\beta$ is the discount rate of the agent. \begin{assume} \label{assume:X} The state variable $X_{t}$ is a time-homogeneous Markov diffusion process satisfying the following SDE. $$dX_{t}=k(X_{t})\,dt+\sigma(X_{t})\,dW_{t}\,,\;X_{0}=\xi\;.$$ $k(\cdot)$ and $\sigma(\cdot)$ are assumed to be known ex ante. 
The process $X_t$ takes values in some interval $I$ with endpoints $c$ and $d,\,-\infty\leq c<d\leq\infty.$ It is assumed that $k(\cdot)$ and $\sigma(\cdot)$ are continuous on $I$ and continuously differentiable on $(c,d)$ and that $\sigma(x)>0$ for $x\in(c,d).$ \end{assume} \begin{assume} \label{assume:interest_rate} The short interest rate $r_t$ is determined by $X_{t}.$ More precisely, there is a continuous positive function $r(\cdot)$ such that $r_{t}=r(X_{t}).$ \end{assume} \noindent Under these assumptions, the next section demonstrates how to transform the problem of determining the bounds of the risk premium into a second-order differential equation. We will also describe the properties of positive solutions of the differential equation. \section{Risk premium} \label{sec:risk_premium} The purpose of this article is to determine upper and lower bounds on the risk premium $\theta_t.$ First, we investigate how the risk premium $\theta_t$ is determined by the Markovian pricing kernel. Applying the Ito formula to \eqref{eqn:Markovian}, we have $$dL_{t}=\left(\beta+\frac{1}{2}(\sigma^{2}\phi''\phi^{-1})(X_{t})+(k\phi'\phi^{-1})(X_{t})\right)L_{t} \,dt+(\sigma\phi'\phi^{-1})(X_{t})\,L_{t}\,dW_{t}$$ and by \eqref{eqn:RN_SDE}, we know $dL_{t}=r(X_{t})\,L_{t}\,dt + \theta_{t}L_{t}\, dW_{t}. $ By comparing these two equations, we obtain $$ \frac{1}{2}\sigma^2(x){\phi ''(x)}+k(x)\phi '(x) -r(x)\phi (x) =-\beta \,\phi (x) $$ and \begin{equation}\label{eqn:Markovian_theta} \theta_{t}=(\sigma\phi'\phi^{-1})(X_{t})\;. \end{equation} Define the second-order operator $\mathcal{L}$ by $$\mathcal{L}\phi(x)=\frac{1}{2}\sigma^2(x){\phi ''(x)}+k(x)\phi '(x) -r(x)\phi (x)\;.$$ \begin{thm} Under Assumptions 1--\ref{assume:interest_rate}, let $(\beta,\phi)$ be a principal pair of $X_{t}.$ Then, $(\beta,\phi)$ satisfies $\mathcal{L}\phi=-\beta\phi\;.$ \end{thm} \noindent We also have the following theorem by \eqref{eqn:Girsanov} and \eqref{eqn:rho}. \begin{thm} \label{thm:theta_phi} The risk premium is given by $\theta_{t}=\theta(X_{t})$ where $\theta(\cdot):=(\sigma\phi'\phi^{-1})(\cdot).$ We thus have that $dB_{t}=-\theta(X_{t})\,dt+dW_{t}.$ \end{thm} \noindent This theorem explains the relation between the risk premium and the pricing kernel $L_t.$ The purpose of this article is to determine upper and lower bounds on $\theta(\cdot)$ based on $k(\cdot),\,\sigma(\cdot)$ and $r(\cdot).$ The positive function $\phi(\cdot)$ and the positive number $\beta$ are assumed to be unknown. The main idea is to determine the properties of all of the possible $\phi(\cdot)$'s and $\beta$'s and then to obtain upper and lower bounds on the possible $(\phi'\phi^{-1})(\cdot)$ values. From \eqref{eqn:Markovian_theta}, we can determine the bounds of the risk premium $\theta_t.$ \section{Intrinsic bounds} \label{sec:intrinsic_bounds} We are interested in a solution pair $(\lambda,h)$ of $\mathcal{L}h=-\lambda h$ with positive function $h.$ There are two possibilities. \begin{itemize}[noitemsep,nolistsep] \item[\textnormal{(i)}] there is no positive solution $h$ for any $\lambda\in\mathbb{R}$, or \item[\textnormal{(ii)}] there exists a number $\overline{\beta}$ such that it has two linearly independent positive solutions for $\lambda<\overline{\beta},$ has no positive solution for $\lambda>\overline{\beta}$ and has one or two linearly independent solutions for $\lambda=\overline{\beta}.$ \end{itemize} Refer to pages 146 and 149 in \cite{Pinsky}.
In this article, we implicitly assumed the second case by Assumption \ref{assume:Markovian}. \begin{defi} For each $\lambda$ with $\lambda\leq\overline{\beta},$ we say $(\lambda,h)$ is a candidate pair if $(\lambda,h)$ is a solution pair of $\mathcal{L}h=-\lambda h$ and if $h(\xi)=1$ (i.e., $h$ is normalized). We define the candidate set by $$\mathcal{C}_\lambda:=\{\,h'(\xi)\in\mathbb{R}\,|\,\mathcal{L}h=- \lambda h,\, h(\xi)=1,\,h(\cdot)>0 \,\}\;.$$ \end{defi} \noindent It is known that $\mathcal{C}_\lambda$ is a connected compact set. Refer to \cite{Park14b} or \cite{Pinsky}. Denote the functions corresponding to $\max\mathcal{C}_\lambda$ and $\min\mathcal{C}_\lambda$ by $H_\lambda$ and $h_\lambda,$ respectively. It is assumed that $H_\lambda(\xi)=h_\lambda(\xi)=1.$ For a solution pair $(\lambda,h)$ with $h>0,$ it is easily checked that $$e^{\lambda t-\int_{0}^{t} r(X_{s})ds}\,h(X_{t})\,h^{-1}(\xi)$$ is a local martingale under $\mathbb{Q}.$ To be a Radon-Nikodym derivative, this should be a martingale. Thus, we are interested in solution pairs that induce martingales. \begin{defi} Let $(\lambda,h)$ be a candidate pair. We say $(\lambda,h)$ is an admissible pair if $$e^{\lambda t-\int_{0}^{t} r(X_{s})ds}\,h(X_{t})\,h^{-1}(\xi)$$ is a martingale under $\mathbb{Q}.$ In this case, a measure obtained from the risk-neutral measure $\mathbb{Q}$ by the Radon-Nikodym derivative $$\left.\frac{\,d\,\cdot\,}{d\mathbb{Q}}\right|_{\mathcal{F}_{t}}=e^{\lambda t-\int_{0}^{t} r(X_{s})ds}\,h(X_{t})\,h^{-1}(\xi)$$ is called {\em the transformed measure} with respect to the pair $(\lambda,h).$ \end{defi} We now investigate the bounds of the risk premium. We set \begin{equation*} \begin{aligned} \ell:&=\inf\{0\leq\lambda\leq\overline{\beta}\,|\,(\lambda,h_\lambda)\text{ is an admissible pair}\} \\ L:&=\inf\{0\leq\lambda\leq\overline{\beta}\,|\,(\lambda,H_\lambda)\text{ is an admissible pair}\}\;. \end{aligned} \end{equation*} Two functions $h_\ell$ and $H_L$ will play a crucial role in determining the bounds of the risk premium $\theta_t.$ To see this, we need the following proposition. \begin{prop} \label{prop:relation} Let $\alpha<\lambda$ and let $(\lambda,h)$ be a candidate pair. Then, \begin{equation*} \begin{aligned} (h_\alpha'h_\alpha^{-1})(x)\leq(h'h^{-1})(x)\leq(H_\alpha'H_\alpha^{-1})(x)\;. \end{aligned} \end{equation*} \end{prop} \noindent See Appendix \ref{app:pf_relation} for the proof. This proposition says that if a candidate pair $(\lambda,h)$ with $\lambda>0$ is an admissible pair, then \begin{equation*} \begin{aligned} (h_\ell'h_\ell^{-1})(x)\leq(h'h^{-1})(x)\leq(H_L'H_L^{-1})(x)\;. \end{aligned} \end{equation*} This equation gives upper and lower bounds on the risk premium. The only information that we know about the principal pair $(\beta,\phi)$ is that $\beta>0$ and that $(\beta,\phi)$ is an admissible pair. Thus, we can conclude that $$(h_\ell'h_\ell^{-1})(x)\leq(\phi'\phi^{-1})(x)\leq(H_L'H_L^{-1})(x)\;.$$ By Theorem \ref{thm:theta_phi}, we obtain the main theorem of this article. \begin{thm} \label{thm:intrinsic_bounds}\textnormal{(Intrinsic Bounds of Risk Premium)} \newline Let $\theta_t$ be the risk premium. Then, \begin{equation*} \begin{aligned} (\sigma h_\ell'h_\ell^{-1})(X_t)\leq\theta_t\leq(\sigma H_L'H_L^{-1})(X_t)\;. \end{aligned} \end{equation*} \end{thm} \noindent This theorem implies that we can determine the range of the risk premium when a risk-neutral measure is given. Upper and lower bounds can then be calculated using option prices.
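As a rough illustration of how such a computation might be carried out in practice, the following Python sketch (with entirely made-up coefficient functions, a truncated state space, and no claim of numerical rigor; it is not the procedure used in this article) integrates $\mathcal{L}h=-\lambda h$ from the initial point $\xi$ over a grid of initial slopes and brackets the candidate set $\mathcal{C}_\lambda$ by checking positivity of $h$.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Made-up coefficients standing in for option-implied k, sigma and r.
k     = lambda x: 0.02 * x
sigma = lambda x: 0.25 * x
r     = lambda x: 0.02
xi    = 1.0                      # current value of the state variable

def rhs(lam):
    # L h = -lam h rewritten as a first-order system in (h, h').
    def f(x, y):
        h, dh = y
        d2h = ((r(x) - lam) * h - k(x) * dh) / (0.5 * sigma(x) ** 2)
        return [dh, d2h]
    return f

def positive_solution(lam, slope, x_lo=0.05, x_hi=50.0):
    # Check positivity of h on a truncated interval around xi.
    right = solve_ivp(rhs(lam), (xi, x_hi), [1.0, slope], max_step=0.1)
    left = solve_ivp(rhs(lam), (xi, x_lo), [1.0, slope], max_step=0.05)
    return np.all(right.y[0] > 0.0) and np.all(left.y[0] > 0.0)

# Numerical stand-ins for min C_0 = h_0'(xi) and max C_0 = H_0'(xi).
slopes = np.linspace(-5.0, 5.0, 101)
admissible = [s for s in slopes if positive_solution(0.0, s)]
print("approximate [min C_0, max C_0]:", min(admissible), max(admissible))
\end{verbatim}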
In the next section, as an example, we show that in the classical Black-Scholes model, the risk premium satisfies $$-\frac{2r}{v}\leq\theta_t\leq v$$ where $r$ is the interest rate and $v$ is the volatility of the stock. \begin{cor} Let $\theta_t$ be the risk premium. Then, \begin{equation*} \begin{aligned} (\sigma h_0'h_0^{-1})(X_t)\leq\theta_t\leq(\sigma H_0'H_0^{-1})(X_t)\;. \end{aligned} \end{equation*} \end{cor} \noindent This gives rough bounds on the risk premium. In general, the two solutions $h_0$ and $H_0$ are more straightforward to find than $h_\ell$ and $H_L.$ \newline We can find better bounds if some information about the $\mathbb{P}$-dynamics of $X_t$ is known. As mentioned in Section \ref{sec:intro}, if $X_t$ is recurrent under the objective measure, then we can find the exact risk premium. Thus, the bounds are not needed. For example, if the state variable is an interest rate, which is usually recurrent under the objective measure, then we can find the precise risk premium. For more details, see \cite{Borovicka14},\cite{Park14b},\cite{Qin14b} and \cite{Walden13}. We can find better bounds when it is known that the state variable is non-attracted to the left (or right) boundary under the objective measure. This is a reasonable assumption in some situations. For example, a stock price process is usually not attracted to the zero boundary. \begin{prop} The process $X_t$ is non-attracted to the left boundary under the transformed measure with respect to $(\lambda,h)$ if and only if $h=H_\lambda.$ \end{prop} \noindent See \cite{Park14b} for the proof. Similarly, the process $X_t$ is non-attracted to the right boundary under the transformed measure with respect to $(\lambda,h)$ if and only if $h=h_\lambda.$ This proposition yields the following theorem. \begin{thm} \label{thm:non-attracted} If the state process $X_t$ is non-attracted to the left boundary, then the risk premium $\theta_t$ satisfies $$(\sigma H_{\overline{\beta}}'H_{\overline{\beta}}^{-1})(X_t)\leq\theta_t\leq(\sigma H_L'H_L^{-1})(X_t)\;.$$ \end{thm} \noindent Similarly, if the state process $X_t$ is non-attracted to the right boundary, then the risk premium $\theta_t$ satisfies $$(\sigma h_\ell'h_\ell^{-1})(X_t)\leq\theta_t\leq(\sigma h_{\overline{\beta}}'h_{\overline{\beta}}^{-1})(X_t)\;.$$ \section{Returns of Stock} \label{sec:appli} In this section, we investigate the bounds of the risk premium when the state variable $X_t$ is the stock price process $S_t.$ In practice, the S$\&$P 500 index process, which can theoretically be regarded as a stock price process, is used as a state variable \cite{Audrino14}. In this case, we can determine upper and lower bounds on the return of the stock process $S_t.$ Suppose that $S_t=X_t$ and that the interest rate is a constant $r.$ Under the risk-neutral measure, the dynamics of $X_t$ are $$dX_t=rX_t\,dt+\sigma(X_t)\,X_t\,dW_t\;.$$ By Theorem \ref{thm:intrinsic_bounds}, the risk premium satisfies $$(\sigma h_\ell'h_\ell^{-1})(x)\leq\theta(x)\leq(\sigma H_L'H_L^{-1})(x)$$ where $h_\ell$ and $H_L$ are the corresponding solutions. We obtain upper and lower bounds on the return $\mu_t$ by using equation \eqref{eqn:rho}.
$$r+(\sigma^2h_\ell'h_\ell^{-1})(X_t)\leq\mu_t\leq r+(\sigma^2H_L'H_L^{-1})(X_t)\;.$$ As an example, we explore the classical Black-Scholes model for the stock price $X_t.$ $$dX_t=rX_t\,dt+vX_t\,dW_t\,,\;X_0=1$$ for $v>0.$ The infinitesimal operator is $$\mathcal{L}h(x)=\frac{1}{2}v^2x^2h''(x)+rxh'(x)-rh(x)\;.$$ It can be easily shown that every solution pair of $\mathcal{L}h=-\lambda h$ induces a martingale, that is, every candidate pair is an admissible pair. Thus, we obtain $\ell=L=0.$ We want to find the positive solutions of $\mathcal{L}h=0$ with $h(1)=1.$ The solutions are given by $h(x)=c\,x^{-\frac{2r}{v^2}}+(1-c)x$ for $0\leq c\leq 1.$ Thus $$h_0(x)=x^{-\frac{2r}{v^2}},\; H_0(x)=x\;.$$ The risk premium $\theta_t$ satisfies $-\frac{2r}{v}\leq\theta_t\leq v.$ The upper and lower bounds of the return $\mu_t$ of $X_t$ are given by $-r\leq\mu_t\leq r+v^2.$ We can find better bounds if we know that the stock price is non-attracted to the zero boundary. By direct calculation, we have that $\overline{\beta}=\frac{(r+\frac{1}{2}v^2)^2}{2v^2}$ and that $$H_{\overline{\beta}}(x)=x^{\frac{1}{2}-\frac{r}{v^2}},\; H_0(x)=x\;.$$ By Theorem \ref{thm:non-attracted}, the risk premium $\theta_t$ satisfies $\frac{v}{2}-\frac{r}{v}\leq\theta_t\leq v.$ The upper and lower bounds of the return $\mu_t$ of $X_t$ are given by $\frac{v^2}{2}\leq\mu_t\leq r+v^2.$ \section{Conclusion} \label{sec:conclusion} This article determined the possible range of the risk premium of the market using the market prices of options. One of the key assumptions to achieve this result is that the market is Markovian, driven by a state variable $X_t.$ Under this assumption, we can transform the problem of determining the bounds into a second-order differential equation. We then obtain the upper and lower bounds of the risk premium by analyzing the differential equation. We illuminated how the problem of the risk premium is transformed into a problem described by a second-order differential equation. The risk premium is determined by $\theta_t=(\sigma\phi'\phi^{-1})(X_t)$ with a positive function $\phi(\cdot).$ We demonstrated that $\phi(\cdot)$ satisfies $\mathcal{L}\phi=-\beta\phi$ for some positive number $\beta$, where $\mathcal{L}$ is a second-order operator that is determined by option prices. We demonstrated that two special solutions $h_\ell$ and $H_L$ play a crucial role in determining upper and lower bounds on the risk premium. The risk premium $\theta_t$ satisfies \begin{equation*} \begin{aligned} (\sigma h_\ell'h_\ell^{-1})(X_t)\leq\theta_t\leq(\sigma H_L'H_L^{-1})(X_t)\;. \end{aligned} \end{equation*} We also discussed better bounds when it is known that the state variable is non-attracted to the left (or right) boundary. We also showed how this result can be applied to determine the range of the return of an asset. The following extensions for future research are suggested. First, it would be interesting to extend the process $X_t$ to a multidimensional process or a process with jumps. Second, it would be interesting to determine the bounds of the risk premium when the process $X_t$ is a non-Markov process. In this article, we discussed only a time-homogeneous Markov process. Third, it would be interesting to explore more general forms of the pricing kernel.
We discussed only the case in which (the reciprocal of) the pricing kernel has the form $e^{\beta t}\phi(X_{t}).$ \section{Proof of Proposition \ref{prop:relation}} \label{app:pf_relation} Hereafter, without loss of generality, we assume that $X_0=0,$ the left boundary of the range of $X_t$ is $-\infty$ and the right boundary of the range of $X_t$ is $\infty$. \begin{lemma} \label{lem:finite} Let $\alpha<\lambda$ and let $(\alpha,g)$ and $(\lambda,h)$ be candidate pairs. Then, \begin{equation*} \begin{aligned} \int_{-\infty}^{0}v^{-2}(y)\,dy\textnormal{ is finite if } h'(0)\geq g'(0)\;,\\ \int_{0}^{\infty}v^{-2}(y)\,dy\textnormal{ is finite if } h'(0)\leq g'(0)\;. \end{aligned} \end{equation*} where $g=vq$ and $q(x)=e^{-\int_{0}^{x}\frac{k(y)}{\sigma^{2}(y)}\,dy}.$ \end{lemma} \begin{proof} Write $h=uq.$ Assume that $h'(0)\geq g'(0),$ equivalently $u'(0)\geq v'(0).$ Define $\Gamma=\frac{u'}{u}-\frac{v'}{v}.$ Then $$\Gamma'=-\Gamma^{2}-\frac{2v'}{v}\Gamma-\frac{2(\lambda-\alpha)}{\sigma^{2}}\;.$$ Because $\Gamma(0)\geq0,$ we have that $\Gamma(x)>0$ for $x<0,$ since if $\Gamma$ ever gets close to $0,$ then term $-\frac{2(\lambda-\alpha)}{\sigma^{2}}$ dominates the right hand side of the equation. Choose $x_{0}$ with $x_{0}<0.$ For $x<x_{0},$ we have $$-\frac{2v'(x)}{v(x)}=\frac{\Gamma'(x)}{\Gamma(x)}+\Gamma(x)+\frac{2(\lambda-\alpha)} {\sigma^{2}(x)}\cdot\frac{1}{\Gamma(x)}\;.$$ Integrating from $x_{0}$ to $x,$ $$-2\ln\frac{v(\,x\,)}{v(x_{0})}=\ln \frac{\Gamma(\,x\,)}{\Gamma(x_{0})}+\int_{x_{0}}^{x} \Gamma(y)\,dy+\int_{x_{0}}^{x}\frac{2(\lambda-\alpha)}{\sigma^{2}(y)}\cdot\frac{1}{\Gamma(y)} \,dy$$ which leads to $$\frac{v^{2}(x_{0})}{v^{2}(\,x\,)}\leq \frac{\Gamma(\,x\,)}{\Gamma(x_{0})}\,e^{\int_{x_{0}}^{x} \Gamma(y)\,dy}$$ for $x<x_{0}.$ Thus, \begin{equation*} \begin{aligned} \int_{-\infty}^{x_{0}}\frac{1}{v^{2}(y)}\,dy &\leq (\text{constant})\cdot\int_{-\infty}^{x_{0}}\Gamma(y)\,e^{\int_{x_{0}}^{y}\Gamma(w)dw}\,dy \\ &=(\text{constant})\cdot\left(1-e^{-\int_{-\infty}^{x_{0}}\Gamma(w)dw}\right)\\ &\leq (\text{constant})\\ &< \infty \;. \end{aligned} \end{equation*} This implies that $\int_{-\infty}^{0}v^{-2}(y)\,dy$ is finite. Similarly, we can show that $\int_{0}^{\infty}v^{-2}(y)\,dy$ is finite if $h'(0)\leq g'(0).$ \end{proof} \begin{lemma} \label{lem:infinite} Write $H_\lambda=Vq$ and $h_\lambda=vq$, where $q(x):=e^{-\int_{0}^{x}\frac{k(y)}{\sigma^{2}(y)}\,dy}.$ Then, $$\int_{-\infty}^{0}V^{-2}(y)\,dy=\infty\quad\text{ and }\quad \int_{0}^{\infty}v^{-2}(y)\,dy=\infty\;.$$ \end{lemma} \begin{proof} We only prove that $\int_{-\infty}^{0}V^{-2}(y)\,dy=\infty.$ A general (normalized to $h(0;c)=1$) solution of $\mathcal{L}h=-\lambda h$ is expressed by $$h(x;c):=H_\lambda(x)\left(1+c\cdot \int_{0}^{x}V^{-2}(y)\,dy\right)\;,$$ thus we have $$h'(0;c)=H_\lambda'(0)+c.$$ Since $H_\lambda'(0)$ is the maximum value by definition, we have that $c\leq 0.$ Suppose that $\int_{-\infty}^{0}V^{-2}(y)\,dy<\infty$ then we can choose a small positive number $c$ such that $h(x;c)$ is a positive function. This is a contradiction. \end{proof} \noindent We now prove Proposition \ref{prop:relation}. 
\begin{proof} We only show the inequalities for $x<0.$ First, we prove $(h_\alpha'h_\alpha^{-1})(x)<(h'h^{-1})(x)$ for $x<0.$ Let $q(x):=e^{-\int_{0}^{x}\frac{k(y)}{\sigma^{2}(y)}\,dy}.$ Write $h=uq$ and $h_\alpha=vq.$ Then we have $h'(0)>h_\alpha'(0),$ equivalently $u'(0)>v'(0).$ Indeed, if not, then by Lemma \ref{lem:finite} we would have $$\int_{0}^{\infty}v^{-2}(y)\,dy<\infty\;,$$ which contradicts Lemma \ref{lem:infinite}. Now we define $\gamma=\frac{u'}{u}-\frac{v'}{v}.$ Then $$\gamma'=-\gamma^{2}-\frac{2v'}{v}\gamma-\frac{2(\lambda-\alpha)}{\sigma^{2}}\;.$$ Because $\gamma(0)>0,$ we have that $\gamma(x)>0$ for $x<0,$ since if $\gamma$ ever gets close to $0,$ then the term $-\frac{2(\lambda-\alpha)}{\sigma^{2}}$ dominates the right-hand side of the equation. Thus for $x<0,$ we obtain that $(v'v^{-1})(x)<(u'u^{-1})(x),$ which implies that $(h_\alpha'h_\alpha^{-1})(x)<(h'h^{-1})(x).$ We now prove $(h'h^{-1})(x)<(H_\alpha'H^{-1}_\alpha)(x)$ for $x<0.$ Write $H_\alpha=Vq.$ Then we have $h'(0)<H_\alpha'(0),$ equivalently $u'(0)<V'(0).$ Define $\Gamma:=\frac{u'}{u}-\frac{V'}{V}.$ Then $$\Gamma'=-\Gamma^{2}-\frac{2V'}{V}\Gamma-\frac{2(\lambda-\alpha)}{\sigma^{2}}\;.$$ We claim that $\Gamma(x)<0$ for $x<0.$ Suppose there exists $x_0<0$ such that $\Gamma(x_0)\geq0.$ Then, for all $z<x_0,$ we obtain that $\Gamma(z)>0$ since if $\Gamma$ ever gets close to $0,$ then the term $-\frac{2(\lambda-\alpha)}{\sigma^{2}}$ dominates the right-hand side of the equation. For $z<x_0,$ we have $$-\frac{2V'(z)}{V(z)}=\frac{\Gamma'(z)}{\Gamma(z)}+\Gamma(z)+\frac{2(\lambda-\alpha)} {\sigma^{2}(z)}\cdot\frac{1}{\Gamma(z)}\;.$$ Integrating from $x_{0}$ to $y,$ $$-2\ln\frac{V(\,y\,)}{V(x_{0})}=\ln \frac{\Gamma(\,y\,)}{\Gamma(x_{0})}+\int_{x_{0}}^{y} \Gamma(z)\,dz+\int_{x_{0}}^{y}\frac{2(\lambda-\alpha)}{\sigma^{2}(z)}\cdot\frac{1}{\Gamma(z)} \,dz$$ which leads to $$\frac{V^{2}(x_{0})}{V^{2}(\,y\,)}\leq \frac{\Gamma(\,y\,)}{\Gamma(x_{0})}\,e^{\int_{x_{0}}^{y} \Gamma(z)\,dz}$$ for $y<x_{0}.$ Thus, \begin{equation*} \begin{aligned} \int_{-\infty}^{x_{0}}V^{-2}(y)\,dy &\leq (\text{constant})\cdot\int_{-\infty}^{x_{0}}\Gamma(y)\,e^{\int_{x_{0}}^{y}\Gamma(w)dw}\,dy \\ &=(\text{constant})\cdot\left(1-e^{-\int_{-\infty}^{x_{0}}\Gamma(w)dw}\right)\\ &\leq (\text{constant})\\ &< \infty \;. \end{aligned} \end{equation*} This implies that $\int_{-\infty}^{0}V^{-2}(y)\,dy$ is finite, which contradicts Lemma \ref{lem:infinite}. Therefore, $\Gamma(x)<0$ for $x<0,$ which gives $(u'u^{-1})(x)<(V'V^{-1})(x)$ and hence $(h'h^{-1})(x)<(H_\alpha'H_\alpha^{-1})(x)$ for $x<0.$ \end{proof} \end{document}
\begin{document} \title{Evolution of the stochastic Airy eigenvalues under a changing boundary} \author{Angelica Gonzalez\footnote{University of Arizona, [email protected]}\hspace{.8cm} Diane Holcomb\footnote{KTH, [email protected]}} \maketitle \begin{abstract} The $\textup{Airy}_\beta$ point process, originally introduced by Ram\'irez, Rider, and Vir\'ag \cite{RRV}, is defined as the spectrum of the stochastic Airy operator $\mathcal{H}_\beta$ acting on a subspace of $L^2[0,\infty)$ with Dirichlet boundary condition. In this paper we study the coupled family of point processes defined as the eigenvalues of $\mathcal{H}_\beta$ acting on a subspace of $L^2[t,\infty)$. These point processes are coupled through the Brownian term of $\mathcal{H}_\beta$. We show that these point processes as a function of $t$ are differentiable with explicitly computable derivative. Moreover, when recentered by $t$, the resulting point process is stationary. This process can also be viewed as an analogue to the `GUE minor process' in the tridiagonal setting. \end{abstract} \section{Introduction} In this paper we work with a generalization of the Gaussian Orthogonal, Unitary, and Symplectic ensembles, which were first introduced by Wigner in the 50's. These matrix models have many special properties, including an explicitly computable eigenvalue distribution given by \begin{align} \label{eq:hermite} p_{H}(\lambda_1,\dots,\lambda_n)=\frac{1}{Z_{n,\beta}} \prod_{1\le i<j\le n} |\lambda_i-\lambda_j|^\beta \prod\limits_{i=1}^ne^{-\frac{\beta}4 \lambda_i^2}, \end{align} for $\beta = 1, 2, $ or $4$. The $\beta$-Hermite ensemble generalizes this to a set of $n$ points on the line whose joint density is given by \eqref{eq:hermite} for any $\beta>0$. This point process is no longer related to a full matrix model, but it does have an associated tridiagonal matrix model. The model originally introduced by Dumitriu and Edelman \cite{DE} is as follows: Let \begin{align} \label{eq:tridiagonal} A_\beta \sim \frac{1}{\sqrt \beta} \left[ \begin{array}{ccccc} {\mathcal N}(0,2) & \chi_{(n-1)\beta} &&& \\ \chi_{(n-1)\beta} & {\mathcal N}(0,2) & \chi_{(n-2)\beta} && \\ & \ddots & \ddots & \ddots & \\ && \chi_{2\beta} & {\mathcal N}(0,2) & \chi_\beta \\ &&& \chi_\beta & {\mathcal N}(0,2) \end{array} \right], \end{align} with all of the entries independent. The $\chi$ random variables are subscripted by their parameter. In the case where $k$ is an integer, a $\chi_k$ random variable has the same distribution as the norm of a vector in $\mathbb{R}^k$ with independent ${\mathcal N} (0,1)$ entries. There is a natural generalization to the non-integer case $k>0$. Edelman and Sutton observed that this matrix model may be seen as an operator on step functions, and using this observation conjectured that in the limit the upper edge of the spectrum would converge to a certain differential operator \cite{ES}. Indeed, in this setting, at the upper and lower edges of the spectrum, Ram\'irez, Rider, and Vir\'ag showed that the centered and scaled matrix model converges in a weak sense to the ``stochastic Airy operator'' (denoted here by $\mathcal{H}_\beta$), which in turn is used to show convergence of the eigenvalues \cite{RRV}. Let \begin{equation} \mathcal{H}_\beta= - \frac{d^2}{dx^2}+x+ \frac{2}{\sqrt \beta} b'(x) \end{equation} where we take $b'$ to be a white noise. A precise definition and many properties of this operator can be found in \cite{RRV}. We review the necessary ones below.
For our purposes it is sufficient to define an eigenfunction/eigenvalue pair in the following way: Let \[L^*[t,\infty)= \left\{f\in L^2[t,\infty)|\ f(t)=0, f' \text{ exists a.e. and } \int_t^\infty (f')^2+(1+x)f^2 dx<\infty\right\},\] then $(\varphi ,\lambda)$ is an eigenvalue/eigenfunction pair for $\mathcal{H}_\beta$ acting on $L^*[t, \infty)$ if $\|\varphi\|_2=1$, and \begin{equation} \varphi''(x)= \frac{2}{\sqrt \beta} \varphi(x)b'(x)+(x-\lambda)\varphi(x) \end{equation} holds in the sense of distributions. This may be written as \begin{equation} \label{eq:eveft} \varphi'(x)- \varphi'(t)= \frac{2}{\sqrt \beta} \varphi(x)b(x)-\frac{2}{\sqrt \beta} \int_t^x \varphi'(s)b(s)ds+ \int_t^x(s-\lambda)\varphi(s)ds. \end{equation} In this sense, the set of eigenvalues is a deterministic function of the Brownian path $b$. Note that this is a slight generalization of the case considered in \cite{RRV}, where the focus was on $\mathcal{H}_\beta$ acting on functions in $L^*[0,\infty)$. The eigenvalues of $\mathcal{H}_\beta$ acting on $L^*[t,\infty)$ are ``nice'' in the following sense: \begin{theorem} \text{\cite{RRV}} With probability one, the eigenvalues of $\mathcal{H}_\beta$ are distinct (of multiplicity 1) with no accumulation point, and for each $k\geq 0$ the set of eigenvalues of $\mathcal{H}_\beta$ has a well-defined $(k+1)$st lowest element $\Lambda_k(\beta)$. \end{theorem} In this paper we study the evolution of the eigenvalues of $\mathcal{H}_\beta$ acting on $L^*[t,\infty)$ as a process in $t$. That is, we consider the operator $\mathcal{H}_\beta$ acting on $L^*[t,\infty)$ and study the evolution of the eigenvalues as $t$ varies. We will denote the operator acting on the particular domain by \begin{equation} \label{eq:Ht} \mathcal{H}_\beta^{(t)}= - \frac{d^2}{dx^2}+x+ \frac{2}{\sqrt \beta}b'_x, \quad \Hop{t}: L^*[t,\infty) \to L^2[t, \infty), \end{equation} and define $\Lambda_1(t)< \Lambda_2(t)< \Lambda_3(t)< \cdots$ to be the ordered eigenvalues of $\mathcal{H}_\beta^{(t)}$. We observe that the eigenvalue/eigenfunction condition may be written in the same way as before, but also has an interpretation in terms of a shifted Brownian motion. That is, $(\varphi, \lambda)$ is an eigenvalue/eigenfunction pair of $\Hop{t}$ if for $x\ge t$ \begin{align} \varphi'(x)- \varphi'(t)&= \frac{2}{\sqrt \beta} \varphi(x)(b(x)-b(t))-\frac{2}{\sqrt \beta} \int_t^x \varphi'(s)(b(s)-b(t))ds+ \int_t^x(s-\lambda)\varphi(s)ds . \end{align} \begin{theorem} \label{thm:process} Let $k$ be any fixed positive integer and let $\mathcal{G}_t^{(k)} = \{\Lambda_1(t),..., \Lambda_k(t)\}$. The process $\mathcal{G}_t^{(k)}$ is differentiable in time and for every fixed $t$ we have that \begin{equation} \label{eq:derivativedist} \frac{d}{dt} \mathcal{G}_t^{(k)} \ed \{\Gamma_{1}(t),..., \Gamma_{k}(t)\}, \quad \text{ with i.i.d.} \quad \Gamma_{i}(t) \sim \Gamma(\tfrac{\beta}{2}, \tfrac{2}{\beta}). \end{equation} Moreover, the process $\mathcal{G}_t^{(k)}-t$ is stationary. \end{theorem} \begin{remark} Note that in the above characterization the eigenvalue/eigenfunction pairs are defined in a path-wise sense. In this paper all calculations, unless otherwise noted, should be understood in this sense. Because of this, it is sufficient to prove various estimates and limits for an arbitrary value of $t$ and any Brownian path in a set of full measure.
\end{remark} \begin{corollary} The process $\G{t}{k}-t$ is not reversible. \end{corollary} \begin{proof} To see that the process cannot be reversible it is enough to observe that for a single eigenvalue the distribution of the derivative of the forward process is $\Gamma(\frac{\beta}{2}, \frac{2}{\beta}) -1$. On the other hand, the distribution of the derivative of the reversed process is $1-\Gamma (\frac{\beta}{2}, \frac{2}{\beta})$, and these two laws are not equal. \end{proof} While the study of the eigenvalues of $\mathcal{H}_\beta$ on a changing domain is interesting in its own right, it also has a connection to the original tridiagonal model in \eqref{eq:tridiagonal}. Moreover, this connection may be used to derive properties of the limiting process, including the distribution of the derivatives. We are interested in the behavior of the spectrum at the upper edge and so begin by centering at $2\sqrt{n}$. We denote the centered, truncated matrix obtained by removing the first $k-1$ rows and columns by \begin{equation} \label{eq:ktridiagonal} \Hmat{k} = 2\sqrt{n}I -\frac{1}{\sqrt \beta} \left[ \begin{array}{ccccc} N(0,2) & \chi_{(n-k)\beta} &&& \\ \chi_{(n-k)\beta} & N(0,2) & \chi_{(n-k-1)\beta} && \\ & \ddots & \ddots & \ddots & \\ && \chi_{2\beta} & N(0,2) & \chi_\beta \\ &&& \chi_\beta & N(0,2) \end{array} \right], \end{equation} and denote its ordered eigenvalues by $\lambda_1(n,k) < \lambda_{2}(n,k)< \cdots < \lambda_{n-k+1}(n,k)$. \begin{theorem} \label{thm:discretetocont} Let $\lambda_1(n,k) < \lambda_{2}(n,k)< \cdots $ be defined as above. Then \begin{equation} \Big( \{n^{1/6}\lambda_i(n,\lfloor n^{1/3}t\rfloor)\}_{i=1}^{k}, t\ge 0 \Big) \Rightarrow \mathcal{G}_t^{(k)}, \end{equation} where $\G{t}{k}$ is the eigenvalue process of $\mathcal{H}_\beta$ defined above. \end{theorem} \begin{figure} \caption{Bottom eigenvalues $n^{1/6}\lambda_i(n,\lfloor n^{1/3}t\rfloor)$ of the truncated tridiagonal model, plotted as functions of $t$.} \end{figure} The reader might notice at this point that we are essentially considering the `minor process' associated to the tridiagonal matrix model. This turns out to define a very different process from the classical `GUE minor process' when $\beta=2$, which is derived from the submatrices of the full matrix model. For more details on this classical process see \cite{FN1}. In particular, the eigenvalues of that process follow rough paths. The same process may be realized by considering appropriate limits of Dyson Brownian motions \cite{FN2}. The fact that two different processes are obtained is particularly interesting in light of the fact that for both models, when one considers the sub-matrix obtained by removing the first $k$ rows and columns, they again have the same eigenvalue distributions, and in both cases the eigenvalues of successive sub-matrices satisfy interlacing. The paper is organized as follows: We begin by recalling properties of $\mathcal{H}_\beta$ and showing that the process $\mathcal{G}_t^{(k)}$ is stationary and differentiable. In the next section we show the convergence statement in Theorem \ref{thm:discretetocont}. Finally, in the last section we use the convergence statement to determine the distribution of the derivative vector. \noindent \textbf{Acknowledgements:} The authors would like to thank B\'alint Vir\'ag for the problem suggestion and discussions. The work of the second author was supported in part by funding from the Knut and Alice Wallenberg foundation award number KAW 2015.0359, and Swedish Research Council award number 2018-04758.
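As an informal illustration of the coupling in Theorem \ref{thm:discretetocont}, the following sketch (ours, assuming Python with \texttt{numpy} and \texttt{scipy}; the truncation index is correct only up to an off-by-one) fixes one realisation of the tridiagonal model and, for several values of $t$, computes the lowest eigenvalues of the truncation $\Hmat{\lfloor tn^{1/3}\rfloor}$, rescaled by $n^{1/6}$. These are the quantities that converge to $\Lambda_1(t),\dots,\Lambda_k(t)$.
\begin{verbatim}
import numpy as np
from scipy.linalg import eigh_tridiagonal

n, beta, k = 4000, 2.0, 3
rng = np.random.default_rng(0)
# One realisation of the tridiagonal model; every truncation below reuses it,
# which mirrors the coupling through a single Brownian path.
diag = rng.normal(0.0, np.sqrt(2.0), size=n) / np.sqrt(beta)
off = np.sqrt(rng.chisquare(beta * np.arange(n - 1, 0, -1))) / np.sqrt(beta)

def bottom_eigs(t):
    """Lowest k eigenvalues of the centred truncation, rescaled by n^{1/6}."""
    m = int(np.floor(t * n ** (1 / 3)))      # rows/columns removed
    lam = eigh_tridiagonal(2.0 * np.sqrt(n) - diag[m:], -off[m:],
                           eigvals_only=True, select='i',
                           select_range=(0, k - 1))
    return n ** (1 / 6) * lam

for t in (0.0, 0.5, 1.0):
    print(t, bottom_eigs(t))
\end{verbatim}
Over short time intervals one expects the resulting paths to look approximately linear in $t$, in line with the differentiability statement of Theorem \ref{thm:process}.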
\section{On the eigenvalues of the restricted operator} \begin{proposition} For any fixed $k$ the process $\G{t}{k}-t$ is stationary as a process in $t$. \end{proposition} \begin{proof} We use the definition that $(\varphi, \lambda)$ is an eigenfunction/eigenvalue pair for $\Hop t$ if \eqref{eq:eveft} is satisfied. Define the time-shifted function $\psi(x-t)= \varphi(x)$ and the shifted Brownian motion $w(x-t)=b(x)-b(t)$; then $\psi$ satisfies the equation \begin{equation} \psi'(x-t)- \psi'(0)= \frac{2}{\sqrt \beta} \psi(x-t)w(x-t)-\frac{2}{\sqrt \beta} \int_0^{x-t} \psi'(s)w(s)ds+ \int_0^{x-t}(s-(\lambda-t))\psi (s)ds. \end{equation} This is equivalent in distribution to $\lambda-t$ being an eigenvalue of \[ \mathcal{H} = - \frac{d^2}{dx^2} + x + \frac{2}{\sqrt{\beta}} w'_x \ed \mathcal{H}_\beta^{(0)}. \] Therefore the lowest $k$ eigenvalues of $\Hop{t}$ shifted by $t$ have the same distribution as the lowest $k$ eigenvalues of $\mathcal{H}_\beta^{(0)}$ for all $t$, and so $\G{t}{k}-t$ is stationary. \end{proof} We let $f_{k,t}\in L^*[t,\infty)$ denote the eigenfunction associated to $\Lambda_{k}(t)$, the $k$th lowest eigenvalue of $\Hop{t}$. The idea for the remainder of this section will be to approximate the eigenfunction $f_{k,t}$ by using the eigenfunctions $f_{k,t+\varepsilon}$ and $f_{k,t-\varepsilon}$ and replacing the initial section of the eigenfunction with a straight line. We make the following definitions: For every pair $s<t$ we define two new families of functions \begin{equation} \label{eq:linearapprox} \phi_{k,s,t}^{(a)}(x) = \begin{cases} (x-s) \frac{f_{k,t}(a)}{a-s} &s\le x < a\\ f_{k,t}(x) & x \ge a \end{cases}, \quad \psip{k}{a}(x)= \begin{cases} (x-t)\frac{f_{k,s}(a)}{a-t} &t\le x < a \\ f_{k,s}(x) & x \ge a \end{cases}. \end{equation} The function $\phi_{k,s,t}^{(a)}$ approximates the $k$th eigenfunction of $\Hop{s}$ by building a function from the $k$th eigenfunction of $\Hop{t}$. The function $\psip{k}{a}$ does something similar, but instead approximates the $k$th eigenfunction of $\Hop{t}$ by looking at the $k$th eigenfunction of $\Hop{s}$. See Figure \ref{functionbuilding} for an illustration of how $\phi$ and $\psi$ are constructed from a function $f$. \begin{figure} \caption{Building $\phi$ and $\psi$ from a function $f$.} \label{functionbuilding} \end{figure} The idea here will be to make use of the variational characterization of the eigenvalues: \begin{equation} \label{eq:characterization} \Lambda_k(t) = \inf_{B\subset L^*_t, \dim B=k} \sup_{g\in B} \frac{\fip{ g, \Hop t g}}{\fip{ g, g }} = \fip{ f_{k,t}, \Hop t f_{k,t}}. \end{equation} It follows immediately from the variational characterization that \begin{align*} \Lambda_1(t) &= \fip{ f_{1,t}, \Hop t f_{1,t}} \\ & \le\ \frac{\fip{ \phi_{1,t,t+\varepsilon}^{(a)}, \Hop t \phi_{1,t,t+\varepsilon}^{(a)}}}{\fip{ \phi_{1,t,t+\varepsilon}^{(a)}, \phi_{1,t,t+\varepsilon}^{(a)}}} = \Lambda_1(t+\varepsilon) + \text{error}, \end{align*} with a similar bound holding using $\psi_{1,t-\varepsilon,t}^{(a)}$ and $\Lambda_1(t-\varepsilon)$. The remainder of this section is devoted to showing that the error is of order $\varepsilon$ and identifying an expression for the derivative of $\Lambda_k(t)$. The first step in showing that $\G{t}{k}$ is a differentiable process is to prove that the error is of order $\varepsilon$. This requires several results on the eigenfunctions of the operators $\Hop t$.
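The variational characterization \eqref{eq:characterization} is the only spectral input used below, and it is easy to sanity-check in finite dimensions. The following toy sketch (ours, assuming Python with \texttt{numpy}; a generic symmetric matrix stands in for a discretisation of $\Hop{t}$) verifies that the Rayleigh quotient of any trial vector bounds the lowest eigenvalue from above, and that the supremum of the Rayleigh quotient over any $k$-dimensional trial subspace bounds the $k$th lowest eigenvalue from above.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n_dim, k = 60, 3
A = rng.normal(size=(n_dim, n_dim))
A = (A + A.T) / 2                       # generic symmetric stand-in for the operator
evals = np.sort(np.linalg.eigvalsh(A))

# A single unit trial vector gives an upper bound on the lowest eigenvalue.
v = rng.normal(size=n_dim)
v /= np.linalg.norm(v)
assert v @ A @ v >= evals[0] - 1e-10

# Courant-Fischer: for any k-dimensional trial subspace, the sup of the
# Rayleigh quotient over the subspace is an upper bound for the k-th eigenvalue.
Q, _ = np.linalg.qr(rng.normal(size=(n_dim, k)))
sup_rq = np.max(np.linalg.eigvalsh(Q.T @ A @ Q))
assert sup_rq >= evals[k - 1] - 1e-10
print(evals[:k], sup_rq)
\end{verbatim}
The proofs below use exactly this mechanism, with the spans of the functions in \eqref{eq:linearapprox} playing the role of the trial subspace.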
The first result will be to show that the eigenfunctions are `close' to linear near their boundary, which we will need in order to show that $\phi_{k,s,t}^{(a)}$ and $\psip{k}{a}$ are good approximations. Note that for $t>0$ we can extend $f_{k,t}$ to a function on $L^*[0,\infty)$ by taking $f_{k,t}(x)=0$ for $x<t$. This extension should be implicitly understood where necessary in the following computations. \begin{proposition} \label{prop:almostlinear} Let $\varphi_t(x)$ be an eigenfunction of $\Hop{t}$. Suppose that $\sup_{x\in [t, x_0]}|\varphi_t'(x)- \varphi_t'(t)| < \eta_{x_0}$ for some $\eta_{x_0}>0$. Then for every $0<\delta< 1/2$ and $x \in [t,x_0]$ there exists a (random) constant $C_{x_0,\delta}$ such that \begin{align*} |\varphi_t(x) - (x-t) \varphi_t'(t) | \leq C_{x_0,\delta}(\eta_{x_0}+\varphi_t'(t)) |x-t|^{2+\delta}. \end{align*} \end{proposition} \begin{proof} We begin with two bounds. \\ \noindent \emph{Bound on Brownian motion:} Using that Brownian motion is $\alpha$-H\"older continuous for $\alpha<1/2$, we have that almost surely, for all $x,t<x_0$, \begin{align} \label{eq:BMbnd} |b(x)-b(t)| \leq C_{x_0,\delta} |x-t|^{\delta},\text{ for } 0 < \delta <1/2. \end{align} \emph{Bound for $\varphi_t(x)$:} We apply the Mean Value Theorem to $\varphi_t(x)$ to get that for some $r \in (t,x)$ \begin{align} \varphi_t(x) = \varphi_t'(r) \cdot (x-t) \leq (\eta_{x_0} +\varphi_t'(t)) (x-t). \label{eq:phibnd} \end{align} Now suppose that $\varphi_t(x)$ is the eigenfunction corresponding to the eigenvalue $\lambda$. Then \begin{align*} \varphi_t'(x)-\varphi_t'(t) = \frac{2}{\sqrt{\beta}} \varphi_t(x)(b_x-b_t) - \frac{2}{\sqrt{\beta}} \int_t^x (b_y-b_t) \varphi_t'(y)dy + \int_t^x (y- \lambda)\varphi_t(y)dy. \end{align*} Applying the two bounds given at the beginning of this proof (\eqref{eq:BMbnd} and \eqref{eq:phibnd}) we obtain that \begin{align*} |\varphi_t'(x)-\varphi_t'(t)| \leq C_{x_0,\delta} (\eta_{x_0}+\varphi_t'(t))|x-t|^{1+\delta}. \end{align*} We use the mean value theorem again to get $\varphi_t(x)-(x-t) \varphi_t'(t) = (\varphi_t'(r)- \varphi_t'(t))(x-t)$, which gives us the final bound \begin{align*} |\varphi_t(x) - (x-t) \varphi_t'(t) | \leq C_{x_0,\delta}(\eta_{x_0}+\varphi_t'(t)) |x-t|^{2+\delta}. \end{align*} \end{proof} \begin{proposition} \label{prop:finitediff} Let $[c,d]$ be any interval. Then for any $\gamma<1/2$ and $0<\varepsilon< \frac{\rho}{t-s}$ there exists a constant $C_{\rho, \gamma}$ depending only on $\gamma$ and $\rho$ such that for any $s<t\in [c,d]$ and $i = 1,...,k$ we have \begin{equation} \label{eq:finitediff} \frac{\varepsilon}{1+\varepsilon}(f'_{i,s}(s))^2 - C_{\rho,\gamma}\varepsilon^{1+\gamma}(t-s)^\gamma \le \frac{\Lambda_{i}(t)- \Lambda_i(s)}{t-s} \le \frac{1+\varepsilon}{\varepsilon}(f'_{i,t}(t))^2 + C_{\rho,\gamma}\varepsilon^{1+\gamma}(t-s)^\gamma. \end{equation} \end{proposition} \begin{remark} \label{rmk:derivative} Observe that the choice $\varepsilon = (t-s)^{-\delta}$ meets the conditions of the proposition and converges to $\infty$ as $t\to s$. Therefore, if we can show that $\lim_{t\to s} f'_{i,t}(t)= f'_{i,s}(s)$, this will be enough to show that the process is differentiable with the derivative at $t$ given by $(f'_{i,t}(t))^2$.
\end{remark} \begin{corollary} \label{cor:lambdacont} The process $\G{t}{k}$ is continuous as a function of $t$. \end{corollary} This follows immediately from the inequality in Proposition \ref{prop:finitediff}; simply multiply through by $(t-s)$. \begin{proof}[Proof of Proposition \ref{prop:finitediff} for $k=1$] The idea is to use the variational characterization of our eigenvalues to get upper bounds using $\psip{1}{a}$ and $\phi_{1,s,t}^{(a)}$. In particular we have that \[ \Lambda_1(t) \le \frac{\fip{ \psip{1}{a}, \Hop{t} \psip{1}{a}}}{\|\psip{1}{a}\|_2^2}, \quad \text{ and } \quad \Lambda_1(s) \le \frac{\fip{ \phi_{1,s,t}^{(a)}, \Hop{s} \phi_{1,s,t}^{(a)}}}{\|\phi_{1,s,t}^{(a)}\|_2^2}. \] Before continuing we show that $\|\psip{k}{a}\|_2^2$ is close enough to 1 that it may be neglected for the remainder of the calculations. In particular we have \begin{align*} \|\psip{k}{a}\|_2^2 &= \|f_{k,s}\|_2^2 + \int_t^a (x-t)^2\frac{f_{k,s}^2(a)}{(a-t)^2} dx - \int_s^a f_{k,s}^2(x)dx. \end{align*} Applying Proposition \ref{prop:almostlinear} we obtain that \begin{align} \label{eq:finitediffnorm} |\|\psip{k}{a}\|_2^2-1|& \le \big((a-t)(a-s)^2+(a-s)^3\big)\frac{(f_{k,s}')^2(a)}{3} +\tilde C_{a,\gamma}(a-s)^{3+ \gamma}. \end{align} Taking $a = t+ \varepsilon(t-s)$ we obtain that $|\|\psip{k}{a}\|_2^{-2} -1| \le C(t-s)^3$. These errors may be bounded using the $C_{\rho,\gamma}\varepsilon^{1+\gamma}(t-s)^\gamma$ term in equation \eqref{eq:finitediff}. Because of this we will neglect the normalization for the remainder of the argument. We can then compute the following: \begin{align*} \fip{ \psip{1}{a}, \Hop{t} \psip{1}{a}} - \Lambda_1(s) & = \int_t^a \psip{1}{a}(x)\Hop{t} \psip{1}{a}(x)dx- \int_s^a f_{1,s}(x)\Hop{s} f_{1,s}(x)dx. \end{align*} We show that for $a=t+ \varepsilon(t-s)$ \begin{align} \label{eq:finitediffa} 0\le \fip{ \psip{1}{a}, \Hop{t} \psip{1}{a}} - \Lambda_1(s) \le (f_{1,s}')^2(s)(t-s) \frac{1+\varepsilon}{\varepsilon}+M(1+\varepsilon)^{1+\gamma}(t-s)^{1+\gamma}. \end{align} In order to do this we must bound $\int_t^a \psip{1}{a}(x)\Hop{t} \psip{1}{a}(x)dx$ above and $\int_s^a f_{1,s}(x)\Hop{s} f_{1,s}(x)dx$ below. We use H\"older continuity of Brownian motion to say that for $c<y<x<a$ and $\gamma<1/2$ fixed, with probability 1 there exists a constant $C_{a,\gamma}$ such that \[ |b_x-b_y|\le C_{a,\gamma}|x-y|^\gamma. \] This gives us \begin{align*} \int_t^a \psip{1}{a}(x)\Hop{t} \psip{1}{a}(x)dx & = \int_t^a \left( \frac{f_{1,s}(a)}{a-t}\right)^2 \left[1+ x(x-t)^2 + (b_x-b_t)(x-t)\right]dx\\ &\le f_{1,s}^2(a)\left( \frac{1}{a-t}+ \frac{(a-t)^2}{4}+t \frac{(a-t)}{3}+ \frac{C_{a,\gamma}(a-t)^{\gamma}}{2+\gamma}\right). \end{align*} An application of Proposition \ref{prop:almostlinear} allows us to write $f_{1,s}(x) \le (x-s)f_{1,s}'(s)+ C_{a,\gamma}(x-s)^{1+\gamma}$, which leads us to \begin{equation} \label{eq:finitediff1} \int_t^a \psip{1}{a}(x)\Hop{t} \psip{1}{a}(x)dx \le (f_{1,s}')^2(s)\frac{(a-s)^2}{a-t}+ M_{a,\gamma}(a-t)^{\gamma}(a-s)^2. \end{equation} Before continuing with the next bound we make the following observation: \[ \left|\int_s^a (x-s)^{1+\gamma} \Hop s (x-s) dx \right| \le C_{a,\gamma} (a-s)^{\gamma+1}, \] where $C_{a,\gamma}$ is a random constant depending on the interval $[c,d]$ and the choice of $a$ and $\gamma$.
From this we can check that Proposition \ref{prop:almostlinear} implies that \begin{equation} \label{eq:finitediff2} \left| \int_s^a f_{1,s}(x)\Hop{s} f_{1,s}(x)dx-\int_s^a (x-s)f'_{1,s}(s)\Hop{s} (x-s)f'_{1,s}(s)dx\right| \le C_{a,\gamma}(a-s)^{1+\gamma}. \end{equation} We finish the lower bound on $\int_s^a f_{1,s}(x)\Hop{s} f_{1,s}(x)dx$ by computing \begin{equation} \label{eq:finitediff3} \int_s^a (x-s)f'_{1,s}(s)\Hop{s} (x-s)f'_{1,s}(s)dx \ge (f_{1,s}')^2(s)(a-s)- M_{a,\gamma}(a-s)^{2+\gamma}. \end{equation} Putting together \eqref{eq:finitediff1}, \eqref{eq:finitediff2}, and \eqref{eq:finitediff3} we are led to the conclusion that for $a=t+ \varepsilon(t-s)$ we have \begin{equation} \Lambda_1(t)-\Lambda_1(s) \le (f_{1,s}')^2(s)(t-s) \frac{1+\varepsilon}{\varepsilon}+M\varepsilon^{1+\gamma}(t-s)^{1+\gamma}. \end{equation} This gives one of the inequalities in Proposition \ref{prop:finitediff} for $i =1$. Similar techniques may be used to study $\phi_{1,s,t}^{(a)}$. These lead to the inequality \begin{equation} \Lambda_1(t)-\Lambda_1(s) \ge (f_{1,t}')^2(t)(t-s) \frac{\varepsilon}{1+\varepsilon}-M\varepsilon^{1+\gamma}(t-s)^{1+\gamma}. \end{equation} \end{proof} \begin{proof}[Proof of Proposition \ref{prop:finitediff} for $k>1$] The idea here will be similar to the case where $k=1$, but we now have a more complicated variational characterization to work with, which leads to further terms that need to be considered. We start from the Courant--Fischer characterization of the eigenvalues, which is given by \begin{equation} \Lambda_k(t) = \inf_{B\subset L^*_t, \dim B=k} \sup_{g\in B} \frac{\fip{ g, \Hop t g}}{\fip{ g, g }}. \end{equation} From this characterization we have \[ \Lambda_k(t) \le \sup_{g\in B_s}\frac{\fip{ g, \Hop t g}}{\fip{ g, g }}, \quad \text{where } \quad B_s = \textup{Span}\, \{\psip{1}{a},...,\psip{k}{a}\}. \] We make the following observations: For $i \ne j\le k$ \[ \fip{ \psip{i}{a}, \psip{j}{a} } = \int_t^a \frac{(x-t)^2}{(a-t)^2}f_{i,s}(a)f_{j,s}(a)dx - \int_s^a f_{i,s}(x)f_{j,s}(x)dx, \] and so an application of Proposition \ref{prop:almostlinear} gives us that \begin{equation} \label{eq:finitediff4} |\fip{ \psip{i}{a}, \psip{j}{a} }| \le \frac{1}{3}(t-s)(a-s)^2f'_{i,s}(s)f'_{j,s}(s) + C(a-s)^{3+\gamma}. \end{equation} Now observe that for all $j$, using bounds identical to those used to prove \eqref{eq:finitediffa}, we can show that \begin{equation} 0 \le \fip{ \psip{j}{a}, \Hop{t} \psip{j}{a}} - \Lambda_j(s) \ \le\ (f'_{j,s})^2(s)\frac{(a-s)^2}{a-t} + M_{a,\gamma}(a-s)^{1+\gamma}.
\end{equation} Further we get that for $g= c_1 \psip{1}{a} + \cdots + c_k \psip{k}{a} \in B_s$ we have \begin{align*} \fip{ g , \Hop{t} g} &\le \sum_{j=1}^k c_j^2 \left( \Lambda_j(s) + (f'_{j,s})^2(s)\frac{(a-s)(t-s)}{a-t} + M_{a,\gamma}(a-s)^{1+\gamma}\right) \\ & \hspace{2cm}+2\Lambda_k(t) \sum_{j_1<j_2}c_{j_1}c_{j_2}|\fip{ \psip{j_1}{a},\psip{j_2}{a}}|. \end{align*} Taking $a = t+\varepsilon(t-s)$ and applying the bound in \eqref{eq:finitediff4} we are led to \begin{align*} \Lambda_k(t) &\le \sup_{c_1,...,c_k} \|g\|_2^{-2} \sum_{j=1}^k c_j^2 \left( \Lambda_j(s) + (f'_{j,s})^2(s)\frac{(1+\varepsilon)(t-s)}{\varepsilon} + M_{a,\gamma}\big((1+\varepsilon)(t-s)\big)^{1+\gamma}\right) \\ & \hspace{2cm}+2\Lambda_k(t) \sum_{j_1<j_2}c_{j_1}c_{j_2}(1+\varepsilon)^2(t-s)^3(f'_{j_1,s}(s)f'_{j_2,s}(s) + C). \end{align*} For $t$ sufficiently close to $s$ this is maximal for $c_j =0$ for $j \ne k$ and $c_k = \|\psip{k}{t+\varepsilon(t-s)}\|_2^{-1}$. This is because the $\Lambda_1(s)< \Lambda_2(s)< \cdots <\Lambda_k(s)$ are fixed and distinct with probability 1, but all the remaining terms (except possibly the error term on the first line) converge to $0$ as $t\to s$. The error term is identical in all terms so does not change the optimization. Therefore, for $t$ sufficiently close to $s$, the quantity $\Lambda_k(s) + (f'_{k,s})^2(s)\frac{(1+\varepsilon)(t-s)}{\varepsilon} + M_{a,\gamma}\big((1+\varepsilon)(t-s)\big)^{1+\gamma}$ is the dominant term and so the right hand side is maximized when all of the $c_i$ are 0 except for $c_k$. By the earlier argument, see \eqref{eq:finitediffnorm}, we have that $c_k^2= 1+ O((t-s)^3)$, and so the error we obtain by replacing $c_k$ with $1$ may be neglected. This gives us that \[ \Lambda_k(t) - \Lambda_k(s) \le (f'_{k,s})^2(s)\frac{(1+\varepsilon)(t-s)}{\varepsilon} + M_{a,\gamma}\big((1+\varepsilon)(t-s)\big)^{1+\gamma} , \] which completes the upper bound in the proposition. To complete the lower bound we perform a similar analysis with $B_s= \textup{Span}\, \{\phi_{1,s,t}^{(a)},...,\phi_{k,s,t}^{(a)}\}$. \end{proof} \begin{lemma} \label{lemma:unifconv} The eigenfunctions $f_{1,s},...,f_{k,s}$ of $\Hop s$ converge uniformly on compact subsets to the eigenfunctions $f_{1,t},...,f_{k,t}$ of $\Hop {t}$ as $s \to t$. \end{lemma} \begin{proof} We reuse the notation and approximating functions introduced in \eqref{eq:linearapprox}. We will show that the lemma holds for $s\searrow t$ by using the functions $\phi_{k,s,t}^{(a)}$. One can show the identical result for $s\nearrow t$ by instead using the functions $\psip{k}{a}$. We consider families of functions of the form \[ ( \phi_{1,s,t}^{(t+\varepsilon(t-s))}, \phi_{2,s,t}^{(t+\varepsilon(t-s))}, ... ,\phi_{k,s,t}^{(t+\varepsilon(t-s))}). \] From the proof of Proposition \ref{prop:finitediff} we get that $\fip{ \phi_{j,s,t}^{(t+\varepsilon(t-s))}, \Hop {s}\phi_{j,s,t}^{(t+\varepsilon(t-s))}} \to \Lambda_j(s)$ as $s \searrow t$. We apply Fact 2.2 from \cite{RRV} to get that there exists a subsequence $s_j \searrow t$ and functions $(g_1,...,g_k)$ such that \[ ( \phi_{1,s_j,t}^{(t+\varepsilon(t-s_j))}, \phi_{2,s_j,t}^{(t+\varepsilon(t-s_j))}, ... , \phi_{k,s_j,t}^{(t+\varepsilon(t-s_j))})\to (g_1,...,g_k) \] uniformly on compact subsets in $L^2$ and weakly in $H^1$.
It remains to be shown that $(g_1,...,g_k)=(f_{1,t},...,f_{k,t})$, the eigenfunctions of $\Hop{t}$. To complete the picture we use the variational derivative characterization $\frac{d}{d\varepsilon}\fip{ g_j+\varepsilon h, \Hop{t} (g_j+\varepsilon h)}|_{\varepsilon=0} $ to get that $g_j$ satisfies $\Hop{t} g_j = \tilde \Lambda_j g_j$ for some $\tilde \Lambda_j$, and so $g_j$ is an eigenfunction of $\Hop{t}$. The strict ordering of the eigenvalues is then enough to give $\tilde \Lambda_j = \Lambda_j(t)$. It follows that $g_j = f_{j,t}$. Therefore we conclude that we in fact have \[ ( \phi_{1,s_j,t}^{(t+\varepsilon(t-s_j))}, \phi_{2,s_j,t}^{(t+\varepsilon(t-s_j))}, ... , \phi_{k,s_j,t}^{(t+\varepsilon(t-s_j))})\to (f_{1,t},...,f_{k,t}) \] uniformly on compact subsets in $L^2$ and weakly in $H^1$. \end{proof} This weak convergence in $H^1$ suggests that we should have convergence of the derivatives $f_{j,t}'(t)\to f_{j,t_0}'(t_0)$ as $t\to t_0$, and indeed, by making use of the fact that the eigenfunctions are almost linear near the boundary point, this can be shown. In particular, if the eigenfunctions are approximately linear near their endpoint then convergence on compact subsets will imply that the derivatives converge at the endpoints. \begin{lemma} For all $t_0$, and any $j=1,...,k$, we have $\lim_{t \to t_0} f_{j,t}'(t) = f_{j,t_0}'(t_0)$. \end{lemma} \begin{proof} Let $\varepsilon>0$. We use the following: in a fixed neighborhood of $t_0$ we have the bound from Proposition \ref{prop:almostlinear} with a constant $C_{\delta, \gamma}$ depending on the neighborhood size $\delta$ and on $0<\gamma<1/2$. We now observe that for $x> t\vee t_0$ in a neighborhood of $t_0$ we have \begin{align*} |f_{j,t}'(t)-f_{j,t_0}'(t_0)| & \le |f_{j,t}'(t)-\frac{f_{j,t}(x)}{x-t}|+|f_{j,t_0}'(t_0)-\frac{f_{j,t_0}(x)}{x-t_0}|+ |\frac{f_{j,t}(x)}{x-t}-\frac{f_{j,t_0}(x)}{x-t_0}|\\ & \le C_{\delta, \gamma} \big((\eta_\delta+f_{j,t}'(t))(x-t)^{1+\gamma}+(\hat\eta_\delta+f_{j,t_0}'(t_0))(x-t_0)^{1+\gamma}\big)\\ & \hspace{9cm}+ |\frac{f_{j,t}(x)}{x-t}-\frac{f_{j,t_0}(x)}{x-t_0}|. \end{align*} The convergence result of Lemma \ref{lemma:unifconv} gives us that the final term may be made arbitrarily small as $t\to t_0$ for any fixed $x$. Choose $x$ close enough to $t_0$ (and hence to $t$) so that the first two terms are each bounded by $\varepsilon/3$; then, by letting $t$ go to $t_0$ (which does not impact the bounds on the first two terms), we get that the final term is also bounded by $\varepsilon/3$. Therefore \[ \lim_{t\to t_0} f_{j,t}'(t) = f_{j,t_0}'(t_0). \] \end{proof} \begin{proposition} \label{prop:derivativederivative} For any fixed $k$ the process $\G{t}{k}$ is differentiable as a function of $t$, with the derivatives given by \[ \frac{d}{dt} \Lambda_j(t) =( f_{j,t}'(t))^2. \] \end{proposition} See Remark \ref{rmk:derivative} for the proof. \section{The discrete to continuous convergence} In this section we use the machinery developed for the proof of the original soft edge limit in order to show convergence of the $t$-dependent eigenvalue process. To do this we begin by recalling the general convergence theorem from Section 5 of \cite{RRV}.
\begin{theorem}[Theorem 5.1 \cite{RRV}] \label{thm:conv} Suppose that $H_n$ is a tridiagonal matrix with \begin{align*} \text{ diagonal}& \quad 2m_n + m_n y_{n,1}(1), 2m_n + m_n y_{n,1}(2), 2m_n + m_n y_{n,1}(3),...\\ \text{ off-diagonal}& \quad -m_n + \frac{1}{2}m_n y_{n,2}(1), -m_n + \frac{1}{2}m_n y_{n,2}(2), -m_n + \frac{1}{2}m_n y_{n,2}(3),... \end{align*} and $H= - \partial_x^2 + Y'(x)$ acting on $H^1_{\textup{loc}}$, mapping into $D$, the space of distributions, with boundary condition $f(0)=0$ (see \cite{RRV} for further details). Let $Y_{n,i}(x) = \sum_{j=1}^{\lfloor nx\rfloor} y_{n,i}(j)$. For any fixed $k$, the bottom $k$ eigenvalues of $H_n$ converge to the bottom $k$ eigenvalues of $H$ if the following two conditions are met: \begin{enumerate} \item (Tightness/Convergence) There exists a process $x\mapsto Y(x)$ such that \begin{align*} (Y_{n,i}(x): x\ge 0 ) & \quad\ i = 1,2 \quad \text{ are tight in law},\\ (Y_{n,1}(x)+ Y_{n,2}(x): x\ge 0 )&\ \Rightarrow \ (Y(x);x\ge 0) \quad \text{ in law}, \end{align*} with respect to the Skorokhod topology of paths; see \cite{EthierKurtz} for the definitions. \item (Growth/Oscillation bound) There is a decomposition \[ y_{n,i}(k) = \frac{1}{m_n}(\eta_{n,i}(k)+ \omega_{n,i}(k)), \] for $\eta_{n,i}(k)\ge 0$, deterministic, unbounded non-decreasing functions $\bar \eta(x)>0, \zeta(x) \ge 1$, and random constants $\kappa_n(\omega)\ge 1$ defined on the same probability space which satisfy the following: The $\kappa_n$ are tight in distribution, and, almost surely, \begin{align*} \bar \eta(x)/ \kappa_n - \kappa_n \le \eta_{n,1}(x)+ \eta_{n,2}(x) &\le \kappa_n(1+ \bar \eta(x)),\\ \eta_{n,2}(x) &\le 2 m_n^2,\\ |\omega_{n,1}(\xi)- \omega_{n,1}(x)|^2+|\omega_{n,2}(\xi)- \omega_{n,2}(x)|^2 & \le \kappa_n(1+ \bar \eta(x)/ \zeta(x)), \end{align*} for all $n$ and $x,\xi \in [0, n/m_n]$ with $|x-\xi|\le 1$. \end{enumerate} \end{theorem} Ram\'irez, Rider, and Vir\'ag show in Section 6 of \cite{RRV} that the tridiagonal model $\Hmat{k}$ defined in \eqref{eq:ktridiagonal} with $k=1$ satisfies the conditions of the theorem with $m_n=n^{1/3}$ and $Y(x)=\frac{x^2}{2}+ \frac{2}{\sqrt \beta} b_x$. The same arguments may be used to show that for $\Hmat{\lfloor tn^{1/3}\rfloor}$ the same convergence statements hold with $m_n= n^{1/3}$ and $Y(x) = \frac{x^2}{2}- t x + \frac{2}{\sqrt \beta} b_x$. These are two different distributional convergence statements, but with a slight modification of the proof of Theorem \ref{thm:conv} we may show a joint distributional convergence for any finite collection $\{t_1, t_2,...,t_\ell\}$. \begin{proof}[Proof of Theorem \ref{thm:discretetocont}] Let $t_1<\cdots <t_\ell$ be any finite collection of times (possibly negative). We observe, using the work in Section 6 of \cite{RRV}, that the matrices $\Hmat{\lfloor t_1 n^{1/3}\rfloor},\Hmat{\lfloor t_2 n^{1/3}\rfloor},...,\Hmat{\lfloor t_\ell n^{1/3}\rfloor}$ satisfy the conditions of Theorem \ref{thm:conv} with $m_n= n^{1/3}$ and $Y^{(t_j)}(x) = \frac{x^2}{2}- t_j x + \frac{2}{\sqrt \beta} b_x$. Moreover we have \[ y_{n,i}^{(t_j)}(k) = y_{n,i}^{(t_1)}(k+\lfloor t_j n^{1/3}\rfloor). \] Because of this identity, if the conditions of Theorem \ref{thm:conv} hold for $t_1$ then they also hold for $t_2,...,t_\ell$.
Therefore for any subsequence we can extract a further subsequence such that we have the following joint distributional convergence: \begin{align*} ( \int_0^x \eta_{n,i}^{(t_j)}(y)dy; x\ge 0 ) & \Rightarrow (\int_0^x \eta_i^{(t_j)} (y)dy; x\ge 0), \\ (Y^{(t_j)}_{n,i}(x); x\ge 0 ) & \Rightarrow ( \frac{x^2}{2}- t_j x +\frac{2}{\sqrt \beta} b_i(x+t_j) ; x\ge 0), \quad j = 1,...,\ell, \\ \kappa_n^{(t_j)} & \Rightarrow \kappa^{(t_j)}, \end{align*} where the first line converges uniformly on compact subsets and the second in the Skorokhod topology. Notice that the Brownian motions $b_i$ that appear are the same for all $j$. The Skorokhod representation theorem (see Theorem 1.8, Chapter 3, of \cite{EthierKurtz}) gives us that there exists a probability space on which the necessary convergence statements hold with probability 1. This allows us to reduce to working with the deterministic case, and the remainder of the proof goes through unchanged. At this point we have proved that \[ \left\{ n^{1/6}\lambda_1(n,\lfloor t_j n^{1/3}\rfloor), n^{1/6}\lambda_2(n,\lfloor t_j n^{1/3}\rfloor),..., n^{1/6}\lambda_k(n,\lfloor t_j n^{1/3}\rfloor) \right\}_{j=1,...,\ell} \Rightarrow \left\{ \Lambda_1(t_j ), \Lambda_2(t_j),..., \Lambda_k(t_j) \right\}_{j=1,...,\ell}, \] where $\Lambda_1(t_j)< \Lambda_2(t_j)< \cdots $ are the eigenvalues of the operator \[ H^{(t_j)} = - \frac{d^2}{dx^2} + x- t_j + \frac{2}{\sqrt \beta}b'(x+t_j) \] acting on functions in $L^*[0,\infty)$. These are exactly the eigenvalues of the operator $\Hop{t_j}$ defined in \eqref{eq:Ht}. Therefore we have convergence of finite dimensional distributions, which completes the proof of Theorem \ref{thm:discretetocont}. \end{proof} \section{Distribution of the derivatives} We begin by showing that the eigenvectors of the discrete operator follow an approximately linear pattern, where the `slope' is determined by the first entry of the eigenvector. Because we know the distribution of the spectral weights, which are found in these first entries, we can then use this property to determine the distribution of the eigenfunctions in the limit. This will in turn give us the derivative of the process as desired. Before continuing on to the proof of the proposition we will need some information on the distribution of $v_1$. \begin{lemma}[Dumitriu-Edelman \cite{DE}] \label{lemma:dirichlet} The spectral weights $q_i = (v_1^{(i)})^2$ associated to the tridiagonal model in \eqref{eq:tridiagonal} have a Dirichlet distribution with parameters $(\frac{\beta}{2},...,\frac{\beta}{2})$. These weights are the squares of the first entries of the normalized eigenvectors. The marginal distribution of a single spectral weight is \[ q_i \sim \textup{Beta} \Big(\frac{\beta}{2},(n-1)\frac{\beta}{2}\Big), \quad Eq_i = \frac{1}{n}, \quad \text{Var } q_i = \frac{\beta (n-1)}{n^2(\beta n+2)}. \] \end{lemma} \begin{lemma} \label{lem:gammalimit} Let $q_i$ be as above. Then for any $\ell \in \mathbb{N}$ we have \[ n(q_1, q_2,...,q_\ell ) \Rightarrow (\Gamma_{1}, \Gamma_{2},..., \Gamma_{\ell}), \quad \Gamma_{i}\sim \Gamma(\tfrac{\beta}{2}, \tfrac{2}{\beta}), \] where $\Gamma_{1},...,\Gamma_{\ell}$ are independent. This is using the shape and scale convention for Gamma random variables. \end{lemma} \begin{proof} From Lemma \ref{lemma:dirichlet} we know that the spectral weights $q_1,...,q_n$ are exchangeable with distribution Dirichlet$(\frac{\beta}{2}, ...,\frac{\beta}{2})$ and $(v^{(i)}_1)^2 = q_i$.
We now use the following characterization of the Dirichlet distribution: Let $X_1, X_2,...,X_n$ be independent identically distributed with $X_i \sim $Gamma$(\frac{\beta}{2},1)$; then for \[ q_i = \frac{X_i}{X_1+ X_2 + \cdots +X_n}, \quad \text{ we get } \quad (q_1, q_2, ...,q_n) \sim \textup{Dirichlet}( \tfrac{\beta}{2},..., \tfrac{\beta}{2}). \] We observe that by the strong law of large numbers $(X_1+ \cdots + X_n)/n \to \frac{\beta}{2}$ almost surely, and note that for $\eta \sim $ Gamma$(k,\theta)$ we have $c \eta \sim $ Gamma$(k,c\theta)$. Therefore this characterization is enough to give the desired joint convergence statement. \end{proof} We now prove a proposition showing that the eigenvector is close enough to linear that in the limit the derivative at 0 is determined by the distribution of the spectral weights. \begin{proposition} \label{lem:discretelinear} Let $v$ be the eigenvector associated to $\lambda_i(n,0)$, the $i$th lowest eigenvalue of $\Hmat{0}$ defined in \eqref{eq:ktridiagonal}. For any $\varepsilon>0$ there exists a set $\mathcal{A}_\varepsilon \subset \Omega$ with $P(\mathcal{A}_\varepsilon)>1-\varepsilon$, and $x_0>0$ sufficiently small, such that for all $t \in [0,x_0]$ and $\omega \in \mathcal{A}_\varepsilon$ \[ |v_{\lfloor tn^{1/3}\rfloor}- \lfloor tn^{1/3}\rfloor v_1| \le C \frac{t^2}{n^{1/6}} \sqrt{\frac{x_0}{\varepsilon}}. \] \end{proposition} \begin{proof} Recall that we are working with the matrix $\Hmat{0} = n^{1/6}(2\sqrt{n}- A_\beta)$. To start, let us scale the $n^{2/3}$ out of the leading term; the resulting matrix has the form \[ n^{-2/3}H_n^\beta = \left[ \begin{array}{cccc} 2 + \rho_1 & -1 + r_1 &&\\ -1+ r_1 & 2+ \rho_2 & -1+ r_2 & \\ & \ddots & \ddots & \ddots \end{array}\right]. \] Under this we have that $\rho_i \sim \frac{1}{\sqrt n} \mathcal{N}(0,2)$ and $r_{i} = \frac{i}{2 n} + \frac{1}{\sqrt n} \eta_i$, where $\eta_i$ is an order 1, mean 0 random variable with Gaussian tails. Notice that this rescaling does not change the distribution of the eigenvectors, but the lowest eigenvalues will now be on the order of $n^{-2/3}$. Before we start we give two bounds: For the first we use Doob's martingale inequality to get that \begin{equation} \label{eq:RVsumbnd} P\left( \max_{1\le k \le x_0n^{1/3}} \frac{1}{\sqrt{n}}\sum_{\ell=1}^k \eta_\ell \ge \frac{\sqrt{x_0}}{n^{1/3}} M \right) \le \frac{C}{M^2}, \end{equation} with a similar bound holding for the $\rho$'s. Next, suppose that $a_1, a_2, a_3,...$ is a sequence such that $|(a_{k+1}-a_k) - a_1|\le \tau$ for all $k\ge 1$; then \begin{equation} \label{eq:incbnd} |a_k| \le k(a_1+ \tau). \end{equation} Now we move on to the main part of the proof. Suppose that $v$ solves $\Hmat{0} v = \lambda_j v$ with $\|v\|=1$. We can check that $v_k$ satisfies the following: \begin{align} v_{k+1}-v_k &= (v_k-v_{k-1})+ r_{k-1}v_{k-1}+ (\rho_k-\lambda)v_k+ r_kv_{k+1}\notag \\ &= v_1 + \sum_{\ell=1}^{k} r_{\ell-1}v_{\ell-1}+ \rho_\ell v_\ell -\lambda v_\ell+ r_\ell v_{\ell+1}. \label{eq:aaa} \end{align} If we assume that $|v_{k+1}-v_k- v_1| \le 1/\sqrt{n}$ for $k \le x_0 n^{1/3}$, then using \eqref{eq:incbnd} we get that $|v_k| \le k(v_1+ 1/\sqrt{n})$.
We rewrite the final term in the sum: \[ \sum_{\ell=1}^k r_\ell v_{\ell+1} = \sum_{\ell=1}^k (R_{\ell+1}-R_{\ell}) v_{\ell+1} = v_{k+1} R_{k+1} + \sum_{\ell=1}^k R_\ell(v_{\ell}-v_{\ell-1}), \] where $R_{k}$ denotes the partial sum $\sum_{\ell=1}^{k-1} r_\ell$. By \eqref{eq:RVsumbnd} there exists a set $\mathcal{A}_M$ of probability at least $1-\frac{C}{M^2}$ such that $|R_k| \le \frac{\sqrt{x_0}M}{n^{1/3}} $ for some fixed $C$. On this set we get \[ \sum_{\ell=1}^k r_\ell v_{\ell+1} \le 2(v_1+1/\sqrt{n}) \frac{\sqrt{x_0}M }{n^{1/3}} \cdot k. \] The same argument holds for the other two random terms in \eqref{eq:aaa}. Using Lemma \ref{lemma:dirichlet} and the observation $v_1^2= q_j$ (so that $v_1$ is of order $1/\sqrt{n}$), together with the fact that $\lambda_j$ is of order $1/n^{2/3}$, we get that on a set of probability at least $1-3\frac{C}{M^2}$, \[ |v_{k+1}-v_k - v_1| \le C \frac{1}{\sqrt{n}} \frac{\sqrt{x_0}M k}{n^{1/3}}. \] This validates our original assumption that $|v_{k+1}-v_k- v_1| \le 1/\sqrt{n}$ for $k \le x_0 n^{1/3}$ where $x_0$ is sufficiently small. Finally this yields \[ |v_{k+1}-(k+1)v_1| \le C \frac{1}{\sqrt{n}} \frac{\sqrt{x_0}M k(k+1)}{2n^{1/3}}, \] which for $k = \lfloor tn^{1/3}\rfloor$ gives us \[ |v_{\lfloor tn^{1/3}\rfloor}- \lfloor tn^{1/3}\rfloor v_1| \le \tilde C \frac{1}{n^{1/6}}t^2 \sqrt{x_0}M \quad \text{ for some constant } \tilde C. \] Take $M = \frac{C'}{\sqrt \varepsilon}$ for some constant $C'$ to complete the proof. \end{proof} \begin{proposition} At any fixed time $t>0$ the derivatives of the process $\G{t}{k}$ in $t$ are independent with distribution \[ \frac{d}{dt} \Lambda_j(t) = \Gamma_j(t),\quad \text{ for i.i.d.}\quad \Gamma_j(t) \sim \textup{ Gamma }(\tfrac{\beta}{2},\tfrac{2}{\beta}), \] for $j = 1, 2, ...,k$. \end{proposition} \begin{proof} We begin with the observation that the process $\G{t}{k}-t$ is stationary in $t$, and therefore the distribution of the derivative for all $t$ is determined by the distribution of the derivative at $t=0$. Let $v^{(1)},...,v^{(\ell)}$ be the eigenvectors of $\Hmat{0}$. We will show that $\{(f_{i,0}'(0))^2\}_{i=1}^\ell$ are independent with the desired distribution by showing that $\lim_{n\to \infty} \sqrt{n} v^{(i)}_{1} \ed f_{i,0}'(0)$, which together with Lemma \ref{lem:gammalimit} will imply the result. We embed the eigenvectors $v^{(1)},...,v^{(\ell)}$ of $\Hmat{0}$ as step functions with $v^{(i,n)}(x)=n^{1/6} v^{(i)}_{\lfloor x n^{1/3}\rfloor}$ in $L^2[0,n^{2/3})$, and we can check that $\|v^{(i,n)}\|_{L^2}^2= (n^{1/6})^2 n^{-1/3} \|v^{(i)}\|_2^2=1$. Similarly we embed the vector $L_{v^{(i)}} = [v_1^{(i)}, 2 v_1^{(i)} , 3v_1^{(i)} , ..., \lfloor tn^{1/3}\rfloor v_1^{(i)} , 0,...,0 ]^t$ as the step function $L_{v^{(i)}}^{(n)}(x) = n^{1/6} v_1^{(i)}\lfloor xn^{1/3}\rfloor$ for $x<t$. Here we perform the truncation so that the $L^2$ norm remains bounded.
From the proof of Theorem \ref{thm:discretetocont} (see Lemma 5.8 of \cite{RRV}) we get that there exists a subsequence along which \begin{align} \label{eq:efuncconvergence} \begin{array}{c} (v^{(1,n)}, v^{(2,n)},\dots , v^{(\ell,n)})\\ (L_{v^{(1)}}^{(n)}(x), L_{v^{(2)}}^{(n)}(x), \dots , L_{v^{(\ell)}}^{(n)}(x) )\\ (v^{(1,n)}(x)- L_{v^{(1)}}^{(n)}(x), \dots , v^{(\ell,n)}(x)-L_{v^{(\ell)}}^{(n)}(x)) \end{array} \Rightarrow \begin{array}{c} (f_{1,0}, f_{2,0},\dots , f_{\ell,0})\\ (x\sqrt{\Gamma_{1}},x \sqrt{ \Gamma_{2}},\dots , x\sqrt{\Gamma_{\ell}})\\ (f_{1,0}(x)-x\sqrt{\Gamma_{1}}, \dots ,f_{\ell,0}(x)-x\sqrt{\Gamma_{\ell}}) \end{array} \end{align} jointly in law as functions of $x$, where the $\Gamma_i$ are defined as in Lemma \ref{lem:gammalimit}. From Proposition \ref{lem:discretelinear} we have that for any $\varepsilon>0$ there exists $\mathcal{A}_\varepsilon\subset \Omega$ with $P(\mathcal{A}_\varepsilon)>1-\varepsilon$ such that for all $x<x_0$ \[ |n^{1/6}v^{(i)}_{\lfloor xn^{1/3}\rfloor}- n^{1/6} \lfloor xn^{1/3}\rfloor v^{(i)}_1| \le C x^2 \sqrt{\frac{x_0}{\varepsilon}}. \] From the distributional limit it follows that the same holds for $f_{i,0}(x)-x\sqrt{\Gamma_{i}}$. On the set $\mathcal{A}_{\varepsilon,\infty}$ we get that \[ \lim_{x\to 0} \frac{f_{i,0}(x)- x\sqrt{\Gamma_{i}}}{x} = 0, \] which is equivalent to $f_{i,0}'(0)= \sqrt{\Gamma_{i}}$. Since $\varepsilon$ may be made arbitrarily small this is enough to give us that it holds with probability 1. \end{proof} \section{The inverse process} We will give a description of the inverse process of $\G{t}{k}$ in terms of the explosion times of a family of stochastic differential equations. To start we recall the definition of the Riccati diffusion associated to the operator $\Hop{t}$. Assume that $f$ satisfies the equation \begin{equation} \Hop{t} f (x)= \lambda f(x), \quad x \ge t. \end{equation} Then $p_{\lambda,t} = f'/f$ satisfies the SDE \begin{equation} dp_{\lambda,t} = (x- \lambda -p^2_{\lambda,t})dx - \frac{2}{\sqrt \beta} db_x, \quad p_{\lambda,t}(t)= +\infty. \end{equation} Here $b_x$ is a standard Brownian motion and the process enters from $\infty$ instantaneously. If the diffusion blows down to $-\infty$ it is restarted instantaneously at $+\infty$. Notice that the $t$ dependence here lies in the initial condition. The Riccati diffusion gives us a characterization of the counting function of the spectrum of $\Hop{t}$. In particular, for $N^{(t)}(\mu) $ the number of eigenvalues of $\Hop{t}$ less than $\mu$, we get \[ N^{(t)}(\mu) = \# \text{ times that } p_{\mu, t} \text{ explodes to }- \infty. \] In addition we get that $\lambda$ is an eigenvalue of $\Hop{t}$ if and only if $\lim_{x\to \infty} \frac{p_{\lambda,t}(x)}{\sqrt{x}} = -1$. In the forward direction this is a measure zero event because the lower edge of the parabola $y^2=x-\lambda$ is an unstable saddle point; however, if we run time backward this arc becomes attracting. We consider the diffusion $q_{\lambda}(x)= p_\lambda(-x)$, which satisfies the SDE \[ dq_\lambda = (x+\lambda+q^2_\lambda)dx - \frac{2}{\sqrt\beta} d\tilde b_x , \quad \lim_{x\to-\infty} \frac{q_\lambda(x)}{ \sqrt{-x-\lambda}} = - 1. \] Here the boundary condition is at $-\infty$ and $\tilde b_x = b_{-x}$. We define \begin{equation} - \tau_k(\lambda) = k\text{th explosion time of } q_\lambda.
\end{equation} \begin{theorem} \label{thm:inverse} The set $\{ \tau_1(\lambda),...,\tau_k(\lambda)\}$ is the inverse of $\G{t}{k}$ in the sense that \[ \Lambda_\ell( \tau_\ell(\lambda)) = \lambda \quad \text{ and } \quad \tau_\ell (\Lambda_\ell(t))= t. \] \end{theorem} \begin{corollary} The explosion times $\tau_\ell(\lambda)$ are differentiable in $\lambda$ with \[ \partial_\lambda \tau_\ell (\lambda) = \frac{1}{Y_{\ell, \tau_{\ell}(\lambda)}}, \] where $Y_{\ell,t}= \partial_t \Lambda_\ell (t)$ as defined in \eqref{eq:derivativedist}. \end{corollary} \begin{proof}[Proof of Theorem \ref{thm:inverse}] From the definition of $q_\lambda$ it follows that $p_{\lambda, \tau_\ell(\lambda)}(\tau_{\ell}(\lambda))= +\infty$ and $\lim_{x\to \infty} \frac{p_{\lambda, \tau_\ell(\lambda)}(x)}{\sqrt x} = -1$; moreover, $p_{\lambda, \tau_\ell(\lambda)}$ will explode $\ell-1$ times in finite time, and for any $\mu>\lambda$ the diffusion $p_{\mu, \tau_\ell(\lambda)}$ will explode $\ell$ times. Therefore $\lambda$ is the $\ell$th eigenvalue of the operator $\Hop{\tau_\ell(\lambda)}$. \end{proof} Towards computing further derivatives we write down an eigenfunction for $\Hop{\tau_1(\lambda)}$. Let $T$ be a large negative number and define $f(-x) = g(x) = \exp (- \int_T^x q_\lambda(y)dy)$. Then $f(x)$ is an eigenfunction of $\Hop{\tau_1(\lambda)}$ with eigenvalue $\lambda$. We can check: \[ g'(x) = - q_\lambda(x) g(x) \quad \text{ which gives } \quad \frac{f'}{f}(x) = p_\lambda(x), \] which has the appropriate boundary conditions for $\lambda$ to be an eigenvalue of $\Hop{\tau_1(\lambda)}$. We now observe that \[ g'(-\tau_1(\lambda)) = \lim_{x \to -\tau_1(\lambda)} - q_\lambda(x) e^{- \int_T^x q_\lambda(y)dy}. \] We can check that $q_\lambda(x)$ blows up at rate $1/x$ near its explosion time. This gives us that $f'(\tau_1(\lambda)) = 1$. Therefore, in order to make a comparison with our earlier work we need to consider the normalized version, where we study $f/\|f\|_2$, which has derivative $1/\|f\|_2$ at the boundary. Because of this, in order to study higher derivatives of the eigenvalue process it is enough to study derivatives of $\|f\|_2= \|g\|_2$. We compute: \begin{align*} \frac{d}{d\lambda} \| g\|_2^2 & = \frac{d}{d\lambda} \int_{-\infty}^{-\tau_1(\lambda)} g^2(x)dx\\ & =- g^2(-\tau_1(\lambda)) \frac{d \tau_1(\lambda)}{d\lambda} + \int_{-\infty}^{-\tau_1(\lambda)} \frac{d}{d\lambda} \left( e^{-2 \int_T^x q_\lambda(y)dy}\right) dx\\ & = \int_{-\infty}^{-\tau_1(\lambda)} \left(-2 \int_T^x \partial_\lambda q_\lambda(y)dy \right) g^2(x) dx. \end{align*} Here we use the boundary condition $g(-\tau_1(\lambda)) = 0$. Similarly, \begin{align*} \frac{d^2}{d\lambda^2} \|g\|_2^2 = \int_{-\infty}^{-\tau_1(\lambda)} \left(-2 \int_T^x \partial_\lambda q_\lambda(y)dy \right)^2 g^2(x) - 2 g^2(x) \int_T^x \partial^2_\lambda q_\lambda(y)dy \, dx. \end{align*} The derivatives of $q_\lambda$ with respect to $\lambda$ satisfy \begin{align} d( \partial_\lambda q_\lambda)&= \big(1 + 2 q_\lambda (\partial_\lambda q_\lambda) \big)dx,\\ d( \partial_\lambda^2 q_\lambda)&=2\big( q_\lambda (\partial_\lambda^2 q_\lambda) + (\partial_\lambda q_\lambda)^2\big)dx. \end{align} \end{document}
\begin{document} \title{Hermitian matrices of roots of unity and their \ characteristic polynomials} \begin{abstract} We investigate spectral conditions on Hermitian matrices of roots of unity. Our main results are conjecturally sharp upper bounds on the number of residue classes of the characteristic polynomial of such matrices modulo ideals generated by powers of $(1-\zeta)$, where $\zeta$ is a root of unity. We also prove a generalisation of a classical result of Harary and Schwenk about a relation for traces of powers of a graph-adjacency matrix, which is a crucial ingredient for the proofs of our main results. \end{abstract} \section{Introduction} Throughout, we denote the identity matrix by $I$ and the all-ones matrix by $J$, the order of $I$ and $J$ will be understood from context. Fix an integer $q \geqslant 2$. Let $\zeta$ be a primitive $q$th root of unity and let $\mathcal C_q = \langle \zeta \rangle$ be the set of powers of $\zeta$. Denote by $\mathcal H_n(q)$ the set of all Hermitian $\mathcal C_q$-matrices of order $n$, i.e., the set of Hermitian matrices whose entries are powers of $\zeta$. Matrices from the set $\mathcal H_n(q)$ play a central role in various areas of mathematics, including discrete geometry~\cite{lemmens73}, quantum information theory~\cite{stacey}, group theory~\cite{taylor}, and wireless communication~\cite{heath}. One pertinent example where such matrices make an appearance is as signature matrices of equiangular tight frames~\cite{heath}. Let $B$ denote a complex $d \times n$ matrix having the vectors $\mathbf x_1, \dots, \mathbf x_n$ as its columns and let $B^*$ denote the complex transpose of $B$. The set $F = \{ \mathbf x_1, \dots, \mathbf x_n \}$ is called an $(n,d)$-\textbf{equiangular tight frame} (ETF) if $B^*B$ has $1$s on its diagonal, all entries on the off-diagonal have the same absolute value, and $BB^* = nI/d$. ETFs are extremal objects with respect to the Welch bound, which says that, for any set $F = \{ \mathbf x_1, \dots, \mathbf x_n \}$ of unit vectors in $\mathbb C^d$, we have \[ \max_{i \ne j} | \mathbf x_i^* \mathbf x_j | \geqslant \sqrt{ \frac{n-d}{ d(n-1)}}. \] Equality holds in the Welch bound if and only if $F$ is an ETF. We can obtain the \textbf{signature matrix} $S(F)$ of an ETF from the matrix $B$ as \[ S(F) := \sqrt{ \frac{d(n-1)}{n-d}}(B^*B -I). \] Note that the off-diagonal entries of a signature matrix of an ETF all lie on the unit circle. We call an Hermitian matrix $S$ a $q$\textbf{-Seidel matrix} if all of its off-diagonal entries are in $\mathcal C_q$ and its diagonal entries are all $0$. Our $2$-Seidel matrices are known simply as \textit{Seidel matrices} in the literature, see for example~\cite{ghorbani, GG18, GKMS16,roux,SzOs18}. Interestingly, for most known constructions of ETFs~\cite{Fi0}, the signature matrix is a $q$-Seidel matrix for some $q \in \mathbb N$. For this reason, ETFs having signature matrices whose (off-diagonal) entries are $q$th roots of unity have been investigated~\cite{pthRoot, thirdRoot,fourthRoot} when $q$ is a prime power. We extend the above investigations by studying the spectral properties of $q$-Seidel matrices that do not necessarily correspond to ETFs. An ETF is called \textbf{real} if the vectors $\mathbf x_i \in \mathbb R^d$ for all $i \in \{1,\dots,n\}$. Azarija and Marc~\cite{Azarija75,Azarija95} recently showed that there exists neither a real $(76,19)$-ETF nor a real $(96,20)$-ETF. 
That is, there does not exist a $(76,19)$-ETF nor a $(96,20)$-ETF whose signature matrix is a $2$-Seidel matrix. Fickus et al.~\cite{Steiner12} showed that there does exist a $(96,20)$-ETF whose signature matrix is a $4$-Seidel matrix, but it is currently not known if there exists a $(76,19)$-ETF whose signature matrix is a $4$-Seidel matrix. However, it is known that there exists a $(76,19)$-ETF whose signature matrix is a $6$-Seidel matrix~\cite{hyperOvals16}. Thus, our main motivation stems from the following question. \begin{question} \label{que:main} For a given pair $(n,d)$, for which $q\in \mathbb N$ can there exist a $q$-Seidel matrix that is a signature matrix of an $(n,d)$-ETF? \end{question} Instead of restricting ourselves to $q$-Seidel matrices, we work more generally with the sets $\mathcal H_n(q)$ of Hermitian $\mathcal C_q$-matrices of order $n$ and investigate their spectral properties. Note that we obtain an element of $\mathcal H_n(q)$ by adding the identity matrix to any $q$-Seidel matrix of order $n$. Our main results are necessary conditions on the coefficients of the characteristic polynomial $\operatorname{Char}_H(x) := \det(xI-H)$ of a matrix $H \in \mathcal H_n(q)$. Our investigation is a natural follow-up to \cite{GreavesYatsyna19}, where certain necessary spectral conditions for Seidel matrices were introduced. In particular, Greaves and Yatsyna~\cite{GreavesYatsyna19} showed that, for any Seidel matrix $S$, the coefficients of $\operatorname{Char}_S(x)$, which are all integers, must lie in certain congruence classes modulo powers of $2$. These necessary conditions have been used in \cite{GSY21,GSY22} to establish the nonexistence of Seidel matrices having certain prescribed spectra, which led to the best known (and best possible) upper bounds for equiangular line systems in certain dimensions. The results of this paper can be thought of as a first step towards the goal of answering Question~\ref{que:main}. Further steps, analogous to those of \cite{GSY21,GSY22}, may well be required for a complete answer to Question~\ref{que:main} but this is not attempted in this article. In addition to ETFs and equiangular lines, the results of this paper can be applied to the spectral study of Hermitian Butson-Hadamard matrices~\cite{IMHermBut}, complete gain graphs~\cite{ReffGain12}, and Hermitian adjacency matrices of mixed graphs~\cite{Guo17}. Throughout, we will require the use of some elementary results about cyclotomic fields for which we refer to Washington's textbook~\cite{Washington97}. In an attempt to make this paper self-contained, we present the necessary background about cyclotomic fields in Section~\ref{sec:prelims}. Let $H \in \mathcal H_n(q)$. Since $H$ is Hermitian, all of its eigenvalues are real and thus so are all the coefficients of $\operatorname{Char}_H(x)$. Furthermore, since the entries of $H$ are algebraic integers from the cyclotomic field $\mathbb Q[\zeta]$, so are all the coefficients of $\operatorname{Char}_H(x)$. Together, this means that the coefficients of $\operatorname{Char}_H(x)$ are real algebraic integers from $\mathbb Q[\zeta]$. In other words, we have $\operatorname{Char}_H(x) \in \mathbb Z[\zeta+\zeta^{-1}][x]$ (see \cite[Proposition 2.16]{Washington97}). Define $\rho := (1-\zeta)(1-\zeta^{-1})$. We denote by $\mathcal X_n(q,e)$ the set of congruence classes of $\operatorname{Char}_H(x)$ modulo the principal ideal $\rho^e\mathbb Z[\zeta+\zeta^{-1}][x]$ where $H \in \mathcal H_n(q)$. 
Note that when $q=2$, we have $\rho^e\mathbb Z[\zeta+\zeta^{-1}][x] = 2^{2e} \mathbb Z[x]$. Our first main result corresponds to restrictions on the coefficients of the characteristic polynomials of matrices in $\mathcal H_n(q)$, which results in upper bounds for the cardinality of $\mathcal X_n(q,e)$. \begin{theorem} \label{thm:mainBounds} Let $n, e, f \in \mathbb N$. \begin{enumerate} \item[(a)] If $n$ is even then $|\mathcal X_n(2,e/2)| \leqslant 2^{\lfloor e^2/2\rfloor -e+1}$. \item[(b)] If $n$ is odd then $|\mathcal X_n(2,e/2)| \leqslant 2^{\lceil e^2/2\rceil -e+1}$. \item[(c)] If $p$ is an odd prime then $|\mathcal X_n(p^f,e)| \leqslant p^{(e-1)^2}$. \item[(d)] If $f > 1$ and $n$ is even then $|\mathcal X_n(2^f,e)| \leqslant 2^{(e-1)(e-2)+\lceil 2^{2-f}e \rceil - 1}$. \item[(e)] If $f > 1$ and $n$ is odd then $|\mathcal X_n(2^f,e)| \leqslant 2^{(e-1)^2+\lceil 2^{2-f}e \rceil - 1-\lfloor e/4\rfloor}$. \end{enumerate} \end{theorem} Note that, for a non-prime-power $q$ and any $e \in \mathbb N$, the ideal $\rho^e \mathbb Z[\zeta+\zeta^{-1}]$ is equal to $\mathbb Z[\zeta+\zeta^{-1}]$. Therefore, $\mathcal X_n(q,e)$ consists of just one element when $q$ is not a prime power and, furthermore, every polynomial in $\mathbb Z[\zeta+\zeta^{-1}][x]$ of degree $n$ belongs to the sole congruence class of $\mathcal X_n(q,e)$. In contrast, for $q>2$ a power of a prime $p$, the ideal $\rho \mathbb Z[\zeta+\zeta^{-1}]$ is a prime ideal (see Section~\ref{sec:prelims}) of $\mathbb Z[\zeta+\zeta^{-1}]$ and, for any $n, e \in \mathbb N$, a large proportion of polynomials in $\mathbb Z[\zeta+\zeta^{-1}][x]$ of degree $n$ do not belong to any congruence class in $\mathcal X_n(q,e)$. Loosely speaking, this means that the coefficients of the characteristic polynomial of $H \in \mathcal H_n(q)$ are more restricted when $q$ is a prime power. Denote by $\mathcal X^\prime_n(q,e)$ the set of congruence classes of $\operatorname{Char}_S(x)$ modulo the principal ideal $\rho^e\mathbb Z[\zeta+\zeta^{-1}][x]$ where $S$ is a $q$-Seidel matrix of order $n$. Note that bounds for $\mathcal X^\prime_n(2,e/2)$ were obtained by Greaves and Yatsyna~\cite{GreavesYatsyna19}. Our second main result corresponds to restrictions on the coefficients of the characteristic polynomials of $q$-Seidel matrices, which results in upper bounds for the cardinality of $\mathcal X^\prime_n(q,e)$. \begin{theorem} \label{thm:mainBounds2} Let $n, e, f \in \mathbb N$. \begin{enumerate} \item[(a)] If $n$ is even then $|\mathcal X^\prime_n(2,e/2)| \leqslant 2^{(e-2)(e-3)/2}$. \item[(b)] If $n$ is odd then $|\mathcal X^\prime_n(2,e/2)| \leqslant 2^{(e^2-5e+8)/2}$. \item[(c)] If $p$ is an odd prime then $|\mathcal X^\prime_n(p^f,e)| \leqslant p^{(e-1)^2}$. \item[(d)] If $f>1$ and $n$ is even then $|\mathcal X^\prime_n(2^f,e)| \leqslant 2^{(e-1)(e-2)}$. \item[(e)] If $f>1$ and $n$ is odd then $|\mathcal X^\prime_n(2^f,e)| \leqslant 2^{(e-1)^2-\lfloor e/4\rfloor}$. \end{enumerate} \end{theorem} It is interesting to observe that, in most cases, the bounds for $|\mathcal X_n(p^f,e)|$ and $|\mathcal X^\prime_n(p^f,e)|$ in Theorems~\ref{thm:mainBounds} and \ref{thm:mainBounds2} depend only on the prime $p$ and the exponent $e$ but not on the exponent $f$. There are exceptions with part (d) and (e) in Theorem~\ref{thm:mainBounds} when $p = 2$. For either parity of $n$, there are two different behaviours for the upper bounds of $|\mathcal X_n(p^f,e)|$ and $|\mathcal X^\prime_{n}(2^f,e)|$, depending on if $f=1$ or $f > 1$. 
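The residue counts appearing in Theorems~\ref{thm:mainBounds} and \ref{thm:mainBounds2} are straightforward to explore empirically. The following minimal sketch (our illustration only, assuming Python with \texttt{sympy}; it treats just the real case $q=2$, where the relevant ideals are generated by powers of $2$) samples matrices from $\mathcal H_n(2)$ and counts the distinct residue classes of their characteristic polynomials modulo $2^{2e}$.
\begin{verbatim}
import random
import sympy as sp

def random_H(n):
    # A random symmetric {+1,-1}-matrix, i.e. an element of H_n(2).
    M = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):
            M[i][j] = M[j][i] = random.choice((1, -1))
    return sp.Matrix(M)

def residues(H, modulus):
    x = sp.symbols('x')
    coeffs = H.charpoly(x).all_coeffs()   # exact integer coefficients of Char_H
    return tuple(int(c) % modulus for c in coeffs)

n, e, trials = 8, 3, 400
modulus = 2 ** (2 * e)                    # rho^e Z = 4^e Z when q = 2
classes = {residues(random_H(n), modulus) for _ in range(trials)}
print(len(classes), "distinct residue classes observed modulo", modulus)
\end{verbatim}
Analogous experiments for prime-power $q>2$ require exact arithmetic in $\mathbb Z[\zeta+\zeta^{-1}]$ but follow the same pattern.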
By conducting computer experiments for small values ($n = 21$ and $e = 3,4$) of $n$ and $e$, we have empirical evidence that suggests the bounds in Theorem~\ref{thm:mainBounds} and Theorem~\ref{thm:mainBounds2} are sharp for fixed $q$ and $e \geqslant 3$, when $n$ is large enough compared to $e$. We conjecture this to be the case. \begin{conjecture} For $e \in \mathbb N$ with $e \geqslant 3$ and $q$ a prime power, there exists $N \in \mathbb N$ such that for all $n \geqslant N$ we have equality in the corresponding bounds in Theorem~\ref{thm:mainBounds} and Theorem~\ref{thm:mainBounds2}. \end{conjecture} In order to prove our main results, we establish a generalisation of a classical result of Harary and Schwenk~\cite[Corollary 5a]{hs79}, which was rediscovered recently in \cite[Lemma 2.2]{GreavesYatsyna19}. For the convenience of the reader, we state this result here. Throughout, we use $\phi$ to denote Euler's totient function and $\mathbf 1$ to denote the all-ones (column) vector. \begin{theorem} \label{thm:hs} Let $\Gamma$ be a graph with adjacency matrix $A$ and let $N \geqslant 3$ be an integer. Then \begin{itemize} \item $\displaystyle \sum_{d | N} \phi(N/d) \operatorname{tr}(A^d) \equiv 0 \pmod {2N} \text{ if $N$ is odd}$; \item $\displaystyle \sum_{d | N} \phi(N/d) \operatorname{tr}(A^d) + \frac{N}{2} \mathbf 1^\top A \mathbf 1 \equiv 0 \pmod {2N} \text{ if $N$ is even.}$ \end{itemize} \end{theorem} The paper is organised as follows. In Section~\ref{sec:prelims}, we recall some preliminary material about cyclotomic fields that will be required in the later sections of the paper. In Section~\ref{sec:real}, we consider the real case, i.e., when $q = 2$ and prove parts (a) and (b) of our main theorems. In Section~\ref{sec:graphs}, we introduce our graph-theoretic tools and generalise Theorem~\ref{thm:hs}. In Section~\ref{sec:dets}, we produce results about the determinant of Hermitian matrices of roots of unity and provide proofs of parts (c) and (d) of our main theorems. Finally, in Section~\ref{sec:euler}, we introduce the residue graph of a matrix in $\mathcal H_n(q)$ and, using so-called Euler graphs, we prove part (e) of our main theorems. \section{Algebraic preliminaries} \label{sec:prelims} In this section, we develop some basic theory of ideals of cyclotomic fields. A complex number $\alpha \in \mathbb C$ is called an \textbf{algebraic integer} if $\alpha$ is a zero of some monic integer polynomial. Let $\zeta$ be a primitive $q$th root of unity. The field extension $\mathbb Q[\zeta]$ of $\mathbb Q$ generated by $\zeta$ is called a \textbf{cyclotomic field}. The set of algebraic integers in $\mathbb Q[\zeta]$ forms a subring called \textbf{ring of integers} of $\mathbb Q[\zeta]$. First we list some basic facts that can be found in most introductory textbooks on algebraic number theory, for example \cite[Chapters 1 and 2]{Washington97}. \begin{proposition} \label{pro:idealprelim} Let $q > 2$ be a natural number and let $\zeta$ be a primitive $q$th root of unity. 
\begin{itemize} \item[(a)] $\mathbb Z[\zeta]$ is the ring of integers of the cyclotomic field $\mathbb Q[\zeta]$; \item[(b)] $\mathbb Z[\zeta+\zeta^{-1}]$ is the ring of integers of the field $\mathbb Q[\zeta] \cap \mathbb R$; \item[(c)] $\{1,\zeta+\zeta^{-1},\dots,(\zeta+\zeta^{-1})^{\phi(q)/2}\}$ is a basis for $\mathbb Z[\zeta+\zeta^{-1}]$ as a $\mathbb Z$-module; \item[(d)] $\{1,\zeta\}$ is a basis for $\mathbb Z[\zeta]$ as a $\mathbb Z[\zeta+\zeta^{-1}]$-module; \item[(e)] If $q$ is a prime power then $(1-\zeta)\mathbb Z[\zeta]$ is a prime ideal of $\mathbb Z[\zeta]$ otherwise $(1-\zeta)\mathbb Z[\zeta] = \mathbb Z[\zeta]$; \item[(f)] If $q$ is a power of a prime $p$ then $p \mathbb Z[\zeta] = (1-\zeta)^{\phi(q)}\mathbb Z[\zeta]$ and $((1-\zeta)\mathbb Z[\zeta])\cap \mathbb Z = p\mathbb Z$; \item[(g)] If $q$ is a power of $2$ then for any $\alpha, \beta \not \in (1-\zeta)\mathbb Z[\zeta]$ we have $\alpha + \beta \in (1-\zeta)\mathbb Z[\zeta]$. \end{itemize} \end{proposition} Next we prove a proposition about the maximum real subset of the ideal $(1-\zeta)\mathbb Z[\zeta]$. \begin{proposition} \label{pro:realIdeal} Let $q>2$ be a power of a prime $p$ and let $\zeta$ be a primitive $q$th root of unity. Then $$((1-\zeta)\mathbb Z[\zeta])\cap \mathbb R = \rho \mathbb Z[\zeta+\zeta^{-1}].$$ \end{proposition} \begin{proof} Let $\alpha \in ((1-\zeta)\mathbb Z[\zeta])\cap \mathbb R$. Then, by Proposition~\ref{pro:idealprelim} (b), $\alpha$ must be in $\mathbb Z[\zeta+\zeta^{-1}]$, since it is an algebraic integer in $\mathbb Q[\zeta] \cap \mathbb R$. Next observe that $\mathbb Z[\zeta+\zeta^{-1}] = \mathbb Z[\rho]$. Hence, we can write $\alpha$ as an integer linear combination of powers of $\rho$: \[ \alpha = \sum_{i=0}^{\phi(q)/2-1}a_i \rho^i. \] Since $(1-\zeta)\mathbb Z[\zeta]$ is a prime ideal, we must have $a_0 \in (1-\zeta)\mathbb Z[\zeta]$. The proposition follows since $((1-\zeta)\mathbb Z[\zeta])\cap \mathbb Z = p\mathbb Z \subset \rho \mathbb Z[\zeta+\zeta^{-1}]$. \end{proof} Now we prove a key isomorphism of quotient rings. \begin{proposition} \label{pro:Idealcor} Let $q>2$ be a power of a prime $p$ and let $\zeta$ be a primitive $q$th root of unity. Then \[ \frac{\mathbb Z[\zeta]}{(1-\zeta)\mathbb Z[\zeta]} \cong \frac{\mathbb Z[\zeta+\zeta^{-1}]}{\rho \mathbb Z[\zeta+\zeta^{-1}]} \cong \frac{\mathbb Z}{p\mathbb Z}. \] \end{proposition} \begin{proof} The isomorphism $\frac{\mathbb Z[\zeta]}{(1-\zeta)\mathbb Z[\zeta]} \cong \frac{\mathbb Z}{p\mathbb Z}$ is standard. Thus we only prove the left isomorphism. First note that $\mathbb Z[\zeta] = \mathbb Z[\zeta+\zeta^{-1}] + \zeta \mathbb Z[\zeta+\zeta^{-1}]$ (from Proposition~\ref{pro:idealprelim} (d)). Define the ring homomorphism $\tau : \mathbb Z[\zeta] \to \mathbb Z[\zeta+\zeta^{-1}]/\rho \mathbb Z[\zeta+\zeta^{-1}]$ by $a + b\zeta \mapsto a + b + \rho \mathbb Z[\zeta+\zeta^{-1}]$, where $a$ and $b$ are in $\mathbb Z[\zeta+\zeta^{-1}]$. It is straightforward to check that $\tau$ is indeed a ring homomorphism. If $a + b\zeta \in (1-\zeta)\mathbb Z[\zeta]$ then $\tau(a + b\zeta) = \rho \mathbb Z[\zeta+\zeta^{-1}]$. Indeed, we have $a = (1-\zeta)\alpha - b\zeta$ for some $\alpha \in \mathbb Z[\zeta]$. Hence, $$\tau(a + b\zeta)=a+b + \rho \mathbb Z[\zeta+\zeta^{-1}]=(1-\zeta)(\alpha+b)+\rho \mathbb Z[\zeta+\zeta^{-1}] = \rho \mathbb Z[\zeta+\zeta^{-1}],$$ by Proposition~\ref{pro:realIdeal}. Thus $(1-\zeta)\mathbb Z[\zeta] \subset \ker \tau$. 
On the other hand, if $a+b \in \rho \mathbb Z[\zeta+\zeta^{-1}]$ then $a + b\zeta \in (1-\zeta)\mathbb Z[\zeta]$. Indeed, we have $b = \rho\alpha - a$ for some $\alpha \in \mathbb Z[\zeta+\zeta^{-1}]$. Hence, $a + b\zeta = (1-\zeta)a+\rho \alpha \zeta \in (1-\zeta)\mathbb Z[\zeta]$. Thus $\ker \tau \subset (1-\zeta)\mathbb Z[\zeta]$. Now $\frac{\mathbb Z[\zeta]}{(1-\zeta)\mathbb Z[\zeta]} \cong \frac{\mathbb Z[\zeta+\zeta^{-1}]}{\rho \mathbb Z[\zeta+\zeta^{-1}]}$ follows from the first isomorphism theorem for rings.
\end{proof}
It follows from Proposition~\ref{pro:Idealcor} that, for $q > 2$ a prime power, the ideal $\rho\mathbb Z[\zeta+\zeta^{-1}]$ is a prime ideal of $\mathbb Z [\zeta+\zeta^{-1}]$. We conclude this section with a couple of remarks about counting residue classes.
\begin{remark}
\label{rem:countResidues}
Let $q$ be a power of a prime $p$ and let $k \in \mathbb N$. Suppose $a \in \rho^{k}\mathbb Z[\zeta+\zeta^{-1}]$. Since, by Proposition~\ref{pro:Idealcor}, the quotient ring $\mathbb Z/p\mathbb Z$ is isomorphic to $\mathbb Z[\zeta+\zeta^{-1}]/\rho \mathbb Z[\zeta + \zeta^{-1}]$, there are $p$ possible residues for $a$ modulo $\rho^{k+1}\mathbb Z[\zeta+\zeta^{-1}]$.
\end{remark}
Next we remark about a rational ideal inside the ideal $\rho^e \mathbb Z[\zeta+\zeta^{-1}]$.
\begin{remark}
\label{rem:rationalElements}
Let $q=p^f$ be a prime power with $f > 1$. Let $g, h \in \mathbb N \cup \{0\}$ with $h < \phi(q)/2$. It is straightforward to obtain the equality
\[
(\rho^{\phi(q)g/2+h} \mathbb Z[\zeta+\zeta^{-1}]) \cap \mathbb Z =
\begin{cases}
p^{g + 1}\mathbb Z, & \text{if $h > 0$;} \\
p^{g}\mathbb Z, & \text{if $h = 0$}.
\end{cases}
\]
Indeed, suppose that $a \in \mathbb Z$ belongs to $\rho^{\phi(q)g/2+h} \mathbb Z[\zeta+\zeta^{-1}]$. Using Proposition~\ref{pro:idealprelim} (f), we have $a \in p^g\rho^{h} \mathbb Z[\zeta+\zeta^{-1}]$. If $h=0$ then we are done. Otherwise, if $h>0$ then $a = p^g b$ for some $b \in \rho^{h}\mathbb Z[\zeta+\zeta^{-1}] \subseteq (1-\zeta)\mathbb Z[\zeta]$. By Proposition~\ref{pro:idealprelim} (f), we have $((1-\zeta)\mathbb Z[\zeta]) \cap \mathbb Z = p\mathbb Z$. Since $b$ is an integer, it follows that $b \in p\mathbb Z$, as required. Therefore, if $a \in \mathbb Z$ then there are $p^{\lceil 2e/\phi(q) \rceil}$ possible residues for $a$ modulo $\rho^{e}\mathbb Z[\zeta+\zeta^{-1}]$. Furthermore, if $a \in p\mathbb Z$ then there are $p^{\lceil 2e/\phi(q) \rceil-1}$ possible residues for $a$ modulo $\rho^{e}\mathbb Z[\zeta+\zeta^{-1}]$.
\end{remark}
In the remainder of the paper $\zeta$ is to be understood as a primitive $q$th root of unity.
\section{The real case}
\label{sec:real}
In this section, we consider matrices in $\mathcal H_n(2)$, that is, symmetric $\{ \pm 1\}$-matrices. The proofs of part (a) and part (b) of Theorem~\ref{thm:mainBounds} and Theorem~\ref{thm:mainBounds2} are provided herein. First we remark about the top three coefficients of the characteristic polynomial of a matrix in $\mathcal H_n(q)$.
\begin{remark}
\label{rem:a0a1a2}
Let $H \in \mathcal H_n(q)$. Observe that the trace of $H^2$ is equal to $n^2$. Write $\operatorname{Char}_{H}(x)=\sum_{i=0}^na_ix^{n-i}$. Using Newton's identities, we see that $a_0=1$ and $a_2=(a_1^2-n^2)/2$. Furthermore, the parity of $a_1$ is the same as that of $n$.
\end{remark}
Next we show that the determinant of a matrix in $\mathcal H_n(2)$ is divisible by a large power of $2$.
\begin{lemma}
\label{lem:detIdealEasy}
Let $n \in \mathbb N$ and let $H$ be a $\{\pm 1\}$-matrix of order $n$. Then $\det (H) \in 2^{n-1}\mathbb Z$.
\end{lemma}
\begin{proof}
Subtract the first row of $H$ from all other rows to obtain $H^\prime$. Then, except for the first row, each entry of $H^\prime$ is in $2\mathbb Z$. Thus $\det (H) = \det (H^\prime) \in 2^{n-1}\mathbb Z$.
\end{proof}
Since the coefficients of the characteristic polynomial of a matrix are, up to sign, the sums of its principal minors of the appropriate size, we immediately have the following corollary of Lemma~\ref{lem:detIdealEasy}.
\begin{corollary}
\label{cor:detIdealEasy}
Let $n \in \mathbb N$, $k \in \{1,\dots,n\}$, $H \in \mathcal H_n(2)$, and write $\operatorname{Char}_H(x)=\sum_{i=0}^n a_ix^{n-i}$. Then $a_{k} \in 2^{k-1} \mathbb Z$.
\end{corollary}
We will make use of the following lemma from \cite{GreavesYatsyna19}.
\begin{lemma}[{\cite[Corollary 3.2 and Lemma 3.8]{GreavesYatsyna19}}]
\label{lem:GGyats}
Let $A$ be the adjacency matrix of a graph of order $n$ and write $\operatorname{Char}_{J-2A}(x) = \sum_{i=0}^n a_ix^{n-i}$. Then $a_k \in 2^k \mathbb Z$ for all $k$ even. Furthermore, if $n$ is even then $a_k \in 2^k \mathbb Z$ for all $k$.
\end{lemma}
Given a matrix $M$ of order $n$ and a subset $S \subseteq \{1,\dots,n\}$, we denote by $M[S]$ the principal submatrix of $M$ obtained by deleting the $i$th row and column of $M$ for each $i \in S$. The $(i,j)$-entry of $M$ is denoted by $M_{i,j}$. We will make use of the matrix determinant lemma (see, e.g., \cite{matdetHarville}), which we now state.
\begin{lemma}
\label{lem:matdetl}
Let $M$ be a square matrix and let $\mathbf u, \mathbf v$ be column vectors of the appropriate size. Then
\[
\det(M + \mathbf u \mathbf v^\top) = \det(M) + \mathbf v^\top \operatorname{adj}(M)\mathbf u.
\]
\end{lemma}
Let $H \in \mathcal H_n(2)$. We define $\Delta(H)$ as the set of indices $i$ such that $H_{i,i} = -1$. Observe that we can write $H = J-2A-2\operatorname{diag}(\chi_{\Delta(H)})$, where $A$ is a graph-adjacency matrix and $\chi_{\Delta(H)}$ is the indicator vector for $\Delta(H)$.
\begin{lemma}
\label{lem:matdet1}
Let $H \in \mathcal H_n(2)$ and write $H = J-2A-2\operatorname{diag}(\chi_{\Delta(H)})$. Then
\[
\operatorname{Char}_{H}(x) = \sum_{S \subseteq \Delta(H)} 2^{|S|}\operatorname{Char}_{J-2A[S]}(x).
\]
\end{lemma}
\begin{proof}
We proceed by induction on the cardinality of $\Delta(H)$. When $\Delta(H) = \emptyset$ the statement is obvious. Otherwise, apply Lemma~\ref{lem:matdetl} with $M = xI-J+2A+2\operatorname{diag}(\chi_{\Delta(H)\backslash \{i\}})$, $\mathbf u = \chi_{\{i\}}$, and $\mathbf v = 2\mathbf u$ to obtain
\begin{align}
\label{eqn:matdet}
\operatorname{Char}_{H}(x) &= \operatorname{Char}_{J-2A-2\operatorname{diag}(\chi_{\Delta(H)\backslash \{i\}})}(x)+ 2\operatorname{Char}_{J-2A[i]-2\operatorname{diag}(\chi_{\Delta(H)})[i]}(x).
\end{align}
By induction, we have
\begin{align}
\operatorname{Char}_{J-2A-2\operatorname{diag}(\chi_{\Delta(H)\backslash \{i\}})}(x) &= \sum_{S \subseteq \Delta(H)\backslash \{i\}} 2^{|S|}\operatorname{Char}_{J-2A[S]}(x) \nonumber \\
&= \sum_{\substack{S \subseteq \Delta(H) \\ i \not \in S}}2^{|S|}\operatorname{Char}_{J-2A[S]}(x); \label{eqn:ind1} \\
2\operatorname{Char}_{J-2A[i]-2\operatorname{diag}(\chi_{\Delta(H)})[i]}(x) &= \sum_{S \subseteq \Delta(H)\backslash \{i\}} 2^{|S|+1}\operatorname{Char}_{J-2A[S\cup \{i\}]}(x) \nonumber \\
&= \sum_{\substack{S \subseteq \Delta(H) \\ i \in S}} 2^{|S|}\operatorname{Char}_{J-2A[S]}(x). \label{eqn:ind2}
\end{align}
Combine \eqref{eqn:matdet} with \eqref{eqn:ind1} and \eqref{eqn:ind2} to obtain the statement of the lemma.
\end{proof}
\begin{corollary}
\label{cor:finalConds}
Let $n \in \mathbb N$, $k \in \{1,\dots,n\}$, $H \in \mathcal H_n(2)$, and write $\operatorname{Char}_H(x)=\sum_{i=0}^n a_ix^{n-i}$. Then $a_{k} \in 2^{k-1} \mathbb Z$. Furthermore, if $n+k$ is odd then $a_{k} \in 2^{k} \mathbb Z$.
\end{corollary}
\begin{proof}
By Lemma~\ref{lem:matdet1}, we can write
\[
a_k = \sum_{\substack{S \subseteq \Delta(H) \\ |S| \text{ even} \\ |S| \leqslant k}} 2^{|S|} c_{k-|S|}(S) + \sum_{\substack{S \subseteq \Delta(H) \\ |S| \text{ odd} \\ |S| \leqslant k}} 2^{|S|} c_{k-|S|}(S),
\]
where, for each $S \subseteq \Delta(H)$, we have $\operatorname{Char}_{J-2A[S]}(x) =\sum_{i=0}^{n-|S|} c_i(S) x^{n-|S|-i}$. By Lemma~\ref{lem:GGyats}, the coefficient $c_i(S)$ is in $2^i\mathbb Z$ if $i$ is even or if $n-|S|$ is even. Hence $2^{|S|}c_{k-|S|}(S)$ is in $2^k\mathbb Z$ if $k-|S|$ is even or $n-|S|$ is even. Now it readily follows that $a_k \in 2^k\mathbb Z$ when $n+k$ is odd.
\end{proof}
Now we are ready to prove the main result of this section.
\begin{proof}[Proof of Theorem~\ref{thm:mainBounds} (a) and (b)]
Let $H \in \mathcal H_n(2)$ and write $\operatorname{Char}_H(x)=\sum_{i=0}^n a_ix^{n-i}$. By Remark~\ref{rem:a0a1a2}, there are at most $2^{e-1}$ possible residues for $a_1$ modulo $2^e$ and just one possible residue for $a_0$ and $a_2$ (since $a_2$ is determined by the value of $a_1$). By Corollary~\ref{cor:detIdealEasy}, there are at most $2^{e-i+1}$ possible residues for each $a_i$ modulo $2^e$ where $i \in \{3,\dots,e\}$ and just one possible residue for each $a_i$ modulo $2^e$ where $i \in \{e+1,\dots,n\}$. Furthermore, by Corollary~\ref{cor:finalConds}, for each $i \in \{3,\dots,e\}$ whose parity is different from that of $n$, there are at most $2^{e-i}$ possible residues for each $a_i$ modulo $2^e$. Thus, we have at most
\[
2^{e-1} \prod_{\substack{i \in \{3,\dots,e\} \\ n + i \in 2\mathbb Z}} 2^{e-i+1} \prod_{\substack{i \in \{3,\dots,e\} \\ n + i \not \in 2\mathbb Z}} 2^{e-i}
\]
possible residues for $\operatorname{Char}_H(x)$ modulo $2^e \mathbb Z[x]$, as required.
\end{proof}
Parts (a) and (b) of Theorem~\ref{thm:mainBounds2} were proved in \cite[Corollaries 3.4 and 3.13]{GreavesYatsyna19}.
\section{Underlying graphs, walks, and their weights}
\label{sec:graphs}
In this section, we introduce the underlying graph of a matrix in $\mathcal H_n(q)$, its walks, and their weights. The main result (Corollary~\ref{cor:mainHSgen}) of this section is about a congruence of traces, which generalises a result of Harary and Schwenk~\cite[Corollary 5a]{hs79} from 1979 (see Theorem~\ref{thm:hs}). Fix $n, q \in \mathbb N$ with $q \geqslant 2$ and let $H \in \mathcal H_n(q)$ and $A=(J-H)/(1-\zeta)$. The \textbf{underlying graph} $\Gamma(H)$ of $H$ is defined to have vertex set $\{1,\dots,n\}$ where $i$ is adjacent to $j$ if and only if $A_{i,j} \ne 0$. It is easy to see that, for all $i$ and $j$, we have $A_{i,j} \ne 0$ if and only if $A_{j,i}\ne 0$, thus the adjacency of $\Gamma(H)$ is well-defined. Note that $-1$ is a $q$th root of unity when $q$ is a power of $2$. Thus the underlying graph $\Gamma(H)$ may have loops when $q$ is a power of $2$, hence it may not be a simple graph; however, no graph considered herein has multiple edges. Furthermore, observe that, when $q=2$ and $H_{i,i} = 1$ for all $i \in \{1,\dots,n\}$, the matrix $A$ is a symmetric $\{0,1\}$-matrix with zero diagonal. In fact, $A$ is the adjacency matrix for the underlying graph of $H$.
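As a small illustration, suppose that $q = 4$, so that $\zeta = i$. Then every entry of $A$ is one of
\[
\frac{1-\zeta^{0}}{1-\zeta} = 0, \qquad \frac{1-\zeta}{1-\zeta} = 1, \qquad \frac{1-\zeta^{2}}{1-\zeta} = 1+i, \qquad \frac{1-\zeta^{3}}{1-\zeta} = i,
\]
and two vertices of $\Gamma(H)$ are adjacent precisely when the corresponding entry of $H$ is different from $1$.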
In this sense, one can think of $A$ as a (non-Hermitian) weighted generalisation of an adjacency matrix for $q > 2$.
\begin{remark}
\label{rem:Aentries}
Note that $A_{i,j} + A_{j,i} \in (1-\zeta)\mathbb Z[\zeta]$. Indeed, suppose $A_{i,j} = (1-\zeta^e)/(1-\zeta)$ for some $e$. Then $A_{j,i} = (1-\zeta^{-e})/(1-\zeta)$. If $e = 0$ then our claim follows immediately. For each nonzero $e$, we have
\[
\frac{1-\zeta^{e}}{1-\zeta} =
\begin{cases}
1+\zeta + \dots + \zeta^{e-1} & \text{ if $e > 0$; }\\
1+\zeta + \dots + \zeta^{q+e-1} & \text{ if $e < 0$. }
\end{cases}
\]
Thus, using Proposition~\ref{pro:idealprelim} (f), we have $A_{i,j} - e \in (1-\zeta)\mathbb Z[\zeta]$. Furthermore, $A_{j,i} +e \in (1-\zeta)\mathbb Z[\zeta]$, whence our claim follows.
\end{remark}
Let $G$ be a group acting on a set $X$. The orbit $\operatorname{orb}_G(x)$ of an element $x \in X$ is defined as the set $\{x^g \;|\; g \in G \}$. Let $\Gamma = \Gamma(H)$ and let $\mathbf w$ be a walk of length $N$ in $\Gamma$; we write $\mathbf w = (w_0,w_1,\dots,w_N)$ where each vertex $w_i$ is adjacent to the vertex $w_{i+1}$ for each $i \in \{0, \dots, N - 1\}$. The \textbf{weight} $\operatorname{wt}(\mathbf w)$ of $\mathbf w$ is defined as
\[
\operatorname{wt}(\mathbf w) = \prod_{i=0}^{N-1} A_{w_i,w_{i+1}}.
\]
\begin{remark}
\label{rem:weightPowers}
The sum of the $l$th powers of the weights of all $k$-walks from vertex $i$ to vertex $j$ in the graph $\Gamma(H)$ is given by the $(i,j)$-entry of $(A^{\circ l})^k$, where $A^{\circ l}$ denotes the Hadamard (or Schur) product of the matrix $A$ with itself $l$ times. This statement can be readily verified by induction on $k$.
\end{remark}
Let $\mathbf w = (w_0,w_1,\dots,w_N)$ be a closed $N$-walk of $\Gamma$, i.e., $w_0 = w_N$. We say that $\mathbf w$ is \textbf{simple} if each pair of consecutive vertices is distinct, i.e., $w_i \ne w_{i+1}$ for all $i \in \{0,\dots,N-1\}$. Denote by $W_N(\Gamma)$ the set of closed $N$-walks of $\Gamma$ and denote by $W_N^\prime(\Gamma)$ the set of \emph{simple} closed $N$-walks of $\Gamma$. For $N \geqslant 3$, there is a natural correspondence between the vertices of the closed walk $\mathbf w$ and the vertices of a regular $N$-gon. Under this correspondence, we consider the dihedral group $D_N$ of order $2N$ acting on the set of closed $N$-walks of $\Gamma$. Let $N \geqslant 3$ and write $D_N = \langle r, s \; |\; r^N, s^2, (rs)^2 \rangle$. For notational convenience, we denote by $\mathbf w^k:=(w_k,w_{k+1},\dots,w_{k+N})$ (with indices reduced modulo $N$) the image of $\mathbf w$ under the action of $r^k$ and we denote by $\mathbf w^{\top}:=(w_N,w_{N-1},\dots,w_{0})$ the image of $\mathbf w$ under the action of $s$. Under the action of $r^i s$, the image of $\mathbf w$ is $(w_{N+i},w_{N+i-1},\dots,w_{i})$. We denote by $\operatorname{fix}_{\Gamma}(g)$ the set of walks $\mathbf w \in W_N(\Gamma)$ such that $\mathbf w$ is equal to its image under the action of $g$. For our purposes, the ambient group to which the element $g$ belongs will always be the dihedral group $D_N$. The walk $\mathbf w$ is called \textbf{palindromic} if it is equal to its reverse walk $\mathbf w^\top$. Note that if $\mathbf w \in W_N^\prime(\Gamma)$ is palindromic then $N$ must be even.
\begin{remark}
\label{rem:nonsimple}
We remark that if $q > 2$ is a power of $2$ and $H \in \mathcal H_n(q)$, then the diagonal entries of $A=(J-H)/(1-\zeta)$ are either equal to $0$ or to $2/(1-\zeta)$.
By Proposition~\ref{pro:idealprelim} (f), both such entries belong to the ideal $(1-\zeta)\mathbb Z[\zeta]$. Therefore, the weight of any non-simple walk of $\Gamma(H)$ is also in $(1-\zeta)\mathbb Z[\zeta]$. On the other hand, if $q$ is an odd prime power and $H \in \mathcal H_n(q)$ then $\Gamma(H)$ is a simple graph.
\end{remark}
\begin{lemma}
\label{lem:kappa}
Let $\Gamma$ be a graph, $N \geqslant 3$ be an integer, and $\mathbf w \in W_N^\prime(\Gamma)$ be palindromic. Suppose there exists $k \in \mathbb N \cup \{0\}$ such that $N \in 2^k\mathbb Z$, and $\mathbf w^{N/2^i} = \mathbf w$ for all $i \in \{0,\dots,k\}$. Then $N \in 2^{k+1}\mathbb Z$.
\end{lemma}
\begin{proof}
Let $\mathbf w = (w_0,w_1,\dots,w_{N-1},w_N)$. Since $\mathbf w$ is palindromic, for each $i \in \{0,\dots,N\}$, we have $w_i = w_{N-i}$. If $N$ were odd then, for $i = (N-1)/2$, we would have $w_i = w_{i+1}$, which is impossible since $\mathbf w$ is simple. Thus $N$ must be even. Now suppose that $\mathbf w^{N/2} = \mathbf w$. This means that, for each $i \in \{0,\dots,N/2\}$, we have $w_i = w_{N/2+i}$. Combining this with the identity $w_i = w_{N-i}$ yields $w_i = w_{N/2-i}$, for each $i \in \{0,\dots,N/2\}$. Similar to the above, since $\mathbf w$ is simple, $N/2$ must be even. The proof is completed by continuing this argument inductively.
\end{proof}
Let $\mathbf w = (w_0,w_1,\dots,w_{N-1},w_N)$ be a palindromic walk. In view of Lemma~\ref{lem:kappa}, we can define the parameter $\kappa(\mathbf w):= \min \{ k \in \mathbb N \; | \; \mathbf w^{N/2^k} \ne \mathbf w \}$ and set $\psi(\mathbf w) := N/2^{\kappa(\mathbf w)}$. Define the map $\Psi: W_N^\prime(\Gamma) \to W_N^\prime(\Gamma)$ as
\[
\Psi(\mathbf w) =
\begin{cases}
\mathbf w^{\psi(\mathbf w)}, & \text{ if $\mathbf w$ is palindromic;} \\
\mathbf w^{\top}, & \text{ otherwise.}
\end{cases}
\]
Observe that, for all $\mathbf w \in W_N^\prime(\Gamma)$, we have $\operatorname{wt}(\Psi(\mathbf w)) = \operatorname{wt}(\mathbf w^{\top})$.
\begin{lemma}
\label{lem:evenPalin}
Let $\Gamma$ be a graph, $N \geqslant 3$ be an integer, and $\mathbf w \in W_N^\prime(\Gamma)$. Then the number of palindromic walks in $\operatorname{orb}_{D_N}(\mathbf w)$ is either $0$ or $2$.
\end{lemma}
\begin{proof}
If $\mathbf w^k \neq (\mathbf w^k)^\top $ for all natural numbers $k$, then $ \operatorname{orb}_{D_N}(\mathbf w)$ does not contain any palindromic walk. Otherwise, for some $k$, the walk $\mathbf w^k$ is palindromic. Since $\mathbf w^k$ is simple and palindromic, we know that $N$ is even. Note that $ \operatorname{orb}_{D_N}(\mathbf w^k)= \operatorname{orb}_{D_N}(\mathbf w)$. Without loss of generality, we can assume $\mathbf w$ is palindromic and write
\begin{align*}
\mathbf w &=(w_0,w_{1},\dots ,w_{\frac{N}{2}-1},w_{\frac{N}{2}}, w_{\frac{N}{2}-1},\dots ,w_{1},w_0).
\end{align*}
For $i \in \mathbb N \cup \{ 0 \}$ such that $2^i$ divides $N$, define $\mathbf x_i :=(w_0,w_{1},\dots ,w_{N/2^i-1},w_{N/2^i})$. Define the concatenation of $\mathbf x_i \mathbf x_i^\top$ as follows
$$\mathbf x_i \mathbf x_i^\top := (w_0,w_{1},\dots ,w_{N/2^i-1},w_{N/2^i}, w_{N/2^i-1},\dots ,w_{1},w_0).$$
Set $\kappa = \kappa(\mathbf w)$.
Then $$\mathbf w=\underbrace{\mathbf x_\kappa \mathbf x_\kappa^\top \dots \mathbf x_\kappa \mathbf x_\kappa^\top}_{\substack{2^{\kappa-1}}} \quad \text{ and } \quad \mathbf w^{\psi(\mathbf w)}=\underbrace{\mathbf x_\kappa^\top \mathbf x_\kappa \dots \mathbf x_\kappa^\top \mathbf x_\kappa}_{\substack{2^{\kappa-1}}}$$ are two distinct palindromic walks in $\operatorname{orb}_{D_N} (\mathbf w)$. It remains to show that there is no other $m \in \{1,\dots,N\}$ such that $\mathbf w^m$ is palindromic. It suffices to show that there is no $m \in \{1,\dots,N/2^\kappa\}$ such that $(\mathbf x_\kappa \mathbf x_\kappa^\top )^m = ((\mathbf x_\kappa \mathbf x_\kappa^\top )^m)^\top$. Since $\mathbf x_\kappa \neq \mathbf x_\kappa^\top$, there exists an integer $l \in \{0,\dots,N/2^\kappa \}$ such that the $l$th position of $\mathbf x_\kappa$ and $\mathbf x_\kappa^\top$ differ, i.e., $w_l \neq w_{N/2^\kappa - l}$. It follows that, for each $m \in \{1,\dots,N/2^\kappa\}$, the $(l+m)$-th position of $(\mathbf x_\kappa \mathbf x_\kappa^\top )^m$ and $((\mathbf x_\kappa \mathbf x_\kappa^\top )^m)^\top$ also differ, whence $$(\mathbf x_\kappa \mathbf x_\kappa^\top )^m \neq ((\mathbf x_\kappa \mathbf x_\kappa^\top )^m)^\top.$$ Thus, $\mathbf w$ and $\mathbf w^{\psi (\mathbf w)}$ are the only two palindromic walks in $\operatorname{orb}_{D_N} (\mathbf w)$. \end{proof} Using Lemma~\ref{lem:evenPalin}, we can partition the set of orbits of a simple walk in a graph into two equal-sized parts in a natural way. We use $\sqcup$ to denote the disjoint union operator. \begin{lemma} \label{lem:Upart} Let $\Gamma$ be a graph, $N \geqslant 3$ be an integer, and $\mathbf w \in W_N^\prime(\Gamma)$. Then there exists $U \subset \operatorname{orb}_{D_N}(\mathbf w)$ such that \[ \operatorname{orb}_{D_N}(\mathbf w) = U \; \sqcup \; \Psi(U). \] \end{lemma} \begin{proof} By Lemma~\ref{lem:evenPalin}, the number of palindromic walks in $\operatorname{orb}_{D_N}(\mathbf w)$ is either $0$ or $2$. If it is $0$ then set $U = \operatorname{orb}_{\langle r \rangle}(\mathbf w)$; clearly $U$ and $\Psi(U)$ partition $\operatorname{orb}_{D_N}(\mathbf w)$. Otherwise, if it is $2$ then, without loss of generality, assume that $\mathbf w$ is palindromic and set $U = \{\mathbf w^i \; | \; i \in \{0,\dots,\psi(\mathbf w)-1\} \}$. Using the proof of Lemma~\ref{lem:evenPalin}, it follows that $U$ and $\Psi(U)$ partition $\operatorname{orb}_{D_N}(\mathbf w)$. \end{proof} \begin{remark} \label{rem:sameWeight} Let $H \in \mathcal H_n(q)$, $\Gamma=\Gamma(H)$, $N \geqslant 3$ be an integer, $\mathbf w \in W_N^\prime(\Gamma)$, and let $U$ be a subset from Lemma~\ref{lem:Upart} such that $ \operatorname{orb}_{D_N}(\mathbf w) = U \; \sqcup \; \Psi(U)$. Observe that each walk $\mathbf w \in U$ has the same weight. It is important to note, however, that it is not necessary for $\mathbf w$ and $\mathbf w^\top$ to have the same weight. \end{remark} Using Lemma~\ref{lem:Upart}, we can also partition the set $W_k^\prime(\Gamma)$ of closed $k$-walks of a graph $\Gamma$ into two equal-sized parts in the same way. \begin{corollary} \label{cor:UpartWalk} Let $\Gamma$ be a graph, $k \geqslant 1$ be an integer, and $\mathbf w \in W_k^\prime(\Gamma)$. There exists $U \subset W_k^\prime(\Gamma)$ such that \[ W_k^\prime(\Gamma) = U \; \sqcup \; \Psi(U). \] \end{corollary} \begin{proof} The case when $k=1$ is vacuous. When $k=2$, each walk $(v,w,v)$ can be paired up with $(w,v,w)$ and each non-palindromic walk $(u,v,w)$ can be paired up with $(w,v,u)$. 
For $k \geqslant 3$, the corollary follows from Lemma~\ref{lem:Upart}. \end{proof} We will also need the following result about the weights of (not necessarily simple) walks in an orbit. \begin{lemma} \label{lem:UpartNonsimple} Let $H \in \mathcal H_n(q)$, $\Gamma=\Gamma(H)$, $N \geqslant 3$ be an integer, and $\mathbf w \in W_N(\Gamma)$. Then either \begin{itemize} \item $\operatorname{wt}(\mathbf x) = \operatorname{wt}(\mathbf w)$ for all $\mathbf x \in \operatorname{orb}_{D_N}(\mathbf w)$; or \item there exists $U \subset \operatorname{orb}_{D_N}(\mathbf w)$ such that $ \operatorname{orb}_{D_N}(\mathbf w) = U \; \sqcup \; \{ \mathbf x^\top \; | \; \mathbf x \in U\}$ and $\operatorname{wt}(\mathbf x) = \operatorname{wt}(\mathbf w)$ for all $\mathbf x \in U$. \end{itemize} \end{lemma} \begin{proof} Suppose $\mathbf x \in \operatorname{orb}_{D_N}(\mathbf w)$ such that $\operatorname{wt}(\mathbf x) \ne \operatorname{wt}(\mathbf x^\top)$. Without loss of generality, we can assume that $\operatorname{wt}(\mathbf x) = \operatorname{wt}(\mathbf w)$. Let $U = \{ \mathbf y \in \operatorname{orb}_{D_N}(\mathbf w) \; | \; \mathbf y^i = \mathbf x \text{ for some } i \in \{0,\dots, N-1\} \}$. Clearly, $\operatorname{wt}(\mathbf y) = \operatorname{wt}(\mathbf w)$ for each $\mathbf y \in U$ and $\operatorname{orb}_{D_N}(\mathbf w) \backslash U = \{ \mathbf y^\top \; | \; \mathbf y \in U\}$, as required. \end{proof} In the subsequent results in this section, we will occasionally take advantage of the following properties of weights of walks when $q$ is a power of $2$, which we record as a remark. \begin{remark} \label{rem:weightsum} Let $n,k \in \mathbb N$, $q \geqslant 2$ be a power of $2$, $H \in \mathcal H_n(q)$, $\Gamma =\Gamma(H)$, and $\mathbf w \in W_k(\Gamma)$. We claim that $\operatorname{wt}(\mathbf w ) + \operatorname{wt}(\Psi(\mathbf w) ) \in (1-\zeta)\mathbb Z[\zeta]$. Indeed, write $\mathbf w = (w_0,w_1,\dots,w_k)$ and $\operatorname{wt}(\mathbf w) = \prod_{i=1}^{k} \frac{1-\zeta^{e_i}}{1-\zeta}$ for some nonzero integers $e_i$. Then $\operatorname{wt}(\Psi(\mathbf w) ) = \prod_{i=1}^{k} \frac{1-\zeta^{q-e_i}}{1-\zeta}$. Using Remark~\ref{rem:Aentries} and Proposition~\ref{pro:idealprelim} (f), we have \[ \operatorname{wt}(\mathbf w) - \prod_{i=1}^{k} e_i \in (1-\zeta)\mathbb Z[\zeta] \text{ and } \operatorname{wt}(\Psi(\mathbf w)) - \prod_{i=1}^{k} (-e_i) \in (1-\zeta)\mathbb Z[\zeta]. \] Our claim now readily follows from Proposition~\ref{pro:idealprelim} (g). \end{remark} Next, we apply the orbit-stabiliser theorem to obtain a key result about a certain sum of weights of walks. \begin{lemma} \label{lem:weightedBurnside} Let $n \in \mathbb N$, $q > 2$ be a power of $2$, $N \geqslant 3$ be an integer, $H \in \mathcal H_n(q)$, and $\Gamma=\Gamma(H)$. Then $$\sum_{g \in D_N} \sum_{\mathbf w \in \operatorname{fix}_{\Gamma}(g)} \operatorname{wt}(\mathbf w) \in (1-\zeta)N\mathbb Z[\zeta].$$ \end{lemma} \begin{proof} For each $\mathbf w \in W_N(\Gamma)$, denote by $\operatorname{stab}_{D_N}(\mathbf w)$ the stabiliser of $\mathbf w$ under the action of $D_N$ and the set of orbits is denoted by $\mathcal O = \{\operatorname{orb}_{D_N}(\mathbf w) \; | \; \mathbf w \in W_N(\Gamma) \}$. 
Using the orbit-stabiliser theorem, we can write \begin{align*} \sum_{g \in D_N} \sum_{\mathbf w \in \operatorname{fix}_\Gamma(g)} \operatorname{wt}(\mathbf w) &= \sum_{\mathbf w \in W_N(\Gamma)} \sum_{g \in \operatorname{stab}_{D_N}(\mathbf w)} \operatorname{wt}(\mathbf w) \\ &= \sum_{\mathbf w \in W_N(\Gamma)} \frac{2N \operatorname{wt}(\mathbf w)}{|\operatorname{orb}_{D_N}(\mathbf w)|} \\ &= \sum_{X \in \mathcal O}\sum_{\mathbf w \in X} \frac{2N \operatorname{wt}(\mathbf w)}{|X|}. \end{align*} By Lemma~\ref{lem:UpartNonsimple}, for each orbit $X \in \mathcal O$, the number of distinct weights of walks in $X$ is at most $2$. Now we partition $\mathcal O$ as $\mathcal O = \mathcal O_0 \sqcup \mathcal O_1 \sqcup \mathcal O_2$, where $\mathcal O_0$ is the set of orbits consisting of simple walks, $\mathcal O_1$ is the set of orbits that contain non-simple walks with just one weight, and $\mathcal O_2$ is the set of orbits that contain non-simple walks with two distinct weights. We consider each part separately. First, we consider the orbits that consist of simple walks. By Lemma~\ref{lem:Upart}, for each $X \in \mathcal O_0$, there exists a subset $U(X) \subset X$ such that \[ X = U(X) \; \sqcup \; \Psi(U(X)), \] and, by Remark~\ref{rem:sameWeight}, each walk in $U(X)$ has the same weight. Define $\operatorname{wt}(X) := \operatorname{wt}(\mathbf w)+\operatorname{wt}(\mathbf w^\top)$ for some $\mathbf w \in X$. Then \begin{align*} \sum_{X \in \mathcal O_0}\sum_{\mathbf w \in X} \frac{2N \operatorname{wt}(\mathbf w)}{|X|} &= \sum_{X \in \mathcal O_0}\sum_{\mathbf w \in U(X)} N\frac{ \operatorname{wt}(\mathbf w)+\operatorname{wt}(\mathbf w^\top)}{|U(X)|} = \sum_{X \in \mathcal O_0} \operatorname{wt}(X) N. \end{align*} By Remark~\ref{rem:weightsum}, we have $\operatorname{wt}(X) \in (1-\zeta)\mathbb Z[\zeta]$ for all $X \in \mathcal O_0$, which implies that $\sum_{X \in \mathcal O_0} \operatorname{wt}(X) N \in (1-\zeta)N\mathbb Z[\zeta]$. Next we consider the orbits that contain non-simple walks and have just one distinct weight. By Lemma~\ref{lem:UpartNonsimple}, for each $X \in \mathcal O_1$, every walk in $X$ has the same weight. Define $\operatorname{wt}(X) := \operatorname{wt}(\mathbf w)$ for some $\mathbf w \in X$. Since $q > 2$ is a power of $2$, by Remark~\ref{rem:nonsimple}, we have $\operatorname{wt}(X) \in (1-\zeta)\mathbb Z[\zeta]$ for each $X \in \mathcal O_1$. Therefore \begin{align*} \sum_{X \in \mathcal O_1}\sum_{\mathbf w \in X} \frac{2N \operatorname{wt}(\mathbf w)}{|X|} &= \sum_{X \in \mathcal O_1} 2\operatorname{wt}(X) N \in (1-\zeta)N\mathbb Z[\zeta]. \end{align*} Finally, we consider the orbits that contain non-simple walks and have two distinct weights. By Lemma~\ref{lem:UpartNonsimple}, for each $X \in \mathcal O_2$, there exists a subset $U(X) \subset X$ such that \[ X = U(X) \; \sqcup \; \{ \mathbf x^\top \; |\; \mathbf x \in U(X)\}, \] and each walk in $U$ has the same weight. Define $\operatorname{wt}(X) := \operatorname{wt}(\mathbf w)+\operatorname{wt}(\mathbf w^\top)$ for some $\mathbf w \in X$. Since $q > 2$ is a power of $2$, by Remark~\ref{rem:nonsimple}, we have $\operatorname{wt}(X) \in (1-\zeta)\mathbb Z[\zeta]$ for each $X \in \mathcal O_2$. 
Therefore
\begin{align*}
\sum_{X \in \mathcal O_2}\sum_{\mathbf w \in X} \frac{2N \operatorname{wt}(\mathbf w)}{|X|} &= \sum_{X \in \mathcal O_2}\sum_{\mathbf w \in U(X)} N\frac{ \operatorname{wt}(\mathbf w)+\operatorname{wt}(\mathbf w^\top)}{|U(X)|} = \sum_{X \in \mathcal O_2} \operatorname{wt}(X) N \in (1-\zeta)N\mathbb Z[\zeta].
\end{align*}
By combining the above, we find that
\[
\sum_{X \in \mathcal O}\sum_{\mathbf w \in X} \frac{2N \operatorname{wt}(\mathbf w)}{|X|} = \sum_{X \in \mathcal O_0}\sum_{\mathbf w \in X} \frac{2N \operatorname{wt}(\mathbf w)}{|X|} + \sum_{X \in \mathcal O_1}\sum_{\mathbf w \in X} \frac{2N \operatorname{wt}(\mathbf w)}{|X|} + \sum_{X \in \mathcal O_2}\sum_{\mathbf w \in X} \frac{2N \operatorname{wt}(\mathbf w)}{|X|} \in (1-\zeta)N\mathbb Z[\zeta],
\]
as required.
\end{proof}
In the next lemma, we write summands from Lemma~\ref{lem:weightedBurnside} in terms of the matrix $A=(J-H)/(1-\zeta)$. Recall that $r$ and $s$ are the generators of $D_N = \langle r, s \; |\; r^N, s^2, (rs)^2 \rangle$ and define $\mathfrak W_N(H)$ as
\[
\mathfrak W_N(H):= \left \{ \mathbf w \in \operatorname{fix}_{\Gamma(H)}(rs) \; \mid \; \frac{(1-\zeta)^2}{4}\operatorname{wt}(\mathbf w) \not \in (1-\zeta)\mathbb Z[\zeta] \right \}.
\]
Note that each walk $\mathbf w = (w_0,w_1,\dots,w_{N}) \in \mathfrak W_N(H)$ is not simple. Indeed, $\mathbf w$ is fixed by $rs$, which means $w_0 = w_1$.
\begin{lemma}
\label{lem:fixToTrace}
Let $n \in \mathbb N$, $q>2$, $H \in \mathcal H_n(q)$, $A=(J-H)/(1-\zeta)$, $\Gamma=\Gamma(H)$, and $N \geqslant 3$ be an integer. Then
\begin{itemize}
\item[(i)] $\displaystyle\sum_{\mathbf w \in \operatorname{fix}_{\Gamma}(r^k)}\operatorname{wt}(\mathbf w)= \operatorname{tr} \left( \left(A^{\circ \frac{N}{\gcd{(k,N)}}}\right)^{\gcd{(k,N)}} \right)$, for all $k \in \mathbb Z$;
\item[(ii)] if $N$ is even, $\displaystyle\sum_{\mathbf w \in \operatorname{fix}_{\Gamma}(r^{2k}s)}\operatorname{wt}(\mathbf w)=\mathbf 1^\top ( A \circ A^\top )^\frac{N}{2} \mathbf 1 $, for all $k\in \mathbb Z$.
\item[(iii)] if $q$ is a power of $2$,
\[
\begin{cases}
\displaystyle \sum_{k=0}^{N-1}\sum_{\mathbf w \in \operatorname{fix}_{\Gamma}(r^{k}s)}\operatorname{wt}(\mathbf w) \in (1-\zeta)N\mathbb Z[\zeta],\; & \text{if $N$ is odd;}\\ \\
\displaystyle \sum_{k=0}^{N/2-1}\sum_{\mathbf w \in \operatorname{fix}_{\Gamma}(r^{2k+1}s)}\operatorname{wt}(\mathbf w) - N|\mathfrak W_N(H)| \in (1-\zeta)N\mathbb Z[\zeta],\; & \text{if $N$ is even.}
\end{cases}
\]
\item[(iv)] if $q$ is not a power of $2$,
\[
\begin{cases}
\displaystyle \sum_{k=0}^{N-1}\sum_{\mathbf w \in \operatorname{fix}_{\Gamma}(r^{k}s)}\operatorname{wt}(\mathbf w) =0,\; & \text{if $N$ is odd;}\\ \\
\displaystyle \sum_{k=0}^{N/2-1}\sum_{\mathbf w \in \operatorname{fix}_{\Gamma}(r^{2k+1}s)}\operatorname{wt}(\mathbf w) =0,\; & \text{if $N$ is even.}
\end{cases}
\]
\end{itemize}
\end{lemma}
\begin{proof}
Let $\mathbf w=(w_0,w_1,\dots,w_{N-1},w_N) \in \operatorname{fix}_{\Gamma}(g)$ for some $g\in D_N$. We split into cases according to the group element $g$. First suppose $g=r^k$ for some $k \in \mathbb Z$. Then $\operatorname{ord}(g)=\frac{N}{\gcd(k,N)}$. Therefore $g$ has $\gcd(k,N)$ cycles each of length $\operatorname{ord}(g)$. It follows that $\mathbf w$ is a closed $\gcd(k,N)$-walk $\mathbf x$ repeated $\operatorname{ord}(g)$ times. Thus $\operatorname{wt}(\mathbf w)=\operatorname{wt} (\mathbf x)^{\operatorname{ord}(g)}$, from which, using Remark~\ref{rem:weightPowers}, part (i) follows.
Suppose that $N$ is even and $g=r^{2k}s$ for some $k\in \mathbb Z$. Then $g$ consists of $N/2-1$ cycles of length 2 and two cycles of length 1. Without loss of generality, we may assume $w_0$ and $w_{\frac{N}{2}}$ are fixed by $g$. Then $\mathbf w$ is a closed $N$-walk made up of an $N/2$-walk $\mathbf x$ followed by its reverse walk $\mathbf x^\top$. Hence the sum of such walks is equal to the sum of each entry of the matrix $( A \circ A^\top )^{N/2}$. Whence we have part (ii). At this point, note that all walks in $\operatorname{fix}_{\Gamma}(g)$ must contain a loop if $N$ is odd and $g =r^ks$ or if $N$ is even and $g=r^{2k+1}s$ for some $k$. By Remark~\ref{rem:nonsimple}, if $q$ is not a power of $2$ then $\Gamma$ is a simple graph thus part (iv) follows immediately. It remains to assume that $q$ is a power of $2$. Suppose that $N$ is odd and $g=r^{k}s$ for some $k\in \mathbb Z$. Then $g$ consists of $(N-1)/2$ cycles of length $2$. This means that one pair of adjacent vertices of the walk $\mathbf w$ must be the same. When $g=s$, we have $w_{\frac{N-1}{2}} = w_{\frac{N+1}{2}}$, that is, there is a loop at $w_{\frac{N-1}{2}}$. Thus $A_{w_{\frac{N-1}{2}},w_{\frac{N-1}{2}} } = 2/(1-\zeta)$. Note that, since $q>2$ is a power of $2$, by Proposition~\ref{pro:idealprelim} (f), we have $2/(1-\zeta) \in (1-\zeta)\mathbb Z[\zeta]$. Therefore, we can write $\operatorname{wt}(\mathbf w) = (1-\zeta)\alpha_{\mathbf w}$ for some $\alpha_{\mathbf w} \in \mathbb Z[\zeta]$. Observe that $\mathbf w \in \operatorname{fix}_{\Gamma}(r^{2i}s)$ for some $i \in \{0,\dots,N-1\}$ if and only if $\mathbf w^i \in \operatorname{fix}_{\Gamma}(s)$. Indeed, in both cases (with indices reduced modulo $N$) we have $w_{i+j} = w_{N+i-j}$ for all $j \in \{0,\dots,N-1\}$. Since $\operatorname{wt}(\mathbf w^i) = \operatorname{wt}(\mathbf w)$, it follows that $\sum_{\mathbf w \in \operatorname{fix}_{\Gamma}(r^{2i}s)}\operatorname{wt}(\mathbf w) = \sum_{\mathbf w \in \operatorname{fix}_{\Gamma}(s)}\operatorname{wt}(\mathbf w)$. Hence, \[ \sum_{k=0}^{N-1}\sum_{\mathbf w \in \operatorname{fix}_{\Gamma}(r^{k}s)}\operatorname{wt}(\mathbf w) = \sum_{i=0}^{N-1}\sum_{\mathbf w \in \operatorname{fix}_{\Gamma}(r^{2i}s)}\operatorname{wt}(\mathbf w) = N\sum_{\mathbf w \in \operatorname{fix}_{\Gamma}(s)}\operatorname{wt}(\mathbf w) = N(1-\zeta)\sum_{\mathbf w \in \operatorname{fix}_{\Gamma}(s)}\alpha_{\mathbf w} \in (1-\zeta)N\mathbb Z[\zeta]. \] Finally, suppose that $N$ is even and $g=r^{2k+1}s$ for some $k\in \mathbb Z$. Then $g$ consists of $N/2$ cycles of length $2$. This means that two pairs of adjacent vertices of the walk $\mathbf w$ must be the same. When $g=rs$, we have $w_0 = w_1$ and $w_{\frac{N}{2}} = w_{\frac{N}{2}+1}$. This means that there are loops at $w_0$ and $w_{\frac{N}{2}}$. Thus $A_{w_0,w_0} = A_{w_{\frac{N}{2}},w_{\frac{N}{2}}} = 2/(1-\zeta)$. Note that, since $q>2$ is a power of $2$, we have $4/(1-\zeta)^2 \in 2\mathbb Z[\zeta]$. Therefore, we can write $\operatorname{wt}(\mathbf w) = 2\alpha_{\mathbf w}$ for some $\alpha_{\mathbf w} \in \mathbb Z[\zeta]$. Observe that $\mathbf w \in \operatorname{fix}_{\Gamma}(r^{2k+1}s)$ if and only if $\mathbf w^k \in \operatorname{fix}_{\Gamma}(rs)$. Indeed, in both cases (with indices reduced modulo $N$) we have $w_{k+j} = w_{N+k-j}$ for all $j \in \{0,\dots,N/2-1\}$. 
Since $\operatorname{wt}(\mathbf w^k) = \operatorname{wt}(\mathbf w)$, it follows that $\sum_{\mathbf w \in \operatorname{fix}_{\Gamma}(r^{2k+1}s)}\operatorname{wt}(\mathbf w) = \sum_{\mathbf w \in \operatorname{fix}_{\Gamma}(rs)}\operatorname{wt}(\mathbf w)$. Hence, \[ \sum_{k=0}^{N/2-1}\sum_{\mathbf w \in \operatorname{fix}_{\Gamma}(r^{2k+1}s)}\operatorname{wt}(\mathbf w) = \frac{N}{2}\sum_{\mathbf w \in \operatorname{fix}_{\Gamma}(rs)}\operatorname{wt}(\mathbf w) = N\sum_{\mathbf w \in \operatorname{fix}_{\Gamma}(rs)}\alpha_{\mathbf w}. \] By Proposition~\ref{pro:idealprelim} (g), we have $\sum_{\mathbf w \in \operatorname{fix}_{\Gamma}(rs)}\alpha_{\mathbf w} - |\mathfrak W_N(H)| \in (1-\zeta)\mathbb Z[\zeta]$. Whence we have part (iii). \end{proof} By combining Lemma~\ref{lem:weightedBurnside} and Lemma~\ref{lem:fixToTrace}, we arrive at the following corollaries. These results should be compared with Theorem~\ref{thm:hs}. \begin{corollary} \label{cor:mainHSgenOdd} Let $n \in \mathbb N$, $q > 2$ be a power of $2$, $N \geqslant 3$ be an odd integer, $H \in \mathcal H_n(q)$, and $A=(J-H)/(1-\zeta)$. Then $$\sum_{d \;|\; N} \phi(N/d) \operatorname{tr}((A^{\circ N/d})^d) \in (1-\zeta)N \mathbb Z[\zeta].$$ \end{corollary} For the case when $N$ is even, we first state a slightly weaker generalisation of Theorem~\ref{thm:hs}, which applies only to $q$-Seidel matrices. \begin{corollary} \label{cor:mainSeidelgen} Let $n \in \mathbb N$, $q$ be a power of $2$, $N \geqslant 4$ be an even integer, $S$ be a $q$-Seidel matrix, and $A=(J-I-S)/(1-\zeta)$. Then $$\sum_{d \;|\; N} \phi(N/d) \operatorname{tr}((A^{\circ N/d})^d) + \frac{N}{2}\mathbf 1^\top (A \circ A^\top)^{N/2} \mathbf 1 \in (1-\zeta)N \mathbb Z[\zeta].$$ \end{corollary} As we will see below, the corresponding result for matrices $H \in \mathcal H_n(q)$ requires the use of the set $\mathfrak W_N(H)$. Now, we state a slightly more general version of the above result, which we consider to be the main result of this section. We will refine this result later in Section~\ref{sec:euler}, where we will deal with the unsightly term involving $|\mathfrak W_N(H)|$. \begin{corollary} \label{cor:mainHSgen} Let $n \in \mathbb N$, $q > 2$ be a power of $2$, $N \geqslant 4$ be an even integer, $H \in \mathcal H_n(q)$, and $A=(J-H)/(1-\zeta)$. Then $$\sum_{d \;|\; N} \phi(N/d) \operatorname{tr}((A^{\circ N/d})^d) + N|\mathfrak W_N(H)| + \frac{N}{2}\mathbf 1^\top (A \circ A^\top)^{N/2} \mathbf 1 \in (1-\zeta)N \mathbb Z[\zeta].$$ \end{corollary} We conclude this section with a preparatory result about the trace of powers of the matrix $A$. \begin{lemma} \label{lem:trCong} Let $n \in \mathbb N$, $q > 2$ be a power of $2$, $H \in \mathcal H_n(q)$, and $A = (J-H)/(1-\zeta)$. Then, for all $k \geqslant 2$, there exists $U \subset W_k^\prime(\Gamma(H))$ such that \begin{align} \operatorname{tr}( A^k) - \sum_{\mathbf w \in U} \mathrm{wt}(\mathbf w)+\mathrm{wt}(\mathbf w^\top) &\in (1-\zeta)^2\mathbb Z[\zeta] \text{ and } \label{eqn:trAk} \\ \operatorname{tr}( (A \circ A^\top)^k) - 2\sum_{\mathbf w \in U} \mathrm{wt}(\mathbf w)\mathrm{wt}(\mathbf w^\top) &\in 2(1-\zeta)\mathbb Z[\zeta]. \label{eqn:trAoAt} \end{align} \end{lemma} \begin{proof} Let $\Gamma = \Gamma(H)$ be the underlying graph of $H$. First, using Remark~\ref{rem:weightPowers}, observe that $\operatorname{tr}( A^k)$ is equal to the sum of $\operatorname{wt}(\mathbf w)$ over $\mathbf w \in W_k(\Gamma)$. 
Note that the diagonal entries of $A$ are equal to $0$ or $2/(1-\zeta) \in (1-\zeta)\mathbb Z[\zeta]$. Therefore, it is clear that the sum of the weights of walks that contain at least $2$ loops is in $(1-\zeta)^2\mathbb Z[\zeta]$. Let $W$ be the subset of $W_k(\Gamma)$ each of whose walks $\mathbf w$ contains a loop and $\operatorname{wt}(\mathbf w) \not \in (1-\zeta)^2\mathbb Z[\zeta]$. We claim that the sum of the weights of the walks in $W$ is in $(1-\zeta)^2\mathbb Z[\zeta]$. Each walk $\mathbf w \in W$ has the form \[ \mathbf w = (w_0,w_1,\dots,w_{i-1},w_i,w_i,w_{i+1},\dots,w_{k-1}). \] We can obtain the simple walk $\mathbf x = (w_0,w_1,\dots,w_{i-1},w_i,w_{i+1},\dots,w_{k-1}) \in W_{k-1}^\prime(\Gamma)$ by removing the edge $(w_i,w_i)$. Note that $\operatorname{wt}(\mathbf w) = 2\operatorname{wt}(\mathbf x)/(1-\zeta)$. By Corollary~\ref{cor:UpartWalk}, there exists $U \subset W_{k-1}^\prime(\Gamma)$ such that $W_{k-1}^\prime(\Gamma) = U \sqcup \Psi(U)$. Let $L(\Gamma)$ be the set of vertices $v$ of $\Gamma$ such that $(v,v)$ is a loop and let $U(v)$ be the subset of $U$ consisting of the walks that contain the vertex $v$. Then \[ \sum_{\mathbf w \in W} \operatorname{wt}(\mathbf w) - \frac{2}{1-\zeta}\sum_{v \in L(\Gamma)} \sum_{\mathbf x \in U(v)} \operatorname{wt}(\mathbf x) + \operatorname{wt}(\Psi(\mathbf x)) \in (1-\zeta)^2\mathbb Z[\zeta]; \] Our claim then follows from Proposition~\ref{pro:idealprelim} (f) and Remark~\ref{rem:weightsum}. Therefore, \[ \operatorname{tr}( A^k) - \sum_{\mathbf w \in W_k^\prime(\Gamma)} \operatorname{wt}(\mathbf w) \in (1-\zeta)^2\mathbb Z[\zeta] \] and \eqref{eqn:trAk} follows from Corollary~\ref{cor:UpartWalk}. The proof of \eqref{eqn:trAoAt} is similar. We merely point out the differences to the above. Using Remark~\ref{rem:nonsimple}, we observe that $\operatorname{tr}( (A \circ A^\top)^k)$ equal to the sum of $\mathrm{wt}(\mathbf w)\mathrm{wt}(\mathbf w^\top)$ over $\mathbf w \in W_k(\Gamma)$. Note that the diagonal entries of $A \circ A^\top$ are equal to $0$ or $4/(1-\zeta)^2 \in 2\mathbb Z[\zeta]$. If $\mathbf w \in W_k(\Gamma)$ is not a simple walk then either $\operatorname{wt}(\mathbf w)\mathrm{wt}(\mathbf w^\top) \in 2(1-\zeta)\mathbb Z[\zeta]$ or $\operatorname{wt}(\mathbf w)\mathrm{wt}(\mathbf w^\top) = 4/(1-\zeta)^2 \operatorname{wt}(\mathbf x)\mathrm{wt}(\mathbf x^\top)$ for some $\mathbf x \in W_{k-1}^\prime(\Gamma)$. We leave the rest of the details of the proof of \eqref{eqn:trAoAt} to the reader. \end{proof} \section{Determinants and characteristic polynomials} \label{sec:dets} In this section, we establish results about the determinants and characteristic polynomials of matrices in $\mathcal H_n(q)$. At the end of this section we provide proofs of part (c) and part (d) of Theorem~\ref{thm:mainBounds} and Theorem~\ref{thm:mainBounds2}. Denote by $\mathfrak S_n$ and $\mathfrak D_n$, respectively, the symmetric group and its subset of derangements acting on $\{1,\dots,n\}$. We begin with a result about the determinant of a matrix closely related to matrices in $\mathcal H_n(q)$. \begin{lemma} \label{lem:detISMnodd} Let $n, q \in \mathbb N$ with $n$ odd, $q > 2$, $H \in \mathcal H_n(q)$, and $A = (J-H)/(1-\zeta)$. Then $\det (A) \in (1-\zeta) \mathbb Z[\zeta]$. 
\end{lemma}
\begin{proof}
Using Leibniz's formula, we can write
\begin{equation}
\label{detA1}
\det (A) = \sum_{\sigma \in \mathfrak S_n} \operatorname{sgn}(\sigma)\prod_{i=1}^n A_{i,\sigma(i)},
\end{equation}
where $\operatorname{sgn}(\sigma)$ denotes the signature of the permutation $\sigma$. Since $q>2$, each diagonal entry of $A$ is in $(1-\zeta)\mathbb Z[\zeta]$; hence, if $\sigma \in \mathfrak S_n$ has a fixed point then $\prod_{i=1}^n A_{i,\sigma(i)} \in (1-\zeta)\mathbb Z[\zeta]$. Furthermore, since $n$ is odd, for each $\sigma \in \mathfrak D_n$, we have $\sigma \ne \sigma^{-1}$ and thus there exists a subset $\mathfrak U \subset \mathfrak D_n$ such that
\[
\mathfrak D_n = \mathfrak U \sqcup \{ \sigma^{-1} \; | \; \sigma \in \mathfrak U \}.
\]
Therefore, we can rewrite \eqref{detA1} as
\begin{align*}
\det( A ) &= \sum_{\sigma \in \mathfrak U} \operatorname{sgn}(\sigma) \left( \prod_{i=1}^n A_{i,\sigma(i)} + \prod_{i=1}^n A_{\sigma(i),i} \right).
\end{align*}
By Remark~\ref{rem:Aentries}, for each $\sigma \in \mathfrak U$, we have
\[
\prod_{i=1}^n A_{\sigma(i),i} - (-1)^n\prod_{i=1}^n A_{i,\sigma(i)} \in (1-\zeta)\mathbb Z[\zeta].
\]
Since $n$ is odd, we have $(-1)^n = -1$, whence the lemma follows.
\end{proof}
The statement of the next corollary follows immediately if one recalls the comments made after Lemma~\ref{lem:detIdealEasy}.
\begin{corollary}
\label{cor:oddbi}
Let $n, q \in \mathbb N$ with $q > 2$, $H \in \mathcal H_n(q)$, and $A = (J-H)/(1-\zeta)$. Write $\operatorname{Char}_{A}(x)=\sum_{i=0}^n b_i x^{n-i}$. Then $b_i \in (1-\zeta) \mathbb Z[\zeta]$ for all odd $i \in \{1,\dots,n\}$.
\end{corollary}
Next we show that the determinant of a matrix in $\mathcal H_n(q)$ lies in an ideal generated by a large power of $(1-\zeta)$.
\begin{lemma}
\label{lem:detIdeal}
Let $n, q \in \mathbb N$ with $q > 2$ and $H \in \mathcal H_n(q)$. Then $\det (H) \in (1-\zeta)^{n-1}\mathbb Z[\zeta]$. Furthermore, if $n$ is even then $\det( H) \in (1-\zeta)^{n}\mathbb Z[\zeta]$.
\end{lemma}
\begin{proof}
Subtract the first row of $H$ from all other rows to obtain $H^\prime$. Except for the first row, each entry of $H^\prime$ is in $(1-\zeta)\mathbb Z[\zeta]$. Thus $\det (H) = \det (H^\prime) \in (1-\zeta)^{n-1}\mathbb Z[\zeta]$. Now suppose $n$ is even. By taking the negative of $H$, if necessary, we may assume that $H_{1,1} = 1$. Conjugating by a diagonal matrix with entries in $\mathcal C_q$, if necessary, we may further assume that all entries in the first row and column of $H$ equal $1$. Then, except for $H^\prime_{1,1} = 1$, each entry of the first column of $H^\prime$ must equal $0$. Hence $\det(H^\prime) = \det(H^\prime[1])$. Observe that $H^\prime[1] = H[1]-J = -(1-\zeta)A$, where $A = (J-H[1])/(1-\zeta)$ is a matrix of order $n-1$ whose entries belong to $\mathbb Z[\zeta]$. By Lemma~\ref{lem:detISMnodd}, the determinant of $A$ is in $(1-\zeta)\mathbb Z[\zeta]$. Thus, the lemma follows.
\end{proof}
Since each $H \in \mathcal H_n(q)$ is Hermitian, we have $\det (H) \in \mathbb R$. The next corollary follows immediately from Lemma~\ref{lem:detIdeal} and Proposition~\ref{pro:realIdeal}.
\begin{corollary}
\label{cor:detH}
Let $n, q \in \mathbb N$ with $q > 2$, and let $H \in \mathcal H_n(q)$. Then $\det (H) \in \rho^{\lfloor n/2 \rfloor}\mathbb Z[\zeta+\zeta^{-1}]$.
\end{corollary}
The proof of the next lemma is a straightforward modification of the proof of \cite[Lemma 3.1]{GreavesYatsyna19}.
\begin{lemma}
\label{lem:matdet}
Let $A$ be a matrix of order $n$.
Write $\operatorname{Char}_{J-(1-\zeta)A}(x)=\sum_{i=0}^n a_ix^{n-i}$ and $\operatorname{Char}_A(x)=\sum_{i=0}^n b_ix^{n-i}$. Then, for all $j \in \{0,\dots,n\}$, $$a_j=(\zeta-1)^j \left( b_j + \frac{1}{1-\zeta} \sum_{i=1}^j b_{j-i} \mathbf{1}^\top A^{i-1}\mathbf{1} \right).$$ \end{lemma} In Lemma~\ref{lem:matdet}, we see summands of the form $\mathbf 1^\top A^k \mathbf 1$. In preparation for Lemma~\ref{lem:armod}, we can obtain restrictions on these summands, as shown in the next result. \begin{lemma} \label{lem:2sumApower} Let $n \in \mathbb N$, $q > 2$ be a power of $2$, $H \in \mathcal H_n(q)$, and $A = (J-H)/(1-\zeta)$. Then $\mathbf 1^\top A^k \mathbf 1 \in (1-\zeta)\mathbb Z[\zeta]$ for all $k \in \mathbb N$. \end{lemma} \begin{proof} First, since off-diagonal entries are counted twice and $2 \in (1-\zeta)\mathbb Z[\zeta]$ (by Proposition~\ref{pro:idealprelim} (f)), observe that \[ \mathbf 1^\top A^k \mathbf 1 - \operatorname{tr}(A^k) \in (1-\zeta)\mathbb Z[\zeta]. \] Then the lemma follows from Lemma~\ref{lem:trCong}. \end{proof} Lemma~\ref{lem:matdet} and Lemma~\ref{lem:2sumApower} are the main tools used to obtain the following lemma about the coefficients of the characteristic polynomial of $H \in \mathcal H_n(q)$. \begin{lemma} \label{lem:armod} Let $n \in \mathbb N$, $q > 2$ be a power of $2$, $H \in \mathcal H_n(q)$, and write $\operatorname{Char}_{H}(x)=\sum_{i=0}^n a_ix^{n-i}$. \begin{itemize} \item If $n$ is even then $a_j \in (1-\zeta)^j \mathbb Z[\zeta]$ for each $j \in \{0,\dots,n\}$. \item If $j \in \{0,\dots,n\}$ is even then $a_j \in (1-\zeta)^j \mathbb Z[\zeta]$. \end{itemize} \end{lemma} \begin{proof} The proof is trivial for $j = 0$. Assume $j \in \{1,\dots,n\}$. First, suppose $n$ is even. By Lemma~\ref{lem:matdet}, we can write $$a_j=(\zeta-1)^{j}b_{j} -(\zeta-1)^{j-1}nb_{j-1} - (\zeta -1)^{j-1}\sum_{i=2}^{j} b_{j-i} \mathbf{1}^\top A^{i-1}\mathbf{1}.$$ Note that $n \in (1-\zeta)\mathbb Z[\zeta]$. By Lemma~\ref{lem:2sumApower}, we have $\mathbf{1}^\top A^{i-1}\mathbf{1} \in (1-\zeta) \mathbb Z[\zeta]$ for all $i \geqslant 2$. Thus, $a_j \in (1-\zeta)^j \mathbb Z[\zeta]$ as required. Now suppose $j$ is even and write $j=2k$ for some positive integer $k$. By Lemma~\ref{lem:matdet}, we have \begin{align} \label{eqn:a2k} a_{2k} &=(\zeta-1)^{2k}b_{2k} +(1-\zeta)^{2k-1}b_{2k-1}n + (1-\zeta)^{2k-1}\sum_{i=2}^{2k} b_{2k-i} \mathbf{1}^\top A^{i-1}\mathbf{1}. \end{align} As $2k-1$ is odd, we use Corollary~\ref{cor:oddbi} to give $b_{2k-1} \in (1-\zeta)\mathbb Z[\zeta]$. Using Lemma~\ref{lem:2sumApower}, for each $i \geqslant 2$, we have $\mathbf{1}^\top A^{i-1}\mathbf{1} \in (1-\zeta) \mathbb Z[\zeta]$. Thus, from \eqref{eqn:a2k}, we obtain $a_{2k} \in (1-\zeta)^{2k} \mathbb Z[\zeta]$ as required. \end{proof} Again, using the fact that each $H \in \mathcal H_n(q)$ is Hermitian, we have the following corollary, which is a consequence of Corollary~\ref{cor:detH}, and Lemma~\ref{lem:armod} together with Proposition~\ref{pro:realIdeal}. \begin{corollary} \label{cor:2powerCoeffs} Let $n \in \mathbb N$, $q > 2$, $H \in \mathcal H_n(q)$, and write $\operatorname{Char}_{H}(x)=\sum_{i=0}^n a_ix^{n-i}$. \begin{itemize} \item Suppose $q$ is not a power of $2$. Then $a_i \in \rho^{\lceil (i-1)/2 \rceil}\mathbb Z[\zeta+\zeta^{-1}]$ for all $i$. \item Suppose $q$ is a power of $2$. If $n$ is even then $a_i \in \rho^{\lceil i/2 \rceil}\mathbb Z[\zeta+\zeta^{-1}]$ for all $i$, otherwise $a_i \in \rho^{\lfloor i/2 \rfloor}\mathbb Z[\zeta+\zeta^{-1}]$ for all $i$. 
\end{itemize}
\end{corollary}
Now we are ready to prove parts (c) and (d) of Theorem~\ref{thm:mainBounds}.
\begin{proof}[Proof of Theorem~\ref{thm:mainBounds} (c)]
Let $H \in \mathcal H_n(p^f)$ with $p$ an odd prime, $f \geqslant 1$, and write $\operatorname{Char}_{H}(x)=\sum_{i=0}^n a_ix^{n-i}$. Since $p$ is odd, every diagonal entry of $H$ equals $1$ and so, by Remark~\ref{rem:a0a1a2}, $a_0 = 1$, $a_1 = -n$, and $a_2 = 0$. By Corollary~\ref{cor:2powerCoeffs} and Remark~\ref{rem:countResidues}, there are at most $p^{e-\lceil (i-1)/2 \rceil}$ possible residues for $a_i$ modulo $\rho^e \mathbb Z[\zeta+\zeta^{-1}]$, where $i \in \{3,\dots, 2e-1\}$ and just one possible residue for $a_i$ modulo $\rho^e \mathbb Z[\zeta+\zeta^{-1}]$ for $i \geqslant 2e$. Thus there are at most
\[
\prod_{i=3}^{2e-1}p^{e-\lceil (i-1)/2 \rceil} = p^{(e-1)^2}
\]
possible residue classes for $\operatorname{Char}_H(x)$ modulo $(\rho^e \mathbb Z[\zeta+\zeta^{-1}])[x]$.
\end{proof}
The proof of Theorem~\ref{thm:mainBounds} (d) is similar to the proof of Theorem~\ref{thm:mainBounds} (c). The difference is that, for $q$ an odd prime power, the trace of $H \in \mathcal H_n(q)$ is forced to equal $n$, which gives $a_1=-n$, whereas for $q$ a power of $2$ the diagonal entries of $H$ may be $\pm 1$, so $a_1$ is no longer determined.
\begin{proof}[Proof of Theorem~\ref{thm:mainBounds} (d)]
Let $H \in \mathcal H_n(2^f)$ where $n$ is even and $f > 1$ and write $\operatorname{Char}_{H}(x)=\sum_{i=0}^n a_ix^{n-i}$. By Remark~\ref{rem:a0a1a2}, $a_0 = 1$ and $a_2$ depends on $a_1$. Since $n$ is even, $a_1$ must also be an even integer. Using Remark~\ref{rem:rationalElements}, there are at most $2^{\lceil 2^{2-f}e \rceil - 1}$ possible residues for $a_1$ modulo $\rho^e \mathbb Z[\zeta+\zeta^{-1}]$. By Corollary~\ref{cor:2powerCoeffs} and Remark~\ref{rem:countResidues}, there are at most $2^{e-\lceil i/2 \rceil}$ possible residues for $a_i$ modulo $\rho^e \mathbb Z[\zeta+\zeta^{-1}]$, where $i \in \{3,\dots, 2e-1\}$ and just one possible residue for $a_i$ modulo $\rho^e \mathbb Z[\zeta+\zeta^{-1}]$ for $i \geqslant 2e$. Thus there are at most
\[
2^{\lceil 2^{2-f}e \rceil - 1}\prod_{i=3}^{2e-1}2^{e-\lceil i/2 \rceil} = 2^{(e-1)(e-2)+\lceil 2^{2-f}e \rceil - 1}
\]
possible residue classes for $\operatorname{Char}_H(x)$ modulo $(\rho^e \mathbb Z[\zeta+\zeta^{-1}])[x]$.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:mainBounds2} (c) and (d)]
Note that $|\mathcal X_n^\prime(q,e)|$ is equal to the number of congruence classes of $\operatorname{Char}_H(x)$ modulo $\rho^e\mathbb Z[\zeta+\zeta^{-1}][x]$ where $H \in \mathcal H_n(q)$ and the diagonal entries of $H$ are all equal to $1$. This means that the top three coefficients of $\operatorname{Char}_{H}(x)=\sum_{i=0}^n a_ix^{n-i}$ are $a_0 = 1$, $a_1 = -n$, and $a_2=0$. From this point, the proof is similar to the proof of the corresponding part of Theorem~\ref{thm:mainBounds}.
\end{proof}
\section{Residue graphs, Euler graphs, and Seidel switching}
\label{sec:euler}
In this section, we develop the final tools required to prove part (e) of Theorem~\ref{thm:mainBounds} and Theorem~\ref{thm:mainBounds2}; the section culminates with those proofs. Accordingly, fix $q$ to be a power of $2$. Let $H \in \mathcal H_n(q)$ and let $A = (J-H)/(1-\zeta)$. A graph $\Gamma$ is called an \textbf{Euler graph} if the degree of each of its vertices is even. The \textbf{residue graph} $\Gamma^{\mathrm{res}}(H)$ of $H$ is defined to have vertex set $\{1,\dots,n\}$ where $i$ is adjacent to $j$ if and only if $A_{i,j} \not \in (1-\zeta)\mathbb Z[\zeta]$.
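As a small illustration, suppose that $q = 4$, so that $\zeta = i$: the entries of $A$ lie in $\{0,1,1+i,i\}$, and of these only $0$ and $1+i = i(1-i)$ belong to $(1-\zeta)\mathbb Z[\zeta]$. Hence, in this case, two vertices are adjacent in $\Gamma^{\mathrm{res}}(H)$ precisely when the corresponding entry of $H$ is $\zeta$ or $\zeta^{3}$.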
It is easy to see that, for all $i$ and $j$, we have $A_{i,j} \in (1-\zeta)\mathbb Z[\zeta]$ if and only if $A_{j,i} \in (1-\zeta)\mathbb Z[\zeta]$, thus the adjacency of $\Gamma^{\mathrm{res}}(H)$ is well-defined. Note that the residue graph $\Gamma^{\mathrm{res}}(H)$ is an Euler graph if and only if the sum of each row of $A$ is in $(1-\zeta) \mathbb Z[\zeta]$. Furthermore, note that if $q > 2$ then $\Gamma^{\mathrm{res}}(H)$ is a simple graph (this follows from Proposition~\ref{pro:idealprelim} (f)). Define $\operatorname{Diag}_n(\mathcal C_q)$ to be the set of diagonal $\mathcal C_q$-matrices of order $n$. The \textbf{switching class} $\operatorname{sw}(H)$ of $H$ is defined to be the following set of residue graphs
\[
\operatorname{sw}(H) := \left \{ \Gamma^{\mathrm{res}} \left ( D HD^{-1}\right ) \; | \; D \in \operatorname{Diag}_n(\mathcal C_q) \right \}.
\]
Let $\Gamma = \Gamma^{\mathrm{res}} (H)$ with vertex set $V$, which is partitioned into two subsets $U_1$ and $U_2$. \textbf{Seidel switching} with respect to $\{U_1, U_2\}$ is the operation on $\Gamma$ that leaves $V$ and the subgraphs induced by $U_1$ and $U_2$ unchanged, but deletes all edges between $U_1$ and $U_2$, and inserts all edges between $U_1$ and $U_2$ that were not present in $\Gamma$. Observe that by Seidel switching $\Gamma$ with respect to $\{U_1, U_2\}$, we obtain the residue graph of $DHD^{-1}$ where each entry of $D \in \operatorname{Diag}_n(\mathcal C_q)$ corresponding to $U_1$ is equal to $\zeta$ and equal to $1$ otherwise. Furthermore, as a consequence of Proposition~\ref{pro:idealprelim} (g), each graph in $\operatorname{sw}(H)$ can be obtained by performing some Seidel switching on the graph $\Gamma^{\mathrm{res}} (H)$. We begin with a result about the presence of an Euler graph in the switching class of $H$. The following proposition is a generalisation of Seidel's corresponding result for $2$-Seidel matrices~\cite{Seidelnodd}. We omit the proof since it is essentially the same as the proof of \cite[Theorem 1]{hage03}.
\begin{proposition}
\label{pro:uniqueEuler}
Let $n \in \mathbb N$ be odd, $q > 2$ be a power of $2$, and $H \in \mathcal H_n(q)$. Then $\operatorname{sw}(H)$ contains precisely one Euler graph.
\end{proposition}
In Lemma~\ref{lem:matdet}, we see summands of the form $\mathbf 1^\top A^k \mathbf 1$. When the residue graph of $H$ is an Euler graph we can obtain restrictions on these summands even stronger than those of Lemma~\ref{lem:2sumApower}, as shown in the next result.
\begin{lemma}
\label{lem:sumEulerpower}
Let $n \in \mathbb N$, $q>2$ be a power of $2$, $H \in \mathcal H_n(q)$, and $A = (J-H)/(1-\zeta)$. Suppose that $\Gamma^{\mathrm{res}}(H)$ is an Euler graph. Then $\mathbf 1^\top A \mathbf 1 \in (1-\zeta) \mathbb Z[\zeta]$ and $\mathbf 1^\top A^k \mathbf 1 \in (1-\zeta)^2 \mathbb Z[\zeta]$ for all $k \geqslant 2$.
\end{lemma}
\begin{proof}
Since $\Gamma^{\mathrm{res}}(H)$ is an Euler graph, we can write $A\mathbf 1=(1-\zeta)\mathbf v$ for some $\mathbf v \in \mathbb Z[\zeta]^n$. Thus, for $k \geqslant 2$, we have $ \mathbf 1^\top A^k \mathbf 1= (1-\zeta^{-1})(1-\zeta) \mathbf v^\top A^{k-2} \mathbf v$, which is in $(1-\zeta)^2 \mathbb Z[\zeta]$.
\end{proof}
In Corollary~\ref{cor:mainHSgen}, we see a term of the form $\mathbf 1^\top (A \circ A^\top)^k \mathbf 1$. Next, we show that this term is in $2(1-\zeta)\mathbb Z[\zeta]$ when the residue graph of $H$ is an Euler graph.
\begin{lemma} \label{lem:need2prove} Let $n \in \mathbb N$, $q > 2$ be a power of $2$, $H \in \mathcal H_n(q)$, and $A = (J-H)/(1-\zeta)$. Suppose that $\Gamma^{\mathrm{res}}(H)$ is an Euler graph. Then $\mathbf 1^\top (A \circ A^\top)^k \mathbf 1 \in 2(1-\zeta) \mathbb Z[\zeta]$ for all $k \geqslant 2$. \end{lemma} \begin{proof} Note that $\mathbf 1^\top (A \circ A^\top)^k \mathbf 1$ is the sum of $\mathrm{wt}(\mathbf w)\mathrm{wt}(\mathbf w^\top)$ of each walk $\mathbf w$ of $\Gamma(H)$ of length $k$. Let $M$ be the $\{0,1\}$-adjacency matrix of $\Gamma^{\mathrm{res}}(H)$. For each $i,j \in \{1,\dots,n\}$, by Proposition~\ref{pro:idealprelim} (g), we have $A_{i,j} - M_{i,j} \in (1-\zeta)\mathbb Z[\zeta]$. Hence, we have \begin{equation} \label{eqn:AtoM} ((A \circ A^\top)^k)_{i,j} - (M^k)_{i,j} \in (1-\zeta)\mathbb Z[\zeta]. \end{equation} By Lemma~\ref{lem:trCong}, there exists $U \subset W_k^\prime(\Gamma(H))$ such that $$ \operatorname{tr}( (A \circ A^\top)^k) - 2\sum_{\mathbf w \in U} \mathrm{wt}(\mathbf w)\mathrm{wt}(\mathbf w^\top) \in 2(1-\zeta)\mathbb Z[\zeta]. $$ Furthermore, using \eqref{eqn:AtoM}, we have \begin{equation} \label{eqn:trAtotrM} \frac{\operatorname{tr}( (A \circ A^\top)^k)}{2} - \frac{\operatorname{tr}(M^k)}{2} \in (1-\zeta)\mathbb Z[\zeta]. \end{equation} Since $ (A \circ A^\top)^k$ is symmetric, using \eqref{eqn:AtoM} together with \eqref{eqn:trAtotrM}, we find that $$\frac{\mathbf 1^\top (A \circ A^\top)^k \mathbf 1}{2} - \frac{\mathbf 1^\top M^k \mathbf 1}{2} \in (1-\zeta)\mathbb Z[\zeta].$$ The result then follows from \cite[Lemma 2.6]{GreavesYatsyna19} since $\Gamma^{\mathrm{res}}(H)$ is an Euler graph. \end{proof} Recall from Section~\ref{sec:graphs} the definition of $\mathfrak W_N(H)$: \[ \mathfrak W_N(H):= \left \{ \mathbf w \in \operatorname{fix}_{\Gamma(H)}(rs) \; \mid \; \frac{(1-\zeta)^2}{4}\operatorname{wt}(\mathbf w) \not \in (1-\zeta)\mathbb Z[\zeta] \right \}, \] where $r$ and $s$ are the generators of $D_N = \langle r, s \; |\; r^N, s^2, (rs)^2 \rangle$. Before we establish our next main tool (Lemma~\ref{lem:trCongBurnside}, below) we need one more ingredient. \begin{lemma} \label{lem:EulerW} Let $n \in \mathbb N$, $q > 2$ be a power of $2$, $N \geqslant 4$ be an even integer, $H \in \mathcal H_n(q)$, and $A = (J-H)/(1-\zeta)$. Suppose that $\Gamma^{\mathrm{res}}(H)$ is an Euler graph. Then $|\mathfrak W_N(H)|$ is even. \end{lemma} \begin{proof} Let $\mathbf w = (w_0,w_1,\dots,w_N) \in \mathfrak W_N(H)$. Then $w_0 = w_1$, $w_{N/2}=w_{N/2+1}$, and furthermore $w_{N-i}=w_{i+1}$ for all $i \in \{0,\dots,N/2\}$. Therefore, $\mathbf w = (w_0,w_0)\mathbf x \mathbf x^\top (w_{N/2},w_{N/2})$, where $\mathbf x$ is a walk of length $N/2-1$ from $w_0$ to $w_{N/2}$ and concatenation of walks is as defined in the proof of Lemma~\ref{lem:evenPalin}. Since $\mathrm{wt}(\mathbf w) = 4\mathrm{wt}(\mathbf x)\mathrm{wt}(\mathbf x^\top)/(1-\zeta)^2$ and $\mathbf w \in \mathfrak W_N(H)$, we must have $\mathrm{wt}(\mathbf x) \not \in (1-\zeta)\mathbb Z[\zeta]$. Hence each walk $\mathbf w = (w_0,w_1,\dots,w_{N/2},w_{N/2+1},\dots,w_N) \in \mathfrak W_N(H)$ corresponds to a walk $\mathbf x = (w_1,\dots,w_{N/2})$ of length $N/2-1$ in $\Gamma^{\mathrm{res}}(H)$, where both the vertices $w_0=w_1$ and $w_{N/2}=w_{N/2+1}$ have loops. Now, if $\mathbf x$ and $\mathbf x^\top$ are distinct, we can pair up $\mathbf x$ with $\mathbf x^\top$. 
It, therefore, remains to show that the number of palindromic walks of length $N/2-1$ in $\Gamma^{\mathrm{res}}(H)$ starting and ending at a vertex with a loop in $\Gamma(H)$ is even. In fact, we shall show that for all vertices $v$, the number of palindromic walks of length $N/2-1$ in $\Gamma^{\mathrm{res}}(H)$ starting and ending at $v$ is even. Let $P_{N/2-1}(v)$ be the set of palindromic walks in $\Gamma^{\mathrm{res}}(H)$ starting and ending at $v$. First, if $N/2-1$ is odd, then it is obvious that $P_{N/2-1}(v)$ is empty. Now suppose $k = N/2-1$ is even. Note that $|P_{2}(v)|$ is equal to the degree of the vertex $v$, which is even since $\Gamma^{\mathrm{res}}(H)$ is an Euler graph. It remains to assume that $k \geqslant 4$. Let $\mathbf p = (p_0,\dots,p_{k/2-1},p_{k/2},p_{k/2-1},\dots,p_0) \in P_{k}(v)$. Then the walk $(p_0,\dots,p_{k/2-1},\dots,p_0)$ obtained by removing the vertex $p_{k/2}$ and its incident edge to $p_{k/2-1}$ from the walk $\mathbf p$ is in $P_{k-2}(v)$. Thus, each walk in $P_{k}(v)$ can be obtained from a walk in $P_{k-2}(v)$. Furthermore, each walk in $P_{k-2}(v)$ can be extended to an even number of walks in $P_{k}(v)$. Indeed, since every vertex of $\Gamma^{\mathrm{res}}(H)$ has even degree, there are always an even number of choices for vertices $p_{k/2}$ adjacent to $p_{k/2-1}$ when extending the walk $(p_0,\dots,p_{k/2-1},\dots,p_0)$ to $(p_0,\dots,p_{k/2-1},p_{k/2},p_{k/2-1},\dots,p_0)$. This completes our proof. \end{proof} Now we can use Lemma~\ref{lem:need2prove} and Lemma~\ref{lem:EulerW} to obtain the following refinement of Corollary~\ref{cor:mainHSgen}. \begin{lemma} \label{lem:trCongBurnside} Let $n \in \mathbb N$, $q > 2$ be a power of $2$, $N \geqslant 3$ be an integer, $H \in \mathcal H_n(q)$, and $A = (J-H)/(1-\zeta)$. Suppose that $\Gamma^{\mathrm{res}}(H)$ is an Euler graph. Then $$\sum_{d \;|\; N} \phi(N/d) \operatorname{tr}((A^{\circ N/d})^d) \in (1-\zeta)N \mathbb Z[\zeta].$$ \end{lemma} \begin{proof} If $N$ is odd then the lemma is precisely Corollary~\ref{cor:mainHSgenOdd}. Otherwise, suppose that $N$ is even. By Corollary~\ref{cor:mainHSgen}, we have $$\sum_{d \;|\; N} \phi(N/d) \operatorname{tr}((A^{\circ N/d})^d) + N|\mathfrak W_N(H)| + \frac{N}{2}\mathbf 1^\top (A \circ A^\top)^{N/2} \mathbf 1 \in (1-\zeta)N \mathbb Z[\zeta].$$ The lemma then follows from Lemma~\ref{lem:need2prove} and Lemma~\ref{lem:EulerW} using Proposition~\ref{pro:idealprelim} (f). \end{proof} In preparation for an application of Lemma~\ref{lem:trCongBurnside}, we state and prove the following technical lemmas. The first of these is a slight strengthening of Remark~\ref{rem:weightsum}. \begin{lemma} \label{lem:wtwCong} Let $n, k \in \mathbb N$, $q > 2$ be a power of $2$, $k$ be odd, $H \in \mathcal H_n(q)$, $A = (J-H)/(1-\zeta)$ and $\mathbf w \in W_k^\prime(\Gamma)$. Then \[ \frac{\operatorname{wt}(\mathbf w) + \operatorname{wt}(\mathbf w^{\top})}{1-\zeta} - \operatorname{wt}(\mathbf w) \in (1-\zeta)\mathbb Z[\zeta]. \] \end{lemma} \begin{proof} Write $\mathbf w = (w_0,w_1,\dots,w_k)$ and $\operatorname{wt}(\mathbf w) = \prod_{i=1}^{k} \frac{1-\zeta^{e_i}}{1-\zeta}$. The lemma is trivial if any of the $e_i$ is equal to $0$. Without loss of generality, we can assume that $1 \leqslant e_i \leqslant q-1$ for each $i \in \{1,\dots,k\}$. Hence, $(1-\zeta^{e_i})/(1-\zeta) = 1+\zeta + \dots + \zeta^{e_i-1}$. Furthermore, \[ \operatorname{wt}(\mathbf w^\top) = \prod_{i=1}^{k} \frac{1-\zeta^{q-e_i}}{1-\zeta}= \prod_{i=1}^{k} (1+\zeta + \dots + \zeta^{q-e_i-1}). 
\] Next, for $e \in \mathbb N$, observe that \[ \zeta^e - e\zeta + e-1 \in (1-\zeta)^2\mathbb Z[\zeta]. \] It follows that \[ (1-\zeta^{e})/(1-\zeta) - \left (\binom{e}{2}\zeta -\binom{e-1}{2} +1 \right)\in (1-\zeta)^2\mathbb Z[\zeta]. \] By Proposition~\ref{pro:idealprelim} (f), we find that \[ \begin{cases} (1-\zeta^{e})/(1-\zeta) \in (1-\zeta)^2\mathbb Z[\zeta], & \text{ if $e \equiv 0 \pmod 4$}; \\ (1-\zeta^{e})/(1-\zeta) -1 \in (1-\zeta)^2\mathbb Z[\zeta], & \text{ if $e \equiv 1 \pmod 4$}; \\ (1-\zeta^{e})/(1-\zeta) - (1+\zeta) \in (1-\zeta)^2\mathbb Z[\zeta], & \text{ if $e \equiv 2 \pmod 4$}; \\ (1-\zeta^{e})/(1-\zeta) -\zeta \in (1-\zeta)^2\mathbb Z[\zeta], & \text{ if $e \equiv 3 \pmod 4$}. \\ \end{cases} \] If $e_i \equiv 0 \pmod 4$ for some $i \in \{1,\dots,k\}$ then both $\operatorname{wt}(\mathbf w) + \operatorname{wt}(\mathbf w^\top) \in (1-\zeta)^2\mathbb Z[\zeta]$ and $\operatorname{wt}(\mathbf w) \in (1-\zeta)^2\mathbb Z[\zeta]$. Therefore, it remains to consider the case when $e_i \not \equiv 0 \pmod 4$ for all $i \in \{1,\dots,k\}$. In this case, we have \[ \operatorname{wt}(\mathbf w) - 1^a(1+\zeta)^b\zeta^c \in (1-\zeta)^2\mathbb Z[\zeta], \] for some $a,b,c \in \mathbb N \cup \{0\}$. At the same time, we have \[ \operatorname{wt}(\mathbf w^\top) - 1^c(1+\zeta)^b\zeta^a \in (1-\zeta)^2\mathbb Z[\zeta]. \] Hence, \[ \operatorname{wt}(\mathbf w) + \operatorname{wt}(\mathbf w^\top) - (1+\zeta)^b(\zeta^c+\zeta^a) \in (1-\zeta)^2\mathbb Z[\zeta]. \] Using Proposition~\ref{pro:idealprelim} (g), we see that, if $b \geqslant 1$ then both $\operatorname{wt}(\mathbf w) + \operatorname{wt}(\mathbf w^\top) \in (1-\zeta)^2\mathbb Z[\zeta]$ and $\operatorname{wt}(\mathbf w) \in (1-\zeta)^2\mathbb Z[\zeta]$. Lastly, we can assume that $b = 0$. It suffices to show that $(\zeta^c+\zeta^a) - (1-\zeta)\zeta^c \in (1-\zeta)^2\mathbb Z[\zeta]$. This follows since $a+c = k$, which is odd. \end{proof} The second technical lemma is a relation for certain traces. \begin{lemma} \label{lem:keyLemTrCong} Let $n, k \in \mathbb N$, $q > 2$ be a power of $2$, $k \geqslant 3$ be odd, $H \in \mathcal H_n(q)$, and $A = (J-H)/(1-\zeta)$. Then \[ \operatorname{tr} ((A^{\circ 2})^k) - \operatorname{tr} (A^k)^2+2(1-\zeta)^{-1} \operatorname{tr} (A^k) \in 2(1-\zeta)\mathbb Z[\zeta]. \] \end{lemma} \begin{proof} Let $\Gamma = \Gamma(H)$ be the underlying graph of $H$ and let $W =W_k(\Gamma)$ be the set of closed $k$-walks in $\Gamma$. First, observe that $\operatorname{tr} ((A^{\circ 2})^k) = \sum_{\mathbf w \in W} \operatorname{wt}(\mathbf w)^2$ and $\operatorname{tr} (A^{k}) = \sum_{\mathbf w \in W} \operatorname{wt}(\mathbf w)$. Therefore, we have \begin{equation} \label{eqn:NI} \operatorname{tr} ((A^{\circ 2})^k) = \operatorname{tr} (A^{k})^2-2\sum_{\{\mathbf w_1,\mathbf w_2\} \in \binom{W}{2}} \operatorname{wt}(\mathbf w_1)\operatorname{wt}(\mathbf w_2). \end{equation} Next, by Proposition~\ref{pro:Idealcor}, for each $\mathbf w \in W$, either $\operatorname{wt}(\mathbf w) \in (1-\zeta)\mathbb Z[\zeta]$ or $\operatorname{wt}(\mathbf w)-1 \in (1-\zeta)\mathbb Z[\zeta]$. Therefore \[ \sum_{\{\mathbf w_1,\mathbf w_2\} \in \binom{W}{2}} \operatorname{wt}(\mathbf w_1)\operatorname{wt}(\mathbf w_2) - \binom{t}{2} \in (1-\zeta)\mathbb Z[\zeta], \] where $t$ is the number of walks $\mathbf w \in W$ satisfying $\operatorname{wt}(\mathbf w)-1 \in (1-\zeta)\mathbb Z[\zeta]$. Thus, in view of \eqref{eqn:NI}, it remains to show that $(1-\zeta)^{-1} \operatorname{tr} (A^k)- \binom{t}{2} \in (1-\zeta)\mathbb Z[\zeta]$.
Let $T$ be the set of $\mathbf w \in W$ satisfying $\operatorname{wt}(\mathbf w)-1 \in (1-\zeta)\mathbb Z[\zeta]$. Then $|T| = t$. Furthermore, by Remark~\ref{rem:nonsimple}, we must have $T \subset W_k^\prime(\Gamma)$. Since $k$ is odd, for each $\mathbf w \in T$, the reverse walk $\mathbf w^{\top}$ is distinct from $\mathbf w$ and is also in $T$. Thus $t$ must be even and, moreover, $\binom{t}{2} - \frac{t}{2} \in (1-\zeta)\mathbb Z[\zeta]$. By Lemma~\ref{lem:trCong}, there exists a subset $U \subset W_k^\prime(\Gamma) \subset W$ such that \begin{equation} \label{eqn:1-ztr1} \operatorname{tr} (A^k) - \sum_{\mathbf w \in U}\left ( \operatorname{wt}(\mathbf w) + \operatorname{wt}(\mathbf w^{\top}) \right ) \in (1-\zeta)^2\mathbb Z[\zeta]. \end{equation} By Remark~\ref{rem:weightsum}, for each $\mathbf w \in U$, we have $\operatorname{wt}(\mathbf w) + \operatorname{wt}(\mathbf w^{\top}) \in (1-\zeta)\mathbb Z[\zeta]$. Furthermore, by Lemma~\ref{lem:wtwCong}, we can write \begin{equation} \label{eqn:1-ztr2} \frac{\operatorname{wt}(\mathbf w) + \operatorname{wt}(\mathbf w^{\top})}{1-\zeta} - \operatorname{wt}(\mathbf w) \in (1-\zeta)\mathbb Z[\zeta]. \end{equation} Combining \eqref{eqn:1-ztr1} and \eqref{eqn:1-ztr2} yields \[ (1-\zeta)^{-1} \operatorname{tr} (A^k)- \sum_{\mathbf w \in U}\operatorname{wt}(\mathbf w) \in (1-\zeta)\mathbb Z[\zeta]. \] Thus, since $|U\cap T| = t/2$, we obtain $(1-\zeta)^{-1} \operatorname{tr} (A^k)- \frac{t}{2} \in (1-\zeta)\mathbb Z[\zeta]$, as required. \end{proof} Now we are ready to establish a lemma, which acts as a key tool in the proof of the last part of the main theorems. \begin{lemma} \label{lem:quadtracCong} Let $n \in \mathbb N$, $q > 2$ be a power of $2$, $H \in \mathcal H_n(q)$, $A = (J-H)/(1-\zeta)$, and let $N>2$ be an integer congruent to $2$ modulo $4$. Suppose that $\Gamma^{\mathrm{res}}(H)$ is an Euler graph. Then $$\operatorname{tr} (A^{N/2})^2-2(1-\zeta)^{-1} \operatorname{tr} (A^{N/2}) + \operatorname{tr}(A^N) \in 2(1-\zeta) \mathbb Z[\zeta].$$ \end{lemma} \begin{proof} By Lemma~\ref{lem:keyLemTrCong}, using $k = N/2$, it suffices to show that \[ \operatorname{tr}(A^N) + \operatorname{tr} ((A^{\circ 2})^{N/2}) \in 2(1-\zeta) \mathbb Z[\zeta]. \] By Lemma~\ref{lem:trCongBurnside}, we have \begin{align} \label{eqn:sumtr} \sum_{d \;|\; N} \phi(N/d) \operatorname{tr}((A^{\circ N/d})^d) \in 2(1-\zeta) \mathbb Z[\zeta]. \end{align} Except for $d = N$ and $d = N/2$, for each divisor $d$ of $N$, we have that $\phi(N/d)$ is even. Observe that \[ \operatorname{tr}((A^{\circ k})^d) - \operatorname{tr}(A^d) \in (1-\zeta) \mathbb Z[\zeta] \] for all $k$ and $d$ in $\mathbb N$. Indeed, $\operatorname{tr}(A^d)$ is the sum of the weights of the walks in $W_d(\Gamma(H))$ and $\operatorname{tr}((A^{\circ k})^d)$ is the sum of their $k$th powers. For each $\mathbf w \in W_d(\Gamma(H))$, we have $\operatorname{wt}(\mathbf w)^k - \operatorname{wt}(\mathbf w) \in (1-\zeta) \mathbb Z[\zeta]$. Hence, using Lemma~\ref{lem:trCong} together with \eqref{eqn:sumtr}, it follows that \[ \operatorname{tr}(A^N) + \operatorname{tr} ((A^{\circ 2})^{N/2}) \in 2(1-\zeta) \mathbb Z[\zeta], \] as required. \end{proof} For $H \in \mathcal H_n(q)$ and $A = (J-H)/(1-\zeta)$, we have the following relation between coefficients of the characteristic polynomials of $H$ and $A$. \begin{lemma} \label{lem:a_kAndb_k} Let $n \in \mathbb N$ be odd, $q > 2$ be a power of $2$, $H \in \mathcal H_n(q)$, and $A = (J-H)/(1-\zeta)$. Suppose that $\Gamma^{\mathrm{res}}(H)$ is an Euler graph. 
Write $\operatorname{Char}_{H}(x)=\sum_{i=0}^n a_i x^{n-i}$ and $\operatorname{Char}_A(x)=\sum_{i=0}^n b_i x^{n-i}$. Then for all $j \in \{1,\dots ,\frac{n-1}{2} \} $, we have $$ b_{2j} - \frac{a_{2j+1}}{(1-\zeta)^{2j}} \in (1-\zeta)^2 \mathbb Z[\zeta]; $$ $$ b_{2j-1}- \frac{a_{2j+1}+a_{2j} + a_{2j-1}(a_3 + a_2 - a_1 - n)}{(1-\zeta)^{2j-1}} \in (1-\zeta)^2\mathbb Z[\zeta]. $$ \end{lemma} \begin{proof} By Lemma~\ref{lem:matdet}, we can write \begin{equation*} \frac{a_{2j+1}}{(1-\zeta)^{2j}}=(\zeta-1)b_{2j+1} - nb_{2j} - \sum_{i=2}^{2j+1} b_{2j+1-i} \mathbf{1}^\top A^{i-1}\mathbf{1}. \end{equation*} By Corollary~\ref{cor:oddbi}, we know that $b_{2i+1} \in (1-\zeta)\mathbb Z[\zeta]$ for all $i \in \{1,\dots,(n-1)/2\}$. By Lemma~\ref{lem:sumEulerpower}, we have $\mathbf{1}^\top A\mathbf{1} \in (1-\zeta)\mathbb Z[\zeta]$ and $\mathbf{1}^\top A^i\mathbf{1} \in (1-\zeta)^2\mathbb Z[\zeta]$ for all $i \geqslant 2$. Putting the above together with the fact that $2\mathbb Z[\zeta] \subset (1-\zeta)^2\mathbb Z[\zeta]$ (see Proposition~\ref{pro:idealprelim} (f)) yields \begin{equation} \label{eqn1} b_{2j} - \frac{a_{2j+1}}{(1-\zeta)^{2j}} \in (1- \zeta)^2 \mathbb Z[\zeta]. \end{equation} Next, using Lemma~\ref{lem:matdet}, we have $a_2=(1-\zeta)^2 b_2 + (1-\zeta)b_1 n + (1- \zeta)b_0 \mathbf 1^\top A \mathbf 1$ and $$a_3=(\zeta-1)^3 b_3 - (1-\zeta)^2 b_2 n - (1- \zeta)^2 b_1\mathbf 1^\top A \mathbf 1 - (1- \zeta)^2 b_0 \mathbf 1^\top A^2 \mathbf 1.$$ Since $q>2$, each diagonal entry of $A$ belongs to $(1-\zeta)\mathbb Z[\zeta]$, and hence $\operatorname{tr} (A) = -b_1 \in (1-\zeta)\mathbb Z[\zeta]$. By Lemma~\ref{lem:armod}, we have $a_2 \in (1-\zeta)^2\mathbb Z[\zeta]$. Thus, using Lemma~\ref{lem:sumEulerpower} and Proposition~\ref{pro:idealprelim} (f), we can write \begin{equation} \frac{a_3+a_2}{1-\zeta} -(b_1 + \mathbf 1^\top A \mathbf 1) \in (1-\zeta)^2 \mathbb Z[\zeta]. \label{eqn:fI} \end{equation} Using \eqref{eqn1}, we have \begin{equation} b_{2j-2} - \frac{a_{2j-1}}{(1-\zeta)^{2j-2}} \in (1-\zeta)^2 \mathbb Z[\zeta]. \label{eqn:fII} \end{equation} Take the trace of $A = (J-H)/(1-\zeta)$ to obtain the identity $b_1 = (n+a_1)/(\zeta-1)$. Combining this identity with \eqref{eqn:fI} and \eqref{eqn:fII}, we obtain \begin{equation} \label{eqn:fIII} \frac{(a_2+a_3)a_{2j-1}}{(1-\zeta)^{2j-1}} - b_{2j-2} \left (\mathbf 1^\top A \mathbf 1 - \frac{a_1+n}{1-\zeta}\right ) \in (1-\zeta)^2 \mathbb Z[\zeta]. \end{equation} Using Lemma~\ref{lem:matdet} and Lemma~\ref{lem:sumEulerpower}, we can write \begin{equation*} \frac{a_{2j+1}+a_{2j}}{(1-\zeta)^{2j-1}} - \left ( (1-n)b_{2j} - nb_{2j-1}-b_{2j-2}\mathbf{1}^\top A\mathbf{1} \right ) \in (1-\zeta)^2\mathbb Z[\zeta]. \end{equation*} Apply Proposition~\ref{pro:idealprelim} (f) to obtain \begin{equation*} \frac{a_{2j+1}+a_{2j}}{(1-\zeta)^{2j-1}} - \left ( b_{2j-1}-b_{2j-2}\mathbf{1}^\top A\mathbf{1} \right ) \in (1-\zeta)^2\mathbb Z[\zeta]. \end{equation*} Combining this with \eqref{eqn:fII} and \eqref{eqn:fIII} yields \begin{equation*} b_{2j-1} - \frac{a_{2j+1}+a_{2j}+a_{2j-1}(a_3+a_2-a_1-n)} {(1-\zeta)^{2j-1}} \in (1-\zeta)^2\mathbb Z[\zeta]. \qedhere \end{equation*} \end{proof} For a positive integer $a$, the $2$\textbf{-adic valuation} $\nu_2(a)$ of $a$ is the multiplicity of $2$ in the prime factorisation of $a$. The $2$-adic valuation of $0$ is defined to be $\infty$. For an integer $b$ relatively prime to $a$, the $2$-adic valuation $\nu_2(a/b)$ of $a/b$ is defined as $-\nu_2(b)$ if $b$ is even and $\nu_2(a)$ otherwise.
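For readers who wish to experiment with the quantities defined above, the following short Python sketch (purely illustrative, and not part of any proof in this paper) computes the $2$-adic valuation of a nonzero rational number exactly as just defined; the function name \texttt{nu2} is ours.
\begin{verbatim}
from fractions import Fraction

def nu2(r):
    # 2-adic valuation of a rational number, following the definition above:
    # for a nonzero integer it is the multiplicity of 2 in its prime
    # factorisation; for a reduced fraction a/b it equals -nu2(b) when b is
    # even and nu2(a) otherwise; nu2(0) is taken to be infinity.
    r = Fraction(r)
    if r == 0:
        return float('inf')
    num, den, v = abs(r.numerator), r.denominator, 0
    while num % 2 == 0:
        num //= 2
        v += 1
    while den % 2 == 0:
        den //= 2
        v -= 1
    return v

# Examples: nu2(12) = 2, nu2(5) = 0, nu2(3/8) = -3.
assert nu2(12) == 2 and nu2(5) == 0 and nu2(Fraction(3, 8)) == -3
\end{verbatim}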
We now state a couple of lemmas about the $2$-adic valuation. \begin{lemma}[{\cite[Lemma 3.11]{GreavesYatsyna19}}] \label{lem:2valbound} Let $l$ be a positive integer and let $m_1,m_2,\dots , m_l$ be nonnegative integers having a positive sum. Let $m \in \{m_i\ |\ m_i \neq 0 \}$. Then $$\nu_2 \left( \frac{(m_1+m_2+\dots +m_l-1)!}{m_1!m_2!\dots m_l!} \right) \geqslant -\nu_2(m).$$ \end{lemma} \begin{lemma} \label{lem:2adicdiff} Let $l$ be a positive integer and let $m_1,m_2,\dots, m_l$ be nonnegative integers having a positive sum $m$. Then $$ \nu_2 \left( \frac{m!}{m_1!m_2!\dots m_l!} \right)=\nu_2 \left( \frac{(2m)!}{(2m_1)!(2m_2)!\dots (2m_l)!} \right).$$ \end{lemma} \begin{proof} Set \[ D = \nu_2 \left( \frac{m!}{m_1!m_2!\dots m_l!} \right)-\nu_2 \left( \frac{(2m)!} {(2m_1)!(2m_2)!\dots (2m_l)!} \right). \] Using Legendre's formula~\cite[Page 77]{Moll12}, we have \begin{align*} D &= \nu_2 (m!)- \nu_2 ((2m)!) + \sum_{i=1}^l \left( \nu_2 ((2m_i)!)- \nu_2 ((m_i)!) \right)\\ &= \sum_{j=1}^\infty \left( \left\lfloor \frac{m}{2^j} \right \rfloor - \left\lfloor \frac{2m}{2^j} \right\rfloor \right) + \sum_{i=1}^l \sum_{j=1}^\infty \left( \left \lfloor \frac{2m_i}{2^j} \right\rfloor - \left\lfloor \frac{m_i}{2^j} \right\rfloor \right) \\ &= -m + \sum_{i=1}^l \left (m_i + \sum_{j=1}^ \infty \left( \left \lfloor \frac{2m_i}{2^{j+1}} \right\rfloor - \left\lfloor \frac{m_i}{2^j} \right\rfloor \right) \right ) = -m + m = 0. \qedhere \end{align*} \end{proof} Define the set $X(d)$ as the set of nonnegative integral vectors $\mathbf x = (x_1,\dots,x_d) \in \mathbb Z_{\geqslant 0}^d$ satisfying the constraint \[ x_1 + 2x_2 + \dots + dx_d = d. \] For each $\mathbf x \in X(d)$, define the coefficient \[ c(\mathbf x) := \frac{(x_1+x_2+\dots +x_{d}-1)!}{x_1! x_2!\dots x_{d}!}. \] \begin{corollary} \label{cor:2adicdiff} Suppose $k \in \mathbb N$ and $\mathbf x \in X(2k-1)$. Then \[ \nu_2 \left( c(2\mathbf x) + \frac{(2k-1)}{2}c(\mathbf x)^2 \right ) \geqslant 0. \] \end{corollary} \begin{proof} Let $\mathbf x = (x_1,\dots,x_{2k-1})$. Since $2k-1$ is odd, there must be some $i$ for which $x_i$ is odd; set $y = x_i$. Let $s = x_1 + \dots + x_{2k-1} - y$. Note that $$c(2\mathbf x) = \frac{1}{2y}\frac{(2s+2y-1)!}{(2s)!(2y-1)!}\frac{(2s)!(2y)!}{(2x_1)!(2x_2)!\dots (2x_{2k-1})!}$$ and $$c(\mathbf x) = \frac{1}{y}\frac{(s+y-1)!}{s!(y-1)!}\frac{s!y!}{x_1!x_2!\dots x_{2k-1}!}.$$ Since the other factors are multinomial coefficients, we see that $c(2\mathbf x) \in \frac{1}{2y}\mathbb Z$ and $c(\mathbf x) \in \frac{1}{y}\mathbb Z$. By Lemma~\ref{lem:2adicdiff}, we have \begin{equation} \label{eqn:c1} \nu_2 \left( \frac{(2s)!(2y)!}{(2x_1)!(2x_2)!\dots (2x_{2k-1})!} \right ) = \nu_2 \left( \frac{s!y!}{x_1!x_2!\dots x_{2k-1}!} \right ) \geqslant 0. \end{equation} Furthermore, \begin{align*} \frac{(s+y-1)!}{s!(y-1)!} \frac{s+y}{y} &= \frac{(s+y)!}{s!y!} \\ \frac{(2s+2y-1)!}{(2s)!(2y-1)!} \frac{s+y}{y} &= \frac{(2s+2y)!}{(2s)!(2y)!}. \end{align*} Therefore, by Lemma~\ref{lem:2adicdiff}, we also have \begin{equation} \label{eqn:c2} \nu_2 \left( \frac{(s+y-1)!}{s!(y-1)!} \right ) = \nu_2 \left( \frac{(2s+2y-1)!}{(2s)!(2y-1)!} \right ) \geqslant 0. \end{equation} First suppose that $\nu_2(c(\mathbf x)) \geqslant 1$. Using \eqref{eqn:c1} and \eqref{eqn:c2}, it follows that $\nu_2 \left (c(2\mathbf x) \right ) \geqslant 0$ and hence $\nu_2 \left( c(2\mathbf x) + \frac{(2k-1)}{2}c(\mathbf x)^2 \right ) \geqslant 0$. Finally, suppose that $\nu_2(c(\mathbf x)) = 0$.
Using \eqref{eqn:c1} and \eqref{eqn:c2}, it follows that $\nu_2 \left (c(2\mathbf x) \right ) = \nu_2 \left( \frac{(2k-1)}{2}c(\mathbf x)^2 \right )$ and hence we obtain the statement of the corollary. \end{proof} Define $X^\prime(d) := \left \{ \mathbf x \in X(d) \; | \; x_{d} = 0 \text{ and } x_i = 0 \text{ for all odd $i$} \right \}$. \begin{theorem} \label{thm:a_4k-1} Let $n \in \mathbb N$, $q > 2$ be a power of $2$, and $H \in \mathcal H_n(q)$. Suppose that $\Gamma^{\mathrm{res}}(H)$ is an Euler graph. Write $\operatorname{Char}_{H}(x)=\sum_{i=0}^n a_ix^{n-i}$. Then, for $k \in \{2,\dots ,\lfloor (n+1)/4 \rfloor\}$, we have \[ \frac{a_{4k-1}}{(1-\zeta)^{4k-2}} - (2k-1)\left(\sum_{\mathbf x \in X^\prime(4k-2)} c(\mathbf x) P_{4k-2}(\mathbf x) + \sum_{\mathbf x \in X(2k-1)} \frac{c(\mathbf x)}{1-\zeta} P_{2k-1}(\mathbf x) \right ) \in (1-\zeta) \mathbb Z[\zeta] \] where, for $d \in \mathbb N$ and $\mathbf x \in X(d)$, $$ P_d(\mathbf x) := \prod_{i=1}^{\lfloor d/2 \rfloor} \left( \frac{a_{2i+1}}{(1-\zeta)^{2i}} \right) ^{x_{2i}} \prod_{i=1}^{\lceil d/2 \rceil} \left( \frac{a_{2i+1}+a_{2i}+a_{2i-1}(a_3 + a_2 + a_1 + n)}{(1-\zeta)^{2i-1}} \right)^{x_{2i-1}}. $$ \end{theorem} \begin{proof} Let $A = (J-H)/(1-\zeta)$ and write $\operatorname{Char}_A(x)=\sum_{i=0}^n b_ix^{n-i}$. By Lemma~\ref{lem:a_kAndb_k}, \begin{equation} \label{b_4k-2} a_{4k-1} - (1-\zeta)^{4k-2} b_{4k-2} \in (1-\zeta)^{4k} \mathbb Z[\zeta]. \end{equation} As $4k-2$ is congruent to $2$ modulo $4$, by Lemma~\ref{lem:quadtracCong}, we have \begin{equation} \label{eqn:traceRel} \frac{\operatorname{tr}(A^{4k-2})}{2} + \frac{\operatorname{tr} (A^{2k-1})^2}{2}-\frac{ \operatorname{tr} (A^{2k-1})}{1-\zeta} \in (1-\zeta) \mathbb Z[\zeta]. \end{equation} Using Newton's identities, for all $i \in \{1,\dots,n\}$, we have \begin{equation} \label{eqn:traceForm} \operatorname{tr}(A^{i})= i\sum_{\substack{\mathbf x \in X(i)}} c(\mathbf x) \prod_{j=1}^{i} (-b_j)^{x_j}. \end{equation} Define the following subsets of $ X(4k-2)$ as \begin{align*} X_0 &:= \{ \mathbf x \in X(4k-2) \; | \; x_i \text{ is even for all $i$} \}; \\ X_1 &:= \{ \mathbf x \in X(4k-2) \; | \; x_i \text{ is odd for some $i$} \}. \end{align*} Clearly $ X(4k-2) = X_0 \cup X_1$. Furthermore, observe that, for each $\mathbf x \in X_0$, we have $\mathbf x = (2x_1,\dots,2x_{2k-1},0,\dots,0)$, where $(x_1,\dots,x_{2k-1}) \in X(2k-1)$. Therefore \begin{equation} \label{trace(A^4k-2)2} \frac{\operatorname{tr}(A^{4k-2})}{2} - (2k-1)\left ( \sum_{\mathbf x \in X(2k-1)} c(2\mathbf x) \prod_{j=1}^{2k-1} b_j^{2x_j} + \sum_{\substack{\mathbf x \in X_1}} c(\mathbf x) \prod_{j=1}^{4k-2} (-b_j)^{x_j} \right ) \in (1-\zeta)\mathbb Z[\zeta]. \end{equation} By Lemma~\ref{lem:2valbound}, for each $\mathbf x \in X_1$, we have $\nu_2(c(\mathbf x)) \geqslant 0$. Since, for each $\mathbf x \in X(2k-1)$, there exists an odd $i$ such that $x_i > 0$, using Corollary~\ref{cor:oddbi} and \eqref{eqn:traceForm}, we have \begin{align} \label{trace(A^2k-1)22} \frac{\operatorname{tr}(A^{2k-1})^2}{2} &- (2k-1)^2\sum_{\mathbf x \in X(2k-1)} \frac{c(\mathbf x)^2}{2}\prod_{j=1}^{2k-1} b_j^{2x_j} \in (1-\zeta)\mathbb Z[\zeta]. \end{align} By Lemma~\ref{lem:2valbound}, for each $\mathbf x \in X(2k-1)$, we have $\nu_2(c(\mathbf x)) \geqslant 0$. By Corollary~\ref{cor:2adicdiff}, for each $\mathbf x \in X(2k-1)$, the $2$-adic valuation of $(2k-1)c(2\mathbf x) + (2k-1)^2c(\mathbf x)^2/2$ is at least $0$. Furthermore, for each $\mathbf x \in X(2k-1)$, there exists an odd $i$ such that $x_i > 0$. 
Thus, using Corollary~\ref{cor:oddbi} and combining \eqref{eqn:traceRel}, \eqref{eqn:traceForm}, \eqref{trace(A^4k-2)2}, and \eqref{trace(A^2k-1)22}, we find that \begin{align*} & -b_{4k-2} + (2k-1) \left ( \sum_{\mathbf x \in X^\prime(4k-2)} c(\mathbf x) \prod_{j=1}^{4k-2} (-b_j)^{x_j} + \sum_{\mathbf x \in X(2k-1)} \frac{c(\mathbf x)}{1-\zeta} \prod_{j=1}^{2k-1} (-b_j)^{x_j} \right ) \in (1-\zeta)\mathbb Z[\zeta]. \end{align*} By Proposition~\ref{pro:idealprelim} (f), we have $2 \in (1-\zeta)\mathbb Z[\zeta]$. Furthermore, using Lemma~\ref{lem:a_kAndb_k}, we find that \begin{align} \label{eqn:b4k-2} & -b_{4k-2} + (2k-1)\left ( \sum_{\mathbf x \in X^\prime(4k-2)} c(\mathbf x) P_{4k-2}(\mathbf x) + \sum_{\mathbf x \in X(2k-1)} \frac{c(\mathbf x)}{1-\zeta} P_{2k-1}(\mathbf x) \right ) \in (1-\zeta)\mathbb Z[\zeta]. \end{align} The theorem then follows from combining \eqref{b_4k-2} and \eqref{eqn:b4k-2}. \end{proof} Combine Theorem~\ref{thm:a_4k-1} with Proposition~\ref{pro:realIdeal} to obtain the following corollary. \begin{corollary} \label{cor:a_4k-1} Let $n \in \mathbb N$, $q > 2$ be a power of $2$, and $H \in \mathcal H_n(q)$. Suppose that $\Gamma^{\mathrm{res}}(H)$ is an Euler graph. Write $\operatorname{Char}_{H}(x)=\sum_{i=0}^n a_ix^{n-i}$. Then, for each $k \in \{2,\dots ,\lfloor (n+1)/4 \rfloor\}$, the residue class of $a_{4k-1}$ in $\rho^{\lceil (4k-1)/2 \rceil}\mathbb Z[\zeta+\zeta^{-1}]$ is determined by the values of $a_1, \dots, a_{4k-3}$. \end{corollary} Now we are ready to complete the proof of Theorem~\ref{thm:mainBounds}. \begin{proof}[Proof of Theorem~\ref{thm:mainBounds} (e)] Let $H \in \mathcal H_n(2^f)$ where $n$ is odd and $f > 1$ and write $\operatorname{Char}_{H}(x)=\sum_{i=0}^n a_ix^{n-i}$. By Remark~\ref{rem:a0a1a2}, $a_0 = 1$ and $a_2$ is determined by the value of $a_1$. Since $n$ is odd, $a_1$ must also be an odd integer. Using Remark~\ref{rem:rationalElements}, there are at most $2^{\lceil 2^{2-f}e \rceil - 1}$ possible residues for $a_1$ modulo $\rho^e \mathbb Z[\zeta+\zeta^{-1}]$. By Corollary~\ref{cor:2powerCoeffs} and Remark~\ref{rem:countResidues}, there are at most $2^{e-\lfloor i/2 \rfloor}$ possible residues for $a_i$ modulo $\rho^e \mathbb Z[\zeta+\zeta^{-1}]$, where $i \in \{3,\dots, 2e-1\}$, and just one possible residue for $a_i$ modulo $\rho^e \mathbb Z[\zeta+\zeta^{-1}]$ for $i \geqslant 2e$. Let $k \in \{2,\dots ,\lfloor (e+1)/4 \rfloor\}$. Using Proposition~\ref{pro:uniqueEuler}, without loss of generality, we can assume that the residue graph $\Gamma^{\mathrm{res}}(H)$ is an Euler graph. Now apply Corollary~\ref{cor:a_4k-1} to find that, modulo $\rho^{\lceil (4k-1)/2 \rceil}\mathbb Z[\zeta+\zeta^{-1}]$, the residue class of $a_{4k-1}$ is determined by $a_1, \dots ,a_{4k-3}$. Set $S= \{3,4,\dots,2e-1\}$ and $T=\{7,11,\dots,4\lfloor (2e+1)/4 \rfloor-1\}$. Thus, by inductively fixing the values of $a_3$, $a_4$, and so on, we find that there are at most \[ 2^{\lceil 2^{2-f}e \rceil - 1}\prod_{i \in S \backslash T}2^{e-\lfloor i/2 \rfloor}\prod_{i \in T}2^{e-\lceil i/2 \rceil} = 2^{(e-1)^2+\lceil 2^{2-f}e \rceil - 1-\lfloor e/4\rfloor} \] possible residue classes for $\operatorname{Char}_H(x)$ modulo $(\rho^e \mathbb Z[\zeta+\zeta^{-1}])[x]$.
\end{proof} \begin{proof}[Proof of Theorem~\ref{thm:mainBounds2} (e)] Note that $|\mathcal X_n^\prime(q,e)|$ is equal to the number of congruence classes of $\operatorname{Char}_H(x)$ modulo $\rho^e\mathbb Z[\zeta+\zeta^{-1}][x]$, where $H \in \mathcal H_n(q)$ and the diagonal entries of $H$ are all equal to $1$. From this point the proof is similar to the corresponding proof of Theorem~\ref{thm:mainBounds}. \end{proof} \end{document}
\begin{document} \title{On the number of branches of real curve singularities} \author{Aleksandra~Nowel and Zbigniew~Szafraniec\\ University of Gda\'{n}sk} \maketitle \pagestyle{fancy} \lhead{\fancyplain{}{\textsc{\small A.~Nowel, Z.~Szafraniec}}} \rhead{\fancyplain{}{\emph{\small On the number of branches of real curve singularities}}} \begin{abstract} We present a method for computing the number of branches of a real analytic curve germ $V(f_1,\ldots,f_m)\subset\mathbb{R}^n$ $(m\geq n)$ having a singular point at the origin, and the number of half--branches of the set of double points of an analytic germ $u:(\mathbb{R}^2,{\bf 0})\rightarrow (\mathbb{R}^3,{\bf 0})$. \end{abstract} Let $f_1,\ldots,f_m:\mathbb{R}^n,{\bf 0}\rightarrow \mathbb{R},0$ be germs of real analytic functions, and let $V=\{x\ | \ f_1(x)=\ldots=f_m(x)=0\}$ be the corresponding germ of an analytic set. If $\dim V\leq 1$ then $V$ is locally a union of a finite collection of semianalytic half-branches. Let $b_0$ denote the number of half-branches. Several authors have presented algebraic methods for computing $b_0$. The case of planar curves has been investigated by Cucker {\em et al.} \cite{cuckeretal}. In the case where $m=n-1$ and $V$ is a complete intersection, Fukuda {\em et al.} \cite{aokietal1}, \cite{aokietal2}, \cite{fukudaetal} have given a formula for $b_0$. They have associated to $f_1,\ldots, f_{n-1}$ a map germ $\mathbb{R}^n,{\bf 0}\rightarrow \mathbb{R}^n,{\bf 0}$ whose topological degree equals $\frac{1}{2}b_0$. Then one may use the Eisenbud-Levine \cite{eisenbudlevine} and Khimshiashvili \cite{khimshiashvili1}, \cite{khimshiashvili2} theorem to calculate the degree as the signature of a quadratic form on the corresponding local algebra. This has been extended by Montaldi and van Straten \cite{montaldivanstraten} to the more general case where the curve is not a complete intersection. An analytic 1-form $\alpha$ defines an orientation on each half-branch. They have shown that one may associate to $V$ and $\alpha$ so-called ``ramification modules'' together with real-valued non-degenerate quadratic forms whose signatures determine the number of ``outbound'' and ``inbound'' half-branches. In particular, if $\alpha=\sum x_i\, dx_i$ then every half-branch is ``outbound'', and so $b_0$ can be expressed as the signature of the appropriate quadratic form. Damon \cite{damon1} has applied the method of Montaldi-van Straten so as to give a very effective formula for $b_0$ in the case of a weighted homogeneous curve singularity. He has also extended this result to the $G$-equivariant case \cite{damon2}. There exist efficient computer programs which may compute the local topological degree using the Eisenbud-Levine and Khimshiashvili method (see \cite{leckiszafraniec1}, \cite{leckiszafraniec3}). On the other hand, one may often need to compute $b_0$ when $m>n-1$, so that $V$ is not a complete intersection. Assume that $m>n-1$ and the origin is isolated in the set of $z\in\mathbb{C}^n$ such that $f_1(z)=\ldots=f_m(z)=0$ and the rank of the derivative $D(f_1,\ldots,f_m)(z)$ is smaller than $n-1$. In this paper we shall show how to construct two mappings $H_\pm:\mathbb{R}^n,{\bf 0}\rightarrow \mathbb{R}^n,{\bf 0}$ such that $b_0$ is the difference of the local degrees $\deg_0(H_+)$ and $\deg_0(H_-)$. The paper is organized as follows.
In Section 1 and Section 2 we have collected some useful facts about ideals in the ring of convergent power series, and about initial diagrams of these ideals. In Section 3 we prove that one may find germs $g_1,\ldots,g_{n-1},h$ which are linear combinations of $f_1,\ldots, f_m$ such that $V=\{x\ | \ g_1(x)=\ldots=g_{n-1}(x)=h(x)=0\}$. In Section 4 we apply results of \cite{szafraniec14} so as to prove the main result, and we compute some simple examples. Mappings $u:(\mathbb{R}^2,{\bf 0})\rightarrow (\mathbb{R}^3,{\bf 0})$ are a natural object of study in the theory of singularities (see \cite{mond1}, \cite{mond2}; for a recent account we refer the reader to \cite{mararnunoballesteros}). In Section 5 we show how to verify that $u$ has only transverse double points (which appear along a curve $D^2(u)$ in $\mathbb{R}^3$), and no triple points. We shall also show how to compute the number of half--branches in $D^2(u)$. In this paper we present several examples computed by a computer. We have implemented our algorithm with the help of {\sc Singular} \cite{singular}. We have also used a computer program written by Andrzej {\L}\k{e}cki \cite{leckiszafraniec1}. \section{Preliminaries} Let $\mathbb{K}$ denote either $\mathbb{R}$ or $\mathbb{C}$. For $p\in\mathbb{K}^n$, let ${\cal O}_{K,p}$ denote the ring of germs at $p$ of analytic functions $\mathbb{K}^n\rightarrow\mathbb{K}$. If $I_p$ is an ideal in ${\cal O}_{K,p}$, let $V_K(I_p)\subset\mathbb{K}^n$ denote the germ of zeros of $I_p$ at $p$. Let ${\bf m}_0=\{f\in{\cal O}_{K,0}\ |\ f({\bf 0})=0\}$ denote the maximal ideal in ${\cal O}_{K,0}$. Let $U\subset\mathbb{C}^n$ be an open neighbourhood of the origin. Suppose that functions $f_1,\ldots, f_s$ and $g_1,\ldots,g_t$ are holomorphic in $U$. For $p\in U$, let $I_p=\left<f_1,\ldots,f_s\right>$ (resp. $J_p=\left<g_1,\ldots,g_t\right>$) denote the ideal in ${\cal O}_{C,p}$ generated by $f_1,\ldots,f_s$ (resp. by $g_1,\ldots , g_t$). \begin{prop}\label{nr1} There exists $k$ such that ${\bf m}_0^k I_0\subset J_0$ if and only if there is an open neighbourhood $W$ of the origin such that $I_p\subset J_p$ for $p\in W\setminus\{{\bf 0}\}$. If that is the case and ${\bf 0}\in V_C(I_0)$ then $V_C(J_0)\subset V_C(I_0)$. \end{prop} \begin{proof} ($\Rightarrow$) Let $(z_1,\ldots,z_n)$ denote the coordinates in $\mathbb{C}^n$. There exists an open neighbourhood $W$ of the origin such that each function $z_j^kf_i$ is on $W$ a combination of $g_1,\ldots,g_t$ with holomorphic coefficients. If $p=(p_1,\ldots,p_n)\in W\setminus \{{\bf 0}\}$ then some $p_j\neq 0$. As $z_j^k$ is invertible in ${\cal O}_{C,p}$, all $f_i\in J_p$, and then $I_p\subset J_p$.\\[1em] ($\Leftarrow$) For $p\in U$ let \[{\cal R}_i(p)=\{(a,h_1,\ldots,h_t)\in({\cal O}_{C,p})^{t+1}\ |\ af_i=h_1g_1+\cdots +h_tg_t\}.\] As ${\cal O}_{C,p}$ is a noetherian ring, ${\cal R}_i(p)$ is a finitely generated ${\cal O}_{C,p}$-module, and there exists a finite family $\{b_\ell=(a_\ell,h_{\ell 1},\ldots, h_{\ell t})\}$ of generators of ${\cal R}_i({\bf 0})$. Of course, \[{\cal A}_i(p)=\{a\in{\cal O}_{C,p}\ |\ \exists\ h_1,\ldots,h_t\in{\cal O}_{C,p}\ :\ (a,h_1,\ldots,h_t)\in {\cal R}_i(p)\}\] is an ideal in ${\cal O}_{C,p}$, and $a\in{\cal A}_i(p)$ if and only if $af_i\in J_p$. By the Oka coherence theorem, there exists an open $W$ such that ${\bf 0}\in W\subset U$, representatives of $\{b_\ell\}$ are defined and holomorphic in $W$, and for each $p\in W$ the germs of $\{b_\ell\}$ at $p$ generate ${\cal R}_i(p)$.
In particular, the germs of $\{a_\ell\}$ at $p$ generate ${\cal A}_i(p)$. If $p\in W\setminus\{{\bf 0}\}$ lies sufficiently close to the origin then $I_p\subset J_p$. Hence $1\cdot f_i=f_i\in I_p\subset J_p$, and then $1\in {\cal A}_i(p)$. Thus some $a_\ell(p)\neq 0$. Hence $ V_C(\{a_\ell\})\subset\{{\bf 0}\}$. The germs $\{a_\ell\}$ at ${\bf 0}$ generate ${\cal A}_i({\bf 0})$. By R\"uckert's local Nullstellensatz, there exists $k(i)$ with ${\bf m}_0^{k(i)}\subset{\cal A}_i({\bf 0})$, and then ${\bf m}_0^{k(i)}\left< f_i\right>\subset J_0$. Let $k=\max(k(1),\ldots,k(s))$. As $f_1,\ldots, f_s$ generate $I_0$, then ${\bf m}_0^kI_0\subset J_0$. If that is the case and ${\bf 0}\in V_C(I_0)$, then $V_C(J_0)\subset V_C({\bf m}_0^k)\cup V_C(I_0)=\{{\bf 0}\}\cup V_C(I_0)=V_C(I_0)$. \end{proof} \begin{cor}\label{nr2} There exists $k$ such that ${\bf m}_0^k I_0\subset J_0$ and ${\bf m}_0^k J_0\subset I_0$ if and only if there is an open neighbourhood $W$ of the origin such that $I_p=J_p$ for $p\in W\setminus\{{\bf 0}\}$. If that is the case and ${\bf 0}\in V_C(I_0)\cap V_C(J_0)$ then $V_C(I_0)=V_C(J_0)$. \end{cor} \section{Diagrams of initial exponents} In this section we present some properties of diagrams of initial exponents. In exposition and notation we follow closely \cite{bierstonemilman}. Let $\mathbb{N}$ denote the nonnegative integers. If $\alpha=(\alpha_1,\ldots,\alpha_n)\in\mathbb{N}^n$, put $|\alpha|=\alpha_1+\cdots+\alpha_n$. We order the $(n+1)$--tuples $(\alpha_1,\ldots,\alpha_n,|\alpha|)$ lexicographically from the right. This induces a total ordering of $\mathbb{N}^n$. Let $0\neq f=\sum a_\alpha x^\alpha\in{\cal O}_{K,0}$, where $\alpha\in\mathbb{N}^n$, $a_\alpha\in\mathbb{K}$, and $x^\alpha=x_1^{\alpha_1}\cdots x_n^{\alpha_n}$. Denote \[\operatorname{supp}(f)=\{\alpha\in\mathbb{N}^n\ |\ a_\alpha\neq 0\}.\] Let $\nu(f)$ denote the smallest element of $\operatorname{supp}(f)$, and let $\operatorname{in}(f)$ denote $a_{\nu(f)}x^{\nu(f)}$. It is easy to verify that $\nu(f_1\cdot f_2)=\nu(f_1)+\nu(f_2)$, $\operatorname{in}(f_1\cdot f_2)=\operatorname{in}(f_1)\operatorname{in}(f_2)$, and $\nu(\sum f_i)=\min(\{\nu(f_i)\})$ if $\nu(f_i)$ are pairwise distinct. We define the diagram of initial exponents $\mathbb{N}pis(I)$ of an ideal $I\subset{\cal O}_{K,0}$ as $\{\nu(f)\ |\ f\in I\setminus\{0\} \}$. Clearly, $\mathbb{N}pis(I)+\mathbb{N}^n=\mathbb{N}pis(I)$. There is a smallest finite subset ${\cal B}(I)$ of $\mathbb{N}pis(I)$ such that $\mathbb{N}pis(I)={\cal B}(I)+\mathbb{N}^n$. If $I\subset J$ are ideals then $\mathbb{N}pis(I)\subset\mathbb{N}pis(J)$. There exist $g^1,\ldots,g^s\in I$ and $g^{s+1},\ldots, g^t\in J$ such that ${\cal B}(I)=\{\nu(g^1),\ldots,\nu(g^s)\}$ and ${\cal B}(J)\setminus\mathbb{N}pis(I)=\{\nu(g^{s+1}),\ldots,\nu(g^t)\}$. We associate to $g^1,\ldots,g^t$ the following decomposition of $\mathbb{N}^n$: \[\Delta_1=\nu(g^1)+\mathbb{N}^n,\] \[\Delta_i=(\nu(g^i)+\mathbb{N}^n)\setminus\bigcup_{k=1}^{i-1}\Delta_k,\ \ i=2,\ldots,t,\] \[\Delta=\mathbb{N}^n\setminus\bigcup_{k=1}^t\Delta_k=\mathbb{N}^n\setminus\mathbb{N}pis(J).\] As $\mathbb{N}pis(I)=\bigcup_{i=1}^s \Delta_i$ and $\mathbb{N}pis(J)=\bigcup_{i=1}^t\Delta_i$, then $\mathbb{N}pis(J)\setminus\mathbb{N}pis(I)=\bigcup_{i=s+1}^t\Delta_i$. 
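As a small illustration (not used in any proof), the following Python sketch evaluates, for a polynomial given by its (exponent, coefficient) pairs, the initial exponent $\nu(f)$ and the initial term $\operatorname{in}(f)$ with respect to the total ordering introduced at the beginning of this section; the helper names are ours.
\begin{verbatim}
def order_key(alpha):
    # The total order on exponents used above: the (n+1)-tuples
    # (alpha_1, ..., alpha_n, |alpha|) are compared lexicographically from
    # the right, i.e. first by |alpha|, then by alpha_n, ..., then alpha_1.
    return (sum(alpha),) + tuple(reversed(alpha))

def nu_and_in(f):
    # f is a dictionary {exponent tuple: nonzero coefficient}.
    nu = min(f.keys(), key=order_key)   # smallest element of supp(f)
    return nu, (f[nu], nu)              # nu(f) and in(f) as (coefficient, exponent)

# Example with n = 2:  f = 5 x^3 + x^2 y + 3 x y^3.
f = {(3, 0): 5, (2, 1): 1, (1, 3): 3}
print(nu_and_in(f))    # nu(f) = (3, 0), so in(f) = 5 x^3
\end{verbatim}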
\begin{theorem}[\cite{arocaetal}, \cite{bierstonemilman}, \cite{grauert}]\label{nr3} For every $f\in{\cal O}_{K,0}$ there exist unique $q_i\in{\cal O}_{K,0}$, $i=1,\ldots, t$ and $r\in{\cal O}_{K,0}$ such that \[\nu(g^i)+\operatorname{supp}(q_i)\subset\Delta_i,\ \ i=1,\ldots,t,\] \[\operatorname{supp}(r)\subset\Delta,\] and $f=\sum_{i=1}^t q_i g^i+r$. \end{theorem} \begin{prop}\label{nr4} $f\in I$ if and only if $q_{s+1}=\cdots =q_t=r=0$. \end{prop} \begin{proof} ($\Leftarrow$) is obvious.\\[1em] ($\Rightarrow$) As $q_i$ and $r$ are unique, if $f=0$ then $r=0$ and all $q_i=0$. Suppose, contrary to our claim, that $f\in I\setminus\{0\}$ and either $r\neq 0$ or $q_i\neq 0$ for some $s+1\leq i\leq t$. Put \[h=\sum_{s+1}^tq_i g^i+r=f-\sum_1^s q_i g^i\ .\] By Theorem \ref{nr3}, $\nu(q_i g^i)=\nu(g^i)+\nu(q_i)\in\Delta_i$, and $\nu(r)\in\Delta$. Thus $\operatorname{in}(h)$ is $\operatorname{in}(r)$ or one of the $\operatorname{in}(q_i g^i)=\operatorname{in}(q_i)\operatorname{in}(g^i)$, where $s+1\leq i\leq t$, since their exponents $\nu(r)$ and $\nu(q_i g^i)$ lie in disjoint regions of $\mathbb{N}^n$. Hence $h\neq 0$, and \[\nu(h)\in\Delta\cup \bigcup_{s+1}^t\Delta_i=\mathbb{N}^n\setminus\mathbb{N}pis(I).\] On the other hand, $h=f-\sum_1^s q_i g^i\in I$ and then $\nu(h)\in\mathbb{N}pis(I)$, a contradiction. \end{proof} Applying similar arguments one may prove \begin{prop}\label{nr5} $f\in J$ if and only if $r=0$. \end{prop} \begin{prop}\label{nr6} If $f\in J$ then its residue class in $J/I$ is uniquely represented by $\sum_{s+1}^t q_i g^i$, where $\nu(g^i)+\operatorname{supp}(q_i)\subset\Delta_i$. \end{prop} \begin{proof} By the above proposition, $r=0$, and then \[f=\sum_{i=1}^t q_i g^i=\sum_{i=1}^s q_i g^i +\sum_{i=s+1}^t q_i g^i\ .\] Since $\sum_1^s q_i g^i\in I$, $f=\sum_{s+1}^t q_i g^i$ in $J/I$. Proposition \ref{nr4} implies uniqueness. \end{proof} \begin{cor}\label{nr7} If $I\subset J$ are ideals in ${\cal O}_{K,0}$, then \[\dim_K(J/I)=\sum_{i=s+1}^t \# \Delta_i=\# (\mathbb{N}pis(J)\setminus\mathbb{N}pis(I)).\] \end{cor} Applying the Nakayama Lemma one may prove \begin{prop}\label{nr8} Assume that $I\subset J$ are ideals in ${\cal O}_{K,0}$. Then $\dim_K (J/I)<\infty$ if and only if there exists $k$ such that ${\bf m}_0^k J\subset I$. \end{prop} From Corollaries \ref{nr2}, \ref{nr7} and Proposition \ref{nr8} we get \begin{cor}\label{nr9} Assume that $I\subset J$ are ideals in ${\cal O}_{K,0}$. The following conditions are equivalent: \begin{itemize} \item[(i)] $\dim_K (J/I)<\infty$, \item[(ii)] $\# (\mathbb{N}pis(J)\setminus\mathbb{N}pis(I))<\infty$, \item[(iii)] there exists $k$ such that ${\bf m}_0^k J\subset I$, \item[(iv)] there is an open neighbourhood $W$ of the origin in $\mathbb{C}^n$ such that $I_p=J_p$ for $p\in W\setminus\{{\bf 0}\}$. \end{itemize} If that is the case and both ideals are proper then $V_C(I)=V_C(J)$. \end{cor} Applying the local Nullstellensatz we get \begin{cor}\label{nr10} Assume that $I\subset{\cal O}_{K,0}$ is an ideal. The following conditions are equivalent \begin{itemize} \item[(i)] $\dim_K ({\cal O}_{K,0}/I)<\infty$, \item[(ii)] $\#(\mathbb{N}^n\setminus\mathbb{N}pis(I))<\infty$, \item[(iii)] $V_C(I)\subset\{{\bf 0}\}$ in some neighbourhood of the origin. \end{itemize} \end{cor} Let $f_1,\ldots,f_m\in{\cal O}_{R,0}$, and let \[h_i^a=\sum_{j=1}^m A_{ij}(a)f_j,\ 1\leq i\leq \ell,\] where $A_{ij}\in\mathbb{R}[a_1,\ldots,a_d]$ and $a=(a_1,\ldots,a_d)$.
For $a\in\mathbb{K}^d$, let $I_K^a$ denote the ideal in ${\cal O}_{K,0}$ generated by $h_1^a,\ldots,h_\ell^a$. Applying arguments presented in (\cite{bierstonemilman}, Chapter I) one may prove \begin{prop}\label{nr101} \begin{itemize} \item[(i)] if $a\in\mathbb{R}^d$ then $\mathbb{N}pis(I_R^a)=\mathbb{N}pis(I_C^a)$, \item[(ii)] there exists a proper algebraic set $\Sigma_K\subset\mathbb{K}^d$, defined by polynomials with real coefficients, such that $\mathbb{N}pis(I_K^a)$ is constant for $a\in\mathbb{K}^d\setminus\Sigma_K$. In particular, $\Sigma_R=\Sigma_C\cap \mathbb{R}^d$ is a proper real algebraic set and $\mathbb{N}pis(I_R^a)$ is constant for $a\in\mathbb{R}^d\setminus\Sigma_R$. \end{itemize} \end{prop} Let $J_K$ denote the ideal in ${\cal O}_{K,0}$ generated by $f_1,\ldots, f_m$. Each $I_K^a\subset J_K$, and then $\mathbb{N}pis(I_K^a)\subset\mathbb{N}pis(J_K)$. The function $\mathbb{K}^d\ni a\mapsto\mathbb{N}pis(I_K^a)$ is upper-semicontinuous (see \cite{bierstonemilman} for details), so we have \begin{prop}\label{nr11} If there exists $a\in\mathbb{K}^d$ such that $\#(\mathbb{N}pis(J_K)\setminus\mathbb{N}pis(I_K^a))<\infty$, then $\#(\mathbb{N}pis(J_K)\setminus\mathbb{N}pis(I_K^a))<\infty$ for all $a\in\mathbb{K}^d\setminus\Sigma_K$. \end{prop} \section{Curves having isolated singularity} Let $f_1,\ldots, f_m\in{\bf m}_0\cap{\cal O}_{K,0}$. The next proposition is a consequence of well-known properties of complex analytic germs (see \cite{narasimhan}, Proposition 4, p.46, and Proposition 13, p.60). \begin{prop}\label{nr12} $\dim V_C(f_1,\ldots,f_m)\leq 1$ if and only if there exists a germ $a\in {\bf m}_0$ such that $V_C(f_1,\ldots,f_m,a)=\{{\bf 0}\}$. If that is the case and ${\bf 0}$ is not isolated in $V_C(f_1,\ldots,f_m)$, then $V_C(f_1,\ldots,f_m)$ is a 1-dimensional germ, i.e. it is a complex curve in some neighbourhood of the origin. \end{prop} Suppose that $\dim V_C(f_1,\ldots,f_m)\leq 1$. By \[D(f_1,\ldots,f_m)(p)=\left[ \frac{\partial f_j}{\partial z_i}(p)\right]\] we shall denote the Jacobian matrix at $p$. We shall say that $V_C(f_1,\ldots, f_m)$ is a curve having an isolated singularity at the origin if \[\{ p\in V_C(f_1,\ldots,f_m)\ |\ \operatorname{rank}\, D(f_1,\ldots,f_m)(p)<n-1\}\subset\{{\bf 0}\}\] in some neighbourhood of the origin. If that is the case and $\dim V_C(f_1,\ldots, f_m)=1$, then \[\operatorname{rank}\, D(f_1,\ldots,f_m)\equiv n-1\mbox{ on }V_C(f_1,\ldots,f_m)\setminus\{{\bf 0}\}\ .\] We want to point out that if $V_C(f_1,\ldots,f_m)=\{{\bf 0}\}$ then we also call it a curve having an isolated singularity, despite that $\dim V_C(f_1,\ldots,f_m)= 0$. Let $I\subset {\cal O}_{K,0}$ denote the ideal generated by $f_1,\ldots, f_m$ and all $(n-1)\times(n-1)$--minors of $D(f_1,\ldots,f_m)$. By Corollary \ref{nr10} we get \begin{prop}\label{nr13} $V_C(f_1,\ldots, f_m)$ is a curve having an isolated singularity at the origin if and only if $\dim_K {\cal O}_{K,0}/I<\infty$, i.e. if $\#(\mathbb{N}^n\setminus\mathbb{N}pis(I))<\infty$. \end{prop} \begin{lemma}\label{nr16} Let $g_1,\ldots,g_{n-1},h\in\left<f_1,\ldots,f_m\right>\subset{\cal O}_{K,0}$. Assume that curves $V_C(f_1,\ldots,f_m)$ and $V_C(g_1,\ldots,g_{n-1})$ have an isolated singularity at the origin. 
Then $V_C(f_1,\ldots,f_m)=V_C(g_1,\ldots,g_{n-1},h)$ if and only if \[\dim_K \left<f_1,\ldots,f_m\right>/\left<g_1,\ldots,g_{n-1},h\right>\ <\infty.\] \end{lemma} \begin{proof} ($\Leftarrow$) is a consequence of Corollary \ref{nr9}.\\ ($\Rightarrow$) Take $p\in V_C(f_1,\ldots,f_m)\setminus\{{\bf 0}\}=V_C(g_1,\ldots,g_{n-1},h)\setminus\{{\bf 0}\}$ near the origin. The germ of $V_C(g_1,\ldots,g_{n-1},h)$ at $p$ is one--dimensional, so \[\operatorname{rank}\, D(g_1,\ldots,g_{n-1})(p)\leq\operatorname{rank}\, D(g_1,\ldots,g_{n-1},h)(p)\leq n-1.\] As $V_C(f_1,\ldots,f_m)$ and $V_C(g_1,\ldots,g_{n-1})$ have an isolated singularity, \[n-1=\operatorname{rank}\, D(f_1,\ldots,f_m)(p)=\operatorname{rank}\, D(g_1,\ldots,g_{n-1})(p),\] and then $\operatorname{rank}\, D(f_1,\ldots,f_m)(p)=\operatorname{rank}\, D(g_1,\ldots,g_{n-1},h)(p)=n-1$. Then ideals generated by representatives of $f_1,\ldots,f_m$ and $g_1,\ldots,g_{n-1},h$ in ${\cal O}_{C,p}$ are equal. By Corollary \ref{nr9} (i),(iv), \[\dim_K\left<f_1,\ldots,f_m\right>/\left<g_1,\ldots,g_{n-1},h\right>\ <\infty .\] \end{proof} \begin{lemma} If $g_1,\ldots,g_\ell \in\left< f_1,\ldots,f_m \right>$ and $V_C(g_1,\dots,g_\ell)$ has an isolated singularity at the origin, then $V_C(f_1,\ldots,f_m)$ has an isolated singularity too. \end{lemma} \begin{proof} Of course $V_C(f_1,\ldots,f_m)\subset V_C(g_1,\ldots,g_\ell)$. If $V_C(g_1,\ldots,g_\ell)\subset\{{\bf 0}\}$ then the conclusion is obvious. If this is not the case, then $\dim V_C(f_1,\ldots,f_m)\leq\dim V_C(g_1,\ldots,g_\ell)=1$. For $p\in V_C(f_1,\ldots,f_m)$, each gradient $\nabla g_i(p)$ is a linear combination of gradients $\nabla f_j(p)$, and then \[\operatorname{rank}\, D(g_1,\ldots,g_\ell)(p)\leq\operatorname{rank}\, D(f_1,\ldots,f_m)(p).\] As $V_C(g_1,\dots,g_\ell)$ has an isolated singularity, $\operatorname{rank}\, D(g_1,\ldots,g_\ell)(p)=n-1$ for all $p\in V_C(g_1,\ldots,g_\ell)\setminus\{{\bf 0}\}$ lying near the origin, which implies that $\operatorname{rank}\, D(f_1,\ldots,f_m)(p)\geq n-1$ for $p\in V_C(f_1,\ldots,f_m)\setminus\{{\bf 0}\}$. \end{proof} \begin{prop}\label{wstawka1} If $g_1,\ldots,g_{n-1}\in\left< f_1,\ldots,f_m \right>$, $h\in{\cal O}_{K,0}$ and \[V_C\left(\frac{\partial(h,g_1,\ldots,g_{n-1})}{\partial(x_1,x_2,\ldots,x_n)},g_1,\ldots,g_{n-1}\right) \subset\{{\bf 0}\}\] then $V_C(f_1,\ldots,f_m)$ has an isolated singularity at the origin. \end{prop} \begin{proof} If $p\in V_C(g_1,\ldots,g_{n-1})\setminus\{{\bf 0}\}$ lies near the origin then the determinant of $ D(h,g_1,\ldots,g_{n-1})(p)$ does not vanish, and then $\operatorname{rank}\, D(g_1,\ldots,g_{n-1})(p)=n-1$. So $V_C(g_1,\ldots,g_{n-1})$ has an isolated singularity, and one may apply the previous Lemma. \end{proof} Let ${\cal M}(k,m;\mathbb{K})$ denote the space of all $k\times m$--matrices with coefficients in $\mathbb{K}$. \begin{lemma}\label{nr14} Assume that $S$ is a finite set, and $v_{s1},\ldots,v_{sm}\in\mathbb{C}^n$, where $s\in S$, is a finite collection of vectors. Let $V_s$ denote the $\mathbb{C}$-linear space spanned by $v_{s1},\ldots,v_{sm}$. Assume that each $\dim_C V_s\geq k$. Then there exists a proper algebraic subset $\Sigma_1\subset{\cal M}(k,m;\mathbb{K})$, defined by polynomials with real coefficients, such that for every $[a_{tj}]\in{\cal M}(k,m;\mathbb{K})\setminus\Sigma_1$, the $\mathbb{C}$--linear space spanned by \[w_{st}=\sum_{j=1}^m a_{tj} v_{sj},\ \ 1\leq t\leq k,\] is $k$--dimensional for each $s\in S$.
\end{lemma} \begin{prop}\label{15} Assume that $V_C(f_1,\ldots, f_m)$ is a curve having an isolated singularity at the origin. Then there exists a proper algebraic subset $\Sigma_2\subset{\cal M}_K={\cal M}(n-1,m;\mathbb{K})$, defined by polynomials with real coefficients, such that for every $(n-1)\times m$--matrix $a =[a_{tj}]\in{\cal M}_K\setminus\Sigma_2$ and \[g_t^a=\sum_{j=1}^m a_{tj} f_j,\ \ 1\leq t\leq n-1,\] one has $\{p\in V_C(f_1,\ldots,f_m)\ |\ \operatorname{rank}\, D(g_1^a,\ldots,g_{n-1}^a)(p)<n-1\}\subset\{{\bf 0}\}$. \end{prop} \begin{proof} For $a=[a_{tj}]\in{\cal M}_K$, let $J^a\subset{\cal O}_{K,0}$ denote the ideal generated by $f_1,\ldots,f_m$ and all $(n-1)\times (n-1)$--minors of $D(g_1^a,\ldots,g_{n-1}^a)$. Then \[\{p\in V_C(f_1,\ldots,f_m)\ |\ \operatorname{rank}\, D(g_1^a,\ldots,g_{n-1}^a)(p)<n-1\}\subset\{{\bf 0}\}\] if and only if $\#(\mathbb{N}^n\setminus\mathbb{N}pis(J^a))<\infty$. From Proposition \ref{nr101}, there exists a proper algebraic subset $\Sigma_2\subset {\cal M}_K$, defined by polynomials with real coefficients, such that $\mathbb{N}pis(J^a)$ is constant for $a\in {\cal M}_K\setminus\Sigma_2$. By Proposition \ref{nr11}, it is enough to find at least one $a\in{\cal M}_K$ such that $\# (\mathbb{N}^n\setminus\mathbb{N}pis(J^a))<\infty$. The curve $V_C(f_1,\ldots,f_m)$ is a finite union of complex irreducible curves $\{ C_s \}$, and each $C_s\setminus\{{\bf 0}\}$ is locally biholomorphic to $\mathbb{C}\setminus\{{\bf 0}\}$. Take $p_s\in C_s\setminus\{{\bf 0}\}$ near the origin. Put \[v_{sj}=\nabla f_j(p_s),\ 1\leq j\leq m.\] The dimension of the linear space spanned by $v_{s1},\ldots, v_{sm}$ equals $\operatorname{rank}\, D(f_1,\ldots,f_m)(p_s)= n-1$. By Lemma \ref{nr14} there exists a proper algebraic subset $\Sigma_1\subset{\cal M}_K$, defined by polynomials with real coefficients, such that for every $a=[a_{tj}]\in{\cal M}_K\setminus\Sigma_1$ the dimension of the space spanned by all \[\sum_{j=1}^m a_{tj}\nabla f_j(p_s)=\nabla g_t^a(p_s)\] equals $n-1$ for each $s$. If that is the case then $\operatorname{rank}\, D(g_1^a,\ldots,g_{n-1}^a)(p_s)=n-1$, and then the same equality holds for all $s$ and all $p\in C_s\setminus\{{\bf 0}\}$ lying near ${\bf 0}$, i.e. for all $p\in V_C(f_1,\ldots,f_m)\setminus\{{\bf 0}\}$ sufficiently close to the origin. \end{proof} \begin{theorem}\label{nr17} Assume that $V_C(f_1,\ldots,f_m)$ is a curve with an isolated singularity. Then there exists a proper algebraic subset $\Sigma_3\subset{\cal M}(n-1,m;\mathbb{K})\times\mathbb{K}^m={\cal M}_K\times\mathbb{K}^m$, defined by polynomials with real coefficients, such that for every $(a,b)=\left( [a_{tj}],(b_1,\ldots,b_m)\right)\not\in\Sigma_3$ and \[g_t^a=\sum_{j=1}^m a_{tj} f_j,\ \ 1\leq t\leq n-1,\] \[h^b=b_1 f_1+\cdots+b_m f_m\ ,\] \begin{itemize} \item[(i)] $V_C(g_1^a,\ldots,g_{n-1}^a)$ is a curve with an isolated singularity at the origin, \item[(ii)] $V_C(g_1^a,\ldots,g_{n-1}^a,h^b)=V_C(f_1,\ldots,f_m)$. \end{itemize} \end{theorem} \begin{proof} Let $J_K$ denote the ideal in ${\cal O}_{K,0}$ generated by $f_1,\ldots,f_m$. For $(a,b)\in{\cal M}_K\times\mathbb{K}^m$ let $I^a$ denote the ideal generated by $g_1^a,\ldots,g_{n-1}^a$ and all $(n-1)\times(n-1)$--minors of $D(g_1^a,\ldots,g_{n-1}^a)$, and let $I^{a,b}$ denote the one generated by germs $g_1^a,\ldots,g_{n-1}^a,h^b$.
By Proposition \ref{nr101} there exists a proper algebraic set $\Sigma_3\subset{\cal M}_K\times\mathbb{K}^m$, defined by polynomials with real coefficients, such that $\mathbb{N}pis(I^{a})$, as well as $\mathbb{N}pis(I^{a,b})$, is constant for all $(a,b)\not\in\Sigma_3$. By Propositions \ref{nr11}, \ref{nr13} and Lemma \ref{nr16}, it is enough to find at least one $(a,b)\in{\cal M}_C\times\mathbb{C}^m$ such that $\# (\mathbb{N}^n\setminus \mathbb{N}pis(I^a))<\infty $ and $\# (\mathbb{N}pis(J_C)\setminus\mathbb{N}pis(I^{a,b}))<\infty$.\\ {\em (i)} Let $U=\{p\in\mathbb{C}^n\ |\ f_1(p)\neq 0\}$. Let ${\cal D}$ denote the space of all complex $(n-1)\times (m-1)$--matrices $d=[d_{tj}]$, where $1\leq t\leq n-1$, $2\leq j\leq m$, and let \[a_{t1}(p,d)=-\left(\sum_{j=2}^m d_{tj} f_j(p)\right)\cdot (f_1(p))^{-1},\ \ \ a_{tj}(p,d)=d_{tj}\ .\] Define a mapping \[U\times {\cal D}\ni (p,d)\, \mapsto\, A(p,d)=[a_{tj}(p,d)]\in{\cal M}_C\ .\] Let $a=[a_{tj}]\in{\cal M}_C$ be a regular value of $A$. Then $(p,d)\in A^{-1}(a)$ if and only if $p\in U$, $a_{tj}=d_{tj}$ for $1\leq t\leq n-1$, $2\leq j\leq m$, and \[g_t^a(p)=\sum_{j=1}^m a_{tj} f_j(p)=0\ .\] If that is the case then the rank of the Jacobian matrix of $A$ at $(p,d)$ equals $(n-1)m=n-1+\dim {\cal D}$, and then \[\mathop{\rm rank}\nolimits\, \left[\frac{\partial a_{t1}}{\partial z_i}(p,d)\right]=n-1,\] where $1\leq t\leq n-1, 1\leq i\leq n$. At $(p,d)$ we have \[-\frac{\partial a_{t1}}{\partial z_i}\cdot f_1^2= \left( \sum_{j=2}^m d_{tj}\frac{\partial f_j}{\partial z_i}\right)f_1-\left(\sum_{j=2}^m d_{tj} f_j\right)\frac{\partial f_1}{\partial z_i}=\] \[\left(\sum_{j=2}^m a_{tj}\frac{\partial f_j}{\partial z_i}\right)f_1+a_{t1} \cdot f_1\cdot\frac{\partial f_1}{\partial z_i}= \left(\sum_{j=1}^m a_{tj}\frac{\partial f_j}{\partial z_i}\right)\cdot f_1= \frac{\partial g_t^a}{\partial z_i}\cdot f_1 .\] As $f_1(p)\neq 0$, if $g_1^a(p)=\ldots=g_{n-1}^a(p)=0$ then \[\operatorname{rank}\, D(g_1^a,\ldots,g_{n-1}^a)(p)= \operatorname{rank}\, \left[ \frac{\partial g_t^a}{\partial z_i}(p) \right]= n-1\ .\] By the Sard theorem, one may choose $a\in{\cal M}_C$ such that the above condition holds for all points \[p\in V_C(g_1^a,\ldots,g_{n-1}^a)\setminus V_C(f_1,\ldots,f_m)=\] \[\bigcup_{k=1}^m \left( V_C(g_1^a,\ldots,g_{n-1}^a)\setminus V_C(f_k)\right).\] Let $\Sigma_2\subset{\cal M}_C$ be as in Proposition \ref{15}. One may choose $a\not\in\Sigma_2$, so that the origin is isolated in \[\{p\in V_C(f_1,\ldots,f_m)\ |\ \operatorname{rank}\, D(g_1^a,\ldots,g_{n-1}^a)(p)<n-1\}.\] Hence the curve $V_C(g_1^a,\ldots,g_{n-1}^a)$ has an isolated singularity at the origin, i.e. $\#(\mathbb{N}^n\setminus\mathbb{N}pis(I^a))<\infty$.\\[1em] {\em (ii)} Take any nontrivial $h^b=b_1 f_1+\cdots+ b_m f_m$, $b_i\in\mathbb{K}$. Then $\dim V_C(h^b)=n-1$. There exists a finite collection $\{V_s\}$ of analytic manifolds such that $V_C(h^b)\setminus V_C(f_1)=\bigcup_s V_s$. In particular, each $\dim (V_s\times {\cal D})\leq n-1+(n-1)(m-1)=\dim {\cal M}_C$. We may assume that a matrix $a\in{\cal M}_C$ is a regular value for each restricted mapping $A|V_s\times{\cal D}$. As there is a one-to-one correspondence between points in $A^{-1}(a)\cap (V_s\times{\cal D})$ and $V_C(g_1^a,\ldots, g_{n-1}^a)\cap V_s$, both are either void or discrete. In consequence, $V_C(g_1^a,\ldots,g_{n-1}^a,h^b)\setminus V_C(f_1)=\bigcup_s V_C(g_1^a,\ldots,g_{n-1}^a)\cap V_s$ is discrete too.
Applying the same arguments for each $V_C(h^b)\setminus V_C(f_j)$, we can find a matrix $a\in{\cal M}_C$ and associated germs $g_1^a,\ldots,g_{n-1}^a$ such that $V_C(g_1^a,\ldots,g_{n-1}^a,h^b)\setminus V_C(f_1,\ldots,f_m)$ is discrete. Then the corresponding germs at the origin satisfy an inclusion \[V_C(g_1^a,\ldots,g_{n-1}^a,h^b)\subset V_C(f_1,\ldots,f_m).\] Since $g_1^a,\ldots,g_{n-1}^a,h^b\in\left< f_1,\ldots, f_m\right>$, it follows that $V_C(g_1^a,\ldots,g_{n-1}^a,h^b)=V_C(f_1,\ldots,f_m)$. By Lemma \ref{nr16}, \[\dim_K \left<f_1,\ldots,f_m\right>/\left<g_1^a,\ldots,g_{n-1}^a,h^b\right>=\#(\mathbb{N}pis(J_C)\setminus\mathbb{N}pis(I^{a,b}))<\infty.\] \end{proof} \section{The number of half-branches} Assume that $f_1,\ldots,f_m\in{\bf m}_0\cap{\cal O}_{R,0}$, $\operatorname{rank}\, D(f_1,\ldots,f_m)({\bf 0})<n-1$, and $V_C(f_1,\ldots,f_m)$ is a curve with an isolated singularity at the origin. By Theorem \ref{nr17} there exist $g_1,\ldots,g_{n-1},h\in\left< f_1,\ldots,f_m\right>\cap{\cal O}_{R,0}$ such that $V_C(g_1,\ldots,g_{n-1})$ is a curve with an isolated singularity, $\operatorname{rank}\, D(g_1,\ldots,g_{n-1})({\bf 0})<n-1$, and $V_C(g_1,\ldots,g_{n-1},h)=V_C(f_1,\ldots,f_m)$. In particular, the number of half-branches of $V_R(f_1,\ldots,f_m)$ emanating from the origin is the same as the number $b_0$ of half-branches of $V_R(g_1,\ldots,g_{n-1})$ on which $h$ vanishes. Now we recall the formula for $b_0$ presented in \cite{szafraniec14}. Let $J$ denote the ideal generated by $f_1,\ldots,f_m$. Let $J_k$, where $k=1,2$, denote the ideal generated by $g_1,\ldots,g_{n-1},h^k$. Of course, $V_C(J)=V_C(J_1)=V_C(J_2)$. By Lemma \ref{nr16}, $\dim_R (J/J_1)<\infty$ and $\dim_R(J/J_2)<\infty$. Since $J_2\subset J_1\subset J$, $\dim_R (J_1/J_2)<\infty$, and then $\# (\mathbb{N}pis(J_1)\setminus\mathbb{N}pis(J_2))<\infty$. Put $\xi = 1+\max\, \{|\beta|\}-\min\, \{|\alpha|\}$, where $\beta\in\mathbb{N}pis(J_1)\setminus\mathbb{N}pis(J_2)$ and $\alpha\in{\cal B}(J_1)\setminus\mathbb{N}pis(J_2)$. (We take $\xi=1$ if ${\cal B}(J_1)={\cal B}(J_2)$.) Let $k>\xi$ be an even positive integer, and let $\omega:\mathbb{R}^n,{\bf 0}\rightarrow\mathbb{R},0$ be a non-negative polynomial which is $k$--flat at the origin and such that $V_R(g_1,\ldots,g_{n-1},\omega)=\{{\bf 0}\}$. Let \[h_+=\frac{\partial(h+\omega,g_1,\ldots,g_{n-1})}{\partial(x_1,x_2,\ldots,x_n)},\] \[h_-=\frac{\partial(h-\omega,g_1,\ldots,g_{n-1})}{\partial(x_1,x_2,\ldots,x_n)},\] \[H_+=(h_+,g_1,\ldots,g_{n-1}):\mathbb{R}^n,{\bf 0}\rightarrow \mathbb{R}^n,{\bf 0}\ ,\] \[H_-=(h_-,g_1,\ldots,g_{n-1}):\mathbb{R}^n,{\bf 0}\rightarrow \mathbb{R}^n,{\bf 0}\ .\] As an immediate consequence of Theorem 2.5 in \cite{szafraniec14} we get \begin{theorem}\label{nr18} Assume that $f_1,\ldots,f_m\in{\bf m}_0\cap{\cal O}_{R,0}$, $\operatorname{rank}\, D(f_1,\ldots,f_m)({\bf 0})<n-1$, and $V_C(f_1,\ldots,f_m)$ is a curve having an isolated singularity at the origin. Then the origin is isolated in $H_+^{-1}({\bf 0})$ and $H_-^{-1}({\bf 0})$, so that the local topological degrees $\deg_0(H_+)$ and $\deg_0(H_-)$ are defined. Moreover, \[b_0=\deg_0(H_+)-\deg_0(H_-)\ ,\] where $b_0$ is the number of real half-branches in $V_R(f_1,\ldots,f_m)$ emanating from the origin.
\end{theorem} \begin{prop}\label{nr19} If $\dim \left< f_1,\ldots,f_m\right> /\left< g_1,\ldots,g_{n-1}\right> <\infty$ and both $V_C(f_1,\ldots,f_m)$, $V_C(g_1,\ldots,g_{n-1})$ have an isolated singularity at the origin then $b_0=2\deg_0(H_1)$, where \[H_1=\left(\frac{\partial(\Omega,g_1,\ldots,g_{n-1})}{\partial(x_1,x_2,\ldots,x_n)},g_1,\ldots,g_{n-1}\right):\mathbb{R}^n,{\bf 0}\rightarrow\mathbb{R}^n,{\bf 0}\] is a mapping having an isolated zero at ${\bf 0}$ and $\Omega=x_1^2+\ldots+x_n^2$. \end{prop} \begin{proof} By Lemma \ref{nr16}, $b_0$ equals the number of half-branches in $V_R(g_1,\ldots,g_{n-1})$ emanating from the origin. So according to \cite{aokietal1,aokietal2}, $b_0=2\deg_0(H_1)$. \end{proof} \begin{ex} Let $f_1(x,y)=x^3$, $f_2(x,y)=x(x-y)$, and let $I_1=\left< f_1,f_2,y\right>$. One may check that ${\cal B}(I_1)=\{(2,0),(0,1)\}$, so that $\#(\mathbb{N}^2\setminus\mathbb{N}pis(I_1))=2$. By Corollary \ref{nr10} and Proposition \ref{nr12}, $\dim V_C(f_1,f_2)\leq 1$. Let $I_2$ be the ideal generated by $f_1,f_2$ and all $1\times 1$-minors of $D(f_1,f_2)$, i.e. $I_2=\left< f_1,f_2,3x^2,2x-y,-x \right>$. One may check that ${\cal B}(I_2)=\{(1,0),(0,1)\}$, so that $\#(\mathbb{N}^2\setminus\mathbb{N}pis(I_2))=1$. By Proposition \ref{nr13}, $V_C(f_1,f_2)$ has an isolated singularity at the origin. Let $J=\left< f_1,f_2\right> $. One may check that ${\cal B}(J)=\{(2,0),(1,2)\}$. Take $g_1=f_1-6 f_2$. Let $I_3$ be the ideal generated by $g_1$ and all $1\times 1$-minors of $D(g_1)$, i.e. $I_3=\left< g_1,3x^2-12 x+6 y, 6 x\right>$. One may check that ${\cal B}(I_3)=\{(1,0),(0,1)\}$, so that $V_C(g_1)$ has an isolated singularity at the origin. Take $h=f_1+5 f_2$. Let $J_1=\left<g_1,h\right>$, $J_2=\left< g_1,h^2\right>$. One may check that ${\cal B}(J_1)=\{(2,0),(1,2)\}$, ${\cal B}(J_2)=\{(2,0),(1,5)\}$. We have $J_1\subset J$ and $\mathbb{N}pis(J)\setminus\mathbb{N}pis(J_1)=\emptyset$, so that $\dim_R (J/J_1)=0$. By Lemma \ref{nr16}, $V_C(f_1,f_2)=V_C(g_1,h)$. Moreover, ${\cal B}(J_1)\setminus\mathbb{N}pis(J_2)=\{(1,2)\}$. As $\mathbb{N}pis(J_1)\setminus\mathbb{N}pis(J_2)=\{(1,2),(1,3),(1,4)\}$, then \[\max \{|\beta|\}=5,\mbox{ where }\beta\in\mathbb{N}pis(J_1)\setminus\mathbb{N}pis(J_2),\] \[\min\{|\alpha|\}=3, \mbox{ where }\alpha\in{\cal B}(J_1)\setminus\mathbb{N}pis(J_2),\] \[\xi=1+5-3=3\ .\] Take $k=4>\xi=3$. Of course $\omega=x^4+y^4$ is a non-negative polynomial which is $4$-flat at the origin and $V_R(g_1,\omega)=\{{\bf 0}\}$. Set \[h_+=\frac{\partial(h+\omega,g_1)}{\partial(x,y)},\ \ h_-=\frac{\partial(h-\omega,g_1)}{\partial(x,y)},\] \[H_+=(h_+,g_1),\ \ H_-=(h_-,g_1).\] Using the computer program written by Andrzej {\L}\k{e}cki one may compute \[\deg_0(H_+)=1,\ \ \deg_0(H_-)=-1\ .\] According to Theorem \ref{nr18}, there are two half-branches in $V_R(f_1,f_2)$ emanating from the origin. It is worth noticing that for any $g=a f_1+b f_2$, where $a,b\in\mathbb{R}$, if $b\neq 0$ then there are four half-branches in $V_R(g)$ emanating from the origin. If $b=0$ then $V_C(g)$ does not have an isolated singularity at the origin. So in both cases one cannot apply the apparently simpler Proposition \ref{nr19}. \end{ex}
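For readers who wish to reproduce the symbolic part of the above example, here is a small, purely illustrative Python/SymPy sketch that builds $g_1$, $h$, $\omega$ and the Jacobian determinants $h_\pm$ used in Theorem \ref{nr18}; the local degrees $\deg_0(H_\pm)$ themselves were computed with the program of \cite{leckiszafraniec1}, not with this sketch.
\begin{verbatim}
import sympy as sp

x, y = sp.symbols('x y', real=True)

# Data of the example above.
f1, f2 = x**3, x*(x - y)
g1 = f1 - 6*f2            # g_1 = f_1 - 6 f_2
h = f1 + 5*f2             # h   = f_1 + 5 f_2
omega = x**4 + y**4       # non-negative and 4-flat at the origin

def jacobian_det(p, q):
    # Jacobian determinant  d(p, q) / d(x, y).
    return sp.expand(sp.Matrix([[sp.diff(p, x), sp.diff(p, y)],
                                [sp.diff(q, x), sp.diff(q, y)]]).det())

h_plus = jacobian_det(h + omega, g1)
h_minus = jacobian_det(h - omega, g1)
# H_+ = (h_plus, g1) and H_- = (h_minus, g1) are the maps whose local
# degrees at the origin give b_0 = deg_0(H_+) - deg_0(H_-).
print(h_plus)
print(h_minus)
\end{verbatim}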
Let $J_1=\left< g_1,g_2,h \right>$, $J_2=\left< g_1,g_2,h^2\right>$. One may check that\\ ${\cal B}(J_1)=\{ (4,0,0),(2,3,0),(1,4,0),(0,5,1) \}$,\\ ${\cal B}(J_2)=\{ (4,0,0),(2,3,0),(1,6,0),(1,5,2),(0,7,2),(1,4,5),(0,6,5),(0,5,6),$\\ $(0,12,1) \}$,\\ so that \[\max\{|\beta|\}=12,\mbox{ where }\beta\in\mathbb{N}pis(J_1)\setminus\mathbb{N}pis(J_2),\] \[\min\{|\alpha|\}=5,\mbox{ where }\alpha\in{\cal B}(J_1)\setminus\mathbb{N}pis(J_2),\] \[\xi=1+12-5=8.\] Take $k=10>\xi=8$. Of course, $\omega=x^{10}+y^{10}+z^{10}$ is a non-negative polynomial which is 10-flat at the origin and $V_R(g_1,g_2,\omega)=\{{\bf 0}\}$. Set $h_+,h_-,H_+,H_-$ as in Theorem \ref{nr18}. One may check that $\dim{\cal O}_{C,0}/\left< h_{\pm},g_1,g_2\right><\infty$, so that $V_C(h_{\pm},g_1,g_2)=\{{\bf 0}\}$. By Proposition \ref{wstawka1}, $V_C(g_1,g_2)$, as well as $V_C(f_1,f_2,f_3)$, has an isolated singularity at the origin. Using the computer program one may compute $\deg_0(H_+)=3$, $\deg_0(H_-)=-1$. According to Theorem \ref{nr18}, there are 4 half-branches emanating from the origin in the set where vector fields $v$ and $w$ are co-linear. \end{ex} \section{Germs from $\mathbb{R}^2$ to $\mathbb{R}^3$} \label{sec_ip} Mappings from $(\mathbb{K}^2,{\bf 0})$ to $(\mathbb{K}^3,{\bf 0})$ are a natural object of study in the theory of singularities. In \cite{mond1} Mond has classified simple smooth germs from $(\mathbb{R}^2,{\bf 0})$ to $(\mathbb{R}^3,{\bf 0})$, in \cite{mond2} he was investigating multiple points of complex germs from $(\mathbb{C}^2,{\bf 0})$ to $(\mathbb{C}^3,{\bf 0})$. Some constructions presented in this section are similar to the ones in \cite{mond2}. In \cite{mararnunoballesteros} Marar and Nu\~{n}o-Ballesteros study finitely determined map germs $u$ from $(\mathbb{R}^2,{\bf 0})$ to $(\mathbb{R}^3,{\bf 0})$ and the curve obtained as the intersection of its image with a sufficiently small sphere centered at the origin (the associated doodle of $u$). The set of singular points of the image of $u$ is the closure of the double points curve, denoted by $D^2(u)$. They proved \cite[Theorem 4.2]{mararnunoballesteros}, that if $u$ is finitely determined, has type $\Sigma ^{1,0}$, and its double point curve $D^2(u)$ has $r$ real half--branches, its associated doodle is equivalent to $\mu _r\colon [0;2\pi]\longrightarrow \mathbb{R}^2$ given by $\mu _r(t)=(\sin t,\sin rt)$ if $r$ is even, or $\mu _r(t)=(\sin t,\cos rt)$ if $r$ is odd. Moreover $u$ is topologically equivalent to the map germ $M_r(x,y)=(x,y^2,\operatorname{Im}((x+iy)^{r+1}))$, where $\operatorname{Im}(z)$ denotes the imaginary part of $z\in \mathbb{C}$ \cite[Corollary 4.5]{mararnunoballesteros}. If $u$ is finitely determined, has fold type, its 2--jet belongs to the orbit $(x,xy,0)$, and its double point curve $D^2(u)$ has $r$ real half--branches, then $u$ is topologically equivalent to the map germ $M_r$ \cite[Theorem 5.3]{mararnunoballesteros}. Moreover, if $u$ has no triple points then \cite[Lemma 4.1]{mararnunoballesteros} it has type $\Sigma ^{1,0}$. In the remainder of this section we study analytic germs which are not necessarily finitely determined. Assume that $u=(u_1,u_2 ,u_3)\colon(\mathbb{K} ^{2},{\bf 0}) \longrightarrow (\mathbb{K} ^{3},{\bf 0})$ is an analytic germ. 
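As a guiding illustration, added here only for orientation and not used in what follows, consider for a moment the germ $M_1$ above, $M_1(x,y)=(x,y^2,2xy)$. A direct computation shows that $M_1(p)=M_1(q)$ with $p\neq q$ forces $p=(0,t)$ and $q=(0,-t)$ with $t\neq 0$, so that \[ D^2(M_1)=\{(0,s,0)\ |\ s\geqslant 0\}\ , \] a set with exactly one real half--branch emanating from the origin. The construction presented below recovers such counts algebraically in situations where the double point set cannot be determined by inspection.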
For $x=(x_1,x_2)$, $y=(y_1,y_2)$, $1\leqslant i \leqslant 3$, define \[ w_i(x,y)=u_i(x)-u_i(y), \] \[w=(w_1,w_2,w_3)\colon\mathbb{K}^2\times\mathbb{K}^2=\mathbb{K} ^4 \longrightarrow \mathbb{K}^3.\] Then $w$ is also analytic, and there exist analytic germs $c_{ik}$ such that \[w_i(x,y)=c_{i1}(x,y)(x_1-y_1)+ c_{i2}(x,y)(x_2-y_2).\] The germs $c_{ik}$ are not uniquely determined. We fix germs $c_{ik}$, and for $1\leqslant i<j \leqslant 3$ we define \[ W_{ij}= \left | \begin{array}{cc} c_{i1} & c_{i2} \\ c_{j1} & c_{j2} \end{array} \right |. \] \begin{lemma} \label{cij(x,x)} $c_{ik}(x,x)=\frac{\partial u_i}{\partial x_k}(x)$, and then $W_{ij}(x,x)=\frac{\partial(u_i,u_j)}{\partial(x_1,x_2)}(x)$. In particular, if $u$ has a critical point at the origin then all $W_{ij}({\bf 0})=0$. \end{lemma} \begin{proof} Put \[\frac{\partial}{\partial z_k}:=\frac{1}{2}\left ( \frac{\partial}{\partial x_k}- \frac{\partial}{\partial y_k}\right ).\] Then \[\frac{\partial}{\partial z_k}(x_r-y_r)=\begin{cases} 0, & k\neq r\\ 1, & k=r \end{cases}\ ,\] \[\frac{\partial w_i}{\partial z_k}=\left (\frac{\partial c_{i1}} {\partial z_k}(x_1-y_1)+\frac{\partial c_{i2}} {\partial z_k}(x_2-y_2)\right )+c_{ik}\ ,\] \[\frac{\partial w_i}{\partial z_k}(x,x)=c_{ik}(x,x).\] On the other hand we have \begin{align*} \frac{\partial w_i}{\partial z_k}(x,y) & =\frac{1}{2}\left ( \frac{\partial u_i}{\partial x_k}(x)+\frac{\partial u_i}{\partial x_k}(y)\right ), \\ \frac{\partial w_i}{\partial z_k}(x,x) & =\frac{\partial u_i}{\partial x_k}(x). \end{align*} Thus \[ c_{ik}(x,x)=\frac{\partial u_i}{\partial x_k}(x).\] \end{proof} By Cramer's rule \begin{equation}\label{wzKramer} (x_1-y_1)W_{ij}= \left | \begin{array}{cc} w_{i} & c_{i2} \\ w_{j} & c_{j2} \end{array} \right |, \qquad (x_2-y_2)W_{ij}= \left | \begin{array}{cc} c_{i1} & w_{i}\\ c_{j1} & w_{j} \end{array} \right |. \end{equation} Let us assume that $u$ has an isolated critical point at the origin, i.e. there exists a neighbourhood of the origin where the derivative matrix of $u$ has rank $2$, except at the origin. Then $u|\mathbb{K} ^2\setminus \{{\bf 0}\}$ is a germ of an immersion. In particular, $u$ has an isolated singularity if the origin is an isolated zero of the ideal generated by all $2\times 2$--minors of the derivative matrix of $u$ Notice that the set $W=w^{-1} ({\bf 0})$ contains all the self--intersection points $(p,q)$ of $u$. More precisely, if $\Delta =\{ (x,x)\ | \ x\in \mathbb{K}^2\}$, then $W\setminus \Delta$ consists of all the points $(p,q)$ and $(q,p)$, where $u(p)=u(q)$ is the double point of $u$. Put $f=(f_1,\ldots,f_6)=(w_1,w_2,w_3,W_{12},W_{13},W_{23}):\mathbb{K}^4=\mathbb{K}^2\times\mathbb{K}^2\rightarrow \mathbb{K}^6$ and \[ V =W\cap \bigcap _{1\leqslant i<j \leqslant 3} W_{ij}^{-1} ({\bf 0})=f^{-1}({\bf 0})=V(f_1,\ldots,f_6)\] where by Lemma \ref{cij(x,x)}, $f({\bf 0})={\bf 0}$, and then ${\bf 0}\in V$. Obviously $(x,y)\in \Delta $ if and only if $x_1-y_1=x_2-y_2=0$. We have $\Delta \subset W$, $V\setminus \Delta \subset W\setminus \Delta$. If $(x,y)\in W\setminus \Delta$ then by (\ref{wzKramer}) all $W_{ij}(x,y)=0$, so \begin{equation} \label{ab} V\setminus \Delta = W\setminus \Delta .\end{equation} \begin{lemma} \label{u_imm} $V\cap \Delta=\{ {\bf 0}\}$ as germs of sets if and only if $u$ has an isolated critical point at the origin. \end{lemma} \begin{proof} Let $Du(x)$ denote the derivative matrix of $u$ at $x$. 
By Lemma \ref{cij(x,x)}, for $(x,x)\in \Delta$ the determinant $W_{ij}(x,x)$ equals some $2\times 2$--minor of the derivative matrix $Du(x)$. As $\Delta \subset W$, the origin is an isolated critical point of $u$ if and only if it is an isolated point of $V\cap \Delta$. \end{proof} By the previous lemma and (\ref{ab}) we get \begin{cor} \label{int_set} If $u$ has an isolated critical point at the origin then $V\setminus \{{\bf 0}\}=f^{-1} ({\bf 0})\setminus \{{\bf 0}\}$ is the germ of the set of self--intersection points of $u$, and $D^2(u)=\{u(p)\ |\ \ (p,q)\in V\}$. \end{cor} We shall say that a self--intersection $(p,q)$, where $u(p)=u(q)$, is \emph{transverse}, if \[ Du(p)\mathbb{K}^2+Du(q)\mathbb{K}^2=\mathbb{K}^3. \] Since \begin{align*}Dw(x,y)&= \left [ \begin{matrix} \frac{\partial u_1}{\partial x_1}(x) & \frac{\partial u_1}{\partial x_2}(x) & -\frac{\partial u_1}{\partial x_1}(y) & -\frac{\partial u_1}{\partial x_2}(y) \\ \frac{\partial u_2}{\partial x_1}(x) & \frac{\partial u_2}{\partial x_2}(x) & -\frac{\partial u_2}{\partial x_1}(y) & -\frac{\partial u_2}{\partial x_2}(y) \\ \frac{\partial u_3}{\partial x_1}(x) & \frac{\partial u_3}{\partial x_2}(x) & -\frac{\partial u_3}{\partial x_1}(y) & -\frac{\partial u_3}{\partial x_2}(y) \end{matrix} \right ] \\ &= \left [ Du(x)\ |\ -Du(y) \right ], \end{align*} a self--intersection $(p,q)$ in $V\setminus \{ {\bf 0}\}$ is transverse if and only if $\mathop{\rm rank}\nolimits Dw(p,q)=3$. Let ${\cal O}_{K,{\bf 0}}$ denote the ring of germs of analytic functions at ${\bf 0}\in\mathbb{K}^4=\mathbb{K}^2\times\mathbb{K}^2$. \begin{cor} \label{transv_inter} The germ $u$ has only transverse self--intersections if and only if $\mathop{\rm rank}\nolimits Dw(p,q)=3$ for each $(p,q)\in V\setminus\{{\bf 0}\}$. In particular, if the ideal in ${\cal O}_{K,{\bf 0}}$ generated by $f_1,\ldots,f_6$ and all $3\times 3$--minors of $Dw$ has an isolated zero at the origin, then $u$ has only transverse self--intersections near the origin. \end{cor} \begin{prop} \label{curve} If the germ $u:\mathbb{K}^2,{\bf 0}\rightarrow\mathbb{K}^3,{\bf 0}$ has an isolated critical point at ${\bf 0}$ and has only transverse self-intersections, then $V=V(f_1,\ldots,f_6)\subset\mathbb{K}^4$ is a curve having an isolated singularity at the origin, i.e. the origin is isolated in $\{(p,q)\in V\ |\ \mathop{\rm rank}\nolimits Df(p,q)<3\}$. If that is the case and $\mathbb{K}=\mathbb{R}$ then $V$ is an union of a finite collection of half--branches. \end{prop} \begin{proof} By Corollary \ref{int_set}, $V$ is the set of self--intersections of $u$. In some neighbourhood of the origin the rank of the matrix $Dw(p,q)$ equals $3$ at self--intersection points $(p,q)$, so the rank of the matrix $Df(p,q)$ is greater or equal to $3$. Hence $V$ is a curve having an isolated singularity at the origin. \end{proof} Now we shall show how to check that $u$ has no triple points, i.e. such points $(p,q,r)$ that $p\neq q \neq r$ and $u(p)=u(q)=u(r)$. Let us define $s\colon \mathbb{K}^6 \longrightarrow \mathbb{K}^6$, $s(x,y,z)=(w(x,y),w(y,z))$, where $(x,y,z)=(x_1,x_2,y_1,y_2,z_1,z_2)\in \mathbb{K} ^6$, and let \[\Sigma=\{ (x,y,z)\in \mathbb{K} ^6 \ |\ x=y\vee x=z \vee y=z \}.\] Then $s^{-1} ({\bf 0})\setminus \Sigma$ consists of such points $(p,q,r)$ that $u(p)=u(q)=u(r)$ is a triple point of $u$. For $i=1,2,3$ we have \begin{align*} w_i(x,y)&= c_{i1}(x,y)(x_1-y_1)+ c_{i2}(x,y)(x_2-y_2)\\ w_i(y,z)&= c_{i1}(y,z)(y_1-z_1)+ c_{i2}(y,z)(y_2-z_2)\\ w_i(x,z)&=c_{i1}(x,z)(x_1-z_1)+ c_{i2}(x,z)(x_2-z_2). 
\end{align*} Define \[ \tilde{V}=\{ (x,y,z)\in s^{-1} ({\bf 0})\ |\ (x,y)\in V \wedge (y,z)\in V \wedge (x,z)\in V\}.\] Applying the same arguments as above we obtain \begin{equation} \label{ab2} s^{-1} ({\bf 0})\setminus \Sigma = \tilde{V}\setminus \Sigma\ . \end{equation} \begin{lemma} \label{tp_set} If $u$ has an isolated critical point at the origin then $\tilde{V}\cap \Sigma=\{ {\bf 0}\}$ as germs of sets. \end{lemma} \begin{proof} By Lemma \ref{cij(x,x)}, the determinants $W_{ij}(x,x)$ are the $2\times 2$--minors of the derivative matrix $Du(x)$. For each $x\neq {\bf 0}$ close to ${\bf 0}$ there exists $W_{ij}(x,x)\neq 0$. Then for each $t\in \mathbb{K}^2$ neither $(x,x,t)$ nor $(x,t,x)$ nor $(t,x,x)$ belongs to $\tilde{V}$. If points $(0,0,t)$, $(0,t,0)$, $(t,0,0)$ belong to $\tilde{V}$ then $t\in u^{-1} ({\bf 0})$, but ${\bf 0}$ is isolated in the zeroes set of $u$. So we obtain the equality of germs $\tilde{V}\cap \Sigma=\{ {\bf 0}\}$. \end{proof} Let \[ \begin{array}{lll} S_1(x,y,z)=w_1(x,y) & S_2(x,y,z)=w_2(x,y) & S_3(x,y,z)=w_3(x,y) \\ S_4(x,y,z)=w_1(y,z) & S_5(x,y,z)=w_2(y,z) & S_6(x,y,z)=w_3(y,z) \\ S_7(x,y,z)=W_{12}(x,y) & S_8(x,y,z)=W_{13}(x,y) & S_9(x,y,z)=W_{23}(x,y) \\ S_{10}(x,y,z)=W_{12}(y,z) & S_{11}(x,y,z)=W_{13}(y,z) & S_{12}(x,y,z)=W_{23}(y,z) \\ S_{13}(x,y,z)=W_{12}(x,z) & S_{14}(x,y,z)=W_{13}(x,z) & S_{15}(x,y,z)=W_{23}(x,z). \end{array} \] If $u$ has an isolated critical point at the origin then $\tilde{V}\setminus \Sigma = \tilde{V}\setminus \{{\bf 0}\}= V(S_1,\ldots,S_{15})$ is the set of triple points. Let us denote by $\mathbb{K}\{x,y,z\}$ the ring of convergent power series in variables $x_1,x_2,y_1,y_2,z_1,z_2$. We get \begin{cor} \label{triple} If $u$ has an isolated singular point at the origin and \[\dim_K \mathbb{K}\{x,y,z\}/\langle S_1,\ldots,S_{15} \rangle<+\infty,\] then $u$ has no triple points. Hence, if the dimension is infinite then $u$ may have triple points. \end{cor} \begin{theorem}\label{doublenumber} Assume that an analytic germ $u:\mathbb{R}^2,{\bf 0}\rightarrow\mathbb{R}^3,{\bf 0}$ has an isolated critical point at ${\bf 0}$, all self--intersections are transverse, and there are no triple points. Then $V=V(w_1,w_2,w_3,W_{12},W_{13},W_{23})$ is a curve having an isolated singular point at the origin, so that $V$ is an union of a finite collection of half--branches. Each half--branch in the set of double points $D^2(u)$ is represented by two half--branches in $V$. \end{theorem} Now we shall present three examples. In all the examples: \begin{itemize} \item $f=(f_1,\ldots ,f_6)=(w_1,w_2,w_3,W_{12},W_{13},W_{23})\colon (\mathbb{R} ^4,{\bf 0})\longrightarrow (\mathbb{R} ^6,{\bf 0})$ \item $I_2$ is the ideal generated by $f_i$'s and all $3\times 3$--minors of $D(f_1,\ldots ,f_6)$ \item $J=\left<f_1,\ldots ,f_6\right>$ \item $I_3$ is the ideal generated by $g_i$'s and all $3\times 3$--minors of $D(g_1,g_2,g_3)$ \item $J_1=\left<g_1,g_2,g_3,h\right>$ \item $J_2=\left<g_1,g_2,g_3,h^2\right>$ \item $H_+$ and $H_-$ are as in the Theorem \ref{nr18}. \end{itemize} \begin{ex} Let $u=(x_1,x_1 x_2+x_2^3,x_1x_2^2+\frac{9}{10}x_2^4)$. Then $u$ has an isolated critical point at the origin. 
First let us compute the germs $c_{ij}$: \noindent $c_{11}=1$\\ $c_{12}=0$\\ $c_{21}=x_2$\\ $c_{22}=y_1+x_2^2+x_2y_2+y_2^2$\\ $c_{31}=x_2^2$\\ $c_{32}=x_2y_1+y_1y_2+\frac{9}{10}x_2^3+ \frac{9}{10}x_2^2y_2+\frac{9}{10}x_2y_2^2+\frac{9}{10}y_2^3$ \noindent and the determinants $W_{ij}$: \noindent $W_{12}=y_1+x_2^2+x_2y_2+y_2^2$\\ $W_{13}=x_2y_1+y_1y_2+\frac{9}{10}x_2^3+\frac{9}{10}x_2^2y_2+ \frac{9}{10}x_2y_2^2+\frac{9}{10}y_2^3$\\ $W_{23}=x_2y_1y_2- \frac{1}{10}x_2^4-\frac{1}{10}x_2^3y_2-\frac{1}{10}x_2^2y_2^2+\frac{9}{10}x_2y_2^3$ We have ${\cal B} (I_2)=\{ (1,0,0,0),(0,0,1,0),(0,2,0,0),(0,1,0,1),(0,0,0,3)\}$. By Proposition \ref{nr13}, $V_C(f_1,\ldots,f_6)$ is a curve having an isolated singular point at the origin. We have ${\cal B} (J)=\{ (1,0,0,0),(0,0,1,0),(0,3,0,0)\}$. Take $g_1=f_1+f_6$, $g_2=f_2+f_5$, $g_3=f_3+f_4$. We have\\ ${\cal B} (I_3)=\{ (1,0,0,0),(0,0,1,0),(0,2,0,0),(0,1,0,1),(0,0,0,3)\}$, so that the curve $V_C(g_1,g_2,g_3)$ has an isolated singularity at the origin. Take $h=f_1$. We may compute ${\cal B} (J_1)=\{ (1,0,0,0),(0,0,1,0),(0,3,0,0) \}$, ${\cal B} (J_2)=\{ (1,0,0,0),(0,0,1,0),(0,3,0,0) \}$. We have $J_1\subset J$ and $\mathbb{N}pis (J)\setminus \mathbb{N}pis (J_1)=\emptyset$, so that $\dim _R (J/J_1)=0$. By Lemma \ref{nr16}, $V_C(f_1,\ldots ,f_6)=V_C(g_1,g_2,g_3,h)$. As ${\cal B} (J_1)={\cal B} (J_2)$, then $\xi=1$ and $k=2>1$. We compute \[ \deg _0(H_+)=3, \quad \deg _0(H_-)=-3.\] Applying Proposition \ref{transv_inter} and Corollary \ref{triple} we may check that all the self--intersections are transverse and $u$ has no triple points. According to Theorems \ref{nr18}, \ref{doublenumber}, there are $3$ half-branches in the set of double points $D^2(u)$. \end{ex} \begin{ex} Let $u=(x_1^2-2x_2^2,x_1x_2+x_1^3,x_1x_2-x_2^3)$. Then $u$ has an isolated critical point at the origin. We have\\ ${\cal B} (I_2)=\{ (2,0,0,0),(1,1,0,0),(0,2,0,0),(1,0,2,0),(0,1,2,0),(0,0,3,0),$\\ $(1,0,1,2),(0,1,1,2),(0,0,2,2),(1,0,0,3),(0,1,0,3),(0,0,1,3),(0,0,0,5)\}$. By Proposition \ref{nr13}, $V_C(f_1,\ldots,f_6)$ is a curve having an isolated singularity at the origin. We have\\ ${\cal B} (J)=\{ (2,0,0,0),(1,1,0,0),(0,2,0,0),(1,0,2,0),(0,1,2,0),(0,0,3,0)\}$. Take $g_1=f_1-3f_2+f_6$, $g_2=f_2-2f_5$, $g_3=f_3+3f_4+f_6$. We may compute\\ ${\cal B} (I_3)=\{ (2,0,0,0),(1,1,0,0),(0,2,0,0),(1,0,3,0),(0,1,3,0),(0,0,4,0),$\\ $(1,0,2,1),(0,1,2,1),(0,0,3,1),(1,0,1,3),(0,1,1,3),(0,0,2,3),(1,0,0,5),$\\ $(0,1,0,5),(0,0,1,5),(0,0,0,6)\}$, so that $V_C(g_1,g_2,g_3)$ has an isolated singularity at the origin. Take $h=f_1$. We may compute\\ ${\cal B} (J_1)=\{ (2,0,0,0),(1,1,0,0),(0,2,0,0),(1,0,2,0),(0,1,3,0),(0,0,4,0),$\\ $(0,1,2,1),(0,0,3,1) \}$,\\ ${\cal B} (J_2)=\{(2,0,0,0),(1,1,0,0),(0,2,0,0),(1,0,3,0),(0,1,3,0),(0,0,5,0),$\\ $(0,0,4,2),(1,0,2,4),(0,1,2,4),(0,0,3,4) \}$. We have $J_1\subset J$ and the set $\mathbb{N}pis (J)\setminus \mathbb{N}pis (J_1)$ is finite, so that $\dim _R (J/J_1)<\infty$. Then $V_C(f_1,\ldots ,f_6)=V_C(g_1,g_2,g_3,h)$ (see Lemma \ref{nr16}). We have\\ $\max _{\beta \in \mathbb{N}pis (J_1)\setminus \mathbb{N}pis (J_2)}\{ |\beta|\}=6$, $\min _{\alpha \in {\cal B} (J_1)\setminus \mathbb{N}pis (J_2)}\{ |\alpha|\}=3$, and we take $\xi=1+6-3=4$ and $k=6>4$. We compute \[ \deg _0(H_+)=1, \quad \deg _0(H_-)=-1.\] Applying Proposition \ref{transv_inter} and Corollary \ref{triple} we may check that all the self--intersections are transverse and $u$ has no triple points. According to Theorems \ref{nr18}, \ref{doublenumber}, there is one half-branch in the set of double points $D^2(u)$.
\end{ex} \begin{ex} Let $u=(x_1,x_1x_2+x_2^3,x_1x_2^2+x_2^4)$. Then $u$ has an isolated critical point at the origin. Applying Proposition \ref{transv_inter} and Corollary \ref{triple} we may check that all the self--intersections are transverse, and the germ $u$ may have triple points. We have ${\cal B} (I_2)=\{ (1,0,0,0),(0,0,1,0),(0,2,0,0),(0,1,0,1),(0,0,0,3)\}$. By Proposition \ref{nr13}, $V_C(f_1,\ldots, f_6)$ is a curve having an isolated singularity at the origin. We may compute ${\cal B} (J)=\{ (1,0,0,0),(0,0,1,0),(0,2,0,1)\}$. Take $g_1=f_1+f_6$, $g_2=f_2+f_5$, $g_3=f_3+f_4$. As above, we may check that $V_C(g_1,g_2,g_3)$ has an isolated singularity at the origin. Take $h=f_1$. We may compute ${\cal B} (J_1)=\{ (1,0,0,0),(0,0,1,0),(0,2,0,1) \}$, ${\cal B} (J_2)=\{ (1,0,0,0),(0,0,1,0),(0,2,0,1)\}$. We have $J_1\subset J$ and $\mathbb{N}pis (J)\setminus \mathbb{N}pis (J_1)=\emptyset$, so that $\dim _R (J/J_1)=0$. By Lemma \ref{nr16}, $V_C(f_1,\ldots ,f_6)=V_C(g_1,g_2,g_3,h)$. As ${\cal B} (J_1)={\cal B} (J_2)$, then $\xi=1$ and $k=2>1$. We compute \[ \deg _0(H_+)=3, \quad \deg _0(H_-)=-3.\] According to Theorem \ref{nr18}, there are $6$ half-branches in the set of self--intersection points of $u$. As $u$ may have triple points, we may only conclude that there are at most 3 half--branches in $D^2(u)$. In fact, the image $u(\mathbb{R}^2)$ contains one half--branch consisting of triple points (see \cite{mararnunoballesteros}). \end{ex} \end{document}
\begin{document} \title{Universal covers of commutative finite Morley rank groups } \operatorname{d}ate{Wed 7 Mar 23:41:12 CET 2018 } \providecommand{\acl}{\operatorname{acl}} \providecommand{\operatorname{d}cl}{\operatorname{dcl}} \providecommand{\tp}{\operatorname{tp}} \providecommand{\stp}{\operatorname{stp}} \providecommand{\eq}{\operatorname{eq}} \providecommand{\fin}{\operatorname{fin}} \providecommand{\grploc}{\operatorname{grploc}} \providecommand{\End}{\operatorname{End}} \providecommand{\Hom}{\operatorname{Hom}} \providecommand{\Tor}{\operatorname{Tor}} \providecommand{\Gal}{\operatorname{Gal}} \providecommand{\im}{\operatorname{im}} \providecommand{\pr}{\operatorname{pr}} \providecommand{\operatorname{d}eg}{\operatorname{deg}} \providecommand{\pureHull}{\operatorname{pureHull}} \providecommand{\Ext}{\operatorname{Ext}} \providecommand{\alg}{\operatorname{alg}} \providecommand{\G}{\mathbb{G}} \newcommand{\underline}{\underline} \providecommand{\acleq}{\acl^{\eq}} \providecommand{\operatorname{d}cleq}{\operatorname{d}cl^{\eq}} \providecommand{\restricted}{\!\!\restriction} \providecommand{\M}{\mathcal{M}} \providecommand{\monst}{\mathfrak{C}} \providecommand{\mapsonto}{\twoheadrightarrow} \providecommand{\Qbar}{\bar{\Q}} \renewcommand{\powerset}{\powerset} \renewcommand{\bigcup}{\bigcup} \newcommand{\mathcal{O}}{\mathcal{O}} \newcommand{\er^0}{\mathcal{O}^0} \providecommand{\pairing}[2]{ \left<{#1,#2}\right> } \providecommand{\thmref}[1]{Theorem~\ref{#1}} \providecommand{\lemref}[1]{Lemma~\ref{#1}} \providecommand{\corref}[1]{Corollary~\ref{#1}} \providecommand{\operatorname{d}efref}[1]{Definition~\ref{#1}} \providecommand{\propref}[1]{Proposition~\ref{#1}} \providecommand{\factref}[1]{Fact~\ref{#1}} \providecommand{\remref}[1]{Remark~\ref{#1}} \providecommand{\secref}[1]{Section~\ref{#1}} \providecommand{\ssecref}[1]{Subsection~\ref{#1}} \providecommand{\axref}[1]{(A\ref{#1})} \providecommand{\operatorname{d}edref}[1]{(D\ref{#1})} \theoremstyle{plain} \newtheorem{thm}{Theorem}[section] \newtheorem{theorem}[thm]{Theorem} \newtheorem{lemma}[thm]{Lemma} \newtheorem{proposition}[thm]{Proposition} \newtheorem{conjecture}[thm]{Conjecture} \newtheorem{fact}[thm]{Fact} \newtheorem*{fact*}{Fact} \newtheorem{corollary}[thm]{Corollary} \newtheorem{claim}[thm]{Claim} \newtheorem*{claim*}{Claim} \theoremstyle{definition} \newtheorem{definition}[thm]{Definition} \newtheorem*{definition*}{Definition} \newtheorem{definitions}[thm]{Definitions} \newtheorem*{definitions*}{Definitions} \newtheorem{notation}[thm]{Notation} \newtheorem*{notation*}{Notation} \newtheorem{axioms}[thm]{Axioms} \newtheorem{assumption}[thm]{Assumption} \theoremstyle{remark} \newtheorem{remark}[thm]{Remark} \newtheorem*{remark*}{Remark} \newtheorem{example}[thm]{Example} \newtheorem*{example*}{Example} \newtheorem{remarks}[thm]{Remarks} \newtheorem*{remarks*}{Remarks} \newtheorem{examples}[thm]{Examples} \newtheorem*{examples*}{Examples} \newtheorem{note}[thm]{Note} \newtheorem*{note*}{Note} \newtheorem{question}[thm]{Question} \newtheorem*{question*}{Question} \author{Martin Bays, Bradd Hart, and Anand Pillay} \maketitle \begin{abstract} We give an algebraic description of the structure of the analytic universal cover of a complex abelian variety which suffices to determine the structure up to isomorphism. More generally, we classify the models of theories of ``universal covers'' of rigid divisible commutative finite Morley rank groups. 
\end{abstract} \section{Introduction} \label{sec:intro} \subsection{Characterising universal covers of abelian varieties} \label{ssec:intro} Let $\G = \G_m^n$ be a complex algebraic torus, or let $\G$ be a complex abelian variety. Considering $\G(\C)$ as a complex Lie group, with $L\G=T_0(\G(\C))$ its (abelian) Lie algebra, the exponential map provides a surjective analytic homomorphism \[ \exp : L\G \twoheadrightarrow \G(\C) .\] Let $\mathcal{O} := \{ \eta \in \operatorname{End}(L\G) \;|\; \eta(\ker\exp) \subseteq \ker\exp \} \isom \End(\G)$ be the ring of $\C$-linear endomorphisms of $L\G$ which induce endomorphisms of $\G(\C)$; these are precisely the algebraic endomorphisms of $\G$. Consider $L\G$ as an $\mathcal{O}$-module. In this paper, we use model-theoretic techniques and Kummer theory to give a purely algebraic characterisation of the algebraic consequences of this analytic picture. \newcommand{\kerQ}{\spanofover{\ker(\exp)}{\Q}} At first sight, $\exp$ relates $L\G$ to $\G(\C)$ in a rather particular way. For example, if $a \in \G(\C)$ and $\exp(\alpha)=a$, then $\exp(\alpha/n)$ converges topologically to $0 \in \G(\C)$ - something which certainly needn't hold for an arbitrary $\mathcal{O}$-module homomorphism. We will show however that if we forget the topology and the analytic structure, leaving only the field structure on $\C$ and the $\mathcal{O}$-module structure on $L\G$, and so work up to field automorphisms of $\C$ and up to $\mathcal{O}$-module automorphisms of $L\G$, then $\exp$ is distinguished from other $\mathcal{O}$-module homomorphisms only by its interaction with the torsion subgroup $\G[\infty]$ of $\G$. More precisely, it is described by its restriction $\exp|_{\kerQ} : \kerQ \twoheadrightarrow \G[\infty]$ to the divisible subgroup generated by $\ker(\exp)$: once this restriction is chosen, there is a unique way, up to automorphisms, to extend it to $L\G$. \begin{theorem} \label{thm:catAbelian} Suppose $\G$ and the action of each $\eta\in\mathcal{O}$ are defined over a number field $k_0 \leq \C$. Suppose $\rho,\rho' : L\G \twoheadrightarrow \G(\C)$ are surjective $\mathcal{O}$-module homomorphisms, $\ker\rho'=\ker\rho$, and $\rho'\restricted_{\spanofover{\ker\rho'}{\Q}} = \rho\restricted_{\spanofover{\ker\rho}{\Q}}$. Then there exist an $\mathcal{O}$-module automorphism $\sigma\in\Aut_\mathcal{O}(L\G/\ker\rho)$ and a field automorphism $\tau\in\Aut(\C/k_0)$ such that the following diagram commutes, where $\tau : \G(\C) \rightarrow \G(\C)$ is the abstract group automorphism induced by $\tau$. \[ \xymatrix{ L\G \ar[d]^{\rho} \ar[r]^\sigma & L\G \ar[d]^{\rho'} \\ \G(\C) \ar[r]^\tau & \G(\C) \\ } \] \end{theorem} We will define an \underlinestyle{$\widehat{L}$-isomorphism} to be such a pair $(\sigma,\tau)$ of an $\mathcal{O}$-module isomorphism and a field isomorphism which agree on $\G$. So \thmref{thm:catAbelian} yields a characterisation of $\exp : L\G \twoheadrightarrow \G(\C)$: it is, up to $\widehat{L}$-isomorphism, the unique surjective $\mathcal{O}$-module homomorphism with the given kernel and the given restriction to the divisible subgroup generated by that kernel. We require here that $k_0$ is a number field in order to have Kummer theory available. We have a corresponding result in the case that $\G$ is a split semiabelian variety defined over a number field, but general semiabelian varieties are problematic due to failure of Kummer theory.
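To fix ideas, and purely as an illustration which is not used in the sequel, consider the simplest instance $\G=\G_m$. Here $L\G=\C$, $\exp$ is the usual exponential map, $\mathcal{O}\cong\Z$, and \[ \ker\exp = 2\pi i\Z, \qquad \spanofover{\ker\exp}{\Q} = 2\pi i\Q, \qquad \exp(2\pi i q)=e^{2\pi i q}\in\G_m[\infty] \quad (q\in\Q), \] so the data which \thmref{thm:catAbelian} asserts to be sufficient consists of the lattice $2\pi i\Z$ together with the induced surjection of $2\pi i\Q$ onto the group $\G_m[\infty]$ of roots of unity.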
We prove \thmref{thm:catAbelian} by classifying the models of the first order theory of $\exp$ in an appropriate language $\widehat{L}$. Our proof can be split into three stages: \begin{enumerate}[(i)]\item Kummer theory for abelian varieties explains the behaviour for finite extensions of $k_0$, and suffices to show uniqueness of the restriction of $\exp$ to $\exp^{-1}(\G(\Qbar))$; \item a function-field analogue of this Kummer theory allows us to extend the uniqueness to $\G(F)$ for $F$ an algebraically closed field of cardinality $\leq \aleph_1$; \item we extend to arbitrary cardinals (in particular the continuum, which without assuming the continuum hypothesis is not covered by (ii)) using arguments involving independent systems, based on techniques involved in Shelah's Main Gap theorem. \end{enumerate} In \cite{BGHKg}, it was found that the geometric Kummer theory of (ii) actually follows from a general model-theoretic principle, Zilber's Indecomposability Theorem, and hence holds in the generality of rigid (see below) commutative divisible finite Morley rank groups. This also turns out to be a natural level of generality for (iii), and it is in this context that we will actually work for most of this paper. We obtain an analogue of \thmref{thm:catAbelian} in this generality, \thmref{thm:classification} below - although since there is no analogue of (i) in such generality we get a correspondingly weaker result. This does allow us to remove the restrictions in \thmref{thm:catAbelian} and still get a uniqueness result: if $\G$ is an abelian variety over a field $k_0\leq \C$, then the exponential map $\exp : L\G \twoheadrightarrow \G(\C)$ is, up to $\widehat{L}$-isomorphism fixing $\exp^{-1}(\G(k_0^{\alg}))$, the unique surjective $\End(\G)$-homomorphism with kernel $\ker\exp$ which extends $\exp\restricted_{\exp^{-1}(\G(k_0^{\alg}))}$. We obtain an analogous result for semiabelian varieties as part of \ssecref{ssec:meroGrp}. We also obtain similar results for complex tori which are not abelian varieties, and for semiabelian varieties in positive characteristic, generalising \cite{BZCovers}. \subsection{Profinite covers and an outline of the paper} For $\G = \G(\C)$ as above, or more generally for $\G$ a commutative divisible finite Morley rank group, we associate a canonical structure $\widehat{\G}$ which we call the ``profinite universal cover'' of $\G$, defined as the inverse limit of copies of $\G$ with respect to the inverse system of multiplication-by-$n$ maps, $\widehat{\G} := \liminv [n] : \G \twoheadrightarrow \G$. In the case of $\G$ a complex semiabelian variety, this is the same construction that appears in the definition of the \'etale fundamental group - every finite \'etale cover of $\G$ is dominated by some $[n]$, so taking the inverse limit with respect to all $[n]$ amounts to taking the inverse limit with respect to all finite \'etale covers. So $\widehat{\G}$ can be identified as the ``\'etale universal cover'' of $\G$. In general, we can see $\widehat{\G}$ as a purely algebraic substitute for an analytic universal cover of $\G$. We will see below in Remark~\ref{rem:LGelem} one justification for this: in an appropriate language $\widehat{L}$, if $\G$ is a Lie group, then the Lie exponential map is an elementary submodel of the profinite universal cover $\widehat{\G}$. The results described in the previous subsection result from classifying the models of the first-order theory of $\widehat{\G}$. 
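Concretely, and again only by way of illustration, for $\G=\G_m(\C)$ an element of $\widehat{\G}$ is a compatible system $(g_n)_{n\geq 1}$ of non-zero complex numbers with $g_{nm}^{m}=g_n$, the covering map sends $(g_n)_n$ to $g_1$, and its kernel is the group of compatible systems of roots of unity, i.e.\ the inverse limit of the groups $\mu_n$ of $n$-th roots of unity. Thus there is an exact sequence \[ 0 \longrightarrow \hat{\Z}(1) \longrightarrow \widehat{\G_m(\C)} \longrightarrow \G_m(\C) \longrightarrow 0 , \] where $\hat{\Z}(1)\cong\hat{\Z}$; in particular the kernel of the profinite cover is profinite, in contrast to the kernel $2\pi i\Z\cong\Z$ of the analytic cover $\exp$.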
In \secref{sec:hat} we define the structure we wish to consider on $\widehat{\G}$, axiomatise its first-order theory $\widehat{T}$, prove quantifier elimination, and examine it in terms of stability theory. In \secref{sec:classification}, we give a classification of the models of $\widehat{T}$. In \secref{sec:abelian}, we return essentially to the context of \ssecref{ssec:intro}, specialising the abstract model theory of earlier sections to the case of algebraic groups. Here we also use Kummer theory to strengthen the classification (peeking inside the prime model); the necessary Kummer theory is presented in Appendix~\ref{sec:kummer}. Finally, in \secref{sec:examples}, we present some further natural examples of models of $\widehat{T}$ for various $\G$, to which our classification theorem applies. \subsection{The literature} We discuss the previous work on which this work builds. For $\G = \G_m$ the multiplicative group, \thmref{thm:catAbelian} was proven in \cite{ZCovers} and \cite{BZCovers}. It was proven for $\G$ an abelian variety in \cite{GavThesis} under the assumption of the continuum hypothesis, i.e.\ with only the first two of the three steps described above. A path to the full result was set out in \cite{ZCovers2}, and for $\G$ an elliptic curve the full result was obtained in \cite{BaysThesis}. These previous proofs of (iii) use algebraic techniques analogous to, but substantially more complicated and limited than, the model theoretic techniques of the present work. In previous work, the problem was considered one of categoricity in infinitary logic, and correspondingly the techniques applied were those of Shelah's theory of excellent classes, and more specifically Zilber's adaptation to Quasiminimal Excellent (QME) classes. It was key to the developments in this paper to instead consider the problem in terms of first-order classification theory. Although our results do not fall literally into the context of Shelah's classification theory for superstable theories - essentially because we are interested in models where the kernel of $\exp$ is rather unsaturated - and though ideas from the theory of Abstract Elementary Classes will still play a (largely implicit) role, the argument which allows us to get (iii) in the generality we do is an adaptation of Shelah's ``NOTOP'' argument, which reduces the condition of excellence in the first-order case to a simpler condition. In fact, while the current paper was in preparation, it was found that this same idea applies in the context of QME classes \cite{BHHKK}. For the benefit of any readers familiar with that paper, we mention how it relates to this paper. Our main results do not fit into the definition of QME, even if we assume the kernel to be countable: we consider finite Morley rank groups which are not necessarily almost strongly minimal; correspondingly, the covers are not even almost quasiminimal. In the case discussed above of a semiabelian variety $\G$, however, the covers structure can be seen as almost quasiminimal - and moreover it is bi-interpretable with the quasiminimal structure induced on the inverse image in the cover of a Kummer-generic (in the sense of \cite{BGHKg}) curve in $\G$ which generates $\G$ as a group. So in this case, (iii) above could be deduced from the main result of \cite{BHHKK}. \subsection{Notation} We use unmarked tuple notation throughout: if $A$ is a subset of a sort in a structure, we write $x\in A$ if $x$ is a finite tuple each co-ordinate of which is an element of $A$. 
We write $a \equiv _C b$ to mean that $\tp(a/C) = \tp(b/C)$, and we sometimes write $\sigma : A \xrightarrow{\cong}_C B$ to denote that $\sigma$ is an isomorphism which is the identity on $C \subseteq A\cap B$. If $G$ is an abelian group, we write $G[n]$ for the $n$-torsion, and we write $G[\infty]$ or $\Tor(G)$ for the torsion subgroup $\bigcup_n G[n]$. We introduce further specialised notation in \secref{sec:hat}, after making relevant definitions. \section{Profinite universal covers} \label{sec:hat} In this section, we consider the algebra and basic model theory of our ``profinite universal covers'' of divisible commutative finite Morley rank groups. \subsection{$\widehat{G}$} We begin with some elementary definitions and remarks concerning abstract commutative groups. If $G$ is a commutative group and $[n]$ is the multiplication-by-$n$ map, let $\widehat{G}$ be the inverse limit $\liminv [n] : G \rightarrow G$. Let $\rho_n : \widehat{G} \twoheadrightarrow G$ be the corresponding projections, so $[n]\rho_{nm} = \rho_m$. Let $\rho := \rho_1$. We often write elements of $\widehat{G}$ in the form $\gamma=(g_n)_n$, so then $\rho_n(\gamma)=g_n$. If $\theta : G \rightarrow H$, define $\widehat{\theta} : \widehat{G} \rightarrow \widehat{H}$ by $\widehat{\theta}((g_n)_n) = (\theta(g_n)_n)$. \begin{definition} The {\em divisible part} $G^o$ of an abelian group $G$ is the maximal divisible subgroup, $G^o = \bigcap_{n>0} nG$. \end{definition} We will mostly work in contexts in which $G^o$ is the ``connected component'' of $G$ in one sense or another, hence the notation. Say a commutative group $G$ is {\em divisible-by-finite} if its divisible part $G^o$ has finite index in $G$. We note that $\widehat{\cdot}$ is an exact functor on divisible-by-finite groups: \begin{lemma} \label{lem:hatExact} Suppose $0\rightarrow A\rightarrow B\rightarrow C\rightarrow 0$ is an exact sequence of divisible-by-finite groups. Then $0\rightarrow \widehat{A}\rightarrow \widehat{B}\rightarrow \widehat{C}\rightarrow 0$ is exact. \end{lemma} \begin{proof} Denote the given map $B \rightarrow C$ as $\theta$. The only difficulty is the surjectivity of $\widehat{\theta}:\widehat{B}\rightarrow \widehat{C}$. We may assume $A\rightarrow B$ is an inclusion. Factoring $\theta$ via $B/(A^o)$, we see that it suffices to prove the surjectivity of $\widehat{B}\rightarrow \widehat{C}$ under the assumption that $A$ is divisible or finite. \begin{enumerate}[(a)]\item Suppose $A$ is divisible. We first show that given any $n>0$, $b\in B$ and $c'\in C$ such that $\theta(b)=[n]c'$, there is $b' \in B$ such that $[n]b' = b$ and $\theta(b')=c'$. Say $\theta(b'')=c'$; then $\theta([n]b'')=[n]c'=\theta(b)$, so $b-[n]b'' \in A$. Say $a'\in A$ with $[n]a'=b-[n]b''$. Then $b' := b''+a'$ is as required. Given $\widehat{c}$, we can therefore inductively define $b_{n!}$ such that $[n+1]b_{(n+1)!}=b_{n!}$ and $\theta(b_{n!}) = \rho_{n!}(\widehat{c})$. Easily, there is a unique $\widehat{b} \in \widehat{B}$ such that $\rho_{n!}(\widehat{b})=b_{n!}$, and it satisfies $\widehat{\theta}(\widehat{b})=\widehat{c}$. \item Suppose $A$ is finite, say $[n]A=0$. Then $\theta$ factors $[n]$ - indeed, let $\phi$ be the map making the left triangle in the following diagram commute, then note that the right triangle also commutes. But $\widehat{[n]}$ is surjective, hence so is $\widehat{\theta}$. 
\[ \xymatrix{ B \ar[rr]^{[n]} \ar[rd]_\theta & & B \ar[rd]^ \theta & \\ & C \ar[rr]_{[n]} \ar[ru]^\phi & & C } \] \end{enumerate} \end{proof} \subsection{$\widehat{T}$} \label{ssec:That} Now let $\G$ be a connected commutative finite Morley rank group, and suppose moreover that it is divisible. Then $[n] : \G \twoheadrightarrow \G$ has finite kernel, and it follows that any definable subgroup $A \leq \G$ is divisible-by-finite, and its divisible part $A^o$ is its connected component in the model-theoretic sense, namely the smallest definable subgroup of finite index. Let $T := \Th(\G)$; we assume (by appropriate choice of language) that $T$ has quantifier elimination. We also assume that the language $L$ of $T$ is countable. Let $\widehat{T}$ be the theory of $(\widehat{\G},\G)$ in the two-sorted language $\widehat{L}$ consisting of the maps $\rho_n$ for each $n$, the full $T$-structure on $\G$, and, for each $\acleq(\emptyset )$-definable connected subgroup $H$ of $\G^n$, a predicate $\widehat{H}$ interpreted as the subgroup $\widehat{H} = \{ x \;|\; \Meet_n \rho_n(x) \in H \}$ of $\widehat{\G}^n$. We will see below that $\widehat{T}$ depends only on $T$. For quantifier elimination purposes, we actually assume (by expanding $T$ by constants if necessary) that every $\acleq(\emptyset )$-definable connected subgroup of $\G^n$ is $\emptyset $-definable. We say that $T$ is {\em rigid} if for $\G$ a saturated model of $T$, every definable connected subgroup of $\G^n$ is defined over $\acleq(\emptyset )$. Although the results of this section do not require rigidity, our language is chosen with it in mind. \begin{remark} As in the proof of \lemref{lem:hatExact}, any definable finite group cover of $\G$ is dominated by some $[n]$, so is ``seen'' by $\widehat{\G}$. Note that divisibility is crucial for this - for example, the Artin-Schreier map $x \mapsto x^p-x$ is a finite definable group cover of the additive group in $\operatorname{ACF}_p$ which isn't handled by our setup (c.f.\ \cite{BGHKg} where this issue is discussed). \end{remark} \begin{notation} Suppose $(\widetilde{M},M)$ is an $\widehat{L}$-structure. \begin{itemize}\item If $\widetilde{a} \in \widetilde{M}$ is a tuple, then we will write $a_n$ for $\rho_n(\widetilde{a})$, and $a$ for $\rho(\widetilde{a})$, and $\widehat{a}$ for $(a_n)_n$. Similarly, if $\widetilde{A} \subseteq \widetilde{M}$, we write $\widehat{A}$ for $\bigcup_n\rho_n(\widetilde{A})$. \item We will usually just write $\widetilde{M} \vDash \widehat{T}$ to mean $(\widetilde{M},M) \vDash \widehat{T}$. \item $\widehat{G}$ and $\widehat{H}$ will always denote the predicates corresponding to $\emptyset $-definable connected subgroups $G$ and $H$ of a cartesian power of $\G$. $\widehat{C}$ will denote a coset of some $\widehat{H}$. \item $\widehat{G}(\widetilde{a})$ is the definable set $\{ \widetilde{x} \;|\; (\widetilde{x},\widetilde{a}) \in \widehat{G} \}$, a coset of $\widehat{G}(0)$. Similarly for $G(a)$. \item $\ker$ is the definable set $\ker(\rho)$. \item $\ker^0$, the divisible part of $\ker$, is the $\Meet$-definable set $\Meet_n \rho_n(x) = 0$. \item Abusively, $\ker$ and $\ker^0$ also refer to the corresponding sets in cartesian powers of $\G$. 
\item If $\pr : \widetilde{M}^n \twoheadrightarrow \widetilde{M}^m$ is a co-ordinate projection, we also write $\pr$ for the corresponding co-ordinate projection $M^n \twoheadrightarrow M^m$, and we also write $\pr$ for the restriction of $\pr$ to a subset $\widetilde{A} \subseteq \widetilde{M}^n$ or $A \subseteq M^n$, leaving it to context to disambiguate. \item $\widehat{H}_0 := \widehat{H} \cap \ker^0$, a $\Q$-subspace of the $\Q$-vector space $\ker^0$. \end{itemize} \end{notation} \subsection{Axiomatisation and quantifier elimination} We now give a list of first-order axioms for a structure $\widetilde{M}$ in the language of $\widehat{T}$. We show in \propref{prop:QE} that these axioms axiomatise $\widehat{T}$. \begin{axioms} \mbox{} \begin{enumerate}[({A}1)]\item \label{ax:T}\label{ax:first} $M \vDash T$. \item \label{ax:grp} Let $\Gamma_+$ be the graph of the group operation on $\G$. Then $\widehat{\Gamma_+}$ is the graph of a commutative divisible torsion-free group operation, which we write as ``$+$'' and with respect to which we work in the following axioms; \item \label{ax:diag} Let $\Delta$ be the diagonal subgroup of $\G$, i.e.\ the graph of equality. Then $\widehat{\Delta}$ is the diagonal subgroup of $\widetilde{M}$. \item \label{ax:subgrps} Each $\widehat{H}$ is a divisible subgroup. \item \label{ax:coherent} $[m] \rho_{nm} = \rho_n$. \item \label{ax:epic} $\rho_n(\widehat{H}) = H$. \item \label{ax:lattice} $\widehat{G} \cap \widehat{H} = \widehat{H'^o}$ where $H':=G\cap H$. \item \label{ax:torsionKer} If $H\subseteq G$ and $\Tor(H)=\Tor(G)$, then $\widehat{H}\cap\ker=\widehat{G}\cap\ker$. \item \label{ax:proj}\label{ax:last} If a co-ordinate projection $\pr$ induces a surjection $\pr:G\twoheadrightarrow H$ with kernel $K$ then the corresponding co-ordinate projection induces a surjection $\pr:\widehat{G}\twoheadrightarrow \widehat{H}$ with kernel $\widehat{K^o}$. \end{enumerate} \end{axioms} In $\widehat{\G}$ and other models of $\widehat{T}$ which we will be considering in the applications, $\widehat{H}$ will be the divisible part of $\rho^{-1}(H)$. In this case, the following lemma substantially simplifies verification of the axioms. \begin{lemma} \label{lem:homMod} Suppose $V$ is a divisible torsion-free abelian group, and $\rho : V \twoheadrightarrow \G$ is a surjective homomorphism. For $H$ a connected definable subgroup of $\G^n$, let $\widehat{H}$ be the divisible part of $\rho^{-1}(H)$. Suppose that $\ker$ has trivial divisible part (i.e.\ $\widehat{0}=0$). Let $\rho_n(x) := \rho(x/n)$. Then with this structure, $V$ satisfies \axref{ax:first}-\axref{ax:last} if it satisfies \axref{ax:epic} and \axref{ax:proj}. \end{lemma} \begin{proof} \begin{enumerate}[({A}1)]\item Immediate. \item $\widehat{\Gamma_+} = \{ (x,y,z) \;|\; \forall n. x/n + y/n - z/n \in \ker \}$, which, since $\ker$ has trivial divisible part, is the graph of $+$ on $V$. \item Similar. \item Immediate from the definition of $\widehat{H}$. \item Immediate from the definition of $\rho_n$. \item Assumed. \item $\widehat{G} \cap \widehat{H}$ is a divisible subgroup of $\rho^{-1}(H'^o)$ (where $H'=G\cap H$), so is contained in $\widehat{H'^o}$. Similarly for the converse inclusion. \item Suppose $H \subseteq G$, $\zeta \in \widehat{G} \cap \ker$, and $\Tor(G) = \Tor(H)$. Then $\Q\zeta \subseteq \rho^{-1}(H)$, so $\zeta \in \widehat{H}$ by definition of $\widehat{H}$. \item Assumed.
\end{enumerate} \end{proof} \begin{lemma} \label{lem:hatMod} $\widehat{\G}$ satisfies the axioms \axref{ax:first}-\axref{ax:last}. \end{lemma} \begin{proof} We appeal to \lemref{lem:homMod}. \axref{ax:epic} and the fact that $\widehat{H}$ is the connected component of $\rho^{-1}(H)$ are immediate from the definitions. \axref{ax:proj} follows from \lemref{lem:hatExact}. \end{proof} \begin{proposition} \label{prop:QE} \axref{ax:first}-\axref{ax:last} axiomatise $\widehat{T}$, and $\widehat{T}$ has quantifier elimination. \end{proposition} \begin{proof} Let $\widehat{T}'$ be the theory axiomatised by \axref{ax:first}-\axref{ax:last}. We show that $\widehat{T}'$ is complete and admits quantifier elimination. Completeness and \lemref{lem:hatMod} then imply that $\widehat{T}'=\widehat{T}$. We first note some elementary deductions from the axioms: \begin{enumerate}[({D}1)]\item \label{ded:prod} For any $H$ and $G$, we have by \axref{ax:proj} applied to $\pr : H\times G \twoheadrightarrow G$ that $\widehat{H\times G} = \widehat{H} \times \widehat{G}$. \item \label{ded:hom} By \axref{ax:epic} applied to the graph of the group operation, the $\rho_n$ are homomorphisms. \item \label{ded:snake} In the context of \axref{ax:proj}, if $K/K^o$ has exponent $e$, then $e\cdot(\widehat{H}\cap\ker) \subseteq \pr(\widehat{G}\cap\ker)$. Indeed, this follows from \axref{ax:proj}, \axref{ax:epic}, and the snake lemma applied to the following diagram: \[ \xymatrix{ 0 \ar[r] & \widehat{K^o} \ar[r]\ar[d]^\rho & \widehat{G} \ar[r]\ar[d]^\rho & \widehat{H} \ar[r]\ar[d]^\rho & 0 \\ 0 \ar[r] & K \ar[r] & G \ar[r] & H \ar[r] & 0 } \] \end{enumerate} Now suppose we have $\omega$-saturated models $\widetilde{M},\widetilde{N}\vDash \widehat{T}'$, finite tuples $\widetilde{m}\in\widetilde{M}$ and $\widetilde{n}\in\widetilde{N}$ with the same quantifier-free type, $\widetilde{m} \equiv _{qf} \widetilde{n}$, and a point $\widetilde{m}'\in \widetilde{M}$. To conclude the proof, we must find $\widetilde{n}'\in \widetilde{N}$ such that $(\widetilde{m},\widetilde{m}') \equiv _{qf} (\widetilde{n},\widetilde{n}')$. Let $\widehat{H}$ be least such that it contains $\widetilde{m}$. This exists by $\omega$-stability of $T$ and \axref{ax:lattice}; c.f.\ Definition~\ref{defn:grploc} below. Let $\widehat{G}$ be least such that it contains $(\widetilde{m},\widetilde{m}')$, and let $\pr : (\widetilde{m},\widetilde{m}') \mapsto \widetilde{m}$ be the co-ordinate projection. We work in $\widehat{T}'$; when we make a statement which is expressible as a sentence in $\widehat{L}$, we mean that it is a consequence of $\widehat{T}'$. $\pr(\widehat{G}) = \widehat{\pr(G)}$ by \axref{ax:proj}, so $\widehat{H} \subseteq \pr(\widehat{G})$, and $\pr^{-1}(\widehat{H}) = \widehat{H} \times \widehat{\G} = \widehat{H\times \G} = \widehat{\pr^{-1}(H)}$, so $\widehat{G} \subseteq \pr^{-1}(\widehat{H})$ and so $\pr(\widehat{G}) \subseteq \widehat{H}$. So $\widehat{H} = \pr(\widehat{G})$, and so $\pr : \widehat{G} \twoheadrightarrow \widehat{H}$ and $\pr : G \twoheadrightarrow H$. \begin{claim} $\pr: \widehat{G}_0(\widetilde{N}) \twoheadrightarrow \widehat{H}_0(\widetilde{N})$. \end{claim} \begin{proof} Work in $\widetilde{N}$. Let $K$ be the kernel of $\pr : G\twoheadrightarrow H$, and suppose $K/K^o$ has exponent $e$, so by (D\ref{ded:snake}), $\pr : \widehat{G}\cap\ker \twoheadrightarrow e (\widehat{H}\cap\ker)$.
So for each $k$, \[ \pr : k (\widehat{G}\cap\ker) \twoheadrightarrow ke (\widehat{H}\cap\ker).\] But then by $\omega$-saturation of $\widetilde{N}$, \[ \pr : \widehat{G}_0 = \bigcap_k k (\widehat{G}\cap\ker) \twoheadrightarrow \bigcap_k ke (\widehat{H}\cap\ker) = \widehat{H}_0 . \] \end{proof} By QE in $T$ and $\omega$-saturation, we can find $\widetilde{n}' \in \widetilde{N}$ such that \begin{equation} \tag{*} (\widehat{m},\widehat{m}') \equiv _{qf} (\widehat{n},\widehat{n}') \end{equation} as infinite tuples; in particular, $\rho_k(\widetilde{n},\widetilde{n}') \in G$ for all $k$, and so by $\omega$-saturation we find $\widetilde{n}''\in \widehat{G}(\widetilde{N})$ such that $\rho_k(\widetilde{n},\widetilde{n}')=\rho_k(\widetilde{n}'')$ for all $k$, and so $\widetilde{\zeta} := \widetilde{n}'' - (\widetilde{n},\widetilde{n}') \in \ker^0(\widetilde{N})$. Then $\pr \widetilde{\zeta} \in \widehat{H}_0(\widetilde{N})$, and so by the Claim there is $\widetilde{\zeta}' \in \widehat{G}_0(\widetilde{N})$ with $\pr \widetilde{\zeta}' = \pr \widetilde{\zeta}$, and then $\widetilde{n}''-\widetilde{\zeta}' \in \widehat{G}$ and $\pr (\widetilde{n}''-\widetilde{\zeta}') = \widetilde{n}$. So we can assume $(\widetilde{n},\widetilde{n}') \in \widehat{G}$, while still satisfying (*). Now suppose $(\widetilde{n},\widetilde{n}')$ is contained in a proper subgroup $\widehat{G'} < \widehat{G}$. Then $\rho_k(\widetilde{m},\widetilde{m}')\in G'$ for each $k$, so by $\omega$-saturation, $(\widetilde{m},\widetilde{m}') \in \widehat{G'}+\widetilde{\zeta}$ for some $\widetilde{\zeta} \in \widehat{G}_0 \setminus \widehat{G'}_0$. So $\widehat{G'}_0(\widetilde{M}) < \widehat{G}_0(\widetilde{M})$, so, by \axref{ax:torsionKer}, $\Tor(G')<\Tor(G)$. Hence by \axref{ax:epic} and \axref{ax:coherent}, for each $k$ there is $\widetilde{\zeta}\in \widehat{G} \setminus \widehat{G'}$ with $\rho_k(\widetilde{\zeta})=0$, and so by saturation $\widehat{G'}_0(\widetilde{N}) < \widehat{G}_0(\widetilde{N})$. Now $\pr(\widehat{G'}) = \widehat{H}$, by the same argument which showed $\pr(\widehat{G}) = \widehat{H}$, and so the Claim applies also to $\widehat{G'}$. So $\pr(\widehat{G'}_0(\widetilde{N})) = \widehat{H}_0(\widetilde{N}) = \pr(\widehat{G}_0(\widetilde{N}))$. Hence we have a strict inclusion $\widehat{G'}_0(0) < \widehat{G}_0(0)$ in $\widetilde{N}$ for the fibres above $0\in \widehat{H}$. So by translating, we can find $\widetilde{n}'$ satisfying (*) and such that $(\widetilde{n},\widetilde{n}') \notin \widehat{G'}$. Now $\widehat{G}_0(0)$ is not covered by any finitely many such $\widehat{G'}_0(0)$, since they are proper $\Q$-subspaces. So we can avoid any finitely many such proper subgroups simultaneously, and so by $\omega$-saturation, we find $\widetilde{n}'$ satisfying (*) for which $\widehat{G}$ is least such that it contains $(\widetilde{n},\widetilde{n}')$. It follows, using \axref{ax:diag} for formulae involving equality on the sort $\widehat{\G}$, that $(\widetilde{m},\widetilde{m}') \equiv _{qf} (\widetilde{n},\widetilde{n}')$ as required. \end{proof} \begin{remark} Assuming that $T$ has finite Morley rank is a much stronger assumption than we need for this result. Really the result is about the reduct to the abelian structure of $\G$ with predicates for the $\acleq(\emptyset )$-definable subgroups of $\G^n$; all we require is that these subgroups have divisible definable connected components and that the descending chain condition on definable subgroups holds.
For example, $\G$ could be a real semiabelian variety $S(\R)$ with the semialgebraic structure of its interpretation in the real field. \end{remark} \begin{remark} The assumption that each $\acleq(\emptyset )$-definable connected subgroup $H$ is actually $\emptyset $-definable in $T$ is necessary, because $H$ is $\emptyset $-definable in $\widehat{T}$ as the image of $\widehat{H}$, while quantifier elimination implies that $\G$ has only the structure of $T$. \end{remark} \begin{corollary} \label{cor:QE} Suppose $\widetilde{B} \subseteq \widetilde{M} \vDash \widehat{T}$, and suppose $X \subseteq \widetilde{M}^n$ is definable over $\widetilde{B}$. There are $H_i$, $\widetilde{b}^i \in \widetilde{B}$, $m>0$, and $\emptyset \neq Y_i \subseteq H_i(b^i_m)$, with $i$ ranging through a finite set, and with each $Y_i$ being $T$-definable over $\widehat{B}$, such that \[ \bigcup_i \left({\widehat{H}_i(\widetilde{b}^i) \cap \rho_m^{-1}(Y_i)}\right) \subseteq X \subseteq \bigcup_i \widehat{H}_i(\widetilde{b}^i) .\] \end{corollary} \begin{proof} This follows from the QE, using \axref{ax:lattice} to reduce an intersection of cosets to a single coset, using \axref{ax:coherent} to reduce to a single $\rho_m$, and using that (by \axref{ax:epic}) $\rho_m(\widehat{H}(\widetilde{b})) \subseteq H(b_m)$. \end{proof} \begin{definition} \label{defn:grploc} Let $\widetilde{B} \subseteq \widetilde{M} \vDash \widehat{T}$, and $\widetilde{a} \in \widetilde{M}$. Then $\grploc(\widetilde{a}/\widetilde{B})$, the \underlinestyle{group locus} of $\widetilde{a}$ over $\widetilde{B}$, is the smallest set containing $\widetilde{a}$ of the form $\widehat{H}(\widetilde{b})$ with $\widetilde{b} \in \widetilde{B}$. \end{definition} \begin{remark} Such a smallest set exists, by \axref{ax:lattice} and $\omega$-stability of $T$. Clearly $\grploc(\widetilde{a}/\widetilde{B})$ is definable over $\widetilde{B}$; however, $\grploc(\widetilde{a}/\widetilde{B})$ need not be the smallest coset of a $\widehat{G}$ containing $\widetilde{a}$ which is definable over $\widetilde{B}$. For example, suppose $\G$ is a torsion-free group, so $\rho$ is an isomorphism, and consider a coset $\widehat{G}+\widetilde{a}$ with $a \in \operatorname{dcl}^{T}(B) \setminus B$. \end{remark} \begin{remark} \label{rem:grplocgrp} Using \axref{ax:grp}, \axref{ax:lattice}, and \axref{ax:proj}, we see that $\widehat{H}(\widetilde{b}+\widetilde{b}')$ can be rewritten in the form $\widehat{G}(\widetilde{b},\widetilde{b}')$, and similarly for $\widehat{H}(\widetilde{b})+\widetilde{b}'$. So in particular, $\grploc(\widetilde{a}/\widetilde{B}) = \grploc(\widetilde{a}/ \left<{\widetilde{B}}\right>)$ where $\left<{\widetilde{B}}\right>$ is the subgroup of $\widetilde{M}$ generated by $\widetilde{B}$. \end{remark} \begin{lemma} \label{lem:typesKerPres} Let $\widetilde{B} \subseteq \widetilde{M} \vDash \widehat{T}$, and $\widetilde{a} \in \widetilde{M}$. Let $\widehat{C} := \grploc(\widetilde{a}/\widetilde{B})$. Suppose $\ker(\widetilde{M}) \subseteq \widetilde{B}$. Then $p'(\widetilde{x}) := \tp(\widehat{a}/\widehat{B}) \cup \{ \widetilde{x} \in \widehat{C} \} \vDash \tp(\widetilde{a}/\widetilde{B})$. \end{lemma} \begin{proof} By the QE, we need only see that if $\widetilde{a}' \vDash p'$ in an elementary extension, then for all $\widehat{H}$ and all $\widetilde{b}\in \widetilde{B}$, $\widetilde{a} \in \widehat{H}(\widetilde{b})$ iff $\widetilde{a}' \in \widehat{H}(\widetilde{b})$.
Now $\widetilde{a} \in \widehat{H}(\widetilde{b})$ iff $\widehat{C} \leq \widehat{H}(\widetilde{b})$, so the forward direction is clear. For the converse, suppose $\widetilde{a}' \in \widehat{H}(\widetilde{b})$. Then $a' \in H(b)$, hence $a \in H(b)$. So $(\widetilde{a},\widetilde{b}) \in \widehat{H} + \ker(\widetilde{M})$, i.e.\ $\widetilde{a} \in \widehat{H}(\widetilde{b}+\widetilde{\zeta})+\widetilde{\xi}$ for some $\widetilde{\zeta},\widetilde{\xi} \in \ker(\widetilde{M})$. But $\ker(\widetilde{M}) \subseteq \widetilde{B}$, so by \remref{rem:grplocgrp}, $\widehat{C} \leq \widehat{H}(\widetilde{b}+\widetilde{\zeta})+\widetilde{\xi}$. So $\widetilde{a}' \in \widehat{H}(\widetilde{b}) \cap (\widehat{H}(\widetilde{b}+\widetilde{\zeta})+\widetilde{\xi})$; but this is an intersection of cosets of $\widehat{H}(0)$, so they are equal, and so $\widetilde{a} \in \widehat{H}(\widetilde{b})$. \end{proof} \begin{remark} It also follows from the QE that $\ker^0$ is indeed the connected component of the kernel in the model-theoretic sense, and more generally that $\widehat{H}+\ker^0$ is the connected component of $\rho^{-1}(H) = \widehat{H}+\ker$. \end{remark} \subsection{Lie exponential maps as models of $\widehat{T}$} \label{ssec:lie} Let $\G$ be a connected commutative Lie group which is also equipped with a finite Morley rank group structure for which the model-theoretically connected definable subgroups of $\G^n$ are topologically connected closed Lie subgroups. This is the case for a connected commutative complex algebraic group $\G(\C)$ with the Zariski structure, and we will discuss other examples in \secref{sec:examples}. Consider the Lie algebra $L\G = T_0\G$ with the Lie exponential map $\exp : L\G \twoheadrightarrow \G$ as an $\widehat{L}$-structure, with $\rho_m(x) := \exp(x/m)$ and $\widehat{H} := LH \leq L\G^n$ for $H \leq \G^n$ connected definable. \begin{proposition} \label{prop:analMod} $L\G \vDash \widehat{T}$. \end{proposition} \begin{proof} We appeal to \lemref{lem:homMod}. \axref{ax:epic} holds since $\exp$ is surjective for commutative Lie groups (since the image is a subgroup which contains a neighbourhood of the identity). So since $LH$ is divisible and $\ker\exp$ is discrete, $\widehat{H}=LH$ is the divisible part of $\rho^{-1}(H)$. Finally, \axref{ax:proj} follows from exactness of the functor $L$ for commutative Lie groups. To check this in the setting of \axref{ax:proj}, the only difficulty is the surjectivity of $LG \rightarrow LH$, but this follows from the fact that the image is an $\R$-vector subspace of dimension $\operatorname{d}im(LG)-\operatorname{d}im(LK)=\operatorname{d}im(G)-\operatorname{d}im(K)=\operatorname{d}im(H)=\operatorname{d}im(LH)$. \end{proof} \begin{remark} \label{rem:LGelem} Note that $x \mapsto (\exp(x/n))_n$ is an embedding of $L\G$ into $\widehat{\G}$, which, by the QE, is elementary. \end{remark} \begin{remark} Lie theory provides a topological interpretation of the embedding of Remark~\ref{rem:LGelem}. The group $\widehat{\G}$ is easily seen to be isomorphic to the group of abstract group homomorphisms $\Hom(\Q,\G)$, by taking the image in $\widehat{\G}$ of $\theta \in \Hom(\Q,\G)$ to be $(\theta(1/n))_n$. Then by recalling that $x \mapsto (t \mapsto \exp(tx))$ is an isomorphism of $L\G$ with the group $\Hom_c(\R,\G)$ of 1-parameter subgroups, and considering their restrictions to $\Q$, we see that the image in $\Hom(\Q,\G)$ of $L\G$ is precisely the subgroup $\Hom_c(\Q,\G)$ of continuous homomorphisms. 
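For instance, and only as an illustration of this setup, if $\G=A(\C)$ for a complex abelian variety $A$ of dimension $g$, then \[ L\G\cong\C^g, \qquad \exp : \C^g \twoheadrightarrow \C^g/\Lambda \cong A(\C), \qquad \ker\exp=\Lambda\cong\Z^{2g}, \] and for a connected algebraic subgroup $H\leq \G^n$ (an abelian subvariety) the predicate $\widehat{H}$ is interpreted as the complex linear subspace $LH\leq \C^{gn}$.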
By translation, $\theta \in \Hom(\Q,\G)$ is continuous iff it is continuous at $0$, which holds iff $\lim_{n \rightarrow \infty} \theta(1/n) = 0 \in \G$, which holds iff this limit exists. So we can also identify $L\G$ as the subgroup of convergent elements of $\widehat{\G}$, when viewed as sequences $(a_n)_n$. \end{remark} \subsection{Stability theory of $\widehat{T}$} \begin{proposition} \label{prop:forking} \begin{enumerate}[(i)]\item $\widehat{T}$ is superstable. \item If $\tp(\widetilde{a}/\widetilde{B})$ forks over $\widetilde{A}\subseteq \widetilde{B}$ then either $\tp(a/\widehat{B})$ forks over $\widehat{A}$ or $\grploc(\widetilde{a}/\widetilde{B})$ is not definable over $\widetilde{A}$. \item $\widehat{T}$ has finite U-rank, i.e.\ $U(\widetilde{a}/\widetilde{B}) < \omega$ for any $\widetilde{a},\widetilde{B}$. \end{enumerate} \end{proposition} \begin{proof} \begin{enumerate}[(i)]\item By the QE, $\tp(\widetilde{a}/\widetilde{A})$ is determined by $\tp(\widehat{a}/\widehat{A})$ and $\grploc(\widetilde{a}/\widetilde{A})$. The former is determined by $\tp(a/\widehat{A})$ and $(\tp(a_k/\widehat{A}a))_k$, and since $[k]$ has finite kernel there are only finitely many possibilities for each $\tp(a_k/\widehat{A}a)$. $\grploc(\widetilde{a}/\widetilde{A})$ is determined by a choice of coset over $\widetilde{A}$. So by $\omega$-stability of $T$, if $|\widetilde{A}| = \lambda \geq 2^{|T|}$ then $|S(\widetilde{A})| \leq (\lambda 2^{\aleph_0}) (|T|\lambda) = \lambda$. So $\widehat{T}$ is superstable. \item Suppose $\tp(\widetilde{a}/\widetilde{B})$ forks over $\widetilde{A}$; say $\phi(x,\widetilde{b}) \in \tp(\widetilde{a}/\widetilde{B})$ divides over $\widetilde{A}$. Let $\widehat{C} := \grploc(\widetilde{a}/\widetilde{B})$. We may assume $\phi(x,\widetilde{b}) \vDash x\in \widehat{C}$. Suppose $\widehat{C}$ is defined over $\widetilde{A}$. Then also $\phi(x,\widetilde{b}') \vDash x\in \widehat{C}$ for any $\widetilde{b}' \equiv _{\widetilde{A}} \widetilde{b}$. Now by \corref{cor:QE}, $\phi(x,\widetilde{b})$ is implied by a formula in $\tp(\widetilde{a}/\widetilde{B})$ of the form \[ x\in \widehat{C}\wedge \psi(\rho_n(x)) \] where $\psi(x)$ is a $T$-formula over $\widehat{B}$ implying $x\in \rho_n(\widehat{C})$. So since $\phi$ divides over $\widetilde{A}$, $\psi$ must divide over $\widehat{A}$. So $\tp(a_n/\widehat{B})$ forks over $\widehat{A}$, and since $a$ is algebraic over $a_n$, so does $\tp(a/\widehat{B})$. \item Finite rankedness of $\widehat{T}$ follows from (ii), finite rankedness of $T$, and the fact that Morley rank bounds the length of chains of connected subgroups in $T$. \end{enumerate} \end{proof} We end this section with a proposition giving a stability-theoretic analysis of $\widehat{T}$; these results are not used explicitly in the remainder of the paper, but they inform it. \begin{proposition} Let $\widetilde{\monst} \vDash \widehat{T}$ be a monster model. \begin{enumerate}[(i)]\item $\ker^0$ is stably embedded, in the sense that every relatively definable set is relatively definable with parameters from $\ker^0$. Consider $\ker^0(\widetilde{\monst})$ as a structure with the $\emptyset $-relatively-definable sets as predicates, and let $\widehat{T}^0 := \Th(\ker^0(\widetilde{\monst}))$. Then $\widehat{T}^0$ is an $\omega$-stable 1-based group of finite Morley rank bounded above by the Morley rank of $T$. In particular, $\ker^0$ has finite relative Morley rank in the sense of \cite{BeBoPiSSharp}. 
\item $\im(\rho)$ is stably embedded with induced structure precisely that of $T$. \item Every type in $\widehat{T}^{\eq}$ is analysable in $\ker^0$ and $\im(\rho)$. \item $\ker^0$ is orthogonal to $\im(\rho)$. \item A regular type in $\widehat{T}^{\eq}$ is non-orthogonal to one of \begin{enumerate}[(a)]\item a strongly minimal type in $T^{\eq}$; \item $\quot{\widehat{G}_0}{\widehat{H}_0}$ where $H \leq G$ have no intermediate connected subgroup. \end{enumerate} \item $\widehat{T}$ has weak elimination of imaginaries in $T^{\eq}$ and the sorts $\quot{\widetilde{\monst}^n}{\widehat{H}}$. \end{enumerate} \end{proposition} \begin{proof} \begin{enumerate}[(i)]\item By the QE, the only structure on $\ker^0$ is the abelian structure given by the $\widehat{H}_0$. Stable embeddedness and 1-basedness follow easily. (Stable embeddedness can alternatively be deduced directly from stability of $\widehat{T}^0$.) Since $\ker^0$ is torsion-free and $\widehat{H}\cap \widehat{G} = \widehat{H'}$ where $H' = (H\cap G)^o$, the relatively definable subgroups are precisely those of the form $\widehat{H}_0$. So there is no infinite chain of definable subgroups of $\ker^0$, so $\widehat{T}^0$ is of finite Morley rank. The rank is bounded by the longest length of such a chain, which is bounded by the rank of $T$. \item This is immediate from the QE. \item Consider a strong type $q=\stp(\widetilde{a}/A)$, with $A \subseteq \monst^{\eq}$. If $\widetilde{b}\in\widetilde{\monst}$ is a realisation of $q_1=\stp(\widetilde{a}/aA)$ independent from $\widetilde{a}$ over $A$, then since $\widehat{a} \subseteq \acl(a)$, we have $\widetilde{a}-\widetilde{b} \in \ker^0$. So $q_1$ is internal to $\ker^0$, and clearly $\stp(a/A)$ is internal to $\im(\rho)$. \item It is immediate from the QE that every relatively definable subset of $(\ker^0)^n \times \im(\rho)^m$ is a Boolean combination of products of subsets of $(\ker^0)^n$ with subsets of $\im(\rho)^m$. \item By (i), the types in (b) are minimal, and $\ker^0$ is analysed in them. So this follows from (iii). \item For $\phi(x,y)$ an atomic formula, it is easy to see that any $\phi$-type over a model has canonical parameter in these sorts. So by the QE, any type over a model has canonical base in these sorts. By stability, the same holds for any type over an $\acleq$-closed set. Then if $\alpha = a/E \in \widetilde{\monst}^{\eq}$, we have $\alpha \in \Cb(a/\acleq(\alpha)) \subseteq \acleq(\alpha)$. \end{enumerate} \end{proof} \section{Classification of models of $\widehat{T}$} \label{sec:classification} In this section, we prove the main model-theoretic result of this paper, \thmref{thm:classification} below, which classifies the models of $\widehat{T}$. \subsection{Outline} \label{ssec:classificationOutline} The classification proceeds as follows. First, recall the following coarse version of the classification of models of $T$. By \cite[Theorem 6]{LascarFMRGrps}, $T$ is almost $\aleph_1$-categorical. It follows (\cite[7.1]{BuechlerEssStab}) that if $M \vDash T$ and $M_0 \prec M$ is a copy of the prime model, there is a finite set of mutually orthogonal strongly minimal sets $D_i$ defined over $M_0$ such that $M$ is constructible and minimal over $M_0B$, where $B$ is the union of arbitrary $\acl$-bases over $M_0$ for the $D_i(M)$ (\cite[7.1.2(ii)]{BuechlerEssStab}). We will show that this picture lifts to $\widehat{T}$.
We will show that an arbitrary model $\widetilde{M} \vDash \widehat{T}$ is constructible and minimal over $\widetilde{M}_0B$ where $\widetilde{M}_0 = \rho^{-1}(M_0)$, and $M_0 \prec M$ and $B$ are as above. So models of $\widehat{T}$ are determined up to isomorphism by a choice of model of $T$ and a choice of lift of the prime model $M_0 \prec M$ (which in particular involves a choice of kernel). In the case considered in the introduction, where $\G$ is an algebraic group over $k_0$, we need just one strongly minimal set $D$, which we can take to be an algebraically closed field with parameters for $k_0$. Then $M_0 \cong \G(k_0^{\alg})$, and for $\G(K) \vDash T$, the basis $B$ is a transcendence base for $K$ over $k_0^{\alg}$. \subsection{Preliminaries} We work in a monster model $\widetilde{\monst} \vDash \widehat{T}$ and the corresponding monster model $\monst=\rho(\widetilde{\monst}) \vDash T$. However, we mostly want to consider only those elementary embeddings of models of $\widehat{T}$ which preserve the kernel. \begin{notation} For $\widetilde{M} \subseteq \widetilde{N}$ models of $\widehat{T}$, we write $\widetilde{M} \prec ^* \widetilde{N}$ to mean that $\widetilde{M} \prec \widetilde{N}$ and $\ker(\widetilde{M}) = \ker(\widetilde{N})$. We refer to such elementary embeddings as {\em kernel-preserving}. \end{notation} \begin{remark} \label{rem:kerInvIm} If $\widetilde{M} \prec \widetilde{N}$, then $\widetilde{M} \prec ^* \widetilde{N}$ iff $\widetilde{M} = \rho^{-1}(M) \subseteq \widetilde{N}$, the inverse image of $M$ evaluated in $\widetilde{N}$. \end{remark} \begin{lemma} \label{lem:inverseStrong} If $\widetilde{N} \vDash \widehat{T}$ and $M \prec N = \rho(\widetilde{N})$, then $\widetilde{M} := \rho^{-1}(M) \prec ^* \widetilde{N}$. \end{lemma} \begin{proof} In light of Remark~\ref{rem:kerInvIm} and the quantifier elimination, it suffices to show that $\widetilde{M} \vDash \widehat{T}$. For this, we check that the axioms \axref{ax:first}-\axref{ax:last} hold. These all follow straightforwardly from $M$ being an elementary submodel of $N$ and the kernel being preserved, except for the surjectivity in \axref{ax:proj} which is a little less straightforward. For that, with notation as in \dedref{ded:snake} of \propref{prop:QE}, note that $\rho_e(\widehat{H}(\widetilde{M})) = H(M) \subseteq \pr(\rho_e(\widehat{G}(\widetilde{M}))) = \rho_e(\pr(\widehat{G}(\widetilde{M})))$, so \begin{align*} \widehat{H}(\widetilde{M}) &\subseteq \pr(\widehat{G}(\widetilde{M})) + \ker(\rho_e|_{\widehat{H}(\widetilde{M})}) \\ &= \pr(\widehat{G}(\widetilde{M})) + e(\widehat{H}(\widetilde{M})\cap \ker) \\ &= \pr(\widehat{G}(\widetilde{M})) ,\end{align*} using that \dedref{ded:snake} holds for $\widetilde{N}$, and the kernel preservation. \end{proof} We make extensive use of l-isolation, a technique due to Lachlan \cite{LachlanLIsol}. \begin{definition} A type $p$ is l-isolated if for each $\phi(x,y)$ there exists $\psi(x) \in p$ such that $\psi$ implies the complete $\phi$-type implied by $p$, $\psi \vDash p|_\phi$. \end{definition} Recall that $A$ is {\em atomic} over $B$ if $\tp(a/B)$ is isolated for each tuple $a \in A$, and $A$ is {\em constructible} over $B$ if $A$ has an enumeration $(a_i)_{i < \lambda}$ such that $\tp(a_i/Ba_{<i})$ is isolated for each $i<\lambda$, where $Ba_{<i} = B \cup \{a_j \;|\; j<i\}$. We define {\em l-atomic} and {\em l-constructible} by replacing isolation with l-isolation in these definitions.
\begin{remark} This definition of l-isolation is easily seen to be equivalent to the $F^l_{\aleph_0}$-isolation of \cite[Definition~IV.2.3]{ShCT}. Clearly any isolated type is l-isolated, so atomicity implies l-atomicity and constructibility implies l-constructibility. It is easy to see that, just as for constructibility and atomicity in their usual senses, l-constructibility implies l-atomicity (\cite[Theorem~IV.3.2]{ShCT}), and the converse holds for countable sets (\cite[Lemma~IV.3.16]{ShCT}). \end{remark} \begin{lemma} \label{lem:lPrim} \begin{enumerate}[(a)]\item Work in a monster model $\monst'$ of a complete stable theory. \begin{enumerate}[(i)]\item l-constructible models exist over arbitrary sets: for $A \subseteq \monst'^{\eq}$, there exists $M \prec \monst'$ such that $A \subseteq M^{\eq}$ and $M^{\eq}$ is l-constructible over $A$. \item If $M \prec \monst'$, and $\phi$ is a formula over $M$ such that $\phi(M) \subseteq A \subseteq \monst'^{\eq}$ and $\dcleq(A) \cap \dcleq(\phi(\monst')) \subseteq M^{\eq}$, and if $b$ is l-isolated over $A$ and $\vDash \phi(b)$, then $b\in \phi(M)$. \end{enumerate} \item If $\widetilde{M} \vDash \widehat{T}$ and $\rho(\widetilde{M}) =: M \prec N \vDash T$, and $\widetilde{N} \vDash \widehat{T}$ is l-atomic over $A:=\widetilde{M}\cup N$, then $\widetilde{M} \prec ^* \widetilde{N}$ and $\rho(\widetilde{N}) = N$, and so $\widetilde{N}$ is minimal over $A$. \end{enumerate} \end{lemma} \begin{proof} \begin{enumerate}[(a)]\item \mbox{ } \begin{enumerate}[(i)]\item \cite[IV.2.18(4), IV.3.1(5)]{ShCT} \item If $b\notin \phi(M)$, then by l-isolation, there is a formula $\psi \in \tp(b/A)$ such that \[ \psi(x) \vDash \phi(x)\wedge x\notin \phi(M) . \] By stable embeddedness of $\phi$, we may take $\psi$ to be defined over $\dcleq(A) \cap \dcleq(\phi(\monst')) \subseteq M^{\eq}$. But then $\psi$ is realised in $M$, which is a contradiction. \end{enumerate} \item This follows from (a)(ii) and the QE. Indeed, if $\beta \in \dcleq(\widetilde{\zeta})$ with $\widetilde{\zeta}$ a tuple from $\ker(\widetilde{\monst})$, then since $\rho_n(\widetilde{\zeta}) \in \Tor \subseteq M$, the QE implies that $\tp(\widetilde{\zeta}/A)$ is determined by $\tp(\widetilde{\zeta}/\widetilde{M})$. So if $\beta \in \dcleq(A)$, then already $\beta \in \widetilde{M}^{\eq}$. So by (a)(ii), $\ker(\widetilde{N}) = \ker(\widetilde{M})$. Similarly, if $\beta \in \dcleq(b)$ with $b$ a tuple from $\monst$, then the QE implies that $\tp(b/A)$ is determined by $\tp(b/N)$. Let $\widehat{N} \vDash \widehat{T}$ be the profinite universal cover, embedded in $\widetilde{\monst}$ over $N$. Then $\tp(b/A)$ is determined by $\tp(b/\widehat{N})$, so again, if $\beta \in \dcleq(A)$, then already $\beta \in \widehat{N}^{\eq}$. So by (a)(ii), we have $\rho(\widetilde{N}) = \rho(\widehat{N}) = N$. The claimed minimality is then clear, since $\rho$ is a homomorphism. \end{enumerate} \end{proof} We also use the existence of l-constructible models to obtain independent amalgamation in the (abstract elementary) class $(\operatorname{Mod}(\widehat{T}),\prec ^*)$ of models of $\widehat{T}$ with kernel-preserving embeddings. \begin{lemma} \label{lem:amalg} Suppose $\widetilde{M}_i$, $i=0,1,2$, are elementary submodels of $\widetilde{\monst}$, $\widetilde{M}_0 \prec ^* \widetilde{M}_i$, and $\widetilde{M}_1 \ind _{\widetilde{M}_0} \widetilde{M}_2$.
Let $\widetilde{M}_3$ be an l-atomic model over $\widetilde{M}_1 \cup \widetilde{M}_2$. Then $\widetilde{M}_i \prec ^* \widetilde{M}_3$. \end{lemma} \begin{proof} Suppose $\widetilde{\zeta} \in \ker(\widetilde{M}_3) \setminus \ker(\widetilde{M}_0)$. By l-atomicity, say $\phi(x,\widetilde{a}_1,\widetilde{a}_2) \in \tp(\widetilde{\zeta}/\widetilde{M}_1\cup \widetilde{M}_2)$ with $\widetilde{a}_i \in \widetilde{M}_i$ and \[ \phi(x,\widetilde{a}_1,\widetilde{a}_2) \vdash x \notin \ker(\widetilde{M}_0)\wedge x \in \ker .\] By \corref{cor:QE}, we may assume $\phi(x,\widetilde{a}_1,\widetilde{a}_2)$ is of the form $x \in \widehat{H}(\widetilde{a}_1,\widetilde{a}_2)\wedge \rho_n(x) = \zeta_n$, where $\zeta_n = \rho_n(\widetilde{\zeta}) \in M_0$. By the independence, $\tp(\widetilde{a}_1/\widetilde{M}_2)$ is finitely satisfiable in $\widetilde{M}_0$, so say $\widetilde{a}_1' \in \widetilde{M}_0$ and $\widetilde{M}_2 \vDash \exists x \in \ker. \phi(x,\widetilde{a}_1',\widetilde{a}_2)$, witnessed say by $\widetilde{\zeta}' \in \ker(\widetilde{M}_2) = \ker(\widetilde{M}_0)$. Then $\tp(\widetilde{\zeta}-\widetilde{\zeta}'/\widetilde{M}_1) \ni (x \in \widehat{H}(\widetilde{a}_1-\widetilde{a}_1',0)\wedge \rho_n(x) = 0)$, so say $\widetilde{\zeta}'' \in \widetilde{M}_1$ also satisfies this. Then $\widetilde{\zeta}' + \widetilde{\zeta}'' \in \widehat{H}(\widetilde{a}_1'+\widetilde{a}_1-\widetilde{a}_1',\widetilde{a}_2+0) = \widehat{H}(\widetilde{a}_1,\widetilde{a}_2)$ and $\rho_n(\widetilde{\zeta}'+\widetilde{\zeta}'') = \zeta_n + 0 = \zeta_n$; but $\widetilde{\zeta}' + \widetilde{\zeta}'' \in \ker(\widetilde{M}_1) = \ker(\widetilde{M}_0)$, contradicting the choice of $\phi$. \end{proof} \begin{lemma} \label{lem:atomCons} Suppose $\widetilde{M} \vDash \widehat{T}$ and $A \subseteq \widetilde{M}^{\eq}$ with $\ker(\widetilde{M}) \subseteq A$, suppose $M$ is countable, and suppose $\widetilde{M}$ is atomic over $A$. Then $\widetilde{M}$ is constructible over $A$. \end{lemma} \begin{proof} Take an arbitrary section $S \subseteq \widetilde{M}$ of $\rho : \widetilde{M} \rightarrow M$. Then $S$ is countable and atomic, and hence constructible, over $A$, and $\widetilde{M} = S+\ker$ is clearly constructible over $S\cup A \supseteq S\cup\ker$. \end{proof} \begin{lemma} \label{lem:isolLIsol} Suppose $\widetilde{B} \subseteq \widetilde{M} \vDash \widehat{T}$ and $\widetilde{a} \in \widetilde{M}$, and each $\tp(a_m/\widehat{B})$ is isolated. Then $\tp(\widetilde{a}/\widetilde{B})$ is l-isolated. \end{lemma} \begin{proof} By the QE, it suffices to see that $\tp_{\phi}(\widetilde{a}/\widetilde{B})$ is isolated for an atomic formula $\phi(x,y)$. For $\phi$ of the form $\psi(\rho_m(x),\rho_m(y))$, this follows from $\tp(a_m/\widehat{B})$ being isolated. For $\phi$ of the form $(x,y) \in \widehat{H}$, it follows from the fact that for $\widetilde{b} \in \widetilde{B}$, $(\widetilde{a},\widetilde{b}) \in \widehat{H} \Leftrightarrow \grploc(\widetilde{a}/\widetilde{B}) \subseteq \widehat{H}(\widetilde{b})$. \end{proof} \subsection{$\omega$-stability over models} From now on, in order to prove the subsequent lemma, we make the following additional assumption. \begin{assumption} $T$ is {\em rigid}: for $\G$ a saturated model of $T$, every connected definable subgroup $H$ of $\G^n$ is defined over $\acleq(\emptyset )$, and hence, by our previous assumptions in \ssecref{ssec:That}, is actually defined over $\emptyset $. So $\widehat{L}$ has a predicate $\widehat{H}$ corresponding to $H$.
\end{assumption} We now apply the ``Kummer theory over models'' of \cite{BGHKg} to obtain atomicity of ``finitely generated'' extensions of models. \begin{lemma} \label{lem:wstab} Suppose $\widetilde{M} \prec \widetilde{\monst}$ and $b \in \monst$, and let $M(b)$ be a prime model over $Mb$. Suppose $\widetilde{M}(b)$ is a model such that $\widetilde{M} \prec ^* \widetilde{M}(b) \prec \widetilde{\monst}$ and $\rho(\widetilde{M}(b)) = M(b)$. Then $\widetilde{M}(b)$ is atomic over $\widetilde{M}b$. If $M$ is countable, $\widetilde{M}(b)$ is constructible over $\widetilde{M}b$. Furthermore, such an $\widetilde{M}(b)$ exists. \end{lemma} \begin{proof} We first show the atomicity. Let $\widetilde{c} \in \widetilde{M}(b)$; we must show that $\tp(\widetilde{c}/\widetilde{M}b)$ is isolated. Let $\widehat{H}+\widetilde{d} = \grploc(\widetilde{c}/\widetilde{M})$. Since $\widetilde{M}$ is a model, we may assume $\widetilde{d} \in \widetilde{M}$. So by replacing $\widetilde{c}$ with $\widetilde{c}-\widetilde{d}$, we may assume $\widetilde{d}=0$. Since $\widetilde{M}$ contains $\ker(\widetilde{M}(b))$ and $T$ is rigid, $c$ is {\em free} in $H$ over $M$, i.e.\ in no proper coset defined over $M$. By \cite[6.4]{BGHKg}, for some $n$, writing $\widehat{x}$ for the long tuple of variables $(x_i)_{i>0}$, \[ \tp(c_n/M)(x_n) \cup \{ x_i \in H \;|\; i>0 \} \vDash \tp(\widehat{c}/M)(\widehat{x}). \] Now by $\omega$-stability of $T$, $\tp(b/Mc)$ has finite multiplicity, i.e.\ finitely many extensions to $\acleq(Mc) \supseteq \widehat{c}$. Hence $\tp(\widehat{c}/M) \cup \tp(c/Mb)$ has only finitely many extensions to $Mb$. So again, for some $n$, \[ \tp(c_n/Mb)(x_n) \cup \{ x_i \in H \;|\; i>0 \} \vDash \tp(\widehat{c}/Mb)(\widehat{x}). \] So by \lemref{lem:typesKerPres}, \[ \tp(c_n/Mb)(\rho_n(\widetilde{x})) \cup \{\widetilde{x} \in \widehat{H}\} \vDash \tp(\widetilde{c}/\widetilde{M}b)(\widetilde{x}). \] But $c_n \in M(b)$, so $\tp(c_n/Mb)$ is isolated, so $\tp(\widetilde{c}/\widetilde{M}b)$ is isolated. This proves atomicity. Constructibility assuming countability of $M$ follows by \lemref{lem:atomCons}. It remains to show existence. By \lemref{lem:lPrim}(a)(i), there exists a model $\widetilde{M}(b)$ which is l-constructible over $\widetilde{M} \cup M(b)$, and by \lemref{lem:lPrim}(b) the kernel is preserved and $\rho(\widetilde{M}(b)) = M(b)$. \end{proof} \begin{remark} Note that $\widetilde{M}(b)$ will {\bf not} be constructible over $\widetilde{M} \cup M(b)$: indeed, if $\widetilde{a}\in \widetilde{M}(b) \setminus \widetilde{M}$, then each $a_n$ is in $M(b) \setminus M$, so easily $\tp(\widetilde{a}/\widetilde{M}\cup M(b))$ is not isolated. \end{remark} \begin{remark} If we do not assume rigidity, there could be subgroups definable over $M(b)$ which are not definable over $M$, which could cause a failure of atomicity. \end{remark} \begin{remark} \lemref{lem:wstab} implies that we have $\omega$-stability over models in the (abstract elementary) class $(\operatorname{Mod}(\widehat{T}),\prec ^*)$, in the sense that if $\widetilde{M} \vDash \widehat{T}$ is countable, then there are only countably many types over $\widetilde{M}$ realised in kernel-preserving extensions of $\widetilde{M}$. Indeed, by \lemref{lem:wstab} any such type is isolated over $\widetilde{M}b$ for some $b$, and by $\omega$-stability of $T$ there are only countably many possible types $\tp(b/\widetilde{M}) \dashv \tp(b/M)$.
We will see in \remref{rem:homog} below that the Galois type of $b$ over $\widetilde{M}$ is determined by $\tp(b/\widetilde{M})$, which means that we have $\omega$-stability over models in the sense of the abstract elementary class. \end{remark} \subsection{Independent systems} Countability of $M$ was crucial to get constructibility in \lemref{lem:wstab}. For constructibility of extensions in higher cardinals, we require constructibility over independent systems of models. \cite[XII]{ShCT} and \cite{HartOTOP} are the sources for the techniques used here. In this subsection, we develop what we need of the general theory of independent systems. We work in a monster model $\monst'$ of an arbitrary stable theory $T'$. \begin{definition} If $I$ is a downward-closed set of sets, an \underlinestyle{$I$-system} in $\monst'$ is a collection $(M_s \;|\; s \in I)$ of elementary submodels $\monst'$ such that for $s \subseteq t$, $M_s$ is an elementary submodel of $M_t$. For $J \subseteq I$, define $M_J := \bigcup_{s\in J} M_s \subseteq \monst'$. Define ${{<}s} := \powerset^-(s) := \powerset(s) \setminus \{s\}$, and ${{\not\geq }s} := I \setminus \{t \;|\; t \supseteq s\}$. The system is {\em constructible} if $M_s$ is constructible over $M_{<s}$ for all $s\in I$ with $|s|>1$. Similarly for {\em atomic}, and for {\em l-constructible} and {\em l-atomic}. The system is {\em independent} (or {\em stable}) if $M_s \ind _{M_{<s}} M_{\not\geq s}$ for all $s\in I$. $I$ is {\em Noetherian} if each $s \in I$ is finite. An {\em enumeration} of $I$ is a sequence $(s_i)_{i\in\lambda}$ such that $I=\{s_i \;|\; i\in\lambda\}$ and $s_i \subseteq s_j \Rightarrow i \leq j$. We then write $s_{<i}$ for $\{ s_j \;|\; j<i \}$. We define $|n| := \{0,\ldots,n-1\}$. \end{definition} Note that if $(s_i)_{i \in \lambda}$ is an enumeration of an independent $I$-system, then we have $M_{s_i} \ind _{M_{<s_i}} M_{s_{<i}}$ for all $i$. That the converse holds is given by the following Fact, which is \cite[Lemma~XII.2.3(1)]{ShCT}. \begin{fact} \label{fact:indieViaEnum} Let $(M_s)_s$ be an $I$-system, let $(s_i)_{i\in\lambda}$ be an enumeration, and suppose $M_{s_i} \ind _{M_{<s_i}} M_{s_{<i}}$ holds for all $i$. Then the system is independent. \end{fact} \begin{definition} Let $M$ be a (possibly multi-sorted) structure. If $A \subseteq B \subseteq M$, we say $A$ is {\em Tarski-Vaught} in $B$, $A \subseteq _{TV} B$, if every formula over $A$ which is realised in $B$ is realised in $A$. \end{definition} \begin{lemma} \label{lem:TVAndLIsolation} Suppose $C \subseteq _{TV} B \subseteq M$. \begin{enumerate}[(i)]\item If a type $\tp(a/C)$ is l-isolated, then $\tp(a/C) \vDash \tp(a/B)$, and $Ca \subseteq _{TV} Ba$. \item If $A \subseteq M$ is constructible over $C$ then $A$ is constructible over $B$. \item If $A \subseteq M$ is l-atomic over $C$, then $A \ind _C B$. \end{enumerate} \end{lemma} \begin{proof} \begin{enumerate}[(i)]\item Given $\phi(x,y)$, say $\psi(x) \in \tp(a/C)$ isolates $\tp_\phi(a/C)$. Then for $b \in B$, $\phi(x,b) \in \tp_\phi(a/B)$ iff $\psi(x) \vDash \phi(x,b)$; indeed, else \[ \vDash (\exists x. \psi(x)\wedge \phi(x,b)) \wedge (\exists x. \psi(x)\wedge \neg\phi(x,b)) ;\] but then the same holds for some $c \in C$, contradicting the isolation. So $\tp(a/C) \vDash \tp(a/B)$. Also $Ca \subseteq _{TV} Ba$, since if $\vDash \phi(a,b)$ then $\vDash \forall x. \psi(x) \rightarrow \phi(x,b)$, hence this holds for some $c \in C$, and hence $\vDash \phi(a,c)$. \item This follows from (i) by a transfinite induction. 
\item The extension of $\tp(A/C)$ to $B$ is unique by (i), so must be the non-forking extension. \end{enumerate} \end{proof} \begin{lemma} \label{lem:TVAndCoheirs} Suppose $M$ is a model, and $A \ind _M B$. Then $MA \subseteq _{TV} MAB$. \end{lemma} \begin{proof} By the coheir property of non-forking over models in stable theories, $\tp(B/MA)$ is finitely satisfiable in $M$. \end{proof} The following is \cite[Lemma~XII.2.3(2)]{ShCT}, to which we refer for the proof. \begin{fact}[TV Lemma] \label{fact:TVLemma} If $(M_s)_s$ is an independent $I$-system in a stable theory, if $J \subseteq I$, and if $\forall s\in I. (s \subseteq \bigcup J \Rightarrow s \in J)$, then $M_J \subseteq _{TV} M_I$. \end{fact} \begin{lemma} \label{lem:sysPrimOverBasis} Let $(M_s)_s$ be a constructible Noetherian independent $I$-system. Suppose that for each $p \in \bigcup I$, $B_p$ is a subset of $M_{\{p\}}$ for which $M_{\{p\}}$ is constructible over $M_{\emptyset }B_p$. Then $M_I$ is constructible over $M_{\emptyset }\cup\bigcup_{p\in \bigcup I}B_p =: A$. \end{lemma} \begin{proof} Let $(s_i)_{i<\lambda}$ be an enumeration of $I$. It suffices to show that each $M_{s_i}$ is constructible over $AM_{s_{<i}}$, as it then follows by induction on $i\leq \lambda$ that $M_{s_{<i}}$ is constructible over $A$. If $s_i = \emptyset $, the constructibility is immediate. If $s_i = \{p\}$, we have $B_p \ind _{M_{\emptyset }} (M_{s_{<i}}\cup(A \setminus B_p))$. So by \lemref{lem:TVAndCoheirs}, $M_{\emptyset }B_p \subseteq _{TV} AM_{s_{<i}}$. The desired constructibility then follows from \lemref{lem:TVAndLIsolation}(ii). If $|s_i|>1$, then $M_{s_i}$ is constructible over $M_{<s_i}$; but $M_{<s_i} \subseteq _{TV} M_{s_{<i}\cup \{\{p\} \;|\; p \in \bigcup I\}}$ by the TV Lemma, so in particular $M_{<s_i} \subseteq _{TV} AM_{s_{<i}}$. Again, \lemref{lem:TVAndLIsolation}(ii) yields the desired constructibility. \end{proof} \begin{lemma} \label{lem:primIndieSys} Suppose $\bigcup I$ is finite. For an l-atomic $I$-system to be independent, it suffices that for each $p \in \bigcup I$, \[ M_{\{p\}} \ind _{M_\emptyset } M_{\not\geq \{p\}}. \] \end{lemma} \begin{proof} Suppose inductively that for any downward closed proper subset $J$ of $I$, the restriction of the $I$-system to a $J$-system is independent. So it suffices to show that for $s \in I$ maximal, $M_s \ind _{M_{<s}} M_{I \setminus \{s\}}$. If $|s| = 1$, this holds by assumption. If $|s|>1$, then if $t\subseteq s$ and $t \in I \setminus \{s\}$ then $t \in {{<}s}$, so by the TV Lemma applied to the restricted independent $(I \setminus \{s\})$-system, \[ M_{<s} \subseteq _{TV} M_{I \setminus \{s\}}. \] But $M_s$ is l-atomic over $M_{<s}$, so we conclude the independence by \lemref{lem:TVAndLIsolation}(iii). \end{proof} \subsection{Atomicity over independent systems in $\widehat{T}$} Now we return to considering $\widehat{T}$ and $T$. Let $M \vDash T$, and let $M_0 \prec M$ be a copy of the prime model, and let $D_i$ and $B_i$ be as in \ssecref{ssec:classificationOutline}. Let $B:=\bigcup_i B_i$, and let $\powerset^{\fin}(B)$ be the set of finite subsets of $B$. Let $M_{\emptyset }=M_0$, and for $s\in \powerset^{\fin}(B)$ inductively let $M_s \prec M$ be prime over $M_{<s} \cup s$. \begin{lemma} \label{lem:baseSys} $(M_s)_{s \in \powerset^{\fin}(B)}$ is a constructible independent $\powerset^{\fin}(B)$-system, and $\bigcup_s M_s = M$. 
\end{lemma} \begin{proof} $\bigcup_s M_s$ is an elementary submodel of $M$ which contains $M_0B$, and $M$ is minimal over $M_0B$, so $\bigcup_s M_s = M$. The system is constructible by construction, prime models being constructible in $\omega$-stable theories. For independence, by finite character of forking and \lemref{lem:primIndieSys} it suffices to see that $M_{\{b\}} \ind _{M_0} M_s$ when $b \notin s \in \powerset^{\fin}(B)$. We may assume inductively that the restriction of the system to $s$ is independent. So by \lemref{lem:sysPrimOverBasis}, $M_s$ is constructible over $M_0s$. Now $b \notin M_s$ since (by orthogonality of the $D_i$) $\tp(b/M_0s)$ is not algebraic and hence not isolated. So $bM_0 \ind _{M_0} M_s$. So by \lemref{lem:TVAndCoheirs}, $bM_0 \subseteq _{TV} bM_s$. Since $M_{\{b\}}$ is constructible and hence atomic over $bM_0$, it follows by \lemref{lem:TVAndLIsolation}(iii) that $M_{\{b\}} \ind _{bM_0} bM_s$, and in particular $M_{\{b\}} \ind _{bM_0} M_s$. So by transitivity, $M_{\{b\}} \ind _{M_0} M_s$. \end{proof} \begin{definition} An $I$-$\sim$-system is an $I$-system $(\widetilde{M}_s)_s$ in $\widetilde{\monst} \vDash \widehat{T}$ such that \begin{itemize}\item setting $M_s := \rho(\widetilde{M}_s) \vDash T$, $(M_s)_s$ is an independent atomic $I$-system in $T$; \item $\widetilde{M}_s \prec ^* \widetilde{M}_t$ when $s \subseteq t$. \end{itemize} \end{definition} The definition assumes only independence in $T$, but independence in $\widehat{T}$ follows: \begin{lemma} \label{lem:tildeSysIndie} An $I$-$\sim$-system $(\widetilde{M}_s)_s$ is an independent $I$-system. \end{lemma} \begin{proof} Let $(s_i)_{i\in\lambda}$ be an enumeration of $I$. By \factref{fact:indieViaEnum}, it suffices to show that given $i\in\lambda$, we have $\widetilde{M}_{s_i} \ind _{\widetilde{M}_{<s_i}} \widetilde{M}_{s_{<i}}$, where we may assume inductively that the restriction of $(\widetilde{M}_s)_s$ to $s_{<i}$ is an independent system. By \propref{prop:forking}(ii) and the independence of $(M_s)_s$, it suffices to show that for $\widetilde{a} \in \widetilde{M}_{s_i}$, we have that $C := \grploc(\widetilde{a}/\widetilde{M}_{s_{<i}})$ is defined over $\widetilde{M}_{<s_i}$. Say $C = \widehat{H}(\widetilde{b}')$ with $\widetilde{b}' \in \widetilde{M}_{s_{<i}}$. Now $aM_{<s_i} \subseteq _{TV} aM_{s_{<i}}$, by the TV Lemma (\factref{fact:TVLemma}) and \lemref{lem:TVAndLIsolation}(i) if $|s_i|>1$, and by \lemref{lem:TVAndCoheirs} if $|s_i| = 1$. So $H(b') = H(0) + a = H(b)$ for some $b \in M_{<s_i}$. So say $\widetilde{b} \in \widetilde{M}_{<s_i}$ and $\rho(\widetilde{b})=b$; then $\widetilde{a} + \zeta \in \widehat{H}(\widetilde{b})$ for some $\zeta \in \ker(\widetilde{M}_{s_i})=\ker( \widetilde{M}_{\emptyset } )$. So $\widehat{H}(\widetilde{b}') = \widehat{H}(\widetilde{b}) - \zeta$, which (by \remref{rem:grplocgrp}) is defined over $\widetilde{M}_{<s_i}$. \end{proof} \begin{proposition} \label{prop:indieAtomic} Let $(\widetilde{M}_s)_s$ be an $I$-$\sim$-system with $I$ Noetherian. Then the system is atomic. If also each $M_s$ is countable, then the system is constructible. \end{proposition} \begin{proof} It suffices to show this for $I=\powerset(|n|)$, $n>1$, where recall $|n| = \{0,\ldots,n-1\}$. Indeed, by Noetherianity, the system below any $s \in I$ is of this form. We inductively assume the result for $1<n'<n$. We show that $\widetilde{M}_{|n|}$ is atomic over $\widetilde{M}_{<|n|}$. Constructibility assuming countability then follows by \lemref{lem:atomCons}.
\begin{claim} $(\widetilde{M}_s)_s$ extends to a $\powerset(|n+1|)$-$\sim$-system such that $\widetilde{M}_{|n|}$ is isomorphic over $\widetilde{M}_{|n-1|}$ to $\widetilde{M}_{ |n-1| \cup \{n\} }$, by an isomorphism $\sigma$ such that moreover $\sigma(\widetilde{M}_s) = \widetilde{M}_{(s \setminus \{n-1\}) \cup \{n\}}$ for $s \subseteq |n|$. \end{claim} \begin{proof} Let $t := |n-1| \cup \{n\}$. \[ \xymatrix{ & & \{3\}\ar@{-}[llddd] \ar@{-}[dd] \ar@{-}[rrddd] & & |4| \\ t \ar@/^/[rr]+/ld 1cm/ & & & & |3| \ar@/_/[lld]+/d 1cm/ \\ & & \{1\}\ar@{-}[lld]^{|2|} \ar@{-}[rrd] & & \\ \{0\} \ar@{-}[rrrr] & & & & \{2\} } \] Let $\widetilde{M}_t$ be a realisation of $\tp(\widetilde{M}_{|n|}/\widetilde{M}_{|n-1|})$ independent from $\widetilde{M}_{|n|}$, and let $\sigma : \widetilde{M}_{|n|} \xrightarrow{\cong}_{\widetilde{M}_{|n-1|}} \widetilde{M}_t$ be an isomorphism witnessing the equality of types. Let $\widetilde{\M}$ be an l-atomic model over $\widetilde{M}_{|n|} \cup \widetilde{M}_t$. By \lemref{lem:amalg}, $\ker(\widetilde{\M}) = \ker(\widetilde{M}_{\emptyset })$. We define an enumeration $s_i$ of $\powerset(|n+1|)$, and recursively define $\widetilde{M}_{s_i} \prec ^* \widetilde{\M}$ such that \[ M_{s_i} \ind _{M_{<s_i}} M_{s_{<i}} \] and $M_{s_i}$ is atomic over $M_{<s_i}$. By \factref{fact:indieViaEnum}, this will yield a $\powerset(|n+1|)$-$\sim$-system. Begin with an enumeration of $\powerset(|n|)$; the corresponding $\widetilde{M}_{s_i} \prec ^* \widetilde{\M}$ are already given. Continue with an enumeration of $\powerset(t)$, setting $\widetilde{M}_{s_i} := \sigma( \widetilde{M}_{ (s_i \setminus \{n\}) \cup \{n-1\} } ) \prec ^* \widetilde{M}_t \prec ^* \widetilde{\M}$. For the independence condition, we have $M_{s_i} \ind _{M_{<s_i}} M_{s_{<i} \cap \powerset(t)}$ since $s_i$ is part of an enumeration of $\powerset(t)$, and then by transitivity and $M_t \ind _{M_{|n-1|}} M_{|n|}$ we deduce $M_{s_i} \ind _{M_{<s_i}} M_{s_{<i} \cap \powerset(t)}M_{|n|}$ and hence $M_{s_i} \ind _{M_{<s_i}} M_{s_{<i}}$. Now for the remaining $s_i$: let $M'_{s_i} \prec \M$ be a constructible model over $M_{<s_i} \subseteq \M$, and let $\widetilde{M}_{s_i} := \rho^{-1}(M'_{s_i})$ be its inverse image in $\widetilde{\M}$. The TV Lemma (\factref{fact:TVLemma}) gives $M_{<s_i} \subseteq _{TV} M_{s_{<i}}$, so $M_{s_i} \ind _{M_{<s_i}} M_{s_{<i}}$ by \lemref{lem:TVAndLIsolation}(iii). \end{proof} Define \begin{align*} \widetilde{\Delta} & := \widetilde{M}_{|n|} & \widetilde{\Delta}' & := \widetilde{M}_{|n+1|} \\ \operatorname{d}_i\widetilde{\Delta} & := \widetilde{M}_{|n| \setminus \{i-1\}} & \operatorname{d}_i\widetilde{\Delta}' & := \widetilde{M}_{|n+1| \setminus \{i-1\}} \\ \operatorname{d}\widetilde{\Delta} & := \bigcup_{1\leq i\leq n} \operatorname{d}_i\widetilde{\Delta} & \operatorname{d}\widetilde{\Delta}' & := \bigcup_{1\leq i\leq n} \operatorname{d}_i\widetilde{\Delta}' \\ \widetilde{\Lambda} & := \bigcup_{1\leq i<n} \operatorname{d}_i\widetilde{\Delta} & \widetilde{\Lambda}' & := \bigcup_{1\leq i<n} \operatorname{d}_i\widetilde{\Delta}' \\ \operatorname{d}\operatorname{d}_i\widetilde{\Delta} & := \bigcup_{j\in |n| \setminus \{i-1\}} \widetilde{M}_{|n| \setminus \{i-1,j\} } \\ \end{align*} We also define the corresponding sets in $T$, e.g.\ $\Lambda := \rho(\widetilde{\Lambda}) = \bigcup_{i<n-1} M_{|n| \setminus \{i\}}$.
In this notation, the isomorphism of the previous claim is \[ \sigma : \widetilde{\Delta} \xrightarrow{\cong}_{\operatorname{d}_n\widetilde{\Delta}} \operatorname{d}_n\widetilde{\Delta}' .\] Note that it induces an isomorphism \[ \sigma : \Delta \xrightarrow{\cong}_{\operatorname{d}_n\Delta} \operatorname{d}_n\Delta' .\] A diagram for $n=3$: \[ \xymatrix{ & & \widetilde{M}_{\{3\}}\ar@{-}[llddd] \ar@{-}[dd] \ar@{-}[rrddd] & & \widetilde{\Delta}'=\widetilde{M}_{|4|} \\ \operatorname{d}_n\widetilde{\Delta}' \ar@/^/[rr]+/ld 1cm/ & & & & \widetilde{\Delta}=\widetilde{M}_{|3|} \ar@/_/[lld]+/d 1cm/ \\ & & \widetilde{M}_{\{1\}}\ar@{-}[lld]^{\operatorname{d}_n\widetilde{\Delta}} \ar@{--}[rrd] & & \\ \widetilde{M}_{\{0\}} \ar@{--}[rrrr] & & & & \widetilde{M}_{\{2\}} } \] The dashed lines indicate $\widetilde{\Lambda}$, and the faces above them form $\widetilde{\Lambda}'$. Let $\widetilde{a} \in \widetilde{\Delta}$ be a tuple; we want to show that $\tp(\widetilde{a}/\operatorname{d}\widetilde{\Delta})$ is isolated. \begin{claim} There exists $b_0\in \operatorname{d}_n\Delta$ such that, setting $A := \acleq(\operatorname{d}\operatorname{d}_n\Delta b_0)$, \[ \tp(\widehat{a}/A\Lambda) \vDash \tp(\widehat{a}/\operatorname{d}\Delta). \] \end{claim} \begin{proof} Let $b_0\in \operatorname{d}_n\Delta$ be such that $\tp(a/\operatorname{d}\Delta) \dashv \tp(a/b_0\Lambda)$. First note that every extension of $\tp(a_m/b_0\Lambda)$ to $\operatorname{d}\Delta$ is a non-forking extension. Indeed, that holds for $m=1$ by the uniqueness of the extension, and hence for any $m$ by interalgebraicity of $a_m$ with $a$. So it suffices to see that $\tp(a_m/A\Lambda)$ has a unique non-forking extension to $\operatorname{d}\Delta$. So suppose $c_1,c_2$ realise two such extensions. Then $\operatorname{d}_n\Delta \ind _{A\Lambda} c_i$. Now $\tp(\operatorname{d}_n\Delta/A)$ is stationary, and since the system is independent we have $\operatorname{d}_n\Delta \ind _{\operatorname{d}\operatorname{d}_n\Delta} \Lambda$ and hence $\operatorname{d}_n\Delta \ind _A A\Lambda$, so $\tp(\operatorname{d}_n\Delta/A\Lambda)$ is also stationary. So $c_1 \equiv _{\operatorname{d}_n\Delta A\Lambda} c_2$, so in particular $c_1 \equiv _{\operatorname{d}\Delta} c_2$. \end{proof} \begin{claim} \[ \tp(\widehat{a} / \sigma(\widehat{a}) \Lambda' b_0) \vDash \tp(\widehat{a} / A\Lambda). \] \end{claim} \begin{proof} Say $\vDash \phi(a_n,b,e)$ where $b\in A$ and $e\in \Lambda$. Say $\theta$ is an algebraic formula isolating $\tp(b/\operatorname{d}\operatorname{d}_n\Delta b_0)$. Let \[ \psi(x) := \forall y\in\theta. ( \phi(x,y,e) \Leftrightarrow \phi(\sigma a_n,y,\sigma e) ) , \] which is a formula over $\sigma(a_n) \Lambda' b_0$ since $\sigma e \in \sigma\Lambda \subseteq \Lambda'$. Then $\psi(x) \vDash \phi(x,b,e)$, since $\vDash \phi(\sigma a_n,b,\sigma e)$, since $b\in \operatorname{d}_n\Delta$ and $\sigma : \Delta \xrightarrow{\cong}_{\operatorname{d}_n\Delta} \operatorname{d}_n\Delta'$, and similarly $\psi(x) \in \tp(a_n / \sigma(a_n) \Lambda' b_0)$. So $\tp(\widehat{a} / \sigma(\widehat{a}) \Lambda' b_0) \vDash \phi(a_n,b,e)$. \end{proof} Now $\operatorname{d}\widetilde{\Delta} \subseteq _{TV} \operatorname{d}\widetilde{\Delta}'$ by the TV lemma, and $\tp(\widetilde{a}/\operatorname{d}\widetilde{\Delta})$ is l-isolated by \lemref{lem:isolLIsol}, so by \lemref{lem:TVAndLIsolation}(i), $\tp(\widetilde{a} / \operatorname{d}\widetilde{\Delta}) \vDash \tp(\widetilde{a} / \operatorname{d}\widetilde{\Delta}')$.
Let $\widetilde{b}_0\in \rho^{-1}(b_0) \subseteq \operatorname{d}_n\widetilde{\Delta}$, and let $\widetilde{b}_0 \subseteq \widetilde{b}_0' \in \operatorname{d}_n\widetilde{\Delta}$ be such that $\grploc(\widetilde{a}/\operatorname{d}\widetilde{\Delta})$ is defined over $\widetilde{b}_0'\widetilde{\Lambda}$. Then by \lemref{lem:typesKerPres} and the above Claims, we have: \[ \tp(\widetilde{a} / \operatorname{d}\widetilde{\Delta}) \dashv\vDash \tp(\widetilde{a} / \sigma(\widetilde{a}) \widetilde{\Lambda}' \widetilde{b}_0'). \] So it suffices to see that the latter type is isolated. If $n>2$, we have that $\tp(\widetilde{a}\sigma(\widetilde{a})\widetilde{b}_0'/\widetilde{\Lambda}')$ is isolated by the inductive hypothesis applied to the $\powerset(|n-1|)$-$\sim$-system $(\widetilde{M}'_s)_s$ defined by $\widetilde{M}'_s := \widetilde{M}_{s\cup\{n-1,n\}}$, since $\widetilde{\Lambda}' = \widetilde{M}'_{<|n-1|}$ and $\widetilde{M}'_{|n-1|} = \widetilde{M}_{|n+1|} = \widetilde{\Delta}'$. Finally, if $n=2$, we claim that it follows from \lemref{lem:wstab} that $\tp(\widetilde{a}\sigma(\widetilde{a})\widetilde{b}_0'/\widetilde{\Lambda}'b_0')$ is isolated. Indeed, $\widetilde{\Lambda}'=\widetilde{M}_{\{1,2\}}$, and so it suffices to show that $\tp(a,\sigma(a)/M_{\{1,2\}}b_0')$ is isolated, since then for an appropriate embedding of the prime model $M_{\{1,2\}}(b_0')$ into $\Delta'$, we have $a,\sigma(a)\in M_{\{1,2\}}(b_0')$. We conclude by proving this isolation of $\tp(a,\sigma(a)/M_{\{1,2\}}b_0')$. By the definitions of $b_0$ and $b_0'$, we have that $\tp(a/b_0'M_{\{1\}})$ implies $\tp(a/M_{\{0\}}M_{\{1\}})$ and so is isolated, and hence by $M_{\{0,1\}} \ind _{M_{\{1\}}} M_{\{1,2\}}$, also $\tp(a/b_0'M_{\{1,2\}})$ is isolated. Applying $\sigma$, also $\tp(\sigma(a)/b_0'M_{\{2\}})$ is isolated, and, applying the TV Lemma and \lemref{lem:TVAndLIsolation}(i), \begin{align*} \tp(\sigma(a)/b_0'M_{\{2\}}) &\vDash \tp(\sigma(a)/M_{\{0\}}M_{\{2\}}) \\ &\vDash \tp(\sigma(a)/M_{\{0,1\}}M_{\{1,2\}}) \\ &\vDash \tp(\sigma(a)/ab_0'M_{\{1,2\}}), \end{align*} and so $\tp(a,\sigma(a)/M_{\{1,2\}}b_0')$ is isolated, as required. \end{proof} \subsection{Classification} \begin{lemma}[Constructible Models] \label{lem:constructible} Let $M \vDash T$, let $M_0 \prec M$ be a copy of the prime model, and let $B$ be as in \ssecref{ssec:classificationOutline}. Let $\widetilde{M}_0 \vDash \widehat{T}$ with $\rho(\widetilde{M}_0) = M_0$. Then there exists $\widetilde{M} \succ ^* \widetilde{M}_0$ constructible over $B\widetilde{M}_0$, with $\rho(\widetilde{M}) = M$. \end{lemma} \begin{proof} Let $I := \powerset^{\fin}(B)$. Let $(M_s)_{s \in I}$ be a constructible independent $I$-system as given by \lemref{lem:baseSys}. Let (by \lemref{lem:lPrim}(a)(i)) $\widetilde{M}$ be an l-constructible model over $M\widetilde{M}_0$, and let $\widetilde{M}_s=\rho^{-1}(M_s) \subseteq \widetilde{M}$. By \lemref{lem:lPrim}(b), $\widetilde{M}_{\emptyset } = \widetilde{M}_0$ and $\rho(\widetilde{M})=M$, and by \lemref{lem:inverseStrong}, $(\widetilde{M}_s)_s$ is an $I$-$\sim$-system. By \propref{prop:indieAtomic}, $(\widetilde{M}_s)_s$ is a constructible independent system. By \lemref{lem:wstab}, each $\widetilde{M}_{\{p\}}$ for $p \in B$ is constructible over $\widetilde{M}_{\emptyset }p$, and so by \lemref{lem:sysPrimOverBasis}, $\widetilde{M}=\widetilde{M}_I$ is constructible over $\widetilde{M}_{\emptyset }B = \widetilde{M}_0B$.
\end{proof} \begin{theorem}[Classification] \label{thm:classification} A model $\widetilde{M} \vDash \widehat{T}$ is determined up to isomorphism among models of $\widehat{T}$ by \begin{enumerate}[(i)]\item the isomorphism type of the lift $\widetilde{M}_0 = \rho^{-1}(M_0)$ of a copy $M_0 \prec M$ of the prime model, and \item the isomorphism type of $M$ over $M_0$. \end{enumerate} More explicitly: if $\widetilde{M}^1,\widetilde{M}^2 \vDash \widehat{T}$, if $\widetilde{M}^1_0 \cong \widetilde{M}^2_0$ where $\widetilde{M}^i_0$ is the lift $\rho^{-1}(M^i_0)$ of a copy $M^i_0 \prec M^i$ of the prime model, and if the induced isomorphism $M^1_0 \cong M^2_0$ extends to an isomorphism $M^1 \cong M^2$, then $\widetilde{M}^1 \cong \widetilde{M}^2$, in fact by an isomorphism extending the isomorphism $\widetilde{M}^1_0 \cong \widetilde{M}^2_0$ (but not necessarily agreeing with the isomorphism $M^1 \cong M^2$). \[ \xymatrix{ \widetilde{M_0} \ar@{^(->}[r] \ar@{->>}[d] & \widetilde{M} \ar@{->>}[d] \\\ M_0 \ar@{^(->}[r] & M \\\ } \] \end{theorem} \begin{proof} Given $\widetilde{M} \vDash \widehat{T}$ and $M_0 \prec M := \rho(\widetilde{M})$, let $B$ be as in \ssecref{ssec:classificationOutline}. Then $\widetilde{M}$ is constructible and minimal over $B\widetilde{M}_0$, by \lemref{lem:constructible} and the minimality of $M$ over $BM_0$. So let $M^i$, $\widetilde{M}^1_0 \cong \widetilde{M}^2_0$, and $M^1 \cong M^2$ be as in the statement. Let $B^1$ be as in \ssecref{ssec:classificationOutline}, and let $B^2$ be the image in $M^2$. Then by the quantifier elimination, $B^1\widetilde{M}^1_0 \equiv B^2\widetilde{M}^2_0$, and by constructibility of $\widetilde{M}^1$ over $B^1\widetilde{M}^1_0$, this extends to an elementary embedding $\widetilde{M}^1 \prec \widetilde{M}^2$; but then by minimality of $\widetilde{M}^2$ over $B^2\widetilde{M}^2_0$, the embedding is an isomorphism. \end{proof} \begin{remark} \label{rem:homog} We can also conclude that if $M$ is strongly $\aleph_1$-homogeneous (e.g.\ if we take $M$ to be saturated and uncountable), then $\widetilde{M}$ is strongly $\aleph_0$-homogeneous over $\widetilde{M}_0$. Indeed, if $\widetilde{a} \equiv _{\widetilde{M}_0} \widetilde{a}'$, then by homogeneity we have $B'$ such that $B \widehat{a} \equiv _{M_0} B' \widehat{a}'$, so $B\widetilde{M}_0\widetilde{a} \equiv B'\widetilde{M}_0\widetilde{a}'$; but $\widetilde{M}$ is also constructible and minimal over $B\widetilde{M}_0\widetilde{a}$ and over $B'\widetilde{M}_0\widetilde{a}'$, so this extends to an automorphism. Similarly, we obtain strong $\aleph_0$-homogeneity over an arbitrary countable *-submodel $\widetilde{M}_1 \prec ^* \widetilde{M}$, replacing $B$ with $\acl$-bases over $M_1$. Moreover, by \propref{prop:indieAtomic}, we similarly obtain strong $\aleph_0$-homogeneity over $\widetilde{M}_{<|n|}$ for a $\powerset(|n|)$-$\sim$-system in $\widetilde{M}$. Note that in the context of \cite{BHHKK}, and even in the specific example of pseudo-exponentiation, the corresponding results require a saturation hypothesis on $\widetilde{M}_s$. \end{remark} \section{Exponential maps of semiabelian varieties} \label{sec:abelian} In this section, we apply our classification result \thmref{thm:classification}, along with some arithmetic Kummer theory, to prove \thmref{thm:catAbelian} and draw some related conclusions. We actually work in slightly greater generality than \thmref{thm:catAbelian}, by allowing split semiabelian varieties. 
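For orientation, the simplest instance of this setting (used here only as an illustration) is $\G = \G_m$: in that case $L\G = \C$, the exponential $\exp(z) = e^z$ maps onto $\G_m(\C) = \C^*$ with kernel $2\pi i\Z$, the endomorphism ring is $\End(\G_m) = \Z$, and $\rho_m(z) = e^{z/m}$.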
So throughout this section, we will suppose that $\G(\C)$ is the product $A\times \G_m^n$ of a (possibly trivial) complex abelian variety and a (possibly trivial) algebraic torus. Let $\mathcal{O} := \End(\G)$ be the ring of algebraic endomorphisms of $\G$. Suppose $\G$ and its endomorphisms are defined over $k_0 \leq \C$. We first explain how we attach to the algebraic group $\G$ a theory $T$ satisfying the assumptions of the previous sections. $\G$ can be viewed as a definable group in $\operatorname{ACF}_0$, and as such inherits the structure of a finite Morley rank group. Explicitly, we consider $\G(K)$, for $K$ an algebraically closed extension of $k_0$, as a structure in the language $L$ consisting of a predicate for each $k_0$-Zariski-closed subset of each Cartesian power $\G^n(K)$. This structure is bi-interpretable with the field $(K;+,\cdot,(c)_{c\in k_0})$ with parameters for $k_0$, and is a finite Morley rank group of rank $\dim(\G)$. We let $T$ be the theory of $\G(\C)$ in the language consisting of a predicate for each $k_0$-Zariski-closed subset of $\G^n(\C)$. This is a commutative divisible group of finite Morley rank; it admits quantifier elimination, and, by \lemref{lem:subgroupsEndomorphisms} below, every connected definable subgroup of $\G^n$ is defined over $k_0$, so is defined over $\emptyset $ in $T$. \subsection{$\mathcal{O}$-module homomorphisms as models of $\widehat{T}$} By \propref{prop:analMod}, the Lie exponential map $\exp : L\G \twoheadrightarrow \G(\C)$ has the structure of a model of $\widehat{T}$, which we denote by $L\G$. As a step towards proving \thmref{thm:catAbelian}, we prove in this subsection an abstract algebraic version of this. Let $\mathcal{O} := \End(\G)$ be the ring of algebraic (equivalently, definable) endomorphisms. The derivative at the identity $L\eta$ of $\eta\in\mathcal{O}$ is a $\C$-linear endomorphism of $L\G$, and we consider $L\G$ as an $\mathcal{O}$-module with this action. \begin{lemma} \label{lem:subgroupsEndomorphisms} \begin{enumerate}[(i)]\item Any connected algebraic subgroup $H \leq \G^n$ is the connected component of the kernel of an endomorphism $\eta\in \End(\G^n) \isom \operatorname{Mat}_{n,n}(\mathcal{O})$, and \item $LH \leq L\G^n$ is then the kernel of $L\eta \in \End_\C(L\G^n)$. \end{enumerate} \end{lemma} \begin{proof} \begin{enumerate}[(i)]\item By Poincar\'e's complete reducibility theorem, there exists an algebraic subgroup $H'$ such that the summation map $\Sigma:H\times H' \maps \G^n$ is an isogeny. So say $\theta : \G^n \maps H\times H'$ is an isogeny such that $\theta\Sigma=[m]$, and let $\pi_2 : H\times H' \maps H'$ be the projection. Then $\pi_2\theta\Sigma(h,h')=mh'$, so $\ker(\pi_2\theta)^o=\Sigma(H\times H'[m])^o=(H+H'[m])^o=H$. \item $L\eta$ takes values in the discrete group $\ker(\exp)^n$ on $LH$, so by connectedness and continuity $L\eta$ is zero on $LH$. Conversely, $\exp(\ker(L\eta))$ is a divisible subgroup of $\ker(\eta)$, and hence is contained in $\ker(\eta)^o = H$. So $\ker(L\eta)$ is a subgroup of $\exp^{-1}(H)$ containing $LH$; but $\ker(L\eta)$ is a $\C$-subspace so is connected, so $\ker(L\eta)=LH$. \end{enumerate} \end{proof} \begin{remark} \lemref{lem:subgroupsEndomorphisms} (i) can fail for $\G$ a semiabelian variety.
\end{remark} \begin{proposition} \label{prop:OModMods} If $K$ is an algebraically closed field extension of $k_0$, any surjective $\mathcal{O}$-module homomorphism $\rho : V \twoheadrightarrow \G(K)$ from a divisible torsion-free $\mathcal{O}$-module $V$ with finitely generated kernel is a model of $\widehat{T}$, where $\widehat{H}$ is interpreted as the kernel of the action of $\eta$ on $V^n$ if $H$ is the connected component of the kernel of $\eta\in\End(\G^n) \cong \operatorname{Mat}_{n,n}(\mathcal{O})$, and $\rho_n(x) := \rho(x/n)$. \end{proposition} \begin{proof} We appeal to \lemref{lem:homMod}. We will see in the course of the proof that $\widehat{H}$ is indeed the divisible part of $\rho^{-1}(H)$, as assumed in that lemma, hence in particular that $\widehat{H}$ is well-defined. We use the following elementary principle, which we will call (*): if $A,B,F$ are subgroups of a torsion-free abelian group, $A$ and $B$ are divisible, and $F$ is finitely generated, and if $A \leq B + F$, then $A \leq B$. Suppose $H = \ker(\eta)^o \leq \G^n$, and $\widehat{H} = \ker(\eta)$. We show that $\rho_k(\widehat{H})=H$ for all $k$. By working in $\G^n$, we may assume $n=1$. Let $\Lambda := \ker\rho \leq V$, and let $\Lambda_0 \leq V$ be the divisible hull of $\Lambda$. \begin{claim} $\eta(\Lambda_0) = \im\eta\cap\Lambda_0$. \end{claim} \begin{proof} First, note that $\eta(\Tor(\G)) = \Tor(\im\eta)$. Indeed, if $n\eta(g) = 0$, then $ng \in \ker\eta$, so $mng \in (\ker\eta)^o$ for some $m$; then by divisibility of $(\ker\eta)^o$, say $h \in (\ker\eta)^o$ with $mnh=mng$. Then $\eta(g-h) = \eta(g)$ and $g-h \in \Tor(\G)$. Note also that $\rho(\Lambda_0) = \Tor(\G)$: any torsion point of $\G$ lifts under $\rho$ to an element of $V$ with a multiple in $\Lambda$, hence to an element of $\Lambda_0$. So for $x \in \im\eta\cap\Lambda_0$ we have $\rho(x) \in \Tor(\im\eta) = \eta(\Tor(\G)) = \rho(\eta(\Lambda_0))$, and hence $\im\eta\cap\Lambda_0 \leq \eta(\Lambda_0) + \Lambda$, so by (*) already $\im\eta\cap\Lambda_0 \leq \eta(\Lambda_0)$. The converse is immediate. \end{proof} Now since $\Lambda$ is finitely generated, $\eta(\Lambda)$ is a finite index subgroup of $\im\eta \cap \Lambda$. By the snake lemma (see diagram), it follows that $\rho(\widehat{H})$ is of finite index in $\ker(\eta)$. So by divisibility of $\widehat{H}$, we have $\rho(\widehat{H})=\ker(\eta)^o=H$, and then $\rho_k(\widehat{H})=\rho(\widehat{H})=H$ for all $k$. So \axref{ax:epic} holds. \[ \xymatrix{ & & \Lambda \ar[r]\ar[d] & \Lambda\cap\im\eta \ar[r]\ar[d] & \cdots \\ & \widehat{H} \ar[r]\ar[d] & V \ar[r]\ar[d]_\rho & \im\eta \ar[r]\ar[d] & 0\\ 0 \ar[r] & \ker\eta \ar[r]\ar[d] & \G \ar[r]\ar[d] & \im\eta \\ \cdots \ar[r] & \operatorname{Finite} \ar[r] & 0 } \] Hence $\rho^{-1}(H) = \widehat{H} + \Lambda$, so by (*), $\widehat{H}$ is the divisible part of $\rho^{-1}(H)$. Finally, we verify \axref{ax:proj}. Let $\pr : G \twoheadrightarrow H$ be as in that axiom. By \axref{ax:epic}, \[ \rho(\pr(\widehat{G})) = \pr(\rho(\widehat{G})) = \pr(G) = H = \rho(\widehat{H}) ,\] so $\pr(\widehat{G}) + \Lambda = \widehat{H} + \Lambda$, so by (*), $\pr(\widehat{G}) = \widehat{H}$. \end{proof} \subsection{Kummer theory} Suppose now that $k_0$ is a number field. Using this assumption, we may appeal to Kummer theory to reduce consideration of the prime model to consideration of the kernel. This is essentially the same argument as in \cite[Lemma 4]{MishaTrans}. Recall $T = \Th(\G(\C))$ in the language with a predicate for each subvariety defined over $k_0$ of a cartesian power of $\G$. \begin{lemma} \label{lem:kummer} Suppose $\widetilde{M}_0 \vDash \widehat{T}$ with $M_0 = \rho(\widetilde{M}_0) = \G(\Qbar)$, the prime model of $T$.
Then $\widetilde{M}_0$ is constructible over $\ker(\widetilde{M}_0)$. \end{lemma} \begin{proof} Write $\ker$ for $\ker(\widetilde{M}_0)$. We use notation and results from \secref{sec:kummer}. By \lemref{lem:atomCons}, it suffices to show atomicity. Let $\widetilde{c} \in \widetilde{M}_0$. Let $H+\zeta$ be the minimal torsion coset (see \secref{sec:openness}) containing $c$. Then $\widetilde{c} \in \widehat{H}+\widetilde{\zeta}$ for some $\widetilde{\zeta} \in \Q\ker$. By translating, we may assume $\widetilde{\zeta}=0$, so then $\widetilde{c} \in \widehat{H}$ and $H$ is the minimal torsion coset containing $c$. By \propref{prop:kummer}, the image of the Kummer pairing is open, \[ Z_\infty := \pairing{\Gal(k_0(c,\G[\infty]))}{c} \leq _{\operatorname{op}} T_\infty^{H} ,\] so $nT_\infty^{H} \leq Z_\infty$ for some $n$, so \[ \tp^{T}(c_n/\G[\infty]) \cup \bigcup_i \{ c_i \in H \} \cup \bigcup_{k,m} \{ [m] c_{km} = c_k \} \vDash \tp^{T}(\widehat{c}/\G[\infty]) .\] So since $\widehat{\ker} = G[\infty]$, it follows by \lemref{lem:typesKerPres} that \[ \tp^{T}(c_n/\G[\infty]) \cup \{\widetilde{c} \in \grploc(\widetilde{c}/\ker) \} \vDash \tp(\widetilde{c}/\ker) .\] But $\tp^{T}(c_n/\G[\infty])$ is isolated since $c_n \in \G(\Qbar)$, so $\tp(\widetilde{c}/\ker)$ is isolated as required. \end{proof} \subsection{Categoricity and characterisation} We continue to assume that $k_0$ is a number field. Combining \lemref{lem:kummer} with \thmref{thm:classification}, and using uncountable categoricity of $T$ to simplify the latter, we conclude: \begin{theorem} \label{thm:classAbelian} A model $\widetilde{M}$ of $\widehat{T}$ is determined up to isomorphism over $\ker(\widetilde{M})$ by \begin{enumerate}[(i)]\item the isomorphism type of $\ker(\widetilde{M})$, equipped with all structure induced from $\widehat{T}$ \item the transcendence degree of $K_M$, where $M \cong \G(K_M)$. \end{enumerate} \end{theorem} \begin{proof} Suppose $\widetilde{M}^1,\widetilde{M}^2 \vDash \widehat{T}$, and $\ker(\widetilde{M}^1) \cong \ker(\widetilde{M}^2)$ and $\trd(K_{M^1}) = \trd(K_{M^2})$. Let $\widetilde{M}^i_0$ be the inverse image of $M^i_0 := \G(\Qbar) \prec M^i$. Then by \lemref{lem:kummer} and the minimality of $\G(\Qbar)$ over $\emptyset $, the isomorphism $\ker(\widetilde{M}^1) \cong \ker(\widetilde{M}^2)$ extends to an isomorphism $\widetilde{M}^1_0 \cong \widetilde{M}^2_0$. The induced isomorphism $M^1_0 \cong M^2_0$ extends to an isomorphism $M^1 \cong M^2$; indeed, it induces a field automorphism of $\Qbar$ over $k_0$, which by the equality of transcendence degrees extends to an isomorphism $K_{M^1} \cong K_{M^2}$, inducing an isomorphism $M^1 \cong M^2$. We conclude by \thmref{thm:classification}. \end{proof} \begin{corollary} \label{cor:categoricity} The model $L\G \vDash \widehat{T}$ is the unique $\widehat{L}$-structure $\widetilde{M}$ satisfying: \begin{enumerate}[(I)]\item $\widehat{T}$ \item $|\widetilde{M}| = 2^{\aleph_0}$ \item $\ker^{\widetilde{M}} \isom \ker^{L\G}$, a partial $\widehat{L}$-isomorphism. \end{enumerate} Moreover, for any such $\widetilde{M}$, the isomorphism of (III) extends to an isomorphism of $\widetilde{M}$ with $L\G$. \end{corollary} \begin{theorem}[\thmref{thm:catAbelian}] Suppose $\rho,\rho' : L\G \twoheadrightarrow \G(\C)$ are surjective $\mathcal{O}$-module homomorphisms, $\ker\rho'=\ker\rho$, and $\rho'\restricted_{\spanofover{\ker\rho'}{\Q}} = \rho\restricted_{\spanofover{\ker\rho}{\Q}}$. 
Then there exists an $\mathcal{O}$-module automorphism $\sigma\in\Aut_\mathcal{O}(L\G/\ker\rho)$ and a field automorphism $\tau\in\Aut(\C/k_0)$ of $\C$ fixing $k_0$ such that $\tau\rho'=\rho\sigma$. \end{theorem} \begin{proof} Let $\widetilde{M}$ and $\widetilde{M}'$ be the corresponding $\widehat{L}$-structures. By \propref{prop:OModMods}, they are models of $\widehat{T}$. By the QE and the assumption on the kernels, the structure induced on $\ker\rho$ by the two structures is the same, and the transcendence degrees are both $2^{\aleph_0}$. So by \thmref{thm:classAbelian}, $\widetilde{M}' \cong \widetilde{M}$ as $\widehat{L}$-structures, by an isomorphism fixing $\ker\rho$. Since the graphs of addition and of the action of each $\eta\in\mathcal{O}$ are interpretations of appropriate $\widehat{H}$, this isomorphism induces an $\mathcal{O}$-module automorphism $\sigma$ of $L\G$, and by the choice of language for $T$ it induces a field automorphism $\tau$ over $k_0$. \end{proof} Understanding the structure of $\ker$ involves an understanding of the action of Galois on the torsion, which in general is known to be a hard problem. But let us highlight a strengthening of \thmref{thm:classAbelian} in the case of the characteristic 0 multiplicative group: \begin{theorem} Let $\G=\G_m(\C)$. Then a model $\widetilde{M}$ of $\widehat{T}$ is determined up to isomorphism by the transcendence degree of the algebraically closed field $K$ such that $M \cong \G_m(K)$, and the isomorphism type of $\ker\rho$ as an abstract group. \end{theorem} \begin{proof} This is immediate from \thmref{thm:classAbelian} once we see that the isomorphism type of $\ker$ as an $\widehat{L}$-structure is determined by its isomorphism type as an abstract group. But this follows easily from the quantifier elimination and the fact from cyclotomic theory that any group automorphism of the roots of unity is a Galois automorphism. \end{proof} \begin{remark} In the case of an elliptic curve $\G = E$ there are only finitely many isomorphism types for a kernel with underlying group $\seq{\Z^2;+}$ (\cite{MishaTrans}, \cite[Theorem~4.3.2]{BaysThesis}). See also \cite[IV.6.3,IV.7.4]{GavThesis} for some discussion of the higher dimensional situation. \end{remark} \begin{question} The assumption that $k_0$ is a number field was used in \lemref{lem:kummer}. It is natural to ask whether this is essential. Does an appropriate version of Kummer theory go through for abelian varieties over function fields? We are not aware of this question being fully addressed in the literature, but \cite[Theorem~5.4]{BertrandLuminy} goes some way toward it. \end{question} \section{Further examples} \label{sec:examples} In this section, we make some brief remarks on some other natural examples of \thmref{thm:classification}. \subsection{Positive characteristic} We cannot in general expect to improve on \thmref{thm:classification} in positive characteristic: if $\G$ is the multiplicative group of a characteristic $p>0$ algebraically closed field, then the prime model is $\G(\GF_p^{\alg})$, which is also the torsion group of $\G$. In this case, we recover the main theorem, 2.2, of \cite{BZCovers}. \subsection{Manin kernels} In the theory $\operatorname{DCF}$ of differentially closed fields of characteristic 0, the Kolchin closures of the torsion of semiabelian varieties, also known as {\em Manin kernels}, are commutative divisible groups of finite Morley rank.
A connected definable subgroup of such a Manin kernel is the Manin kernel of its Zariski closure, so Manin kernels of semiabelian varieties are rigid. Our classification theorem therefore applies to this case. By considering a local analytic trivialisation, a natural analytic model of $\widehat{T}$ for $\G$ a (non-isoconstant) Manin kernel can be given; this will be addressed in future work. \subsection{Meromorphic Groups} \label{ssec:meroGrp} Let $\G$ be a connected meromorphic group in the sense of \cite{PSMeroGrp}, i.e.\ a connected definable group in the structure $\mathcal{A}$ of compact complex spaces definable over $\emptyset $ (equivalently, over $\C$). By \cite[Fact~2.10]{PSMeroGrp}, $\G$ can be uniquely identified with a complex Lie group. Considering $\G$ with its induced structure, it is a finite Morley rank group. Suppose $\G$ is commutative and rigid. By the classification in \cite{PSMeroGrp} and the fact that any commutative complex linear algebraic group is a product of copies of $\G_m$ and $\G_a$, there is a definable exact sequence of Lie groups \[ 0 \rightarrow \G_m^n \rightarrow \G \rightarrow H \rightarrow 0 \] where $H$ is a complex torus. It is also shown in \cite{PSMeroGrp} that $\G$ is definable in a K\"{a}hler space; the latter may be considered in a countable language by \cite{MoosaKahler}, so we may consider the language of $\G$ to be the induced countable language. Let $T=\Th(\G)$. In particular, in the case that $\G$ is a complex semiabelian variety, we may take the language to be that induced from the field, as in \secref{sec:abelian} above. Now let $L\G$ be the analytic universal cover of the Lie group $\G$, considered as an $\widehat{L}$-structure as in \ssecref{ssec:lie}. By \propref{prop:analMod}, $L\G \vDash \widehat{T}$. So by \thmref{thm:classification}, $L\G$ is the unique kernel-preserving extension of its restriction to the prime model $\G_0$ of $\G$, which is a countable structure. \begin{question} Could the Kummer theory of \lemref{lem:kummer} apply here? Concretely: is $\rho^{-1}(\G_0)$ atomic over $\ker$? \end{question} \appendix \section{Kummer theory for $A\times\G_m^n$} \label{sec:kummer} In this appendix, we show that the results on Kummer theory for abelian varieties over number fields apply also to semiabelian varieties of the form $A\times\G_m^n$ for $A$ an abelian variety over a number field. This should perhaps be considered a known result, but we could find no complete proof in the literature. Our approach owes much to Daniel Bertrand. In the case that $\G=A$, the Kummer theoretical result we require is precisely \cite[Theorem~5.2]{BertrandLuminy}; the purpose of this appendix is to show that this result holds also for $A\times\G_m^n$, with a mostly parallel proof. As in that article, the method we apply is essentially that of Ribet's paper \cite{RibetKummer}. We should note that for general semiabelian varieties over number fields, Kummer theory is known to fail due to the existence of {\em deficient points}; see \cite{JacquinotRibet84}. \subsection{Finiteness theorems for abelian varieties} Let $A$ be an abelian variety over a number field $k_0$, let $T^{A}_l := \invlim_n A[l^n]$ for $l$ prime be the Tate modules, and let $T^{A}_\infty := \invlim_n A[n] = \prod_l T^{A}_l$. The following result on Galois cohomology is a consequence of Serre's uniform version of Bogomolov's result on homotheties. Here and below, $H^i$ refers to continuous group cohomology.
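We will also use Sah's Lemma, which we recall in the form we need: if $\sigma$ is a central element of a group $G$ and $M$ is a $G$-module, then $H^1(G,M)$ is annihilated by the endomorphism $m \mapsto \sigma m - m$ of $M$; the usual cocycle argument applies equally to continuous cohomology of a profinite group acting continuously.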
\begin{fact} \label{fact:serre} $H^1(\Gal(k_0(A[\infty])/k_0),A[n])$ has uniformly bounded finite exponent, i.e.\ there exists $c>0$ such that for all $n>0$, we have $c\cdot H^1(\Gal(k_0(A[\infty])/k_0),A[n]) = 0$. \end{fact} \begin{proof} Let $G_\infty := \Gal(k_0(A[\infty])/k_0)$. Note that $H^1(G_\infty,A[n])$ admits a prime power decomposition as $\prod_i H^1(G_\infty,A[l_i^{k_i}])$ where $n=\prod_i l_i^{k_i}$. By \cite[Th\'eor\`eme 2', ``R\'esum\'e des cours de 1985-1986'', proved in ``Lettre \`a Ken Ribet du 7/3/1986'' in the same volume]{SerreOeuvresIV}, there exists $M>0$ such that every $M$th power homothety is in the image of $G_\infty$, i.e.\ any element of $\Zhat^* = \powerseti_l \Z_l^*$ which is an $M$th power in that group is the action on $T^{A}_\infty$ of some element of $G_\infty$. In particular, there is $\sigma \in G_\infty$ which acts on $T^{A}_l$ as multiplication by $2^{M}$ for $l \neq 2$, and acts on $T^{A}_2$ as the identity. Then $\sigma$ is central in $G_\infty$, so by Sah's Lemma, $H^1(G_\infty,T^{A}_\infty)$ and each $H^1(G_\infty,A[n])$ are annihilated by $\sigma-1$. Then if $l$ is an odd prime which does not divide $2^{M}-1$, so $2^{M}-1 \in \Z_l^*$, we have $H^1(G_\infty,A[l^k])=0$ for all $k$. Let $2=l_0,l_1,\ldots,l_s$ be the remaining primes, and let $p \notin \{l_0,\ldots,l_s\}$ be another prime. Then by the same argument, $p^{M}-1$ annihilates each $H^1(G_\infty,A[l_i^k])$. So $p^{M}-1$ annihilates each $H^1(G_\infty,A[n])$. \end{proof} The second ingredient is the following result of Faltings, sometimes referred to, after Lang, as Finiteness I \cite[IV.2]{LangSurveyDioph}. Here, a \underlinestyle{$k_0$-isogeny} is an isogeny defined over $k_0$; similarly for \underlinestyle{$k_0$-isomorphism}. \begin{fact}[Faltings] \label{fact:faltings} The algebraic groups which are $k_0$-isogenous to $A$ fall into finitely many $k_0$-isomorphism classes. \end{fact} \subsection{Generalisations to $A \times \G_m^n$} Let $\G = A \times \G_m^n$ with $A$ an abelian variety over a number field $k_0$. We check that the results of the previous section imply the corresponding results for $\G$. \begin{lemma} \label{lem:serre} $H^1(\Gal(k_0(\G[\infty])/k_0),A[n])$ has uniformly bounded finite exponent, i.e.\ there exists $c>0$ such that for all $n>0$, we have $c\cdot H^1(\Gal(k_0(\G[\infty])/k_0),A[n]) = 0$. \end{lemma} \begin{proof} By Hilbert 90, $H^1(\Gal(k_0(\G[\infty])/k_0),\mu_m) = 0$. Meanwhile, $k_0(\G[\infty]) = k_0(A[\infty])$ since the multiplicative roots of unity are rational over $k_0(A[\infty])$, via a Weil pairing. So \begin{align*} H^1(\Gal(k_0(\G[\infty])/k_0),\G[n]) &\cong H^1(\Gal(k_0(\G[\infty])/k_0),A[n]) \\ &= H^1(\Gal(k_0(A[\infty])/k_0),A[n]) , \end{align*} and we conclude by \factref{fact:serre}. \end{proof} \begin{lemma} \label{lem:faltings} The algebraic groups which are $k_0$-isogenous to $\G$ fall into finitely many $k_0$-isomorphism classes. \end{lemma} \begin{proof} \providecommand{\dual}[1]{{#1}^\vee} Let $T := \G_m^n$. Recall (see e.g.\ \cite[10]{SerreAbelian}) that a semiabelian variety which falls into an exact sequence $0 \rightarrow T \rightarrow S \rightarrow A \rightarrow 0$ corresponds to a point in the $n$th power of the dual abelian variety of $A$, \[ \Ext(A,T) \cong \Ext(A,\G_m)^n \cong (\dual{A})^n .\] Let $\G'$ be $k_0$-isogenous to $\G$, so $\G' \cong \G/Z$ for $Z \leq \G$ a finite subgroup defined over $k_0$. Since $\G / (Z\cap T)$ is $k_0$-isomorphic to $\G$, we may assume $Z\cap T = 0$.
Let $\pi_1 : \G \rightarrow A$ and $\pi_2 : \G \rightarrow T$ be the projections of the product. Let $A' := A / \pi_1(Z)$ be the quotient abelian variety. So $\G'$ is an extension of $A'$ by $T$, and so $\G'$ corresponds to an element $[\G']$ of $\Ext(A',T) \cong (\dual{A'})^n$. \begin{claim} $[\G']$ is a torsion element of $\Ext(A',T)$. \end{claim} \begin{proof} Let $k$ be the exponent of the finite group $\pi_2(Z) \leq T$. Then the $k$-fold Baer sum $[k]\G'$ of $\G'$ in $\Ext(A',T)$ is split. Indeed, $[k]\G'$ is the $k$-fold fibre product of $\G'$ over $A'$, quotiented by the subgroup $\Sigma := \{ \Sigma_i \alpha_i = 0 \;|\; \alpha_i \in T \} \leq T^k \leq A'^k$. Then the trivialisation $x \mapsto (x,0)$ of $\G = A \times T$ induces a trivialisation of $[k]\G'$, $x + \pi_1(Z) \mapsto ((x,0)+Z, \ldots, (x,0)+Z) + \Sigma$; this is well-defined as $((x,0)+Z) - ((x+\pi_1\zeta,0)+Z) = (0,\pi_2\zeta)+Z$, and $(\pi_2\zeta, \ldots, \pi_2\zeta) \in \Sigma$ since $k\pi_2\zeta=0$. \end{proof} Now since $\G'$ is defined over $k_0$, so is $A'$ and so is the torsion point $[\G']$ of $(\dual{A'})^n$. By \factref{fact:faltings}, there are only finitely many such $A'$ up to $k_0$-isomorphism, and by Mordell-Weil each has only finitely many $k_0$-rational torsion points. Hence, there are only finitely many possibilities for $\G'$ up to $k_0$-isomorphism. \end{proof} \subsection{Group structure of $\G(k_0(\G[\infty]))$} \begin{definition} If $\Gamma'$ is a subgroup of an abelian group $\Gamma$, let $\pureHull_{\Gamma}(\Gamma') := \{\gamma \in \Gamma \;|\; \exists n>0 \qsep n\gamma\in\Gamma'\} \leq \Gamma$. An abelian group $\Gamma$ is {\em locally free modulo torsion} if for any finitely generated subgroup $\Gamma' \leq \Gamma$, there exists $m$ such that $m\cdot\pureHull_{\Gamma}(\Gamma') \leq \Gamma' + \Tor(\Gamma)$. \end{definition} Now let $k_0$ be a number field, let $A$ be an abelian variety over $k_0$, and let $\G = A \times \G_m^n$ be the product with an algebraic torus. Let $k_\infty := k_0(\G[\infty])$. \begin{lemma} \label{lem:boundedDiv} $\G(k_\infty)$ is locally free modulo torsion. \end{lemma} \begin{remark} By countability of $\G(k_\infty)$ and a theorem of Pontryagin \cite[19.1]{FuchsInfAb}, an equivalent statement is that the quotient group $\G(k_\infty) / \G[\infty]$ is free abelian. For $\G$ an abelian variety over a number field, this is proven by Larsen in \cite{LarsenMWTor}. This lemma generalises that result, using similar techniques. \end{remark} \begin{proof} Let $\Gamma \leq \G(k_\infty)$ be a finitely generated subgroup. Replacing $k_0$ by the number field $k_0(\Gamma)$ if necessary, we assume $\Gamma \leq \G(k_0)$. First, we see that $\G(k_0) = A(k_0) \times \G_m^n(k_0)$ is free modulo torsion. We use Dirichlet's Unit theorem to examine the group structure of $\G_m(k_0) = k_0^*$. Here, we are following \cite[Lemma 2.1]{ZCovers}. \providecommand{\valring}{\mathcal{O}} Let $\valring_{k_0}$ be the ring of integers of $k_0$. By Dirichlet's Unit theorem, $\valring_{k_0}^*$ is finitely generated. Recall that $\valring_{k_0}$ is a Dedekind domain and the fractional ideals, $\operatorname{Id}(\valring_{k_0})$, form a free abelian group with generators the prime ideals. We have an exact sequence \[ \xymatrix{ 1 \ar[r] & \valring_{k_0}^* \ar[r] & k_0^* \ar[r]^-\theta & \operatorname{Id}(\valring_{k_0}) } ,\] where $\theta(x) := x\valring_{k_0}$. The image of $\theta$ is a subgroup of a free abelian group, so is free abelian.
Meanwhile, $A(k_0)$ is finitely generated by the Mordell-Weil theorem. So $\G(k_0)$ is an extension of a free abelian group by a finitely generated group, so the quotient by the torsion is an extension of free abelian by free abelian, so is free abelian. Hence $\G(k_0)$ is locally free modulo torsion. So say $m$ is such that $m\cdot\pureHull_{\G(k_0)}(\Gamma) \leq \Gamma + \G[\infty]$. Meanwhile, by \lemref{lem:serre}, say $c \cdot H^1(\Gal(k_\infty/k_0),\G[n]) = 0$ for all $n$. We conclude by showing $mc\cdot\pureHull_{\G(k_\infty)}(\Gamma) \leq \Gamma + \G[\infty]$. Indeed, suppose $\gamma \in \pureHull_{\G(k_\infty)}(\Gamma)$, say $\gamma \in \G(k_\infty)$ and $n\gamma \in \Gamma \leq \G(k_0)$. Then $\theta(\sigma) := \sigma\gamma - \gamma$ yields an element of $H^1(\Gal(k_\infty/k_0),\G[n])$. So $c\theta$ is a coboundary, so there is $\zeta \in \G[n]$ such that $c(\sigma\gamma - \gamma) = \sigma\zeta-\zeta$ for all $\sigma \in \Gal(k_\infty/k_0)$, so $c\gamma - \zeta \in \G(k_0)$. So $c\gamma - \zeta \in \pureHull_{\G(k_0)}(\Gamma)$, so $mc\gamma \in \Gamma + \G[\infty]$. \end{proof} \subsection{Openness} \label{sec:openness} Let $\G = A \times \G_m^n$ as above. Let $\mathcal{O} := \End(\G) \cong \End(A) \times \End(\G_m^n)$. By taking a finite field extension if necessary, we assume that each $\eta \in \mathcal{O}$ is defined over the number field $k_0$. We define the Kummer pairings for $\G$ as follows: if $k \geq k_0$, and $\gamma \in \G(k)$ and $\sigma\in\Gal(k)$, let $\pairing{\sigma}{\gamma}_n := \sigma\alpha - \alpha \in \G[n]$ for any $\alpha \in \G(\bar{k})$ with $n\alpha=\gamma$, and let $\pairing{\sigma}{\gamma} := (\pairing{\sigma}{\gamma}_n)_n \in T_\infty^{\G}$. A {\em torsion coset} in $\G$ is the translate $H+\zeta$ of a connected algebraic subgroup $H \leq \G$ by a torsion point $\zeta \in \G[\infty]$. By considering the torsion group, one sees that $T^{H}_\infty$ for such an $H$ is isomorphic to a finite power of $\Zhat$, and so a subgroup $Z$ of $T^{H}_\infty$ is open in the profinite topology, $Z \leq _{\operatorname{op}} T^{H}_\infty$, if and only if it is of finite index. \begin{proposition} \label{prop:kummer} Let $\gamma \in \G(k_\infty)$. Suppose $H+\zeta$ is the minimal torsion coset containing $\gamma$. Then $Z_\infty := \pairing{\Gal(k_\infty)}{\gamma} \leq _{\operatorname{op}} T^{H}_\infty \leq T^{\G}_\infty$. \end{proposition} \begin{remark} In the case that $\G$ is an abelian variety, this is exactly \cite[Theorem~5.2]{BertrandLuminy}. \end{remark} \begin{proof} Since $\pairing{\Gal(k_\infty)}{\zeta} = 0$, by shifting by $\zeta$ we may assume $\gamma \in H$. Replacing $k_0$ with $k_0(\gamma)$ if necessary, we may assume $\gamma \in \G(k_0)$. By \lemref{lem:subgroupsEndomorphisms} and the assumption that the endomorphisms are over $k_0$, we have that $H$ is defined over $k_0$. So since $H$ is divisible, $Z_\infty \leq T^{H}_\infty$. It remains to see that the index is finite. Now $\G(k_\infty)$ is an $\mathcal{O}$-submodule of $\G(\Qbar)$ by the assumption that the endomorphisms are over $k_0 \leq k_\infty$, and $\mathcal{O}\gamma$ is a finitely generated subgroup since $\mathcal{O}$ is finitely generated. So by \lemref{lem:boundedDiv}, say $m>0$ is such that $m\cdot\pureHull_{\G(k_\infty)}(\mathcal{O}\gamma) \leq \mathcal{O}\gamma + \G[\infty]$. For $n>0$, let $Z_n := \pairing{\Gal(k_\infty)}{\gamma}_n \leq \G[n]$. 
Note that $Z_n$ is defined over $k_0$; indeed, if $\sigma \in \Gal(k_\infty)$ and $\tau \in \Gal(k_0)$, then $\sigma^{\tau^{-1}} \in \Gal(k_\infty)$ and \[ \pairing{\sigma^{\tau^{-1}}}{\gamma}_n = \tau\sigma\tau^{-1}\alpha - \alpha = \tau(\sigma(\tau^{-1}\alpha) - \tau^{-1}\alpha) = \tau\pairing{\sigma}{\gamma}_n \] (where $n\alpha = \gamma$, and hence $n\tau^{-1}\alpha=\gamma$). So $(Z_n)^\tau = Z_n$. So by \lemref{lem:faltings}, the isogenous groups $B_n:=\quot{\G}{Z_n}$ fall into finitely many $k_0$-isomorphism classes. Therefore we may find $N$ such that for any $n$, there exists a $k_0$-isogeny $\theta_n : B_n \maps \G$ of degree $\deg\theta_n := \left|{\ker\theta_n}\right|$ dividing $N$. We conclude the proof of the Proposition by showing that for any $n$, the index $[H[n]:Z_n]$ divides $N \cdot \left|{\G[m]}\right|$. Indeed, let $\eta \in \mathcal{O}$ be the composition $\eta(x) := \theta_n(\quot{x}{Z_n})$ of $\theta_n$ with the quotient map. Suppose $n\beta=\gamma$. Then $n\eta\beta=\eta\gamma$. But $\eta\beta$ is $\Gal(k_\infty)$-invariant; indeed, $Z_n \leq \ker(\eta)$ and $\eta$ is defined over $k_0 \leq k_\infty$, so \[ \sigma\eta\beta = \eta\sigma\beta = \eta(\beta + \pairing{\sigma}{\gamma}_n) = \eta\beta .\] So $\eta\beta \in \pureHull_{\G(k_\infty)}(\mathcal{O}\gamma)$, so $m\eta\beta \in \mathcal{O}\gamma + \G[\infty]$. So $m\eta\gamma \in n\mathcal{O}\gamma + \G[\infty]$, so $k(m\eta-n\eta')\gamma = 0$ for some $k>0$ and some $\eta' \in \mathcal{O}$. So by the choice of $H$, we have $m\eta = n\eta'$ on $H$. Hence $m\eta(H[n])=0$, i.e.\ $\theta_n(\quot{H[n]}{Z_n})\leq \G[m]$, and hence \begin{align*} [H[n]:{Z_n}] \quad&\operatorname{divides}\quad \left|{\ker\theta_n}\right| \cdot \left|{\G[m]}\right| \\ &\operatorname{divides}\quad N \cdot \left|{\G[m]}\right| .\end{align*} \end{proof} \end{document}
\begin{document} \title[Existence of isoperimetric sets with densities ``converging from below'' on $\mathbb R^N$]{Existence of isoperimetric sets with densities ``converging from below'' on $\mathbb R^N$} \author{Guido De Philippis}\address{Institut f\"ur Mathematik, Universit\"at Z\"urich, Winterthurerstr. 190, CH-8057 Z\"urich (Switzerland)}\email{[email protected]} \author{Giovanni Franzina}\address{Department Mathematik, University of Erlangen, Cauerstr. 11, 91058 Erlangen (Germany)}\email{[email protected]} \author{Aldo Pratelli}\address{Department Mathematik, University of Erlangen, Cauerstr. 11, 91058 Erlangen (Germany)}\email{[email protected]} \begin{abstract} In this paper, we consider the isoperimetric problem in the space $\mathbb R^N$ with density. Our result states that, if the density $f$ is l.s.c. and converges to a limit $a>0$ at infinity, with $f\leq a$ far from the origin, then isoperimetric sets exist for all volumes. Several known results or counterexamples show that the present result is essentially sharp. The special case of our result for radial and increasing densities positively answers a conjecture made in~\cite{PM13}. \end{abstract} \maketitle \section{Introduction} In this paper we are interested in the isoperimetric problem with a weight. This means that we are given a positive l.s.c. function $f:\mathbb R^N\to \mathbb R^+$, usually called ``density'', and we measure volume and perimeter of a generic subset $E$ of $\mathbb R^N$ as \begin{align*} |E|_f:= {\mbox{\script H}\,\,}_f^N(E)= \int_E f(x) \, d {\mbox{\script H}\,\,}^N\,, && P_f(E) : = {\mbox{\script H}\,\,}_f^{N-1}(\partial^M E)=\int_{\partial^M E} f(x)\, d{\mbox{\script H}\,\,}^{N-1}(x)\,, \end{align*} where the \emph{essential boundary} of $E$ (which coincides with the usual topological boundary as soon as $E$ is regular) is defined as \[ \partial^M E = \bigg\{ x\in \mathbb R^N: \liminf_{r\searrow0} \frac{{\mbox{\script H}\,\,}^N(E\cap B_r(x))}{\omega_Nr^N}<1 \ \hbox{and} \ \limsup_{r\searrow 0} \frac{{\mbox{\script H}\,\,}^N(E\cap B_r(x))}{\omega_Nr^N}>0 \bigg\}\,, \] $B_r(x)$ stands for the ball of radius $r$ centered at $x$, and $\omega_N$ is the euclidean volume of a ball of radius 1. This problem, and many specific cases, have been extensively studied in the last decades and have many important applications; a short (highly incomplete) list of some related papers is~\cite{Alm,BigReg,M1,MJ,newM1,BCMR,CMV,CRS,D,PM13,CP12}.\par The first interesting question in this setting is of course the existence of \emph{isoperimetric sets}, that is, sets $E$ with the property that $P_f(E)= \mathfrak J(|E|_f)$ where, for any $V\geq 0$, \begin{equation*} \mathfrak J(V):=\inf\big\{P_f(F):\, |F|_f=V\big\}\,. \end{equation*} Depending on the assumptions on $f$, the answer to this question may be trivial or extremely complicated. Let us start with a very simple, yet fundamental, observation. Fix a volume $V>0$ and let $\{E_i\}$ be an \emph{isoperimetric sequence} of volume $V$: this means that $|E_i|_f=V$ for every $i\in\mathbb N$, and $P_f(E_i) \to \mathfrak J(V)$. Thus, possibly up to a subsequence, the sets $E_i$ converge to some set $E$ in the $L^1_{\rm loc}$ sense. As a consequence, standard lower semi-continuity results in BV ensure that $P_f(E)\leq \liminf P_f(E_i)=\mathfrak J(V)$ (at least, for instance, if $f>0$\dots); therefore, if actually $|E|_f=V$, then obviously $E$ is an isoperimetric set.
Unfortunately, this simple observation is not sufficient, in general, to show the existence of isoperimetric sets, because there is no general reason why the volume of $E$ should be exactly $V$ (while it is obviously at most $V$).\par A second remark is the following: if the volume of the whole space $\mathbb R^N$ is finite, then in the argument above it becomes obvious that $|E|_f=V$; basically, the mass cannot escape to infinity. Hence, in this case isoperimetric sets exist for all volumes.\par Let us then consider the more general (and interesting) problem when $f\not\in L^1(\mathbb R^N)$. In this case, by the different scaling properties of volume and perimeter, roughly speaking we can say that ``isoperimetric sets like small density''. Let us be a little bit more precise: one can immediately check that, if two different balls $B_1$ and $B_2$ lie in two regions where the density is constantly $d_1$ resp. $d_2$, and if $|B_1|_f=|B_2|_f$, then $P_f(B_1)< P_f(B_2)$ as soon as $d_1<d_2$. More generally, all the simplest examples show that isoperimetric sets tend to privilege the zones where the density is lower, and it is very reasonable to expect that this behaviour is quite general. Of course, this argument does not predict anything in situations where the density varies quickly (for instance, it would be very convenient for a set to lie where the density is large if at the same time the boundary stays where the density is small!), but nevertheless having this ``general rule'' in mind may help a lot.\par With the aid of the above observation, let us now come back again to the question of the existence of isoperimetric sets. If the density $f$ converges to $0$ at infinity, one has to expect that isoperimetric sets do not exist (remember that we are assuming $\mathbb R^N$ to have infinite volume, otherwise the existence is always true). Indeed, in general a sequence of sets of given volume minimizing the perimeter diverges to infinity, to reach the zones with lowest density, and then actually the infimum of the perimeter for sets of any given volume is generally $0$.\par On the contrary, if the density $f$ blows up at infinity, one has to expect isoperimetric sets to exist: indeed, in this case the sequences minimizing the perimeter should remain bounded in order not to go where the density is high, and hence the limit of a minimizing sequence $\{E_i\}$ as above should have volume $V$, and then it would be an isoperimetric set. A complete answer to this question has been already given in~\cite{PM13}: if the density is also radial, then isoperimetric sets exist for every volume, as expected (Theorem~3.3 in~\cite{PM13}), but if the density is not radial, then the existence might fail (Proposition~5.3 in~\cite{PM13}), contrary to the intuition.\par Let us then pass to consider the case when the density, at infinity, is neither converging to $0$ nor diverging. Again, it is very simple to observe that existence generally fails if the density is decreasing, at least eventually; analogously, it is easy to build both examples of existence and of non-existence for oscillating densities (that is, densities for which the $\liminf$ and the $\limsup$, at infinity, are different). Summarizing, as far as the existence problem is concerned, the only interesting case left is when the density has a finite limit at infinity, and it is converging to that limit from below. This leads us to the following definition. \begin{definition}\label{defconvdens} We say that the l.s.c.
function $f:\mathbb R^N\to \mathbb R$ is \emph{converging from below} if there exists $0<a <+\infty$ such that $f(x) \to a$ when $|x|\to \infty$, and $f(x) \leq a$ for $|x|$ big enough. \end{definition} Basically, the observations above tell that, for functions $f$ which are not converging densities, there is in general no interesting open question about the existence issue. Indeed, as explained above, in each of these cases it is already known whether isoperimetric sets exist for all volumes or not. Conversely, for some special cases of densities converging from below, the existence problem has been already discussed. In particular, combining the results of~\cite{PM13} and~\cite{CP12}, the existence of isoperimetric sets follows for densities which are continuous and converging from below and which satisfy some technical assumptions, for instance it is enough that $f$ is superharmonic, or that $f$ is radial and for every $c>0$ there is some $R\gg 1$ for which $f(R)\leq a-e^{-cR}$. Moreover, in~\cite{PM13} it was conjectured that isoperimetric sets exist for all volumes if the density is radial and increasing.\par In this paper we are able to prove the existence result for any density converging from below (this is even stronger than the above-mentioned conjecture); as explained above, this result is sharp. \begin{theorem}\label{main} Let $f\in L^1_{\rm loc}(\mathbb R^N)$ be a density converging from below. Then, isoperimetric sets exist for every volume. \end{theorem} \section{General results about isoperimetric sets\label{sec:genfact}} In this section we present a couple of general facts about existence and boundedness of isoperimetric sets.\par As already briefly described in the Introduction, let us fix some $V>0$ and an \emph{isoperimetric sequence} of volume $V$, that is, a sequence of sets $E_j\subseteq \mathbb R^N$ such that $|E_j|_f=V$ for any $j$, and $P_f(E_j)\to \mathfrak J(V)$ for $j\to \infty$. As already observed, if (a subsequence of) $\{E_j\}$ converges in $L^1_{\rm loc}$ to a set $E$, then by lower semicontinuity $P_f(E)\leq \mathfrak J(V)$, and $|E|_f\leq V$; thus, the set $E$ is automatically isoperimetric of volume $V$ if $|E|_f=V$. However, it is always true that $E$ is isoperimetric \emph{for its own volume}. We stress that this fact is widely known, but we prefer to give the proof to keep the presentation self-contained, and also because we could not find in the literature any proof which works in such a generality. After this lemma, we will show that if there was loss of mass at infinity (that is, if $|E|_f< V$), then $E$ is necessarily bounded. \begin{lemma}\label{isopF} Assume that $f\in L^1_{\rm loc}(\mathbb R^N)$ and that $f$ is locally bounded from above far enough from the origin. Let $\{E_j\}$ be an isoperimetric sequence of volume $V$ converging in $L^1_{\rm loc}$ to some set $E$. Then, $E$ is an isoperimetric set for the volume $|E|_f$. If in addition $f$ is converging to some $a>0$, then \begin{equation}\label{thm1.2} \mathfrak J(V) = P_f(E) + N(\omega_N a)^{\frac 1N} (V-|E|_f )^{\frac{N-1}N}\,. \end{equation} \end{lemma} \begin{proof} Let us start proving that $E$ is isoperimetric. As we already observed, $P_f(E)\leq \mathfrak J(V)$ and $|E|_f\leq V$; as a consequence, if $|E|_f=V$ it is clear that $E$ is isoperimetric, and on the other hand if $|E|_f=0$ then the empty set $E$ is still clearly isoperimetric for the volume $0$. 
As a consequence, we can assume without loss of generality that $0< |E|_f < V$.\par Suppose now that the claim is false, and let then $F_1$ be a set satisfying \begin{align*} |F_1|_f=|E|_f\,, && \eta:=\frac{P_f(E)-P_f(F_1)}6 >0\,. \end{align*} Select now $x\in \mathbb R^N$ being a point of density $1$ in $F_1$ and a Lebesgue point for $f$ with $f(x)>0$: such a point exists, in particular ${\mbox{\script H}\,\,}^N_f$-a.e. point of $F_1$ can be taken. The assumptions on $x$ ensure that, for every radius $\bar r$ small enough, \begin{equation}\label{buonvol} \frac 12 \,\omega_N f(x) \bar r^N \leq |B_{\bar r}(x) \cap F_1|_f \leq |B_{\bar r}(x)|_f \leq 2 \omega_N f(x) {\bar r}^N\,, \end{equation} and in turn this implies that there exist arbitrarily small radii $r$ (not necessarily all those small enough) such that \begin{equation}\label{buonper} {\mbox{\script H}\,\,}^{N-1}_f \big(\partial B_r(x)\big) \leq 2 N\omega_N f(x) r^{N-1}\,. \end{equation} Indeed, if the last inequality were false for every $0<r<\bar r$, then by integrating we would get that~(\ref{buonvol}) is false.\par Analogously, let $y$ be a point of density $0$ for $F_1$ which is Lebesgue for $f$ with $f(y)>0$ (the existence of such a point requires that $f\notin L^1(\mathbb R^N)$, which on the other hand is surely true because $|E|_f<V$). Since we can find such a point arbitrarily far from the origin (and far from $x$), by assumption it is admissible to assume that $f\leq M$ in a small neighborhood of $y$. As a consequence, there exists some radius $\bar\rho>0$ such that, for every $0<\rho<\bar\rho$, \begin{align}\label{propdelta0} \big|B_\rho(y)\setminus F_1 \big|_f \geq \frac{f(y)}2\, \omega_N \rho^N\,, && {\mbox{\script H}\,\,}^{N-1}_f\big(\partial B_\rho(y)\big) \leq M N \omega_N \rho^{N-1}\,. \end{align} Let us now fix a constant $\delta>0$ such that (up to possibly decrease $\bar\rho$) \begin{align}\label{propdelta} \delta < \eta\,, && \frac{f(y)}2\, \omega_N \bar\rho^N > \delta\,, && M N \omega_N \bar\rho^{N-1} < \eta\,. \end{align} We claim the existence of some set $F\subseteq \mathbb R^N$ and of a big constant $R>0$ (in particular, much bigger than both $|x|$ and $|y|$) such that \begin{align}\label{defF} F \subseteq B_R\,, && P_f(F) < P_f(E) - 5\eta\,, && 0< \delta':= |E|_f -|F|_f< \frac \delta 2\,, \end{align} writing for brevity $B_R=B_R(0)$. To show this, it is useful to consider two possible cases. If $F_1$ is bounded, we define $F=F_1\setminus B_r(x)$ for some $r$ very small such that both~(\ref{buonvol}) and~(\ref{buonper}) hold true. Then, the inclusion $F\subseteq B_R$ is true for every $R$ big enough, and the two inequalities in~(\ref{defF}) immediately follow by~(\ref{buonvol}), (\ref{buonper}) and the definition of $\eta$ as soon as $r$ is sufficiently small. Instead, if $F_1$ is not bounded, then we define $F=F_1\cap B_R$ for a big constant $R$: of course the inclusion $F\subseteq B_R$ is automatically satisfied, and the inequality about $\delta'$ is also true for every $R$ big enough, say $R>R_0$. 
Concerning the inequality on $P_f(F)$, if it were false for every $R>R_0$, then for every $R>R_0$ it would be \[ {\mbox{\script H}\,\,}^{N-1}_f\big(F_1\cap \partial B_R\big) \geq \eta\,, \] and then by integrating we would get \[ V > |F_1|_f \geq |F_1\setminus B_{R_0}|_f = \int_{R_0}^{+\infty} {\mbox{\script H}\,\,}^{N-1}_f \big(F_1 \cap \partial B_R\big) = +\infty\,, \] and the contradiction shows the existence of some suitable $R$, thus the existence of $F$ satisfying~(\ref{defF}) is proved.\par We can now select some $R'>R$ such that \begin{align}\label{hypR} |E\setminus B_{R'}|_f < \frac{\delta'}2\,, && {\mbox{\script H}\,\,}^{N-1}_f(\partial E\cap B_{R'}) > P_f(E) - \eta\,. \end{align} Since $E_j\cap B_{R'}$ (resp., $E_j \cap B_{R'+1}$) converges in the $L^1$ sense to $E\cap B_{R'}$ (resp., $E \cap B_{R'+1}$), for every $j$ big enough we have \begin{gather} |E|_f -\delta' < |E_j\cap B_{R'}|_f \leq |E_j\cap B_{R'+1}|_f < |E|_f+ \delta'\,, \label{R'bigvol} \\ {\mbox{\script H}\,\,}^{N-1}_f(\partial E \cap B_{R'})\leq {\mbox{\script H}\,\,}^{N-1}_f(\partial E_j \cap B_{R'})+\eta\,. \label{R'bigper} \end{gather} Arguing as above, by~(\ref{R'bigvol}) we have \[ \delta> 2\delta' \geq \Big|E_j \cap \big(B_{R'+1}\setminus B_{R'}\big)\Big|_f = \int_{R'}^{R'+1} {\mbox{\script H}\,\,}^{N-1}_f (E_j\cap \partial B_t)\, dt\,, \] so we can find some $R_j\in (R',R'+1)$ such that, also recalling~(\ref{propdelta}), \begin{equation}\label{choiceR_j} {\mbox{\script H}\,\,}^{N-1}_f (E_j\cap \partial B_{R_j}) <\delta < \eta\,. \end{equation} Observe that, since $|E_j|=V$ by definition, (\ref{R'bigvol}) implies \[ V - |E|_f -\delta' < |E_j\setminus B_{R_j}|_f < V - |E|_f +\delta'\,. \] As a consequence, calling $G_j = F \cup \big( E_j \setminus B_{R_j}\big)$ and also recalling~(\ref{defF}), (\ref{hypR}), (\ref{R'bigper}) and~(\ref{choiceR_j}), we can estimate the volume of $G_j$ by \begin{equation}\label{volGj} |G_j|_f = |F|_f + |E_j \setminus B_{R_j}|_f = |E|_f - \delta' + |E_j \setminus B_{R_j}|_f \in (V- \delta, V)\,, \end{equation} and the perimeter of $G_j$ by \begin{equation}\label{perGj}\begin{split} P_f(G_j) &= P_f(F) + P_f(E_j\setminus B_{R_j})\\ &< P_f (E) - 5\eta + {\mbox{\script H}\,\,}^{N-1}_f(\partial E_j\setminus B_{R_j}) + {\mbox{\script H}\,\,}^{N-1}_f (E_j \cap \partial B_{R_j})\\ &<{\mbox{\script H}\,\,}^{N-1}_f(\partial E \cap B_{R'}) + {\mbox{\script H}\,\,}^{N-1}_f(\partial E_j\setminus B_{R_j}) - 3\eta \leq P_f (E_j)- 2\eta\,. \end{split}\end{equation} Finally, we define the competitor $\widetilde E_j = G_j \cup B_{\rho_j}(y)$, where $\rho_j<\bar\rho$ is the constant such that $|\widetilde E_j|_f = V$ --this is possible by~(\ref{volGj}), (\ref{propdelta0}), and~(\ref{propdelta}). Applying then again~(\ref{propdelta0}) and~(\ref{propdelta}), from~(\ref{perGj}) we deduce \[ P_f(\widetilde E_j) < P_f(E_j) - \eta \] for every $j$ big enough, and this gives the desired contradiction with the fact that the sequence $E_j$ was isoperimetric. This finally shows that $E$ is an isoperimetric set for the volume $|E|_f$.\par Let us now pass to the second part of the proof, namely, we assume that $f$ is converging to some $a>0$ (not necessarily from below), and we aim to prove~(\ref{thm1.2}). 
Notice that we can assume without loss of generality that $|E|_f < V$, since otherwise~(\ref{thm1.2}) is a direct consequence of the fact that $E$ is isoperimetric.\par Arguing as in the first part of the proof, for every $\varepsilon>0$ we can find a very big $R$ such that, calling $F=E\cap B_R$, we have \begin{align*} |F|_f \geq |E|_f - \varepsilon\,, && P_f(F) \leq P_f(E) +\varepsilon\,. \end{align*} Let then $B$ be a ball with volume $|B|_f = V - |F|_f$: if we take this ball far enough from the origin, then $B\cap F=\emptyset$, thus $|G|_f=V$ where $G=F\cup B$; moreover, again up to taking the ball far enough, we have $a-\varepsilon\leq f \leq a+\varepsilon$ on the whole $B$. As a consequence, calling $r$ the radius of $B$, we have \[ V-|E|_f + \varepsilon\geq V-|F|_f = |B|_f \geq (a-\varepsilon) \omega_N r^N\,, \] from which we get \[\begin{split} \mathfrak J(V) &\leq P_f(G) = P_f(F) + P_f(B) \leq P_f(E) + \varepsilon + (a+\varepsilon) N \omega_N r^{N-1}\\ &\leq P_f(E) + \varepsilon + \frac{a+\varepsilon}{(a-\varepsilon)^{\frac{N-1}N}}\,N\omega_N^{\frac 1N}\,\Big(V-|E|_f + \varepsilon\Big)^{\frac{N-1}N}\,, \end{split}\] which in turn implies the first inequality in~(\ref{thm1.2}) by letting $\varepsilon\to 0$.\par To show the other inequality, consider again the isoperimetric sequence $\{E_j\}$; for any given $\varepsilon>0$, exactly as in the first part we can find an arbitrarily big $R$ so that $a-\varepsilon\leq f\leq a+\varepsilon$ out of $B_R$ and \begin{align*} |E\cap B_R|_f \geq |E|_f - \varepsilon\,, && P_f (E\setminus B_R) \leq \varepsilon\,. \end{align*} For every $j\gg 1$, then, we can find some $R_j \in (R,R+1)$ so that \begin{align*} |E_j\cap B_{R_j}|_f \leq |E|_f +\varepsilon \,, && {\mbox{\script H}\,\,}^{N-1}_f (E_j \cap \partial B_{R_j}) \leq 2\varepsilon\,, && P_f(E) \leq P_f(E_j\cap B_{R_j}) +2\varepsilon\,. \end{align*} Since $a-\varepsilon\leq f\leq a+\varepsilon$ out of $B_R$, we deduce \[\begin{split} P_f (E_j\setminus B_{R_j}) &\geq (a-\varepsilon) P_{\rm eucl} (E_j\setminus B_{R_j}) \geq (a-\varepsilon) N\omega_N^{\frac 1N}|E_j\setminus B_{R_j}|_{\rm eucl}^{\frac{N-1}N}\\ &\geq \frac{a-\varepsilon}{(a+\varepsilon)^{\frac{N-1}N}}\,N\omega_N^{\frac 1N}|E_j\setminus B_{R_j}|_f^{\frac{N-1}N} \geq \frac{a-\varepsilon}{(a+\varepsilon)^{\frac{N-1}N}}\,N\omega_N^{\frac 1N}\Big(V-|E|_f -\varepsilon\Big)^{\frac{N-1}N}\,, \end{split}\] which in turn gives \[\begin{split} P_f(E_j) &= P_f(E_j\cap B_{R_j} ) + P_f(E_j\setminus B_{R_j}) - 2{\mbox{\script H}\,\,}^{N-1}_f (E_j\cap \partial B_{R_j})\\ &\geq P_f(E) - 6\varepsilon +\frac{a-\varepsilon}{(a+\varepsilon)^{\frac{N-1}N}}\,N\omega_N^{\frac 1N}\Big(V-|E|_f -\varepsilon\Big)^{\frac{N-1}N}\,. \end{split}\] Since $P_f(E_j)\to \mathfrak J(V)$ as $j\to\infty$, sending $\varepsilon\to 0$ in the last estimate yields the second inequality in~(\ref{thm1.2}), thus the proof is concluded. \end{proof} \begin{remark}{\rm Actually, the claim of Lemma~\ref{isopF} can be proved even with weaker assumptions; more precisely, one could apply the results of~\cite{CP12} to extend the validity to the more general case when $f$ is ``essentially bounded'' in the sense of~\cite{CP12}.} \end{remark} The second result that we present is a clever observation, which we owe to the courtesy of Frank Morgan, and which shows that whenever a density converges to a limit $a>0$ (not necessarily from below), then if an isoperimetric sequence is losing mass at infinity the remaining limiting set --which is isoperimetric thanks to Lemma~\ref{isopF}-- is bounded.
\begin{lemma}[Morgan]\label{lemma:morgan} Let the density $f$ converge to some $a>0$, and let the isoperimetric sequence $\{E_j\}$ of volume $V$ converge in $L^1_{\rm loc}$ to a set $E$ with $|E|_f<V$. Then, $E$ is bounded. \end{lemma} \begin{proof} Assume that $|E|_f<V$. Then, for every $t>0$ define \[ m(t) = |E\setminus B_t|_f = \int_t^\infty {\mbox{\script H}\,\,}_f^{N-1} (E\cap \partial B_\sigma)\,d\sigma\,. \] For every $t$, we can select a ball $B$ of volume $V-|E|_f+ m(t)$ far away from the origin, in order to have no intersection with $E\cap B_t$; thus, the set $(E\cap B_t) \cup B$ has precisely volume $V$, hence $\mathfrak J(V) \leq P_f (E\cap B_t) + P_f(B)$. Since the ball $B$ can be taken arbitrarily far from the origin, thus in a region where $f$ is arbitrarily close to $a$, exactly as in the second part of the proof of Lemma~\ref{isopF} we deduce \[ \mathfrak J(V) \leq P_f (E \cap B_t) + N (a\omega_N)^{\frac 1N} \big(V-|E|_f+ m(t)\big)^{\frac{N-1}N}\,. \] Recalling that $|E|_f<V$ and comparing the last inequality with~(\ref{thm1.2}), we obtain \[ P_f(E) \leq P_f (E\cap B_t) + C m(t) \] for some strictly positive constant $C$. Notice now that \[ P_f(E) = P_f(E\cap B_t) + P_f(E\setminus B_t) - 2 {\mbox{\script H}\,\,}^{N-1}_f(E\cap \partial B_t) = P_f(E\cap B_t) + P_f(E\setminus B_t) + 2 m'(t)\,, \] and in turn by the (Euclidean) isoperimetric inequality if $t\gg 1$ we have \[ P_f(E\setminus B_t) \geq (a-\varepsilon) P_{\rm eucl}(E\setminus B_t) \geq (a-\varepsilon) N\omega_N^{\frac 1N} |E\setminus B_t|^{\frac{N-1}N}_{\rm eucl} \geq \frac{a-\varepsilon}{(a+\varepsilon)^{\frac{N-1}N}}\, N\omega_N^{\frac 1N} m(t)^{\frac{N-1}N}\,. \] Putting everything together, we get \[ Cm(t) \geq 2 m'(t) + \frac 1{C_1} m(t)^{\frac{N-1}N} \] for some other constant $C_1>0$. And in turn, if $t\gg 1$ then $m(t)\ll 1$, thus the last estimate implies \[ m(t) \leq C_2 \big(- m'(t)\big)^{\frac N{N-1}}\,. \] Finally, it is well known that a positive decreasing function $m$ which satisfies the above differential inequality vanishes in a finite time. Hence, $m(t)=0$ for $t$ big enough, and this means precisely that $E$ is bounded. \end{proof} \section{Proof of the main result\label{proofmain}} This section is devoted to show the main result of the paper, namely, Theorem~\ref{main}. Our overall strategy is quite simple, and already essentially contained in~\cite{PM13}. The idea is to take an isoperimetric sequence of volume $V$, and to consider a limiting set $E$ (up to a subsequence, this is always possible); if $|E|_f=V$, then there is nothing to prove because, as we already saw several times, the set $E$ is already the desired isoperimetric set of volume $V$. Instead, if $|E|_f<V$, we know by Lemma~\ref{isopF} that $E$ is an isoperimetric set for volume $|E|_f$, and by Lemma~\ref{lemma:morgan} that $E$ is bounded. Moreover, formula~(\ref{thm1.2}) says that an isoperimetric set of volume $V$ can be found as the union of $E$ and a ``ball at infinity'' with volume $V-|E|_f$. By ``ball at infinity'' we mean an hypothetical ball where the density is constantly $a$: such a ball needs not really to exist, but a sequence of balls of correct volume which escape at infinity will have a perimeter which converges to that of this ``ball at infinity''. In other words, a sequence of sets done by the union of $E$ and a ball escaping at infinity is isoperimetric thanks to~(\ref{thm1.2}). 
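To make the role of this ``ball at infinity'' explicit, observe that if the density were constantly equal to $a$ on a ball of $f$-volume $W$, then its Euclidean radius would be $r=\big(W/(a\omega_N)\big)^{1/N}$, and hence its $f$-perimeter would be
\[
a\,N\omega_N r^{N-1} = N(\omega_N a)^{\frac 1N} W^{\frac{N-1}N}\,;
\]
for $W=V-|E|_f$ this is exactly the second summand in~(\ref{thm1.2}), and the perimeters of balls of this volume escaping at infinity converge to this value.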
Our strategy is then simple: we look for a set $B$, far away from the origin, which is \emph{better} than a ball at infinity, that is, which has the same volume and less perimeter than it. Since $E$ is bounded (this is a crucial point, whence the importance of Lemma~\ref{lemma:morgan}) the sets $E$ and $B$ have no intersection, thus the union of $E$ with $B$ is isoperimetric. As one can see, the only thing to do is to find a set of given volume, arbitrarily far from the origin, which is ``better'' than a ball at infinity.\par First of all, let us express in a useful way the fact of being better than a ball at infinity, by means of the following definition. \begin{definition} We say that the set $E\subseteq \mathbb R^N$ of finite volume has \emph{mean density} $\rho$ if \[ P_f(E) = N(\omega_N \rho)^\frac 1N |E|_f^\frac{N-1}N\,. \] \end{definition} The meaning of this definition is evident: $\rho$ is the unique number such that, if we endow $\mathbb R^N$ with the constant density $\rho$, then balls of volume $|E|_f$ have perimeter $P_f(E)$. The convenience of this notion is also clear: being ``better than a ball at infinity'' simply means having mean density less than $a$.\par We can then continue our description of the proof of Theorem~\ref{main}: we are left to find a set of volume $V-|E|_f$ arbitrarily far from the origin and having mean density at most $a$. Since we want to find isoperimetric sets for any volume $V$, and we cannot know a priori how much $|E|_f$ is, we need to find sets of mean density less than $a$ \emph{of any volume} and \emph{arbitrarily far from the origin}. Actually, by a trivial rescaling argument, we can assume that $a=1$ and reduce ourselves to searching for a set of volume $\omega_N$. Since $f$ is converging to $1$ and we must work very far from the origin, everything will be very close to the Euclidean case, hence a set of volume $\omega_N$ and mean density less than $1$ (or, equivalently, with perimeter less than $N \omega_N$) must be extremely close to a ball of radius $1$. The first big step in our proof will then be to find a ball of radius $1$ arbitrarily far from the origin, and with mean density less than $1$.\par Surprisingly enough, this will by no means conclude the proof, due to a seemingly minor problem: indeed, since $f$ converges to $1$ from below, the ball of radius $1$ that we have found does not have exactly volume $\omega_N$, but only a bit less. And the farther from the origin the ball is, the smaller this gap will be, but it will still be positive. Notice that at this point we cannot again rely on a rescaling argument: we have already rescaled in order to reduce ourselves to the case of volume $\omega_N$, but then any other volume will not solve the problem (in principle, it could be that there are sets of mean density less than $1$ only for all the rational volumes, and for no irrational one\dots). Hence, the second big step in our proof will be to slightly modify the ball found in the first big step, in such a way that the volume increases up to exactly $\omega_N$, while the mean density remains smaller than $1$. At that point, the proof will be concluded. It is to be mentioned that the proof of this second fact is more delicate than the one of the first!\par Let us now state precisely the claims of the two big steps, and then give the formal proof of Theorem~\ref{main} --which is more or less exactly what we have just described informally. Then, we will conclude the paper with two sections, which are devoted to presenting the proofs of the two big claims.
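Before stating them, let us make explicit the elementary way in which the notion of mean density will be used below: if $|E|_f=\omega_N$, the defining relation reads
\[
P_f(E) = N(\omega_N \rho)^{\frac 1N}\,\omega_N^{\frac{N-1}N} = N\omega_N\,\rho^{\frac 1N}\,,
\]
so that $E$ has mean density less than $1$ if and only if $P_f(E)<N\omega_N$; this is the criterion that will be verified in the constructions of the next sections.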
\begin{prop}\label{farball} Let $f$ be a density converging from below to $1$, and set $g=1-f$. Then, for every $\varepsilon>0$ there exists a ball $B$ with radius $1$ and arbitrarily far from the origin such that \[ P_g(B)\geq (N-\varepsilon)|B|_g\,. \] \end{prop} \begin{prop}\label{prop:setmdgen} Let $f$ be a density converging from below to $1$. Then, there exists a set $E$ with volume $\omega_N$ and mean density smaller than $1$ arbitrarily far from the origin. \end{prop} \proofof{Theorem~\ref{main}} Let $\{E_j\}$ be an isoperimetric sequence of volume $V$, and let $E$ be the $L^1_{\rm loc}$ limit of a suitable subsequence. If $|E|_f=V$ then the proof is already concluded. Otherwise, we know that $E$ is bounded by Lemma~\ref{lemma:morgan} and that~(\ref{thm1.2}) holds. Up to a rescaling, we can assume that $f$ converges from below to $1$, and that $V-|E|_f=\omega_N$. By Proposition~\ref{prop:setmdgen} we can find a set $F$ not intersecting $E$ with volume $\omega_N$ and mean density less than $1$, which means $P_f(F)\leq N\omega_N$. The set $E\cup F$ then has volume $V$, and by~(\ref{thm1.2}) we obtain $P_f(E\cup F)\leq \mathfrak J(V)$, which means that $E\cup F$ is an isoperimetric set. \end{proof} \subsection{Proof of Proposition~\ref{farball}} This section is devoted to the proof of Proposition~\ref{farball}. Before presenting it, it is convenient to show a couple of technical lemmas. \begin{lemma}\label{lm:sgnint} Let $g: (0,\infty)\to[0,\infty)$ and $\alpha:(-1,1)\to\mathbb R$ be $L^1$ functions such that \begin{align}\label{alphacond} \lim_{t\to \infty} g(t)=0\,, && \int_{-1}^1\alpha(t)\,dt = 0\,, && \int_{-1}^\sigma \alpha(t) \,dt >0 \quad \forall \,\sigma \in (-1,1)\,. \end{align} Then there exists an arbitrarily large $R$ such that \[ \int_{-1}^1\alpha(t)g(t+R)\,dt\geq 0\,, \] with strict inequality unless $g(t)=0$ for all $t$ big enough. \end{lemma} \begin{proof} If the claim were false, then for every choice of $R',\,R''$ with $R''\geq R'+2$ one would have \[\begin{split} 0&>\int_{R'}^{R''}\int_{-1}^1\alpha(t)g(t+R)\,dt\,dR =\int_{R'-1}^{R'+1}g(s)\int_{-1}^{s-R'}\alpha(t)\,dt\,ds +\int_{R''-1}^{R''+1}g(s)\int_{s-R''}^{1}\alpha(t)\,dt\,ds\\ &=A(R')+B(R'')\,, \end{split}\] where there is no integral over $(R'+1,R''-1)$ because it cancels thanks to~(\ref{alphacond}). The conditions on $\alpha$ and $g$ also ensure that $A(R')\geq 0\geq B(R'')$ for every $R',\, R''$. Suppose now that for some arbitrarily large $R'$ one has $A(R')>0$; we can then fix $R'$ and send $R''\to \infty$: since $g\to 0$, we get $B(R'')\to 0$, and then there is some $R''\gg 1$ such that $A(R')+B(R'')>0$, contradicting the above inequality. As a consequence, it must be $A(R')=0$ for every $R'$ big enough, and in turn this means that $g$ is eventually zero, hence any $R$ big enough satisfies the claim. \end{proof} \begin{lemma}\label{lm:sgnintbis} Let $g: (0,\infty)\to [0,\infty)$ and $\beta:(-1,1)\to\mathbb R$ be $L^1$ functions such that $g$ and $\alpha(t)=\int_{-1}^t\beta(\sigma)\,d\sigma$ satisfy condition~\eqref{alphacond}, and $\alpha(1)=0$. Then, there exists an arbitrarily large $R$ such that \begin{equation}\label{dimbeta} \int_{-1}^1\beta(t)g(t+R)\,dt\geq 0\,, \end{equation} with strict inequality unless $g(t)=0$ for all $t$ big enough. \end{lemma} \begin{proof} The proof is analogous to the one of Lemma~\ref{lm:sgnint} above.
Take $R'\gg 1$ and assume that the conclusion fails for every $R\geq R'$: then, for every $R''> R' +2$ we have \[ 0>\int_{R'}^{R''}\int_{-1}^{1} \beta(t)g(t+R)\,dt\,dR =\int_{R'-1}^{R'+1}g(s)\int_{-1}^{s-R'}\beta(t)\,dt\,ds+\int_{R''-1}^{R''+1}g(s)\int_{s-R''}^{1}\beta(t)\,dt\,ds\,. \] Exactly as before, since the last term on the right goes to $0$ when $R''\to\infty$, we find a contradiction as soon as the first term on the right is strictly positive. In other words, the proof is concluded as soon as we find some $R'$ such that \[ 0<\int_{R'-1}^{R'+1}g(s)\int_{-1}^{s-R'}\beta(t)\,dt\,ds =\int_{R'-1}^{R'+1} g(s) \alpha(s-R')\,ds =\int_{-1}^1 \alpha(t)g(t+R')\,dt\,. \] And in turn, the existence of such an $R'$ is ensured by Lemma~\ref{lm:sgnint} since $\alpha$ satisfies condition~(\ref{alphacond}), unless $g$ is eventually zero. And in this latter case, of course any $R$ big enough would satisfy the required condition. \end{proof} We are now in a position to prove Proposition~\ref{farball}. \proofof{Proposition~\ref{farball}} For simplicity, we split the proof into two steps: first we show that one can always reduce to the case of a radial density, and then we prove the claim for this case. \step{I}{Reduction to radial case.} Let us assume that the claim holds for any radial density, and let $f$ be not necessarily radial. Define then the density $\tilde f$ as the radial average of $f$, namely, \begin{equation}\label{deftildef} \tilde f(x) = \Xint{-}_{\partial B_{|x|}} f(y)\, d{\mbox{\script H}\,\,}^{N-1}(y)\,. \end{equation} Of course, then $\tilde g=1-\tilde f$ is also the radial average of $g$. Since the claim holds for the radial density $\tilde f$, for any $\varepsilon>0$ we can find a ball $B$ satisfying $P_{\tilde g}(B) \geq (N-\varepsilon) |B|_{\tilde g}$. Let us then call $B^\theta$, for $\theta\in\mathbb S^{N-1}$, the ball having the same distance from the origin as $B$, rotated so that its center lies in the direction $\theta$: all the different balls $B^\theta$ are equivalent for the density $\tilde f$, but not for the original density $f$. Observe now that by definition \begin{align*} P_{\tilde g} (B) = \Xint{-}_{\mathbb S^{N-1}} P_g (B^\theta) \, d{\mbox{\script H}\,\,}^{N-1}(\theta)\,, && |B|_{\tilde g}= \Xint{-}_{\mathbb S^{N-1}} |B^\theta|_g \, d{\mbox{\script H}\,\,}^{N-1}(\theta)\,, \end{align*} and then of course there exists some $\theta\in\mathbb S^{N-1}$ such that $P_g(B^\theta)\geq (N-\varepsilon) |B^\theta|_g$. \step{II}{Proof of the radial case.} Thanks to Step~I we can assume without loss of generality that $f$ is radial. For a ball $B_R$ having radius $1$ and center at a distance $R$ from the origin, we can then calculate perimeter and volume by integrating over the radial layers, that is, we have \begin{align}\label{exactform} P_g(B_R) = \int_{-1}^1 \varphi_R(t) g(t+R) \, dt\,, && |B_R|_g = \int_{-1}^1 \psi_R(t) g(t+R) \, dt\,, \end{align} where $\varphi_R(t)$ and $\psi_R(t)$ can be calculated by the Fubini Theorem and the co-area formula. Actually, it is not important to write down the exact formulas, but it is immediate to observe that (basically, since the layers become flat in the limit) the following uniform limits hold \begin{align}\label{limitstilde} \frac{\varphi_R(t)}{\widetilde\varphi(t)} \freccia{R\to\infty} 1\,, && \frac{\psi_R(t)}{\widetilde\psi(t)} \freccia{R\to\infty} 1\,, \end{align} the limit functions being simply \begin{align*} \widetilde\varphi(t) = (N-1)\omega_{N-1} (1-t^2)^{\frac{N-3}2}\,, && \widetilde\psi(t) = \omega_{N-1} (1-t^2)^{\frac{N-1}2}\,.
\end{align*} As a consequence, we can work with the approximated functions $\widetilde\varphi$ and $\widetilde\psi$ in place of $\varphi$ and $\psi$: more precisely, we call ``approximated'' perimeter and volume of $B_R$ the functions $\widetilde P_g(B_R)$ and $\widetilde V_g(B)$ obtained by substituting $\varphi$ and $\psi$ in~(\ref{exactform}) with $\widetilde\varphi$ and $\widetilde\psi$. The claim will be then automatically obtained, thanks to~(\ref{limitstilde}), if we can find an arbitrarily large $R$ such that \[ \widetilde P_g(B_R)\geq N \widetilde{V}_g(B_R)\,. \] We can now define $\beta:(-1,1)\to \mathbb R$ as $\beta(t)=\tilde\varphi(t)-N\tilde\psi(t)$, so that we are reduced to find an arbitrarily large $R$ such that~(\ref{dimbeta}) holds. It is elementary to check that the assumptions of Lemma~\ref{lm:sgnintbis} are satisfied: one can either do the simple calculations, or just observe that $\alpha(t)$ coincides with the perimeter minus $N$ times the volume of the portion of the unit ball centered at the origin whose first coordinate is between $-1$ and $t$, so that all the conditions to check become trivial. Therefore, the existence of the searched $R$ directly comes from Lemma~\ref{lm:sgnintbis}, and the proof is completed. \end{proof} \subsection{Proof of Proposition~\ref{prop:setmdgen}} This last section is entirely devoted to give the proof of Proposition~\ref{prop:setmdgen}, which is again divided in some steps. For convenience of the reader, in Steps~I and~II we start with two particular cases, namely, when $f$ is non-decreasing along the half-lines starting at the origin, and when $f$ is radial: even though these two particular cases are not really needed for the proof, the argument is similar to the general one but works more easily, so this helps to understand the general case. \proofof{Proposition~\ref{prop:setmdgen}} Let us fix $\varepsilon\ll 1$: thanks to Proposition~\ref{farball}, there is a ball $B=B_R^{\bar\theta}$ of radius $1$ and centered at the point $R\bar\theta$, with some arbitrarily large $R$ and some $\bar\theta\in \mathbb S^{N-1}$, which satisfies $P_g(B)\geq (N-\varepsilon) |B|_g$. Since $f\leq 1$ on $B$, we have $|B|_f\leq \omega_N$: if $|B|_f=\omega_N$ we are already done, because $P_f(B)\leq P_{\rm eucl}(B)=N\omega_N$, and this automatically implies that the mean density of $B$ is less than $1$. Let us then suppose that $|B|_f <\omega_N$, or equivalently that $|B|_g>0$, and let us try to enlarge $B$ so to reach volume $\omega_N$, but still having mean density less than $1$. We will do this in some steps. \step{I}{The case of non-decreasing densities.} Let us start with the case when $f$ is a ``non-decreasing density'': this means that, for every $\theta\in \mathbb S^{N-1}$, the function $t\mapsto f(t\theta)$ is non-decreasing, at least for large $t$.\par In this case, let us define a new set $E$ as follows. First of all, we decompose $B=B_l \cup B_r$, where $B_l$ and $B_r$ are the ``left'' and the ``right'' part of the ball $B_R^{\bar\theta}$: formally, a point $x\in B$ is said to belong to $B_l$ or $B_r$ if $x\cdot \bar\theta$ is smaller or bigger than $R$ respectively. Then, for any small $\delta$, we call $B_{l,\delta}$ the half ball centered at $(R-\delta)\bar\theta$ with radius $(R-\delta)/R$, and $C_\delta$ the cylinder of radius $1$ and height $\delta$ whose axis is the segment connecting $(R-\delta)\bar\theta$ and $R\bar\theta$; finally, we let $E_\delta=B_r \cup B_{l,\delta} \cup C_\delta$, see Figure~\ref{Fig:sets}, left. 
Since $f$ is converging to $1$, and $R$ can be taken arbitrarily big, we have \[ |E_\delta|_f - |B|_f \geq (1-\varepsilon ) \omega_{N-1} \delta \,; \] as a consequence, by continuity we can fix $\bar\delta$ such that $E=E_{\bar \delta}$ has exactly volume $\omega_N$, and we have \begin{equation}\label{estibardelta} \bar\delta \leq (1+2 \varepsilon) \,\frac{|B|_g}{\omega_{N-1}}\,. \end{equation} Thanks to the assumption that $f$ is non-decreasing, we know that \begin{equation}\label{diminbound} {\mbox{\script H}\,\,}^{N-1}_f (\partial^l B_{l,\delta}) \leq {\mbox{\script H}\,\,}^{N-1}_f (\partial^l B_l)\,, \end{equation} where we call $\partial^l B_{l,\delta}$ and $\partial^l B_l$ the ``left parts'' of the boundaries, that is, \begin{align*} \partial^l B_l = \Big\{y \in \partial B_l :\, y \cdot \bar\theta \leq R \Big\}\,, && \partial^l B_{l,\delta} = \Big\{y \in \partial B_{l,\delta} :\, y \cdot \bar\theta \leq R-\delta \Big\}\,. \end{align*} As a consequence, using again that $f\leq 1$ and that $R$ can be taken arbitrarily big, thanks to~(\ref{estibardelta}) and~(\ref{diminbound}) we can evaluate \[\begin{split} P_f(E) &\leq P_f(B) +(N-1+\varepsilon)\omega_{N-1}\bar\delta \leq N\omega_N - P_g(B) +(N-1+\varepsilon) (1+2\varepsilon) |B|_g\\ &\leq N\omega_N - (N-\varepsilon) |B|_g +(N-1+\varepsilon) (1+2\varepsilon) |B|_g < N\omega_N\,. \end{split}\] Summarizing, we have built a set $E$ arbitrarily far from the origin, with volume exactly $\omega_N$, and perimeter less than $N\omega_N$, thus mean density less than $1$. The proof is then concluded for this case. \begin{figure} \caption{The sets $E$ of Step~I (left) and of Step~II (right).} \label{Fig:sets} \end{figure} \step{II}{The case of radial densities.} Let us now assume that the density is radial. In this case, we cannot use the same argument as in the previous step, because there would be no way to extend the validity of~(\ref{diminbound}). Nevertheless, we can use a similar idea to enlarge the ball $B$, namely, instead of translating half of the ball $B$ we rotate it. More formally, let us take a hyperplane passing through the origin and the center of the ball $B_R^{\bar\theta}$, and let us call $B^\pm$ the two corresponding half-balls in which $B_R^{\bar\theta}$ is subdivided. Let us then consider the circle contained in $\mathbb S^{N-1}$ which contains the direction $\bar\theta$ and the direction orthogonal to the hyperplane, and for any small $\sigma>0$ call $\rho_\sigma$ the rotation by an angle $\sigma$ with respect to this circle. Then, let us call $B^+_\sigma=\rho_\sigma(B^+)$ and finally let $E_\delta$ be the union of $B^-$ with all the half-balls $B^+_\sigma$ for $0<\sigma<\delta$, as in Figure~\ref{Fig:sets}, right. As in the previous step, since $f$ is converging to $1$ we can evaluate the difference of the volumes as \[ |E_\delta|_f - |B|_f \geq \omega_{N-1}(R-1)(1-\varepsilon) \delta \,, \] then we can again select $\bar\delta$ such that $E=E_{\bar\delta}$ has volume exactly $\omega_N$ and we have \begin{equation}\label{giamer} \bar\delta \leq (1+2 \varepsilon)\,\frac{|B|_g}{\omega_{N-1}(R-1)}\,. \end{equation} This time, the radial assumption on $f$ gives \[ {\mbox{\script H}\,\,}^{N-1}_f(\partial^+ B^+_\delta)={\mbox{\script H}\,\,}^{N-1}_f (\partial^+ B^+)\,, \] where we call $\partial^+ B^+_\delta$ and $\partial^+ B^+$ the ``upper'' parts of the boundaries in the obvious sense.
And finally, almost exactly as in last step we can evaluate the perimeter of $E$ as \[\begin{split} P_f(E) &\leq P_f(B) + (N-1)\omega_{N-1} (R+1) \bar\delta \leq N\omega_N - P_g(B) + (N-1)(1+2\varepsilon)\,\frac{R+1}{R-1}\,|B|_g\\ &\leq N\omega_N - (N-\varepsilon)|B|_g + (N-1)(1+2\varepsilon)\,\frac{R+1}{R-1}\,|B|_g< N\omega_N\,, \end{split}\] where the last inequality again is true if we have chosen $\varepsilon\ll 1$ and then $R\gg 1$. Thus, the set $E$ has volume $\omega_N$ and mean density less than $1$, and the proof is obtained also in this case. \step{III}{The general case in dimension $2$.} Let us now treat the case of a general density $f$. For simplicity of notations we assume now to be in the two-dimensional situation $N=2$, and in the next step we will generalize our argument to any dimension.\par As in the proof of Proposition~\ref{farball}, let us call $\tilde f$ the radial average of $f$ according to~(\ref{deftildef}), and $\tilde g=1-\tilde f$ the radial average of $g$. Proposition~\ref{farball} provides then us with a ball $B_R$, of radius $1$ and distance $R\gg 1$ from the origin, such that \begin{equation}\label{choiceR} P_{\tilde g}(B_R)\geq (N-\varepsilon) |B_R|_{\tilde g}\,. \end{equation} For any $\theta \in \mathbb S^1$, as usual, we call then $B_R^\theta$ the ball of radius $1$ centered at $R\theta$. Let us now argue as in Step~II: we call $B_R^{\theta,\pm}$ (resp., $\partial^\pm B_R^\theta$) the two half-balls (resp., half-circles) made by the points of $B_R^\theta$ (resp., $\partial B^\theta_R$) having direction bigger or smaller than $\theta$; thus, for any small $\delta>0$, we define $E_\delta^\theta$ the union of $B^{\theta,-}_R$ with all the half-balls $B^{\theta+\sigma,+}_R$ for $0<\sigma<\delta$. Since the sets $E^\theta_\delta$ are increasing for $\delta$ increasing, if $R\gg 1$ there is a unique $\bar\delta=\bar\delta(\theta)$ such that $|E^\theta_{\bar\delta}|_f=\omega_N$, and exactly as in Step~II we have the estimate~(\ref{giamer}) for $\bar\delta$, which for $R$ big enough (since $f\to 1$ and then $g\to 0$) implies \begin{equation}\label{perdopo} \bar\delta(\theta) \leq \frac{(1+3\varepsilon)|B_R^\theta|_g}{\omega_{N-1}(R-1)}\,. \end{equation} Let us then define the function $\tau:\mathbb S^1\to\mathbb S^1$ as $\tau(\theta)=\theta+\bar\delta(\theta)$, and notice that by construction this is a strictly increasing bijection of $\mathbb S^1$ onto itself, with $\tau(\theta)>\theta$ (if $\tau(\theta)=\theta$ then the ball $B^\theta_R$ has already volume $\omega_N$, and in this case there is nothing to prove, as already observed). Let us now fix a generic $\theta\in\mathbb S^1$, and let $\eta\ll \tau(\theta)-\theta$: if we call \begin{align*} A=\Big(\bigcup\nolimits_{0<\sigma<\eta} B_R^{\theta+\sigma} \Big)\setminus B_R^{\theta+\eta}\,, && B=\Big(\bigcup\nolimits_{0<\sigma<\eta} B_R^{\tau(\theta+\sigma)} \Big)\setminus B_R^{\tau(\theta)}\,, \end{align*} then, since \begin{align*} \big|E^\theta_{\bar\delta(\theta)}\big|_f = \omega_N = \big|E^{\theta+\eta}_{\bar\delta(\theta+\eta)}\big|_f\,, && E^{\theta+\eta}_{\bar\delta(\theta+\eta)}=\big(E^\theta_{\bar\delta(\theta)}\cup B\big) \setminus A\,, \end{align*} one has $|A|_g =|B|_g$. 
On the other hand, one clearly has \[ \frac{|B|_{\rm eucl}}{|A|_{\rm eucl}}= \frac{\tau(\theta+\eta)-\tau(\theta)}\eta\,. \] Up to taking $R$ big enough, we can assume without loss of generality that $1-\varepsilon\leq f \leq 1$ for points having distance at least $R-1$ from the origin, and this yields \[ 1-\varepsilon \leq \frac{\tau(\theta+\eta) - \tau(\theta)}\eta \leq \frac 1{1-\varepsilon}\,. \] As an immediate consequence, we get that the function $\tau$ is bi-Lipschitz and $1-\varepsilon\leq\tau'\leq (1-\varepsilon)^{-1}$. Let us now observe that, by construction, all the sets $E^\theta=E^\theta_{\tau(\theta)-\theta}$ have exactly volume $\omega_N$: we then want to find some $\bar\theta\in \mathbb S^1$ such that $P_f(E^{\bar\theta})\leq N\omega_N$, so $E^{\bar\theta}$ has mean density less than $1$ and we are done. Now, since a simple change of variables gives \[ \Xint{-}_{\mathbb S^1} {\mbox{\script H}\,\,}^{N-1}_g \big(\partial^+ B^\theta_R\big)\,d\theta = \Xint{-}_{\mathbb S^1} {\mbox{\script H}\,\,}^{N-1}_g \big(\partial^+ B^{\tau(\nu)}_R\big) \tau'(\nu) \,d\nu \leq \frac 1{1-\varepsilon}\ \Xint{-}_{\mathbb S^1} {\mbox{\script H}\,\,}^{N-1}_g \big(\partial^+ B^{\tau(\theta)}_R\big) \,d\theta\,, \] we can readily evaluate by~(\ref{choiceR}) \[\begin{split} 0&\leq P_{\tilde g}(B_R) -(N-\varepsilon) |B_R|_{\tilde g} = \Xint{-}_{\mathbb S^1} P_g(B_R^\theta) - (N-\varepsilon)|B_R^\theta|_g\,d\theta \\ &= \Xint{-}_{\mathbb S^1} {\mbox{\script H}\,\,}^{N-1}_g \big(\partial^+ B^\theta_R\big)\,d\theta+\Xint{-}_{\mathbb S^1} {\mbox{\script H}\,\,}^{N-1}_g \big(\partial^- B^\theta_R\big)\,d\theta-(N-\varepsilon)\Xint{-}_{\mathbb S^1} |B_R^\theta|_g\,d\theta \\ &\leq \Xint{-}_{\mathbb S^1} \frac 1{1-\varepsilon}{\mbox{\script H}\,\,}^{N-1}_g\big(\partial^+ B^{\tau(\theta)}_R\cup\partial^-B^\theta_R\big)-(N-\varepsilon)|B_R^\theta|_g\,d\theta\,, \end{split}\] and hence get the existence of some $\bar\theta\in \mathbb S^1$ such that \[ {\mbox{\script H}\,\,}^{N-1}_g\big(\partial^+ B^{\tau(\bar\theta)}_R\cup\partial^-B^{\bar\theta}_R\big)\geq (1-\varepsilon) (N-\varepsilon) |B_R^{\bar\theta}|_g\,. \] Thanks to~(\ref{perdopo}), we have then \[\begin{split} P_f\big(E^{\bar\theta}\big)&= {\mbox{\script H}\,\,}^{N-1}_f\big(\partial^+ B^{\tau(\bar\theta)}_R\cup\partial^-B^{\bar\theta}_R\big) +{\mbox{\script H}\,\,}^{N-1}_f\Big(\partial E^{\bar\theta} \setminus \big(\partial^+ B^{\tau(\bar\theta)}_R\cup \partial^- B^{\bar\theta}_R\big)\Big)\\ &\leq N\omega_N - {\mbox{\script H}\,\,}^{N-1}_g\big(\partial^+ B^{\tau(\bar\theta)}_R\cup\partial^-B^{\bar\theta}_R\big) + (N-1)\omega_{N-1} \bar\delta(\bar\theta) (R+1)\\ &\leq N\omega_N - (1-\varepsilon)(N-\varepsilon) |B_R^{\bar\theta}|_g+ (N-1)(1+3\varepsilon)|B_R^{\bar\theta}|_g < N\omega_N\,, \end{split}\] where the last inequality holds as soon as $\varepsilon$ was chosen small enough at the beginning. The set $E^{\bar\theta}$ is then as desired, and this step is done. \step{IV}{The general case.} We are now ready to conclude the proof in the general case. We start by noticing that in the argument of Step~III the assumption $N=2$ was used only to work with $\mathbb S^1$, hence to get the validity of~(\ref{choiceR}).
More precisely, let us assume that there exists some arbitrarily large $R$ and some circle $\mathcal C\approx \mathbb S^1$ in $\mathbb S^{N-1}$ such that the estimate \begin{equation}\label{general} \Xint{-}_\mathcal C P_g (B_R^\theta) \, d{\mbox{\script H}\,\,}^1(\theta) \geq (N-\varepsilon) \Xint{-}_\mathcal C |B_R^\theta|_g\,d{\mbox{\script H}\,\,}^1(\theta) \end{equation} holds true. Then, we can repeat \emph{verbatim} the proof of Step~III, we get the existence of some $\bar\theta\in \mathcal C$ such that the set $E_R^{\bar\theta}$ has volume $\omega_N$ and mean density less than $1$, and the proof is concluded. Hence, we are left to find some $R$ and some circle $\mathcal C$ so that~(\ref{general}) holds; notice that, if $N=2$, then it must be $\mathcal C=\mathbb S^1$ and~(\ref{general}) reduces to~(\ref{choiceR}), which in turn holds for some arbitrarily large $R$ thanks to Proposition~\ref{farball}.\par Let us then consider the case of dimension $N=3$. By Proposition~\ref{farball} we can take $R\gg 1$ such that~(\ref{choiceR}) holds true; for any $\theta\in \mathbb S^2$, then, we can call $\mathcal C_\theta$ the circle in $\mathbb S^2$ which is orthogonal to $\theta$, and observe that by homogeneity \begin{align*} P_{\tilde g}(B_R) = \Xint{-}_{\mathbb S^2} \Xint{-}_{\mathcal C_\theta} P_g (B_R^\sigma)\, d{\mbox{\script H}\,\,}^1(\sigma) \, d{\mbox{\script H}\,\,}^2(\theta)\,, && |B_R|_{\tilde g} = \Xint{-}_{\mathbb S^2} \Xint{-}_{\mathcal C_\theta} |B_R^\sigma|_g\, d{\mbox{\script H}\,\,}^1(\sigma) \, d{\mbox{\script H}\,\,}^2(\theta)\,, \end{align*} so thanks to~(\ref{choiceR}) we get the existence of a circle $\mathcal C=\mathcal C_{\bar\theta}$ for which~(\ref{general}) holds true: the proof is then concluded also in dimension $N=3$.\par Notice that the argument above can be rephrased as follows: if there exists some sphere $\mathcal S\approx \mathbb S^2 \subseteq \mathbb S^{N-1}$ such that the average estimate~(\ref{general}) holds with $\mathcal S$ in place of $\mathcal C$ (and in turn in dimension $N=3$ this reduces to~(\ref{choiceR}) and hence holds), then the proof is concluded. As a consequence, the claim follows also in dimension $N=4$, arguing exactly as above with the spheres $\mathcal S_\theta\approx \mathbb S^2$ orthogonal to any $\theta\in \mathbb S^3$, and the obvious induction argument gives then the thesis for any dimension. \end{proof} \begin{remark}{\rm Notice that, in the proof of Proposition~\ref{prop:setmdgen}, we have actually found a set which has mean density \emph{strictly less} than $1$, unless $g\equiv 0$ on some ball of radius $1$. On the other hand, as clearly appears from the proof of Theorem~\ref{main}, it is impossible to find such a set if some isoperimetric sequence is losing mass at infinity: indeed, otherwise the argument of Theorem~\ref{main} would give a set with perimeter strictly less than the infimum. There are then only two possibilities: either there are balls where $f\equiv 1$ arbitrarily far from the origin, or no isoperimetric sequence can lose mass at infinity.\par In particular, our proof shows that no isoperimetric sequence can lose mass at infinity if $f<1$ out of some big ball.} \end{remark} \section*{Acknowledgment} The work of the three authors was supported through the ERC St.G. 258685. We wish also to thank Michele Marini and Frank Morgan for useful discussions and comments. \end{document}
\begin{document} \title{Thin sums matroids and duality} \begin{abstract} Thin sums matroids were introduced to extend the notion of representability to non-finitary matroids. We give a new criterion for testing when the thin sums construction gives a matroid. We show that thin sums matroids over thin families are precisely the duals of representable matroids (those arising from vector spaces). We also show that the class of tame thin sums matroids is closed under duality and under taking minors, by giving a new characterisation of the matroids in this class. Finally, we show that all the matroids naturally associated to an infinite graph are tame thin sums matroids. \end{abstract} \section{Introduction} If we have a family of vectors in a vector space over some field $k$, we get a matroid structure on that family whose independent sets are given by the linearly independent subsets of the family. Matroids arising in this way are called {\em representable} matroids. Although many interesting finite matroids (eg. all graphic matroids) are representable, it is clear that any representable matroid is finitary and so many interesting examples of infinite matroids are not of this type. However, since the construction of many of these examples, including the algebraic cycle matroids of infinite graphs, is suggestively similar to that of representable matroids, the notion of {\em thin sums matroids} was introduced in \cite{RD:HB:graphmatroids}: it is a generalisation of representability which captures these infinite examples. The basic idea is to take the vector space to be of the form $k^A$ for some set $A$, and to allow the linear combinations involved in the definition of dependence to have nonzero coefficients at infinitely many vectors, provided that they are well defined pointwise, in the sense that for each $a \in A$ there are only finitely many nonzero coefficients at vectors with nonzero component at $a$. Further details are given in Section \ref{pre}. Thin sums matroids need not be finitary. There are some obvious questions about how well-behaved the objects given by this definition are. The first, and most obvious, question is whether the systems of independent sets defined like this are all really infinite matroids (in the sense of \cite{matroid_axioms}). Sadly, it is known that there are examples of set systems definable this way which are not matroids. Accordingly, we refer to such systems in general as {\em thin sums systems}, and only call them thin sums matroids if they really are matroids. \begin{question}\label{whenmat} Which thin sums systems are matroids? \end{question} A sufficient condition is given in \cite{RD:HB:graphmatroids}: a thin sums system over a family of vectors in $k^A$ is always a matroid when this family is {\em thin} - that is, when for each $a \in A$ there are only finitely many vectors in the family whose component at $a$ is nonzero. We show that, just as every representable matroid is finitary, so also every thin sums matroid over a thin family is cofinitary. Thus to get examples which are neither finitary nor cofinitary new ideas are needed. In fact we prove something stronger. \begin{thm} A matroid arises as a thin sums matroid over a thin family for the field $k$ iff it is the dual of a $k$-representable matroid. \end{thm} We will also provide a complete, if somewhat cumbersome, characterisation answering Question \ref{whenmat}. 
Although this characterisation allows us to simplify the proof of an old result of Higgs \cite{Higgs:axioms} characterising when the algebraic cycle system of a graph is a matroid, it is not completely satisfactory. The most nontrivial condition in the definition of matroids has not been removed. We will explore an analogy between thin sums systems and IE-operators which suggests that there is unlikely to be a simpler characterisation than the one we give. The class of matroids representable over a field $k$ is very well behaved: it is closed under taking minors and (for finite matroids) under duality. This leads us naturally to ask the same questions about thin sums matroids. \begin{question} Is the class of thin sums matroids over $k$ closed under duality? Is it closed under taking minors? \end{question} The first part of this question is answered negatively in \cite{WILD}, where it is shown that there is a thin sums matroid whose dual is not a thin sums matroid. However, this counterexample is a very unusual matroid, in that it has a circuit and a cocircuit whose intersection is infinite. Such matroids are called {\em wild}, and matroids in which all circuit-cocircuit intersections are finite are called {\em tame}. Almost all standard examples of infinite matroids are tame - in fact, \cite{WILD} is the first paper to show that any matroid at all is wild. We are able to establish the following result: \begin{thm} The class of tame thin sums matroids over $k$ is closed under duality and under taking minors. \end{thm} We do this by giving an alternative characterisation of tame thin sums matroids for which this good behaviour is far more transparent. Any finite graphic matroid is representable over every field. The situation for infinite graphs is a little more complex, in that there is more than one natural way to build a matroid from an infinite graph. In \cite{RD:HB:graphmatroids}, 6 matroids associated to a graph are defined in 3 dual pairs. We show that all 6 of these matroids are thin sums matroids over any field (this was already known for 1 of the 6, and one of the others was already known to be representable). In Section \ref{pre}, we will introduce some of the basic concepts, such as matroids, representability, and thin sums. We will also introduce the 6 graphic matroids mentioned above. Section \ref{corep} will be devoted to thin sums matroids on thin families, and their duality with representable matroids. In Section \ref{cond} we will develop our criterion for when a thin sums system is a matroid, and in Section \ref{galois} we will explain why we think there is unlikely to be a simpler characterisation. In Section \ref{duality} we will prove that the class of tame thin sums matroids is closed under duality and taking minors. Our account of why the various matroids associated to an infinite graph are thin sums matroids will be dispersed over all these sections: we give a summary of this aspect of the theory in Section \ref{summary}. \section{Preliminaries}\label{pre} In this section, we introduce some terminology and concepts that we will use later. We also prove a few simple results about representability. For any set $E$ let $\Pcal(E)$ be the power set of $E$. Recall that a matroid $M$ consists of a set $E$ (the ground set) and a set $\Ical \subseteq \Pcal(E)$ (the set of its independent sets), where $\Ical$ satisfies the following conditions: (I1) $\emptyset\in \Ical$. (I2) $\Ical$ is closed under taking subsets.
(I3) For all $I\in \Ical\setminus \Ical^{max}$ and $I'\in \Ical^{max}$, there is an $x\in I' \setminus I$ such that $I+x\in \Ical$. (IM) Whenever $I\subseteq X \subseteq E$ and $I\in \Ical$, the set $\{I' \in \Ical | I\subseteq I'\subseteq X\}$ has a maximal element. Subsets of the ground set which are not independent are called {\em dependent}, and minimal dependent sets are called {\em circuits} of the matroid. Maximal independent sets are called {\em bases}. All the information about the matroid is contained in the set of its circuits, or of its bases. Further details can be found in \cite{matroid_axioms}, from which we take much of our notation. If all the circuits of a matroid $M$ are finite then $M$ is called finitary. We use $M^*$ to denote the dual of $M$. For any base $B$ and any $e \in E \setminus B$, there is a unique circuit $o_e$ with $e \in o_e \subseteq B + e$, called the {\em fundamental circuit} of $e$ with respect to $B$. Dually, since $E \setminus B$ is a base of $M^*$, for any $f \in B$ there is a unique cocircuit $b_f$ with $f \in b_f \subseteq E \setminus B + f$, called the {\em fundamental cocircuit} of $f$. \begin{lem}\label{notone} There is no matroid $M$ with a circuit $o$ and a cocircuit $b$ such that $|o \cap b| = 1$. \end{lem} \begin{proof} Suppose for a contradiction that there were such an $M$, $o$ and $b$, with $o \cap b = \{e\}$. Let $B$ be a base of $M$ whose complement includes the coindependent set $b - e$. Let $I$ be a maximal independent set with $o - e \subseteq I \subseteq E \setminus b$ - this can't be a base of $M$ since its complement includes $b$, so by (I3), there is some $f \in B \setminus I$ such that $I + f$ is independent. Then by maximality of $I$, $f \in b$, and so $f = e$, so $o$ is independent. This is the desired contradiction. \end{proof} \begin{lem}\label{fdt} Let $M$ be a matroid and $B$ be a base. Let $o_e$ and $b_f$ be a fundamental circuit and a fundamental cocircuit with respect to $B$. Then \begin{enumerate} \item $o_e\cap b_f$ is empty or $o_e\cap b_f=\{e,f\}$ and \item $f\in o_e$ iff $e\in b_f$. \end{enumerate} \end{lem} \begin{proof} (1) is immediate from Lemma \ref{notone} and the fact that $o_e \cap b_f \subseteq \{e, f\}$. (2) is a straightforward consequence of (1). \end{proof} \begin{lem}\label{o_cap_b} For any circuit $o$, and any two distinct elements $e, f$ of $o$, there is a cocircuit $b$ such that $o\cap b=\{e,f\}$. \end{lem} \begin{proof} Let $B$ be a base extending the independent set $o \setminus e$, so that $o$ is the fundamental circuit of $e$ with respect to $B$. Then the fundamental cocircuit of $f$ has the desired property. \end{proof} \begin{lem} \label{rest_cir} Let $M$ be a matroid with ground set $E = C \dot \cup X \dot \cup D$ and let $o'$ be a circuit of $M' = M / C \backslash D$. Then there is an $M$-circuit $o$ with $o' \subseteq o \subseteq o' \cup C$. \end{lem} \begin{proof} Let $B$ be any base of $M \restric_C$. Then $B \cup o'$ is $M$-dependent since $o'$ is $M'$-dependent. On the other hand, $B \cup o'-e$ is $M$-independent whenever $e\in o'$ since $o'-e$ is $M'$-independent. Putting this together yields that $B \cup o'$ contains an $M$-circuit $o$, and this circuit cannot avoid any $e\in o'$, since it would otherwise be contained in the independent set $B \cup o'-e$; this gives the result. \end{proof} \begin{coroll} \label{dualops} Let $M$ be a matroid with ground set $E = C \dot \cup \{x\} \dot \cup D$. Then either there is a circuit $o$ of $M$ with $x \in o \subseteq C + x$ or there is a cocircuit $b$ of $M$ with $x \in b \subseteq D + x$, but not both.
\end{coroll} \begin{proof} Note that $(M/D\backslash C)^* = M ^*/ C \backslash D$, and apply Lemmas \ref{notone} and \ref{rest_cir}. \end{proof} In \cite{source_of_IE}, a {\em space} is defined to consist of a set $E$ together with an operator $\Pcal E \xrightarrow{S} \Pcal E$ such that $S$ preserves the order $\subseteq$ and satisfies $X \subseteq SX$ for any $X \subseteq E$. For example, for any matroid $M$ with ground set $E$ the associated {\em closure operator} $\Sp_M$, which sends $X$ to the set $$X \cup \{x \in E | (\exists o \in \Ccal(M)) x \in o \subseteq X + x\}$$ gives a space on the set $E$. If $(E, S)$ is a space, the {\em dual space} is given by $(E, S^*)$, where $S^*$ is the {\em dual operator} to $S$, sending $X$ to $X \cup \{x \in E | x \not \in S(E \setminus (X + x))\}$. Thus for sets $X$ and $Y$ with $X \dot \cup Y \dot \cup \{x\} = E$, we have \begin{equation}\label{duop} x \in SX \iff x \not \in S^*Y\, ,\tag{$\dagger$} \end{equation} and this completely determines $S^*$ in terms of $S$. Thus $S^{**} = S$. Also, by Corollary \ref{dualops}, for any matroid $M$ we have $\Sp_{M^*} = \Sp_M^*$. For a space $(E, S)$, we say $S$ is {\em idempotent} if $S^2 = S$, and {\em exchange} if $S^*$ is idempotent. If $S$ is both idempotent and exchange, we call it an {\em idempotent-exchange} operator, or an {\em IE-operator} on $E$. Note that if $S$ is an IE-operator then so is $S^*$. For any matroid $M$ the operator $\Sp_M$ is an IE-operator. On the other hand, there are lots of IE-operators that don't come from matroids in this way. Some strong condition akin to (IM) is needed to pick out which IE-operators correspond to matroids. We always use $k$ to denote an arbitrary field. The capital letter $V$ always stands for a vector space over $k$. For any set $A$, we write $k^A$ to denote the set of all functions from $A$ to $k$. For any function $E \xrightarrow{d} k$ the {\em support} $\supp(d)$ of $d$ is the set of all elements $e\in E$ such that $d(e)\neq 0$. {\em A linear dependence} of $E \xrightarrow{\phi} V$ is a map $E \xrightarrow{c} k$ such that $$\sum _{e\in E}c(e)\phi(e)=0$$ (here, as in the rest of this paper, we take this statement as including the claim that the sum is well-defined, i.e. that only finitely many summands are nonzero). For a subset $E'$ of $E$, we say such a $c$ is a {\em linear dependence} of $E'$ iff it is zero outside $E'$. Recall that a representable matroid is traditionally defined as follows. \begin{dfn} Let $V$ be a vector space. Then for any function $E \xrightarrow{\phi} V$ we get a matroid $M(\phi)$ on the ground set $E$, where we take a subset $E'$ of $E$ to be independent iff there is no nonzero linear dependence of $E'$. Such a matroid is called a representable or vector matroid. \end{dfn} Note that this is exactly the same as taking a family of vectors as the ground set and saying that a subfamily of this family is independent iff it is linearly independent. In \cite{RD:HB:graphmatroids}, there is an extension of these ideas to a slightly different context. Suppose now that we have a function $E \xrightarrow{f} k^A$. A {\em thin dependence} of $f$ is a map $E \xrightarrow{c} k$ such that, for each $a \in A$, $$\sum _{e\in E}c(e)f(e)(a)=0.$$ This is not quite the same as a linear dependence (in $k^A$ considered as a vector space over $k$), since it is possible that the sum above might be well defined for each particular $a$ in $A$, but the sum $$\sum_{e \in E} c(e)f(e)$$ might still not be well defined.
To put it another way, there might be infinitely many $e \in E$ such that there is {\em some} $a \in A$ with $c(e)f(e)(a) \neq 0$, even if there are only finitely many such $e$ for each {\em particular} $a \in A$. We may also say $c$ is a thin dependence of a subset $E'$ of $E$ if it is zero outside of $E'$. The word {\em thin} above originated in the notion of a {\em thin family} - this is an $f$ as above such that sums of the type given above are always defined; that is, for each $a$ in $A$, there are only finitely many $e \in E$ so that $f(e)(a) \neq 0$. Notice that, for any $E \xrightarrow{f} k^A$, and any thin dependence $c$ of $f$, the restriction of $f$ to the support of $c$ is thin. Now we may define thin sums systems. \begin{dfn} Consider a family $E \xrightarrow{f} k^A$ of functions and declare a subset of $E$ as independent iff there is no nonzero thin dependence of that subset. Let $M_{ts}(f)$ be the set system with ground set $E$ and the set of all independent sets given in this way. We call $M_{ts}(f)$ the {\em thin sums system} corresponding to $f$. Whenever $M_{ts}(f)$ is a matroid it is called a thin sums matroid. \end{dfn} Note that every dependent set in a representable matroid or thin sums system induces a linear or thin dependence and vice versa; therefore, we normally talk about such dependences instead of dependent sets. Not every thin sums system is a matroid \footnote{See Section \ref{cond} for a couple of examples.} but it is known that if $f$ is thin then $M_{ts}(f)$ always is a matroid. The existing proof for this is technical and we shall not review it here. However, this fact will follow from the results in Section \ref{corep}. Next we explore the connection between representable and thin sums matroids. Recall that for any infinite matroid $M$ all its finite circuits are circuits of a matroid \footnote {This is easy to prove. See \cite{union2}.}, called the {\em finitarisation} of $M$. \begin{prop}\label{finthinrep} For any thin sums matroid $M_{ts}(f)$, the finitarisation of $M_{ts}(f)$ is a representable matroid. \end{prop} \begin{proof} For any family $E \xrightarrow{f} k^A$ of functions, a thin dependence of $f$ with finite support is also a linear dependence of $f$ as a family of vectors, and conversely any linear dependence of $f$ as a family of vectors is a thin dependence of $f$. \end{proof} Now let's try to answer to the question: Which matroids arising from graphs are representable or thin sums matroids? It is easy to see that any algebraic cycle matroid is a thin sums matroid (in fact, this was one motivation for the definition of thin sums matroids). Recall that for any graph $G$ which does not contain a subdivision of the Bean graph, $$\xymatrix{&&&& \bullet \ar@{-}[d] \ar@{-}[dr] \ar@{-}[drr] \ar@{-}[drrr] \ar@{}[drrrr]|{\cdots} &&&&\\ \cdots & \ar[l] \bullet \ar@{-}[r] & \bullet \ar@{-}[r] & \bullet \ar@{-}[r] & \bullet \ar@{-}[r] & \bullet \ar@{-}[r] & \bullet \ar@{-}[r] & \bullet \ar[r]& \cdots \\}$$ the edge sets of cycles and double rays of $G$ are circuits of a matroid \footnote{This has been proved in \cite{Higgs:axioms}. However, later we will be able to give a simpler proof of this.} $M_A(G)$ on the edge set of $G$ which is called the {\em algebraic cycle matroid} of $G$. In fact, even when $G$ does contain a subdivision of the Bean graph we shall still denote this system of sets by $M_A(G)$, and call it the {\em algebraic cycle system} of $G$. 
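The double ray already shows why thin dependences, rather than linear dependences, are needed in this context; the following toy computation (included only for orientation, and not needed in what follows) makes this explicit. Take $E=A=\mathbb{Z}$ and let $f(e)\in k^{\mathbb{Z}}$ be given by $f(e)(e)=1$, $f(e)(e+1)=-1$ and $f(e)(a)=0$ otherwise; this is the incidence representation, used in the proof of Proposition~\ref{algthin} below, of a double ray all of whose edges are oriented the same way. The constant function $c\equiv 1$ satisfies $$\sum_{e\in E}c(e)f(e)(a)=f(a)(a)+f(a-1)(a)=1-1=0$$ for every $a\in\mathbb{Z}$, so $c$ is a thin dependence of $f$ with infinite support, even though $\sum_{e\in E}c(e)f(e)$ is not a well-defined linear combination; in particular the edge set of the double ray is dependent in $M_{ts}(f)$ but independent in $M(f)$. Note also that this family $f$ is thin.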
\begin{prop}\label{algthin} For any graph $G$ the algebraic cycle system of $G$ is a thin sums system over every field. \end{prop} \begin{proof} First we give an arbitrary orientation to every edge of $G$, making $G$ a digraph. For any edge $e$ of $G$ define a function $V(G) \xrightarrow{f(e)} k$ where for any $v\in V(G)$ $f(e)(v)$ is $1$ if $e$ originates from $v$, $-1$ if it terminates in $v$, and $0$ if they don't meet each other. We show that an edge set $D$ is dependent in $M_A(G)$ iff it is dependent in $M_{ts}(f)$. If $D$ is dependent in $M_A(G)$, then it contains a cycle or a double ray. Let $D'\subseteq D$ be the edge set of this cycle or double ray. Give a direction to $D'$. For any edge $e\in E(G)$, define $c(e)$ to be $1$ if $e$ is an edge of $D'$ and they have the same directions, $-1$ if $e$ is in $D'$ and they have different directions, and $0$ if $e$ is not in $D'$. Now clearly we have $\sum_{e \in E(G)} c(e)f(e)(v)=0$ for any vertex $v$ of $G$, so $c$ is a thin dependence of $D$. Conversely, if $D$ is dependent in $M_{ts}(f)$, let $c$ be a nonzero thin dependence which is $0$ outside $D$. Then whenever a vertex $v$ is an end of an edge in $\supp(c)$, it has to be the end of at least two edges in $\supp(c)$. Now it is not difficult to see that $\supp(c)$, and hence $D$, has to contain a cycle or a double ray. \end{proof} Recall that the edge sets of finite cycles give the circuits of a matroid $M_{FC}(G)$, the {\em finite cycle matroid} of $G$. An argument almost identical to the one above shows that this matroid is always representable. Dually, the edge sets of finite bonds give the circuits of a matroid $M_{FB}(G)$, the {\em finite bond matroid} of $G$ \footnote{See \cite{RD:HB:graphmatroids} for a description of the various cycle and bond matroids which may be associated to a graph.}. Similar ideas allow us to show that for any graph $G$, $M_{FB}(G)$ is also representable. \begin{prop}\label{fbond} For any graph $G$, $M_{FB}(G)$ is representable over every field $k$. \end{prop} \begin{proof} We start by giving fixed directions to every edge, cycle and finite bond. Let $O$ be the set of all cycles of $G$ and for any edge $e \in E(G)$ define a function $O \xrightarrow {\phi(e)}k$ such that for any $o\in O$, $\phi(e)(o)$ is $1$ if $e\in o$ and they have the same directions, $-1$ if $e\in o$ and they have different directions, and $0$ if $e$ isn't an edge of $o$. This defines a map $E(G) \xrightarrow{\phi} k^O$. We will show $M(\phi)=M_{FB}(G)$. We need to show that $D \subseteq E(G)$ is dependent in $M_{FB}(G)$ iff it is dependent in $M(\phi)$. If $D$ is dependent in $M_{FB}(G)$ then it contains a finite bond $D'$. For any edge $e\in E(G)$ define $c(e)$ to be $1$ if $e\in D'$ and they have the same directions, $-1$ if $e\in D'$ and they have different directions, and $0$ if $e$ is not in $D'$. Now consider a fixed cycle $o$ which meets $D'$. Clearly $D'$ has two sides and this cycle has to traverse $D'$ from the first side to the second side as many times as it traverses $D'$ from the second side to the first. As a result, for any $o\in O$ we have $\sum_{e \in E} c(e) \phi(e)(o)=0$ and so $c$ is a linear dependence of $D$. Conversely, suppose that $D$ is dependent in $M(\phi)$, and let $D'$ be the support of some nonzero linear dependence of $D$; note that $D'$ is finite. Whenever the edge set of a cycle meets $D'$, they have to meet in at least two edges, which means that $D'$ meets every spanning tree, and so the finite set $D'$ (and hence also $D$) contains a finite bond; thus $D$ is a dependent set in $M_{FB}(G)$.
\end{proof} Recall that for any graph $G$ the (possibly infinite) bonds of $G$ are the circuits of a matroid $M_B(G)$ on the edge set of $G$ \footnote{See \cite{RD:HB:graphmatroids}.}. In the above proof, we could exchange the role of finite bonds and arbitrary bonds and see that $M_B(G)$ is a thin sums matroid. We could also exchange the role of finite cycles and arbitrary bonds, and finite bonds and finite cycles, to get another proof of the fact that $M_{FC}(G)$ is representable. It is not difficult to see that the finite cycle matroid and the bond matroid of a graph $G$ are dual to each other \footnote{See \cite{RD:HB:graphmatroids}.}. As has been shown in \cite{RD:HB:graphmatroids}, for any graph $G$ the circuits of the dual of the finite bond matroid of $G$ are given by the topological circles in a topological space associated to $G$. For this reason, $M^*_{FB}(G)$ is called the topological cycle matroid of $G$, and denoted $M_C(G)$. In the next section, we shall show that $M_C(G)$ is also a thin sums matroid. We will only give a brief summary of the construction of the topological space behind the topological cycle matroid. A ray is a one-way infinite path. Two rays are edge-equivalent if for any finite set $F$ of edges there is a connected component of $G\setminus F$ that contains subrays of both rays. The equivalence classes of this relation are the edge-ends of $G$; we denote the set of these edge-ends by $\Ecal(G)$. Let us view the edges of $G$ as disjoint topological copies of [0,1], and let $X_G$ be the quotient space obtained by identifying these copies at their common vertices. The set of inner points of an edge $e$ will be denoted by $e\!\!^{^o}$. We now define a topological space $||G||$ on the point set of $X_G\cup \Ecal(G)$ by taking as our open sets the union of sets $\tilde{C}$, where $C$ is a connected component of $X_G\setminus Z$ for some finite set $Z\subset X_G$ of inner points of edges, and $\tilde{C}$ is obtained from $C$ by adding all the edge-ends represented by a ray in $C$. For any $X\subseteq ||G||$ we call $\{e \in E(G) | e\!\!^{^o} \subseteq X\}$ the edge set of $X$. A subspace $C$ of $||G||$ that is homeomorphic to $S^1$ is a {\em topological circle} in $||G||$. In \cite{RD:HB:graphmatroids} is shown that the edge sets of these circles in $||G||$ are the circuits of $M^*_{FB}(G)$. \section{Representable matroids and thin sums}\label{corep} In this section we elucidate the connections between representable matroids and thin sums matroids. First we show that any representable matroid is a thin sums matroid, so thin sums matroids are a generalisation of representable matroids. After that we will characterise the dual of an arbitrary representable matroid and show that not only is every representable matroid a thin sums matroid but every matroid whose dual is representable is also a thin sums matroid. In fact, our last result even is stronger; we show that the duals of representable matroids are precisely the thin sums matroids for thin families. Since the finite bond matroid of any graph is representable, this implies in particular that its dual, the topological cycle matroid, is a thin sums matroid. As usual, let $V^*$ be the dual of the vector space $V$ (that is, the vector space consisting of all linear maps from $V$ to $k$). \begin{thm}\label{repisthin} Consider a map $E \xrightarrow{\phi} V$ and the representable matroid $M(\phi)$. For any $e\in E$ and $\alpha \in V^*$ define $E \xrightarrow{f} k^{V^*}$ by $f(e)(\alpha):=\alpha.\phi(e)$. 
Then, $$M(\phi)=M_{ts}(f).$$ In particular, $M(\phi)$ is a thin sums matroid. \end{thm} \begin{proof} We show that a subset $I$ of $E$ is independent in $M_{ts}(f)$ iff it is independent in $M(\phi)$. Suppose that $I$ is independent in $M_{ts}(f)$. Suppose that $E \xrightarrow{c} k$ is any linear dependence of $\phi$ that is $0$ outside $I$. For any $\alpha \in V^*$ we have $$\sum_{e\in E} c(e).f(e)(\alpha)=\sum_{e\in E} c(e)\alpha.\phi(e)=\alpha\left(\sum_{e\in E} c(e)\phi(e)\right) =0.$$ As $\alpha$ was arbitrary we have $\sum_{e\in E} c(e)f(e)=0$, and since $I$ is independent in $M_{ts}(f)$, $c$ must be the $0$ map. So $I$ is also independent in $M(\phi)$. Conversely, suppose that $I$ is independent in $M(\phi)$. Suppose $E \xrightarrow{c} k$ is any thin dependence of $f$ that is $0$ outside $I$. Let $I'=\supp(c)$. Since $I' \subseteq I$, $I'$ is also independent in $M(\phi)$, so (by extending the image of $I'$ by $\phi$ to a basis of $V$) we can define a linear map $V\xrightarrow{\alpha_{I'}} k$ such that for any $i\in I'$, $\alpha_{I'}(\phi(i))=1$. As the restriction of $f$ to $I'=\supp(c)$ is thin and for any $i\in I'$ we have $f(i)(\alpha_{I'})=\alpha_{I'}(\phi(i))=1$, the set $I'$ has to be finite. So for every $\alpha \in V^*$, $$\alpha\left(\sum_{e\in E} c(e)\phi(e)\right)=\sum_{e\in E} c(e)\alpha.\phi(e)=\sum_{e\in E} c(e)f(e)(\alpha)=0.$$ Since this is true for every $\alpha \in V^*$, we get that $\sum_{e\in I'} c(e)\phi(e)=0$, which means that $c$ is a linear dependence of $\phi$ and so must be $0$. Therefore $I$ is also independent in $M_{ts}(f)$. \end{proof} Now let's see how we can move from a representable matroid to its dual. Let's start with a family $E \xrightarrow{\phi} V$. Let $C_{\phi}$ be the set of all linear dependences of $\phi$. For any $e\in E$ and $c\in C_{\phi}$ define $E \xrightarrow{\widehat{\phi}} k^{C_{\phi}}$ by $\widehat{\phi}(e)(c) := c(e)$. Clearly $\widehat{\phi}$ is a thin family of functions. On the other hand, if we let $D_f$ be the set of thin dependences of a thin family $E \xrightarrow{f} k^A$, we get a map $E \xrightarrow{\overline{f}} k^{D_{f}}$ where for $e\in E$ and $d\in D_{f}$ we set $\overline{f}(e)(d):= d(e)$. These processes are, in a sense, inverse to each other. \begin{lem}\label{backforth} For any thin family $E \xrightarrow{f} k^A$, a map $d: E \to k$ is a thin dependence of $f$ iff it is a thin dependence of $\widehat{\overline{f}}$. \end{lem} \begin{proof} First, suppose that $d$ is a thin dependence of $f$. Then for any $c \in C_{\overline{f}}$ we have \[ \sum_{e\in E} d(e)\widehat{\overline{f}}(e)(c)=\sum_{e\in E} d(e)c(e)=\sum_{e\in E} c(e)\overline{f}(e)(d)=0, \] so $d$ is also a thin dependence of $\widehat{\overline{f}}$. Now suppose that $d$ is a thin dependence of $\widehat{\overline{f}}$. For any $a \in A$, let $E \xrightarrow{c_a} k$ be defined by the equation $c_a(e) = f(e)(a)$. Since $f$ is thin, $c_a(e)$ is nonzero for only finitely many values of $e$. Also, for any thin dependence $d'$ of $f$ we have \[ \sum_{e\in E} c_a(e)\overline{f}(e)(d')= \sum_{e\in E} c_a(e )d'(e)=\sum_{e\in E} d'(e )f(e)(a)=0, \] and so $c_a \in C_{\overline{f}}$. Now, since $d$ is a thin dependence of $\widehat{\overline{f}}$, we have \[ \sum_{e\in E} d(e)f(e)(a) = \sum_{e\in E} d(e)c_a(e) = \sum_{e\in E}d(e) \widehat{\overline{f}}(e)(c_a)=0. \] Since $a$ was arbitrary, this says exactly that $d$ is a thin dependence of $f$. \end{proof} An analogous argument shows that for any map $\phi: E \to V$, the linear dependences of $\overline{\widehat{\phi}}$ are exactly those of $\phi$.
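As a toy illustration of these two constructions (included only for orientation, and not needed later), let $E=\{1,2,3\}$, let $V=k^2$, and let $\phi(1)=(1,0)$, $\phi(2)=(0,1)$ and $\phi(3)=(1,1)$. A map $c\colon E\to k$ is a linear dependence of $\phi$ iff $c(1)+c(3)=c(2)+c(3)=0$, so, writing $c$ as the triple $(c(1),c(2),c(3))$, we have $C_\phi=\{(\lambda,\lambda,-\lambda)\mid\lambda\in k\}$, and the only circuit of $M(\phi)$ is $\{1,2,3\}$. A map $d\colon E\to k$ is a thin dependence of $\widehat\phi$ iff $\sum_{e\in E}d(e)c(e)=0$ for every $c\in C_\phi$, that is, iff $d(1)+d(2)-d(3)=0$; the minimal supports of nonzero such maps are exactly the two-element subsets of $E$, so these are the circuits of $M_{ts}(\widehat\phi)$. They are also precisely the circuits of $M^*(\phi)$, in accordance with the next theorem.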
We can also show that these inverse processes correspond to duality of matroids. \begin{thm}\label{dualise} For any map $\phi: E \to V$ we have \[ M^*(\phi)=M_{ts}(\widehat{\phi}). \] \end{thm} \begin{proof} Suppose we have a set $E_1$ which is dependent in the dual of $M(\phi)$: that is, it meets every basis of $M(\phi)$. Let $E_2 = E \setminus E_1$, so $E_2$ doesn't contain any basis of $M(\phi)$ - that is, $E_2$ doesn't span this matroid, and we can pick $e_1 \in E_1$ such that $\phi(e_1)$ isn't in the span of the family $(\phi(e) | e \in E_2)$. Consider a basis $B_2$ for this span, and extend $B_2+ \phi (e_1)$ to a basis $B$ for $V$, and define a map $B \xrightarrow{h_0} k$ such that $h_0(\phi (e_1)):=1$, and otherwise 0. Finally, extend $h_0$ to a linear map $V \xrightarrow{h} k$. Now, for any linear dependence $c$ of $ \phi$ we have \[ \sum_{e \in E} (h\cdot\phi)(e) \widehat{\phi}(e)(c)=h\left(\sum_{e\in E} c(e)\phi(e)\right)=0. \] So $h \cdot \phi$ is a thin dependence of $\widehat{\phi}$, and since it is 0 outside $E_1$, $E_1$ is dependent with respect to $\widehat{\phi}$. Conversely, suppose that $E_1$ is dependent in $M_{ts}(\widehat{\phi})$, so that there is a nonzero thin dependence $d$ of $\widehat{\phi}$ which is 0 outside $E_1$. We want to show that $E_1$ meets every basis of $M(\phi)$, so suppose for a contradiction that there is such a basis $B$ which it doesn't meet. Pick $e_1 \in E_1$ so that $d$ is nonzero at $e_1$. We can express $\phi(e_1)$ as a linear combination of vectors from the family $(\phi(e) | e \in B)$ - that is, there is a linear dependence $c$ of $\phi$ which is nonzero only on $B$ and at $e_1$, with $c(e_1) = 1$. But then \[ d(e_1) = \sum_{e\in E} d(e)c(e) =\sum_{e\in E} d(e)\widehat{\phi}(e)(c) = 0, \] which is the desired contradiction. Thus $E_1$ does meet every basis of $M(\phi)$, so it is dependent in the dual of $M(\phi)$. \end{proof} \begin{coroll} For any thin family $ E \xrightarrow{f} k^A$ we have \[ M_{ts}(f)=M^*(\overline{f}). \] In particular, $M_{ts}(f)$ is a cofinitary matroid. \end{coroll} \begin{proof} This is immediate from Theorem \ref{dualise}, since by Lemma \ref{backforth} we have $M_{ts}(f) = M_{ts}(\widehat{\overline{f}})$. \end{proof} \section{A sufficient condition for $M_{ts}$ to be a matroid}\label{cond} Throughout this section, $f$ will denote a map $E \xrightarrow{f} k^A$ for some sets $A$ and $E$ and field $k$. Since so many examples of matroids are of the form $M_{ts}(f)$ for some such $f$, it would be good to be able to characterise when the set system $M_{ts}(f)$ is a matroid. Although this set system clearly satisfies the axioms (I1) and (I2), it need not satisfy either (I3) or (IM). Thus the algebraic cycle system of the Bean graph satisfies (IM) but not (I3) (as we shall soon show). On the other hand, we can also define a thin sums system which fails to satisfy (IM). Let $E = \Nbb \times \{0, 1\}$, and define a function $E \xrightarrow{f} \Qbb^{\Nbb}$ by $f((n,0))(i) = i^n$ and $f((n, 1))(i) = -1$ if $n = i$ and 0 otherwise. Thus for any thin dependence $c$ of $f$, there can only be finitely many $n \in \Nbb$ with $c((n, 0))$ nonzero, and the remaining values of $c$ are determined by the polynomial expression \begin{equation}\label{polyeq} c((i, 1)) = \sum_{n \in \Nbb}c((n, 0)) i^n \, . \end{equation} In particular, if $c$ is 0 outside of $\Nbb \times \{0\}$, then this polynomial must be the 0 polynomial and so $c$ must be the 0 function. This shows that $\Nbb \times \{0\}$ is thinly independent for $f$.
To show that $M_{ts}(f)$ doesn't satisfy (IM), we shall show that there is no maximal thinly independent superset of $\Nbb \times \{0\}$. More precisely, we shall show that, for a subset $X$ of $\Nbb$, the set $\Nbb \times \{0\} \cup X \times \{1\}$ is thinly independent if and only if $\Nbb \setminus X$ is infinite. In fact, the same argument as that above shows that this set is thinly independent whenever $\Nbb \setminus X$ is infinite, since the only polynomial which is zero in infinitely many places is the zero polynomial. Conversely, if $\Nbb \setminus X$ is finite, then pick some nonzero polynomial $\sum_{n = 0}^N a_n x^n$ with roots at all elements of $\Nbb \setminus X$, and define $c((n, 0))$ to be $a_n$ for $n \leq N$ and 0 otherwise. Define $c((i, 1))$ by the polynomial formula \eqref{polyeq}. Then $c$ is a nontrivial thin dependence which is 0 outside $\Nbb \times \{0\} \cup X \times\{1\}$, so that set is thinly dependent. In particular, any thinly independent superset of $\Nbb \times \{0\}$ is of the form $\Nbb \times \{0\} \cup X \times \{1\}$ with $\Nbb \setminus X$ infinite, and can be properly extended by adding a single element of $(\Nbb \setminus X) \times \{1\}$; hence there is no maximal such superset, and (IM) fails. We shall argue in the next section that some condition like (IM) is unavoidable, but we can at least get rid of the condition (I3). We do this by defining for each $f$ a different set system $M^{\co}_{ts}(f)$, which satisfies (I3) in addition to (I1) and (I2), and such that if it satisfies (IM) then $M_{ts}(f)$ is a matroid (in fact, in such cases $M^{\co}_{ts}(f) = M^*_{ts}(f)$). We will make use of a compactness lemma, corresponding to the compactness of a topological space which (so far as we know) has not been introduced in the literature. We therefore introduce it here. \begin{dfn} An {\em affine equation} over a set $I$ with coefficients in $k$ consists of a sequence $(\lambda_i \in k| i \in I)$, such that only finitely many of the $\lambda_i$ are nonzero, together with an element $\kappa$ of $k$. A sequence $(x_i | i \in I)$ is a {\em solution} of the equation $(\lambda, \kappa)$ iff $\sum_{i \in I} \lambda_i x_i = \kappa$ (we shall also slightly abuse notation by using expressions like this as names for equations). $x$ is a solution of a set $E$ of equations iff it is a solution of every equation in $E$. \end{dfn} The proof of the following Lemma is based on an argument of Bruhn and Georgakopoulos \cite{basis}. \begin{lem}\label{compsolve} If every finite subset of a set $E$ of affine equations over $I$ with coefficients in $k$ has a solution then so does $E$. \end{lem} \begin{proof} Let \Ical\ be the set of all subsets $I'$ of $I$ such that every finite subset of $E \cup \{x_i = 1 | i \in I'\}$ has a solution. \Ical\ is nonempty since it contains $\emptyset$, and the union of any chain of elements of \Ical\ is an upper bound for those elements in \Ical. So by Zorn's lemma, \Ical\ has a maximal element $I_m$. Let $E_m = E \cup \{x_i = 1 | i \in I_m\}$. \begin{sublem} For any $i \in I$ there is a finite set $K_i \subseteq E_m$ and an element $u_i \in k$ such that every solution $x$ of $K_i$ has $x_i = u_i$. \end{sublem} \begin{proof} Suppose for a contradiction that we can find $i_0 \in I$ where this fails - it follows that $i_0 \not \in I_m$. We shall show that $I_m \cup \{i_0\} \in \Ical$, contradicting the maximality of $I_m$. Let $K$ be any finite subset of $E_m$. By assumption, $K$ has two solutions $s$ and $t$ with $s_{i_0} \neq t_{i_0}$. Since $k$ is a field, we can find $A$ and $B$ in $k$ such that $A + B = 1$ and $As_{i_0} + Bt_{i_0} = 1$ (explicitly, $A = \frac{1-t_{i_0}}{s_{i_0}-t_{i_0}}$ and $B = \frac{s_{i_0}-1}{s_{i_0}-t_{i_0}}$ will do).
Then for each equation $\sum_{i \in I} \lambda_i x_i = \kappa$ in $K$ we have $\sum_{i \in I}\lambda_i(As_i + Bt_i) = A\sum_{i \in I} \lambda_is_i + B \sum_{i \in I} \lambda_it_i = A\kappa + B\kappa = \kappa$, so $As + Bt$ is a solution of $K$, and since $As_{i_0} + Bt_{i_0} = 1$ it is even a solution of $K \cup \{x_{i_0} = 1\}$, as required. \end{proof} It is now enough to show that the $u_i$ introduced above form a solution of $E$. Let $e$, given by $\sum_{i \in I} \lambda_i x_i = \kappa$, be any equation in $E$, and let $S$ be the finite subset of $I$ on which the $\lambda_i$ are nonzero. Let $K = \bigcup_{i \in S} K_i \cup \{e\}$. Since $K$ is finite, it has a solution $x$. For each $i \in S$ we have $x_i = u_i$, so $\sum_{i \in I} \lambda_iu_i = \sum_{i \in I} \lambda_ix_i = \kappa$, so the $u_i$ form a solution of $e$ and, since $e$ was arbitrary, they form a solution of $E$. \end{proof} This lemma is all we will really need. However, it looks like it ought to correspond to some sort of compactness, and indeed it does. \begin{dfn} For any affine equation $e$ over $I$ with coefficients in $k$, let $C_e$ be the set of solutions of $e$. For any finite set $E$ of affine equations, let $C_E = \bigcup_{e \in E} C_e$. The {\em affine Zariski topology} on $k^I$ is that with the $C_E$ as its basic closed sets. \end{dfn} The reason for this name is the analogy between this definition and the Zariski topology on $k[X]$ for a finite set $X$. \begin{thm} The affine Zariski topology is compact. \end{thm} \begin{proof} Let $\Ecal$ be a set of finite sets of affine equations, such that for any finite subset $K$ of $\Ecal$ the set $\bigcap_{E \in K} C_E$ is nonempty. What we need to show is that $\bigcap_{E \in \Ecal} C_E$ is also nonempty. Let $X$ be $\prod_{E \in \Ecal} E$, with the product topology. For each finite $K \subseteq \Ecal$, let $X_K$ be the subset of $X$ consisting of all $(e_E | E \in \Ecal)$ such that $\{e_E | E \in K\}$ has a solution. $X_K$ is closed, and it is nonempty since $K$ is finite and so $\bigcap_{E \in K} C_E$ is nonempty by hypothesis. For any finite family $(K_j | j \in J)$ of such $K$ we have that $\bigcap_{j \in J} X_{K_j} \supseteq X_{\bigcup_{j \in J} K_j}$, so it is nonempty. Since $X$ is compact, the intersection of all the $X_K$ is also nonempty, so we can pick an element $e$. Then we know that every finite subset of $\{e_E | E \in \Ecal\}$ has a solution, so by Lemma \ref{compsolve} there is a solution $x$ of the whole set of equations. But then $x$ lies in $\bigcap_{E \in \Ecal} C_E$, which is therefore nonempty. \end{proof} \begin{lem}\label{mindep} Let $d$ be a thin dependence of $f$. Then $\supp(d)$ is a union of minimal dependent sets of $M_{ts}(f)$. \end{lem} \begin{proof} Let $I = \supp(d)$. It suffices to show that for any $e_0 \in I$ there is a minimal dependent set which contains $e_0$ and is a subset of $I$. We begin by fixing such an $e_0$. For any $a\in A$ there are only finitely many $e\in I$ with $f(e)(a)\neq 0$, so for any $a\in A$ we get an affine equation $\sum_{e\in I} f(e)(a) x_e=0$ over $I$. Let $\Ecal$ be the set of all affine equations arising in this way. Let $\Scal$ be the set of all subsets $I'$ of $I$ such that every finite subset of $\Ecal \cup \{x_e=0|e\in I'\} \cup \{x_{e_0}=1\}$ has a solution. Since $d \restric_I$ is a solution of all equations in $\Ecal$, $(d\restric_I)/d(e_0)$ is a solution of all equations in $\Ecal \cup\{x_{e_0}=1\}$, so $\emptyset \in \Scal$. $\Scal$ is also closed under unions of chains, so by Zorn's lemma it has a maximal element $E_m$.
Now by Lemma \ref{compsolve} there is some solution $d'$ of all the equations in $\Ecal \cup \{x_e=0|e\in E_m\} \cup \{x_{e_0}=1\}$. Since $d'$ solves all the equations in $\Ecal$, its extension to $E$ taking the value 0 outside $I$ is a thin dependence of $f$. We shall show that $D:=\supp(d')=I\setminus E_m$ is the desired minimal dependent set. If it were not, there would have to be a nonzero thin dependence $d''$ with $\supp(d'') \subsetneq \supp(d')$; replacing $d''$ by $d''-d''(e_0)d'$ if necessary (which is still nonzero, since $d''$ is not a multiple of $d'$), we may assume that $d''(e_0)=0$. But then for any $e_1 \in \supp(d'')$, we have that $d' - \frac{d'(e_1)}{d''(e_1)}d''\restric_I$ is a solution of $x_{e_1} = 0$ in addition to the equations solved by $d'$, which contradicts the maximality of $E_m$. \end{proof} \begin{coroll}\label{spans} If $M_{ts}(f)$ is a matroid, and $E' \subseteq E$, then $e \not \in E'$ is in the closure of $E'$ iff there is a thin dependence $d$ with $\supp(d) \subseteq E' \cup \{e\}$ and $d(e) = 1$. \end{coroll} \begin{proof} If there is such a $d$, by Lemma \ref{mindep} we can find a minimal dependent set $D$ with $e\in D\subseteq \supp(d)$. As $D\setminus \{e\}\subseteq E'$ is independent, $e \in \Sp(E')$. If $e \in \Sp(E')$ then there is a circuit $D$ with $e\in D\subseteq E' \cup \{e\}$. Let $d$ be a thin dependence with $\supp(d) = D$. Then $d(e) \neq 0$ since $D - e$ is independent: scaling if necessary, we can take $d(e)=1$. \end{proof} \begin{coroll}\label{coind} Let $M_{ts}(f)$ be a matroid. Then a subset $I$ is independent in $M^*_{ts}(f)$ iff for every $i \in I$ there is a thin dependence $d_i$ of $f$ such that $d_i(i) = 1$ and $d_i$ is $0$ on the rest of $I$. \end{coroll} \begin{proof} We recall that $I$ is independent in $M^*_{ts}(f)$ iff $\Sp(I^c)=E$. Now apply Corollary \ref{spans}. \end{proof} This motivates the definition we promised at the start of this section, of the set system $M_{ts}^{\co}(f)$. \begin{dfn} A subset $I$ of $E$ is {\em coindependent} iff for every $i \in I$ there is a thin dependence $d_i$ of $f$ such that $d_i(i) = 1$ and $d_i$ is 0 on the rest of $I$. The set system $M_{ts}^{\co}(f)$ has ground set $E$ and consists of the coindependent subsets of $E$. \end{dfn} Thus by Corollary \ref{coind}, when $M_{ts}(f)$ is a matroid, $M_{ts}^{\co}(f)=M^*_{ts}(f)$. \begin{lem}\label{coindplus} Let $I$ be coindependent and $i_0\not\in I$. If there is a thin dependence $d$ which is nonzero at $i_0$ and $0$ on $I$, then $I + i_0$ is coindependent. \end{lem} \begin{proof} Suppose that $(d_i |i\in I)$ witnesses the coindependence of $I$. Let $d'_{i_0}=d/d(i_0)$, and for $i\in I$ let $d'_i=d_i-d_i(i_0)d'_{i_0}$. Then $(d'_i | i\in I + i_0)$ witnesses the coindependence of $I + i_0$. \end{proof} We can now show that $M_{ts}^{\co}(f)$ is always dual, in a sense, to $M_{ts}(f)$. \begin{lem}\label{bases} Let $I \subseteq E$. $I$ is a maximal independent set with respect to $f$ iff $E \setminus I$ is a maximal coindependent set with respect to $f$. \end{lem} \begin{proof} Suppose first of all that $I$ is a maximal independent set, and let $i \in E \setminus I$. Let $d_i$ witness the dependence of $I \cup \{i\}$. We must have $d_i(i) \neq 0$, so without loss of generality $d_i(i) = 1$. But then the $d_i$ witness the coindependence of $E \setminus I$. We can't have $(E \setminus I) + i$ coindependent for any $i \in I$, since the corresponding $d_i$ would witness dependence of $I$. So suppose instead for a contradiction that $E \setminus I$ is a maximal coindependent set but $I$ is dependent, as witnessed by some thin dependence $d$ of $I$.
There must be $i_0 \in I$ with $d(i_0) \neq 0$, so, by Lemma \ref{coindplus}, $(E \setminus I) + i_0$ is coindependent, contradicting the maximality of $E \setminus I$. Thus $I$ is independent. For each $i \in E \setminus I$, $I \cup \{i\}$ is dependent, as witnessed by the thin dependence $d_i$ coming from the coindependence of $E \setminus I$, and so $I$ is also maximal. \end{proof} $M_{ts}^{\co}(f)$ evidently satisfies (I1) and (I2). \begin{lem}\label{(I3)} $M_{ts}^{\co}(f)$ satisfies (I3). \end{lem} \begin{proof} Suppose we have a maximal coindependent set $J$, and a nonmaximal coindependent set $I$. We have to show that we may extend $I$ with a point from $J$. Since $I$ is nonmaximal, we can choose $i_0 \not \in I$ with $I + i_0$ still coindependent; let $d_{i_0}$ be the thin dependence witnessing the coindependence of $I + i_0$ at $i_0$. Since by Lemma \ref{bases} $E \setminus J$ is independent, there is $i_1 \in J$ with $d_{i_0}(i_1) \neq 0$; note that $i_1 \not \in I$, as $d_{i_0}$ vanishes on $I$. Then by Lemma \ref{coindplus} $I + i_1$ is coindependent. \end{proof} We can now give our slightly simplified criterion for when a thin sums system is a matroid. \begin{thm}\label{comat} If $M_{ts}^{\co}(f)$ satisfies (IM), then $$(M_{ts}^{\co}(f))^*=M_{ts}(f).$$ In particular, $M_{ts}(f)$ is a matroid. \end{thm} \begin{proof} $M_{ts}^{\co}(f)$ evidently satisfies (I1) and (I2), and satisfies (I3) by Lemma \ref{(I3)}, so it is a matroid. It is clear from Lemma \ref{bases} that every independent set of $(M_{ts}^{\co}(f))^*$ is also independent in $M_{ts}(f)$. Conversely, let $I$ be an independent set of $M_{ts}(f)$. Then let $J$ be a maximal independent set of $M_{ts}^{\co}(f)$ not meeting $I$ (such a set exists since $M_{ts}^{\co}(f)$ satisfies (IM)). It suffices to show that $J$ is a base of $M_{ts}^{\co}(f)$. Suppose not, for a contradiction: then there is some $i \in I$ with $J + i$ coindependent. But then since $I$ is independent, the corresponding $d_i$ is nonzero at some $j \not \in I$; moreover $j \not\in J$, since $d_i$ vanishes on $J$, and so by Lemma \ref{coindplus} we deduce that $J + j$ is coindependent. As $j \not\in I$, this contradicts the maximality of $J$. \end{proof} We now return to the question of when the algebraic cycle system $M_A(G)$ of a graph $G$ is a matroid. It evidently satisfies (I1) and (I2). A little trickery shows that $M_A(G)$ has a maximal independent set $B$. First, we pick a maximal collection $A$ of disjoint rays in $G$; then we can take $B$ to be any maximal set of edges including all the rays in $A$ but not including any cycle and not connecting any two of the rays in $A$ (both these steps are possible by Zorn's Lemma). $B$ can't include a double ray, by maximality of $A$. A slight refinement of this argument shows that $M_A(G)$ always satisfies (IM). So we just need to determine whether $M_A(G)$ satisfies (I3). In fact, as we mentioned in Section \ref{pre}, it was shown by Higgs in \cite{Higgs:axioms} that $M_A(G)$ is a matroid iff $G$ doesn't contain any subdivision of the Bean graph: $$\xymatrix{&&&& v \ar@{--}[d] \ar@{-}[dr] \ar@{-}[drr] \ar@{-}[drrr] \ar@{}[drrrr]|{\cdots} &&&&\\ \cdots & \ar@{-->}[l] \bullet \ar@{--}[r] & \bullet \ar@{--}[r] & v' \ar@{-}[r] & \bullet \ar@{--}[r] & \bullet \ar@{--}[r] & \bullet \ar@{--}[r] & \bullet \ar@{-->}[r]& \cdots \\}$$ The algebraic cycle system of this graph doesn't satisfy (I3) - the dashed edges above form a maximal independent set, but there is no way to extend the nonmaximal independent set consisting of the edges meeting $v$ and those to the left of $v'$ by an edge from this set. It is, however, not at all easy to see that if $G$ doesn't contain a subdivision of the Bean graph then $M_A(G)$ satisfies (I3).
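To see the objects of Theorem \ref{comat} at work in the simplest possible graphic case (a toy illustration only, not needed in what follows), let $G$ be a triangle with edges $e_1,e_2,e_3$ oriented cyclically, and let $f$ be the corresponding representation from the proof of Proposition \ref{algthin}. The thin dependences of $f$ are exactly the constant functions on $\{e_1,e_2,e_3\}$, so a set $I$ of edges is coindependent iff $|I|\leq 1$: no nonzero dependence can take the value $1$ at one element of $I$ and $0$ at another. The maximal coindependent sets are therefore the singletons, whose complements are exactly the bases of $M_A(G)$, as predicted by Lemma \ref{bases}, and $(M_{ts}^{\co}(f))^*=M_{ts}(f)=M_A(G)$, as predicted by Theorem \ref{comat}. For infinite graphs, of course, the real work lies in verifying (I3), or (IM) for the coindependent sets.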
In fact, Higgs didn't follow this route - the interested reader can check that his claim (3) (which is the combinatorial heart of the paper) is exactly the criterion obtained from Theorem \ref{comat} in this case. We are now in a position to give a more direct argument. \begin{thm}[Higgs]\label{higgs} Suppose that $G$ contains no subdivision of the Bean graph. Then $M_A(G)$ is a matroid. \end{thm} \begin{proof} We say a cut $b$ of $G$ is a {\em nibble} if one side (called the {\em small side}: the other side is the {\em large side}) of $b$ is connected and contains no rays. Suppose, for a contradiction, that there are a nibble $b$ and an algebraic cycle $a$ meeting $b$ infinitely often. Then $a$ must be a double ray. Let $T$ be a spanning tree of the small side of $b$. We can pick any vertex $v_0$ in this tree to serve as its root, and consider the subtree $T'$ consisting of the paths from $v_0$ to $a$ in $T$. Since $T'$ is rayless and has infinitely many leaves, there must (by K\"{o}nig's Lemma) be a vertex $v$ in this tree of infinite degree. The paths from $v$ to $a$ in $T'$, together with $a$ itself, now contain a subdivision of the Bean graph, contrary to our supposition. So we can conclude that a nibble and an algebraic cycle can only meet finitely often. In fact we can say more, using the ideas of Section \ref{pre}. Pick directions for every edge, algebraic cycle and nibble of $G$. Let $A$ be the set of all algebraic cycles of $G$, and for any edge $e \in E(G)$ define a function $A \xrightarrow {f(e)}k$ such that for any $a\in A$, $f(e)(a)$ is $1$ if $e\in a$ and they have the same directions, $-1$ if $e\in a$ and they have different directions, and $0$ if $e$ isn't an edge of $a$. This gives a map $E(G) \xrightarrow{f} k^A$. We shall show that $M_{ts}^{\co}(f)=M_A(G)$. First, we show that any coindependent set $I$ for $f$ is $M_A(G)$-independent. Suppose for a contradiction that $I$ includes an algebraic cycle $a$, and pick any $i \in a$. Then, writing $d_i$ for the thin dependence witnessing the coindependence of $I$ at $i$ (so that $d_i$ vanishes on $a - i \subseteq I - i$), we get $\sum_{e \in E}d_i(e)f(e)(a) = f(i)(a) \neq 0$, which is the desired contradiction. For any nibble $b$ of $G$, define the map $E(G) \xrightarrow{d_b} k$ such that $d_b(e)$ is $1$ if $e\in b$ and they have the same directions, $-1$ if $e\in b$ and they have different directions, and $0$ if $e$ isn't an edge of $b$. For any algebraic cycle $a$, $a$ must traverse $b$ the same number of times in each direction (if it is a double ray, the rays in both directions must eventually end up in the large side of $b$). Traversals one way contribute a $+1$ term to $\sum_{e \in E} d_b(e)f(e)(a)$, and traversals the other way contribute a $-1$ term, so this sum is always 0. That is, each $d_b$ is a thin dependence of $f$. Now if a set $I$ isn't coindependent then there is some $i \in I$ such that no thin dependence is nonzero at $i$ and 0 on the rest of $I$. In particular, considering the thin dependences $d_b$ above, there is no nibble $b$ with $b \cap I = \{i\}$. Thus if the connected components of $I - i$ containing the endpoints of $i$ are distinct then each contains a ray, so $I$ contains a double ray. Otherwise, both ends of $i$ are in the same component, so $I$ contains a cycle. In either case, $I$ contains an algebraic cycle. We have shown that the $M_A(G)$-independent sets are exactly the coindependent sets, so they satisfy (I3) by Lemma \ref{(I3)}. We have already checked the remaining axioms.
We have shown that every nibble is thinly dependent. On the other hand, if a set $I$ contains no nibble, so that every connected component of the complement of $I$ contains a ray, then for each $i$ in $I$ there is an algebraic cycle meeting $I$ only in $i$, so $I$ is thinly independent. Thus the circuits of the dual of $M_A(G)$ are exactly the minimal nibbles. This is also shown in \cite{RD:HB:graphmatroids}, where minimal nibbles are called skew cuts. \end{rem} \section{Galois Connections}\label{galois} In this section, we will present a new perspective on the definition of thin sums set systems, which we believe makes it unlikely that any criterion much simpler than (IM) will allow us to distinguish which such systems are matroids. To do this, we shall show that thin sums systems are determined by closed classes for a particular Galois connection. We shall note that each IE-operator gives a closed class for a very similar Galois connection. Since in that case (IM) seems to be necessary to pick out the class of matroids, we think something similar will be needed for thin sums systems also. This section is not essential for what follows, and may be skipped. Since Galois connections are not widely known, we shall review here the small portion of the theory that we shall require. \begin{dfn} Let $A$ be a set, and $R$ a symmetric relation on $A$. The {\em Galois connection} induced by $R$ is the function $(\Pcal A \xrightarrow{p} \Pcal A)$ given by $p(A') = \{a \in A | (\forall a' \in A') aRa'\}$. \end{dfn} For the remainder of this section we shall always take $A$, $R$ and $p$ to refer in this way to the constituents of a general Galois connection. \begin{example}\label{orth} Let $V$ be a vector space with an inner product $\langle -,- \rangle$. We say two vectors $v$ and $w$ are {\em orthogonal} iff $\langle v,w \rangle = 0$. This gives a relation from $V$ to itself, and so induces a Galois connection as above. $p$ is given by the function $\Pcal V \to \Pcal V$ that sends a subset of $V$ to its {\em orthogonal complement}, which is always a subspace of $V$. \end{example} \begin{lem}\ \begin{itemize} \item[$\bullet$] For $A' \subseteq A'' \subseteq A$, $p(A'') \subseteq p(A')$. \item[$\bullet$] For $A' \subseteq A$, $A' \subseteq p^2(A')$. \end{itemize} \end{lem} \begin{proof} To prove the first property, note that for any $a \in p(A'')$, for any $a' \in A' \subseteq A''$ we have $aRa'$, so that $a \in p(A')$. To prove the second property, note that for any $a' \in A'$, for any $a \in p(A')$ we have $a'Ra$, so that $a' \in p^2(A')$. \end{proof} \begin{lem}\label{galprops} For $A' \subseteq A$, the following are equivalent: \begin{itemize} \item[$\bullet$] $A' = p^2(A')$. \item[$\bullet$] $A'$ is in the image of $p$. \end{itemize} \end{lem} \begin{proof} The first statement clearly implies the second. Suppose the second is true, and let $A' = p(A'')$. Then we have $A'' \subseteq p^2(A'')$, so $p(A'') \supseteq p^3(A'')$, that is $A' \supseteq p^2(A')$. Since we also know $A' \subseteq p^2(A')$, we have $A' = p^2(A')$ as required. \end{proof} In such cases, we say $A'$ is a {\em closed} subset of $A$ (with respect to this Galois connection). It is immediate from Lemma \ref{galprops} that $p$ restricts to an order-reversing involution of the poset of closed subsets of $A$. For any closed set $A'$, $p(A')$ is called the {\em dual} closed set to $A'$.
\begin{example} If, in Example \ref{orth}, $V$ is finite dimensional, then the closed sets for this Galois connection are precisely the subspaces of $V$. \end{example} \begin{example}\label{graphcon} Let $G$ be a finite graph, and let $V$ be the free vector space $\Fbb_2^E$ over $\Fbb_2$ on the set $E$ of edges of $G$. We can identify subsets of $E$ with vectors in $V$: each subset gets identified with its characteristic function. There is a standard inner product on this space, with $\langle v, w\rangle = \sum_{e \in E} v(e)w(e)$. Then the cuts of $G$ form a subspace of $V$, which is the orthogonal complement of the subspace of $V$ generated by the cycles of $G$. Thus the cycles and the bonds of $G$ generate dual closed classes in the associated Galois connection. \end{example} Let $E$ be any set, and define a relation $R_1$ from $\Pcal E$ to itself by letting $X R_1 Y$ iff $|X \cap Y| \neq 1$. This slightly odd relation is motivated by the fact that it holds between any circuit and any cocircuit in a matroid. We shall show that each matroid with ground set $E$ induces a closed subset of $\Pcal E$ in the associated Galois connection, and that the dual matroid induces the dual closed subset. In fact, we can go further and get such a result for idempotent-exchange operators (see Section \ref{pre} for a definition of this concept). \begin{dfn} Let $S$ be an IE-operator on a set $E$. A set $X \subseteq E$ is {\em $S$-closed} iff $SX = X$. A subset $X$ is an $S$-scrawl iff for each $x \in X$ it is true that $x \in S(X - x)$. The set of $S$-scrawls is denoted $\Scal(S)$. \end{dfn} Thus if $M$ is a matroid then a set is $\Sp_M$-closed iff it is $M$-closed and is a $\Sp_M$-scrawl iff it is a union of $M$-circuits. \begin{lem}\label{scrawls} Let $S$ be an IE-operator on $E$, and let $X \subseteq E$. Then $X$ is $S$-closed iff $E \setminus X$ is an $S^*$-scrawl. \end{lem} \begin{proof} Note that by \eqref{duop} of Section \ref{pre}, for any $x \in E \setminus X$ we have $x \not \in SX$ iff $x \in S^*(E \setminus X - x)$. \end{proof} \begin{coroll}\label{matscrawls} Let $M$ be a matroid with ground set $E$, and let $s \subseteq E$ be a set which never meets an $M$-cocircuit in just one point. Then $s$ is a union of $M$-circuits. \nobreak $\square$ \end{coroll} \begin{thm} Let $S$ be an IE-operator on a set $E$, and let $p$ be given as above by the Galois connection associated to $R_1$. Then $\Scal(S) = p(\Scal(S^*))$. \end{thm} \begin{proof} We must show that a subset $X$ of $E$ is in $\Scal(S)$ iff it is in $p(\Scal(S^*))$. First of all, suppose that $X \in \Scal(S)$, and pick any $X' \in \Scal(S^*)$. Suppose for a contradiction that $|X \cap X'| = 1$, and call the unique element of this set $x$. Then $x \in S(X - x)$ and so $x \in S(E \setminus X')$, which contradicts the fact that by Lemma \ref{scrawls}, $E \setminus X'$ is $S$-closed. Since $X'$ was arbitrary we get that $X \in p(\Scal (S^*))$. Now suppose instead that $X \in p(\Scal(S^*))$. For any $x \in X$, $S(X - x)$ is $S$-closed, since $S$ is idempotent, so $E \setminus S(X - x) \in \Scal(S^*)$ by Lemma \ref{scrawls}. So $X \cap (E \setminus S(X - x))$, which is a subset of $\{x\}$, can't have just one element. So $x \not \in E \setminus S(X - x)$ and so $x \in S(X - x)$. Since $x$ was arbitrary, $X \in \Scal(S)$.
\end{proof}
Thus although every matroid corresponds to a closed class for such a Galois connection, not every such closed class corresponds to a matroid: the far more general collection of IE-operators gives rise to many such closed classes which don't come from matroids. Thus, in order to determine which closed classes for these Galois connections correspond to matroids, some condition akin to (IM) is essential. However, there is a similar Galois connection whose closed classes capture the information behind thin sums systems. Let $E$ be a set, and $k$ a field. We have a relation $R_2$ from $k^E$ to itself with $c R_2 d$ iff $$\sum_{e \in E}c(e) d(e) = 0 \, .$$ Here, as usual, we take this to include the statement that the sum is well defined, i.e. that only finitely many of the summands are nonzero. Just as in Example \ref{orth}, any closed set is necessarily a subspace of the vector space $k^E$. The link between this relation and the relation $R_1$ defined above is that, since no sum evaluating to zero can have precisely one nonzero term, if $c R_2 d$ then there can't be just one $e \in E$ at which both are nonzero. Explicitly, $c R_2 d \Rightarrow \supp(c)R_1 \supp(d)$. From any closed class, we can define a corresponding set system.
\begin{dfn} For any closed set $C$ with respect to $R_2$, we say a subset $I$ of $E$ is {\em $C$-independent} iff the only $c \in C$ which is zero outside $I$ is the 0 function. Otherwise, $I$ is {\em $C$-dependent}. The {\em thin sums system} $M_C$ corresponding to $C$ is the system of $C$-independent subsets of the ground set $E$. \end{dfn}
We shall now show that this notion corresponds to the usual notion of a thin sums system.
\begin{prop}\label{olddef} Suppose we have a function $E \xrightarrow{f} k^A$. Let $D$ be the set of functions $d_a \colon e \mapsto f(e)(a)$ with $a \in A$. Then $M_{ts}(f) = M_{p(D)}$. \end{prop}
\begin{proof} It is enough to show that the elements of $p(D)$ are exactly the thin dependences for $f$. But using the substitution given above, the condition that $c \in p(D)$, namely that for each $a \in A$ $$\sum_{e \in E}c(e)d_a(e) = 0 \, ,$$ becomes the condition that for each $a \in A$ $$\sum_{e \in E}c(e)f(e)(a) = 0 \, ,$$ which is the condition for $c$ to be a thin dependence for $f$. \end{proof}
Because thin sums systems correspond in this way to closed classes for the Galois connection corresponding to $R_2$, and a condition like (IM) seems necessary to pick out the matroids amongst the closed classes for $R_1$, it is likely that some condition akin to (IM) will also be needed to distinguish which thin sums systems are matroids. On the other hand, the evident similarity of this connection to the sort employed in Example \ref{graphcon} provides another indication of why the various types of cycle and bond matroids corresponding to a graph are all thin sums systems.
\section{Tameness and duality}\label{duality}
One very natural question about the class of thin sums matroids is whether or not it is closed under matroid duality: the fact that the class of representable matroids was not closed under duality was a key motivation for introducing extensions of this class, such as the class of thin sums matroids. Sadly, the class of thin sums matroids is not closed under duality: a counterexample is given in \cite{WILD}. However, that counterexample involves a matroid with a very unusual property: it has a circuit and a cocircuit whose intersection is infinite.
Matroids with this property are called {\em wild} matroids, and those in which every circuit-cocircuit intersection is finite are called {\em tame}. The main result of this section will be that the class of tame thin sums matroids is closed under duality. This class includes all the interesting examples arising from graphs: any finitary or cofinitary matroid must be tame, and this includes the finite and topological cycle matroids as well as the bond and finite bond matroids of a given graph. We showed in the proof of Theorem \ref{higgs} that the algebraic cycle and skew cuts matroids are also tame. A natural strategy for showing that the dual of a thin sums matroid is again a thin sums matroid is suggested by the results of Section \ref{corep}. These results suggest that in attempting to construct the representation $E \xrightarrow{\overline{f}}k^{\overline{A}}$ of $M^*_{ts}(f)$ we should take $\overline{A}$ to be the set of all thin dependencies of $f$, and define $\overline{f}(e)(c)$ to be $c(e)$. However, this natural attack fails to work, even if $M_{ts}(f)$ is tame, as our next example shows. \begin{example} Let $G$ be the graph $$\xymatrix{&&& \bullet \ar[dll] \ar[dl] \ar[d] \ar[dr] \ar[drr] \ar[drrr] &&&&\\ \ar@{}[d]|\cdots & \bullet \ar@{<--}[d] \ar@{-->}[r] & \bullet \ar@{-->}[d] & \bullet \ar@{<--}[d] \ar@{-->}[r] & \bullet \ar@{-->}[d] & \bullet \ar@{<--}[d] \ar@{-->}[r] & \bullet \ar@{-->}[d] & \ar@{}[d]|\cdots \\& \bullet \ar[drrr] \ar@{<--}[r] & \bullet \ar[drr] & \bullet \ar[dr] \ar@{<--}[r] & \bullet \ar[d] & \bullet \ar[dl] \ar@{<--}[r] & \bullet \ar[dll] &\\&&&& \bullet &&& .\\}$$ We may represent the algebraic cycle matroid of $G$ as $M_{ts}(f)$ as in the proof of Proposition \ref{algthin}. Recall that for any edge $e$ of $G$ the function $V(G) \xrightarrow{f(e)} k$ is given by taking $f(e)(v)$ to be $1$ if $e$ originates from $v$, $-1$ if it terminates in $v$, and $0$ if they don't meet each other. Thus the function which takes the value $1$ on the dotted edges and 0 elsewhere is a thin dependence of $f$. So no function with support given by the skew cut consisting of the vertical dotted edges can be a thin dependence of $\overline{f}$ as given above. That is, for this matroid and this definition of $\overline{f}$, we have $M^* \neq M_{ts}(\overline{f})$. \end{example} Our approach will be a little different in character, although our results will imply that the restriction of the $\overline{f}$ defined above to the set of thin dependencies whose supports are circuits {\em does} give a representation of the dual of $M_{ts}(f)$. We shall proceed by giving a self-dual characterisation of the class of tame thin sums matroids. \begin{lem}\label{char} Let $M$ be a tame matroid with ground set $E$. Then $M$ is a thin sums matroid over the field $k$ iff there is for each circuit $o$ of $M$ a function $o \xrightarrow{c_o} k^*$ (here $k^*$ is the set of nonzero elements of $k$) and for each cocircuit $b$ of $M$ a function $b \xrightarrow{d_b} k^*$ such that for any circuit $o$ and cocircuit $b$ we have \begin{equation} \label{sumeq} \sum_{e \in o \cap b} c_o(e)d_b(e) = 0 \, . \end{equation} \end{lem} \begin{proof} Suppose first of all that we have such $c_o$ and $d_b$. Let $A$ be the set of cocircuits of $M$, and let $E \xrightarrow{f} k^A$ be defined by $f(e)(b) = d_b(e)$. We shall show that $M = M_{ts}(f)$, by showing that a set $I \subseteq E$ is $M$-dependent iff it is $M_{ts}(f)$-dependent. 
If $I$ is $M$-dependent, it includes some circuit $o$, and then the function extending $c_o$ to $E$ and taking the value 0 everywhere outside $o$ is a nontrivial thin dependence of $f$ which is 0 outside of $I$. If $I$ is $M_{ts}(f)$-dependent, then let $c$ be a nontrivial thin dependence of $f$ which is 0 outside of $I$, and let $s = \supp(c)$. Then for any $M$-cocircuit $b$ we have $$\sum_{e \in E}c(e)d_b(e) = 0 \, .$$ The collection of those $e$ such that $c(e)d_b(e) \neq 0$ is $s \cap b$, which therefore can't have just one element. So by Corollary \ref{matscrawls} $s$ is a union of $M$-circuits. Since $s$ is nonempty, it is therefore $M$-dependent, and therefore so is $I$. Conversely, let $M$ be given as $M_{ts}(f)$ for some $E \xrightarrow{f} k^A$. For each circuit $o$ of $M$, pick some thin dependence $\hat{c}_o$ of $f$ with support $o$, and let $c_o = \hat{c}_o \restric_o$. Now let $b$ be any cocircuit of $M$, and fix some $e_b \in b$. By Lemma \ref{o_cap_b}, we can find for each $e \in b - e_b$ some circuit $o(e)$ of $M$ such that $o(e) \cap b = \{e_b, e\}$. We define the map $b \xrightarrow{d_b} k^*$ to be $1$ at $e_b$ and $-\frac{c_{o(e)}(e_b)}{c_{o(e)}(e)}$ for $e \in b-e_b$ (note that this choice ensures that \eqref{sumeq} holds for $b$ and each $o(e)$). Let $o$ be any circuit of $E$. It remains to show that $\sum_{e \in o \cap b} c_o(e)d_b(e) = 0 $. Plugging in the values for $d_b(e)$, this means that we need to show $$\hat{c}_o(e_b) - \sum_{e \in o \cap (b - e_b)} \frac{c_o(e)c_{o(e)}(e_b)}{c_{o(e)}(e)} =0 \, .$$ That is, we need $c(e_b)=0$, where $$c = \hat{c}_o - \sum_{e \in o \cap (b - e_b)}\frac{c_o(e)}{c_{o(e)}(e)}\hat{c}_{o(e)} \, .$$ As $c$ is a finite linear combination of thin dependences, it is again a thin dependence. But for any $e \in b - e_b$, we have $c(e) = \hat{c}_o(e) -\frac{\hat{c}_o(e)}{c_{o(e)}(e)}c_{o(e)}(e) = 0$. If $c(e_b) \neq 0$, then by Lemma \ref{mindep}, there is a circuit $o$ such that $e_b \in o \subseteq \supp(c)$, which gives $o \cap b = \{e_b\}$, a contradiction. Thus $c(e_b)=0$, as desired. \end{proof} \begin{thm}\label{minorclose} The class of tame thin sums matroids is closed under duality and under taking minors. \end{thm} \begin{proof} The closure under duality follows from the fact that the characterisation given in Lemma \ref{char} is self-dual. For the closure under taking minors, let $M$ be a tame thin sums matroid with functions $c_o$, $d_b$ given as in Lemma \ref{char}, and let $N = M / C \backslash D$ be a minor of $M$. For each circuit $o$ of $N$, let $\hat{o}$ be a circuit of $M$ with $o \subseteq \hat{o} \subseteq o \cup C$ (such a circuit exists by Lemma \ref{rest_cir}), and take $c_o$ to be $c_{\hat o} \restric_o$. Similarly, for each cocircuit $b$ of $N$ let $\hat b$ be a cocircuit of $M$ with $b \subseteq \hat b \subseteq b \cup D$ and let $d_b = d_{\hat b} \restric_b$. These $c_o$ and $d_b$ satisfy the conditions of Lemma \ref{char}, so that $N$ is also a thin sums matroid over $k$. \end{proof} \section{Overview of the connections to graphic matroids}\label{summary} Our results on graphic matroids have been scattered through the paper. We can now make use of Proposition \ref{finthinrep} and Theorem \ref{minorclose}, and go on a short tour of the standard matroids arising from an infinite graph $G$. We shall recall why all of them are tame thin sums matroids over any field, and a couple of them are representable over any field. 
Since we want our results to apply to any field, we continue to work over an arbitrary fixed field $k$. Our starting point is the most algebraic of examples, the algebraic cycle matroid, for which we gave a thin sums representation in Proposition \ref{algthin}. Applying Theorem \ref{minorclose}, we deduce that the dual $M^*_A(G)$, the skew cuts matroid of $G$, is also a thin sums matroid. We can also apply Proposition \ref{finthinrep} to deduce that the finitarisation $M_{FC}(G) = M_A(G)_{fin}$, the finite cycle matroid $M_{FC}(G)$, whose circuits are the cycles of $G$, is representable. Applying Theorem \ref{minorclose}, its dual, the bond matroid $M_B(G)$, whose circuits are (possibly infinite) bonds, is a thin sums matroid. So by Proposition \ref{finthinrep}, the finite bond matroid $M_{FB}(G)$ is representable. Applying Theorem \ref{minorclose} one more time, we recover the fact that the topological cycle matroid $M_{C}(G)$ is a thin sums matroid. We could, of course, continue this process further, but it quickly becomes periodic, as sketched out in the following diagram: $$\xymatrix{*\txt{Algebraic cycles} \ar[rr]^{\dual} \ar[dd]_{\fin} && *\txt {Skew cuts} \\&&\\ *\txt{Finite cycles \\ (representable)} \ar[rr]^{\dual} && *\txt{Bonds} \ar[dd]^{\fin} \\&&\\ *\txt{Topological cycles} \ar[dd]_{\fin} && *\txt{Finite bonds \\ (representable)} \ar[ll]^{\dual} \\&&\\ *\txt{Finite cycles \\ of FSep($G$) \\ (representable)} \ar[rr]_{\dual} && *\txt{Bonds \\ of FSep($G$)} \ar[uu]_{\fin} }$$ Here FSep($G$) is the finitely separable quotient of $G$, obtained from $G$ by identifying any two vertices which cannot be separated by removing only finitely many edges from $G$. It would be interesting to explore the consequences of applying a similar process in other contexts, such as the simplicial setting. \end{document}
\begin{document} \frontmatter \title{Efficient and Robust Methods for Quantum Tomography} \author{Charles Heber Baldwin} \degreesubject{Ph.D., Physics} \degree{Doctor of Philosophy \\ Physics} \documenttype{Dissertation} \previousdegrees{B.S., Physics, Denison University, 2009 \\ M.S., Physics, Miami University, 2011} \date{December, 2016} \maketitle \makecopyright \setcounter{page}{2} \begin{acknowledgments} I would like to thank Amir Kalev who was my main collaborator for the research presented in this dissertation and who contributed equally to the results. He additionally gave me valuable feedback in writing this dissertation and encouragement along the way. I would also like to thank my advisor, Prof. Ivan Deutsch, for not only teaching me about physics, but also how to approach research and present ideas. I have greatly valued his mentorship during my time in graduate school. Much of the work in this dissertation is a result of collaboration between the experimental group of Prof. Poul Jessen with his students Hector Sosa-Martinez and Nathan Lysne. I have appreciated our discussions of different aspects of quantum tomography and control in an experimental setting and the opportunity to collaborate with them. I would also like to thank my fellow CQuIC members from whom I have learned so much in the last five years. Especially current and former members of Prof. Deutsch's group: Carlos Riofr\'{i}o, Ezad Shojaee, Leigh Norris, Ben Baragiola, Rob Cook, and Bob Keating. Outside of Prof. Deutsch's group, Jacob Miller and Andy Ferdinand have provided me with valuable feedback for presentations and enlightening conversations about quantum information. I would also like to recognize the contributions of Profs. Carl Caves, Elohim Becerra, and Akimasa Miyake. Each of these professors has taught me a great deal in both the courses they have taught and in more informal discussions. I would finally like to thank all of the people outside of CQuIC who have helped me complete the program. Fellow UNM graduate students, Mark Gorski and Ken Obenberger have been great friends for the last 5 years. Also, my parents, Frank and Barbara, who have encouraged and supported me through all levels of my academic journey. Finally, and most importantly, I would like to thank my girlfriend Sarah for here encouragement and support in the last 2$+$ years, and especially in the last few months while I wrote my dissertation. \end{acknowledgments} \thispagestyle{plain} \maketitleabstract \begin{abstract} The development of large-scale platforms that implement quantum information processing protocols requires new methods for verification and validation of quantum behavior. Quantum tomography (QT) is the standard tool for diagnosing quantum states, process, and readout devices by providing complete information about each. However, QT is limited since it is expensive to not only implement experimentally, but also requires heavy classical post-processing of experimental data. In this dissertation, we introduce new methods for QT that are more efficient to implement and robust to noise and errors, thereby making QT a more widely practical tool for current quantum information experiments. The crucial detail that makes these new, efficient, and robust methods possible is prior information about the quantum system. This prior information is prompted by the goals of most experiments in quantum information. Most quantum information processing protocols require pure states, unitary processes, and rank-1 POVM operators. 
Therefore, most experiments are designed to operate near this ideal regime, and have been tested by other methods to verify this objective. We show that when this is the case, QT can be accomplished with significantly fewer resources, and produce a robust estimate of the state, process, or readout device in the presence of noise and errors. Moreover, the estimate is robust even if the state is not exactly pure, the process is not exactly unitary, or the POVM is not exactly rank-1. Such compelling methods are only made possible by the positivity constraint on quantum states, processes, and POVMs. This requirement is an inherent feature of quantum mechanics, but has powerful consequences to QT. Since QT is necessarily an experimental tool for diagnosing quantum systems, we discuss a test of these new methods in an experimental setting. The physical system is an ensemble of laser-cooled cesium atoms in the laboratory of Prof. Poul Jessen. The atoms are prepared in the hyperfine ground manifold, which provides a large, 16-dimensional Hilbert space to test QT protocols. Experiments were conducted by Hector Sosa-Martinez {\em et al.}~\cite{SosaMartinez2016} to demonstrate different QT protocols. We compare the results, and conclude that the new methods are effective for QT. \comment{ The development of large-scale platforms that implement quantum information processing protocols requires new methods for verification and validation of quantum behavior. Quantum tomography (QT) is the standard tool for diagnosing quantum states, process, and readout devices by providing complete information about each. However, QT is limited in two main ways: (1) it is expensive to not only implement experimentally, but also requires heavy classical post-processing of experimental data and (2) it is susceptible to noise and errors in the experimental system. Therefore, the applicability of QT is limited to small systems where there are low levels of noise and errors. In this dissertation, we introduce new methods for QT that are more efficient to implement and robust to noise and errors, thereby making QT a more widely practical tool for current quantum information experiments. The crucial detail that makes these new, efficient, and robust methods possible is prior information about the quantum system. This prior information is prompted by the goals of most experiments in quantum information. Most quantum information processing protocols require pure states, unitary processes, and rank-1 POVM operators. Therefore, most experiments are designed to operate near this ideal regime, and have been tested by other methods, e.g. randomized benchmarking, to verify this objective. We show that when this is the case, QT can be accomplished with significantly fewer resources, and produce a robust estimate of the state, process, or readout device in the presence of noise and errors. Moreover, these methods produce a robust estimate even if the state is not exactly pure, the process is not exactly unitary, or the POVM is not exactly rank-1. Such compelling methods are only made possible by the positivity constraint on quantum states, processes, and POVMs. This requirement is an inherent feature of quantum mechanics, but has powerful consequences to QT. Since QT is necessarily an experimental tool for diagnosing quantum systems, we discuss a test of these new methods in an experimental setting. The physical system is an ensemble of laser-cooled cesium atoms in the laboratory of Prof. Poul Jessen. 
The atoms are prepared in the hyperfine ground manifold, which provides a large, 16-dimensional Hilbert space to test QT protocols. Experiments were conducted by Hector Sosa-Martinez {\em et al.}~\cite{SosaMartinez2016} to demonstrate different QT protocols. We compare the results, and conclude that the new methods are effective for QT. } \end{abstract} \bgroup \hypersetup{linkcolor = blue} \tableofcontents \listoffigures \listoftables \egroup \mainmatter \chapter{Introduction} \label{ch:intro} Quantum information processors hold the promise to carry out powerful new protocols in communication, computation, and sensing~\cite{Nielsen2002}. However, quantum information processing devices are notoriously delicate, and sources of errors, such as decoherence or inexact controls can easily diminish the advantage of quantum information protocols. Therefore, it is essential to characterize quantum systems in order to assure they are performing as expected, and to diagnose sources of errors. Quantum tomography (QT) is the standard method for diagnosing a quantum information processor, and is the focus of this dissertation. Originally, QT was proposed as a method to characterize a quantum state of light~\cite{Vogel1989}, and then generalized to estimate quantum states of arbitrary systems, in a protocol now called quantum state tomography (QST). Later, the methodology was extended to estimate quantum processes, or dynamical maps on quantum systems, as quantum process tomography (QPT)~\cite{Chuang1997}, and quantum readout devices as quantum detector tomography (QDT)~\cite{Luis1999}. Therefore, QT can be used to produce estimates of the three major components that make up a quantum informational processor: state preparation, evolution, and readout. QT protocols have two steps: measurement and estimation. Through measurements, one probes the quantum system to produce data that characterizes the component in question while estimation is the procedure to use this data to build a characterization of the state, process, or readout device. Most theoretical proposals for measurements and estimation procedures have been developed in the context of QST. There have been several different measurements proposed in this context. The original work used homodyne detection to reconstruct the Wigner function that describes a quantum state in a continuous variable representation~\cite{Vogel1989}. Other work proposed measurement schemes for finite dimensional systems such as single-qubits~\cite{Hradil2000}, two-qubits~\cite{James2001}, arbitrary spins~\cite{Newton1968}, and general qudits~\cite{Thew2002}. More recently, constructions based on symmetric mathematical properties have been proven optimal for QST when the estimate is limited only by finite sampling~\cite{Scott2006}, such as the symmetric informationally complete (SIC) POVM~\cite{Renes2004a} and a set of mutually unbiased bases (MUB)~\cite{Wootters1989}. Estimation techniques for QST have benefited from developments in numerical optimization. The first proposals for quantum state estimation used classical methods to estimate the Wigner function~\cite{Vogel1989,Smithey1993}, or elements of the density matrix~\cite{DAriano1994}. However, these techniques did not produce a ``physical'' quantum state, that is the estimated state was not positive and/or unit trace. Therefore, the estimate cannot be used to predict future outcomes of the experiment, since it will, by definition, predict unphysical results for some measurements. 
With the advance of numerical methods, the physicality constraints can now be incorporated into estimation protocols. The first such protocol was maximum likelihood (ML)~\cite{Hradil1997}, which made use of the classical likelihood principle to determine the most likely state that produced the data within the set of quantum states. Later, a simplification of the ML estimator was proposed, called least-squares~\cite{James2001}, which approximates the likelihood function when there is Gaussian distributed noise. QT has also been implemented in a variety of different physical systems. The original proposal of using homodyne detection to reconstruct the Wigner function of a single mode of light was implemented in Ref.~\cite{Smithey1993}. Since then, QST has been demonstrated in many different experimental platforms, for example, atomic ions~\cite{Roos2004,Home2006}, atomic spins of neutral atoms~\cite{Klose2001,Smith2013a,SosaMartinez2016}, orbital angular momentum modes of light~\cite{Giovannini2013,Bent2015}, and superconducting qubits~\cite{Steffen2006}. QPT has also been used to characterize the processes in many different systems, such as entangling gates with trapped atomic ions~\cite{Riebe2006} and optical systems~\cite{OBrien2004,Mitchell2003}, the motion of atoms in an optical lattice~\cite{Myrskog2005}, and three qubits in NMR~\cite{Weinstein2004}. Applications of QDT are more recent and primarily focus on characterizing detectors in optical systems~\cite{Lundeen2008,Zhang2012,Cooper2014,Brida2012,Humphreys2015} Despite the promising theoretical and experimental work in QT, as well as the tremendous potential for QT as a diagnostic tool, it still faces two major difficulties that limit its future practicality. First, in any experiments there are sources of noise and errors in the implementation that can make the estimate inaccurate. Most importantly, to accomplish QT with high reliability, we must assume some parts of the quantum system are working perfectly. For example, with QPT, we assume that we can perfectly prepare quantum states and measurements to probe the unknown quantum process. However, in practice this will never be the case and errors in state preparation and measurement (commonly referred to as SPAM errors~\cite{Gambetta2012}) will limit the performance. Second, it is expensive both to perform the necessary measurements and to produce an estimate with classical post-processing of the data. This is especially true for large systems (e.g. of order 10 qubits), which are more typical of modern day experiments. In order for QT to be a useful strategy for the future of quantum information processing, these two issues must be addressed. Recent work on QT has focused on both of these challenges. In order to deal with errors in the implementation, new techniques have been proposed that do not require one to assume some parts of the system are working perfectly~\cite{Merkel2012,Blume-Kohout2013,Greenbaum2015,Jackson2015}. These techniques can be understood as a combination of all three types of QT: state, process, and detector. For example, one method, called gate-set tomography (GST)~\cite{Blume-Kohout2013,Greenbaum2015}, only requires a finite set of quantum processes, or gates, that can be repeated consistently. Then, in GST the experimenter implements the set of gates in different orders and collects data from the outcomes. The data is used to reconstruct a description of each gate, thus accomplishing QPT for each gate. 
The procedure does not require a known set of quantum states or detectors, and therefore is not susceptible to SPAM errors. However, these types of methods require more measurements of the quantum system. Therefore, while these methods do not suffer from SPAM errors, they are even more limited by the size of the system than standard methods. There has also been a considerable number of new proposals for specialized diagnostic schemes, unrelated to QT, that are independent of SPAM errors. Most notably is randomized benchmarking (RB), which is a protocol for measuring the average performance of a set of quantum processes~\cite{Knill2008,Magesan2012}. RB has also been expanded to measure the performance of a particular process~\cite{Magesan2012a} and to estimate other parameters that describe a particular quantum process~\cite{Kimmel2014,Kimmel2015}. RB is now a common procedure for verifying the performance of quantum systems in many laboratories~\cite{Gaebler2012,Anderson2015, Smith2012,Olmschenk2010,Chow2009}. There exist other techniques, such as phase estimation, which measures a few components that define a quantum readout device~\cite{Kimmel2015a}. However, all specialized diagnostic techniques only characterize a few parameters that describe the quantum system, such as the average fidelity of a process, and most do not give information about particular errors that occurred. Consequently, there is still a need for QT in an experimental setting. In order to make QT a more practical tool, new methods have been proposed that take advantage of prior information about the quantum system to reduce the resources required. For example, most quantum information processing protocols require pure states, so theoretical methods have been designed to reconstruct pure states that require less resources than standard techniques~\cite{Carmeli2014,Carmeli2015,Chen2013,Flammia2005,Finkelstein2004,Goyeneche2015,Heinosaari2013,Kech2015a,Ma2016}. However, these methods have no guarantees on performance in the presence of noise and errors, and in an experiment such circumstances will necessarily exist. Another closely related technique for efficiently estimating pure quantum states is called quantum compressed sensing~\cite{Gross2010,Flammia2012,Liu2011,Kalev2015}. Quantum compressed sensing is based on the classical technique of compressed sensing, which allows for the estimation of low-rank matrices or sparse signals more efficiently than classical limits~\cite{Candes2006,Candes2008}. Quantum compressed sensing estimates low-rank quantum states, such as pure states, more efficiently than standard techniques by using a set of special measurements~\cite{Liu2011} and a specific optimization program~\cite{Gross2010, Flammia2012}. This protocol offers the advantage that the estimate produced is provably robust to noise and errors~\cite{Gross2010,Flammia2012}. Quantum compressed sensing has been experimentally demonstrated for QST~\cite{Smith2013a,Tonolini2014,Liu2012} and QPT~\cite{Shabani2011,Rodionov2014}. However, the technique has limited practicality since it requires special measurements and optimization programs. One goal of this dissertation is to show how these two techniques for efficient QT with prior information fit into a general, more flexible framework. Even with new proposals, like GST~\cite{Blume-Kohout2013,Greenbaum2015} and quantum compressed sensing~\cite{Gross2010,Flammia2012}, QT is unsuitable for many experiments. 
GST and related methods are independent of SPAM errors, but only feasible for small systems (e.g. at most two-qubits). Efficient methods can work for larger systems, but may not perform well in the presence of noise and errors, or require special types of measurements and estimation. Experimental efforts have pushed the size of typical quantum systems beyond a few qubits, and therefore there is a growing demand for QT protocols that are flexible to a particular implementation while still being robust to noise and errors in these regimes. In this dissertation, we develop measurements and estimation techniques that are more efficient to implement for larger quantum systems, robust to any type of noise and errors, and are flexible to suit a given experiment. The fundamental aspect that allows for the creation of such measurements is prior information about the physical system. In any experimental implementation of a quantum information protocol, there is a wealth of prior information. Most quantum information protocols require pure states, coherent evolution, and projective measurements. Therefore, most experiments try to engineer systems that operate near these requirements. Through a variety of separate calibrations and experiments, such as RB~\cite{Johnson2015,Smith2013,Anderson2015} or phase estimation, one often has confidence that the experiment is operating near the desired regime before QT is performed. We show how to include this type of prior information into QST, QDT and QPT. We begin with a discussion of standard methods for QT in Chapter~\ref{ch:background}. We review previously proposed measurements and estimation techniques as well as formalize the effect of noise and errors on the three types of QT. In Chapter~\ref{ch:new_IC}, we consider QST with the prior information that the state is pure, or, more generally, close to pure. We define two types of measurements called complete and strictly-complete, that rely on different prior information. We further prove that the estimates derived from strictly-complete measurements are robust to all noise and errors. In Chapter~\ref{ch:constructions}, we present methods to construct both complete and strictly-complete measurements for QST, and provide examples of such measurements. We also show that strictly-complete measurements require roughly the same amount of resources as complete measurements. In Chapter~\ref{ch:PT}, we study how the complete and strictly-complete measurements can be generalized to QPT when there is prior information that the process is unitary. We simulate these methods for unitary QPT in the presence of errors and show how comparing different estimators can be used as a diagnostic tool. In Chapter~\ref{ch:experiment}, we consider an experimental implementation of QST and QPT in order to demonstrate the power of techniques described in Chapters~\ref{ch:new_IC}--\ref{ch:PT}. The platform involves ensembles of laser-cooled cesium atoms in which quantum information is encoded in the spin of each atom. The large nuclear spin of cesium, together with the electron spin, leads to a large dimensional Hilbert space and thus provides a rich test-bed in which to explore QT protocols. The spins are controlled by four separate magnetic fields which allow for a variety of different evolutions and measurements. We discuss how different methods for measurement and estimation perform in this system, and draw conclusions on the best ways to implement QT. 
Finally in Chapter~\ref{ch:conclusions}, we offer conclusions on the methods discussed as well as an outlook to future work in QT. The dissertation follows the following published articles and manuscripts in preparation: \begin{table}[ht] \centering \def2{1} \begin{tabular}{lll} \cline{1-3} \multicolumn{1}{|c|}{Reference} & \multicolumn{1}{c|}{Authors} & \multicolumn{1}{c|}{Chapter} \\ \hline \multicolumn{1}{|l|}{PRA {\bf 93}, 052105 (2016)} & \multicolumn{1}{c|}{CHB, I. H. Deutsch, and A. Kalev} & \multicolumn{1}{c|}{Ch.~\ref{ch:new_IC} and \ref{ch:constructions}} \\ \hline \multicolumn{1}{|l|}{PRA {\bf 90}, 012110 (2014)} & \multicolumn{1}{c|}{CHB, A. Kalev and I.H. Deutsch} & \multicolumn{1}{c|}{Ch.~\ref{ch:PT}} \\ \hline \multicolumn{1}{|l|}{\em in preparation} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}H. Sosa-Martinez, N. Lysne,\\ CHB, A. Kalev, \\I. H. Deutsch, and P. S. Jessen \end{tabular}} & \multicolumn{1}{c|}{Sec.~\ref{sec:ST_exp}} \\ \hline \multicolumn{1}{|l|}{\em in preparation} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}H. Sosa-Martinez, N. Lysne, \\CHB, A. Kalev,\\ I. H. Deutsch, and P. S. Jessen \end{tabular}} & \multicolumn{1}{c|}{Sec.~\ref{sec:PT_exp}} \\ \hline \multicolumn{1}{|l|}{arXiv:1607.03169} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}} T. Keating, CHB, \\ Y.-Y. Jau, J. Lee, \\ G. W. Biedermann, and I. H. Deutsch \end{tabular}} & \multicolumn{1}{c|}{} \\ \hline \end{tabular} \caption[Work published and in preparation, with reference to text]{{\bf Work published and in preparation, with reference to text.} } \label{tbl:full_IC} \end{table} \chapter{Standard methods for quantum tomography} \label{ch:background} Quantum tomography (QT) is a well established protocol for characterizing the three components of a quantum information processor: state preparation, evolution, and readout. In this chapter, we formally describe these components and then introduce the standard methods for QT. We divide the discussion into two regimes. First, an ideal setting where the states, evolutions, and readout devices are errorless and we have direct access to the probability of each measurement outcome. Second, the realistic setting where noise and errors exist in all components. Any experimental implementation will necessarily fall into the second regime, so the first regime serves as mathematical tool to establish the framework for QT. \section{Quantum information processing devices} \label{sec:components} A quantum information processing device can be broken into three components: state preparation, evolution, and readout. In an experiment, the quantum system is usually prepared in some fiducial state by cooling. The system is then evolved with external control fields, such as electromagnetic fields. After the evolution, the system is measured by coupling to an ancilla system and then reading out the values of the ancilla. In this section, we describe each component and present mathematical descriptions. In the following discussion, we denote the $d$-dimensional Hilbert space that describes the quantum system as $\mathcal{H}_d$. We will also describe linear operators on the Hilbert space that are elements of the Hilbert space $\mathcal{H}_{d^2}$, which we refer to as the operator space. In analogy to Dirac notation, we describe elements of $\mathcal{H}_{d^2}$ as ``rounded kets,'' $| \cdot )$. The procedure to take an operator, which is represented as a $d \times d$ matrix to a rounded ket, which is represented by a $d^2 \times 1$ vector, is called vectorization. 
This can be accomplished in many ways but the most common is a ``stacking'' of the matrix columns, \begin{equation} A = \begin{pmatrix} a_{1,1} & a_{1,2} & \cdots & a_{1,d} \\ \vdots & \vdots &\ddots & \vdots \\ a_{d,1} & a_{d,2} & \cdots& a_{d,d} \end{pmatrix} \rightarrow | A) = \begin{pmatrix} a_{1,1} \\ \vdots \\ a_{d,1} \\ a_{1,2} \\ \vdots \\ a_{d,2} \\ \vdots \\ a_{1,d} \\ \vdots \\ a_{d,d} \end{pmatrix}. \end{equation} Vectorization preserves the trace inner product such that $\textrm{Tr}(A^{\dagger} B) = ( A | B)$. We denote $\{ \Upsilon_{\alpha} \}$ as an arbitrary orthonormal basis on operator space. One useful choice is the Hermitian basis, $\{ H_{\alpha} \}$, where $H_{0} = \mathds{1}/\sqrt{d}$ and $H_{\alpha >0}$ are traceless Hermitian matrices. In the Hermitian basis, Hermitian operators are represented by real vectors, where $a_0 = \textrm{Tr}(\rho H_0) = \frac{1}{\sqrt{d}} \textrm{Tr}(\rho)$. We may apply a $d^2 \times d^2$ unitary map to change the basis of the rounded ket in the operator space. For example, the unitary, $V = \sum_{\alpha} |H_{\alpha})( \Upsilon_{\alpha} |$ maps a vectorized operator from a given basis $\{ \Upsilon_{\alpha} \}$ to the Hermitian basis. \subsection{State preparation} \label{ssec:states_intro} State preparation is the first step in a quantum information processing. A quantum state is mathematically described by density operator, $\rho$, which is represented by a positive semi-definite (PSD), $\rho \geq 0$, and trace one, $\textrm{Tr}(\rho) = 1$, matrix. The set of all quantum states is the convex set $\mathcal{Q} = \Set{ \rho | \rho \geq 0, \, \textrm{Tr}(\rho) = 1}$. We will make many definitions with respect to the set of all PSD matrices, labelled as $\mathcal{S} = \Set{S | S \geq 0}$, which is also convex, such that $\mathcal{Q} \subset \mathcal{S}$. We refer to the PSD constraint as ``positivity'' and it will play an important part in later chapters. We use the greek letters, $\rho, \, \sigma,$ and $\tau$ to represent quantum states and the capital letters $S$ or $X$ to represent an arbitrary PSD matrix. An arbitrary quantum state is specified by $d^2-1$ free parameters (real numbers) because quantum states are elements of the operator space but are constrained to have unit trace. \subsection{Quantum evolution} \label{ssec:process_intro} Once the system is prepared in the desired quantum state, external control is applied to evolve the state. The external control produces a quantum process, $\mathcal{E}[\cdot]$, or dynamical map, on the quantum state. Mathematically, we consider such maps that satisfy two conditions, complete positivity (CP) and trace preserving (TP)~\cite{Nielsen2002}. To understand CP maps, first let us define a positive map. A positive map, which is applied to a quantum state $\rho^{\rm in}$, produces an output state, $\rho^{\rm out} = \mathcal{E}[\rho^{\rm in}]$ that is positive, i.e. if $\rho^{\rm in} \geq 0$ then $\rho^{\rm out} \geq 0$. A completely positive (CP) map satisfies the same definition as a positive map but additionally maintains the positivity of any bipartite state $\rho_{AB}$ when $\mathcal{E}$ acts on one subsystem, i.e. if $\rho_{AB} \geq 0$ then $(\mathcal{E}_A \otimes \mathcal{I}_B) [ \rho_{A,B} ]\geq 0$. A TP map is a map that preserves the trace of the quantum state, i.e. $ \textrm{Tr}\left( \rho \right) = \textrm{Tr} \left( \mathcal{E}[\rho] \right) $. There are many ways of representing a quantum process but we focus on two methods. 
First, we consider the Kraus representation, \begin{equation} \label{Kraus_rep} \mathcal{E}[\rho] = \sum_{\mu} A_{\mu} \rho A_{\mu}^{\dagger}. \end{equation} where $A_{\mu}$ are called Kraus operators. By construction, the Kraus representation describes a CP map. If the Kraus operators resolve the identity, \begin{equation} \label{TP_const} \sum_{\mu} A_{\mu}^{\dagger} A_{\mu} = \mathds{1} , \end{equation} then the map is also TP. The Kraus representation is not unique; a given map can be described by infinitely many different sets of Kraus operators. A special type of CPTP map is a unitary map. A unitary map preserves the eigenvalues of the density operator. Unitary maps have a single Kraus operator, $U$, which is represented by a unitary matrix, and therefore satisfies Eq.~\eqref{TP_const}. Another representation that will be important for QT is the process matrix, which is a $d^2 \times d^2$ matrix denoted $\chi$. We can relate the process matrix to the Kraus representation by expanding each Kraus operator in a basis on operator space, $\{ \Upsilon_{\alpha} \}$, where $\textrm{Tr}(\Upsilon_{\alpha} \Upsilon_{\beta})= \delta_{\alpha, \beta}$ and $\alpha, \beta = 1,\dots, d^2$. Then writing the Kraus operators in this basis gives $A_{\mu} = \sum_{\alpha} a_{\alpha, \mu} \Upsilon_{\alpha}$, where $a_{\alpha, \mu} = \textrm{Tr}(A_{\mu} \Upsilon_{\alpha})$ is a complex expansion coefficient. Applying this expansion to Eq.~\eqref{Kraus_rep} gives, \begin{equation} \mathcal{E} [ \rho ] = \sum_{\alpha, \beta} \chi_{\alpha, \beta} \Upsilon_{\alpha} \rho \Upsilon_{\beta}^{\dagger}, \end{equation} where the expansion coefficients have been grouped to $\chi_{\alpha,\beta} = \sum_{\mu} a_{\mu, \alpha} a_{\mu,\beta}^*$, which defines the elements of the process matrix. By construction $\chi$ is Hermitian and when $\chi \geq 0$ it corresponds to a CP map. We apply the expansion to Eq.~\eqref{TP_const} to write the TP constraint in terms of $\chi$, \begin{equation} \label{TP_const_chi} \sum_{\alpha, \beta} \chi_{\alpha, \beta} \Upsilon_{\beta}^{\dagger} \Upsilon_{\alpha} = \mathds{1}. \end{equation} An arbitrary CP map is specified by $d^4$ real numbers, which is made clear in the process matrix representation since $\chi$ is a $d^2 \times d^2$ Hermitian matrix. If the map is TP, then there are an additional $d^2$ linear constraints on the process matrix given in Eq.~\eqref{TP_const_chi}. Therefore, the number of free parameters that describes an arbitrary CPTP map is $d^4-d^2$. \subsection{Information readout} \label{ssec:detector_intro} After evolution, one needs to read out the desired information about the final state in order to determine the result of the quantum information protocol. Readout is typically accomplished by coupling the quantum system to an ancilla system and then measuring the ancilla~\cite{Nielsen2002}. The result is described mathematically by a POVM, which is a set of operators, called POVM elements. An ancilla may have several different orthonormal states that correspond to different outcomes of the measurement. We index the outcomes with $\mu$ and the probability of getting an outcome, $\mu$, is described by a POVM element, which is a positive operators $E_{\mu} \geq 0$. The POVM elements are represented by PSD matrices that resolve the identity, $\sum_{\mu} E_{\mu} = \mathds{1}$. We focus on POVMs with a finite number of $N$ elements but mathematically a POVM may have infinite (even continuous) elements. 
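As a quick numerical illustration of these two defining constraints (a minimal sketch; the specific three-outcome qubit POVM below is an assumed example, not one used in the experiments described later), the following checks positivity of each element and the resolution of the identity:
\begin{verbatim}
import numpy as np

# Minimal sketch (assumed example): check that a candidate set {E_mu} is a
# valid POVM, i.e. each element is PSD and the set resolves the identity.
# The three-outcome "trine" qubit POVM used here is purely illustrative.
thetas = [0, 2 * np.pi / 3, 4 * np.pi / 3]
kets = [np.array([np.cos(t / 2), np.sin(t / 2)]) for t in thetas]
E = [(2 / 3) * np.outer(k, k.conj()) for k in kets]

assert all(np.all(np.linalg.eigvalsh(Em) >= -1e-12) for Em in E)  # E_mu >= 0
assert np.allclose(sum(E), np.eye(2))                             # sum_mu E_mu = identity
\end{verbatim}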
The POVM can also be expressed by a $N \times d^2$ matrix, referred to as the POVM matrix, \begin{equation} \Xi \triangleq \begin{pmatrix} \leftarrow & ( E_{1} | &\rightarrow \\ & \vdots & \\ \leftarrow & ( E_{N} | &\rightarrow \\ \end{pmatrix}, \end{equation} The POVM matrix, $\Xi$, maps elements of the operator space to an $N$-dimensional vector space. When $\Xi$ acts on positive operators, i.e. vectorized PSD matrices, the $N$-dimensional vector space is real. When $\Xi$ acts on a quantum state, i.e. vectorized PSD matrices with unit trace, the result is probabilities of the different possible measurement outcomes, $\Xi | \rho ) = \bm{p}$. A single POVM can be described by $(N-1)d^2$ free parameters. This corresponds to the $d^2$ real numbers that describes each one of the $N$ POVM elements. The identity resolution condition consists of a set of $d^2$ linear constraints that relate the $N$ POVM elements. There are many different types of POVMs. A particular example we will use throughout this dissertation is a ``basis'' measurement. A basis measurement is a POVM consisting of $d$, rank-1 orthonormal elements, $\textrm{Tr}(E_{\mu} E_{\nu}) = \delta_{\mu, \nu}$. This is the familiar case of measurement of a Hermitian observable, whose measurement outcomes correspond to its (non-degenerate) eigenvalues. We denote a basis measurement by its eigenvectors, \begin{equation} \mathbbm{B} = \{ \ket{e_0}, \dots, \ket{e_{d-1}} \}, \end{equation} which has corresponding POVM elements, $E_{\mu} = | e_{\mu} \rangle \langle e_{\mu} |$. In QT, we often require multiple readout devices. This corresponds to a collection of POVMs. We will use an additional subscript, $b$, to denote the POVM, and $v$ to denote the POVM element, $E_{b,v}$. A collection of $B$ POVMs is also a POVM, however we must normalize the POVM elements, $E_{b,v} \rightarrow \frac{1}{B} E_{b,v}$ so that they resolve the identity, $ \sum_{b,v} \frac{1}{B} E_{b,v} = \mathds{1}$. \subsection{The Born rule} A quantum information processor makes use of the three different components in a given experiment. The combination produces an outcome with probability expressed mathematically by the Born rule, \begin{equation} \label{Born_rule} p_{\mu} = \textrm{Tr}\left( E_{\mu} \mathcal{E}[\rho] \right). \end{equation} The Born rule establishes a linear relationship between the probability of each outcome and the mathematical description of the state, process, or readout device. We organize these probabilities into a vector $\bm{p} = [ \textrm{Tr}(E_{1} \mathcal{E}[\rho]), \dots, \textrm{Tr}(E_{N} \mathcal{E}[\rho]) ]$, referred to as the probability vector. We previously constrained $\rho$, $\mathcal{E}$, and $E_{\mu}$ to be positive (or CP for the process) and have linear constraints related to the trace. These constraints were necessary to ensure that the Born rule return probabilities. The positivity constraints ensure that all $p_{\mu}$'s are positive while the trace and resolution of the identity constraints, assure that $\sum_{\mu} p_{\mu} = 1$. In any real experiment, it is impossible to determine the probabilities $\{ p_{\mu} \}$ exactly due to finite sampling limits and experimental sources of errors. We return to these issues in a later section but for now, we consider the unrealistic case where we have access to $\{ p_{\mu} \}$. This idealization allows us to define an important notion in QT, known as informational completeness. 
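Before turning to the ideal protocols, a minimal numerical sketch may help fix the notation (the qubit state and the choice of bases below are assumed examples): it builds the POVM matrix $\Xi$ from a collection of basis measurements, applies it to a vectorized state, and checks that the result agrees with the Born rule, Eq.~\eqref{Born_rule}.
\begin{verbatim}
import numpy as np

# Minimal sketch (assumed example, d = 2): column-stacking vectorization,
# the POVM matrix Xi, and Born-rule probabilities p = Xi |rho).
def vec(A):                      # column stacking: |A)
    return A.reshape(-1, order='F')

d = 2
# A collection of B = 3 qubit basis measurements, normalized by 1/B.
kets = [np.array([1, 0]), np.array([0, 1]),
        np.array([1, 1]) / np.sqrt(2), np.array([1, -1]) / np.sqrt(2),
        np.array([1, 1j]) / np.sqrt(2), np.array([1, -1j]) / np.sqrt(2)]
E = [np.outer(k, k.conj()) / 3 for k in kets]
assert np.allclose(sum(E), np.eye(d))           # resolution of the identity

Xi = np.array([vec(Em).conj() for Em in E])     # rows (E_mu|, so (E_mu|rho) = Tr(E_mu rho)
rho = np.array([[0.75, 0.25], [0.25, 0.25]])    # an arbitrary valid density matrix
p = (Xi @ vec(rho)).real

print(np.all(p >= -1e-12), np.isclose(p.sum(), 1.0))           # probabilities sum to Tr(rho)
print(np.allclose(p, [np.trace(Em @ rho).real for Em in E]))   # matches the Born rule
\end{verbatim}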
\section{Ideal quantum tomography} \label{sec:full-IC}
There exists a QT protocol to reconstruct a mathematical description of each part of a quantum information processor (state preparation, evolution, and readout), respectively called state, process, and detector tomography. In each protocol, one of the components is unknown, but we assume complete knowledge of the other two. For example, in quantum state tomography, the quantum state is unknown but we assume knowledge of all quantum evolutions and readout devices. To accomplish standard QT, we probe the unknown component to determine a measurement vector. In this section, we describe an ideal version of QT, where the measurement vector is the probability vector defined by the Born rule in Eq.~\eqref{Born_rule}. With the prior knowledge of the other components, the Born rule establishes a linear relationship between the measurement vector and the unknown component. However, not all methods of probing the unknown component are sufficient to characterize it completely. In order to reconstruct a description of a quantum state, process, or readout device, we need to fully characterize all free parameters. When the probabilities provide information about all the free parameters, we call them {\em fully informationally complete} (full-IC). In this section, we describe full-IC methods for state, process, and detector tomography in the ideal setting.
\subsection{Ideal quantum state tomography} \label{ssec:noiseless_ST}
In quantum state tomography (QST), we measure an unknown quantum state with a POVM. The probability of each outcome is found from the Born rule, $p_{\mu} = \textrm{Tr}[E_{\mu} \rho]$. For convenience, we sometimes notate the POVM as a map from density matrix space to probabilities, $\mathcal{M}[\rho] = (\textrm{Tr}[E_{1} \rho], \dots, \textrm{Tr}[E_{N} \rho] )= \bm{p}$. A full-IC POVM uniquely identifies the $d^2-1$ free parameters that describe an arbitrary quantum state. A mathematical definition of a full-IC POVM for QST is given below.
\begin{definition}{\bf (Fully informationally complete, QST)} \label{def:full-IC} Let $\mathcal{Q}=\{ \rho: \rho\geq0,\, {\rm Tr}(\rho) = 1 \}$ be the set of all quantum states. A POVM is said to be fully informationally complete if \begin{equation} \forall \, \rho_1, \rho_2 \in \mathcal{Q}, \, \rho_1\neq \rho_2 \textrm{ iff }\mathcal{M}[ \rho_1] \neq \mathcal{M}[\rho_2]. \end{equation} \end{definition}
\noindent We can determine when a POVM is full-IC by the POVM matrix, $\Xi$. In vectorized form, the Born rule is, \begin{equation} \label{ST_lin} \bm{p} = \Xi | \rho ). \end{equation} When $\Xi$ is invertible, that is $\Xi^+ = (\Xi^{\dagger} \Xi)^{-1} \Xi^{\dagger}$\footnote{The superscript ``$^+$'' denotes the left inverse since, in principle, there may be more than $d^2$ POVM elements so the POVM matrix is not square, and the standard inverse does not apply.} exists, all elements of $\rho$ are uniquely determined. This is only possible if there are $d^2$ linearly independent POVM elements.\footnote{One might then think that in order for a POVM to be full-IC, $\textrm{rank}(\Xi) = d^2-1$, since this is the number of free parameters that describe an arbitrary quantum state. However, due to the identity resolution constraint, $\sum_{\mu} p_{\mu} = \sum_{\mu} \textrm{Tr}(E_{\mu} \rho) = \textrm{Tr}(\rho)$, i.e. the sum of all probabilities is equal to the trace of the quantum state for all POVMs. Therefore, all POVMs measure the trace of a quantum state.
This overlaps with the trace constraint, and therefore the trace constraint does not reduce the number of POVM elements required.} Two important examples of full-IC POVMs are the symmetric informationally complete (SIC) POVM~\cite{Renes2004a} and the set of mutually unbiased bases (MUB)~\cite{Wootters1989}. The SIC POVM is a single POVM with $d^2$ rank-1 POVM elements. The POVM elements have a constant inner product such that they are symmetrically separated in operator space. The inner product is defined as \begin{equation} \label{SIC_def} \textrm{Tr}[E_{\mu} E_{\nu} ] = \frac{1}{d^2(d+1)}, \, \, \mu \neq \nu, \end{equation} and $\textrm{Tr}[E_{\mu}^2] = \frac{1}{d^2}$. The MUB consist of $B = d+1$ basis measurements. A measurement of one of the bases projects the quantum state into an unbiased state with respect to the other bases. For example, in $d = 2$, the bases that make up the MUB consist of the eigenvectors of the well known set of Pauli matrices. If we measure the basis corresponding to $\sigma_z$, the resulting state is either $\ket{\uparrow_z}$ or $\ket{\downarrow_z}$. Therefore, if we measure this state with the basis corresponding to $\sigma_x$ (or to $\sigma_y$), we have equal probability of getting each of the possible outcomes. The unbiased nature of the measurement outcomes is defined by the inner product relation (where each POVM element is normalized such that $\sum_{b,v} E_{b,v} = \mathds{1}$), \begin{equation} \label{MUBs_def} \textrm{Tr}(E_{b,v} E_{b',v'}) = \begin{cases} \frac{\delta_{v,v'}}{(d+1)^2} & \textrm{if } b = b' \\ \frac{1}{d (d+1)^2} & \textrm{if } b\neq b' \end{cases} \end{equation}
\subsection{Ideal quantum detector tomography} \label{ssec:noiseless_DT}
The goal of quantum detector tomography (QDT) is to determine the unknown POVM that describes the readout device. To accomplish this, we probe the POVM with a set of $M$ known quantum states, which we organize into a $d^2 \times M$ matrix $\Theta$, defined as, \begin{equation} \label{Theta} \Theta = \begin{pmatrix} \uparrow & &\uparrow \\ |\rho_1)& \cdots & | \rho_M) \\ \downarrow & &\downarrow \\ \end{pmatrix}. \end{equation} Then the Born rule can be written as the linear matrix relation, $P = \Xi \Theta$, where the elements of the matrix $P_{\mu, \nu} = \textrm{Tr}[E_{\mu} \rho_{\nu}]$ are the conditional probability of getting outcome $\mu$ given the state $\nu$. A mathematical definition of full-IC for QDT is similar to Definition~\ref{def:full-IC} but applies to the set of probing states. The collection of states is full-IC when the matrix $P$ uniquely identifies every POVM element. This occurs when $\Theta$ is invertible, i.e. $\Theta^+ = \Theta^{\dagger} (\Theta \Theta^{\dagger})^{-1}$ exists. For $\Theta$ to be invertible the POVM must be probed with $d^2$ linearly independent quantum states. For example, the set of $d^2$ pure states, \begin{align} \label{standard_QPT_states} &\ket{k},\textrm{ for } k=1,\ldots,d, \nonumber \\ &\frac{1}{\sqrt{2}}(\ket{k}+\ket{n}),\textrm{ for } k=1,\ldots,d-1,\textrm{ and }n=k+1,\ldots,d,\nonumber \\ &\frac{1}{\sqrt{2}}(\ket{k}+ {\rm i} \ket{n}),\textrm{ for } k=1,\ldots,d-1,\textrm{ and }n=k+1,\ldots,d, \end{align} are linearly independent~\cite{Chuang1997}. No matter how many elements there are in a given POVM, ideal QDT only requires $d^2$ linearly independent states to be full-IC. This is because applying the unknown POVM matrix to a single state produces an $N \times 1$ probability vector.
Each element in the probability vector relates to one free parameter in each of the $N$ POVM elements. We could also accomplish QDT by characterizing each POVM element independently, \begin{equation} \label{DT_lin} \bm{p}^{\top} = (E_{\mu} | \Theta. \end{equation} Similar to Eq.~\eqref{ST_lin}, we can solve for $(E_{\mu}|$ when $\Theta$ is invertible. This technique is advantageous when there are many POVM elements, which may make it computationally expensive to store the matrix $\Xi$.
\subsection{Ideal quantum process tomography} \label{ssec:noiseless_PT}
The goal of quantum process tomography (QPT) is to determine the unknown quantum process. To accomplish this, we prepare a set of known quantum states and evolve them with the unknown process. The output states from the unknown process are then determined by a full-IC POVM. By Eq.~\eqref{Born_rule}, \begin{align} \label{PT_relation} p_{\mu, \nu} &= \textrm{Tr} \left[ E_{\mu} \sum_{\alpha, \beta} \chi_{\alpha, \beta} \Upsilon_{\alpha} \rho_{\nu} \Upsilon_{\beta}^{\dagger} \right], \nonumber \\ &= \sum_{\alpha, \beta} \chi_{\alpha, \beta} \textrm{Tr} \left[ E_{\mu} \Upsilon_{\alpha} \rho_{\nu} \Upsilon_{\beta}^{\dagger} \right], \nonumber \\ &= \textrm{Tr} \left[ \mathpzc{D}_{\mu, \nu} \chi \right], \end{align} where $(\mathpzc{D}_{\mu, \nu})_{\beta, \alpha} \triangleq \textrm{Tr} \left[ E_{\mu} \Upsilon_{\alpha} \rho_{\nu} \Upsilon_{\beta}^{\dagger} \right] $ are elements of a four dimensional array~\cite{Chuang1997}. We can also express the elements, $(\mathpzc{D}_{\mu, \nu})_{\beta, \alpha}$ in vectorized form, \begin{equation} \label{D_matrix} (\mathpzc{D}_{\mu, \nu})_{\beta, \alpha} = ( \Upsilon_{\beta} | E_{\mu} \Upsilon_{\alpha} \rho_{\nu} ) = ( \Upsilon_{\beta} |\rho_{\nu}^{\top} \otimes E_{\mu} | \Upsilon_{\alpha}), \end{equation} yielding $\mathpzc{D}_{\mu, \nu} \triangleq \rho_{\nu}^{\top} \otimes E_{\mu}$, which is an operator. The relation $| A X B) = B^{\top} \otimes A | X)$ is a property of the Kronecker product, ``$\otimes$''. This is similar to the relation found by the Choi-Jamio\l{}kowski isomorphism, which is another representation of a quantum process~\cite{Watrous2008}. If the probabilities, $\{ p_{\mu,\nu} \}$, uniquely identify an arbitrary process matrix then the states and POVMs are full-IC for QPT. The mathematical definition of full-IC for QPT is similar to Definition~\ref{def:full-IC}, but applies to the set of probing states and POVM elements. The result of Eq.~\eqref{PT_relation} is a linear relationship between the process matrix and the probabilities, similar to QST and QDT. We can also express this relationship in vectorized form, in analogy to Eq.~\eqref{ST_lin}: the process matrix is transformed to a $d^4 \times 1$ vector, the four-dimensional array $\mathpzc{D}$ becomes a $MN \times d^4$ matrix that operates on $| \chi )$ (we use the rounded bra-ket notation for simplicity even though $\chi$ is not an element of the operator space), and the probabilities form a $MN \times 1$ vector, \begin{equation} \label{PT_lin} \bm{p} = \mathpzc{D} | \chi ). \end{equation} If $\mathpzc{D}$ is invertible, then the solution $| \chi ) = \mathpzc{D}^+ \bm{p}$ is unique, so the states and POVM that determine $\mathpzc{D}$ are full-IC. In order for $\mathpzc{D}$ to be invertible there must be $d^2$ linearly independent states, such as the ones introduced in Eq.~\eqref{standard_QPT_states}, and $d^2$ linearly-independent POVM elements, such as the SIC POVM or MUB.
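To make the linear structure of Eqs.~\eqref{PT_relation} and \eqref{PT_lin} concrete, the following is a minimal numerical sketch (the specific choices of $d=2$, a Hadamard gate as the unknown unitary, the probing states of Eq.~\eqref{standard_QPT_states}, and the qubit MUB as the measurement are assumptions made only for this illustration, not the procedures used later in this dissertation). It builds $\chi$ from a single Kraus operator, assembles the sensing matrix row by row, and recovers $\chi$ by linear inversion:
\begin{verbatim}
import numpy as np

# Minimal sketch (assumed example, d = 2): verify Eq. (PT_relation) numerically
# for a unitary process and invert the linear relation of Eq. (PT_lin).
d = 2
# Hermitian orthonormal operator basis {H_alpha}: identity and Paulis / sqrt(2).
H = [np.eye(2), np.array([[0, 1], [1, 0]]),
     np.array([[0, -1j], [1j, 0]]), np.array([[1, 0], [0, -1]])]
H = [h / np.sqrt(2) for h in H]

# "Unknown" process: a Hadamard gate; for a single Kraus operator,
# chi_{alpha,beta} = a_alpha a_beta^* with a_alpha = Tr(U H_alpha).
U = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
a = np.array([np.trace(U @ h) for h in H])
chi = np.outer(a, a.conj())

# Probing states of Eq. (standard_QPT_states) and the qubit MUB as the POVM.
kets = [np.array([1, 0]), np.array([0, 1]),
        np.array([1, 1]) / np.sqrt(2), np.array([1, 1j]) / np.sqrt(2)]
states = [np.outer(k, k.conj()) for k in kets]
mub = [np.array([1, 0]), np.array([0, 1]),
       np.array([1, 1]) / np.sqrt(2), np.array([1, -1]) / np.sqrt(2),
       np.array([1, 1j]) / np.sqrt(2), np.array([1, -1j]) / np.sqrt(2)]
povm = [np.outer(k, k.conj()) / 3 for k in mub]   # normalized to resolve the identity

# Rows of the sensing matrix and the probability vector p_{mu,nu}.
rows, probs = [], []
for E in povm:
    for rho in states:
        D = np.array([[np.trace(E @ H[al] @ rho @ H[be]) for al in range(4)]
                      for be in range(4)])        # (D_{mu,nu})_{beta,alpha}
        rows.append(D.T.flatten())                # row . vec(chi) = Tr(D chi)
        out = sum(chi[al, be] * H[al] @ rho @ H[be]
                  for al in range(4) for be in range(4))
        probs.append(np.trace(E @ out).real)      # p_{mu,nu} = Tr(E_mu E[rho_nu])
Dmat, p = np.array(rows), np.array(probs)

chi_hat = (np.linalg.pinv(Dmat) @ p).reshape(4, 4)   # linear inversion of Eq. (PT_lin)
print(np.allclose(chi_hat, chi, atol=1e-10))         # True: chi is recovered
\end{verbatim}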
\subsection{General QT} There are clearly many parallels between QST, QDT, and QPT. In each procedure, we look to reconstruct one component in the quantum system, either the state, evolution, or readout device. Each component is represented by a positive semidefinite (PSD) matrix. This property, simply referred to as positivity, is a powerful constraint that will have important implications in future chapters. Another commonality between all three methods is the linear relationship between the probabilities and the PSD matrix that represents the component. We can generalize this relationship as follows, \begin{equation} \bm{p} = \mathcal{M}[X], \end{equation} where $X$ is the PSD matrix that represents the given component and $\mathcal{M}$ is referred to as the ``sensing map.'' The sensing map is a linear mapping between the PSD matrix that represents a given component and the probability of each outcome. For example, in QST the sensing map is proportional to the POVM. While the sensing map always provides a linear relation, in practice its form is dependent on the type of QT. For example, in QPT, the sensing map is the matrix $\mathpzc{D}$, which has elements dependent on the input states and the POVM that is applied to the output states. So, while the linear relation is an inherent feature of QT, the form is dependent on the type of QT being implemented. Another difference between the three types of QT comes from the trace constraint. For QST, we saw the quantum state is constrained to be unit trace, while for QDT we saw that the POVM elements are constrained to resolve the identity. For QPT, the trace constraint is more complicated, and contains $d^2$ linear constraints on the process matrix. Therefore, the three different types of QT are differentiated by the sensing map and the trace constraint. However, we will see that the positivity constraint is a very important feature of QT, and since all three methods share this constraint, many results we present in future chapters in terms of one type of QT can be generalized to the other two. \section{Noise and errors in quantum tomography} \label{sec:noisy_QT} In any experimental implementation of QT there necessarily exist sources of noise and errors. One fundamental source of noise is due to a finite number of copies of the system, referred to as ``projection noise.'' There may also be other sources of noise within the experimental setup. Errors correspond to inexact characterizations of the other parts of the quantum information processor. For example, in QST the readout device may not be described by the expected POVM. Despite the fact that many current systems have a very high level of control, there will always be physical mechanisms that are not known. We take here a frequentist perspective, that the probabilities are inherent to the system and the measurement vector returned in the experiment is a perturbation from the probabilities based on the noise and errors. In a real application of QT, we only have access to the measurement vector and not the probabilities. \subsection{Noise in quantum tomography} \label{ssec:noise} In any quantum system there exists some level of noise due to finite sampling. Additionally, there may be noise in the readout device, such as shot noise. For a noisy system, the experiment produces a measurement vector, $\bm{f}$, which we relate to the probability vector $\bm{p}$, discussed in the previous section, by the noise vector, $\bm{e}$, \begin{equation} \label{noise} \bm{f} = \bm{p} + \bm{e}.
\end{equation} The elements of the noise vector are random variables with zero mean and distribution dependent on the type of noise. The magnitude, $\| \bm{e} \|_2$, which we take as the $\ell_2$-norm of the vector but in general could be any norm, depends on the distribution that defines the random variable. In general, the expected squared noise magnitude is given by the sum of the variances of the distribution, $\mathbb{E} \left[ \| \bm{e} \|_2^2 \right] = \sum_{\mu} \mathbb{E}[ e_{\mu}^2 ]$. \subsubsection{Projection noise} In any realization there will be projection noise due to finite sampling of the system. For example, in QST we may have access to a finite number of copies, $m$, of the quantum state. Therefore, each POVM outcome occurs a finite number of times. The random variable that describes the noise then follows a multinomial distribution, \begin{align} \mathbb{E}[ e_{\mu}] &= 0, \nonumber \\ \mathbb{E}[ e_{\mu} e_{\nu} ] &= \frac{p_{\mu} ( \delta_{\mu,\nu} - p_{\nu})}{m}. \end{align} The expected magnitude of projection noise is $\mathbb{E}[ \| \bm{e} \|_2^2] = \frac{1}{m}\sum_{\mu} p_{\mu} (1- p_{\mu})$. One can easily show that the expected magnitude is maximized when $p_{\mu} = 1/N$ for all $\mu$, i.e., for the uniform probabilities associated with the maximally mixed state. Then, \begin{equation} \mathbb{E}[\| \bm{e} \|^2_2] \leq \frac{1-1/N}{m} = \xi^2. \end{equation} The expression can be generalized for multiple POVMs, or for QDT and QPT with multiple states being measured. \subsubsection{Shot noise} When the noise in the measurement vector is caused by shot noise from the readout device, we treat the random variable, $e_{\mu}$, as being normally distributed, with mean zero, $\mathbb{E}[e_{\mu}] = 0$ and constant variance, $\mathbb{E}[e_{\mu}e_{\nu}] = \sigma^2 \delta_{\mu, \nu}$. This assumption may also apply to the case of finite sampling when the number of samples is very large if the probabilities are not too small. Then the expected magnitude of the noise is bounded by $\mathbb{E}[\| \bm{e} \|^2_2] \leq \sigma^2 N = \xi^2$. It is important to keep in mind that for both the types of noise discussed, and perhaps other examples, the bound $\mathbb{E}[\| \bm{e} \|_2^2] \leq \xi^2$ is only approximate. In a given experiment it may be violated. \subsection{Errors in quantum tomography} \label{ssec:errors} In each type of QT, we require perfect knowledge of other parts of the quantum system. For example, in QST we must know the POVM that describes the readout device exactly. This will never be possible in real experiments. Therefore, we must study how QT performs when this assumption breaks down. In this section we consider the effect of these errors on the measurement vector for the three different types of QT. \subsubsection{Errors in QST} For QST, we assume the readout device is described by the POVM, $\{ E_{\mu} \}$, called the target POVM. However, due to unknown errors such as imperfect control or technical noise in the detector, the device is actually described by a different POVM, $\{E'_{\mu}\}$. These errors are commonly referred to as ``measurement errors.'' The actual POVM can be written in terms of the target POVM, \begin{equation} E_{\mu}' = E_{\mu} + X_{\mu}, \end{equation} where the matrix $X_{\mu}$ describes the error in the readout device. The error matrix can have any form such that both $\{E_{\mu}\}$ and $\{E_{\mu}'\}$ are POVMs.
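As a simple illustration, one hypothetical error model (among many possibilities) mixes each target POVM element toward the identity; this keeps every element PSD and preserves the resolution of the identity, so the corresponding $X_{\mu}$ is a valid error matrix:
\begin{verbatim}
# Illustrative (hypothetical) measurement-error model: mix each target POVM
# element toward the identity.  E'_mu = (1-eps) E_mu + eps Tr[E_mu] I/d keeps
# every element PSD and preserves sum_mu E'_mu = 1.
import numpy as np

def depolarized_povm(povm, eps):
    d = povm[0].shape[0]
    return [(1 - eps) * E + eps * np.trace(E).real * np.eye(d) / d for E in povm]

# Example: the sigma_z basis measurement on a qubit with a 2% error.
E0, E1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])
Ep0, Ep1 = depolarized_povm([E0, E1], 0.02)
print(np.allclose(Ep0 + Ep1, np.eye(2)))   # True: still resolves the identity
print(np.linalg.eigvalsh(Ep0 - E0))        # eigenvalues of the error matrix X_0
\end{verbatim}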
In the vectorized form we can write the actual POVM as a sum of the two POVM matrices, \begin{equation} \label{meas_errors} \Xi' = \Xi + \mathcal{X}, \end{equation} where $\mathcal{X}$ is the matrix form of $\{ X_{\mu} \}$. Then applying the implemented POVM matrix to a quantum state gives \begin{equation} \label{ST_errors} \bm{p}' = \Xi' | \rho ) = \Xi | \rho ) + \mathcal{X} |\rho) = \bm{p} + \bm{x}, \end{equation} where $\bm{x} = \mathcal{X} | \rho )$ is the error vector, similar to the noise vector discussed in the previous section. The elements of the error vector, $x_{\mu}$, have a distribution dependent on the physical process that causes errors in the readout device. Some processes may also cause the elements of the error vectors to have mean not equal to zero. We call these systematic errors, as they correspond to systematic offsets in the experimental setting as opposed to random fluctuations. Similar to the noise vector, the error vector may be bounded such that $\| \bm{x} \|_2 \leq \eta$. This bound will depend on the physical process that produces the error and requires some prior knowledge about the performance of the readout device. \subsubsection{Errors in QDT} For QDT, we require the perfect preparation of many quantum states in order to characterize a detector. However, the state preparation procedure will never be perfect due to decoherence and/or imperfections in the control fields. In this case, there is a set of target states, $\{ \rho_{\nu} \}$, but due to errors in the state preparation procedure, the set of states actually prepared is $\{ \rho'_{\nu} \}$. These errors are commonly referred to as ``preparation errors.'' We can express the prepared density matrices in terms of the target density matrices, \begin{equation} \rho_{\nu}' = \rho_{\nu} + Y_{\nu}, \end{equation} where the matrix $Y_{\nu}$ describes the errors in the state preparation. As with QST, the exact form of $Y_{\nu}$ is dependent on the physical process that is causing the prepared state not to match the target state. In vectorized form, \begin{equation} \label{prep_errors} \Theta' = \Theta + \mathcal{Y}, \end{equation} where $\mathcal{Y}$ is the matrix form of $\{ Y_{\nu} \}$. Then applying the POVM matrix to the prepared state matrix gives \begin{equation} P' = \Xi \Theta' = \Xi \Theta + \Xi \mathcal{Y} = P + {\bf Y}, \end{equation} where ${\bf Y}$ is the error matrix corresponding to preparation errors. Systematic errors occur when the elements of ${\bf Y}$ do not have zero mean. We can similarly bound the magnitude of the error matrix, $\| {\bf Y} \|_2 \leq \upsilon$. This bound will depend on the physical process that produces the error and requires some prior knowledge about the state preparation procedure. \subsubsection{Errors in QPT} QPT can suffer from both state preparation and measurement errors (commonly known as SPAM~\cite{Gambetta2012}). Taking the linear relation in Eq.~\eqref{D_matrix} and applying Eqs.~\eqref{meas_errors} and~\eqref{prep_errors} gives, \begin{equation} \mathpzc{D}_{\mu, \nu}'= \rho_{\nu}'^{\top} \otimes E_{\mu}' = \rho_{\nu}^{\top} \otimes E_{\mu}+ Y_{\nu}^{\top} \otimes E_{\mu} + \rho_{\nu}^{\top} \otimes X_{\mu} + Y_{\nu}^{\top} \otimes X_{\mu} . \end{equation} Or in matrix form, \begin{equation} \mathpzc{D}' = \mathpzc{D} + \mathpzc{Z}, \end{equation} where $ \mathpzc{Z}_{\mu, \nu} =Y_{\nu}^{\top} \otimes E_{\mu} + \rho_{\nu}^{\top} \otimes X_{\mu} + Y_{\nu}^{\top} \otimes X_{\mu}$.
Then the probabilities of the outcomes for the prepared states and implemented measurements are related to those for the assumed states and measurements, \begin{equation} \bm{p}' = \mathpzc{D}' | \chi ) = \mathpzc{D} | \chi ) + \mathpzc{Z} | \chi ) = \bm{p} + \bm{z}, \end{equation} where $\bm{z}$ is the error vector for SPAM errors. As with the state preparation and measurement errors independently, we can bound the magnitude of the error vector, $\| \bm{z} \|_2 \leq \zeta$. This bound will depend on the physical process that produces the error and requires some prior knowledge about the performance of the POVM and state preparation. \subsection{Additivity of noise and errors} We make the assumption that the noise and errors are additive. That is, increasing the magnitude of statistical noise does not affect the preparation or measurement errors and vice versa. This assumption allows us to easily incorporate noise and errors together. For example, in QST the measurement vector is equal to the probability of getting each outcome plus a noise vector, given in Eq.~\eqref{noise}, plus the error vector. We combine these two expressions to give, \begin{equation} \label{add_mag} \bm{f} = \bm{p}' + \bm{e} = \bm{p} + \bm{x} + \bm{e}. \end{equation} The total noise plus error vector is $\bm{x} + \bm{e}$ and has magnitude bounded by the sum of the magnitudes of the two independent vectors, $\| \bm{x} + \bm{e} \|_2 \leq \| \bm{x} \|_2 + \|\bm{e} \|_2 \leq \eta + \xi$. Therefore, under this assumption the magnitudes of the noise and error vectors are additive. The same procedure can be applied for QDT and QPT. The additivity assumption breaks down for certain sources of noise, such as projection noise. With projection noise, as shown in Sec.~\ref{ssec:noise}, the magnitude of the noise is proportional to the probability of the outcome. This probability is dependent on the measured quantum state, and therefore depends on the state preparation errors. However, in most cases we can choose a bound for the noise magnitude that is independent of the state, as was done for projection noise in Sec.~\ref{ssec:noise}. \section{Numerical estimation methods} \label{sec:numerical_methods} Since noise and errors are inherent to any application of QT, we need methods that produce reasonable estimates of quantum states, processes, and readout devices in this case. The most basic approach is to determine the matrix that best represents the measurement vector, $\bm{f}$. For example, in QST this can be found by minimizing the least-squares function between the measurement vector and a model in the following program: \begin{equation} \label{LI_prog} \underset{R}{\textrm{minimize:}} \quad \| \Xi | R ) - \bm{f} \|_2 . \end{equation} When the POVM is full-IC, there is a unique $R$ that minimizes this function, called the ``linear-inversion estimate'', with the analytic form, \begin{equation} \label{LI} | \hat{R} ) = \Xi^{+} \bm{f}, \end{equation} which is related to the method we discussed in Sec.~\ref{sec:full-IC}. However, due to noise and errors, the linear-inversion estimate, $\hat{R}$, is not necessarily a ``physical'' quantum state, i.e. a PSD matrix with unit trace. This poses a problem for several reasons. For one, many quantities, such as fidelity, purity, entanglement measures, etc., are defined only for PSD matrices. Another problem is that the estimate may produce nonphysical predictions for outcomes of future experiments. For example, if the estimated density matrix is not positive it will predict a ``negative'' probability for certain outcomes.
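The following minimal sketch (with an illustrative POVM, state, and sample size; Python with numpy assumed) computes the linear-inversion estimate of Eq.~\eqref{LI} under projection noise; when the true state is pure the estimate frequently acquires a negative eigenvalue:
\begin{verbatim}
# Sketch: linear-inversion QST for one qubit, Eq. (LI), with projection noise.
import numpy as np

rng = np.random.default_rng(0)
d = 2
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Full-IC POVM: the six Pauli eigenprojectors, each basis weighted by 1/3.
povm = []
for P in (X, Y, Z):
    _, vecs = np.linalg.eigh(P)
    povm += [np.outer(vecs[:, i], vecs[:, i].conj()) / 3 for i in range(d)]
Xi = np.array([E.conj().reshape(-1) for E in povm])   # POVM matrix, rows (E_mu|

rho = np.array([[1, 0], [0, 0]], dtype=complex)       # a pure (rank-1) target state
p = (Xi @ rho.reshape(-1)).real                       # Born-rule probabilities
p = np.clip(p, 0, None); p /= p.sum()                 # guard against round-off
f = rng.multinomial(100, p) / 100                     # projection noise, m = 100

R_hat = (np.linalg.pinv(Xi) @ f).reshape(d, d)        # |R_hat) = Xi^+ f
R_hat = (R_hat + R_hat.conj().T) / 2                  # symmetrize numerically
print(np.linalg.eigvalsh(R_hat))   # often contains a negative eigenvalue
\end{verbatim}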
Therefore, we need a method to produce an estimate that is constrained to be a physical quantum state. In general, to find a physical estimate for QT, we use numerical optimization techniques that are constrained over the physical set. The physical set is defined by the positivity constraint and a trace constraint, which depends on the type of QT. These two constraints define a convex set. When there are noise and/or errors we wish to find an estimate that is ``close'' to the measurement vector but still within the physical set. The closeness of the estimate is defined by some function. If the function is convex then, since we are searching over a convex set, this fits the standard convex optimization paradigm. \subsection{Convex optimization} Convex optimization is advantageous for several reasons. First, for convex optimization every local minimum is a global minimum, giving guaranteed convergence of numerical programs. Second, there exist efficient, freely available algorithms to solve convex programs. The goal of convex optimization is to determine the minimum of a convex function $f(x)$ over a convex set. The defining property of a convex function is, \begin{equation} f(a x_1 + b x_2) \leq a f(x_1) + b f(x_2) \end{equation} for all $a + b =1$ and $a,b \geq 0$. A convex optimization problem has the following general form, \begin{align} \underset{x}{\textrm{minimize:}} \quad& f(x) & \textrm{(convex function)}, \nonumber \\ \textrm{subject to:}\quad & g_i(x) \leq 0 & \textrm{(convex functions), }\nonumber \\ & h_j(x) = 0 & \textrm{(affine functions), } \end{align} where $\{ g_i(x) \}$ are called the convex inequality constraints and $\{ h_j(x) \}$ are called the affine (a linear function plus a constant) equality constraints. We denote $\hat{x}$ as the value of $x$ that produces the minimum value of $f$ while still satisfying the constraints. There are many different types of convex programs, but the standard version of QT falls into the class of semidefinite programs (SDPs). SDPs have an inequality constraint that the variable is a PSD matrix. See Ref.~\cite{Boyd2004} for further information on convex optimization. \subsection{Convex constraints for QT} In QT, the variable is the matrix that describes the unknown component, which we constrain to be physical. We derived the physical constraints for each component in Sec.~\ref{sec:components}. We also introduce an additional inequality constraint that any estimate for QT should have a probability vector that is close to the measurement vector, which we call the measurement constraint. The variable and constraints for each type of QT are defined in Table~\ref{tbl:cvx_constraints}.
\begin{table}[ht] \centering \renewcommand{\arraystretch}{2} \begin{tabular}{l|c|c|c|} \cline{2-4} & \multicolumn{1}{|c}{QST} & \multicolumn{1}{|c}{QDT} & \multicolumn{1}{|c|}{QPT} \\ \hline\multicolumn{1}{|l|}{Variable} & $\rho$ & $E_{\mu}$ & $\chi$ \\ \hline \multicolumn{1}{|l|}{Positivity} & $\rho \geq 0$ & $E_{\mu} \geq 0$ & $\chi \geq 0$ \\ \hline \multicolumn{1}{|l|}{Trace} & $\textrm{Tr}[\rho] = 1$ & $\sum_{\mu} E_{\mu} = \mathds{1}$ & $\sum_{\alpha,\beta} \chi_{\alpha,\beta} \Upsilon_{\beta}^{\dagger} \Upsilon_{\alpha} = \mathds{1}$ \\ \hline \multicolumn{1}{|l|}{Measurement} & $\| \Xi | \rho ) - \bm{f} \|_2 \leq \varepsilon $ & $\| \Xi \Theta - \bm{f} \|_2 \leq \varepsilon$ & $\| \mathpzc{D} | \chi ) - \bm{f} \|_2 \leq \varepsilon$ \\ \hline \end{tabular} \caption[Convex constraints for QT]{{\bf Convex constraints for QT}} \label{tbl:cvx_constraints} \end{table} From the table, we see the parallels between the different constraints in QT. The positivity constraint is a convex inequality constraint that is shared in all three versions of QT. The trace constraint is an affine equality constraint that is different for each type of QT. The measurement constraint is also a convex inequality constraint. In the measurement constraint, the value of $\varepsilon$ is dependent on prior information about the noise and errors present in the experiment. We define $\varepsilon$ as the sum of the magnitude of random noise, $\xi$, plus the magnitude of errors for each version of QT (QST: $\eta$, QDT: $\upsilon$, and QPT: $\zeta$), discussed in Sec.~\ref{ssec:errors}. \subsection{Estimation programs for QST} \label{sec:cvx_QT} In principle we can choose the optimization function as any convex function. There are, however, some preferred choices. These functions also determine which constraints to apply in the convex optimization program. We will discuss each in terms of QST, since it has the simplest form, but generalizations can be made for QDT and QPT. \subsubsection{Least-squares} The first program we consider for QST is constrained least-squares (LS). This is similar to the linear-inversion program considered previously, except we include the constraint that $\rho$ is a quantum state, i.e. it is positive and has unit trace. The corresponding convex optimization program is, \begin{align} \label{LS} \underset{\rho}{\textrm{minimize:}} \quad & \| \Xi | \rho ) - \bm{f} \|_2 \nonumber \\ \textrm{subject to:} \quad & \rho \geq 0, \nonumber \\ & \textrm{Tr}(\rho) = 1. \end{align} LS returns the quantum state that matches the measurement vector as closely as possible, as measured by the $\ell_2$-norm. \subsubsection{Maximum-likelihood} The second convex estimator we consider is called maximum-likelihood (ML), originally proposed for QST in Ref.~\cite{Hradil1997}. The program is based on the classical maximum-likelihood technique, which returns the estimate that maximizes the likelihood function, $\mathcal{L}(\rho | \bm{f} ) = \prod_{\mu} \textrm{Tr}(E_{\mu} \rho)^{m f_{\mu}}$, for a finite sample of $m$ copies of the quantum state. The state that maximizes the likelihood function also minimizes the negative log-likelihood function, \begin{equation} -\textrm{log} \left[\mathcal{L}(\rho |\bm{f} ) \right] = -m\sum_{\mu} f_{\mu} \log \textrm{Tr}(E_{\mu} \rho), \end{equation} which is a convex function. Therefore, we can determine which quantum state minimizes the negative log-likelihood function with convex optimization.
The ML program for QST is, \begin{align}\label{ML} \underset{\rho}{\textrm{minimize: }} \quad & -\textrm{log} \left[\mathcal{L}(\rho |\bm{f} ) \right] \nonumber \\ \textrm{subject to:} \quad & \rho \geq 0, \nonumber \\ & \textrm{Tr}(\rho) = 1, \end{align} where the factor of $m$ is dropped since it does not affect the optimization. ML returns the most likely quantum state to have produced the measurement vector. In the limit that the noise in QST is Gaussian distributed, the likelihood function is well approximated by a Gaussian. The negative log-likelihood function is then proportional to $\sum_{\mu} \left| \textrm{Tr}(\rho E_{\mu}) - f_{\mu} \right|^2$, so minimizing it is equivalent to minimizing the LS function and the ML program is the same as LS. \subsubsection{Tr-norm minimization} The third estimator that we will consider is Tr-norm minimization, which was used in the context of quantum compressed sensing~\cite{Gross2010,Flammia2012}. Quantum compressed sensing is inspired by the classical protocol of compressed sensing, which is a technique to reconstruct an unknown matrix without sampling every element in the matrix~\cite{Candes2008,Recht2009,Candes2011}. Compressed sensing is made possible by the fact that many matrices we are interested in estimating have low rank. Low-rank matrices are specified by fewer free parameters than an arbitrary matrix. Given this prior information, it was shown that a set of measurements that satisfies a property called the restricted isometry property (RIP) is sufficient to perfectly reconstruct a low-rank matrix without noise~\cite{Candes2008,Candes2011}. Classical compressed sensing employs the convex optimization program, \begin{alignat}{2} \label{nuclear-norm} \underset{X}{\textrm{minimize:}}\quad & \| X \|_* \nonumber \\ \textrm{subject to:} \quad & \| \mathcal{M}[X ] - \bm{f} \|_2 \leq \varepsilon, \end{alignat} where $\| X \|_* = \textrm{Tr}[\sqrt{X^{\dagger} X}]$ is called the nuclear-norm (also known as the trace-norm) and $\mathcal{M}[\cdot]$ represents the sensing map of the measurements that satisfy the RIP condition. It was also proven that in the presence of noise or errors, the RIP measurements and convex program in Eq.~\eqref{nuclear-norm} produce a robust estimate. Liu~\cite{Liu2011} proved that a random set of $\mathcal{O}(d \,{\rm polylog} \, d)$ expectation values of Pauli matrices satisfies the RIP condition and Gross {\em et al.}~\cite{Gross2010} translated the compressed sensing results to QST, where there is the additional constraint that $X \geq 0$. For QST, the compressed sensing estimation program is, \begin{alignat}{2} \label{Tr-min} \underset{X}{\textrm{minimize:}} \quad & \textrm{Tr}(X) \nonumber \\ \textrm{subject to:} \quad & \| \Xi |X ) - \bm{f} \|_2 \leq \varepsilon, \nonumber \\ &X \geq 0, \end{alignat} where the nuclear-norm becomes the trace due to the positivity constraint, $X \geq 0$, and the trace constraint is dropped in order for $\textrm{Tr}(X)$ to be the optimization objective. The program in Eq.~\eqref{Tr-min} estimates a PSD matrix, $\hat{X}_{\rm Tr}$, that must be renormalized to produce an estimated quantum state, $\hat{\rho}_{\rm Tr} = \hat{X}_{\rm Tr}/ \textrm{Tr}(\hat{X}_{\rm Tr})$. By relation to classical compressed sensing, it was proven that $\hat{\rho}_{\rm Tr}$ is a robust estimate~\cite{Gross2010,Flammia2012} even though it only requires $\mathcal{O}(d \,{\rm polylog} \, d)$ expectation values.
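For concreteness, the program in Eq.~\eqref{Tr-min} can be written in a few lines with the cvxpy modeling package; the sketch below is an illustrative implementation (the function name and default solver behavior are our own choices, not part of the cited references):
\begin{verbatim}
# Sketch of the trace-minimization program, Eq. (Tr-min), using cvxpy.
# povm is a list of d x d numpy arrays E_mu, f is the measurement vector,
# and eps bounds the total noise/error magnitude.
import cvxpy as cp
import numpy as np

def trace_min_estimate(povm, f, eps):
    d = povm[0].shape[0]
    X = cp.Variable((d, d), hermitian=True)
    # The sensing map M[X]: one expected probability per POVM element.
    model = cp.hstack([cp.real(cp.trace(E @ X)) for E in povm])
    prob = cp.Problem(cp.Minimize(cp.real(cp.trace(X))),
                      [X >> 0, cp.norm(model - f, 2) <= eps])
    prob.solve()
    return X.value / np.trace(X.value).real   # rho_hat = X_hat / Tr[X_hat]
\end{verbatim}
The same skeleton covers the LS and ML programs by swapping the objective and constraints accordingly.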
It was recently shown by Kalev {\em et al.}~\cite{Kalev2015} that the program in Eq.~\eqref{Tr-min} is not required to produce such an estimate. We will discuss this result in the next chapter. \subsection{Robustness bound on estimation} \label{ssec:full_robustness} The estimate returned by any of the convex programs described above is robust to noise and errors when the measurement vector comes from a full-IC POVM. Here, robustness means that the estimation error is at most linearly proportional to the magnitude of the noise and errors. To see that this is true for QST, we first consider two arbitrary quantum states $\rho_a$ and $\rho_b$, which have probability vectors $\bm{p}_a = \Xi |\rho_a )$ and $\bm{p}_b = \Xi |\rho_b )$. Then, the square of the distance between the two probability vectors is, \begin{align} \label{dist_expansion} \| \bm{p}_a - \bm{p}_b \|_2^2 = \| \Xi | \rho_a -\rho_b) \|_2^2 = (\rho_a - \rho_b| \Xi^{\dagger} \Xi | \rho_a - \rho_b). \end{align} We can bound $\Xi^{\dagger} \Xi$ by the identity, $\mathds{I}$, times its smallest and largest eigenvalues, $\lambda_{\textrm{min}}(\Xi^{\dagger} \Xi) \mathds{I} \leq \Xi^{\dagger} \Xi \leq \lambda_{\textrm{max}}(\Xi^{\dagger} \Xi) \mathds{I}$. We apply this relation to Eq.~\eqref{dist_expansion}, \begin{equation} \label{meas_bound} \sqrt{\lambda_{\textrm{min}}} \| \rho_a - \rho_b \|_2 \leq \| \bm{p}_a - \bm{p}_b\|_2 \leq \sqrt{\lambda_{\textrm{max}}} \| \rho_a - \rho_b \|_2, \end{equation} where $\| \rho_a - \rho_b \|_2 = \textrm{Tr}\left[ (\rho_a - \rho_b)^2 \right]^{1/2}$ is the Hilbert-Schmidt (HS) distance between the two matrices $\rho_a$ and $\rho_b$. The HS-distance is equivalent to the $\ell_2$-distance between the vectorized density matrices, $\| \rho_a - \rho_b \|_2 = \| | \rho_a ) - | \rho_b ) \|_2$. Now, let us choose $\rho_a= \hat{\rho}$, the estimate returned by one of the convex programs, and $\rho_b = \rho$, the actual state that was measured in the presence of noise and errors. Each state has a corresponding probability vector, $\hat{\bm{p}} = \Xi | \hat{\rho})$ and $\bm{p} = \Xi | \rho)$. By Eq.~\eqref{meas_bound}, \begin{equation} \| \hat{\rho} - \rho \|_2 \leq \frac{1}{\sqrt{\lambda_{\textrm{min}}} } \| \hat{\bm{p}} - \bm{p}\|_2 \leq \frac{1}{\sqrt{\lambda_{\textrm{min}}} } \left( \| \hat{\bm{p}} - \bm{f} \|_2 + \| \bm{p} - \bm{f} \|_2 \right), \end{equation} where the second inequality is found by inserting $+\bm{f} - \bm{f}$ and applying the triangle inequality. The first term on the RHS is bounded by the measurement constraint, $\| \hat{\bm{p}} - \bm{f} \|_2 = \| \Xi | \hat{\rho} ) - \bm{f} \|_2 \leq \varepsilon$ or by the minimum value returned in the LS program. The second term on the RHS is constrained by the definition of the noise and error magnitudes, $\| \bm{p} - \bm{f} \|_2 = \| \bm{x} + \bm{e} \|_2 \leq \eta + \xi = \varepsilon$. Therefore, the HS-distance between the estimated state and the actual state is bounded, \begin{equation} \label{full-IC_robustness} \| \hat{\rho} - \rho \|_2 \leq \frac{2 \varepsilon}{\sqrt{\lambda_{\textrm{min}}} }, \end{equation} which is saturated when the noise/error bound is saturated and the estimated state differs from the actual state along the direction of operator space that corresponds to the smallest eigenvalue of $\Xi^{\dagger} \Xi$. The bound shows that the HS-distance between the estimated state and the actual state is linearly proportional to the magnitude of the noise and errors present, with a proportionality constant that depends on the POVM.
Therefore, Eq.~\eqref{full-IC_robustness} satisfies our definition of robustness, such that the estimate produced by standard QST does not ``blow up'' when there is noise and/or errors present. This makes standard QST feasible in most experimental settings. A similar analysis can be applied to QDT and QPT. \section{Summary and conclusions} We have presented standard methods for the three types of QT: state, process, and detector tomography. We also discussed how to apply QT in the ideal case, when we have direct access to the probabilities, and the realistic case, where noise and errors exist. In the realistic case, we showed that estimation with full-IC measurements is robust to noise and errors. However, these methods, while widely used in experimental settings, are limited to small quantum systems. For example, a full-IC POVM, such as the SIC or MUB, requires at least $d^2$ elements. Even for systems consisting of only five qubits, QST requires POVMs with at least 1024 elements. Implementing such measurements is experimentally challenging. Moreover, if such measurements are possible, the classical estimation is still demanding even with convex optimization. Therefore, standard full-IC methods, while useful due to the robustness property, are not applicable to many modern-day experiments. In order to feasibly perform QT with experiments of five or more qubits, we need new types of measurement techniques, which will be the subject of subsequent chapters. \chapter{Informational completeness in bounded-rank quantum tomography} \label{ch:new_IC} In general, quantum tomography (QT) is an expensive task. For example, in the context of quantum state tomography (QST), we saw in Sec.~\ref{sec:components} that the reconstruction of an arbitrary quantum state requires a fully informationally complete (full-IC) POVM, which has at least $d^2$ elements. However, often when we wish to implement QT, we have prior information about the component. This prior information can be applied to reduce the resources required. We focus here on QST, and consider the prior information that the quantum state being measured is pure or, more generally, close to pure. Most quantum information tasks require pure states, and therefore most experiments work to engineer these states. In practice, we can use other techniques, such as randomized benchmarking~\cite{Knill2008,Magesan2012,Gaebler2012}, to ensure the experiment operates in this regime. As we shall see, the prior information can be applied to design measurements that uniquely identify pure states with fewer POVM elements than are required for full-IC measurements. In any practical application, we do not know that the state is pure (and in fact it will never be {\em exactly} pure). Therefore, we construct POVMs that are robust to small imperfections in this prior knowledge. We also show that these types of measurements can be generalized to the prior information that the state has bounded-rank, i.e., the rank is less than or equal to some value, $r$. The inherent feature of QST that allows for the design of efficient and robust measurements is the positivity constraint on the density matrix. Therefore, the ideas and results presented in terms of QST, can easily be generalized to quantum detector tomography (QDT) and quantum process tomography (QPT) since POVM elements and process matrices are also constrained to be positive. We will return to the generalization at the end of this chapter.
\section{Prior information in QST} \label{sec:prior_info} In order to reduce the number of resources required for QST, we employ the prior information about the measured quantum state. The goal in most experiments is to prepare pure states, since these are required for the best performance in any quantum information processing task. A pure state is a rank-1 density matrix, $\rho = | \psi \rangle \langle \psi |$, where, $\ket{\psi} = \sum_{k=1}^d c_k \ket{k}$, is fully specified by the $d$ complex state amplitudes $\{ c_k \}$ in a given basis. The state amplitudes are normalized by the trace constraint and the measurements are insensitive to the global phase of the state vector. Therefore, there are $2d-2$ free parameters that specify an arbitrary pure state. The probability of each outcome is quadratically proportional to the state amplitudes, \begin{equation} \label{ST_quad} p_{\mu} = \bra{\psi} E_{\mu} \ket{\psi} = \sum_{j,k} c^*_j c_k (E_{\mu})_{j,k}, \end{equation} where $E_{\mu}$ is the $\mu$th POVM element. This quadratic relation is in contrast to the linear relation between the probabilities and the free parameters for full-IC POVMs, which we derived in Sec.~\ref{ssec:noiseless_ST}. Therefore, the number of POVM elements required for pure-state QST is not necessarily equal to the number of free parameters as was the case with standard QST. Despite the difficulty of the quadratic relationship, POVMs that uniquely identify pure states have been constructed~\cite{Carmeli2014,Carmeli2015,Carmeli2016,Chen2013,Flammia2005,Finkelstein2004,Goyeneche2015,Heinosaari2013,Kech2015,Kech2015a,Ma2016}, and shown to require only $\mathcal{O}(d)$ POVM elements. In fact, Flammia {\em et al.}~\cite{Flammia2005} proved that the minimum number of POVM elements to reconstruct a pure state is $2d$, not much larger than the number of free parameters. Another approach is based on the compressed sensing methodology~\cite{Gross2010,Flammia2012,Liu2011,Kalev2015}, where certain measurements guarantee a robust estimation of low-rank states with high probability, based on a particular convex optimization program. Compressed sensing techniques were shown to require $\mathcal{O}(d\, \textrm{polylog} \, d)$ measurements~\cite{Liu2011}. In this chapter, we connect these two independent methods by formalizing the notion of informational completeness for pure-state QST. Since a pure state is represented by a rank-1 density matrix, then the prior information that a state is pure can be generalized to the notion that a state has bounded-rank. A bounded-rank state, $\rho$, has a bounded number of nonzero eigenvalues, $\textrm{rank}(\rho) \leq r$. The prior information that the state is pure is then a special case when $r = 1$. A bounded-rank state is in general described by $2dr - r^2$ free parameters~\cite{Kech2015a}, as can be seen from its eigendecomposition. We consider bounded-rank QST for two reasons. First, in many applications, even when the goal is to create a pure state, due to errors in the state preparation the actual state may more closely match a state with higher rank. No actual prepared state will be exactly bounded rank, but may be close to such a state. Second, the mathematical formalism that describes pure-state QST is easily generalized to the bounded-rank case. We will show there exist POVMs for bounded-rank QST that are more efficient than full-IC POVMs and produce a robust estimate.
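As a quick illustration of how much the bounded-rank prior reduces the parameter count, the following short snippet evaluates the counts quoted above ($d^2-1$ for an arbitrary state, $2d-2$ for a pure state, and $2dr-r^2$ for rank $\leq r$) for a few dimensions:
\begin{verbatim}
# Free-parameter counts quoted above, evaluated for a few dimensions.
for d in (2, 8, 32):              # 1, 3, and 5 qubits
    full = d**2 - 1               # arbitrary density matrix
    pure = 2*d - 2                # rank-1 (pure) state
    r = 2
    bounded = 2*d*r - r**2        # rank <= r count quoted from Ref. [Kech2015a]
    print(d, full, pure, bounded)
\end{verbatim}
For five qubits ($d = 32$), for example, a pure state has 62 free parameters compared to 1023 for an arbitrary density matrix.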
\section{Informational completeness in bounded-rank QST} \label{sec:bounded_rank_IC} We commonly think of POVMs, which are the mathematical descriptions of the readout device, as maps from measured quantum states to probabilities. More generally, we can apply the POVM map to any positive semidefinite (PSD) matrix. In this work we discuss POVMs mapping PSD matrices to a vector of nonnegative numbers (which are not necessarily probabilities), as it highlights the fact that our definitions and results are independent of the trace constraint of quantum states, and only depend on the positivity property. Therefore, we treat the quantum readout device represented by the POVM $\{ E_{\mu} \}$, more generally as a map, ${\cal M}[\cdot]$, between the space of PSD matrices and the real vector space, $\mathbbm{R}^N$. Particularly, the action of this map on a PSD matrix, $X \geq 0$, is given as ${\cal M}[X] = \bm{s}$, where the elements of the vector $\bm{s}$ satisfy $s_{\mu} \geq0$ and $\sum_{\mu=1}^N s_\mu=\textrm{Tr} (X)$. The latter expression shows that since, by definition, the POVM elements sum to the identity, the POVM always ``measures'' the trace of the matrix $X$. If $X = \rho$, a density matrix, then the condition $s_{\mu} \geq 0$ and $\sum_{\mu} s_{\mu} = \textrm{Tr}(\rho) = 1$ implies that $\{ s_{\mu} \}$ is a probability distribution and thus consistent with the Born rule. It is also useful to define the kernel of the map (regarded as a linear map on Hermitian matrices), ${\rm Ker}({\mathcal M})\equiv \{X:{\cal M}[X]={\bf 0}\}$. Since the POVM elements sum to the identity matrix, we immediately obtain that every $X\in{\rm Ker}({\mathcal M})$ is traceless, $\textrm{Tr}(X)=0$. The converse is not true; a traceless matrix is not necessarily contained in the kernel of $\mathcal{M}$. When considering bounded-rank QST, a natural notion of informational completeness emerges~\cite{Heinosaari2013, Carmeli2014, Kalev2015}, referred to as {\em rank-$r$ completeness}. A measurement is rank-$r$ complete if the outcome probabilities uniquely distinguish the PSD matrix $X$, with rank $\leq r$, from any other PSD matrix with rank $\leq r$. More formally: \begin{definition}{\bf (Rank-$\bm{r}$ complete)} \label{def:rankr_comp} Let $\mathcal{S}_r=\{ X | X \geq0, {\rm rank}(X)\leq r \}$ be the set of PSD matrices with rank $\leq r$. A POVM is said to be rank-$r$ complete if \begin{equation} \forall \, X_1, X_2 \in \mathcal{S}_r,\, X_1\neq X_2 \textrm{ iff } \mathcal{M}[X_1]\neq\mathcal{M}[X_2], \end{equation} except for possibly a set of rank-$r$ PSD matrices that are dense on a set of measure zero, called the ``failure set.'' \end{definition} \noindent We can alternatively write the definition in terms of any norm, $\| \cdot \|$: a POVM is rank-$r$ complete when $\| X_1 - X_2 \| = 0$ if and only if $\| \mathcal{M}[X_1] - \mathcal{M}[X_2]\|=0$. When applied to quantum states, the probabilities from a rank-$r$ complete POVM uniquely identify the rank $\leq r$ state from within the set of all PSD matrices with rank $\leq r$, $\mathcal{S}_r$, which includes all rank-$r$ density matrices. Fig.~\ref{fig:illustration}a illustrates the notion of rank-$r$ completeness. The measurement probabilities cannot uniquely identify states in this way if they lie in the failure set, as was considered in~\cite{Flammia2005,Goyeneche2015}. However, in the ideal case of no noise, the chance of randomly hitting a state in that set is vanishingly small. We comment on the implications and structure of the failure set in the next chapter.
\begin{figure}\label{fig:illustration} \end{figure} In Ref.~\cite{Carmeli2014} an alternative, but equivalent, formulation of rank-$r$ completeness was proven using $\textrm{Ker}(\mathcal{M})$: A POVM is rank-$r$ complete if for all $X_1,X_2 \in \mathcal{S}_r$, with $\, X_1\neq X_2$, the difference $\Delta= X_1-X_2$ is not in $\textrm{Ker}(\mathcal{M})$, i.e., there exists an $E_{\mu}$, such that $\textrm{Tr}(E_\mu \Delta)\neq0$. Carmeli~{\em et al.}~\cite{Carmeli2014} showed that a necessary and sufficient condition for a measurement to be rank-$r$ complete for QST is that every nonzero $A\in \textrm{Ker}(\mathcal{M})$ has $\textrm{max}(n_+[A], n_-[A]) \geq r+1$, where $n_{+}[\cdot]$ and $n_{-}[\cdot]$ are the number of strictly positive and strictly negative eigenvalues of a matrix, respectively. Carmeli~{\em et al.}~\cite{Carmeli2014} also showed a sufficient condition for rank-$r$ completeness is every nonzero $A\in \textrm{Ker}(\mathcal{M})$ has $\textrm{rank}(A)\geq 2r+1$. Using the sufficient condition alone, it was shown that the expectation values of a particular set of $4r(d-r)$ observables correspond to a rank-$r$ complete measurement~\cite{Heinosaari2013}. The notion of rank-$r$ completeness can also be applied to bounded-rank (not necessarily positive) Hermitian matrices. Let $\mathcal{A}$ be the set of all bounded-rank Hermitian matrices, $\mathcal{A} = \{ H | H = H^{\dagger}, \, \textrm{rank}(H) \leq r\}$. Then there exist POVMs whose measurement vector uniquely identifies an arbitrary Hermitian matrix within $\mathcal{A}$. We call these Hermitian rank-$r$ complete, and a formal definition can be made similar to Definition~\ref{def:rankr_comp}. In Ref.~\cite{Kech2015a}, it was shown that a set of $4r\lceil\frac{d-r}{d-1}\rceil$ random orthonormal bases is Hermitian rank-$r$ complete. Hermitian rank-$r$ completeness is a sufficient condition for rank-$r$ completeness, since the set of bounded-rank PSD matrices is a subset of bounded-rank Hermitian matrices. In fact, it can be shown that the sufficient condition for rank-$r$ completeness given by Carmeli~{\em et al.}~\cite{Carmeli2014} is equivalent to this definition for Hermitian matrices. The definition of rank-$r$ complete POVMs guarantees the uniqueness of the reconstructed state in the set ${\cal S}_r$, but it does not say anything about higher-rank states. There may be other density matrices, with rank greater than $r$ that are consistent with the measurement probabilities. Since ${\cal S}_r$ is a nonconvex set it may be difficult to differentiate between the unique rank-$r$ density matrix and these higher-rank states, particularly in the presence of noise or other experimental imperfections. To overcome this difficulty, we consider a ``stricter'' type of POVM which excludes these higher-rank states. This motivates the following definition~\cite{Chen2013,Carmeli2014,Kalev2015}: \begin{definition}{\bf (Rank-$r$ strictly-complete)} \label{def:rankr_strictcomp} Let $\mathcal{S}=\{X | X\geq0\}$ be the set of PSD matrices. A measurement is said to be rank-$r$ strictly-complete if \begin{equation} \forall \, X_1 \in \mathcal{S}_r, \textrm{ and } \forall \, X_2 \in \mathcal{S},\, X_1\neq X_2 \textrm{ iff } \mathcal{M}[X_1]\neq\mathcal{M}[X_2], \end{equation} except for possibly a set of rank-$r$ PSD matrices that are dense on a set of measure zero, called the ``failure set.'' \end{definition} \noindent Alternatively, in terms of any norm $\| \cdot \|$: a POVM is rank-$r$ strictly-complete when, for all $X_1 \in \mathcal{S}_r$ and $X_2 \in \mathcal{S}$, $\| X_1 - X_2 \| = 0$ if and only if $\| \mathcal{M}[X_1] - \mathcal{M}[X_2]\|=0$.
Clearly, a POVM that satisfies Definition~\ref{def:rankr_strictcomp} also satisfies Definition~\ref{def:rankr_comp}. For QST, when the rank of the state being measured is promised to be less than or equal to $r$, the probabilities from a rank-$r$ strictly-complete POVM distinguish this state from any other PSD matrix, of {\em any} rank (except on the failure set). Fig.~\ref{fig:illustration}b illustrates the notion of rank-$r$ strict-completeness. Carmeli~{\em et al.}~\cite{Carmeli2014} showed that a POVM is rank-$r$ strictly-complete if, and only if, every nonzero $A\in{\rm Ker}({\cal M})$ has $\min(n_{+}[A],n_{-}[A])\geq r+1$. This condition relies on the PSD property of the matrices. To date, there are only a few known POVMs that are proven to be rank-$r$ strictly-complete~\cite{Chen2013,Ma2016}. In Chapter~\ref{ch:constructions}, we present new strictly-complete POVMs with ${\cal O}(rd)$ elements, which is the same number of POVM elements as a rank-$r$ complete POVM. The definition of strict-completeness does not have a related notion for Hermitian matrices, in contrast to rank-$r$ completeness. To see this, let us apply the definition of strict-completeness for bounded-rank Hermitian matrices in the context of QST, ignoring positivity. Let $A$ be a Hermitian matrix with $\textrm{rank}(A)\leq r$. To be (nontrivially) strictly-complete the POVM should be able to distinguish $A$ from any Hermitian matrix, of any rank, with fewer than $d^2$ linearly independent POVM elements. (If the POVM has $d^2$ linearly independent POVM elements, it is fully-IC and can distinguish any Hermitian matrix from any other.) However, for a POVM with fewer than $d^2$ linearly independent elements there are necessarily infinitely many Hermitian matrices with rank $> r$ that produce the same noiseless measurement vector as $A$. Therefore, positivity is the essential ingredient that allows us to define strict-completeness with fewer than $d^2$ linearly independent elements. The positivity condition, which appears in all three types of QT, is a powerful constraint for efficient QT. \section{Reconstruction with ideal bounded-rank QST } \label{sec:estimation_wout_noise} The differences between rank-$r$ complete and rank-$r$ strictly-complete have implications for the way we reconstruct the unknown quantum state. For now, we assume that measurements satisfying the above definitions exist (we will construct examples of these measurements in the next chapter). The definitions above state that the measurements uniquely identify the bounded-rank PSD matrix within some set. In order to accomplish QST, we need methods to identify this unique PSD matrix. In this section, we consider the ideal situation that the probabilities are known exactly and there are no other errors in the system. The noiseless and errorless case does not correspond to any real application but is useful in establishing the fundamental properties of the different measurements. We return to the realistic case in the next section. In each definition we allowed a failure set where the probabilities do not uniquely identify the quantum state with respect to the set given in the definition. We also specified that this set must have zero volume. Therefore, if the measured state is random with respect to the measurement basis, it is vanishingly unlikely that it will be an element of this set. Thus, in the ideal limit for QST, it is vanishingly unlikely that the failure set will impact the reconstruction.
\subsection{Reconstruction with rank-$r$ complete POVMs} \label{sec:noiseless_rankr_comp} Many rank-$r$ complete POVMs are constructed by deriving a set of quadratic equations that are solvable when the measured state is bounded-rank, e.g., the constructions provided in Refs.~\cite{Goyeneche2015, Flammia2005}. Therefore, when we consider the ideal case of QST, we can solve these quadratic equations to uniquely reconstruct the bounded-rank quantum state. Further details are provided in Sec.~\ref{sec:decomp_method}. Some rank-$r$ complete measurements, however, do not provide a set of quadratic equations in their derivation, e.g., the POVM provided in Ref.~\cite{Carmeli2015}. Therefore, we must use numerical methods to reconstruct the quantum state. The numerical search must be constrained to the set of rank-$r$ states. One possible optimization program is based on minimizing the least-squares (LS) distance between the probabilities and the expected probabilities from a rank-$r$ quantum state, \begin{align} \label{rankr_comp_program} \underset{X}{\textrm{minimize:}} \quad & \| \mathcal{M}[X] - \bm{p} \|_2 \nonumber \\ \textrm{subject to:} \quad & X \in \mathcal{S}_r \end{align} However, the constraint $X \in \mathcal{S}_r$ is nonconvex, and therefore this constrained LS program cannot be solved with convex optimization techniques, like the ones discussed in Sec.~\ref{sec:numerical_methods}. Nonconvex optimization is, in general, difficult due to the existence of local minima. One possible algorithm to solve the program in Eq.~\eqref{rankr_comp_program} is based on gradient-projection. The basic procedure for gradient-projection is to alternate a gradient descent approach with a projection onto the set $\mathcal{S}_r$~\cite{Meka2009, Calamai1987,Figueiredo2007}. We refer to this method throughout as ``rank-$r$-projection.'' We denote the (squared) LS optimization function as $g(X) = \| \mathcal{M}[X] - \bm{p} \|_2^2$, with gradient, \begin{equation} \vec{\bigtriangledown} g(X_r) = 2 \mathcal{M}^{\dagger} \left[ \mathcal{M}[X_r] - \bm{p} \right], \end{equation} where $\mathcal{M}^{\dagger}[\cdot]$ is the adjoint map, taking vectors to matrices, defined by $\bm{a} \cdot \mathcal{M}[B] = \textrm{Tr}(\mathcal{M}^{\dagger}[\bm{a}]B)$, i.e., $\mathcal{M}^{\dagger}[\bm{a}] = \sum_{\mu} a_{\mu} E_{\mu}$. The algorithm starts by generating a random rank-$r$ PSD matrix, $X_r^{(0)}$. We then evaluate $g(X_r^{(0)})$, and if $g(X_r^{(0)}) > \gamma_1$, which is some stopping threshold, then we also evaluate $\vec{\bigtriangledown}g(X_r^{(0)})$. From the gradient we produce a new estimate $X^{(1)} = X_r^{(0)} - a \vec{\bigtriangledown}g(X_r^{(0)})$, where $a$ is a small step size. The new estimate is not necessarily a rank-$r$ PSD matrix, and so we project $X^{(1)}$ onto the set $\mathcal{S}_r$ to give $X^{(1)}_r = \mathcal{P}[X^{(1)}]$. The projection, $\mathcal{P}[\cdot]$, is accomplished by diagonalizing $X^{(1)}$ and setting the smallest $d-r$ eigenvalues to zero (if there are more than $d-r$ negative eigenvalues we must also set these to zero in order for the matrix to be PSD). We then repeat the procedure until either $g(X_r) \leq \gamma_1$ or $\| \vec{\bigtriangledown} g(X_r) \| \leq \gamma_2$ for some pre-specified $\gamma_1$ and $\gamma_2$ based on the implementation. If the algorithm stops due to the gradient threshold, then we have likely found a local minimum, which is not the desired result. In order to find the desired global minimum, we repeat the procedure with a different initial guess, $X_r^{(0)}$. This entire process is repeated until the function threshold, $\gamma_1$, is reached.
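A minimal sketch of this rank-$r$-projection iteration is given below (an illustrative implementation with a fixed step size and a single random seed; the restart logic described above is omitted for brevity):
\begin{verbatim}
# Sketch of the rank-r-projection iteration described above.
import numpy as np

def project_rank_r(X, r):
    """Project a Hermitian matrix onto S_r: keep the r largest nonnegative
    eigenvalues and set the rest (including any negatives) to zero."""
    vals, vecs = np.linalg.eigh((X + X.conj().T) / 2)
    vals[:-r] = 0.0
    vals = np.clip(vals, 0.0, None)
    return (vecs * vals) @ vecs.conj().T

def rank_r_projection(povm, p, r, a=0.5, gamma1=1e-5, max_iter=10000, seed=0):
    rng = np.random.default_rng(seed)
    d = povm[0].shape[0]
    M = np.array([E.conj().reshape(-1) for E in povm])   # sensing map as a matrix
    V = rng.normal(size=(d, r)) + 1j * rng.normal(size=(d, r))
    X = project_rank_r(V @ V.conj().T, r)                # random rank-r seed
    for _ in range(max_iter):
        resid = (M @ X.reshape(-1)) - p
        if np.linalg.norm(resid) <= gamma1:              # function threshold reached
            break
        grad = 2 * (M.conj().T @ resid).reshape(d, d)    # 2 M^dag[ M[X] - p ]
        X = project_rank_r(X - a * grad, r)              # gradient step, then project
    return X
\end{verbatim}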
When such a solution is found, the result produces a rank-$r$ PSD matrix, $\hat{X}_r$, which has $\| \mathcal{M}[\hat{X}_r] - \bm{p} \|_2 \leq \gamma_1$. For ideal QST, we take $\gamma_1$ and $\gamma_2$ near zero, approximately $10^{-5}$. Empirically, we find that the run time of this algorithm can be very long. The time is very dependent on the local minima that may exist, since the set $\mathcal{S}_r$ is not a convex set. These minima act as traps for the gradient descent search and require that the algorithm restart with a new random seed. We do not know the number of minima and thus how likely it is to encounter one in the optimization. Therefore, this method is not generally a practical method for reconstruction. \subsection{Reconstruction with rank-$r$ strictly-complete POVMs} \label{ssec:strict_comp_noiseless} In the previous section, we saw that rank-$r$ complete POVMs are not compatible with convex optimization. However, this is not the case for rank-$r$ strictly-complete POVMs. The ideal measurement vector from a rank-$r$ strictly-complete POVM uniquely identifies the rank-$r$ PSD matrix within the convex set of all PSD matrices. Therefore, we can design convex optimization programs for reconstruction of bounded-rank quantum states. This is formalized in the following corollary for the ideal measurement case: \begin{corollary}{\bf (Uniqueness)} \label{cor:uniqueness} Let $X_{\rm r}$ be a PSD matrix with rank $\leq r$, and let $\bm{s}= \mathcal{M}[X_r]$ be the corresponding measurement vector of a rank-$r$ strictly-complete POVM. Then, the estimate, $\hat{X}$, which produces the minimum of either, \begin{align} \label{general_positive_CS} \underset{X}{\rm minimize:} \quad &\mathcal{C}(X) \nonumber \\ {\rm subject\, to:}\quad &\mathcal{M}[X]=\bm{s}\nonumber \\ & X \geq 0, \end{align} or, \begin{align} \label{general_norm_positive_CS} \underset{X}{\rm minimize:} \quad & \Vert\mathcal{M}[X]-\bm{s}\Vert \nonumber \\ {\rm subject \, to:} \quad &X \geq 0, \end{align} where $\mathcal{C}(X)$ is any convex function of $X$, and $\| \cdot \|$ is any norm function, is uniquely: $\hat{X} = X_r$. \end{corollary} \noindent {\em Proof:} This is a direct corollary of the definition of strict-completeness, Definition~\ref{def:rankr_strictcomp}. Since, by definition, the probabilities of a rank-$r$ strictly-complete POVM uniquely determine $X_r$ from within the set of all PSD matrices, its reconstruction becomes a feasibility problem over the convex set $\{ \mathcal{M}[X]=\bm{s},X\geq0\}$, \begin{equation} \label{feasibility} {\rm find}\; X\;\; {\rm s.t.}\; \mathcal{M}[X]=\bm{s}\, \, \textrm{and} \, \, X \geq 0. \end{equation} The solution of this feasibility problem is uniquely $X_r$. Therefore, any optimization program, and particularly an efficient convex optimization program that looks for the solution within the feasible set, is guaranteed to find $X_r$. $\square$ \noindent In Ref.~\cite{Kech2015} this was proven for the particular choice, $\mathcal{C}(X)=\textrm{Tr}(X)$, and also in the context of compressed sensing measurements in Ref.~\cite{Kalev2015}. The corollary implies that strictly-complete POVMs allow for the reconstruction of bounded-rank PSD matrices via convex optimization even though the set of bounded-rank PSD matrices is nonconvex. Moreover, all convex programs over the feasible solution set, i.e., of the form of Eqs.~\eqref{general_positive_CS} and~\eqref{general_norm_positive_CS}, are equivalent for this task.
For example, this result applies to maximum-(log)likelihood estimation for QST~\cite{Hradil1997}, given in Eq.~\eqref{ML}, where $\mathcal{C}(\rho) =-\log( \prod_{\mu}\textrm{Tr}(E_\mu\rho)^{p_\mu})$. Corollary~\ref{cor:uniqueness} does not apply for PSD matrices in the measurement's failure set, if such a set exists. One can also include the trace constraint in Eqs.~\eqref{general_positive_CS} and~\eqref{general_norm_positive_CS}. For noiseless QST, this is redundant since any POVM ``measures'' the trace of a matrix. Thus, if we have prior information that $\textrm{Tr}(X)=1$, then the feasible set in Eq.~\eqref{feasibility} is equal to the set $\{ X \, | \,\mathcal{M}[X]=\bm{s}, \, X \geq 0, \, \textrm{Tr}(X) = 1 \}$. \section{Estimation in the presence of noise and errors} \label{sec:estimation_wnoise} Any real implementation of QST will necessarily have sources of noise and errors, and therefore it is imperative that the QST protocol be robust to such effects. In order to produce an estimate for this realistic case, we use numerical optimization. In the previous section we saw that rank-$r$ complete POVMs require nonconvex programs. Due to the complicated nature of this type of program, we forgo a discussion of estimation with rank-$r$ complete POVMs and focus only on rank-$r$ strictly-complete POVMs. In this section, we use the formalism for describing noise and errors that was introduced in Sec.~\ref{ssec:errors}. We additionally model a new type of error that is inherent to bounded-rank QST. The definition of rank-$r$ strict-completeness assumes that the measured state has bounded rank. However, in any application the measured state will never be exactly bounded-rank due to unavoidable errors in the experimental apparatus. We call these preparation errors, since they cause the prepared quantum state to differ from the target bounded-rank quantum state. We denote the state that is actually prepared, $\rho_{\rm a}$, which is, in general, full rank. However, since the goal was to prepare a bounded-rank state, the actual state is close to such a state, $\rho_r$. The ``closeness'' will depend on the magnitude of the preparation errors based on some measure. We can relate the two states with the error matrix, $Y$, such that $\rho_{\textrm{a}} \triangleq \rho_r + Y$. The matrix $Y$ is only constrained by the fact that $\rho_{\textrm{a}}$ and $\rho_r$ are both quantum states. The prior information that the state is close to a bounded-rank state then corresponds to $\| \rho_{\rm a} - \rho_r \|_2 \leq \upsilon$ where $\| \cdot \|_2$ is the Hilbert-Schmidt distance and $\upsilon$ is a small constant. In Sec.~\ref{sec:noisy_QT}, we derived expressions for the measurement vector when there exists errors in the POVM and noise in the measurement. We express the actual POVM as $\mathcal{M}' = \mathcal{M} + \mathcal{X}$, where $\mathcal{M}$ is the target POVM map and $\mathcal{X}$ represents the errors in the map. The noise in each outcome is expressed by the vector, $\bm{e}$.
We can also include preparation errors in this expression, \begin{align} \label{f_with_all} \bm{f} &= \mathcal{M}'[\rho_{\rm a}]+ \bm{e}, \nonumber \\ &= \mathcal{M}[\rho_r] + \mathcal{X}[ \rho_{\rm a} ] + \mathcal{M}[Y] + \bm{e}, \nonumber \\ &= \bm{p} + \bm{x} + \bm{y} + \bm{e}, \end{align} where $\bm{p} = \mathcal{M}[ \rho_r ] $ is the probability of each outcome expected from a rank-$r$ state, $\bm{x} = \mathcal{X} [ \rho_{\rm a} ]$ is the contribution of the measurement errors, and $\bm{y} = \mathcal{M}[ Y]$ is the contribution from the preparation errors. We assume that the contribution of measurement errors and noise is bounded for any quantum state $\sigma$: $\| \mathcal{X} [ \sigma ] \|_2 \leq \eta$ and $\| \bm{e} \|_2 \leq \xi$. Then the total error and noise level can be bounded, \begin{equation} \label{noise_bound} \| \bm{f} - \bm{p} \|_2 = \| \bm{x} + \bm{y} + \bm{e} \|_2 \leq \eta + \xi + \| \mathcal{M}[Y] \|_2 = \varepsilon + \| \mathcal{M}[Y] \|_2, \end{equation} where we define $\varepsilon = \eta + \xi$, for reasons that will be clear later. The value of $\| \mathcal{M}[Y] \|_2$ is related to the magnitude of the preparation errors, $\upsilon$. This is seen by separating the distance $\| \rho_{\rm a} - \rho_r \|_2$ into two terms corresponding to the projections onto the kernel ($\pi^{\perp}[\cdot]$) and onto the image ($\pi[\cdot]$), the subspace of the operator space orthogonal to the kernel, \begin{equation} \label{proj_sep} \| \rho_{\rm a} - \rho_r \|_2^2 = \| \pi[\rho_{\rm a} - \rho_r] \|_2^2 + \| \pi^{\perp}[\rho_{\rm a} - \rho_r] \|_2^2 \leq \upsilon^2. \end{equation} The first term can be bounded by an inequality similar to Eq.~\eqref{meas_bound}, $ \frac{1}{\lambda_{\rm max}} \| \mathcal{M}[ \rho_{\rm a} - \rho_r]\|_2^2 \leq \| \pi[\rho_{\rm a} - \rho_r] \|_2^2 $, where $\lambda_{\rm max}$ is the maximum eigenvalue of $\Xi^{\dagger} \Xi$, the POVM matrix squared. Rearranging Eq.~\eqref{proj_sep} gives, \begin{align} \label{prep_bound} \| \pi[\rho_{\rm a} - \rho_r] \|_2^2 &\leq \upsilon^2 - \| \pi^{\perp}[ \rho_{\rm a} - \rho_r] \|_2^2, \nonumber \\ \| \mathcal{M}[ \rho_{\rm a} - \rho_r] \|_2^2 &\leq \lambda_{\rm max} \upsilon^2 - \lambda_{\rm max} \| \pi^{\perp}[\rho_{\rm a} - \rho_r] \|_2^2 \leq \lambda_{\rm max} \upsilon^2 . \end{align} This leads to the bound, $\| \mathcal{M}[Y]\|_2 \leq \sqrt{\lambda_{\rm max}} \upsilon$. Noise and errors also cause the failure set to have an effect on bounded-rank QST. Finkelstein showed that in the presence of noise and errors the failure set in fact has finite measure~\cite{Finkelstein2004}. Therefore, there is a nonzero probability that the actual state lies within this failure set. In this case the measured outcomes from the rank-$r$ strictly-complete POVMs would fail to produce a robust estimate. In this section, we ignore the effects of the failure set but discuss it in the context of specific POVMs in the next chapter. \subsection{Estimation with Rank-$r$ strictly-complete POVMs} \label{ssec:rankr_strictcomp_robustness} The estimate produced from a strictly-complete POVM is provably robust to all sources of noise and errors, including all preparation errors.
\subsection{Estimation with rank-$r$ strictly-complete POVMs} \label{ssec:rankr_strictcomp_robustness}

The estimates produced from a strictly-complete POVM are provably robust to all sources of noise and errors, including all preparation errors. This is formalized in the following corollary:
\begin{corollary}{\bf (Robustness)} \label{cor:robustness} Let $X_{\textrm{a}}$ be the actual prepared PSD matrix and let $\bm{f}= \mathcal{M'}[X_{\textrm{a}}]+\bm{e}$ be the measurement vector (with noise and errors) of a rank-$r$ strictly-complete POVM, such that $\Vert \mathcal{M}[X_{\rm a}] - \bm{f}\Vert \leq \varepsilon$ and $\| X_{\rm a} - X_r\| \leq \upsilon$, for some bounded-rank PSD matrix $X_r$. Then the PSD matrix, $\hat{X}$, that produces the minimum of,
\begin{align} \label{general_positive_CS_noisy} \underset{X}{\rm minimize:} \quad &\mathcal{C}(X) \nonumber \\ {\rm subject\, to:}\quad &\Vert\mathcal{M}[X]-\bm{f}\Vert_2 \leq \varepsilon \nonumber \\ & X \geq 0, \end{align}
or,
\begin{align} \label{general_norm_positive_CS_noisy} \underset{X}{\rm minimize:} \quad & \Vert\mathcal{M}[X]-\bm{f}\Vert_2 \nonumber \\ {\rm subject \, to:} \quad &X \geq 0, \end{align}
where $\mathcal{C}(X)$ is any convex function of $X$, is robust: $\Vert\hat{X} - X_r \Vert_2 \leq C_1\varepsilon + C_2 \upsilon$ and $\Vert\hat{X} - X_{\textrm{a}} \Vert_2\leq C_1 \varepsilon + 2 C_2 \upsilon$, where $\Vert\cdot\Vert_2$ is the Hilbert-Schmidt distance, and $C_1$ and $C_2$ are constants that depend only on the measurement. \end{corollary}

\noindent {\em Proof:} The proof comes from Definition~\ref{def:rankr_strictcomp}, which states that, for a strictly-complete POVM and for any $X_r \in \mathcal{S}_r$ and $X \in \mathcal{S}$, $\| X_r - X\| = 0$ if and only if $\| \mathcal{M}[X_r] - \mathcal{M}[X] \|=0$. We can express this as an inequality relation,
\begin{equation} \label{strict_comp_ineq} \alpha \| X_r - X\| \leq \| \mathcal{M}[X_r] - \mathcal{M}[X] \| \leq \beta \| X_r - X\|, \end{equation}
where $\alpha$ and $\beta$ are real and depend on the POVM. The definition of rank-$r$ strict-completeness constrains the value of $\alpha$ to be strictly positive, $\alpha > 0$~\cite{Blumensath2014}. Otherwise, there may exist a case where $\| \mathcal{M}[X_r] - \mathcal{M}[X] \|= 0$ when $ \| X_r - X\| \neq 0$, which contradicts the definition. The RHS can be derived from Eq.~\eqref{prep_bound}, such that $\beta = \sqrt{\lambda_{\rm max}}$. Now, if we take $X = \hat{X}$ in Eq.~\eqref{strict_comp_ineq}, where $\hat{X}$ is the estimated PSD matrix from either program in Eqs.~\eqref{general_positive_CS_noisy} or~\eqref{general_norm_positive_CS_noisy}, then,
\begin{align} \label{strict_comp_bound_Xr} \| X_r - \hat{X}\| &\leq \frac{1}{\alpha} \| \mathcal{M}[X_r] - \mathcal{M}[\hat{X}] \|, \nonumber \\ &\leq \frac{1}{\alpha} ( \| \mathcal{M}[X_r] - \bm{f} \| + \underbrace{\| \mathcal{M}[\hat{X}] - \bm{f} \|}_{\leq \varepsilon} ), \nonumber \\ &\leq \frac{1}{\alpha} ( \underbrace{\| \mathcal{X}[X_{\rm a} ] + \mathcal{M}[Y] + \bm{e} \|}_{\leq \varepsilon + \beta \upsilon} + \varepsilon ), \nonumber \\ &\leq \frac{2 (\varepsilon + \beta \upsilon/2)}{\alpha} = C_1 \varepsilon + C_2 \upsilon, \end{align}
where we expanded $\bm{f}$ and defined $C_1 = 2/\alpha$ and $C_2 = \beta/\alpha$. The second term in the second line follows from the constraint in the convex optimization program in Eq.~\eqref{general_positive_CS_noisy} or the optimization function in Eq.~\eqref{general_norm_positive_CS_noisy}. The first term in the third line follows from the bound on the noise and error magnitudes in Eq.~\eqref{noise_bound}, as well as the bound on preparation errors from Eq.~\eqref{prep_bound}.
To get the second inequality of the corollary, which compares the prepared PSD matrix to the estimate, we apply the triangle inequality,
\begin{align} \| X_{\textrm{a}} - \hat{X}\| &\leq \| X_{\rm a} - X_r \|+ \underbrace{\| \hat{X} - X_r \|}_{\leq \frac{2 \varepsilon}{\alpha} + \frac{\beta \upsilon}{\alpha}} , \nonumber \\ &\leq \frac{1}{\alpha} \underbrace{\| \mathcal{M}[Y] \|}_{\leq \beta \upsilon}+ \frac{2 \varepsilon}{\alpha} + \frac{\beta \upsilon}{\alpha} , \nonumber \\ &\leq \frac{2 (\varepsilon + \beta \upsilon)}{\alpha} = C_1 \varepsilon + 2 C_2 \upsilon. \end{align}
The first line uses the result from Eq.~\eqref{strict_comp_bound_Xr}, and the second line uses Eq.~\eqref{strict_comp_ineq} together with the bound on the preparation errors in Eq.~\eqref{prep_bound}. $\square$

\noindent In the context of QST, $X_{\rm a} = \rho_{\rm a}$ and $X_{\rm r} = \rho_r$, the actual density matrix prepared and a nearby bounded-rank density matrix, respectively. We do not have an analytic expression for the constant $\alpha$. In Ref.~\cite{Kech2015}, a similar proof was given for the particular choice $\mathcal{C}(X)=\textrm{Tr}(X)$ for QST. In this proof the constant $C_1$ is derived in more detail, but it still has no known analytic form. In Ref.~\cite{Kalev2015}, Corollary~\ref{cor:robustness} was also studied in the context of compressed sensing measurements.

As in the ideal case, the trace constraint is not necessary for Corollary~\ref{cor:robustness}, and in fact leaving it out allows us to make different choices for $\mathcal{C}(X)$, as was done in Ref.~\cite{Kech2015}. However, for a noisy measurement vector, the estimated matrix $\hat{X}$ is generally not normalized, $\textrm{Tr}(\hat{X})\neq1$. The final estimate of the state is then given by $\hat{\rho} = \hat{X}/\textrm{Tr}(\hat{X})$. In principle, we can consider a different version of Eqs.~\eqref{general_positive_CS_noisy} and~\eqref{general_norm_positive_CS_noisy} where we explicitly include the trace constraint.

The corollary assures that if the actual quantum state is close to bounded-rank and is measured with a strictly-complete POVM, then it can be robustly estimated with any convex program constrained to the set of PSD matrices. In particular, it implies that all convex estimators perform qualitatively the same for low-rank state estimation. This may be advantageous, especially when considering QT of high-dimensional systems. This also unifies previously proposed estimation programs for bounded-rank QST, such as trace-minimization~\cite{Gross2010}, maximum-likelihood, and maximum entropy~\cite{Liu2012,Teo2012}. While we cannot currently derive an analytic expression for the constant $\alpha$ for an arbitrary POVM, the scaling of the robustness bound in Corollary~\ref{cor:robustness} is linear, which is exactly the same as for full-IC POVMs, derived in Sec.~\ref{ssec:full_robustness}. Therefore, strictly-complete POVMs perform very similarly to full-IC POVMs in realistic applications.
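To make the estimators concrete, the following is a minimal sketch of the program in Eq.~\eqref{general_norm_positive_CS_noisy}, written with the Python package cvxpy rather than the MATLAB CVX package used for the numerics later in this chapter; the POVM elements and data below are placeholders, not a real measurement.
\begin{verbatim}
# Minimal sketch (illustrative only): PSD-constrained least-squares estimator,
# Eq. (general_norm_positive_CS_noisy).  E and f below are placeholders.
import numpy as np
import cvxpy as cp

d = 4
E = [np.eye(d) / d for _ in range(d * d)]   # placeholder POVM elements
f = np.ones(len(E)) / d                     # placeholder measurement frequencies

X = cp.Variable((d, d), hermitian=True)
residuals = cp.hstack([cp.real(cp.trace(E_mu @ X)) - f_mu
                       for E_mu, f_mu in zip(E, f)])
problem = cp.Problem(cp.Minimize(cp.norm(residuals, 2)), [X >> 0])
problem.solve()

rho_hat = X.value / np.trace(X.value).real  # normalize, since Tr(X) is unconstrained
\end{verbatim}
The program in Eq.~\eqref{general_positive_CS_noisy} is obtained in the same way by minimizing $\mathcal{C}(X)$ and moving the norm term into the constraint list.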
\section{General bounded-rank quantum tomography}

The methodology we applied to bounded-rank QST can be generalized to both detector tomography (QDT) and process tomography (QPT). The inherent feature that allows for this conversion is that, like quantum states, both detectors and processes are represented by PSD matrices. For detector tomography the PSD matrices are the POVM elements, while for QPT the PSD matrix is the process matrix. Moreover, there often exists prior information that these PSD matrices are bounded-rank, or near bounded-rank. Therefore, QDT and QPT fit the framework outlined for bounded-rank QST. This means we can create ways to characterize bounded-rank readout devices and processes that are more efficient than the standard methods described in Sec.~\ref{sec:full-IC}.

Mathematically, the estimation problems for the three different types of QT differ by the trace constraint, as outlined in Sec.~\ref{sec:numerical_methods}. However, in Definitions~\ref{def:rankr_comp} and~\ref{def:rankr_strictcomp}, as well as in Corollaries~\ref{cor:uniqueness} and~\ref{cor:robustness}, we ignored the trace constraint for QST. We comment on the effect of this constraint in bounded-rank QDT and QPT below.

\subsection{Bounded-rank QDT} \label{ssec:br_QDT}

Many quantum information protocols require quantum readout devices that are described by rank-1 POVM elements, for example, the SIC POVM introduced in Ref.~\cite{Renes2004a}. Rank-1 POVM elements can be expanded similarly to pure states, $E_{\mu} = | \tilde{\phi}_{\mu} \rangle \langle \tilde{\phi}_{\mu} |$, except in this case $\ket{\tilde{\phi}_{\mu}}$ is an unnormalized vector. This differs from QST only in that the trace of the POVM elements is not constrained, since $\ket{\tilde{\phi}_{\mu}}$ is unnormalized. Therefore, if we perform the estimation for QDT on individual POVM elements, which was the second method discussed in Sec.~\ref{ssec:noiseless_DT}, then we can directly apply the definitions and corollaries from above to develop efficient methods for QDT. The notion of rank-1 completeness and strict-completeness for QDT applies to the set of probing states used to characterize the POVM elements. We can construct sets of probing states that satisfy Definitions~\ref{def:rankr_comp} and~\ref{def:rankr_strictcomp}, and we consider such sets in the next chapter. Therefore, these probing states are able to fully characterize rank-1 projectors with fewer than the $d^2$ states required for full-IC QDT, discussed in Sec.~\ref{ssec:noiseless_DT}.

In most real applications, the POVM elements that describe the detector are not exactly rank-1. In this case, measurement errors in the physical apparatus cause the readout device to be described by a different POVM. This is equivalent to the preparation errors we discussed in Sec.~\ref{sec:estimation_wnoise} for QST. By analogy to the robustness bounds derived above for preparation errors, a set of rank-1 strictly-complete probing states is robust to errors in the implementation of the POVMs, and also to errors in the preparation of the states and noise in the measurement.

We have so far discussed QDT with the second method from Sec.~\ref{ssec:noiseless_DT}, which estimates the POVM elements individually. However, in Sec.~\ref{ssec:noiseless_DT}, we introduced another method for estimation in QDT, which performs the estimation collectively with all POVM elements. For this method, we are able to apply the trace constraint within the convex optimization program. While the definitions of rank-$r$ complete and strictly-complete are independent of this constraint, including it in the estimation may allow for the creation of sets of probing states with even fewer elements.

\subsection{Bounded-rank QPT} \label{ssec:br_QPT}

The prior information that a process is rank-1 corresponds to knowledge that it is a unitary process. Unitary processes are required in most quantum information protocols such as quantum computing.
The process matrix that represents a unitary process is $\chi = | U)( U|$, where $| U)$ is the vectorized form of a unitary matrix $U$. While the process matrix is a $d^2 \times d^2$ matrix, we can still directly apply the definitions and corollaries from above to QPT. The notion of rank-1 complete and strictly-complete for QPT applies to the combination of probing states and POVMs used to characterize the process matrix. We can construct a combination of states and POVMs that satisfy Definitions~\ref{def:rankr_comp} and~\ref{def:rankr_strictcomp}. Therefore, rank-1 complete and strictly-complete measurements are able to fully characterize a unitary process with fewer than the $d^2$ probing states and full-IC POVM required for the standard method of QPT. We consider such methods in Chapter~\ref{ch:PT}.

In most real applications, the process is not exactly unitary due to sources of errors such as decoherence, inhomogeneity in the control, or imperfect calibrations. We call these process errors, and they cause the process matrix that describes the actual process to not match the target unitary process. This is equivalent to the preparation errors we discussed in terms of QST in Sec.~\ref{sec:estimation_wnoise}. By analogy to the robustness bounds derived above for QST, a set of rank-1 strictly-complete probing states and POVMs for QPT is robust to process errors. Moreover, by the same reasoning, such sets are also robust to errors in the preparation of the states, implementation of the POVMs, and noise in the measurement.

We have so far discussed QPT without applying the TP constraint, which was derived in Sec.~\ref{ssec:process_intro}. This constraint can be used to create sets of probing states and POVMs with fewer elements than those derived from Definitions~\ref{def:rankr_comp} and~\ref{def:rankr_strictcomp}. In Chapter~\ref{ch:PT}, we introduce such measurements and discuss how the TP constraint plays a role in their construction.

\section{Summary and conclusions}

QST is a demanding experimental protocol, but in this chapter, we showed that certain types of POVMs, called rank-$r$ complete and rank-$r$ strictly-complete, can accomplish QST more efficiently when there is prior information that the prepared state has bounded rank. This prior information corresponds to the goal of most quantum information processors, so it is reasonable in most applications. Moreover, we proved that even when the actual state is not exactly pure, strictly-complete POVMs still produce a robust estimate. This is very similar to the result for full-IC POVMs. We also generalized these results to QDT and QPT, where the same definitions and corollaries hold, since processes and readout devices are described by PSD matrices, and we often have prior information that they are bounded-rank. While strictly-complete POVMs are robust to preparation errors, we have yet to show how many POVM elements are required. We answer this question in the next chapter.

\chapter{POVMs for bounded-rank quantum state tomography} \label{ch:constructions}

In this chapter, we construct rank-$r$ complete and strictly-complete POVMs for bounded-rank QST that have significantly fewer elements than fully informationally complete (full-IC) POVMs. We present three separate construction techniques. The advantage of having multiple construction techniques is that one can choose the method that is best suited for the experimental apparatus. Many experiments have so-called natural measurements that are easier to implement.
For example, some experiments can easily measure orthonormal bases, introduced in Sec.~\ref{ssec:detector_intro}. Therefore, for these experiments, it is best to construct POVMs for bounded-rank QST that consist of bases. In each technique, we assume the ideal limit of QST, where there are no errors and the probabilities are known exactly, as this defines informational completeness. From the previous chapter, we know that if we can prove a POVM to be rank-$r$ strictly-complete in the ideal limit, then it will be robust to noise and errors. We also present examples of each construction technique, though the methods are general and can be used to build new constructions based on the specific operation of a given experiment. With these constructions we will also be able to determine which is more efficient, rank-$r$ complete or rank-$r$ strictly-complete POVMs.

\section{Decomposition methods} \label{sec:decomp_method}

The first method we consider applies to the construction of rank-1 complete POVMs. The method is based on the decomposition of a rank-1 density matrix into the state vector, $\ket{\psi}$. The state vector is described by $2d-2$ free parameters that make up the state amplitudes in some basis, $\ket{\psi} = \sum_k c_k \ket{k}$. If we take the ideal limit for QST, when the probability of each outcome is known exactly, we can relate the free parameters in $\{ c_k \}$ to the probabilities by the Born rule. If we can solve for each free parameter, then we can reconstruct $\ket{\psi}$ and thus $\rho$. Since the decomposition assumes that $\rho$ is rank-1, this technique can show whether the POVM is rank-1 complete.

An example of this technique was studied by Flammia~{\em et al.}~\cite{Flammia2005}, who introduced the following POVM,
\begin{align}\label{psi-complete} &E_0=a\ket{0}\bra{0},\nonumber \\ &E_k=b(\mathds{1}+\ket{0}\bra{k}+\ket{k}\bra{0}),\; \;k=1,\ldots,d-1,\nonumber \\ &\widetilde{E}_k=b(\mathds{1}-\textrm{i} \ket{0}\bra{k}+\textrm{i} \ket{k}\bra{0}),\; \;k=1,\ldots,d-1,\nonumber \\ &E_{2d}=\mathds{1}-\left[E_0 +\sum_{k=1}^{d-1}(E_k+\widetilde{E}_k)\right], \end{align}
with $a$ and $b$ chosen such that $E_{2d}\geq0$. When $c_0>0$, we can choose $c_0=\sqrt{p_0/a}$ (setting the phase of this amplitude to zero). The real and imaginary parts of $c_k$, $k=1,\ldots, d-1$, are related to the probabilities by $\textrm{Re}(c_k)=\frac1{2c_0}(\frac{p_k}{b}-1)$ and $\textrm{Im}(c_k)=\frac1{2c_0}(\frac{\tilde{p}_k}{b}-1)$, respectively, when we assume $\textrm{Tr}(\rho) = 1$. There is then a set of $2d -1$ quadratic equations that we can use to uniquely solve for all amplitudes, $\{ c_k \}$. When $c_0 =0$, the set of equations is not solvable; however, this occurs only on a set of zero volume, corresponding to the failure set allowed in Definition~\ref{def:rankr_comp}. The POVM has a total of $2d$ POVM elements, and therefore it is efficient compared to standard QST, which requires at least $d^2$ POVM elements. Flammia~{\em et al.}~\cite{Flammia2005} also proved this to be the minimum number of POVM elements for a rank-1 complete POVM.
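The inversion of these relations is simple enough to state as code. The following is a minimal numerical sketch (not from the thesis; the values of $a$ and $b$ are illustrative) that simulates the probabilities of the POVM in Eq.~\eqref{psi-complete} for a random pure state and reconstructs the amplitudes $\{c_k\}$ from them.
\begin{verbatim}
# Minimal sketch (illustrative only): reconstruct a pure state from the
# probabilities of the POVM in Eq. (psi-complete), assuming c_0 > 0 and Tr(rho)=1.
import numpy as np

d = 5
a, b = 0.5, 1.0 / (4 * d)  # illustrative values; a, b must be such that E_{2d} >= 0

rng = np.random.default_rng(1)
c = rng.normal(size=d) + 1j * rng.normal(size=d)
c[0] = abs(c[0])           # gauge choice: make c_0 real and positive
c /= np.linalg.norm(c)
rho = np.outer(c, c.conj())

def ket(k):
    v = np.zeros((d, 1), dtype=complex); v[k] = 1.0; return v

I = np.eye(d, dtype=complex)
E0  = a * ket(0) @ ket(0).conj().T
Ek  = [b * (I + ket(0) @ ket(k).conj().T + ket(k) @ ket(0).conj().T)
       for k in range(1, d)]
Etk = [b * (I - 1j * ket(0) @ ket(k).conj().T + 1j * ket(k) @ ket(0).conj().T)
       for k in range(1, d)]

# Born-rule probabilities
p0  = np.trace(E0 @ rho).real
pk  = np.array([np.trace(E @ rho).real for E in Ek])
ptk = np.array([np.trace(E @ rho).real for E in Etk])

# Invert the relations (valid when c_0 > 0)
c0 = np.sqrt(p0 / a)
ck = (pk / b - 1) / (2 * c0) + 1j * (ptk / b - 1) / (2 * c0)
print(np.allclose(np.concatenate(([c0], ck)), c))   # True: amplitudes recovered
\end{verbatim}
The element $E_{2d}$ is not needed for the reconstruction itself; it only completes the POVM.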
Goyeneche {\em et al.}~\cite{Goyeneche2015} constructed another POVM and proved it was rank-1 complete by this strategy. They proposed four orthogonal bases,
\begin{align}\label{4gmb} \mathbbm{B}_{1} &=\left\{ \frac{\ket{0}\pm\ket{1}}{\sqrt{2}}, \frac{\ket{2}\pm\ket{3}}{\sqrt{2}}, \ldots, \frac{\ket{d-2}\pm\ket{d-1}}{\sqrt{2}}\right\}, \nonumber \\ \mathbbm{B}_{2} &=\left\{ \frac{\ket{1}\pm\ket{2}}{\sqrt{2}}, \frac{\ket{3}\pm\ket{4}}{\sqrt{2}}, \ldots, \frac{\ket{d-1}\pm\ket{0}}{\sqrt{2}}\right\}, \nonumber \\ \mathbbm{B}_{3} &=\left\{ \frac{\ket{0}\pm \textrm{i} \ket{1}}{\sqrt{2}}, \frac{\ket{2}\pm \textrm{i} \ket{3}}{\sqrt{2}}, \ldots, \frac{\ket{d-2}\pm \textrm{i} \ket{d-1}}{\sqrt{2}}\right\}, \nonumber \\ \mathbbm{B}_{4} &=\left\{ \frac{\ket{1}\pm \textrm{i} \ket{2}}{\sqrt{2}}, \frac{\ket{3}\pm \textrm{i} \ket{4}}{\sqrt{2}}, \ldots, \frac{\ket{d-1}\pm \textrm{i} \ket{0}}{\sqrt{2}}\right\}. \end{align}
Denoting $p_{k}^{\pm}= |\frac{1}{\sqrt{2}}( \bra{k}\pm\bra{k+1}) | \psi \rangle|^2$ and $p_{k}^{\pm \textrm{i}}= |\frac{1}{\sqrt{2}}(\bra{k}\mp \textrm{i} \bra{k+1}) |\psi \rangle|^2$, we obtain $c_k^* c_{k+1} {=}\frac{1}{2}[(p_{k}^{+}-p_{k}^{-})+\textrm{i} (p_{k}^{+\textrm{i}}-p_{k}^{-\textrm{i}})]$ for $k = 0,\ldots,d-1$, where addition of indices is taken modulo $d$. We then have a set of $d$ quadratic equations, which Goyeneche {\em et al.}~\cite{Goyeneche2015} showed has a unique solution when we include the trace constraint, $\sum_k |c_k |^2 = 1$; therefore, the construction is rank-1 complete. When $c_k = 0$ and $c_{k+l} = 0$ for some $l >1$, the quadratic equations do not have a unique solution. This corresponds to the failure set of the POVM. Since the bases have a total of $4d$ POVM elements, this construction requires fewer resources than standard QST, but more elements than the minimal POVM proposed by Flammia {\em et al.}~\cite{Flammia2005}.

While the method of reconstructing the state-vector amplitudes is very intuitive, it is limited to rank-1 complete POVMs. To construct a rank-1 strictly-complete POVM, we cannot assume the pure-state structure of the measured state, as we did here. Moreover, the generalization to rank-$r$ complete constructions is not obvious. In this case, one needs to consider ensemble decompositions, $\rho = \sum_{i=0}^{r-1} \lambda_i | \psi_i \rangle \langle \psi_i |$, where $\langle \psi_i | \psi_j \rangle = \delta_{i,j}$, which require a greater number of quadratic equations.

\section{Element-probing POVMs} \label{sec:EP}

Another, more adaptable method to construct both rank-$r$ complete and rank-$r$ strictly-complete POVMs applies to a class of POVMs we will define as element-probing (EP) POVMs. An EP-POVM allows for the reconstruction of matrix elements of $\rho$. More formally, there is a linear mapping between the probabilities from an EP-POVM and the elements of the density matrix, which is an inverse of the Born rule, $\{p_{\mu}\} \rightarrow \{ \rho_{i,j} \}$. If the POVM is not full-IC, then there is necessarily only a subset of elements that can be reconstructed, called the measured elements.\footnote{An EP-POVM may give information about other parts of the density matrix besides the measured elements. For this case, we ignore this additional information and only study the measured elements.} We denote the remaining elements as the unmeasured elements. In this section, we will show that, based on the structure of the measured elements, we can determine if a given EP-POVM is rank-$r$ complete or rank-$r$ strictly-complete for any value of $r$. The POVMs considered in the previous section, given in Eq.~\eqref{psi-complete} and Eq.~\eqref{4gmb}, are in fact examples of EP-POVMs.
For Eq.~\eqref{psi-complete}, the measured elements are the first row and column of the density matrix. The probability $p_0 = \textrm{Tr}(E_0 \rho)$ trivially determines $\rho_{0,0}=\langle 0 | \rho|0\rangle$, and the probabilities $p_n=\textrm{Tr}(E_n\rho)$ and $\tilde{p}_n=\textrm{Tr}(\widetilde{E}_n\rho)$ determine $\rho_{n,0}=\langle n |\rho |0\rangle$ and $\rho_{0,n}=\langle 0 |\rho | n \rangle$, respectively. For Eq.~\eqref{4gmb}, the probabilities $\{ p_k^{\pm}, p_k^{\pm \textrm{i}}\}$ determine the density matrix elements $\rho_{k,k+1}{=}\frac{1}{2}[(p_{k}^{+}-p_{k}^{-})+\textrm{i} (p_{k}^{+\textrm{i}}-p_{k}^{-\textrm{i}})]$ for $k = 0,\ldots,d-1$, where addition of indices is taken modulo $d$.

\subsection{Linear algebra relations for EP-POVMs}

We show here how to determine whether an EP-POVM is rank-$r$ complete or strictly-complete, based on the Schur complement and the Haynsworth matrix inertia~\cite{Haynsworth1968,Zhang2011}. Consider a block-partitioned $k \times k$ Hermitian matrix,
\begin{equation} \label{block_mat} M = \begin{pmatrix} {A} & {B^{\dagger}} \\ {B} &{C} \end{pmatrix}, \end{equation}
where $A$ is an $r \times r$ Hermitian matrix, and the sizes of ${B^{\dagger}}$, $B$ and $C$ are determined accordingly. The Schur complement of $M$ with respect to $A$, assuming $A$ is nonsingular, is defined by
\begin{equation} M/A \equiv C - B A^{-1} B^{\dagger}. \end{equation}
The inertia of a Hermitian matrix is the ordered triple of the numbers of negative, zero, and positive eigenvalues of the matrix, $\textrm{In}(M)=(n_-[M], n_0[M], n_+[M])$, respectively. We will use the Haynsworth inertia additivity formula, which relates the inertia of $M$ to that of $A$ and of $M/A$~\cite{Haynsworth1968},
\begin{equation} \label{Schur_iner} \textrm{In}(M) = \textrm{In}(A)+\textrm{In}(M/A). \end{equation}
A corollary of the inertia formula is the rank additivity property,
\begin{equation} \label{Schur_rank} \textrm{rank}(M) = \textrm{rank}(A) + \textrm{rank}(M/A). \end{equation}
With these relations, we can determine the informational completeness of any EP-POVM. A similar approach was taken for classical matrix completion in Ref.~\cite{Smith2008}.

\subsection{Application to rank-$r$ complete POVMs} \label{ssec:EP_comp}

As an instructive example, we use the above relations in an alternative proof that the POVM in Eq.~\eqref{psi-complete} is rank-1 complete, without referring to the state amplitudes. The POVM in Eq.~\eqref{psi-complete} is an EP-POVM, where the measured elements are $\rho_{0,0}$, $\rho_{n,0}$ and $\rho_{0,n}$ for $n=1,\ldots,d-1$. Supposing that $\rho_{0,0}>0$ and labeling the unmeasured $(d-1)\times(d-1)$ block of the density matrix by $C$, we write
\begin{equation} \label{block_rho} \rho= \left( \begin{array}{cccc} {\rho_{0,0}} & {\rho_{0,1}}& \cdots &{\rho_{0,d-1}}\\ \cline{2-4}\multicolumn{1}{c|}{\rho_{1,0}}& {} &{}& \multicolumn{1}{c|}{} \\ \multicolumn{1}{c|}{\vdots}& {} &{\;\;\Large\textit{C}}& \multicolumn{1}{c|}{}\\ \multicolumn{1}{c|}{\rho_{d-1,0}}& {} &{}& \multicolumn{1}{c|}{}\\\cline{2-4} \end{array} \right) \end{equation}
Clearly, Eq.~\eqref{block_rho} has the same form as Eq.~\eqref{block_mat}, such that $M = \rho$, $A = \rho_{0,0}$, $B^{\dagger}=({\rho_{0,1}}\cdots {\rho_{0,d-1}})$, and $B = ({\rho_{0,1}}\cdots {\rho_{0,d-1}})^{\dagger}$. Assume $\rho$ is a pure state, so $\textrm{rank}(\rho)=1$. By applying Eq.~\eqref{Schur_rank} and noting that $\textrm{rank}(A)=1$, we obtain $\textrm{rank}(\rho/A)=0$.
This implies that $\rho/A=C - B A^{-1} B^{\dagger}=0$, or equivalently, that $C= B A^{-1} B^{\dagger}= \rho_{0,0}^{-1}B B^{\dagger}$. Therefore, by measuring every element of $A$ and $B$ (and thus of ${B^{\dagger}}$), the rank additivity property allows us to algebraically reconstruct $C$ uniquely without measuring it directly. Thus, the entire density matrix is determined by measuring its first row and column. Since we used the assumption that $\textrm{rank}(\rho){=}1$, the reconstructed state is unique within the set ${\cal S}_1$, and the POVM is rank-1 complete. This algebraic reconstruction of the rank-$1$ density matrix works as long as $\rho_{0,0}\neq0$. When $\rho_{0,0}=0$, the Schur complement is not defined, and Eq.~\eqref{Schur_rank} does not apply. This, however, only happens on a set of states of measure zero (the failure set), i.e., the set of states where $\rho_{0,0} = 0$ exactly. It is exactly the same set found by Flammia~{\em et al.}~\cite{Flammia2005}.

The above technique can be generalized to determine if any EP-POVM is rank-$r$ complete for a state $\rho\in{\cal S}_r$. In general, the structure of the measured elements will not be as convenient as in the example considered above. Our approach is to study $k \times k$ principal submatrices (square submatrices that are centered on the diagonal) of $\rho$ such that $k > r$. Since $\rho$ is a rank-$r$ matrix, it has at least one nonsingular $r \times r$ principal submatrix,
\begin{equation} \rho = \begin{pmatrix} \ddots& & \\ & \begin{pmatrix} \underset{(k \times k)}{M} \end{pmatrix} & \\ & & \ddots \end{pmatrix}. \end{equation}
Assume for now that a given $k \times k$ principal submatrix, $M$, contains a nonsingular $r \times r$ principal submatrix $A$. We can apply a $k \times k$ unitary, $U$, to map the submatrix $M$ to the form in Eq.~\eqref{block_mat},
\begin{equation} U M U^{\dagger}= \begin{pmatrix} \underset{(r \times r)}{A} & \underset{(r \times k-r)}{B^{\dagger}} \\ \underset{(k-r \times r)}{B} & \underset{(k-r \times k-r)}{C} \end{pmatrix}. \end{equation}
From Eq.~\eqref{Schur_rank}, since $\textrm{rank}(M) = \textrm{rank}(A) = r$, $\textrm{rank}(M/A) = 0$, and therefore $C = B A^{-1}B^{\dagger}$. This motivates our choice of $M$. If the measured elements make up $A$ and $B$ (and $B^{\dagger}$), then we can solve for $C$, and we have fully characterized $U M U^{\dagger}$, and therefore also $M$. An example application is considered in Appendix~\ref{app:constructions}. In general, an EP-POVM may measure multiple subspaces, $M_i$, and we can reconstruct $\rho$ only when the corresponding $A_i$, $B_i$, $C_i$ cover all elements of $\rho$. We label the set of all principal submatrices that are used to construct $\rho$ by $\bm{M} = \{M_i\}$. Since we can reconstruct a unique state within the set $\mathcal{S}_r$, this is a general description of a rank-$r$ complete EP-POVM. The failure set, in which the measurement fails to reconstruct $\rho$, corresponds to the set of states that are singular on any of the $A_i$ subspaces.
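The algebraic reconstruction described above is easy to demonstrate numerically. The following minimal sketch (not from the thesis) completes a random rank-1 density matrix from its measured first row and column, exactly as in the proof for the POVM of Eq.~\eqref{psi-complete}.
\begin{verbatim}
# Minimal sketch (illustrative only): complete a rank-1 density matrix from its
# first row and column via the Schur complement relation C = B A^{-1} B^dagger.
import numpy as np

d = 6
rng = np.random.default_rng(2)
c = rng.normal(size=d) + 1j * rng.normal(size=d)
c /= np.linalg.norm(c)
rho = np.outer(c, c.conj())            # the rank-1 "true" state

A = rho[0, 0]                          # measured element rho_{0,0}
B = rho[1:, 0].reshape(-1, 1)          # measured first column (rho_{n,0})

# Rank additivity gives rank(rho/A) = 0, i.e. the unmeasured block is B A^{-1} B^dag
C = B @ B.conj().T / A

rho_rec = np.zeros((d, d), dtype=complex)
rho_rec[0, 0] = A
rho_rec[1:, 0] = B.ravel()
rho_rec[0, 1:] = B.ravel().conj()
rho_rec[1:, 1:] = C

print(np.allclose(rho_rec, rho))       # True whenever rho_{0,0} != 0
\end{verbatim}
The reconstruction fails exactly on the failure set $\rho_{0,0}=0$, where $A$ is singular.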
\subsection{Application to rank-$r$ strictly-complete POVMs} \label{ssec:EP_strict}

The framework defined above also allows us to determine if a given EP-POVM is strictly-complete. As an example, consider the rank-1 complete POVM in Eq.~\eqref{psi-complete}. Since $\rho/A=0$, by applying the inertia additivity formula to $\rho$ we obtain,
\begin{equation} \textrm{In}(\rho) = \textrm{In}(A)+\textrm{In}(\rho/A)=\textrm{In}(A). \end{equation}
This implies that $A$ is a positive semidefinite (PSD) matrix, since $\rho$ is, by definition, a PSD matrix. For the POVM in Eq.~\eqref{psi-complete}, $A=\rho_{0,0}$, so this equation is a re-derivation of the trivial condition $\rho_{0,0}\geq0$. Let us assume that the POVM is not rank-$1$ strictly-complete. If so, there must exist a PSD matrix, $\sigma\geq0$, with $\textrm{rank}(\sigma)>1$, that has the same measurement vector, and thus the same measured elements as $\rho$, but different unmeasured elements. We define this difference by $V\neq0$, and write
\begin{equation} \label{block_mat_sigma} \sigma= \left( \begin{array}{cccc} {\rho_{0,0}} & {\rho_{0,1}}& \cdots &{\rho_{0,d-1}}\\ \cline{2-4}\multicolumn{1}{c|}{\rho_{1,0}}& {} &{}& \multicolumn{1}{c|}{} \\ \multicolumn{1}{c|}{\vdots}& {} &{\;\;\Large{\textit{C}}+\!\Large{\textit{V}}}& \multicolumn{1}{c|}{}\\ \multicolumn{1}{c|}{\rho_{d-1,0}}& {} &{}& \multicolumn{1}{c|}{}\\\cline{2-4} \end{array} \right)=\rho+\begin{pmatrix} {0} & {\bf 0}\\ {\bf 0} & V \end{pmatrix}. \end{equation}
Since $\sigma$ and $\rho$ have the same probabilities, for all $\mu$, $\textrm{Tr}(E_\mu\sigma)=\textrm{Tr}(E_\mu\rho)$. Summing over $\mu$ and using $\sum_\mu E_\mu=\mathds{1}$, we obtain that $\textrm{Tr}(\sigma)=\textrm{Tr}(\rho)$. This implies that $V$ must be a traceless Hermitian matrix; hence, $n_-(V) \geq 1$. Using the inertia additivity formula for $\sigma$ gives,
\begin{equation} \textrm{In}(\sigma) = \textrm{In}(A)+\textrm{In}(\sigma/A). \end{equation}
By definition, the Schur complement is
\begin{equation} \sigma/A=C +V - B A^{-1} B^{\dagger}=\rho/A+V=V. \end{equation}
The inertia additivity formula for $\sigma$ thus reads,
\begin{equation} \textrm{In}(\sigma) = \textrm{In}(A)+\textrm{In}(V). \end{equation}
Since $A=\rho_{0,0}>0$, $n_-(\sigma) = n_-(V) \geq 1$, so $\sigma$ has at least one negative eigenvalue, in contradiction to the assumption that it is a PSD matrix. Therefore, $\sigma \not\geq 0$ and we conclude that the POVM in Eq.~\eqref{psi-complete} is rank-1 strictly-complete.

A given rank-$r$ complete POVM is not necessarily rank-$r$ strictly-complete in the way that the POVM in Eq.~\eqref{psi-complete} is. For example, the bases in Eq.~\eqref{4gmb} correspond to a rank-$1$ complete POVM, but not to a rank-$1$ strictly-complete POVM. For these bases, we can apply a similar analysis to show that there exists a quantum state $\sigma$ with $\textrm{rank}(\sigma)>1$ that matches the measured elements of $\rho$.

Given this structure, we derive the necessary and sufficient condition for a rank-$r$ complete EP-POVM to be rank-$r$ strictly-complete. Using the notation introduced above, let us choose an arbitrary principal submatrix $M\in \bm{M}$ that was used to construct $\rho$. Such a matrix has the form of Eq.~\eqref{block_mat} where $C=BA^{-1}B^\dagger$. Let $\sigma$ be a higher-rank matrix that has the same measured elements as $\rho$, and let $\tilde{M}$ be the submatrix of $\sigma$ that spans the same subspace as $M$. Since $\sigma$ has the same measured elements as $\rho$, $\tilde{M}$ must have the form
\begin{equation}\label{Mtilde} \tilde{M} = \begin{pmatrix} A & B^{\dagger} \\ B & \tilde{C} \end{pmatrix}\equiv\begin{pmatrix} A & B^{\dagger} \\ B & C + V \end{pmatrix}=M+\begin{pmatrix} {\bf 0} & {\bf 0} \\ {\bf 0} & V \end{pmatrix}. \end{equation}
Then, from Eq.~\eqref{Schur_iner}, $\textrm{In}(\tilde{M}) = \textrm{In}(A) + \textrm{In}(\tilde{M}/A) = \textrm{In}(A) + \textrm{In}(V)$, since $\tilde{M}/A = M/A + V = V$.
A matrix is PSD if and only if all of its principal submatrices are PSD~\cite{Zhang2011}. Therefore, $\sigma \geq 0$ if and only if $\tilde{M}\geq 0$, and $\tilde{M} \geq 0$ if and only if $n_-(A) + n_-(V) = 0$. Since $\rho\geq0$, all of its principal submatrices are PSD, and in particular $A\geq0$. Therefore, $\sigma \geq 0$ if and only if $n_-(V) = 0$. We can repeat this logic for all other submatrices $M \in \bm{M}$. Hence, we conclude that the measurement is rank-$r$ strictly-complete if and only if there exists at least one submatrix $M \in \bm{M}$ for which every $V$ that we may add (as in Eq.~\eqref{Mtilde}) has at least one negative eigenvalue.

A sufficient condition for an EP-POVM to be rank-$r$ strictly-complete is given in the following proposition.
\begin{proposition} \label{prop1} Assume that an EP-POVM is rank-$r$ complete. If its measurement outcomes determine the diagonal elements of the density matrix, then it is a rank-$r$ strictly-complete POVM. \end{proposition}
{\em Proof.} Consider a Hermitian matrix $\sigma$ that has the same measurement probabilities as $\rho$, and thus the same measured elements. If we measure all diagonal elements of $\rho$ (and thus of $\sigma$), then for any principal submatrix $\tilde{M}$ of $\sigma$, {\em cf.} Eq.~\eqref{Mtilde}, the corresponding $V$ is traceless, because all the diagonal elements of $C$ are measured. Since $V$ is Hermitian and traceless, it must have at least one negative eigenvalue; therefore, $\sigma$ is not a PSD matrix and the POVM is rank-$r$ strictly-complete. $\square$

\noindent A useful corollary of this proposition is that any EP-POVM that is rank-$r$ complete can be made rank-$r$ strictly-complete simply by adding POVM elements that determine the diagonal elements of the density matrix.

\section{Random bases} \label{sec:random_bases}

The final technique we consider for constructing strictly-complete POVMs is to measure a collection of random orthonormal bases. Measurement with random bases has been studied in the context of compressed sensing (see, e.g., Refs.~\cite{Kueng2014,Acharya2016}). However, when taking into account the positivity of density matrices, we obtain strict-completeness with fewer measurements than required for compressed sensing~\cite{Kalev2015}. Therefore, strict-completeness is not equivalent to compressed sensing. While for quantum states all compressed sensing measurements are strictly-complete~\cite{Kalev2015}, not all strictly-complete measurements satisfy the conditions required for compressed sensing estimators.

We perform numerical experiments to determine rank-$r$ strictly-complete measurements for $r=1,2,3$. To achieve this, we take the ideal case where the measurement outcomes are known exactly and the rank of the state is fixed. We consider two types of measurements on a variety of different dimensions: (i) a set of Haar-random orthonormal bases on unary qudit systems with dimensions $d=11, 16, 21, 31, 41$, and $51$; and (ii) a set of local Haar-random orthonormal bases on a tensor product of $n$ qubits with $n=3,4,5$, and $6$, corresponding to $d=8, 16, 32$, and $64$, respectively. For each dimension, and for each rank, we generate $25d$ Haar-random states. For each state, we calculate the noiseless probability vector, $\bm{p}$, with an increasing number of bases. After each new basis measurement we use the constrained least-squares (LS) program, Eq.~\eqref{general_norm_positive_CS}, where $\Vert\cdot\Vert$ is the $\ell_2$-norm, to produce an estimate of the state.
We emphasize that the constrained LS program finds the quantum state that is most consistent with ${\bf p}$, without restrictions on the rank. The procedure is repeated until all estimates match the states used to generate the data (up to a numerical error of $10^{-5}$ in infidelity). This indicates that the random bases used correspond to a rank-$r$ strictly-complete POVM.

\begin{table}[ht] \centering \begin{tabular}{ cc|c|c|c|c|c|||c|c|c|c| } & \multicolumn{9}{c}{\bf{Dimension}}\\ \cline{2-11} & \multicolumn{6}{|c|||}{Unary} & \multicolumn{4}{c| }{Qubits}\\ \multicolumn{1}{ c|| }{\bf{Rank}} &\bf{11} &\bf{16}&\bf{21} &\bf{31} &\bf{41}&\bf{51} &\bf{8} &\bf{16} &\bf{32}&\bf{64} \\ \hline\hline \multicolumn{1}{ |c|| }{\bf{1}} &\multicolumn{6}{c|||}{6} & \multicolumn{4}{c|}{6} \\\cline{1-11} \multicolumn{1}{ |c|| }{\bf{2}} & 7 & 8 & 8 & \multicolumn{3}{ c||| } {9}& \multicolumn{2}{ c| } {9} & \multicolumn{2}{ c| } {10} \\\cline{1-11} \multicolumn{1}{ |c|| }{\bf{3}} & 9 & 10 & 11 & 12 & 12 & 13 & 12 & \multicolumn{3}{ c| } {15}\\\cline{1-11} \hline \end{tabular} \caption[Number of random orthonormal bases required for strict-completeness]{{\bf Number of random orthonormal bases corresponding to strict-completeness.} Each cell lists the minimal number of measured bases for which the infidelity was below $10^{-5}$ for each of the tested states in the given dimensions and ranks. This indicates that a measurement of only a few random bases is a strictly-complete POVM.}\label{tbl:noiseless} \end{table}

We present our findings in Table~\ref{tbl:noiseless}. For each dimension, we also tested fewer bases than listed in the table. These bases return infidelity below $10^{-5}$ for most states, but not all. For example, in the unary system with $d=21$, using the measurement record from $5$ bases we can reconstruct all but one state with an infidelity below the threshold. The results indicate that measuring only a few random bases, with weak dependence on the dimension, corresponds to a strictly-complete POVM for low-rank quantum states. Moreover, the difference between, say, rank-1 and rank-2 amounts to measuring only a few more bases. This is important, as discussed below, in realistic scenarios when the state of the system is known to be close to pure. Finally, when considering local measurements on tensor products of qubits, more bases are required for strict-completeness compared to the unary system; see, for example, the results for $d=16$. We do not know if any of these bases suffer from a failure set, but we see no evidence of one in our numerical simulations.

\section{Numerical studies of constructions with noise and errors} \label{sec:strict_comp_nums}

The techniques described in the previous sections allow for the construction of different rank-$r$ strictly-complete POVMs. However, we have yet to study how these measurements perform in the presence of noise and errors. In Sec.~\ref{ssec:rankr_strictcomp_robustness} we saw that rank-$r$ strictly-complete measurements are robust to all sources of noise and errors, but we do not have an analytic form for the constant $\alpha$ in Eq.~\eqref{strict_comp_ineq} that describes the robustness. We can, however, use numerics to estimate this constant for a given POVM. To determine $\alpha$, we generate many pairs of quantum states, one rank-$r$, $\rho_r$, and one full-rank, $\sigma$.
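As a concrete illustration, the following minimal sketch (not the thesis code; the measurement here is a few Haar-random bases, and all parameter values are illustrative) generates one such pair of states and evaluates the ratio that appears in Eq.~\eqref{robust_ineq} below; the state-generation procedure is described in prose immediately after the sketch.
\begin{verbatim}
# Minimal sketch (illustrative only): one sample of the ratio
# ||rho_r - sigma||_2 / ||M[rho_r - sigma]||_2, bounded by 1/alpha in Eq. (robust_ineq),
# for a measurement map M built from a few Haar-random orthonormal bases.
import numpy as np

rng = np.random.default_rng(3)

def haar_unitary(d):
    z = (rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def random_state(d, rank):
    lam = np.zeros(d)
    lam[:rank] = rng.random(rank)
    lam /= lam.sum()                        # renormalized eigenvalues
    U = haar_unitary(d)
    return U @ np.diag(lam) @ U.conj().T    # U diag[lambda] U^dagger

d, r, n_bases = 11, 2, 7
rho_r = random_state(d, r)                  # rank-r state
sigma = random_state(d, d)                  # full-rank state

bases = [haar_unitary(d) for _ in range(n_bases)]
def M(X):                                   # probabilities of all basis vectors
    return np.concatenate([np.real(np.diag(V.conj().T @ X @ V)) for V in bases])

Delta = rho_r - sigma
print(np.linalg.norm(Delta) / np.linalg.norm(M(Delta)))
\end{verbatim}
Repeating this over many pairs of states gives the histograms discussed below.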
To generate each state, we first select a random unitary $U$ from the Haar measure and a $d$-dimensional vector, $\vec{\lambda}$, which has $r$ nonzero entries and $d-r$ zero entries. We renormalize $\vec{\lambda}$ such that $\sum_i \lambda_i = 1$. Then, the random rank-$r$ state is defined by $\rho_r = U {\rm diag}[\vec{\lambda}] U^{\dagger}$, where the operation ${\rm diag}[\cdot]$ places the vector on the diagonal of an otherwise zero matrix. The full-rank state is generated by choosing all $d$ elements of $\vec{\lambda}$ to be nonzero, which is equivalent to generating a mixed state by the Hilbert-Schmidt measure. We then calculate the ratio of the HS-distance between the states to the distance between the measurement records, which is bounded by $1/\alpha$,
\begin{equation} \label{robust_ineq} \frac{\| \rho_r - \sigma\|_2}{\| \mathcal{M}[ \rho _r - \sigma] \|_2} \leq \frac{1}{\alpha}, \end{equation}
where $\mathcal{M}$ represents the map of a rank-$r$ strictly-complete POVM. We test three different types of POVMs in various dimensions: a qudit measured with Haar-random bases for $d = 11,16,21,$ and 31; a collection of $n=3,4,5,$ and 6 qubits measured with a series of Haar-random bases on each qubit; and finally $n=3,4,5,$ and 6 qubits measured with a rank-$r$ generalization of the measurement proposed by Goyeneche~{\em et al.}~\cite{Goyeneche2015}, defined in Appendix~\ref{app:GMB}. A similar study was performed in Ref.~\cite{Carmeli2016} for a different rank-1 strictly-complete measurement.

\begin{figure}\label{fig:bounds} \end{figure}

By Definition~\ref{def:rankr_strictcomp}, we know $\alpha$ is not zero and therefore $1/\alpha$ is bounded. However, an arbitrarily large value of $1/\alpha$ makes the robustness bound in Corollary~\ref{cor:robustness} blow up. We see in Fig.~\ref{fig:bounds} that the values of $\| \rho_r - \sigma\|_2/\| \mathcal{M}[ \rho _r - \sigma] \|_2$ are concentrated in peaks. As the ratio grows, the number of times we observe it in the numerics goes to zero, which means that it is very unlikely to obtain the largest values of the ratio. We can also see that the position of the peaks depends strongly on the dimension and rank. As the dimension increases, the peak shifts to the right, i.e., toward larger ratios. As the rank increases, the peak shifts to the left, i.e., toward smaller ratios. If we let $1/\alpha_{\rm max}$ be the maximum value, then from Fig.~\ref{fig:bounds} we see that $1/\alpha_{\rm max}$ is not too large (the maximum value over all ranks, dimensions, and measurements is $4.7784$). Therefore, the robustness bound will likely not blow up for the three different measurements considered.

In order to determine the success of each measurement for QST, we perform a numerical study with realistic noise and errors. We simulate a realistic scenario where the state of the system is full-rank but of high purity, and the experimental data contain statistical noise but no measurement errors. From Corollary~\ref{cor:robustness} we expect to obtain a robust estimation of the state by solving any convex estimator of the form of Eqs.~\eqref{general_positive_CS_noisy} and~\eqref{general_norm_positive_CS_noisy}. We calculate three estimates (using the MATLAB package CVX~\cite{cvx}) from the following programs: trace-minimization (given in Eq.~\eqref{Tr-min}), constrained least-squares (given in Eq.~\eqref{LS}), and maximum-likelihood (given in Eq.~\eqref{ML}). In the trace-minimization program the trace constraint is not included; hence $\hat\rho=\hat{X}/\textrm{Tr}(\hat{X})$.

\begin{figure}\label{fig:noisy} \end{figure}
We apply the three measurements discussed above, in three selected dimensions, to this realistic scenario. For each measurement and dimension we generate 100 Haar-random pure states (target states), $\{| \psi\rangle\}$, and create the actual prepared state, $\sigma =(1- q) | \psi \rangle \langle \psi | + q \tau$, where $q = 10^{-3}$, and $\tau$ is a random full-rank state generated from the Hilbert-Schmidt measure by the same procedure described above. The measurement vector, $\bm{f}$, is simulated by sampling $m = 300 d$ trials from the corresponding probability distribution. For each number of measured bases, we estimate the state with the three different convex optimization programs listed above. In Fig.~\ref{fig:noisy} we plot the average infidelity (over all tested states) between the target state, $ | \psi \rangle$, and its estimate, $\hat\rho$, $1-\overline{\langle\psi|\hat\rho| \psi \rangle}$. As ensured by Corollary~\ref{cor:robustness}, the three convex programs we used robustly estimate the state with a number of bases that corresponds to a rank-1 strictly-complete POVM, that is, six bases for the case of Haar-random basis measurements, and five bases for the construction of Goyeneche {\em et al.}~\cite{Goyeneche2015}, reviewed in Appendix~\ref{app:GMB}. Furthermore, in accordance with our findings, if one includes the measurement outcomes of only a few more bases, such that the overall POVM is rank-$2$ strictly-complete or higher, the estimation improves accordingly. The study does not provide evidence that the failure set impairs the estimation, despite the large magnitude of noise. The GMB construction is known to suffer from such a failure set but still produces a robust estimate. It is unknown whether the random bases suffer from such a failure set, but both types of random bases produce robust estimates.

\section{Constructions for QDT} \label{sec:constructions_QDT}

As discussed in Sec.~\ref{ssec:br_QDT}, in QDT we typically have prior information about the POVM elements, for example, that they are rank-1 operators. We can apply the same techniques used to construct target POVMs for bounded-rank QST to construct sets of probing states for bounded-rank QDT. We express each unknown POVM element in the rank-1 decomposition,
\begin{equation} F_{\mu} = | \tilde{\phi}_{\mu} \rangle \langle \tilde{\phi}_{\mu} |, \end{equation}
where $\ket{\tilde{\phi}_{\mu}} = \sum_k e_k^{(\mu)} \ket{k}$ is an unnormalized state vector. We use the letter $F$ for the unknown POVM element to differentiate it from the known POVM elements discussed in the previous sections for QST. In QDT, we measure the POVM elements by applying the unknown readout device to a set of known probing quantum states, $\{ \rho_{\nu} \}$. The conditional probability of getting outcome $\mu$ for the $\nu$th state is then,
\begin{equation} p_{\mu,\nu} = \textrm{Tr}(F_{\mu} \rho_{\nu} ) = \langle \tilde{\phi}_{\mu} | \rho_{\nu} | \tilde{\phi}_{\mu} \rangle. \end{equation}
Therefore, constructing the set of probing states for bounded-rank QDT is very similar to constructing the POVM for bounded-rank QST. In fact, any POVM for QST can be translated to a set of probing states for QDT. For example, given $\{ E_{\nu} \}$, which is a POVM for QST, we can translate each element to a probing state for QDT. Since each element is already positive, all that is required is normalization,
\begin{equation} \rho_{\nu} = \frac{E_{\nu}}{\textrm{Tr}(E_{\nu})}. \end{equation}
However, the translation does not guarantee that the informational completeness of $\{ \rho_{\nu} \}$ is the same as the informational completeness of $\{ E_{\nu} \}$.
For example, if $\{ E_{\nu} \}$ is rank-1 strictly-complete for QST, the set $\{ \rho_{\nu} \}$ is not necessarily rank-1 strictly-complete for QDT. Let us consider a concrete example, the POVM in Eq.~\eqref{psi-complete}. The translated probing states are then,
\begin{align}\label{QDT_psi-complete} &\rho_0=\ket{0}\bra{0},\nonumber \\ &\rho_k=\frac{1}{d}(\mathds{1}+\ket{0}\bra{k}+\ket{k}\bra{0}),\; \;k=1,\ldots,d-1,\nonumber \\ &\widetilde{\rho}_k=\frac{1}{d}(\mathds{1}-\textrm{i} \ket{0}\bra{k}+\textrm{i} \ket{k}\bra{0}),\; \;k=1,\ldots,d-1, \end{align}
where we omitted the translation of the final POVM element since it is not required in the proof of rank-1 completeness. Similarly to the discussion in Sec.~\ref{sec:decomp_method}, we can reconstruct the amplitudes $\{ e_k^{(\mu)} \}$ from the probability of each outcome. When $e_0^{(\mu)}>0$, we find that $e_0^{(\mu)}=\sqrt{p_0}$. The real and imaginary parts of $e_k^{(\mu)}$, $k=1,\ldots, d-1$, are related to the probabilities by $\textrm{Re}(e_k^{(\mu)})=\frac1{2e_0^{(\mu)}}(d\, p_k-\textrm{Tr}(F_{\mu}))$ and $\textrm{Im}(e_k^{(\mu)})=\frac1{2e_0^{(\mu)}}(d\,\tilde{p}_k-\textrm{Tr}(F_{\mu}))$. However, unlike with Eq.~\eqref{psi-complete} for QST, we cannot solve these equations for $\textrm{Re}(e_k^{(\mu)})$ and $\textrm{Im}(e_k^{(\mu)})$, since we do not know $\textrm{Tr}(F_{\mu})$. In QST, all POVMs measure the trace of the density matrix due to the constraint $\sum_{\mu} E_{\mu} = \mathds{1}$. However, from Eq.~\eqref{QDT_psi-complete}, $\sum_{\nu} \rho_{\nu} \neq \mathds{1}$. Therefore, in order to form a rank-1 complete set of probing states, we need to complement Eq.~\eqref{QDT_psi-complete} with a set of probing states that measures the trace of the POVM element, $\textrm{Tr}(F_{\mu})$, for example the maximally mixed state $\rho = \frac{1}{d} \mathds{1}$. If it is not easy to create the maximally mixed state, the set $\rho_k = | k \rangle \langle k|$ for $k = 0,\ldots, d-1$ accomplishes the trace measurement as well. The same translation can be applied to the other constructions provided in Sec.~\ref{sec:decomp_method} and Sec.~\ref{sec:EP}.

We can also apply the same type of numerical analysis to QDT as was discussed in Sec.~\ref{sec:random_bases} for QST. For QDT, instead of random bases, we generate random quantum states from some measure, e.g., Bures or Hilbert-Schmidt, and use them to probe a bounded-rank POVM element with random trace. We produce the corresponding probability vector for various numbers of probing states and apply the LS program to reconstruct the POVM element. When the reconstruction matches the original POVM element exactly, the set of probing states is likely rank-$r$ strictly-complete.

\section{Summary and conclusions}

We provided methods to construct rank-$r$ complete and rank-$r$ strictly-complete POVMs. Having multiple methods allows one to create the POVM that is best suited for a given experiment. We also provided a way of comparing rank-$r$ strictly-complete POVMs by numerically estimating the robustness constant, $\alpha$, in Eq.~\eqref{robust_ineq}. We generalized these results to QDT and showed how to translate a POVM for bounded-rank QST to a set of probing states for QDT. In the previous chapter, we showed that rank-$r$ strictly-complete POVMs are compatible with convex optimization, and therefore offer an advantage over rank-$r$ complete POVMs.
In this chapter, we saw that there is little difference between the number of POVM elements in a rank-$r$ complete POVM and the number in a rank-$r$ strictly-complete POVM. Therefore, there is no known advantage to rank-$r$ complete measurements, which reinforces our conclusion that rank-$r$ strictly-complete measurements are superior for QST.

\chapter{Process tomography of unitary and near-unitary quantum maps} \label{ch:PT}

Quantum process tomography (QPT) is an even more demanding task than QST. In order to estimate an arbitrary quantum process, standard methods require $\mathcal{O}(d^4)$ measurements. This makes even small systems, e.g., three or more qubits, impractical for experimental application. However, in QPT, there is usually prior information about the applied quantum process, much like in QST where there is prior information about the quantum state. Most quantum information protocols require unitary maps, and therefore many experimental implementations try to engineer processes that are as close as possible to unitary. Through previous diagnostic procedures, e.g., randomized benchmarking~\cite{Knill2008,Magesan2012}, there is usually high confidence that the applied map is close to a target unitary. In this chapter we will demonstrate that such prior information can be used to drastically reduce the resources for QPT.

Previous works have developed methods to diagnose devices that are designed to implement target unitary maps. Reich {\em et al.}~\cite{Reich2013} showed that by choosing specially designed sets of probe states, one can efficiently estimate the fidelity between an applied quantum process and a target unitary map. Gutoski {\em et al.}~\cite{Gutoski2014} showed that the measurement of $4d^2-2d-4$ Pauli-like Hermitian (i.e., two-outcome) observables is sufficient to discriminate a unitary map from all other unitary maps, while identifying a unitary map from the set of all possible CPTP maps requires a measurement of $5d^2-3d-5$ such observables. These types of measurements, as well as the measurements we present in this chapter, are analogous to the rank-1 complete and rank-1 strictly-complete POVMs we discussed in the context of QST.

In this chapter, we further study unitary QPT to establish the most efficient methods. We numerically show that some of these methods are equivalent to strict-completeness, and therefore robust to noise and errors. We additionally study the performance of these methods in the presence of noise and errors. We also find that while all estimators are robust, not all estimators behave the same with different sources of errors. We demonstrate that this difference can be used to diagnose sources of errors in the implementation of the unitary.

\section{Standard techniques for QPT}\label{sec:PT_review}

We begin by reviewing quantum processes and expanding on the basic definitions given in Sec.~\ref{ssec:process_intro}. An unknown quantum process, $\mathcal{E}[\cdot]$, which is a dynamical map on operator space, is represented by a process matrix $\chi$,
\begin{equation}\label{chiOp} \chi=\sum_{\alpha,\beta=1}^{d^2}\chi_{\alpha,\beta} | \Upsilon_{\alpha} )( \Upsilon_{\beta}|, \end{equation}
where $\{\Upsilon_{\alpha}\}$ is an orthonormal operator basis, $( \Upsilon_{\alpha} | \Upsilon_{\beta} ) =\delta_{\alpha, \beta}$. A completely positive (CP) quantum process is represented by a PSD process matrix.
A trace-preserving (TP) quantum process has a process matrix that satisfies,
\begin{equation} \sum_{\alpha, \beta} \chi_{\alpha, \beta} \Upsilon^{\dagger}_{\beta} \Upsilon_{\alpha} = \mathds{1}. \end{equation}
A rank-1 process matrix corresponds to a unitary map, $\chi = | U )( U|$.

The choice of basis for $\chi$ can have important consequences in estimation programs for QPT. One choice of $\{\Upsilon_{\alpha} \}$ is the ``standard'' basis $\{\Upsilon_{\alpha} =\Upsilon_{ij}=\ket{i}\bra{j}\}$, where the label $\alpha=1,\ldots,d^2$ is replaced by the pair $ij$, with $i,j=0,\ldots,d-1$. However, we can make many choices for $\{ \Upsilon_{\alpha} \}$, as will be the case in subsequent sections. Additionally, we can express $\chi$ in diagonal form,
\begin{equation}\label{chiOpDiag} \chi=\sum_{\alpha=1}^{d^2}\lambda_{\alpha} | V_{\alpha} )( V_{\alpha}|, \end{equation}
with eigenvalues $\lambda_{\alpha}$ and eigenvectors $| V_{\alpha} )$.

In standard QPT, we prepare a set of $d^2$ input states, $\{ \rho^{\rm in}_{\nu} \}$, and evolve them with the unknown quantum process to a set of output states, $\mathcal{E}[\rho^{\rm in}_{\nu}] = \rho^{\rm out}_{\nu}$. The output states are then measured with a POVM, $\{ E_{\mu} \}$. The probability of observing outcome $\mu$ for state $\rho^{\rm out}_{\nu}$ is $p_{\mu,\nu} = \textrm{Tr}(\rho^{\rm out}_{\nu} E_{\mu})$, and it is expressed in terms of the process matrix using Eq.~\eqref{chiOp},
\begin{align} \label{measChi} p_{\mu,\nu} &= \textrm{Tr} \left[ \sum_{\alpha, \beta =1}^{d^2}\chi_{\alpha,\beta}\Upsilon_{\alpha} \rho_{\nu}^{\rm in} \Upsilon_{\beta}^\dagger E_{\mu} \right], \nonumber \\ &= \textrm{Tr}[\mathpzc{D}_{\mu,\nu} \chi]. \end{align}
Here $(\mathpzc{D}_{\mu,\nu})_{\alpha,\beta} \triangleq \textrm{Tr} [\rho_{\nu}^{\rm in}\, \Upsilon^{\dagger}_{\beta} E_{\mu} \Upsilon_{\alpha} ]$ are the elements of a $d^2 \times d^2$ matrix. The standard set of probing states was introduced in Ref.~\cite{Chuang1997} as,
\begin{align}\label{op basis nc} &\ket{k},\; k=0,\ldots,d-1,\nonumber \\ &\frac{1}{\sqrt{2}}(\ket{k}+\ket{n}),\; k=0,\ldots,d-2,\;n=k+1,\ldots,d-1,\nonumber \\ &\frac{1}{\sqrt{2}}(\ket{k}+{\rm i}\ket{n}),\; k=0,\ldots,d-2,\;n=k+1,\ldots,d-1, \end{align}
and these states form a linearly independent set that spans the operator space. In standard QPT, we measure the output of such states with a full-IC POVM, such as the SIC or MUB, introduced in Sec.~\ref{ssec:noiseless_ST}. Therefore, standard QPT requires implementing at least $d^4$ POVM elements to reconstruct an arbitrary CP map.
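To fix conventions, the following minimal sketch (not from the thesis) builds the process matrix $\chi = |U)(U|$ of a Haar-random unitary in the standard basis $\Upsilon_{ij}=\ket{i}\bra{j}$ and checks both the TP condition and that $\chi$ reproduces the action $\mathcal{E}[\rho]=U\rho U^{\dagger}$ used in Eq.~\eqref{measChi}.
\begin{verbatim}
# Minimal sketch (illustrative only): chi = |U)(U| in the standard operator basis,
# with checks of the TP condition and of the reconstruction of E[rho] = U rho U^dag.
import numpy as np

d = 3
rng = np.random.default_rng(4)
z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
q, r = np.linalg.qr(z)
U = q * (np.diag(r) / np.abs(np.diag(r)))        # Haar-random unitary

# Standard basis Upsilon_{ij} = |i><j|, flattened to a single index alpha
Ups = [np.outer(np.eye(d)[i], np.eye(d)[j]) for i in range(d) for j in range(d)]

# |U): expansion coefficients of U in this basis, so chi = |U)(U| is rank-1 and PSD
U_vec = np.array([np.trace(Y.conj().T @ U) for Y in Ups])
chi = np.outer(U_vec, U_vec.conj())

# Trace-preserving condition: sum_{alpha,beta} chi_{alpha,beta} Y_beta^dag Y_alpha = 1
TP = sum(chi[a, b] * Ups[b].conj().T @ Ups[a]
         for a in range(d * d) for b in range(d * d))
print(np.allclose(TP, np.eye(d)))                # True

# The process action reconstructed from chi agrees with U rho U^dag
rho_in = np.outer(np.eye(d)[0], np.eye(d)[0])    # an arbitrary input state
out = sum(chi[a, b] * Ups[a] @ rho_in @ Ups[b].conj().T
          for a in range(d * d) for b in range(d * d))
print(np.allclose(out, U @ rho_in @ U.conj().T)) # True
\end{verbatim}
With this convention, the probability in Eq.~\eqref{measChi} is simply $\textrm{Tr}(E_\mu\, U\rho_\nu^{\rm in}U^\dagger)$, as expected.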
\section{Numerical methods for QPT} \label{sec:PT_num}

In any application of QPT, there will necessarily exist sources of noise and errors that affect the measurement vector. Therefore, in order to characterize the quantum process in question, one must employ numerical estimators. The optimal solution of an estimator for QPT can be found using convex semidefinite programs (SDPs), given the convex constraints $\chi \geq 0$ (CP constraint) and $\sum_{\alpha,\beta} \chi_{\alpha,\beta} \Upsilon_{\beta}^{\dagger} \Upsilon_{\alpha} = \mathds{1}$ (TP constraint). We consider three programs for QPT, two of which are based on the classical technique of compressed sensing. One of the original estimation techniques for QPT was based on the classical maximum-likelihood principle~\cite{Fiurasek2001a,Sacchi2001}. However, in this chapter we consider the least-squares program, which can be seen as an approximation of maximum-likelihood. The LS program minimizes the (square of the) $\ell_2$-distance between the measurement vector and the expected probability vector, subject to the CPTP constraints,
\begin{align}\label{PT_LS} \underset{\chi}{\rm minimize:} \quad & \sum_{\mu,\nu} | \textrm{Tr} (\mathpzc{D}_{\mu,\nu} \chi) - f_{\mu,\nu} |^2\nonumber \\ \textrm{subject to:} \quad &\sum_{\alpha,\beta} {\chi}_{\alpha,\beta} \Upsilon_{\beta}^{\dagger} \Upsilon_{\alpha} =\mathds{1} \nonumber \\ & \chi \geq 0. \end{align}
The estimated process matrix is $\hat{\chi}_{\textrm{LS}}$. The LS program does not include assumptions about the nature of the process matrix we are attempting to reconstruct, such as it being a unitary map.

The second type of program we consider is the Tr-norm program, originally proposed in the context of compressed sensing for quantum states~\cite{Gross2010}. We generalize the program here for QPT. In QPT, the trace of the process matrix is constrained by one equation in the TP constraint. Therefore, to minimize the trace we must drop that part of the TP constraint. In order to maintain the maximal number of constraint equations, we take the basis as the set of traceless Hermitian matrices, $\{ H_{\alpha} \}$, thereby ensuring that there is only one equation related to the trace of the process matrix, which is dropped. We thus define the Tr-norm program for QPT as follows:
\begin{align} \label{PT_Tr} \underset{\chi}{\rm minimize:} \quad & \textrm{Tr} ( {\chi} ) \nonumber \\ \textrm{subject to:} \quad & \sum_{\mu,\nu} |\textrm{Tr} (\mathpzc{D}_{\mu,\nu} {\chi} ) - f_{\mu,\nu} |^2 \leq \varepsilon \nonumber \\ & \sum_{\alpha,\beta \neq 1} {\chi}_{\alpha,\beta} H_{\beta}^{\dagger} H_{\alpha} =0 \nonumber \\ & \chi \geq 0, \end{align}
where now $\chi$ and $\mathpzc{D}_{\mu,\nu}$ are represented in a basis with $H_1 = \frac{1}{\sqrt{d}} \mathds{1}$ and the elements $H_{\alpha \neq 1}$ orthogonal traceless Hermitian matrices. The sum in the second constraint includes all the terms except $\alpha=\beta=1$. The first constraint equation now requires that the probabilities from our optimization variable match our measurement frequencies up to some threshold $\varepsilon$. The threshold is chosen based on a physical model of the statistical noise sources for the measurements. The estimated process matrix, $\hat{\chi}_{\textrm{Tr}}$, must be renormalized such that $\textrm{Tr}[\hat{\chi}]=d$.

The final program we consider is the $\ell_1$-norm program, originally proposed for QPT in Ref.~\cite{Kosut2008}, and also inspired by compressed sensing techniques. The $\ell_1$-norm program is advantageous when the process matrix is sparse in a known basis. This is common in many applications. For example, if the goal is to implement a quantum logic gate, one is attempting to build a target unitary map $U_{\rm t}$. We therefore expect that, if the error in implementation is small, the process matrix describing the applied map will be close to a sparse matrix when expressed in an orthogonal basis on operator space, $\{ V_{\alpha} \}$, that includes the target process as a member, $V_1 = U_{\rm t}$. This in turn implies that the $\ell_1$-norm optimization algorithm can efficiently estimate the applied process.
We define the $\ell_1$-norm estimator as follows: \begin{align} \label{PT_L1} \underset{\chi}{\rm minimize:} \quad & \| {\chi} \|_{1} \nonumber \\ \textrm{subject to:} \quad &\sum_{\mu,\nu} | \textrm{Tr} (\mathpzc{D}_{\mu,\nu} {\chi} ) - f_{\mu,\nu} |^2 \leq \varepsilon, \nonumber \\ & \sum_{\alpha,\beta} {\chi}_{\alpha,\beta} V_{\beta}^{\dagger} V_{\alpha} =\mathds{1}, \nonumber \\ & \chi \geq 0. \end{align} We take the basis $\{V_{\alpha} \}=\{U_{\rm t}, U_{\rm t}H_2, \ldots, U_{\rm t}H_{d^2}\}$, where the $H_{\alpha}$ are the traceless Hermitian basis elements introduced above. We express $\mathpzc{D}_{\mu,\nu}$ also in the $\{ V_{\alpha} \}$ basis, such that $\mathpzc{D}_{\mu,\nu, \alpha,\beta } = \textrm{Tr} [\rho_{\nu}^{\rm in} V^{\dagger}_{\alpha} E_{\mu}V_{\beta} ]$. Again, we constrain the probabilities to match the measurement vector to within some threshold $\varepsilon$ based on the noise and errors present. We can regard the representation of the applied process matrix in this basis as a transformation into the ``interaction picture'' with respect to the target map; any deviation of the applied process matrix from the projection onto $| U_{\rm t} )$ indicates an error. Therefore, the ${\ell_1}$-norm program directly estimates the error matrix studied in detail in Ref.~\cite{Korotkov2013}. This feature also holds if the target map is not a unitary map; in that situation, we represent the applied map in the eigenbasis of the target map. To determine the success of QPT, we use the process fidelity~\cite{Gilchrist2005}. The process fidelity between two arbitrary process matrices $\chi_1$ and $\chi_2$ is defined as \begin{equation}\label{PT_fidelity} F(\chi_1, \chi_2) =\frac1{d^2}\left({\rm Tr}\sqrt{\sqrt{\chi_1} \chi_2 \sqrt{\chi_1}}\right)^2. \end{equation} The best measure of the success of QPT is the fidelity between the actual quantum process and the estimated process. However, in a real application of QPT, we do not know the actual process. Therefore, in the numerical simulations below, we also compare the estimated process to the target process. In this chapter the target process is always a unitary, such that $\chi_{\rm t} = | U_{\rm t})( U_{\rm t}|$. Therefore, the process fidelity between the target and an estimated process matrix $\hat{\chi}$ is \begin{equation} \label{PT_U_fidelity} F(\hat{\chi}, U_{\rm t}) = \frac1{d^2} ( U_{\rm t} | \hat{\chi} | U_{\rm t}). \end{equation} \section{Reconstruction of unitary processes}\label{sec:unitary} We begin by studying unitary QPT, or rank-1 QPT, in analogy to the previous chapters on rank-1 QST. In this section, we assume the ideal setting for QPT, in which we know exactly the density matrices that describe the input states and the probabilities of each outcome. Since the QPT measurement record is determined by both the input states and the POVMs, we simplify the discussion by assuming each output state is measured with an informationally complete (IC) POVM. Here, we define IC as any POVM that is either full-IC, rank-$r$ complete, or rank-$r$ strictly-complete. Therefore, the question of whether the unitary map can be reconstructed in the ideal situation depends only on the input states that probe the process. \subsection{Minimal sets of input states} If the reconstruction of the corresponding output states uniquely identifies an arbitrary unitary map within the set of all unitary maps, we call the set of input states ``unitarily informationally complete'' (UIC). This is analogous to rank-1 complete POVMs for QST, which uniquely identify a pure state within the set of all pure states.
A similar problem was studied by Reich {\em et al.}~\cite{Reich2013}. They developed an algebraic framework to identify sets of input states from which one can discriminate any two unitary maps given the corresponding output states. In particular, a set of input states, $\{\rho^{\rm in}_{\nu} \}$, provides sufficient information to discriminate any two unitary maps if and only if the identity operator is the only operator that commutes with all of the $\rho^{\rm in}_{\nu}$'s in the set. If the reconstructions of the output states discriminate any two unitary maps, then they also uniquely identify any unitary map within the set of all unitary maps; therefore, such sets of states are UIC. An example of such a set on a $d$-level system consists of the two states, \begin{equation} \mathcal{S}=\left\{\rho_0^{\rm in}=\sum_{n=0}^{d-1}\lambda_n \ket{n}\bra{n},\,\, \rho_1^{\rm in}=\ket{+}\bra{+}\right\}, \end{equation} where the eigenvalues of $\rho_0^{\rm in}$ are nondegenerate, $\{\ket{n}\}$ is an orthonormal basis for the Hilbert space, and $\ket{+} = \frac{1}{\sqrt{d}}\sum_{n=0}^{d-1} \ket{n}$. Reich {\em et al.}~\cite{Reich2013} considered $\mathcal{S}$ in order to set numerical bounds on the average fidelity between a specific unitary map and a random CPTP map. In fact, $\mathcal{S}$ is the minimal UIC set of states for QPT of a unitary map on a $d$-dimensional Hilbert space. To see that $\mathcal{S}$ is a UIC set, we write the unitary map as a transformation from the orthonormal basis $\{\ket{n}\}$ to its image basis $\{\ket{u_n}\}$, \begin{equation}\label{unitary} U=\sum_{n=0}^{d-1}\ket{u_n}\bra{n}. \end{equation} In essence, the task in QPT of a unitary map is to fully characterize the basis $\{\ket{u_n}\}$, along with the relative phases of the summands $\{\ket{u_n}\bra{n}\}$. By probing the map with $\rho_0^{\rm in}$, we obtain the output state $\rho_0^{\rm out}=U\rho_0^{\rm in}U^\dagger=\sum_{n=0}^{d-1}\lambda_n \ket{u_n}\bra{u_n}$, which we measure with an IC POVM to obtain a full reconstruction. We then diagonalize $\rho_0^{\rm out}$ and learn $\{\ket{u_n}\bra{u_n}\}$. Without loss of generality, we take the global phase of $\ket{u_0}$ to be zero. Next, we probe the map with $\rho_1^{\rm in}$, and fully characterize the output state $\rho_1^{\rm out}=\frac{1}{d}\sum_{n,m=0}^{d-1} \ket{u_n}\bra{u_m}$ with a full-IC POVM (this state requires a full-IC POVM since it is full rank). The $\{\ket{u_n}\}$ are calculated according to the relation $\ket{u_n}\bra{u_n}\rho_1^{\rm out}\ket{u_0}=\frac{1}{d}\ket{u_n}$. This procedure identifies a unique orthonormal basis $\{\ket{u_n}\}$ if and only if the map is a unitary map, and it requires a total of $d^2 + 2d$ POVM elements. While $\mathcal{S}$ is the minimal UIC set, in practice we may not have reliable methods to produce a desired mixed state, $\rho^{\rm in}_0$. We thus turn our attention to minimal UIC sets that are composed only of pure states (arbitrary pure states can be reliably produced using the tools of quantum control~\cite{Smith2012}). Such UIC sets are composed of $d$ pure states that form a nonorthogonal vector basis for the $d$-dimensional Hilbert space. For example, the set \begin{align}\label{n+} \ket{\psi_n}&=\ket{n},\; n=0,\ldots,d-2,\nonumber \\ \ket{\psi_{d{-}1}}&=\ket{+}=\frac{1}{\sqrt{d}}\sum_{n=0}^{d-1}\ket{n}, \end{align} is a minimal UIC set of pure states. A similar set (with $d+1$ elements) was considered in Ref.~\cite{Reich2013}.
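The commutant criterion above is straightforward to check numerically. The sketch below verifies, for the set $\mathcal{S}$, that the only operators commuting with both states are multiples of the identity; the dimension, the random eigenvalues, and the use of Python/\texttt{numpy} are assumptions made purely for illustration.
\begin{verbatim}
import numpy as np

d = 4
rng = np.random.default_rng(0)

# The minimal UIC set S: a nondegenerate mixed state diagonal in {|n>} and |+><+|
lam = np.sort(rng.random(d))[::-1]
lam /= lam.sum()
rho0 = np.diag(lam)
plus = np.ones(d) / np.sqrt(d)
rho1 = np.outer(plus, plus.conj())

# Operators commuting with every state in the set = null space of the stacked
# commutator maps  X -> rho X - X rho,  using vec(A X B) = (B^T kron A) vec(X)
blocks = [np.kron(np.eye(d), rho) - np.kron(rho.T, np.eye(d)) for rho in (rho0, rho1)]
M = np.vstack(blocks)

sing = np.linalg.svd(M, compute_uv=False)
commutant_dim = int(np.sum(sing < 1e-10))
print(commutant_dim)   # 1: only multiples of the identity commute with both states
\end{verbatim}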
Here, we focus on a different set of $d$ pure states that is UIC, \begin{align}\label{0+n} \ket{\psi_0}&=\ket{0}, \nonumber \\ \ket{\psi_n}&=\frac1{\sqrt2}(\ket{0}+\ket{n}),\; n=1,\ldots,d-1. \end{align} This is a subset of the standard states used in QPT from Eq.~(\ref{op basis nc}). The only operator that commutes with all of the projectors $\{\ket{\psi_n}\bra{\psi_n}\}$, $n=0,\ldots,d-1$, is the identity. With Eq.~\eqref{unitary}, we can show that the set of probing states in Eq.~\eqref{0+n} is UIC. Starting with the first state, $\ket{\psi_0}$, the output state is $U\ket{\psi_0}=\ket{u_0}$, which we characterize with an IC POVM. From the eigendecomposition we obtain the state $\ket{u_0}$ (up to a global phase that we can set to zero). Next, we apply the unitary map to $\ket{\psi_1}$ and perform an IC POVM on the output state $U\ket{\psi_1}\bra{\psi_1}U^\dagger$. From the relation $U\ket{\psi_1}\bra{\psi_1}U^\dagger\ket{u_0}=\frac1{2}(\ket{u_0}+\ket{u_1})$ we obtain the state $\ket{u_1}$, including its phase relative to $\ket{u_0}$. We repeat this procedure for every state $\ket{\psi_n}$ with $n{=}1,{\ldots},d{-}1$, thereby obtaining all the information about the basis $\{\ket{u_n}\}$, including the relative phases in the sum of Eq.~\eqref{unitary}, and completing the tomography procedure for a unitary map. Since the unitary operator is uniquely identified by a series of linear equations, the set of states and POVM elements is rank-1 complete for QPT. If we choose the minimal rank-1 strictly-complete POVM in Eq.~\eqref{psi-complete}, which has $2d$ elements, to apply to each of the $d$ output states, then this procedure requires a total of $2d^2$ POVM elements. This approach is substantially more efficient than standard QPT. We can further reduce the resources required for unitary process tomography by including the trace constraint. In this case the trace constraint ensures that the states $\ket{u_n}$ are orthonormal. In the procedure considered above, we did not take this fact into account. By leveraging this constraint, we can reduce the number of required measurement outcomes on each output state. The first step is, as before, to use $\ket{\psi_0}$ as a probe state, and perform an informationally complete measurement, which has $2d$ outcomes, on the output state, $\ket{u_0}$. This procedure fails only for output states with $c_{00}=\braket{0 | u_0}=0$, a set of measure zero. Next, we probe the unitary map with $\ket{\psi_1}$ of Eq.~(\ref{0+n}), and perform an IC POVM on the output state, $\frac1{\sqrt2}(\ket{u_0}+\ket{u_1})$. However, since $\ket{u_1}$ is orthogonal to $\ket{u_0}$, it is sufficient to make a measurement that yields only the first $d-1$ probability amplitudes $c_{1n}=\braket{n | u_1}$, $n=0,\ldots,d-2$, and then use the orthogonality condition $\braket{u_0 | u_1}=\braket{u_1 |u_0}=0$ to calculate the $d$th amplitude, $c_{1,d{-}1}$. A measurement with $2d-2$ outcomes can be, for example, the measurement of Eq.~\eqref{psi-complete}, but with $n=0,\ldots,d-2$. Therefore, to measure the state $\ket{u_k}$, $k=0,\ldots,d-1$, we perform a measurement with $2d-2k$ outcomes, and use $2k$ orthogonality relations. This leads to a total requirement of $d^2+d$ POVM elements. \subsection{Reconstruction for unitary QPT} UIC sets provide an efficient way to characterize a unitary process, since they require only $d$ probing states instead of the standard $d^2$. However, in analogy to rank-1 QST, there may be other sets of input states that uniquely identify the unitary within the set of {\em all} CPTP maps.
The corresponding output states must then be measured with either a rank-1 strictly-complete or a full-IC POVM. Such sets of states would then be analogous to rank-1 strictly-complete POVMs in QST, and are therefore advantageous for QPT since they would be compatible with convex optimization. Reich {\em et al.} proved that the set of states in Eq.~\eqref{n+}, plus the additional state $\ket{d-1}$, uniquely identifies a unitary within the set of all CPTP maps. In this section we give numerical evidence that the sets of states in Eqs.~\eqref{n+} and~\eqref{0+n}, when measured with a rank-1 strictly-complete POVM, are in fact rank-1 strictly-complete for QPT. We generate a set of 100 Haar-random unitary maps for a 5-dimensional Hilbert space. We then evolve three different sets of states: (dotted red) the standard set of states for QPT, given in Eq.~\eqref{op basis nc}, (dashed green) the UIC set of states given in Eq.~\eqref{n+} supplemented with $d^2-d$ other linearly independent states, and (solid blue) the UIC set given in Eq.~\eqref{0+n} supplemented with $d^2-d$ other linearly independent states. The output states are then measured with the POVM in Eq.~\eqref{psi-complete}, which we proved is rank-1 strictly-complete in Sec.~\ref{sec:EP}. We assume the ideal situation for QPT, in which we have direct access to the probabilities of each outcome, and use the LS program to reconstruct the process matrix. We then compare the estimated process matrix to the process matrix of the Haar-random unitary that was used to generate the probabilities. \begin{figure}\label{fig:unitary_diff_sets} \end{figure} The results of the numerical study are plotted in Fig.~\ref{fig:unitary_diff_sets}. We can see that each set of probing states reaches unit fidelity well before $d^2 = 25$ states. This indicates that at this point the set of probing states uniquely identifies the unitary within the set of all CPTP maps. It is important to note that while each point is an average over 100 Haar-random target unitary maps, there is zero variance in the fidelity, i.e., the resulting fidelity is independent of the applied unitary. For the states in Eq.~\eqref{n+} and Eq.~\eqref{0+n}, we see that unit fidelity is reached with $d = 5$ probing states. This corresponds to the UIC set, which was proven rank-1 complete in the previous section. The standard set of states, shown by the red dotted line, exhibits a distinctive plateau feature. The plateaus indicate that we gain information only from particular states in that set, while others do not provide additional information. The plateaus occur at the same input states for each of the sampled unitary maps, independent of their details. To see this point more clearly, take for example the two input states with $k=0$ and $n=1$ of Eq.~\eqref{op basis nc}, $\frac1{\sqrt2}(|0\rangle+|1\rangle)$ and $\frac1{\sqrt2}(|0\rangle+{\rm i}|1\rangle)$. Probing a unitary map with these two states gives us the same information, namely the image $|u_1\rangle$. Since probing the unitary map with either state gives the same information, for efficient reconstruction it is sufficient to probe the map with only one of the states. \section{Near-unitary process tomography} \label{sec:QPT_werrors} While the previous section established the notion of UIC sets of states to reconstruct unitary processes, in any physical implementation the process is never exactly unitary due to errors in the apparatus.
We call such errors process errors; they cause the applied map to differ from our target unitary map. Process errors are similar in nature to the preparation errors in QST considered in Sec.~\ref{sec:estimation_wnoise}. Therefore, since rank-1 strictly-complete POVMs are robust to preparation errors, we expect the UIC set considered in Eq.~\eqref{0+n} to also be robust to process errors in QPT. However, the robustness property does not guarantee that each estimation program behaves the same, and in fact the behavior of the programs depends strongly on the type of process error present. We assess these differences in this section. We consider two types of process errors in the implementation of a quantum map: ``coherent'' errors and ``incoherent'' errors. A coherent error is one where the applied map is also unitary, but ``rotated'' from the target. All other errors are defined to be incoherent errors, for example, statistical mixtures of different unitary maps arising from inhomogeneous control or decoherence. We define the target unitary map, ${\cal E}_{\rm t}$, with corresponding process matrix $\chi_{\rm t} = | U_{\rm t} )( U_{\rm t} |$. The actual map applied in the experiment has errors; we denote it ${\cal E}_{\rm a}$, with corresponding process matrix $\chi_{\rm a}$. We assume good experimental control so that the implementation errors are low, hence, $\chi_{\rm a}$ is close to $\chi_{\rm t}$. For both types of errors, we numerically model the applied process, $\mathcal{E}_{\rm a}$, by composing the target process with an error process, \begin{equation} {\cal E}_{\rm a} = {\cal E}_{\rm err}\circ{\cal E}_{\rm t}. \end{equation} For coherent errors, \begin{equation} {\cal E}_{\rm err}[\cdot]=U_{\rm err}[\cdot]U_{\rm err}^\dagger, \end{equation} where the error unitary $U_{\rm err}=e^{{\rm i}\eta H}$, with $\eta\geq0$, is generated by a random, trace-one Hermitian matrix $H$ selected from the Hilbert-Schmidt measure, so that the applied map is \begin{equation} \label{coh_err} {\cal E}_{\rm a}[\cdot]=U_{\rm err}U_{\rm t} [\cdot] U_{\rm t}^{\dagger} U_{\rm err}^\dagger. \end{equation} We numerically generate an incoherent error as \begin{equation} {\cal E}_{\rm err}[\cdot]=(1-\xi)[\cdot]+\xi\sum_{n=1}^{d^2}A_n[\cdot]A_n^\dagger, \end{equation} which is not the only type of incoherent error possible but serves our numerical study. The applied map is then given by \begin{equation}\label{inc_err} {\cal E}_{\rm a}[\cdot]=(1-\xi)U_{\rm t}[\cdot]U_{\rm t}^\dagger+\xi\sum_{n=1}^{d^2}A_nU_{\rm t}[\cdot]U_{\rm t}^\dagger A_n^\dagger. \end{equation} The operators $\{A_n U_{\rm t}\}$ are Kraus operators associated with a CP map, and $\xi \in [0,1]$. The $\{A_n\}$'s are generated by choosing a Haar-random unitary matrix $U$ of dimension $d^3$, and a random pure state of dimension $d^2$ from the Hilbert-Schmidt measure, $\ket{\nu}$, such that $ A_n= \bra{n}U\ket{\nu}$, where the set $\{\ket{n}\}$ is a computational basis~\cite{Bruzda2009}. We first numerically test the sensitivity of all three programs to the type and magnitude of the process error. We choose $d=5$ and prepare the five input quantum states defined by Eq.~\eqref{0+n}. We evolve each state with 50 randomly chosen applied processes, once with coherent errors and once with incoherent errors. We measure each output state with the MUB~\cite{Wootters1989}, introduced in Sec.~\ref{ssec:noiseless_ST}, and include noise from finite sampling.
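To make the error models concrete, the sketch below generates one coherent and one incoherent error of the forms above and evaluates the process fidelity between the target and applied maps. For a unitary target, Eq.~\eqref{PT_U_fidelity} reduces to $\frac{1}{d^2}\sum_i|\textrm{Tr}(U_{\rm t}^\dagger K_i)|^2$ over the Kraus operators $K_i$ of the applied map, which is the formula used below. The use of Python with \texttt{numpy}/\texttt{scipy}, the random seeds, and the specific values of $\eta$ and $\xi$ are illustrative assumptions only.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm
from scipy.stats import unitary_group

d = 5
rng = np.random.default_rng(1)
U_t = unitary_group.rvs(d, random_state=1)        # Haar-random target unitary

# Coherent error: U_err = exp(i eta H), with H a trace-one Hermitian matrix
# drawn from the Hilbert-Schmidt measure
eta = 0.3
G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
H = G @ G.conj().T
H /= np.trace(H).real
U_err = expm(1j * eta * H)
U_a = U_err @ U_t                                 # applied unitary, Eq. (coh_err)
F_coherent = abs(np.trace(U_t.conj().T @ U_a))**2 / d**2

# Incoherent error, Eq. (inc_err): A_n = <n| U |nu> built from a Haar-random
# unitary of dimension d^3 and a normalized random vector |nu> of dimension d^2
xi = 0.05
big_U = unitary_group.rvs(d**3, random_state=2)
nu = rng.normal(size=d**2) + 1j * rng.normal(size=d**2)
nu /= np.linalg.norm(nu)
U4 = big_U.reshape(d, d**2, d, d**2)
A = np.einsum('aebf,f->eab', U4, nu)              # A[n] is a d x d Kraus operator
assert np.allclose(sum(An.conj().T @ An for An in A), np.eye(d))

# Kraus operators of the applied map and its process fidelity with the target
kraus = [np.sqrt(1 - xi) * U_t] + [np.sqrt(xi) * An @ U_t for An in A]
F_incoherent = sum(abs(np.trace(U_t.conj().T @ K))**2 for K in kraus) / d**2
print(F_coherent, F_incoherent)
\end{verbatim}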
In Fig.~\ref{fig:fidFid}, we plot the fidelity between the applied process matrix, $\chi_{\rm a}$, and the estimated matrices, $\hat{\chi}$, determined by each of the three estimators, as a function of the fidelity between the applied process, $\chi_{\rm a}$, and the target, $\chi_{\rm t}$. The latter fidelity, $F(\chi_{\rm t},\chi_{\rm a})$, is a measure of the magnitude of the error in the applied process. As expected from the robustness bound, all of the estimators return reconstructions that have high fidelity with the applied map when the applied map is close to the target unitary map, ${\cal E}_{\rm a}[\cdot]\approx U_{\rm t}[\cdot]U_{\rm t}^\dagger$. In particular, in our simulations $F(\hat{\chi},\chi_{\rm a})\gtrsim 0.95$ when $F(\chi_{\rm t},\chi_{\rm a})\gtrsim 0.97$. \begin{figure}\label{fig:fidFid} \end{figure} However, as the magnitude of the process error increases, i.e., as $F(\chi_{\rm t},\chi_{\rm a})$ decreases, the performance of the three estimators depends strongly on the nature of the errors. The ${\ell_1}$-norm program is more sensitive to coherent errors than the Tr-norm and LS programs, as seen in Fig.~\ref{fig:fidFid}a. Using the data from five input states, the fidelity between the ${\ell_1}$-norm estimate and the applied map begins to fall below $\sim\!90\%$ in these simulations for $F(\chi_{\rm t},\chi_{\rm a})\lesssim 0.9$, while the Tr-norm and LS estimators maintain their high fidelity. This trend is reversed for incoherent errors, as seen in Fig.~\ref{fig:fidFid}b. The ${\ell_1}$-norm program is more robust to incoherent errors of the form of Eq.~\eqref{inc_err} than either the Tr-norm or LS programs because the process matrix is no longer close to a low-rank matrix, but it is still relatively close to a sparse matrix in the preferred basis. As the incoherent error magnitude increases, the ${\ell_1}$-norm program returns an estimate with (on average) higher fidelity with the applied map than either the Tr-norm or the LS estimates. We thus conclude that when the applied map is sufficiently far from the target unitary map, the performance of the three estimators varies in a manner that depends on the type of the error. We can use the different behavior of the estimators as an indicator of the type of error that occurred in the applied process. This is seen when comparing the fidelity between the estimate and the applied map (Fig.~\ref{fig:fidStates}a) and between the estimate and the target map (Fig.~\ref{fig:fidStates}b) as a function of the number of input states. We plot the fidelity averaged over 50 Haar-random applied maps. The plots on the top and bottom rows correspond to different levels of coherent and incoherent errors. Again, the robustness property is verified, since the fidelity after $d=5$ input states reflects the magnitude of the error. While the Tr-norm and the LS estimators require the $d$ states in Eq.~\eqref{0+n} to reliably characterize the applied map, with proper formulation the ${\ell_1}$-norm program returns a reliable estimate with information obtained from a single input state. The estimator based on the ${\ell_1}$-norm is somewhat unstable when the reconstruction is based on data taken from very few input states. To overcome this instability, one can use the same data obtained from the first $d$ input states, in different orders, to estimate the process, and then average over the resulting processes. This reduces the sensitivity to the specific choice of the first input state.
In Fig.~\ref{fig:fidStates} we have used such averaging for estimating the process based on $1,2,\ldots, d=5$ input states. Each estimated process is an average of the 5 reconstructed process matrices, each based on the data associated with an informationally complete measurement record on the 5 states $\ket{\psi_n}$, $n=0,1,\ldots,4$ of Eq.~(\ref{0+n}), taken in cyclic permutations. \begin{figure}\label{fig:fidStates} \end{figure} If the error in the applied map is not small, we can infer the dominant source of the imperfection by examining the behavior of the different estimators. As seen in Fig.~\ref{fig:fidStates}a, with $F(\chi_{\rm t},\chi_{\rm a})= 0.83\pm 0.005$, when employing the ${\ell_1}$-norm program, a large coherent error results in a curvature in $F(\hat{\chi},\chi_{\rm t})$ as a function of the number of input states. Additionally, for the same data, using the Tr-norm and LS programs, we see that $F(\hat{\chi},\chi_{\rm t})$ exhibits a sharp cusp after $d$ UIC probe states. In contrast, when the errors are dominantly incoherent, we see that when employing the ${\ell_1}$-norm program, $F(\hat\chi,\chi_{\rm t})$ is roughly constant as a function of the number of input states. In addition, there is a more gradual increase of $F(\hat{\chi},\chi_{\rm t})$ for the Tr-norm and LS estimators around $d$ states; the cusp behavior is smoothed. These variations are signatures of the nature of the error in implementing the target unitary map. In the regime $0.90\lesssim F(\chi_{\rm t},\chi_{\rm a})\lesssim 0.97$ it is difficult to distinguish, with high confidence, the nature of the errors based solely on the behavior of $F(\hat\chi,\chi_{\rm t})$ as a function of the number of input states, and additional methods will be required to diagnose the process matrix. Nonetheless, a low fidelity of $F(\hat{\chi},\chi_{\rm t})\lesssim 0.95$ after $d$ input states challenges the validity of our assumptions and indicates the presence of noise. A similar procedure could be adapted for strictly-complete POVMs in QST, or a set of strictly-complete probing states for QDT. In these cases the Tr-norm and $\ell_1$-norm programs can be used to diagnose the type of preparation error or measurement error present. In Sec.~\ref{sec:numerical_methods}, we introduced the Tr-norm program for QST, and the form for QDT is similar. The $\ell_1$-norm program can easily be translated to both QST and QDT as well. \section{Summary and conclusions} We have studied the problem of QPT under the assumption that the applied process is a unitary or close to a unitary map. We found that by probing a unitary map on a $d$-level system with $d$ specially chosen pure input states (which we called a UIC set of states), one can discriminate it from any other arbitrary unitary map given the corresponding output states. In the ideal case of no errors, we can use a UIC set of states and a rank-1 strictly-complete POVM to characterize an unknown unitary with $2d^2$ POVM elements. We then numerically demonstrated that this combination is rank-1 strictly-complete for QPT. We used the methods of efficient unitary map reconstruction to analyze a more realistic scenario where the applied map is close to a target unitary map and the collected data includes statistical errors. Under this assumption, we studied the performance of three convex-optimization programs: the LS, Tr-norm, and $\ell_1$-norm.
For each of these programs we estimated the applied process from the same simulated measurement vector, obtained by probing the map with pure input states, the first $d$ of which form a UIC set. We considered two types of errors that may occur on the target map: coherent errors, for which the applied map is a unitary map but slightly ``rotated'' from the target map, and incoherent errors, for which the applied map is full rank but of high purity. In our simulation, shown in Sec.~\ref{sec:QPT_werrors}, we used the states of Eq.~\eqref{0+n} to probe a randomly generated (applied) map with the desired properties, verifying that the robustness property applies to QPT. Our analysis suggests that when the prior assumptions are valid, the three estimators yield high-fidelity estimates of the applied map using only the UIC set of input states. We found that the sensitivity of these methods to various types of errors yields important information about the validity of the prior assumptions and about the nature of the errors that occurred in the applied map. In particular, probing the map with a UIC set of $d$ pure states and obtaining low fidelity between the estimates and the target map indicates that the errors are actually not small and the applied map is not close to the target unitary map. Furthermore, the performance of the different estimators under coherent and incoherent noise enables the identification of the dominant error type. One can then take this information into account to further improve the implementation of the desired map. \chapter{Experimental comparison of methods in quantum tomography} \label{ch:experiment} We have introduced many different methods for QT that offer theoretical advantages in efficiency and robustness. However, QT is designed to be a diagnostic tool for experiments in quantum information, so the performance of these methods must be verified in this context. In this chapter, we compare different methods for QST and QPT in an experimental platform, in a study conducted in collaboration with the group of Prof. Poul Jessen at the University of Arizona. The quantum system we study is the hyperfine spin of $^{133}$Cs atoms in their electronic ground state, which corresponds to a 16-dimensional Hilbert space for encoding quantum states, processes, and POVMs. In this system, there are many ways to accomplish QT. For this chapter we compare different choices to illustrate the tradeoff between efficiency and robustness in QT protocols. Experiments were performed at the University of Arizona by Hector Sosa-Martinez and Nathan Lysne~\cite{SosaMartinez2016}. \section{Physical system} The physical system is an ensemble of approximately $10^6$ laser-cooled cesium atoms in the electronic ground manifold, $6S_{1/2}$. Each atom is (almost) identically prepared and addressed, such that the quantum state, $\rho_N$, that describes the entire system is well approximated by a tensor product of the internal state of each atom, $\rho_N = \rho^{\otimes N}$. The internal state is confined to the hyperfine ground manifold, with Hilbert space described by a tensor product of the nuclear and electron spin states, $\mathcal{H} = \mathcal{H}_I \otimes \mathcal{H}_S$. Cesium has a nuclear spin of $I= 7/2$ and, since it is an alkali metal, it has a single valence electron, so $S= 1/2$. The ground manifold is then described by a $(d=16)$-dimensional Hilbert space, which provides a large testbed for QT protocols.
The Hilbert space can also be expressed as the direct sum of two hyperfine subspaces, $\mathcal{H} = \mathcal{H}_+ \oplus \mathcal{H}_-$, where $\mathcal{H}_{\pm}$ are the Hilbert spaces corresponding to the total hyperfine angular momentum quantum numbers $F^{(+)} = 4$ and $F^{(-)} = 3$. Each spin has $2F^{(\pm)} + 1$ degenerate magnetic sublevels. \subsection{Quantum control of the cesium system} In order to prepare quantum states, create evolutions, and read out information, we need control over the hyperfine manifold. In the experiment, the system is controlled with time-dependent magnetic fields applied approximately uniformly to the ensemble. The Hamiltonian that describes the dynamics is \begin{equation} H = A {\bf I} \cdot {\bf S} - \bm{\mu} \cdot {\bf B}(t), \end{equation} where the first term is due to the hyperfine interaction (since $L = 0$ in the ground manifold) and $A$ is the hyperfine coupling. The second term describes the interaction with the time-dependent external magnetic fields, ${\bf B}(t)$, where $\bm{\mu}$ is the atomic magnetic moment. We can express the hyperfine interaction in terms of the total angular momentum, $F^{(\pm)}$, since ${\bf I} \cdot {\bf S} = \frac{1}{2} \left( {\bf F}^2 - {\bf I}^2 - {\bf S}^2 \right)$, and therefore $A {\bf I} \cdot {\bf S} = \frac{\Delta E_{HF}}{2}(\Pi^{(+)} - \Pi^{(-)})$ up to a constant offset proportional to the identity, where $\Delta E_{HF} \triangleq A F^{(+)}$ is the hyperfine splitting and $\Pi^{(\pm)}$ is the projection onto the plus or minus spin subspace. The interaction between the atoms and the magnetic fields is defined by the atomic magnetic moment, which is the sum of the spin and nuclear magnetic moments, $\bm{\mu} = \bm{\mu}_S + \bm{\mu}_I$. The contribution from the nuclear moment is extremely small, $\mu_B \gg \mu_N$, i.e., the Bohr magneton is much larger than the nuclear magneton. Therefore, we approximate the magnetic interaction as $-\bm{\mu} \cdot {\bf B}(t) \approx \mu_B g_s {\bf S} \cdot {\bf B}(t)$. When $\mu_B |{\bf B}(t) | \ll A$, i.e., the magnitude of the external field is much weaker than the hyperfine coupling, we can apply the Land\'{e} projection theorem to express the interaction in terms of the total angular momentum, \begin{align} \mu_B g_s {\bf S} \cdot {\bf B}(t) &\approx g_s \mu_B \sum_{i = \pm} \frac{{\bf S} \cdot {\bf F}^{(i)}}{F^{(i)} (F^{(i)} + 1)} {\bf F}^{(i)} \cdot {\bf B}(t), \nonumber \\ & = g_+ \mu_B {\bf F^{(+)}} \cdot {\bf B}(t) + g_- \mu_B {\bf F^{(-)} } \cdot {\bf B}(t), \end{align} where $g_{\pm} \triangleq \pm \frac{g_S}{2 F^{(+)}} = \pm \frac{1}{F^{(+)}}$. \begin{figure}\label{fig:cesium} \end{figure} The magnetic field is applied as three separate fields, \begin{equation} {\bf B}(t) = B_0 {\bf e}_z + {\bf B}_{RF}(t) + {\bf B}_{\mu W}(t), \end{equation} where the first term is the bias field, which breaks the degeneracy of the magnetic sublevels in each hyperfine manifold, as shown in Fig.~\ref{fig:cesium}. The other two fields oscillate at radio (RF) and microwave ($\mu W$) frequencies, respectively. Since the bias field is time-independent, we group it with the hyperfine interaction to form the ``drift'' Hamiltonian, \begin{equation} H_0 = \frac{\Delta E_{HF}}{2} \left(\Pi^{(+)} - \Pi^{(-)} \right) + \Omega_0 \left( F^{(+)}_z - F^{(-)}_z \right), \end{equation} where $\Omega_0 \triangleq \frac{\mu_B B_0}{F^{(+)}}$.
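For concreteness, the sketch below builds $H_0$ as a $16\times16$ matrix in the $\ket{F,m_F}$ ordering used later for the $F_z$-basis measurement. The numerical values of $\Delta E_{HF}$ and $\Omega_0$ are placeholders, and the use of Python/\texttt{numpy} is an assumption made for illustration only.
\begin{verbatim}
import numpy as np

# Dimensions of the two hyperfine manifolds, F(+) = 4 and F(-) = 3
dim_p, dim_m = 2 * 4 + 1, 2 * 3 + 1          # 9 + 7 = 16 levels

# Projectors onto the F = 4 and F = 3 blocks (ordering |4,-4>,...,|4,4>,|3,-3>,...,|3,3>)
Pi_p = np.diag([1.0] * dim_p + [0.0] * dim_m)
Pi_m = np.diag([0.0] * dim_p + [1.0] * dim_m)

# F_z operators of each manifold, embedded in the full 16-dimensional space
Fz_p = np.diag(list(np.arange(-4, 5)) + [0.0] * dim_m)
Fz_m = np.diag([0.0] * dim_p + list(np.arange(-3, 4)))

# Drift Hamiltonian H_0 (units where hbar = 1); the frequencies below are placeholders
Delta_E_HF = 2 * np.pi * 9.19e9     # cesium hyperfine splitting, ~9.19 GHz (illustrative)
Omega_0 = 2 * np.pi * 1.0e6         # Larmor frequency set by the bias field (illustrative)
H0 = 0.5 * Delta_E_HF * (Pi_p - Pi_m) + Omega_0 * (Fz_p - Fz_m)
\end{verbatim}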
The $RF$ field is a combination of two fields in the ${\bf e}_x$ and ${\bf e}_y$ directions, \begin{equation} {\bf B}_{RF}(t) = B_x {\bf e}_x \cos\left[ \omega_{RF} t + \phi_x(t) \right] + B_y {\bf e}_y \cos\left[ \omega_{RF} t + \phi_y(t) \right]. \end{equation} The RF fields cause Larmor precession of the $F^{(+)}= 4$ and $F^{(-)}= 3$ spins, as shown in Fig.~\ref{fig:cesium}, with corresponding Hamiltonian \begin{align} H_{RF}(t) &= \Omega_x \left(F^{(+)}_x - F^{(-)}_x \right) \cos\left[ \omega_{RF} t + \phi_x(t) \right]\nonumber \\ & + \Omega_y \left(F^{(+)}_y - F^{(-)}_y \right) \cos\left[ \omega_{RF} t + \phi_y(t) \right], \end{align} where $\Omega_{x,y} \triangleq \frac{\mu_B B_{x,y}}{F^{(+)}}$. The $\mu W$ field is tuned to resonance with the transition between $\ket{4,4}$ and $\ket{3,3}$, as shown in Fig.~\ref{fig:cesium}. This causes Rabi oscillations between the two magnetic sublevels, with resulting Hamiltonian \begin{equation} H_{\mu W}(t) = \Omega_{\mu W} \sigma_x \cos\left[ \omega_{\mu W} t + \phi_{\mu W}(t) \right], \end{equation} where $\Omega_{\mu W}$ is the Rabi frequency of the $\mu W$ interaction and $\sigma_x = | 4,4 \rangle \langle 3,3| + | 3,3 \rangle \langle 4,4|$. In the experiment, the system is controlled by varying the phases $\phi_x(t)$, $\phi_y(t)$, and $\phi_{\mu W}(t)$, which are referred to as the ``control parameters.'' To eliminate the explicit time dependence of the Hamiltonian, apart from the control parameters, we move to the rotating frame defined by the unitary \begin{equation} U = \textrm{exp}\left[ -i \omega_{RF} t \left( F^{(+)}_z - F^{(-)}_z \right) \right] \textrm{exp}\left[ -it\frac{\omega_{\mu W} - 7 \omega_{RF}}{2} \left( \Pi^{(+)} - \Pi^{(-)} \right) \right]. \end{equation} We apply $U$ to each term in the total Hamiltonian such that $H' = U^{\dagger} H U$. Under the rotating wave approximation, all terms proportional to $\cos(2 \omega_{RF}t)$ and $\cos(2 \omega_{\mu W} t)$ are neglected. We group the term $H_{\rm rot.} = -iU^{\dagger} \frac{d U}{dt}$, which arises from the transformation to the rotating frame, with the drift Hamiltonian, since it is also time independent. Therefore, \begin{align} H_0' &\approx U^{\dagger} H_0 U - iU^{\dagger} \frac{d U}{dt}, \nonumber \\ &= \frac{\Delta_{\mu W}}{2} \left( \Pi^{(+)} - \Pi^{(-)} \right) + \Delta_{RF} \left( F^{(+)}_z - F^{(-)}_z \right), \end{align} where $\Delta_{\mu W} \triangleq \omega_{\mu W} - \Delta E_{HF} - 7 \omega_{RF}$ and $\Delta_{RF} \triangleq \omega_{RF} - \Omega_0$. The $RF$ and $\mu W$ control Hamiltonians are \begin{align} H_{RF}'(t) &\approx \frac{\Omega_x}{2} \cos \left[ \phi_x(t) \right] \left( F^{(+)}_x - F^{(-)}_x \right) -\frac{\Omega_x}{2} \sin \left[ \phi_x(t) \right]\left( F^{(+)}_y + F^{(-)}_y \right) \nonumber \\ &+ \frac{\Omega_y}{2} \cos\left[ \phi_y(t) \right]\left( F^{(+)}_y - F^{(-)}_y \right) + \frac{\Omega_y}{2} \sin\left[ \phi_y(t) \right]\left( F^{(+)}_x + F^{(-)}_x \right), \nonumber \\ H_{\mu W}'(t) &\approx \frac{\Omega_{\mu W}}{2} \left( \cos \left[\phi_{\mu W}(t)\right] \sigma_x + \sin \left[ \phi_{\mu W}(t)\right]\sigma_y \right), \end{align} where $\sigma_y$ is the Pauli-$y$ operator on the $\{\ket{4,4},\ket{3,3}\}$ subspace. Higher order terms in the rotating wave approximation were derived in Ref.~\cite{Riofrio2011}. Full controllability with the $RF$ and $\mu W$ magnetic fields was proven in Ref.~\cite{Merkel2008}. Therefore, the magnetic fields can create any unitary map in $\textrm{SU}(16)$.
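The following schematic sketch illustrates how a piecewise-constant sequence of control phases defines a total unitary on the 16-dimensional space, by exponentiating the rotating-frame control Hamiltonians above over short time steps. The detunings are set to zero, the Rabi frequencies, step duration, and random phase waveform are placeholders, and Python with \texttt{numpy}/\texttt{scipy} is assumed; this is an illustration of the parameterization, not the optimization used in the experiment.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm, block_diag

def spin_ops(F):
    """Return (Fx, Fy, Fz) for spin F in the basis |F,-F>,...,|F,F>."""
    m = np.arange(-F, F + 1)
    Fz = np.diag(m.astype(float))
    Fp = np.zeros((len(m), len(m)))
    for k in range(len(m) - 1):
        Fp[k + 1, k] = np.sqrt(F * (F + 1) - m[k] * (m[k] + 1))   # F_+ raising operator
    return (Fp + Fp.T) / 2, (Fp - Fp.T) / 2j, Fz

# Embed the F = 4 and F = 3 spin operators in the 16-dimensional space
Fx4, Fy4, _ = spin_ops(4)
Fx3, Fy3, _ = spin_ops(3)
Fxp, Fyp = block_diag(Fx4, np.zeros((7, 7))), block_diag(Fy4, np.zeros((7, 7)))
Fxm, Fym = block_diag(np.zeros((9, 9)), Fx3), block_diag(np.zeros((9, 9)), Fy3)

# sigma_x, sigma_y on the {|4,4>, |3,3>} subspace (indices 8 and 15 in this ordering)
sx = np.zeros((16, 16), dtype=complex); sx[8, 15] = sx[15, 8] = 1.0
sy = np.zeros((16, 16), dtype=complex); sy[8, 15] = -1j; sy[15, 8] = 1j

Om_x = Om_y = 2 * np.pi * 25e3      # placeholder RF Rabi frequencies
Om_uw = 2 * np.pi * 27.5e3          # placeholder microwave Rabi frequency

def H_control(phi_x, phi_y, phi_uw):
    """Rotating-frame control Hamiltonian H'_RF + H'_muW (detunings set to zero)."""
    H = 0.5 * Om_x * (np.cos(phi_x) * (Fxp - Fxm) - np.sin(phi_x) * (Fyp + Fym))
    H += 0.5 * Om_y * (np.cos(phi_y) * (Fyp - Fym) + np.sin(phi_y) * (Fxp + Fxm))
    H += 0.5 * Om_uw * (np.cos(phi_uw) * sx + np.sin(phi_uw) * sy)
    return H

# A piecewise-constant phase waveform defines the total propagator
rng = np.random.default_rng(0)
dt, n_steps = 4e-6, 150
phases = rng.uniform(0, 2 * np.pi, size=(n_steps, 3))
U_total = np.eye(16, dtype=complex)
for phi_x, phi_y, phi_uw in phases:
    U_total = expm(-1j * H_control(phi_x, phi_y, phi_uw) * dt) @ U_total
assert np.allclose(U_total @ U_total.conj().T, np.eye(16))
\end{verbatim}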
To produce a given unitary, we numerically search for a set of phases, $\phi_x(t), \phi_y(t),$ and $\phi_{\mu W}(t)$, that optimizes an objective function (further details are given in Appendix~\ref{app:control} and Ref.~\cite{Anderson2013}). The procedure is also made robust to inhomogeneities in the control fields to minimize errors~\cite{Anderson2013,Smith2012}. Therefore, we can prepare any pure state or apply any unitary evolution to the 16-dimensional Hilbert space. \subsection{Stern-Gerlach measurement} \label{sec:SG_measurement} The quantum state is measured with a Stern-Gerlach analyzer that creates a signal proportional to the population of each magnetic sublevel. This is accomplished by applying a gradient magnetic field in the $z$-direction, parallel to the bias field, that separates the atoms spatially, as shown in Fig.~\ref{fig:sg_steps}b. The separation is proportional to the spin projection, $m_F$, in the $z$-direction. The atoms are then dropped, and fall through a sheet of laser light that is resonant with a transition between the hyperfine ground manifold and an excited state, as shown in Fig.~\ref{fig:sg_steps}c. This causes fluorescence that is detected in discrete time bins. Each bin corresponds to a time-of-flight measurement of the atoms' trajectories. Since the signal is proportional to $m_F$, the procedure is repeated for each submanifold, $F^{(\pm)}$. An example signal is shown in Fig.~\ref{fig:sg_signal} for $F^{(+)}$. \begin{figure}\label{fig:sg_steps} \end{figure} The gradient magnetic field acts to entangle the internal spin projection with the position of the atoms. This interaction can be described by a unitary map, $U_{\bf B}$, on the tensor product space of the atom's internal spin (labeled with subscript $s$) and position (labeled with subscript $p$), \begin{equation} \rho_{sp} = U_{\bf B} \left( \rho_s \otimes \rho_p\right) U_{\bf B}^{\dagger}. \end{equation} The time-of-flight signal is then related to the position of each atom by classical dynamics. Therefore, the POVM elements that describe the measurement are projectors onto the position. Since the signal is discrete, the POVM elements correspond to a projection onto a range of positions, $\delta x$, \begin{equation} \label{x_POVM} E_x = \int_x^{x + \delta x} dx' \, | x' \rangle \langle x'|. \end{equation} In principle, we could accomplish QST with the raw signal from the time-of-flight measurement and the POVM elements in Eq.~\eqref{x_POVM}. However, the number of time bins that correspond to the different measurement outcomes is potentially very large, and therefore implementing the estimation required for QST may be expensive. \begin{figure}\label{fig:sg_signal} \end{figure} Instead of directly using the discrete time-of-flight signal and the POVM in Eq.~\eqref{x_POVM}, we perform ``two-step'' QST by first extracting most of the information from the signal, then using this information for state estimation. The signal for each $F^{(\pm)}$ submanifold contains $2F^{(\pm)} + 1$ peaks that correspond to the $2F^{(\pm)} + 1$ magnetic sublevels, which are pulled apart into separate clouds by the gradient magnetic field. The distribution is proportional to the $2F^{(\pm)} + 1$ real numbers that specify the population in each magnetic sublevel. In order to extract these real numbers, we fit the signal to $2F^{(\pm)} + 1$ template functions. A numerical description of the template functions is found by applying the Stern-Gerlach analyzer to the ensemble without the gradient magnetic field.
The result is a single peak that describes the distribution of the falling atoms. The template function is then fit to each of the 16 peaks when the gradient field is applied, with fitting parameters proportional to the height, width, and center of each fit function, as shown in Fig.~\ref{fig:sg_signal}b. There are additional parameters that scale the template function in proportion to the trajectories of the different magnetic sublevels. The result of the fitting is used to estimate the fraction of spins in each magnetic sublevel, which is then related to the populations, $\{ \rho_{F,m_F} \}$. Therefore, we have extracted the measurement vector that corresponds to the $F_z$-basis measurement, \begin{equation} \mathbbm{B}_z = \{ \ket{4,-4}, \dots, \ket{4,4}, \ket{3,-3}, \dots, \ket{3,3} \}, \end{equation} where the states are written in the $| F, m_F \rangle$ basis. While we are not directly making a projective measurement in this basis, the measurement vector returned by the fitting program is, to good approximation, the set of populations we seek. We can use the control Hamiltonians to also determine the measurement vector of an arbitrary orthonormal basis measurement. In the following discussion, for convenience we label the states $\ket{\mu} = \ket{F, m_F}$, with $\mu = 0,\dots,15$, and refer to them as the ``standard basis.'' An arbitrary orthonormal basis measurement, $ \mathbbm{B}_{\psi} = \{ | \psi_{\mu} \rangle \}$, has outcome probabilities \begin{equation} p_{\mu} = \langle \psi_{\mu} | \rho | \psi_{\mu} \rangle = \langle \mu | W^{\dagger} \rho W | \mu \rangle, \end{equation} where $| \psi_{\mu} \rangle = W |\mu \rangle$. Therefore, to approximate the measurement vector of the basis $\mathbbm{B}_{\psi}$, we apply the unitary map, $W^{\dagger} = \sum_{\mu} | \mu \rangle \langle \psi_{\mu} |$, before the Stern-Gerlach analyzer. Then, the measurement vector from the fitting procedure is an approximation of the probability of each outcome from the basis $\mathbbm{B}_{\psi}$. We can search for control parameters that implement the unitary $W^{\dagger}$ with a unitary control objective, which is described in Appendix~\ref{app:control}. We can also approximate the measurement vector of a basis on an $s$-dimensional subspace of the total Hilbert space, $\mathbbm{B}_{\psi} = \{ | \psi_1 \rangle, \dots, | \psi_{s} \rangle \}$, with a partial isometry. A partial isometry is a mapping of $s < d$ orthonormal states to another set of $s$ orthonormal states. Partial isometries require less total time to implement than a full unitary map, which is advantageous when there exist sources of errors or noise that compound with time. In order to implement $\mathbbm{B}_{\psi}$, we need the partial isometry mapping, $ \{ | \psi_{\mu} \rangle\} \rightarrow \{ | \mu \rangle \}$, for $s$ pairs of orthonormal states. Partial isometry control objectives are described in detail in Appendix~\ref{app:control}. We can additionally use partial isometries to determine the measurement vector of a POVM with $N \leq 16$ rank-1, possibly nonorthogonal, elements $E_{\nu} = | \tilde{\phi}_{\nu} \rangle \langle \tilde{\phi}_{\nu} |$ on an $s$-dimensional subspace of the total Hilbert space, by means of the Neumark extension~\cite{Preskill1998}. The Neumark extension is accomplished by determining a basis measurement on the 16-dimensional Hilbert space whose projection onto the $s$-dimensional subspace gives the desired POVM elements.
That is, a basis, $\mathbbm{B}_{\phi} = \{ \phi_1,\dots, \phi_{16}\}$, accomplishes the Neumark extension if $\Pi_s | \phi_{\nu} \rangle \langle \phi_{\nu} | \Pi_s = | \tilde{\phi}_{\nu} \rangle \langle \tilde{\phi}_{\nu} |$, where $\Pi_s$ is the projection onto the $s$-dimensional subspace. The choice of $\mathbbm{B}_{\phi}$ is not unique, in that there are many bases that have the same projection onto the $s$-dimensional subspace. This freedom can be exploited to make more accurate POVMs by using a partial isometry control. To see this, we organize the set of vectors $\{ |\tilde{\phi}_{\nu} \rangle \}$ into an $s \times N$ matrix, \begin{equation} V = \begin{pmatrix} \uparrow & & \uparrow \\ |\tilde{\phi}_{1} \rangle& \cdots &| \tilde{\phi}_N \rangle \\ \downarrow & & \downarrow \end{pmatrix} = \begin{pmatrix} \leftarrow & \bra{\psi_1} & \rightarrow \\ & \vdots & \\ \leftarrow & \bra{\psi_s} & \rightarrow \end{pmatrix}, \end{equation} where $\bra{\psi_{\alpha} } = \sum_{\nu} V_{\alpha,\nu} \bra{\nu}$ are the rows of $V$ and $V_{\alpha,\nu} = \langle \alpha | \tilde{\phi}_{\nu} \rangle$ are the elements of $V$. The rows of $V$ are orthonormal, \begin{equation} \langle \psi_{\alpha} | \psi_{\beta} \rangle = \sum_{\mu,\nu} V_{\alpha,\mu} V_{\beta,\nu}^* \delta_{\mu, \nu} = \sum_{\nu}\langle \alpha | \tilde{\phi}_{\nu} \rangle \langle \tilde{\phi}_{\nu} | \beta \rangle = \delta_{\alpha, \beta}, \end{equation} due to the POVM condition, $\sum_{\nu} | \tilde{\phi}_{\nu} \rangle \langle \tilde{\phi}_{\nu} | = \mathds{1}$. We then implement the partial isometry mapping, $\{ \ket{\alpha} \} \rightarrow \{ \ket{\psi_{\alpha}} \}$, for $s$ states, where $\{\ket{\alpha} \}$ are the standard basis elements that span the $s$-dimensional subspace. Therefore, the partial isometry maps $\rho^{\rm in} = \sum_{\alpha,\beta} \rho_{\alpha,\beta} | \alpha \rangle \langle \beta |$ to $\rho^{\rm out} = \sum_{\alpha,\beta } \rho_{\alpha,\beta} |\psi_{\alpha} \rangle \langle \psi_{\beta} |$, where $\rho_{\alpha,\beta} = \langle \alpha | \rho^{\rm in} | \beta \rangle$. Then, after applying the Stern-Gerlach analyzer, the probability of obtaining the $\mu$th outcome is \begin{align} p_{\mu} &= \langle \mu | \rho^{\rm out} | \mu \rangle, \nonumber \\ &= \sum_{\alpha, \beta} \underbrace{\langle \mu | \psi_{\alpha} \rangle \langle \alpha |}_{V^*_{\alpha,\mu} \bra{\alpha}} \rho^{\rm in} \underbrace{| \beta \rangle \langle \psi_{\beta} | \mu \rangle}_{V_{\beta,\mu} \ket{\beta}}. \end{align} By the definition of $V$, $\sum_{\alpha} V^*_{\alpha,\mu} \bra{\alpha} = \bra{\tilde{\phi}_{\mu}}$ and $\sum_{\beta} V_{\beta,\mu} \ket{\beta} = \ket{\tilde{\phi}_{\mu}}$. Therefore, \begin{equation} p_{\mu} = \langle \tilde{\phi}_{\mu} | \rho^{\rm in} | \tilde{\phi}_{\mu} \rangle, \end{equation} such that the probability of each outcome of the standard basis measurement is equal to the probability of the corresponding POVM outcome. Therefore, if we apply the partial isometry mapping before the Stern-Gerlach analyzer, the measurement vector approximates that of the $N$-element POVM, $\{ E_{\nu} \}$. \subsection{Sources of noise and errors} \label{ssec:errors_expt} There are several sources of noise and errors that can reduce the accuracy of QT in the cesium spin system. One major source of error comes from inexact implementation of the control fields that perform the unitary maps and partial isometries. This error is caused by inhomogeneities and deviations from the ideal control fields.
The inhomogeneities are a result of nonuniform magnetic fields across the ensemble of atoms in the extended cloud. Therefore, each atom does not interact with the magnetic field identically, and the final state of the system is not an exact tensor product of $N$ identical states. The effect of inhomogeneities is reduced by the robustness procedures used in the control~\cite{Anderson2013, Mischuck2010}. The other source of control error comes from deviations of the phases that are actually applied to the atomic ensemble from the ideal behavior. As discussed in Appendix~\ref{app:control}, the control phases ($\phi_x(t)$, $\phi_y(t)$, and $\phi_{\mu W}(t)$) are designed to be piecewise-constant functions of time. However, this is not exactly true in the experiment because of the finite response time of the controllers. Deviations from the piecewise-constant functions cause coherent control errors. The magnitude of all control errors was previously characterized by a randomized-benchmarking-inspired technique in Refs.~\cite{Anderson2015, Smith2012}. This procedure determines the average fidelity, $\mathcal{F}$, associated with all unitary mappings and is independent of other sources of errors, such as errors in the Stern-Gerlach analyzer. It was found that each unitary on the 16-dimensional space has an average fidelity of $\mathcal{F} = 0.975$~\cite{Anderson2015}. A similar method was used to determine the error associated with partial isometry mappings of any dimension; for example, in Ref.~\cite{Smith2012} the state preparation fidelity was measured as $\mathcal{F} = 0.995$. There will also necessarily exist statistical noise from finite sampling, since only a finite number of atoms are measured. While approximately $10^6$ atoms are prepared, only a fraction of the fluorescing atoms produce photons that are detected. However, the effect of the finite sampling is still much smaller than that of the control errors. In previous diagnostics, it was seen that the final measurement vector barely fluctuates between repetitions of the Stern-Gerlach analyzer with the same controls, so finite sampling effects are negligible. There are other sources of noise and errors present in the experiment. One possible source of errors is decoherence due to interactions with stray light and background magnetic fields. There may be other errors associated with implementing the Stern-Gerlach analyzer that are less well understood. For example, the gradient magnetic field may not be exactly aligned with the bias field, meaning that the POVM we think describes the measurement is slightly rotated. Also, there is shot noise in the measured fluorescence signal and other technical noise associated with the photon detection. However, we believe all these other sources to be small compared to the control errors. \section{Implemented POVMs} \label{sec:exp_POVMs} Several different POVMs were implemented in the experiment. All measurement vectors were produced via the two-step procedure described in Sec.~\ref{sec:SG_measurement} and extracted by a least-squares fit to the time-of-flight signal. If we precede the Stern-Gerlach analyzer by the appropriate unitary map, we can extract the measurement vector in an arbitrary basis on the full 16-dimensional Hilbert space, or on any subspace. Alternatively, via the Neumark extension, we can implement any POVM with up to 16 rank-1 POVM elements with states that live in an $s<d$-dimensional Hilbert space.
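Both readout strategies just summarized are easy to verify numerically. The sketch below checks, for a Haar-random basis, that preceding the standard-basis ($F_z$) readout by $W^{\dagger}$ reproduces the basis probabilities, and, for a qubit ``trine'' POVM ($s=2$, $N=3$), that the partial-isometry construction of Sec.~\ref{sec:SG_measurement} reproduces the POVM probabilities. The trine example, the random test states, and the use of Python with \texttt{numpy}/\texttt{scipy} are assumptions for illustration only; in the experiment the rows of $V$ are embedded in the 16-dimensional space by the control fields.
\begin{verbatim}
import numpy as np
from scipy.stats import unitary_group

d = 16
rng = np.random.default_rng(4)

# --- Arbitrary basis measurement via a pre-rotation W^dag --------------------
W = unitary_group.rvs(d, random_state=4)        # columns are the target basis |psi_mu>
G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = G @ G.conj().T
rho /= np.trace(rho).real                       # random full-rank test state

p_direct = np.array([np.real(np.vdot(W[:, mu], rho @ W[:, mu])) for mu in range(d)])
p_rotated = np.real(np.diag(W.conj().T @ rho @ W))   # apply W^dag, then read out F_z
assert np.allclose(p_direct, p_rotated)

# --- Rank-1 POVM on a subspace via a partial isometry (Neumark extension) ----
s, N = 2, 3
thetas = [0.0, 2 * np.pi / 3, 4 * np.pi / 3]
phis = [np.sqrt(2 / 3) * np.array([np.cos(t), np.sin(t)]) for t in thetas]  # trine
assert np.allclose(sum(np.outer(p, p.conj()) for p in phis), np.eye(s))

V = np.column_stack(phis)                       # s x N; columns are the POVM vectors
assert np.allclose(V @ V.conj().T, np.eye(s))   # rows <psi_alpha| are orthonormal

v = rng.normal(size=s) + 1j * rng.normal(size=s)
v /= np.linalg.norm(v)
rho_in = np.outer(v, v.conj())                  # random pure state on the subspace

Wiso = V.conj().T                               # N x s isometry: |alpha> -> |psi_alpha>
rho_out = Wiso @ rho_in @ Wiso.conj().T
p_isometry = np.real(np.diag(rho_out))          # standard-basis readout after the isometry
p_povm = np.array([np.real(np.vdot(p, rho_in @ p)) for p in phis])
assert np.allclose(p_isometry, p_povm)
\end{verbatim}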
The study was carried out for Hilbert spaces of $d=4$ and $d = 16$ dimensions. The measurement vectors of the following full-IC POVMs were approximated via the two-step method described in Sec.~\ref{sec:SG_measurement}: \begin{itemize} \item {\bf Symmetric informationally complete POVM:} (SIC POVM), originally proposed in Ref.~\cite{Renes2004a} and described in further detail in Sec.~\ref{ssec:noiseless_ST}. The SIC POVM has $d^2$ outcomes and has a known construction for all dimensions $d \leq 67$~\cite{Scott2010}. In the experiment, the SIC is applied only for $d= 4$, and its 16 POVM elements are implemented with the Neumark extension. \item {\bf Mutually unbiased bases:} (MUB) originally proposed in Ref.~\cite{Wootters1989} and described in further detail in Sec.~\ref{ssec:noiseless_ST}. The MUB consists of $d+1$ orthonormal bases and has an analytic expression for dimensions that are primes or powers of primes~\cite{Wootters1989}. Therefore, the MUB can be implemented in both the $d = 4$ and 16 systems. \item {\bf Gell-Mann bases:} (GMB) an extension of the five bases proposed in Ref.~\cite{Goyeneche2015} and discussed in Sec.~\ref{sec:decomp_method}. The GMB consists of $2d-1$ orthonormal bases for dimensions that are powers of two, with an algorithm provided in Appendix~\ref{app:GMB}. (Constructions for other dimensions exist but are more complicated.) The GMB is applied for both $d = 4$ and $d= 16$ in the experiment. \end{itemize} The measurement vectors of the following rank-1 strictly-complete POVMs were estimated by the two-step procedure in Sec.~\ref{sec:SG_measurement}: \begin{itemize} \item {\bf PSI-complete:} (PSI) originally proposed in Ref.~\cite{Flammia2005} and proven to be rank-1 complete therein. The PSI-complete POVM has $3d-2$ rank-1 POVM elements in any dimension. A construction of the POVM and a proof that it is rank-1 strictly-complete is provided in Appendix~\ref{app:constructions}. We implement the PSI-complete POVM for $d=4$ via the Neumark extension. \item {\bf Five Gell-Mann bases:} (5GMB) originally proposed in Ref.~\cite{Goyeneche2015} as a rank-$1$ complete measurement. The 5GMB are the first five orthonormal bases of the GMB. In Appendix~\ref{app:GMB}, we use the matrix completion method from Sec.~\ref{sec:EP} to prove that the 5GMB are also rank-$1$ strictly-complete. Since the GMB are applied for both $d = 4$ and 16, we can also apply the 5GMB for both systems. \item {\bf Five polynomial bases:} (5PB) originally proposed in Ref.~\cite{Carmeli2016} and proven to be rank-1 strictly-complete therein. The 5PB are the 4PB, discussed below, plus the basis $\mathbbm{B}_z$. The remaining four bases are constructed from a set of orthogonal polynomials. The construction applies for any dimension, so we implement them for both $d=4$ and 16. We provide an explicit construction of the five bases in Appendix~\ref{app:constructions}. The measurement vector for the first basis is taken from the first basis of the 5GMB, since it is the same basis measurement. \item {\bf Five mutually unbiased bases:} (5MUB) Numerical simulations, similar to the ones performed for random bases in Sec.~\ref{sec:random_bases}, indicate that the first five bases of the MUB correspond to a rank-1 strictly-complete POVM. We only apply the 5MUB for $d= 16$, since the 5MUB in $d=4$ is full-IC. The measurement vector for the 5MUB is the same as that of the first five bases of the MUB.
\end{itemize} Finally, the measurement vectors for the following rank-1 complete POVMs were estimated by the two-step procedure described in Sec.~\ref{sec:SG_measurement}: \begin{itemize} \item {\bf Four GMB:} (4GMB) originally proposed in Ref.~\cite{Goyeneche2015} and given in Eq.~\eqref{4gmb}. The 4GMB are four of the orthonormal bases that make up the GMB. Since the GMB can be implemented for $d = 4$ and 16, the first four can be implemented in both dimensions as well. The measurement vector for the 4GMB is the measurement vector from four bases of the 5GMB. \item {\bf Four polynomial bases:} (4PB) originally proposed in Ref.~\cite{Carmeli2015}. The 4PB consists of four orthonormal bases that are constructed based on a set of orthogonal polynomials for any dimension. We generate the 4PB based on the Hermite polynomials and apply them for both $d = 4$ and 16. We provide an explicit definition of the four bases in Appendix~\ref{app:constructions}. The measurement vector for the 4PB is the measurement vector from the last four bases of the 5PB. \end{itemize} \section{Quantum state tomography results} \label{sec:ST_exp} In the experiments performed in the Jessen lab, each measurement was applied to a fixed set of 20 different Haar-random pure states prepared by the state-to-state mapping described in Appendix~\ref{app:control}. To determine how each measurement and estimation procedure performs, we would like to compare the actual prepared state, $\rho_{\textrm{a}}$, to the estimated state, $\hat{\rho}$. However, as with any QT experiment, we do not know the actual prepared state, so instead we compare the estimated state to the target state, $\rho_{\textrm{t}}$. From Ref.~\cite{Smith2012}, we know that the prepared state is close to the target state (the average fidelity of state preparation is $\mathcal{F} = 0.995$), and therefore this comparison is a reasonable measure of QT. We use the infidelity between two quantum states, which is the standard measure for QST, to quantify the comparison, \begin{equation} \label{fidelity} 1-F(\rho_{\rm t}, \hat{\rho}) = 1-\left| \langle \psi_{\rm t} | \hat{\rho} | \psi_{\rm t} \rangle \right|, \end{equation} where $\rho_{\rm t} = | \psi_{\rm t} \rangle \langle \psi_{\rm t} |$, since in the experiment all target states are pure. (We use the script letter, $\mathcal{F}$, to denote the average fidelity and the capital $F$ to denote the fidelity between two particular quantum states.) The value of $\hat{\rho}$ will depend on the type of estimator chosen. In Sec.~\ref{sec:numerical_methods}, we presented many estimators for QST, such as linear-inversion (LI), least-squares (LS), maximum-likelihood (ML), and trace-norm minimization (Tr-norm). In this section we forgo applying the LI estimator, since it does not produce a physical state and so Eq.~\eqref{fidelity} does not apply. We determine the estimates from LS and ML with the CVX package~\cite{cvx} in MATLAB for each of the full-IC and rank-1 strictly-complete POVMs discussed in Sec.~\ref{sec:exp_POVMs}. For the rank-1 complete POVMs, we use the LS program and the rank-$r$-projection algorithm described in Sec.~\ref{sec:noiseless_rankr_comp}. We discuss the Tr-norm estimates in further detail later. The results are plotted in Fig.~\ref{fig:IC}. All POVMs and estimators produce low-infidelity estimates of the quantum state. The ML estimate produces a consistently lower infidelity than the LS.
This would be expected if the experiment were limited by finite sampling, since LS differs from ML when there is a finite number of copies~\cite{James2001}. However, the measurements in this experiment are not limited by finite sampling, so we do not believe the difference is caused by this effect. Instead, the difference is likely due to the positivity constraint, which changes the shape of the likelihood function. This effect will be studied in future work that investigates how each estimator behaves in the presence of the positivity constraint. While the ML estimate has lower infidelities, it requires greater computational effort to produce, since the log-likelihood function is less smooth and therefore more difficult to optimize. We find that when computational effort is not a limitation, ML is the best estimator, though the gain is modest. Interestingly, for $d = 4$, Fig.~\ref{fig:IC} shows that the estimates from the rank-$r$-projection algorithm with the rank-1 complete POVMs yield the lowest infidelities. However, it is not fair to compare different POVMs when the data is fed into different estimators. To make a fair comparison, we apply the rank-$r$-projection algorithm to the data from the rank-1 strictly-complete POVMs. In this case, we find the infidelities of the rank-1 strictly-complete POVMs to be $1-F_{\rm 5GMB} = 0.0068 (0.0010)$ and $1-F_{\rm 5PB} = 0.0115(0.0019)$, which are comparable to the values found for the 4GMB, $1-F_{\rm 4GMB} = 0.0931(0.0010)$ (the standard error of the mean is given in parentheses). Therefore, the low infidelities produced by the rank-1 complete POVMs should be attributed to the rank-$r$-projection algorithm that is required for this type of POVM. However, this algorithm is not always desirable. Beyond the issue of convergence, discussed in Sec.~\ref{sec:noiseless_rankr_comp}, the algorithm is, by construction, biased to produce only pure states. Therefore, the estimate lacks information about preparation errors that may cause the actual state to be mixed. Moreover, since the estimate is always pure, it may be much closer to the target state, which is also pure, than the actual state. In this case, comparing the different POVMs by the infidelity between the target state and the estimate would not be an accurate measure of success. For these reasons, we do not consider rank-1 complete POVMs for the remainder of the discussion. \begin{figure}\label{fig:IC} \end{figure} We focus on LS (blue circles) in order to compare the different POVMs, since ML and LS have similar trends. For both dimensions, the results mostly match what we expect based on previous discussions of informational completeness. Since we have prior information that the state is near, but certainly not exactly, pure, we predict the full-IC POVMs to perform the best, since they can characterize an arbitrary full-rank state. However, since the state is near-pure, we expect the strictly-complete POVMs to perform almost as well, since these POVMs are robust to preparation errors. For $d=4$, we see that the full-IC POVMs, the GMB and MUB, indeed produce the lowest infidelity estimates, followed by the strictly-complete POVMs, the 5GMB and the 5PB, which matches our predictions. The results are similar for $d=16$: the MUB and GMB produce the lowest infidelity estimates, followed by the 5MUB, the 5GMB, and then the 5PB. However, for $d=4$, the SIC curiously performs worse than the 5GMB and the 5PB despite the fact that it is full-IC. The PSI also performs much worse than expected.
Therefore, there is some other difference between the POVMs, besides their informational completeness properties. \begin{figure}\label{fig:infid_v_ele} \end{figure} In order to shed light on the differences between the POVMs, we re-plot the infidelity for the LS estimate from Fig.~\ref{fig:IC} as a function of the number of POVM outcomes in Fig.~\ref{fig:infid_v_ele}. We omit the rank-1 complete POVMs for the reasons discussed above. The results show a strong correlation between the infidelity of the LS estimate and the number of POVM elements. POVMs that contain more elements, like the GMB and MUB, produce the lowest infidelity estimates, while the ones with fewer elements, like the SIC and PSI, produce the highest infidelity estimates. This correlation arises because POVMs with more elements contain repeated information, which has the effect of reducing the noise and error level. Since the state is near-pure, it is almost entirely described by the $2d-2$ free parameters that describe a pure state. The relation between these free parameters and the measured outcomes is nonlinear and very complex. However, POVMs with more than $2d-2$ elements provide repeated information that is averaged to reduce the noise and error level. It is important to keep in mind that in this experiment the dominant sources of noise and errors are the control errors, which are essentially random but fixed for a given control field. Therefore, POVMs that contain more than $2d-2$ elements produced with multiple control fields should perform the best, since the effect of the control errors will be reduced by the redundant information. With this understanding, we can explain the results in Fig.~\ref{fig:infid_v_ele}. The PSI POVM contains the fewest elements ($2d$) and is implemented with a single control field. Therefore, it contains no averaging of the control errors, explaining the high infidelity. The SIC, which has redundancy in the $d^2$ elements, performs poorly since it is implemented with a single control field, and therefore has no averaging of the errors. The other POVMs contain at least $5d$ elements that are implemented with at least five control fields, and so have some averaging of the errors. For $d=16$, the GMB perform worse than the MUB, which is contrary to this understanding. The reason could be related to the fact that the GMB require many more bases and the state preparation may have drifted slightly for each measurement. The results in Fig.~\ref{fig:infid_v_ele} demonstrate that, in practice, not all rank-1 strictly-complete POVMs are equal. Some rank-1 strictly-complete POVMs, such as the 5GMB or 5PB, produce lower infidelity estimates in the presence of noise and errors than others, such as the PSI. However, the lower infidelity comes at the price of more measurements. This demonstrates an important concept in the practical implementation of QST, which is the tradeoff between efficiency and robustness. For example, while the GMB is robust, i.e., produces a low infidelity estimate, it is not very efficient, i.e., it requires the most POVM elements. As shown in Fig.~\ref{fig:infid_v_ele}, after a certain point the gains in robustness are modest and must be weighed against the loss of efficiency. This effect becomes more pronounced as the dimension increases, as seen by comparing the $d=4$ and $d=16$ plots in Fig.~\ref{fig:infid_v_ele}. To accomplish effective QST, it is desirable to look for POVMs that have a satisfactory tradeoff between efficiency and robustness, such as the MUB for $d=16$.
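To make the estimators and figures of merit used above concrete, the following minimal sketch reproduces a constrained least-squares (LS) estimate and the infidelity of Eq.~\eqref{fidelity} in Python with the \texttt{cvxpy} package, used here in place of the CVX/MATLAB package from the actual analysis. The random-orthonormal-basis POVM, the additive noise model, and all parameter values are illustrative placeholders and are not the experimental measurements.
\begin{verbatim}
import numpy as np
import cvxpy as cp

d = 4
rng = np.random.default_rng(0)

# Placeholder POVM: projectors onto d+1 random orthonormal bases
# (a stand-in for the MUB), normalized so the elements sum to the identity.
povm = []
for _ in range(d + 1):
    q, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
    povm += [np.outer(q[:, k], q[:, k].conj()) / (d + 1) for k in range(d)]

# Pure target state and a noisy measurement vector f (noise stands in for control errors).
psi = rng.normal(size=d) + 1j * rng.normal(size=d)
psi /= np.linalg.norm(psi)
rho_t = np.outer(psi, psi.conj())
p = np.real([np.trace(E @ rho_t) for E in povm])
f = p + 0.01 * rng.normal(size=len(p))

# Constrained LS estimate: minimize ||f - Tr[E_mu rho]||_2 over quantum states.
rho = cp.Variable((d, d), hermitian=True)
residuals = cp.hstack([f[m] - cp.real(cp.trace(povm[m] @ rho)) for m in range(len(povm))])
problem = cp.Problem(cp.Minimize(cp.norm(residuals, 2)),
                     [rho >> 0, cp.real(cp.trace(rho)) == 1])
problem.solve()
rho_hat = rho.value

# Infidelity of Eq. (fidelity) for a pure target: 1 - <psi_t| rho_hat |psi_t>.
print("1 - F =", 1 - np.real(psi.conj() @ rho_hat @ psi))
\end{verbatim}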
\begin{figure}\label{fig:fid_v_b} \end{figure} We can also see the tradeoff of efficiency and robustness in POVMs that consist of multiple basis measurements, such as the GMB and MUB. For this case, we study the infidelity of the estimate as a function of the number of orthonormal bases measured. After each basis that makes up the GMB and MUB, we compile the measurement vector for all previous bases and apply the two programs, LS and ML. For this comparison, we also estimate the state with the Tr-norm program, since this situation is similar to the quantum compressed-sensing setting for which the Tr-norm was proposed. The value of $\varepsilon$ in the Tr-norm program was chosen by numerically modeling the types of errors expected in the control fields used to produce the unitary maps~\cite{SosaMartinez2016}. We calculate the infidelity between the estimate and the target state. The results are plotted in Fig.~\ref{fig:fid_v_b}. As expected, both POVMs and all estimators produce a low infidelity estimate with five bases, since for both, five bases form a rank-1 strictly-complete measurement and all three programs satisfy the form given in Corollary~\ref{cor:robustness}. This is only possible due to the positivity constraint on quantum states, and thus the existence of rank-1 strictly-complete measurements. By measuring more bases, we refine the estimate and the infidelity slowly decreases. This matches the comparison shown in Fig.~\ref{fig:infid_v_ele} that demonstrated the tradeoff between robustness and efficiency for different POVMs. After a certain number of measurements, the gain in robustness, i.e., decrease in infidelity, is modest. We see that the Tr-norm estimator has slightly lower infidelity than LS and ML. This is due to the ``bias'' in the estimator towards pure states, as was true with the rank-$r$-projection algorithm. While the Tr-norm is not as biased as the rank-$r$-projection algorithm (it does not constrain the state to be pure), it has a similar effect when we compare the estimate to the target state. Since the estimate is biased towards pure states, it may be closer to the target state, which is also pure, than the actual state, which is likely full rank. Therefore, we conclude that this type of estimator is not desirable for QST when we are looking to diagnose preparation errors. Another factor that may impact the performance of the rank-1 strictly-complete and rank-1 complete POVMs is the failure set discussed in Chapters~\ref{ch:new_IC} and~\ref{ch:constructions}. This is the subset of measure zero within the set of quantum states where the probabilities from the POVM cannot uniquely identify the quantum state. The PSI, 5GMB and 4GMB suffer from such failure sets. Since the set has zero volume, we do not expect to randomly select states that are within such sets, and in fact none of the 20 Haar-random states are within any of the failure sets. However, Finkelstein~\cite{Finkelstein2004} showed that in the presence of noise and errors, the failure set in fact has a finite measure. Therefore, the failure set may have a non-negligible impact on estimation in QST. The failure set for each POVM is not the same, as shown in Chapter~\ref{ch:constructions} and Appendix~\ref{app:constructions}. Some POVMs have more complicated failure sets, which translate to smaller finite volume sets in the presence of noise and errors. For example, we show in Appendix~\ref{app:constructions} that the 5GMB have a complicated failure set, while the PSI has a very simple set.
Therefore, we expect that the PSI would suffer from such a failure set more than the 5GMB. However, there is no clear indication of this effect in the experimental results. While the PSI does perform worse than all other POVMs, this could also be explained by the fact that the PSI has the fewest POVM elements and is implemented with a single control field. Moreover, while the 5GMB suffers from the failure set, it still produces a lower infidelity estimate than the 5PB, which have the same number of elements but no failure set. Therefore, while the failure set may be affecting the estimation, we do not see any clear indication of it, and therefore do not believe it to be a practical limitation of such POVMs. \section{Comparison of POVMs with Hilbert-Schmidt distance} \label{sec:HS_analysis} We saw in the previous section that the number of POVM elements correlates with the success of the POVM. In this section, we formalize this relation by studying the structure of POVMs. The results are presented in terms of the Hilbert-Schmidt (HS) distance squared, \begin{equation} \label{Delta2} \Delta^2(\hat{\rho}, \rho_{\rm t} ) = \| \hat{\rho} - \rho_{\textrm{t}}\|_2^2, \end{equation} which we define in terms of the target state, $\rho_{\rm t}$, since we do not know the actual prepared state. For comparison, the HS-distance squared ranges in value from $\Delta^2(\hat{\rho}, \rho_{\rm t} ) = 0$, when the states are identical, to $\Delta^2(\hat{\rho}, \rho_{\rm t} ) = 2$, which occurs for two orthogonal pure states. The HS-distance offers the advantage that it is more straightforward to study analytically. We have used the HS-distance frequently in previous chapters. For example, in Chapters~\ref{ch:background} and~\ref{ch:new_IC}, we derived the robustness bound for full-IC and rank-$r$ strictly-complete POVMs based on the HS-distance. Moreover, Scott~\cite{Scott2006} showed that the estimates returned from certain POVMs, referred to as ``tight,'' minimize the expected HS-distance over all realizations of the experiment when the measurement is only limited by finite sampling. These POVMs are, therefore, optimal for this particular situation. Two common examples of tight POVMs were implemented in the experiment, the SIC and the MUB. We reassess the experimental results with respect to the HS-distance and compare the results to theoretical predictions. \subsection{Comparison of full-IC POVMs} \label{ssec:HS_full-IC} We start with the full-IC POVMs and only consider the linear-inversion estimate, $\hat{\rho} = \hat{R}$. While this estimate is not necessarily a quantum state, and thus not appropriate for many applications, it is a useful mathematical tool for comparing POVMs. This was the approach taken by Scott~\cite{Scott2006}, who showed that tight POVMs, such as the SIC and MUB, produce the lowest average HS-distance squared when there is a fixed number of copies and the experiment is only limited by the resulting finite sampling noise. The experimental results with the HS-distance for the full-IC POVMs and the linear-inversion estimate are given in Table~\ref{tbl:full_IC}.
\begin{table}[h] \centering \renewcommand{\arraystretch}{2} \begin{tabular}{lll} \cline{2-3} \multicolumn{1}{l|}{} & \multicolumn{1}{c|}{$d=4$} & \multicolumn{1}{c|}{$d=16$} \\ \hline \multicolumn{1}{|l|}{SIC} & \multicolumn{1}{c|}{0.0466 (0.0048)} & \multicolumn{1}{c|}{-} \\ \hline \multicolumn{1}{|l|}{MUB} & \multicolumn{1}{c|}{0.0111 (0.0011)} & \multicolumn{1}{c|}{0.0586 (0.0035)} \\ \hline \multicolumn{1}{|l|}{GMB} & \multicolumn{1}{c|}{0.0067 (0.0008)} & \multicolumn{1}{c|}{0.0710 (0.0020)} \\ \hline \end{tabular} \caption[Experimental value of HS-distance squared for full-IC POVMs]{{\bf Experimental value of HS-distance squared for full-IC POVMs.} Each cell gives the HS-distance squared between the linear-inversion estimate and the target state, averaged over all 20 Haar-random pure states, with the standard error of the mean given in parentheses.} \label{tbl:full_IC} \end{table} From the table, we see that for $d = 4$ the GMB produce the minimum average HS-distance squared, followed by the MUB, and then the SIC POVM. This matches the infidelity results in the previous section. For $d=16$, the MUB produce a lower value of the HS-distance squared than the GMB, which is the same trend as when we compared infidelity. Therefore, the HS-distance results follow the same trends we saw with infidelity. While the HS-distances follow a similar trend to the infidelity, they do not match the result by Scott~\cite{Scott2006} for two reasons. First, we know that the experiment is not limited by finite sampling since repeating the Stern-Gerlach analyzer with the same control parameters produces a nearly identical measurement vector. Instead, the measurements are actually limited by errors in the control fields. Second, the SIC POVM is implemented with a single run of the Stern-Gerlach analyzer while the MUB and GMB are done with many runs. Therefore, if the experiment were limited by finite sampling, it would be as if the SIC used $m$ copies, the MUB used $(d+1)m$ copies, and the GMB used $(2d-1)m$ copies. While we know the finite sampling noise is negligible, the difference in the number of applications of the Stern-Gerlach analyzer still has an impact on the accuracy of the POVM. To gain better insight into the experimental results, we construct a general framework for predicting the HS-distance in the presence of arbitrary noise and errors. This framework is an extension of the work by Scott~\cite{Scott2006} and will allow us to compare arbitrary full-IC POVMs in the presence of any type of noise or error. In any experiment, random noise affects the exact value of the HS-distance squared. So instead of studying the HS-distance squared for a single realization, we focus on the expected HS-distance squared. This is defined by, \begin{equation} \label{exp_Delta2} \bar{\Delta}^2(\hat{R}, \rho_{\rm t}) = \mathbb{E} \left[ \| \hat{R} - \rho_{\textrm{t}} \|_2^2 \right], \end{equation} where the expectation value is over all realizations of the experiment. We wish to relate the value of $\bar{\Delta}^2(\hat{R}, \rho_{\rm t})$ to the POVM in order to compare different POVMs. The linear-inversion estimate provides a method to make such a relation, which motivates the choice of linear-inversion in this section. It can be expressed in terms of the reconstruction operators, $\{ Q_{\mu} \}$, which can be derived from $\Xi^+$, discussed in Sec.~\ref{sec:numerical_methods}, \begin{equation} \label{LI_expand} \hat{R} = \sum_{\mu} f_{\mu} Q_{\mu}.
\end{equation} For the SIC and MUB the reconstruction operators, $\{ Q_{\mu} \}$, have a ``painless'' form and are simply proportional to the POVM elements~\cite{Scott2006}. We can also express the target state in terms of these reconstruction operators, $\rho_{\rm t} = \sum_{\mu} p_{\mu} Q_{\mu}$, where $\{ p_{\mu} \}$ are the probabilities of each outcome given the target state. If we substitute Eq.~\eqref{LI_expand} into Eq.~\eqref{exp_Delta2} then, \begin{align} \label{Delta_exp} \bar{\Delta}^2(\hat{R}, \rho_{\rm t}) &= \mathbb{E} \left[ \textrm{Tr} \left( \sum_{\mu, \nu} \left( f_{\mu} Q_{\mu} - p_{\mu} Q_{\mu} \right)^{\dagger} \left( f_{\nu} Q_{\nu} - p_{\nu} Q_{\nu} \right) \right) \right], \nonumber \\ &= \sum_{\mu, \nu} \mathbb{E} \left[ (f_{\mu} - p_{\mu} )( f_{\nu} - p_{\nu} ) \right] \textrm{Tr}[ Q_{\mu} Q_{\nu} ]. \end{align} We define $G_{\mu,\nu} = \textrm{Tr}[Q_{\mu} Q_{\nu}]$ as elements of the ``Gramian matrix,'' $G$. Similarly, we define $Y_{\mu, \nu}(\rho_{\rm t}) = \mathbb{E} \left[ (f_{\mu} - p_{\mu} )( f_{\nu} - p_{\nu} ) \right]$ as elements of another matrix, called the ``noise/error matrix,'' $Y(\rho_{\rm t})$. We call this the noise/error matrix because the noise and errors perturb the measurement vector from the probabilities. The noise/error matrix is a function of $\rho_{\rm t}$, since the noise and errors may be state dependent, as is the case with finite sampling noise. This leads to the following compact equation, \begin{equation} \label{Delta2} \bar{\Delta}^2(\rho_{\rm t}) = \textrm{Tr} \left[ Y(\rho_{\rm t} )G \right], \end{equation} where we have dropped the dependence on $\hat{R}$ since it is contained within $Y(\rho_{\rm t})$. We have thus cleanly related $\bar{\Delta}^2(\rho_{\rm t})$ to two matrices, one that depends only on the POVM, $G$, and one that depends only on the noise/errors present, $Y$. In general, POVMs whose Gramian matrix $G$ has small elements will produce a smaller $\bar{\Delta}^2(\hat{R}, \rho_{\rm t})$. This matches with Table~\ref{tbl:full_IC}, since the $G$ matrix from the GMB has the smallest elements, followed by the MUB, and then the SIC. However, this does not explain why, for $d =16$, the MUB produce a lower value of the HS-distance squared than the GMB. The reason is likely related to the fact that $\bar{\Delta}^2(\hat{R},\rho_{\rm t})$ is also dependent on the type of noise and errors present, which is contained in $Y(\rho_{\rm t})$. Therefore, in order to truly compare POVMs, we also need to know the form of the noise and errors. In the cesium spin system, the dominant source of error is imperfections in the implementation of the unitary maps that produce each basis measurement, which define each POVM. These errors are only dependent on the control field, and therefore independent of the state that is measured, so $Y(\rho_{\rm t}) = Y$ for all $\rho_{\rm t}$. Moreover, for a given control field, the errors are constant, i.e., they are systematic errors. This was verified experimentally by repeating the measurement with the same control field and determining that the measurement vector is constant between repetitions. Then, $Y$ is a constant for all realizations of the experiment with the given control field. The value of $Y_{\mu,\nu}$ depends on the errors in the unitary map, but the relation is not straightforward. Therefore, without further study of the control errors, we cannot exactly determine the form of $Y$.
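As a concrete illustration of this framework, the sketch below (Python/NumPy) builds the reconstruction operators $\{Q_\mu\}$ from the pseudoinverse of the measurement map, forms the Gramian matrix $G_{\mu,\nu} = \textrm{Tr}[Q_\mu Q_\nu]$, and checks the prediction $\bar{\Delta}^2 = \textrm{Tr}[YG]$ against a Monte-Carlo average for a toy noise/error matrix $Y = \sigma^2 I$ (independent, identically distributed outcome perturbations). The POVM and the noise model are placeholders; they are not meant to represent the experimental control errors.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
d = 4

# Placeholder full-IC POVM: projectors onto d+1 Haar-random orthonormal bases,
# normalized so that the elements sum to the identity.
povm = []
for _ in range(d + 1):
    q, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
    povm += [np.outer(q[:, k], q[:, k].conj()) / (d + 1) for k in range(d)]
n_el = len(povm)

# Measurement map: p = Xi vec(rho).  The reconstruction operators Q_mu are the
# columns of the pseudoinverse, reshaped to matrices, so hat{R} = sum_mu f_mu Q_mu.
Xi = np.array([E.conj().reshape(-1) for E in povm])
Xi_pinv = np.linalg.pinv(Xi)
Q = [Xi_pinv[:, mu].reshape(d, d) for mu in range(n_el)]

# Gramian matrix G_{mu,nu} = Tr[Q_mu Q_nu].
G = np.real(np.array([[np.trace(Q[m] @ Q[n]) for n in range(n_el)]
                      for m in range(n_el)]))

# Toy noise/error matrix: iid zero-mean outcome perturbations, Y = sigma^2 * I.
sigma = 0.01
Y = sigma ** 2 * np.eye(n_el)
print("predicted   E||R - rho||^2 =", np.trace(Y @ G))

# Monte-Carlo check for a fixed pure target state.
psi = rng.normal(size=d) + 1j * rng.normal(size=d)
psi /= np.linalg.norm(psi)
rho_t = np.outer(psi, psi.conj())
p = np.real([np.trace(E @ rho_t) for E in povm])
samples = []
for _ in range(2000):
    f = p + sigma * rng.normal(size=n_el)          # perturbed measurement vector
    R = sum(f[mu] * Q[mu] for mu in range(n_el))   # linear-inversion estimate
    samples.append(np.linalg.norm(R - rho_t) ** 2)
print("Monte-Carlo E||R - rho||^2 =", np.mean(samples))
\end{verbatim}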
While we do not know the exact form of $Y$ for the cesium spin system, the University of Arizona group performed an additional experiment that gives insight into the magnitude of the errors present. This experiment was based on repeating the SIC POVM, but with each repetition implemented with a different control field. Due to the nature of numerical control optimization, there exist infinitely many other control fields that implement the same unitary. Different control fields may have different errors associated with them. Therefore, if we repeat the same POVM but implement it with different control fields, we effectively randomize over some control errors. There may be some errors that cannot be randomized over, such as decoherence. These errors then remain systematic errors. Based on the behavior of $\Delta^2(\hat{R},\rho_{\rm t})$, we can determine the magnitude of the random errors compared to the systematic errors. To accomplish this, we build two theoretical models of the behavior of $\bar{\Delta}^2(\hat{R},\rho_{\rm t})$ and compare them to the experimental results. Since we assume that $Y$ is independent of the measured state, we denote $\bar{\Delta}^2 = \bar{\Delta}^2(\hat{R},\rho_{\rm t})$. The two models we consider correspond to two different forms of the noise/error matrix. The first is that the control errors are totally random. Then, the value of $f_{\mu} - p_{\mu}= e_{\mu}$ is a random variable with zero mean, and $Y_{\mu,\nu} = \mathbb{E}[e_{\mu} e_{\nu}]$ is the covariance matrix. In this case, we label $Y=C$, for covariance. With random errors, repeating the SIC POVM $n$ times will decrease $\bar{\Delta}^2$ by a factor of $1/n$ since covariance matrices add, and we average over the $n$ repetitions. Therefore, $\bar{\Delta}^2$ is a function of the number of repetitions, \begin{equation} \label{Delta2_rand} \bar{\Delta}^2(n) = \frac{1}{n} \textrm{Tr} \left[ C G \right] = \frac{x_{\textrm{rand}}}{n}, \end{equation} where $x_{\textrm{rand}} \triangleq \textrm{Tr} \left[ C G \right]$. The second model we consider is when there exist both random and systematic errors in the control fields. In this case, $f_{\mu} - p_{\mu} = e_{\mu} + k_{\mu}$, where $e_{\mu}$ represents the random errors and $k_{\mu}$ represents the systematic errors, and we have assumed that both error sources are uncorrelated. The systematic errors, $k_{\mu}$, are constant for all realizations of the experiment, such that $\mathbb{E}[f_{\mu} - p_{\mu}] = k_{\mu}$. The elements of $Y_{\mu,\nu}$ are then, \begin{equation} Y_{\mu,\nu} = \mathbb{E}[ (f_{\mu} - p_{\mu})(f_{\nu} - p_{\nu}) ] = \mathbb{E}[(e_{\mu} + k_{\mu})(e_{\nu} + k_{\nu})] = \mathbb{E}[e_{\mu}e_{\nu}] + k_{\mu} k_{\nu}, \end{equation} where the cross terms are zero, since $\mathbb{E}[e_{\mu}] = 0$ and we previously assumed the errors are uncorrelated. The first term is the covariance of the random errors, so we again label the elements as $C_{\mu,\nu}$. The second term is constant for all realizations due to the definition of the systematic errors, and we label the elements as $K_{\mu,\nu} = k_{\mu} k_{\nu}$. We now plug this expression into Eq.~\eqref{Delta2}, \begin{equation} \label{Delta2_both} \bar{\Delta}^2(n) = \frac{1}{n} \textrm{Tr} \left[ C G \right] + \textrm{Tr} \left[ K G \right] = \frac{x_{\textrm{rand}}}{n} + x_{\textrm{sys}}, \end{equation} where $x_{\textrm{sys}} \triangleq \textrm{Tr} \left[ K G \right]$.
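The two models in Eqs.~\eqref{Delta2_rand} and~\eqref{Delta2_both} can be fit to repetition-averaged data with any standard curve-fitting routine. The sketch below uses \texttt{scipy.optimize.curve\_fit} in place of MATLAB's \texttt{fit} function; the values of $\Delta^2(n)$ are made-up placeholders, not the experimental data points of Fig.~\ref{fig:SIC_rep}.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

# Placeholder data: average Delta^2(n) after n repetitions of the SIC POVM with
# distinct control fields.  Illustrative numbers only, not the experimental values.
n = np.arange(1, 11)
delta2 = np.array([0.041, 0.025, 0.020, 0.017, 0.016,
                   0.014, 0.014, 0.013, 0.012, 0.012])

def random_only(n, x_rand):                    # Eq. (Delta2_rand): purely random errors
    return x_rand / n

def random_plus_systematic(n, x_rand, x_sys):  # Eq. (Delta2_both): random + systematic
    return x_rand / n + x_sys

p1, _ = curve_fit(random_only, n, delta2)
p2, _ = curve_fit(random_plus_systematic, n, delta2)
print("random only:         x_rand = %.4f" % p1[0])
print("random + systematic: x_rand = %.4f, x_sys = %.4f" % tuple(p2))

# r^2 goodness of fit for each model.
for model, p in [(random_only, p1), (random_plus_systematic, p2)]:
    resid = delta2 - model(n, *p)
    r2 = 1 - np.sum(resid**2) / np.sum((delta2 - delta2.mean())**2)
    print(model.__name__, "r^2 = %.4f" % r2)
\end{verbatim}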
\begin{figure}\label{fig:SIC_rep} \end{figure} In the experiment, the SIC POVM was implemented with 10 different control fields and each was applied to 10 Haar-random pure states. Since $\bar{\Delta}^2(n)$ is the same for any target state, we can approximate the value of $\bar{\Delta}^2(n)$ by averaging over the 10 experimentally measured values of $\Delta^2(\hat{R},\rho_{\rm t})$ for each repetition. We denote the average as $\Delta^2(n)$ for $n$ repetitions of the SIC POVM. In Fig.~\ref{fig:SIC_rep}, we plot $\Delta^2(n)$ as a function of the number of repetitions of the SIC POVM (blue). Fig.~\ref{fig:SIC_rep} also contains two fits, to Eq.~\eqref{Delta2_rand} (green) and Eq.~\eqref{Delta2_both} (red), created with MATLAB's fit function. For the fit to Eq.~\eqref{Delta2_rand}, we find, \begin{equation} x_{\textrm{rand}} = \textrm{Tr} \left[ C G \right] = 0.0491 \, (0.03772, 0.0605), \end{equation} with $r^2 = 0.5283$ (parentheses contain the 95\% confidence interval). For the fit to Eq.~\eqref{Delta2_both}, we find, \begin{align} \label{error_mag} x_{\textrm{rand}} = \textrm{Tr}[C G] = 0.0326 \, (0.0290, 0.0363), \nonumber \\ x_{\textrm{sys}} = \textrm{Tr}[K G] = 0.0087 \, (0.0073, 0.0101), \end{align} with $r^2 = 0.9816$. From the figure we see that averaging over the control errors does decrease the HS-distance; however, the experimental data more closely match the model that contains both random and systematic errors. In this model, the random term dominates but the systematic term has a significant contribution. Therefore, we can conclude that a majority of the control errors are due to effectively random sources associated with each control field. This means, in principle, that the estimation from any of the POVMs can be improved by repeating the POVM with different control fields and averaging the results. An important contribution to the systematic error term in the analysis above is due to preparation errors, and is therefore not really a systematic error {\em per se}. The fidelity of state preparation was measured to be $\mathcal{F} = 0.995$~\cite{Smith2012}, which corresponds to $\| \rho_{\textrm{t}} - \rho_{\textrm{a}} \|_2^2 = 0.01 - (1- \textrm{Tr}[\rho_{\textrm{a}}^2]) \leq 0.01$. We believe $\rho_{\textrm{a}}$ is highly pure ($\textrm{Tr}[\rho_{\textrm{a}}^2] \approx 1$), but even with a small amount of impurity, $(1- \textrm{Tr}[\rho_{\textrm{a}}^2])$, the preparation error has a non-negligible contribution to the value of $x_{\rm sys}$. Therefore, the estimate found may contain information about the preparation errors that can be used to develop better state preparation procedures. \subsection{Comparison of rank-1 strictly-complete POVMs} In Chapter~\ref{ch:new_IC}, we proved that the estimate produced from the measurement vector of a rank-1 strictly-complete POVM is robust to noise and errors. The robustness bound was given in terms of the HS-distance between the target state and an estimate returned by a convex program in the form given in Corollary~\ref{cor:robustness}. We calculate the HS-distance squared with the experimental results to see if they are consistent with the robustness bound. For simplicity, we only compare the HS-distance for the estimate from the LS program. The experimental results are given in Table~\ref{tbl:strict}. We see from the table that all POVMs in both dimensions produce a low average HS-distance squared. For $d=4$, the 5GMB produce the smallest value of the HS-distance squared, followed by the 5PB, and then the PSI.
These trends match the infidelity results shown in Fig.~\ref{fig:IC}. For $d=16$, the 5MUB produce the smallest value, followed by the 5PB, and then the 5GMB. This is counter to the infidelity results, which showed the 5GMB perform better than the 5PB. However, in both cases the values for the 5GMB and 5PB are very similar, and the standard errors in the mean overlap. Therefore, the experimental results are consistent with the robustness bound, and the infidelity and HS-distance results roughly agree. \begin{table}[t] \centering \renewcommand{\arraystretch}{2} \begin{tabular}{lll} \cline{2-3} \multicolumn{1}{l|}{} & \multicolumn{1}{c|}{$d=4$} & \multicolumn{1}{c|}{$d=16$} \\ \hline \multicolumn{1}{|l|}{PSI} & \multicolumn{1}{c|}{0.0710 (0.0160)} & \multicolumn{1}{c|}{-} \\ \hline \multicolumn{1}{|l|}{5GMB} & \multicolumn{1}{c|}{0.0092 (0.0013)} & \multicolumn{1}{c|}{0.2119 (0.0180)} \\ \hline \multicolumn{1}{|l|}{5PB} & \multicolumn{1}{c|}{0.0124 (0.0017)} & \multicolumn{1}{c|}{0.1991 (0.0256)} \\ \hline \multicolumn{1}{|l|}{5MUB} & \multicolumn{1}{c|}{-} & \multicolumn{1}{c|}{0.0905 (0.0065)} \\ \hline \end{tabular} \caption[Experimental values of HS-distance squared for rank-1 strictly-complete POVMs]{{\bf Experimental values of HS-distance squared for rank-1 strictly-complete POVMs.} The result is an average over all 20 Haar-random pure states, with the standard error of the mean given in parentheses.} \label{tbl:strict} \end{table} As with the full-IC POVMs, we would like a method to compare different POVMs based on their structure. The approach taken in the previous section, based on theoretically predicting $\bar{\Delta}^2$, cannot be extended to rank-1 strictly-complete POVMs since the linear-inversion estimate is not unique. Instead, we compare different rank-1 strictly-complete POVMs by the robustness constant, $\alpha$, used in the derivation of Corollary~\ref{cor:robustness}, \begin{equation} \label{alpha_ratio} \frac{\| \rho_1 - \rho_2 \|}{\|\mathcal{M}[ \rho_1 - \rho_2] \| } \leq \frac{1}{\alpha}, \end{equation} where the constant $\alpha$ contributes to the robustness bound, \begin{equation} \label{robustness_bound} \| \rho_{\rm a} - \hat{\rho} \| \leq C_1 \varepsilon + 2C_2 \upsilon, \end{equation} where $C_1 = \frac{2}{\alpha}$ and $C_2 = \frac{\beta}{\alpha}$. There is no known analytic form for this constant, but in Sec.~\ref{sec:strict_comp_nums}, we presented a numerical method for estimating it. To accomplish this, we generate $10^4$ pairs of density matrices, each consisting of one rank-1 matrix and one full-rank matrix, chosen by the method described in Sec.~\ref{sec:strict_comp_nums}. We then calculate the ratio of the HS-distance between the two density matrices to the $\ell_2$-distance between their probability vectors under a strictly-complete POVM, i.e., the left-hand side of Eq.~\eqref{alpha_ratio}. We then bin the resulting ratios. The results are shown in Fig.~\ref{fig:strict} for both $d = 4$ and~16 and follow the same trend outlined in Sec.~\ref{sec:strict_comp_nums}, where the distributions are centered around a peak. It should be noted that since each pair of states is randomly generated, the numerical test is very unlikely to sample a state from the failure set, which has zero volume. Therefore, this numerical test is independent of the failure set and the results offer a way to compare POVMs separately from this effect. \begin{figure}\label{fig:strict} \end{figure} For $d = 4$, the 5GMB and 5PB produce approximately the same distribution.
The distribution for the PSI is, however, significantly shifted to larger ratios and is wider. This means that the robustness constant, $1/\alpha$, is much larger. The bound in Eq.~\eqref{robustness_bound} is then much larger for the PSI POVM. Therefore, we expect that the HS-distance between the estimated state and the actual state for the PSI is much larger than the same measure for the other POVMs. This matches the experimental data shown in Table~\ref{tbl:strict}. Numerically, we find that the range of the ratio for the 5GMB and 5PB is reasonable, $0.6117 - 2.3769$, such that the robustness constants for both are not too large and the bound in Eq.~\eqref{robustness_bound} is small. For $d =16$, the three distributions are roughly centered on the same value and have a similar range of the ratio, $0.9277 - 2.7747$. However, the width of each distribution is different, with the 5MUB being the narrowest, followed by the 5PB, and then the 5GMB. We expect that a narrower distribution corresponds to smaller values of $1/\alpha$. Therefore, narrow distributions correspond to measurements that produce a smaller bound in Eq.~\eqref{robustness_bound}, and thus better estimation. This matches the experimental values of $\Delta^2$ in Table~\ref{tbl:strict}, which show the 5MUB produce the smallest value, followed by the 5PB, and then the 5GMB. The difference between the 5PB and 5GMB is small, which is reflected in the similar distributions in Fig.~\ref{fig:strict}. The results of the numerical test match both the experimental results for HS-distance and the intuition established in Sec.~\ref{sec:ST_exp} about the tradeoff of efficiency and robustness. The reason that the PSI performs so badly is likely related to the fact that it has far fewer elements. However, since we do not have an analytic expression for the robustness constants, it is not currently possible to formalize this relation. The numerical test also provides further evidence that the failure set is not what limits the PSI or affects the 5GMB, since the results are independent of the failure set. Therefore, we conclude that the failure set is not a practical limitation for strictly-complete POVMs. \comment{ \subsection{Comparison of rank-1 complete POVMs} As with rank-1 strictly-complete POVMs, the robustness of rank-1 complete POVMs was proven in Chapter~\ref{ch:new_IC} in terms of the HS-distance. It is important to keep in mind that this robustness bound does not apply to all preparation errors, as discussed in Sec.~\ref{ssec:comp_noise}. Since rank-1 complete POVMs are not compatible with convex optimization, we must use the rank-$r$-projection algorithm provided in Sec.~\ref{sec:noiseless_rankr_comp}. The HS-distance results are shown in Table~\ref{tbl:comp}. We see that for $d=4$, the estimates have small HS-distance, which is consistent with the robustness bound. However, for $d=16$, the HS-distances are significantly larger. This is likely due to preparation errors. In general, the 4GMB perform better than the 4PB for both dimensions. This matches the results shown in Fig.~\ref{fig:IC} in terms of infidelity.
\begin{table}[h] \centering \renewcommand{\arraystretch}{2} \begin{tabular}{lll} \cline{2-3} \multicolumn{1}{l|}{} & \multicolumn{1}{c|}{$d=4$} & \multicolumn{1}{c|}{$d=16$} \\ \hline \multicolumn{1}{|l|}{4GMB} & \multicolumn{1}{c|}{0.0151 (0.0020)} & \multicolumn{1}{c|}{0.2986 (0.0364)} \\ \hline \multicolumn{1}{|l|}{4PB} & \multicolumn{1}{c|}{0.0279 (0.0044)} & \multicolumn{1}{c|}{0.5598 (0.0976)} \\ \hline \end{tabular} \caption[Experimental value of HS-distance for rank-1 complete POVMs]{{\bf Experimental value of HS-distance for rank-1 complete POVMs.} The result is an average over all 20 random pure states with the standard error of the mean given in parentheses. The estimate is produced by the rank-$r$-projection algorithm, which constrains the estimate to be a rank-1 quantum state, as is required for rank-1 complete POVMs.} \label{tbl:comp} \end{table} We again want to gain intuition into how each POVM performs based on its structure. As with the rank-1 strictly-complete measurements, we cannot compare the POVMs with Eq.~\eqref{Delta2}, since the linear-inversion estimate is not unique. However, we can compare the $\alpha$ and $\beta$ parameters that were used in Sec.~\ref{ssec:comp_noise} to derive the robustness bound for rank-1 complete POVMs. There is also no analytic form for these parameters, as was true with rank-1 strictly-complete POVMs. Therefore, we apply a similar numerical analysis to estimate $\alpha$, as was done for rank-1 strictly-complete POVMs. For rank-1 complete POVMs, each of the $10^4$ pairs of states consists of two pure states. We then calculate the ratio of the HS-distance to the $\ell_2$-distance between the probability vectors with the corresponding rank-1 complete POVM and bin the resulting ratios. The results are plotted in Fig.~\ref{fig:comp}. \begin{figure}\label{fig:comp} \end{figure} We see that the distributions are roughly the same for both the 4GMB and 4PB in each dimension. Therefore, we expect that both POVMs will produce similar HS-distances. Moreover, the distributions have a reasonable range of $0.6695 - 3.0501$ for $d = 4$ and $1.3103 - 3.6055$ for $d =16$. However, this does not match the experimental results in Table~\ref{tbl:comp}, where the 4GMB produce a significantly lower value of the HS-distance squared. Therefore, this method of comparison does not apply as well to rank-1 complete POVMs. One reason the method fails may be that the rank-$r$-projection algorithm used to create the estimate is very biased in that it only produces pure states. } \section{Process tomography} \label{sec:PT_exp} Sosa-Martinez {\em et al.} also experimentally tested the efficient methods for QPT that were outlined in Chapter~\ref{ch:PT}. In the experiment, the target quantum process was a unitary map. However, due to control errors, or other sources outlined in Sec.~\ref{ssec:errors_expt}, the applied process is not exactly unitary. From previous tests, such as the randomized-benchmarking-inspired protocol~\cite{Anderson2015}, we know that the magnitude of these errors is small. Therefore, we have strong evidence that the applied process is near-unitary and the methods for QPT with UIC sets of states should produce a robust estimate. QPT was implemented for both the $d = 4$ subspace and the full $d=16$ Hilbert space. For $d=4$, Sosa-Martinez {\em et al.} generated 10 Haar-random unitary maps as the target processes.
The actual near-unitary processes are probed with the UIC set of $d$ states given in Eq.~\eqref{0+n}, supplemented with a set of $d^2-d$ linearly independent states from Eq.~\eqref{op basis nc}. The output was then measured with the MUB. The measurement vector was analyzed with the three estimation programs, LS, Tr-norm and $\ell_1$-norm, outlined in Sec.~\ref{sec:PT_num}. The estimated processes were then compared to the target processes to determine the process fidelity, given in Eq.~\eqref{PT_U_fidelity}, after each input state is measured. The results are plotted in Fig.~\ref{fig:PT_d4}. \begin{figure}\label{fig:PT_d4} \end{figure} In Fig.~\ref{fig:PT_d4}, we see that the estimation programs follow similar trends to what was discussed in Sec.~\ref{sec:QPT_werrors}. The LS and Tr-norm produce high fidelity estimates after $d = 4$ input states. This verifies that the applied process is in fact near-unitary. The $\ell_1$-norm program produces a high fidelity estimate for all input states since it has more prior information about the applied process. As outlined in Sec.~\ref{sec:QPT_werrors}, the fact that the $\ell_1$-norm estimate has near constant fidelity for all input states, and that the Tr-norm program produces estimates with slightly higher fidelity, indicates that the applied process has incoherent errors. This is consistent with the types of errors seen in QST and discussed in Sec.~\ref{ssec:HS_full-IC}. Therefore, we conclude that (1) the UIC set of input states accomplishes efficient QPT in an experimental setting and (2) the dominant error in each unitary map implemented in the experiment is likely incoherent, due to averaging the ensemble over random local Hamiltonians, e.g., random bias magnetic fields. \begin{figure}\label{fig:PT_d16} \end{figure} Sosa-Martinez {\em et al.} also implemented QPT for the full $d=16$ Hilbert space. For $d=16$, it is not practically feasible to evolve the $d^2 = 256$ input states that are required for standard QPT. For one, it would take a very long time to perform an experiment with 256 input states, and effects such as drift in the experimental settings may contaminate the results. Also, the classical computation required to produce an estimate for such a large system is not practical with current convex optimization algorithms. Therefore, the efficient set of UIC states is mandatory for QPT of such a large system. In the experiment, the 16 states from Eq.~\eqref{0+n}, along with four extra states for comparison, were used to probe a single, near-unitary process. After measuring each state, we calculate an estimate with only the LS program. Instead of using the CVX package for MATLAB, as was done for $d = 4$, we applied a gradient-projection algorithm. This algorithm is different from the rank-$r$-projection algorithm discussed above. The gradient-projection algorithm used for $d=16$ QPT only projects to the set of CPTP quantum processes, i.e., PSD matrices satisfying the TP constraint. Therefore, it does not have the same type of bias issues associated with the rank-$r$-projection algorithm for rank-1 complete POVMs and is an implementation of the standard LS program. We then compared the estimate from this algorithm to the target unitary with the process fidelity given in Eq.~\eqref{PT_U_fidelity}. The results are plotted in Fig.~\ref{fig:PT_d16}. For $d = 16$, we are still able to reconstruct a high-fidelity estimate of the quantum process with the UIC set of states given in Eq.~\eqref{0+n}.
This is the largest Hilbert space in which QPT has been implemented, and it is only made possible by using the UIC set. Given the large amount of data, it is difficult to implement the Tr-norm and $\ell_1$-norm estimation programs in order to determine the type of errors present. However, the estimate provided by the LS program is still useful for diagnosing errors in the map by other methods, such as the ones discussed in Refs.~\cite{Korotkov2013,Kofman2009}. \section{Summary and Conclusions} The experiments performed by Sosa-Martinez {\em et al.} demonstrated many different methods for measurement and estimation in QT. We found that full-IC POVMs produce the lowest infidelity estimates of the quantum state with ML. However, rank-1 strictly-complete POVMs also produce low infidelity estimates, even for larger systems. This demonstrates the tradeoff between efficiency and robustness in QST. While the full-IC POVMs produce the lowest infidelity estimates, they require many POVM elements. Conversely, we saw that while rank-1 strictly-complete POVMs theoretically offer both efficiency and robustness, some constructions, such as the PSI, are not accurate enough for practical use. Therefore, it is important to choose POVMs for real implementations of QST that offer sufficient robustness and are still efficient. The experiment also demonstrates that the estimators that are required for rank-1 complete POVMs are biased. In order to reliably produce an estimate for these POVMs, we must use the rank-$r$-projection algorithm, which projects to pure states. This algorithm produces estimates that have much lower infidelity with the target state than expected. Therefore, these estimates cannot be trusted for QST. This is another drawback of rank-1 complete POVMs. We also discussed methods of comparing the structure of POVMs for QST based on the HS-distance. For full-IC POVMs, we presented a mathematical framework that can predict how each POVM will perform when there exists knowledge about the noise and errors that affect the experiment. Currently, in the cesium spin experiment, we do not know the exact form of the noise and errors, and therefore cannot apply this result. We were able to determine the magnitude of random control errors and systematic errors by studying the experimental results of the repeated SIC POVM. This test showed that random control errors dominate, but systematic errors do have a non-negligible contribution. For the rank-1 strictly-complete and complete POVMs, we applied a numerical study in order to estimate the robustness parameters and understand how each POVM performs. We saw that this method matched the experimental results for the rank-1 strictly-complete POVMs. The QPT results by Sosa-Martinez {\em et al.} show that UIC sets of states produce efficient and robust estimates. Moreover, different estimation strategies for QPT were used to determine that incoherent errors likely dominate the processes. For the $d=16$ system, the UIC set serves as an example of the power of efficient QT techniques. QPT in this system would not be possible with standard techniques; however, with the UIC set we are able to produce high fidelity estimates of a near-unitary process. \chapter{Conclusions and outlook} \label{ch:conclusions} In this dissertation, we introduced new methods for quantum tomography (QT) that are more efficient to implement and robust to noise and errors.
We showed that these methods are made possible by applying prior information about the quantum system that is consistent with the goals of most quantum information processing experiments. Specifically, for quantum state tomography (QST) the prior information is that the quantum state is close to pure, and for quantum process tomography (QPT) it is that the process is close to unitary. Pure states and unitary processes are required for most quantum information processing protocols, and therefore most experiments work to engineer states and processes near this regime. We showed that the new methods for QST and QPT produce robust estimates even if the states are not exactly pure and the processes are not exactly unitary. Therefore, these results offer a way to accomplish QT in larger-dimensional Hilbert spaces than was previously possible with standard techniques. We began the dissertation by outlining the mathematical framework for standard QT in Chapter~\ref{ch:background}. Standard QT is defined by the notion of full informational completeness (full-IC). We reviewed how this notion applies to QST, QPT, and QDT in the ideal case where we have direct access to the probabilities. We showed that, in this case, QT is a linear algebra problem where the probabilities are linearly related to the free parameters that describe an arbitrary state, process, or POVM. However, in any real application of QT, there necessarily exist noise and errors, and therefore we do not have direct access to the probabilities. To study this case, we formalized the effect of noise and errors in QT. We also presented previously proposed numerical algorithms for estimating the quantum states, processes, and detectors in this situation. We showed that the standard methods are robust to such noise and errors. However, standard QT requires resources that scale polynomially with the dimension of the Hilbert space, and is therefore limited to small systems. In order to accomplish QT more efficiently, we devised methods to incorporate prior information about the quantum system into the measurements and estimation. We began by focusing on QST in Chapter~\ref{ch:new_IC}. We showed that there exist POVMs that fully characterize pure states with fewer elements than needed for standard QST in the ideal setting where we have direct access to the probabilities. We defined two types of these POVMs: rank-1 complete and rank-1 strictly-complete. Rank-1 complete POVMs uniquely identify pure states from within the set of all pure states, while rank-1 strictly-complete POVMs uniquely identify pure states from within the set of all quantum states. The notion of rank-1 strictly-complete POVMs is only made possible by the positivity constraint on quantum states, i.e., all density matrices are constrained to be positive semidefinite (PSD). The difference between rank-1 complete and rank-1 strictly-complete POVMs has significant consequences for QST in the presence of noise and errors. In this case, numerical optimization is required to produce an estimate of the measured quantum state. The two different types of POVMs demand different strategies for numerical optimization. Rank-1 complete POVMs necessitate algorithms that are restricted to the set of all pure states. This is a nonconvex constraint and so is difficult to incorporate in numerical optimization. Rank-1 strictly-complete POVMs require optimization that is restricted to the set of quantum states, which is a convex set.
Therefore, rank-1 strictly-complete POVMs are compatible with the well-established methods for convex optimization while rank-1 complete POVMs are not. Moreover, we proved that for rank-$r$ strictly-complete POVMs, the estimates returned by certain convex programs are robust to all sources of noise and errors. This includes preparation errors, which necessarily exist in any experiment and cause the actual state to be not exactly pure. This property makes rank-1 strictly-complete POVMs advantageous for pure-state QST. We went on to discuss different methods to produce both rank-$r$ complete and strictly-complete POVMs in Chapter~\ref{ch:constructions}. We showed that while rank-1 strictly-complete POVMs are inherently related to positivity, which is a difficult constraint to treat analytically, we can still construct POVMs that are provably rank-1, and more generally, rank-$r$ strictly-complete. We provided two methods for constructing strictly-complete POVMs. The first applies to a certain type of POVM, which we called element-probing (EP) POVMs. EP-POVMs allow for the direct reconstruction of density matrix elements. For these types of POVMs, we introduced tools based on the Schur complement and the Haynsworth matrix inertia to prove an EP-POVM is rank-$r$ complete or strictly-complete. These tools can also be used to construct new rank-$r$ strictly-complete POVMs, with two examples given in Appendix~\ref{app:constructions}. We also demonstrated numerically that a set of random orthonormal basis measurements forms a rank-$r$ strictly-complete POVM. We applied these two methods to a simulation of QST to show that the quantum state could be efficiently and robustly estimated in the presence of sources of noise and errors. Therefore, we conclude that strictly-complete POVMs are the best choice for bounded-rank QST, due to their efficiency, robustness, and compatibility with convex optimization. At the end of both Chapter~\ref{ch:new_IC} and Chapter~\ref{ch:constructions}, we identified how the ideas of rank-$r$ strictly-complete POVMs can be generalized to QDT and QPT. This relation is made possible by the fact that a positivity constraint exists for both of the matrices that define QDT and QPT. For QDT, the POVM elements that we diagnose are constrained to be PSD matrices. For QPT, the condition that the process is completely positive (CP) is equivalent to the process matrix being PSD. Since the definition of rank-$r$ strict-completeness is made with respect to PSD matrices in general, and not just quantum states, the same notion applies for both QDT and QPT. For QDT, the generalization is straightforward since the unknown POVM element is probed with a set of quantum states. We can translate many constructions for rank-$r$ strictly-complete POVMs in QST to a strictly-complete set of probing states for QDT, as was shown in Sec.~\ref{sec:constructions_QDT}. It is not as straightforward to generalize the notion of strict-completeness to QPT, but in Chapter~\ref{ch:PT}, we presented such a generalization. Most quantum information protocols require unitary processes, which is prior information that can be applied to QPT. Unitary processes are represented by rank-1 process matrices, so unitary QPT is analogous to pure-state QST. We defined sets of states that uniquely identify a random unitary process within the set of all unitary maps, called unitarily informationally complete (UIC) sets.
We provided a few example constructions and also gave numerical evidence that these UIC sets also uniquely identify any random unitary process from within the set of all CPTP maps. In any real application of QPT, the process being measured is not exactly unitary. Therefore, we studied the problem of near-unitary QPT and considered two different types of error models that may disrupt the target unitary process. We showed that different estimators for QPT respond differently to these two types of error models, and therefore could be used to diagnose which types of errors are present. QT is fundamentally an experimental protocol to characterize a quantum system, so any new method for QT should be tested experimentally. In Chapter~\ref{ch:experiment}, we discussed experimental tests on an ensemble of cesium atoms performed by Hector Sosa-Martinez and Nathan Lysne in the lab of Prof. Poul Jessen at the University of Arizona. Different rank-1 complete and strictly-complete POVMs were implemented, and various numerical estimation programs were compared. It was found that the POVMs with the most elements produced the lowest infidelity estimates for QT. However, some POVMs with fewer elements produce estimates with almost as low infidelity. The results illustrate the tradeoff between efficiency and robustness in QT. While POVMs with many elements are the most robust, and thus produce the best estimates, they require much more experimental effort. Rank-1 strictly-complete POVMs produce estimates with almost as low infidelity but are much more efficient. For QPT, the experiment demonstrates that the UIC set does produce a high fidelity estimate of the unknown unitary process that also indicates the type of noise present. The set also allowed for the implementation of QPT for a $d = 16$ Hilbert space, which is infeasible with standard techniques. The experiment opens three avenues for future theoretical research in QT. First, while we have some theoretical and numerical methods to compare POVMs, the experimental results do not exactly match these predictions, as discussed in Sec.~\ref{sec:HS_analysis}. This may be due to sources of noise and errors that are unique to the experiment. Current theoretical and numerical methods for comparing POVMs do not take such differences into account. For example, in the original work by Scott~\cite{Scott2006}, it was proven that so-called ``tight'' POVMs, such as the SIC and MUB, are optimal for QST. However, this proof holds under the assumption that the experiment is only limited by finite sampling. This is not the case for the cesium spin experiment, nor for most real applications of QT. It would be useful to derive methods to compare POVMs under arbitrary types of noise or errors. Second, the experiment also confirmed that each estimator for QST and QPT performs differently, which has consequences for how we compare different methods. For example, the Tr-norm and the rank-$r$-projection algorithms produce infidelities much lower than LS and ML. However, the Tr-norm and rank-$r$-projection algorithms are biased towards pure states, and therefore may overestimate the performance of QT. This effect must be better understood in order not to make false claims of superior methods that are only due to biased estimation. In general, it is important to study how all estimators perform in different error regimes to make sure that the estimation is reliable. If we are given prior information about the type of errors present, we may be able to choose the best-suited estimator.
Finally, the estimates produced in both QST and QPT from the experimental data exemplify an outstanding question in QT research: what do we do with the estimates? In Chapter~\ref{ch:experiment}, we compared the infidelity to the target states and processes, but the density and process matrices in theory contain all information about the quantum states and processes. However, it is not straightforward to extract this information. Previous work has related density and process matrices to some useful quantities. For example, it was shown that entanglement measures can only be calculated with full tomographic reconstructions~\cite{Carmeli2016a,Lu2016}. There has also been a proposal for QPT that relates certain elements of the process matrix to different sources of errors~\cite{Korotkov2013}. However, we lack a well-defined framework for understanding both the density matrix and the process matrix. Future work, which may flesh out important relations, would allow for the diagnosis of noise or error sources and make QT the useful experimental tool that it now only promises to be. The outlook for QT as a whole is mixed. Originally, QT was only feasible for small systems (e.g., a couple of qubits). However, with the unifying techniques proposed in this dissertation, as well as related work in compressed sensing~\cite{Gross2010, Flammia2012}, QT is now possible for larger systems (e.g., 3-10 qubits). These systems are common in today's state-of-the-art experiments, so QT is currently a useful tool for experimentalists. However, with new technological advances, it is expected that soon still larger systems (e.g., $> 10$ qubits) will be more common. For these systems, even strictly-complete methods for QT will not be feasible. This is due to the fact that most methods for QT, even the ones proposed here, scale exponentially with the number of qubits. It may be that other types of prior information, such as matrix product states~\cite{Cramer2010}, can be leveraged to make QT feasible in larger systems. In this case the notions of completeness and strict-completeness may have useful generalizations that allow for efficient and robust methods. However, it seems that QT's likely future is as one tool in the toolbox for diagnosing quantum systems. In order to build quantum information processors that demonstrate advantages over classical techniques, we will require many such tools, and the fact that QT is now possible with larger systems makes it a tool of greater value. \appendix \chapter{Other rank-$r$ strictly-complete POVM constructions} \label{app:constructions} In this appendix we present four rank-$r$ strictly-complete POVMs. The first three (GMB, 5PB, and PSI) were implemented in the experiment discussed in Chapter~\ref{ch:experiment}. The final POVM is a generalization of the POVM given in Eq.~\eqref{psi-complete} to be rank-$r$ strictly-complete. \section{Gell-Mann bases (4GMB, 5GMB, and GMB)}\label{app:GMB} Goyeneche {\em et al.}~\cite{Goyeneche2015} proposed two sets of bases for pure-state QST, which we refer to as the 4GMB (consisting of four bases and given in Eq.~\eqref{4gmb}) and the 5GMB (consisting of the 4GMB plus the computational basis). They proved that both of these constructions are rank-1 complete by the decomposition method discussed in Sec.~\ref{sec:decomp_method}. In Sec.~\ref{sec:EP}, we showed that the 4GMB form an EP-POVM, and the same can be shown for the 5GMB.
In Ref.~\cite{Goyeneche2015}, the 5GMB were proposed in order to avoid the failure set by adaptively constructing four of the bases based on the measured outcomes of the first basis. We do not consider such adaptive techniques here. Instead, we treat the 5GMB as fixed, and use the EP-POVM framework to prove the 5GMB are in fact rank-1 strictly-complete. We then show that this type of basis measurement can be extended to bounded-rank QST, and provide an algorithm to generate $4r+1$ bases that are provably rank-$r$ strictly-complete. When $r \geq d/2$, the algorithm constructs the full-IC POVM referred to as the GMB, which was applied in the experiment and discussed in Chapter~\ref{ch:experiment}. All of the constructions discussed in this section (4GMB, 5GMB, and GMB) are EP-POVMs that allow for the reconstruction of density matrix elements that make up the diagonals. For convenience, we label the upper-right diagonals $0$ to $d-1$, where the $0$th diagonal is the principal diagonal and the $(d-1)$st diagonal is the upper-right element. Each diagonal, except the $0$th, has a corresponding Hermitian conjugate diagonal (its corresponding lower-left diagonal). Thus, if we measure the elements on a diagonal, we also measure the elements of its Hermitian conjugate. The computational basis corresponds to measuring the $0$th diagonal. We begin by considering the 5GMB construction. In Sec.~\ref{sec:EP}, we showed that the 4GMB allow for reconstruction of the elements on the 1st diagonal. The 5GMB additionally include the computational basis measurement, which allows us to reconstruct all elements on the 0th diagonal. To show that the 5GMB is rank-1 complete, we follow the general strategy outlined in Sec.~\ref{ssec:EP_comp}. First, choose the leading $3 \times 3$ principal submatrix, \begin{equation} M_0 = \begin{pmatrix} \rho_{0,0} & \rho_{0,1} & \bm{\rho_{0,2}} \\ \rho_{1,0} & \rho_{1,1} & \rho_{1,2} \\ \bm{\rho_{2,0}} & \rho_{2,1} &\rho_{2,2} \\ \end{pmatrix}, \end{equation} where, hereafter, the elements in bold font are the unmeasured elements. By applying a unitary transformation, which switches the first two rows and columns, we can move $M_0$ into the block matrix form, \begin{equation} M_0 \rightarrow UM_0U^\dagger= \begin{pmatrix} \rho_{1,1} &\rho_{1,0} & \rho_{1,2} \\ \rho_{0,1} & \rho_{0,0} & \bm{\rho_{0,2}} \\ \rho_{2,1} & \bm{\rho_{2,0}} & \rho_{2,2} \\ \end{pmatrix}. \end{equation} This matches the form in Eq.~\eqref{block_mat}, with $A = \rho_{1,1}$, $B^{\dagger} = (\rho_{1,0}, \rho_{1,2})$, and $C$ the bottom $2 \times 2$ submatrix. From Eq.~\eqref{Schur_rank}, we can solve for $\rho_{0,2}$ and $\rho_{2,0}$, since $C = \rho_{1,1}^{-1} B B^{\dagger}$. The set of states with $\rho_{1,1}= 0$ corresponds to the failure set. Note that the diagonal elements of $C$, $\rho_{0,0}$ and $\rho_{2,2}$, are also measured. We repeat this procedure for the set of principal $3 \times {3}$ submatrices, $M_{i} \in \bm{M}$ for $i=0,\ldots,d-3$, \begin{equation} M_{i} = \begin{pmatrix} \rho_{i,i} & \rho_{i,i+1} & \bm{\rho_{i,i+2}} \\ \rho_{i+1,i} & \rho_{i+1,i+1} & \rho_{i+1,i+2} \\ \bm{\rho_{i+2,i}} & \rho_{i+2,i+1} &\rho_{i+2,i+2} \\ \end{pmatrix}. \end{equation} For each $M_{i}$, the upper-right and lower-left corner elements $\rho_{i,i+2}$ and $\rho_{i+2,i}$ are unmeasured. Using the same procedure as above, we reconstruct these elements for all values of $i$ and thereby reconstruct the 2nd diagonal.
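A minimal numerical sketch of this step is given below. It assumes noiseless access to the 0th and 1st diagonals of a rank-1 density matrix (the idealized 5GMB data) and recovers the 2nd diagonal from the Schur-complement rank condition of each $3\times 3$ principal submatrix, which reduces to $\rho_{i,i+2} = \rho_{i,i+1}\rho_{i+1,i+2}/\rho_{i+1,i+1}$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
d = 6

# Rank-1 "true" state; only its 0th and 1st diagonals are assumed to be measured.
psi = rng.normal(size=d) + 1j * rng.normal(size=d)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())
diag0 = np.diag(rho)          # rho_{i,i}
diag1 = np.diag(rho, k=1)     # rho_{i,i+1}

# Reconstruct the 2nd diagonal from each 3x3 principal submatrix M_i via the
# rank-1 condition C = B B^dagger / rho_{i+1,i+1} (fails only when rho_{i+1,i+1} = 0).
diag2 = np.array([diag1[i] * diag1[i + 1] / diag0[i + 1] for i in range(d - 2)])

print("max error on 2nd diagonal:", np.max(np.abs(diag2 - np.diag(rho, k=2))))
\end{verbatim}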
We repeat the entire procedure again choosing a similar set of $4 \times 4$ principal submatrices and reconstruct the 3rd diagonals, and so on for the rest of the diagonals until all the unknown elements of the density matrix are reconstructed. Since we have reconstructed all the diagonals of the density matrix using only the assumption that $\textrm{rank}(\rho) = 1$, the 5GMB is a rank-$1$ complete POVM. The first basis measures the 0th diagonal, so by Proposition~\ref{prop1} the 5GMB is also rank-1 strictly-complete. The failure set corresponding to $\bm{M}$ consists of the states with $\rho_{i,i} = 0$ for some $i = 1,\ldots,d-2$. Additionally, the 5GMB provide another set of submatrices $\bm{M}'$ with which to reconstruct $\rho$. This set of submatrices results from also measuring the elements $\rho_{d-1,0}$ and $\rho_{0,d-1}$, which were not used in the construction of $\bm{M}$. The failure set for $\bm{M}'$ has the same form as the failure set of $\bm{M}$, but since $\bm{M'} \neq \bm{M}$, we gain additional robustness. When we consider both sets of submatrices, the total failure set is $\rho_{i,i} = 0$ and $\rho_{j,j} = 0$ for $i = 1,\ldots,d-2$ and $i \neq j \pm 1$. This is exactly the set found by Goyeneche~{\em et al.}~\cite{Goyeneche2015}. We generalize these ideas to measure a rank-$r$ state by designing $4r +1$ orthonormal bases that correspond to a rank-$r$ strictly-complete POVM. The algorithm for constructing these bases, for dimensions that are powers of two, is given in Algorithm~\ref{alg:rankr_GMB}. In principle, sets of orthonormal bases with similar properties can be designed for any dimension. Technically, the algorithm produces distinct bases only for $r \leq d/2$, but since $d+1$ mutually unbiased bases are full-IC, for $r \geq d/4$ one may prefer to measure the latter. The corresponding measured elements are the first $r$ diagonals of the density matrix. Given the first $r$ diagonals of the density matrix, we can reconstruct a state $\rho \in {\cal S}_r$ with a procedure similar to the one outlined for the five bases. First, choose the leading $(r+2) \times (r+2)$ principal submatrix, $M_0$. The unmeasured elements in this submatrix are $\rho_{0,r+1}$ and $\rho_{r+1,0}$. By applying a unitary transformation, we can bring $M_0$ into canonical form, and by using the rank condition from Eq.~\eqref{Schur_rank} we can solve for the unmeasured elements. We can repeat the procedure with the set of $(r+2) \times (r+2)$ principal submatrices $M_i \in \bm{M}$ for $i = 0,\ldots,d-r-1$ and \begin{equation} M_{i} = \begin{pmatrix} \rho_{i,i} & \cdots & \bm{\rho_{i,i+r+1}} \\ \vdots & \ddots & \vdots \\ \bm{\rho_{i+r+1,i}} & \cdots & \rho_{i+r+1,i+r+1} \end{pmatrix}. \end{equation} From $M_i$ we can reconstruct the elements $\rho_{i,i+r+1}$, which form the $(r+1)$st diagonal. We then repeat this procedure choosing the set of $(r+3) \times (r+3)$ principal submatrices to reconstruct the $(r+2)$nd diagonal, and so on until all diagonals have been reconstructed. This shows the POVMs are rank-$r$ complete. By Proposition~\ref{prop1}, since we also measure the computational basis, the POVMs are also rank-$r$ strictly-complete. The failure set corresponds to the set of states with a singular $r \times r$ principal submatrix \begin{equation} A_i = \begin{pmatrix} \rho_{i+1,i+1} & \cdots & \rho_{i+1,i+r} \\ \vdots & \ddots & \vdots \\ \rho_{i+r,i+1} & \cdots & \rho_{i+r,i+r} \end{pmatrix}, \end{equation} for $i = 1,\ldots,d-r-1$.
This procedure is also robust to this failure set since, as in the case of $r = 1$, there is an additional construction $\bm{M}'$. The total failure set is then when $A_i$ is singular for $i = 0,\ldots,d-r-1$ and $A_j$ is singular for $j \neq i \pm 1$. \noindent\makebox[\linewidth]{\rule{\linewidth}{1pt}} \noindent {\bf Algorithm A.1} Construction of the $4r+1$ bases in the GMB \\ \noindent\makebox[\linewidth]{\rule{\linewidth}{0.4pt}} \begin{enumerate}\label{alg:rankr_GMB} \item{Construction of the first basis:} \item[]{The choice of the first basis is arbitrary; we denote it by \begin{equation} \mathbbm{B}_0 = \{ |0 \rangle, | 1 \rangle, \ldots , | d-1 \rangle \}. \end{equation} This basis defines the representation of the density matrix. Measuring this basis corresponds to the measurement of all the elements on the 0th diagonal of $\rho$.} \item{Construction of the other $4r$ orthonormal bases:} \item[]{{\bf for} $k \in [1, r]$, {\bf do}} \begin{itemize} \item[]{Label the elements in the $k$th diagonal of the density matrix by $\rho_{m,n}$, where $m = 0,\ldots,d - 1 - k$ and $n = m + k$.} \item[]{For each element on the $k$th and $({d-k})$th diagonals, $\rho_{m,n}$, associate two two-dimensional orthonormal bases, \begin{align} \label{2dim_bases} \mathbbm{b}^{(m,n)}_{x}=&\Bigl\{| x_{m,n}^{\pm} \rangle = \frac{1}{\sqrt{2}} \left( | m \rangle \pm | n\rangle \right)\Bigr\}, \nonumber \\ \mathbbm{b}^{(m,n)}_{y}=&\Bigl\{| y_{m,n}^{\pm} \rangle = \frac{1}{\sqrt{2}} \left( | m \rangle \pm {\rm i} | n \rangle \right)\Bigr\}, \end{align} for the allowed values of $m$ and $n$.} \item[]{Arrange the matrix elements of the $k$th diagonal and the $({d-k})$th diagonal into a vector with $d$ elements \begin{equation} \vec{v}(k) = ( \underbrace{\rho_{0,k}, \ldots, \rho_{d-1-k,d-1}}_\text{$k$th diagonal elements},\underbrace{\rho_{0,d-k},\ldots, \rho_{k-1,d-1}}_\text{$({d-k})$th diagonal elements})\equiv(v_1(k),\ldots,v_d(k)). \end{equation}} \item[]{Find the largest integer $Z$ such that $\frac{k}{2^{Z}}$ is an integer.} \item[]{Group the elements of $\vec{v}(k)$ into two vectors, each with $d/2$ elements, by selecting $\ell = 2^Z$ elements out of $\vec{v}(k)$ in an alternating fashion, \begin{align} \vec{v}^{(1)}(k) &= ( v_1, \ldots, v_\ell, v_{2\ell+1}, \ldots, v_{3\ell}, \ldots, v_{d-2\ell+1}, \ldots, v_{d-\ell} ) \nonumber \\ &=(\rho_{0,k},\ldots,\rho_{\ell-1,k+\ell-1},\ldots),\nonumber \\ \vec{v}^{(2)}(k) &=( v_{\ell+1}, \ldots, v_{2\ell}, v_{3\ell+1}, \ldots, v_{4\ell}, \ldots, v_{d-\ell+1}, \ldots, v_{d}) \nonumber \\ &=(\rho_{\ell,k+\ell},\ldots, \rho_{2\ell-1,k+2\ell-1},\ldots). \nonumber \end{align}} \item[]{{\bf for} $j=1,2$ {\bf do}} \begin{itemize} \item[]{Each element of $\vec{v}^{(j)}(k)$ has two corresponding bases $\mathbbm{b}^{(m,n)}_{x}$ and $\mathbbm{b}^{(m,n)}_{y}$ from Eq.~\eqref{2dim_bases}.} \item[]{Take the union of all the two-dimensional orthonormal $x$-type bases to form one basis \begin{equation} \mathbbm{B}^{(k;j)}_{x}=\bigcup_{\rho_{m,n}\in\vec{v}^{(j)}(k)} \mathbbm{b}^{(m,n)}_{x}. \end{equation} Take the union of all the two-dimensional orthonormal $y$-type bases to form one basis \begin{equation} \mathbbm{B}^{(k;j)}_{y}=\bigcup_{\rho_{m,n}\in\vec{v}^{(j)}(k)} \mathbbm{b}^{(m,n)}_{y}. \end{equation} The two bases $\mathbbm{B}^{(k;j)}_{x}$ and $\mathbbm{B}^{(k;j)}_{y}$ are orthonormal bases for the $d$-dimensional Hilbert space.
} \end{itemize} \item[]{{\bf end for}} \item[]{By measuring $\mathbbm{B}^{(k;j)}_{x}$ and $\mathbbm{B}^{(k;j)}_{y}$ for $j=1,2$ (four bases in total), we measure all the elements on the $k$th and $(d-k)$th off-diagonals of the density matrix.} \end{itemize} \item[]{{\bf end for}} \end{enumerate} \noindent\makebox[\linewidth]{\rule{\linewidth}{1pt}} \section{5PB: Construction by Carmeli} \label{app:5PB} The five polynomial bases (5PB) were proposed by Carmeli {\em et al.}~\cite{Carmeli2016} and proven to be rank-1 strictly-complete therein. The 5PB are an extension of the four polynomial bases (4PB) proposed by Carmeli {\em et al.}~\cite{Carmeli2015}, which were proven to be rank-1 complete. Both constructions are based on a set of orthogonal polynomials, hence the name. We provide a summary of the construction here. Full details are given in Ref.~\cite{Carmeli2015}, and the proofs of rank-1 completeness and strict-completeness can be found in Refs.~\cite{Carmeli2015} and~\cite{Carmeli2016}, respectively. The index-0 basis is the computational basis, \begin{equation} \mathbbm{B}_0 = \{ \ket{0}, \dots, \ket{d-1} \}, \end{equation} the same as for the GMB discussed in Sec.~\ref{app:GMB}. We can generate the remaining four bases from a set of orthogonal polynomials labelled $p_n(x)$, with degree $n$. An $n$-degree polynomial has $n$ roots, labelled by the set $\{ x_j \}$. The amplitudes of the vectors that make up the first basis are determined by the roots of a $d$-degree polynomial. We evaluate the set of polynomials $\{ p_0(x),\dots, p_{d-1}(x) \}$ at the roots of the $d$-degree polynomial such that, \begin{equation} \ket{\tilde{\phi}_j^{(1)}} = \left[ p_0(x_j), p_1(x_j),\dots, p_{d-1}(x_j) \right]^{\top}. \end{equation} By the definition of orthogonal polynomials, the vectors $\ket{\tilde{\phi}_j^{(1)}}$ are mutually orthogonal, i.e., $\langle \tilde{\phi}_j^{(1)} | \tilde{\phi}_k^{(1)} \rangle = \delta_{j,k} \| \ket{\tilde{\phi}_j^{(1)} }\|_2^2$. We normalize each vector, $ \ket{\phi_j^{(1)}} = \ket{\tilde{\phi}_j^{(1)}}/\| \ket{\tilde{\phi}_j^{(1)} }\|_2$, to get the states that make up the first basis, \begin{equation} \mathbbm{B}_1 = \{ \ket{\phi_0^{(1)}}, \dots, \ket{\phi_{d-1}^{(1)}} \}. \end{equation} The amplitudes of the vectors that make up the second basis are determined by the roots of a $(d-1)$-degree polynomial, which we denote by the set $\{ y_j \}$. We evaluate the set $\{ p_0(x),\dots, p_{d-1}(x) \}$ at the roots of the $(d-1)$-degree polynomial such that, \begin{equation} \label{4PB_2} \ket{\phi_j^{(2)}} = \left[ p_0(y_j), p_1(y_j),\dots, p_{d-1}(y_j) \right]^{\top}, \end{equation} for $j < d-1$; these vectors are also orthogonal by the definition of orthogonal polynomials. This expression only applies for $j < d-1$ since a $(d-1)$-degree polynomial only has $d-1$ roots. Therefore, we supplement the $d-1$ vectors in Eq.~\eqref{4PB_2} with the final vector $\ket{\phi_{d-1}^{(2)}} = [0,\dots,0,1]^{\top}$. Then, after normalizing each vector, the second basis is defined as \begin{equation} \mathbbm{B}_2 = \{ \ket{\phi_0^{(2)}}, \dots, \ket{\phi_{d-1}^{(2)}} \}.
\end{equation} The third and fourth bases are found by shifting the amplitudes in the first and second bases by a phase $e^{{\rm i} \alpha k}$, where $\alpha$ is not a rational multiple of $\pi$, such that, \begin{align} \ket{\tilde{\phi}_j^{(3)} }&= \left[ p_0(x_j)e^{{\rm i} \alpha}, p_1(x_j),\dots, p_{d-1}(x_j) e^{{\rm i} \alpha (d-1)} \right]^{\top}, \nonumber \\ \ket{\tilde{\phi}_j^{(4)} }&= \left[ p_0(y_j)e^{{\rm i} \alpha}, p_1(y_j),\dots, p_{d-1}(y_j) e^{{\rm i} \alpha (d-1)} \right]^{\top}, \end{align} and we again renormalize each vector and supplement the final basis with the state $\ket{\phi_{d-1}^{(4)}} = [0,\dots,0,1]^{\top}$. This gives the final bases, \begin{align} \mathbbm{B}_3 &= \{ \ket{\phi_0^{(3)}}, \dots, \ket{\phi_{d-1}^{(3)}} \}, \nonumber \\ \mathbbm{B}_4 &= \{ \ket{\phi_0^{(4)}}, \dots, \ket{\phi_{d-1}^{(4)}} \}. \end{align} Carmeli {\em et al.}~\cite{Carmeli2015} showed the four bases, $\{\mathbbm{B}_1, \mathbbm{B}_2, \mathbbm{B}_3, \mathbbm{B}_4\}$ (4PB), are rank-1 complete and Carmeli {\em et al.}~\cite{Carmeli2016} showed that the five bases, $\{\mathbbm{B}_0, \mathbbm{B}_1, \mathbbm{B}_2, \mathbbm{B}_3, \mathbbm{B}_4\}$ (5PB), are rank-1 strictly-complete, each without a failure set. \section{PSI: Construction by Flammia} The PSI construction was proposed by Flammia {\em et al.}~\cite{Flammia2005} and proven to be a rank-1 complete POVM by the decomposition method. The POVM consists of $3d-2$ rank-1 operators of the following form, \begin{align} E_0 &= a | 0 \rangle \langle 0|, \nonumber \\ E_{j, 1} &= \frac{b}{2} \left( P_{j-1,j} + \frac{2 \sqrt{2}}{3} X_{j-1,j}- \frac{1}{3}Z_{j-1,j} \right), \nonumber \\ E_{j, 2} &= \frac{b}{2} \left( P_{j-1,j} - \frac{\sqrt{2}}{3}X_{j-1,j}+ \sqrt{\frac{2}{3}}Y_{j-1,j} - \frac{1}{3}Z_{j-1,j}\right), \nonumber \\ E_{j, 3} &= \frac{b}{2} \left( P_{j-1,j} - \frac{\sqrt{2}}{3}X_{j-1,j} - \sqrt{\frac{2}{3}}Y_{j-1,j} -\frac{1}{3} Z_{j-1,j}\right), \end{align} for $j = 1,\dots,d-1$, where $a$ and $b$ are chosen such that $\sum_{\mu} E_{\mu} = \mathds{1}$. The operator $X_{j-1,j} = | j \rangle \langle j-1| + |j-1 \rangle \langle j|$ is the Pauli $\sigma_x$ operator on the subspace spanned by $\ket{j-1}$ and $\ket{j}$; similar definitions apply for $Y_{j-1,j}$ and $Z_{j-1,j}$. The operator $P_{j-1,j}$ is the projector onto that subspace. We can show that this POVM is rank-1 strictly-complete by considering the density matrix elements that are determined by the probabilities of the POVM elements. For $j = 1$, the four elements $E_0,\,E_{1,1}, \, E_{1,2},$ and $E_{1,3}$ are parallel to the elements that make up the 2-dimensional SIC POVM, a.k.a. the tetrahedron. Since the SIC POVM is full-IC, these four elements define probabilities that uniquely reconstruct the block of the density matrix supported on the span of $\ket{0}$ and $\ket{1}$. Therefore, the density matrix elements $\rho_{0,0}$, $\rho_{0,1}$, $\rho_{1,0}$, and $\rho_{1,1}$ are measured. We can combine the value of $\rho_{1,1}$ with the three POVM elements for $j=2$ to reconstruct the $\rho_{1,2}$, $\rho_{2,1}$, and $\rho_{2,2}$ density matrix elements. The procedure can be repeated to reconstruct all elements $\{\rho_{j,j}, \rho_{j-1,j}, \rho_{j,j-1} \}$. Thus, the POVM uniquely reconstructs all elements on the main diagonal (or 0th diagonal with the notation of Sec.~\ref{app:GMB}) and the first off-diagonal (or 1st diagonal).
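As a small sanity check of this last claim, the following Python/NumPy snippet (not from the original construction; the values of $a$ and $b$ are placeholders) verifies that the four operators for $j=1$, restricted to the span of $\ket{0}$ and $\ket{1}$, are linearly independent and hence informationally complete on that qubit subspace.
\begin{verbatim}
# Sketch (NumPy): check that E_0, E_{1,1}, E_{1,2}, E_{1,3}, restricted to the
# span of |0> and |1>, are linearly independent (full-IC on that subspace).
# The constants a, b are placeholders; only linear independence matters here.
import numpy as np

a, b = 1.0, 1.0
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)

E0 = a * np.diag([1.0, 0.0]).astype(complex)
E1 = (b / 2) * (I2 + (2 * np.sqrt(2) / 3) * X - Z / 3)
E2 = (b / 2) * (I2 - (np.sqrt(2) / 3) * X + np.sqrt(2 / 3) * Y - Z / 3)
E3 = (b / 2) * (I2 - (np.sqrt(2) / 3) * X - np.sqrt(2 / 3) * Y - Z / 3)

M = np.array([E.reshape(-1) for E in (E0, E1, E2, E3)])
print(np.linalg.matrix_rank(M))   # 4: the four outcomes span the 2x2 operators
\end{verbatim}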
Then, we apply the same method introduced in Sec.~\ref{app:GMB}, which uses principal submatrices to reconstruct the higher diagonals, to show that this POVM is rank-1 strictly-complete. For this POVM, the failure set corresponds to states with $\rho_{j,j} = 0$ for some $j < d-1$, which is still a set of measure zero. However, this is a ``larger'' set of measure zero, since the reconstruction requires that none of these populations vanish. \section{Rank-$r$ Flammia}\label{app:examples_full} Finally, we provide an additional rank-$r$ strictly-complete POVM that was not implemented in the experiment. This construction is based on the rank-1 strictly-complete POVM proposed by Flammia {\em et al.}~\cite{Flammia2005} and given in Eq.~\eqref{psi-complete}. We construct an EP-POVM with $(2d-r)r+1$ elements and prove it is rank-$r$ strictly-complete with the methods from Sec.~\ref{sec:EP}. The POVM elements are, \begin{align}\label{psic mixed} &E_k=a_k\ket{k}\bra{k},\;k=0,\ldots,r-1\nonumber \\ &E_{k,n}=b_k(\mathds{1}+\ket{k}\bra{n}+\ket{n}\bra{k}),\;n=k+1,\ldots,d-1,\nonumber \\ &\widetilde{E}_{k,n}=b_k(\mathds{1}-{\rm i}\ket{k}\bra{n}+{\rm i}\ket{n}\bra{k}), \;n=k+1,\ldots,d-1,\nonumber \\ &E_{(2d-r)r+1}=\mathds{1}-\sum_{k=0}^{r-1}\left[E_k +\sum_{n=k+1}^{d-1}(E_{k,n}+\widetilde{E}_{k,n})\right], \end{align} with $a_k$ and $b_k$ chosen such that $E_{(2d-r)r+1}\geq0$. The probability $p_k=\textrm{Tr}(E_k\rho)$ can be used to calculate the density matrix element $\rho_{k,k}=\braket{k | \rho | k}$, and the probabilities $p_{k,n}=\textrm{Tr}(E_{k,n}\rho)$ and $\tilde{p}_{k,n}=\textrm{Tr}(\widetilde{E}_{k,n}\rho)$ can be used to calculate the density matrix elements $\rho_{n,k}=\braket{n |\rho | k}$ and $\rho_{k,n}=\braket{k | \rho |n}$. Thus, this is an EP-POVM which reconstructs the first $r$ rows and first $r$ columns of the density matrix. Given the measured elements, we can write the density matrix in block form corresponding to measured and unmeasured elements, \begin{equation} \label{block_rho_gen} \rho= \begin{pmatrix} A & B^\dagger\\ B & C \end{pmatrix}, \end{equation} where $A$ is an $r \times r$ submatrix and $A$, $B^\dagger$, and $B$ are composed of measured elements. Suppose that $A$ is nonsingular. Given that $\textrm{rank}(\rho)=r$, using the rank additivity property of the Schur complement and the fact that $\textrm{rank}(A)=r$, we obtain $\rho/A=C-BA^{-1}B^\dagger=0$. Therefore, we conclude that $C=BA^{-1}B^\dagger$, and thus we can reconstruct the entire rank-$r$ density matrix. Following the arguments for the POVM in Eq.~\eqref{psi-complete}, it is straightforward to show that this POVM is in fact rank-$r$ strictly-complete. The failure set of this POVM corresponds to states for which $A$ is singular, which is a set of measure zero. The POVM of Eq.~\eqref{psic mixed} can alternatively be implemented as a series of $r$ POVMs, where the $k$th POVM, $k=0,\ldots,r-1$, has $2(d-k)$ elements, \begin{align}\label{psic mixed kth} &E_k=a_k\ket{k}\bra{k},\nonumber \\ &E_{k,n}=b_k(\mathds{1}+\ket{k}\bra{n}+\ket{n}\bra{k}),\;n=k+1,\ldots,d-1,\nonumber \\ &\widetilde{E}_{k,n}=b_k(\mathds{1}-{\rm i}\ket{k}\bra{n}+{\rm i}\ket{n}\bra{k}), \;n=k+1,\ldots,d-1,\nonumber \\ &E_{2(d-k)}=\mathds{1}-\left[E_k +\sum_{n=k+1}^{d-1}(E_{k,n}+\widetilde{E}_{k,n})\right]. \end{align} For this POVM, the measured elements are the same as for Eq.~\eqref{psic mixed}, and the proof of rank-$r$ strict-completeness follows accordingly.
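A minimal numerical sketch (Python/NumPy, illustrative only and not part of the construction) of the block reconstruction in Eq.~\eqref{block_rho_gen}: given the first $r$ rows and columns of a rank-$r$ state with nonsingular $A$, the remaining block is recovered as $C = BA^{-1}B^{\dagger}$.
\begin{verbatim}
# Sketch (NumPy): complete a rank-r density matrix from its first r rows and
# columns via the Schur complement, C = B A^{-1} B^dag (Eq. (block_rho_gen)).
import numpy as np

rng = np.random.default_rng(1)
d, r = 6, 2
G = rng.normal(size=(d, r)) + 1j * rng.normal(size=(d, r))
rho = G @ G.conj().T
rho /= np.trace(rho)                     # random rank-r density matrix

A = rho[:r, :r]                          # measured r x r block
B = rho[r:, :r]                          # measured off-diagonal block
C = B @ np.linalg.inv(A) @ B.conj().T    # reconstruction; needs A nonsingular
print(np.allclose(C, rho[r:, r:]))       # True
\end{verbatim}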
\chapter{Quantum control with partial isometries} \label{app:control} Quantum control is the procedure of applying external fields to a quantum system in order to create a desired quantum evolution. Quantum control is required for any QT protocol in order to prepare states or create different POVMs. We discuss techniques for closed-system control, where the evolution is unitary. \section{Closed system control objectives} In closed-system control, the system evolves under unitary dynamics created by a Hamiltonian, written in the standard form, \begin{equation} H(t) = H_0 + \sum_{j=1}^m c_j(t) H_j, \end{equation} where $H_0$ is referred to as the ``drift'' Hamiltonian and the $H_j$'s are referred to as the ``control'' Hamiltonians. Each control Hamiltonian describes an external field that is applied to the quantum system and varied in time in order to create the desired evolution. The functions $c_j(t)$ are called the control parameters. The corresponding evolution is found by integrating the Schr\"{o}dinger equation, \begin{equation} U = \mathcal{T} \textrm{exp} \left[ -{\rm i} \int_0^T dt\, H(t) \right], \end{equation} from time $t=0$ to a final time $t=T$, where $\mathcal{T}$ denotes time ordering. A system is said to be controllable if the drift and control Hamiltonians together generate the Lie algebra $\mathfrak{su}(d)$; that is, the linear combinations of all Hamiltonians, $\{ H_0, H_j\}$, along with all iterated Lie brackets, $\left[H_i, H_j \right]$, span the space of (traceless) Hermitian operators. When the system is controllable, there exists a set of control parameters that generates any $U \in \textrm{SU}(d)$. Since the Hamiltonian is time dependent, we cannot analytically express the unitary at the final time for arbitrary control parameters. In order to obtain such an analytic expression, we consider control parameters that are piecewise constant, such that, \begin{equation} c_j(t) = \begin{cases} c_{j,1}, & \text{if}\ 0 \leq t < t_1, \\ c_{j,2}, & \text{if}\ t_1 \leq t < t_2, \\ & \vdots \ \\ c_{j,n}, & \text{if}\ t_{n-1} \leq t < t_n = T. \\ \end{cases} \end{equation} When each control parameter is piecewise constant on the same time intervals, the values $c_{j,k}$ make up an $m \times n$ matrix $C$, where $m$ is the number of control Hamiltonians and $n$ is the number of control steps, i.e., piecewise elements of $c_j(t)$. The columns of $C$ are vectors that describe a time-independent control Hamiltonian for a given time interval. For example, $\vec{c}_k$ specifies the control Hamiltonian for $t_{k-1} \leq t < t_k$. When the Hamiltonian is time-independent, the Schr\"{o}dinger equation is analytically solvable. Therefore, the total evolution is described by a product of unitary maps, \begin{equation} \label{piecewise_U} U(C) = U(\vec{c}_n) U(\vec{c}_{n-1}) \cdots U(\vec{c}_1). \end{equation} We assume that each time interval has the same length, $\Delta t = t_{k} - t_{k-1} = T/n$. Closed-system quantum control can be used to accomplish partial isometries. A one-dimensional isometry is a state-to-state map; a full $d$-dimensional isometry is a unitary map of the full Hilbert space. Intermediately, a partial isometry maps a subspace of the Hilbert space to any other subspace of the same dimension, while preserving the inner product. The goal of a state-to-state mapping is to evolve an initial state, $\ket{\psi_0}$, to a target state, $\ket{\phi}$. The final state of the controlled evolution is $\ket{\psi(T)} = U(C) \ket{\psi_0}$.
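The following short sketch (Python with NumPy/SciPy; the qubit Hamiltonians and step size are hypothetical placeholder choices, not the experimental ones) illustrates Eq.~\eqref{piecewise_U}: the total propagator is built by multiplying the matrix exponentials of the piecewise-constant Hamiltonians, and the final state is $U(C)\ket{\psi_0}$.
\begin{verbatim}
# Sketch (NumPy/SciPy): piecewise-constant evolution U(C) = U(c_n)...U(c_1)
# for a single qubit with drift H_0 = sigma_z and one control H_1 = sigma_x.
import numpy as np
from scipy.linalg import expm

H0 = np.diag([1.0, -1.0]).astype(complex)          # drift Hamiltonian
H1 = np.array([[0, 1], [1, 0]], dtype=complex)     # control Hamiltonian

def U_of_C(c, dt):
    """c[k-1] = c_{1,k}: control amplitude on the k-th interval of length dt."""
    U = np.eye(2, dtype=complex)
    for ck in c:
        U = expm(-1j * dt * (H0 + ck * H1)) @ U    # U(c_k) acts after U(c_{k-1})
    return U

c = np.random.default_rng(2).normal(size=20)       # random piecewise controls
psi0 = np.array([1.0, 0.0], dtype=complex)
psi_T = U_of_C(c, dt=0.1) @ psi0                   # |psi(T)> = U(C)|psi_0>
\end{verbatim}
The control objectives defined next are evaluated directly from such a propagator.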
The success of a state-to-state mapping is then quantified by the infidelity between the target and final states, \begin{align} \label{Jpsi1} J_1[C] &= 1 - | \langle \phi | \psi (T) \rangle |^2, \nonumber \\ &= 1 - | \langle \phi | U(C) |\psi_0 \rangle |^2. \end{align} When $J_1 = 0$, the state-to-state mapping is performed perfectly and the final state matches the target state. In unitary control, the goal is to specify the entire unitary. This is equivalent to specifying $d$ state-to-state mappings that take a fiducial basis to any desired orthonormal basis. The success is quantified by the squared Hilbert-Schmidt distance between the target unitary, $W$, and the unitary created by the control, $U(C)$, \begin{align} \label{Jd_ReTr} J_d[C] &= \frac{1}{2d}\| W - U(C) \|^2, \nonumber \\ &= 1 - \frac{1}{d} \textrm{ReTr}(W^{\dagger} U(C) ), \end{align} since $W$ and $U(C)$ are unitaries, so that $\textrm{Tr}(|W|^2) = \textrm{Tr}(|U(C)|^2) = d$. The ``ReTr$(\cdot)$'' operator stands for $\textrm{Re}(\textrm{Tr}(\cdot))$. We also include a normalization factor of $\frac{1}{2d}$, such that $J_d = 0$ when $U(C) = W$, i.e., the control achieves the objective unitary map exactly. The functional $J_d[C]$ depends on the global phase difference between $W$ and $U(C)$, but the global phase is often physically irrelevant. The relevant unitaries are then elements of the special unitary group, $SU(d)$. Therefore, we define a functional that is insensitive to the global phase, \begin{equation} \label{Jd_abs} \bar{J}_d[C] = 1 - \frac{1}{d^2} \left| \textrm{Tr}(W^{\dagger} U(C) ) \right|^2, \end{equation} and with this normalization $\bar{J}_d[C] = 0$ when $W = e^{-i \theta} U(C)$ for any phase $\theta$. The advantage is that this reduces the number of free parameters that must be specified by the control, thereby reducing the total time required. State-to-state and unitary control are the two extreme cases of closed-system control. For state-to-state control, the goal is to evolve a single state to a target state. For unitary control, the goal is equivalent to evolving a set of $d$ orthonormal states to a different set of $d$ orthonormal states. In between, we evolve $n \leq d$ orthonormal states, $\{ \ket{\psi_i} \}$, to $n$ orthonormal target states, $\{ \ket{\phi_i} \}$. The corresponding control objectives are then, \begin{align} \label{Jpsi} J_n[C] &= 1 - \frac{1}{n} \textrm{Re} \left[ \sum_{i=1}^n \braket {\phi_i | U(C)| \psi_i} \right], \nonumber \\ \bar{J}_n[C] &= 1 - \frac{1}{n^2} \left| \sum_{i=1}^n \braket {\phi_i | U(C) | \psi_i} \right|^2. \end{align} When $n = 1$, the second objective in Eq.~\eqref{Jpsi} reduces to Eq.~\eqref{Jpsi1}. If we define $\ket{\phi_i} = W \ket{\psi_i}$, then for $n = d$ the objectives in Eq.~\eqref{Jpsi} reduce to Eq.~\eqref{Jd_ReTr} and Eq.~\eqref{Jd_abs}, respectively. We refer to control objectives with $n \neq 1, d$ as ``partial-isometry control.'' The total time required to implement a partial isometry roughly scales with $n^2$, since this is the number of free parameters that specify the partial-isometry control task. Therefore, partial-isometry control is more efficient than unitary control and is desirable in certain applications, such as the measurements of subspaces discussed in Chapter~\ref{ch:experiment}. We can alternatively write a partial isometry in bra-ket notation, \begin{equation} X_n = \sum_{i=1}^n |\phi_i \rangle \langle \psi_i |, \end{equation} and for $n = d$ we recover a unitary matrix.
This can also be expressed in terms of a rank-$n$ projector, $A_n = \sum_i \ket{\psi_i} \bra{\psi_i}$, acting on the full unitary, \begin{equation} X_n = W A_n. \end{equation} We can also express the control objective functionals in terms of the projector and the unitary, \begin{align} \label{JA} J_n[C] &= 1 - \frac{1}{n} \textrm{ReTr} \left[ A_n W^{\dagger} U(C) \right], \nonumber \\ \bar{J}_n[C] &= 1 - \frac{1}{n^2} \left| \textrm{Tr}\left[ A_n W^{\dagger} U(C) \right] \right|^2. \end{align} \section{Numerical control search} To implement closed-system control, we need to find the control parameters, $C^*$, that minimize $J_n[C]$ or $\bar{J}_n[C]$. One way to accomplish this is through numerical optimization. There are several different choices of algorithms to determine the control parameters that minimize the objective functionals. We use a variant of the gradient ascent pulse engineering (GRAPE) algorithm, originally proposed in Ref.~\cite{Khaneja2005} and further discussed in Ref.~\cite{Machnes2011}. GRAPE starts with a set of random control parameters and evaluates the functional and its gradient. The algorithm then takes a step of some size in the descending\footnote{The original proposal stepped in the ascending direction, but we look to {\em minimize} our control objective, so the step is in the descending direction.} direction and recalculates the objective functional and the gradient. It continues this process until a measure of the gradient is smaller than some threshold. This point then corresponds to a local minimum of the functional. If the functional were convex, this local minimum would be guaranteed to be the global minimum. However, none of the objective functionals discussed above are convex. In Refs.~\cite{Rabitz2005,Ho2009,Dominy2011}, it was shown that while the functionals are not convex, i.e., there is not a single global minimum, they do have a favorable landscape for gradient-based algorithms. Instead of having a single global minimum, the functionals introduced in the previous section have many global minima, all of which give the same value of the functional. Therefore, any time the algorithm stops with the gradient equal to zero, the corresponding control parameters produce one of the many global minima. However, this does mean that there are many (in fact infinitely many) different control parameters that achieve the same control objective. In order to use gradient-descent methods we need to know the gradient of the objective functional. This was originally derived in Ref.~\cite{Ho2009} for the partial-isometry objective. We present a brief outline here only for the $\bar{J}_n[C]$ objective in Eq.~\eqref{JA}; the derivation is similar for $J_n[C]$. The gradient with respect to the control parameter $c_{j,k}$ is, \begin{equation} \label{grad_barJ} \frac{\partial \bar{J}_n[C]}{\partial c_{j,k}} = - \frac{2}{n^2} \textrm{Re}\left\{ t^{*}\, \textrm{Tr} \left[ A_n W^{\dagger} \frac{\partial U(C)}{\partial c_{j,k}} \right] \right\}, \end{equation} where $t = \textrm{Tr}\left[ A_n W^{\dagger} U(C) \right]$. The partial derivative of the unitary can be found by expanding in terms of Eq.~\eqref{piecewise_U}, \begin{equation} \label{dUdc} \frac{\partial U(C)}{\partial c_{j,k}} = U(\vec{c}_n) \cdots U(\vec{c}_{k+1}) \frac{\partial U(\vec{c}_k)}{\partial c_{j,k}} U(\vec{c}_{k-1}) \cdots U(\vec{c}_1).
\end{equation} The partial derivative of the unitary with respect to the $k$th control parameter was found in Refs.~\cite{Rabitz2005,Machnes2011} by expanding in the eigenbasis of $U(\vec{c}_k) = V \Lambda V^{\dagger}$, with eigenvalues $\{ \lambda_{\alpha} \}$ of the corresponding piecewise Hamiltonian and eigenvectors $\{ | \lambda_{\alpha} \rangle \}$, \begin{equation} \frac{\partial U(\vec{c}_k)}{\partial c_{j,k}} = V D_{j,k} V^{\dagger}, \end{equation} where $D_{j,k}$ is a $d \times d$ matrix with elements in the eigenbasis of $U(\vec{c}_k)$, \begin{equation} \label{D} \langle \lambda_{\alpha} | D_{j,k} | \lambda_{\beta} \rangle = \begin{cases} -{\rm i} \Delta t \langle \lambda_{\alpha} | H_{j} | \lambda_{\beta} \rangle e^{-i \Delta t \lambda_{\alpha}} & \text{if}\ \lambda_{\alpha} = \lambda_{\beta}, \\ -{\rm i} \Delta t \langle \lambda_{\alpha} | H_{j} | \lambda_{\beta} \rangle \frac{e^{-i \Delta t \lambda_{\alpha} }- e^{-i \Delta t \lambda_{\beta}}}{-{\rm i}\Delta t(\lambda_{\alpha} - \lambda_{\beta})} & \text{if}\ \lambda_{\alpha} \neq \lambda_{\beta}. \end{cases} \end{equation} We combine Eqs.~\eqref{dUdc}-\eqref{D} with Eq.~\eqref{grad_barJ} to write the general form of the gradient, \begin{equation} \frac{\partial \bar{J}_n[C]}{\partial c_{j,k}} = - \frac{2}{n^2} \textrm{Re}\left\{ t^{*}\,\textrm{Tr} \left[ A_n W^{\dagger} U(C) D'_{j,k}\right] \right\}, \end{equation} where $D'_{j,k} = U^{\dagger}(\vec{c}_1) \cdots U^{\dagger}(\vec{c}_k) V D_{j,k} V^{\dagger} U(\vec{c}_{k-1}) \cdots U(\vec{c}_1)$. This expression can also be used to show that, under a few assumptions, there are no local minima; this was done in Ref.~\cite{Ho2009}. With the analytic form of the gradient of $J_n[C]$ and $\bar{J}_n[C]$, and the assumption that there are no local minima, we can use a gradient-based algorithm to efficiently find a global minimum of either functional. We apply MATLAB's \texttt{fminunc} routine, which uses a BFGS quasi-Newton method, with variables $\{ c_{j,k} \}$. The algorithm calculates the function value and gradient at a given point, builds a quasi-Newton approximation of the Hessian, and uses it to determine how large a step to take in the descent direction. It then repeats this iteration until the maximum absolute value of the gradient is below a pre-specified threshold. \end{document}
\begin{document} \def\Titel{ $M$-ideals of compact operators into $\ell_{p}$ } \def\Autor{Kamil John\footnote{supported by the grants of GA~AV\v CR No.~1019504 and of GA~\v CR No.~201/94/0069.} and Dirk Werner} \def\Abstrakttext{ We show for $2\le p<\infty$ and subspaces $X$ of quotients of $L_{p}$ with a $1$-unconditional finite-dimensional Schauder decomposition that $K(X,\ell_{p})$ is an $M$-ideal in $L(X,\ell_{p})$. } \ersteSeite \mysec{1}{Introduction} A closed subspace $J$ of a Banach space $X$ is called an $M$-ideal if the dual space $X^{*}$ decomposes into an $\ell_{1}$-direct sum $X^{*} = J^{\perp} \oplus_{1} V$, where $J^{\perp} = \{x^*\in X^*{:}\allowbreak\ x^{*}|_{J}=0\}$ is the annihilator of $J$ and $V$ is some closed subspace of~$X^{*}$. This notion is due to Alfsen and Effros \cite{AlEf}, and it is studied in detail in \cite{HWW}. It has long been known that the space of compact operators $K(\ell_{p})$ is an $M$-ideal in the space of bounded operators $L(\ell_{p})$ for $1<p<\infty$ whereas this property fails for $L_{p}=L_{p}[0,1]$ unless $p=2$; cf.\ Section~VI.4 in \cite{HWW}. More recently, it was shown in \cite{KalW2} that $K(L_{p},\ell_{p})$ is an $M$-ideal if $1< p \le2$, and it is not an $M$-ideal if $p>2$. In this paper we wish to examine the $M$-ideal character of $K(X,\ell_{p})$ for subspaces $X$ of quotients of $L_{p}$ and $2\le p<\infty$. Our idea is to exploit the fact that those $X$ have Rademacher cotype~$p$ with constant~1. This leads to the result mentioned in the abstract. We would like to thank N.~Kalton and E.~Oja for their comments on preliminary versions of this paper. \mysec{2}{Results} Here is our main result. \begin{theo} \label{3.7} Let $1< p< \infty$ and suppose that the Banach space $X$ admits a sequence of operators $K_n\in K(X)$ satisfying \begin{statements} \item $K_nx\to x$ for all $x\in X$, \item $K^*_nx^*\to x^*$ for all $x^*\in X^*$, \item $\|I\mkern-1mud_X-2K_n\|\to 1$. \end{statements} Then $K(X,\ell_p)$ is an $M$-ideal in $L(X,\ell_p)$ if \begin{equation} \label{eq7} \limsup_n (\|x\|^{p}+\|x_n\|^{p})^{1/ p}\le \limsup_n\biggl({\|x+x_n\|^{p}+\|x-x_n\|^{p}\over 2}\biggr)^{1/ p} \end{equation} for all $x,x_n\in X$ such that $x_n\to0$ weakly. \end{theo} \par\noindent{\em Proof. } Let $T{:}\allowbreak\ X\to \ell_{p}$ be a contraction. We shall show that $T$ has property~$(M)$, i.e., $$ \limsup_{n} \|y+Tx_{n}\| \le \limsup_{n} \|x+x_{n}\| $$ whenever $x\in X$, $y\in \ell_{p}$, $\|y\|\le\|x\|$, and $x_{n}\to 0$ weakly in $X$. This implies our claim by \cite[Th.~6.3]{KalW2}. In fact, we have \begin{eqnarray*} \limsup_{n} \|y+Tx_{n}\| &=& \limsup_{n} \bigl( \|y\|^{p} + \|Tx_{n}\|^{p} \bigr)^{1/p} \\ &\le& \limsup_{n} \bigl( \|x\|^{p} + \|x_{n}\|^{p} \bigr)^{1/p} \\ &\le& \limsup_n\biggl({\|x+x_n\|^{p}+\|x-x_n\|^{p}\over 2}\biggr)^{1/ p} ; \end{eqnarray*} so it is enough to show that \begin{equation}\label{eq2.2} \limsup_{n} \|x+x_{n}\|= \limsup_{n} \|x-x_{n}\|. \end{equation} Let $\varepsilon>0$. Pick $m\in {\Bbb N}$ so that $$ \|K_{m}x-x\|\le\varepsilon,\qquad \|I\mkern-1mud-2K_{m}\| \le 1+\varepsilon. $$ Then pick $n_{0}\in{\Bbb N}$ so that $$ \|K_{m}x_{n}\| \le\varepsilon \qquad\forall n\ge n_{0}; $$ this is possible since $x_{n}\to 0$ weakly and $K_{m}$ is compact.
We now have for $n\ge n_{0}$ \begin{eqnarray*} (1+\varepsilon) \|x_{n}+x\| &\ge& \|(I\mkern-1mud-2K_{m})(x_{n}+x)\| \\ &=& \| x_{n}-x - 2K_{m}x_{n} +2x -2K_{m}x\| \\ &\ge& \|x_{n}-x\| -2\varepsilon -2\varepsilon \end{eqnarray*} so that $$ \limsup_{n} \|x_{n}+x\| \ge \limsup_{n} \|x_{n}-x\|, $$ and by symmetry equality holds. \nopagebreak\hspace*{\fill}$\Box$ We note that (\ref{eq7}) is not a necessary condition, for essentially trivial reasons: e.g., if $p<2$ and $X=\ell_{2}$, then every operator from $X$ to $\ell_{p}$ is compact and, therefore, $K(X,\ell_{p})$ is an $M$-ideal, but (\ref{eq7}) fails. As the proof shows, one can as well consider all the Banach spaces sharing the property $$ \limsup_{n} \|y+y_{n}\| \le \limsup_{n} \bigl( \|y\|^{p}+\|y_{n}\|^{p} \bigr)^{1/p} $$ whenever $y_{n} \to0 $ weakly, e.g., $\ell_{q}$ or the Lorentz spaces $d(w,q)$ for $p\le q< \infty$. So our theorem is closely related to \cite[Th.~3]{OjaCR} and \cite[Prop.~4.2]{Dirk5}. Actually, we needed assumptions (a)--(c) only to ensure~(\ref{eq2.2}), a condition that could be called property~$(wM)$ in accordance with Lima's property~$(wM^{*})$ \cite{Lim95}. Now we wish to give more concrete examples where Theorem~\ref{3.7} applies. There is a natural class of Banach spaces in which inequality~(\ref{eq7}) is valid. Recall that a Banach space $X$ has Rademacher type~$p$ with constant~$C$ if for all finite families $\{x_{1},\ldots, x_{n}\}\subset X$, with $r_{1}, r_{2}, \ldots$ denoting the Rademacher functions, $$ \left( \int_{0}^{1}\,\biggl\| \sum_{k=1}^{n} r_{k}(t)x_{k} \biggr\|^{p} dt \right)^{1/p} \le C \biggl( \sum_{k=1}^{n} \|x_{k}\|^{p} \biggr)^{1/p}; $$ it has Rademacher cotype~$p$ with constant~$C$ if $$ \biggl( \sum_{k=1}^{n} \|x_{k}\|^{p} \biggr)^{1/p} \le C \left( \int_{0}^{1}\,\biggl\| \sum_{k=1}^{n} r_{k}(t)x_{k} \biggr\|^{p} dt \right)^{1/p} $$ instead. Thus we see that the inequality (\ref{eq7}) is always satisfied when $X$ has Rademacher cotype~$p$ with constant~1, which is the case if $X$ is a subspace of a quotient of $L_{p}$ for $2\le p<\infty$. (Indeed, applying the cotype inequality to the two-element family $\{x,x_{n}\}$ gives exactly (\ref{eq7}), since $\int_{0}^{1}\|r_{1}(t)x+r_{2}(t)x_{n}\|^{p}\,dt = \frac12\bigl(\|x+x_{n}\|^{p}+\|x-x_{n}\|^{p}\bigr)$.) As for assumptions (a)--(c) from Theorem~\ref{3.7}, these conditions are obviously fulfilled if $X$ has a shrinking $1$-unconditional finite-dimensional Schauder decomposition or merely the shrinking unconditional metric compact approximation property of \cite{CasKal} and \cite{GKS}. Let us mention that the ``shrinking'' character of these properties holds, by a well-known convex combinations argument (cf.\ \cite[Lemma~VI.4.9]{HWW}), for reflexive spaces automatically. These observations yield the next corollary. \begin{cor} \label{3.9} Let $X$ be a subspace of a quotient of $L_p$, $2\le p<\infty$, and let $X$ have a $1$-unconditional finite-dimensional Schauder decomposition or merely the unconditional metric compact approximation property. Then $K(X,\ell_p)$ is an $M$-ideal in $L(X,\ell_p)$. \end{cor} More explicitly, we note that for instance $\ell_{p}$, $\ell_{p} \oplus_{p} \ell_{r}$ and $\ell_{p}(\ell_{r})$, where $2\le r \le p <\infty$, satisfy these assumptions; but for these spaces the result of Corollary~\ref{3.9} was already known from \cite{Dirk5} or \cite[p.~327]{HWW}. Yet there are other examples. In fact, Li \cite{Li-UCMAP} has exhibited spaces of $\Lambda$-spectral functions $L^{p}_{\Lambda}({\Bbb T})$ for certain $\Lambda\subset{\Bbb Z}$ that enjoy the unconditional metric compact approximation property.
Moreover, since for $2\le q \le p<\infty$ the space $L_{q}$ is isometric to a quotient of $L_{p}$, one can substitute ${q}$ for ${p}$ in the above list of examples. Another way to see that (\ref{eq7}) holds for $L_{p}$, $2\le p <\infty$, is to observe that (\ref{eq7}) follows immediately from Clarkson's inequality in $L_{p}$, that is $$ \|f\|^{p} + \|g\|^{p} \le \frac{\|f+g\|^{p} + \|f-g\|^{p}}2 $$ for $p\ge 2$. Now, Clarkson's inequalities are valid in the Schatten classes as well \cite{McCar}. Therefore we obtain a noncommutative version of the previous corollary. (Actually, this argument is not that different, because the Clarkson inequality entails the desired cotype property.) \begin{cor} \label{3.9a} Let $X$ be a subspace of a quotient of the Schatten class $c_p$, $2\le p<\infty$, and let $X$ have a $1$-unconditional finite-dimensional Schauder decomposition or merely the unconditional metric compact approximation property. Then $K(X,\ell_p)$ is an $M$-ideal in $L(X,\ell_p)$. \end{cor} There is a dual version of Theorem~\ref{3.7} which we state for completeness. \begin{theo} \label{3.3} Let $1< p< \infty$ and $1/p + 1/p' = 1$. Suppose that the Banach space $Y$ admits a sequence of operators $K_n\in K(Y)$ satisfying \begin{statements} \item $K_ny\to y$ for all $y\in Y$, \item $K^*_ny^*\to y^*$ for all $y^*\in Y^*$, \item $\|I\mkern-1mud_Y-2K_n\|\to 1$. \end{statements} Then $K(\ell_p,Y)$ is an $M$-ideal in $L(\ell_p,Y)$ if \begin{equation} \label{eq6} \limsup_n (\|y^*\|^{p'}+\|y_n^*\|^{p'})^{1/ p'}\le \limsup_n\biggl({\|y^*+y_n^*\|^{p'}+\|y^*-y^*_n\|^{p'}\over 2}\biggr)^{1/ p'} \end{equation} for all $y^*,y^*_n\in Y^*$ such that $y^*_n\to0$ weak$^{\,*}$. \end{theo} The proof of Theorem~\ref{3.3} can be accomplished along the same lines as above using property~$(M^{*})$ of a contraction (cf.\ \cite[p.~171]{KalW2}) instead. Again, inequality (\ref{eq6}) is always satisfied when $Y^*$ has Rademacher cotype~$p'$ with constant~1, which is the case if $Y$ has Rademacher type~$p$ with constant~1. The latter holds if $Y$ is a subspace of a quotient of $L_{p}$ or $c_{p}$ for $1<p\le2$. \hfuzz 5pt \mysec{3}{Concluding remarks} The conditions (\ref{eq7}) and (\ref{eq6}) can be understood as averaging conditions. In an earlier draft of this manuscript we used these conditions to establish what we call $p$-averaged versions of the properties~$(M)$ and $(M^{*})$ of contractions~$T$, that is $$ \limsup_n\|y + Tx_n\| \le \cases { \displaystyle \limsup_n\biggl({\|x+x_n\|^p+ \|x-x_n\|^p \over 2}\biggr)^{1/ p} & for $ p < \infty$ \cr \displaystyle \limsup_n\max(\|x+x_n\|,\|x-x_n\|) & for $p=\infty$ \cr }$$ whenever $x\in X$, $y\in Y$ with $\|y\|\le\|x\|$ and $x_{n}\to0$ weakly in $X$; respectively, $$ \limsup_n\|x^*+T^*y^*_n\| \le \cases { \displaystyle \limsup_n\biggl({\|y^*+y_n^{*}\|^p+ \|y^*-y_n^{*}\|^p \over 2}\biggr)^{1/ p} & for $ p < \infty$ \cr \displaystyle \limsup_n\max(\|y^*+y_n^{*}\|,\|y^*-y_n^{*}\|) & for $p=\infty$. \cr }$$ for all $x^*\in X^*$, $y^*\in Y^*$ such that $ \|x^*\|\le \|y^*\|$ and for all weak$^*$ null sequences $(y^*_n)\subset Y^*$. (As a matter of fact, (\ref{eq6}) implies the $p'$-averaged property~$(M^{*})$ for a contraction $T{:}\allowbreak\ \ell_{p}\to Y$.) Using techniques from \cite{KalW2} (which in turn depend on those from \cite{Kal-M}) one can prove the following results.
\begin{prop}\label{3.77} Let $1\le p\le \infty$ and suppose that the Banach space $X$ admits a sequence of operators $K_n\in K(X)$ satisfying \begin{statements} \item $K_nx\to x$ for all $x\in X$, \item $K^*_nx^*\to x^*$ for all $x^*\in X^*$, \item $\|I\mkern-1mud_X-2K_n\|\to 1$. \end{statements} Let $Y$ be a Banach space. Then $K(X,Y)$ is an $M$-ideal in $L(X,Y)$ if and only if every contraction $T{:}\allowbreak\ X\to Y$ has $p$-averaged~$(M)$. \end{prop} \begin{prop}\label{3.73} Let $1\le p\le \infty$ and suppose that the Banach space $Y$ admits a sequence of operators $K_n\in K(Y)$ satisfying \begin{statements} \item $K_ny\to y$ for all $y\in Y$, \item $K^*_ny^*\to y^*$ for all $y^*\in Y^*$, \item $\|I\mkern-1mud_Y-2K_n\|\to 1$. \end{statements} Let $X$ be a Banach space. Then $K(X,Y)$ is an $M$-ideal in $L(X,Y)$ if and only if every contraction $T {:}\allowbreak\ X\to Y$ has $p$-averaged~$(M^{*})$. \end{prop} It is well known (cf.\ \cite[Th.~I.2.2]{HWW}) that a closed subspace $J$ of a Banach space $X$ is an $M$-ideal in $X$ if and only if the following 3-ball property holds: For all $y_1,y_2,y_3 \in B_J$, all $x\in B_X$ and all $\varepsilon>0 $ there is $y\in J$ such that $\|x+y_i-y\|\le 1+\varepsilon$ for $i=1,2,3$. (Here $B_{X}$ denotes the closed unit ball of~$X$.) Upon replacing the number~3 by some $n\in {\Bbb N}$ we obtain the $n$-ball property, which is equivalent to the 3-ball property provided $n\ge3$. One may ``average'' this condition as well and obtain the following characterisation of $M$-ideals by means of an averaged 3-ball property. \begin{prop}\label{2.1} A closed subspace $J$ of a Banach space $X$ is an $M$-ideal in $X$ if and only if \begin{statements} \item[\rm (A)] \it For all $y_1,y_2,y_3 \in B_J$, $x\in B_X$ and $\varepsilon>0 $ there is $y\in J$ such that $$ \|x+y_i-y\| +\|x-y_i-y\|\le 2(1+\varepsilon) \quad {\it for\ }i=1,2,3. $$ \end{statements} holds. \end{prop} \par\noindent{\em Proof. } Evidently the 6-ball property implies (A). Conversely, suppose (A). In order to show that $J$ is an $M$-ideal in $X$ we will verify the ordinary 3-ball property (see above). Now an inspection of the proof of \cite[Theorem~I.2.2]{HWW} shows that one may additionally assume that ${\rm dist}(x,J)\ge 1-\varepsilon$, in which case (A) implies that $$ \|x+y_{i}-y\| \le 2(1+\varepsilon) - \|x-y_{i}-y\| \le 1+3\varepsilon,\qquad i=1,2,3, $$ and we are done. \nopagebreak\hspace*{\fill}$\Box$ \typeout{References} \small \noindent Mathematical Institute, Czech Academy of Sciences, \v{Z}itn\'{a} 25, \\ CZ-11\,567 Prague 1, Czech Republic; \ e-mail: [email protected] \noindent I.~Mathematisches Institut, Freie Universit\"at Berlin, Arnimallee 2--6, \\ D-14\,195 Berlin, Germany; \ e-mail: [email protected] \end{document}
\begin{document} \begin{abstract} Kuramoto Networks contain non-hyperbolic equilibria whose stability is sometimes difficult to determine. We consider the extreme case in which all Jacobian eigenvalues are zero. In this case linearizing the system at the equilibrium leads to a Jacobian matrix which is zero in every entry. We call these equilibria completely degenerate. We prove that they exist for certain intrinsic frequencies if and only if the underlying graph is bipartite, and that they do not exist for generic intrinsic frequencies. In the case of zero intrinsic frequencies, we prove that they exist if and only if the graph has an Euler circuit such that the number of steps between any two visits at the same vertex is a multiple of~$4$. The simplest example is the cycle graph with~$4$ vertices. We prove that graphs with this property exist for every number of vertices~$N\geq 6$ and that they become asymptotically rare for~$N$ large. Regarding stability, we prove that for any choice of intrinsic frequencies, any coupling strength and any graph with at least one edge, completely degenerate equilibria are not Lyapunov stable. As a corollary, we obtain that stable equilibria in Kuramoto Networks must have at least one strictly negative eigenvalue. \end{abstract} \maketitle \section{Motivation} Kuramoto Networks are widespread in neuroscience, biology, chemistry and engineering, as a natural framework for modeling synchronization and pattern formation~\cite{acebron2005kuramoto, arenas2008synchronization, dorfler2014synchronization, dorfler2013synchronization}. Changing the underlying graph, even by just removing one edge, can lead to a dramatically different dynamical behavior. Therefore, great effort has been put into understanding dynamics on different classes of graphs: complete graphs~\cite{jadbabaie2004stability, Taylor2012}, cycles~\cite{canale2009, Wiley2006, roy2012synchronized}, bipartite graphs~\cite{verwoerd2009computing}, planar graphs~\cite{delabays2017multistability}, dense graphs~\cite{Kassabov2021, Ling2019, lu2020, Taylor2012, Yoneda2021}, sparse graphs~\cite{sokolov2019sync}, $3$-regular graphs~\cite{DeVille2016}, trees~\cite{Dekker2013, Jafarian2018}, and stars~\cite{Chen2019}. Here, rather than focusing on a particular type of graph, we focus on a particular type of equilibrium. We believe that degenerate equilibria in Kuramoto Networks are always caused by some special combinatorial properties of the underlying graph, which can be exploited to determine stability when linearization cannot. The results in this paper support this belief. Let~$G$ be a graph with vertices~$1,\ldots,N$ and adjacency matrix~$(a_{jk})_{j,k}$. To every vertex~$k$ we associate a~\emph{phase}~$\theta_k$ on the $1$-dimensional torus~$\T = \R/2\pi\Z$, and a constant \emph{intrinsic frequency}~$\omega_k\in \R$. The connection between vertices is modulated by a \emph{coupling strength}~$K>0$. We call the coupled dynamical system \begin{equation} \label{eq:main_intro} \dot \theta_k = \omega_k + K\sum_{j=1}^N a_{jk} \sin(\theta_j-\theta_k), \qquad k=1,\ldots,N, \end{equation} the \emph{Kuramoto Network on~$G$}. The equilibria of~\eqref{eq:main_intro} are never isolated, due to the presence of phase-shift symmetry, and in particular they are never hyperbolic. Some equilibria are hyperbolic up to symmetry: a typical example is the fully synchronized state, in which all the vertices share the same phase. In this paper we are interested in equilibria that are non-hyperbolic up to symmetry.
We say that an equilibrium is~\emph{completely degenerate} if the Jacobian matrix, obtained by linearizing the vector field at the equilibrium, is zero in every entry. As we will see, the Jacobian matrices obtained by linearizing~\eqref{eq:main_intro} are symmetric. Therefore, they are zero in every entry if and only if all the eigenvalues are zero. Completely degenerate equilibria are interesting. First, they are a source of counterexamples. For instance, they show that Taylor's inequalities~\cite[Lemma 2.1]{Taylor2012} are not sufficient to guarantee stability. Second, they are equilibria with critical edges (see~\cite{DeVille2016} for the definition), for which little is known. Third, they are closely related to Eulerian graphs, a class of graphs not yet analyzed in the Kuramoto Networks literature. Fourth, as their stability cannot be determined by linear stability analysis, they require alternative techniques: in this paper we will combine results on real-analytic gradient systems with graph-theoretical arguments. \begin{figure} \caption{Completely degenerate equilibria on the square and on the hypercube. Blue, green, red and yellow denote $0$, $\pi/2$, $\pi$ and~$3\pi/2$ respectively.} \label{fig:hypercube} \end{figure} \section{Zero Intrinsic Frequencies} \subsection{Classification} Let us begin by analyzing completely degenerate equilibria in the case of zero intrinsic frequencies, that is $ \omega_1=\cdots=\omega_N=0. $ Up to rescaling time we can suppose~$K=1$, obtaining \begin{equation} \label{eq:main} \dot \theta_k = \sum_{j=1}^N a_{jk} \sin(\theta_j-\theta_k),\qquad k=1,\ldots,N. \end{equation} Let~$F$ denote the vector field induced by~\eqref{eq:main}. Let~$DF(\theta)$ denote the differential of the vector field at~$\theta$. For every~$j,k\in \{1,\ldots,N\}$ we have \begin{equation} \label{eq:DF} DF(\theta)_{jk} = \begin{cases*} a_{jk} \cos(\theta_j - \theta_k) & if $j\neq k,$ \\ - \sum_{i} a_{ik} \cos(\theta_i - \theta_k) & if $j=k$. \end{cases*} \end{equation} If~\eqref{eq:main} contains a completely degenerate equilibrium, we say that the graph~$G$~\emph{admits} completely degenerate equilibria. \begin{lemma} \label{lem:criterion} A point~$\theta\in \T^N$ is a completely degenerate equilibrium if and only if, for every vertex~$k$, half of its neighbors have phase~$\theta_k + \pi/2$ and the other half~$\theta_k - \pi/2$. \end{lemma} \begin{proof} From~\eqref{eq:main} and~\eqref{eq:DF} it follows that a point~$\theta$ is a completely degenerate equilibrium if and only if \begin{align} &\sum_{j} a_{jk} \sin(\theta_j - \theta_k) = 0 \label{eq:sine_condition} \\ \intertext{for every vertex~$k$ and} &\cos(\theta_j - \theta_k) = 0 \label{eq:cosine_condition} \end{align} for every edge~$jk$. These equations are satisfied if and only if for every edge~$jk$ we have~$\sin(\theta_j - \theta_k) = \pm 1$ and for every vertex half the sines are equal to~$+1$, the other half to~$-1$. \end{proof} Let us recall some definitions from graph theory. A \emph{cycle} is a closed graph walk without repeated vertices. A \emph{circuit} is a closed graph walk without repeated edges. An \emph{Euler circuit} is a circuit that uses all the graph edges. If an Euler circuit exists, the graph is called~\emph{Eulerian}. Notice that every cycle is a circuit. The following theorem characterizes graphs admitting completely degenerate equilibria as well as the set of completely degenerate equilibria given a graph.
Notice that, since distinct connected components have independent dynamics, it is enough to consider connected graphs. \begin{theorem} \label{thm:1} A connected graph~$G$ admits a completely degenerate equilibrium if and only if there is an Euler circuit such that the number of steps between any two visits at the same vertex is a multiple of~$4$. Every completely degenerate equilibrium is obtained by fixing such an Eulerian circuit, choosing the phase of one vertex arbitrarily and increasing the phase of the next vertex by~$\pi/2$ at each step of the circuit. \end{theorem} \begin{proof} Suppose that~$G$ admits completely degenerate equilibria. Fix a completely degenerate equilibrium~$\theta$. We say that a circuit has the property~$P$ if at each step the phase increases by~$\pi/2$. By Lemma~\ref{lem:criterion} it follows that every edge of~$G$ is contained in a circuit satisfying~$P$. Moreover, it follows that if we remove from~$G$ the edges of a circuit satisfying~$P$, then~$\theta$ is still a completely degenerate equilibrium. Therefore, the set of edges of~$G$ is the union of edge-disjoint circuits satisfying~$P$. If two such circuits intersect in a vertex, their union can be walked in a way that makes it a circuit satisfying~$P$. Since~$G$ is connected, we conclude that there is a circuit satisfying~$P$ containing every edge, that is, an Euler circuit. Conversely, suppose that~$G$ contains an Euler circuit satisfying~$P$. Fix such a circuit. Choose the phase of one vertex arbitrarily and increase the phase of the next vertex by~$\pi/2$ at each step of the Euler circuit. Since the circuit visits every vertex and satisfies~$P$, this process defines a phase~$\theta_k$ at each vertex~$k$ in a consistent way. By Lemma~\ref{lem:criterion} it follows that~$\theta$ is a completely degenerate equilibrium. In particular, this proves that~$G$ admits completely degenerate equilibria. \end{proof} \begin{figure} \caption{ Blue, green, red and yellow denote~$0$, $\pi/2$, $\pi$ and~$3\pi/2$ respectively.} \label{fig:quotient} \end{figure} \begin{example} The cycle graph with $4$~vertices admits completely degenerate equilibria. An explicit example is given in Figure~\ref{fig:hypercube}. This graph admits two completely degenerate equilibria up to phase-shift symmetry, one for every orientation of the cycle. \end{example} \begin{example} The graph formed by the vertices and edges of the $4$-dimensional hypercube admits completely degenerate equilibria. An explicit example is given in Figure~\ref{fig:hypercube}. More generally, any even-dimensional hypercube admits completely degenerate equilibria. \end{example} There are infinitely many graphs admitting completely degenerate equilibria: \begin{proposition} \label{prop:infinitely_many} For every~$N\geq 6$ there is a connected graph on~$N$ vertices admitting completely degenerate equilibria. \end{proposition} \begin{proof} Explicit examples with~$N=6,7,8$ vertices are given in Figure~\ref{fig:quotient}. Given a completely degenerate equilibrium on a connected graph with~$N$ vertices, we can obtain a completely degenerate equilibrium on a connected graph with~$N+3$ vertices by glueing a $4$-cycle to the original graph. For example, the graph with $7$~vertices of Figure~\ref{fig:quotient} is obtained by glueing two $4$-cycles together. 
\end{proof} Although infinite in number, graphs admitting completely degenerate equilibria are asymptotically rare: \begin{proposition} The probability that a graph chosen uniformly at random among the graphs with~$N$ vertices admits completely degenerate equilibria goes to~$0$ as~$N$ goes to infinity. \end{proposition} \begin{proof} By Lemma~\ref{lem:criterion} a graph admitting completely degenerate equilibria is triangle-free. Let us prove that triangle-free graphs have asymptotic probability~$0$. Let~$G$ be a graph on~$N$ vertices. Partition the vertices into~$\lfloor N/3 \rfloor$ subsets of size~$3$, and possibly a subset of smaller size. There are~$8$ distinct graphs on~$3$ vertices, and~$7$ of these are not triangles. Therefore, the probability that~$G$ is triangle-free is at most $ (7/8)^{\lfloor N/3 \rfloor}. $ In particular, the probability goes to~$0$ as~$N$ goes to infinity. \end{proof} \begin{remark} From Lemma~\ref{lem:criterion} it follows that if~$G$ admits completely degenerate equilibria then~$G$ contains no cycles of odd length. In particular~$G$ is bipartite. That is, bipartiteness is necessary for the existence of completely degenerate equilibria in the case of zero intrinsic frequencies. Theorem~\ref{thm:1} implies that it is not sufficient. In a later section (Theorem~\ref{thm:bipartite}) we will show that bipartiteness is necessary and sufficient for the existence of completely degenerate equilibria for \emph{some} (not necessarily all zero) intrinsic frequencies. \end{remark} \subsection{Stability} Stability of completely degenerate equilibria cannot be determined by linearization. The goal of this section is to prove that completely degenerate equilibria are never Lyapunov stable. In particular, they are never asymptotically stable. It is well known~\cite{Ling2019, Wiley2006} that~\eqref{eq:main} is a gradient system with respect to the~\emph{energy function} \begin{equation} \label{eq:energy} E(\theta) = \sum_{jk \in e(G)} a_{jk} (1 - \cos(\theta_j - \theta_k)). \end{equation} Here~$e(G)$ denotes the set of edges of~$G$ and each edge~$jk$ is counted exactly once in the sum. For every point~$\theta\in \T^N$ the identity~$F(\theta) = -DE(\theta)$ holds. Intuitively, this means that trajectories always evolve in the direction in which~$E$ decreases maximally. In particular, the energy decreases over time until a stationary point (or equilibrium) is reached. The connection between stability and minimality is, however, subtle. There are smooth energy functions with Lyapunov stable equilibria that are not local minimizers and local minimizers that are not Lyapunov stable~\cite{Absil2006}. However, if the energy function is real-analytic, then the Lyapunov stable equilibria are exactly the local minimizers of the energy~\cite{Absil2006}. Since~\eqref{eq:energy} is real-analytic, this result applies to our case. Since distinct connected components have independent dynamics, it is enough to consider connected graphs. Moreover, we assume that the graph has at least one edge. Notice that if~$G$ has no edges then there is no dynamics and every point is a (completely degenerate) Lyapunov stable equilibrium. \begin{theorem} \label{thm:2} For every connected graph with at least one edge, completely degenerate equilibria are not Lyapunov stable. \end{theorem} \begin{proof} Let~$\theta\in \T^N$ be a completely degenerate equilibrium.
We will show that~$\theta$ is a saddle point of the energy function~\eqref{eq:energy}, that is, a point that is neither a local minimizer nor a local maximizer. It follows that~$\theta$ is not Lyapunov stable. Fix an Euler circuit as in Theorem~\ref{thm:1} and let~$j,k$ be two consecutive vertices in the circuit. We have~$\theta_k = \theta_j + \pi/2$. For every~$x\in \R$ let~$\theta^x \in \T^N$ be defined as \begin{equation*} \begin{dcases*} \theta^x_j = \theta_j + x, \\ \theta^x_k = \theta_k -x, \\ \theta^x_h = \theta_h, & $h\notin \{j,k\}$. \end{dcases*} \end{equation*} Our goal is to prove that~$E(\theta) - E(\theta^x)$ changes sign in every neighborhood of~$x=0$. By~\eqref{eq:cosine_condition} and~\eqref{eq:energy} it follows that \[ E(\theta) - E(\theta^x) = \sum_{pq \in e(G)} \cos(\theta^x_p - \theta^x_q). \] Let us compute this sum by following the circuit. The edge entering~$j$ before~$jk$, the edge~$jk$, and the edge leaving~$k$ after~$jk$ amount to \begin{align*} \cos(\pi/2 + x) + \cos(\pi/2 -2x) + \cos(\pi/2 + x) = \sin(2x) - 2\sin(x). \end{align*} We claim the other terms in the sum are either zero or they cancel out. If an edge is neither adjacent to~$j$ nor~$k$, then~$\theta^x_p = \theta_p$ and~$\theta^x_q = \theta_q$ and therefore~$\cos(\theta^x_p - \theta^x_q) = 0$. Every visit to~$j$ that is not the one preceding~$jk$ amounts to \[ \cos(\pi/2 + x) + \cos(\pi/2 - x) = - \sin(x) + \sin(x) = 0. \] Similarly, any other visit to~$k$ that does not involve the edge~$jk$ amounts to~$0$. Therefore \[ E(\theta) - E(\theta^x) = \sin(2x) - 2\sin(x). \] Since this function changes sign in every neighborhood of~$x=0$, the point~$\theta$ is a saddle of the energy. \end{proof} \section{Non-Zero Intrinsic Frequencies} \label{sec:the_end} \subsection{Classification} In the previous section we assumed the intrinsic frequencies to be all equal to~$0$. Let us return to the general case \begin{equation} \label{eq:non-identical} \dot \theta_k = \omega_k + K \sum_{j=1}^N a_{jk} \sin(\theta_j-\theta_k). \end{equation} Notice that linearizing~\eqref{eq:non-identical} gives the same Jacobian matrix as the zero-frequency case~\eqref{eq:DF}. It follows that a point~$\theta\in \T^N$ is a completely degenerate equilibrium of~\eqref{eq:non-identical} if and only if \begin{align} & \sum_{j} a_{jk} \sin(\theta_j - \theta_k) = -\frac{\omega_k}{K} \label{eq:sine_condition_non-identical} \\ \intertext{for every vertex~$k$ and} &\cos(\theta_j - \theta_k) = 0 \label{eq:cosine_condition_non-identical} \end{align} for every edge~$jk$. In particular we obtain: \begin{proposition} \label{prop:generic_frequencies} For a generic choice of intrinsic frequencies~$\omega_1,\ldots,\omega_N$ there are no graphs admitting completely degenerate equilibria. \end{proposition} \begin{proof} If equation~\eqref{eq:sine_condition_non-identical} and equation~\eqref{eq:cosine_condition_non-identical} hold then~$\omega_k/K$ is an integer for every~$k$. Therefore, if there is a graph admitting completely degenerate equilibria then the intrinsic frequencies are all elements of the lattice~$K\Z$. \end{proof} We showed (Proposition~\ref{prop:generic_frequencies}) that for generic intrinsic frequencies there are no graphs admitting completely degenerate equilibria. On the other hand, we know that for zero intrinsic frequencies there are infinitely many graphs admitting completely degenerate equilibria (Proposition~\ref{prop:infinitely_many}).
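As a small numerical sanity check of the zero-frequency computations above (a Python/NumPy sketch, not part of the proofs), one can verify on the $4$-cycle that the phases $(0,\pi/2,\pi,3\pi/2)$ give a completely degenerate equilibrium, and that the energy difference $E(\theta)-E(\theta^x)=\sin(2x)-2\sin(x)$ from the proof of Theorem~\ref{thm:2} indeed changes sign at $x=0$.
\begin{verbatim}
# Sketch (NumPy): the 4-cycle with phases (0, pi/2, pi, 3pi/2) is a completely
# degenerate equilibrium of (eq:main): the vector field and the Jacobian (eq:DF)
# vanish, and the energy difference of Theorem 2 changes sign at x = 0.
import numpy as np

A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])                       # cycle graph C_4
theta = np.array([0.0, np.pi / 2, np.pi, 3 * np.pi / 2])

diff = theta[None, :] - theta[:, None]             # diff[k, j] = theta_j - theta_k
F = (A * np.sin(diff)).sum(axis=1)                 # vector field at theta
J = A * np.cos(diff)
J -= np.diag(J.sum(axis=1))                        # Jacobian of (eq:main)
print(np.allclose(F, 0), np.allclose(J, 0))        # True True

def energy(th):                                    # E from (eq:energy)
    d = th[None, :] - th[:, None]
    return 0.5 * (A * (1 - np.cos(d))).sum()       # each edge counted once

for x in (-0.1, 0.1):                              # perturb theta_0 += x, theta_1 -= x
    tx = theta + np.array([x, -x, 0.0, 0.0])
    print(np.sign(energy(theta) - energy(tx)), np.sign(np.sin(2*x) - 2*np.sin(x)))
\end{verbatim}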
This suggests asking which graphs admit completely degenerate equilibria for \emph{some} intrinsic frequencies:

\begin{theorem}
\label{thm:bipartite}
For every graph~$G$ the following facts are equivalent:
\begin{enumerate} [(i)]
\item There are a coupling strength~$K$ and intrinsic frequencies~$\omega_1,\ldots,\omega_N$ such that~$G$ admits completely degenerate equilibria;
\item $G$ is bipartite.
\end{enumerate}
\end{theorem}

\begin{proof}
Let~$\theta$ be a completely degenerate equilibrium. By~\eqref{eq:cosine_condition_non-identical} the phase differences of adjacent vertices are~$\pm \pi/2$. In particular~$G$ contains no cycle of odd length. This is equivalent to bipartiteness.

Conversely, suppose that~$G$ is bipartite. Let the phases of one part be all equal to~$0$ and of the other part all equal to~$\pi/2$. Then~\eqref{eq:cosine_condition_non-identical} is satisfied. Now choose any~$K>0$ and for every vertex~$k$ define
\[
\omega_k = -K \sum_{j} a_{jk} \sin(\theta_j - \theta_k).
\]
Then~\eqref{eq:sine_condition_non-identical} is also satisfied and~$\theta$ is a completely degenerate equilibrium.
\end{proof}

\subsection{Stability}

As we will see, the argument used to determine instability in the case of zero intrinsic frequencies can be adapted to the general case. There is, however, a technical difference: while~\eqref{eq:main} is a gradient system on~$\T^N$, in general~\eqref{eq:non-identical} is only a gradient system locally, in a neighborhood of the equilibrium. This is explained in more detail in the following proof.

\begin{theorem}
\label{thm:2_non-identical}
For every connected graph with at least one edge, every coupling strength~$K>0$ and all intrinsic frequencies~$\omega_1,\ldots,\omega_N\in \R$, the completely degenerate equilibria are not Lyapunov stable.
\end{theorem}

\begin{proof}
Up to rescaling time we can suppose~$K=1$. Let~$\theta \in \T^N$ be a completely degenerate equilibrium. By~\eqref{eq:sine_condition_non-identical} and~\eqref{eq:cosine_condition_non-identical} the neighbors of a vertex~$k$ have phases $\theta_k + \pi/2$~or~$\theta_k - \pi/2$. If, for every vertex~$k$, half of its neighbors have phase~$\theta_k + \pi/2$ and the other half~$\theta_k - \pi/2$, then from equation~\eqref{eq:sine_condition_non-identical} it follows that~$\omega_k=0$ for every~$k$ and Theorem~\ref{thm:2} applies. Otherwise, there is a vertex~$k$ such that~$k$ has~$d^+$ neighbors with phase~$\theta_k + \pi/2$ and~$d^-$ neighbors with phase~$\theta_k - \pi/2$, where $d^+\neq d^-$. For every~$x\in \R$ let~$\theta^x \in \T^N$ be defined as
\begin{equation*}
\begin{dcases*}
\theta^x_k = \theta_k +x, \\
\theta^x_j = \theta_j, & $j \neq k$.
\end{dcases*}
\end{equation*}
In a neighborhood of~$\theta$ the system~\eqref{eq:non-identical} is gradient with energy function
\[
E(\theta) = -\sum_k \omega_k\theta_k + \sum_{jk \in e(G)} (1 - \cos(\theta_j - \theta_k)).
\]
Notice that the terms~$\omega_k\theta_k$ are only well-defined locally, since there are no injective continuous maps from~$\T$ to~$\R$. This is not an obstacle for the proof: all we need is for the function~$x\mapsto E(\theta^x)$ to be defined in some neighborhood of~$x=0$. We have
\begin{align*}
E(\theta) - E(\theta^x) &= \omega_k x + \sum_j a_{jk} (\cos(\theta_j - \theta_k - x) - \cos(\theta_j - \theta_k)) \\
&= \omega_k x + d^+ \cos(\pi/2-x) + d^- \cos(-\pi/2-x) \\
&= \omega_k x + (d^+-d^-) \sin(x).
\end{align*}
From equation~\eqref{eq:sine_condition_non-identical} and equation~\eqref{eq:cosine_condition_non-identical} it follows that~$d^+-d^- = -\omega_k$. Therefore
\[
E(\theta) - E(\theta^x) = (d^+-d^-)(-x + \sin(x)).
\]
Since~$d^+\neq d^-$, this function changes sign in every neighborhood of~$x=0$. We conclude that~$\theta$ is not Lyapunov stable.
\end{proof}

As a corollary we obtain the following general result:

\begin{corollary}
\label{cor:strictly_neg_eig}
For every graph with at least one edge, every coupling strength and all intrinsic frequencies, the Jacobian at any Lyapunov stable equilibrium of~\eqref{eq:non-identical} has at least one strictly negative eigenvalue.
\end{corollary}

\begin{proof}
Without loss of generality, it is enough to consider connected graphs. Let~$\theta$ be a Lyapunov stable equilibrium. By the Center Manifold Theorem the Jacobian eigenvalues at~$\theta$ are negative or zero. By Theorem~\ref{thm:2_non-identical} they are not all zero: since the Jacobian~\eqref{eq:DF} is symmetric, if all its eigenvalues vanished it would be the zero matrix, so that~$\theta$ would be a completely degenerate equilibrium and hence not Lyapunov stable.
\end{proof}

Corollary~\ref{cor:strictly_neg_eig} shows that the presence of strictly negative eigenvalues, although not sufficient, is necessary for stability. Therefore, although linear stability analysis is not enough to determine the set of stable equilibria, it helps by restricting the search.

\begin{remark}
The proof of Theorem~\ref{thm:2_non-identical} actually shows something more: completely degenerate equilibria are saddles of the energy. In particular, it follows that they are unstable in both forward and backward time, or if~$K$ is chosen to be negative.
\end{remark}

\end{document}
\begin{document}

\begin{center}
{\Large \bf Miller Spaces and Spherical Resolvability of Finite Complexes}

\medskip
{\sc Jeffrey Strom}

\smallskip
{\sl Dartmouth College, Hanover, NH 03755}\\
Email: {\tt [email protected]} \quad URL: {\tt www.math.dartmouth.edu/\~{}strom/}
\end{center}

\medskip
\noindent{\bf Abstract}\quad We show that if $K$ is a nilpotent finite complex, then $\Omega K$ can be built from spheres using fibrations and homotopy (inverse) limits. This is applied to show that if $\mathrm{map}_*(X,S^n)$ is weakly contractible for all $n$, then $\mathrm{map}_*(X,K)$ is weakly contractible for any nilpotent finite complex $K$.

\medskip
\noindent{\bf AMS Classification numbers}\quad Primary: 55Q05; Secondary: 55P50

\smallskip
\noindent{\bf Keywords:}\quad Miller spaces, spherically resolvable, resolving class, homotopy limit, cone length, closed class

\section*{Discussion of Results}

A {\bf Miller space} is a CW complex $X$ with the property that the space of pointed maps from $X$ to $K$ is weakly contractible for every nilpotent finite complex $K$, written $\mathrm{map}_*(X,K)\sim *$. They are named for Haynes Miller, who proved in \cite{Miller} that the spaces $B\mathbb{Z}/p$ are all Miller spaces; in fact, he proved that $\mathrm{map}_*(B\mathbb{Z}/p, K)$ is weakly contractible for every finite dimensional CW complex $K$. In the stable category, one can define a {\it Miller spectrum} by requiring that the mapping spectrum $F(X,K)$ is contractible for every finite spectrum $K$.
Since cofibrations and fibrations are the same in the stable category, a finite spectrum $K$ with $m$ cells is the fiber in a fibration $K \longrightarrow L \longrightarrow S^n$ in which $L$ has only $m-1$ cells; in the terminology of \cite{Cohen,Levi}, this means that $K$ is {\it spherically resolvable with weight $m$}. An easy induction shows that $X$ is a Miller spectrum if and only if $F(X,S^n)\simeq *$ for every $n$. Our goal is to prove the following unstable analog of this observation: if $\mathrm{map}_*(X,S^n)\sim *$ for all $n$, then $X$ is a Miller space. The proof of the stable version is not available to us because cofibrations are not fibrations, unstably. To prove our result, it is necessary to determine the extent to which a finite complex can be constructed from spheres in a more general way, i.e., by arbitrary homotopy (inverse) limits \cite{Bousfield-Kan} and extensions by fibrations.

To be more precise, we require some new terminology. We call a nonempty class ${\cal R}$ of spaces a \term{resolving class} if it is closed under weak equivalences and pointed homotopy (inverse) limits (all spaces, maps and homotopy limits will be pointed). It is a \term{strong resolving class} if it is further closed under extensions by fibrations, i.e., if whenever $F\longrightarrow E\longrightarrow B$ is a fibration with $F,B \in{\cal R}$, then $E\in{\cal R}$. Resolving classes are dual to closed classes as defined in \cite{Chacholski} and \cite[p.\thinspace 45]{EDF}. Notice that every resolving class ${\cal R}$ contains the one-point space $*$ (cf. \cite[p.\thinspace 47]{EDF}). From this, it follows that if $F\longrightarrow E\longrightarrow B$ is a fibration with $E,B\in{\cal R}$, then $F\in{\cal R}$, since $F$ is the homotopy pullback of the diagram $*\longrightarrow B\longleftarrow E$. Similarly, if $A_\alpha\in{\cal R}$ for each $\alpha$ then the {\it categorical product} $\Pi_\alpha A_\alpha\in{\cal R}$ also. The {\it weak product} $\widetilde \Pi_\alpha A_\alpha$ is the homotopy colimit of the finite subproducts; if for each $i$ only finitely many of the groups $\pi_i(A_\alpha)$ are nonzero, then the weak product has the same weak homotopy type as the categorical product.

Let ${\cal S}$ be the smallest resolving class that contains $S^n$ for each $n$, and let $\overline {\cal S}$ be the smallest strong resolving class that contains $S^n$ for each $n$. We say that a space $K$ is \term{spherically resolvable} if $\Omega^k K\in \overline {\cal S}$ for some $k$. This concept is related to, but not the same as, the notion of spherical resolvability described in \cite{Cohen,Levi}.

\rk{Examples}
\begin{enumerate}
\item[(a)] If $f:A\longrightarrow B$ is any map then the class of all $f$-local spaces is a resolving class \cite[p.\thinspace 5]{EDF}. This includes, for example, the class of all spaces with $\pi_i(X) = 0$ for $i> n$, or all $h_*$-local spaces, where $h_*$ is a homology theory.
\item[(b)] If $P$ is a set of primes, then the class of all $P$-local spaces is a strong resolving class.
\item[(c)] If $f: W\longrightarrow *$, then the class of all $f$-local spaces is a strong resolving class \cite[p.\thinspace 5]{EDF}. This includes, for example, the class $\{ K^+ \}$, where $K^+$ denotes the Quillen plus construction on $K$ \cite[p.\thinspace 27]{EDF}.
\item[(d)] More generally, if $F$ is a covariant functor that commutes with homotopy limits (and hence with fibrations) and ${\cal R}$ is a (strong) resolving class, then the class $ \{ K \, |\, F(K) \in {\cal R} \} $ is also a (strong) resolving class.
This applies, for example, to the functor $F(K) = \mathrm{map}_*(X,K)$.
\item[(e)] The class $\{ K \, | \, K\sim * \}$ is a strong resolving class.
\end{enumerate}

Our proofs will proceed by induction on a certain kind of cone length \cite{A-S-S}. Let ${\cal F}$ denote the collection of all finite type wedges of spheres. The {\bf ${\cal F}$-cone length} $\mathrm{cl}_{\cal F}(K)$ of a space $K$ is the least integer $n$ for which there are cofibrations $S_i\longrightarrow K_i \longrightarrow K_{i+1}$, $0\leq i < n$, with $K_0\simeq *$, $K_n\simeq K$ and each $S_i\in {\cal F}$. If no such $n$ exists, then $\mathrm{cl}_{\cal F}(K) =\infty$. Clearly every finite complex $K$ has $\mathrm{cl}_{\cal F}(K) < \infty$. We denote by $\Sigma{\cal F}\subseteq {\cal F}$ the subcollection of all simply-connected finite type wedges of spheres. Finally, let ${\cal S}^\vee$ be the smallest strong resolving class that contains $\Sigma{\cal F}$.

With these preliminaries in place, we can state our main result.

\begin{thrm}\label{thrm:sres}
If $K$ is a nilpotent space with $\mathrm{cl}_{\cal F}(K) =n <\infty $, then
\begin{enumerate}
\item[{\rm (a)}] $ K \in {\cal S}^\vee $,
\item[{\rm (b)}] $\Omega K\in \overline {\cal S} $, and
\item[{\rm (c)}] $\Omega^n K \in {\cal S} $.
\end{enumerate}
In particular, {\rm (b)} implies that every nilpotent finite complex $K$ is spherically resolvable in our sense.
\end{thrm}

Our application to Miller spaces follows from the following more general consequence of Theorem \ref{thrm:sres}.

\begin{thrm}\label{thrm:main}
Let ${\cal R}$ be a strong resolving class and let $F$ be a functor that commutes with homotopy limits.
\begin{enumerate}
\item[{\rm (a)}] Assume that $F(S^n)\in {\cal R}$ for each $n$. Then $F(\Omega K)\in {\cal R}$ for each nilpotent space $K$ with $\mathrm{cl}_{\cal F}(K) < \infty$.
\item[{\rm (b)}] Assume that $F(S)\in {\cal R}$ for each $S\in\Sigma{\cal F}$. Then $F(K) \in {\cal R}$ for each nilpotent space $K$ with $\mathrm{cl}_{\cal F}(K) < \infty$.
\end{enumerate}
\end{thrm}

To apply part (b), we require the following result of Dwyer \cite{Dwyer}.

\begin{prop}\label{prop:SinF}
Let $F$ be a functor that commutes with homotopy limits, let $W$ be a space and let ${\cal R} = \{ K\, |\, \mathrm{map}_*(W,F(K)) \sim *\}$. If $S^n\in{\cal R}$ for each $n$, then $\Sigma{\cal F}\subseteq {\cal R}$.
\end{prop}

Together, Theorem \ref{thrm:main}(b) and Proposition \ref{prop:SinF} immediately imply the desired statement about Miller spaces.

\begin{cor}\label{cor:appl}
If $\mathrm{map}_*(X,S^n)\sim *$ for all $n$, then $\mathrm{map}_*( X,K)\sim *$ for every nilpotent space $K$ with $\mathrm{cl}_{\cal F}(K) < \infty$. In other words, $X$ is a Miller space.
\end{cor}

Corollary \ref{cor:appl} is by no means the only corollary of interest. Other consequences are easily obtained by applying Theorem \ref{thrm:main} to various strong resolving classes. For example, if $\mathrm{map}_*(X,S^n)$ is $P$-local for all $n$, then $\mathrm{map}_*(\Sigma X,K)$ is $P$-local for every nilpotent space $K$ with $\mathrm{cl}_{\cal F}(K) < \infty$.

If $X$ is simply-connected then Corollary \ref{cor:appl} can be strengthened somewhat. If $L$ is a space with a nilpotent covering space $K$ having $\mathrm{cl}_{\cal F}(K)<\infty$, then it is easy to see that $\mathrm{map}_*(X,L)\sim *$.
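For instance, Miller's theorem quoted at the beginning gives $\mathrm{map}_*(B\mathbb{Z}/p,S^n)\sim *$ for every $n$, so Corollary \ref{cor:appl} applies with $X=B\mathbb{Z}/p$ and recovers the statement that $B\mathbb{Z}/p$ is a Miller space, albeit only for nilpotent finite targets rather than for all finite dimensional complexes.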
We end by making the surprising observation that a (non-nilpotent, of course) finite complex can be a Miller space!

\noindent{\bf Example}\ \ Let $A$ be a connected $2$-dimensional acyclic finite complex. (The classifying space of the Higman group \cite{Hig} is such a space \cite{D-V}; so is the space obtained by removing a point from a homology $3$-sphere). Since $\pi_1(A)$ is equal to its commutator subgroup, there are no nontrivial homomorphisms from $\pi_1(A)$ to any nilpotent group. It follows that if $f:A\longrightarrow K$ with $K$ a nilpotent finite complex, then $\pi_1(f) =0$ and so $f$ factors through $q:A\longrightarrow A/A_1\simeq \bigvee S^2$. Since $[A,S^2] \cong H^2(A) =0$, we conclude $f\simeq *$. Thus $A$ is a Miller space.

This example shows that the nilpotency hypothesis on the targets in Corollary \ref{cor:appl} cannot be entirely removed. It remains possible, however, that if $X$ is simply-connected and $\mathrm{map}_*(X,S^n)\sim *$ for each $n$, then $\mathrm{map}_*(X,K)\sim *$ for {\it every} finite complex $K$, or even for every finite-dimensional complex.

{\it Acknowledgements}\ \ I would like to thank Robert Bruner and Charles McGibbon for suggesting that I think about Miller spaces. This work owes much to McGibbon in particular -- Corollary \ref{cor:appl} was conjectured in joint work with him. Thanks to Bill Dwyer for directing me to the result of \cite{Hopkins}, which is the key to Proposition \ref{prop:desuspend}, and for the statement and proof of Proposition \ref{prop:SinF}; thanks are also due to Daniel Tanr\'e for bringing Proposition \ref{prop:hofiber} to my attention.

\section{Proof of Theorem \ref{thrm:sres}}

We begin with two supporting results.

\begin{prop}\label{prop:desuspend}
Let $K$ be a connected nilpotent space, let ${\cal R}$ be a resolving class and let $F$ be a functor that commutes with homotopy (inverse) limits. If $F(\bigvee_{i=1}^m \Sigma K)\in{\cal R}$ for each $m$, then $F(K)\in {\cal R}$.
\end{prop}

\begin{prf}
This follows from a result of Hopkins \cite[p.\thinspace 222]{Hopkins}, which says that $K$ is homotopy equivalent to the homotopy (inverse) limit of a tower
$$
A_0 \longleftarrow A_1 \longleftarrow \cdots \longleftarrow A_n \longleftarrow A_{n+1}\longleftarrow \cdots
$$
of spaces, each of which is a homotopy (inverse) limit of a diagram of spaces of the form $\bigvee_{i=1}^m \Sigma K$.
\end{prf}

\begin{prop}\label{prop:hofiber}
Let $A\longrightarrow B\longrightarrow C$ be a cofibration, and let $F$ be the homotopy fiber of $B\longrightarrow C$. Then
$$
\Sigma F \simeq \Sigma A \vee (\Sigma A \wedge \Omega C).
$$
\end{prop}

\begin{prf}
Convert the maps $A\xrightarrow{*}C$, $B\longrightarrow C$ and $C\xrightarrow{=}C$ to fibrations. The total spaces and fibers form the commutative diagram
$$
\xymatrix{
A\times \Omega C \ar[rd]\ar[rr]\ar[dd] && F \ar'[d][dd]\ar[rd] \\
& \Omega C \ar'[d][dd]\ar[rr] && {*}\ar[dd] \\
A \ar[rr]\ar[rd] && B \ar[rd] \\
& {*} \ar[rr] && C,
}
$$
\vskip -.5in
\noindent in which the bottom square is a homotopy pushout. A result of V. Puppe \cite{Puppe} shows that the top square is also a homotopy pushout.
Hence, the cofiber $\Sigma F$ of the map $F\longrightarrow *$ has the same homotopy type as the cofiber of $A\times \Omega C\longrightarrow\Omega C$, namely $\Sigma A \vee (\Sigma A \wedge \Omega C)$, as can be seen from the diagram
$$
\xymatrix{
A\times \Omega C \ar[d]\ar[r]\ar@{}[rd]|{\mathrm{pushout}} & \Omega C \ar[r]\ar[d] & \Sigma F\ar@{=}[d] \\
A \ar[r]^(.4){*} & A * \Omega C\ar[r] & \Sigma A \vee (\Sigma A \wedge \Omega C) .
}
$$
\end{prf}

\noindent{\bf Proof of Theorem \ref{thrm:sres}}\ \ Notice that the assumption on $X$ implies that $X$ is connected; we may therefore assume that $K$ is also connected.

We prove assertion (a) by induction on $\mathrm{cl}_{\cal F}(K)$. If $\mathrm{cl}_{\cal F}(K)=1$, then $K$ is weakly equivalent to a connected finite type wedge of spheres. Therefore each $\bigvee_{i=1}^m \Sigma K\in\Sigma{\cal F}$ and Proposition \ref{prop:desuspend} proves the assertion in the initial case.

Now assume that the result is known for all nilpotent spaces with ${\cal F}$-cone length less than $n$, and that $K$ is nilpotent with $\mathrm{cl}_{\cal F}(K) = n$. Two applications of Proposition \ref{prop:desuspend} reveal that it is enough to show $\bigvee_{i=1}^m \Sigma^2 K \in{\cal S}^\vee$ for each $m$. Write $V= \bigvee_{i=1}^m \Sigma^2 K$. Notice that $\mathrm{cl}_{\cal F}(\bigvee_{i=1}^m K) \leq \mathrm{cl}_{\cal F}(K)$, and the double suspension of an ${\cal F}$-cone decomposition of $\bigvee_{i=1}^m K$ is an ${\cal F}$-cone decomposition of $V$. Thus we may assume that $V$ has an ${\cal F}$-cone decomposition $S_i\longrightarrow V_i\longrightarrow V_{i+1}$, $0\leq i < n$, with $S_i, V_i\in \Sigma{\cal F}$ for each $i$. Therefore, we have a cofibration $L \longrightarrow V \longrightarrow W$ with $L$ simply-connected, $\mathrm{cl}_{\cal F}(L) < n$ and $W\in\Sigma{\cal F}$. Let $F$ denote the homotopy fiber of $V\longrightarrow W$, so
$$
F \longrightarrow V\longrightarrow W
$$
is a fibration. Since $W \in {\cal S}^\vee$, it suffices to show that $F\in {\cal S}^\vee$. Now we use Proposition \ref{prop:hofiber} to determine the homotopy type of $\Sigma F$:
$$
\Sigma F \simeq \Sigma L \vee ( L \wedge \Sigma\Omega W) \simeq L\wedge \biggl(\bigvee_{\alpha} S^{n_\alpha}\biggr)
$$
which is a finite type wedge of suspensions of $L$. If we smash an ${\cal F}$-cone length decomposition of $L$ with the space $\bigvee_{\alpha} S^{n_\alpha}$ we obtain an ${\cal F}$-cone length decomposition for $\Sigma F$ -- in other words, $\mathrm{cl}_{\cal F}(\Sigma F) < n$ and, more importantly, $\mathrm{cl}_{\cal F}(\bigvee_{i=1}^l \Sigma F) < n$ for each $l$. By the inductive hypothesis, $\bigvee_{i=1}^l \Sigma F \in {\cal S}^\vee$ for each $l$. Since $L, V$ and $W$ are each simply-connected, so is $F$, and Proposition \ref{prop:desuspend} implies that $F\in {\cal S}^\vee$, as desired.

To prove (b), observe that the collection ${\cal M}$ of all $K$ with $\Omega K\in\overline {\cal S}$ is a strong resolving class (because $\overline {\cal S}$ is one) that contains $\Sigma{\cal F}$ by the Hilton-Milnor theorem \cite{G}. Hence ${\cal M}$ contains all nilpotent spaces $K$ with $\mathrm{cl}_{\cal F}(K)<\infty$ by part (a).

The proof of (c) is similar to the proof of (a). The initial case of the induction is a special case of (b).
To prove the inductive step, we write $V= \bigvee_{i=1}^m \Sigma^2 K$ and show that $\Omega^n V\in {\cal S}$. As before, we consider the cofiber sequence $L\longrightarrow V\longrightarrow W$ with $W\in\Sigma{\cal F}$ and the corresponding fibration $F\longrightarrow V\longrightarrow W$. This gives us a fibration
$$
\Omega^n V \longrightarrow \Omega^n W \longrightarrow \Omega^{n-1} F
$$
with $\Omega^n W\in{\cal S}$. It now suffices to prove that $\Omega^{n-1} F\in{\cal S}$, which follows by induction using Proposition \ref{prop:desuspend}. $\Box$\par

\section{Proof of Theorem \ref{thrm:main} and Proposition \ref{prop:SinF}}

\noindent{\bf Proof of Theorem \ref{thrm:main}}\ \ Let ${\cal M}$ be the class of all spaces $K$ such that $F(K) \in {\cal R}$; we have already seen that ${\cal M}$ is a strong resolving class. For part (a), $S^n\in {\cal M}$ for each $n$ by assumption, so $\overline {\cal S} \subseteq {\cal M}$. By Theorem \ref{thrm:sres}(b), ${\cal M}$ contains $\Omega K$ for every nilpotent space $K$ with $\mathrm{cl}_{\cal F}(K) < \infty$. In part (b), we find that ${\cal S}^\vee\subseteq {\cal M}$, and so ${\cal M}$ contains every nilpotent space $K$ with $\mathrm{cl}_{\cal F}(K) < \infty$. $\Box$\par

\noindent{\bf Proof of Proposition \ref{prop:SinF}}\ \ Define a relation $<$ on $\Sigma{\cal F}$ as follows: $S<T$ if either (1) the connectivity of $S$ is greater than the connectivity of $T$, or (2) the connectivity of $S$ equals the connectivity of $T$ (say both are $(n-1)$-connected) and the rank of $\pi_n S$ is less than the rank of $\pi_n T$. The key to this proof is the following claim.

\smallskip
\noindent {\sc Claim}\ \ Suppose that $S\in{\cal F}$ is $(n-1)$-connected and has $\pi_n S\neq 0$. Then there is a map $f:S\longrightarrow S^n$ such that the homotopy fibre $T$ of $f$ belongs to ${\cal F}$ and $T<S$.

\noindent {\sc Proof of Claim}\ \ Write $S\sim S'\vee S^n$, and let $f:S\longrightarrow S^n$ be the map which collapses $S'$. By \cite{G}, the homotopy fibre of $f$ is
$$
(S'\times\Omega S^n)/(*\times\Omega S^n)\sim S'\wedge(\Omega S^n)_+ \sim \bigvee_{m=0}^\infty \Sigma^{(n-1)m} S' \in\Sigma{\cal F},
$$
using the James splitting of $\Sigma\Omega S^n$.
\smallskip

Now let $S=S_0 \in \Sigma{\cal F}$. Define $S_{n+1}$ as the fiber of a map $f:S_n\longrightarrow S^{k(n)}$ as in the claim. The result is a tower of spaces
$$
S_0\longleftarrow S_1\longleftarrow \cdots\longleftarrow S_n\longleftarrow S_{n+1}\longleftarrow\cdots
$$
with $S_n\in{\cal F}$ and $S_{n+1} < S_n$ for each $n$. Since spaces in this tower become arbitrarily highly connected as $n$ increases, $\mathrm{holim}_n\, S_n \sim *$. The fibrations $S_{n+1}\longrightarrow S_{n}\longrightarrow S^{k(n)}$ give rise to fibrations
$$
\mathrm{map}_* (W, F(S_{n+1}))\longrightarrow \mathrm{map}_*(W, F(S_n)) \longrightarrow \overbrace{\mathrm{map}_*(W, F(S^{k(n)}))}^{*}.
$$
It follows by induction that each map $S_n\longrightarrow S$ induces a weak equivalence $\mathrm{map}_*(W,F(S_n))\sim \mathrm{map}_*(W,F(S))$.
Finally, we compute
\[
\begin{array}{rcl}
\mathrm{map}_*(W,F(S)) &\sim& \mathrm{holim}_n\,\mathrm{map}_*(W,F(S)) \\
&\sim& \mathrm{holim}_n\,\mathrm{map}_*(W,F(S_n )) \\
&\sim& \mathrm{map}_*(W,\mathrm{holim}_n\, F(S_n )) \\
&\sim& \mathrm{map}_*(W,*) \\
&\sim& {*}.
\end{array}
\]
$\Box$\par

\end{document}
\begin{document}
\begin{frontmatter}
\title{Probabilistic interpretation for solutions of fully nonlinear stochastic PDEs}
\date{\today}
\author{\fnms{Anis} \snm{MATOUSSI}\corref{}\ead[label=e1]{[email protected]}\thanksref{t3}}
\thankstext{t3}{Research partly supported by the Chair {\it Financial Risks} of the {\it Risk Foundation} sponsored by Soci\'et\'e G\'en\'erale, the Chair {\it Derivatives of the Future} sponsored by the {F\'ed\'eration Bancaire Fran\c{c}aise}, and the Chair {\it Finance and Sustainable Development} sponsored by EDF and Calyon }
\address{ Universit\'e du Maine \\ Institut du Risque et de l'Assurance\\ Laboratoire Manceau de Math\'ematiques\\\printead{e1} }
\author{\fnms{Dylan} \snm{POSSAMA\"I}\corref{}\ead[label=e2]{[email protected]}}
\address{CEREMADE Universit\'e Paris--Dauphine\\PSL Research University, CNRS\\ 75016 Paris, France\\ \printead{e2} }
\author{\fnms{Wissal} \snm{SABBAGH}\corref{}\ead[label=e3]{[email protected]}}
\address{Universit\'e du Maine\\ Institut du Risque et de l'Assurance\\ Laboratoire Manceau de Math\'ematiques\\ \printead{e3} }
\runauthor{A. Matoussi, D. Possama\"i, W. Sabbagh}
\begin{abstract}
In this article, we propose a wellposedness theory for a class of second order backward doubly stochastic differential equations ($2$BDSDEs). We prove existence and uniqueness of the solution under a Lipschitz type assumption on the generator, and we investigate the links between the 2BDSDEs and a class of parabolic fully nonlinear stochastic PDEs. Precisely, we show that the Markovian solutions of 2BDSDEs provide a probabilistic interpretation of the classical and stochastic viscosity solutions of fully nonlinear SPDEs.
\end{abstract}
\end{frontmatter}

\section{Introduction}

The starting point of this work is the following parabolic fully nonlinear stochastic partial differential equation (SPDE for short)
\begin{equation}
\begin{split}
\label{SPDE1}
du_t(x) + H(t,x,u_t(x),Du_t(x),D^2u_t(x)) \, dt + g(t,x,u_t(x), Du_t(x) )\circ d\W_t = 0, \quad t\in[0,T],\ u_T=\Phi,
\end{split}
\end{equation}
where $g$ and $H$ are given nonlinear functions. The differential term integrated with respect to $d\W_t$ refers to the backward stochastic integral against a finite-dimensional Brownian motion on some probability space $\big(\Omega, \mathcal{F},\mathbb{P}, (W_t)_{t\geq 0} \big)$. We use the backward notation because our approach is fundamentally based on the doubly stochastic framework introduced in the seminal paper by Pardoux and Peng \cite{pp1994}.

\noindent The class of stochastic PDEs as in \eqref{SPDE1} and their extensions is an important one, since it arises in a number of applications, ranging from asymptotic limits of partial differential equations (PDEs for short) with rapid (mixing) oscillations in time, phase transitions and front propagation in random media with random normal velocities, filtering and stochastic control with partial observations, and path--wise stochastic control theory, to mathematical finance. The main difficulties with equations like \eqref{SPDE1} are threefold.
\begin{itemize}
\item[$(i)$] Even in the deterministic case, there are no global smooth solutions in general.
\item[$(ii)$] Their fully nonlinear character seems to make them inaccessible to the classical martingale theory employed for the linear case.
\item[$(iii)$] Even if smooth solutions were to exist, the equations cannot be described in a point--wise sense, because Brownian paths are nowhere differentiable.
\end{itemize}

\noindent The theory of SPDEs started with classical solutions in a linear setting, with wellposedness results obtained notably by Pardoux \cite{pardouxt1980stochastic}, Dawson \cite{dawson1972stochastic}, Ichikawa \cite{ichikawa1978linear} or Krylov and Rozovski{\u\i} \cite{krylov1977cauchy}. Extensions have been obtained later, notably by Pardoux and Peng \cite{pp1994} (see also Krylov and Rozovski{\u\i} \cite{krylov1981stochastic} or Bally and Matoussi \cite{BM01}), by introducing backward doubly stochastic differential equations (BDSDEs for short), which allowed them to give a nonlinear Feynman--Kac formula for the following class of semi--linear SPDEs
\begin{equation}
\begin{split}
\label{semilinearSPDE}
du_t(x) + \left[ {\cal L} u_t (x) + f (t,x,u_t(x), Du_t(x))\right] dt + g(t,x,u_t(x), Du_t(x) )\circ d\W_t = 0 ,
\end{split}
\end{equation}
where $ {\cal L}$ is a linear second order diffusion operator and $f$ is a given nonlinear function. The theory of BDSDEs has then been extended in several directions, notably by Matoussi and Scheutzow \cite{MS02}, who considered a class of BDSDEs where the nonlinear noise term is given by the more general It\^o-Kunita stochastic integral, thus allowing them to give a probabilistic interpretation of classical and Sobolev solutions of semi--linear parabolic SPDEs driven by space-time white noise. However, the notion of viscosity solution to SPDEs has remained a rather difficult and elusive subject throughout the years. Our aim in this paper, as will be detailed below, is to give a definition of a generalization of BDSDEs allowing for a probabilistic representation of solutions to fully nonlinear SPDEs, the appropriate notion of equation being that of second--order BDSDEs. Before going into details, let us review the associated literature.

\paragraph{Literature review for viscosity solutions of SPDEs.} Stochastic viscosity solutions for SPDEs were first introduced by Lions and Souganidis in their seminal papers \cite{lion:soug:98, lion:soug:00, lion:soug:01}. They used the so-called ``stochastic characteristics'' to remove the stochastic integrals from the SPDEs and thus transform them into PDEs with random coefficients. A few years later, Buckdahn and Ma \cite{buck:ma:10a,buck:ma:10b} considered a related but different class of semi--linear SPDEs, and studied them in light of the earlier results of Lions and Souganidis, giving in addition a probabilistic interpretation of such equations via BDSDEs, but only in the case where the intensity of the noise $g$ in \eqref{semilinearSPDE} did not depend on the gradient of the solution. They used the so--called Doss-Sussmann transformation and stochastic diffeomorphism flow techniques to once more convert the semi--linear SPDEs into PDEs with random coefficients. This transformation was used again for non-standard optimal control problems in their paper \cite{buckdahn2007pathwise}, and also by Diehl and Friz \cite{diehl:friz:12} to solve semi--linear SPDEs and the associated BDSDEs driven by rough drivers. The case of fully nonlinear SPDEs was also considered by Buckdahn and Ma \cite{buckdahn2002pathwise}, still in the context of so-called stochastic viscosity solutions, and using a new kind of Taylor expansion for It\^o-type random fields. One then had to wait for ten years to see new progress being made.
Namely, Gubinelli, Tindel and Torrecilla \cite{Gubi:Tind:Torr:14} proposed a new definition of viscosity solutions to fully nonlinear PDEs driven by a rough path via appropriate notions of test functions and rough jets. These objects were defined as controlled processes with respect to the driving rough path, and the authors showed that their notion of solution was compatible with the seminal results of Lions and Souganidis \cite{lion:soug:98, lion:soug:00, lion:soug:01} and with the recent results of Caruana, Friz and Oberhauser \cite{Caru:Friz:Ober:11} on fully non--linear SPDEs driven by rough drivers. Independently, a series of papers involving Buckdahn, Bulla, Ma and Zhang \cite{buck2011pathwise,buckdahn2015pathwise,Buck:Ma:Zhan:15} proposed yet another alternative definition, based once more on pathwise Taylor expansions for random fields (as in \cite{buckdahn2002pathwise}), but now in the context of the functional It\^o calculus of Dupire, and by identifying the solution of the SPDE with the solution of a path-dependent PDE. Finally, the recent contribution of Friz, Gassiat, Lions and Souganidis \cite{friz2016eikonal} extends the notion of path--wise viscosity solutions to Eikonal equations with quadratic Hamiltonians.

\paragraph{Literature review for $2$BSDEs.} Motivated by numerical methods for fully nonlinear PDEs (the case when $g$ is identically null in the equation \eqref{SPDE1}), second order BSDEs (2BSDEs for short) were introduced by Cheridito, Soner, Touzi and Victoir in \cite{CSTV07}. Then Soner, Touzi and Zhang \cite{STZ10} proposed a new formulation and obtained a complete theory of existence and uniqueness for such BSDEs. The main novelty in their approach is that they require that the solution verifies the equation $\P -a.s.$ for every probability measure $\P$ in a non-dominated class of mutually singular measures. This new point of view is inspired from the quasi-sure analysis of Denis and Martini \cite{deni:mart:06}, who established the connection between the so-called hedging problem in uncertain volatility models and the Black--Scholes--Barenblatt PDE (see also Avellaneda, Levy and Paras \cite{ALP95} and Lyons \cite{L95}). The latter equation is fully nonlinear and has a simple piecewise linear dependence on the second order term. Intuitively speaking (we refer the reader to \cite{STZ10} for more details), the solution to a 2BSDE with generator $F$ and terminal condition $\xi$ can be understood as a supremum, in some sense, of the solutions to the classical BSDEs with the same generator and terminal condition, but written under the different probability measures considered. Following this intuition, a non-decreasing process $K$ is added to the solution, and it somehow pushes (in a minimal way) the solution so that it stays above the solutions of the classical BSDEs. The theory being very recent, the literature remains rather limited. However, we refer the interested reader to Possama\"i \cite{poss13} and Possama\"i and Zhou \cite{PZ13}, who respectively extended these wellposedness results to generators with linear and quadratic growth, as well as to the recent contribution of Possama\"i, Tan and Zhou \cite{PTZ14}, which removes all the regularity assumptions made in the aforementioned works.

\paragraph{Main contributions.} Our aim in this paper is to provide a complete theory of existence and uniqueness of second order BDSDEs (2BDSDEs for short) under Lipschitz-type hypotheses on the driver.
In addition to the structural difficulties inherent in dealing with 2BSDEs through quasi-sure analysis, the presence of two sources of randomness in the 2BDSDEs, which are mixed through the nonlinear coefficients of the equation, makes our study even more complex. In particular, we have to be extremely careful when defining the probabilistic structure allowing for what is now commonly known as volatility uncertainty, in order to consider SPDEs with a nonlinearity with respect to the second-order space derivative. Note that one of the main difficulties with 2BDSDEs and BDSDEs is the extra backward integral term, which prevents us from obtaining path--wise estimates for the solutions, unlike what happens with BSDEs or 2BSDEs. This introduces additional nontrivial difficulties. The same type of problems was already pointed out in \cite{BGM15}, where the authors analyze regression schemes for approximating BDSDEs as well as their convergence, and obtain non-asymptotic error estimates, conditionally on the external noise (that is, $W$ in our context). Similarly to the classical 2BSDEs, the solution of a 2BDSDE has to be represented as a supremum of solutions to standard BDSDEs. We therefore follow the original approach of Soner, Touzi and Zhang \cite{STZ10} by constructing the solution path--wise, using the so-called regular conditional probability distribution. We point out that, since in our context the value process is a random field depending on two sources of randomness, we obtain a dynamic programming principle without regularity assumptions on the terminal condition and the generator, following the approach of Possama\"i, Tan and Zhou \cite{PTZ14}. This is different from the classical $2$BSDEs, where a regularity result for the value process, precisely the uniform continuity with respect to the trajectory of the fundamental noise, was crucial to prove the dynamic programming principle (Proposition 4.7 in \cite{sone:touz:zhan:13}). Moreover, under regularity conditions on the coefficients, we show that the classical solution of the fully nonlinear SPDE \eqref{SPDE1} can be obtained via the associated Markovian $2$BDSDEs, thus extending the Feynman--Kac formula to this context. Finally, we introduce the notion of stochastic viscosity solution for the fully non--linear SPDE \eqref{SPDE1} in the case where the intensity of the noise $g$ does not depend on the gradient of the solution. This restriction is due to our approach, which is based on the Doss--Sussmann transformation to convert fully nonlinear SPDEs into fully nonlinear PDEs with random coefficients. Let us conclude by insisting on one of the main implications of our results. It is our conviction that they open a new path for possible numerical simulations of solutions to fully non--linear SPDEs. Indeed, numerical schemes for classical BDSDEs are by now well--known (see for instance \cite{BGM15}), and numerical procedures for solving second--order BSDEs have also been successfully implemented in recent years, see for instance Possama\"i and Tan \cite{possamai2015weak} or Ren and Tan \cite{ren2015convergence}. Combining these two approaches should in principle allow one to obtain efficient numerical schemes for computing solutions to 2BDSDEs, and therefore for fully non--linear SPDEs. As far as we know, there is no literature on the subject, except for the cases of semilinear and quasilinear SPDEs (see \cite{GGK15}, \cite{GK10}, \cite{GK11}, \cite{matouetal13}, \cite{BGM15}), and so our results could prove to be a non--negligible step forward.
\paragraph{Structure of the paper.} The paper is organized as follows. In Section 2, we briefly recall some notations, introduce the probabilistic structure on the considered product space allowing us to choose the adequate set of measures, provide the precise definition of 2BDSDEs and show how they are connected to classical BDSDEs. Then, the aim of Section 3 is to prove the uniqueness of the solution of 2BDSDEs, as a direct consequence of a representation theorem, which intuitively originates from the stochastic control interpretation of our problem. The proof of this representation is based on Lemma \ref{eq:mincond}. In this section, we also prove a priori estimates for 2BDSDEs. Section 4 is devoted to the existence result for solutions of 2BDSDEs, by a path--wise construction on the shifted Wiener space. Since in our context the value process is a random field depending on two sources of randomness, we prove a regularity result for it in Lemma \ref{mesurabiliteV}. Once again, we cannot obtain the same regularity in the context of doubly stochastic 2BSDEs, because we cannot have path--wise estimates for their solutions. In Section 5, we specialize our discussion to the Markovian context, and we give in Theorem \ref{representationsolutionclassique:theorem} the Feynman--Kac formula for classical solutions of such SPDEs. Then, we introduce the notion of stochastic viscosity solution and give in Theorem \ref{representationsolutionviscosité:theorem} the probabilistic representation for such solutions, which is to our knowledge the first result of this kind for such a class of fully nonlinear SPDEs. Finally, the Appendix collects several technical results needed for the existence of the solution of the 2BDSDEs and SPDEs.

\paragraph{Notations:} For any $n\in\mathbb N\backslash\{0\}$, we will denote by $x\cdot y$ the usual inner product of two elements $(x,y)$ of $\mathbb R^n$, and by $\|\cdot\|$ the associated Euclidean norm when $n\geq 2$ and by $|\cdot|$ when $n=1$. Furthermore, for any $n\times n$ matrix with real entries $M$, $M^\top$ will denote its usual transpose. We abuse notations and also denote by $\No{\cdot}$ a norm on the space of square matrices with real entries. Moreover, $\S_n^{>0}$ will denote the space of all $n\times n$ positive definite matrices with real entries. For any topological space $E$, $\mathcal B(E)$ will denote the associated Borel $\sigma-$field.

\section{Preliminaries and assumptions}
\label{Preliminaries and Hypothesis }

Let us fix a positive real number $T>0$, which will be our finite time horizon, as well as some integer $d\geq 1$. We shall work on the product space $\Omega := \Omega^B\times\Omega^{W}$ where

\noindent $\bullet$ $\Omega^B$ is the canonical space of continuous functions on $[0,T]$ vanishing at $0$, equipped with the uniform norm $\|\cdot\|_{\infty}$. $B$ will be the canonical process on $\Omega^B$, and $\P_B^{0}$ the Wiener measure on $(\Omega^B,\mathcal F^B)$, where $\mathcal F^B$ is the Borel $\sigma$-algebra. Generically, we will denote by $\omega^B$ an element of $\Omega^B$.

\noindent $\bullet$ $(\Omega^W, {\cal F}^{W}, \P^0_{W})$ is a copy of $(\Omega^B,{\cal F}^B,\P_B^0)$, whose canonical process is denoted by $W$, and whose Wiener measure is denoted by $\P_0^W$. Generically, we will denote by $\omega^W$ an element of $\Omega^W$, and the notation $\omega$ will be solely reserved for elements $\omega:=(\omega^B,\omega^W)$ of $\Omega$.
\noindent We equip the product space $\Omega$ with the product $\sigma$-algebra ${\cal F}= {\cal F}^{B}\otimes{\cal F}^{W}$, and define $\P_0:= \P_B^0\otimes\P^0_{W}$. Let then $\F^{o,W}:=\{{\cal F}_{t,T}^{o,W}\}_{t\geq 0}$ and $\F^{W}:=\{{\cal F}_{t,T}^{W}\}_{t\geq 0}$ be respectively the natural and the augmented (under $\P^0_W$) retrograde filtrations generated by $W$, defined by
$${\cal F}_{s,t}^{o,W}:=\sigma\{W_{r}-W_{s} , s\leq r\leq t\},\ {\cal F}_{s,t}^W:=\sigma\{W_{r}-W_{s} , s\leq r\leq t\}\vee \mathcal N^{\P^0_W}(\mathcal F^W),$$
where
$$\mathcal N^{\P^0_W}(\mathcal F^W):=\left\{A\subset\Omega^W,\ \exists\ \tilde A\in{\cal F}^W,\ A\subset \tilde A\ \text{and}\ \P^0_W(\tilde A)=0\right\}.$$
We also denote for simplicity ${\cal F}_{T}^{W}:={\cal F}_{0,T}^{W}$. Similarly, we let $\F^{B}:=\{{\cal F}_{t}^{B}\}_{t\geq 0}$ be the forward (raw) filtration generated by $B$, that is ${\cal F}_t^{B}:=\sigma\{B_r, 0\leq r\leq t\}$ (we remind the reader that it is a classical result that in this case $\mathcal F^B=\mathcal F^B_T$). We also consider its right limit $\F^{B}_{+}:=\{{\cal F}_{t^+}^{B}\}_{t\geq 0}$. Finally, for each $t\in[0,T]$, we define
$${\cal F}_t:= {\cal F}_t^{B}\vee {\cal F}_{t,T}^{W}\quad \textrm{and}\quad {\cal G}_t:= {\cal F}_t^{B}\vee{\cal F}_{T}^{W}.$$
The collection $\F=({\cal F}_t)_{ 0\leq t\leq T}$ is neither increasing nor decreasing and therefore does not constitute a filtration. However, $\G=({\cal G}_t)_{ 0\leq t\leq T}$ is a filtration.

\noindent For technical reasons related to the main result of Nutz \cite{N12}, we will work under the set theoretic model of ZFC (Zermelo--Fraenkel plus the axiom of choice) as well as any additional axiom ensuring the existence of medial limits in the sense of Mokobodzki (see \cite{F84}, statement $22$O(l) page $55$ for models ensuring this).

\subsection{A special family of measures on $(\Omega, \mathcal F)$}

In order to be able to consider SPDEs with a nonlinearity with respect to the second-order space derivative, we will need to consider a probabilistic structure allowing for what is now commonly known as volatility uncertainty. This basically means that we will allow the probability measure we consider on $(\Omega^B,\mathcal F^B)$ to change. Such an approach was initiated under the name of quasi-sure stochastic analysis by Denis and Martini \cite{deni:mart:06} and has since proved very successful (see among others \cite{sone:touz:zhan:13, STZ10, Nutz12}). Following this approach, we say that a probability measure $\P_B$ on $(\Omega^B,{\cal F}^B)$ is a local martingale measure if the canonical process $B$ is a local martingale under $\P_B$. We emphasize that by using integration by parts as well as the path--wise stochastic integration of Bichteler (see Theorem 7.14 in \cite{B81} or the more recent article of Karandikar \cite{K95}), we can give a path--wise definition of the quadratic variation $\langle B\rangle_t$ and of its density with respect to the Lebesgue measure $\widehat{a}_t$, by
$$\widehat{a}_t:=\underset{\varepsilon \downarrow 0}{\overline{\Lim}}\ \Frac{1}{\varepsilon} (\langle B\rangle_t-\langle B\rangle_{t-\varepsilon}),$$
where the $\overline{\Lim}$ has to be understood in a component--wise sense.
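For instance, under the Wiener measure $\P_B^0$ itself, $\langle B\rangle_t=t\,I_d$ and hence $\widehat{a}_t=I_d$ for every $t\in[0,T]$. More generally, if $\langle B\rangle_t=\int_0^t\alpha_s\,ds$ for some locally integrable process $\alpha$, then $\widehat{a}_t=\alpha_t$ for $dt-$almost every $t$, which is exactly the situation of the measures introduced below.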
\noindentindent For practical purposes, we will restrict our attention to the set $\overline{{\cal P}}_S$ consisting of all probability measures $$\P(A):=\int_{\Omega^W}\int_{\Omega^B}{\bf 1}_{(\omega^B,\omega^W)\in A}d\P^{\alpha,\omega^W}(\omega^B)d\P^0_{W}(\omega^W),\ A\in{\cal F},$$ such that for any $\omega^W\in\Omega^W$ \begin{align*} \P^{\alpha,\omega^W} := \P_{B}^0\circ(X^{\alpha,\omega^W})^{-1},\ \text{where} ~ X^{\alpha,\omega^W}_t := \Int_0^t \alpha_s^{1/2}(\cdot,\omega^W)dB_s\,, \,t\in[0,T]\,,\, \P_B^0-a.s., \end{align*} for some $\F$-adapted process $\alpha$ taking values in $\S_d^{>0}$ and satisfying for any $\omega^W\in\Omega^W$, $$\Int_0^T \|\alpha_t(\cdot,\omega^W)\|dt < \infty,\ \P_B^0 -a.s.$$ We emphasize here that by the classical results of Stricker and Yor \cite{SY78} on stochastic integration with a parameter, that we can always assume without loss of generality that the map $$(t,\omega^B,\omega^W)\longmapsto \left( \int_0^t\alpha_s(\cdot,\omega^W)^{1/2}dB_s\right)(\omega^B),$$ is $\mathcal B([0,t])\otimes\mathcal F_t-$measurable. This implies in particular that the family $(\P^{\alpha,\omega^W},\ \omega^W\in\Omega^W)$ is a stochastic kernel (see for instance Definition 7.12 in \cite{BS78}). The set $\overline{{\cal P}}_S$ has several nice properties, which can be deduced from similar results in \cite{sone:touz:zhan:11a}. \begin{Lemma} Every $\P\in\overline{{\cal P}}_S$ satisfies the martingale representation property, in the sense that for any square integrable $(\P, \G)-$martingale $M$, there exists a unique $\G-$predictable process $Z$ such that $$M_t=M_0+\int_0^tZ_s\cdot dB_s,\ \P-a.s.$$ Moreover, they satisfy the Blumenthal $0-1$ law, and in particular, any $\mathcal G_{0^+}$ or $\mathcal F_{0^+}-$measurable random variable is deterministic with respect to $\Omega^B$, that is its randomness only comes from $\Omega^W$. \end{Lemma} \proof Let $M$ be a square integrable $(\P, \G)-$martingale. We have $$M_s(\omega^B,\omega^W)=\E^\P\left[\left.M_t\right|\mathcal G_s\right](\omega^B,\omega^W),\ 0\leq s\leq t\leq T,\ \P-a.e.\ (\omega^B,\omega^W)\in\Omega.$$ However, using again the result of Stricker and Yor \cite{SY78}, this can be rewritten, for $\P_0^W-a.e.$ $\omega^W\in\Omega^W$, as $$M_s(\cdot,\omega^W)=\E^{\P^{\alpha,\omega^W}}\left[\left.M(\cdot,\omega^W)\right|\mathcal F_t^B\right](\cdot),\ \P^{\alpha,\omega^W}-a.s.,$$ which therefore implies that for $\P_0^W-a.e.$ $\omega^W\in\Omega^W$, $M(\cdot,\omega^W)$ is a $(\P^{\alpha,\omega^W},\F^B)-$martingale, to which we can then apply the result of \cite{sone:touz:zhan:11a} to obtain the required martingale representation. \noindentindent The exact same reasoning gives us the second desired result, using again the fact that by \cite{sone:touz:zhan:11a}, the probability measures $\P^{\alpha,\omega^W}$ satisfy the Blumenthal $0-1$ law. \ep \begin{Remark} We recall from {\rm \cite{sone:touz:zhan:13}} that for a fixed $\P\in\overline{{\cal P}}_S$, we have from the Blumenthal zero-one law that $\E^\P[\xi|{\cal G}_t]=\E^\P[\xi|{\cal G}_{t^+}],\, \P- a.s.$ for any $t\in[0,T]$ and $\P-$integrable $\xi$. In particular, this implies immediately that any ${\cal G}_{t^+}-$measurable random variable has a ${\cal G}_t-$measurable $\P-$modification. Furthermore, if $\F^{W,o}$ denotes the raw backward filtration of the Brownian motion $W$, then any ${\cal G}_{t^+}-$measurable random variable also admits a $\F^B_t\vee{\cal F}^{W,o}_T-$measurable $\P-$modification. 
We will often implicitly work with such modifications $($which of course depend on the considered measure $\P)$. \end{Remark} \noindentindent We finish this section with the following definition. \begin{Definition} For any subset $\mathcal Q$ of $\overline{{\cal P}}_S$, we say that a property holds $\mathcal Q-$quasi-surely $({\cal Q}-q.s.$ for short$)$ if it holds $\P-a.s.$ for all $\P\in{\cal Q}$. \end{Definition} \subsection{The non-linearity} To introduce the non-linearity in the SPDEs we consider, we have to take a small detour, and start by introducing a map $H_t(w,y,z,\gamma):[0,T]\times\Omega\times\R\times\R^d\times D_H\longrightarrow \R$, where $D_H\subset \R^{d \times d}$ is a given subset containing $0$. As is usual in any stochastic control problem, the following Fenchel conjugate of $H$ with respect to $\gamma$ will play an important role $$F_t(w,y,z,a) := \underset{\gamma\in D_H}{\Sup} \left\{\Frac{1}{2} Tr(a\gamma) - H_t(w,y,z,\gamma)\right\}~\text{for}~ a\in \S_d^{>0}.$$ For ease of notations, we also define $$\widehat{F}_t(y,z) := F_t(y,z,\widehat{a}_t)~\text{and}~ \widehat{F}_t^0 := \widehat{F}_t(0,0),$$ where as usual we abuse notations and suppress the dependence on $\omega\in\Omega$ when it is not important (which will not be the case in all the article). Since $F$ may not be always finite, we denote by $D_{F_t(y,z)} := \{a,~ F_t(w,y,z,a) < +\infty\}$ the domain of $F$ in $a$ for a fixed $(t,w,y,z)$. \noindentindent We will also consider a function $g_t(\omega,y,z):[0,T]\times\Omega\times\R\times\R^d\longrightarrow \R^d$ and denote $g_t^0:= g_t(0,0)$, with the same convention as above on the $\omega-$dependence. The maps $\widehat F$ and $g$ are intended to play the role of the generators of the doubly stochastic BSDEs we will consider later on. Since the theory of BDSDEs is an $L^2$-type theory, we need to restrict again the class $\overline{{\cal P}}_S$ to account for these integrability issues. \begin{Definition}\label{defP} ${\cal P}$ is the collection of all $\P\in\overline{{\cal P}}_S$ such that $$\underline{a}_{\P}\leq \widehat{a}\leq\overline{a}_{\P} ,~dt\times d\P-a.e.~ \text{for some}~ \underline{a}_{\P},\overline{a}_{\P}\in\S_d^{>0},$$ $$ \E^{\P}\left[\left(\Int_0^T |\widehat{F}_t^0|^{2} dt\right)\right] <+\infty~, ~ \E^{\P}\left[\left(\Int_0^T \|g_t^0\|^{2} dt\right)\right] <+\infty.$$ \label{def} \end{Definition} \noindentindent The first condition in Definition \ref{defP} ensures that the process $B$ is actually a square-integrable martingale under any of the measures in ${\cal P}$, and not only a local-martingale, while the second condition is here to ensure wellposedness of the BDSDEs which will be defined below. Of course, these integrability assumptions will not be enough and need to be complemented with further assumptions on the functions $F$ and $g$ that we now list \begin{Assumption} \label{ass} \begin{itemize} \item[\rm{(i)}] ${\cal P}$ is not empty, and the domain $D_{F_t(y,z)}=:D_{F_t}$ is actually independent of $(\omega,y,z)$. \item[\rm{(ii)}] For fixed $(t,y,z,a)\in[0,T]\times\mathbb R\times\mathbb R^d\times D_{F_t}$, $F_t(\cdot,y,z,a)$ is ${\cal F}_t-$measurable, and $g$ is ${\cal F}_t-$measu-rable as well. \item[\rm{(iii)}] There is $C > 0$ and $0\leq \alpha <1$ s.t. 
$ \forall (y,y',z,z',t,a,\omega ) \in \mathbb R \times \mathbb R \times \mathbb R^d \times \mathbb R^d \times [0,T] \times D_{F_t}\times\Omega $, $$ ~ |F_t(\omega ,y,z,a)-F_t(\omega,y',z',a)|\leq C\left(|y-y'|+\|a^{1/2}(z-z')\|\right),$$ $$~ \|g_t(\omega ,y,z)-g_t(\omega,y', z')\|^2\leq C |y-y' |^2+\alpha\|z-z'\|^2$$ \item[\rm{(iv)}] There exists a constant $\lambda\in[0,1[$ such that $$(1-\lambda)\widehat{a}_t \geq \alpha I_d,\ dt\times {\cal P}-q.e. $$ \item[\rm{(v)}] $F$ and $g$ are uniformly continuous in $\omega$ for the $\|. \|_\infty$ norm on $\Omega$. \end{itemize} \end{Assumption} \begin{Remark} The assumptions $(i)$ and $(ii)$ are classic in the second order framework, see {\rm \cite{STZ10}}. The Lipschitz assumption $(iii)$ is standard in the BSDE theory since the paper {\rm\cite{PP90}}. The contraction condition satisfied by $g$ with respect to the variable $z$ $($i.e $ 0 \leq \alpha < 1)$ and assumption $(iv)$ are necessary for the wellposedness of our second order BDSDEs. These type of conditions are well known $($see e.g. {\rm\cite{pardouxt1980stochastic}}$)$ for the semilinear stochastic PDE \eqref{semilinearSPDE} to be a well-posed stochastic parabolic equation. The last hypothesis $(v)$ is proper to the second order framework, it is linked to our intensive use of regular conditional probability distributions $($r.c.p.d.$)$ in our existence proof, and to the fact that we construct our solutions pathwise, thus avoiding complex issues related to negligible sets. \end{Remark} \subsection{Important spaces and norms} For the formulation of the second order BDSDEs, we will use the same spaces and norms (albeit with some modifications, in particular concerning the measurability assumptions) as the one introduced for second order BSDEs in \cite{STZ10}. \noindentindent $\bullet$ For $p\geq 1$, $L^{p}$ denotes the space of all ${\cal F}_T-$measurable scalar r.v. $\xi$ with $$\|\xi\|^p_{L^{p}}:= \underset{\P\in{\cal P}}{\Sup}\, \E^{\P}[|\xi|^p] < +\infty .$$ $\bullet$ $\H^{p}$ denotes the space of all $\R^d-$valued processes $Z$ which are $\G-$predictable and s.t. $Z_t$ is $\underset{\P\in{\cal P}}{\bigcap}\overline{{\cal F}}^\P_t-$measurable for a.e. $t\in[0,T]$, with $$\|Z\|^p_{\H^{p}}:= \underset{\P\in{\cal P}}{\Sup}\, \E^{\P}\left[\left(\Int_0^T \|\widehat{a}^{1/2}_tZ_t\|^2 dt\right)^{\frac{p}{2}}\right]< +\infty .$$ $\bullet$ $\D^{p}$ denotes the space of $\R-$valued processes $Y$, which are $\G-$progressively measurable, s.t. $Y_t$ is ${\cal F}_{t^+}^B\vee{\cal F}_{t,T}^W-$ measurable for every $t\in[0,T]$, with $${\cal P}-q.s.\text{ c\`adl\`ag paths, and } \ \|Y\|^p_{\D^{p}}:= \underset{\P\in{\cal P}}{\Sup}\,\E^{\P}\left[\underset{0\leq t\leq T}{\Sup}\,|Y_t|^p \right]< +\infty .$$ $\bullet$ $\I^{p}$ denotes the space of all $\R-$valued and $\G-$progressively measurable processes $K$ null at $0$, s.t. 
$K_t$ is $\underset{\P\in{\cal P}}{\bigcap}\overline{{\cal F}}^\P_t-$measurable for every $t\in[0,T]$, with $${\cal P}-q.s.\text{ c\`adl\`ag and nondecreasing paths, and } ~ \|K\|^p_{\I^{p}}:= \underset{\P\in{\cal P}}{\Sup}\,\E^{\P}\Big[K_T^p \Big]< +\infty .$$ $\bullet$ For each $\xi\in L^{1},\P\in{\cal P}$ and $t\in[0,T]$, we denote by $\E_t^{{\cal P},\P}[\xi] :=\underset{\P^{'}\in{\cal P}(t^+,\P)}{\rm ess \, sup^{\P}}\, \E_t^{\P^{'}}[\xi],$ with $${\cal P}(t^+,\P):=\{\P^{\prime}\in{\cal P}\,,\,\P^{\prime} = \P~\text{on}~ {\cal G}_{t^+} \},$$ where $ \E_t^{\P}[\xi]:= \E^{\P}[\xi|{\cal G}_t]=\E^{\P}[\xi|{\cal F}_t^{B}\vee {\cal F}_{T}^{W}] ,~ \P-a.s.$ Then we define for each $p\geq 2$, $$\L^{p} := \{\xi\in L^{p} : \|\xi\|_{\L^{p}} < +\infty\},$$ where $$\|\xi\|^p_{\L^{p}}:= \underset{\P\in{\cal P}}{\Sup}\, \E^{\P}\Big[\underset{0\leq t\leq T}{\rm ess \, sup^{\P}}\,\big(\E_t^{{\cal P},\P}[|\xi|^{2}]\big)^{\frac{p}{2}}\Big].$$ Finally, we denote by ${\rm UC}_b(\Omega)$ the collection of all bounded and uniformly continuous maps $\xi:\Omega \longrightarrow \R$ for the $\|.\|_\infty-$norm, and we let ${\cal L}^{p}$ be the closure of ${\rm UC}_b(\Omega)$ under the norm $\|.\|_{\L^{p}}$, for every $p\geq 2$. \subsection{Definition of the 2BDSDE and connection with standard BDSDEs} We shall consider the following second order backward doubly stochastic differential equation (2BDSDE for short) \begin{align} Y_t = \xi+\Int_t^T\widehat{F}_s(Y_s,Z_s)ds+\Int_t^T g_s(Y_s,Z_s)\cdot d\W_s-\Int_t^T Z_s \cdot dB_s+ K_T-K_t. \label{eq} \end{align} We note that the integral with respect to $W$ is a "backward It\^o integral" (see \cite{K90}, pages 111--112) and the integral with respect to $B$ is a standard forward It\^o integral. For any $\P\in{\cal P}$, $\G-$stopping time $\tau$, and ${\cal G}_{\tau}-$measurable random variable $\xi\in\L^2(\P)$, let $(y^{\P},z^{\P}):=(y^{\P}(\tau,\xi),z^{\P}(\tau,\xi))$ denote the unique solution to the following BDSDE \begin{align} y^{\P}_t= \xi+\Int_t^{\tau} \widehat{F}_s(y^{\P}_s,z^{\P}_s)ds+\Int_t^{\tau} g_s(y^{\P}_s,z^{\P}_s)\cdot d\W_s -\Int_t^{\tau}z^{\P}_s \cdot dB_s, \ 0\leq t\leq T,\ \P -a.s. \label{BDSDE} \end{align} Let us point out immediately that wellposedness of a solution is not an immediate consequence of the classical result of Pardoux and Peng \cite{pp1994}. Indeed, in our setting the two martingales $W$ and $B$ are not independent. In our case, the only argument in \cite{pp1994} which does not go through {\it mutatis mutandis} is the one at the end of their proof of their Proposition 1.2 which proves that the solution $(y^\P_t,z^\P_t)$ is actually ${\cal F}_{t^+}^B\vee{\cal F}_{t,T}^W-$measurable. However, by Lemma \ref{BDSDEshift} and Step 1 of its proof, in particular \eqref{eq:tructruc}, the required measurability becomes clear. \noindentindent We can now give the definition of a solution to a 2BDSDE. \begin{Definition}\label{def:def} For $\xi\in \L^{2}$, $(Y,Z,K)\in\D^{2}\times\H^{2}\times\I^2$ is a solution to the 2BDSDE \eqref{eq} if \eqref{eq} is satisfied ${\cal P}-q.s.$, and if the process $K$ satisfies the following minimality condition \begin{align} K_t =\underset{\P^{'}\in{\cal P}(t^+,\P)}{\rm ess \, inf^{\P}}\E_t^{\P^{'}}[K_T], ~\P-a.s.~\text{for all}~ \P\in{\cal P}, \ t\in[0,T]. 
\label{eqmin} \end{align} \end{Definition} \begin{Remark} We emphasize that we should normally make the dependence of $K$ in the measure $\P$ explicit, since the two stochastic integrals on the right--hand side of \eqref{eq} are, {\it a priori}, only defined $\P-a.s.$ However, we are in a context where we can use the main aggregation result of {\rm \cite{N12}} to always define an universal version of these integrals. \end{Remark} \noindentindent Before closing this subsection, we highlight the fact that Definition \ref{def:def} contains the classical theory of BDSDEs. Indeed, let $f$ be the following linear function of $\gamma$ $$f_t(y,z,\gamma) = \Frac{1}{2}I_d:\gamma - \hat{f}_t(y,z),$$ where $I_d$ is the identity matrix in $\R^d$. Then, we verify immediately that $D_{F_t(w)}=\{I_d\},$ $\widehat{F}_t(y,z) = \hat{f}_t(y,z)$ and ${\cal P}=\{\P_0\}$. In this case, the minimum condition (\ref{eqmin}) implies $$ 0=K_0=\E^{\P_0}[K_T]\ \text{and thus}\ K=0,\ \P_0-a.s.,$$ since $K$ is nondecreasing. Hence, the 2BDSDE (\ref{eq}) is equivalent to the following BDSDE: \begin{align*} Y_t=\xi+\Int_t^T \hat{f}_s(Y_s,Z_s)ds+\Int_t^T g_s(Y_s,Z_s)\cdot d\W_s-\Int_t^T Z_s\cdot dB_s, 0\leq t\leq T,~\P_0-a.s. \end{align*} In addition to Assumption \ref{ass}, we will need to assume the following stronger integrability conditions \begin{Assumption}\label{ass1} The processes $\widehat{F}^0$ and $g^0$ satisfy the following integrability conditions for some $\varepsilon>0$ \begin{align*} \phi^{2,\varepsilon} :=\underset{\P\in{\cal P}}{\Sup}\,\E^{\P}\left[\Int_0^T |\widehat{F}_s^0|^{2+\varepsilon} ds\right] <+\infty,\ \psi^{2,\varepsilon} :=\underset{\P\in{\cal P}}{\Sup}\,\E^{\P} \left[\Int_0^T \|g_s^0\|^{2+\varepsilon} ds\right]<+\infty. \end{align*} \end{Assumption} \noindentindent Finally, we will see later on in our proof of a priori estimates for the solution of the $2$BDSDE \eqref{eq}, that we actually need to have $L^{2+\varepsilon}-$type estimates for the solutions $(y^\P,z^\P)$ of the corresponding BDSDEs. For this reason, we need to also consider the following, which already appeared as Assumption (H.2) in \cite{pp1994} \begin{Assumption}\label{assumtion g} There exist $c>0$ and $0\leq \beta < 1$ such that for all $(t,y,z)\in [0,T]\times\R\times\R^{d}$ $$(g_tg_t^\top) (y,z)\leq c (1+y^2)I_d+ \beta zz^\top.$$ \end{Assumption} \section{Uniqueness of the solution and estimates} \label{Uniqueness of the solution and other properties} \subsection{Representation and uniqueness of the solution} The aim of this section is to prove the uniqueness of solution for $2$BDSDEs \eqref{eq}, as a direct consequence of a representation theorem, which intuitively originates from the stochastic control interpretation of our problem. As it will become more and more apparent, one of main difficulties with $2$BDSDEs and BDSDEs is the extra backward integral term, which prevents us from obtaining pathwise estimates for the solutions, unlike what happens with BSDEs or 2BSDEs. This introduces additional non trivial difficulties. The same type of problems were already pointed out in \cite{BGM15}, where the authors analyze regression schemes for approximating BDSDEs as well as their convergence, and obtain non-asymptotic error estimates, conditionally to the external noise (that is $W$ in our context). We start with a revisit of the minimality condition \reff{eqmin}. 
\begin{Lemma}\label{eq:mincond} The minimum condition \reff{eqmin} implies that $$\underset{\P^{\prime}\in{\cal P}(t^+,\P)}{\inf}\E^{\P^{\prime}}\left[K_T-K_t\right]=0.$$ \end{Lemma} \begin{proof} Indeed, fix some $\P\in{\cal P}$ and some $\P^{\prime}\in{\cal P}(t^+,\P)$. Taking expectation under $\P$ in \reff{eqmin}, we obtain readily \begin{equation*} \E^\P\left[\underset{\P^{\prime}\in{\cal P}(t^+,\P)}{\rm ess \, inf^{\P}}\E_t^{\P^{\prime}}[K_T-K_t]\right]=0. \end{equation*} Then, we know that the family ${\cal P}(t^+,\P)$ is upward directed (it is indeed clear from the result of \cite{sone:touz:zhan:13}). Therefore, by classical results, there is a sequence $(\P^n)_{n\geq 0}\subset {\cal P}(t^+,\P)$ such that $$\underset{\P^{\prime}\in{\cal P}(t^+,\P)}{\rm ess \, inf^{\P}}\E_t^{\P^{\prime}}[K_T-K_t]=\underset{n\rightarrow+\infty}{\lim}\downarrow \E_t^{\P^{n}}[K_T-K_t].$$ Using this in \eqref{eq:mincond} and then the monotone convergence theorem under the fixed measure $\P$, we obtain \begin{align*} 0=\E^\P\left[\underset{\P^{\prime}\in{\cal P}(t^+,\P)}{\rm ess \, inf^{\P}}\E_t^{\P^{\prime}}[K_T-K_t]\right]=\E^\P\left[\underset{n\rightarrow+\infty}{\lim}\downarrow \E_t^{\P^{n}}[K_T-K_t]\right]&=\underset{n\rightarrow+\infty}{\lim}\downarrow\E^\P\left[ \E_t^{\P^{n}}[K_T-K_t]\right]\\ &=\underset{n\rightarrow+\infty}{\lim}\downarrow\E^{\P^{n}}\left[ \E_t^{\P^{n}}[K_T-K_t]\right]\\ &=\underset{n\rightarrow+\infty}{\lim}\downarrow\E^{\P^{n}}\left[K_T-K_t\right]\\ &\geq \underset{\P^{'}\in{\cal P}(t^+,\P)}{\inf}\E^{\P^{'}}\left[K_T-K_t\right]. \end{align*} Since $K$ is a non-decreasing process, the result follows. \ep \end{proof} \noindentindent We can now show as in Theorem 4.4 of \cite{sone:touz:zhan:13} that the solution to the $2$BDSDE (\ref{eq}) can be represented as a supremum of solutions to the BDSDEs (\ref{BDSDE}). \begin{Theorem}\label{theorep} Let Assumptions \ref{ass} and \ref{ass1} hold. Assume $\xi\in\L^{2}$ and that $(Y,Z,K)$ is a solution to 2BDSDE \reff{eq}. Then, for any $\P\in{\cal P}$ and $0\leq t_1 <t_2\leq T,$ \begin{align} Y_{t_1} = \underset{\P^{'}\in{\cal P}(t_1^+,\P)}{\rm ess \, sup^{\P}} y_{t_1}^{\P^{'}}(t_2,Y_{t_2}),~ \P-a.s. \label{eqrepresentation} \end{align} \end{Theorem} \begin{proof} We follow the (by now) classical approach for this problem, first used in \cite{STZ10}, and proceed in 2 steps. (i) Fix $0\leq t_1 <t_2\leq T,$ and $\P\in{\cal P} $. For any $\P^{'}\in{\cal P}(t_1^+,\P)$, note that from (\ref{eq}), we have, $ \P^{'} - a.s.$, for any $t_1\leq t\leq t_2$ \begin{align*} Y_t=Y_{t_2}+\Int_t^{t_2}\widehat{F}_s(Y_s,Z_s)ds+\Int_t^{t_2} g_s(Y_s,Z_s)\cdot d\W_s-\Int_t^{t_2} Z_s\cdot dB_s +K_{t_2}-K_t, \end{align*} and that $K$ is nondecreasing, $\P^{'}-a.s.$ Applying the comparison principle for BDSDE (see \cite{SGL05}) under $\P$ , we have $Y_{t_1}\geq y_{t_1}^{\P^{'}}(t_2,Y_{t_2}),$ $\P^{'} - a.s$. Since $\P^{'} = \P$ on ${\cal G}_{t_1}^{+}$, we get $Y_{t_1}\geq y_{t_1}^{\P^{'}}(t_2,Y_{t_2}), \P - a.s.$ and thus \begin{align*} Y_{t_1} \geq\underset{\P^{\prime}\in{\cal P}(t_1^+,\P)}{\rm ess \, sup^{\P}} y_{t_1}^{\P^{\prime}}(t_2,Y_{t_2}),~ \P-a.s. \end{align*} (ii) To prove the reverse inequality in representation (\ref{eqrepresentation}), we use standard linearization techniques. Fix $\P\in{\cal P} $, for every $\P^{\prime}\in{\cal P}(t_1^+,\P)$, denote $ \delta Y := Y-y^{\P^{'}}(t_2,Y_{t_2}) ~\text{and}~ \delta Z := Z-z^{\P^{'}}(t_2,Y_{t_2}). 
$ By Assumption \ref{ass}(iii), there exist bounded processes $\lambda, \eta, \gamma,\beta$, which are respectively $\R$, $\R^d$, $\R^d$ and $\R-$valued, such that, $\P^{'}-a.s.$ \begin{align*} \delta Y_t =&\ \Int_t^{t_2}\!(\lambda_s\delta Y_s+\eta_s\cdot \widehat{a}_s^{\frac12}\delta Z_s)ds+\Int_t^{t_2}\left(\!\gamma_s\delta Y_s +\beta_s\delta Z_s\right)\cdot d\W_s-\Int_t^{t_2}\!\delta Z_s\cdot dB_s +K_{t_2}-K_t. \end{align*} Define \begin{align} M_t:=\exp\Big(\Int_0^t\eta_s\cdot \widehat{a}_s^{-1/2}dB_s+\Int_0^t\lambda_sds -\Frac{1}{2}\Int_0^t\|\eta_s\|^2 ds\Big),\ t_1\leq t\leq t_2,\ \P^{'} - a.s. \label{M} \end{align} By integration by parts, we have \begin{align*} d(M_t\delta Y_t) = M_t(\delta Z_t + \delta Y_t\eta_t\widehat{a}^{-1/2}_t)\cdot dB_t-M_t\beta_t\delta Z_t\cdot d\W_t -M_t dK_t. \end{align*} We deduce $$\E^\P[\delta Y_{t_1}]=\E^{\P^{'}}\left[M_{t_1}^{-1}\Int_{t_1}^{t_2}M_tdK_t\right]\leq\E^{\P^{'}}\left[\underset{t_1\leq t\leq t_2}{\Sup}(M_{t_1}^{-1}M_{t})(K_{t_2}-K_{t_1})\right],$$ where we used the fact that $K^{\P^{'}}$ is non-decreasing and that since $\delta Y_{t_1}$ is $\mathcal F_{t_1^+}$-measurable, its expectation is the same under $\P$ and $\P^{'}$. By the boundedness of $\lambda,\eta, \gamma, \beta$, for every $p\geq 1$ we have, \begin{align} \E^{\P^{'}}\left[\underset{t_1\leq t\leq t_2}{\Sup}(M_{t_1}^{-1}M_{t})^p+\underset{t_1\leq t\leq t_2}{\Sup}(M_{t_1}M_{t}^{-1})^p\right]\leq C_p ,\ t_1\leq t\leq t_2, \ \P^{'} - a.s.\label{eq4} \end{align} Then it follows from the H\"{o}lder inequality that \begin{align*} \mathbb E^\P\left[Y_{t_1}-y_{t_1}^{\P^{'}}(t_2,Y_{t_2})\right]&\leq \left(\E^{\P^{'}}\left[\underset{t_1\leq t\leq t_2}{\Sup}(M_{t_1}^{-1}M_{t})^3\right]\right)^{1/3}\left(\E^{\P^{'}}\left[(K_{t_2}-K_{t_1})^{3/2}\right]\right)^{2/3}\\ &\leq C\left(\E^{\P^{'}}\left[K_{t_2}-K_{t_1}\right]\E^{\P^{'}}\left[(K_{t_2}-K_{t_1})^{2}\right]\right)^{1/3}. \end{align*} From the definition of $K$, we have \begin{align} \underset{\P^{\prime}\in{\cal P}(t_1^+,\P)}{\Sup} \E^{\P^{\prime}}\left[(K_{t_2}-K_{t_1})^{2}\right] \leq C\left(\|Y\|^2_{\D^{2}}+ \|Z\|^2_{\H^{2}}+\left(\phi^{2,\varepsilon}\right)^{\frac 2{2+\varepsilon}}+\left(\psi^{2,\varepsilon}\right)^{\frac 2{2+\varepsilon}} \right)<+ \infty.\label{eq5} \end{align} Then, by taking the infimum in ${\cal P}(t_1^+,\P)$ in the last inequality and using (\ref{eq5}) and the result of Lemma \ref{eq:mincond}, we obtain $$\underset{\P^{'}\in{\cal P}(t_1^+,\P)}{\inf}\mathbb E^\P\left[Y_{t_1}-y_{t_1}^{\P^{'}}(t_2,Y_{t_2})\right]\leq 0.$$ But we clearly have $$0\geq\underset{\P^{'}\in{\cal P}(t_1^+,\P)}{\inf}\mathbb E^\P\left[Y_{t_1}-y_{t_1}^{\P^{'}}(t_2,Y_{t_2})\right]\geq \mathbb E^\P\left[Y_{t_1}-\underset{\P^{'}\in{\cal P}(t_1^+,\P)}{\rm ess \, sup^{\P}}y_{t_1}^{\P^{'}}(t_2,Y_{t_2})\right].$$ Since the quantity under the expectation is positive $\P-a.s.$ by Step $1$, we deduce that it is actually equal to $0$, $\P-a.s.$, which is the desired result. \ep \end{proof} \noindentindent As an immediate consequence of the representation formula (\ref{eqrepresentation}) together with the comparison principle for BDSDEs, we have the following comparison principle for $2$BDSDEs. \begin{Theorem} Let $(Y,Z)$ and $(Y',Z')$ be the solutions of $2$BDSDEs with terminal conditions $\xi$ and $\xi'$ and generators $\widehat{F}$ and $\widehat{F}'$ respectively, and let $(y^{\P},z^{\P})$ and $(y'^{\P},z'^{\P})$ the solutions of the associated BDSDEs. 
Assume that they both verify Assumptions \ref{ass} and \ref{ass1}, and that we have ${\cal P} - q.s.$, $\xi\leq\xi' ,~ \widehat{F}(y_t'^{\P},z_t'^{\P})\leq \widehat{F}^{'}(y_t'^{\P},z_t'^{\P}).$ Then $Y \leq Y' ,$ ${\cal P} - q.s.$ \end{Theorem} \subsection{A priori estimates} In this section, we show some a priori estimates which will not only be useful in the sequel, but also ensure the uniqueness of a solution to a 2BDSDE in $\D^2\times\H^2$. We start with a reminder of the $L^{p}$ estimates for solutions of BDSDEs which were proved in \cite{pp1994} (see Theorem 4.1 p.217). \begin{Theorem}\label{th:estimees} Let Assumptions \ref{ass}, \ref{ass1} and \ref{assumtion g} hold and assume that $\xi\in L^{2+\varepsilon}$ for some $\varepsilon>0$. We have \begin{equation*} \E^{\P}\left[\underset{0\leq t\leq T}{\Sup}|y_t^{\P}|^{2+\varepsilon}+\left(\Int_0^T \|\widehat{a}_s^{1/2}z_s^{\P}\|^2ds\right)^{\frac{2+\varepsilon}{2}}\right] \leq C \E^{\P}\left[|\xi|^{2+\varepsilon}+\Int_0^T (|\widehat{F}_t^0|^{2+\varepsilon} + \|g_t^0\|^{2+\varepsilon}) dt\right]. \end{equation*} \end{Theorem} \noindent The main result of this section is then \begin{Theorem}\label{thestiamte} Let Assumptions \ref{ass}, \ref{ass1} and \ref{assumtion g} hold. \noindent $\rm{(i)}$ Assume $\xi\in\L^{2}\cap L^{2+\varepsilon}$ and that $(Y,Z,K)$ is a solution to the $2$BDSDE \reff{eq}. Then, for any $\varepsilon'\in(0,\varepsilon)$, there exists a constant $C$ such that \begin{align*} \|Y\|^{2+\varepsilon'}_{\D^{2+\varepsilon'}} + \|Z\|^{2+\varepsilon'}_{\H^{2+\varepsilon'}} + \underset{\P\in{\cal P}}{\Sup}\,\E^{\P}\left[ |K_T|^{2+\varepsilon'}\right] \leq C \left(\|\xi\|^{2+\varepsilon'}_{\L^{2+\varepsilon'}} +\|\xi\|^{\frac{2+\varepsilon'}{2+\varepsilon}}_{L^{2+\varepsilon}}+ (\phi^{2,\varepsilon})^{\frac{2+\varepsilon'}{2+\varepsilon}}+(\psi^{2,\varepsilon})^{\frac{2+\varepsilon'}{2+\varepsilon}}\right). \end{align*} \noindent $\rm{(ii)}$ Assume $\xi^i\in\L^{2}\cap L^{2+\varepsilon}$ and that $(Y^i,Z^i, K^i)$ is a solution to the $2$BDSDE \reff{eq}, $i=1,2$. Denote $\delta\xi:=\xi^1-\xi^2,\,\delta Y:=Y^1-Y^2,\,\delta Z:= Z^1-Z^2,\, \text{and}\ \delta K:=K^{1}-K^{2}$. Then, there exists a constant $C$ such that \begin{align*} \|\delta Y\|_{\D^{2}}& \leq C\|\delta\xi\|_{\L^{2}},\ \|\delta Z\|^2_{\H^{2}} + \|\delta K\|^2_{\mathbb I^{2}} \leq C \|\delta\xi\|_{\L^{2}}\sum_{i=1}^2\left(\|\xi^i\|^2_{\L^{2}} +\|\xi^i\|^{\frac{2}{2+\varepsilon}}_{L^{2+\varepsilon}}+ (\phi^{2,\varepsilon})^{\frac 2{2+\varepsilon}}+(\psi^{2,\varepsilon})^{\frac 2{2+\varepsilon}}\right). \end{align*} \end{Theorem} \begin{proof} (i) For every $\P\in{\cal P} $ and $\P^{\prime}\in{\cal P}(t^+,\P)$ we have, using Theorem \ref{theorep} and the usual linearization procedure, that for some bounded processes $\alpha$ and $\beta$ and any $\varepsilon'\in(0,\varepsilon)$ \begin{align*} &\E^{\P}\left[\underset{0\leq t\leq T}{\Sup}\,|Y_t|^{2+\varepsilon'}\right]= \E^{\P}\left[\underset{0\leq t\leq T}{\Sup}\left( \underset{\P^{'}\in{\cal P}(t^+,\P)}{\rm ess \, sup^{\P}}|y_t^{\P^{'}}|\right)^{2+\varepsilon'}\right] \\ \leq&\ \E^{\P}\left[\underset{0\leq t\leq T}{\Sup}\left(\E_t^{{\cal P},\P}\left[|\xi |+\Int_t^T |\widehat{F}_s^0|ds +C\Int_t^T ( |y_s^{\P^{'}}|+ \|\widehat{a}_s^{1/2}z_s^{\P^{'}}\|)ds+\abs{\Int_t^T g^0_s\cdot d\W_s}\right.\right.\right. \\ &\left.\left.\left.
+ \abs{\Int_t^T \alpha_s y_s^{\P^{'}}\cdot d\W_s}+\abs{\Int_t^T \beta_s z_s^{\P^{'}}\cdot d\W_s}+\abs{\Int_t^T z_s^{\P^{'}}\cdot dB_s}\right]\right)^{2+\varepsilon'}\right]\\ \leq&\ C\left(\E^{\P}\left[\underset{0\leq t\leq T}{\Sup} \E_t^{{\cal P},\P}[|\xi |]^{2+\varepsilon'}\right]+\E^{\P}\left[\underset{0\leq t\leq T}{\Sup}\left(\E_t^{{\cal P},\P}\left[\Int_0^T |\widehat{F}_s^0| ds\right]\right)^{2+\varepsilon'}\right]\right)\\ &+ C\left(\E^{\P}\left[\underset{0\leq t\leq T}{\Sup}\left(\E_t^{{\cal P},\P}\left[\Int_0^T |y_s^{\P^{'}}|ds\right]\right)^{2+\varepsilon'}\right]+\E^{\P}\left[\underset{0\leq t\leq T}{\Sup}\left(\E_t^{{\cal P},\P}\left[\Int_0^T \|\widehat{a}_s^{1/2}z_s^{\P^{'}}\|ds\right]\right)^{2+\varepsilon'}\right]\right)\\ &+ C\left(\E^{\P}\left[\underset{0\leq t\leq T}{\Sup}\left(\E_t^{{\cal P},\P}\left[\abs{\Int_t^T g^0_s\cdot d\W_s}\right]\right)^{2+\varepsilon'}\right]+\E^{\P}\left[\underset{0\leq t\leq T}{\Sup}\left(\E_t^{{\cal P},\P}\left[\abs{\Int_t^T z_s^{\P^{'}}\cdot dB_s}\right]\right)^{2+\varepsilon'}\right]\right)\\ &+C\E^{\P}\left[\underset{0\leq t\leq T}{\Sup}\left(\E_t^{{\cal P},\P}\left[\abs{\Int_t^T y_s^{\P^{'}}\alpha_s\cdot d\W_s}\right]\right)^{2+\varepsilon'}+\underset{0\leq t\leq T}{\Sup}\left(\E_t^{{\cal P},\P}\left[\abs{\Int_t^T z_s^{\P^{'}}\beta_s\cdot d\W_s}\right]\right)^{2+\varepsilon'}\right]. \end{align*} \noindent We now remind the reader that, by \cite{sone:touz:zhan:11b} and \cite{poss13}, for any random variable $A$, we have $$\underset{\P\in{\cal P}}{\Sup}\ \E^{\P}\left[\underset{0\leq t\leq T}{\Sup}\ \E_t^{{\cal P},\P}[|A|]^{2+\varepsilon'}\right]\leq C \underset{\P\in{\cal P}}{\Sup}\left(\E^{\P}\left[|A|^{2+\varepsilon}\right]\right)^{\frac{2+\varepsilon'}{2+\varepsilon}}.$$ \noindent We therefore deduce, using the BDG inequalities, that (remember that $\alpha $ and $\beta$ are bounded) \begin{align*} \E^{\P}\left[\underset{0\leq t\leq T}{\Sup}\,|Y_t|^{2+\varepsilon'}\right]\leq &\ C\left(\No{\xi}^{2+\varepsilon'}_{\L^{2+\varepsilon'}}+(\phi^{2,\varepsilon})^{\frac{2+\varepsilon'}{2+\varepsilon}}+(\psi^{2,\varepsilon})^{\frac{2+\varepsilon'}{2+\varepsilon}}\right)\\ &+C\underset{\P\in{\cal P}}{\sup}\ \E^{\P}\left[\underset{0\leq t\leq T}{\Sup}|y_t^{\P}|^{2+\varepsilon}+\left(\Int_0^T \|\widehat{a}_s^{1/2}z_s^{\P}\|^2ds\right)^{\frac{2+\varepsilon}{2}}\right] ^{\frac{2+\varepsilon'}{2+\varepsilon}}. \end{align*} \noindent Finally, we obtain by Theorem \ref{th:estimees} \begin{align} \|Y\|^{2+\varepsilon'}_{\D^{2+\varepsilon'}} \leq C \left(\No{\xi}^{2+\varepsilon'}_{\L^{2+\varepsilon'}}+\No{\xi}_{L^{2+\varepsilon}}^{2+\varepsilon'}+(\phi^{2,\varepsilon})^{\frac{2+\varepsilon'}{2+\varepsilon}}+(\psi^{2,\varepsilon})^{\frac{2+\varepsilon'}{2+\varepsilon}}\right).
\label {eqesty} \end{align} \noindentindent When it comes to the estimate for $Z$, we apply It\^o's formula to $|Y|^2$ under each $\P\in{\cal P}$ and from the Lipschitz Assumption \ref{ass}(iii) we have, using BDG inequality and our assumptions on $g$ and $\widehat F$ \begin{align*} \E^{\P}\left[\left(\Int_0^T \|\widehat{a}^{1/2}_sZ_s\|^2 ds\right)^{\frac{2+\varepsilon'}{2}}\right] \leq&\ C \E^{\P}\left[|\xi|^{2+\varepsilon'}+\left(\Int_0^T |Y_s|(|\widehat{F}^0_s|+|Y_s|+\|\widehat{a}^{1/2}_sZ_s\|)ds\right)^{\frac{2+\varepsilon'}{2}}\right]\\ &+C\E^{\P}\left[\left(\int_0^TY_s^2\No{g_s(Y_s,Z_s)}^2ds\right)^{\frac{2+\varepsilon'}{4}}+\left(\int_0^TY_s^2\No{\widehat a_s^{1/2}Z_s}^2ds\right)^{\frac{2+\varepsilon'}{4}}\right]\\ &+C\E^\P\left[\left(\Int_0^T\| g_s(Y_s,Z_s)\|^2ds\right)^{\frac{2+\varepsilon'}{2}}+\Int_0^T|Y_s|^{\frac{2+\varepsilon'}{2}}dK_s\right]\\ \leq &\ C \nu^{-1} \E^{\P}\left[|\xi|^{2+\varepsilon'}+\underset{0\leq s\leq T}{\Sup}|Y_s|^{2+\varepsilon'}+\Int_0^T|\widehat{F}^0_s|^{2+\varepsilon'}ds+\Int_0^T\| g^0_s\|^{2+\varepsilon'}ds\right]\\ &+ \nu \E^{\P}\left[\left(\Int_0^T\|\widehat{a}^{1/2}_sZ_s\|^2ds\right)^{\frac{2+\varepsilon'}{2}}+|K_T|^{2+\varepsilon'}+\alpha^{\frac{2+\varepsilon'}{2}} \left(\Int_0^T\|Z_s\|^2ds\right)^{\frac{2+\varepsilon'}{2}}\right], \end{align*} for any $\nu\in(0,1]$. But by the definition of $K_T$, it is clear that \begin{align}\label{eq:k} \E^{\P}\left[|K_T|^{2+\varepsilon'}\right]\leq C_0\E^{\P}\left[|\xi|^{2+\varepsilon'}+\underset{0\leq s\leq T}{\Sup} |Y_s|^{2+\varepsilon'}+\left(\Int_0^T\|\widehat{a}^{1/2}_sZ_s\|^2ds\right)^{\frac{2+\varepsilon'}2} +\Int_0^T\left(|\widehat{F}^0_s|^{2+\varepsilon'}+ \|g^0_s\|^{2+\varepsilon'}\right)ds\right], \end{align} for some constant $C_0$ independent of $\varepsilon$. \noindentindent Then, using in particular Assumption \ref{ass}(iv) \begin{align*} \E^{\P}\left[\left(\Int_0^T \|\widehat{a}^{1/2}_sZ_s\|^2 ds\right)^{\frac{2+\varepsilon'}{2}}\right]\leq&\ C \nu^{-1} \E^{\P}\left[|\xi|^{2+\varepsilon'}+\underset{0\leq s\leq T}{\Sup}|Y_s|^{2+\varepsilon'} +\Int_0^T|\widehat{F}^0_s|^{2+\varepsilon'}ds+\Int_0^T \|g^0_s\|^{2+\varepsilon'}ds\right]\\ &+ (\nu +C_0\nu + (1-\lambda)^{\frac{2+\varepsilon'}{2}})\E^{\P}\left[\left(\Int_0^T\|\widehat{a}^{1/2}_sZ_s\|^2ds\right)^{\frac{2+\varepsilon'}{2}}\right]. \end{align*} Choosing $\nu$ small enough, this implies the desired result by (\ref{eqesty}). Finally, the estimate for the $K$ follows directly from \reff{eq:k}. 
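\noindent Let us also record, for the reader's convenience, one admissible choice of $\nu$ in the absorption argument above (this is only an illustration; we assume here $\lambda>0$, noting that when $\alpha=0$ the constraint simply reduces to $\nu(1+C_0)<1$). Since $1-\lambda\in[0,1)$ and $\frac{2+\varepsilon'}{2}\geq 1$, we have $(1-\lambda)^{\frac{2+\varepsilon'}{2}}\leq 1-\lambda$, so that the choice $$\nu:=\frac{\lambda}{2(1+C_0)}\wedge 1\quad\text{yields}\quad \nu+C_0\nu+(1-\lambda)^{\frac{2+\varepsilon'}{2}}\leq \frac{\lambda}{2}+1-\lambda= 1-\frac{\lambda}{2}<1,$$ which allows the last term on the right--hand side to be absorbed into the left--hand side.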
(ii) First of all, we can follow the same arguments as in (i) above to obtain the existence of a constant $C$, depending only on $T$ and the Lipschitz constants of $\widehat{F}$ and $g$, such that for all $\P\in{\cal P}$ \begin{align} \E^{\P}\left[\underset{0\leq t\leq T}{\Sup}\,|\delta Y_t|^2\right]= \E^{\P}\left[\underset{0\leq t\leq T}{\Sup}\left( \underset{\P^{'}\in{\cal P}(t^+,\P)}{\rm ess \, sup^{\P}}|\delta y_t^{\P^{'}}|\right)^2\right]\leq C\left(\|\delta\xi\|_{\L^{2}}^2+\|\delta\xi\|_{\L^{2+\varepsilon}}^{\frac 2{2+\varepsilon}}\right) \label{eqest1} \end{align} Applying It\^o's formula to $|\delta Y|^2$, under each $\P\in{\cal P}$, leads to \begin{align*} \E^{\P}\left[\Int_0^T \|\widehat{a}^{1/2}_s\delta Z_s\|^2 ds\right]\leq &\ C\E^{\P}\left[|\delta\xi|^2+\Int_0^T |\delta Y_s|(|\delta Y_s| +\|\widehat{a}^{1/2}_s\delta Z_s\|)ds+\Int_0^T|\delta Y_s|d|\delta K_s|+\Int_0^T|\delta Y_s|^2ds\right]\\ \leq&\ C \E^{\P}\left[|\delta\xi|^2+\underset{0\leq s\leq T}{\Sup}|\delta Y_s|^2+\underset{0\leq s\leq T}{\Sup}|\delta Y_s|^2[K_T^{1}+K_T^{2}]+ \Frac{1}{2}\Int_0^T\|\widehat{a}^{1/2}_s\delta Z_s\|^2ds\right]. \end{align*} The estimate for $\delta Z$ is now obvious from the above inequality and the estimates of (i). Finally, the estimate for the difference of the increasing processes is obvious by definition. \ep \end{proof} \section{Existence by a pathwise construction of the solution} \label{A direct existence argument} As we have shown in Theorem \ref{theorep}, if a solution to the $2$BDSDE (\ref{eq}) exists, it can necessarily be represented as a supremum of solutions to standard BDSDEs. However, since we are working under a family of non-dominated probability measures, we cannot use the classical techniques of BSDEs to construct such a solution. We will therefore follow the original approach of Soner, Touzi and Zhang \cite{sone:touz:zhan:13}, who overcame this problem by constructing the solution pathwise, using the so-called regular conditional probability distribution. \subsection{Notations related to shifted spaces} For any $0\leq t\leq T$, we denote by $\Omega^{B,t}:= \{\omega\in C([t,T],\R^d),\ \omega(t) = 0\}$ the shifted canonical space, $B^t$ the shifted canonical process, $\P_0^{B,t}$ the shifted Wiener measure and ${\cal F}^{B,t}$ the shifted raw filtration generated by $B^t$. The pathwise density of its quadratic variation is denoted by $\widehat a^t$. We then let $\Omega^t:=\Omega^{B,t}\times\Omega^W$, and exactly as in Section 2, we can define the set $\overline{{\cal P}}_S^t$, by restricting the corresponding measures to the shifted space $\Omega^t$. \noindent Next, for any $0\leq s\leq t\leq T$ and $\omega\in\Omega^s$, we define the shifted path $\omega^t:=(\omega^{B,t},\omega^{W})\in\Omega^t$ by $$\omega^{B,t}_r:=\omega_r^B-\omega_t^B,\ \forall r\in[t,T],$$ and for $\omega^B\in\Omega^{B,s}, \ \tilde{\omega}^B\in\Omega^{B,t}$ we define the concatenated path by \begin{align*} (\omega^B\otimes_t\tilde{\omega}^B)(r):=\omega^B_r{\bf 1}_{[s,t)}(r)+(\omega^B_t+\tilde{\omega}^B_r){\bf 1}_{[t,T]}(r),\ \forall r\in[s,T].
\end{align*} Similarly, for any ${\cal F}_T^s-$measurable random variable $\xi$ on $\Omega^s$, and for each $(\omega^B,\omega^W)\in\Omega^s$, we define the ${\cal F}_T^t-$measurable random variable $\xi^{t,\omega^B}$ on $\Omega^t$ by $$\xi^{t,\omega^B}(\tilde{\omega}^B,\omega^W):=\xi(\omega^B\otimes_t\tilde{\omega}^B,\omega^W),\ \forall(\tilde{\omega}^B,\omega^W)\in\Omega^t.$$ The shifted generators that we consider are, for every $(s,(\tilde{\omega}^B,\omega^W))\in[t,T]\times\Omega^t$ \begin{align*} \widehat{F}_s^{t,\omega^B}((\tilde{\omega}^B,\omega^W),y,z)&:= F_s((\omega^B\otimes_t\tilde{\omega}^B,\omega^W),y,z,\widehat{a}_s^t(\tilde{\omega}^B,\omega^W)), \\ g_s^{t,\omega^B}((\tilde{\omega}^B,\omega^W),y,z)&:= g_s((\omega^B\otimes_t\tilde{\omega}^B,\omega^W),y,z). \end{align*} Then note that since $F$ and $g$ are assumed to be uniformly continuous in $\omega$, then so are the maps $(\omega^B,\omega^W)\longmapsto F_s((\omega^B\otimes_t\cdot,\omega^W),\cdot)$ and $(\omega^B,\omega^W)\longmapsto g_s^{t,\omega^B}((\cdot,\omega^W),\cdot)$. Notice that this implies that for any $\P\in\overline{{\cal P}}_S^t$ $$ \E^{\P}\left[\left(\Int_t^T |\widehat{F}_s^{t,\omega^B}(0,0)|^2 ds\right)\right]+\E^{\P}\left[\left(\Int_t^T \|g_s^{t,\omega^B}(0,0)\|^{2}ds\right)\right] <+\infty,$$ for some $\omega^B\in\Omega^B$ if and only if it holds for all $\omega^B\in\Omega^B$. \noindentindent We also extend Definition \ref{def} in the shifted spaces \begin{Definition} ${\cal P}^{t}$ is the subset of $\overline{{\cal P}}_S^t$, consisting of measures $\P$ such that $$\underline{a}_{\P}\leq \widehat{a}^t\leq\overline{a}_{\P} ,~dt\times d\P-a.e.~\text{on}~ [t,T]\times\Omega^{t},\ \text{for some}~ \underline{a}_{\P},\overline{a}_{\P}\in\S_d^{>0},$$ $$ \E^{\P}\left[\left(\Int_t^T |\widehat{F}_s^{t,\omega^B}(0,0)|^{2} ds\right)\right] + \E^{\P}\left[\left(\Int_t^T \|g_s^{t,\omega^B}(0,0)\|^{2}ds\right)\right] <+\infty,~ \text{for all}~ \omega^B\in\Omega^B.$$ \end{Definition} \noindentindent Finally, by Stroock and Varadhan \cite{SV79}, for any $\F^B-$stopping time $\tau$, any probability measure $\P_B$ on $(\Omega^B,{\cal F}^B)$, and any $\omega^B\in\Omega^B$, there exists a regular conditional probability distribution (r.p.c.d. for short), $\P_{B,\tau(\omega^B)}^{\omega^B}$ with respect to the $\sigma-$field ${\cal F}^B_\tau$ (since it is countably generated). Such a measure verifies that for every integrable ${\cal F}^B_T-$measurable random variable $\xi$, we have for $\P^B-a.e.$ $\omega^B$ $$\E^{\P^B}[\left.\xi\right|{\cal F}^B_\tau](\omega^B) = \E^{\P_{B,\tau(\omega^B)}^{\omega^B}}[\xi].$$ Furthermore, this r.c.p.d. naturally induces a probability measure $\P_{B}^{\tau(\omega^B),\omega^B}$ on $(\Omega^{B,\tau(\omega^B)},\mathcal F^{B,\tau(\omega^B)}_T)$ such that $$\E^{\P_{B,\tau(\omega^B)}^{\omega^B}}[\xi]=\E^{\P_{B}^{\tau(\omega^B),\omega^B}}[\xi^{\tau(\omega^B),\omega^B}].$$ Notice that if we consider a stochastic kernel $\{\P_B(\omega^W),\ \omega^W\in\Omega^W\}$ on $(\Omega^B,{\cal F}^B_T)$, then for any $\omega^B\in\Omega^B$, $\{\P_B^{\tau(\omega^B),\omega^B}(\omega^W),\ \omega^W\in\Omega^W\}$ is a stochastic kernel on $(\Omega^{B,\tau(\omega^B)},{\cal F}^{B,\tau(\omega^B)}_T)$. \noindentindent Let us now consider a $\G-$stopping time $\tau$. 
Then, using the classical results of Stricker and Yor \cite{SY78}, we know that there is a $\P_0^W$ version of $\tau$ (still denoted $\tau$ for simplicity) such that the map $\omega^B\longmapsto \tau(\omega^B,\omega^W)$ defines a $\F^B-$stopping time for $\P_0^W-a.e.$ $\omega^W\in\Omega^W$. Let again $\{\P_B(\omega^W),\ \omega^W\in\Omega^W\}$ be a stochastic kernel on $(\Omega^B,{\cal F}^B_T)$ and let us define the measure $\P$ on $(\Omega,{\cal F})$ by $$d\P(\omega^B,\omega^W)=d\P_B(\omega^W;\omega^B)d\P_0^W(\omega^W).$$ We claim (and refer the reader to the proof of the more general result in Lemma \ref{BDSDEshift}) that we can write for $\P-a.e.$ $\omega\in\Omega$ \begin{align*} \E^{\P}[\left.\xi\right|\mathcal G_\tau](\omega^B,\omega^W)&=\E^{\mathbb P_0^W}\left[\left.\int_{\Omega^B}\xi^{\tau(\omega^B,\cdot),\omega^B}(\widetilde\omega^B)d\P_B^{\tau(\omega^B,\cdot),\omega^B}(\cdot;\widetilde\omega^B)\right|{\cal F}_T^W\right](\omega^W)\\ &=\mathbb E^{\mathbb P_B^{\tau(\omega^B,\omega^W),\omega^B}(\omega^W)}\left[\xi^{\tau(\omega^B,\omega^W),\omega^B}(\cdot,\omega^W)\right]. \end{align*} Moreover, by Lemma 4.1 in \cite{sone:touz:zhan:13}, we know that for any probability measure $\P\in\overline{{\cal P}}_S$ on $(\Omega,{\cal F}_T)$ such that $d\P(\omega^B,\omega^W)=d\P_B(\omega^W;\omega^B)d\P_0^W(\omega^W)$, we have for $\P-a.e.$ $\omega\in\Omega$, for any $\F^B-$stopping time $\tau$ and for $ds\times d\P^{\tau(\omega^B),\omega^B}(\omega^W)-a.e.\ (s,\tilde\omega^B)\in[\tau(\omega^B),T]\times\Omega^{B,\tau(\omega^B)}$ \begin{equation}\label{eq:aa} \widehat a_s^{\tau(\omega^B),\omega^B}(\tilde \omega^B,\omega^W)=\widehat a^{\tau(\omega^B)}_s(\tilde\omega^B,\omega^W), \end{equation} which justifies the definition of the shifted generator $\widehat F^{t,\omega^B}$. \subsection{Existence when $\xi$ is in ${\rm{UC}}_b(\Omega)$} When $\xi$ is in $\text{UC}_b(\Omega)$, we know that there exists a modulus of continuity function $\rho$ for $\xi, F$ and $g$ in $\omega$. Then, for any $0\leq t\leq s\leq T, (y,z)\in\R\times\R^d$ and $(\omega^B, \omega^{\prime,B})\in\Omega^B\times\Omega^B,$ $\widetilde{\omega}^B\in\Omega^{B,t}$ and $\omega^W\in\Omega^W$, \begin{align*} |\xi^{t,\omega^B}(\widetilde{\omega}^B,\omega^W)- \xi^{t,\omega^{\prime,B}}(\widetilde{\omega}^B,\omega^W)|&\leq \rho(\|\omega^B-\omega^{\prime,B}\|_t),\\ |\widehat{F}_s^{t,\omega^B}((\widetilde{\omega}^B,\omega^W),y,z)- \widehat{F}_s^{t,\omega^{\prime,B}} ((\widetilde{\omega}^B,\omega^W),y,z)|&\leq \rho(\|\omega^B-\omega^{\prime,B}\|_t),\\ | g_s^{t,\omega^B}((\widetilde{\omega}^B,\omega^W),y,z)- g_s^{t,\omega^{\prime,B}} ((\widetilde{\omega}^B,\omega^W),y,z)|&\leq \rho(\|\omega^B-\omega^{\prime,B}\|_t). \end{align*} Using this regularity and Assumption \ref{ass}, it is easy to see that we have for all $(t,\omega^B)\in[0,T]\times\Omega^B$. 
$$\Lambda_t(\omega^B):= \underset{\P\in{\cal P}^{t}}{\Sup}\left(\E^{\P}\left[|\xi^{t,\omega^B}|^2+\Int_t^T|\widehat{F}_s^{t,\omega^B}(0,0)|^2 ds +\Int_t^T\|g_s^{t,\omega^B}(0,0)\|^2 ds\right]\right)^{1/2} < +\infty.$$ To prove existence, we define the following value process $V_t$ for every $\omega^B$ \begin{align}\label{definitionV} V_t(\omega^B,\cdot):=\underset{\P\in{\cal P}^{t}}{\rm ess \, sup}^{\P^0_W}\, {\cal Y}_t^{\P,t,\omega^B}(T,\xi)(\cdot)\,,\, \P^0_W -a.s., \end{align} where, for any $(t_1,\omega^B)\in[0,T]\times \Omega^B,~ \P\in{\cal P}^{t_1}, ~ t_2 \in [t_1,T]$, and any ${\cal F}_{t_2}-$measurable $\eta\in\L^2(\P)$, we denote ${\cal Y}_{t_1}^{\P,t_{1},\omega^B}(t_2,\eta):= y_{t_1}^{\P,t_{1},\omega^B}$, where $(y^{\P,t_{1},\omega^B},z^{\P,t_{1},\omega^B})$ is the solution of the following BDSDE on the shifted space $\Omega^{t_1}$ under $\P$, \begin{align} y_s^{\P,t_{1},\omega^B}=&\ \eta^{t_{1},\omega^B} +\Int_s^{t_2}\widehat{F}_r^{t_{1},\omega^B}(y_r^{\P,t_{1},\omega^B},z_r^{\P,t_{1},\omega^B})dr -\Int_s^{t_2} z_r^{\P,t_{1},\omega^B}\cdot dB_r^{t_1} +\Int_s^{t_2} g_r^{t_{1},\omega^B}(y_r^{\P,t_{1},\omega^B},z_r^{\P,t_{1},\omega^B})\cdot d\W_r. \end{align} The following lemma links BDSDEs on the shifted spaces. Its technical proof is postponed to the Appendix. \begin{Lemma} \label{BDSDEshift} Fix some $\P\in\overline{\mathcal P}_S$ such that $d\P(\omega^B,\omega^W):=d\P_B(\omega^B;\omega^W)d\P_0^W(\omega^W)$. For $\P-a.e.$ $\omega\in\Omega$, the following equality holds $$y_t^{\P_B^{t,\omega^B}(\cdot)\otimes\P^0_W}(\omega^W)=y_t^{\P}(\omega^B,\omega^W),\ t\in[0,T],$$ where $d(\P_B^{t,\omega^B}(\cdot)\otimes\P^0_W)(\omega^B,\omega^W):=d\P_B^{t,\omega^B}(\omega^B;\omega^W)d\P_0^W(\omega^W)$. \end{Lemma} \noindent We point out that for classical $2$BSDEs, Soner, Touzi and Zhang have proved in Lemma 4.6 of \cite{sone:touz:zhan:13} a regularity result for the value process, precisely the uniform continuity with respect to the trajectory $\omega^B$, which is crucial to prove their dynamic programming principle (Proposition 4.7 in \cite{sone:touz:zhan:13}). Since, in our context, the value process $V$ defined in \eqref{definitionV} is a random field depending on two sources of randomness, we prove the following regularity result, which is weaker than Lemma 4.6 of \cite{sone:touz:zhan:13}. Once again, we cannot obtain the same regularity in the context of doubly stochastic 2BSDEs because we cannot have path--wise estimates for their solutions. \begin{Lemma} \label{mesurabiliteV}We have for every $(\omega^{B,1},\omega^{B,2})\in\Omega^B\times\Omega^B$ $$\E^{\P^0_W}\left[\left(V_t(\omega^{B,1},\cdot)-V_t(\omega^{B,2},\cdot)\right)^2\right]\leq \rho^2\left(\|\omega^{B,1}-\omega^{B,2}\|_t\right).$$ In particular, this implies that the map $\omega^B\longmapsto V_t(\omega^B,\cdot)$ is uniformly continuous in probability $($with respect to $\P^0_W)$, which implies that there is a $\P^0_W-$version, which we still denote $V$ for simplicity, which is jointly measurable in $(\omega^B,\omega^W)$, and more precisely, such that $V_t$ is ${\cal F}_t-$measurable or even ${\cal F}^B_t\otimes{\cal F}^{o,W}_{t,T}-$measurable. \end{Lemma} \begin{proof} The estimate is an easy consequence of classical a priori estimates for BDSDEs, using in particular the uniform continuity in $\omega$ of $F$, $g$ and $\xi$. The reasoning is quite similar to the one we used in the proof of Theorem \ref{thestiamte}, so that we omit it.
As for the existence of a measurable version, this is a classical result using the fact that the topology of convergence in probability is metrizable (see for instance Dellacherie and Meyer \cite{dm}, chapter IV, Theorem 30, or the proof of Corollary A.3 in \cite{DKN07}).\ep \end{proof} \noindent We then have the following joint measurability result. \begin{Lemma}\label{mes2} The map $(t,\omega^B,\omega^W)\longmapsto V_t(\omega^B,\omega^W)$ is ${\cal B}([0,T])\otimes{\cal F}_t-$measurable. \end{Lemma} \proof First of all, we claim that the family $\{{\cal Y}_t^{\P,t,\omega^B}(T,\xi),\ \P\in{\cal P}^t\}$ is upward directed. Indeed, this can be proved exactly as in Step (iii) of the proof of Theorem 4.3 of \cite{STZ10}. As a consequence, we know that there is a sequence $(\P^n)_{n\geq 0}\subset{\cal P}^t$ such that for $\P_0^W-a.e.$ $\omega^W\in\Omega^W$ $$V_{t}(\omega^B,\omega^W) =\underset{n\geq 0}{\sup}\ {\cal Y}^{\P^n,t,\omega^B}_t(\omega^W).$$ Now arguing exactly as in Step (i) of the proof of Theorem 2.1 in \cite{PTZ14}, using in particular the fact that we can always mimic the construction in Section 2.5.2 of \cite{PTZ14} to obtain that the map $(t,\P,\omega^B,\omega^W)\longmapsto {\cal Y}^{\P,t,\omega^B}_t(\omega^W)$ is Borel measurable, we deduce that $(t,\omega^B,\omega^W)\longmapsto V_t(\omega^B,\omega^W)$ is ${\cal B}([0,T])\otimes{\cal F}_T^{B}\otimes{\cal F}_{0,T}^{o,W}-$universally measurable. But then it suffices to use the result of Lemma \ref{mesurabiliteV} to conclude. \ep \noindent Now, we present the main result concerning the dynamic programming principle in our context. We follow the approach of Possama\"i, Tan and Zhou \cite{PTZ14}, who proved an existence result for $2$BSDEs with only measurable parameters. Their proof is based on a dynamic programming principle which requires no regularity of the terminal condition and the generator, and which is itself strongly inspired by the classical results recalled, for instance, in the papers \cite{elkarouitan1,elkarouitan2}. We therefore omit the proof. \begin{Theorem}\label{dynam prog} Under the Assumptions \ref{ass}, \ref{ass1} and for $\xi\in{\rm UC}_b(\Omega)$, we have for all $0 \leq t_1\leq t_2\leq T$ \begin{align} V_{t_1}(\omega^B, \omega^W) = \underset{\P\in{\cal P}^{t_1}}{{\rm ess \, sup}^\P}\ {\cal Y}_{t_1}^{\P,t_{1},\omega^B}(t_2,V_{t_2}^{t_{1},\omega^B } (\cdot,\omega^W)) , \; \, \P-a.e.\ \omega\in\Omega. \end{align} \end{Theorem} \noindent Next, we introduce the right limit of $V$, which is clearly ${\cal F}_{t^+}-$measurable \begin{align} \label{defV+} V_t^{+}:= \underset{r\in\Q\cap(t,T],r\downarrow t}{\overline{\Lim}} V_r. \end{align} We have the following regularity result, whose proof is postponed until the appendix. \begin{Lemma}\label{Lemmaregularity} Under the Assumptions \ref{ass}, \ref{ass1}, we have $$V_t^{+}= \underset{r\in\Q\cap(t,T],r\downarrow t}{{\Lim}}V_r, \ {\cal P} -q.s.$$ and thus $V^{+}$ is c\`adl\`ag ${\cal P} -q.s.$ \end{Lemma} \noindent Thanks to the dynamic programming principle for $V$, as well as the regularity of $V^+$ we have just proved, we can now show that $V^+$ is actually a semi--martingale and admits a particular decomposition under any $\P\in{\cal P}$.
\begin{Proposition}\label{DecompositionV+} Under Assumptions \ref{ass}, \ref{ass1}, for any $\P\in{\cal P}$, denoting by $\G^\P_+$ the usual augmentation of the right-limit of $\G$ under $\P$, there is a $\G^\P_+-$predictable process $\widetilde Z^\P$, which is also $\overline{\cal F}_t^\P-$measurable for a.e. $t\in[0,T]$, and a non-decreasing c\`adl\`ag and $\G^\P_+-$predictable process $\widetilde{K}^\P$, which is also $\overline{\cal F}_t^\P-$measurable for a.e. $t\in[0,T]$, such that $V^+$ defined by \eqref{defV+} satisfies, for all $s\in[0,T]$, \begin{align*} V_s^{+} = \xi+\Int_s^T\widehat{F}_r(V_r^{+},\widetilde{Z}^{\P}_r)dr+\Int_s^T g_r(V_r^{+},\widetilde{Z}^{\P}_r)\cdot d\W_r-\Int_s^T \widetilde{Z}^{\P}_r \cdot dB_r+\widetilde{K}_T^{\P}-\widetilde{K}_s^{\P}, \ \P-a.s. \end{align*} \end{Proposition} \begin{proof} We first introduce the following RBDSDE with lower obstacle $V^{+}$ under each $\P\in{\cal P}^{t}$, \begin{align*}\left\lbrace \begin{aligned} &\widetilde{Y}^{\P}_t =\xi+\Int_t^T\widehat{F}_s(\widetilde{Y}^{\P}_s,\widetilde{Z}^{\P}_s)ds+\Int_t^T g_s(\widetilde{Y}^{\P}_s,\widetilde{Z}^{\P}_s)\cdot d\W_s -\Int_t^T \widetilde{Z}^{\P}_s \cdot dB_s+\widetilde{K}_T^{\P}-\widetilde{K}_t^{\P},\\ &\widetilde{Y}^{\P}_t \geq V_t^{+},\ 0\leq t\leq T, \ \P-a.s.,\\ & \Int_0^T(\widetilde{Y}^{\P}_{s^-}-V^{+}_{s^-})d\widetilde{K}_{s^-}^{\P}=0 ,\ \P-a.s. \end{aligned} \right.\end{align*} To the best of our knowledge, there are no results in the literature for the existence and uniqueness of such RBDSDEs with c\`adl\`ag obstacles. The proofs of these results are postponed to Section \ref{RBDSDE:section} in the Appendix for completeness. As mentioned in Remark 4.9 in \cite{sone:touz:zhan:13}, and for a fixed $\P\in{\cal P}^{t}$, we shall use the solution of the above RBDSDEs and the notion of $\widehat{F}-$weak doubly super--martingale, which is introduced in the Appendix, to prove the desired result. This notion is a natural extension of the nonlinear $f-$super--martingales first introduced by Peng \cite{Peng97} in the context of standard BSDEs. Let us now argue by contradiction and suppose that $\widetilde{Y}^{\P}$ is not $\P-a.s.$ equal to $V^{+}$. Then we can assume without loss of generality that $\widetilde{Y}^{\P}_0> V_0^{+}, \P-a.s.$ For each $\varepsilon>0$, define the following ${\G}-$stopping time $$\tau^{\varepsilon}:= \Inf \{t\geq 0,\ \widetilde{Y}^{\P}_t \leq V_t^{+}+\varepsilon\}.$$ Then $\widetilde{Y}^{\P}$ is strictly above the obstacle before $\tau^{\varepsilon}$, and therefore $\widetilde{K}^{\P}$ is identically equal to $0$ in $[0,\tau^{\varepsilon}]$.
Hence, we have for all $0\leq t\leq s\leq T$ $$\widetilde{Y}^{\P}_s = \widetilde{Y}^{\P}_{\tau^{\varepsilon}}+\Int_s^{\tau^{\varepsilon}} \widehat{F}_r(\widetilde{Y}^{\P}_r,\widetilde{Z}^{\P}_r)dr +\Int_s^{\tau^{\varepsilon}}g_r(\widetilde{Y}^{\P}_r,\widetilde{Z}^{\P}_r)\cdot d\W_r-\Int_s^{\tau^{\varepsilon}}\widetilde{Z}^{\P}_r\cdot dB_r, \ \P-a.s.$$ Let us now define the following BDSDE on $[0,\tau^{\varepsilon}]$ $$ y_s^{+,\P} = V_{\tau^{\varepsilon}}^{+} + \Int_s^{\tau^{\varepsilon}}\widehat{F}_r(y_r^{+,\P},z_r^{+,\P})dr+\Int_s^{\tau^{\varepsilon}} g_r(y_r^{+,\P},z_r^{+,\P})\cdot d\W_r -\Int_s^{\tau^{\varepsilon}}z_r^{+,\P}\cdot dB_r , \ \P-a.s.$$ By comparison theorem and the standard a priori estimates, we obtain that $$\E[\widetilde{Y}^{\P}_0] \leq \E[y_0^{+,\P}]+C\E\big[|V_{\tau^{\varepsilon}}^{+}-\widetilde{Y}^{\P}_{\tau^{\varepsilon}}|\big]\leq \E[y_0^{+,\P}]+ C\varepsilon,$$ by definition of $\tau^{\varepsilon}.$ \noindentindent Moreover, we can show similarly to the proof of Lemma \ref{Lemmaregularity} (see also the arguments in Step 1 of the proof of Theorem 4.5 in \cite{sone:touz:zhan:13} pages 328--329) that $V^+$ is a strong $\widehat{F}-$doubly super--martingale under each $\P\in{\cal P}^{t}$. Thus, we obtain in particular that $y_0^{+,\P} \leq V_0^{+}$ which in turn implies $$\E[\widetilde{Y}^{\P}_0] \leq \E[V_0^{+}]+ C\varepsilon, $$ hence a contradiction by arbitrariness of $\varepsilon$. \ep \end{proof} \noindentindent We next prove a representation for $V^{+}$ similar to (\ref{eqrepresentation}), which will be useful for us to justify that the value process we have constructed provides indeed a solution to the 2BDSDE \reff{eq}. \begin{Proposition} Assume that Assumptions \ref{ass}, \ref{ass1} hold. Then we have \begin{align} V_t^{+}=\underset{\P^{'}\in{\cal P}(t+,\P)}{{\rm ess \, sup^{\P}}}{\cal Y}_t^{\P^{'}}(T,\xi),\ \P-a.s.,\ \forall \P\in{\cal P}^t. \label{repv} \end{align} \end{Proposition} \begin{proof} The proof for the representations is the same as the proof of Lemma 3.5 in \cite{PTZ14}, since a stability result holds in our context, too. \ep \end{proof} \subsection{Existence result in the general case} We are now in position to state the main result of this section. \begin{Theorem} \label{Thexistence} Let $\xi\in{\cal L}^{2}$ and assume that Assumptions \ref{ass}, \ref{ass1} hold. Then there exists a unique solution $(Y,Z,K)\in\D^{2}\times \H^{2}\times\mathbb I^2$ of the 2BDSDE \reff{eq}. \end{Theorem} \begin{proof} The proof is divided in three steps. In the first one we prove that the value process $V^+$ defined by \eqref{defV+} is the solution of our $2$BDSDE in the case when $\xi$ belongs in ${\rm UC}_b (\Omega)$ and show the aggregation result for the solution. Then, in the second step we verify the minimality condition for the increasing process. Finally, we deal with the general case. {\it{Step 1: Existence and aggregation results for $\xi$ belongs in ${\rm UC}_b (\Omega)$. }} As we have mentioned above, the natural candidate for the $Y$ solution for our $2$BDSDE is given by $$Y_t= V_t^+:=\underset{r\in\Q\cap(t,T],r\downarrow t}{{\Lim}}V_r,$$ where $V$ is the value process defined by \eqref{definitionV}. 
First, we know that $V^+$ is a c\`adl\`ag process defined path--wise and, using the same notations as in Proposition \ref{DecompositionV+}, our solution $Y$ satisfies $$V_t^+=V_0^+-\Int_0^t\widehat F_s(V_s^+,\widetilde{Z}_s^\P)ds-\Int_0^t g_s(V_s^+,\widetilde{Z}_s^\P)\cdot d\W_s+\Int_0^t\widetilde{Z}_s^{\P} \cdot dB_s-\widetilde K_t^\mathbb P, \text{ }\mathbb P-a.s., \text{ }\forall \mathbb P\in\mathcal P^t.$$ We note that $V^+$ is ($\P-a.s.$) a c\`adl\`ag generalized semi--martingale under any $\P\in{\cal P}$ (a class studied by Pardoux and Protter in \cite{PP87} and Pardoux and Peng \cite{pp1994}). By the generalized It\^o formula of Lemma \ref{proof of lemmaitô}, we have for any $i=1,\dots,d$ \begin{align*} B_t^iV_t^+=&\ \int_0^t\left(\widehat a_s^{1/2}\widetilde{Z}_s^\P\cdot{\bf 1}_i-\widehat F_s(V_s^+,\widetilde{Z}_s^\P)B^i_s\right)ds+\int_0^t\left(\widetilde{Z}_s^{\P}B^i_s+{\bf 1}_dV_s^+\right)\cdot dB_s\\ &-\int_0^tg_s(V_s^+,\widetilde{Z}_s^\P)B^i_s\cdot d\W_s-\int_0^tB^i_sd\widetilde K_s^\P\\ =&\ \int_0^t\widehat a_s^{1/2}\widetilde{Z}_s^\P\cdot{\bf 1}_ids+\int_0^tB^i_sdV_s^++\int_0^t{\bf 1}_dV_s^+\cdot dB_s. \end{align*} Then, we can adapt Karandikar's results, obtained for c\`adl\`ag semi--martingales, to our context in order to define universally the two stochastic integrals $$\int_0^tB^i_sdV_s^+ \text{ and }\int_0^t{\bf 1}_dV_s^+\cdot dB_s.$$ Indeed, $B$ and $V^+$ are both c\`adl\`ag and a backward It\^o integral can always be considered as a forward It\^o integral, provided that time is reversed. \noindent This gives us the existence of a $\G-$predictable process $Z$ such that for any $\P\in{\cal P}$ $$\widetilde Z^\P_t=Z_t,\ dt\otimes\P-a.e.$$ Furthermore, since for any $\P\in{\cal P}$, $\widetilde Z^\P_t$ is $\overline{\cal F}_t^\P-$measurable for a.e. $t\in[0,T]$, we deduce immediately that $Z_t$ is $\bigcap_{\P\in{\cal P}}\overline\F_t^\P-$measurable for a.e. $t\in[0,T]$. \noindent Concerning the fact that we can aggregate the family $(\widetilde K^\mathbb P)_{\mathbb P\in\mathcal P}$, it can be deduced as follows. We have from \eqref{defV+} that $V^+$ is defined path--wise, and so is the Lebesgue integral $\int_0^t\widehat F_s(V_s^+,Z_s)ds$. By \cite{N12}, the stochastic integrals $\Int_0^tZ_s\cdot dB_s$ and $\Int_0^t g_s(V_s^+,Z_s)\cdot d\W_s$ can also be defined path--wise. We can therefore define path--wise $$ K_t:=V_0^+-V_t^+-\Int_0^t\widehat F_s(V_s^+, Z_s)ds- \Int_0^t g_s(V_s^+,Z_s)\cdot d\W_s+\Int_0^tZ_s\cdot dB_s,$$ and $K$ is an aggregator for the family $(\widetilde K^\mathbb P)_{\mathbb P\in\mathcal P^t}$. Thus, the triplet $(Y,Z, K)$ satisfies the equation (\ref{eq}) and from the {\it a priori} estimates in Theorem \ref{theorep} we get that $(Y, Z, K)$ belongs to $\D^{2}\times \H^{2}\times \I^{2}$. {\it{Step 2: The minimality condition for $K$}.} Now, we have to check that the minimum condition (\ref{eqmin}) holds. We follow the arguments in the proof of Theorem \ref{theorep}. For $t\in[0,T],\ \P\in{\cal P}^t$ and $\P^{\prime}\in{\cal P}(t^+,\P)$, we denote $\delta Y := V^{+}-y^{\P^{'}}(T,\xi)$ and $\delta Z := Z-z^{\P^{'}}(T,\xi)$ and we introduce the process $M$ of (\ref{M}). We first observe that since ${K}$ is non-decreasing, we have $$\underset{\P^{\prime}\in{\cal P}(t^+,\P)}{\rm ess \, inf^{\P}}\E_t^{\P^{'}}[{K}_T-{K}_t]\geq 0.$$ Then, it suffices to prove that $\E^\P\Big[\underset{\P^{\prime}\in{\cal P}(t^+,\P)}{\rm ess \, inf^{\P}}\E_t^{\P^{'}}[{K}_T-{K}_t]\Big]\leq 0.$ We know that the family ${\cal P}(t^+,\P)$ is upward directed.
Therefore, by classical results, there is a sequence $(\P^n)_{n\geq 0}\subset {\cal P}(t^+,\P)$ such that \begin{align} \E^\P\left[\underset{\P^{\prime}\in{\cal P}(t^+,\P)}{\rm ess \, inf^{\P}}\E_t^{\P^{'}}[{K}_T-{K}_t]\right]=\underset{n\rightarrow+\infty}{\lim}\downarrow\E^{\P^{n}}\left[{K}_T-{K}_t\right].\label{essinf} \end{align} On the other hand, by (\ref{eq4}), we estimate, using the H\"{o}lder inequality, that \begin{align*} \E^{\P^{n}}\left[{K}_T-{K}_t\right] &= \E^{\P^{n}}\left[\left(\underset{t\leq s\leq T}{\Inf}(M_{t}^{-1}M_s)\right)^{1/3}\left({K}_T-{K}_t\right)^{1/3} \left(\underset{t\leq s\leq T}{\Inf}(M_{t}^{-1}M_s)\right)^{-1/3}\left({K}_T-{K}_t\right)^{2/3}\right]\\ &\leq C \left(\E^{\P^{n}}\left[\left({K}_T\right)^2\right]\E^{\P^{n}}\left[\left(\underset{t\leq s\leq T}{\Inf}(M_{t}^{-1}M_s)\right)\left({K}_T-{K}_t\right)\right]\right)^{1/3}\\ &\leq C \left(\E^{\P^{n}}\left[\left({K}_T\right)^2\right] \E^{\P^{n}}\left[M_{t}^{-1}\Int_t^T M_s d{K}_s\right]\right)^{1/3}\\ &\leq C \left(\E^{\P^{n}}\left[\left({K}_T\right)^2\right]\right)^{1/3} \left(\E^{\P^{n}}[\delta Y_t]\right)^{1/3}, \end{align*} where we have used in the last inequality the fact that ${K}$ is non-decreasing and the same arguments as in the proof of Theorem \ref{theorep} (ii). \noindent Plugging the above in (\ref{essinf}), we obtain \begin{align*} \E^\P\left[\underset{\P^{\prime}\in{\cal P}(t^+,\P)}{\rm ess \, inf^{\P}}\E_t^{\P^{'}}[{K}_T-{K}_t]\right] &\leq C \underset{n\rightarrow+\infty}{\lim}\downarrow\big(\E^{\P^{n}}\left[\delta Y_t\right]\big)^{1/3}\leq C \left(\underset{\P^{n}\in{\cal P}(t^+,\P)}{\rm ess \, inf^{\P}}\E^{\P^{n}}\left[\delta Y_t\right]\right)^{1/3}=0. \end{align*} This is the desired result. {\it{Step 3: Existence and aggregation results for $\xi$ belonging to $ {\cal L}^{2}$. } } For $\xi\in{\cal L}^{2}$, there exists by definition a sequence $(\xi_n)_{n\geq 0}\subset{\rm UC}_b(\Omega)$ such that \begin{align*} \underset{n,m\rightarrow +\infty}{\Lim}\|\xi_n-\xi_m\|_{\L^{2}}=0\quad\text{and}\quad \underset{n\geq 0}{\Sup}~\|\xi_n\|_{\L^{2}}< +\infty. \end{align*} Let $(Y^n,Z^n,K^n)\in \D^{2}\times \H^{2}\times\mathbb I^2$ be the solution to the $2$BDSDE (\ref{eq}) with terminal condition $\xi_n$. By the estimates of Theorem \ref{thestiamte}, we have \begin{align*} & \|Y^n-Y^m\|_{\D^{2}}^2 + \|Z^n-Z^m \|^2_{\H^{2}} + \underset{\P\in{\cal P}}{\Sup}\, \E^{\P}\left[\underset{0\leq t\leq T}{\Sup} |\widetilde K_t^{n}-\widetilde K_t^{m}|^2\right] \\ &\leq C \|\xi_n-\xi_m\|_{\L^{2}}^2 + \widetilde C\|\xi_n-\xi_m\|_{\L^{2}}\underset{n,m \rightarrow +\infty}{\longrightarrow} 0. \end{align*} Extracting a fast-converging subsequence and using the Borel--Cantelli lemma, we can make sure that, $\P-a.s.$ $$\underset{n,m\rightarrow\infty}{\Lim} \left[\underset{0\leq t\leq T}{\Sup}\big[|Y^n_t-Y^m_t|^2+| \widetilde K_t^{n}-\widetilde K_t^{m}|^2\big] + \Int_0^T\|\widehat a_t^{1/2}(Z^n_t-Z^m_t) \|^2dt \right] =0.$$ We then define \begin{align*} Y:=\underset{n\rightarrow\infty}{\overline{\Lim}} Y^n ,\quad Z:=\underset{n\rightarrow\infty}{\overline{\Lim}} Z^n , \quad K:=\underset{n\rightarrow\infty}{\overline{\Lim}} \widetilde K^{n}. \end{align*} It is therefore clear that $(Y,Z, K)\in \D^2\times\H^2\times\I^2$.
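\noindent To conclude this step, let us sketch (in a schematic way, the details being standard) why the limiting triplet solves the equation: under any fixed $\P\in{\cal P}$, the convergences obtained above allow us to pass to the limit, $\P-a.s.$, in \begin{align*} Y^n_t = \xi_n+\Int_t^T\widehat{F}_s(Y^n_s,Z^n_s)ds+\Int_t^T g_s(Y^n_s,Z^n_s)\cdot d\W_s-\Int_t^T Z^n_s \cdot dB_s+ K^n_T-K^n_t, \end{align*} the Lipschitz continuity of $\widehat F$ and $g$ in $(y,z)$ ensuring the convergence of the Lebesgue integral and of the two stochastic integrals, so that $(Y,Z,K)$ satisfies \eqref{eq} under every $\P\in{\cal P}$. The minimality condition \eqref{eqmin} can then be checked by passing to the limit in the one satisfied by each $K^n$, using the uniform (in $t$) convergence of $K^n$ obtained above.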
\ep \end{proof} \section{Probabilistic interpretation for fully-nonlinear SPDEs} \label{Probabilistic interpretation for Fully nonlinear SPDEs} The aim of this section is to give a Feynman--Kac formula for the solution of the following fully non-linear SPDE \begin{align}\label{spde} \begin{cases} du(t,x) + \hat{h}(t,x,u(t,x),Du(t,x),D^2u(t,x))dt + g(t,x,u(t,x),Du(t,x))\circ d\W_t = 0,\\ u(T,x)= \phi(x), \end{cases} \end{align} where we consider the case $H_t(\omega,y,z,\gamma)= h(t,B_t(\omega),y,z,\gamma)$, with $h:[0,T]\times\R^d\times\R\times\R^d\times D_h\longrightarrow\R$ ($D_h$ being a subset of $\mathbb S_d^{>0}$) a deterministic map. Then, the corresponding conjugate and bi--conjugate functions are given by \begin{align} F(t,x,y,z,a) &:= \underset{\gamma\in D_h}{\Sup}\left\{\Frac{1}{2} {\rm Tr}[a\gamma] - h(t,x,y,z,\gamma)\right\}~\text{for}~ a\in \S_d^{>0},\\ \hat{h}(t,x,y,z,\gamma) &:= \underset{a\in \S_d^{>0}}{\Sup} \left\{\Frac{1}{2} {\rm Tr}[a\gamma] - F(t,x,y,z,a)\right\}~\text{for}~ \gamma\in \R^{d\times d}. \end{align} Notice that $-\infty< \hat{h}\leq h$ and $\hat{h}$ is nondecreasing convex in $\gamma$. Also, $\hat{h}=h$ if and only if $h$ is convex and nondecreasing in $\gamma$, which we will therefore always assume. \noindent To this end, we consider the following Markovian $2$BDSDE \begin{align} Y_s^{t,x} =&\ \phi(B_T^{t,x})-\Int_s^T F(r,B_r^{t,x},Y_r^{t,x},Z_r^{t,x},\widehat{a}_r)dr+\Int_s^T g(r,B_r^{t,x},Y_r^{t,x},Z_r^{t,x})\circ d\W_r \nonumber\\ &- \Int_s^T Z_r^{t,x} dB_r^{t,x}+K_T^{t,x}-K_s^{t,x}, ~ t\leq s\leq T,~{\cal P}^t-q.s., \label{eqmarkovian} \end{align} where for any $(t,x)\in[0,T]\times\R^d$, $(B_s^t)_{s\in[t,T]}$ is the shifted canonical process on $\Omega^{B,t}$ defined by $$ B_s^{t,x}:= x+B_s^t \quad \text{for all}~ s\in[t,T].$$ The stochastic integral with respect to $\W$ is the Stratonovich backward integral (see Kunita \cite{K90} page 194). Using the definition of the Stratonovich backward integral, we can show easily that equation (\ref{eqmarkovian}) is equivalent to the following $2$BDSDE \begin{align} Y_s^{t,x} =&\ \phi(B_T^{t,x})-\Int_s^Tf(r,B_r^{t,x},Y_r^{t,x},Z_r^{t,x},\widehat{a}_r)dr+\Int_s^T g(r,B_r^{t,x},Y_r^{t,x},Z_r^{t,x}) \cdot d\W_r \nonumber\\ &- \Int_s^T Z_r^{t,x}dB_r^{t,x}+K_T^{t,x}-K_s^{t,x}, ~ t\leq s\leq T,~{\cal P}^{t}-q.s., \label{eqmarkovianmod} \end{align} where $$f(s,x,y,z,\widehat{a}_s):= F(s,x,y,z,\widehat{a}_s)+\Frac{1}{2}{\rm Tr}[g(s,x,y,z)D_yg(s,x,y,z)^\top].$$ \noindent From now on, we focus our study on providing the probabilistic representation of the classical and stochastic viscosity solutions for the fully nonlinear SPDE \eqref{spde} via the $2$BDSDE \eqref{eqmarkovian}. Let us first define the following functional spaces: \begin{itemize} \item ${\cal M}_{0,T}^W$ denotes all the $\F^W-$stopping times $\tau$ such that $0\leq \tau\leq T$. \item $L^{p}({\cal F}_{\tau,T}^W;\R^d)$, for $p\geq 0$, denotes the space of all $\R^d-$valued ${\cal F}_{\tau,T}^W-$measurable r.v. $\xi$ such that $\E[|\xi|^p]< +\infty$. \item $C^{\ell,k}([0,T]\times\R^d)$, for $k,\ell\geq 0$, denotes the space of all $\R-$valued functions defined on $[0,T]\times\R^d$, which are $\ell-$times continuously differentiable in $t$ and $k-$times continuously differentiable in $x$.
\item $C_b^{k,m,n}([0,T]\times\R^d\times\R; \R^p)$, for $k, m, n\geq 0$, $p\geq 1$, denotes the space of all $\R^p-$valued functions defined on $[0,T]\times\R^d\times\R$, which are $k-$times continuously differentiable in $t$, $m-$times continuously differentiable in $x$, $n-$times continuously differentiable in $y$ and have uniformly bounded partial derivatives. \item $C^{\ell,k}({\cal F}_{t,T}^W,[0,T]\times\R^d)$, for $k,\ell\geq 0$, denotes the space of all $C^{\ell,k}([0,T]\times\R^d)-$valued random variables $\varphi$ that are ${\cal F}_{t,T}^W\otimes{\cal B}([0,T]\times\R^d)-$measurable. \item $C^{\ell,k}(\F^W,[0,T]\times\R^d)$, for $k,\ell\geq 0$, denotes the space of r.v. $\varphi\in C^{\ell,k}({\cal F}_{t,T}^W,[0,T]\times\R^d)$ such that for fixed $x\in\R^d$, the mapping $(t,\omega)\longmapsto\varphi(t,x,\omega)$ is $\F^W-$progressively measurable. \end{itemize} Furthermore, for $(t,x,y)\in[0,T]\times\R^d\times\R$, we denote $\partial/\partial y= D_y, \partial/\partial t= D_t, D= D_x= (\partial/\partial x_1,\cdots,\partial/\partial x_d),$ and $D^2=D_{xx}= (\partial^2_{x_{i}x_{j}})_{i,j=1}^{d}$. The meaning of $D_{xy}, D_{yy}$,$\dots$, should be clear. \noindent Then, we list the assumptions needed in this section. The following is a slight strengthening of Assumption \ref{ass}, where we assume a bit more regularity. \begin{Assumption}\label{assspde} \begin{itemize} \item[\rm{(i)}] ${\cal P}$ is not empty, and the domain $D_{F_t(y,z)}= D_{F_t}$ is independent of $(\omega,y,z)$. Moreover, $F$, $g$ and $D_yg$ are uniformly continuous in $t$, uniformly in $a$ on $D_{F_t}$. \item[\rm{(ii)}] There exist constants $C > 0, 0\leq\alpha <1$ such that for all $(t,a,x,x^{\prime},z,z^{\prime},y,y^{\prime})\in [0,T]\times D_{F_t}\times(\R^d)^4\times\R^2$ \begin{align*} |F(t,x,y,z,a)- F(t,x^{\prime},y^{\prime},z^{\prime},a)|&\leq C\big(|x-x^{\prime}|+|y-y^{\prime}|+\|a^{1/2}(z-z^{\prime})\|\big),\\ \|g(t,x,y,z)-g(t,x^{\prime},y^{\prime},z^{\prime})\|^2&\leq C\left(|x-x^{\prime}|^2+ |y-y^{\prime}|^2\right)+\alpha\|(z-z^{\prime})\|^2. \end{align*} \item[\rm{(iii)}] The function $g$ belongs to $C_b^{0,2,3}([0,T]\times\R^d\times\R; \R^l)$. \item[\rm{(iv)}] There exists a constant $\lambda\in[0,1[$ such that $$(1-\lambda)\widehat{a}_t \geq \alpha I_d,\ dt\times {\cal P}-q.e. $$ \end{itemize} \end{Assumption} \noindent We next state a strengthened version of Assumption \ref{ass1} in the present Markov framework. \begin{Assumption}\label{assspde1} \begin{itemize} \item[\rm{(i)}] The function $\phi$ is a uniformly continuous and bounded function on $\R^d$. \item[\rm{(ii)}] For any $(t,x)\in[0,T]\times\R^d$ and for some $\varepsilon>0$ \begin{align*} & \underset{\P\in{\cal P}^{t}}{\Sup}\E^{\P}\left[|\phi(B_T^{t,x})|^{2+\varepsilon}+ \Int_t^T|F(s,B_s^{t,x},0,0,\widehat{a}_s^t)|^{2+\varepsilon}ds+ \Int_t^T \|g(s,B_s^{t,x},0,0)\|^{2+\varepsilon}ds\right] <+\infty. \end{align*} \end{itemize} \end{Assumption} \noindent Therefore, under Assumptions \ref{assumtion g}, \ref{assspde} and \ref{assspde1}, and according to Theorem \ref{Thexistence}, there exists a unique triplet $ (Y^{t,x},Z^{t,x},K^{t,x}) $ solving the $2$BDSDE \eqref{eqmarkovian}. Indeed, it is immediate to check that $f$ and $g$ satisfy Assumption \ref{ass} (recall that $D_yg$ is bounded) and that the terminal condition also verifies all the required regularity and integrability properties.
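\noindent As a purely illustrative example (a sketch which is not taken from the references above), the reader may keep in mind the generator $$F(t,x,y,z,a):=0\quad \text{for}\ a\in D_{F}:=\left\{a\in\S_d^{>0}:\ \underline{a}\, I_d\leq a\leq \overline{a}\, I_d\right\},\qquad 0<\underline{a}\leq\overline{a}<+\infty,$$ whose bi--conjugate is $\hat{h}(t,x,y,z,\gamma)=\Frac{1}{2}\underset{a\in D_{F}}{\Sup}\,{\rm Tr}[a\gamma]$, so that \eqref{spde} becomes a stochastic analogue of the $G$-heat equation. For this choice the above Lipschitz and continuity requirements hold trivially, $\hat h$ is convex and nondecreasing in $\gamma$, and $D_F$ is bounded both from above and away from $0$, a property which will be used in the next subsection.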
\subsection{Classical solution of SPDEs} We can rewrite the SPDE (\ref{spde}) in its so-called integral form: a random field $\{u(t,x), \ 0\leq t\leq T,\ x\in\R^d\}\in C^{0,2}({\cal F}_{t,T}^W,[0,T]\times\R^d)$ is a classical solution as soon as it satisfies the following equation, in which the stochastic integral is written in the Stratonovich form, \begin{align} u(t,x)= \phi(x)+\Int_t^T \hat{h}(s,x,u(s,x),Du(s,x),D^2u(s,x))ds+\Int_t^T g(s,x,u(s,x),Du(s,x))\circ d\W_s. \label{spdefint} \end{align} \begin{Definition} We define a classical solution of the SPDE \reff{spde} as an $\R-$valued random field $\{u(t,x), \ (t,x)\in[0,T]\times\R^d\}$ such that $u(t,x)$ is ${\cal F}_{t,T}^W-$measurable for each $(t,x)$, and whose trajectories belong to $C^{0,2}([0,T]\times\R^d)$. \end{Definition} \noindent The following is a version of the celebrated Feynman--Kac formula in the present context. \begin{Theorem} \label{representationsolutionclassique:theorem} Let Assumption \ref{assspde} hold true. Suppose further that $H$ is continuous in its domain, and that $D_F$ is independent of $t$ and bounded both from above and away from $0$. Let $\{u(t,x),\ (t,x)\in[0,T]\times\R^d\}$ be a classical solution of \reff{spde} with $\{(u,Du)(s,B_s^{t,x}), ~s\in[t,T]\} \in\D^{2}\times \H^{2}$. Then $$Y_s^{t,x}:= u(s,B_s^{t,x}), ~ Z_s^{t,x}:= Du(s,B_s^{t,x}), ~ K_s^{t,x}:= \Int_t^s k_rdr,$$ with $$k_s:= \hat{h}(s,B_s,Y_s,Z_s,\Gamma_s)-\Frac{1}{2} {\rm Tr}[\widehat{a}_s\Gamma_s] + F(s,B_s,Y_s,Z_s,\widehat{a}_s)~ \text{and}~ \Gamma_s:=D^2u(s,B_s^{t,x}),$$ is the unique solution of the $2$BDSDE \reff{eqmarkovian}. Moreover, $u(t,x)=Y_t^{t,x}$ for all $t\in[0,T]$. \end{Theorem} \begin{proof} It suffices to show that $(Y,Z,K)$ solves the $2$BDSDE (\ref{eqmarkovian}). Let $s=t_0< t_1<t_2<\cdots<t_n=T$. Then, writing $B$ instead of $B^{t,x}$ and $B_i$ instead of $B_{t_i}$ for notational simplicity, we have \begin{align*} &\sum_{i=0}^{n-1}[u(t_i,B_i)-u(t_{i+1},B_{i+1})] = \sum_{i=0}^{n-1}[u(t_i,B_{i})-u(t_{i},B_{{i+1}})]+\sum_{i=0}^{n-1}[u(t_i,B_{{i+1}})-u(t_{i+1},B_{{i+1}})]\\ &= -\sum_{i=0}^{n-1}\Int_{t_i}^{t_{i+1}} Du(t_i,B_r)dB_r-\sum_{i=0}^{n-1}\Int_{t_i}^{t_{i+1}}\Frac{1}{2} {\rm Tr}[\widehat{a}_rD^2u(t_i,B_r)]dr\\ &\hspace{0.9em}+ \sum_{i=0}^{n-1}\Int_{t_i}^{t_{i+1}} \hat{h}(r,B_{{i+1}},u(r,B_{{i+1}}),Du(r,B_{{i+1}}),D^2u(r,B_{{i+1}}))dr\\ &\hspace{0.9em}+\sum_{i=0}^{n-1}\Int_{t_i}^{t_{i+1}}g(r,B_{{i+1}},u(r,B_{{i+1}}),Du(r,B_{{i+1}}))\circ d\W_r, \end{align*} where we have used It\^o's formula and the equation $(\ref{spdefint})$ satisfied by $u$. Now, the transformation from Stratonovich to It\^o integrals yields \begin{align*} &\sum_{i=0}^{n-1}[u(t_i,B_{i})-u(t_{i+1},B_{{i+1}})]= -\sum_{i=0}^{n-1}\Int_{t_i}^{t_{i+1}} Du(t_i,B_r)dB_r-\sum_{i=0}^{n-1}\Int_{t_i}^{t_{i+1}}\Frac{1}{2} {\rm Tr}[\widehat{a}_rD^2u(t_i,B_r)]dr\\ &\hspace{0.9em}+ \sum_{i=0}^{n-1}\Int_{t_i}^{t_{i+1}}\hat{h}(r,B_{{i+1}},u(r,B_{{i+1}}),Du(r,B_{{i+1}}),D^2u(r,B_{{i+1}}))dr\\ &\hspace{0.9em} + \sum_{i=0}^{n-1}\Int_{t_i}^{t_{i+1}}g(r,B_{{i+1}},u(r,B_{{i+1}}),Du(r,B_{{i+1}})) d\W_r+ \sum_{i=0}^{n-1}\Int_{t_i}^{t_{i+1}} F(r,B_{{i+1}},u(r,B_{{i+1}}),Du(r,B_{{i+1}}),\widehat{a}_r)dr\\ &\hspace{0.9em}- \sum_{i=0}^{n-1}\Int_{t_i}^{t_{i+1}} F(r,B_{{i+1}},u(r,B_{{i+1}}),Du(r,B_{{i+1}}),\widehat{a}_r)dr\\ &\hspace{0.9em}- \Frac{1}{2}\sum_{i=0}^{n-1}\Int_{t_i}^{t_{i+1}}{\rm Tr}[g(r,B_{{i+1}},u(r,B_{{i+1}}),Du(r,B_{{i+1}}))D_yg(r,B_{{i+1}},u(r,B_{{i+1}}),Du(r,B_{{i+1}}))^\top]dr.
\end{align*} It then suffices to let the mesh size go to zero to obtain that the processes $(Y,Z,K)$ we have defined do satisfy Equation \reff{eqmarkovian}. It now remains to prove the minimum condition \begin{align} \underset{\P^{'}\in{\cal P}(t+,\P)}{\rm ess \, inf^{\P}}\E_t^{\P^{'}}\left[\Int_t^T k_sds\right]=0 ~ \text{for all}~ t\in[0,T],~ \P\in{\cal P}, \label{mincond} \end{align} which allows us to conclude that $(Y,Z,K)$ is a solution of the $2$BDSDE (\ref{eqmarkovian}). The proof of that (technical) point can actually be carried out exactly as in \cite[Theorem 5.3]{STZ10} or \cite[Theorem 5.3]{kazi2015second}. Indeed, the main point is that one has to be able to construct appropriate strong solutions to some SDEs on $\Omega^B$, similar to the ones in Example 4.5 of \cite{sone:touz:zhan:11a}. In our framework, this construction can be carried out for every fixed $\omega^W$, and it then suffices to use once more the results of Stricker and Yor \cite{SY78}. \ep \end{proof} \subsection{Stochastic viscosity solution for SPDE} \label{Stochastic viscosity solution for SPDE:section} The aim of this section is to give a probabilistic representation, via solutions of the 2BDSDEs \eqref{eqmarkovian}, for the stochastic viscosity solutions of the following fully non-linear SPDEs. We restrict our study to the following class of SPDEs, in which the coefficient $g$ does not depend on the gradient of the solution, \begin{align}\label{spde:viscosité} \begin{cases} du(t,x) + \hat{h}(t,x,u(t,x),Du(t,x),D^2u(t,x))dt + g(t,x,u(t,x))\circ d\W_t = 0,\\ u(T,x)= \phi(x). \end{cases} \end{align} As mentioned in the introduction, Lions and Souganidis have introduced a notion of stochastic viscosity solution for fully nonlinear SPDEs in \cite{lion:soug:98, lion:soug:00, lion:soug:01}, motivated by applications in path--wise stochastic control problems and the associated stochastic HJB equations. Buckdahn and Ma \cite{buck:ma:10a,buck:ma:10b} have introduced a rigorous notion of stochastic viscosity solution for semi--linear SPDEs and have then given the probabilistic interpretation of such equations via BDSDEs, in the case where the intensity of the noise $g$ in the SPDE \eqref{spde:viscosité} does not depend on the gradient of the solution. As mentioned in the introduction, they used the so--called Doss--Sussmann transformation and stochastic diffeomorphism flow techniques to convert the semi--linear SPDEs into PDEs with random coefficients. This transformation makes it possible to remove the stochastic integral term from the SPDE, and then leads to a rigorous definition of the so--called stochastic viscosity solution of SPDEs. Let us mention that it is delicate to define viscosity solutions for SPDEs directly, since the presence of the stochastic integral term in the equation prevents a maximum principle from holding for their solutions. Since we follow a similar approach, we also assume that the non--linearity $g$ does not depend on the gradient term. \noindent We will use the shifted probability spaces defined in Section \ref{A direct existence argument}.
We now introduce the random function $u:[0,T]\times \Omega^W\times \R^d\longrightarrow \R$ given by \begin{align} u(t,x):=Y_t^{t,x}= \underset{\P\in{\cal P}^{t}}{\Sup}\ y_t^{\P,t,x}, ~\text{for}~ (t,x)\in[0,T]\times \R^d, \label{probarep} \end{align} where for any $(t,x,\P)\in[0,T]\times\R^d\times{\cal P}^t$, $(y^{\P,t,x},z^{\P,t,x})$ is the unique solution of the BDSDE \begin{align*} y_s^{\P,t,x} =&\ \phi(B_T^{t,x})-\Int_s^T f(r,B_r^{t,x},y_r^{\P,t,x},z_r^{\P,t,x},\widehat{a}_r)dr+\Int_t^T g(r,B_r^{t,x},y_r^{\P,t,x},z_r^{\P,t,x}) \cdot d\W_r \noindentnumber\\ &- \Int_s^T z_r^{\P,t,x}\cdot dB_r^{t,x}, ~ t\leq s\leq T,~\P-a.s. \end{align*} By the Blumenthal $0-1$ law, it follows that $u(t,x)$ is deterministic with respect to $B$, but still an $\F^W-$adapted process. \begin{Theorem} Let Assumptions \ref{assumtion g}, \ref{assspde} and \ref{assspde1} hold true. Then $u$ belongs to $C({\cal F}_{t,T}^W,[0,T]\times\R^d)$. \end{Theorem} \begin{proof} Let us start with the uniform continuity in $x$. For any $(t,x), (t,x^{\prime})\in[0,T]\times \R^d$, we have for any $\P\in{\cal P}^t$ \begin{align*} \E^{\P_0^W}[|u(t,x)-u(t,x^{\prime})|^2]&= \E^{\P_0^W}\left[|\underset{\P\in{\cal P}^{t}}{\Sup} y_t^{\P,t,x}-\underset{\P\in{\cal P}^{t}}{\Sup} y_t^{\P,t,x^{\prime}}|^2\right]\leq \underset{\P\in{\cal P}^{t}}{\sup} \E^{\P_0^W}\left[|y_t^{\P,t,x}-y_t^{\P,t,x^{\prime}}|^2\right]. \end{align*} But by the classical {\it a priori} estimates for BDSDEs, and using in particular the fact that $\phi$ is Lipschitz continuous, and $f$ and $g$ are Lipschitz continuous in $x$, we have for any $p\in[1,2]$ \begin{align*} \E^{\P_0^W}\left[|y_t^{\P,t,x}-y_t^{\P,t,x^{\prime}}|^p\right]&\leq C\mathbb E^{\P_0^W}\left[|x-x'|^p+\int_t^T|B_s^{t,x}-B_s^{t,x'}|^pds\right]\leq C\No{x-x'}^p. \end{align*} By Kolmogorov--Chentsov's Theorem, this implies immediately that a $\P_0^W-$version of $u$ is $(1/2-\eta)-$H\"older continuous in $x$, for any $\eta\in(0,1/2)$. \noindentindent Next, for any $(t,t^{\prime},x)\in[0,T]^2\times \R^d$, we have the following classical dynamic programming result $$Y_{t^\prime}^{t,x}=u(t^\prime,B_{t^\prime}^{t,x}),$$ which implies that for any $\P\in{\cal P}^t$ and any $\varepsilon'\in(0,\varepsilon)$, using the estimates of Theorem \ref{thestiamte}(i) \begin{align*} &\E^{\P_0^W}[|u(t^{\prime},x)-u(t,x)|^{2+\varepsilon'}]\\ &= \E^{\P_0^W}\left[\abs{u(t^{\prime},x)-u(t^{\prime},B_{t^{\prime}}^{t,x})+Y_{t^{\prime}}^{t,x}-Y_t^{t,x}}^{2+\varepsilon'}\right]\\ &\leq C \E^{\P_0^W}[|u(t^{\prime},x)-u(t^{\prime},B_{t^{\prime}}^{t,x})|^{2+\varepsilon'}]+C\abs{\E^{\P_0^W}\left[Y_{t^{\prime}}^{t,x}-Y_t^{t,x}\right]}^{2+\varepsilon'}\\ &\leq C\E^{\P_0^W}|x-B_{t^{\prime}}^{t,x}|^{2+\varepsilon'}]+C\abs{\E^{\P_0^W}\left[\int_t^{t^\prime}{f}(r,B_r^{t,x},Y_r^{t,x},Z_r^{t,x},\widehat{a}_r)dr+\int_t^{t^\prime}dK_r^{t,x}\right]}^{2+\varepsilon'}\\ &\leq C|t-t^\prime|^{1+\frac{\varepsilon'}{2}}+C\underset{\P\in{\cal P}^{t}}{\Sup}\E^{\P_0^W}\left[\left(\int_t^{t^\prime}|{f}(r,B_r^{t,x},Y_r^{t,x},Z_r^{t,x},\widehat{a}_r)|dr\right)^{2+\varepsilon'}+\left(\int_t^{t^\prime}dK_r^{t,x}\right)^{2+\varepsilon'}\right]\\ &\leq C|t-t^\prime|^{1+\frac{\varepsilon'}{2}}\left(1+\No{Y^{t,x}}^{1+\frac{\varepsilon'}{2}}_{\mathbb D^{2+\varepsilon'}}+\No{Z^{t,x}}^{1+\frac{\varepsilon'}{2}}_{\mathbb H^{2+\varepsilon'}}+\underset{\P\in{\cal P}^{t}}{\Sup}\E^{\P_0^W}\left[\left(K_T^{t,x}+\int_0^T|B_s^{t,x}|ds\right)^{2+\varepsilon'}\right]\right)\\ &\leq C|t-t'|^{1+\frac{\varepsilon'}{2}}. 
\end{align*} By Kolmogorov--Chentsov Theorem, this implies immediately that a $\P_0^W-$version of $u$ is $\eta-$H\"older continuous in $t$, for any $\eta\in(0,\varepsilon/(2(2+\varepsilon)))$. \ep \end{proof} \subsubsection{Stochastic flow and definitions} We follow Buckdahn and Ma \cite{buck:ma:10a}. The definition of our stochastic viscosity solution will depend on the following stochastic flow $\eta\in C(\F^W,[0,T]\times\R^d\times\R)$, defined as the unique solution of the (SDE) \begin{align}\label{floweta} \eta(t,x,y) = y + \Int_t^T g(s,x,\eta(s,x,y))\circ d\W_s, ~ 0\leq t\leq T. \end{align} Under Assumption \ref{assspde}, for fixed $x$ the random field $\eta(.,x,.)$ is continuously differentiable in the variable $y$, and the mapping $y\longmapsto\eta(t,x,y,\omega)$ defines a diffeomorphism for all $(t,x)$, $\P-a.s.$ \noindentindent We denote by $\mathcal{E}(t,x,y)$ the $y-$inverse of $\eta(t,x,y)$, so $\mathcal{E}(t,x,y)$ is the solution of the following first-order SPDE \begin{align}\label{flowinverse} \mathcal{E}(t,x,y)= y -\Int_t^T D_y\mathcal{E}(s,x,y)g(s,x,y)\circ d\W_s, ~ \forall (t,x,y),\ \P-a.s. \end{align} We note that $\mathcal{E}(t,x,\eta(t,x,y))= \mathcal{E}(T,x,\eta(T,x,y))=y , ~ \forall (t,x,y)$. We now define the notion of stochastic viscosity solution for SPDE (\ref{spde}). \begin{Definition}\label{Definitionviscosity} $(i)$ A random field $u\in C(\F^W,[0,T]\times\R^d)$ is called a stochastic viscosity subsolution $($resp. supersolution$)$ of SPDE \eqref{spde:viscosité}, if $$u(T,x)\leq (\text{resp.} \geq)\phi(x), ~ \forall x\in\R^d,$$ and if for any $\tau\in{\cal M}_{0,T}^W,$ $\zeta\in L^0({\cal F}_{\tau,T}^W;\R^d)$, and any random field $\varphi\in C^{1,2}({\cal F}_{\tau,T}^W,[0,T]\times\R^d)$ satisfying $$u(t,x)-\eta(t,x,\varphi(t,x))\leq (\text{resp.} \geq)~ 0= u(\tau,\zeta)-\eta(\tau,\zeta,\varphi(\tau,\zeta),$$ for all $(t,x)$ in a neighborhood of $(\tau,\zeta),\ \P_W^0-a.e.$ on the set $\{0< \tau < T\}$, it holds that $$ - \hat{h}(\tau,\zeta,\psi(\tau,\zeta),D\psi(\tau,\zeta),D^2\psi(\tau,\zeta))\leq (\text{resp.} \geq) D_y\eta(\tau,\zeta,\varphi(\tau,\zeta))D_t\varphi(\tau,\zeta),$$ $\P-a.e.$ on $\{0< \tau < T\}$, where $\psi(t,x):=\eta(t,x,\varphi(t,x)).$ \noindentindent $(ii)$ A random field $u\in C(\F^W,[0,T]\times\R^d)$ is called a stochastic viscosity solution of SPDE \eqref{spde:viscosité}, if it is both a stochastic viscosity subsolution and a supersolution. \end{Definition} \begin{Definition} \label{wise-viscosity:definition} A random field $u\in C(\F^W,[0,T]\times\R^d)$ is called a $\omega-$wise viscosity $($sub--, super--$)$ solution, if for $\P-a.e.\ \omega\in\Omega,\ u(\omega,\cdot)$ is a $($deterministic$)$ viscosity $($sub--, super--$)$ solution of the SPDE \eqref{spde:viscosité}. \end{Definition} \begin{Remark} If we assume that $\varphi\in C^{1,2}(\F^W,[0,T]\times\R^d)$, and that $g\in C^{0,0,3}([0,T]\times\R^d\times\R; \R^l)$, then a straightforward computation using the It\^o--Ventzell formula shows that the random field $\psi(t,x)=\eta(t,x,\varphi(t,x))$ satisfies \begin{align} d\psi(t,x)=D_y\eta(t,x,\varphi(t,x))D_t\varphi(t,x)dt + g(t,x,\psi(t,x))\circ d\W_t , \ t\in[0,T]. \end{align} Since $g(\tau,\zeta,\psi(\tau,\zeta))=g(\tau,\zeta,u(\tau,\zeta))$ by defintion, it seems natural to compare $$\hat{h}(\tau,\zeta,\psi(\tau,\zeta),D\psi(\tau,\zeta),D^2\psi(\tau,\zeta)), \ \text{with}\ D_y\eta(\tau,\zeta,\varphi(\tau,\zeta))D_t\varphi(\tau,\zeta),$$ to characterize a viscosity solution of SPDE $(\hat{h},g)$. 
\noindentindent If the function $g\equiv 0$ in SPDE \eqref{spde}, the flow $\eta$ becomes $\eta(t,x,y)=y$, $\forall (t,x,y)$ and $\psi(t,x)=\varphi(t,x)$. Thus the definition of a stochastic viscosity solution becomes the same as that of a deterministic viscosity solution $($see, e.g. Crandall, Ishii and Lions {\rm\cite{CIL92}}$)$. \end{Remark} \noindentindent One of the main results of our paper is the following probabilistic representation of stochastic viscosity solution for fully nonlinear SPDEs, which is, to the best of our knowledge, the first result of this kind for such a class of SPDEs. The proof will be obtained in the subsequent subsections. \begin{Theorem} \label{representationsolutionviscosité:theorem} Let Assumptions \ref{assspde}, \ref{assspde1} and \ref{extraasumption} hold true and $( Y_s^{t,x}, Z_s^{t,x}, K_s^{t,x}) $ be the unique solution of the $2$BDSDE \reff{eqmarkovian}. Then, $$u(t,x)=Y_t^{t,x} = \underset{\P\in{\cal P}^{t}}{\Sup} y_t^{\P,t,x}(T,\phi(B_T^{t,x})), \ \P_W^0-a.s,\ \text{for all $(t,x)\in[0,T]\times \R^d$},$$ is a stochastic viscosity solution of SPDE \eqref{spde:viscosité}. Moreover, $$ u(t,x)=\eta(t,x,v(t,x)),~ v(t,x)=\mathcal{E}(t,x, u(t,x)), \ \P_W^0-a.s., \ \mbox{and} ~v(t,x)=U_t^{t,x}, $$ where $(U^{t,x},V^{t,x},\tilde{K}^{t,x})$ is a solution of the following $2$BSDE, for all $t\leq s\leq T$, \begin{align} U_s^{t,x}= \phi(B_T^{t,x})-\Int_s^T\tilde{f}(r,B_r^{t,x},Y_r^{t,x},Z_r^{t,x},\widehat{a}_r)dr - \Int_s^T V_r^{t,x}\cdot dB_r + \tilde{K}_T^{t,x}-\tilde{K}_s^{t,x}, \label{2BSDE} \end{align} with $\tilde{f}:[0,T]\times\R^d\times\R\times\R^d\times D_f\longrightarrow\R$ defined by \begin{align} \label{Doss-Sussmann-coefficient} \tilde{f}(t,x,y,z,a):=&\ \Frac{1}{D_y\eta(t,x,y)}\left(f(t,x,\eta(t,x,y),D_y\eta(t,x,y)z+D_x\eta(t,x,y),a)-\Frac{1}{2}{\rm Tr}[a D_{xx}\eta(t,x,y)]\right.\noindentnumber\\ &\left.- z^\top a D_{xy}\eta(t,x,y)-\Frac{1}{2}{\rm Tr}\left[D_{yy}\eta(t,x,y)a^{1/2}zz^\top a^{1/2}\right]\right). \end{align} \end{Theorem} \subsubsection{Doss--Sussmann transformation} \label{Doss-Sussmann-section} In this subsection, we use the so--called Doss--Sussmann transformation to convert the fully nonlinear SPDEs \eqref{spde:viscosité} to PDEs with random coefficients. This transformation permits to remove the martingale term from the SPDEs. To begin with, let us note that, under Assumption \ref{assspde} (iii), the random field $\eta\in C^{0,2,2}(\F^W,[0,T]\times\R^d\times\R)$, thus so is ${\cal E}$. Now for any random field $\psi:[0,T]\times \R^d\times \Omega\longrightarrow \R$, consider the transformation introduced in Definition \ref{Definitionviscosity} $$\varphi(t,x)={\cal E}(t,x,\psi(t,x)), \ (t,x)\in [0,T]\times \R^d,$$ or equivalently, $\psi(t,x)=\eta(t,x,\varphi(t,x)).$ One can easily check that $\psi\in C^{0,p}(\F^W, [0,T]\times \R^d)$ if and only if $\varphi\in C^{0,p}(\F^W, [0,T]\times \R^d)$, for $p= 0,1,2.$ Moreover, if $\varphi\in C^{0,2}({\cal F}^W, [0,T]\times \R^d)$, then $$D_x\psi= D_x\eta + D_y\eta D_x\varphi,$$ \begin{align}\label{deriveeseconde} D_{xx}\psi= D_{xx}\eta + 2(D_{xy}\eta)(D_x\varphi)^{\top}+ (D_{yy}\eta)(D_x\varphi)(D_x\varphi)^{\top}+(D_{y}\eta)(D_{xx}\varphi). 
\end{align} Furthermore, since ${\cal E}(t,x,\eta(t,x,y))\equiv y, \ \forall (t,x,y),\ \P-a.s.$, differentiating this identity up to second order we have (suppressing the variables for simplicity), for all $(t,x,y)$ and $\P-a.s.$, \begin{align} \begin{split} & D_x{\cal E} + D_y{\cal E} D_x\eta = 0,\ D_y{\cal E} D_y\eta = 1,\\ & D_{xx}{\cal E} + 2(D_{xy}{\cal E})(D_x\eta)^{\top}+ (D_{yy}{\cal E})(D_x\eta)(D_x\eta)^{\top}+(D_{y}{\cal E})(D_{xx}\eta)=0,\\ &(D_{xy}{\cal E})(D_y\eta)+ (D_{yy}{\cal E})(D_x\eta)(D_y\eta)+(D_{y}{\cal E})(D_{xy}\eta)=0,\\ &(D_{yy}{\cal E})(D_y\eta)^2+(D_{y}{\cal E})(D_{yy}\eta)=0. \end{split} \end{align} \noindent The following additional assumption is needed to study the growth of the random fields $\eta$ and ${\cal E}$ (see \cite{buck:ma:10a}, p.~188--189). \begin{Assumption}\label{extraasumption} For any $\varepsilon >0$, there exists a function $G^{\varepsilon}\in C^{1,2,2,2}([0,T]\times \R^n\times\R^d\times \R)$, such that $$\Frac{\partial G^{\varepsilon}}{\partial t}(t,w,x,y)=\varepsilon,~ \Frac{\partial G^{\varepsilon}}{\partial w^i}= g^i(t,x,G^{\varepsilon}(t,w,x,y)),~ i=1,\cdots,n, \text{ and}~ G^{\varepsilon}(0,0,x,y)=y.$$ \end{Assumption} \begin{Proposition}\label{prop:prop} Let $\eta$ be the unique solution to SDE \eqref{floweta} and ${\cal E}$ be the $y-$inverse of $\eta$ $($the solution to \eqref{flowinverse}$)$. Then, under Assumption \ref{extraasumption}, there exists a constant $C>0$, depending only on the bound of $g$ and its partial derivatives, such that for $\zeta=\eta, {\cal E}$, it holds for all $(t,x,y)$ and $\P_0^W-a.s.$ that \begin{align*} &|\zeta(t,x,y)|\leq |y|+C |W_t|,\\ &|D_{x}\zeta|+|D_{y}\zeta|+|D_{xx}\zeta|+|D_{xy}\zeta|+|D_{yy}\zeta|\leq C \exp{\left(C|W_t|\right)}, \end{align*} where all the derivatives are evaluated at $(t,x,y)$. \end{Proposition} \noindent The proof of this proposition is given in \cite{buck:ma:10a}, p.~189--191, so we omit it. We will now use the Doss--Sussmann transformation to transform the SPDE (\ref{spde}) into a PDE with random coefficients, and we obtain the following proposition, whose proof follows the lines of the proof of Proposition 3.1 in \cite{buck:ma:10a} (p.~187--188). \begin{Proposition} Let Assumptions \ref{assspde}, \ref{assspde1} and \ref{extraasumption} hold true. A random field $u$ is a stochastic viscosity sub- $($resp. super-$)$ solution to SPDE \eqref{spde:viscosité} if and only if $v(.,.)= \mathcal{E}(.,.,u(.,.))$ is a stochastic viscosity sub- $($resp. super-$)$ solution to the following PDE with random coefficients \begin{align}\label{pde:viscosité} \begin{cases} dv(t,x) + \tilde{h}(t,x,v(t,x),Dv(t,x),D^2v(t,x))dt = 0,\\ v(T,x)= \phi(x), \end{cases} \end{align} with \begin{align*} \tilde{h}(t,x,y,z,\gamma):=& \sup_{a\in\mathbb S_d^{>0}}\left\{\frac12{\rm Tr}\left[a\gamma\right]-\tilde f(t,x,y,z,a)\right\}+ \Frac{1}{2}{\rm Tr}[g(t,x,\eta(t,x,y))D_yg(t,x,\eta(t,x,y))^\top]. \end{align*} Consequently, $u$ is a stochastic viscosity solution of SPDE \eqref{spde:viscosité} if and only if $v(.,.)= \mathcal{E}(.,.,u(.,.))$ is a stochastic viscosity solution to the PDE with random coefficients \eqref{pde:viscosité}. \end{Proposition} \begin{proof} Let $u\in C(\F^W,[0,T]\times\R^d)$ be a stochastic viscosity subsolution of SPDE \eqref{spde:viscosité} and let $v$ be defined by $v(t,x)= \mathcal{E}(t,x,u(t,x))$.
In order to show that $v$ is a stochastic viscosity subsolution to the PDE \eqref{pde:viscosité}, we let $\tau\in{\cal M}_{0,T}^W,$ $\zeta\in L^0({\cal F}_{\tau,T}^W;\R^d)$ be arbitrary given, and let $\varphi\in C^{1,2}({\cal F}_{\tau,T}^W,[0,T]\times\R^d)$ be such that $$v(t,x)-\varphi(t,x)\leq 0= v(\tau,\zeta)-\varphi(\tau,\zeta),$$ for all $(t,x)$ in a neighborhood of $(\tau,\zeta),\ \P_W^0-a.e.$ on the set $\{0< \tau < T\}$.\\ Now, we define $\psi(t,x)=\eta(t,x,\varphi(t,x)),\, \forall (t,x)\,\, \P_W^0-a.e.$ Since $y\longmapsto \eta(t,x)$ is strictly increasing, we have \begin{align} u(t,x)-\psi(t,x)&=\eta(t,x,v(t,x))-\eta(t,x,\varphi(t,x))\noindentnumber\\ &\leq 0=\eta(\tau,\zeta,v(\tau,\zeta))-\eta(\tau,\zeta,\varphi(\tau,\zeta))= u(\tau,\zeta)-\psi(\tau,\zeta), \end{align} for all $(t,x)$ in a neighborhood of $(\tau,\zeta),\ \P_W^0-a.e.$ on the set $\{0< \tau < T\}$. Then, since $u$ is a stochastic viscosity subsolution of SPDE \eqref{spde:viscosité}, we have $\P_W^0-a.e.$ on $\{0< \tau < T\}$, \begin{equation}\label{subsolution} - \hat{h}(\tau,\zeta,\psi(\tau,\zeta),D\psi(\tau,\zeta),D^2\psi(\tau,\zeta))\leq D_y\eta(\tau,\zeta,\varphi(\tau,\zeta))D_t\varphi(\tau,\zeta). \end{equation} On the other hand, we recall the expression of $\hat{h}$, \begin{align*} \hat{h}(t,x,y,z,\gamma) &:= \underset{a\in \S_d^{>0}}{\Sup} \left\{\Frac{1}{2} {\rm Tr}[a\gamma] - F(t,x,y,z,a)\right\}\\ &= \underset{a\in \S_d^{>0}}{\Sup} \left\{\Frac{1}{2} {\rm Tr}[a\gamma] - f(s,x,y,z,\widehat{a}_s)+\Frac{1}{2}{\rm Tr}[g(s,x,y)D_yg(s,x,y)^\top].\right\} \end{align*} Thus, using \eqref{deriveeseconde}, we have \begin{align*} {\rm Tr}[aD^2\psi(\tau,\zeta)]=&\ {\rm Tr}[aD_{xx}\eta(\tau,\zeta,\varphi(\tau,\zeta))]+{\rm Tr}[(D_x\varphi(\tau,\zeta))^{\top}a(D_{xy}\eta(\tau,\zeta,\varphi(\tau,\zeta))]\\ &+ {\rm Tr}[a(D_{yy}\eta(\tau,\zeta,\varphi(\tau,\zeta)))(D_x\varphi(\tau,\zeta))(D_x\varphi(\tau,\zeta))^{\top}]+ {\rm Tr}[a (D_{y}\eta(\tau,\zeta,\varphi(\tau,\zeta)))(D_{xx}\varphi(\tau,\zeta))], \end{align*} Finally, plugging the above calculations in \eqref{subsolution} and appealing to \eqref{Doss-Sussmann-coefficient}, we conclude that \begin{equation} - \tilde{h}(\tau,\zeta,\varphi(\tau,\zeta),D\varphi(\tau,\zeta),D^2\varphi(\tau,\zeta))\leq D_t\varphi(\tau,\zeta), \end{equation} which is the desired result. The reciprocal part of the proposition can be proved in a similar way.\\ \ep \end{proof} \noindentindent We apply now Doss--Sussmann transformation for the $2$BDSDE (\ref{eqmarkovian}) so that the Stratonovich backward integral vanishes. Thus, the $2$BDSDE will become a $2$BSDE (a standard one) with a new generator $\tilde{f}$ \eqref{Doss-Sussmann-coefficient}, which is quadratic in $z$. A similar class of 2BSDEs has been studied by Possama{\"{\i}} and Zhou \cite{PZ13} and Lin \cite{L14} in the case of a bounded final condition $\phi(B_T^{t,x})$ and a generator $F$ satisfying (see Assumption 2.1. (iv) p. 3776 in \cite{PZ13}) $$|F_t(x,y,z,a)|\leq \alpha+\beta|y|+\frac{\gamma}{2}|a^{1/2}z|^2,$$ for some positive constants $\alpha$, $\beta$ and $\gamma$. In our case, for fixed $\omega^W$, thanks to Proposition \ref{prop:prop}, we know that the generator $\tilde f$ satisfies a similar assumption, so that we can apply the results of \cite{PZ13} or \cite{L14}. 
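\begin{Remark} As a simple sanity check (this special case is ours and is only meant as an illustration, under the simplifying assumption that $g$ does not depend on $y$), take $g(t,x,y)=g_0(x)$ for some bounded smooth $g_0$. Then the flow \eqref{floweta} and its $y-$inverse \eqref{flowinverse} are explicit: $$\eta(t,x,y)= y + g_0(x)\cdot(\W_T-\W_t), \qquad \mathcal{E}(t,x,y)= y - g_0(x)\cdot(\W_T-\W_t),$$ so that $D_y\eta\equiv 1$ and $D_{yy}\eta\equiv D_{xy}\eta\equiv 0$. The Doss--Sussmann transformation then amounts to subtracting the stochastic integral from the solution, and the generator \eqref{Doss-Sussmann-coefficient} reduces to $$\tilde f(t,x,y,z,a)= f\big(t,x,\eta(t,x,y),z+D_x\eta(t,x,y),a\big)-\Frac{1}{2}{\rm Tr}\big[a\,D_{xx}\eta(t,x,y)\big],$$ which is Lipschitz in $z$; the quadratic growth in $z$ mentioned above only appears when $g$ genuinely depends on $y$. \end{Remark}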
\noindent Let us then define the following three processes \begin{align} U_s^{t,x} &:= \mathcal{E}(s,B_s^{t,x},Y_s^{t,x}),\nonumber \\ V_s^{t,x} &:= D_y\mathcal{E}(s,B_s^{t,x},Y_s^{t,x})Z_s^{t,x}+D_x\mathcal{E}(s,B_s^{t,x},Y_s^{t,x}),\nonumber \\ \tilde{K}_s^{t,x} &:= \Int_0^s D_y\mathcal{E}(r,B_r^{t,x},Y_r^{t,x})dK_r^{t,x}. \label{dosstransf} \end{align} \begin{Theorem} Let Assumptions \ref{assspde}, \ref{assspde1} and \ref{extraasumption} hold true. Then $(U^{t,x},V^{t,x},\tilde{K}^{t,x})$ is the unique solution of the $2$BSDE \eqref{2BSDE}. \end{Theorem} \begin{proof} It is easily checked that the mapping $(B,Y,Z,K)\longmapsto (B,U,V,\tilde{K})$ is one--to--one, and admits as an inverse \begin{align} Y_t=\eta(t,B_t,U_t),\ Z_t=D_y\eta(t,B_t,U_t)V_t+D_x\eta(t,B_t,U_t),\ K_t = \Int_0^t D_y\eta(s,B_s,U_s)d\tilde{K}_s. \label{inversetransf} \end{align} Consequently, the uniqueness of the solution of (\ref{2BSDE}) follows from that of the $2$BDSDE (\ref{eqmarkovian}), thanks to (\ref{dosstransf}) and (\ref{inversetransf}). Thus we need only show that $(U,V,\tilde{K})$ is a solution of the $2$BSDE (\ref{2BSDE}). Applying the generalized It\^o--Ventzell formula (see Lemma \ref{ItoVentzellLemma} below) to $\mathcal{E}(t,B_t,Y_t)$, one derives that for any $(t,x)\in [0,T]\times\R^d$ \begin{align*} U_t = \mathcal{E}(t,B_t,Y_t)=&\ \phi(B_T) - \Int_t^T D_y\mathcal{E}(s,B_s,Y_s)f(s,B_s,Y_s,Z_s,\widehat{a}_s)ds\\ &-\Int_t^T D_x\mathcal{E}(s,B_s,Y_s)\cdot dB_s-\Int_t^T D_y\mathcal{E}(s,B_s,Y_s)Z_s\cdot dB_s + \Int_t^T D_y\mathcal{E}(s,B_s,Y_s)dK_s \\ &-\Frac{1}{2}\Int_t^T {\rm Tr}[D_{xx}\mathcal{E}(s,B_s,Y_s)\widehat{a}_s]ds-\Frac{1}{2}\Int_t^T {\rm Tr}[D_{yy}\mathcal{E}(s,B_s,Y_s)\widehat{a}_s^{1/2}Z_sZ_s^\top\widehat a_s^{1/2}]ds\\ &-\Int_t^T {\rm Tr}[D_{xy}\mathcal{E}(s,B_s,Y_s)Z_s^\top\widehat{a}_s]ds\\ =&\ \phi(B_T)-\Int_t^T\mathcal{H}(s,B_s,Y_s,Z_s,\widehat{a}_s)ds- \Int_t^T V_s\cdot dB_s + \tilde{K}_T-\tilde{K}_t, \end{align*} where \begin{align*} \mathcal{H}(s,x,y,z,a):= &\ (D_y\mathcal{E})f(s,x,y,z,a)+\Frac{1}{2} {\rm Tr}[(D_{xx}\mathcal{E})a]+\Frac{1}{2} {\rm Tr}[(D_{yy}\mathcal{E}){a}^{1/2}zz^\top a^{1/2}]+{\rm Tr}[(D_{xy}\mathcal{E})z^\top a]. \end{align*} Next, we can show that \begin{align} \mathcal{H}(s,B_s,Y_s,Z_s,\widehat{a}_s)=\tilde{f}(s,B_s,U_s,V_s,\widehat{a}_s),\ \forall s\in[0,T],~ \P-a.s., \end{align} similarly to what is done in Buckdahn and Ma \cite{buck:ma:10a} (proof of Theorem 5.1, pages 198--199). \noindent The process $\tilde{K}$ is non-decreasing, since $y\longmapsto \eta(t,x,y)$ is strictly increasing and $K$ is itself non-decreasing. Now, it remains to prove the minimum condition (\ref{eqmin}) for the process $\tilde{K}$. Notice that by Proposition \ref{prop:prop}, and since $\widehat a$ is bounded under any of the measures we consider, it is clear that $D_y\mathcal{E}(r,B_r^{t,x},Y_r^{t,x})$ has moments of any order under any $\P$. Therefore, we can argue exactly as in Step (ii) of the proof of Theorem \ref{theorep} to show that $\tilde K$ inherits the required minimality condition directly from $K$. \ep \end{proof} \noindent We are now ready for the proof of our main theorem. \proof[Proof of Theorem \ref{representationsolutionviscosité:theorem}] First, we introduce the random field $v(t,x)=U_t^{t,x}$, where $U$ is the solution of the $2$BSDE (\ref{2BSDE}). Then by (\ref{dosstransf}) and (\ref{inversetransf}) we know that, for $(t,x)\in[0,T]\times\R^d$ \begin{align} u(t,x)=\eta(t,x,v(t,x)),\, v(t,x)=\mathcal{E}(t,x,u(t,x)).
\label{transf} \end{align} Thanks to Proposition 5.1, we know that we only need to prove that the random field $v$ defined in (\ref{transf}) is a viscosity solution of the PDE with random coefficients \eqref{pde:viscosité}. The idea is then to follow the proof in \cite[Theorem 7.3]{PZ13}, which itself follows \cite[Theorem 5.11]{STZ10}, to prove that the solution of $2$BSDE (\ref{2BSDE}) $v(t,x)=U_t^{t,x}$ is an $\omega-$wise viscosity solution of the PDE \eqref{pde:viscosité} (recall Definition \ref{wise-viscosity:definition}), which then ends the proof. It suffices to notice that the fact that $f$ satisfies Assumptions \ref{assspde} and Assumption \ref{assspde} implies that, for fixed $\omega^W$, $\tilde f$ satisfies Assumption 7.1 of \cite{PZ13}. \ep \begin{appendix} \section{Technical results}\label{Appendix Chapter2} \subsection{It\^o and It\^o--Ventzell formulae} \label{proof of lemmaitô} The following It\^o's formula is a mix between the classical forward and backward It\^o's formulas and is similar to Lemma 1.3 in \cite{pp1994}. We give it here for ease of reference and completeness. The proof being standard, we omit it. \begin{Lemma}\label{lemma:ito} Let $X^1$ and $X^2$ be defined, for $i=1,2$, by $$X^i_t=X_0^i+\int_0^t\alpha_s^ids+\int_0^t\beta_s^i\cdot dB_s+\int_0^t\gamma_s^i\cdot d\W_s+K^i_t,\ 0\leq t\leq T,\ \P-a.s.,$$ for some c\`adl\`ag bounded variation and $\G-$progressively measurable processes $K^i$, such that one of them is continuous. We then have \begin{align*} X^1_t X^2_t=&\ X^1_0X^2_0 +\int_0^t\left(\widehat a_s^{1/2}\beta_s^1\cdot\widehat a_s^{1/2}\beta_s^2 -\gamma_s^1\cdot\gamma_s^2 +\alpha_s^1X_s^2+\alpha_s^2X_s^1\right)ds+\int_0^t\left(X_s^2\beta_s^1+X_s^1\beta_s^2\right)\cdot dB_s\\ &+\int_0^t\left(X_s^1\gamma_s^2+X_s^2\gamma_s^1\right)\cdot d\W_s+\int_0^tX^1_{s^-}dK^2_s+\int_0^tX^2_{s^-}dK_s^1,\ t\in[0,T],\ \P-a.s. \end{align*} \end{Lemma} \noindentindent We now give a generalized version of It\^o--Ventzell formula that combines the generalized It\^o formula of Pardoux and Peng \cite{pp1994} and the It\^o--Ventzell formula of Ocone and Pardoux \cite{OP89}. \begin{Lemma}\label{ItoVentzellLemma}$($Generalized It\^o--Ventzell formula$)$ \noindentindent Suppose that $F\in C^{0,2}(\F,[0,T]\times\R^k)$ is a semimartingale with spatial parameter $x\in\R^k$: \begin{align*} \begin{split} F(t,x)= F(0,t)& + \Int_0^t G(s,x)ds +\Int_0^t H(s,x)\cdot dB_s+ \Int_0^t K(s,x) \cdot d\W_s, \quad t\in[0,T], \end{split} \end{align*} where $G\in C^{0,2}(\F^B,[0,T]\times\R^k)$, $H\in C^{0,2}(\F^B,[0,T]\times\R^k;\R^d)$ and $K\in C^{0,2}(\F^W,[0,T]\times\R^k;\R^l)$. Let $\phi\in C(\F,[0,T];\R^k)$ be a process of the form $$\phi_t= \phi_0 + A_t + \Int_0^t\gamma_s \cdot dB_s +\Int_0^t \delta_s \cdot d\W_s, \quad t\in[0,T],$$ where $\gamma\in \H^2_{k\times d}$, $\delta\in \H^2_{k\times l}$ and $A$ is a continuous $\F$-adapted process with paths of locally bounded variation. 
Then, $\P$-almost surely, it holds for all $0\leq t\leq T$ that \begin{align} \label{ItoVentzell} \noindentnumber F(t,\phi_t)= &\ F(0,x) + \Int_0^t G(s,\phi_s)ds +\Int_0^t H(s,\phi_s) \cdot dB_s+ \Int_0^t K(s,\phi_s) \cdot d\W_s\\ \noindentnumber &+ \Int_0^t D_x F(s,\phi_s)dA_s + \Int_0^t D_x F(s,\phi_s)\gamma_s \cdot dB_s + \Int_0^t D_x F(s,\phi_s)\delta_s \cdot d\W_s\\ \noindentnumber & +\Frac{1}{2} \Int_0^t {\rm Tr}(D_{xx} F(s,\phi_s)\gamma_s \gamma_s^\top) ds - \Frac{1}{2} \Int_0^t {\rm Tr}[D_{xx} F(s,\phi_s)\delta_s \delta_s^\top] ds\\ & +\Int_0^t {\rm Tr}(D_{x} H(s,\phi_s)\gamma_s^\top) ds - \Int_0^t {\rm Tr}[D_{x} F(s,\phi_s) \delta_s^\top] ds. \end{align} \end{Lemma} \subsection{Proof of Lemma \ref{BDSDEshift}} \label{proof of lemmashift} We divide the proof in two steps. {\it Step 1:} We start by showing the result in the case where $F$ and $g$ do not depend on $(y,z)$. In this case, we can solve directly the BDSDEs to find that $\text{for }\P-a.e. \ (\omega^B,\omega^W)\in\Omega$ \begin{equation}\label{eq:shift1} y_t^{\P}(\omega^B,\omega^W)=\E^{\P}\left[\left.\xi+\int_t^T\widehat F_sds+\int_t^Tg_s\cdot d\W_s\right|\mathcal G_t\right](\omega^B,\omega^W). \end{equation} Then, since $\xi$ is actually ${\cal F}_T^B-$measurable, we deduce immediately, using the definition of the r.c.p.d. that for $\P_0^W-$a.e. $\omega^W\in\Omega^W$ $$\E^{\P}\left[\left.\xi\right|\mathcal G_t\right](\omega^B,\omega^W)=\E^{\P_B(\omega^W)}\left[\left.\xi\right|\mathcal F_t^B\right](\omega^B)=\E^{\P_B^{t,\omega^B}(\omega^W)}\left[\xi^{t,\omega^B}\right].$$ Next, we know from the results of Stricker and Yor \cite{SY78} that we can define a measurable map from $(\Omega^W\times[0,T],{\cal F}_T\otimes{\cal B}([0,T]))$ to $(\R,{\cal B}(\R))$ which coincides $\P^0_W\otimes dt-$a.e. with the conditional expectation of $g_s$, under $\P^B(\cdot;\omega^W)$ (remember that this is a stochastic kernel, and thus measurable), with respect to the $\sigma-$algebra ${\cal F}_t^B$. For notational simplicity, we still denote this map as $$(\omega^W,s)\longmapsto \E^{\P_B(\cdot;\omega^W)}\left[\left.g_s(\cdot,\omega^W)\right|{\cal F}_t^B\right].$$ In other words, the above map does indeed define a stochastic process. That being said, we claim that for $\P-$a.e. $\omega\in\Omega$ \begin{align}\label{eq:shift2} \noindentnumber\E^{\P}\left[\left.\int_t^Tg_s\cdot d\W_s\right|\mathcal G_t\right](\omega^B,\omega^W) &=\left(\int_t^T\E^{\P_B(\cdot;\cdot)}\left[\left.g_s\right|{\cal F}_t^B\right](\omega^B,\cdot)\cdot d\W_s\right)(\omega^W)\\ &=\left(\int_t^T\E^{\P_B^{t,\omega^B}(\cdot)}\left[g_s^{t,\omega^B}\right](\cdot)\cdot d\W_s\right)(\omega^W). \end{align} To prove the claim, let us first show it in the case where $g$ is a simple process with the following decomposition $$g_t(\omega^B,\omega^W)=\sum_{i=0}^{n-1}g_{t_i}(\omega^B,\omega^W){\bf 1}_{(t_i,t_{i+1}]}(t).$$ Then, we have by definition of backward stochastic integrals, for $\P$-a.e. $(\omega^B,\omega^W)\in\Omega$ \begin{align*} \E^{\P}\left[\left.\int_t^Tg_s\cdot d\W_s\right|\mathcal G_t\right](\omega^B,\omega^W)&=\sum_{i=0}^{n-1}\E^{\P}\left[\left.g_{t_{i+1}}\cdot \left(W_{t_{i+1}\wedge t}-W_{t_i\wedge t}\right)\right|\mathcal G_t\right]\left(\omega^B,\omega^W\right)\\ &=\sum_{i=0}^{n-1}\E^{\P}\left[\left.g_{t_{i+1}}\right|\mathcal G_t\right]\left(\omega^B,\omega^W\right)\cdot \left(W_{t_{i+1}\wedge t}-W_{t_i\wedge t}\right)(\omega^W). 
\end{align*} Notice next that for $\P-a.e.$ $(\omega^B,\omega^W)\in\Omega$ $$\E^{\P}\left[\left.g_{t_{i+1}}\right|\mathcal G_t\right]\left(\omega^B,\omega^W\right)=\E^{\P_B(\cdot;\omega^W)}\left[\left.g_{t_{i+1}}\left(\cdot,\omega^W\right)\right|{\cal F}_t^B\right]\left(\omega^B\right).$$ Indeed, for any $X$ which is $\mathcal G_t-$measurable, we have \begin{align*} &\int_{\Omega}\E^{\P_B(\cdot;\omega^W)}\left[\left.g_{t_{i+1}}\left(\cdot,\omega^W\right)\right|{\cal F}_t^B\right]\left(\omega^B\right)X(\omega^B,\omega^W)d\P_B(\omega^B;\omega^W)d\P^0_W(\omega^W)\\ &=\int_{\Omega^W}\left(\int_{\Omega^B}\E^{\P_B(\cdot;\omega^W)}\left[\left.g_{t_{i+1}}\left(\cdot,\omega^W\right)\right|{\cal F}_t^B\right]\left(\omega^B\right)X(\omega^B,\omega^W)d\P_B(\omega^B;\omega^W)\right)d\P^0_W(\omega^W)\\ &=\int_{\Omega^W}\left(\int_{\Omega^B}g_{t_{i+1}}\left(\omega^B,\omega^W\right)X(\omega^B,\omega^W)d\P_B(\omega^B;\omega^W)\right)d\P^0_W(\omega^W)\\ &=\int_{\Omega}g_{t_{i+1}}\left(\omega^B,\omega^W\right)X(\omega^B,\omega^W)d\P(\omega^B,\omega^W), \end{align*} where we have used the fact that since for every $\omega^W\in\Omega^W$, $\omega^B\longmapsto X(\omega^B,\omega^W)$ is ${\cal F}_t^B-$measurable, we have by definition of the conditional expectation that \begin{align*} &\int_{\Omega^B}\E^{\P_B(\cdot;\omega^W)}\left[\left.g_{t_{i+1}}\left(\cdot,\omega^W\right)\right|{\cal F}_t^B\right]\left(\omega^B\right)X(\omega^B,\omega^W)d\P_B(\omega^B;\omega^W)\\ &=\int_{\Omega^B}g_{t_{i+1}}\left(\omega^B,\omega^W\right)X(\omega^B,\omega^W)d\P_B(\omega^B;\omega^W). \end{align*} Hence, we deduce finally that \begin{align*} \E^{\P}\left[\left.\int_t^Tg_s\cdot d\W_s\right|\mathcal G_t\right](\omega^B,\omega^W)&=\sum_{i=0}^{n-1}\E^{\P_B(\cdot;\omega^W)}\left[\left.g_{t_{i+1}}\left(\cdot,\omega^W\right)\right|{\cal F}_t^B\right]\left(\omega^B\right)\cdot \left(W_{t_{i+1}\wedge t}-W_{t_i\wedge t}\right)(\omega^W)\\ &=\left(\int_t^T\E^{\P_B(\cdot;\cdot)}\left[\left.g_s\right|{\cal F}_t^B\right](\omega^B,\cdot)\cdot d\W_s\right)(\omega^W). \end{align*} By a simple density argument, we deduce that the same holds for general processes $g$. Next, notice that by definition of r.p.c.d., we have for $\P_0^W-$a.e. $\omega^W\in\Omega^W$ $$\E^{\P_B(\cdot;\omega^W)}\left[\left.g_s\right|{\cal F}_t^B\right](\omega^B,\omega^W)=\E^{\P_B^{t,\omega^B}(\cdot;\omega^W)}\left[g_s^{t,\omega^B}\right](\omega^W),\ \text{for }\P_B(\cdot;\omega^W)-a.e.\ \omega^B\in\Omega^B.$$ By definition of $\P$, we are exactly saying that the above holds for $\P-$a.e. $\omega\in\Omega$. This finally proves \reff{eq:shift2}. \noindentindent Using similar argument, we show that we also have for $\P-$a.e. $\omega\in\Omega$ $$\E^{\P}\left[\left.\int_t^T\widehat F_sds\right|\mathcal G_t\right](\omega^B,\omega^W)=\int_t^T\E^{\P_B^{t,\omega^B}(\cdot;\omega^W)}\left[\widehat F_s^{t,\omega^B}\right](\omega^W)ds.$$ To sum up, we have obtained that for $\P-$a.e. $\omega\in\Omega$ \begin{align*} y_t^{\P}(\omega^B,\omega^W)=&\ \E^{\P_B^{t,\omega^B}(\cdot;\omega^W)}\left[\xi^{t,\omega^B}\right]+\left(\int_t^T\E^{\P_B(\cdot;\cdot)}\left[\left.g_s\right|{\cal F}_t^B\right](\omega^B,\cdot)\cdot d\W_s\right)(\omega^W)\\ &+\int_t^T\E^{\P_B^{t,\omega^B}(\cdot;\omega^W)}\left[\widehat F_s^{t,\omega^B}\right](\omega^W)ds. \end{align*} But, we also have (remember that by the Blumenthal $0-1$ law $y_t^{\P^{t,\omega^B}_B(\cdot)\otimes\P^0_W,t,\omega^B}$ only depends on $\omega^W$) for any $\omega^B\in\Omega^B$ and for $\P_0^W-$a.e. 
$\omega^W\in\Omega^W$ \begin{align*} y_t^{\P^{t,\omega^B}_B(\cdot)\otimes\P^0_W,t,\omega^B}(\omega^W)&=\E^{\P_B^{t,\omega^B}(\cdot)\otimes\P^0_W}\left[\left.\xi^{t,\omega^B}+\int_t^T\widehat F^{t,\omega^B}_sds+\int_t^Tg_s^{t,\omega^B}\cdot d\W_s\right|\mathcal G^t_t\right](\omega^W)\\ &=\E^{\P_B^{t,\omega^B}(\cdot)\otimes\P^0_W}\left[\left.\xi^{t,\omega^B}+\int_t^T\widehat F^{t,\omega^B}_sds+\int_t^Tg_s^{t,\omega^B}\cdot d\W_s\right|\mathcal F^W_{t,T}\right](\omega^W). \end{align*} Using the same arguments as above, we obtain \begin{align}\label{eq:tructruc} \noindentnumber y_t^{\P^{t,\omega^B}_B(\cdot)\otimes\P^0_W,t,\omega^B}(\omega^W)=&\ \E^{\P_B^{t,\omega^B}(\cdot,\omega^W)}\left[\xi^{t,\omega^B}\right]+\int_t^T\E^{\P_B^{t,\omega^B}(\cdot,\omega^W)}\left[\widehat F^{t,\omega^B}_s\right](\omega^W)ds\\ &+\left(\int_t^T\E^{\P_B^{t,\omega^B}(\cdot,\cdot)}\left[g_s^{t,\omega^B}\right]\cdot d\W_s\right)(\omega^W), \end{align} which proves the desired result. {\it Step 2: } Since we are in a Lipschitz setting, solutions to BDSDEs can be constructed via Picard iterations. Hence, using Step 1, the results holds at each step of the iteration and therefore also when passing to the limit. We emphasize that this step crucially relies on \reff{eq:aa}. \ep \subsection{Proof of Lemma \ref{Lemmaregularity}} For each $\P\in{\cal P}$, let $(\overline{{\cal Y}}^\P(T,\xi), \overline{{\cal Z}}^\P(T,\xi))$ be the solution of the BDSDE with generators $\widehat{F}$ and $g$, and terminal condition $\xi$ at time $T$. We define $\widetilde{V}^\P := V- \overline{{\cal Y}}^\P(T,\xi)$. Then, $\widetilde{V}^\P\geq 0, \ \P - a.s.$ For any $0\leq t_1\leq t_2\leq T$, let $(y^{\P,t_2},z^{\P,t_2}):= ({\cal Y}^\P(t_2,V_{t_2}), {\cal Z}^\P(t_2,V_{t_2}))$. Note that $${\cal Y}_{t_1}^\P(t_2,V_{t_2})(\omega)= {\cal Y}_{t_1}^{\P,t_1,\omega}(t_2,V_{t_2}^{t_1,\omega}), \ \P - a.s.$$ Then by the dynamic programming principle (Theorem \ref{dynam prog}) we get $$V_{t_1}\geq y^{\P,t_2}_{t_1}, \ \P - a.s.$$ Denote $ \widetilde{y}^{\P,t_2}_t:= y^{\P,t_2}_{t}- \overline{{\cal Y}}^\P_t ,~ \widetilde{z}^{\P,t_2}_t:= \widehat{a}_t^{-1/2}(z^{\P,t_2}_{t}- \overline{{\cal Z}}^\P_t).$ Then $(\widetilde{y}^{\P,t_2},\widetilde{z}^{\P,t_2})$ is solution of the following BDSDE on $[0,t_2]$ $$\widetilde{y}^{\P,t_2}_t= \widetilde{V}^\P_t+\Int_t^{t_2} f_s^\P(\widetilde{y}^{\P,t_2}_s,\widetilde{z}^{\P,t_2}_s)ds + \Int_t^{t_2} \widehat{g}_s^\P(\widetilde{y}^{\P,t_2}_s,\widetilde{z}^{\P,t_2}_s)\cdot d\W_s - \Int_t^{t_2} \widetilde{z}^{\P,t_2}_s\cdot \widehat a^{1/2}_s dB_s,$$ where \begin{align*} f_t^\P(\omega, y,z) &:= \widehat{F}_t(\omega,y+ \overline{{\cal Y}}^\P_t(\omega),\widehat{a}^{1/2}(\omega)z+ \overline{{\cal Z}}^\P_t(\omega))- \widehat{F}_t(\omega,\overline{{\cal Y}}^\P_t(\omega),\overline{{\cal Z}}^\P_t(\omega))\\ \widehat{g}_t^\P(\omega, y,z) &:= g_t(\omega,y+ \overline{{\cal Y}}^\P_t(\omega),\widehat{a}^{1/2}(\omega)z+ \overline{{\cal Z}}^\P_t(\omega))- g_t(\omega,\overline{{\cal Y}}^\P_t(\omega),\overline{{\cal Z}}^\P_t(\omega)). \end{align*} Then $\widetilde{V}^\P_{t_1}\geq \widetilde{y}^{\P,t_2}_{t_1}$. 
Therefore, $\widetilde{V}^\P$ is a positive weak doubly $f^\P-$super--martingale under $\P$ by Definition \ref{doublymartingale} (given below in the Appendix).\\ Now, we assume that the coefficient ${g}$ does not depend in $ (y,z)$, then obviously we have that $\bar{V}^\P_{t_1} \geq \bar{y}^{\P,t_2}_{t_1}$ where $$\bar{y}_t := \widetilde{y}_t + \Int_0^{t} \widehat{g}_s \cdot d\W_s \text{ and }\bar{V}_t := \widetilde{V}_t + \Int_0^{t} \widehat{g}_s \cdot d\W_s.$$ Thanks to this change of variable, we have that $(\bar{y}^{\P,t_2},\bar{z}^{\P,t_2})$ solves the following standard BSDE on $[0,t_2]$ $$\bar{y}^{\P,t_2}_t= \bar{V}^\P_t+\Int_t^{t_2} \bar{f}_s^\P(\widetilde{y}^{\P,t_2}_s,\widetilde{z}^{\P,t_2}_s)ds - \Int_t^{t_2} \bar{z}^{\P,t_2}_s\cdot \widehat a^{1/2}_s dB_s,$$ where $$\bar{f}_t^\P(\omega, y,z) := f_t(\omega,y+ \Int_0^{t} \widehat{g}_s \cdot d\W_s ,\widehat{a}^{1/2}(\omega)z ).$$ Now applying the down--crossing inequality for $f$-martingale Theorem 6 in \cite{CP2000} combined with the result concerning the classical down--crossing inequality for non necessarily positive super--martingales in \cite{Doob84} (chapter III, p. 446), we deduce that for $\P- a.e.$ $\omega$, the limit $\underset{r\in\Q\cap(t,T],r\downarrow t}{\Lim} \bar{V}^\P _r$, and consequently the limit $\underset{r\in\Q\cap(t,T],r\downarrow t}{\Lim} \widetilde{V}^\P _r$ exists for all $t\in [0,T]$. Note that $y^\P$ is continuous, $\P-a.s.$, and obviously $\bar{y}^\P$ is continuous, $\P-a.s$. Therefore, we get that the $\overline{\Lim}$ in the definition of $V^+$ is in fact a true limit, which implies that $$V_t^{+}= \underset{r\in\Q\cap(t,T],r\downarrow t}{{\Lim}}V_r, \ {\cal P} -q.s.,$$ and thus $V^{+}$ is c\`adl\`ag ${\cal P} -q.s.$ Finally, we can prove the general case when $g$ depend in $ (y,z)$ using classically the Banach fixed point theorem. \ep \section{Doubly $f-$supersolution and martingales} In this section, we extend some of the results of Peng \cite{peng:99} concerning $f-$super--solutions of BSDEs to the case of BDSDEs. In the following, we fix a probability measure $\P\in{\cal P}$ and work implicitly with $\overline{\F^B}^\P$ and $\F^W$. We introduce the following spaces for a fixed probability $\P$. \begin{itemize} \item[-] $L^{2}(\P)$ denotes the space of all ${\cal F}_T-$measurable scalar r.v. $\xi$ with $\|\xi\|^2_{L^{2}}:= \E^{\P}[|\xi|^2] < +\infty .$ \item[-] $\D^{2}(\P)$ denotes the space of $\R-$valued processes $Y$, s.t. $Y_t$ is ${\cal F}_{t}$ measurable for every $t\in[0,T]$, with $\text{ c\`adl\`ag paths, and } \ \|Y\|^2_{\D^{2}(\P)}:= \E^{\P}\left[\underset{0\leq t\leq T}{\Sup}\,|Y_t|^2 \right]< +\infty .$ \item[-] $\H^{2}(\P)$ denotes the space of all $\R^d-$valued processes $Z$ s.t. $Z_t$ is ${\cal F}_{t}$ measurable for a.e. $t\in[0,T]$, with $$\|Z\|^2_{\H^{2}(\P)}:= \E^{\P}\left[\left(\Int_0^T \|\widehat{a}^{1/2}_tZ_t\|^2 dt\right)\right]< +\infty .$$ \end{itemize} Let us be given the following objects \begin{description} \item[(i)] a terminal condition $\xi$ which is ${\cal F}_T-$measurable and in $L^2(\P)$. \item[(ii)] two maps $f:\Omega\times\R\times\R^d\rightarrow\R ,~ g:\Omega\times\R\times\R^d\rightarrow\R^l$ verifying $\bullet$ $\E\left[\Int_0^T |f(t,0,0)|^2dt\right] < +\infty$, and $\E\left[\Int_0^T\|g(t,0,0)\|^2dt\right] < +\infty.$ $\bullet$ There exist $(\mu,\alpha)\in\R_+^*\times (0,1)$ s.t. 
for any $(\omega,t,y_1,y_2,z_1,z_2)\in\Omega\times[0,T]\times\R^2\times(\R^d)^2$ \begin{align*} |f(t,\omega, y_1,z_1)-f(t,\omega,y_2,z_2)| &\leq \mu\big(|y_1-y_2|+\|z_1-z_2\|\big),\\ \|g(t,\omega,y_1,z_1)-g(t,\omega,y_2,z_2)\|^2 &\leq c|y_1-y_2|^2+\alpha\|z_1-z_2\|^2. \end{align*} \item[(iii)] a real--valued c\`adl\`ag, progressively measurable process $\{V_t, 0\leq t\leq T\}$ with $$\E\left[\underset{0\leq t\leq T}{\Sup}|V_t|^2\right] < +\infty.$$ \end{description} We want to study the following problem: to find a pair of processes $(y,z)\in\D^2(\P) \times\H^2(\P)$ satisfying \begin{align} y_t =\xi_T +\Int_t^T f_s(y_s,z_s)ds +\Int_t^T g_s(y_s,z_s)\cdot d\W_s +V_T-V_t- \Int_t^T z_s\cdot dB_s ~,~\P-a.s.\label{BDSDE f-super} \end{align} We have the following existence and uniqueness theorem \begin{Proposition} Under the above hypothesis there exists a unique pair of processes $(y,z)\in\D^2(\P) \times\H^2(\P)$ solution of BDSDE \eqref{BDSDE f-super}. \end{Proposition} \begin{proof} In the case where $V\equiv0$, the proof can be found in \cite{pp1994}. Otherwise, we can make the change of variable $\overline{y}_t:= y_t+V_t$ and treat the equivalent BDSDE \begin{align} \overline{y}_t =\xi_T +V_T+\Int_t^T f_s(\overline{y}_s-V_s,z_s)ds +\Int_t^T g_s(\overline{y}_s-V_s,z_s)\cdot d\W_s - \Int_t^T z_s\cdot dB. \end{align} \ep \end{proof} \noindentindent We also have a comparison theorem in this context \begin{Proposition} Let $\xi_1$ and $\xi_2\in L^2(\P), V^i, i=1,2$ be two adapted c\`adl\`ag processes and $f_s^i(y,z), g_s^i(y,z)$ four functions verifying the above assumption. Let $(y^i,z^i)\in\D^2(\P) \times\H^2(\P)$, $i=1,2$ be the solution of the following BDSDEs \begin{align*} y_t^i =\xi_T^i +\Int_t^T f_s^i(y_s^i,z_s^i)ds +\Int_t^T g_s(y_s^i,z_s^i)\cdot d\W_s +V_T^i-V_t^i- \Int_t^T z_s^i\cdot dB_s ,~\P-a.s, \end{align*} respectively. If we have $\P -a.s.$ that $\xi_1\geq\xi_2 , V^1-V^2$ is non decreasing, and $f_s^1(y_s^1,z_s^1)\geq f_s^2(y_s^1,z_s^1)$ then it holds that for all $t\in [0,T]$ $$y_t^1\geq y_t^2, \ \P -a.s.$$ \end{Proposition} \noindentindent For a given $\G-$stopping time, we now consider the following BDSDE \begin{align} y_t =\xi_T +\Int_{t\wedge\tau}^{\tau} f_s(y_s,z_s)ds +\Int_{t\wedge\tau}^{\tau} g_s(y_s,z_s)\cdot d\W_s +V_{\tau}-V_{t\wedge\tau}- \Int_{t\wedge\tau}^{\tau} z_s\cdot dB_s ~,~\P-a.s.\label{tau BDSDE f-super} \end{align} where $\xi\in L^2(\P)$ and $V\in\I^2(\P)$. \begin{Definition} If $y$ is a solution of BDSDE of form \reff{tau BDSDE f-super}, the we call $y$ a doubly $f-$super--solution on $[0,\tau]$. If $V\equiv 0$ in $[0,\tau]$, then we call $y$ a doubly $f-$solution. \end{Definition} \noindentindent We now introduce the notion of doubly $f-$(super)martingales. \begin{Definition}\label{doublymartingale} \item[$(i)$] A doubly $f-$martingale on $[0,T]$ is a doubly $f-$solution on $[0,T]$. \item[$(ii)$] A process $(Y_t)$ is a doubly $f-$super--martingale in the strong $($resp. weak$)$ sense if for all stopping time $\tau\leq t$ $($resp. all $t\leq T)$, we have $\E^\P[|Y_{\tau}|^2]< +\infty$ $($resp. $\E^\P[|Y_{t}|^2]< +\infty)$ and if the doubly $f$-solution $(y_s)$ on $[0,\tau]$ $($resp. $[0,t])$ with terminal condition $Y_{\tau}$ $($resp. $Y_{t})$ verifies $y_{\sigma}\leq Y_{\sigma}$ for every stopping time $\sigma\leq \tau$ $($resp. $y_s\leq Y_s$ for every $s\leq t)$. 
\end{Definition} \section{Reflected backward doubly stochastic differential equations} \label{RBDSDE:section} In this section, we want to study the problem of a reflected backward doubly stochastic differential equation (RBDSDE for short) with one c\`adl\`ag barrier. This is an extension of the work of Hamad\`ene and Ouknine \cite{HO11} for the standard reflected BSDEs to our case. So in addition to the terminal condition and generators that we used in the previous section, we need \begin{description} \item[(iv)] a barrier $\{S_t, 0\leq t\leq T\}$, which is a real-valued c\`adl\`ag ${\cal F}_t-$measurable process satisfying $S_T\leq\xi$ and $$\E\left[\underset{0\leq t\leq T}{\Sup}(S_t^+)^2\right] < +\infty.$$ \end{description} Now we present the definition of the solution of RBDSDEs with one lower barrier. \begin{Definition}\label{def:rbsde} We call $(Y,Z,K)$ a solution of the backward doubly stochastic differential equation with one reflecting lower barrier $S(.)$, terminal condition $\xi$ and coefficients $f$ and $g$, if the following holds: \begin{itemize} \item[\rm{(i)}] $Y\in\D^2(\P),~ Z\in\H^2(\P)$. \item[\rm{(ii)}] $Y_t = \xi +\Int_t^T f(s,Y_s,Z_s)ds +\Int_t^T g(s,Y_s,Z_s)\cdot d\W_s -\Int_t^T Z_s\cdot dB_s +K_T-K_t ,~ 0\leq t\leq T$. \item[\rm{(iii)}] $Y_t\geq S_t ~ ,~ 0\leq t\leq T,~ a.s.$ \item[\rm{(iv)}] If $K^c$ $($resp. $K^d)$ is the continuous $($resp. purely discontinuous$)$ part of $K$, then $$\Int_0^T (Y_{s}-S_{s})dK^c_s = 0 ,~ a.s.\ \text{and }\forall t\leq T,\ \Delta K_t^d = (S_{t^-}-Y_t)^+\mathbf{1}_{[Y_{t^-}=S_{t^-}]}.$$ \end{itemize} \label{RBDSDE} \end{Definition} \begin{Remark} The condition \rm{(iv)} implies in particular that $\Int_0^T (Y_{s^-}-S_{s^-})dK_s = 0.$ Actually \begin{align*} \Int_0^T (Y_{s^-}-S_{s^-})dK_s &= \Int_0^T (Y_{s^-}-S_{s^-})dK_s^c+\Int_0^T (Y_{s^-}-S_{s^-})dK_s^d\\ &= \Int_0^T (Y_{s^-}-S_{s})dK_s^c + \sum_{s\leq T}(Y_{s^-}-S_{s^-})\Delta K_s^d=0. \end{align*} The last term of the second equality is null since $K^d$ jumps only when $Y_{s^-}=S_{s^-}$. \ep \end{Remark} \noindentindent The main objective of this section is to prove the following theorem. \begin{Theorem} Under the above hypotheses, the RBDSDE in Definition \ref{def:rbsde} has a unique solution $(Y,Z,K)$. \label{solRBDSDE} \end{Theorem} \noindentindent Before we start proving this theorem, let us establish the same result in the case where $f$ and $g$ do not depend on $y$ and $z$. More precisely, given $f$ and $g$ such that $$\E\left[\Int_0^T |f(s)|^2ds\right]+\E\left[\Int_0^T \|g(s)\|^2ds\right] < +\infty$$ and $\xi$ as above, consider the reflected BDSDE \begin{align} Y_t = \xi +\Int_t^T f(s)ds +\Int_t^T g(s)\cdot d\W_s -\Int_t^T Z_s\cdot dB_s +K_T-K_t. \label{RBDSDE1} \end{align} \begin{Proposition} There exists a unique triplet $(Y,Z,K)$ verifies conditions of Definition \ref{RBDSDE} and satisfies \reff{RBDSDE1}. \end{Proposition} \begin{proof} \textbf{ a) Existence:} The method combines penalization and the Snell envelope method. For each $n\in\N^*$, we set $$f_n(s,y) = f(s)+ n(y_s-S_s)^-,$$ and consider the BDSDE \begin{align} Y_t^n = \xi^n +\Int_t^T f_n(s,Y_s^n)ds +\Int_t^T g(s)\cdot d\W_s -\Int_t^T Z_s^n\cdot dB_s. \label{RBDSDE2} \end{align} It is well known (see Pardoux and Peng \cite{pp1994}) that BDSDE (\ref{RBDSDE2}) has a unique solution $(Y^n,Z^n)\in\D^2(\P)\times\H^2(\P)$ such that for each $n\in\N,$ $$\E\left[\underset{0\leq t\leq T}{\Sup}|Y_t^n|^2 + \Int_0^T \|Z_s^n\|^2ds\right] <+\infty.$$ From now on the proof will be divided into three steps. 
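\noindent To fix ideas before starting (this elementary observation is ours and is not used in the sequel), consider the purely deterministic toy situation $f\equiv 0$, $g\equiv 0$, $B\equiv 0$, for which \eqref{RBDSDE2} reduces to the ODE $y_n(t)=\xi+n\int_t^T(y_n(s)-S_s)^-ds$. One can check that, as $n\to\infty$, $y_n(t)$ increases to $\max\big(\sup_{t\leq s<T}S_s,\ \xi\big)$, that is to the deterministic Snell envelope of the obstacle. The three steps below implement the same penalization mechanism in the present doubly stochastic setting.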
\noindentindent \textit{{\bf{Step 1}}}: For all $n\geq 0$ and $(s,y)\in[0,T]\times\R$, $$f_n(s,y,z)\leq f_{n+1}(s,y,z),$$ which provide by the comparison theorem, $Y_t^n\leq Y_t^{n+1},~ t\in[0,T]~ a.s.$ For each $n\in\N,$ denoting $$\bar{Y}_t^n := Y_t^n + \Int_0^t g(s) d\W_s,\ \bar{\xi}:= \xi + \Int_0^T g(s)\cdot d\W_s , \ \bar{S}_t := S_t + \Int_0^t g(s)\cdot d\W_s,$$ we have \begin{align} \bar{Y}_t^n= \bar{\xi}+\Int_t^T f(s)ds +n\Int_t^T (\bar{Y}_s^n - \bar{S}_s)^- ds -\Int_t^T Z_s^n\cdot dB_s. \end{align} The process $\bar{Y}_t^n$ satisfies \begin{align} \forall t\leq T,\ \bar{Y}_t^n= \underset{\tau \geq t}{\rm ess \, sup}\ \E\left[ \left.\Int_t^\tau f(s)ds + (\bar{Y}_\tau^n\wedge \bar{S}_\tau){\bf{1}}_{\{\tau < T\}}+ \bar{\xi}{\bf{1}}_{\{\tau =T\}}\right|{\cal G}_t\right]. \end{align} In fact, for any $n\in\N$ and $t\leq T$ we have \begin{align} \bar{Y}_t^n= \bar{\xi}+\Int_t^T f(s)ds + n \Int_t^T (\bar{Y}_s^n-\bar{S}_s )^-ds -\Int_t^T \bar{Z}_s^n\cdot dB_s. \label{penBDSDE} \end{align} Therefore for any $\G-$stopping time $\tau\geq t$ we have \begin{align} \label{penaSnellenv} \bar{Y}_t^n &=\E\left[\left.\bar{Y}_{\tau}^n+\Int_t^{\tau} f(s)ds + n \Int_t^{\tau} (\bar{Y}_s^n-\bar{S}_s )^-ds\right|{\cal G}_t\right]\noindentnumber\\ &\geq \E\left[\left.(\bar{S}_{\tau}\wedge \bar{Y}_{\tau}^n){\bf{1}}_{[\tau < T]}+ \bar{\xi}{\bf{1}}_{\{\tau = T\}}+\Int_t^{\tau} f(s)ds\right|{\cal G}_t\right], \end{align} since $\bar{Y}_{\tau}^n\geq (\bar{S}_{\tau}\wedge \bar{Y}_{\tau}^n){\bf{1}}_{[\tau < T]}+ \bar{\xi}{\bf{1}}_{\{\tau = T\}}$. On the other hand, let $\tau_t^{*}$ be the stopping time defined as follows: $$ \tau_t^{*} = \Inf \{s\geq t, \bar{K}_s^n-\bar{K}_t^n > 0\}\wedge T,$$ where $\bar{K}_t^n=n \Int_0^{t} (\bar{Y}_s^n-\bar{S}_s )^-ds$. Let us show that ${\bf{1}}_{[\tau_t^{*} < T]}\bar{Y}_{\tau_t^{*}}^n)= (\bar{S}_{\tau_t^{*}}\wedge \bar{Y}_{\tau_t^{*}}^n){\bf{1}}_{[\tau_t^{*} < T]}$. \noindentindent Let $\omega$ be fixed such that $\tau_t^{*}(\omega) < T$. Then there exists a sequence $(t_k)_{k\geq 0}$ of real numbers which decreases to $\tau_t^{*}(\omega)$ such that $\bar{Y}_{t_k}^n(\omega)\leq \bar{S}_{t_k}(\omega)$. As $\bar{Y}^n$ and $\bar{S}$ are RCLL processes then taking the limit as $k\rightarrow \infty$ we obtain $\bar{Y}_{\tau_t^{*}}^n\leq \bar{S}_{\tau_t^{*}}$ which implies ${\bf{1}}_{[\tau_t^{*} < T]}\bar{Y}_{\tau_t^{*}}^n)= (\bar{S}_{\tau_t^{*}}\wedge \bar{Y}_{\tau_t^{*}}^n){\bf{1}}_{[\tau_t^{*} < T]}$. Now from (\ref{penBDSDE}), we deduce that: \begin{align*} \bar{Y}_t^n &= \bar{Y}_{\tau_t^{*}}^n +\Int_t^{\tau_t^{*}} f(s)ds -\Int_t^{\tau_t^{*}} \bar{Z}_s^n \cdot dB_s\\ &= (\bar{S}_{\tau_t^{*}}\wedge \bar{Y}_{\tau_t^{*}}^n){\bf{1}}_{[\tau_t^{*} < T]}+\bar{\xi}{\bf{1}}_{\{\tau_t^{*} = T\}}+ \Int_t^{\tau_t^{*}} f(s)ds -\Int_t^{\tau_t^{*}} \bar{Z}_s^n\cdot dB_s. \end{align*} Taking the conditional expectation and using inequality (\ref{penaSnellenv}) we obtain: $\forall n\geq 0$, and $t\geq T$ \begin{align} \bar{Y}_t^n= \underset{\tau \geq t}{\rm ess \, sup}\ \E\left[\left. \Int_t^\tau f(s)ds + (\bar{Y}_\tau^n\wedge \bar{S}_\tau){\bf{1}}_{\{\tau < T\}}+ \bar{\xi}{\bf{1}}_{\{\tau =T\}}\right|{\cal G}_t\right]. \end{align} \noindentindent \textit{{\bf{Step 2}}}: There exists a RCLL $(Y_t)_{t\leq T}$ of $\D^2(\P)$ such that $\P-a.s.$ \begin{itemize} \item[(i)] $Y=\underset{n \rightarrow \infty}{\Lim} Y^n$ in $\H^2(\P)$, $S\leq Y$. \item[(ii)] for any $t\leq T,$ \begin{align}\label{Snellenv} Y_t= \underset{\tau \geq t}{\rm ess \, sup}\ \E\left[\left. 
\Int_t^\tau f(s)ds + \bar{S}_\tau{\bf{1}}_{\{\tau < T\}}+ \bar{\xi}{\bf{1}}_{\{\tau =T\}}\right|{\cal G}_t\right]- \Int_0^t g(s)\cdot d\W_s. \end{align} \end{itemize} Actually for $t\leq T$ let us set $$\tilde{Y}_t:= \underset{\tau \geq t}{\rm ess \, sup}\ \E\left[\left. \Int_t^\tau f(s)ds + \bar{S}_\tau{\bf{1}}_{\{\tau < T\}}+ \bar{\xi}{\bf{1}}_{\{\tau =T\}}\right|{\cal G}_t\right].$$ since $\bar{S}\in \D^2(\P)$, $f\in \H^2(\P)$ and $\bar{\xi}$ is square integrable, the process $\tilde{Y}$ belongs to $\D^2(\P)$. On the other hand for any $n\geq 0$ and $t\leq T$ we have $\bar{Y}_t^n\leq \tilde{Y}_t$. Thus there exist a $\G-$progressively measurable process $\bar{Y}$ such that $\P-a.s.$, for any $t\leq T, \ \bar{Y}_t^n\ \nearrow \bar{Y}_t\leq \tilde{Y}_t $ and we have $Y_t^n\nearrow Y_t= \bar{Y}_t-\Int_0^t g(s)\cdot d\W_s$, then $Y=\underset{n \rightarrow \infty}{\Lim} Y^n$ in $\H^2(\P).$ \noindentindent Besides, the process $\bar{Y}_\cdot^n+\Int_0^\cdot f(s)ds$ is a c\`adl\`ag super--martingale as the Snell envelope of $$\left(\int_0^{\cdot} f(s)ds+\bar{S}_{\cdot}\wedge \bar{Y}_{\cdot}^n\right){\bf{1}}_{[\cdot< T]}+ \bar{\xi}{\bf{1}}_{\{\cdot = T\}},$$ and it converges increasingly to $\bar{Y}_\cdot+\int_0^\cdot f(s)ds$. It follows that the latter process is a c\`adl\`ag super--martingale. Hence, the process $Y$ is also $\G-$progressively measurable, c\`adl\`ag, and belongs to $\D^2(\P)$. Even more than that, $Y_t$ is ${\cal F}_t$-measurable for every $t\in[0,T]$ as the limit of $Y_t^n$, which has this property. \noindentindent Next let us prove that $Y\geq S$. We have $$\E[Y_0^n]=\E\left[\xi+\Int_0^Tf(s)ds\right]+\E\left[\Int_0^T n(Y_s^n-S_s)^-ds\right].$$ Dividing the two sides by $n$ and taking the limit as $n\rightarrow\infty$, we obtain $$\E\left[\Int_0^T (Y_s-S_s)^-ds\right]=0.$$ Since the processes $Y$ and $S$ are c\`adl\`ag, then, $\P-a.s.$, $Y_t\geq S_t, $ for $t< T$. But $Y_T=\xi\geq S_T$, therefore $Y\geq S$. \noindentindent Finally let us show that $Y$ satisfies (\ref{Snellenv}). But this is a direct consequence of the continuity of the Snell envelope through sequences of increasing c\`adl\`ag processes. In fact on the one hand, the sequence of increasing c\`adl\`ag processes $((\bar{S}_{t}\wedge \bar{Y}_{t}^n){\bf{1}}_{[t < T]}+ \bar{\xi}{\bf{1}}_{\{t = T\}})_{t\leq T})_{t\leq T}$ converges increasingly to the c\`adl\`ag process $(\bar{S}_{t}{\bf{1}}_{[t < T]}+ \bar{\xi}{\bf{1}}_{\{t = T\}})_{t\leq T}){[t\leq T}$ since $\bar{Y}_{t}\geq \bar{S}_{t}$. Therefore, $$\Int_0^tf(s)ds+\bar{Y}_{t}^n\longrightarrow \underset{\tau \geq t}{\rm ess \, sup}\ \E\left[\left. \Int_0^\tau f(s)ds + \bar{S}_\tau{\bf{1}}_{\{\tau < T\}}+ \bar{\xi}{\bf{1}}_{\{\tau =T\}}\right|{\cal G}_t\right]=\Int_0^tf(s)ds+\bar{Y}_{t},$$ which implies that \begin{align*} Y_t &=\bar{Y}_t-\Int_0^t g(s)\cdot d\W_s= \underset{\tau \geq t}{\rm ess \, sup}\ \E\left[\left. \Int_t^\tau f(s)ds + \bar{S}_\tau{\bf{1}}_{\{\tau < T\}}+ \bar{\xi}{\bf{1}}_{\{\tau =T\}}\right|{\cal G}_t\right]- \Int_0^t g(s)\cdot d\W_s. \end{align*} \noindentindent \textit{{\bf{Step 3}}}: We know from (\ref{Snellenv}) that the process $\int_0^\cdot f(s)ds+\bar{Y}_{\cdot}^n$ is a Snell envelope. 
Then, there exist a process $K\in\I^2(\P)$ and a $\G$-martingale $M$ such that $$ \Int_0^tf(s)ds+Y_t+ \Int_0^t g(s)\cdot d\W_s = M_t-K_t ,\ 0\leq t\leq T.$$ Additionally, $K=K^c+K^d$, where $K^c$ is continuous and non-decreasing, and $K^d$ is non-decreasing, purely discontinuous and predictable, such that for any $t\leq T$, $\Delta K_t^d= (S_{t^-}-Y_t)^+{\bf{1}}_{\{Y_{t^-}=S_{t^-}\}}$. Now the martingale $M$ belongs to $\D^2(\P)$, so that It\^o's martingale representation theorem implies the existence of a $\G$-predictable process $Z\in\H^2(\P)$ such that $$M_t=M_0+\Int_0^t Z_s\cdot dB_s, \quad 0\leq t\leq T,\ \P-a.s.$$ Hence $$Y_t=Y_0- \Int_0^tf(s)ds-\Int_0^t g(s)\cdot d\W_s+\Int_0^t Z_s\cdot dB_s-K_t ,\ 0\leq t\leq T.$$ The proof of $\Int_0^T (Y_{s}-S_{s})dK^c_s = 0$ is the same as in \cite{HO11}, so we omit it. \noindent It remains to show that ${Z_t}$ and ${K_t}$ are in fact ${\cal F}_t-$measurable. For $K_t$, this is obvious since it is the limit of $K_t^n= \Int_0^t n(Y_s^n-S_s)^- ds$, which is ${\cal F}_t-$measurable for each $t\leq T$. Now $$ \Int_t^T Z_s\cdot dB_s = \xi +\Int_t^T f(s)ds +\Int_t^T g(s)\cdot d\W_s - Y_t+K_T-K_t,$$ and the right-hand side is ${\cal F}_T^B\vee{\cal F}_{t,T}^{W}$-measurable. Hence, from It\^o's martingale representation theorem, $(Z_s)_{t\leq s\leq T}$ is ${\cal F}_s^B\vee{\cal F}_{t,T}^{W}$-adapted. Consequently $Z_s$ is ${\cal F}_s^B\vee{\cal F}_{t,T}^{W}$-measurable for any $t<s$, so it is ${\cal F}_s^B\vee{\cal F}_{s,T}^{W}$-measurable. \noindent \textbf{b) Uniqueness:} Under the Lipschitz continuity conditions, the proof of uniqueness is standard in BSDE theory (see e.g. the proof of Proposition 2.1 in \cite{AM10}). \ep \end{proof} \noindent The existence of a solution of the RBDSDE in Theorem \ref{solRBDSDE} is then obtained via a standard Banach fixed point argument, as for reflected BSDEs (see for instance El Karoui, Hamad\`ene and Matoussi \cite{ElkMH08}). \end{appendix} \end{document}
\begin{document} \title{A summary of categorical structures in $\Cat{Poly}$} \author{David I. Spivak} \maketitle \begin{abstract} In this document, we collect a list of categorical structures on the category $\Cat{Poly}$ of polynomial functors. There is no implied claim that this list is in any way complete. It includes: ten monoidal structures, eight of which are symmetric, two of which are closed, several of which distribute, several of which interact duoidally; it also includes a right-coclosure and two indexed left coclosures; it also includes various adjunctions of which $\Cat{Poly}$ is a part, including the free monad and cofree comonad and their interaction with various monoidal structures. \end{abstract} \tableofcontents* This document is only meant as a handy guide to the abundance of structure in $\Cat{Poly}$. In particular, we do not supply proofs, though we have written about most of these structures elsewhere; see \cite{spivak2022poly}, \cite{spivak2021functorial}, and \cite{spivak2022polynomial}. For everything else written here, one can consider it to be only conjecture, since in some cases we have not checked all the details. Hence, if someone proves something written here---something which has not been proven elsewhere---that person should be taken to have the professional ``priority'' and credit. In particular, we wish to claim no credit for originality of anything contained in this document, though most of it was discovered independently by the author, so we also do not supply additional references. We also make absolutely no claim of completeness. \chapter{Background and notation} A polynomial functor $p\colon\Cat{Set}\to\Cat{Set}$ is any functor that's isomorphic to a coproduct of representables \[ p\coloneqq\sum_{I: p(1)}\mathcal{y}^{p[I]}. \] We will typically use the above notation---which we call \emph{standard form}---appropriately modified for $p'$, $q$, etc., e.g. \[ p'\coloneqq\sum_{I': p'(1)}\mathcal{y}^{p'[I']} \qquad\text{or}\qquad q\coloneqq\sum_{J: q(1)}\mathcal{y}^{q[J]}. \] We refer to elements of $p(1)$ as \emph{positions} of $p$ and, for each $I: p(1)$, we refer to the elements of $p[I]$ as \emph{directions} at $I$. A morphism between polynomials is a natural transformation $\varphi\colon p\to q$; by the Yoneda lemma and the universal property of coproducts, it consists of a function $\varphi_1\colon p(1)\to q(1)$ and, for each $I: p(1)$, a function $\varphi^\sharp_I\colon q[\varphi_1(I)]\to p[I]$. A map $\varphi$ is called \emph{vertical} if $\varphi_1$ is the identity on positions; it is called \emph{cartesian} if for each $I: p(1)$ the function $\varphi_I^\sharp$ is a bijection; see \cref{chap.bifib} for more on this. The category of polynomial functors and morphisms between them is denoted $\Cat{Poly}$; the wide subcategory of polynomial functors and cartesian morphisms between them is denoted $\Cat{Poly}^{\Cat{cart}}$. The forgetful functor $\Cat{Poly}\to\Cat{Fun}(\Cat{Set},\Cat{Set})$ preserves limits and coproducts. The forgetful functor $\Cat{Poly}^{\Cat{cart}}\to\Cat{Fun}(\Cat{Set},\Cat{Set})$ preserves all limits and colimits.
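For the sake of concreteness (the small running example below is ours and is not part of the cited references), consider the polynomial $p\coloneqq\mathcal{y}^2+\mathcal{y}+1$. In standard form it has three positions, $p(1)=\{1,2,3\}$, with direction-sets $p[1]=\{a,b\}$, $p[2]=\{c\}$, and $p[3]=\emptyset$; as a functor it sends a set $X$ to $X^2+X+1$. A morphism $\varphi\colon p\to\mathcal{y}$ must send every position of $p$ to the unique position of $\mathcal{y}$ and then choose, for each position $I$ of $p$, a direction $\varphi^\sharp_I(\ast)\in p[I]$; since $p[3]=\emptyset$ there is no such morphism, whereas there are exactly $2\cdot 1=2$ morphisms $\mathcal{y}^2+\mathcal{y}\to\mathcal{y}$.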
\chapter{Coproducts and distributive monoidal structures} The category $\Cat{Poly}$ has coproducts, given by the following formula: \begin{equation} p+q\coloneqq\sum_{I: p(1)}\mathcal{y}^{p[I]}+\sum_{J: q(1)}\mathcal{y}^{q[J]} \end{equation} If one wants the formula to be in standard form, use case logic in the exponent: \begin{equation} p+q\cong\sum_{X: p(1)+q(1)}\mathcal{y}^{\fontsize{8pt}{8pt}\selectfont \begin{cases} p[X]&\tn{ if }X\in p(1)\\ q[X]&\tn{ if }X\in q(1) \end{cases}\normalsize } \end{equation} For any symmetric monoidal product $(I,\cdot)$ on $\Cat{Set}$, there is a corresponding symmetric monoidal structure $(\mathcal{y}^I,\odot)$ on $\Cat{Poly}$, where the monoidal product is given as follows:\footnote{The symmetric monoidal structure $\odot$ on $\Cat{Poly}$ is the Day convolution of the $\cdot$ structure on $\Cat{Set}$.} \begin{equation} p\odot q\coloneqq\sum_{(I,J): p(1)\times q(1)}\mathcal{y}^{p[I]\cdot q[J]}. \end{equation} It always distributes over $+$: \begin{equation} p\odot(q_1+q_2)\cong (p\odot q_1)+(p\odot q_2). \end{equation} For any set $S$, there is a monoidal structure $(0,\vee_S)$ on $\Cat{Set}$,\footnote{I learned the $\vee_1$ monoidal structure (pronounced ``or'') on $\Cat{Set}$ from Richard Garner, and Solomon Bothwell later informed me that it's called \href{https://hackage.haskell.org/package/these}{\texttt{These}} in Haskell. I learned the $\vee_S$ for $S\geq 2$, as well as other monoidal structures, from \href{https://mathoverflow.net/questions/155939/what-other-monoidal-structures-exist-on-the-category-of-sets}{mathoverflow}.} where \begin{equation} A\vee_SB\coloneqq A+A\times S\times B + B. \end{equation} We denote the $S=1$ case simply by $\vee\coloneqq\vee_1$. When $S=0$ this is the usual coproduct, $A\vee_0B\cong A+B$, which we will treat separately since it is very important. The other important monoidal product on $\Cat{Set}$ for us is $\times$.\footnote{We sometimes denote products using juxtaposition, $AB\coloneqq A\times B$. We may also do this for polynomials $pq\coloneqq p\times q$.} These lead to the following symmetric monoidal products on $\Cat{Poly}$: \begin{align} \label{eqn.times} p\times q&\coloneqq\sum_{(I,J): p(1)\times q(1)}\mathcal{y}^{p[I]+ q[J]}\\ \label{eqn.otimes} p\otimes q&\coloneqq\sum_{(I,J): p(1)\times q(1)}\mathcal{y}^{p[I]\times q[J]}\\ \label{eqn.ovee} p\ovee_S q&\coloneqq\sum_{(I,J): p(1)\times q(1)}\mathcal{y}^{p[I]\vee_S q[J]} \end{align} The first two are highly relevant: the first ($\times$) is the categorical product, and the second ($\otimes$) is the \emph{Dirichlet} product, both of which come up often in practice. Writing $\ovee\coloneqq\ovee_1$, note that there is a pullback square in $\Cat{Poly}$: \begin{equation} \begin{tikzcd} p\ovee q\ar[r]\ar[d]& p\times q\ar[d]\\ p\otimes q\ar[r]& p(1)\times q(1)\ar[ul, phantom, very near end, "\lrcorner"] \end{tikzcd} \end{equation} The Dirichlet product commutes with connected limits in either variable: for any connected category $\cat{J}$, functor $p\colon\cat{J}\to\Cat{Poly}$, and polynomial $q$, the induced map \begin{equation} \left(\lim_{j:\cat{J}}p_j\right)\otimes q \To{\cong} \lim_{j:\cat{J}}(p_j\otimes q) \end{equation} is an isomorphism.
It commutes with all colimits in either variable: for any category $\cat{J}$, functor $p\colon\cat{J}\to\Cat{Poly}$, and polynomial $q$, the induced map \begin{equation} \colim_{j:\cat{J}}(p_j\otimes q) \To{\cong} \left(\colim_{j:\cat{J}}p_j\right)\otimes q \end{equation} is an isomorphism. The $\otimes$ and $\times$ operations together make $\Cat{Poly}$ into a linearly distributive category,\footnote{This fact \eqref{eqn.shapiro_0}, along with \cref{eqn.shapiro_1,eqn.shapiro_2}, was discovered either by or in conjunction with Brandon Shapiro.} with distributivity map \begin{equation}\label{eqn.shapiro_0} p\times(q\otimes r)\to (p\times q)\otimes r. \end{equation} \chapter{Substitution product} There is a nonsymmetric monoidal structure on $\Cat{Poly}$ given by composing polynomials. Its unit is $\mathcal{y}$ and its monoidal product is given by the following formula: \begin{equation} p\mathbin{\triangleleft} q\coloneqq\sum_{I: p(1)}\sum_{J\colon p[I]\to q(1)}\mathcal{y}^{\sum\limits_{i: p[I]}q[Ji]} \end{equation} If $p\to p'$ and $q\to q'$ are cartesian, then so is $p\mathbin{\triangleleft} q\to p'\mathbin{\triangleleft} q'$. If $q\to q'$ is vertical, then so is $p\mathbin{\triangleleft} q\to p\mathbin{\triangleleft} q'$. The monoidal structure $\mathbin{\triangleleft}$ is left distributive with respect to $+$ and $\times$: \begin{align} 0\mathbin{\triangleleft} q&\cong 0&&(p+p')\mathbin{\triangleleft} q\cong (p\mathbin{\triangleleft} q)+(p'\mathbin{\triangleleft} q)\label{eqn.comp_plus}\\ 1\mathbin{\triangleleft} q&\cong 1&&(p\times p')\mathbin{\triangleleft} q\cong (p\mathbin{\triangleleft} q)\times(p'\mathbin{\triangleleft} q)\label{eqn.comp_times} \end{align} In fact, $\mathbin{\triangleleft}$ preserves all limits in the left variable, and it preserves connected limits in the right variable. If $p$ is finitary (each $p[I]$ is a finite set) then for any sifted (e.g.\ filtered) category $\cat{J}$ and diagram $q\colon \cat{J}\to\Cat{Poly}$, the natural map \begin{equation}\label{eqn.finitary_tri_sifted} \colim_{j: \cat{J}}(p\mathbin{\triangleleft} q_j) \To{\cong} p\mathbin{\triangleleft}\colim_{j: \cat{J}}q_j \end{equation} is an isomorphism. For any category $\cat{I}$, diagram $p\colon \cat{I}\to\Cat{Poly}^{\Cat{cart}}$ of cartesian maps, and polynomial $q:\Cat{Poly}$, the natural map \begin{equation}\label{eqn.cart_tri} \colim_{i: \cat{I}}(p_i\mathbin{\triangleleft} q) \To{\cong} (\colim_{i: \cat{I}}p_i)\mathbin{\triangleleft} q \end{equation} is an isomorphism. For any $p:\Cat{Poly}$ the operation $(p\mathbin{\triangleleft} -)$ preserves monomorphisms and epimorphisms in $\Cat{Poly}$. For any $q:\Cat{Poly}$, the operation $(-\mathbin{\triangleleft} q)$ preserves monomorphisms, and if $q\neq 0$ then it also preserves epimorphisms.\footnote{The unique map $\mathcal{y}\to 1$ is an epimorphism, but $\mathcal{y}\mathbin{\triangleleft} 0\to 1\mathbin{\triangleleft} 0$ is not.} The monoidal structure $\mathbin{\triangleleft}$ is normal duoidal with $\otimes$, i.e.\ they have the same unit, $\mathcal{y}$, and there is a natural transformation \begin{equation}\label{duoidal} (p_1\mathbin{\triangleleft} p_2)\otimes(q_1\mathbin{\triangleleft} q_2)\longrightarrow(p_1\otimes q_1)\mathbin{\triangleleft}(p_2\otimes q_2) \end{equation} satisfying the usual laws.
Using $\mathcal{y}$ in place of $p_1$, $p_2$, $q_1$, or $q_2$, \eqref{duoidal} induces natural maps \begin{equation} [p,q]\to[r\mathbin{\triangleleft} p, r\mathbin{\triangleleft} q] \quad\text{and}\quad [p,q]\to[p\mathbin{\triangleleft} r, q\mathbin{\triangleleft} r] \end{equation} and \begin{equation} r\mathbin{\triangleleft}[p,q]\to[p,r\mathbin{\triangleleft} q] \quad\text{and}\quad [p,q]\mathbin{\triangleleft} r\to [p,q\mathbin{\triangleleft} r]. \end{equation} The identity functor $\Cat{Poly}\to\Cat{Poly}$ is lax monoidal as a functor $(\Cat{Poly},\mathcal{y},\mathbin{\triangleleft})\to(\Cat{Poly},\mathcal{y},\otimes)$, i.e.\ for every $p,q$ there is a map of polynomials \begin{equation}\label{eqn.indep} p\otimes q\to p\mathbin{\triangleleft} q \end{equation} satisfying the usual laws. This map is derived from \eqref{duoidal} by taking $p_1\coloneqq p$, $q_1\coloneqq\mathcal{y}$, $p_2\coloneqq\mathcal{y}$, and $q_2\coloneqq q$. Note that if $p$ is linear or $q$ is representable, then \eqref{eqn.indep} is an isomorphism: \begin{equation} A\mathcal{y}\otimes q\cong A\mathcal{y}\mathbin{\triangleleft} q \quad\text{and}\quad p\otimes\mathcal{y}^A\cong p\mathbin{\triangleleft} \mathcal{y}^A. \end{equation} There are some natural maps that combine $\mathbin{\triangleleft}$, $\times,$ and $\otimes$. Firstly we have \begin{equation} (p_1\mathbin{\triangleleft} p_2)\times(q_1\mathbin{\triangleleft} q_2)\to (p_1\otimes q_1)\mathbin{\triangleleft} (p_2\times q_2) \quad\text{and}\quad 1\to\mathcal{y}\mathbin{\triangleleft} 1. \end{equation} These commute with the duoidality maps \eqref{duoidal}. There are also natural maps that combine $\mathbin{\triangleleft}$, $\times$, and $+$. \[ (p_1\mathbin{\triangleleft} p_2)\times (q_1\mathbin{\triangleleft} q_2)\to (p_1\times q_1)\mathbin{\triangleleft}(p_2+q_2) \quad\text{and}\quad 1\to1\mathbin{\triangleleft} 0. \] Together these assemble into maps that sandwich a single $\times$ between arbitrary-length layers of $\otimes$'s and $+$'s, i.e.\ of the following form for any $i,j:\mathbb{N}$ and $2(i+1+j)$-many polynomials denoted $\myred{p_{-i}},\myred{q_{-i}},\ldots,\myred{p_{-1}},\myred{q_{-1}},p_0,\mygreen{q_0} ,\mygreen{p_1},\mygreen{q_1},\ldots,\mygreen{p_j},\mygreen{q_j}:\Cat{Poly}$, \begin{multline} (\myred{p_{-i}}\mathbin{\triangleleft}\cdots\mathbin{\triangleleft} \myred{p_{-1}}\mathbin{\triangleleft} p_0\mathbin{\triangleleft} \mygreen{p_1}\mathbin{\triangleleft}\cdots\mathbin{\triangleleft} \mygreen{p_j}) \times (\myred{q_{-i}}\mathbin{\triangleleft}\cdots\mathbin{\triangleleft} \myred{q_{-1}}\mathbin{\triangleleft} q_0\mathbin{\triangleleft} \mygreen{q_1}\mathbin{\triangleleft}\cdots\mathbin{\triangleleft} \mygreen{q_j}) \to\\ (\myred{p_{-i}}\otimes \myred{q_{-i}})\mathbin{\triangleleft}\cdots\mathbin{\triangleleft}(\myred{p_{-1}}\otimes \myred{q_{-1}})\mathbin{\triangleleft}(p_0\times q_0)\mathbin{\triangleleft}(\mygreen{p_1}+\mygreen{q_1})\mathbin{\triangleleft}\cdots\mathbin{\triangleleft}(\mygreen{p_j}+\mygreen{q_j}) \end{multline} There are other interesting aspects of the substitution product $\mathbin{\triangleleft}$. In particular, monoids with respect to $\mathbin{\triangleleft}$ generalize $\Sigma$-free operads. Comonoids with respect to $\mathbin{\triangleleft}$ are exactly categories. Bicomodules with respect to $\mathbin{\triangleleft}$ are parametric right adjoints between copresheaf categories.
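To make the substitution formula concrete, here is a small worked computation (added for illustration; it is immediate from the definition, using that $\mathbin{\triangleleft}$ is composition of functors): \[ (\mathcal{y}^2+\mathcal{y})\mathbin{\triangleleft}(\mathcal{y}+1)\;\cong\;(\mathcal{y}+1)^2+(\mathcal{y}+1)\;\cong\;\mathcal{y}^2+3\mathcal{y}+2. \] The position/direction formula gives the same answer: the position with two directions contributes one summand $\mathcal{y}^2$, two summands $\mathcal{y}$, and one summand $1$ (one for each of the four functions $J\colon 2\to 2$), while the position with one direction contributes $\mathcal{y}+1$ (one summand for each function $J\colon 1\to 2$).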
\chapter{Monoidal closures} There are closures for $\times$, $\otimes$, and $\ovee_S$ for each $S:\Cat{Set}$ (\cref{eqn.times,eqn.otimes,eqn.ovee}), given by \begin{align} q^p&\coloneqq \prod_{I: p(1)}q\mathbin{\triangleleft}(p[I]+\mathcal{y})\label{eqn.cart_cl}\\ [p,q]&\coloneqq\prod_{I: p(1)}q\mathbin{\triangleleft}(p[I]\times\mathcal{y})\\ \present{p,q}_S&\coloneqq\prod_{I: p(1)}q\mathbin{\triangleleft}(p[I]+p[I]\times S\mathcal{y}+\mathcal{y}) \end{align} These satisfy the defining universal properties: \begin{align} \Cat{Poly}(p',q^p)&\cong\Cat{Poly}(p'\times p,q)\\ \Cat{Poly}(p',[p,q])&\cong\Cat{Poly}(p'\otimes p,q)\\ \Cat{Poly}(p',\present{p,q}_S)&\cong\Cat{Poly}(p'\ovee_S p,q) \end{align} The first one, $q^p$, is the Cartesian closure; its standard form is not particularly enlightening, and neither is that of the third one, $\present{p,q}$. The second one, $[p,q]$, is what we call the \emph{Dirichlet closure}; it has a very nice standard form: \[ [p,q]\cong\sum_{\varphi:\Cat{Poly}(p,q)}\mathcal{y}^{\sum\limits_{I: p(1)}q[\varphi_1I]} \] where $\varphi_1\colon p(1)\to q(1)$ is the $1$-component of the natural transformation $\varphi$. Another nice representation is \begin{equation} [p,q]\cong\prod_{I: p(1)}\sum_{J: q(1)}\prod_{j: q[J]}\sum_{i: p[I]}\mathcal{y} \end{equation} The cartesian closure satisfies all the usual arithmetic properties: \begin{gather} q^0\cong1,\quad q^{p_1+p_2}\cong (q^{p_1})\times(q^{p_2}),\quad 1^p\cong 1,\quad (q_1\times q_2)^p\cong q_1^p\times q_2^p,\quad q^1\cong q,\quad q^{p_1\times p_2}\cong (q^{p_2})^{p_1} \end{gather} The Dirichlet closure has only some of the analogous properties: \begin{gather} [0,p]\cong1,\quad [p_1+p_2,q]\cong [p_1,q]\times[p_2,q],\quad [\mathcal{y},q]\cong q,\quad [p_1\otimes p_2,q]\cong[p_1,[p_2,q]] \end{gather} The following is true of any monoidal closure, but we include it for convenience: \begin{equation} q_1^{p_1}\times q_2^{p_2}\to (q_1q_2)^{p_1p_2} \quad\text{and}\quad [p_1,q_1]\otimes[p_2,q_2]\to[p_1\otimes p_2,q_1\otimes q_2] \end{equation} The cartesian closure also interacts with substitution as follows: \begin{equation} r\mathbin{\triangleleft} (q^p)\to (r\mathbin{\triangleleft} q)^p \end{equation} and this map is an isomorphism in case $p\cong\mathcal{y}^A$ for some $A:\Cat{Set}$. The diagonal $p\to p\times p$ induces a map \begin{equation} q^p\to (pq)^p \end{equation} The functor $\mathcal{y}^-\colon(\Cat{Poly},1,\times)^\tn{op}\to(\Cat{Poly},\mathcal{y},\mathbin{\triangleleft})$ is lax monoidal \begin{equation} \mathcal{y}\cong\mathcal{y} \quad\text{and}\quad \mathcal{y}^p\mathbin{\triangleleft}\mathcal{y}^q\to\mathcal{y}^{p\times q} \end{equation} The cartesian closure interacts with coproducts and products via maps \begin{equation} q^p\to(q+r)^{p+r} \quad\text{and}\quad q^p\to(qr)^{pr} \end{equation} natural in $p:\Cat{Poly}^\tn{op}$, $q:\Cat{Poly}$, and dinatural in $r$.
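As a small sanity check on the closure formula for $[p,q]$ (a worked computation added here; both isomorphisms reappear in the list of Dirichlet-homs into $\mathcal{y}$ below), take $q\coloneqq\mathcal{y}$ and $p$ either linear or representable: \[ [A\mathcal{y},\mathcal{y}]\cong\prod_{a: A}\mathcal{y}\mathbin{\triangleleft}(1\times\mathcal{y})\cong\mathcal{y}^A \qquad\text{and}\qquad [\mathcal{y}^A,\mathcal{y}]\cong\mathcal{y}\mathbin{\triangleleft}(A\times\mathcal{y})\cong A\mathcal{y}, \] using that $\mathcal{y}\mathbin{\triangleleft} r\cong r$ for any $r$.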
Because of duoidality \eqref{duoidal}, the $\otimes$-closure interacts with substitution: \begin{equation}\label{eqn.shapiro_1} \mathcal{y}\cong[\mathcal{y},\mathcal{y}] \quad\text{and}\quad [p_1,q_1]\mathbin{\triangleleft}[p_2,q_2]\to [p_1\mathbin{\triangleleft} p_2,q_1\mathbin{\triangleleft} q_2] \end{equation} Dirichlet-mapping into $\mathcal{y}$ is often of interest; we have the following maps and isomorphisms: \begin{align} [\mathcal{y},\mathcal{y}]&\cong\mathcal{y}\\ [p,\mathcal{y}]\times[q,\mathcal{y}]&\cong[p+q,\mathcal{y}]\\ \mathcal{y}^A&\cong[A\mathcal{y},\mathcal{y}]\\ [p,\mathcal{y}]+[q,\mathcal{y}]&\to[pq,\mathcal{y}]\\ A\mathcal{y}&\cong[\mathcal{y}^A,\mathcal{y}]\\ [p,\mathcal{y}]\otimes[q,\mathcal{y}]&\to[p\otimes q,\mathcal{y}]\\ [p,\mathcal{y}]\mathbin{\triangleleft}[q,\mathcal{y}]&\to[p\mathbin{\triangleleft} q,\mathcal{y}] \end{align} \chapter{Coclosures for substitution and Dirichlet product} The left Kan extension of a polynomial functor $p$ along another polynomial functor $q$ is again a polynomial functor, which we denote \begin{equation} \lens{p}{q}\coloneqq\sum_{I: p(1)}\mathcal{y}^{q\mathbin{\triangleleft}\; (p[I])} \end{equation} This satisfies the following universal property of a Kan extension, i.e.\ a right-coclosure:\footnote{I learned the right-coclosure from Josh Meyers. I learned the in-retrospect-obvious fact that it is the same as a left Kan extension from Todd Trimble.} \begin{equation} \Cat{Poly}\left(\lens{p}{q},p'\right)\cong\Cat{Poly}\left(p,p'\mathbin{\triangleleft} q\right). \end{equation} The coclosure $\lens{p}{q}$ is covariant in $p$ and contravariant in $q$: \begin{equation} \begin{prooftree} \Hypo{p\to p'} \Hypo{q\leftarrow q'} \Infer2{\lens{p}{q}\to\lens{p'}{q'}} \end{prooftree} \end{equation} and if $p\to p'$ is vertical or cartesian, then so is $\lens{p}{q}\to\lens{p'}{q}$, respectively. Just like evaluation is the most common use of a closure, co-evaluation is the most common use of this coclosure: for any $p,q:\Cat{Poly}$ one has \begin{equation} p\to\lens{p}{q}\mathbin{\triangleleft} q \end{equation} Since $\lens{-}{q}$ is a left adjoint, it interacts with $+$ by \begin{equation} \lens{0}{q}=0 \quad\text{and}\quad \lens{p+p'}{q}\cong\lens{p}{q}+\lens{p'}{q}. \end{equation} The coclosure interacts with $\mathbin{\triangleleft}$ by \begin{equation} \lens{\lens{p}{q}}{q'}\cong\lens{p}{q'\mathbin{\triangleleft} q} \quad\text{and}\quad p\cong\lens{p}{\mathcal{y}} \end{equation} \begin{equation} \lens{p\mathbin{\triangleleft} p'}{q}\to p\mathbin{\triangleleft}\lens{p'}{q} \end{equation} The coclosure interacts with $\otimes$ via vertical maps \begin{equation} \lens{\mathcal{y}}{\mathcal{y}}\cong\mathcal{y} \quad\text{and}\quad \lens{p_1\otimes p_2}{q_1\otimes q_2}\to\lens{p_1}{q_1}\otimes\lens{p_2}{q_2} \end{equation} It interacts with the $\otimes$-closure (Dirichlet-hom) by a vertical map \begin{equation} \left[\lens{p}{q},p'\right]\to[p,p'\mathbin{\triangleleft} q] \end{equation} and a map \begin{equation}\label{eqn.shapiro_2} \lens{p\otimes p'}{q}\to\lens{p}{[p',q]}. \end{equation} For any set $A$, we have $\lens{p}{\mathcal{y}+A}\cong p\times\mathcal{y}^A$ by \eqref{eqn.cart_cl}.
More importantly we have \begin{equation}\label{eqn.left_adjoints} \lens{p}{A\mathcal{y}}\cong p\mathbin{\triangleleft}\mathcal{y}^A\cong p\otimes\mathcal{y}^A \quad\text{and}\quad A\mathcal{y}\otimes\lens{p}{q}\cong A\mathcal{y}\mathbin{\triangleleft}\lens{p}{q}\cong \lens{A\mathcal{y}\mathbin{\triangleleft} p}{q} \end{equation} The isomorphisms in \cref{eqn.left_adjoints} generalize to bicomodules---namely $A\mathcal{y}$ and $\mathcal{y}^A$ can be replaced by any pair of left and right adjoint bicomodules---even though that is beyond the scope of this document. Indeed, quite a few of the structures in this document generalize to the bicomodule setting. For any polynomial monad $(m,\eta,\mu)$, the corresponding Lawvere theory is the $\mathbin{\triangleleft}$-comonad (category) \begin{equation} \text{Law}(m)\cong\lens{u}{u\mathbin{\triangleleft} m} \end{equation} where $u\coloneqq\sum_{N:\mathbb{N}}\mathcal{y}^N$. In other words, $\lens{u}{u\mathbin{\triangleleft} m}$ is a compact representation of what usually has a long description: ``the full subcategory of (the opposite of (the Kleisli category for $m$ on $\Cat{Set}$)), spanned by the finite sets.'' There is also an \emph{indexed} left $\mathbin{\triangleleft}$-coclosure. That is, for any function $f\colon p(1)\to q(1)$, define \begin{equation} p\cocl{f}q\coloneqq \sum_{I: p(1)}q[fI]\,\mathcal{y}^{p[I]}. \end{equation} This satisfies the following indexed-adjunction formula:\footnote{Note that the indexed adjunction \eqref{indexed_adjunction} is not natural in $q:\Cat{Poly}$, but it is natural in $q:\Cat{Poly}^{\Cat{cart}}$.} \begin{equation}\label{indexed_adjunction} \Cat{Poly}(p,q\mathbin{\triangleleft} r)\cong\sum_{f\colon p(1)\to q(1)}\Cat{Poly}(p\cocl{f}q,r) \end{equation} Given $\varphi\colon p\to q\mathbin{\triangleleft} r$, we denote its image by $(\varphi.1,\varphi.2)$, where $\varphi.1\colon p(1)\to q(1)$ and $\varphi.2\colon (p\cocl{\varphi.1}q)\to r$. The indexed coclosure is a very well-behaved structure. \begin{align} p\cocl{!}\mathcal{y}^A&\cong Ap\\ p\cocl{\mathrm{id}}p&\cong p_*\label{eqn.deriv}\\ (p+p')\cocl{(f,f')}q&\cong(p\cocl{f}q)+(p'\cocl{f'}q)\\ p\cocl{(f,g)}(q\times q')&\cong(p\cocl{f}q)+(p\cocl{g}q')\\ (p\cocl{f}q)\times p'&\cong(p\times p')\cocl{(p\times!)\mathbin{\fatsemi} f}q\\ (p\cocl{f}q)\mathbin{\triangleleft} p'&\cong(p\mathbin{\triangleleft} p')\cocl{(p\mathbin{\triangleleft}!)\mathbin{\fatsemi} f}q\\ p\cocl{f}(q\mathbin{\triangleleft} r)&\cong(p\cocl{f.1}q)\cocl{f.2}r\\ (p\otimes p')\cocl{f\otimes f'}(q\otimes q')&\cong(p\cocl{f}q)\otimes(p'\cocl{f'}q')\\ [p,q\mathbin{\triangleleft} r]&\cong\sum_{f\colon p(1)\to q(1)}[p\cocl{f}q,r] \end{align} In \eqref{eqn.deriv}, $p_*$ is defined as follows. \begin{equation} p_*\coloneqq \dot{p}\mathcal{y}=p\cocl{\mathrm{id}}p=\sum_{I: p(1)}p[I]\mathcal{y}^{p[I]} \end{equation} Though it can be defined in terms of the derivative $\dot{p}$ of $p$, we find $p_*$ to be a much more fundamental construction than the derivative. For example, the bundle representation of a polynomial $p$ is $p_*(1)\to p(1)$. The operation $p\mapsto p_*$ is a comonad on $\Cat{Poly}$, i.e.\ $p_*\to p$ and $p_*\to (p_*)_*$, and each $p_*$ has the structure of a comonad, i.e.\ $p_*\to\mathcal{y}$ and $p_*\to p_*\mathbin{\triangleleft} p_*$.
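As a concrete instance of the last formula (an illustrative computation added here), take $p\coloneqq\mathcal{y}^2+3\mathcal{y}$. Then \[ p_*\;=\;\sum_{I: p(1)}p[I]\,\mathcal{y}^{p[I]}\;\cong\;2\mathcal{y}^2+3\mathcal{y}, \] and the bundle representation $p_*(1)\to p(1)$ is the projection from the $2+3=5$ (position, direction) pairs of $p$ down to its $4$ positions.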
Given a map $\varphi\colon p\to p'$ and a function $g\colon q(1)\to q'(1)$, there is a Cartesian map \begin{equation} (p\cocl{\varphi\mathbin{\triangleleft}1}p')\mathbin{\triangleleft}(q\cocl{g}q')\to (p\mathbin{\triangleleft} q)\cocl{\varphi\mathbin{\triangleleft} g}(p'\mathbin{\triangleleft} q'), \end{equation} natural in $p,q:\Cat{Poly}$ and $p',q':\Cat{Poly}^{\Cat{cart}}$. In particular, there is a Cartesian map \begin{equation} p_*\mathbin{\triangleleft} q_*\to(p\mathbin{\triangleleft} q)_*. \end{equation} Returning to $p\mapsto p_*$, it is not functorial in $\Cat{Poly}$, but it is functorial in the Cartesian morphisms $\Cat{Poly}^{\Cat{cart}}$. It is also functorial as a map $\Cat{Poly}\to\Cat{Span}(\Cat{Poly})$: \begin{align} \cocl{}\colon\Cat{Poly}&\to\Cat{Span}(\Cat{Poly})\\ p&\mapsto p_*\\ (p\To{\varphi}q)&\mapsto\big( p_*\leftarrow(p\cocl{\varphi_1}q)\to q_*\big) \end{align}\goodbreak That is, for any $p\To{\varphi}q\To{\psi}r$, there is an isomorphism \begin{equation} (p\cocl{\varphi_1}q)\times_{ q_*}(q\cocl{\psi_1}r)\cong p\cocl{(\varphi\mathbin{\fatsemi}\psi)_1}r \end{equation} This functor is strong monoidal with respect to both $+$ and $\otimes$. One may think of it as representing the bundle view of $\Cat{Poly}$. Indeed, for any $p:\Cat{Poly}$ we have a counit map $\epsilon_p\colon p_*\to p$, and given $\varphi\colon p\to q$, there is an induced span \begin{equation} \begin{tikzcd} p_*\ar[d]&p\cocl{\varphi_1}q\ar[d]\ar[l]\ar[r]& q_*\ar[d]\\ p\ar[r, equal]&p\ar[r,"\varphi"']&q \end{tikzcd} \end{equation} and evaluating at $1$ returns the usual bundle picture, since $(p\cocl{\varphi_1}q)(1)\cong p(1)\times_{q(1)}q_*(1)$. There is an indexed coclosure for $\otimes$.\footnote{I learned about this indexed coclosure $\hyper{}$ for $\otimes$ from Nelson Niu.} For any function $f\colon p(1)\to q(1)$, define \begin{equation} p \hyper{f} q\coloneqq\sum_{I: p(1)}\mathcal{y}^{\left(p[I]^{q[fI]}\right)} \end{equation} This satisfies the following indexed-adjunction formula: \begin{equation} \Cat{Poly}(p,q\otimes r)\cong\sum_{f\colon p(1)\to q(1)}\Cat{Poly}(p\hyper{f}q,r) \end{equation} It also satisfies the following: \begin{align} (p_1+p_2)\hyper{(f_1,f_2)}q&\cong(p_1\hyper{f_1}q)+(p_2\hyper{f_2}q)\\ p\hyper{(f_1,f_2)}(q_1\otimes q_2)&\cong (p\hyper{f_1}q_1)\hyper{f_2}q_2 \end{align} and for any $f\colon p(1)\to (q_1\mathbin{\triangleleft} q_2)(1)$, there is a natural map coming from duoidality \eqref{duoidal}: \begin{equation} (p\cocl{f_1}q_1)\hyper{f_2}q_2\to p\hyper{f}(q_1\mathbin{\triangleleft} q_2). \end{equation} For any $f\colon p(1)\to p'(1)$ there is a natural map \begin{equation} \lens{p}{q\mathbin{\triangleleft} p'}\to\lens{p\hyper{f}p'}{q}. \end{equation} For any polynomial $p$, let $\ol{p}\coloneqq p(1)\mathcal{y}$. For any $p,q:\Cat{Poly}$ there is an isomorphism \begin{equation} p\cocl{}\left(\lens{\ol{p}}{q}\hyper{}p\right)\cong p\mathbin{\triangleleft}\ol{q} \end{equation} where the unwritten indices of $\cocl{}$ and $\hyper{}$ are both the identity on $p(1)$. For any $q:\Cat{Poly}$ and $A:\Cat{Set}$, let $q\tn{-}\Cat{Coalg}[A]:\Cat{Set}$ denote the set $\Cat{Set}(A,q(A))$ of $q$-coalgebra structures on $A$. For a polynomial $p$, let $q\tn{-}\Cat{Coalg}_p\coloneqq\sum_{I:p(1)}\mathcal{y}^{q\tn{-}\Cat{Coalg}[p[I]]}$. Then there is an isomorphism \begin{equation} q\tn{-}\Cat{Coalg}_p \cong \lens{p}{q}\hyper{}p \end{equation} where the unwritten index of $\hyper{}$ is the identity on $p(1)$.
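Before leaving this chapter, here is a small worked check of the right-coclosure from the beginning of the chapter (added for illustration). For a representable $p\coloneqq\mathcal{y}^A$, the formula gives $\lens{\mathcal{y}^A}{q}\cong\mathcal{y}^{q(A)}$, and the universal property reduces to the Yoneda lemma: \[ \Cat{Poly}\!\left(\lens{\mathcal{y}^A}{q},p'\right)\cong\Cat{Poly}\!\left(\mathcal{y}^{q(A)},p'\right)\cong p'(q(A))\cong(p'\mathbin{\triangleleft} q)(A)\cong\Cat{Poly}\!\left(\mathcal{y}^{A},p'\mathbin{\triangleleft} q\right). \]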
\chapter{Other monoidal structures} Of the following three monoidal structures, only the first one ($\curlyvee$) appears to be interesting; it will appear prominently in \cref{chap.adj_mon_com}. There is a symmetric monoidal structure on $\Cat{Poly}$ with unit $0$ and product given by \begin{equation}\label{eqn.vee} p\curlyvee q\coloneqq p+(p\otimes q)+q. \end{equation} The functor $(p\mapsto p+\mathcal{y})$ is strong monoidal $(\Cat{Poly},0,\curlyvee)\to(\Cat{Poly},\mathcal{y},\otimes)$, i.e.\ there is a natural isomorphism \begin{equation} (p+\mathcal{y})\otimes(q+\mathcal{y})\cong (p\curlyvee q)+\mathcal{y}. \end{equation} The identity functor $\Cat{Poly}\to\Cat{Poly}$ has a lax monoidal structure, \begin{equation} p+q\to p\curlyvee q. \end{equation} There are two duoidal structures for $\curlyvee$, one with $\mathbin{\triangleleft}$ and one with $\otimes$: \begin{align*} (p_1\mathbin{\triangleleft} p_2)\curlyvee(q_1\mathbin{\triangleleft} q_2)&\to(p_1\curlyvee q_1)\mathbin{\triangleleft}(p_2\curlyvee q_2)\\ (p_1\otimes p_2)\curlyvee(q_1\otimes q_2)&\to(p_1\curlyvee q_1)\otimes(p_2\curlyvee q_2) \end{align*} We will see in \cref{eqn.vee1,eqn.vee2,eqn.vee3,eqn.vee4} that $\curlyvee$ is surprisingly useful when it comes to free monads. Here are two other symmetric monoidal structures, though we currently know of no interesting uses of them, so we do not even give them symbols. We are simply using the fact that if a symmetric monoidal product distributes over $+$ then we can follow the pattern we learned from Garner; see \eqref{eqn.ovee}. \begin{align} (p,q)&\mapsto p+(p\times q)+q\\ (p,q)&\mapsto p+(p\ovee q)+q \end{align} Their units are both $0$. We know of two more monoidal products $(\mathcal{y},\dagger)$ and $(\mathcal{y}, \ddagger)$ from Nelson Niu, who cites de Paiva's notion of \emph{cross product} as inspiration for $\dagger$: \begin{align} p\dagger q\coloneqq\sum_{I: p(1)}\sum_{J: q(1)}\prod_{i: p[I]}\prod_{j\colon p(1)\to q[J]}\mathcal{y}\\ p\ddagger q\coloneqq\sum_{I: p(1)}\sum_{J: q(1)}\prod_{i\colon q(1)\to p[I]}\prod_{j\colon p(1)\to q[J]}\mathcal{y} \end{align} E.g.\ in the case of monomials we have $I\mathcal{y}^A\dagger J\mathcal{y}^B\cong IJ\mathcal{y}^{AB^I}$ and $I\mathcal{y}^A\ddagger J\mathcal{y}^B\cong IJ\mathcal{y}^{A^JB^I}$. \chapter{Adjunctions, monads, and comonads on $\Cat{Poly}$}\label{chap.adj_mon_com} There are adjunctions between $\Cat{Poly}$ and $\Cat{Set}$ and between $\Cat{Poly}$ and $\Cat{Set}^\tn{op}$, each labeled by where they send $p:\Cat{Poly}$ and $A:\Cat{Set}$: \begin{equation}\label{eqn.adjunctions} \begin{tikzcd}[column sep=60pt] \Cat{Poly} \ar[from=r, shift left=8pt, "A" description] \ar[from=r, shift left=-24pt, "A\mathcal{y}"']& \Cat{Set} \ar[from=l, shift right=24pt, "p(0)"'] \ar[from=l, shift right=-8pt, "p(1)" description] \ar[from=l, phantom, "\scriptstyle\bot"] \ar[from=l, phantom, shift left=16pt, "\scriptstyle\bot"] \ar[from=l, phantom, shift right=16pt, "\scriptstyle\bot"] \end{tikzcd} \hspace{1in} \begin{tikzcd}[column sep=60pt] \Cat{Poly} \ar[from=r, shift left=8pt, "\mathcal{y}^A"] \ar[from=r, phantom, "\scriptstyle\bot"] & \Cat{Set}^\tn{op} \ar[from=l, shift right=-8pt, "\Gamma(p)"] \end{tikzcd} \end{equation} We write $A$ to denote $A\mathcal{y}^0$. All the leftward maps in \eqref{eqn.adjunctions} are fully faithful, and all the rightward maps are essentially surjective.
The leftward maps from $\Cat{Set}$ are also rig monoidal (i.e.\ strong monoidal with respect to $+$ and $\otimes$): \begin{align} A\mathcal{y}+B\mathcal{y}&\cong(A+B)\mathcal{y}& A\mathcal{y}\otimes B\mathcal{y}&\cong(A\times B)\mathcal{y}\\ A\mathcal{y}^0+B\mathcal{y}^0&\cong(A+B)\mathcal{y}^0& A\mathcal{y}^0\otimes B\mathcal{y}^0&\cong(A\times B)\mathcal{y}^0 \end{align} The rightward maps to $\Cat{Set}$ are also distributive monoidal; indeed by \cref{eqn.comp_plus,eqn.comp_times}, the following hold for any $A:\Cat{Set}$, in particular for $A:\{0,1\}$. \begin{equation} p(A)+q(A)\cong(p+q)(A) \quad\text{and}\quad p(A)\times q(A)\cong(p\times q)(A) \end{equation} The functor $\Gamma$ preserves coproducts, since coproducts in $\Cat{Set}^\tn{op}$ are products in $\Cat{Set}$: \begin{equation} \Gamma(p+q)\cong\Gamma(p)\times\Gamma(q) \end{equation} We can say more about $\Gamma$ if we package it with $p\mapsto p(1)$; i.e.\ there is an adjunction \begin{equation}\label{eqn.rectangle} \begin{tikzcd}[column sep=60pt] \Cat{Poly} \ar[from=r, shift left=8pt, "B\mathcal{y}^A"] \ar[from=r, phantom, "\scriptstyle\bot"] & \Cat{Set}\times\Cat{Set}^\tn{op} \ar[from=l, shift right=-8pt, "{\big(p(1)\,,\,\Gamma(p)\big)}"] \end{tikzcd} \end{equation} This functor is comonadic. It is also strong monoidal with respect to coproduct and $\otimes$. To say so requires us to mention that $\Cat{Set}\times\Cat{Set}^\tn{op}$ has a coproduct structure and to specify an $\otimes$ structure on $\Cat{Set}\times\Cat{Set}^\tn{op}$; they are given as follows: \begin{align} (A_1,B_1)+(A_2,B_2)&\coloneqq(A_1+A_2\,,\,B_1\times B_2)\\ (A_1,B_1)\otimes(A_2,B_2)&\coloneqq(A_1\times A_2\,,\,B_1^{A_2}\times B_2^{A_1}) \end{align} Returning to our point, the left adjoint in \eqref{eqn.rectangle} is rig monoidal (preserves $+$ and $\otimes$): \begin{align} (p(1),\Gamma(p))+(q(1),\Gamma(q))&\cong((p+q)(1),\Gamma(p+q))\\ (p(1),\Gamma(p))\otimes(q(1),\Gamma(q))&\cong((p\otimes q)(1),\Gamma(p\otimes q)) \end{align} There is a cofree $\mathbin{\triangleleft}$-comonoid (often called the cofree comonad) construction on $\Cat{Poly}$: \begin{equation} \begin{tikzcd}[column sep=60pt] \Cat{Comon}(\Cat{Poly}) \ar[from=r, shift left=8pt, "\mathfrak{c}"] \ar[from=r, phantom, "\scriptstyle\bot"] & \Cat{Poly} \ar[from=l, shift right=-8pt, "U"] \end{tikzcd} \end{equation} where $U$ is the forgetful functor that sends a comonoid to its carrier. The cofree comonoid $\mathfrak{c}_p$ on $p:\Cat{Poly}$ is carried by the limit \begin{equation} \mathfrak{c}_p\coloneqq\lim(\cdots\to p_{n+1}\To{f_n} p_n\to\cdots\to p_1\To{f_0} p_0) \end{equation} where the $p_k$ are defined inductively as follows: \begin{align} p_0&\coloneqq\mathcal{y}&p_{k+1}&\coloneqq (p\mathbin{\triangleleft} p_k)\times\mathcal{y}\\ \intertext{and the maps $f_k\colon p_{k+1}\to p_k$ are defined inductively as follows:} p_1=p\times\mathcal{y}&\To{f_0\coloneqq\tn{proj}}\mathcal{y}=p_0&p_{k+2}=(p\mathbin{\triangleleft} p_{k+1})\times\mathcal{y}&\To{f_{k+1}\coloneqq(p\mathbin{\triangleleft} f_k)\times\mathcal{y}}(p\mathbin{\triangleleft} p_{k})\times\mathcal{y}=p_{k+1} \end{align} The map $\mathfrak{c}\to\mathcal{y}$ is easy and the map $\mathfrak{c}\to\mathfrak{c}\mathbin{\triangleleft}\mathfrak{c}$ is given by maps $p_{m+n}\to p_m\mathbin{\triangleleft} p_n$, which themselves arise by induction on $n$, properties of $\cocl{}$, and maps $p\mathbin{\triangleleft} p_m\to p_m\mathbin{\triangleleft} p$ that arise by induction on $m$.
There is an isomorphism of polynomials \begin{equation}\label{eqn.cofree_iso} \mathfrak{c}_p\To{\cong} (p\mathbin{\triangleleft}\mathfrak{c}_p)\times\mathcal{y}. \end{equation} If $p\to q$ is cartesian, so is $\mathfrak{c}_p\to\mathfrak{c}_q$. In many different ways, the cofree comonad functor $\mathfrak{c}\colon\Cat{Poly}\to\Cat{Poly}$ is lax monoidal as it maps into the $(\mathcal{y},\otimes)$ monoidal structure:\footnote{Recall from \eqref{eqn.ovee} that $p\ovee q\coloneqq \sum_{(I,J): p(1)\times q(1)}\mathcal{y}^{p[I]+p[I]\times q[J]+q[J]}$.} \begin{align} \mathfrak{c}_p\otimes\mathfrak{c}_q&\to\mathfrak{c}_{p\times q}\\ \label{cofree_lax_monoidal} \mathfrak{c}_p\otimes\mathfrak{c}_q&\to\mathfrak{c}_{p\otimes q}\\ \mathfrak{c}_p\otimes\mathfrak{c}_q&\to\mathfrak{c}_{p\mathbin{\triangleleft} q}\\ \mathfrak{c}_p\otimes\mathfrak{c}_q&\to\mathfrak{c}_{p\ovee q} \end{align} It also has natural comonoid homomorphisms of the form \begin{equation} \mathfrak{c}_{[p,q]}\otimes\mathfrak{c}_{[p',q']}\to\mathfrak{c}_{[p+p',q+q']} \end{equation} that arise from the counits of any comonoid, as well as the distributivity of $\otimes$ over $+$. There is a free $\mathbin{\triangleleft}$-monoid (often called the free monad) construction on $\Cat{Poly}$ \begin{equation} \begin{tikzcd}[column sep=60pt] \Cat{Poly} \ar[from=r, shift left=8pt, "U"] \ar[from=r, phantom, "\scriptstyle\bot"] & \Cat{Mon}(\Cat{Poly}) \ar[from=l, shift right=-8pt, "\mathfrak{m}"] \end{tikzcd} \end{equation} where $U$ is the forgetful functor that sends a monoid to its carrier. We only consider free monads on finitary polynomials $q$, i.e.\ ones for which $q[J]$ is finite for all $J:q(1)$; for the more general story see \cite[Section 4.2]{kock2012polynomial}. The free $\mathbin{\triangleleft}$-monoid on $q$ can be constructed as the colimit: \begin{equation} \mathfrak{m}_q\coloneqq\colim(\cdots\leftarrow q_{n+1}\From{g_n}q_n\leftarrow\cdots\leftarrow q_1\From{g_0} q_0) \end{equation} where the $q_k$ are defined inductively as follows: \begin{align} q_0&\coloneqq \mathcal{y}&q_{k+1}&\coloneqq \mathcal{y}+(q\mathbin{\triangleleft} q_k)\\ \intertext{and the maps $g_k\colon q_{k}\to q_{k+1}$ are defined inductively as follows:} q_0=\mathcal{y}&\To{g_0\coloneqq\tn{incl}}\mathcal{y}+q=q_1&q_{k+1}=\mathcal{y}+(q\mathbin{\triangleleft} q_{k})&\To{g_{k+1}\coloneqq\mathcal{y}+(q\mathbin{\triangleleft} g_k)}\mathcal{y}+(q\mathbin{\triangleleft} q_{k+1})=q_{k+2} \end{align} Analogous to \cref{eqn.cofree_iso} there is an isomorphism of polynomials \begin{equation} \mathcal{y}+(p\mathbin{\triangleleft}\mathfrak{m}_p)\To{\cong}\mathfrak{m}_p. \end{equation} For any polynomial $p:\Cat{Poly}$ and set $X:\Cat{Set}$, there is a natural bijection \begin{equation} \mathfrak{m}_p\mathbin{\triangleleft} X \cong \mathfrak{m}_{p+X}\mathbin{\triangleleft} 0 \end{equation} and each is isomorphic to the free $p$-algebra on the set $X$. The free monad monad $\mathfrak{m}\colon\Cat{Poly}\to\Cat{Poly}$ is not lax monoidal with respect to $\otimes$ on both sides,\footnote{For example, there is no map of polynomials $ \mathfrak{m}_1\otimes\mathfrak{m}_0\cong\mathcal{y}+1 \To{??} \mathcal{y}\cong\mathfrak{m}_0\ $.
} but in many different ways, the free monad functor is lax monoidal as it maps out of the $(0,\curlyvee)$ monoidal structure from \eqref{eqn.vee}: \begin{align} \label{eqn.vee1} \mathfrak{m}_p+\mathfrak{m}_q&\to\mathfrak{m}_{p\curlyvee q}\\ \label{eqn.vee2} \mathfrak{m}_p\otimes\mathfrak{m}_q&\to\mathfrak{m}_{p\curlyvee q}\\ \label{eqn.vee3} \mathfrak{m}_p\mathbin{\triangleleft}\mathfrak{m}_q&\to\mathfrak{m}_{p\curlyvee q}\\ \label{eqn.vee4} \mathfrak{m}_p\curlyvee\mathfrak{m}_q&\to\mathfrak{m}_{p\curlyvee q} \end{align} The functor $\mathfrak{m}_-\colon\Cat{Poly}\to\Cat{Poly}$ is itself a monad \begin{equation} p\to\mathfrak{m}_p \quad\text{and}\quad \mathfrak{m}_{\mathfrak{m}_p}\to\mathfrak{m}_p \end{equation} and the functor $\mathfrak{c}_-\colon\Cat{Poly}\to\Cat{Poly}$ is itself a comonad \begin{equation} \mathfrak{c}_p\to p \quad\text{and}\quad \mathfrak{c}_p\to\mathfrak{c}_{\mathfrak{c}_p}. \end{equation} Moreover, the former is a module over the latter, i.e.\ for any finitary $p,q$ there is a natural map \begin{equation}\label{module_easy} \mathfrak{m}_p\otimes\mathfrak{c}_q\to\mathfrak{m}_{p\otimes q} \end{equation} satisfying the action laws for the maps from \cref{cofree_lax_monoidal} as well as coherence for $\mathfrak{m}$ as a monad and $\mathfrak{c}$ as a comonad: \begin{equation}\label{eqn.monad_comonad_coherence} \begin{tikzcd} p\otimes\mathfrak{c}_q\ar[r]\ar[d]&p\otimes q\ar[d]\\ \mathfrak{m}_p\otimes\mathfrak{c}_q\ar[r]&\mathfrak{m}_{p\otimes q} \end{tikzcd} \hspace{.6in} \begin{tikzcd} \mathfrak{m}_{\mathfrak{m}_p}\otimes\mathfrak{c}_q\ar[d]\ar[r]& \mathfrak{m}_{\mathfrak{m}_p}\otimes\mathfrak{c}_{\mathfrak{c}_q}\ar[r]& \mathfrak{m}_{\mathfrak{m}_p\otimes\mathfrak{c}_q}\ar[r]& \mathfrak{m}_{\mathfrak{m}_{p\otimes q}}\ar[d]\\ \mathfrak{m}_p\otimes\mathfrak{c}_q\ar[rrr]&&& \mathfrak{m}_{p\otimes q} \end{tikzcd} \end{equation} This induces maps $\mathfrak{m}_{[p,\mathcal{y}]}\to [\mathfrak{c}_p,\mathcal{y}]$ and $\mathfrak{c}_{[p,\mathcal{y}]}\to[\mathfrak{m}_p,\mathcal{y}]$ and similarly for any monad in place of $\mathcal{y}$. In particular for $p=\mathcal{y}^A$ we have an isomorphism \begin{equation} \mathfrak{m}_{A\mathcal{y}}\To{\cong}[\mathfrak{c}_{\mathcal{y}^A},\mathcal{y}] \end{equation} For $p:\Cat{Poly}$ and $A:\Cat{Set}$ there is a natural map \begin{equation} \mathfrak{c}_{p\mathbin{\triangleleft} A\mathcal{y}}\to\mathfrak{c}_p\mathbin{\triangleleft}\mathfrak{m}_{A\mathcal{y}}. \end{equation} \chapter{*-Bifibration over $\Cat{Set}$ and factorization systems}\label{chap.bifib} The functor \begin{equation}\label{eqn.bifib} \big(p\mapsto p(1)\big)\colon\Cat{Poly}\to\Cat{Set} \end{equation} is a *-bifibration. In particular, for any function $f\colon A\to B$, there is an adjoint triple $f_!\dashv f^*\dashv f_*$: \begin{equation} \begin{tikzcd}[column sep=50pt] \Cat{Poly}_A \ar[r, shift left=16pt, "f_!"] \ar[r, shift right=16pt, "f_*"'] \ar[from=r, "f^*" description] \ar[r, phantom, shift left=8pt, "\Rightarrow"] \ar[r, phantom, shift right=8pt, "\Leftarrow"] & \Cat{Poly}_B \end{tikzcd} \end{equation} where $\Cat{Poly}_X$ is the category of polynomials with positions $p(1)=X$.
The images under the functors $f_!$ and $f_*$ of $p:\Cat{Poly}_A$ are given by \begin{equation} f_!(p)\coloneqq\sum_{b: B}\mathcal{y}^{\;\prod\limits_{b=fa}p[a]} \quad\text{and}\quad f_*(p)\coloneqq\sum_{b: B}\mathcal{y}^{\;\sum\limits_{b=fa}p[a]} \end{equation} and the image under the functor $f^*$ of $q:\Cat{Poly}_B$ is given by \begin{equation} f^*(q)\coloneqq\sum_{a: A}\mathcal{y}^{q[fa]} \end{equation} For any $p:\Cat{Poly}_A$ and $q:\Cat{Poly}_B$ there are natural maps \begin{equation} p\to f_!(p) \quad\text{and}\quad f^*(q)\to q. \end{equation} A morphism $\varphi\colon p\to q$ can be identified with a diagram of the form \begin{equation}\label{eqn.poly_map} \begin{tikzcd} p(1)\ar[d, "\varphi_1"']\ar[r, "{p[-]}", ""' name=p]& \Cat{Set}\\ q(1)\ar[ur, bend right, "{q[-]}"', "" name=q] \ar[to=p, from=q-|p, Rightarrow, shorten=3pt, "\varphi^\sharp"] \end{tikzcd} \end{equation} The $p\mapsto p(1)$ bifibration \eqref{eqn.bifib} gives us the terms \emph{vertical}, \emph{cartesian}, and \emph{op-cartesian} for a map $\varphi\colon p\to q$ in $\Cat{Poly}$. That is, taking $f\coloneqq\varphi_1$, we have that $\varphi$ is vertical if it is contained in a fiber of the bifibration \eqref{eqn.bifib}, it is cartesian if $p\to f^*(q)$ is an isomorphism, and it is op-cartesian if $f_!(p)\to q$ is an isomorphism. Here are alternative ways to define these notions: $\varphi$ is \begin{itemize} \item \emph{vertical} if $\varphi_1\colon p(1)\to q(1)$ is an identity in $\Cat{Set}$, \item \emph{cartesian} if $\varphi^\sharp$ is a natural isomorphism, and \item \emph{op-cartesian} if the diagram \eqref{eqn.poly_map} is a right Kan extension. \end{itemize} More explicitly, $\varphi$ is cartesian iff for each $I: p(1)$, the function $\varphi^\sharp_I\colon q[\varphi_1 I]\to p[I]$ is a bijection. It is op-cartesian if for each $J: q(1)$ the map $q[J]\to\prod\limits_{\varphi_1(I)=J}p[I]$ is a bijection. There are at least three factorization systems on $\Cat{Poly}$: \begin{itemize} \item (epi, mono), \item (vertical, cartesian), and \item (op-cartesian, vertical). \end{itemize} \section*{Acknowledgments} This material is based upon work supported by the Air Force Office of Scientific Research under award number FA9550-20-1-0348. \printbibliography \end{document}
\begin{document} \title[Rationality of the $\mathrm{SL}(2,\mathbb C)$-Reidemeister torsion in dimension 3]{Rationality of the $\mathrm{SL}(2,\mathbb C)$-Reidemeister torsion in dimension 3} \author{Jerome Dubois} \address{Institut de Math\'ematiques de Jussieu \\ Universit\'e Paris Diderot--Paris 7 \\ UFR de Math\'ematiques\\ Case 7012, B\^atiment Chevaleret\\ 2, place Jussieu\\ 75205 Paris Cedex 13 FRANCE \\ \newline {\tt \url{http://www.institut.math.jussieu.fr/$\sim$dubois/}}} \email{[email protected]} \author{Stavros Garoufalidis} \address{School of Mathematics \\ Georgia Institute of Technology \\ Atlanta, GA 30332-0160, USA \newline {\tt \url{http://www.math.gatech.edu/~stavros}}} \email{[email protected]} \thanks{S.G. was supported in part by the NSF. \\ \newline 1991 {\em Mathematics Classification.} Primary 57N10. Secondary 57M25. \newline {\em Key words and phrases: knots, $A$-polynomial, Reidemeister torsion, volume, character variety, 3-manifolds, hyperbolic geometry, invariant trace field.}} \date{February 10, 2010} \begin{abstract} If $M$ is a finite volume complete hyperbolic 3-manifold with one cusp and no $2$-torsion in its homology, the geometric component $X_M$ of its $\mathrm{SL}(2,\mathbb C)$-character variety is an affine complex curve, which is smooth at the discrete faithful representation $\rho_0$. Porti defined a non-abelian Reidemeister torsion in a neighborhood of $\rho_0$ in $X_M$ and observed that it is an analytic map, which is the germ of a unique rational function on $X_M$. In the present paper we prove (a) that the torsion of a representation lies in an at most quadratic extension of the invariant trace field of the representation, and (b) the existence of a polynomial relation between the torsion of a representation and the trace of the meridian or the longitude. We postulate that the coefficients of the $1/N^k$-asymptotics of the Parametrized Volume Conjecture for $M$ are elements of the field of rational functions on $X_M$. \end{abstract} \maketitle \tableofcontents \section{Introduction} \label{sec.intro} \subsection{The volume of an $\mathrm{SL}(2,\mathbb C)$-representation and the $A$-polynomial} \label{sub.volume} A well-known numerical invariant of a 3-dimensional finite volume hyperbolic manifold $M$ with a cusp is its {\em volume}, a positive real number. A complete invariant of the hyperbolic structure of $M$ is a discrete faithful representation of $\pi_1(M)$ into $\mathrm{PSL}(2,\mathbb C)$ (well-defined up to conjugation), which is also a topological invariant, as follows from Mostow's rigidity theorem. Every $\mathrm{PSL}(2,\mathbb C)$-representation $\rho$ of $\pi_1(M)$ has a real-valued volume $\mathrm{Vol}(\rho)$; see \cite[Ch.2]{Dn} and also \cite{F,FK}. When a representation varies in a 1-parameter family $\rho_t$, the variation of the volume $\frac{d}{dt}\mathrm{Vol}(\rho_t)$ depends only on the restriction of $\rho_t$ to the boundary torus $\partial M$. This is a general principle of Atiyah-Patodi-Singer, and in our special case it also follows from Schl\"afli's formula. This raises the question: which $\mathrm{PSL}(2,\mathbb C)$-representations of $\partial M$ extend to a representation of $M$? The answer is given by an algebraic condition between the eigenvalues of a meridian and longitude of $\partial M$. This condition is the vanishing of the so-called \emph{$A$-polynomial} of $M$; see \cite{CCGLS}.
The $A$-polynomial of $M$ encodes important information about \begin{itemize} \item[(a)] the hyperbolic geometry of $M$, and determines the variation of the volume of the hyperbolic structure of $M$; \item[(b)] the topology of $M$, and more precisely the slopes of incompressible surfaces in the knot complement, as follows from Culler-Shalen theory; see \cite{CCGLS}. \end{itemize} More recently, the $A$-polynomial (or rather, its extension that includes the images of all components of the character variety) is conjecturally linked in two different ways to a {\em quantum knot invariant}, namely the {\em colored Jones polynomials} of a knot in 3-space (for a definition of the latter, which we will not use in the present paper, see \cite{Tu} and \cite{GL1}): \begin{itemize} \item[(a)] There is an $A_q$-polynomial in two $q$-commuting variables which encodes a minimal order linear $q$-difference equation for the sequence of colored Jones polynomials; see \cite{GL1}. The AJ Conjecture of \cite{Ga1} states that when $q=1$, the $A_q$-polynomial coincides with the $A$-polynomial. \item[(b)] There is a parametrized version of the Volume Conjecture which links the variation of the limit in the Volume Conjecture to the $A$-polynomial; see \cite{GM,GL2}. \end{itemize} Aside from conjectures, the following result of \cite{DG} and \cite{BZ} (based on foundational work of Kronheimer-Mrowka) shows that the $A$-polynomial detects the unknot. \begin{theorem}\cite{BZ,DG} \label{thm.nontrivialA} The $A$-polynomial of a nontrivial knot in 3-space is nontrivial. \end{theorem} \subsection{The $\mathrm{SL}(2,\mathbb C)$-character variety of $M$ and its field of rational functions} \label{sub.rational} For historical reasons that simplify the linear algebra, it is useful to consider $\mathrm{SL}(2,\mathbb C)$ (rather than $\mathrm{PSL}(2,\mathbb C)$)-representations of $\pi_1(M)$. In the rest of the paper, $M$ will denote a finite volume hyperbolic 3-manifold with one cusp, such that the homology of $M$ contains no $2$-torsion. In this case, the discrete faithful representation of $M$ lifts to an $\mathrm{SL}(2,\mathbb C)$-representation $\rho_0\colon \pi_1(M) \to \mathrm{SL}(2,\mathbb C)$; see \cite{Cu}. To understand how the $\mathrm{SL}(2,\mathbb C)$-representation $\rho_0$ of $\pi_1(M)$ varies, we consider the unique component $X_M$ of the $\mathrm{SL}(2,\mathbb C)$-{\em character variety} of $M$ that contains $\rho_0$. It is well-known that $X_M$ is an affine curve defined over $\mathbb Q$ and that $\rho_0$ is a smooth point of $X_M$; see \cite{CCGLS}. Moreover, the coordinate ring $\mathbb Q[X_M]$ is generated by $\operatorname{tr}_{\gamma}$ for all $\gamma \in \pi_1 M$, where $\operatorname{tr}_{\gamma}$ is the so-called \emph{trace function} defined by: \begin{equation} \label{eq.trgamma} \operatorname{tr}_{\gamma}:X_M \longrightarrow \mathbb C, \qquad \operatorname{tr}_{\gamma}(\rho)=\operatorname{tr}(\rho(\gamma)). \end{equation} Here $\operatorname{tr}(A) = \sum_i a_{ii}$ denotes the trace of a square matrix $A = \left( a_{ij}\right)$. Let $\mathbb Q(X_M)$ denote the field of rational functions of $X_M$. For a detailed discussion on character varieties, the reader may consult Shalen's survey~\cite{Sh} and also ~\cite[Sec.10]{BDR-V} and \cite{CCGLS,Go}.
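The fact that trace functions generate the coordinate ring rests on classical $\mathrm{SL}(2)$ trace identities, which we recall here for the reader's convenience (this is a standard fact, included only as a worked aside). For $A,B \in \mathrm{SL}(2,\mathbb C)$, the Cayley--Hamilton theorem gives $A+A^{-1}=\operatorname{tr}(A)\,I$; multiplying by $B$ and taking traces yields \begin{equation*} \operatorname{tr}(AB)+\operatorname{tr}(A^{-1}B)=\operatorname{tr}(A)\operatorname{tr}(B), \qquad\text{so}\qquad \operatorname{tr}_{\gamma\delta}+\operatorname{tr}_{\gamma^{-1}\delta}=\operatorname{tr}_{\gamma}\operatorname{tr}_{\delta} \quad\text{on } X_M. \end{equation*} Iterating this identity expresses the trace of any word in $\pi_1(M)$ as a polynomial in traces of shorter words.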
\subsection{The Reidemeister torsion of an $\mathrm{SL}(2,\mathbb C)$-representation} \label{sub.taupoly} Another important numerical invariant of a representation of a manifold is its {\em Reidemeister torsion}, which comes in several combinatorial or analytic flavors; see Milnor's survey \cite{Mi} or Turaev's monograph \cite{TM} for details. Combinatorially, the Reidemeister torsion is defined in terms of ratios of determinants of matrices assigned to based, acyclic complexes, which themselves are associated with a cell decomposition of a manifold and an acyclic representation. One can define torsion for all (not necessarily acyclic) representations of a manifold as an element of a top exterior power of a twisted (co)homology group, and one can obtain a complex number after choosing a basis for the twisted (co)homology. Porti~\cite{Po} defined a Reidemeister torsion for the adjoint representation associated to an $\mathrm{SL}(2,\mathbb C)$-representation $\rho$ of $\pi_1(M)$ when $\rho$ is in a {\em neighborhood} $U$ of $\rho_0 \in X_M$. Such representations are not acyclic, and a basis for the twisted homology (and thus the torsion) depends on an {\em admissible curve} $\gamma$, i.e., a simple closed curve $\gamma$ in $\partial M$ which is not nullhomologous in $\partial M$ (see Porti's monograph~\cite[Chap. 3]{Po} for details). Thus, the {\em non-abelian Reidemeister torsion} is a map: \begin{equation} \label{eq.torsion} \tau_{\gamma}\colon U \longrightarrow \mathbb C. \end{equation} Moreover, Porti~\cite{Po} observed that $\tau_\gamma$ is an analytic map. In addition, Porti obtained the following result. \begin{theorem} \label{thm.1}\cite[Thm.4.1]{Po} For every admissible curve $\gamma$, the non-abelian Reidemeister torsion $\tau_{\gamma}: U \longrightarrow \mathbb C$ is the germ of a unique element of $\mathbb Q(X_M)$, which is regular at $\rho_0$. \end{theorem} In Section \ref{sub.thm1} we will give an independent proof of Theorem \ref{thm.1}, which we need for the main results of our paper. To phrase our results, recall that the {\em trace field} $\mathbb Q(\rho)$ of an $\mathrm{SL}(2,\mathbb C)$-representation $\rho$ of $M$ is the field $\mathbb Q(\operatorname{tr}_g(\rho)\,|\, g \in \pi_1(M))$. For an admissible curve $\gamma$, let $\{e_{\gamma}(\rho), e_{\gamma}(\rho)^{-1}\}$ denote the eigenvalues of $\rho(\gamma)$. Observe that the field $\mathbb Q(\rho)(e_{\gamma}(\rho))$ is at most a quadratic extension of the trace field of $\rho$. Our next theorem uses the notion of a {\em generic representation}, defined in Section \ref{sec.model}. Note that this is a Zariski open condition, and that the discrete faithful representation is generic (\emph{regular} in the language of Porti's work). \begin{theorem} \label{thm.11} For every admissible curve $\gamma$ and every generic representation $\rho$, $\tau_{\gamma}(\rho)$ lies in the field $\mathbb Q(\rho)(e_{\gamma}(\rho))$. In particular, $\tau_{\gamma}(\rho_0)$ lies in the trace field of $M$. \end{theorem} Note that since the homology of $M$ has no $2$-torsion, the trace field of $M$ coincides with its invariant trace field; see \cite[Thm.2.2]{NR}. Our next theorem shows that $\tau_{\gamma}$ is an {\em algebraic function} of $\operatorname{tr}_{\gamma}$.
This follows easily from the fact that $\tau_{\gamma}$ and $\operatorname{tr}_{\gamma}$ are rational functions on $X_M$ and that $\mathbb Q(X_M)$ has transcendence degree $1$, since $X_M$ is an affine curve defined over $\mathbb Q$. \begin{theorem} \label{thm.2} For every admissible curve $\gamma$, there exists a polynomial $T_{\gamma}(\tau,y) \in \mathbb Z[\tau,y]$, called the $T_\gamma$-polynomial, so that \begin{equation} T_{\gamma}(\tau_{\gamma},\operatorname{tr}_{\gamma})=0. \end{equation} \end{theorem} Let us make some remarks regarding Theorems \ref{thm.1} and \ref{thm.2}. \begin{remark} \label{rem.TA} The dependence of the torsion function $\tau_{\gamma}$ on $\gamma$ is determined by the $A$-polynomial; see Equation \eqref{eq.tauml}. Thus, $T_{\gamma}$ is determined by $T_{\mu}$ and the $A$-polynomial of $M$. Moreover, if we let $\{e_\mu(\rho), e_\mu^{-1}(\rho)\}$ (resp. $\{e_\lambda(\rho), e_\lambda^{-1}(\rho)\}$) be the eigenvalues for the meridian $\mu$ (resp. longitude $\lambda$) at $\rho$, that is to say, if \[ e_\mu(\rho) + e_\mu^{-1}(\rho) = \mathrm{tr}_\mu(\rho) \text{ and } e_\lambda(\rho) + e_\lambda^{-1}(\rho) = \mathrm{tr}_\lambda(\rho), \] then one has (see~\cite[Thm.4.1]{Po}): \[ \tau_\lambda = \frac{e_\mu}{e_\lambda} \cdot \frac{\partial e_\lambda}{\partial e_\mu} \cdot \tau_\mu. \] In particular, at the discrete faithful representation $\rho_0$, we have: \begin{equation} \label{eq.comparecusp} \tau_\lambda(\rho_0)=\mathfrak{c}\cdot \tau_\mu (\rho_0), \end{equation} where $\mathfrak{c}$ is the {\em cusp shape}. This holds since near $\rho_0$ we have $A(1+t+O(t^2),-1+ \mathfrak{c} \, t+O(t^2))=0$, where $A(M,L)$ is the $A$-polynomial. \end{remark} \begin{remark} \label{rem.alg} Theorem \ref{thm.1} is an instance of a well-recorded phenomenon: many classical and quantum invariants of knotted 3-dimensional objects are algebraic. For a detailed discussion regarding conjectures and facts, see \cite{Ga2}. For a quick explanation of the algebraicity in dimension 3, see Section \ref{sub.explanation} below. \end{remark} \subsection{Examples} \label{sub.examples} In this section, we illustrate Theorem \ref{thm.2} for the complement of the figure eight knot $4_1$, and the complement of the $5_2$ knot. \begin{example} \label{ex.1} Consider the complement $M$ of the figure eight knot $4_1$ with a meridian-longitude system $(\mu,\lambda)$. The non-abelian Reidemeister torsion (with respect to the longitude $\lambda$) on the character variety $X_M$ is given by (see~\cite{Po} or~\cite{Db}): $$ \tau_\lambda = \sqrt{17 + 4 \operatorname{tr}_\lambda}, $$ with the convention that we choose the positive square root near the discrete faithful representation $\rho_0$ with $\operatorname{tr}_{\lambda}(\rho_0)=-2$ (see~\cite[Cor.2.4]{Ca}). Thus $T_{\lambda}(\tau_{\lambda},\operatorname{tr}_{\lambda})=0$ where $$ T_\lambda(x, y) = 17 + 4 y - x^2. $$ Let $\operatorname{tr}_{\lambda}=e_\lambda+e_\lambda^{-1}$, $\operatorname{tr}_{\mu}=e_\mu+e_\mu^{-1}$.
The vanishing of the $A$-polynomial for the figure eight knot gives us the following identity (see \cite{CCGLS}): $$ A(e_\lambda, e_\mu) = -2 + (e_\mu^4 + e_\mu^{-4}) - (e_\mu^2 + e_\mu^{-2}) - (e_\lambda + e_\lambda^{-1}). $$ Thus, we obtain: $$ \operatorname{tr}_\lambda = \operatorname{tr}_\mu^4 - 5\operatorname{tr}_\mu^2 + 2. $$ For details, see~\cite{Po,DHY}. On the other hand, the torsion with respect to the meridian is given by (see Equation \eqref{eq:changecurve}): $$ \tau_\mu = \tau_\lambda \cdot \left( \frac{\operatorname{tr}_\lambda^2 - 4}{\operatorname{tr}_\mu^2 - 4} \right)^{1/2} \cdot \frac{\partial \operatorname{tr}_\mu}{\partial \operatorname{tr}_\lambda} = \frac{1}{2}\sqrt{(\operatorname{tr}_\mu^2 - 5)(\operatorname{tr}_\mu^2 - 1)}. $$ Thus $T_{\mu}(\tau_\mu,\operatorname{tr}_{\mu})=0$ where $$ T_\mu(\tau, z) = -5 + 6z^2 - z^4 + 4\tau^2. $$ At the discrete faithful representation $\rho_0$, we have $\operatorname{tr}_{\lambda}(\rho_0)=-2$ (see~\cite[Cor.2.4]{Ca}) and $\operatorname{tr}_{\mu}(\rho_0)=\pm 2$, giving that $$ \tau_\lambda(\rho_0)= 3, \qquad \tau_\mu(\rho_0)= \frac{i \sqrt{3}}{2}. $$ On the other hand, the trace field of $4_1$ is $\mathbb Q(x)$ where $x^2+3=0$. This confirms Theorem \ref{thm.11} for the discrete faithful representation $\rho_0$ of $4_1$. In addition, the cusp shape of $4_1$ is $\mathfrak{c}=-2i\,\sqrt{3}$, confirming Equation \eqref{eq.comparecusp}. \end{example} \begin{example} \label{ex.2} We will repeat the previous example for the twist knot $5_2$. The non-abelian Reidemeister torsion (with respect to the longitude $\lambda$) for $5_2$ is given by (see~\cite{DHY}): $$ {\tau}_{\lambda}=(-10 \operatorname{tr}_\mu^2+21) + \left(5 \operatorname{tr}_\mu^4 -27 \operatorname{tr}_\mu^2 + 35\right)u +\left(7 -5\operatorname{tr}_\mu^2\right)u^2, $$ where $u$ satisfies the polynomial equation $$ (2\operatorname{tr}_\mu^2 - 7) - \left(\operatorname{tr}_\mu^4 - 7\operatorname{tr}_\mu^2 +14 \right)u + \left(2\operatorname{tr}_\mu^2 - 7\right) u^2 - u^3 = 0. $$ Eliminating $u$ from the above equations, it follows that $T_{\lambda}({\tau}_{\lambda},\operatorname{tr}_{\mu})=0$ where \begin{eqnarray*} T_{\lambda}(x,y)&=& x^3 + x^2 (35 - 26 y^2 + 5 y^4) + x (294 - 280 y^2 + 83 y^4 - 10 y^6) + 343 + 196 y^2 - 126 y^4 + 20 y^6. \end{eqnarray*} We choose the branch of $u$ such that at the discrete faithful representation, $u_0$ satisfies the equation $$ 1 - 2 u_0 + u_0^2 - u_0^3=0, \qquad u_0=0.21508\dots - 1.30714 \dots \, i, $$ which coincides with the Riley polynomial of $5_2$; see \cite{MR}. The invariant trace field of $5_2$ is the cubic subfield $\mathbb Q(\alpha)$ of the complex numbers given by: $$ \alpha^3-\alpha^2+1=0, \qquad \alpha=0.877439 \dots - 0.744862 \dots i, $$ and the cusp shape $\mathfrak{c}$ is given by: $$ \mathfrak{c}=4 \alpha -6 = -2.49024\dots - 2.97945 \dots i, $$ which is related to the root of the Riley polynomial by: $$ u_0=\frac{4}{-\mathfrak{c}-2}. $$ The above equation agrees with \cite[Eqn.(3.9)]{DHY} up to the mirror image of $5_2$.
It follows that at the discrete faithful representation $\rho_0$, $\tau_{\lambda}(\rho_0)$ is a root of the equation $$ \tau_{\lambda}(\rho_0)^3 + 11 \tau_{\lambda}(\rho_0)^2 - 138 \tau_{\lambda}(\rho_0) + 391 =0, \qquad \tau_{\lambda}(\rho_0) = 4.11623\dots - 1.84036\dots \, i, $$ and in terms of the invariant trace field, it is given by: $$ \tau_{\lambda}(\rho_0)=-6 \alpha^2 + 13 \alpha -6. $$ Equation \eqref{eq.comparecusp} and the above discussion imply that: $$ \tau_{\mu}(\rho_0)=\frac{\tau_{\lambda}(\rho_0)}{\mathfrak{c}}= 1 -\frac{3}{2} \alpha = -0.316158\dots + 1.11729\dots \, i. $$ Notice that $-2 \tau_{\mu}(\rho_0)=3 \alpha-2$ is a prime of norm $-23$. In fact, the invariant trace field $\mathbb Q(\alpha)$ has discriminant $-23$ and $23$ ramifies as: $$ -23=(3 \alpha-2)^2 (3 \alpha +1), $$ where $3 \alpha-2$ and $3 \alpha +1$ are the primes above $23$. The above discussion confirms Theorem \ref{thm.11} for the discrete faithful representation. \end{example} \subsection{Problems} \label{sub.problems} In this section we list a few problems and future directions. \begin{problem} \label{prob.1} Is the $T_{\lambda}$-polynomial of a hyperbolic knot nontrivial? \end{problem} \begin{remark} \label{rem.puzzle} The volume and the Reidemeister torsion appear as the {\em classical} and {\em semiclassical} limits in a parametrized version of the Volume Conjecture; see for example \cite{GM}. Physics arguments suggest that the non-commutative $A$-polynomial and the Reidemeister torsion are determined by the $A$-polynomial and the volume of the manifold alone. However, computations with twist knots suggest that the $A$- and $T_{\lambda}$-polynomials are independent of each other. Perhaps this discrepancy can be explained by the difference between on-shell and off-shell physics computations. \end{remark} Let us now formulate a speculation regarding the {\em Parametrized Volume Conjecture} of Gukov-Murakami and Le-Garoufalidis; see \cite{GM,GL2}. If $K$ is a knot in $S^3$, let $J_{K,N}(q) \in \mathbb Q[q^{\pm 1}]$ denote the quantum group invariant of $K$ colored by the $N$-dimensional irreducible representation of $\mathfrak{sl}_2(\mathbb{C})$, and normalized to be $1$ at the unknot. For fixed $\alpha \in \mathbb C$, the Parametrized Volume Conjecture studies the asymptotics of the sequence $(J_{K,N}(e^{\alpha/N}))$ for $N=1,2,\dots$. For suitable $\alpha$ near $2 \pi i$, and for hyperbolic knots $K$, one expects an asymptotic expansion of the form $$ J_{K,N}(e^{\alpha/N}) \sim e^{\frac{N \mathrm{CS}(\rho_{\alpha})}{2 \pi i}} N^{3/2} c_0(\alpha)\left(1+\sum_{k=1}^\infty \frac{c_k(\alpha)}{N^k}\right), $$ where $\rho_{\alpha} \in X_M$ denotes a representation near $\rho_0$ with $\operatorname{tr}_{\mu}(\rho_{\alpha})=e^{\alpha}+e^{-\alpha}$; see \cite{DGLZ,GL2}. \begin{problem} \label{prob.2} For every $k$, and with suitable normalization, show that $c_k(\alpha)$ are germs of unique elements of the field $\mathbb Q(X_M)$. \end{problem} \begin{conjecture} \label{conj.c0} Show that \begin{equation} \label{eq.c0} c_0(0)=(2 \tau_{\mu}(\rho_0))^{-1/2}. \end{equation} \end{conjecture} H. Murakami has proven the above conjecture for the $4_1$ knot (see \cite{Mu}), and unpublished computations of the second author and D.
Zagier have numerically verified the above conjecture for the $5_2$ knot and the $(-2,3,7)$ pretzel knot. The details will appear in forthcoming work. Our next problem concerns the extension of Theorem \ref{thm.1} to simple complex Lie groups $G_{\mathbb C}$, rather than $\SL(2,\mathbb C)$. Physics arguments regarding the 1-loop computation of {\em perturbative Chern-Simons theory} suggest that an extension of Theorem \ref{thm.1} to arbitrary complex simple groups $G_{\mathbb C}$ is possible. It is reasonable to expect that an extension of the non-abelian Reidemeister torsion is possible (see for example \cite{BH1,BH2}), and that Theorem \ref{thm.1} extends. \begin{problem} \label{prob.3} Extend Theorem \ref{thm.1} to arbitrary simple complex Lie groups $G_{\mathbb C}$. \end{problem} \section{The character variety of hyperbolic 3-dimensional manifolds} \label{sec.model} \subsection{Four flavors of the character variety, apr\`es Dunfield} \label{sub.four} The careful reader may observe that the volume function is defined for $\PSL(2,\mathbb C)$ representations of a 1-cusped hyperbolic manifold $M$, whereas the Reidemeister torsion is defined for $\SL(2,\mathbb C)$-representations of $M$. Our proof of Theorem \ref{thm.1} requires a new variant of a representation, the so-called {\em augmented representation}, that comes in two flavors: the $\PSL(2,\mathbb C)$ and the $\SL(2,\mathbb C)$ one. For an excellent discussion, we refer the reader to \cite[Sec.2-3]{Dn} and \cite[Sec.10]{BDR-V}. The second author learnt much of the material of this section from N. Dunfield, whom we thank for his guidance. Naturally, we are responsible for any comprehension errors. Let us define the four versions of the character variety of $M$. Let $R(M,\SL(2,\mathbb C))$ denote the set of all homomorphisms of $\pi_1(M)$ into $\SL(2,\mathbb C)$ and let $X_{M,\SL(2,\mathbb C)}$ be the set of \emph{characters} of $\pi_1(M)$ into $\SL(2, \mathbb C)$ --- which is in a sense the algebro-geometric quotient $R(M,\SL(2,\mathbb C))/\SL(2,\mathbb C)$, where $\SL(2,\mathbb C)$ acts by conjugation (see~\cite{Sh}). The \emph{character} $\chi_\rho\colon \pi_1(M) \to \mathbb C$ associated to the representation $\rho$ is defined by $\chi_\rho(g) = \mathrm{tr}(\rho(g))$, for all $g \in \pi_1(M)$. For irreducible representations, two representations are conjugate (in $\SL(2, \mathbb C)$) if, and only if, they have the same character (see~\cite{CCGLS} or~\cite{Sh}). It is easy to see that $R(M,\SL(2,\mathbb C))$ and $X_{M,\SL(2,\mathbb C)}$ are affine varieties defined over $\mathbb Q$. Let $\overline{R}(M,\SL(2,\mathbb C))$ denote the subvariety of $R(M,\SL(2,\mathbb C)) \times P^1(\mathbb C)$ consisting of pairs $(\rho,z)$ where $z$ is a fixed point of $\rho(\pi_1(\partial M))$. Let $\overline{X}_{M,\SL(2,\mathbb C)}$ denote the algebro-geometric quotient of $\overline{R}(M,\SL(2,\mathbb C))$ under the diagonal action of $\SL(2,\mathbb C)$ by conjugation and M\"obius transformations respectively. We will call elements $(\rho,z) \in \overline{R}(M,\SL(2,\mathbb C))$ {\em augmented representations}. Their images in the augmented character variety $\overline{X}_{M,\SL(2,\mathbb C)}$ will be called {\em augmented characters} and will be denoted by square brackets $[(\rho,z)]$.
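Behind these definitions lies the classical $\SL(2,\mathbb C)$ trace calculus, which we recall for the reader's convenience: the Cayley--Hamilton theorem gives $B+B^{-1}=\operatorname{tr}(B)\,\mathbf{1}$ for every $B \in \SL(2,\mathbb C)$, hence
$$ \operatorname{tr}(\rho(gh))+\operatorname{tr}(\rho(gh^{-1}))=\operatorname{tr}(\rho(g))\,\operatorname{tr}(\rho(h)), \qquad g,h \in \pi_1(M), $$
so traces of arbitrary words are polynomials in traces of shorter words. This identity is the basic algebraic reason why trace functions generate the coordinate rings appearing in Lemma \ref{lem.ch1} below.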
Likewise, replacing $\SL(2,\mathbb C)$ by $\PSL(2,\mathbb C)$, we can define the character variety $X_{M,\PSL(2,\mathbb C)}$ and its augmented version $\overline{X}_{M,\PSL(2,\mathbb C)}$. The advantage of the augmented character variety $\overline{X}_{M,\SL(2,\mathbb C)}$ is that given $\gamma \in \pi_1(\partial M)$ there is a regular function $e_{\gamma}$ which sends $[(\rho,z)]$ to the eigenvalue of $\rho(\gamma)$ corresponding to $z$. In contrast, in $X_{M,\SL(2,\mathbb C)}$ only the trace $e_{\gamma}+e_{\gamma}^{-1}$ of $\rho(\gamma)$ is well-defined. Likewise, in $\overline{X}_{M,\PSL(2,\mathbb C)}$ (resp. $X_{M,\PSL(2,\mathbb C)}$) only $e_{\gamma}^2$ (resp. $e_{\gamma}^2+e_{\gamma}^{-2}$) is defined. From now on, we will restrict to a geometric component of the $\PSL(2,\mathbb C)$ character variety of $M$ and its lifts. The four character varieties associated to $M$ fit in a commutative diagram \begin{equation} \label{eq.four} \xymatrix{ {\overline{X}_{M,\SL(2,\mathbb C)}} \ar[r] \ar[d] & {\overline{X}_{M,\PSL(2,\mathbb C)}}\ar[d] \\ {X_{M,\SL(2,\mathbb C)}}\ar[r] & {X_{M,\PSL(2,\mathbb C)}} } \end{equation} where the vertical maps are the forgetful maps $[(\rho,z)]\longmapsto [\rho] = \chi_\rho$ and the horizontal maps are induced by the projection $\SL(2,\mathbb C) \longrightarrow \PSL(2,\mathbb C)$. The vertical maps are generically 2:1 at the geometric components. The horizontal maps are discussed in \cite[Cor.3.2]{Dn}. The notation $X_M$ of Section \ref{sec.intro} matches the notation $X_M=X_{M,\SL(2,\mathbb C)}$ of this section. The next lemma describes the coordinate rings of the four versions of the character variety. \begin{lemma} \label{lem.ch1} \begin{enumerate} \item The coordinate ring of $X_{M,\SL(2,\mathbb C)}$ is generated by $\operatorname{tr}_{g}$ for all $g \in \pi_1(M)$. \item The coordinate ring of $X_{M,\PSL(2,\mathbb C)}$ is generated by $\operatorname{tr}_{g}^2$ for all $g \in \pi_1(M)$. \item The coordinate ring of $\overline{X}_{M,\SL(2,\mathbb C)}$ is generated by $\operatorname{tr}_{g}$ for all $g \in \pi_1(M)$ and by $e_{\gamma}$ for $\gamma \in \pi_1(\partial M)$. \item The coordinate ring of $\overline{X}_{M,\PSL(2,\mathbb C)}$ is generated by $\operatorname{tr}_{g}^2$ for all $g \in \pi_1(M)$ and by $e_{\gamma}^2$ for $\gamma \in \pi_1(\partial M)$. \end{enumerate} \end{lemma} The commutative diagram \eqref{eq.four} gives an inclusion of fields of rational functions: \begin{equation} \label{eq.fourf} \xymatrix{ \mathbb Q(\overline{X}_{M,\SL(2,\mathbb C)}) & \ar@{_{(}->}[l] \, \mathbb Q(\overline{X}_{M,\PSL(2,\mathbb C)}) \\ \mathbb Q(X_{M,\SL(2,\mathbb C)}) \ar@{^{(}->}[u] & \ar@{_{(}->}[l] \, \mathbb Q(X_{M,\PSL(2,\mathbb C)}) \ar@{^{(}->}[u] } \end{equation} where the vertical field extensions are of degree $2$. \subsection{The coefficient field of augmented representations} \label{sub.cfield} A crucial part of our proof of Theorem \ref{thm.1} is the choice of a coefficient field of an $\SL(2,\mathbb C)$-representation of $\pi_1(M)$. In this section, we show that the notion of an augmented representation fits well with the choice of a coefficient field.
First, let us describe the problem. Given a subgroup $\Gamma$ of $\SL(2,\mathbb C)$, we can define its {\em trace field} $\mathbb Q(\Gamma)$ (resp. its {\em coefficient field} $E(\Gamma)$) by $\mathbb Q(\operatorname{tr}(A)\, | \, A \in \Gamma)$ (resp. the field generated over $\mathbb Q$ by the entries of all elements $A$ of $\Gamma$). The trace field, but {\em not} the coefficient field, of $\Gamma$ is obviously invariant under conjugation of $\Gamma$ in $\SL(2,\mathbb C)$. In general, it is not possible to choose a conjugate of $\Gamma$ to be a subgroup of $\SL(2,\mathbb Q(\Gamma))$. The following lemma shows that this is possible after passing to an at most quadratic extension of the trace field. \begin{lemma} \label{lem.ch2}(\cite[Prop.~3.3]{Ma}\cite[Cor.~3.2.4]{MR}) If $\Gamma$ is non-elementary, then $\Gamma$ is conjugate to a subgroup of $\SL(2,K)$ where $K=\mathbb Q(\Gamma)(e)$ is an extension of degree $[K:\mathbb Q(\Gamma)]\leq 2$, and $e$ can be chosen to be an eigenvalue of a loxodromic element of $\Gamma$. \end{lemma} For the definition of a {\em non-elementary} subgroup of $\SL(2,\mathbb C)$ and of a {\em loxodromic} element, see \cite{Ma,MR}. The proof of Lemma~\ref{lem.ch2} uses the theory of 4-dimensional {\em quaternion algebras}. We want to apply Lemma \ref{lem.ch2} to a representation $\rho \in R({M,\SL(2,\mathbb C)})$. Recall that the discrete faithful representation $\rho_0$ of $\pi_1(M)$ is non-elementary, and that the subset of characters of elementary representations in the geometric component $X_{M,\SL(2,\mathbb C)}$ is Zariski closed, and therefore finite; see \cite{MR}. Given a representation $\rho \in R({M,\SL(2,\mathbb C)})$, let $\mathbb Q(\rho)$ and $E(\rho)$ denote the {\em trace field} and the {\em coefficient field} of the subgroup $\rho(\pi_1(M)) \subset \SL(2,\mathbb C)$ respectively. Likewise, if $(\rho,z) \in \overline{R}({M,\SL(2,\mathbb C)})$ is an augmented representation, let $\mathbb Q(\rho,z)$ denote the field generated over $\mathbb Q$ by $\operatorname{tr}_g(\rho)$ for $g \in \pi_1(M)$ and $e_{\gamma}$ for $\gamma \in \pi_1(\partial M)$. Similarly, we define the coefficient field $E(\rho, z)$ associated to the augmented representation $(\rho, z)$. The next lemma follows from Lemma \ref{lem.ch2} and the above discussion. \begin{lemma} \label{lem.ch3} \begin{enumerate} \item If $\rho \in R({M,\SL(2,\mathbb C)})$ is generic (i.e., non-elementary) then a conjugate of $\rho$ is defined over an at most quadratic extension of $\mathbb Q(\rho)$. \item If $(\rho,z) \in \overline{R}({M,\SL(2,\mathbb C)})$ is generic (i.e., non-elementary) then there exists $N \in \SL(2,\mathbb C)$ so that $N^{-1}\rho N$ is defined over $E(\rho,z)$. \item There exists $N \in \SL(2,\mathbb C)$ such that if $(\rho,z)$ is near the discrete faithful representation $(\rho_0,z_0)$, then $N^{-1}(\rho,z) N$ is defined over $E(\rho,z)$. \end{enumerate} \end{lemma} An alternative version of the above lemma is possible; see Lemma \ref{lem.ch6} below.
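Note that the eigenvalue $e$ of Lemma \ref{lem.ch2} satisfies the monic quadratic
$$ e^2-\operatorname{tr}(A)\,e+1=0 $$
over the trace field, where $A \in \Gamma$ is the corresponding loxodromic element; this makes it transparent why the extension $K/\mathbb Q(\Gamma)$ has degree at most $2$. The same quadratic relates the functions $e_\gamma$ and $\operatorname{tr}_\gamma=e_\gamma+e_\gamma^{-1}$ on the augmented character varieties of Section \ref{sub.four}.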
\subsection{Augmented representations and the shape field} \label{sub.shapefield} There is an alternative description of the field $\mathbb Q(\overline{X}_{M,\PSL(2,\mathbb C)})$ in terms of shape parameters of ideal triangulations of $M$, which is useful in applications. For completeness, we discuss it in this section and the next. Let us first describe $\overline{X}_{M,\PSL(2,\mathbb C)}$ in terms of {\em pseudo-developing maps}, discussed in detail in \cite[Sec.2.5]{Dn}. Given $\rho \in R_{M,\PSL(2,\mathbb C)}$, consider a $\rho$-equivariant map $\widetilde{M} \longrightarrow \mathbb H^3$, where $\mathbb H^3$ denotes the 3-dimensional hyperbolic space. Since $\partial M$ is a 2-torus, it lifts to a disjoint collection of planes $\mathbb R^2$ in the universal cover $\widetilde{M}$. Let $\overline{M}$ denote the space obtained by cutting $\widetilde{M}$ along these planes, and crushing them into points. Set-theoretically, the set $\overline{M}\setminus \widetilde{M}$ of ideal points is in 1-1 correspondence with the {\em cusps} of $M$ in $\mathbb H^3$, i.e., with the cosets $\pi_1(M)/\pi_1(\partial M)$. An augmented representation $(\rho,z) \in \overline{R}_{M,\PSL(2,\mathbb C)}$ gives a $\pi_1(M)$-equivariant map $$ D_{(\rho,z)}: \overline{M} \longrightarrow \overline{\mathbb H}^3 $$ where $\overline{\mathbb H}^3=\mathbb H^3 \cup \mathbb C\mathbb P^1$ is the compactification of hyperbolic space by adding a sphere $\mathbb C\mathbb P^1$ at infinity. Such a map is a pseudo-developing map in the sense of \cite[Sec.2.5]{Dn}. An augmented character $[(\rho,z)] \in \overline{X}_{M,\PSL(2,\mathbb C)}$ does not have a unique pseudo-developing map; however, any two are homotopic, for example using the straight-line homotopy $t f(x) +(1-t)g(x)$ in $\mathbb H^3$. Thus, there is a well-defined map: \begin{equation} \label{eq.pseudo} \overline{X}_{M,\PSL(2,\mathbb C)}\longrightarrow \{\text{Pseudo-developing maps of $M$, modulo homotopy rel boundary}\}. \end{equation} Consider a 4-tuple of distinct points $(A,B,C,D) \in (\overline{M}\setminus \widetilde{M})^4$, and an augmented character $[(\rho,z)] \in \overline{X}_{M,\PSL(2,\mathbb C)}$. Then $D_{[(\rho,z)]}$ sends $A,B,C,D$ to four points $A',B',C',D'$ in $\mathbb C\cup\{\infty\}=\mathbb C\mathbb P^1=\partial \mathbb H^3$, and we consider their {\em cross-ratio} $$ cr_{A,B,C,D}[(\rho,z)]=\frac{(A'-D')(B'-C')}{(A'-C')(B'-D')}. $$ If $A',B',C',D'$ are distinct, then $cr_{A,B,C,D}[(\rho,z)] \in \mathbb C$; otherwise $cr_{A,B,C,D}[(\rho,z)]$ is undefined. This gives a rational map $$ cr_{A,B,C,D}: \overline{X}_{M,\PSL(2,\mathbb C)} \longrightarrow \mathbb C. $$ Let $\mathbb Q^{\mathrm{dev}}_M$ denote the field over $\mathbb Q$ generated by $cr_{A,B,C,D}$ for all 4-tuples of distinct points of $\overline{M}\setminus \widetilde{M}$. \begin{lemma} \label{lem.ch4} We have $$ \mathbb Q^{\mathrm{dev}}_M=\mathbb Q(\overline{X}_{M,\PSL(2,\mathbb C)}). $$ \end{lemma} The proof will be given in the next section.
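To see why $cr_{A,B,C,D}$ descends from $\overline{R}_{M,\PSL(2,\mathbb C)}$ to the quotient, recall the standard invariance of the cross-ratio under M\"obius transformations: if $\phi(w)=\frac{aw+b}{cw+d}$ with $ad-bc=1$, then
$$ \phi(w_1)-\phi(w_2)=\frac{w_1-w_2}{(cw_1+d)(cw_2+d)}, $$
and since each of the four points $A',B',C',D'$ appears exactly once in the numerator and once in the denominator of $cr_{A,B,C,D}$, all the correction factors cancel.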
\subsection{Ideal triangulations and the gluing equations variety} \label{sub.gluing} A convenient way to construct the unique hyperbolic structure on $M$, and its small incomplete hyperbolic deformations, is to use an {\em ideal triangulation} $\mathcal T=(\mathcal T_1,\dots,\mathcal T_s)$ of $M$ which recovers the complete hyperbolic structure. For a detailed description of ideal triangulations, see \cite{BP} and also \cite[App.10]{BDR-V}. An ideal triangulation $\mathcal T$ which is compatible with the discrete faithful representation has nondegenerate shape parameters $z_j \in \mathbb C\setminus\{0,1\}$ for $j=1,\dots,s$. Such a triangulation always exists; for example, subdivide the canonical Epstein-Penner decomposition of $M$ by adding ideal triangles; see \cite{EP, BP, PP}. Once we choose shape parameters for each ideal tetrahedron, one can use them to give a hyperbolic metric (in general incomplete) on the universal cover $\widetilde{M}$, provided a compatibility condition along the edges of $\mathcal T$ is satisfied. This compatibility condition defines the so-called {\em Gluing Equations variety} $\mathcal G(\mathcal T)$. In the appendix of \cite{BDR-V}, Dunfield describes a map \begin{equation} \label{eq.Gg} \mathcal G(\mathcal T) \longrightarrow \overline{R}_{M,\PSL(2,\mathbb C)} \end{equation} which projects to an injection \begin{equation} \label{eq.Ggg} \mathcal G(\mathcal T) \longrightarrow \overline{X}_{M,\PSL(2,\mathbb C)}. \end{equation} Consider the field $\mathbb Q(z_1,\dots,z_s)$ over $\mathbb Q$ generated by the shape parameters $z_1,\dots,z_s$. A priori, $\mathbb Q(z_1,\dots,z_s)$ depends on the triangulation $\mathcal T$. The next lemma describes the fields of rational functions of augmented representations in terms of the shape field. \begin{lemma} \label{lem.ch5} \rm{(a)} We have \begin{equation} \label{eq.ch5a} \mathbb Q(\overline{X}_{M,\PSL(2,\mathbb C)})=\mathbb Q(z_1,\dots,z_s) \end{equation} and \begin{equation} \label{eq.ch5b} \mathbb Q(\overline{X}_{M,\SL(2,\mathbb C)})=\mathbb Q(z_1,\dots,z_s,e_{\lambda},e_{\mu}). \end{equation} \rm{(b)} If the image of $(z_1,\dots,z_s) \in \mathcal G(\mathcal T)$ is $[(\rho,z)] \in \overline{R}_{M,\PSL(2,\mathbb C)}$ under the map \eqref{eq.Gg}, then the trace field (resp. coefficient field) of an $\SL(2,\mathbb C)$ lift of $[(\rho,z)]$ is $\mathbb Q(z_1,\dots,z_s)$ (resp. $\mathbb Q(z_1,\dots,z_s,e_{\lambda},e_{\mu})$). \end{lemma} \begin{proof} The shape parameters $z_j$, for $j=1,\dots,s$, are coordinate functions on the curve $\mathcal G(\mathcal T)$. In addition, the squares $e_{\lambda}^2$ and $e_{\mu}^2$ of the eigenvalues of a meridian-longitude pair $(\mu, \lambda)$ of $\partial M$ are rational functions of the shape parameters $z_j$. Since the map in Equation \eqref{eq.Ggg} is an inclusion of a curve into another, it follows that their fields of rational functions are equal. This proves Equation \eqref{eq.ch5a}. Equation \eqref{eq.ch5b} follows from Lemma \ref{lem.ch3} and the fact that $e_{\lambda}^2, e_{\mu}^2 \in \mathbb Q(z_1,\dots,z_s)$. This proves part (a). Part (b) follows from \cite[Cor.3.2.4]{MR}. \end{proof} \begin{proof}(of Lemma \ref{lem.ch4}) It follows by applying verbatim the proof of \cite[Lem.5.5.2]{MR}.
\end{proof} Let us end this section with an alternative version of Lemma \ref{lem.ch3} using shape fields. Recall from \cite[Sec.2]{Dn} that the map in Equation \eqref{eq.Gg} can be defined as follows. Fix a solution $(z_1,\dots,z_s)$ of the Gluing Equations of $\mathcal T$. Lift $\mathcal T$ to an ideal triangulation of $\widetilde{M}$, map the lift of one ideal tetrahedron to a fixed ideal tetrahedron of $\mathbb H^3$ of the same shape, and then use $\pi_1(M)$-equivariance to send every other ideal tetrahedron to an appropriate ideal tetrahedron of $\mathbb H^3$, using face-pairings. There is a consistency condition, which is satisfied since we are using a solution to the Gluing Equations. This defines a developing map and a corresponding $\PSL(2,\mathbb C)$-representation $\rho$. In \cite[App.~10]{BDR-V}, Dunfield describes how to define not only a representation in $\PSL(2,\mathbb C)$, but also an augmented one $(\rho,z)$. The combinatorial structure of $\mathcal T$ gives a presentation of $\Pi = \pi_1(M)$ in terms of {\em face-pairings}: \begin{equation} \label{WirtingerP} \Pi = \left\langle {g_1, \ldots, g_s \;|\; r_1, \ldots, r_{s-1}} \right\rangle. \end{equation} Each generator of $\Pi$ is represented by a path in the $1$-skeleton of the dual triangulation of $\mathcal T$; see~\cite[Chap.~5]{MR} or \cite[Ch.11]{R}. The entries of $\rho(g_j)$, for $j = 1, \ldots, s$, are given by face-pairings, and are explicit matrices with entries in $\mathbb Q(z_1,\dots,z_s)$; see~\cite[Chap.~5]{MR}. The above discussion proves the following version of Lemma \ref{lem.ch3}. \begin{lemma} \label{lem.ch6} \begin{enumerate} \item The image of the map in Equation \eqref{eq.Gg} is defined over $\mathbb Q(z_1,\dots,z_s)$. \item Generically, a lift of the image of the map in Equation \eqref{eq.Gg} to $\overline{R}({M,\SL(2,\mathbb C)})$ is defined over $\mathbb Q(z_1,\dots,z_s,e_{\lambda},e_{\mu})$. \end{enumerate} \end{lemma} \section{The non-abelian Reidemeister torsion} \label{sec.proofs} \subsection{An explanation of the rationality of the Reidemeister torsion in dimension 3} \label{sub.explanation} Before we prove the rationality of the torsion stated in Theorem \ref{thm.1}, let us give the main idea, which is rather simple, and defer the technical details to the next section. The starting point is a hyperbolic manifold $M$ with one cusp. The character variety $\overline{X}_{M,\SL(2,\mathbb C)}$ depends only on $\pi_1(M)$, but we view it in a specific birationally equivalent way by using a combinatorial decomposition of $M$ into ideal tetrahedra. Every such manifold is obtained by a combinatorial face-pairing of a finite collection $\mathcal T$ of nondegenerate (but perhaps flat, or negatively oriented) ideal tetrahedra $\mathcal T_1,\dots, \mathcal T_s$. The hyperbolic shape of a nondegenerate ideal tetrahedron is determined by a complex number $z \in \mathbb C\setminus\{0,1\}$, up to the action of a finite group of order 6. The discrete faithful representation $\rho_0$ assigns hyperbolic shapes $z_j$ to the tetrahedra $\mathcal T_j$ for $j=1,\dots,s$.
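For concreteness (this is classical and not specific to the present argument), the order-6 ambiguity is the anharmonic action on cross-ratios: relabelling the ideal vertices replaces $z$ by one of
$$ z, \qquad \frac{1}{1-z}, \qquad 1-\frac{1}{z}, \qquad \frac{1}{z}, \qquad 1-z, \qquad \frac{z}{z-1}, $$
and the three parameters $z$, $z'=\frac{1}{1-z}$, $z''=1-\frac{1}{z}$ attached to the three pairs of opposite edges of the tetrahedron satisfy $z\,z'\,z''=-1$.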
As we already observed, these shapes satisfy the so-called Gluing Equations, which are a collection of polynomial equations in $z_j$ and $1-z_j$ ensuring that the metric matches along the edges of the ideal tetrahedra. The Gluing Equations define a variety $\mathcal G(\mathcal T)$ which of course depends on $\mathcal T$. When the discrete faithful representation $\rho_0$ slightly deforms into $\rho_t$ (i.e., bends, in the language of Thurston), this causes the shapes $z_j$ of $\mathcal T_j$ to deform to $z_j(t)$. For small enough $t$, the new shapes still satisfy the Gluing Equations. Consequently, for every $t$, the shapes $z_j(t)$, for $j=1,\dots,s$, are algebraically dependent, and so is any algebraic function of the shapes. In the case of the $A$-polynomial, the squares $e_\lambda (t)^2$ and $e_\mu (t)^2$ of the eigenvalues $e_\lambda (t)$ and $e_\mu (t)$ of a meridian-longitude pair of $T^2=\partial M$ are rational functions in $z_j(t)$ (in fact, monomials in $z_j(t)$ and $1-z_j(t)$ with integer exponents), thus $(e_\lambda (t),e_\mu (t))$ are algebraically dependent. This dependence defines the $A$-polynomial. In the case of the Reidemeister torsion and Theorem \ref{thm.1}, the torsion $\tau_{\mu}(\rho_t)$ of the relevant chain complex is defined over the field $\mathbb Q(z_1(t),\dots,z_s(t),e_\lambda (t),e_\mu (t))$. In other words, all matrices that compute the torsion (and thus the ratios of their determinants) have entries in the field $\mathbb Q(z_1(t),\dots,z_s(t),e_\lambda (t),e_\mu (t))$. \subsection{Proof of Theorem \ref{thm.1}} \label{sub.thm1} In this section, we will prove Theorem \ref{thm.1}. Let $M$ be a one-cusped finite-volume complete hyperbolic $3$-manifold. Choose an ideal triangulation $\mathcal T=(\mathcal T_1,\dots,\mathcal T_s)$ compatible with the discrete faithful representation of $M$ as described above, and let $(z_1,\dots,z_s)$ denote the shape parameters of $\mathcal T$. Let $\mathbb K$ denote the following field: $$ \mathbb K=\mathbb Q(z_1,\dots,z_s,e_{\lambda},e_{\mu})=\mathbb Q(\overline{X}_{M,\SL(2,\mathbb C)}), $$ where the last equality follows from Lemma \ref{lem.ch5}. Let $J$ denote an open interval in $\mathbb R$ that contains $0$, and consider a 1-parameter family $t \in J \mapsto z(t)=(z_1(t),\dots,z_s(t)) \in \mathcal G(\mathcal T)$ of solutions of the Gluing Equations, with image $(\rho_t',z'_t) \in \overline{R}({M,\PSL(2,\mathbb C)})$ under the map in Equation \eqref{eq.Gg} and with lift $(\rho_t,z_t) \in \overline{R}({M,\SL(2,\mathbb C)})$, where $\rho_0$ is a lift to $\SL(2,\mathbb C)$ of the discrete faithful representation of $M$. Fix an essential curve $\gamma$ in the boundary torus $\partial M$. We will explain how to define the Reidemeister torsion $\tau_{\gamma}(\rho_t)$ (for complete definitions the reader can refer to Porti's monograph~\cite{Po} and to Turaev's book~\cite{TM}), and why it coincides with the evaluation of an element of $\mathbb K$ at $\rho_t$. The 2-skeleton of the combinatorial dual $W$ to $\mathcal T$ is a 2-dimensional $CW$-complex which is a spine of $M$; see \cite{BP}. The Mostow rigidity theorem implies that every homotopy equivalence of $M$ is homotopic to a homeomorphism (even to an isometry), and Chapman's theorem concludes that every homotopy equivalence of $M$ is simple \cite{Co}. Thus, $W$ is simple homotopy equivalent to $M$, and we can use $W$ to compute $\tau_{\gamma}(\rho_t)$.
The ideas of the definition of the non-abelian torsion $\tau_{\gamma}(\rho_t)$ are the following: \begin{itemize} \item[(a)] Consider the universal cover $\widetilde{W}$ of $W$ and the integral chain complex $C_*(\widetilde{W}; \mathbb{Z})$ of $\widetilde{W}$ for $*=0,1,2$. The fundamental group $\Pi=\pi_1(W)=\pi_1(M)$ acts on $\widetilde{W}$ by covering transformations. This action turns the complex $C_*(\widetilde{W}; \mathbb{Z})$ into a $\mathbb Z[\Pi]$-module. The Lie algebra $\mathfrak{sl}_2(\mathbb C)$ can also be viewed as a $\mathbb Z[\Pi]$-module by using the composition $Ad \circ \rho_t$, where $Ad$ denotes the adjoint action of $\SL(2,\mathbb C)$ on $\mathfrak{sl}_2(\mathbb C)$. We let $\mathfrak{sl}_2(\mathbb C)_{\rho_t}$ denote this $\mathbb Z[\Pi]$-module. The {\em twisted chain complex} of $W$ is the $\mathbb C$-vector space: \begin{equation} \label{eq.twistedC} C_*^{\rho_t} = C_*(\widetilde{W}; \mathbb{Z}) \otimes_{\mathbb{Z}[\Pi]} \mathfrak{sl}_2(\mathbb C)_{\rho_t}. \end{equation} \item[(b)] The twisted chain complex $C_*^{\rho_t}$ computes the so-called {\em twisted homology} of $W$, which is denoted by $H_*^{\rho_t}$. The Betti numbers of $H_*^{\rho_t}$ are given by (because $\rho_t$ lies in a neighborhood of the discrete and faithful representation and thus is generic, or regular in Porti's language, see~\cite[Chap.~3]{Po}): $$ \dim_\mathbb C (H_{0}^{\rho_t})=0, \qquad \dim_\mathbb C (H_{1}^{\rho_t})=1, \qquad \dim_\mathbb C (H_{2}^{\rho_t})=1. $$ \item[(c)] For $i=1,2$ construct elements $\mathbf{h}^t_{i}$ in $C_{i}^{\rho_t}$, which project to bases of the twisted homology groups $H_{i}^{\rho_t}$. \item[(d)] Then, the torsion $\tau_{\gamma}(\rho_t)$ is an explicit ratio of determinants; see~\cite{Db} or~\cite[Chap.~3]{Po} and Equation \eqref{eq.torsiondef} below. \end{itemize} We now give the details of the definition of the non-abelian Reidemeister torsion and prove Theorem \ref{thm.1}. To clarify the presentation, suppose that $V_t$ is a 1-parameter family of $\mathbb C$-vector spaces for $t \in J$. We will say that $V_t$ is defined over $\mathbb K$ if there exists a vector space $V_{\mathbb K}$ over $\mathbb Q$ such that $V_t=(V_{\mathbb K} \otimes_{\mathbb Q} E(\rho_t,z_t)) \otimes_{\mathbb Q} \mathbb C$ for all $t \in J$, where $E(\rho_t,z_t)$ is the coefficient field of $(\rho_t,z_t)$, defined in Section \ref{sub.cfield}. Likewise, a 1-parameter family of $\mathbb C$-linear transformations $T_t \in \mathrm{Hom}_{\mathbb C}(V_t,W_t)$ is defined over $\mathbb K$ if $T_t \in \mathrm{Hom}_{\mathbb Q}(V_{\mathbb K},W_{\mathbb K}) \otimes_{\mathbb Q} \mathbb C$. In concrete terms, a 1-parameter family of matrices (resp. vectors) is defined over $\mathbb K$ if its entries (resp. coordinates) lie in $\mathbb K$. Lemma \ref{lem.ch6} implies the following. \begin{claim} \label{claim.rho} The 1-parameter family $(\rho_t,z_t)$ ($t \in J$) is defined over $\mathbb K$. \end{claim} Consider the presentation $\Pi$ in Equation \eqref{WirtingerP} of $\pi_1(M)$ given by face-pairings.
A coordinate description of the chain complex $C_*^{\rho_t}$ is given by (see \cite{Db}) $$ \label{eq.coordchain} \xymatrix@1{ 0 \ar[r] & {\mathfrak{sl}_2(\mathbb C)^{s-1}}\ar[r]^-{d_2^{\rho_t}} & {\mathfrak{sl}_2(\mathbb C)^s}\ar[r]^-{d_1^{\rho_t}} & {\mathfrak{sl}_2(\mathbb C)}\ar[r] & 0 } $$ for $*=0,1,2$, where the boundary operators are given by $$ d_1^{\rho_t}(x_1, \ldots, x_s) = \sum_{j=1}^s (1 - g_j) \circ x_j, \text{ and } d_2^{\rho_t}(x_1, \ldots, x_{s-1}) = {\left( {\sum_{j=1}^{s-1} \frac{\partial r_j}{\partial g_k} \circ x_j}\right)}_{1 \leqslant k \leqslant s}. $$ Here $g \circ x = Ad_{\rho_t(g)}(x)$ and $\frac{\partial r_j}{\partial g_k}$ denotes the {\em Fox derivative} of $r_j$ with respect to $g_k$. The above description of $C_*^{\rho_t}$ and Claim \ref{claim.rho} imply the following. \begin{claim} \label{claim.k} The 1-parameter family $C_*^{\rho_t}$ ($t \in J$) is defined over $\mathbb K$. \end{claim} Next, we construct a 1-parameter family of basing elements $\mathbf{h}^t_{i}$ for $i=1,2$ and show that it is defined over $\mathbb K$. Let $\left\{e^{(i)}_1, \ldots, e^{(i)}_{n_i}\right\}$ be the set of $i$-dimensional cells of $W$. We lift them to the universal cover and we choose an arbitrary order and an arbitrary orientation for the cells $\left\{ {\tilde{e}^{(i)}_1, \ldots, \tilde{e}^{(i)}_{n_i}} \right\}$. If $\mathcal{B} = \{\mathbf{a}, \mathbf{b}, \mathbf{c}\}$ is an orthonormal basis of $\mathfrak{sl}_2(\mathbb C)$, then we consider the corresponding (geometric) basis over $\mathbb C$: $$ \mathbf{c}^{i}_{\mathcal{B}} = \left\{ \tilde{e}^{(i)}_{1} \otimes \mathbf{a}, \tilde{e}^{(i)}_{1} \otimes \mathbf{b}, \tilde{e}^{(i)}_{1} \otimes \mathbf{c}, \ldots, \tilde{e}^{(i)}_{n_i}\otimes \mathbf{a}, \tilde{e}^{(i)}_{n_i} \otimes \mathbf{b}, \tilde{e}^{(i)}_{n_i}\otimes \mathbf{c}\right\} $$ of $C_i^{\rho_t}$. We fix a generator $P^{\rho_t}$ of $H_0^{\rho_t}(\partial M) \subset C_{0}^{\rho_t}$, i.e., $P^{\rho_t} \in \mathfrak{sl}_2(\mathbb C)$ is such that $Ad_{\rho_t(g)}(P^{\rho_t}) = P^{\rho_t}$ for all $g \in \pi_1(\partial M)$. \begin{claim} \label{claim.P} The 1-parameter family $P^{\rho_t}$ ($t \in J$) is defined over $\mathbb K$. \end{claim} \begin{proof} Observe that $P^{\rho_t}$ is a generator of the intersection $$\ker (Ad_{\rho_t(\mu)} - \mathbf{1}) \cap \ker (Ad_{\rho_t(\lambda)} - \mathbf{1}).$$ Since this family of vector spaces and linear maps is defined over $\mathbb K$ (by Claim \ref{claim.k}), the result follows. \end{proof} The canonical inclusion $j\colon \partial M \to M$ induces {(see~\cite[Corollary 3.23]{Po})} an isomorphism $$ j_*\colon H_2^{\rho_t}(\partial M) \to H_2^{\rho_t}(M) \simeq H_2^{\rho_t}(W) = \ker d^{\rho_t}_2 \subset C_2^{\rho_t}. $$ Moreover, one can prove that {(see~\cite[Proposition 3.18]{Po})} $$ H_2^{\rho_t}(\partial M) \cong H_2(\partial M; \mathbb{Z}) \otimes \mathbb C. $$ More precisely, if $\lbrack \! \lbrack \partial M \rbrack \! \rbrack \in H_2(\partial M; \mathbb{Z})$ denotes the fundamental class induced by the orientation of $\partial M$, then one has $H_2^{\rho_t}(\partial M) = \mathbb C \left[\lbrack \!
\lbrack \partial M \rbrack \! \rbrack \otimes P^{\rho_t}\right]$. The \emph{reference generator} of $H_2^{\rho_t}(M)$ is defined by \begin{equation} \label{EQ:Defh2} \mathbf{h}^t_{2} = j_*([\lbrack \! \lbrack \partial M \rbrack \! \rbrack \otimes P^{\rho_t}]) \in C_{2}^{\rho_t}. \end{equation} Claim~\ref{claim.P} implies that: \begin{claim} \label{claim.h2} The 1-parameter family $\mathbf{h}^t_{2}$ ($t \in J$) is defined over $\mathbb K$. \end{claim} Since $\rho_t$ is near $\rho_0$ and $\gamma$ is admissible, the inclusion $\iota \colon \gamma \longrightarrow M$ induces (see~\cite[Definition 3.21]{Po}) an \emph{isomorphism} $$ \iota_* \colon H^{\rho_t}_1(\gamma) \to H^{\rho_t}_1(M)\simeq H_1^{\rho_t}(W) = \ker d_1^{\rho_t} / \mathrm{im}\, d_2^{\rho_t}. $$ The \emph{reference generator} of the first twisted homology group $H_1^{\rho_t}(M)$ is defined by \begin{equation} \label{EQ:Defh1} \mathbf{h}^t_{1} = \iota_*\left(\left[\lbrack \! \lbrack \gamma \rbrack \! \rbrack \otimes P^{\rho_t}\right]\right) \in C_{1}^{\rho_t}. \end{equation} Claim~\ref{claim.P} implies that: \begin{claim} \label{claim.h1} The 1-parameter family $\mathbf{h}^t_{1}$ ($t \in J$) is defined over $\mathbb K$. \end{claim} Using the bases described above, the non-abelian Reidemeister torsion of the 1-parameter family $\rho_t$ is defined by: \begin{equation} \label{eq.torsiondef} \tau_{\gamma}(\rho_t) = \mathrm{Tor}(C_*^{\rho_t}(W; \mathfrak{sl}_2(\mathbb C)_{\rho_t}), \mathbf{c}^*_{\mathcal{B}}, \mathbf{h}_t^{*}) \in \mathbb C^*. \end{equation} The torsion $\tau_{\gamma}(\rho_t)$ is an invariant of $M$ which is \emph{well defined up to a sign}. Moreover, if $\rho_t$ and $\tilde\rho_t$ are two 1-parameter families of representations which pointwise have the same character, then $\tau_{\gamma}(\rho_t) =\tau_{\gamma}(\tilde\rho_t)$. Finally, one can observe that $\tau_{\gamma}(\rho_t)$ does not depend on the choice of the invariant vector $P^{\rho_t}$ (see~\cite{Db}). The above discussion implies that: \begin{claim} \label{claim.tau} For every essential curve $\gamma \in \partial M$, the 1-parameter family $\tau_{\gamma}(\rho_t)$ ($t \in J$) is defined over $\mathbb K$. \end{claim} In other words, there exists $\hat \tau_{\gamma} \in \mathbb Q(\overline{X}_{M,\SL(2,\mathbb C)})$ such that for $(\rho_t,z)$ near $(\rho_0,z_0)$ we have $\tau_{\gamma}(\rho_t)=\hat \tau_{\gamma}(\rho_t,z)$. Since the left hand side does not depend on $z$, it follows from Section \ref{sub.four} that $\hat \tau_{\gamma} \in \mathbb Q(X_{M,\SL(2,\mathbb C)})$. This concludes the proof of Theorem \ref{thm.1}. \qed \subsection{Proof of Theorems \ref{thm.11} and \ref{thm.2}} \label{sub.thm11} The proof of Theorem \ref{thm.1} implies that for every admissible curve $\gamma$, the torsion function $\tau_{\gamma}$ is the germ of an element of $\mathbb Q(\overline{X}_{M,\SL(2,\mathbb C)})$. Theorem \ref{thm.11} follows from Theorem \ref{thm.1} and Lemmas \ref{lem.ch3} and \ref{lem.ch5}. Theorem \ref{thm.2} follows from the fact that $\overline{X}_{M,\SL(2,\mathbb C)}$ is an affine complex curve, and its field of rational functions has transcendence degree $1$.
In addition, $\tau_{\gamma}$ and $\operatorname{tr}_{\gamma}$ are rational functions on $\overline{X}_{M,\SL(2,\mathbb C)}$. \subsection{The dependence of the Reidemeister torsion on the admissible curve and the $A$-polynomial} \label{sub.Atorsion} In this section, we discuss the dependence of the non-abelian Reidemeister torsion on the admissible curve. Although this discussion is independent of the proof of Theorem \ref{thm.1}, it might be useful in other contexts. Recall that the non-abelian Reidemeister torsion is defined in terms of the twisted chain complex in Equation \eqref{eq.twistedC}, which is not acyclic. Thus, it requires the choice of distinguished bases $\mathbf{h}_{i}$ for $i=1,2$. Such bases can be chosen once an {\em admissible curve} $\gamma \in \partial M$ is chosen; see~\cite[Chap. 3]{Po}. Porti proves that for every homotopically non-trivial curve $\gamma$ in $\partial M$, the discrete and faithful representation $\rho_0$ is $\gamma$-{\em regular}. The same holds for representations $\rho$ near $\rho_0$. A well-known application of Thurston's {\em Hyperbolic Dehn Surgery Theorem} implies that $\rho_0 \in X_M$ is a smooth point of $X_M$ and that a neighborhood $U$ of $\rho_0$ is parametrized by the polynomial function $\operatorname{tr}_{\gamma}$; see for example \cite{NZ} and \cite[Cor.~3.28]{Po}. Choose a meridian-longitude pair $(\mu,\lambda)$ in $\partial M$, set $\operatorname{tr}_{\mu}(\rho_t)=e_\mu+e_\mu^{-1}$, $\operatorname{tr}_{\lambda}(\rho_t)=e_\lambda+e_\lambda^{-1}$, and consider the $A$-polynomial $A_M=A_M(e_\mu,e_\lambda) \in \mathbb Z[e_\mu^{\pm 1},e_\lambda^{\pm 1}]$ of $M$. For a detailed discussion of the $A$-polynomial of $M$ and its relation to the various versions of the character variety, see the appendix of \cite{BDR-V}. With the above notation, Porti proves that the dependence of the torsion on the admissible curve $\gamma$ is controlled by the $A$-polynomial. More precisely, one has \cite[Cor.~4.9,~Prop.~4.7]{Po}: \begin{eqnarray} \label{eq:changecurve} \tau_\mu &=& \tau_\lambda \cdot \left( \frac{\operatorname{tr}_\lambda^2 - 4}{\operatorname{tr}_\mu^2 - 4}\right)^{1/2} \cdot \frac{\partial \operatorname{tr}_\mu}{\partial \operatorname{tr}_\lambda} \\ \label{eq.tauml} & = & \tau_\lambda \cdot (\mathrm{res}^* \circ (\Delta^*)^{-1})\left( \frac{e_\lambda}{e_\mu}\frac{\partial A_M/\partial e_\lambda}{\partial A_M/\partial e_\mu} \right), \end{eqnarray} where $\mathrm{res}^*\colon X_{M, \SL(2, \mathbb C)} \to X_{\partial M, \SL(2, \mathbb C)}$ is the restriction map induced by the usual inclusion $\partial M \hookrightarrow M$, and $\Delta^*$ acts as follows on the trace field: $$ \Delta^*(\operatorname{tr}_\gamma) = e_\gamma + e_\gamma^{-1}. $$ \subsection*{Acknowledgment} A first draft of the paper was discussed during a workshop on the Volume Conjecture in Strasbourg in 2007. The authors wish to thank the organizers, S. Baseilhac, F. Costantino and G. Massuyeau, for their hospitality, and M. Heusener, R. Kashaev and W. Neumann for enlightening conversations. S.G. wishes to thank N. Dunfield for numerous useful conversations, suggestions and for a careful reading of a first draft.
\bibliographystyle{hamsalpha}\bibliography{biblio} \end{document} \endinput \ifx\undefined\bysame \newcommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\,} \fi \begin{thebibliography}{[EMSS]} \bibitem[APS]{APS} M.F. Atiyah, V.K. Patodi and I.M. Singer, {\em Spectral asymmetry and Riemannian geometry I-III}, Math. Proc. Cambridge Philos. Soc. {\bf 77} (1975) 43--69, ibid {\bf 78} (1975) 405--432, ibid {\bf 79} (1976) 71--99. \bibitem[BP]{BP} R. Benedetti and C. Petronio, {\em Lectures on hyperbolic geometry}, Universitext, Springer-Verlag (1992). \bibitem[Bo]{Bo} D.W. Boyd, {\em Mahler's measure and invariants of hyperbolic manifolds}, in Number Theory for the Millennium (M.A. Bennett et al. ed.), A.K. Peters, Boston (2001). \bibitem[BDR-V]{BDR-V} \bysame, N.M. Dunfield and F. Rodriguez-Villegas, {\em Mahler's Measure and the Dilogarithm (II)}, preprint 2003, {\tt arXiv:0308041}. \bibitem[BZ]{BZ} S. Boyer and X. Zhang, {\em Every nontrivial knot in $S^3$ has nontrivial $A$-polynomial}, to appear in Proc.~Amer.~Math.~Soc. \bibitem[BH1]{BH1} D. Burghelea and S. Haller, {\em Complex-valued Ray-Singer torsion}, J. Funct. Anal. {\bf 248} (2007) 27--78. \bibitem[BH2]{BH2} \bysame and \bysame, {\em Torsion, as a function on the space of representations}, in $\mathbb C^*$-algebras and elliptic theory II, Trends Math., Birkh\"auser (2008) 41--66. \bibitem[Ca]{Ca} D. Calegari, {\em Real places and torus bundles}, Geom. Dedicata {\bf 118} (2006) 209--227. \bibitem[Ch]{Ch} A. Champanerkar, {\em $A$-polynomials and Bloch invariants of hyperbolic 3-manifolds}, thesis, Columbia University 2003. \bibitem[Co]{Co} M. Cohen, {\em A course in simple-homotopy theory}, Graduate Texts in Mathematics {\bf 10}, Springer-Verlag (1973). \bibitem[CCGLS]{CCGLS} D. Cooper, M. Culler, H. Gillet, D. Long and P. Shalen, {\em Plane curves associated to character varieties of $3$-manifolds}, Invent.~Math.~{\bf 118} (1994) 47--84. \bibitem[Cu]{Cu} M. Culler, {\em Lifting representations to covering groups}, Adv. in Math. {\bf 59} (1986) 64--70. \bibitem[DGLZ]{DGLZ} T. Dimofte, S. Gukov, J. Lenells and D. Zagier, {\em Exact Results for Perturbative Chern-Simons Theory with Complex Gauge Group}, preprint 2009, {\tt arXiv:0903.2472}. \bibitem[Db]{Db} J. Dubois, {\em Non-abelian twisted Reidemeister torsion for fibered knots}, Canad. Math. Bull.~{\bf 49} (2006) 55--71. \bibitem[DHY]{DHY} J. Dubois, V. Huynh and Y. Yamaguchi, {\em Non-abelian Reidemeister torsion for twist knots}, J. of Knot Theory and Its Ramifications~{\bf 18} (2009) 1--39. \bibitem[Dn]{Dn} N.~Dunfield, {\em Cyclic surgery, degrees of maps of character curves, and volume rigidity for hyperbolic manifolds}, Invent.~Math.~{\bf 136} (1999), 623--657. \bibitem[DG]{DG} \bysame and S. Garoufalidis, {\em Non-triviality of the $A$-polynomial for knots in $S^3$}, Algebr. Geom. Topol. {\bf 4} (2004) 1145--1153. \bibitem[EP]{EP} D.B. Epstein and R.C. Penner, {\em Euclidean decompositions of noncompact hyperbolic manifolds}, J. Differential Geom. {\bf 27} (1988) 67--80. \bibitem[F]{F} S. Francaviglia, {\em Hyperbolic volume of representations of fundamental groups of cusped 3-manifolds}, Int. Math. Res. Not. {\bf 9} (2004) 425--459. \bibitem[FK]{FK} \bysame and B.
Klaff, {\em Maximal volume representations are Fuchsian}, Geom. Dedicata {\bf 117} (2006) 111--124. \bibitem[GL1]{GL1} \bysame and T.T.Q. Le, {\em The colored Jones function is $q$-holonomic}, Geom. and Topology {\bf 9} (2005) 1253--1293. \bibitem[GL2]{GL2} \bysame and \bysame, {\em Asymptotics of the colored Jones function of a knot}, preprint 2005, {\tt math.GT/0508100}, to appear in Geom. and Topology. \bibitem[Ga1]{Ga1} \bysame, {\em On the characteristic and deformation varieties of a knot}, Proceedings of the CassonFest, Geometry and Topology Monographs {\bf 7} (2004) 291--309. \bibitem[Ga2]{Ga2} \bysame, {\em Chern-Simons theory, analytic continuation and arithmetic}, Acta Math. Vietnamica {\bf 33} (2008) 335--362. \bibitem[Go]{Go} W.M. Goldman, {\em Invariant functions on Lie groups and Hamiltonian flows of surface group representations}, Invent. Math. {\bf 85} (1986) 263--302. \bibitem[GM]{GM} S. Gukov and H. Murakami, {\em $\SL(2,\mathbb C)$ Chern-Simons theory and the asymptotic behavior of the colored Jones polynomial}, preprint 2006, {\tt arXiv:math/0608324}. \bibitem[MR]{MR} C. Maclachlan and A.W. Reid, {\em The arithmetic of hyperbolic 3-manifolds}, Graduate Texts in Mathematics {\bf 219}, Springer-Verlag (2003). \bibitem[Ma]{Ma} A.M. Macbeath, {\em Commensurability of co-compact three-dimensional hyperbolic groups}, Duke Math. J. {\bf 50} (1983) 1245--1253. \bibitem[Mi]{Mi} J. Milnor, {\em Whitehead torsion}, Bull. AMS {\bf 72} (1966) 358--426. \bibitem[NR]{NR} W.D. Neumann and A.W. Reid, {\em Arithmetic of hyperbolic manifolds}, Topology '90, Ohio State Univ. Math. Res. Inst. Publ. {\bf 1} (1992) 273--310. \bibitem[NZ]{NZ} \bysame and D. Zagier, {\em Volumes of hyperbolic three-manifolds}, Topology {\bf 24} (1985) 307--332. \bibitem[PP]{PP} C. Petronio and J. Porti, {\em Negatively oriented ideal triangulations and a proof of Thurston's hyperbolic Dehn filling theorem}, Expo. Math. {\bf 18} (2000) 1--35. \bibitem[Po]{Po} J. Porti, {\em Torsion de Reidemeister pour les vari\'et\'es hyperboliques}, Memoirs AMS {\bf 612} (1997). \bibitem[R]{R} J.G. Ratcliffe, {\em Foundations of hyperbolic manifolds}, Graduate Texts in Mathematics {\bf 149}, Springer-Verlag (1994). \bibitem[Sh]{Sh} P.B. Shalen, {\em Representations of $3$-manifold groups}, Handbook of Geometric Topology (2002) 955--1044. \bibitem[Th]{Th} W. Thurston, {\em The geometry and topology of 3-manifolds}, 1979 notes, available from MSRI at {\tt www.msri.org/publications/books/gt3m/}. \bibitem[Tu]{Tu} V. Turaev, {\em The Yang-Baxter equation and invariants of links}, Inventiones Math. {\bf 92} (1988) 527--553. \bibitem[T]{TM} V. Turaev, {\em Torsions of $3$-dimensional manifolds}, Progress in Mathematics {\bf 208}, Birkh\"auser Verlag (2002). \end{thebibliography} \end{document}
\begin{document} \title{Mean-field models of dynamics on networks via moment closure: \\an automated procedure} \author{Bert Wuyts} \affiliation{College of Engineering, Mathematics and Physical Sciences, University of Exeter, EX4 4QF, UK} \author{Jan Sieber} \affiliation{College of Engineering, Mathematics and Physical Sciences, University of Exeter, EX4 4QF, UK} \email{[email protected]} \begin{abstract} In the study of dynamics on networks, moment closure is a commonly used method to obtain low-dimensional evolution equations amenable to analysis. The variables in the evolution equations are mean counts of subgraph states and are referred to as moments. Due to interaction between neighbours, each moment equation is a function of higher-order moments, such that an infinite hierarchy of equations arises. Hence, the derivation requires truncation at a given order, and, an approximation of the highest-order moments in terms of lower-order ones, known as a closure formula. Recent systematic approximations have either restricted focus to closed moment equations for SIR epidemic spreading or to unclosed moment equations for arbitrary dynamics. In this paper, we develop a general procedure that automates both derivation and closure of arbitrary order moment equations for dynamics with nearest-neighbour interactions on undirected networks. Automation of the closure step was made possible by our generalised closure scheme, which systematically decomposes the largest subgraphs into their smaller components. We show that this decomposition is exact if these components form a tree, there is independence at distances beyond their graph diameter, and there is spatial homogeneity. Testing our method for SIS epidemic spreading on lattices and random networks confirms that biases are larger for networks with many short cycles in regimes with long-range dependence. A \texttt{Mathematica} package that automates the moment closure is available for download. \keywords{moment closure, networks, graph theory, dynamics on networks, network motifs, nonlinear dynamics, master equation, Markov networks, epidemic models} \end{abstract} \maketitle \section{Introduction} The dynamics of complex systems are usually most accurately represented by high-dimensional stochastic simulation models. However, their large state space makes exact mathematical analysis prohibitive. Therefore, one often looks for low-dimensional approximations that permit analysis. In \emph{moment closure}, one achieves this by studying the time evolution of a finite set of ``moments'' rather than that of the full probability distribution of the considered stochastic dynamical system \cite{kuehn2016}. The complete set of moment equations forms an infinite hierarchy of ordinary differential equations (ODEs), with lower-order moments depending on higher order ones. The approximation consists then of truncating the hierarchy at a chosen order and replacing the highest order moments by functions of the lower-order moments. Such functions are known as closure formulas, and they can be obtained in various ways \cite{kuehn2016}, such as via an assumption of statistical independence \cite[e.g.][]{Sharkey2015,Sharkey2015a,Sharkey2011}, physical principles (e.g. maximum entropy \cite{Rogers2011}), time scale separation \cite{GrossKevrekidis2008}, and assumptions on the type of probability distribution \cite[e.g.][]{Isham1991}. 
In this work, we only consider closures derived from an assumption of statistical independence, which is equivalent to a \emph{mean-field approximation}, a method originating from the statistical physics of phase transitions in materials \cite{weiss1907,bragg1934,bethe1935,kikuchi1951}. First-order moment closure assumes pairwise independence of species counts and corresponds to the `mean field' or `simple mean field' \cite[e.g][]{Marro1999,Henkel2008,Tome2015}, resulting in equations only for total counts of each species. Likewise, second order moment closure assumes independence of pair counts in larger units, and corresponds to the `pair approximation' \cite[e.g.][]{Matsuda1992,Keeling1997,Rand1999,Dieckmann2000,Kefi2007a,Gross2006}, which also includes equations for pair counts. We aim to exploit this connection between moment closure and mean-field approximations to generalise and automate the derivation of arbitrary-order mean-field models for arbitrary dynamics with nearest-neighbour interactions on undirected networks. In the rest of our introduction, we introduce our approach with more precision, discuss the relevant literature, state our aims, and provide an overview of the paper contents. In general, moments in the moment closure for dynamics on networks represent the expected frequencies of small subgraph states known as \emph{network motifs} \cite{House2009}. Derivation of the moment equations proceeds from smaller to larger sized-motifs, with dynamics of mean motif counts of size $m$ depending only on mean motif counts of size $m$ and $m{+}1$ if the dynamics has only nearest-neighbour interactions. Hence, a system of ODEs obtained in such a manner for motif counts up to a maximum considered size $k$ (also referred to as the order of the moment closure) is always underdetermined, because it depends on motifs of size $k{+}1$ but does not contain equations for them. Therefore, as the second step of moment closure, a closure approximation is applied by expressing counts of $(k{+}1)$-size motifs as functions of counts of $\{1,...,k\}$-size motifs, closing the system of ODEs. In this substitution, larger-sized motifs factorise in terms of smaller-sized ones, which we will justify, as mentioned above, by an assumption of statistical independence. For homogeneous networks, closures that are valid at the individual level (i.e. concerning states of given nodes) are also valid at the population level (i.e. concerning total counts or averages of states in the whole network) \cite{Sharkey2008}, which then permits a compact description in terms of population-level quantities. Yet, the number of motif types, and hence equations, increases combinatorially with $k$. It is then hoped that the derivation can be stopped at an order low enough for the resulting system of ODEs to be sufficiently amenable to analytical or numerical methods and high enough to satisfy the independence assumptions underlying the closure approximately. We note that various other types of approximations exist that focus on specific types of subgraphs, such as active motifs \cite{Bohme2011}, star graphs \cite{gleeson2013,fennell2019} or hypergraphs of cliques \cite{marceau2010,stonge2021} or of general motifs \cite{Cui2022}. These have also been referred to as moment closure \cite[e.g. in][]{demirel2014}, but their truncation order and closure formulas are implicit in the method. 
At present, the equations obtained by the approach of references \cite{gleeson2013,fennell2019,marceau2010,stonge2021,Cui2022} are referred to as approximate master equations. In the context of population dynamics on networks or lattices, moment closure methods have been used to study applications such as spatial ecology \cite[e.g.][]{Matsuda1992,Dieckmann2000,Kefi2007a}, epidemics \cite[e.g.][]{Keeling1997,Keeling1999,Rand1999,Kiss2017,House2009,Gross2006}, opinion formation \cite[e.g.][]{demirel2014}, evolution of cooperation \cite[e.g.][]{Szabo2002}, among others. Despite the wide use of moment closure in applications, derivation of the moment equations is customarily done separately for each considered process and approximation order, while the closure formulas used are justified only heuristically. Comparatively few studies have shown how moment equations are derived in general from the master equation or how closure formulas arise from precisely defined independence assumptions. Regarding the former, an automated algorithm to derive (unclosed) moment equations for arbitrary adaptive dynamics on directed networks was recently developed by Danos et al. \cite{Danos2020}. Regarding the latter, attention has centred on the specific case of SIR epidemic spreading \cite{Sharkey2015,Sharkey2015a,Sharkey2011} because proving validity of low-order closures is least challenging here. In particular, Sharkey et al. \cite{Sharkey2015a} proved that an exact individual-level closure approximation exists for motifs that have an all-susceptible set of nodes which cuts all possible chains of infection between the remaining parts when removed. In tree networks, this is already possible with three nodes, so that the largest required motif in the moment equations is of size 2. In the case of non-tree networks, larger motifs need to be taken into account, resulting in a larger number of equations. Hence, while Danos et al. \cite{Danos2020} have shown that it is feasible to derive moment equations in a generic form, the work on SIR spreading \cite{Sharkey2011,Sharkey2015a,Sharkey2015} indicates which types of independence assumptions are required to obtain valid closures. In this paper, we provide a first \emph{fully automated} procedure for both derivation and closure of population-level moment equations up to any order and for arbitrary dynamics with at most nearest-neighbour interactions on undirected networks. Automated derivation is made possible by our generic moment equation \eqref{eq:dinvfull1}, which we derived from the master equation. As mentioned above, more general derivations than ours exist \cite{Danos2020}. Hence, our main contribution is in automating also the closure, which we show and justify in detail (Section \ref{sec:Closure-scheme}). Our closure scheme (equations \eqref{eq:dec:mot:chord} or \eqref{eq:dec:mot:nchord}) generalises previous insights \cite{Sharkey2011,Sharkey2015a,Sharkey2015} and relies on the theory of Markov networks \cite{Pearl1988} to make it applicable to motifs of any type and size, such that it can close any set of moment equations at arbitrary order. We show that our closure scheme is exact if the motifs that are decomposed by the closure form a tree and if there is independence beyond their graph diameter.
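To fix ideas, the classical pair approximation is the simplest (second-order, homogeneous) instance of the type of closure developed below; we state it here only as an illustration, with any combinatorial prefactors that depend on the counting convention omitted. Writing $[\,\cdot\,]$ for the expected number of occurrences of a motif, the count of open triples $A\!-\!B\!-\!C$ is replaced by
\begin{equation*}
[ABC] \approx \frac{[AB]\,[BC]}{[B]},
\end{equation*}
which becomes exact when the states of the two outer nodes are conditionally independent given the state of the shared middle node, together with the spatial homogeneity assumption discussed above.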
As shown in Figure \ref{fig:overview}, the whole procedure consists of four main steps: (\emph{i}) enumeration of all required motifs up to size $k$, (\emph{ii}) derivation of the unclosed ODEs for these motifs, (\emph{iii}) elimination via conservation relations, and (\emph{iv}) closure of the system of ODEs. We will refer to the final closed system of ODEs as the $k$-th order mean field, or MF$k$ in short. Elimination is not strictly necessary, but it increases efficiency, particularly for higher-order approximations. We developed a \texttt{Mathematica} \cite{Mathematica} package that derives MF$k$ by performing steps (\emph{i})--(\emph{iv}). The required inputs for this algorithm are the counts of all induced subgraphs in the underlying network up to size $k$, and the matrices $\mathbf{R}^0$, $\mathbf{R}^1$ with conversion and interaction rates. A code example with output is discussed in Appendix \ref{sec:MFhoeqs} and the package is available for download \cite{Wuyts2022mfmcl}. In Sections~\ref{sec:ds-ct-mc} and \ref{sec:motifs}, the underlying Markov chain for the network and its motifs are introduced. Section~\ref{sec:diff:inv} explains the general formula \eqref{eq:dinvfull1} for step (\emph{ii}), which follows from the master equation (for its derivation, see Appendix \ref{sec:Differential-and}). The conservation relations used for elimination of variables from the ODEs in step (\emph{iii}) are shown in equations (\ref{eq:cons:norm},~\ref{eq:cons2}) of Section \ref{sec:alg:iv}. The form of the system of moment equations up to a truncation order $k$ \eqref{eq:xdotblock} and the variable elimination are shown in Section \ref{sec:QE}, resulting in the unclosed system \eqref{eq:xdotsubcons2}. Finally, general expressions to close the system of moment equations in step (\emph{iv}) are derived in Section~\ref{sec:Closure-scheme} (equations \eqref{eq:dec:mot:chord} or \eqref{eq:dec:mot:nchord}), resulting in the form \eqref{eq:xdotsubconscl}. In Section \ref{sec:SIS} we set up MF1--5 models of SIS epidemic spreading and compare their steady states to those of simulations on a selection of networks. We focus in particular on the square lattice, for which low-order moment closures fail due to its large number of cycles of any size, and we compare against random networks and higher-dimensional lattices, for which they work well. \begin{figure*} \caption{Mean-field models for multistate dynamics on networks via moment closure. Left: general form. Right: example (MF2 for SIS spreading). Input (top): network ${\cal G}$.} \label{fig:overview} \end{figure*} \section{The underlying discrete-state continuous-time Markov chain} \label{sec:ds-ct-mc} We consider a dynamical system on a fixed undirected graph $\cal{G}$ with $N$ nodes and adjacency matrix $\mathsf{A}\in\{0,1\}^{N\times N}$, where each node may have one out of $n$ discrete states. We may see the nodes as locations and the states as species, such that the space at node $i\in\{1,...,N\}$ is occupied by exactly one species in $\{1,...,n\}$. We denote the state vector at time $t$ by $X(t)$, such that $[X_i(t)]_{i=1}^N\in\{1,...,n\}^N$. The type of dynamics we consider is a continuous-time Markov chain with two Poisson process transition types, with rates specified by an $n\times n$ matrix $\mathbf{R}^0$ and an $n\times n\times n$ tensor $\mathbf{R}^1$: \begin{itemize}[noitemsep,nosep] \item[(\emph{i})] $\mathbf{R}^0$ specifies \emph{spontaneous conversion} rates.
Any node with state $a$ may change spontaneously into state $b$, with rate $R^0_{ab}$; \item[(\emph{ii})] $\mathbf{R}^1$ specifies \emph{nearest-neighbour-induced conversion} rates. Any node with state $a$ may change into state $b$ for each link to a node with state $c$, with rate $R^1_{abc}$. \end{itemize} This corresponds to the reaction rules \begin{equation}\label{eq:reactionrules} \vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{a}}}\overset{R^0_{ab}}{\to}\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{b}}},\qquad\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{a}}}b\overset{R^1_{abc}}{\rightarrow}\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{a}}}c. \end{equation} A simple example is \emph{susceptible-infected-susceptible} (SIS) \emph{epidemic spreading} on a square two-dimensional lattice of $\sqrt{N}\times\sqrt{N}$ nodes (say, periodic in both directions). For SIS spreading, each node may have one of two states, \emph{susceptible} or \emph{infected}, i.e. $X_i(t)\in\{\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}},\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}\}=\{1,2\}$ for all $i\in\{1,...,N\}$ and $t\in[0,\infty)$. $S$ nodes become infected at rate $\beta$ per infected neighbour and $I$ nodes recover spontaneously at rate $\gamma$. Hence, for SIS spreading, the matrix $\mathbf{R}^0$ has a single non-zero entry $R^0_{2,1}=\gamma>0$ (for spontaneous \emph{recovery}) and the tensor $\mathbf{R}^1$ has a single non-zero entry $R^1_{1,2,2}=\beta>0$ (for \emph{infection} along $IS$ links, denoted by the symbol $\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S$ below), corresponding to the reaction rules \begin{equation} \vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}\overset{\gamma}{\to}\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}},\qquad\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S\overset{\beta}{\rightarrow}\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}I.\label{eq:reactionSIS} \end{equation} \section{Network motifs and their counts} \label{sec:motifs} We define network motifs as (typically small) graphs with given state labels. The order of a motif is the number of nodes it has. E.g., in our example, $I$ nodes and $IS$ links are examples of first and second-order motifs. We use square brackets to denote the count of occurrences of motifs in $({\cal G},X)$, i.e. the number of occurrences of $I$ nodes is $[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}]$. Hence, we can write, respectively for $I$ nodes, $IS$ links and $ISI$ chains, \begin{align*} [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}]=&\sum_{\mathclap{i\in\{1,...,N\}}}\delta_2(X_i)\mbox{,}\quad [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S]=\sum_{\mathclap{i,j\neq i\in\{1,...,N\}}}\delta_1(\mathsf{A}_{ij})\delta_{2}(X_{i})\delta_{1}(X_{j})\mbox{,}\\[2pt] [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}SI]=&\sum_{\mathclap{i,j\neq i,k\neq i,j\in\{1,...,N\}}}\delta_1(\mathsf{A}_{ij})\delta_1(\mathsf{A}_{jk})\delta_0(\mathsf{A}_{ik})\delta_{2}(X_{i})\delta_{1}(X_{j})\delta_{2}(X_{k})\mbox{,} \end{align*} where we omitted the dependence on $t$. The Kronecker delta, $\delta_y(x)$, is $1$ if $x$ equals $y$ and $0$ otherwise. By construction, the motif counts on the left-hand side are random, since $X(t)$ is random.
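As a concrete illustration of these counts, the following brute-force sketch (illustrative Python, not part of our \texttt{Mathematica} package; the graph and state assignment are hypothetical toy data) evaluates the three sums above on a labelled $4$-cycle:
\begin{verbatim}
# Toy brute-force evaluation of the motif-count sums above.
# States: 1 = S (susceptible), 2 = I (infected).
import itertools
import numpy as np

# 4-cycle 0-1-2-3-0 with states S, I, S, I (hypothetical example data)
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])
X = np.array([1, 2, 1, 2])
N = len(X)

count_I = sum(X[i] == 2 for i in range(N))
count_IS = sum(A[i, j] == 1 and X[i] == 2 and X[j] == 1
               for i, j in itertools.permutations(range(N), 2))
count_ISI = sum(A[i, j] == 1 and A[j, k] == 1 and A[i, k] == 0
                and X[i] == 2 and X[j] == 1 and X[k] == 2
                for i, j, k in itertools.permutations(range(N), 3))
print(count_I, count_IS, count_ISI)   # 2 4 4 for this toy configuration
\end{verbatim}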
Generalising the above examples, a \emph{network motif} of \emph{order} $m$ is a network of $m$ nodes, each of which is labelled with a state. It is hence fully characterised by its connectivity pattern between nodes and its state labels on the nodes. The connectivity between motif nodes, i.e. the motif without labels, will be indicated by $\mathsf{a}$, which, depending on the context, denotes the adjacency matrix of the motif, a set $\mathsf{a}\in(\{1,...,m\}\times\{1,...,m\})^\mu$ of links between the $m$ nodes (the indices of the non-zero entries in the adjacency matrix of the motif, such that $\mu\leq m(m-1)/2$), or a graphical representation of the connectivity. For instance, two linked nodes are displayed according to these representations as ${\big(\begin{smallmatrix}0 & 1\\ 1 & 0 \end{smallmatrix}\big)}$, $\{(1,2)\}$, or $\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{a}}}bNL$, respectively. As we focus only on undirected networks, each pair in the pair representation is bidirectional, i.e. we write $\{(1,2)\}$ instead of $\{(1,2),(2,1)\}$. Motif labels will be indicated via a vector $\boldsymbol{x}=(x_{1},...,x_{m})\in\{1,...,n\}^m$ with state labels $x_{p}$ at positions $p=1,...,m$. Hence, the pair of $\boldsymbol{x}$ and $\mathsf{a}$ describes the motif, which we write as $\boldsymbol{x}^{\mathsf{a}}$. For example, $(2,1)^{(1,2)}=(\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}},\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}})^{(1,2)}$ is the $\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S$ link. For a general network motif $\boldsymbol{x}^\mathsf{a}$, the total count $[\boldsymbol{x}^{\mathsf{a}}]=[\boldsymbol{x}^{\mathsf{a}}](t)$ in the network ${\cal G}$ with fixed adjacency $\mathsf{A}$ and node labels $X=X(t)$ is \begin{align}\label{eq:motifcount:all} \left[\boldsymbol{x}^{\mathsf{a}}\right]=\sum_{\boldsymbol{i}\in S(m,N)}\delta_{\mathsf{a}}(\mathsf{A}_{\boldsymbol{i}})\delta_{\boldsymbol{x}}(X_{\boldsymbol{i}})\mbox{,} \end{align} where $S(m,N)$ is the set of all $m$-tuples from $\{1,...,N\}$ without repetition, which has size $|S(m,N)|=N!/(N-m)!$. For instance, $\sum_{\boldsymbol{i}\in S(3,N)}=\sum_{i\in\{1,...,N\}}\sum_{j\in\{1,...,N\}\setminus\{i\}}\sum_{k\in\{1,...,N\}\setminus \{i,j\}}$. For an index tuple $\boldsymbol{i}\in S(m,N)$ of length $m$ we use the convention that $\mathsf{A}_{\boldsymbol{i}}\in\{0,1\}^{m\times m}$ and $X_{\boldsymbol{i}}\in\{1,...,n\}^m$ are the restrictions of the matrix $\mathsf{A}$ and the vector $X$ to the indices in $\boldsymbol{i}$. We count exact matches between the motif and subgraphs of $({\cal G},X)$: a tuple of nodes is counted only if it induces the subgraph $\mathsf{a}$ in ${\cal G}$ and carries matching state labels. Counting via \eqref{eq:motifcount:all} leads to multiple counting of motifs with symmetries (more precisely, automorphisms -- see Appendix \ref{sec:Differential-and}), with multiplicity equal to the number of symmetries. For instance, $\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S$ is counted once, but $\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}S$ or $\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}I$ twice, and $\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}SStr$ six times.
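To make \eqref{eq:motifcount:all} and the multiplicity convention concrete, the following sketch (illustrative Python of brute-force complexity $O(N^m)$, not the package code; the labelled graph is hypothetical toy data) counts a motif from its adjacency matrix and label vector:
\begin{verbatim}
# Sketch of the general motif count: loop over ordered node tuples without
# repetition and require an exact match of both the induced adjacency
# pattern and the state labels.
import itertools
import numpy as np

def motif_count(A, X, a, x):
    """Ordered count of motif (labels x, adjacency a) in the labelled graph (A, X)."""
    m, N = len(x), len(X)
    total = 0
    for idx in itertools.permutations(range(N), m):
        sub_A = A[np.ix_(idx, idx)]          # induced adjacency on the node tuple
        sub_X = X[list(idx)]
        if np.array_equal(sub_A, a) and np.array_equal(sub_X, x):
            total += 1
    return total

# 4-cycle with states S, S, I, I (1 = S, 2 = I): one undirected SS link
A = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]])
X = np.array([1, 1, 2, 2])
link = np.array([[0, 1], [1, 0]])
print(motif_count(A, X, link, np.array([1, 1])))   # SS: 2, the symmetric link counts twice

# All-susceptible triangle: counted once per ordering, i.e. six times
A3 = np.ones((3, 3), dtype=int) - np.eye(3, dtype=int)
print(motif_count(A3, np.array([1, 1, 1]), A3, np.array([1, 1, 1])))   # 6
\end{verbatim}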
\section{Differential equations for motif counts} \label{sec:diff:inv} In our SIS spreading example, the expected rate of change for the count of infected nodes is well known to satisfy \begin{align} \frac{\mathrm{d}}{\mathrm{d}t}\langle[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}]\rangle & = -\gamma\langle[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}]\rangle+\beta\langle[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S]\rangle\mbox{,}\label{eq:ov:mf1} \end{align} where the $\langle\cdot\rangle$ brackets denote expectations over many independent realisations of the underlying Markov chain. Relation \eqref{eq:ov:mf1} is exact for finite network sizes $N$ and can be derived from the Kolmogorov forward (or master) equation of the Markov chain, see Appendix \ref{sec:Differential-and}. Note the structure of \eqref{eq:ov:mf1}: the expected rate of change of a motif count of size $m=1$ depends on counts of motifs of size $m=1$ (here $\langle[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}]\rangle$) through spontaneous conversions (here recovery), and on counts of motifs of size $m=2$ (here $\langle[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S]\rangle$) through nearest-neighbour-induced conversions (here infection). This holds in general: the count $[\boldsymbol{x}^\mathsf{a}](t)$ of a general motif of size $m$ satisfies an ordinary differential equation of the form \begin{equation}\label{eq:dinv} \frac{\mathrm{d}}{\mathrm{d}t}\langle\left[\boldsymbol{x}^{\mathsf{a}}\right]\rangle=F_{\boldsymbol{x}^\mathsf{a}}\left(\langle[\boldsymbol{y}_1^{\mathsf{b}_1}]\rangle,\langle[\boldsymbol{y}_2^{\mathsf{b}_2}]\rangle,...\right)\mbox{,} \end{equation} where $|\boldsymbol{y}_i|\in\{m,m{+}1\}$. On the right-hand side, the $[\boldsymbol{y}_i^{\mathsf{b}_i}](t)$ stand for the counts of motifs of size $m$ or $m{+}1$ on which the dynamics of $\bigl\langle\left[\boldsymbol{x}^{\mathsf{a}}\right]\bigr\rangle$ depend. Our package expresses the right-hand side of the differential equation \eqref{eq:dinv} for arbitrary motifs $\boldsymbol{x}^\mathsf{a}$ of size $m$ in the general form \begin{widetext} \begin{equation}\label{eq:dinvfull1} \begin{aligned} \frac{\mathrm{d}}{\mathrm{d}t}\langle\left[\boldsymbol{x}^{\mathsf{a}}\right]\rangle =& \sum_{p=1}^m\sum_{k=1}^n\sum_{c=1}^{n}\Biggl\{\left(\frac{R^0_{kx_{p}}}{n}+\kappa_{p,c}^{\boldsymbol{x}^{\mathsf{a}}}R^1_{kx_{p}c}\right)\bigl\langle\left[\boldsymbol{x}^{\mathsf{a}}_{p\shortto k}\right]\bigr\rangle- \left(\frac{R^0_{x_{p}k}}{n}+\kappa_{p,c}^{\boldsymbol{x}^{\mathsf{a}}}R^1_{x_pkc}\right)\langle\left[\boldsymbol{x}^{\mathsf{a}}\right]\rangle +\\ &\sum_{\boldsymbol{y}^\mathsf{b}\in{\cal N}^c_p\left(\boldsymbol{x}^{\mathsf{a}}_{p\shortto k}\right)} R^1_{kx_{p}c}\langle[\boldsymbol{y}^\mathsf{b}]\rangle- \sum_{\boldsymbol{y}^\mathsf{b}\in{\cal N}^c_p(\boldsymbol{x}^\mathsf{a})} R^1_{x_{p}kc}\langle[\boldsymbol{y}^\mathsf{b}]\rangle\Biggr\}. \end{aligned} \end{equation} \end{widetext} Here, $\boldsymbol{x}_{p\shortto k}$ is the state label vector obtained by setting the state label of the $p$th element of $\boldsymbol{x}$ to $k$, and $\kappa_{p,c}^{\boldsymbol{x}^{\mathsf{a}}}$ is the $c$-degree at position $p$ in motif $\boldsymbol{x}^{\mathsf{a}}$, i.e. the number of links that node $p$ of motif $\boldsymbol{x}^{\mathsf{a}}$ has to nodes with state label $c$.
The set ${\cal N}^c_p(\boldsymbol{x}^{\mathsf{a}})$ is defined as \begin{equation*}\label{overview:nbhdef} {\cal N}_{p}^c(\boldsymbol{x}^\mathsf{a}){:=}\bigcup_{\ell=1}^{m+1} \left\{\boldsymbol{y}^\mathsf{b}{:}|\boldsymbol{y}|{=}m{+}1, y_\ell{=}c, \boldsymbol{y}^\mathsf{b}_{\ell\shortto\emptyset}{=}\boldsymbol{x}^\mathsf{a},(\ell,p){\in}\mathsf{b} \right\}, \end{equation*} and contains all motifs $\boldsymbol{y}^\mathsf{b}$ of order $m{+}1$ that extend the state label vector $\boldsymbol{x}$ by one new node with state label $c$ and extend adjacency $\mathsf{a}$ by links to the new node, where $\boldsymbol{y}^\mathsf{b}_{\ell\shortto\emptyset}$ denotes the $m$th order connected motif obtained by deleting the $\ell$th node of $\boldsymbol{y}^\mathsf{b}$ and its links. The differential equation \eqref{eq:dinvfull1} shows how the expected count of $\boldsymbol{x}^{\mathsf{a}}$ is increased by transitions \emph{into} $\boldsymbol{x}^{\mathsf{a}}$ -- the positive terms -- and decreased by transitions \emph{out of} $\boldsymbol{x}^{\mathsf{a}}$ -- the negative terms. This happens through spontaneous conversions (terms with $R^0$), through nearest-neighbour interaction between nodes within the motif (first two terms with $R^1$), or through nearest-neighbour interaction with nodes outside the motif (last two terms with $R^1$). Note that equivalent motifs up to permutation (isomorphic motifs -- see Appendix \ref{sec:Differential-and}) result in the same equation, such that we choose the same representative node indexing for each equivalence class. \section{Conservation relations} \label{sec:alg:iv} Conservation relations are linear algebraic relations between motif counts. E.g., for SIS spreading on a square lattice with periodic boundary conditions, we have for first and second-order motifs: \begin{align*} [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}](t)+[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}](t)&=N\mbox{,}& [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}S](t)+[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S](t)+[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}I](t)&=4N\mbox{.} \end{align*} Such conservation of node and link counts occurs because our graph $\cal{G}$ is fixed. We can use the total counts on the right-hand side as normalising factors such that we may write \begin{align*} \llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}\rrbracket(t)+\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}\rrbracket(t)&=1\mbox{,}& \llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}S\rrbracket(t)+\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S\rrbracket(t)+\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}I\rrbracket(t)&=1\mbox{.} \end{align*} where $\llbracket\cdot\rrbracket$ is our notation for normalised motif counts. For networks with homogeneous degree, one can also write conservation equations of the type $\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}\rrbracket=\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S\rrbracket+\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}S\rrbracket$. 
In general, for each adjacency matrix $\mathsf{a}$ of possible motifs of size $m$ the conservation relation \begin{equation}\label{eq:cons} \sum_{\boldsymbol{x}: |\boldsymbol{x}|=m}\left[\boldsymbol{x}^{\mathsf{a}}\right](t)=\left[\mathsf{a}\right] \mbox{,} \;\;\mbox{where}\;\; [\mathsf{a}]=\sum_{\boldsymbol{i}\in S(m,N)}\delta_{\mathsf{a}}(\mathsf{A}_{\boldsymbol{i}})\mbox{,} \end{equation} holds. The overall count $[\mathsf{a}]$ of induced subgraphs $\mathsf{a}$ in the graph ${\cal G}$ is constant in time and is split up among all possible labellings. We can therefore use $\left[\mathsf{a}\right]$ as a normalisation factor for the (variable) counts of motifs, such that we may consider normalised motif counts \begin{equation}\label{eq:cons:norm} \llbracket\boldsymbol{x}^{\mathsf{a}}\rrbracket:=\left[\boldsymbol{x}^{\mathsf{a}}\right]/\left[\mathsf{a}\right]\mbox{, such that\ }\sum_{\boldsymbol{x}: |\boldsymbol{x}|=m}\llbracket\boldsymbol{x}^{\mathsf{a}}\rrbracket(t)=1 \end{equation} is the conservation relation for the normalised quantities. For networks with homogeneous degree there is the additional type of conservation relation \begin{equation}\label{eq:cons2} {\sum_{k=1}^n}\llbracket\boldsymbol{x}^{\mathsf{a}}_{p\shortto k}\rrbracket=\llbracket\boldsymbol{x}^\mathsf{a}_{p\shortto\emptyset}\rrbracket, \end{equation} for each stub $p$ of the motif $\boldsymbol{x}^\mathsf{a}$ (a stub is a node with degree $1$ in $\mathsf{a}$). The conservation relations \eqref{eq:cons:norm} and \eqref{eq:cons2} can be used to reduce the number of variables in the moment equations via substitution. This can result in a substantial reduction in the number of moment equations (see Table \ref{tab:Number-of-equations}). \section{Truncation and substitution}\label{sec:QE} When we use \eqref{eq:dinvfull1} to express the expected rates of change for the set of motifs up to a chosen maximum size $k$, we obtain a truncated hierarchy of moment equations. This linear system of differential equations has the form \begin{widetext} \begin{equation}\label{eq:xdotblock} \begin{bmatrix} \dot{\mathbf{x}}_1\\ \dot{\mathbf{x}}_2\\ \vdots\\ \dot{\mathbf{x}}_k\\ \end{bmatrix} = \begin{bmatrix} \mathbf{Q}_1 & \mathbf{Q}_{12} & \mathbf{0} & \cdots & \cdots & \mathbf{0}\\ \mathbf{0} & \mathbf{Q}_2 & \mathbf{Q}_{23} & \mathbf{0} & \cdots & \vdots\\ \vdots & \mathbf{0} & \ddots & \ddots & \mathbf{0} & \vdots\\ \vdots & \cdots & \mathbf{0} & \mathbf{Q}_{k-1} & \mathbf{Q}_{k-1,k} & \mathbf{0}\\ \mathbf{0} & \cdots & \cdots & \mathbf{0} & \mathbf{Q}_k & \mathbf{Q}_{k,k+1} \end{bmatrix} \cdot
\begin{bmatrix} \mathbf{x}_1\\ \mathbf{x}_2\\ \vdots\\ \mathbf{x}_k\\ \mathbf{x}_{k+1}\\ \end{bmatrix} \;=:\; \begin{blockarray}{cccccccc} && \mathbf{x}_1 & \mathbf{x}_2 & \cdots & \mathbf{x}_{k-1} & \mathbf{x}_k & \mathbf{x}_{k+1}\\ \begin{block}{cc[cccccc]} \dot{\mathbf{x}}_1 &\quad & \mathbf{Q}_1 & \mathbf{Q}_{12} & \mathbf{0} & \cdots & \cdots & \mathbf{0}\\ \dot{\mathbf{x}}_2 &\quad & \mathbf{0} & \mathbf{Q}_2 & \mathbf{Q}_{23} & \mathbf{0} & \cdots & \vdots\\ \vdots &\quad & \vdots & \mathbf{0} & \ddots & \ddots & \mathbf{0} & \vdots\\ \dot{\mathbf{x}}_{k-1} &\quad & \vdots & \cdots & \mathbf{0} & \mathbf{Q}_{k-1} & \mathbf{Q}_{k-1,k} & \mathbf{0}\\ \dot{\mathbf{x}}_k &\quad & \mathbf{0} & \cdots & \cdots & \mathbf{0} & \mathbf{Q}_k & \mathbf{Q}_{k,k+1}\\ \end{block} \end{blockarray}\;, \end{equation} \end{widetext} where we defined a more compact notation on the right. In \eqref{eq:xdotblock}, $\mathbf{x}_m$ are vectors with all dynamically relevant motif counts of size $m$, $\mathbf{Q}_m$ contains the coefficients of $\dot{\mathbf{x}}_m:=\mathrm{d}\mathbf{x}_m/\mathrm{d}t$ with respect to motifs of the same size, and $\mathbf{Q}_{m,m+1}$ those with respect to motifs of size $m{+}1$. This block structure arises because the change of size-$m$ motif counts depends only on motif counts of size $m$ and of size $m{+}1$. In Appendix \ref{sec:MFSIS3}, \eqref{eq:xdotblock} is shown for SIS spreading up to a maximum motif size of $k=3$. The substitution via conservation relations can be written as \begin{equation}\label{eq:subcons} \mathbf{x}=\mathbf{E}\cdot\tilde{\mathbf{x}}+\mathbf{c}, \end{equation} where $\mathbf{x}=(\mathbf{x}_1,...,\mathbf{x}_k)$ and $\tilde{\mathbf{x}}$ is $\mathbf{x}$ with the elements to be substituted omitted. $\mathbf{E}$ and $\mathbf{c}$ contain the coefficients of linear dependence from (\ref{eq:cons},~\ref{eq:cons2}), with their $i$th row/element corresponding to the identity transformation for motifs that are not substituted. Substituting this into \eqref{eq:xdotblock} results in the system of equations for the remaining motifs $\tilde{\mathbf{x}}$: \begin{equation}\label{eq:xdotsubcons1} \dot{\tilde{\mathbf{x}}}=\tilde{\mathbf{Q}}_{1\cdots k,1\cdots k}\cdot(\mathbf{E}\cdot\tilde{\mathbf{x}}+\mathbf{c})+\tilde{\mathbf{Q}}_{1\cdots k,k+1} \cdot \tilde{\mathbf{x}}_{k+1} \end{equation} (where the tilde indicates that the rows/elements to be substituted are omitted), such that we can write the system in the same form as \eqref{eq:xdotblock}, but now with an added constant vector: \begin{equation}\label{eq:xdotsubcons2} \dot{\tilde{\mathbf{x}}}=\mathbf{Q}' \cdot \begin{bmatrix} \tilde{\mathbf{x}}\\ \tilde{\mathbf{x}}_{k+1}\\ \end{bmatrix} +\tilde{\mathbf{c}}. \end{equation} For an example, see Section \ref{sec:Second-order}. \section{Closure scheme}\label{sec:Closure-scheme} Because in \eqref{eq:xdotsubcons2} the counts of the largest motifs, $\tilde{\mathbf{x}}_{k+1}$, appear on the right-hand side but not on the left-hand side, \eqref{eq:xdotsubcons2} is underdetermined.
A closure scheme provides a way of expressing the undetermined parts $\tilde{\mathbf{x}}_{k+1}$ in \eqref{eq:xdotsubcons2} through a nonlinear function $\tilde{\mathbf{x}}_{k+1}\approx \tilde{\mathbf{f}}(\tilde{\mathbf{x}})$, creating a closed system of ODEs: \begin{equation}\label{eq:xdotsubconscl} \dot{\tilde{\mathbf{x}}}=\mathbf{Q}' \cdot \begin{bmatrix} \tilde{\mathbf{x}}\\ \tilde{\mathbf{f}}(\tilde{\mathbf{x}})\\ \end{bmatrix} +\tilde{\mathbf{c}}, \end{equation} where $\tilde{\mathbf{f}}$ also depends on the counts of induced subgraphs of order up to $k{+}1$ ($[g_1^{k+1}]$ in Figure \ref{fig:overview}) if the motifs were not normalised in advance. In this section, we develop a closure scheme that decomposes $\tilde{\mathbf{x}}_{k+1}$ into smaller-sized components. Our final formula generalises the closures most commonly used hitherto, as given in e.g. \citet{House2009}. We will show that the decomposition is valid when (\emph{i}) the component motifs are conditionally independent given the node states in their intersections and the adjacency structure between them forms a tree, (\emph{ii}) the network is spatially homogeneous, and (\emph{iii}) the network is sufficiently large, such that the law of large numbers applies. We start with some introductory examples in Section \ref{sec:introex} and defer the detailed explanation to Sections \ref{sec:closuredefbg} and \ref{sec:motifdecomp}. Examples are given in Sections \ref{sec:clex} and \ref{sec:clexs}. \subsection{Introduction}\label{sec:introex} When truncating the moment hierarchy at a chosen order $k$, we approximate the order-$k{+}1$ motifs appearing in the equations for order-$k$ motifs in terms of lower-order motifs. E.g., looking at the moment equations for SIS spreading in Appendix \ref{sec:MFSIS3}, when truncating at $k{=}1$ we need an expression for $[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S]$ in terms of $[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}]$ and $[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}]$. When assuming statistical independence of neighbouring node states, the resulting expression is of the form $[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S]\propto [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}][\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}]$ (ignoring proportionality constants for now). Similarly, for truncation at $k{=}2$ we need expressions for all the 3-chains and triangles on the right-hand side of (\ref{eq:mf4},~\ref{eq:mf5}). For instance, the 3-chain $[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}SI]$ is typically decomposed as $[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}SI]\propto [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}S][\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S]/[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}]$. Using the shorthand $x_i$ for the event $X_i{=}x$, this is justified when there is a conditional independence relation of the form $P(S_i,S_j,I_k){=}P(S_i,S_j)P(S_j,I_k{\mid} S_j)$ and if the component probabilities are the same everywhere in the network, such that the node indices $i,j,k$ do not matter. This can be generalised to larger chains, such as
$[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}SII]\propto[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}SI][\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}II]/[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S]$, which similarly follows from the assumed conditional independence relation $P(S_i,S_j,I_k,I_l){=}P(S_i,S_j,I_k)P(S_j,I_k,I_l{\mid} S_j,I_k)$ and homogeneity in the network. It is possible to generalise the examples above to larger subgraphs by starting from the chain rule of probability, \begin{equation*} P(\boldsymbol{x})=P(\boldsymbol{x}_{\boldsymbol{i}_n}\mid \boldsymbol{x}_{\boldsymbol{i}_{n-1}},..., \boldsymbol{x}_{\boldsymbol{i}_2},\boldsymbol{x}_{\boldsymbol{i}_1})... P(\boldsymbol{x}_{\boldsymbol{i}_2}\mid \boldsymbol{x}_{\boldsymbol{i}_1})P(\boldsymbol{x}_{\boldsymbol{i}_1}), \end{equation*} where $\boldsymbol{x}$ is a vector of state labels on a given network motif, and subsequently simplifying with assumed conditional independence relations. For instance, for our second example above, $[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}SI]$, we have $P(S_i,S_j,I_k){=}P(I_k {\mid} S_i,S_j)P(S_j{\mid} S_i)P(S_i)$. If now the state of node $k$ is conditionally independent of that of node $i$ given node $j$, we substitute $P(I_k {\mid} S_i,S_j){=}P(I_k {\mid} S_j)$, such that we obtain the expression found above. In general, a simplification of the chain rule in terms of subgraphs is possible if we can order the chosen sets of subgraphs (with node indices $\boldsymbol{i}_1,...,\boldsymbol{i}_n$) without creating cycles and if the states of adjacent subgraphs are conditionally independent given their shared nodes. To make this precise, we need the concept of an independence map. An independence map is a graph in which the absence of a link between two nodes means that there is no direct dependence between them. For instance, if in the four-node graph shown in row 4, column 1 of Table \ref{tab:clex} there is statistical independence between node states that are more than two steps apart, its independence map is the graph shown in row 4, column 3 of Table \ref{tab:clex}. The condition mentioned above for simplification of the chain rule in terms of subgraphs reduces to the requirement that the independence map be chordal, because this allows a tree decomposition in terms of maximal cliques, as shown in column 4 of Table~\ref{tab:clex}. A graph is chordal if every cycle of length greater than three is cut short by a link between two non-consecutive nodes of the cycle. \begin{table*} \includesvg[inkscapelatex=false,width=2\columnwidth]{./figures/table_closures} \caption{Examples of our method to obtain closure formulas. Because the closures can be written independently of the state labels, the decomposition is shown for node-indexed graphs without reference to particular node states.}\label{tab:clex} \end{table*} In reality, we do not know the dependence structure between the graph nodes. However, the practical requirement of truncating the moment hierarchy obliges us to assume an independence map for each of the largest motifs. In order to obtain a consistent decomposition method, we will choose as independence map the graph obtained by connecting all nodes to the other nodes in their $d{-}1$ neighbourhood, where $d$ is the diameter of the motif. The chain rule then leads to a decomposition with the (maximal) cliques of the independence map in the numerator and the node sets that separate them in the denominator.
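As a small illustration of this chordality requirement (a sketch in Python using the \texttt{networkx} library, not our \texttt{Mathematica} implementation), one can construct the assumed independence map as a graph power, test it for chordality, and read off the maximal cliques that appear in the numerator of the closure:
\begin{verbatim}
# Sketch: assumed independence map (graph power at distance diameter-1),
# chordality test and maximal cliques, using networkx.
import networkx as nx

def independence_map(motif, d=None):
    # connect all nodes within distance d; default d = diameter - 1
    if d is None:
        d = nx.diameter(motif) - 1
    return nx.power(motif, d) if d >= 1 else nx.empty_graph(motif.nodes())

chain4 = nx.path_graph(4)    # 4-node chain, diameter 3 -> d = 2
square = nx.cycle_graph(4)   # 4-node cycle, diameter 2 -> d = 1

for name, motif in [("4-chain", chain4), ("square", square)]:
    imap = independence_map(motif)
    print(name, "chordal:", nx.is_chordal(imap),
          "maximal cliques:", sorted(map(sorted, nx.find_cliques(imap))))
# 4-chain: chordal, cliques [0,1,2] and [1,2,3]
#          -> closure  [x0 x1 x2][x1 x2 x3] / [x1 x2]
# square : its assumed independence map is the square itself, a chordless
#          4-cycle -> triangulate first or use the ad-hoc formula discussed later.
\end{verbatim}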
Table \ref{tab:clex} shows example graphs, with their diameter, their assumed independence maps, their tree decomposition and the resulting closure formula. The choice of dependence within a distance $d$ may in some cases result in non-chordal independence maps, such that the decomposition cannot be made. In the next sections, we will explain our method in detail and further show how one can treat motifs with a non-chordal independence map. \subsection{Definitions and background}\label{sec:closuredefbg} We will rely on the theory of decomposable Markov networks, following mostly the terminology of \citet[][Ch. 3]{Pearl1988}. We generalise the decomposition from a factorisation involving (1-)cliques to one involving $d$-cliques. The definitions in this section apply to a general graph ${\cal G}$ with node states $X$, but we will apply the decomposition to motifs in Section \ref{sec:motifdecomp}. \paragraph{Separation} Given a graph ${\cal G}({\cal V},{\cal E})$ and three disjoint subsets of nodes $\boldsymbol{i},\boldsymbol{j},\boldsymbol{k}\subset {\cal V}$, $\boldsymbol{k}$ \emph{separates} $\boldsymbol{i}$ and $\boldsymbol{j}$ in ${\cal G}$, written as $\boldsymbol{i}\indep_{\mkern-4mu{\cal G}}\;\boldsymbol{j}\mid \boldsymbol{k}$, if every path between $\boldsymbol{i}$ and $\boldsymbol{j}$ has at least one vertex in $\boldsymbol{k}$. Here, $\boldsymbol{k}$ is called a separator, or node cut set, of $\boldsymbol{i}$ and $\boldsymbol{j}$ in ${\cal G}$. \paragraph{Independence map} An independence map ${\cal M}$ is a graph that represents the independence between components of a set of random variables $X$, such that separation in ${\cal M}$ guarantees conditional independence between the corresponding subsets of $X$. More precisely, for three disjoint subsets of nodes $\boldsymbol{i},\boldsymbol{j},\boldsymbol{k}\subset {\cal V}$, $X$ possesses a spatial Markov property: \begin{equation}\label{eq:IMap} \boldsymbol{i}\indep_{\mkern-4mu{\cal M}} \;\boldsymbol{j}\mid \boldsymbol{k}\implies X_{\boldsymbol{i}}\indep X_{\boldsymbol{j}}\mid X_{\boldsymbol{k}}, \end{equation} where the $\indep$ notation on the right refers to independence of the random variables: $P(X_{\boldsymbol{i}}{=}x_{\boldsymbol{i}},X_{\boldsymbol{j}}{=}x_{\boldsymbol{j}}{\mid} X_{\boldsymbol{k}}{=}x_{\boldsymbol{k}})=P(X_{\boldsymbol{i}}{=}x_{\boldsymbol{i}}{\mid} X_{\boldsymbol{k}}{=}x_{\boldsymbol{k}})P(X_{\boldsymbol{j}}{=}x_{\boldsymbol{j}}{\mid} X_{\boldsymbol{k}}{=}x_{\boldsymbol{k}})$. The pair $(X,{\cal M})$ defines what is known as a \emph{Markov network}. \paragraph{Independence beyond distance $d$} Let ${\cal G}=({\cal V},{\cal E})$ be a graph whose nodes ${\cal V}$ have (random) states $X$. We define ${\cal G}^d$ as the graph in which two nodes of ${\cal G}$ are neighbours if they are at most a (shortest-path) distance $d$ away from each other, i.e. \begin{equation}\label{eq:Gk} {\cal G}^d{=}({\cal V},{\cal E}^d),\;\; \mathrm{with}\;\;{\cal E}^d{=}\{(i,j){\in}{\cal V}\times{\cal V}: i{\neq}j,\, \mathrm{dist}_{\cal G}(i,j){\leq} d\}\mbox{.} \end{equation} We then say that $({\cal G},X)$ has \emph{independence beyond distance $d$} if ${\cal G}^d$ is an independence map of $X$, i.e. if for all disjoint subsets of nodes $\boldsymbol{i},\boldsymbol{j},\boldsymbol{k}\subset {\cal V}$ it holds that \begin{equation}\label{eq:indepk} \boldsymbol{i}\indep_{\mkern-4mu{\cal G}^d} \;\boldsymbol{j}\mid \boldsymbol{k}\implies X_{\boldsymbol{i}}\indep X_{\boldsymbol{j}}\mid X_{\boldsymbol{k}}.
\end{equation} This means that the states of two non-neighbouring sets in ${\cal G}^d$, which by definition \eqref{eq:Gk} are further than $d$ steps apart in ${\cal G}$, are independent of each other given the state of their separator $\boldsymbol{k}$. \paragraph{Maximal $d$-cliques and $d$-clique graph} A maximal clique is a complete subgraph not contained in a larger complete subgraph \cite{Harary1973}. As a generalisation, \emph{maximal $d$-cliques} are maximal subgraphs with distance between any two nodes not greater than $d$ \cite{Mathematica}. Correspondingly, maximal cliques in ${\cal G}^d$ are maximal $d$-cliques in ${\cal G}$. The graph ${\cal C}^d$ is the \emph{$d$-clique graph} of ${\cal G}$ if each node in ${\cal C}^d$ corresponds to a maximal $d$-clique in ${\cal G}$, with links between nodes in ${\cal C}^d$ occurring when the corresponding $d$-cliques overlap. Hence, while nodes in ${\cal C}^d$ correspond to maximal $d$-cliques in ${\cal G}$, links in ${\cal C}^d$ correspond to intersections between overlapping maximal $d$-cliques in ${\cal G}$. \paragraph{Junction graph of $d$-cliques} A junction graph of ${\cal G}$'s maximal $d$-cliques, denoted further as ${\cal J}^d({\cal G})$, is a subgraph of the $d$-clique graph ${\cal C}^d$ obtained by removing redundant links from ${\cal C}^d$. Denoting the $d$-cliques corresponding to nodes $i,j$ in ${\cal C}^d$ as $c_i,c_j\subset{\cal V}$, a link between $i$ and $j$ in ${\cal C}^d$ is \emph{redundant} when there is an alternative path between $i$ and $j$ in ${\cal C}^d$ passing through a series of other nodes in ${\cal C}^d$ whose corresponding $d$-cliques all contain $c_i\cap c_j$. The junction graph ${\cal J}^d({\cal G})$ is then obtained by iteratively removing redundant links from ${\cal C}^d$ until there are no further redundant links. While the $d$-clique graph is unique, there may be several junction graphs ${\cal J}^d({\cal G})$ of $d$-cliques for one graph ${\cal G}$. Note that for chordal graphs (defined below), the junction graph equals what is known as a junction tree, which can also be obtained via the junction tree algorithm \cite{barber2012} applied to the $d$-clique graph. \paragraph{$d$-chordality} A graph ${\cal G}$ is chordal when for every cycle of length greater than $3$ there exists a link in ${\cal G}$ between two non-consecutive nodes of the cycle (thus providing a short-cut, called a \emph{chord}, to the cycle). As a generalisation, we will call a graph ${\cal G}$ $d$-chordal if ${\cal G}^d$ is chordal. If a graph ${\cal G}$ is $d$-chordal, then ${\cal J}^d({\cal G})$ is a tree, or equivalently, if ${\cal G}^d$ is chordal, then ${\cal J}({\cal G}^d)$ is a tree. Non-chordal graphs can always be converted to chordal graphs via \emph{triangulation}, i.e. by adding chords to every chordless cycle of length greater than $3$. Unless stated otherwise, we will write $\text{tr}({\cal G}^d)$ for a \emph{minimal triangulation} of a non-chordal ${\cal G}^d$, obtained by adding the smallest number of links that leads to chordality. \paragraph{Decomposability at distance $d$} If there is independence beyond distance $d$ \eqref{eq:indepk} and ${\cal G}$ is $d$-chordal, then the joint probability $P(X=\boldsymbol{x})$ of the network nodes of ${\cal G}$ being in a given state can be factorised over the $d$-cliques of ${\cal G}$. We will call this property of the graph ${\cal G}$ and its node states $X$ \emph{decomposability at distance $d$}.
We call ${\cal J}$ the set of $d$-cliques in ${\cal G}$, ordering its elements consistently with the resulting junction tree structure for ${\cal G}^d$ (such that ${\cal J}_1$ is the chosen root and parent nodes have lower indices than their children), and call $\mathrm{pa}({\cal J}_i)$ the parent node of ${\cal J}_i$. The factorisation is possible because the tree structure between $d$-cliques allows application of the chain rule of conditional probability: \begingroup \allowdisplaybreaks \begin{align}\label{eq:dec:chord} P(X{=}\boldsymbol{x})&=\prod_{i=1}^{|{\cal J}|}P(X_{{\cal J}_i}{=}\boldsymbol{x}_{{\cal J}_i}\mid X_{\mathrm{pa}({\cal J}_i)}{=}\boldsymbol{x}_{\mathrm{pa}({\cal J}_i)}),\nonumber\\ &=\prod_{i=1}^{|{\cal J}|}P(X_{{\cal J}_i}{=}\boldsymbol{x}_{{\cal J}_i}\mid X_{\mathrm{pa}({\cal J}_i)\cap {\cal J}_i}{=}\boldsymbol{x}_{\mathrm{pa}({\cal J}_i)\cap {\cal J}_i}),\nonumber\\ &=\frac{\prod_{i=1}^{|{\cal J}|}P(X_{{\cal J}_i}{=}\boldsymbol{x}_{{\cal J}_i})}{\prod_{i=2}^{|{\cal J}|}P(X_{\mathrm{pa}({\cal J}_i)\cap {\cal J}_i}{=}\boldsymbol{x}_{\mathrm{pa}({\cal J}_i)\cap {\cal J}_i})}. \end{align} \endgroup The steps in \eqref{eq:dec:chord} are explained as follows. As $d$-chordality makes ${\cal J}^d({\cal G})$ a tree and any $d$-clique separates its neighbours, one can recursively use the conditional independence of children given parents (line 1). We use the convention that $\mathrm{pa}({\cal J}_1)=\emptyset$, such that the first factor is $P(X_{{\cal J}_1}{=}\boldsymbol{x}_{{\cal J}_1})$. In line 2 we exploit the fact that any two $d$-cliques in ${\cal G}$ are also separated by their intersection, in order to condition on intersections instead.
\paragraph{Non-$d$-chordal graphs} There are two alternative ways to decompose a non-$d$-chordal ${\cal G}$: (\emph{i}) perform the decomposition \eqref{eq:dec:chord} on the (more conservative) independence map after triangulation, $\text{tr}({\cal G}^d)$. In this case, the factors in \eqref{eq:dec:chord} may still contain subgraphs of diameter $d{+}1$, but of smaller size than ${\cal G}$, such that one may have to apply \eqref{eq:dec:chord} recursively to achieve a smaller diameter for all factors. Furthermore, the resulting decomposition will depend on the choice of triangulation. Alternatively, (\emph{ii}), one can start from the non-tree ${\cal J}^d({\cal G})$ and use the ad-hoc formula (without prior triangulation) \begin{equation}\label{eq:dec:nchord} P(X{=}\boldsymbol{x})\approx\zeta\frac{\prod_{i=1}^{|{\cal J}|}P(X_{{\cal J}_i}{=}\boldsymbol{x}_{{\cal J}_i})}{\prod_{i<j}^{|{\cal J}|}P(X_{{\cal J}_i\cap {\cal J}_j}{=}\boldsymbol{x}_{{\cal J}_i\cap {\cal J}_j})}. \end{equation} Because the fraction in \eqref{eq:dec:nchord} does not result from an application of the chain rule as in \eqref{eq:dec:chord}, it is not a product of conditional probabilities, and hence it does not guarantee that each of the node states in $\boldsymbol{x}$ appears exactly one more time in the numerator than in the denominator; this in turn leads to inconsistency between closure formulas that assume different $d$. The factor $\zeta$ in \eqref{eq:dec:nchord} corrects for this inconsistency -- see Section \ref{sec:motifdecomp} for more detail. After applying (\emph{i}) to non-chordal graphs, the nodes in the resulting $d$-clique tree are not all maximal $d$-cliques of ${\cal G}$ any more \footnote{More precisely, they consist of the nodes of the maximal cliques of $\text{tr}({\cal G}^d)$ and the links of ${\cal G}$.}. When applying (\emph{ii}) to non-chordal graphs, the subgraphs in ${\cal J}^d({\cal G})$ are still maximal $d$-cliques of ${\cal G}$, but the clique graph is not a tree, thus violating the assumptions behind the decomposition \eqref{eq:dec:chord}. \paragraph{Maximum motif diameter} The decomposition explained in this section implies that, if independence beyond distance $d$ is valid in the whole network $({\cal G},X)$, we only need to consider motifs up to diameter $d$, justifying truncation of the moment hierarchy. Note, however, that truncation is usually done at a given size, not at a given diameter. We expand on this in point \ref{pt:exactness}. \subsection{Motif decomposition}\label{sec:motifdecomp} \paragraph{Decomposition of motifs at the individual level} We apply the decomposition \eqref{eq:dec:chord} to motifs with connectivity $\mathsf{a}$ and chordal independence map $\mathsf{a}^{d}$ embedded in the network.
For now, we ignore that this may in some cases break the independence assumption, but see point \ref{pt:exactness} for more detail on this issue. Considering a set $\boldsymbol{i}$ of nodes that have connectivity $\mathsf{a}$ in our network ${\cal G}$ and taking $P(X_{\boldsymbol{i}}{=}\boldsymbol{x})$ as the probability that these are in states with labels $\boldsymbol{x}$, we can write the decomposition \eqref{eq:dec:chord} for $P(X_{\boldsymbol{i}}{=}\boldsymbol{x})=\langle\left[\boldsymbol{x}_{\boldsymbol{i}}^\mathsf{a}\right]\rangle$ to obtain \begin{align}\label{eq:dec:moti:chord} \langle\left[\boldsymbol{x}_{\boldsymbol{i}}^\mathsf{a}\right]\rangle=\frac{\prod_{j=1}^{|{\cal J}|}\langle[\boldsymbol{x}_{\boldsymbol{i}_{{\cal J}_j}}]\rangle}{\prod_{j=2}^{|{\cal J}|}\langle[\boldsymbol{x}_{\boldsymbol{i}_{\mathrm{pa}({\cal J}_j)\cap {\cal J}_j}}]\rangle}, \end{align} where now ${\cal J}$ is the set of $d$-cliques in $\mathsf{a}$. Choosing $d=\text{diam}(\mathsf{a})-1$ ensures that the decomposition results in component motifs whose diameter is decreased by one compared to the decomposed motif. For motifs with non-chordal $\mathsf{a}^{d}$ one can, as noted above, either triangulate $\mathsf{a}^{d}$ first or use the \emph{ad-hoc} approximation \eqref{eq:dec:nchord}. We relied on the package \texttt{Chordal Graph} \cite{Bulatov2011} for the triangulation of non-chordal $\mathsf{a}^{d}$. The ad-hoc formula \eqref{eq:dec:nchord} applied to the motif at $\boldsymbol{i}$ is \begin{equation}\label{eq:dec:moti:nchord} \langle[\boldsymbol{x}_{\boldsymbol{i}}^\mathsf{a}]\rangle\approx\frac{\prod_{j=1}^{|{\cal J}|}\langle[\boldsymbol{x}_{\boldsymbol{i}_{{\cal J}_j}}]\rangle}{\prod_{j<k}^{|{\cal J}|}\langle[\boldsymbol{x}_{\boldsymbol{i}_{{\cal J}_j\cap {\cal J}_k}}]\rangle}\prod_{j\in\boldsymbol{i}}\langle[x_j]\rangle^{\textstyle\gamma_j}. \end{equation} The consistency correction (written as $\zeta$ in \eqref{eq:dec:nchord}) here equals $\prod_{j\in\boldsymbol{i}}\langle\left[x_j\right]\rangle^{\gamma_j}$ and ensures that the ad-hoc extension of closures to motifs with non-chordal $\mathsf{a}^{d}$ does not result in inconsistency with MF$1$ \cite[condition 1 of][Ch. 21]{Dieckmann2000} under independence between node states: when all motifs of order greater than one are replaced by products of order-one motifs, i.e. $\langle\left[\boldsymbol{x}_{\boldsymbol{i}_\cdot}^\mathsf{a}\right]\rangle\to\prod_{j\in\boldsymbol{i}_\cdot}\langle\left[x_j\right]\rangle$, the right-hand side of \eqref{eq:dec:moti:nchord} should reduce to MF$1$. Therefore, for each $j$, $\gamma_j$ is chosen such that this is fulfilled. These ad-hoc steps usually result in a violation of the conservation relations of Section \ref{sec:alg:iv} \cite{Dieckmann2000}. In the approximations used in Section~\ref{sec:SIS}, the bias introduced by this violation is small. For mitigation of this problem, see \cite{Dieckmann2000,Peyrard2008}.
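The exponents $\gamma_j$ follow from simple bookkeeping: under full independence each node label must end up with total exponent one, so $\gamma_j$ equals one, minus the number of numerator cliques containing $j$, plus the number of denominator intersections containing $j$. The following sketch (illustrative Python, not the package code) applies this rule to the triangle (Kirkwood) and four-node star closures discussed in Section \ref{sec:clex}:
\begin{verbatim}
# Sketch: consistency exponents gamma_j for the ad-hoc closure.  Under full
# independence each node label must appear with total exponent 1 (MF1
# consistency), so gamma_j = 1 - #(cliques containing j) + #(cuts containing j).
from itertools import combinations

def consistency_exponents(cliques):
    # cliques: list of sets of node indices (the maximal d-cliques J_j)
    cuts = [a & b for a, b in combinations(cliques, 2) if a & b]
    nodes = set().union(*cliques)
    return {j: 1 - sum(j in c for c in cliques) + sum(j in s for s in cuts)
            for j in sorted(nodes)}

# Triangle closed with its three links (Kirkwood closure): no correction.
print(consistency_exponents([{0, 1}, {1, 2}, {2, 0}]))
# -> {0: 0, 1: 0, 2: 0}

# Four-node star (centre 1) closed with its three non-maximal 3-cliques:
# one extra factor of the centre-node count, as in the star example later on.
print(consistency_exponents([{0, 1, 2}, {0, 1, 3}, {1, 2, 3}]))
# -> {0: 0, 1: 1, 2: 0, 3: 0}
\end{verbatim}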
\paragraph{Decomposition of motifs at the population level} If we make the following \emph{spatial homogeneity} assumption for all motifs $\boldsymbol{x}^\mathsf{a}$ of size $m$ up to our maximal considered size $k$, \begin{equation}\label{eq:spatialhom} \forall \boldsymbol{i},\boldsymbol{i'}\in {\cal I}(\mathsf{a}): \langle\left[\boldsymbol{x}_{\boldsymbol{i}}^{\mathsf{a}}\right]\rangle=\langle\left[\boldsymbol{x}_{\boldsymbol{i'}}^{\mathsf{a}}\right]\rangle=\frac{\langle\left[\boldsymbol{x}^{\mathsf{a}}\right]\rangle}{\left[\mathsf{a}\right]}=\langle\llbracket\boldsymbol{x}^{\mathsf{a}}\rrbracket\rangle, \end{equation} where ${\cal I}(\mathsf{a}):=\{\boldsymbol{j}\in S(m,N):\mathsf{A}_{\boldsymbol{j}}=\mathsf{a}\}$, then (\ref{eq:dec:moti:chord},~\ref{eq:dec:moti:nchord}) are independent of $\boldsymbol{i}$, such that we can write \eqref{eq:dec:moti:chord} as \begin{align}\label{eq:dec:mot:chord} \langle\llbracket\boldsymbol{x}^\mathsf{a}\rrbracket\rangle\approx\frac{\prod_{j=1}^{|{\cal J}|}\langle\llbracket\boldsymbol{x}_{{\cal J}_j}\rrbracket\rangle}{\prod_{j=2}^{|{\cal J}|}\langle\llbracket\boldsymbol{x}_{\mathrm{pa}({\cal J}_j)\cap {\cal J}_j}\rrbracket\rangle}, \end{align} and \eqref{eq:dec:moti:nchord} as \begin{equation}\label{eq:dec:mot:nchord} \langle\llbracket\boldsymbol{x}^\mathsf{a}\rrbracket\rangle\approx\frac{\prod_{j=1}^{|{\cal J}|}\langle\llbracket\boldsymbol{x}_{{\cal J}_j}\rrbracket\rangle}{\prod_{j<k}^{|{\cal J}|}\langle\llbracket\boldsymbol{x}_{{\cal J}_j\cap {\cal J}_k}\rrbracket\rangle}\prod_{p=1}^{m}\langle\llbracket x_p\rrbracket\rangle^{\textstyle\gamma_p}, \end{equation} which may be used to close the population-level equations \eqref{eq:xdotblock}. In (\ref{eq:dec:mot:chord},~\ref{eq:dec:mot:nchord}), the node indexing of a motif is consistent with the node labels $\boldsymbol{x}$. Recall that we use a single consistent indexing for isomorphic motifs. The decomposition (\ref{eq:dec:mot:chord},~\ref{eq:dec:mot:nchord}) is not unique. If the independence assumptions are satisfied, each of the alternative ways to decompose a motif $\boldsymbol{x}^\mathsf{a}$ results in the same value. As we do not expect the independence assumptions to hold perfectly, we take the average over the alternative decompositions of $\boldsymbol{x}^\mathsf{a}$ if several exist. \paragraph{Normalisation} We showed the closure formulas for normalised motifs, i.e. we first normalised the counts of motifs via \eqref{eq:cons:norm} and then applied the closure. Hence, in this case, the counts of induced subgraphs in the network enter into the system of equations as normalisation factors in the unclosed system. One can also decide not to normalise (or to do so after applying the closure). In this latter case, the subgraph counts enter into the final system of equations when applying the closure, as the closure formulas for the non-normalised motif counts contain them (to see this, substitute each motif count in (\ref{eq:dec:mot:chord},~\ref{eq:dec:mot:nchord}) as $\llbracket\boldsymbol{y}^{\mathsf{b}}\rrbracket\to[\boldsymbol{y}^{\mathsf{b}}]/[\mathsf{b}]$). Hence, structural information specific to the considered network enters the mean-field equations either when normalising the motif counts or when applying the closure. In simple cases, such as lattices or random graphs, the subgraph counts can be found by hand without much effort. In other cases, one can resort to subgraph counting algorithms -- we used \texttt{IGraph} \cite{Horvat2020} for \texttt{Mathematica} \cite{Mathematica}.
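For illustration, the following sketch (Python with \texttt{networkx}, not our \texttt{Mathematica}/\texttt{IGraph} workflow) computes the subgraph counts needed to normalise node, link and open-triple motifs on a small periodic square lattice; the numbers reproduce the closed-form factors $\kappa N$ and $\kappa(\kappa-1)N$ used in Section \ref{sec:clex}:
\begin{verbatim}
# Sketch: normalisation factors [a] (counts of unlabelled induced subgraphs)
# for a small periodic square lattice, using networkx.
import networkx as nx

G = nx.grid_2d_graph(10, 10, periodic=True)   # 10 x 10 torus, degree kappa = 4
N = G.number_of_nodes()
deg = [d for _, d in G.degree()]

pairs = sum(deg)                                    # ordered linked pairs = kappa*N
triangles = sum(nx.triangles(G).values()) * 2       # ordered triangles (6 per triangle)
chains = sum(k * (k - 1) for k in deg) - triangles  # ordered open 3-chains

print(N, pairs, chains, triangles)   # 100 400 1200 0, i.e. N, kappa*N, kappa*(kappa-1)*N, 0
\end{verbatim}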
\paragraph{Law of large numbers} We will use the population-level closure to study the steady states of a single realisation of a given network. Motif counts are then taken to be the total counts in a single network, instead of their expectations over many realisations. As the closure formulas apply to expectations, we make the additional assumption that the motif counts are close to their expectations. \paragraph{Additional bias}\label{pt:exactness} Decomposing the whole network ${\cal G}$ at distance $d$ is exact when ${\cal G}$ is $d$-chordal and there is independence beyond distance $d$ (Section \ref{sec:closuredefbg}). Applying the decomposition to motifs embedded in ${\cal G}$ instead of to ${\cal G}$ itself can be done without additional bias when $\mathsf{a}$ is a distance-hereditary subgraph of ${\cal G}$ (i.e. distances between nodes in $\mathsf{a}$ are equal to those between the corresponding nodes in ${\cal G}$) and the conditional independence relations implied by $\mathsf{a}$ are also valid in ${\cal G}$. As a counterexample for the former, take for ${\cal G}$ the six-node graph $\vcenter{\hbox{\includesvg[inkscapelatex=false,height=1.71em]{lat_abcdef}}}$ and for $\mathsf{a}$ its induced subgraph consisting of nodes $\{1,2,4,6,5\}$. Here, $\mathsf{a}$ is not distance-hereditary because $\mathrm{dist}_\mathsf{a}(1,5){=}4\neq\mathrm{dist}_{\cal G}(1,5){=}2$. As a counterexample for the latter, take for ${\cal G}$ the square $\vcenter{\hbox{\includesvg[inkscapelatex=false,height=1.71em]{lat_abcd}}}$ and for $\mathsf{a}$ the 3-node chain $\{1,2,4\}$. In this case, $\mathsf{a}$ is distance-hereditary, but the independence relation $\{1\}\indep_{\mkern-5mu\mathsf{a}}\;\{4\}\mid\{2\}$, implied within $\mathsf{a}$ when assuming independence beyond distance $d{=}1$, is not valid in ${\cal G}$, where $\{1\}\not\indep_{\mkern-5mu{\cal G}}\;\{4\}\mid\{2\}$ because the node cut set of $\{1\}$ and $\{4\}$ in ${\cal G}$ is $\{2,3\}$. This occurs because the decomposed motifs are non-maximal $d$-cliques. We therefore expect this to introduce a bias in the closed mean-field equation hierarchy (\eqref{eq:xdotblock} with \eqref{eq:dec:mot:chord} or \eqref{eq:dec:mot:nchord} at the population level under spatial homogeneity). This bias can be avoided by expressing the equations in terms of maximal $p$-cliques for $p\in\{0,...,k{+}1\}$ and truncating at a given diameter instead of at a given size. \subsection{Examples}\label{sec:clex} Appendix \ref{sec:clexs} shows $13$ application examples of (\ref{eq:dec:mot:chord},~\ref{eq:dec:mot:nchord}) in table form. As the closures can be written independently of the particular labels, they are shown for subgraphs only, with each node tagged with its index. We have also dropped the $\langle\cdot\rangle$, assuming that the law of large numbers applies, such that the counts approach their expectations almost surely for increasing network size $N$. The examples can be understood by reading the table from left to right. Below, we derive the normalisation factors and the non-normalised closures of examples 1--3 of Appendix \ref{sec:clexs} for different network types. Note that, unlike in Appendix \ref{sec:clexs}, we use letter labels below, for consistency with the main text and the literature. \begin{enumerate}[wide, labelwidth=!]
\item $\boldsymbol{x}^{\mathsf{a}}=\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{a}}}bc$:\enspace This diameter-$2$ motif has a chordal independence map equal to $\mathsf{a}$ and decomposes with \eqref{eq:dec:mot:chord} as \begin{equation} \llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{a}}}bc\rrbracket\approx\frac{\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{a}}}b\rrbracket\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{b}}}c\rrbracket}{\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{b}}}\rrbracket},\label{eq:mclosabcn} \end{equation} assuming conditional independence beyond distance $d{=}1$. Via normalisation \eqref{eq:cons:norm} we also obtain the closure for the non-normalised counts: \begin{equation} \left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{a}}}bc\right]\approx\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{a}}}bcNL\right]\frac{\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{a}}}NL\right]}{\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{a}}}bNL\right]^2}\frac{\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{a}}}b\right]\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{b}}}c\right]}{\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{b}}}\right]}.\label{eq:mclosabc} \end{equation} The counts of the induced subgraphs of sizes $2$ and $1$, required for normalisation, are \begin{align} \left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{a}}}bNL\right]=\kappa N,\quad\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{a}}}NL\right]=N,\label{eq:munl123} \end{align} and the total number of triples ($3$-node motifs) in the network is \begin{equation} \left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{a}}}bcNL\right]+\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=1.33em]{abctr_nolab}}}\right]=\sum_i^N \kappa_i(\kappa_i-1),\label{eq:triples} \end{equation} with $\kappa_i$ the number of neighbours of node $i$ and $\kappa$ the mean number of neighbours over the whole network. For particular network types, \eqref{eq:triples} can be simplified. Below are two examples. \begin{enumerate} \item For a network with fixed degree and without triangles (e.g. a square lattice), we have $\forall i:\kappa_i=\kappa$ and $\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=1.33em]{abctr_nolab}}}\right]=0$, such that \begin{equation} \left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{a}}}bcNL\right]=\kappa(\kappa-1)N.\label{eq:motUNL3hom} \end{equation} Using \eqref{eq:mclosabc} and \eqref{eq:motUNL3hom}, we obtain \begin{equation} \left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{a}}}bc\right]\approx\frac{\kappa-1}{\kappa}\frac{\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{a}}}b\right]\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{b}}}c\right]}{\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{b}}}\right]}.\label{eq:mclosabchom} \end{equation} An early use of this closure for networks can be found in \citet{Keeling1997}.
\item In a large Erd\H{o}s-R{\'e}nyi random network, we have $\kappa_i\sim \text{Pois}(\kappa)$ and $[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=1.33em]{abctr_nolab}}}]/([\vcenter{\hbox{\includesvg[inkscapelatex=false,height=1.33em]{abctr_nolab}}}]+[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{a}}}bcNL])\approx0$ \footnote{For an ER random network, the expected numbers of $m$-node cycles and $m$-node chains (counting all ordered $m$-tuples without repetition) are $L(N)=\frac{N!}{(N-m)!}(\frac{\kappa}{N-1})^{m}$ and $C(N)=\frac{N!}{(N-m)!}(1-\frac{\kappa}{N-1})(\frac{\kappa}{N-1})^{m-1}$, where $\frac{\kappa}{N-1}$ is the probability of having a link between two given nodes. Hence, $\lim_{N\to\infty}L(N)=\kappa^m=O(1)$, while $C(N)\sim\kappa^{m-1}N=O(N)$ for $N\to\infty$. As $L(N)$ remains finite and $C(N)$ grows with $N$, cycles can be ignored in the limit of large $N$. In the case $m=3$ (and $N$ large), we can therefore safely assume that all triples are chains.}. Hence \begin{align} \left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{a}}}bcNL\right]=&\sum_i^N\kappa_i^2-\sum_i^N\kappa_i,\nonumber\\ =&N\left[\mathrm{E}(\kappa_i^2)-\mathrm{E}(\kappa_i)\right],\nonumber\\ =&N\left[\mathrm{Var}(\kappa_i)+\mathrm{E}(\kappa_i)^2-\mathrm{E}(\kappa_i)\right],\nonumber\\ =&\kappa^2N,\label{eq:motUNL3ER} \end{align} where replacing the average by the expectation on the second line requires $N\to\infty$ (law of large numbers), on the third line we used $\mathrm{Var}(\cdot):=\mathrm{E}((\cdot)^2)-(\mathrm{E}(\cdot))^2$, and on the fourth line we used that, for $\kappa_i\sim \text{Pois}(\kappa)$, we have $\mathrm{E}(\kappa_i)=\mathrm{Var}(\kappa_i)=\kappa$. We could also have obtained this result directly from the large-$N$ limit of chains \cite{Note1}. Using \eqref{eq:mclosabc} and \eqref{eq:motUNL3ER}, we obtain \begin{equation} \left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{a}}}bc\right]\approx\frac{\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{a}}}b\right]\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{b}}}c\right]}{\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{b}}}\right]}.\label{eq:mclosabcER} \end{equation} This closure was, to the best of our knowledge, first used for networks in \citet{Gross2006}. \end{enumerate} \item $\boldsymbol{x}^{\mathsf{a}}=\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{a}}}bctr$:\enspace This diameter-$1$ motif can be decomposed into its three $0$-cliques as \begin{equation} \llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{a}}}bctr\rrbracket\approx\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{a}}}\rrbracket\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{b}}}\rrbracket\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{c}}}\rrbracket, \end{equation} when assuming independence of nodes ($d{=}0$).
With non-normalised counts, this becomes \begin{equation}\label{eq:MFtria} \left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{a}}}bctr\right]\approx\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=1.33em]{abctr_nolab}}}\right]\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{a}}}\right]\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{b}}}\right]\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{c}}}\right]/\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{a}}}NL\right]^3. \end{equation} Alternatively, one can extend the usage of the ad-hoc formula \eqref{eq:dec:mot:nchord} to include non-maximal cliques: using its three $1$-cliques in \eqref{eq:dec:mot:nchord}, we obtain \begin{equation} \llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{a}}}bctr\rrbracket\approx\frac{\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{a}}}b\rrbracket\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{b}}}c\rrbracket\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{c}}}a\rrbracket}{\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{a}}}\rrbracket\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{b}}}\rrbracket\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{c}}}\rrbracket},\label{eq:PAtriaN} \end{equation} which is known as the Kirkwood closure for triangles \cite{Sharkey2008}. Using \eqref{eq:cons:norm}, we obtain for the closure with non-normalised counts \begin{equation} \left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{a}}}bctr\right]\approx\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=1.33em]{abctr_nolab}}}\right]\frac{\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{a}}}NL\right]^3}{\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{a}}}bNL\right]^3}\frac{\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{a}}}b\right]\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{b}}}c\right]\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{c}}}a\right]}{\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{a}}}\right]\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{b}}}\right]\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{c}}}\right]}.\label{eq:PAtria} \end{equation} The frequency $\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=1.33em]{abctr_nolab}}}\right]$ depends on the network type.
For instance, if we use the definition of the clustering coefficient $\phi:=\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=1.33em]{abctr_nolab}}}\right]/(\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=1.33em]{abctr_nolab}}}\right]+\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{a}}}bcNL\right])$ \cite{Keeling1999}, we have [via \eqref{eq:munl123} and \eqref{eq:triples}] for a network with fixed degree $\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=1.33em]{abctr_nolab}}}\right]=\phi\kappa(\kappa-1)N$, such that \[ \left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{a}}}bctr\right]\approx\frac{\kappa-1}{\kappa^2}\phi N\frac{\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{a}}}b\right]\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{b}}}c\right]\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{c}}}a\right]}{\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{a}}}\right]\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{b}}}\right]\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{c}}}\right]}, \] which was first used for networks by \citet{Keeling1999}. \item $\boldsymbol{x}^{\mathsf{a}}=\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{a}}}bcdtre$:\enspace This diameter-$2$ motif has chordal independence map $\mathsf{a}$ and decomposes with \eqref{eq:dec:mot:chord} as \begin{equation}\label{eq:clstar1} \llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{a}}}bcdtre\rrbracket\approx\frac{\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{a}}}b\rrbracket\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{b}}}c\rrbracket\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{b}}}d\rrbracket}{\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{b}}}\rrbracket^2}, \end{equation} when assuming independence beyond distance $d{=}1$. Alternatively, extending the ad-hoc formula \eqref{eq:dec:mot:nchord} to the three non-maximal $3$-cliques, we obtain \begin{equation} \llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{a}}}bcdtre\rrbracket\approx\frac{\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{a}}}bc\rrbracket\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{a}}}bd\rrbracket\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{c}}}bd\rrbracket\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{b}}}\rrbracket}{\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{a}}}b\rrbracket\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{b}}}c\rrbracket\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{b}}}d\rrbracket},\label{eq:clstar2} \end{equation} where a consistency correction $\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{b}}}\rrbracket$ was required. The non-normalised form of this closure was first used in \citet{House2009}. \end{enumerate} \section{Application to SIS epidemic spreading}\label{sec:SIS} We apply our method to SIS spreading, which is a continuous-time discrete-state Markov chain description of epidemic spreading through a population of susceptibles \cite[e.g.][]{Kiss2017}. As introduced in Section~\ref{sec:ds-ct-mc}, we have $n=2$ species. 
The $2\times2$ matrix $\mathbf{R}^0$ of spontaneous conversion rates and the $2\times2\times2$ tensor $\mathbf{R}^1$ of conversion rates due to nearest-neighbour interaction for SIS spreading have only two positive entries, $R^0_{2,1}{=}\gamma$, $R^1_{1,2,2}{=}\beta$, corresponding to reaction scheme \eqref{eq:reactionSIS}. Hence, contagion of susceptibles $S$ occurs over $IS$ links at rate $\beta$, whereas recovery occurs spontaneously at rate $\gamma$. In the study of phase transitions and interacting particle systems, SIS epidemic spreading is known as the contact process \cite{Harris1974}, which is typically studied on $\mathsf{d}$-dimensional lattices. In this context, it was found to belong to the directed percolation universality class, of which scaling properties have been widely studied \cite{Marro1999,Henkel2008,Tome2015}. We run the simulations with a Gillespie algorithm \cite{Gillespie2007} and stabilise them via feedback control \cite{SGNWK08,schilder2015experimental,barton2017control}, such that steady states can be obtained in a more efficient manner than when running regular simulations (see Appendix \ref{sec:feedback}). \subsection{Mean-field equations}\label{sec:MFSIS} We derived the mean-field models up to fifth order for the square lattice and up to second order for other networks, including cubic/hypercubic lattices, random regular networks and Erd\H{o}s-R{\'e}nyi random networks. In the main text, we only show a step-by-step derivation of the first and second-order mean-field models because they allow demonstration of our method in the simplest form. Recall that we write in the text $\langle\left[\cdot\right]\rangle$ as $\left[\cdot\right]$, assuming the LLN holds (in the \texttt{Mathematica} file for MF4 in \ref{sec:MFhoeqs}, we use the notation $\langle\cdot\rangle$ instead). To gain insight in the strength of dependence between neighbouring nodes in simulations and higher-order mean-field models, we will observe the correlation between neighbouring node states $a$ and $b$ as in \citet{Keeling1999}, defined by \begin{equation}\label{eq:corrs0} C_{ab}=\frac{N}{\kappa}\frac{\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{a}}}b\right]}{\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{a}}}\right]\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{b}}}\right]}, \end{equation} (where $\kappa$ is the mean degree) or when motif counts are normalised, \begin{equation}\label{eq:corrs} C_{ab}=\frac{\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{a}}}b\rrbracket}{\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{a}}}\rrbracket\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{b}}}\rrbracket}. \end{equation} They are uncentered correlations between species types that are separated by one link. Values greater than 1 indicate clustering and values less than 1 avoidance (compared to a uniform random distribution). For a generalisation of this correlation to arbitrary distances between end nodes, see Appendix \ref{sec:dcorrs}. \subsubsection{MF1 \label{sec:First-order}} The first-order mean field originates from the molecular field approximation in statistical physics \cite{weiss1907,bragg1934} and is now commonly known as the `mean field model' \cite{Marro1999,Henkel2008,Tome2015,porter2016,Kiss2017,newman2018}. MF1 only considers node states ($k=1$) and neglects correlations beyond distance 0. 
It provides a picture of the dynamics when species are well mixed throughout a large domain. One way to achieve this is when the domain is a complete network on which susceptibles and infecteds have contact rate $\kappa\beta/N$, and when $N\to\infty$ \cite{Kiss2017}. There are two motif types of size 1: $\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}$ and $\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}$. Of these, only $\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}$ is dynamically relevant. This means that we only need the equation: \[ \frac{\mathrm{d}}{\mathrm{d}t}[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}]=\beta[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S]-\gamma[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}]. \] To close the system at order 1, $\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S\right]$ needs to be expressed in terms of $\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}\right]$ and $\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}\right]$. No correlation beyond distance 0 corresponds to (using \eqref{eq:dec:mot:chord}): \begin{equation}\label{eq:clos1sis} \frac{\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S\right]}{\kappa N}=\frac{[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}]}{N}\frac{[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}]}{N}\implies\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S\right]=\frac{\kappa}{N}[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}](N-\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}\right]), \end{equation} where we have also used the conservation relation $\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}\right]+\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}\right]=N$ to substitute $\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}\right]=N-\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}\right]$. The final expression for the first-order mean field is \[ \frac{\mathrm{d}}{\mathrm{d}t}[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}]=\beta\frac{\kappa}{N}[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}](N-\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}\right])-\gamma[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}], \] which, after normalisation [via \eqref{eq:cons:norm}], yields \begin{align} \label{eq:mf1:sis} \frac{\mathrm{d}}{\mathrm{d}t}\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}\rrbracket=\beta\kappa\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}\rrbracket(1-\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}\rrbracket)-\gamma\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}\rrbracket. \end{align} The steady state solutions are then \[ \llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}\rrbracket_{1}^{*}=0,\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}\rrbracket_{2}^{*}=1-\frac{\gamma}{\kappa\beta}. 
\] At $\beta/\gamma=\kappa^{-1}$, the solution $\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}\rrbracket_{1}^{*}$ becomes unstable due to a transcritical bifurcation, also known as the epidemic threshold in epidemiology. \subsubsection{MF2\label{sec:Second-order}} The second-order mean field originates from the Bethe approximation in statistical physics \cite{bethe1935} and is now commonly known as the `pair approximation' \cite{porter2016,Kiss2017,newman2018,Matsuda1992,Keeling1997,Rand1999,Dieckmann2000,Kefi2007a,Gross2006}. MF2 neglects dependence beyond distance 1 and is obtained by considering all dynamically relevant motifs up to size 2. Noting that motifs without infecteds are dynamically irrelevant and omitting zero blocks, we obtain \begin{equation*}\label{eq:SISmf2} \begin{blockarray}{ccccccccc} && [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}] & [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S] & [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}I] & [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}SI] & [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}SI] & [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}SItr] & [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}IItr]\\ \begin{block}{cc[c|cc|cccc]}\relax \mathrm{dist}ot{[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}]} & & -\gamma & \beta & 0 & & & & \\\cline{3-9} \mathrm{dist}ot{[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S]} & & & -\beta-\gamma & \gamma & \beta & -\beta & \beta & -\beta\\ \mathrm{dist}ot{[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}I]} & & & 2\beta &-2\gamma & 0 & 2\beta & 0 & 2\beta\\ \end{block} \end{blockarray}\;. \end{equation*} Hence, two types of order 3 motifs appear on the right-hand side: chains and triangles. For the networks we consider in this paper, triangular subgraphs are either not present (in case of square, cubic, hypercubic lattices) or negligible for large $N$ (e.g. for Erd\H{o}s-R{\'e}nyi random networks \cite{Note1}), so we only need to consider the system \begin{equation*} \begin{blockarray}{ccccccc} && [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}] & [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S] & [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}I] & [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}SI] & [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}SI] \\ \begin{block}{cc[c|cc|cc]}\relax \mathrm{dist}ot{[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}]} & & -\gamma & \beta & 0 & & \\\cline{3-7} \mathrm{dist}ot{[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S]} & & &-\beta-\gamma & \gamma & \beta & -\beta\\ \mathrm{dist}ot{[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}I]} & & & 2\beta &-2\gamma & 0 & 2\beta\\ \end{block} \end{blockarray}\;. \end{equation*} The number of conservation relations used for elimination depends on whether the networks have a homogeneous degree. 
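To make the dependence on the unclosed order-3 terms explicit, the following minimal Python sketch (function and variable names are ours and purely illustrative) encodes the right-hand side of the reduced system above, with the order-3 chain counts entering as inputs that still have to be supplied by a closure:
\begin{verbatim}
def mf2_unclosed_rhs(I, IS, II, SSI, ISI, beta, gamma):
    """Reduced, unclosed second-order SIS system (triangles neglected).

    I, IS, II are the dynamically relevant motif counts [I], [IS], [II];
    SSI and ISI are the order-3 chain counts [SSI] and [ISI] that a
    closure must express in terms of the lower-order counts.
    """
    dI  = -gamma * I + beta * IS
    dIS = -(beta + gamma) * IS + gamma * II + beta * SSI - beta * ISI
    dII = 2 * beta * IS - 2 * gamma * II + 2 * beta * ISI
    return dI, dIS, dII
\end{verbatim}
The closures discussed next replace \texttt{SSI} and \texttt{ISI} by functions of the lower-order counts (together with the conserved quantities), after which the system becomes self-contained.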
\mathrm{pa}ragraph{Networks with homogeneous degree} In this case, the conservation relations are \begin{align} \left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}\right]+\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}\right] = & N,\label{eq:siscons1}\\ \left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S\right]+\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}S\right] = & \kappa\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}\right],\label{eq:siscons2}\\ \left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}I\right]+\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S\right] = & \kappa\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}\right],\label{eq:siscons3} \end{align} However, due to the dynamic irrelevance of $[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}]$ and $[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}S]$, only \eqref{eq:siscons3} can be used to eliminate further variables. We use it to eliminate $\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}I\right]=\kappa[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}]-[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S]$. Following (\ref{eq:subcons}-\ref{eq:xdotsubcons2}), this means \begin{align*} \mathbf{x}&= \begin{bmatrix} [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}]& [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S]& [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}I]\\ \end{bmatrix}^T, \\ \tilde{\mathbf{x}}&= \begin{bmatrix} [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}]\\ [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S]\\ \end{bmatrix}, \; \tilde{\mathbf{x}}_3= \begin{bmatrix} [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}SI]\\ [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}SI]\\ \end{bmatrix}, \end{align*} and \begin{align*} \mathbf{E}= \begin{bmatrix} 1 & 0\\ 0 & 1\\ \kappa & -1\\ \end{bmatrix}, \; \mathbf{c}= \begin{bmatrix} 0\\ 0\\ 0\\ \end{bmatrix}, \; \tilde{\mathbf{Q}}_{1\cdots2,1\cdots2}=& \begin{bmatrix} -\gamma & \beta & 0\\ 0 &-\beta-\gamma & \gamma\\ \end{bmatrix}, \\ \tilde{\mathbf{Q}}_{23}=& \begin{bmatrix} \beta & -\beta\\ \end{bmatrix}, \end{align*} such that we can calculate $\mathbf{Q}',\tilde{\mathbf{c}}$ to obtain \begin{equation} \begin{blockarray}{cccccc} && [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}] & [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S] & [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}SI] & [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}SI]\\ \begin{block}{cc[c|c|cc]}\relax \mathrm{dist}ot{[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}]} & & -\gamma & \beta & & \\\cline{3-6} \mathrm{dist}ot{[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S]} & & \gamma\kappa &-\beta-2\gamma & \beta & -\beta\\ \end{block} \end{blockarray}\;. 
\end{equation} Applying the closure \eqref{eq:mclosabchom} for degree-homogeneous networks [resulting from \eqref{eq:dec:mot:chord}], \begin{align} \left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}SI\right] & \approx \kappa(\kappa-1)N\frac{\frac{\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}S\right]}{\kappa N}\frac{\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S\right]}{\kappa N}}{\frac{\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}\right]}{N}}=\frac{\kappa-1}{\kappa}\frac{[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}S][\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S]}{[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}]},\nonumber\\ \left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}SI\right] & \approx \kappa(\kappa-1)N\frac{\frac{\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S\right]^2}{\kappa^2 N^2}}{\frac{\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}\right]}{N}}=\frac{\kappa-1}{\kappa}\frac{[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S]^2}{[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}]},\label{eq:mf2:hom:closure} \end{align} we obtain the final nonlinear system \begin{equation} \begin{blockarray}{cccccc} && [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}] & [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S] & \frac{[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}S][\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S]}{[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}]} & \frac{[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S]^2}{[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}]}\\ \begin{block}{cc[c|c|cc]}\relax \mathrm{dist}ot{[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}]} & & -\gamma & \beta & & \\\cline{3-6} \mathrm{dist}ot{[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S]} & & \gamma\kappa &-\beta-2\gamma & \beta\frac{\kappa-1}{\kappa} & -\beta\frac{\kappa-1}{\kappa}\\ \end{block} \end{blockarray}\;, \end{equation} which, after elimination of $[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}]$ and $[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}S]$ via (\ref{eq:siscons1}-\ref{eq:siscons2}) and normalisation via \eqref{eq:cons:norm} becomes \begin{equation}\label{eq:mf2:sis:1} \begin{blockarray}{ccccc} & \llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}\rrbracket & \llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S\rrbracket & (1-\frac{\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S\rrbracket}{1-\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}\rrbracket})\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S\rrbracket & \frac{\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S\rrbracket^2}{1-\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}\rrbracket}\\ \begin{block}{c[c|c|cc]}\relax \mathrm{dist}ot{\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}\rrbracket} & -\gamma & \kappa\beta & & \\\cline{2-5} \mathrm{dist}ot{\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S\rrbracket} & \gamma 
&-\beta-2\gamma & \beta(\kappa-1) & -\beta(\kappa-1)\\ \end{block} \end{blockarray}, \end{equation} From the steady state solutions \eqref{eq:SS2homkap} we find that the epidemic threshold, is now located at $\beta/\gamma{=}(\kappa-1)^{-1}$. We also derived non-trivial steady state correlations via (\ref{eq:corrs}) in \eqref{eq:sscorrs2homkap}. \mathrm{pa}ragraph{Networks with heterogeneous degree} Here, \eqref{eq:siscons2} and \eqref{eq:siscons3} do not hold, but the total frequency of any given subgraph is still conserved. Hence, the conservation relations are \eqref{eq:siscons1} from order $1$ and \begin{equation}\label{eq:alginvSIS2het} \left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}S\right]+\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}I\right]+2\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S\right]=\kappa N. \end{equation} This means that we cannot eliminate $[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}I]$ here (unlike in case of networks with homogeneous degree), such that we have three instead of two equations. Also applying the closure for ER networks \eqref{eq:mclosabcER} [resulting from \eqref{eq:dec:mot:chord}], \begin{align} \left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}SI\right]&\approx\frac{\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}S\right]\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S\right]}{\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}\right]},\qquad\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}SI\right]\approx\frac{\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S\right]^{2}}{\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}\right]},\label{eq:mf2:het:closure} \end{align} we obtain \begin{equation}\label{eq:mf2:sis:het} \begin{blockarray}{ccccccc} && [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}] & [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S] & [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}I] & \frac{[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}S][\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S]}{[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}]} & \frac{[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S]^2}{[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}]}\\ \begin{block}{cc[c|cc|cc]}\relax \mathrm{dist}ot{[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}]} & & -\gamma & \beta & 0 & & \\\cline{3-7} \mathrm{dist}ot{[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S]} & & &-\beta-\gamma & \gamma & \beta & -\beta\\ \mathrm{dist}ot{[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}I]} & & & 2\beta &-2\gamma & 0 & 2\beta\\ \end{block} \end{blockarray}\;. 
\end{equation} After substitution of $[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}]$ and $[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}S]$ via the conservation relations (\ref{eq:siscons1},\ref{eq:alginvSIS2het}) and normalisation via \eqref{eq:cons:norm}, this becomes \begin{equation}\label{eq:mf2:sis:het2} \begin{blockarray}{cccccc} & \llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}\rrbracket & \llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S\rrbracket & \llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}I\rrbracket & \frac{(1-2\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S\rrbracket-\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}I\rrbracket)\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S\rrbracket}{1-\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}\rrbracket} & \frac{\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S\rrbracket^2}{1-\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}\rrbracket}\\ \begin{block}{c[c|cc|cc]}\relax \mathrm{dist}ot{\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}\rrbracket} & -\gamma & \kappa\beta & 0 & & \\\cline{2-6} \mathrm{dist}ot{\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S\rrbracket} & &-\beta-\gamma & \gamma & \kappa\beta & -\kappa\beta\\ \mathrm{dist}ot{\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}I\rrbracket} & & 2\beta &-2\gamma & 0 & 2\kappa\beta\\ \end{block} \end{blockarray}\;. \end{equation} There are three steady states, of which two are in the admissible range \eqref{eq:SS2hetkap}. Here, the epidemic threshold is located at $\beta/\gamma{=}\kappa^{-1}$, as in the first-order mean field model. The steady state correlations are given in \eqref{eq:sscorrs2hetkap}. \subsubsection{MF3-MF5} \label{sec:mf35} Approximations of higher order than two correspond to cluster variation approximations in statistical physics \cite{kikuchi1951}. MF3 results from neglecting dependence beyond distance 2 and is commonly known as the `triple approximation' \cite{House2009}. Its step-by-step derivation for the square lattice is shown in Appendix \ref{sec:MF3} and results in four equations (after substitution with conservation relations). In Appendix \ref{sec:MFhoeqs}, we show the derivation of the unclosed MF4 for a general network as \texttt{Mathematica} notebook output. We also derived the closed MF4 and MF5 for the square lattice. The number of equations after elimination with conservation relations is respectively 14 and 37. The derivation of MF4 with closure is shown in Appendix \ref{sec:MFhoeqs} point 2. The steady states of MF4 and MF5 are shown in Figure~\ref{fig:Comparison-of-the} of Section \ref{sec:Comparison}. \subsection{Comparison to simulations}\label{sec:Comparison} Here, we compare the steady states of MF1-5 of SIS epidemic spreading with those of the simulations on a selection of network types: lattices, regular random networks and Erd\H{o}s-R{\'e}nyi random networks. \mathrm{pa}ragraph{Square lattice} In Figure \ref{fig:Comparison-of-the}a, we show, for the square lattice, the steady state fraction of infecteds versus $\beta/\gamma$ for MF1-MF5 compared to simulations. The steady states of the mean-field models get closer to those of the simulation with increasing order. 
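As an illustration of how the lowest-order curves in such comparisons can be generated, the following minimal Python sketch (assuming NumPy and SciPy are available; the parameter values are arbitrary) integrates the normalised MF1 model \eqref{eq:mf1:sis} and the normalised homogeneous-degree MF2 model \eqref{eq:mf2:sis:1} to their endemic steady states for the square lattice ($\kappa=4$):
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

beta, gamma, kappa = 0.6, 1.0, 4      # arbitrary rates; square-lattice degree

def mf1(t, y):
    I, = y
    return [beta * kappa * I * (1 - I) - gamma * I]

def mf2(t, y):                        # y = (<<I>>, <<IS>>), homogeneous degree
    I, IS = y
    dI = kappa * beta * IS - gamma * I
    dIS = (gamma * I - (beta + 2 * gamma) * IS
           + beta * (kappa - 1) * IS * (1 - 2 * IS / (1 - I)))
    return [dI, dIS]

T = 500.0
I1 = solve_ivp(mf1, (0, T), [0.01], rtol=1e-8).y[0, -1]
I2 = solve_ivp(mf2, (0, T), [0.01, 0.0099], rtol=1e-8).y[0, -1]
print("MF1 endemic state:", I1)       # analytically 1 - gamma/(kappa*beta)
print("MF2 endemic state:", I2)
\end{verbatim}
The MF1 value reproduces $1-\gamma/(\kappa\beta)$, and the MF2 value lies below it, consistent with the ordering of the lower-order curves in Figure~\ref{fig:Comparison-of-the}a.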
All mean-field models have an increasing bias in their non-trivial (endemic) steady states when approaching the critical value of $\beta/\gamma$ from above. Figure \ref{fig:Comparison-of-the}b compares the steady state distance-1 correlations between species types from MF2 and MF5 to those in simulations. Species of the same type cluster whereas different species tend to avoid each other (compared to a random distribution). The infected-infected correlation diverges when approaching the critical value of $\beta/\gamma$ from above and has a singularity at the bifurcation. For example, for MF$2$, via (\ref{eq:sscorrs2homkap}), we have $C_{II}^{*}(\beta/\gamma)\to\infty$ for $\beta/\gamma\, \searrow\, (\kappa-1)^{-1}$ (limit from above in the endemic equilibrium). We recall that the errors in the mean-field models visible in Figure~\ref{fig:Comparison-of-the}a can be due to violation of the assumed statistical independence beyond distance $k$, non-chordality of the independence map, and violation of the spatial homogeneity assumption. Spatial homogeneity can be violated in two ways: via heterogeneity of structure and via heterogeneity of dynamics \cite{Sharkey2008}. As the structure of a lattice is homogeneous and we only study steady states (i.e. there are no dynamics) that are spatially homogeneous (see Figure \ref{fig:ctrl_t_bif}b), spatial inhomogeneity cannot be a source of bias here. This leaves neglected statistical dependence and non-chordality as the only sources of bias. In the square lattice, for any chosen $\mathsf{d}$ there are always cycles with diameter larger than $\mathsf{d}$ along which unaccounted-for information can spread, unless $\mathsf{d}$ is greater than or equal to the graph diameter of the entire lattice. This means that mean-field models that do not consider motifs of size up to the network diameter minus one cannot be exact. As MF1 assumes no correlation between the states of neighbouring nodes, the distance between the horizontal line through $1$ and the $\boldsymbol{+}$ markers of the correlations in the simulations is a measure of the bias of MF1 due to the neglect of correlations in its closures. Likewise, the distance between the steady state MF2/MF5 correlations and the simulations is due to the neglect of (higher-order and conditional) dependence in the MF2/MF5 closures and to higher-level non-chordality. All models have larger biases closer to the critical point, where higher-order correlations become more important. This is a well-known characteristic of continuous phase transitions, in which correlations occur over increasingly long ranges when approaching the phase transition (bifurcation). Figure~\ref{fig:Steady-state-correlation} in Appendix \ref{sec:dcorrs} shows the correlation as a function of distance from a central point [via \eqref{eq:CabD}], for various values of $\beta/\gamma$. It shows, as expected from the theory of phase transitions, that correlations at any distance are larger closer to the critical point. At the critical point, theory shows that correlations occur at all distances \cite{Henkel2008}. Comparing the square lattice to the 4-neighbour random regular graph, we can see that there is also a phase transition, but with substantially less bias than in the square lattice (Figure \ref{fig:Comparison-of-the-1} blue $\boldsymbol{\times}$ vs $\boldsymbol{+}$). This is because random regular graphs are locally treelike, and hence, unlike in the square lattice, correlations over longer distances can be captured well via a decomposition of larger motifs into links (MF2). 
We suspect that the small remaining bias in the 4-regular random graph arises because the assumed conditional independence is not valid for all states \cite{Sharkey2015,Sharkey2015a}. \begin{figure*} \caption{Comparison of the mean-field approximations of order $1$ to $5$ (lines) with simulations (markers) of SIS epidemic spreading on a square lattice as a function of $\beta/\gamma$: (a) Nontrivial steady states, (b) Correlations at distance $1$ [shown for MF$2$ (solid), MF$5$ (dashed), and simulations (dots)].} \label{fig:Comparison-of-the} \end{figure*} \paragraph{General cubic lattices and random regular networks} We show in Figure \ref{fig:Comparison-of-the-1} how the steady states and correlations in MF1, MF2 and simulations depend on the number of neighbours in $\mathsf{d}$-dimensional cubic lattices and random regular networks. When $\mathsf{d}$ is the lattice dimension, the lattice degree is $\kappa=2\mathsf{d}$. The observations for the square lattice generalise to cubic/hypercubic lattices and random regular networks (at least up to $\kappa=10$): \emph{i}. there is a transcritical bifurcation at a particular value of $\beta/\gamma$, where $C_{II}^{*}$ becomes singular, \emph{ii}. MF1 and MF2 capture the steady state fraction of infecteds qualitatively, and MF2 also captures the correlations qualitatively, \emph{iii}. MF2 is less biased than MF1, \emph{iv}. the bias is larger closer to the bifurcation. According to MF1, the bifurcation occurs at $\kappa^{-1}=(2\mathsf{d})^{-1}$ and according to MF2 at $(2\mathsf{d}-1)^{-1}=(\kappa-1)^{-1}$ (see Sections \ref{sec:First-order} and \ref{sec:Second-order}). \citet{Liggett2005} proved that for lattices, the critical value predicted by MF2 is a lower bound. Figure \ref{fig:Comparison-of-the-1}a shows that this lower bound (the MF2 prediction) is approached increasingly closely as the lattice dimension increases. Due to the higher clustering of neighbours, the epidemic threshold in lattices is higher than that in the corresponding random regular network \cite{Keeling1999}, but this difference decreases with dimension/degree (Figure \ref{fig:Comparison-of-the-1}a). The steady states of a 5-dimensional hypercubic lattice and of a random regular network with degree 10 are indistinguishable from each other and from MF2. This is because random walks in space of dimension 5 or higher have a finite number of intersections almost surely \cite{Erdos1960,Heydenreich2017a,Lawler2010}. If the path along which an infection travels is seen as a random walk, having many intersections in $\mathsf{d}\leq4$ means that one cannot ignore alternative infection paths. In $\mathsf{d}>4$, it is harder for infections to travel via alternative paths to the same point, and hence those paths resemble trees more closely. Hence, as explained above and in \citet{Sharkey2015a}, MF2 should then be more accurate. \begin{figure*} \caption{Comparison of the mean-field approximations (dotted lines: MF1, solid lines: MF2) and simulations (markers) of SIS epidemic spreading on lattices and random regular networks with number of neighbours 4, 6, 8, 10 as a function of $\beta/\gamma$: (a) Nontrivial steady states, (b) Correlations $C_{II}$.} \label{fig:Comparison-of-the-1} \end{figure*} \paragraph{Erd\H{o}s-R{\'e}nyi random networks} Finally, we show in Figure \ref{fig:Comparison-of-the-2} how the steady states and correlations in MF1, MF2 and simulations depend on the number of neighbours in an Erd\H{o}s-R{\'e}nyi random network. 
Recall that MF2 on an Erd\H{o}s-R{\'e}nyi random network differs from that for the networks above because it has one fewer conservation relation. As above, the behaviour of the steady state solutions is captured qualitatively by MF1 and MF2, and that of the correlations by MF2 alone. Also as above, networks with a larger degree have lower biases and MF2 is better than MF1, but now there seems to be a slight increase of the bias with $\beta/\gamma$, at least in the range inspected. Despite the spatial heterogeneity of the degree in Erd\H{o}s-R{\'e}nyi random networks, there is considerably less bias than in lattices, confirming that the presence of cycles beyond the closure distance is the dominant cause of mean-field model biases in the steady states of SIS spreading. \begin{figure*} \caption{Comparison of the mean-field approximations (dotted lines: MF1, solid lines: MF2) and simulations (markers) of SIS epidemic spreading on an Erd\H{o}s-R{\'e}nyi random network.} \label{fig:Comparison-of-the-2} \end{figure*} \section{Summary and conclusions} Previous work found that exact closed individual-level moment equations exist for SIR spreading on arbitrary networks, with the requirement to consider larger-sized motifs, and therefore more equations, for networks that are decreasingly tree-like \cite{Sharkey2015a}. While for dynamics other than SIR spreading it may not be possible to prove exactness for a finite number of closed moment equations, it is generally found that accuracy increases with the order of approximation \cite[e.g.][]{House2009,Kiss2017} (which was confirmed here). The feasibility of automated derivation of exact closed moment equations for SIR epidemic spreading was shown by \citet{Sharkey2015a}, while an automated procedure to derive unclosed moment equations for arbitrary dynamics was developed by \citet{Danos2020}. We developed an automated procedure to both derive and close population-level moment equations for arbitrary dynamics on networks at any approximation order, allowing us to consider mean-field models of higher orders than typically derived by hand. For this purpose, we developed a method to derive closure schemes from predefined independence assumptions. Our closure formulas rely, besides the requirements of spatial homogeneity and large network size, on the assumption of conditional independence beyond distance $k$ and $k$-chordality of the considered network. Consistently, our simulations of SIS epidemic spreading showed that, at a given approximation order, the largest biases occurred for networks with many short cycles of any size, such as lattices, in parameter regimes with long-range correlations, such as near continuous phase transitions. Note however that, for lattices, we found the bias of mean-field models to decrease with lattice dimension, which is consistent with results in percolation and phase-transition theory, where it was shown that the importance of cycles decreases with the lattice dimension \cite{Erdos1960,Heydenreich2017a,Lawler2010}. We also showed that the conventional procedure of truncation at a maximum motif size instead of at a maximum motif diameter necessitates independence assumptions that are inconsistent for different approximated motifs or that may be incompatible with the network (see \ref{pt:exactness}). This suggests that choosing a moment space that consists of motifs of increasing diameter instead of increasing size would lead to more accurate mean-field models. 
Whereas our method still needs to be tested more widely, we expect it to lend itself well to studying dynamics on networks in which the density and size of short cycles lie between those of random networks and of (low-dimensional) lattices, particularly when the network is structurally homogeneous. In these cases, derivation by hand may be too tedious, while the final set of moment equations is still more manageable than the Markov chain simulations. For networks with considerable degree heterogeneity and/or community structure, we expect approximate master equation methods \cite{marceau2010,gleeson2013,fennell2019,stonge2021,Cui2022} to be more efficient. Our approach focused on static networks with at most nearest-neighbour interactions, but it can be extended to adaptive networks such as those studied in \cite{Gross2006,demirel2014,Danos2020}, and to dynamics with higher-order interactions \cite{battiston2020} -- requiring reaction rate tensors $\mathbf{R}^p$ with $p\ge2$. Our approximation scheme of Section \ref{sec:Closure-scheme} can be applied more generally, to understand precisely which independence assumptions are made in existing mean-field approximations other than moment closure, or to devise new mean-field approximations. It may also serve to extend the use of message-passing methods \cite{karrer2010,wilkinson2014,koher2019} for epidemic modelling to graphs with cycles. While we were able to derive closure formulas by assuming statistical independence, leading to mean-field models, other assumptions can be used to obtain closures \cite{kuehn2016}, such as maximum entropy \cite{Rogers2011}, or time scale separation between moments at different orders \cite{GrossKevrekidis2008}. Sometimes one can find an appropriate moment space and closure by taking account of the characteristic features of the process under consideration, leading to a description with greater efficiency than is obtainable by using the size-based moment space and closing via independence assumptions \cite{Wuyts2022}. We expect that for finding good moment spaces and closures, equation-free and machine-learning methods \cite{Kevrekidis2009,Patsatzis2022} will play an important role, in particular because the most appropriate low-dimensional descriptions or their closures may not necessarily be available in closed form \cite{Rogers2011,GrossKevrekidis2008,Kevrekidis2009,Patsatzis2022}. It is the subject of future work to explore if and how these different approaches relate. \begin{acknowledgments} This work was supported by the UK Engineering and Physical Sciences Research Council (EPSRC) grants EP/N023544/1 and EP/V04687X/1. \end{acknowledgments} \onecolumngrid \renewcommand{\thesection}{A-\Roman{section}} \renewcommand{\thesubsection}{\Alph{subsection}} \renewcommand{\theparagraph}{\alph{paragraph}} \renewcommand{\theequation}{A\arabic{equation}} \renewcommand{\thefigure}{A\arabic{figure}} \renewcommand{\thetable}{A\arabic{table}} \setcounter{equation}{0} \setcounter{section}{0} \setcounter{figure}{0} \begin{center} \fontsize{20}{24}\bfseries\scshape{APPENDIX} \end{center} \paragraph*{Notation and references} \label{notref} References to items in this document are preceded by `A'. Items not preceded by `A' refer to the main text. 
\tableofcontents \section{Derivation of the moment equations}\label{sec:Differential-and} In this section, we derive expressions for the expected rate of change and conservation relations of motif counts, first shown in (\ref{eq:cons:norm},~\ref{eq:cons2}) of Section \ref{sec:alg:iv}. Both can be seen as invariance relations, the former being of a differential type, derived from the master equation for the Markov chain on the network, and the latter of an algebraic type, following from the property that the network is fixed. \paragraph{Master equation for transitions in a Markov chain} We recall that the state of the system for our network with $N$ nodes is given by (using $:=$ for ``defined as'') $ X:=(X_{1},X_{2},...,X_{N})\in\{1,\ldots,n\}^N, $ where $X_{i}$ is the label of the species that occupies node $i$. Hence, the total number of states is $n^{N}$. The probabilistic transition from one state to another, following a discrete-state continuous-time Markov chain, defines an evolution equation for the probability of being in each of these states, the so-called master equation (also known as the Kolmogorov-forward equation for a Markov jump process \cite{Gardiner2009}). The probability density $P(X,t)$ for a particular state $X$ at time $t$ changes according to \begin{eqnarray} \frac{\mathrm{d}}{\mathrm{d}t} P(X,t) & = & \sum_{X'\neq X}\left[w(X'\rightarrow X)P(X',t)-w(X\rightarrow X')P(X,t)\right],\label{eq:ME} \end{eqnarray} where $w$ denotes the transition rate between system states. If we define $\mathbf{W}$ as an $n^{N}\times n^{N}$ transition rate matrix with non-diagonal entries $w(X'\rightarrow X)$ and diagonal entries $-\sum_{X'\neq X}w(X\rightarrow X')$, we can rewrite the master equation as \[ \dot{\mathbf{P}}(t)=\mathbf{W}\mathbf{P}(t), \] which describes the evolution of the density for all states as elements of a vector $\mathbf{P}$ (and not just of one particular state as in \eqref{eq:ME}). Because almost surely at most one node can change state at any one time $t$, direct transitions are only possible between states that differ in a single node. Therefore, $\mathbf{W}$ must be sparse, having only $N(n-1)$ non-zero off-diagonal entries in each row or column, as each of the $N$ nodes can convert to any of the $n-1$ other species. This permits writing \eqref{eq:ME} as \begin{align} \frac{\mathrm{d}}{\mathrm{d}t} P(X,t)&=\sum_{i=1}^{N}\sum_{k\neq X_{i}}^{n}\left[w_{i}(X_{i\shortto k}\rightarrow X)P(X_{i\shortto k},t)-w_{i}(X\rightarrow X_{i\shortto k})P(X,t)\right]\mbox{,}\label{eq:ME2} \end{align} where \[ X\to X_{i\shortto k}: (X_{1},...,X_{i},...,X_{N})\mapsto(X_{1},...,k,...,X_{N}) \] is the operator that replaces the species at the $i$th node by species $k$, and $w_{i}(.)$ is the conversion rate at node $i$. \paragraph{Motifs and their frequencies} In what follows, we will derive from the master equation the evolution of the frequency (or total count) of the \emph{motifs} $\boldsymbol{x}^\mathsf{a}$, as defined in Section \ref{sec:ds-ct-mc}. Recall that these motifs are defined by their state label vector $\boldsymbol{x}$ of size $m$ and the adjacency structure between motif nodes $\mathsf{a}$. 
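As a concrete reference for these counting conventions, here is a brute-force Python sketch (all names are ours and purely illustrative) of the total motif count of \eqref{eq:motifcount:all}: it sums over all ordered tuples of distinct nodes and requires an exact match of both the motif adjacency and the state labels:
\begin{verbatim}
from itertools import permutations
import numpy as np

def motif_count(A, X, x, a):
    """Total count [x^a]: A is the N x N network adjacency matrix (0/1,
    symmetric), X the node states (length N), x the motif state labels
    (length m) and a the m x m motif adjacency matrix (0/1)."""
    m = len(x)
    count = 0
    for i in permutations(range(len(X)), m):      # ordered tuples, no repeats
        labels_match = all(X[i[p]] == x[p] for p in range(m))
        adjacency_match = all(A[i[p], i[q]] == a[p][q]
                              for p in range(m) for q in range(m) if p != q)
        if labels_match and adjacency_match:
            count += 1
    return count

# toy example: one S-I link in the path S - I - I
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])
X = ['S', 'I', 'I']
print(motif_count(A, X, ['S', 'I'], [[0, 1], [1, 0]]))   # -> 1
\end{verbatim}
Because ordered tuples are counted, a motif with non-trivial automorphisms is counted with the corresponding multiplicity (cf.\ remark \emph{iii} below).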
We denote a single \emph{occurrence} at a given $\boldsymbol{i}\in S(m,N)$ as \begin{align} \left[\boldsymbol{x}_{\boldsymbol{i}}^{\mathsf{a}}\right](t) & := \mathrm{d}lta_\mathsf{a}(\mathsf{A}_{\boldsymbol{i}})\mathrm{d}lta_{\boldsymbol{x}}(X_{\boldsymbol{i}}(t)) \mbox{,}\label{eq:motif1} \end{align} which requires an exact match of the adjacency $\mathsf{a}$ by $\mathsf{A}_{\boldsymbol{i}}$ and state label vector $\boldsymbol{x}$ by $X_{\boldsymbol{i}}$. For example, if $\mathsf{A}$ contains a connected triangle between nodes $1$, $2$ and $3$, then motifs with $\mathsf{a}=\{(1,2),(2,3)\}$ would not occur on $\boldsymbol{i}=(1,2,3)$. The total count of motifs in the large network is then the number of such exact matches, which is obtained by summing over all indices $\boldsymbol{i}\in S(m,N)$, as shown in \eqref{eq:motifcount:all}. Since the large network structure is constant in time, we can use the counts of induced subgraphs $\left[\mathsf{a}\right]$ given by \eqref{eq:cons:norm} as a normalisation factor for the (variable) counts of motifs such that we may consider normalised motif frequencies \begin{equation} \llbracket\boldsymbol{x}^{\mathsf{a}}\rrbracket:=\left[\boldsymbol{x}^{\mathsf{a}}\right]/\left[\mathsf{a}\right]\mbox{.}\label{eq:norm} \end{equation} \mathrm{pa}ragraph{Evolution of expected counts} Total and normalised counts $\left[\boldsymbol{x}^{\mathsf{a}}\right]$ and $\llbracket\boldsymbol{x}^{\mathsf{a}}\rrbracket$ refer to realisations of states $X$ on the large random network such that they are random variables. Similarly, $\left[\boldsymbol{x}_{\boldsymbol{i}}^{\mathsf{a}}\right](X)$ is a random variable in $\{0,1\}$ for each index vector $\boldsymbol{i}\in S(m,N)$ once we take the randomness of states $X$ into account. Its expectation is \begin{equation}\label{eq:motif1exp} \langle\left[\boldsymbol{x}_{\boldsymbol{i}}^{\mathsf{a}}\right]\rangle=\sum_{X}\left[\boldsymbol{x}_{\boldsymbol{i}}^{\mathsf{a}}\right](X)P(X,t)\mbox{.} \end{equation} The master equation \eqref{eq:ME2} for the density $P$ implies that the expectation satisfies the differential equation \begin{eqnarray} \mathrm{dist}dt \langle\left[\boldsymbol{x}_{\boldsymbol{i}}^{\mathsf{a}}\right]\rangle & = & \sum_{X}\left[\boldsymbol{x}_{\boldsymbol{i}}^{\mathsf{a}}\right](X)\mathrm{dist}dt P(X,t)\nonumber \\ & = & \sum_{X}\Bigl\{\mathrm{d}lta_\mathsf{a}(\mathsf{A}_{\boldsymbol{i}})\mathrm{d}lta_{\boldsymbol{x}}(X_{\boldsymbol{i}})\sum_{i'=1}^{N}\sum_{k\neq X_{i'}}^{n}\left[w_{i'}(X_{i'\shortto k}\rightarrow X)P(X_{i'\shortto k},t)-w_{i'}(X\rightarrow X_{i'\shortto k})P(X,t)\right]\Bigr\},\nonumber \\ & = & \Bigl\langle\mathrm{d}lta_\mathsf{a}(\mathsf{A}_{\boldsymbol{i}})\sum_{i'=1}^{N}\sum_{k\neq X_{i'}}^{n}\Bigl[\mathrm{d}lta_{\boldsymbol{x}}\bigl((X_{i'\shortto k})_{\boldsymbol{i}}\bigr)-\mathrm{d}lta_{\boldsymbol{x}}\bigl(X_{\boldsymbol{i}}\bigr)\Bigr]w_{i'}(X\rightarrow X_{i'\shortto k})\Bigr\rangle,\nonumber\\ & = & \Bigl\langle\mathrm{d}lta_\mathsf{a}(\mathsf{A}_{\boldsymbol{i}})\sum_{p=1}^{m}\sum_{k\neq X_{i_{p}}}^{n}\Bigl[\mathrm{d}lta_{\boldsymbol{x}}\bigl((X_{i_p\shortto k})_{\boldsymbol{i}}\bigr)-\mathrm{d}lta_{\boldsymbol{x}}\bigl(X_{\boldsymbol{i}}\bigr)\Bigr]w_{i_{p}}(X\rightarrow X_{i_p\shortto k})\Bigr\rangle,\nonumber \\ & = & \Bigl\langle\mathrm{d}lta_\mathsf{a}(\mathsf{A}_{\boldsymbol{i}})\sum_{p=1}^{m}\sum_{k\neq 
X_{i_{p}}}^{n}\mathrm{d}lta_{\boldsymbol{x}_{p\shortto\emptyset}}(X_{\boldsymbol{i}_{p\shortto\emptyset}})\Bigl[\mathrm{d}lta_{x_p}(k)-\mathrm{d}lta_{x_p}(X_{i_{p}})\Bigr]w_{i_{p}}(X\rightarrow X_{i_p\shortto k})\Bigr\rangle,\label{eq:ddtmMi} \end{eqnarray} where in the second step, we substituted $\mathrm{d}lta_{\boldsymbol{x}}(X_{\boldsymbol{i}})w_{i'}(X_{i'\shortto k}\rightarrow X)P(X_{i'\shortto k},t)$ for $\mathrm{d}lta_{\boldsymbol{x}}\bigl((X_{i'\shortto k})_{\boldsymbol{i}}\bigr)w_{i'}(X\rightarrow X_{i'\shortto k})P(X,t)$ which corresponds to a reordering of the terms in $\sum_{X}$, and where we have used the notation $\left\langle \cdot\right\rangle $ for the expectation $\sum_{X}(\cdot)P(X,t)$. In the third step, we used that only changes to the nodes belonging to $\boldsymbol{i}$ matter. In the last step, we factored out the common elements in the delta functions corresponding to all but the $p$th element of $X_{\boldsymbol{i}}$ and $\boldsymbol{x}$, and we used the subscript notation $(\cdot)_{p\shortto\emptyset}$ to denote a vector with element $p$ removed. The expression on the right-hand side in \eqref{eq:ddtmMi} can be understood independent of the prior algebraic manipulations: the expected change rate of $\bigl\langle\left[\boldsymbol{x}_{\boldsymbol{i}}^{\mathsf{a}}\right]\bigr\rangle$ equals the sum of expected rates for each of the nodes $i_p$ in $\boldsymbol{i}$ changing its state \emph{to} $x_p$ minus the rate of node $i_p$ changing its state \emph{from} $x_p$. \mathrm{pa}ragraph{Conversion rates} Next, we will insert the two types of admissible transitions (as discussed in Section \ref{sec:ds-ct-mc}) into \eqref{eq:ddtmMi}, namely spontaneous conversions with rates given in matrix $\mathbf{R}^0\in\mathbb{R}^{n\times n}$, and, conversions due to interactions with a single nearest neighbour with rates given in $\mathbf{R}^1\in\mathbb{R}^{n\times n\times n}$. The diagonal entries of $\mathbf{R}^0,\mathbf{R}^1$ are zero without loss of generality. For these transition types, the rates $w_{i_p}$ in \eqref{eq:ddtmMi} are \begin{eqnarray} w_{i_{p}}(X\rightarrow X_{i_p\shortto k}) & = & \sum_{a,c}^{n}R^1_{akc}\mathrm{d}lta_a(X_{i_{p}})\sum_{j}^{N}\mathsf{A}_{i_{p}j}\mathrm{d}lta_c(X_{j})+\sum_{a}^{n}R^0_{ak}\mathrm{d}lta_a(X_{i_{p}}).\label{eq:wieta} \end{eqnarray} With the rates in \eqref{eq:wieta}, and noting that \begin{align*} \mathrm{d}lta_{x_p}(k)-\mathrm{d}lta_{x_p}(X_{i_{p}})= \begin{cases} \phantom{-}1 & \mbox{if $x_{p}=k$,}\\ -1 & \mbox{if $x_{p}=X_{i_{p}}$,}\\ \phantom{-}0 & \mbox{if $k\neq x_{p} \mbox{\ and\ } x_{p}\neq X_{i_{p}}$,} \end{cases} \end{align*} the sum inside the averaging brackets in \eqref{eq:ddtmMi} has the form \begin{multline*} \Bigl[\sum_{a,c}^{n}R^1_{ax_{p}c}\mathrm{d}lta_a(X_{i_{p}})\sum_{j}^{N}\mathsf{A}_{i_{p}j}\mathrm{d}lta_c(X_{j})+\sum_{a}^{n}R^0_{ax_{p}}\mathrm{d}lta_a(X_{i_{p}})\Bigr]\mathrm{d}lta_{\boldsymbol{x}_{p\shortto\emptyset}}(X_{\boldsymbol{i}\setminus i_{p}})\\ -\sum_{k\neq X_{i_{p}}}^{n}\Bigl[\sum_{c}^{n}R^1_{x_{p}kc}\sum_{j}^{N}\mathsf{A}_{i_{p}j}\mathrm{d}lta_c(X_{j})+R^0_{x_{p}k}\Bigr]\mathrm{d}lta_{\boldsymbol{x}}\bigl(X_{\boldsymbol{i}}\bigr), \end{multline*} where we used for the last two terms that $\mathrm{d}lta_{x_p}(X_{i_{p}})\mathrm{d}lta_{\boldsymbol{x}_{p\shortto\emptyset}}(X_{\boldsymbol{i}\setminus i_{p}})$ can be combined to $\mathrm{d}lta_{\boldsymbol{x}}\bigl(X_{\boldsymbol{i}}\bigr)$. 
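As a small self-contained check of \eqref{eq:wieta}, the following Python sketch (our notation; species are 0-indexed, so $0$ corresponds to $S$ and $1$ to $I$) evaluates the node conversion rates for the SIS rate tensors of Section \ref{sec:SIS}, $R^0_{2,1}=\gamma$ and $R^1_{1,2,2}=\beta$:
\begin{verbatim}
import numpy as np

n = 2                                     # species: 0 = S, 1 = I
gamma, beta = 1.0, 0.6                    # arbitrary rates
R0 = np.zeros((n, n));    R0[1, 0] = gamma     # spontaneous recovery I -> S
R1 = np.zeros((n, n, n)); R1[0, 1, 1] = beta   # S -> I per infected neighbour

def conversion_rate(A, X, i, k):
    """Rate w_i(X -> X_{i->k}) at which node i converts to species k."""
    a = X[i]                                            # current species of i
    neighbour_species_counts = np.bincount(X[A[i] == 1], minlength=n)
    return R0[a, k] + np.dot(R1[a, k, :], neighbour_species_counts)

# toy example: path S - I - I
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])
X = np.array([0, 1, 1])
print(conversion_rate(A, X, 0, 1))   # -> beta  (one infected neighbour)
print(conversion_rate(A, X, 1, 0))   # -> gamma (spontaneous recovery)
\end{verbatim}
Summing such rates over all nodes and all target species gives the total event rate used, for example, in a Gillespie simulation of the process.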
\mathrm{pa}ragraph{Differential equations for motif counts} When distributing the products and exploiting the linearity of the averaging brackets, the differential equation \eqref{eq:ddtmMi} for the expected rate of change at $\boldsymbol{i}$ becomes \begin{align*} \mathrm{dist}dt \langle\left[\boldsymbol{x}_{\boldsymbol{i}}^{\mathsf{a}}\right]\rangle = & \mathrm{d}lta_\mathsf{a}(\mathsf{A}_{\boldsymbol{i}})\sum_{p}^{m}\Bigl[\sum_{a,c}^{n}R^1_{ax_{p}c} \Bigl\langle\mathrm{d}lta_{\boldsymbol{x}_{p\shortto\emptyset}}(X_{\boldsymbol{i}_{p\shortto\emptyset}})\mathrm{d}lta_a(X_{i_{p}})\sum_{j}^{N}\mathsf{A}_{i_{p}j}\mathrm{d}lta_c(X_{j})\Bigr\rangle+\sum_{a}^{n}R^0_{ax_{p}}\Bigl\langle\mathrm{d}lta_{\boldsymbol{x}_{p\shortto\emptyset}}(X_{\boldsymbol{i}_{p\shortto\emptyset}})\mathrm{d}lta_a(X_{i_{p}})\Bigr\rangle\\ & -\sum_{k\neq X_{i_{p}},c}^{n}R^1_{x_{p}kc}\Bigl\langle\mathrm{d}lta_{\boldsymbol{x}}\bigl(X_{\boldsymbol{i}}\bigr)\sum_{j}^{N}\mathsf{A}_{i_{p}j}\mathrm{d}lta_c(X_{j})\Bigr\rangle-\sum_{k\neq X_{i_{p}}}^{n}R^0_{x_{p}k}\Bigl\langle\mathrm{d}lta_{\boldsymbol{x}}\bigl(X_{\boldsymbol{i}}\bigr)\Bigr\rangle\Bigl]. \end{align*} After replacing index label $a$ by $k$, using the definition of $[\boldsymbol{x}^\mathsf{a}_{\boldsymbol{i}}]$ in \eqref{eq:motif1} and using $(\boldsymbol{x}_{\boldsymbol{i}}^{\mathsf{a}})_{p\shortto k}$ to indicate $\boldsymbol{x}_{\boldsymbol{i}}^{\mathsf{a}}$ with its $p$th element replaced by species $k$, this becomes \begin{eqnarray*} \mathrm{dist}dt \langle\left[\boldsymbol{x}_{\boldsymbol{i}}^{\mathsf{a}}\right]\rangle & = & \sum_{p}^{m}\sum_{k}^{n}\left[\sum_{c}^{n}R^1_{kx_{p}c}\Bigl\langle\left[(\boldsymbol{x}_{\boldsymbol{i}}^{\mathsf{a}})_{p\shortto k}\right]\sum_{j}^{N}\mathsf{A}_{i_{p}j}\mathrm{d}lta_c(X_{j})\Bigr\rangle+R^0_{kx_{p}}\bigl\langle\left[(\boldsymbol{x}_{\boldsymbol{i}}^{\mathsf{a}})_{p\shortto k}\right]\bigr\rangle\right.\\ & & \left.-\sum_{c}^{n}R^1_{x_{p}kc}\Bigl\langle\left[\boldsymbol{x}_{\boldsymbol{i}}^{\mathsf{a}}\right]\sum_{j}^{N}\mathsf{A}_{i_{p}j}\mathrm{d}lta_c(X_{j})\Bigr\rangle-R^0_{x_{p}k}\Bigl\langle\left[\boldsymbol{x}_{\boldsymbol{i}}^{\mathsf{a}}\right]\Bigr\rangle\right]. \end{eqnarray*} This shows that $\bigl\langle\left[\boldsymbol{x}_{\boldsymbol{i}}^{\mathsf{a}}\right]\bigr\rangle$ can increase by a conversion from motifs that differ from $\boldsymbol{x}_{\boldsymbol{i}}^{\mathsf{a}}$ in only one node (first two terms), or decrease by having any of the nodes in $\boldsymbol{x}_{\boldsymbol{i}}^{\mathsf{a}}$ convert to another species (last two terms), with both increase and decrease possible via interaction with neighbours and via spontaneous conversion. The factors of form $\left[\cdot\right]\sum_{j}^{N}\mathsf{A}_{i_{p}j}\mathrm{d}lta_c(X_{j})$ count the number of $c$-connections of motif $[\cdot]$ (located at $\boldsymbol{i}$) at node $i_p$. The neighbouring node $j$ with species $c$ can be part of the motif $[\cdot]$, or it can be outside of $[\cdot]$, in which case it gives rise to higher-order motifs. 
Therefore, by splitting the neighbourhood sums as follows, \begin{eqnarray*} \sum_{j}^{N}\mathsf{A}_{i_{p}j}\mathrm{d}lta_c(X_{j}) & = & \sum_{j\in\boldsymbol{i}}^{N}\mathsf{A}_{i_{p}j}\mathrm{d}lta_c(X_{j})+\sum_{j\notin\boldsymbol{i}}^{N}\mathsf{A}_{i_{p}j}\mathrm{d}lta_c(X_{j}), \end{eqnarray*} their products with $\left[\boldsymbol{x}_{\boldsymbol{i}}^{\mathsf{a}}\right]$ and $\left[(\boldsymbol{x}_{\boldsymbol{i}}^{\mathsf{a}})_{p\shortto k}\right]$ count contributions of $c-$neighbours from within versus from outside the motif separately: \begin{eqnarray} \left[(\boldsymbol{x}_{\boldsymbol{i}}^{\mathsf{a}})_{p\shortto k}\right]\sum_{j}^{N}\mathsf{A}_{i_{p}j}\mathrm{d}lta_c(X_{j}) & = & \boldsymbol{\mathrm{d}lta}_c(\boldsymbol{x})\mathsf{a}\boldsymbol{e}_p\left[(\boldsymbol{x}_{\boldsymbol{i}}^{\mathsf{a}})_{p\shortto k}\right]+\sum_{\boldsymbol{y}^\mathsf{b}\in{\cal N}^c_p\left((\boldsymbol{x}^{\mathsf{a})_{p\shortto k}}\right)}\sum_{j\notin\boldsymbol{i}}^{N}\left[\boldsymbol{y}_{\boldsymbol{i}, j}^{\mathsf{b}}\right],\\ \left[\boldsymbol{x}_{\boldsymbol{i}}^{\mathsf{a}}\right]\sum_{j}^{N}\mathsf{A}_{i_{p}j}\mathrm{d}lta_c(X_{j}) & = & \boldsymbol{\mathrm{d}lta}_c(\boldsymbol{x})\mathsf{a}\boldsymbol{e}_p\left[\boldsymbol{x}_{\boldsymbol{i}}^{\mathsf{a}}\right]+\sum_{\boldsymbol{y}^\mathsf{b}\in{\cal N}^c_p\left({\boldsymbol{x}^{\mathsf{a}}}\right)}\sum_{j\notin\boldsymbol{i}}^{N}\left[\boldsymbol{y}_{\boldsymbol{i}, j}^{\mathsf{b}}\right]\mbox{.}\label{eq:split} \end{eqnarray} On the right-hand side, $\boldsymbol{e}_p$ is an $m$-dimensional vector with a $1$ at position $p$ and zeroes elsewhere, and $\boldsymbol{\mathrm{d}lta}_c(\boldsymbol{x})$ a vector Kronecker delta function that returns a vector of the size of $\boldsymbol{x}$ with ones where the elements equal $c$ and zeroes elsewhere. Thus, the term $\boldsymbol{\mathrm{d}lta}_c(\boldsymbol{x})\mathsf{a}\boldsymbol{e}_p$ counts the number of $c$-connections at position $p$ in the motif $\boldsymbol{x}^\mathsf{a}$. We use the notation $\boldsymbol{i}, j$ for the vector $\boldsymbol{i}$ with an extra node index $j$ appended at position $m+1$. We used ${\cal N}^c_p(\boldsymbol{x}^\mathsf{a})$ to denote the set of all $(m+1)$th order connected motifs that can be obtained by linking a new $c$-node to the $p$th position in motif $\boldsymbol{x}^\mathsf{a}$, i.e., \begin{equation}\label{eq:nbhdef} {\cal N}_{p}^c(\boldsymbol{x}^\mathsf{a}):=\bigcup_{\ell=1}^{m+1} \left\{\boldsymbol{y}^\mathsf{b}:|\boldsymbol{y}|=m+1, y_\ell=c, \boldsymbol{y}^\mathsf{b}_{\ell\shortto\emptyset}=\boldsymbol{x}^\mathsf{a},(\ell,p)\in\mathsf{b} \right\}, \end{equation} where $\boldsymbol{y}^\mathsf{b}_{\ell\shortto\emptyset}$ denotes the $m$th order connected motif obtained by deleting the $\ell$th node of $\boldsymbol{y}^\mathsf{b}$. The sum over elements of ${\cal N}^c_p(\cdot)$ in \eqref{eq:split} is taken because any of the other motif nodes can also link to the new node. The types of higher-order motifs appearing in the differential equation depend on the considered motif. In Figure~\ref{fig:Dependence-of-motif}, we show this dependence structure (ignoring the labels). \begin{figure} \caption{Dependence of moment equations of a given motif on motifs of one order higher due to nearest-neighbour interactions, up to order 4 (ignoring labels). 
With the substitutions from above, we obtain the \emph{individual-level moment equations}:
\begin{eqnarray}\label{eq:ddtxai}
\frac{\mathrm{d}}{\mathrm{d}t} \langle\left[\boldsymbol{x}_{\boldsymbol{i}}^{\mathsf{a}}\right]\rangle & = & \sum_{p}^{m}\sum_{k}^{n}\biggl\{\Bigl(\sum_{c}^{n}\boldsymbol{\delta}_c(\boldsymbol{x})\mathsf{a}\boldsymbol{e}_p R^1_{kx_{p}c}+R^0_{kx_{p}}\Bigr)\bigl\langle\left[(\boldsymbol{x}_{\boldsymbol{i}}^{\mathsf{a}})_{p\shortto k}\right]\bigr\rangle+\sum_{c}^{n}\sum_{\boldsymbol{y}^\mathsf{b}\in{\cal N}^c_p\left(\boldsymbol{x}^{\mathsf{a}}_{p\shortto k}\right)}\,\sum_{j\notin\boldsymbol{i}}^{N}R^1_{kx_{p}c}\bigl\langle\left[\boldsymbol{y}_{\boldsymbol{i}, j}^{\mathsf{b}}\right]\bigr\rangle\nonumber \\
 & & -\Bigl(\sum_{c}^{n}\boldsymbol{\delta}_c(\boldsymbol{x})\mathsf{a}\boldsymbol{e}_p R^1_{x_{p}kc}+R^0_{x_{p}k}\Bigr)\bigl\langle\left[\boldsymbol{x}_{\boldsymbol{i}}^{\mathsf{a}}\right]\bigr\rangle-\sum_{c}^{n}\sum_{\boldsymbol{y}^\mathsf{b}\in{\cal N}^c_p\left({\boldsymbol{x}^{\mathsf{a}}}\right)}\sum_{j\notin\boldsymbol{i}}^{N}R^1_{x_{p}kc}\bigl\langle\left[\boldsymbol{y}_{\boldsymbol{i}, j}^{\mathsf{b}}\right]\bigr\rangle\biggr\}.
\end{eqnarray}
The \emph{population-level moment equations} are then obtained by taking the sum $\sum_{\boldsymbol{i}\in S(m,N)}$ of \eqref{eq:ddtxai} over all index sets $\boldsymbol{i}$ [see the definitions of $[\boldsymbol{x}^\mathsf{a}]$ and $[\boldsymbol{x}^\mathsf{a}_{\boldsymbol{i}}]$ in \eqref{eq:motifcount:all} and \eqref{eq:motif1}]:
\begin{equation}\label{eq:ddtxa}
\begin{aligned}
\frac{\mathrm{d}}{\mathrm{d}t} \langle\left[\boldsymbol{x}^{\mathsf{a}}\right]\rangle =& \sum_p^m\sum_k^n\sum_c^n\Biggl\{\left(\frac{R^0_{kx_{p}}}{n}+\boldsymbol{\delta}_c(\boldsymbol{x})\mathsf{a}\boldsymbol{e}_p R^1_{kx_{p}c}\right)\bigl\langle\left[(\boldsymbol{x}^{\mathsf{a}})_{p\shortto k}\right]\bigr\rangle- \left(\frac{R^0_{x_{p}k}}{n}+\boldsymbol{\delta}_c(\boldsymbol{x})\mathsf{a}\boldsymbol{e}_p R^1_{x_pkc}\right)\bigl\langle\left[\boldsymbol{x}^{\mathsf{a}}\right]\bigr\rangle \\
&+\sum_{\boldsymbol{y}^\mathsf{b}\in{\cal N}^c_p\left(\boldsymbol{x}^{\mathsf{a}}_{p\shortto k}\right)} R^1_{kx_{p}c}\langle[\boldsymbol{y}^\mathsf{b}]\rangle- \sum_{\boldsymbol{y}^\mathsf{b}\in{\cal N}^c_p(\boldsymbol{x}^\mathsf{a})} R^1_{x_{p}kc}\langle[\boldsymbol{y}^\mathsf{b}]\rangle\Biggr\},
\end{aligned}
\end{equation}
where we have collected the sums at the front. In closing this section, we make the following remarks. \emph{i}. The evolution of the expected motif counts of size $m$ is a function of the expected motif counts of sizes $m$ and $m+1$. \emph{ii}. Equation \eqref{eq:ddtxa} leads to the same equation for motifs in the network that are \emph{isomorphic}, where we call two motifs $\boldsymbol{x}^\mathsf{a}$ and $\boldsymbol{y}^\mathsf{b}$ isomorphic if there exists a permutation $\pi$ of their node indices that maps them onto each other, i.e. $\boldsymbol{x}^{\mathsf{a}}\simeq\boldsymbol{y}^{\mathsf{b}}\iff\exists \pi:\pi\boldsymbol{x}=\boldsymbol{y},\,\pi\mathsf{a}\pi^T=\mathsf{b}$. Isomorphic motifs thus belong to the same equivalence class and have equal total counts, so we may consider only one representative from each class; in our implementation, we therefore count all isomorphic motifs under a single representative node indexing. \emph{iii}. The summation in \eqref{eq:motifcount:all} over index tuples $\boldsymbol{i}\in S(m,N)$ that we used to go from \eqref{eq:ddtxai} to \eqref{eq:ddtxa} leads to multiple counting of motifs that possess \emph{automorphisms} other than the identity. A motif has an automorphism if it can be mapped onto itself by a permutation of its node indices; an automorphism is therefore an isomorphism with itself, i.e. $\mathrm{Aut}(\boldsymbol{x}^{\mathsf{a}}):=\{ \pi:\pi\boldsymbol{x}=\boldsymbol{x},\,\pi\mathsf{a}\pi^T=\mathsf{a}\}$. The multiplicity with which a particular motif $\boldsymbol{x}^{\mathsf{a}}$ is counted via \eqref{eq:motifcount:all} is then equal to $|\mathrm{Aut}(\boldsymbol{x}^{\mathsf{a}})|$ (see the short sketch below).
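A brute-force check of the automorphism count in remark \emph{iii} takes only a few lines; the helper below is our own sketch (positions are $0$-based and motif links are stored as unordered pairs), not code from the paper.
\begin{verbatim}
from itertools import permutations

def n_automorphisms(labels, edges):
    """|Aut(x^a)| of a small labelled motif by brute force over permutations."""
    m = len(labels)
    count = 0
    for perm in permutations(range(m)):
        keeps_labels = all(labels[perm[i]] == labels[i] for i in range(m))
        keeps_edges = all(frozenset({perm[i], perm[j]}) in edges
                          for i, j in (tuple(e) for e in edges))
        if keeps_labels and keeps_edges:
            count += 1
    return count

# An I-S-I chain: the reflection swapping the end nodes is the only
# nontrivial automorphism, so the index-tuple sum counts this motif twice.
print(n_automorphisms(('I', 'S', 'I'),
                      {frozenset({0, 1}), frozenset({1, 2})}))   # -> 2
\end{verbatim}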
\paragraph{Conservation relations\label{sec:Conservation-equations}} In fixed networks, the frequencies of induced subgraphs of a given type (e.g. nodes, links, polygons, chains) remain fixed. The sum over all possible motif labellings then yields these fixed frequencies; in the notation from above,
\begin{equation}\label{eq:alginv1}
\sum_{\boldsymbol{x}}\left[\boldsymbol{x}^{\mathsf{a}}\right]=\left[\mathsf{a}\right],\qquad\sum_{\boldsymbol{x}}\llbracket\boldsymbol{x}^{\mathsf{a}}\rrbracket=1,
\end{equation}
where the right-hand relation is the normalised form of the left [via \eqref{eq:norm}]. For networks with homogeneous degree, and for every stub $p$ of a motif $\boldsymbol{x}^\mathsf{a}$ (also called a leaf), i.e. a node with degree $1$ in the motif, there is the additional conservation relation
\begin{equation}\label{eq:alginv2}
\sum_{k}\left[\boldsymbol{x}^{\mathsf{a}}_{p\shortto k}\right]=\frac{\left[\mathsf{a}\right]}{\left[\mathsf{a}\setminus p\right]}\left[\boldsymbol{x}^\mathsf{a}_{p\shortto\emptyset}\right],\qquad\sum_{k}\llbracket\boldsymbol{x}^{\mathsf{a}}_{p\shortto k}\rrbracket=\llbracket\boldsymbol{x}^\mathsf{a}_{p\shortto\emptyset}\rrbracket,
\end{equation}
where the right-hand relation is again the normalised form of the left (via \eqref{eq:norm}). Under the conditions mentioned above, $\left[\mathsf{a}\right]/\left[\mathsf{a}\setminus p\right]$ is the number of out-motif connections at the node that connects to the stub. As \eqref{eq:alginv1} and \eqref{eq:alginv2} can be derived for each motif up to the chosen truncation order, they form an additional system of equations that can be used to reduce the dimensionality of the mean-field model via elimination. To avoid a multiplicity of equations when deriving relations \eqref{eq:alginv2}, we set up one equation per set of stubs that lead to isomorphic variants of the considered subgraph when their indices are permuted. \paragraph{Number of equations} A simple lower bound on the number of equations (before elimination) can be found by considering only chains. At each order there is only one chain graph. Order 1 contributes $n$ equations, where $n$ is the number of possible node states. Each chain of order $m>1$ has $\binom{m+n-1}{n}=\frac{(m+n-1)!}{(m-1)!\,n!}$ ways of labelling its $m$ nodes with $n$ species, so the total number of equations from chains is $n_{\mathrm{ch}}=n+\sum_{m=2}^{k}\binom{m+n-1}{n}$. For networks in which the number of short cycles goes to zero as $N\rightarrow\infty$, as in Erd\H{o}s-R{\'e}nyi random networks, the total number of equations equals this lower bound plus the number of equations due to non-chain trees.
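As a quick numerical check of this lower bound, the short Python helper below (our own, not part of the paper's tooling) evaluates $n_{\mathrm{ch}}$ and reproduces the $n_{\mathrm{ch}}$ column of Table~\ref{tab:Number-of-equations} for $n=2$.
\begin{verbatim}
from math import comb

def n_ch(n, k):
    """Lower bound on the number of equations coming from chain motifs."""
    return n + sum(comb(m + n - 1, n) for m in range(2, k + 1))

print([n_ch(2, k) for k in range(1, 6)])   # -> [2, 5, 11, 21, 36]
\end{verbatim}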
When cycles need to be taken into account, however, there are additional connected motif types at each order; see column $n_{g}$ of Table~\ref{tab:Number-of-equations} \cite[Table 4.2.1]{Harary1973}. To obtain the number of equations contributed by each of these, one has to consider all labelling orderings, knowing that some orderings lead to isomorphic motifs and hence do not add to the total. Table \ref{tab:Number-of-equations} lists the total number of equations $n_{\mathrm{eq}}$ resulting from our enumeration algorithm for dynamics with $n=2$, such as SIS epidemic spreading. For a particular network, these are further reduced by the number of motifs not occurring in the considered network and by the number of variables eliminated via the conservation relations. Column $n_4$ in Table~\ref{tab:Number-of-equations} shows the number of equations for the square lattice and column $n_{4c}$ the remaining number after eliminating variables by means of the conservation relations.
\begin{table}
\centering
\begin{tabular}{cccccc}
\hline
$k$ & $n_{\mathrm{ch}}$ & $n_{g}$ & $n_{\mathrm{eq}}$ & $n_{\mathrm{4}}$ & $n_{\mathrm{4c}}$\tabularnewline
\hline
$1$ & 2 & 1 & 2 & 2 & 1\tabularnewline
$2$ & 5 & 2 & 5 & 5 & 2\tabularnewline
$3$ & 11 & 4 & 15 & 11 & 4\tabularnewline
$4$ & 21 & 10 & 65 & 35 & 14\tabularnewline
$5$ & 36 & 31 & 419 & 113 & 38\tabularnewline
\hline
\end{tabular}
\caption{Cumulative number of equations ($n_{\mathrm{ch}}$, $n_{\mathrm{eq}}$, $n_{\mathrm{4}}$, $n_{\mathrm{4c}}$) or motif types ($n_{g}$) as a function of order $k$. $n_{\mathrm{ch}}$: number of equations from chain motifs (for $n=2$ species), $n_{g}$: number of subgraph types, $n_{\mathrm{eq}}$: total number of equations ignoring conservation relations (for $n=2$), $n_{\mathrm{4}}$: number of equations for the square lattice (for $n=2$), $n_{\mathrm{4c}}$: number of equations for the square lattice after elimination via conservation relations (for $n=2$).
\label{tab:Number-of-equations}} \end{table} \section{Moment equations for SIS spreading up to order 3}\label{sec:MFSIS3} The moment equations up to third order can be written (via \eqref{eq:xdotblock}) as \begin{equation}\label{eq:xdotblock3sis} \begin{blockarray}{ccccc} & \mathbf{x}_1 & \mathbf{x}_2 & \mathbf{x}_3 & \mathbf{x}_4\\ \begin{block}{c[cccc]} \mathrm{dist}ot{\mathbf{x}}_1 & \mathbf{Q}_1 & \mathbf{Q}_{12} & \mathbf{0} & \mathbf{0}\\ \mathrm{dist}ot{\mathbf{x}}_2 & \mathbf{0} & \mathbf{Q}_2 & \mathbf{Q}_{23} & \mathbf{0}\\ \mathrm{dist}ot{\mathbf{x}}_3 & \mathbf{0} & \mathbf{0} & \mathbf{Q}_3 & \mathbf{Q}_{34}\\ \end{block} \end{blockarray} \end{equation} Then, the coefficients for motifs up to order three are (omitting zero blocks) \begin{equation*}\label{eq:SIS3} {\tiny \makeatletter\setlength\BA@colsep{.2pt}\makeatother \begin{blockarray}{cccccccccccc} & [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}] & [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S] & [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}I] & [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}IS] & [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}SI] & [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}II] & [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}SI] & [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}II] & [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}SItr] & [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}IItr] & [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}IItr]\\ \begin{block}{cc|cc|cccccccc|} \mathrm{dist}ot{[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}]} & -\gamma & \beta & 0 & & & & & & & & \\\cline{2-12} \mathrm{dist}ot{[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S]} & & -\beta -\gamma & \gamma & 0 & \beta & 0 & -\beta & 0 & \beta & -\beta & 0 \\ \mathrm{dist}ot{[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}I]} & & 2 \beta & -2 \gamma & 0 & 0 & 0 & 2 \beta & 0 & 0 & 2 \beta & 0 \\\cline{2-12} \mathrm{dist}ot{[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}IS]} & & & & -2 \beta -\gamma & 0 & 2 \gamma & 0 & 0 & 0 & 0 & 0 \\ \mathrm{dist}ot{[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}SI]} & & & & 0 & -\beta -\gamma & \gamma & \gamma & 0 & 0 & 0 & 0 \\ \mathrm{dist}ot{[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}II]} & & & & \beta & \beta & -\beta -2 \gamma & 0 & \gamma & 0 & 0 & 0 \\ \mathrm{dist}ot{[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}SI]} & & & & 0 & 0 & 0 & -2 \beta -2 \gamma & \gamma & 0 & 0 & 0 \\ \mathrm{dist}ot{[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}II]} & & & & 0 & 0 & 2 \beta & 2 \beta & -3 \gamma & 0 & 0 & 0 \\ \mathrm{dist}ot{[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}SItr]} & & & & 0 & 0 & 0 & 0 & 0 & -2 \beta -\gamma & 2 \gamma & 0 \\ \mathrm{dist}ot{[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}IItr]} & & & & 0 & 0 & 0 & 0 & 0 & 2 \beta & -2 \beta -2 \gamma & \gamma \\ \mathrm{dist}ot{[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}IItr]} & & & & 0 & 0 & 0 & 0 & 0 & 0 & 6 \beta & -3 \gamma \\ \end{block} \end{blockarray}, } \end{equation*} while those for fourth-order motifs in $\mathbf{Q}_{34}$ are \begin{equation*} {\tiny 
\makeatletter\setlength\BA@colsep{.2pt}\makeatother \begin{blockarray}{ccccccccccccccccccccccccccccc} & [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}SSI] & [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}SSItre] & [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}SSIsqo] & [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}SSIstr] & [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}ISSsqi] & [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}ISI] & [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}ISIsqo] & [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}ISIstr] & [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}ISIsqi] & [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}SSI] & [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}SIItre] & [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}SIIsqo] & [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}SSIstr] & [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}SIIstr] & [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}ISSsqi] & [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}ISI] & [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}IIIsqo] & [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}ISIstr] & [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}ISIsqi] & [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}SIItre] & [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}SIIstr] & [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}IISsqi] & [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}SSSstr] & [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}SSSsqi] & [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}SSIsqii] & [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}SISsqi] & [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}SIIsqii] & [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}IIIsqii]\\\cline{2-29} \begin{block}{c|cccccccccccccccccccccccccccc} \mathrm{dist}ot{[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}SS]} & -2 \beta & -\beta & -2 \beta & -4 \beta & -3 \beta & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \mathrm{dist}ot{[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}IS]} & 0 & \beta & 0 & 2 \beta & \beta & -2 \beta & -2 \beta & -2 \beta & -2 \beta & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \mathrm{dist}ot{[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}SI]} & \beta & 0 & \beta & \beta & \beta & 0 & 0 & 0 & 0 & -\beta & -\beta & -\beta & -2 \beta & -\beta & -2 \beta & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \mathrm{dist}ot{[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}II]} & 0 & 0 & 0 & 0 & 0 & \beta & \beta & \beta & \beta & 0 & \beta & 0 & \beta & \beta & \beta & -\beta & -\beta & -\beta & -\beta & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \mathrm{dist}ot{[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}SI]} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 2 \beta & 0 & 2 \beta & 2 \beta & 0 & 2 \beta & 0 & 0 & 0 & 0 & -\beta & -2 \beta & -\beta & 0 & 0 & 0 & 0 & 0 & 0 \\ 
\mathrm{dist}ot{[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}II]} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 2 \beta & 2 \beta & 2 \beta & 2 \beta & \beta & 2 \beta & \beta & 0 & 0 & 0 & 0 & 0 & 0 \\ \mathrm{dist}ot{[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}SStr]} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -3 \beta & -6 \beta & -3 \beta & 0 & 0 & 0 \\ \mathrm{dist}ot{[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}SItr]} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -2 \beta & 0 & -2 \beta & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \beta & 2 \beta & \beta & -2 \beta & -2 \beta & 0 \\ \mathrm{dist}ot{[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}IItr]} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 2 \beta & 0 & 2 \beta & 0 & 0 & 0 & 0 & 0 & -\beta & -2 \beta & 0 & 0 & 0 & 2 \beta & 2 \beta & -\beta \\ \mathrm{dist}ot{[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}IItr]} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 3 \beta & 6 \beta & 0 & 0 & 0 & 0 & 0 & 3 \beta \\ \end{block} \end{blockarray} }. \end{equation*} When written out, this corresponds to the equations: \begingroup \allowdisplaybreaks {\footnotesize \begin{align} \mathrm{dist}dt \left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}\right] = & -\gamma\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}\right]+\beta\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S\right],\label{eq:mf2}\\ \mathrm{dist}dt \left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S\right] = & \gamma\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}I\right]-(\beta+\gamma)\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S\right]+\beta\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}SI\right]+\beta\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}SItr\right]-\beta\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}SI\right]-\beta\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}IItr\right],\label{eq:mf4}\\ \mathrm{dist}dt \left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}I\right] = & 2\beta\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S\right]-2\gamma\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}I\right]+2\beta\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}SI\right]+2\beta\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}IItr\right],\label{eq:mf5}\\ \mathrm{dist}dt \left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}IS\right] = & 
2\gamma\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}II\right]-(2\beta+\gamma)\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}IS\right]-2\beta\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}ISI\right]+\beta\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}SSItre\right]-2\beta\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}ISIsqo\right]-2\beta\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}ISIstr\right]+2\beta\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}SSIstr\right]-2\beta\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}ISIsqi\right]+\beta\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}ISSsqi\right],\label{eq:mf7}\\ \mathrm{dist}dt \left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}SI\right] = & \gamma\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}II\right]+\gamma\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}SI\right]-(\beta+\gamma)\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}SI\right]-\beta\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}SSI\right]+\beta\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}SSI\right]-\beta\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}SIItre\right]-2\beta\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}SSIstr\right]-\beta\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}SIIsqo\right]+\beta\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}SSIsqo\right]\label{eq:mf8}\\ & +\beta\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}SSIstr\right]-\beta\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}SIIstr\right]-2\beta\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}ISIsqi\right]+\beta\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}ISSsqi\right],\nonumber \\ \mathrm{dist}dt \left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}II\right] = & \gamma\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}II\right]-(\beta+2\gamma)\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}II\right]+\beta\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}IS\right]+\beta\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}SI\right]-\beta\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}ISI\right]+\beta\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}ISI\right]+\beta\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}SIItre\right]+\beta\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}SSIstr\right]-\beta\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}IIIsqo\right]+\beta\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}ISIsqo\right]\label{eq:mf9}\\ & -\beta\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}ISIstr\right]+\beta\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}ISIstr\right] 
+\beta\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}SIIstr\right]+\beta\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}ISSsqi\right]-\beta\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}ISIsqi\right]+\beta\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}ISIsqi\right],\nonumber \\ \mathrm{dist}dt \left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}SI\right] = & \gamma\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}II\right]-2(\beta+\gamma)\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}SI\right]+2\beta\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}SSI\right]-\beta\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}SIItre\right]-2\beta\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}SIIstr\right]+2\beta\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}SSIstr\right]+2\beta\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}SIIsqo\right]-\beta\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}IISsqi\right]+2\beta\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}ISSsqi\right],\label{eq:mf10}\\ \mathrm{dist}dt \left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}II\right] = & -3\gamma\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}II\right]+2\beta\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}II\right]+2\beta\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}SI\right]+2\beta\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}ISI\right]+\beta\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}SIItre\right]+2\beta\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}SIIstr\right] +2\beta\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}IIIsqo\right]+2\beta\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}ISIstr\right]\label{eq:mf11}\\ & +\beta\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}IISsqi\right]+2\beta\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}ISIsqi\right],\nonumber\\ \mathrm{dist}dt \left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}SItr\right] = & 2\gamma\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}IItr\right]-(2\beta+\gamma)\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}SItr\right]-2\beta\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}SSIstr\right]+\beta\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}SSSstr\right]-2\beta\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}SISsqi\right]+2\beta\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}SSSsqi\right] -2\beta\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}ISSsqi\right]-2\beta\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}SIIsqii\right]+\beta\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}SSIsqii\right],\label{eq:mf13}\\ \mathrm{dist}dt \left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}IItr\right] = & 
\gamma\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}IItr\right]-2(\beta+\gamma)\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}IItr\right]+2\beta\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}SItr\right]-\beta\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}SIIstr\right]+2\beta\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}SSIstr\right]-2\beta\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}IISsqi\right]+2\beta\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}SISsqi\right]+2\beta\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}IISsqi\right],\label{eq:mf14}\\ & -\beta\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}IIIsqii\right]+2\beta\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}SIIsqii\right],\nonumber\\ \mathrm{dist}dt \left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}IItr\right] = & -3\gamma\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}IItr\right]+6\beta\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}IItr\right]+3\beta\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}SIIstr\right]+6\beta\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}IISsqi\right]+3\beta\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}IIIsqii\right].\label{eq:mf15} \end{align} } \endgroup \section{Obtaining steady states from simulations}\label{sec:feedback} \mathrm{pa}ragraph{Feedback control} As we aim to compare the steady states of mean-field models to those of the simulation, we need a way to obtain the steady states of the simulations, even if they are unstable or marginally stable. Treating the simulation like an ideal physical experiment, the general approach to finding equilibria regardless of stability is to introduce a stabilizing feedback loop of the form \begin{equation} \label{eq:control} r(t)=r_0+g([\boldsymbol{x}^\mathsf{a}](t)-[\boldsymbol{x}^\mathsf{a}]_\mathrm{ref})\mbox{,} \end{equation} as was done in \cite{SGNWK08,schilder2015experimental,barton2017control} for continuation of unstable vibrations in mechanical experiments. In \eqref{eq:control}, $r$ is one of the conversion rates in $\mathbf{R}^0$ or $\mathbf{R}^1$. The feedback control \footnote{\eqref{eq:control} is the simplest possible form of feedback control as it changes only one input ($r$) depending on one output ($[\boldsymbol{x}^\mathsf{a}]$) and is static (no further processing of $[\boldsymbol{x}^\mathsf{a}]$ before feeding it back). Linear control theory ensures that unstable equilibria of ODEs can be stabilized with single-input-single-output dynamic feedback control using any single input and any single output satisfying some genericity conditions (linear controllability and observability).} makes this rate time dependent by coupling it to the motif frequency $[\boldsymbol{x}^\mathsf{a}](t)$ of a chosen motif $\boldsymbol{x}^\mathsf{a}$ through the relation \eqref{eq:control}. The factor $g$ is called the feedback control gain and is problem specific. When performing bifurcation analysis, it is convenient if the rate $r$ used as the control input is also the bifurcation parameter varied for the bifurcation diagram. 
In this case, whenever the simulation with feedback control \eqref{eq:control} settles to an equilibrium $(r_\mathrm{c}^*,[\boldsymbol{x}^\mathsf{a}]_\mathrm{c}^*)$ (in the limit of large $N$), the point $(r_\mathrm{c}^*,[\boldsymbol{x}^\mathsf{a}]_\mathrm{c}^*)$ lies on the equilibrium branch of the simulation without feedback control \cite{BS13,SOW14,renson2017experimental}. For SIS spreading, we choose
\begin{equation}
\label{eq:sis:control}
r=\beta\mbox{,}\quad\boldsymbol{x}^\mathsf{a}=\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}\mbox{,}\quad g=\infty\mbox{.}
\end{equation}
This limit of the feedback control results in what is called the conserved contact process, as proposed by \citet{Tome2001}. While SIS spreading does not have any unstable steady states, points that are only marginally stable, such as those near the continuous phase transition (epidemic threshold), are stabilised by the control as well. This stabilisation suppresses fluctuations, even close to the bifurcation, which results in faster convergence of the mean and the absence of absorption for any positive $[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}]$.
\begin{figure}
\caption{Feedback control of SIS epidemic spreading on a square lattice: (a) example time profiles of the infected fraction \texttt{[I]}.}
\label{fig:ctrl_t_bif}
\end{figure}
\paragraph{Simulation algorithm} In the limit of infinite gain $g$, each recovery event forces a simultaneous infection event, such that the number of infected nodes $[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}]$ stays constant. In a simulation based on the Gillespie algorithm \cite{Gillespie2007} this is done in the following steps (a minimal sketch of the loop follows below):
\begin{enumerate}[topsep=0pt,itemsep=-1ex,partopsep=1ex,parsep=1ex]
\item start with a number of randomly distributed infected nodes,
\item recover an infected node selected uniformly at random,
\item infect a randomly selected susceptible node, with selection probability proportional to its number of infected neighbours,
\item advance time by $\Delta t=-\log(\xi)/(\gamma\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}\right])$, where $\xi$ is a uniform random variable on the interval $[0,1]$,
\item go to step $2$.
\end{enumerate}
This loop runs until we observe that $\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S\right](t)$ is stationary (call this time $t_\mathrm{e}$). Then, for some additional time $T$, we observe the fluctuations of $\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S\right](t)$ around its mean.
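The following Python sketch implements this controlled loop for an arbitrary adjacency list. It is our own minimal illustration (function and variable names are not taken from the paper's code); it only records the time average of the number of links between infected and susceptible nodes, which is what the effective-rate estimate discussed next requires.
\begin{verbatim}
import math
import random

def conserved_contact_process(adj, n_infected, gamma=1.0, t_end=100.0, seed=1):
    """SIS spreading under infinite-gain control (conserved contact process):
    every recovery is paired with an infection, so [I] stays constant.
    adj maps node -> iterable of neighbours. Returns the time average of the
    directed I-S pair count."""
    rng = random.Random(seed)
    nodes = list(adj)
    infected = set(rng.sample(nodes, n_infected))      # step 1: random initial state
    t, is_time_integral = 0.0, 0.0
    while t < t_end:
        # [IS] of the current configuration, weighted by the Gillespie waiting time
        IS = sum(1 for i in infected for j in adj[i] if j not in infected)
        dt = -math.log(1.0 - rng.random()) / (gamma * n_infected)   # step 4
        is_time_integral += IS * dt
        t += dt
        infected.remove(rng.choice(list(infected)))    # step 2: recover at random
        susceptible = [j for j in nodes if j not in infected]
        weights = [sum(1 for k in adj[j] if k in infected) for j in susceptible]
        if sum(weights) == 0:                          # no I-S contact left; stop
            break
        infected.add(rng.choices(susceptible, weights=weights, k=1)[0])  # step 3
    return is_time_integral / t

# Example: a ring of 200 nodes held at [I] = 50; the effective rate of the
# uncontrolled model is then estimated as beta_tilde = gamma * [I] / <[IS]>.
ring = {i: [(i - 1) % 200, (i + 1) % 200] for i in range(200)}
IS_avg = conserved_contact_process(ring, n_infected=50)
beta_tilde = 1.0 * 50 / IS_avg
\end{verbatim}
Here \texttt{rng.choices} draws the susceptible node with probability proportional to its number of infected neighbours (step 3), and the waiting time of step 4 is attributed to the configuration present before the paired recovery and infection events.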
\citet{Tome2001} derived the effective infection rate $\tilde{\beta}$ for each chosen count $\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}\right]$ of infected nodes by noting that, because every infection event is paired with a recovery event,
\[
\tilde{\beta}\langle\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S\right]\rangle=\gamma\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}\right],
\]
where $\langle\cdot\rangle$ is an average over many independent realisations, such that
\[
\tilde{\beta}=\gamma\frac{\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}\right]}{\langle\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S\right]\rangle},
\]
which they found to lead to the same nontrivial steady states $\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}\right]^*(\tilde{\beta}/\gamma)$ as $\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}\right]^*(\beta/\gamma)$ in the model without control. As the model with control is ergodic (no absorbing states exist), we need to run only a single realisation and compute the effective infection rate as
\begin{equation}
\tilde{\beta}=\gamma\frac{\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}\right]}{\int_{t_\mathrm{e}}^{t_\mathrm{e}+T}{\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S\right](t)\,\mathrm{d} t}/T}.\label{eq:beteff_tav}
\end{equation}
In this manner, the error bars of the estimates of $\tilde\beta$ can be made arbitrarily small by increasing $T$.
\section{Correlations at given distance}\label{sec:dcorrs}
In Figure \ref{fig:Steady-state-correlation} we show, for SIS epidemic spreading, the correlation between two nodes at a given distance (only the correlation between infected nodes is shown). Its general definition is
\begin{equation}
C_{ab}^{D}=\frac{N^2}{\sum_{ij}\mathds{1}_{\mathrm{dist}(i,j)=D}}\frac{\sum_{ij}[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{a}}}_{i}\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{b}}}_{j}]_{\mathrm{dist}(i,j)=D}}{[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{a}}}][\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{b}}}]},\label{eq:CabD}
\end{equation}
or, with normalised motifs,
\begin{equation}
C_{ab}^{D}=\frac{\sum_{ij}\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{a}}}_{i}\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{b}}}_{j}\rrbracket_{\mathrm{dist}(i,j)=D}}{\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{a}}}\rrbracket\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{b}}}\rrbracket},\label{eq:CabDn}
\end{equation}
where the distance $\mathrm{dist}(i,j)$ is the length of the shortest path between $i$ and $j$. Note that the correlation between neighbouring nodes (\ref{eq:corrs0}) is a special case of this, i.e. $C_{ab}=C_{ab}^{1}$. An alternative way to write \eqref{eq:CabDn} is
\begin{equation}
C_{ab}^{D}=\frac{\sum_{c_{1},...,c_{D-1}}\llbracket a c_{1}...c_{D-1}b\rrbracket}{\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{a}}}\rrbracket\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.76em]{b}}}\rrbracket},\label{eq:CabDmots}
\end{equation}
where $\llbracket a c_{1}...c_{D-1}b\rrbracket$ is a chain motif of size $D+1$ with the indicated states.
This definition was used to derive $C_{ab}^{D}$ as approximated by mean-field models in Figure \ref{fig:Steady-state-correlation}, by applying the closure formula to the chain in the numerator. \begin{figure} \caption{Steady state correlation function $C_{II} \label{fig:Steady-state-correlation} \end{figure} \section{Steady states of MF1-MF2}\label{sec:ss12} Here, we show the expressions for the steady states and steady state correlations of MF1 and MF2. \subsection{MF1} The MF1 model equals SIS epidemic spreading under well-mixed conditions. Its steady states are the trivial and the endemic state, \[ \llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}\rrbracket_{1}^{*}=0,\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}\rrbracket_{2}^{*}=1-\frac{\gamma}{\kappa\beta}. \] \subsection{MF2} For the MF2 model the equations and, hence, their steady states, depend on the type of network. \subsubsection{Degree-homogeneous networks} The steady states of the dynamic equations for MF2 on degree-homogeneous networks \eqref{eq:mf2:sis:1} are \begin{equation} (\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}\rrbracket^{*},\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S\rrbracket^{*})_{1}=(0,0),\qquad(\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}\rrbracket^{*},\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S\rrbracket^{*})_{2}=(1-\frac{\kappa-1}{\kappa\beta(\kappa-1)/\gamma-1},\frac{\beta(\kappa-1)-1}{\beta(\beta\kappa(\kappa-1)-1)}).\label{eq:SS2homkap} \end{equation} The non-trivial correlations, obtained by substitution of the above into \eqref{eq:corrs}, are \begin{equation} C_{II}^{*}=\frac{(\gamma-\beta\kappa)(\gamma-\beta(\kappa-1)\kappa)}{\beta\kappa^{2}(\beta(\kappa-1)-\gamma)},\quad C_{SI}^{*}=1-\frac{\gamma}{\beta\kappa(\kappa-1)},\quad C_{SS}^{*}=\frac{\beta(\kappa-1)\kappa-\gamma}{\beta(\kappa-1)^{2}}.\label{eq:sscorrs2homkap} \end{equation} \subsubsection{Degree-heterogeneous networks} The steady states of the dynamic equations for MF2 on degree-heterogeneous networks \eqref{eq:mf2:sis:het2} are \begin{eqnarray} (\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}\rrbracket^{*},\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}I\rrbracket^{*},\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S\rrbracket^{*})_{1} & = & (0,0,0),\nonumber \\ (\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}\rrbracket^{*},\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}I\rrbracket^{*},\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S\rrbracket^{*})_{2} & = & \frac{\gamma \left(\sqrt{\beta (\kappa -1)^2+4 \gamma }-\sqrt{\beta } (\kappa +3)\right)}{2 \beta ^{3/2} \kappa }+1,\nonumber\\ & & \frac{\gamma\sqrt{\beta}(\kappa+1)-\gamma\sqrt{4\gamma+\beta(\kappa-1)^{2}}}{2\kappa\beta^{3/2}}).\label{eq:SS2hetkap} \end{eqnarray} The non-trivial correlations, via substitution of the above into \eqref{eq:corrs}, are \begin{align}\label{eq:sscorrs2hetkap} C_{II}^{*}&=\frac{4\beta^{3/2}\kappa-2\sqrt{\beta}\gamma(\kappa+3)+2\gamma\sqrt{\beta(\kappa-1)^{2}+4\gamma}}{\sqrt{\beta}\kappa\left(\sqrt{\beta(\kappa-1)^{2}+4\gamma}-\sqrt{\beta}(\kappa+1)\right)^{2}},\\ C_{SI}^{*}&=C_{SS}^{*}=\frac{2\gamma}{\kappa\sqrt{\beta^{2}(\kappa-1)^{2}+4\beta\gamma}-\beta(\kappa-1)\kappa}. 
\end{align} \section{MF$3$ for SIS spreading}\label{sec:MF3} We only apply MF3 to the square lattice, such that we can ignore all motifs that contain triangles and set $\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=1.33em]{abctr_nolab}}}\right]=\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=1.425em]{str}}}\right]=\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=1.425em]{sqi}}}\right]=\left[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=1.425em]{sqi}}}i\right]=0$ in the remaining equations, such that we obtain \begin{equation*} {\footnotesize \makeatletter\setlength\BA@colsep{.1pt}\makeatother \begin{blockarray}{cccccccccccccccccccc} & & [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}] & [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S] & [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}I] & [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}IS] & [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}SI] & [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}II] & [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}SI] & [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}II]& [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}SSI] & [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}SSItre] & [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}SSIsqo] & [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}ISI] & [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}ISIsqo] & [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}SSI] & [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}SIItre] & [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}SIIsqo] & [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}ISI] & [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}IIIsqo] & [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}SIItre]\\ \begin{block}{ccc|cc|ccccc|cccccccccc} \mathrm{dist}ot{[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}]} & & -\gamma & \beta & 0 & & & & & & & & & & & & & & & & \\\cline{2-21} \mathrm{dist}ot{[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S]} & & & -\beta -\gamma & \gamma & 0 & \beta & 0 & -\beta & 0 & & & & & & & & & & & \\ \mathrm{dist}ot{[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}I]} & & & 2 \beta & -2 \gamma & 0 & 0 & 0 & 2 \beta & 0 & & & & & & & & & & & \\\cline{2-21} \mathrm{dist}ot{[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}IS]} & & & & & -2 \beta -\gamma & 0 & 2 \gamma & 0 & 0 & 0 & \beta & 0 & -2 \beta & -2 \beta & 0 & 0 & 0 & 0 & 0 & 0\\ \mathrm{dist}ot{[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}SI]} & & & & & 0 & -\beta -\gamma & \gamma & \gamma & 0 & \beta & 0 & \beta & 0 & 0 & -\beta & -\beta & -\beta & 0 & 0 & 0\\ \mathrm{dist}ot{[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}II]} & & & & & \beta & \beta & -\beta -2 \gamma & 0 & \gamma & 0 & 0 & 0 & \beta & \beta & 0 & \beta & 0 & -\beta & -\beta & 0\\ \mathrm{dist}ot{[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}SI]} & & & & & 0 & 0 & 0 & -2 \beta -2 \gamma & \gamma & 0 & 0 & 0 & 0 & 0 & 2 \beta & 0 & 2 \beta & 0 & 0 & -\beta\\ \mathrm{dist}ot{[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}II]} & & & & & 0 & 0 & 2 
\beta & 2 \beta & -3 \gamma & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 2 \beta & 2 \beta& \beta\\ \end{block} \end{blockarray}. } \end{equation*} The 7 conservation relations are \begingroup \allowdisplaybreaks \begin{eqnarray*} [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}]+[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}] & = & N,\\\relax [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S]+[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}S] & = & \kappa[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}],\\\relax [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}I]+[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S] & = & \kappa[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}],\\\relax [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}SI]+[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}SS] & = & (\kappa-1)[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}S],\\\relax [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}II]+[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}IS] & = & (\kappa-1)[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S],\\\relax [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}SI]+[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}SI] & = & (\kappa-1)[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S],\\\relax [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}II]+[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}II] & = & (\kappa-1)[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}I]. \end{eqnarray*} \endgroup We eliminate $[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}I]$, $[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}IS]$, $[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}SI]$, $[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}II]$. 
Following (\ref{eq:subcons}-\ref{eq:xdotsubcons2}), this means \begin{equation*} \mathbf{x}= \begin{bmatrix} [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}] \\ [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S] \\ [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}I] \\ [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}IS] \\ [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}SI] \\ [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}II] \\ [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}SI] \\ [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}II]\\ \end{bmatrix}, \;\; \tilde{\mathbf{x}}= \begin{bmatrix} [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}] \\ [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S] \\ [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}SI] \\ [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}II]\\ \end{bmatrix}, \;\; \tilde{\mathbf{x}}_4= \begin{bmatrix} [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}SSI] \\ [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}SIIsqo] \\ [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}ISI] \\ [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}IIIsqo] \\ [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}SIItre]\\ \end{bmatrix}, \end{equation*} and \begin{align*} \tilde{\mathbf{Q}}_{1\cdots3,1\cdots3}&= \setcounter{MaxMatrixCols}{20} \left(\begin{smallmatrix} -\gamma & \beta & 0 &0 &0 &0 &0 &0 \\ 0& -\beta -\gamma & \gamma & 0 & \beta & 0 & -\beta & 0\\ 0& 2 \beta & -2 \gamma & 0 & 0 & 0 & 2 \beta & 0\\ 0&0 &0 & -2 \beta -\gamma & 0 & 2 \gamma & 0 & 0\\ 0&0 &0 & 0 & -\beta -\gamma & \gamma & \gamma & 0\\ 0&0 &0 & \beta & \beta & -\beta -2 \gamma & 0 & \gamma\\ 0&0 &0 & 0 & 0 & 0 & -2 \beta -2 \gamma & \gamma\\ 0&0 &0 & 0 & 0 & 2 \beta & 2 \beta & -3 \gamma\\ \end{smallmatrix}\right), & \mathbf{E}&= \left(\begin{smallmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ \kappa & -1 & 0 & 0 \\ -\kappa(\kappa-1) & 2 (\kappa -1) & 0 & 1 \\ 0 & \kappa -1 & -1 & 0 \\ \kappa(\kappa -1) & 1-\kappa & 0 & -1 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ \end{smallmatrix}\right),\\ \tilde{\mathbf{Q}}_{34}&= \left(\begin{smallmatrix} 2 \beta & 2 \beta & 0 & 0 & -\beta \\ 0 & 0 & 2 \beta & 2 \beta & \beta \\ \end{smallmatrix}\right), & \mathbf{c}&= \left(\begin{smallmatrix} 0&0&0&0&0&0&0&0\\ \end{smallmatrix}\right)^T, \end{align*} which results in the four remaining equations ($\tilde{\mathbf{c}}=\mathbf{0}$ so not shown) \begin{equation}\label{eq:sqlatopen} {\footnotesize \makeatletter\setlength\BA@colsep{1pt}\makeatother \begin{blockarray}{ccccccccccc} & & [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}] & [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S] & [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}SI] & [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}II] & [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}SSI] & [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}SIIsqo] & [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}ISI] & [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}IIIsqo] & [\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}SIItre]\\ \begin{block}{ccc|c|cc|ccccc} 
\mathrm{dist}ot{[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}]} & & -\gamma & \beta & & & & & & & \\\cline{2-11} \mathrm{dist}ot{[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S]} & & 4\gamma & 2\beta -2\gamma & -2\beta & 0 & & & & & \\\cline{2-11} \mathrm{dist}ot{[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}SI]} & & & & -2 \beta -2 \gamma & \gamma & 2 \beta & 2 \beta & 0 & 0 & -\beta\\ \mathrm{dist}ot{[\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}II]} & &24\beta & -6\beta & 2 \beta & -3 \gamma-2\beta & 0 & 0 & 2 \beta & 2 \beta& \beta\\ \end{block} \end{blockarray}. } \end{equation} where we have used $\kappa=4$ for the square lattice. In normalised form this is \begin{equation} {\footnotesize \makeatletter\setlength\BA@colsep{1pt}\makeatother \begin{blockarray}{ccccccccccc} & & \llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}\rrbracket & \llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S\rrbracket & \llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}SI\rrbracket & \llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}II\rrbracket & \llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}SSI\rrbracket & \llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}SIIsqo\rrbracket & \llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}ISI\rrbracket & \llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}IIIsqo\rrbracket & \llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}SIItre\rrbracket\\ \begin{block}{ccc|c|cc|ccccc} \mathrm{dist}ot{\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}\rrbracket} & & -\gamma & 4\beta & & & & & & & \\\cline{2-11} \mathrm{dist}ot{\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S\rrbracket} & & \gamma & 2\beta -2\gamma & -6\beta & 0 & & & & & \\\cline{2-11} \mathrm{dist}ot{\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}SI\rrbracket} & & & & -2 \beta -2 \gamma & \gamma & \frac{14}{3}\beta & \frac{4}{3}\beta & 0 & 0 & -2\beta\\ \mathrm{dist}ot{\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}II\rrbracket} & &2\beta & -2\beta & 2 \beta & -3 \gamma-2\beta & 0 & 0 & \frac{14}{3}\beta & \frac{4}{3}\beta& 2\beta\\ \end{block} \end{blockarray}. 
} \end{equation} To close the system of equations, we apply \eqref{eq:dec:mot:chord} to the chains and star, and \eqref{eq:dec:mot:nchord} to the cycles (using the extension to non-maximal 2-cliques) and obtain the normalised closures (see also Section \ref{sec:clex}, examples 3-5) \begingroup \allowdisplaybreaks \begin{align}\label{eq:sis:clos:SSSItre} \llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}SSI\rrbracket & \approx \frac{\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}SI\rrbracket^2}{\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}S\rrbracket}=\frac{(\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S\rrbracket - \llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}SI\rrbracket)^2}{(1 - \llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}\rrbracket - \llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S\rrbracket)},\nonumber\\ \llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}ISI\rrbracket & \approx \frac{\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}II\rrbracket\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}SI\rrbracket}{\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S\rrbracket}=\frac{(\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}\rrbracket - \llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}II\rrbracket - \llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S\rrbracket) \llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}SI\rrbracket}{\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S\rrbracket},\nonumber\\ \llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}SIItre\rrbracket & \approx \frac{\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S\rrbracket^3}{\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}\rrbracket^2}=\frac{\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S\rrbracket^3}{(1 -\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}\rrbracket)^2},\\ \llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}SIIsqo\rrbracket & \approx \frac{\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}II\rrbracket^2\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}SI\rrbracket^{2}}{\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S\rrbracket^2\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}S\rrbracket\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}I\rrbracket}=\frac{(\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}\rrbracket - \llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}II\rrbracket - \llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S\rrbracket)^2 (\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S\rrbracket - \llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}SI\rrbracket)^2}{(1 - \llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}\rrbracket - \llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S\rrbracket) 
(\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}\rrbracket - \llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S\rrbracket) \llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S\rrbracket^2},\nonumber\\ \llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}IIIsqo\rrbracket & \approx \frac{\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{S}}}II\rrbracket^{2}\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}II\rrbracket\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}SI\rrbracket}{\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S\rrbracket^2\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}I\rrbracket^2}=\frac{\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}II\rrbracket (\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}\rrbracket - \llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}II\rrbracket - \llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S\rrbracket)^2 \llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}SI\rrbracket}{(\llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}\rrbracket - \llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S\rrbracket)^2 \llbracket\vcenter{\hbox{\includesvg[inkscapelatex=false,height=.63em]{I}}}S\rrbracket^2},\nonumber \end{align} \endgroup where we also used the conservation relations to substitute any previously eliminated motif. The steady state solutions of the final system are roots of a ninth-order polynomial, of which two are admissible (see Figure~\ref{fig:Comparison-of-the} for numerical results). \section{MF4 for SIS spreading in \texttt{Mathematica}\label{sec:MFhoeqs}} We uploaded two application examples of our algorithm at fourth order to \href{https://notebookarchive.org/moment-closure-and-moment-equations-for-sis-spreading--2022-06-6hd82es/}{\texttt{Mathematica}'s notebook archive} (best viewed locally). In the first section of the file, the procedure as explained in the main text is followed for MF4 on the square lattice. The second section in the file shows the general coefficient matrix $\mathbf{Q}$ for the (unclosed) moment equations up to fourth order, with columns labelled by $\mathbf{x}_1,\mathrm{dist}ots,\mathbf{x}_5$ (stored in the variable \texttt{mots}). The algorithm can be instructed to write the resulting set of equations as \texttt{MATLAB} \cite{MathWorks2021} functions in a format compatible with continuation software such as \texttt{COCO} \cite{COCO} and \texttt{MatCont} \cite{matcont}. \section{Closure examples}\label{sec:clexs} Here we show 13 examples of subgraph decompositions based on the method explained in Section \ref{sec:Closure-scheme} (\ref{eq:dec:mot:chord},~\ref{eq:dec:mot:nchord}). They are shown in table form on the next page but we also discussed three commonly used ones (1-3 in the table) in Section \ref{sec:clex}. As the closures can be written independent of the particular labels, they are shown in the table for subgraphs only, with each node tagged with its index. We have also dropped the $\langle\cdot\rangle$, assuming that the law of large numbers applies, such that the counts approach their expectations almost surely for increasing network size $N$. 
The examples can be understood by reading the table from left to right; comments are added in the last column. The columns show:
\begin{enumerate}
\item the example number,
\item the considered subgraph,
\item its diameter,
\item its independence map, assuming independence beyond distance $\text{diam}(\mathsf{a})-1$,
\item whether the independence map is chordal,
\item the derived junction graph of $d$-cliques, where $d=\text{diam}(\mathsf{a})-1$,
\item the resulting closure formula,
\item a triangulation of the independence map if the independence map is non-chordal,
\item the junction graph of $d$-cliques based on the triangulation,
\item the closure formula based on the triangulation,
\item an ad-hoc extension of the method to non-maximal cliques for some subgraphs,
\item the resulting closure formula based on this extension,
\item comments.
\end{enumerate}
Our \texttt{Mathematica} script \cite{Wuyts2022mfmcl} generates these closures automatically. \eject \pdfpagewidth=70cm \pdfpageheight=60cm \includegraphics{./closure_examples.pdf} \end{document}
\begin{document} \begin{abstract} We study dynamics and bifurcations of $2$-dimensional reversible maps having a symmetric saddle fixed point with an asymmetric pair of nontransversal homoclinic orbits (a symmetric nontransversal homoclinic figure-8). We consider one-parameter families of reversible maps unfolding the initial homoclinic tangency and prove the existence of infinitely many sequences (cascades) of bifurcations related to the birth of asymptotically stable, unstable and elliptic periodic orbits. \vskip1cm \noindent{\textsc{2010 Mathematics Subject Classification:} 37-XX, 37G20, 37G40, 34C37.} \noindent{\textsc{Keywords:} Newhouse phenomenon, homoclinic and heteroclinic tangencies, reversible mixed dynamics.} \end{abstract} \maketitle \section{Introduction} The mathematical foundations of Bifurcation Theory were laid in the famous paper of Andronov and Pontryagin~\cite{AP37}, where the notion of ``rough'' (structurally stable) systems was introduced. Later on, in a series of (classical) papers by Andronov, Leontovich and Maier (see e.g. the books~\cite{AGLM1,AGLM2}) it was proved that rough 2-dimensional systems form an open and dense set in the space of dynamical systems. This notion of roughness of a system (i.e. topological equivalence/conjugacy of the chosen system with any close system) extends naturally to multidimensional systems. Such an extension was carried out in the 1960s (a period that Anosov called the time of the ``hyperbolic revolution''), when structurally stable systems also became known as ``Hyperbolic Systems''. Such systems are divided into two large classes: Morse-Smale systems (with a simple dynamics) and hyperbolic systems with infinitely many periodic orbits. By definition, structurally stable systems form open subsets of the space of systems. However, in the multidimensional case (that is, dimension $\geq 3$ for flows and $\geq 2$ for diffeomorphisms), they are not dense, as was first shown by Smale~\cite{Sm66,Sm67}. A very important breakthrough was due to Newhouse~\cite{N70,N79}, who proved that, near any 2-dimensional diffeomorphism with a homoclinic tangency, there exist open regions consisting of diffeomorphisms exhibiting nontransversal intersections between stable and unstable manifolds of hyperbolic basic sets. Such sets were called {\em wild hyperbolic} by Newhouse. The original formulation of Newhouse's result is as follows:\\ {\bf Newhouse Theorem} \cite{N79}. {\em Let $M^2$ be a $C^\infty$ compact 2-dimensional manifold and let $r\geq 2$. Assume that $f \in \mbox{Diff}\, ^r(M^2)$ has a hyperbolic set whose stable and unstable manifolds are tangent at some point $x$. Then $f$ may be $C^r$ perturbed inside an open set $U \subset \mbox{Diff}\, ^r(M^2)$ so that each $g \in U$ has a wild hyperbolic set near the orbit of $x$.}\\ Several consequences derived from this theorem have become crucial in the theory of dynamical systems: \begin{itemize} \item There exist open regions in the space of 2-dimensional diffeomorphisms (3-dimensional flows), with the $C^r$-topology, $r\geq 2$, called {\em Newhouse regions}, where the systems having a homoclinic tangency form a dense subset. \item These Newhouse regions exist in any neighbourhood of any 2-dimensional diffeomorphism having a homoclinic tangency.
\end{itemize} Newhouse Theorem was extended to a general multidimensional context~\cite{GST93b,PV94,Romero95} and later on to area-preserving diffeomorphisms~\cite{Duarte99,Duarte00,Duarte02}.\footnote{Indeed, it also holds in the multidimensional symplectic case~\cite{DGT}.} In the context of general parameter unfoldings~\cite{N79,GST93b}, Newhouse regions are also regarded as open domains in the parameter space such that the values of the parameters which give rise to homoclinic tangencies form a dense subset. In the case of 1-parameter families, they are usually called {\em Newhouse intervals}. One of the best known and most fundamental dynamical properties of Newhouse regions is {\em the coexistence of infinitely many hyperbolic periodic orbits of different types}. In the dissipative framework, i.e. when the initial quadratic homoclinic tangency is associated to a fixed (periodic) point $O$ with multipliers $\lambda_1,...,\lambda_m,\gamma$, where $|\lambda_i|<1<|\gamma|$ and the saddle value $\sigma = \mbox{max}_i |\lambda_i|\cdot |\gamma|$ is less than 1, this property is known as the {\em Newhouse phenomenon}: \begin{itemize} \item In the dissipative case, the set $\mathcal{B}$ of parameter values $\mu$ in any Newhouse interval $I$ giving rise to the coexistence of infinitely many periodic sinks and saddles forms a residual subset. \end{itemize} This result was first obtained in~\cite{N74}. Its proof is based essentially on the theory of bifurcations of homoclinic tangencies. The basic elements of this theory were established in the celebrated work by Gavrilov and Shilnikov \cite{GaS73}, where the so-called {\em Theorem on Cascades of periodic sinks} was proved. Indeed, this theorem states the existence of an infinite sequence of intervals of values of a (splitting) parameter for which there exists a single stable periodic orbit. Multidimensional versions of this result and criteria for the birth of periodic sinks at homoclinic bifurcations were established in~\cite{G83,PV94,GStT96,GST08}. The Newhouse phenomenon is very important, in particular, in the theory of the so-called quasiattractors~\cite{AfrSh83}, i.e. strange attractors which either contain periodic sinks of very large periods or such periodic sinks appear under arbitrarily small perturbations. Therefore, a natural question arises: how often is the Newhouse phenomenon met in chaotic dynamics? A partial answer to this question, concerning the measure of the set $\mathcal{B}$ introduced above, was considered in a series of papers. Indeed, in~\cite{TY86,Gorodetski-Kaloshin07} the authors showed that this set $\mathcal{B}$ contains a zero-measure secondary subset of parameter values for which there exist infinitely many single-round periodic orbits (i.e., orbits passing only once within a neighbourhood of the initial homoclinic orbit).\footnote{ This set $\mathcal{B}$ can have positive measure for a dense set of suitable families~\cite{T10} and also for generic families of multidimensional (with $\mbox{dim}\;\geq 3$) diffeomorphisms~\cite{B16}.} Since Newhouse regions exist near any system presenting a homoclinic tangency, they can be found in the space of parameters of many dynamical models exhibiting chaotic behaviour and in the absence of uniform hyperbolicity. Their extreme richness makes a complete description an unreachable task: tangencies of arbitrarily high order as well as highly degenerate periodic orbits are dense in these regions~\cite{GST93a,GST99}.
A system is said to exhibit {\em mixed dynamics} if the closures of the sets of periodic orbits of different types have non-empty intersections. This property can be generic\footnote{It is also persistent in the case of a type of dynamical chaos~\cite{GT17}, which is characterised by the fundamental property that the intersection of an attractor $\mathcal{A}$ and a repeller $\mathcal{R}$ is non-empty and $\mathcal{A}\neq \mathcal{R}$. This is neither the situation in the dissipative chaos (strange attractor), when $\mathcal{A}\cap \mathcal{R}=\emptyset$, nor in the conservative chaos, when $\mathcal{A} = \mathcal{R}$.}. Indeed (see~\cite{GST97}), there exist Newhouse regions with mixed dynamics near any $2$-dimensional diffeomorphism with a nontransversal heteroclinic cycle containing at least two saddle periodic points $O_1,O_2$ whose Jacobians satisfy $|J(O_1)|>1$ and $|J(O_2)|<1$. This kind of cycle is commonly referred to as \emph{contracting-expanding} and it appears to be rather usual in 2-dimensional reversible diffeomorphisms. Recall that a diffeomorphism $f$ is called \emph{reversible} if it is smoothly conjugated to its inverse by means of an involution $R$ (named a \emph{reversor}), that is, $R \circ f = f^{-1} \circ R$, with $R^2 = \mathrm{Id}$, $R\ne \mathrm{Id}$. The involution $R$ does not need to be linear. It is often assumed to have the same smoothness as the diffeomorphism $f$. Equivalently, $f$ is reversible if and only if it can be written as the product of two involutions, $f=g \circ h$ with $g^2= h^2 = \mathrm{Id}$. The points which are invariant under the involution $R$ form the symmetry manifold $\mathit{Fix}\,R= \left\{ (x,y) \ | \ R(x,y)=(x,y) \right\}$. Throughout this work we will consider planar $R$-reversible diffeomorphisms $f$ with $R$ such that $\dim \mathit{Fix}\,R=1$, that is, a curve. We say that an object $\Lambda$ is \emph{symmetric} when $R(\Lambda)=\Lambda$. For extra emphasis, the term \emph{self-symmetric} may also be used. By a \emph{symmetric couple of objects} $\Lambda_1,\Lambda_2$, we mean two different objects which are symmetric to each other, i.e., $R(\Lambda_1)=\Lambda_2$ and, thus, $\Lambda_1=R(\Lambda_2)$. Two examples of contracting-expanding heteroclinic cycles for an $R$-reversible diffeomorphism are shown in Fig.~\ref{fig:Intro1}. In case~(a) the diffeomorphism has a symmetric couple of saddle periodic (fixed) points~$O_1$ and~$O_2= R(O_1)$, as well as two heteroclinic orbits $\Gamma_{12}\subset W^u(O_1)\cap W^s(O_2)$ and $\Gamma_{21}\subset W^u(O_2)\cap W^s(O_1)$ such that $R(\Gamma_{21})=\Gamma_{21}$, $R(\Gamma_{12})=\Gamma_{12}$. The orbit $\Gamma_{12}$ is nontransversal: the manifolds $W^u(O_1)$ and $W^s(O_2)$ have a quadratic tangency along that orbit. Since $R(O_1)=O_2$, their Jacobians satisfy $J(O_1)=J^{-1}(O_2)$ and, provided that $J(O_i) \neq \pm 1$, $i=1,2$, the heteroclinic cycle is contracting-expanding. \begin{figure} \caption{{\footnotesize Two different examples of planar reversible maps with symmetric nontransversal (quadratic tangency) heteroclinic cycles: (a) with a nontransversal symmetric heteroclinic orbit to a symmetric couple of saddle points, and (b) with a symmetric couple of nontransversal heteroclinic orbits to symmetric saddle points.}\label{fig:Intro1}} \end{figure} Reversible diffeomorphisms can present very rich dynamics and are worth studying in their own right.
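As a simple explicit illustration of these notions (the particular map below is chosen for concreteness only and plays no role in what follows), consider the planar diffeomorphism
\[
f(x,y)=\left(y\,e^{x},\,-x\right),
\]
which can be written as $f=g\circ h$ with the involutions $g(x,y)=(x\,e^{y},-y)$ and $h(x,y)=(y,x)$. Hence $R\circ f\circ R = h\circ g = f^{-1}$ for the linear reversor $R=h$, whose fixed set $\mathit{Fix}\,R=\{x=y\}$ is a curve, while $\det Df(x,y)=e^{x}\not\equiv\pm 1$, so that $f$ is reversible but not conservative.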
Moreover, when they are not conservative (this is an open property) they can possess very interesting dynamics and, in particular, the so-called {\em reversible mixed dynamics}. Its essence, for the $2$-dimensional case, is given by the following two conditions: \begin{itemize} \item The reversible diffeomorphism has simultaneously infinitely many symmetric couples of periodic sinks-sources, periodic saddles with Jacobians greater and less than 1 as well as infinitely many symmetric periodic elliptic orbits and periodic saddles with Jacobian equal to 1. \item The closures of periodic orbits of different types have non-empty intersections. \end{itemize} These properties seem to be universal when symmetric homoclinic tangencies and symmetric nontransversal heteroclinic cycles are involved in the dynamics. Indeed, the following assertion was formulated in~\cite{DGGLS13}. \noindent{\bf Reversible Mixed Dynamics Conjecture (RMD).} \emph{ $2$-dimensional reversible diffeomorphisms with reversible mixed dynamics are generic in Newhouse regions where diffeomorphisms with symmetric homoclinic or/and heteroclinic tangencies are dense.} This RMD conjecture is true when Newhouse regions with $C^r$-topology ($2\leq r \leq \infty$) are considered (see~\cite{GLRT14}). However, in the real analytic case and for parameter families, it has been proved for a general $1$-parameter unfolding only in two cases -- for 2-dimensional reversible diffeomorphisms with nontransversal heteroclinic cycles, as shown in Fig.~\ref{fig:Intro1}. The cycle of Fig.~\ref{fig:Intro1}(a) was considered in~\cite{LS04}: such a cycle contains a symmetric couple of saddle fixed (periodic) points (with Jacobians less and greater than 1, respectively) and a pair of symmetric transverse and nontransversal heteroclinic orbits. The cycle of Fig.~\ref{fig:Intro1}(b) was considered in~\cite{DGGLS13}: such a cycle contains a symmetric couple of nontransversal heteroclinic orbits to symmetric saddle fixed (periodic) points. \begin{figure} \caption{{\footnotesize Three examples of planar reversible maps with symmetric nontransversal homoclinic tangencies: (a) a symmetric quadratic homoclinic tangency; (b) a symmetric cubic homoclinic tangency; (c) a symmetric couple of nontransversal homoclinic (figure-8) orbits to the same symmetric saddle point.} \label{fig:3case} \end{figure} One of the targets concerning RMD conjecture is its proof for 2-dimensional reversible diffeomorphisms which have a homoclinic tangency to a symmetric fixed (periodic) point. There are three main cases, as illustrated in Fig.~\ref{fig:3case}. Figure~\ref{fig:3case}(a) and Figure~\ref{fig:3case}(b) relate to the case when the orbit of the homoclinic tangency is also symmetric and the tangency is either (a)~quadratic or (b)~cubic. In Fig.~\ref{fig:3case}(c) we have the case of a symmetric fixed point and a symmetric couple of orbits with quadratic homoclinic tangencies. This paper is devoted to this {\em third case} displayed in Fig.~\ref{fig:3case}(c). Roughly speaking, it will be shown that in a general (and symmetrical) unfolding of $1$-parameter families of reversible maps with homoclinic tangencies, there exist Newhouse intervals with reversible mixed dynamics. We notice that the results of this paper will not only concern orientable planar reversible maps, as the one showed in Fig.~\ref{fig:3case}(c). They will be also valid for maps defined on 2-dimensional non-orientable manifolds allowing a similar structure. 
For example, on a manifold constructed as a disc surrounding the saddle point with two glued symmetric M\"{o}bius bands. The paper is structured as follows. Section~\ref{sec:mainres} contains the statement of the problem, the main hypotheses and the description of the principal results: Theorems~\ref{th:th1}--\ref{thm:4}. Section~\ref{sec:prelim} deals with the construction of the local and global maps. Theorem~\ref{th:th1} and Theorem~\ref{th:th2} are proved in Section~\ref{sec:proofTh1} and Section~\ref{sec:proofTh2}, respectively. Section~\ref{sec:proofMainThm} is devoted to the proof of Theorems~\ref{prop:ni} and~\ref{thm:4}. Finally, in Section~\ref{sec:ex} we present some examples of periodically perturbed reversible vector fields giving rise to reversible maps with quadratic hetero/homoclinic tangencies as considered above. \section{Setting and main results}\label{sec:mainres} \subsection{The framework}\label{sec:sec_2_1} Let $f_0$ be a $\mathcal{C}^r$-smooth ($r\geq 4$) reversible diffeomorphism of a 2-dimensional manifold $M^2$ with reversor $R$ satisfying $\dim \mathit{Fix}\, R = 1$. Assume that the following hypotheses hold: \begin{itemize} \item[\textsf{[A]}] The diffeomorphism $f_0$ has a (symmetric) saddle fixed point $O \in \mathit{Fix}\,R$ with multipliers $\lambda,\lambda^{-1}$ and $0<\lambda<1$. \item[\textsf{[B]}] $f_0$ has a symmetric couple of homoclinic orbits $\Gamma_1$ and $\Gamma_2$ such that $\Gamma_2 = R(\Gamma_1)$ (and, thus, $\Gamma_1 = R(\Gamma_2)$) and satisfies that the invariant manifolds $W^{u}(O)$ and $W^{s}(O)$ have quadratic tangencies at the points of $\Gamma_{1}$ and $\Gamma_2$. \end{itemize} \begin{figure} \caption{(a) An example of reversible map with a couple of symmetric homoclinic tangencies (homoclinic figure-8). (b) A neighbourhood of the contour $O\cup\Gamma_1\cup\Gamma_2$.} \label{fig:hom8neigh} \end{figure} Let us be more precise with the latter hypothesis. Take $U$ a small fixed neighbourhood of the contour $O\cup \Gamma_{1}\cup\Gamma_{2}$. $U$ is formed by the union of a small neighbourhood $U_0$ of the point $O$ and several neighbourhoods $U_1^j$ and $U_2^j$, $j=1,...,n$, of those points of the orbits $\Gamma_{1}$ and $\Gamma_{2}$ which do not lie in $U_0$ (see Fig.~\ref{fig:hom8neigh}(b)). Thus, $\Gamma_{1}\subset U_0\cup U_1$ and $\Gamma_{2}\subset U_0\cup U_2$, where $U_i = U_0\cup U_i^1...\cup U_i^n$ is a neighbourhood of the homoclinic orbit $\Gamma_i$, for $i=1,2$. It is not restrictive to assume that $U$ is symmetric, that is $R(U)=U$. Indeed, this comes from assuming that $R(U_0)=U_0$ and $R(U_1^j)=U_2^j$ (and so $R(U_2^j)=U_1^j$). Consider the orbit $\Gamma_1$ and take a pair of its points, say, $M_1^-\in W^uloc(O)\cap U_0$ and $M_1^+:=f_0^q(M_1^-) \in W^sloc(O)\cap U_0$, for a suitable positive integer value $q$. Denote by $\Pi_1^- \subset U_0$ a small neighbourhood of $M_1^-$ and define the map $T_1:=f_0^q: \Pi_1^- \rightarrow U_0$. Assume that the following hypothesis is also satisfied: \begin{itemize} \item[\textsf{[C]}] The Jacobian $J_1=J(T_1)|_{M_1^-}$ of the map $T_{1}$ at the point $M_1^-$ is different from $\pm 1$. Without loss of generality we can assume that $|J_{1}|<1$. \end{itemize} It is not difficult to check that condition \textsf{[C]} does not depend on the choice of the points $M_1^-$ and $M_1^+$. Moreover, it implies that the map $T_1$, defined in a neighbourhood of $M_1^-$ is not conservative. 
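A short way to see the first of these claims, under the assumption that the normal-form coordinates of Lemma~\ref{LmSaddle} below are used in $U_0$, is the following: at every point of $W^{s}_{loc}(O)=\{y=0\}$ and of $W^{u}_{loc}(O)=\{x=0\}$ the differential of $T_0$ is triangular, e.g.
\[
DT_0\big|_{(x,0)}=\begin{pmatrix}\lambda & \lambda^{-1}x^{2}h(x,0)\\[2pt] 0 & \lambda^{-1}\end{pmatrix},
\]
so that $\det DT_0=1$ along both local invariant manifolds. Replacing $M_1^-$ by $T_0^{-n}(M_1^-)$ and $M_1^+$ by $T_0^{m}(M_1^+)$ replaces $T_1$ by $T_0^{m}\,T_1\,T_0^{n}$, whose Jacobian at the new point equals $J_1$ by the chain rule, since the corresponding iterates remain on the local invariant manifolds.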
\\ \begin{remark} \mbox{} \begin{itemize} \item[{\bf 1.}] We do not consider the case when the fixed point $O$ has multipliers $\lambda,\lambda^{-1}$ with $-1<\lambda<0$. This is a much more complicated case, since $f_0$ would have an additional symmetry due to the negativity of the two multipliers of $O$. \item[{\bf 2.}] In condition \textsf{[C]}, the case $0<J_1<1$ corresponds to $f_0$ orientable while the case $-1<J_1<0$ relates to $f_0$ non-orientable. The latter means that the manifold $M^2$ is non-orientable (the orbit behaves near the global pieces of $\Gamma_1$ and $\Gamma_2$, geometrically, like on a M\"{o}bius band). \item[{\bf 3.}] Our assumptions also cover the case of reversible maps as in Fig.~\ref{fig:neigh8rev}, i.e. when only one pair of stable and unstable separatrices of $O$ creates the homoclinic orbits $\Gamma_1$ and $\Gamma_2$. Fig.~\ref{fig:neigh8rev}(b) shows how such a ``fish''-configuration couple of nontransversal homoclinic orbits may be created by perturbation of a reversible map with a symmetric transverse homoclinic orbit. \end{itemize} \end{remark} Consider two points $M_2^-\in W^uloc(O)$ and $M_2^+\in W^sloc(O)$ of the orbit $\Gamma_{2}$ being the symmetric images of the homoclinic points $M_1^+$ and $M_1^-$, i.e. $M_2^-= R(M_1^+)$ and $M_2^+=R(M_1^-)$. Since $f_0$ is ($R$-)reversible, it follows that $f_0^q(M_2^-)=M_2^+$; indeed, $R\circ f_0^q = f_0^{-q}\circ R$, so $f_0^q(M_2^-)=f_0^q(R(M_1^+))=R(f_0^{-q}(M_1^+))=R(M_1^-)=M_2^+$. Let $T_{2}$ denote the restriction of the map $f_0^q$ onto a small neighbourhood of the point $M_2^-$. Moreover, we can consider $T_2$ defined from $\Pi_2^- = R(\Pi_1^+)$ onto $\Pi_2^+ = R(\Pi_1^-)$ (see Fig.~\ref{fig:frtms(rev)2}). Since $T_2 = R\, T_1^{-1}\, R$ we have that $J(T_2)|_{M_2^-} = (J(T_1)|_{M_1^-})^{-1}$ and from \textsf{[C]} it follows that $|J_2| = | J(T_2)|_{M_2^-}| >1$. As it will be properly defined later, iterations of $f_0$ in the neighbourhood $U_0$ around $O$ will be represented by the map $T_0^k$, for positive integer $k$. Observe that, for maps close to $f_0$, one can subdivide the nonwandering orbits in $U$ (except for $O$) into three different types: 1-orbits, which stay only in $U_0\cup U_1$; 2-orbits, which stay only in $U_0\cup U_2$; and 12-orbits, which visit both $U_1\setminus U_0$ and $U_2\setminus U_0$. From these types of orbits, we select the so-called single-round periodic orbits, that is, those which pass only once inside $U$. We will refer to them, respectively, as single-round periodic $1$-, $2$- and $12$-orbits. For $1$-orbits, we will consider points $x\in \Pi_1^+$, take their images under a suitable number $k$ of iterates of $T_0$, reaching $\Pi_1^-$, and study $\bar x = T_1T_0^k(x)\in \Pi_1^+$ as the return point. If $\bar{x}=x$ we say that $x$ is a fixed point of the first return map $T_{1k} = T_1 T_0^k$. Analogously, the first return maps for single-round periodic 2-orbits may be represented in the form $T_{2k} = T_2 T_0^k$, from $\Pi_2^+$ onto itself. And finally, we will also look for single-round periodic 12-orbits or, equivalently, fixed points of $\ensuremath{\mathbb{T}}otkm = T_2 T_0^m T_1 T_0^k$ from $\Pi_2^+$ onto itself, for large enough integers $k$ and $m$. For more details, see Section~\ref{sec:prelim} and Figs.~\ref{fig:hom8neigh} and \ref{fig:frtms(rev)}.
\begin{figure} \caption{(a) A reversible diffeomorphism with a symmetric transversal homoclinic orbit; (b) creation of a symmetric couple of nontransversal homoclinic orbits $\Gamma_1$ and $\Gamma_2$ (a ``fish'' configuration).} \label{fig:neigh8rev} \end{figure} \subsection{The results}\label{sec:sec2_2} Let $\left\{f_{\mu} \right\}$ be a $1$-parameter family of ($R$-)reversible diffeomorphisms that unfolds at $\mu=0$ the initial homoclinic tangencies of the diffeomorphism $f_0$ defined above. Assume that $f_0$ satisfies conditions {\rm \textsf{[A,B,C]}}. Then, the following theorem shows the global symmetry-breaking bifurcations undergone in this case: \begin{theorem} For the family $\left\{f_{\mu} \right\}$, in any segment $[-\mu_0,\mu_0]$ with $\mu_0>0$ small, there are infinitely many intervals $\delta_{k}$, with boundaries $\mu = \mu_k^+$ and $\mu_k^-$ where $\mu_k^\pm\to 0$ as $k\to\infty$, satisfying: \begin{itemize} \item Symmetric (and simultaneous) single round 1-orbits and 2-orbits of period $k+q$ undergo non-degenerate saddle-node and period-doubling bifurcations at the values $\mu=\mu_{k}^{+}$ and $\mu=\mu_{k}^{-}$, respectively. \item The first return maps $T_{1k}$ and $T_{2k}$ have at $\mu\in\delta_{k}$ two fixed points: a sink and a saddle for $T_{1k}$ and a source and a saddle for $T_{2k}$. \end{itemize} \label{th:th1} \end{theorem} This theorem can be seen as an extension of the theorem on cascade of periodic sinks in~\cite{GaS73,N74} for the case when the saddle fixed point is conservative and the global dynamics near the homoclinic orbit is dissipative. In general, these intervals $\delta_{k}$ will be non-intersecting (see Remark~\ref{rmk:non-intersec} for a wider explanation on that). In contrast to Theorem~\ref{th:th1}, the following theorem deals with the global bifurcations giving rise to symmetric conservative dynamics, that is, the bifurcations of birth of symmetric single-round elliptic $12$-orbits. \begin{theorem} For the family $\left\{f_{\mu} \right\}$ under consideration, in any segment $[-\mu_0,\mu_0]$ with $\mu_0>0$ small, there exist infinitely many intervals $\delta_{km}^c$ accumulating at $\mu=0$ as $k,m\to\infty$ such that the first-return map $\ensuremath{\mathbb{T}}otkm$ has at $\mu\in\delta_{km}^c$ symmetric elliptic and saddle fixed points. \label{th:th2} \end{theorem} Next result is Newhouse Theorem for the case under consideration. \begin{theorem} For the family $\left\{f_{\mu} \right\}$, in any segment $[-\mu_0,\mu_0]$ with $\mu_0>0$ small, there exist open intervals $n_i$ such that the set of values $\mu\in n_i$ for which the corresponding map $f_\mu$ satisfying the following two properties \textrm{(a)} and \textrm{(b)} form a dense subset of $n_i$: \begin{itemize} \item[(a)] $f_{\mu}$ has a symmetric couple of homoclinic orbits $\Gamma_{1\mu}\subset U_1$ and $\Gamma_{2\mu}=R(\Gamma_{1\mu})\subset U_2$ to the symmetric saddle fixed point $O_{\mu}$. \item[(b)] The manifolds $W^u(O_\mu)$ and $W^s(O_\mu)$ of $f_{\mu}$ have quadratic tangencies at the points of $\Gamma_{1\mu}$ and $\Gamma_{2\mu}$. \end{itemize} \label{prop:ni} \end{theorem} Summarising, from Theorems~\ref{th:th1}--~\ref{prop:ni} the following result on existence of mixed dynamics is obtained. \begin{theorem}\label{thm:4} Let $\{f_\mu\}$ be a $1$-parameter family of $2$-dimensional reversible maps which unfolds at $\mu=0$ a couple of homoclinic tangencies satisfying conditions \textsf{[A,B,C]}. 
Then, for any $\mu_0>0$, the intervals $n_i \subset [-\mu_0,\mu_0]$ from Theorem 3 are Newhouse intervals with reversible mixed dynamics. \end{theorem} The proof of Theorems~\ref{th:th1} and~\ref{th:th2} extends along Sections~\ref{sec:prelim}--~\ref{sec:proofTh2}. In contrast, the proofs of Theorems~\ref{prop:ni} and~\ref{thm:4} are quite standard and are deferred to the end of the paper: Theorem~\ref{prop:ni} is proved in Section~\ref{sec:sec6_1} and Theorem~\ref{thm:4} in Section~\ref{sec:sec6_2}. \section{Preliminary geometric and analytic constructions} \label{sec:prelim} Let us consider a map $f_\mu$ from our $1$-parameter family and let denote by $T_{0} \equiv {f_\mu}\bigl|_{U_0}$ its restriction onto a neighbourhood $U_0$ of the fixed point $O$. This $\mu$-dependent map $T_{0}$ is called the {\em local map}. We introduce the so-called {\it global maps} $T_{1}$ and $T_{2}$ through the following relations: $T_{1}\equiv f^q_\mu :\Pi_1^-\to\Pi_1^+$ and $T_{2}\equiv f^q_\mu :\Pi_2^-\to\Pi_2^+$. They are well defined for small values of $\mu$ since $f^q_0(M_1^-) = M_1^+$ and $f^q_0(M_2^-) = M_2^+$. Then the {\em first-return maps} $T_{1k}:\Pi_1^+\mapsto\Pi_1^+$, $T_{2k}:\Pi_2^+\mapsto\Pi_2^+$ and $\ensuremath{\mathbb{T}}otkm:\Pi_1^+\mapsto\Pi_1^+$ can be defined by the following composition of maps: \begin{equation*} \begin{array}{l} \Pi_1^+ \;\stackrel{T_{0}^k}{\longrightarrow}\;\;\Pi_1^- \;\stackrel{T_{1}}{\longrightarrow}\;\; \Pi_1^+\;, \\ \Pi_2^+ \;\stackrel{T_{0}^k}{\longrightarrow}\;\;\Pi_2^- \;\stackrel{T_{2}}{\longrightarrow}\;\; \Pi_2^+ \;, \\ \Pi_2^+ \;\stackrel{T_{0}^k}{\longrightarrow}\;\;\Pi_1^- \;\stackrel{T_{1}}{\longrightarrow}\;\; \Pi_1^+ \;\stackrel{T_{0}^m}{\longrightarrow}\;\;\Pi_2^-\;\stackrel{T_{2}}{\longrightarrow}\;\; \Pi_2^+\;, \end{array} \end{equation*} (see Fig.~\ref{fig:frtms(rev)} and~\ref{hom8new1(maps)}). In short, we will denote these compositions by $T_{1k}= T_1 T_0^k$, $T_{2k}=T_2 T_0^k$ and $\ensuremath{\mathbb{T}}otkm= T_2 T_0^m T_1 T_0^k$. \begin{figure} \caption{A geometric structure of the homoclinic points $M_1^+,M_1^-,M_2^+$ and $M_2^-$ and their neighbourhoods in the case of figure-8 homoclinic configuration. Schematic actions of the first return maps: (a) $T_{1k} \label{fig:frtms(rev)} \end{figure} As it is usual in this kind of problems, one seeks for suitable local coordinates on $U_0$ in which the map $T_{0}$ exhibits its simplest form. The following lemma introduces $C^{r-1}$-coordinates that allow our local map $T_0$ to be written in the so-called {\em (saddle) normal form} or first order (saddle) normal form. \begin{figure} \caption{A geometric structure of the homoclinic points $M_1^+,M_1^-,M_2^+$ and $M_2^-$ and their neighbourhoods in the case of ``fish'' homoclinic configuration. Schematic actions of the first return maps: (a) $T_{1k} \label{hom8new1(maps)} \end{figure} \begin{lm}[Saddle Normal Form~{\cite[Lemma 1]{GS90}}] Assume $r\geq 4$ and let $T_0$ be a $C^r$-smooth reversible planar map with reversing (nonlinear in general) involution $R$ satisfying that $\dim\mathrm{Fix}\,R = 1$. Suppose that $T_0$ has a saddle fixed (periodic) point $O$ at the origin which belongs to $\mathrm{Fix}\,R$ and has multipliers $\lambda$ and $\lambda^{-1}$, with $|\lambda|<1$. 
Then there exist $C^{r-1}$-smooth local coordinates near $O$ in which the map $T_0$ can be written in the so-called Shilnikov cross-form: \begin{equation} T_0:\;\; \left\{ \begin{array}{rcl} \bar x &=& \lambda x + h(x,\bar y)x^2\bar y,\\ y &=& \lambda \bar y + h(\bar y,x)x\bar y^2. \label{CrossFormSaddle} \end{array} \right. \end{equation} \label{LmSaddle} \end{lm} \begin{remark} In these local coordinates the map $T_0$ is reversible under the linear involution $L(x,y)=(y,x)$. Indeed (see~\cite{DGGLS13}, for instance), it is enough to check that $(L T_0 L )^{-1} = T_0$. Observe that \[ L T_0 L: \left\{ \begin{array}{rcl} \bar y &=& \lambda y + h(y,\bar x)y^2 \bar x,\\ x &=& \lambda \bar x + h(\bar x,y)y\bar x^2 \end{array} \right. \] and thus $(L T_0 L )^{-1}$, which corresponds to interchange $(\bar x, \bar y)\leftrightarrow (x,y)$, gives rise to the expression for $T_0$. Bochner theorem~\cite{MZ55} ensures the simultaneous conjugation of both the map and the reversor. \end{remark} \noindent Next lemma provides a suitable expression for the iterates of $T_0$. Namely, \begin{lm}[{see~\cite{GS90}}] Let $T_0$ be a $C^r$-smooth $R$-reversible map written in (local) normal form~(\ref{CrossFormSaddle}) in a neighbourhood $V$ of a saddle fixed point $O$. Let us consider iterates of $T_0$ in $V$: $(x_0,y_0),\dots,(x_{\ell},y_{\ell})$ such that $(x_{\ell+1},y_{\ell+1})= T_0(x_{\ell},y_{\ell})$, $\ell=0,\dots,j-1$. Then, one has that \begin{eqnarray} x_j &=& \lambda^j x_0 \left(1 + j \lambda^j h_j(x_0,y_j)\right), \label{LocalMapk} \\ y_0 &=& \lambda^j y_j \left(1 + j \lambda^j h_j(y_j, x_0)\right), \nonumber \end{eqnarray} where the functions $h_j(x_0,y_j)$ are $\mathcal{O}_2(x_0,y_j)$ and satisfy that they and all their derivatives up to order $r-2$ are uniformly bounded with respect to $j$ . \label{LmLocalMap} \end{lm} Lemmas~\ref{LmSaddle} and~\ref{LmLocalMap} are also valid if $T_0$ depends on parameters. Moreover, if $T_0$ is $C^r$ with respect to both coordinates and parameters, it can be seen that the normal form (\ref{CrossFormSaddle}) is $C^{r-1}$ with respect to the coordinates and $C^{r-2}$ with respect to the parameters. Moreover, the derivatives in~(\ref{LocalMapk}) with respect to the parameters and up to order $r-2$ have order $O\left((\lambda+\epsilon)^j\right)$ for any $\epsilon>0$ (we refer the reader to~\cite{GST08}, Lemmas 6 and 7, for more details). \subsection{Construction of the local and global maps} We choose in $U_0$ the local coordinates $(x,y)$ given in Lemma~\ref{LmSaddle}. In these coordinates, the local stable and unstable invariant manifolds of the point $O$ are straightened: $W^uloc(O)$ can be represented by $x=0$ and $W^sloc(O)$ by $y=0$. Moreover, the previously chosen homoclinic points read as follows: $M_1^+=(x_1^+,0)$, $M_1^-=(0,y_1^-)$, $M_2^+=(x_2^+,0)$ and $M_2^-=(0,y_2^-)$. Since $R(M_1^+) = M_2^-$ and $R(M_1^-)=M_2^+$, we have that they are $L$-symmetric and therefore $x_1^+=y_2^-$ and $y_1^-=x_2^+$. From the geometry of the figure-8 homoclinic case (see Fig.~\ref{fig:frtms(rev)}) we can assume that \begin{equation} x_1^+= y_2^- = - \alpha<0,\qquad y_1^- = x_2^+ = \beta >0 \label{xeqy} \end{equation} Analogously, in the ``fish'' configuration we have that $\alpha <0$ and $\beta>0$ (see Fig.~\ref{hom8new1(maps)}). It is not restrictive to assume that $T_{0}(\Pi_i^-)\cap\Pi_i^-=\emptyset$, $i=1,2$ (if not, one can reduce the size of $\Pi_i^-$). 
Therefore, the domains of definition of the transfer map from $\Pi_i^{+}$ into $\Pi_j^-$, $i,j = 1,2$, under iterations of $T_{0}$ consist of infinitely many non-intersecting strips $\sigma_{k}^{0ij}$ which belong to $\Pi_i^+$ and accumulate at $W^sloc(O)\cap\Pi_i^+$ as $k\to\infty$. On its turn, the range of the transfer map consists of infinitely many strips $\sigma_{k}^{1ij}= T_{0}^k(\sigma_k^{0ij})$ belonging to $\Pi_i^-$ and accumulating at $W^uloc(O)\cap\Pi_i^-$ as $k\to\infty$ (see Figure~\ref{fig:locmaps}). \begin{figure} \caption{Domains of definition and range of the successor map from $\Pi_i^{+} \label{fig:locmaps} \end{figure} So, our first return maps are defined on those strips in the following way: \begin{equation*} \begin{array}{l} T_{1k} = T_1 T_0^k : \sigma_k^{011}\stackrel{T_{0}^k}{\longmapsto} \sigma_k^{111}\stackrel{T_1}{\longmapsto} \sigma_k^{011} \;, \\ T_{2k} = T_2 T_0^k : \sigma_k^{022}\stackrel{T_{0}^k}{\longmapsto} \sigma_k^{122}\stackrel{T_2}{\longmapsto} \sigma_k^{022} \;, \\ \ensuremath{\mathbb{T}}otkm = T_2 T_0^m T_1 T_0^k : \sigma_k^{021}\stackrel{T_{0}^k}{\longmapsto} \sigma_k^{121}\stackrel{T_1}{\longmapsto} \sigma_m^{012}\stackrel{T_{0}^m}{\longmapsto} \sigma_m^{112}\stackrel{T_2}{\longmapsto} \sigma_k^{021} \;. \end{array} \end{equation*} For large enough values of $k$, Lemma~\ref{LmLocalMap} asserts that the map $T_{0}^k:\sigma_k^{0ij}\left\{(x_0,y_0)\right\}\mapsto\sigma_k^{1ij}\left\{(x_k,y_k)\right\}$ can be written in the form \begin{equation} T_{0}^k\;:\; \left\{ \begin{array}{l} x_{k} = \lambda^k x_{0} \left(1 + k\lambda^k h_k(x_{0},y_{k}) \right),\\ y_{0} = \lambda^k y_{k} \left(1 + k\lambda^k h_k(y_{k},x_{0}) \right) \end{array} \right. \label{T1k} \end{equation} where $(x_0,y_0)\in \sigma_k^{0ij},\; (x_1,y_1)\in \sigma_k^{1ij}$, $i,j=1,2$. In the ``fish'' configuration case this corresponds to $T_0^k:\Pi_1^+ \left\{(x_{01},y_{01})\right\} \mapsto \Pi_1^- \left\{(x_{11},y_{11})\right\}$ while in the figure-8 situation this becomes $T_0^k:\Pi_2^+ \left\{(x_{02},y_{02})\right\} \mapsto \Pi_1^- \left\{(x_{11},y_{11})\right\}$ and $T_0^m:\Pi_1^+ \left\{(x_{01},y_{01})\right\} \mapsto \Pi_2^- \left\{(x_{12},y_{12})\right\}$ (see Fig.~\ref{fig:frtms(rev)2}). The global map $T_{1}:\Pi_1^-\to\Pi_1^+$ admits the following form \begin{equation} \! T_{1}: \left\{ \begin{array}{rl} x_{01} - x_1^+ &= F_{1}(x_{11},y_{11}-y_1^-,\mu) \\ & \equiv a x_{11} + b (y_{11}-y_1^-) + \varphi_1(x_{11},y_{11},\mu),\\ y_{01} &= G_{1}(x_{11},y_{11}-y_1^-,\mu) \\ &\equiv \mu + c x_{11} + d (y_{11}-y_1^-)^2 + \varphi_2(x_{11},y_{11},\mu), \end{array} \right. \label{T12} \end{equation} where $F_{1}(0)=G_{1}(0)=0$ (since $T_{1}(M_1^-) = M_1^+$ at $\mu=0$) and $\varphi_1=\mathcal{O}\left((y_{11}-y_1^-)^2 + x^2_{11}\right)$, $\varphi_2=\mathcal{O}\left( x_{11}^2 + |y_{11}-y_1^-|^3 + |x_{11}||y_{11}-y_1^-|\right).$ Since $W^uloc(O))$ and $W^sloc(O)$ have (local) expressions $\{x_{11}=0\}$ and $\{y_{01}=0\}$ and $T_{1}(W^uloc(O))$ and $W^sloc(O)$ undergo a quadratic tangency at $\mu=0$, this implies that \[ \frac{\partial G_{1}(0)}{\partial y_{11}} = 0,\;\; \frac{\partial^2 G_{1}(0)}{\partial y_{11}^2} = 2d \neq 0. \] Its Jacobian $J(T_{1})$ has the form \begin{equation} J(T_{1})= -bc + \mathcal{O}\left(|x_{11}| + |y_{11}-y_1^-|\right), \label{detT12*} \end{equation} and so $J_1=J(T_1)|_{M^-} = - bc$ where $0<|bc|<1$ by condition \textsf{[C]}. Concerning the global map $T_{2}$, its expression is closely related to that of $T_1$. 
Indeed, reversibility implies that $T_{2} = R\; T_{1}^{-1}\;R$ or, equivalently, $T_{1} = R\; T_{2}^{-1}\;R$. Then, by expression~(\ref{T12}) and having in mind the local $L$-reversibility on the domains $\Pi^-_2$ (Bochner's theorem ensures its conjugation with the non-linear reversor $R$) we obtain that the map $T_{2}^{-1}:\Pi_2^+\{(x_{02},y_{02})\}\mapsto \Pi_2^-\{(x_{12},y_{12})\}$ can be written as \begin{equation*} \! T_{2}^{-1}: \left\{ \begin{array}{rl} x_{12} =& G_{1}(y_{02},x_{02}-x_2^+,\mu) = \\ &\mu + c y_{02} + d (x_{02}-x_2^+)^2 + \varphi_2(y_{02},x_{02},\mu),\\ y_{12} - y_2^- =& F_{1}(y_{02},x_{02}-x_2^+,\mu) = \\ &a y_{02} + b (x_{02}-x_2^+) + \varphi_1(y_{02},x_{02},\mu), \end{array} \right. \end{equation*} which means to write $x_1^+=y_2^-$, $y_1^-=x_2^+$ in~(\ref{T12}) and to swap $x\leftrightarrow y$ variables, i.e. $x_{01} \leftrightarrow y_{12}$ and $x_{11} \leftrightarrow y_{02}$. As it was done in a previous remark, this expression defines the map $T_{2}:\Pi_2^-\{(x_{12},y_{12})\}\mapsto\Pi_2^+\{(x_{02},y_{02})\}$ in the implicit form: $x_{12} = G_{1}(\bar y_{02},\bar x_{02}-x_2^+,\mu), y_{12} - y_2^- = F_{1}(\bar y_{02},\bar x_{02}-x_2^+,\mu)$ by swapping bar and no-bar variables. \begin{figure} \caption{Domains of definitions and associated coordinates for the first return map $\ensuremath{\mathbb{T} \label{fig:frtms(rev)2} \end{figure} \section{Proof of Theorem~\ref{th:th1}} \label{sec:proofTh1} This proof is mainly based on Lemma~\ref{lm:rescMar} which provides, by computing the corresponding equations and performing a suitable rescaling, an asymptotic expression for the first return map for large enough values of $k$. Rescaling method has become, since the work of Tedeschini-Yorke~\cite{TY86}, a very useful tool when dealing with homoclinic connections (see also~\cite{GStT96,GG04,GGT07,GST08,GGS10} and references therein for many examples of such use). \begin{lm} \label{lm:rescMar} Let $\left\{f_\mu \right\}$ be the family under consideration satisfying conditions~\textrm{\textsf{[A,B,C]}}. Then, for large enough values of $k$, the first return map $T_{1k} : \sigma_k^0 \rightarrow \sigma_k^0$ can be brought, by a linear change of coordinates and a convenient rescaling, to the following form \begin{equation} \label{eq:FRMmar} \bar{X}=Y + k\lambda^k h_k^1(X,Y), \qquad \bar{Y} = M_1 + M_2 X - Y^2 + k\lambda^k h_k^2(X,Y), \end{equation} with \begin{equation} \label{eq:M1M2} M_1= -d\lambda^{-2k}\left(\mu - \lambda^k(y_1^- - cx^+) + \tilde\rho_k\right), \qquad M_2=bc, \end{equation} where $\tilde\rho_k = \mathcal{O}(k\lambda^{k})$ is a small constant and the functions $h_k^j$ have all their derivatives uniformly bounded up to order $(r-2)$. \end{lm} {\em Proof}. To ease its reading we give first a ``lightweight'' proof of the lemma for a simpler case, i.e. when the local map $T_0$ is linear, $\bar x = \lambda x, \bar y = \lambda^{-1} y$, and the global map has the form: \[ \begin{array}{l} \bar x_0 - x^+ = a x_1 + b(y_1-y^-),\\ \bar y_0 = \mu + c x_1 + d(y_1-y^-)^2 + f_{11} x_1(y_1-y^-). \end{array} \] We have only considered linear terms in the first equation and up to quadratic terms in the second one. We use also (only for a simplification of formulas) the notation $x^+=x_1^+, y^- = y^-_1$ and denote the coordinates on $\Pi_1^+$ as $(x_0,y_0)$ and on $\Pi_1^-$ as $(x_1,y_1)$. 
Then the first return map $T_{1k} = T_1T_0^k$ is written as \[ \begin{array}{l} \bar x_0 - x^+ = a \lambda^k x_0 + b(y_1-y^-),\\ \lambda^k\bar y_1 = \mu + c \lambda^k x_0 + d(y_1-y^-)^2 + f_{11} \lambda^k x_0(y_1-y^-), \end{array} \] This (first) highly simplified case will serve the reader (we hope) to be familiar with the different transformations we apply to get the asymptotic H\'enon map. The general case (that is included rear after this one) will follow the same ideas and procedure. Introduce the coordinates $\xi = x_0 - x^+, \eta = y_1 - y^-$. Then $T_{1k}$ reads \begin{equation} \begin{array}{l} \bar\xi = a \lambda^k \xi + b\eta + a \lambda^k x^+ ,\\ \lambda^k\bar\eta = m_1 + c \lambda^k \xi + d\eta^2 + f_{11} \lambda^k \xi\eta + f_{11} \lambda^k x^+ \eta, \end{array} \label{T1xieta} \end{equation} where $m_1 = \mu + c\lambda^k x^+ - \lambda^k y^-$. Further, we make one more coordinate shift, $\xi = x + \alpha_k, \eta = y + \beta_k$ with small coefficients $\alpha_k =\mathcal{O}(\lambda^k)$ and $\beta_k = \mathcal{O}(\lambda^k)$, in order to vanish the constant terms in the first equation and the linear in $y$ terms in the second one. Then we obtain \[ \begin{array}{l} \bar x = a \lambda^k x + b y + \left[b\beta_k - \alpha_k + a\lambda^k x^+ + a\lambda^k\alpha_k \right] ,\\ \lambda^k\bar y = m_2 + (c + f_{11}\beta_k) \lambda^k x + d y^2 + f_{11} \lambda^k xy + \left(2d\beta_k + f_{11} \lambda^k x^+ + f_{11}\lambda^k\alpha_k \right) y, \end{array} \] where $m_2 = m_1 + \lambda^k(c\alpha_k - \beta_k + f_{11}x^+\beta_k + f_{11}\alpha_k\beta_k) + d\beta_k^2 = m_1 + \mathcal{O}(\lambda^{2k})$. The expressions in square brackets are nullified at \begin{equation} \alpha_k = \left(ax_1^+ - \frac{bf_{11}x^+}{2d}\right) \lambda^k + \mathcal{O}(\lambda^{2k}), \;\; \beta_k = - \frac{f_{11}x^+}{2d} \lambda^k + \mathcal{O}(\lambda^{2k}). \label{ab} \end{equation} For such choice of $\alpha_k$ and $\beta_k$, the map $T_{1k}$ takes the form \[ \begin{array}{l} \bar x = a \lambda^k x + b y ,\\ \bar y = \lambda^{-k}m_2 + (c + \phi_k) x + d \lambda^{-k} y^2 + f_{11} xy, \end{array} \] where $\phi_k = \mathcal{O}(\lambda^k)$ is a small coefficient. Now, by rescaling the coordinates, \begin{equation} x = - \frac{b}{d} \lambda^k X,\;\; y = - \frac{1}{d} \lambda^k Y, \label{rescT1k} \end{equation} we bring the map $T_{1k}$ to the claimed form: \[ \bar X = Y + \mathcal{O}(\lambda^{k}), \;\; \bar Y = M + bc X - Y^2 + \mathcal{O}(\lambda^{k}), \] where $M = -d\lambda^{-2k}m_2 = -d\lambda^{-2k} \left[\mu + (c x^+_1 - y^-_1) \lambda^k + \mathcal{O}(\lambda^{2k}) \right]$. \vskip1cm Let us now deal with the \emph{general case}, that is, with $T_0^k$ given by \begin{eqnarray*} x_k &=& \lambda^k x_0 \left( 1 + k\lambda^k h_k(x_0,y_k) \right) \\ y_0 &=& \lambda^k y_k \left( 1 + k\lambda^k h_k(y_k,x_0) \right) \end{eqnarray*} and the global map $T_1$ given by \begin{eqnarray*} \bar{x}_0 - x^+ &=& a x_1 + b(y_1 - y^-) + \mathcal{O}\left( (y_1-y^-)^2, x_1^2, (y_1-y^-)x_1 \right), \\ \bar{y}_0 &=& \mu + c x_1 + d(y_1-y^-)^2 + f_{11} x_1 (y_1-y^-) + \mathcal{O}\left( x_1^2,(y_1-y^-)^3 \right). \end{eqnarray*} Consider the map $T_{1k}=T_1 T_0^k$ and apply the change of coordinates: $\xi=x_0-x^+$, $\eta= y_k - y^-$. 
Then, $T_{1k}$ admits the expression \begin{eqnarray} \bar{\xi} &=& a \lambda^k \xi + b\eta + \left( \lambda^k a x^+ + \mathcal{O}(k \lambda^k) \right) + \gamma_1 \eta^2 + \gamma_2 \lambda^k \xi \eta + \lambda^k \eta, \label{eq:gencase:xieta}\\ \lambda^k \bar{\eta} (1+\mathcal{O}(k\lambda^k))&=& \left( \mu + c\lambda^k x^+ + c\lambda^k(\xi + x^+) k \lambda^k h_k + f_{11} k \lambda^{2k} (\xi + x^+) \eta h_k + \right. \nonumber\\ && \left. \gamma_1 \lambda^{2k} (\xi+x^+)^2 (1+k\lambda^k h_k) \right) + c\lambda^k \xi + d\eta^2 + \\ && f_{11} \lambda^k \xi \eta + f_{11} \lambda^k x^+ \eta. \nonumber \end{eqnarray} Following the same steps as for the simplified case, we consider the following \emph{shift}: \[ \xi=x + \alpha_k, \qquad \eta= y + \beta_k \] with $\alpha_k,\beta_k$ to be determined in such a way that the constant term in the equation for $\bar{x}$ and the coefficient of $y$ in $\bar{y}$ both vanish. After performing this shift, equations~\eqref{eq:gencase:xieta} become \begin{equation} \label{eq:gencase:xbar} \bar{x}=a \lambda^k x + by + \left( (a\lambda^k -1) \alpha_k + b(1+\lambda^k) \beta_k + \lambda^k a x^+ + \mathcal{O}(k\lambda^k) \right) \end{equation} and \begin{eqnarray} \label{eq:gencase:ybar} \lefteqn{\lambda^k \bar{y} = \left( \mu + c \lambda^k (x^+ - y^-) + ck \lambda^{2k} (\alpha_k + x^+) h_k^0 + f_{11} k \lambda^{2k} (\alpha_k + x^+) \beta_k h_k^0 + \right. } \nonumber \\ && \left. \gamma_1 \lambda^{2k} (\alpha_k + x^+)^2 + \gamma_2 \beta_k^3 + c \lambda^k \alpha_k + d \beta_k^2 + f_{11} \lambda^k \alpha_k \beta_k + f_{11} \lambda^k x^+ \beta_k - \lambda^k \beta_k + \right. \nonumber \\ && \left. \mathcal{O}(k \lambda^{4k} )\right) + \left( c \lambda^k + f_{11} \lambda^k \beta_k + c k \lambda^{2k} h_k^0 + f_{11} k \lambda^{2k} \beta_k h_k^0 \right) x + \\ && \left( f_{11} \lambda^k (1+k\lambda^k h_k^0) \alpha_k + 2d \beta_k + 3 \gamma_2 \beta_k^2 + f_{11} k \lambda^{2k} x^+ h_k^0 + f_{11} \lambda^k x^+ \right) y + \nonumber \\ && \left( d+ 3 \gamma_2 \beta_k \right) y^2 + \left( f_{11} k \lambda^{2k} h_k^0 + f_{11} \lambda^k \right) xy + \mathcal{O}_3(x,y), \nonumber \end{eqnarray} where $h_k^0$ stands for the constant term of $h_k(x^+ + \xi,y^- + \eta)$ in $(\xi,\eta)$-variables and we have taken into account that $(1+\mathcal{O}(k\lambda^k))^{-1}= \mathcal{O}(k\lambda^k)$. Thus, we determine $\alpha_k,\beta_k$ to satisfy \begin{eqnarray} \label{eq:gencase:alphak:betak} (a\lambda^k -1) \alpha_k + b(1+\lambda^k) \beta_k &=& - \lambda^k a x^+ + \mathcal{O}(k\lambda^k) \\ f_{11} \lambda^k (1+k\lambda^k h_k^0) \alpha_k + 2d \beta_k + 3 \gamma_2 \beta_k^2 &=& - f_{11} k \lambda^{2k} x^- h_k^0 - f_{11} \lambda^k x^+. \nonumber \end{eqnarray} It is straightforward to check that $\alpha_k, \beta_k =\mathcal{O}(\lambda^k)$. Now, consider the linear system \begin{eqnarray*} (a\lambda^k -1) \alpha_k + b(1+\lambda^k) \beta_k &=& - \lambda^k a x^+ + \mathcal{O}(k\lambda^k) \\ f_{11} \lambda^k (1+k\lambda^k h_k^0) \alpha_k + 2d \beta_k &=& - f_{11} k \lambda^{2k} x^- h_k^0 - f_{11} \lambda^k x^+. \end{eqnarray*} This linear system has solutions \begin{equation} \label{eq:gencase:alphabetak} \begin{array}{rcl} \alpha_k^0 &=& {\displaystyle \left( a x^+ + \frac{b f_{11}}{2 d} \right) \lambda^k + \mathcal{O}(k \lambda^k) } \\ \beta_k^0 &=& {\displaystyle - \frac{f_{11} x^+}{2d} \lambda^k + \mathcal{O}(k \lambda^k)}. 
\end{array} \end{equation} Since $d\neq 0$, the determinant \[ \left| \begin{array}{cc} a \lambda^k -1 & b (1+\lambda^k) \\ f_{11} \lambda^k + \mathcal{O}(k\lambda^{2k}) & 2 d \end{array} \right| = - 2d + (2 ad - b f_{11}) \lambda^k - b f_{11} \lambda^{2k} + \mathcal{O}(k \lambda^{2k}) \neq 0, \] and so by the Implicit Function Theorem, there exist $\alpha_k=\alpha_k^0 + \mathcal{O}(k\lambda^k)$ and $\beta_k=\beta_k^0 + \mathcal{O}(k\lambda^k)$ solutions of~\eqref{eq:gencase:alphak:betak}, which are $\mathcal{O}(k\lambda^k)$-close to $\alpha_k^0,\beta_k^0$. Thus, considering the shift $\xi=x+\alpha_k$, $\eta=y + \beta_k$, with these already determined $\alpha_k,\beta_k=\mathcal{O}(\lambda^k)$, one gets the following equations for $T_{1k}$: \begin{equation} \begin{array}{rcl} \bar{x} &=& a \lambda^k x + by + \gamma_1 y^2 \\ \lambda^k \bar{y} &=& m_2 + (c\lambda^k + \mathcal{O}(\lambda^{2k})) x + (d+\mathcal{O}(\lambda^k)) y^2 + (f_{11}\lambda^k + \mathcal{O}(k \lambda^{2k})) xy + \mathcal{O}_3(x,y), \end{array} \end{equation} where $m_2:= \mu + c \lambda^k x^+ - \lambda^k y^- + \mathcal{O}(\lambda^{2k})$. And last, we perform the \emph{scaling} \[ x= -\frac{b}{d} \lambda^k X, \qquad y=-\frac{1}{d} \lambda^k Y, \] under which the previous system becomes \begin{equation} \label{eq:gencase:final} \begin{array}{rcl} \bar{X} &=& Y + \mathcal{O}(\lambda^k) \\ \bar{Y} &=& M_1 + M_2 X - Y^2 + \mathcal{O}(\lambda^k), \end{array} \end{equation} with $M_1=-d \lambda^{-2k} m_2 = -d \lambda^{-2k} \left( \mu + (cx^+ - y^-) \lambda^k + \mathcal{O}(k\lambda^k) \right)$ and $M_2=bc$, as claimed. $\Box$ \\ Lemma~\ref{lm:rescMar} shows that the limit form (that is, for large enough values of $k$ or, in other words, for orbits close enough to $W^sloc(O)$) for the first return map $T_{1k}=T_1 T_0^k$ (and similarly for $T_{2m}$) is the standard H\'enon map $\mathcal{H}$: \[ \bar x = y, \qquad \bar y = M_1 + M_2 x - y^2, \] with Jacobian $J=-M_2 = -bc$. Recall that by~(\ref{detT12*}) and condition \textsf{[C]} we have $0<J<1$. Bifurcations of fixed points of the standard H\'enon map are well known. In the $(M_1, M_2)$-parameter plane, there are two bifurcation curves, namely \begin{eqnarray*} L^{+1}&:=\left\{ (M_1, M_2): 4 M_1 = - (1- M_2)^2 \right\}, \\ L^{-1}&:=\left\{(M_1,M_2): 4 M_1 = 3 (1-M_2)^2 \right\}, \end{eqnarray*} corresponding to the existence of a fixed point with a multiplier $+1$ (saddle-node fixed point) and a fixed point with a multiplier $-1$ (period doubling bifurcation), respectively (these curves follow by combining the fixed-point equation $y^2+(1-M_2)\,y-M_1=0$ for points $(y,y)$ of $\mathcal{H}$ with the multiplier equation $\nu^{2}+2y\nu-M_{2}=0$ evaluated at $\nu=\pm 1$). For $-1< M_2< 0$, the H\'enon map has no fixed points below the curve $L^{+1}$, has a stable (sink) fixed point in the region between the bifurcation curves $L^{+1}$ and $L^{-1}$, while at $L^{-1}$ a period doubling bifurcation takes place and a stable 2-periodic orbit appears above the curve $L^{-1}$. Thus, using the relation~(\ref{eq:M1M2}) between the rescaled and the initial parameters, we find that \begin{eqnarray*} \mu_k^+ &= \lambda^k (c \alpha + \beta + \rho_k) + \frac{(1-bc)^2}{4d} \lambda^{2k}, \\[1.2ex] \mu_k^- &= \lambda^k (c \alpha + \beta + \rho_k) - \frac{3(1-bc)^2}{4d} \lambda^{2k}, \end{eqnarray*} where $\rho_k = \mathcal{O}(k\lambda^k)$ is small, $\alpha,\beta$ have been defined in~(\ref{xeqy}) and $b,c,d$ are Taylor coefficients of the map $T_1$ (see~(\ref{T12})). This completes the proof of Theorem~\ref{th:th1}. $\Box$ \\ \begin{remark} \label{rmk:non-intersec} In general, the intervals $\delta_k$ do not intersect each other for different sufficiently large $k$.
However, when $c \alpha + \beta=0$, they can intersect and even appear nested. In the latter case, this implies that the diffeomorphism $f_0$ can possess simultaneously infinitely many periodic sinks and sources of all successive periods beginning from some (sufficiently) large number. This is a more delicate problem and it is out of the scope of this work. We recall that such phenomenon of ``global resonance'' with elliptic points was introduced in~\cite{GS01} for area-preserving maps with homoclinic tangencies (see also \cite{GG09, DGG15}). \end{remark} \section{Proof of Theorem~\ref{th:th2}} \label{sec:proofTh2} This proof will follow similar ideas and techniques as those employed in the proof of Theorem~\ref{th:th1}. We begin by taking on $U_0$ the local $C^{r-1}$-coordinates $(x,y)$ provided by Lemma~\ref{LmSaddle}. Recall that in these local coordinates the homoclinic points are denoted by $M_1^+=(x_1^+,0)$, $M_1^-=(0,y_1^-)$ in $\Gamma_1$ and $M_2^+=(x_2^+,0)$ and $M_2^-=(0,y_2^-)$ in $\Gamma_2$. They satisfy that $L(M_1^+) = M_2^-$ and $L(M_1^-)=M_2^+$ (locally) since $R(M_1^+) = M_2^-$, $R(M_1^-)=M_2^+$, respectively. Now we consider the first return map $\ensuremath{\mathbb{T}}otkm= T_2 T_0^m T_1 T_0^k$ for single-round periodic 12-orbits. Thus, the following result holds: \begin{lm} Let us consider the family $\left\{f_\mu \right\}$ of Theorem~\ref{th:th2}, satisfying conditions~\textrm{\textsf{[A,B,C]}}. Then, for large enough values of $k,m$, with $k \simeq m$, the first return map $\ensuremath{\mathbb{T}}otkm: \sigma_k^0 \rightarrow \sigma_k^0$ can be brought, by a linear change of coordinates and a suitable rescaling, to a reversible map asymptotically close as $k,m\to\infty$ to an area-preserving (symplectic) map of the form (see also \cite{DGGLS13}): \begin{equation} H\;:\left\{ \begin{array}{rl} \bar x &= \widetilde{M} + \tilde c x - y^2, \\ \tilde c \bar y &= -\widetilde{M} + y + \bar x^2, \end{array} \right. \label{frmex} \end{equation} where \begin{equation} \tilde c = \frac{c}{b}\lambda^{k-m},\qquad \widetilde{M} = - \frac{d}{b^2}\lambda^{-2m}\left(\mu + c \lambda^k \beta + \lambda^m \alpha + O(k\lambda^k + m\lambda^m) \right). \label{resc12:th2} \end{equation} The constants $\alpha,\beta$ are defined in~(\ref{xeqy}) and $b,c,d$ in expression~(\ref{T12}). \end{lm} From hypotheses~\textsf{[A]} and~\textsf{[C]} it follows that $\lambda >0$ and also $\tilde c <0$ in the orientable case (if $T_1$ is orientable) and $\tilde c >0$ in the non-orientable case (if $T_1$ is non-orientable). \begin{proof} First, let us remind how coordinates are denoted on each domain around the homoclinic points $M_{1,2}^-$. Thus, $(x,y)$-coordinates on $\Pi_i^{+}$ are denoted by $(x_{0i},y_{0i})$ and by $(x_{1i},y_{1i})$ on $\Pi_i^{-}$, for $i=1,2$. From Lemma~\ref{LmLocalMap}, the map $T_{0}^k: \Pi_2^+ \to \Pi_1^-$ will be defined on the strip $\sigma_k^{021}\subset\Pi_2^+$ and $T_0^k(\sigma_k^{021}) = \sigma_k^{121}\subset\Pi_1^-$. Analogously, there exist strips $\sigma_k^{011}, \sigma_k^{012}\subset\Pi_1^+$, and $\sigma_k^{022}\subset\Pi_2^+$ such that $T_0^k(\sigma_k^{011}) = \sigma_k^{111}\subset\Pi_1^-$, $T_0^k(\sigma_k^{012}) = \sigma_k^{112}\subset\Pi_2^-$ and $T_0^k(\sigma_k^{022}) = \sigma_k^{122}\subset\Pi_2^-$ (see Fig.~\ref{fig:locmaps} for a comprehensive plot). 
The first return map $\ensuremath{\mathbb{T}}otkm$ is given by the following chain of compositions: \[ \sigma_k^{021}\stackrel{T_{0}^k}{\longmapsto}\sigma_k^{121}\stackrel{T_1}{\longmapsto}\sigma_m^{012} \stackrel{T_{0}^m}{\longmapsto}\sigma_m^{112}\stackrel{T_2}{\longmapsto}\sigma_k^{021} \] (for a geometrical illustration see Fig.~\ref{fig:frtms(rev)2}). These relations can be expressed in coordinates through the following set of equations ($T_0^k$, $T_1$, $T_0^m$, and $T_2$, respectively): \begin{equation} \begin{array}{rl} x_{11} =& \lambda^k x_{02} (1 + k\lambda^k h_k(x_{02},y_{11}) ) \\ y_{02} =& \lambda^k y_{11} (1 + k\lambda_1^k h_k(y_{11},x_{02}) ), \\ \\ x_{01} - x_1^+ =& F_{1}(x_{11},y_{11}-y_1^-,\mu)\equiv \\ &a x_{11} + b (y_{11}-y_1^-) + \varphi_1(x_{11},y_{11},\mu),\\ y_{01} = & G_{1}(x_{11},y_{11}-y_1^-,\mu) \equiv\\ &\mu + c x_{11} + d (y_{11}-y_1^-)^2 + \varphi_2(x_{11},y_{11},\mu), \\ \\ x_{12} =& \lambda^m x_{01} (1 + m\lambda^m h_m(x_{01},y_{12}) )\\ y_{01} =& \lambda^m y_{12} (1 + m\lambda^m h_m(y_{12},x_{01}) ), \\ \\ x_{12} =& G_{1}(\bar y_{02},\bar x_{02}-x_2^+,\mu) = \\ &\mu + c \bar y_{02} + d (\bar x_{02}-x_2^+)^2 + \varphi_2(\bar y_{02},\bar x_{02},\mu),\\ y_{12} - y_2^- =& F_{1}(\bar y_{02},\bar x_{02}-x_2^+,\mu) =\\ &a \bar y_{02} + b (\bar x_{02}-x_2^+) + \varphi_1(\bar y_{02},\bar x_{02},\mu). \end{array} \label{T12k} \end{equation} Observe that these formulas are presented in two different forms. Indeed, the local maps $T_0^{k,m}$ are given in cross-form while the global maps $T_{1,2}$ are written in explicit form. Thus, our first-return map $\ensuremath{\mathbb{T}}otkm$ can be defined, in cross-variables, as $\ensuremath{\mathbb{T}}otkm: (x_{02},y_{11}) \mapsto (\bar{x}_{02},\bar{y}_{11})$, through the equation $\bar{y}_{02} = \lambda^k \bar y_{11} (1 + k\lambda_1^k h_k(\bar y_{11},\bar x_{02}) )$ which plays an intermediate r\^ole. As we did in Lemma~\ref{lm:rescMar}, we introduce new variables \[ x_1 = x_{01} - x_1^+, \quad x_2 = x_{02} - x_2^+, \quad y_1 = y_{11}-y_1^-, \quad y_2 = y_{12}-y_2^- \] and rewrite system~(\ref{T12k}) as follows: \begin{equation} \begin{array}{l} x_{1} = b y_{1} + \mathcal{O}(\lambda^k) + \mathcal{O}(y_1^2), \\ \\ \lambda^m y_{2}(1+m\lambda^{2m}\mathcal{O}(|x_1| + |y_2|)) = \\ \qquad (\mu + c \lambda^k x_{2}^+ - \lambda^m y_{2}^- + \mathcal{O}(k\lambda^{2k}+m\lambda^{2m})) + c \lambda^k x_2 + d y_{1}^2 + \\ \qquad \mathcal{O}(\lambda^{2k}|x_2| + \lambda^k |x_2y_1| + |y_1|^3), \\ \\ \lambda^m x_{1}(1+m\lambda^{2m}\mathcal{O}(|x_1| + |y_2|)) = \\ \qquad (\mu + c \lambda^k y_{1}^- - \lambda^m x_1^+ + \mathcal{O}(k\lambda^{2k}+m\lambda^{2m})) + c \lambda^k \bar y_1 + d \bar x_2^2 + \\ \qquad \mathcal{O}(\lambda^{2k}|\bar x_2| + \lambda^k |\bar x_2\bar y_1| + |\bar y_1|^3)\,\\ \\ y_{2} = b \bar x_{2} + \mathcal{O}(\lambda^k) + O(\bar x_2^2), \end{array} \label{T12k(2)} \end{equation} Take $x_1$ and $y_2$ from the first and fourth equations of (\ref{T12k(2)}) and substitute them in the second and third ones. 
After this, we obtain the map $\ensuremath{\mathbb{T}}otkm: (x_2,y_1)\mapsto (\bar x_{2},\bar y_{1})$ given in the following implicit form \begin{align*} &\lambda^m b \bar x_2 (1 + m\lambda^{m} \mathcal{O}(\bar x_2)) = \\ &\qquad M + d y_{1}^2 + c \lambda^k x_2 + \mathcal{O}(\lambda^{2k}|x_2| + \lambda^k |x_2y_1| + |y_1|^3), \\ &\lambda^m b y_{1}(1 + m\lambda^{m} \mathcal{O}(y_1)) = \\ &\qquad M + c \lambda^k \bar y_1 + d \bar x_2^2 + \mathcal{O}(\lambda^{2k}|\bar x_2| + \lambda^k |\bar x_2\bar y_1| + |\bar y_1|^3), \nonumber \end{align*} where ${\displaystyle M = \mu + c \lambda^k y_1^- - \lambda^m x_1^+ + O(k\lambda^{2k} + m\lambda^{2m})}$ or, equivalently, \[ M = \mu + c \lambda^k \beta + \lambda^m \alpha + O(k\lambda^{2k} + m\lambda^{2m}). \] Here we have taken into account that $x_1^+=-\alpha<0$ and $y_1^-=\beta$ (see formulas~(\ref{xeqy})). Notice that up to this point, the procedure is symmetric. That is, we could have started our first-return map with $T_0^m$ instead of $T_0^k$ and the formulas would have been the same. This is reflected in the fact that all the equations up to now, including the definition of the constant $M$, are invariant under $k \leftrightarrow m$. Following the same procedure performed in the proof of Theorem~\ref{th:th1}, we rescale the coordinates. Indeed, consider \[ x_2 = -\frac{b}{d}\lambda^m x, \qquad y_1 = -\frac{b}{d}\lambda^m y, \] which bring the first return map $\ensuremath{\mathbb{T}}otkm$ into the following rescaled form \[ \begin{array}{l} \bar x = \widetilde{M} + \tilde c x - y^2 + O(\lambda^k + \lambda^{2k-m}), \\ y = \widetilde{M} + \tilde c \bar y - \bar x^2 + O(\lambda^k + \lambda^{2k-m}), \end{array} \] where $\tilde c$ and $\widetilde{M}$ satisfy (\ref{resc12:th2}). This ends the proof of the lemma. \end{proof} $\Box$ \\ To complete the proof of Theorem 2 we need to detect the bifurcation boundaries of the intervals $\delta_{km}^c$. Since at $\mu\in\delta_{km}^c$ the first return map $\ensuremath{\mathbb{T}}otkm$ has two symmetric fixed points, one elliptic and the other a saddle, such boundaries can be found from the corresponding analysis of the map~(\ref{frmex}). The bifurcation diagram for the symmetric fixed points of map~(\ref{frmex}) is shown in Fig.~\ref{Figdiag31}. We notice that it is essentially the same as the one in~\cite[page 16]{DGGLS13}. However, for the goals of \cite{DGGLS13}, searching only for symmetric fixed points was not sufficient, since the main problem there was to study symmetry breaking bifurcations (leading to the birth of a symmetric couple of sink-source fixed points). This is not necessary here because the symmetry-breaking bifurcations have already been determined in Theorem 1. \begin{figure} \caption{Elements of the bifurcation diagram for the map $H$: painted regions correspond to the existence of symmetric elliptic and saddle fixed points of $H$. } \label{Figdiag31} \end{figure} As in~\cite[page 16]{DGGLS13}, the equations of the bifurcation curves $F$ (symmetric fold bifurcation), $P\!D_1$ and $P\!D_2$ (symmetric period doubling) and $P\!F$ (symmetry breaking pitch-fork) are the following: \begin{equation} \begin{array}{rl} F_0: & {\displaystyle \widetilde{M} = -\frac{1}{4}\left(\tilde c -1 \right)^2}, \\\\ P\!D_1: & {\displaystyle \widetilde{M} = 1 -\frac{1}{4}\left(\tilde c -1 \right)^2}, \\\\ P\!D_2: & {\displaystyle \widetilde{M} = \frac{(\tilde c +1)(3\tilde c -1)}{4}}, \\\\ P\!F: & {\displaystyle \widetilde{M} = \frac{3}{4}\left(\tilde c -1 \right)^2}.
\\\\ \end{array} \label{bcurorient} \end{equation} These curves have the same equations for the orientable case, corresponding to the half-plane $\mathcal{P}_1 =\{\tilde c < -\varepsilon\}$ of the $(\tilde c,\widetilde{M})$-parameter plane, and for the non-orientable case, corresponding to the half-plane $\mathcal{P}_2 =\{\tilde c > \varepsilon\}$, with an arbitrarily small $\varepsilon>0$. Note that if $\tilde c =0$, then $c=0$ and therefore $T_1$ is not a diffeomorphism. So we exclude from the analysis a thin strip along the axis $\tilde c =0$ (the dashed strip in Fig.~\ref{Figdiag31}). The curves (\ref{bcurorient}) divide the half-plane $\mathcal{P}_1$ into 6 domains $I_\ell,\ldots,V\!I_\ell$ and the half-plane $\mathcal{P}_2$ into 9 domains $I_r,\ldots,I\!X_r$. From these domains, we select two domains $I\!I_\ell$ and $V_\ell$ belonging to $\mathcal{P}_1$ and four domains $I\!I_r$, $I\!V_r$, $V\!I_r$ and $V\!I\!I\!I_r$ belonging to $\mathcal{P}_2$ which correspond to those values of the rescaled parameters $(\tilde c,\widetilde{M})$ at which the map $H$ (and also the corresponding first return map $\ensuremath{\mathbb{T}}otkm$) has two symmetric fixed points: one saddle and one elliptic. Note that for a given map $\ensuremath{\mathbb{T}}otkm$ the value of the parameter $\tilde c$ is uniquely determined. Then, the interval $\delta_{km}^c$ of values of the parameter $\mu$ corresponds to one of the intervals $\Delta_{\tilde c}$ (with $\tilde c = \mbox{const}$) of values of $\widetilde{M}$ that crosses one of the selected domains from its lower to its upper boundary. For instance, let us compute in the orientable case ($\tilde c <0$) the corresponding intervals $\delta_{km}^c$ of values of $\widetilde{M}$ for the domain $I\!I_\ell$: \begin{equation} \label{eq:IIelle} \delta_{km}^c = \left(-\frac{1}{4}(\tilde c-1)^2, 1 -\frac{1}{4}(\tilde c-1)^2\right)\qquad\mbox{for}\;\;\tilde c \leq -1, \end{equation} and \[ \delta_{km}^c = \left(-\frac{1}{4}(\tilde c-1)^2, \frac{1}{4}(\tilde c +1)(3\tilde c -1)\right)\qquad\mbox{for}\;\;-1<\tilde c <-\varepsilon. \] In both cases, the lower boundary corresponds to the symmetric fold bifurcation and the upper one to the symmetric period doubling. Analogously, let us compute in the non-orientable case ($\tilde c >\varepsilon$) the corresponding intervals $\delta_{km}^c$ for the domains $I\!I_r$ and $V\!I_r$: \[ \begin{array}{ll} \delta_{km}^c = \left(-\frac{1}{4}(\tilde c-1)^2, \frac{1}{4}(\tilde c +1)(3\tilde c -1)\right) & \mbox{for}\;\;\varepsilon<\tilde c \leq 1/2; \\\\ \delta_{km}^c = \left(-\frac{1}{4}(\tilde c-1)^2, \frac{3}{4}(\tilde c -1)^2\right) &\mbox{for}\;\; 1/2 < \tilde c < 2 \;\;\mbox{and}\;\; \tilde c \neq 1; \\\\ \delta_{km}^c = \left(-\frac{1}{4}(\tilde c-1)^2, 1 - \frac{1}{4}(\tilde c -1)^2\right) &\mbox{for}\;\; \tilde c \geq 2 . \end{array} \] In all three cases, the lower boundary corresponds to a symmetric fold bifurcation. However, the upper boundary corresponds to a symmetric period doubling for the first and third cases and to a symmetry breaking pitchfork bifurcation for the intervals in the second case. Clearly, we will skip values of $k$ and $m$ such that $\tilde c =1$, that is, $\frac{c}{b}\lambda^{k-m}=1$. This is equivalent to saying that $k - m = \frac{1}{\ln\lambda}\ln\frac{b}{c}$. Finally, we represent the intervals $\delta_{km}^c$ as intervals of values of $\mu$ using the relations (\ref{resc12:th2}).
For example, for the intervals $\delta_{km}^c$ with $\tilde c \leq -1$ (see~\eqref{eq:IIelle}), we obtain the following expressions for their bifurcation boundaries $\mu_{km}^{c+}\in F$ and $\mu_{km}^{c-}\in P\!D_1$: \begin{eqnarray*} \mu_{km}^{c+} =&& - c\lambda^k \beta - \lambda^m \alpha + \frac{b^2}{4d}(\tilde c -1)^2\lambda^{2m}, \\ \mu_{km}^{c-} =&& - c\lambda^k \beta - \lambda^m \alpha + \frac{b^2}{d}\left(1 -\frac{1}{4}(\tilde c -1)^2\right)\lambda^{2m}. \end{eqnarray*} Analogous explicit formulas can be obtained for the remaining cases. $\Box$ \\ \section{Proof of Theorems 3 and 4} \label{sec:proofMainThm} \subsection{Proof of Theorem 3} \label{sec:sec6_1} The proof is quite standard (see, for instance,~\cite{N79,GST93b,GST99}). Namely, consider a single orbit $\Gamma_1$ and its neighbourhood $U_1$. From \cite{GST99} it is known that there exist values $\left\{ \mu_k\right\}_k$, with $\mu_k\to 0$ as $k\to\infty$, such that the map $f_{\mu_k}$ possesses in $U_1$ a hyperbolic invariant set $\Lambda_k$ (a Smale horseshoe) such that (i) $W^u(\Lambda_k)$ is quadratically tangent to $W^s(O_{\mu_k})$ and (ii), simultaneously, $W^u(O_{\mu_k})$ intersects $W^s(O_{\mu_k})$ transversally (see Fig.~\ref{fig1newh}). Since all periodic points in $\Lambda_k$ have Jacobian less than 1 (by condition \textsf{[C]}) and, by the $\lambda$-Lemma, their stable and unstable manifolds accumulate (in a $C^r$-sense) on $W^s(O_{\mu_k})$ and $W^u(O_{\mu_k})$, it follows that $\Lambda_k$ is a wild hyperbolic set (see~\cite{N79}). The latter assertion implies that, arbitrarily close to $\mu=0$, there exist intervals of values of $\mu$ for which $W^u(\Lambda_k)$ and $W^s(\Lambda_k)$ have points of quadratic tangency. Thus, one obtains that the values of $\mu$ for which the map $f_\mu$ has a nontransversal homoclinic orbit $\Gamma_{1\mu}\subset U_1$ are dense in these intervals. \begin{figure} \caption{Two examples of creation of secondary homoclinic tangencies to the point $O$ together with their Smale horseshoes.} \label{fig1newh} \end{figure} $\Box$ \\ \subsection{Proof of Theorem 4} \label{sec:sec6_2} The proof of this theorem follows from Theorems~\ref{th:th1} and~\ref{th:th2} and a standard nested-interval procedure applied to an arbitrary point of any interval $n_i$ from Theorem 3. Indeed, take any $\bar\mu\in n_i$. Arbitrarily close to $\bar\mu$ there is $\bar\mu_1\in n_i$ such that $f_{\bar\mu_1}$ has a couple of homoclinic tangencies of the initial type. Hence, by Theorem~\ref{th:th1}, near $\bar\mu_1$ there exists an interval $I_1 \subset n_i$ such that at $\mu \in I_1$ the diffeomorphism $f_\mu$ has a periodic couple ``sink-source''. In turn, since $n_i$ is a Newhouse interval, in $I_1$ we find an interval $I_2$ such that the diffeomorphism $f_\mu$ at $\mu\in I_2$ has, simultaneously, a periodic couple ``sink-source'' (as for $\mu\in I_1$) and a symmetric elliptic periodic orbit. Repeating this procedure beginning from the interval $I_2$, we obtain a sequence $I_2,I_4,\ldots$ of nested intervals such that at $\mu\in I_{2j}$ the diffeomorphism $f_\mu$ has $j$ periodic couples ``sink-source'' and $j$ symmetric elliptic periodic orbits. $\Box$ \\ \section{Some examples} \label{sec:ex} In this section we provide some simple examples of planar reversible maps undergoing a ``fish'' or figure-$8$ quadratic homoclinic tangency. They are Poincar\'e maps of periodically perturbed planar reversible differential systems.
By construction, hypotheses \textsf{[A,B]} will be straightforwardly satisfied. The fulfilment of condition \textsf{[C]} is expected, and can be checked numerically, thanks to the large freedom one has in producing many close variants of the periodic perturbations. The \emph{basic} systems will be the well-known \emph{Duffing equation} and the \emph{cubic potential} (the ``fish''), both Hamiltonian and reversible. A similar approach was followed by Duarte in~\cite{Duarte00}. \subsection{Perturbed Duffing equation} Let us consider the vertical Duffing equation \begin{equation} \label{eqn:vert_duffing} \left\{ \begin{array}{rcll} \dot{x} &=& y - y^3 & + \varepsilon f(x,y,t) \\ \dot{y} &=& x & + \varepsilon g(x,y,t). \end{array} \right. \end{equation} For $\varepsilon = 0$, system~(\ref{eqn:vert_duffing}) is Hamiltonian, reversible (with respect to the linear involutions $R(x,y)=(x,-y)$ and $S(x,y)=(-x,y)$) and presents a couple of ($R$-)symmetric homoclinic solutions to the origin. These figure-$8$ homoclinic curves (single-round $12$-orbits) can be parameterized by $\Gamma_h^{\pm}(t)=(x_h(t),\pm y_h(t))$, where \begin{equation*} x_h(t)=-\sqrt{2} \mathrm{sech}(t) \tanh(t), \qquad y_h(t)=\sqrt{2} \mathrm{sech}(t) \end{equation*} for $t \in (-\infty,+\infty)$. Moreover, the following properties hold: (i) $x_h(t)=\dot{y}_h(t)$; (ii) $(x_h(0),y_h(0))=(0,\sqrt{2})$; (iii) $y_h(t)$ has a pole of order $1$ at the points $\pm \pi \mathrm{i} /2$ (and, therefore, $x_h(t)$ has poles of order $2$ at the same points). Our aim is to provide some examples of periodic perturbations of~(\ref{eqn:vert_duffing}), preserving the $R$-reversibility but not, in general, the Hamiltonian character, such that the homoclinic invariant curves of the origin undergo a quadratic tangency (and, therefore, infinitely many such tangencies occur). It is straightforward to check that, for $\varepsilon \ne 0$, system~(\ref{eqn:vert_duffing}) is $R$-(time) reversible if and only if $f(x,-y,-t)= -f(x,y,t)$ and $g(x,-y,-t)= g(x,y,t)$. The existence of quadratic homoclinic tangencies will be established by selecting a suitable simple perturbation and parameters $\omega_j, t_0^*$ such that the corresponding \emph{Melnikov function} $M(t_0)$ has a double zero at $t_0=t_0^*$. The Melnikov function is given by \begin{equation*} M(t_0)= \int_{-\infty}^{+\infty} \left(F \wedge G \right) \left( x_h(t),y_h(t),t+t_0 \right) \, dt, \end{equation*} where \[ F(x,y)= \left( \begin{array}{c} y-y^3 \\ x \end{array} \right), \qquad G(x,y,t)= \left( \begin{array}{c} f(x,y,t) \\ g(x,y,t) \end{array} \right) \] and $F\wedge G = (y-y^3) g(x,y,t) - x f(x,y,t)$. To produce such an example, we restrict ourselves to the case where $g\equiv 0$ and $f(x,y,t)$ is a (periodic) linear combination of \emph{odd} functions of the form $x \sin \omega t$, that is, \begin{equation*} \left\{ \begin{array}{rcll} \dot{x} &=& y - y^3 & + \varepsilon x \sum_{j=0}^N a_j \sin \omega_j t\\ \dot{y} &=& x & \end{array} \right. \end{equation*} with commensurable $\omega_0, \omega_1, \ldots, \omega_N$.
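In the computation that follows we use the elementary expansion
\[
\sin \omega_j (t+t_0) \;=\; \sin (\omega_j t)\cos (\omega_j t_0) \;+\; \cos (\omega_j t)\sin(\omega_j t_0),
\]
together with the fact that $x_h^2(t)$ is even in $t$, so that only the term containing $\cos(\omega_j t)$ can contribute to the integral over $(-\infty,+\infty)$.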
Having in mind that $x_h^2(t) \sin \omega_j t$ is an odd function in $t$ (and, therefore, its integral over $(-\infty,+\infty)$ is null) it follows that \begin{eqnarray*} \lefteqn{ M(t_0) = - \sum_{j=0}^N a_j \int_{-\infty}^{+\infty} x_h^2(t) \sin \omega_j(t+t_0) \, dt = } \\ &{\displaystyle - \sum_{j=0}^N a_j \left( \int_{-\infty}^{+\infty} x_h^2(t) \cos \omega_j t \, dt \right) \sin \omega_j t_0 = }\\ &{\displaystyle - \frac{\mathrm{e}^{\pi/2}}{3 \sinh(\pi/2)} \sum_{j=0}^N \left( a_j \sinh\left( \frac{\pi \omega_j}{2} \right) \, (\omega_j^2 - 2) \omega_j \sin \omega_j t_0 \right),} \end{eqnarray*} provided by the \emph{residues} integration \[ \mathrm{\,Res}\left( x_h^2(t) \cos \omega_j t,t= \frac{\pi \mathrm{i}}{2} \right) = \frac{\mathrm{e}^{\pi/2}}{3 \sinh(\pi/2)} \sinh\left( \frac{\pi \omega_j}{2} \right) \, (\omega_j^2 - 2) \omega_j. \] Let us consider as a particular example, the case $\omega_0=1, \omega_1 = \omega \in \ensuremath{\mathbb{Z}} \setminus \left\{ 1 \right\}$, $a_0=\alpha$ and $a_1=\beta$ with $\alpha\beta \ne 0$. Indeed, \begin{equation*} \left\{ \begin{array}{rcll} \dot{x} &=& y - y^3 & + \varepsilon x \left( \alpha \sin t + \beta \sin \omega t \right)\\ \dot{y} &=& x. & \end{array} \right. \end{equation*} Now, the Melnikov function reads \[ M(t_0)= - \frac{\mathrm{e}^{\pi/2}}{3\sinh(\pi/2)} \left( - \alpha \sinh\left( \frac{\pi}{2} \right) \, \sin t_0 \right. \\[1.2ex] \left. + \beta \sinh \left( \frac{\pi\omega}{2} \right) \, (\omega^2-2) \omega \, \sin \omega t_0 \right). \] We seek for values of $\omega$ and $t_0$ satisfying that $M(t_0)=M'(t_0)=0$ and $M''(t_0)\ne 0$, i.e., giving rise to a quadratic homoclinic tangency. Denoting $A=\alpha \sinh(\pi/2)$ and $B_{\omega}= \beta \sinh(\pi\omega/2) \, (\omega^2 -2) \omega $, this is equivalent to look for double zeroes of $\varphi_{\omega}(t_0)= -A \sin t_0 + B_{\omega} \sin \omega t_0$. Since $\beta\ne 0$ and $\omega \ne 0$ it turns out that $B_{\omega}$ does not vanish as well. It is straightforward to check that $\varphi_{\omega}(t_0)=\varphi_{\omega}'(t_0)=0$, $\varphi_{\omega}''(t_0)\ne 0$ reduces to find $\omega$ and $t_0$ with $\omega t_0 \ne k \pi$, for $k\in \ensuremath{\mathbb{Z}}$, satisfying $A \sin t_0 = B_{\omega} \sin \omega t_0$ and $A \cos t_0 = \omega B_{\omega} \cos \omega t_0$. It is simple to prove that there is no solution $t_0$ for $\omega=2$. Indeed, $\omega t_0 \notin \pi \ensuremath{\mathbb{Z}}$ implies that $t_0 \ne k\pi/2$ for $k\in \ensuremath{\mathbb{Z}}$. Imposing the two other conditions leads us, first, to $A=2B_2 \cos t_0$ and, second, to $\sin t_0=0$, a contradiction with the fact that $t_0 \ne k\pi/2$. If we choose $\omega=3$ and (for instance) $t_0=\pi/2$, that is \begin{equation*} \left\{ \begin{array}{rcll} \dot{x} &=& y - y^3 & + \varepsilon x \left( \alpha \sin t + \beta \sin 3 t \right)\\ \dot{y} &=& x, & \end{array} \right. \end{equation*} the latter conditions reduce to $B_3=-A$ and having in mind that $A=\alpha \sinh(\pi/2)$ and $B_{3}= 21 \beta \sinh(3\pi/2)$ it follows that we have a quadratic homoclinic point at $t_0=\pi/2$ for $\omega=3$ provided \[ \beta = - \frac{\sinh(\pi/2)}{21 \sinh(3\pi/2)} \, \alpha. \] \subsection{Perturbed ``fish'' equation} This example of single-round 1- and 2-orbits, based on the \emph{fish} equation, is given by \begin{equation*} \left\{ \begin{array}{rcll} \dot{x} &=& y & + \varepsilon f(x,y,t)\\ \dot{y} &=& x - x^2 & + \varepsilon g(x,y,t). \end{array} \right. 
\end{equation*} For $\varepsilon=0$ this fish equation is (time) $R$-reversible, with $R$ the involution $(x,y)\mapsto (x,-y)$, and presents an ($R$-)symmetric homoclinic solution to the origin, namely, $\Gamma_h(t)=(x_h(t),y_h(t))$, where \[ x_h(t)=\frac{3}{2} \mathrm{sech}^2 \left( \frac{t}{2} \right), \quad y_h(t)=\dot{x}_h(t)=-\frac{3}{2} \mathrm{sech}^2 \left( \frac{t}{2} \right) \tanh \left( \frac{t}{2} \right). \] The function $x_h(t)$ has poles of order $2$ at $\pm \pi \mathrm{i}$ and, therefore, $y_h(t)$ has poles of order $3$ at the same points. If we ask the perturbation $(f,g)$ to preserve the $R$-reversibility, it must satisfy $f(x,-y,-t)=-f(x,y,t)$ and $g(x,-y,-t)=g(x,y,t)$. Proceeding as in the previous example, the Melnikov function for a general reversible perturbation $(f,g)$ reads as follows: \begin{eqnarray*} \lefteqn{M(t_0) = \int_{-\infty}^{+\infty} \left(F \wedge G \right) \left( x_h(t),y_h(t),t+t_0 \right) \, dt = } \\ &{\displaystyle \int_{-\infty}^{+\infty} y_h(t) g(x_h(t),y_h(t),t+t_0) \, dt - \int_{-\infty}^{+\infty} (x_h(t)- x_h^2(t)) f(x_h(t),y_h(t),t+t_0) \, dt.} \end{eqnarray*} As before, we restrict ourselves to a simpler case, namely, \[ f\equiv 0, \qquad \qquad g(x,y,t)=g(y,t) = y \sum_{j=0}^N b_j \sin \omega_j t, \] again with $\omega_0, \omega_1, \ldots, \omega_N$ commensurable. As we did for the Duffing equation, we select a simple example giving rise to a quadratic homoclinic point. We choose $\omega_0=2$, $\omega_1=6$ (the smallest frequencies for which this works), $t_0=\pi/4$, and denote $b_0=\alpha$, $b_1=\beta$ (with $\alpha \beta \ne 0$). That is, \begin{equation*} \left\{ \begin{array}{rcll} \dot{x} &=& y & \\ \dot{y} &=& x - x^2 & + \varepsilon y \left( \alpha \sin 2t + \beta \sin 6t \right). \end{array} \right. \end{equation*} Thus, our Melnikov function reads \begin{eqnarray*} M(t_0)=& \frac{4}{5}\pi \left( \alpha \sinh(2\pi) \cdot (2^4-1)\cdot 2 \cdot \sin(2 t_0) \right. \\ & \left.+ \beta \sinh(6\pi) \cdot (6^4-1) \cdot 6 \cdot \sin(6 t_0) \right), \end{eqnarray*} which can be written as $A \sin 2t_0 + B \sin 6t_0$ with \[ A=\frac{4}{5}\pi \alpha \sinh(2\pi) \cdot (2^4-1) \cdot 2, \quad B=\frac{4}{5}\pi \beta \sinh(6\pi) \cdot (6^4-1) \cdot 6. \] Taking $A=B$, it follows that $M(\pi/4)=M'(\pi/4)=0$ and $M''(\pi/4)=32B \ne 0$, which provides the condition \[ \beta = \frac{(2^4-1) \sinh(2\pi)}{3 (6^4-1)\sinh(6\pi)}\, \alpha. \] \end{document}
\begin{document} \title[Syntomic cohomology and $p$-adic regulators for varieties over $p$-adic fields] {Syntomic cohomology and $p$-adic regulators for varieties over $p$-adic fields} \author{Jan Nekov\'a\v{r}, Wies{\l}awa Nizio{\l}} \date{\today} \thanks{The authors' research was supported in part by the grant ANR-BLAN-0114 and by the NSF grant DMS0703696, respectively.} \email{[email protected], [email protected]} \begin{abstract} We show that the logarithmic version of the syntomic cohomology of Fontaine and Messing for semistable varieties over $p$-adic rings extends uniquely to a cohomology theory for varieties over $p$-adic fields that satisfies $h$-descent. This new cohomology - syntomic cohomology - is a Bloch-Ogus cohomology theory, admits a period map to \'etale cohomology, and has a syntomic descent spectral sequence (from an algebraic closure of the given field to the field itself) that is compatible with the Hochschild-Serre spectral sequence on the \'etale side and is related to the Bloch-Kato exponential map. In relative dimension zero we recover the potentially semistable Selmer groups and, as an application, we prove that Soul\'e's \'etale regulators land in the potentially semistable Selmer groups. Our construction of syntomic cohomology is based on new ideas and techniques developed by Beilinson and Bhatt in their recent work on $p$-adic comparison theorems. \end{abstract} \maketitle \tableofcontents \section{Introduction} In this article we define syntomic cohomology for varieties over $p$-adic fields, relate it to the Bloch-Kato exponential map, and use it to study the images of Soul\'e's \'etale regulators. Contrary to all the previous constructions of syntomic cohomology (see below for a brief review) we do not restrict ourselves to varieties coming with a nice model over the integers. Hence our syntomic regulators make no integrality assumptions on the $K$-theory classes in the domain. \subsection{Statement of the main result} Recall that, for varieties proper and smooth over a $p$-adic ring of mixed characteristic, syntomic cohomology (or its non-proper variant: syntomic-\'etale cohomology) was introduced by Fontaine and Messing \cite{FM} in their proof of the Crystalline Comparison Theorem as a natural bridge between crystalline cohomology and \'etale cohomology. It was generalized to log-syntomic cohomology for semistable varieties by Kato \cite{Kas}. For a log-smooth scheme ${\mathcal{X}}$ over a complete discrete valuation ring $V$ of mixed characteristic $(0,p)$ with a perfect residue field, and for any $r\geq 0$, rational log-syntomic cohomology of ${\mathcal{X}}$ can be defined as the "filtered Frobenius eigenspace" in log-crystalline cohomology, i.e., as the following mapping fiber \begin{equation} \label{first} \mathrm {R} \Gamma_{ \operatorname{syn} }({\mathcal{X}},r):=\operatorname{Cone} \big(\mathrm {R} \Gamma_{\operatorname{cr} }({\mathcal{X}},{\mathcal J}^{[r]})\lomapr{1-\varphi_r}\mathrm {R} \Gamma_{\operatorname{cr} }({\mathcal{X}})\big)[-1], \end{equation} where $\mathrm {R} \Gamma_{\operatorname{cr} }(\cdot,{\mathcal J}^{[r]})$ denotes the absolute rational log-crystalline cohomology (i.e., over ${\mathbf Z}_p$) of the $r$'th Hodge filtration sheaf ${\mathcal J}^{[r]}$ and $\varphi_r$ is the crystalline Frobenius divided by $p^r$. This definition suggested that log-syntomic cohomology could be the sought-for $p$-adic analog of Deligne-Beilinson cohomology.
Recall that, for a complex manifold $X$, the latter can be defined as the cohomology $\mathrm {R} \Gamma(X,{\mathbf Z}(r)_{{\mathcal{D}}})$ of the Deligne complex ${\mathbf Z}(r)_{{\mathcal{D}}}$: $$0\to {\mathbf Z}(r)\to {\mathcal O}_X\to \Omega^1_X\to\Omega^2_X\to\ldots \to \Omega^{r-1}_X\to 0. $$ And, indeed, since its introduction, log-syntomic cohomology has been used with some success in the study of special values of $p$-adic $L$-functions and in formulating $p$-adic Beilinson conjectures (cf. \cite{BE} for a review). The syntomic cohomology theory with $\mathbf{Q}_p$-coefficients $R\Gamma_{ \operatorname{syn} }(X_h,r)$ ($r\geq 0$) for arbitrary varieties -- more generally, for arbitrary essentially finite diagrams of varieties -- over the $p$-adic field $K$ (the fraction field of $V$) that we construct in this article is a generalization of Fontaine-Messing(-Kato) log-syntomic cohomology. That is, for a semistable scheme \footnote{Throughout the Introduction, the divisors at infinity of semistable schemes have no multiplicities.} ${\mathcal{X}}$ over $V$ we have $\mathrm {R} \Gamma_{ \operatorname{syn} }({\mathcal{X}},r)\simeq\mathrm {R} \Gamma_{ \operatorname{syn} }(X_h,r)$, where $X$ is the largest subvariety of ${\mathcal{X}}_K$ with trivial log-structure. An analogous theory $R\Gamma_{ \operatorname{syn} }(X_{\overline{K} ,h},r)$ ($r\geq 0$) exists for (diagrams of) varieties over $\overline{K} $, where $\overline{K} $ is an algebraic closure of $K$. Our main result can be stated as follows. \begin{thmx} \label{main1} For any variety $X$ over $K$, there is a canonical graded commutative dg $\mathbf{Q}_p$-algebra $\mathrm {R} \Gamma_{ \operatorname{syn} }(X_h,*)$ such that \begin{enumerate} \item it is the unique extension of log-syntomic cohomology to varieties over $K$ that satisfies $h$-descent, i.e., for any hypercovering $\pi: Y_{\scriptscriptstyle\bullet}\to X$ in the $h$-topology, we have a quasi-isomorphism $$\pi^*:\mathrm {R} \Gamma_{ \operatorname{syn} }(X_h,*)\stackrel{\sim}{\to}\mathrm {R} \Gamma_{ \operatorname{syn} }(Y_{\scriptscriptstyle\bullet,h},*). $$ \item it is a Bloch-Ogus cohomology theory \cite{BO}. \item for $X=\operatorname{Spec} (K)$, $H^*_{ \operatorname{syn} }(X_h,r)\simeq H^*_{\operatorname{st} }(G_K,{\mathbf Q}_p(r))$, where $H^i_{\operatorname{st} }(G_K, -)$ denotes the $\operatorname{Ext} $-group $\operatorname{Ext} ^i(\mathbf{Q}_p, -)$ in the category of (potentially) semistable representations of $G_K=\operatorname{Gal} (\overline{K} /K)$. \item There are functorial syntomic period morphisms \[ \rho_{ \operatorname{syn} }: R\Gamma_{ \operatorname{syn} }(X_h,r)\to R\Gamma(X_{\operatorname{\acute{e}t} },{\mathbf Q}_p(r)),\qquad \rho_{ \operatorname{syn} }: R\Gamma_{ \operatorname{syn} }(X_{\overline{K} ,h},r) \to R\Gamma(X_{\overline{K} ,\operatorname{\acute{e}t} },{\mathbf Q}_p(r)) \] compatible with products, which induce quasi-isomorphisms \[ \tau_{\leq r} R\Gamma_{ \operatorname{syn} }(X_h,r) \stackrel{\sim}{\to} \tau_{\leq r} R\Gamma(X_{\operatorname{\acute{e}t} },{\mathbf Q}_p(r)),\qquad \tau_{\leq r} R\Gamma_{ \operatorname{syn} }(X_{\overline{K} ,h},r) \stackrel{\sim}{\to} \tau_{\leq r} R\Gamma(X_{\overline{K} ,\operatorname{\acute{e}t} },{\mathbf Q}_p(r)).
\] \item The Hochschild-Serre spectral sequence for \'etale cohomology \[ ^{\operatorname{\acute{e}t} }E^{i,j}_2 = H^i(G_K,H^j(X_{\overline{K} ,\operatorname{\acute{e}t} },{\mathbf Q}_p(r))) \Longrightarrow H^{i+j}(X_{\operatorname{\acute{e}t} },{\mathbf Q}_p(r)) \] has a syntomic analog \[ ^{ \operatorname{syn} }E^{i,j}_2 = H^i_{\operatorname{st} }(G_K,H^j(X_{\overline{K} ,\operatorname{\acute{e}t} },{\mathbf Q}_p(r))) \Longrightarrow H^{i+j}_{ \operatorname{syn} }(X_{h},r). \] \item There is a canonical morphism of spectral sequences ${}^{ \operatorname{syn} }E_t \to {}^{\operatorname{\acute{e}t} }E_t$ compatible with the syntomic period map. \item There are syntomic Chern classes \[ {c}_{i,j}^{ \operatorname{syn} } \colon K_j(X) \to H^{2i-j}_{ \operatorname{syn} }(X_{h},i) \] compatible with \'etale Chern classes via the syntomic period map. \end{enumerate} \end{thmx} As is shown in \cite{DN}, syntomic cohomology $R\Gamma_{ \operatorname{syn} }(X_h,*)$ can be interpreted as an absolute $p$-adic Hodge cohomology. That is, it is a derived $\operatorname{{\cal Ho}} m$ in the category of admissible $(\varphi, N, G_K)$-modules between the trivial module and a complex of such modules canonically associated to a variety. Alternatively, it is a derived $\operatorname{{\cal Ho}} m$ in the category of potentially semistable representations between the trivial representation and a complex of such representations canonically associated to a variety. A particularly simple construction of such a complex, using Beilinson's Basic Lemma, was proposed by Beilinson (and is presented in \cite{DN}). The category of modules over the syntomic cohomology algebra $R\Gamma_{ \operatorname{syn} }(X_h,*)$ (taken in a motivic sense) yields a category of $p$-adic Galois representations that better approximates the category of geometric representations than the category of potentially semistable representations \cite{DN}. For further applications of syntomic cohomology algebra we refer the interested reader to Op. cit. Similarly, as is shown in \cite{HT}, geometric syntomic cohomology $R\Gamma_{ \operatorname{syn} }(X_{\overline{K} ,h},*)$ is a derived $\operatorname{{\cal Ho}} m$ in the category of effective $\varphi$-gauges (with one paw) \cite{LF1} between the trivial gauge and a complex of such gauges canonically associated to a variety. In particular, geometric syntomic cohomology group is a finite dimensional Banach-Colmez Space \cite{CB}, hence has a very rigid structure. The syntomic descent spectral sequence and its compatibility with the Hochschild-Serre spectral sequence in \'etale cohomology imply the following proposition. \begin{proposition}Let $i\geq 0$. The composition \begin{align*} H^{i-1}_{\mathrm{dR}}(X)/F^r \operatorname{st} ackrel{\partial}{\to} H^i_{ \operatorname{syn} }(X_h,r)\operatorname{st} ackrel{\rho_{ \operatorname{syn} }}{\longrightarrow} H^i_{\operatorname{\acute{e}t} }(X,\mathbf{Q}_p(r))\to H^i_{\operatorname{\acute{e}t} }(X_{\overline{K} },\mathbf{Q}_p(r)) \end{align*} is the zero map. The induced (from the syntomic descent spectral sequence) map $$ H^{i-1}_{\mathrm{dR}}(X)/F^{r}\to H^1(G_K,H^{i-1}_{\operatorname{\acute{e}t} }(X_{\overline{K} },\mathbf{Q}_p(r))) $$ is equal to the Bloch-Kato exponential associated with the Galois representation $H^{i-1}_{\operatorname{\acute{e}t} }(X_{\overline{K} },\mathbf{Q}_p(r))$. 
\end{proposition} This yields a comparison between $p$-adic \'etale regulators, syntomic regulators, and the Bloch-Kato exponential (which was proved in the good reduction case in \cite{JS} and \cite[Thm. 5.2]{NS}\footnote{The Bloch-Kato exponential is called $l$ there.}) that is of fundamental importance for theory of special values of $L$-functions, both complex valued and $p$-adic. The point is that syntomic regulators can be thought of as an abstract $p$-adic integration theory. The comparison results stated above then relate certain $p$-adic integrals to the values of the $p$-adic \'etale regulator via the Bloch-Kato exponential map. A modification of syntomic cohomology developed in \cite{BS} in the good reduction case (resp. in \cite{BLZ} -- using the techniques of the present article -- in the case of arbitrary reduction) can be used to perform explicit computations. For example, the formulas from \cite[\S{3}]{BLZ} were applied to a calculation of certain $p$-adic regulators in \cite{BDR} and in \cite{DR}. {\mathcal{U}}bsection{Construction of syntomic cohomology} We will now sketch the proof of Theorem \ref{main1}. Recall first that a little bit after log-syntomic cohomology had appeared on the scene, Selmer groups of Galois representations -- describing extensions in certain categories of Galois representations -- were introduced by Bloch and Kato \cite{BK} and linked to special values of $L$-functions. And a syntomic cohomology (in the good reduction case), a priori different than that of Fontaine-Messing, was defined in \cite{NS} and by Besser in \cite{BS} as a higher dimensional analog of the complexes computing these groups. The guiding idea here was that just as Selmer groups classify extensions in certain categories of "geometric" Galois representations their higher dimensional analogs -- syntomic cohomology groups -- should classify extensions in a category of "$p$-adic motivic sheaves". This was shown to be the case for $H^1$ by Bannai \cite{Ban} who has also shown that Besser's (rigid) syntomic cohomology is a $p$-adic analog of Beilinson's absolute Hodge cohomology \cite{BE0}. Complexes computing the semistable and potentially semistable Selmer groups were introduced in \cite{JH} and \cite{FPR}. For a semistable scheme ${\mathcal{X}}$ over $V$, their higher dimensional analog can be written as the following homotopy limit\footnote{See section 1.5 for an explanation of the notation we use for certain homotopy limits.} \begin{equation} \label{first2} \mathrm {R} \Gamma^{^{\operatorname{pr} ime}me}_{ \operatorname{syn} }({\mathcal{X}},r):=\left[ \begin{aligned}\xymatrix@=40pt{ \mathrm {R} \Gamma_{\mathrm{HK}}({\mathcal{X}}_0)\ar[d]^{N}\ar[r]^-{(1-\varphi_r,\iota_{\mathrm{dR}})} & \mathrm {R} \Gamma_{\mathrm{HK}}({\mathcal{X}}_0)\oplus \mathrm {R} \Gamma_{\mathrm{dR}}({\mathcal{X}}_K)/F^r\ar[d]^{(N,0)}\\ \mathrm {R} \Gamma_{\mathrm{HK}}({\mathcal{X}}_0)\ar[r]^-{1-\varphi_{r-1}} & \mathrm {R} \Gamma_{\mathrm{HK}}({\mathcal{X}}_0) }\end{aligned}\right] \end{equation} where ${\mathcal{X}}_0$ is the special fiber of ${\mathcal{X}}$, $\mathrm {R} \Gamma_{\mathrm{HK}}(\operatorname{cd} ot)$ is the Hyodo-Kato cohomology, $N$ denotes the Hyodo-Kato monodromy, and $\mathrm {R} \Gamma_{\mathrm{dR}}(\operatorname{cd} ot)$ is the logarithmic de Rham cohomology. 
The map $\iota_{\mathrm{dR}}$ is the Hyodo-Kato morphism that induces a quasi-isomorphism $\iota_{\mathrm{dR}}:\mathrm {R} \Gamma_{\mathrm{HK}}({\mathcal{X}}_0)\otimes_{K_0}K\operatorname{st} ackrel{\sim}{\to}\mathrm {R} \Gamma_{\mathrm{dR}}({\mathcal{X}}_K)$, for $K_0$ - the fraction field of Witt vectors of the residue field of $V$. Using Dwork's trick, we prove (cf. Proposition \ref{reduction1}) that the two definitions of log-syntomic cohomology are the same, i.e., that there is a quasi-isomorphism $$\alpha_{ \operatorname{syn} }: \mathrm {R} \Gamma_{ \operatorname{syn} }({\mathcal{X}},r)\operatorname{st} ackrel{\sim}{\to}\mathrm {R} \Gamma_{ \operatorname{syn} }^{^{\operatorname{pr} ime}me}({\mathcal{X}},r). $$ It follows that log-syntomic cohomology groups vanish in degrees strictly higher than $2\dim X_K +2$ and that, if ${\mathcal{X}}=\operatorname{Spec} (V)$, then $H^i\mathrm {R} \Gamma_{ \operatorname{syn} }({\mathcal{X}},r)\simeq H^i_{\operatorname{st} }(G_K,{\mathbf Q}_p(r))$. The syntomic cohomology for varieties over $p$-adic fields that we introduce in this article is a generalization of the log-syntomic cohomology of Fontaine and Messing. Observe that it is clear how one can try to use log-syntomic cohomology to define syntomic cohomology for varieties over fields that satisfies $h$-descent. Namely, for a variety $X$ over $K$, consider the $h$-topology of $X$ and recall that (using alterations) one can show that it has a basis consisting of semistable models over finite extensions of $V$ \cite{BE1}. By $h$-sheafifying the complexes $Y\mapsto \mathrm {R} \Gamma_{ \operatorname{syn} }(Y,r)$ (for a semistable model $Y$) we get syntomic complexes ${\mathcal{S}}(r)$. We define the ({\it arithmetic}) syntomic cohomology as $$ \mathrm {R} \Gamma_{ \operatorname{syn} }(X_h,r):=\mathrm {R} \Gamma(X_h,{\mathcal{S}}(r)). $$ A priori it is not clear that the so defined syntomic cohomology behaves well: the finite ramified field extensions introduced by alterations are in general a problem for log-crystalline cohomology. For example, the related complexes $\mathrm {R} \Gamma_{\operatorname{cr} }(X_h,{\mathcal J}^{[r]})$ are huge. However, taking Frobenius eigenspaces cuts off the "noise" and the resulting syntomic complexes do indeed behave well. To get an idea why this is the case, $h$-sheafify the complexes $Y\mapsto \mathrm {R} \Gamma^{^{\operatorname{pr} ime}me}_{ \operatorname{syn} }(Y,r)$ and imagine that you can sheafify the maps $\alpha_{ \operatorname{syn} }$ as well. We get sheaves ${\mathcal{S}}^{^{\operatorname{pr} ime}me}(r)$ and quasi-isomorphisms $\alpha_{ \operatorname{syn} }:{\mathcal{S}}(r)\operatorname{st} ackrel{\sim}{\to}{\mathcal{S}}^{^{\operatorname{pr} ime}me}(r)$. 
Setting $\mathrm {R} \Gamma^{^{\operatorname{pr} ime}me}_{ \operatorname{syn} }(X_h,r):=\mathrm {R} \Gamma(X_h,{\mathcal{S}}^{^{\operatorname{pr} ime}me}(r))$ we obtain the following quasi-isomorphisms \begin{equation} \label{first3} \mathrm {R} \Gamma_{ \operatorname{syn} }(X_h,r)\simeq \mathrm {R} \Gamma^{^{\operatorname{pr} ime}me}_{ \operatorname{syn} }(X_h,r)\simeq \left[ \begin{aligned}\xymatrix@=40pt{ \mathrm {R} \Gamma_{\mathrm{HK}}(X_h)\ar[d]^{N}\ar[r]^-{(1-\varphi_r,\iota_{\mathrm{dR}})} & \mathrm {R} \Gamma_{\mathrm{HK}}(X_h)\oplus \mathrm {R} \Gamma_{\mathrm{dR}}(X_K)/F^r\ar[d]^{(N,0)}\\ \mathrm {R} \Gamma_{\mathrm{HK}}(X_h)\ar[r]^-{1-\varphi_{r-1}} & \mathrm {R} \Gamma_{\mathrm{HK}}(X_h) }\end{aligned}\right] \end{equation} where $\mathrm {R} \Gamma_{\mathrm{HK}}(X_h)$ denotes the Hyodo-Kato cohomology (defined as $h$-cohomology of the presheaf: $Y\mapsto \mathrm {R} \Gamma_{\mathrm{HK}}(Y_0)$) and $\mathrm {R} \Gamma_{\mathrm{dR}}(\operatorname{cd} ot)$ is Deligne's de Rham cohomology \cite{De}. The Hyodo-Kato map $\iota_{\mathrm{dR}}$ is the $h$-sheafification of the logarithmic Hyodo-Kato map. It is well-known that Deligne's de Rham cohomology groups are finite rank $K$-vector spaces; it turns out that the Hyodo-Kato cohomology groups are finite rank $K_0$-vector spaces: we have a quasi-isomorphism $\mathrm {R} \Gamma_{\mathrm{HK}}(X_h)\operatorname{st} ackrel{\sim}{\to}\mathrm {R} \Gamma_{\mathrm{HK}}(X_{\overline{K} ,h})^{G_K}$ and the geometric Hyodo-Kato groups $H^*\mathrm {R} \Gamma_{\mathrm{HK}}(X_{\overline{K} ,h})$ are finite rank $K_0^{\operatorname{nr} }$-vector spaces, where $K_0^{\operatorname{nr} }$ is the maximal unramified extension of $K_0$ (see (\ref{trivialization}) below). It follows that syntomic cohomology groups vanish in degrees higher than $2\dim X_K+2$ and that syntomic cohomology is, in fact, a generalization of the classical log-syntomic cohomology, i.e., for a semistable scheme ${\mathcal{X}}$ over $V$ we have $\mathrm {R} \Gamma_{ \operatorname{syn} }({\mathcal{X}},r)\simeq\mathrm {R} \Gamma_{ \operatorname{syn} }(X_h,r)$, where $X$ is the largest subvariety of ${\mathcal{X}}_K$ with trivial log-structure. This follows from the quasi-isomorphism $\alpha_{ \operatorname{syn} }$: logarithmic Hyodo-Kato and de Rham cohomologies (over a fixed base) satisfy proper descent and the finite fields extensions that appear as the "noise" in alterations do not destroy anything since logarithmic Hyodo-Kato and de Rham cohomologies satisfy finite Galois descent. Alas, we were not able to sheafify the map $\alpha_{ \operatorname{syn} }$. The reason for that is that the construction of $\alpha_{ \operatorname{syn} }$ uses a twist by a high power of Frobenius -- a power depending on the field $K$. And alterations are going to introduce a finite extension of $K$ -- hence a need for higher and higher powers of Frobenius. So instead we construct directly the map $\alpha_{ \operatorname{syn} }:\mathrm {R} \Gamma_{ \operatorname{syn} }(X_h,r)\to\mathrm {R} \Gamma^{^{\operatorname{pr} ime}me}_{ \operatorname{syn} }(X_h,r) $. To do that we show first that the syntomic cohomological dimension of $X$ is finite. Then we take a semistable $h$-hypercovering of $X$, truncate it at an appropriate level, extend the base field $K$ to $K^{^{\operatorname{pr} ime}me}$, and base-change everything to $K^{^{\operatorname{pr} ime}me}$. There we can work with one field and use the map $\alpha_{ \operatorname{syn} }$ defined earlier. Finally, we show that we can descend. 
\subsection{Syntomic period maps} We pass now to the construction of the period maps from syntomic to \'etale cohomology that appear in Theorem \ref{main1}. They are easier to define over $\overline{K} $, i.e., from the {\it geometric} syntomic cohomology. In this setting, things go more smoothly with $h$-sheafification since going all the way up to $\overline{K} $ before completing kills a lot of "noise" in log-crystalline cohomology. More precisely, for a semistable scheme ${\mathcal{X}}$ over $V$, we have the following canonical quasi-isomorphisms \cite{BE2} \begin{equation} \label{trivialization} \iota_{\operatorname{cr} }: \mathrm {R} \Gamma_{\mathrm{HK}}({\mathcal{X}}_{\overline{V} })^{\tau}_{B^+_{\operatorname{cr} }}\stackrel{\sim}{\to}\mathrm {R} \Gamma_{\operatorname{cr} }({\mathcal{X}}_{\overline{V} }),\quad \iota_{\mathrm{dR}}: \mathrm {R} \Gamma_{\mathrm{HK}}({\mathcal{X}}_{\overline{V} })^{\tau}_{\overline{K} }\stackrel{\sim}{\to} \mathrm {R} \Gamma_{\mathrm{dR}}({\mathcal{X}}_{\overline{K} }), \end{equation} where $\overline{V} $ is the integral closure of $V$ in $\overline{K} $, $B^+_{\operatorname{cr} }$ is the crystalline period ring, and $\tau$ denotes a certain twist. These quasi-isomorphisms $h$-sheafify well: for a variety $X$ over $K$, they induce the following quasi-isomorphisms \cite{BE2} \begin{equation} \label{trivialization1} \iota_{\operatorname{cr} }: \mathrm {R} \Gamma_{\mathrm{HK}}(X_{\overline{K} ,h})^{\tau}_{B^+_{\operatorname{cr} }}\stackrel{\sim}{\to}\mathrm {R} \Gamma_{\operatorname{cr} }(X_{\overline{K} ,h}),\quad \iota_{\mathrm{dR}}: \mathrm {R} \Gamma_{\mathrm{HK}}(X_{\overline{K} ,h})^{\tau}_{\overline{K} }\stackrel{\sim}{\to} \mathrm {R} \Gamma_{\mathrm{dR}}(X_{\overline{K} }), \end{equation} where the terms have obvious meaning. Since Deligne's de Rham cohomology has proper descent (by definition), it follows that $h$-crystalline cohomology behaves well. That is, if we define crystalline sheaves ${\mathcal J}^{[r]}_{\operatorname{cr} }$ and ${\mathcal{A}}_{\operatorname{cr} }$ on $X_{\overline{K} ,h}$ by $h$-sheafifying the complexes $Y\mapsto \mathrm {R} \Gamma_{\operatorname{cr} }(Y,{\mathcal J}^{[r]})$ and $Y\mapsto \mathrm {R} \Gamma_{\operatorname{cr} }(Y)$, respectively, for those $Y$ which are base changes to $\overline{V} $ of semistable schemes over finite extensions of $V$ (such schemes $Y$ form a basis of $X_{\overline{K} ,h}$), then the complexes $\mathrm {R} \Gamma(X_{\overline{K} ,h},{\mathcal J}^{[r]})$ and $\mathrm {R} \Gamma_{\operatorname{cr} }(X_{\overline{V} ,h}):=\mathrm {R} \Gamma(X_{\overline{K} ,h},{\mathcal{A}}_{\operatorname{cr} })$ generalize log-crystalline cohomology (in the sense described above) and the latter one is a perfect complex of $B^+_{\operatorname{cr} }$-modules. We obtain syntomic complexes ${\mathcal{S}}(r)$ on $X_{\overline{K} ,h}$ by $h$-sheafifying the complexes $Y\mapsto \mathrm {R} \Gamma_{ \operatorname{syn} }(Y,r)$ and (geometric) syntomic cohomology by setting $\mathrm {R} \Gamma_{ \operatorname{syn} }(X_{\overline{K} ,h},r):=\mathrm {R} \Gamma(X_{\overline{K} ,h},{\mathcal{S}}(r))$. They fit into an analog of the exact sequence (\ref{first}) and, by the above, generalize log-syntomic cohomology.
To construct the syntomic period maps \begin{equation} \label{periodss} \rho_{ \operatorname{syn} }: \mathrm {R} \Gamma_{ \operatorname{syn} }(X_{\overline{K} ,h},r)\to \mathrm {R} \Gamma(X_{\overline{K} ,\operatorname{\acute{e}t} },{\mathbf Q}_p(r)),\quad \rho_{ \operatorname{syn} }: \mathrm {R} \Gamma_{ \operatorname{syn} }(X_{h},r)\to \mathrm {R} \Gamma(X_{\operatorname{\acute{e}t} },{\mathbf Q}_p(r)) \end{equation} consider the syntomic complexes ${\mathcal{S}}_n(r)$: the mod-$p^n$ version of the syntomic complexes ${\mathcal{S}}(r)$ on $X_{\overline{K} ,h}$. We have the distinguished triangle $${\mathcal{S}}_n(r)\to {\mathcal J}_{\operatorname{cr} ,n}^{[r]}\lomapr{p^r-\varphi} {\mathcal{A}}_{\operatorname{cr} ,n} $$ Recall that the filtered Poincar\'e Lemma of Beilinson and Bhatt \cite{BE2}, \cite{BH} yields a quasi-isomorphism $\rho_{\operatorname{cr} }: J_{\operatorname{cr} ,n}^{[r]}\operatorname{st} ackrel{\sim}{\to} {\mathcal J}_{\operatorname{cr} ,n}^{[r]}$, where $J^{[r]}_{\operatorname{cr} }{\mathcal{U}}bset A_{\operatorname{cr} }$ is the $r$'th filtration level of the period ring $A_{\operatorname{cr} }$. Using the fundamental sequence of $p$-adic Hodge Theory $$0\to {\mathbf Z}/p^n(r)^{^{\operatorname{pr} ime}me}\to J^{<r>}_{\operatorname{cr} ,n}\lomapr{1-\varphi_r} A_{\operatorname{cr} ,n}\to 0,$$ where ${\mathbf Z}/p^n(r)^{^{\operatorname{pr} ime}me}:=(1/(p^aa!){\mathbf Z}_p(r))\otimes {\mathbf Z}/p^n$ and $a$ denotes the largest integer $\leq r/(p-1)$, we obtain the syntomic period map $\rho_{ \operatorname{syn} }:{\mathcal{S}}_n(r)\to {\mathbf Z}/p^n(r)^{^{\operatorname{pr} ime}me}$. It is a quasi-isomorphism modulo a universal constant. It induces the geometric syntomic period map in (\ref{periodss}), and, by Galois descent, its arithmetic analog. To study the descent spectral sequences from Theorem \ref{main1}, we need to consider the other version of syntomic cohomology, i.e., the complexes \begin{equation} \label{first31} \mathrm {R} \Gamma^{^{\operatorname{pr} ime}me}_{ \operatorname{syn} }(X_{\overline{K} ,h},r):=\left[\begin{aligned}\xymatrix@C=40pt{ \mathrm {R} \Gamma_{\mathrm{HK}}(X_{\overline{K} ,h})\otimes_{K_0^{\operatorname{nr} }} B^+_{\operatorname{st} }\ar[r]^-{(1-\varphi_r,\iota_{\mathrm{dR}})}\ar[d]^N & \mathrm {R} \Gamma_{\mathrm{HK}}(X_{\overline{K} ,h})\otimes_{K_0^{\operatorname{nr} }} B^+_{\operatorname{st} }\oplus (\mathrm {R} \Gamma_{\mathrm{dR}}(X_{\overline{K} })\otimes _{\overline{K} }B^+_{\mathrm{dR}})/F^r\ar[d]^{(N,0)}\\ \mathrm {R} \Gamma_{\mathrm{HK}}(X_{\overline{K} ,h})\otimes_{K_0^{\operatorname{nr} }} B^+_{\operatorname{st} }\ar[r]^{1-\varphi_{r-1}} & \mathrm {R} \Gamma_{\mathrm{HK}}(X_{\overline{K} ,h})\otimes _{K_0^{\operatorname{nr} }}B^+_{\operatorname{st} } }\end{aligned}\right] \end{equation} where $B^+_{\operatorname{st} }$ and $B^+_{\mathrm{dR}}$ are the semistable and de Rham $p$-adic period rings, respectively. We deduce a quasi-isomorphism $\mathrm {R} \Gamma_{ \operatorname{syn} }(X_{\overline{K} ,h},r)\operatorname{st} ackrel{\sim}{\to}\mathrm {R} \Gamma^{^{\operatorname{pr} ime}me}_{ \operatorname{syn} }(X_{\overline{K} ,h},r)$. 
\begin{remark} This quasi-isomorphism yields, for a semistable scheme ${\mathcal{X}}$ over $V$, the following exact sequence $$ \to H^i_{ \operatorname{syn} }({\mathcal{X}}_{\overline{K} },r)\to (H^i_{\mathrm{HK}}({\mathcal{X}})_{{\mathbf Q}}\otimes_{K_0} B_{\operatorname{st} }^+)^{\varphi=p^r,N=0}\to (H^i_{\mathrm{dR}}({\mathcal{X}}_K)\otimes_K B_{\mathrm{dR}}^+)/F^r\to H^{i+1}_{ \operatorname{syn} }({\mathcal{X}}_{\overline{K} },r)\to $$ It is a sequence of finite dimensional Banach-Colmez Spaces \cite{CB} and as such is a key in the proof of semistable comparison theorem for formal schemes in \cite{CN}. \end{remark} We also have a syntomic period map \begin{equation} \label{first4} \rho_{ \operatorname{syn} }^{^{\operatorname{pr} ime}me}:\mathrm {R} \Gamma^{^{\operatorname{pr} ime}me}_{ \operatorname{syn} }(X_{\overline{K} ,h},r)\to \mathrm {R} \Gamma(X_{\overline{K} ,\operatorname{\acute{e}t} },{\mathbf Q}_p(r)) \end{equation} that is compatible with the map $\rho_{ \operatorname{syn} }$ via $\alpha_{ \operatorname{syn} }$. To describe how it is constructed, recall that the crystalline period map of Beilinson induces compatible Hyodo-Kato and de Rham period maps \cite{BE2} \begin{equation} \label{first5} \rho_{\mathrm{HK}}:\mathrm {R} \Gamma_{\mathrm{HK}}(X_{\overline{K} ,h})\otimes _{K_0^{\operatorname{nr} }}B^+_{\operatorname{st} }{\to} \mathrm {R} \Gamma(X_{\overline{K} ,\operatorname{\acute{e}t} },{\mathbf Q}_p)\otimes B_{\operatorname{st} }^+,\quad \rho_{\mathrm{dR}}: \mathrm {R} \Gamma_{\mathrm{dR}}(X_K)\otimes_K B_{\mathrm{dR}}^+{\to}\mathrm {R} \Gamma(X_{\overline{K} ,\operatorname{\acute{e}t} },{\mathbf Q}_p)\otimes B_{\mathrm{dR}}^+ \end{equation} Applying them to the above homotopy limit, removing all the pluses from the period rings, reduces the homotopy limit to the complex \begin{equation} \label{first32} \left[\begin{aligned}\xymatrix@C=40pt{ \mathrm {R} \Gamma(X_{\overline{K} ,\operatorname{\acute{e}t} },\mathbf{Q}_p(r))\otimes B_{\operatorname{st} }\ar[r]^-{(1-\varphi_r,\iota_{\mathrm{dR}})}\ar[d]^N & \mathrm {R} \Gamma(X_{\overline{K} ,\operatorname{\acute{e}t} },\mathbf{Q}_p(r))\otimes B_{\operatorname{st} }\oplus (\mathrm {R} \Gamma(X_{\overline{K} ,\operatorname{\acute{e}t} },\mathbf{Q}_p(r))\otimes B_{\mathrm{dR}})/F^r\ar[d]^{(N,0)}\\ \mathrm {R} \Gamma(X_{\overline{K} ,\operatorname{\acute{e}t} },\mathbf{Q}_p(r))\otimes B_{\operatorname{st} }\ar[r]^{1-\varphi_{r-1}} & \mathrm {R} \Gamma(X_{\overline{K} ,\operatorname{\acute{e}t} },\mathbf{Q}_p(r))\otimes B_{\operatorname{st} } }\end{aligned}\right] \end{equation} By the familiar fundamental exact sequence $$ 0\to {\mathbf Q}_p(r)\to B_{\operatorname{st} } \varepsilon rylomapr{(N,1-\varphi_r,\iota)} B_{\operatorname{st} }\oplus B_{\operatorname{st} }\oplus B_{\mathrm{dR}}/F^r \varepsilon ryverylomapr{(1-\varphi_{r-1})-N}B_{\operatorname{st} }\to 0 $$ the above complex is quasi-isomorphic to $\mathrm {R} \Gamma(X_{\overline{K} ,\operatorname{\acute{e}t} },\mathbf{Q}_p(r))$. This yields the syntomic period morphism from (\ref{first4}). We like to think of geometric syntomic cohomology as being represented by the complex from (\ref{first31}) and of geometric \'etale cohomology as represented by the complex (\ref{first32}). From the above constructions we derive several of the properties mentioned in Theorem \ref{main1}. 
The quasi-isomorphisms (\ref{first5}) give that $$H^i_{\mathrm{HK}}(X_{\overline{K} ,h})\simeq D_{\mathrm{pst}}(H^i(X_{\overline{K} ,\operatorname{\acute{e}t} },\mathbf{Q}_p(r))),\quad H^i_{\mathrm{HK}}(X_h)\simeq D_{\operatorname{st} }(H^i(X_{\overline{K} ,\operatorname{\acute{e}t} },\mathbf{Q}_p(r))),$$ where $D_{\mathrm{pst}}$ and $D_{\operatorname{st} }$ are the functors from \cite{FPR}. This combined with the diagram (\ref{first3}) immediately yields the spectral sequence ${}^{ \operatorname{syn} }E_t$ since the cohomology groups of the total complex of $$ \left[ \begin{aligned}\xymatrix@=40pt{ H^j_{\mathrm{HK}}(X_h)\ar[d]^{N}\ar[r]^-{(1-\varphi_r,\iota_{\mathrm{dR}})} & H^j_{\mathrm{HK}}(X_h)\oplus H^j_{\mathrm{dR}}(X_K)/F^r\ar[d]^{(N,0)}\\ H^j_{\mathrm{HK}}(X_h)\ar[r]^-{1-\varphi_{r-1}} & H^j_{\mathrm{HK}}(X_h) }\end{aligned}\right] $$ are equal to $H^*_{\operatorname{st} }(G_K,H^j(X_{\overline{K} ,\operatorname{\acute{e}t} },\mathbf{Q}_p(r)))$. Moreover, the sequence of natural maps of diagrams $(\ref{first3})\to (\ref{first31})\operatorname{st} ackrel{\rho_{ \operatorname{syn} }}{\to} (\ref{first32})$ yields a compatibility of the syntomic descent spectral sequence with the Hochschild-Serre spectral sequence in \'etale cohomology (via the period maps). We remark that, in the case of proper varieties with semistable reduction, this fact was announced in \cite{JB}. Looking again at the period map $\rho_{ \operatorname{syn} }: (\ref{first31}){\to} (\ref{first32})$ we see that truncating all the complexes at level $r$ will allow us to drop $+$ from the first diagram. Hence we have $$ \rho_{ \operatorname{syn} }:\tau_{\leq r}\mathrm {R} \Gamma_{ \operatorname{syn} }(X_{\overline{K} ,h},r)\operatorname{st} ackrel{\sim}{\to} \tau_{\leq r}\mathrm {R} \Gamma(X_{\overline{K} ,\operatorname{\acute{e}t} },{\mathbf Q}_p(r)) $$ To conclude that we have $$ \rho_{ \operatorname{syn} }:\tau_{\leq r}\mathrm {R} \Gamma_{ \operatorname{syn} }(X_h,r)\operatorname{st} ackrel{\sim}{\to} \tau_{\leq r}\mathrm {R} \Gamma(X_{\operatorname{\acute{e}t} },{\mathbf Q}_p(r)) $$ as well, we look at the map of spectral sequences ${}^{ \operatorname{syn} }E\to {}^{\operatorname{\acute{e}t} }E$ and observe that, in the stated ranges of the Hodge-Tate filtration we have $H^*_{\operatorname{st} }(G_K,\operatorname{cd} ot)=H^*(G_K,\operatorname{cd} ot)$ (a fact that follows, for example, from the work of Berger \cite{BER}). {\mathcal{U}}bsection{$p$-adic regulators} As an application of Theorem \ref{main1}, we look at the question of the image of Soul\'e's \'etale regulators $$ r^{\operatorname{\acute{e}t} }_{r,i}: K_{2r-i-1}(X)_0\to H^1(G_K,H^i(X_{\overline{K} ,\operatorname{\acute{e}t} },{\mathbf Q}_p(r))), $$ where $K_{2r-i-1}(X)_0:=\ker(c^{\operatorname{\acute{e}t} }_{r,i+1}:K_{2r-i-1}(X)\to H^{i+1}(X_{\overline{K} ,\operatorname{\acute{e}t} },{\mathbf Q}_p(r)))$, inside the Galois cohomology group. We prove that \begin{thmx} The regulators $r^{\operatorname{\acute{e}t} }_{r,i}$ factor through the group $H^1_{\operatorname{st} }(G_K,H^i(X_{\overline{K} ,\operatorname{\acute{e}t} },{\mathbf Q}_p(r)))$. \end{thmx} As we explain in the article, this fact is known to follow from the work of Scholl \cite{Sc} on "geometric" extensions associated to $K$-theory classes. In our approach, this is a simple consequence of good properties of syntomic cohomology and the existence of the syntomic descent spectral sequence. 
Namely, as can be easily derived from the presentation (\ref{first3}), syntomic cohomology satisfies the projective space theorem and the homotopy property\footnote{As explained in Appendix~B, it follows that it is a Bloch-Ogus cohomology theory.} and hence admits Chern classes from higher $K$-theory. It can be easily shown that they are compatible with the \'etale Chern classes via the syntomic period maps. The factorization we want in the above theorem then follows from the compatibility of the two descent spectral sequences. \subsection{Notation and Conventions} Let $V$ be a complete discrete valuation ring with fraction field $K$ of characteristic 0, with perfect residue field $k$ of characteristic $p$, and with maximal ideal ${\mathfrak m}_K$. Let $v$ be the valuation on $K$ normalized so that $v(p)=1$. Let $\overline{K} $ be an algebraic closure of $K$ and let $\overline{V}$ denote the integral closure of $V$ in $\overline{K} $. Let $W(k)$ be the ring of Witt vectors of $k$ with fraction field $K_0$ and denote by $K_0^{\operatorname{nr} }$ the maximal unramified extension of $K_0$. Denote by $e_K$ the absolute ramification index of $K$, i.e., the degree of $K$ over $K_0$. Set $G_K=\operatorname{Gal} (\overline {K}/K)$ and let $I_K$ denote its inertia subgroup. Let $\varphi$ be the absolute Frobenius on $W(\overline {k})$. We will denote by $V$, $V^{\times}$, and $V^0$ the scheme $\operatorname{Spec} (V)$ with the trivial, canonical (i.e., associated to the closed point), and $({\mathbf N}\to V, 1\mapsto 0)$ log-structure, respectively. For a log-scheme $X$ over ${\mathcal O}_K$, $X_n$ will denote its reduction mod $p^n$, and $X_0$ will denote its special fiber. Unless otherwise stated, we work in the category of integral quasi-coherent log-schemes. In general, we will not distinguish between simplicial abelian groups and complexes of abelian groups. Let $A$ be an abelian category with enough projective objects. In this paper $A$ will be the category of abelian groups or ${\mathbf Z}_p$-, ${\mathbf Z}/p^n$-, or ${\mathbf Q}_p$-modules. Unless otherwise stated, we work in the (stable) $\infty$-category ${\mathcal{D}}(A)$, i.e., the stable $\infty$-category whose objects are (left-bounded) chain complexes of projective objects of $A$. For a readable introduction to such categories the reader may consult \cite{MG}, \cite[1]{Lu2}. The $\infty$-derived category is essential to us for two reasons: first, it allows us to work simply with the Beilinson-Hyodo-Kato complexes; second, it supplies functorial homotopy limits. Many of our constructions will involve sheaves of objects from ${\mathcal{D}}(A)$. The reader may consult the notes of Illusie \cite{IL} and Zheng \cite{Zhe} for a brief introduction to the subject and \cite{Lu1}, \cite{Lu2} for a thorough treatment. We will use a shorthand for certain homotopy limits. Namely, if $f:C\to C'$ is a map in the dg derived category of abelian groups, we set $$[\xymatrix{C\ar[r]^f&C'}]:=\operatorname{holim} (C\to C'\leftarrow 0).$$ We also set $$ \left[\begin{aligned} \xymatrix{C_1\ar[d]\ar[r]^f & C_2\ar[d]\\ C_3\ar[r]^g & C_4 }\end{aligned}\right] :=[[C_1\stackrel{f}{\to} C_2]\to [C_3\stackrel{g}{\to} C_4]], $$ where the diagram in the brackets is a commutative diagram in the dg derived category. \begin{acknowledgments} Parts of this article were written during our visits to the Fields Institute in Toronto in spring 2012.
The second author worked on this article also at BICMR, Beijing, and at the University of Padova. We would like to thank these institutions for their support and hospitality. This article was inspired by the work of Beilinson on $p$-adic comparison theorems. We would like to thank him for discussions related to his work. Luc Illusie and Weizhe Zheng helped us understand the $\infty$-category theory involved in Beilinson's work and made their notes \cite{IL}, \cite{Zhe} available to us -- we would like to thank them for that. We have also profited from conversations with Laurent Berger, Amnon Besser, Bharghav Bhatt, Bruno Chiarellotto, Pierre Colmez, Fr\'ed\'eric D\'eglise, Luc Illusie, Tony Scholl, and Weizhe Zheng -- we are grateful for these exchanges. Moreover, we would like to thank Pierre Colmez for reading and correcting parts of this article. Special thanks go to Laurent Berger and Fr\'ed\'eric D\'eglise for writing the Appendices. \end{acknowledgments} \section{Preliminaries} In this section we will do some preparation. In the first part, we will collect some relevant facts from the literature concerning period rings, the derived log de Rham complex, and the $h$-topology. In the second part, we will prove vanishing results in Galois cohomology and establish a criterion for comparing two spectral sequences, which we will need to compare the syntomic descent spectral sequence with the \'etale Hochschild-Serre spectral sequence. \subsection{The rings of periods} Let us recall briefly the definitions of the rings of periods $B_{\operatorname{cr} }$, $B_{\mathrm{dR}}$, $B_{\operatorname{st} }$ of Fontaine \cite{F1}. Let $A_{\operatorname{cr} }$ denote Fontaine's ring of crystalline periods \cite[2.2,2.3]{F1}. This is a $p$-adically complete ring such that $A_{\operatorname{cr} ,n}:=A_{\operatorname{cr} }/p^n$ is a universal PD-thickening of $\overline{V}_n$ over $W_n(k)$. Let $J_{\operatorname{cr} ,n}$ denote its PD-ideal, so that $A_{\operatorname{cr} ,n}/J_{\operatorname{cr} ,n}=\overline{V}_n$. We have $$ A_{\operatorname{cr} ,n}=H^0_{\operatorname{cr} }(\operatorname{Spec} (\overline{V}_n)/W_n(k)), \quad B^+_{\operatorname{cr} }:=A_{\operatorname{cr} }[1/p],\quad B_{\operatorname{cr} }:=B^+_{\operatorname{cr} }[t^{-1}], $$ where $t$ is a certain element of $B^+_{\operatorname{cr} }$ (see \cite{F1} for a precise definition of $t$). The ring $B^+_{\operatorname{cr} }$ is a topological $K_0$-module equipped with a Frobenius $\varphi$ coming from the crystalline cohomology and a natural $G_K$-action. We have that $\varphi (t)=pt$ and that $G_K$ acts on $t$ via the cyclotomic character. Let $$ B^+_{\mathrm{dR}}:= \varprojlim_r({\bold Q}\otimes \varprojlim_n A_{\operatorname{cr} ,n}/ J_{\operatorname{cr} ,n}^{[r]}),\quad B_{\mathrm{dR}}:=B^+_{\mathrm{dR}}[t^{-1}]. $$ The ring $B^+_{\mathrm{dR}}$ has a discrete valuation given by the powers of $t$. Its quotient field is $B_{\mathrm{dR}}$. We set $F^nB_{\mathrm{dR}}=t^nB^+_{\mathrm{dR}}$. This defines a descending filtration on $B_{\mathrm{dR}}$. The period ring $B_{\operatorname{st} }$ lies between $B_{\operatorname{cr} }$ and $B_{\mathrm{dR}}$ \cite[3.1]{F1}. To define it, choose a sequence of elements $s=(s_n)_{n\geq 0}$ of $\overline{V}$ such that $s_0=p$ and $s_{n+1}^p=s_n$. Fontaine associates to it an element $u_{s}$ of $B^+_{\mathrm{dR}}$ that is transcendental over $B^+_{\operatorname{cr} }$.
Let $B^+_{\operatorname{st} }$ denote the subring of $B_{\mathrm{dR}}$ generated by $B^+_{\operatorname{cr} }$ and $u_{s}$. It is a polynomial algebra in one variable over $B^+_{\operatorname{cr} }$. The ring $B^+_{\operatorname{st} }$ does not depend on the choice of $s$ (because for another sequence $s^{^{\operatorname{pr} ime}me}=(s^{^{\operatorname{pr} ime}me}_n)_{n\geq 0}$ we have $u_s-u_{s^^{\operatorname{pr} ime}me}\in {\mathbf Z}_pt{\mathcal{U}}bset B_{\operatorname{cr} }^+$). The action of $G_K$ on $B^+_{\mathrm{dR}}$ restricts well to $B^+_{\operatorname{st} }$. The Frobenius $\varphi $ extends to $B^+_{\operatorname{st} }$ by $\varphi (u_{s})=pu_{s}$ and one defines the monodromy operator $N:B^+_{\operatorname{st} }\to B^+_{\operatorname{st} } $ as the unique $B^+_{\operatorname{cr} }$-derivation such that $Nu_{s}=-1$. We have $N\varphi=p\varphi N$ and the short exact sequence \begin{equation} \label{kwak11} 0\to B^+_{\operatorname{cr} }\to B^+_{\operatorname{st} }\operatorname{st} ackrel{N}{\to}B^+_{\operatorname{st} }\to 0 \end{equation} Let $B_{\operatorname{st} }=B_{\operatorname{cr} }[u_{s }]$. We denote by $\iota$ the injection $\iota:B^+_{\operatorname{st} }\hookrightarrow B^+_{\mathrm{dR}}$. The topology on $B_{\operatorname{st} }$ is the one induced by $B_{\operatorname{cr} }$ and the inductive topology; the map $\iota$ is continuous (though the topology on $B_{\operatorname{st} }$ is not the one induced from $B_{\mathrm{dR}}$). {\mathcal{U}}bsection{Derived log de Rham complex} In this subsection we collect a few facts about the relationship between crystalline cohomology and de Rham cohomology. Let $S$ be a log-PD-scheme on which $p$ is nilpotent. For a log-scheme $Z$ over $S$, let $\mathrm {L} \Omega^{\scriptscriptstyle\bullet}_{Z/S}$ denote the derived log de Rham complex (see \cite[3.1]{BE1} for a review). This is a commutative dg ${\mathcal O}_S$-algebra on $Z_{\operatorname{\acute{e}t} }$ equipped with a Hodge filtration $F^m$. There is a natural morphism of filtered commutative dg ${\mathcal O}_S$-algebras \cite[1.9.1]{BE2} \begin{equation} \label{kappamap} \kappa:\quad \mathrm {L} \Omega^{\scriptscriptstyle\bullet}_{Z/S}\to \mathrm {R} u_{Z/S*}({\mathcal O}_{Z/S}), \end{equation} where $u_{Z/S}: Z_{\operatorname{cr} }\to Z_{\operatorname{\acute{e}t} }$ is the projection from the log-crystalline to the \'etale topos. The following theorem was proved by Beilinson \cite[1.9.2]{BE2} by direct computations of both sides. \begin{theorem} \label{beilinson} Suppose that $Z,S$ are fine and $f:Z\to S$ is an integral, locally complete intersection morphism. Then (\ref{kappamap}) yields quasi-isomorphisms $$\kappa_m:\quad \mathrm {L} \Omega^{\scriptscriptstyle\bullet}_{Z/S}/F^m\operatorname{st} ackrel{\sim}{\to} \mathrm {R} u_{Z/S*}({\mathcal O}_{Z/S}/{\mathcal J}^{[m]}_{Z/S}). $$ \end{theorem} Recall \cite[Def. 7.20]{BH} that a log-scheme is called G-log-syntomic if it is log-syntomic and the local log-smooth models can be chosen to be of Cartier type. The next theorem, finer than Theorem \ref{beilinson}, was proved by Bhatt \cite[Theorem 7.22]{BH} by looking at the conjugate filtration of the l.h.s. \begin{theorem} \label{bhatt} Suppose that $f:Z\to S$ is G-log-syntomic. Then we have a quasi-isomorphism $$\kappa:\quad \mathrm {L} \Omega^{\scriptscriptstyle\bullet}_{Z/S}\operatorname{st} ackrel{\sim}{\to} \mathrm {R} u_{Z/S*}({\mathcal O}_{Z/S}). 
$$ \end{theorem} Combining the two theorems above, we get a filtered version: \begin{corollary} Suppose that $f:Z\to S$ is G-log-syntomic. Then we have a quasi-isomorphism $$F^m\mathrm {L} \Omega^{\scriptscriptstyle\bullet}_{Z/S}\stackrel{\sim}{\to} \mathrm {R} u_{Z/S*}({\mathcal J}^{[m]}_{Z/S}). $$ \end{corollary} \begin{proof} Consider the following commutative diagram with exact rows $$ \begin{CD} F^m\mathrm {L} \Omega^{\scriptscriptstyle\bullet}_{Z/S} @>>> \mathrm {L} \Omega^{\scriptscriptstyle\bullet}_{Z/S} @>>>\mathrm {L} \Omega^{\scriptscriptstyle\bullet}_{Z/S}/F^m \\ @VVV @VV\wr V @VV\wr V\\ \mathrm {R} u_{Z/S*}({\mathcal J}^{[m]}_{Z/S})@>>> \mathrm {R} u_{Z/S*}({\mathcal O}_{Z/S}) @>>> \mathrm {R} u_{Z/S*}({\mathcal O}_{Z/S}/{\mathcal J}^{[m]}_{Z/S}). \end{CD} $$ and use the above theorems of Bhatt and Beilinson. \end{proof} Let $X$ be a fine, proper, log-smooth scheme over $V^{\times}$. Set \begin{align*} \mathrm {R} \Gamma(X_{\operatorname{\acute{e}t} },\mathrm {L} \Omega^{\scriptscriptstyle\bullet,\wedge}_{X/W(k)})\widehat{\otimes}{\mathbf Q}_p :=(\operatorname{holim} _n\mathrm {R} \Gamma(X_{\operatorname{\acute{e}t} },\mathrm {L} \Omega^{\scriptscriptstyle\bullet,\wedge}_{X_n/W_n(k)}))\otimes {\mathbf Q} \end{align*} and similarly for complexes over $V^{\times}$. Here the hat over the derived log de Rham complex refers to the completion with respect to the Hodge filtration (in the sense of prosystems). For $r\geq 0$, consider the following sequence of maps \begin{equation} \label{composition1} \begin{aligned} \mathrm {R} \Gamma_{\mathrm{dR}}(X_K)/F^r & \stackrel{\sim}{\leftarrow}\mathrm {R} \Gamma(X,\mathrm {L} \Omega^{\scriptscriptstyle\bullet}_{X/V^{\times}}/F^r)_{\mathbf Q} \stackrel{\sim}{\to} \mathrm {R} \Gamma(X_{\operatorname{\acute{e}t} },\mathrm {L} \Omega^{\scriptscriptstyle\bullet}_{X/V^{\times}}/F^r)\widehat{\otimes}\mathbf{Q}_p\\ & \stackrel{\sim}{\to}\mathrm {R} \Gamma_{\operatorname{cr} }(X,{\mathcal O}_{X/V^{\times}}/{\mathcal J}^{[r]}_{X/V^{\times}})_{\mathbf Q} \leftarrow \mathrm {R} \Gamma_{\operatorname{cr} }(X,{\mathcal O}_{X/W(k)}/{\mathcal J}^{[r]}_{X/W(k)})_{\mathbf Q} \end{aligned} \end{equation} The first quasi-isomorphism follows from the fact that the natural map $\mathrm {L} \Omega_{X_K/K_0}^{\scriptscriptstyle\bullet}/F^r\stackrel{\sim}{\to} \Omega_{X_K/K_0}^{\scriptscriptstyle\bullet}/F^r$ is a quasi-isomorphism since $X_K$ is log-smooth over $K_0$. The second quasi-isomorphism follows from $X$ being proper and log-smooth over $V^{\times}$, the third one from Theorem \ref{beilinson}. Define the map $$ \gamma_r^{-1}:\quad \mathrm {R} \Gamma_{\operatorname{cr} }(X,{\mathcal O}_{X/W(k)}/{\mathcal J}^{[r]}_{X/W(k)})_{\mathbf Q}\to \mathrm {R} \Gamma_{\mathrm{dR}}(X_K)/F^r $$ as the composition (\ref{composition1}). \begin{corollary} \label{Langer} Let $X$ be a fine, proper, log-smooth scheme over $V^{\times}$. Let $r\geq 0$. There exists a canonical quasi-isomorphism $$\gamma_r:\quad \mathrm {R} \Gamma_{\mathrm{dR}}(X_K)/F^r \stackrel{\sim}{\to} \mathrm {R} \Gamma_{\operatorname{cr} }(X,{\mathcal O}_{X/W(k)}/{\mathcal J}^{[r]}_{X/W(k)})_{\mathbf Q} $$ \end{corollary} \begin{proof} It suffices to show that the last map in the composition (\ref{composition1}) is also a quasi-isomorphism.
By Theorem \ref{beilinson}, this map is quasi-isomorphic to the map $$ (\mathrm {R} \Gamma(X_{\operatorname{\acute{e}t} },\mathrm {L} \Omega^{\scriptscriptstyle\bullet,\wedge}_{X/W(k)})\widehat{\otimes}\mathbf{Q}_p)/F^r \to (\mathrm {R} \Gamma (X_{\operatorname{\acute{e}t} },\mathrm {L} \Omega^{\scriptscriptstyle\bullet,\wedge}_{X/V^{\times}})\widehat{\otimes}\mathbf{Q}_p)/F^r $$ Hence it suffices to show that the natural map $$ \operatorname{gr} ^i_{F}\mathrm {R} \Gamma(X_{\operatorname{\acute{e}t} },\mathrm {L} \Omega^{\scriptscriptstyle\bullet,\wedge}_{X/W(k)})\widehat{\otimes}\mathbf{Q}_p \to \operatorname{gr} ^i_F\mathrm {R} \Gamma (X_{\operatorname{\acute{e}t} },\mathrm {L} \Omega^{\scriptscriptstyle\bullet,\wedge}_{X/V^{\times}})\widehat{\otimes}\mathbf{Q}_p $$ is a quasi-isomorphism for all $i\geq 0$. Fix $n\geq 1$ and $i\geq 0$ and recall \cite[1.2]{BE1} that we have a natural identification $$\operatorname{gr} ^i_F\mathrm {L} \Omega^{\scriptscriptstyle\bullet}_{X_n/W_n(k)}\stackrel{\sim}{\to}L\Lambda^i_X(L_{X_n/W_n(k)})[-i],\quad \operatorname{gr} ^i_F\mathrm {L} \Omega^{\scriptscriptstyle\bullet}_{X_n/V^{\times}_n}\stackrel{\sim}{\to}L\Lambda^i_X(L_{X_n/V^{\times}_n})[-i], $$ where $L_{Y/S}$ denotes the relative log cotangent complex \cite[3.1]{BE1} and $L\Lambda_X({\scriptscriptstyle\bullet })$ is the nonabelian left derived functor of the exterior power functor. The distinguished triangle $$ {\mathcal O}_X\otimes_VL_{V^{\times}_n/W_n(k)}\to L_{X_n/W_n(k)}\to L_{X_n/V^{\times}_n} $$ yields a distinguished triangle $$ L\Lambda_X^i({\mathcal O}_X\otimes_VL_{V^{\times}_n/W_n(k)})[-i]\to \operatorname{gr} ^i_F\mathrm {L} \Omega^{\scriptscriptstyle\bullet}_{X_n/W_n(k)}\to \operatorname{gr} ^i_F\mathrm {L} \Omega^{\scriptscriptstyle\bullet}_{X_n/V^{\times}_n} $$ Hence we have a distinguished triangle $$ \operatorname{holim} _n \mathrm {R} \Gamma(X_{\operatorname{\acute{e}t} },L\Lambda_X^i({\mathcal O}_X\otimes_VL_{V^{\times}_n/W_n(k)}))\otimes{\mathbf Q}[-i] \to \operatorname{gr} ^i_{F}\mathrm {R} \Gamma(X_{\operatorname{\acute{e}t} },\mathrm {L} \Omega^{\scriptscriptstyle\bullet,\wedge}_{X/W(k)})\widehat{\otimes}\mathbf{Q}_p \to \operatorname{gr} ^i_F\mathrm {R} \Gamma (X_{\operatorname{\acute{e}t} },\mathrm {L} \Omega^{\scriptscriptstyle\bullet,\wedge}_{X/V^{\times}})\widehat{\otimes}\mathbf{Q}_p $$ It suffices to show that the term on the left is zero. But this will follow as soon as we show that the cohomology groups of $L_{V^{\times}_n/W_n(k)}$ are annihilated by $p^c$, where $c$ is a constant independent of $n$. To show this, recall that $V$ is a log complete intersection over $W(k)$. If $\pi$ is a generator of $V$ over $W(k)$ and $f(t)$ is its minimal polynomial, then (cf. \cite[6.9]{Ol}) $L_{V^{\times}/W(k)}$ is quasi-isomorphic to the cone of the multiplication by $f'(\pi)$ map on $V$. Hence $L_{V^{\times}/W(k)}$ is acyclic in non-zero degrees, $H^0L_{V^{\times}/W(k)}=\Omega_{V^{\times}/W(k)}$ is a cyclic $V$-module and we have a short exact sequence $$ 0\to \Omega_{V/W(k)}\to \Omega_{V^{\times}/W(k)}\to V/{\mathfrak m}_K\to 0 $$ Since $\Omega_{V/W(k)}\simeq V/{\mathcal{D}}_{K/K_0}$, where ${\mathcal{D}}_{K/K_0}$ is the different, we get that $p^cH^0L_{V^{\times}/W(k)}=0$ for a constant $c$ independent of $n$. Since $L_{V^{\times}_n/W_n(k)}\simeq L_{V^{\times}/W(k)}\otimes^{L}_VV_n$, we are done.
\end{proof} \begin{remark} Versions of the above corollary appear in various degrees of generality in the proofs of the $p$-adic comparison theorems (cf. \cite[Lemma 4.5]{KM}, \cite[Lemma 2.7]{Ln}). They are proved using computations in crystalline cohomology. We find the above argument based on Beilinson's comparison Theorem \ref{beilinson} particularly conceptual and pleasing. \end{remark} \subsection{$h$-topology} In this subsection we review terminology connected with the $h$-topology from the papers of Beilinson \cite{BE1}, \cite{BE2} and Bhatt \cite{BH}; we will use it freely. Let $\mathcal{V}ar_K$ be the category of varieties (i.e., reduced and separated schemes of finite type) over a field $K$. An {\em arithmetic pair} over $K$ is an open embedding $j:U\hookrightarrow \overline{U}$ with dense image of a $K$-variety $U$ into a reduced proper flat $V$-scheme $\overline{U}$. A morphism $(U,\overline{U})\to (T,\overline{T})$ of pairs is a map $\overline{U}\to \overline{T}$ which sends $U$ to $T$. In the case that the pairs represent log-regular schemes this is the same as a map of log-schemes. For a pair $(U,\overline{U})$, we set $V_U:=\Gamma(\overline{U},{\mathcal O}_{\overline{U}})$ and $K_U:=\Gamma(\overline{U}_K,{\mathcal O}_{\overline{U}})$. $K_U$ is a product of several finite extensions of $K$ (labeled by the connected components of $\overline{U}$) and, if $\overline{U}$ is normal, $V_U$ is the product of the corresponding rings of integers. We will denote by ${\mathcal{P}}_K^{ar}$ the category of arithmetic pairs over $K$. A {\em semistable pair} ({\em ss-pair}) over $K$ \cite[2.2]{BE1} is a pair of schemes $(U,\overline{U})$ over $(K,V)$ such that (i) $\overline{U}$ is regular and proper over $V$, (ii) $\overline{U}\setminus U$ is a divisor with normal crossings on $\overline{U}$, and (iii) the closed fiber $\overline{U}_0$ of $\overline{U}$ is reduced. The closed fiber is taken over the closed points of $V_U$. We will think of ss-pairs as log-schemes equipped with the log-structure given by the divisor $\overline{U}\setminus U$. The closed fiber $\overline{U}_0$ has the induced log-structure. We will say that the log-scheme $(U,\overline{U})$ is {\em split} over $V_U$. We will denote by ${\mathcal{P}}_K^{ss}$ the category of ss-pairs over $K$. A semistable pair is called {\em strict} if the irreducible components of the closed fiber are regular. We will often work with the larger category ${\mathcal{P}}_K^{\log}$ of log-schemes $(U,\overline{U})\in {\mathcal{P}}^{ar}_K$ log-smooth over $V_U^{\times}$. A {\em semistable pair} ({\em ss-pair}) over $\overline{K} $ \cite[2.2]{BE1} is a pair of connected schemes $(T,\overline{T})$ over $(\overline{K} ,\overline{V})$ such that there exists an ss-pair $(U,\overline{U})$ over $K$ and a $\overline{K} $-point $\alpha: K_U\to \overline{K} $ such that $(T,\overline{T})$ is isomorphic to the base change $(U_{\overline{K} },\overline{U}_{\overline{V}})$. We will denote by ${\mathcal{P}}_{\overline{K} }^{ss}$ the category of ss-pairs over $\overline{K} $. A {\em geometric pair} over $K$ is a pair $(U,\overline{U})$ of varieties over $K$ such that $\overline{U}$ is proper and $U\subset \overline{U}$ is open and dense.
We say that the pair $(U,\overline{U})$ is a {\em nc-pair} if $\overline{U}$ is regular and $\overline{U}\setminus U$ is a divisor with normal crossings in $\overline{U}$; it is a {\em strict nc-pair} if the irreducible components of $\overline{U}\setminus U$ are regular. A morphism of pairs $f:(U_1,\overline{U}_1)\to (U,\overline{U})$ is a map $\overline{U}_1\to \overline{U}$ that sends $U_1$ to $U$. We denote the category of nc-pairs over $K$ by ${\mathcal{P}}_K^{nc}$. For a field $K$, the $h$-topology (cf. \cite{SV},\cite[2.3]{BE1}) on $\mathcal{V}ar_K$ is the coarsest topology finer than the Zariski and proper topologies.\footnote{The latter is generated by a pretopology whose coverings are proper surjective maps.} It is stronger than the \'etale and proper topologies. It is generated by the pretopology whose coverings are finite families of maps $\{Y_i\to X\}$ such that $Y:=\coprod Y_i\to X$ is a universal topological epimorphism (i.e., a subset of $X$ is Zariski open if and only if its preimage in $Y$ is open). We denote by $\mathcal{V}ar_{K,h}$, $X_h$ the corresponding $h$-sites. For any of the categories ${\mathcal{P}}$ mentioned above, let $\gamma: (U,\overline{U})\to U$ denote the forgetful functor. Beilinson proved \cite[2.5]{BE1} that the categories $({\mathcal{P}}_K^{nc},\gamma)$, $({\mathcal{P}}_K^{ar},\gamma)$ and $({\mathcal{P}}_K^{ss},\gamma)$ form a base for $\mathcal{V}ar_{K,h}$. One can easily modify his argument to conclude the same about the category $({\mathcal{P}}_K^{\log},\gamma)$. \subsection{Galois cohomology} In this subsection we review the definition of (higher) semistable Selmer groups and prove that in stable ranges they are the same as Galois cohomology groups. Our main references are \cite{F2}, \cite{F3}, \cite{CF}, \cite{BK}, \cite{FPR}, \cite{JH}. Recall \cite{F2}, \cite{F3} that a $p$-adic representation $V$ of $G_K$ (i.e., a finite dimensional continuous ${{\mathbf Q}}_p$-vector space representation) is called {\em semistable} (over $K$) if $\dim_{K_0} (B_{\operatorname{st} }\otimes_{{\mathbf Q}_p} V )^{G_K} = \dim_{{\mathbf Q}_p} (V )$. It is called {\em potentially semistable} if there exists a finite extension $K'$ of $K$ such that $V|G_{K'}$ is semistable over $K'$. We denote by $\mathrm {R} ep_{\operatorname{st} }(G_K)$ and $\mathrm {R} ep_{\mathrm{pst}}(G_K)$ the categories of semistable and potentially semistable representations of $G_K$, respectively. As in \cite[4.2]{F2}, a $\varphi$-module over $K_0$ is a pair $(D,\varphi)$, where $D$ is a finite dimensional $K_0$-vector space and $\varphi=\varphi_D$ is a $\varphi$-semilinear automorphism of $D$; a $(\varphi,N)$-module is a triple $(D,\varphi,N)$, where $(D,\varphi)$ is a $\varphi$-module and $N=N_D$ is a $K_0$-linear endomorphism of $D$ such that $N\varphi=p\varphi N$ (hence $N$ is nilpotent). A filtered $(\varphi,N)$-module is a tuple $(D,\varphi,N,F^{\scriptscriptstyle\bullet})$, where $(D,\varphi,N)$ is a $(\varphi, N)$-module and $F^{\scriptscriptstyle\bullet} $ is a decreasing finite filtration of $D_K$ by $K$-vector spaces. There is a notion of a {\em (weakly) admissible} filtered $(\varphi, N)$-module \cite{CF}. Denote by $MF^{\operatorname{ad} }_K(\varphi,N)\subset MF_K(\varphi,N)$ the categories of admissible filtered $(\varphi, N)$-modules and filtered $(\varphi, N)$-modules, respectively.
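To fix ideas, we recall the basic example: for $n\in{\mathbf Z}$, let $D=K_0e$ be the filtered $(\varphi,N)$-module with
$$
\varphi(e)=p^{-n}e,\quad N=0,\quad F^{-n}D_K=D_K,\quad F^{-n+1}D_K=0
$$
(and $\varphi$ semilinear with respect to the Frobenius of $K_0$). This module is admissible and, under the equivalence of categories recalled below, it corresponds to the Tate twist ${\mathbf Q}_p(n)$.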
We know \cite{CF} that the pair of functors $D_{\operatorname{st} }(V)=(B_{\operatorname{st} }\otimes_{{\mathbf Q}_p}V)^{G_K}$, $V_{\operatorname{st} }(D)=(B_{\operatorname{st} }\otimes_{K_0}D)^{\varphi= \operatorname{Id} ,N=0}\cap F^0(B_{\mathrm{dR}}\otimes_{K}D_K)$ defines an equivalence of categories $MF_K^{\operatorname{ad} }(\varphi,N)\simeq \mathrm {R} ep_{\operatorname{st} }(G_K)$. For $D\in MF_K(\varphi,N)$, set $$ C_{\operatorname{st} }(D):= \left[\begin{aligned}\xymatrix@C=36pt{D\ar[d]^N \ar[r]^-{(1-\varphi, \operatorname{can} )} & D\oplus D_K/F^0\ar[d]^{(N,0)}\\ D\ar[r]^{1-p\varphi} & D }\end{aligned}\right] $$ Here the brackets denote the total complex of the double complex inside the brackets. Consider also the following complex $$ C^+(D):= \left[\begin{aligned}\xymatrix@C=40pt{D\otimes_{K_0}B^+_{\operatorname{st} }\ar[d]^N \ar[r]^-{(1-\varphi, \operatorname{can} \otimes \iota)} & D\otimes_{K_0}B^+_{\operatorname{st} }\oplus (D_K\otimes_{K}B^+_{\mathrm{dR}})/F^0\ar[d]^{(N,0)}\\ D\otimes_{K_0}B^+_{\operatorname{st} }\ar[r]^{1-p\varphi} & D\otimes_{K_0}B^+_{\operatorname{st} }}\end{aligned}\right] $$ Define $C(D)$ by omitting the superscript $+$ in the above diagram. We have $C_{\operatorname{st} }(D)=C(D)^{G_K}$. \begin{remark} Recall \cite[1.19]{JH}, \cite[3.3]{FPR} that to every $p$-adic representation $V$ of $G_K$ we can associate a complex $$C_{\operatorname{st} }(V):\quad D_{\operatorname{st} }(V) \xrightarrow{(N,1-\varphi,\iota)}D_{\operatorname{st} }(V)\oplus D_{\operatorname{st} }(V)\oplus t_V \xrightarrow{(1-p\varphi)-N} D_{\operatorname{st} }(V)\to 0\to\cdots $$ where $t_V:=(V\otimes_{{\mathbf Q}_p} (B_{\mathrm{dR}}/B^+_{\mathrm{dR}}))^{G_K}$ \cite[I.2.2.1]{FPR}. The cohomology of this complex is called $H^*_{\operatorname{st} }(G_K,V)$. If $V$ is semistable, then $C_{\operatorname{st} }(V)=C_{\operatorname{st} }(D_{\operatorname{st} }(V))$, hence $H^*(C_{\operatorname{st} }(D_{\operatorname{st} }(V)))=H^*_{\operatorname{st} }(G_K,V)$. If $V$ is potentially semistable the groups $H^*_{\operatorname{st} }(G_K,V)$ compute Yoneda extensions of ${\mathbf Q}_p$ by $V$ in the category of potentially semistable representations \cite[I.3.3.8]{FPR}. In general \cite[I.3.3.7]{FPR}, $H^0_{\operatorname{st} }(G_K,V)\stackrel{\sim}{\to} H^0(G_K,V)$ and $H^1_{\operatorname{st} }(G_K,V)\hookrightarrow H^1(G_K,V)$ computes $\operatorname{st} $-extensions\footnote{An extension $0\to V_1\to V_2\to V_3\to 0$ is called $\operatorname{st} $ if the sequence $0\to D_{\operatorname{st} }(V_1)\to D_{\operatorname{st} }(V_2)\to D_{\operatorname{st} }(V_3)\to 0$ is exact.} of ${\mathbf Q}_p$ by $V$. \end{remark} \begin{remark} \label{basics} Let $D\in MF_K(\varphi, N)$.
Note that \begin{enumerate} \item $H^0(C(D))=V_{\operatorname{st} }(D)$; \item for $i\geq 2$, $H^i(C^+(D))=H^i(C(D))=0$ (because $N$ is surjective on $B^+_{\operatorname{st} }$ and $B_{\operatorname{st} }$); \item if $F^1D_K=0$ then $F^0(D_K\otimes _KB^+_{\mathrm{dR}})=F^0(D_K\otimes _KB_{\mathrm{dR}})$ (in particular, the map of complexes $C^+(D)\to C(D)$ is an injection); \item if $D=D_{\operatorname{st} }(V)$ is admissible then we have quasi-isomorphisms $$ C(D)\stackrel{\sim}{\leftarrow}V\otimes_{{\mathbf Q}_p}[B_{\operatorname{cr} } \xrightarrow{(1-\varphi, \operatorname{can} )}B_{\operatorname{cr} }\oplus B_{\mathrm{dR}}/F^0]\stackrel{\sim}{\leftarrow}V\otimes_{{\mathbf Q}_p}(B_{\operatorname{cr} }^{\varphi=1}\cap F^0)=V $$ and the map of complexes $C_{\operatorname{st} }(D)\to C(D)$ represents the canonical map $H^i_{\operatorname{st} }(G_K,V)\to H^i(G_K,V)$. \end{enumerate} \end{remark} \begin{lemma}(\cite[Theorem II.5.3]{F1}) If $X\subset B_{\operatorname{cr} }\cap B_{\mathrm{dR}}^+$ and $\varphi(X)\subset X$ then $\varphi^2(X)\subset B_{\operatorname{cr} }^+$. \end{lemma} \begin{proposition}If $D\in MF_K(\varphi, N)$ and $F^1D_K=0$ then $H^0(C(D)/C^+(D))=0$. \end{proposition} \begin{proof} We will argue by induction on $m$ such that $N^m=0$. Assume first that $m=1$ (hence $N=0$). We have \begin{align*} C(D)/C^+(D) & =\left[\begin{aligned}\xymatrix@C=40pt{D\otimes_{K_0}(B_{\operatorname{st} }/B^+_{\operatorname{st} })\ar[r]^-{(1-\varphi, \operatorname{can} \otimes\iota)}\ar[d]^{1\otimes N} & D\otimes_{K_0}(B_{\operatorname{st} }/B^+_{\operatorname{st} })\oplus D_K\otimes_{K}(B_{\mathrm{dR}}/B^+_{\mathrm{dR}})\ar[d]^{(1\otimes N,0)}\\ D\otimes_{K_0}(B_{\operatorname{st} }/B^+_{\operatorname{st} })\ar[r]^{1-p\varphi} & D\otimes_{K_0}(B_{\operatorname{st} }/B^+_{\operatorname{st} }) }\end{aligned}\right]\\ & \stackrel{\sim}{\leftarrow} [D\otimes_{K_0}(B_{\operatorname{cr} }/B^+_{\operatorname{cr} }) \xrightarrow{(1-\varphi, \operatorname{can} )} D\otimes_{K_0}(B_{\operatorname{cr} }/B^+_{\operatorname{cr} })\oplus D_K\otimes_{K}(B_{\mathrm{dR}}/B^+_{\mathrm{dR}})] \end{align*} Write $D=\oplus_{i=1}^{r}K_0d_i$ and, for $1\leq i\leq r$, consider the following maps $$ p_i:H^0(C(D)/C^+(D))=(D\otimes_{K_0}((B_{\operatorname{cr} }\cap B_{\mathrm{dR}}^+)/B^+_{\operatorname{cr} }))^{\varphi=1}\subset \oplus_{i=1}^rd_i\otimes ((B_{\operatorname{cr} }\cap B_{\mathrm{dR}}^+)/B^+_{\operatorname{cr} })\stackrel{\operatorname{pr} _i}{\to}(B_{\operatorname{cr} }\cap B_{\mathrm{dR}}^+)/B^+_{\operatorname{cr} } $$ Let $Y_a$, $a\in H^0(C(D)/C^+(D))$, denote the $K_0$-subspace of $(B_{\operatorname{cr} }\cap B_{\mathrm{dR}}^+)/B^+_{\operatorname{cr} }$ spanned by $p_1(a),\ldots,p_r(a)$. We have $(p_1(a),\ldots ,p_r(a))^T=M\varphi(p_1(a),\ldots ,p_r(a))^T$, for some $M\in GL_r(K_0)$. Hence $\varphi(Y_a)\subset Y_a$. Let $X_a\subset B_{\operatorname{cr} }\cap B^+_{\mathrm{dR}}$ be the inverse image of $Y_a$ under the projection $B_{\operatorname{cr} }\cap B^+_{\mathrm{dR}}\to (B_{\operatorname{cr} }\cap B^+_{\mathrm{dR}})/B_{\operatorname{cr} }^+$ (naturally $B^+_{\operatorname{cr} }\subset X_a$). Then $\varphi(X_a)\subset X_a + B^+_{\operatorname{cr} }=X_a.$ By the above lemma $\varphi^2(X_a)\subset B^+_{\operatorname{cr} }$. Hence $\varphi^2(Y_a)=0$ and (applying $M^{-2}$) $Y_a=0$. This implies that $a=0$ and $H^0(C(D)/C^+(D))=0$, as wanted.
For general $m>0$, consider the filtration $D_1\subset D$, where $D_1:=\ker(N)$ with induced structures. Set $D_2:= D/D_1$ with induced structures. Then $D_1,D_2\in MF_{K}(\varphi,N)$; $N^i$ is trivial on $D_1$ for $i=1$ and on $D_2$ for $i=m-1$. Clearly $F^1D_{1,K}=F^1D_{2,K}=0$. Hence, by Remark \ref{basics}.3, we have a short exact sequence $$ 0\to C(D_1)/C^+(D_1)\to C(D)/C^+(D)\to C(D_2)/C^+(D_2)\to 0 $$ By the inductive assumption $H^0(C(D_1)/C^+(D_1))=H^0(C(D_2)/C^+(D_2))=0$. Hence $H^0(C(D)/C^+(D))=0$, as wanted. \end{proof} \begin{corollary} If $D\in MF_K(\varphi, N)$ and $F^1D_K=0$ then $H^0(C^+(D))=H^0(C(D))=V_{\operatorname{st} }(D)$ ($\subset D\otimes_{K_0}B^+_{\operatorname{st} }$) and $H^1(C^+(D))\hookrightarrow H^1(C(D))$. \end{corollary} \begin{corollary} \label{MF1} If $D\in MF^{\operatorname{ad} }_K(\varphi, N)$ and $F^1D_K=0$ then $$H^i(C^+(D))=H^i(C(D))= \begin{cases} V_{\operatorname{st} }(D) &i=0\\ 0 & i\neq 0 \end{cases} $$ (i.e., $C^+(D)\stackrel{\sim}{\to} C(D)$). \end{corollary} A filtered $(\varphi,N,G_K)$-module is a tuple $(D,\varphi,N,\rho, F^{\scriptscriptstyle\bullet})$, where \begin{enumerate} \item $D$ is a finite dimensional $K_0^{\operatorname{nr} }$-vector space; \item $\varphi : D \to D$ is a Frobenius map; \item $N : D \to D$ is a $K_0^{\operatorname{nr} }$-linear monodromy map such that $N\varphi = p\varphi N$; \item $\rho$ is a $K_0^{\operatorname{nr} }$-semilinear $G_K$-action on $D$ (hence $\rho|I_K$ is linear) that is smooth, i.e., all vectors have open stabilizers, and that commutes with $\varphi$ and $N$; \item $F^{\scriptscriptstyle\bullet}$ is a decreasing finite filtration of $D_K:=(D\otimes _{K_0^{\operatorname{nr} }}\overline{K} )^{G_K}$ by $K$-vector spaces. \end{enumerate} Morphisms between filtered $(\varphi,N, G_K)$-modules are $K_0^{\operatorname{nr} }$-linear maps preserving all structures. There is a notion of a {\em (weakly) admissible} filtered $(\varphi, N,G_K)$-module \cite{CF}, \cite{F3}. Denote by $MF_K^{\operatorname{ad} }(\varphi,N,G_K)\subset MF_K^{}(\varphi,N,G_K)$ the categories of admissible filtered $(\varphi, N,G_K)$-modules and filtered $(\varphi, N,G_K)$-modules, respectively. We know \cite{CF} that the pair of functors $D_{\mathrm{pst}}(V)=\injlim_H(B_{\operatorname{st} }\otimes_{{\mathbf Q}_p}V)^{H}$, $H\subset G_K$ an open subgroup, $V_{\mathrm{pst}}(D)=(B_{\operatorname{st} }\otimes_{K_0^{\operatorname{nr} }}D)^{\varphi= \operatorname{Id} ,N=0}\cap F^0(B_{\mathrm{dR}}\otimes_{K}D_K)$ define an equivalence of categories $MF_K^{\operatorname{ad} }(\varphi,N,G_K)\simeq \mathrm {R} ep_{\mathrm{pst}}(G_K)$. For $D\in MF_K(\varphi,N,G_K)$, set \footnote{We hope that the notation below will not lead to confusion with the semistable case in general but if in doubt we will add the data of the field $K$ in the latter case.} $$ C_{\mathrm{pst}}(D):= \left[\begin{aligned}\xymatrix@C=36pt{D_{\operatorname{st} }\ar[d]^N \ar[r]^-{(1-\varphi, \operatorname{can} )} & D_{\operatorname{st} }\oplus D_{K}/F^0\ar[d]^{(N,0)}\\ D_{\operatorname{st} }\ar[r]^{1-p\varphi} & D_{\operatorname{st} } }\end{aligned}\right] $$ Here $D_{\operatorname{st} }:=D^{G_K}$.
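For instance, for the unit object $D=K_0^{\operatorname{nr} }$ (with $\varphi$ the Frobenius, $N=0$, the natural $G_K$-action, and $F^0D_K=D_K$, $F^1D_K=0$) we have $D_{\operatorname{st} }=K_0$, $D_K=K$, $D_K/F^0=0$, and hence
$$
H^0(C_{\mathrm{pst}}(D))=K_0^{\varphi=1}={\mathbf Q}_p,
$$
in agreement with $H^0_{\operatorname{st} }(G_K,{\mathbf Q}_p)=H^0(G_K,{\mathbf Q}_p)={\mathbf Q}_p$ (cf. Remark \ref{pst=st} below).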
Consider also the following complex (we set $D_{\overline{K} }:=D\otimes_{K_0^{\operatorname{nr} }}\overline{K} $) $$ C^+(D):= \left[\begin{aligned}\xymatrix@C=40pt{D\otimes_{K_0^{\operatorname{nr} }}B^+_{\operatorname{st} }\ar[d]^N \ar[r]^-{(1-\varphi, \operatorname{can} \otimes \iota)} & (D\otimes_{K_0^{\operatorname{nr} }}B^+_{\operatorname{st} })\oplus (D_{\overline{K} }\otimes_{\overline{K} }B^+_{\mathrm{dR}})/F^0\ar[d]^{(N,0)}\\ D\otimes_{K_0^{\operatorname{nr} }}B^+_{\operatorname{st} }\ar[r]^{1-p\varphi} & D\otimes_{K_0^{\operatorname{nr} }}B^+_{\operatorname{st} }}\end{aligned}\right] $$ Define $C(D)$ by omitting the superscript $+$ in the above diagram. We have $C_{\mathrm{pst}}(D)=C(D)^{G_K}$. \begin{remark} \label{pst=st} If $V$ is potentially semistable then $C_{\operatorname{st} }(V)=C_{\mathrm{pst}}(D_{\mathrm{pst}}(V))$ hence $H^*(C_{\mathrm{pst}}(D_{\mathrm{pst}}(V)))=H^*_{\operatorname{st} }(G_K,V)$. \end{remark} \begin{remark} \label{resolution2} If $D=D_{\mathrm{pst}}(V)$ is admissible then we have quasi-isomorphisms $$ C(D)\stackrel{\sim}{\leftarrow}V\otimes_{{\mathbf Q}_p}[B_{\operatorname{cr} } \xrightarrow{(1-\varphi, \operatorname{can} )}B_{\operatorname{cr} }\oplus B_{\mathrm{dR}}/F^0]\stackrel{\sim}{\leftarrow}V\otimes_{{\mathbf Q}_p}(B_{\operatorname{cr} }^{\varphi=1}\cap F^0)=V $$ and the map of complexes $C_{\mathrm{pst}}(D)\to C(D)$ represents the canonical map $H^i_{\operatorname{st} }(G_K,V)\to H^i(G_K,V)$. \end{remark} \begin{remark} \label{BKexp}Let $D=D_{\mathrm{pst}}(V)$ be admissible. The Bloch-Kato exponential $$ (Z^1 C(D))^{G_K} \to H^1(G_K,V)$$ is given by the coboundary map arising from the exact sequence $$ 0 \to V \to C^0(D) \to Z^1 C(D) \to 0. $$ Its restriction to the de Rham part of $Z^1 C(D)$ is the Bloch-Kato exponential $$\exp_{\operatorname{BK} }: D_K/F^0\to H^1(G_K,V).$$ It is also obtained by applying $Rf$, where $f(-) = (-)^{G_K}$, to the coboundary map $\partial : Z^1 C(D) \to V[1]$ arising from the above exact sequence (see the proof of Theorem \ref{stHS} for an appropriate formalism of continuous cohomology). Note that the composition of the canonical maps $$ Z^1 C(D) \to (\sigma_{\geq 1} C(D))[1] \to C(D) [1] \stackrel{\sim}{\leftarrow} V[1] $$ is not equal to $\partial$, but to $- \partial$, by \eqref{sign}. \end{remark} \begin{corollary} \label{resolution3} If $D\in MF^{\operatorname{ad} }_K(\varphi, N, G_K)$ and $F^1D_K=0$ then $$H^i(C^+(D))\stackrel{\sim}{\to}H^i(C(D))= \begin{cases} V_{\mathrm{pst}}(D) &i=0\\ 0 & i\neq 0 \end{cases} $$ (i.e., $C^+(D)\stackrel{\sim}{\to} C(D)$). \end{corollary} \begin{proof} By Remark \ref{resolution2} we have $C(D)\simeq V_{\mathrm{pst}}(D)[0]$. To prove the isomorphism $H^i(C^+(D))\stackrel{\sim}{\to}H^i(C(D))$, $i\geq 0$, take a finite Galois extension $K'/K$ such that $D$ becomes semistable over $K'$, i.e., $I_{K'}$ acts trivially on $D$.
We have $(D',\varphi,N)\in MF_{K'}^{\operatorname{ad} }(\varphi,N)$, where $D':=D^{G_{K'}}$ and (compatibly) $D\simeq D'\otimes_{K'_0}K_0^{\operatorname{nr} }$, $F^{\scriptscriptstyle\bullet}D'_{K'}\simeq F^{\scriptscriptstyle\bullet}D_K\otimes_KK'$. It easily follows that $C^+(D)=C^+(K',D')$ and $C(D) = C(K', D')$. Since $F^1D'_{K'}=0$, our corollary is now a consequence of Corollary \ref{MF1}. \end{proof} \begin{proposition} \label{resolution33} If $D\in MF^{\operatorname{ad} }_K(\varphi, N, G_K)$ and $F^1D_K=0$ then, for $i\geq 0$, the natural map $$H^i_{\operatorname{st} }(G_K,V_{\mathrm{pst}}(D))\stackrel{\sim}{\to} H^i(G_K,V_{\mathrm{pst}}(D)) $$ is an isomorphism. \end{proposition} \begin{proof} Both sides satisfy Galois descent for finite Galois extensions. We can assume, therefore, that $D = D_{\operatorname{st} }(V)$ for a semistable representation $V$ of $G_K$. For $i=0$ we have (even without assuming $F^1D_K=0$) $H^0(C_{\operatorname{st} }(D)) = H^0(C(D)^{G_K}) = H^0(C(D))^{G_K} = V^{G_K}$. For $i=1$ the statement is proved in \cite[Thm. 6.2, Lemme 6.5]{BER}. For $i=2$ it follows from the assumption $F^1D_K=0$ (by weak admissibility of $D$) that there is a $W(k)$-lattice $M \subset D$ such that $\varphi^{-1}(M) \subset p^2 M$, which implies that $1 - p\varphi = -p\varphi(1 - p^{-1}\varphi^{-1}) : D \to D$ is surjective, hence $H^2(C_{\operatorname{st} }(D)) = 0$ (cf. the proof of \cite[Lemme 6.7]{BER}). The proof of the fact that $H^2(G_K, V) = 0$ if $F^1D_K=0$ was kindly communicated to us by L.~Berger; it is reproduced in Appendix~A (cf. Theorem \ref{main}). For $i > 2$ both terms vanish. \end{proof} \subsection{Comparison of spectral sequences} \label{Jan} The purpose of this subsection is to prove a derived category theorem (Theorem \ref{speccomp}) that will be used later to relate the syntomic descent spectral sequence with the \'etale Hochschild-Serre spectral sequence (cf. Theorem \ref{stHS}). Let $D$ be a triangulated category and $H : D \to A$ a cohomological functor to an abelian category $A$. A finite collection of adjacent exact triangles (a ``Postnikov system'' in the language of \cite[IV.2, Ex. 2]{GM}) \begin{equation} \label{postnikov} \xymatrix@C=10pt@R=15pt{ & Y^0 \ar[dr] && Y^1 \ar[dr] &&&& Y^n \ar[dr] &\\ X = X^0 \ar[ur] && X^1 \ar [ur] \ar[ll]^(.45){[1]} && X^2 \ar[ll]^{[1]} && X^n \ar@{.}[ll] \ar[ru] && X^{n+1} = 0 \ar[ll]^(.55){[1]}\\} \end{equation} gives rise to an exact couple $$ D_1^{p,q} = H^q(X^p) = H(X^p[q]),\qquad E_1^{p,q} = H^q(Y^p) \Longrightarrow H^{p+q}(X). $$ The induced filtration on the abutment is given by $$ F^p H^{p+q}(X) = \operatorname{Im} \left(D_1^{p,q} = H^q(X^p) \to H^{p+q}(X)\right).
$$ \begin{remark} In the special case when $A$ is the heart of a non-degenerate $t$-structure $(D^{\leq n}, D^{\geq n})$ on $D$ and $H = \tau_{\leq 0} \tau_{\geq 0}$, the following conditions are equivalent: \begin{enumerate} \label{ass} \item $E_2^{p,q} = 0$ for $p\not= 0$; \item $D_2^{p,q} = 0$ for all $p, q$; \item $D_r^{p,q} = 0$ for all $p, q$ and $r > 1$; \item the sequence $0 \to H^q(X^p) \to H^q(Y^p) \to H^q(X^{p+1}) \to 0$ is exact for all $p, q$; \item the sequence $0 \to H^q(X) \to H^q(Y^0) \to H^q(Y^1) \to \cdots$ is exact for all $q$; \item the canonical map $H^q(X) \to E_1^{{\scriptscriptstyle\bullet},q}$ is a quasi-isomorphism, for all $q$; \item the triangle $\tau_{\leq q} X^p \to \tau_{\leq q} Y^p \to \tau_{\leq q} X^{p+1}$ is exact for all $p, q$. \end{enumerate} \end{remark} From now on until the end of \ref{Jan} assume that $D = D(A)$ is the derived category of $A$ with the standard $t$-structure and that $X^i, Y^i \in D^+(A)$, for all $i$. Furthermore, assume that $f : A \to A'$ is a left exact functor to an abelian category $A'$ and that $A$ admits a class of $f$-adapted objects (hence the derived functor $\mathrm {R} f : D^+(A) \to D^+(A')$ exists). Applying $\mathrm {R} f$ to (\ref{postnikov}) we obtain another Postnikov system, this time in $D^+(A')$. The corresponding exact couple \begin{equation} \label{firstcouple} {}^I D_1^{p,q} = (\mathrm {R} ^q f)(X^p),\qquad {}^I E_1^{p,q} = (\mathrm {R} ^q f)(Y^p) \Longrightarrow (\mathrm {R} ^{p+q}f)(X) \end{equation} induces the filtration $$ {}^I F^p (\mathrm {R} ^{p+q}f)(X) = \operatorname{Im} \left({}^I D_1^{p,q} = (\mathrm {R} ^q f)(X^p) \to (\mathrm {R} ^{p+q}f)(X) \right). $$ Our goal is to compare (\ref{firstcouple}), under the equivalent assumptions (\ref{ass}), to the hypercohomology exact couple \begin{equation} \label{secondcouple} {}^{II} D_2^{p,q} = (\mathrm {R} ^{p+q}f)(\tau_{\leq q-1} X),\qquad {}^{II} E_2^{p,q} = (\mathrm {R} ^p f)(H^q(X)) \Longrightarrow (\mathrm {R} ^{p+q}f)(X) \end{equation} for which $$ {}^{II} F^p (\mathrm {R} ^{p+q}f)(X) = \operatorname{Im} \left({}^{II} D_2^{p-1,q+1} = (\mathrm {R} ^{p+q} f)(\tau_{\leq q} X) \to (\mathrm {R} ^{p+q}f)(X) \right). $$ \begin{theorem} \label{speccomp} Under the assumptions (\ref{ass}) there is a natural morphism of exact couples $(u, v) : ({}^I D_2, {}^I E_2) \to ({}^{II} D_2, {}^{II} E_2)$. Consequently, we have ${}^I F^p \subseteq {}^{II} F^p$ for all $p$ and there is a natural morphism of spectral sequences ${}^I E_r^{*,*} \to {}^{II} E_r^{*,*}$ ($r > 1$) compatible with the identity map on the common abutment. \end{theorem} \begin{proof} {\bf Step 1:} we begin by constructing a natural map $u : {}^I D_2 \to {}^{II} D_2$. For each $p > 0$ there is a commutative diagram in $D^+(A')$ $$ \xymatrix{ (\mathrm {R} ^{p+q}f)((\tau_{\leq q} Y^{p-1})[-p]) \ar [r] \ar[d]^\wr & (\mathrm {R} ^{p+q}f)((\tau_{\leq q} X^p)[-p]) \ar[r] \ar[d]^\wr & (\mathrm {R} ^{p+q}f)(\tau_{\leq q} X) \ar[d]^{\alpha_{II}} \\ {}^I E_1^{p-1,q} = (\mathrm {R} ^{p+q}f)(Y^{p-1}[-p]) \ar[r]^(.54){k_1} & {}^I D_1^{p,q} = (\mathrm {R} ^{p+q}f)(X^p[-p]) \ar [ru]^{u'} \ar[r]^(.62){\alpha_I} & (\mathrm {R} ^{p+q}f)(X)\\} $$ both of whose rows are complexes.
This defines a map $u' : {}^I D_1^{p,q} \to {}^{II} D_2^{p-1,q+1}$ such that $u' k_1 = 0$ and $\alpha_{II} u' = \alpha_I$ (hence ${}^I F^p = \operatorname{Im} (\alpha_I) \subseteq \operatorname{Im} (\alpha_{II}) = {}^{II} F^p$). By construction, the diagram (with exact top row) $$ \xymatrix{ {}^I E_1^{p,q-1} \ar[r]^{k_1} \ar[rd]^0 & {}^I D_1^{p+1,q-1} \ar[r]^(.55){i_1} \ar[d]^{u'} & {}^I D_1^{p,q} \ar[d]^{u'}\\ & {}^{II} D_2^{p,q} \ar[r]^(.45){i_2} & {}^{II} D_2^{p-1,q+1}\\} $$ is commutative for each $p\geq 0$, which implies that the map $$ u = u' i_1^{-1} : {}^I D_2^{p,q} = i_1({}^I D_1^{p+1,q-1}) \to {}^{II} D_2^{p,q} $$ is well-defined and satisfies $u i_2 = i_2 u$. \smallskip \noindent {\bf Step 2:} for all $q$, the canonical quasi-isomorphism $H^q(X) \to E_1^{{\scriptscriptstyle\bullet},q}$ induces natural morphisms \begin{align*} v' : {}^I E_2^{p,q} &= H^p(i \mapsto (\mathrm {R} ^q f)(Y^i)) \to H^p(i \mapsto f(H^q(Y^i))) \to (\mathrm {R} ^p f)(i \mapsto H^q(Y^i))\\ &= (\mathrm {R} ^p f)(E_1^{{\scriptscriptstyle\bullet},q}) \stackrel\sim\longleftarrow (\mathrm {R} ^p f)(H^q(X)) = {}^{II} E_2^{p,q};\\ \end{align*} set $v = (-1)^p v' : {}^I E_2^{p,q} \to {}^{II} E_2^{p,q}$. It remains to show that $u$ and $v$ are compatible with the maps $$ {}^? D_2^{p-1,q+1} \stackrel {j_2} \longrightarrow {}^? E_2^{p,q} \stackrel {k_2} \longrightarrow {}^? D_2^{p+1,q} \qquad\qquad (? = I, II). $$ {\bf Step 3:} for any complex $M^{\scriptscriptstyle\bullet}$ over $A$ denote by $Z^i(M^{\scriptscriptstyle\bullet}) = \operatorname{Ker} (\delta^i : M^i \to M^{i+1})$ the subobject of cycles in degree $i$. If $M^{\scriptscriptstyle\bullet}$ is a resolution of an object $M$ of $A$, then each exact sequence \begin{equation} \label{cycle} 0 \longrightarrow Z^p(M^{\scriptscriptstyle\bullet}) \longrightarrow M^p \stackrel{\delta^p}\longrightarrow Z^{p+1}(M^{\scriptscriptstyle\bullet}) \longrightarrow 0 \qquad\qquad (p\geq 0) \end{equation} can be completed to an exact sequence of resolutions $$ \xymatrix@C=15pt@R=15pt{ 0 \ar[r] & Z^p(M^{\scriptscriptstyle\bullet}) \ar[r] \ar[d]^{ \operatorname{can} } & M^p \ar[r] \ar[d]^{ \operatorname{can} } & Z^{p+1}(M^{\scriptscriptstyle\bullet}) \ar[r] \ar[d]^{- \operatorname{can} } & 0\\ 0 \ar[r] & (\sigma_{\geq p}(M^{\scriptscriptstyle\bullet}))[p] \ar[r] & (\sigma_{\geq p} {\rm Cone}(M^{\scriptscriptstyle\bullet} \stackrel {\rm id} \to M^{\scriptscriptstyle\bullet}))[p] \ar[r] & (\sigma_{\geq p+1}(M^{\scriptscriptstyle\bullet}))[p+1] \ar[r] & 0.\\} $$ By induction, we obtain that the following diagram, whose top arrow is the composition of the natural maps $Z^i \to Z^{i-1}[1]$ induced by (\ref{cycle}), commutes in $D^+(A)$.
\begin{equation} \label{sign} \xymatrix@C=10pt@R=15pt{ Z^p(M^{\scriptscriptstyle\bullet}) \ar[r] \ar[d]^{ \operatorname{can} } & Z^0(M^{\scriptscriptstyle\bullet})[p] = M[p] \ar[d]^{(-1)^p \operatorname{can} }\\ (\sigma_{\geq p}(M^{\scriptscriptstyle\bullet}))[p] \ar[r]^(.58){ \operatorname{can} } & M^{\scriptscriptstyle\bullet}[p]\\} \end{equation} We are going to apply this statement to $M = H^q(X)$ and $M^{\scriptscriptstyle\bullet} = E_1^{\scriptscriptstyle\bullet,q}$, when $Z^p(M^{\scriptscriptstyle\bullet}) = D_1^{p,q} = H^q(X^p)$ and $Z^0(M^{\scriptscriptstyle\bullet}) = H^q(X)$. \smallskip \noindent {\bf Step 4:} we are going to investigate ${}^I E_2^{p,q}$. Complete the morphism $Y^p \to Y^{p+1}$ to an exact triangle $U^p \to Y^p \to Y^{p+1}$ in $D^+(A)$ and fix a lift $X^p \to U^p$ of the morphism $X^p \to Y^p$. There are canonical epimorphisms \begin{equation} \label{epi} (\mathrm {R} ^q f)(U^p) \twoheadrightarrow \operatorname{Ker} ((\mathrm {R} ^q f)(Y^p) \stackrel {j_1 k_1} \longrightarrow (\mathrm {R} ^q f)(Y^{p+1})) = Z^p({}^I E_1^{\scriptscriptstyle\bullet,q}) \twoheadrightarrow {}^I E_2^{p,q} \end{equation} and the map $$ k_2 : {}^I E_2^{p,q} \to {}^I D_2^{p+1,q} = \operatorname{Ker} ({}^I D_1^{p+1,q} \stackrel {j_1} \longrightarrow {}^I E_1^{p+1,q}) $$ is induced by the restriction of $k_1 : {}^I E_1^{p,q} \to {}^I D_1^{p+1,q}$ to $Z^p({}^I E_1^{\scriptscriptstyle\bullet,q})$. The following octahedron (in which we have drawn only the four exact faces) $$ \xymatrix@C=15pt@R=15pt{ X^{p+2} \ar[rd]_{[1]} && Y^{p+1} \ar[ll]\\ & X^{p+1} \ar[ru] \ar[dl] &\\ X^p[1] \ar[rr]^{[1]} && Y^p \ar[lu]\\} \qquad\qquad \xymatrix@C=15pt@R=15pt{ X^{p+2} \ar[dd]_{[1]} && Y^{p+1} \ar[dl]\\ & U^p[1] \ar[lu] \ar[dr]^{[1]} &\\ X^p[1] \ar[ru] && Y^p \ar[uu]\\} $$ shows that the triangle $X^p \to U^p \to X^{p+2}[-1]$ is exact and the diagrams $$ \xymatrix@C=15pt@R=15pt{ U^p[1] \ar[r] \ar[d] & Y^p[1] \ar[d]\\ X^{p+2} \ar[r] & X^{p+1}[1]\\} \qquad\qquad \xymatrix@C=15pt@R=15pt{ (\mathrm {R} ^q f)(U^p) \ar[r] \ar[d] & Z^p({}^I E_1^{\scriptscriptstyle\bullet,q}) \ar[d]_{k_1}\\ (\mathrm {R} ^q f)(X^{p+2}[-1]) = {}^I D_2^{p+2,q-1} \ar[r]^(.72){i_1} & {}^I D_2^{p+1,q}\\} $$ commute. The previous discussion implies that the composite map $$ (\mathrm {R} ^q f)(U^p) \twoheadrightarrow Z^p({}^I E_1^{\scriptscriptstyle\bullet,q}) \twoheadrightarrow {}^I E_2^{p,q} \stackrel {k_2} \longrightarrow {}^I D_2^{p+1,q} \stackrel u \to {}^{II} D_2^{p+1,q} = (\mathrm {R} ^q f)((\tau_{\leq q-1} X)[p+1]) $$ is obtained by applying $\mathrm {R} ^q f$ to \begin{equation} \label{comp1} \tau_{\leq q}\, U^p \to \tau_{\leq q}(X^{p+2}[-1]) = (\tau_{\leq q-1} X^{p+2})[-1] \to (\tau_{\leq q-1}\, X)[p+1]. \end{equation} {\bf Step 5:} all boundary maps $H^q(X^{p+2}[-1]) \to H^q(X^p)$ vanish by (\ref{ass}), which means that the following triangles are exact.
$$ \tau_{\leq q}\, X^p \to \tau_{\leq q}\, U^p \to \tau_{\leq q}(X^{p+2}[-1]) = (\tau_{\leq q-1}\, X^{p+2})[-1] $$ The commutative diagram $$ \xymatrix@C=15pt@R=15pt{ \tau_{\leq q}\, U^p \ar[r] & H^q(U^p)[-q] \ar[r] & \operatorname{Ker} \left(H^q(Y^p) \to H^q(Y^{p+1})\right)[-q]\\ \tau_{\leq q}\, X^p \ar[r] \ar[u] & H^q(X^p)[-q] \ar[u] \ar@2{-}[ru] &\\} $$ gives rise to an octahedron $$ \xymatrix@C=10pt@R=15pt{ V^p \ar[rd]_{[1]} && H^q(X^p)[-q] \ar[ll]\\ & \tau_{\leq q}\, U^p \ar[ru] \ar[dl] &\\ \tau_{\leq q}(X^{p+2}[-1]) \ar[rr]^{[1]} && \tau_{\leq q}\,X^p \ar[lu]\\} \qquad \xymatrix@C=10pt@R=15pt{ V^p \ar[dd]_{[1]} && H^q(X^p)[-q] \ar[dl]\\ & (\tau_{\leq q-1}\, X^p)[1] \ar[lu] \ar[dr]^{[1]} &\\ X^p[1] \ar[ru] && Y^p \ar[uu]\\} $$ In particular, the following diagram commutes. \begin{equation} \label{comp2} \xymatrix@C=15pt@R=15pt{ \tau_{\leq q}\, U^p \ar[r] \ar[d] & H^q(X^p)[-q] \ar[d]\\ \tau_{\leq q}(X^{p+2}[-1]) \ar[r] & (\tau_{\leq q-1}\, X^p)[1]\\} \end{equation} {\bf Step 6:} the diagram (\ref{sign}) implies that the composition of $v : {}^I E_2^{p,q} \to {}^{II} E_2^{p,q}$ with the second epimorphism in (\ref{epi}) is equal to the composite map \begin{align*} &Z^p({}^I E_1^{\scriptscriptstyle\bullet,q}) = \operatorname{Ker} \left((\mathrm {R} ^q f)(\tau_{\leq q}\,Y^p) \to (\mathrm {R} ^q f)(\tau_{\leq q}\,Y^{p+1})\right)\to\\ &\to \operatorname{Ker} \left((\mathrm {R} ^q f)(H^q(Y^p)[-q]) \to (\mathrm {R} ^q f)(H^q(Y^{p+1})[-q])\right) =\\ &= (\mathrm {R} ^q f)(Z^p(E_1^{\scriptscriptstyle\bullet,q})[-q]) \to (\mathrm {R} ^q f)(Z^0(E_1^{\scriptscriptstyle\bullet,q})[-q+p]) = (\mathrm {R} ^p f)(H^q(X)) = {}^{II} E_2^{p,q}.\\ \end{align*} As a result, the composition of $v$ with (\ref{epi}) is obtained by applying $\mathrm {R} ^q f$ to \begin{equation} \label{comp3} \tau_{\leq q}\, U^p \to H^q(X^p)[-q] \to H^q(X)[-q+p]. \end{equation} Consequently, the composite map $$ {}^I D_1^{p,q} = (\mathrm {R} ^q f)(\tau_{\leq q}\, X^p) \stackrel {j_1} \longrightarrow Z^p({}^I E_1^{\scriptscriptstyle\bullet,q}) \twoheadrightarrow {}^I E_2^{p,q} \stackrel v \to {}^{II} E_2^{p,q} $$ is given by applying $\mathrm {R} ^q f$ to $$ \tau_{\leq q}\, X^p \to H^q(X^p)[-q] \to H^q(X)[-q+p], $$ hence is equal to $j_2 u'$. It follows that $v j_2 = v j_1 i_1^{-1} = j_2 u' i_1^{-1} = j_2 u$. \smallskip\noindent {\bf Step 7:} the diagram (\ref{comp2}) implies that the map (\ref{comp1}) coincides with the composition of (\ref{comp3}) with the canonical map $H^q(X)[-q+p] \to (\tau_{\leq q-1}\, X)[p+1]$, hence $u k_2 = k_2 v$. The theorem is proved. \end{proof} \smallskip\noindent \begin{example} \label{filtered} If $K^{\scriptscriptstyle\bullet}$ is a bounded below filtered complex over $A$ (with a finite filtration) $$ K^{\scriptscriptstyle\bullet} = F^0 K^{\scriptscriptstyle\bullet} \supset F^1 K^{\scriptscriptstyle\bullet} \supset \cdots \supset F^n K^{\scriptscriptstyle\bullet} \supset F^{n+1} K^{\scriptscriptstyle\bullet} = 0, $$ then the objects $$ X^p = F^p K^{\scriptscriptstyle\bullet}[p],\qquad Y^p = (F^p K^{\scriptscriptstyle\bullet}/F^{p+1} K^{\scriptscriptstyle\bullet})[p] = gr^p_F(K^{\scriptscriptstyle\bullet})[p] \in D^+(A) $$ form a Postnikov system of the kind considered in (\ref{postnikov}).
The corresponding spectral sequences are equal to $$ E_1^{p,q} = H^{p+q}(gr^p_F(K^{\scriptscriptstyle\bullet})) \Longrightarrow H^{p+q}(K^{\scriptscriptstyle\bullet}),\qquad {}^I E_1^{p,q} = (\mathrm {R} ^{p+q}f)(gr^p_F(K^{\scriptscriptstyle\bullet})) \Longrightarrow (\mathrm {R} ^{p+q}f)(K^{\scriptscriptstyle\bullet}). $$ In the special case when $K^{\scriptscriptstyle\bullet}$ is the total complex associated to a first quadrant bicomplex $C^{\scriptscriptstyle\bullet, \scriptscriptstyle\bullet}$ and the filtration $F^p$ is induced by the column filtration on $C^{\scriptscriptstyle\bullet, \scriptscriptstyle\bullet}$, the complex $f(K^{\scriptscriptstyle\bullet})$ over $A'$ is equipped with a canonical filtration $(f F^p)(f(K^{\scriptscriptstyle\bullet})) = f(F^p K^{\scriptscriptstyle\bullet})$ satisfying $$ gr^p_{f(F)}(f(K^{\scriptscriptstyle\bullet})) = f(gr^p_F(K^{\scriptscriptstyle\bullet})). $$ Under the assumptions (\ref{ass}), the corresponding exact couple $$ {}^f D_1^{p,q} = H^{p+q}(f(F^p K^{\scriptscriptstyle\bullet})),\qquad {}^f E_1^{p,q} = H^{p+q}(gr^p_{f(F)}(f(K^{\scriptscriptstyle\bullet}))) = H^{p+q}(f(gr^p_F(K^{\scriptscriptstyle\bullet}))) \Longrightarrow H^{p+q}(f(K^{\scriptscriptstyle\bullet})) $$ then naturally maps to the exact couple (\ref{firstcouple}), hence (beginning from $(D_2, E_2)$) to the exact couple (\ref{secondcouple}), by Theorem \ref{speccomp}. \end{example} \section{Syntomic cohomology} In this section we will define the arithmetic and geometric syntomic cohomologies of varieties over $K$ and $\overline{K} $, respectively, and study their basic properties. \subsection{Hyodo-Kato morphism revisited} We will need to use the Hyodo-Kato morphism on the level of derived categories and vary it in the $h$-topology. Recall that the original morphism depends on the choice of a uniformizer, and a change of uniformizer is encoded in a transition function involving the exponential of the monodromy. Since the fields of definition of the semistable models in the bases for the $h$-topology vary, we will need to use these transition functions. The problem, though, is that in the most obvious (i.e., crystalline) definition of the Hyodo-Kato complexes the monodromy is (at best) homotopically nilpotent, which makes the exponential in the transition functions impossible to define. Beilinson \cite{BE2} solves this problem by representing the Hyodo-Kato complexes by modules with nilpotent monodromy. In this subsection we will summarize what we need from his approach. First, a quick reminder. Let $(U,\overline{U})$ be a log-scheme, log-smooth over $V^{\times}$.
For any $r\geq 0$, consider its absolute (meaning over $W(k)$) log-crystalline cohomology complexes \begin{align*} \mathrm {R} \Gamma_{\operatorname{cr} }(U,\overline{U},{\mathcal J}^{[r]})_n: & =\mathrm {R} \Gamma(\overline{U}_{\operatorname{\acute{e}t} },\mathrm {R} u_{U^{\times}_n/W_n(k)*}{\mathcal J}^{[r]}_{U^{\times}_n/W_n(k)}),\quad \mathrm {R} \Gamma_{\operatorname{cr} }(U,\overline{U},{\mathcal J}^{[r]}):=\operatorname{holim} _n\mathrm {R} \Gamma_{\operatorname{cr} }(U,\overline{U},{\mathcal J}^{[r]})_n,\\ \mathrm {R} \Gamma_{\operatorname{cr} }(U,\overline{U},{\mathcal J}^{[r]})_{\mathbf Q}: & =\mathrm {R} \Gamma_{\operatorname{cr} }(U,\overline{U},{\mathcal J}^{[r]})\otimes{\mathbf Q}_p, \end{align*} where $U^{\times}$ denotes the log-scheme $(U,\overline{U})$ and $u_{U^{\times}_n/W_n(k)}: (U^{\times}_n/W_n(k))_{\operatorname{cr} }\to \overline{U}_{\operatorname{\acute{e}t} }$ is the projection from the log-crystalline to the \'etale topos. For $r\geq 0$, we write ${\mathcal J}^{[r]}_{U^{\times}_n/W_n(k)}$ for the $r$-th divided power of the canonical PD-ideal ${\mathcal J}_{U^{\times}_n/W_n(k)}$; for $r\leq 0$, we set ${\mathcal J}^{[r]}_{U^{\times}_n/W_n(k)}:={\mathcal O}_{U^{\times}_n/W_n(k)}$ and we will often omit it from the notation. The absolute log-crystalline cohomology complexes are filtered $E_{\infty}$ algebras over $W_n(k)$, $W(k)$, or $K_0$, respectively. Moreover, the rational ones are filtered commutative dg algebras. \begin{remark} The canonical pullback map $$\mathrm {R} \Gamma(\overline{U}_{\operatorname{\acute{e}t} },\mathrm {R} u_{U^{\times}_n/W_n(k)*}{\mathcal J}^{[r]}_{U^{\times}_n/W_n(k)})\stackrel{\sim}{\to} \mathrm {R} \Gamma(\overline{U}_{\operatorname{\acute{e}t} },\mathrm {R} u_{U^{\times}_n/{\mathbf Z}/p^n*}{\mathcal J}^{[r]}_{U^{\times}_n/{\mathbf Z}/p^n}) $$ is a quasi-isomorphism. In what follows we will often call both the ``absolute crystalline cohomology''. \end{remark} Let $W(k)\langle t_l\rangle$ be the divided power polynomial algebra generated by elements $t_l$, $l\in {\mathfrak m}_K/ {\mathfrak m}^2_K\setminus \{0\}$, subject to the relations $t_{al}=[\overline{a}]t_l,$ for $a\in V^*$, where $[\overline{a}]\in W(k)$ is the Teichm\"{u}ller lift of $\overline{a}$, the reduction mod ${\mathfrak m}_K$ of $a$. Let $R_{V}$ (or simply $R$) be the $p$-adic completion of the subalgebra of $W(k)\langle t_l\rangle$ generated by $t_l$ and $t_l^{ie_K}/i!$, $i\geq 1$. For a fixed $l$, the ring $R$ is the following $W(k)$-subalgebra of $K_0[[t_l]]$: \begin{align*} R =\{{\mathcal{U}}m_{i=0}^{\infty} a_i\frac{t_l^i}{\lfloor i/e_K\rfloor !}\mid a_i\in W(k), \lim_{i\rightarrow \infty}a_i=0\}. \end{align*} One extends the Frobenius semi-linearly to $\varphi_R$ on $R$ by setting $\varphi_R(t_l)=t_l^p$ and defines a monodromy operator $N_R$, a $W(k)$-derivation, by setting $ N_R(t_l)=-t_l. $ Let $E:=\operatorname{Spec} (R)$ be equipped with the log-structure generated by the $t_l$'s. We have two exact closed embeddings $$ i_0:W(k)^0\hookrightarrow E,\quad i_{\pi}: V^{\times}\hookrightarrow E. $$ The first one is canonical and induced by $ t_l\mapsto 0$. The second one depends on the choice of the class of the uniformizing parameter $\pi\in {\mathfrak m}_K/p{\mathfrak m}_{K}$ up to multiplication by Teichm\"{u}ller elements. It is induced by $t_l\mapsto [\overline{l/\pi}]\pi$. Assume that $(U,\overline{U})$ is of Cartier type (i.e., the special fiber $\overline{U}_0$ is of Cartier type). Consider the log-crystalline and the Hyodo-Kato complexes (cf.
\cite[1.16]{BE2}) $$ \mathrm {R} \Gamma_{\operatorname{cr} }((U,\overline{U})/R,{\mathcal J}^{[r]})_n:=\mathrm {R} \Gamma_{\operatorname{cr} }((U,\overline{U})_{n}/R_n,{\mathcal J}^{[r]}_{\overline{U}_n/R_n}),\quad \mathrm {R} \Gamma_{\mathrm{HK}}(U,\overline{U})_n:=\mathrm {R} \Gamma_{\operatorname{cr} }((U,\overline{U})_0/W_n(k)^0). $$ Let $ \mathrm {R} \Gamma_{\operatorname{cr} }((U,\overline{U})/R,{\mathcal J}^{[r]})$ and $\mathrm {R} \Gamma_{\mathrm{HK}}(U,\overline{U})$ be their homotopy inverse limits. The last complex is called the {\em Hyodo-Kato} complex. The complex $\mathrm {R} \Gamma_{\operatorname{cr} }((U,\overline{U})/R)$ is $R$-perfect and $$\mathrm {R} \Gamma_{\operatorname{cr} }((U,\overline{U})/R)_n\simeq \mathrm {R} \Gamma_{\operatorname{cr} }((U,\overline{U})/R)\otimes^{L}_RR_n\simeq \mathrm {R} \Gamma_{\operatorname{cr} }((U,\overline{U})/R)\otimes^L{\mathbf Z}/p^n. $$ In general, we have $\mathrm {R} \Gamma_{\operatorname{cr} }((U,\overline{U})/R,{\mathcal J}^{[r]})_n\simeq \mathrm {R} \Gamma_{\operatorname{cr} }((U,\overline{U})/R,{\mathcal J}^{[r]})\otimes^L{\mathbf Z}/p^n$. The complex $\mathrm {R} \Gamma_{\mathrm{HK}}(U,\overline{U})$ is $W(k)$-perfect and $$\mathrm {R} \Gamma_{\mathrm{HK}}(U,\overline{U})_n\simeq \mathrm {R} \Gamma_{\mathrm{HK}}(U,\overline{U})\otimes^L_{W(k)}W_n(k)\simeq \mathrm {R} \Gamma_{\mathrm{HK}}(U,\overline{U})\otimes^L{\mathbf Z}/p^n. $$ We normalize the monodromy operators $N$ on the rational complexes $\mathrm {R} \Gamma_{\operatorname{cr} }((U,\overline{U})/R)_{{\mathbf Q}}$ and $\mathrm {R} \Gamma_{\mathrm{HK}}(U,\overline{U})_{\mathbf Q}$ by replacing the standard $N$ \cite[3.6]{HK} by $N_R:=e_{K}^{-1}N$. This makes them compatible with base change. The embedding $i_0: (U,\overline{U})_0\hookrightarrow (U,\overline{U}) $ over $i_0: W_n(k)^0\hookrightarrow E_n$ yields compatible morphisms $i^*_{0,n}: \mathrm {R} \Gamma_{\operatorname{cr} }((U,\overline{U})/R)_n\to \mathrm {R} \Gamma_{\mathrm{HK}}(U,\overline{U})_n$. Completing, we get a morphism $$i^*_0: \mathrm {R} \Gamma_{\operatorname{cr} }((U,\overline{U})/R)\to \mathrm {R} \Gamma_{\mathrm{HK}}(U,\overline{U}),$$ which induces a quasi-isomorphism $i^*_0:\mathrm {R} \Gamma_{\operatorname{cr} }((U,\overline{U})/R)\otimes ^L_RW(k)\stackrel{\sim}{\to}\mathrm {R} \Gamma_{\mathrm{HK}}(U,\overline{U})$. All the above objects have an action of Frobenius and these morphisms are compatible with Frobenius. The Frobenius action is invertible on $\mathrm {R} \Gamma_{\mathrm{HK}}(U,\overline{U})_{\mathbf Q}$. The map $i^*_{0}: \mathrm {R} \Gamma_{\operatorname{cr} }((U,\overline{U})/R)_{{\mathbf Q}}\to \mathrm {R} \Gamma_{\mathrm{HK}}(U,\overline{U})_{{\mathbf Q}}$ admits a unique (in the classical derived category) $W(k)$-linear section $\iota_{\pi}$ \cite[1.16]{BE2}, \cite[4.4.6]{Ts} that commutes with $\varphi$ and $N$.
The map $\iota_{\pi}$ is functorial and its $R$-linear extension is a quasi-isomorphism $$\iota_{\pi}: R\otimes_{W(k)}\mathrm {R} \Gamma_{\mathrm{HK}}(U,\overline{U})_{{\mathbf Q}}\stackrel{\sim}{\to} \mathrm {R} \Gamma_{\operatorname{cr} }((U,\overline{U})/R) _{{\mathbf Q}}.$$ The composition (the {\em Hyodo-Kato map}) $$ \iota_{\mathrm{dR},\pi}:=\gamma_r^{-1}i^*_{\pi}\cdot\iota_{\pi}:\quad \mathrm {R} \Gamma_{\mathrm{HK}}(U,\overline{U})_{{\mathbf Q}}\to \mathrm {R} \Gamma_{\mathrm{dR}}(U,\overline{U}_{K}), $$ where $$ \gamma_r^{-1}:\quad \mathrm {R} \Gamma_{\operatorname{cr} }(U,\overline{U},{\mathcal O}/{\mathcal J}^{[r]})_{{\mathbf Q}}\stackrel{\sim}{\to}\mathrm {R} \Gamma_{\mathrm{dR}}(U,\overline{U}_{K})/F^r $$ is the quasi-isomorphism from Corollary \ref{Langer}, induces a $K$-linear functorial quasi-isomorphism (the {\em Hyodo-Kato quasi-isomorphism}) \cite[4.4.8, 4.4.13]{Ts} \begin{equation} \label{HKqis} \iota_{\mathrm{dR},\pi}: \mathrm {R} \Gamma_{\mathrm{HK}}(U,\overline{U})\otimes_{W(k)}K\stackrel{\sim}{\to}\mathrm {R} \Gamma_{\mathrm{dR}}(U,\overline{U}_{K}) \end{equation} We are now going to describe the Beilinson-Hyodo-Kato morphism and to study it on a few examples. Let $S_n=\operatorname{Spec} ({\mathbf Z}/p^n)$ equipped with the trivial log-structure and let $S=\operatorname{Spf} ({\mathbf Z}_p)$ be the induced formal log-scheme. For any log-scheme $Y\to S_1$ let $D_{\varphi}((Y/S)_{\operatorname{cr} },{\mathcal O}_{Y/S})$ denote the derived category of Frobenius ${\mathcal O}_{Y/S}$-modules and $D_{\varphi}^{pcr}(Y/S)$ its thick subcategory of perfect F-crystals, i.e., those Frobenius modules that are perfect crystals \cite[1.11]{BE2}. We call a perfect F-crystal $({\mathcal{F}},\varphi)$ {\em non-degenerate} if the map $L\varphi^*({\mathcal{F}})\to {\mathcal{F}}$ is an isogeny. The corresponding derived category is denoted by $D_{\varphi}^{pcr}(Y/S)^{nd}$. It has a dg category structure \cite[1.14]{BE2} that we denote by ${\mathcal{D}}_{\varphi}^{pcr}(Y/S)^{nd}$. We will omit $S$ if understood. Suppose now that $Y$ is a fine log-scheme that is affine. Assume also that there is a PD-thickening $P=\operatorname{Spf} R$ of $Y$ that is formally smooth over $S$ and such that $R$ is a $p$-adically complete ring with no $p$-torsion. Let $f:Z\to Y$ be a log-smooth map of Cartier type with $Z$ fine and proper over $Y$. Beilinson \cite[1.11,1.14]{BE2} proves the following theorem. \begin{theorem} \label{kk1} The complex ${\mathcal{F}}:=Rf_{\operatorname{cr} *}({\mathcal O}_{Z/S})$ is a non-degenerate perfect F-crystal. \end{theorem} Let $D_{\varphi,N}(K_0)$ denote the bounded derived category of $(\varphi, N)$-modules. By \cite[1.15]{BE2}, it has a dg category structure that we will denote by ${\mathcal{D}}_{\varphi,N}(K_0)$. We call a $(\varphi,N)$-module {\em effective} if it contains a $W(k)$-lattice preserved by $\varphi$ and $N$. Denote by ${\mathcal{D}}_{\varphi,N}(K_0)^{eff}\subset {\mathcal{D}}_{\varphi,N}(K_0)$ the bounded derived category of the abelian category of effective modules. Let $f:Y\to k^0$ be a log-scheme. We think of $k^0$ as $W(k)^{\times}_1$. Then the map $f$ is given by a $k$-structure on $Y$ plus a section $l=f^*(\overline{p})\in\Gamma(Y,M_Y)$ such that its image in $\Gamma(Y,{\mathcal O}_Y)$ equals $0$. We will often write $f=f_l$, $l=l_f$. Beilinson proves the following theorem \cite[1.15]{BE2}.
\begin{theorem} \label{kk2} \begin{enumerate} \item There is a natural functor \begin{equation} \label{kwaku-kwik} \varepsilon_{f}=\varepsilon_l: {\mathcal{D}}_{\varphi,N}(K_0)^{eff}\to {\mathcal{D}}^{pcr}_{\varphi}(Y)^{nd}\otimes {\mathbf Q}. \end{equation} \item $\varepsilon_{f}$ is compatible with base change, i.e., for any $\theta: Y'\to Y$ one has a canonical identification $\varepsilon_{f\theta}\stackrel{\sim}{\to}L\theta^*_{\operatorname{cr} }\varepsilon_{f}$. For any $a\in k^*, m\in {\mathbf Z}_{>0}$, there is a canonical identification $\varepsilon_{al^m}(V,\varphi,N)\stackrel{\sim}{\to}\varepsilon_l(V,\varphi,mN)$. \item Suppose that $Y$ is a local scheme with residue field $k$ and nilpotent maximal ideal, $M_Y/{\mathcal O}^*_Y={\mathbf Z}_{>0}$, and the map $f^*:M_{k^0}/k^*\to M_Y/{\mathcal O}^*_Y$ is injective. Then (\ref{kwaku-kwik}) is an equivalence of dg categories. \end{enumerate} \end{theorem} In particular, we have an equivalence of dg categories $$\varepsilon:=\varepsilon_{\overline{p}}: {\mathcal{D}}_{\varphi,N}(K_0)^{eff}\stackrel{\sim}{\to}{\mathcal{D}}^{pcr}_{\varphi}(k^0)^{nd}\otimes{\mathbf Q} $$ and a canonical identification $\varepsilon_f=Lf^*_{\operatorname{cr} }\varepsilon$. On the level of sections the functor (\ref{kwaku-kwik}) has a simple description \cite[1.15.3]{BE2}. Assume that $Y=\operatorname{Spec} (A/J)$, where $A$ is a $p$-adic algebra and $J$ is a PD-ideal in $A$, and that we have a PD-thickening $i:Y\hookrightarrow T=\operatorname{Spf} (A)$. Let $\lambda_{l,n}$ be the preimage of $l$ under the map $\Gamma(T_n,M_{T_n})\to i_*\Gamma(Y,M_Y)$. It is a trivial $(1+J_n)^{\times}$-torsor. Set $\lambda_A:= \varprojlim_n\Gamma(T_n,\lambda_{l,n})$. It is a $(1+J)^{\times}$-torsor. Let $\tau_{A_{\mathbf Q}}$ be the {\em Fontaine-Hyodo-Kato} torsor: the $A_{\mathbf Q}$-torsor obtained from $\lambda_A$ by pushout along $(1+J)^{\times}\stackrel{\log}{\to}J\to A_{\mathbf Q}$. We call the ${\mathbb G}_a$-torsor $\operatorname{Spec} A_{\mathbf Q}^{\tau}$ over $\operatorname{Spec} A_{\mathbf Q}$ with sections $\tau_{A_{\mathbf Q}}$ by the same name. Denote by $N_{\tau}$ the $A_{\mathbf Q}$-derivation of $A_{\mathbf Q}^{\tau}$ given by the action of the generator of $\operatorname{Lie}{\mathbb G}_a$. Let $M$ be a $(\varphi, N)$-module. Integrating the action of the monodromy $N_M$ we get an action of the group ${\mathbb G}_a$ on $M$. Denote by $M^{\tau}_{A_{\mathbf Q}}$ the $\tau_{A_{\mathbf Q}}$-twist of $M_{A_{\mathbf Q}}:=M\otimes_{K_0}A_{\mathbf Q}$. It can be represented as the module of maps $v:\tau_{A_{\mathbf Q}}\to M_{A_{\mathbf Q}}$ that are $A_{{\mathbf Q}}$-equivariant, i.e., such that $v(\tau+a)=\exp(aN)(v(\tau))$, $\tau\in \tau_{A_{\mathbf Q}}$, $a\in A_{\mathbf Q}$. We can also write $$M^{\tau}_{A_{\mathbf Q}}=(M\otimes_{K_0}A^{\tau}_{\mathbf Q})^{{\mathbb G}_a}=(M\otimes_{K_0}A^{\tau}_{\mathbf Q})^{N=0}, $$ where $N:=N_M\otimes 1+ 1\otimes N_{\tau}$. Now, by definition, \begin{equation} \label{isom} \varepsilon_f(M)(Y,T)=M^{\tau}_{A_{\mathbf Q}} \end{equation} The algebra $A^{\tau}_{\mathbf Q}$ has a concrete description. Take the natural map $a:\tau_{A_{\mathbf Q}}\to A^{\tau}_{\mathbf Q}$ of $A_{\mathbf Q}$-torsors which maps $\tau\in \tau_{A_{\mathbf Q}}$ to a function $a({\tau})\in A^{\tau}_{\mathbf Q}$ whose value on any $\tau'\in \tau_{A_{\mathbf Q}}$ is $\tau-\tau'\in A_{\mathbf Q}$.
This map is compatible with the logarithm $\log: (1+J)^{\times}\to A$. The algebra $A^{\tau}_{\mathbf Q}$ is freely generated over $A_{\mathbf Q}$ by $a({\tau})$ for any $\tau\in\tau_{A_{\mathbf Q}}$; the $A_{\mathbf Q}$-derivation $N_{\tau}$ is defined by $N_{\tau}(a({\tau}))=-1$. That is, for a chosen $\tau\in\tau_{A_{\mathbf Q}}$, we can write \begin{align*} A^{\tau}_{{\mathbf Q}} =A_{\mathbf Q}[a({\tau})],\quad N_{\tau}(a({\tau}))=-1. \end{align*} For every lifting $\varphi_T$ of Frobenius to $T$ we have $\varphi^*_T\lambda_A=\lambda_A^p$. Hence the Frobenius $\varphi_T$ extends canonically to a Frobenius $\varphi_{\tau}$ on $A_{\mathbf Q}^{\tau}$ in such a way that $N_\tau\varphi_{\tau}=p\varphi_{\tau} N_{\tau}$. The isomorphism (\ref{isom}) is compatible with Frobenius. \begin{example} \label{standard} As an example, consider the case when the pullback map $f^*:{\mathbf Q}=(M_{k^0}/k^*)^{\operatorname{gp}}\otimes{\mathbf Q}\stackrel{\sim}{\to} (\Gamma(Y,M_Y)/k^*)^{\operatorname{gp}}\otimes{\mathbf Q}$ is an isomorphism. We have a surjection $v: (\Gamma(T,M_T)/k^*)^{\operatorname{gp}}\otimes {\mathbf Q} \to {\mathbf Q}$ with kernel $\log: (1+J)^{\times}_{\mathbf Q}\stackrel{\sim}{\to} J_{\mathbf Q}=A_{{\mathbf Q}}$. We obtain an identification of $A_{\mathbf Q}$-torsors $\tau_{A_{\mathbf Q}}\simeq v^{-1}(1)$. Hence every non-invertible $t\in \Gamma(T,M_T)$ yields an element $t^{1/v(t)}\in v^{-1}(1)$ and a trivialization of $\tau_{A_{\mathbf Q}}$. For a fixed element $t^{1/v(t)}\in v^{-1}(1)$, we can write \begin{align*} A^{\tau}_{\mathbf Q} =A_{\mathbf Q}[a(t^{1/v(t)})], \quad N_{\tau}(a(t^{1/v(t)}))=-1. \end{align*} For a $(\varphi, N)$-module $M$, the twist $M^{\tau}_{A_{\mathbf Q}}$ can be trivialized \begin{align*} \beta_t: M\otimes_{K_0}A_{\mathbf Q} & \stackrel{\sim}{\to} M^{\tau}_{A_{\mathbf Q}}=(M\otimes_{K_0}A_{\mathbf Q}[a(t^{1/v(t)})])^{N=0} \\ m & \mapsto \exp(N_M(m) a(t^{1/v(t)})) \end{align*} For a different choice $t_1^{1/v(t_1)}\in v^{-1}(1)$, the two trivializations $\beta_t,\beta_{t_1}$ are related by the formula $$ \beta_{t_1}=\beta_t\exp(N_M(m)a(t_1,t)),\quad a(t_1,t)=a(t_1)/v(t_1)-a(t)/v(t). $$ \end{example} Consider the map $f:V^{\times}_1\to k^0$. By Theorem \ref{kk2}, we have the equivalences of dg categories \begin{align*} \varepsilon: & \quad {\mathcal{D}}_{\varphi,N}(K_0)^{eff}\stackrel{\sim}{\to}{\mathcal{D}}^{pcr}_{\varphi}(k^0)^{nd}\otimes{\mathbf Q},\\ \varepsilon_{f}=Lf^*_{\operatorname{cr}}\varepsilon: & \quad {\mathcal{D}}_{\varphi,N}(K_0)^{eff}\stackrel{\sim}{\to}{\mathcal{D}}^{pcr}_{\varphi}(V_1^{\times})^{nd}\otimes{\mathbf Q}. \end{align*} Let $Z_1\to V_1^{\times}$ be a log-smooth map of Cartier type with $Z_1$ fine and proper over $V_1$. By Theorem \ref{kk1}, $Rf_{\operatorname{cr} *}({\mathcal O}_{Z_1/{\mathbf Z}_p})$ is a non-degenerate perfect F-crystal on $V_{1,\operatorname{cr}}$. Set $$ \mathrm {R} \Gamma^{B}_{\mathrm{HK}}(Z_1):=\varepsilon^{-1}_{f}Rf_{\operatorname{cr} *}({\mathcal O}_{Z_1/{\mathbf Z}_p})_{\mathbf Q} \in {\mathcal{D}}_{\varphi,N}(K_0). $$ We will call it the {\em Beilinson-Hyodo-Kato} complex \cite[1.16.1]{BE2}. \begin{example} \label{crucial} To get familiar with the Beilinson-Hyodo-Kato complexes we will work out some examples. \begin{enumerate} \item Let $g:X\to V^{\times}$ be a log-smooth log-scheme, proper, and of Cartier type.
Adjunction yields a quasi-isomorphism \begin{align} \label{adjunction} \varepsilon_f\mathrm {R} \Gamma^{B}_{\mathrm{HK}}(X_1)=\varepsilon_f\varepsilon^{-1}_{f}Rg_{\operatorname{cr} *}({\mathcal O}_{X_1/{\mathbf Z}_p})_{\mathbf Q}\stackrel{\sim}{\to} Rg_{\operatorname{cr} *}({\mathcal O}_{X_1/{\mathbf Z}_p})_{\mathbf Q}. \end{align} Evaluating it on the PD-thickening $V^{\times}_1\hookrightarrow V^{\times}$ (here $A=V$, $J=pV$, $l=\overline{p}$, $\lambda_V=p(1+J)^{\times}$, $\tau_K=p(1+J)^{\times}\times_{(1+J)^{\times}}K$), we get a map \begin{align*} \mathrm {R} \Gamma^{B}_{\mathrm{HK}}(X_1)^{\tau}_K & =\varepsilon_f\mathrm {R} \Gamma^{B}_{\mathrm{HK}}(X_1)(V^{\times}_1\hookrightarrow V^{\times}) \stackrel{\sim}{\to} Rg_{\operatorname{cr} *}({\mathcal O}_{X_1/{\mathbf Z}_p})(V^{\times}_1\hookrightarrow V^{\times})_{\mathbf Q}=\mathrm {R} \Gamma_{\operatorname{cr}}(X_1/V^{\times})_{\mathbf Q}\\ & \simeq \mathrm {R} \Gamma_{\operatorname{cr}}(X/V^{\times})_{\mathbf Q}\simeq \mathrm {R} \Gamma_{\mathrm{dR}}(X_K). \end{align*} We will call it the {\em Beilinson-Hyodo-Kato} map \cite[1.16.3]{BE2} \begin{equation} \label{HK1} \iota^B_{\mathrm{dR}}:\quad \mathrm {R} \Gamma_{\mathrm{HK}}^B(X_1)^{\tau}_K\stackrel{\sim}{\to}\mathrm {R} \Gamma_{\mathrm{dR}}(X_K). \end{equation} Recall that $$ \mathrm {R} \Gamma_{\mathrm{HK}}^B(X_1)^{\tau}_K=(\mathrm {R} \Gamma_{\mathrm{HK}}^B(X_1)\otimes_{K_0}K[a(\tau)])^{N=0},\quad \tau\in\tau_K. $$ This makes it clear that the Beilinson-Hyodo-Kato map is not only functorial for log-schemes over $V^{\times}$ but, by Theorem \ref{kk2}, it is also compatible with base change of $V^{\times}$. Moreover, if we use the canonical trivialization by $p$ \begin{align*} \beta=\beta_p:\quad \mathrm {R} \Gamma_{\mathrm{HK}}^B(X_1)_K & \stackrel{\sim}{\to}\mathrm {R} \Gamma^B_{\mathrm{HK}}(X_1)_K^{\tau} =(\mathrm {R} \Gamma_{\mathrm{HK}}^B(X_1)\otimes_{K_0}K[a(p)])^{N=0}\\ & x\mapsto \exp(N(x)a(p)) \end{align*} we get that the composition (which we also call the Beilinson-Hyodo-Kato map and denote by $\iota^B_{\mathrm{dR}}$) $$ \iota^B_{\mathrm{dR}}=\iota^B_{\mathrm{dR}}\beta:\quad \mathrm {R} \Gamma_{\mathrm{HK}}^B(X_1)\to \mathrm {R} \Gamma_{\mathrm{dR}}(X_K) $$ is functorial and compatible with base change. \item Evaluating the map (\ref{adjunction}) on the PD-thickening $V^{\times}_1\hookrightarrow E$ associated to a uniformizer $\pi$ (here $A=R$, $l=\overline{p}$), we get a map \begin{equation} \label{kappar} \kappa_R:\quad \mathrm {R} \Gamma^{B}_{\mathrm{HK}}(X_1)^{\tau}_{R_{\mathbf Q}} \stackrel{\sim}{\to} \mathrm {R} \Gamma_{\operatorname{cr}}(X/R)_{\mathbf Q} \end{equation} as the composition \begin{align*} \mathrm {R} \Gamma^{B}_{\mathrm{HK}}(X_1)^{\tau}_{R_{\mathbf Q}} & =\varepsilon_f\mathrm {R} \Gamma^{B}_{\mathrm{HK}}(X_1)(V^{\times}_1\hookrightarrow E) \stackrel{\sim}{\to} Rg_{\operatorname{cr} *}({\mathcal O}_{X_1/{\mathbf Z}_p})(V^{\times}_1\hookrightarrow E)_{\mathbf Q}=\mathrm {R} \Gamma_{\operatorname{cr}}(X_1/R)_{\mathbf Q}\\ & \simeq \mathrm {R} \Gamma_{\operatorname{cr}}(X/R)_{\mathbf Q}. \end{align*} We have $$ \mathrm {R} \Gamma_{\mathrm{HK}}^B(X_1)^{\tau}_{R_{\mathbf Q}}=(\mathrm {R} \Gamma_{\mathrm{HK}}^B(X_1)\otimes_{K_0}R_{{\mathbf Q}}[a(\tau)])^{N=0},\quad \tau\in\tau_{R_{\mathbf Q}}. $$ Since the map $\kappa_R$ is compatible with the log-connection on $R$, it is also compatible with the normalized monodromy operators.
Specifically, if we define the monodromy on the left hand side of (\ref{kappar}) as \begin{align*} N:\quad \mathrm {R} \Gamma^{B}_{\mathrm{HK}}(X_1)^{\tau}_{R_{\mathbf Q}} & \to \mathrm {R} \Gamma^{B}_{\mathrm{HK}}(X_1)^{\tau}_{R_{\mathbf Q}},\\ \sum_I m_{\tau_I}\otimes r_{\tau_I}a^{k_I}(\tau_I) & \mapsto \sum_I (N_M(m_{\tau_I})\otimes r_{\tau_I}a^{k_I}(\tau_I) + m_{\tau_I}\otimes N_R(r_{\tau_I})a^{k_I}(\tau_I)) \end{align*} then the two operators correspond under the map $\kappa_R$. The exact immersion $i_{\pi}: V^{\times}\hookrightarrow E$ yields a commutative diagram $$ \xymatrix{ \mathrm {R} \Gamma^{B}_{\mathrm{HK}}(X_1)^{\tau}_{R_{\mathbf Q}}\ar[r]^{\sim}\ar[d]^{i^*_{\pi}} & \mathrm {R} \Gamma_{\operatorname{cr}}(X/R)_{\mathbf Q}\ar[d]^{i^*_{\pi}}\\ \mathrm {R} \Gamma^{B}_{\mathrm{HK}}(X_1)^{\tau}_{K}\ar[r]^{\sim} & \mathrm {R} \Gamma_{\operatorname{cr}}(X/V^{\times})_{\mathbf Q} } $$ If $p=u\pi^{e_K}$, $u\in V^{\times}$, we have $\lambda_R=\tilde{u}t_{\pi}^{e_K}(1+J)^{\times}$, where $\tilde{u}\in R$ is a lift of $u$. Alternatively, $\lambda_R=[\overline{u}]t_{\pi}^{e_K}(1+J)^{\times}$. We have the associated trivialization \begin{align*} \beta_{\pi}:\quad \mathrm {R} \Gamma_{\mathrm{HK}}^B(X_1)\otimes_{K_0}R_{{\mathbf Q}} & \stackrel{\sim}{\to}\mathrm {R} \Gamma_{\mathrm{HK}}^B(X_1)^{\tau}_{R_{\mathbf Q}}=(\mathrm {R} \Gamma_{\mathrm{HK}}^B(X_1)\otimes_{K_0}R_{{\mathbf Q}}[a(\tau_{\pi})])^{N=0}, \quad \tau_{\pi}:=[\overline{u}]t^{e_K}_{\pi},\\ & x\mapsto \exp(N(x)a(\tau_{\pi})) \end{align*} \item Consider the log-scheme $k^0_1$: the scheme $\operatorname{Spec}(k)$ with the log-structure induced by the exact closed immersion $i:k^0_1\hookrightarrow V^{\times}_1$. We have the commutative diagram $$ \xymatrix{ X_0\ar@{^{(}->}[r]^i\ar[d]^{g_0} & X_1\ar[d]^g\\ k^0_1\ar[rd]_{f_0}\ar@{^{(}->}[r]^i & V^{\times}_1\ar[d]^f\\ & k^0 } $$ The morphisms $f, f_0$ map $\overline{p}$ to $\overline{p}$. By log-smooth base change we have a canonical quasi-isomorphism $Li^*Rg_{\operatorname{cr} *}({\mathcal O}_{X_1/{{\mathbf Z}_p}})\simeq Rg_{0\operatorname{cr} *}({\mathcal O}_{X_0/{{\mathbf Z}_p}})$. By Theorem \ref{kk2} we have the equivalence of dg categories $$ \varepsilon_{f_0}: \quad {\mathcal{D}}_{\varphi,N}(K_0)^{eff}\stackrel{\sim}{\to}{\mathcal{D}}^{pcr}_{\varphi}(k^0_1)^{nd}\otimes{\mathbf Q},\quad \varepsilon_{f_0}=Li^*\varepsilon_f. $$ This implies the natural quasi-isomorphisms \begin{align*} \mathrm {R} \Gamma_{\mathrm{HK}}^B(X_1) & =\varepsilon^{-1}_fRg_{\operatorname{cr} *}({\mathcal O}_{X_1/{{\mathbf Z}_p}})_{\mathbf Q} \simeq \varepsilon^{-1}_{f_0}Li^*Rg_{\operatorname{cr} *}({\mathcal O}_{X_1/{{\mathbf Z}_p}})_{\mathbf Q}\\ & \simeq \varepsilon_{f_0}^{-1}Rg_{0\operatorname{cr} *}({\mathcal O}_{X_0/{{\mathbf Z}_p}})_{\mathbf Q}. \end{align*} Hence, by adjunction, $$ \varepsilon_{f_0}\mathrm {R} \Gamma_{\mathrm{HK}}^B(X_1) =\varepsilon_{f_0}\varepsilon_{f_0}^{-1}Rg_{0\operatorname{cr} *}({\mathcal O}_{X_0/{{\mathbf Z}_p}})_{\mathbf Q}\simeq Rg_{0\operatorname{cr} *}({\mathcal O}_{X_0/{{\mathbf Z}_p}})_{\mathbf Q}. $$ We will evaluate both sides on the PD-thickening $k^0_1\hookrightarrow W(k)^0$. Here we write the log-structure on $W(k)^0$ as the one associated to the map $\Gamma(V^{\times},M_{V^{\times}})\to k\to W(k)$, $a\mapsto \overline{a}$.
We take $A=W(k)$, $l=\overline{p}$, $J=pW(k)$, $\lambda_{W(k)}=\overline{p}(1+pW(k))^{\times}$, $\tau_{K_0}=\overline{p}(1+pW(k))^{\times}\times_{(1+pW(k))^{\times}}K_0$. We get a quasi-isomorphism $$\kappa:\quad \mathrm {R} \Gamma^{B}_{\mathrm{HK}}(X_1)^{\tau}_{K_0} \stackrel{\sim}{\to} \mathrm {R} \Gamma_{\mathrm{HK}}(X)_{\mathbf Q} $$ as the composition \begin{align*} \mathrm {R} \Gamma_{\mathrm{HK}}^B(X_1)^{\tau}_{K_0} & =\varepsilon_{f_0}\mathrm {R} \Gamma_{\mathrm{HK}}^B(X_1) (k^0_1\hookrightarrow W(k)^0)\simeq Rg_{0\operatorname{cr} *}({\mathcal O}_{X_0/{{\mathbf Z}_p}})(k^0_1\hookrightarrow W(k)^0)_{\mathbf Q}\\ & =\mathrm {R} \Gamma_{\operatorname{cr}}(X_0/W(k)^0)_{\mathbf Q}=\mathrm {R} \Gamma_{\mathrm{HK}}(X)_{\mathbf Q}. \end{align*} To compare the monodromy operators on both sides of the map $\kappa$, note that, by Theorem \ref{kk2}, we have the canonical identification $$Rg_{0\operatorname{cr} *}({\mathcal O}_{X_0/{{\mathbf Z}_p}})_{\mathbf Q}\simeq \varepsilon_{f_0}(\mathrm {R} \Gamma_{\mathrm{HK}}^B(X_1),N) \simeq \varepsilon_{\overline{p}}(\mathrm {R} \Gamma_{\mathrm{HK}}^B(X_1), e_KN). $$ Hence, from the description of the Hyodo-Kato monodromy in \cite[3.6]{HK}, it follows easily that the map $\kappa$ pairs the operator $N$ on $\mathrm {R} \Gamma_{\mathrm{HK}}^B(X_1)^{\tau}_{K_0}$ defined by $$N(\sum_I m_{\tau_I}\otimes r_{\tau_I}a^{k_I}(\tau_I))=\sum_I (N_M(m_{\tau_I})\otimes r_{\tau_I}a^{k_I}(\tau_I) + m_{\tau_I}\otimes N_R(r_{\tau_I})a^{k_I}(\tau_I)) $$ with the normalized Hyodo-Kato monodromy on $\mathrm {R} \Gamma_{\mathrm{HK}}(X)_{\mathbf Q}$. Composing the map $\kappa$ with the trivialization \begin{align*} \beta=\beta_{p}:\quad \mathrm {R} \Gamma_{\mathrm{HK}}^B(X_1) & \stackrel{\sim}{\to}\mathrm {R} \Gamma_{\mathrm{HK}}^B(X_1)^{\tau}_{K_0} =(\mathrm {R} \Gamma_{\mathrm{HK}}^B(X_1)[a(\overline{p})])^{N=0}\\ & x\mapsto \exp(N(x)a(\overline{p})) \end{align*} we get a quasi-isomorphism between the Beilinson-Hyodo-Kato complexes and the (classical) Hyodo-Kato complexes \begin{align} \label{B=K} \kappa=\kappa\beta:\quad \mathrm {R} \Gamma_{\mathrm{HK}}^B(X_1)\stackrel{\sim}{\to}\mathrm {R} \Gamma_{\mathrm{HK}}(X)_{\mathbf Q}. \end{align} The trivialization above is compatible with Frobenius and the normalized monodromy; hence so is the quasi-isomorphism (\ref{B=K}). It is clearly functorial and, by Theorem \ref{kk2}, compatible with base change. By functoriality (Theorem \ref{kk2}), the morphism of PD-thickenings (an exact closed immersion) $i_0: (k^0_1\hookrightarrow W(k)^0)\hookrightarrow (V^{\times}_1\hookrightarrow R)$ yields the right square in the following diagram \begin{equation} \label{infinity-category} \xymatrix{ \mathrm {R} \Gamma_{\mathrm{HK}}(X)_{\mathbf Q}\ar[r]^{\iota_{\pi}} & \mathrm {R} \Gamma_{\operatorname{cr}}(X_1/R)_{\mathbf Q} \ar[r]^{i^*_0} & \mathrm {R} \Gamma_{\mathrm{HK}}(X)_{\mathbf Q}\\ \mathrm {R} \Gamma_{\mathrm{HK}}^B(X_1)^{\tau}_{K_0}\ar[u]^{\wr}_{\kappa}\ar[r]^{\iota_{\pi}} & \mathrm {R} \Gamma_{\mathrm{HK}}^B(X_1)^{\tau}_{R_{\mathbf Q}}\ar[u]^{\wr}_{\kappa_R}\ar[r]^{i^*_0} & \mathrm {R} \Gamma_{\mathrm{HK}}^B(X_1)^{\tau}_{K_0}\ar[u]^{\wr}_{\kappa} } \end{equation} In the left square the bottom map $\iota_{\pi}$ is induced by the natural map $K_0\to R$ and by sending $a(\overline{p})\mapsto a(\tau_{\pi})$. It is a (right) section to $i_0^*$ and it (together with the vertical maps) commutes with Frobenius.
By uniqueness of the top map $\iota_{\pi}$, this makes the left square commute in the classical derived category (of abelian groups). It is easy to check that we have the following commutative diagram $$ \xymatrix{ \mathrm {R} \Gamma_{\mathrm{HK}}^B(X_1)^{\tau}_{K_0}\ar[r]^{\iota_{\pi}} & \mathrm {R} \Gamma_{\mathrm{HK}}^B(X_1)^{\tau}_{R_{\mathbf Q}}\ar[r]^{i_{\pi}^*} & \mathrm {R} \Gamma_{\mathrm{HK}}^B(X_1)^{\tau}_K\\ \mathrm {R} \Gamma_{\mathrm{HK}}^B(X_1)\ar[u]^{\beta_p}_{\wr}\ar[rr]^{\operatorname{can}} & & \mathrm {R} \Gamma^B_{\mathrm{HK}}(X_1)_K\ar[u]^{\beta_p}_{\wr} } $$ and that the composition of the maps on top of it is equal to the map induced by the canonical map $K_0\to K$ and the map $\lambda_{W(k)} \to \lambda_V$, $\overline{p}\mapsto p$. \end{enumerate} Combining the commutative diagrams in parts (2) and (3) of this example we get the following commutative diagram. $$ \xymatrix{ \mathrm {R} \Gamma_{\mathrm{HK}}(X)\ar[r]^{\iota_{\pi}} & \mathrm {R} \Gamma_{\operatorname{cr}}(X_1/R)_{\mathbf Q}\ar[r]^{i_{\pi}^*} & \mathrm {R} \Gamma_{\operatorname{cr}}(X_1/V^{\times})_{\mathbf Q}\\ \mathrm {R} \Gamma^B_{\mathrm{HK}}(X_1)^{\tau}_{K_0} \ar[u]^{\wr}_{\kappa}\ar[r]^{\iota_{\pi}} & \mathrm {R} \Gamma^B_{\mathrm{HK}}(X_1)_{R_{\mathbf Q}}^{\tau} \ar[r]^{i^*_{\pi}}\ar[u]^{\wr}_{\kappa_R} & \mathrm {R} \Gamma^B_{\mathrm{HK}}(X_1)_K^{\tau}\ar[u]^{\wr}_{\iota^B_{\mathrm{dR}}}\\ \mathrm {R} \Gamma_{\mathrm{HK}}^B(X_1)\ar[u]^{\beta_p}_{\wr}\ar[rr]^{\operatorname{can}} & & \mathrm {R} \Gamma^B_{\mathrm{HK}}(X_1)_K\ar[u]^{\beta_p}_{\wr} } $$ Since the composition of the top maps is equal to the Hyodo-Kato map $\iota_{\mathrm{dR}}$ and the bottom map is just the canonical map $\mathrm {R} \Gamma^B_{\mathrm{HK}}(X_1)\to \mathrm {R} \Gamma^B_{\mathrm{HK}}(X_1)_K$, we obtain that the Hyodo-Kato and the Beilinson-Hyodo-Kato maps are related by a natural quasi-isomorphism, i.e., that the following diagram commutes. \begin{equation} \label{Beilinson=HK} \xymatrix{ \mathrm {R} \Gamma_{\mathrm{HK}}(X)\ar[r]^{\iota_{\mathrm{dR},\pi}} & \mathrm {R} \Gamma_{\mathrm{dR}}(X_{K})\\ \mathrm {R} \Gamma^B_{\mathrm{HK}}(X_1)\ar[ru]_{\iota_{\mathrm{dR}}^B}\ar[u]^{\wr}_{\kappa} } \end{equation} \end{example} The above examples can be generalized \cite[1.16]{BE2}. It turns out that the relative crystalline cohomology of all the base changes of the map $f$ can be described using the Beilinson-Hyodo-Kato complexes \cite[1.16.2]{BE2}. Namely, let $\theta:Y\to V^{\times}_1$ be an affine log-scheme and let $T$ be a $p$-adic PD-thickening of $Y$, $T=\operatorname{Spf}(A)$, $Y=\operatorname{Spec}(A/J)$. Denote by $f_Y:Z_{1Y}\to Y$ the $\theta$-pullback of $f$. Beilinson proves the following theorem \cite[1.16.2]{BE2}. \begin{theorem} \label{Bthm} \begin{enumerate} \item The $A$-complex $\mathrm {R} \Gamma_{\operatorname{cr}}(Z_{1Y}/T,{\mathcal O}_{Z_{1Y/T}})$ is perfect, and one has $$ \mathrm {R} \Gamma_{\operatorname{cr}}(Z_{1Y}/T_n,{\mathcal O}_{Z_{1Y/T_n}})=\mathrm {R} \Gamma_{\operatorname{cr}}(Z_{1Y}/T,{\mathcal O}_{Z_{1Y/T}})\otimes^{L}{\mathbf Z}/p^n. $$ \item There is a canonical Beilinson-Hyodo-Kato quasi-isomorphism of $A_{\mathbf Q}$-complexes \begin{equation*} \kappa_{A_{\mathbf Q}}^B: \mathrm {R} \Gamma^B_{\mathrm{HK}}(Z_1)^{\tau}_{A_{\mathbf Q}}\stackrel{\sim}{\to}\mathrm {R} \Gamma_{\operatorname{cr}}(Z_{1Y}/T,{\mathcal O}_{Z_{1Y/T}})_{\mathbf Q}. \end{equation*} If there is a Frobenius lifting $\varphi_T$, then $\kappa^B_{A_{\mathbf Q}}$ commutes with its action.
\end{enumerate} \end{theorem} \subsection{Log-syntomic cohomology} We will now study (rational) log-syntomic cohomology. Let $(U,\overline{U})$ be log-smooth over $V^{\times}$. For $r\geq 0$, define the mod $p^n$, completed, and rational log-syntomic complexes \begin{align} \label{log-syntomic} \mathrm {R} \Gamma_{\operatorname{syn}}(U,\overline{U},r)_n & := \operatorname{Cone}(\mathrm {R} \Gamma_{\operatorname{cr}}(U,\overline{U},{\mathcal J}^{[r]})_n \xrightarrow{p^r-\varphi} \mathrm {R} \Gamma_{\operatorname{cr}}(U,\overline{U})_n)[-1],\\ \mathrm {R} \Gamma_{\operatorname{syn}}(U,\overline{U},r) & :=\operatorname{holim}_n\mathrm {R} \Gamma_{\operatorname{syn}}(U,\overline{U},r)_n,\notag\\ \mathrm {R} \Gamma_{\operatorname{syn}}(U,\overline{U},r)_{\mathbf Q} & := \operatorname{Cone}(\mathrm {R} \Gamma_{\operatorname{cr}}(U,\overline{U},{\mathcal J}^{[r]})_{\mathbf Q} \xrightarrow{1-\varphi_r} \mathrm {R} \Gamma_{\operatorname{cr}}(U,\overline{U})_{\mathbf Q})[-1].\notag \end{align} Here the Frobenius $\varphi$ is defined by the composition \begin{align*} \varphi: \mathrm {R} \Gamma_{\operatorname{cr}}(U,\overline{U},{\mathcal J}^{[r]})_n & \to \mathrm {R} \Gamma_{\operatorname{cr}}(U,\overline{U})_n \stackrel{\sim}{\to}\mathrm {R} \Gamma_{\operatorname{cr}}((U,\overline{U})_1/W(k))_n \stackrel{\varphi}{\to} \mathrm {R} \Gamma_{\operatorname{cr}}((U,\overline{U})_1/W(k))_n\\ & \stackrel{\sim}{\leftarrow}\mathrm {R} \Gamma_{\operatorname{cr}}(U,\overline{U})_n \end{align*} and $\varphi_r:=\varphi/p^r$. The mapping fibers are taken in the $\infty$-derived category of abelian groups. The direct sums $$ \bigoplus_{r\geq 0}\mathrm {R} \Gamma_{\operatorname{syn}}(U,\overline{U},r)_n,\quad \bigoplus_{r\geq 0} \mathrm {R} \Gamma_{\operatorname{syn}}(U,\overline{U},r),\quad \bigoplus_{r \geq 0} \mathrm {R} \Gamma_{\operatorname{syn}}(U,\overline{U},r)_{\mathbf Q} $$ are graded $E_{\infty}$ algebras over ${\mathbf Z}/p^n$, ${\mathbf Z}_p$, and ${\mathbf Q}_p$, respectively \cite[1.6]{HS}. The rational log-syntomic complexes are moreover graded commutative dg algebras over ${\mathbf Q}_p$ \cite[4.1]{HS}, \cite[3.22]{MG}, \cite{Lu2}. An explicit definition of the syntomic product structure can be found in \cite[2.2]{Ts}. We have $ \mathrm {R} \Gamma_{\operatorname{syn}}(U,\overline{U},r)_n\simeq \mathrm {R} \Gamma_{\operatorname{syn}}(U,\overline{U},r)\otimes^L{\mathbf Z}/p^n. $ There is a canonical quasi-isomorphism of graded $E_{\infty}$ algebras \begin{align*} \mathrm {R} \Gamma_{\operatorname{syn}}(U,\overline{U},r)_n \stackrel{\sim}{\to}\operatorname{Cone}(\mathrm {R} \Gamma_{\operatorname{cr}}(U,\overline{U})_n \xrightarrow{(p^r-\varphi,\operatorname{can})} \mathrm {R} \Gamma_{\operatorname{cr}}(U,\overline{U})_n\oplus \mathrm {R} \Gamma_{\operatorname{cr}}(U,\overline{U},{\mathcal O}/{\mathcal J}^{[r]})_n)[-1]. \end{align*} Similarly in the completed and rational cases.
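Unwinding the mapping-fiber definition (\ref{log-syntomic}) on cohomology (this is merely a restatement of the definition, recorded for later reference; $H^i_{\operatorname{syn}}$ and $H^i_{\operatorname{cr}}$ denote the cohomology groups of the corresponding complexes), the rational log-syntomic cohomology groups fit into a long exact sequence $$ \cdots\to H^{i-1}_{\operatorname{cr}}(U,\overline{U})_{\mathbf Q}\to H^{i}_{\operatorname{syn}}(U,\overline{U},r)_{\mathbf Q}\to H^{i}_{\operatorname{cr}}(U,\overline{U},{\mathcal J}^{[r]})_{\mathbf Q}\xrightarrow{1-\varphi_r} H^{i}_{\operatorname{cr}}(U,\overline{U})_{\mathbf Q}\to\cdots $$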
Since, by Corollary \ref{Langer}, there is a quasi-isomorphism $$ \gamma_r^{-1}:\quad \mathrm {R} \Gamma_{\operatorname{cr}}(U,\overline{U},{\mathcal O}/{\mathcal J}^{[r]})_{{\mathbf Q}}\stackrel{\sim}{\to}\mathrm {R} \Gamma_{\mathrm{dR}}(U,\overline{U}_{K})/F^r, $$ we have a particularly nice canonical description of rational log-syntomic cohomology \begin{align*} \mathrm {R} \Gamma_{\operatorname{syn}}(U,\overline{U},r)_{{\mathbf Q}} \stackrel{\sim}{\to} [\mathrm {R} \Gamma_{\operatorname{cr}}(U,\overline{U})_{{\mathbf Q}} \xrightarrow{(1-\varphi_r,\gamma_r^{-1})} \mathrm {R} \Gamma_{\operatorname{cr}}(U,\overline{U})_{{\mathbf Q}}\oplus \mathrm {R} \Gamma_{\mathrm{dR}}(U,\overline{U}_{K})/F^r], \end{align*} where square brackets stand for the mapping fiber. \begin{remark} In the above definition one can replace the map $1-\varphi_r$ with any polynomial map $P\in 1+XK[X]$ to obtain the analog of Besser's finite polynomial cohomology. This was studied in \cite{BLZ}. \end{remark} For arithmetic pairs $(U,\overline{U})$ that are log-smooth over $V^{\times}$ and of Cartier type this can be simplified further by using Hyodo-Kato complexes (cf. Proposition \ref{reduction1} below). To do that, consider the following sequence of maps of homotopy limits. The homotopy limits are taken in the $\infty$-derived category (to do that we define the maps $\iota_{\pi}$ by the zigzag from diagram (\ref{infinity-category})). We will describe the coherence data only when they are not obvious. \begin{align*} \mathrm {R} \Gamma_{\operatorname{syn}}(U,\overline{U},r)_{\mathbf Q} & \stackrel{\sim}{\to} \xymatrix@C=36pt{[\mathrm {R} \Gamma_{\operatorname{cr}}(U,\overline{U})_{\mathbf Q}\ar[r]^-{(1-\varphi_r,\gamma_r^{-1})} & \mathrm {R} \Gamma_{\operatorname{cr}}(U,\overline{U})_{\mathbf Q}\oplus \mathrm {R} \Gamma_{\mathrm{dR}}(U,\overline{U}_{K})/F^r ]}\\ & \stackrel{\sim}{\to} \left[\begin{aligned}{\xymatrix{\mathrm {R} \Gamma_{\operatorname{cr}}((U,\overline{U})/R)_{\mathbf Q}\ar[rr]^-{(1-\varphi_r,i^*_{\pi}\gamma_r^{-1})}\ar[d]^{N} && \mathrm {R} \Gamma_{\operatorname{cr}}((U,\overline{U})/R)_{\mathbf Q}\oplus \mathrm {R} \Gamma_{\mathrm{dR}}(U,\overline{U}_{K})/F^r \ar[d]^{(N,0)}\\ \mathrm {R} \Gamma_{\operatorname{cr}}((U,\overline{U})/R)_{\mathbf Q}\ar[rr]^-{1-\varphi_{r-1}} && \mathrm {R} \Gamma_{\operatorname{cr}}((U,\overline{U})/R)_{\mathbf Q}}}\end{aligned}\right]\\ & \stackrel{\iota_{\pi}}{\leftarrow} \left[\begin{aligned}\xymatrix@C=40pt{\mathrm {R} \Gamma_{\mathrm{HK}}(U,\overline{U})_{\mathbf Q}\ar[r]^-{(1-\varphi_r,\iota_{\mathrm{dR},\pi})}\ar[d]^{N} & \mathrm {R} \Gamma_{\mathrm{HK}}(U,\overline{U})_{\mathbf Q} \oplus \mathrm {R} \Gamma_{\mathrm{dR}}(U,\overline{U}_{K}) /F^r\ar[d]^{(N,0)}\\ \mathrm {R} \Gamma_{\mathrm{HK}}(U,\overline{U})_{\mathbf Q}\ar[r]^{1-\varphi_{r-1}} & \mathrm {R} \Gamma_{\mathrm{HK}}(U,\overline{U})_{\mathbf Q}}\end{aligned}\right] \end{align*} The first map was described above.
The second one is induced by the distinguished triangle $$\mathrm {R} \Gamma_{\operatorname{cr}}(U,\overline{U})\to \mathrm {R} \Gamma_{\operatorname{cr}}((U,\overline{U})/R)\stackrel{N}{\to}\mathrm {R} \Gamma_{\operatorname{cr}}((U,\overline{U})/R).$$ The third one is induced by the section $\iota_{\pi}: \mathrm {R} \Gamma_{\mathrm{HK}}(U,\overline{U})_{\mathbf Q}\to \mathrm {R} \Gamma_{\operatorname{cr}}((U,\overline{U})/R)_{\mathbf Q}$ (notice that $\iota_{\mathrm{dR},\pi}=\gamma_{r}^{-1}i^*_{\pi}\iota_{\pi}$). We will show below that the third map is a quasi-isomorphism. Set $C_{\operatorname{st}}(\mathrm {R} \Gamma_{\mathrm{HK}}(U,\overline{U})\{r\})$ equal to the last homotopy limit in the above diagram. \begin{proposition} \label{reduction1} Let $(U,\overline{U})$ be an arithmetic pair that is log-smooth over $V^{\times}$ and of Cartier type. Let $r\geq 0$. Then the above diagram defines a canonical quasi-isomorphism $$\alpha_{\operatorname{syn},\pi}:\quad \mathrm {R} \Gamma_{\operatorname{syn}}(U,\overline{U},r)_{\mathbf Q}\stackrel{\sim}{\to} C_{\operatorname{st}}(\mathrm {R} \Gamma_{\mathrm{HK}}(U,\overline{U})\{r\}). $$ \end{proposition} \begin{proof}We need to show that the map $\iota_{\pi}$ in the above diagram is a quasi-isomorphism. Define the complexes ($r\geq -1$) \begin{align*} \mathrm {R} \Gamma_{\operatorname{cr}}((U,\overline{U})/R,r):= & \operatorname{Cone}(\mathrm {R} \Gamma_{\operatorname{cr}}((U,\overline{U})/R)_{{\mathbf Q}}\stackrel{1-\varphi_r}{\longrightarrow}\mathrm {R} \Gamma_{\operatorname{cr}}((U,\overline{U})/R)_{{\mathbf Q}})[-1],\\ \mathrm {R} \Gamma_{\mathrm{HK}}(U,\overline{U},r):= & \operatorname{Cone}(\mathrm {R} \Gamma_{\mathrm{HK}}(U,\overline{U})_{{\mathbf Q}}\stackrel{1-\varphi_r}{\longrightarrow}\mathrm {R} \Gamma_{\mathrm{HK}}(U,\overline{U})_{{\mathbf Q}})[-1]. \end{align*} It suffices to prove that the following maps \begin{equation} \label{reduction} i^*_0: \mathrm {R} \Gamma_{\operatorname{cr}}((U,\overline{U})/R,r) \stackrel{\sim}{\to} \mathrm {R} \Gamma_{\mathrm{HK}}(U,\overline{U},r),\quad \iota_{\pi}: \mathrm {R} \Gamma_{\mathrm{HK}}(U,\overline{U},r) \stackrel{\sim}{\to} \mathrm {R} \Gamma_{\operatorname{cr}}((U,\overline{U})/R,r) \end{equation} are quasi-isomorphisms. Since $i^*_0\iota_{\pi}= \operatorname{Id}$, it suffices to show that the map $i^*_0$ is a quasi-isomorphism. Base-changing to $W(\overline{k})$, we may assume that the residue field of $V$ is algebraically closed. It suffices to show that, for $i\geq 0$, $t\geq -1$, in the commutative diagram $$ \begin{CD} H^i_{\mathrm{HK}}(U,\overline{U})_{{\mathbf Q}}@>p^t-\varphi >> H^i_{\mathrm{HK}}(U,\overline{U})_{{\mathbf Q}}\\ @AA i^*_0 A @AA i^*_0 A\\ H^i_{\operatorname{cr}}((U,\overline{U})/R)_{{\mathbf Q}}@>p^t-\varphi >>H^i_{\operatorname{cr}}((U,\overline{U})/R)_{{\mathbf Q}} \end{CD} $$ the vertical maps induce isomorphisms between the kernels and cokernels of the horizontal maps.
Since the $W(k)$-linear map $\iota_{\pi}$ commutes with $\varphi$ and its $R$-linear extension is a quasi-isomorphism $$\iota_{\pi}: R\otimes_{W(k)}\mathrm {R} \Gamma_{\mathrm{HK}}(U,\overline{U})_{{\mathbf Q}}\stackrel{\sim}{\to} \mathrm {R} \Gamma_{\operatorname{cr}}((U,\overline{U})/R)_{{\mathbf Q}},$$ it suffices to show that in the following commutative diagram $$ \begin{CD} H^i_{\mathrm{HK}}(U,\overline{U})_{{\mathbf Q}}@>p^t-\varphi >> H^i_{\mathrm{HK}}(U,\overline{U})_{{\mathbf Q}}\\ @AA i_0\otimes \operatorname{Id} A @AA i_0 \otimes \operatorname{Id} A \\ R\otimes_{W(k)}H^i_{\mathrm{HK}}(U,\overline{U})_{{\mathbf Q}}@>p^t-\varphi >> R\otimes_{W(k)}H^i_{\mathrm{HK}}(U,\overline{U})_{{\mathbf Q}}\end{CD} $$ the vertical maps induce isomorphisms between the kernels and cokernels of the horizontal maps. This will follow if we show that the following map $$I\otimes_{W(k)}H^i_{\mathrm{HK}}(U,\overline{U})_{{\mathbf Q}}\stackrel{p^t-\varphi}{\longrightarrow} I\otimes_{W(k)}H^i_{\mathrm{HK}}(U,\overline{U})_{{\mathbf Q}}, $$ for $I\subset R$ the kernel of the projection $i_0:R_{\mathbf Q}\to K_0$, $t_l\mapsto 0$, is an isomorphism. We argue as Langer in \cite[p. 210]{Ln}. Let $M:=H^i_{\mathrm{HK}}(U,\overline{U})/\mathrm{tor}$. It is a lattice in $H^i_{\mathrm{HK}}(U,\overline{U})_{{\mathbf Q}}$ that is stable under Frobenius. Consider the formal inverse $\psi:=\sum_{n\geq 0}(p^{-t}\varphi)^n$ of $1-p^{-t}\varphi$. It suffices to show that, for $y\in I\otimes_{W(k)}M$, we have $\psi(y)\in I\otimes_{W(k)}M$. Fix $l$ and let $T^{\{k\}}:=t_l^k/\lfloor k/e_K\rfloor !$. We will show that, for any $m\in M$, $\psi(T^{\{k\}}\otimes m)\in I\otimes_{W(k)}M$ and the infinite series converges uniformly in $k$. We have $$(p^{-t}\varphi)^n(T^{\{k\}}\otimes m )=\frac{\lfloor kp^n/e_K\rfloor !}{\lfloor k/e_K\rfloor !p^{tn}}T^{\{kp^n\}}\otimes m' $$ and $\operatorname{ord}_p(\lfloor kp^n/e_K\rfloor !/\lfloor k/e_K\rfloor !)\geq p^{n-1}$. Hence $\operatorname{ord}_p\Big(\frac{\lfloor kp^n/e_K\rfloor !}{\lfloor k/e_K\rfloor !\,p^{tn}}\Big)\geq p^{n-1}-tn\to\infty$, so this coefficient converges $p$-adically to zero, uniformly in $k$, as wanted. \end{proof} \begin{remark} It was Langer \cite[p. 193]{Ln} (cf. \cite[Lemma 2.13]{JS} in the good reduction case) who observed that while, in general, the crystalline cohomology $\mathrm {R} \Gamma_{\operatorname{cr}}(U,\overline{U})$ behaves badly (it is ``huge''), after taking ``filtered Frobenius eigenspaces'' we obtain syntomic cohomology $\mathrm {R} \Gamma_{\operatorname{syn}}(U,\overline{U},r)_{{\mathbf Q}}$ that behaves well (it is ``small''). In \cite[3.5]{JB} this phenomenon is explained by relating syntomic cohomology to the complex $C_{\operatorname{st}}(\mathrm {R} \Gamma_{\mathrm{HK}}(U,\overline{U})\{r\})$. \end{remark} \begin{remark} \label{reduction21} The construction of the map $\alpha_{\operatorname{syn},\pi}$ depends on the choice of the uniformizer $\pi$, which makes $h$-sheafification impossible. We will now show that there is a functorial quasi-isomorphism $\alpha'_{\operatorname{syn}}$, compatible with base change, between rational syntomic cohomology and certain complexes built from Hyodo-Kato cohomology and de Rham cohomology that $h$-sheafify well.
Set \begin{align*} \alpha'_{\operatorname{syn}}:\quad \mathrm {R} \Gamma_{\operatorname{syn}}(U,\overline{U},r)_{\mathbf Q} & \stackrel{\sim}{\to} [\mathrm {R} \Gamma_{\operatorname{cr}}(U,\overline{U},r)\lomapr{\gamma_r^{-1}} \mathrm {R} \Gamma_{\mathrm{dR}}(U,\overline{U}_{K})/F^r ]\\ & \stackrel{\beta}{\to} [\mathrm {R} \Gamma_{\mathrm{HK}}(U,\overline{U},r)^{N=0}\lomapr{\iota'_{\mathrm{dR}}} \mathrm {R} \Gamma_{\mathrm{dR}}(U,\overline{U}_{K}) /F^r] \end{align*} Here the two morphisms $\beta$ and $\iota'_{\mathrm{dR}}$ are defined as the following compositions \begin{align*} \beta:\quad & \mathrm {R} \Gamma_{\operatorname{cr}}(U,\overline{U},r)\stackrel{\sim}{\to} \mathrm {R} \Gamma_{\operatorname{cr}}(U_0,\overline{U}_0,r)\stackrel{\sim}{\to} \mathrm {R} \Gamma_{\mathrm{HK}}(U,\overline{U},r)^{N=0}\\ \iota'_{\mathrm{dR}}:\quad & \mathrm {R} \Gamma_{\mathrm{HK}}(U,\overline{U},r)^{N=0}\stackrel{\beta}{\leftarrow} \mathrm {R} \Gamma_{\operatorname{cr}}(U,\overline{U},r)\stackrel{\gamma_r^{-1}}{\to}\mathrm {R} \Gamma_{\mathrm{dR}}(U,\overline{U}_{K}), \end{align*} where $(\cdots)^{N=0}$ denotes the mapping fiber of the monodromy. The map $\beta$ is a quasi-isomorphism because so is each of the intermediate maps. To see this for the map $i^*_0: \mathrm {R} \Gamma_{\operatorname{cr}}(U,\overline{U},r)\to \mathrm {R} \Gamma_{\operatorname{cr}}(U_0,\overline{U}_0,r)$, consider the following factorization $$ F^m: \mathrm {R} \Gamma_{\operatorname{cr}}(U,\overline{U},r)\stackrel{i^*_0}{\to} \mathrm {R} \Gamma_{\operatorname{cr}}(U_0,\overline{U}_0,r) \stackrel{\psi_m}{\to}\mathrm {R} \Gamma_{\operatorname{cr}}(U,\overline{U},r) $$ of the $m$'th power of the Frobenius, where $m$ is large enough. We also have $i^*_0\psi_m=F^m$. Since Frobenius is a quasi-isomorphism on $\mathrm {R} \Gamma_{\operatorname{cr}}(U,\overline{U},r)$ and on $\mathrm {R} \Gamma_{\operatorname{cr}}(U_0,\overline{U}_0,r)$, both $i^*_0$ and $\psi_m$ are quasi-isomorphisms as well. The second morphism in the sequence defining $\beta$ is a quasi-isomorphism by an argument similar to the one we used in the proof of Proposition \ref{reduction1}. Define the complex $$ C'_{\operatorname{st}}(\mathrm {R} \Gamma_{\mathrm{HK}}(U,\overline{U})\{r\}):= [\mathrm {R} \Gamma_{\mathrm{HK}}(U,\overline{U},r)^{N=0}\lomapr{\iota'_{\mathrm{dR}}} \mathrm {R} \Gamma_{\mathrm{dR}}(U,\overline{U}_{K}) /F^r]. $$ We have obtained a quasi-isomorphism $$\alpha'_{\operatorname{syn}}:\quad \mathrm {R} \Gamma_{\operatorname{syn}}(U,\overline{U},r)_{\mathbf Q}\stackrel{\sim}{\to} C'_{\operatorname{st}}(\mathrm {R} \Gamma_{\mathrm{HK}}(U,\overline{U})\{r\}) $$ It is clearly functorial but it is also easy to check that it is compatible with base change (of the base $V$). \end{remark} Define the complex $$ C_{\operatorname{st}}(\mathrm {R} \Gamma^B_{\mathrm{HK}}(U,\overline{U})\{r\}):= [\mathrm {R} \Gamma^B_{\mathrm{HK}}(U_1,\overline{U}_1,r)^{N=0}\lomapr{\iota^{B}_{\mathrm{dR}}} \mathrm {R} \Gamma_{\mathrm{dR}}(U,\overline{U}_{K}) /F^r].
$$ From the commutative diagram (\ref{Beilinson=HK}) we obtain the natural quasi-isomorphisms \begin{align*} \gamma: & \quad C_{\operatorname{st}}(\mathrm {R} \Gamma^B_{\mathrm{HK}}(U,\overline{U})\{r\}) \stackrel{\sim}{\to}C_{\operatorname{st}}(\mathrm {R} \Gamma_{\mathrm{HK}}(U,\overline{U})\{r\}),\\ \alpha_{\operatorname{syn},\pi}^B:=\gamma^{-1}\alpha_{\operatorname{syn},\pi}: & \quad \mathrm {R} \Gamma_{\operatorname{syn}}(U,\overline{U},r)_{\mathbf Q} \stackrel{\sim}{\to}C_{\operatorname{st}}(\mathrm {R} \Gamma^B_{\mathrm{HK}}(U,\overline{U})\{r\}). \end{align*} We will now show that log-syntomic cohomology satisfies finite Galois descent. Let $(U,\overline{U})$ be a fine log-scheme, log-smooth over $V^{\times}$, and of Cartier type. Let $r\geq 0$. Let $K'$ be a finite Galois extension of $K$ and let $G=\operatorname{Gal}(K'/K)$. Let $(T,\overline{T})=(U\times_{V}{V'},\overline{U}\times_{V}{V'})$, where $V'$ is the ring of integers in $K'$, be the base change of $(U,\overline{U})$ to $(K',V')$, and let $f: (T,\overline{T})\to (U,\overline{U})$ be the canonical projection. Take $R=R_{V}$, $N$, $e$, $\pi$ associated to $V$. Similarly, we define $R':=R_{V'}$, $N'$, $e'$, $\pi'$. Write the map $\alpha^B_{\operatorname{syn},\pi}$ as $$ \xymatrix{ \mathrm {R} \Gamma_{\operatorname{syn}}(U,\overline{U},r)_{\mathbf Q}\ar[d]^{\alpha^B_{\operatorname{syn},\pi}}_{\wr}\ar[r]^-{\sim}_-h & [\mathrm {R} \Gamma^{B,\tau}_{\mathrm{HK}}((U,\overline{U})_{R},r)^{N=0}\ar[r]^{i^*_{\pi}} & \mathrm {R} \Gamma_{\mathrm{dR}}(U,\overline{U}_{K}) /F^r]\\ C_{\operatorname{st}}(\mathrm {R} \Gamma^B_{\mathrm{HK}}(U,\overline{U}) \{r\})\ar[r]^{\sim} & [\mathrm {R} \Gamma^{B}_{\mathrm{HK}}(U,\overline{U},r)^{N=0}\ar[r]^{\iota^B_{\mathrm{dR}}}\ar[u]_{\wr}^{\iota_{\pi}\beta} & \mathrm {R} \Gamma_{\mathrm{dR}}(U,\overline{U}_{K}) /F^r]\ar@{=}[u] } $$ Here we defined the map $h$ as the composition \begin{equation} \label{h} \mathrm {R} \Gamma_{\operatorname{syn}}(U,\overline{U},r)_{\mathbf Q}\to \mathrm {R} \Gamma_{\operatorname{cr}}((U,\overline{U})/R)_{\mathbf Q}\stackrel{\sim}{\leftarrow} \mathrm {R} \Gamma^B_{\mathrm{HK}}(U_1,\overline{U}_1)_{R_{\mathbf Q}}^{\tau} \end{equation} From the construction of the Beilinson-Hyodo-Kato map $\iota_{\mathrm{dR}}^B: \mathrm {R} \Gamma^{B}_{\mathrm{HK}}(T_1,\overline{T}_1)\to \mathrm {R} \Gamma_{\mathrm{dR}}(T,\overline{T}_{K'})$ it follows that it is $G$-equivariant; hence the complex $C_{\operatorname{st}}(\mathrm {R} \Gamma^B_{\mathrm{HK}}(T,\overline{T})\{r\})$ is equipped with a natural $G$-action.
We claim that the map $\alpha^B_{\operatorname{syn},\pi'}$ induces a natural map \begin{align*} \tilde{\alpha}^B_{\operatorname{syn},\pi'}: & \quad \mathrm {R} \Gamma(G,\mathrm {R} \Gamma_{\operatorname{syn}}(T,\overline{T},r)_{{\mathbf Q}})\to \mathrm {R} \Gamma(G,C_{\operatorname{st}}(\mathrm {R} \Gamma^B_{\mathrm{HK}}(T,\overline{T})\{r\})),\\ \tilde{\alpha}^B_{\operatorname{syn},\pi'} & := (1/|G|)\sum_{g\in G}\alpha^B_{\operatorname{syn},g(\pi')} \end{align*} To see this it suffices to show that, for every $g\in G$, we have a commutative diagram $$ \begin{CD} \mathrm {R} \Gamma_{\operatorname{syn}}(T,\overline{T},r)_{{\mathbf Q}}@>\alpha^B_{\operatorname{syn},\pi'}>> C_{\operatorname{st}}(\mathrm {R} \Gamma^B_{\mathrm{HK}}(T,\overline{T})\{r\})\\ @VV g^* V @VV g^* V\\ \mathrm {R} \Gamma_{\operatorname{syn}}(T,\overline{T},r)_{{\mathbf Q}}@>\alpha^B_{\operatorname{syn},g(\pi')}>> C_{\operatorname{st}}(\mathrm {R} \Gamma^B_{\mathrm{HK}}(T,\overline{T})\{r\}) \end{CD} $$ We accomplish this by constructing natural morphisms \begin{align*} g^*: \quad & \mathrm {R} \Gamma_{\operatorname{cr}}((T,\overline{T})/R'_{\pi'})\to \mathrm {R} \Gamma_{\operatorname{cr}}((T,\overline{T})/R'_{g(\pi')}),\\ g^*: \quad & \mathrm {R} \Gamma^B_{\mathrm{HK}}(T_1,\overline{T}_1)_{R'_{\pi'}}^{\tau}\to \mathrm {R} \Gamma^B_{\mathrm{HK}}(T_1,\overline{T}_1)_{R'_{g(\pi')}}^{\tau} \end{align*} that are compatible with the maps in (\ref{h}) that define $h$, the maps $\iota_?$ and $i^*_?$, and the trivialization $\beta$. We define the pullbacks $g^*$ from a map $g:R'_{\pi'}\to R'_{g(\pi')}$ constructed by lifting the action of $g$ from $V'_1$ to $R'$ by setting $g(t'_{\pi'})=t'_{g(\pi')}$ and taking the induced action of $g$ on $W(k')$. This map is compatible with Frobenius and monodromy. The induced pullbacks $g^*$ are clearly compatible with the map $i^*_0$ and the maps $\iota_?$, the maps $i^*_{\pi'}$, $i^*_{g(\pi')}$, and the trivialization $\beta$. From the construction of the Beilinson-Hyodo-Kato map, the pullbacks $g^*$ are also compatible with the maps $\kappa_{R'_?}$; hence with the map $h$, as wanted. \begin{proposition} \label{hypercov11} \begin{enumerate} \item The following diagram commutes in the (classical) derived category.
$$ \begin{CD} \mathrm {R} \Gamma_{\operatorname{syn}}(U,\overline{U},r)_{{\mathbf Q}} @> f^*>> \mathrm {R} \Gamma(G,\mathrm {R} \Gamma_{\operatorname{syn}}(T,\overline{T},r)_{{\mathbf Q}})\\ @VV\alpha^B_{\operatorname{syn},\pi}V @VV\tilde{\alpha}^B_{\operatorname{syn},\pi'}V\\ C_{\operatorname{st}}(\mathrm {R} \Gamma^B_{\mathrm{HK}}(U,\overline{U}) \{r\}) @> f^* >> \mathrm {R} \Gamma(G,C_{\operatorname{st}}(\mathrm {R} \Gamma^B_{\mathrm{HK}}(T,\overline{T})\{r\})) \end{CD} $$ \item The natural map $$f^*: \mathrm {R} \Gamma_{\operatorname{syn}}(U,\overline{U},r)_{{\mathbf Q}} \stackrel{\sim}{\to} \mathrm {R} \Gamma(G,\mathrm {R} \Gamma_{\operatorname{syn}}(T,\overline{T},r)_{{\mathbf Q}}) $$ is a quasi-isomorphism.\end{enumerate} \end{proposition} \begin{proof} The second claim of the proposition follows from the first one and the fact that the Hyodo-Kato and de Rham cohomologies satisfy finite Galois descent. Since everything in sight is functorial and satisfies finite unramified Galois descent, we may assume that the extension $K'/K$ is totally ramified. First, we will construct a $G$-equivariant (for the trivial action of $G$ on $R$) map $$ f^*: \mathrm {R} \Gamma_{\operatorname{cr}}((U,\overline{U})/R,r)^{N=0} \to \mathrm {R} \Gamma_{\operatorname{cr}}((T,\overline{T})/R',r)^{N'=0} $$ such that the following diagram commutes \begin{equation} \label{diag1} \begin{CD} \mathrm {R} \Gamma_{\operatorname{cr}}(U,\overline{U},r)@> f^*>> \mathrm {R} \Gamma_{\operatorname{cr}}(T,\overline{T},r)\\ @VV\wr V @VV\wr V\\ \mathrm {R} \Gamma_{\operatorname{cr}}((U,\overline{U})/R,r)^{N=0} @> f^*>> \mathrm {R} \Gamma_{\operatorname{cr}}((T,\overline{T})/R',r)^{N'=0}\\ @A\wr A\iota_{\pi} A @A\wr A\iota_{\pi'} A\\ \mathrm {R} \Gamma_{\mathrm{HK}}(U,\overline{U},r)^{N=0}@> f^* > \sim > \mathrm {R} \Gamma_{\mathrm{HK}}(T,\overline{T},r)^{N'=0} \end{CD} \end{equation} \begin{remark} Note that the bottom map is an isomorphism because $f^*$ acts trivially on the Hyodo-Kato complexes. The commutativity of the above diagram and the quasi-isomorphisms (\ref{reduction}) will imply that a totally ramified Galois extension does not change the log-crystalline complexes $\mathrm {R} \Gamma_{\operatorname{cr}}(U,\overline{U},r)$ and $\mathrm {R} \Gamma_{\operatorname{cr}}((U,\overline{U})/R,r)^{N=0}$. \end{remark} Let $e_1$ be the ramification index of $V'/V$. Set $v=(\pi')^{e_1}\pi^{-1}$, and choose an integer $s$ such that $(\pi')^{p^s}\in pV'$. Set $T:=t_{\pi}$, $T':=t_{\pi'}$, and define the morphism $a: R\to R'$ by $T\mapsto (T')^{e_1}[\overline{v}]^{-1}$. Since $V_{1}$ and $V'_1$ are defined by $pR+T^{e}R$ and by $pR'+(T')^{e'}R'$, respectively, $a$ induces a morphism $a_1: V_{1}\to V'_1$. We have $F^sa_1=F^sf_1$, where $F$ is the absolute Frobenius on $\operatorname{Spec}(V_{1})$. Notice that in general $f_1\neq a_1$ if $v[\overline{v}]^{-1}\not\equiv 1 \mod pV'$.
The morphism $\varphi_R^sa: \operatorname{Spec}(R')\to \operatorname{Spec}(R)$ is compatible with $F^sf_1:\operatorname{Spec}(V'_1)\to\operatorname{Spec}(V_{1})$ and it commutes with the operators $N$ and $p^sN'$. We have the following commutative diagram $$ \xymatrix{ (T,\overline{T})_1\ar[rr]^-{F^sf_1} \ar[d] && (U,\overline{U})_1\ar[d]\\ \operatorname{Spec}(V'_1)\ar[rr] ^-{F^sa_1=F^sf_1}\ar[d]&& \operatorname{Spec}(V_{1})\ar[d]\\ \operatorname{Spec}(R') \ar[rr] ^-{\varphi_R^sa}&& \operatorname{Spec}(R) } $$ Hence we obtain the commutative diagram of distinguished triangles \begin{equation} \label{brrrr} \begin{CD} \mathrm {R} \Gamma_{\operatorname{cr}}(U,\overline{U})_{\mathbf Q} @>>> \mathrm {R} \Gamma_{\operatorname{cr}}((U,\overline{U})/R)_{\mathbf Q}@> eN >> \mathrm {R} \Gamma_{\operatorname{cr}}((U,\overline{U})/R)_{\mathbf Q}\\ @VV f^*F^s V @VV f^*F^sV @VV p^se_1f^*F^sV \\ \mathrm {R} \Gamma_{\operatorname{cr}}(T,\overline{T})_{\mathbf Q}@>>> \mathrm {R} \Gamma_{\operatorname{cr}}((T,\overline{T})/R')_{\mathbf Q}@> e'N' >> \mathrm {R} \Gamma_{\operatorname{cr}}((T,\overline{T})/R')_{\mathbf Q}. \end{CD} \end{equation} To see how this diagram arises we may assume (by the usual \v{C}ech argument) that we have a fine affine log-scheme $X_n/V^{\times}_n$ that is log-smooth over $V^{\times}_n$. We can also assume that we have a lifting $X_n\hookrightarrow Z_n$ over $\operatorname{Spec}(W_n(k)[T])$ (with the log-structure coming from $T$) and a lifting of Frobenius $\varphi_Z$ on $Z_n$ that is compatible with the Frobenius $\varphi_R$. Recall \cite[Lemma 4.2]{Kas} that the horizontal distinguished triangles in the above diagram arise from an exact sequence of complexes of sheaves on $X_{n,\operatorname{\acute{e}t}}$ \begin{equation} \label{monodromyC} 0\to C_V'[-1] \xrightarrow{\wedge \operatorname{dlog} T} C_V\to C_V' \to 0 \end{equation} where $C_V:=R_n\otimes_{W_n(k)[T]}\Omega^{\cdot}_{Z_n/W_n(k)}$ and $C_V':=R_n\otimes_{W_n(k)[T]}\Omega^{\cdot}_{Z_n/W_n(k)[T]}$. Now consider the base change of $Z_n/W_n(k)[T]$ by the map $F^sa:\operatorname{Spec}(W_n(k)[T'])\to\operatorname{Spec}(W_n(k)[T])$ and the related complexes (\ref{monodromyC}). We get a commutative diagram of complexes of sheaves on $X_{n,\operatorname{\acute{e}t}}$ (note that $X_{V',n,\operatorname{\acute{e}t}}=X_{n,\operatorname{\acute{e}t}}$) $$ \begin{CD} 0@>>> C_{V'}'[-1] @>\wedge\operatorname{dlog} T' >>C_{V'} @>>> C_{V'}'@>>> 0\\ @. @AA p^se_1a^*\varphi_Z^sA @AA a^*\varphi_Z^s A @AA a^*\varphi_Z^s A @.\\ 0 @>>> C_V'[-1] @> \wedge\operatorname{dlog} T >>C_V @>>> C_V' @>>> 0 \end{CD} $$ Hence diagram (\ref{brrrr}).
Combining diagram (\ref{brrrr}) with Frobenius we obtain the following commutative diagram $$ \begin{CD} \mathrm {R} \Gamma_{\operatorname{cr}}(U,\overline{U},r) @< F^s << \mathrm {R} \Gamma_{\operatorname{cr}}(U,\overline{U},r)@> f^*F^s >> \mathrm {R} \Gamma_{\operatorname{cr}}(T,\overline{T},r)\\ @VV\wr V @VV\wr V @VV\wr V\\ \mathrm {R} \Gamma_{\operatorname{cr}}((U,\overline{U})/R,r)^{N=0} @<(F^s, p^sF^s) <<\mathrm {R} \Gamma_{\operatorname{cr}}((U,\overline{U})/R,r)^{N=0} @> (a^*F^s, p^sa^*F^s) >> \mathrm {R} \Gamma_{\operatorname{cr}}((T,\overline{T})/R',r)^{N'=0}\\ @V\wr V i_0^* V @V\wr V i_0^* V @V\wr V i_0^* V\\ \mathrm {R} \Gamma_{\mathrm{HK}}(U,\overline{U},r)^{N=0} @< (F^s,p^sF^s) <\sim <\mathrm {R} \Gamma_{\mathrm{HK}}(U,\overline{U},r)^{N=0} @> (F^s,p^sF^s) >\sim > \mathrm {R} \Gamma_{\mathrm{HK}}(T,\overline{T},r)^{N'=0} \end{CD}$$ It follows that all the maps in the above diagram are quasi-isomorphisms. We define the map $$ f^*: \mathrm {R} \Gamma_{\operatorname{cr}}((U,\overline{U})/R,r)^{N=0}\to \mathrm {R} \Gamma_{\operatorname{cr}}((T,\overline{T})/R',r)^{N'=0} $$ by the middle row. Since, for any $g\in G$, we have ${v}_{g(\pi')}=g({v}_{\pi'})$, the map $f^*$ is $G$-equivariant. In the (classical) derived category, this definition is independent of the constant $s$ we have chosen. Since $i^*_0$ is a quasi-isomorphism and $i^*_0\iota_{?}= \operatorname{Id}$, the diagram (\ref{diag1}) commutes as well, as wanted. We define the map \begin{equation} \label{BHK} f^*: \mathrm {R} \Gamma^{B,\tau}_{\mathrm{HK}}((U,\overline{U})/R,r)^{N=0}\to \mathrm {R} \Gamma^{B,\tau}_{\mathrm{HK}}((T,\overline{T})/R',r)^{N'=0} \end{equation} in an analogous way. By the above diagram and by the compatibility of the Beilinson-Hyodo-Kato constructions with base change and with Frobenius, the two pullback maps $f^*$ are compatible via the morphism $h$, i.e., the following diagram commutes $$ \xymatrix{ \mathrm {R} \Gamma_{\operatorname{cr}}(U,\overline{U},r) \ar[r]\ar[d]^{f^*} & \mathrm {R} \Gamma_{\operatorname{cr}}((U,\overline{U})/R,r)^{N=0}\ar[d]^{f^*} & \mathrm {R} \Gamma^{B,\tau}_{\mathrm{HK}}((U,\overline{U})/R,r)^{N=0} \ar[l]_{\kappa_R}\ar[d]^{f^*} \\ \mathrm {R} \Gamma_{\operatorname{cr}}(T,\overline{T},r)\ar[r] & \mathrm {R} \Gamma_{\operatorname{cr}}((T,\overline{T})/R',r)^{N'=0} & \mathrm {R} \Gamma^{B,\tau}_{\mathrm{HK}}((T,\overline{T})/R',r)^{N'=0}\ar[l]_{\kappa_{R'}} } $$ From the analog of diagram (\ref{diag1}) for the Beilinson-Hyodo-Kato complexes and by the universal nature of the trivialization at $\overline{p}$ we obtain that the pullback map $f^*$ is compatible with the maps $\beta\iota_?$.
It remains to show that we have a commutative diagram $$ \xymatrix{ \mathrm {R} \Gamma^B_{\mathrm{HK}}(U,\overline{U},r)^{N=0}\ar[r]^{f^*}_{\sim}\ar[d]^{\iota^B_{\mathrm{dR}}} & \mathrm {R} \Gamma^B_{\mathrm{HK}}(T,\overline{T},r)^{N'=0}\ar[d]^{\iota^B_{\mathrm{dR}}}\\ \mathrm {R} \Gamma_{\mathrm{dR}}(U,\overline{U}_{K})/F^r \ar[r]^{f^*} & \mathrm {R} \Gamma_{\mathrm{dR}}(T,\overline{T}_{K'})/F^r } $$ But this follows since the Beilinson-Hyodo-Kato map is compatible with base change. \end{proof} \subsection{Arithmetic syntomic cohomology} We are now ready to introduce and study arithmetic syntomic cohomology, i.e., syntomic cohomology over $K$. Let ${\mathcal J}^{[r]}_{\operatorname{cr}}$, ${\mathcal{A}}_{\operatorname{cr}}$, and ${\mathcal{S}}(r)$, for $r\geq 0$, be the $h$-sheafifications on $\mathcal{V}ar_{K}$ of the presheaves sending $(U,\overline{U})\in {\mathcal{P}}^{ss}_{K}$ to $\mathrm {R} \Gamma_{\operatorname{cr}}(U,\overline{U},J^{[r]})$, $\mathrm {R} \Gamma_{\operatorname{cr}}(U,\overline{U})$, and $\mathrm {R} \Gamma_{\operatorname{syn}}(U,\overline{U},r)$, respectively. Let ${\mathcal J}^{[r]}_{\operatorname{cr},n}$, ${\mathcal{A}}_{\operatorname{cr},n}$, and ${\mathcal{S}}_n(r)$ denote the $h$-sheafifications of the mod-$p^n$ versions of the respective presheaves. We have $$ {\mathcal{S}}_n(r)\simeq \operatorname{Cone}({\mathcal J}^{[r]}_{\operatorname{cr},n}\stackrel{p^r-\varphi}{\longrightarrow}{\mathcal{A}}_{\operatorname{cr},n})[-1],\quad {\mathcal{S}}(r)\simeq \operatorname{Cone}({\mathcal J}^{[r]}_{\operatorname{cr}}\stackrel{p^r-\varphi}{\longrightarrow}{\mathcal{A}}_{\operatorname{cr}})[-1]. $$ For $r\geq 0$, define ${\mathcal{S}}(r)_{\mathbf Q}$ as the $h$-sheafification of the presheaf sending ss-pairs $(U,\overline{U})$ to $\mathrm {R} \Gamma_{\operatorname{syn}}(U,\overline{U},r)_{\mathbf Q}$. We have $${\mathcal{S}}(r)_{\mathbf Q}\simeq \operatorname{Cone}({\mathcal J}^{[r]}_{\operatorname{cr},{\mathbf Q}}\lomapr{1-\varphi_r}{\mathcal{A}}_{\operatorname{cr},{\mathbf Q}})[-1] $$ For $X\in \mathcal{V}ar_{K}$, set $\mathrm {R} \Gamma_{\operatorname{syn}}(X_h,r)_n:=\mathrm {R} \Gamma(X_h,{\mathcal{S}}_n(r))$ and $\mathrm {R} \Gamma_{\operatorname{syn}}(X_h,r):=\mathrm {R} \Gamma(X_h,{\mathcal{S}}(r)_{\mathbf Q})$. We have \begin{align*} \mathrm {R} \Gamma_{\operatorname{syn}}(X_h,r)_n & \simeq \operatorname{Cone}(\mathrm {R} \Gamma(X_h,{\mathcal J}^{[r]}_{\operatorname{cr},n})\stackrel{p^r-\varphi}{\longrightarrow}\mathrm {R} \Gamma(X_h,{\mathcal{A}}_{\operatorname{cr},n}))[-1],\\ \mathrm {R} \Gamma_{\operatorname{syn}}(X_h,r) & \simeq \operatorname{Cone}(\mathrm {R} \Gamma(X_h,{\mathcal J}^{[r]}_{\operatorname{cr},{\mathbf Q}})\stackrel{1-\varphi_r}{\longrightarrow}\mathrm {R} \Gamma(X_h,{\mathcal{A}}_{\operatorname{cr},{\mathbf Q}}))[-1]. \end{align*} We will often write $\mathrm {R} \Gamma_{\operatorname{cr}}(X_h)$ for $\mathrm {R} \Gamma(X_h,{\mathcal{A}}_{\operatorname{cr}})$ if this does not cause confusion.
Let ${\mathcal{A}}_{\mathrm{HK}}$ be the $h$-sheafification of the presheaf $(U,\overline{U})\mapsto \mathrm {R} \Gamma_{\mathrm{HK}}(U,\overline{U})_{\mathbf Q}$ on ${\mathcal{P}}_{K}^{ss}$; this is an $h$-sheaf of $E_{\infty}$ $K_0$-algebras on ${\mathcal V}ar_{K}$ equipped with a $\varphi$-action and a derivation $N$ such that $N\varphi=p\varphi N$. For $X\in {\mathcal V}ar_{K}$, set $\mathrm {R} \Gamma_{\mathrm{HK}}(X_h):=\mathrm {R} \Gamma(X_h,{\mathcal{A}}_{\mathrm{HK}})$. Similarly, we define the $h$-sheaf ${\mathcal{A}}^B_{\mathrm{HK}}$ and the complexes $\mathrm {R} \Gamma^B_{\mathrm{HK}}(X_h):=\mathrm {R} \Gamma(X_h,{\mathcal{A}}^B_{\mathrm{HK}})$. The maps $\kappa: \mathrm {R} \Gamma^B_{\mathrm{HK}}(U_1,\overline{U}_1)\to \mathrm {R} \Gamma_{\mathrm{HK}}(U,\overline{U})_{\mathbf Q}$ $h$-sheafify and we obtain functorial quasi-isomorphisms \begin{align*} \kappa: \quad {\mathcal{A}}^B_{\mathrm{HK}} \stackrel{\sim}{\to} {\mathcal{A}}_{\mathrm{HK}},\quad \kappa:\quad \mathrm {R} \Gamma^B_{\mathrm{HK}}(X_h) \stackrel{\sim}{\to} \mathrm {R} \Gamma_{\mathrm{HK}}(X_h). \end{align*} \begin{remark} The complexes ${\mathcal J}^{[r]}_{\operatorname{cr},n}$ and ${\mathcal{S}}_n(r)$ (and their completions) have a concrete description. For the complexes ${\mathcal J}^{[r]}_{\operatorname{cr},n}$: we can represent the presheaves $(U,\overline{U})\mapsto \mathrm {R} \Gamma_{\operatorname{cr}}(U,\overline{U},{\mathcal J}^{[r]}_n)$ by Godement resolutions (on the crystalline site), sheafify them for the $h$-topology on ${\mathcal{P}}^{ss}_K$, and then move them to ${\mathcal V}ar_K$. For the complexes ${\mathcal{S}}_n(r)$: the maps $p^r-\varphi$ can be lifted to the Godement resolutions and their mapping fiber (defining ${\mathcal{S}}_n(r)(U,\overline{U})$) can be computed in the abelian category of complexes of abelian groups. To get ${\mathcal{S}}_n(r)$ we $h$-sheafify on ${\mathcal{P}}^{ss}_K$ and pass to ${\mathcal V}ar_K$. \end{remark} Let, for a moment, $K$ be any field of characteristic zero. Consider the presheaf $(U,\overline{U})\mapsto \mathrm {R} \Gamma_{\mathrm{dR}}(U,\overline{U}):=\mathrm {R} \Gamma(\overline{U},\Omega^{\scriptscriptstyle\bullet}_{(U,\overline{U})})$ of filtered dg $K$-algebras on ${\mathcal{P}}^{nc}_K$. Let ${\mathcal{A}}_{\mathrm{dR}}$ be its $h$-sheafification. It is a sheaf of filtered $K$-algebras on $\mathcal{V}ar_K$. For $X\in \mathcal{V}ar_K$, we have Deligne's de Rham complex of $X$ equipped with Deligne's Hodge filtration: $\mathrm {R} \Gamma_{\mathrm{dR}}(X_h):=\mathrm {R} \Gamma(X_h,{\mathcal{A}}_{\mathrm{dR}})$. Beilinson proves the following comparison statement. \begin{proposition} (\cite[2.4]{BE1}) \label{deRham1} \begin{enumerate} \item For $(U,\overline{U})\in {\mathcal{P}}^{nc}_K$, the canonical map $\mathrm {R} \Gamma_{\mathrm{dR}}(U,\overline{U})\stackrel{\sim}{\to} \mathrm {R} \Gamma_{\mathrm{dR}}(U_h)$ is a filtered quasi-isomorphism. \item The cohomology groups $H^i_{\mathrm{dR}}(X_h):=H^i\mathrm {R} \Gamma_{\mathrm{dR}}(X_h)$ are $K$-vector spaces of dimension equal to the rank of $H^i(X_{\overline{K},\operatorname{\acute{e}t}},{\mathbf Q}_p)$.
\end{enumerate} \end{proposition} \begin{corollary} \label{blowup} For a geometric pair $(U,\overline{U})$ over $K$ that is saturated and log-smooth, the canonical map $$\mathrm {R} \Gamma_{\mathrm{dR}}(U,\overline{U})\stackrel{\sim}{\to} \mathrm {R} \Gamma_{\mathrm{dR}}(U_h)$$ is a filtered quasi-isomorphism. \end{corollary} \begin{proof} Recall \cite[Theorem 5.10]{NR} that there is a log-blow-up $(U,\overline{T})\to (U,\overline{U})$ that resolves the singularities of $(U,\overline{U})$, i.e., such that $(U,\overline{T}) \in {\mathcal{P}}^{nc}_K$. We have a commutative diagram $$ \xymatrix{ \mathrm {R} \Gamma_{\mathrm{dR}}(U,\overline{T})\ar[r]^{\sim} &\mathrm {R} \Gamma_{\mathrm{dR}}(U_h)\\ \mathrm {R} \Gamma_{\mathrm{dR}}(U,\overline{U})\ar[u]^{\wr}\ar[ur]& } $$ The vertical map is a filtered quasi-isomorphism; the horizontal map is a filtered quasi-isomorphism by the above proposition. Our corollary follows. \end{proof} \begin{remark} Another proof of the above result (and a mild generalization) that does not use resolution of singularities can be found in \cite[1.19]{BE2} (where it is attributed to A.~Ogus). \end{remark} Return now to our $p$-adic field $K$. \begin{remark} \label{descent} By construction, the complexes $\mathrm {R} \Gamma(X_h,{\mathcal J}^{[r]}_{\operatorname{cr},{\mathbf Q}})$, $\mathrm {R} \Gamma_{\operatorname{syn}}(X_h,r)$, $\mathrm {R} \Gamma_{\mathrm{HK}}(X_h)$, $\mathrm {R} \Gamma^B_{\mathrm{HK}}(X_h)$, and $\mathrm {R} \Gamma_{\mathrm{dR}}(X_h)$ satisfy $h$-descent. In particular, since the $h$-topology is finer than the \'etale topology, they satisfy Galois descent for finite extensions. Hence, for any finite Galois extension $K_1/K$, the natural maps $$ \mathrm {R} \Gamma^*_{?}(X_h)\stackrel{\sim}{\to}\mathrm {R} \Gamma(G,\mathrm {R} \Gamma^*_{?}(X_{K_1,h})),\quad ?={\operatorname{cr}},{\operatorname{syn}},\mathrm{HK}, \mathrm{dR}; \quad *=B,\emptyset, $$ where $G=\operatorname{Gal}(K_1/K)$, are (filtered) quasi-isomorphisms. Since $G$ is finite, it follows that the natural maps $$\mathrm {R} \Gamma^*_{\mathrm{HK}}(X_h)\otimes_{K_0}K_{1,0}\stackrel{\sim}{\to}\mathrm {R} \Gamma^*_{\mathrm{HK}}(X_{K_1,h}),\quad \mathrm {R} \Gamma_{\mathrm{dR}}(X_h)\otimes_{K}K_1\stackrel{\sim}{\to}\mathrm {R} \Gamma_{\mathrm{dR}}(X_{K_1,h}) $$ are (filtered) quasi-isomorphisms as well. \end{remark} Recall from \cite[2.5]{BE2} and (\ref{isomorphism}) below that for a fine log-scheme $X$, log-smooth over $V^{\times}$ and of Cartier type, we have a quasi-isomorphism $\mathrm {R} \Gamma_{\operatorname{cr}}(X_{\overline{V}},{\mathcal J}^{[r]}_{X_{\overline{V}}/W(k)})_{\mathbf Q}\simeq \mathrm {R} \Gamma(X_{\overline{K},h},{\mathcal J}^{[r]}_{\operatorname{cr}})_{\mathbf Q}$. We can descend this result to $K$, but only on the level of rational log-syntomic cohomology; the key observation is that the field extensions introduced by the alterations are harmless since, by Proposition \ref{hypercov11}, log-syntomic cohomology satisfies finite Galois descent. Along the way we will obtain an analogous comparison quasi-isomorphism for the Hyodo-Kato cohomology.
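Before stating the comparison, note that, since $G$ is finite and all of the complexes appearing in Remark \ref{descent} are complexes of ${\mathbf Q}$-vector spaces, the group cohomology there reduces to invariants; on cohomology groups the descent quasi-isomorphisms can thus be read (this is a standard reformulation, not an additional claim) as $$ H^i\mathrm {R} \Gamma^*_{?}(X_h)\stackrel{\sim}{\to}\big(H^i\mathrm {R} \Gamma^*_{?}(X_{K_1,h})\big)^{G},\quad ?={\operatorname{cr}},{\operatorname{syn}},\mathrm{HK},\mathrm{dR};\quad *=B,\emptyset. $$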
\begin{proposition}
\label{hypercov}
For any arithmetic pair $(U,\overline{U})$ that is fine, log-smooth over $V^{\times}$, and of Cartier type, and $r\geq 0$, the canonical maps
$$
\mathrm{R}\Gamma_{\mathrm{HK}}^*(U,\overline{U})_{\mathbf Q} \stackrel{\sim}{\to} \mathrm{R}\Gamma_{\mathrm{HK}}^*(U_h),\quad \mathrm{R}\Gamma_{\operatorname{syn}}(U,\overline{U},r)_{\mathbf Q} \stackrel{\sim}{\to} \mathrm{R}\Gamma_{\operatorname{syn}}(U_h,r)
$$
are quasi-isomorphisms.
\end{proposition}
\begin{proof}
It suffices to show that for any $h$-hypercovering $(U_{\scriptscriptstyle\bullet},\overline{U}_{\scriptscriptstyle\bullet})\to (U,\overline{U})$ by pairs from ${\mathcal{P}}^{\log}_{K}$ the natural maps
$$
\mathrm{R}\Gamma_{\mathrm{HK}}(U,\overline{U})_{{\mathbf Q}} \to \mathrm{R}\Gamma_{\mathrm{HK}}(U_{\scriptscriptstyle\bullet},\overline{U}_{\scriptscriptstyle\bullet})_{{\mathbf Q}},\quad \mathrm{R}\Gamma_{\operatorname{syn}}(U,\overline{U},r)_{{\mathbf Q}} \to \mathrm{R}\Gamma_{\operatorname{syn}}(U_{\scriptscriptstyle\bullet},\overline{U}_{\scriptscriptstyle\bullet},r)_{{\mathbf Q}}
$$
are (modulo taking a refinement of $(U_{\scriptscriptstyle\bullet},\overline{U}_{\scriptscriptstyle\bullet})$) quasi-isomorphisms. For the second map, since we have a canonical quasi-isomorphism
$$\mathrm{R}\Gamma_{\operatorname{syn}}(U,\overline{U},r)_{\mathbf Q}\stackrel{\sim}{\to} \operatorname{Cone}(\mathrm{R}\Gamma_{\operatorname{cr}}(U,\overline{U},r)_{\mathbf Q}\to \mathrm{R}\Gamma_{\operatorname{cr}}(U,\overline{U},{\mathcal O}/{\mathcal J}^{[r]})_{\mathbf Q})[-1],
$$
it suffices to show that, up to a refinement of the hypercovering, we have quasi-isomorphisms
$$\mathrm{R}\Gamma_{\operatorname{cr}}(U,\overline{U},{\mathcal O}/{\mathcal J}^{[r]})_{\mathbf Q}\stackrel{\sim}{\to} \mathrm{R}\Gamma_{\operatorname{cr}}(U_{\scriptscriptstyle\bullet},\overline{U}_{\scriptscriptstyle\bullet},{\mathcal O}/{\mathcal J}^{[r]})_{\mathbf Q},\quad \mathrm{R}\Gamma_{\operatorname{cr}}(U,\overline{U},r)_{\mathbf Q}\stackrel{\sim}{\to}\mathrm{R}\Gamma_{\operatorname{cr}}(U_{\scriptscriptstyle\bullet},\overline{U}_{\scriptscriptstyle\bullet},r)_{\mathbf Q}.
$$
For the first of these maps, by Corollary \ref{Langer} this amounts to showing that the following map is a quasi-isomorphism
$$\mathrm{R}\Gamma(\overline{U}_K,\Omega^{\scriptscriptstyle\bullet}_{(U,\overline{U}_K)})/F^r \stackrel{\sim}{\to} \mathrm{R}\Gamma(\overline{U}_{\scriptscriptstyle\bullet,K},\Omega^{\scriptscriptstyle\bullet}_{(U_{\scriptscriptstyle\bullet},\overline{U}_{\scriptscriptstyle\bullet,K})})/F^r.
$$
But, by Corollary \ref{blowup}, this map is quasi-isomorphic to the map
$$\mathrm{R}\Gamma_{\mathrm{dR}}(U_h)/F^r\to \mathrm{R}\Gamma_{\mathrm{dR}}(U_{\scriptscriptstyle\bullet,h})/F^r,
$$
which is clearly a quasi-isomorphism.
Hence it suffices to show that, up to a refinement of the hypercovering, we have quasi-isomorphisms
$$\mathrm{R}\Gamma_{\mathrm{HK}}(U,\overline{U})_{\mathbf Q}\stackrel{\sim}{\to} \mathrm{R}\Gamma_{\mathrm{HK}}(U_{\scriptscriptstyle\bullet},\overline{U}_{\scriptscriptstyle\bullet})_{\mathbf Q},\quad \mathrm{R}\Gamma_{\operatorname{cr}}(U,\overline{U},r)_{\mathbf Q}\stackrel{\sim}{\to}\mathrm{R}\Gamma_{\operatorname{cr}}(U_{\scriptscriptstyle\bullet},\overline{U}_{\scriptscriptstyle\bullet},r)_{\mathbf Q}.
$$
Fix $t\geq 0$. To show that the map $H^t\mathrm{R}\Gamma_{\operatorname{cr}}(U,\overline{U},r)_{\mathbf Q}\to H^t\mathrm{R}\Gamma_{\operatorname{cr}}(U_{\scriptscriptstyle\bullet},\overline{U}_{\scriptscriptstyle\bullet},r)_{\mathbf Q}$ is an isomorphism we will often work with $(t+1)$-truncated $h$-hypercovers. This is because $\tau_{\leq t}\mathrm{R}\Gamma_{\operatorname{cr}}(U_{\scriptscriptstyle\bullet},\overline{U}_{\scriptscriptstyle\bullet},r)\simeq \tau_{\leq t}\mathrm{R}\Gamma_{\operatorname{cr}}((U_{\scriptscriptstyle\bullet},\overline{U}_{\scriptscriptstyle\bullet})_{\leq t+1},r)$, where $(U_{\scriptscriptstyle\bullet},\overline{U}_{\scriptscriptstyle\bullet})_{\leq t+1}$ denotes the $(t+1)$-truncation. Assume first that we have an $h$-hypercovering $(U_{\scriptscriptstyle\bullet},\overline{U}_{\scriptscriptstyle\bullet})\to (U,\overline{U})$ of arithmetic pairs over $K$, where each pair $(U_i,\overline{U}_i)$, $i\leq t+1$, is log-smooth over $V^{\times}$ and of Cartier type. We claim that then already the maps
\begin{equation}
\label{firsteq}
\tau_{\leq t}\mathrm{R}\Gamma_{\mathrm{HK}}(U,\overline{U})_{\mathbf Q}\stackrel{\sim}{\to} \tau_{\leq t}\mathrm{R}\Gamma_{\mathrm{HK}}((U_{\scriptscriptstyle\bullet},\overline{U}_{\scriptscriptstyle\bullet})_{\leq t+1})_{\mathbf Q};\quad \tau_{\leq t}\mathrm{R}\Gamma_{\operatorname{cr}}(U,\overline{U})_{\mathbf Q}\stackrel{\sim}{\to} \tau_{\leq t}\mathrm{R}\Gamma_{\operatorname{cr}}((U_{\scriptscriptstyle\bullet},\overline{U}_{\scriptscriptstyle\bullet})_{\leq t+1})_{\mathbf Q}
\end{equation}
are quasi-isomorphisms. To see the second quasi-isomorphism consider the following commutative diagram of distinguished triangles ($R=R_V$)
$$
\begin{CD}
\mathrm{R}\Gamma_{\operatorname{cr}}(U,\overline{U}) @>>> \mathrm{R}\Gamma_{\operatorname{cr}}((U,\overline{U})/R) @> N >> \mathrm{R}\Gamma_{\operatorname{cr}}((U,\overline{U})/R)\\
@VVV @VVV @VVV \\
\mathrm{R}\Gamma_{\operatorname{cr}}((U_{\scriptscriptstyle\bullet},\overline{U}_{\scriptscriptstyle\bullet})_{\leq t+1}) @>>> \mathrm{R}\Gamma_{\operatorname{cr}}((U_{\scriptscriptstyle\bullet},\overline{U}_{\scriptscriptstyle\bullet})_{\leq t+1}/R) @> N >> \mathrm{R}\Gamma_{\operatorname{cr}}((U_{\scriptscriptstyle\bullet},\overline{U}_{\scriptscriptstyle\bullet})_{\leq t+1}/R)
\end{CD}
$$
It suffices to show that the two right vertical arrows are rational quasi-isomorphisms in degrees less than or equal to $t$.
But we have the $R$-linear quasi-isomorphisms
$$\iota: R\otimes_{W(k)}\mathrm{R}\Gamma_{\mathrm{HK}}(U,\overline{U})_{{\mathbf Q}}\stackrel{\sim}{\to}\mathrm{R}\Gamma((U,\overline{U})/R)_{{\mathbf Q}}, \quad \iota: R\otimes_{W(k)}\mathrm{R}\Gamma_{\mathrm{HK}}((U_{\scriptscriptstyle\bullet},\overline{U}_{\scriptscriptstyle\bullet})_{\leq t+1})_{{\mathbf Q}}\stackrel{\sim}{\to} \mathrm{R}\Gamma((U_{\scriptscriptstyle\bullet},\overline{U}_{\scriptscriptstyle\bullet})_{\leq t+1}/R)_{{\mathbf Q}}.
$$
Hence to show both quasi-isomorphisms (\ref{firsteq}), it suffices to show that the map
$$\tau_{\leq t}\mathrm{R}\Gamma_{\mathrm{HK}}(U,\overline{U})_{{\mathbf Q}}\to \tau_{\leq t}\mathrm{R}\Gamma_{\mathrm{HK}}((U_{\scriptscriptstyle\bullet},\overline{U}_{\scriptscriptstyle\bullet})_{\leq t+1})_{{\mathbf Q}}
$$
is a quasi-isomorphism. Tensoring over $K_0$ with $K$ and using the Hyodo-Kato quasi-isomorphism (\ref{HKqis}) we reduce to showing that the map
$$ \tau_{\leq t}\mathrm{R}\Gamma(\overline{U}_K,\Omega^{\cdot}_{(U,\overline{U}_K)})\to \tau_{\leq t} \mathrm{R}\Gamma(\overline{U}_{\scriptscriptstyle\bullet K,\leq t+1},\Omega^{\cdot}_{(U_{\scriptscriptstyle\bullet},\overline{U}_{\scriptscriptstyle\bullet,K})_{\leq t+1}})
$$
is a quasi-isomorphism. And this we have done above. To treat the general case, set $X=(U,\overline{U})$, $Y_{\scriptscriptstyle\bullet}=(U_{\scriptscriptstyle\bullet},\overline{U}_{\scriptscriptstyle\bullet})$. We will do a base change to reduce to the case discussed above. We may assume that all the fields $K_{n,i}$, $K_{U_n}\simeq \prod K_{n,i}$, are Galois over $K$. Choose a finite Galois extension $(V',K')/(V,K)$ such that $K'$ is Galois over all the fields $K_{n,i}$, $n\leq t+1$. Write $N_X(X_{V'})$ for the ``\v{C}ech nerve'' of $X_{V'}/X$. The term $N_X(X_{V'})_n$ is defined as the $(n+1)$-fold fiber product of $X_{V'}$ over $X$: $N_X(X_{V'})_n=(U\times_{K}K^{\prime,n+1},(\overline{U}\times_{V}V^{\prime,n+1})^{\operatorname{norm}})$, where $V^{\prime,n+1},K^{\prime,n+1}$ are defined as the $(n+1)$-fold product of $V'$ over $V$ and of $K'$ over $K$, respectively. Normalization is taken with respect to the open regular subscheme $U\times_{K}K^{\prime,n+1}$. Note that $N_X(X_{V'})_n\simeq (U\times_{K}K'\times G^{n},\overline{U}\times_{V}V'\times G^{n})$, $G=\operatorname{Gal}(K'/K)$. Hence it is a log-smooth scheme over $V^{\prime,\times}$, of Cartier type. The augmentation $N_X(X_{V'})\to X$ is an $h$-hypercovering.
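The displayed description of the terms of this \v{C}ech nerve rests only on the following standard fact about finite Galois extensions, which we recall for convenience: for $G=\operatorname{Gal}(K'/K)$, the map
$$K'\otimes_K K'\stackrel{\sim}{\to}\prod_{g\in G}K',\qquad a\otimes b\mapsto (a\,g(b))_{g\in G},$$
is an isomorphism of $K'$-algebras; iterating, $\operatorname{Spec}(K^{\prime,n+1})\simeq \operatorname{Spec}(K')\times G^{n}$, and taking normalizations of the corresponding integral models yields the identification of $N_X(X_{V'})_n$ given above.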
Consider the bi-simplicial scheme $Y_{\scriptscriptstyle\bullet}\times_{X}N_X(X_{V'})_{\scriptscriptstyle\bullet}$,
\begin{align*}
(Y_{\scriptscriptstyle\bullet}\times_{X}N_X(X_{V'})_{\scriptscriptstyle\bullet})_{n,m}:=Y_n\times_{X}N_X(X_{V'})_m & \simeq (U_n\times_UU\times_{K}K^{\prime,m+1},(\overline{U}_n\times_{\overline{U}}(\overline{U}\times_{V}V^{\prime,m+1})^{\operatorname{norm}})^{\operatorname{norm}})\\
& \simeq \coprod_i(U_n\times_{K_{n,i}}K_{n,i}\times_{K}K^{\prime,m+1},\overline{U}_n\times_{V_{n,i}}(V_{n,i}\times_{V}V^{\prime,m+1})^{\operatorname{norm}}).
\end{align*}
Hence $(Y_{\scriptscriptstyle\bullet}\times_{X}N_X(X_{V'})_{\scriptscriptstyle\bullet})_{n,m}\in {\mathcal{P}}^{\log}_{K}$. For $n,m\leq t+1$, we have
$$
(Y_{\scriptscriptstyle\bullet}\times_{X}N_X(X_{V'})_{\scriptscriptstyle\bullet})_{n,m} \simeq \coprod_i(U_n\times_{K_{n,i}}K'\times G_{n,i}\times G^{m}, \overline{U}_n\times_{V_{n,i}}V'\times G_{n,i}\times G^{m}),
$$
where $G_{n,i}=\operatorname{Gal}(K_{n,i}/K)$. It is a log-scheme log-smooth over $V^{\prime,\times}$, of Cartier type. Consider now its diagonal $Y_{\scriptscriptstyle\bullet}\times_{X}N_X(X_{V'}):=\Delta (Y_{\scriptscriptstyle\bullet}\times_{X}N_X(X_{V'})_{\scriptscriptstyle\bullet})$. It is an $h$-hypercovering of $X$ refining $Y_{\scriptscriptstyle\bullet}$ such that, for $n\leq t+1$, $(Y_{\scriptscriptstyle\bullet}\times_{X}N_X(X_{V'}))_n$ is log-smooth over $V^{\prime,\times}$, of Cartier type. It suffices to show that the compositions
\begin{align}
\label{secondeq}
\mathrm{R}\Gamma_{\mathrm{HK}}(X)_{\mathbf Q} & \to \mathrm{R}\Gamma_{\mathrm{HK}}(Y_{\scriptscriptstyle\bullet})_{\mathbf Q} \stackrel{\operatorname{pr}_1^*}{\to}\mathrm{R}\Gamma_{\mathrm{HK}}(Y_{\scriptscriptstyle\bullet}\times_{X}N_X(X_{V'}))_{\mathbf Q};\\
\mathrm{R}\Gamma_{\operatorname{cr}}(X,r)_{\mathbf Q} & \to \mathrm{R}\Gamma_{\operatorname{cr}}(Y_{\scriptscriptstyle\bullet},r)_{\mathbf Q} \stackrel{\operatorname{pr}_1^*}{\to}\mathrm{R}\Gamma_{\operatorname{cr}}(Y_{\scriptscriptstyle\bullet}\times_{X}N_X(X_{V'}),r)_{\mathbf Q} \notag
\end{align}
are quasi-isomorphisms in degrees less than or equal to $t$. Using the commutative diagram of bi-simplicial schemes
$$
\begin{CD}
Y_{\scriptscriptstyle\bullet}\times_{X}N_X(X_{V'}) @>\Delta >> Y_{\scriptscriptstyle\bullet}\times_{X}N_X(X_{V'})_{\scriptscriptstyle\bullet} @>\operatorname{pr}_1 >> Y_{\scriptscriptstyle\bullet}\\
@. @VV\operatorname{pr}_2 V @VVV\\
@.
N_X(X_{V'}) @> f >> X
\end{CD}
$$
we can write the second composition as
$$
\mathrm{R}\Gamma_{\operatorname{cr}}(X,r)_{\mathbf Q} \stackrel{f^*}{\to} \mathrm{R}\Gamma_{\operatorname{cr}}(N_X(X_{V'}),r)_{\mathbf Q} \stackrel{\operatorname{pr}_2^*}{\to}\mathrm{R}\Gamma_{\operatorname{cr}}(Y_{\scriptscriptstyle\bullet}\times_{X}N_X(X_{V'})_{\scriptscriptstyle\bullet},r)_{\mathbf Q} \stackrel{\Delta^*}{\to}\mathrm{R}\Gamma_{\operatorname{cr}}(Y_{\scriptscriptstyle\bullet}\times_{X}N_X(X_{V'}),r)_{\mathbf Q}
$$
We claim that all of these maps are quasi-isomorphisms in degrees less than or equal to $t$. The map $\Delta^*$ is a quasi-isomorphism (in all degrees) by \cite[Prop. 2.5]{Fr}. For the second map, fix $n\leq t+1$ and consider the induced map $\operatorname{pr}_2: (Y_{\scriptscriptstyle\bullet}\times_{X}N_X(X_{V'})_{\scriptscriptstyle\bullet})_{\scriptscriptstyle\bullet,n}\to N_X(X_{V'})_n$. It is an $h$-hypercovering whose $(t+1)$-truncation is built from log-schemes, log-smooth over $(V',K')$, of Cartier type. It suffices to show that the induced map $\tau_{\leq t}\mathrm{R}\Gamma_{\operatorname{cr}}(N_X(X_{V'})_n,r)_{\mathbf Q} \stackrel{\operatorname{pr}_2^*}{\to}\tau_{\leq t}\mathrm{R}\Gamma_{\operatorname{cr}}((Y_{\scriptscriptstyle\bullet}\times_{X}N_X(X_{V'}))_{\scriptscriptstyle\bullet,n},r)_{\mathbf Q}$ is a quasi-isomorphism. Since all maps are defined over $K'$, this follows from the case considered at the beginning of the proof. To prove that the map $f^*: \mathrm{R}\Gamma_{\operatorname{cr}}(X,r)_{\mathbf Q} \to \mathrm{R}\Gamma_{\operatorname{cr}}(N_X(X_{V'}),r)_{\mathbf Q}$ is a quasi-isomorphism, consider first the case when the extension $V'/V$ is unramified. Then $\mathrm{R}\Gamma_{\operatorname{cr}}(X_{V'}) \simeq \mathrm{R}\Gamma_{\operatorname{cr}}(X)\otimes_{W(k)}W(k')$ and the map $f^*$ is a quasi-isomorphism by finite \'etale descent for crystalline cohomology. Assume now that the extension $V'/V$ is totally ramified and let $\pi$ and $\pi'$ be uniformizers of $V$ and $V'$, respectively. Consider the target of $f^*$ as a double complex. To show that $f^*$ is a quasi-isomorphism it suffices to show that, for each $s\geq 0$, the sequence
$$0\to H^s\mathrm{R}\Gamma_{\operatorname{cr}}(X,r)_{\mathbf Q} \stackrel{f^*}{\to} H^s\mathrm{R}\Gamma_{\operatorname{cr}}(N_X(X_{V'})_0,r)_{\mathbf Q} \stackrel{d_0^*}{\to}H^s\mathrm{R}\Gamma_{\operatorname{cr}}(N_X(X_{V'})_1,r)_{\mathbf Q} \stackrel{d_1^*}{\to}H^s\mathrm{R}\Gamma_{\operatorname{cr}}(N_X(X_{V'})_2,r)_{\mathbf Q}\ldots
$$
is exact.
Embed it into the following diagram
$$
\xymatrix{
0 \ar[r] & H^s\mathrm{R}\Gamma_{\operatorname{cr}}(X,r)_{\mathbf Q} \ar[d]^{\wr}_{\alpha^B_{\operatorname{syn},\pi}}\ar[r]^-{f^*} & H^s\mathrm{R}\Gamma_{\operatorname{cr}}(N_X(X_{V'})_0,r)_{\mathbf Q} \ar[d]^{\wr}_{\tilde{\alpha}^B_{\operatorname{syn},\pi'}}\ar[r]^{d_0^*} & H^s\mathrm{R}\Gamma_{\operatorname{cr}}(N_X(X_{V'})_1,r)_{\mathbf Q}\ar[d]^{\wr}_{\tilde{\alpha}^B_{\operatorname{syn},\pi'}} \ar[r] & \\
0 \ar[r] & H^s\mathrm{R}\Gamma^B_{\mathrm{HK}}(X,r)_{\mathbf Q}^{N=0}\ar[r]^-{f^*} & H^s\mathrm{R}\Gamma^B_{\mathrm{HK}}(N_X(X_{V'})_0,r)_{\mathbf Q}^{N'=0} \ar[r]^{d_0^*} & H^s\mathrm{R}\Gamma^B_{\mathrm{HK}}(N_X(X_{V'})_1,r)_{\mathbf Q}^{N'=0} \ar[r] &
}
$$
Note that, since all the maps $d_i^*$ are induced from automorphisms of $V'/V$, by the proof of Proposition \ref{hypercov11} (take the map $f$ used there to be a given automorphism $g\in G=\operatorname{Gal}(K'/K)$ and $\pi'$, $g(\pi')$ for the uniformizers of $V'$) and the proof of Proposition \ref{reduction1}, we get the vertical maps above that make all the squares commute. Hence it suffices to show that the following sequence of Hyodo-Kato cohomology groups is exact:
$$0 \to H^s\mathrm{R}\Gamma_{\mathrm{HK}}(X)_{\mathbf Q} \stackrel{f^*}{\to} H^s\mathrm{R}\Gamma_{\mathrm{HK}}(N_X(X_{V'})_0)_{\mathbf Q} \stackrel{d_0^*}{\to}H^s\mathrm{R}\Gamma_{\mathrm{HK}}(N_X(X_{V'})_1)_{\mathbf Q} \stackrel{d_1^*}{\to}H^s\mathrm{R}\Gamma_{\mathrm{HK}}(N_X(X_{V'})_2)_{\mathbf Q} \to \cdots
$$
But this sequence is isomorphic to the following sequence
$$0 \to H^s\mathrm{R}\Gamma_{\mathrm{HK}}(X)_{\mathbf Q} \stackrel{f^*}{\to} H^s\mathrm{R}\Gamma_{\mathrm{HK}}(X_{V'})_{\mathbf Q} \stackrel{d_0^*}{\to}H^s\mathrm{R}\Gamma_{\mathrm{HK}}(X_{V'})_{\mathbf Q} \times G \stackrel{d_1^*}{\to}H^s\mathrm{R}\Gamma_{\mathrm{HK}}(X_{V'})_{\mathbf Q} \times G^2\to \cdots
$$
representing the (augmented) $G$-cohomology of $H^s\mathrm{R}\Gamma_{\mathrm{HK}}(X)_{\mathbf Q}$. Since $G$ is finite, this complex is exact in degrees at least $1$ (its cohomology groups in positive degrees are ${\mathbf Q}$-vector spaces killed by $|G|$, hence zero). It remains to show that $H^0(G,H^s\mathrm{R}\Gamma_{\mathrm{HK}}(X_{V'})_{\mathbf Q})\simeq H^s\mathrm{R}\Gamma_{\mathrm{HK}}(X)_{\mathbf Q}$. Since $K'/K$ is totally ramified, we have $H^s\mathrm{R}\Gamma_{\mathrm{HK}}(X_{V'})\simeq H^s\mathrm{R}\Gamma_{\mathrm{HK}}(X)$. Hence the action of $G$ on $H^s\mathrm{R}\Gamma_{\mathrm{HK}}(X_{V'})$ is trivial and we get the right $H^0$ as well. We have proved the second quasi-isomorphism from (\ref{secondeq}). Notice that along the way we have actually proved the first quasi-isomorphism.
\end{proof}
For $X\in {\mathcal V}ar_K$, we define a canonical $K_0$-linear map ({\em the Beilinson-Hyodo-Kato morphism})
$$\iota^B_{\mathrm{dR}}: \quad \mathrm{R}\Gamma^B_{\mathrm{HK}}(X_h)\to\mathrm{R}\Gamma_{\mathrm{dR}}(X_h)
$$
as the sheafification of the map $\iota^B_{\mathrm{dR}}: \mathrm{R}\Gamma^B_{\mathrm{HK}}(U_1,\overline{U}_1)\to\mathrm{R}\Gamma_{\mathrm{dR}}(U,\overline{U}_K)$. It follows from Proposition \ref{HKdR}, which we prove in the next section, that the cohomology groups $H^i_{\mathrm{HK}}(X_h):=H^i\mathrm{R}\Gamma^B_{\mathrm{HK}}(X_h)$ are finite rank $K_0$-vector spaces and that they vanish for $i>2\dim X$. This implies the following lemma.
\begin{lemma}
\label{dim}
The syntomic cohomology groups $H^i_{\operatorname{syn}}(X_h,r):=H^i\mathrm{R}\Gamma_{\operatorname{syn}}(X_h,r)$ vanish for $i > 2\dim X+2$.
\end{lemma}
\begin{proof}
The map $\iota'_{\mathrm{dR}}:\mathrm{R}\Gamma_{\mathrm{HK}}(U,\overline{U},r)^{N=0}\to \mathrm{R}\Gamma_{\mathrm{dR}}(U,\overline{U}_{K})/F^r$ from Remark \ref{reduction21} sheafifies and so does the quasi-isomorphism $\alpha'_{\operatorname{syn}}: \mathrm{R}\Gamma_{\operatorname{syn}}(U,\overline{U},r)_{\mathbf Q}\stackrel{\sim}{\to}C'_{\operatorname{st}}(\mathrm{R}\Gamma_{\mathrm{HK}}(U,\overline{U})\{r\})$. Hence $\mathrm{R}\Gamma_{\operatorname{syn}}(X_h,r)$ is quasi-isomorphic via $\alpha'_{\operatorname{syn}}$ to the mapping fiber
$$
C'_{\operatorname{st}}(\mathrm{R}\Gamma_{\mathrm{HK}}(X_h)\{r\}):= [\mathrm{R}\Gamma_{\mathrm{HK}}(X_h,r)^{N=0}\lomapr{\iota'_{\mathrm{dR}}} \mathrm{R}\Gamma_{\mathrm{dR}}(X_h)/F^r]
$$
The statement of the lemma follows.
\end{proof}
For $X\in {\mathcal V}ar_{K}$ and $r\geq 0$, define the complex
$$C_{\operatorname{st}}(\mathrm{R}\Gamma^B_{\mathrm{HK}}(X_h)\{r\}):= \left[ \begin{aligned}\xymatrix{\mathrm{R}\Gamma^B_{\mathrm{HK}}(X_h)\ar[rr]^-{(1-\varphi_r,\iota^B_{\mathrm{dR}})}\ar[d]^{N} & & \mathrm{R}\Gamma^B_{\mathrm{HK}}(X_h) \oplus \mathrm{R}\Gamma_{\mathrm{dR}}(X_h)/F^r\ar[d]^{(N,0)}\\
\mathrm{R}\Gamma^B_{\mathrm{HK}}(X_h)\ar[rr]^-{1-\varphi_{r-1}} & & \mathrm{R}\Gamma^B_{\mathrm{HK}}(X_h)}\end{aligned}\right]
$$
\begin{proposition}
\label{reduction2}
For $X\in {\mathcal V}ar_{K}$ and $r\geq 0$, there exists a canonical (in the classical derived category) quasi-isomorphism
$$\alpha_{\operatorname{syn}}: \mathrm{R}\Gamma_{\operatorname{syn}}(X_h,r)\stackrel{\sim}{\to}C_{\operatorname{st}}(\mathrm{R}\Gamma^B_{\mathrm{HK}}(X_h)\{r\}).
$$
Moreover, this morphism is compatible with finite base change (of the field $K$).
\end{proposition}
\begin{proof}
To construct the map $\alpha_{\operatorname{syn}}$, take a number $t\geq 2\dim X+2$ and let $Y_{\scriptscriptstyle\bullet}\to X$, $Y_{\scriptscriptstyle\bullet}=(U_{\scriptscriptstyle\bullet},\overline{U}_{\scriptscriptstyle\bullet})$, be an $h$-hypercovering of $X$ by ss-pairs over $K$. Choose a finite Galois extension $(V',K')/(V,K)$ and a uniformizer $\pi'$ of $V'$ as in the proof of Proposition \ref{hypercov}.
Keeping the notation from that proof, refine our hypercovering to the $h$-hypercovering $Y_{\scriptscriptstyle\bullet}\times_V V'\to X_{K'}$. Then the truncation $(Y_{\scriptscriptstyle\bullet}\times_V V')_{\leq t+1}$ is built from log-schemes log-smooth over $V^{\prime,\times}$ and of Cartier type. We have the following sequence of quasi-isomorphisms
\begin{align*}
\gamma_{\pi'}: \mathrm{R}\Gamma_{\operatorname{syn}}(X_{K',h}) &\stackrel{\sim}{\leftarrow} \tau_{\leq t}\mathrm{R}\Gamma_{\operatorname{syn}}(X_{K',h}) \stackrel{\sim}{\to}\tau_{\leq t}\mathrm{R}\Gamma_{\operatorname{syn}}((U_{\scriptscriptstyle\bullet}\times_K K')_{\leq t+1,h})\stackrel{\sim}{\leftarrow} \tau_{\leq t}\mathrm{R}\Gamma_{\operatorname{syn}}((Y_{\scriptscriptstyle\bullet}\times_V V')_{\leq t+1})_{\mathbf Q}\\
& \stackrel{\sim}{\to}C_{\operatorname{st}}(\tau_{\leq t}\mathrm{R}\Gamma^B_{\mathrm{HK}}((Y_{\scriptscriptstyle\bullet}\times_V V')_{\leq t+1})\{r\})\stackrel{\sim}{\to}C_{\operatorname{st}}(\tau_{\leq t}\mathrm{R}\Gamma^B_{\mathrm{HK}}((U_{\scriptscriptstyle\bullet}\times_K K')_{\leq t+1,h})\{r\})\\
& \stackrel{\sim}{\leftarrow} C_{\operatorname{st}}(\tau_{\leq t}\mathrm{R}\Gamma^B_{\mathrm{HK}}(X_{K',h})\{r\})\stackrel{\sim}{\to}C_{\operatorname{st}}(\mathrm{R}\Gamma^B_{\mathrm{HK}}(X_{K',h})\{r\})
\end{align*}
The first quasi-isomorphism follows from Lemma \ref{dim}. The third and fifth quasi-isomorphisms follow from Proposition \ref{hypercov}. The fourth quasi-isomorphism (the map $\tilde{\alpha}^B_{\operatorname{syn},\pi'}$), since all the log-schemes involved are log-smooth over $V^{\prime,\times}$ and of Cartier type, follows from Proposition \ref{reduction1}. Now, set $G:=\operatorname{Gal}(K'/K)$. Passing from $\gamma_{\pi'}$ to its $G$-fixed points we obtain the map
$$\alpha_{\operatorname{syn}}:=\alpha_{\operatorname{syn},\pi'}:\quad \mathrm{R}\Gamma_{\operatorname{syn}}(X_{h})\to C_{\operatorname{st}}(\mathrm{R}\Gamma^B_{\mathrm{HK}}(X_{h})\{r\})$$
as the composition
$$ \mathrm{R}\Gamma_{\operatorname{syn}}(X_{h})\to \mathrm{R}\Gamma_{\operatorname{syn}}(X_{K',h})^G\stackrel{\gamma_{\pi'}}{\to}C_{\operatorname{st}}(\mathrm{R}\Gamma^B_{\mathrm{HK}}(X_{K',h})\{r\})^G\stackrel{\sim}{\leftarrow} C_{\operatorname{st}}(\mathrm{R}\Gamma^B_{\mathrm{HK}}(X_{K,h})\{r\})
$$
It remains to check that the map so defined is independent of all choices.
For that, it suffices to check that, in the above construction, for a finite Galois extension $(V_1,K_1)$ of $(V',K')$, $H=\operatorname{Gal}(K_1/K')$, the corresponding maps $\alpha_{\operatorname{syn},?}: \mathrm{R}\Gamma_{\operatorname{syn}}(X_{h})\to C_{\operatorname{st}}(\mathrm{R}\Gamma^B_{\mathrm{HK}}(X_h)\{r\})$ are the same in the classical derived category (note that this includes trivial extensions). An easy diagram chase shows that this amounts to checking that the following diagram commutes
$$
\xymatrix{
\mathrm{R}\Gamma_{\operatorname{syn}}((Y_{\scriptscriptstyle\bullet}\times_V V')_{\leq t+1})_{\mathbf Q}\ar[r]^-{\sim}_-{\alpha_{\operatorname{syn},\pi'}}\ar[d] & C_{\operatorname{st}}(\mathrm{R}\Gamma^B_{\mathrm{HK}}((Y_{\scriptscriptstyle\bullet}\times_V V')_{\leq t+1})\{r\}) \ar[d]\\
\mathrm{R}\Gamma_{\operatorname{syn}}((Y_{\scriptscriptstyle\bullet}\times_V V_1)_{\leq t+1})_{\mathbf Q}^H\ar[r]^-{\sim}_-{\alpha_{\operatorname{syn},\pi_1}} & C_{\operatorname{st}}(\mathrm{R}\Gamma^B_{\mathrm{HK}}((Y_{\scriptscriptstyle\bullet}\times_V V_1)_{\leq t+1})\{r\})^H
}
$$
But this we have shown in Proposition \ref{hypercov11}. For the compatibility with finite base change, consider a finite field extension $L/K$. We can choose in the above a Galois extension $K'/K$ that works for both fields. We get the same maps $\gamma_{\pi'}$ for both $L$ and $K$. Consider now the following commutative diagram. The top and bottom rows define the maps $\alpha^L_{\operatorname{syn},\pi'}$ and $\alpha^K_{\operatorname{syn},\pi'}$, respectively.
$$
\xymatrix{
\mathrm{R}\Gamma_{\operatorname{syn}}(X_{L,h})\ar[r] & \mathrm{R}\Gamma_{\operatorname{syn}}(X_{K',h})^{G_L}\ar[r]^-{\gamma_{\pi'}} & C_{\operatorname{st}}(\mathrm{R}\Gamma^B_{\mathrm{HK}}(X_{K',h})\{r\})^{G_L} & C_{\operatorname{st}}(\mathrm{R}\Gamma^B_{\mathrm{HK}}(X_{L,h})\{r\})\ar[l]_-{\sim}\\
\mathrm{R}\Gamma_{\operatorname{syn}}(X_{h})\ar[u]\ar[r] & \mathrm{R}\Gamma_{\operatorname{syn}}(X_{K',h})^G\ar[u] \ar[r]^-{\gamma_{\pi'}} & C_{\operatorname{st}}(\mathrm{R}\Gamma^B_{\mathrm{HK}}(X_{K',h})\{r\})^G\ar[u] & C_{\operatorname{st}}(\mathrm{R}\Gamma^B_{\mathrm{HK}}(X_{K,h})\{r\})\ar[l]_-{\sim}\ar[u]
}
$$
This proves the last claim of our proposition.
\end{proof}
\subsection{Geometric syntomic cohomology}
We will now study geometric syntomic cohomology, i.e., syntomic cohomology over $\overline{K}$. Most of the constructions related to syntomic cohomology over $K$ have their analogs over $\overline{K}$. We will summarize them briefly. For details the reader should consult \cite{Ts}, \cite{BE2}.
For $(U,\overline{U})\in {\mathcal{P}}^{ss}_{\overline{K}}$, $r\geq 0$, we have the absolute crystalline cohomology complexes and their completions
\begin{align*}
\mathrm{R}\Gamma_{\operatorname{cr}}(U,\overline{U},{\mathcal J}^{[r]})_n: & =\mathrm{R}\Gamma_{\operatorname{cr}}(\overline{U}_{\operatorname{\acute{e}t}},\mathrm{R} u_{\overline{U}_n/W_n(k)*}{\mathcal J}^{[r]}_{\overline{U}_n/W_n(k)}),\quad \mathrm{R}\Gamma_{\operatorname{cr}}(U,\overline{U},{\mathcal J}^{[r]}): = \operatorname{holim}_n\mathrm{R}\Gamma_{\operatorname{cr}}(U,\overline{U},{\mathcal J}^{[r]})_n,\\
\mathrm{R}\Gamma_{\operatorname{cr}}(U,\overline{U},{\mathcal J}^{[r]})_{\mathbf Q}: & =\mathrm{R}\Gamma_{\operatorname{cr}}(U,\overline{U},{\mathcal J}^{[r]})\otimes{\mathbf Q}_p
\end{align*}
By \cite[Theorem 1.18]{BE2}, the complex $\mathrm{R}\Gamma_{\operatorname{cr}}(U,\overline{U})$ is a perfect $A_{\operatorname{cr}}$-complex and $\mathrm{R}\Gamma_{\operatorname{cr}}(U,\overline{U})_n\simeq \mathrm{R}\Gamma_{\operatorname{cr}}(U,\overline{U})\otimes^{L}_{A_{\operatorname{cr}}}A_{\operatorname{cr}}/p^n\simeq \mathrm{R}\Gamma_{\operatorname{cr}}(U,\overline{U})\otimes^{L}{\mathbf Z}/p^n$. In general, we have $\mathrm{R}\Gamma_{\operatorname{cr}}(U,\overline{U},{\mathcal J}^{[r]})_n\simeq \mathrm{R}\Gamma_{\operatorname{cr}}(U,\overline{U},{\mathcal J}^{[r]})\otimes^{L}{\mathbf Z}/p^n$. Moreover, $J^{[r]}_{\operatorname{cr}}=\mathrm{R}\Gamma_{\operatorname{cr}}(\operatorname{Spec}(\overline{K}),\operatorname{Spec}(\overline{V}),{\mathcal J}^{[r]})$ \cite[1.6.3, 1.6.4]{Ts}. The absolute log-crystalline cohomology complexes are filtered $E_{\infty}$ algebras over $A_{\operatorname{cr},n}$, $A_{\operatorname{cr}}$, or $A_{\operatorname{cr},{\mathbf Q}}$, respectively. Moreover, the rational ones are filtered commutative dg algebras. For $r\geq 0$, the mod-$p^n$, completed, and rational log-syntomic complexes $\mathrm{R}\Gamma_{\operatorname{syn}}(U,\overline{U},r)_n$, $\mathrm{R}\Gamma_{\operatorname{syn}}(U,\overline{U},r)$, and $\mathrm{R}\Gamma_{\operatorname{syn}}(U,\overline{U},r)_{\mathbf Q}$ are defined by analogs of the formulas (\ref{log-syntomic}). We have $\mathrm{R}\Gamma_{\operatorname{syn}}(U,\overline{U},r)_n\simeq \mathrm{R}\Gamma_{\operatorname{syn}}(U,\overline{U},r)\otimes^{L}{\mathbf Z}/p^n$. Let ${\mathcal J}^{[r]}_{\operatorname{cr}}$, ${\mathcal{A}}_{\operatorname{cr}}$, and ${\mathcal{S}}(r)$ be the $h$-sheafifications on $\mathcal{V}ar_{\overline{K}}$ of the presheaves sending $(U,\overline{U})\in {\mathcal{P}}^{ss}_{\overline{K}}$ to $\mathrm{R}\Gamma_{\operatorname{cr}}(U,\overline{U},{\mathcal J}^{[r]})$, $\mathrm{R}\Gamma_{\operatorname{cr}}(U,\overline{U})$, and $\mathrm{R}\Gamma_{\operatorname{syn}}(U,\overline{U},r)$, respectively. Let ${\mathcal J}^{[r]}_{\operatorname{cr},n}$, ${\mathcal{A}}_{\operatorname{cr},n}$, and ${\mathcal{S}}_n(r)$ denote the $h$-sheafifications of the mod-$p^n$ versions of the respective presheaves; and let ${\mathcal J}^{[r]}_{\operatorname{cr},{\mathbf Q}}$, ${\mathcal{A}}_{\operatorname{cr},{\mathbf Q}}$, ${\mathcal{S}}(r)_{\mathbf Q}$ be the $h$-sheafifications of the rational versions of the same presheaves.
For $X\in \mathcal{V}ar_{\overline{K}}$, set $\mathrm{R}\Gamma_{\operatorname{cr}}(X_h):=\mathrm{R}\Gamma(X_h,{\mathcal{A}}_{\operatorname{cr}})$. It is a filtered (by $\mathrm{R}\Gamma(X_h,{\mathcal J}^{[r]}_{\operatorname{cr}})$, $r\geq 0$) $E_{\infty}$ $A_{\operatorname{cr}}$-algebra equipped with the Frobenius action $\varphi$. The Galois group $G_K$ acts on ${\mathcal V}ar_{\overline{K}}$ and it acts on $X\mapsto \mathrm{R}\Gamma_{\operatorname{cr}}(X_h)$ by transport of structure. If $X$ is defined over $K$ then $G_K$ acts naturally on $\mathrm{R}\Gamma_{\operatorname{cr}}(X_h)$. For $r\geq 0$, set $\mathrm{R}\Gamma_{\operatorname{syn}}(X_h,r)_n=\mathrm{R}\Gamma(X_h,{\mathcal{S}}_n(r))$, $\mathrm{R}\Gamma_{\operatorname{syn}}(X_h,r):=\mathrm{R}\Gamma(X_h,{\mathcal{S}}(r)_{\mathbf Q})$. We have
\begin{align*}
\mathrm{R}\Gamma_{\operatorname{syn}}(X_h,r)_n & \simeq \operatorname{Cone}(\mathrm{R}\Gamma(X_h,{\mathcal J}^{[r]}_{\operatorname{cr},n})\stackrel{p^r-\varphi}{\longrightarrow}\mathrm{R}\Gamma(X_h,{\mathcal{A}}_{\operatorname{cr},n}))[-1],\\
\mathrm{R}\Gamma_{\operatorname{syn}}(X_h,r) & \simeq \operatorname{Cone}(\mathrm{R}\Gamma(X_h,{\mathcal J}^{[r]}_{\operatorname{cr},{\mathbf Q}})\stackrel{1-\varphi_r}{\longrightarrow}\mathrm{R}\Gamma(X_h,{\mathcal{A}}_{\operatorname{cr},{\mathbf Q}}))[-1].
\end{align*}
The direct sum $\bigoplus_{r\geq 0}\mathrm{R}\Gamma_{\operatorname{syn}}(X_h,r)$ is a graded $E_{\infty}$ algebra over ${\mathbf Z}_p$. Let $\overline{f}: Z_1\to \operatorname{Spec}(\overline{V}_1)^{\times}$ be an integral, quasi-coherent log-scheme. Suppose that $\overline{f}$ is the base change of $\overline{f}_L:Z_{L,1}\to \operatorname{Spec}({\mathcal O}_{L,1})^{\times}$ by $\theta_1: \operatorname{Spec}(\overline{{\mathcal O}_{L,1}})^{\times}\to\operatorname{Spec}({\mathcal O}_{L,1})^{\times}$, for a finite extension $L/K$. That is, we have a map $\theta_{L,1}: Z_1\to Z_{L,1}$ such that the square $(\overline{f},\overline{f}_L,\theta_1,\theta_{L,1})$ is Cartesian. Assume that $\overline{f}_L$ is log-smooth of Cartier type and that the underlying map of schemes is proper. Such data $(L,Z_1,\theta_{L,1})$ form a directed set $\Sigma_1$ and, for a morphism $(L',Z'_1,\theta'_{L',1})\to (L,Z_1,\theta_{L,1})$, we have a canonical base change identification compatible with $\varphi$-action \cite[1.18]{BE2}
$$\mathrm{R}\Gamma^B_{\mathrm{HK}}(Z_{L,1})\otimes_{L_0}L'_0\stackrel{\sim}{\to} \mathrm{R}\Gamma^B_{\mathrm{HK}}(Z'_{L',1}).
$$
These identifications can be made compatible with respect to $L$, so we can set
$$ \mathrm{R}\Gamma^B_{\mathrm{HK}}(Z_1):= \dirlim_{\Sigma_1}\mathrm{R}\Gamma^B_{\mathrm{HK}}(Z_{L,1})
$$
It is a complex of $(\varphi,N)$-modules over $K^{\operatorname{nr}}_0$, functorial with respect to morphisms of $Z_1$. Consider the scheme $E_{\operatorname{cr}}:=\operatorname{Spec}(A_{\operatorname{cr}})$. We have $E_{\operatorname{cr},1}=\operatorname{Spec}(\overline{V}_1)$ and we equip $E_{\operatorname{cr},1}$ with the induced log-structure.
This log-structure extends uniquely to a log-structure on $E_{\operatorname{cr},n}$ and the PD-thickening $\operatorname{Spec}(\overline{V})^{\times}_1\hookrightarrow E_{\operatorname{cr},n}$ is universal over ${\mathbf Z}/p^n$. We equip $E_{\operatorname{cr}}=\operatorname{Spec}(A_{\operatorname{cr}})$ with the limit log-structure. Since we have \cite[1.18.1]{BE2}
$$\mathrm{R}\Gamma_{\operatorname{cr}}(Z_1)\stackrel{\sim}{\to}\mathrm{R}\Gamma_{\operatorname{cr}}(Z_1/E_{\operatorname{cr}}),
$$
Theorem \ref{Bthm} yields a canonical quasi-isomorphism of $B^+_{\operatorname{cr}}$-complexes (called {\em the crystalline Beilinson-Hyodo-Kato quasi-isomorphism})
$$ \iota^B_{\operatorname{cr}}:\quad \mathrm{R}\Gamma_{\mathrm{HK}}^B(Z_1)^{\tau}_{B^+_{\operatorname{cr}}}\stackrel{\sim}{\to}\mathrm{R}\Gamma_{\operatorname{cr}}(Z_1)_{{\mathbf Q}}
$$
compatible with the action of Frobenius. But we have
$$\mathrm{R}\Gamma_{\mathrm{HK}}^B(Z_1)^{\tau}_{B^+_{\operatorname{cr}}}= (\mathrm{R}\Gamma_{\mathrm{HK}}^B(Z_1)\otimes_{K_0^{\operatorname{nr}}}A_{\operatorname{cr},{\mathbf Q}}^{\tau})^{N=0}$$
and there is a canonical isomorphism $A_{\operatorname{cr},{\mathbf Q}}^{\tau}\stackrel{\sim}{\to} B_{\operatorname{st}}^+$ that is compatible with Frobenius and monodromy. This implies that the above quasi-isomorphism amounts to a quasi-isomorphism of $B^+_{\operatorname{cr}}$-complexes
$$ \iota^B_{\operatorname{cr}}:\quad \mathrm{R}\Gamma_{\mathrm{HK}}^B(Z_1)_{B^+_{\operatorname{st}}}\stackrel{\sim}{\to}\mathrm{R}\Gamma_{\operatorname{cr}}(Z_1)\otimes_{A_{\operatorname{cr}}}^{L}B^+_{\operatorname{st}}
$$
compatible with the action of $\varphi$ and $N$. The crystalline Beilinson-Hyodo-Kato map can be canonically trivialized at $[\tilde{p}]$, where $\tilde{p}$ is a sequence of $p^n$-th roots of $p$:
\begin{align*}
\beta=\beta_{[\tilde{p}]}: \mathrm{R}\Gamma_{\mathrm{HK}}^B(Z_1)\otimes_{K_0^{\operatorname{nr}}}B^+_{\operatorname{cr}} & \stackrel{\sim}{\to} (\mathrm{R}\Gamma_{\mathrm{HK}}^B(Z_1)\otimes_{K_0^{\operatorname{nr}}}B^+_{\operatorname{cr}}[a([\tilde{p}])])^{N=0}\\
x & \mapsto \exp(N(x)a([\tilde{p}]))
\end{align*}
This trivialization is compatible with Frobenius and monodromy. Suppose now that $\overline{f}_1:Z_1\to \operatorname{Spec}(\overline{V}_1)^{\times}$ is a reduction mod $p$ of a log-scheme $\overline{f}:Z\to \operatorname{Spec}(\overline{V})^{\times}$. Suppose that $\overline{f}$ is the base change of $\overline{f}_L:Z_{L}\to \operatorname{Spec}({\mathcal O}_{L})^{\times}$ by $\theta: \operatorname{Spec}(\overline{{\mathcal O}_{L}})^{\times}\to\operatorname{Spec}({\mathcal O}_{L})^{\times}$, for a finite extension $L/K$. That is, we have a map $\theta_{L}: Z\to Z_{L}$ such that the square $(\overline{f},\overline{f}_L,\theta,\theta_{L})$ is Cartesian. Assume that $\overline{f}_L$ is log-smooth of Cartier type and that the underlying map of schemes is proper. Such data $(L,Z,\theta_{L})$ form a directed set $\Sigma$ and the reduction mod $p$ map $\Sigma\to \Sigma_1$ is cofinal.
The Beilinson-Hyodo-Kato quasi-isomorphisms (\ref{HK1}) are compatible with morphisms in $\Sigma$ and their colimit yields a natural quasi-isomorphism (called again the {\em Beilinson-Hyodo-Kato quasi-isomorphism})
$$ \iota^B_{\mathrm{dR}}:\quad \mathrm{R}\Gamma_{\mathrm{HK}}^B(Z_1)_{\overline{K}}^{\tau}\stackrel{\sim}{\to} \mathrm{R}\Gamma(Z_{\overline{K}},\Omega^{\scriptscriptstyle\bullet}_{Z/\overline{K}}).
$$
The trivializations by $p$ are also compatible with the maps in $\Sigma$, hence we obtain the Beilinson-Hyodo-Kato maps
$$ \iota^B_{\mathrm{dR}}:=\iota^B_{\mathrm{dR}}\beta_{p}:\quad \mathrm{R}\Gamma_{\mathrm{HK}}^B(Z_1)\to \mathrm{R}\Gamma(Z_{\overline{K}},\Omega^{\scriptscriptstyle\bullet}_{Z/\overline{K}}).
$$
For an ss-pair $(U,\overline{U})$ over $\overline{K}$, set $\mathrm{R}\Gamma^B_{\mathrm{HK}}(U,\overline{U}):=\mathrm{R}\Gamma^B_{\mathrm{HK}}((U,\overline{U})_1)$. Let ${\mathcal{A}}^B_{\mathrm{HK}}$ be the $h$-sheafification of the presheaf $(U,\overline{U})\mapsto \mathrm{R}\Gamma^B_{\mathrm{HK}}(U,\overline{U})$ on ${\mathcal{P}}^{ss}_{\overline{K}}$. This is an $h$-sheaf of $E_{\infty}$ $K_0^{\operatorname{nr}}$-algebras equipped with a $\varphi$-action and a locally nilpotent derivation $N$ such that $N\varphi=p\varphi N$. For $X\in\mathcal{V}ar_{\overline{K}}$, set $\mathrm{R}\Gamma^B_{\mathrm{HK}}(X_h):=\mathrm{R}\Gamma(X_h,{\mathcal{A}}^B_{\mathrm{HK}})$.
\begin{proposition}
\begin{enumerate}
\item For any $(U,\overline{U})\in {\mathcal{P}}^{ss}_{\overline{K}}$, the canonical maps
\begin{equation}
\label{isomorphism}
\mathrm{R}\Gamma_{\operatorname{cr}}(U,\overline{U},{\mathcal J}^{[r]})_{\mathbf Q}\stackrel{\sim}{\to} \mathrm{R}\Gamma(U_h,{\mathcal J}^{[r]}_{\operatorname{cr}})_{\mathbf Q},\quad \mathrm{R}\Gamma^B_{\mathrm{HK}}(U,\overline{U})\stackrel{\sim}{\to}\mathrm{R}\Gamma^B_{\mathrm{HK}}(U_h)
\end{equation}
are quasi-isomorphisms.
\item For every $X\in \mathcal{V}ar_{\overline{K}}$, the cohomology groups $H^n_{\operatorname{cr}}(X_h):=H^n\mathrm{R}\Gamma_{\operatorname{cr}}(X_h)_{\mathbf Q}$, resp. $H^n_{\mathrm{HK}}(X_h):= H^n\mathrm{R}\Gamma^B_{\mathrm{HK}}(X_h)$, are free $B^{+}_{\operatorname{cr}}$-modules, resp. $K_0^{\operatorname{nr}}$-modules, of rank equal to the rank of $H^n(X_{\operatorname{\acute{e}t}},{\mathbf Q}_p)$.
\end{enumerate}
\end{proposition}
\begin{proof}
Only the filtered statement in part (1) for $r > 0$ requires argument since the rest has been proven by Beilinson in \cite[2.4]{BE2}. Take $r >0$. To prove that we have a quasi-isomorphism $\mathrm{R}\Gamma_{\operatorname{cr}}(U,\overline{U},{\mathcal J}^{[r]})_{\mathbf Q}\stackrel{\sim}{\to} \mathrm{R}\Gamma(U_h,{\mathcal J}^{[r]}_{\operatorname{cr}})_{\mathbf Q}$ it suffices to show that the map $\mathrm{R}\Gamma_{\operatorname{cr}}(U,\overline{U},{\mathcal O}/{\mathcal J}^{[r]})_{\mathbf Q}\to \mathrm{R}\Gamma(U_h,{\mathcal{A}}_{\operatorname{cr}}/{\mathcal J}^{[r]}_{\operatorname{cr}})_{\mathbf Q}$ is a quasi-isomorphism.
Since, for an ss-pair $(T,\overline{T})$ over $K$, by Corollary \ref{Langer}, $\mathrm{R}\Gamma_{\operatorname{cr}}(T,\overline{T},{\mathcal O}/{\mathcal J}^{[r]})_{\mathbf Q}\simeq \mathrm{R}\Gamma(\overline{T}_K,\Omega^{\scriptscriptstyle\bullet}_{(T,\overline{T}_K)}/F^r)$, this is equivalent to showing that the map $\mathrm{R}\Gamma(\overline{U}_K,\Omega^{\scriptscriptstyle\bullet}_{(U,\overline{U}_K)}/F^r)\to \mathrm{R}\Gamma(U_h,{\mathcal{A}}_{\mathrm{dR}}/F^r)$ is a quasi-isomorphism. And this follows from Proposition \ref{deRham1}.
\end{proof}
\begin{proposition}
\label{HKdR}
Let $X\in {\mathcal V}ar_{K}$. The natural projection $\varepsilon: X_{\overline{K},h}\to X_h$ defines pullback maps
\begin{equation}
\label{qis11}
\varepsilon^*: \mathrm{R}\Gamma^B_{\mathrm{HK}}(X_h)\to \mathrm{R}\Gamma^B_{\mathrm{HK}}(X_{\overline{K},h})^{G_K},\quad \varepsilon^*: \mathrm{R}\Gamma_{\mathrm{dR}}(X_h)\to \mathrm{R}\Gamma_{\mathrm{dR}}(X_{\overline{K},h})^{G_K}.
\end{equation}
These are (filtered) quasi-isomorphisms.
\end{proposition}
\begin{proof}
Notice that the action of $G_K$ on $\mathrm{R}\Gamma^B_{\mathrm{HK}}(X_{\overline{K},h})\{r\}$ and $\mathrm{R}\Gamma_{\mathrm{dR}}(X_{\overline{K},h})$ is smooth, i.e., the stabilizer of every element is an open subgroup of $G_K$. We will prove only the first quasi-isomorphism; the proof of the second one is analogous. By Proposition \ref{hypercov}, it suffices to show that for any ss-pair over $K$ the natural map
$$ \mathrm{R}\Gamma^B_{\mathrm{HK}}(U_1,\overline{U}_1) \to \mathrm{R}\Gamma^B_{\mathrm{HK}}((U,\overline{U})\otimes_K\overline{K})^{G_K}
$$
is a quasi-isomorphism. Passing to a finite extension of $K_U$, if necessary, we may assume that $(U,\overline{U})$ is log-smooth of Cartier type over a finite Galois extension $K_U$ of $K$. Then
$$ \mathrm{R}\Gamma^B_{\mathrm{HK}}((U,\overline{U})\otimes_K\overline{K}) \simeq \mathrm{R}\Gamma^B_{\mathrm{HK}}(U_1,\overline{U}_1)\otimes_{K_{U,0}}K_0^{\operatorname{nr}}\times H,\quad H=\operatorname{Gal}(K_U/K).$$
Taking $G_K$-fixed points of this quasi-isomorphism we obtain the first quasi-isomorphism of (\ref{qis11}), as wanted.
\end{proof}
Let $(U,\overline{U})$ be an ss-pair over $\overline{K}$. Set
\begin{align*}
\mathrm{R}\Gamma_{\mathrm{dR}}^{\natural}(U,\overline{U}):= & \mathrm{R}\Gamma(\overline{U}_{\operatorname{\acute{e}t}},\mathrm{L}\Omega^{\scriptscriptstyle\bullet,\wedge}_{(U,\overline{U})/W(k)}),\quad \mathrm{R}\Gamma_{\mathrm{dR}}^{\natural}(U,\overline{U})_n: = \mathrm{R}\Gamma_{\mathrm{dR}}^{\natural}(U,\overline{U})\otimes^{{\mathbb L}}{\mathbf Z}/p^n\simeq \mathrm{R}\Gamma(\overline{U}_{\operatorname{\acute{e}t}},\mathrm{L}\Omega^{\scriptscriptstyle\bullet,\wedge}_{(U,\overline{U})_n/W_n(k)}),\\
\mathrm{R}\Gamma_{\mathrm{dR}}^{\natural}(U,\overline{U})\widehat{\otimes}{\mathbf Z}_p:= & \operatorname{holim}_n \mathrm{R}\Gamma_{\mathrm{dR}}^{\natural}(U,\overline{U})_n,\quad \mathrm{R}\Gamma_{\mathrm{dR}}^{\natural}(U,\overline{U})\widehat{\otimes}{\mathbf Q}_p:= (\mathrm{R}\Gamma_{\mathrm{dR}}^{\natural}(U,\overline{U})\widehat{\otimes}{\mathbf Z}_p)\otimes {\mathbf Q}.
\end{align*}
These are $F$-filtered $E_{\infty}$ algebras. Take the associated presheaves on ${\mathcal{P}}^{ss}_{\overline{K}}$.
Denote by ${\mathcal{A}}^{\natural}_{\mathrm{dR}}$, ${\mathcal{A}}^{\natural}_{\mathrm{dR},n}$, ${\mathcal{A}}^{\natural}_{\mathrm{dR}}\widehat{\otimes}{\mathbf Z}_p$, ${\mathcal{A}}^{\natural}_{\mathrm{dR}}\widehat{\otimes}{\mathbf Q}_p$ their sheafifications in the $h$-topology of $\mathcal{V}ar_{\overline{K}}$. These are sheaves of $F$-filtered $E_{\infty}$ algebras (viewed as the projective system of quotients modulo $F^i$). Set $A_{\mathrm{dR}}:=\mathrm{L}\Omega^{\scriptscriptstyle\bullet,\wedge}_{\overline{V}/V}$. By \cite[Lemma 3.2]{BE1} we have $A_{\mathrm{dR}}={\mathcal{A}}^{\natural}_{\mathrm{dR}}(\operatorname{Spec}(\overline{K}))=\mathrm{R}\Gamma^{\natural}_{\mathrm{dR}}(\overline{K},\overline{V})$. The corresponding $F$-filtered algebras $A_{\mathrm{dR},n}$, $A_{\mathrm{dR}}\widehat{\otimes}{\mathbf Z}_p$, $A_{\mathrm{dR}}\widehat{\otimes}{\mathbf Q}_p$ are acyclic in nonzero degrees and the projections $\cdot/F^{m+1}\to \cdot/F^m$ are surjective. Thus (we set $\lim_F:=\operatorname{holim}_F$)
\begin{align*}
A^{\diamond}_{\mathrm{dR},n} & :=\lim_F A_{\mathrm{dR},n}= \invlim_{m}H^0(A_{\mathrm{dR},n}/F^m), \quad A^{\diamond}_{\mathrm{dR}}:=\lim_F(A_{\mathrm{dR}}\widehat{\otimes}{\mathbf Z}_p)= \invlim_{m}H^0(A_{\mathrm{dR}}\widehat{\otimes}{\mathbf Z}_p/F^m)\\
\lim_F A_{\mathrm{dR}}\widehat{\otimes}{\mathbf Q}_p & = \invlim_{m}H^0(A_{\mathrm{dR}}\widehat{\otimes}{\mathbf Q}_p/F^m)=B_{\mathrm{dR}}^+, \quad A_{\mathrm{dR}}\widehat{\otimes}{\mathbf Q}_p/F^m=B^+_{\mathrm{dR}}/F^m
\end{align*}
For any $(U,\overline{U})$ over $\overline{K}$, the complex $\mathrm{R}\Gamma_{\mathrm{dR}}^{\natural}(U,\overline{U})$ is an $F$-filtered $E_{\infty}$ $A_{\mathrm{dR}}$-algebra, hence $\lim_F\mathrm{R}\Gamma_{\mathrm{dR}}^{\natural}(U,\overline{U})_n$ is an $A_{\mathrm{dR},n}^{\diamond}$-algebra, $\lim_F(\mathrm{R}\Gamma_{\mathrm{dR}}^{\natural}(U,\overline{U})\widehat{\otimes}{\mathbf Q}_p)$ is a $B^+_{\mathrm{dR}}$-algebra, etc. We have canonical morphisms
$$\kappa'_{r,n}:\quad \mathrm{R}\Gamma_{\operatorname{cr}}(U,\overline{U})_n\to \mathrm{R}\Gamma_{\operatorname{cr}}(U,\overline{U})_n/F^r\stackrel{\sim}{\to}\mathrm{R}\Gamma_{\mathrm{dR}}^{\natural}(U,\overline{U})_n/F^r
$$
In the case of $(\overline{K},\overline{V})$, from Theorem \ref{beilinson}, we get isomorphisms $\kappa'_{r,n}=\kappa^{-1}_r:A_{\operatorname{cr},n}/J^{[r]}\stackrel{\sim}{\to} A_{\mathrm{dR},n}/F^r$. Hence $A_{\mathrm{dR}}^{\diamond}$ is the completion of $A_{\operatorname{cr}}$ with respect to the $J^{[r]}$-topology. For $X\in\mathcal{V}ar_{\overline{K}}$, set $\mathrm{R}\Gamma_{\mathrm{dR}}^{\natural}(X_h):=\mathrm{R}\Gamma(X_h,{\mathcal{A}}_{\mathrm{dR}}^{\natural})$. Since $A_{\mathrm{dR},{\mathbf Q}}=\overline{K}$, for any variety $X$ over $\overline{K}$, we have a filtered quasi-isomorphism of $\overline{K}$-algebras \cite[3.2]{BE1} $\mathrm{R}\Gamma^{\natural}_{\mathrm{dR}}(X_h)_{\mathbf Q}\stackrel{\sim}{\to} \mathrm{R}\Gamma_{\mathrm{dR}}(X_h)$ obtained by $h$-sheafification of the quasi-isomorphism
\begin{equation}
\label{deRham11}
\mathrm{R}\Gamma^{\natural}_{\mathrm{dR}}(U,\overline{U})_{\mathbf Q}\stackrel{\sim}{\to} \mathrm{R}\Gamma_{\mathrm{dR}}(U,\overline{U}_{\mathbf Q}).
\end{equation}
Concerning the $p$-adic coefficients, we have a quasi-isomorphism
\begin{equation}
\label{gamma}
\gamma_r: (\mathrm{R}\Gamma_{\mathrm{dR}}(X_{h})\otimes_{\overline{K}}B^+_{\mathrm{dR}})/F^r\stackrel{\sim}{\to} \mathrm{R}\Gamma(X_{h},{\mathcal{A}}^{\natural}_{\mathrm{dR}}\widehat{\otimes}{\mathbf Q}_p)/F^r
\end{equation}
To define it, consider, for any ss-pair $(U,\overline{U})$ over $\overline{K}$, the natural map $\mathrm{R}\Gamma^{\natural}_{\mathrm{dR}}(U,\overline{U})\to \mathrm{R}\Gamma^{\natural}_{\mathrm{dR}}(U,\overline{U})\widehat{\otimes}{\mathbf Z}_p$. It yields, by extension to $A_{\mathrm{dR}}\widehat{\otimes}{\mathbf Q}_p$ and by the quasi-isomorphism (\ref{deRham11}), a quasi-isomorphism of $F$-filtered $\overline{K}$-algebras \cite[3.5]{BE2}
$$ \gamma: \mathrm{R}\Gamma_{\mathrm{dR}}(U,\overline{U})_{\mathbf Q}\otimes_{\overline{K}}(A_{\mathrm{dR}}\widehat{\otimes}{\mathbf Q}_p)\stackrel{\sim}{\to}\mathrm{R}\Gamma^{\natural}_{\mathrm{dR}}(U,\overline{U})\widehat{\otimes}{\mathbf Q}_p
$$
Its mod $F^r$ version $\gamma_r$ after $h$-sheafification yields the quasi-isomorphism
$$\gamma_r: ({\mathcal{A}}_{\mathrm{dR}}\otimes_{\overline{K}}B_{\mathrm{dR}}^+)/F^r\stackrel{\sim}{\to} {\mathcal{A}}_{\mathrm{dR}}^{\natural}\widehat{\otimes}{\mathbf Q}_p/F^r
$$
Passing to $\mathrm{R}\Gamma(X_h,{\scriptscriptstyle\bullet})$ we get the quasi-isomorphism (\ref{gamma}). For $X\in {\mathcal V}ar_{\overline{K}}$, we have canonical quasi-isomorphisms
$$\iota^B_{\operatorname{cr}}: \mathrm{R}\Gamma^B_{\mathrm{HK}}(X_h)^{\tau}_{B^+_{\operatorname{cr}}}\stackrel{\sim}{\to}\mathrm{R}\Gamma_{\operatorname{cr}}(X_{h})_{\mathbf Q},\quad \iota^B_{\mathrm{dR}}: \mathrm{R}\Gamma^B_{\mathrm{HK}}(X_h)^{\tau}_{\overline{K}}\stackrel{\sim}{\to} \mathrm{R}\Gamma_{\mathrm{dR}}(X_h),
$$
compatible with the $\operatorname{Gal}(\overline{K}/K)$-action. Here ${}^{\tau}_{B^+_{\operatorname{cr}}}$ and ${}^{\tau}_{\overline{K}}$ denote the $h$-sheafification of the crystalline and the de Rham Beilinson-Hyodo-Kato twists \cite[2.5.1]{BE2}. Trivializing the first map at $[\tilde{p}]$ and the second map at $p$ we get the Beilinson-Hyodo-Kato maps
\begin{align*}
\iota^B_{\operatorname{cr}}:=\iota^B_{\operatorname{cr}}\beta_{[\tilde{p}]}: \mathrm{R}\Gamma^B_{\mathrm{HK}}(X_h)\otimes_{K_0^{\operatorname{nr}}}{B^+_{\operatorname{cr}}}\to \mathrm{R}\Gamma_{\operatorname{cr}}(X_{h})_{\mathbf Q},\quad \iota_{\mathrm{dR}}:=\iota_{\mathrm{dR}}\beta_p: \mathrm{R}\Gamma^B_{\mathrm{HK}}(X_h)\to \mathrm{R}\Gamma_{\mathrm{dR}}(X_h).
\end{align*}
Using the quasi-isomorphism $\kappa_r^{-1}: {\mathcal{A}}_{\operatorname{cr},{\mathbf Q}}/{\mathcal J}^{[r]}_{\operatorname{cr},{\mathbf Q}}\stackrel{\sim}{\to} ({\mathcal{A}}^{\natural}_{\mathrm{dR}}\widehat{\otimes}{\mathbf Q}_p)/F^r$ from Theorem \ref{beilinson}, we obtain the following quasi-isomorphisms of complexes of sheaves on $X_{\overline{K},h}$
\begin{align*}
{\mathcal{S}}(r)_{\mathbf Q} & \stackrel{\sim}{\to} \xymatrix{[{\mathcal J}^{[r]}_{\operatorname{cr},{\mathbf Q}}\ar[r]^-{1-\varphi_r} & {\mathcal{A}}_{\operatorname{cr},{\mathbf Q}}]} \stackrel{\sim}{\to} \xymatrix{[{\mathcal{A}}_{\operatorname{cr},{\mathbf Q}}\ar[rr]^-{(1-\varphi_r,\operatorname{can})} & & {\mathcal{A}}_{\operatorname{cr},{\mathbf Q}}\oplus {\mathcal{A}}_{\operatorname{cr},{\mathbf Q}}/{\mathcal J}^{[r]}_{\operatorname{cr},{\mathbf Q}}]}\\
& \stackrel{\sim}{\leftarrow}\xymatrix{[{\mathcal{A}}_{\operatorname{cr},{\mathbf Q}}\ar[rr]^-{(1-\varphi_r,\kappa_r^{-1})} & & {\mathcal{A}}_{\operatorname{cr},{\mathbf Q}}\oplus ({\mathcal{A}}^{\natural}_{\mathrm{dR}}\widehat{\otimes}{\mathbf Q}_p)/F^r ]}
\end{align*}
Applying $\mathrm{R}\Gamma(X_{h},{\scriptscriptstyle\bullet})$ and the quasi-isomorphism $\gamma_r^{-1}:\mathrm{R}\Gamma(X_{h},{\mathcal{A}}^{\natural}_{\mathrm{dR}}\widehat{\otimes}{\mathbf Q}_p)/F^r\stackrel{\sim}{\to} (\mathrm{R}\Gamma_{\mathrm{dR}}(X_{h})\otimes_{\overline{K}}B^+_{\mathrm{dR}})/F^r$ from (\ref{gamma}) we obtain the following quasi-isomorphisms
\begin{align}
\label{kwak1}
\mathrm{R}\Gamma_{\operatorname{syn}}(X_{h},r) & \stackrel{\sim}{\to} \xymatrix{[\mathrm{R}\Gamma_{\operatorname{cr}}(X_{h})_{\mathbf Q}\ar[rr]^-{(1-\varphi_r,\kappa_r^{-1})} & & \mathrm{R}\Gamma_{\operatorname{cr}}(X_{h})_{\mathbf Q}\oplus \mathrm{R}\Gamma(X_{h},{\mathcal{A}}^{\natural}_{\mathrm{dR}}\widehat{\otimes}{\mathbf Q}_p)/F^r]}\\
& \stackrel{\sim}{\to} \xymatrix{[\mathrm{R}\Gamma_{\operatorname{cr}}(X_{h})_{\mathbf Q}\ar[rr]^-{(1-\varphi_r,\gamma_{r}^{-1}\kappa_r^{-1})} & & \mathrm{R}\Gamma_{\operatorname{cr}}(X_{h})_{\mathbf Q}\oplus (\mathrm{R}\Gamma_{\mathrm{dR}}(X_{h})\otimes_{\overline{K}}B_{\mathrm{dR}}^+)/F^r]}\notag
\end{align}
\begin{corollary}
For any $(U,\overline{U})\in{\mathcal{P}}^{ss}_{\overline{K}}$, the canonical map
$$\mathrm{R}\Gamma_{\operatorname{syn}}(U,\overline{U},r)_{\mathbf Q}\stackrel{\sim}{\to} \mathrm{R}\Gamma_{\operatorname{syn}}(U_h,r)
$$
is a quasi-isomorphism.
\end{corollary}
\begin{proof}
Arguing as above we find quasi-isomorphisms
\begin{align*}
\mathrm{R}\Gamma_{\operatorname{syn}}(U,\overline{U},r)_{\mathbf Q} & \stackrel{\sim}{\to} \xymatrix{[\mathrm{R}\Gamma_{\operatorname{cr}}(U,\overline{U})_{\mathbf Q}\ar[rr]^-{(1-\varphi_r,\kappa_r^{-1})} & & \mathrm{R}\Gamma_{\operatorname{cr}}(U,\overline{U})_{\mathbf Q}\oplus (\mathrm{R}\Gamma^{\natural}_{\mathrm{dR}}(U,\overline{U})\widehat{\otimes}{\mathbf Q}_p)/F^r]}\\
& \stackrel{\sim}{\to} \xymatrix{[\mathrm{R}\Gamma_{\operatorname{cr}}(U,\overline{U})_{\mathbf Q}\ar[rr]^-{(1-\varphi_r,\gamma_{r}^{-1}\kappa_r^{-1})} & & \mathrm{R}\Gamma_{\operatorname{cr}}(U,\overline{U})_{\mathbf Q}\oplus (\mathrm{R}\Gamma_{\mathrm{dR}}(U,\overline{U})\otimes_{\overline{K}}B_{\mathrm{dR}}^+)/F^r]}
\end{align*}
Comparing them with the quasi-isomorphisms (\ref{kwak1}) we see that it suffices to check that the natural maps
$$ \mathrm{R}\Gamma_{\operatorname{cr}}(U,\overline{U})_{\mathbf Q} \stackrel{\sim}{\to} \mathrm{R}\Gamma_{\operatorname{cr}}(U_h)_{\mathbf Q},\quad \mathrm{R}\Gamma_{\mathrm{dR}}(U,\overline{U}) \stackrel{\sim}{\to}\mathrm{R}\Gamma_{\mathrm{dR}}(U_h),
$$
are (filtered) quasi-isomorphisms. But this is known by the isomorphisms (\ref{isomorphism}) and Proposition \ref{deRham1}.
\end{proof}
Consider the following composition of morphisms
\begin{align}
\mathrm{R}\Gamma_{\operatorname{syn}}(X_{h},r) & \stackrel{\sim}{\to} \left[\xymatrix@C=36pt{\mathrm{R}\Gamma_{\operatorname{cr}}(X_{h})_{\mathbf Q}\ar[rr]^-{(1-\varphi_r,\gamma^{-1}_r\kappa_r^{-1})} & & \mathrm{R}\Gamma_{\operatorname{cr}}(X_{h})_{\mathbf Q}\oplus (\mathrm{R}\Gamma_{\mathrm{dR}}(X_{h})\otimes_{\overline{K}}B^+_{\mathrm{dR}})/F^r}\right]\notag\\
\label{Cccc}
& \stackrel{\sim}{\leftarrow} \left[\begin{aligned}\xymatrix@C=50pt{\mathrm{R}\Gamma^B_{\mathrm{HK}}(X_{h})\otimes_{K_0^{\operatorname{nr}}}B_{\operatorname{st}}^+\ar[r]^-{(1-\varphi_r,\iota^B_{\mathrm{dR}}\otimes\iota)}\ar[d]^{N} & \mathrm{R}\Gamma^B_{\mathrm{HK}}(X_{h})\otimes_{K_0^{\operatorname{nr}}}B_{\operatorname{st}}^+\oplus (\mathrm{R}\Gamma_{\mathrm{dR}}(X_{h})\otimes_{\overline{K}}B^+_{\mathrm{dR}})/F^r\ar[d]^{(N,0)}\\
\mathrm{R}\Gamma^B_{\mathrm{HK}}(X_{h})\otimes_{K_0^{\operatorname{nr}}}B_{\operatorname{st}}^+\ar[r]^{1-\varphi_{r-1}} & \mathrm{R}\Gamma^B_{\mathrm{HK}}(X_{h})\otimes_{K_0^{\operatorname{nr}}}B_{\operatorname{st}}^+}\end{aligned}\right]
\end{align}
The second quasi-isomorphism uses the map
$$ (\mathrm{R}\Gamma^B_{\mathrm{HK}}(X_{h})\otimes_{K_0^{\operatorname{nr}}}B_{\operatorname{st}}^+)^{N=0}=\mathrm{R}\Gamma^B_{\mathrm{HK}}(X_{h})^{\tau}_{B^+_{\operatorname{cr}}}\stackrel{\iota^B_{\operatorname{cr}}}{\to}\mathrm{R}\Gamma_{\operatorname{cr}}(X_{h})_{\mathbf Q}
$$
(that is compatible with the action of $N$ and $\varphi$) and the following lemma.
\begin{lemma} \label{HK-compatibility} The following diagrams commute $$ \xymatrix{ \mathrm {R} \Gamma_{\operatorname{cr} }(X_{h})_{\mathbf Q}\otimes_{B^+_{\operatorname{cr} }}B_{\operatorname{st} }^+\ar[rr]^-{\gamma^{-1}_r\kappa_r^{-1}\otimes \iota} & & (\mathrm {R} \Gamma_{\mathrm{dR}}(X_{h})\otimes_{\overline{K} }B^+_{\mathrm{dR}}) /F^r & \mathrm {R} \Gamma_{\operatorname{cr} }(X_{h})_{\mathbf Q}\otimes_{A_{\operatorname{cr} }}B_{\mathrm{dR}} \ar[r]^{\gamma_{\mathrm{dR}}}_{\sim} & \mathrm {R} \Gamma_{\mathrm{dR}}(X_{h})\otimes_{\overline{K} }B_{\mathrm{dR}} \\ \mathrm {R} \Gamma^B_{\mathrm{HK}}(X_{h})\otimes_{K_0^{\operatorname{nr} }}B_{\operatorname{st} }^+\ar[rru]_-{\iota^B_{\mathrm{dR}}\otimes\iota}\ar[u]^{\iota^B_{\operatorname{cr} }}_{\wr} & & &\mathrm {R} \Gamma^B_{\mathrm{HK}}(X_{h})\otimes_{K_0^{\operatorname{nr} }}B_{\operatorname{st} }\ar[u]^{\iota^B_{\operatorname{cr} }\otimes\iota}\ar[ur]_{\iota^B_{\mathrm{dR}}\otimes\iota}& } $$ Here $\gamma_{\mathrm{dR}}$ is the map defined by Beilinson in \cite[3.4.1]{BE2}. \end{lemma} \begin{proof} We will start with the left diagram. It suffices to show that it canonically commutes with $X_h$ replaced by any ss-pair $\overline{Y}=(U,\overline{U})$ over $\overline{K} $ -- a base change of an ss-pair $Y$ split over $(V,K)$. Proceeding as in Example \ref{crucial}, we obtain the following diagram in which all squares but the bottom left one clearly commute. $$ \xymatrix{ \mathrm {R} \Gamma_{\mathrm{HK}}^B(Y_1)^{\tau}_K\ar[d]^{ \operatorname{Id} \otimes 1} \ar[r]^{\iota^B_{K}} & \mathrm {R} \Gamma_{\operatorname{cr} }(Y_1/V^{\times})_{\mathbf Q}/F^r\ar[d] & \mathrm {R} \Gamma(Y_{\operatorname{\acute{e}t} },\mathrm {L} \Omega^{\scriptscriptstyle\bullet,\wedge}_{Y/V^{\times}})\widehat{\otimes}{\mathbf Q}_p/F^r \ar[l]_{\kappa_r}^{\sim}\ar[d] & \mathrm {R} \Gamma_{\mathrm{dR}}(Y_K)/F^r\ar[l]_-{\gamma_r}^-{\sim}\ar[d]\\ \mathrm {R} \Gamma_{\mathrm{HK}}^B(\overline{Y}_1)^{\tau}_{\overline{K} }\otimes_{\overline{K} } B_{\mathrm{dR}}^+\ar[r]^{\iota_{\overline{K} }^B\otimes \kappa_r} & \mathrm {R} \Gamma_{\operatorname{cr} }(\overline{Y}_1/V^{\times})_{\mathbf Q}/F^r & \mathrm {R} \Gamma(\overline{Y}_{\operatorname{\acute{e}t} },\mathrm {L} \Omega^{\scriptscriptstyle\bullet,\wedge}_{\overline{Y}/V^{\times}})\widehat{\otimes}{\mathbf Q}_p/F^r \ar[l]_{\kappa_r}^{\sim} & (\mathrm {R} \Gamma_{\mathrm{dR}}(\overline{Y}_K)\otimes _{\overline{K} }B_{\mathrm{dR}}^+)/F^r\ar[l]_{\gamma_r}^-{\sim}\ar[ld]_-{\gamma_r}^{\sim}\\ \mathrm {R} \Gamma_{\mathrm{HK}}^B(\overline{Y}_1)^{\tau}_{B^+_{\operatorname{cr} }}\otimes_{B^+_{\operatorname{cr} }}B^+_{\operatorname{st} }\ar[u]^{\delta}\ar[r]^{\iota^B_{\operatorname{cr} }\otimes \kappa_r\iota} & \mathrm {R} \Gamma_{\operatorname{cr} }(\overline{Y}_1/A_{\operatorname{cr} })_{\mathbf Q}/F^r\ar[u]^{\wr} & (\mathrm {R} \Gamma^{\natural}_{\mathrm{dR}}(\overline{Y})\widehat{\otimes}{\mathbf Q}_p)/F^r \ar[l]_{\kappa_r}^{\sim}\ar[u]^{\wr} } $$ Here we have $B^+_{\mathrm{dR}}/F^m= (\mathrm {R} \Gamma^{\natural}_{\mathrm{dR}}(\overline{K},\overline{V} )\widehat{\otimes}{\mathbf Q}_p)/F^m$ and the map $\delta$ is defined as the composition $$ \delta:\quad \mathrm {R} \Gamma_{\mathrm{HK}}^B(\overline{Y}_1)^{\tau}_{B^+_{\operatorname{cr} }}\otimes_{B^+_{\operatorname{cr} }}B^+_{\operatorname{st} }= (\mathrm {R} \Gamma_{\mathrm{HK}}^B(\overline{Y}_1)\otimes_{K_0^{\operatorname{nr} }} B^+_{\operatorname{st} })^{N=0}\otimes_{B^+_{\operatorname{cr} }}B^+_{\operatorname{st} }\stackrel{\sim}{\to} \mathrm {R} \Gamma_{\mathrm{HK}}^B(\overline{Y}_1)\otimes_{K_0^{\operatorname{nr} }}B^+_{\operatorname{st} }\lomapr{\beta_p\otimes \iota} \mathrm {R} \Gamma_{\mathrm{HK}}^B(\overline{Y}_1)^{\tau}_{\overline{K} }\otimes_{\overline{K} } B_{\mathrm{dR}}^+ $$ Recall that for the map $\iota_{\mathrm{dR}}^B: \mathrm {R} \Gamma_{\mathrm{HK}}^B(Y_1)^{\tau}_K\to \mathrm {R} \Gamma_{\mathrm{dR}}(Y_K)/F^r$ we have $\iota^B_{\mathrm{dR}}=\gamma_r^{-1} \kappa_r^{-1}\iota^B_K$. Everything in sight is compatible with change of the ss-pair $Y$ (more specifically, with the maps in the directed system $\Sigma$); hence, if this diagram commutes, then so does its colimit over $\Sigma$, and so does the left diagram in the lemma for the pair $(U,\overline{U})$. It remains to show that the bottom left square in the above diagram commutes. To do that, consider the ring $\widehat{A}_{\operatorname{cr} ,n}$ defined as the PD-envelope of the closed immersion $$\overline{V} _1^{\times}\hookrightarrow A_{\operatorname{cr} ,n}\times _{W_n(k)}V^{\times}_n. $$ That is, $\widehat{A}_{\operatorname{cr} ,n}$ is the product of the PD-thickenings $(\overline{V} _1^{\times}\hookrightarrow A_{\operatorname{cr} ,n})$ and $(V^{\times}_1 \hookrightarrow V^{\times}_n)$ over $(W_1(k)\hookrightarrow W_n(k))$. By \cite[Lemma 1.17]{BE2}, this makes $\overline{V}_1^{\times}\hookrightarrow \widehat{A}_{\operatorname{cr} ,n}$ into the universal PD-thickening in the log-crystalline site of $\overline{V}_1^{\times}$ over $V_n^{\times}$. Let $\widehat{A}_{\operatorname{cr} }:=\injlim_n\widehat{A}_{\operatorname{cr} ,n}$ with the limit log-structure. Set $\widehat{B}^+_{\operatorname{cr} }:=\widehat{A}_{\operatorname{cr} }[1/p]$. Using Theorem \ref{Bthm}, we obtain a canonical quasi-isomorphism $$ \iota^B_{\widehat{B}^+_{\operatorname{cr} }}: \mathrm {R} \Gamma^B_{\mathrm{HK}}(\overline{Y}_1)^{\tau}_{\widehat{B}^+_{\operatorname{cr} }}\stackrel{\sim}{\to} \mathrm {R} \Gamma_{\operatorname{cr} }(\overline{Y}_1/\widehat{A}_{\operatorname{cr} })_{\mathbf Q}$$ By construction, we have the maps of PD-thickenings $$ \xymatrix{ (V^{\times}_1\hookrightarrow V^{\times}) & ( \overline{V}^{\times}_1\hookrightarrow \widehat{A}_{\operatorname{cr} })\ar[l]_{\operatorname{pr} _1}\ar[r]^{\operatorname{pr} _2} & (\overline{V}^{\times}_1\hookrightarrow A_{\operatorname{cr} }) } $$ Consider the following diagram $$ \xymatrix{ \mathrm {R} \Gamma_{\mathrm{HK}}^B(\overline{Y}_1)^{\tau}_{\widehat{B}^+_{\operatorname{cr} }}\ar[ddd]_{\iota^B_{\widehat{B}^+_{\operatorname{cr} }}} & & \mathrm {R} \Gamma_{\mathrm{HK}}^B(\overline{Y}_1)_{\overline{K} }^{\tau}\otimes_{\overline{K} } B_{\mathrm{dR}}^+/F^r \ar[ll]_{\operatorname{pr} ^*_1\otimes\operatorname{pr} ^*_1\kappa_r}\ar[ddd]^{\iota_{\overline{K} }^B\otimes \kappa_r}\\ & \mathrm {R} \Gamma_{\mathrm{HK}}^B(\overline{Y}_1)^{\tau}_{B^+_{\operatorname{cr} }}\ar[d]^{\iota^B_{\operatorname{cr} }} \ar[lu]^{\operatorname{pr} _2^*}\ar[ru]^{\delta} & \\ & \mathrm {R} \Gamma_{\operatorname{cr} }(\overline{Y}_1/A_{\operatorname{cr} })_{\mathbf Q}/F^r\ar[ld]_{\operatorname{pr} ^*_2} ^{\sim}\ar[rd]^{\sim} &\\ \mathrm {R} \Gamma_{\operatorname{cr} }(\overline{Y}_1/\widehat{A}_{\operatorname{cr} })_{\mathbf Q}/F^r & & \mathrm {R} \Gamma_{\operatorname{cr} }(\overline{Y}_1/V^{\times})_{\mathbf Q}/F^r \ar[ll]^{\operatorname{pr} _1^*}_{\sim} } $$ The bottom triangle commutes since $ \mathrm {R} \Gamma_{\operatorname{cr} }(\overline{Y}_1/A_{\operatorname{cr} })= \mathrm {R} \Gamma_{\operatorname{cr} }(\overline{Y}_1/W(k))$.
The pullback maps \begin{align*} \operatorname{pr} _1^*:\quad & \mathrm {R} \Gamma_{\operatorname{cr} }(\overline{Y}_1/V^{\times}) \stackrel{\sim}{\to} \mathrm {R} \Gamma_{\operatorname{cr} }(\overline{Y}/\widehat{A}_{\operatorname{cr} }),\\ \operatorname{pr} _2^*: \quad & \mathrm {R} \Gamma_{\operatorname{cr} }(\overline{Y}/A_{\operatorname{cr} })_{\mathbf Q}/F^r\stackrel{\sim}{\to} \mathrm {R} \Gamma_{\operatorname{cr} }(\overline{Y}/\widehat{A}_{\operatorname{cr} })_{\mathbf Q}/F^r \end{align*} are quasi-isomorphisms. Indeed, in the case of the first pullback this follows from the universal property of $\widehat{A}_{\operatorname{cr} }$; in the case of the second one, it follows from the commutativity of the bottom triangle, since the right slanted map is a quasi-isomorphism as shown by the first diagram in our proof. The left trapezoid and the big square commute by the definition of the Beilinson-Bloch-Kato maps. To see that the top triangle commutes it suffices to show that for an element $$x\in \mathrm {R} \Gamma_{\mathrm{HK}}^B(\overline{Y}_1)^{\tau}_{B^+_{\operatorname{cr} }}= (\mathrm {R} \Gamma_{\mathrm{HK}}^B(\overline{Y}_1)\otimes_{K_0^{\operatorname{nr} }}B^+_{\operatorname{st} })^{N=0},\quad x=b\sum_{i\geq 0} N^i(m) a([\tilde{p}])^{[i]},\ m\in \mathrm {R} \Gamma^B_{\mathrm{HK}}(\overline{Y}_1),\ b\in B^+_{\operatorname{cr} },$$ we have $\operatorname{pr} ^*_2(x)=\operatorname{pr} ^*_1\delta(x)$. Since $\iota(a([\tilde{p}]))=\log([\tilde{p}]/p)$ \cite[4.2.2]{F1}, we calculate \begin{align*} \delta(x)=\delta(b\sum_{i\geq 0} N^i(m) a([\tilde{p}])^{[i]}) & = b\sum_{i\geq 0}(\sum_{j\geq 0} N^{i+j}(m)a(p)^{[j]})\log([\tilde{p}]/p)^{[i]}\\ & =b\sum_{k\geq 0} N^{k}(m)(a(p)+\log([\tilde{p}]/p))^{[k]} \end{align*} Since in $\widehat{B}^+_{\operatorname{cr} }$ we have $[\tilde{p}]=([\tilde{p}]/p)p$ and $[\tilde{p}]/p\in 1+J_{\widehat{B}^+_{\operatorname{cr} }}$, it follows that $a([\tilde{p}])=\log([\tilde{p}]/p)+a(p)$ and $$ \operatorname{pr} ^*_1\delta(x)=\operatorname{pr} ^*_1(b\sum_{k\geq 0} N^{k}(m)(a(p)+\log([\tilde{p}]/p))^{[k]})= b\sum_{k\geq 0} N^{k}(m)a([\tilde{p}])^{[k]}=\operatorname{pr} _2^* (b\sum_{k\geq 0} N^{k}(m)a([\tilde{p}])^{[k]})=\operatorname{pr} _2^*(x), $$ as wanted. It follows now that the right trapezoid in the above diagram commutes as well, and hence so does the left diagram in our lemma. To check the commutativity of the right diagram, consider the following map, obtained from the maps $\kappa^{\prime}_{r,n}$ by passing to the $F$-limit: $$ \kappa^{\prime}_n: \quad \mathrm {R} \Gamma_{\operatorname{cr} }(\overline{Y})_n\otimes^L_{A_{\operatorname{cr} ,n}}A_{\mathrm{dR},n}\stackrel{\sim}{\to} \invlim_F \mathrm {R} \Gamma_{\operatorname{cr} }(\overline{Y})_n/F^r $$ By \cite[3.6.2]{BE2}, this is a quasi-isomorphism. Beilinson \cite[3.4.1]{BE2} defines the map $$ \gamma_{\mathrm{dR}}:\quad \mathrm {R} \Gamma_{\operatorname{cr} }(\overline{Y})_{\mathbf Q}\otimes_{A_{\operatorname{cr} }}B_{\mathrm{dR}}^+\stackrel{\sim}{\to} \mathrm {R} \Gamma_{\mathrm{dR}}(\overline{Y}_K)\otimes_{\overline{K} }B^+_{\mathrm{dR}} $$ by $B^+_{\mathrm{dR}}$-linearization of the composition $\invlim_r(\gamma_r^{-1}\kappa_r^{-1})\operatorname{holim}_n\kappa^{\prime}_n$.
We have $$ \gamma_{\mathrm{dR}}=\gamma_r^{-1}\kappa^{-1}_r:\mathrm {R} \Gamma_{\operatorname{cr} }(\overline{Y})_{\mathbf Q}\to (\mathrm {R} \Gamma_{\mathrm{dR}}(\overline{Y}_K)\otimes_{\overline{K} }B^+_{\mathrm{dR}})/F^r $$ Hence the commutativity of the right diagram follows from that of the left one. \end{proof} Let $C^+(\mathrm {R} \Gamma^B_{\mathrm{HK}}(X_{h})\{r\})$ denote the second homotopy limit in the diagram (\ref{Cccc}); denote by $C(\mathrm {R} \Gamma^B_{\mathrm{HK}}(X_{h})\{r\})$ the complex $C^+(\mathrm {R} \Gamma^B_{\mathrm{HK}}(X_{h})\{r\})$ with all the pluses removed. We have defined a map $\alpha_{ \operatorname{syn} }: \mathrm {R} \Gamma_{ \operatorname{syn} }(X_{h},r)\to C^+(\mathrm {R} \Gamma^B_{\mathrm{HK}}(X_{h})\{r\})$ and proved the following proposition. \begin{proposition} There is a functorial $G_K$-equivariant quasi-isomorphism $$\alpha_{ \operatorname{syn} }:\quad \mathrm {R} \Gamma_{ \operatorname{syn} }(X_{h},r)=\mathrm {R} \Gamma(X_{h},{\mathcal{S}}(r)_{\mathbf Q})\simeq C^+(\mathrm {R} \Gamma^B_{\mathrm{HK}}(X_{h})\{r\}). $$ \end{proposition} \begin{corollary} For $(U,\overline{U})\in {\mathcal{P}}^{ss}_{K}$, we have a long exact sequence \begin{align*} \to H^i_{ \operatorname{syn} }((U,\overline{U})_{\overline{K} },r)\to (H^i_{\mathrm{HK}}(U,\overline{U})_{{\mathbf Q}}\otimes_{K_0} B_{\operatorname{st} }^+)^{\varphi=p^r,N=0}\to (H^i_{\mathrm{dR}}(U,\overline{U})\otimes_K B_{\mathrm{dR}}^+)/F^r\to H^{i+1}_{ \operatorname{syn} }((U,\overline{U})_{\overline{K} },r)\to \end{align*} \end{corollary} \begin{proof}By diagram (\ref{Cccc}), it suffices to show that \begin{align*} H^i[\mathrm {R} \Gamma^B_{\mathrm{HK}}((U,\overline{U})_1)\otimes_{K_0}B_{\operatorname{st} }^+]^{\varphi=p^r,N=0} & \simeq (H^i_{\mathrm{HK}}(U,\overline{U})_{{\mathbf Q}}\otimes_{K_0}B^+_{\operatorname{st} })^{\varphi=p^r,N=0},\\ H^i((\mathrm {R} \Gamma_{\mathrm{dR}}(U,\overline{U})\otimes_K B_{\mathrm{dR}}^+)/F^r) & \simeq (H^i_{\mathrm{dR}}(U,\overline{U})\otimes_K B_{\mathrm{dR}}^+)/F^r \end{align*} The second isomorphism is a consequence of the degeneration of the Hodge-de Rham spectral sequence. Keeping in mind that the Beilinson-Hyodo-Kato complexes $\mathrm {R} \Gamma^B_{\mathrm{HK}}((U,\overline{U})_1)$ are built from $(\varphi,N)$-modules, the first isomorphism follows from the following short exact sequences (for a $(\varphi, N)$-module $M$) \begin{align*} 0\to & M\otimes_{K_0}B^+_{\operatorname{cr} }\to M\otimes_{K_0}B^+_{\operatorname{st} }\stackrel{N}{\to} M\otimes_{K_0}B^+_{\operatorname{st} }\to 0,\\ 0\to & (M\otimes_{K_0}B^+_{\operatorname{cr} })^{\varphi=p^r}\to M\otimes_{K_0}B^+_{\operatorname{cr} }\stackrel{1-\varphi_r}{\to} M\otimes_{K_0}B^+_{\operatorname{cr} }\to 0. \end{align*} The first one follows, by induction on $m$ such that $N^m=0$ on $M$, from the exact sequence (\ref{kwak11}) and the fact that $(M\otimes_{K_0}B^+_{\operatorname{st} })^{N=0}\simeq M\otimes_{K_0}B^+_{\operatorname{cr} }$. The second one follows from \cite[Remark 2.30]{CN}. \end{proof} \section{Relation between syntomic cohomology and \'etale cohomology} In this section we will study the relationship between syntomic and \'etale cohomology in both the geometric and the arithmetic situation. \subsection{Geometric case} We start with the geometric case. In this subsection, we will construct the geometric syntomic period map from syntomic to \'etale cohomology.
We will prove that in the torsion case, on the level of $h$-sheaves, it is a quasi-isomorphism modulo a universal constant; in the rational case it induces an isomorphism on cohomology groups in a stable range. Finally, we will construct the syntomic descent spectral sequence. We will first recall the de Rham and Crystalline Poincar\'e Lemmas of Beilinson and Bhatt \cite{BE1}, \cite{BE2}, \cite{BH}. \begin{theorem}(de Rham Poincar\'e Lemma \cite[3.2]{BE1}) \label{derham} The maps $A_{\mathrm{dR}}\otimes ^{L}{\mathbf Z}/p^n \to {\mathcal{A}}_{\mathrm{dR}}^{\natural}\otimes ^{L}{\mathbf Z}/p^n $ are filtered quasi-isomorphisms of $h$-sheaves on ${\mathcal V}ar_{\overline{K}}$. \end{theorem} \begin{theorem}(Filtered Crystalline Poincar\'e Lemma \cite[2.3]{BE2}, \cite[Theorem 10.14]{BH}) The map $J^{[r]}_{\operatorname{cr} ,n}\to {\mathcal J}^{[r]}_{\operatorname{cr} ,n}$ is a quasi-isomorphism of $h$-sheaves on ${\mathcal V}ar_{\overline{K} }$. \end{theorem} \begin{proof} We have the following map of distinguished triangles $$ \begin{CD} J^{[r]} _{\operatorname{cr} ,n}@>>> A_{\operatorname{cr} ,n} @>>> A_{\operatorname{cr} ,n}/J^{[r]}_{\operatorname{cr} ,n} \\ @VVV @VV\wr V @VV\wr V\\ {\mathcal J}^{[r]}_{\operatorname{cr} ,n}@>>> {\mathcal{A}}_{\operatorname{cr} ,n} @>>> {\mathcal{A}}_{\operatorname{cr} ,n}/{\mathcal J}^{[r]}_{\operatorname{cr} ,n} \end{CD} $$ The middle map is a quasi-isomorphism by the Crystalline Poincar\'e Lemma proved in \cite[2.3]{BE2}. Hence it suffices to show that so is the rightmost map. But, by \cite[1.9.2]{BE2}, this map is quasi-isomorphic to the map $A_{\mathrm{dR},n}/F^r\to {\mathcal{A}}^{\natural}_{\mathrm{dR},n}/F^r$. Since the last map is a quasi-isomorphism by the de Rham Poincar\'e Lemma (\ref{derham}) we are done. \end{proof} We will now recall the definitions of the crystalline, Beilinson-Hyodo-Kato, and de Rham period maps \cite[3.1]{BE2}, \cite[3.5]{BE1}. Let $X\in {\mathcal V}ar_{\overline{K} }$. To define the crystalline period map $$\rho_{\operatorname{cr} }: \mathrm {R} \Gamma_{\operatorname{cr} }(X_h)\to \mathrm {R} \Gamma(X_{\operatorname{\acute{e}t} },{\mathbf Z}_p)\widehat{\otimes} A_{\operatorname{cr} },$$ consider the natural map $\alpha_n: \mathrm {R} \Gamma_{\operatorname{cr} }(X_h)\to \mathrm {R} \Gamma(X_h,{\mathcal{A}}_{\operatorname{cr} ,n})$ and the composition $$\beta_n: \quad \mathrm {R} \Gamma(X_{\operatorname{\acute{e}t} },{\mathbf Z}_p)\otimes^{L}_{{\mathbf Z}_p}A_{\operatorname{cr} ,n}\stackrel{\sim}{\to}\mathrm {R} \Gamma(X_{\operatorname{\acute{e}t} },A_{\operatorname{cr} ,n}) \stackrel{\sim}{\to}\mathrm {R} \Gamma(X_{h},A_{\operatorname{cr} ,n}) \stackrel{\sim}{\to} \mathrm {R} \Gamma(X_{h},{\mathcal{A}}_{\operatorname{cr} ,n}). $$ Set $\rho_{\operatorname{cr} ,n}:=\beta_n^{-1}\alpha_n$ and $\rho_{\operatorname{cr} }:=\operatorname{holim}_n\rho_{\operatorname{cr} ,n}$.
The Hyodo-Kato period map $$\rho_{\mathrm{HK}}:\mathrm {R} \Gamma^B_{\mathrm{HK}}(X_h)^{\tau}_{B^+_{\operatorname{cr} }}\to \mathrm {R} \Gamma(X_{\operatorname{\acute{e}t} },{\mathbf Q}_p)\otimes B^+_{\operatorname{cr} },\quad \rho_{\mathrm{HK}}=\rho_{\operatorname{cr} ,{\mathbf Q}}\iota^B_{\operatorname{cr} }, $$ is obtained by composing the map $\rho_{\operatorname{cr} , {\mathbf Q}}$ with the quasi-isomorphism $\iota^B_{\operatorname{cr} }: \mathrm {R} \Gamma^B_{\mathrm{HK}}(X_h)^{\tau}_{B^+_{\operatorname{cr} }}\stackrel{\sim}{\to} \mathrm {R} \Gamma_{\operatorname{cr} }(X_h)_{{\mathbf Q}}$. The maps $\rho_{\operatorname{cr} }, \rho_{\mathrm{HK}}$ are morphisms of $E_{\infty}$ $A_{\operatorname{cr} }$- and $B^+_{\operatorname{cr} }$-algebras equipped with a Frobenius action; they are compatible with the action of the Galois group $G_K$. To define the de Rham period map $\rho_{\mathrm{dR}}:\mathrm {R} \Gamma_{\mathrm{dR}}(X_h)\otimes_{\overline{K} }B_{\mathrm{dR}}^+\to \mathrm {R} \Gamma(X_{\operatorname{\acute{e}t} },{\mathbf Q}_p)\otimes B_{\mathrm{dR}}^+$ consider the compositions \begin{align*} \alpha:\, & \mathrm {R} \Gamma_{\mathrm{dR}}(X_h)\stackrel{\sim}{\to}\mathrm {R} \Gamma^{\natural}_{\mathrm{dR}}(X_h)\otimes {\mathbf Q}\to \mathrm {R} \Gamma^{\natural}_{\mathrm{dR}}(X_h)\widehat{\otimes} {\mathbf Q}_p,\\ \beta:\, & \mathrm {R} \Gamma(X_{\operatorname{\acute{e}t} },{\mathbf Z})\otimes^{\mathbf L}A_{\mathrm{dR}}\stackrel{\sim}{\to}\mathrm {R} \Gamma(X_{\operatorname{\acute{e}t} },A_{\mathrm{dR}})\to \mathrm {R} \Gamma(X_h,A_{\mathrm{dR}})\to \mathrm {R} \Gamma(X_h,{\mathcal{A}}_{\mathrm{dR}}^{\natural})=\mathrm {R} \Gamma^{\natural}_{\mathrm{dR}}(X_h). \end{align*} After tensoring the map $\beta$ with ${\mathbf Z}/p^n$ and using the de Rham Poincar\'e Lemma we get a quasi-isomorphism $$\beta_n: \mathrm {R} \Gamma(X_{\operatorname{\acute{e}t} },{\mathbf Z}/p^n)\otimes^{\mathbf L}A_{\mathrm{dR}}\stackrel{\sim}{\to} \mathrm {R} \Gamma^{\natural}_{\mathrm{dR}}(X_h)\otimes^{\mathbf L}{\mathbf Z}/p^n. $$ Set $\beta_{\mathbf Q}:=\operatorname{holim}_n\beta_n\otimes {\mathbf Q}$ and $\rho_{\mathrm{dR}}:=\beta_{\mathbf Q}^{-1}\alpha$. This is a morphism of filtered $E_{\infty}$ $B^+_{\mathrm{dR}}$-algebras, compatible with $G_K$-action. \begin{theorem}(\cite[3.2]{BE2}, \cite[3.6]{BE1})For $X\in {\mathcal V}ar_{\overline{K} }$, we have canonical quasi-isomorphisms \begin{align*} \rho_{\operatorname{cr} }: \,& \mathrm {R} \Gamma_{\operatorname{cr} }(X_h)\otimes_{A_{\operatorname{cr} }}B_{\operatorname{cr} }\stackrel{\sim}{\to} \mathrm {R} \Gamma(X_{\operatorname{\acute{e}t} },{\mathbf Q}_p)\otimes B_{\operatorname{cr} },\quad \rho_{\mathrm{HK}}:\, \mathrm {R} \Gamma^B_{\mathrm{HK}}(X_h)^{\tau}_{B_{\operatorname{cr} }}\stackrel{\sim}{\to}\mathrm {R} \Gamma(X_{\operatorname{\acute{e}t} },{\mathbf Q}_p)\otimes B_{\operatorname{cr} },\\ \rho_{\mathrm{dR}}: \,& \mathrm {R} \Gamma_{\mathrm{dR}}(X_h)\otimes_{\overline{K} }{B_{\mathrm{dR}}}\stackrel{\sim}{\to}\mathrm {R} \Gamma(X_{\operatorname{\acute{e}t} },{\mathbf Q}_p)\otimes B_{\mathrm{dR}}.
\end{align*} \end{theorem} Pulling back $\rho_{\mathrm{HK}}$ to the Fontaine-Hyodo-Kato ${\mathbb G}_a$-torsor $\operatorname{Spec} (B_{\operatorname{st} })/\operatorname{Spec} (B_{\operatorname{cr} })$ we get a canonical quasi-isomorphism of $B_{\operatorname{st} }$-complexes \begin{equation} \rho_{\mathrm{HK}}: \mathrm {R} \Gamma^B_{\mathrm{HK}}(X_h)\otimes_{K_0^{\operatorname{nr} }}B_{\operatorname{st} }\stackrel{\sim}{\to}\mathrm {R} \Gamma(X_{\operatorname{\acute{e}t} },{\mathbf Q}_p)\otimes B_{\operatorname{st} } \end{equation} compatible with the $(\varphi,N)$-action and with the $G_K$-action on ${\mathcal V}ar_{\overline{K} }$. \begin{corollary} \label{period-compatibility} The period morphisms are compatible, i.e., the following diagrams commute. $$ \xymatrix{ \mathrm {R} \Gamma^B_{\mathrm{HK}}(X_h)\otimes_{K_0^{\operatorname{nr} }}B_{\operatorname{st} }\ar[r]^-{\iota^B_{\mathrm{dR}}\otimes\iota}\ar[d]^{\rho_{\mathrm{HK}}} & \mathrm {R} \Gamma_{\mathrm{dR}}(X_h)\otimes_{\overline{K} }B_{\mathrm{dR}}\ar[d]^{\rho_{\mathrm{dR}}} & \mathrm {R} \Gamma_{\operatorname{cr} }(X_h)\otimes_{A_{\operatorname{cr} }}B_{\mathrm{dR}}\ar[d]_{\rho_{\operatorname{cr} }\otimes \operatorname{Id} _{B_{\mathrm{dR}}}} & \mathrm {R} \Gamma_{\mathrm{dR}}(X_h)\otimes_{\overline{K} }B_{\mathrm{dR}} \ar[dl]^{\rho_{\mathrm{dR}}}\ar[l]_{\gamma_{\mathrm{dR}}}^{\sim} \\ \mathrm {R} \Gamma(X_{\operatorname{\acute{e}t} },{\mathbf Q}_p)\otimes B_{\operatorname{st} }\ar[r]^-{1\otimes \iota} & \mathrm {R} \Gamma(X_{\operatorname{\acute{e}t} },{\mathbf Q}_p)\otimes B_{\mathrm{dR}} & \mathrm {R} \Gamma(X_{\operatorname{\acute{e}t} },{\mathbf Q}_p)\otimes B_{\mathrm{dR}}\\ } $$ \end{corollary} \begin{proof} The second diagram commutes by \cite[3.4]{BE2}. The commutativity of the first one can be reduced, by the equality $\rho_{\mathrm{HK}}=\rho_{\operatorname{cr} }\iota^B_{\operatorname{cr} }$ and the second diagram above, to the commutativity of the right diagram in Lemma \ref{HK-compatibility}. \end{proof} We will now define the syntomic period map $$\rho_{ \operatorname{syn} }:\, \mathrm {R} \Gamma_{ \operatorname{syn} }(X_h,r)_{\mathbf Q}\to \mathrm {R} \Gamma(X_{\operatorname{\acute{e}t} },{\mathbf Q}_p(r)),\quad r\geq 0. $$ Set ${\mathbf Z}/p^n(r)^{\prime}:=(1/(p^a a!)\,{\mathbf Z}_p(r))\otimes{\mathbf Z}/p^n$, where $a$ is the largest integer $\leq r/(p-1)$ (in particular, $a=0$ and ${\mathbf Z}/p^n(r)^{\prime}={\mathbf Z}/p^n(r)$ whenever $0\leq r\leq p-2$). Recall that we have the fundamental exact sequence \cite[Theorem 1.2.4]{Ts} $$0\to {\mathbf Z}/p^n(r)^{\prime}\to J_{\operatorname{cr} ,n}^{<r>}\lomapr{1-\varphi_r}A_{\operatorname{cr} ,n}\to 0, $$ where $$J_n^{<r>}:= \{x\in J_{n+s}^{[r]}\mid \varphi(x)\in p^rA_{\operatorname{cr} ,n+s}\}/p^n,$$ for some $s\geq r$. Set $S_n(r):=\operatorname{Cone} (J^{[r]}_{\operatorname{cr} ,n}\lomapr{p^r-\varphi} A_{\operatorname{cr} ,n})[-1]$. There is a natural morphism of complexes $S_n(r)\to{\mathbf Z}/p^n(r)^{\prime}$ (induced by $p^r$ on $J_{\operatorname{cr} ,n}^{[r]}$ and $ \operatorname{Id} $ on $A_{\operatorname{cr} ,n}$), whose kernel and cokernel are annihilated by $p^r$. The Filtered Crystalline Poincar\'e Lemma easily implies the following Syntomic Poincar\'e Lemma. \begin{corollary} \begin{enumerate} \item For $0\leq r\leq p-2$, there is a unique quasi-isomorphism ${\mathbf Z}/p^n(r)\stackrel{\sim}{\longrightarrow}{\mathcal{S}}_n(r)$ of complexes of sheaves on ${\mathcal V}ar_{\overline{K} ,h}$ that is compatible with the Crystalline Poincar\'e Lemma.
\item There is a unique quasi-isomorphism $S_n(r)\stackrel{\sim}{\to}{\mathcal{S}}_n(r)$ of complexes of sheaves on ${\mathcal V}ar_{\overline{K} ,h}$ that is compatible with the Crystalline Poincar\'e Lemma. \end{enumerate} \end{corollary} \begin{proof} We will prove the second claim; the first one is proved in an analogous way. Consider the following map of distinguished triangles $$ \xymatrix{ {\mathcal{S}}_n(r)\ar[r] & {\mathcal J}^{[r]}_{\operatorname{cr} ,n}\ar[r]^{p^r-\varphi} & {\mathcal{A}}_{\operatorname{cr} ,n}\\ S_n(r)\ar[r]\ar@{-->}[u]& J^{[r]}_{\operatorname{cr} ,n}\ar[u]^{\wr}\ar[r]^{p^r-\varphi} & A_{\operatorname{cr} ,n}\ar[u]^{\wr} } $$ The triangles are distinguished by definition. The vertical continuous arrows are quasi-isomorphisms by the Crystalline Poincar\'e Lemma. They induce the dashed arrow, which is clearly a quasi-isomorphism. \end{proof} Consider the natural map $\alpha_n: \mathrm {R} \Gamma(X_h,{\mathcal{S}}(r))\to \mathrm {R} \Gamma(X_h,{\mathcal{S}}_n(r))$ and the zig-zag $$\beta_n:\, \mathrm {R} \Gamma(X_h,{\mathcal{S}}_n(r))\leftarrow \mathrm {R} \Gamma(X_{h},S_n(r))\to \mathrm {R} \Gamma(X_{\operatorname{\acute{e}t} },{\mathbf Z}/p^n(r)^{\prime})\stackrel{\sim}{\leftarrow} \mathrm {R} \Gamma(X_{h},{\mathbf Z}/p^n(r)^{\prime}).$$ Set $\beta:=(\operatorname{holim}_n\beta_{n})\otimes {\mathbf Q}$; note that this is a quasi-isomorphism. Set $$ \rho_{ \operatorname{syn} }:=p^{-r}\beta\alpha:\quad \mathrm {R} \Gamma_{ \operatorname{syn} }(X_h,r)\to \mathrm {R} \Gamma(X_{\operatorname{\acute{e}t} },{\mathbf Q}_p(r)), $$ where $\alpha:=(\operatorname{holim}_n\alpha_{n})\otimes {\mathbf Q}$. The period map $\rho_{ \operatorname{syn} }$ induces a map of graded $E_{\infty}$ algebras over ${\mathbf Q}_p$ compatible with the action of the Galois group $G_K$. The syntomic period map has a different, more global definition that we find very useful. Define the map $\rho_{ \operatorname{syn} }^{\prime}$ by the following diagram. $$ \xymatrix@C=40pt{ \mathrm {R} \Gamma_{ \operatorname{syn} }(X_h,r)\ar[r]^{\sim} \ar[d]^{\rho_{ \operatorname{syn} }^{\prime}}& [\mathrm {R} \Gamma_{\operatorname{cr} }(X_h)_{\mathbf Q}\ar[r]^-{(1-\varphi_r,\gamma^{-1}_r\kappa^{-1}_r)}\ar[d]^{\rho_{\operatorname{cr} }} & \mathrm {R} \Gamma_{\operatorname{cr} }(X_h)_{\mathbf Q}\oplus \mathrm {R} \Gamma_{\mathrm{dR}}(X_h)/F^r]\ar[d]^{\rho_{\operatorname{cr} }+\rho_{\mathrm{dR}}}\\ \mathrm {R} \Gamma_{\operatorname{\acute{e}t} }(X,{\mathbf Q}_p(r))\ar[r]^-{\sim} & [\mathrm {R} \Gamma_{\operatorname{\acute{e}t} }(X,{\mathbf Q}_p(r))\otimes B_{\operatorname{cr} }\ar[r]^-{(1-\varphi_r, \operatorname{can} ) } & \mathrm {R} \Gamma_{\operatorname{\acute{e}t} }(X,{\mathbf Q}_p(r))\otimes B_{\operatorname{cr} }\oplus \mathrm {R} \Gamma_{\operatorname{\acute{e}t} }(X,{\mathbf Q}_p(r))\otimes B_{\mathrm{dR}}/F^r] } $$ This definition makes sense since the following diagram commutes.
$$ \xymatrix{ \mathrm {R} \Gamma_{\operatorname{cr} }(X_h)_{\mathbf Q}\ar[r]^{\gamma^{-1}_r\kappa^{-1}_r}\ar[d]^{\rho_{\operatorname{cr} }} & \mathrm {R} \Gamma_{\mathrm{dR}}(X_h)/F^r\ar[d]^{\rho_{\mathrm{dR}}}\\ \mathrm {R} \Gamma_{\operatorname{\acute{e}t} }(X,{\mathbf Q}_p(r))\otimes B_{\operatorname{cr} }\ar[r]^-{ \operatorname{can} } & \mathrm {R} \Gamma_{\operatorname{\acute{e}t} }(X,{\mathbf Q}_p(r))\otimes B_{\mathrm{dR}}/F^r } $$ The syntomic period morphisms $\rho_{ \operatorname{syn} }$ and $\rho_{ \operatorname{syn} }^{\prime}$ are homotopic by a homotopy compatible with the $G_K$-action (and, unless necessary, we will not distinguish them in what follows). These two facts follow easily from the definitions. For $X\in {\mathcal V}ar_K$, we have a quasi-isomorphism \begin{equation} \label{definition1} \alpha_{\operatorname{\acute{e}t} }: \mathrm {R} \Gamma(X_{\overline{K} ,\operatorname{\acute{e}t} },{\mathbf Q}_p(r))\stackrel{\sim}{\to} C(\mathrm {R} \Gamma^B_{\mathrm{HK}}(X_{\overline{K} ,h})\{r\}) \end{equation} that we define as the inverse of the following composition of quasi-isomorphisms (square brackets denote complex) \begin{align*} C(\mathrm {R} \Gamma^B_{\mathrm{HK}}(X_{\overline{K} ,h})\{r\}) & \twomapr{\rho}{\sim}\mathrm {R} \Gamma(X_{\overline{K} ,\operatorname{\acute{e}t} },{\mathbf Q}_p)\otimes_{{\mathbf Q}_p} [B_{\operatorname{st} } \verylomapr{(N,1-\varphi_r,\iota)}B_{\operatorname{st} }\oplus B_{\operatorname{st} }\oplus B_{\mathrm{dR}}/F^r \veryverylomapr{(1-\varphi_{r-1})-N} B_{\operatorname{st} }]\\ & \stackrel{\sim}{\leftarrow}\mathrm {R} \Gamma(X_{\overline{K} ,\operatorname{\acute{e}t} },{\mathbf Q}_p)\otimes_{{\mathbf Q}_p}C(D_{\operatorname{st} }({\mathbf Q}_p(r))) \stackrel{\sim}{\leftarrow}\mathrm {R} \Gamma(X_{\overline{K} ,\operatorname{\acute{e}t} },{\mathbf Q}_p(r)). \end{align*} The last quasi-isomorphism is by Remark \ref{basics}. The map $\rho$ is defined using the period morphisms $\rho_{\mathrm{HK}}$ and $\rho_{\mathrm{dR}}$ and their compatibility (Corollary \ref{period-compatibility}). The map $\alpha_{\operatorname{\acute{e}t} }$ is compatible with the action of $G_K$. \begin{proposition} \label{BK2}For a variety $X\in {\mathcal V}ar_{K}$, we have a canonical quasi-isomorphism, compatible with the action of $G_K$, $$\rho_{ \operatorname{syn} }: \tau_{\leq r}\mathrm {R} \Gamma_{ \operatorname{syn} }(X_{\overline{K} ,h},r)\stackrel{\sim}{\to} \tau_{\leq r}\mathrm {R} \Gamma(X_{\overline{K} ,\operatorname{\acute{e}t} },{\mathbf Q}_p(r)). $$ \end{proposition} \begin{proof} The Bousfield-Kan spectral sequences associated to the homotopy limits defining the complexes $C^+(\mathrm {R} \Gamma^B_{\mathrm{HK}}(X_{\overline{K} ,h})\{r\})$ and $C(\mathrm {R} \Gamma^B_{\mathrm{HK}}(X_{\overline{K} ,h})\{r\})$ form the following commutative diagram $$ \xymatrix{ ^+E^{i,j}_2=H^i(C^+(H^j_{\mathrm{HK}}(X_{\overline{K} ,h})\{r\}))\ar[d]^{ \operatorname{can} }\ar@{=>}[r] & H^{i+j}(C^+(\mathrm {R} \Gamma^B_{\mathrm{HK}}(X_{\overline{K} ,h})\{r\}))\ar[d]^{ \operatorname{can} }\\ E^{i,j}_2=H^i(C(H^j_{\mathrm{HK}}(X_{\overline{K} ,h})\{r\}))\ar@{=>}[r] & H^{i+j}(C(\mathrm {R} \Gamma^B_{\mathrm{HK}}(X_{\overline{K} ,h})\{r\})) } $$ We have $D_j=H^j_{\mathrm{HK}}(X_{\overline{K} ,h})\{r\}\in MF_K^{\operatorname{ad} }(\varphi, N,G_K)$. For $j\leq r$, $F^{1}D_{j,K}=F^{1-(r-j)}H^j_{\mathrm{dR}}(X_{h})\{r\}=0$.
Hence, by Corollary \ref{resolution3}, we have $^+E^{i,j}_2\stackrel{\sim}{\to} E^{i,j}_2$. This implies that $\tau_{\leq r}C^+(\mathrm {R} \Gamma^B_{\mathrm{HK}}(X_{\overline{K} ,h})\{r\})\stackrel{\sim}{\to} \tau_{\leq r}C(\mathrm {R} \Gamma^B_{\mathrm{HK}}(X_{\overline{K} ,h})\{r\})$. Since $\rho_{\mathrm{HK}}=\rho_{\operatorname{cr} }\iota^B_{\operatorname{cr} }$, we check easily that we have the following commutative diagram \begin{equation} \label{compatibility0} \begin{CD} \mathrm {R} \Gamma_{ \operatorname{syn} }(X_{\overline{K} ,h},r)@>\sim >\alpha_{ \operatorname{syn} }> C^+(\mathrm {R} \Gamma^B_{\mathrm{HK}}(X_{\overline{K} ,h})\{r\})\\ @VV\rho_{ \operatorname{syn} }V @VV \operatorname{can} V\\ \mathrm {R} \Gamma(X_{\overline{K} ,\operatorname{\acute{e}t} },{\mathbf Q}_p(r))@>\sim >\alpha_{\operatorname{\acute{e}t} }> C(\mathrm {R} \Gamma^B_{\mathrm{HK}}(X_{\overline{K} ,h})\{r\}) \end{CD} \end{equation} It follows that $\rho_{ \operatorname{syn} }: \tau_{\leq r}\mathrm {R} \Gamma_{ \operatorname{syn} }(X_{\overline{K} ,h},r)\stackrel{\sim}{\to} \tau_{\leq r}\mathrm {R} \Gamma(X_{\overline{K} ,\operatorname{\acute{e}t} },{\mathbf Q}_p(r)) $, as wanted. \end{proof} Let $X\in {\mathcal V}ar_K$. The natural projection $\varepsilon: X_{\overline{K} ,h}\to X_h$ defines pullback maps $$\varepsilon^*: \mathrm {R} \Gamma^B_{\mathrm{HK}}(X_h)\to \mathrm {R} \Gamma^B_{\mathrm{HK}}(X_{\overline{K} ,h}),\quad \varepsilon^*: \mathrm {R} \Gamma_{\mathrm{dR}}(X_h)\to \mathrm {R} \Gamma_{\mathrm{dR}}(X_{\overline{K} ,h}). $$ By construction they are compatible with the monodromy operator, Frobenius, the action of the Galois group $G_K$, and filtration. It is also clear that they are compatible with the Beilinson-Hyodo-Kato morphisms, i.e., that the following diagram commutes $$ \xymatrix{ \mathrm {R} \Gamma^B_{\mathrm{HK}}(X_h)\ar[r]^{\iota^B_{\mathrm{dR}}}\ar[d]^{\varepsilon^*} & \mathrm {R} \Gamma_{\mathrm{dR}}(X_h)\ar[d]^{\varepsilon^*}\\ \mathrm {R} \Gamma^B_{\mathrm{HK}}(X_{\overline{K} ,h})\ar[r]^{\iota^B_{\mathrm{dR}}} & \mathrm {R} \Gamma_{\mathrm{dR}}(X_{\overline{K} ,h}). } $$ It follows that we can define a canonical pullback map $$ \varepsilon^*:\quad C_{\operatorname{st} }(\mathrm {R} \Gamma^B_{\mathrm{HK}}(X_h)\{r\})\to C^+(\mathrm {R} \Gamma^B_{\mathrm{HK}}(X_{\overline{K},h})\{r\}). $$ \begin{lemma} \label{compatibilit01} Let $r\geq 0$. The following diagram commutes in the derived category. $$ \xymatrix{ \mathrm {R} \Gamma_{ \operatorname{syn} }(X_h,r)\ar[r]^-{\alpha_{ \operatorname{syn} }}\ar[d]^{\varepsilon^*} & C_{\operatorname{st} }(\mathrm {R} \Gamma^B_{\mathrm{HK}}(X_h)\{r\})\ar[d]^{\varepsilon^*}\\ \mathrm {R} \Gamma_{ \operatorname{syn} }(X_{\overline{K} ,h},r)\ar[r]^-{\alpha_{ \operatorname{syn} }} & C^+(\mathrm {R} \Gamma^B_{\mathrm{HK}}(X_{\overline{K},h})\{r\}). } $$ \end{lemma} \begin{proof} Take a number $t\geq 2\dim X +2$ and choose a finite Galois extension $(V^{\prime},K^{\prime})/(V,K)$ (see the proof of Proposition \ref{hypercov}) such that we have an $h$-hypercovering $Z_{\scriptscriptstyle\bullet}\to X_{K^{\prime}}$ with $(Z_{\scriptscriptstyle\bullet})_{\leq t+1}$ built from log-schemes log-smooth over $V^{\prime,\times}$ and of Cartier type. Since the top map $\alpha_{ \operatorname{syn} }$ is compatible with base change (cf.
Proposition \ref{reduction2}) it suffices to show that the diagram in the lemma commutes with $X$ replaced by $(Z_{\scriptscriptstyle\bullet})_{\leq t+1}$. By (\ref{isomorphism}) and Propositions \ref{hypercov} and \ref{deRham1}, this reduces to showing that, for an ss-pair $(U,\overline{U})$ split over $V$, the following diagram commutes canonically in the $\infty$-derived category (we set $Y:=(U,\overline{U})$, $\overline{Y}:=Y_{\overline{V} }$, and let $\pi$ be a fixed uniformizer of $V$). $$ \xymatrix{ \mathrm {R} \Gamma_{ \operatorname{syn} }(Y,r)_{\mathbf Q}\ar[r]^-{\alpha^B_{ \operatorname{syn} ,\pi}}\ar[d]^{\varepsilon^*} & C_{\operatorname{st} }(\mathrm {R} \Gamma^B_{\mathrm{HK}}(Y)\{r\})\ar[d]^{\varepsilon^*}\\ \mathrm {R} \Gamma_{ \operatorname{syn} }(Y_{\overline{K} },r)_{\mathbf Q} \ar[r]^-{\alpha_{ \operatorname{syn} }} & C^+(\mathrm {R} \Gamma^B_{\mathrm{HK}}(Y_{\overline{K}})\{r\}). } $$ From the uniqueness property of the homotopy fiber functor, it suffices to show that the following diagram commutes canonically in the $\infty$-derived category. $$ \xymatrix{ \mathrm {R} \Gamma_{\operatorname{cr} }(Y)_{\mathbf Q}\ar[r] \ar[d] & \mathrm {R} \Gamma_{\operatorname{cr} }(Y/R)^{N=0} _{\mathbf Q} & \mathrm {R} \Gamma_{\mathrm{HK}}^B(Y_1)^{\tau,N=0}_{R_{\mathbf Q}}\ar[l]_{\iota_{\pi}}^{\sim} & \mathrm {R} \Gamma_{\mathrm{HK}}^B(Y_1)^{N=0}\ar[l]_{\beta}^{\sim}\ar[ld]\\ \mathrm {R} \Gamma_{\operatorname{cr} }(\overline{Y})_{\mathbf Q} & \mathrm {R} \Gamma_{\mathrm{HK}}^B(\overline{Y}_1)^{\tau,N=0}_{B^+_{\operatorname{cr} }}\ar[l]_-{\iota^B_{\operatorname{cr} }} ^-{\sim}& (\mathrm {R} \Gamma_{\mathrm{HK}}^B(\overline{Y}_1)\otimes _{K_0^{\operatorname{nr} }}B^+_{\operatorname{st} })^{N=0}\ar@{=}[l] } $$ To do that we will need the ring of periods $\widehat{A}_{\operatorname{st} }$ \cite[p.253]{Ts}. Set $$ \widehat{A}_{\operatorname{st} ,n}=H^0_{\operatorname{cr} }(\overline{V}_{n}^{\times}/R_{n}), \quad \widehat{A}_{\operatorname{st} }= \invlim_nH^0_{\operatorname{cr} }(\overline{V}_{n}^{\times}/R_{n}). $$ The ring $\widehat{A}_{\operatorname{st} ,n}$ has a natural action of $G_K$, a Frobenius $\varphi$, and a monodromy operator $N$. It is also equipped with a PD-filtration $F^i\widehat{A}_{\operatorname{st} ,n}=H^0_{\operatorname{cr} }(\overline{V}_{n}^{\times}/R_{n},{\mathcal J}_{\operatorname{cr} ,n}^{[i]})$. We have a morphism $A_{\operatorname{cr} ,n}\to \widehat{A}_{\operatorname{st} ,n}$ induced by the map $H^0_{\operatorname{cr} }(\overline{V}_{n}/W_n(k))\to H^0_{\operatorname{cr} }(\overline{V}_{n}^{\times}/R_{n})$. It is compatible with the Galois action, the Frobenius, and the filtration. The natural map $R_{n}\to \widehat{A}_{\operatorname{st} ,n}$ is compatible with all the structures. We can view $\widehat{A}_{\operatorname{st} ,n}$ as the PD-envelope of the closed immersion $$ \overline{V}_n^{\times}\hookrightarrow A_{\operatorname{cr} ,n}\times_{W_n(k)}W_n(k)[X]^{\times} $$ defined by the map $\theta: A_{\operatorname{cr} ,n}\to \overline{V}_n$ and the projection $W_n(k)[X] \to \overline{V}_n$, $X\mapsto \pi$. This makes $\overline{V}_1^{\times}\hookrightarrow \widehat{A}_{\operatorname{st} ,n}$ into a PD-thickening in the crystalline site of $\overline{V}_1$. Set $\widehat{B}^+_{\operatorname{st} }:=\widehat{A}_{\operatorname{st} }[1/p]$.
Commutativity of the last diagram will follow from the following commutative diagram $$ \xymatrix{ \mathrm {R} \Gamma_{\operatorname{cr} }(Y)_{\mathbf Q}\ar[d] \ar[rr] & &\mathrm {R} \Gamma_{\operatorname{cr} }(\overline{Y})_{\mathbf Q}\ar[dl]^{\sim}\\ \mathrm {R} \Gamma_{\operatorname{cr} }(Y/R)^{N=0} _{\mathbf Q}\ar[r] & \mathrm {R} \Gamma_{\operatorname{cr} }(\overline{Y}/\widehat{A}_{\operatorname{st} })^{N=0}_{\mathbf Q}\\ \mathrm {R} \Gamma_{\mathrm{HK}}^B(Y_1)^{\tau,N=0}_{R_{\mathbf Q}}\ar[u]^{\iota_{\pi}}_{\wr}\ar[r]& \mathrm {R} \Gamma_{\mathrm{HK}}^B(Y_1)^{\tau,N=0}_{\widehat{B}^+_{\operatorname{st} }} \ar[u]^{\iota^B_{\widehat{B}^+_{\operatorname{st} }}}_{\wr} &\mathrm {R} \Gamma_{\mathrm{HK}}^B(Y_1)^{\tau,N=0}_{B^+_{\operatorname{cr} }}\ar[l]\ar[uu]^{\iota_{\operatorname{cr} }^B}_{\wr}\\ & \mathrm {R} \Gamma_{\mathrm{HK}}^B(Y_1)^{N=0} \ar[ur]\ar[ul]^{\beta}_{\sim} } $$ as soon as we show that the map $\mathrm {R} \Gamma_{\operatorname{cr} }(\overline{Y})_{\mathbf Q} \to \mathrm {R} \Gamma_{\operatorname{cr} }(\overline{Y}/\widehat{A}_{\operatorname{st} })^{N=0}_{\mathbf Q}$ is a quasi-isomorphism. Notice that the map $\iota^B_{\widehat{B}^+_{\operatorname{st} }}$ is a quasi-isomorphism by Theorem \ref{Bthm}. Hence using the Beilinson-Hyodo-Kato maps $\iota^B_{\widehat{B}^+_{\operatorname{st} }}$ and $ \iota_{\operatorname{cr} }^B$ this reduces to proving that the canonical map $ \mathrm {R} \Gamma_{\mathrm{HK}}^B(Y_1)^{\tau,N=0}_{B^+_{\operatorname{cr} }}\to \mathrm {R} \Gamma_{\mathrm{HK}}^B(Y_1)^{\tau,N=0}_{\widehat{B}^+_{\operatorname{st} }} $ is a quasi-isomorphism. In fact, we claim that for any $(\varphi, N)$-module $M$ we have an isomorphism $M_{B_{\operatorname{cr} }^{+}}^{\tau,N=0}\stackrel{\sim}{\to} M_{\widehat{B}_{\operatorname{st} }^{+}}^{\tau,N=0}.$ Indeed, assume first that the monodromy $N_M$ is trivial. We calculate \begin{align*} M^{\tau}_{B^+_{\operatorname{cr} }} & =(M\otimes _{K_0}B^{+,\tau}_{\operatorname{cr} })^{N^{\prime}=0}=M\otimes _{K_0}(B^{+,\tau}_{\operatorname{cr} })^{N_{\tau}=0}=M\otimes_{K_0} B^+_{\operatorname{cr} },\quad N^{\prime}= N_M\otimes 1 +1\otimes N_{\tau}=1\otimes N_{\tau},\\ M^{\tau}_{\widehat{B}^+_{\operatorname{st} }} & =(M\otimes _{K_0}\widehat{B}^{+,\tau}_{\operatorname{st} })^{N^{\prime}=0}=M\otimes_{K_0} (\widehat{B}^{+,\tau}_{\operatorname{cr} })^{N_{\tau}=0}=M\otimes _{K_0}\widehat{B}^+_{\operatorname{st} } \end{align*} Hence $M_{B_{\operatorname{cr} }^{+}}^{\tau,N=0}=M\otimes _{K_0}B^+_{\operatorname{cr} }$ and $M_{\widehat{B}_{\operatorname{st} }^{+}}^{\tau,N=0}=M\otimes _{K_0}(\widehat{B}^+_{\operatorname{st} })^{N=0}=M\otimes _{K_0}B^+_{\operatorname{cr} }$, where the last equality is proved in \cite[Lemma 1.6.5]{Ts}. We are done in this case. In general, we can write $M\otimes _{K_0}B^+_{\operatorname{st} }\stackrel{\sim}{\leftarrow} M^{\prime}\otimes _{K_0}B^+_{\operatorname{st} }$ for a $(\varphi,N)$-module $M^{\prime}$ such that $N_{M^{\prime}}=0$ (take for $M^{\prime}$ the image of the map $M\to M\otimes _{K_0}B^+_{\operatorname{st} }$, $m\mapsto \exp(N_M(m)u)$, for $u\in B^+_{\operatorname{st} }$ such that $B^+_{\operatorname{st} }=B^+_{\operatorname{cr} }[u]$, $N_{\tau}(u)=-1$).
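For the reader's convenience we spell out why this works; here we read $\exp(N_M(m)u)$ as the finite sum $\sum_{i\geq 0}N_M^i(m)u^i/i!$ (finite because $N_M$ is nilpotent) -- this reading is our interpretation of the shorthand above. With the normalization $N_{\tau}(u)=-1$, the total monodromy $N^{\prime}=N_M\otimes 1+1\otimes N_{\tau}$ kills every element of this form: $$ N^{\prime}\Bigl(\sum_{i\geq 0}N_M^i(m)\frac{u^i}{i!}\Bigr)=\sum_{i\geq 0}N_M^{i+1}(m)\frac{u^i}{i!}-\sum_{i\geq 1}N_M^{i}(m)\frac{u^{i-1}}{(i-1)!}=0, $$ the two sums cancelling term by term. Hence the image $M^{\prime}$ lies in $(M\otimes_{K_0}B^+_{\operatorname{st} })^{N=0}$ and indeed carries trivial monodromy.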
Similarly, using the fact that the ring $B^+_{\operatorname{st} }$ is canonically (and compatibly with all the structures) isomorphic to the elements of $\widehat{B}^+_{\operatorname{st} }$ annihilated by a power of the monodromy operator \cite[3.7]{Kas}, we can write in a compatible way $M\otimes _{K_0}\widehat{B}^+_{\operatorname{st} }\stackrel{\sim}{\leftarrow} M^{\prime}\otimes _{K_0}\widehat{B}^+_{\operatorname{st} }$ for the same module $M^{\prime}$. We obtain a commutative diagram $$ \begin{CD} M_{B_{\operatorname{cr} }^{+}}^{\tau,N=0}@>>> M_{\widehat{B}_{\operatorname{st} }^{+}}^{\tau,N=0} \\ @VV\wr V @VV\wr V\\ M_{B_{\operatorname{cr} }^{+}}^{\prime\,\tau,N=0}@>\sim>> M_{\widehat{B}_{\operatorname{st} }^{+}}^{\prime\,\tau,N=0} \end{CD} $$ that reduces the general case to the case of trivial monodromy on $M$ that we treated above. \end{proof} Let $X\in {\mathcal V}ar_K$, $r\geq 0$. Set \begin{align*} C_{\mathrm{pst}}(\mathrm {R} \Gamma^B_{\mathrm{HK}}(X_{\overline{K},h})\{r\}):= \left[\begin{aligned} \xymatrix@C=50pt{\mathrm {R} \Gamma^B_{\mathrm{HK}}(X_{\overline{K} ,h})^{G_K}\ar[r]^-{(1-\varphi_r,\iota^B_{\mathrm{dR}})}\ar[d]^{N} & \mathrm {R} \Gamma^B_{\mathrm{HK}}(X_{\overline{K} ,h})^{G_K}\oplus (\mathrm {R} \Gamma_{\mathrm{dR}}(X_{\overline{K} ,h})/F^r)^{G_K}\ar[d]^{(N,0)}\\ \mathrm {R} \Gamma^B_{\mathrm{HK}}(X_{\overline{K} ,h})^{G_K}\ar[r]^{1-\varphi_{r-1}}& \mathrm {R} \Gamma^B_{\mathrm{HK}}(X_{\overline{K} ,h})^{G_K}}\end{aligned}\right] \end{align*} The above makes sense since the action of $G_K$ on $\mathrm {R} \Gamma^B_{\mathrm{HK}}(X_{\overline{K},h})\{r\}$ and $ \mathrm {R} \Gamma_{\mathrm{dR}}(X_{\overline{K} ,h})$ is smooth. In particular, we have \begin{align*} H^j(\mathrm {R} \Gamma^B_{\mathrm{HK}}(X_{\overline{K},h})\{r\}^{G_K}) \simeq H^j(\mathrm {R} \Gamma^B_{\mathrm{HK}}(X_{\overline{K},h})\{r\})^{G_K},\quad H^j(\mathrm {R} \Gamma_{\mathrm{dR}}(X_{\overline{K} ,h})^{G_K}) & \simeq H^j(\mathrm {R} \Gamma_{\mathrm{dR}}(X_{\overline{K} ,h}))^{G_K}. \end{align*} Consider the canonical pullback map $$ \varepsilon^*: C_{\operatorname{st} }(\mathrm {R} \Gamma^B_{\mathrm{HK}}(X_h)\{r\})\stackrel{\sim}{\to} C_{\mathrm{pst}}(\mathrm {R} \Gamma^B_{\mathrm{HK}}(X_{\overline{K},h})\{r\}). $$ By Proposition \ref{HKdR}, this is a quasi-isomorphism.
This allows us to construct a canonical spectral sequence (the {\em syntomic descent spectral sequence}) \begin{equation} \label{kwak2} \xymatrix{ ^{ \operatorname{syn} }E^{i,j}_2=H^i_{\operatorname{st} }(G_K,H^j(X_{\overline{K} ,\operatorname{\acute{e}t} },{\mathbf Q}_p(r)))\ar@{=>}[r] & H^{i+j}_{ \operatorname{syn} }(X_{h},r) } \end{equation} Indeed, the Bousfield-Kan spectral sequences associated to the homotopy limits defining the complexes $C_{\mathrm{pst}}(\mathrm {R} \Gamma^B_{\mathrm{HK}}(X_{\overline{K} ,h})\{r\})$ and $C_{\operatorname{st} }(\mathrm {R} \Gamma^B_{\mathrm{HK}}(X_{h})\{r\})$ give us the following commutative diagram $$ \xymatrix{ ^{\mathrm{pst}} E^{i,j}_2=H^i(C_{\mathrm{pst}}(H^j_{\mathrm{HK}}(X_{\overline{K} ,h})\{r\}))\ar@{=>}[r] & H^{i+j}(C_{\mathrm{pst}}(\mathrm {R} \Gamma^B_{\mathrm{HK}}(X_{\overline{K} ,h})\{r\}))\\ ^{ \operatorname{syn} } E^{i,j}_2=H^i(C_{\operatorname{st} }(H^j_{\mathrm{HK}}(X_{h})\{r\}))\ar[u]^{\wr}_{\varepsilon^*}\ar@{=>}[r] & H^{i+j}(C_{\operatorname{st} }(\mathrm {R} \Gamma^B_{\mathrm{HK}}(X_{h})\{r\}))\ar[u]^{\wr}_{\varepsilon^*} } $$ Since, by Proposition \ref{reduction2}, we have $\alpha_{ \operatorname{syn} }: H^{i+j}_{ \operatorname{syn} }(X_h,r) \stackrel{\sim}{\to}H^{i+j}(C_{\operatorname{st} }(\mathrm {R} \Gamma^B_{\mathrm{HK}}(X_h)\{r\}))$, we have obtained a spectral sequence $$ \xymatrix{ E^{i,j}_2=H^i(C_{\mathrm{pst}}(H^j_{\mathrm{HK}}(X_{\overline{K} ,h})\{r\}))\ar@{=>}[r] & H^{i+j}_{ \operatorname{syn} }(X_{h},r) } $$ It remains to show that there is a canonical isomorphism \begin{equation} \label{eq1} H^i(C_{\mathrm{pst}}(H^j_{\mathrm{HK}}(X_{\overline{K} ,h})\{r\}))\simeq H^i_{\operatorname{st} }(G_K,H^j(X_{\overline{K} ,\operatorname{\acute{e}t} },{\mathbf Q}_p(r))). \end{equation} But, we have $D_j=H^j_{\mathrm{HK}}(X_{\overline{K} ,h})\{r\}\in MF_K^{\operatorname{ad} }(\varphi, N,G_K)$, $V_{\mathrm{pst}}(D_j)\simeq H^j(X_{\overline{K} ,\operatorname{\acute{e}t} },{\mathbf Q}_p(r))$, and $D_{\mathrm{pst}}(H^j(X_{\overline{K} ,\operatorname{\acute{e}t} },{\mathbf Q}_p(r)))\simeq D_j$. Hence the isomorphism (\ref{eq1}) follows from Remark \ref{pst=st}, and we have obtained the spectral sequence (\ref{kwak2}). \subsection{Arithmetic case} In this subsection, we define the arithmetic syntomic period map by Galois descent from the geometric case. Then we show that, via this period map, the syntomic descent spectral sequence and the \'etale Hochschild-Serre spectral sequence are compatible. Finally, we show that this implies that the arithmetic syntomic cohomology and \'etale cohomology are isomorphic in a stable range. Let $X\in {\mathcal V}ar_K$.
For $r\geq 0$, we define the canonical syntomic period map $$ \rho_{ \operatorname{syn} }: \quad \mathrm {R} \Gamma_{ \operatorname{syn} }(X_h,r)\to \mathrm {R} \Gamma(X_{\operatorname{\acute{e}t} },{\mathbf Q}_p(r)), $$ as the following composition \begin{align*} \mathrm {R} \Gamma_{ \operatorname{syn} }(X_h,r) & =\mathrm {R} \Gamma(X_h,{\mathcal{S}}(r))_{\mathbf Q} \to \operatorname{holim}_n \mathrm {R} \Gamma(X_{h},{\mathcal{S}}_n(r))_{\mathbf Q} \stackrel{\varepsilon^*}{\to} \operatorname{holim}_n \mathrm {R} \Gamma (G_K, \mathrm {R} \Gamma(X_{\overline{K} ,h},{\mathcal{S}}_n(r)))_{\mathbf Q}\\ & \lomapr{p^{-r}\beta} \operatorname{holim}_n \mathrm {R} \Gamma (G_K, \mathrm {R} \Gamma(X_{\overline{K} ,\operatorname{\acute{e}t} },{\mathbf Z}/p^n(r)^{\prime}))_{\mathbf Q} \stackrel{\sim}{\leftarrow}\operatorname{holim}_n \mathrm {R} \Gamma(X_{\operatorname{\acute{e}t} },{\mathbf Z}/p^n(r)^{\prime})_{\mathbf Q} =\mathrm {R} \Gamma(X_{\operatorname{\acute{e}t} },{\mathbf Q}_p(r)). \end{align*} It induces a morphism of graded $E_{\infty}$ algebras over ${\mathbf Q}_p$. The syntomic period map $\rho_{ \operatorname{syn} }$ is compatible with the syntomic descent and the Hochschild-Serre spectral sequences. \begin{theorem} \label{stHS} For $X\in {\mathcal V}ar_K$, $r\geq 0$, there is a canonical map of spectral sequences $$ \xymatrix{ ^{ \operatorname{syn} }E^{i,j}_2=H^i_{\operatorname{st} }(G_K,H^j(X_{\overline{K} ,\operatorname{\acute{e}t} },{\mathbf Q}_p(r)))\ar[d]^{ \operatorname{can} }\ar@{=>}[r] & H^{i+j}_{ \operatorname{syn} }(X_{h},r)\ar[d]^{\rho_{ \operatorname{syn} }}\\ ^{\operatorname{\acute{e}t} }E^{i,j}_2=H^i(G_K,H^j(X_{\overline{K} ,\operatorname{\acute{e}t} },{\mathbf Q}_p(r)))\ar@{=>}[r] & H^{i+j}(X_{\operatorname{\acute{e}t} },{\mathbf Q}_p(r)) } $$ \end{theorem} \begin{proof}We work in the (classical) derived category. The Bousfield-Kan spectral sequences associated to the homotopy limits defining complexes $C(\mathrm {R} \Gamma^B_{\mathrm{HK}}(X_{\overline{K} ,h})\{r\})$ and $C_{\mathrm{pst}}(\mathrm {R} \Gamma^B_{\mathrm{HK}}(X_{\overline{K} ,h})\{r\})$, and Theorem \ref{speccomp} give us the following commutative diagram of spectral sequences $$ \xymatrix{ ^{II}E^{i,j}_2=H^i(G_K,C(H^j_{\mathrm{HK}}(X_{\overline{K} ,h})\{r\}))\ar@{=>}[r] & H^{i+j}(G_K,C(\mathrm {R} \Gamma^B_{\mathrm{HK}}(X_{\overline{K} ,h})\{r\}))\\ ^{\mathrm{pst}} E^{i,j}_2=H^i(C_{\mathrm{pst}}(H^j_{\mathrm{HK}}(X_{\overline{K} ,h})\{r\}))\ar[u]^{\delta}\ar@{=>}[r] & H^{i+j}(C_{\mathrm{pst}}(\mathrm {R} \Gamma^B_{\mathrm{HK}}(X_{\overline{K} ,h})\{r\}))\ar[u]^{\delta} } $$ More specifically, in the language of Section \ref{Jan}, set $X=C(\mathrm {R} \Gamma^B_{\mathrm{HK}}(X_{\overline{K} ,h})\{r\})$ (hopefully, the notation will not be too confusing).
Filtering the complex $X$ in the direction of the homotopy limit, we obtain a Postnikov system (\ref{postnikov}) with $Y^i=0$, $i\geq 3$, and \begin{align*} Y^0 & =\mathrm {R} \Gamma^B_{\mathrm{HK}}(X_{\overline{K} ,h})\{r\}\otimes_{K_0^{\operatorname{nr} }} B_{\operatorname{st} },\\ Y^1 & =\mathrm {R} \Gamma^B_{\mathrm{HK}}(X_{\overline{K} ,h})\{r-1\}\otimes_{K_0^{\operatorname{nr} }} B_{\operatorname{st} }\oplus (\mathrm {R} \Gamma^B_{\mathrm{HK}}(X_{\overline{K} ,h})\{r\}\otimes_{K^{\operatorname{nr} }_0} B_{\operatorname{st} }\oplus (\mathrm {R} \Gamma_{\mathrm{dR}}(X_{\overline{K} })\otimes_{\overline{K} } B_{\mathrm{dR}})/F^r),\\ Y^2 & =\mathrm {R} \Gamma^B_{\mathrm{HK}}(X_{\overline{K} ,h})\{r-1\}\otimes _{K_0^{\operatorname{nr} }}B_{\operatorname{st} }. \end{align*} Still in the setting of Section \ref{Jan}, take for $A$ the abelian category of sheaves of abelian groups on the pro-\'etale site $\operatorname{Spec} (K)_{\operatorname{pro\acute{e}t}}$ of Scholze \cite[3]{Sch}. \begin{remark} We work with the pro-\'etale site to make sense of the continuous cohomology $\mathrm {R} \Gamma(G_K,\cdot)$. If the reader is willing to accept that this is possible then he can skip the tedious parts of the proof involving passage to the pro-\'etale site (and existence of continuous sections). \end{remark} Recall that there is a projection map $\nu: \operatorname{Spec} (K)_{\operatorname{pro\acute{e}t}}\to \operatorname{Spec} (K)_{\operatorname{\acute{e}t} }$ such that, for an \'etale sheaf ${\mathcal{F}}$, we have the quasi-isomorphism $\nu^*: {\mathcal{F}}\stackrel{\sim}{\to}\mathrm {R} \nu_*\nu^*{\mathcal{F}}$ \cite[5.2.6]{BhS}. More generally, for a topological $G_K$-module $M$, we get a sheaf $\nu M$ on $\operatorname{Spec} (K)_{\operatorname{pro\acute{e}t}}$ by setting, for a profinite $G_K$-set $S$, $\nu M(S)=\operatorname{Hom}_{\operatorname{cont} ,G_K}(S,M)$, and Scholze showed that there is a canonical isomorphism $H^*(\operatorname{Spec} (K)_{\operatorname{pro\acute{e}t}},\nu M)\simeq H^*_{\operatorname{cont} }(G_K,M)$ \cite[3.7 (iii)]{Sch}, \cite{ScE}. In this proof we will need this kind of quasi-isomorphism for complexes $M$ as well, and this will require extra arguments. For that, observe that the functor $\nu$ is left exact. To study right exactness, it suffices to look at the global sections on profinite sets $S$ with a free $G_K$-action of the form $S=S^{\prime}\times G_K$ for a profinite set $S^{\prime}$ with trivial $G_K$-action\footnote{To see this, for a profinite $G_K$-set $S^{\prime}$, use the covering $S^{\prime}\times G_K\to S^{\prime}$ induced from the $G_K$-action on $S^{\prime}$, where the first factor $S^{\prime}$ is given the trivial $G_K$-action.}. Then, for any $G_K$-module $T$, we have $\Gamma(S,\nu T)=\operatorname{Hom}_{\operatorname{cont} }(S^{\prime},T)$. It follows that, for a surjective map $T_1\to T_2$ of $G_K$-modules, the pullback map $\nu T_1\to \nu T_2$ is also surjective if the original map has a continuous set-theoretical section. This is a criterion familiar from continuous cohomology and we will use it often.
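To make the use of this criterion explicit (a routine remark; the only input is the identification $\Gamma(S,\nu T)=\operatorname{Hom}_{\operatorname{cont} }(S^{\prime},T)$ above): if $\pi: T_1\to T_2$ is a surjection of topological $G_K$-modules admitting a continuous set-theoretical section $s: T_2\to T_1$, then for every continuous map $f: S^{\prime}\to T_2$ the composite $s\circ f: S^{\prime}\to T_1$ is continuous and satisfies $\pi\circ (s\circ f)=f$. Hence $\Gamma(S,\nu T_1)\to \Gamma(S,\nu T_2)$ is surjective for every $S=S^{\prime}\times G_K$ as above, and $\nu T_1\to \nu T_2$ is therefore an epimorphism of sheaves.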
We will view the complex $X$ as a complex of sheaves on the site $\operatorname{Spec} (K)_{\operatorname{pro\acute{e}t}}$ in the following way: represent $\mathrm {R} \Gamma^B_{\mathrm{HK}}(X_{\overline{K} ,h})$ and $\mathrm {R} \Gamma_{\mathrm{dR}}(X_{\overline{K} })$ by (filtered) perfect complexes of $K_0^{\operatorname{nr} }$- and $\overline{K} $-modules, respectively, think of $X$ as $\nu X$, and work on the pro-\'etale site. This makes sense, i.e., the functor $\nu$ takes (filtered) quasi-isomorphisms of representatives of $\mathrm {R} \Gamma^B_{\mathrm{HK}}(X_{\overline{K} ,h})$ and $\mathrm {R} \Gamma_{\mathrm{dR}}(X_{\overline{K} })$ to quasi-isomorphisms of the corresponding sheaves $\nu X$. To see this, look at the Postnikov system of sheaves on $\operatorname{Spec} (K)_{\operatorname{pro\acute{e}t}}$ obtained by pulling back by $\nu$ the above Postnikov system. Now, look at the global sections on profinite sets $S=S^{\prime}\times G_K$ as above and note that we have $\Gamma(S, \nu Y^0)=\operatorname{Hom}_{\operatorname{cont} }(S^{\prime},Y^0)$. Conclude that, by perfectness of the Beilinson-Hyodo-Kato complexes, quasi-isomorphisms of representatives of $\mathrm {R} \Gamma^B_{\mathrm{HK}}(X_{\overline{K} ,h})$ yield quasi-isomorphisms of the sheaves $\nu Y^0$. By a similar argument, we get the analogous statement for $Y^2$. For $Y^1$, we just have to show that filtered quasi-isomorphisms of representatives of $\mathrm {R} \Gamma_{\mathrm{dR}}(X_{\overline{K} })$ yield quasi-isomorphisms of the sheaves $\nu ((\mathrm {R} \Gamma_{\mathrm{dR}}(X_{\overline{K} })\otimes_{\overline{K} }B_{\mathrm{dR}})/F^r)$. Again, we look at the global sections on $S=S^{\prime}\times G_K$ as above. By compactness of $S^{\prime}$ we may replace $(\mathrm {R} \Gamma_{\mathrm{dR}}(X_{\overline{K} })\otimes_{\overline{K} }B_{\mathrm{dR}})/F^r$ by $(t^{-i}\mathrm {R} \Gamma_{\mathrm{dR}}(X_{\overline{K} })\otimes_{\overline{K} }B^+_{\mathrm{dR}})/F^r$, for some $i\geq 0$, where, using d\'evissage, we can again argue by (filtered) perfection of $\mathrm {R} \Gamma_{\mathrm{dR}}(X_{\overline{K} })$. Observe that the same argument shows that ${\mathcal{H}}^j(\nu Y^i)\simeq \nu H^j(Y^i)$, for $i=0,1,2$. The above Postnikov system gives rise to an exact couple $$ D_1^{i,j}={\mathcal{H}}^j(X^i),\quad E_1^{i,j}={\mathcal{H}}^j(Y^i) \Longrightarrow {\mathcal{H}}^{i+j}(X) $$ This is the Bousfield-Kan spectral sequence associated to $X$. Consider now the complex $X_{\mathrm{pst}}:=C_{\mathrm{pst}}(\mathrm {R} \Gamma^B_{\mathrm{HK}}(X_{\overline{K} ,h})\{r\})$. We claim that the canonical map \begin{align*} C_{\mathrm{pst}}(\mathrm {R} \Gamma^B_{\mathrm{HK}}(X_{\overline{K},h})\{r\}) \stackrel{\sim}{\to} C(\mathrm {R} \Gamma^B_{\mathrm{HK}}(X_{\overline{K},h})\{r\})^{G_K} \end{align*} is a quasi-isomorphism (recall that taking $G_K$-fixed points corresponds to taking global sections on the pro-\'etale site). In particular, the term on the right-hand side makes sense.
To see this, it suffices to show that the canonical maps \begin{align*} (\mathrm {R} \Gamma_{\mathrm{dR}}(X_{\overline{K},h})/F^r)^{G_K} & \stackrel{\sim}{\to} ((\mathrm {R} \Gamma_{\mathrm{dR}}(X_{\overline{K},h})\otimes_{\overline{K} }B_{\mathrm{dR}})/F^r)^{G_K},\\ \mathrm {R} \Gamma^B_{\mathrm{HK}}(X_{\overline{K},h})^{G_K} & \stackrel{\sim}{\to} (\mathrm {R} \Gamma^B_{\mathrm{HK}}(X_{\overline{K},h})\otimes_{K_0^{\operatorname{nr} }}B_{\operatorname{st} })^{G_K} \end{align*} are quasi-isomorphisms and to use the fact that the action of $G_K$ on $\mathrm {R} \Gamma_{\mathrm{HK}}^B(X_{\overline{K} ,h})$ is smooth. The fact that the first map is a quasi-isomorphism follows from the filtered quasi-isomorphism $\mathrm {R} \Gamma_{\mathrm{dR}}(X)\otimes_K\overline{K} \stackrel{\sim}{\to}\mathrm {R} \Gamma_{\mathrm{dR}}(X_{\overline{K},h})$ and the fact that $B_{\mathrm{dR}}^{G_K}=K$. Similarly, the second map is a quasi-isomorphism because, by \cite[4.2.4]{F1}, $\mathrm {R} \Gamma^B_{\mathrm{HK}}(X_{\overline{K},h})$ is the subcomplex of those elements of $\mathrm {R} \Gamma^B_{\mathrm{HK}}(X_{\overline{K},h})\otimes_{K^{\operatorname{nr} }_0}B_{\operatorname{st} }$ whose stabilizers in $G_K$ are open. Taking the $G_K$-fixed points of the above Postnikov system we get an exact couple $$ {}^{\mathrm{pst}} D_1^{i,j}=H^j(X^i_{\mathrm{pst}}),\quad {}^{\mathrm{pst}}E_1^{i,j}=H^j(Y^i_{\mathrm{pst}}) \Longrightarrow H^{i+j}(X_{\mathrm{pst}}) $$ corresponding to the Bousfield-Kan filtration of the complex $X_{\mathrm{pst}}$. On the other hand, applying $\mathrm {R} \Gamma(\operatorname{Spec} (K)_{\operatorname{pro\acute{e}t}},\cdot)$ to the same Postnikov system we obtain an exact couple $$ {}^{I}D_1^{i,j}=H^j(\operatorname{Spec} (K)_{\operatorname{pro\acute{e}t}},X^i),\quad {}^{I}E_1^{i,j}=H^j(\operatorname{Spec} (K)_{\operatorname{pro\acute{e}t}},Y^i) \Longrightarrow H^{i+j}(\operatorname{Spec} (K)_{\operatorname{pro\acute{e}t}},X) $$ together with a natural map of exact couples $({}^{\mathrm{pst}} D_1^{i,j}, {}^{\mathrm{pst}}E_1^{i,j})\to ({}^{I}D_1^{i,j}, {}^{I}E_1^{i,j})$. We also have the hypercohomology exact couple $$ {}^{II}D_2^{i,j}=H^{i+j}(\operatorname{Spec} (K)_{\operatorname{pro\acute{e}t}},\tau_{\leq j-1}X),\quad {}^{II}E_2^{i,j}=H^i(\operatorname{Spec} (K)_{\operatorname{pro\acute{e}t}},{\mathcal{H}}^j(X)) \Longrightarrow H^{i+j}(\operatorname{Spec} (K)_{\operatorname{pro\acute{e}t}},X) $$ Theorem \ref{speccomp} gives us a natural morphism of exact couples $({}^{I}D^{i,j}_2,{}^{I}E^{i,j}_2)\to ({}^{II}D^{i,j}_2,{}^{II}E^{i,j}_2)$ -- hence a natural morphism of spectral sequences ${}^{I}E_2^{i,j}\to {}^{II}E_2^{i,j}$ compatible with the identity map on the common abutment -- if our original Postnikov system satisfies the equivalent conditions (\ref{ass}). We will check condition (4), i.e., that the following long sequence is exact for all $j$ $$ 0\to {\mathcal{H}}^j(X)\to {\mathcal{H}}^j(Y^0)\to {\mathcal{H}}^j(Y^1)\to {\mathcal{H}}^j(Y^2)\to 0 $$ For that it is enough to show that \begin{enumerate} \item ${\mathcal{H}}^j(\nu Y^i)\simeq \nu H^j(Y^i)$, for $i=0,1,2$; \item ${\mathcal{H}}^j(\nu X)\simeq \nu H^j(X)$; \item the following long sequence of $G_K$-modules $$ 0\to H^j(X)\to H^j(Y^0)\to H^j(Y^1)\to H^j(Y^2)\to 0 $$ is exact; \item the pullback $\nu$ preserves its exactness. \end{enumerate} The assertion in (1) was shown above.
The sequence in (3) is equal to the top sequence in the following commutative diagram (where we set $M=H^j_{\mathrm{HK}}(X_{\overline{K},h})$, $M_{\mathrm{dR}}=H^j_{\mathrm{dR}}(X_{\overline{K},h})$, $E= H^j(X_{\overline{K},\operatorname{\acute{e}t}},{\mathbf Q}_p)$): $$ \xymatrix@C=40pt{ H^j(X)\ar[d]^{\alpha_{\operatorname{\acute{e}t}}^{-1}}_{\wr}\ar[r] & M\otimes_{K_0^{\operatorname{nr}}} B_{\operatorname{st}}\ar[r]^-{(N,1-\varphi_r,\iota)}\ar[d]^{\rho_{\mathrm{HK}}}_{\wr} & M\otimes_{K_0^{\operatorname{nr}}}( B_{\operatorname{st}} \oplus B_{\operatorname{st}})\oplus (M_{\mathrm{dR}}\otimes_{\overline{K}} B_{\mathrm{dR}})/F^r\ar[r]^-{(1-\varphi_{r-1})-N}\ar[d]^{\rho_{\mathrm{HK}}+\rho_{\mathrm{HK}}+\rho_{\mathrm{dR}}}_{\wr}& M\otimes_{K_0^{\operatorname{nr}}} B_{\operatorname{st}}\ar[d]^{\rho_{\mathrm{HK}}}_{\wr} \\ E(r)\ar@{^{(}->}[r] & E\otimes B_{\operatorname{st}}\ar[r]^-{(N,1-\varphi_r,\iota)} & E\otimes (B_{\operatorname{st}}\oplus B_{\operatorname{st}})\oplus E\otimes B_{\mathrm{dR}}/F^r\ar@{->>}[r]^-{(1-\varphi_{r-1})-N} & E\otimes B_{\operatorname{st}} } $$ Since the bottom sequence is just a fundamental exact sequence of $p$-adic Hodge theory, the top sequence is exact, as wanted. To prove assertion (4), we pass to the bottom exact sequence above and apply $\nu$ to it. It is easy to see that it is enough now to show that the following surjections have continuous ${\mathbf Q}_p$-linear sections: \begin{align*} B_{\operatorname{st}}\stackrel{N}{\to}B_{\operatorname{st}},\quad B_{\operatorname{cr}} \xrightarrow{(1-\varphi_r, \operatorname{can})}B_{\operatorname{cr}}\oplus B_{\mathrm{dR}}/F^r. \end{align*} For the monodromy, write $B_{\operatorname{st}}=B_{\operatorname{cr}}[u_s]$ and take for a continuous section the map induced by $bu_s^i\mapsto -(b/(i+1))u_s^{i+1}$, $b\in B_{\operatorname{cr}}$. For the second map, the existence of a continuous section was proved in \cite[1.18]{BK}. For a different argument: observe that an analogous statement was proved in \cite[Prop. II.3.1]{Col} with $B_{\max}$ in place of $B_{\operatorname{cr}}$ as a consequence of the general theory of $p$-adic Banach spaces. We will just modify it here. Write $A_i=t^{-i}B^+_{\operatorname{cr}}$ and $B_i=t^{-i}B^+_{\operatorname{cr}}\oplus t^{-i}B^+_{\mathrm{dR}}/t^r$ for $i\geq 1$. These are $p$-adic Banach spaces. Observe that $B_i\subset B_{i+1}$ is closed. Indeed, it is enough to show that $tB^+_{\operatorname{cr}}\subset B^+_{\operatorname{cr}}$ is closed. But we have $tB^+_{\operatorname{cr}}=\bigcap_{n\geq 0}\ker(\theta\circ \varphi^n)$. It follows \cite[Prop. I.1.5]{Col} that we can find a closed complement $C_{i+1}$ of $B_i$ in $B_{i+1}$. Set $f=(1-\varphi_r, \operatorname{can}):B_{\operatorname{cr}}\to B_{\operatorname{cr}}\oplus B_{\mathrm{dR}}/F^r$. We know that $f$ maps $A_i$ onto $B_i$. Write $t^{-i}B^+_{\operatorname{cr}}\oplus t^{-i}B^+_{\mathrm{dR}}/t^r=B_1\oplus (\oplus_{j=2}^{i-1}C_j)$. By \cite[Prop. I.1.5]{Col}, we can find a continuous section $s_1: B_1\to A_1$ of $f$ and, if $i\geq 2$, a continuous section $s_i: C_i\to A_i$ of $f$. Define the map $s: t^{-i}B^+_{\operatorname{cr}}\oplus t^{-i}B^+_{\mathrm{dR}}/t^r \to B_{\operatorname{cr}}$ by $s_1$ on $B_1$ and by $s_i$ on $C_i$ for $i\geq 2$. Taking the inductive limit over $i$ we get our section of $f$. To prove assertion (2), take a perfect representative of the complex $\mathrm{R}\Gamma(X_{\overline{K},\operatorname{\acute{e}t}},{\mathbf Z}_p(r))$.
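(Coming back for a moment to the section of $N$ used above: here is a minimal sanity check, under the standard normalization $N=-\frac{d}{du_s}$ on $B_{\operatorname{st}}=B_{\operatorname{cr}}[u_s]$, so that $N(u_s^{i+1})=-(i+1)u_s^{i}$; this normalization is a convention, and with the opposite sign one simply drops the minus sign in the formula for the section. Since $N$ vanishes on $B_{\operatorname{cr}}$ and is a derivation, one computes
$$
N\Big(-\tfrac{b}{i+1}\,u_s^{i+1}\Big)=-\tfrac{b}{i+1}\cdot\big(-(i+1)\big)\,u_s^{i}=b\,u_s^{i},\qquad b\in B_{\operatorname{cr}},
$$
so the map above is indeed a right inverse of $N$.)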
Consider the complex $Z=\mathrm{R}\Gamma(X_{\overline{K},\operatorname{\acute{e}t}},{\mathbf Q}_p(r))$ as a complex of sheaves on $\operatorname{Spec}(K)_{\operatorname{pro\acute{e}t}}$. As before, we see that this makes sense and we easily find that (canonically) ${\mathcal{H}}^j(Z)\simeq \nu H^j(X_{\overline{K},\operatorname{\acute{e}t}},{\mathbf Q}_p(r))$. To prove (2), it is enough to show that we can also pass with the map $\alpha_{\operatorname{\acute{e}t}}: \mathrm{R}\Gamma(X_{\overline{K},\operatorname{\acute{e}t}},{\mathbf Q}_p(r))\stackrel{\sim}{\to} C(\mathrm{R}\Gamma^B_{\mathrm{HK}}(X_{\overline{K},h})\{r\})$ to the site $\operatorname{Spec}(K)_{\operatorname{pro\acute{e}t}}$. Looking at its definition (cf. (\ref{definition1})) we see that we need to show that the period quasi-isomorphisms $\rho_{\operatorname{cr}}, \rho_{\mathrm{HK}}, \rho_{\mathrm{dR}}$ as well as the quasi-isomorphism $$ {\mathbf Q}_p(r)\stackrel{\sim}{\to}[ B_{\operatorname{st}} \xrightarrow{(N,1-\varphi_r,\iota)} B_{\operatorname{st}}\oplus B_{\operatorname{st}}\oplus B_{\mathrm{dR}}/F^r \xrightarrow{(1-\varphi_{r-1})-N}B_{\operatorname{st}}] $$ can be lifted to the pro-\'etale site. The last fact we have just shown. For the crystalline period map $\rho_{\operatorname{cr}}$ this follows from the fact that it is defined integrally and all the relevant complexes are perfect. For the Hyodo-Kato period map $\rho_{\mathrm{HK}}$ it follows from the case of $\rho_{\operatorname{cr}}$ and from the perfectness of the complexes involved in the definition of the Beilinson-Hyodo-Kato map. For the de Rham period map $\rho_{\mathrm{dR}}$ this follows from the perfectness of the involved complexes as well as from the exactness of $\operatorname{holim}_n$ (in the definition of $\rho_{\mathrm{dR}}$) on the pro-\'etale site of $K$ (cf. \cite[3.18]{Sch}). We define the map of spectral sequences $\delta:=(\delta_D,\delta):({}^{\mathrm{pst}} D_2^{i,j}, {}^{\mathrm{pst}}E_2^{i,j})\to ({}^{II}D^{i,j}_2,{}^{II}E^{i,j}_2)$ -- the one we stated at the beginning of the proof -- as the composition of the two maps constructed above: $$\delta:\quad ({}^{\mathrm{pst}} D_2^{i,j}, {}^{\mathrm{pst}}E_2^{i,j})\to({}^{I}D^{i,j}_2,{}^{I}E^{i,j}_2)\to ({}^{II}D^{i,j}_2,{}^{II}E^{i,j}_2).$$ To get the spectral sequence from the theorem we need to pass from $^{II}E_2$ to the Hochschild-Serre spectral sequence. To do that, consider the hypercohomology exact couple $$ {}^{\operatorname{\acute{e}t}}D_2^{i,j}=H^{i+j}(\operatorname{Spec}(K)_{\operatorname{pro\acute{e}t}},\tau_{\leq j-1}Z),\quad {}^{\operatorname{\acute{e}t}}E_2^{i,j}=H^i(\operatorname{Spec}(K)_{\operatorname{pro\acute{e}t}},{\mathcal{H}}^j(Z)) \Longrightarrow H^{i+j}(\operatorname{Spec}(K)_{\operatorname{pro\acute{e}t}},Z) $$ and, via $\alpha_{\operatorname{\acute{e}t}}^{-1}$, a natural morphism of exact couples $({}^{II}D^{i,j}_2,{}^{II}E^{i,j}_2)\to ({}^{\operatorname{\acute{e}t}}D^{i,j}_2,{}^{\operatorname{\acute{e}t}}E^{i,j}_2)$ -- hence a natural morphism of spectral sequences ${}^{II}E_2^{i,j}\to {}^{\operatorname{\acute{e}t}}E_2^{i,j}$ compatible with the map $\alpha_{\operatorname{\acute{e}t}}^{-1}$ on the abutment.
We have a quasi-isomorphism $\psi: \mathrm{R}\Gamma(\operatorname{Spec}(K)_{\operatorname{pro\acute{e}t}},Z)\stackrel{\sim}{\to} \mathrm{R}\Gamma(X_{\operatorname{\acute{e}t}},{\mathbf Q}_p(r))$ defined as the composition \begin{align*} \psi:\quad \mathrm{R}\Gamma(\operatorname{Spec}(K)_{\operatorname{pro\acute{e}t}},\mathrm{R}\Gamma(X_{\overline{K},\operatorname{\acute{e}t}},{\mathbf Q}_p(r))) & \stackrel{\sim}{\to}{\mathbf Q}\otimes\operatorname{holim}_n \mathrm{R}\Gamma(G_K,\mathrm{R}\Gamma (X_{\overline{K},\operatorname{\acute{e}t}},{\mathbf Z}/p^n(r)))\\ & ={\mathbf Q}\otimes \operatorname{holim}_n \mathrm{R}\Gamma(X_{\operatorname{\acute{e}t}},{\mathbf Z}/p^n(r))=\mathrm{R}\Gamma (X_{\operatorname{\acute{e}t}},{\mathbf Q}_p(r)). \end{align*} We have obtained the following natural maps of spectral sequences $$ \xymatrix{^{\operatorname{syn}}E^{i,j}_2=H^i_{\operatorname{st}}(G_K,H^j(X_{\overline{K},\operatorname{\acute{e}t}},{\mathbf Q}_p(r)))\ar[d]_{\wr}\ar@{=>}[r] & H^{i+j}_{\operatorname{syn}}(X_{h},r)\ar[d]^{\alpha_{\operatorname{syn}}}_{\wr}\\ E^{i,j}_2=H^i(C_{\operatorname{st}}(H^j_{\mathrm{HK}}(X_{h})\{r\}))\ar[d]^{\alpha_{\operatorname{\acute{e}t}}^{-1}\delta\varepsilon^*}\ar@{=>}[r] & H^{i+j}(C_{\operatorname{st}}(\mathrm{R}\Gamma^B_{\mathrm{HK}}(X_{h})\{r\}))\ar[d]^{\psi \alpha_{\operatorname{\acute{e}t}}^{-1}\delta \varepsilon^*}\\ ^{\operatorname{\acute{e}t}}E^{i,j}_2=H^i(G_K,H^j(X_{\overline{K},\operatorname{\acute{e}t}},{\mathbf Q}_p(r)))\ar@{=>}[r] & H^{i+j}(X_{\operatorname{\acute{e}t}},{\mathbf Q}_p(r)) } $$ It remains to show that the right vertical composition $\gamma: H^{i+j}_{\operatorname{syn}}(X_{h},r)\to H^{i+j}(X_{\operatorname{\acute{e}t}},{\mathbf Q}_p(r))$ is equal to the map $\rho_{\operatorname{syn}}$. Since we have the equality $\alpha_{\operatorname{syn}}=\rho_{\operatorname{syn}}\alpha_{\operatorname{\acute{e}t}}$ (in the derived category) from (\ref{compatibility0}) and, by Lemma \ref{compatibilit01}, $\varepsilon^*\alpha_{\operatorname{syn}}=\alpha_{\operatorname{syn}}\varepsilon^*$, the map $\gamma$ can be written as the composition \begin{align*} \tilde{\rho}_{\operatorname{syn}}:\quad H^{i+j}_{\operatorname{syn}}(X_{h},r)\stackrel{\varepsilon^*}{\to}H^{i+j}(\operatorname{Spec}(K)_{\operatorname{pro\acute{e}t}},\nu \mathrm{R}\Gamma_{\operatorname{syn}}(X_{\overline{K},h},r)) & \stackrel{\rho_{\operatorname{syn}}}{\to} H^{i+j}(\operatorname{Spec}(K)_{\operatorname{pro\acute{e}t}},\nu\mathrm{R}\Gamma (X_{\overline{K},\operatorname{\acute{e}t}},{\mathbf Q}_p(r)))\\ & \stackrel{\psi}{\to} H^{i+j}(X_{\operatorname{\acute{e}t}},{\mathbf Q}_p(r)), \end{align*} where the period map $\rho_{\operatorname{syn}}$ is understood to be a map of sheaves on $\operatorname{Spec}(K)_{\operatorname{pro\acute{e}t}}$. There is no problem with that since we only care about the induced map on cohomology groups. It is easy now to see that $\tilde{\rho}_{\operatorname{syn}}=\rho_{\operatorname{syn}}$, as wanted. \end{proof} \begin{remark} If $X$ is proper and smooth, it is known that the \'etale Hochschild-Serre spectral sequence degenerates, i.e., that ${}^{\operatorname{\acute{e}t}}E_2={}^{\operatorname{\acute{e}t}}E_{\infty}$. It is very likely that the syntomic descent spectral sequence degenerates in this case as well, i.e., that ${}^{\operatorname{syn}}E_2={}^{\operatorname{syn}}E_{\infty}$\footnote{This was, in fact, shown in \cite{DN}.}.
\end{remark} \begin{corollary} \label{BK1} For $X\in {\mathcal V}ar_{K}$, we have a canonical quasi-isomorphism $$\rho_{\operatorname{syn}}: \quad\tau_{\leq r}\mathrm{R}\Gamma_{\operatorname{syn}}(X_{h},r)_{\mathbf Q}\stackrel{\sim}{\to} \tau_{\leq r}\mathrm{R}\Gamma(X_{\operatorname{\acute{e}t}},{\mathbf Q}_p(r)). $$ \end{corollary} \begin{proof} By Theorem \ref{stHS}, the syntomic descent and the Hochschild-Serre spectral sequences are compatible. We have $D_j=H^j_{\mathrm{HK}}(X_{\overline{K},h})\{r\}\in MF_K^{\operatorname{ad}}(\varphi, N,G_K)$. For $j\leq r$, $F^{1}D_{j,K}=F^{1-(r-j)}H^j_{\mathrm{dR}}(X_{h})=0$. Hence, by Proposition \ref{resolution33}, we have $^{\operatorname{syn}}E^{i,j}_2\stackrel{\sim}{\to} {}^{\operatorname{\acute{e}t}}E^{i,j}_2$. This implies that $\rho_{\operatorname{syn}}: \tau_{\leq r}\mathrm{R}\Gamma_{\operatorname{syn}}(X_h,r)\stackrel{\sim}{\to} \tau_{\leq r}\mathrm{R}\Gamma(X_{\operatorname{\acute{e}t}},{\mathbf Q}_p(r))$, as wanted. \end{proof} \begin{remark} All of the above automatically extends to finite diagrams of $K$-varieties, hence to essentially finite diagrams of $K$-varieties (i.e., diagrams for which every truncation $\tau_{\leq n}$ of their cohomology is computed by truncating the cohomology of some finite diagram). This includes, in particular, simplicial and cubical varieties. \end{remark} \begin{proposition} \label{compBK} Let $X\in{\mathcal V}ar_K$ and $q,r\geq 0$. The composition \begin{align*} H^q_{\mathrm{dR}}(X)/F^r \stackrel{\partial}{\to} H^{q+1}_{\operatorname{syn}}(X_h,r) \stackrel{\rho_{\operatorname{syn}}}{\longrightarrow} H^{q+1}_{\operatorname{\acute{e}t}}(X,\mathbf{Q}_p(r))\to H^{q+1}_{\operatorname{\acute{e}t}}(X_{\overline{K}},\mathbf{Q}_p(r)) \end{align*} is the zero map. The map induced by the syntomic descent spectral sequence $$ H^q_{\mathrm{dR}}(X)/F^r\to H^1(G_K,H^q_{\operatorname{\acute{e}t}}(X_{\overline{K}},\mathbf{Q}_p(r))) $$ is equal to the Bloch-Kato exponential associated with the Galois representation $V^q(r) = H^q_{\operatorname{\acute{e}t}}(X_{\overline{K}},\mathbf{Q}_p(r))$. \end{proposition} \begin{proof}In what follows we will omit the passage to the pro-\'etale site. Consider the Postnikov system from the proof of Theorem \ref{stHS}, which arises from the complex $X=C(\mathrm{R}\Gamma^B_{\mathrm{HK}}(X_{\overline{K},h})\{r\})$; then $Y^p = C^p(\mathrm{R}\Gamma^B_{\mathrm{HK}}(X_{\overline{K},h})\{r\})$. The discussion from Example \ref{filtered} then applies to the functor $f(-) = (-)^{G_K}$ and yields the following four exact couples. \noindent (1) $D_1^{p,q} = H^q(X^p)$, $E_1^{p,q} = H^q(Y^p) = C^p(H^q_{\mathrm{HK}}(X_{\overline{K},h})\{r\}) = C^p(H^q_{\mathrm{HK}}\{r\})$. The corresponding quasi-isomorphism $H^q(X) \stackrel{\sim}{\to} E_1^{{\scriptscriptstyle\bullet},q}$ is then identified, via the various period maps, with $$ V^q(r) \stackrel{\sim}{\to} C(H^q_{\mathrm{HK}}\{r\}) = C(D_{\mathrm{pst}}(V^q(r))).$$ (2) ${}^f D_1^{p,q} = H^q(f(X^p))$, ${}^f E_1^{p,q} = H^q(f(Y^p)) = f(H^q(Y^p)) = C^p_{\operatorname{st}}(H^q_{\mathrm{HK}}\{r\}) = f(E_1^{p,q})$. \noindent (3) ${}^I D_1^{p,q} = (\mathrm{R}^q f)(X^p)$, ${}^I E_1^{p,q} = (\mathrm{R}^q f)(Y^p)$. \noindent (4) ${}^{II} D_2^{p,q} = (\mathrm{R}^{p+q}f)(\tau_{\leq q-1} X)$, ${}^{II} E_2^{p,q} = (\mathrm{R}^p f)(H^q(X)) = H^p(G_K, V^q(r))$.
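For orientation, we recall (this is just the standard definition from \cite{BK}, stated here as a reference point and not needed in the argument) that for a de Rham representation $V$ of $G_K$ the Bloch-Kato exponential $\exp_{\operatorname{BK}}\colon D_{\mathrm{dR}}(V)/F^0 \to H^1(G_K,V)$ is the map induced by the connecting homomorphism of the long exact $G_K$-cohomology sequence associated to the exact sequence $$ 0\to V\to (B^{\varphi=1}_{\operatorname{cr}}\otimes_{{\mathbf Q}_p}V)\oplus (B^+_{\mathrm{dR}}\otimes_{{\mathbf Q}_p}V)\to B_{\mathrm{dR}}\otimes_{{\mathbf Q}_p}V\to 0. $$ The discussion below expresses this connecting map in terms of the exact couples (1)-(4) above.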
\noindent There is a canonical morphism of exact couples (2) $\to$ (3) and a morphism (3) $\to$ (4) given by the maps $(u, v)$ from the proof of Theorem \ref{speccomp}. As observed in \ref{BKexp}, the Bloch-Kato exponential for $V = V^q(r)$ is obtained by applying $R^0 f$ to \begin{align*} Z^1 C(H^q_{\mathrm{HK}}\{r\}) & = Z^1(E_1^{{\scriptscriptstyle\bullet},q}) \stackrel{\operatorname{can}}{\longrightarrow} (\sigma_{\geq 1} C(H^q_{\mathrm{HK}}\{r\}))[1] = (\sigma_{\geq 1} C(E_1^{{\scriptscriptstyle\bullet},q})) [1] \stackrel{-\operatorname{can}}{\longrightarrow} C(H^q_{\mathrm{HK}}\{r\})[1] = E_1^{{\scriptscriptstyle\bullet},q} [1]\\ & \stackrel{\sim}{\leftarrow} V^q(r) [1] = H^q(X) [1], \end{align*} hence is equal to the composite map $$ f(Z^1(E_1^{{\scriptscriptstyle\bullet},q})) = Z^1({}^f E_1^{{\scriptscriptstyle\bullet},q}) \to {}^f E_2^{1,q} \stackrel{\operatorname{can}}{\longrightarrow} {}^I E_2^{1,q} \stackrel{-v' = v}{\longrightarrow} (R^1 f)(E_1^{{\scriptscriptstyle\bullet},q}) = {}^{II} E_2^{1,q}, $$ which coincides, in turn, with $$ Z^1 C_{\operatorname{st}}(H^q_{\mathrm{HK}}\{r\}) \stackrel{\operatorname{can}}{\longrightarrow} H^1_{\operatorname{st}}(G_K, V^q(r)) \to H^1(G_K, V^q(r)).$$ After restricting to the de Rham part of $Z^1 C(H^q_{\mathrm{HK}}\{r\})$ we obtain the desired statement about $H^q_{\mathrm{dR}}(X)/F^r$. \end{proof} In more concrete terms, the above proposition says that the following diagram commutes $$ \xymatrix{ H^{q+1}_{\operatorname{syn}}(X_h,r)_0\ar[r]^{\rho_{\operatorname{syn}}} & H^{q+1}_{\operatorname{\acute{e}t}}(X,\mathbf{Q}_p(r))_0\ar[d]\\ H^q_{\mathrm{dR}}(X)/F^r\ar[u]^{\partial}\ar[r]^-{\exp_{\operatorname{BK}}} & H^1(G_K,H^q_{\operatorname{\acute{e}t}}(X_{\overline{K}},\mathbf{Q}_p(r))), } $$ where the subscript $0$ refers to the classes that vanish in $H^{q+1}_{\operatorname{syn}}(X_{\overline{K},h},r)$ and $H^{q+1}_{\operatorname{\acute{e}t}}(X_{\overline{K}},\mathbf{Q}_p(r))$, respectively. \begin{remark} Assume that $r>q$. Then all the maps in the above diagram are isomorphisms. Indeed, we have $F^rH^q_{\mathrm{dR}}(X)=0$. By \cite[Theorem 6.8]{BER}, the map $\exp_{\operatorname{BK}}$ is an isomorphism. By Proposition \ref{BK2} and Corollary \ref{BK1}, so is the period map $\rho_{\operatorname{syn}}$. Since, by Theorem \ref{main}, $H^2(G_K,H^{q}_{\operatorname{\acute{e}t}}(X_{\overline{K}},\mathbf{Q}_p(r)))=H^2(G_K,H^{q-1}_{\operatorname{\acute{e}t}}(X_{\overline{K}},\mathbf{Q}_p(r)))=0$, the vertical map is an isomorphism as well. Hence so is the map $\partial$. \end{remark} \section{Syntomic regulators} In this section we prove that Soul\'e's \'etale regulators land in the semistable Selmer groups. This will be done by constructing syntomic regulators that are compatible with the \'etale ones via the period map and by exploiting the syntomic descent spectral sequence. \subsection{Construction of syntomic Chern classes} We start with the construction of syntomic Chern classes. This will be standard once we prove that syntomic cohomology satisfies the projective space theorem and the homotopy property. In this subsection we will work in the (classical) derived category.
For a fine log-scheme $(X,M)$, log-smooth over $V^{\times}$, we have the log-crystalline and log-syntomic first Chern class maps of complexes of sheaves on $X_{\operatorname{\acute{e}t}}$ \cite[2.2.3]{Ts} \begin{align*} c_1^{\operatorname{cr}}: j_*{\mathcal O}^*_{X_{\operatorname{tr}}}\stackrel{\sim}{\to} M^{\operatorname{gp}}\to M^{\operatorname{gp}}_n\to R\varepsilon_*{\mathcal J}^{[1]}_{X_n/W_n(k)} [1], & \quad c_1^{\operatorname{st}}: j_*{\mathcal O}^*_{X_{\operatorname{tr}}}\stackrel{\sim}{\to} M^{\operatorname{gp}}\to M^{\operatorname{gp}}_n\to R\varepsilon_*{\mathcal J}^{[1]}_{X_n/R_n}[1],\\ c_1^{\mathrm{HK}}: j_*{\mathcal O}^*_{X_{\operatorname{tr}}}\stackrel{\sim}{\to} M^{\operatorname{gp}}\to M^{\operatorname{gp}}_0\to R\varepsilon_*{\mathcal J}^{[1]}_{X_0/W_n(k)^0}[1], & \quad c_1^{\operatorname{syn}}: j_*{\mathcal O}^*_{X_{\operatorname{tr}}}\stackrel{\sim}{\to} M^{\operatorname{gp}}\to {\mathcal{S}}(1)_{X,{\mathbf Q}} [1]. \end{align*} Here $\varepsilon$ is the projection from the corresponding crystalline site to the \'etale site. The maps $c_1^{\operatorname{cr}}, c_1^{\operatorname{st}},$ and $c_1^{\operatorname{syn}}$ are clearly compatible. So are the maps $c_1^{\operatorname{st}}$ and $c_1^{\mathrm{HK}}$. For ss-pairs $(U,\overline{U})$ over $K$, we get the induced functorial maps \begin{align*} c_1^{\operatorname{cr}}:\Gamma(U,{\mathcal O}_U^*) \stackrel{\sim}{\leftarrow}\Gamma(\overline{U},j_*{\mathcal O}_U^*)\to \mathrm{R}\Gamma_{\operatorname{cr}}(U,\overline{U}, {\mathcal J}^{[1]})[1], & \quad c_1^{\operatorname{st}}:\Gamma(U,{\mathcal O}_U^*) \to \mathrm{R}\Gamma_{\operatorname{cr}}((U,\overline{U})/R, {\mathcal J}^{[1]})[1],\\ c_1^{\mathrm{HK}}:\Gamma(U,{\mathcal O}_U^*) \to \mathrm{R}\Gamma_{\operatorname{cr}}((U,\overline{U})_0/W_n(k)^0,{\mathcal J}^{[1]})[1], & \quad c_1^{\operatorname{syn}}: \Gamma(U,{\mathcal O}_U^*) \to \mathrm{R}\Gamma_{\operatorname{syn}}(U,\overline{U},1)_{{\mathbf Q}}[1]. \end{align*} For $X\in {\mathcal V}ar_K$ we can glue the absolute log-crystalline and log-syntomic classes to obtain the absolute crystalline and syntomic first Chern class maps $$c_1^{\operatorname{cr}}: {\mathcal O}^*_{X_h}\to {\mathcal J}_{\operatorname{cr},X}[1],\quad c_1^{\operatorname{syn}}: {\mathcal O}^*_{X_h}\to {\mathcal{S}}(1)_{X,{\mathbf Q}}[1]. $$ They induce (compatible) maps \begin{align*} c_1^{\operatorname{cr}}: \operatorname{Pic}(X) & =H^1(X_{\operatorname{\acute{e}t}},{\mathcal O}^*_X)\to H^1(X_{h},{\mathcal O}^*_X)\stackrel{c_1^{\operatorname{cr}}}{\to}H^2(X_h,{\mathcal J}_{\operatorname{cr}}),\\ c_1^{\operatorname{syn}}: \operatorname{Pic}(X) & =H^1(X_{\operatorname{\acute{e}t}},{\mathcal O}^*_X)\to H^1(X_{h},{\mathcal O}^*_X)\stackrel{c_1^{\operatorname{syn}}}{\to}H^2_{\operatorname{syn}}(X_h,1).
\end{align*} Recall that, for a log-scheme $(X,M)$ as above, we also have the log de Rham first Chern class map $$c_1^{\mathrm{dR}}: j_*{\mathcal O}^*_{X_{\operatorname{tr}}}\stackrel{\sim}{\to} M^{\operatorname{gp}}\to M^{\operatorname{gp}}_n\stackrel{\operatorname{dlog}}{\to} \Omega^{\scriptscriptstyle\bullet}_{(X,M)_n/V^{\times}_n} [1].$$ For ss-pairs $(U,\overline{U})$ over $K$, it induces maps $$c_1^{\mathrm{dR}}:\Gamma(U,{\mathcal O}_U^*) \stackrel{\sim}{\leftarrow}\Gamma(\overline{U},j_*{\mathcal O}_U^*)\to \mathrm{R}\Gamma(\overline{U},\Omega^{\scriptscriptstyle\bullet}_{(U,\overline{U})/V^{\times}})[1]. $$ By the map $\mathrm{R}\Gamma_{\operatorname{cr}}(U,\overline{U},{\mathcal J}^{[1]})\to \mathrm{R}\Gamma_{\operatorname{cr}}(U,\overline{U})\to \mathrm{R}\Gamma(\overline{U},\Omega^{\scriptscriptstyle\bullet}_{(U,\overline{U})/V^{\times}})$ they are compatible with the absolute log-crystalline and log-syntomic classes \cite[2.2.3]{Ts}. \begin{lemma} \label{compatibility} For strict ss-pairs $(U,\overline{U})$ over $K$, the Hyodo-Kato map and the Hyodo-Kato isomorphism $$ \iota: H^2_{\mathrm{HK}}(U,\overline{U})_{\mathbf Q}\to H^2_{\operatorname{cr}}((U,\overline{U})/R)_{\mathbf Q},\quad \iota_{\mathrm{dR},\pi}: H^2_{\mathrm{HK}}(U,\overline{U})_{\mathbf Q}\otimes_{K_0}K\stackrel{\sim}{\to} H^2(\overline{U}_{K},\Omega^{\scriptscriptstyle\bullet}_{(U,\overline{U}_{K})/K}) $$ are compatible with first Chern class maps. \end{lemma} \begin{proof} Since $\iota_{\mathrm{dR},\pi}=i^*_{\pi}\iota\otimes \operatorname{Id}$ and the map $i^*_{\pi}$ is compatible with first Chern classes, it suffices to show the compatibility for the Hyodo-Kato map $\iota$. Let ${\mathcal{L}}$ be a line bundle on $U$. Since the map $\iota$ is a section of the map $i^*_0:H^2_{\operatorname{cr}}((U,\overline{U})/R)_{\mathbf Q}\to H^2_{\mathrm{HK}}(U,\overline{U})_{\mathbf Q}$ and the map $i^*_0$ is compatible with first Chern classes, the element $\zeta\in H^2_{\operatorname{cr}}((U,\overline{U})/R)_{\mathbf Q}$ defined as $\zeta=\iota(c_1^{\mathrm{HK}}({\mathcal{L}}))-c_1^{\operatorname{st}}({\mathcal{L}})$ lies in $TH^2_{\operatorname{cr}}((U,\overline{U})/R)_{\mathbf Q}$. Hence $\zeta=T\gamma$ for some $\gamma$. Since the map $\iota$ is compatible with Frobenius and $\varphi(c_1^{\mathrm{HK}}({\mathcal{L}}))=pc_1^{\mathrm{HK}}({\mathcal{L}})$, $\varphi(c_1^{\operatorname{st}}({\mathcal{L}}))=pc_1^{\operatorname{st}}({\mathcal{L}})$, we have $\varphi(\zeta)=p\zeta$. Since $\varphi(T\gamma)=T^p\varphi(\gamma)$, this implies that $\gamma\in\bigcap_{n=1}^{\infty}T^nH^2_{\operatorname{cr}}((U,\overline{U})/R)_{\mathbf Q}$ (indeed, $pT\gamma=\varphi(\zeta)=T^p\varphi(\gamma)$ gives $\gamma=p^{-1}T^{p-1}\varphi(\gamma)$, and iterating this relation yields $\gamma\in T^{p^n-1}H^2_{\operatorname{cr}}((U,\overline{U})/R)_{\mathbf Q}$ for every $n$), which is not possible unless $\gamma$ (and hence $\zeta$) is zero. But this is what we wanted to show. \end{proof} We have the following projective space theorem for syntomic cohomology. \begin{proposition} \label{projective} Let ${\mathcal{E}}$ be a locally free sheaf of rank $d+1$, $d\geq 0$, on a scheme $X\in {\mathcal V}ar_{K}$. Consider the associated projective bundle $\pi:{\mathbb P}({\mathcal{E}})\to X$.
Then we have the following quasi-isomorphism of complexes of sheaves on $X_h$ \begin{align*} \bigoplus_{i=0}^d{c}_1^{\operatorname{syn}}({\mathcal O}(1))^i\cup\pi^*: \quad \bigoplus_{i=0}^d {\mathcal{S}}(r-i)_{X,{\mathbf Q}}[-2i] \stackrel{\sim}{\to} R\pi_* {\mathcal{S}}(r)_{{\mathbb P}({\mathcal{E}}),{\mathbf Q}}, \quad 0\leq d \leq r. \end{align*} Here, the class ${c}_1^{\operatorname{syn}}({\mathcal O}(1))\in H^2_{\operatorname{syn}}({\mathbb P}({\mathcal{E}})_h, 1)$ refers to the class of the tautological bundle on ${\mathbb P}({\mathcal{E}})$. \end{proposition} \begin{proof} By (tedious) checking of many compatibilities we will reduce the above projective space theorem to the projective space theorems for the Hyodo-Kato and the filtered de Rham cohomologies. To prove our proposition it suffices to show that for any ss-pair $(U,\overline{U})$ over $K$ and the projective space $\pi: {\mathbb P}^d_{\overline{U}}\to\overline{U}$ of dimension $d$ over $\overline{U}$ we have a projective space theorem for syntomic cohomology ($a\geq 0$) $$ \bigoplus_{i=0}^d{c}_1^{\operatorname{syn}}({\mathcal O}(1))^i\cup\pi^*: \quad \bigoplus_{i=0}^d H^{a-2i}_{\operatorname{syn}}(U_h,r-i)\stackrel{\sim}{\to} H^a_{\operatorname{syn}}({\mathbb P}^d_{U,h},r), \quad 0\leq d \leq r. $$ By Proposition \ref{hypercov} and the compatibility of the maps $H^{*}_{\operatorname{syn}}(U,\overline{U},j)_{{\mathbf Q}}\stackrel{\sim}{\to} H^{*}_{\operatorname{syn}}(U_h,j)_{{\mathbf Q}}$ with products and first Chern classes, this reduces to proving a projective space theorem for log-syntomic cohomology, i.e., the isomorphisms \begin{align*} \bigoplus_{i=0}^d{c}_1^{\operatorname{syn}}({\mathcal O}(1))^i\cup\pi^*: \quad \bigoplus_{i=0}^d H^{a-2i}_{\operatorname{syn}}(U,\overline{U},r-i)_{{\mathbf Q}}\stackrel{\sim}{\to} H^a_{\operatorname{syn}}({\mathbb P}^d_{U},{\mathbb P}^d_{\overline{U}},r)_{\mathbf Q}, \quad 0\leq d \leq r, \end{align*} where the class ${c}_1^{\operatorname{syn}}({\mathcal O}(1))\in H^2_{\operatorname{syn}}({\mathbb P}^d_{U},{\mathbb P}^d_{\overline{U}}, 1)$ refers to the class of the tautological bundle on ${\mathbb P}^d_{\overline{U}}$. By the distinguished triangle \begin{equation*} {\mathrm R}\Gamma_{\operatorname{syn}}(U,\overline{U},r)_{{\mathbf Q}} \to \mathrm{R}\Gamma_{\operatorname{cr}}(U,\overline{U},r)_{{\mathbf Q}}\to \mathrm{R}\Gamma_{\mathrm{dR}}(U,\overline{U}_{K})/F^r \end{equation*} and its compatibility with the action of $c_1^{\operatorname{syn}}$, it suffices to prove the following two isomorphisms for the twisted absolute log-crystalline cohomology and for the filtered log de Rham cohomology ($0\leq d \leq r$): \begin{align*} \bigoplus_{i=0}^d{c}_1^{\operatorname{cr}}({\mathcal O}(1))^i\cup\pi^*: &\quad \bigoplus_{i=0}^d H^{a-2i}_{\operatorname{cr}}(U,\overline{U},r-i)_{{\mathbf Q}} \stackrel{\sim}{\to} H^a_{\operatorname{cr}}({\mathbb P}^d_{U},{\mathbb P}^d_{\overline{U}},r)_{\mathbf Q}, \\ \bigoplus_{i=0}^d{c}_1^{\mathrm{dR}}({\mathcal O}(1))^i\cup\pi^*: & \quad \bigoplus_{i=0}^d F^{r-i}H^{a-2i}_{\mathrm{dR}}(U,\overline{U}_K) \stackrel{\sim}{\to} F^{r}H^a_{\mathrm{dR}}({\mathbb P}^d_{U},{\mathbb P}^d_{\overline{U}_K}).
\end{align*} For the log de Rham cohomology, notice that the above map is quasi-isomorphic to the map \cite[3.2]{BE1} $$\bigoplus_{i=0}^d{c}_1^{\mathrm{dR}}({\mathcal O}(1))^i\cup\pi^*: \quad \bigoplus_{i=0}^d F^{r-i}H^{a-2i}_{\mathrm{dR}}(U) \stackrel{\sim}{\to} F^{r}H^a_{\mathrm{dR}}({\mathbb P}^d_{U}), $$ hence is well known to be an isomorphism. For the twisted log-crystalline cohomology, notice that since Frobenius behaves well with respect to ${c}_1^{\operatorname{cr}}$, it suffices to prove a projective space theorem for the absolute log-crystalline cohomology $H^{*}_{\operatorname{cr}}(U,\overline{U})_{{\mathbf Q}}$: \begin{equation*} \bigoplus_{i=0}^d{c}_1^{\operatorname{cr}}({\mathcal O}(1))^i\cup\pi^*: \quad \bigoplus_{i=0}^d H^{a-2i}_{\operatorname{cr}}(U,\overline{U})_{{\mathbf Q}} \stackrel{\sim}{\to} H^a_{\operatorname{cr}}({\mathbb P}^d_{U},{\mathbb P}^d_{\overline{U}})_{\mathbf Q}. \end{equation*} Without loss of generality we may assume that the pair $(U,\overline{U})$ is split over $K$. By the distinguished triangle \begin{equation*} \mathrm{R}\Gamma_{\operatorname{cr}}(U,\overline{U})\to \mathrm{R}\Gamma_{\operatorname{cr}}((U,\overline{U})/R)\stackrel{N}{\to} \mathrm{R}\Gamma_{\operatorname{cr}}((U,\overline{U})/R) \end{equation*} and its compatibility with the action of $c^{\operatorname{cr}}_1({\mathcal O}(1))$ (cf. \cite[Lemma 4.3.7]{Ts}), it suffices to prove a projective space theorem for the log-crystalline cohomology $H^{*}_{\operatorname{cr}}((U,\overline{U})/R)_{{\mathbf Q}}$. Since the $R$-linear isomorphism $\iota: H^{*}_{\mathrm{HK}}(U,\overline{U})_{{\mathbf Q}} \otimes R_{\mathbf Q}\stackrel{\sim}{\to}H^{*}_{\operatorname{cr}}((U,\overline{U})/R)_{{\mathbf Q}}$ is compatible with products \cite[Prop. 4.4.9]{Ts} and first Chern classes (cf. Lemma \ref{compatibility}), we reduce the problem to showing the projective space theorem for the Hyodo-Kato cohomology: \begin{equation*} \bigoplus_{i=0}^d{c}_1^{\mathrm{HK}}({\mathcal O}(1))^i\cup\pi^*: \quad \bigoplus_{i=0}^d H^{a-2i}_{\mathrm{HK}}(U,\overline{U})_{{\mathbf Q}} \stackrel{\sim}{\to} H^a_{\mathrm{HK}}({\mathbb P}^d_{U},{\mathbb P}^d_{\overline{U}})_{\mathbf Q}. \end{equation*} Tensoring with $K$ and using the isomorphism $\iota_{\mathrm{dR},\pi}: H^{*}_{\mathrm{HK}}(U,\overline{U})_{{\mathbf Q}} \otimes_{K_0} K\stackrel{\sim}{\to}H^{*}_{\mathrm{dR}}(U,\overline{U}_K)$, which is compatible with products \cite[Cor. 4.4.13]{Ts} and first Chern classes (cf. Lemma \ref{compatibility}), we reduce to checking the projective space theorem for the log de Rham cohomology $H^{*}_{\mathrm{dR}}(U,\overline{U}_K)$. And we have done this above. \end{proof} The above proof also proves the projective space theorem for the absolute crystalline cohomology. \begin{corollary} \label{projective1} Let ${\mathcal{E}}$ be a locally free sheaf of rank $d+1$, $d\geq 0$, on a scheme $X\in {\mathcal V}ar_{K}$. Consider the associated projective bundle $\pi:{\mathbb P}({\mathcal{E}})\to X$. Then we have the following quasi-isomorphism of complexes of sheaves on $X_h$ \begin{align*} \bigoplus_{i=0}^d{c}_1^{\operatorname{cr}}({\mathcal O}(1))^i\cup\pi^*: \quad \bigoplus_{i=0}^d {\mathcal J}^{[r-i]}_{X,{\mathbf Q}}[-2i] \stackrel{\sim}{\to} R\pi_* {\mathcal J}^{[r]}_{{\mathbb P}({\mathcal{E}}),{\mathbf Q}}, \quad 0\leq d \leq r.
\end{align*} Here, the class ${c}_1^{\operatorname{cr}}({\mathcal O}(1))\in H^2({\mathbb P}({\mathcal{E}})_h, {\mathcal J}_{\operatorname{cr}})$ refers to the class of the tautological bundle on ${\mathbb P}({\mathcal{E}})$. \end{corollary} For $X\in {\mathcal V}ar_K$, using the projective space theorem (cf. Theorem \ref{projective}) and the Chern classes $$ c_0^{\operatorname{syn}}: {\mathbf Q}_p\stackrel{\operatorname{can}}{\to} {\mathcal{S}}(0)_{X_{{\mathbf Q}}},\quad c_1^{\operatorname{syn}}: {\mathcal O}_{X_h}^*\to {\mathcal{S}}(1)_{X_{{\mathbf Q}}}[1], $$ we obtain syntomic Chern classes $c_i^{\operatorname{syn}}({\mathcal{E}})$, for any locally free sheaf ${\mathcal{E}}$ on $X$. Syntomic cohomology has the homotopy invariance property. \begin{proposition} \label{homotopy} Let $X\in {\mathcal V}ar_K$ and let $f: {\mathbb A}^1_X \to X$ be the natural projection from the affine line over $X$ to $X$. Then, for all $r\geq 0$, the pullback map $$f^*:\,\mathrm{R}\Gamma_{\operatorname{syn}}(X_h,r)\xrightarrow{\sim}\mathrm{R}\Gamma_{\operatorname{syn}}({\mathbb A}^1_{X,h},r)$$ is a quasi-isomorphism. \end{proposition} \begin{proof} Localizing in the $h$-topology of $X$ we may assume that $X=U$, the open set of an ss-pair $(U,\overline{U})$ over $K$. Consider the following commutative diagram: $$ \begin{CD} \mathrm{R}\Gamma_{\operatorname{syn}}(U,\overline{U},r)_{\mathbf Q}@> f^*>>\mathrm{R}\Gamma_{\operatorname{syn}}({\mathbb A}^1_{U},{\mathbb P}^1_{\overline{U}},r)_{\mathbf Q}\\ @VV \wr V @VV\wr V\\ \mathrm{R}\Gamma_{\operatorname{syn}}(U_h,r) @> f^*>> \mathrm{R}\Gamma_{\operatorname{syn}}({\mathbb A}^1_{U,h},r) \end{CD} $$ The vertical maps are quasi-isomorphisms by Proposition \ref{hypercov}. It suffices thus to show that the top horizontal map is a quasi-isomorphism. By Proposition \ref{reduction1}, this reduces to showing that the map $$C_{\operatorname{st}}(\mathrm{R}\Gamma_{\mathrm{HK}}(U,\overline{U})_{\mathbf Q}\{r\})\stackrel{f^*}{\to}C_{\operatorname{st}}(\mathrm{R}\Gamma_{\mathrm{HK}}({\mathbb A}^1_{U},{\mathbb P}^1_{\overline{U}})_{\mathbf Q}\{r\})$$ is a quasi-isomorphism. Or, that the map $f: ({\mathbb A}^1_{U},{\mathbb P}^1_{\overline{U}})\to (U,\overline{U})$ induces a quasi-isomorphism on the Hyodo-Kato cohomology and a filtered quasi-isomorphism on the log de Rham cohomology: $$\mathrm{R}\Gamma_{\mathrm{HK}}(U,\overline{U})_{\mathbf Q}\stackrel{f^*}{\to}\mathrm{R}\Gamma_{\mathrm{HK}}({\mathbb A}^1_{U},{\mathbb P}^1_{\overline{U}})_{\mathbf Q},\quad \mathrm{R}\Gamma_{\mathrm{dR}}(U,\overline{U}_K)\stackrel{f^*}{\to}\mathrm{R}\Gamma_{\mathrm{dR}}({\mathbb A}^1_{U},{\mathbb P}^1_{\overline{U}_K}). $$ Without loss of generality we may assume that the pair $(U,\overline{U})$ is split over $K$. Tensoring with $K$ and using the Hyodo-Kato quasi-isomorphism we reduce the Hyodo-Kato case to the log de Rham one. The latter follows easily from the projective space theorem and the existence of the Gysin sequence in log de Rham cohomology. \end{proof} \begin{remark} The above implies that syntomic cohomology is a Bloch-Ogus theory. A proof of this fact was kindly communicated to us by Fr\'ed\'eric D\'eglise and is contained in Appendix~B, Proposition \ref{Bloch-Ogus}. \end{remark} \begin{proposition}For a scheme $X$, let $K_*(X)$ denote Quillen's higher $K$-theory groups of $X$.
For $X\in {\mathcal V}ar_K$, $i,j\geq 0$, there are functorial syntomic Chern class maps $$ c_{i,j}^{\operatorname{syn}}: K_j(X) \rightarrow H^{2i-j}_{\operatorname{syn}}(X_h,i). $$ \end{proposition} \begin{proof} Recall the construction of the classes $c_{i,j}^{\operatorname{syn}}$. First, one constructs universal classes $C^{\operatorname{syn}}_{i,l}\in H^{2i}_{\operatorname{syn}}(B_{\scriptscriptstyle\bullet}GL_{l,h},i)$. By a standard argument, the projective space theorem and the homotopy property show that $$H^*_{\operatorname{syn}}(B_{\scriptscriptstyle\bullet}GL_{l,h},*)\simeq H^*_{\operatorname{syn}}(K,*)[x^{\operatorname{syn}}_1,\ldots,x^{\operatorname{syn}}_l],$$ where the classes $x^{\operatorname{syn}}_i\in H^{2i}_{\operatorname{syn}}(B_{\scriptscriptstyle\bullet}GL_{l,h},i)$ are the syntomic Chern classes of the universal locally free sheaf on $B_{\scriptscriptstyle\bullet}GL_{l}$ (defined via a projective space theorem). For $l\geq i$, we define $$C^{\operatorname{syn}}_{i,l}=x^{\operatorname{syn}}_i\in H^{2i}_{\operatorname{syn}}(B_{\scriptscriptstyle\bullet}GL_{l,h},i).$$ The classes $C^{\operatorname{syn}}_{i,l}\in H^{2i}_{\operatorname{syn}}(B_{\scriptscriptstyle\bullet}GL_{l,h},i)$ yield compatible universal classes (see \cite[p. 221]{Gi}) $C^{\operatorname{syn}}_{i,l}\in H^{2i}_{\operatorname{syn}}(X,GL_l({\mathcal O}_X),i)$, hence a natural map of pointed simplicial sheaves on $X_{\operatorname{ZAR}}$, $C^{\operatorname{syn}}_i:B_{\scriptscriptstyle\bullet}GL({\mathcal O}_X)\to {\mathcal{K}}(2i,{\mathcal{S}}'(i)_{X})$, where ${\mathcal{K}}$ is the Dold--Puppe functor of $\tau_{\geq 0}{\mathcal{S}}'(i)_{X}[2i]$ and ${\mathcal{S}}'(i)_{X}$ is an injective resolution of ${\mathcal{S}}(i)_{X}:=R\varepsilon_*{\mathcal{S}}(i)_{{\mathbf Q}}$, $\varepsilon: X_h\to X_{\operatorname{ZAR}}$. The characteristic classes $c^{\operatorname{syn}}_{i,j}$ are now defined \cite[2.22]{Gi} as the composition \begin{align*} K_j(X) & \to H^{-j}(X,{\mathbf Z}\times B_{\scriptscriptstyle\bullet}GL({\mathcal O}_X)^+) \to H^{-j}(X,B_{\scriptscriptstyle\bullet}GL({\mathcal O}_X)^+)\\ & \stackrel{C^{\operatorname{syn}}_i}{\longrightarrow} H^{-j}(X, {\mathcal{K}}(2i,{\mathcal{S}}'(i)_{X})) \stackrel{h_j}{\rightarrow} H^{2i-j}_{\operatorname{syn}}(X_h,i), \end{align*} where $B_{\scriptscriptstyle\bullet}GL({\mathcal O}_X)^+$ is the (pointed) simplicial sheaf on $X$ associated to the $+$-construction \cite[4.2]{S}. Here, for a (pointed) simplicial sheaf ${\mathcal{E}}_{\scriptscriptstyle\bullet}$ on $X_{\operatorname{ZAR}}$, $H^{-j}(X,{\mathcal{E}}_{\scriptscriptstyle\bullet}) =\pi_j(\mathrm{R}\Gamma (X_{\operatorname{ZAR}},{\mathcal{E}}_{\scriptscriptstyle\bullet}))$ is the generalized sheaf cohomology of ${\mathcal{E}}_{\scriptscriptstyle\bullet}$ \cite[1.7]{Gi}. The map $h_j$ is the Hurewicz map: \begin{align*} H^{-j}(X, {\mathcal{K}}(2i,{\mathcal{S}}'(i)_{X})) & = \pi_j({\mathcal{K}}(2i,{\mathcal{S}}'(i)(X))) \stackrel{h_j}{\rightarrow} H_j({\mathcal{K}}(2i,{\mathcal{S}}'(i)(X)))\\ & = H_j({\mathcal{S}}'(i)(X)[2i])= H^{2i-j}_{\operatorname{syn}}(X_h,i).
\end{align*} \end{proof} \begin{proposition} \label{syn-et} The syntomic and the \'etale Chern classes are compatible, i.e., for $X\in {\mathcal V}ar_K$, $j\geq 0, 2i-j\geq 0$, the following diagram commutes $$ \xymatrix{ & K_j(X)\ar[ld]_{c^{\operatorname{syn}}_{i,j}}\ar[rd]^{c^{\operatorname{\acute{e}t}}_{i,j}} &\\ H^{2i-j}_{\operatorname{syn}}(X_h,i)\ar[rr]^{\rho_{\operatorname{syn}}} & & H^{2i-j}_{\operatorname{\acute{e}t}}(X,{\mathbf Q}_p(i))} $$ \end{proposition} \begin{proof} We can pass to the universal case ($X=B_{\scriptscriptstyle\bullet}GL_l:=B_{\scriptscriptstyle\bullet}GL_l/K$, $l\geq 1$). We have \begin{align*} H^*_{\operatorname{syn}}(B_{\scriptscriptstyle\bullet}GL_{l,h},*) & \simeq H^*_{\operatorname{syn}}(K,*)[x^{\operatorname{syn}}_1,\ldots,x^{\operatorname{syn}}_l],\\ H^*_{\operatorname{\acute{e}t}}(B_{\scriptscriptstyle\bullet}GL_l,*) & \simeq H^*_{\operatorname{\acute{e}t}}(K,*)[x^{\operatorname{\acute{e}t}}_1,\ldots,x^{\operatorname{\acute{e}t}}_l]. \end{align*} By the projective space theorem and the fact that the syntomic period map commutes with products, it suffices to check that $\rho_{\operatorname{syn}}(x_1^{\operatorname{syn}})=x_1^{\operatorname{\acute{e}t}}$ and that the syntomic period map $\rho_{\operatorname{syn}}$ commutes with the classes $c_0^{\operatorname{syn}}: {\mathbf Q}_p\to {\mathcal{S}}(0)_{\mathbf Q}$ and $c_0^{\operatorname{\acute{e}t}}: {\mathbf Q}_p\to {\mathbf Q}_p(0)$. The statement about $c_0$ is clear from the definition of $\rho_{\operatorname{cr}}$; for $c_1$ consider the canonical map $f: B_{\scriptscriptstyle\bullet}GL_l\to B_{\scriptscriptstyle\bullet}GL_{l,\overline{K}}$ and the induced pullback map $$ f^*_{\operatorname{\acute{e}t}}:\quad H^*_{\operatorname{\acute{e}t}}(B_{\scriptscriptstyle\bullet}GL_l,*)=H^*_{\operatorname{\acute{e}t}}(K,*)[x_1,\ldots,x_l]\to H^*_{\operatorname{\acute{e}t}}(B_{\scriptscriptstyle\bullet}GL_{l,\overline{K}},*)={\mathbf Q}_p[\overline{x}_1,\ldots,\overline{x}_l] $$ that sends the Chern classes $x^{\operatorname{\acute{e}t}}_i$ of the universal vector bundle to the classes $\overline{x}^{\operatorname{\acute{e}t}}_i$ of its pullback. It suffices to show that $f^*_{\operatorname{\acute{e}t}}\rho_{\operatorname{syn}}(C_{1,1}^{\operatorname{syn}})=C_{1,1}^{\operatorname{\acute{e}t}}$. But, by definition, $f^*_{\operatorname{\acute{e}t}}\rho_{\operatorname{syn}}=\rho_{\operatorname{syn}}f^*_{\operatorname{syn}}$ and, by construction, we have the following commutative diagram $$ \xymatrix{ H^2_{\operatorname{syn}}(B_{\scriptscriptstyle\bullet}{\mathbb G}_{m,h},1)\ar[r]^{\operatorname{can}}\ar[d]^{\rho_{\operatorname{syn}}} & H^2_{\operatorname{cr}}(B_{\scriptscriptstyle\bullet}{\mathbb G}_{m,\overline{K},h})\ar[d]^{\rho_{\operatorname{cr}}}\\ H^2_{\operatorname{\acute{e}t}}(B_{\scriptscriptstyle\bullet}{\mathbb G}_{m,\overline{K}},{\mathbf Q}_p(1))\ar[r] & H^2_{\operatorname{\acute{e}t}}(B_{\scriptscriptstyle\bullet}{\mathbb G}_{m,\overline{K}}, B^+_{\operatorname{cr}})= H^2_{\operatorname{\acute{e}t}}(B_{\scriptscriptstyle\bullet}{\mathbb G}_{m,\overline{K}},{\mathbf Q}_p(1))\otimes B^+_{\operatorname{cr}} } $$ where the bottom map sends the generator of ${\mathbf Q}_p(1)$ to the element $t\in B^+_{\operatorname{cr}}$ associated to it.
Since the syntomic and the crystalline Chern classes are compatible, it suffices to show that, for a line bundle ${\mathcal{L}}$, $\rho_{\operatorname{cr}}(c_1^{\operatorname{cr}}({\mathcal{L}}))=c_1^{\operatorname{\acute{e}t}}({\mathcal{L}})\otimes t$. But this is \cite[3.2]{BE2}. \end{proof} \begin{remark} If ${\mathcal{X}}$ is a scheme over $V$ and $X={\mathcal{X}}_K$, we can consider the syntomic Chern classes $c_{i,j}^{\operatorname{syn}}: K_j({\mathcal{X}})\to H^{2i-j}_{\operatorname{syn}}(X_h,i)$ defined as the composition $$ K_j({\mathcal{X}})\to K_j(X) \xrightarrow{c_{i,j}^{\operatorname{syn}}} H^{2i-j}_{\operatorname{syn}}(X_h,i). $$ By the above proposition, these classes are compatible with the \'etale Chern classes. Recall that analogous results were proved earlier for ${\mathcal{X}}$ smooth and projective \cite{NM}, for ${\mathcal{X}}$ a complement of a divisor with relative normal crossings in such a scheme, and for ${\mathcal{X}}$ a semistable scheme over $V$ \cite{NT}. \end{remark} \subsection{Image of \'etale regulators}In this subsection we show that Soul\'e's \'etale regulators factor through the semistable Selmer groups. Let $X\in {\mathcal V}ar_K$. For $2r-i-1\geq 0$, set $$ K_{2r-i-1}(X)_0:=\ker(K_{2r-i-1}(X)\xrightarrow{c^{\operatorname{\acute{e}t}}_{r,i+1}} H^0(G_K,H^{i+1}_{\operatorname{\acute{e}t}}(X_{\overline{K}},{\mathbf Q}_p(r)))). $$ Write $r^{\operatorname{\acute{e}t}}_{r,i}$ for the map $$r^{\operatorname{\acute{e}t}}_{r,i}:K_{2r-i-1}(X)_0\to H^1(G_K,H^{i}_{\operatorname{\acute{e}t}}(X_{\overline{K}},{\mathbf Q}_p(r))) $$ induced by the Chern class map $c^{\operatorname{\acute{e}t}}_{r,i+1}$ and the Hochschild-Serre spectral sequence map $\delta: H^{i+1}_{\operatorname{\acute{e}t}}(X,{\mathbf Q}_p(r))_0\to H^1(G_K,H^i_{\operatorname{\acute{e}t}}(X_{\overline{K}},{\mathbf Q}_p(r)))$, where we set $H^{i+1}_{\operatorname{\acute{e}t}}(X,{\mathbf Q}_p(r))_0:=\ker( H^{i+1}_{\operatorname{\acute{e}t}}(X,{\mathbf Q}_p(r))\to H^{i+1}_{\operatorname{\acute{e}t}}(X_{\overline{K}},{\mathbf Q}_p(r)))$. \begin{theorem} \label{Tony} The map $r^{\operatorname{\acute{e}t}}_{r,i}$ factors through the subgroup $$ H^1_{\operatorname{st}}(G_K,H^{i}_{\operatorname{\acute{e}t}}(X_{\overline{K}},{\mathbf Q}_p(r)))\subset H^1(G_K,H^{i}_{\operatorname{\acute{e}t}}(X_{\overline{K}},{\mathbf Q}_p(r))).$$ \end{theorem} \begin{proof} By Proposition \ref{syn-et}, we have the following commutative diagram $$ \xymatrix{K_{2r-i-1}(X)\ar[d]^{c^{\operatorname{syn}}_{r,i+1}}\ar[dr]^{c^{\operatorname{\acute{e}t}}_{r,i+1}}\\ H^{i+1}_{\operatorname{syn}}(X_h,r)\ar[r]^-{\rho_{\operatorname{syn}}} & H^{i+1}_{\operatorname{\acute{e}t}}(X,{\mathbf Q}_p(r))\ar[r] & H^{i+1}_{\operatorname{\acute{e}t}}(X_{\overline{K}},{\mathbf Q}_p(r)) } $$ Hence the Chern class map $c^{\operatorname{syn}}_{r,i+1}:K_{2r-i-1}(X)_0\to H^{i+1}_{\operatorname{syn}}(X_h,r)$ factors through $H^{i+1}_{\operatorname{syn}}(X_h,r)_0:=\ker(H^{i+1}_{\operatorname{syn}}(X_h,r)\stackrel{\rho_{\operatorname{syn}}}{\to} H^{i+1}_{\operatorname{\acute{e}t}}(X_{\overline{K}},{\mathbf Q}_p(r)))$. Compatibility of the syntomic descent and the Hochschild-Serre spectral sequences (cf.
Theorem \ref{stHS}) yields the following commutative diagram $$ \xymatrix{ K_{2r-i-1}(X)_0\ar[d]^{c^{\operatorname{syn}}_{r,i+1}}\ar[dr]^{c^{\operatorname{\acute{e}t}}_{r,i+1}}\\ H^{i+1}_{\operatorname{syn}}(X_h,r)_0\ar[d]^{\delta}\ar[r]^{\rho_{\operatorname{syn}}} & H^{i+1}_{\operatorname{\acute{e}t}}(X,{\mathbf Q}_p(r))_0\ar[d]^{\delta}\\ H^1_{\operatorname{st}}(G_K,H^{i}_{\operatorname{\acute{e}t}}(X_{\overline{K}},{\mathbf Q}_p(r)))\ar[r]^{\operatorname{can}} & H^1(G_K,H^{i}_{\operatorname{\acute{e}t}}(X_{\overline{K}},{\mathbf Q}_p(r))) } $$ Our theorem follows. \end{proof} \begin{remark}The question of the image of Soul\'e's regulators $r_{r,i}^{\operatorname{\acute{e}t}}$ was raised by Bloch-Kato in \cite{BK} in connection with their Tamagawa Number Conjecture. Theorem \ref{Tony} is known to follow from the constructions of Scholl \cite{Sc}. The argument goes as follows. Recall that for a class $y\in K_{2r-i-1}(X)_0$ Scholl constructs an explicit extension $E_y\in \operatorname{Ext}^1_{{\mathcal{M}}{\mathcal{M}}_K}({\mathbf Q}(-r),h^{i}(X))$ in the category of mixed motives over $K$. The association $y\mapsto E_y$ is compatible with the \'etale cycle class and realization maps. By the de Rham Comparison Theorem, the \'etale realization $r^{\operatorname{\acute{e}t}}_{r,i}(y)$ of the extension class $E_y$ in $$ \operatorname{Ext}^1_{G_K}({\mathbf Q}_p(-r),H^{i}(X_{\overline{K}},{\mathbf Q}_p))=H^1(G_K,H^i_{\operatorname{\acute{e}t}}(X_{\overline{K}},{\mathbf Q}_p(r)))$$ is de Rham, hence potentially semistable by \cite{BER}, as wanted. \end{remark} \appendix \section{Vanishing of $H^2(G_K,V)$, by Laurent Berger} Let $V$ be a $\mathbf{Q}_p$-linear representation of $G_K$. In this appendix we prove the following theorem. \begin{theorem} \label{main} If $V$ is semistable and all its Hodge-Tate weights are $\geq 2$, then $H^2(G_K,V)=0$. \end{theorem} Let $\mathrm{D}(V)$ be Fontaine's $(\varphi,\Gamma)$-module attached to $V$ \cite{Fon}. It comes with a Frobenius map $\varphi$ and an action of $\Gamma_K$. Let $H_K = \operatorname{Gal}(\overline{K}/K(\mu_{p^\infty}))$ and let $I_K = \operatorname{Gal}(\overline{K}/K^{\operatorname{nr}})$. The injectivity of the restriction map $H^2(G_K,V) \to H^2(G_L,V)$ for $L/K$ finite allows us to replace $K$ by a finite extension, so that we can assume that $H_K I_K = G_K$ and that $\Gamma_K \simeq \mathbf{Z}_p$. Let $\gamma$ be a topological generator of $\Gamma_K$. Recall (\S I.5 of \cite{CC99}) that we have a map $\psi : \mathrm{D}(V) \to \mathrm{D}(V)$. Ideally, our proof of this theorem would go as follows. We use the Hochschild-Serre spectral sequence $$ H^i(G_K/I_K,H^j(I_K,V|_{I_K})) \Longrightarrow H^{i+j}(G_K,V) $$ and, interpreting Galois cohomology in terms of $(\varphi,\Gamma)$-modules, we compute that $H^2(I_K,V|_{I_K})=0$ and $H^1(I_K,V|_{I_K})=\hat{K}^{\operatorname{nr}}\otimes_KD_{\mathrm{dR}}(V)$. We conclude since, by Hilbert 90, $H^1(G_K/I_K,H^1(I_K,V|_{I_K})) =0$. However, we do not, in general, have Hochschild-Serre spectral sequences for continuous cohomology. We therefore mimic the above argument with direct computations on continuous cocycles (again using $(\varphi, \Gamma)$-modules). Laurent Berger is grateful to Kevin Buzzard for discussions related to the above spectral sequence.
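The Hilbert 90 input alluded to above can be made explicit (this is a standard argument, recalled here only for the reader's convenience): writing $G_K/I_K\simeq \operatorname{Gal}(\overline{k}/k)\simeq\hat{\mathbf{Z}}$ for the residue field $k$ of $K$, one has $$ H^1(G_K/I_K,\widehat{K}^{\operatorname{nr}})=0, $$ since the lattice ${\mathcal O}_{\widehat{K}^{\operatorname{nr}}}$ reduces modulo a uniformizer of $K$ to $\overline{k}$, the additive Hilbert 90 gives $H^1(\operatorname{Gal}(\overline{k}/k),\overline{k})=0$, and one concludes by d\'evissage modulo powers of the uniformizer and a passage to the limit using completeness. This vanishing is used again at the very end of the proof of Theorem \ref{main} below.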
\begin{lemma} \label{cc} \begin{enumerate} \item If $V$ is a representation of $G_K$, then there is an exact sequence $$0 \to \mathrm{D}(V)^{\psi=1} / (\gamma-1) \to H^1(G_K,V) \to (\mathrm{D}(V)/(\psi-1))^{\Gamma_K} \to 0;$$ \item We have $H^2(G_K,V) = \mathrm{D}(V)/(\psi-1,\gamma-1)$. \end{enumerate} \end{lemma} \begin{proof} See I.5.5 and II.3.2 of \cite{CC99}. \end{proof} \begin{lemma} \label{psimsur} We have $\mathrm{D}(V|_{I_K})/(\psi-1)=0$. \end{lemma} \begin{proof} Since $V|_{I_K}$ corresponds to the case when $k$ is algebraically closed, see the proof of Lemma VI.7 of \cite{L01}. \end{proof} Let $\gamma_I$ denote a generator of $\Gamma_{\widehat{K}^{\operatorname{nr}}}$. \begin{lemma} \label{psigam} The natural map $\mathrm{D}(V|_{I_K})^{\psi=1} / (\gamma_I-1) \to (\mathrm{D}(V|_{I_K}) / (\gamma_I-1))^{\psi=1}$ is an isomorphism if $V^{I_K}=0$. \end{lemma} \begin{proof} This map is part of the six-term exact sequence obtained by applying $\gamma_I-1$ to $0 \to \mathrm{D}(V|_{I_K})^{\psi=1} \to \mathrm{D}(V|_{I_K}) \xrightarrow{\psi-1} \mathrm{D}(V|_{I_K}) \to 0$. Its kernel is contained in $\mathrm{D}(V|_{I_K})^{\gamma_I=1}$, which is $0$ since $V^{I_K}=0$ (note that the inclusion $(\widehat{K}^{\operatorname{nr}} \otimes V)^{G_K} \subseteq (\widehat{\mathcal E}^{\operatorname{nr}} \otimes V)^{G_K} = \mathrm{D}(V)^{G_K}$ is an isomorphism). \end{proof} Suppose that $x \in \mathrm{D}(V)/(\psi-1,\gamma-1)$. If $\tilde{x} \in \mathrm{D}(V)$ lifts $x$, then Lemma \ref{psimsur} gives us an element $y\in\mathrm{D}(V|_{I_K})$ such that $(\psi-1)y = \tilde{x}$. Define a cocycle $\delta(x) \in Z^1(G_K/I_K, \mathrm{D}(V|_{I_K})^{\psi=1} / (\gamma_I-1))$ by $\delta(x) : \overline{g} \mapsto (g-1)(y)$ if $g \in G_K$ lifts $\overline{g} \in G_K/I_K$. \begin{proposition} \label{hsss} If $V^{I_K}=0$, then the map \[ \delta : \mathrm{D}(V)/(\psi-1,\gamma-1) \to H^1(G_K/I_K, (\mathrm{D}(V|_{I_K}) / (\gamma_I-1))^{\psi=1}) \] is well-defined and injective. \end{proposition} \begin{proof} We first check that $\delta(x)(g) \in (\mathrm{D}(V|_{I_K}) / (\gamma_I-1))^{\psi=1}$. We have $(\psi-1)(g-1)(y) = (g-1)(x)$. If we write $g = ih \in I_KH_K$, then $(g-1)x = (ih-1)x=(i-1)x \in (\gamma_I-1) \mathrm{D}(V|_{I_K})$ since $\gamma_I-1$ divides the image of $i-1$ in $\mathbf{Z}_p\dcroc{\Gamma_{\widehat{K}^{\operatorname{nr}}}}$. This implies that $\delta(x)(g) \in (\mathrm{D}(V|_{I_K}) / (\gamma_I-1))^{\psi=1}$. We now check that $\delta(x)$ does not depend on the choices. If we choose another lift $g' \in G_K$ of $\overline{g} \in G_K/I_K$, then $g'=ig$ for some $i \in I_K$ and $(g'-1)y-(g-1)y = (i-1)gy \in (\gamma_I-1) \mathrm{D}(V|_{I_K})$ since $\gamma_I-1$ divides the image of $i-1$ in $\mathbf{Z}_p\dcroc{\Gamma_{\widehat{K}^{\operatorname{nr}}}}$. If we choose another $y'$ such that $(\psi-1)y'=\tilde{x}$, then $y-y' \in \mathrm{D}(V|_{I_K})^{\psi=1}$, so that the two resulting cocycles are cohomologous. Finally, if $\tilde{x}'$ is another lift of $x$, then $\tilde{x}' - \tilde{x} = (\gamma-1)a + (\psi-1)b$ with $a,b \in \mathrm{D}(V)$. We can then take $y' = y + b + (\gamma_G-1)c$ where $(\psi-1)c=a$. We then have $(g-1)y'=(g-1)y + (g-1)b + (\gamma_G-1)(g-1)c$. Since $G_K=I_K H_K$, we can write $g=ih$ and $(g-1)b=(i-1)b$. Using $G_K=I_K H_K$ once again, we see that $I_K \to G_K/H_K$ is surjective, so that we can identify $\gamma_I$ and $\gamma_G$. The resulting cocycle is then cohomologous to $\delta(x)$. This proves that $\delta$ is well-defined.
We now prove that $\delta$ is injective. If $\delta(x)=0$, then using Lemma \ref{psigam} there exists $z \in \mathrm{D}(V|_{I_K})^{\psi=1}$ such that $\delta(x)(\overline{g})$ is the image of $(g-1)(z)$ in $\mathrm{D}(V|_{I_K})^{\psi=1} / (\gamma_I-1)$. This implies that $(g-1)(y-z) \in (\gamma_I-1) \mathrm{D}(V|_{I_K})^{\psi=1}$. Applying $\psi-1$ gives $(g-1)\tilde{x} = 0$, so that $\tilde{x} \in \mathrm{D}(V)^{G_K} \subset V^{I_K}=0$. The map $\delta$ is therefore injective. \end{proof} \begin{lemma} \label{expbij} If $V$ is semistable and the weights of $V$ are all $\geq 2$, then $\exp_V : \mathrm{D}_{\mathrm{dR}}(V|_{I_K}) \to H^1(I_K,V)$ is an isomorphism. \end{lemma} \begin{proof} Apply Thm. 6.8 of \cite{BER} to $V|_{I_K}$. \end{proof} \begin{proof}[Proof of Theorem \ref{main}] We can replace $K$ by $K_n$ for $n \gg 0$ and use the fact that if $H^2(G_{K_n},V) = 0$, then $H^2(G_K,V) = 0$, since the restriction map is injective. In particular, we can assume that $H_K I_K = G_K$ and that $\Gamma_K$ is isomorphic to $\mathbf{Z}_p$. By item (2) of Lemma \ref{cc}, we have $H^2(G_K,V) = \mathrm{D}(V)/(\psi-1,\gamma-1)$, and so by Proposition \ref{hsss} above, it is enough to prove that \[ H^1(G_K/I_K, (\mathrm{D}(V|_{I_K}) / (\gamma_I-1))^{\psi=1}) = 0. \] Lemma \ref{psigam} tells us that $(\mathrm{D}(V|_{I_K}) / (\gamma_I-1))^{\psi=1} = \mathrm{D}(V|_{I_K})^{\psi=1} / (\gamma_I-1)$. Since $\mathrm{D}(V|_{I_K})/(\psi-1)=0$ by Lemma \ref{psimsur}, item (1) of Lemma \ref{cc} tells us that $\mathrm{D}(V|_{I_K})^{\psi=1} / (\gamma-1) = H^1(I_K,V)$. The map $\exp_V : \mathrm{D}_{\mathrm{dR}}(V|_{I_K}) \to H^1(I_K,V)$ is an isomorphism by Lemma \ref{expbij}, and this isomorphism commutes with the action of $G_K$ since it is a natural map. We therefore have $H^1(I_K,V) = \widehat{K}^{\operatorname{nr}} \otimes_K \mathrm{D}_{\mathrm{dR}}(V)$ as $G_K$-modules. It remains to observe that the cocycle $\delta(x) \in Z^1(G_K/I_K, \widehat{K}^{\operatorname{nr}} \otimes_K \mathrm{D}_{\mathrm{dR}}(V))$ is continuous and that $H^1(G_K/I_K, \widehat{K}^{\operatorname{nr}})=0$ by taking a lattice, reducing modulo a uniformizer of $K$, and applying Hilbert 90. \end{proof} \section{The Syntomic ring spectrum, by Fr\'ed\'eric D\'eglise} In this appendix, we explain why syntomic cohomology as defined in this paper is representable by a motivic ring spectrum in the sense of Morel and Voevodsky's homotopy theory. More precisely, we will exhibit a monoid object ${\mathcal{S}}$ of the triangulated category of motives with $\mathbf{Q}_p$-coefficients (see below), $DM$, such that for any variety $X$ and any pair of integers $(i,r)$, $$ H^i_{\operatorname{syn}}(X_h,r)=\operatorname{Hom}_{DM}(M(X),{\mathcal{S}}(r)[i]). $$ In fact, it is possible to apply directly \cite[Th. 1.4.10]{DM1} to the graded commutative dg-algebra $\mathrm{R}\Gamma_{\operatorname{syn}}(X,*)$ of Theorem \ref{main1} in view of the existence of Chern classes established in Section 5.1.
However, the use of the $h$-topology in this paper makes the construction of the syntomic ring spectrum much more straightforward, and that is what we explain in this appendix. Reformulating slightly the original definition of Voevodsky (see \cite{V1}), we introduce: \begin{definition} Let $\PSh(K,\mathbf{Q}_p)$ be the category of presheaves of $\mathbf{Q}_p$-modules over the category of varieties. Let $C$ be a complex in $\PSh(K,\mathbf{Q}_p)$. We say: \begin{enumerate} \item $C$ is $h$-local if for any $h$-hypercovering $\pi:Y_\bullet \rightarrow X$, the induced map $$ C(X) \xrightarrow{\pi^*} \mathrm{Tot}^\oplus(C(Y_\bullet)) $$ is a quasi-isomorphism; \item $C$ is $\mathbb A^1$-local if for any variety $X$, the map induced by the projection $$ H^i(X_h,C) \rightarrow H^i(\mathbb A^1_{X,h},C) $$ is an isomorphism. \end{enumerate} We define the triangulated category $DM^{eff}_h(K,\mathbf{Q}_p)$ of effective $h$-motives as the full subcategory of the derived category $D(\PSh(K,\mathbf{Q}_p))$ consisting of the complexes which are $h$-local and $\mathbb A^1$-local. \end{definition} Equivalently, we can define this category as the $\mathbb A^1$-localization of the derived category of $h$-sheaves on $K$-varieties (see \cite{CD3}, Sec. 5.2 and more precisely Prop. 5.2.10, Ex. 5.2.17(2)). Recall also from \emph{loc. cit.} that there are derived tensor products and internal $\operatorname{Hom}$ on $DM^{eff}_h(K,\mathbf{Q}_p)$. For any integer $r \geq 0$, the \emph{syntomic sheaf} ${\mathcal{S}}(r)$ is both $h$-local (by definition) and $\mathbb A^1$-local (Prop. \ref{homotopy}). Thus it defines an object of $DM^{eff}_h(K,\mathbf{Q}_p)$ and for any variety $X$, one has an isomorphism $$ \operatorname{Hom}_{DM^{eff}_h(K,\mathbf{Q}_p)}(\mathbf{Q}_p(X),{\mathcal{S}}(r)[i])= \operatorname{Hom}_{D(\PSh(K,\mathbf{Q}_p))}(\mathbf{Q}_p(X),{\mathcal{S}}(r)[i]) =H^{i}_{\operatorname{syn}}(X_h,r), $$ where $\mathbf{Q}_p(X)$ is the presheaf of $\mathbf{Q}_p$-vector spaces represented by $X$. Thus, the representability assertion for syntomic cohomology is obvious in the effective setting. Recall that one defines the Tate motive in $DM^{eff}_h(K,\mathbf{Q}_p)$ as the object $\mathbf{Q}_p(1):=\mathbf{Q}_p(\mathbb P^1_K)/\mathbf{Q}_p(\{\infty\})[-2]$. Given any object $C$ of $DM^{eff}_h(K,\mathbf{Q}_p)$, we put $C(n):=C \otimes \mathbf{Q}_p(1)^{\otimes n}$. One should be careful that this notation is in conflict with that of ${\mathcal{S}}(r)$ considered as an effective $h$-motive, as the natural twist on syntomic cohomology is unrelated to the twist of $h$-motives. To resolve this, we are led to consider the following notion of Tate spectrum, borrowed from algebraic topology following Morel and Voevodsky.
\begin{definition} A \emph{Tate $h$-spectrum} (over $K$ with coefficients in $\mathbf{Q}_p$) is a sequence $\mathbb{E}=(E_i,\sigma_i)_{i \in \mathbb N}$ such that: \begin{itemize} \item for each $i \in \mathbb N$, $E_i$ is a complex of $\PSh(K,\mathbf{Q}_p)$ equipped with an action of the symmetric group $\Sigma_i$ on $i$ elements, \item for each $i \in \mathbb N$, $\sigma_i:E_i(1) \rightarrow E_{i+1}$ is a morphism of complexes -- called the \emph{suspension map} in degree $i$, \item for any integers $i \geq 0$, $r>0$, the map induced by the morphisms $\sigma_i,\dots,\sigma_{i+r-1}$: $$ E_i(r) \rightarrow E_{i+r} $$ is compatible with the action of $\Sigma_i \times \Sigma_r$, given on the left by the structural $\Sigma_i$-action on $E_i$ and the action of $\Sigma_r$ via the permutation isomorphism of the tensor structure on $C(\PSh(K,\mathbf{Q}_p))$, and on the right via the embedding $\Sigma_i \times \Sigma_r \rightarrow \Sigma_{i+r}$. \end{itemize} A morphism of Tate $h$-spectra $f:\mathbb{E} \rightarrow \mathbb F$ is a sequence of $\Sigma_i$-equivariant maps $(f_i:E_i \rightarrow F_i)_{i \in \mathbb N}$ compatible with the suspension maps. The corresponding category will be denoted by $\Sp(K,\mathbf{Q}_p)$. \end{definition}

There is an adjunction of categories:
\begin{equation} \label{eq:suspension} \Sigma^\infty:C\big(\PSh(K,\mathbf{Q}_p)\big) \leftrightarrows \Sp(K,\mathbf{Q}_p):\Omega^\infty \end{equation}
such that for any complex $C$ of $h$-sheaves, $\Sigma^\infty C$ is the Tate spectrum equal in degree $n$ to $C(n)$, equipped with the obvious action of $\Sigma_n$ induced by the symmetric structure on the tensor product and with the obvious suspension maps.

\begin{definition} A morphism of Tate spectra $(f_i:E_i \rightarrow F_i)_{i \in \mathbb N}$ is a level quasi-isomorphism if for any $i$, $f_i$ is a quasi-isomorphism. A Tate spectrum $\mathbb{E}$ is called an $\Omega$-spectrum if for any $i$, $E_i$ is $h$-local and $\mathbb A^1$-local and the map of complexes $$ E_i \rightarrow \underline{\operatorname{Hom}}(\mathbf{Q}_p(1),E_{i+1}) $$ is a quasi-isomorphism. We define the triangulated category $DM_h(K,\mathbf{Q}_p)$ of $h$-motives over $K$ with coefficients in $\mathbf{Q}_p$ as the category of Tate $\Omega$-spectra localized by the level quasi-isomorphisms. \end{definition}

The category of $h$-motives notably enjoys the following properties: \begin{enumerate} \item[(DM1)] The adjunction of categories \eqref{eq:suspension} induces an adjunction of triangulated categories: $$ \Sigma^\infty:DM^{eff}_h(K,\mathbf{Q}_p) \leftrightarrows DM_h(K,\mathbf{Q}_p):\Omega^\infty $$ such that for a Tate $\Omega$-spectrum $\mathbb{E}$, and any integer $r \geq 0$, $\Omega^\infty(\mathbb{E}(r))=E_r$ (see \cite[Sec. 5.3.d, and Ex. 5.3.31(2)]{CD3}). Given any variety $X$, we define the (stable) $h$-motive of $X$ as $M(X):=\Sigma^\infty \mathbf{Q}_p(X)$. \item[(DM2)] There exists a symmetric closed monoidal structure on $DM_h(K,\mathbf{Q}_p)$ such that $\Sigma^\infty$ is monoidal and such that $\Sigma^\infty \mathbf{Q}_p(1)$ admits a tensor inverse (see \cite[Sec. 5.3, and Ex. 5.3.31(2)]{CD3}). By abuse of notation, we put $\mathbf{Q}_p=\Sigma^\infty \mathbf{Q}_p$. \item[(DM3)] The triangulated monoidal category $DM_h(K,\mathbf{Q}_p)$ is equivalent to all known versions of triangulated categories of mixed motives over $\operatorname{Spec}(K)$ with coefficients in $\mathbf{Q}_p$ (see \cite[Sec. 16, and Th. 16.1.2]{CD3}).
In particular, it contains as a full subcategory the category $DM_{gm}(K) \otimes \mathbf{Q}_p$ obtained from the category of Voevodsky geometric motives (\cite[chap. 5]{FSV}) by tensoring $\operatorname{Hom}$-groups with $\mathbf{Q}_p$ (see \cite[Cor. 16.1.6, Par. 15.2.5]{CD3}). \end{enumerate}

With that definition, the construction of a Tate spectrum representing syntomic cohomology is almost obvious. In fact, we consider the sequence of presheaves $$ {\mathcal{S}}:=({\mathcal{S}}(r), r \in \mathbb N), $$ where each ${\mathcal{S}}(r)$ is equipped with the trivial action of $\Sigma_r$. According to the first paragraph of Section 5.1, we can consider the first Chern class of the canonical invertible sheaf on $\mathbb P^1_K$: $\bar c \in H^2_{\operatorname{syn}}(\mathbb P^1_K,1)=H^2(\mathbb P^1_{K,h},{\mathcal{S}}(1))$. Take any lift $c:\mathbf{Q}_p(\mathbb P^1_K) \rightarrow {\mathcal{S}}(1)[2]$ of this class. By the definition of the Tate twist, it defines an element $\mathbf{Q}_p(1) \rightarrow {\mathcal{S}}(1)$ still denoted by $c$. We define the suspension map:
$$ {\mathcal{S}}(r) \otimes \mathbf{Q}_p(1) \xrightarrow{\operatorname{Id} \otimes c} {\mathcal{S}}(r) \otimes {\mathcal{S}}(1) \xrightarrow{\mu} {\mathcal{S}}(r+1) $$
where $\mu$ is the multiplication coming from the graded dg-structure on ${\mathcal{S}}(*)$. Because this dg-structure is commutative, we obtain that these suspension maps induce the structure of a Tate spectrum on ${\mathcal{S}}$. Moreover, ${\mathcal{S}}$ is a Tate $\Omega$-spectrum because each ${\mathcal{S}}(r)$ is $h$-local and $\mathbb A^1$-local, and the map obtained by adjunction from $\sigma_r$ is a quasi-isomorphism because of the projective bundle theorem for $\mathbb P^1$ (an easy case of Proposition \ref{projective}). Now, by definition of $DM_h(K,\mathbf{Q}_p)$ and because of property (DM1) above, for any variety $X$ and any integers $(i,r)$, we get:
$$ \operatorname{Hom}_{DM_h(K,\mathbf{Q}_p)}(M(X),{\mathcal{S}}(r)[i]) =\operatorname{Hom}_{DM^{eff}_h(K,\mathbf{Q}_p)}(\mathbf{Q}_p(X),\Omega^\infty({\mathcal{S}}(r))[i]) =H^i_{\operatorname{syn}}(X_h,r). $$
Moreover, the commutative dg-structure on the complex ${\mathcal{S}}(*)$ induces a monoid structure on the associated Tate spectrum. In other words, ${\mathcal{S}}$ is a ring spectrum (strict and commutative). This construction is completely analogous to the proof of \cite[Prop. 1.4.10]{DM1}. In particular, we can apply all the constructions of \cite[Sec. 3]{DM1} to the ring spectrum ${\mathcal{S}}$. Let us summarize this briefly:

\begin{proposition} \label{Bloch-Ogus} \begin{enumerate} \item Syntomic cohomology is covariant with respect to projective morphisms of smooth varieties (Gysin morphisms in the terminology of \cite{DM1}). More precisely, to a projective morphism of smooth $K$-varieties $f: Y\to X$ one can associate a Gysin morphism in syntomic cohomology $$f_*: H^n_{\operatorname{syn}}(Y_h,i)\to H^{n-2d}_{\operatorname{syn}}(X_h,i-d), $$ where $d$ is the relative dimension of $f$. \item The syntomic regulator over $\mathbf{Q}_p$ is induced by the unit $\eta:\mathbf{Q}_p \rightarrow {\mathcal{S}}$ of the ring spectrum ${\mathcal{S}}$: \begin{align*} r_{\operatorname{syn}}:H^{r,i}_M(X) \otimes \mathbf{Q}_p=&\operatorname{Hom}_{DM_h(K,\mathbf{Q}_p)}(M(X),\mathbf{Q}_p(r)[i]) \\ & \longrightarrow \operatorname{Hom}_{DM_h(K,\mathbf{Q}_p)}(M(X),{\mathcal{S}}(r)[i])=H^i_{\operatorname{syn}}(X_h,r). \end{align*} It is compatible with products, pullbacks and pushforwards.
\item Syntomic cohomology has a natural extension to $h$-motives\footnote{and in particular to the usual Voevodsky geometric motives, by (DM3) above.}: $$ DM_h(K,\mathbf{Q}_p)^{op} \rightarrow D(\mathbf{Q}_p), \quad M \mapsto \operatorname{Hom}_{DM_h(K,\mathbf{Q}_p)}(M,{\mathcal{S}}) $$ and the syntomic regulator $r_{\operatorname{syn}}$ can be extended to motives. \item There exists a canonical syntomic Borel-Moore homology $H^{\operatorname{syn}}_*(-,*)$ such that the pair of functors $(H_{\operatorname{syn}}^*(-,*),H^{\operatorname{syn}}_*(-,*))$ defines a Bloch-Ogus theory. \item To the ring spectrum ${\mathcal{S}}$ there is associated a cohomology with compact support satisfying the usual properties. \end{enumerate} \end{proposition}

For points (1) and (2), we refer the reader to \cite[Sec. 3.1]{DM1} and for the remaining ones to \cite[Sec. 3.2]{DM1}.

\begin{remark} Note that the construction of the syntomic ring spectrum ${\mathcal{S}}$ in $DM_h(K,\mathbf{Q}_p)$ automatically yields the general projective bundle theorem (already obtained in Prop. \ref{projective}). More generally, the ring spectrum ${\mathcal{S}}$ is \emph{oriented} in the terminology of motivic homotopy theory. Thus, besides the theory of Gysin morphisms, this gives various constructions -- symbols, residue morphisms -- and yields various formulas -- excess intersection formula, blow-up formulas (see \cite{Deg8} for more details). \end{remark}

\end{document}
\begin{document} \twocolumn[ \icmltitle{Contrastive Credibility Propagation for Reliable Semi-Supervised Learning} \icmlsetsymbol{equal}{*} \begin{icmlauthorlist} \icmlauthor{Brody Kutt}{yyy} \icmlauthor{Pamela Toman}{yyy} \icmlauthor{Xavier Mignot}{yyy} \icmlauthor{Sujit Rokka Chhetri}{yyy} \icmlauthor{Shan Huang}{yyy} \icmlauthor{Nandini Ramanan}{yyy} \icmlauthor{Min Du}{yyy} \icmlauthor{William Hewlett}{yyy} \end{icmlauthorlist} \icmlaffiliation{yyy}{AI Research, Palo Alto Networks, Santa Clara, CA} \icmlcorrespondingauthor{Brody Kutt}{[email protected]} \icmlkeywords{Machine Learning, ICML} \vskip 0.3in ] \printAffiliationsAndNotice{}
\begin{abstract} Inferring labels for unlabeled data from labeled data is an error-prone process. Conventional neural network training is highly sensitive to supervision errors. These two realities make semi-supervised learning (SSL) troublesome. In practice, SSL approaches often fail to outperform their fully supervised baseline. We propose a novel framework for deep SSL via transductive pseudo-label refinement called Contrastive Credibility Propagation (CCP). Through an iterative process of refining soft pseudo-labels, CCP unifies a novel contrastive approach for generating pseudo-labels and a powerful technique to overcome instance-dependent label noise. The result is an SSL classification framework explicitly designed to overcome inevitable pseudo-label errors. Using standard text and image benchmark classification datasets, we show CCP \textit{reliably} boosts or matches performance over a supervised baseline in four common real-world SSL scenarios: few-label, open-set, noisy-label, and class distribution misalignment. \end{abstract}
\section{Introduction} \label{sec:intro} Leveraging massive datasets that are only partially labeled is critical for countless applications \cite{Wang_2020_CVPR,10.1007/978-3-031-17081-2_12,10.1007/978-3-031-17551-0_19,de_Carvalho_2022,9896925,10.1007/978-3-031-13841-6_10,lugo2022semi}. Generating and utilizing pseudo-labels for unlabeled data is a powerful but error-prone approach. Common issues often challenge the resulting classifier's ability to reliably perform better than a fully supervised baseline \cite{NEURIPS2018_c1fea270}. Some of these issues, illustrated in \cref{lp_issues}, and how our work addresses them, are described below. \begin{figure} \caption{ Six common issues of pseudo-labeling algorithms. The color of the interior of each circle indicates the predicted class. The outline of each circle indicates the true class and black outlines indicate a labeled sample. Black arrows indicate label propagation influence. In \textbf{C} \label{lp_issues} \end{figure} \paragraph{\noindent\textbf{A) Error propagation:}} An incorrect pseudo-label can cause a cascade of future errors. In the original label propagation algorithm (LPA) \cite{lpa}, upon which several more recent techniques are based \cite{MALHOTRA2021100030,6137400,lpa_mni,HouChin2017ASL,10.1145/3378537,cred_assessment}, label information iteratively diffuses across unlabeled samples whether the initial propagation is correct or not. Our approach eliminates error propagation at the batch level by focusing exclusively on first-order, \emph{i.e.}, single-hop, similarities. High-order propagation is made possible through multiple iterations of \cref{ccp}.
\paragraph{\noindent\textbf{B) Ambiguity:}} The true class of unlabeled samples is sometimes unknowable, \emph{e.g.}, if a sample is highly similar to two different classes. In a similar fashion to \cite{cred_assessment}, our approach uses what we call credibility vectors (see \cref{subsec:credibility}), as opposed to traditional label vectors, to help reflect low prediction weight in this scenario. The output credibility vector for such a sample, $q_i$, will feature near-zero scores for the equally similar classes and $< 0$ scores for every other class. $q_i$ values of $\leq 0$ induce no effect throughout the framework and near-zero values induce a minimal effect. In general, credibility adjustments help softly differentiate uncontested estimated similarities by giving them higher weight. \paragraph{\noindent\textbf{C) Inappropriate similarity metrics:}} Learned similarity metrics for SSL often suffer from confirmation bias \cite{9207304}. This occurs when a network is optimized to produce similarities corresponding to its label predictions. To combat confirmation bias, the noisy results of the propagation mechanism (\cref{q_j}) do not affect our loss during pseudo-label assignment. A single CCP iteration consists of many epochs while minimizing our Soft Supervised Contrastive (SSC) loss (denoted $\mathcal{L}_{\text{SSC}}$), which is a generalization of Supervised Contrastive (SupCon) loss \cite{https://doi.org/10.48550/arxiv.2004.11362} and its self-supervised counterpart, SimCLR loss \cite{pmlr-v119-chen20j,https://doi.org/10.48550/arxiv.2006.10029}. Only after the iteration has concluded do we update our soft pseudo-labels by averaging all predictions across epochs. Averaged predictions capture network oscillations and inconsistencies across batches. Unlabeled samples influence $\mathcal{L}_{\text{SSC}}$ and \cref{q_j} only via their averaged pseudo-labels. This process, CCP's outermost iteration, is modeled after an algorithm developed to correct instance-dependent label noise called SEAL \cite{https://doi.org/10.48550/arxiv.2012.05458}. We find incorrect pseudo-labels correct themselves across iterations in a similar fashion (Appendix \ref{sec:ccp_iter_perf}). \paragraph{\noindent\textbf{D) Forced propagation:}} Pseudo-labels are often forced to carry a certain total weight, usually through the use of a softmax function \cite{https://doi.org/10.48550/arxiv.2202.07136}. This eliminates a propagation mechanism's ability to abstain from prediction even if the similarity to all classes is weak. Adjacency matrix row normalization in graph-based label propagation produces the same effect \cite{lpa,pmlr-v80-kamnitsas18a}. In \cref{q_j}, class similarities are all computed independently. Credibility adjustments do not force the pseudo-label to sum to any specific value. An unlabeled sample that has near-zero similarity to all classes will have a credibility vector of near-zero values. Through the use of weighted arithmetic means in $\mathcal{L}_{\text{SSC}}$ and \cref{q_j}, as well as a weighted classification loss, these samples will have minimal influence on the framework. This is especially relevant in the presence of unlabeled data that belongs to no class (called open-set SSL), which can cripple existing approaches \cite{NEURIPS2018_c1fea270}.
The ability to abstain from prediction enables CCP to safely leverage all pseudo-labels instead of simply discarding a percentage of the weakest pseudo-labels, as is common practice \cite{https://doi.org/10.48550/arxiv.2001.07685,Li2021CoMatchSL,simmatch}. \paragraph{\noindent\textbf{E) Classification sensitivity to label errors:}} Deep learning classifiers with conventional cross-entropy loss are sensitive to label errors \cite{DBLP:journals/corr/abs-2007-08199}. Classifiers often directly influence pseudo-label generation while simultaneously learning from noisy pseudo-labels \cite{DBLP:journals/corr/abs-2103-00550,lee2013pseudo,simmatch}, which can cause an error cascade. Recent work has explored why contrastive representations boost robustness to label noise \cite{9522768}. In our approach, only softly supervised contrastive representations are used for pseudo-label generation. Further, subsampling pseudo-labels between iterations and after the conclusion of CCP eliminates many erroneous or weakly predicted samples. We propose an unsupervised, principled subsampling approach in \cref{subsec:subsampling}. \paragraph{\noindent\textbf{F) Propagation sensitivity to similarity metrics:}} Pseudo-label accuracy is often at the mercy of your similarity metric. Similar to \cite{10.5555/3454287.3454741}, our approach averages predicted credibility vectors across transformed views of data for increased error resistance. Unlike other work, if two different predictions occur for two transformed views of a sample, even when each has high confidence, the average \textit{credibility-adjusted} pseudo-label will consist of all near-zero values. Unlike \cite{10.5555/3454287.3454741}, no sharpening takes place on the average vector to prevent forcing a pseudo-label with weak evidence. The evaluation of SSL algorithms often only explores the few-label learning scenario, \emph{i.e.}, balanced reduction of the number of given labels. Our work expands upon this with three other common real-world SSL scenarios. These are open-set (some unlabeled data belongs to no class), noisy-label (some of the given labels are incorrect), and class distribution misalignment (labeled and unlabeled data have different class frequency distributions). Our contributions are as follows. To the best of our knowledge, CCP is the first SSL algorithm to 1) Make use of credibility vectors as we define them to properly represent uncertainty. 2) Imbue a pseudo-labeling strategy with an approach to overcome instance-dependent label noise. 3) Introduce a generalizable, unsupervised strategy to choose a dynamic pseudo-label subsampling rate. 4) Demonstrate a reliable performance boost over a supervised baseline in four real-world SSL data scenarios with a single solution. \section{Related Work} \label{sec:related_work} SSL has a rich history in AI research \cite{DBLP:journals/corr/abs-2103-00550,books/mit/06/CSZ2006}. We focus on two dominant approaches. These are pseudo-labeling and consistency training. CCP draws inspiration from both approaches. \paragraph{\noindent\textbf{Pseudo-labeling:}} These methods typically constitute transductive learning \cite{shi2018transductive,iscen2019label}. Here, the objective is to generate proxy labels for unlabeled instances, \emph{e.g.}, via transductive inference, to enhance the learning of an inductive model. Seminal work \cite{lee2013pseudo} simply uses the classifier in training to produce pseudo-labels which are trained upon directly.
This is problematic in that it actively promotes confirmation bias and it heavily relies on the fitness of the pseudo-labels. This was later extended to include a measure of confidence of pseudo-labels in \cite{shi2018transductive}. Closely related is the concept of self-training which iteratively integrates into training the most confident pseudo-labeled samples and repeats \cite{dong-schafer-2011-ensemble,https://doi.org/10.48550/arxiv.2109.00778,9156610}. These techniques are sometimes unstable when pseudo-label error accumulates across iterations. More recent work has tried to address accumulating errors by using independent models for utilizing pseudo-labels \cite{https://doi.org/10.48550/arxiv.2202.07136}. As mentioned previously, LPA \cite{lpa} is a popular graph-based technique for generating pseudo-labels but has many failure cases \cite{cred_assessment}. Much work tends to use LPA as a transductive inference mechanism. Such inferred pseudo-labels are highly noisy and thus problematic for SSL. Work in \cite{pmlr-v80-kamnitsas18a} tries to circumvent this by instead using LPA-derived pseudo-labels only for graph-based regularization of the encoder. Like CCP, \cite{wang2022r2} proposed repeatedly re-predicting pseudo-labels during an optimization framework akin to self-training. Although there has been much success in employing previous pseudo-labeling-based SSL approaches, they tend to break down when faced with highly unreliable pseudo-labels. \paragraph{\noindent\textbf{Consistency Training:}} Analogous to perturbation-based SSL and contrastive learning, these techniques train a network to produce a single, distinct output for a sample under different augmentations (transformations). Work in \cite{xie2020unsupervised} minimizes a consistency loss for unlabeled data and a standard classification loss simultaneously which could introduce irreconcilable gradients. Other work \cite{pmlr-v119-chen20j,https://doi.org/10.48550/arxiv.2006.10029}, shown superior to \cite{xie2020unsupervised} in SSL, employ an unsupervised consistency-like loss in a pretraining stage before training solely with clean labels. Unsupervised consistency loss is attractive in that it eliminates supervision errors but lacks power in that class structure from unlabeled data is never leveraged directly. Instead, CCP introduces a softly supervised consistency (contrastive) loss with true and pseudo-labels. \section{Contrastive Credibility Propagation (CCP)} \label{sec:ccp} Here we present our proposed method for transductive pseudo-label refinement called Contrastive Credibility Propagation (CCP). We establish notation, introduce credibility vectors, describe the CCP iteration, introduce our subsampling procedure, formalize our loss functions, then explain how to build a classifier on CCP pseudo-labels. Consider a classification task among a set of $K$ classes $c=[0, 1, \ldots, K - 1]$. Consider a dataset $X=\{x_0, \ldots, x_{N-1}\}$ of $N$ samples. Denote indices to labeled (unlabeled) samples as $L$ ($U$). Each labeled sample $l \in L$ has a known one-hot class vector $y_l \in [0, 1]^{K}$ with a 1 (0) in the on (off) position. We maintain a real-valued credibility vector $q_i$ of length $K$ for each sample $i$: $q_i \in [-1, 1]^{K}$ (see \cref{subsec:credibility}). In training, denote the indices of a random batch of size $n$ as $\mathcal{B} \subseteq L \cup U$. We define a set of data transformations, $\mathcal{T}$. We draw two transformations, $t_1, t_2 \in \mathcal{T}$, randomly for every batch. 
A transformed pair will share one $q_i$. Thus, a batch of transformed data and credibility vector tuples will be of size $2n$ denoted $\{(x^{t_1}_i, q_i), (x^{t_2}_i, q_i)\}_{i\in \mathcal{B}}$. An encoder $f_b$, computes a vector encoding $f_b(x)=b$. One projection head $f_z$ computes a vector encoding used for contrastive learning, $f_z(b)=z$. A separate projection head, $f_g$, computes $f_g(b)=g\in \mathbb{R}^{K}$ with which we apply a classification loss. $f_g$ and $f_z$ consist of a nonlinear MLP. We reset $f_b$ and $f_z$ between iterations of CCP to learn new representations. Attaching $f_g$ and $f_z$ to $f_b$ is motivated by prior work \cite{pmlr-v119-chen20j,https://doi.org/10.48550/arxiv.2006.10029,https://doi.org/10.48550/arxiv.2004.11362}. An illustration of these components is provided in Appendix \ref{sec:additional_illustrations}. We minimize a softly supervised contrastive loss while we are performing CCP (\cref{ccp}). The conclusion of CCP provides soft labels (credibility vectors) for all unlabeled data which are then used to build a classifier. The similarity function we use throughout this work is angular similarity, motivated by prior work \cite{pmlr-v119-chen20j,https://doi.org/10.48550/arxiv.1803.11175}. Similarities are computed solely upon $z$ vectors produced by the projection head $f_z$. Angular similarity is defined by $\phi(z_i, z_j) = 1-\nicefrac{\arccos\left(\frac{z_i \cdot z_j}{\lVert z_i \rVert \lVert z_j \rVert}\right)}{\pi}$. \subsection{Credibility Adjustments} \label{subsec:credibility} The CCP algorithm iteratively assigns and refines credibility vectors to all unlabeled samples, $\{q_u\}_{u\in U}$. A credibility vector is a vector that represents the confidence of a sample belonging to a certain class. Unlike probability vectors, credibility vectors do not sum to $1$. Credibility vectors are more expressive than one-hot label vectors. A credibility score of $1$ means maximum confidence in class membership, $0$ means uncertainty and $-1$ means maximum confidence in class non-membership. Verbally, a credibility score for a sample and class is the similarity between that sample and class minus the highest similarity to all other classes. Credibility adjustments are formalized in \cref{ccp}, \cref{ccp:begin_cred_adj,ccp:cred_adj,ccp:end_cred_adj} and \cref{q_j}, \cref{q_j:begin_cred_adj,q_j:cred_adj,q_j:end_cred_adj}. The core idea of credibility, similar to \cite{cred_assessment}, is to extend similarity measurements to capture ambiguity brought forth by competing similarities. For trusted labeled data, credibility vectors, $\{q_l\}_{l\in L}$, are clamped with 1 (0) in the on (off) position. When we apply credibility vectors to $\mathcal{L}_{\text{SSC}}$, $\mathcal{L}_{\text{CLS}}$, and \cref{q_j}, we clip credibility vectors to a range of $[0,1]$. A clipped credibility vector consists of zeros everywhere except the strongest class which is scaled down by the second strongest class value. Negative values of credibility vectors produced in a batch are still useful when averaging across epochs in \cref{ccp:avg_over_epoch}. Note that $\mathcal{L}_{\text{SSC}}$ and \cref{q_j} feature weighted averages. The weights are derived directly from the values of the associated $q_i$'s. $\mathcal{L}_{\text{CLS}}$ is scaled by the magnitude of values in $q_i$. At initialization, unlabeled data receive $q_i=\vec{0}$ to ensure they do not influence either. 
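For concreteness, the credibility adjustment and clipping just described can be sketched in a few lines of NumPy. This is an illustrative sketch using our notation, not the exact implementation, and the helper names are ours:
\begin{verbatim}
import numpy as np

def credibility_adjust(scores):
    # scores: length-K vector of per-class strengths.
    # Each entry is reduced by the best competing entry,
    # so contested predictions collapse toward zero.
    q = np.empty_like(scores, dtype=float)
    for k in range(len(scores)):
        others = np.delete(scores, k)
        q[k] = scores[k] - others.max()
    return q

def clip_credibility(q):
    # Keeps at most the strongest class, scaled down by
    # the runner-up; all other entries become zero.
    return np.clip(q, 0.0, 1.0)

# Two competing classes yield a near-zero clipped vector.
psi = np.array([0.81, 0.79, 0.10])
q = clip_credibility(credibility_adjust(psi))
# q is approximately [0.02, 0.0, 0.0]
\end{verbatim}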
A concrete example of credibility adjustments and how they compare to traditional label vectors in a cross-entropy calculation is shown in Appendix \ref{sec:additional_illustrations}. \subsection{The CCP Iteration} \label{subsec:the_ccp_iteration} During a single iteration of CCP (\cref{ccp}), we predict a credibility vector for every unlabeled sample $\Xi$ times while simultaneously minimizing a soft supervised contrastive loss, $\mathcal{L}_{\text{SSC}}$. We store all credibility vectors for unlabeled samples and then average them together at the end of the iteration. The SEAL algorithm \cite{https://doi.org/10.48550/arxiv.2012.05458}, which bears similarity to CCP's outermost iteration (\cref{ccp:outermost_iter}), is designed to correct the labels of mislabeled samples in fully supervised problems. It was found that, during training, network predictions frequently oscillate between incorrect and correct labels in the presence of label noise. This suggests averaging the predictions made across epochs to correct label noise. Similarly, propagated pseudo-labels are subject to the randomness of the instantaneous network state \textit{and} batch selection. Concretely, in the presence of pseudo-label error, we observe the same oscillatory behavior reported in \cite{https://doi.org/10.48550/arxiv.2012.05458} of the predicted pseudo-label. This enables a similar theoretical intuition behind epoch averaging. \begin{equation} \label{theoretical_intuition} \lVert \mathbb{E}[q_u^{[m+1]}] - q_u^{*} \rVert \leq \lVert q_u^{[m]} - q_u^{*} \rVert \end{equation} \noindent where $\lVert \cdot \rVert$ is a vector norm, $q_u^{[m]}\in \mathbb{R}^K$ is the pseudo-label of sample $u$ at iteration $m$, and $q_u^{*}\in \mathbb{R}^K$ is the latent optimal pseudo-label of sample $u$. Namely, incorrect pseudo-labels gradually move to their optimal state across CCP iterations. Refer to Appendix \ref{sec:pseudo_label_oscillation} for the derivation of \cref{theoretical_intuition} and plots depicting oscillatory pseudo-label behavior. Furthermore, the choice to model our outermost iteration after SEAL is due to SEAL combating instance-dependent rather than class-conditional label noise. The \say{label noise} we are trying to overcome in CCP arrives via transduction and is thus strongly instance-dependent. The value of $\Xi$ should be sufficient to see proper convergence of $\mathcal{L}_{\text{SSC}}$. The initial values for $f_b$ and $f_z$ at \cref{ccp:init_f_b} can be defined randomly or via self-supervised pretraining. Pretraining with $\mathcal{L}_{\text{SSC}}$'s self-supervised special case, SimCLR loss, often greatly improves pseudo-label accuracy, especially in a few-label scenario. In \cref{ccp:batch_draw}, every batch can contain labeled (unlabeled) samples with indices $\mathcal{B}_l \subseteq L$ ($\mathcal{B}_u \subseteq U$). Ensuring both are present is important in the first iteration of CCP when unlabeled data has no pseudo-label. After the first iteration, purely random batch sampling can be used. Within a transformed batch of samples and credibility vectors, $\{(x^{t_1}_i, q_i), (x^{t_2}_i, q_i)\}_{i\in \mathcal{B}}$, credibility vectors are generated independently for each member of an unlabeled transformed pair, $\{\tilde{q}_u^{t_1}, \tilde{q}_u^{t_2}\}_{u\in \mathcal{B}_u}$. We then compute a single predicted credibility vector for a transformed pair by averaging them together (\cref{q_j}, \cref{q_j:avg_over_trans}).
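To make the batch-level propagation and two-view averaging of \cref{q_j} concrete, a minimal NumPy sketch is given below. It is illustrative only; it reuses the hypothetical \texttt{credibility\_adjust} helper from the previous sketch and adds a small constant to guard against division by zero:
\begin{verbatim}
import numpy as np

def angular_similarity(a, b):
    # phi(z_i, z_j) = 1 - arccos(cosine similarity) / pi
    cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return 1.0 - np.arccos(np.clip(cos, -1.0, 1.0)) / np.pi

def propagate(z1, z2, Q, j):
    # z1, z2: (n, d) projections of the two views.
    # Q: (n, K) clipped credibility vectors for the batch.
    # j: index of an unlabeled sample in the batch.
    n, K = Q.shape
    keep = np.arange(n) != j      # exclude j itself
    views = []
    for zj in (z1[j], z2[j]):     # one prediction per view
        s1 = np.array([angular_similarity(zj, z)
                       for z in z1[keep]])
        s2 = np.array([angular_similarity(zj, z)
                       for z in z2[keep]])
        num = s1 @ Q[keep] + s2 @ Q[keep]
        den = 2.0 * Q[keep].sum(axis=0) + 1e-12
        views.append(credibility_adjust(num / den))
    return (views[0] + views[1]) / 2.0
\end{verbatim}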
In addition to error resistance, we can transform out of the labeled distribution to propagate labeled information to a wider scope without introducing label errors.
\begin{algorithm}[ht] \caption{An iteration of the CCP algorithm.} \label{ccp} \begin{algorithmic}[1]
\State \textbf{Given} $\Xi$, $\mathcal{T}$, $\{q_l\}_{l\in L}$, $X$, $p_{\textrm{last}}$, $d_{\textrm{max}}$
\If{$\{q_u\}_{u\in U}$ are not available}
\State Initialize $q_u = \vec{0}$ for $u\in U$ \label{ccp:init_q}
\EndIf
\State Reset network variables in $f_b$, $f_z$ \label{ccp:init_f_b}
\For{$\xi \in \{1, 2, \ldots, \Xi\}$} \label{ccp:outermost_iter}
\For{semi-labeled batches $\{(x_i, q_i)\}_{i\in \mathcal{B}_l \cup \mathcal{B}_u}$} \label{ccp:batch_draw}
\State Randomly draw $t_1, t_2 \in \mathcal{T}$ to form $\{(x^{t_1}_i, q_i), (x^{t_2}_i, q_i)\}_{i\in \mathcal{B}_l \cup \mathcal{B}_u}$
\State Compute $z_i^{t_1} = f_z(f_b(x^{t_1}_i))$, $z_i^{t_2} = f_z(f_b(x^{t_2}_i))$ for $i\in \mathcal{B}_l \cup \mathcal{B}_u$
\State Compute $\{\tilde{q}_{j}^{[\xi]}\}_{j\in \mathcal{B}_u}$ using \cref{q_j}
\State Train $f_b$, $f_z$ using the gradient of $\mathcal{L}_{\text{SSC}}$ computed with $\{(z_i^{t_1}, q_i), (z_i^{t_2}, q_i)\}_{i\in \mathcal{B}_l \cup \mathcal{B}_u}$
\EndFor
\EndFor
\State Update $\hat{q}_u = \frac{1}{\gamma} \sum_{\xi=1}^{\Xi} \nicefrac{\tilde{q}_u^{[\xi]}}{\Xi}$ for $u\in U$ \label{ccp:avg_over_epoch}
\For{$k\in c$} \label{ccp:begin_cred_adj}
\State $q_{u, k} = \hat{q}_{u, k} - \max_{k'\in c\setminus k}(\hat{q}_{u, k'})$ for $u\in U$ \label{ccp:cred_adj}
\EndFor \label{ccp:end_cred_adj}
\State Clip all values in $\{q_u\}_{u\in U}$ to lie in $[0,1]$ \label{ccp:clip}
\State Compute $p$, $\{\hat{\omega}_u\}_{u\in U}$ using \cref{kl_div_subsamp_algo} \label{ccp:compute_p}
\State Set the bottom $p\%$ of $\{q_u\}_{u\in U}$ to $\vec{0}$ ordered by $\{\hat{\omega}_u\}_{u\in U}$ \label{ccp:subsamp}
\State Update $p_{\textrm{last}} = p$, $d_{\textrm{max}} = \nicefrac{d_{\textrm{max}}}{10}$ \label{ccp:update_p_last}
\State \textbf{Return} $f_b$, $p_{\textrm{last}}$, $d_{\textrm{max}}$, $\{q_u\}_{u\in U}$ (used in next iteration) \label{ccp:return}
\end{algorithmic} \end{algorithm}
\begin{algorithm}[ht] \caption{Compute propagated credibility vectors.} \label{q_j} \begin{algorithmic}[1]
\State \textbf{Given} $\{(z_i^{t_1}, q_i), (z_i^{t_2}, q_i)\}_{i\in \mathcal{B}_l \cup \mathcal{B}_u}$
\For{$j \in \mathcal{B}_u$}
\For{$t \in \{t_1, t_2\}$}
\For{$k\in c$}
\State $\displaystyle \psi^t_{j, k} = \frac{\sum_{i\in \mathcal{B} \setminus j} \phi(z^t_{j}, z^{t_1}_i)q_{i,k} + \phi(z^t_{j}, z^{t_2}_i)q_{i,k}}{2\sum_{i\in \mathcal{B}\setminus j} q_{i,k}}$
\EndFor
\For{$k\in c$} \label{q_j:begin_cred_adj}
\State $\tilde{q}^t_{j, k} = \psi^t_{j, k} - \max_{k'\in c\setminus k}(\psi^t_{j, k'})$ \label{q_j:cred_adj}
\EndFor \label{q_j:end_cred_adj}
\EndFor
\State Store $\tilde{q}_{j} = \frac{\tilde{q}^{t_1}_{j} + \tilde{q}^{t_2}_{j}}{2}$ \label{q_j:avg_over_trans}
\EndFor
\State \textbf{Return} $\{\tilde{q}_{j}\}_{j\in \mathcal{B}_u}$
\end{algorithmic} \end{algorithm}
In \cref{ccp}, \cref{ccp:avg_over_epoch,ccp:begin_cred_adj,ccp:cred_adj,ccp:end_cred_adj,ccp:clip}, we adjust $\{\hat{q}_u\}_{u\in U}$ in several ways to make them more suitable as labels. In \cref{ccp:avg_over_epoch}, we include a scaling factor $\nicefrac{1}{\gamma}$ outside the average to scale the strength of credibility vectors.
Our choice for $\gamma$ is the value of the maximum strength among all averaged pseudo-labels, $\gamma=\max_{u\in U, k\in c} \sum_{\xi=1}^{\Xi} \nicefrac{\tilde{q}_{u, k}^{[\xi]}}{\Xi}$. This ensures the strongest pseudo-label will have a strength of $1$. Before we clip all values outside the range of $[0,1]$ in \cref{ccp:clip}, we compute a final credibility adjustment on every vector to ensure that the final pseudo-label is a proper credibility representation. The scaled, credibility-adjusted, and clipped vectors are denoted $\{q_u\}_{u\in U}$. \subsection{Subsampling} \label{subsec:subsampling} In \cref{ccp:compute_p}, we use \cref{kl_div_subsamp_algo} to decide which predictions to subsample. This is inspired by self-training mechanisms \cite{https://doi.org/10.48550/arxiv.2202.12040}. These techniques typically use the max score of a learned classifier as an indicator of confidence. Similarly, our indicator of confidence for a credibility vector is the maximum of its \textit{unclipped} values, \emph{i.e.}, $\{\hat{\omega}_u = \max(\hat{q}_u)\}_{u\in U}$. The credibility vectors with the lowest confidence are \say{reset} back to the zero vector between iterations of CCP or discarded when building a classifier. During pseudo-label assignment, this allows CCP to iteratively build up a picture of the class structure contained in unlabeled data using \say{easy} unlabeled samples to later predict \say{hard} ones. We compute a percentage $p \in \{0\%, 1\%, \ldots, 99\%\}$ that represents what percentage of the least confident $\{q_u\}_{u\in U}$ will be reset or discarded. Performance gains can be achieved from careful subsampling (Appendix \ref{sec:ccp_iter_perf}). However, aggressive $p$ values can derail CCP effectiveness. We hypothesize the instability is similar in cause to the instability of self-training mechanisms \cite{https://doi.org/10.48550/arxiv.2001.07685}. Unlike self-training, CCP is shown highly stable without any subsampling. Accordingly, our approach is to balance the desire to reset weak vectors with limiting the divergence of the predicted class distribution of the selected unlabeled data from the predicted class distribution of all unlabeled data. Importantly, unlike other work \cite{DBLP:journals/corr/abs-1905-08171,DBLP:journals/corr/abs-1911-09785,DBLP:journals/corr/abs-2106-05682,simmatch}, we leverage no assumptions about the true class frequency distribution of unlabeled data. We argue that such assumptions, \emph{e.g.}, the use of distribution alignment (aligning the marginal distribution of predicted labels with ground-truth labels), are rarely applicable to real-world datasets where unlabeled data may assume any distribution. Concretely, in \cref{kl_div_subsamp_algo}, \cref{kl_div_subsamp_algo:q}, we compute the anchor distribution, $Q$, by summing the weights for every class across all $\{q_u\}_{u\in U}$ and dividing by the total mass. We search over candidate $p$'s with the $p$ used in the previous iteration minus one percentage point, $p_{\text{last}} - 1\%$, as the maximum to ensure we do not increase $p$ between iterations. At \cref{kl_div_subsamp_algo:p}, we compute the new distribution, $P$, obtained after resetting the candidate percentage of vectors. At \cref{kl_div_subsamp_algo:d}, we compute the Kullback–Leibler (KL) divergence \cite{10.1214/aop/1176996454} between these distributions.
At \cref{kl_div_subsamp_algo:choose_p}, we choose $p$ by selecting the maximum candidate that obeys a strict limit on the divergence, $d_{\textrm{max}}$. Also to support convergence, we divide $d_{\textrm{max}}$ by $10$ for the next iteration. In summary, this algorithm aims to subsample the data such that the class distribution of the subsampled data is as close as possible to the original data. Our metric is unsupervised, free of imposing assumptions and normalized with respect to the dataset size. This suggests, in theory, a single schedule for $d_{\textrm{max}}$ should generalize well across datasets. Indeed, in practice, we find an initial value of $d_{\textrm{max}}=0.01$ to work \textit{generally} well across all our benchmark datasets between iterations of CCP and when building a classifier (see Appendix \ref{sec:ccp_iter_perf}). \begin{algorithm}[ht] \caption{Compute a subsample percentage.} \label{kl_div_subsamp_algo} \begin{algorithmic}[1] \State \textbf{Given} $\{\hat{q}_u\}_{u\in U}$, $\{q_u\}_{u\in U}$, $p_{\textrm{last}}$, $d_{\textrm{max}}$ \State Compute $\{\hat{\omega}_u = \max(\hat{q}_u)\}_{u\in U}$ \label{kl_div_subsamp_algo:omega} \State $Q = \nicefrac{\sum_u^U q_u}{\sum_k^c \sum_u^U q_{u,k}}$ \label{kl_div_subsamp_algo:q} \For{$p_i \in \{0\%, 1\%, \ldots, p_{\textrm{last}}-1\%\}$} \State Set the bottom $p_i\%$ of $\{q_u\}_{u\in U}$ to $\vec{0}$ ordered by $\{\hat{\omega}_u\}_{u\in U}$ \State $P = \nicefrac{\sum_u^U q_u}{\sum_k^c \sum_u^U q_{u,k}}$ \label{kl_div_subsamp_algo:p} \State $d_i = D_{\text{KL}}(P \parallel Q) = \sum_{k\in c} P_k\log_2\left(\nicefrac{P_k}{Q_k}\right)$ \label{kl_div_subsamp_algo:d} \EndFor \State $p=\max(\{p_i \text{ for all } i \text{ such that } d_i<d_{\textrm{max}}\})$ \label{kl_div_subsamp_algo:choose_p} \State \textbf{Return} $p$, $\{\hat{\omega}_u\}_{u\in U}$ \end{algorithmic} \end{algorithm} \subsection{Soft Loss Functions} \label{subsec:soft_loss_functions} $\mathcal{L}_{\text{SSC}}$ and $\mathcal{L}_{\text{CLS}}$ are fully supervised loss functions that utilize $\{q_u\}_{u\in U}$ provided by the CCP algorithm. \cref{q_j}, which is inevitably error-prone at the batch level, is functionally isolated from the error-sensitive mechanisms for supervised contrastive and classification learning. Transformed pairs are not treated uniquely in $\mathcal{L}_{\text{SSC}}$ or $\mathcal{L}_{\text{CLS}}$. In other words, transformed pairs can be considered independent samples with identical credibility vectors. To simplify the notation, we formalize $\mathcal{L}_{\text{SSC}}$ and $\mathcal{L}_{\text{CLS}}$ for an arbitrary batch of data and (clipped) credibility vector tuples, $\{(x_i, q_i)\}_{i\in \mathcal{B}}$. We define an $n \times n$ pairwise matching matrix, $M$, where $m_{i,j} = q_i \cdot q_j$ for $i,j \in \mathcal{B}$. Each row of $M$ contains the weights of a weighted arithmetic mean of normalized losses for that sample to all other samples based on the evidence of a positive pair relationship. We scale pairwise similarities by temperature $\tau$ and take the exponential as in prior work \cite{pmlr-v119-chen20j,https://doi.org/10.48550/arxiv.2006.10029,https://doi.org/10.48550/arxiv.2004.11362} to form a $n \times n$ matrix $A$ defined by $a_{i, j} = \exp(\phi(z_i, z_j)/\tau)$ for $i,j \in \mathcal{B}$. We construct a strength vector, $\omega$, defined by $\omega_i= \max(q_i)$ for $i \in \mathcal{B}$. Each $\omega_i$ contains the confidence of $q_i$. 
It serves to scale the magnitude of loss on sample $i$ and the corresponding entries in the normalizing factor for each $a_{i, j}$. We ignore comparisons of a sample with itself via the following modifications: $M = M \odot (1 - \mathbb{I})$, $A = A \odot (1 - \mathbb{I})$, where $\odot$ is element-wise multiplication. We can now define $\mathcal{L}_{\text{SSC}}$ over $\mathcal{B}$: \begin{equation} \label{ssc} \mathcal{L}_{\text{SSC}} = -\frac{1}{n}\sum_{i\in \mathcal{B}} \frac{\omega_i}{\sum_{j\in \mathcal{B}} m_{i, j}}\sum_{j\in \mathcal{B}} m_{i, j} \log\left(\frac{a_{i, j}}{a_i \cdot \omega}\right) \end{equation} \noindent SimCLR and SupCon loss are special cases of $\mathcal{L}_{\text{SSC}}$. In $\mathcal{L}_{\text{SSC}}$, positive pairs are not specified discretely. Every pair of samples has a score in $M$ between $0$ and $1$, which corresponds to how much evidence there is that they constitute a positive pair. If all data is labeled with maximum confidence, $\mathcal{L}_{\text{SSC}}$ becomes SupCon loss. If all data is unlabeled, then only transformed pairs have a maximum positive-pair relationship and all $\omega_i = 1$, and $\mathcal{L}_{\text{SSC}}$ becomes SimCLR loss. Circumventing the discrete selection of positive pairs better reflects the true state of pseudo-labels, \emph{e.g.}, compared to \cite{Zhang2022SemisupervisedCL}. When training a classifier, we use a cross-entropy loss that is similar to SEAL loss except that we feed in $q_i$ in place of normal label vectors. We use the output from $f_g$ instead of $f_z$. We send each $g_i$ through a softmax, $\sigma(g_i)$: \begin{equation} \label{ce} \mathcal{L}_{\text{CLS}} = -\frac{1}{n} \sum_{i\in \mathcal{B}} \sum_{k\in c} q_{i, k} \log\left( \sigma(g_i)_k \right) \end{equation} \noindent Note that because $q_i$ is not a probability distribution, $\mathcal{L}_{\text{CLS}}$ does not strictly compute a cross-entropy. However, the gradient remains the same. One can interpret $\mathcal{L}_{\text{CLS}}$ as a credibility-weighted version of categorical cross-entropy. Recall each clipped $q_i$ will contain at most one non-zero entry. Assume this non-zero entry is at index $k$. The contribution of sample $i$ to $\mathcal{L}_{\text{CLS}}$ can then be written as $-\omega_i \log\left( \sigma(g_i)_k \right)$. \subsection{Building a Classifier} \label{subsec:building_a_classifier} After CCP has concluded, we apply the soft labels, $\{q_u\}_{u\in U}$, with the classification loss described in \cref{ce}. We can reuse the final state of $f_b$ as a pretrained initialization when we train our classifier head, $f_g$, to speed up convergence. However, your pretrained $f_b$ will not be built on the latest pseudo-labels computed at the end of the final CCP iteration. In our experiments, resetting $f_b$ (to a random state or a self-supervised pretrained state) after the final iteration of CCP is more effective. We can also tune $d_{\textrm{max}}$ in \cref{kl_div_subsamp_algo} to subsample the final $\{q_u\}_{u\in U}$. The vectors we eliminate will simply be discarded as opposed to being reset to $\vec{0}$. \section{Experimental Results} \label{sec:experimental_results} As explained in \cref{sec:intro}, our experiments focus on testing how reliably CCP outperforms a supervised baseline in four common SSL data scenarios. We also compare CCP to other SSL algorithms where applicable.
In our experiments, we use four well-known text classification datasets (AG News \cite{DBLP:journals/corr/ZhangZL15}, DBpedia \cite{DBLP:journals/corr/ZhangZL15,dbpedia}, Rotten Tomatoes \cite{pang-lee-2005-seeing}, Yahoo! Answers \cite{10.5555/1620163.1620201,DBLP:journals/corr/ZhangZL15}) and one image classification dataset (CIFAR-10 \cite{Krizhevsky2009a}). We also experimented with a dataset for data loss prevention (DLP) that we collected ourselves. A DLP classifier is designed to identify different kinds of sensitive documents and properly ignore everything else. Our dataset consists of 559,141 documents with the following class breakdown: 30,048 accounting documents, 10,795 invoices, 16,282 healthcare documents, 50,763 lawsuit documents, 5,121 patent filings, 47,741 C source code files, 53,529 Java source code files, and 344,862 non-sensitive documents like encyclopedia pages, dictionary pages, excerpts from books, news articles, chatroom transcripts, \emph{etc.} Summary information about these datasets can be found in Appendix \ref{sec:dataset_details}. We use a common neural network solution for each of the text datasets. The solution is similar to other work that uses convolutional networks for text classification \cite{DBLP:journals/corr/ZhangZL15,9474279,kim2014convolutional,NIPS2015_5849}. Briefly, text data is tokenized and each token index is used to look up a corresponding embedded vector (EV). The sequence of EVs is then processed by a convolutional neural network. For image classification, we follow the setups of prior work for comparability \cite{NEURIPS2018_c1fea270, https://doi.org/10.48550/arxiv.2001.07685, simmatch}. For image data, $f_b$ takes the form of a WRN28-2 \cite{DBLP:journals/corr/ZagoruykoK16}. All CCP iterations on CIFAR-10 used self-supervised pretraining to initialize $f_b$ and $f_z$. More information to recreate our experiments can be found in Appendix \ref{sec:additional_training_details}. \subsection{Data Transformations} \label{subsec:data_transformations} For text data, we design a set of transformations, $\mathcal{T}$, inspired by the simplest and computationally cheapest set of transformations found in \cite{https://doi.org/10.48550/arxiv.2203.12000}. All transformations are implemented as tensor operations in the computational graph. All transformations act on a sample after it has been converted to a sequence of EVs and allow a gradient to pass through them. These consist of applying Laplacian noise consistent with Differential Privacy \cite{6108143}, applying Gaussian noise, randomly hiding and scrambling the order of EVs, and randomly swapping paragraphs and EVs. The use of transformations alone garners large accuracy increases for every algorithm, particularly when label information is scarce. To eliminate this effect on our text experiments, every algorithm tested uses the same $\mathcal{T}$ (and augmented batches). For images, we repeat the transformations found in \cite{pmlr-v119-chen20j}, including random cropping, horizontal flipping, and color distortions. Further details can be found in Appendix \ref{sec:details_on_transformations}. \subsection{Few-Label Performance} \label{subsec:few_label_performance} For text data, we compare CCP to a supervised cross-entropy control with access to all labels, a supervised cross-entropy baseline, and three algorithms that bear important similarities to CCP at varying levels of label availability in \cref{tab:classification_results}.
SimCLR loss and SupCon loss are included as they are specific cases of $\mathcal{L}_{\text{SSC}}$. Further, the Compact Clustering via Label Propagation (CCLP) regularizer can also be seen as a semi-supervised counterpart of SupCon. SimCLR and SupCon use the same $f_z$ and $f_g$ that we use with CCP. For CCLP, we attach $f_g$ to $f_b$ and apply CCLP to the hidden layer of $f_g$ for a fair comparison. CCLP calls for linear decision boundaries formed after the hidden space where CCLP is applied via a traditional classification loss applied immediately after a final fully connected layer. We also search over the CCLP-specific hyperparameters such as the number of steps and CCLP weight and report the best results. All networks are trained until convergence according to their originally prescribed procedure. Each CCP experiment consists of 8 iterations. We use the same subsampling procedure (initial $d_{\textrm{max}}=0.01$, divided by $10$ after each iteration) for each experiment both between CCP iterations and after its conclusion. \begin{table}[ht] \centering \resizebox{\columnwidth}{!}{ \begin{tabular}{@{}ccccccc@{}} \toprule \textbf{\begin{tabular}[c]{@{}c@{}}\% of\\ Labels\end{tabular}} & \textbf{Algorithm} & \textbf{DLP} & \textbf{AG News} & \textbf{DBpedia} & \textbf{\begin{tabular}[c]{@{}c@{}}Rotten\\ Tomatoes\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Yahoo!\\ Answers\end{tabular}} \\ \midrule 100\% & Control & 99.95\% & 91.95\% & 98.79\% & 82.01\% & 73.17\% \\ \midrule \multirow{5}{*}{1\%} & Baseline & 99.23\% & 85.51\% & 94.25\% & 66.39\% & 63.24\% \\ & CCLP & 99.03\% & 79.54\% & 97.57\% & 64.48\% & 51.88\% \\ & SimCLR & 99.46\% & 86.89\% & 95.91\% & 68.10\% & 57.72\% \\ & SupCon & 99.40\% & 85.92\% & 96.12\% & 66.97\% & 60.19\% \\ & CCP (ours) & \textbf{99.70\%} & \textbf{88.89\%} & \textbf{98.24\%} & \textbf{71.14\%} & \textbf{68.26\%} \\ \midrule \multirow{5}{*}{0.1\%} & Baseline & 97.17\% & 78.37\% & 86.47\% & 59.89\% & 53.44\% \\ & CCLP & 95.56\% & 50.66\% & 67.24\% & 57.02\% & 37.80\% \\ & SimCLR & 97.50\% & 80.57\% & 87.75\% & 59.42\% & 37.69\% \\ & SupCon & 97.56\% & 74.84\% & 84.97\% & 57.96\% & 36.38\% \\ & CCP (ours) & \textbf{98.88\%} & \textbf{87.04\%} & \textbf{96.20\%} & \textbf{60.85\%} & \textbf{63.23\%} \\ \midrule \multirow{5}{*}{0.05\%} & Baseline & 94.71\% & 71.91\% & 72.71\% & 57.15\% & 48.86\% \\ & CCLP & 91.71\% & 50.17\% & 60.13\% & 55.53\% & 33.62\% \\ & SimCLR & 95.32\% & 77.09\% & 83.08\% & 56.07\% & 38.79\% \\ & SupCon & 94.96\% & 68.46\% & 80.70\% & \textbf{57.42\%} & 28.25\% \\ & CCP (ours) & \textbf{96.03\%} & \textbf{85.14\%} & \textbf{93.84\%} & 57.35\% & \textbf{59.58\%} \\ \bottomrule \end{tabular} } \caption{The best test set accuracy across all text datasets.} \label{tab:classification_results} \end{table} In \cref{tab:classification_results}, our intention is not to demonstrate state-of-the-art performance on these datasets. A different choice of $f_b$ would likely provide higher accuracy. In contrast, we intend to explore how CCP fares when a supervised baseline is largely unreliable. Note that only CCP is able to outperform or match the baseline in every experiment. SimCLR often outperforms the baseline, but CCP outperforms SimCLR in every experiment by the widest margin when label availability is lowest. CCLP underperforms the baseline in every experiment except on DBpedia with $1\%$ label availability \emph{i.e}\onedot} \def\Ie{\emph{I.e}\onedot when the baseline is highly accurate and the classes are balanced. 
When CCLP succeeds, it outperforms the pretraining methods likely because it makes direct use of pseudo-labels. CCLP and CCP, like many other SSL algorithms, rely on network fitness for quality pseudo-labels. Accordingly, the margin of success for CCP shrinks as the baseline becomes less accurate, \emph{e.g.}, on Rotten Tomatoes with $0.05\%$ label availability, where the baseline is $<8\%$ better than random guessing. For CIFAR-10, following the CCP iterations detailed in Appendix \ref{sec:ccp_iter_perf}, we found a test set classification accuracy of $84.91\%$, $88.82\%$, and $91.37\%$ at the label availability levels of 40, 250, and 4000, respectively. Our fully supervised control accuracy is $94.78\%$. While not perfect, you can compare these results to what is reported in \cite{simmatch}, given the similar setup. At the 40 label availability level, CCP performs significantly better than prior methods like UDA \cite{xie2020unsupervised}, MixMatch \cite{10.5555/3454287.3454741}, and ReMixMatch \cite{DBLP:journals/corr/abs-1911-09785} at $70.95\%$, $52.46\%$, and $80.90\%$, respectively. However, CCP is well below SimMatch's reported accuracy of $94.40\%$. However, SimMatch also reports performance at the 4000-label availability level that is $1.26\%$ above our control. Careful hyperparameter tuning and further CCP iterations may shrink that gap. Importantly, unlike SimMatch and many other modern methods, CCP achieves these results without distribution alignment, which is not a valid assumption for our other three SSL data scenarios. \subsection{Open-Set \& Misalignment Performance} \label{subsec:open_set_and_misalignment_performance} \begin{figure*} \caption{Open-set experiment F-measures. A \color{red} \label{ol_f_plots} \end{figure*} We recreate the experiment found in \cite{NEURIPS2018_c1fea270} to simultaneously explore the open-set and misalignment data scenarios. Briefly, 400 labeled examples of each of the classes bird, cat, deer, dog, frog, and horse are isolated. Originally, each class has 5000 labeled samples. Unlabeled data is sourced as $5000-400=4600$ samples from each of the four classes. In the $0\%$ mismatch scenario, these four classes are deer, dog, frog, and horse. In the $100\%$ mismatch scenario, these four classes are airplane, automobile, ship, and truck. Mismatch percentages between these two extremes replace in-distribution classes with out-of-distribution (OOD) classes one by one (a sketch of this split construction is given below). The test set only contains in-distribution data. In \cref{ol_acc_plot}, we can see that the test accuracy of CCP is strictly above or at the level of the supervised baseline at all degrees of mismatch. We ran CCP for only a single iteration and then built a classifier on the pseudo-labels at varying subsampling levels. Further CCP iterations did not provide any noticeable benefit when OOD data was present at the $25\%$ level or beyond. At $100\%$ mismatch, the effect of CCP is not complete immunity, but instead becomes a problem of tuning the subsampling percentage. When no CCP pseudo-labels are kept, the classifier is equivalent to our supervised baseline. At aggressive subsampling values, only the OOD samples closest to the in-distribution data remain and the decision boundary is minimally perturbed.
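As an illustration of this protocol (not the exact data pipeline used in our experiments), the class-mismatch splits can be constructed roughly as follows, assuming a CIFAR-10 label array \texttt{y}; the order in which OOD classes are swapped in is arbitrary here:
\begin{verbatim}
import numpy as np

ID_CLASSES  = [2, 3, 4, 5, 6, 7]  # bird,cat,deer,dog,frog,horse
POOL        = [4, 5, 6, 7]        # unlabeled classes, 0% mismatch
OOD_CLASSES = [0, 1, 8, 9]        # airplane,automobile,ship,truck

def build_split(y, mismatch, n_lab=400, n_unlab=4600, seed=0):
    rng = np.random.default_rng(seed)
    # One fixed shuffle per class so labeled and unlabeled
    # indices never overlap for in-distribution classes.
    perm = {c: rng.permutation(np.where(y == c)[0])
            for c in range(10)}
    n_swap = round(mismatch * len(POOL))
    unlab_classes = OOD_CLASSES[:n_swap] + POOL[n_swap:]
    lab_idx = np.concatenate(
        [perm[c][:n_lab] for c in ID_CLASSES])
    unlab_idx = np.concatenate(
        [perm[c][n_lab:n_lab + n_unlab] if c in ID_CLASSES
         else perm[c][:n_unlab]
         for c in unlab_classes])
    return lab_idx, unlab_idx
\end{verbatim}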
\begin{figure} \caption{Open-set experiment test set accuracy.} \label{ol_acc_plot} \end{figure} To perform well in these experiments, an algorithm must also be robust to class distribution misalignment as, \emph{e.g.}, the unlabeled data comes from only a single in-distribution class at the $75\%$ mismatch level. In \cref{ol_f_plots}, we show that CCP extracts the new information contained in in-distribution unlabeled data while being robust to OOD data and misalignment. The single-class F-measure \cite{powers2011evaluation} of the classes for which there were new examples in the unlabeled data was reliably increased with CCP. If no unlabeled data existed for a class in an experiment, the F-measure was roughly on par with the supervised baseline. \subsection{Noisy-Label Performance} \label{subsec:noisy_label_performance} To test the effects of noise in the given labels, we repeated the $0\%$ mismatch environment, but randomly changed a percentage of the given labels. The only change we made to \cref{ccp} is that we made the given labels mutable with the same process of pseudo-label proposal and averaging over epochs. In \cref{noise_perf_plot_v2}, we can see pseudo-label accuracy substantially improves across 6 iterations for unlabeled and noisy-labeled data. In \cref{tab:noisy_label_perf}, notably, CCP achieves a higher accuracy than the clean supervised baseline with $30\%$ of its labels randomly changed. \begin{figure} \caption{Noisy-label experiment CCP iterations.} \label{noise_perf_plot_v2} \end{figure} \begin{table}[ht] \centering \resizebox{0.9\columnwidth}{!}{ \begin{tabular}{@{}cccccc@{}} \toprule \multirow{2}{*}{\textbf{Algorithm}} & \multicolumn{5}{c}{\textbf{Label Noise Percentage}} \\ \cmidrule(l){2-6} & \textbf{0\%} & \textbf{10\%} & \textbf{20\%} & \textbf{30\%} & \textbf{40\%} \\ \midrule Baseline & 74.38\% & 68.03\% & 63.37\% & 56.78\% & 55.47\% \\ CCP (ours) & \textbf{83.08\%} & \textbf{78.13\%} & \textbf{75.72\%} & \textbf{75.33\%} & \textbf{70.65\%} \\ \bottomrule \end{tabular} } \caption{Noisy-label experiment test set accuracy.} \label{tab:noisy_label_perf} \end{table} \section{Conclusion} \label{sec:conclusion} We have presented a framework for SSL that combines a novel contrastive approach to pseudo-labeling with an outer iteration designed for learning under label noise. We also introduce a subsampling procedure with a highly generalizable, unsupervised metric to control its effectiveness and provide insight into its effect. The result is a highly reliable and effective SSL framework that does not perform worse than a supervised baseline across four common real-world SSL scenarios, including few-label, open-set, noisy-label, and class distribution misalignment. Future work may include augmenting \cref{q_j} to include successful components from related work such as consistency training between weak/strong augmentation \cite{https://doi.org/10.48550/arxiv.2001.07685} and instance/semantic similarity \cite{simmatch}. \onecolumn \begin{appendices} \section{Additional Illustrations} \label{sec:additional_illustrations} Refer to \cref{ccp_arch} for an illustrated depiction of how $f_b$, $f_z$, and $f_g$ are organized using the CCP framework. \cref{cred_vecs} provides a concrete example of a credibility vector calculation, how it differs from a traditional label vector, and the effect on a cross-entropy calculation.
\begin{figure} \caption{CCP uses an encoder $f_b(\cdot)$ and two projection heads: $f_z(f_b(\cdot))$ for contrastive credibility propagation and $f_g(f_b(\cdot))$ for classification.} \label{ccp_arch} \end{figure} \begin{figure} \caption{\textit{Left:} \label{cred_vecs} \end{figure} \section{Dataset Details} \label{sec:dataset_details} Summary details on the classification datasets used in this work are displayed in \cref{tab:datasets}. \begin{table*}[h] \centering \resizebox{0.8\columnwidth}{!}{ \begin{tabular}{ccccc} \hline \textbf{Dataset} & \textbf{No. Classes} & \textbf{Balanced} & \textbf{No. Train Samples} & \textbf{\begin{tabular}[c]{@{}c@{}}Classification\\ Task\end{tabular}} \\ \hline DLP & 8 & No & 559,141 & Sensitivity + Category \\ AG News & 4 & Yes & 120,000 & Topic \\ DBpedia & 14 & Yes & 560,000 & Topic \\ Rotten Tomatoes & 2 & No & 271,772 & Sentiment \\ Yahoo! Answers & 10 & Yes & 700,000 & Topic \\ CIFAR-10 & 10 & Yes & 50,000 & Image Subject \\ \hline \end{tabular} } \caption{Description of each benchmark dataset.} \label{tab:datasets} \end{table*} \section{CCP Iteration Performance} \label{sec:ccp_iter_perf} In \cref{tab:ccp_iter_perf_full} we display the accuracy of pseudo-labels produced by CCP at each iteration with and without subsampling at all levels of label availability for every text dataset: our DLP dataset, AG News \cite{DBLP:journals/corr/ZhangZL15}, DBpedia \cite{DBLP:journals/corr/ZhangZL15,dbpedia}, Rotten Tomatoes \cite{pang-lee-2005-seeing}, and Yahoo! Answers \cite{10.5555/1620163.1620201,DBLP:journals/corr/ZhangZL15}. We can see most runs have converged after seven iterations, although minor improvements are still occurring in some runs. Using subsampling can occasionally decrease performance across iterations, \emph{e.g.}, on the Rotten Tomatoes dataset at $0.1\%$ label availability. After seven iterations, the accuracy is higher than when not using subsampling, but further iterations appear to be decreasing accuracy. This necessitates the careful tuning of a subsampling schedule. Also, occasionally, runs without subsampling appear to return a higher accuracy at termination by a small margin despite a slower start. Again, this is explainable by an accumulation of errors brought forth by subsampling. A smaller initial $d_{\textrm{max}}$ would alleviate this situation. We use $\Xi=40$ in each CCP iteration for every text dataset. This means all eight CCP iterations consisted of only $320$ epochs, which is less than a typical supervised training session. However, if more epochs are used per iteration, CCP can introduce more significant costs compared to a typical supervised training session.
\begin{table*}[h] \centering \resizebox{\textwidth}{!}{ \begin{tabular}{@{}cccccccccc@{}} \toprule \multirow{2}{*}{\textbf{Dataset}} & \multirow{2}{*}{\textbf{\begin{tabular}[c]{@{}c@{}}\% of\\ Labels\end{tabular}}} & \multirow{2}{*}{\textbf{Subsampling}} & \multicolumn{7}{c}{\textbf{Iteration}} \\ \cmidrule(l){4-10} & & & \textbf{1} & \textbf{2} & \textbf{3} & \textbf{4} & \textbf{5} & \textbf{6} & \textbf{7} \\ \midrule \multirow{6}{*}{DLP} & \multirow{2}{*}{1\%} & N & 99.14\% & 99.40\% & 99.49\% & 99.51\% & 99.54\% & 99.55\% & 99.56\% \\ & & Y & 99.14\% & 99.46\% & 99.57\% & 99.64\% & 99.66\% & 99.68\% & 99.68\% \\ \cmidrule(l){2-10} & \multirow{2}{*}{0.1\%} & N & 97.33\% & 97.82\% & 97.98\% & 98.10\% & 98.15\% & 98.19\% & 98.19\% \\ & & Y & 97.33\% & 98.03\% & 98.34\% & 98.42\% & 98.50\% & 98.54\% & 98.55\% \\ \cmidrule(l){2-10} & \multirow{2}{*}{0.05\%} & N & 93.87\% & 94.80\% & 95.26\% & 95.59\% & 95.82\% & 96.07\% & 96.24\% \\ & & Y & 93.87\% & 95.10\% & 95.30\% & 95.36\% & 95.43\% & 95.48\% & 95.53\% \\ \midrule \multirow{6}{*}{AG News} & \multirow{2}{*}{1\%} & N & 83.51\% & 86.68\% & 87.30\% & 87.60\% & 87.79\% & 87.95\% & 87.89\% \\ & & Y & 83.51\% & 86.50\% & 86.54\% & 87.23\% & 87.67\% & 88.00\% & 88.13\% \\ \cmidrule(l){2-10} & \multirow{2}{*}{0.1\%} & N & 75.29\% & 81.62\% & 83.27\% & 84.03\% & 84.48\% & 84.67\% & 84.84\% \\ & & Y & 75.29\% & 82.17\% & 83.37\% & 84.18\% & 84.74\% & 85.15\% & 85.49\% \\ \cmidrule(l){2-10} & \multirow{2}{*}{0.05\%} & N & 67.14\% & 76.71\% & 79.66\% & 80.78\% & 81.70\% & 82.27\% & 82.72\% \\ & & Y & 67.14\% & 77.56\% & 80.36\% & 81.56\% & 82.00\% & 82.63\% & 82.98\% \\ \midrule \multirow{6}{*}{DBpedia} & \multirow{2}{*}{1\%} & N & 96.24\% & 97.23\% & 97.55\% & 97.69\% & 97.71\% & 97.78\% & 97.81\% \\ & & Y & 96.24\% & 96.80\% & 97.06\% & 97.44\% & 97.58\% & 97.66\% & 97.85\% \\ \cmidrule(l){2-10} & \multirow{2}{*}{0.1\%} & N & 89.88\% & 92.84\% & 93.79\% & 94.23\% & 94.58\% & 94.81\% & 94.87\% \\ & & Y & 89.88\% & 93.45\% & 94.11\% & 94.42\% & 94.84\% & 95.01\% & 95.25\% \\ \cmidrule(l){2-10} & \multirow{2}{*}{0.05\%} & N & 82.75\% & 87.43\% & 89.08\% & 89.99\% & 90.57\% & 90.86\% & 91.09\% \\ & & Y & 82.75\% & 88.17\% & 90.04\% & 91.06\% & 91.50\% & 91.96\% & 92.26\% \\ \midrule \multirow{6}{*}{Rotten Tomatoes} & \multirow{2}{*}{1\%} & N & 64.29\% & 67.84\% & 68.89\% & 69.42\% & 69.55\% & 69.57\% & 69.70\% \\ & & Y & 64.29\% & 68.94\% & 69.92\% & 70.46\% & 70.60\% & 70.66\% & 70.61\% \\ \cmidrule(l){2-10} & \multirow{2}{*}{0.1\%} & N & 57.04\% & 57.75\% & 58.17\% & 58.32\% & 58.38\% & 58.44\% & 58.50\% \\ & & Y & 57.04\% & 58.30\% & 59.11\% & 59.34\% & 59.28\% & 59.22\% & 59.14\% \\ \cmidrule(l){2-10} & \multirow{2}{*}{0.05\%} & N & 54.75\% & 55.28\% & 55.65\% & 55.79\% & 55.89\% & 55.93\% & 55.96\% \\ & & Y & 54.75\% & 55.53\% & 56.31\% & 56.41\% & 56.38\% & 56.42\% & 56.41\% \\ \midrule \multirow{6}{*}{Yahoo! 
Answers} & \multirow{2}{*}{1\%} & N & 63.06\% & 65.70\% & 66.46\% & 66.88\% & 66.90\% & 67.13\% & 67.22\% \\ & & Y & 63.06\% & 66.20\% & 66.43\% & 66.87\% & 67.06\% & 67.30\% & 67.31\% \\ \cmidrule(l){2-10} & \multirow{2}{*}{0.1\%} & N & 52.82\% & 58.23\% & 59.76\% & 60.53\% & 60.99\% & 61.37\% & 61.61\% \\ & & Y & 52.82\% & 58.57\% & 59.83\% & 60.32\% & 60.60\% & 60.76\% & 60.92\% \\ \cmidrule(l){2-10} & \multirow{2}{*}{0.05\%} & N & 45.25\% & 52.94\% & 55.28\% & 56.45\% & 57.19\% & 57.70\% & 57.99\% \\ & & Y & 45.25\% & 53.50\% & 55.89\% & 56.98\% & 57.56\% & 57.89\% & 58.13\% \\ \bottomrule \end{tabular} } \caption{The accuracy of pseudo-labels after seven iterations of CCP. All runs with subsampling use an initial $d_{\text{max}}=0.01$.} \label{tab:ccp_iter_perf_full} \end{table*} \cref{cifar_pseudo_acc_plot} details the pseudo-label accuracy across CCP iterations when using the CIFAR-10 \cite{Krizhevsky2009a} dataset. \cref{cifar_pseudo_acc_plot} also depicts the possible benefit of resetting $d_{\textrm{max}}$ back to its original value of $0.01$ in certain situations. In general, there is a benefit to doing this when the state of pseudo-labels has converged yet there still exist considerable errors and weak predictions. This can be seen by the large boost in pseudo-label accuracy after resetting $d_{\textrm{max}}$ in the 40-label scenario, when the pseudo-label error is highest. The benefit in the 250-label scenario is less pronounced. There is no benefit to resetting $d_{\textrm{max}}$ in the 4000-label scenario. In this scenario, doing so resets many pseudo-labels that were already correct, and it takes several additional iterations for the overall pseudo-label accuracy to recover. However, a smaller value for $d_{\textrm{max}}$ in the 4000-label experiment might still have provided a benefit. In the 40-label experiment, pseudo-label accuracy still has a considerable upward trajectory after 18 iterations. In general, a larger number of clean labels ensures CCP converges more quickly. Also, more aggressive subsampling schedules provide stronger benefits when the error in the pseudo-labels is higher, e.g., when given labels are few. \begin{figure} \caption{Pseudo-label accuracy across CCP iterations with the CIFAR-10 dataset. The subsampling schedule is set with the standard values of $d_{\textrm{max}}$.} \label{cifar_pseudo_acc_plot} \end{figure} \section{Subsampling Pseudo-Labels} \label{sec:subsampling_pseudo_labels} In \cref{ordering_w_omega_less}, we observe that subsampling based on $\{\hat{\omega}_u\}_{u\in U}$ helps to isolate correct pseudo-labels across all text datasets. We can see this is due to correctly propagated labels having considerably stronger confidence than incorrect ones. \say{Easy} unlabeled samples, i.e., samples with high similarity to a labeled sample, are more likely to be predicted correctly with a strong credibility value. \say{Hard} samples will have weak credibility scores and are more likely to be incorrect. \begin{figure} \caption{Subsampling pseudo-labels based on $\{\hat{\omega}_u\}_{u\in U}$ across the text datasets.} \label{ordering_w_omega_less} \end{figure} In \cref{dlp_kl_vs_acc,agnews_kl_vs_acc,dbpedia_kl_vs_acc,rotten_kl_vs_acc,yahoo_kl_vs_acc}, for each text dataset, we vary the number of pseudo-labels kept ($1-p$) while comparing the resulting KL-Divergence (as defined in \cref{kl_div_subsamp_algo}) and the resulting classifier accuracy when trained on the subsampled pseudo-labels.
All classifiers, and the model which produced the initial pseudo-labels, had access to $0.1\%$ of labels. The most frequent observation is a small increase in classifier accuracy at approximately $d_{\textrm{max}}=0.01$ and a drop-off in accuracy as $d_{\textrm{max}}$ becomes too large. The only exception is the Rotten Tomatoes dataset, where values of $d_{\textrm{max}}$ larger than $0.01$ appear to continue to increase the resulting classifier accuracy. The Rotten Tomatoes dataset is the dataset on which our baseline model performs the worst -- only slightly better than random guessing. This would suggest that one would need more aggressive values of $p$ to achieve accuracy increases. This is also evident in our main results, where increasing $p$ results in the slowest increase in accuracy for Rotten Tomatoes compared to all other datasets. In general, this suggests that more aggressive values for $d_{\textrm{max}}$ should be chosen when the baseline model performance is worse, but this requires further investigation. \begin{figure} \caption{KL-Divergence vs Accuracy at varying levels of $1-p$ with the DLP dataset.} \label{dlp_kl_vs_acc} \end{figure} \begin{figure} \caption{KL-Divergence vs Accuracy at varying levels of $1-p$ with the AG News dataset.} \label{agnews_kl_vs_acc} \end{figure} \begin{figure} \caption{KL-Divergence vs Accuracy at varying levels of $1-p$ with the DBpedia dataset.} \label{dbpedia_kl_vs_acc} \end{figure} \begin{figure} \caption{KL-Divergence vs Accuracy at varying levels of $1-p$ with the Rotten Tomatoes dataset.} \label{rotten_kl_vs_acc} \end{figure} \begin{figure} \caption{KL-Divergence vs Accuracy at varying levels of $1-p$ with the Yahoo! Answers dataset.} \label{yahoo_kl_vs_acc} \end{figure} \section{Toggling Transformations in CCP} \label{sec:toggling_transformations_in_ccp} In \cref{perc_to_lab_plot_1}, we consider the effect of turning transformations off in CCP. We first gather all the pseudo-labels produced by CCP on the DLP dataset after a single iteration with and without transformations at all levels of label availability. We then subsample the pseudo-labels with our proposed subsampling procedure at all levels of $p$ and compare the resulting accuracy of the subsample. It is clear that using transformations has a large impact on subsample accuracy at all levels of $p$ and label availability. The effect is more pronounced as label availability becomes scarce. \begin{figure} \caption{Subsample pseudo-label accuracy while toggling the use of transformations on the DLP dataset at all levels of $p$ and label availability.} \label{perc_to_lab_plot_1} \end{figure} \begin{figure} \caption{A zoomed-in look into \cref{perc_to_lab_plot_1}.} \label{perc_to_lab_plot_2} \end{figure} \section{Details on Transformations} \label{sec:details_on_transformations} Each transformation used in this work features one or more parameters that control the magnitude of its effect. Along with the transformation type, these parameters are randomly varied during training within predefined bounds. For efficiency, two text transformations are randomly drawn per forward pass and applied to each sample with random parameters to form two transformed batches. For images, to closely match the experiments of other work, all transformations are applied sequentially per image, each with a probability of occurring. The random crop, horizontal flip, color jitter, and grayscale transformations occur with probabilities $100\%$, $50\%$, $80\%$, and $20\%$, respectively.
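As a concrete reference, the sketch below assembles this image pipeline in a torchvision-style form; it is illustrative only (our actual implementation is not reproduced here and may differ), and the magnitude parameters are those described per transformation in the list that follows.
\begin{verbatim}
# A minimal sketch of the CIFAR-10 augmentation pipeline described above.
from torchvision import transforms

cifar10_transform = transforms.Compose([
    # Random crop keeping at least 10% of the image, aspect ratio in [0.75, 1.25],
    # resized back to 32x32 with bicubic interpolation (applied with probability 100%).
    transforms.RandomResizedCrop(32, scale=(0.1, 1.0), ratio=(0.75, 1.25),
                                 interpolation=transforms.InterpolationMode.BICUBIC),
    transforms.RandomHorizontalFlip(p=0.5),                     # horizontal flip, 50%
    transforms.RandomApply(                                     # color jitter, 80%
        [transforms.ColorJitter(brightness=0.72, contrast=0.72,
                                saturation=0.72, hue=0.18)],
        p=0.8),
    transforms.RandomGrayscale(p=0.2),                          # grayscale, 20%
    transforms.ToTensor(),
])
\end{verbatim}
Applying \texttt{cifar10\_transform} twice to the same image yields the two transformed views used per forward pass.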
A description of each $t\in \mathcal{T}$ can be found below. \begin{itemize} \item \textbf{Text transformations}: \begin{itemize} \item \textbf{Differential Privacy}: Laplacian noise is applied to all EVs in a fashion that is common in the practice of Differential Privacy \cite{6108143}. We randomly vary the strength of the noise, $\epsilon$, between $10$ and $100$. \item \textbf{Gaussian Noise}: Gaussian noise is applied to all EVs. We randomly vary $\mu$ and $\sigma$ between $[-0.5, 0.5]$ and $[0.01, 0.05]$, respectively. \item \textbf{Vector Hide}: We randomly replace EVs with the learned padding vector used to pad short inputs. The proportion of EVs replaced is randomly varied from $10\%$ to $25\%$ of the total length. \item \textbf{Paragraph Swap}: We choose a random index along the length of a sequence of EVs and swap the content above and below that index. \item \textbf{Random Vector Swap}: We randomly replace EVs with randomly chosen EVs from the full vocabulary. The proportion of EVs replaced is randomly varied from $10\%$ to $25\%$ of the total length. \item \textbf{Scramble}: We randomly select indices across the length of a sequence of EVs and randomly scramble their order. The proportion of EVs we choose to scramble is randomly varied from $10\%$ to $25\%$ of the total length. \end{itemize} \item \textbf{Image transformations}: \begin{itemize} \item \textbf{Random crop}: Take a random crop of the image containing, at a minimum, $10\%$ of the original image. Given that the aspect ratio (width over height) of CIFAR-10 images is $1.0$, the aspect ratio of the crop must fall within $[0.75, 1.25]$. The image crops are resized back to $32 \times 32$ using bicubic interpolation. \item \textbf{Horizontal flip}: Flip images left-to-right. \item \textbf{Color Jitter}: Randomly distort the brightness, contrast, saturation, and hue of an image in a randomly chosen order. The maximum delta for brightness, contrast, and saturation jitter is set to $0.72$, and to $0.18$ for hue jitter. \item \textbf{Grayscale}: Transform the colors of the entire image to grayscale. \end{itemize} \end{itemize} \section{Additional Training Details} \label{sec:additional_training_details} For text data, we use the BPEmb byte-pair encoder \cite{heinzerling2018bpemb} to transform a text document into a sequence of token indices cropped to a certain size. Each token index is used to look up an embedding vector (EV) whose initial values are set to the pretrained BPEmb values. The sequence of EVs is read by parallel convolutional layers whose output then feeds into additional convolutional layers depth-wise. The final activation maps undergo global max pooling to obtain a single floating point value per filter. These maximum activations are concatenated together to form $b_i$. $f_z$ and $f_g$ are each designed as a 2-layer MLP. For $f_z$, the size of the hidden (output) layer is 64 (32). For $f_g$ the hidden layer is of size 64 and the output layer size is the number of classes. Small adjustments are made to $f_b$ to ensure the control and baseline reach the desired performance on each text dataset. An illustration of $f_b$ used for text datasets can be found in \cref{f_b_overview}. More information about the hyperparameters of $f_b$ for each dataset can be found in \cref{tab:hyperparams}. \begin{figure} \caption{An illustration of an example encoder used in this work, $f_b$, for text data. ``C:32@5$\times$100" refers to a convolutional layer with 32 filters each of size $5\times 100$.
``GM" refers to global max pooling over the feature activation maps. Red indicates learnable variables.} \label{f_b_overview} \end{figure} \begin{table*}[h] \centering \resizebox{\columnwidth}{!}{ \begin{tabular}{@{}cccccc@{}} \toprule \textbf{Hyperparameter} & \textbf{DLP} & \textbf{AG News} & \textbf{DBpedia} & \textbf{\begin{tabular}[c]{@{}c@{}}Rotten\\ Tomatoes\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Yahoo!\\ Answers\end{tabular}} \\ \midrule Input Size & 512 & 512 & 512 & 256 & 1024 \\ Droput Rate & 0.0 & 0.0 & 0.0 & 0.8 & 0.8 \\ Batch Size & 256 & 256 & 256 & 256 & 256 \\ Optimizer & Adam & Adam & Adam & Adam & Adam \\ Activation & GELU & GELU & GELU & GELU & GELU \\ $\tau$ & 0.1 & 0.1 & 0.1 & 0.1 & 0.1 \\ $\Xi$ & 40 & 40 & 40 & 40 & 40 \\ Weight Decay & 0.0 & 0.0 & 0.0 & 0.001 & 0.001 \\ Learning Rate & 0.00004 & 0.00004 & 0.00004 & 0.00004 & 0.00004 \\ \begin{tabular}[c]{@{}c@{}}First Sequence of\\ Parallel Convolutional Layers\end{tabular} & 32@5$\times$100 $\rightarrow$ 16@3$\times$1 & 32@5$\times$100 $\rightarrow$ 16@3$\times$1 & 32@5$\times$100 $\rightarrow$ 16@3$\times$1 & 32@5$\times$100 $\rightarrow$ 16@3$\times$1 & 64@5$\times$100 $\rightarrow$ 32@3$\times$1 \\ \begin{tabular}[c]{@{}c@{}}Second Sequence of\\ Parallel Convolutional Layers\end{tabular} & 32@9$\times$100 $\rightarrow$ 16@7$\times$1 & 32@9$\times$100 $\rightarrow$ 16@7$\times$1 & 32@9$\times$100 $\rightarrow$ 16@7$\times$1 & 32@9$\times$100 $\rightarrow$ 16@7$\times$1 & 64@9$\times$100 $\rightarrow$ 32@7$\times$1 \\ \begin{tabular}[c]{@{}c@{}}Third Sequence of\\ Parallel Convolutional Layers\end{tabular} & - & - & - & - & 64@13$\times$100 $\rightarrow$ 32@9$\times$1 \\ Global Max Pooling & True & True & True & True & True \\ \bottomrule \end{tabular} } \caption{Hyperparameter settings of $f_b$ for each text dataset. ``32@5$\times$100" refers to a convolutional layer with 32 filters each of size $5\times 100$ \cite{Kingma2015AdamAM, Hendrycks2016}.} \label{tab:hyperparams} \end{table*} For CIFAR-10, we adopt WRN28-2 \cite{DBLP:journals/corr/ZagoruykoK16}. We use a standard SGD optimizer with Nesterov momentum \cite{Polyak1964SomeMO, pmlr-v28-sutskever13}. We use a cosine learning rate decay \cite{DBLP:conf/iclr/LoshchilovH17}. The learning rate is calculated as $0.03\cos(\frac{7\pi s}{16S})$ where $s$ is the current training step, and $S$ is the total number of training steps. Each CCP iteration is allowed to continue until convergence of $\mathcal{L}_{\text{SSC}}$ as measured with an exponential moving average of each batch loss. When building classifiers, we use a fixed length of 1000 epochs. We do not use an exponential moving average of hyperparameters. This, along with other configurations not reported in prior work, may account for some differences in our control performance compared to them. \section{Pseudo-Label Oscillation} \label{sec:pseudo_label_oscillation} In a similar fashion to \cite{https://doi.org/10.48550/arxiv.2012.05458}, we observe oscillations of the predicted pseudo-label across epochs in the presence of a pseudo-label error. The credibility vector value for the correct class begins high (or at least reflects uncertainty \emph{i.e}\onedot} \def\Ie{\emph{I.e}\onedot near zero) but the network eventually memorizes the incorrect pseudo-label as training continues. When there is no pseudo-label error, the magnitude of oscillation is significantly less. Oscillations are also observed during the first iteration when no prior pseudo-label exists. 
This is attributable to the effects of random batch selection and network state. Randomly chosen examples of these behaviors are illustrated in \cref{oscillation_fig}. This allows us to use a similar theoretical motivation as \cite{https://doi.org/10.48550/arxiv.2012.05458} for averaging across epochs in CCP. Assume there is a latent optimal pseudo-label for an unlabeled sample $x_u$ denoted $q_u^{*}\in \mathbb{R}^K$. Denote the pseudo-label obtained at the end of CCP iteration $m$ as $q_u^{[\cdot,m]}$. Based on the oscillatory behavior of pseudo-labels, we can approximate the pseudo-label at the $\xi$-th epoch of the $m$-th iteration for $m\geq 1$, denoted $q_u^{[\xi, m]}$, as \begin{equation} \label{q_delta_approximation} q_u^{[\xi, m]} \approx \alpha^{[\xi, m]}_u \beta^{[\xi, m]}_u + (1-\alpha_u^{[\xi, m]})q_u^{[\cdot,m-1]}, \end{equation} \noindent where $\xi \in \{1, 2, \ldots, \Xi\}$, $q_u^{[\cdot,0]}$ are zero vectors, $\alpha^{[\xi, m]}_u \in [0, 1]$ are coefficients dependent on instances, the network, and $q_u^{[\cdot,m-1]}$, with $\alpha^{[\xi, 1]}_u=1$ for all $\xi$, and $\beta^{[\xi, m]}_u \in \mathbb{R}^K$ are i.i.d. random vectors with $\mathbb{E}[\beta^{[\xi,m]}_u] = q_u^{*}$. Consider a uniformly chosen random epoch and iteration denoted $\xi'$ and $m'$, respectively. We can see that \begin{equation} \label{better_than_before} \lVert \mathbb{E}[q_u^{[\cdot, m'+1]}] - q_u^{*} \rVert \leq \lVert q_u^{[\cdot, m']} - q_u^{*} \rVert \end{equation} \begin{equation} \label{lower_variance} \text{var}(q_{u,k}^{[\cdot, m']}) \leq \text{var}(q_{u,k}^{[\xi', m']}) \quad \forall k\in c \end{equation} That is, CCP is expected to improve the quality of pseudo-labels iteratively, and the new pseudo-labels at the end of an iteration have lower variance due to the averaging mechanism. \begin{figure*} \caption{Randomly chosen plots from the Rotten Tomatoes dataset with $1\%$ label availability depicting pseudo-label oscillation over epochs. On the x-axis is the epoch number. On the y-axis is the non-scaled credibility value for class 1 in blue and class 0 in green. The leftmost column shows oscillation in the presence of a pseudo-label error. The middle column displays significantly degraded oscillation in the presence of a correct pseudo-label. The rightmost column depicts oscillations which can also be observed in the first CCP iteration.} \label{oscillation_fig} \end{figure*} \end{appendices} \end{document}
\begin{document} \title[On Stefan problems with variable surface energy] {On thermodynamically consistent Stefan problems with variable surface energy} \author[J.~Pr\"uss]{Jan Pr\"uss} \address{Institut f\"ur Mathematik \\ Martin-Luther-Universit\"at Halle-Witten\-berg\\ D-60120 Halle, Germany} \email{[email protected]} \author[G.~Simonett]{Gieri Simonett} \address{Department of Mathematics\\ Vanderbilt University \\ Nashville, TN~37240, USA} \email{[email protected]} \author[M.~Wilke]{Mathias Wilke} \address{Institut f\"ur Mathematik \\ Martin-Luther-Universit\"at Halle-Witten\-berg\\ D-60120 Halle, Germany} \email{[email protected]} \subjclass[2000]{Primary: 35R35, 35B35, 35K55; Secondary: 80A22} \keywords{Phase transition, free boundary problem, Gibbs-Thomson law, kinetic undercooling, variable surface tension, surface energy, surface diffusion, stability, instability.} \begin{abstract} A thermodynamically consistent two-phase Stefan problem with temperature-dependent surface tension and with or without kinetic undercooling is studied. It is shown that these problems generate local semiflows in well-defined state manifolds. If a solution does not exhibit singularities, it is proved that it exists globally in time and converges towards an equilibrium of the problem. In addition, stability and instability of equilibria are studied. In particular, it is shown that multiple spheres of the same radius are unstable if surface heat capacity is small; however, if kinetic undercooling is absent, they are stable if surface heat capacity is sufficiently large. \end{abstract} \maketitle \section{Introduction} In the recent publication \cite{PSZ10} the authors studied Stefan problems with surface tension and with or without kinetic undercooling which are consistent with the laws of thermodynamics, in the sense that the total energy is preserved and the total entropy is strictly increasing along nonconstant smooth solutions. \noindent {\bf 1.} \, To formulate this problem, let $\Omega\subset \mathbb R^{n}$ be a bounded domain of class $C^{2}$, $n\geq2$. $\Omega$ is occupied by a material that can undergo phase changes: at time $t$, phase $i$ occupies the subdomain $\Omega_i(t)$ of $\Omega$, respectively, with $i=1,2.$ We assume that $\partial \Omega_1(t)\cap\partial \Omega=\emptyset$; this means that no {\em boundary contact} can occur. The closed compact hypersurface $\Gamma(t):=\partial \Omega_1(t)\subset \Omega$ forms the interface between the phases. The problem consists in finding a family of closed compact hypersurfaces $\Gamma(t)$ contained in $\Omega$ and an appropriately smooth function $u:\mathbb R_+\times\bar{\Omega}\rightarrow\mathbb R$ such that \begin{equation} \label{stefan} \left\{\begin{aligned} \kappa (u)\partial_t u-{\rm div}(d(u)\nabla u)&=0 &&\text{in}&&\Omega\setminus\Gamma(t)\\ \partial_{\nu} u &=0 &&\text{on}&&\partial \Omega \\ [\![u]\!]&=0 &&\text{on}&&\Gamma(t)\\ [\![\psi(u)]\!]+\sigma \mathcal H &=\gamma(u) V &&\text{on}&&\Gamma(t) \\ [\![d(u)\partial_\nu u]\!] &=(l(u)-\gamma(u)V) V &&\text{on}&&\Gamma(t)\\ u(0)&=u_0 &&\text{in} &&\Omega\setminus\Gamma_0,\quad\\ \Gamma(0)&=\Gamma_0. && && \end{aligned}\right. \end{equation} Here $u(t)$ denotes the (absolute) temperature, $\nu(t)$ the outer normal field of $\Omega_1(t)$, $V(t)$ the normal velocity of $\Gamma(t)$, $\mathcal H(t)=\mathcal H(\Gamma(t))=-{\rm div}_{\Gamma(t)} \nu(t)$ the sum of the principal curvatures, and $[\![v]\!]=v_2|_{\Gamma(t)}-v_1|_{\Gamma(t)}$ the jump of a (continuous) function $v$ across $\Gamma(t)$.
Since $u$ means absolute temperature we always assume that $u>0$. Several quantities are derived from the free energies $\psi_i(u)$ as follows: \begin{itemize} \item $\varepsilon_i(u):= \psi_i(u)+u\eta_i(u)$ denotes the internal energy in phase $i$, \item $\eta_i(u) :=-\psi_i^\prime(u)$ the entropy, \item $\kappa_i(u):= \varepsilon^\prime_i(u)=-u\psi_i^{\prime\prime}(u)$ the heat capacity, \item $l(u):=u[\![\psi^\prime(u)]\!]=-u[\![\eta(u)]\!]$ the latent heat. \end{itemize} Furthermore, $d_i(u)>0$ denotes the coefficient of heat conduction in Fourier's law, $\gamma(u)\geq0$ the coefficient of kinetic undercooling, and $\sigma>0$ the coefficient of surface tension. In the sequel we drop the index $i$, as there is no danger of confusion; we just keep in mind that the coefficients in the bulk depend on the phases. The temperature is assumed to be continuous across the interface. However, the free energy and the conductivities depend on the respective phases, and hence the jumps $\varphi(u):=[\![\psi(u)]\!]$, $[\![\kappa(u)]\!]$, $[\![\eta(u)]\!]$, $[\![d(u)]\!]$ are in general non-zero at the interface. Throughout we require that the heat capacities $\kappa_i(u)$ and diffusivities $d_i(u)$ are strictly positive over the whole temperature range $u>0$, and that $\varphi$ has exactly one zero $u_m>0$ called the {\em melting temperature}. If we assume that the coefficient of surface tension $\sigma$ is constant, then this model is consistent with the laws of thermodynamics. In fact, the {\sf total energy} of the system is given by \begin{equation}\label{energy} {\sf E}(u,\Gamma) = \int_{\Omega\setminus\Gamma} \varepsilon(u)\,dx + \int_\Gamma \sigma\, ds, \end{equation} and by the transport and surface transport theorem we have for smooth solutions \begin{align*} \frac{d}{dt}{\sf E}(u(t),\Gamma(t)) &= -\int_\Gamma \{[\![d(u)\partial_\nu u]\!] +[\![\varepsilon(u)]\!]V + \sigma \mathcal H V\}\,ds\\ &= -\int_\Gamma \{[\![d(u)\partial_\nu u]\!] -(l(u)-\gamma(u)V)V\}\,ds=0, \end{align*} and thus, energy is conserved. Also the {\sf total entropy} $\Phi(u,\Gamma)$ defined by \begin{equation}\label{entropy} \Phi(u,\Gamma) = \int_{\Omega\setminus\Gamma} \eta(u)\,dx \end{equation} is nondecreasing along smooth solutions, as \begin{align*} \frac{d}{dt}\Phi(u(t),\Gamma(t)) &=\int_\Omega\frac{1}{u^2} d(u)|\nabla u|^2\,dx - \int_\Gamma \frac{1}{u}\{ [\![d(u)\partial_\nu u]\!]+u[\![\eta(u)]\!]V\}\,ds\\ &=\int_\Omega \frac{1}{u^2}d(u)|\nabla u|^2\,dx + \int_\Gamma \frac{1}{u}\gamma(u) V^2\,ds\ge 0. \end{align*} \noindent {\bf 2.} \, In this paper we consider the physically important case where surface tension $\sigma=\sigma(u)$ is a function of surface temperature $u$. We refer to \cite{Gur07, CLV01, DrPa99, NaSc03, NVC02} for background information on the importance of variable surface tension in fluid flows and phase transitions. Then, following \cite{Ish06} and \cite{Gur07}, the surface energy will be $\int_\Gamma \varepsilon_\Gamma(u)\,ds$ instead of $\int_\Gamma\sigma\,ds$, where $\varepsilon_\Gamma(u)$ denotes the density of surface energy. In addition, one has to take into account the total surface entropy $\int_\Gamma \eta_\Gamma(u) \,ds$, as well as balance of surface energy. The latter means that the Stefan law has to be replaced by a dynamic equation on the moving interface $\Gamma(t)$ of the form \begin{equation*} \kappa_\Gamma(u) \partial_{t,n} u - {\rm div}_\Gamma (d_\Gamma(u)\nabla_\Gamma u) = [\![d(u)\partial_\nu u]\!]
-\big(l(u)-\gamma(u)V+l_\Gamma(u)\mathcal H\big)V, \end{equation*} where $\partial_{t,n}$ denotes the time derivative in normal direction, see \eqref{normal-derivative}. As in the bulk we define on the interface \begin{itemize} \item $\varepsilon_\Gamma(u):= \sigma(u)+u\eta_\Gamma(u)$, the surface internal energy, \item $\eta_\Gamma(u) :=-\sigma^\prime(u)$, the surface entropy, \item $\kappa_\Gamma(u):= \varepsilon^\prime_\Gamma(u)=-u\sigma^{\prime\prime}(u)$, the surface heat capacity, \item $l_\Gamma(u):=u\sigma^\prime(u)=-u\eta_\Gamma(u)$, the surface latent heat. \end{itemize} We also employ Fourier's law on the interface to describe surface heat conduction, i.e. we set $q_\Gamma :=-d_\Gamma(u)\nabla_\Gamma u$, which should be present as soon as the interface has heat capacity. Recalling that $u$ is assumed to be continuous across the interface, the surface temperature \begin{equation} \label{u-Gamma} u_\Gamma:=u_{|_\Gamma} \end{equation} is well-defined. Obviously, if $\sigma$ is constant then $\varepsilon_\Gamma=\sigma$, and $\eta_\Gamma=\kappa_\Gamma =l_\Gamma=0$, hence this model reduces to \eqref{stefan}. On the other hand, if $\sigma$ is linear in $u$ we still have $\kappa_\Gamma=0$ and then it makes sense to also set $d_\Gamma\equiv0$, to obtain the modified Stefan law \begin{equation*} [\![d(u)\partial_\nu u]\!] =\big(l(u)-\gamma(u)V+l_\Gamma(u)\mathcal H\big)V, \end{equation*} which differs from the Stefan law in \eqref{stefan} only by replacing $l(u)$ by $l(u)+l_\Gamma(u)\mathcal H$. This is just a minor modification of \eqref{stefan}, and its analysis remains essentially the same as in \cite{PSZ10}. The only difference is that the stability condition for the equilibria, and in case $\gamma\equiv0$ also the well-posedness condition, changes. More precisely, the well-posedness condition changes from $\varphi^\prime\neq0$ to $\lambda^\prime\neq0$ where $\lambda(s):=\varphi(s)/\sigma(s)$, and the stability condition modifies by replacing $\varphi^\prime/\sigma$ by $\lambda^\prime$. Therefore we concentrate here on the case where $\kappa_\Gamma(u),d_\Gamma(u)>0$, which means that $\sigma$ is strictly concave. It has been shown experimentally that positive surface heat capacity $\kappa_\Gamma$ (as opposed to vanishing surface heat capacity) is important in certain practical situations; see \cite{Ward} for recent work in this direction. Experimental evidence also shows that $\sigma$ is strictly decreasing, hence admits exactly one zero $u_c>0$; $\sigma(u)$ is positive in $(0,u_c)$ and negative for $u>u_c$. Physically, it is reasonable to assume $u_c>u_m$. It turns out that the analysis of the problem with nonlinear surface tension is considerably different from the linear case. In the sequel we always assume that \begin{equation} \label{regularity-coeff} \begin{split} d_i,\psi_i,d_\Gamma,\sigma,\gamma\in C^3(0,u_c),\quad d_i,\kappa_i,d_\Gamma,\kappa_\Gamma,\sigma>0\;\;\text{on}\;\; (0,u_c),\quad i=1,2, \end{split} \end{equation} if not stated otherwise. Furthermore, we let $\gamma\equiv0$ if there is no undercooling, or $\gamma>0$ on $(0,u_c)$ if undercooling is present, and we restrict our attention to the temperature range $u\in(0,u_c)$.
With these restrictions on the parameter functions, we consider the following problem: \begin{equation} \label{stefan-var} \left\{\begin{aligned} \kappa (u)\partial_t u-{\rm div}(d(u)\nabla u)&=0 &&\text{in}&&\Omega\setminus\Gamma(t)\\ \partial_{\nu} u &=0 &&\text{on}&&\partial \Omega \\ [\![u]\!]=0,\quad u_\Gamma&=u &&\text{on}&&\Gamma(t)\\ \varphi(u_\Gamma)+\sigma(u_\Gamma) \mathcal H &=\gamma(u_\Gamma) V &&\text{on}&&\Gamma(t) \\ \kappa_\Gamma(u_\Gamma) \partial_{t,n} u_\Gamma - {\rm div}_\Gamma (d_\Gamma(u_\Gamma)\nabla_\Gamma u_\Gamma)&=&&\\ =[\![d(u)\partial_\nu u]\!] -(l(u_\Gamma)+l_\Gamma(u_\Gamma)\mathcal H&-\gamma(u_\Gamma)V) V &&\text{on}&&\Gamma(t)\\ u(0)&=u_0 && \text{in} && \Omega\setminus\Gamma_0,\\ \Gamma(0)&=\Gamma_0. && \end{aligned}\right. \end{equation} Here $\varphi(u)=[\![\psi(u)]\!]$, and $\partial_{t,n}u_\Gamma$ denotes the time derivative of $u_\Gamma$ in normal direction, defined by \begin{equation} \label{normal-derivative} \partial_{t,n}u_\Gamma(t,p):=\frac{d}{d\tau}u_\Gamma(t+\tau,x(t+\tau,p))\big|_{\tau=0}\,, \quad t>0,\quad p\in\Gamma(t), \end{equation} with $\{x(t+\tau,p)\in\mathbb R^{n}:(\tau,p)\in (-\varepsilon,\varepsilon)\times\Gamma(t)\}$ the flow induced by the normal vector field $(V\nu)$. That is, $[\tau\mapsto x(t+\tau,p)]$ defines for each $p\in\Gamma(t)$ a flow line through $p$ with \begin{equation*} \label{flow-line} \quad \frac{d}{d\tau}x(t+\tau,p)=(V\nu)(t+\tau,x(t+\tau,p)), \quad x(t+\tau,p)\in\Gamma(t+\tau), \quad \tau\in (-\varepsilon,\varepsilon), \end{equation*} and $x(t,p)=p$. The existence of a unique trajectory $$\{x(t+\tau,p)\in\mathbb R^{n}: \tau\in (-\varepsilon,\varepsilon)\},\quad p\in\Gamma(t),$$ with the above properties is not completely obvious, see for instance \cite{MS99b} for a proof. We note that the (non-degenerate) equilibria for this problem are the same as those for \eqref{stefan}: the temperature is constant, and the disperse phase $\Omega_1$ consists of finitely many nonintersecting balls of the same radius. We shall prove that such an equilibrium is stable in the state manifold $\mathcal{SM}$ defined below if $\Omega_1$ is connected and the stability condition introduced in the next section holds. Such an equilibrium will be a local maximum of the total entropy, as we found before in \cite{PSZ10} for the case of constant surface tension. To the best of our knowledge, there is no mathematical work on thermodynamically consistent Stefan problems with surface tension depending on the temperature. \noindent {\bf 3.} \, The case where undercooling is present is the simpler one, as both equations on the interface are dynamic equations. In particular, the Gibbs-Thomson identity $$ \gamma(u_\Gamma)V-\sigma(u_\Gamma)\mathcal H= \varphi(u_\Gamma)$$ can be understood as a {\bf mean curvature flow} for the evolution of the surface, modified by physics. If there is no undercooling, it is convenient to eliminate the time derivative of $u_\Gamma$ from the energy balance on the interface. In fact, differentiating the Gibbs-Thomson law w.r.t.\ time $t$, and setting $\lambda(s)=\varphi(s)/\sigma(s)$, we obtain $$ \lambda^\prime(u_\Gamma)\partial_{t,n}u_\Gamma +\mathcal H^\prime(\Gamma) V=0\quad \mbox{ on } \; \Gamma(t),$$ where $\mathcal H^\prime(\Gamma)={\rm tr}\, L_\Gamma^2 +\Delta_\Gamma$, with $L_\Gamma$ the Weingarten tensor and $\Delta_\Gamma$ the Laplace-Beltrami operator of $\Gamma$. (These quantities will be introduced in Section 3).
Hence substitution into surface energy balance yields with \begin{equation*} \label{T-Gamma} T_\Gamma(u_\Gamma) :=\omega_\Gamma(u_\Gamma)-\mathcal H^\prime(\Gamma), \; \omega_\Gamma (u_\Gamma):=\lambda^\prime(u_\Gamma)(l(u_\Gamma)- l_\Gamma(u_\Gamma)\lambda(u_\Gamma))/\kappa_\Gamma(u_\Gamma), \end{equation*} the relation \begin{equation} \label{T-Gamma-V} T_\Gamma(u_\Gamma) V=\frac{\lambda^\prime(u_\Gamma)}{\kappa_\Gamma(u_\Gamma)} \big\{{\rm div}_\Gamma (d_\Gamma(u_\Gamma)\nabla_\Gamma u_\Gamma)+ [\![d(u)\partial_\nu u]\!]\big\}. \end{equation} As $V$ should be determined only by the state of the system and should not depend on time derivatives of other variables, this indicates that the problem without undercooling is not well-posed if the operator $T_\Gamma(u_\Gamma)$ is not invertible in $L_2(\Gamma)$, as $V$ might not be well-defined. On the other hand, if $T_\Gamma(u_\Gamma)$ is invertible, then \begin{equation} V = [T_\Gamma(u_\Gamma)]^{-1}\frac{\lambda^\prime(u_\Gamma)}{\kappa_\Gamma(u_\Gamma)} \Big\{{\rm div}_\Gamma (d_\Gamma(u_\Gamma)\nabla_\Gamma u_\Gamma)+ [\![d(u)\partial_\nu u]\!]\Big\} \end{equation} uniquely determines the interfacial velocity $V,$ gaining two derivatives in space, and showing that the right hand side of surface energy balance is of lower order. Note that \begin{equation} \omega_\Gamma(s) = s\sigma(s)[\lambda^\prime(s)]^2/\kappa_\Gamma(s)\geq 0 \quad \mbox{ in } (0,u_c); \end{equation} indeed, since $l(s)=s\varphi^\prime(s)$ and $l_\Gamma(s)=s\sigma^\prime(s)$, we have $l(s)-l_\Gamma(s)\lambda(s)=s\sigma(s)\lambda^\prime(s)$, which yields the asserted identity. Moreover, $\omega_\Gamma(s)=0$ if and only if $\lambda^\prime(s)=0$. Therefore the well-posedness condition becomes more complex compared to the case $\kappa_\Gamma\equiv0$. Going one step further, taking the surface gradient of the Gibbs-Thomson relation yields the identity \begin{equation} \label{mcflow} \kappa_\Gamma(u_\Gamma) V - d_\Gamma(u_\Gamma)\mathcal H(\Gamma) = \kappa_\Gamma(u_\Gamma)\{ f_\Gamma(u_\Gamma)+F_\Gamma(u,u_\Gamma)\}, \end{equation} as will be shown in Section~6. Here the function $f_\Gamma$ is the antiderivative of $\lambda(d_\Gamma/\kappa_\Gamma)^\prime$ vanishing at $s=u_m$, and $F_\Gamma$ is nonlocal in space and of lower order. So also in the case where undercooling is absent we obtain a {\bf mean curvature flow}, modified by physics. Here some remarks about the nature of $T_\Gamma$ are in order. $T_\Gamma$ is a mathematical quantity which does not seem to allow for a physical interpretation. In case that $\Gamma$ coincides with an equilibrium of system \eqref{stefan-var}, invertibility of $T_\Gamma$ is characterized by the conditions $l_*\neq 0$ and $\eta_*\neq 1$, where $l_*$ and $\eta_*$ are defined below. As $T_\Gamma$ contains the term $\Delta_\Gamma$, a second order differential operator acting on functions defined on $\Gamma$, $T^{-1}_\Gamma$ (and hence also $V$) will gain two 'spatial' derivatives. \goodbreak \noindent {\bf 4.} Since we do not impose any structural assumptions on the free energy, the diffusivity, and the surface tension at $u=0$, it is not possible to show that the temperature $u(t)$ remains positive. It would be an important question to characterize constitutive laws which ensure this property. On the other hand, we can also not ensure that solutions stay bounded away from $u_c$. Note that the model is not meaningful for $u>u_c$, as the phases are then no longer separated. This region would correspond to a plasma. In our model we do not allow for the interface $\Gamma$ to touch the boundary of $\Omega$. We refer to \cite{BoPr15} for modeling aspects concerning this situation. The plan for this paper is as follows.
In Section~2 we discuss some fundamental physical properties of the Stefan problem with variable surface tension. In particular, it is shown that the negative total entropy is a strict Lyapunov functional for the problem, and we characterize and analyze the equilibria of the system. The direct mapping method based on the Hanzawa transform, first introduced in \cite{Han81}, is discussed in Section~3. This way the problem is reduced to a quasilinear parabolic problem. In Section~4 we consider the full linearization of the problem at a given equilibrium, and we prove that equilibria are, generically, normally hyperbolic. The last two sections deal with the analysis of the nonlinear problem with and without kinetic undercooling. The analysis is based on results for abstract quasilinear parabolic problems, in particular on the generalized principle of linearized stability, see \cite{KPW10,PSZ09}. We refer here to \cite{DHP03,Mey10,PrSi04} for information on maximal regularity in $L_p$- and weighted $L_p$-spaces, and to \cite{EPS03,Gu86,Gu88,PSZ10} for more background information concerning the Stefan problem. \section{Energy, Entropy and Equilibria} \noindent {\bf(a)}\, The {\sf total energy} of the system \eqref{stefan-var} is given by \begin{equation}\label{energy-var} {\sf E}(u,\Gamma) = \int_{\Omega\setminus\Gamma} \varepsilon(u)\,dx + \int_\Gamma \varepsilon_\Gamma(u_\Gamma)\,ds, \end{equation} and by the transport and surface transport theorem we have for smooth solutions \begin{align*} \frac{d}{dt}{\sf E}(u,\Gamma) &= \int_\Omega \kappa(u)\partial_tu \,dx-\int_\Gamma [\![\varepsilon(u)]\!]V\,ds + \int_\Gamma\{\kappa_\Gamma(u_\Gamma)\partial_{t,n} u_\Gamma -\varepsilon_\Gamma(u_\Gamma) \mathcal H V\}\,ds\\ &= \int_\Gamma \{-[\![d(u)\partial_\nu u]\!]-[\![\varepsilon(u)]\!]V +{\rm div}_\Gamma( d_\Gamma(u_\Gamma)\nabla_\Gamma u_\Gamma)\\ &\qquad\;\; +[\![d(u)\partial_\nu u]\!]-(l(u)+l_\Gamma(u_\Gamma)\mathcal H)V+\gamma(u_\Gamma)V^2-\varepsilon_\Gamma(u_\Gamma)\mathcal H V\}\,ds\\ &= -\int_\Gamma\{ [\![\psi(u)]\!]+\sigma(u_\Gamma)\mathcal H -\gamma(u_\Gamma)V\}V\,ds=0 \end{align*} by the Gibbs-Thomson law, and thus, energy is conserved. \\ \noindent {\bf(b)}\, The {\sf total entropy} of the system, given by \begin{equation}\label{entropy-st} \Phi(u,\Gamma)= \int_{\Omega\setminus\Gamma} \eta(u)\,dx +\int_\Gamma \eta_\Gamma(u_\Gamma)\,ds, \end{equation} satisfies \begin{align*} \frac{d}{dt}\Phi(u,\Gamma)&= \int_\Omega \eta^\prime(u)\partial_t u\,dx +\int_\Gamma\{\partial_{t,n}\eta_\Gamma(u_\Gamma) -([\![\eta(u)]\!]+\eta_\Gamma(u_\Gamma)\mathcal H)V\}\,ds\\ &= \int_\Omega \frac{1}{u}\kappa(u)\partial_tu\,dx +\int_\Gamma\frac{1}{u_\Gamma}\{\kappa_\Gamma(u_\Gamma)\partial_{t,n}u_\Gamma +(l(u)+l_\Gamma(u_\Gamma)\mathcal H)V\}\,ds\\ &=\int_\Omega\frac{1}{u^2} d(u)|\nabla u|^2\,dx \\ &+ \int_\Gamma\frac{1}{u_\Gamma} \{ -[\![d(u)\partial_\nu u]\!]+{\rm div}_\Gamma (d_\Gamma(u_\Gamma)\nabla_\Gamma u_\Gamma) +[\![d(u)\partial_\nu u]\!]+\gamma(u_\Gamma)V^2\}\,ds\\ &=\int_\Omega \frac{1}{u^2}d(u)|\nabla u|^2\, dx + \int_\Gamma\frac{1}{u_\Gamma^2}\{d_\Gamma(u_\Gamma)|\nabla_\Gamma u_\Gamma|^2+ u_\Gamma\gamma(u_\Gamma) V^2\}\,ds\ge 0, \end{align*} where we employed the transport theorem, the surface transport theorem and \eqref{stefan-var}. In particular, the negative total entropy is a Lyapunov functional for problem \eqref{stefan-var}.
\noindent {\bf(c)}\, Even more, $-\Phi$ is a strict Lyapunov functional in the sense that it is strictly decreasing along smooth solutions which are non-constant in time. Indeed, if at some time $t_0\geq0$ we have \begin{equation*} \frac{d}{dt}\Phi(u(t_0),\Gamma(t_0)) =0, \end{equation*} then \begin{equation*} \int_\Omega \frac{1}{u^2}d(u)|\nabla u|^2\, dx+ \int_\Gamma\frac{1}{u_\Gamma^2}d_\Gamma(u_\Gamma)|\nabla_\Gamma u_\Gamma|^2\, ds + \int_\Gamma \frac{1}{u_\Gamma}\gamma(u_\Gamma) V^2\,ds = 0, \end{equation*} hence $\nabla u(t_0)=0$ in $\Omega$ and $\gamma(u_\Gamma(t_0))V(t_0)=0$ on $\Gamma(t_0)$. This implies $u(t_0)=const=u_\Gamma(t_0)$ in $\Omega$, and $\mathcal H(t_0)=-[\![\psi(u(t_0))]\!]/\sigma(u_\Gamma(t_0))=const$, provided we have \begin{equation}\label{sigma} [\![\psi(s)]\!]=0\quad \Rightarrow \quad \sigma(s)>0. \end{equation} Physically, this assumption is plausible, as it means that at melting temperature $u_m>0$ (defined as the {\em unique} positive zero of the function $\varphi(s):=[\![\psi(s)]\!]$) the surface tension $\sigma(u_m)$ is positive. Since $\Omega$ is bounded, we may conclude that $\Gamma(t_0)$ is a union of finitely many, say $m$, disjoint spheres of equal radius, i.e.\ $(u(t_0),\Gamma(t_0))$ is an equilibrium. Therefore, the {\em $\omega$ limit set} of solutions {\em within} the state manifold defined below is contained in the $(mn+1)$-dimensional manifold of equilibria \begin{equation} \label{equilibria-I} \begin{aligned} \mathcal E&=\big\{\big(u_\ast,\bigcup_{1\le l\le m}S_{R_\ast}(x_l)\big): 0<u_\ast<u_c,\ [\![\psi(u_\ast)]\!]=(n-1)\sigma(u_*)/R_*,\\ &\hspace{4cm}\bar B_{R_\ast}(x_l)\subset \Omega,\, \bar{B}_{R_\ast}(x_l)\cap \bar{B}_{R_\ast}(x_k)=\emptyset,\ k\neq l\big\}, \end{aligned} \end{equation} where $S_{R_\ast}(x_l)$ denotes the sphere with radius $R_\ast$ and center $x_l$. \noindent {\bf(d)}\, Another interesting observation is the following. Consider the critical points of the functional $\Phi(u,u_\Gamma,\Gamma)$ with constraint ${\sf E}(u,u_\Gamma,\Gamma)={\sf E}_0$, say on $$U:=\{(u,u_\Gamma,\Gamma):\; u\in C(\bar{\Omega}\setminus\Gamma),\;\; \Gamma\in \mathcal{MH}^2(\Omega),\;\; u_\Gamma\in C(\Gamma),\;\; u,u_\Gamma>0\},$$ see below for the definition of $\mathcal{MH}^2(\Omega)$. So here we do not assume from the beginning that $u$ is continuous across $\Gamma$, and $u_\Gamma$ denotes surface temperature. Then by the method of Lagrange multipliers, there is $\mu\in\mathbb R$ such that at a critical point $(u_*,u_{\Gamma*},\Gamma_*)$ we have \begin{equation} \label{VarEq} \Phi^\prime(u_*,u_{\Gamma*},\Gamma_*)+\mu {\sf E}^\prime(u_*,u_{\Gamma*},\Gamma_*)=0.
\end{equation} The derivatives of the functionals are given by \begin{equation*} \langle \Phi^\prime(u,u_\Gamma,\Gamma) |(v,v_\Gamma,h)\rangle = (\eta^\prime(u)|v)_\Omega + (\eta^\prime_\Gamma(u_\Gamma)|v_\Gamma)_\Gamma -([\![\eta(u)]\!]+\eta_\Gamma(u_\Gamma)\mathcal H(\Gamma)|h)_{\Gamma}, \end{equation*} and $$\langle {\sf E}^\prime(u,u_\Gamma,\Gamma) |(v,v_\Gamma,h)\rangle = (\varepsilon^\prime(u)|v)_\Omega +(\varepsilon^\prime_\Gamma(u_\Gamma)|v_\Gamma)_\Gamma -([\![\varepsilon(u)]\!]+\varepsilon_\Gamma(u_\Gamma) \mathcal H(\Gamma)|h)_{\Gamma}.$$ Setting first $v_\Gamma=h=0$ and varying $v$ in \eqref{VarEq} we obtain $$\eta^\prime(u_*) + \mu \varepsilon^\prime(u_*)=0\quad \mbox{ in } \Omega,$$ varying $v_\Gamma$ yields $$\eta_\Gamma^\prime(u_{\Gamma*}) +\mu \varepsilon_\Gamma^\prime(u_{\Gamma*})=0\text{ on $\Gamma_*$},$$ and finally varying $h$ we get $$[\![\eta(u_*)]\!] +\eta_\Gamma(u_{\Gamma*})\mathcal H(\Gamma)+\mu([\![\varepsilon(u_*)]\!]+\varepsilon_\Gamma(u_{\Gamma*}) \mathcal H(\Gamma_*))=0\text{ on $\Gamma_*$}.$$ The relations $\eta(u)=-\psi^\prime(u)$ and $\varepsilon(u)=\psi(u)-u\psi^\prime(u)$ imply $0=-\psi^{\prime\prime}(u_*)(1+\mu u_*)$, and this shows that $u_*=-1/\mu$ is constant in $\Omega$, since $\kappa(u)=-u\psi^{\prime\prime}(u)>0$ for all $u>0$ by assumption. Similarly on $\Gamma_*$ we obtain that $u_{\Gamma*}=-1/\mu$ is constant as well, provided $\kappa_\Gamma(u_\Gamma)>0$, hence in particular $u_*\equiv u_{\Gamma*}$. This further implies the Gibbs-Thomson relation $[\![\psi(u_*)]\!]+\sigma(u_*) \mathcal H(\Gamma_*)=0$. Since $u_*$ is constant we see that $\mathcal H(\Gamma_*)$ is constant, by \eqref{sigma}. Therefore $\Gamma_*$ is a sphere whenever connected, and a union of finitely many disjoint spheres of equal size otherwise. Thus the critical points of the entropy functional for prescribed energy are precisely the equilibria of problem \eqref{stefan-var}. \noindent {\bf(e)}\, Going further, suppose we have an equilibrium $e_*:=(u_*,u_{\Gamma*},\Gamma_*)$ where the total entropy has a local maximum w.r.t.\ the constraint ${\sf E}={\sf E}_0$ constant. Then $\mathcal D_*:=[\Phi+\mu {\sf E}]^{\prime\prime}(e_*)$ is negative semi-definite on the kernel of ${\sf E}^\prime(e_*)$, where $\mu=-1/u_*$ is the fixed Lagrange multiplier found above. The kernel of ${\sf E}^\prime(e)$ is given by the identity \begin{equation*} (\kappa(u)| v)_\Omega + (\kappa_\Gamma(u_\Gamma)|v_\Gamma)_\Gamma-([\![\varepsilon(u)]\!] + \varepsilon_\Gamma(u_\Gamma) \mathcal H(\Gamma)| h)_\Gamma =0, \end{equation*} which at equilibrium yields \begin{equation} \label{kE} (\kappa_\ast| v)_\Omega + (\kappa_{\Gamma*}|v_\Gamma)_\Gamma+ u_*(l_\ast| h)_\Gamma=0, \end{equation} where $\kappa_\ast :=\kappa(u_\ast)$, $\kappa_{\Gamma*}:=\kappa_{\Gamma}(u_*)$ and \begin{equation} \label{l-stern} l_*:=\frac{1}{u_*}\big\{l(u_*)+l_{\Gamma}(u_*)\mathcal H(\Gamma_*)\big\} =[\![\psi^\prime(u_*)]\!]+\sigma^\prime(u_*)\mathcal H(\Gamma_*). \end{equation} On the other hand, a straightforward calculation yields with $z=(v,v_\Gamma,h)$ \begin{align} \label{2var} -\langle \mathcal D_* z|z\rangle &=\frac{1}{u_\ast^2}\big[ (\kappa_\ast v|v)_\Omega + (\kappa_{\Gamma*}v_\Gamma|v_\Gamma)_\Gamma - \sigma_* u_*( \mathcal H^\prime(\Gamma_*) h|h)_\Gamma\big], \end{align} where $\kappa_{\Gamma*}=\kappa_{\Gamma}(u_*)$ and $\sigma_*=\sigma(u_*)$.
As $\kappa_\ast$ and $\kappa_{\Gamma*}$ are positive, we see that the form $\langle \mathcal D z|z\rangle$ is negative semi-definite as soon as $\mathcal H^\prime ({\Gamma_*}) $ is negative semi-definite. We have \begin{equation*} \mathcal H^\prime(\Gamma_\ast) = (n-1)/{R^2_\ast} + \Delta_\ast, \end{equation*} where $\Delta_\ast$ denotes the Laplace-Beltrami operator on $\Gamma_\ast$, and $R_\ast$ means the radius of an equilibrium sphere. To derive necessary conditions for an equilibrium $e_*$ to be a local maximum of entropy, we consider two cases. \smallskip\\ \noindent {1.} Suppose that $\Gamma_\ast$ is not connected, i.e. $\Gamma_\ast$ is a finite union of spheres $\Gamma^k_\ast$. Set $v=v_\Gamma=0$, and let $h=h_k$ be constant on $\Gamma^k_\ast$ with $\sum_k h_k=0$. Then the constraint \eqref{kE} holds, and with $\omega_n$ the surface area of the unit sphere in $\mathbb R^n$ \begin{equation*} \langle \mathcal D z|z\rangle= (\sigma_*u_\ast)((n-1)/R^2_\ast)\omega_n R^{n-1}_* \,\sum_{k=1}^m h_k^2 >0,\ \end{equation*} hence $\mathcal D$ cannot be negative semi-definite in this case, as $\sigma_*>0$ by \eqref{sigma}. Thus if $e_\ast$ is an equilibrium with maximal total entropy, then $\Gamma_\ast$ must be connected, and hence both phases are connected. \smallskip\\ \noindent 2. Assume that $\Gamma_\ast$ is connected. With $h= -(\kappa_*|1)_\Omega-\kappa_{\Gamma*}|\Gamma_*|$, $v=v_\Gamma=u_*l_*|\Gamma_*|$ we see that $\mathcal D$ negative semi-definite on the kernel of ${\sf E}^\prime(e_\ast)$ implies the condition \begin{equation} \label{var-sc} \zeta_\ast:=\zeta(u_*):=\frac{(n-1)\sigma_* [(\kappa_\ast|1)_\Omega+\kappa_{\Gamma*}|\Gamma_*|]}{u_*l^2_\ast R^2_\ast |\Gamma_\ast|}\le 1. \end{equation} We will see below that connectedness of $\Gamma_*$ and the {\em strong stability condition} $\zeta_*<1$ are sufficient for stability of the equilibrium $e_*$. We point out that the quantity $\zeta_\ast$ defined in \eqref{var-sc} coincides with the analogous quantity in \cite[Definition (1.11)]{PSZ10} in case $\kappa_{\Gamma*}=0$ and $\sigma=$ constant. (Note that $l_\ast=l(u_*)/u_*$ in this case, which differs from the definition of $l_\ast$ in \cite{PSZ10}). \noindent {\bf(f)}\, Summarizing, we have shown \begin{itemize} \item The total energy is constant along smooth solutions of \eqref{stefan-var}. \item The negative total entropy is a strict Lyapunov functional for \eqref{stefan-var}. \item The equilibria of \eqref{stefan-var} are precisely the critical points of the entropy functional with prescribed energy. \item If the entropy functional with prescribed energy has a local maximum at $e_*=(u_*,u_{\Gamma*},\Gamma_*)$ then $\Gamma_\ast$ is connected. \item If $\Gamma_\ast$ is connected, a necessary condition for a critical point $(u_\ast,u_{\Gamma*},\Gamma_\ast)$ to be a local maximum of the entropy functional with prescribed energy is inequality \eqref{var-sc}. \end{itemize} \noindent {\bf (g)}\, We would like to point out a phenomenon, in the absence of kinetic undercooling, which is due to positive surface heat capacity $\kappa_\Gamma$. If $\kappa_\Gamma$ at an equilibrium $(u_\ast,u_{\Gamma*},\Gamma_\ast)$ is large enough and $\Gamma_*$ is disconnected, then such a steady state is stable, see Theorem~\ref{lin-stability} and Theorem~\ref{nonlinstab0}. Hence, this case seems to prevent the onset of Ostwald ripening. However, such equilibria cannot be maxima of the total entropy. This is in strict contrast to the situation where the surface tension $\sigma$ is constant.
In this case it is shown in \cite{PSZ10} that multiple spheres (of the same radius) are always unstable for \eqref{stefan}. This situation is reminiscent of the onset of {\sl Ostwald ripening}, a process that manifests itself in the way that larger structures grow while smaller ones shrink and disappear. Here we refer to \cite{AlFu03, AlFuKa03,AlFuKa04,AlFuKa04b,GORS09,HNO05}, \cite{Niet99}-\cite{NietVe04} and the references therein for various aspects and results on Ostwald ripening. In particular, we mention that the authors in \cite{AlFu03, AlFuKa03, AlFuKa04,AlFuKa04b} use the quasi-stationary Stefan problem with surface tension (i.e., the Mullins-Sekerka problem) to model Ostwald ripening. Under proper scaling assumptions, the way sphere-like particles evolve is analyzed. Interesting and illuminating connections between various versions of the Stefan problem (mostly the Mullins-Sekerka problem) and Ostwald ripening are given in \cite{HNO05,Niet99,Niet00,Niet01b,NietVe04}. It would be of considerable interest to also pursue the effect of coarsening in the framework of the thermodynamically consistent Stefan problems \eqref{stefan} and \eqref{stefan-var}. In case $\Gamma_*$ is connected we show that an equilibrium is stable if $\zeta_*<1$, and unstable if $\zeta_*>1$. This situation is in accordance with the results found in \cite{PSZ10} for the case of constant surface tension. Here we mention that stability of a connected equilibrium for the Stefan problem with constant surface tension has also been obtained in \cite{Ha12}. We refer to the introduction of \cite{PSZ10} for a detailed discussion of the literature. \noindent {\bf(h)}\, Now let us look at the energy of an equilibrium as a function of temperature. Suppose we have an equilibrium $(u,\Gamma)$ at a given energy level ${\sf E}_0$, and assume that $\Gamma$ consists of $m$ disjoint spheres of radius $R$ contained in $\Omega$. Then $$0<R<R_m:=\sup\{ R>0: \, \Omega \mbox{ contains $m$ disjoint balls of radius } R\},$$ and with $\varphi(u):=[\![\psi(u)]\!]$ we have $$ 0=\varphi(u)+\sigma(u)\mathcal H(\Gamma)= \varphi(u) -(n-1)\sigma(u)/R,$$ and hence $ R=R(u)= (n-1)\sigma(u)/\varphi(u).$ Further we have \begin{align*} {\sf E}_e(u)&:= {\sf E}(u,\Gamma)= \int_{\Omega\setminus\Gamma} \varepsilon(u)\,dx +\int_\Gamma \varepsilon_\Gamma(u)\,ds\\ &= \varepsilon_2(u)|\Omega|- |\Omega_1|[\![\varepsilon(u)]\!] + \varepsilon_\Gamma(u)|\Gamma|\\ &= \varepsilon_2(u)|\Omega| - (m\omega_n/n) R(u)^n[\![\varepsilon(u)]\!]+m\omega_n R(u)^{n-1}\varepsilon_\Gamma(u)\\ & =\varepsilon_2(u)|\Omega|+c_{n,m}\Big[\frac{\sigma(u)^n}{\varphi(u)^{n-1}} - u\frac{d}{du}\frac{\sigma(u)^n}{\varphi(u)^{n-1}}\Big], \end{align*} where $c_{n,m}= m\omega_n(n-1)^{n-1}/n.$ Thus we obtain for the total energy of an equilibrium \begin{equation}\label{eq-energy} {\sf E}_e(u)= \delta(u)-u\delta^\prime(u),\quad \delta(u) =|\Omega|\psi_2(u)+c_{n,m}\frac{\sigma(u)^n}{\varphi(u)^{n-1}}. \end{equation} Consequently, the equilibrium temperature for an equilibrium, where $\Gamma$ consists of $m$ components, is the solution of the scalar problem $${\sf E}_0 = {\sf E}_e(u) = \delta(u)-u\delta^\prime(u),\quad 0<u<u_c, \quad 0<\sigma(u)/\varphi(u)<R_m/(n-1).$$ Let us look at the derivative of the function ${\sf E}_e(u)$.
A simple calculation yields \begin{equation*} \begin{aligned} {\sf E}^\prime_e(u)&=-u\delta^{\prime\prime}(u)=-|\Omega|u\psi_2^{\prime\prime}(u) -c_{n,m}u\frac{d}{du}\big[n\left(\sigma/\varphi\right)^{n-1}\sigma^\prime -(n-1)(\sigma/\varphi)^n\varphi^\prime\big]\\ &= |\Omega|\kappa_2(u)-c_{n,m}u\big[n(\sigma/\varphi)^{n-1}\sigma^{\prime\prime}-(n-1)(\sigma/\varphi)^n\varphi^{\prime\prime}\big]\\ &\quad -c_{n,m}n(n-1)(\sigma/\varphi)^{n-1}u\big[(\sigma^\prime)^2/\sigma - 2\sigma^\prime \varphi^\prime/\varphi+ \sigma (\varphi^\prime)^2/\varphi^2\big]\\ &= |\Omega|\kappa_2(u)+ |\Gamma|\kappa_\Gamma(u) -|\Omega_1|[\![\kappa(u)]\!] - ({R^2|\Gamma|}/{(n\!-\!1)\sigma})u \big[ \varphi^\prime - \sigma^\prime \varphi/\sigma\big]^2\\ &=\big[(\kappa(u)|1)_\Omega+|\Gamma|\kappa_\Gamma(u)\big] \!-\! ({R^2(u)|\Gamma|}/{(n\!-\!1)\sigma(u)}) u\big[\,[\![\psi^\prime(u)]\!]+\sigma^\prime(u)\mathcal H(\Gamma)\big]^2. \end{aligned} \end{equation*} Therefore the stability condition $\zeta(u)\leq 1$ is equivalent to ${\sf E}^\prime_e(u)\leq0$, an alternative interpretation to the one obtained above. \section{Transformation to a Fixed Interface} Let $\Omega\subset\mathbb R^n$ be a bounded domain with boundary $\partial \Omega$ of class $C^2$, and suppose $\Gamma\subset \Omega$ is a closed hypersurface of class $C^2$, i.e.\ a $C^2$-manifold which is the boundary of a bounded domain $\Omega_1\subset \Omega$. We then set $\Omega_2=\Omega\setminus\bar{\Omega}_1$. Note that while $\Omega_2$ typically is connected, $\Omega_1$ may be disconnected. However, $\Omega_1$ consists of finitely many components only, as $\partial \Omega_1=\Gamma$ by assumption is a manifold, at least of class $C^2$. In the following, we refer to \cite{PrSi13, PrSi15} for a thorough development of the material presented below. Recall that the {\em second order bundle} of $\Gamma$ is given by $$\mathcal N^2\Gamma:=\{(p,\nu_\Gamma(p),L_\Gamma(p)):\, p\in\Gamma\}.$$ Note that the Weingarten map $L_\Gamma$ (also called the shape operator, or the second fundamental tensor) is defined by $$ L_\Gamma(p) = -\nabla_\Gamma \nu_\Gamma(p),\quad p\in\Gamma ,$$ where $\nabla_\Gamma$ denotes the surface gradient on $\Gamma$. The eigenvalues $\kappa_j(p)$ of $L_\Gamma(p)$ are the principal curvatures of $\Gamma$ at $p\in\Gamma$, and we have $|L_\Gamma(p)|=\max_j|\kappa_j(p)|.$ The {\em curvature} $\mathcal H_\Gamma(p)$ is defined by $$\mathcal H_\Gamma(p) = \sum_{j=1}^{n-1} \kappa_j(p)={\rm tr}\, L_\Gamma(p) = -{\rm div}_\Gamma \nu_\Gamma(p),$$ where ${\rm div}_\Gamma$ means surface divergence. Recall also that the {\em Hausdorff distance} $d_H$ between the two closed subsets $A,B\subset\mathbb R^m$ is defined by $$d_H(A,B):= \max\big\{\sup_{a\in A}{\rm dist}(a,B),\sup_{b\in B}{\rm dist}(b,A)\big\}.$$ Then we may approximate $\Gamma$ by a real analytic hypersurface $\Sigma$ (or merely $\Sigma\in C^3$), in the sense that the Hausdorff distance of the second order bundles of $\Gamma$ and $\Sigma$ is as small as we want. More precisely, for each $\eta>0$ there is a real analytic closed hypersurface $\Sigma$ such that $d_H(\mathcal N^2\Sigma,\mathcal N^2\Gamma)\leq\eta$. If $\eta>0$ is small enough, then $\Sigma$ bounds a domain $\Omega_1^\Sigma$ with $\overline{\Omega^\Sigma_1}\subset\Omega$, and we set $\Omega^\Sigma_2=\Omega\setminus\bar{\Omega}^\Sigma_1$.
It is well known that such a hypersurface $\Sigma$ admits a tubular neighborhood, which means that there is $a>0$ such that the map \begin{eqnarray*} &&\Lambda:\, \Sigma \times (-a,a)\to \mathbb R^n \\ &&\Lambda(p,r):= p+r\nu_\Sigma(p) \end{eqnarray*} is a diffeomorphism from $\Sigma \times (-a,a)$ onto $\mathcal R(\Lambda)$. The inverse $$\Lambda^{-1}:\mathcal R(\Lambda)\to \Sigma\times (-a,a)$$ of this map is conveniently decomposed as $$\Lambda^{-1}(x)=(\Pi_\Sigma(x),d_\Sigma(x)),\quad x\in\mathcal R(\Lambda).$$ Here $\Pi_\Sigma(x)$ means the nonlinear orthogonal projection of $x$ to $\Sigma$ and $d_\Sigma(x)$ the signed distance from $x$ to $\Sigma$; so $|d_\Sigma(x)|={\rm dist}(x,\Sigma)$ and $d_\Sigma(x)<0$ iff $x\in \Omega_1^\Sigma$. In particular we have $\mathcal R(\Lambda)=\{x\in \mathbb R^n:\, {\rm dist}(x,\Sigma)<a\}$. On the one hand, $a$ is determined by the curvatures of $\Sigma$, i.e.\ we must have $$0<a<\min\big\{1/|\kappa_j(p)|: j=1,\ldots,n-1,\; p\in\Sigma\big\},$$ where $\kappa_j(p)$ mean the principal curvatures of $\Sigma$ at $p\in\Sigma$. But on the other hand, $a$ is also connected to the topology of $\Sigma$, which can be expressed as follows. Since $\Sigma$ is a compact (smooth) manifold of dimension $n-1$ it satisfies an (interior and exterior) ball condition, which means that there is a radius $r_\Sigma>0$ such that for each point $p\in \Sigma$ there are $x_j\in \Omega_j^\Sigma$, $j=1,2$, such that $B_{r_\Sigma}(x_j)\subset \Omega_j^\Sigma$, and $\bar{B}_{r_\Sigma}(x_j)\cap\Sigma=\{p\}$. Choosing $r_\Sigma$ maximal, we then must also have $a<r_\Sigma$. In the sequel we fix $$ a= \frac{1}{2}\min\left\{r_\Sigma, \frac{1}{|\kappa_j(p)|}, \, j=1,\ldots, n-1,\; p\in\Sigma\right\}.$$ For later use we note that the derivatives of $\Pi_\Sigma(x)$ and $d_\Sigma(x)$ are given by $$\nabla d_\Sigma(x)= \nu_\Sigma(\Pi_\Sigma(x)), \quad \Pi_\Sigma^\prime(x) = M_0(d_\Sigma(x),\Pi(x))P_\Sigma(\Pi_\Sigma(x))$$ for $|d_\Sigma(x)|<a$, where $P_\Sigma(p)=I-\nu_\Sigma(p)\otimes\nu_\Sigma(p)$ denotes the orthogonal projection onto the tangent space $T_p\Sigma$ of $\Sigma$ at $p\in\Sigma$, and \begin{equation} \label{M-0} M_0(r)(p)=(I-r L_\Sigma(p))^{-1},\quad (r,p)\in (-a,a)\times\Sigma. \end{equation} Note that $$|M_0(r)(p)|\leq 1/(1-r|L_\Sigma(p)|) \leq 2,\quad \mbox{ for all } (r,p)\in (-a,a)\times\Sigma.$$ Setting $\Gamma=\Gamma(t)$, we may use the map $\Lambda$ to parameterize the unknown free boundary $\Gamma(t)$ over $\Sigma$ by means of a height function $h(t,p)$ via $$\Gamma(t)=\{ p+ h(t,p)\nu_\Sigma(p): p\in\Sigma,\; t\geq0\},$$ at least for small $|h|_\infty$. Extend this diffeomorphism to all of $\bar{\Omega}$ by means of $$ \Xi_h(t,x) = x +\chi(d_\Sigma(x)/a)h(t,\Pi_\Sigma(x))\nu_\Sigma(\Pi_\Sigma(x))=:x+\xi_h(t,x).$$ Here $\chi$ denotes a suitable cut-off function. More precisely, $\chi\in\mathcal D(\mathbb R)$, $0\leq\chi\leq 1$, $\chi(r)=1$ for $|r|<1/3$, and $\chi(r)=0$ for $|r|>2/3$. Note that $\Xi_h(t,x)=x$ for $|d(x)|>2a/3$, and $$\Xi_h^{-1}(t,x)= x-h(t,x)\nu_\Sigma(x),\quad x\in\Sigma,$$ for $|h|_\infty$ sufficiently small. Setting \begin{equation*} v(t,x)=u(t,\Xi_\rho(t,x))\quad\text{or}\quad u(t,x)= v(t,\Xi_\rho^{-1}(t,x)) \end{equation*} (where, from now on, we write $\rho$ in place of the height function $h$) we have this way transformed the time varying regions $\Omega\setminus \Gamma(t)$ to the fixed domain $\Omega\setminus\Sigma$. This is the direct mapping method, also called Hanzawa transformation.
By means of this transformation, we obtain the following transformed problem:
\begin{equation}
\label{transformed}
\left\{\begin{aligned}
\kappa(v)\partial_t v +\mathcal A(v,\rho)v&=\kappa(v)\mathcal R(\rho)v &&\text{in}&&\Omega\setminus\Sigma\\
\partial_{\nu} v&=0 &&\text{on}&&\partial \Omega\\
[\![v]\!]=0, \quad v_\Gamma &=v&&\text{on}&&\Sigma \\
[\![\psi(v_\Gamma) ]\!] + \sigma(v_\Gamma) \mathcal H(\rho)- \gamma(v_\Gamma)\beta(\rho)\partial_t\rho&=0 &&\text{on}&&\Sigma\\
\kappa_\Gamma(v_\Gamma)\partial_t v_\Gamma+\mathcal C(v_\Gamma,\rho)v_\Gamma+\mathcal B(v,\rho)v &=
-\{l(v_\Gamma)+l_\Gamma(v_\Gamma)\mathcal H(\rho)-\gamma(v_\Gamma)\beta(\rho)\partial_t\rho\}\beta(\rho)\partial_t\rho &&\text{on}&&\Sigma\\
v(0)=v_0,\ \rho(0)&=\rho_0.&&
\end{aligned}\right.
\end{equation}
Here $\mathcal A(v,\rho)$, $\mathcal B(v,\rho)$ and $\mathcal C(v_\Gamma,\rho)$ denote the transformed versions of the operators $-{\rm div}(d\nabla )$, $-[\![d\partial_\nu]\!]$, and $-{\rm div}_\Gamma(d_\Gamma\nabla_\Gamma)$, respectively. Moreover, $\mathcal H(\rho)$ means the mean curvature of $\Gamma$, $\beta(\rho)=(\nu_\Sigma|\nu_\Gamma(\rho))$, the term $\beta(\rho)\partial_t\rho$ represents the normal velocity $V$, and
$$\mathcal R(\rho)v=\partial_t v -\partial_tu\circ \Xi_\rho.$$
The system (\ref{transformed}) is a quasi-linear parabolic problem on the domain $\Omega$ with fixed interface $\Sigma\subset \Omega$ with {\em dynamic boundary conditions}.

To elaborate on the structure of this problem in more detail, we calculate
$$D\Xi_\rho = I + D\xi_\rho, \quad \quad [D\Xi_\rho]^{-1} = I - {[I + D\xi_\rho]}^{-1}D\xi_\rho =:I-M_1(\rho)^{\sf T},$$
where $D$ denotes the derivative with respect to the space variables. Hence $D\xi_\rho=0$ for $|d_\Sigma(x)|>2a/3$ and
\begin{align*}
D\xi_\rho(t,x)&= \frac{1}{a}\chi'(d_\Sigma(x)/a)\rho(t,\Pi_\Sigma(x))\nu_\Sigma(\Pi_\Sigma(x))\otimes\nu_\Sigma(\Pi_\Sigma(x))\\
& + \chi(d_\Sigma(x)/a)[\nu_\Sigma(\Pi_\Sigma(x))\otimes M_0(d_\Sigma(x))\nabla_\Sigma \rho(t,\Pi_\Sigma(x))]\\
&-\chi(d_\Sigma(x)/a)\rho(t,\Pi_\Sigma(x))L_\Sigma(\Pi_\Sigma(x))M_0(d_\Sigma(x))P_\Sigma(\Pi_\Sigma(x))
\end{align*}
for $0\leq|d_\Sigma(x)|\leq 2a/3$. In particular, for $x\in \Sigma$ we have
$$D\xi_\rho(t,x)=\nu_\Sigma(x)\otimes\nabla_\Sigma\rho(t,x) -\rho(t,x)L_\Sigma(x)P_\Sigma(x), $$
and
$$[D\xi_\rho]^{\sf T}(t,x)=\nabla_\Sigma\rho(t,x)\otimes\nu_\Sigma(x) -\rho(t,x)L_\Sigma(x), $$
since $L_\Sigma(x)$ is symmetric and has range in $T_x\Sigma$. Therefore, $[I + D\xi_\rho]$ is boundedly invertible, if $\rho$ and $\nabla_\Sigma \rho$ are sufficiently small, and
\begin{equation*}
\label{hanzawainv}
|{[I + D\xi_\rho]}^{-1}| \leq 2 \quad \mbox{ for }\; {|\rho|}_\infty \leq \frac{1}{4 ({|\chi'|}_\infty/a+ 2\max_j |\kappa_j|)}, \quad {|\nabla_\Sigma \rho|}_\infty \leq \frac{1}{8}.
\end{equation*}
Employing this notation we obtain
\begin{align*}
\nabla u\circ\Xi_\rho = ([D\Xi_\rho^{-1}]^{\sf T}\circ\Xi_\rho)\nabla v =[D\Xi_\rho]^{-1,{\sf T}}\nabla v =:(I - M_1(\rho))\nabla v,
\end{align*}
and for a vector field $q= \bar{q}\circ \Xi_\rho$
\begin{align*}
(\nabla|\bar{q})\circ\Xi_\rho = (([D\Xi_\rho^{-1}]^{\sf T}\circ\Xi_\rho)\nabla|q) = ([D\Xi_\rho]^{-1,{\sf T}}\nabla|q) = ((I - M_1(\rho))\nabla|q).
\end{align*}
Further we have
\begin{align*}
\partial_t u\circ\Xi_\rho &= \partial_t v -(\nabla u\circ \Xi_\rho|\partial_t\Xi_\rho) = \partial_t v -(([D\Xi_\rho^{-1}]^{\sf T}\circ\Xi_\rho)\nabla v|\partial_t\Xi_\rho) \\
&= \partial_t v -([D\Xi_\rho]^{-1,{\sf T}}\nabla v|\partial_t\xi_\rho ) = \partial_t v -(\nabla v|(I-M_1^{\sf T}(\rho))\partial_t\xi_\rho ) ,
\end{align*}
hence
$$\mathcal R(\rho)v=(\nabla v|(I-M_1^{\sf T}(\rho))\partial_t\xi_\rho ).$$
The normal time derivative transforms as
\begin{equation*}
\partial_{t,n}u_\Gamma\circ\Xi_\rho =\partial_t v_\Gamma + (\nabla_\Sigma v_\Gamma|\nu_\Sigma)V =\partial_t v_\Gamma,
\end{equation*}
as $\nabla_\Sigma v_\Gamma$ is perpendicular to $\nu_\Sigma$. \\
With the Weingarten tensor $L_\Sigma=-\nabla_\Sigma\nu_\Sigma$ we obtain
\begin{equation*}
\begin{aligned}
\nu_\Gamma(\rho)&= \beta(\rho)(\nu_\Sigma-\alpha(\rho)),&& \alpha(\rho)= M_0(\rho)\nabla_\Sigma \rho,\\
M_0(\rho)&=(I-\rho L_\Sigma)^{-1},&& \beta(\rho) = (1+|\alpha(\rho)|^2)^{-1/2},
\end{aligned}
\end{equation*}
and
$$V=(\partial_t\Xi_\rho|\nu_\Gamma) = (\nu_\Sigma|\nu_\Gamma(\rho))\partial_t \rho =\beta(\rho)\partial_t \rho.$$
For the mean curvature $\mathcal H(\rho)=\mathcal H(\Gamma_\rho)$ we have
$$ \mathcal H(\rho) = \beta(\rho)\{ {\rm tr} [M_0(\rho)(L_\Sigma+\nabla_\Sigma \alpha(\rho))] -\beta^2(\rho)(M_0(\rho)\alpha(\rho)|[\nabla_\Sigma\alpha(\rho)]\alpha(\rho))\},$$
an expression involving second order derivatives of $\rho$ only linearly. More precisely,
\begin{align*}
\mathcal H(\rho)&=\beta(\rho)\mathcal G(\rho):\nabla_\Sigma^2\rho + \beta(\rho)\mathcal F(\rho),\\
\mathcal G(\rho)&= M_0^2(\rho)-\beta^2(\rho)M_0(\rho)\nabla_\Sigma\rho\otimes M_0(\rho)\nabla_\Sigma\rho.
\end{align*}
Note that $\beta$ as well as $\mathcal F$ and $\mathcal G$ only depend on $\rho$ and $\nabla_\Sigma\rho$. The linearization of the curvature $\mathcal H(\rho)=\mathcal H(\Gamma_\rho)$ is given by
\begin{equation}
\label{lin-curvature}
\mathcal H^\prime(0)= {\rm tr}\, L_\Sigma^2 +\Delta_\Sigma.
\end{equation}
Here $\Delta_\Sigma$ denotes the Laplace-Beltrami operator on $\Sigma$. $\mathcal B(v,\rho)$ becomes
\begin{align*}
\mathcal B(v,\rho)v&= -[\![d(u)\partial_\nu u]\!]\circ\Xi_\rho=-([\![d(v)(I-M_1(\rho))\nabla v]\!]|\nu_\Gamma)\\
&= -\beta(\rho)([\![d(v)(I-M_1(\rho))\nabla v]\!]|\nu_\Sigma -\alpha(\rho))\\
&= -\beta(\rho)[\![d(v)\partial_{\nu_\Sigma} v]\!] +\beta(\rho)([\![d(v)\nabla v]\!]|(I-M_1(\rho))^{\sf T}\alpha(\rho)),
\end{align*}
since $M_1^{\sf T}(\rho)\nu_\Sigma = 0$, and
\begin{align*}
\mathcal A(v,\rho)v= & -{\rm div}( d(u)\nabla u)\circ\Xi_\rho= -((I-M_1(\rho))\nabla|d(v)(I-M_1(\rho))\nabla v)\\
= & -d(v)\Delta v + d(v)[M_1(\rho)+M_1^{\sf T}(\rho)-M_1(\rho)M_1^{\sf T}(\rho)]:\nabla^2 v\\
&-d^\prime(v)|(I-M_1(\rho))\nabla v|^2 + d(v)((I-M_1(\rho)):\nabla M_1(\rho)|\nabla v).
\end{align*}
We recall that for matrices $A,B\in\mathbb R^{n\times n}$,
$ A:B=\sum_{i,j=1}^n a_{ij}b_{ij}=\text{tr}\,(AB^{\sf T}) $
denotes their inner product.
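As an illustrative consistency check, an aside not used in the proofs, evaluate \eqref{lin-curvature} on a sphere $\Sigma=S_R(x_0)$: there ${\rm tr}\,L_\Sigma^2=(n-1)/R^2$, hence
$$\mathcal H^\prime(0)=\frac{n-1}{R^2}+\Delta_\Sigma .$$
Since the spherical harmonics of degree one on $S_R(x_0)$ satisfy $\Delta_\Sigma Y_1=-\frac{n-1}{R^2}\,Y_1$, we obtain $\mathcal H^\prime(0)Y_1=0$: infinitesimal translations of the sphere do not change its curvature to first order. This is the same mechanism that places the degree-one spherical harmonics in the kernel $N(L)$ in \eqref{N-L-m} below, where on $\Gamma_*$ the operator $A_*=-\mathcal H^\prime(0)$ appears.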
The pull back of $\nabla_\Gamma$ is given by
$$ \nabla_\Gamma \varphi\circ \Xi_\rho = P_\Gamma(\rho) M_0(\rho)\nabla_\Sigma \varphi,$$
where
$$P_\Gamma(\rho) = I -\nu_\Gamma(\rho)\otimes\nu_\Gamma(\rho).$$
This implies for $\mathcal C(v_\Gamma,\rho)v_\Gamma$ the relation
$$\mathcal C(v_\Gamma,\rho)v_\Gamma=-{\rm tr}\{P_\Gamma(\rho)M_0(\rho)\nabla_\Sigma\big(d_\Gamma(v_\Gamma)P_\Gamma(\rho)M_0(\rho)\nabla_\Sigma v_\Gamma\big)\}.$$
It is easy to see that the leading part of $\mathcal A(v,\rho)v$ is $-d(v)\Delta v$, while that of $\mathcal B(v,\rho)v$ is $-\beta(\rho)[\![d(v)\partial_{\nu} v]\!]$, and the leading part of $\mathcal C(v_\Gamma,\rho)v_\Gamma$ turns out to be $-d_\Gamma(v_\Gamma)\Delta_\Sigma v_\Gamma$. This follows from $M_0(0)=1$, $P_\Gamma(0)=P_\Sigma$, $M_1(0)=0$ and $\alpha(0)=0$; recall that we may assume $\rho$ small in the $C^2$-norm. It is important to recognize the quasilinear structure of \eqref{transformed}.

\section{Linearization at Equilibria}

The full linearization at an equilibrium $(u_*,u_{\Gamma*},\Gamma_*)$ with $u_{\Gamma*}=u_*$, $\Gamma_*=\cup_k \Sigma_k$ a finite union of disjoint spheres contained in $\Omega$ and with radius $R_*>0$ given by $R_*=(n-1)\sigma(u_*)/[\![\psi(u_*)]\!]$, reads
\begin{equation}
\label{lin}
\left\{\begin{aligned}
\kappa_*\partial_tv-d_*\Delta v&=\kappa_*f &&\text{in} && \Omega\setminus\Gamma_*\\
\partial_{\nu} v&=0 &&\text{on} && \partial \Omega\\
[\![v]\!]=0,\quad v_\Gamma&=v &&\text{on}&& \Gamma_*\\
\kappa_{\Gamma*}\partial_t v_\Gamma -d_{\Gamma*} \Delta_*v_\Gamma-[\![d_*\partial_\nu v]\!] +l_*u_*\partial_t \rho &=\kappa_{\Gamma*}f_\Gamma &&\text{on}&& \Gamma_*\\
l_* v_\Gamma - \sigma_* A_* \rho -\gamma_* \partial_t\rho&=g &&\text{on}&& \Gamma_*\\
v(0)=v_0,\ \rho(0)&=\rho_0.&& &&
\end{aligned}\right.
\end{equation}
Here
\begin{align*}
&\kappa_*=\kappa(u_*)>0,\quad &\kappa_{\Gamma*}=\kappa_\Gamma(u_*)>0,\quad &d_*=d(u_*)>0,\\
&d_{\Gamma*}=d_\Gamma(u_*)>0,\quad &\sigma_*=\sigma(u_*)>0,\quad &\gamma_*=\gamma(u_*)\geq0,
\end{align*}
and as in \eqref{l-stern}
\begin{equation*}
l_*= [\![\psi^\prime(u_*)]\!]+\sigma^\prime(u_*)\mathcal H(\Gamma_*) = \varphi^\prime(u_*)-\sigma^\prime(u_*)\varphi(u_*)/\sigma(u_*) =\sigma(u_*)\lambda^\prime(u_*),
\end{equation*}
and
$$ A_*=-\Big(\frac{n-1}{R_*^2}+\Delta_*\Big),$$
where $\Delta_*$ denotes the {\em Laplace-Beltrami} operator on $\Gamma_*$.

\subsection{Maximal Regularity}
We begin with the case $\gamma_*>0$, which is the simpler one. Define the operator $L$ in
$$X_0:= L_p(\Omega)\times W^{r}_p(\Gamma_*)\times W^{s}_p(\Gamma_*)$$
with
$$X_1:=W^2_p(\Omega\setminus\Gamma_*)\times W^{2+r}_p(\Gamma_*) \times W^{2+s}_p(\Gamma_*)$$
by means of
\begin{equation*}
\label{L-gamma}
\begin{split}
D(L)&= \big\{(v,v_\Gamma,\rho)\in X_1:\; [\![v]\!]=0,\;\; v_\Gamma=v\text{ on }\Gamma_*, \;\; \partial_\nu v=0 \text{ on } \partial\Omega\big\}, \\
L&=\left[\begin{array}{ccc}
\!\! (-d_*/\kappa_*)\Delta &0 &0\\
\!\! -[\![(d_*/\kappa_{\Gamma*})\partial_\nu ]\!] &(l_*^2u_*/\gamma_*-d_{\Gamma*}\Delta_*)/\kappa_{\Gamma*} &- (l_*u_*\sigma_*/\gamma_*\kappa_{\Gamma*})A_*\\
\!\! 0&-(l_*/\gamma_*)&(\sigma_*/\gamma_*)A_*\end{array}\right].
\end{split}
\end{equation*}
In case $\gamma_*>0$, problem \eqref{lin} is equivalent to the Cauchy problem
\begin{equation*}
\dot z+L z=(f,f_\Gamma-(l_*u_*/\gamma_*\kappa_{\Gamma*})g,g), \quad z(0)=z_0,
\end{equation*}
where $z=(v,v_\Gamma,\rho)$ and $z_0=(v_0,v_0|_{\Gamma_*},\rho_0)$. The main result on problem (\ref{lin}) for $\gamma_*>0$ is the following.
\begin{theorem}
\label{MR-gamma}
Let $1<p<\infty$, $\gamma_*>0$, and
$$ -1/p\leq r\leq 1-1/p,\quad r\leq s\leq r+2.$$
Then for each finite interval $J=[0,a]$, there is a unique solution
\begin{equation*}
(v,v_\Gamma,\rho)\in\mathbb E(J):=H^1_p(J;X_0)\cap L_p(J;X_1)
\end{equation*}
of \eqref{lin} if and only if the data $(f,f_\Gamma,g)$ and $(v_0,v_{\Gamma0},\rho_0)$ satisfy
\begin{equation*}
\begin{split}
&(f,f_\Gamma,g)\in\mathbb F(J):= L_p(J;X_0),\\
&(v_0,v_{\Gamma0},\rho_0)\in W^{2-2/p}_p(\Omega\setminus\Gamma_*)\times W^{2+r-2/p}_p(\Gamma_*)\times W^{2+s-2/p}_p(\Gamma_*)
\end{split}
\end{equation*}
and the compatibility conditions
\begin{equation*}
[\![v_0]\!]=0,\quad v_{\Gamma0}=v_0\;\;{\rm on}\;\;\Gamma_*, \quad\partial_\nu v_0=0\;\;{\rm on}\;\;\partial\Omega.
\end{equation*}
The operator $-L$ defined above generates an analytic $C_0$-semigroup in $X_0$ with maximal regularity of type $L_p$.
\end{theorem}
\begin{proof}
Looking at the entries of $L$ we see that $L:X_1\to X_0$ is bounded provided $r\leq 1-1/p$, $r\leq s$, and $s\leq r+2$. The compatibility condition $v_\Gamma=v_{|_{\Gamma_*}}$ implies $r+2\geq 2-1/p$. This explains the constraints on the parameters $r$ and $s$. To obtain maximal $L_p$-regularity, we first consider the case $s>r$. Then $L$ is lower triangular up to a perturbation. So we may solve the problem for $(v,v_\Gamma)$ with maximal $L_p$-regularity (cf.\ \cite{DPZ08} for the one-phase case) first and then that for $\rho$. In the other case we have $r=s$. Then the second term in the third line in the definition of $L$ is of lower order, hence $\rho$ decouples from $(v,v_\Gamma)$. This way we also obtain maximal $L_p$-regularity. Since the Cauchy problem for $L$ has maximal $L_p$-regularity, we can now infer from \cite[Proposition 1.2]{Pru03} that $-L$ generates an analytic $C_0$-semigroup in $X_0$.
\end{proof}
We note that if $l_*=0$ and $\gamma_*=0$ then the linear problem \eqref{lin} is not well-posed. In fact, in this case the linear Gibbs-Thomson relation reads
$$ -\sigma_* A_* \rho =g,$$
which is not well-posed as the kernel of $A_*$ is non-trivial and $A_*$ is not surjective.

Now we consider the case $l_*\neq0$ and $\gamma_*=0$. For the solution space we fix again $r,s\in\mathbb R$ with $r\leq s\leq r+2$, $-1/p\leq r\leq 1-1/p$, and consider
$$(v,v_\Gamma,\rho)\in \mathbb E(J)=H^1_p(J;X_0)\cap L_p(J;X_1).$$
Then by trace theory the space of data becomes
\begin{align*}
(f,f_\Gamma,g)\in\mathbb F_0(J)&:= L_p(J;L_p(\Omega))\times L_p(J;W^{r}_p(\Gamma_*))\\
&\times[H^1_p(J;W^{s-2}_p(\Gamma_*))\cap L_p(J;W^{s}_p(\Gamma_*))],
\end{align*}
and the space of initial values will be
$$ (v_0,v_{\Gamma0},\rho_0)\in W^{2-2/p}_p(\Omega\setminus\Gamma_*)\times W^{r+2-2/p}_p(\Gamma_*) \times W^{s+2-2/p}_p(\Gamma_*)$$
with compatibilities
$$ [\![v_0]\!]=0,\quad v_{\Gamma0}=v_0,\quad l_*v_{\Gamma0} - \sigma_*A_*\rho_0=g(0)\;\;\text{on}\;\;\Gamma_*, \quad \partial_\nu v_0=0 \;\;\text{on}\;\; \partial\Omega.$$
To obtain maximal $L_p$-regularity, we replace $v_\Gamma$ by the Gibbs-Thomson relation, which for $\gamma_*=0$ is an elliptic equation. We obtain $v_\Gamma = (\sigma_*/l_*) A_* \rho +g/l_*.$ Inserting this expression into the energy balance on the surface $\Gamma_*$ yields
\begin{equation}
\label{eq}
\big(l_*u_* + (\kappa_{\Gamma*}\sigma_*/l_*)A_*\big)\partial_t\rho -d_{\Gamma*}\Delta_*v_\Gamma-[\![d_*\partial_\nu v]\!] =\kappa_{\Gamma*}( f_\Gamma -\partial_t g/l_*).
\end{equation}
Moreover, we obtain
\begin{equation*}
\begin{split}
d_{\Gamma*}\Delta_*v_\Gamma &=\big(l_*u_*+(\kappa_{\Gamma*}\sigma_*/l_*)A_*\big)(d_{\Gamma*}/\kappa_{\Gamma*})\Delta_*\rho\\
&-(l_*u_*d_{\Gamma*}/\kappa_{\Gamma*})\Delta_*\rho+(d_{\Gamma*}/l_*)\Delta_*g.
\end{split}
\end{equation*}
Now we assume that
\begin{equation}
\label{exceptional}
\eta_*:=\frac{(n-1) \sigma_*\kappa_{\Gamma*}}{u_*l_*^2 R_*^2}\neq 1
\end{equation}
which is equivalent to invertibility of the operator $A_0:=l_*u_* + (\kappa_{\Gamma*}\sigma_*/l_*)A_*$. Applying its inverse to \eqref{eq} we arrive at the following equation for $\rho$:
\begin{equation}
\label{rho-eq}
\begin{aligned}
\partial_t\rho-(d_{\Gamma*}/\kappa_{\Gamma*})\Delta_*\rho+A_0^{-1}\{(u_*l_*d_{\Gamma*}/\kappa_{\Gamma*})\Delta_*\rho-[\![d_*\partial_\nu v]\!]\}=\tilde{g},
\end{aligned}
\end{equation}
with
$$\tilde{g}=A_0^{-1}\big\{\kappa_{\Gamma*} f_\Gamma -((\kappa_{\Gamma*}/l_*)\partial_t g -(d_{\Gamma*}/l_*)\Delta_*g)\big\}.$$
Solving equation \eqref{eq} for $\partial_t\rho$ we obtain for $v_\Gamma$:
\begin{align}
\label{v-Gamma-eq}
\kappa_{\Gamma*}\partial_t v_\Gamma -d_{\Gamma*} \Delta_*v_\Gamma-[\![d_*\partial_\nu v]\!] + l_*u_*A_0^{-1}\{d_{\Gamma*} \Delta_*v_\Gamma+[\![d_*\partial_\nu v]\!]\}= \tilde{f}_\Gamma,
\end{align}
where
\begin{equation*}
\tilde{f}_\Gamma = \kappa_{\Gamma*}\{f_\Gamma-l_*u_*A_0^{-1}(f_\Gamma-\partial_tg/l_*)\}.
\end{equation*}
Then by the regularity of $f_\Gamma$ and $g$ and with $r\leq s\leq r+2$ we see that
\begin{equation*}
\tilde{f}_\Gamma\in L_p(J;W^r_p(\Gamma_*)), \quad \tilde{g}\in L_p(J;W^{s}_p(\Gamma_*)).
\end{equation*}
So the linear problem \eqref{lin} can be recast as an evolution equation in $X_0$ as
$$\dot{z}+L_0 z = (f,\tilde f_\Gamma,\tilde g),\quad z(0)=z_0,$$
with $L_0=L_{00}+L_{01}$ defined by
$$D(L_{0j})=\big\{(v,v_\Gamma,\rho)\in X_1:\; [\![v]\!]=0,\;\; v_\Gamma=v \text{ on } \Gamma_*,\;\; \partial_\nu v=0 \text{ on }\partial\Omega\big\},$$
and
\begin{equation*}
\label{L-00}
\begin{split}
L_{00}&=\left[\begin{array}{ccc}
(-d_*/\kappa_*)\Delta &0&0\\
-[\![(d_*/\kappa_{\Gamma*})\partial_\nu ]\!]&-(d_{\Gamma*}/\kappa_{\Gamma*})\Delta_*&0 \\
-A_0^{-1}[\![d_*\partial_\nu]\!]&0&-(d_{\Gamma*}/\kappa_{\Gamma*}) \Delta_*\end{array}\right],
\end{split}
\end{equation*}
and
\begin{equation*}
\label{L-01}
\begin{split}
L_{01}&=\! \left[\begin{array}{ccc}
0 &0&0\\
\!\!(l_*u_*/\kappa_{\Gamma*})A_0^{-1}[\![d_*\partial_\nu ]\!]&(l_*u_*d_{\Gamma*}/\kappa_{\Gamma*})A_0^{-1} \Delta_*&0 \\
0&0& (u_*l_*d_{\Gamma*}/\kappa_{\Gamma*})A_0^{-1}\Delta_*
\end{array}\right]\!.
\end{split}
\end{equation*}
Looking at $L_0$ we first note that $L_{01}$ is a lower order perturbation of $L_{00}$. The latter is lower triangular, and the problem for $(v,v_\Gamma)$ as above has maximal $L_p$-regularity in $X_0$. As the diagonal entry in the equation for $\rho$ has maximal $L_p$-regularity as well, we may conclude that $-L_0$ generates an analytic $C_0$-semigroup with maximal regularity in $X_{0}$. More precisely, we have the following result.
\begin{theorem}
\label{MR-0}
Let $1<p<\infty$, $\gamma_*=0$, $-1/p\leq r\leq1-1/p$, $r\leq s\leq r+2$, $l_*\neq0$, and assume $u_*l_*^2R_*^2\neq\kappa_{\Gamma*}\sigma_*(n-1)$.
Then for each interval $J=[0,a]$, there is a unique solution $(v,v_\Gamma,\rho)\in \mathbb E(J)$ of (\ref{lin}) if and only if the data $(f,f_\Gamma,g)$ and $(v_0,v_{\Gamma0},\rho_0)$ satisfy
\begin{equation*}
\begin{split}
&(f,f_\Gamma,g)\in\mathbb F_0(J),\\
&(v_0,v_{\Gamma0},\rho_0)\in W^{2-2/p}_p(\Omega\setminus\Gamma_*) \times W^{r+2-2/p}_p(\Gamma_*)\times W^{s+2-2/p}_p(\Gamma_*)
\end{split}
\end{equation*}
and the compatibility conditions
$$[\![v_0]\!]=0,\quad v_{\Gamma0}=v_0,\quad l_*v_0-\sigma_* A_* \rho_0=g(0) \text{ {\rm on} }\Gamma_*,\quad \partial_\nu v_0=0 \text{ {\rm on} }\partial\Omega.$$
The operator $-L_0$ defined above generates an analytic $C_0$-semigroup in $X_{0}$ with maximal regularity of type $L_p$.
\end{theorem}
Note that the compatibility condition $l_*v_0-\sigma_* A_* \rho_0=g(0)$ makes it possible to recover the Gibbs-Thomson relation from the dynamic equations. Indeed, it follows from \eqref{lin} and \eqref{rho-eq}-\eqref{v-Gamma-eq} that the function $w:=v_\Gamma-((\sigma_*/l_*)A_*\rho +g/l_*)$ satisfies the parabolic equation
\begin{equation}
\label{w-equation}
\kappa_{\Gamma*}\partial_tw-d_{\Gamma*}\Delta_*w =0, \quad w(0)=0 \quad\text{on}\quad \Gamma_*.
\end{equation}
As $w\equiv0$ is the unique solution of \eqref{w-equation} we conclude that the Gibbs-Thomson relation is satisfied.

\subsection{The Eigenvalue Problem}
By compact embedding, the spectrum of $L$ consists only of countably many discrete eigenvalues of finite multiplicity and is independent of $p$. Therefore it is enough to consider the case $p=2$. In the following, we will use the notation
\begin{equation*}
\begin{split}
&(u|v)_\Omega:=(u|v)_{L_2(\Omega)}:=\int_\Omega u\bar v\,dx, \quad u,v\in L_2(\Omega),\\
&(g|h)_{\Gamma_*}\!:=(g|h)_{L_2(\Gamma_*)}\!:=\int_{\Gamma_*} g\bar h\,ds, \quad g,h\in L_2(\Gamma_*),
\end{split}
\end{equation*}
for the $L_2$ inner product in $\Omega$ and $\Gamma_*$, respectively. Moreover, we set $|v|_\Omega=(v|v)^{1/2}_{\Omega}$ and $|g|_{\Gamma_*}=(g|g)^{1/2}_{\Gamma_*}$. The eigenvalue problem reads as follows:
\begin{equation}
\label{evp}
\left\{\begin{aligned}
\kappa_*\lambda v-d_*\Delta v&=0 &&\text{in }&&\Omega\setminus\Gamma_*\\
\partial_{\nu} v &=0 &&\text{on }&&\partial \Omega\\
[\![v]\!]&=0 &&\text{on }&&\Gamma_*\\
l_*v-\sigma_* A_* \rho -\gamma_* \lambda\rho &=0 &&\text{on }&&\Gamma_*\\
\kappa_{\Gamma*}\lambda v -d_{\Gamma*}\Delta_*v-[\![d_*\partial_\nu v]\!]+l_*u_*\lambda \rho&=0 &&\text{on }&&\Gamma_*.\\
\end{aligned}\right.
\end{equation}
Let $\lambda\neq0$ be an eigenvalue with eigenfunction $(v,\rho)\neq0$. Then (\ref{evp}) yields
\begin{align*}
0=\lambda|\sqrt{\kappa_*}v|^2_{\Omega}-(d_*\Delta v|v)_\Omega = \lambda|\sqrt{\kappa_*}v|^2_{\Omega}+ |\sqrt{d_*}\nabla v|_\Omega^2 +([\![d_*\partial_\nu v]\!]|v)_{\Gamma_*}.
\end{align*}
On the other hand, we have on the interface
\begin{align*}
0&=\kappa_{\Gamma*}\lambda|v|^2_{\Gamma_*}-d_{\Gamma*}(\Delta_* v|v)_{\Gamma_*}-([\![d_*\partial_\nu v]\!]|v)_{\Gamma_*}+\lambda u_*l_*(\rho|v)_{\Gamma_*}\\
&= \lambda\kappa_{\Gamma*}|v|^2_{\Gamma_*}+d_{\Gamma*}|\nabla_\Gamma v|_{\Gamma_*}^2-([\![d_*\partial_\nu v]\!]|v)_{\Gamma_*} +\lambda u_*l_*(\rho|v)_{\Gamma_*}.
\end{align*}
Adding these identities we obtain
\begin{align*}
0= \lambda|\sqrt{\kappa_*}v|^2_{\Omega}+ |\sqrt{d_*}\nabla v|_\Omega^2 +\lambda\kappa_{\Gamma*}|v|^2_{\Gamma_*}+d_{\Gamma*}|\nabla_\Gamma v|_{\Gamma_*}^2 +\lambda u_*l_*(\rho|v)_{\Gamma_*},
\end{align*}
hence employing the Gibbs-Thomson law this results in the relation
\begin{align*}
\lambda|\sqrt{\kappa_*}v|^2_{\Omega}+ |\sqrt{d_*}\nabla v|_\Omega^2 +\lambda\kappa_{\Gamma*}|v|^2_{\Gamma_*}&+d_{\Gamma*}|\nabla_\Gamma v|_{\Gamma_*}^2\\
&+\lambda u_*\sigma_* (A_*\rho|\rho)_{\Gamma_*} + \gamma_*u_*|\lambda|^2|\rho|^2_{\Gamma_*}=0.
\end{align*}
Since $A_*$ is selfadjoint in $L_2(\Gamma_*)$, this identity shows that all eigenvalues of $L$ are real. Decomposing $v=v_0+\bar{v}$, $v_\Gamma=v_{\Gamma,0}+\bar{v}_\Gamma$, $\rho=\rho_0+\bar{\rho}$, with the normalizations $(\kappa_*|v_0)_{\Omega}=(v_{\Gamma,0}|1)_{\Gamma_*}=(\rho_0|1)_{\Gamma_*}=0$, this identity can be rewritten as
\begin{equation}
\label{evid}
\begin{aligned}
&\lambda\big\{|\sqrt{\kappa_*}v_0|^2_{\Omega} + \kappa_{\Gamma*}|v_{\Gamma,0}|^2_{\Gamma_*}+\sigma_* u_*(A_*\rho_0|\rho_0)_{\Gamma_*}+\lambda u_*\gamma_*|\rho_0|^2_{\Gamma_*}\big\}\\
&+|\sqrt{d_*}\nabla v_0|^2_{\Omega}+ d_{\Gamma*}|\nabla_\Gamma v_{\Gamma,0}|^2_{\Gamma_*}\\
&+\lambda\Big[ (\kappa_*|1)_\Omega\bar{v}^2 +\kappa_{\Gamma*}|\Gamma_*|\bar{v}_\Gamma^2 -\sigma_*u_*\frac{n-1}{R_*^2}|\Gamma_*|\bar{\rho}^2 + \lambda u_* \gamma_*|\Gamma_*| \bar{\rho}^2\Big] =0.
\end{aligned}
\end{equation}
In case $\Gamma_*$ is connected, $A_*$ is positive semi-definite on functions with mean zero, and hence the bracket determines whether there are positive eigenvalues. Taking the mean in \eqref{evp} we obtain
$$(\kappa_*|1)_\Omega \bar{v} + \kappa_{\Gamma*}|\Gamma_*| \bar{v}_\Gamma + l_*u_*|\Gamma_*| \bar{\rho}=0.$$
Hence minimizing the function
$$\phi(\bar{v},\bar{v}_\Gamma,\bar{\rho}):=(\kappa_*|1)_\Omega\bar{v}^2 +\kappa_{\Gamma*}|\Gamma_*|\bar{v}_\Gamma^2 -\sigma_*u_*\frac{n-1}{R_*^2}|\Gamma_*|\bar{\rho}^2$$
with respect to the constraint we see that there are no positive eigenvalues provided the stability condition $\zeta_*\leq 1$ is satisfied.
\smallskip

If $\Gamma_*=\bigcup\nolimits_{1\le l\le m}\Gamma^l_\ast$ consists of $m$ spheres $\Gamma^l_\ast$ of equal radius, then
\begin{equation}
\label{N-L-m}
N(L) = {\rm span}\left\{\Big(\frac{\sigma_* (n-1)}{R^2_*},-l_*\Big),(0,Y^l_1),\ldots,(0,Y^l_n) : \, 1\le l\le m\right\},
\end{equation}
where the functions $Y^l_j$ denote the {\em spherical harmonics of degree one} on $\Gamma_\ast^l$ (and $Y^l_j\equiv 0$ on $\bigcup\nolimits_{i\neq l}\Gamma^i_\ast$), normalized by $(Y^l_j|Y^l_k)_{\Gamma^l_*}=\delta_{jk}$. $N(L)$ is isomorphic to the tangent space of $\mathcal E$ at $(u_*,\Gamma_*)\in\mathcal E$, as was shown in \cite[Theorem 4.5.(vii)]{PSZ10}. We can now state the main result on linear stability.
\goodbreak
\begin{theorem}
\label{lin-stability}
Let $\sigma_*>0$, $\gamma_*\geq0$, $l_*\neq0$,
\begin{equation*}
\eta_*:=(n-1)\sigma_*\kappa_{\Gamma*}/u_*l_*^2R_*^2\neq 1 \quad\text{in case $\gamma_*=0$},
\end{equation*}
and assume that the interface $\Gamma_*$ consists of $m\geq1$ components. Let
\begin{equation*}
\label{zeta}
\zeta_\ast= \frac{(n-1)\sigma_* [(\kappa_*|1)_{\Omega}+\kappa_{\Gamma*}|\Gamma_*|]}{u_*l_*^2R_*^2|\Gamma_*|},
\end{equation*}
and let the equilibrium energy ${\sf E}_e$ be defined as in \eqref{eq-energy}. Then
\begin{itemize}
\item[{(i)}] ${\sf E}_e^\prime(u_*)= (\zeta_\ast-1)\, u_*l^2_*R^2_*|\Gamma_*|/\big((n-1){\sigma_*}\big)$.
\item[{(ii)}] $0$ is an eigenvalue of $L$ with geometric multiplicity $(mn+1)$.
\item[{(iii)}] $0$ is semi-simple if $\zeta_\ast\neq 1$.
\item[{(iv)}] If $\Gamma_*$ is connected and $\zeta_\ast\le 1$, or if $\eta_*>1$ and $\gamma_*=0$, then all eigenvalues of $-L$ are negative, except for the eigenvalue $0$.
\item[{(v)}] If $\zeta_\ast>1$, and $\eta_*<1$ in case $\gamma_*=0$, then there are precisely $m$ positive eigenvalues of $-L$, where $m$ denotes the number of equilibrium spheres.
\item[(vi)] If $\zeta_\ast\leq 1$, and $\eta_*<1$ in case $\gamma_*=0$, then $-L$ has precisely $m-1$ positive eigenvalues.
\item[{(vii)}] $N(L)$ is isomorphic to the tangent space $T_{(u_*,\Gamma_*)}\mathcal E$ of $\mathcal E$ at $(u_*,\Gamma_*)\in\mathcal E$.
\end{itemize}
\end{theorem}
\begin{remarks}
(a) Formally, the result is also true if $l_*=0$ and $\gamma_*>0$. In this case ${\sf E}_e'(u_*)= (\kappa_*|1)_{\Omega}+\kappa_{\Gamma*}|\Gamma_*|>0$ and $\zeta_\ast=\infty$, hence the equilibrium is unstable. If in addition $\gamma_*=0$, then the problem is not well-posed. \\
(b) Note that $\zeta_\ast$ depends neither on the diffusivities $d_*$, $d_{\Gamma*}$, nor on the coefficient of undercooling $\gamma_*$. \\
(c) It is shown in \cite{PrSi08} that in case $\zeta_\ast=1$ and $\Gamma_*$ connected, the eigenvalue $0$ is no longer semi-simple: its algebraic multiplicity rises by $1$ to $(n+2)$. \\
\goodbreak
\noindent
(d) It is remarkable that in case kinetic undercooling is absent, large surface heat capacity, i.e.\ $\eta_*>1$, stabilizes the system, even in such a way that multiple spheres are stable, in contrast to the case $\eta_*<1$. \\
(e) We can show that, in case $\gamma_*=0$, if $\eta_*$ increases to $1$ then all positive eigenvalues go to $\infty$.
\end{remarks}

We recall a result on the Dirichlet-to-Neumann operator $D_\lambda$, $\lambda\geq0$, which is defined as follows. Let $g\in H^{3/2}_2(\Gamma_*)$ be given. Solve the elliptic transmission problem
\begin{equation}
\label{ell-tram}
\left\{
\begin{aligned}
\kappa_*\lambda w -d_*\Delta w &=0\quad \text{in}\; \Omega\setminus\Gamma_*, \\
\partial_\nu w&=0\quad \text{on}\; \partial\Omega,\\
[\![w]\!]&=0\quad \text{on}\; \Gamma_*, \\
w&=g\quad \text{on}\; \Gamma_*,
\end{aligned}
\right.
\end{equation}
and define $D_\lambda g=-[\![d_*\partial_\nu w]\!]\in H^{1/2}_2(\Gamma_*)$.
\goodbreak
\begin{lemma}\label{DN-op}
The Dirichlet-to-Neumann operator $D_\lambda$ has the following well-known properties.
\begin{itemize}
\item[(a)] $(D_\lambda g|g)_{\Gamma_*} = \lambda |\kappa_*^{1/2}w|^2_\Omega + |d_*^{1/2}\nabla w|_\Omega^2$, for all $g\in H^{3/2}_2(\Gamma_*)$;
\item[(b)] $|D_\lambda g|_{\Gamma_*}\leq C[\lambda^{1/2}|g|_{\Gamma_*}+|g|_{H^1_2(\Gamma_*)}]$, for all $g\in H^{3/2}_2(\Gamma_*)$ and $\lambda\ge 1$;
\item[(c)] $(D_\lambda g|g)_{\Gamma_*}\geq c\lambda^{1/2}|g|_{\Gamma_*}^2$, for all $g\in H^{3/2}_2(\Gamma_*)$ and $\lambda\ge 1$.
\end{itemize}
In particular, $D_\lambda$ extends to a self-adjoint positive definite linear operator in $L_2(\Gamma_*)$ with domain $H^1_2(\Gamma_*)$.
\end{lemma}

\subsection{Proof of Theorem \ref{lin-stability}}
For the case $\kappa_{\Gamma*}=d_{\Gamma*}=0$ this result is proved in \cite{PSZ10}. Assertion (i) follows from the considerations in part (g) of the introduction. Assertions (ii), (iii), and (vii) only involve the kernel of $L$ and the manifold of equilibria.
Since both are the same as in the case $\kappa_{\Gamma*}=d_{\Gamma*}=0$, the proofs of (ii), (iii) and (vii) given in \cite{PSZ10} remain valid in the more general situation considered here. The first part of assertion (iv) has been proved above, and it thus remains to prove the assertions in (v) and (vi), and the second part of (iv).

If the stability condition $\zeta_\ast\leq1$ does not hold or if ${\Gamma_*}$ is disconnected, then there is always a positive eigenvalue. It is a delicate task to prove this. The principal idea to attack this problem is as follows: suppose $\lambda>0$ is an eigenvalue, and that $\rho$ is known; solve the resolvent diffusion problem
\begin{equation}
\label{NDdiffusion}
\left\{\begin{aligned}
\kappa_*\lambda v -d_*\Delta v &=0 &&\text{in}&& \Omega\setminus{\Gamma_*}\\
\partial_{\nu} v&=0 &&\text{on}&&\partial\Omega \\
[\![v]\!]&=0 &&\text{on} && {\Gamma_*}\\
v&= v_\Gamma &&\text{on}&& {\Gamma_*}\\
\end{aligned}\right.
\end{equation}
to get $-[\![d_*\partial_\nu v]\!]=:D_\lambda v_\Gamma$. Next we solve the resolvent surface diffusion problem
$$ \lambda\kappa_{\Gamma*} v_\Gamma -d_{\Gamma*}\Delta_* v_\Gamma + D_\lambda v_\Gamma= h,$$
which yields
$$v_\Gamma = T_\lambda h:=(\lambda\kappa_{\Gamma*} -d_{\Gamma*}\Delta_* + D_\lambda)^{-1}h.$$
Setting $h=-\lambda u_*l_*\rho$, the linearized Gibbs-Thomson law then implies the equation
\begin{equation}\label{Tlambda}
[(l_*^2u_*)\lambda T_\lambda +\gamma_* \lambda]\rho +\sigma_* A_* \rho =0.
\end{equation}
Thus $\lambda>0$ is an eigenvalue of $-L$ if and only if \eqref{Tlambda} admits a nontrivial solution. We consider this problem in $L_2({\Gamma_*})$. Then $A_*$ is selfadjoint in $L_2({\Gamma_*})$ and
$$\sigma_*(A_* \rho|\rho)_{\Gamma_*} \geq -\frac{(n-1)\sigma_*}{R_*^2} |\rho|^2_{\Gamma_*},$$
for each $\rho\in D(A_*)=H^2_2(\Gamma_*)$. Moreover, since $A_*$ has compact resolvent, the operator
\begin{equation}
\label{B-lambda}
B_\lambda:=[(l_*^2u_*)\lambda T_\lambda +\gamma_* \lambda]+\sigma_* A_*
\end{equation}
has compact resolvent as well, for each $\lambda>0$. Therefore the spectrum of $B_\lambda$ consists only of eigenvalues which, in addition, are real. We intend to prove that in case either $\Gamma_*$ is disconnected or the stability condition does not hold, $B_{\lambda_0}$ has $0$ as an eigenvalue, for some $\lambda_0>0$. This has been achieved in \cite{PSZ10} in the simpler case where $\kappa_{\Gamma*}=d_{\Gamma*}=0$, in which case $T_\lambda$ is the Neumann-to-Dirichlet operator for \eqref{NDdiffusion}. Here we try to use similar ideas as in \cite{PSZ10}, namely we investigate $B_\lambda$ for small and for large values of $\lambda$. However, in the situation of this paper this will be more involved. For this purpose we need more information about $T_\lambda$. So we first consider the problem
\begin{equation}
\label{NDBdiffusion}
\left\{\begin{aligned}
\kappa_*\lambda v -d_*\Delta v &=0 &&\text{in}&& \Omega\setminus{\Gamma_*}\\
\partial_\nu v&=0 &&\text{on}&&\partial\Omega \\
[\![v]\!]&=0 &&\text{on} && {\Gamma_*}\\
\lambda\kappa_{\Gamma*} v -d_{\Gamma*}\Delta_* v -[\![d_*\partial_\nu v]\!] &= g &&\text{on}&& {\Gamma_*}.\\
\end{aligned}\right.
\end{equation}
As we have seen above this problem has a unique solution for each $\lambda>0$, denoted by $v=S_\lambda g$. Obviously for $\lambda=0$ this problem has a one-dimensional eigenspace spanned by the constant function ${\sf e}\equiv 1$. For $\lambda=0$ the problem is solvable if and only if the mean value of $g$ is zero, i.e.\ if $g\in L_{2,0}(\Gamma_*)$.
This implies by compactness that $S_\lambda g\to S_0g$ as well as $T_\lambda g\to T_0g$ as $\lambda\to 0^+$, whenever $g$ has mean zero, where $S_0g$ means the unique solution of \eqref{NDBdiffusion} for $\lambda=0$ with mean zero.

\noindent
(a) \, Suppose that $\Gamma_*$ is disconnected. If the interface $\Gamma_*$ consists of $m$ components $\Gamma_*^k$, $k=1,\ldots, m$, we set ${\sf e}_k=1$ on $\Gamma_*^k$ and zero elsewhere. Let $\rho=\sum_k a_k {\sf e}_k\neq0$ with $\sum_k a_k=0$, hence $Q_0\rho=\rho$, where $Q_0$ is the canonical projection onto $L_{2,0}(\Gamma_*)$ in $L_2(\Gamma_*)$, $ Q_0 \rho:=\rho-(\rho|{\sf e})_{\Gamma_*}/|\Gamma_*|.$ Then
$$\lim_{\lambda\to0} \lambda T_\lambda \rho = \lim_{\lambda\to0} \lambda T_\lambda Q_0\rho=0,$$
since $T_\lambda Q_0$ is bounded as $\lambda\to0$. This implies
$$\lim_{\lambda\to0} (B_\lambda \rho|\rho)_{\Gamma_*} = - \,({(n-1)\sigma_*}/{R_*^2})\, \sum_k |\Gamma_*^k|a_k^2<0.$$
Therefore $B_\lambda$ is not positive semi-definite for small $\lambda$.

\noindent
(b) \, Suppose next that $\Gamma_*$ is connected. Consider $\rho={\sf e}$. Then we have
$$(B_\lambda {\sf e}|{\sf e})_{\Gamma_*}= u_*l_*^2\lambda(T_\lambda{\sf e}|{\sf e})_{\Gamma_*} +\lambda\gamma_*|{\sf e}|^2_{\Gamma_*}- ({(n-1)\sigma_*}/{R_*^2})|{\sf e}|_{\Gamma_*}^2.$$
We compute the limit $\lim_{\lambda \to0}\lambda(T_\lambda{\sf e}|{\sf e})_{\Gamma_*}$ as follows. First solve the problem
\begin{equation}
\label{NDBdiffusion0}
\left\{\begin{aligned}
-d_*\Delta v &=-\kappa_* a_0 &&\text{in} && \Omega\setminus{\Gamma_*} \\
\partial_{\nu} v&=0 &&\text{on}&& \partial\Omega \\
[\![v]\!]&=0 &&\text{on}&& {\Gamma_*} \\
-d_{\Gamma*} \Delta_* v-[\![d_*\partial_\nu v]\!]&= {\sf e}-\kappa_{\Gamma*}a_0 &&\text{on}&&{\Gamma_*},\\
\end{aligned}\right.
\end{equation}
where $a_0=|\Gamma_*|/[(\kappa_*|1)_{\Omega}+\kappa_{\Gamma*}|\Gamma_*|]$, which is solvable since the necessary compatibility condition holds. Let $v_0$ denote the solution which satisfies the normalization condition $(\kappa_*|v_0)_{\Omega}+\kappa_{\Gamma*}(v_0|1)_{\Gamma_*}=0$. Then $v_\lambda:=S_\lambda{\sf e}-v_0- a_0/\lambda$ satisfies the problem
\begin{equation}
\label{NDBdiffusion1}
\left\{\begin{aligned}
\kappa_* \lambda v_\lambda -d_*\Delta v_\lambda &=-\kappa_*\lambda v_0 &&\text{in}&&\Omega\setminus{\Gamma_*}\\
\partial_{\nu} v_\lambda&=0 &&\text{on}&&\partial\Omega\\
[\![v_\lambda]\!]&=0 &&\text{on}&& {\Gamma_*}\\
\kappa_{\Gamma*}\lambda v_\lambda -d_{\Gamma*} \Delta_* v_\lambda-[\![d_*\partial_\nu v_\lambda]\!]& = -\lambda\kappa_{\Gamma*}v_0 &&\text{on}&&{\Gamma_*}.\\
\end{aligned}\right.
\end{equation}
By the normalization $(\kappa_*|v_0)_{\Omega}+\kappa_{\Gamma*}(v_0|1)_{\Gamma_*} =0$ we see that the compatibility condition for \eqref{NDBdiffusion} holds for each $\lambda>0$, and so we conclude that $v_\lambda$ is bounded in $W^2_2(\Omega\setminus\Gamma_*)$ as $\lambda\to0$; it even converges to $0$. Hence we have
$$\lim_{\lambda\to0}\lambda T_\lambda{\sf e} = \lim_{\lambda\to 0}[(\lambda v_\lambda + \lambda v_0)|_{\Gamma_*}+ a_0]= a_0 .$$
This then implies
\begin{equation*}
\lim_{\lambda\to0} (B_\lambda {\sf e}|{\sf e})_{\Gamma_*} = l_*^2u_* \frac{|\Gamma_*|^2}{(\kappa_*|1)_{\Omega}+\kappa_{\Gamma*}|\Gamma_*|}- \,\frac{(n-1)\sigma_*|\Gamma_*|}{R_*^2}<0,
\end{equation*}
if the stability condition does not hold, i.e.\ if $\zeta_\ast>1$. Therefore also in this case $B_\lambda$ is not positive semi-definite for small $\lambda>0$.

\noindent
(c) \, Next we consider the behavior of $(B_\lambda \rho|\rho)_{\Gamma_*}$ as $\lambda\to\infty$.
We intend to show that $B_\lambda$ is positive definite for large $\lambda$. We have
\begin{align*}
\lambda T_\lambda&= \lambda(\kappa_{\Gamma*}\lambda -d_{\Gamma*}\Delta_* +D_\lambda)^{-1} \to 1/\kappa_{\Gamma*}\quad \mbox{ for }\; \lambda\to\infty,
\end{align*}
as $D_\lambda$ is of lower order, by part (b) of Lemma \ref{DN-op}. This implies for a given $g\in D(A_*)$
\begin{align*}
(B_\lambda g|g)_{\Gamma_*}&= l_*^2u_*\lambda (T_\lambda g|g)_{\Gamma_*}+ \sigma_*(A_*g|g)_{\Gamma_*} + \gamma_*\lambda|g|^2_{\Gamma_*}\\
&\geq \Big(\gamma_*\lambda -\frac{(n-1)\sigma_*}{R_*^2}\Big)|g|^2_{\Gamma_*}+l_*^2u_*\lambda (T_\lambda g|g)_{\Gamma_*}\\
&\sim \Big(\gamma_*\lambda -\frac{(n-1)\sigma_*}{R_*^2}+ \frac{l_*^2u_*}{\kappa_{\Gamma*}}\Big)|g|^2_{\Gamma_*},
\end{align*}
as $\lambda\to\infty$. We have thus shown that $B_\lambda$ is positive definite if $\gamma_*>0$ and $\lambda> (n-1)\sigma_*/\gamma_*R_*^2$, or if
\begin{equation}
\label{str-cond}
\gamma_*=0\quad\text{and}\quad {l_*^2u_*}/{\kappa_{\Gamma*}} > {(n-1)\sigma_*}/{R_*^2}.
\end{equation}
In particular, for $\gamma_*=0$ and small $l_*^2$ the latter condition will be violated, in general.

\noindent
(d) \, In summary, concentrating on the cases $\gamma_*>0$ or \eqref{str-cond}, we have shown that $B_\lambda$ is not positive semi-definite for small $\lambda>0$ if either $\Gamma_*$ is not connected or the stability condition does not hold, and $B_\lambda$ is always positive definite for large $\lambda$. Let
\begin{equation*}
\lambda_0 = \sup\{\lambda>0:\, B_\mu \mbox{ is not positive semi-definite for each } \mu\in(0,\lambda]\}.
\end{equation*}
Since $B_\lambda$ has compact resolvent, $B_\lambda$ has a negative eigenvalue for each $\lambda<\lambda_0$. This implies that $0$ is an eigenvalue of $B_{\lambda_0}$, thereby proving that $-L$ admits the positive eigenvalue $\lambda_0$. Moreover, we have also shown that
\begin{equation*}
B_0\rho= \lim_{\lambda\to 0}[ l^2_* u_*\lambda T_\lambda \rho +\gamma_*\lambda\rho +\sigma_* A_* \rho] = \frac{l_*^2 u_*|\Gamma_*|}{(\kappa_*|1)_{\Omega}+\kappa_{\Gamma*}|\Gamma_*|} P_0 \rho +\sigma_* A_* \rho ,
\end{equation*}
where $P_0\rho:=(I-Q_0)\rho=(\rho|{\sf e})_{\Gamma_*}/|\Gamma_*|$. Therefore, $B_0$ has the eigenvalue
\begin{equation*}
\frac{u_*l_*^2|\Gamma_*|}{(\kappa_*|1)_\Omega+\kappa_{\Gamma*}|\Gamma_*|}-\frac{(n-1)\sigma_*}{R_*^2} =\frac{u_*l_*^2|\Gamma_*|}{(\kappa_*|1)_\Omega+\kappa_{\Gamma*}|\Gamma_*|} (1-\zeta_\ast)
\end{equation*}
with eigenfunction ${\sf e}$, and in case $m>1$ it also has the eigenvalue $-(n-1)\sigma_*/R_*^2$ with precisely $m-1$ linearly independent eigenfunctions of the form $\sum_k a_k {\sf e}_k$ with $\sum_k a_k=0$. As $\lambda$ varies from $0$ to $\lambda_0$, all the negative eigenvalues of $B_0$ identified above will eventually have to cross $0$ along the real axis. At each of these occasions, $-L$ will inherit at least one positive eigenvalue, which will then remain positive. This implies that $-L$ has exactly $m$ positive eigenvalues if the stability condition does not hold, and $m-1$ otherwise. This covers the case $\gamma_*>0$ as well as \eqref{str-cond}.

\noindent
(e) \, To cover the remaining case we assume $\gamma_*=0$ and $\kappa_{\Gamma*}(n-1)/R_*^2>u_*l_*^2/\sigma_*=:\delta_*$. Suppose $\lambda>0$ is an eigenvalue of $-L_0$.
Then there is $\rho\neq0$ such that
$$ (\lambda\kappa_{\Gamma*} -d_{\Gamma*}\Delta_*+D_\lambda)A_*\rho +\lambda \delta_*\rho=0.$$
Multiplying this equation in $L_2(\Gamma_*)$ by $A_*\rho$ and integrating by parts one obtains the identity
$$\lambda\kappa_{\Gamma*}|A_*\rho|_{\Gamma_*}^2 + d_{\Gamma*}|\nabla_{\Gamma_*}A_*\rho|^2_{\Gamma_*}+(D_\lambda A_*\rho|A_*\rho)_{\Gamma_*} +\lambda\delta_*(A_*\rho|\rho)_{\Gamma_*}=0.$$
As $D_\lambda$ is positive definite in $L_2(\Gamma_*)$ this equation implies
$$\lambda\kappa_{\Gamma*}|A_*\rho|_{\Gamma_*}^2 +\lambda\delta_*(A_*\rho|\rho)_{\Gamma_*}\leq0.$$
\goodbreak
Let $P$ denote the projection onto the kernel $\mathcal N(\Delta_*)$ and $Q=I-P$. Since $P,Q$ commute with $A_*$ this implies
$$\lambda\kappa_{\Gamma*}|A_*Q\rho|_{\Gamma_*}^2 +\lambda\kappa_{\Gamma*}|A_*P\rho|_{\Gamma_*}^2+\lambda\delta_*(A_*P\rho|P\rho)_{\Gamma_*}\leq0,$$
as $A_*$ is positive semi-definite on $\mathcal R(Q)=\mathcal R(\Delta_*)$. Now $A_*P=-((n-1)/R_*^2)P$ and
$$0\geq\lambda\kappa_{\Gamma*}|A_*P\rho|_{\Gamma_*}^2+\lambda\delta_*(A_*P\rho|P\rho)_{\Gamma_*}= \lambda\frac{n-1}{R_*^2}\Big[\kappa_{\Gamma*}\frac{n-1}{R_*^2}-\delta_*\Big]|P\rho|_{\Gamma_*}^2\geq0,$$
hence $P\rho=0$ and $A_*Q\rho=0$. This implies $A_*\rho=0$ and therefore $\rho=0$ as $\delta_*>0$. This shows that there are no positive eigenvalues of $-L_0$ in case $\gamma_*=0$ and $\kappa_{\Gamma*}(n-1)/R_*^2>u_*l_*^2/\sigma_*$. This completes the proof.

\section{The Semiflow in the Presence of Kinetic Undercooling}
In this section we assume throughout $\gamma(s)>0$ for all $0<s<u_c$, i.e.\ kinetic undercooling is present in the relevant temperature range. In this case we may apply the results in \cite{PSZ09} and \cite{KPW10}, resulting in a rather complete analysis of the problem.

\subsection{Local Well-Posedness}
\noindent
To prove local well-posedness we employ the direct mapping method as introduced in Section 3. As base space we use
\begin{equation*}
X_0= L_p(\Omega)\times W^{-1/p}_p(\Sigma)\times W^{1-1/p}_p(\Sigma),
\end{equation*}
and we set
\begin{equation*}
\begin{aligned}
X_1&=\big\{(v,v_\Gamma,\rho)\in H^2_p(\Omega\setminus\Sigma)\times W^{2-1/p}_p(\Sigma) \times W^{3-1/p}_p(\Sigma):\\
& \hspace{5cm} [\![v]\!]=0,\; v_\Gamma=v_{|_\Sigma},\; \partial_\nu v_{|_{\partial\Omega}}=0\big\}.
\end{aligned}
\end{equation*}
The trace space $X_\gamma$ then becomes for $p>n+2$
\begin{equation*}
\begin{aligned}
X_{\gamma} &=\big\{(v,v_\Gamma,\rho)\in W^{2-2/p}_p(\Omega\setminus\Sigma)\times W^{2-3/p}_p(\Sigma)\times W^{3-3/p}_p(\Sigma):\\
& \hspace{5cm} [\![v]\!]=0,\; v_\Gamma=v_{|_\Sigma},\; \partial_\nu v_{|_{\partial\Omega}}=0\big\},
\end{aligned}
\end{equation*}
and that with the time weight $t^{1-\mu}$, $1\geq\mu>1/p$,
\begin{equation*}
\begin{aligned}
X_{\gamma,\mu}&=\big\{(v,v_\Gamma,\rho)\in W^{2\mu-2/p}_p(\Omega\setminus\Sigma)\times W^{2\mu-3/p}_p(\Sigma)\times W^{2\mu+1-3/p}_p(\Sigma):\\
&\hspace{5cm} [\![v]\!]=0,\; v_\Gamma=v_{|_\Sigma},\; \partial_\nu v_{|_{\partial\Omega}}=0\big\}.
\end{aligned}
\end{equation*}
Note that
\begin{equation}
\label{crucialembedding}
X_{\gamma,\mu}\hookrightarrow BUC^1(\Omega\setminus\Sigma)\times C^1(\Sigma)\times C^2(\Sigma),
\end{equation}
provided $2\mu>1 +(n+2)/p$, which is feasible as $p>n+2$. In the sequel, we only consider this range of $\mu$.
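To make the admissible range of exponents concrete (a numerical aside; the particular values are chosen here only for illustration): in dimension $n=3$ one may take $p=6>n+2=5$, and the condition $2\mu>1+(n+2)/p$ then reads
$$2\mu>1+\tfrac{5}{6}=\tfrac{11}{6},\qquad\text{i.e.}\qquad \mu\in\big(\tfrac{11}{12},1\big],$$
so the weighted trace spaces $X_{\gamma,\mu}$ with $\mu$ in this range already satisfy the embedding \eqref{crucialembedding}, while admitting initial data that are rougher than those in $X_\gamma$.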
We want to rewrite system \eqref{transformed} abstractly as the quasilinear problem in $X_0$
\begin{align}\label{abstract}
\dot{z}+A(z)z&= F(z),\quad z(0)=z_0,
\end{align}
where $z=(v,v_\Gamma,\rho)$ and $z_0=(v_0,v_{\Gamma0},\rho_0)$. Here the quasilinear part $A(z)$ is the diagonal matrix operator defined by
\begin{equation*}
\label{A(z)}
-A(z)={\rm diag} \left[\begin{aligned}
&(d(v)/\kappa(v))(\Delta - M_2(\rho):\nabla^2)\\
&(d_\Gamma(v_\Gamma)/\kappa_\Gamma(v_\Gamma))(P_\Gamma(\rho)M_0(\rho))^2:\nabla_\Sigma^2 \\
&(\sigma(v_\Gamma)/\gamma(v_\Gamma))\mathcal G(\rho):\nabla_\Sigma^2
\end{aligned}\right]
\end{equation*}
with $M_2(\rho)=M_1(\rho)+M_1^{\sf T}(\rho) -M_1(\rho)M_1^{\sf T}(\rho)$. The semilinear part $F(z)$ is given by
\begin{equation*}
\left[
\begin{aligned}
&\mathcal R(\rho)v+\frac{1}{\kappa(v)}\big\{d^\prime(v)|(I-M_1(\rho))\nabla v\big|^2 - d(v)((I-M_1(\rho)):\nabla M_1(\rho)|\nabla v)\big\}\\
&\frac{1}{\kappa_\Gamma(v_\Gamma)}\big\{-\mathcal B(v_\Gamma,\rho)v -[l(v_\Gamma) +l_\Gamma(v_\Gamma)\mathcal H(\rho)-\gamma(v_\Gamma)\beta(\rho)\partial_t\rho] \beta(\rho)\partial_t\rho + m_3\big\} \\
&\varphi(v_\Gamma) /\beta(\rho)\gamma(v_\Gamma) + \sigma(v_\Gamma)\mathcal F(\rho)/\gamma(v_\Gamma)
\end{aligned}\right]
\end{equation*}
where $\varphi(s)=[\![\psi(s)]\!]$ and
\begin{equation*}
m_3= -d_\Gamma(v_\Gamma)(P_\Gamma(\rho)M_0(\rho))^2:\nabla_\Sigma^2v_\Gamma -\mathcal C(v_\Gamma,\rho)v_\Gamma.
\end{equation*}
We note that $m_3$ depends on $v_\Gamma$, $\nabla_\Sigma v_\Gamma$, and on $\rho$, $\nabla_\Sigma\rho$, $\nabla^2_\Sigma\rho$, but not on $\nabla_\Sigma^2v_\Gamma$, hence is of lower order. At first sight the first two components of $F(z)$ contain the time derivative $\partial_t\rho$; we may replace it by
$$\partial_t\rho = \{\varphi(v_\Gamma)+\sigma(v_\Gamma)\mathcal H(\rho)\}/\beta(\rho)\gamma(v_\Gamma),$$
to see that it is of lower order as well.

Now fix a ball $\mathbb B:=B_{X_{\gamma,\mu}}(z_0,R)\subset X_{\gamma,\mu}$, where $|\rho_0|_{C^1(\Sigma)}\leq\eta$ for some sufficiently small $\eta>0$. Then it is not difficult to verify that
\begin{equation*}
(A,F)\in C^1(\mathbb B,\mathcal B(X_1,X_0)\times X_0)
\end{equation*}
provided $d_i,\psi_i,d_\Gamma,\sigma,\gamma\in C^3(0,\infty)$ and $d_j,\kappa_j,\sigma,\gamma>0$ on $(0,u_c)$, $j=1,2,\Gamma$, and provided $2\geq 2\mu> 1+ (n+2)/p$ as before. Moreover, as $A(z)$ is diagonal, well-known results about elliptic differential operators show that $A(z)$ has the property of maximal regularity of type $L_{p}$, and also of type $L_{p,\mu}$, for each $z\in\mathbb B$. In fact, for small $\eta>0$ and $R>0$, $A(z)$ is a small perturbation of
\begin{equation*}
A_\#(z)={\rm diag}\big[-(d(v)/\kappa(v))\Delta, -(d_\Gamma(v_\Gamma)/\kappa_\Gamma(v_\Gamma))\Delta_\Sigma,-(\sigma(v_\Gamma)/\gamma(v_\Gamma))\Delta_\Sigma \big].
\end{equation*}
Therefore we may apply \cite[Theorem 2.1]{KPW10} to obtain local well-posedness of \eqref{abstract}, i.e.\ a unique local solution
$$z\in H^1_{p,\mu}((0,a);X_0)\cap L_{p,\mu}((0,a);X_1)\hookrightarrow C([0,a];X_{\gamma,\mu})\cap C((0,a];X_\gamma)$$
which depends continuously on the initial value $z_0\in \mathbb B$. The resulting solution map $[z_0\mapsto z(t)]$ defines a local semiflow in $X_{\gamma,\mu}$.

\subsection{Nonlinear Stability of Equilibria}
\noindent
Let $e_*=(u_*,u_{\Gamma*},\Gamma_*)$ denote an equilibrium as in Section 4.
In this case we choose $\Sigma=\Gamma_*$ as a reference manifold, and as shown in the previous subsection we obtain the abstract quasilinear parabolic problem
\begin{align}\label{abstract*}
\dot{z}+A(z)z&= F(z),\quad z(0)=z_0,
\end{align}
with $X_0$, $X_1$, $X_\gamma$ as above. We set $z_*=(u_*,u_{\Gamma*},0)$. Assuming that $\zeta_\ast\neq 1$ in the stability condition, we have shown in Section 4 that the equilibrium $z_*$ is normally hyperbolic. Therefore we may apply \cite[Theorems 2.1 and 6.1]{PSZ09} to obtain the following result.
\goodbreak
\begin{theorem}
\label{nonlinstab}
Let $p>n+2$. Suppose $\gamma>0$ on $(0,u_c)$ and the assumptions of \eqref{regularity-coeff} hold true. As above $\mathcal E$ denotes the set of equilibria of \eqref{abstract*}, and we fix some $z_*\in\mathcal E$. Then we have \\
{\bf (a)} If $\Gamma_*$ is connected and $\zeta_*<1$ then $z_*$ is stable in $X_\gamma$, and there exists $\delta>0$ such that the unique solution $z(t)$ of (\ref{abstract*}) with initial value $z_0\in X_\gamma$ satisfying $|z_0-z_*|_{\gamma}<\delta$ exists on $\mathbb R_+$ and converges at an exponential rate in $X_\gamma$ to some $z_\infty\in\mathcal E$ as $t\rightarrow\infty$. \\
{\bf (b)} If $\Gamma_*$ is disconnected or if $\zeta_*>1$ then $z_*$ is unstable in $X_\gamma$ and even in $X_0$. For each sufficiently small $\rho>0$ there is $\delta\in(0,\rho]$ such that the solution $z(t)$ of \eqref{abstract*} with initial value $z_0\in X_\gamma$ subject to $|z_0-z_*|_{\gamma}<\delta$ either satisfies
\begin{itemize}
\item[(i)] ${\rm dist}_{X_\gamma}(z(t_0);\mathcal E)>\rho$ for some finite time $t_0>0$; or
\item[(ii)] $z(t)$ exists on $\mathbb R_+$ and converges at exponential rate in $X_\gamma$ to some $z_\infty\in\mathcal E$.
\end{itemize}
\end{theorem}
\begin{remark}
The only equilibria which are excluded from our analysis are those with $\zeta_*=1$, which means ${\sf E}^\prime_e(u_*)=0$. These are critical points of the function ${\sf E}_e(u)$ at which a bifurcation may occur. In fact, if such $u_*$ is a maximum or a minimum of ${\sf E}_e$ then two branches of $\mathcal E$ meet at $u_*$, a stable and an unstable one, which means that $(u_*,\Gamma_*)$ is a turning point in $\mathcal E$.
\end{remark}

\subsection{The Local Semiflow on the State Manifold}
\noindent
Here we follow the approach introduced in \cite{KPW11} for the two-phase Navier-Stokes problem and in \cite{PSZ10} for the two-phase Stefan problem, see also \cite{KPW10} for the Mullins-Sekerka problem. We denote by $\mathcal{MH}^2(\Omega)$ the set of closed $C^2$-hypersurfaces contained in $\Omega$. It can be shown that $\mathcal{MH}^2(\Omega)$ is a $C^2$-manifold: the charts are the parameterizations over a given hypersurface $\Sigma$ according to Section 3, and the tangent space consists of the normal vector fields on $\Sigma$. We define a metric on $\mathcal{MH}^2(\Omega)$ by means of
$$d_{\mathcal{MH}^2}(\Sigma_1,\Sigma_2):= d_H(\mathcal N^2\Sigma_1,\mathcal N^2\Sigma_2),$$
where $d_H$ denotes the Hausdorff metric on the compact subsets of $\mathbb R^n$ introduced in Section 2. This way $\mathcal{MH}^2(\Omega)$ becomes a Banach manifold of class $C^2$.

Let $d_\Sigma(x)$ denote the signed distance for $\Sigma$ as in Section 2.
We may then define the {\em canonical level function} $\varphi_\Sigma$ by means of
$$\varphi_\Sigma(x) = \phi(d_\Sigma(x)),\quad x\in\mathbb R^n,$$
where
$$\phi(s)=s \chi(s/a) + (1-\chi(s/a))\,{\rm sgn}\, s ,\quad s\in \mathbb R.$$
Then it is easy to see that $\Sigma=\varphi_\Sigma^{-1}(0)$, and $\nabla \varphi_\Sigma(x)=\nu_\Sigma(x)$, for $x\in \Sigma$. Moreover, $0$ is an eigenvalue of $\nabla^2\varphi_\Sigma(x)$, and the remaining eigenvalues of $\nabla^2\varphi_\Sigma(x)$ are the principal curvatures of $\Sigma$ at $x\in\Sigma$. If we consider the subset $\mathcal{MH}^2(\Omega,r)$ of $\mathcal{MH}^2(\Omega)$ which consists of all closed hypersurfaces $\Gamma\in \mathcal{MH}^2(\Omega)$ such that $\Gamma\subset \Omega$ satisfies an (interior and exterior) ball condition with fixed radius $r>0$, then the map
\begin{equation}
\Upsilon:\mathcal{MH}^2(\Omega,r)\to C^2(\bar{\Omega}),\quad \Upsilon(\Gamma):=\varphi_\Gamma,
\end{equation}
is an isomorphism of the metric space $\mathcal{MH}^2(\Omega,r)$ onto $\Upsilon(\mathcal{MH}^2(\Omega,r))\subset C^2(\bar{\Omega})$. \\
Let $s-(n-1)/p>2$. Then we define
\begin{equation}
\label{definition-W-r}
W^s_p(\Omega,r):=\{\Gamma\in\mathcal{MH}^2(\Omega,r): \varphi_\Gamma\in W^s_p(\Omega)\}.
\end{equation}
In this case the local charts for $\Gamma$ can be chosen of class $W^s_p$ as well. A subset $A\subset W^s_p(\Omega,r)$ is said to be (relatively) compact if $\Upsilon(A)\subset W^s_p(\Omega)$ is (relatively) compact.

As an ambient space for the state manifold of \eqref{stefan-var} we consider the product space $C(\bar{\Omega})\times \mathcal{MH}^2$, due to continuity of temperature and curvature. We define the state manifold $\mathcal{SM}$ for \eqref{stefan-var} as follows:
\begin{equation}
\label{phasemanifg}
\begin{aligned}
\mathcal{SM}:=\big\{(u,\Gamma)\in C(\bar{\Omega})\times \mathcal{MH}^2: & \;u\in W^{2-2/p}_p(\Omega\setminus\Gamma),\, \Gamma\in W^{3-3/p}_p ,\\
&\; 0<u<u_c \mbox{ in } \bar{\Omega},\; \partial_\nu u=0 \mbox{ on } \partial\Omega\big\}.
\end{aligned}
\end{equation}
Charts for this manifold are obtained by the charts induced by $\mathcal{MH}^2(\Omega)$ followed by a Hanzawa transformation as in Section~3. Note that there is no need to incorporate the dummy variable $u_\Gamma$ into the definition of the state manifold, as $u_\Gamma=u|_{\Gamma}$ whenever $u_\Gamma$ appears. Applying the result in subsection 5.1 and re-parameterizing the interface repeatedly, we see that (\ref{stefan-var}) yields a local semiflow on $\mathcal{SM}$.

\begin{theorem}
Let $p>n+2$. Suppose $\gamma>0$ on $(0,u_c)$ and the assumptions of \eqref{regularity-coeff} hold true. Then problem (\ref{stefan-var}) generates a local semiflow on the state manifold $\mathcal{SM}$. Each solution $(u,\Gamma)$ exists on a maximal time interval $[0,t_*)$, where $t_*=t_*(u_0,\Gamma_0)$.
\end{theorem}

\subsection{Global Existence and Convergence}
There are several obstructions to global existence for the Stefan problem with variable surface tension \eqref{stefan-var}:
\begin{itemize}
\item {\em regularity}: the norms of $u(t)$ or $\Gamma(t)$ become unbounded;
\item {\em well-posedness}: the temperature may reach $0$ or $u_c$;
\item {\em geometry}: the topology of the interface changes;\\
or the interface touches the boundary of $\Omega$;\\
or the interface contracts to a point.
\end{itemize}
Let $(u,\Gamma)$ be a solution in the state manifold $\mathcal{SM}$.
By a {\em uniform ball condition} we mean the existence of a radius $r_0>0$ such that for each $t$, at each point $x\in\Gamma(t)$ there exist centers $x_i\in \Omega_i(t)$ such that $B_{r_0}(x_i)\subset \Omega_i(t)$ and $\Gamma(t)\cap \bar{B}_{r_0}(x_i)=\{x\}$, $i=1,2$. Note that this condition bounds the curvature of $\Gamma(t)$, prevents it from shrinking to a point, from touching the outer boundary $\partial \Omega$, and from undergoing topological changes. With this property, combining the semiflow for (\ref{stefan-var}) with the Lyapunov functional and compactness we obtain the following result.

\begin{theorem}
\label{Qual}
Let $p>n+2$. Suppose $\gamma>0$ on $(0,u_c)$ and the assumptions of \eqref{regularity-coeff} hold true. Suppose that $(u,\Gamma)$ is a solution of (\ref{stefan-var}) in the state manifold $\mathcal{SM}$ on its maximal time interval $[0,t_*)$. Assume the following on $[0,t_*)$: there is a constant $M>0$ such that
\begin{itemize}
\item[(i)] $|u(t)|_{W^{2-2/p}_p}+|\Gamma(t)|_{W^{3-3/p}_p}\leq M<\infty$;
\item[(ii)] $0<1/M\leq u(t) \le u_c-1/M$;
\item[(iii)] $\Gamma(t)$ satisfies a uniform ball condition.
\end{itemize}
Then $t_*=\infty$, i.e.\ the solution exists globally, and it converges in $\mathcal{SM}$ to some equilibrium $(u_\infty,\Gamma_\infty)\in\mathcal E$. Conversely, if $(u(t),\Gamma(t))$ is a global solution in $\mathcal{SM}$ which converges to an equilibrium $(u_*,\Gamma_*)$ in $\mathcal{SM}$ as $t\to\infty$, then properties $(i)$-$(iii)$ are valid.
\end{theorem}
\begin{proof}
Assume that assertions (i)--(iii) are valid. Then $\Gamma([0,t_*))\subset W^{3-3/p}_p(\Omega,r)$ is bounded, hence relatively compact in $W^{3-3/p-\varepsilon}_p(\Omega,r)$. Thus we may cover this set by finitely many balls with real analytic centers $\Sigma_k$ in such a way that ${\rm dist}_{W^{3-3/p-\varepsilon}_p}(\Gamma(t),\Sigma_j)\leq \delta$ for some $j=j(t)$, $t\in[0,t_*)$. Let $J_k=\{t\in[0,t_*):\, j(t)=k\}$. Using for each $k$ a Hanzawa-transformation $\Xi_k$, we see that the pull backs $\{u(t,\cdot)\circ\Xi_k:\, t\in J_k\}$ are bounded in $W^{2-2/p}_p(\Omega\setminus \Sigma_k)$, hence relatively compact in $W^{2-2/p-\varepsilon}_p(\Omega\setminus\Sigma_k)$. Employing now the results in subsection 5.1 we obtain solutions $(u^1,\Gamma^1)$ with initial configurations $(u(t),\Gamma(t))$ in the state manifold on a common time interval, say $(0,\tau]$, and by uniqueness we have
$$(u^1(\tau),\Gamma^1(\tau))=(u(t+\tau),\Gamma(t+\tau)).$$
Continuous dependence then implies relative compactness of $(u(\cdot),\Gamma(\cdot))$ in $\mathcal{SM}$. In particular, $t_*=\infty$ and the orbit $(u,\Gamma)(\mathbb R_+)\subset\mathcal{SM}$ is relatively compact.

The negative total entropy is a strict Lyapunov functional, hence the limit set $\omega(u,\Gamma)\subset \mathcal{SM}$ of a solution is contained in the set $\mathcal E$ of equilibria. By compactness $\omega(u,\Gamma)\subset \mathcal{SM}$ is non-empty, hence the solution comes close to $\mathcal E$, and stays there. Then we may apply the convergence result Theorem \ref{nonlinstab}. The converse is proved by a compactness argument.
\end{proof}

\section{The Semiflow without Kinetic Undercooling}
In this section we assume throughout $\gamma(s)=0$ for all $s>0$, i.e.\ kinetic undercooling is absent. In this case we may apply the results in \cite{PSZ09} and \cite{KPW10} as well, but we have to work harder to apply them. We first prove \eqref{mcflow} as follows.
According to \eqref{T-Gamma-V} we know that
\begin{equation*}
T_\Gamma(u_\Gamma)V:=(\omega_\Gamma(u_\Gamma)-\mathcal H^\prime(\Gamma))V =\frac{\lambda^\prime(u_\Gamma)}{\kappa_\Gamma(u_\Gamma)} \big\{{\rm div}_\Gamma (d_\Gamma(u_\Gamma)\nabla_\Gamma u_\Gamma) + [\![d(u)\partial_\nu u]\!]\big\}.
\end{equation*}
Next we observe
\begin{align*}
\frac{\lambda^\prime(u_\Gamma)}{\kappa_\Gamma(u_\Gamma)}& {\rm div}_\Gamma( d_\Gamma(u_\Gamma)\nabla_\Gamma u_\Gamma)\\
&= \frac{1}{\kappa_\Gamma(u_\Gamma)}{\rm div}_\Gamma (d_\Gamma(u_\Gamma)\nabla_\Gamma \lambda(u_\Gamma))- \frac{d_\Gamma(u_\Gamma)}{\kappa_\Gamma(u_\Gamma)}\lambda^{\prime\prime}(u_\Gamma)|\nabla_\Gamma u_\Gamma|^2\\
& = {\rm div}_\Gamma \Big(\frac{d_\Gamma(u_\Gamma)}{\kappa_\Gamma(u_\Gamma)}\nabla_\Gamma \lambda(u_\Gamma)\Big)- \frac{d_\Gamma(u_\Gamma)}{\kappa_\Gamma(u_\Gamma)} \Big\{\lambda^{\prime\prime}(u_\Gamma)-\lambda^\prime(u_\Gamma)\frac{\kappa^\prime_\Gamma(u_\Gamma)}{\kappa_\Gamma(u_\Gamma)}\Big\} |\nabla_\Gamma u_\Gamma|^2\\
&= \Delta_\Gamma h_\Gamma(u_\Gamma) -\frac{d_\Gamma(u_\Gamma)}{\kappa_\Gamma(u_\Gamma)} \Big\{\lambda^{\prime\prime}(u_\Gamma)-\lambda^\prime(u_\Gamma)\frac{\kappa^\prime_\Gamma(u_\Gamma)}{\kappa_\Gamma(u_\Gamma)}\Big\} |\nabla_\Gamma u_\Gamma|^2,
\end{align*}
where $h_\Gamma$ denotes the antiderivative of $d_\Gamma \lambda^\prime/\kappa_\Gamma$ with $h_\Gamma(u_m)=0$. We note that by an integration by parts
$$ h_\Gamma(s)= \lambda(s) \frac{d_\Gamma(s)}{\kappa_\Gamma(s)} -\int_{u_m}^s \lambda(\tau) \Big(\frac{d_\Gamma}{\kappa_\Gamma}\Big)^\prime(\tau)d\tau=: \lambda(s) \frac{d_\Gamma(s)}{\kappa_\Gamma(s)}-f_\Gamma(s). $$
Now employing $\lambda(u_\Gamma)=-\mathcal H(\Gamma)$ leads to the identity
\begin{align*}
&T_\Gamma(u_\Gamma)\{V-\frac{d_\Gamma(u_\Gamma)}{\kappa_\Gamma(u_\Gamma)} \mathcal H(\Gamma)-f_\Gamma(u_\Gamma)\}\\
&=\frac{ \lambda^\prime(u_\Gamma)}{\kappa_\Gamma(u_\Gamma)}[\![d(u)\partial_\nu u]\!] -\frac{d_\Gamma(u_\Gamma)}{\kappa_\Gamma(u_\Gamma)}\{\lambda^{\prime\prime}(u_\Gamma)-\lambda^\prime(u_\Gamma)\frac{\kappa_\Gamma^\prime(u_\Gamma)}{\kappa_\Gamma(u_\Gamma)}\}|\nabla_\Gamma u_\Gamma|^2\\
& +[\,\omega_\Gamma(u_\Gamma) -{\rm tr}L_\Gamma^2\,]h_\Gamma(u_\Gamma),
\end{align*}
hence applying the inverse of $T_\Gamma(u_\Gamma)$ we arrive at
\begin{equation}
\label{mcflow-2}
\kappa_\Gamma(u_\Gamma)V -d_\Gamma(u_\Gamma)\mathcal H(\Gamma) = \kappa_\Gamma(u_\Gamma)\{f_\Gamma(u_\Gamma) + F_\Gamma(u,u_\Gamma)\},
\end{equation}
where
\begin{align*}
F_\Gamma(u,u_\Gamma)&=[\kappa_\Gamma(u_\Gamma)T_\Gamma(u_\Gamma)]^{-1} \big\{ \lambda^\prime(u_\Gamma)[\![d(u)\partial_\nu u]\!]\\
&-d_\Gamma(u_\Gamma) [\lambda^{\prime\prime}(u_\Gamma)-\lambda^\prime(u_\Gamma)\kappa_\Gamma^\prime(u_\Gamma)/\kappa_\Gamma(u_\Gamma)]|\nabla_\Gamma u_\Gamma|^2\\
&+\kappa_\Gamma(u_\Gamma) [\,\omega_\Gamma(u_\Gamma) -{\rm tr}L_\Gamma^2\,]h_\Gamma(u_\Gamma)\big\}.
\end{align*}
In the sequel we will replace the Gibbs-Thomson law by the dynamic equation \eqref{mcflow-2} plus the compatibility condition $\varphi(u_{\Gamma0})+\sigma(u_{\Gamma0})\mathcal H(\Gamma_0)=0$ at time $t=0$.

\subsection{Local Well-Posedness}
\noindent
To prove local well-posedness we employ the direct mapping method as introduced in Section 3. As base space we use as in Section 5
$$ X_0= L_p(\Omega)\times W^{-1/p}_p(\Sigma)\times W^{1-1/p}_p(\Sigma),$$
and we let $X_1$, $X_\gamma$ and $X_{\gamma,\mu}$ be as defined there.
We rewrite system \eqref{transformed} abstractly as the quasilinear problem in $X_0$ \begin{align}\label{abstract0} \dot{z}+A_0(z)z&= F_0(z),\quad z(0)=z_0, \end{align} where $z=(v,v_\Gamma,\rho)$ and $z_0=(v_0,v_{\Gamma0},\rho_0)$. Here the quasilinear part $A_0(z)$ is the diagonal matrix operator defined by \begin{equation*} \label{A(z)0} -A_0(z)={\rm diag} \left[\begin{aligned} &(d(v)/\kappa(v))(\Delta - M_2(\rho):\nabla^2) \\ &(d_\Gamma(v_\Gamma)/\kappa_\Gamma(v_\Gamma))(P_\Gamma(\rho)M_0(\rho))^2:\nabla_\Sigma^2 \\ &(d_\Gamma(v_\Gamma)/\kappa_\Gamma(v_\Gamma))\mathcal G(\rho):\nabla_\Sigma^2 \end{aligned}\right] \end{equation*} with $M_2(\rho)=M_1(\rho)+M_1^{\sf T}(\rho) -M_1(\rho)M_1^{\sf T}(\rho)$. The semilinear part $F_0(z)$ is given by \begin{equation*} \label{F(z)0} \left[ \begin{aligned} &\mathcal R(\rho)v+\frac{1}{\kappa(v)}\big\{d^\prime(v)\big|(I-M_1(\rho))\nabla v\big|^2 - d(v)((I-M_1(\rho)):\nabla M_1(\rho)|\nabla v)\big\}\\ &\frac{1}{\kappa_\Gamma(v_\Gamma)} \big\{-\mathcal B(v_\Gamma,\rho)v -[l(v_\Gamma) +l_\Gamma(v_\Gamma)\mathcal H(\rho)] \beta(\rho)\partial_t\rho + m_3 \big\} \\ &(d_\Gamma(v_\Gamma) /\kappa_\Gamma(v_\Gamma))\mathcal F(\rho) + \big\{f_\Gamma(v_\Gamma)+ F_\Gamma(v,v_\Gamma,\rho)\big\}/\beta(\rho) \end{aligned} \right] \end{equation*} where, by abuse of notation, $F_\Gamma$ here means the transformed $F_\Gamma$ introduced previously, and where \begin{equation*} m_3=-d_\Gamma(v_\Gamma)(P_\Gamma(\rho)M_0(\rho))^2:\nabla_\Sigma^2v_\Gamma -\mathcal C(v_\Gamma,\rho)v_\Gamma. \end{equation*} Again, the first two components of $F_0(z)$ contain the time derivative $\partial_t\rho$. We replace it by the transformed version of \eqref{mcflow-2}, \begin{equation*} \partial_t\rho = \big\{f_\Gamma(v_\Gamma)+F_\Gamma(v,v_\Gamma,\rho) +(d_\Gamma(v_\Gamma)/\kappa_\Gamma(v_\Gamma))\mathcal H(\rho)\big\}/\beta(\rho), \end{equation*} to see that it leads to a lower order term, as in Section 5. Provided that $T_{\Gamma_0}(v_{\Gamma0})$ is invertible we may proceed as in Section 5, applying Theorem 2.1 in \cite{KPW10}, to obtain local well-posedness, i.e.\ a unique local solution $$z\in H^1_{p,\mu}((0,a);X_0)\cap L_{p,\mu}((0,a);X_1)\hookrightarrow C([0,a];X_{\gamma,\mu})\cap C((0,a];X_\gamma),$$ which depends continuously on the initial value $z_0\in \mathbb B$. The resulting solution map $[z_0\mapsto z(t)]$ defines a local semiflow in $X_{\gamma,\mu}$. \subsection{Nonlinear Stability of Equilibria} \noindent Let $e_*=(u_*,u_{\Gamma*},\Gamma_*)$ denote an equilibrium as in Section 4. In this case we choose $\Sigma=\Gamma_*$ as a reference manifold, and as shown in the previous subsection we obtain the abstract quasilinear parabolic problem \begin{align}\label{abstract0*} \dot{z}+A_0(z)z&= F_0(z),\quad z(0)=z_0, \end{align} with $X_{0}$, $X_{1}$, $X_{\gamma}$ as above. We set $z_*=(u_*,u_{\Gamma*},0)$. Under the well-posedness condition and for $\zeta_*\neq 1$ in the stability condition, it was shown in Section 4 that the equilibrium $e_*$ is normally hyperbolic. Therefore we may once more apply Theorems 2.1 and 6.1 of \cite{PSZ09} to obtain the following result. \goodbreak \begin{theorem} \label{nonlinstab0} Let $p>n+2$. Suppose $\gamma\equiv 0$, $\sigma\in C^4(0,u_c)$, and the assumptions of \eqref{regularity-coeff} hold true. As above, $\mathcal E$ denotes the set of equilibria of \eqref{abstract*}, and we fix some $z_*\in\mathcal E$.
Assume that the well-posedness condition \begin{equation}\label{wellp0} l_*\neq 0\quad \mbox{ and } \quad u_*l_*^2/\sigma_*\neq \kappa_{\Gamma*}(n-1)/R_*^2 \end{equation} is satisfied. Then we have \\ {\bf (a)} If $\Gamma_*$ is connected and $\zeta_*<1$, or if $\kappa_{\Gamma*}(n-1)/R_*^2>u_*l_*^2/\sigma_*$, then $z_*$ is stable in $X_\gamma$, and there exists $\delta>0$ such that the unique solution $z(t)$ of (\ref{abstract*}) with initial value $z_0\in X_\gamma$ satisfying $|z_0-z_*|_{\gamma}<\delta$ exists on $\mathbb R_+$ and converges at an exponential rate in $X_\gamma$ to some $z_\infty\in\mathcal E$ as $t\rightarrow\infty$. \\ {\bf (b)} If $\kappa_{\Gamma*}(n-1)/R_*^2<u_*l_*^2/\sigma_*$, and if $\Gamma_*$ is disconnected or if $\zeta_*>1$, then $z_*$ is unstable in $X_\gamma$ and even in $X_0$. For each sufficiently small $\rho>0$ there is $\delta\in(0,\rho]$ such that the solution $z(t)$ of \eqref{abstract*} with initial value $z_0\in X_\gamma$ subject to $|z_0-z_*|_{\gamma}<\delta$ either satisfies \begin{itemize} \item[(i)] ${\rm dist}_{X_\gamma}(z(t_0);\mathcal E)>\rho$ for some finite time $t_0>0$; or \item[(ii)] $z(t)$ exists on $\mathbb R_+$ and converges at an exponential rate in $X_\gamma$ to some $z_\infty\in\mathcal E$. \end{itemize} \end{theorem} Thus the only cases not covered are $\zeta_*=1$ and the two cases in which the well-posedness condition \eqref{wellp0} is violated. \subsection{The Local Semiflow on the State Manifold} \noindent We define the state manifold $\mathcal{SM}_0$ for \eqref{stefan-var} in the case $\gamma\equiv0$ as follows: \begin{equation} \label{phasemanif0} \begin{aligned} \mathcal{SM}_0:=\big\{(&u,\Gamma)\in C(\bar{\Omega})\times \mathcal{MH}^2: u\in W^{2-2/p}_p(\Omega\setminus\Gamma),\, \Gamma\in W^{3-3/p}_p,\\ & 0<u<u_c \mbox{ in } \bar{\Omega},\; \partial_\nu u=0 \mbox{ on } \partial\Omega, \\ &\lambda(u_\Gamma)+\mathcal H(\Gamma)=0\mbox{ on } \Gamma,\; T_\Gamma(u_\Gamma)\; \mbox{ is invertible in}\; L_2(\Gamma)\big\}. \end{aligned} \end{equation} Charts for this manifold are obtained from the charts induced by $\mathcal{MH}^2(\Omega)$ followed by a Hanzawa transformation as in Section~3. Applying the result of subsection 6.1 and re-parameterizing the interface repeatedly, we see that (\ref{stefan-var}) with $\gamma\equiv0$ yields a local semiflow on $\mathcal{SM}_0$. \begin{theorem} Let $p>n+2$. Suppose $\gamma\equiv 0$, $\sigma\in C^4(0,u_c)$, and the assumptions of \eqref{regularity-coeff} hold true. Then problem (\ref{stefan-var}) generates a local semiflow on the state manifold $\mathcal{SM}_0$. Each solution $(u,\Gamma)$ exists on a maximal time interval $[0,t_*)$, where $t_*=t_*(u_0,\Gamma_0)$. \end{theorem} \subsection{Global Existence and Convergence} Compared with the Stefan problem with variable surface tension in the presence of kinetic undercooling, there is now an additional possibility for the loss of well-posedness. The obstructions to global existence are: \begin{itemize} \item {\em regularity}: the norms of $u(t)$ or $\Gamma(t)$ become unbounded; \item {\em well-posedness}: the temperature may reach $0$ or $u_c$; or\\ $T_\Gamma(u_\Gamma)$ may become non-invertible; \item {\em geometry}: the topology of the interface changes;\\ or the interface touches the boundary of $\Omega$;\\ or the interface contracts to a point. \end{itemize} We set $\mathcal E_0=\mathcal{SM}_0\cap \mathcal E$. As in Section 5, combining the semiflow for (\ref{stefan-var}) with the Lyapunov functional and compactness, we obtain the following result.
\begin{theorem} \label{Qual0} Let $p>n+2$. Suppose $\gamma\equiv 0$, $\sigma\in C^4(0,u_c)$, and the assumptions of \eqref{regularity-coeff} hold true. Suppose that $(u,\Gamma)$ is a solution of (\ref{stefan-var}) in the state manifold $\mathcal{SM}_0$ on its maximal time interval $[0,t_*)$. Assume the following on $[0,t_*)$: there is a constant $M>0$ such that \begin{itemize} \item[(i)] $|u(t)|_{W^{2-2/p}_p}+|\Gamma(t)|_{W^{3-3/p}_p}\leq M<\infty$; \item[(ii)] $0<1/M\leq u(t)\leq u_c-1/M$; \item[(iii)] $|\mu_j(t)|\geq 1/M$ for all eigenvalues $\mu_j(t)$ of $T_{\Gamma(t)}(u_\Gamma)$; \item[(iv)] $\Gamma(t)$ satisfies a uniform ball condition. \end{itemize} Then $t_*=\infty$, i.e.\ the solution exists globally, and it converges in $\mathcal{SM}_0$ to an equilibrium $(u_\infty,\Gamma_\infty)\in\mathcal E_0$. Conversely, if $(u(t),\Gamma(t))$ is a global solution in $\mathcal{SM}_0$ which converges to an equilibrium $(u_\infty,\Gamma_\infty)\in\mathcal E_0$ in $\mathcal{SM}_0$ as $t\to\infty$, then properties $(i)$--$(iv)$ are valid. \end{theorem} \begin{proof} The proof follows the same lines as that of Theorem \ref{Qual}. \end{proof} \noindent {\bf Acknowledgment:} J.P.\ and M.W.\ thank the Department of Mathematics at Vanderbilt University, where important parts of this work originated, for its hospitality. \goodbreak \end{document}